hackage-security-0.6.2.4/ 0000755 0000000 0000000 00000000000 07346545000 013302 5 ustar 00 0000000 0000000 hackage-security-0.6.2.4/ChangeLog.md 0000644 0000000 0000000 00000011115 07346545000 015452 0 ustar 00 0000000 0000000 See also http://pvp.haskell.org/faq
0.6.2.4
-------
* Allow `tar-0.6`
* Drop support for GHC < 7.8 in favor of `PatternSynonyms`
* Drop flags `base48`, `mtl21`, `old-directory` and support for GHC 7.8, `mtl < 2.2` and `directory < 1.2`
* Tested with GHC 7.10 - 9.8
0.6.2.3
-------
* Bump base for GHC 9.4 comp
* Fix code to *really* support mtl-2.3
0.6.2.2
-------
* Fix broken compilation of test-suite with Cabal-syntax-3.8.1.0 on Hackage
* Huge README updates
0.6.2.1
-------
* Allow GHC-9.0 (base-4.15) (#265)
* Fix running `cabal repl hackage-security` (#263)
0.6.2.0
-------
* Safely prepare for when cabal factors out Cabal-syntax
0.6.1.0
-------
* Support basic auth in package-indices (#252)
* Fix tests due to new aeson handling of unescaped control sequences (#256)
* Bump a lot of bounds on packages we depend on
0.6.0.1
-------
* Fix bug in non-default `-lukko` build-configuration (#242)
* Add support for `template-haskell-2.16.0.0` (#240)
0.6.0.0
-------
* Remove `Hackage.Security.TUF.FileMap.lookupM`
* Don't expose `Hackage.Security.Util.IO` module
* Don't expose `Hackage.Security.Util.Lens` module
* Report missing keys in `.meta` objects more appropriately as
`ReportSchemaErrors(expected)` instead of via `Monad(fail)`
* Add support for GHC 8.8 / base-4.13
* Use `lukko` for file-locking
* Extend `LogMessage` to signal events for cache lock acquiring and release
* New `lockCacheWithLogger` operation
0.5.3.0
-------
* Use `flock(2)`-based locking where available
(compat-shim taken from `cabal-install`'s code-base) (#207)
* Improve handling of async exceptions (#187)
* Detect & recover from local corruption of uncompressed index tarball (#196)
* Support `base-4.11`
0.5.2.2
-------
* Fix client in case where server provides MD5 hashes
(ignore them, use only SHA256)
* Fix warnings with GHC 8
0.5.2.1
-------
* Fix accidental breakage with GHC 8
0.5.2.0
-------
* Change path handling to work on Windows (#162).
* Add new MD5 hash type (#163). This is not for security (only SHA256 is
used for verification) but to provide as metadata to help with other
services like mirroring (e.g. HTTP & S3 use MD5 checksum headers).
* Adjust reading of JSON maps to ignore unknown keys. This allows adding
e.g. new hash types in future without breaking existing clients.
* Fix build warnings on GHC 8
0.5.1.0
-------
* Fix for other local programs corrputing the 00-index.tar. Detect it
and do a full rewrite rather than incremental append.
* New JSON pretty-printer (not canonical rendering)
* Round-trip tests for Canonical JSON parser and printers
* Minor fix for Canonical JSON parser
* Switch from cryptohash to cryptohash-sha256 to avoid new dependencies
0.5.0.2
-------
* Use tar 0.5.0
* Relax lower bound on directory
0.5.0.1
-------
* Relaxed dependency bounds
0.5.0.0
-------
* Treat deserialization errors as verification errors (#108, #75)
* Avoid `Content-Length: 0` in GET requests (#103)
* Fix bug in Trusted
* Build tar-index incrementally (#22)
* Generalize 'Repository' over the representation of downloaded remote files.
* Update index incrementally by downloading delta of `.tar.gz` and writing only
tail of local `.tar` file (#101). Content compression no longer used.
* Take a lock on the cache directory before updating it, and no longer use
atomic file ops (pointless since we now update some files incrementally)
* Code refactoring/simplification.
* Support for ed25519 >= 0.0.4
* `downloadPackage` no longer takes a callback.
* API for accessing the Hackage index contents changed; it should now be
easier for clients to do their own incremental updates should they wish
to do so.
* Relies on tar >= 0.4.4
* Removed obsolete option for downloading the compressed index (we now _always_
download the compressed index)
* Path module now works on Windows (#118)
* Dropped support for ghc 7.2
* Replaced uses of Int with Int54, to make sure canonical JSON really is
canonical (#141).
0.4.0.0
-------
* Allow clients to pass in their own time for expiry verification
(this is an API change hence the major version bump)
* Export .Client.Formats (necessary to define new Repositories)
* Start work on basic test framework
0.3.0.0
-------
* Don't use compression for range requests (#101)
* Download index.tar.gz, not index.tar, if range request fails (#99)
* Minor change in the LogMessage type (hence the API version bumb)
* Include ChangeLog.md in the tarball (#98)
0.2.0.0
-------
* Allow for network-2.5 (rather than network-uri-2.6)
* Use cryptohash rather than SHA
* Various bugfixes
* API change: introduce RepoOpts in the Remote repository
0.1.0.0
-------
* Initial beta release
hackage-security-0.6.2.4/LICENSE 0000644 0000000 0000000 00000002766 07346545000 014322 0 ustar 00 0000000 0000000 Copyright (c) 2015, Well-Typed LLP
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of Well-Typed LLP nor the names of other
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
hackage-security-0.6.2.4/Setup.hs 0000644 0000000 0000000 00000000056 07346545000 014737 0 ustar 00 0000000 0000000 import Distribution.Simple
main = defaultMain
hackage-security-0.6.2.4/hackage-security.cabal 0000644 0000000 0000000 00000027505 07346545000 017527 0 ustar 00 0000000 0000000 cabal-version: 1.12
name: hackage-security
version: 0.6.2.4
synopsis: Hackage security library
description: The hackage security library provides both server and
client utilities for securing the Hackage package server
(). It is based on The Update
Framework (), a set of
recommendations developed by security researchers at
various universities in the US as well as developers on the
Tor project ().
.
The current implementation supports only index signing,
thereby enabling untrusted mirrors. It does not yet provide
facilities for author package signing.
.
The library has two main entry points:
"Hackage.Security.Client" is the main entry point for
clients (the typical example being @cabal@), and
"Hackage.Security.Server" is the main entry point for
servers (the typical example being @hackage-server@).
license: BSD3
license-file: LICENSE
author: Edsko de Vries
maintainer: cabal-devel@haskell.org
copyright: Copyright 2015-2022 Well-Typed LLP
category: Distribution
homepage: https://github.com/haskell/hackage-security
bug-reports: https://github.com/haskell/hackage-security/issues
build-type: Simple
tested-with:
GHC == 9.8.1
GHC == 9.6.3
GHC == 9.4.8
GHC == 9.2.8
GHC == 9.0.2
GHC == 8.10.7
GHC == 8.8.4
GHC == 8.6.5
GHC == 8.4.4
GHC == 8.2.2
GHC == 8.0.2
GHC == 7.10.3
extra-source-files:
ChangeLog.md
source-repository head
type: git
location: https://github.com/haskell/hackage-security.git
flag use-network-uri
description: Are we using @network-uri@?
manual: False
flag Cabal-syntax
description: Are we using Cabal-syntax?
manual: False
default: False
flag lukko
description: Use @lukko@ for file-locking, otherwise use @GHC.IO.Handle.Lock@
manual: True
default: True
library
-- Most functionality is exported through the top-level entry points .Client
-- and .Server; the other exported modules are intended for qualified imports.
exposed-modules: Hackage.Security.Client
Hackage.Security.Client.Formats
Hackage.Security.Client.Repository
Hackage.Security.Client.Repository.Cache
Hackage.Security.Client.Repository.Local
Hackage.Security.Client.Repository.Remote
Hackage.Security.Client.Repository.HttpLib
Hackage.Security.Client.Verify
Hackage.Security.JSON
Hackage.Security.Key.Env
Hackage.Security.Server
Hackage.Security.Trusted
Hackage.Security.TUF.FileMap
Hackage.Security.Util.Checked
Hackage.Security.Util.Path
Hackage.Security.Util.Pretty
Hackage.Security.Util.Some
Text.JSON.Canonical
other-modules: Hackage.Security.Key
Hackage.Security.Trusted.TCB
Hackage.Security.TUF
Hackage.Security.TUF.Common
Hackage.Security.TUF.FileInfo
Hackage.Security.TUF.Header
Hackage.Security.TUF.Layout.Cache
Hackage.Security.TUF.Layout.Index
Hackage.Security.TUF.Layout.Repo
Hackage.Security.TUF.Mirrors
Hackage.Security.TUF.Paths
Hackage.Security.TUF.Patterns
Hackage.Security.TUF.Root
Hackage.Security.TUF.Signed
Hackage.Security.TUF.Snapshot
Hackage.Security.TUF.Targets
Hackage.Security.TUF.Timestamp
Hackage.Security.Util.Base64
Hackage.Security.Util.Exit
Hackage.Security.Util.IO
Hackage.Security.Util.JSON
Hackage.Security.Util.Lens
Hackage.Security.Util.Stack
Hackage.Security.Util.TypedEmbedded
MyPrelude
-- We support ghc 7.4 (bundled with Cabal 1.14) and up
build-depends: base >= 4.8 && < 4.20,
-- PatternSynonyms are only available since GHC 7.8 (base 4.7)
base16-bytestring >= 0.1.1 && < 1.1,
base64-bytestring >= 1.0 && < 1.3,
bytestring >= 0.9 && < 0.13,
containers >= 0.4 && < 0.8,
cryptohash-sha256 >= 0.11 && < 0.12,
directory >= 1.2 && < 1.4,
ed25519 >= 0.0 && < 0.1,
filepath >= 1.2 && < 1.5,
mtl >= 2.2 && < 2.4,
parsec >= 3.1 && < 3.2,
pretty >= 1.0 && < 1.2,
-- 0.4.2 introduces TarIndex, 0.4.4 introduces more
-- functionality, 0.5.0 changes type of serialise
tar >= 0.5 && < 0.7,
template-haskell >= 2.7 && < 2.22,
time >= 1.2 && < 1.13,
transformers >= 0.3 && < 0.7,
zlib >= 0.5 && < 0.7,
-- whatever versions are bundled with ghc:
ghc-prim
if flag(lukko)
build-depends: lukko >= 0.1 && < 0.2
else
build-depends: base >= 4.10
if flag(Cabal-syntax) && impl(ghc >= 8.2)
build-depends: Cabal-syntax >= 3.7 && < 3.12
else
build-depends: Cabal >= 1.14 && < 1.26
|| >= 2.0 && < 2.6
|| >= 3.0 && < 3.7,
Cabal-syntax < 3.7
hs-source-dirs: src
default-language: Haskell2010
default-extensions: DefaultSignatures
DeriveDataTypeable
DeriveFunctor
FlexibleContexts
FlexibleInstances
GADTs
GeneralizedNewtypeDeriving
KindSignatures
MultiParamTypeClasses
NamedFieldPuns
NoImplicitPrelude
NoMonomorphismRestriction
PatternSynonyms
RankNTypes
RecordWildCards
ScopedTypeVariables
StandaloneDeriving
TupleSections
TypeFamilies
TypeOperators
ViewPatterns
other-extensions:
AllowAmbiguousTypes
BangPatterns
CPP
OverlappingInstances
PackageImports
RoleAnnotations
StaticPointers
UndecidableInstances
-- use the new stage1/cross-compile-friendly DeriveLift extension for GHC 8.0+
if impl(ghc >= 8.0)
other-extensions: DeriveLift
else
other-extensions: TemplateHaskell
ghc-options: -Wall
-- The URI type got split out off the network package after version 2.5, and
-- moved to a separate network-uri package. Since we don't need the rest of
-- network here, it would suffice to rely only on network-uri:
--
-- > if flag(use-network-uri)
-- > build-depends: network-uri >= 2.6 && < 2.7
-- > else
-- > build-depends: network >= 2.5 && < 2.6
--
-- However, if we did the same in hackage-security-HTTP, Cabal would consider
-- those two flag choices (hackage-security:use-network-uri and
-- hackage-security-HTTP:use-network-uri) to be completely independent; but
-- they aren't: if it links hackage-security against network-uri and
-- hackage-security-HTTP against network, we will get type errors when
-- hackage-security-HTTP tries to pass a URI to hackage-security.
--
-- It might seem we can solve this problem by re-exporting the URI type in
-- hackage-security and avoid the dependency in hackage-security-HTTP
-- altogether. However, this merely shifts the problem: hackage-security-HTTP
-- relies on the HTTP library which--surprise!--makes the same choice between
-- depending on network or network-uri. Cabal will not notice that we cannot
-- build hackage-security and hackage-security-HTTP against network-uri but
-- HTTP against network.
--
-- We solve the problem by explicitly relying on network-2.6 when choosing
-- network-uri. This dependency is redundant, strictly speaking. However, it
-- serves as a proxy for forcing flag choices: since all packages in a
-- solution must be linked against the same version of network, having one
-- version of network in one branch of the conditional and another version of
-- network in the other branch forces the choice to be consistent throughout.
-- (Note that the HTTP library does the same thing, though in this case the
-- dependency in network is not redundant.)
if flag(use-network-uri)
build-depends: network-uri >= 2.6 && < 2.7,
network >= 2.6 && < 2.9
|| >= 3.0 && < 3.2
else
build-depends: network >= 2.5 && < 2.6
test-suite TestSuite
type: exitcode-stdio-1.0
main-is: TestSuite.hs
other-modules: TestSuite.HttpMem
TestSuite.InMemCache
TestSuite.InMemRepo
TestSuite.InMemRepository
TestSuite.JSON
TestSuite.PrivateKeys
TestSuite.Util.StrictMVar
-- inherited constraints from lib:hackage-security component
build-depends: hackage-security,
base,
containers,
bytestring,
network-uri,
tar,
text,
time,
zlib
if flag(Cabal-syntax) && impl(ghc >= 8.2)
build-depends: Cabal >= 3.7 && < 3.12,
Cabal-syntax >= 3.7 && < 3.12
else
build-depends: Cabal >= 1.14 && < 1.26
|| >= 2.0 && < 2.6
|| >= 3.0 && < 3.7,
Cabal-syntax < 3.7
-- dependencies exclusive to test-suite
build-depends: tasty >= 1.2 && < 1.6,
tasty-hunit == 0.10.*,
tasty-quickcheck == 0.10.*,
QuickCheck >= 2.11 && <2.15,
aeson >= 1.4 && < 1.6 || >= 2.0 && < 2.3,
vector >= 0.12 && <0.14,
unordered-containers >=0.2.8.0 && <0.3,
temporary >= 1.2 && < 1.4
hs-source-dirs: tests
default-language: Haskell2010
default-extensions: FlexibleContexts
GADTs
KindSignatures
RankNTypes
RecordWildCards
ScopedTypeVariables
ghc-options: -Wall
hackage-security-0.6.2.4/src/Hackage/Security/ 0000755 0000000 0000000 00000000000 07346545000 017223 5 ustar 00 0000000 0000000 hackage-security-0.6.2.4/src/Hackage/Security/Client.hs 0000644 0000000 0000000 00000130035 07346545000 020777 0 ustar 00 0000000 0000000 {-# LANGUAGE CPP #-}
#if __GLASGOW_HASKELL__ >= 710
{-# LANGUAGE StaticPointers #-}
#endif
-- | Main entry point into the Hackage Security framework for clients
module Hackage.Security.Client (
-- * Checking for updates
checkForUpdates
, HasUpdates(..)
-- * Downloading targets
, downloadPackage
, downloadPackage'
-- * Access to the Hackage index
, Directory(..)
, DirectoryEntry(..)
, getDirectory
, IndexFile(..)
, IndexEntry(..)
, IndexCallbacks(..)
, withIndex
-- * Bootstrapping
, requiresBootstrap
, bootstrap
-- * Re-exports
, module Hackage.Security.TUF
, module Hackage.Security.Key
, trusted
-- ** We only a few bits from .Repository
-- TODO: Maybe this is a sign that these should be in a different module?
, Repository -- opaque
, DownloadedFile(..)
, SomeRemoteError(..)
, LogMessage(..)
-- * Exceptions
, uncheckClientErrors
, VerificationError(..)
, VerificationHistory
, RootUpdated(..)
, InvalidPackageException(..)
, InvalidFileInIndex(..)
, LocalFileCorrupted(..)
) where
import MyPrelude hiding (log)
import Control.Arrow (first)
import Control.Exception
import Control.Monad
import Control.Monad.IO.Class
import Data.List (sortBy)
import Data.Maybe (isNothing)
import Data.Ord (comparing)
import Data.Time
import Data.Traversable (for)
import Data.Typeable (Typeable)
import qualified Codec.Archive.Tar as Tar
import qualified Codec.Archive.Tar.Entry as Tar
import qualified Codec.Archive.Tar.Index as Tar
import qualified Data.ByteString.Lazy as BS.L
import qualified Data.ByteString.Lazy.Char8 as BS.L.C8
import Distribution.Package (PackageIdentifier)
import Distribution.Text (display)
import Hackage.Security.Client.Formats
import Hackage.Security.Client.Repository
import Hackage.Security.Client.Verify
import Hackage.Security.JSON
import Hackage.Security.Key
import Hackage.Security.Key.Env (KeyEnv)
import Hackage.Security.Trusted
import Hackage.Security.Trusted.TCB
import Hackage.Security.TUF
import Hackage.Security.Util.Checked
import Hackage.Security.Util.Path
import Hackage.Security.Util.Pretty
import Hackage.Security.Util.Some
import Hackage.Security.Util.Stack
import qualified Hackage.Security.Key.Env as KeyEnv
{-------------------------------------------------------------------------------
Checking for updates
-------------------------------------------------------------------------------}
data HasUpdates = HasUpdates | NoUpdates
deriving (Show, Eq, Ord)
-- | Generic logic for checking if there are updates
--
-- This implements the logic described in Section 5.1, "The client application",
-- of the TUF spec. It checks which of the server metadata has changed, and
-- downloads all changed metadata to the local cache. (Metadata here refers
-- both to the TUF security metadata as well as the Hackage package index.)
--
-- You should pass @Nothing@ for the UTCTime _only_ under exceptional
-- circumstances (such as when the main server is down for longer than the
-- expiry dates used in the timestamp files on mirrors).
checkForUpdates :: (Throws VerificationError, Throws SomeRemoteError)
=> Repository down
-> Maybe UTCTime -- ^ To check expiry times against (if using)
-> IO HasUpdates
checkForUpdates rep@Repository{..} mNow =
withMirror rep $ limitIterations []
where
-- More or less randomly chosen maximum iterations
-- See .
maxNumIterations :: Int
maxNumIterations = 5
-- The spec stipulates that on a verification error we must download new
-- root information and start over. However, in order to prevent DoS attacks
-- we limit how often we go round this loop.
-- See als .
limitIterations :: VerificationHistory -> IO HasUpdates
limitIterations history | length history >= maxNumIterations =
throwChecked $ VerificationErrorLoop (reverse history)
limitIterations history = do
-- Get all cached info
--
-- NOTE: Although we don't normally update any cached files until the
-- whole verification process successfully completes, in case of a
-- verification error, or in case of a regular update of the root info,
-- we DO update the local files. Hence, we must re-read all local files
-- on each iteration.
cachedInfo <- getCachedInfo rep
mHasUpdates <- tryChecked -- catch RootUpdated
$ tryChecked -- catch VerificationError
$ runVerify repLockCache
$ go attemptNr cachedInfo
case mHasUpdates of
Left ex -> do
-- NOTE: This call to updateRoot is not itself protected by an
-- exception handler, and may therefore throw a VerificationError.
-- This is intentional: if we get verification errors during the
-- update process, _and_ we cannot update the main root info, then
-- we cannot do anything.
log rep $ LogVerificationError ex
let history' = Right ex : history
attemptNr' = attemptNr + 1
updateRoot rep mNow attemptNr' cachedInfo (Left ex)
limitIterations history'
Right (Left RootUpdated) -> do
log rep $ LogRootUpdated
let history' = Left RootUpdated : history
limitIterations history'
Right (Right hasUpdates) ->
return hasUpdates
where
attemptNr :: AttemptNr
attemptNr = fromIntegral $ length history
-- The 'Verify' monad only caches the downloaded files after verification.
-- See also .
go :: Throws RootUpdated => AttemptNr -> CachedInfo -> Verify HasUpdates
go attemptNr cachedInfo@CachedInfo{..} = do
-- Get the new timestamp
newTS <- getRemoteFile' RemoteTimestamp
let newInfoSS = static timestampInfoSnapshot <$$> newTS
-- Check if the snapshot has changed
if not (fileChanged cachedInfoSnapshot newInfoSS)
then return NoUpdates
else do
-- Get the new snapshot
newSS <- getRemoteFile' (RemoteSnapshot newInfoSS)
let newInfoRoot = static snapshotInfoRoot <$$> newSS
newInfoMirrors = static snapshotInfoMirrors <$$> newSS
newInfoTarGz = static snapshotInfoTarGz <$$> newSS
mNewInfoTar = trustElems (static snapshotInfoTar <$$> newSS)
-- If root metadata changed, download and restart
when (rootChanged cachedInfoRoot newInfoRoot) $ liftIO $ do
updateRoot rep mNow attemptNr cachedInfo (Right newInfoRoot)
-- By throwing 'RootUpdated' as an exception we make sure that
-- any files previously downloaded (to temporary locations)
-- will not be cached.
throwChecked RootUpdated
-- If mirrors changed, download and verify
when (fileChanged cachedInfoMirrors newInfoMirrors) $
newMirrors =<< getRemoteFile' (RemoteMirrors newInfoMirrors)
-- If index changed, download and verify
when (fileChanged cachedInfoTarGz newInfoTarGz) $
updateIndex newInfoTarGz mNewInfoTar
return HasUpdates
where
getRemoteFile' :: ( VerifyRole a
, FromJSON ReadJSON_Keys_Layout (Signed a)
)
=> RemoteFile (f :- ()) Metadata -> Verify (Trusted a)
getRemoteFile' = liftM fst . getRemoteFile rep cachedInfo attemptNr mNow
-- Update the index and check against the appropriate hash
updateIndex :: Trusted FileInfo -- info about @.tar.gz@
-> Maybe (Trusted FileInfo) -- info about @.tar@
-> Verify ()
updateIndex newInfoTarGz Nothing = do
(targetPath, tempPath) <- getRemote' rep attemptNr $
RemoteIndex (HFZ FGz) (FsGz newInfoTarGz)
verifyFileInfo' (Just newInfoTarGz) targetPath tempPath
updateIndex newInfoTarGz (Just newInfoTar) = do
(format, targetPath, tempPath) <- getRemote rep attemptNr $
RemoteIndex (HFS (HFZ FGz)) (FsUnGz newInfoTar newInfoTarGz)
case format of
Some FGz -> verifyFileInfo' (Just newInfoTarGz) targetPath tempPath
Some FUn -> verifyFileInfo' (Just newInfoTar) targetPath tempPath
-- Unlike for other files, if we didn't have an old snapshot, consider the
-- root info unchanged (otherwise we would loop indefinitely).
-- See also
rootChanged :: Maybe (Trusted FileInfo) -> Trusted FileInfo -> Bool
rootChanged Nothing _ = False
rootChanged (Just old) new = not (trustedFileInfoEqual old new)
-- For any file other than the root we consider the file to have changed
-- if we do not yet have a local snapshot to tell us the old info.
fileChanged :: Maybe (Trusted FileInfo) -> Trusted FileInfo -> Bool
fileChanged Nothing _ = True
fileChanged (Just old) new = not (trustedFileInfoEqual old new)
-- We don't actually _do_ anything with the mirrors file until the next call
-- to 'checkUpdates', because we want to use a single server for a single
-- check-for-updates request. If validation was successful the repository
-- will have cached the mirrors file and it will be available on the next
-- request.
newMirrors :: Trusted Mirrors -> Verify ()
newMirrors _ = return ()
-- | Update the root metadata
--
-- Note that the new root metadata is verified using the old root metadata,
-- and only then trusted.
--
-- We don't always have root file information available. If we notice during
-- the normal update process that the root information has changed then the
-- snapshot will give us the new file information; but if we need to update
-- the root information due to a verification error we do not.
--
-- We additionally delete the cached cached snapshot and timestamp. This is
-- necessary for two reasons:
--
-- 1. If during the normal update process we notice that the root info was
-- updated (because the hash of @root.json@ in the new snapshot is different
-- from the old snapshot) we download new root info and start over, without
-- (yet) downloading a (potential) new index. This means it is important that
-- we not overwrite our local cached snapshot, because if we did we would
-- then on the next iteration conclude there were no updates and we would
-- fail to notice that we should have updated the index. However, unless we
-- do something, this means that we would conclude on the next iteration once
-- again that the root info has changed (because the hash in the new shapshot
-- still doesn't match the hash in the cached snapshot), and we would loop
-- until we throw a 'VerificationErrorLoop' exception. By deleting the local
-- snapshot we basically reset the client to its initial state, and we will
-- not try to download the root info once again. The only downside of this is
-- that we will also re-download the index after every root info change.
-- However, this should be infrequent enough that this isn't an issue.
-- See also .
--
-- 2. Additionally, deleting the local timestamp and snapshot protects against
-- an attack where an attacker has set the file version of the snapshot or
-- timestamp to MAX_INT, thereby making further updates impossible.
-- (Such an attack would require a timestamp/snapshot key compromise.)
--
-- However, we _ONLY_ do this when the root information has actually changed.
-- If we did this unconditionally it would mean that we delete the locally
-- cached timestamp whenever the version on the remote timestamp is invalid,
-- thereby rendering the file version on the timestamp and the snapshot useless.
-- See
updateRoot :: (Throws VerificationError, Throws SomeRemoteError)
=> Repository down
-> Maybe UTCTime
-> AttemptNr
-> CachedInfo
-> Either VerificationError (Trusted FileInfo)
-> IO ()
updateRoot rep@Repository{..} mNow isRetry cachedInfo eFileInfo = do
rootReallyChanged <- runVerify repLockCache $ do
(_newRoot :: Trusted Root, rootTempFile) <- getRemoteFile
rep
cachedInfo
isRetry
mNow
(RemoteRoot (eitherToMaybe eFileInfo))
-- NOTE: It is important that we do this check within the evalContT,
-- because the temporary file will be deleted once we leave its scope.
case eFileInfo of
Right _ ->
-- We are downloading the root info because the hash in the snapshot
-- changed. In this case the root definitely changed.
return True
Left _e -> liftIO $ do
-- We are downloading the root because of a verification error. In
-- this case the root info may or may not have changed. In most cases
-- it would suffice to compare the file version now; however, in the
-- (exceptional) circumstance where the root info has changed but
-- the file version has not, this would result in the same infinite
-- loop described above. Hence, we must compare file hashes, and they
-- must be computed on the raw file, not the parsed file.
oldRootFile <- repGetCachedRoot
oldRootInfo <- DeclareTrusted <$> computeFileInfo oldRootFile
not <$> downloadedVerify rootTempFile oldRootInfo
when rootReallyChanged $ clearCache rep
{-------------------------------------------------------------------------------
Convenience functions for downloading and parsing various files
-------------------------------------------------------------------------------}
data CachedInfo = CachedInfo {
cachedRoot :: Trusted Root
, cachedKeyEnv :: KeyEnv
, cachedTimestamp :: Maybe (Trusted Timestamp)
, cachedSnapshot :: Maybe (Trusted Snapshot)
, cachedMirrors :: Maybe (Trusted Mirrors)
, cachedInfoSnapshot :: Maybe (Trusted FileInfo)
, cachedInfoRoot :: Maybe (Trusted FileInfo)
, cachedInfoMirrors :: Maybe (Trusted FileInfo)
, cachedInfoTarGz :: Maybe (Trusted FileInfo)
}
cachedVersion :: CachedInfo -> RemoteFile fs typ -> Maybe FileVersion
cachedVersion CachedInfo{..} remoteFile =
case mustCache remoteFile of
CacheAs CachedTimestamp -> timestampVersion . trusted <$> cachedTimestamp
CacheAs CachedSnapshot -> snapshotVersion . trusted <$> cachedSnapshot
CacheAs CachedMirrors -> mirrorsVersion . trusted <$> cachedMirrors
CacheAs CachedRoot -> Just . rootVersion . trusted $ cachedRoot
CacheIndex -> Nothing
DontCache -> Nothing
-- | Get all cached info (if any)
getCachedInfo ::
#if __GLASGOW_HASKELL__ < 800
(Applicative m, MonadIO m)
#else
MonadIO m
#endif
=> Repository down -> m CachedInfo
getCachedInfo rep = do
(cachedRoot, cachedKeyEnv) <- readLocalRoot rep
cachedTimestamp <- readLocalFile rep cachedKeyEnv CachedTimestamp
cachedSnapshot <- readLocalFile rep cachedKeyEnv CachedSnapshot
cachedMirrors <- readLocalFile rep cachedKeyEnv CachedMirrors
let cachedInfoSnapshot = fmap (static timestampInfoSnapshot <$$>) cachedTimestamp
cachedInfoRoot = fmap (static snapshotInfoRoot <$$>) cachedSnapshot
cachedInfoMirrors = fmap (static snapshotInfoMirrors <$$>) cachedSnapshot
cachedInfoTarGz = fmap (static snapshotInfoTarGz <$$>) cachedSnapshot
return CachedInfo{..}
readLocalRoot :: MonadIO m => Repository down -> m (Trusted Root, KeyEnv)
readLocalRoot rep = do
cachedPath <- liftIO $ repGetCachedRoot rep
signedRoot <- throwErrorsUnchecked LocalFileCorrupted =<<
readCachedJSON rep KeyEnv.empty cachedPath
return (trustLocalFile signedRoot, rootKeys (signed signedRoot))
readLocalFile :: ( FromJSON ReadJSON_Keys_Layout (Signed a), MonadIO m
#if __GLASGOW_HASKELL__ < 800
, Applicative m
#endif
)
=> Repository down -> KeyEnv -> CachedFile -> m (Maybe (Trusted a))
readLocalFile rep cachedKeyEnv file = do
mCachedPath <- liftIO $ repGetCached rep file
for mCachedPath $ \cachedPath -> do
signed <- throwErrorsUnchecked LocalFileCorrupted =<<
readCachedJSON rep cachedKeyEnv cachedPath
return $ trustLocalFile signed
getRemoteFile :: ( Throws VerificationError
, Throws SomeRemoteError
, VerifyRole a
, FromJSON ReadJSON_Keys_Layout (Signed a)
)
=> Repository down
-> CachedInfo
-> AttemptNr
-> Maybe UTCTime
-> RemoteFile (f :- ()) Metadata
-> Verify (Trusted a, down Metadata)
getRemoteFile rep@Repository{..} cachedInfo@CachedInfo{..} isRetry mNow file = do
(targetPath, tempPath) <- getRemote' rep isRetry file
verifyFileInfo' (remoteFileDefaultInfo file) targetPath tempPath
signed <- throwErrorsChecked (VerificationErrorDeserialization targetPath) =<<
readDownloadedJSON rep cachedKeyEnv tempPath
verified <- throwErrorsChecked id $ verifyRole
cachedRoot
targetPath
(cachedVersion cachedInfo file)
mNow
signed
return (trustVerified verified, tempPath)
{-------------------------------------------------------------------------------
Downloading target files
-------------------------------------------------------------------------------}
-- | Download a package
downloadPackage :: ( Throws SomeRemoteError
, Throws VerificationError
, Throws InvalidPackageException
)
=> Repository down -- ^ Repository
-> PackageIdentifier -- ^ Package to download
-> Path Absolute -- ^ Destination (see also 'downloadPackage'')
-> IO ()
downloadPackage rep@Repository{..} pkgId dest =
withMirror rep $
withIndex rep $ \IndexCallbacks{..} -> runVerify repLockCache $ do
-- Get the metadata (from the previously updated index)
targetFileInfo <- liftIO $ indexLookupFileInfo pkgId
-- TODO: should we check if cached package available? (spec says no)
tarGz <- do
(targetPath, downloaded) <- getRemote' rep (AttemptNr 0) $
RemotePkgTarGz pkgId targetFileInfo
verifyFileInfo' (Just targetFileInfo) targetPath downloaded
return downloaded
-- If all checks succeed, copy file to its target location.
liftIO $ downloadedCopyTo tarGz dest
-- | Variation on 'downloadPackage' that takes a FilePath instead.
downloadPackage' :: ( Throws SomeRemoteError
, Throws VerificationError
, Throws InvalidPackageException
)
=> Repository down -- ^ Repository
-> PackageIdentifier -- ^ Package to download
-> FilePath -- ^ Destination
-> IO ()
downloadPackage' rep pkgId dest =
downloadPackage rep pkgId =<< makeAbsolute (fromFilePath dest)
{-------------------------------------------------------------------------------
Access to the tar index (the API is exported and used internally)
NOTE: The files inside the index as evaluated lazily.
1. The index tarball contains delegated target.json files for both unsigned
and signed packages. We need to verify the signatures of all signed
metadata (that is: the metadata for signed packages).
2. Since the tarball also contains the .cabal files, we should also verify the
hashes of those .cabal files against the hashes recorded in signed metadata
(there is no point comparing against hashes recorded in unsigned metadata
because attackers could just change those).
Since we don't have author signing yet, we don't have any additional signed
metadata and therefore we currently don't have to do anything here.
TODO: If we have explicit, author-signed, lists of versions for a package (as
described in @README.md@), then evaluating these "middle-level" delegation
files lazily opens us up to a rollback attack: if we've never downloaded the
delegations for a package before, then we have nothing to compare the version
number in the file that we downloaded against. One option is to always
download and verify all these middle level files (strictly); other is to
include the version number of all of these files in the snapshot. This is
described in more detail in
.
TODO: Currently we hardcode the location of the package specific metadata. By
rights we should read the global targets file and apply the delegation rules.
Until we have author signing however this is unnecessary.
-------------------------------------------------------------------------------}
-- | Index directory
data Directory = Directory {
-- | The first entry in the dictionary
directoryFirst :: DirectoryEntry
-- | The next available (i.e., one after last) directory entry
, directoryNext :: DirectoryEntry
-- | Lookup an entry in the dictionary
--
-- This is an efficient operation.
, directoryLookup :: forall dec. IndexFile dec -> Maybe DirectoryEntry
-- | An enumeration of all entries
--
-- This field is lazily constructed, so if you don't need it, it does not
-- incur a performance overhead. Moreover, the 'IndexFile' is also created
-- lazily so if you only need the raw 'IndexPath' there is no parsing
-- overhead.
--
-- The entries are ordered by 'DirectoryEntry' so that the entries can
-- efficiently be read in sequence.
--
-- NOTE: This means that there are two ways to enumerate all entries in the
-- tar file, since when lookup an entry using 'indexLookupEntry' the
-- 'DirectoryEntry' of the next entry is also returned. However, this
-- involves reading through the entire @tar@ file. If you only need to read
-- /some/ files, it is significantly more efficient to enumerate the tar
-- entries using 'directoryEntries' instead and only call 'indexLookupEntry'
-- when required.
, directoryEntries :: [(DirectoryEntry, IndexPath, Maybe (Some IndexFile))]
}
-- | Entry into the Hackage index.
newtype DirectoryEntry = DirectoryEntry {
-- | (Low-level) block number of the tar index entry
--
-- Exposed for the benefit of clients who read the @.tar@ file directly.
-- For this reason also the 'Show' and 'Read' instances for 'DirectoryEntry'
-- just print and parse the underlying 'TarEntryOffset'.
directoryEntryBlockNo :: Tar.TarEntryOffset
}
deriving (Eq, Ord)
instance Show DirectoryEntry where
show = show . directoryEntryBlockNo
instance Read DirectoryEntry where
readsPrec p = map (first DirectoryEntry) . readsPrec p
-- | Read the Hackage index directory
--
-- Should only be called after 'checkForUpdates'.
getDirectory :: Repository down -> IO Directory
getDirectory Repository{..} = mkDirectory <$> repGetIndexIdx
where
mkDirectory :: Tar.TarIndex -> Directory
mkDirectory idx = Directory {
directoryFirst = DirectoryEntry 0
, directoryNext = DirectoryEntry $ Tar.indexEndEntryOffset idx
, directoryLookup = liftM dirEntry . Tar.lookup idx . filePath
, directoryEntries = map mkEntry $ sortBy (comparing snd) (Tar.toList idx)
}
mkEntry :: (FilePath, Tar.TarEntryOffset)
-> (DirectoryEntry, IndexPath, Maybe (Some IndexFile))
mkEntry (fp, off) = (DirectoryEntry off, path, indexFile path)
where
path = indexPath fp
dirEntry :: Tar.TarIndexEntry -> DirectoryEntry
dirEntry (Tar.TarFileEntry offset) = DirectoryEntry offset
dirEntry (Tar.TarDir _) = error "directoryLookup: unexpected directory"
indexFile :: IndexPath -> Maybe (Some IndexFile)
indexFile = indexFileFromPath repIndexLayout
indexPath :: FilePath -> IndexPath
indexPath = rootPath . fromUnrootedFilePath
filePath :: IndexFile dec -> FilePath
filePath = toUnrootedFilePath . unrootPath . indexFileToPath repIndexLayout
-- | Entry from the Hackage index; see 'withIndex'.
data IndexEntry dec = IndexEntry {
-- | The raw path in the tarfile
indexEntryPath :: IndexPath
-- | The parsed file (if recognised)
, indexEntryPathParsed :: Maybe (IndexFile dec)
-- | The raw contents
--
-- Although this is a lazy bytestring, this is actually read into memory
-- strictly (i.e., it can safely be used outside the scope of withIndex and
-- friends).
, indexEntryContent :: BS.L.ByteString
-- | The parsed contents
--
-- This field is lazily constructed; the parser is not unless you do a
-- pattern match on this value.
, indexEntryContentParsed :: Either SomeException dec
-- | The time of the entry in the tarfile.
, indexEntryTime :: Tar.EpochTime
}
-- | Various operations that we can perform on the index once its open
--
-- Note that 'IndexEntry' contains a fields both for the raw file contents and
-- the parsed file contents; clients can choose which to use.
--
-- In principle these callbacks will do verification (once we have implemented
-- author signing). Right now they don't need to do that, because the index as a
-- whole will have been verified.
data IndexCallbacks = IndexCallbacks {
-- | Look up an entry by 'DirectoryEntry'
--
-- Since these 'DirectoryEntry's must come from somewhere (probably from the
-- 'Directory'), it is assumed that they are valid; if they are not, an
-- (unchecked) exception will be thrown.
--
-- This function also returns the 'DirectoryEntry' of the /next/ file in the
-- index (if any) for the benefit of clients who wish to walk through the
-- entire index.
indexLookupEntry :: DirectoryEntry
-> IO (Some IndexEntry, Maybe DirectoryEntry)
-- | Look up an entry by 'IndexFile'
--
-- Returns 'Nothing' if the 'IndexFile' does not refer to an existing file.
, indexLookupFile :: forall dec.
IndexFile dec
-> IO (Maybe (IndexEntry dec))
-- | Variation if both the 'DirectoryEntry' and the 'IndexFile' are known
--
-- You might use this when scanning the index using 'directoryEntries'.
, indexLookupFileEntry :: forall dec.
DirectoryEntry
-> IndexFile dec
-> IO (IndexEntry dec)
-- | Get (raw) cabal file (wrapper around 'indexLookupFile')
, indexLookupCabal :: Throws InvalidPackageException
=> PackageIdentifier
-> IO (Trusted BS.L.ByteString)
-- | Lookup package metadata (wrapper around 'indexLookupFile')
--
-- This will throw an (unchecked) exception if the @targets.json@ file
-- could not be parsed.
, indexLookupMetadata :: Throws InvalidPackageException
=> PackageIdentifier
-> IO (Trusted Targets)
-- | Get file info (including hash) (wrapper around 'indexLookupFile')
, indexLookupFileInfo :: ( Throws InvalidPackageException
, Throws VerificationError
)
=> PackageIdentifier
-> IO (Trusted FileInfo)
-- | Get the SHA256 hash for a package (wrapper around 'indexLookupInfo')
--
-- In addition to the exceptions thrown by 'indexLookupInfo', this will also
-- throw an exception if the SHA256 is not listed in the 'FileMap' (again,
-- this will not happen with a well-formed Hackage index.)
, indexLookupHash :: ( Throws InvalidPackageException
, Throws VerificationError
)
=> PackageIdentifier
-> IO (Trusted Hash)
-- | The 'Directory' for the index
--
-- We provide this here because 'withIndex' will have read this anyway.
, indexDirectory :: Directory
}
-- | Look up entries in the Hackage index
--
-- This is in 'withFile' style so that clients can efficiently look up multiple
-- files from the index.
--
-- Should only be called after 'checkForUpdates'.
withIndex :: Repository down -> (IndexCallbacks -> IO a) -> IO a
withIndex rep@Repository{..} callback = do
-- We need the cached root information in order to resolve key IDs and
-- verify signatures. Note that whenever we read a JSON file, we verify
-- signatures (even if we don't verify the keys); if this is a problem
-- (for performance) we need to parameterize parseJSON.
(_cachedRoot, keyEnv) <- readLocalRoot rep
-- We need the directory to resolve 'IndexFile's and to know the index of
-- the last entry.
dir@Directory{..} <- getDirectory rep
-- Open the index
repWithIndex $ \h -> do
let getEntry :: DirectoryEntry
-> IO (Some IndexEntry, Maybe DirectoryEntry)
getEntry entry = do
(tarEntry, content, next) <- getTarEntry entry
let path = indexPath tarEntry
case indexFile path of
Nothing ->
return (Some (mkEntry tarEntry content Nothing), next)
Just (Some file) ->
return (Some (mkEntry tarEntry content (Just file)), next)
getFile :: IndexFile dec -> IO (Maybe (IndexEntry dec))
getFile file =
case directoryLookup file of
Nothing -> return Nothing
Just dirEntry -> Just <$> getFileEntry dirEntry file
getFileEntry :: DirectoryEntry
-> IndexFile dec
-> IO (IndexEntry dec)
getFileEntry dirEntry file = do
(tarEntry, content, _next) <- getTarEntry dirEntry
return $ mkEntry tarEntry content (Just file)
mkEntry :: Tar.Entry
-> BS.L.ByteString
-> Maybe (IndexFile dec)
-> IndexEntry dec
mkEntry tarEntry content mFile = IndexEntry {
indexEntryPath = indexPath tarEntry
, indexEntryPathParsed = mFile
, indexEntryContent = content
, indexEntryContentParsed = parseContent mFile content
, indexEntryTime = Tar.entryTime tarEntry
}
parseContent :: Maybe (IndexFile dec)
-> BS.L.ByteString -> Either SomeException dec
parseContent Nothing _ = Left pathNotRecognized
parseContent (Just file) raw = case file of
IndexPkgPrefs _ ->
Right () -- We don't currently parse preference files
IndexPkgCabal _ ->
Right () -- We don't currently parse .cabal files
IndexPkgMetadata _ ->
let mkEx = either
(Left . SomeException . InvalidFileInIndex file raw)
Right
in mkEx $ parseJSON_Keys_NoLayout keyEnv raw
-- Read an entry from the tar file. Returns entry content separately,
-- throwing an exception if the entry is not a regular file.
-- Also throws an exception if the 'DirectoryEntry' is invalid.
getTarEntry :: DirectoryEntry
-> IO (Tar.Entry, BS.L.ByteString, Maybe DirectoryEntry)
getTarEntry (DirectoryEntry offset) = do
entry <- Tar.hReadEntry h offset
content <- case Tar.entryContent entry of
Tar.NormalFile content _sz -> return content
_ -> throwIO $ userError "withIndex: unexpected entry"
let next = DirectoryEntry $ Tar.nextEntryOffset entry offset
mNext = guard (next < directoryNext) >> return next
return (entry, content, mNext)
-- Get cabal file
getCabal :: Throws InvalidPackageException
=> PackageIdentifier -> IO (Trusted BS.L.ByteString)
getCabal pkgId = do
mCabal <- getFile $ IndexPkgCabal pkgId
case mCabal of
Nothing ->
throwChecked $ InvalidPackageException pkgId
Just IndexEntry{..} ->
return $ DeclareTrusted indexEntryContent
-- Get package metadata
getMetadata :: Throws InvalidPackageException
=> PackageIdentifier -> IO (Trusted Targets)
getMetadata pkgId = do
mEntry <- getFile $ IndexPkgMetadata pkgId
case mEntry of
Nothing ->
throwChecked $ InvalidPackageException pkgId
Just IndexEntry{indexEntryContentParsed = Left ex} ->
throwUnchecked $ ex
Just IndexEntry{indexEntryContentParsed = Right signed} ->
return $ trustLocalFile signed
-- Get package info
getFileInfo :: ( Throws InvalidPackageException
, Throws VerificationError
)
=> PackageIdentifier -> IO (Trusted FileInfo)
getFileInfo pkgId = do
targets <- getMetadata pkgId
let mTargetMetadata :: Maybe (Trusted FileInfo)
mTargetMetadata = trustElems
$ trustStatic (static targetsLookup)
`trustApply` DeclareTrusted (targetPath pkgId)
`trustApply` targets
case mTargetMetadata of
Nothing ->
throwChecked $ VerificationErrorUnknownTarget (targetPath pkgId)
Just info ->
return info
-- Get package SHA256
getHash :: ( Throws InvalidPackageException
, Throws VerificationError
)
=> PackageIdentifier -> IO (Trusted Hash)
getHash pkgId = do
info <- getFileInfo pkgId
let mTrustedHash :: Maybe (Trusted Hash)
mTrustedHash = trustElems
$ trustStatic (static fileInfoSHA256)
`trustApply` info
case mTrustedHash of
Nothing ->
throwChecked $ VerificationErrorMissingSHA256 (targetPath pkgId)
Just hash ->
return hash
callback IndexCallbacks{
indexLookupEntry = getEntry
, indexLookupFile = getFile
, indexLookupFileEntry = getFileEntry
, indexDirectory = dir
, indexLookupCabal = getCabal
, indexLookupMetadata = getMetadata
, indexLookupFileInfo = getFileInfo
, indexLookupHash = getHash
}
where
indexPath :: Tar.Entry -> IndexPath
indexPath = rootPath . fromUnrootedFilePath
. Tar.fromTarPathToPosixPath
. Tar.entryTarPath
indexFile :: IndexPath -> Maybe (Some IndexFile)
indexFile = indexFileFromPath repIndexLayout
targetPath :: PackageIdentifier -> TargetPath
targetPath = TargetPathRepo . repoLayoutPkgTarGz repLayout
pathNotRecognized :: SomeException
pathNotRecognized = SomeException (userError "Path not recognized")
{-------------------------------------------------------------------------------
Bootstrapping
-------------------------------------------------------------------------------}
-- | Check if we need to bootstrap (i.e., if we have root info)
requiresBootstrap :: Repository down -> IO Bool
requiresBootstrap rep = isNothing <$> repGetCached rep CachedRoot
-- | Bootstrap the chain of trust
--
-- New clients might need to obtain a copy of the root metadata. This however
-- represents a chicken-and-egg problem: how can we verify the root metadata
-- we downloaded? The only possibility is to be provided with a set of an
-- out-of-band set of root keys and an appropriate threshold.
--
-- Clients who provide a threshold of 0 can do an initial "unsafe" update
-- of the root information, if they wish.
--
-- The downloaded root information will _only_ be verified against the
-- provided keys, and _not_ against previously downloaded root info (if any).
-- It is the responsibility of the client to call `bootstrap` only when this
-- is the desired behaviour.
bootstrap :: (Throws SomeRemoteError, Throws VerificationError)
=> Repository down -> [KeyId] -> KeyThreshold -> IO ()
bootstrap rep@Repository{..} trustedRootKeys keyThreshold = withMirror rep $ runVerify repLockCache $ do
_newRoot :: Trusted Root <- do
(targetPath, tempPath) <- getRemote' rep (AttemptNr 0) (RemoteRoot Nothing)
signed <- throwErrorsChecked (VerificationErrorDeserialization targetPath) =<<
readDownloadedJSON rep KeyEnv.empty tempPath
verified <- throwErrorsChecked id $ verifyFingerprints
trustedRootKeys
keyThreshold
targetPath
signed
return $ trustVerified verified
clearCache rep
{-------------------------------------------------------------------------------
Wrapper around the Repository functions
-------------------------------------------------------------------------------}
getRemote :: forall fs down typ. Throws SomeRemoteError
=> Repository down
-> AttemptNr
-> RemoteFile fs typ
-> Verify (Some Format, TargetPath, down typ)
getRemote r attemptNr file = do
(Some format, downloaded) <- repGetRemote r attemptNr file
let targetPath = TargetPathRepo $ remoteRepoPath' (repLayout r) file format
return (Some (hasFormatGet format), targetPath, downloaded)
-- | Variation on getRemote where we only expect one type of result
getRemote' :: forall f down typ. Throws SomeRemoteError
=> Repository down
-> AttemptNr
-> RemoteFile (f :- ()) typ
-> Verify (TargetPath, down typ)
getRemote' r isRetry file = ignoreFormat <$> getRemote r isRetry file
where
ignoreFormat (_format, targetPath, tempPath) = (targetPath, tempPath)
clearCache :: MonadIO m => Repository down -> m ()
clearCache r = liftIO $ repClearCache r
log :: MonadIO m => Repository down -> LogMessage -> m ()
log r msg = liftIO $ repLog r msg
-- Tries to load the cached mirrors file
withMirror :: Repository down -> IO a -> IO a
withMirror rep callback = do
mMirrors <- repGetCached rep CachedMirrors
mirrors <- case mMirrors of
Nothing -> return Nothing
Just fp -> filterMirrors <$>
(throwErrorsUnchecked LocalFileCorrupted =<<
readJSON_NoKeys_NoLayout fp)
repWithMirror rep mirrors $ callback
where
filterMirrors :: UninterpretedSignatures Mirrors -> Maybe [Mirror]
filterMirrors = Just
. filter (canUseMirror . mirrorContent)
. mirrorsMirrors
. uninterpretedSigned
-- Once we add support for partial mirrors, we wil need an additional
-- argument to 'repWithMirror' (here, not in the Repository API itself)
-- that tells us which files we will be requested from the mirror.
-- We can then compare that against the specification of the partial mirror
-- to see if all of those files are available from this mirror.
canUseMirror :: MirrorContent -> Bool
canUseMirror MirrorFull = True
{-------------------------------------------------------------------------------
Exceptions
-------------------------------------------------------------------------------}
-- | Re-throw all exceptions thrown by the client API as unchecked exceptions
uncheckClientErrors :: ( ( Throws VerificationError
, Throws SomeRemoteError
, Throws InvalidPackageException
) => IO a )
-> IO a
uncheckClientErrors act = handleChecked rethrowVerificationError
$ handleChecked rethrowSomeRemoteError
$ handleChecked rethrowInvalidPackageException
$ act
where
rethrowVerificationError :: VerificationError -> IO a
rethrowVerificationError = throwIO
rethrowSomeRemoteError :: SomeRemoteError -> IO a
rethrowSomeRemoteError = throwIO
rethrowInvalidPackageException :: InvalidPackageException -> IO a
rethrowInvalidPackageException = throwIO
data InvalidPackageException = InvalidPackageException PackageIdentifier
deriving (Typeable)
data LocalFileCorrupted = LocalFileCorrupted DeserializationError
deriving (Typeable)
data InvalidFileInIndex = forall dec. InvalidFileInIndex {
invalidFileInIndex :: IndexFile dec
, invalidFileInIndexRaw :: BS.L.ByteString
, invalidFileInIndexError :: DeserializationError
}
deriving (Typeable)
#if MIN_VERSION_base(4,8,0)
deriving instance Show InvalidPackageException
deriving instance Show LocalFileCorrupted
deriving instance Show InvalidFileInIndex
instance Exception InvalidPackageException where displayException = pretty
instance Exception LocalFileCorrupted where displayException = pretty
instance Exception InvalidFileInIndex where displayException = pretty
#else
instance Show InvalidPackageException where show = pretty
instance Show LocalFileCorrupted where show = pretty
instance Show InvalidFileInIndex where show = pretty
instance Exception InvalidPackageException
instance Exception LocalFileCorrupted
instance Exception InvalidFileInIndex
#endif
instance Pretty InvalidPackageException where
pretty (InvalidPackageException pkgId) = "Invalid package " ++ display pkgId
instance Pretty LocalFileCorrupted where
pretty (LocalFileCorrupted err) = "Local file corrupted: " ++ pretty err
instance Pretty InvalidFileInIndex where
pretty (InvalidFileInIndex file raw err) = unlines [
"Invalid file in index: " ++ pretty file
, "Error: " ++ pretty err
, "Unparsed file: " ++ BS.L.C8.unpack raw
]
{-------------------------------------------------------------------------------
Auxiliary
-------------------------------------------------------------------------------}
-- | Local files are assumed trusted
--
-- There is no point tracking chain of trust for local files because that chain
-- would necessarily have to start at an implicitly trusted (though unverified)
-- file: the root metadata.
trustLocalFile :: Signed a -> Trusted a
trustLocalFile Signed{..} = DeclareTrusted signed
-- | Just a simple wrapper around 'verifyFileInfo'
--
-- Throws a VerificationError if verification failed.
verifyFileInfo' :: (MonadIO m, DownloadedFile down)
=> Maybe (Trusted FileInfo)
-> TargetPath -- ^ For error messages
-> down typ -- ^ File to verify
-> m ()
verifyFileInfo' Nothing _ _ = return ()
verifyFileInfo' (Just info) targetPath tempPath = liftIO $ do
verified <- downloadedVerify tempPath info
unless verified $ throw $ VerificationErrorFileInfo targetPath
readCachedJSON :: (MonadIO m, FromJSON ReadJSON_Keys_Layout a)
=> Repository down -> KeyEnv -> Path Absolute
-> m (Either DeserializationError a)
readCachedJSON Repository{..} keyEnv fp = liftIO $ do
bs <- readLazyByteString fp
evaluate $ parseJSON_Keys_Layout keyEnv repLayout bs
readDownloadedJSON :: (MonadIO m, FromJSON ReadJSON_Keys_Layout a)
=> Repository down -> KeyEnv -> down Metadata
-> m (Either DeserializationError a)
readDownloadedJSON Repository{..} keyEnv fp = liftIO $ do
bs <- downloadedRead fp
evaluate $ parseJSON_Keys_Layout keyEnv repLayout bs
throwErrorsUnchecked :: ( MonadIO m
, Exception e'
)
=> (e -> e') -> Either e a -> m a
throwErrorsUnchecked f (Left err) = liftIO $ throwUnchecked (f err)
throwErrorsUnchecked _ (Right a) = return a
throwErrorsChecked :: ( Throws e'
, MonadIO m
, Exception e'
)
=> (e -> e') -> Either e a -> m a
throwErrorsChecked f (Left err) = liftIO $ throwChecked (f err)
throwErrorsChecked _ (Right a) = return a
eitherToMaybe :: Either a b -> Maybe b
eitherToMaybe (Left _) = Nothing
eitherToMaybe (Right b) = Just b
hackage-security-0.6.2.4/src/Hackage/Security/Client/ 0000755 0000000 0000000 00000000000 07346545000 020441 5 ustar 00 0000000 0000000 hackage-security-0.6.2.4/src/Hackage/Security/Client/Formats.hs 0000644 0000000 0000000 00000007443 07346545000 022420 0 ustar 00 0000000 0000000 module Hackage.Security.Client.Formats (
-- * Formats
-- ** Type level
FormatUn
, FormatGz
-- ** Term level
, Format(..)
, Formats(..)
-- * Key membership
, HasFormat(..)
-- ** Utility
, hasFormatAbsurd
, hasFormatGet
-- * Map-like operations
, formatsMap
, formatsMember
, formatsLookup
) where
import MyPrelude
import Hackage.Security.Util.Stack
import Hackage.Security.Util.TypedEmbedded
{-------------------------------------------------------------------------------
Formats
-------------------------------------------------------------------------------}
data FormatUn
data FormatGz
-- | Format is a singleton type (reflection type to term level)
--
-- NOTE: In the future we might add further compression formats.
data Format :: * -> * where
FUn :: Format FormatUn
FGz :: Format FormatGz
deriving instance Show (Format f)
deriving instance Eq (Format f)
instance Unify Format where
unify FUn FUn = Just Refl
unify FGz FGz = Just Refl
unify _ _ = Nothing
{-------------------------------------------------------------------------------
Products
-------------------------------------------------------------------------------}
-- | Available formats
--
-- Rather than having a general list here, we enumerate all possibilities.
-- This means we are very precise about what we expect, and we avoid any runtime
-- errors about unexpected format definitions.
--
-- NOTE: If we add additional cases here (for dealing with additional formats)
-- all calls to @error "inaccessible"@ need to be reevaluated.
data Formats :: * -> * -> * where
FsNone :: Formats () a
FsUn :: a -> Formats (FormatUn :- ()) a
FsGz :: a -> Formats (FormatGz :- ()) a
FsUnGz :: a -> a -> Formats (FormatUn :- FormatGz :- ()) a
deriving instance Eq a => Eq (Formats fs a)
deriving instance Show a => Show (Formats fs a)
instance Functor (Formats fs) where
fmap g = formatsMap (\_format -> g)
{-------------------------------------------------------------------------------
Key membership
-------------------------------------------------------------------------------}
-- | @HasFormat fs f@ is a proof that @f@ is a key in @fs@.
--
-- See 'formatsMember' and 'formatsLookup' for typical usage.
data HasFormat :: * -> * -> * where
HFZ :: Format f -> HasFormat (f :- fs) f
HFS :: HasFormat fs f -> HasFormat (f' :- fs) f
deriving instance Eq (HasFormat fs f)
deriving instance Show (HasFormat fs f)
hasFormatAbsurd :: HasFormat () f -> a
hasFormatAbsurd _ = error "inaccessible"
hasFormatGet :: HasFormat fs f -> Format f
hasFormatGet (HFZ f) = f
hasFormatGet (HFS hf) = hasFormatGet hf
{-------------------------------------------------------------------------------
Map-like functionality
-------------------------------------------------------------------------------}
formatsMap :: (forall f. Format f -> a -> b) -> Formats fs a -> Formats fs b
formatsMap _ FsNone = FsNone
formatsMap f (FsUn a) = FsUn (f FUn a)
formatsMap f (FsGz a) = FsGz (f FGz a)
formatsMap f (FsUnGz a a') = FsUnGz (f FUn a) (f FGz a')
formatsMember :: Format f -> Formats fs a -> Maybe (HasFormat fs f)
formatsMember _ FsNone = Nothing
formatsMember FUn (FsUn _ ) = Just $ HFZ FUn
formatsMember FUn (FsGz _) = Nothing
formatsMember FUn (FsUnGz _ _) = Just $ HFZ FUn
formatsMember FGz (FsUn _ ) = Nothing
formatsMember FGz (FsGz _) = Just $ HFZ FGz
formatsMember FGz (FsUnGz _ _) = Just $ HFS (HFZ FGz)
formatsLookup :: HasFormat fs f -> Formats fs a -> a
formatsLookup (HFZ FUn) (FsUn a ) = a
formatsLookup (HFZ FUn) (FsUnGz a _) = a
formatsLookup (HFZ FGz) (FsGz a) = a
formatsLookup (HFS hf) (FsUn _ ) = hasFormatAbsurd hf
formatsLookup (HFS hf) (FsGz _) = hasFormatAbsurd hf
formatsLookup (HFS hf) (FsUnGz _ a) = formatsLookup hf (FsGz a)
hackage-security-0.6.2.4/src/Hackage/Security/Client/Repository.hs 0000644 0000000 0000000 00000043732 07346545000 023165 0 ustar 00 0000000 0000000 -- | Abstract definition of a Repository
--
-- Most clients should only need to import this module if they wish to define
-- their own Repository implementations.
{-# LANGUAGE CPP #-}
module Hackage.Security.Client.Repository (
-- * Files
Metadata -- type index (really a kind)
, Binary -- type index (really a kind)
, RemoteFile(..)
, CachedFile(..)
, IndexFile(..)
, remoteFileDefaultFormat
, remoteFileDefaultInfo
-- * Repository proper
, Repository(..)
, AttemptNr(..)
, LogMessage(..)
, UpdateFailure(..)
, SomeRemoteError(..)
-- ** Downloaded files
, DownloadedFile(..)
-- ** Helpers
, mirrorsUnsupported
-- * Paths
, remoteRepoPath
, remoteRepoPath'
-- * Utility
, IsCached(..)
, mustCache
) where
import MyPrelude
import Control.Exception
import Data.Typeable (Typeable)
import qualified Codec.Archive.Tar.Index as Tar
import qualified Data.ByteString.Lazy as BS.L
import Distribution.Package
import Distribution.Text
import Hackage.Security.Client.Formats
import Hackage.Security.Client.Verify
import Hackage.Security.Trusted
import Hackage.Security.TUF
import Hackage.Security.Util.Checked
import Hackage.Security.Util.Path
import Hackage.Security.Util.Pretty
import Hackage.Security.Util.Some
import Hackage.Security.Util.Stack
{-------------------------------------------------------------------------------
Files
-------------------------------------------------------------------------------}
data Metadata
data Binary
-- | Abstract definition of files we might have to download
--
-- 'RemoteFile' is parametrized by the type of the formats that we can accept
-- from the remote repository, as well as with information on whether this file
-- is metadata actual binary content.
--
-- NOTE: Haddock lacks GADT support so constructors have only regular comments.
data RemoteFile :: * -> * -> * where
-- Timestamp metadata (@timestamp.json@)
--
-- We never have (explicit) file length available for timestamps.
RemoteTimestamp :: RemoteFile (FormatUn :- ()) Metadata
-- Root metadata (@root.json@)
--
-- For root information we may or may not have the file info available:
--
-- - If during the normal update process the new snapshot tells us the root
-- information has changed, we can use the file info from the snapshot.
-- - If however we need to update the root metadata due to a verification
-- exception we do not know the file info.
-- - We also do not know the file info during bootstrapping.
RemoteRoot :: Maybe (Trusted FileInfo) -> RemoteFile (FormatUn :- ()) Metadata
-- Snapshot metadata (@snapshot.json@)
--
-- We get file info of the snapshot from the timestamp.
RemoteSnapshot :: Trusted FileInfo -> RemoteFile (FormatUn :- ()) Metadata
-- Mirrors metadata (@mirrors.json@)
--
-- We get the file info from the snapshot.
RemoteMirrors :: Trusted FileInfo -> RemoteFile (FormatUn :- ()) Metadata
-- Index
--
-- The index file length comes from the snapshot.
--
-- When we request that the index is downloaded, it is up to the repository
-- to decide whether to download @00-index.tar@ or @00-index.tar.gz@.
-- The callback is told which format was requested.
--
-- It is a bug to request a file that the repository does not provide
-- (the snapshot should make it clear which files are available).
RemoteIndex :: HasFormat fs FormatGz
-> Formats fs (Trusted FileInfo)
-> RemoteFile fs Binary
-- Actual package
--
-- Package file length comes from the corresponding @targets.json@.
RemotePkgTarGz :: PackageIdentifier
-> Trusted FileInfo
-> RemoteFile (FormatGz :- ()) Binary
deriving instance Show (RemoteFile fs typ)
instance Pretty (RemoteFile fs typ) where
pretty RemoteTimestamp = "timestamp"
pretty (RemoteRoot _) = "root"
pretty (RemoteSnapshot _) = "snapshot"
pretty (RemoteMirrors _) = "mirrors"
pretty (RemoteIndex _ _) = "index"
pretty (RemotePkgTarGz pkgId _) = "package " ++ display pkgId
-- | Files that we might request from the local cache
data CachedFile =
-- | Timestamp metadata (@timestamp.json@)
CachedTimestamp
-- | Root metadata (@root.json@)
| CachedRoot
-- | Snapshot metadata (@snapshot.json@)
| CachedSnapshot
-- | Mirrors list (@mirrors.json@)
| CachedMirrors
deriving (Eq, Ord, Show)
instance Pretty CachedFile where
pretty CachedTimestamp = "timestamp"
pretty CachedRoot = "root"
pretty CachedSnapshot = "snapshot"
pretty CachedMirrors = "mirrors"
-- | Default format for each file type
--
-- For most file types we don't have a choice; for the index the repository
-- is only required to offer the GZip-compressed format so that is the default.
remoteFileDefaultFormat :: RemoteFile fs typ -> Some (HasFormat fs)
remoteFileDefaultFormat RemoteTimestamp = Some $ HFZ FUn
remoteFileDefaultFormat (RemoteRoot _) = Some $ HFZ FUn
remoteFileDefaultFormat (RemoteSnapshot _) = Some $ HFZ FUn
remoteFileDefaultFormat (RemoteMirrors _) = Some $ HFZ FUn
remoteFileDefaultFormat (RemotePkgTarGz _ _) = Some $ HFZ FGz
remoteFileDefaultFormat (RemoteIndex pf _) = Some pf
-- | Default file info (see also 'remoteFileDefaultFormat')
remoteFileDefaultInfo :: RemoteFile fs typ -> Maybe (Trusted FileInfo)
remoteFileDefaultInfo RemoteTimestamp = Nothing
remoteFileDefaultInfo (RemoteRoot info) = info
remoteFileDefaultInfo (RemoteSnapshot info) = Just info
remoteFileDefaultInfo (RemoteMirrors info) = Just info
remoteFileDefaultInfo (RemotePkgTarGz _ info) = Just info
remoteFileDefaultInfo (RemoteIndex pf info) = Just $ formatsLookup pf info
{-------------------------------------------------------------------------------
Repository proper
-------------------------------------------------------------------------------}
-- | Repository
--
-- This is an abstract representation of a repository. It simply provides a way
-- to download metafiles and target files, without specifying how this is done.
-- For instance, for a local repository this could just be doing a file read,
-- whereas for remote repositories this could be using any kind of HTTP client.
data Repository down = DownloadedFile down => Repository {
-- | Get a file from the server
--
-- Responsibilies of 'repGetRemote':
--
-- * Download the file from the repository and make it available at a
-- temporary location
-- * Use the provided file length to protect against endless data attacks.
-- (Repositories such as local repositories that are not susceptible to
-- endless data attacks can safely ignore this argument.)
-- * Move the file from its temporary location to its permanent location
-- if verification succeeds.
--
-- NOTE: Calls to 'repGetRemote' should _always_ be in the scope of
-- 'repWithMirror'.
repGetRemote :: forall fs typ. Throws SomeRemoteError
=> AttemptNr
-> RemoteFile fs typ
-> Verify (Some (HasFormat fs), down typ)
-- | Get a cached file (if available)
, repGetCached :: CachedFile -> IO (Maybe (Path Absolute))
-- | Get the cached root
--
-- This is a separate method only because clients must ALWAYS have root
-- information available.
, repGetCachedRoot :: IO (Path Absolute)
-- | Clear all cached data
--
-- In particular, this should remove the snapshot and the timestamp.
-- It would also be okay, but not required, to delete the index.
, repClearCache :: IO ()
-- | Open the tarball for reading
--
-- This function has this shape so that:
--
-- * We can read multiple files from the tarball without having to open
-- and close the handle each time
-- * We can close the handle immediately when done.
, repWithIndex :: forall a. (Handle -> IO a) -> IO a
-- | Read the index index
, repGetIndexIdx :: IO Tar.TarIndex
-- | Lock the cache (during updates)
, repLockCache :: IO () -> IO ()
-- | Mirror selection
--
-- The purpose of 'repWithMirror' is to scope mirror selection. The idea
-- is that if we have
--
-- > repWithMirror mirrorList $
-- > someCallback
--
-- then the repository may pick a mirror before calling @someCallback@,
-- catch exceptions thrown by @someCallback@, and potentially try the
-- callback again with a different mirror.
--
-- The list of mirrors may be @Nothing@ if we haven't yet downloaded the
-- list of mirrors from the repository, or when our cached list of mirrors
-- is invalid. Of course, if we did download it, then the list of mirrors
-- may still be empty. In this case the repository must fall back to its
-- primary download mechanism.
--
-- Mirrors as currently defined (in terms of a "base URL") are inherently a
-- HTTP (or related) concept, so in repository implementations such as the
-- local-repo 'repWithMirrors' is probably just an identity operation (see
-- 'ignoreMirrors'). Conversely, HTTP implementations of repositories may
-- have other, out-of-band information (for example, coming from a cabal
-- config file) that they may use to influence mirror selection.
, repWithMirror :: forall a. Maybe [Mirror] -> IO a -> IO a
-- | Logging
, repLog :: LogMessage -> IO ()
-- | Layout of this repository
, repLayout :: RepoLayout
-- | Layout of the index
--
-- Since the repository hosts the index, the layout of the index is
-- not independent of the layout of the repository.
, repIndexLayout :: IndexLayout
-- | Description of the repository (used in the show instance)
, repDescription :: String
}
instance Show (Repository down) where
show = repDescription
-- | Helper function to implement 'repWithMirrors'.
mirrorsUnsupported :: Maybe [Mirror] -> IO a -> IO a
mirrorsUnsupported _ = id
-- | Are we requesting this information because of a previous validation error?
--
-- Clients can take advantage of this to tell caches to revalidate files.
newtype AttemptNr = AttemptNr Int
deriving (Eq, Ord, Num)
-- | Log messages
--
-- We use a 'RemoteFile' rather than a 'RepoPath' here because we might not have
-- a 'RepoPath' for the file that we were trying to download (that is, for
-- example if the server does not provide an uncompressed tarball, it doesn't
-- make much sense to list the path to that non-existing uncompressed tarball).
data LogMessage =
-- | Root information was updated
--
-- This message is issued when the root information is updated as part of
-- the normal check for updates procedure. If the root information is
-- updated because of a verification error WarningVerificationError is
-- issued instead.
LogRootUpdated
-- | A verification error
--
-- Verification errors can be temporary, and may be resolved later; hence
-- these are just warnings. (Verification errors that cannot be resolved
-- are thrown as exceptions.)
| LogVerificationError VerificationError
-- | Download a file from a repository
| forall fs typ. LogDownloading (RemoteFile fs typ)
-- | Incrementally updating a file from a repository
| forall fs. LogUpdating (RemoteFile fs Binary)
-- | Selected a particular mirror
| LogSelectedMirror MirrorDescription
-- | Updating a file failed
-- (we will instead download it whole)
| forall fs. LogCannotUpdate (RemoteFile fs Binary) UpdateFailure
-- | We got an exception with a particular mirror
-- (we will try with a different mirror if any are available)
| LogMirrorFailed MirrorDescription SomeException
-- | This log event is triggered before invoking a filesystem lock
-- operation that may block for a significant amount of time; once
-- the possibly blocking call completes successfully,
-- 'LogLockWaitDone' will be emitted.
--
-- @since 0.6.0
| LogLockWait (Path Absolute)
-- | Denotes completion of the operation that advertised a
-- 'LogLockWait' event
--
-- @since 0.6.0
| LogLockWaitDone (Path Absolute)
-- | Denotes the filesystem lock previously acquired (signaled by
-- 'LogLockWait') has been released.
--
-- @since 0.6.0
| LogUnlock (Path Absolute)
-- | Records why we are downloading a file rather than updating it.
data UpdateFailure =
-- | Server does not support incremental downloads
UpdateImpossibleUnsupported
-- | We don't have a local copy of the file to update
| UpdateImpossibleNoLocalCopy
-- | Update failed twice
--
-- If we attempt an incremental update the first time, and it fails, we let
-- it go round the loop, update local security information, and try again.
-- But if an incremental update then fails _again_, we instead attempt a
-- regular download.
| UpdateFailedTwice
-- | Update failed (for example: perhaps the local file got corrupted)
| UpdateFailed SomeException
{-------------------------------------------------------------------------------
Downloaded files
-------------------------------------------------------------------------------}
class DownloadedFile (down :: * -> *) where
-- | Verify a download file
downloadedVerify :: down a -> Trusted FileInfo -> IO Bool
-- | Read the file we just downloaded into memory
--
-- We never read binary data, only metadata.
downloadedRead :: down Metadata -> IO BS.L.ByteString
-- | Copy a downloaded file to its destination
downloadedCopyTo :: down a -> Path Absolute -> IO ()
{-------------------------------------------------------------------------------
Exceptions thrown by specific Repository implementations
-------------------------------------------------------------------------------}
-- | Repository-specific exceptions
--
-- For instance, for repositories using HTTP this might correspond to a 404;
-- for local repositories this might correspond to file-not-found, etc.
data SomeRemoteError :: * where
SomeRemoteError :: Exception e => e -> SomeRemoteError
deriving (Typeable)
#if MIN_VERSION_base(4,8,0)
deriving instance Show SomeRemoteError
instance Exception SomeRemoteError where displayException = pretty
#else
instance Exception SomeRemoteError
instance Show SomeRemoteError where show = pretty
#endif
instance Pretty SomeRemoteError where
pretty (SomeRemoteError ex) = displayException ex
{-------------------------------------------------------------------------------
Paths
-------------------------------------------------------------------------------}
remoteRepoPath :: RepoLayout -> RemoteFile fs typ -> Formats fs RepoPath
remoteRepoPath RepoLayout{..} = go
where
go :: RemoteFile fs typ -> Formats fs RepoPath
go RemoteTimestamp = FsUn $ repoLayoutTimestamp
go (RemoteRoot _) = FsUn $ repoLayoutRoot
go (RemoteSnapshot _) = FsUn $ repoLayoutSnapshot
go (RemoteMirrors _) = FsUn $ repoLayoutMirrors
go (RemotePkgTarGz pId _) = FsGz $ repoLayoutPkgTarGz pId
go (RemoteIndex _ lens) = formatsMap goIndex lens
goIndex :: Format f -> a -> RepoPath
goIndex FUn _ = repoLayoutIndexTar
goIndex FGz _ = repoLayoutIndexTarGz
remoteRepoPath' :: RepoLayout -> RemoteFile fs typ -> HasFormat fs f -> RepoPath
remoteRepoPath' repoLayout file format =
formatsLookup format $ remoteRepoPath repoLayout file
{-------------------------------------------------------------------------------
Utility
-------------------------------------------------------------------------------}
-- | Is a particular remote file cached?
data IsCached :: * -> * where
-- This remote file should be cached, and we ask for it by name
CacheAs :: CachedFile -> IsCached Metadata
-- We don't cache this remote file
--
-- This doesn't mean a Repository should not feel free to cache the file
-- if desired, but it does mean the generic algorithms will never ask for
-- this file from the cache.
DontCache :: IsCached Binary
-- The index is somewhat special: it should be cached, but we never
-- ask for it directly.
--
-- Instead, we will ask the Repository for files _from_ the index, which it
-- can serve however it likes. For instance, some repositories might keep
-- the index in uncompressed form, others in compressed form; some might
-- keep an index tarball index for quick access, others may scan the tarball
-- linearly, etc.
CacheIndex :: IsCached Binary
--TODO: ^^ older haddock doesn't support GADT doc comments :-(
deriving instance Eq (IsCached typ)
deriving instance Show (IsCached typ)
-- | Which remote files should we cache locally?
mustCache :: RemoteFile fs typ -> IsCached typ
mustCache RemoteTimestamp = CacheAs CachedTimestamp
mustCache (RemoteRoot _) = CacheAs CachedRoot
mustCache (RemoteSnapshot _) = CacheAs CachedSnapshot
mustCache (RemoteMirrors _) = CacheAs CachedMirrors
mustCache (RemoteIndex {}) = CacheIndex
mustCache (RemotePkgTarGz _ _) = DontCache
instance Pretty LogMessage where
pretty LogRootUpdated =
"Root info updated"
pretty (LogVerificationError err) =
"Verification error: " ++ pretty err
pretty (LogDownloading file) =
"Downloading " ++ pretty file
pretty (LogUpdating file) =
"Updating " ++ pretty file
pretty (LogSelectedMirror mirror) =
"Selected mirror " ++ mirror
pretty (LogCannotUpdate file ex) =
"Cannot update " ++ pretty file ++ " (" ++ pretty ex ++ ")"
pretty (LogMirrorFailed mirror ex) =
"Exception " ++ displayException ex ++ " when using mirror " ++ mirror
pretty (LogLockWait file) =
"Waiting to acquire cache lock on " ++ pretty file
pretty (LogLockWaitDone file) =
"Acquired cache lock on " ++ pretty file
pretty (LogUnlock file) =
"Released cache lock on " ++ pretty file
instance Pretty UpdateFailure where
pretty UpdateImpossibleUnsupported =
"server does not provide incremental downloads"
pretty UpdateImpossibleNoLocalCopy =
"no local copy"
pretty UpdateFailedTwice =
"update failed twice"
pretty (UpdateFailed ex) =
displayException ex
hackage-security-0.6.2.4/src/Hackage/Security/Client/Repository/ 0000755 0000000 0000000 00000000000 07346545000 022620 5 ustar 00 0000000 0000000 hackage-security-0.6.2.4/src/Hackage/Security/Client/Repository/Cache.hs 0000644 0000000 0000000 00000025211 07346545000 024160 0 ustar 00 0000000 0000000 {-# LANGUAGE BangPatterns #-}
-- | The files we cache from the repository
--
-- Both the Local and the Remote repositories make use of this module.
module Hackage.Security.Client.Repository.Cache (
Cache(..)
, getCached
, getCachedRoot
, getCachedIndex
, clearCache
, withIndex
, getIndexIdx
, cacheRemoteFile
, lockCache
, lockCacheWithLogger
) where
import MyPrelude
import Control.Exception
import Control.Monad
import Control.Monad.IO.Class
import Data.Maybe
import Codec.Archive.Tar (Entries, pattern Done, pattern Fail, pattern Next)
import Codec.Archive.Tar.Index (TarIndex, IndexBuilder, TarEntryOffset)
import qualified Codec.Archive.Tar as Tar
import qualified Codec.Archive.Tar.Index as TarIndex
import qualified Codec.Compression.GZip as GZip
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as BS.L
import Hackage.Security.Client.Repository
import Hackage.Security.Client.Formats
import Hackage.Security.TUF
import Hackage.Security.Util.Checked
import Hackage.Security.Util.Exit
import Hackage.Security.Util.IO
import Hackage.Security.Util.Path
-- | Location and layout of the local cache
data Cache = Cache {
cacheRoot :: Path Absolute
, cacheLayout :: CacheLayout
}
-- | Cache a previously downloaded remote file
cacheRemoteFile :: forall down typ f. DownloadedFile down
=> Cache -> down typ -> Format f -> IsCached typ -> IO ()
cacheRemoteFile cache downloaded f isCached = do
go f isCached
case isCached of
CacheIndex -> rebuildTarIndex cache
_otherwise -> return ()
where
go :: Format f -> IsCached typ -> IO ()
go _ DontCache = return ()
go FUn (CacheAs file) = copyTo (cachedFilePath cache file)
go FGz CacheIndex = copyTo (cachedIndexPath cache FGz) >> unzipIndex
go _ _ = error "cacheRemoteFile: unexpected case" -- TODO: enforce in types?
copyTo :: Path Absolute -> IO ()
copyTo fp = do
createDirectoryIfMissing True (takeDirectory fp)
downloadedCopyTo downloaded fp
-- Whether or not we downloaded the compressed index incrementally, we can
-- update the uncompressed index incrementally (assuming the local files
-- have not been corrupted).
-- NOTE: This assumes we already updated the compressed file.
unzipIndex :: IO ()
unzipIndex = do
createDirectoryIfMissing True (takeDirectory indexUn)
shouldTryIncremental <- cachedIndexProbablyValid
if shouldTryIncremental
then do
success <- unzipIncremental
unless success unzipNonIncremental
else unzipNonIncremental
where
unzipIncremental = do
compressed <- readLazyByteString indexGz
let uncompressed = GZip.decompress compressed
-- compare prefix of old index with prefix of new index to
-- ensure that it's safe to incrementally append
(seekTo',newTail') <- withFile indexUn ReadMode $ \h ->
multipleExitPoints $ do
currentSize <- liftIO $ hFileSize h
let seekTo = 0 `max` (currentSize - tarTrailer)
(newPrefix,newTail) = BS.L.splitAt (fromInteger seekTo)
uncompressed
(oldPrefix,oldTrailer) <- BS.L.splitAt (fromInteger seekTo) <$>
liftIO (BS.L.hGetContents h)
unless (oldPrefix == newPrefix) $
exit (0,mempty) -- corrupted index.tar prefix
-- sanity check: verify there's a 1KiB zero-filled trailer
unless (oldTrailer == tarTrailerBs) $
exit (0,mempty) -- corrupted .tar trailer
return (seekTo,newTail)
if seekTo' <= 0
then return False -- fallback to non-incremental update
else withFile indexUn ReadWriteMode $ \h -> do
-- everything seems fine; append the new data
liftIO $ hSeek h AbsoluteSeek seekTo'
liftIO $ BS.L.hPut h newTail'
return True
unzipNonIncremental = do
compressed <- readLazyByteString indexGz
let uncompressed = GZip.decompress compressed
withFile indexUn WriteMode $ \h ->
BS.L.hPut h uncompressed
void . handleDoesNotExist $
removeFile indexIdx -- Force a full rebuild of the index too
-- When we update the 00-index.tar we also update the 00-index.tar.idx
-- so the expected state is that the modification time for the tar.idx
-- is the same or later than the .tar file. But if someone modified
-- the 00-index.tar then the modification times will be reversed. So,
-- if the modification times are reversed then we should not do an
-- incremental update but should rewrite the whole file.
cachedIndexProbablyValid :: IO Bool
cachedIndexProbablyValid =
fmap (fromMaybe False) $
handleDoesNotExist $ do
tsUn <- getModificationTime indexUn
tsIdx <- getModificationTime indexIdx
return (tsIdx >= tsUn)
indexGz = cachedIndexPath cache FGz
indexUn = cachedIndexPath cache FUn
indexIdx = cachedIndexIdxPath cache
tarTrailer :: Integer
tarTrailer = 1024
tarTrailerBs = BS.L.replicate (fromInteger tarTrailer) 0x00
-- | Rebuild the tarball index
--
-- Attempts to add to the existing index, if one exists.
--
-- TODO: Use throwChecked rather than throwUnchecked, and deal with the fallout.
-- See .
rebuildTarIndex :: Cache -> IO ()
rebuildTarIndex cache = do
(builder, offset) <- initBuilder <$> tryReadIndex (cachedIndexIdxPath cache)
withFile (cachedIndexPath cache FUn) ReadMode $ \hTar -> do
TarIndex.hSeekEntryOffset hTar offset
newEntries <- Tar.read <$> BS.L.hGetContents hTar
case addEntries builder newEntries of
Left ex -> throwUnchecked ex
Right idx -> withFile (cachedIndexIdxPath cache) WriteMode $ \hIdx -> do
hSetBuffering hIdx (BlockBuffering Nothing)
BS.hPut hIdx $ TarIndex.serialise idx
where
-- The initial index builder
-- If we don't have an index (or it's broken), we start from scratch
initBuilder :: Either e TarIndex -> (IndexBuilder, TarEntryOffset)
initBuilder (Left _) = ( TarIndex.empty, 0 )
initBuilder (Right idx) = ( TarIndex.unfinalise idx
, TarIndex.indexEndEntryOffset idx
)
-- | Get a cached file (if available)
getCached :: Cache -> CachedFile -> IO (Maybe (Path Absolute))
getCached cache cachedFile = do
exists <- doesFileExist localPath
if exists then return $ Just localPath
else return $ Nothing
where
localPath = cachedFilePath cache cachedFile
-- | Get the cached index (if available)
getCachedIndex :: Cache -> Format f -> IO (Maybe (Path Absolute))
getCachedIndex cache format = do
exists <- doesFileExist localPath
if exists then return $ Just localPath
else return $ Nothing
where
localPath = cachedIndexPath cache format
-- | Get the cached root
--
-- Calling 'getCachedRoot' without root info available is a programmer error
-- and will result in an unchecked exception. See 'requiresBootstrap'.
getCachedRoot :: Cache -> IO (Path Absolute)
getCachedRoot cache = do
mPath <- getCached cache CachedRoot
case mPath of
Just p -> return p
Nothing -> internalError "Client missing root info"
getIndexIdx :: Cache -> IO TarIndex
getIndexIdx cache = do
mIndex <- tryReadIndex $ cachedIndexIdxPath cache
case mIndex of
Left _ -> throwIO $ userError "Could not read index. Did you call 'checkForUpdates'?"
Right idx -> return idx
withIndex :: Cache -> (Handle -> IO a) -> IO a
withIndex cache = withFile (cachedIndexPath cache FUn) ReadMode
-- | Delete a previously downloaded remote file
clearCache :: Cache -> IO ()
clearCache cache = void . handleDoesNotExist $ do
removeFile $ cachedFilePath cache CachedTimestamp
removeFile $ cachedFilePath cache CachedSnapshot
-- | Lock the cache
--
-- This avoids two concurrent processes updating the cache at the same time,
-- provided they both take the lock.
lockCache :: Cache -> IO () -> IO ()
lockCache Cache{..} = withDirLock (\_ -> return ()) cacheRoot
-- | Variant of 'lockCache' which emits 'LogMessage's before and after
-- a possibly blocking file-locking system call
--
-- @since 0.6.0
lockCacheWithLogger :: (LogMessage -> IO ()) -> Cache -> IO () -> IO ()
lockCacheWithLogger logger Cache{..} = withDirLock logger' cacheRoot
where
logger' (WithDirLockEventPre fn) = logger (LogLockWait fn)
logger' (WithDirLockEventPost fn) = logger (LogLockWaitDone fn)
logger' (WithDirLockEventUnlock fn) = logger (LogUnlock fn)
{-------------------------------------------------------------------------------
Auxiliary: tar
-------------------------------------------------------------------------------}
-- | Variation on 'TarIndex.build' that takes in the initial 'IndexBuilder'
addEntries :: IndexBuilder -> Entries e -> Either e TarIndex
addEntries = go
where
go !builder (Next e es) = go (TarIndex.addNextEntry e builder) es
go !builder Done = Right $! TarIndex.finalise builder
go !_ (Fail err) = Left err
-- TODO: How come 'deserialise' uses _strict_ ByteStrings?
tryReadIndex :: Path Absolute -> IO (Either (Maybe IOException) TarIndex)
tryReadIndex fp =
aux <$> try (TarIndex.deserialise <$> readStrictByteString fp)
where
aux :: Either e (Maybe (a, leftover)) -> Either (Maybe e) a
aux (Left e) = Left (Just e)
aux (Right Nothing) = Left Nothing
aux (Right (Just (a, _))) = Right a
{-------------------------------------------------------------------------------
Auxiliary: paths
-------------------------------------------------------------------------------}
cachedFilePath :: Cache -> CachedFile -> Path Absolute
cachedFilePath Cache{cacheLayout=CacheLayout{..}, ..} file =
anchorCachePath cacheRoot $ go file
where
go :: CachedFile -> CachePath
go CachedRoot = cacheLayoutRoot
go CachedTimestamp = cacheLayoutTimestamp
go CachedSnapshot = cacheLayoutSnapshot
go CachedMirrors = cacheLayoutMirrors
cachedIndexPath :: Cache -> Format f -> Path Absolute
cachedIndexPath Cache{..} format =
anchorCachePath cacheRoot $ go format
where
go :: Format f -> CachePath
go FUn = cacheLayoutIndexTar cacheLayout
go FGz = cacheLayoutIndexTarGz cacheLayout
cachedIndexIdxPath :: Cache -> Path Absolute
cachedIndexIdxPath Cache{..} =
anchorCachePath cacheRoot $ cacheLayoutIndexIdx cacheLayout
hackage-security-0.6.2.4/src/Hackage/Security/Client/Repository/HttpLib.hs 0000644 0000000 0000000 00000012277 07346545000 024533 0 ustar 00 0000000 0000000 {-# LANGUAGE CPP #-}
-- | Abstracting over HTTP libraries
module Hackage.Security.Client.Repository.HttpLib (
HttpLib(..)
, HttpRequestHeader(..)
, HttpResponseHeader(..)
, HttpStatus(..)
, ProxyConfig(..)
-- ** Body reader
, BodyReader
, bodyReaderFromBS
) where
import MyPrelude
import Data.IORef
import Network.URI hiding (uriPath, path)
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as BS.L
import Hackage.Security.Util.Checked
import Hackage.Security.Client.Repository (SomeRemoteError)
{-------------------------------------------------------------------------------
Abstraction over HTTP clients (such as HTTP, http-conduit, etc.)
-------------------------------------------------------------------------------}
-- | Abstraction over HTTP clients
--
-- This avoids insisting on a particular implementation (such as the HTTP
-- package) and allows for other implementations (such as a conduit based one).
--
-- NOTE: Library-specific exceptions MUST be wrapped in 'SomeRemoteError'.
data HttpLib = HttpLib {
-- | Download a file
httpGet :: forall a. Throws SomeRemoteError
=> [HttpRequestHeader]
-> URI
-> ([HttpResponseHeader] -> BodyReader -> IO a)
-> IO a
-- | Download a byte range
--
-- Range is starting and (exclusive) end offset in bytes.
--
-- HTTP servers are normally expected to respond to a range request with
-- a "206 Partial Content" response. However, servers can respond with a
-- "200 OK" response, sending the entire file instead (for instance, this
-- may happen for servers that don't actually support range requests, but
-- for which we optimistically assumed they did). Implementations of
-- 'HttpLib' may accept such a response and inform the @hackage-security@
-- library that the whole file is being returned; the security library can
-- then decide to execute the 'BodyReader' anyway (downloading the entire
-- file) or abort the request and try something else. For this reason
-- the security library must be informed whether the server returned the
-- full file or the requested range.
, httpGetRange :: forall a. Throws SomeRemoteError
=> [HttpRequestHeader]
-> URI
-> (Int, Int)
-> (HttpStatus -> [HttpResponseHeader] -> BodyReader -> IO a)
-> IO a
}
-- | Additional request headers
--
-- Since different libraries represent headers differently, here we just
-- abstract over the few request headers that we might want to set
data HttpRequestHeader =
-- | Set @Cache-Control: max-age=0@
HttpRequestMaxAge0
-- | Set @Cache-Control: no-transform@
| HttpRequestNoTransform
deriving (Eq, Ord, Show)
-- | HTTP status code
data HttpStatus =
-- | 200 OK
HttpStatus200OK
-- | 206 Partial Content
| HttpStatus206PartialContent
-- | Response headers
--
-- Since different libraries represent headers differently, here we just
-- abstract over the few response headers that we might want to know about.
data HttpResponseHeader =
-- | Server accepts byte-range requests (@Accept-Ranges: bytes@)
HttpResponseAcceptRangesBytes
deriving (Eq, Ord, Show)
-- | Proxy configuration
--
-- Although actually setting the proxy is the purview of the initialization
-- function for individual 'HttpLib' implementations and therefore outside
-- the scope of this module, we offer this 'ProxyConfiguration' type here as a
-- way to uniformly configure proxies across all 'HttpLib's.
data ProxyConfig a =
-- | Don't use a proxy
ProxyConfigNone
-- | Use this specific proxy
--
-- Individual HTTP backends use their own types for specifying proxies.
| ProxyConfigUse a
-- | Use automatic proxy settings
--
-- What precisely automatic means is 'HttpLib' specific, though
-- typically it will involve looking at the @HTTP_PROXY@ environment
-- variable or the (Windows) registry.
| ProxyConfigAuto
{-------------------------------------------------------------------------------
Body readers
-------------------------------------------------------------------------------}
-- | An @IO@ action that represents an incoming response body coming from the
-- server.
--
-- The action gets a single chunk of data from the response body, or an empty
-- bytestring if no more data is available.
--
-- This definition is copied from the @http-client@ package.
type BodyReader = IO BS.ByteString
-- | Construct a 'Body' reader from a lazy bytestring
--
-- This is appropriate if the lazy bytestring is constructed, say, by calling
-- 'hGetContents' on a network socket, and the chunks of the bytestring
-- correspond to the chunks as they are returned from the OS network layer.
--
-- If the lazy bytestring needs to be re-chunked this function is NOT suitable.
bodyReaderFromBS :: BS.L.ByteString -> IO BodyReader
bodyReaderFromBS lazyBS = do
chunks <- newIORef $ BS.L.toChunks lazyBS
-- NOTE: Lazy bytestrings invariant: no empty chunks
let br = do bss <- readIORef chunks
case bss of
[] -> return BS.empty
(bs:bss') -> writeIORef chunks bss' >> return bs
return br
hackage-security-0.6.2.4/src/Hackage/Security/Client/Repository/Local.hs 0000644 0000000 0000000 00000007217 07346545000 024215 0 ustar 00 0000000 0000000 -- | Local repository
module Hackage.Security.Client.Repository.Local (
LocalRepo
, LocalFile -- opaque
, withRepository
) where
import MyPrelude
import Hackage.Security.Client.Formats
import Hackage.Security.Client.Repository
import Hackage.Security.Client.Repository.Cache
import Hackage.Security.Client.Verify
import Hackage.Security.TUF
import Hackage.Security.Trusted
import Hackage.Security.Util.IO
import Hackage.Security.Util.Path
import Hackage.Security.Util.Pretty
import Hackage.Security.Util.Some
-- | Location of the repository
--
-- Note that we regard the local repository as immutable; we cache files just
-- like we do for remote repositories.
type LocalRepo = Path Absolute
-- | Initialize the repository (and cleanup resources afterwards)
--
-- Like a remote repository, a local repository takes a RepoLayout as argument;
-- but where the remote repository interprets this RepoLayout relative to a URL,
-- the local repository interprets it relative to a local directory.
--
-- It uses the same cache as the remote repository.
withRepository
:: LocalRepo -- ^ Location of local repository
-> Cache -- ^ Location of local cache
-> RepoLayout -- ^ Repository layout
-> IndexLayout -- ^ Index layout
-> (LogMessage -> IO ()) -- ^ Logger
-> (Repository LocalFile -> IO a) -- ^ Callback
-> IO a
withRepository repo
cache
repLayout
repIndexLayout
logger
callback
=
callback Repository {
repGetRemote = getRemote repLayout repo cache
, repGetCached = getCached cache
, repGetCachedRoot = getCachedRoot cache
, repClearCache = clearCache cache
, repWithIndex = withIndex cache
, repGetIndexIdx = getIndexIdx cache
, repLockCache = lockCacheWithLogger logger cache
, repWithMirror = mirrorsUnsupported
, repLog = logger
, repLayout = repLayout
, repIndexLayout = repIndexLayout
, repDescription = "Local repository at " ++ pretty repo
}
-- | Get a file from the server
getRemote :: RepoLayout -> LocalRepo -> Cache
-> AttemptNr
-> RemoteFile fs typ
-> Verify (Some (HasFormat fs), LocalFile typ)
getRemote repoLayout repo cache _attemptNr remoteFile = do
case remoteFileDefaultFormat remoteFile of
Some format -> do
let remotePath' = remoteRepoPath' repoLayout remoteFile format
remotePath = anchorRepoPathLocally repo remotePath'
localFile = LocalFile remotePath
ifVerified $
cacheRemoteFile cache
localFile
(hasFormatGet format)
(mustCache remoteFile)
return (Some format, localFile)
{-------------------------------------------------------------------------------
Files in the local repository
-------------------------------------------------------------------------------}
newtype LocalFile a = LocalFile (Path Absolute)
instance DownloadedFile LocalFile where
downloadedVerify = verifyLocalFile
downloadedRead = \(LocalFile local) -> readLazyByteString local
downloadedCopyTo = \(LocalFile local) -> copyFile local
verifyLocalFile :: LocalFile typ -> Trusted FileInfo -> IO Bool
verifyLocalFile (LocalFile fp) trustedInfo = do
-- Verify the file size before comparing the entire file info
sz <- FileLength <$> getFileSize fp
if sz /= fileInfoLength (trusted trustedInfo)
then return False
else compareTrustedFileInfo (trusted trustedInfo) <$> computeFileInfo fp
hackage-security-0.6.2.4/src/Hackage/Security/Client/Repository/Remote.hs 0000644 0000000 0000000 00000067014 07346545000 024417 0 ustar 00 0000000 0000000 {-# LANGUAGE CPP #-}
-- | An implementation of Repository that talks to repositories over HTTP.
--
-- This implementation is itself parameterized over a 'HttpClient', so that it
-- it not tied to a specific library; for instance, 'HttpClient' can be
-- implemented with the @HTTP@ library, the @http-client@ libary, or others.
--
-- It would also be possible to give _other_ Repository implementations that
-- talk to repositories over HTTP, if you want to make other design decisions
-- than we did here, in particular:
--
-- * We attempt to do incremental downloads of the index when possible.
-- * We reuse the "Repository.Local" to deal with the local cache.
-- * We download @timestamp.json@ and @snapshot.json@ together. This is
-- implemented here because:
-- - One level down (HttpClient) we have no access to the local cache
-- - One level up (Repository API) would require _all_ Repositories to
-- implement this optimization.
module Hackage.Security.Client.Repository.Remote (
-- * Top-level API
withRepository
, RepoOpts(..)
, defaultRepoOpts
, RemoteTemp
-- * File sizes
, FileSize(..)
, fileSizeWithinBounds
) where
import MyPrelude
import Control.Concurrent
import Control.Exception
import Control.Monad (when, unless)
import Control.Monad.IO.Class (MonadIO)
import Data.List (nub, intercalate)
import Data.Typeable
import Network.URI hiding (uriPath, path)
import System.IO ()
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as BS.L
import Hackage.Security.Client.Formats
import Hackage.Security.Client.Repository
import Hackage.Security.Client.Repository.Cache (Cache)
import Hackage.Security.Client.Repository.HttpLib
import Hackage.Security.Client.Verify
import Hackage.Security.Trusted
import Hackage.Security.TUF
import Hackage.Security.Util.Checked
import Hackage.Security.Util.IO
import Hackage.Security.Util.Path
import Hackage.Security.Util.Pretty
import Hackage.Security.Util.Some
import Hackage.Security.Util.Exit
import qualified Hackage.Security.Client.Repository.Cache as Cache
{-------------------------------------------------------------------------------
Server capabilities
-------------------------------------------------------------------------------}
-- | Server capabilities
--
-- As the library interacts with the server and receives replies, we may
-- discover more information about the server's capabilities; for instance,
-- we may discover that it supports incremental downloads.
newtype ServerCapabilities = SC (MVar ServerCapabilities_)
-- | Internal type recording the various server capabilities we support
data ServerCapabilities_ = ServerCapabilities {
-- | Does the server support range requests?
serverAcceptRangesBytes :: Bool
}
newServerCapabilities :: IO ServerCapabilities
newServerCapabilities = SC <$> newMVar ServerCapabilities {
serverAcceptRangesBytes = False
}
updateServerCapabilities :: ServerCapabilities -> [HttpResponseHeader] -> IO ()
updateServerCapabilities (SC mv) responseHeaders = modifyMVar_ mv $ \caps ->
return $ caps {
serverAcceptRangesBytes = serverAcceptRangesBytes caps
|| HttpResponseAcceptRangesBytes `elem` responseHeaders
}
checkServerCapability :: MonadIO m
=> ServerCapabilities -> (ServerCapabilities_ -> a) -> m a
checkServerCapability (SC mv) f = liftIO $ withMVar mv $ return . f
{-------------------------------------------------------------------------------
File size
-------------------------------------------------------------------------------}
data FileSize =
-- | For most files we download we know the exact size beforehand
-- (because this information comes from the snapshot or delegated info)
FileSizeExact Int54
-- | For some files we might not know the size beforehand, but we might
-- be able to provide an upper bound (timestamp, root info)
| FileSizeBound Int54
deriving Show
fileSizeWithinBounds :: Int54 -> FileSize -> Bool
fileSizeWithinBounds sz (FileSizeExact sz') = sz <= sz'
fileSizeWithinBounds sz (FileSizeBound sz') = sz <= sz'
{-------------------------------------------------------------------------------
Top-level API
-------------------------------------------------------------------------------}
-- | Repository options with a reasonable default
--
-- Clients should use 'defaultRepositoryOpts' and override required settings.
data RepoOpts = RepoOpts {
-- | Allow additional mirrors?
--
-- If this is set to True (default), in addition to the (out-of-band)
-- specified mirrors we will also use mirrors reported by those
-- out-of-band mirrors (that is, @mirrors.json@).
repoAllowAdditionalMirrors :: Bool
}
-- | Default repository options
defaultRepoOpts :: RepoOpts
defaultRepoOpts = RepoOpts {
repoAllowAdditionalMirrors = True
}
-- | Initialize the repository (and cleanup resources afterwards)
--
-- We allow to specify multiple mirrors to initialize the repository. These
-- are mirrors that can be found "out of band" (out of the scope of the TUF
-- protocol), for example in a @cabal.config@ file. The TUF protocol itself
-- will specify that any of these mirrors can serve a @mirrors.json@ file
-- that itself contains mirrors; we consider these as _additional_ mirrors
-- to the ones that are passed here.
--
-- NOTE: The list of mirrors should be non-empty (and should typically include
-- the primary server).
--
-- TODO: In the future we could allow finer control over precisely which
-- mirrors we use (which combination of the mirrors that are passed as arguments
-- here and the mirrors that we get from @mirrors.json@) as well as indicating
-- mirror preferences.
withRepository
:: HttpLib -- ^ Implementation of the HTTP protocol
-> [URI] -- ^ "Out of band" list of mirrors
-> RepoOpts -- ^ Repository options
-> Cache -- ^ Location of local cache
-> RepoLayout -- ^ Repository layout
-> IndexLayout -- ^ Index layout
-> (LogMessage -> IO ()) -- ^ Logger
-> (Repository RemoteTemp -> IO a) -- ^ Callback
-> IO a
withRepository httpLib
outOfBandMirrors
repoOpts
cache
repLayout
repIndexLayout
logger
callback
= do
selectedMirror <- newMVar Nothing
caps <- newServerCapabilities
let remoteConfig mirror = RemoteConfig {
cfgLayout = repLayout
, cfgHttpLib = httpLib
, cfgBase = mirror
, cfgCache = cache
, cfgCaps = caps
, cfgLogger = liftIO . logger
, cfgOpts = repoOpts
}
callback Repository {
repGetRemote = getRemote remoteConfig selectedMirror
, repGetCached = Cache.getCached cache
, repGetCachedRoot = Cache.getCachedRoot cache
, repClearCache = Cache.clearCache cache
, repWithIndex = Cache.withIndex cache
, repGetIndexIdx = Cache.getIndexIdx cache
, repLockCache = Cache.lockCacheWithLogger logger cache
, repWithMirror = withMirror httpLib
selectedMirror
logger
outOfBandMirrors
repoOpts
, repLog = logger
, repLayout = repLayout
, repIndexLayout = repIndexLayout
, repDescription = "Remote repository at " ++ show outOfBandMirrors
}
{-------------------------------------------------------------------------------
Implementations of the various methods of Repository
-------------------------------------------------------------------------------}
-- | We select a mirror in 'withMirror' (the implementation of 'repWithMirror').
-- Outside the scope of 'withMirror' no mirror is selected, and a call to
-- 'getRemote' will throw an exception. If this exception is ever thrown its
-- a bug: calls to 'getRemote' ('repGetRemote') should _always_ be in the
-- scope of 'repWithMirror'.
type SelectedMirror = MVar (Maybe URI)
-- | Get the selected mirror
--
-- Throws an exception if no mirror was selected (this would be a bug in the
-- client code).
--
-- NOTE: Cannot use 'withMVar' here, because the callback would be inside the
-- scope of the withMVar, and there might be further calls to 'withRemote' made
-- by the callback argument to 'withRemote', leading to deadlock.
getSelectedMirror :: SelectedMirror -> IO URI
getSelectedMirror selectedMirror = do
mBaseURI <- readMVar selectedMirror
case mBaseURI of
Nothing -> internalError "Internal error: no mirror selected"
Just baseURI -> return baseURI
-- | Get a file from the server
getRemote :: Throws SomeRemoteError
=> (URI -> RemoteConfig)
-> SelectedMirror
-> AttemptNr
-> RemoteFile fs typ
-> Verify (Some (HasFormat fs), RemoteTemp typ)
getRemote remoteConfig selectedMirror attemptNr remoteFile = do
baseURI <- liftIO $ getSelectedMirror selectedMirror
let cfg = remoteConfig baseURI
downloadMethod <- liftIO $ pickDownloadMethod cfg attemptNr remoteFile
getFile cfg attemptNr remoteFile downloadMethod
-- | HTTP options
--
-- We want to make sure caches don't transform files in any way (as this will
-- mess things up with respect to hashes etc). Additionally, after a validation
-- error we want to make sure caches get files upstream in case the validation
-- error was because the cache updated files out of order.
httpRequestHeaders :: RemoteConfig -> AttemptNr -> [HttpRequestHeader]
httpRequestHeaders RemoteConfig{..} attemptNr =
if attemptNr == 0 then defaultHeaders
else HttpRequestMaxAge0 : defaultHeaders
where
-- Headers we provide for _every_ attempt, first or not
defaultHeaders :: [HttpRequestHeader]
defaultHeaders = [HttpRequestNoTransform]
-- | Mirror selection
withMirror :: forall a.
HttpLib -- ^ HTTP client
-> SelectedMirror -- ^ MVar indicating currently mirror
-> (LogMessage -> IO ()) -- ^ Logger
-> [URI] -- ^ Out-of-band mirrors
-> RepoOpts -- ^ Repository options
-> Maybe [Mirror] -- ^ TUF mirrors
-> IO a -- ^ Callback
-> IO a
withMirror HttpLib{..}
selectedMirror
logger
oobMirrors
repoOpts
tufMirrors
callback
=
go orderedMirrors
where
go :: [URI] -> IO a
-- Empty list of mirrors is a bug
go [] = internalError "No mirrors configured"
-- If we only have a single mirror left, let exceptions be thrown up
go [m] = do
logger $ LogSelectedMirror (show m)
select m $ callback
-- Otherwise, catch exceptions and if any were thrown, try with different
-- mirror
go (m:ms) = do
logger $ LogSelectedMirror (show m)
catchChecked (select m callback) $ \ex -> do
logger $ LogMirrorFailed (show m) ex
go ms
-- TODO: We will want to make the construction of this list configurable.
orderedMirrors :: [URI]
orderedMirrors = nub $ concat [
oobMirrors
, if repoAllowAdditionalMirrors repoOpts
then maybe [] (map mirrorUrlBase) tufMirrors
else []
]
select :: URI -> IO a -> IO a
select uri =
bracket_ (modifyMVar_ selectedMirror $ \_ -> return $ Just uri)
(modifyMVar_ selectedMirror $ \_ -> return Nothing)
{-------------------------------------------------------------------------------
Download methods
-------------------------------------------------------------------------------}
-- | Download method (downloading or updating)
data DownloadMethod :: * -> * -> * where
-- Download this file (we never attempt to update this type of file)
NeverUpdated :: {
neverUpdatedFormat :: HasFormat fs f
} -> DownloadMethod fs typ
-- Download this file (we cannot update this file right now)
CannotUpdate :: {
cannotUpdateFormat :: HasFormat fs f
, cannotUpdateReason :: UpdateFailure
} -> DownloadMethod fs Binary
-- Attempt an (incremental) update of this file
Update :: {
updateFormat :: HasFormat fs f
, updateInfo :: Trusted FileInfo
, updateLocal :: Path Absolute
, updateTail :: Int54
} -> DownloadMethod fs Binary
--TODO: ^^ older haddock doesn't support GADT doc comments :-(
pickDownloadMethod :: forall fs typ. RemoteConfig
-> AttemptNr
-> RemoteFile fs typ
-> IO (DownloadMethod fs typ)
pickDownloadMethod RemoteConfig{..} attemptNr remoteFile =
case remoteFile of
RemoteTimestamp -> return $ NeverUpdated (HFZ FUn)
(RemoteRoot _) -> return $ NeverUpdated (HFZ FUn)
(RemoteSnapshot _) -> return $ NeverUpdated (HFZ FUn)
(RemoteMirrors _) -> return $ NeverUpdated (HFZ FUn)
(RemotePkgTarGz _ _) -> return $ NeverUpdated (HFZ FGz)
(RemoteIndex hasGz formats) -> multipleExitPoints $ do
-- Server must support @Range@ with a byte-range
rangeSupport <- checkServerCapability cfgCaps serverAcceptRangesBytes
unless rangeSupport $ exit $ CannotUpdate hasGz UpdateImpossibleUnsupported
-- We must already have a local file to be updated
mCachedIndex <- liftIO $ Cache.getCachedIndex cfgCache (hasFormatGet hasGz)
cachedIndex <- case mCachedIndex of
Nothing -> exit $ CannotUpdate hasGz UpdateImpossibleNoLocalCopy
Just fp -> return fp
-- We attempt an incremental update a maximum of 2 times
-- See 'UpdateFailedTwice' for details.
when (attemptNr >= 2) $ exit $ CannotUpdate hasGz UpdateFailedTwice
-- If all these checks pass try to do an incremental update.
return Update {
updateFormat = hasGz
, updateInfo = formatsLookup hasGz formats
, updateLocal = cachedIndex
, updateTail = 65536 -- max gzip block size
}
-- | Download the specified file using the given download method
getFile :: forall fs typ. Throws SomeRemoteError
=> RemoteConfig -- ^ Internal configuration
-> AttemptNr -- ^ Did a security check previously fail?
-> RemoteFile fs typ -- ^ File to get
-> DownloadMethod fs typ -- ^ Selected format
-> Verify (Some (HasFormat fs), RemoteTemp typ)
getFile cfg@RemoteConfig{..} attemptNr remoteFile method =
go method
where
go :: DownloadMethod fs typ -> Verify (Some (HasFormat fs), RemoteTemp typ)
go NeverUpdated{..} = do
cfgLogger $ LogDownloading remoteFile
download neverUpdatedFormat
go CannotUpdate{..} = do
cfgLogger $ LogCannotUpdate remoteFile cannotUpdateReason
cfgLogger $ LogDownloading remoteFile
download cannotUpdateFormat
go Update{..} = do
cfgLogger $ LogUpdating remoteFile
update updateFormat updateInfo updateLocal updateTail
headers :: [HttpRequestHeader]
headers = httpRequestHeaders cfg attemptNr
-- Get any file from the server, without using incremental updates
download :: HasFormat fs f -> Verify (Some (HasFormat fs), RemoteTemp typ)
download format = do
(tempPath, h) <- openTempFile (Cache.cacheRoot cfgCache) (uriTemplate uri)
liftIO $ do
httpGet headers uri $ \responseHeaders bodyReader -> do
updateServerCapabilities cfgCaps responseHeaders
execBodyReader targetPath sz h bodyReader
hClose h
cacheIfVerified format $ DownloadedWhole tempPath
where
targetPath = TargetPathRepo $ remoteRepoPath' cfgLayout remoteFile format
uri = formatsLookup format $ remoteFileURI cfgLayout cfgBase remoteFile
sz = formatsLookup format $ remoteFileSize remoteFile
-- Get a file incrementally
update :: (typ ~ Binary)
=> HasFormat fs f -- ^ Selected format
-> Trusted FileInfo -- ^ Expected info
-> Path Absolute -- ^ Location of cached file (after callback)
-> Int54 -- ^ How much of the tail to overwrite
-> Verify (Some (HasFormat fs), RemoteTemp typ)
update format info cachedFile fileTail = do
currentSz <- liftIO $ getFileSize cachedFile
let fileSz = fileLength' info
range = (0 `max` (currentSz - fileTail), fileSz)
range' = (fromIntegral (fst range), fromIntegral (snd range))
cacheRoot = Cache.cacheRoot cfgCache
(tempPath, h) <- openTempFile cacheRoot (uriTemplate uri)
statusCode <- liftIO $
httpGetRange headers uri range' $ \statusCode responseHeaders bodyReader -> do
updateServerCapabilities cfgCaps responseHeaders
let expectedSize =
case statusCode of
HttpStatus206PartialContent ->
FileSizeExact (snd range - fst range)
HttpStatus200OK ->
FileSizeExact fileSz
execBodyReader targetPath expectedSize h bodyReader
hClose h
return statusCode
let downloaded =
case statusCode of
HttpStatus206PartialContent ->
DownloadedDelta {
deltaTemp = tempPath
, deltaExisting = cachedFile
, deltaSeek = fst range
}
HttpStatus200OK ->
DownloadedWhole tempPath
cacheIfVerified format downloaded
where
targetPath = TargetPathRepo repoPath
uri = modifyUriPath cfgBase (`anchorRepoPathRemotely` repoPath)
repoPath = remoteRepoPath' cfgLayout remoteFile format
cacheIfVerified :: HasFormat fs f -> RemoteTemp typ
-> Verify (Some (HasFormat fs), RemoteTemp typ)
cacheIfVerified format remoteTemp = do
ifVerified $
Cache.cacheRemoteFile cfgCache
remoteTemp
(hasFormatGet format)
(mustCache remoteFile)
return (Some format, remoteTemp)
httpGetRange :: forall a. Throws SomeRemoteError
=> [HttpRequestHeader]
-> URI
-> (Int, Int)
-> (HttpStatus -> [HttpResponseHeader] -> BodyReader -> IO a)
-> IO a
HttpLib{..} = cfgHttpLib
{-------------------------------------------------------------------------------
Execute body reader
-------------------------------------------------------------------------------}
-- | Execute a body reader
--
-- TODO: Deal with minimum download rate.
execBodyReader :: Throws SomeRemoteError
=> TargetPath -- ^ File source (for error msgs only)
-> FileSize -- ^ Maximum file size
-> Handle -- ^ Handle to write data too
-> BodyReader -- ^ The action to give us blocks from the file
-> IO ()
execBodyReader file mlen h br = go 0
where
go :: Int54 -> IO ()
go sz = do
unless (sz `fileSizeWithinBounds` mlen) $
throwChecked $ SomeRemoteError $ FileTooLarge file mlen
bs <- br
if BS.null bs
then return ()
else BS.hPut h bs >> go (sz + fromIntegral (BS.length bs))
-- | The file we requested from the server was larger than expected
-- (potential endless data attack)
data FileTooLarge = FileTooLarge {
fileTooLargePath :: TargetPath
, fileTooLargeExpected :: FileSize
}
deriving (Typeable)
instance Pretty FileTooLarge where
pretty FileTooLarge{..} = concat [
"file returned by server too large: "
, pretty fileTooLargePath
, " (expected " ++ expected fileTooLargeExpected ++ " bytes)"
]
where
expected :: FileSize -> String
expected (FileSizeExact n) = "exactly " ++ show n
expected (FileSizeBound n) = "at most " ++ show n
#if MIN_VERSION_base(4,8,0)
deriving instance Show FileTooLarge
instance Exception FileTooLarge where displayException = pretty
#else
instance Exception FileTooLarge
instance Show FileTooLarge where show = pretty
#endif
{-------------------------------------------------------------------------------
Information about remote files
-------------------------------------------------------------------------------}
remoteFileURI :: RepoLayout -> URI -> RemoteFile fs typ -> Formats fs URI
remoteFileURI repoLayout baseURI = fmap aux . remoteRepoPath repoLayout
where
aux :: RepoPath -> URI
aux repoPath = modifyUriPath baseURI (`anchorRepoPathRemotely` repoPath)
-- | Extracting or estimating file sizes
remoteFileSize :: RemoteFile fs typ -> Formats fs FileSize
remoteFileSize (RemoteTimestamp) =
FsUn $ FileSizeBound fileSizeBoundTimestamp
remoteFileSize (RemoteRoot mLen) =
FsUn $ maybe (FileSizeBound fileSizeBoundRoot)
(FileSizeExact . fileLength')
mLen
remoteFileSize (RemoteSnapshot len) =
FsUn $ FileSizeExact (fileLength' len)
remoteFileSize (RemoteMirrors len) =
FsUn $ FileSizeExact (fileLength' len)
remoteFileSize (RemoteIndex _ lens) =
fmap (FileSizeExact . fileLength') lens
remoteFileSize (RemotePkgTarGz _pkgId len) =
FsGz $ FileSizeExact (fileLength' len)
-- | Bound on the size of the timestamp
--
-- This is intended as a permissive rather than tight bound.
--
-- The timestamp signed with a single key is 420 bytes; the signature makes up
-- just under 200 bytes of that. So even if the timestamp is signed with 10
-- keys it would still only be 2420 bytes. Doubling this amount, an upper bound
-- of 4kB should definitely be sufficient.
fileSizeBoundTimestamp :: Int54
fileSizeBoundTimestamp = 4096
-- | Bound on the size of the root
--
-- This is intended as a permissive rather than tight bound.
--
-- The variable parts of the root metadata are
--
-- * Signatures, each of which are about 200 bytes
-- * A key environment (mapping from key IDs to public keys), each is of
-- which is also about 200 bytes
-- * Mirrors, root, snapshot, targets, and timestamp role specifications.
-- These contains key IDs, each of which is about 80 bytes.
--
-- A skeleton root metadata is about 580 bytes. Allowing for
--
-- * 100 signatures
-- * 100 mirror keys, 1000 root keys, 100 snapshot keys, 1000 target keys,
-- 100 timestamp keys
-- * the corresponding 2300 entries in the key environment
--
-- We end up with a bound of about 665,000 bytes. Doubling this amount, an
-- upper bound of 2MB should definitely be sufficient.
fileSizeBoundRoot :: Int54
fileSizeBoundRoot = 2 * 1024 * 2014
{-------------------------------------------------------------------------------
Configuration
-------------------------------------------------------------------------------}
-- | Remote repository configuration
--
-- This is purely for internal convenience.
data RemoteConfig = RemoteConfig {
cfgLayout :: RepoLayout
, cfgHttpLib :: HttpLib
, cfgBase :: URI
, cfgCache :: Cache
, cfgCaps :: ServerCapabilities
, cfgLogger :: forall m. MonadIO m => LogMessage -> m ()
, cfgOpts :: RepoOpts
}
{-------------------------------------------------------------------------------
Auxiliary
-------------------------------------------------------------------------------}
-- | Template for the local file we use to download a URI to
uriTemplate :: URI -> String
uriTemplate = takeFileName . uriPath
fileLength' :: Trusted FileInfo -> Int54
fileLength' = fileLength . fileInfoLength . trusted
{-------------------------------------------------------------------------------
Files downloaded from the remote repository
-------------------------------------------------------------------------------}
data RemoteTemp :: * -> * where
DownloadedWhole :: {
wholeTemp :: Path Absolute
} -> RemoteTemp a
-- If we download only the delta, we record both the path to where the
-- "old" file is stored and the path to the temp file containing the delta.
-- Then:
--
-- When we verify the file, we need both of these paths if we compute
-- the hash from scratch, or only the path to the delta if we attempt
-- to compute the hash incrementally (TODO: incremental verification
-- not currently implemented).
--
-- When we copy a file over, we are additionally given a destination
-- path. In this case, we expect that destination path to be equal to
-- the path to the old file (and assert this to be the case).
DownloadedDelta :: {
deltaTemp :: Path Absolute
, deltaExisting :: Path Absolute
, deltaSeek :: Int54 -- ^ How much of the existing file to keep
} -> RemoteTemp Binary
--TODO: ^^ older haddock doesn't support GADT doc comments :-(
-- and add the '*' bullet points back in
instance Pretty (RemoteTemp typ) where
pretty DownloadedWhole{..} = intercalate " " $ [
"DownloadedWhole"
, pretty wholeTemp
]
pretty DownloadedDelta{..} = intercalate " " $ [
"DownloadedDelta"
, pretty deltaTemp
, pretty deltaExisting
, show deltaSeek
]
instance DownloadedFile RemoteTemp where
downloadedVerify = verifyRemoteFile
downloadedRead = readLazyByteString . wholeTemp
downloadedCopyTo = \f dest ->
case f of
DownloadedWhole{..} ->
renameFile wholeTemp dest
DownloadedDelta{..} -> do
unless (deltaExisting == dest) $
throwIO $ userError "Assertion failure: deltaExisting /= dest"
-- We need ReadWriteMode in order to be able to seek
withFile deltaExisting ReadWriteMode $ \h -> do
hSeek h AbsoluteSeek (fromIntegral deltaSeek)
BS.L.hPut h =<< readLazyByteString deltaTemp
-- | Verify a file downloaded from the remote repository
--
-- TODO: This currently still computes the hash for the whole file. If we cached
-- the state of the hash generator we could compute the hash incrementally.
-- However, profiling suggests that this would only be a minor improvement.
verifyRemoteFile :: RemoteTemp typ -> Trusted FileInfo -> IO Bool
verifyRemoteFile remoteTemp trustedInfo = do
sz <- FileLength <$> remoteSize remoteTemp
if sz /= fileInfoLength (trusted trustedInfo)
then return False
else withRemoteBS remoteTemp $
compareTrustedFileInfo (trusted trustedInfo) . fileInfo
where
remoteSize :: RemoteTemp typ -> IO Int54
remoteSize DownloadedWhole{..} = getFileSize wholeTemp
remoteSize DownloadedDelta{..} = do
deltaSize <- getFileSize deltaTemp
return $ deltaSeek + deltaSize
-- It is important that we close the file handles when we're done
-- (esp. since we may not read the whole file)
withRemoteBS :: RemoteTemp typ -> (BS.L.ByteString -> Bool) -> IO Bool
withRemoteBS DownloadedWhole{..} callback = do
withFile wholeTemp ReadMode $ \h -> do
bs <- BS.L.hGetContents h
evaluate $ callback bs
withRemoteBS DownloadedDelta{..} callback =
withFile deltaExisting ReadMode $ \hExisting ->
withFile deltaTemp ReadMode $ \hTemp -> do
existing <- BS.L.hGetContents hExisting
temp <- BS.L.hGetContents hTemp
evaluate $ callback $ BS.L.concat [
BS.L.take (fromIntegral deltaSeek) existing
, temp
]
hackage-security-0.6.2.4/src/Hackage/Security/Client/Verify.hs 0000644 0000000 0000000 00000007466 07346545000 022256 0 ustar 00 0000000 0000000 module Hackage.Security.Client.Verify (
-- * Verification monad
Verify -- opaque
, runVerify
, acquire
, ifVerified
-- * Specific resources
, openTempFile
-- * Re-exports
, liftIO
) where
import MyPrelude
import Control.Exception
import Control.Monad (join, void)
import Control.Monad.IO.Class (MonadIO, liftIO)
import Control.Monad.Reader (ReaderT, runReaderT, ask)
import Data.IORef
import Hackage.Security.Util.IO
import Hackage.Security.Util.Path
{-------------------------------------------------------------------------------
Verification monad
-------------------------------------------------------------------------------}
type Finaliser = IO ()
type Cleanup = IO ()
-- | Verification monad
--
-- The verification monad is similar to 'ResourceT' in intent, in that we can
-- register handlers to be run to release resources. Unlike 'ResourceT',
-- however, we maintain _two_ handlers: a cleanup handler which is run whether
-- or not verification succeeds, and a finalisation handler which is run only if
-- verification succeeds.
--
-- * Cleanup handlers are registered using 'acquire', and are guaranteed to run
-- just before the computation terminates (after the finalisation handler).
-- * The finalisation handlers are run only when verification succeeds, and can
-- be registered with 'ifVerified'. Finalisation can be used for instance to
-- update the local cache (which should only happen if verification is
-- successful).
newtype Verify a = Verify {
unVerify :: ReaderT (IORef Cleanup, IORef Finaliser) IO a
}
deriving (Functor, Applicative, Monad, MonadIO)
-- | Run an action in the 'Verify' monad
runVerify :: (Finaliser -> Finaliser) -> Verify a -> IO a
runVerify modifyFinaliser v = do
rCleanup <- newIORef $ return ()
rFinaliser <- newIORef $ return ()
mask $ \restore -> do
ma <- try $ restore $ runReaderT (unVerify v) (rCleanup, rFinaliser)
case ma of
Left ex -> do join $ readIORef rCleanup
throwIO (ex :: SomeException)
Right a -> do modifyFinaliser $ join $ readIORef rFinaliser
join $ readIORef rCleanup
return a
-- | Acquire a resource and register the corresponding cleanup handler
--
-- NOTE: Resource acquisition happens with exceptions masked. If it is important
-- that the resource acquistion can be timed out (or receive other kinds of
-- asynchronous exceptions), you will need to use an interruptible operation.
-- See for
-- details.
acquire :: IO a -> (a -> IO ()) -> Verify a
acquire get release = Verify $ do
(rCleanup, _rFinaliser) <- ask
liftIO $ mask_ $ do
a <- liftIO get
modifyIORef rCleanup (>> release a)
return a
-- | Register an action to be run only if verification succeeds
ifVerified :: IO () -> Verify ()
ifVerified handler = Verify $ do
(_rCleanup, rFinaliser) <- ask
liftIO $ modifyIORef rFinaliser (>> handler)
{-------------------------------------------------------------------------------
Specific resources
-------------------------------------------------------------------------------}
-- | Create a short-lived temporary file
--
-- Creates the directory where the temp file should live if it does not exist.
openTempFile :: FsRoot root
=> Path root -- ^ Temp directory
-> String -- ^ Template
-> Verify (Path Absolute, Handle)
openTempFile tmpDir template =
acquire createTempFile closeAndDelete
where
createTempFile :: IO (Path Absolute, Handle)
createTempFile = do
createDirectoryIfMissing True tmpDir
openTempFile' tmpDir template
closeAndDelete :: (Path Absolute, Handle) -> IO ()
closeAndDelete (fp, h) = do
hClose h
void $ handleDoesNotExist $ removeFile fp
hackage-security-0.6.2.4/src/Hackage/Security/JSON.hs 0000644 0000000 0000000 00000026766 07346545000 020351 0 ustar 00 0000000 0000000 -- | Hackage-specific wrappers around the Util.JSON module
{-# LANGUAGE CPP #-}
module Hackage.Security.JSON (
-- * Deserialization errors
DeserializationError(..)
, validate
, verifyType
-- * MonadKeys
, MonadKeys(..)
, addKeys
, withKeys
, lookupKey
, readKeyAsId
-- * Reader monads
, ReadJSON_Keys_Layout
, ReadJSON_Keys_NoLayout
, ReadJSON_NoKeys_NoLayout
, runReadJSON_Keys_Layout
, runReadJSON_Keys_NoLayout
, runReadJSON_NoKeys_NoLayout
-- ** Utility
, parseJSON_Keys_Layout
, parseJSON_Keys_NoLayout
, parseJSON_NoKeys_NoLayout
, readJSON_Keys_Layout
, readJSON_Keys_NoLayout
, readJSON_NoKeys_NoLayout
-- * Writing
, WriteJSON
, runWriteJSON
-- ** Utility
, renderJSON
, renderJSON_NoLayout
, writeJSON
, writeJSON_NoLayout
, writeKeyAsId
-- * Re-exports
, module Hackage.Security.Util.JSON
) where
import MyPrelude
import Control.Arrow (first, second)
import Control.Exception
import Control.Monad (unless, liftM)
import Control.Monad.Except (MonadError, Except, ExceptT, runExcept, runExceptT, throwError)
import Control.Monad.Reader (MonadReader, Reader, runReader, local, ask)
import Data.Functor.Identity
import Data.Typeable (Typeable)
import qualified Data.ByteString.Lazy as BS.L
import Hackage.Security.Key
import Hackage.Security.Key.Env (KeyEnv)
import Hackage.Security.TUF.Layout.Repo
import Hackage.Security.Util.JSON
import Hackage.Security.Util.Path
import Hackage.Security.Util.Pretty
import Hackage.Security.Util.Some
import Text.JSON.Canonical
import qualified Hackage.Security.Key.Env as KeyEnv
{-------------------------------------------------------------------------------
Deserialization errors
-------------------------------------------------------------------------------}
data DeserializationError =
-- | Malformed JSON has syntax errors in the JSON itself
-- (i.e., we cannot even parse it to a JSValue)
DeserializationErrorMalformed String
-- | Invalid JSON has valid syntax but invalid structure
--
-- The string gives a hint about what we expected instead
| DeserializationErrorSchema String
-- | The JSON file contains a key ID of an unknown key
| DeserializationErrorUnknownKey KeyId
-- | Some verification step failed
| DeserializationErrorValidation String
-- | Wrong file type
--
-- Records actual and expected types.
| DeserializationErrorFileType String String
deriving (Typeable)
#if MIN_VERSION_base(4,8,0)
deriving instance Show DeserializationError
instance Exception DeserializationError where displayException = pretty
#else
instance Show DeserializationError where show = pretty
instance Exception DeserializationError
#endif
instance Pretty DeserializationError where
pretty (DeserializationErrorMalformed str) =
"Malformed: " ++ str
pretty (DeserializationErrorSchema str) =
"Schema error: " ++ str
pretty (DeserializationErrorUnknownKey kId) =
"Unknown key: " ++ keyIdString kId
pretty (DeserializationErrorValidation str) =
"Invalid: " ++ str
pretty (DeserializationErrorFileType actualType expectedType) =
"Expected file of type " ++ show expectedType
++ " but got file of type " ++ show actualType
validate :: MonadError DeserializationError m => String -> Bool -> m ()
validate _ True = return ()
validate msg False = throwError $ DeserializationErrorValidation msg
verifyType :: (ReportSchemaErrors m, MonadError DeserializationError m)
=> JSValue -> String -> m ()
verifyType enc expectedType = do
actualType <- fromJSField enc "_type"
unless (actualType == expectedType) $
throwError $ DeserializationErrorFileType actualType expectedType
{-------------------------------------------------------------------------------
Access to keys
-------------------------------------------------------------------------------}
-- | MonadReader-like monad, specialized to key environments
class (ReportSchemaErrors m, MonadError DeserializationError m) => MonadKeys m where
localKeys :: (KeyEnv -> KeyEnv) -> m a -> m a
askKeys :: m KeyEnv
readKeyAsId :: MonadKeys m => JSValue -> m (Some PublicKey)
readKeyAsId (JSString kId) = lookupKey (KeyId kId)
readKeyAsId val = expected' "key ID" val
addKeys :: MonadKeys m => KeyEnv -> m a -> m a
addKeys keys = localKeys (KeyEnv.union keys)
withKeys :: MonadKeys m => KeyEnv -> m a -> m a
withKeys keys = localKeys (const keys)
lookupKey :: MonadKeys m => KeyId -> m (Some PublicKey)
lookupKey kId = do
keyEnv <- askKeys
case KeyEnv.lookup kId keyEnv of
Just key -> return key
Nothing -> throwError $ DeserializationErrorUnknownKey kId
{-------------------------------------------------------------------------------
Reading
-------------------------------------------------------------------------------}
newtype ReadJSON_Keys_Layout a = ReadJSON_Keys_Layout {
unReadJSON_Keys_Layout :: ExceptT DeserializationError (Reader (RepoLayout, KeyEnv)) a
}
deriving ( Functor
, Applicative
, Monad
, MonadError DeserializationError
)
newtype ReadJSON_Keys_NoLayout a = ReadJSON_Keys_NoLayout {
unReadJSON_Keys_NoLayout :: ExceptT DeserializationError (Reader KeyEnv) a
}
deriving ( Functor
, Applicative
, Monad
, MonadError DeserializationError
)
newtype ReadJSON_NoKeys_NoLayout a = ReadJSON_NoKeys_NoLayout {
unReadJSON_NoKeys_NoLayout :: Except DeserializationError a
}
deriving ( Functor
, Applicative
, Monad
, MonadError DeserializationError
)
instance ReportSchemaErrors ReadJSON_Keys_Layout where
expected str mgot = throwError $ expectedError str mgot
instance ReportSchemaErrors ReadJSON_Keys_NoLayout where
expected str mgot = throwError $ expectedError str mgot
instance ReportSchemaErrors ReadJSON_NoKeys_NoLayout where
expected str mgot = throwError $ expectedError str mgot
expectedError :: Expected -> Maybe Got -> DeserializationError
expectedError str mgot = DeserializationErrorSchema msg
where
msg = case mgot of
Nothing -> "Expected " ++ str
Just got -> "Expected " ++ str ++ " but got " ++ got
instance MonadReader RepoLayout ReadJSON_Keys_Layout where
ask = ReadJSON_Keys_Layout $ fst `liftM` ask
local f act = ReadJSON_Keys_Layout $ local (first f) act'
where
act' = unReadJSON_Keys_Layout act
instance MonadKeys ReadJSON_Keys_Layout where
askKeys = ReadJSON_Keys_Layout $ snd `liftM` ask
localKeys f act = ReadJSON_Keys_Layout $ local (second f) act'
where
act' = unReadJSON_Keys_Layout act
instance MonadKeys ReadJSON_Keys_NoLayout where
askKeys = ReadJSON_Keys_NoLayout $ ask
localKeys f act = ReadJSON_Keys_NoLayout $ local f act'
where
act' = unReadJSON_Keys_NoLayout act
runReadJSON_Keys_Layout :: KeyEnv
-> RepoLayout
-> ReadJSON_Keys_Layout a
-> Either DeserializationError a
runReadJSON_Keys_Layout keyEnv repoLayout act =
runReader (runExceptT (unReadJSON_Keys_Layout act)) (repoLayout, keyEnv)
runReadJSON_Keys_NoLayout :: KeyEnv
-> ReadJSON_Keys_NoLayout a
-> Either DeserializationError a
runReadJSON_Keys_NoLayout keyEnv act =
runReader (runExceptT (unReadJSON_Keys_NoLayout act)) keyEnv
runReadJSON_NoKeys_NoLayout :: ReadJSON_NoKeys_NoLayout a
-> Either DeserializationError a
runReadJSON_NoKeys_NoLayout act =
runExcept (unReadJSON_NoKeys_NoLayout act)
{-------------------------------------------------------------------------------
Utility
-------------------------------------------------------------------------------}
parseJSON_Keys_Layout :: FromJSON ReadJSON_Keys_Layout a
=> KeyEnv
-> RepoLayout
-> BS.L.ByteString
-> Either DeserializationError a
parseJSON_Keys_Layout keyEnv repoLayout bs =
case parseCanonicalJSON bs of
Left err -> Left (DeserializationErrorMalformed err)
Right val -> runReadJSON_Keys_Layout keyEnv repoLayout (fromJSON val)
parseJSON_Keys_NoLayout :: FromJSON ReadJSON_Keys_NoLayout a
=> KeyEnv
-> BS.L.ByteString
-> Either DeserializationError a
parseJSON_Keys_NoLayout keyEnv bs =
case parseCanonicalJSON bs of
Left err -> Left (DeserializationErrorMalformed err)
Right val -> runReadJSON_Keys_NoLayout keyEnv (fromJSON val)
parseJSON_NoKeys_NoLayout :: FromJSON ReadJSON_NoKeys_NoLayout a
=> BS.L.ByteString
-> Either DeserializationError a
parseJSON_NoKeys_NoLayout bs =
case parseCanonicalJSON bs of
Left err -> Left (DeserializationErrorMalformed err)
Right val -> runReadJSON_NoKeys_NoLayout (fromJSON val)
readJSON_Keys_Layout :: ( FsRoot root
, FromJSON ReadJSON_Keys_Layout a
)
=> KeyEnv
-> RepoLayout
-> Path root
-> IO (Either DeserializationError a)
readJSON_Keys_Layout keyEnv repoLayout fp = do
withFile fp ReadMode $ \h -> do
bs <- BS.L.hGetContents h
evaluate $ parseJSON_Keys_Layout keyEnv repoLayout bs
readJSON_Keys_NoLayout :: ( FsRoot root
, FromJSON ReadJSON_Keys_NoLayout a
)
=> KeyEnv
-> Path root
-> IO (Either DeserializationError a)
readJSON_Keys_NoLayout keyEnv fp = do
withFile fp ReadMode $ \h -> do
bs <- BS.L.hGetContents h
evaluate $ parseJSON_Keys_NoLayout keyEnv bs
readJSON_NoKeys_NoLayout :: ( FsRoot root
, FromJSON ReadJSON_NoKeys_NoLayout a
)
=> Path root
-> IO (Either DeserializationError a)
readJSON_NoKeys_NoLayout fp = do
withFile fp ReadMode $ \h -> do
bs <- BS.L.hGetContents h
evaluate $ parseJSON_NoKeys_NoLayout bs
{-------------------------------------------------------------------------------
Writing
-------------------------------------------------------------------------------}
newtype WriteJSON a = WriteJSON {
unWriteJSON :: Reader RepoLayout a
}
deriving ( Functor
, Applicative
, Monad
, MonadReader RepoLayout
)
runWriteJSON :: RepoLayout -> WriteJSON a -> a
runWriteJSON repoLayout act = runReader (unWriteJSON act) repoLayout
{-------------------------------------------------------------------------------
Writing: Utility
-------------------------------------------------------------------------------}
-- | Render to canonical JSON format
renderJSON :: ToJSON WriteJSON a => RepoLayout -> a -> BS.L.ByteString
renderJSON repoLayout = renderCanonicalJSON . runWriteJSON repoLayout . toJSON
-- | Variation on 'renderJSON' for files that don't require the repo layout
renderJSON_NoLayout :: ToJSON Identity a => a -> BS.L.ByteString
renderJSON_NoLayout = renderCanonicalJSON . runIdentity . toJSON
writeJSON :: ToJSON WriteJSON a => RepoLayout -> Path Absolute -> a -> IO ()
writeJSON repoLayout fp = writeLazyByteString fp . renderJSON repoLayout
writeJSON_NoLayout :: ToJSON Identity a => Path Absolute -> a -> IO ()
writeJSON_NoLayout fp = writeLazyByteString fp . renderJSON_NoLayout
writeKeyAsId :: Some PublicKey -> JSValue
writeKeyAsId = JSString . keyIdString . someKeyId
hackage-security-0.6.2.4/src/Hackage/Security/Key.hs 0000644 0000000 0000000 00000023411 07346545000 020310 0 ustar 00 0000000 0000000 {-# LANGUAGE CPP #-}
module Hackage.Security.Key (
-- * Key types
Ed25519
-- * Types abstracting over key types
, Key(..)
, PublicKey(..)
, PrivateKey(..)
-- * Key types in isolation
, KeyType(..)
-- * Hiding key types
, somePublicKey
, somePublicKeyType
, someKeyId
-- * Operations on keys
, publicKey
, privateKey
, createKey
, createKey'
-- * Key IDs
, KeyId(..)
, HasKeyId(..)
-- * Signing
, sign
, verify
) where
import MyPrelude
import Control.Monad
import Data.Functor.Identity
import Data.Typeable (Typeable)
import Text.JSON.Canonical
import qualified Crypto.Hash.SHA256 as SHA256
import qualified Crypto.Sign.Ed25519 as Ed25519
import qualified Data.ByteString as BS
import qualified Data.ByteString.Char8 as BS.C8
import qualified Data.ByteString.Base16 as Base16
import qualified Data.ByteString.Lazy as BS.L
#if !MIN_VERSION_base(4,7,0)
import qualified Data.Typeable as Typeable
#endif
import Hackage.Security.Util.JSON
import Hackage.Security.Util.Some
import Hackage.Security.Util.TypedEmbedded
import qualified Hackage.Security.Util.Base64 as B64
{-------------------------------------------------------------------------------
Generalization over key types
-------------------------------------------------------------------------------}
data Ed25519
data Key a where
KeyEd25519 :: Ed25519.PublicKey -> Ed25519.SecretKey -> Key Ed25519
deriving (Typeable)
data PublicKey a where
PublicKeyEd25519 :: Ed25519.PublicKey -> PublicKey Ed25519
deriving (Typeable)
data PrivateKey a where
PrivateKeyEd25519 :: Ed25519.SecretKey -> PrivateKey Ed25519
deriving (Typeable)
deriving instance Show (Key typ)
deriving instance Show (PublicKey typ)
deriving instance Show (PrivateKey typ)
deriving instance Eq (Key typ)
deriving instance Eq (PublicKey typ)
deriving instance Eq (PrivateKey typ)
instance SomeShow Key where someShow = DictShow
instance SomeShow PublicKey where someShow = DictShow
instance SomeShow PrivateKey where someShow = DictShow
instance SomeEq Key where someEq = DictEq
instance SomeEq PublicKey where someEq = DictEq
instance SomeEq PrivateKey where someEq = DictEq
publicKey :: Key a -> PublicKey a
publicKey (KeyEd25519 pub _pri) = PublicKeyEd25519 pub
privateKey :: Key a -> PrivateKey a
privateKey (KeyEd25519 _pub pri) = PrivateKeyEd25519 pri
{-------------------------------------------------------------------------------
Sometimes it's useful to talk about the type of a key independent of the key
-------------------------------------------------------------------------------}
data KeyType typ where
KeyTypeEd25519 :: KeyType Ed25519
deriving instance Show (KeyType typ)
deriving instance Eq (KeyType typ)
instance SomeShow KeyType where someShow = DictShow
instance SomeEq KeyType where someEq = DictEq
instance Unify KeyType where
unify KeyTypeEd25519 KeyTypeEd25519 = Just Refl
type instance TypeOf Key = KeyType
type instance TypeOf PublicKey = KeyType
type instance TypeOf PrivateKey = KeyType
instance Typed Key where
typeOf (KeyEd25519 _ _) = KeyTypeEd25519
instance Typed PublicKey where
typeOf (PublicKeyEd25519 _) = KeyTypeEd25519
instance Typed PrivateKey where
typeOf (PrivateKeyEd25519 _) = KeyTypeEd25519
{-------------------------------------------------------------------------------
We don't always know the key type
-------------------------------------------------------------------------------}
somePublicKey :: Some Key -> Some PublicKey
somePublicKey (Some key) = Some (publicKey key)
somePublicKeyType :: Some PublicKey -> Some KeyType
somePublicKeyType (Some pub) = Some (typeOf pub)
someKeyId :: HasKeyId key => Some key -> KeyId
someKeyId (Some a) = keyId a
{-------------------------------------------------------------------------------
Creating keys
-------------------------------------------------------------------------------}
createKey :: KeyType key -> IO (Key key)
createKey KeyTypeEd25519 = uncurry KeyEd25519 <$> Ed25519.createKeypair
createKey' :: KeyType key -> IO (Some Key)
createKey' = liftM Some . createKey
{-------------------------------------------------------------------------------
Key IDs
-------------------------------------------------------------------------------}
-- | The key ID of a key, by definition, is the hexdigest of the SHA-256 hash of
-- the canonical JSON form of the key where the private object key is excluded.
--
-- NOTE: The FromJSON and ToJSON instances for KeyId are intentionally omitted.
-- Use writeKeyAsId instead.
newtype KeyId = KeyId { keyIdString :: String }
deriving (Show, Eq, Ord)
instance Monad m => ToObjectKey m KeyId where
toObjectKey = return . keyIdString
instance Monad m => FromObjectKey m KeyId where
fromObjectKey = return . Just . KeyId
-- | Compute the key ID of a key
class HasKeyId key where
keyId :: key typ -> KeyId
instance HasKeyId PublicKey where
keyId = KeyId
. BS.C8.unpack
. Base16.encode
. SHA256.hashlazy
. renderCanonicalJSON
. runIdentity
. toJSON
instance HasKeyId Key where
keyId = keyId . publicKey
{-------------------------------------------------------------------------------
Signing
-------------------------------------------------------------------------------}
-- | Sign a bytestring and return the signature
--
-- TODO: It is unfortunate that we have to convert to a strict bytestring for
-- ed25519
sign :: PrivateKey typ -> BS.L.ByteString -> BS.ByteString
sign (PrivateKeyEd25519 pri) =
Ed25519.unSignature . dsign pri . BS.concat . BS.L.toChunks
where
#if MIN_VERSION_ed25519(0,0,4)
dsign = Ed25519.dsign
#else
dsign = Ed25519.sign'
#endif
verify :: PublicKey typ -> BS.L.ByteString -> BS.ByteString -> Bool
verify (PublicKeyEd25519 pub) inp sig =
dverify pub (BS.concat $ BS.L.toChunks inp) (Ed25519.Signature sig)
where
#if MIN_VERSION_ed25519(0,0,4)
dverify = Ed25519.dverify
#else
dverify = Ed25519.verify'
#endif
{-------------------------------------------------------------------------------
JSON encoding and decoding
-------------------------------------------------------------------------------}
instance Monad m => ToJSON m (Key typ) where
toJSON key = case key of
KeyEd25519 pub pri ->
enc "ed25519" (Ed25519.unPublicKey pub) (Ed25519.unSecretKey pri)
where
enc :: String -> BS.ByteString -> BS.ByteString -> m JSValue
enc tag pub pri = mkObject [
("keytype", return $ JSString tag)
, ("keyval", mkObject [
("public", toJSON (B64.fromByteString pub))
, ("private", toJSON (B64.fromByteString pri))
])
]
instance ReportSchemaErrors m => FromJSON m (Some Key) where
fromJSON enc = do
(tag, pub, pri) <- dec enc
case tag of
"ed25519" -> return . Some $
KeyEd25519 (Ed25519.PublicKey pub) (Ed25519.SecretKey pri)
_otherwise ->
expected "valid key type" (Just tag)
where
dec :: JSValue -> m (String, BS.ByteString, BS.ByteString)
dec obj = do
tag <- fromJSField obj "keytype"
val <- fromJSField obj "keyval"
pub <- fromJSField val "public"
pri <- fromJSField val "private"
return (tag, B64.toByteString pub, B64.toByteString pri)
instance Monad m => ToJSON m (PublicKey typ) where
toJSON key = case key of
PublicKeyEd25519 pub ->
enc "ed25519" (Ed25519.unPublicKey pub)
where
enc :: String -> BS.ByteString -> m JSValue
enc tag pub = mkObject [
("keytype", return $ JSString tag)
, ("keyval", mkObject [
("public", toJSON (B64.fromByteString pub))
])
]
instance Monad m => ToJSON m (Some Key) where toJSON (Some a) = toJSON a
instance Monad m => ToJSON m (Some PublicKey) where toJSON (Some a) = toJSON a
instance Monad m => ToJSON m (Some KeyType) where toJSON (Some a) = toJSON a
instance ReportSchemaErrors m => FromJSON m (Some PublicKey) where
fromJSON enc = do
(tag, pub) <- dec enc
case tag of
"ed25519" -> return . Some $
PublicKeyEd25519 (Ed25519.PublicKey pub)
_otherwise ->
expected "valid key type" (Just tag)
where
dec :: JSValue -> m (String, BS.ByteString)
dec obj = do
tag <- fromJSField obj "keytype"
val <- fromJSField obj "keyval"
pub <- fromJSField val "public"
return (tag, B64.toByteString pub)
instance Monad m => ToJSON m (KeyType typ) where
toJSON KeyTypeEd25519 = return $ JSString "ed25519"
instance ReportSchemaErrors m => FromJSON m (Some KeyType) where
fromJSON enc = do
tag <- fromJSON enc
case tag of
"ed25519" -> return . Some $ KeyTypeEd25519
_otherwise -> expected "valid key type" (Just tag)
{-------------------------------------------------------------------------------
Orphans
Pre-7.8 (base 4.7) we cannot have Typeable instance for higher-kinded types.
Instead, here we provide some instance for specific instantiations.
-------------------------------------------------------------------------------}
#if !MIN_VERSION_base(4,7,0)
tyConKey, tyConPublicKey, tyConPrivateKey :: Typeable.TyCon
tyConKey = Typeable.mkTyCon3 "hackage-security" "Hackage.Security.Key" "Key"
tyConPublicKey = Typeable.mkTyCon3 "hackage-security" "Hackage.Security.Key" "PublicKey"
tyConPrivateKey = Typeable.mkTyCon3 "hackage-security" "Hackage.Security.Key" "PrivateKey"
instance Typeable (Some Key) where
typeOf _ = Typeable.mkTyConApp tyConSome [Typeable.mkTyConApp tyConKey []]
instance Typeable (Some PublicKey) where
typeOf _ = Typeable.mkTyConApp tyConSome [Typeable.mkTyConApp tyConPublicKey []]
instance Typeable (Some PrivateKey) where
typeOf _ = Typeable.mkTyConApp tyConSome [Typeable.mkTyConApp tyConPrivateKey []]
#endif
hackage-security-0.6.2.4/src/Hackage/Security/Key/ 0000755 0000000 0000000 00000000000 07346545000 017753 5 ustar 00 0000000 0000000 hackage-security-0.6.2.4/src/Hackage/Security/Key/Env.hs 0000644 0000000 0000000 00000005337 07346545000 021047 0 ustar 00 0000000 0000000 module Hackage.Security.Key.Env (
KeyEnv -- opaque
, keyEnvMap
-- * Convenience constructors
, fromPublicKeys
, fromKeys
-- * The usual accessors
, empty
, null
, insert
, lookup
, union
) where
import MyPrelude hiding (lookup, null)
import Control.Monad
import Data.Map (Map)
import qualified Data.Map as Map
import Hackage.Security.Key
import Hackage.Security.Util.JSON
import Hackage.Security.Util.Some
{-------------------------------------------------------------------------------
Main datatype
-------------------------------------------------------------------------------}
-- | A key environment is a mapping from key IDs to the corresponding keys.
--
-- It should satisfy the invariant that these key IDs actually match the keys;
-- see 'checkKeyEnvInvariant'.
newtype KeyEnv = KeyEnv {
keyEnvMap :: Map KeyId (Some PublicKey)
}
deriving (Show)
-- | Verify that each key ID is mapped to a key with that ID
checkKeyEnvInvariant :: KeyEnv -> Bool
checkKeyEnvInvariant = all (uncurry go) . Map.toList . keyEnvMap
where
go :: KeyId -> Some PublicKey -> Bool
go kId key = kId == someKeyId key
{-------------------------------------------------------------------------------
Convenience constructors
-------------------------------------------------------------------------------}
fromPublicKeys :: [Some PublicKey] -> KeyEnv
fromPublicKeys = KeyEnv . Map.fromList . map aux
where
aux :: Some PublicKey -> (KeyId, Some PublicKey)
aux pub = (someKeyId pub, pub)
fromKeys :: [Some Key] -> KeyEnv
fromKeys = fromPublicKeys . map somePublicKey
{-------------------------------------------------------------------------------
The usual accessors
-------------------------------------------------------------------------------}
empty :: KeyEnv
empty = KeyEnv Map.empty
null :: KeyEnv -> Bool
null (KeyEnv env) = Map.null env
insert :: Some PublicKey -> KeyEnv -> KeyEnv
insert key (KeyEnv env) = KeyEnv $ Map.insert (someKeyId key) key env
lookup :: KeyId -> KeyEnv -> Maybe (Some PublicKey)
lookup kId (KeyEnv env) = Map.lookup kId env
union :: KeyEnv -> KeyEnv -> KeyEnv
union (KeyEnv env) (KeyEnv env') = KeyEnv (env `Map.union` env')
{-------------------------------------------------------------------------------
JSON
-------------------------------------------------------------------------------}
instance Monad m => ToJSON m KeyEnv where
toJSON (KeyEnv keyEnv) = toJSON keyEnv
instance ReportSchemaErrors m => FromJSON m KeyEnv where
fromJSON enc = do
keyEnv <- KeyEnv <$> fromJSON enc
-- We should really use 'validate', but that causes module import cycles.
-- Sigh.
unless (checkKeyEnvInvariant keyEnv) $
expected "valid key environment" Nothing
return keyEnv
hackage-security-0.6.2.4/src/Hackage/Security/Server.hs 0000644 0000000 0000000 00000001317 07346545000 021027 0 ustar 00 0000000 0000000 -- | Main entry point into the Hackage Security framework for clients
module Hackage.Security.Server (
-- * Re-exports
module Hackage.Security.JSON
, module Hackage.Security.Key
, module Hackage.Security.TUF
) where
import Hackage.Security.JSON (
ToJSON(..)
, FromJSON(..)
, DeserializationError(..)
, ReadJSON_Keys_Layout
, ReadJSON_Keys_NoLayout
, ReadJSON_NoKeys_NoLayout
, parseJSON_Keys_Layout
, parseJSON_Keys_NoLayout
, parseJSON_NoKeys_NoLayout
, readJSON_Keys_Layout
, readJSON_Keys_NoLayout
, readJSON_NoKeys_NoLayout
, WriteJSON
, renderJSON
, renderJSON_NoLayout
, writeJSON
, writeJSON_NoLayout
)
import Hackage.Security.Key
import Hackage.Security.TUF
hackage-security-0.6.2.4/src/Hackage/Security/TUF.hs 0000644 0000000 0000000 00000002557 07346545000 020226 0 ustar 00 0000000 0000000 -- | Export all the TUF datatypes
module Hackage.Security.TUF (
module Hackage.Security.TUF.Common
, module Hackage.Security.TUF.FileInfo
, module Hackage.Security.TUF.FileMap
, module Hackage.Security.TUF.Header
, module Hackage.Security.TUF.Layout.Cache
, module Hackage.Security.TUF.Layout.Index
, module Hackage.Security.TUF.Layout.Repo
, module Hackage.Security.TUF.Mirrors
, module Hackage.Security.TUF.Paths
-- , module Hackage.Security.TUF.Patterns
, module Hackage.Security.TUF.Root
, module Hackage.Security.TUF.Signed
, module Hackage.Security.TUF.Snapshot
, module Hackage.Security.TUF.Targets
, module Hackage.Security.TUF.Timestamp
) where
import Hackage.Security.TUF.Common
import Hackage.Security.TUF.FileInfo
import Hackage.Security.TUF.Header
import Hackage.Security.TUF.Layout.Cache
import Hackage.Security.TUF.Layout.Index
import Hackage.Security.TUF.Layout.Repo
import Hackage.Security.TUF.Mirrors
-- import Hackage.Security.TUF.Patterns
import Hackage.Security.TUF.Paths
import Hackage.Security.TUF.Root
import Hackage.Security.TUF.Signed
import Hackage.Security.TUF.Snapshot
import Hackage.Security.TUF.Targets
import Hackage.Security.TUF.Timestamp
-- FileMap is intended for qualified imports, so we only export a subset
import Hackage.Security.TUF.FileMap (
FileMap
, TargetPath(..)
, FileChange(..)
, fileMapChanges
)
hackage-security-0.6.2.4/src/Hackage/Security/TUF/ 0000755 0000000 0000000 00000000000 07346545000 017661 5 ustar 00 0000000 0000000 hackage-security-0.6.2.4/src/Hackage/Security/TUF/Common.hs 0000644 0000000 0000000 00000003121 07346545000 021442 0 ustar 00 0000000 0000000 -- | Simple type wrappers
module Hackage.Security.TUF.Common (
-- * Types
FileLength(..)
, Hash(..)
, KeyThreshold(..)
) where
import MyPrelude
import Hackage.Security.JSON
{-------------------------------------------------------------------------------
Simple types
-------------------------------------------------------------------------------}
-- | File length
--
-- Having verified file length information means we can protect against
-- endless data attacks and similar.
newtype FileLength = FileLength { fileLength :: Int54 }
deriving (Eq, Ord, Show)
-- | Key threshold
--
-- The key threshold is the minimum number of keys a document must be signed
-- with. Key thresholds are specified in 'RoleSpec' or 'DelegationsSpec'.
newtype KeyThreshold = KeyThreshold Int54
deriving (Eq, Ord, Show)
-- | File hash
newtype Hash = Hash String
deriving (Eq, Ord, Show)
{-------------------------------------------------------------------------------
JSON
-------------------------------------------------------------------------------}
instance Monad m => ToJSON m KeyThreshold where
toJSON (KeyThreshold i) = toJSON i
instance Monad m => ToJSON m FileLength where
toJSON (FileLength i) = toJSON i
instance Monad m => ToJSON m Hash where
toJSON (Hash str) = toJSON str
instance ReportSchemaErrors m => FromJSON m KeyThreshold where
fromJSON enc = KeyThreshold <$> fromJSON enc
instance ReportSchemaErrors m => FromJSON m FileLength where
fromJSON enc = FileLength <$> fromJSON enc
instance ReportSchemaErrors m => FromJSON m Hash where
fromJSON enc = Hash <$> fromJSON enc
hackage-security-0.6.2.4/src/Hackage/Security/TUF/FileInfo.hs 0000644 0000000 0000000 00000011653 07346545000 021716 0 ustar 00 0000000 0000000 -- | Information about files
module Hackage.Security.TUF.FileInfo (
FileInfo(..)
, HashFn(..)
, Hash(..)
-- * Utility
, fileInfo
, computeFileInfo
, compareTrustedFileInfo
, knownFileInfoEqual
, fileInfoSHA256
-- ** Re-exports
, Int54
) where
import MyPrelude hiding (lookup)
import Data.Map (Map)
import qualified Crypto.Hash.SHA256 as SHA256
import qualified Data.Map as Map
import qualified Data.ByteString.Base16 as Base16
import qualified Data.ByteString.Lazy as BS.L
import qualified Data.ByteString.Char8 as BS.C8
import Hackage.Security.JSON
import Hackage.Security.TUF.Common
import Hackage.Security.Util.Path
{-------------------------------------------------------------------------------
Datatypes
-------------------------------------------------------------------------------}
data HashFn = HashFnSHA256
| HashFnMD5
deriving (Show, Eq, Ord)
-- | File information
--
-- This intentionally does not have an 'Eq' instance; see 'knownFileInfoEqual'
-- and 'verifyFileInfo' instead.
--
-- NOTE: Throughout we compute file information always over the raw bytes.
-- For example, when @timestamp.json@ lists the hash of @snapshot.json@, this
-- hash is computed over the actual @snapshot.json@ file (as opposed to the
-- canonical form of the embedded JSON). This brings it in line with the hash
-- computed over target files, where that is the only choice available.
data FileInfo = FileInfo {
fileInfoLength :: FileLength
, fileInfoHashes :: Map HashFn Hash
}
deriving (Show)
{-------------------------------------------------------------------------------
Utility
-------------------------------------------------------------------------------}
-- | Compute 'FileInfo'
--
-- TODO: Currently this will load the entire input bytestring into memory.
-- We need to make this incremental, by computing the length and all hashes
-- in a single traversal over the input.
fileInfo :: BS.L.ByteString -> FileInfo
fileInfo bs = FileInfo {
fileInfoLength = FileLength . fromIntegral $ BS.L.length bs
, fileInfoHashes = Map.fromList [
-- Note: if you add or change hash functions here and you want to
-- make them compulsory then you also need to update
-- 'compareTrustedFileInfo' below.
(HashFnSHA256, Hash $ BS.C8.unpack $ Base16.encode $ SHA256.hashlazy bs)
]
}
-- | Compute 'FileInfo'
computeFileInfo :: FsRoot root => Path root -> IO FileInfo
computeFileInfo fp = fileInfo <$> readLazyByteString fp
-- | Compare the expected trusted file info against the actual file info of a
-- target file.
--
-- This should be used only when the 'FileInfo' is already known. If we want
-- to compare known 'FileInfo' against a file on disk we should delay until we
-- have confirmed that the file lengths match (see 'downloadedVerify').
--
compareTrustedFileInfo :: FileInfo -- ^ expected (from trusted TUF files)
-> FileInfo -- ^ actual (from 'fileInfo' on target file)
-> Bool
compareTrustedFileInfo expectedInfo actualInfo =
-- The expected trusted file info may have hashes for several hash
-- functions, including ones we do not care about and do not want to
-- check. In particular the file info may have an md5 hash, but this
-- is not one that we want to check.
--
-- Our current policy is to check sha256 only and ignore md5:
sameLength expectedInfo actualInfo
&& sameSHA256 expectedInfo actualInfo
where
sameLength a b = fileInfoLength a
== fileInfoLength b
sameSHA256 a b = case (fileInfoSHA256 a,
fileInfoSHA256 b) of
(Just ha, Just hb) -> ha == hb
_ -> False
knownFileInfoEqual :: FileInfo -> FileInfo -> Bool
knownFileInfoEqual a b = (==) (fileInfoLength a, fileInfoHashes a)
(fileInfoLength b, fileInfoHashes b)
-- | Extract SHA256 hash from 'FileInfo' (if present)
fileInfoSHA256 :: FileInfo -> Maybe Hash
fileInfoSHA256 FileInfo{..} = Map.lookup HashFnSHA256 fileInfoHashes
{-------------------------------------------------------------------------------
JSON
-------------------------------------------------------------------------------}
instance Monad m => ToObjectKey m HashFn where
toObjectKey HashFnSHA256 = return "sha256"
toObjectKey HashFnMD5 = return "md5"
instance ReportSchemaErrors m => FromObjectKey m HashFn where
fromObjectKey "sha256" = return (Just HashFnSHA256)
fromObjectKey "md5" = return (Just HashFnMD5)
fromObjectKey _ = return Nothing
instance Monad m => ToJSON m FileInfo where
toJSON FileInfo{..} = mkObject [
("length", toJSON fileInfoLength)
, ("hashes", toJSON fileInfoHashes)
]
instance ReportSchemaErrors m => FromJSON m FileInfo where
fromJSON enc = do
fileInfoLength <- fromJSField enc "length"
fileInfoHashes <- fromJSField enc "hashes"
return FileInfo{..}
hackage-security-0.6.2.4/src/Hackage/Security/TUF/FileMap.hs 0000644 0000000 0000000 00000007623 07346545000 021542 0 ustar 00 0000000 0000000 -- | Information about files
--
-- Intended to be double imported
--
-- > import Hackage.Security.TUF.FileMap (FileMap)
-- > import qualified Hackage.Security.TUF.FileMap as FileMap
module Hackage.Security.TUF.FileMap (
FileMap -- opaque
, TargetPath(..)
-- * Standard accessors
, empty
, lookup
, (!)
, insert
, fromList
-- * Comparing file maps
, FileChange(..)
, fileMapChanges
) where
import MyPrelude hiding (lookup)
import Control.Arrow (second)
import Data.Map (Map)
import qualified Data.Map as Map
import Hackage.Security.JSON
import Hackage.Security.TUF.FileInfo
import Hackage.Security.TUF.Paths
import Hackage.Security.Util.Path
import Hackage.Security.Util.Pretty
{-------------------------------------------------------------------------------
Datatypes
-------------------------------------------------------------------------------}
-- | Mapping from paths to file info
--
-- File maps are used in target files; the paths are relative to the location
-- of the target files containing the file map.
newtype FileMap = FileMap { fileMap :: Map TargetPath FileInfo }
deriving (Show)
-- | Entries in 'FileMap' either talk about the repository or the index
data TargetPath =
TargetPathRepo RepoPath
| TargetPathIndex IndexPath
deriving (Show, Eq, Ord)
instance Pretty TargetPath where
pretty (TargetPathRepo path) = pretty path
pretty (TargetPathIndex path) = pretty path
{-------------------------------------------------------------------------------
Standard accessors
-------------------------------------------------------------------------------}
empty :: FileMap
empty = FileMap Map.empty
lookup :: TargetPath -> FileMap -> Maybe FileInfo
lookup fp = Map.lookup fp . fileMap
(!) :: FileMap -> TargetPath -> FileInfo
fm ! fp = fileMap fm Map.! fp
insert :: TargetPath -> FileInfo -> FileMap -> FileMap
insert fp nfo = FileMap . Map.insert fp nfo . fileMap
fromList :: [(TargetPath, FileInfo)] -> FileMap
fromList = FileMap . Map.fromList
{-------------------------------------------------------------------------------
Comparing filemaps
-------------------------------------------------------------------------------}
data FileChange =
-- | File got added or modified; we record the new file info
FileChanged FileInfo
-- | File got deleted
| FileDeleted
deriving (Show)
fileMapChanges :: FileMap -- ^ Old
-> FileMap -- ^ New
-> Map TargetPath FileChange
fileMapChanges (FileMap a) (FileMap b) =
Map.fromList $ go (Map.toList a) (Map.toList b)
where
-- Assumes the old and new lists are sorted alphabetically
-- (Map.toList guarantees this)
go :: [(TargetPath, FileInfo)]
-> [(TargetPath, FileInfo)]
-> [(TargetPath, FileChange)]
go [] new = map (second FileChanged) new
go old [] = map (second (const FileDeleted)) old
go old@((fp, nfo):old') new@((fp', nfo'):new')
| fp < fp' = (fp , FileDeleted ) : go old' new
| fp > fp' = (fp', FileChanged nfo') : go old new'
| knownFileInfoEqual nfo nfo' = (fp , FileChanged nfo') : go old' new'
| otherwise = go old' new'
{-------------------------------------------------------------------------------
JSON
-------------------------------------------------------------------------------}
instance Monad m => ToJSON m FileMap where
toJSON (FileMap metaFiles) = toJSON metaFiles
instance ReportSchemaErrors m => FromJSON m FileMap where
fromJSON enc = FileMap <$> fromJSON enc
instance Monad m => ToObjectKey m TargetPath where
toObjectKey = return . pretty
instance ReportSchemaErrors m => FromObjectKey m TargetPath where
fromObjectKey ('<':'r':'e':'p':'o':'>':'/':path) =
return . Just . TargetPathRepo . rootPath . fromUnrootedFilePath $ path
fromObjectKey ('<':'i':'n':'d':'e':'x':'>':'/':path) =
return . Just . TargetPathIndex . rootPath . fromUnrootedFilePath $ path
fromObjectKey _str = return Nothing
hackage-security-0.6.2.4/src/Hackage/Security/TUF/Header.hs 0000644 0000000 0000000 00000007433 07346545000 021414 0 ustar 00 0000000 0000000 -- | Header used by all TUF types
module Hackage.Security.TUF.Header (
HasHeader(..)
, FileVersion(..)
, FileExpires(..)
, Header(..)
-- ** Utility
, expiresInDays
, expiresNever
, isExpired
, versionInitial
, versionIncrement
) where
import MyPrelude
import Data.Time
import Data.Typeable (Typeable)
import Hackage.Security.JSON
import Hackage.Security.Util.Lens
{-------------------------------------------------------------------------------
TUF header
-------------------------------------------------------------------------------}
class HasHeader a where
-- | File expiry date
fileExpires :: Lens' a FileExpires
-- | File version (monotonically increasing counter)
fileVersion :: Lens' a FileVersion
-- | File version
--
-- The file version is a flat integer which must monotonically increase on
-- every file update.
--
-- 'Show' and 'Read' instance are defined in terms of the underlying 'Int'
-- (this is used for example by Hackage during the backup process).
newtype FileVersion = FileVersion Int54
deriving (Eq, Ord, Typeable)
instance Show FileVersion where
show (FileVersion v) = show v
instance Read FileVersion where
readsPrec p = map (\(v, xs) -> (FileVersion v, xs)) . readsPrec p
-- | File expiry date
--
-- A 'Nothing' value here means no expiry. That makes it possible to set some
-- files to never expire. (Note that not having the Maybe in the type here still
-- allows that, because you could set an expiry date 2000 years into the future.
-- By having the Maybe here we avoid the _need_ for such encoding issues.)
newtype FileExpires = FileExpires (Maybe UTCTime)
deriving (Eq, Ord, Show, Typeable)
-- | Occasionally it is useful to read only a header from a file.
--
-- 'HeaderOnly' intentionally only has a 'FromJSON' instance (no 'ToJSON').
data Header = Header {
headerExpires :: FileExpires
, headerVersion :: FileVersion
}
instance HasHeader Header where
fileExpires f x = (\y -> x { headerExpires = y }) <$> f (headerExpires x)
fileVersion f x = (\y -> x { headerVersion = y }) <$> f (headerVersion x)
{-------------------------------------------------------------------------------
Utility
-------------------------------------------------------------------------------}
expiresNever :: FileExpires
expiresNever = FileExpires Nothing
expiresInDays :: UTCTime -> Integer -> FileExpires
expiresInDays now n =
FileExpires . Just $ addUTCTime (fromInteger n * oneDay) now
isExpired :: UTCTime -> FileExpires -> Bool
isExpired _ (FileExpires Nothing) = False
isExpired now (FileExpires (Just e)) = e < now
versionInitial :: FileVersion
versionInitial = FileVersion 1
versionIncrement :: FileVersion -> FileVersion
versionIncrement (FileVersion i) = FileVersion (i + 1)
{-------------------------------------------------------------------------------
JSON
-------------------------------------------------------------------------------}
instance Monad m => ToJSON m FileVersion where
toJSON (FileVersion i) = toJSON i
instance Monad m => ToJSON m FileExpires where
toJSON (FileExpires (Just e)) = toJSON e
toJSON (FileExpires Nothing) = return JSNull
instance ReportSchemaErrors m => FromJSON m FileVersion where
fromJSON enc = FileVersion <$> fromJSON enc
instance ReportSchemaErrors m => FromJSON m FileExpires where
fromJSON JSNull = return $ FileExpires Nothing
fromJSON enc = FileExpires . Just <$> fromJSON enc
instance ReportSchemaErrors m => FromJSON m Header where
fromJSON enc = do
headerExpires <- fromJSField enc "expires"
headerVersion <- fromJSField enc "version"
return Header{..}
{-------------------------------------------------------------------------------
Auxiliary
-------------------------------------------------------------------------------}
oneDay :: NominalDiffTime
oneDay = 24 * 60 * 60
hackage-security-0.6.2.4/src/Hackage/Security/TUF/Layout/ 0000755 0000000 0000000 00000000000 07346545000 021136 5 ustar 00 0000000 0000000 hackage-security-0.6.2.4/src/Hackage/Security/TUF/Layout/Cache.hs 0000644 0000000 0000000 00000004343 07346545000 022501 0 ustar 00 0000000 0000000 module Hackage.Security.TUF.Layout.Cache (
-- * Cache layout
CacheLayout(..)
, cabalCacheLayout
) where
import MyPrelude
import Hackage.Security.TUF.Paths
import Hackage.Security.Util.Path
{-------------------------------------------------------------------------------
Cache layout
-------------------------------------------------------------------------------}
-- | Location of the various files we cache
--
-- Although the generic TUF algorithms do not care how we organize the cache,
-- we nonetheless specify this here because as long as there are tools which
-- access files in the cache directly we need to define the cache layout.
-- See also comments for 'defaultCacheLayout'.
data CacheLayout = CacheLayout {
-- | TUF root metadata
cacheLayoutRoot :: CachePath
-- | TUF timestamp
, cacheLayoutTimestamp :: CachePath
-- | TUF snapshot
, cacheLayoutSnapshot :: CachePath
-- | TUF mirrors list
, cacheLayoutMirrors :: CachePath
-- | Uncompressed index tarball
, cacheLayoutIndexTar :: CachePath
-- | Index to the uncompressed index tarball
, cacheLayoutIndexIdx :: CachePath
-- | Compressed index tarball
--
-- We cache both the compressed and the uncompressed tarballs, because
-- incremental updates happen through the compressed tarball, but reads
-- happen through the uncompressed one (with the help of the tarball index).
, cacheLayoutIndexTarGz :: CachePath
}
-- | The cache layout cabal-install uses
--
-- We cache the index as @/00-index.tar@; this is important because
-- `cabal-install` expects to find it there (and does not currently go through
-- the hackage-security library to get files from the index).
cabalCacheLayout :: CacheLayout
cabalCacheLayout = CacheLayout {
cacheLayoutRoot = rp $ fragment "root.json"
, cacheLayoutTimestamp = rp $ fragment "timestamp.json"
, cacheLayoutSnapshot = rp $ fragment "snapshot.json"
, cacheLayoutMirrors = rp $ fragment "mirrors.json"
, cacheLayoutIndexTar = rp $ fragment "00-index.tar"
, cacheLayoutIndexIdx = rp $ fragment "00-index.tar.idx"
, cacheLayoutIndexTarGz = rp $ fragment "00-index.tar.gz"
}
where
rp :: Path Unrooted -> CachePath
rp = rootPath
hackage-security-0.6.2.4/src/Hackage/Security/TUF/Layout/Index.hs 0000644 0000000 0000000 00000011005 07346545000 022536 0 ustar 00 0000000 0000000 module Hackage.Security.TUF.Layout.Index (
-- * Repository layout
IndexLayout(..)
, IndexFile(..)
, hackageIndexLayout
-- ** Utility
, indexLayoutPkgMetadata
, indexLayoutPkgCabal
, indexLayoutPkgPrefs
) where
import MyPrelude
import Distribution.Package
import Distribution.Text
import Hackage.Security.TUF.Paths
import Hackage.Security.TUF.Signed
import Hackage.Security.TUF.Targets
import Hackage.Security.Util.Path
import Hackage.Security.Util.Pretty
import Hackage.Security.Util.Some
{-------------------------------------------------------------------------------
Index layout
-------------------------------------------------------------------------------}
-- | Layout of the files within the index tarball
data IndexLayout = IndexLayout {
-- | Translate an 'IndexFile' to a path
indexFileToPath :: forall dec. IndexFile dec -> IndexPath
-- | Parse an 'FilePath'
, indexFileFromPath :: IndexPath -> Maybe (Some IndexFile)
}
-- | Files that we might request from the index
--
-- The type index tells us the type of the decoded file, if any. For files for
-- which the library does not support decoding this will be @()@.
-- NOTE: Clients should NOT rely on this type index being @()@, or they might
-- break if we add support for parsing additional file formats in the future.
--
-- TODO: If we wanted to support legacy Hackage, we should also have a case for
-- the global preferred-versions file. But supporting legacy Hackage will
-- probably require more work anyway..
data IndexFile :: * -> * where
-- Package-specific metadata (@targets.json@)
IndexPkgMetadata :: PackageIdentifier -> IndexFile (Signed Targets)
-- Cabal file for a package
IndexPkgCabal :: PackageIdentifier -> IndexFile ()
-- Preferred versions a package
IndexPkgPrefs :: PackageName -> IndexFile ()
--TODO: ^^ older haddock doesn't support GADT doc comments :-(
deriving instance Show (IndexFile dec)
instance Pretty (IndexFile dec) where
pretty (IndexPkgMetadata pkgId) = "metadata for " ++ display pkgId
pretty (IndexPkgCabal pkgId) = ".cabal for " ++ display pkgId
pretty (IndexPkgPrefs pkgNm) = "preferred-versions for " ++ display pkgNm
instance SomeShow IndexFile where someShow = DictShow
instance SomePretty IndexFile where somePretty = DictPretty
-- | The layout of the index as maintained on Hackage
hackageIndexLayout :: IndexLayout
hackageIndexLayout = IndexLayout {
indexFileToPath = toPath
, indexFileFromPath = fromPath
}
where
toPath :: IndexFile dec -> IndexPath
toPath (IndexPkgCabal pkgId) = fromFragments [
display (packageName pkgId)
, display (packageVersion pkgId)
, display (packageName pkgId) ++ ".cabal"
]
toPath (IndexPkgMetadata pkgId) = fromFragments [
display (packageName pkgId)
, display (packageVersion pkgId)
, "package.json"
]
toPath (IndexPkgPrefs pkgNm) = fromFragments [
display pkgNm
, "preferred-versions"
]
fromFragments :: [String] -> IndexPath
fromFragments = rootPath . joinFragments
fromPath :: IndexPath -> Maybe (Some IndexFile)
fromPath fp = case splitFragments (unrootPath fp) of
[pkg, version, _file] -> do
pkgId <- simpleParse (pkg ++ "-" ++ version)
case takeExtension fp of
".cabal" -> return $ Some $ IndexPkgCabal pkgId
".json" -> return $ Some $ IndexPkgMetadata pkgId
_otherwise -> Nothing
[pkg, "preferred-versions"] ->
Some . IndexPkgPrefs <$> simpleParse pkg
_otherwise -> Nothing
{-------------------------------------------------------------------------------
Utility
-------------------------------------------------------------------------------}
indexLayoutPkgMetadata :: IndexLayout -> PackageIdentifier -> IndexPath
indexLayoutPkgMetadata IndexLayout{..} = indexFileToPath . IndexPkgMetadata
indexLayoutPkgCabal :: IndexLayout -> PackageIdentifier -> IndexPath
indexLayoutPkgCabal IndexLayout{..} = indexFileToPath . IndexPkgCabal
indexLayoutPkgPrefs :: IndexLayout -> PackageName -> IndexPath
indexLayoutPkgPrefs IndexLayout{..} = indexFileToPath . IndexPkgPrefs
hackage-security-0.6.2.4/src/Hackage/Security/TUF/Layout/Repo.hs 0000644 0000000 0000000 00000004576 07346545000 022413 0 ustar 00 0000000 0000000 module Hackage.Security.TUF.Layout.Repo (
-- * Repository layout
RepoLayout(..)
, hackageRepoLayout
, cabalLocalRepoLayout
) where
import MyPrelude
import Distribution.Package
import Distribution.Text
import Hackage.Security.TUF.Paths
import Hackage.Security.Util.Path
{-------------------------------------------------------------------------------
Repository layout
-------------------------------------------------------------------------------}
-- | Layout of a repository
data RepoLayout = RepoLayout {
-- | TUF root metadata
repoLayoutRoot :: RepoPath
-- | TUF timestamp
, repoLayoutTimestamp :: RepoPath
-- | TUF snapshot
, repoLayoutSnapshot :: RepoPath
-- | TUF mirrors list
, repoLayoutMirrors :: RepoPath
-- | Compressed index tarball
, repoLayoutIndexTarGz :: RepoPath
-- | Uncompressed index tarball
, repoLayoutIndexTar :: RepoPath
-- | Path to the package tarball
, repoLayoutPkgTarGz :: PackageIdentifier -> RepoPath
}
-- | The layout used on Hackage
hackageRepoLayout :: RepoLayout
hackageRepoLayout = RepoLayout {
repoLayoutRoot = rp $ fragment "root.json"
, repoLayoutTimestamp = rp $ fragment "timestamp.json"
, repoLayoutSnapshot = rp $ fragment "snapshot.json"
, repoLayoutMirrors = rp $ fragment "mirrors.json"
, repoLayoutIndexTarGz = rp $ fragment "01-index.tar.gz"
, repoLayoutIndexTar = rp $ fragment "01-index.tar"
, repoLayoutPkgTarGz = \pkgId -> rp $ fragment "package" > pkgFile pkgId
}
where
pkgFile :: PackageIdentifier -> Path Unrooted
pkgFile pkgId = fragment (display pkgId) <.> "tar.gz"
rp :: Path Unrooted -> RepoPath
rp = rootPath
-- | Layout used by cabal for ("legacy") local repos
--
-- Obviously, such repos do not normally contain any of the TUF files, so their
-- location is more or less arbitrary here.
cabalLocalRepoLayout :: RepoLayout
cabalLocalRepoLayout = hackageRepoLayout {
repoLayoutPkgTarGz = \pkgId -> rp $ pkgLoc pkgId > pkgFile pkgId
}
where
pkgLoc :: PackageIdentifier -> Path Unrooted
pkgLoc pkgId = joinFragments [
display (packageName pkgId)
, display (packageVersion pkgId)
]
pkgFile :: PackageIdentifier -> Path Unrooted
pkgFile pkgId = fragment (display pkgId) <.> "tar.gz"
rp :: Path Unrooted -> RepoPath
rp = rootPath
hackage-security-0.6.2.4/src/Hackage/Security/TUF/Mirrors.hs 0000644 0000000 0000000 00000006234 07346545000 021657 0 ustar 00 0000000 0000000 {-# LANGUAGE UndecidableInstances #-}
module Hackage.Security.TUF.Mirrors (
-- * TUF types
Mirrors(..)
, Mirror(..)
, MirrorContent(..)
-- ** Utility
, MirrorDescription
, describeMirror
) where
import MyPrelude
import Control.Monad.Except
import Network.URI
import Hackage.Security.JSON
import Hackage.Security.TUF.Header
import Hackage.Security.TUF.Signed
{-------------------------------------------------------------------------------
Datatypes
-------------------------------------------------------------------------------}
data Mirrors = Mirrors {
mirrorsVersion :: FileVersion
, mirrorsExpires :: FileExpires
, mirrorsMirrors :: [Mirror]
}
-- | Definition of a mirror
--
-- NOTE: Unlike the TUF specification, we require that all mirrors must have
-- the same format. That is, we omit @metapath@ and @targetspath@.
data Mirror = Mirror {
mirrorUrlBase :: URI
, mirrorContent :: MirrorContent
}
deriving Show
-- | Full versus partial mirrors
--
-- The TUF spec explicitly allows for partial mirrors, with the mirrors file
-- specifying (through patterns) what is available from partial mirrors.
--
-- For now we only support full mirrors; if we wanted to add partial mirrors,
-- we would add a second @MirrorPartial@ constructor here with arguments
-- corresponding to TUF's @metacontent@ and @targetscontent@ fields.
data MirrorContent =
MirrorFull
deriving Show
instance HasHeader Mirrors where
fileVersion f x = (\y -> x { mirrorsVersion = y }) <$> f (mirrorsVersion x)
fileExpires f x = (\y -> x { mirrorsExpires = y }) <$> f (mirrorsExpires x)
{-------------------------------------------------------------------------------
Utility
-------------------------------------------------------------------------------}
type MirrorDescription = String
-- | Give a human-readable description of a particular mirror
--
-- (for use in error messages)
describeMirror :: Mirror -> MirrorDescription
describeMirror = show . mirrorUrlBase
{-------------------------------------------------------------------------------
JSON
-------------------------------------------------------------------------------}
instance Monad m => ToJSON m Mirror where
toJSON Mirror{..} = mkObject $ concat [
[ ("urlbase", toJSON mirrorUrlBase) ]
, case mirrorContent of
MirrorFull -> []
]
instance Monad m => ToJSON m Mirrors where
toJSON Mirrors{..} = mkObject [
("_type" , return $ JSString "Mirrorlist")
, ("version" , toJSON mirrorsVersion)
, ("expires" , toJSON mirrorsExpires)
, ("mirrors" , toJSON mirrorsMirrors)
]
instance ReportSchemaErrors m => FromJSON m Mirror where
fromJSON enc = do
mirrorUrlBase <- fromJSField enc "urlbase"
let mirrorContent = MirrorFull
return Mirror{..}
instance ( MonadError DeserializationError m
, ReportSchemaErrors m
) => FromJSON m Mirrors where
fromJSON enc = do
verifyType enc "Mirrorlist"
mirrorsVersion <- fromJSField enc "version"
mirrorsExpires <- fromJSField enc "expires"
mirrorsMirrors <- fromJSField enc "mirrors"
return Mirrors{..}
instance MonadKeys m => FromJSON m (Signed Mirrors) where
fromJSON = signedFromJSON
hackage-security-0.6.2.4/src/Hackage/Security/TUF/Paths.hs 0000644 0000000 0000000 00000004335 07346545000 021301 0 ustar 00 0000000 0000000 -- | Paths used in the TUF data structures
module Hackage.Security.TUF.Paths (
-- * Repository
RepoRoot
, RepoPath
, anchorRepoPathLocally
, anchorRepoPathRemotely
-- * Index
, IndexRoot
, IndexPath
-- * Cache
, CacheRoot
, CachePath
, anchorCachePath
) where
import MyPrelude
import Hackage.Security.Util.Path
import Hackage.Security.Util.Pretty
{-------------------------------------------------------------------------------
Repo
-------------------------------------------------------------------------------}
-- | The root of the repository
--
-- Repository roots can be anchored at a remote URL or a local directory.
--
-- Note that even for remote repos 'RepoRoot' is (potentially) different from
-- 'Web' -- for a repository located at, say, @http://hackage.haskell.org@
-- they happen to coincide, but for one location at
-- @http://example.com/some/subdirectory@ they do not.
data RepoRoot
-- | Paths relative to the root of the repository
type RepoPath = Path RepoRoot
instance Pretty (Path RepoRoot) where
pretty (Path fp) = "/" ++ fp
anchorRepoPathLocally :: Path root -> RepoPath -> Path root
anchorRepoPathLocally localRoot repoPath = localRoot > unrootPath repoPath
anchorRepoPathRemotely :: Path Web -> RepoPath -> Path Web
anchorRepoPathRemotely remoteRoot repoPath = remoteRoot > unrootPath repoPath
{-------------------------------------------------------------------------------
Index
-------------------------------------------------------------------------------}
-- | The root of the index tarball
data IndexRoot
-- | Paths relative to the root of the index tarball
type IndexPath = Path IndexRoot
instance Pretty (Path IndexRoot) where
pretty (Path fp) = "/" ++ fp
{-------------------------------------------------------------------------------
Cache
-------------------------------------------------------------------------------}
-- | The cache directory
data CacheRoot
type CachePath = Path CacheRoot
instance Pretty (Path CacheRoot) where
pretty (Path fp) = "/" ++ fp
-- | Anchor a cache path to the location of the cache
anchorCachePath :: Path root -> CachePath -> Path root
anchorCachePath cacheRoot cachePath = cacheRoot > unrootPath cachePath
hackage-security-0.6.2.4/src/Hackage/Security/TUF/Patterns.hs 0000644 0000000 0000000 00000031114 07346545000 022015 0 ustar 00 0000000 0000000 -- | Patterns and replacements
--
-- NOTE: This module was developed to prepare for proper delegation (#39).
-- It is currently unused.
{-# LANGUAGE CPP #-}
#if __GLASGOW_HASKELL__ >= 800
{-# LANGUAGE DeriveLift #-}
{-# LANGUAGE StandaloneDeriving #-}
#else
{-# LANGUAGE TemplateHaskell #-}
#endif
module Hackage.Security.TUF.Patterns (
-- * Patterns and replacements
FileName
, Directory
, Extension
, BaseName
, Pattern(..)
, Replacement(..)
, Delegation(..)
-- ** Utility
, identityReplacement
, matchDelegation
-- ** Parsing and quasi-quoting
, parseDelegation
, qqd
) where
import MyPrelude
import Control.Monad (guard)
import Language.Haskell.TH (Q, Exp)
import System.FilePath.Posix
import qualified Language.Haskell.TH.Syntax as TH
import Hackage.Security.JSON
import Hackage.Security.Util.Some
import Hackage.Security.Util.Stack
import Hackage.Security.Util.TypedEmbedded
{-------------------------------------------------------------------------------
Patterns and replacements
-------------------------------------------------------------------------------}
type FileName = String
type Directory = String
type Extension = String
type BaseName = String
-- | Structured patterns over paths
--
-- The type argument indicates what kind of function we expect when the
-- pattern matches. For example, we have the pattern @"*/*.txt"@:
--
-- > PathPatternDirAny (PathPatternFileExt ".txt")
-- > :: PathPattern (Directory :- BaseName :- ())
--
-- TODOs (see README.md):
--
-- * Update this to work with 'Path' rather than 'FilePath'/'String'
-- * Add different kinds of wildcards
-- * Add path roots
--
-- Currently this is a proof of concept more than anything else; the right
-- structure is here, but it needs updating. However, until we add author
-- signing (or out-of-tarball targets) we don't actually use this yet.
--
-- NOTE: Haddock lacks GADT support so constructors have only regular comments.
data Pattern a where
-- Match against a specific filename
PatFileConst :: FileName -> Pattern ()
-- Match against a filename with the given extension
PatFileExt :: Extension -> Pattern (BaseName :- ())
-- Match against any filename
PatFileAny :: Pattern (FileName :- ())
-- Match against a specific directory
PatDirConst :: Directory -> Pattern a -> Pattern a
-- Match against any directory
PatDirAny :: Pattern a -> Pattern (Directory :- a)
-- | Replacement patterns
--
-- These constructors match the ones in 'Pattern': wildcards must be used
-- in the same order as they appear in the pattern, but they don't all have to
-- be used (that's why the base constructors are polymorphic in the stack tail).
data Replacement a where
RepFileConst :: FileName -> Replacement a
RepFileExt :: Extension -> Replacement (BaseName :- a)
RepFileAny :: Replacement (FileName :- a)
RepDirConst :: Directory -> Replacement a -> Replacement a
RepDirAny :: Replacement a -> Replacement (Directory :- a)
deriving instance Eq (Pattern typ)
deriving instance Show (Pattern typ)
deriving instance Eq (Replacement typ)
deriving instance Show (Replacement typ)
-- | The identity replacement replaces a matched pattern with itself
identityReplacement :: Pattern typ -> Replacement typ
identityReplacement = go
where
go :: Pattern typ -> Replacement typ
go (PatFileConst fn) = RepFileConst fn
go (PatFileExt e) = RepFileExt e
go PatFileAny = RepFileAny
go (PatDirConst d p) = RepDirConst d (go p)
go (PatDirAny p) = RepDirAny (go p)
-- | A delegation
--
-- A delegation is a pair of a pattern and a replacement.
--
-- See 'match' for an example.
data Delegation = forall a. Delegation (Pattern a) (Replacement a)
deriving instance Show Delegation
{-------------------------------------------------------------------------------
Matching
-------------------------------------------------------------------------------}
matchPattern :: String -> Pattern a -> Maybe a
matchPattern = go . splitDirectories
where
go :: [String] -> Pattern a -> Maybe a
go [] _ = Nothing
go [f] (PatFileConst f') = do guard (f == f')
return ()
go [f] (PatFileExt e') = do let (bn, _:e) = splitExtension f
guard $ e == e'
return (bn :- ())
go [_] _ = Nothing
go (d:p) (PatDirConst d' p') = do guard (d == d')
go p p'
go (d:p) (PatDirAny p') = (d :-) <$> go p p'
go (_:_) _ = Nothing
constructReplacement :: Replacement a -> a -> String
constructReplacement = \repl a -> joinPath $ go repl a
where
go :: Replacement a -> a -> [String]
go (RepFileConst c) _ = [c]
go (RepFileExt e) (bn :- _) = [bn <.> e]
go RepFileAny (fn :- _) = [fn]
go (RepDirConst d p) a = d : go p a
go (RepDirAny p) (d :- a) = d : go p a
matchDelegation :: Delegation -> String -> Maybe String
matchDelegation (Delegation pat repl) str =
constructReplacement repl <$> matchPattern str pat
{-------------------------------------------------------------------------------
Typechecking patterns and replacements
-------------------------------------------------------------------------------}
-- | Types for pattern and replacements
--
-- We intentially are not very precise here, saying @String@ (instead of
-- @FileName@, @BaseName@, or @Directory@, say) so that we can, for example,
-- use a matched filename in a pattern as a directory in a replacement.
data PatternType a where
PatTypeNil :: PatternType ()
PatTypeStr :: PatternType a -> PatternType (String :- a)
instance Unify PatternType where
unify PatTypeNil PatTypeNil = Just Refl
unify (PatTypeStr p) (PatTypeStr p') = case unify p p' of
Just Refl -> Just Refl
Nothing -> Nothing
unify _ _ = Nothing
type instance TypeOf Pattern = PatternType
type instance TypeOf Replacement = PatternType
instance Typed Pattern where
typeOf (PatFileConst _) = PatTypeNil
typeOf (PatFileExt _) = PatTypeStr PatTypeNil
typeOf (PatFileAny ) = PatTypeStr PatTypeNil
typeOf (PatDirConst _ p) = typeOf p
typeOf (PatDirAny p) = PatTypeStr (typeOf p)
instance AsType Replacement where
asType = go
where
go :: Replacement typ -> PatternType typ' -> Maybe (Replacement typ')
go (RepFileConst c) _ = return $ RepFileConst c
go (RepFileExt _) PatTypeNil = Nothing
go (RepFileExt e) (PatTypeStr _) = return $ RepFileExt e
go RepFileAny PatTypeNil = Nothing
go RepFileAny (PatTypeStr _) = return $ RepFileAny
go (RepDirConst c p) tp = RepDirConst c <$> go p tp
go (RepDirAny _) PatTypeNil = Nothing
go (RepDirAny p) (PatTypeStr tp) = RepDirAny <$> go p tp
{-------------------------------------------------------------------------------
Pretty-printing and parsing patterns and replacements
-------------------------------------------------------------------------------}
prettyPattern :: Pattern typ -> String
prettyPattern (PatFileConst f) = f
prettyPattern (PatFileExt e) = "*" <.> e
prettyPattern PatFileAny = "*"
prettyPattern (PatDirConst d p) = d > prettyPattern p
prettyPattern (PatDirAny p) = "*" > prettyPattern p
prettyReplacement :: Replacement typ -> String
prettyReplacement (RepFileConst f) = f
prettyReplacement (RepFileExt e) = "*" <.> e
prettyReplacement RepFileAny = "*"
prettyReplacement (RepDirConst d p) = d > prettyReplacement p
prettyReplacement (RepDirAny p) = "*" > prettyReplacement p
-- | Parse a pattern
parsePattern :: String -> Maybe (Some Pattern)
parsePattern = go . splitDirectories
where
go :: [String] -> Maybe (Some Pattern)
go [] = Nothing
go ["*"] = return . Some $ PatFileAny
go [p] = if '*' `notElem` p
then return . Some $ PatFileConst p
else case splitExtension p of
("*", _:ext) -> return . Some $ PatFileExt ext
_otherwise -> Nothing
go (p:ps) = do Some p' <- go ps
if '*' `notElem` p
then return . Some $ PatDirConst p p'
else case p of
"*" -> return . Some $ PatDirAny p'
_otherwise -> Nothing
-- | Parse a replacement
--
-- We cheat and use the parser for patterns and then translate using the
-- identity replacement.
parseReplacement :: String -> Maybe (Some Replacement)
parseReplacement = fmap aux . parsePattern
where
aux :: Some Pattern -> Some Replacement
aux (Some pat) = Some (identityReplacement pat)
parseDelegation :: String -> String -> Either String Delegation
parseDelegation pat repl =
case (parsePattern pat, parseReplacement repl) of
(Just (Some pat'), Just (Some repl')) ->
case repl' `asType` typeOf pat' of
Just repl'' -> Right $ Delegation pat' repl''
Nothing -> Left "Replacement does not match pattern type"
_otherwise ->
Left "Cannot parse delegation"
{-------------------------------------------------------------------------------
Quasi-quotation
We cannot (easily) use dataToExpQ because of the use of GADTs, so we manually
give Lift instances.
-------------------------------------------------------------------------------}
-- | Quasi-quoter for delegations to make them easier to write in code
--
-- This allows to write delegations as
--
-- > $(qqd "targets/*/*/*.cabal" "targets/*/*/revisions.json")
--
-- (The alternative syntax which actually uses a quasi-quoter doesn't work very
-- well because the '/*' bits confuse CPP: "unterminated comment")
qqd :: String -> String -> Q Exp
qqd pat repl =
case parseDelegation pat repl of
Left err -> fail $ "Invalid delegation: " ++ err
Right del -> TH.lift del
#if __GLASGOW_HASKELL__ >= 800
deriving instance TH.Lift (Pattern a)
deriving instance TH.Lift (Replacement a)
deriving instance TH.Lift Delegation
#else
instance TH.Lift (Pattern a) where
lift (PatFileConst fn) = [| PatFileConst fn |]
lift (PatFileExt e) = [| PatFileExt e |]
lift PatFileAny = [| PatFileAny |]
lift (PatDirConst d p) = [| PatDirConst d p |]
lift (PatDirAny p) = [| PatDirAny p |]
instance TH.Lift (Replacement a) where
lift (RepFileConst fn) = [| RepFileConst fn |]
lift (RepFileExt e ) = [| RepFileExt e |]
lift RepFileAny = [| RepFileAny |]
lift (RepDirConst d r) = [| RepDirConst d r |]
lift (RepDirAny r) = [| RepDirAny r |]
instance TH.Lift Delegation where
lift (Delegation pat repl) = [| Delegation pat repl |]
#endif
{-------------------------------------------------------------------------------
JSON
-------------------------------------------------------------------------------}
instance Monad m => ToJSON m (Pattern typ) where
toJSON = return . JSString . prettyPattern
instance Monad m => ToJSON m (Replacement typ) where
toJSON = return . JSString . prettyReplacement
instance Monad m => ToJSON m (Some Pattern) where
toJSON (Some p) = toJSON p
instance Monad m => ToJSON m (Some Replacement) where
toJSON (Some r) = toJSON r
instance ReportSchemaErrors m => FromJSON m (Some Pattern) where
fromJSON enc = do
str <- fromJSON enc
case parsePattern str of
Nothing -> expected "valid pattern" (Just str)
Just p -> return p
instance ReportSchemaErrors m => FromJSON m (Some Replacement) where
fromJSON enc = do
str <- fromJSON enc
case parseReplacement str of
Nothing -> expected "valid replacement" (Just str)
Just r -> return r
{-------------------------------------------------------------------------------
Debugging: examples
-------------------------------------------------------------------------------}
_ex1 :: Maybe String
_ex1 = matchDelegation del "A/x/y/z.foo"
where
del = Delegation
( PatDirConst "A"
$ PatDirAny
$ PatDirAny
$ PatFileExt "foo"
)
( RepDirConst "B"
$ RepDirAny
$ RepDirConst "C"
$ RepDirAny
$ RepFileExt "bar"
)
_ex2 :: Maybe String
_ex2 = matchDelegation del "A/x/y/z.foo"
where
Right del = parseDelegation "A/*/*/*.foo" "B/*/C/*/*.bar"
_ex3 :: Either String Delegation
_ex3 = parseDelegation "foo" "*/bar"
hackage-security-0.6.2.4/src/Hackage/Security/TUF/Root.hs 0000644 0000000 0000000 00000007651 07346545000 021151 0 ustar 00 0000000 0000000 -- | The root filetype
module Hackage.Security.TUF.Root (
-- * Datatypes
Root(..)
, RootRoles(..)
, RoleSpec(..)
) where
import MyPrelude
import Hackage.Security.JSON
import Hackage.Security.Key
import Hackage.Security.Key.Env (KeyEnv)
import Hackage.Security.TUF.Common
import Hackage.Security.TUF.Header
import Hackage.Security.TUF.Mirrors
import Hackage.Security.TUF.Signed
import Hackage.Security.TUF.Snapshot
import Hackage.Security.TUF.Targets
import Hackage.Security.TUF.Timestamp
import Hackage.Security.Util.Some
{-------------------------------------------------------------------------------
Datatypes
-------------------------------------------------------------------------------}
-- | The root metadata
--
-- NOTE: We must have the invariant that ALL keys (apart from delegation keys)
-- must be listed in 'rootKeys'. (Delegation keys satisfy a similar invariant,
-- see Targets.)
data Root = Root {
rootVersion :: FileVersion
, rootExpires :: FileExpires
, rootKeys :: KeyEnv
, rootRoles :: RootRoles
}
data RootRoles = RootRoles {
rootRolesRoot :: RoleSpec Root
, rootRolesSnapshot :: RoleSpec Snapshot
, rootRolesTargets :: RoleSpec Targets
, rootRolesTimestamp :: RoleSpec Timestamp
, rootRolesMirrors :: RoleSpec Mirrors
}
-- | Role specification
--
-- The phantom type indicates what kind of type this role is meant to verify.
data RoleSpec a = RoleSpec {
roleSpecKeys :: [Some PublicKey]
, roleSpecThreshold :: KeyThreshold
}
deriving (Show)
instance HasHeader Root where
fileVersion f x = (\y -> x { rootVersion = y }) <$> f (rootVersion x)
fileExpires f x = (\y -> x { rootExpires = y }) <$> f (rootExpires x)
{-------------------------------------------------------------------------------
JSON encoding
-------------------------------------------------------------------------------}
instance Monad m => ToJSON m RootRoles where
toJSON RootRoles{..} = mkObject [
("root" , toJSON rootRolesRoot)
, ("snapshot" , toJSON rootRolesSnapshot)
, ("targets" , toJSON rootRolesTargets)
, ("timestamp" , toJSON rootRolesTimestamp)
, ("mirrors" , toJSON rootRolesMirrors)
]
instance MonadKeys m => FromJSON m RootRoles where
fromJSON enc = do
rootRolesRoot <- fromJSField enc "root"
rootRolesSnapshot <- fromJSField enc "snapshot"
rootRolesTargets <- fromJSField enc "targets"
rootRolesTimestamp <- fromJSField enc "timestamp"
rootRolesMirrors <- fromJSField enc "mirrors"
return RootRoles{..}
instance Monad m => ToJSON m Root where
toJSON Root{..} = mkObject [
("_type" , return $ JSString "Root")
, ("version" , toJSON rootVersion)
, ("expires" , toJSON rootExpires)
, ("keys" , toJSON rootKeys)
, ("roles" , toJSON rootRoles)
]
instance Monad m => ToJSON m (RoleSpec a) where
toJSON RoleSpec{..} = mkObject [
("keyids" , return . JSArray . map writeKeyAsId $ roleSpecKeys)
, ("threshold" , toJSON roleSpecThreshold)
]
-- | We give an instance for Signed Root rather than Root because the key
-- environment from the root data is necessary to resolve the explicit sharing
-- in the signatures.
instance MonadKeys m => FromJSON m (Signed Root) where
fromJSON envelope = do
enc <- fromJSField envelope "signed"
rootKeys <- fromJSField enc "keys"
withKeys rootKeys $ do
verifyType enc "Root"
rootVersion <- fromJSField enc "version"
rootExpires <- fromJSField enc "expires"
rootRoles <- fromJSField enc "roles"
let signed = Root{..}
signatures <- fromJSField envelope "signatures"
validate "signatures" $ verifySignatures enc signatures
return Signed{..}
instance MonadKeys m => FromJSON m (RoleSpec a) where
fromJSON enc = do
roleSpecKeys <- mapM readKeyAsId =<< fromJSField enc "keyids"
roleSpecThreshold <- fromJSField enc "threshold"
return RoleSpec{..}
hackage-security-0.6.2.4/src/Hackage/Security/TUF/Signed.hs 0000644 0000000 0000000 00000020773 07346545000 021437 0 ustar 00 0000000 0000000 -- | Wrapper around an arbitrary datatype that adds signatures
--
-- Note that in the spec there is explicit sharing of keys through key IDs;
-- we translate this to implicit sharing in our Haskell datatypes, with the
-- translation done in the JSON serialization/deserialization.
module Hackage.Security.TUF.Signed (
-- * TUF types
Signed(..)
, Signatures(..)
, Signature(..)
-- * Construction and verification
, unsigned
, withSignatures
, withSignatures'
, signRendered
, verifySignature
-- * JSON aids
, signedFromJSON
, verifySignatures
-- * Avoid interpreting signatures
, UninterpretedSignatures(..)
, PreSignature(..)
-- ** Utility
, fromPreSignature
, fromPreSignatures
, toPreSignature
, toPreSignatures
) where
import MyPrelude
import Control.Monad
import Data.Functor.Identity
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as BS.L
import qualified Data.Set as Set
import Hackage.Security.JSON
import Hackage.Security.Key
import Hackage.Security.TUF.Layout.Repo
import Hackage.Security.Util.Some
import Text.JSON.Canonical
import qualified Hackage.Security.Util.Base64 as B64
{-------------------------------------------------------------------------------
Signed objects
-------------------------------------------------------------------------------}
data Signed a = Signed {
signed :: a
, signatures :: Signatures
}
-- | A list of signatures
--
-- Invariant: each signature must be made with a different key.
-- We enforce this invariant for incoming untrusted data ('fromPreSignatures')
-- but not for lists of signatures that we create in code.
newtype Signatures = Signatures [Signature]
data Signature = Signature {
signature :: BS.ByteString
, signatureKey :: Some PublicKey
}
-- | Create a new document without any signatures
unsigned :: a -> Signed a
unsigned a = Signed { signed = a, signatures = Signatures [] }
-- | Sign a document
withSignatures :: ToJSON WriteJSON a => RepoLayout -> [Some Key] -> a -> Signed a
withSignatures repoLayout keys doc = Signed {
signed = doc
, signatures = signRendered keys $ renderJSON repoLayout doc
}
-- | Variation on 'withSignatures' that doesn't need the repo layout
withSignatures' :: ToJSON Identity a => [Some Key] -> a -> Signed a
withSignatures' keys doc = Signed {
signed = doc
, signatures = signRendered keys $ renderJSON_NoLayout doc
}
-- | Construct signatures for already rendered value
signRendered :: [Some Key] -> BS.L.ByteString -> Signatures
signRendered keys rendered = Signatures $ map go keys
where
go :: Some Key -> Signature
go (Some key) = Signature {
signature = sign (privateKey key) rendered
, signatureKey = Some $ publicKey key
}
verifySignature :: BS.L.ByteString -> Signature -> Bool
verifySignature inp Signature{signature = sig, signatureKey = Some pub} =
verify pub inp sig
instance (Monad m, ToJSON m a) => ToJSON m (Signed a) where
toJSON Signed{..} = mkObject [
("signed" , toJSON signed)
, ("signatures" , toJSON signatures)
]
instance Monad m => ToJSON m Signatures where
toJSON = toJSON . toPreSignatures
instance MonadKeys m => FromJSON m Signatures where
fromJSON = fromPreSignatures <=< fromJSON
{-------------------------------------------------------------------------------
JSON aids
-------------------------------------------------------------------------------}
-- | General FromJSON instance for signed datatypes
--
-- We don't give a general FromJSON instance for Signed because for some
-- datatypes we need to do something special (datatypes where we need to
-- read key environments); for instance, see the "Signed Root" instance.
signedFromJSON :: (MonadKeys m, FromJSON m a) => JSValue -> m (Signed a)
signedFromJSON envelope = do
enc <- fromJSField envelope "signed"
signed <- fromJSON enc
signatures <- fromJSField envelope "signatures"
validate "signatures" $ verifySignatures enc signatures
return Signed{..}
-- | Signature verification
--
-- NOTES:
-- 1. By definition, the signature must be verified against the canonical
-- JSON format. This means we _must_ parse and then pretty print (as
-- we do here) because the document as stored may or may not be in
-- canonical format.
-- 2. However, it is important that we NOT translate from the JSValue
-- to whatever internal datatype we are using and then back to JSValue,
-- because that may not roundtrip: we must allow for additional fields
-- in the JSValue that we ignore (and would therefore lose when we
-- attempt to roundtrip).
-- 3. We verify that all signatures are valid, but we cannot verify (here)
-- that these signatures are signed with the right key, or that we
-- have a sufficient number of signatures. This will be the
-- responsibility of the calling code.
verifySignatures :: JSValue -> Signatures -> Bool
verifySignatures parsed (Signatures sigs) =
all (verifySignature $ renderCanonicalJSON parsed) sigs
{-------------------------------------------------------------------------------
Uninterpreted signatures
-------------------------------------------------------------------------------}
-- | File with uninterpreted signatures
--
-- Sometimes we want to be able to read a file without interpreting the
-- signatures (that is, resolving the key IDs) or doing any kind of checks on
-- them. One advantage of this is that this allows us to read many file types
-- without any key environment at all, which is sometimes useful.
data UninterpretedSignatures a = UninterpretedSignatures {
uninterpretedSigned :: a
, uninterpretedSignatures :: [PreSignature]
}
deriving (Show)
-- | A signature with a key ID (rather than an actual key)
--
-- This corresponds precisely to the TUF representation of a signature.
data PreSignature = PreSignature {
presignature :: BS.ByteString
, presigMethod :: Some KeyType
, presigKeyId :: KeyId
}
deriving (Show)
-- | Convert a pre-signature to a signature
--
-- Verifies that the key type matches the advertised method.
fromPreSignature :: MonadKeys m => PreSignature -> m Signature
fromPreSignature PreSignature{..} = do
key <- lookupKey presigKeyId
validate "key type" $ typecheckSome key presigMethod
return Signature {
signature = presignature
, signatureKey = key
}
-- | Convert signature to pre-signature
toPreSignature :: Signature -> PreSignature
toPreSignature Signature{..} = PreSignature {
presignature = signature
, presigMethod = somePublicKeyType signatureKey
, presigKeyId = someKeyId signatureKey
}
-- | Convert a list of 'PreSignature's to a list of 'Signature's
--
-- This verifies the invariant that all signatures are made with different keys.
-- We do this on the presignatures rather than the signatures so that we can do
-- the check on key IDs, rather than keys (the latter don't have an Ord
-- instance).
fromPreSignatures :: MonadKeys m => [PreSignature] -> m Signatures
fromPreSignatures sigs = do
validate "all signatures made with different keys" $
Set.size (Set.fromList (map presigKeyId sigs)) == length sigs
Signatures <$> mapM fromPreSignature sigs
-- | Convert list of pre-signatures to a list of signatures
toPreSignatures :: Signatures -> [PreSignature]
toPreSignatures (Signatures sigs) = map toPreSignature sigs
instance ReportSchemaErrors m => FromJSON m PreSignature where
fromJSON enc = do
kId <- fromJSField enc "keyid"
method <- fromJSField enc "method"
sig <- fromJSField enc "sig"
return PreSignature {
presignature = B64.toByteString sig
, presigMethod = method
, presigKeyId = KeyId kId
}
instance Monad m => ToJSON m PreSignature where
toJSON PreSignature{..} = mkObject [
("keyid" , return $ JSString . keyIdString $ presigKeyId)
, ("method" , toJSON $ presigMethod)
, ("sig" , toJSON $ B64.fromByteString presignature)
]
instance ( ReportSchemaErrors m
, FromJSON m a
) => FromJSON m (UninterpretedSignatures a) where
fromJSON envelope = do
enc <- fromJSField envelope "signed"
uninterpretedSigned <- fromJSON enc
uninterpretedSignatures <- fromJSField envelope "signatures"
return UninterpretedSignatures{..}
instance (Monad m, ToJSON m a) => ToJSON m (UninterpretedSignatures a) where
toJSON UninterpretedSignatures{..} = mkObject [
("signed" , toJSON uninterpretedSigned)
, ("signatures" , toJSON uninterpretedSignatures)
]
hackage-security-0.6.2.4/src/Hackage/Security/TUF/Snapshot.hs 0000644 0000000 0000000 00000007706 07346545000 022026 0 ustar 00 0000000 0000000 {-# LANGUAGE UndecidableInstances #-}
module Hackage.Security.TUF.Snapshot (
Snapshot(..)
) where
import MyPrelude
import Control.Monad.Except
import Control.Monad.Reader
import Hackage.Security.JSON
import Hackage.Security.TUF.Header
import Hackage.Security.TUF.FileInfo
import Hackage.Security.TUF.FileMap
import Hackage.Security.TUF.Layout.Repo
import Hackage.Security.TUF.Signed
import qualified Hackage.Security.TUF.FileMap as FileMap
import Hackage.Security.Util.Pretty (pretty)
{-------------------------------------------------------------------------------
Datatypes
-------------------------------------------------------------------------------}
data Snapshot = Snapshot {
snapshotVersion :: FileVersion
, snapshotExpires :: FileExpires
-- | File info for the root metadata
--
-- We list this explicitly in the snapshot so that we can check if we need
-- to update the root metadata without first having to download the entire
-- index tarball.
, snapshotInfoRoot :: FileInfo
-- | File info for the mirror metadata
, snapshotInfoMirrors :: FileInfo
-- | Compressed index tarball
, snapshotInfoTarGz :: FileInfo
-- | Uncompressed index tarball
--
-- Repositories are not required to provide this.
, snapshotInfoTar :: Maybe FileInfo
}
instance HasHeader Snapshot where
fileVersion f x = (\y -> x { snapshotVersion = y }) <$> f (snapshotVersion x)
fileExpires f x = (\y -> x { snapshotExpires = y }) <$> f (snapshotExpires x)
{-------------------------------------------------------------------------------
JSON
-------------------------------------------------------------------------------}
instance MonadReader RepoLayout m => ToJSON m Snapshot where
toJSON Snapshot{..} = do
repoLayout <- ask
mkObject [
("_type" , return $ JSString "Snapshot")
, ("version" , toJSON snapshotVersion)
, ("expires" , toJSON snapshotExpires)
, ("meta" , toJSON (snapshotMeta repoLayout))
]
where
snapshotMeta repoLayout = FileMap.fromList $ [
(pathRoot repoLayout , snapshotInfoRoot)
, (pathMirrors repoLayout , snapshotInfoMirrors)
, (pathIndexTarGz repoLayout , snapshotInfoTarGz)
] ++
[ (pathIndexTar repoLayout , infoTar) | Just infoTar <- [snapshotInfoTar] ]
instance ( MonadReader RepoLayout m
, MonadError DeserializationError m
, ReportSchemaErrors m
) => FromJSON m Snapshot where
fromJSON enc = do
verifyType enc "Snapshot"
repoLayout <- ask
snapshotVersion <- fromJSField enc "version"
snapshotExpires <- fromJSField enc "expires"
snapshotMeta <- fromJSField enc "meta"
let lookupMeta k = case FileMap.lookup k snapshotMeta of
Nothing -> expected ("\"" ++ pretty k ++ "\" entry in .meta object") Nothing
Just v -> pure v
snapshotInfoRoot <- lookupMeta (pathRoot repoLayout)
snapshotInfoMirrors <- lookupMeta (pathMirrors repoLayout)
snapshotInfoTarGz <- lookupMeta (pathIndexTarGz repoLayout)
let snapshotInfoTar = FileMap.lookup (pathIndexTar repoLayout) snapshotMeta
return Snapshot{..}
instance (MonadKeys m, MonadReader RepoLayout m) => FromJSON m (Signed Snapshot) where
fromJSON = signedFromJSON
{-------------------------------------------------------------------------------
Paths used in the snapshot
NOTE: Since the snapshot lives in the top-level directory of the repository,
we can safely reinterpret "relative to the repo root" as "relative to the
snapshot"; hence, this use of 'castRoot' is okay.
-------------------------------------------------------------------------------}
pathRoot, pathMirrors, pathIndexTarGz, pathIndexTar :: RepoLayout -> TargetPath
pathRoot = TargetPathRepo . repoLayoutRoot
pathMirrors = TargetPathRepo . repoLayoutMirrors
pathIndexTarGz = TargetPathRepo . repoLayoutIndexTarGz
pathIndexTar = TargetPathRepo . repoLayoutIndexTar
hackage-security-0.6.2.4/src/Hackage/Security/TUF/Targets.hs 0000644 0000000 0000000 00000011274 07346545000 021633 0 ustar 00 0000000 0000000 module Hackage.Security.TUF.Targets (
-- * TUF types
Targets(..)
, Delegations(..)
, DelegationSpec(..)
, Delegation(..)
-- ** Util
, targetsLookup
) where
import MyPrelude
import Hackage.Security.JSON
import Hackage.Security.Key
import Hackage.Security.Key.Env (KeyEnv)
import Hackage.Security.TUF.Common
import Hackage.Security.TUF.FileInfo
import Hackage.Security.TUF.FileMap (FileMap, TargetPath)
import Hackage.Security.TUF.Header
import Hackage.Security.TUF.Patterns
import Hackage.Security.TUF.Signed
import Hackage.Security.Util.Some
import qualified Hackage.Security.TUF.FileMap as FileMap
{-------------------------------------------------------------------------------
TUF types
-------------------------------------------------------------------------------}
-- | Target metadata
--
-- Most target files do not need expiry dates because they are not subject to
-- change (and hence attacks like freeze attacks are not a concern).
data Targets = Targets {
targetsVersion :: FileVersion
, targetsExpires :: FileExpires
, targetsTargets :: FileMap
, targetsDelegations :: Maybe Delegations
}
deriving (Show)
-- | Delegations
--
-- Much like the Root datatype, this must have an invariant that ALL used keys
-- (apart from the global keys, which are in the root key environment) must
-- be listed in 'delegationsKeys'.
data Delegations = Delegations {
delegationsKeys :: KeyEnv
, delegationsRoles :: [DelegationSpec]
}
deriving (Show)
-- | Delegation specification
--
-- NOTE: This is a close analogue of 'RoleSpec'.
data DelegationSpec = DelegationSpec {
delegationSpecKeys :: [Some PublicKey]
, delegationSpecThreshold :: KeyThreshold
, delegation :: Delegation
}
deriving (Show)
instance HasHeader Targets where
fileVersion f x = (\y -> x { targetsVersion = y }) <$> f (targetsVersion x)
fileExpires f x = (\y -> x { targetsExpires = y }) <$> f (targetsExpires x)
{-------------------------------------------------------------------------------
Utility
-------------------------------------------------------------------------------}
targetsLookup :: TargetPath -> Targets -> Maybe FileInfo
targetsLookup fp Targets{..} = FileMap.lookup fp targetsTargets
{-------------------------------------------------------------------------------
JSON
-------------------------------------------------------------------------------}
instance Monad m => ToJSON m DelegationSpec where
toJSON DelegationSpec{delegation = Delegation fp name, ..} = mkObject [
("name" , toJSON name)
, ("keyids" , return . JSArray . map writeKeyAsId $ delegationSpecKeys)
, ("threshold" , toJSON delegationSpecThreshold)
, ("path" , toJSON fp)
]
instance MonadKeys m => FromJSON m DelegationSpec where
fromJSON enc = do
delegationName <- fromJSField enc "name"
delegationSpecKeys <- mapM readKeyAsId =<< fromJSField enc "keyids"
delegationSpecThreshold <- fromJSField enc "threshold"
delegationPath <- fromJSField enc "path"
case parseDelegation delegationName delegationPath of
Left err -> expected ("valid name/path combination: " ++ err) Nothing
Right delegation -> return DelegationSpec{..}
-- NOTE: Unlike the Root object, the keys that are used to sign the delegations
-- are NOT listed inside the delegations, so the same "bootstrapping" problems
-- do not arise here.
instance Monad m => ToJSON m Delegations where
toJSON Delegations{..} = mkObject [
("keys" , toJSON delegationsKeys)
, ("roles" , toJSON delegationsRoles)
]
instance MonadKeys m => FromJSON m Delegations where
fromJSON enc = do
delegationsKeys <- fromJSField enc "keys"
delegationsRoles <- fromJSField enc "roles"
return Delegations{..}
instance Monad m => ToJSON m Targets where
toJSON Targets{..} = mkObject $ mconcat [
[ ("_type" , return $ JSString "Targets")
, ("version" , toJSON targetsVersion)
, ("expires" , toJSON targetsExpires)
, ("targets" , toJSON targetsTargets)
]
, [ ("delegations" , toJSON d) | Just d <- [ targetsDelegations ] ]
]
instance MonadKeys m => FromJSON m Targets where
fromJSON enc = do
verifyType enc "Targets"
targetsVersion <- fromJSField enc "version"
targetsExpires <- fromJSField enc "expires"
targetsTargets <- fromJSField enc "targets"
targetsDelegations <- fromJSOptField enc "delegations"
return Targets{..}
-- TODO: This is okay right now because targets do not introduce additional
-- keys, but will no longer be okay once we have author keys.
instance MonadKeys m => FromJSON m (Signed Targets) where
fromJSON = signedFromJSON
hackage-security-0.6.2.4/src/Hackage/Security/TUF/Timestamp.hs 0000644 0000000 0000000 00000005615 07346545000 022167 0 ustar 00 0000000 0000000 {-# LANGUAGE UndecidableInstances #-}
module Hackage.Security.TUF.Timestamp (
Timestamp(..)
) where
import MyPrelude
import Control.Monad.Except
import Control.Monad.Reader
import Hackage.Security.JSON
import Hackage.Security.TUF.FileInfo
import Hackage.Security.TUF.FileMap
import Hackage.Security.TUF.Header
import Hackage.Security.TUF.Layout.Repo
import Hackage.Security.TUF.Signed
import qualified Hackage.Security.TUF.FileMap as FileMap
import Hackage.Security.Util.Pretty (pretty)
{-------------------------------------------------------------------------------
Datatypes
-------------------------------------------------------------------------------}
data Timestamp = Timestamp {
timestampVersion :: FileVersion
, timestampExpires :: FileExpires
, timestampInfoSnapshot :: FileInfo
}
instance HasHeader Timestamp where
fileVersion f x = (\y -> x { timestampVersion = y }) <$> f (timestampVersion x)
fileExpires f x = (\y -> x { timestampExpires = y }) <$> f (timestampExpires x)
{-------------------------------------------------------------------------------
JSON
-------------------------------------------------------------------------------}
instance MonadReader RepoLayout m => ToJSON m Timestamp where
toJSON Timestamp{..} = do
repoLayout <- ask
mkObject [
("_type" , return $ JSString "Timestamp")
, ("version" , toJSON timestampVersion)
, ("expires" , toJSON timestampExpires)
, ("meta" , toJSON (timestampMeta repoLayout))
]
where
timestampMeta repoLayout = FileMap.fromList [
(pathSnapshot repoLayout, timestampInfoSnapshot)
]
instance ( MonadReader RepoLayout m
, MonadError DeserializationError m
, ReportSchemaErrors m
) => FromJSON m Timestamp where
fromJSON enc = do
verifyType enc "Timestamp"
repoLayout <- ask
timestampVersion <- fromJSField enc "version"
timestampExpires <- fromJSField enc "expires"
timestampMeta <- fromJSField enc "meta"
let lookupMeta k = case FileMap.lookup k timestampMeta of
Nothing -> expected ("\"" ++ pretty k ++ "\" entry in .meta object") Nothing
Just v -> pure v
timestampInfoSnapshot <- lookupMeta (pathSnapshot repoLayout)
return Timestamp{..}
instance (MonadKeys m, MonadReader RepoLayout m) => FromJSON m (Signed Timestamp) where
fromJSON = signedFromJSON
{-------------------------------------------------------------------------------
Paths used in the timestamp
NOTE: Since the timestamp lives in the top-level directory of the repository,
we can safely reinterpret "relative to the repo root" as "relative to the
timestamp"; hence, this use of 'castRoot' is okay.
-------------------------------------------------------------------------------}
pathSnapshot :: RepoLayout -> TargetPath
pathSnapshot = TargetPathRepo . repoLayoutSnapshot
hackage-security-0.6.2.4/src/Hackage/Security/Trusted.hs 0000644 0000000 0000000 00000004232 07346545000 021212 0 ustar 00 0000000 0000000 {-# LANGUAGE CPP #-}
#if __GLASGOW_HASKELL__ >= 710
{-# LANGUAGE StaticPointers #-}
#endif
module Hackage.Security.Trusted (
module Hackage.Security.Trusted.TCB
-- * Derived functions
, (<$$>)
-- ** Role verification
, VerifyRole(..)
-- ** File info verification
, trustedFileInfoEqual
) where
import MyPrelude
import Data.Function (on)
import Data.Time
import Hackage.Security.TUF
import Hackage.Security.Trusted.TCB hiding (DeclareTrusted)
{-------------------------------------------------------------------------------
Combinators on trusted values
-------------------------------------------------------------------------------}
-- | Apply a static function to a trusted argument
(<$$>) :: StaticPtr (a -> b) -> Trusted a -> Trusted b
(<$$>) = trustApply . trustStatic
{-------------------------------------------------------------------------------
Role verification
-------------------------------------------------------------------------------}
class VerifyRole a where
verifyRole :: Trusted Root -- ^ Root data
-> TargetPath -- ^ Source (for error messages)
-> Maybe FileVersion -- ^ Previous version (if available)
-> Maybe UTCTime -- ^ Time now (if checking expiry)
-> Signed a -- ^ Mirrors to verify
-> Either VerificationError (SignaturesVerified a)
instance VerifyRole Root where
verifyRole = verifyRole' . (static (rootRolesRoot . rootRoles) <$$>)
instance VerifyRole Timestamp where
verifyRole = verifyRole' . (static (rootRolesTimestamp . rootRoles) <$$>)
instance VerifyRole Snapshot where
verifyRole = verifyRole' . (static (rootRolesSnapshot . rootRoles) <$$>)
instance VerifyRole Mirrors where
verifyRole = verifyRole' . (static (rootRolesMirrors . rootRoles) <$$>)
{-------------------------------------------------------------------------------
File info verification
-------------------------------------------------------------------------------}
-- | Variation on 'knownFileInfoEqual' for 'Trusted' 'FileInfo'
trustedFileInfoEqual :: Trusted FileInfo -> Trusted FileInfo -> Bool
trustedFileInfoEqual = knownFileInfoEqual `on` trusted
hackage-security-0.6.2.4/src/Hackage/Security/Trusted/ 0000755 0000000 0000000 00000000000 07346545000 020655 5 ustar 00 0000000 0000000 hackage-security-0.6.2.4/src/Hackage/Security/Trusted/TCB.hs 0000644 0000000 0000000 00000031452 07346545000 021626 0 ustar 00 0000000 0000000 {-# LANGUAGE CPP #-}
module Hackage.Security.Trusted.TCB (
-- * Trusted values
Trusted(DeclareTrusted)
, trusted
, trustStatic
, trustVerified
, trustApply
, trustElems
-- * Verification errors
, VerificationError(..)
, RootUpdated(..)
, VerificationHistory
-- * Role verification
, SignaturesVerified -- opaque
, signaturesVerified
, verifyRole'
, verifyFingerprints
#if __GLASGOW_HASKELL__ >= 710
-- * Re-exports
, StaticPtr
#else
-- * Fake static pointers
, StaticPtr
, static
#endif
) where
import MyPrelude
import Control.Exception
import Control.Monad (when, unless)
import Control.Monad.Except (Except, runExcept, throwError)
import Data.Typeable
import Data.Time
import Hackage.Security.TUF
import Hackage.Security.JSON
import Hackage.Security.Key
import Hackage.Security.Util.Pretty
import qualified Hackage.Security.Util.Lens as Lens
#if __GLASGOW_HASKELL__ >= 710
import GHC.StaticPtr
#else
-- Fake static pointers for ghc < 7.10. This means Trusted offers no
-- additional type safety, but that's okay: we can still verify the code
-- with ghc 7.10 and get the additional checks.
newtype StaticPtr a = StaticPtr { deRefStaticPtr :: a }
static :: a -> StaticPtr a
static = StaticPtr
#endif
-- | Trusted values
--
-- Trusted values originate in only two ways:
--
-- * Anything that is statically known is trusted ('trustStatic')
-- * If we have "dynamic" data we can trust it once we have verified the
-- the signatures (trustSigned).
--
-- NOTE: Trusted is NOT a functor. If it was we could define
--
-- > trustAnything :: a -> Trusted a
-- > trustAnything a = fmap (const a) (trustStatic (static ()))
--
-- Consequently, it is neither a monad nor a comonad. However, we _can_ apply
-- trusted functions to trusted arguments ('trustApply').
--
-- The 'DeclareTrusted' constructor is exported, but any use of it should be
-- verified.
newtype Trusted a = DeclareTrusted { trusted :: a }
deriving (Eq, Show)
trustStatic :: StaticPtr a -> Trusted a
trustStatic = DeclareTrusted . deRefStaticPtr
trustVerified :: SignaturesVerified a -> Trusted a
trustVerified = DeclareTrusted . signaturesVerified
-- | Equivalent of '<*>'
--
-- Trusted isn't quite applicative (no pure, not a functor), but it is
-- somehow Applicative-like: we have the equivalent of '<*>'
trustApply :: Trusted (a -> b) -> Trusted a -> Trusted b
trustApply (DeclareTrusted f) (DeclareTrusted x) = DeclareTrusted (f x)
-- | Trust all elements of some trusted (traversable) container
--
-- If we have, say, a trusted list of values, we should be able to get a list
-- of trusted values out of it.
--
-- > trustElems :: Trusted [a] -> [Trusted a]
--
-- NOTE. It might appear that the more natural primitive to offer is a
-- 'sequenceA'-like operator such as
--
-- > trustSeq :: Applicative f => Trusted (f a) -> f (Trusted a)
--
-- However, this is unsound. To see this, consider that @((->) a)@ is
-- 'Applicative' (it's the reader monad); hence, we can instantiate 'trustSeq'
-- at
--
-- > trustSeq :: Trusted (a -> a) -> a -> Trusted a
--
-- and by passing @trustStatic (static id)@ make 'Trusted' a functor, which we
-- certainly don't want to do (see comments for 'Trusted').
--
-- So why is it okay when we insist on 'Traversable' rather than 'Applicative'?
-- To see this, it's instructive to consider how we might make a @((->) a)@ an
-- instance of 'Traversable'. If we define the domain of enumerable types as
--
-- > class Eq a => Enumerable a where
-- > enumerate :: [a]
--
-- then we can make @((->) r)@ traversable by
--
-- > instance Enumerable r => Traversable ((->) r) where
-- > sequenceA f = rebuild <$> sequenceA ((\r -> (r,) <$> f r) <$> enumerate)
-- > where
-- > rebuild :: [(r, a)] -> r -> a
-- > rebuild fun arg = fromJust (lookup arg fun)
--
-- The idea is that if the domain of a function is enumerable, we can apply the
-- function to each possible input, collect the outputs, and construct a new
-- function by pairing the inputs with the outputs. I.e., if we had something of
-- type
--
-- > a -> IO b
--
-- and @a@ is enumerable, we just run the @IO@ action on each possible @a@ and
-- collect all @b@s to get a pure function @a -> b@. Of course, you probably
-- don't want to be doing that, but the point is that as far as the type system
-- is concerned you could.
--
-- In the context of 'Trusted', this means that we can derive
--
-- > enumPure :: Enumerable a => a -> Trusted a
--
-- but in a way this this makes sense anyway. If a domain is enumerable, it
-- would not be unreasonable to change @Enumerable@ to
--
-- > class Eq a => Enumerable a where
-- > enumerate :: [StaticPtr a]
--
-- so we could define @enumPure@ as
--
-- > enumPure :: Enumerable a => a -> Trusted a
-- > enumPure x = trustStatic
-- > $ fromJust (find ((== x) . deRefStaticPtr) enumerate)
--
-- In other words, we just enumerate the entire domain as trusted values
-- (because we defined them locally) and then return the one that matched the
-- untrusted value.
--
-- The conclusion from all of this is that the types of untrusted input (like
-- the types of the TUF files we download from the server) should probably not
-- be considered enumerable.
trustElems :: Traversable f => Trusted (f a) -> f (Trusted a)
trustElems (DeclareTrusted fa) = DeclareTrusted `fmap` fa
{-------------------------------------------------------------------------------
Role verification
-------------------------------------------------------------------------------}
newtype SignaturesVerified a = SignaturesVerified { signaturesVerified :: a }
-- | Errors thrown during role validation
data VerificationError =
-- | Not enough signatures signed with the appropriate keys
VerificationErrorSignatures TargetPath -- what were we verifying?
Integer -- threshold
[KeyId] -- trusted keys
[KeyId] -- found signing keys
-- | The file is expired
| VerificationErrorExpired TargetPath
-- | The file version is less than the previous version
| VerificationErrorVersion TargetPath
-- | File information mismatch
| VerificationErrorFileInfo TargetPath
-- | We tried to lookup file information about a particular target file,
-- but the information wasn't in the corresponding @targets.json@ file.
| VerificationErrorUnknownTarget TargetPath
-- | The metadata for the specified target is missing a SHA256
| VerificationErrorMissingSHA256 TargetPath
-- | Some verification errors materialize as deserialization errors
--
-- For example: if we try to deserialize a timestamp file but the timestamp
-- key has been rolled over, deserialization of the file will fail with
-- 'DeserializationErrorUnknownKey'.
| VerificationErrorDeserialization TargetPath DeserializationError
-- | The spec stipulates that if a verification error occurs during
-- the check for updates, we must download new root information and
-- start over. However, we limit how often we attempt this.
--
-- We record all verification errors that occurred before we gave up.
| VerificationErrorLoop VerificationHistory
deriving (Typeable)
-- | Root metadata updated (as part of the normal update process)
data RootUpdated = RootUpdated
deriving (Typeable)
type VerificationHistory = [Either RootUpdated VerificationError]
#if MIN_VERSION_base(4,8,0)
deriving instance Show VerificationError
deriving instance Show RootUpdated
instance Exception VerificationError where displayException = pretty
instance Exception RootUpdated where displayException = pretty
#else
instance Exception VerificationError
instance Show VerificationError where show = pretty
instance Show RootUpdated where show = pretty
instance Exception RootUpdated
#endif
indentedLines :: [String] -> String
indentedLines = unlines . map (" " ++)
instance Pretty VerificationError where
pretty (VerificationErrorSignatures file threshold trusted sigs) =
pretty file ++ " does not have enough signatures signed with the appropriate keys\n"
++ "Expected at least " ++ show threshold ++ " signatures from:\n"
++ indentedLines (map keyIdString trusted)
++ "Found signatures from:\n"
++ indentedLines (map keyIdString sigs)
pretty (VerificationErrorExpired file) =
pretty file ++ " is expired"
pretty (VerificationErrorVersion file) =
"Version of " ++ pretty file ++ " is less than the previous version"
pretty (VerificationErrorFileInfo file) =
"Invalid hash for " ++ pretty file
pretty (VerificationErrorUnknownTarget file) =
pretty file ++ " not found in corresponding target metadata"
pretty (VerificationErrorMissingSHA256 file) =
"Missing SHA256 hash for " ++ pretty file
pretty (VerificationErrorDeserialization file err) =
"Could not deserialize " ++ pretty file ++ ": " ++ pretty err
pretty (VerificationErrorLoop es) =
"Verification loop. Errors in order:\n"
++ indentedLines (map (either pretty pretty) es)
instance Pretty RootUpdated where
pretty RootUpdated = "Root information updated"
-- | Role verification
--
-- NOTE: We throw an error when the version number _decreases_, but allow it
-- to be the same. This is sufficient: the file number is there so that
-- attackers cannot replay old files. It cannot protect against freeze attacks
-- (that's what the expiry date is for), so "replaying" the same file is not
-- a problem. If an attacker changes the contents of the file but not the
-- version number we have an inconsistent situation, but this is not something
-- we need to worry about: in this case the attacker will need to resign the
-- file or otherwise the signature won't match, and if the attacker has
-- compromised the key then he might just as well increase the version number
-- and resign.
--
-- NOTE 2: We are not actually verifying the signatures _themselves_ here
-- (we did that when we parsed the JSON). We are merely verifying the provenance
-- of the keys.
verifyRole' :: forall a. HasHeader a
=> Trusted (RoleSpec a) -- ^ For signature validation
-> TargetPath -- ^ File source (for error messages)
-> Maybe FileVersion -- ^ Previous version (if available)
-> Maybe UTCTime -- ^ Time now (if checking expiry)
-> Signed a -> Either VerificationError (SignaturesVerified a)
verifyRole' (trusted -> RoleSpec{roleSpecThreshold = KeyThreshold threshold, ..})
targetPath
mPrev
mNow
Signed{signatures = Signatures sigs, ..} =
runExcept go
where
go :: Except VerificationError (SignaturesVerified a)
go = do
-- Verify expiry date
case mNow of
Just now ->
when (isExpired now (Lens.get fileExpires signed)) $
throwError $ VerificationErrorExpired targetPath
_otherwise ->
return ()
-- Verify timestamp
case mPrev of
Nothing -> return ()
Just prev ->
when (Lens.get fileVersion signed < prev) $
throwError $ VerificationErrorVersion targetPath
-- Verify signatures
-- NOTE: We only need to verify the keys that were used; if the signature
-- was invalid we would already have thrown an error constructing Signed.
-- (Similarly, if two signatures were made by the same key, the FromJSON
-- instance for Signatures would have thrown an error.)
let nSigs = length (filter isRoleSpecKey sigs)
unless (nSigs >= fromIntegral threshold) $
throwError $ VerificationErrorSignatures targetPath (fromIntegral threshold) trustedKeys signingKeys
-- Everything is A-OK!
return $ SignaturesVerified signed
isRoleSpecKey :: Signature -> Bool
isRoleSpecKey Signature{..} = signatureKey `elem` roleSpecKeys
trustedKeys, signingKeys :: [KeyId]
trustedKeys = map someKeyId roleSpecKeys
signingKeys = map (someKeyId . signatureKey) sigs
-- | Variation on 'verifyRole' that uses key IDs rather than keys
--
-- This is used during the bootstrap process.
--
-- See .
verifyFingerprints :: [KeyId]
-> KeyThreshold
-> TargetPath -- ^ For error messages
-> Signed Root
-> Either VerificationError (SignaturesVerified Root)
verifyFingerprints fingerprints
(KeyThreshold threshold)
targetPath
Signed{signatures = Signatures sigs, ..} =
if length (filter isTrustedKey signingKeys) >= fromIntegral threshold
then Right $ SignaturesVerified signed
else Left $ VerificationErrorSignatures targetPath (fromIntegral threshold) fingerprints signingKeys
where
signingKeys :: [KeyId]
signingKeys = map (someKeyId . signatureKey) sigs
isTrustedKey :: KeyId -> Bool
isTrustedKey key = key `elem` fingerprints
hackage-security-0.6.2.4/src/Hackage/Security/Util/ 0000755 0000000 0000000 00000000000 07346545000 020140 5 ustar 00 0000000 0000000 hackage-security-0.6.2.4/src/Hackage/Security/Util/Base64.hs 0000644 0000000 0000000 00000001643 07346545000 021524 0 ustar 00 0000000 0000000 module Hackage.Security.Util.Base64 (
Base64 -- opaque
, fromByteString
, toByteString
) where
import MyPrelude
import Data.ByteString (ByteString)
import qualified Data.ByteString.Char8 as C8 -- only called on B64-enc strings
import qualified Data.ByteString.Base64 as B64
import Hackage.Security.Util.JSON
-- | Simple wrapper around bytestring with ToJSON and FromJSON instances that
-- use base64 encoding.
newtype Base64 = Base64 ByteString
fromByteString :: ByteString -> Base64
fromByteString = Base64
toByteString :: Base64 -> ByteString
toByteString (Base64 bs) = bs
instance Monad m => ToJSON m Base64 where
toJSON (Base64 bs) = toJSON (C8.unpack (B64.encode bs))
instance ReportSchemaErrors m => FromJSON m Base64 where
fromJSON val = do
str <- fromJSON val
case B64.decode (C8.pack str) of
Left _err -> expected "base-64 encoded string" Nothing
Right bs -> return $ Base64 bs
hackage-security-0.6.2.4/src/Hackage/Security/Util/Checked.hs 0000644 0000000 0000000 00000010376 07346545000 022031 0 ustar 00 0000000 0000000 {-# LANGUAGE CPP #-}
{-# OPTIONS_GHC -fno-warn-unused-binds #-}
#if __GLASGOW_HASKELL__ >= 800
{-# OPTIONS_GHC -Wno-redundant-constraints #-}
#endif
#if __GLASGOW_HASKELL__ >= 708
{-# LANGUAGE RoleAnnotations #-}
{-# LANGUAGE IncoherentInstances #-}
#endif
{-# LANGUAGE DeriveDataTypeable#-}
-- | Checked exceptions
module Hackage.Security.Util.Checked (
Throws
, unthrow
-- ** Base exceptions
, throwChecked
, catchChecked
, handleChecked
, tryChecked
, checkIO
, throwUnchecked
, internalError
) where
import MyPrelude
import Control.Exception (Exception, IOException)
import qualified Control.Exception as Base
import Data.Typeable (Typeable)
#if __GLASGOW_HASKELL__ >= 708
import GHC.Prim (coerce)
#else
import Unsafe.Coerce (unsafeCoerce)
#endif
{-------------------------------------------------------------------------------
Basic infrastructure
-------------------------------------------------------------------------------}
-- | Checked exceptions
class Throws e where
#if __GLASGOW_HASKELL__ >= 708
type role Throws representational
#endif
unthrow :: forall a e proxy . proxy e -> (Throws e => a) -> a
unthrow _ x = unWrap (coerceWrap (Wrap x :: Wrap e a))
{-------------------------------------------------------------------------------
Base exceptions
-------------------------------------------------------------------------------}
-- | Determine if an exception is asynchronous, based on its type.
isAsync :: Exception e => e -> Bool
#if MIN_VERSION_base(4, 7, 0)
isAsync e =
case Base.fromException $ Base.toException e of
Just Base.SomeAsyncException{} -> True
Nothing -> False
#else
-- Earlier versions of GHC had no SomeAsyncException. We have to
-- instead make up a list of async exceptions.
isAsync e =
let se = Base.toException e
in case () of
()
| Just (_ :: Base.AsyncException) <- Base.fromException se -> True
| show e == "<>" -> True
| otherwise -> False
#endif
-- | 'Base.catch', but immediately rethrows asynchronous exceptions
-- (as determined by 'isAsync').
catchSync :: Exception e => IO a -> (e -> IO a) -> IO a
catchSync act onErr = act `Base.catch` \e ->
if isAsync e
then Base.throwIO e
else onErr e
-- | Wraps up an async exception as a synchronous exception.
newtype SyncException = SyncException Base.SomeException
deriving (Show, Typeable)
instance Exception SyncException
-- | Throw a checked exception
throwChecked :: (Exception e, Throws e) => e -> IO a
throwChecked e
| isAsync e = Base.throwIO $ SyncException $ Base.toException e
| otherwise = Base.throwIO e
-- | Catch a checked exception
catchChecked :: forall a e. Exception e
=> (Throws e => IO a) -> (e -> IO a) -> IO a
catchChecked act = catchSync (unthrow (Proxy :: Proxy e) act)
-- | 'catchChecked' with the arguments reversed
handleChecked :: Exception e => (e -> IO a) -> (Throws e => IO a) -> IO a
handleChecked act handler = catchChecked handler act
-- | Like 'try', but for checked exceptions
tryChecked :: Exception e => (Throws e => IO a) -> IO (Either e a)
tryChecked act = catchChecked (Right <$> act) (return . Left)
-- | Rethrow IO exceptions as checked exceptions
checkIO :: Throws IOException => IO a -> IO a
checkIO = Base.handle $ \(ex :: IOException) -> throwChecked ex
-- | Throw an unchecked exception
--
-- This is just an alias for 'throw', but makes it evident that this is a very
-- intentional use of an unchecked exception.
throwUnchecked :: Exception e => e -> IO a
throwUnchecked = Base.throwIO
-- | Variation on 'throwUnchecked' for internal errors
internalError :: String -> IO a
internalError = throwUnchecked . userError
{-------------------------------------------------------------------------------
Auxiliary definitions (not exported)
-------------------------------------------------------------------------------}
-- | Wrap an action that may throw a checked exception
--
-- This is used internally in 'unthrow' to avoid impredicative
-- instantiation of the type of 'coerce'/'unsafeCoerce'.
newtype Wrap e a = Wrap { unWrap :: Throws e => a }
coerceWrap :: Wrap e a -> Wrap (Catch e) a
#if __GLASGOW_HASKELL__ >= 708
coerceWrap = coerce
#else
coerceWrap = unsafeCoerce
#endif
data Proxy a = Proxy
newtype Catch a = Catch a
instance Throws (Catch e) where
hackage-security-0.6.2.4/src/Hackage/Security/Util/Exit.hs 0000644 0000000 0000000 00000001765 07346545000 021416 0 ustar 00 0000000 0000000 module Hackage.Security.Util.Exit where
import MyPrelude
import Control.Monad (liftM)
import Control.Monad.Except (ExceptT, runExceptT, throwError)
{-------------------------------------------------------------------------------
Auxiliary: multiple exit points
-------------------------------------------------------------------------------}
-- | Multiple exit points
--
-- We can simulate the imperative code
--
-- > if (cond1)
-- > return exp1;
-- > if (cond2)
-- > return exp2;
-- > if (cond3)
-- > return exp3;
-- > return exp4;
--
-- as
--
-- > multipleExitPoints $ do
-- > when (cond1) $
-- > exit exp1
-- > when (cond2) $
-- > exit exp2
-- > when (cond3) $
-- > exit exp3
-- > return exp4
multipleExitPoints :: Monad m => ExceptT a m a -> m a
multipleExitPoints = liftM aux . runExceptT
where
aux :: Either a a -> a
aux (Left a) = a
aux (Right a) = a
-- | Function exit point (see 'multipleExitPoints')
exit :: Monad m => e -> ExceptT e m a
exit = throwError
hackage-security-0.6.2.4/src/Hackage/Security/Util/IO.hs 0000644 0000000 0000000 00000012102 07346545000 020777 0 ustar 00 0000000 0000000 {-# LANGUAGE CPP #-}
module Hackage.Security.Util.IO (
-- * Miscelleneous
getFileSize
, handleDoesNotExist
, WithDirLockEvent(..)
, withDirLock
-- * Debugging
, timedIO
) where
import MyPrelude
import Control.Concurrent (threadDelay)
import Control.Exception
import Data.Time
import System.IO hiding (openTempFile, withFile)
import System.IO.Error
import Hackage.Security.Util.Path
#ifdef MIN_VERSION_lukko
import Lukko (FD, fileLockingSupported, fdOpen, fdClose, fdLock, fdUnlock, LockMode(ExclusiveLock))
#else
import GHC.IO.Handle.Lock (hLock, LockMode(ExclusiveLock), FileLockingNotSupported)
#if MIN_VERSION_base(4,11,0)
import GHC.IO.Handle.Lock (hUnlock)
#endif
#endif
{-------------------------------------------------------------------------------
Miscelleneous
-------------------------------------------------------------------------------}
getFileSize :: (Num a, FsRoot root) => Path root -> IO a
getFileSize fp = fromInteger <$> withFile fp ReadMode hFileSize
handleDoesNotExist :: IO a -> IO (Maybe a)
handleDoesNotExist act =
handle aux (Just <$> act)
where
aux e =
if isDoesNotExistError e
then return Nothing
else throwIO e
data WithDirLockEvent
= WithDirLockEventPre (Path Absolute)
| WithDirLockEventPost (Path Absolute)
| WithDirLockEventUnlock (Path Absolute)
-- | Attempt to create a filesystem lock in the specified directory.
--
-- This will use OS-specific file locking primitives: "GHC.IO.Handle.Lock" with
-- @base-4.10" and later or a shim for @base@ versions.
--
-- Blocks if the lock is already present.
--
-- The logger callback passed as first argument is invoked before and
-- after acquiring a lock, and after unlocking.
--
-- May fallback to locking via creating a directory:
-- Given a file @/path/to@, we do this by attempting to create the directory
-- @//path/to/hackage-security-lock@, and deleting the directory again
-- afterwards. Creating a directory that already exists will throw an exception
-- on most OSs (certainly Linux, OSX and Windows) and is a reasonably common way
-- to implement a lock file.
withDirLock :: (WithDirLockEvent -> IO ()) -> Path Absolute -> IO a -> IO a
withDirLock logger dir
= bracket takeLock (\h -> releaseLock h >> logger (WithDirLockEventUnlock lock))
. const
where
lock :: Path Absolute
lock = dir > fragment "hackage-security-lock"
lock' :: FilePath
lock' = toFilePath lock
me = "Hackage.Security.Util.IO.withDirLock: "
wrapLog :: IO a -> IO a
wrapLog op = do
logger (WithDirLockEventPre lock)
h <- op
logger (WithDirLockEventPost lock)
return h
#ifdef MIN_VERSION_lukko
takeLock :: IO FD
takeLock
| fileLockingSupported = do
h <- fdOpen lock'
wrapLog (fdLock h ExclusiveLock `onException` fdClose h)
return h
| otherwise = wrapLog takeDirLock
where
takeDirLock :: IO FD
takeDirLock = handle onCreateDirError $ do
createDirectory lock
return (undefined :: FD)
onCreateDirError :: IOError -> IO FD
onCreateDirError ioe
| isAlreadyExistsError ioe = threadDelay (1*1000*1000) >> takeDirLock
| otherwise = fail (me++"error creating directory lock: "++show ioe)
releaseLock h
| fileLockingSupported = do
fdUnlock h
fdClose h
| otherwise =
removeDirectory lock
#else
takeLock = do
h <- openFile lock' ReadWriteMode
wrapLog $ handle (fallbackToDirLock h) $ do
hLock h ExclusiveLock
return (Just h)
-- If file locking isn't supported then we fallback to directory locking,
-- polling if necessary.
fallbackToDirLock :: Handle -> FileLockingNotSupported -> IO (Maybe Handle)
fallbackToDirLock h _ = takeDirLock >> return Nothing
where
takeDirLock :: IO ()
takeDirLock = do
-- We fallback to directory locking
-- so we need to cleanup lock file first: close and remove
hClose h
handle onIOError (removeFile lock)
handle onCreateDirError (createDirectory lock)
onCreateDirError :: IOError -> IO ()
onCreateDirError ioe
| isAlreadyExistsError ioe = threadDelay (1*1000*1000) >> takeDirLock
| otherwise = fail (me++"error creating directory lock: "++show ioe)
onIOError :: IOError -> IO ()
onIOError _ = hPutStrLn stderr
(me++"cannot remove lock file before directory lock fallback")
releaseLock (Just h) =
#if MIN_VERSION_base(4,11,0)
hUnlock h >>
#endif
hClose h
releaseLock Nothing = removeDirectory lock
#endif
{-------------------------------------------------------------------------------
Debugging
-------------------------------------------------------------------------------}
timedIO :: String -> IO a -> IO a
timedIO label act = do
before <- getCurrentTime
result <- act
after <- getCurrentTime
hPutStrLn stderr $ label ++ ": " ++ show (after `diffUTCTime` before)
hFlush stderr
return result
hackage-security-0.6.2.4/src/Hackage/Security/Util/JSON.hs 0000644 0000000 0000000 00000014645 07346545000 021257 0 ustar 00 0000000 0000000 {-# LANGUAGE CPP #-}
#if __GLASGOW_HASKELL__ < 710
{-# LANGUAGE OverlappingInstances #-}
#endif
-- |
module Hackage.Security.Util.JSON (
-- * Type classes
ToJSON(..)
, FromJSON(..)
, ToObjectKey(..)
, FromObjectKey(..)
, ReportSchemaErrors(..)
, Expected
, Got
, expected'
-- * Utility
, fromJSObject
, fromJSField
, fromJSOptField
, mkObject
-- * Re-exports
, JSValue(..)
, Int54
) where
import MyPrelude
import Control.Monad (liftM)
import Data.Maybe (catMaybes)
import Data.Map (Map)
import Data.Time
import Text.JSON.Canonical
import Network.URI
import qualified Data.Map as Map
#if !MIN_VERSION_time(1,5,0)
import System.Locale (defaultTimeLocale)
#endif
import Hackage.Security.Util.Path
{-------------------------------------------------------------------------------
ToJSON and FromJSON classes
We parameterize over the monad here to avoid mutual module dependencies.
-------------------------------------------------------------------------------}
class ToJSON m a where
toJSON :: a -> m JSValue
class FromJSON m a where
fromJSON :: JSValue -> m a
-- | Used in the 'ToJSON' instance for 'Map'
class ToObjectKey m a where
toObjectKey :: a -> m String
-- | Used in the 'FromJSON' instance for 'Map'
class FromObjectKey m a where
fromObjectKey :: String -> m (Maybe a)
-- | Monads in which we can report schema errors
class (Applicative m, Monad m) => ReportSchemaErrors m where
expected :: Expected -> Maybe Got -> m a
type Expected = String
type Got = String
expected' :: ReportSchemaErrors m => Expected -> JSValue -> m a
expected' descr val = expected descr (Just (describeValue val))
where
describeValue :: JSValue -> String
describeValue (JSNull ) = "null"
describeValue (JSBool _) = "bool"
describeValue (JSNum _) = "num"
describeValue (JSString _) = "string"
describeValue (JSArray _) = "array"
describeValue (JSObject _) = "object"
unknownField :: ReportSchemaErrors m => String -> m a
unknownField field = expected ("field " ++ show field) Nothing
{-------------------------------------------------------------------------------
ToObjectKey and FromObjectKey instances
-------------------------------------------------------------------------------}
instance Monad m => ToObjectKey m String where
toObjectKey = return
instance Monad m => FromObjectKey m String where
fromObjectKey = return . Just
instance Monad m => ToObjectKey m (Path root) where
toObjectKey (Path fp) = return fp
instance Monad m => FromObjectKey m (Path root) where
fromObjectKey = liftM (fmap Path) . fromObjectKey
{-------------------------------------------------------------------------------
ToJSON and FromJSON instances
-------------------------------------------------------------------------------}
instance Monad m => ToJSON m JSValue where
toJSON = return
instance Monad m => FromJSON m JSValue where
fromJSON = return
instance Monad m => ToJSON m String where
toJSON = return . JSString
instance ReportSchemaErrors m => FromJSON m String where
fromJSON (JSString str) = return str
fromJSON val = expected' "string" val
instance Monad m => ToJSON m Int54 where
toJSON = return . JSNum
instance ReportSchemaErrors m => FromJSON m Int54 where
fromJSON (JSNum i) = return i
fromJSON val = expected' "int" val
instance
#if __GLASGOW_HASKELL__ >= 710
{-# OVERLAPPABLE #-}
#endif
(Monad m, ToJSON m a) => ToJSON m [a] where
toJSON = liftM JSArray . mapM toJSON
instance
#if __GLASGOW_HASKELL__ >= 710
{-# OVERLAPPABLE #-}
#endif
(ReportSchemaErrors m, FromJSON m a) => FromJSON m [a] where
fromJSON (JSArray as) = mapM fromJSON as
fromJSON val = expected' "array" val
instance Monad m => ToJSON m UTCTime where
toJSON = return . JSString . formatTime defaultTimeLocale "%FT%TZ"
instance ReportSchemaErrors m => FromJSON m UTCTime where
fromJSON enc = do
str <- fromJSON enc
case parseTimeM False defaultTimeLocale "%FT%TZ" str of
Just time -> return time
Nothing -> expected "valid date-time string" (Just str)
#if !MIN_VERSION_time(1,5,0)
where
parseTimeM _trim = parseTime
#endif
instance ( Monad m
, ToObjectKey m k
, ToJSON m a
) => ToJSON m (Map k a) where
toJSON = liftM JSObject . mapM aux . Map.toList
where
aux :: (k, a) -> m (String, JSValue)
aux (k, a) = do k' <- toObjectKey k; a' <- toJSON a; return (k', a')
instance ( ReportSchemaErrors m
, Ord k
, FromObjectKey m k
, FromJSON m a
) => FromJSON m (Map k a) where
fromJSON enc = do
obj <- fromJSObject enc
Map.fromList . catMaybes <$> mapM aux obj
where
aux :: (String, JSValue) -> m (Maybe (k, a))
aux (k, a) = knownKeys <$> fromObjectKey k <*> fromJSON a
knownKeys :: Maybe k -> a -> Maybe (k, a)
knownKeys Nothing _ = Nothing
knownKeys (Just k) a = Just (k, a)
instance Monad m => ToJSON m URI where
toJSON = toJSON . show
instance ReportSchemaErrors m => FromJSON m URI where
fromJSON enc = do
str <- fromJSON enc
case parseURI str of
Nothing -> expected "valid URI" (Just str)
Just uri -> return uri
{-------------------------------------------------------------------------------
Utility
-------------------------------------------------------------------------------}
fromJSObject :: ReportSchemaErrors m => JSValue -> m [(String, JSValue)]
fromJSObject (JSObject obj) = return obj
fromJSObject val = expected' "object" val
-- | Extract a field from a JSON object
fromJSField :: (ReportSchemaErrors m, FromJSON m a)
=> JSValue -> String -> m a
fromJSField val nm = do
obj <- fromJSObject val
case lookup nm obj of
Just fld -> fromJSON fld
Nothing -> unknownField nm
fromJSOptField :: (ReportSchemaErrors m, FromJSON m a)
=> JSValue -> String -> m (Maybe a)
fromJSOptField val nm = do
obj <- fromJSObject val
case lookup nm obj of
Just fld -> Just <$> fromJSON fld
Nothing -> return Nothing
mkObject :: forall m. Monad m => [(String, m JSValue)] -> m JSValue
mkObject = liftM JSObject . sequenceFields
where
sequenceFields :: [(String, m JSValue)] -> m [(String, JSValue)]
sequenceFields [] = return []
sequenceFields ((fld,val):flds) = do val' <- val
flds' <- sequenceFields flds
return ((fld,val'):flds')
hackage-security-0.6.2.4/src/Hackage/Security/Util/Lens.hs 0000644 0000000 0000000 00000002347 07346545000 021403 0 ustar 00 0000000 0000000 -- | Some very simple lens definitions (to avoid further dependencies)
--
-- Intended to be double-imported
-- > import Hackage.Security.Util.Lens (Lens)
-- > import qualified Hackage.Security.Util.Lens as Lens
module Hackage.Security.Util.Lens (
-- * Generic definitions
Lens
, Lens'
, Traversal
, Traversal'
, get
, over
, set
) where
import MyPrelude
import Control.Applicative
import Data.Functor.Identity
{-------------------------------------------------------------------------------
General definitions
-------------------------------------------------------------------------------}
-- | Polymorphic lens
type Lens s t a b = forall f. Functor f => LensLike f s t a b
-- | Monomorphic lens
type Lens' s a = Lens s s a a
-- | Polymorphic traversal
type Traversal s t a b = forall f. Applicative f => LensLike f s t a b
-- | Monomorphic traversal
type Traversal' s a = Traversal s s a a
type LensLike f s t a b = (a -> f b) -> s -> f t
type LensLike' f s a = LensLike f s s a a
get :: LensLike' (Const a) s a -> s -> a
get l = getConst . l Const
over :: LensLike Identity s t a b -> (a -> b) -> s -> t
over l f = runIdentity . l (Identity . f)
set :: LensLike Identity s t a b -> b -> s -> t
set l = over l . const
hackage-security-0.6.2.4/src/Hackage/Security/Util/Path.hs 0000644 0000000 0000000 00000040564 07346545000 021401 0 ustar 00 0000000 0000000 -- | A more type-safe version of file paths
--
-- This module is intended to replace imports of System.FilePath, and
-- additionally exports thin wrappers around common IO functions. To facilitate
-- importing this module unqualified we also re-export some definitions from
-- System.IO (importing both would likely lead to name clashes).
--
-- Note that his module does not import any other modules from Hackage.Security;
-- everywhere else we use Path instead of FilePath directly.
{-# LANGUAGE CPP #-}
module Hackage.Security.Util.Path (
-- * Paths
Path(..)
, castRoot
-- * FilePath-like operations on paths with arbitrary roots
, takeDirectory
, takeFileName
, (<.>)
, splitExtension
, takeExtension
-- * Unrooted paths
, Unrooted
, (>)
, rootPath
, unrootPath
, toUnrootedFilePath
, fromUnrootedFilePath
, fragment
, joinFragments
, splitFragments
, isPathPrefixOf
-- * File-system paths
, Relative
, Absolute
, HomeDir
, FsRoot(..)
, FsPath(..)
-- ** Conversions
, toFilePath
, fromFilePath
, makeAbsolute
, fromAbsoluteFilePath
-- ** Wrappers around System.IO
, withFile
, openTempFile'
-- ** Wrappers around Data.ByteString
, readLazyByteString
, readStrictByteString
, writeLazyByteString
, writeStrictByteString
-- ** Wrappers around System.Directory
, copyFile
, createDirectory
, createDirectoryIfMissing
, removeDirectory
, doesFileExist
, doesDirectoryExist
, getModificationTime
, removeFile
, getTemporaryDirectory
, getDirectoryContents
, getRecursiveContents
, renameFile
, getCurrentDirectory
-- * Wrappers around Codec.Archive.Tar
, Tar
, tarIndexLookup
, tarAppend
-- * Wrappers around Network.URI
, Web
, toURIPath
, fromURIPath
, uriPath
, modifyUriPath
-- * Re-exports
, IOMode(..)
, BufferMode(..)
, Handle
, SeekMode(..)
, IO.hSetBuffering
, IO.hClose
, IO.hFileSize
, IO.hSeek
) where
import MyPrelude
import Control.Monad
import Data.List (isPrefixOf)
import System.IO (IOMode(..), BufferMode(..), Handle, SeekMode(..))
import System.IO.Unsafe (unsafeInterleaveIO)
#if MIN_VERSION_directory(1,2,0)
import Data.Time (UTCTime)
#else
import System.Time (ClockTime)
#endif
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as BS.L
import qualified System.FilePath as FP.Native
import qualified System.FilePath.Posix as FP.Posix
import qualified System.IO as IO
import qualified System.Directory as Dir
import qualified Codec.Archive.Tar as Tar
import qualified Codec.Archive.Tar.Index as TarIndex
import qualified Network.URI as URI
import Hackage.Security.Util.Pretty
{-------------------------------------------------------------------------------
Paths
-------------------------------------------------------------------------------}
-- | Paths
--
-- A 'Path' is simply a 'FilePath' with a type-level tag indicating where this
-- path is rooted (relative to the current directory, absolute path, relative to
-- a web domain, whatever). Most operations on 'Path' are just lifted versions
-- of the operations on the underlying 'FilePath'. The tag however allows us to
-- give a lot of operations a more meaningful type. For instance, it does not
-- make sense to append two absolute paths together; instead, we can only append
-- an unrooted path to another path. It also means we avoid bugs where we use
-- one kind of path where we expect another.
newtype Path a = Path FilePath -- always a Posix style path internally
deriving (Show, Eq, Ord)
mkPathNative :: FilePath -> Path a
mkPathNative = Path . FP.Posix.joinPath . FP.Native.splitDirectories
unPathNative :: Path a -> FilePath
unPathNative (Path fp) = FP.Native.joinPath . FP.Posix.splitDirectories $ fp
mkPathPosix :: FilePath -> Path a
mkPathPosix = Path
unPathPosix :: Path a -> FilePath
unPathPosix (Path fp) = fp
-- | Reinterpret the root of a path
--
-- This literally just changes the type-level tag; use with caution!
castRoot :: Path root -> Path root'
castRoot (Path fp) = Path fp
{-------------------------------------------------------------------------------
FilePath-like operations on paths with an arbitrary root
-------------------------------------------------------------------------------}
takeDirectory :: Path a -> Path a
takeDirectory = liftFP FP.Posix.takeDirectory
takeFileName :: Path a -> String
takeFileName = liftFromFP FP.Posix.takeFileName
(<.>) :: Path a -> String -> Path a
fp <.> ext = liftFP (FP.Posix.<.> ext) fp
splitExtension :: Path a -> (Path a, String)
splitExtension (Path fp) = (Path fp', ext)
where
(fp', ext) = FP.Posix.splitExtension fp
takeExtension :: Path a -> String
takeExtension (Path fp) = FP.Posix.takeExtension fp
{-------------------------------------------------------------------------------
Unrooted paths
-------------------------------------------------------------------------------}
-- | Type-level tag for unrooted paths
--
-- Unrooted paths need a root before they can be interpreted.
data Unrooted
instance Pretty (Path Unrooted) where
pretty (Path fp) = fp
(>) :: Path a -> Path Unrooted -> Path a
(>) = liftFP2 (FP.Posix.>)
-- | Reinterpret an unrooted path
--
-- This is an alias for 'castRoot'; see comments there.
rootPath :: Path Unrooted -> Path root
rootPath (Path fp) = Path fp
-- | Forget a path's root
--
-- This is an alias for 'castRoot'; see comments there.
unrootPath :: Path root -> Path Unrooted
unrootPath (Path fp) = Path fp
-- | Convert a relative\/unrooted Path to a FilePath (using POSIX style
-- directory separators).
--
-- See also 'toAbsoluteFilePath'
--
toUnrootedFilePath :: Path Unrooted -> FilePath
toUnrootedFilePath = unPathPosix
-- | Convert from a relative\/unrooted FilePath (using POSIX style directory
-- separators).
--
fromUnrootedFilePath :: FilePath -> Path Unrooted
fromUnrootedFilePath = mkPathPosix
-- | A path fragment (like a single directory or filename)
fragment :: String -> Path Unrooted
fragment = Path
joinFragments :: [String] -> Path Unrooted
joinFragments = liftToFP FP.Posix.joinPath
splitFragments :: Path Unrooted -> [String]
splitFragments (Path fp) = FP.Posix.splitDirectories fp
isPathPrefixOf :: Path Unrooted -> Path Unrooted -> Bool
isPathPrefixOf = liftFromFP2 isPrefixOf
{-------------------------------------------------------------------------------
File-system paths
-------------------------------------------------------------------------------}
data Relative
data Absolute
data HomeDir
instance Pretty (Path Absolute) where
pretty (Path fp) = fp
instance Pretty (Path Relative) where
pretty (Path fp) = "./" ++ fp
instance Pretty (Path HomeDir) where
pretty (Path fp) = "~/" ++ fp
-- | A file system root can be interpreted as an (absolute) FilePath
class FsRoot root where
-- | Convert a Path to an absolute FilePath (using native style directory separators).
--
toAbsoluteFilePath :: Path root -> IO FilePath
instance FsRoot Relative where
toAbsoluteFilePath p = go (unPathNative p)
where
go :: FilePath -> IO FilePath
#if MIN_VERSION_directory(1,2,2)
go = Dir.makeAbsolute
#else
-- copied implementation from the directory package
go = (FP.Native.normalise <$>) . absolutize
absolutize path -- avoid the call to `getCurrentDirectory` if we can
| FP.Native.isRelative path
= (FP.Native.> path)
. FP.Native.addTrailingPathSeparator <$>
Dir.getCurrentDirectory
| otherwise = return path
#endif
instance FsRoot Absolute where
toAbsoluteFilePath = return . unPathNative
instance FsRoot HomeDir where
toAbsoluteFilePath p = do
home <- Dir.getHomeDirectory
return $ home FP.Native.> unPathNative p
-- | Abstract over a file system root
--
-- see 'fromFilePath'
data FsPath = forall root. FsRoot root => FsPath (Path root)
{-------------------------------------------------------------------------------
Conversions
-------------------------------------------------------------------------------}
toFilePath :: Path Absolute -> FilePath
toFilePath = unPathNative
fromFilePath :: FilePath -> FsPath
fromFilePath fp
| FP.Native.isAbsolute fp = FsPath (mkPathNative fp :: Path Absolute)
| Just fp' <- atHome fp = FsPath (mkPathNative fp' :: Path HomeDir)
| otherwise = FsPath (mkPathNative fp :: Path Relative)
where
-- TODO: I don't know if there a standard way that Windows users refer to
-- their home directory. For now, we'll only interpret '~'. Everybody else
-- can specify an absolute path if this doesn't work.
atHome :: FilePath -> Maybe FilePath
atHome "~" = Just ""
atHome ('~':sep:fp') | FP.Native.isPathSeparator sep = Just fp'
atHome _otherwise = Nothing
makeAbsolute :: FsPath -> IO (Path Absolute)
makeAbsolute (FsPath p) = mkPathNative <$> toAbsoluteFilePath p
fromAbsoluteFilePath :: FilePath -> Path Absolute
fromAbsoluteFilePath fp
| FP.Native.isAbsolute fp = mkPathNative fp
| otherwise = error ("fromAbsoluteFilePath: not an absolute path: " ++ fp)
{-------------------------------------------------------------------------------
Wrappers around System.IO
-------------------------------------------------------------------------------}
-- | Wrapper around 'withFile'
withFile :: FsRoot root => Path root -> IOMode -> (Handle -> IO r) -> IO r
withFile path mode callback = do
filePath <- toAbsoluteFilePath path
IO.withFile filePath mode callback
-- | Wrapper around 'openBinaryTempFileWithDefaultPermissions'
--
-- NOTE: The caller is responsible for cleaning up the temporary file.
openTempFile' :: FsRoot root => Path root -> String -> IO (Path Absolute, Handle)
openTempFile' path template = do
filePath <- toAbsoluteFilePath path
(tempFilePath, h) <- IO.openBinaryTempFileWithDefaultPermissions filePath template
return (fromAbsoluteFilePath tempFilePath, h)
{-------------------------------------------------------------------------------
Wrappers around Data.ByteString.*
-------------------------------------------------------------------------------}
readLazyByteString :: FsRoot root => Path root -> IO BS.L.ByteString
readLazyByteString path = do
filePath <- toAbsoluteFilePath path
BS.L.readFile filePath
readStrictByteString :: FsRoot root => Path root -> IO BS.ByteString
readStrictByteString path = do
filePath <- toAbsoluteFilePath path
BS.readFile filePath
writeLazyByteString :: FsRoot root => Path root -> BS.L.ByteString -> IO ()
writeLazyByteString path bs = do
filePath <- toAbsoluteFilePath path
BS.L.writeFile filePath bs
writeStrictByteString :: FsRoot root => Path root -> BS.ByteString -> IO ()
writeStrictByteString path bs = do
filePath <- toAbsoluteFilePath path
BS.writeFile filePath bs
{-------------------------------------------------------------------------------
Wrappers around System.Directory
-------------------------------------------------------------------------------}
copyFile :: (FsRoot root, FsRoot root') => Path root -> Path root' -> IO ()
copyFile src dst = do
src' <- toAbsoluteFilePath src
dst' <- toAbsoluteFilePath dst
Dir.copyFile src' dst'
createDirectory :: FsRoot root => Path root -> IO ()
createDirectory path = Dir.createDirectory =<< toAbsoluteFilePath path
createDirectoryIfMissing :: FsRoot root => Bool -> Path root -> IO ()
createDirectoryIfMissing createParents path = do
filePath <- toAbsoluteFilePath path
Dir.createDirectoryIfMissing createParents filePath
removeDirectory :: FsRoot root => Path root -> IO ()
removeDirectory path = Dir.removeDirectory =<< toAbsoluteFilePath path
doesFileExist :: FsRoot root => Path root -> IO Bool
doesFileExist path = do
filePath <- toAbsoluteFilePath path
Dir.doesFileExist filePath
doesDirectoryExist :: FsRoot root => Path root -> IO Bool
doesDirectoryExist path = do
filePath <- toAbsoluteFilePath path
Dir.doesDirectoryExist filePath
#if MIN_VERSION_directory(1,2,0)
getModificationTime :: FsRoot root => Path root -> IO UTCTime
#else
getModificationTime :: FsRoot root => Path root -> IO ClockTime
#endif
getModificationTime path = do
filePath <- toAbsoluteFilePath path
Dir.getModificationTime filePath
removeFile :: FsRoot root => Path root -> IO ()
removeFile path = do
filePath <- toAbsoluteFilePath path
Dir.removeFile filePath
getTemporaryDirectory :: IO (Path Absolute)
getTemporaryDirectory = fromAbsoluteFilePath <$> Dir.getTemporaryDirectory
-- | Return the immediate children of a directory
--
-- Filters out @"."@ and @".."@.
getDirectoryContents :: FsRoot root => Path root -> IO [Path Unrooted]
getDirectoryContents path = do
filePath <- toAbsoluteFilePath path
fragments <$> Dir.getDirectoryContents filePath
where
fragments :: [String] -> [Path Unrooted]
fragments = map fragment . filter (not . skip)
skip :: String -> Bool
skip "." = True
skip ".." = True
skip _ = False
-- | Recursive traverse a directory structure
--
-- Returns a set of paths relative to the directory specified. The list is
-- lazily constructed, so that directories are only read when required.
-- (This is also essential to ensure that this function does not build the
-- entire result in memory before returning, potentially running out of heap.)
getRecursiveContents :: FsRoot root => Path root -> IO [Path Unrooted]
getRecursiveContents root = go emptyPath
where
go :: Path Unrooted -> IO [Path Unrooted]
go subdir = unsafeInterleaveIO $ do
entries <- getDirectoryContents (root > subdir)
liftM concat $ forM entries $ \entry -> do
let path = subdir > entry
isDirectory <- doesDirectoryExist (root > path)
if isDirectory then go path
else return [path]
emptyPath :: Path Unrooted
emptyPath = joinFragments []
renameFile :: (FsRoot root, FsRoot root')
=> Path root -- ^ Old
-> Path root' -- ^ New
-> IO ()
renameFile old new = do
old' <- toAbsoluteFilePath old
new' <- toAbsoluteFilePath new
Dir.renameFile old' new'
getCurrentDirectory :: IO (Path Absolute)
getCurrentDirectory = do
cwd <- Dir.getCurrentDirectory
makeAbsolute $ fromFilePath cwd
{-------------------------------------------------------------------------------
Wrappers around Codec.Archive.Tar.*
-------------------------------------------------------------------------------}
data Tar
instance Pretty (Path Tar) where
pretty (Path fp) = "/" ++ fp
tarIndexLookup :: TarIndex.TarIndex -> Path Tar -> Maybe TarIndex.TarIndexEntry
tarIndexLookup index path = TarIndex.lookup index path'
where
path' :: FilePath
path' = toUnrootedFilePath $ unrootPath path
tarAppend :: (FsRoot root, FsRoot root')
=> Path root -- ^ Path of the @.tar@ file
-> Path root' -- ^ Base directory
-> [Path Tar] -- ^ Files to add, relative to the base dir
-> IO ()
tarAppend tarFile baseDir contents = do
tarFile' <- toAbsoluteFilePath tarFile
baseDir' <- toAbsoluteFilePath baseDir
Tar.append tarFile' baseDir' contents'
where
contents' :: [FilePath]
contents' = map (unPathNative . unrootPath) contents
{-------------------------------------------------------------------------------
Wrappers around Network.URI
-------------------------------------------------------------------------------}
data Web
toURIPath :: FilePath -> Path Web
toURIPath = rootPath . fromUnrootedFilePath
fromURIPath :: Path Web -> FilePath
fromURIPath = toUnrootedFilePath . unrootPath
uriPath :: URI.URI -> Path Web
uriPath = toURIPath . URI.uriPath
modifyUriPath :: URI.URI -> (Path Web -> Path Web) -> URI.URI
modifyUriPath uri f = uri { URI.uriPath = f' (URI.uriPath uri) }
where
f' :: FilePath -> FilePath
f' = fromURIPath . f . toURIPath
{-------------------------------------------------------------------------------
Auxiliary
-------------------------------------------------------------------------------}
liftFP :: (FilePath -> FilePath) -> Path a -> Path b
liftFP f (Path fp) = Path (f fp)
liftFP2 :: (FilePath -> FilePath -> FilePath) -> Path a -> Path b -> Path c
liftFP2 f (Path fp) (Path fp') = Path (f fp fp')
liftFromFP :: (FilePath -> x) -> Path a -> x
liftFromFP f (Path fp) = f fp
liftFromFP2 :: (FilePath -> FilePath -> x) -> Path a -> Path b -> x
liftFromFP2 f (Path fp) (Path fp') = f fp fp'
liftToFP :: (x -> FilePath) -> x -> Path a
liftToFP f x = Path (f x)
hackage-security-0.6.2.4/src/Hackage/Security/Util/Pretty.hs 0000644 0000000 0000000 00000000313 07346545000 021760 0 ustar 00 0000000 0000000 -- | Producing human-reaadable strings
module Hackage.Security.Util.Pretty (
Pretty(..)
) where
import MyPrelude
-- | Produce a human-readable string
class Pretty a where
pretty :: a -> String
hackage-security-0.6.2.4/src/Hackage/Security/Util/Some.hs 0000644 0000000 0000000 00000005757 07346545000 021415 0 ustar 00 0000000 0000000 {-# LANGUAGE CPP #-}
-- | Hiding existentials
module Hackage.Security.Util.Some (
Some(..)
-- ** Equality
, DictEq(..)
, SomeEq(..)
-- ** Serialization
, DictShow(..)
, SomeShow(..)
-- ** Pretty-printing
, DictPretty(..)
, SomePretty(..)
-- ** Type checking
, typecheckSome
#if !MIN_VERSION_base(4,7,0)
-- ** Compatibility with base < 4.7
, tyConSome
#endif
) where
import MyPrelude
#if MIN_VERSION_base(4,7,0)
import Data.Typeable (Typeable)
#else
import qualified Data.Typeable as Typeable
#endif
import Hackage.Security.Util.TypedEmbedded
import Hackage.Security.Util.Pretty
data Some f = forall a. Some (f a)
#if MIN_VERSION_base(4,7,0)
deriving instance Typeable Some
#else
tyConSome :: Typeable.TyCon
tyConSome = Typeable.mkTyCon3 "hackage-security" "Hackage.Security.Util.Some" "Some"
#endif
{-------------------------------------------------------------------------------
Equality on Some types
Note that we cannot really do something similar for ordering; what value
should we return for
> Some (f x) `compare` Some (f x')
where @x :: a@, @x' :: a'@ and @a /= a'@? These are incomparable.
-------------------------------------------------------------------------------}
data DictEq a where
DictEq :: Eq a => DictEq a
-- | Type @f@ satisfies @SomeEq f@ if @f a@ satisfies @Eq@ independent of @a@
class SomeEq f where
someEq :: DictEq (f a)
instance (Typed f, SomeEq f) => Eq (Some f) where
Some (x :: f a) == Some (y :: f a') =
case unify (typeOf x) (typeOf y) of
Nothing -> False
Just Refl -> case someEq :: DictEq (f a) of DictEq -> x == y
{-------------------------------------------------------------------------------
Showing Some types
-------------------------------------------------------------------------------}
data DictShow a where
DictShow :: Show a => DictShow a
-- | Type @f@ satisfies @SomeShow f@ if @f a@ satisfies @Show@ independent of @a@
class SomeShow f where
someShow :: DictShow (f a)
instance SomeShow f => Show (Some f) where
show (Some (x :: f a)) =
case someShow :: DictShow (f a) of DictShow -> show x
{-------------------------------------------------------------------------------
Pretty-printing Some types
-------------------------------------------------------------------------------}
data DictPretty a where
DictPretty :: Pretty a => DictPretty a
-- | Type @f@ satisfies @SomeShow f@ if @f a@ satisfies @Show@ independent of @a@
class SomePretty f where
somePretty :: DictPretty (f a)
instance SomePretty f => Pretty (Some f) where
pretty (Some (x :: f a)) =
case somePretty :: DictPretty (f a) of DictPretty -> pretty x
{-------------------------------------------------------------------------------
Typechecking Some types
-------------------------------------------------------------------------------}
typecheckSome :: Typed f => Some f -> Some (TypeOf f) -> Bool
typecheckSome (Some x) (Some typ) =
case unify (typeOf x) typ of
Just Refl -> True
Nothing -> False
hackage-security-0.6.2.4/src/Hackage/Security/Util/Stack.hs 0000644 0000000 0000000 00000000236 07346545000 021542 0 ustar 00 0000000 0000000 -- | Heterogenous lists
module Hackage.Security.Util.Stack (
(:-)(..)
) where
import MyPrelude
data h :- t = h :- t
deriving (Eq, Show)
infixr 5 :-
hackage-security-0.6.2.4/src/Hackage/Security/Util/TypedEmbedded.hs 0000644 0000000 0000000 00000002373 07346545000 023200 0 ustar 00 0000000 0000000 -- | Embedded languages with meta level types
module Hackage.Security.Util.TypedEmbedded (
(:=:)(Refl)
, TypeOf
, Unify(..)
, Typed(..)
, AsType(..)
) where
import MyPrelude
-- | Type equality proofs
--
-- This is a direct copy of "type-equality:Data.Type.Equality"; if we don't
-- mind the dependency we can use that package directly.
data a :=: b where
Refl :: a :=: a
type family TypeOf (f :: * -> *) :: * -> *
-- | Equality check that gives us a type-level equality proof.
class Unify f where
unify :: f typ -> f typ' -> Maybe (typ :=: typ')
-- | Embedded languages with type inference
class Unify (TypeOf f) => Typed f where
typeOf :: f typ -> TypeOf f typ
-- | Cast from one type to another
--
-- By default (for language with type inference) we just compare the types
-- returned by 'typeOf'; however, in languages in which terms can have more
-- than one type this may not be the correct definition (indeed, for such
-- languages we cannot give an instance of 'Typed').
class AsType f where
asType :: f typ -> TypeOf f typ' -> Maybe (f typ')
default asType :: Typed f => f typ -> TypeOf f typ' -> Maybe (f typ')
asType x typ = case unify (typeOf x) typ of
Just Refl -> Just x
Nothing -> Nothing
hackage-security-0.6.2.4/src/ 0000755 0000000 0000000 00000000000 07346545000 014071 5 ustar 00 0000000 0000000 hackage-security-0.6.2.4/src/MyPrelude.hs 0000644 0000000 0000000 00000001227 07346545000 016335 0 ustar 00 0000000 0000000 -- | Smooth over differences between various ghc versions by making older
-- preludes look like 4.8.0
{-# LANGUAGE CPP #-}
module MyPrelude (
module P
#if !MIN_VERSION_base(4,8,0)
, Applicative(..)
, Monoid(..)
, (<$>)
, (<$)
, Traversable(traverse)
, displayException
#endif
) where
#if MIN_VERSION_base(4,8,0)
import Prelude as P
#else
#if MIN_VERSION_base(4,6,0)
import Prelude as P
#else
import Prelude as P hiding (catch)
#endif
import Control.Applicative
import Control.Exception (Exception)
import Data.Monoid
import Data.Traversable (Traversable(traverse))
displayException :: Exception e => e -> String
displayException = show
#endif
hackage-security-0.6.2.4/src/Text/JSON/ 0000755 0000000 0000000 00000000000 07346545000 015566 5 ustar 00 0000000 0000000 hackage-security-0.6.2.4/src/Text/JSON/Canonical.hs 0000644 0000000 0000000 00000024166 07346545000 020022 0 ustar 00 0000000 0000000 {-# LANGUAGE CPP #-}
--------------------------------------------------------------------
-- |
-- Module : Text.JSON.Parsec
-- Copyright : (c) Galois, Inc. 2007-2009, Duncan Coutts 2015
--
--
-- Minimal implementation of Canonical JSON.
--
--
--
-- A \"canonical JSON\" format is provided in order to provide meaningful and
-- repeatable hashes of JSON-encoded data. Canonical JSON is parsable with any
-- full JSON parser, but security-conscious applications will want to verify
-- that input is in canonical form before authenticating any hash or signature
-- on that input.
--
-- This implementation is derived from the json parser from the json package,
-- with simplifications to meet the Canonical JSON grammar.
--
-- TODO: Known bugs/limitations:
--
-- * Decoding/encoding Unicode code-points beyond @U+00ff@ is currently broken
--
module Text.JSON.Canonical
( JSValue(..)
, Int54
, parseCanonicalJSON
, renderCanonicalJSON
, prettyCanonicalJSON
) where
import MyPrelude
import Text.ParserCombinators.Parsec
( CharParser, (<|>), (>), many, between, sepBy
, satisfy, char, string, digit, spaces
, parse )
import Text.PrettyPrint hiding (char)
import qualified Text.PrettyPrint as Doc
#if !(MIN_VERSION_base(4,7,0))
import Control.Applicative ((<$>), (<$), pure, (<*>), (<*), (*>))
#endif
import Control.Arrow (first)
import Data.Bits (Bits)
#if MIN_VERSION_base(4,7,0)
import Data.Bits (FiniteBits)
#endif
import Data.Char (isDigit, digitToInt)
import Data.Data (Data)
import Data.Function (on)
import Data.Int (Int64)
import Data.Ix (Ix)
import Data.List (foldl', sortBy)
import Data.Typeable (Typeable)
import Foreign.Storable (Storable)
import Text.Printf (PrintfArg)
import qualified Data.ByteString.Lazy.Char8 as BS
data JSValue
= JSNull
| JSBool !Bool
| JSNum !Int54
| JSString String
| JSArray [JSValue]
| JSObject [(String, JSValue)]
deriving (Show, Read, Eq, Ord)
-- | 54-bit integer values
--
-- JavaScript can only safely represent numbers between @-(2^53 - 1)@ and
-- @2^53 - 1@.
--
-- TODO: Although we introduce the type here, we don't actually do any bounds
-- checking and just inherit all type class instance from Int64. We should
-- probably define `fromInteger` to do bounds checking, give different instances
-- for type classes such as `Bounded` and `FiniteBits`, etc.
newtype Int54 = Int54 { int54ToInt64 :: Int64 }
deriving ( Enum
, Eq
, Integral
, Data
, Num
, Ord
, Real
, Ix
#if MIN_VERSION_base(4,7,0)
, FiniteBits
#endif
, Bits
, Storable
, PrintfArg
, Typeable
)
instance Bounded Int54 where
maxBound = Int54 ( 2^(53 :: Int) - 1)
minBound = Int54 (-(2^(53 :: Int) - 1))
instance Show Int54 where
show = show . int54ToInt64
instance Read Int54 where
readsPrec p = map (first Int54) . readsPrec p
------------------------------------------------------------------------------
-- rendering flat
--
-- | Render a JSON value in canonical form. This rendered form is canonical
-- and so allows repeatable hashes.
--
-- For pretty printing, see prettyCanonicalJSON.
--
-- NB: Canonical JSON's string escaping rules deviate from RFC 7159
-- JSON which requires
--
-- "All Unicode characters may be placed within the quotation
-- marks, except for the characters that must be escaped: quotation
-- mark, reverse solidus, and the control characters (@U+0000@
-- through @U+001F@)."
--
-- Whereas the current specification of Canonical JSON explicitly
-- requires to violate this by only escaping the quotation mark and
-- the reverse solidus. This, however, contradicts Canonical JSON's
-- statement that "Canonical JSON is parsable with any full JSON
-- parser"
--
-- Consequently, Canonical JSON is not a proper subset of RFC 7159.
--
renderCanonicalJSON :: JSValue -> BS.ByteString
renderCanonicalJSON v = BS.pack (s_value v [])
s_value :: JSValue -> ShowS
s_value JSNull = showString "null"
s_value (JSBool False) = showString "false"
s_value (JSBool True) = showString "true"
s_value (JSNum n) = shows n
s_value (JSString s) = s_string s
s_value (JSArray vs) = s_array vs
s_value (JSObject fs) = s_object (sortBy (compare `on` fst) fs)
s_string :: String -> ShowS
s_string s = showChar '"' . showl s
where showl [] = showChar '"'
showl (c:cs) = s_char c . showl cs
s_char '"' = showChar '\\' . showChar '"'
s_char '\\' = showChar '\\' . showChar '\\'
s_char c = showChar c
s_array :: [JSValue] -> ShowS
s_array [] = showString "[]"
s_array (v0:vs0) = showChar '[' . s_value v0 . showl vs0
where showl [] = showChar ']'
showl (v:vs) = showChar ',' . s_value v . showl vs
s_object :: [(String, JSValue)] -> ShowS
s_object [] = showString "{}"
s_object ((k0,v0):kvs0) = showChar '{' . s_string k0
. showChar ':' . s_value v0
. showl kvs0
where showl [] = showChar '}'
showl ((k,v):kvs) = showChar ',' . s_string k
. showChar ':' . s_value v
. showl kvs
------------------------------------------------------------------------------
-- parsing
--
-- | Parse a canonical JSON format string as a JSON value. The input string
-- does not have to be in canonical form, just in the \"canonical JSON\"
-- format.
--
-- Use 'renderCanonicalJSON' to convert into canonical form.
--
parseCanonicalJSON :: BS.ByteString -> Either String JSValue
parseCanonicalJSON = either (Left . show) Right
. parse p_value ""
. BS.unpack
p_value :: CharParser () JSValue
p_value = spaces *> p_jvalue
tok :: CharParser () a -> CharParser () a
tok p = p <* spaces
{-
value:
string
number
object
array
true
false
null
-}
p_jvalue :: CharParser () JSValue
p_jvalue = (JSNull <$ p_null)
<|> (JSBool <$> p_boolean)
<|> (JSArray <$> p_array)
<|> (JSString <$> p_string)
<|> (JSObject <$> p_object)
<|> (JSNum <$> p_number)
> "JSON value"
p_null :: CharParser () ()
p_null = tok (string "null") >> return ()
p_boolean :: CharParser () Bool
p_boolean = tok
( (True <$ string "true")
<|> (False <$ string "false")
)
{-
array:
[]
[ elements ]
elements:
value
value , elements
-}
p_array :: CharParser () [JSValue]
p_array = between (tok (char '[')) (tok (char ']'))
$ p_jvalue `sepBy` tok (char ',')
{-
string:
""
" chars "
chars:
char
char chars
char:
any byte except hex 22 (") or hex 5C (\)
\\
\"
-}
p_string :: CharParser () String
p_string = between (char '"') (tok (char '"')) (many p_char)
where p_char = (char '\\' >> p_esc)
<|> (satisfy (\x -> x /= '"' && x /= '\\'))
p_esc = ('"' <$ char '"')
<|> ('\\' <$ char '\\')
> "escape character"
{-
object:
{}
{ members }
members:
pair
pair , members
pair:
string : value
-}
p_object :: CharParser () [(String,JSValue)]
p_object = between (tok (char '{')) (tok (char '}'))
$ p_field `sepBy` tok (char ',')
where p_field = (,) <$> (p_string <* tok (char ':')) <*> p_jvalue
{-
number:
int
int:
digit
digit1-9 digits
- digit1-9
- digit1-9 digits
digits:
digit
digit digits
-}
-- | Parse an int
--
-- TODO: Currently this allows for a maximum of 15 digits (i.e. a maximum value
-- of @999,999,999,999,999@) as a crude approximation of the 'Int54' range.
p_number :: CharParser () Int54
p_number = tok
( (char '-' *> (negate <$> pnat))
<|> pnat
<|> zero
)
where pnat = (\d ds -> strToInt (d:ds)) <$> digit19 <*> manyN 14 digit
digit19 = satisfy (\c -> isDigit c && c /= '0') > "digit"
strToInt = foldl' (\x d -> 10*x + digitToInt54 d) 0
zero = 0 <$ char '0'
digitToInt54 :: Char -> Int54
digitToInt54 = fromIntegral . digitToInt
manyN :: Int -> CharParser () a -> CharParser () [a]
manyN 0 _ = pure []
manyN n p = ((:) <$> p <*> manyN (n-1) p)
<|> pure []
------------------------------------------------------------------------------
-- rendering nicely
--
-- | Render a JSON value in a reasonable human-readable form. This rendered
-- form is /not the canonical form/ used for repeatable hashes, use
-- 'renderCanonicalJSON' for that.
-- It is suitable however as an external form as any canonical JSON parser can
-- read it and convert it into the form used for repeatable hashes.
--
prettyCanonicalJSON :: JSValue -> String
prettyCanonicalJSON = render . jvalue
jvalue :: JSValue -> Doc
jvalue JSNull = text "null"
jvalue (JSBool False) = text "false"
jvalue (JSBool True) = text "true"
jvalue (JSNum n) = integer (fromIntegral (int54ToInt64 n))
jvalue (JSString s) = jstring s
jvalue (JSArray vs) = jarray vs
jvalue (JSObject fs) = jobject fs
jstring :: String -> Doc
jstring = doubleQuotes . hcat . map jchar
jchar :: Char -> Doc
jchar '"' = Doc.char '\\' Doc.<> Doc.char '"'
jchar '\\' = Doc.char '\\' Doc.<> Doc.char '\\'
jchar c = Doc.char c
jarray :: [JSValue] -> Doc
jarray = sep . punctuate' lbrack comma rbrack
. map jvalue
jobject :: [(String, JSValue)] -> Doc
jobject = sep . punctuate' lbrace comma rbrace
. map (\(k,v) -> sep [jstring k Doc.<> colon, nest 2 (jvalue v)])
-- | Punctuate in this style:
--
-- > [ foo, bar ]
--
-- if it fits, or vertically otherwise:
--
-- > [ foo
-- > , bar
-- > ]
--
punctuate' :: Doc -> Doc -> Doc -> [Doc] -> [Doc]
punctuate' l _ r [] = [l Doc.<> r]
punctuate' l _ r [x] = [l <+> x <+> r]
punctuate' l p r (x:xs) = l <+> x : go xs
where
go [] = []
go [y] = [p <+> y, r]
go (y:ys) = (p <+> y) : go ys
hackage-security-0.6.2.4/tests/ 0000755 0000000 0000000 00000000000 07346545000 014444 5 ustar 00 0000000 0000000 hackage-security-0.6.2.4/tests/TestSuite.hs 0000644 0000000 0000000 00000051665 07346545000 016746 0 ustar 00 0000000 0000000 {-# LANGUAGE CPP, RecordWildCards, GADTs #-}
module Main (main) where
-- stdlib
import Control.Exception
import Control.Monad
import Data.Maybe (fromJust)
import Data.Time
import Network.URI (URI, parseURI)
import Test.Tasty
import Test.Tasty.HUnit
import Test.Tasty.QuickCheck hiding (label)
import System.IO.Temp (withSystemTempDirectory)
import qualified Codec.Archive.Tar.Entry as Tar
import qualified Data.ByteString.Lazy.Char8 as BS
-- Cabal
#if MIN_VERSION_Cabal(2,0,0)
import Distribution.Package (mkPackageName)
#else
import Distribution.Package (PackageName(PackageName))
#endif
-- hackage-security
import Hackage.Security.Client
import Hackage.Security.Client.Repository
import Hackage.Security.JSON (DeserializationError(..))
import Hackage.Security.Util.Checked
import Hackage.Security.Util.Path
import Hackage.Security.Util.Some
import Hackage.Security.Util.Pretty
import qualified Hackage.Security.Client.Repository.Remote as Remote
import qualified Hackage.Security.Client.Repository.Cache as Cache
-- TestSuite
import TestSuite.HttpMem
import TestSuite.InMemCache
import TestSuite.InMemRepo
import TestSuite.InMemRepository
import TestSuite.PrivateKeys
import TestSuite.Util.StrictMVar
import TestSuite.JSON as JSON
{-------------------------------------------------------------------------------
TestSuite driver
-------------------------------------------------------------------------------}
main :: IO ()
main = defaultMain tests
tests :: TestTree
tests = testGroup "hackage-security" [
testGroup "InMem" [
testCase "testInMemInitialHasForUpdates" testInMemInitialHasUpdates
, testCase "testInMemNoUpdates" testInMemNoUpdates
, testCase "testInMemUpdatesAfterCron" testInMemUpdatesAfterCron
, testCase "testInMemKeyRollover" testInMemKeyRollover
, testCase "testInMemOutdatedTimestamp" testInMemOutdatedTimestamp
, testCase "testInMemIndex" testInMemIndex
]
, testGroup "HttpMem" [
testCase "testHttpMemInitialHasForUpdates" testHttpMemInitialHasUpdates
, testCase "testHttpMemNoUpdates" testHttpMemNoUpdates
, testCase "testHttpMemUpdatesAfterCron" testHttpMemUpdatesAfterCron
, testCase "testHttpMemKeyRollover" testHttpMemKeyRollover
, testCase "testHttpMemOutdatedTimestamp" testHttpMemOutdatedTimestamp
, testCase "testHttpMemIndex" testHttpMemIndex
]
, testGroup "Canonical JSON" [
testProperty "prop_roundtrip_canonical" JSON.prop_roundtrip_canonical
, testProperty "prop_roundtrip_pretty" JSON.prop_roundtrip_pretty
, testProperty "prop_canonical_pretty" JSON.prop_canonical_pretty
, testProperty "prop_aeson_canonical" JSON.prop_aeson_canonical
]
]
{-------------------------------------------------------------------------------
In-memory tests
These tests test the core TUF infrastructure, but any specific Repository
implementation; instead, they use one specifically designed for testing
(almost a Repository mock-up).
-------------------------------------------------------------------------------}
-- | Initial check for updates: empty cache
testInMemInitialHasUpdates :: Assertion
testInMemInitialHasUpdates = inMemTest $ \_inMemRepo logMsgs repo -> do
withAssertLog "A" logMsgs [] $
assertEqual "A.1" HasUpdates =<< checkForUpdates repo =<< checkExpiry
-- | Check that if we run updates again, with no changes on the server,
-- we get NoUpdates
testInMemNoUpdates :: Assertion
testInMemNoUpdates = inMemTest $ \_inMemRepo logMsgs repo -> do
withAssertLog "A" logMsgs [] $ do
assertEqual "A.1" HasUpdates =<< checkForUpdates repo =<< checkExpiry
withAssertLog "B" logMsgs [] $ do
assertEqual "B.2" NoUpdates =<< checkForUpdates repo =<< checkExpiry
-- | Test that we have updates reported after the timestamp is resigned
testInMemUpdatesAfterCron :: Assertion
testInMemUpdatesAfterCron = inMemTest $ \inMemRepo logMsgs repo -> do
withAssertLog "A" logMsgs [] $ do
assertEqual "A.1" HasUpdates =<< checkForUpdates repo =<< checkExpiry
withAssertLog "B" logMsgs [] $ do
assertEqual "B.2" NoUpdates =<< checkForUpdates repo =<< checkExpiry
inMemRepoCron inMemRepo =<< getCurrentTime
withAssertLog "C" logMsgs [] $ do
assertEqual "C.1" HasUpdates =<< checkForUpdates repo =<< checkExpiry
withAssertLog "D" logMsgs [] $ do
assertEqual "D.2" NoUpdates =<< checkForUpdates repo =<< checkExpiry
-- | Test what happens when the timestamp/snapshot keys rollover
testInMemKeyRollover :: Assertion
testInMemKeyRollover = inMemTest $ \inMemRepo logMsgs repo -> do
withAssertLog "A" logMsgs [] $ do
assertEqual "A.1" HasUpdates =<< checkForUpdates repo =<< checkExpiry
withAssertLog "B" logMsgs [] $ do
assertEqual "B.2" NoUpdates =<< checkForUpdates repo =<< checkExpiry
inMemRepoKeyRollover inMemRepo =<< getCurrentTime
let msgs = [verificationError $ unknownKeyError timestampPath]
withAssertLog "C" logMsgs msgs $ do
assertEqual "C.1" HasUpdates =<< checkForUpdates repo =<< checkExpiry
withAssertLog "D" logMsgs [] $ do
assertEqual "D.1" NoUpdates =<< checkForUpdates repo =<< checkExpiry
-- | Test what happens when server has an outdated timestamp
-- (after a successful initial update)
testInMemOutdatedTimestamp :: Assertion
testInMemOutdatedTimestamp = inMemTest $ \_inMemRepo logMsgs repo -> do
withAssertLog "A" logMsgs [] $ do
assertEqual "A.1" HasUpdates =<< checkForUpdates repo =<< checkExpiry
withAssertLog "B" logMsgs [] $ do
assertEqual "B.2" NoUpdates =<< checkForUpdates repo =<< checkExpiry
now <- getCurrentTime
let (FileExpires fourDaysLater) = expiresInDays now 4
let msgs = replicate 5 (inHistory (Right (expired timestampPath)))
catchVerificationLoop msgs $ do
withAssertLog "C" logMsgs [] $ do
assertEqual "C.1" HasUpdates =<< checkForUpdates repo fourDaysLater
testInMemIndex :: Assertion
testInMemIndex = inMemTest $ \inMemRepo _logMsgs repo ->
testRepoIndex inMemRepo repo
{-------------------------------------------------------------------------------
Same tests, but going through the "real" Remote repository and Cache, though
still using an in-memory repository (with a HttpLib bridge)
These are almost hte same as the in-memory tests, but the log messages we
expect are slightly different because the Remote repository indicates what
is is downloading, and why.
-------------------------------------------------------------------------------}
-- | Initial check for updates: empty cache
testHttpMemInitialHasUpdates :: Assertion
testHttpMemInitialHasUpdates = httpMemTest $ \_inMemRepo logMsgs repo -> do
withAssertLog "A" logMsgs msgsInitialUpdate $
assertEqual "A.1" HasUpdates =<< checkForUpdates repo =<< checkExpiry
-- | Check that if we run updates again, with no changes on the server,
-- we get NoUpdates
testHttpMemNoUpdates :: Assertion
testHttpMemNoUpdates = httpMemTest $ \_inMemRepo logMsgs repo -> do
withAssertLog "A" logMsgs msgsInitialUpdate $ do
assertEqual "A.1" HasUpdates =<< checkForUpdates repo =<< checkExpiry
withAssertLog "B" logMsgs msgsNoUpdates $ do
assertEqual "B.2" NoUpdates =<< checkForUpdates repo =<< checkExpiry
-- | Test that we have updates reported after the timestamp is resigned
testHttpMemUpdatesAfterCron :: Assertion
testHttpMemUpdatesAfterCron = httpMemTest $ \inMemRepo logMsgs repo -> do
withAssertLog "A" logMsgs msgsInitialUpdate $ do
assertEqual "A.1" HasUpdates =<< checkForUpdates repo =<< checkExpiry
withAssertLog "B" logMsgs msgsNoUpdates $ do
assertEqual "B.2" NoUpdates =<< checkForUpdates repo =<< checkExpiry
inMemRepoCron inMemRepo =<< getCurrentTime
withAssertLog "C" logMsgs msgsResigned $ do
assertEqual "C.1" HasUpdates =<< checkForUpdates repo =<< checkExpiry
withAssertLog "D" logMsgs msgsNoUpdates $ do
assertEqual "D.2" NoUpdates =<< checkForUpdates repo =<< checkExpiry
-- | Test what happens when the timestamp/snapshot keys rollover
testHttpMemKeyRollover :: Assertion
testHttpMemKeyRollover = httpMemTest $ \inMemRepo logMsgs repo -> do
withAssertLog "A" logMsgs msgsInitialUpdate $ do
assertEqual "A.1" HasUpdates =<< checkForUpdates repo =<< checkExpiry
withAssertLog "B" logMsgs msgsNoUpdates $ do
assertEqual "B.2" NoUpdates =<< checkForUpdates repo =<< checkExpiry
inMemRepoKeyRollover inMemRepo =<< getCurrentTime
withAssertLog "C" logMsgs msgsKeyRollover $ do
assertEqual "C.1" HasUpdates =<< checkForUpdates repo =<< checkExpiry
withAssertLog "D" logMsgs msgsNoUpdates $ do
assertEqual "D.1" NoUpdates =<< checkForUpdates repo =<< checkExpiry
-- | Test what happens when server has an outdated timestamp
-- (after a successful initial update)
testHttpMemOutdatedTimestamp :: Assertion
testHttpMemOutdatedTimestamp = httpMemTest $ \_inMemRepo logMsgs repo -> do
withAssertLog "A" logMsgs msgsInitialUpdate $ do
assertEqual "A.1" HasUpdates =<< checkForUpdates repo =<< checkExpiry
withAssertLog "B" logMsgs msgsNoUpdates $ do
assertEqual "B.2" NoUpdates =<< checkForUpdates repo =<< checkExpiry
now <- getCurrentTime
let (FileExpires fourDaysLater) = expiresInDays now 4
let msgs = replicate 5 (inHistory (Right (expired timestampPath)))
catchVerificationLoop msgs $ do
withAssertLog "C" logMsgs [] $ do
assertEqual "C.1" HasUpdates =<< checkForUpdates repo fourDaysLater
testHttpMemIndex :: Assertion
testHttpMemIndex = httpMemTest $ \inMemRepo _logMsgs repo ->
testRepoIndex inMemRepo repo
{-------------------------------------------------------------------------------
Identical tests between the two variants
-------------------------------------------------------------------------------}
testRepoIndex :: (Throws SomeRemoteError, Throws VerificationError)
=> InMemRepo -> Repository down -> IO ()
testRepoIndex inMemRepo repo = do
assertEqual "A" HasUpdates =<< checkForUpdates repo =<< checkExpiry
dir1 <- getDirectory repo
directoryFirst dir1 @?= DirectoryEntry 0
directoryNext dir1 @?= DirectoryEntry 0
length (directoryEntries dir1) @?= 0
now <- getCurrentTime
inMemRepoSetIndex inMemRepo now [testEntry1]
assertEqual "B" HasUpdates =<< checkForUpdates repo =<< checkExpiry
dir2 <- getDirectory repo
directoryFirst dir2 @?= DirectoryEntry 0
directoryNext dir2 @?= DirectoryEntry 2
length (directoryEntries dir2) @?= 1
directoryLookup dir2 testEntryIndexFile @?= Just (DirectoryEntry 0)
withIndex repo $ \IndexCallbacks{..} -> do
(sentry, next) <- indexLookupEntry (DirectoryEntry 0)
next @?= Nothing
case sentry of Some entry -> checkIndexEntry entry
where
checkIndexEntry :: IndexEntry dec -> Assertion
checkIndexEntry entry = do
toUnrootedFilePath (unrootPath (indexEntryPath entry))
@?= "foo/preferred-versions"
indexEntryContent entry @?= testEntrycontent
case indexEntryPathParsed entry of
Just (IndexPkgPrefs pkgname) -> do
pkgname @?= mkPackageName "foo"
case indexEntryContentParsed entry of
Right () -> return ()
_ -> fail "unexpected index entry content"
_ -> fail "unexpected index path"
testEntry1 = Tar.fileEntry path testEntrycontent
where
Right path = Tar.toTarPath False "foo/preferred-versions"
testEntrycontent = BS.pack "foo >= 1"
testEntryIndexFile = IndexPkgPrefs (mkPackageName "foo")
{-------------------------------------------------------------------------------
Log messages we expect when using the Remote repository
-------------------------------------------------------------------------------}
-- | The log messages we expect on the initial check for updates
msgsInitialUpdate :: [LogMessage -> Bool]
msgsInitialUpdate = [
selectedMirror inMemURI
, downloading isTimestamp
, downloading isSnapshot
, downloading isMirrors
, noLocalCopy
, downloading isIndex
, lockingWait
, lockingWaitDone
, lockingRelease
]
-- | Log messages when we do a check for updates and there are no changes
msgsNoUpdates :: [LogMessage -> Bool]
msgsNoUpdates = [
selectedMirror inMemURI
, downloading isTimestamp
, lockingWait
, lockingWaitDone
, lockingRelease
]
-- | Log messages we expect when the timestamp and snapshot have been resigned
msgsResigned :: [LogMessage -> Bool]
msgsResigned = [
selectedMirror inMemURI
, downloading isTimestamp
, downloading isSnapshot
, lockingWait
, lockingWaitDone
, lockingRelease
]
-- | Log messages we expect when the timestamp key has been rolled over
msgsKeyRollover :: [LogMessage -> Bool]
msgsKeyRollover = [
selectedMirror inMemURI
, downloading isTimestamp
, verificationError $ unknownKeyError timestampPath
, downloading isRoot
, lockingWait
, lockingWaitDone
, lockingRelease
, downloading isTimestamp
, downloading isSnapshot
-- Since we delete the timestamp and snapshot on a root info change,
-- we will then conclude that we need to update the mirrors and the index.
, downloading isMirrors
, updating isIndex
, lockingWait
, lockingWaitDone
, lockingRelease
]
{-------------------------------------------------------------------------------
Classifying log messages
-------------------------------------------------------------------------------}
downloading :: (forall fs typ. RemoteFile fs typ -> Bool) -> LogMessage -> Bool
downloading isFile (LogDownloading file) = isFile file
downloading _ _ = False
noLocalCopy :: LogMessage -> Bool
noLocalCopy (LogCannotUpdate (RemoteIndex _ _) UpdateImpossibleNoLocalCopy) = True
noLocalCopy _ = False
selectedMirror :: URI -> LogMessage -> Bool
selectedMirror mirror (LogSelectedMirror mirror') = mirror' == show mirror
selectedMirror _ _ = False
updating :: (forall fs typ. RemoteFile fs typ -> Bool) -> LogMessage -> Bool
updating isFile (LogUpdating file) = isFile file
updating _ _ = False
lockingWait, lockingWaitDone, lockingRelease :: LogMessage -> Bool
lockingWait (LogLockWait _) = True
lockingWait _ = False
lockingWaitDone (LogLockWaitDone _) = True
lockingWaitDone _ = False
lockingRelease (LogUnlock _) = True
lockingRelease _ = False
expired :: TargetPath -> VerificationError -> Bool
expired f (VerificationErrorExpired f') = f == f'
expired _ _ = False
unknownKeyError :: TargetPath -> VerificationError -> Bool
unknownKeyError f (VerificationErrorDeserialization f' (DeserializationErrorUnknownKey _keyId)) =
f == f'
unknownKeyError _ _ = False
verificationError :: (VerificationError -> Bool) -> LogMessage -> Bool
verificationError isErr (LogVerificationError err) = isErr err
verificationError _ _ = False
inHistory :: Either RootUpdated (VerificationError -> Bool) -> HistoryMsg -> Bool
inHistory (Right isErr) (Right err) = isErr err
inHistory (Left _) (Left _) = True
inHistory _ _ = False
type HistoryMsg = Either RootUpdated VerificationError
catchVerificationLoop :: ([HistoryMsg -> Bool]) -> Assertion -> Assertion
catchVerificationLoop history = handleJust isLoop handler
where
isLoop :: VerificationError -> Maybe VerificationHistory
isLoop (VerificationErrorLoop history') = Just history'
isLoop _ = Nothing
handler :: VerificationHistory -> Assertion
handler history' =
unless (length history == length history' && and (zipWith ($) history history')) $
assertFailure $ "Unexpected verification history:"
++ unlines (map pretty' history')
pretty' :: HistoryMsg -> String
pretty' (Left RootUpdated) = "root updated"
pretty' (Right err) = pretty err
{-------------------------------------------------------------------------------
Classifying files
-------------------------------------------------------------------------------}
isRoot :: RemoteFile fs typ -> Bool
isRoot (RemoteRoot _) = True
isRoot _ = False
isIndex :: RemoteFile fs typ -> Bool
isIndex (RemoteIndex _ _) = True
isIndex _ = False
isMirrors :: RemoteFile fs typ -> Bool
isMirrors (RemoteMirrors _) = True
isMirrors _ = False
isSnapshot :: RemoteFile fs typ -> Bool
isSnapshot (RemoteSnapshot _) = True
isSnapshot _ = False
isTimestamp :: RemoteFile fs typ -> Bool
isTimestamp RemoteTimestamp = True
isTimestamp _ = False
timestampPath :: TargetPath
timestampPath = TargetPathRepo $ repoLayoutTimestamp hackageRepoLayout
{-------------------------------------------------------------------------------
Auxiliary
-------------------------------------------------------------------------------}
-- | Check the contents of the log
assertLog :: String -> [LogMessage -> Bool] -> [LogMessage] -> Assertion
assertLog label expected actual = go expected actual
where
go :: [LogMessage -> Bool] -> [LogMessage] -> Assertion
go [] [] = return ()
go [] as = unexpected as
go (_:_) [] = assertFailure $ label ++ ": expected log message"
go (e:es) (a:as) = if e a then go es as else unexpected [a]
unexpected :: [LogMessage] -> Assertion
unexpected msgs = assertFailure $ label ++ ": "
++ "unexpected log messages:\n"
++ unlines (map pretty msgs)
++ "\nfull set of log messages was:\n"
++ unlines (map pretty actual)
-- | Run the actions and check its log messages
withAssertLog :: String
-> MVar [LogMessage]
-> [LogMessage -> Bool]
-> Assertion -> Assertion
withAssertLog label mv expected action = do
oldMsgs <- modifyMVar mv $ \old -> return ([], old)
action
newMsgs <- modifyMVar mv $ \new -> return (oldMsgs, new)
assertLog label expected newMsgs
-- | Unit test using the in-memory repository/cache
inMemTest :: ( ( Throws SomeRemoteError
, Throws VerificationError
) => InMemRepo -> MVar [LogMessage] -> Repository InMemFile -> Assertion
)
-> Assertion
inMemTest test = uncheckClientErrors $ do
now <- getCurrentTime
keys <- createPrivateKeys
let root = initRoot now layout keys
withSystemTempDirectory "hackage-security-test" $ \tempDir' -> do
tempDir <- makeAbsolute $ fromFilePath tempDir'
inMemRepo <- newInMemRepo layout root now keys
inMemCache <- newInMemCache tempDir layout
logMsgs <- newMVar []
let logger msg = modifyMVar_ logMsgs $ \msgs -> return $ msgs ++ [msg]
repository <- newInMemRepository layout hackageIndexLayout inMemRepo inMemCache logger
bootstrap repository (map someKeyId (privateRoot keys)) (KeyThreshold 2)
test inMemRepo logMsgs repository
where
layout :: RepoLayout
layout = hackageRepoLayout
-- | Unit test using the Remote repository but with the in-mem repo
httpMemTest :: ( ( Throws SomeRemoteError
, Throws VerificationError
) => InMemRepo -> MVar [LogMessage] -> Repository Remote.RemoteTemp -> Assertion
)
-> Assertion
httpMemTest test = uncheckClientErrors $ do
now <- getCurrentTime
keys <- createPrivateKeys
let root = initRoot now layout keys
withSystemTempDirectory "hackage-security-test" $ \tempDir' -> do
tempDir <- makeAbsolute $ fromFilePath tempDir'
inMemRepo <- newInMemRepo layout root now keys
logMsgs <- newMVar []
let logger msg = modifyMVar_ logMsgs $ \msgs -> return $ msgs ++ [msg]
httpLib = httpMem inMemRepo
cache = Cache.Cache {
cacheRoot = tempDir > fragment "cache"
, cacheLayout = cabalCacheLayout
}
Remote.withRepository httpLib
[inMemURI]
Remote.defaultRepoOpts
cache
hackageRepoLayout
hackageIndexLayout
logger
$ \repository -> do
withAssertLog "bootstrap" logMsgs bootstrapMsgs $
bootstrap repository (map someKeyId (privateRoot keys)) (KeyThreshold 2)
test inMemRepo logMsgs repository
where
bootstrapMsgs :: [LogMessage -> Bool]
bootstrapMsgs = [ selectedMirror inMemURI
, downloading isRoot
, lockingWait
, lockingWaitDone
, lockingRelease
]
layout :: RepoLayout
layout = hackageRepoLayout
-- | Base URI for the in-memory repository
--
-- This could really be anything at all
inMemURI :: URI
inMemURI = fromJust (parseURI "inmem://")
-- | Return @Just@ the current time
checkExpiry :: IO (Maybe UTCTime)
checkExpiry = Just `fmap` getCurrentTime
#if !MIN_VERSION_Cabal(2,0,0)
-- | Emulate Cabal2's @mkPackageName@ constructor-function
mkPackageName :: String -> PackageName
mkPackageName = PackageName
#endif
hackage-security-0.6.2.4/tests/TestSuite/ 0000755 0000000 0000000 00000000000 07346545000 016375 5 ustar 00 0000000 0000000 hackage-security-0.6.2.4/tests/TestSuite/HttpMem.hs 0000644 0000000 0000000 00000004440 07346545000 020311 0 ustar 00 0000000 0000000 -- | HttpLib bridge to the in-memory repository
module TestSuite.HttpMem (
httpMem
) where
-- stdlib
import Network.URI (URI)
import qualified Data.ByteString.Lazy as BS.L
-- hackage-security
import Hackage.Security.Client
import Hackage.Security.Client.Repository.HttpLib
import Hackage.Security.Util.Checked
import Hackage.Security.Util.Path
import Hackage.Security.Util.Some
-- TestSuite
import TestSuite.InMemRepo
httpMem :: InMemRepo -> HttpLib
httpMem inMemRepo = HttpLib {
httpGet = get inMemRepo
, httpGetRange = getRange inMemRepo
}
{-------------------------------------------------------------------------------
Individual methods
-------------------------------------------------------------------------------}
-- | Download a file
--
-- Since we don't (yet?) make any attempt to simulate a cache, we ignore
-- caching headers.
get :: forall a. Throws SomeRemoteError
=> InMemRepo
-> [HttpRequestHeader]
-> URI
-> ([HttpResponseHeader] -> BodyReader -> IO a)
-> IO a
get InMemRepo{..} _requestHeaders uri callback = do
Some inMemFile <- inMemRepoGetPath $ castRoot (uriPath uri)
br <- bodyReaderFromBS $ inMemFileRender inMemFile
callback [HttpResponseAcceptRangesBytes] br
-- | Download a byte range
--
-- Range is starting and (exclusive) end offset in bytes.
--
-- We ignore requests for compression; different servers deal with compression
-- for byte range requests differently; in particular, Apache returns the range
-- of the _compressed_ file, which is pretty useless for our purposes. For now
-- we ignore this issue completely here.
getRange :: forall a. Throws SomeRemoteError
=> InMemRepo
-> [HttpRequestHeader]
-> URI
-> (Int, Int)
-> (HttpStatus -> [HttpResponseHeader] -> BodyReader -> IO a)
-> IO a
getRange InMemRepo{..} _requestHeaders uri (fr, to) callback = do
Some inMemFile <- inMemRepoGetPath $ castRoot (uriPath uri)
br <- bodyReaderFromBS $ substr (inMemFileRender inMemFile)
let responseHeaders = concat [
[ HttpResponseAcceptRangesBytes ]
]
callback HttpStatus206PartialContent responseHeaders br
where
substr :: BS.L.ByteString -> BS.L.ByteString
substr = BS.L.take (fromIntegral (to - fr)) . BS.L.drop (fromIntegral fr)
hackage-security-0.6.2.4/tests/TestSuite/InMemCache.hs 0000644 0000000 0000000 00000014774 07346545000 020677 0 ustar 00 0000000 0000000 module TestSuite.InMemCache (
InMemCache(..)
, newInMemCache
) where
-- base
import Control.Exception
import qualified Codec.Compression.GZip as GZip
import qualified Data.ByteString.Lazy as BS.L
-- tar
import qualified Codec.Archive.Tar as Tar
import qualified Codec.Archive.Tar.Index as TarIndex
import Codec.Archive.Tar.Index (TarIndex)
-- hackage-security
import Hackage.Security.Client hiding (withIndex)
import Hackage.Security.Client.Formats
import Hackage.Security.Client.Repository
import Hackage.Security.JSON
import Hackage.Security.Util.Path
-- TestSuite
import TestSuite.Util.StrictMVar
import TestSuite.InMemRepo
data InMemCache = InMemCache {
inMemCacheGet :: CachedFile -> IO (Maybe (Path Absolute))
, inMemCacheGetRoot :: IO (Path Absolute)
, inMemCacheWithIndex :: forall a. (Handle -> IO a) -> IO a
, inMemCacheGetIndexIdx :: IO TarIndex
, inMemCacheClear :: IO ()
, inMemCachePut :: forall f typ. InMemFile typ -> Format f
-> IsCached typ -> IO ()
}
newInMemCache :: Path Absolute -> RepoLayout -> IO InMemCache
newInMemCache tempDir layout = do
state <- newMVar $ initLocalState layout
return InMemCache {
inMemCacheGet = get state tempDir
, inMemCacheGetRoot = getRoot state tempDir
, inMemCacheWithIndex = withIndex state tempDir
, inMemCacheGetIndexIdx = getIndexIdx state
, inMemCacheClear = clear state
, inMemCachePut = put state
}
{-------------------------------------------------------------------------------
"Local" state (the files we "cached")
-------------------------------------------------------------------------------}
data LocalState = LocalState {
cacheRepoLayout :: !RepoLayout
, cachedRoot :: !(Maybe (Signed Root))
, cachedMirrors :: !(Maybe (Signed Mirrors))
, cachedTimestamp :: !(Maybe (Signed Timestamp))
, cachedSnapshot :: !(Maybe (Signed Snapshot))
-- We cache only the uncompressed index
-- (we can unambiguously construct the @.tar@ from the @.tar.gz@,
-- but not the other way around)
, cachedIndex :: Maybe BS.L.ByteString
}
cachedRoot' :: LocalState -> Signed Root
cachedRoot' LocalState{..} = needRoot cachedRoot
needRoot :: Maybe a -> a
needRoot Nothing = error "InMemCache: no root info (did you bootstrap?)"
needRoot (Just root) = root
initLocalState :: RepoLayout -> LocalState
initLocalState layout = LocalState {
cacheRepoLayout = layout
, cachedRoot = Nothing
, cachedMirrors = Nothing
, cachedTimestamp = Nothing
, cachedSnapshot = Nothing
, cachedIndex = Nothing
}
{-------------------------------------------------------------------------------
Individual methods
-------------------------------------------------------------------------------}
-- | Get a cached file (if available)
get :: MVar LocalState -> Path Absolute -> CachedFile -> IO (Maybe (Path Absolute))
get state cacheTempDir cachedFile =
case cachedFile of
CachedRoot -> serve "root.json" $ render cachedRoot
CachedMirrors -> serve "mirrors.json" $ render cachedMirrors
CachedTimestamp -> serve "timestamp.json" $ render cachedTimestamp
CachedSnapshot -> serve "snapshot.json" $ render cachedSnapshot
where
render :: forall b. ToJSON WriteJSON b
=> (LocalState -> Maybe b)
-> (LocalState -> Maybe BS.L.ByteString)
render f st = renderJSON (cacheRepoLayout st) `fmap` (f st)
serve :: String
-> (LocalState -> Maybe BS.L.ByteString)
-> IO (Maybe (Path Absolute))
serve template f =
withMVar state $ \st ->
case f st of
Nothing -> return Nothing
Just bs -> do (tempFile, h) <- openTempFile' cacheTempDir template
BS.L.hPut h bs
hClose h
return $ Just tempFile
-- | Get the cached root
getRoot :: MVar LocalState -> Path Absolute -> IO (Path Absolute)
getRoot state cacheTempDir =
needRoot `fmap` get state cacheTempDir CachedRoot
withIndex :: MVar LocalState -> Path Absolute -> (Handle -> IO a) -> IO a
withIndex state cacheTempDir action = do
st <- readMVar state
case cachedIndex st of
Nothing -> error "InMemCache.withIndex: Could not read index."
Just bs -> do
(_, h) <- openTempFile' cacheTempDir "01-index.tar"
BS.L.hPut h bs
hSeek h AbsoluteSeek 0
x <- action h
hClose h
return x
getIndexIdx :: MVar LocalState -> IO TarIndex
getIndexIdx state = do
st <- readMVar state
case cachedIndex st of
Nothing -> error "InMemCache.getIndexIdx: Could not read index."
Just index -> either throwIO return . TarIndex.build . Tar.read $ index
-- | Clear all cached data
clear :: MVar LocalState -> IO ()
clear state = modifyMVar_ state $ \st -> return st {
cachedMirrors = Nothing
, cachedTimestamp = Nothing
, cachedSnapshot = Nothing
, cachedIndex = Nothing
}
-- | Cache a previously downloaded remote file
put :: MVar LocalState -> InMemFile typ -> Format f -> IsCached typ -> IO ()
put state = put' state . inMemFileRender
put' :: MVar LocalState -> BS.L.ByteString -> Format f -> IsCached typ -> IO ()
put' state bs = go
where
go :: Format f -> IsCached typ -> IO ()
go _ DontCache = return ()
go FUn (CacheAs f) = go' f
go FGz (CacheAs _) = error "put: the impossible happened"
go FUn CacheIndex = modifyMVar_ state $ \st -> return st {
cachedIndex = Just bs
}
go FGz CacheIndex = modifyMVar_ state $ \st -> return st {
cachedIndex = Just (GZip.decompress bs)
}
go' :: CachedFile -> IO ()
go' CachedRoot = go'' $ \x st -> st { cachedRoot = Just x }
go' CachedTimestamp = go'' $ \x st -> st { cachedTimestamp = Just x }
go' CachedSnapshot = go'' $ \x st -> st { cachedSnapshot = Just x }
go' CachedMirrors = go'' $ \x st -> st { cachedMirrors = Just x }
go'' :: forall a. FromJSON ReadJSON_Keys_Layout a
=> (a -> LocalState -> LocalState) -> IO ()
go'' f = do
modifyMVar_ state $ \st@LocalState{..} -> do
let keyEnv = rootKeys . signed . cachedRoot' $ st
case parseJSON_Keys_Layout keyEnv cacheRepoLayout bs of
Left err -> throwIO err
Right parsed -> return $ f parsed st
hackage-security-0.6.2.4/tests/TestSuite/InMemRepo.hs 0000644 0000000 0000000 00000026325 07346545000 020574 0 ustar 00 0000000 0000000 module TestSuite.InMemRepo (
InMemRepo(..)
, newInMemRepo
, initRoot
, InMemFile(..)
, inMemFileRender
) where
-- stdlib
import Control.Exception
import Data.Time
import qualified Codec.Archive.Tar as Tar
import qualified Codec.Compression.GZip as GZip
import qualified Data.ByteString.Lazy as BS.L
-- Cabal
import Distribution.Text
-- hackage-security
import Hackage.Security.Client
import Hackage.Security.Client.Formats
import Hackage.Security.Client.Repository
import Hackage.Security.Client.Verify
import Hackage.Security.JSON
import Hackage.Security.Util.Path
import Hackage.Security.Util.Some
-- TestSuite
import TestSuite.PrivateKeys
import TestSuite.Util.StrictMVar
{-------------------------------------------------------------------------------
"Files" from the in-memory repository
-------------------------------------------------------------------------------}
data InMemFile :: * -> * where
InMemMetadata :: ToJSON WriteJSON a => RepoLayout -> a -> InMemFile Metadata
InMemBinary :: BS.L.ByteString -> InMemFile Binary
inMemFileRender :: InMemFile typ -> BS.L.ByteString
inMemFileRender (InMemMetadata layout file) = renderJSON layout file
inMemFileRender (InMemBinary bs) = bs
instance DownloadedFile InMemFile where
downloadedRead file =
return $ inMemFileRender file
downloadedVerify file info =
return $ knownFileInfoEqual (fileInfo (inMemFileRender file))
(trusted info)
downloadedCopyTo file dest =
writeLazyByteString dest (inMemFileRender file)
{-------------------------------------------------------------------------------
In-memory repository
-------------------------------------------------------------------------------}
data InMemRepo = InMemRepo {
-- | Get a file from the repository
inMemRepoGet :: forall fs typ.
RemoteFile fs typ
-> Verify (Some (HasFormat fs), InMemFile typ)
-- | Get a file, based on a path (uses hackageRepoLayout)
, inMemRepoGetPath :: RepoPath -> IO (Some InMemFile)
-- | Run the "cron job" on the server
--
-- That is, resign the timestamp and the snapshot
, inMemRepoCron :: UTCTime -> IO ()
-- | Rollover the timestamp and snapshot keys
, inMemRepoKeyRollover :: UTCTime -> IO ()
-- | Set the content of the repo tar index and resign
, inMemRepoSetIndex :: UTCTime -> [Tar.Entry] -> IO ()
}
newInMemRepo :: RepoLayout
-> Signed Root
-> UTCTime
-> PrivateKeys
-> IO InMemRepo
newInMemRepo layout root now keys = do
state <- newMVar $ initRemoteState now layout keys root
return InMemRepo {
inMemRepoGet = get state
, inMemRepoGetPath = getPath state
, inMemRepoCron = cron state
, inMemRepoKeyRollover = keyRollover state
, inMemRepoSetIndex = setIndex state
}
{-------------------------------------------------------------------------------
"Remote" state (as it is "on the server")
-------------------------------------------------------------------------------}
data RemoteState = RemoteState {
remoteKeys :: !PrivateKeys
, remoteLayout :: !RepoLayout
, remoteRoot :: !(Signed Root)
, remoteTimestamp :: !(Signed Timestamp)
, remoteSnapshot :: !(Signed Snapshot)
, remoteMirrors :: !(Signed Mirrors)
, remoteTar :: !BS.L.ByteString
, remoteTarGz :: !BS.L.ByteString
}
initRoot :: UTCTime -> RepoLayout -> PrivateKeys -> Signed Root
initRoot now layout keys = withSignatures layout (privateRoot keys) Root {
rootVersion = FileVersion 1
, rootExpires = expiresInDays now (365 * 10)
, rootKeys = privateKeysEnv keys
, rootRoles = privateKeysRoles keys
}
initRemoteState :: UTCTime
-> RepoLayout
-> PrivateKeys
-> Signed Root
-> RemoteState
initRemoteState now layout keys signedRoot = RemoteState {
remoteKeys = keys
, remoteLayout = layout
, remoteRoot = signedRoot
, remoteTimestamp = signedTimestamp
, remoteSnapshot = signedSnapshot
, remoteMirrors = signedMirrors
, remoteTar = initTar
, remoteTarGz = initTarGz
}
where
signedTimestamp = withSignatures layout [privateTimestamp keys] initTimestamp
signedSnapshot = withSignatures layout [privateSnapshot keys] initSnapshot
signedMirrors = withSignatures layout [privateMirrors keys] initMirrors
initMirrors :: Mirrors
initMirrors = Mirrors {
mirrorsVersion = FileVersion 1
, mirrorsExpires = expiresNever
, mirrorsMirrors = []
}
initSnapshot :: Snapshot
initSnapshot = Snapshot {
snapshotVersion = FileVersion 1
, snapshotExpires = expiresInDays now 3
, snapshotInfoRoot = fileInfo $ renderJSON layout signedRoot
, snapshotInfoMirrors = fileInfo $ renderJSON layout signedMirrors
, snapshotInfoTarGz = fileInfo $ initTarGz
, snapshotInfoTar = Just $ fileInfo initTar
}
initTimestamp :: Timestamp
initTimestamp = Timestamp {
timestampVersion = FileVersion 1
, timestampExpires = expiresInDays now 3
, timestampInfoSnapshot = fileInfo $ renderJSON layout signedSnapshot
}
initTar :: BS.L.ByteString
initTar = Tar.write []
initTarGz :: BS.L.ByteString
initTarGz = GZip.compress initTar
{-------------------------------------------------------------------------------
InMemRepo methods
-------------------------------------------------------------------------------}
-- | Get a file from the server
get :: MVar RemoteState -> RemoteFile fs typ -> Verify (Some (HasFormat fs), InMemFile typ)
get state remoteFile = do
RemoteState{..} <- liftIO $ readMVar state
case remoteFile of
RemoteTimestamp -> return (Some (HFZ FUn), InMemMetadata remoteLayout remoteTimestamp)
RemoteSnapshot _ -> return (Some (HFZ FUn), InMemMetadata remoteLayout remoteSnapshot)
RemoteMirrors _ -> return (Some (HFZ FUn), InMemMetadata remoteLayout remoteMirrors)
RemoteRoot _ -> return (Some (HFZ FUn), InMemMetadata remoteLayout remoteRoot)
RemoteIndex hasGz _ -> return (Some hasGz, InMemBinary remoteTarGz)
RemotePkgTarGz pkgId _ -> error $ "withRemote: RemotePkgTarGz " ++ display pkgId
getPath :: MVar RemoteState -> RepoPath -> IO (Some InMemFile)
getPath state repoPath = do
RemoteState{..} <- readMVar state
case toUnrootedFilePath (unrootPath repoPath) of
"root.json" -> return $ Some (InMemMetadata remoteLayout remoteRoot)
"timestamp.json" -> return $ Some (InMemMetadata remoteLayout remoteTimestamp)
"snapshot.json" -> return $ Some (InMemMetadata remoteLayout remoteSnapshot)
"mirrors.json" -> return $ Some (InMemMetadata remoteLayout remoteMirrors)
"01-index.tar.gz" -> return $ Some (InMemBinary remoteTarGz)
"01-index.tar" -> return $ Some (InMemBinary remoteTar)
otherPath -> throwIO . userError $ "getPath: Unknown path " ++ otherPath
where
cron :: MVar RemoteState -> UTCTime -> IO ()
cron state now = modifyMVar_ state $ \st@RemoteState{..} -> do
let snapshot, snapshot' :: Snapshot
snapshot = signed remoteSnapshot
snapshot' = snapshot {
snapshotVersion = versionIncrement $ snapshotVersion snapshot
, snapshotExpires = expiresInDays now 3
}
timestamp, timestamp' :: Timestamp
timestamp = signed remoteTimestamp
timestamp' = Timestamp {
timestampVersion = versionIncrement $ timestampVersion timestamp
, timestampExpires = expiresInDays now 3
, timestampInfoSnapshot = fileInfo $ renderJSON remoteLayout signedSnapshot
}
signedTimestamp = withSignatures remoteLayout [privateTimestamp remoteKeys] timestamp'
signedSnapshot = withSignatures remoteLayout [privateSnapshot remoteKeys] snapshot'
return st {
remoteTimestamp = signedTimestamp
, remoteSnapshot = signedSnapshot
}
setIndex :: MVar RemoteState -> UTCTime -> [Tar.Entry] -> IO ()
setIndex state now entries = modifyMVar_ state $ \st@RemoteState{..} -> do
let snapshot, snapshot' :: Snapshot
snapshot = signed remoteSnapshot
snapshot' = snapshot {
snapshotVersion = versionIncrement $ snapshotVersion snapshot
, snapshotExpires = expiresInDays now 3
, snapshotInfoTarGz = fileInfo $ newTarGz
, snapshotInfoTar = Just $ fileInfo newTar
}
newTar :: BS.L.ByteString
newTar = Tar.write entries
newTarGz :: BS.L.ByteString
newTarGz = GZip.compress newTar
timestamp, timestamp' :: Timestamp
timestamp = signed remoteTimestamp
timestamp' = Timestamp {
timestampVersion = versionIncrement $ timestampVersion timestamp
, timestampExpires = expiresInDays now 3
, timestampInfoSnapshot = fileInfo $ renderJSON remoteLayout signedSnapshot
}
signedTimestamp = withSignatures remoteLayout [privateTimestamp remoteKeys] timestamp'
signedSnapshot = withSignatures remoteLayout [privateSnapshot remoteKeys] snapshot'
return st {
remoteTimestamp = signedTimestamp
, remoteSnapshot = signedSnapshot
, remoteTar = newTar
, remoteTarGz = newTarGz
}
keyRollover :: MVar RemoteState -> UTCTime -> IO ()
keyRollover state now = modifyMVar_ state $ \st@RemoteState{..} -> do
newKeySnapshot <- createKey' KeyTypeEd25519
newKeyTimestamp <- createKey' KeyTypeEd25519
let remoteKeys' :: PrivateKeys
remoteKeys' = remoteKeys {
privateSnapshot = newKeySnapshot
, privateTimestamp = newKeyTimestamp
}
root, root' :: Root
root = signed remoteRoot
root' = Root {
rootVersion = versionIncrement $ rootVersion root
, rootExpires = expiresInDays now (365 * 10)
, rootKeys = privateKeysEnv remoteKeys'
, rootRoles = privateKeysRoles remoteKeys'
}
snapshot, snapshot' :: Snapshot
snapshot = signed remoteSnapshot
snapshot' = snapshot {
snapshotVersion = versionIncrement $ snapshotVersion snapshot
, snapshotExpires = expiresInDays now 3
, snapshotInfoRoot = fileInfo $ renderJSON remoteLayout signedRoot
}
timestamp, timestamp' :: Timestamp
timestamp = signed remoteTimestamp
timestamp' = Timestamp {
timestampVersion = versionIncrement $ timestampVersion timestamp
, timestampExpires = expiresInDays now 3
, timestampInfoSnapshot = fileInfo $ renderJSON remoteLayout signedSnapshot
}
signedRoot = withSignatures remoteLayout (privateRoot remoteKeys') root'
signedTimestamp = withSignatures remoteLayout [privateTimestamp remoteKeys'] timestamp'
signedSnapshot = withSignatures remoteLayout [privateSnapshot remoteKeys'] snapshot'
return st {
remoteRoot = signedRoot
, remoteTimestamp = signedTimestamp
, remoteSnapshot = signedSnapshot
}
hackage-security-0.6.2.4/tests/TestSuite/InMemRepository.hs 0000644 0000000 0000000 00000004251 07346545000 022040 0 ustar 00 0000000 0000000 module TestSuite.InMemRepository (
newInMemRepository
) where
-- stdlib
import Control.Concurrent
-- hackage-security
import Hackage.Security.Client
import Hackage.Security.Client.Formats
import Hackage.Security.Client.Repository
import Hackage.Security.Client.Verify
import Hackage.Security.Util.Checked
import Hackage.Security.Util.Some
-- TestSuite
import TestSuite.InMemCache
import TestSuite.InMemRepo
newInMemRepository :: RepoLayout
-> IndexLayout
-> InMemRepo
-> InMemCache
-> (LogMessage -> IO ())
-> IO (Repository InMemFile)
newInMemRepository layout indexLayout repo cache logger = do
cacheLock <- newMVar ()
return $ Repository {
repGetRemote = getRemote repo cache
, repGetCached = inMemCacheGet cache
, repGetCachedRoot = inMemCacheGetRoot cache
, repClearCache = inMemCacheClear cache
, repLockCache = withMVar cacheLock . const
, repWithIndex = inMemCacheWithIndex cache
, repGetIndexIdx = inMemCacheGetIndexIdx cache
, repWithMirror = withMirror
, repLog = logger
, repLayout = layout
, repIndexLayout = indexLayout
, repDescription = "In memory repository"
}
{-------------------------------------------------------------------------------
Repository methods
-------------------------------------------------------------------------------}
-- | Get a file from the server
getRemote :: forall fs typ. Throws SomeRemoteError
=> InMemRepo
-> InMemCache
-> AttemptNr
-> RemoteFile fs typ
-> Verify (Some (HasFormat fs), InMemFile typ)
getRemote InMemRepo{..} InMemCache{..} _isRetry remoteFile = do
(Some format, inMemFile) <- inMemRepoGet remoteFile
ifVerified $ inMemCachePut inMemFile (hasFormatGet format) (mustCache remoteFile)
return (Some format, inMemFile)
-- | Mirror selection
withMirror :: forall a. Maybe [Mirror] -> IO a -> IO a
withMirror Nothing callback = callback
withMirror (Just []) callback = callback
withMirror _ _ = error "Mirror selection not implemented"
hackage-security-0.6.2.4/tests/TestSuite/JSON.hs 0000644 0000000 0000000 00000007137 07346545000 017512 0 ustar 00 0000000 0000000 {-# LANGUAGE CPP #-}
{-# OPTIONS_GHC -fno-warn-orphans #-}
module TestSuite.JSON (
prop_roundtrip_canonical,
prop_roundtrip_pretty,
prop_canonical_pretty,
prop_aeson_canonical,
) where
-- stdlib
import Data.Int
import Data.List (sortBy, nubBy)
import Data.Function (on)
import Control.Applicative
import qualified Data.ByteString.Lazy.Char8 as BS
import Test.QuickCheck
-- hackage-security
import Text.JSON.Canonical
-- aeson
import Data.Aeson (Value (..), eitherDecode)
import Data.String (fromString)
import qualified Data.Vector as V
#if MIN_VERSION_aeson(2,0,0)
import qualified Data.Aeson.KeyMap as KM
#else
import qualified Data.HashMap.Strict as HM
#endif
-- text
import qualified Data.Text as Text
prop_aeson_canonical, prop_roundtrip_canonical, prop_roundtrip_pretty, prop_canonical_pretty
:: JSValue -> Property
prop_roundtrip_canonical jsval =
parseCanonicalJSON (renderCanonicalJSON jsval) === Right (canonicalise jsval)
prop_roundtrip_pretty jsval =
parseCanonicalJSON (BS.pack (prettyCanonicalJSON jsval)) === Right jsval
prop_canonical_pretty jsval =
parseCanonicalJSON (renderCanonicalJSON jsval) ===
fmap canonicalise (parseCanonicalJSON (BS.pack (prettyCanonicalJSON jsval)))
prop_aeson_canonical jsval =
eitherDecode (renderCanonicalJSON jsval) === Right (toAeson (canonicalise jsval))
canonicalise :: JSValue -> JSValue
canonicalise v@JSNull = v
canonicalise v@(JSBool _) = v
canonicalise v@(JSNum _) = v
canonicalise v@(JSString _) = v
canonicalise (JSArray vs) = JSArray [ canonicalise v | v <- vs]
canonicalise (JSObject vs) = JSObject [ (k, canonicalise v)
| (k,v) <- sortBy (compare `on` fst) vs ]
sanitizeString :: String -> String
sanitizeString s = Text.unpack (Text.replace (Text.pack "\\") (Text.pack "\\\\") (Text.pack (show s)))
instance Arbitrary JSValue where
arbitrary =
sized $ \sz ->
frequency
[ (1, pure JSNull)
, (1, JSBool <$> arbitrary)
, (2, JSNum <$> arbitrary)
, (2, JSString . sanitizeString . getASCIIString <$> arbitrary)
, (3, JSArray <$> resize (sz `div` 2) arbitrary)
, (3, JSObject . mapFirst (sanitizeString . getASCIIString) . noDupFields <$> resize (sz `div` 2) arbitrary)
]
where
noDupFields = nubBy (\(x,_) (y,_) -> x==y)
mapFirst f = map (\(x, y) -> (f x, y))
shrink JSNull = []
shrink (JSBool _) = []
shrink (JSNum n) = [ JSNum n' | n' <- shrink n ]
shrink (JSString s) = [ JSString s' | s' <- shrink s ]
shrink (JSArray vs) = [ JSArray vs' | vs' <- shrink vs ]
shrink (JSObject vs) = [ JSObject vs' | vs' <- shrinkList shrinkSnd vs ]
where
shrinkSnd (a,b) = [ (a,b') | b' <- shrink b ]
toAeson :: JSValue -> Value
toAeson JSNull = Null
toAeson (JSBool b) = Bool b
toAeson (JSNum n) = Number (fromIntegral n)
toAeson (JSString s) = String (fromString s)
toAeson (JSArray xs) = Array $ V.fromList [ toAeson x | x <- xs ]
#if MIN_VERSION_aeson(2,0,0)
toAeson (JSObject xs) = Object $ KM.fromList [ (fromString k, toAeson v) | (k, v) <- xs ]
#else
toAeson (JSObject xs) = Object $ HM.fromList [ (fromString k, toAeson v) | (k, v) <- xs ]
#endif
instance Arbitrary Int54 where
arbitrary = fromIntegral <$>
frequency [ (1, pure lowerbound)
, (1, pure upperbound)
, (8, choose (lowerbound, upperbound))
]
where
upperbound, lowerbound :: Int64
upperbound = 999999999999999 -- 15 decimal digits
lowerbound = (-999999999999999)
shrink = shrinkIntegral
hackage-security-0.6.2.4/tests/TestSuite/PrivateKeys.hs 0000644 0000000 0000000 00000004217 07346545000 021203 0 ustar 00 0000000 0000000 module TestSuite.PrivateKeys (
PrivateKeys(..)
, createPrivateKeys
, privateKeysEnv
, privateKeysRoles
) where
-- stdlib
import Control.Monad
-- hackage-security
import Hackage.Security.Client
import Hackage.Security.Key.Env (KeyEnv)
import Hackage.Security.Util.Some
import qualified Hackage.Security.Key.Env as KeyEnv
{-------------------------------------------------------------------------------
All private keys
-------------------------------------------------------------------------------}
data PrivateKeys = PrivateKeys {
privateRoot :: [Some Key]
, privateTarget :: [Some Key]
, privateSnapshot :: Some Key
, privateTimestamp :: Some Key
, privateMirrors :: Some Key
}
createPrivateKeys :: IO PrivateKeys
createPrivateKeys = do
privateRoot <- replicateM 3 $ createKey' KeyTypeEd25519
privateTarget <- replicateM 3 $ createKey' KeyTypeEd25519
privateSnapshot <- createKey' KeyTypeEd25519
privateTimestamp <- createKey' KeyTypeEd25519
privateMirrors <- createKey' KeyTypeEd25519
return PrivateKeys{..}
privateKeysEnv :: PrivateKeys -> KeyEnv
privateKeysEnv PrivateKeys{..} = KeyEnv.fromKeys $ concat [
privateRoot
, privateTarget
, [privateSnapshot]
, [privateTimestamp]
, [privateMirrors]
]
privateKeysRoles :: PrivateKeys -> RootRoles
privateKeysRoles PrivateKeys{..} = RootRoles {
rootRolesRoot = RoleSpec {
roleSpecKeys = map somePublicKey privateRoot
, roleSpecThreshold = KeyThreshold 2
}
, rootRolesSnapshot = RoleSpec {
roleSpecKeys = [somePublicKey privateSnapshot]
, roleSpecThreshold = KeyThreshold 1
}
, rootRolesTargets = RoleSpec {
roleSpecKeys = map somePublicKey privateTarget
, roleSpecThreshold = KeyThreshold 2
}
, rootRolesTimestamp = RoleSpec {
roleSpecKeys = [somePublicKey privateTimestamp]
, roleSpecThreshold = KeyThreshold 1
}
, rootRolesMirrors = RoleSpec {
roleSpecKeys = [somePublicKey privateMirrors]
, roleSpecThreshold = KeyThreshold 1
}
}
hackage-security-0.6.2.4/tests/TestSuite/Util/ 0000755 0000000 0000000 00000000000 07346545000 017312 5 ustar 00 0000000 0000000 hackage-security-0.6.2.4/tests/TestSuite/Util/StrictMVar.hs 0000644 0000000 0000000 00000001221 07346545000 021700 0 ustar 00 0000000 0000000 module TestSuite.Util.StrictMVar (
MVar -- opaque
, newMVar
, CC.withMVar
, modifyMVar
, modifyMVar_
, CC.readMVar
) where
import Control.Concurrent (MVar)
import Control.Exception
import qualified Control.Concurrent as CC
newMVar :: a -> IO (MVar a)
newMVar x = CC.newMVar =<< evaluate x
modifyMVar :: MVar a -> (a -> IO (a, b)) -> IO b
modifyMVar mv f = CC.modifyMVar mv $ \old -> do
(new, ret) <- f old
new' <- evaluate new
return (new', ret)
modifyMVar_ :: MVar a -> (a -> IO a) -> IO ()
modifyMVar_ mv f = modifyMVar mv (returnUnit . f)
where
returnUnit :: IO a -> IO (a, ())
returnUnit = fmap $ \a -> (a, ())