monad-par-0.3.6/0000755000000000000000000000000007346545000011567 5ustar0000000000000000monad-par-0.3.6/Control/Monad/0000755000000000000000000000000007346545000014245 5ustar0000000000000000monad-par-0.3.6/Control/Monad/Par.hs0000644000000000000000000001400707346545000015325 0ustar0000000000000000 {-| The @monad-par@ package provides a family of @Par@ monads, for speeding up pure computations using parallel processors. (for a similar programming model for use with @IO@, see "Control.Monad.Par.IO".) The result of a given @Par@ computation is always the same - i.e. it is deterministic, but the computation may be performed more quickly if there are processors available to share the work. For example, the following program fragment computes the values of @(f x)@ and @(g x)@ in parallel, and returns a pair of their results: > runPar $ do > fx <- spawnP (f x) -- start evaluating (f x) > gx <- spawnP (g x) -- start evaluating (g x) > a <- get fx -- wait for fx > b <- get gx -- wait for gx > return (a,b) -- return results @Par@ can be used for specifying pure parallel computations in which the order of the computation is not known beforehand. The programmer specifies how information flows from one part of the computation to another, but not the order in which computations will be evaluated at runtime. Information flow is described using "variables" called @IVar@s, which support 'put' and 'get' operations. For example, suppose you have a problem that can be expressed as a network with four nodes, where @b@ and @c@ require the value of @a@, and @d@ requires the value of @b@ and @c@: > a > / \ > b c > \ / > d Then you could express this in the @Par@ monad like this: > runPar $ do > [a,b,c,d] <- sequence [new,new,new,new] > fork $ do x <- get a; put b (x+1) > fork $ do x <- get a; put c (x+2) > fork $ do x <- get b; y <- get c; put d (x+y) > fork $ do put a (3 :: Int) > get d The result of the above computation is always 9. The 'get' operation waits until its input is available; multiple 'put's to the same @IVar@ are not allowed, and result in a runtime error. Values stored in @IVar@s are usually fully evaluated (although there are ways provided to pass lazy values if necessary). In the above example, @b@ and @c@ will be evaluated in parallel. In practice the work involved at each node is too small here to see the benefits of parallelism though: typically each node should involve much more work. The granularity is completely under your control - too small and the overhead of the @Par@ monad will outweigh any parallelism benefits, whereas if the nodes are too large then there might not be enough parallelism to use all the available processors. Unlike @Control.Parallel@, in @Control.Monad.Par@ parallelism is not combined with laziness, so sharing and granularity are completely under the control of the programmer. New units of parallel work are only created by @fork@ and a few other combinators. The default implementation is based on a work-stealing scheduler that divides the work as evenly as possible between the available processors at runtime. Other schedulers are available that are based on different policies and have different performance characteristics. 
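
  As an illustration of controlling granularity explicitly, a
  divide-and-conquer computation can switch to a sequential version
  below a cutoff.  This is only a sketch: the cutoff of 20 and the
  helper @seqfib@ are illustrative choices, not part of this library.

> parfib :: Int -> Par Int
> parfib n
>   | n < 20    = return (seqfib n)    -- 20 is an illustrative cutoff: stay sequential
>   | otherwise = do
>       xf <- spawn_ (parfib (n-1))    -- one branch becomes a future
>       y  <- parfib (n-2)
>       x  <- get xf
>       return (x + y)
>   where
>     seqfib m = if m < 2 then m else seqfib (m-1) + seqfib (m-2)

  Because this sketch uses only the futures operations ('spawn_' and
  'get'), it should work with any of the schedulers listed below.
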
To use one of these other schedulers, just import its module instead of "Control.Monad.Par": * "Control.Monad.Par.Scheds.Trace" * "Control.Monad.Par.Scheds.Sparks" For more information on the programming model, please see these sources: * The wiki\/tutorial () * The original paper () * Tutorial slides () * Other slides: (, ) -} module Control.Monad.Par ( -- * The Par Monad Par, runPar, runParIO, fork, -- | Forks a computation to happen in parallel. The forked -- computation may exchange values with other computations using -- @IVar@s. -- * Communication: IVars IVar, new, -- creates a new @IVar@ newFull, -- creates a new @IVar@ that contains a value newFull_, -- creates a new @IVar@ that contains a value (head-strict only) get, -- read the value in an @IVar@. 'get' can only return when the -- value has been written by a prior or parallel @put@ to the same -- @IVar@. put, -- put a value into an @IVar@. Multiple 'put's to the same @IVar@ -- are not allowed, and result in a runtime error. -- -- 'put' fully evaluates its argument, which therefore must be an -- instance of 'NFData'. The idea is that this forces the work to -- happen when we expect it, rather than being passed to the consumer -- of the @IVar@ and performed later, which often results in less -- parallelism than expected. -- -- Sometimes partial strictness is more appropriate: see 'put_'. -- put_, -- like 'put', but only head-strict rather than fully-strict. -- * Operations spawn, -- | Like 'fork', but returns an @IVar@ that can be used to query the -- result of the forked computataion. Therefore @spawn@ provides /futures/ or /promises/. -- -- > spawn p = do -- > r <- new -- > fork (p >>= put r) -- > return r -- spawn_, -- | Like 'spawn', but the result is only head-strict, not fully-strict. spawnP, -- | Spawn a pure (rather than monadic) computation. Fully-strict. -- -- > spawnP = spawn . return module Control.Monad.Par.Combinator, -- | This module also reexports the Combinator library for backwards -- compatibility with version 0.1. NFData() -- | /(0.3)/ Reexport 'NFData' for fully-strict operators. ) where import Control.Monad.Par.Class hiding ( spawn, spawn_, spawnP, put, put_ , get, newFull, new, fork, newFull_ ) import Control.Monad.Par.Scheds.Trace import Control.Monad.Par.Combinator monad-par-0.3.6/Control/Monad/Par/0000755000000000000000000000000007346545000014767 5ustar0000000000000000monad-par-0.3.6/Control/Monad/Par/IO.hs0000644000000000000000000000246607346545000015642 0ustar0000000000000000{-# LANGUAGE GeneralizedNewtypeDeriving #-} {- | This module is an alternative version of "Control.Monad.Par" in which the `Par` type provides `IO` operations, by means of `liftIO`. The price paid is that only `runParIO` is available, not the pure `runPar`. This module uses the same default scheduler as "Control.Monad.Par". -} module Control.Monad.Par.IO ( ParIO, IVar, runParIO -- And instances! ) where import Control.Monad.Par.Scheds.Trace (Par, IVar) import qualified Control.Monad.Par.Scheds.TraceInternal as Internal import Control.Monad.Par.Class import Control.Applicative import Control.Monad.Trans (liftIO, MonadIO) import Control.Monad.Fix (MonadFix) -- | A wrapper around an underlying Par type which allows IO. newtype ParIO a = ParIO (Par a) deriving (Functor, Applicative, Monad, ParFuture IVar, ParIVar IVar, MonadFix) -- | A run method which allows actual IO to occur on top of the Par -- monad. Of course this means that all the normal problems of -- parallel IO computations are present, including nondeterminism. 
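--
--   A slightly larger sketch (the file paths below and the use of
--   "Control.Monad.Par.Class" operations are assumptions of this
--   example, not exports of this module): two @IO@ actions run as
--   parallel futures via the derived 'ParFuture' instance.
--
-- > import Control.Monad.Par.IO    (runParIO)
-- > import Control.Monad.Par.Class (spawn, get)
-- > import Control.Monad.IO.Class  (liftIO)
-- >
-- > readBoth :: IO (String, String)
-- > readBoth = runParIO $ do
-- >   fa <- spawn (liftIO (readFile "a.txt"))  -- hypothetical input file; forked and fully forced
-- >   b  <- liftIO (readFile "b.txt")          -- hypothetical input file; read on this worker
-- >   a  <- get fa
-- >   return (a, b)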
-- -- A simple example program: -- -- > runParIO (liftIO $ putStrLn "hi" :: ParIO ()) runParIO :: ParIO a -> IO a runParIO (ParIO p) = Internal.runParIO p instance MonadIO ParIO where liftIO io = ParIO (Internal.Par (Internal.LiftIO io)) monad-par-0.3.6/Control/Monad/Par/Scheds/0000755000000000000000000000000007346545000016200 5ustar0000000000000000monad-par-0.3.6/Control/Monad/Par/Scheds/Direct.hs0000644000000000000000000010634507346545000017757 0ustar0000000000000000{-# LANGUAGE RankNTypes, NamedFieldPuns, BangPatterns, ExistentialQuantification, CPP, ScopedTypeVariables, TypeSynonymInstances, MultiParamTypeClasses, GeneralizedNewtypeDeriving, PackageImports, ParallelListComp #-} {- OPTIONS_GHC -Wall -fno-warn-name-shadowing -fno-warn-unused-do-bind -} -- {- LANGUAGE Trustworthy -} -- TODO: Before declaring this module TRUSTWORTHY/SAFE, we need to -- make the IVar type abstract. {-# LANGUAGE TypeFamilies #-} -- | A scheduler for the Par monad based on directly performing IO -- actions when Par methods are called (i.e. without using a lazy -- trace data structure). module Control.Monad.Par.Scheds.Direct ( Sched(..), Par, -- abstract: Constructor not exported. IVar(..), IVarContents(..), -- sched, runPar, runParIO, new, get, put_, fork, newFull, newFull_, put, spawn, spawn_, spawnP, spawn1_, fixPar, FixParException (..) -- runParAsync, runParAsyncHelper, -- yield, ) where import Control.Applicative import Control.Concurrent hiding (yield) import Data.IORef (IORef,newIORef,readIORef,writeIORef,atomicModifyIORef) import Text.Printf (printf) import GHC.Conc (numCapabilities,yield) import Control.Monad import Control.Monad.IO.Class import Control.Monad.Trans import "mtl" Control.Monad.Cont as C import qualified "mtl" Control.Monad.Reader as RD import qualified System.Random.MWC as Random import System.IO.Unsafe (unsafePerformIO) import System.Mem.StableName (makeStableName, hashStableName) import qualified Control.Monad.Par.Class as PC import qualified Control.Monad.Par.Unsafe as UN import Control.Monad.Par.Scheds.DirectInternal (Par(..), Sched(..), HotVar, SessionID, Session(Session), newHotVar, readHotVar, modifyHotVar, modifyHotVar_, writeHotVarRaw, fixPar, FixParException (..)) #ifdef NEW_GENERIC import qualified Control.Par.Class as PN import qualified Control.Par.Class.Unsafe as PU #endif import Control.DeepSeq #ifdef NESTED_SCHEDS import qualified Data.Map as M #endif import qualified Data.Set as S import Data.Maybe (catMaybes) import Data.Word (Word64) -- import Data.Concurrent.Deque.Class (WSDeque) #ifdef USE_CHASELEV #warning "Note: using Chase-Lev lockfree workstealing deques..." import Data.Concurrent.Deque.ChaseLev.DequeInstance import Data.Concurrent.Deque.ChaseLev as R #else import Data.Concurrent.Deque.Reference.DequeInstance import Data.Concurrent.Deque.Reference as R #endif import qualified Control.Exception as E import Prelude hiding (null) import qualified Prelude #if __GLASGOW_HASKELL__ <= 700 import GHC.Conc (forkOnIO) forkOn = forkOnIO #endif -------------------------------------------------------------------------------- -- Configuration Toggles -------------------------------------------------------------------------------- -- [2012.08.30] This shows a 10X improvement on nested parfib: -- #define NESTED_SCHEDS #define PARPUTS #define FORKPARENT #define IDLING_ON -- Next, IF idling is on, should we do wakeups?: #define WAKEIDLE -- #define WAIT_FOR_WORKERS ------------------------------------------------------------------- -- Ifdefs for the above preprocessor defines. 
Try to MINIMIZE code -- that lives in this dangerous region, and instead do normal -- conditionals and trust dead-code-elimination. -------------------------------------------------------------------- #ifdef DEBUG_DIRECT #warning "DEBUG: Activating debugging for Direct.hs" import Debug.Trace (trace) import System.Environment (getEnvironment) theEnv = unsafePerformIO $ getEnvironment dbg = True dbglvl = 1 #else dbg = False dbglvl = 0 #endif dbg :: Bool dbglvl :: Int _PARPUTS :: Bool #ifdef PARPUTS _PARPUTS = True #else _PARPUTS = False #endif _FORKPARENT :: Bool #ifdef FORKPARENT _FORKPARENT = True #else #warning "FORKPARENT POLICY NOT USED; THIS IS GENERALLY WORSE" _FORKPARENT = False #endif _IDLING_ON :: Bool #ifdef IDLING_ON _IDLING_ON = True #else _IDLING_ON = False #endif _WAIT_FOR_WORKERS :: Bool #ifdef WAIT_FOR_WORKERS _WAIT_FOR_WORKERS = True #else _WAIT_FOR_WORKERS = False #endif -------------------------------------------------------------------------------- -- Core type definitions -------------------------------------------------------------------------------- type ROnly = RD.ReaderT Sched IO newtype IVar a = IVar (IORef (IVarContents a)) data IVarContents a = Full a | Empty | Blocked [a -> IO ()] unsafeParIO :: IO a -> Par a unsafeParIO iom = Par (lift$ lift iom) io :: IO a -> Par a io = unsafeParIO -- shorthand used below -------------------------------------------------------------------------------- -- Global State -------------------------------------------------------------------------------- -- This keeps track of ALL worker threads across all unreated -- `runPar` instantiations. This is used to detect nested invocations -- of `runPar` and avoid reinitialization. -- globalWorkerPool :: IORef (Data.IntMap ()) #ifdef NESTED_SCHEDS globalWorkerPool :: IORef (M.Map ThreadId Sched) globalWorkerPool = unsafePerformIO $ newIORef M.empty #endif -- TODO! Make this semi-local! (not shared between "top-level" runPars) {-# INLINE amINested #-} {-# INLINE registerWorker #-} {-# INLINE unregisterWorker #-} amINested :: ThreadId -> IO (Maybe Sched) registerWorker :: ThreadId -> Sched -> IO () unregisterWorker :: ThreadId -> IO () #ifdef NESTED_SCHEDS -- | If the current threadID is ALREADY a worker, return the corresponding Sched structure. amINested tid = do -- There is no race here. Each thread inserts itself before it -- becomes an active worker. wp <- readIORef globalWorkerPool return (M.lookup tid wp) registerWorker tid sched = atomicModifyIORef globalWorkerPool $ \ mp -> (M.insert tid sched mp, ()) unregisterWorker tid = atomicModifyIORef globalWorkerPool $ \ mp -> (M.delete tid mp, ()) #else amINested _ = return Nothing registerWorker _ _ = return () unregisterWorker _tid = return () #endif ----------------------------------------------------------------------------- -- Helpers #2: Pushing and popping work. 
----------------------------------------------------------------------------- {-# INLINE popWork #-} popWork :: Sched -> IO (Maybe (Par ())) popWork Sched{ workpool, no } = do mb <- R.tryPopL workpool when dbg $ case mb of Nothing -> return () Just _ -> do sn <- makeStableName mb printf " [%d] -> POP work unit %d\n" no (hashStableName sn) return mb {-# INLINE pushWork #-} pushWork :: Sched -> Par () -> IO () pushWork Sched { workpool, idle, no } task = do R.pushL workpool task when dbg $ do sn <- makeStableName task printf " [%d] -> PUSH work unit %d\n" no (hashStableName sn) #if defined(IDLING_ON) && defined(WAKEIDLE) --when isMain$ -- Experimenting with reducing contention by doing this only from a single thread. -- TODO: We need to have a proper binary wakeup-tree. tryWakeIdle idle #endif return () tryWakeIdle :: HotVar [MVar Bool] -> IO () tryWakeIdle idle = do -- NOTE: I worry about having the idle var hammered by all threads on their spawn-path: -- If any worker is idle, wake one up and give it work to do. idles <- readHotVar idle -- Optimistically do a normal read first. when (not (Prelude.null idles)) $ do when dbg$ printf "Waking %d idle thread(s).\n" (length idles) r <- modifyHotVar idle (\is -> case is of [] -> ([], return ()) (i:ils) -> (ils, putMVar i False)) r -- wake an idle worker up by putting an MVar. rand :: HotVar Random.GenIO -> IO Int rand ref = Random.uniformR (0, numCapabilities-1) =<< readHotVar ref -------------------------------------------------------------------------------- -- Running computations in the Par monad -------------------------------------------------------------------------------- instance NFData (IVar a) where rnf !_ = () {-# NOINLINE runPar #-} runPar = unsafePerformIO . runParIO -- | This procedure creates a new worker on the current thread (with a -- new session ID) and plugs it into the work-stealing environment. -- This new worker extracts itself from the work stealing pool when -- `userComp` has completed, thus freeing the current thread (this -- procedure) to return normally. runNewSessionAndWait :: String -> Sched -> Par b -> IO b runNewSessionAndWait name sched userComp = do tid <- myThreadId -- TODO: remove when done debugging sid <- modifyHotVar (sessionCounter sched) (\ x -> (x+1,x)) _ <- modifyHotVar (activeSessions sched) (\ set -> (S.insert sid set, ())) -- Here we have an extra IORef... ugly. ref <- newIORef (error$ "Empty session-result ref ("++name++") should never be touched (sid "++ show sid++", "++show tid ++")") newFlag <- newHotVar False -- Push the new session: _ <- modifyHotVar (sessions sched) (\ ls -> ((Session sid newFlag) : ls, ())) let userComp' = do when dbg$ io$ do tid2 <- myThreadId printf " [%d %s] Starting Par computation on %s.\n" (no sched) (show tid2) name ans <- userComp -- This add-on to userComp will run only after userComp has completed successfully, -- but that does NOT guarantee that userComp-forked computations have terminated: io$ do when (dbglvl>=1) $ do tid3 <- myThreadId printf " [%d %s] Continuation for %s called, finishing it up (%d)...\n" (no sched) (show tid3) name sid writeIORef ref ans writeHotVarRaw newFlag True modifyHotVar (activeSessions sched) (\ set -> (S.delete sid set, ())) kont :: Word64 -> a -> ROnly () kont n = trivialCont$ "("++name++", sid "++show sid++", round "++show n++")" loop :: Word64 -> ROnly () loop n = do flg <- liftIO$ readIORef newFlag unless flg $ do when dbg $ liftIO$ do tid4 <- myThreadId printf " [%d %s] BOUNCE %d... 
going into reschedule until finished.\n" (no sched) (show tid4) n rescheduleR 0 $ trivialCont$ "("++name++", sid "++show sid++")" loop (n+1) -- THIS IS RETURNING TOO EARLY!!: runReaderWith sched (C.runContT (unPar userComp') (kont 0)) -- Does this ASSUME child stealing? runReaderWith sched (loop 1) -- TODO: Ideally we would wait for ALL outstanding (stolen) work on this "team" to complete. when (dbglvl>=1)$ do active <- readHotVar (activeSessions sched) sess@True <- readHotVar newFlag -- ASSERT! printf " [%d %s] RETURN from %s (sessFin %s) runContT (%d) active set %s\n" (no sched) (show tid) name (show sess) sid (show active) -- Here we pop off the frame we added to the session stack: modifyHotVar_ (sessions sched) $ \ (Session sid2 _ : tl) -> if sid == sid2 then tl else error$ "Tried to pop the session stack and found we ("++show sid ++") were not on the top! (instead "++show sid2++")" -- By returning here we ARE implicitly reengaging the scheduler, since we -- are already inside the rescheduleR loop on this thread -- (before runParIO was called in a nested fashion). readIORef ref {-# NOINLINE runParIO #-} runParIO userComp = do tid <- myThreadId #if __GLASGOW_HASKELL__ >= 701 /* 20110301 */ -- -- We create a thread on each CPU with forkOn. The CPU on which -- the current thread is running will host the main thread; the -- other CPUs will host worker threads. -- -- Note: GHC 7.1.20110301 is required for this to work, because that -- is when threadCapability was added. -- (main_cpu, _) <- threadCapability tid #else -- -- Lacking threadCapability, we always pick CPU #0 to run the main -- thread. If the current thread is not running on CPU #0, this -- will require some data to be shipped over the memory bus, and -- hence will be slightly slower than the version above. -- let main_cpu = 0 #endif maybSched <- amINested tid tidorig <- myThreadId -- TODO: remove when done debugging case maybSched of Just (sched) -> do -- Here the current thread is ALREADY a worker. All we need to -- do is plug the users new computation in. sid0 <- readHotVar (sessionCounter sched) when (dbglvl>=1)$ printf " [%d %s] runPar called from existing worker thread, new session (%d)....\n" (no sched) (show tid) (sid0 + 1) runNewSessionAndWait "nested runPar" sched userComp ------------------------------------------------------------ -- Non-nested case, make a new set of worker threads: ------------------------------------------------------------ Nothing -> do allscheds <- makeScheds main_cpu [Session _ topSessFlag] <- readHotVar$ sessions$ head allscheds mfin <- newEmptyMVar doneFlags <- forM (zip [0..] allscheds) $ \(cpu,sched) -> do workerDone <- newEmptyMVar ---------------------------------------- let wname = ("(worker "++show cpu++" of originator "++show tidorig++")") -- forkOn cpu $ do _ <- forkWithExceptions (forkOn cpu) wname $ do ------------------------------------------------------------STRT WORKER THREAD tid2 <- myThreadId registerWorker tid2 sched if (cpu /= main_cpu) then do when dbg$ printf " [%d %s] Anonymous worker entering scheduling loop.\n" cpu (show tid2) runReaderWith sched $ rescheduleR 0 (trivialCont (wname++show tid2)) when dbg$ printf " [%d] Anonymous worker exited scheduling loop. 
FINISHED.\n" cpu putMVar workerDone cpu return () else do x <- runNewSessionAndWait "top-lvl main worker" sched userComp -- When the main worker finishes we can tell the anonymous "system" workers: writeIORef topSessFlag True when dbg$ do printf " *** Out of entire runContT user computation on main thread %s.\n" (show tid2) -- sanityCheck allscheds putMVar mfin x unregisterWorker tid ------------------------------------------------------------END WORKER THREAD return (if cpu == main_cpu then Nothing else Just workerDone) when _WAIT_FOR_WORKERS $ do when dbg$ printf " *** [%s] Originator thread: waiting for workers to complete." (show tidorig) forM_ (catMaybes doneFlags) $ \ mv -> do n <- readMVar mv -- n <- A.wait mv when dbg$ printf " * [%s] Worker %s completed\n" (show tidorig) (show n) when dbg$ do printf " *** [%s] Reading final MVar on originator thread.\n" (show tidorig) -- We don't directly use the thread we come in on. Rather, that thread waits -- waits. One reason for this is that the main/progenitor thread in -- GHC is expensive like a forkOS thread. ---------------------------------------- -- DEBUGGING -- #ifdef DEBUG_DIRECT busyTakeMVar (" The global wait "++ show tidorig) mfin -- Final value. -- dbgTakeMVar "global waiting thread" mfin -- Final value. #else takeMVar mfin -- Final value. #endif ---------------------------------------- -- Create the default scheduler(s) state: makeScheds :: Int -> IO [Sched] makeScheds main = do when dbg$ do tid <- myThreadId printf "[initialization] Creating %d worker threads, currently on %s\n" numCapabilities (show tid) workpools <- replicateM numCapabilities $ R.newQ rngs <- replicateM numCapabilities $ Random.create >>= newHotVar idle <- newHotVar [] -- The STACKs are per-worker.. but the root finished flag is shared between all anonymous system workers: sessionFinished <- newHotVar False sessionStacks <- mapM newHotVar (replicate numCapabilities [Session baseSessionID sessionFinished]) activeSessions <- newHotVar S.empty sessionCounter <- newHotVar (baseSessionID + 1) let allscheds = [ Sched { no=x, idle, isMain= (x==main), workpool=wp, scheds=allscheds, rng=rng, sessions = stck, activeSessions=activeSessions, sessionCounter=sessionCounter } -- | (x,wp,rng,stck) <- zip4 [0..] workpools rngs sessionStacks | x <- [0 .. numCapabilities-1] | wp <- workpools | rng <- rngs | stck <- sessionStacks ] return allscheds -- The ID of top-level runPar sessions. baseSessionID :: SessionID baseSessionID = 1000 -------------------------------------------------------------------------------- -- IVar operations -------------------------------------------------------------------------------- {-# INLINE new #-} -- | Creates a new @IVar@ new :: Par (IVar a) new = io$ do r <- newIORef Empty return (IVar r) {-# INLINE get #-} -- | Read the value in an @IVar@. The 'get' operation can only return when the -- value has been written by a prior or parallel @put@ to the same -- @IVar@. get (IVar vr) = do callCC $ \kont -> do e <- io$ readIORef vr case e of Full a -> return a _ -> do sch <- RD.ask # ifdef DEBUG_DIRECT sn <- io$ makeStableName vr -- Should probably do the MutVar inside... let resched = trace (" ["++ show (no sch) ++ "] - Rescheduling on unavailable ivar "++show (hashStableName sn)++"!") #else let resched = # endif longjmpSched -- Invariant: kont must not be lost. -- Because we continue on the same processor the Sched stays the same: -- TODO: Try NOT using monadic values as first class. 
Check for performance effect: r <- io$ atomicModifyIORef vr $ \x -> case x of Empty -> (Blocked [pushWork sch . kont], resched) Full a -> (Full a, return a) -- kont is implicit here. Blocked ks -> (Blocked (pushWork sch . kont:ks), resched) r -- | NOTE unsafePeek is NOT exposed directly through this module. (So -- this module remains SAFE in the Safe Haskell sense.) It can only -- be accessed by importing Control.Monad.Par.Unsafe. {-# INLINE unsafePeek #-} unsafePeek :: IVar a -> Par (Maybe a) unsafePeek (IVar v) = do e <- io$ readIORef v case e of Full a -> return (Just a) _ -> return Nothing ------------------------------------------------------------ {-# INLINE put_ #-} -- | @put_@ is a version of @put@ that is head-strict rather than fully-strict. -- In this scheduler, puts immediately execute woken work in the current thread. put_ (IVar vr) !content = do sched <- RD.ask ks <- io$ do ks <- atomicModifyIORef vr $ \e -> case e of Empty -> (Full content, []) Full _ -> error "multiple put" Blocked ks -> (Full content, ks) #ifdef DEBUG_DIRECT when (dbglvl >= 3) $ do sn <- makeStableName vr printf " [%d] Put value %s into IVar %d. Waking up %d continuations.\n" (no sched) (show content) (hashStableName sn) (length ks) return () #endif return ks wakeUp sched ks content -- | NOTE unsafeTryPut is NOT exposed directly through this module. (So -- this module remains SAFE in the Safe Haskell sense.) It can only -- be accessed by importing Control.Monad.Par.Unsafe. {-# INLINE unsafeTryPut #-} unsafeTryPut (IVar vr) !content = do -- Head strict rather than fully strict. sched <- RD.ask (ks,res) <- io$ do pr <- atomicModifyIORef vr $ \e -> case e of Empty -> (Full content, ([], content)) Full x -> (Full x, ([], x)) Blocked ks -> (Full content, (ks, content)) #ifdef DEBUG_DIRECT sn <- makeStableName vr printf " [%d] unsafeTryPut: value %s in IVar %d. Waking up %d continuations.\n" (no sched) (show content) (hashStableName sn) (length (fst pr)) #endif return pr wakeUp sched ks content return res -- | When an IVar is filled in, continuations wake up. {-# INLINE wakeUp #-} wakeUp :: Sched -> [a -> IO ()]-> a -> Par () wakeUp _sched ks arg = loop ks where loop [] = return () loop (kont:rest) = do -- FIXME -- without strict firewalls keeping ivars from moving -- between runPar sessions, if we allow nested scheduler use -- we could potentially wake up work belonging to a different -- runPar and thus bring it into our worker and delay our own -- continuation until its completion. if _PARPUTS then -- We do NOT force the putting thread to postpone its continuation. do _ <- spawn_$ pMap kont rest return () -- case rest of -- [] -> spawn_$ io$ kont arg -- _ -> spawn_$ do spawn_$ io$ kont arg -- io$ parchain rest -- error$"FINISHME - wake "++show (length ks)++" conts" else -- This version sacrifices a parallelism opportunity and -- imposes additional serialization. -- -- [2012.08.31] WARNING -- this serialzation CAN cause deadlock. -- This "optimization" should not be on the table. 
-- mapM_ ($arg) ks do io$ kont arg loop rest return () pMap kont [] = io$ kont arg pMap kont (more:rest) = do _ <- spawn_$ io$ kont arg pMap more rest -- parchain [kont] = kont arg -- parchain (kont:rest) = do spawn$ io$ kont arg -- parchain rest ------------------------------------------------------------ {-# INLINE fork #-} fork :: Par () -> Par () fork task = -- Forking the "parent" means offering up the continuation of the -- fork rather than the task argument for stealing: case _FORKPARENT of True -> do sched <- RD.ask callCC$ \parent -> do let wrapped = parent () io$ pushWork sched wrapped -- Then execute the child task and return to the scheduler when it is complete: task -- If we get to this point we have finished the child task: _ <- longjmpSched -- We reschedule to pop the cont we pushed. -- TODO... OPTIMIZATION: we could also try the pop directly, and if it succeeds return normally.... io$ printf " !!! ERROR: Should never reach this point #1\n" when dbg$ do sched2 <- RD.ask io$ printf " - called parent continuation... was on worker [%d] now on worker [%d]\n" (no sched) (no sched2) return () False -> do sch <- RD.ask when dbg$ io$ printf " [%d] forking task...\n" (no sch) io$ pushWork sch task -- This routine "longjmp"s to the scheduler, throwing out its own continuation. longjmpSched :: Par a -- longjmpSched = Par $ C.ContT rescheduleR longjmpSched = Par $ C.ContT (\ _k -> rescheduleR 0 (trivialCont "longjmpSched")) -- Reschedule the scheduler loop until it observes sessionFinished==True, and -- then it finally invokes its continuation. rescheduleR :: Word64 -> (a -> ROnly ()) -> ROnly () rescheduleR cnt kont = do mysched <- RD.ask when dbg$ liftIO$ do tid <- myThreadId sess <- readSessions mysched null <- R.nullQ (workpool mysched) printf " [%d %s] - Reschedule #%d... sessions %s, pool empty %s\n" (no mysched) (show tid) cnt (show sess) (show null) mtask <- liftIO$ popWork mysched case mtask of Nothing -> do (Session _ finRef):_ <- liftIO$ readIORef $ sessions mysched fin <- liftIO$ readIORef finRef if fin then do when (dbglvl >= 1) $ liftIO $ do tid <- myThreadId sess <- readSessions mysched printf " [%d %s] - DROP out of reschedule loop, sessionFinished=%s, all sessions %s\n" (no mysched) (show tid) (show fin) (show sess) empt <- R.nullQ$ workpool mysched when (not empt) $ do printf " [%d %s] - WARNING - leaving rescheduleR while local workpoll is nonempty\n" (no mysched) (show tid) kont (error "Direct.hs: The result value from rescheduleR should not be used.") else do -- when (dbglvl >= 1) $ liftIO $ do -- tid <- myThreadId -- sess <- readSessions mysched -- printf " [%d %s] - Apparently NOT finished with head session... trying to steal, all sessions %s\n" -- (no mysched) (show tid) (show sess) liftIO$ steal mysched #ifdef WAKEIDLE -- io$ tryWakeIdle (idle mysched) #endif liftIO yield rescheduleR (cnt+1) kont Just task -> do -- When popping work from our own queue the Sched (Reader value) stays the same: when dbg $ do sn <- liftIO$ makeStableName task liftIO$ printf " [%d] popped work %d from own queue\n" (no mysched) (hashStableName sn) let C.ContT fn = unPar task -- Run the stolen task with a continuation that returns to the scheduler if the task exits normally: fn (\ _ -> do sch <- RD.ask when dbg$ liftIO$ printf " + task finished successfully on cpu %d, calling reschedule continuation..\n" (no sch) rescheduleR 0 kont) -- | Attempt to steal work or, failing that, give up and go idle. 
-- -- The current policy is to do a burst of of N tries without -- yielding or pausing in between. steal :: Sched -> IO () steal mysched@Sched{ idle, scheds, rng, no=my_no } = do when (dbglvl>=2)$ do tid <- myThreadId printf " [%d %s] + stealing\n" my_no (show tid) i <- getnext (-1 :: Int) go maxtries i where -- maxtries = numCapabilities -- How many times should we attempt theft before going idle? maxtries = 20 * numCapabilities -- How many times should we attempt theft before going idle? getnext _ = rand rng ---------------------------------------- -- IDLING behavior: go 0 _ | _IDLING_ON = do m <- newEmptyMVar r <- modifyHotVar idle $ \is -> (m:is, is) if length r == numCapabilities - 1 then do when dbg$ printf " [%d] | waking up all threads\n" my_no writeHotVarRaw idle [] mapM_ (\vr -> putMVar vr True) r else do (Session _ finRef):_ <- readIORef $ sessions mysched fin <- readIORef finRef done <- if fin then pure True else takeMVar m if done then do when dbg$ printf " [%d] | shutting down\n" my_no return () else do when dbg$ printf " [%d] | woken up\n" my_no i <- getnext (-1::Int) go maxtries i -- We need to return from this loop to check sessionFinished and exit the scheduler if necessary. go 0 _i | _IDLING_ON == False = yield ---------------------------------------- go tries i | i == my_no = do i' <- getnext i go (tries-1) i' | otherwise = do -- We ONLY go through the global sched array to access victims: let schd = scheds!!i when (dbglvl>=2)$ printf " [%d] | trying steal from %d\n" my_no (no schd) -- let dq = workpool schd :: WSDeque (Par ()) let dq = workpool schd r <- R.tryPopR dq case r of Just task -> do when dbg$ do sn <- makeStableName task printf " [%d] | stole work (unit %d) from cpu %d\n" my_no (hashStableName sn) (no schd) runReaderWith mysched $ C.runContT (unPar task) (\_ -> do when dbg$ do sn <- liftIO$ makeStableName task liftIO$ printf " [%d] | DONE running stolen work (unit %d) from %d\n" my_no (hashStableName sn) (no schd) return ()) Nothing -> do i' <- getnext i go (tries-1) i' -- | The continuation which should not be called. _errK :: t _errK = error "Error cont: this closure shouldn't be used" trivialCont :: String -> a -> ROnly () #ifdef DEBUG_DIRECT trivialCont str _ = do -- trace (str ++" trivialCont evaluated!") liftIO$ printf " !! trivialCont evaluated, msg: %s\n" str #else trivialCont _str _ = do #endif return () ---------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------- -- -- TEMP: TODO: Factor out this boilerplate somehow. {-# INLINE spawn1_ #-} -- Spawn a one argument function instead of a thunk. This is good for debugging if the value supports "Show". spawn1_ f x = #ifdef DEBUG_DIRECT do sn <- io$ makeStableName f sch <- RD.ask; when dbg$ io$ printf " [%d] spawning fn %d with arg %s\n" (no sch) (hashStableName sn) (show x) #endif spawn_ (f x) -- The following is usually inefficient! 
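-- (e.g. 'newFull_' allocates an empty IVar and then immediately fills it
-- with a separate 'put_', rather than allocating it already 'Full' as the
-- Trace scheduler's 'newFull_' does.)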
newFull_ a = do v <- new put_ v a return v newFull a = deepseq a (newFull_ a) {-# INLINE put #-} put v a = deepseq a (put_ v a) spawn p = do r <- new; fork (p >>= put r); return r spawn_ p = do r <- new; fork (p >>= put_ r); return r spawnP a = spawn (return a) -- In Debug mode we require that IVar contents be Show-able: #ifdef DEBUG_DIRECT put :: (Show a, NFData a) => IVar a -> a -> Par () spawn :: (Show a, NFData a) => Par a -> Par (IVar a) spawn_ :: Show a => Par a -> Par (IVar a) spawn1_ :: (Show a, Show b) => (a -> Par b) -> a -> Par (IVar b) spawnP :: (Show a, NFData a) => a -> Par (IVar a) put_ :: Show a => IVar a -> a -> Par () get :: Show a => IVar a -> Par a runPar :: Show a => Par a -> a runParIO :: Show a => Par a -> IO a newFull :: (Show a, NFData a) => a -> Par (IVar a) newFull_ :: Show a => a -> Par (IVar a) unsafeTryPut :: Show b => IVar b -> b -> Par b #else spawn :: NFData a => Par a -> Par (IVar a) spawn_ :: Par a -> Par (IVar a) spawn1_ :: (a -> Par b) -> a -> Par (IVar b) spawnP :: NFData a => a -> Par (IVar a) put_ :: IVar a -> a -> Par () put :: NFData a => IVar a -> a -> Par () get :: IVar a -> Par a runPar :: Par a -> a runParIO :: Par a -> IO a newFull :: NFData a => a -> Par (IVar a) newFull_ :: a -> Par (IVar a) unsafeTryPut :: IVar b -> b -> Par b -- We can't make proper instances with the extra Show constraints: instance PC.ParFuture IVar Par where get = get spawn = spawn spawn_ = spawn_ spawnP = spawnP instance PC.ParIVar IVar Par where fork = fork new = new put_ = put_ newFull = newFull newFull_ = newFull_ instance UN.ParUnsafe IVar Par where unsafePeek = unsafePeek unsafeTryPut = unsafeTryPut unsafeParIO = unsafeParIO #endif #ifdef NEW_GENERIC instance PU.ParMonad Par where fork = fork internalLiftIO io = Par (lift $ lift io) instance PU.ParThreadSafe Par where unsafeParIO io = Par (lift $ lift io) instance PN.ParFuture Par where type Future Par = IVar type FutContents Par a = () get = get spawn = spawn spawn_ = spawn_ spawnP = spawnP instance PN.ParIVar Par where new = new put_ = put_ newFull = newFull newFull_ = newFull_ #endif -- -------------------------------------------------------------------------------- {-# INLINE runReaderWith #-} -- | Arguments flipped for convenience. runReaderWith :: r -> RD.ReaderT r m a -> m a runReaderWith state m = RD.runReaderT m state -------------------------------------------------------------------------------- -- DEBUGGING TOOLs -------------------------------------------------------------------------------- -- Make sure there is no work left in any deque after exiting. _sanityCheck :: [Sched] -> IO () _sanityCheck allscheds = do forM_ allscheds $ \ Sched{no, workpool} -> do b <- R.nullQ workpool when (not b) $ do () <- printf "WARNING: After main thread exited non-empty queue remains for worker %d\n" no return () printf "Sanity check complete.\n" -- | This tries to localize the blocked-indefinitely exception: _dbgTakeMVar :: String -> MVar a -> IO a _dbgTakeMVar msg mv = -- catch (takeMVar mv) ((\_ -> doDebugStuff) :: BlockedIndefinitelyOnMVar -> IO a) E.catch (takeMVar mv) (\(_::IOError) -> doDebugStuff) where doDebugStuff = do printf "This takeMVar blocked indefinitely!: %s\n" msg error "failed" -- | For debugging purposes. This can help us figure out (by an ugly -- process of elimination) which MVar reads are leading to a "Thread -- blocked indefinitely" exception. 
{- busyTakeMVar :: String -> MVar a -> IO a busyTakeMVar msg mv = try (10 * 1000 * 1000) where try 0 = do when dbg $ do tid <- myThreadId -- After we've failed enough times, start complaining: printf "%s not getting anywhere, msg: %s\n" (show tid) msg try (100 * 1000) try n = do x <- tryTakeMVar mv case x of Just y -> return y Nothing -> do yield; try (n-1) -} -- | Fork a thread but ALSO set up an error handler that suppresses -- MVar exceptions. _forkIO_Suppress :: Int -> IO () -> IO ThreadId _forkIO_Suppress whre action = forkOn whre $ E.handle (\e -> case (e :: E.BlockedIndefinitelyOnMVar) of _ -> do putStrLn$"CAUGHT child thread exception: "++show e return () ) action -- | Exceptions that walk up the fork tree of threads: forkWithExceptions :: (IO () -> IO ThreadId) -> String -> IO () -> IO ThreadId forkWithExceptions forkit descr action = do parent <- myThreadId forkit $ do tid <- myThreadId E.catch action (\ e -> case E.fromException e of Just E.ThreadKilled -> printf "\nThreadKilled exception inside child thread, %s (not propagating!): %s\n" (show tid) (show descr) _ -> do printf "\nException inside child thread %s, %s: %s\n" (show descr) (show tid) (show e) E.throwTo parent (e :: E.SomeException) ) -- Do all the memory reads to snapshot the current session stack: readSessions :: Sched -> IO [(SessionID, Bool)] readSessions sched = do ls <- readIORef (sessions sched) bools <- mapM (\ (Session _ r) -> readIORef r) ls return (zip (map (\ (Session sid _) -> sid) ls) bools) monad-par-0.3.6/Control/Monad/Par/Scheds/DirectInternal.hs0000644000000000000000000001527107346545000021451 0ustar0000000000000000{-# LANGUAGE PackageImports, CPP, GeneralizedNewtypeDeriving, DeriveDataTypeable #-} -- | Type definition and some helpers. This is used mainly by -- Direct.hs but can also be used by other modules that want access to -- the internals of the scheduler (i.e. the private `Par` type constructor). module Control.Monad.Par.Scheds.DirectInternal where #if !MIN_VERSION_base(4,6,0) import Prelude hiding (catch) #endif import Control.Applicative import "mtl" Control.Monad.Cont as C import qualified "mtl" Control.Monad.Reader as RD import "mtl" Control.Monad.Trans (liftIO) import qualified System.Random.MWC as Random import Control.Concurrent hiding (yield) import GHC.Conc import Data.IORef import qualified Data.Set as S import Data.Word (Word64) import Data.Concurrent.Deque.Class (WSDeque) import Control.Monad.Fix (MonadFix (mfix)) #if MIN_VERSION_base(4,9,0) import GHC.IO.Unsafe (unsafeDupableInterleaveIO) #else import System.IO.Unsafe (unsafeInterleaveIO) #endif #ifdef USE_CHASELEV #warning "Note: using Chase-Lev lockfree workstealing deques..." import Data.Concurrent.Deque.ChaseLev.DequeInstance import Data.Concurrent.Deque.ChaseLev as R #endif import Data.Typeable (Typeable) import Control.Exception (Exception, throwIO, BlockedIndefinitelyOnMVar (..), catch) -- Our monad stack looks like this: -- --------- -- ContT -- ReaderT -- IO -- --------- -- The ReaderT monad is there for retrieving the scheduler given the -- fact that the API calls do not get it as an argument. -- -- Note that the result type for continuations is unit. Forked -- computations return nothing. -- newtype Par a = Par { unPar :: C.ContT () ROnly a } deriving (Functor, Applicative, Monad, MonadCont, RD.MonadReader Sched) type ROnly = RD.ReaderT Sched IO instance MonadFix Par where mfix = fixPar -- | Take the monadic fixpoint of a 'Par' computation. This is -- the definition of 'mfix' for 'Par'. 
Throws 'FixParException' -- if the result is demanded strictly within the computation. fixPar :: (a -> Par a) -> Par a -- We do this IO-style, rather than ST-style, in order to get a -- consistent exception type. Using the ST-style mfix, a strict -- argument could lead us to *either* a <> exception *or* -- (if the wrong sort of computation gets re-run) a "multiple-put" -- error. fixPar f = Par $ ContT $ \ar -> RD.ReaderT $ \sched -> do mv <- newEmptyMVar ans <- unsafeDupableInterleaveIO (readMVar mv `catch` \ ~BlockedIndefinitelyOnMVar -> throwIO FixParException) flip RD.runReaderT sched $ runContT (unPar (f ans)) $ \a -> liftIO (putMVar mv a) >> ar a #if !MIN_VERSION_base(4,9,0) unsafeDupableInterleaveIO :: IO a -> IO a unsafeDupableInterleaveIO = unsafeInterleaveIO #endif data FixParException = FixParException deriving (Show, Typeable) instance Exception FixParException type SessionID = Word64 -- An ID along with a flag to signal completion: data Session = Session SessionID (HotVar Bool) data Sched = Sched { ---- Per worker ---- no :: {-# UNPACK #-} !Int, workpool :: WSDeque (Par ()), rng :: HotVar Random.GenIO, -- Random number gen for work stealing. isMain :: Bool, -- Are we the main/master thread? -- The stack of nested sessions that THIS worker is participating in. -- When a session finishes, the worker can return to its Haskell -- calling context (it's "real" continuation). sessions :: HotVar [Session], -- (1) This is always non-empty, containing at least the root -- session corresponding to the anonymous system workers. -- (2) The original invocation of runPar also counts as a session -- and pushes a second -- (3) Nested runPar invocations may push further sessions onto the stack. ---- Global data: ---- idle :: HotVar [MVar Bool], -- waiting idle workers scheds :: [Sched], -- A global list of schedulers. -- Any thread that enters runPar (original or nested) registers -- itself in this global list. When the list becomes null, -- worker threads may shut down or at least go idle. activeSessions :: HotVar (S.Set SessionID), -- A counter to support unique session IDs: sessionCounter :: HotVar SessionID } -------------------------------------------------------------------------------- -- Helpers #1: Atomic Variables -------------------------------------------------------------------------------- -- TEMP: Experimental #ifndef HOTVAR #define HOTVAR 1 #endif newHotVar :: a -> IO (HotVar a) modifyHotVar :: HotVar a -> (a -> (a,b)) -> IO b modifyHotVar_ :: HotVar a -> (a -> a) -> IO () writeHotVar :: HotVar a -> a -> IO () readHotVar :: HotVar a -> IO a -- readHotVarRaw :: HotVar a -> m a -- writeHotVarRaw :: HotVar a -> m a {-# INLINE newHotVar #-} {-# INLINE modifyHotVar #-} {-# INLINE modifyHotVar_ #-} {-# INLINE readHotVar #-} {-# INLINE writeHotVar #-} #if HOTVAR == 1 type HotVar a = IORef a newHotVar = newIORef modifyHotVar = atomicModifyIORef modifyHotVar_ v fn = atomicModifyIORef v (\a -> (fn a, ())) readHotVar = readIORef writeHotVar = writeIORef instance Show (IORef a) where show _ref = "" writeHotVarRaw :: HotVar a -> a -> IO () -- hotVarTransaction = id hotVarTransaction = error "Transactions not currently possible for IO refs" readHotVarRaw :: HotVar a -> IO a readHotVarRaw = readHotVar writeHotVarRaw = writeHotVar #elif HOTVAR == 2 #warning "Using MVars for hot atomic variables." -- This uses MVars that are always full with *something* type HotVar a = MVar a newHotVar x = do v <- newMVar; putMVar v x; return v modifyHotVar v fn = modifyMVar v (return . 
fn) modifyHotVar_ v fn = modifyMVar_ v (return . fn) readHotVar = readMVar writeHotVar v x = do swapMVar v x; return () instance Show (MVar a) where show _ref = "" -- hotVarTransaction = id -- We could in theory do this by taking the mvar to grab the lock. -- But we'd need some temporary storage.... hotVarTransaction = error "Transactions not currently possible for MVars" readHotVarRaw = readHotVar writeHotVarRaw = writeHotVar #elif HOTVAR == 3 #warning "Using TVars for hot atomic variables." -- Simon Marlow said he saw better scaling with TVars (surprise to me): type HotVar a = TVar a newHotVar = newTVarIO modifyHotVar tv fn = atomically (do x <- readTVar tv let (x2,b) = fn x writeTVar tv x2 return b) modifyHotVar_ tv fn = atomically (do x <- readTVar tv; writeTVar tv (fn x)) readHotVar x = atomically $ readTVar x writeHotVar v x = atomically $ writeTVar v x instance Show (TVar a) where show ref = "" hotVarTransaction = atomically readHotVarRaw = readTVar writeHotVarRaw = writeTVar #endif monad-par-0.3.6/Control/Monad/Par/Scheds/Sparks.hs0000644000000000000000000000544407346545000020006 0ustar0000000000000000{-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE TypeFamilies, CPP #-} -- | This scheduler uses sparks (par/pseq) directly, but only supplies -- the @Monad.Par.Class.ParFuture@ interface. module Control.Monad.Par.Scheds.Sparks ( Par(..), Future(..), runPar, get, spawn, spawn_, spawnP, fixPar ) where import Control.Applicative import Control.Monad import Control.DeepSeq import Control.Parallel import qualified Control.Monad.Par.Class as PC import Control.Monad.Fix (MonadFix (mfix)) -- import Control.Parallel.Strategies (rpar) #ifdef NEW_GENERIC import qualified Control.Par.Class as PN import qualified Control.Par.Class.Unsafe as PU import System.IO.Unsafe (unsafePerformIO) #endif {-# INLINE runPar #-} {-# INLINE spawn #-} {-# INLINE spawn_ #-} {-# INLINE spawnP #-} {-# INLINE get #-} data Par a = Done a data Future a = Future a runPar :: Par a -> a runPar (Done x) = x spawn_ :: Par a -> Par (Future a) -- spawn_ a = do a' <- rpar (runPar a); return (Future a') spawn_ a = let a' = runPar a in a' `par` return (Future a') spawn :: NFData a => Par a -> Par (Future a) spawn a = let a' = runPar a in a' `par` return (Future (rnf a' `pseq` a')) spawnP :: NFData a => a -> Par (Future a) spawnP a = a `par` return (Future (rnf a `pseq` a)) get :: Future a -> Par a get (Future a) = a `pseq` return a -------------------------------------------------------------------------------- -- instance Monad Par where return = pure Done x >>= k = k x instance PC.ParFuture Future Par where get = get spawn = spawn spawn_ = spawn_ spawnP = spawnP instance Functor Par where fmap f xs = xs >>= return . f instance Applicative Par where (<*>) = ap pure = Done instance MonadFix Par where mfix = fixPar -- | Take the monadic fixpoint of a 'Par' computation. This is -- the definition of 'mfix' for 'Par'. fixPar :: (a -> Par a) -> Par a fixPar f = let fr = f (case fr of Done x -> x) in fr #ifdef NEW_GENERIC doio :: IO a -> Par a doio io = let x = unsafePerformIO io in return $! x instance PU.ParMonad Par where -- This is a No-Op for this monad. Because there are no side-effects permitted, -- there is no way to observe whether anything happens on the child thread. -- fork _m = return () -- FIXME: except for exceptions!! 
-- This version doesn't work, because the spark may get spilled/dropped: -- fork m = spawn m -- I think this is all that we're left with: fork m = m internalLiftIO = doio instance PU.ParThreadSafe Par where unsafeParIO = doio instance PN.ParFuture Par where type Future Par = Future type FutContents Par a = () get = get spawn = spawn spawn_ = spawn_ spawnP = spawnP #endif -- -------------------------------------------------------------------------------- monad-par-0.3.6/Control/Monad/Par/Scheds/Trace.hs0000644000000000000000000000444107346545000017575 0ustar0000000000000000{-# LANGUAGE RankNTypes, NamedFieldPuns, BangPatterns, ExistentialQuantification, MultiParamTypeClasses, CPP #-} {- OPTIONS_GHC -Wall -fno-warn-name-shadowing -fwarn-unused-imports -} {-# LANGUAGE TypeFamilies #-} {- | This is the scheduler described in the paper "A Monad for Deterministic Parallelism". It is based on a lazy @Trace@ data structure that separates the scheduler from the @Par@ monad method implementations. -} module Control.Monad.Par.Scheds.Trace ( Par, runPar, runParIO, fork, IVar, new, newFull, newFull_, get, put, put_, spawn, spawn_, spawnP, fixPar, FixParException (..) ) where import qualified Control.Monad.Par.Class as PC import Control.Monad.Par.Scheds.TraceInternal import Control.DeepSeq import Control.Monad as M hiding (mapM, sequence, join) import Prelude hiding (mapM, sequence, head,tail) #ifdef NEW_GENERIC import qualified Control.Par.Class as PN import qualified Control.Par.Class.Unsafe as PU #endif -- ----------------------------------------------------------------------------- -- Not in 6.12: {- INLINABLE fork -} {-# INLINE fork #-} fork :: Par () -> Par () fork p = Par $ \c -> Fork (runCont p (\_ -> Done)) (c ()) -- -------------------------------------------------------------------------------- -- -- Standard instances: -- spawn :: NFData a => Par a -> Par (IVar a) spawn p = do r <- new; fork (p >>= put r); return r spawn_ :: Par a -> Par (IVar a) spawn_ p = do r <- new; fork (p >>= put_ r); return r -- > spawnP :: NFData a => a -> Par (IVar a) spawnP a = spawn (return a) instance PC.ParFuture IVar Par where get = get spawn = spawn spawn_ = spawn_ spawnP = spawnP instance PC.ParIVar IVar Par where fork = fork new = new put = put put_ = put_ newFull = newFull newFull_ = newFull_ -- yield = yield #ifdef NEW_GENERIC instance PU.ParMonad Par where fork = fork internalLiftIO io = Par (LiftIO io) instance PU.ParThreadSafe Par where unsafeParIO io = Par (LiftIO io) instance PN.ParFuture Par where type Future Par = IVar type FutContents Par a = () get = get spawn = spawn spawn_ = spawn_ spawnP = spawnP instance PN.ParIVar Par where new = new put_ = put_ newFull = newFull newFull_ = newFull_ #endif monad-par-0.3.6/Control/Monad/Par/Scheds/TraceInternal.hs0000644000000000000000000003045607346545000021277 0ustar0000000000000000{-# LANGUAGE RankNTypes, NamedFieldPuns, BangPatterns, ExistentialQuantification, CPP, DeriveDataTypeable #-} {-# OPTIONS_GHC -Wall -fno-warn-name-shadowing -fno-warn-unused-do-bind #-} -- | This module exposes the internals of the @Par@ monad so that you -- can build your own scheduler or other extensions. Do not use this -- module for purposes other than extending the @Par@ monad with new -- functionality. module Control.Monad.Par.Scheds.TraceInternal ( Trace(..), Sched(..), Par(..), IVar(..), IVarContents(..), sched, runPar, runParIO, runParAsync, -- runParAsyncHelper, new, newFull, newFull_, get, put_, put, pollIVar, yield, fixPar, FixParException (..) 
) where #if MIN_VERSION_base(4,6,0) import Prelude hiding (mapM, sequence, head,tail) #else import Prelude hiding (mapM, sequence, head,tail,catch) #endif import Control.Monad as M hiding (mapM, sequence, join) import Data.IORef import System.IO.Unsafe #if MIN_VERSION_base(4,9,0) import GHC.IO.Unsafe (unsafeDupableInterleaveIO) #else import System.IO.Unsafe (unsafeInterleaveIO) #endif import Control.Concurrent hiding (yield) import GHC.Conc (numCapabilities) import Control.DeepSeq import Control.Monad.Fix (MonadFix (mfix)) import Control.Exception (Exception, throwIO, BlockedIndefinitelyOnMVar (..), catch) import Data.Typeable (Typeable) -- import Text.Printf #if !MIN_VERSION_base(4,8,0) import Control.Applicative #endif #if __GLASGOW_HASKELL__ <= 700 import GHC.Conc (forkOnIO) forkOn = forkOnIO #endif -- --------------------------------------------------------------------------- data Trace = forall a . Get (IVar a) (a -> Trace) | forall a . Put (IVar a) a Trace | forall a . New (IVarContents a) (IVar a -> Trace) | Fork Trace Trace | Done | Yield Trace | forall a . LiftIO (IO a) (a -> Trace) -- | The main scheduler loop. sched :: Bool -> Sched -> Trace -> IO () sched _doSync queue t = loop t where loop t = case t of New a f -> do r <- newIORef a loop (f (IVar r)) Get (IVar v) c -> do e <- readIORef v case e of Full a -> loop (c a) _other -> do r <- atomicModifyIORef v $ \e -> case e of Empty -> (Blocked [c], reschedule queue) Full a -> (Full a, loop (c a)) Blocked cs -> (Blocked (c:cs), reschedule queue) r Put (IVar v) a t -> do cs <- atomicModifyIORef v $ \e -> case e of Empty -> (Full a, []) Full _ -> error "multiple put" Blocked cs -> (Full a, cs) mapM_ (pushWork queue. ($a)) cs loop t Fork child parent -> do pushWork queue child loop parent Done -> if _doSync then reschedule queue -- We could fork an extra thread here to keep numCapabilities workers -- even when the main thread returns to the runPar caller... else do putStrLn " [par] Forking replacement thread..\n" forkIO (reschedule queue); return () -- But even if we don't we are not orphaning any work in this -- threads work-queue because it can be stolen by other threads. -- else return () Yield parent -> do -- Go to the end of the worklist: let Sched { workpool } = queue -- TODO: Perhaps consider Data.Seq here. -- This would also be a chance to steal and work from opposite ends of the queue. atomicModifyIORef workpool $ \ts -> (ts++[parent], ()) reschedule queue LiftIO io c -> do r <- io loop (c r) data FixParException = FixParException deriving (Show, Typeable) instance Exception FixParException -- | Process the next item on the work queue or, failing that, go into -- work-stealing mode. reschedule :: Sched -> IO () reschedule queue@Sched{ workpool } = do e <- atomicModifyIORef workpool $ \ts -> case ts of [] -> ([], Nothing) (t:ts') -> (ts', Just t) case e of Nothing -> steal queue Just t -> sched True queue t -- RRN: Note -- NOT doing random work stealing breaks the traditional -- Cilk time/space bounds if one is running strictly nested (series -- parallel) programs. -- | Attempt to steal work or, failing that, give up and go idle. 
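--
--   As implemented below: a worker with no victims left registers an
--   'MVar' on the @idle@ list and blocks on it.  'pushWork' wakes one
--   idle worker with 'False' when new work appears, while the last
--   worker to go idle puts 'True' into the others' 'MVar's to shut
--   the scheduler down.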
steal :: Sched -> IO () steal q@Sched{ idle, scheds, no=my_no } = do -- printf "cpu %d stealing\n" my_no go scheds where go [] = do m <- newEmptyMVar r <- atomicModifyIORef idle $ \is -> (m:is, is) if length r == numCapabilities - 1 then do -- printf "cpu %d initiating shutdown\n" my_no mapM_ (\m -> putMVar m True) r else do done <- takeMVar m if done then do -- printf "cpu %d shutting down\n" my_no return () else do -- printf "cpu %d woken up\n" my_no go scheds go (x:xs) | no x == my_no = go xs | otherwise = do r <- atomicModifyIORef (workpool x) $ \ ts -> case ts of [] -> ([], Nothing) (x:xs) -> (xs, Just x) case r of Just t -> do -- printf "cpu %d got work from cpu %d\n" my_no (no x) sched True q t Nothing -> go xs -- | If any worker is idle, wake one up and give it work to do. pushWork :: Sched -> Trace -> IO () pushWork Sched { workpool, idle } t = do atomicModifyIORef workpool $ \ts -> (t:ts, ()) idles <- readIORef idle when (not (null idles)) $ do r <- atomicModifyIORef idle (\is -> case is of [] -> ([], return ()) (i:is) -> (is, putMVar i False)) r -- wake one up data Sched = Sched { no :: {-# UNPACK #-} !Int, workpool :: IORef [Trace], idle :: IORef [MVar Bool], scheds :: [Sched] -- Global list of all per-thread workers. } -- deriving Show newtype Par a = Par { runCont :: (a -> Trace) -> Trace } instance Functor Par where fmap f m = Par $ \c -> runCont m (c . f) instance Monad Par where return = pure m >>= k = Par $ \c -> runCont m $ \a -> runCont (k a) c instance Applicative Par where (<*>) = ap pure a = Par ($ a) instance MonadFix Par where mfix = fixPar -- | Take the monadic fixpoint of a 'Par' computation. This is -- the definition of 'mfix' for 'Par'. Throws 'FixParException' -- if the result is demanded strictly within the computation. fixPar :: (a -> Par a) -> Par a -- We do this IO-style, rather than ST-style, in order to get a -- consistent exception type. Using the ST-style mfix, a strict -- argument could lead us to *either* a <> exception *or* -- (if the wrong sort of computation gets re-run) a "multiple-put" -- error. fixPar f = Par $ \ c -> LiftIO (do mv <- newEmptyMVar ans <- unsafeDupableInterleaveIO (readMVar mv `catch` \ ~BlockedIndefinitelyOnMVar -> throwIO FixParException) case f ans of Par q -> pure $ q $ \a -> LiftIO (putMVar mv a) (\ ~() -> c a)) id #if !MIN_VERSION_base(4,9,0) unsafeDupableInterleaveIO :: IO a -> IO a unsafeDupableInterleaveIO = unsafeInterleaveIO #endif newtype IVar a = IVar (IORef (IVarContents a)) -- data IVar a = IVar (IORef (IVarContents a)) -- | Equality for IVars is physical equality, as with other reference types. instance Eq (IVar a) where (IVar r1) == (IVar r2) = r1 == r2 instance NFData (IVar a) where rnf !_ = () -- From outside the Par computation we can peek. But this is nondeterministic. pollIVar :: IVar a -> IO (Maybe a) pollIVar (IVar ref) = do contents <- readIORef ref case contents of Full x -> return (Just x) _ -> return (Nothing) data IVarContents a = Full a | Empty | Blocked [a -> Trace] {-# INLINE runPar_internal #-} runPar_internal :: Bool -> Par a -> IO a runPar_internal _doSync x = do workpools <- replicateM numCapabilities $ newIORef [] idle <- newIORef [] let states = [ Sched { no=x, workpool=wp, idle, scheds=states } | (x,wp) <- zip [0..] workpools ] #if __GLASGOW_HASKELL__ >= 701 /* 20110301 */ -- -- We create a thread on each CPU with forkOn. The CPU on which -- the current thread is running will host the main thread; the -- other CPUs will host worker threads. 
-- -- Note: GHC 7.1.20110301 is required for this to work, because that -- is when threadCapability was added. -- (main_cpu, _) <- threadCapability =<< myThreadId #else -- -- Lacking threadCapability, we always pick CPU #0 to run the main -- thread. If the current thread is not running on CPU #0, this -- will require some data to be shipped over the memory bus, and -- hence will be slightly slower than the version above. -- let main_cpu = 0 #endif m <- newEmptyMVar forM_ (zip [0..] states) $ \(cpu,state) -> forkOn cpu $ if (cpu /= main_cpu) then reschedule state else do rref <- newIORef Empty sched _doSync state $ runCont (x >>= put_ (IVar rref)) (const Done) readIORef rref >>= putMVar m r <- takeMVar m case r of Full a -> return a _ -> error "no result" -- | Run a parallel, deterministic computation and return its result. -- -- Note: you must NOT return an IVar in the output of the parallel -- computation. This is unfortunately not enforced, as it is with -- `runST` or with newer libraries that export a Par monad, such as -- `lvish`. runPar :: Par a -> a runPar = unsafePerformIO . runPar_internal True -- | A version that avoids an internal `unsafePerformIO` for calling -- contexts that are already in the `IO` monad. -- -- Returning any value containing IVar is still disallowed, as it -- can compromise type safety. runParIO :: Par a -> IO a runParIO = runPar_internal True -- | An asynchronous version in which the main thread of control in a -- Par computation can return while forked computations still run in -- the background. runParAsync :: Par a -> a runParAsync = unsafePerformIO . runPar_internal False -- ----------------------------------------------------------------------------- -- | Creates a new @IVar@ new :: Par (IVar a) new = Par $ New Empty -- | Creates a new @IVar@ that contains a value newFull :: NFData a => a -> Par (IVar a) -- What are we doing here? We're manually raising the arity -- of newFull from 2 to 3, which seems like it's probably what -- we want most of the time. Notably, fmapping over the result -- gives really awful-looking Core if we don't do this. -- Regardless, I think we logically want to force the -- value when it's installed in the IVar rather than -- when we create the action to install it in the IVar. newFull x = Par $ \c -> x `deepseq` New (Full x) c -- | Creates a new @IVar@ that contains a value (head-strict only) newFull_ :: a -> Par (IVar a) newFull_ !x = Par $ New (Full x) -- | Read the value in an @IVar@. The 'get' operation can only return when the -- value has been written by a prior or parallel @put@ to the same -- @IVar@. get :: IVar a -> Par a get v = Par $ \c -> Get v c -- | Like 'put', but only head-strict rather than fully-strict. put_ :: IVar a -> a -> Par () put_ v !a = Par $ \c -> Put v a (c ()) -- | Put a value into an @IVar@. Multiple 'put's to the same @IVar@ -- are not allowed, and result in a runtime error. -- -- 'put' fully evaluates its argument, which therefore must be an -- instance of 'NFData'. The idea is that this forces the work to -- happen when we expect it, rather than being passed to the consumer -- of the @IVar@ and performed later, which often results in less -- parallelism than expected. -- -- Sometimes partial strictness is more appropriate: see 'put_'. -- put :: NFData a => IVar a -> a -> Par () -- Manually raise the arity, which seems likely to be what -- we want most of the time. We really want to force the -- value when it's installed in the IVar, not when we -- create the Par action to install it in the IVar. 
put v a = Par $ \c -> a `deepseq` Put v a (c ()) -- | Allows other parallel computations to progress. (should not be -- necessary in most cases). yield :: Par () yield = Par $ \c -> Yield (c ()) monad-par-0.3.6/LICENSE0000644000000000000000000000277107346545000012603 0ustar0000000000000000Copyright Simon Marlow, Ryan Newton 2011 All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the authors nor the names of other contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. monad-par-0.3.6/Setup.hs0000644000000000000000000000011007346545000013213 0ustar0000000000000000#!/usr/bin/env runhaskell import Distribution.Simple main = defaultMain monad-par-0.3.6/monad-par.cabal0000644000000000000000000001456407346545000014443 0ustar0000000000000000Name: monad-par Version: 0.3.6 Synopsis: A library for parallel programming based on a monad -- Version history: -- 0.1 : First release -- 0.1.0.1 : -- 0.1.0.2 : -- 0.1.1.0 : module reorganization, moving ParClass -- 0.2 : Bumped for new class-based API -- 0.2.1 : Bumped for change AList -- 0.2.2 : incorporation of real deque package -- 0.2.3 : Restricted module export for 0.2 release. -- 0.3 : Factored/reorganized modules and packages. -- *This* package is the original, core monad-par. -- 0.3.1 : fix for ghc 7.6.1, expose Par.IO -- 0.3.4 : switch to direct scheduler as default (only 1-level nesting allowed) -- 0.3.4.1 : fix build with GHC 7.0, and fix test -- 0.3.4.2 : Bugfix, 0.3.4.1 was released with debugging switches flipped. -- 0.3.4.3 : Bugfix, Trace scheduler is now the default -- 0.3.4.4 : Use the Trace scheduler in Control.Monad.Par.IO too -- 0.3.4.5 : Extremely minor, fix to unit tests. -- 0.3.4.6 : Add newgeneric flag, supporting the par-classes module. -- 0.3.4.7 : bugfix #38 for GHC 7.10 Description: The 'Par' monad offers a simple API for parallel programming. The library works for parallelising both pure and @IO@ computations, although only the pure version is deterministic. The default implementation provides a work-stealing scheduler and supports forking tasks that are much lighter weight than IO-threads. . For complete documentation see "Control.Monad.Par". . Some examples of use can be found in the @examples/@ directory of the source package. . Other related packages: . 
* @abstract-par@ provides the type classes that abstract over different implementations of the @Par@ monad. . * @monad-par-extras@ provides extra combinators and monad transformers layered on top of the @Par@ monad. . Changes in 0.3.4 relative to 0.3: . * Fix bugs that cause "thread blocked indefinitely on MVar" crashes. . * Added "Control.Monad.Par.IO" Homepage: https://github.com/simonmar/monad-par License: BSD3 License-file: LICENSE Author: Simon Marlow, Ryan Newton Maintainer: Simon Marlow , Ryan Newton Copyright: (c) Simon Marlow 2011 Stability: Experimental Category: Control,Parallelism,Monads Build-type: Simple Cabal-version: >=1.10 extra-source-files: tests/AListTest.hs tests/AllTests.hs tests/AsyncTest.hs tests/Makefile tests/ParTests1.hs tests/ParTests2.hs tests/ParTests_shared.hs tests/TestHelpers.hs tests/TestParDist.hs tests/Test_ContReaderT.hs tests/Test_ReaderContT.hs tests/hs_cassandra_microbench.hs tests/hs_cassandra_microbench2.hs Flag chaselev Description: Use Chase-Lev Deques for higher-perf work-stealing. Default: False Flag newgeneric Description: Provide instances for the new par-classes generic Par programming interface. Default: False Source-repository head type: git location: https://github.com/simonmar/monad-par Library Default-Language: Haskell98 Exposed-modules: -- The classic, simple monad-par interface: Control.Monad.Par , Control.Monad.Par.IO -- This is the default scheduler: , Control.Monad.Par.Scheds.Trace , Control.Monad.Par.Scheds.TraceInternal -- Replacement scheduler for Trace: , Control.Monad.Par.Scheds.Direct -- This scheduler uses sparks rather than IO threads. -- It only supports Futures, not full IVars. Fork -- becomes lighter weight. , Control.Monad.Par.Scheds.Sparks Build-depends: base >= 4 && < 5 -- This provides the interface which monad-par implements: , abstract-par , abstract-deque >= 0.1.4 -- Extras such as parMap, RNG, State , monad-par-extras >= 0.3 , deepseq >= 1.1 , array >= 0.3 , mwc-random >= 0.11 , containers , parallel >= 3.1 , mtl >= 2.0.1.0 if flag(chaselev) cpp-options: -DUSE_CHASELEV build-depends: chaselev-deque if flag(newgeneric) cpp-options: -DNEW_GENERIC build-depends: par-classes ghc-options: -O2 Other-modules: ------------------------------------------------------------ -- Schedulers & Infrastructure -- ------------------------------------------------------------ -- Strawman scheduler that forks IO threads: -- Control.Monad.Par.Scheds.ContFree, -- Internal logging framework: -- Control.Monad.Par.Logging, -- Serial Elision scheduling is currently experimental: -- Control.Monad.Par.Scheds.SerialElision Control.Monad.Par.Scheds.DirectInternal ------------------------------------------------------------ -- Data Structures -- ------------------------------------------------------------ -- ILists are internal: -- , Control.Monad.Par.IList -- RRN: Not exposing Streams or OpenLists yet. Need to improve performance. 
-- We have some ideas for enabling bounded chans while preventing deadlock: -- , Control.Monad.Par.OpenList -- , Control.Monad.Par.Stream Test-Suite test-monad-par Default-Language: Haskell98 type: exitcode-stdio-1.0 main-is: tests/AllTests.hs hs-source-dirs: tests/ ./ -- Run tests in parallel: ghc-options: -O2 -threaded -rtsopts -with-rtsopts=-N4 build-depends: base >= 4.3 && < 5 , abstract-par, monad-par-extras , array >= 0.3 , deepseq >= 1.2 , time , QuickCheck, HUnit , test-framework-hunit , test-framework-quickcheck2 >= 0.3 , test-framework, test-framework-th , abstract-deque >= 0.1.4 , mwc-random >= 0.11 , mtl >= 2.0.1.0 , containers monad-par-0.3.6/tests/0000755000000000000000000000000007346545000012731 5ustar0000000000000000monad-par-0.3.6/tests/AListTest.hs0000644000000000000000000000502407346545000015142 0ustar0000000000000000 module AListTest ( tests ) where import Test.Framework (defaultMain, testGroup) import Test.Framework.Providers.HUnit import Test.Framework.Providers.QuickCheck2 (testProperty) import Test.HUnit import Test.QuickCheck import Control.Monad.Par.AList as A import Prelude hiding (tail, length, map, filter) import qualified Prelude as P -------------------------------------------------------------------------------- -- QuickCheck properties prop_tofrom :: [Int] -> Bool prop_tofrom xs = toList (fromList xs) == xs prop_tofromB :: [Int] -> Bool prop_tofromB xs = toList (fromListBalanced xs) == xs prop_balance :: [Int] -> Bool prop_balance xs = toList (balance (fromList xs)) == xs prop_map :: [Int] -> Bool prop_map ls = map (+1) (fromList ls) == fromList (P.map (+1) ls) prop_filter :: [Int] -> Bool prop_filter ls = filter odd (fromList ls) == fromList (P.filter odd ls) -- | All QuickCheck tests together: test = mapM_ quickCheck [ prop_tofrom , prop_tofromB , prop_balance , prop_map , prop_filter ] -------------------------------------------------------------------------------- -- Testing Utils: bintree 0 x = x bintree n x = Append sub sub where sub = bintree (n-1) x showDbg ANil = "_" showDbg (ASing x) = show x showDbg (Append l r) = "("++showDbg l++" | "++showDbg r++")" showDbg (AList l) = show l -------------------------------------------------------------------------------- tests = [ -- testGroup "AList HUnit Tests" (hUnitTestToTests alist_tests), testGroup "AList HUnit Tests" [ testCase "fromList1" $ 8 @=? (length$ tail$ tail$ fromList [1..10]) , testCase "cons X3" $ 1 @=? (length$ tail$tail$ cons 1$ cons 2$ cons 3 empty) , testCase "tail X3" $ 253 @=? (length$ tail$tail$tail$ bintree 8 $ singleton 'a') , testCase "len bintree"$ 0 @=? (length$ bintree 8 $ empty) , testCase "inspect tree1"$ "((1 | 1) | (1 | 1))" @=? (showDbg$ bintree 2 $ singleton 1) , testCase "inspect tree2"$ "((_ | 1) | (1 | 1))" @=? (showDbg$ tail$ bintree 2 $ singleton 1) , testCase "inspect tree3"$ "(_ | (1 | 1))" @=? (showDbg$ tail$tail$ bintree 2 $ singleton 1) ], testGroup "AList QuickCheck Tests " [ testProperty "map" prop_map , testProperty "filter" prop_filter , testProperty "tofrom" prop_tofrom , testProperty "tofromB" prop_tofromB , testProperty "balance" prop_balance ] ] -- main = defaultMain tests monad-par-0.3.6/tests/AllTests.hs0000644000000000000000000000100707346545000015016 0ustar0000000000000000 -- | Combine individual test modules into a single module. 
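--
-- ParTests1 and ParTests2 share one body of assertions (ParTests_shared.hs,
-- pulled in via CPP #include); they differ only in importing the Trace or
-- the Direct scheduler, so the same tests can be run against both
-- implementations.  ParTests2 is currently commented out of 'main' below
-- while Direct is being debugged.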
module Main where import Test.Framework (defaultMain, testGroup) import qualified AListTest import qualified AsyncTest import qualified ParTests1 import qualified ParTests2 main = defaultMain $ concat $ [ AListTest.tests , ParTests1.tests -- , ParTests2.tests -- [2013.09.08] Temporarily disabling till we get back to debugging Direct. -- Not working right now: -- , AsyncTests.tests ] -- main = AsyncTests.manual_main monad-par-0.3.6/tests/AsyncTest.hs0000644000000000000000000000246407346545000015210 0ustar0000000000000000{-# LANGUAGE TemplateHaskell #-} module AsyncTest (tests, manual_main) where import Control.Exception import Control.Monad.Par.Scheds.Trace import Control.Monad.Par.Scheds.TraceInternal (Par(..),Trace(Fork),runCont,runParAsync) import Test.Framework.TH (testGroupGenerator) import Test.Framework (defaultMain, testGroup) import Test.Framework.Providers.HUnit import TestHelpers -------------------------------------------------------------------------------- -- A D E or A D B E but no C -- -- ODD -- this passes when run manually (compiled or interpreted) but -- fails when run through the test framework [2011.10.25]. case_async_test2 = do x <- async2 case words x of ["A","D","E"] -> return () ["A","D","B","E"] -> return () _ -> error$ "async_test2: Bad output: "++ show (words x) async2 = collectOutput $ \ r -> do prnt r "A" evaluate$ runParAsync $ do fork $ do _prnt r "B" x <- _unsafeio$ waste_time 0.5 _prnt r$ "C "++ show x -- _prnt r$ "C "++ show (_waste_time awhile) _prnt r "D" prnt r "E" -------------------------------------------------------------------------------- tests = [ $(testGroupGenerator) ] -- It works fine when run like this: manual_main = do x <- async2; putStrLn x case_async_test2 -- case_async_test1 monad-par-0.3.6/tests/Makefile0000644000000000000000000000136007346545000014371 0ustar0000000000000000 # NOTE!! THIS IS REDUNDANT WITH THE CABAL TEST-SUITE IN monad-par.cabal # # This is simply an alternative way to run the tests. # These are HUnit/QuickCheck tests, factored out of their respective files: SRC = ParTests.hs AListTest.hs AsyncTests.hs TestHelpers.hs Makefile TESTS= AllTests.hs EXES= $(TESTS:.hs=.exe) all: $(EXES) test .SUFFIXES: .hs .exe .hs.exe: $< $(SRC) ghc -i.. 
--make $< -o $@ -rtsopts -threaded deps: cabal install QuickCheck HUnit test-framework test-framework-hunit test-framework-quickcheck2 test-framework-th test: @echo @echo "Running tests:" @echo "==================================" ./AllTests.exe +RTS -N2 # "for t in $(EXES); do $t; done" # ./ParTests.exe # ./AListTest.exe clean: rm -f *.hi *.exe *.o monad-par-0.3.6/tests/ParTests1.hs0000644000000000000000000000023307346545000015111 0ustar0000000000000000{-# LANGUAGE TemplateHaskell, BangPatterns, CPP #-} module ParTests1 (tests) where import Control.Monad.Par.Scheds.Trace #include "ParTests_shared.hs" monad-par-0.3.6/tests/ParTests2.hs0000644000000000000000000000023307346545000015112 0ustar0000000000000000{-# LANGUAGE TemplateHaskell, BangPatterns, CPP #-} module ParTests2 (tests) where import Control.Monad.Par.Scheds.Direct #include "ParTests_shared.hs" monad-par-0.3.6/tests/ParTests_shared.hs0000644000000000000000000001430707346545000016365 0ustar0000000000000000 import Control.Monad.Par.Combinator -- import Control.Concurrent.Chan () import GHC.Conc (numCapabilities) import Control.Exception (evaluate) -- import System.IO.Unsafe -- import Data.IORef import Test.HUnit (Assertion, (@=?)) import Test.Framework.TH (testGroupGenerator) -- import Test.Framework (defaultMain, testGroup) import qualified Test.Framework as TF import Test.Framework.Providers.HUnit -- import Test.Framework.Providers.QuickCheck2 (testProperty) import System.Timeout (timeout) import TestHelpers (assertException, prnt, _prnt, _unsafeio, waste_time, collectOutput) -- ----------------------------------------------------------------------------- -- Testing three :: Int three = 3 par :: (Eq a, Show a) => a -> Par a -> Assertion par res m = res @=? runPar m -- From https://github.com/simonmar/monad-par/pull/49 case_parallelFilter :: Assertion case_parallelFilter = run 200 where run 0 = pure () run i = do par result (parfilter p xs) run (i-1) p x = x `mod` 2 == 0 xs = [0..10] :: [Int] result = filter p xs parfilter _ [] = pure [] parfilter f [x] = pure (if f x then [x] else []) parfilter f xs = do let (as, bs) = halve xs v1 <- spawn $ parfilter f as v2 <- spawn $ parfilter f bs left <- get v1 right <- get v2 pure (left ++ right) halve xs = splitAt (length xs `div` 2) xs -- | Make sure there's no problem with bringing the worker threads up and down many -- times. 10K runPar's takes about 6.3 seconds. case_lotsaRunPar :: Assertion case_lotsaRunPar = loop 2000 where loop 0 = putStrLn "" loop i = do -- We need to do runParIO to make sure the compiler does the runPar each time. runParIO (return ()) putStr "." loop (i-1) case_justReturn :: Assertion case_justReturn = par three (return 3) case_oneIVar :: Assertion case_oneIVar = par three (do r <- new; put r 3; get r) -- [2012.01.02] Apparently observing divergences here too: case_forkNFill :: Assertion case_forkNFill = par three (do r <- new; fork (put r 3); get r) -- [2012.05.02] The nested Trace implementation sometimes fails to -- throw this exception, so we expect either the exception or a -- timeout. This is reasonable since we might expect a deadlock in a -- non-Trace scheduler. --ACF -- -- [2013.05.17] Update, it's also possible to get a blocked-indefinitely error here -- --RRN -- -- [2013.09.08] Yep, I'm nondeterministically seeing this fail using -- Direct. But this is actually a failure of the exception handling -- setup. `assertException` should be catching blocked-indefinitely -- error and it's NOT always. 
Running this test ALONE, I cannot trip -- it, but running it with others I do. In fact, running it with -- through test-framework's "-j1" I cannot reproduce the error. It is -- probably just the perturbation to timing caused by this, after all, -- WAIT_WORKERS is not currently on for Direct. Still, I thought that -- wouldn't matter here because the *main* thread can't return. -- -- Also, it seems like this test can just hang indefinitely, with the -- timeout failing to do the trick.... -- case_getEmpty :: IO () case_getEmpty = do -- Microseconds: _ <- timeout (100 * 1000) $ assertException ["no result", "timeout", "thread blocked indefinitely"] $ runPar $ do r <- new; get r return () -- [2012.01.02] Observed a blocked-indef-on-MVar failure here on -- master branch with 16 threads: -- -- | Simple diamond test. case_test_diamond :: Assertion case_test_diamond = 9 @=? (m :: Int) where m = runPar $ do abcd <- sequence [new,new,new,new] case abcd of [a,b,c,d] -> do fork $ do x <- get a; put b (x+1) fork $ do x <- get a; put c (x+2) fork $ do x <- get b; y <- get c; put d (x+y) fork $ do put a 3 get d _ -> error "Oops" -- | Violate IVar single-assignment: -- -- NOTE: presently observing termination problems here. -- runPar is failing to exist after the exception? disabled_case_multiput :: IO () disabled_case_multiput = assertException ["multiple put"] $ runPar $ do a <- new put a (3::Int) put a (4::Int) return () -- disabled_test3 = assertException "multiple put" $ -- runPar $ do -- a <- new -- put a (3::Int) -- both (return 1) (return 2) -- where -- -- both a b >> c == both (a >> c) (b >> c) -- -- Duplicate the continuation: is this useful for anything? -- both :: Par a -> Par a -> Par a -- both a b = Par $ \c -> Fork (runCont a c) (runCont b c) -- | A reduction test. case_test_pmrr1 :: Assertion -- Saw a failure here using Direct: -- http://tester-lin.soic.indiana.edu:8080/job/HackageReleased_monad-par/GHC_VERS=7.0.4,label=tank.cs.indiana.edu/40/console -- Exception inside child thread "(worker 0 of originator ThreadId 5)", ThreadId 10: thread blocked indefinitely in an MVar operation case_test_pmrr1 = par 5050 $ parMapReduceRangeThresh 1 (InclusiveRange 1 100) (return) (return `bincomp` (+)) 0 where bincomp unary bin a b = unary (bin a b) ------------------------------------------------------------ -- | Observe the real time ordering of events: -- -- Child-stealing: -- A D B C E -- -- Parent-stealing: -- A B D C E -- -- Sequential: -- A B C D E -- -- This is only for the TRACE scheduler right now. -- -- This test is DISABLED because it fails unless you run with +RTS -N2 -- or greater. 
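--
-- (For reference, both existing test drivers already request multiple
-- capabilities: the cabal test-suite passes "-with-rtsopts=-N4" and
-- tests/Makefile runs "./AllTests.exe +RTS -N2", so re-enabling this test
-- under either of them should satisfy the -N2 requirement.)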
-- disabled_case_async_test1 :: IO () disabled_case_async_test1 = do x <- res case (numCapabilities, words x) of (1,["A","B","C",_,"D","E"]) -> return () (n,["A","D","B","C",_,"E"]) | n > 1 -> return () (n,["A","B","D","C",_,"E"]) | n > 1 -> return () _ -> error$ "Bad temporal pattern: "++ show (words x) where res = collectOutput $ \ r -> do prnt r "A" evaluate$ runPar $ do iv <- new fork $ do _prnt r "B" x <- _unsafeio$ waste_time 0.5 _prnt r$ "C "++ show x -- _prnt r$ "C "++ show (_waste_time awhile) put iv () _prnt r "D" get iv prnt r$ "E" ------------------------------------------------------------ tests :: [TF.Test] tests = [ $(testGroupGenerator) ] monad-par-0.3.6/tests/TestHelpers.hs0000644000000000000000000000547107346545000015536 0ustar0000000000000000{-# LANGUAGE BangPatterns #-} module TestHelpers where import Data.List import Prelude hiding (catch) import Control.Exception import System.IO.Unsafe (unsafePerformIO) import Data.IORef import Data.Time.Clock import Control.Monad.Par.Class ------------------------------------------------------------ -- Helpers -- _unsafeio :: IO a -> Par a _unsafeio :: ParFuture iv p => IO a -> p a _unsafeio io = let x = unsafePerformIO io in x `seq` return x _waste_time :: Int -> Double _waste_time n = loop n 1.00111 where loop 0 !x = x loop !n !x | x > 100.0 = loop (n-1) (x / 2) loop !n !x = loop (n-1) (x + x * 0.5011) -- This version watches the clock so it uses a constant amount of time -- regardless of compile/interpret mode an opt lvl. waste_time :: Double -> IO Double waste_time seconds = do strt <- getCurrentTime let loop !x | x > 100.0 = chk (x / 2) loop !x = chk (x + x * 0.5011) chk !x = do t <- getCurrentTime if diffUTCTime t strt >= realToFrac seconds then return x else loop x loop 1.00111 -- Obviously this takes a lot longer if it's interpreted: --awhile = 300000000 awhile :: Integer awhile = 3 * 1000 * 1000 -- awhile = 300000 atomicModifyIORef_ :: IORef a -> (a -> a) -> IO () atomicModifyIORef_ rf fn = atomicModifyIORef rf (\x -> (fn x, ())) -- | Haskell doesn't offer a way to create a Handle for in-memory output. -- So here we use IORefs instead... collectOutput :: (IORef [String] -> IO ()) -> IO String collectOutput fn = do c <- newIORef [] fn c ls <- readIORef c return (unlines (reverse ls)) prnt :: IORef [String] -> String -> IO () prnt ref str = atomicModifyIORef_ ref (str:) -- _prnt :: IORef [String] -> String -> Par () _prnt :: ParFuture iv p => IORef [String] -> String -> p () _prnt ref = _unsafeio . prnt ref -- ----------------------------------------------------------------------------- -- assertException :: (Exception e, Eq e) => e -> IO a -> IO () -- assertException ex action = -- handleJust isWanted (const $ return ()) $ do -- action -- assertFailure $ "Expected exception: " ++ show ex -- where isWanted = guard . (== ex) -- | Ensure that evaluating an expression returns an exception -- containing one of the expected messages. assertException :: [String] -> a -> IO () assertException msgs val = do x <- catch (do evaluate val; return Nothing) (\e -> do putStrLn$ "Good. Caught exception: " ++ show (e :: SomeException) return (Just$ show e)) case x of Nothing -> error "Failed to get an exception!" 
Just s -> if any (`isInfixOf` s) msgs then return () else error$ "Got the wrong exception, expected to one of the strings: "++ show msgs ++ "\nInstead got this exception:\n " ++ show s monad-par-0.3.6/tests/TestParDist.hs0000644000000000000000000000666007346545000015503 0ustar0000000000000000{-# LANGUAGE TemplateHaskell, DeriveDataTypeable #-} {-# OPTIONS_GHC -O2 -ddump-splices #-} -- Simple tests of distributed work stealing. ---------------------------------------------- import System.Environment (getArgs) import qualified Control.Monad.Par.Meta.Dist as D import Control.Monad.IO.Class (liftIO) import Control.Monad (mapM_) -- Tweaked version of CloudHaskell's closures: import Remote2.Call (mkClosureRec, remotable) import Control.Concurrent (myThreadId, threadDelay) import System.Process (readProcess) import System.Posix.Process (getProcessID) import Data.Char (isSpace) import Data.Typeable import Data.Binary import qualified Data.ByteString.Char8 as BS -- import qualified Data.IntMap as M -- import qualified Data.Set as M -------------------------------------------------------------------------------- -- A description of a fake work topology which is executed by sleeping -- (threadDelay) different amounts at different points. -- Presently this describes futures-only work: data FakeWork = Work Time FakeWork | Fork FakeWork FakeWork | SyncAll FakeWork | Nop -- | Spawn Id FakeWork FakeWork -- | Seq [FakeWork] -- | Sync Id FakeWork deriving (Eq,Show,Read,Typeable) -- All constructors take a continuation. type Time = Int -- Time in milliseconds type Id = Int -- A simple test that ensures a single steal for two workers: -- t1 = Spawn 11 (Work 101) $ -- Seq [Work 102, -- Sync 11] t1 = Fork (Work 101 Nop) $ Work 102 $ SyncAll Nop t2 = switcher 3 3 True -- This one alternates where the *real* work will be. -- -- Takes number of switches. Performs a shrinking number of work -- items per "straightaway": switcher 1 m _ = Work 100 (SyncAll Nop) switcher _ _ False = Work 101 (SyncAll Nop) switcher n m True = Work 102 $ switcher n (m-1) True switcher n 1 b = Fork (switcher (n-1) n b) (switcher (n-1) n (not b)) ---------------------------------------------------------------------------------------------------- runTest :: FakeWork -> D.Par () --runTest fw = theloop fw M.empty runTest fw = theloop fw [] theloop fw ls = case fw of Nop -> return () Work t cont -> do liftIO$ threadDelay (t * 1000) liftIO$ putStrLn$ "Finished work "++show t theloop cont ls -- Seq ls -> mapM_ theloop ls -- Spawn id fw cont -> Fork child cont -> do iv <- D.longSpawn$ $(mkClosureRec 'runTest) child -- Associate id with ivar -- theloop cont (M.insert id iv mp) theloop cont (iv:ls) SyncAll cont -> do mapM_ D.get ls theloop cont [] -- Generate stub code for RPC: remotable ['runTest] -- instance Serialize FakeWork where instance Binary FakeWork where put fw = put (show fw) -- get str = return (read (BS.unpack str)) get = get >>= return . read ---------------------------------------------------------------------------------------------------- main = do [version] <- getArgs case version of "slave" -> D.runParSlaveWithTransport [__remoteCallMetaData] D.TCP "master" -> do D.runParDistWithTransport [__remoteCallMetaData] D.TCP (runTest t1) putStrLn $ "Finished with work. Calling SHUTDOWN..." D.shutdownDist putStrLn $ "... returned from shutdown, apparently successful." 
str -> error$"Unhandled mode: " ++ str monad-par-0.3.6/tests/Test_ContReaderT.hs0000644000000000000000000000213707346545000016441 0ustar0000000000000000{-# LANGUAGE CPP, ScopedTypeVariables #-} -- If we instead commute the monad transformers we can change the -- reader value observed by a continuation. import Control.Monad.Cont as C import qualified Control.Monad.Reader as R import Data.IORef import Text.Printf type M a = C.ContT () (R.ReaderT Int IO) a test ref = do x <- R.ask liftIO$ printf "Observed value %d before callCC\n" x callCC$ \cont -> do liftIO$ writeIORef ref cont liftIO$ printf "Write ioref inside callcc...\n" z <- R.ask liftIO$ printf "Observed value %d in invoked continuation\n" z return () main = do ref :: IORef (() -> M ()) <- newIORef (error "should not be used") let m0 = test ref m1 = C.runContT m0 (\ () -> return ()) m2 = R.runReaderT m1 (100::Int) m2 k <- readIORef ref let m3 = do w <- lift$ R.ask liftIO$ putStrLn ("In new runReader instance: observed " ++ show w) k () liftIO$ putStrLn " !! Should not reach here..." R.runReaderT (C.runContT m3 (\ () -> return ())) 200 putStrLn "Done with main." monad-par-0.3.6/tests/Test_ReaderContT.hs0000644000000000000000000000172707346545000016445 0ustar0000000000000000{-# LANGUAGE CPP #-} import Control.Monad.Cont as C import qualified Control.Monad.Reader as R import Data.IORef import Text.Printf type M a = R.ReaderT Int ContIO a type ContIO = C.ContT () IO test ref = do x <- R.ask liftIO$ printf "Observed value %d before callCC\n" x callCC$ \cont -> do y <- R.ask liftIO$ writeIORef ref cont liftIO$ printf "Observed value %d inside callCC\n" y z <- R.ask liftIO$ printf "Observed value %d in invoked continuation\n" z return () main = do ref <- newIORef (error "unused") let test' = do test ref m1 = do R.runReaderT test' (100::Int) C.runContT m1 (\ () -> return ()) k <- readIORef ref let m2 = do w <- R.ask liftIO$ putStrLn (" In new runReader instance: observed " ++ show w) k () liftIO$ putStrLn " !! Should not reach here..." C.runContT (R.runReaderT m2 200) (\ () -> return ()) putStrLn "Done with main." 
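-- Taken together with Test_ContReaderT.hs: with ReaderT *outside* ContT (this
-- file), the environment is supplied when the continuation is captured, so
-- re-invoking 'k' under @runReaderT ... 200@ should still report the original
-- value (100) inside the continuation; with the transformers commuted (ContT
-- outside ReaderT), the re-invoked continuation re-reads the environment and
-- should see the new value (200).  That ordering difference is what these two
-- experiments probe.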
monad-par-0.3.6/tests/hs_cassandra_microbench.hs0000644000000000000000000000465007346545000020114 0ustar0000000000000000 -- Peculiar: -- * Requires hscassandra 0.0.8, available on github only -- Functions to: -- * Connect to a cassandra nosql server -- * Put a bytestring -- * Get a bytestring -- This file acts as a microbench for insertion or retrieval of keys import Database.Cassandra import Database.Cassandra.Types import qualified Data.ByteString.Lazy.Char8 as BS import qualified Database.Cassandra.Thrift.Cassandra_Types as CT import Data.Either.Utils import qualified Data.List as L import System.Time import Data.Time.Clock import System.Environment import Control.Monad import qualified Data.Set as S config = CassandraConfig { cassandraKeyspace = "Test2" , cassandraConsistencyLevel = ONE , cassandraHostname = "127.0.0.1" , cassandraPort = 9160 , cassandraUsername = "" , cassandraPassword = "" } pack :: [Char] -> BS.ByteString pack string = BS.pack string deColumnValue :: Column -> BS.ByteString deColumnValue (Column _ value _) = value fetchValue string = runCassandraT config $ do res <- get "Users" (pack string) AllColumns return $ deColumnValue $ head $ res insertValue string byteString = runCassandraT config $ do insert "Users" (pack string) [ (pack "name") =: byteString ] removeValue string = runCassandraT config $ do remove "Users" (pack string) (ColNames [ (pack "name") ]) main = do --input <- readFile "/usr/share/dict/words" --let words = tail $ lines input let words = map (\n -> (show n) ++ "Thequickbrownfoxjumpedoverthelazydog") $ take 5000 [1..] putStrLn "Benchmarking Cassandra bindings by writing in every dictionary file word as a k,v pair." putStrLn "About to start timing writes..." writeStart <- getCurrentTime mapM_ (\w -> insertValue w (pack w)) words writeStop <- getCurrentTime let writeTime = diffUTCTime writeStop writeStart putStrLn ("Writes took " ++ (show $ writeTime)) putStrLn "Starting reads..." readStart <- getCurrentTime mapM_ (\w -> fetchValue w) words let pairs = length words readStop <- getCurrentTime let readTime = diffUTCTime readStop readStart putStrLn ("Reads took " ++ (show $ readTime)) let totalTime = diffUTCTime readStop writeStart putStrLn ("Total time: " ++ (show $ totalTime)) putStrLn "Removing keys..." 
mapM_ (\w -> removeValue w) words monad-par-0.3.6/tests/hs_cassandra_microbench2.hs0000644000000000000000000000476607346545000020206 0ustar0000000000000000 -- Peculiar: -- * Requires hscassandra 0.0.8, available on github only -- Functions to: -- * Connect to a cassandra nosql server -- * Put a bytestring -- * Get a bytestring -- This file acts as a microbench for insertion or retrieval of keys import Database.Cassandra import Database.Cassandra.Types import qualified Data.ByteString.Lazy.Char8 as BS import qualified Database.Cassandra.Thrift.Cassandra_Types as CT import Data.Either.Utils import qualified Data.List as L import System.Time import Data.Time.Clock import System.Environment import Control.Monad import qualified Data.Set as S config = CassandraConfig { cassandraKeyspace = "Test2" , cassandraConsistencyLevel = ONE , cassandraHostname = "127.0.0.1" , cassandraPort = 9160 , cassandraUsername = "" , cassandraPassword = "" } pack :: [Char] -> BS.ByteString pack string = BS.pack string deColumnValue :: Column -> BS.ByteString deColumnValue (Column _ value _) = value fetchValue string = runCassandraT config $ do res <- get "Users" (pack string) AllColumns return $ deColumnValue $ head $ res insertValue string byteString = runCassandraT config $ do insert "Users" (pack string) [ (pack "name") =: byteString ] removeValue string = runCassandraT config $ do remove "Users" (pack string) (ColNames [ (pack "name") ]) main = do let words = map (\n -> (show n) ++ "Thequickbrownfoxjumpedoverthelazydog") $ take 50000 [1..] putStrLn "Benchmarking Cassandra binding." putStrLn "About to start timing writes..." writeStart <- getCurrentTime runCassandraT config $ do mapM_ (\w -> insert "Users" (pack w) [(pack "name") =: (pack w) ]) words writeStop <- getCurrentTime let writeTime = diffUTCTime writeStop writeStart putStrLn ("Writes took " ++ (show $ writeTime)) putStrLn "Starting reads..." readStart <- getCurrentTime runCassandraT config $ do mapM_ (\w -> get "Users" (pack w) AllColumns) words readStop <- getCurrentTime let readTime = diffUTCTime readStop readStart putStrLn ("Reads took " ++ (show $ readTime)) let totalTime = diffUTCTime readStop writeStart putStrLn ("Total time: " ++ (show $ totalTime)) putStrLn "Removing keys..." runCassandraT config $ do mapM_ (\w -> remove "Users" (pack w) (ColNames [ (pack "name") ])) words
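-- Relative to hs_cassandra_microbench.hs: this variant wraps each whole phase
-- (all writes, then all reads, then all removes) in a single runCassandraT
-- call instead of one call per key, and uses 50000 keys rather than 5000,
-- presumably to factor out whatever per-call session setup runCassandraT
-- performs.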