rq-1.16.2/CHANGES.md

### RQ 1.16.2 (2024-05-01)
* Fixed a bug that may cause jobs from the intermediate queue to be moved to `FailedJobRegistry`. Thanks @selwin!

### RQ 1.16.1 (2024-03-09)
* Added `worker_pool.get_worker_process()` to make `WorkerPool` easier to extend. Thanks @selwin!

### RQ 1.16 (2024-02-24)
* Added a way for jobs to wait for their latest result: `job.latest_result(timeout=60)`. Thanks @ajnisbet!
* Fixed an issue where `stopped_callback` is not respected when a job is enqueued via `enqueue_many()`. Thanks @eswolinsky3241!
* `worker-pool` no longer ignores `--quiet`. Thanks @Mindiell!
* Added compatibility with AWS Serverless Redis. Thanks @peter-gy!
* `worker-pool` now starts with the scheduler. Thanks @chromium7!

### RQ 1.15.1 (2023-06-20)
* Fixed a bug that may cause a crash when cleaning the intermediate queue. Thanks @selwin!
* Fixed a bug that may cause canceled jobs to still run dependent jobs. Thanks @fredsod!

### RQ 1.15 (2023-05-24)
* Added `Callback(on_stopped='my_callback')`. Thanks @eswolinsky3241!
* `Callback` now accepts a dotted path to a function as input. Thanks @rishabh-ranjan!
* `queue.enqueue_many()` now supports job dependencies. Thanks @eswolinsky3241!
* The `rq worker` CLI script now configures logging based on the `DICT_CONFIG` key present in the config file. Thanks @juur!
* Whenever possible, `Worker` now uses `lmove()` to implement the [reliable queue pattern](https://redis.io/commands/lmove/). Thanks @selwin!
* `Scheduler` now only releases locks that it successfully acquires. Thanks @xzander!
* Fixed crashes that may be caused by changes to the `as_text()` function in v1.14. Thanks @tchapi!
* Various linting, CI and code quality improvements. Thanks @robhudson!

### RQ 1.14.1 (2023-05-05)
* Fixed a crash that happens if the Redis connection uses SSL. Thanks @tchapi!
* Fixed a crash if `job.meta` is loaded using the wrong serializer. Thanks @gabriels1234!

### RQ 1.14.0 (2023-05-01)
* Added `WorkerPool` (beta) that manages multiple workers in a single CLI. Thanks @selwin!
* Added a new `Callback` class that allows more flexibility in declaring job callbacks. Thanks @ronlut!
* Fixed a regression where jobs with unserializable return values crash RQ. Thanks @tchapi!
* Added `--dequeue-strategy` option to RQ's CLI. Thanks @ccrvlh!
* Added `--max-idle-time` option to RQ's worker CLI. Thanks @ronlut!
* Added `--maintenance-interval` option to RQ's worker CLI. Thanks @ronlut!
* Fixed RQ usage on Windows as well as various other refactorings. Thanks @ccrvlh!
* Show more info on the `rq info` CLI command. Thanks @iggeehu!
* `queue.enqueue_jobs()` now properly accounts for job dependencies. Thanks @sim6!
* `TimerDeathPenalty` now properly handles negative/infinite timeout. Thanks @marqueurs404!
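A minimal sketch of the new `Callback` class from 1.14.0 (the task and callback functions below are illustrative assumptions, not part of this changelog):

```python
from redis import Redis
from rq import Callback, Queue

def my_task():
    return 42

def report_success(job, connection, result, *args, **kwargs):
    # Success callbacks receive the job, the connection and the return value
    print('%s returned %r' % (job.id, result))

queue = Queue(connection=Redis())
queue.enqueue(my_task, on_success=Callback(report_success, timeout=10))
```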
### RQ 1.13.0 (2023-02-19)
* Added `work_horse_killed_handler` argument to `Worker`. Thanks @ronlut!
* Fixed an issue where results aren't properly persisted on synchronous jobs. Thanks @selwin!
* Fixed a bug where job results are not properly persisted when `result_ttl` is `-1`. Thanks @sim6!
* Various documentation and logging fixes. Thanks @lowercase00!
* Improved Redis connection reliability. Thanks @lowercase00!
* Scheduler reliability improvements. Thanks @OlegZv and @lowercase00!
* Fixed a bug where `dequeue_timeout` ignores `worker_ttl`. Thanks @ronlut!
* Use `job.return_value()` instead of `job.result` when processing callbacks. Thanks @selwin!
* Various internal refactorings to make `Worker` code more easily extendable. Thanks @lowercase00!
* RQ's source code is now Black-formatted. Thanks @aparcar!

### RQ 1.12.0 (2023-01-15)
* RQ now stores multiple job execution results. This feature requires Redis >= 5.0 (it uses Redis Streams). Please refer to [the docs](https://python-rq.org/docs/results/) for more info. Thanks @selwin!
* Improved performance when enqueueing many jobs at once. Thanks @rggjan!
* Redis server version is now cached in the connection object. Thanks @odarbelaeze!
* Properly handle the `at_front` argument when jobs are scheduled. Thanks @gabriels1234!
* Added type hints to RQ's code base. Thanks @lowercase00!
* Fixed a bug where exceptions are logged twice. Thanks @selwin!
* Don't delete `job.worker_name` after the job is finished. Thanks @eswolinsky3241!

### RQ 1.11.1 (2022-09-25)
* `queue.enqueue_many()` now supports `on_success` and `on_failure` arguments. Thanks @y4n9squared!
* You can now pass `enqueue_at_front` to `Dependency()` objects to put dependent jobs at the front when they are enqueued. Thanks @jtfidje!
* Fixed a bug where workers may wrongly acquire scheduler locks. Thanks @milesjwinter!
* Jobs are no longer enqueued if any one of their dependencies is canceled. Thanks @selwin!
* Fixed a bug when handling jobs that have been stopped. Thanks @ronlut!
* Fixed a bug in handling Redis connections that don't allow the `SETNAME` command. Thanks @yilmaz-burak!

### RQ 1.11 (2022-07-31)
* This will be the last RQ version that supports Python 3.5.
* Allow jobs to be enqueued even when their dependencies fail via `Dependency(allow_failure=True)`. Thanks @mattchan-tencent, @caffeinatedMike and @selwin!
* When stopped jobs are deleted, they are also removed from `FailedJobRegistry`. Thanks @selwin!
* `job.requeue()` now supports an `at_front` argument. Thanks @buroa!
* Added SSL support for sentinel connections. Thanks @nevious!
* `SimpleWorker` now works better on Windows. Thanks @caffeinatedMike!
* Added `on_failure` and `on_success` arguments to the `@job` decorator. Thanks @nepta1998!
* Fixed a bug in dependency handling. Thanks @th3hamm0r!
* Minor fixes and optimizations by @xavfernandez, @olaure and @kusaku.
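A hedged sketch of `Dependency(allow_failure=True)` from 1.11 combined with the `enqueue_at_front` flag from 1.11.1 (both task functions are illustrative):

```python
from redis import Redis
from rq import Queue
from rq.job import Dependency

def flaky_task():
    raise RuntimeError('boom')

def cleanup_task():
    return 'cleaned up'

queue = Queue(connection=Redis())
first = queue.enqueue(flaky_task)

# Run cleanup_task even if flaky_task fails, and put it at the front of the queue
dependency = Dependency(jobs=[first], allow_failure=True, enqueue_at_front=True)
queue.enqueue(cleanup_task, depends_on=dependency)
```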
### RQ 1.10.1 (2021-12-07)
* **BACKWARDS INCOMPATIBLE**: synchronous execution of jobs now correctly mimics async job execution. An exception is no longer raised when a job fails; job status is now correctly set to `FAILED` and failure callbacks are properly called when a job is run synchronously. Thanks @ericman93!
* Fixed a bug that could cause job keys to be left over when `result_ttl=0`. Thanks @selwin!
* Allow the `ssl_cert_reqs` argument to be passed to Redis. Thanks @mgcdanny!
* Better compatibility with Python 3.10. Thanks @rpkak!
* `job.cancel()` now also removes the job from registries. Thanks @joshcoden!
* Pubsub threads are now launched in `daemon` mode. Thanks @mik3y!

### RQ 1.10.0 (2021-09-09)
* You can now enqueue jobs from the CLI. Docs [here](https://python-rq.org/docs/#cli-enqueueing). Thanks @rpkak!
* Added a new `CanceledJobRegistry` to keep track of canceled jobs. Thanks @selwin!
* Added custom serializer support to various places in RQ. Thanks @joshcoden!
* `cancel_job(job_id, enqueue_dependents=True)` allows you to cancel a job while enqueueing its dependents. Thanks @joshcoden!
* Added `job.get_meta()` to fetch fresh meta values directly from Redis. Thanks @aparcar!
* Fixed a race condition that could cause jobs to be incorrectly added to `FailedJobRegistry`. Thanks @selwin!
* Requeueing a job now clears `job.exc_info`. Thanks @selwin!
* Repo infrastructure improvements by @rpkak.
* Other minor fixes by @cesarferradas and @bbayles.

### RQ 1.9.0 (2021-06-30)
* Added success and failure callbacks. You can now do `queue.enqueue(foo, on_success=do_this, on_failure=do_that)`. Thanks @selwin!
* Added `queue.enqueue_many()` to enqueue many jobs in one go. Thanks @joshcoden!
* Various improvements to CLI commands. Thanks @rpkak!
* Minor logging improvements. Thanks @clavigne and @natbusa!

### RQ 1.8.1 (2021-05-17)
* Jobs that fail due to hard shutdowns are now retried. Thanks @selwin!
* `Scheduler` now works with custom serializers. Thanks @alella!
* Added support for click 8.0. Thanks @rpkak!
* Enqueueing static methods is now supported. Thanks @pwws!
* Job exceptions no longer get printed twice. Thanks @petrem!

### RQ 1.8.0 (2021-03-31)
* You can now declare multiple job dependencies. Thanks @skieffer and @thomasmatecki for laying the groundwork for multi-dependency support in RQ.
* Added `RoundRobinWorker` and `RandomWorker` classes to control how jobs are dequeued from multiple queues. Thanks @bielcardona!
* Added `--serializer` option to the `rq worker` CLI. Thanks @f0cker!
* Added support for running asyncio tasks. Thanks @MyrikLD!
* Added a new `STOPPED` job status so that you can differentiate between failed and manually stopped jobs. Thanks @dralley!
* Fixed a serialization bug when used with the job dependency feature. Thanks @jtfidje!
* `clean_worker_registry()` now works in batches of 1,000 jobs to prevent modifying too many keys at once. Thanks @AxeOfMen and @TheSneak!
* Workers will now wait and try to reconnect in case of Redis connection errors. Thanks @Asrst!

### RQ 1.7.0 (2020-11-29)
* Added `job.worker_name` attribute that tells you which worker is executing a job. Thanks @selwin!
* Added `send_stop_job_command()` that tells a worker to stop executing a job. Thanks @selwin!
* Added `JSONSerializer` as an alternative to the default `pickle`-based serializer. Thanks @JackBoreczky!
* Fixed `RQScheduler` running on Redis with `ssl=True`. Thanks @BobReid!

### RQ 1.6.1 (2020-11-08)
* Worker now properly releases the scheduler lock when run in burst mode. Thanks @selwin!

### RQ 1.6.0 (2020-11-08)
* Workers now listen to external commands via pubsub. The first two features taking advantage of this infrastructure are `send_shutdown_command()` and `send_kill_horse_command()`. Thanks @selwin!
* Added `job.last_heartbeat` property that's periodically updated when a job is running. Thanks @theambient!
* Horses are now killed by their parent group. This helps in cleanly killing all related processes if a job uses multiprocessing. Thanks @theambient!
* Fixed scheduler usage with Redis connections that use custom parser classes. Thanks @selwin!
* Scheduler now enqueues jobs in batches to prevent lock timeouts. Thanks @nikkonrom!
* Scheduler now follows the RQ worker's logging configuration. Thanks @christopher-dG!

### RQ 1.5.2 (2020-09-10)
* Scheduler now uses the same class of connection that's being used. Thanks @pacahon!
* Fixed a bug that puts retried jobs in `FailedJobRegistry`. Thanks @selwin!
* Fixed a deprecated import. Thanks @elmaghallawy!

### RQ 1.5.1 (2020-08-21)
* Fixes for Redis server version parsing. Thanks @selwin!
* Retries can now be set through the `@job` decorator. Thanks @nerok!
* Log messages below `logging.ERROR` are now sent to stdout. Thanks @selwin!
* Better logger name for `RQScheduler`. Thanks @atainter!
* Better handling of exceptions thrown by horses. Thanks @theambient!

### RQ 1.5.0 (2020-07-26)
* Failed jobs can now be retried. Thanks @selwin!
* Fixed scheduler on Python > 3.8.0. Thanks @selwin!
* RQ is now aware of which version of Redis server it's running on. Thanks @aparcar!
* RQ now uses `hset()` on redis-py >= 3.5.0. Thanks @aparcar!
* Fixed incorrect worker timeout calculation in `SimpleWorker.execute_job()`. Thanks @davidmurray!
* Made horse handling logic more robust. Thanks @wevsty!
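The retry API introduced in 1.5.0 looks like this; a short sketch mirroring the project's own docs (`my_task` is illustrative):

```python
from redis import Redis
from rq import Queue, Retry

def my_task():
    return 42

queue = Queue(connection=Redis())

# Retry up to 3 times, with growing intervals between attempts
queue.enqueue(my_task, retry=Retry(max=3, interval=[10, 30, 60]))
```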
### RQ 1.4.3 (2020-06-28)
* Added `job.get_position()` and `queue.get_job_position()`. Thanks @aparcar!
* Longer TTLs for worker keys to prevent them from expiring inside the worker lifecycle. Thanks @selwin!
* Long job args/kwargs are now truncated during logging. Thanks @JhonnyBn!
* `job.requeue()` now returns the modified job. Thanks @ericatkin!

### RQ 1.4.2 (2020-05-26)
* Reverted changes to the `hmset` command which caused workers on Redis server < 4 to crash. Thanks @selwin!
* Merged in more groundwork to enable jobs with multiple dependencies. Thanks @thomasmatecki!

### RQ 1.4.1 (2020-05-16)
* The default serializer now uses `pickle.HIGHEST_PROTOCOL` for backward compatibility reasons. Thanks @bbayles!
* Avoid deprecation warnings on redis-py >= 3.5.0. Thanks @bbayles!

### RQ 1.4.0 (2020-05-13)
* Custom serializers are now supported. Thanks @solababs!
* `delay()` now accepts a `job_id` argument. Thanks @grayshirt!
* Fixed a bug that may cause early termination of scheduled or requeued jobs. Thanks @rmartin48!
* When a job is scheduled, always add the queue name to a set containing active RQ queue names. Thanks @mdawar!
* Added `--sentry-ca-certs` and `--sentry-debug` parameters to the `rq worker` CLI. Thanks @kichawa!
* Jobs cleaned up by `StartedJobRegistry` are given exception info. Thanks @selwin!
* Python 2.7 is no longer supported. Thanks @selwin!

### RQ 1.3.0 (2020-03-09)
* Support for infinite job timeout. Thanks @theY4Kman!
* Added a `__main__` file so you can now do `python -m rq.cli`. Thanks @bbayles!
* Fixed an issue that may cause zombie processes. Thanks @wevsty!
* `job_id` is now passed to the logger during failed jobs. Thanks @smaccona!
* `queue.enqueue_at()` and `queue.enqueue_in()` now support explicit `args` and `kwargs` function invocation. Thanks @selwin!

### RQ 1.2.2 (2020-01-31)
* `Job.fetch()` now properly handles unpickleable return values. Thanks @selwin!

### RQ 1.2.1 (2020-01-31)
* `enqueue_at()` and `enqueue_in()` now set job status to `scheduled`. Thanks @coolhacker170597!
* Failed job data is now automatically expired by Redis. Thanks @selwin!
* Fixed `RQScheduler` logging configuration. Thanks @FlorianPerucki!

### RQ 1.2.0 (2020-01-04)
* This release also contains an alpha version of RQ's builtin job scheduling mechanism. Thanks @selwin!
* Various internal API changes in preparation to support multiple job dependencies. Thanks @thomasmatecki!
* `--verbose` or `--quiet` CLI arguments now override `--logging-level`. Thanks @zyt312074545!
* Fixed a bug in `rq info` where it doesn't show workers for empty queues. Thanks @zyt312074545!
* Fixed `queue.enqueue_dependents()` on custom `Queue` classes. Thanks @van-ess0!
* RQ and Python versions are now stored in job metadata. Thanks @eoranged!
* Added `failure_ttl` argument to the `@job` decorator. Thanks @pax0r!

### RQ 1.1.0 (2019-07-20)
- Added `max_jobs` to `Worker.work` and `--max-jobs` to the `rq worker` CLI. Thanks @perobertson!
- Passing `--disable-job-desc-logging` to `rq worker` now does what it's supposed to do. Thanks @janierdavila!
- `StartedJobRegistry` now properly handles jobs with infinite timeout. Thanks @macintoshpie!
- `rq info` CLI command now cleans up registries when it first runs. Thanks @selwin!
- Replaced the use of `procname` with `setproctitle`. Thanks @j178!
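A minimal sketch of the `max_jobs` option added in 1.1.0 (assuming a worker for the `default` queue):

```python
from redis import Redis
from rq import Worker

# Process at most 5 jobs in burst mode, then exit
worker = Worker(['default'], connection=Redis())
worker.work(burst=True, max_jobs=5)
```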
### 1.0 (2019-04-06)
Backward incompatible changes:

- `job.status` has been removed. Use `job.get_status()` and `job.set_status()` instead. Thanks @selwin!
- `FailedQueue` has been replaced with `FailedJobRegistry`:
  * `get_failed_queue()` function has been removed. Please use `FailedJobRegistry(queue=queue)` instead.
  * `move_to_failed_queue()` has been removed.
  * RQ now provides a mechanism to automatically clean up failed jobs. By default, failed jobs are kept for 1 year.
  * Thanks @selwin!
- RQ's custom job exception handling mechanism has also changed slightly:
  * RQ's default exception handling mechanism (moving jobs to `FailedJobRegistry`) can be disabled by doing `Worker(disable_default_exception_handler=True)`.
  * Custom exception handlers are no longer executed in reverse order.
  * Thanks @selwin!
- `Worker` names are now randomized. Thanks @selwin!
- The `timeout` argument on `queue.enqueue()` has been deprecated in favor of `job_timeout`. Thanks @selwin!
- Sentry integration has been reworked:
  * RQ now uses the new [sentry-sdk](https://pypi.org/project/sentry-sdk/) in place of the deprecated [Raven](https://pypi.org/project/raven/) library.
  * RQ will look for the more explicit `RQ_SENTRY_DSN` environment variable instead of `SENTRY_DSN` before instantiating Sentry integration.
  * Thanks @selwin!
- Fixed a `Worker.total_working_time` accounting bug. Thanks @selwin!

### 0.13.0 (2018-12-11)
- Compatibility with Redis 3.0. Thanks @dash-rai!
- Added `job_timeout` argument to `queue.enqueue()`. This argument will eventually replace the `timeout` argument. Thanks @selwin!
- Added `job_id` argument to the `BaseDeathPenalty` class. Thanks @loopbio!
- Fixed a bug which caused long-running jobs to time out under `SimpleWorker`. Thanks @selwin!
- You can now override a worker's name from the config file. Thanks @houqp!
- Horses will now return exit code 1 if they don't terminate properly (e.g. when the Redis connection is lost). Thanks @selwin!
- Added `date_format` and `log_format` arguments to `Worker` and the `rq worker` CLI. Thanks @shikharsg!

### 0.12.0 (2018-07-14)
- Added support for Python 3.7. Since `async` is a keyword in Python 3.7, `Queue(async=False)` has been changed to `Queue(is_async=False)`. The `async` keyword argument will still work, but raises a `DeprecationWarning`. Thanks @dchevell!

### 0.11.0 (2018-06-01)
- `Worker` now periodically sends heartbeats and checks whether the child process is still alive while performing long-running jobs. Thanks @Kriechi!
- `Job.create` now accepts `timeout` in string format (e.g. `1h`). Thanks @theodesp!
- `worker.main_work_horse()` should exit with return code `0` even if job execution fails. Thanks @selwin!
- `job.delete(delete_dependents=True)` will delete a job along with its dependents. Thanks @olingerc!
- Other minor fixes and documentation updates.

### 0.10.0
- The `@job` decorator now accepts `description`, `meta`, `at_front` and `depends_on` kwargs. Thanks @jlucas91 and @nlyubchich! An example follows this list.
- Added the capability to fetch workers by queue using `Worker.all(queue=queue)` and `Worker.count(queue=queue)`.
- Improved RQ's default logging configuration. Thanks @samuelcolvin!
- `job.data` and `job.exc_info` are now stored in compressed format in Redis.
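The extended `@job` decorator kwargs from 0.10.0 in use; a short sketch (the queue name and function are illustrative):

```python
from redis import Redis
from rq.decorators import job

redis = Redis()

@job('low', connection=redis, description='build thumbnails', at_front=True)
def build_thumbnails(path):
    return 'thumbnails for %s' % path

# Enqueues onto the 'low' queue with the custom description, at the front
build_thumbnails.delay('/tmp/images')
```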
### 0.9.2
- Fixed an issue where `worker.refresh()` may fail when `birth_date` is not set. Thanks @vanife!

### 0.9.1
- Fixed an issue where `worker.refresh()` may fail when upgrading from previous versions of RQ.

### 0.9.0
- `Worker` statistics! `Worker` now keeps track of `last_heartbeat`, `successful_job_count`, `failed_job_count` and `total_working_time`. Thanks @selwin!
- `Worker` now sends a heartbeat during suspension checks. Thanks @theodesp!
- Added `queue.delete()` method to delete `Queue` objects entirely from Redis. Thanks @theodesp!
- More robust exception string decoding. Thanks @stylight!
- Added `--logging-level` option to command line scripts. Thanks @jiajunhuang!
- Added millisecond precision to job timestamps. Thanks @samuelcolvin!
- Python 2.6 is no longer supported. Thanks @samuelcolvin!

### 0.8.2
- Fixed an issue where `job.save()` may fail with an unpickleable return value.

### 0.8.1
- Replaced `job.id` with the `Job` instance in the local `_job_stack`. Thanks @katichev!
- `job.save()` no longer implicitly calls `job.cleanup()`. Thanks @katichev!
- Properly catch `StopRequested` in `worker.heartbeat()`. Thanks @fate0!
- You can now pass in timeouts in days. Thanks @yaniv-g!
- The core logic of sending a job to `FailedQueue` has been moved to `rq.handlers.move_to_failed_queue`. Thanks @yaniv-g!
- RQ CLI commands now accept the `--path` parameter. Thanks @kirill and @sjtbham!
- Made `job.dependency` slightly more efficient. Thanks @liangsijian!
- `FailedQueue` now returns jobs with the correct class. Thanks @amjith!

### 0.8.0
- Refactored APIs to allow custom `Connection`, `Job`, `Worker` and `Queue` classes via the CLI. Thanks @jezdez!
- `job.delete()` now properly cleans itself from job registries. Thanks @selwin!
- `Worker` should no longer overwrite `job.meta`. Thanks @WeatherGod!
- `job.save_meta()` can now be used to persist custom job data. Thanks @katichev!
- Added Redis Sentinel support. Thanks @strawposter!
- Made `Worker.find_by_key()` more efficient. Thanks @selwin!
- You can now specify job `timeout` using strings such as `queue.enqueue(foo, timeout='1m')`. Thanks @luojiebin!
- Better unicode handling. Thanks @myme5261314 and @jaywink!
- Sentry should default to HTTP transport. Thanks @Atala!
- Improved `HerokuWorker` termination logic. Thanks @samuelcolvin!
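A short sketch combining two of the 0.8.x additions above (the function is illustrative; note that on current RQ the timeout kwarg is spelled `job_timeout`, whereas in 0.8.0 it was still `timeout`):

```python
from redis import Redis
from rq import Queue

def count_words(text):
    return len(text.split())

q = Queue(connection=Redis())

# String-based timeouts such as '1m' (new in 0.8.0)
job = q.enqueue(count_words, 'hello there', job_timeout='1m')

# Persist custom job data with save_meta() (new in 0.8.0)
job.meta['requested_by'] = 'docs-example'
job.save_meta()
```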
### 0.7.1
- Fixes a bug that prevents fetching jobs from `FailedQueue` (#765). Thanks @jsurloppe!
- Fixes a race condition when enqueueing jobs with dependencies (#742). Thanks @th3hamm0r!
- Skip a test that requires Linux signals on macOS (#763). Thanks @jezdez!
- `enqueue_job` should use a Redis pipeline when available (#761). Thanks @mtdewulf!

### 0.7.0
- Better support for Heroku workers (#584, #715)
- Support for connecting using a custom connection class (#741)
- Fix: connection stack in default worker (#479, #641)
- Fix: `fetch_job` now checks that a requested job actually comes from the intended queue (#728, #733)
- Fix: properly raise an exception if a job dependency does not exist (#747)
- Fix: job status not updated when horse dies unexpectedly (#710)
- Fix: `request_force_stop_sigrtmin` failing for Python 3 (#727)
- Fix: `Job.cancel()` method on failed queue (#707)
- Python 3.5 compatibility improvements (#729)
- Improved signal name lookup (#722)

### 0.6.0
- Jobs that depend on a job with `result_ttl == 0` are now properly enqueued.
- `cancel_job` now works properly. Thanks @jlopex!
- Jobs that execute successfully no longer try to remove themselves from the queue. Thanks @amyangfei!
- Worker now properly logs falsy return values. Thanks @liorsbg!
- `Worker.work()` now accepts a `logging_level` argument. Thanks @jlopex!
- Logging-related fixes by @redbaron4 and @butla!
- The `@job` decorator now accepts a `ttl` argument. Thanks @javimb!
- `Worker.__init__` now accepts a `queue_class` keyword argument. Thanks @antoineleclair!
- `Worker` now saves warm shutdown time. You can access this property from `worker.shutdown_requested_date`. Thanks @olingerc!
- Synchronous queues now properly set completed job status as finished. Thanks @ecarreras!
- `Worker` now correctly deletes `current_job_id` after failed job execution. Thanks @olingerc!
- `Job.create()` and `queue.enqueue_call()` now accept a `meta` argument. Thanks @tornstrom!
- Added `job.started_at` property. Thanks @samuelcolvin!
- Cleaned up the implementation of `job.cancel()` and `job.delete()`. Thanks @glaslos!
- `Worker.execute_job()` now exports `RQ_WORKER_ID` and `RQ_JOB_ID` to OS environment variables. Thanks @mgk!
- `rqinfo` now accepts a `--config` option. Thanks @kfrendrich!
- `Worker` class now has `request_force_stop()` and `request_stop()` methods that can be overridden by custom worker classes. Thanks @samuelcolvin!
- Other minor fixes by @VicarEscaped, @kampfschlaefer, @ccurvey, @zfz, @antoineleclair, @orangain, @nicksnell, @SkyLothar, @ahxxm and @horida.

### 0.5.6
- Job results are now logged on `DEBUG` level. Thanks @tbaugis!
- Modified `patch_connection` so the Redis connection can be easily mocked.
- Custom exception handlers are now called if the Redis connection is lost. Thanks @jlopex!
- Jobs can now depend on jobs in a different queue. Thanks @jlopex!

### 0.5.5 (2015-08-25)
- Added support for the `--exception-handler` command line flag.
- Fixed compatibility with click>=5.0.
- Fixed a maximum recursion depth problem for very large queues that contain jobs that all fail.

### 0.5.4 (July 8th, 2015)
- Fixed compatibility with raven>=5.4.0.

### 0.5.3 (June 3rd, 2015)
- Better API for instantiating Workers. Thanks @RyanMTB!
- Better support for unicode kwargs. Thanks @nealtodd and @brownstein!
- Workers now automatically clean up job registries every hour.
- Jobs in `FailedQueue` now have their statuses set properly.
- `enqueue_call()` no longer ignores `ttl`. Thanks @mbodock!
- Improved logging. Thanks @trevorprater!

### 0.5.2 (April 14th, 2015)
- Support SSL connections to Redis (requires redis-py>=2.10).
- Fix to prevent deep call stacks with large queues.

### 0.5.1 (March 9th, 2015)
- Resolved a performance issue when queues contain many jobs.
- Restored the ability to specify connection params in config.
- Record `birth_date` and `death_date` on Worker.
- Added support for SSL URLs in Redis (and the `REDIS_SSL` config option).
- Fixed encoding issues with non-ASCII characters in function arguments.
- Fixed a Redis transaction management issue with job dependencies.

### 0.5.0 (Jan 30th, 2015)
- RQ workers can now be paused and resumed using the `rq suspend` and `rq resume` commands. Thanks Jonathan Tushman!
- Jobs that are being performed are now stored in `StartedJobRegistry` for monitoring purposes. This also prevents currently active jobs from being orphaned/lost in the case of hard shutdowns.
- You can now monitor finished jobs by checking `FinishedJobRegistry`. Thanks Nic Cope for helping!
- Jobs with unmet dependencies are now created with `deferred` as their status. You can monitor deferred jobs by checking `DeferredJobRegistry`.
- It is now possible to enqueue a job at the beginning of the queue using `queue.enqueue(func, at_front=True)`. Thanks Travis Johnson!
- Command line scripts have all been refactored to use `click`. Thanks Lyon Zhang!
- Added a new `SimpleWorker` that does not fork when executing jobs. Useful for testing purposes. Thanks Cal Leeming!
- Added `--queue-class` and `--job-class` arguments to the `rqworker` script. Thanks David Bonner!
- Many other minor bug fixes and enhancements.

### 0.4.6 (May 21st, 2014)
- Raise a warning when RQ workers are used with Sentry DSNs using asynchronous transports. Thanks Wei, Selwin & Toms!

### 0.4.5 (May 8th, 2014)
- Fixed a bug where rqworker broke on Python 2.6. Thanks, Marko!

### 0.4.4 (May 7th, 2014)
- Properly declare the redis dependency.
- Fixed a NameError regression that was introduced in 0.4.3.

### 0.4.3 (May 6th, 2014)
- Made job and queue classes overridable. Thanks, Marko!
- Don't require a connection for the `@job` decorator at definition time. Thanks, Sasha!
- Syntactic code cleanup.

### 0.4.2 (April 28th, 2014)
- Added missing `depends_on` kwarg to the `@job` decorator. Thanks, Sasha!

### 0.4.1 (April 22nd, 2014)
- Fixed a bug where RQ 0.4 workers could not unpickle/process jobs from RQ < 0.4.

### 0.4.0 (April 22nd, 2014)
- Emptying the failed queue from the command line is now as simple as running `rqinfo -X` or `rqinfo --empty-failed-queue`.
- Job data is unpickled lazily. Thanks, Malthe!
- Removed dependency on the `times` library. Thanks, Malthe!
- Job dependencies! Thanks, Selwin.
- Custom worker classes, via the `--worker-class=path.to.MyClass` command line argument. Thanks, Selwin.
- `Queue.all()` and `rqinfo` now report empty queues, too. Thanks, Rob!
- Fixed a performance issue in `Queue.all()` when issued in large Redis DBs. Thanks, Rob!
- Birth and death dates are now stored as proper datetimes, not timestamps.
- Ability to provide a custom job description (instead of using the default function invocation hint). Thanks, İbrahim.
- Fix: the temporary key for the compact queue is now randomly generated, which should avoid name clashes for concurrent compact actions.
- Fix: `Queue.empty()` now correctly deletes job hashes from Redis.

### 0.3.13 (December 17th, 2013)
- Bug fix where the worker crashed on jobs that have their timeout explicitly removed. Thanks for reporting, @algrs.

### 0.3.12 (December 16th, 2013)
- Bug fix where a worker could time out before the job was done, removing it from any monitor overviews (#288).

### 0.3.11 (August 23rd, 2013)
- Some more fixes in command line scripts for Python 3.

### 0.3.10 (August 20th, 2013)
- Bug fix in setup.py.

### 0.3.9 (August 20th, 2013)
- Python 3 compatibility. (Thanks, Alex!)
- Minor bug fix where Sentry would break when func cannot be imported.

### 0.3.8 (June 17th, 2013)
- `rqworker` and `rqinfo` have a `--url` argument to connect to a Redis URL.
- `rqworker` and `rqinfo` have a `--socket` option to connect to a Redis server through a Unix socket.
- `rqworker` reads `SENTRY_DSN` from the environment, unless specifically provided on the command line.
- `Queue` has a new API that supports paging `get_jobs(3, 7)`, which will return at most 7 jobs, starting from the 3rd.

### 0.3.7 (February 26th, 2013)
- Fixed a bug where workers would not execute builtin functions properly.

### 0.3.6 (February 18th, 2013)
- Worker registrations now expire. This should prevent `rqinfo` from reporting about ghosted workers. (Thanks, @yaniv-aknin!)
- `rqworker` will automatically clean up ghosted worker registrations from pre-0.3.6 runs.
- `rqworker` grew a `-q` flag, to be more silent (only warnings/errors are shown).

### 0.3.5 (February 6th, 2013)
- `ended_at` is now recorded for normally finished jobs, too. (Previously only for failed jobs.)
- Adds support for both `Redis` and `StrictRedis` connection types.
- Makes `StrictRedis` the default connection type if none is explicitly provided.

### 0.3.4 (January 23rd, 2013)
- Restore compatibility with Python 2.6.

### 0.3.3 (January 18th, 2013)
- Fix bug where work was lost due to silently ignored unpickle errors.
- Jobs can now access the current `Job` instance from within. Relevant documentation [here](http://python-rq.org/docs/jobs/).
- Custom properties can be set by modifying the `job.meta` dict. Relevant documentation [here](http://python-rq.org/docs/jobs/).
- `rqworker` now has an optional `--password` flag.
- Removed the `logbook` dependency (in favor of `logging`).

### 0.3.2 (September 3rd, 2012)
- Fixes the broken `rqinfo` command.
- Improves compatibility with Python < 2.7.

### 0.3.1 (August 30th, 2012)
- `.enqueue()` now takes a `result_ttl` keyword argument that can be used to change the expiration time of results.
- The Queue constructor now takes an optional `async=False` argument to bypass the worker (for testing purposes).
- Jobs now carry status information. To get job status information, like whether a job is queued, finished, or failed, use the property `status`, or one of the new boolean accessor properties `is_queued`, `is_finished` or `is_failed`.
- Job return values are always stored explicitly, even if the job has no explicit return value or returns `None` (with a given TTL, of course). This makes it possible to distinguish between a job that explicitly returned `None` and a job that isn't finished yet (see the `status` property).
- Custom exception handlers can now be configured in addition to, or to fully replace, moving failed jobs to the failed queue. Relevant documentation [here](http://python-rq.org/docs/exceptions/) and [here](http://python-rq.org/patterns/sentry/).
- `rqworker` now supports passing in configuration files instead of the many command line options: `rqworker -c settings` will source `settings.py`.
- `rqworker` now supports a one-flag setup to enable Sentry as its exception handler: `rqworker --sentry-dsn="http://public:secret@example.com/1"`. Alternatively, you can use a settings file and configure `SENTRY_DSN = 'http://public:secret@example.com/1'` instead.

### 0.3.0 (August 5th, 2012)
- Reliability improvements:
  - Warm shutdown now exits immediately when Ctrl+C is pressed and the worker is idle.
  - Worker does not leak worker registrations anymore when stopped gracefully.
- `.enqueue()` does not consume the `timeout` kwarg anymore. Instead, to pass RQ a timeout value while enqueueing a function, use the explicit invocation instead:

  ```python
  q.enqueue(do_something, args=(1, 2), kwargs={'a': 1}, timeout=30)
  ```

- Added a `@job` decorator, which can be used to do Celery-style delayed invocations:

  ```python
  from redis import StrictRedis
  from rq.decorators import job

  # Connect to Redis
  redis = StrictRedis()

  @job('high', timeout=10, connection=redis)
  def some_work(x, y):
      return x + y
  ```

  Then, in another module, you can call `some_work`:

  ```python
  from foo.bar import some_work

  some_work.delay(2, 3)
  ```

### 0.2.2 (August 1st, 2012)
- Fix bug where return values that couldn't be pickled crashed the worker.

### 0.2.1 (July 20th, 2012)
- Fix important bug where result data wasn't restored from Redis correctly (affected non-string results only).

### 0.2.0 (July 18th, 2012)
- `q.enqueue()` accepts instance methods now, too.
  Objects will be pickled along with the instance method, so beware.
- `q.enqueue()` accepts string specifications of functions now, too. Example: `q.enqueue("my.math.lib.fibonacci", 5)`. Useful if the worker and the submitter of work don't share code bases.
- Jobs can be assigned custom attrs and they will be pickled along with the rest of the job's attrs. Can be used when writing RQ extensions.
- Workers can now accept explicit connections, like Queues.
- Various bug fixes.

### 0.1.2 (May 15, 2012)
- Fix broken PyPI deployment.

### 0.1.1 (May 14, 2012)
- Thread-safety by using context locals.
- Register scripts as console_scripts, for better portability.
- Various bugfixes.

### 0.1.0 (March 28, 2012)
- Initially released version.

rq-1.16.2/tox.ini

[tox]
envlist=py36,py37,py38,py39,py310

[testenv]
commands=pytest --cov rq --cov-config=.coveragerc --durations=5 {posargs}
deps=
    codecov
    psutil
    pytest
    pytest-cov
    sentry-sdk
passenv=
    RUN_SSL_TESTS

; [testenv:lint]
; basepython = python3.10
; deps =
;     black
;     ruff
; commands =
;     black --check rq tests
;     ruff check rq tests

[testenv:py36]
skipdist = True
basepython = python3.6
deps = {[testenv]deps}

[testenv:py37]
skipdist = True
basepython = python3.7
deps = {[testenv]deps}

[testenv:py38]
skipdist = True
basepython = python3.8
deps = {[testenv]deps}

[testenv:py39]
skipdist = True
basepython = python3.9
deps = {[testenv]deps}

[testenv:py310]
skipdist = True
basepython = python3.10
deps = {[testenv]deps}

[testenv:ssl]
skipdist = True
basepython = python3.10
deps=
    pytest
    sentry-sdk
    psutil
passenv=
    RUN_SSL_TESTS
commands=pytest -m ssl_test {posargs}

rq-1.16.2/docs/CNAME

python-rq.org

rq-1.16.2/docs/_config.yml

baseurl: /
exclude: design
permalink: pretty

navigation:
  - text: Home
    url: /
  - text: Docs
    url: /docs/
    subs:
      - text: Queues
        url: /docs/
      - text: Workers
        url: /docs/workers/
      - text: Results
        url: /docs/results/
      - text: Jobs
        url: /docs/jobs/
      - text: Exceptions & Retries
        url: /docs/exceptions/
      - text: Scheduling Jobs
        url: /docs/scheduling/
      - text: Job Registries
        url: /docs/job_registries/
      - text: Monitoring
        url: /docs/monitoring/
      - text: Connections
        url: /docs/connections/
      - text: Testing
        url: /docs/testing/
  - text: Patterns
    url: /patterns/
    subs:
      - text: Heroku
        url: /patterns/
      - text: Django
        url: /patterns/django/
      - text: Sentry
        url: /patterns/sentry/
      - text: Supervisor
        url: /patterns/supervisor/
      - text: Systemd
        url: /patterns/systemd/
  - text: Contributing
    url: /contrib/
    subs:
      - text: Internals
        url: /contrib/
      - text: GitHub
        url: /contrib/github/
      - text: Documentation
        url: /contrib/docs/
      - text: Testing
        url: /contrib/testing/
      - text: Vagrant
        url: /contrib/vagrant/
  - text: Chat
    url: /chat/

rq-1.16.2/docs/favicon.png  [binary PNG data omitted]

rq-1.16.2/docs/index.md

---
title: "RQ: Simple job queues for Python"
layout: default
---

RQ (_Redis Queue_) is a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis and designed to have a low barrier to entry, so it can be integrated into your web stack easily. RQ requires Redis >= 3.0.0.

## Getting started

First, run a Redis server. You can use an existing one. To put jobs on queues, you don't have to do anything special, just define your typically lengthy or blocking function:

```python
import requests

def count_words_at_url(url):
    resp = requests.get(url)
    return len(resp.text.split())
```

Then, create an RQ queue:

```python
from redis import Redis
from rq import Queue

q = Queue(connection=Redis())
```

And enqueue the function call:

```python
from my_module import count_words_at_url

job = q.enqueue(count_words_at_url, 'http://nvie.com')
```
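`enqueue()` returns a `Job` handle that you can poll for status and result; a small sketch (method names as of recent RQ releases; on older versions `job.result` plays the role of `job.return_value()`):

```python
import time

job = q.enqueue(count_words_at_url, 'http://nvie.com')

time.sleep(2)  # give a running worker a moment to pick the job up

print(job.get_status())    # e.g. 'finished'
print(job.return_value())  # e.g. 818, once the job has finished
```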
Scheduling jobs is similarly easy:

```python
from datetime import datetime, timedelta

# Schedule job to run at 9:15, October 8th
job = q.enqueue_at(datetime(2019, 10, 8, 9, 15), say_hello)

# Schedule job to be run in 10 seconds
job = q.enqueue_in(timedelta(seconds=10), say_hello)
```

You can also ask RQ to retry failed jobs:

```python
from rq import Retry

# Retry up to 3 times, failed job will be requeued immediately
q.enqueue(say_hello, retry=Retry(max=3))

# Retry up to 3 times, with configurable intervals between retries
q.enqueue(say_hello, retry=Retry(max=3, interval=[10, 30, 60]))
```

### The worker

To start executing enqueued function calls in the background, start a worker from your project's directory:

```console
$ rq worker --with-scheduler
*** Listening for work on default
Got count_words_at_url('http://nvie.com') from default
Job result = 818
*** Listening for work on default
```

That's about it.

## Installation

Simply use the following command to install the latest released version:

    pip install rq

If you want the cutting edge version (that may well be broken), use this:

    pip install git+https://github.com/nvie/rq.git@master#egg=rq

## Project history

This project has been inspired by the good parts of [Celery][1], [Resque][2] and [this snippet][3], and has been created as a lightweight alternative to existing queueing frameworks, with a low barrier to entry.

[1]: http://www.celeryproject.org/
[2]: https://github.com/defunkt/resque
[3]: https://github.com/fengsp/flask-snippets/blob/1f65833a4291c5b833b195a09c365aa815baea4e/utilities/rq.py

rq-1.16.2/docs/_includes/forward.html  [content stripped in extraction]

rq-1.16.2/docs/_includes/ga_tracking.html  [content stripped in extraction]

rq-1.16.2/docs/_layouts/chat.html  [markup partially stripped in extraction; surviving text below]

---
layout: default
---

{{ content }}

rq-1.16.2/docs/_layouts/contrib.html  [markup partially stripped in extraction; surviving text below]

---
layout: default
---

{{ content }}

rq-1.16.2/docs/_layouts/default.html  [markup partially stripped in extraction; surviving text below]

{{ page.title }}
Fork me on GitHub
{{ content }}
{% include forward.html %}
{% include ga_tracking.html %}

rq-1.16.2/docs/_layouts/docs.html  [markup partially stripped in extraction; surviving text below]

---
layout: default
---

{{ content }}

rq-1.16.2/docs/_layouts/patterns.html  [markup partially stripped in extraction; surviving text below]

---
layout: default
---

{{ content }}

rq-1.16.2/docs/chat/index.md

---
title: "RQ Discord"
layout: chat
---

Join our Discord [here](https://discord.gg/pYannYntWH){:target="_blank" rel="noopener noreferrer"} if you need help or want to chat about contributions or what should come next in RQ.

rq-1.16.2/docs/contrib/docs.md

---
title: "Documentation"
layout: contrib
---

### Running docs locally

To build the docs, run [jekyll](http://jekyllrb.com/):

```
jekyll serve
```

If you'd rather use Vagrant, see [these instructions][v].

[v]: {{site.baseurl}}contrib/vagrant/

rq-1.16.2/docs/contrib/github.md

---
title: "Contributing to RQ"
layout: contrib
---

If you'd like to contribute to RQ, simply [fork](https://github.com/rq/rq) the project on GitHub and submit a pull request. Please bear in mind the philosophy behind RQ: it should remain small and simple rather than packed with features, and it should value insightfulness over performance.

rq-1.16.2/docs/contrib/index.md

---
title: "RQ: Simple job queues for Python"
layout: contrib
---

This document describes how RQ works internally when enqueueing or dequeueing jobs.

## Enqueueing internals

Whenever a function call gets enqueued, RQ does two things:

* It creates a job instance representing the delayed function call and persists it in a Redis [hash][h]; and
* It pushes the given job's ID onto the requested Redis queue.

All jobs are stored in Redis under the `rq:job:` prefix, for example:

    rq:job:55528e58-9cac-4e05-b444-8eded32e76a1

The keys of such a job [hash][h] are:

    created_at  => '2012-02-13 14:35:16+0000'
    enqueued_at => '2012-02-13 14:35:16+0000'
    origin      => 'default'
    data        => <the pickled job data>
    description => "count_words_at_url('http://nvie.com')"

Depending on whether the job has run successfully or has failed, the following keys are available, too:

    ended_at => '2012-02-13 14:41:33+0000'
    result   => <the pickled return value>
    exc_info => <the exception information>

[h]: http://redis.io/topics/data-types#hashes

## Dequeueing internals

Whenever a dequeue is requested, an RQ worker does two things:

* It pops a job ID from the queue and fetches the job data belonging to that job ID;
* It starts executing the function call.
* If the job succeeds, its return value is written to the `result` hash key and the hash itself is expired after 500 seconds; or
* If the job fails, the exception information is written to the `exc_info` hash key and the job ID is pushed onto the `failed` queue.

## Cancelling jobs

Any job ID that is encountered by a worker for which no job hash is found in Redis is simply ignored. This makes it easy to cancel jobs by simply removing the job hash. In Python:

```python
from rq import cancel_job

cancel_job('2eafc1e6-48c2-464b-a0ff-88fd199d039c')
```

Note that it is irrelevant on which queue the job resides. When a worker eventually pops the job ID from the queue and notes that the Job hash does not exist (anymore), it simply discards the job ID and continues with the next.
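You can watch these moving parts directly in Redis; a hedged sketch (key names as described above, default queue assumed, `say_hello` is illustrative):

```python
from redis import Redis
from rq import Queue

def say_hello():
    return 'hello'

redis = Redis()
q = Queue(connection=redis)
job = q.enqueue(say_hello)

# The job ID has been pushed onto the queue's Redis list...
print(redis.lrange('rq:queue:default', 0, -1))

# ...and the job itself is persisted as a hash under the rq:job: prefix.
print(redis.hgetall('rq:job:%s' % job.id))
```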
rq-1.16.2/docs/contrib/testing.md

---
title: "Testing"
layout: contrib
---

### Testing RQ locally

To run tests locally you can use `tox`, which will run the tests with all supported Python versions (3.6 - 3.11):

```
tox
```

Bear in mind that you need to have all those versions installed in your local environment for that to work.

### Testing with Pytest directly

For a faster and simpler testing alternative you can just run `pytest` directly:

```sh
pytest .
```

It should automatically pick up the `tests` directory and run the test suite. Bear in mind that some tests may be skipped in your local environment - make sure to look at which tests are being skipped.

### Skipped Tests

Apart from tests skipped because of the interpreter (e.g. `PyPy`) or the operating system, slow tests are also skipped by default, but are run in the GitHub CI/CD workflow. To include slow tests in your local environment, use the `RUN_SLOW_TESTS_TOO=1` environment variable:

```sh
RUN_SLOW_TESTS_TOO=1 pytest .
```

If you want to analyze the coverage reports, you can use the `--cov` argument to `pytest`. By adding `--cov-report`, you also have some flexibility in terms of the report output format:

```sh
RUN_SLOW_TESTS_TOO=1 pytest --cov=rq --cov-config=.coveragerc --cov-report={{report_format}} --durations=5
```

Replace `report_format` with the desired format (`term` / `html` / `xml`).

### Using Vagrant

If you'd rather use Vagrant, see [these instructions][v].

[v]: {{site.baseurl}}contrib/vagrant/

rq-1.16.2/docs/contrib/vagrant.md

---
title: "Using Vagrant"
layout: contrib
---

If you don't feel like installing dependencies on your main development machine, you can use [Vagrant](https://www.vagrantup.com/). Here's how you run your tests and build the documentation on Vagrant.
### Running tests in Vagrant

To create a working Vagrant environment, use the following:

```
vagrant init ubuntu/trusty64
vagrant up
vagrant ssh -- "sudo apt-get -y install redis-server python-dev python-pip"
vagrant ssh -- "sudo pip install --no-input redis hiredis mock"
vagrant ssh -- "(cd /vagrant; ./run_tests)"
```

### Running docs on Vagrant

```
vagrant init ubuntu/trusty64
vagrant up
vagrant ssh -- "sudo apt-get -y install ruby-dev nodejs"
vagrant ssh -- "sudo gem install jekyll"
vagrant ssh -- "(cd /vagrant; jekyll serve)"
```

You'll also need to add a port forward entry to your `Vagrantfile`:

```
config.vm.network "forwarded_port", guest: 4000, host: 4001
```

Then you can access the docs using:

```
http://127.0.0.1:4001
```

You may also need to forcibly kill Jekyll if you Ctrl+C:

```
vagrant ssh -- "sudo killall -9 jekyll"
```

rq-1.16.2/docs/css/reset.css

/* http://meyerweb.com/eric/tools/css/reset/
   v2.0 | 20110126
   License: none (public domain)
*/
html, body, div, span, applet, object, iframe,
h1, h2, h3, h4, h5, h6, p, blockquote, pre,
a, abbr, acronym, address, big, cite, code,
del, dfn, em, img, ins, kbd, q, s, samp,
small, strike, strong, sub, sup, tt, var,
b, u, i, center,
dl, dt, dd, ol, ul, li,
fieldset, form, label, legend,
table, caption, tbody, tfoot, thead, tr, th, td,
article, aside, canvas, details, embed,
figure, figcaption, footer, header, hgroup,
menu, nav, output, ruby, section, summary,
time, mark, audio, video {
  margin: 0;
  padding: 0;
  border: 0;
  font-size: 100%;
  font: inherit;
  vertical-align: baseline;
}
/* HTML5 display-role reset for older browsers */
article, aside, details, figcaption, figure,
footer, header, hgroup, menu, nav, section {
  display: block;
}
body {
  line-height: 1;
}
ol, ul {
  list-style: none;
}
blockquote, q {
  quotes: none;
}
blockquote:before, blockquote:after,
q:before, q:after {
  content: '';
  content: none;
}
table {
  border-collapse: collapse;
  border-spacing: 0;
}

rq-1.16.2/docs/css/screen.css

@import url("reset.css");

body {
  background: #DBE0DF url(../img/bg.png) 50% 0 repeat-y !important;
  height: 100%;
  font-family: system-ui, -apple-system, sans-serif;
  font-size: 1rem;
  line-height: 1.55;
  padding: 0 30px 80px;
}

header {
  background: url(../img/ribbon.png) no-repeat 50% 0;
  max-width: 630px;
  width: 100%;
  text-align: center;
  padding: 240px 0 1em 0;
  border-bottom: 1px dashed #e1e1e1;
  margin: 0 auto 2em auto;
}

li {
  padding-bottom: 5px;
}

ul.inline {
  list-style-type: none;
  margin: 0;
  padding: 0;
}

ul.inline li {
  display: inline;
  margin: 0 10px;
}

.subnav ul.inline li {
  margin: 0 6px;
}

header a {
  color: #3a3a3a;
  border: 0;
  font-size: 110%;
  font-weight: 600;
  text-decoration: none;
  transition: color linear 0.1s;
  -webkit-transition: color linear 0.1s;
  -moz-transition: color linear 0.1s;
}

header a:hover {
  border-bottom-color: rgba(0, 0, 0, 0.1);
  color: rgba(0, 0, 0, 0.4);
}

.subnav {
  text-align: center;
  font-size: 94%;
  margin: -3em auto 2em auto;
}

.subnav li {
  background-color: white;
  padding: 0 4px;
}

.subnav a {
  text-decoration: none;
  white-space: nowrap;
}

.container {
  margin: 0 auto;
  max-width: 630px;
  width: 100%;
}

footer {
  margin: 2em auto;
  max-width: 430px;
  width: 100%;
  border-top: 1px dashed #e1e1e1;
  padding-top: 1em;
}

footer p {
  text-align: center;
  font-size: 90%;
  font-style: italic;
  margin-bottom: 0;
}

footer a {
  font-weight: 400;
}
pre, pre.highlight {
  margin: 0 0 1em 1em;
  padding: 1em 1.8em;
  color: #222;
  border-bottom: 1px solid #ccc;
  border-right: 1px solid #ccc;
  background: #F3F3F0 url(../img/bq.png) top left no-repeat;
  line-height: 1.15em;
  overflow: auto;
}

code {
  font-family: "Droid Sans Mono", SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace;
  font-weight: 400;
  font-size: 80%;
  line-height: 0.5em;
  border: 1px solid #efeaea;
  padding: 0.2em 0.4em;
}

pre code {
  border: none;
  padding: 0;
}

h1 {
  font-size: 280%;
  font-weight: 400;
}

.ir {
  display: block;
  border: 0;
  text-indent: -999em;
  overflow: hidden;
  background-color: transparent;
  background-repeat: no-repeat;
  text-align: left;
  direction: ltr;
}

.ir br {
  display: none;
}

h1#logo {
  margin: 0 auto;
  width: 305px;
  height: 186px;
  background-image: url(../img/logo2.png);
}

/*
h1:hover:after {
  color: rgba(0, 0, 0, 0.3);
  content: attr(title);
  font-size: 60%;
  font-weight: 300;
  margin: 0 0 0 0.5em;
}
*/

h2 {
  font-size: 200%;
  font-weight: 400;
  margin: 0 0 0.4em;
}

h3 {
  font-size: 135%;
  font-weight: 400;
  margin: 0 0 0.25em;
}

p {
  color: rgba(0, 0, 0, 0.7);
  margin: 0 0 1em;
}

p:last-child {
  margin-bottom: 0;
}

img {
  border-radius: 4px;
  float: left;
  margin: 6px 12px 15px 0;
  -moz-border-radius: 4px;
  -webkit-border-radius: 4px;
}

.nomargin {
  margin: 0;
}

a {
  border-bottom: 1px solid rgba(65, 131, 196, 0.1);
  color: rgb(65, 131, 196);
  font-weight: 600;
  text-decoration: none;
  transition: color linear 0.1s;
  -webkit-transition: color linear 0.1s;
  -moz-transition: color linear 0.1s;
}

a:hover {
  border-bottom-color: rgba(0, 0, 0, 0.1);
  color: rgba(0, 0, 0, 0.4);
}

em {
  font-style: italic;
}

strong {
  font-weight: 600;
}

acronym {
  border-bottom: 1px dotted rgba(0, 0, 0, 0.1);
  cursor: help;
}

blockquote {
  font-style: italic;
  padding: 1em;
}

ul {
  list-style: circle;
  margin: 0 0 1em 2em;
  color: rgba(0, 0, 0, 0.7);
}

li {
  font-size: 100%;
}

ol {
  list-style-type: decimal;
  margin: 0 0 1em 2em;
  color: rgba(0, 0, 0, 0.7);
}

.warning {
  position: relative;
  padding: 7px 15px;
  margin-bottom: 18px;
  color: #404040;
  background-color: #eedc94;
  background-repeat: repeat-x;
  background-image: -khtml-gradient(linear, left top, left bottom, from(#fceec1), to(#eedc94));
  background-image: -moz-linear-gradient(top, #fceec1, #eedc94);
  background-image: -ms-linear-gradient(top, #fceec1, #eedc94);
  background-image: -webkit-gradient(linear, left top, left bottom, color-stop(0%, #fceec1), color-stop(100%, #eedc94));
  background-image: -webkit-linear-gradient(top, #fceec1, #eedc94);
  background-image: -o-linear-gradient(top, #fceec1, #eedc94);
  background-image: linear-gradient(top, #fceec1, #eedc94);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fceec1', endColorstr='#eedc94', GradientType=0);
  text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25);
  border-color: #eedc94 #eedc94 #e4c652;
  border-color: rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.25);
  text-shadow: 0 1px 0 rgba(255, 255, 255, 0.5);
  border-width: 1px;
  border-style: solid;
  -webkit-border-radius: 4px;
  -moz-border-radius: 4px;
  border-radius: 4px;
  -webkit-box-shadow: inset 0 1px 0 rgba(255, 255, 255, 0.25);
  -moz-box-shadow: inset 0 1px 0 rgba(255, 255, 255, 0.25);
  box-shadow: inset 0 1px 0 rgba(255, 255, 255, 0.25);
}

.alert-message .close {
  *margin-top: 3px; /* IE7 spacing */
}

/*
@media screen and (max-width: 1400px) {
  body {
    padding-bottom: 60px;
    padding-top: 60px;
  }
}

@media screen and (max-width: 600px) {
  body {
    padding-bottom: 40px;
    padding-top: 30px;
  }
}
*/

rq-1.16.2/docs/css/syntax.css

.highlight { background: #ffffff; }
.highlight .c { color: #999988; } /* Comment */
.highlight .err { color: #a61717; background-color: #e3d2d2 } /* Error */
.highlight .k { font-weight: bold; color: #555555; } /* Keyword */
.highlight .kn { font-weight: bold; color: #555555; } /* Keyword */
.highlight .o { font-weight: bold; color: #555555; } /* Operator */
.highlight .cm { color: #999988; } /* Comment.Multiline */
.highlight .cp { color: #999999; font-weight: bold } /* Comment.Preproc */
.highlight .c1 { color: #999988; } /* Comment.Single */
.highlight .cs { color: #999999; font-weight: bold; } /* Comment.Special */
.highlight .gd { color: #000000; background-color: #ffdddd } /* Generic.Deleted */
.highlight .gd .x { color: #000000; background-color: #ffaaaa } /* Generic.Deleted.Specific */
.highlight .ge {} /* Generic.Emph */
.highlight .gr { color: #aa0000 } /* Generic.Error */
.highlight .gh { color: #999999 } /* Generic.Heading */
.highlight .gi { color: #000000; background-color: #ddffdd } /* Generic.Inserted */
.highlight .gi .x { color: #000000; background-color: #aaffaa } /* Generic.Inserted.Specific */
.highlight .go { color: #888888 } /* Generic.Output */
.highlight .gp { color: #555555 } /* Generic.Prompt */
.highlight .gs { font-weight: bold } /* Generic.Strong */
.highlight .gu { color: #aaaaaa } /* Generic.Subheading */
.highlight .gt { color: #aa0000 } /* Generic.Traceback */
.highlight .kc { font-weight: bold } /* Keyword.Constant */
.highlight .kd { font-weight: bold } /* Keyword.Declaration */
.highlight .kp { font-weight: bold } /* Keyword.Pseudo */
.highlight .kr { font-weight: bold } /* Keyword.Reserved */
.highlight .kt { color: #445588; font-weight: bold } /* Keyword.Type */
.highlight .m { color: #009999 } /* Literal.Number */
.highlight .s { color: #d14 } /* Literal.String */
.highlight .na { color: #008080 } /* Name.Attribute */
.highlight .nb { color: #0086B3 } /* Name.Builtin */
.highlight .nc { color: #445588; font-weight: bold } /* Name.Class */
.highlight .no { color: #008080 } /* Name.Constant */
.highlight .ni { color: #800080 } /* Name.Entity */
.highlight .ne { color: #aa0000; font-weight: bold } /* Name.Exception */
.highlight .nf { color: #aa0000; font-weight: bold } /* Name.Function */
.highlight .nn { color: #555555 } /* Name.Namespace */
.highlight .nt { color: #000080 } /* Name.Tag */
.highlight .nv { color: #008080 } /* Name.Variable */
.highlight .ow { font-weight: bold } /* Operator.Word */
.highlight .w { color: #bbbbbb } /* Text.Whitespace */
.highlight .mf { color: #009999 } /* Literal.Number.Float */
.highlight .mh { color: #009999 } /* Literal.Number.Hex */
.highlight .mi { color: #009999 } /* Literal.Number.Integer */
.highlight .mo { color: #009999 } /* Literal.Number.Oct */
.highlight .sb { color: #d14 } /* Literal.String.Backtick */
.highlight .sc { color: #d14 } /* Literal.String.Char */
.highlight .sd { color: #d14 } /* Literal.String.Doc */
.highlight .s2 { color: #d14 } /* Literal.String.Double */
.highlight .se { color: #d14 } /* Literal.String.Escape */
.highlight .sh { color: #d14 } /* Literal.String.Heredoc */
.highlight .si { color: #d14 } /* Literal.String.Interpol */
.highlight .sx { color: #d14 } /* Literal.String.Other */
.highlight .sr { color: #009926 } /* Literal.String.Regex */
.highlight .s1 { color: #d14 } /* Literal.String.Single */
.highlight .ss { color: #990073 } /* Literal.String.Symbol */
.highlight .bp { color: #999999 } /* Name.Builtin.Pseudo */
.highlight .vc { color: #008080 } /* Name.Variable.Class */
.highlight .vg { color: #008080 } /* Name.Variable.Global */
.highlight .vi { color: #008080 } /* Name.Variable.Instance */
.highlight .il { color: #009999 } /* Literal.Number.Integer.Long */

rq-1.16.2/docs/design/favicon.psd  [binary Photoshop file omitted]
zpg_XQKFAǿ=ȼ:ɹ8ʷ6˶5̵5͵6ζ7ϸ9к<Ѿ?DINU\dlvۀ܊ݖޢ)߯6DScs 2F[p(@Xr4Pm8Ww)Km8BIM8BIM!UAdobe PhotoshopAdobe Photoshop CS58BIM".MM*bj(1r2i ' 'Adobe Photoshop CS5 Macintosh2011:11:25 01:00:18&(.HH8BIMmopt4TargetSettingsMttCObjc NativeQuadBl longGrn longRd longTrnsbool fileFormatenum FileFormatPNG24 interlacedbool noMatteColorbooltransparencyDitherAlgorithmenumDitherAlgorithmNonetransparencyDitherAmountlong8BIMmsetnullHTMLBackgroundSettingsObjcnullBackgroundColorBluelongBackgroundColorGreenlongBackgroundColorRedlongBackgroundColorStatelongBackgroundImagePathTEXTUseImageAsBackgroundbool HTMLSettingsObjcnullAlwaysAddAltAttributebool AttributeCaselong CloseAllTagsboolEncodinglongFileSavingSettingsObjcnull CopyBackgroundboolDuplicateFileNameBehaviorlongHtmlFileNameComponentsVlLslonglonglonglonglonglongImageSubfolderNameTEXTimagesNameCompatibilityObjcnull NameCompatMacboolNameCompatUNIXboolNameCompatWindowsboolOutputMultipleFilesboolSavingFileNameComponentsVlLs longlonglonglonglonglonglonglonglongSliceFileNameComponentsVlLslonglonglonglonglonglongUseImageSubfolderboolUseLongExtensionsboolGoLiveCompatibleboolImageMapLocationlong ImageMapTypelongIncludeCommentsboolIncludeZeroMarginsboolIndentlong LineEndingslong OutputXHTMLboolQuoteAllAttributesboolSpacersEmptyCellslongSpacersHorizontallongSpacersVerticallong StylesFormatlong TDWidthHeightlongTagCaselongUseCSSboolUseLongHTMLExtensionboolMetadataOutputSettingsObjcnull AddCustomIRboolAddEXIFboolAddXMPboolAddXMPSourceFileURIbool ColorPolicylongMetadataPolicylongWriteMinimalXMPboolWriteXMPToSidecarFilesboolVersionlong8BIMms4w8BIMLr164I] 38BIMnorm(ribbon8BIMluniribbon8BIMlnsrrend8BIMlyid8BIMclbl8BIMinfx8BIMknko8BIMlspf8BIMlclr8BIMshmdH8BIMcust4metadata layerTimedoubAӳg8BIMPlLdplcL$5e69392e-5786-1174-9ab5-d8b86ec60ab5C,zC,z@0e+ԿC,z@0e+@0C,z@0warp warpStyleenum warpStylewarpNone warpValuedoubwarpPerspectivedoubwarpPerspectiveOtherdoub warpRotateenumOrntHrznboundsObjcRctnTop UntF#PxlLeftUntF#PxlBtomUntF#Pxl@RghtUntF#Pxl@uOrderlongvOrderlong8BIMSoLdsoLDnullIdntTEXT%5e69392e-5786-1174-9ab5-d8b86ec60ab5placedTEXT%80f2df20-5786-1174-9ab5-d8b86ec60ab5PgNmlong totalPageslong frameStepObjcnull numeratorlong denominatorlongXdurationObjcnull numeratorlong denominatorlongX frameCountlongAnntlongTypelongTrnfVlLsdoubC,zdoubC,zdoub@0e+doubC,zdoub@0e+doub@0doubC,zdoub@0nonAffineTransformVlLsdoubC,zdoubC,zdoub@0e+doubC,zdoub@0e+doub@0doubC,zdoub@0warpObjcwarp warpStyleenum warpStylewarpNone warpValuedoubwarpPerspectivedoubwarpPerspectiveOtherdoub warpRotateenumOrntHrznboundsObjcRctnTop UntF#PxlLeftUntF#PxlBtomUntF#Pxl@RghtUntF#Pxl@uOrderlongvOrderlongSz ObjcPnt Wdthdoub@Hghtdoub@RsltUntF#Rsl@R8BIMfxrp0X"*f 8BIMnorm%(RQ8BIMTySh$d??@@"2TxLrTxt TEXTRQ textGriddingenum textGriddingRnd OrntenumOrntHrznAntAenumAnntAnSt TextIndexlong EngineDatatdta" << /EngineDict << /Editor << /Text (RQ ) >> /ParagraphRun << /DefaultRunData << /ParagraphSheet << /DefaultStyleSheet 0 /Properties << >> >> /Adjustments << /Axis [ 1.0 0.0 1.0 ] /XY [ 0.0 0.0 ] >> >> /RunArray [ << /ParagraphSheet << /DefaultStyleSheet 0 /Properties << /Justification 2 /FirstLineIndent 0.0 /StartIndent 0.0 /EndIndent 0.0 /SpaceBefore 0.0 /SpaceAfter 0.0 /AutoHyphenate true /HyphenatedWordSize 6 /PreHyphen 2 /PostHyphen 2 /ConsecutiveHyphens 8 /Zone 36.0 /WordSpacing [ .8 1.0 1.33 ] /LetterSpacing [ 0.0 0.0 0.0 ] /GlyphSpacing [ 1.0 1.0 1.0 ] /AutoLeading 1.2 /LeadingType 0 /Hanging false /Burasagari false /KinsokuOrder 0 /EveryLineComposer false >> >> /Adjustments << /Axis [ 1.0 0.0 1.0 ] /XY [ 0.0 0.0 ] >> >> ] 
/RunLengthArray [ 3 ] /IsJoinable 1 >> /StyleRun << /DefaultRunData << /StyleSheet << /StyleSheetData << >> >> >> /RunArray [ << /StyleSheet << /StyleSheetData << /Font 0 /FontSize 7.0 /FauxBold true /FauxItalic false /AutoLeading false /Leading 53.98064 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /AutoKerning true /Kerning 0 /BaselineShift 0.0 /FontCaps 0 /FontBaseline 0 /Underline false /Strikethrough false /Ligatures true /DLigatures false /BaselineDirection 1 /Tsume 0.0 /StyleRunAlignment 2 /Language 14 /NoBreak false /FillColor << /Type 1 /Values [ 1.0 1.0 1.0 1.0 ] >> /StrokeColor << /Type 1 /Values [ 1.0 1.0 1.0 1.0 ] >> /FillFlag true /StrokeFlag false /FillFirst false /YUnderline 1 /OutlineWidth .39991 /CharacterDirection 0 /HindiNumbers false /Kashida 1 /DiacriticPos 2 >> >> >> ] /RunLengthArray [ 3 ] /IsJoinable 2 >> /GridInfo << /GridIsOn false /ShowGrid false /GridSize 18.0 /GridLeading 22.0 /GridColor << /Type 1 /Values [ 0.0 0.0 0.0 1.0 ] >> /GridLeadingFillColor << /Type 1 /Values [ 0.0 0.0 0.0 1.0 ] >> /AlignLineHeightToGridFlags false >> /AntiAlias 2 /UseFractionalGlyphWidths false /Rendered << /Version 1 /Shapes << /WritingDirection 0 /Children [ << /ShapeType 0 /Procession 0 /Lines << /WritingDirection 0 /Children [ ] >> /Cookie << /Photoshop << /ShapeType 0 /PointBase [ 0.0 0.0 ] /Base << /ShapeType 0 /TransformPoint0 [ 1.0 0.0 ] /TransformPoint1 [ 0.0 1.0 ] /TransformPoint2 [ 0.0 0.0 ] >> >> >> >> ] >> >> >> /ResourceDict << /KinsokuSet [ << /Name (PhotoshopKinsokuHard) /NoStart (00 00    0=]0 0 0 00000000A0C0E0G0I0c000000000000000000?!\)]},.:;!!  0) /NoEnd (  0;[00 0 00\([{ 0) /Keep (  %) /Hanging (00.,) >> << /Name (PhotoshopKinsokuSoft) /NoStart (00 0   0=]0 0 0 0000000) /NoEnd (  0;[00 0 00) /Keep (  %) /Hanging (00.,) >> ] /MojiKumiSet [ << /InternalName (Photoshop6MojiKumiSet1) >> << /InternalName (Photoshop6MojiKumiSet2) >> << /InternalName (Photoshop6MojiKumiSet3) >> << /InternalName (Photoshop6MojiKumiSet4) >> ] /TheNormalStyleSheet 0 /TheNormalParagraphSheet 0 /ParagraphSheetSet [ << /Name (Normal Grayscale) /DefaultStyleSheet 0 /Properties << /Justification 0 /FirstLineIndent 0.0 /StartIndent 0.0 /EndIndent 0.0 /SpaceBefore 0.0 /SpaceAfter 0.0 /AutoHyphenate true /HyphenatedWordSize 6 /PreHyphen 2 /PostHyphen 2 /ConsecutiveHyphens 8 /Zone 36.0 /WordSpacing [ .8 1.0 1.33 ] /LetterSpacing [ 0.0 0.0 0.0 ] /GlyphSpacing [ 1.0 1.0 1.0 ] /AutoLeading 1.2 /LeadingType 0 /Hanging false /Burasagari false /KinsokuOrder 0 /EveryLineComposer false >> >> ] /StyleSheetSet [ << /Name (Normal Grayscale) /StyleSheetData << /Font 1 /FontSize 12.0 /FauxBold false /FauxItalic false /AutoLeading true /Leading 0.0 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /AutoKerning true /Kerning 0 /BaselineShift 0.0 /FontCaps 0 /FontBaseline 0 /Underline false /Strikethrough false /Ligatures true /DLigatures false /BaselineDirection 2 /Tsume 0.0 /StyleRunAlignment 2 /Language 0 /NoBreak false /FillColor << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> /StrokeColor << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> /FillFlag true /StrokeFlag false /FillFirst true /YUnderline 1 /OutlineWidth 1.0 /CharacterDirection 0 /HindiNumbers false /Kashida 1 /DiacriticPos 2 >> >> ] /FontSet [ << /Name (Lato-Italic) /Script 0 /FontType 1 /Synthetic 0 >> << /Name (MyriadPro-Regular) /Script 0 /FontType 0 /Synthetic 0 >> << /Name (AdobeInvisFont) /Script 0 /FontType 0 /Synthetic 0 >> ] /SuperscriptSize .583 /SuperscriptPosition .333 /SubscriptSize .583 /SubscriptPosition .333 /SmallCapSize .7 >> 
/DocumentResources << /KinsokuSet [ << /Name (PhotoshopKinsokuHard) /NoStart (00 00    0=]0 0 0 00000000A0C0E0G0I0c000000000000000000?!\)]},.:;!!  0) /NoEnd (  0;[00 0 00\([{ 0) /Keep (  %) /Hanging (00.,) >> << /Name (PhotoshopKinsokuSoft) /NoStart (00 0   0=]0 0 0 0000000) /NoEnd (  0;[00 0 00) /Keep (  %) /Hanging (00.,) >> ] /MojiKumiSet [ << /InternalName (Photoshop6MojiKumiSet1) >> << /InternalName (Photoshop6MojiKumiSet2) >> << /InternalName (Photoshop6MojiKumiSet3) >> << /InternalName (Photoshop6MojiKumiSet4) >> ] /TheNormalStyleSheet 0 /TheNormalParagraphSheet 0 /ParagraphSheetSet [ << /Name (Normal Grayscale) /DefaultStyleSheet 0 /Properties << /Justification 0 /FirstLineIndent 0.0 /StartIndent 0.0 /EndIndent 0.0 /SpaceBefore 0.0 /SpaceAfter 0.0 /AutoHyphenate true /HyphenatedWordSize 6 /PreHyphen 2 /PostHyphen 2 /ConsecutiveHyphens 8 /Zone 36.0 /WordSpacing [ .8 1.0 1.33 ] /LetterSpacing [ 0.0 0.0 0.0 ] /GlyphSpacing [ 1.0 1.0 1.0 ] /AutoLeading 1.2 /LeadingType 0 /Hanging false /Burasagari false /KinsokuOrder 0 /EveryLineComposer false >> >> ] /StyleSheetSet [ << /Name (Normal Grayscale) /StyleSheetData << /Font 1 /FontSize 12.0 /FauxBold false /FauxItalic false /AutoLeading true /Leading 0.0 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /AutoKerning true /Kerning 0 /BaselineShift 0.0 /FontCaps 0 /FontBaseline 0 /Underline false /Strikethrough false /Ligatures true /DLigatures false /BaselineDirection 2 /Tsume 0.0 /StyleRunAlignment 2 /Language 0 /NoBreak false /FillColor << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> /StrokeColor << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> /FillFlag true /StrokeFlag false /FillFirst true /YUnderline 1 /OutlineWidth 1.0 /CharacterDirection 0 /HindiNumbers false /Kashida 1 /DiacriticPos 2 >> >> ] /FontSet [ << /Name (Lato-Italic) /Script 0 /FontType 1 /Synthetic 0 >> << /Name (MyriadPro-Regular) /Script 0 /FontType 0 /Synthetic 0 >> << /Name (AdobeInvisFont) /Script 0 /FontType 0 /Synthetic 0 >> ] /SuperscriptSize .583 /SuperscriptPosition .333 /SubscriptSize .583 /SubscriptPosition .333 /SmallCapSize .7 >> >>warp warpStyleenum warpStylewarpNone warpValuedoubwarpPerspectivedoubwarpPerspectiveOtherdoub warpRotateenumOrntHrzn8BIMluniRQ8BIMlnsrrend8BIMlyid8BIMclbl8BIMinfx8BIMknko8BIMlspf8BIMlclr8BIMshmdH8BIMcust4metadata layerTimedoubAӳTu8BIMfxrp@@H2]#;륥̜̀L0?6' 10u*lǧrm <˶UTǗ-.3oGLYZ8NkULeOT[j-Bmի]Ko+8jWu8$oD첝eC#f9>gE@0$nz*߄oؘ656VqJiü=ŀLIھ[̯K_nbb@ zHX>/Z~,%&kow0m`7=-Zc #O_ӇCO=DXi>WL7Xk&0SHb# *盟iXB* Sؘgc,_~./)w5հvWy7 Xdh=y۾)Xdg}dz ٦_a0Mc9٬^0 lwK"W kXdI[q&0) 0w0'`,p1ĠydH lH,tWƀ ,5}- ۿ*T<{U㕀߿. Gӿ9Xds>*}o Xd>^^ƻ ,N`{ij,r8=/,L_v,a]U_S#X8S`į,MÐ a]Cl[NesˋbL  | ]eӺx$K/߷`pab7h;a@zHbw1{dc`KNGvLd^6f)o92nsʡ/dnraS)$cs+3S7g܊|&vFL)~$!2u LTMugFz­e`a԰4>Mg{G*sD!?rVey}>y|wg^/R?^`͂HTa 3 %\h`k0?~=gx y?4s!i#e[D}b? 
y&;X6 ;\ZM/}@6kRqa)K+!"WX8EH 1Uo H 1Uo H 1Uo 8BIMLMsk28BIMPat28BIMTxt2Y /DocumentResources << /FontSet << /Resources [ << /Resource << /StreamTag /CoolTypeFont /Identifier << /Name (Lato-Italic) /Type 1 /Synthetic 2 >> >> >> << /Resource << /StreamTag /CoolTypeFont /Identifier << /Name (Lato-Italic) /Type 1 >> >> >> << /Resource << /StreamTag /CoolTypeFont /Identifier << /Name (MyriadPro-Regular) /Type 0 >> >> >> << /Resource << /StreamTag /CoolTypeFont /Identifier << /Name (AdobeInvisFont) /Type 0 >> >> >> << /Resource << /StreamTag /CoolTypeFont /Identifier << /Name (TimesNewRomanPSMT) /Type 1 >> >> >> ] >> /MojiKumiCodeToClassSet << /Resources [ << /Resource << /Name () >> >> ] /DisplayList [ << /Resource 0 >> ] >> /MojiKumiTableSet << /Resources [ << /Resource << /Name (Photoshop6MojiKumiSet4) /Members << /CodeToClass 0 /PredefinedTag 2 >> >> >> << /Resource << /Name (Photoshop6MojiKumiSet3) /Members << /CodeToClass 0 /PredefinedTag 4 >> >> >> << /Resource << /Name (Photoshop6MojiKumiSet2) /Members << /CodeToClass 0 /PredefinedTag 3 >> >> >> << /Resource << /Name (Photoshop6MojiKumiSet1) /Members << /CodeToClass 0 /PredefinedTag 1 >> >> >> << /Resource << /Name (YakumonoHankaku) /Members << /CodeToClass 0 /PredefinedTag 1 >> >> >> << /Resource << /Name (GyomatsuYakumonoHankaku) /Members << /CodeToClass 0 /PredefinedTag 3 >> >> >> << /Resource << /Name (GyomatsuYakumonoZenkaku) /Members << /CodeToClass 0 /PredefinedTag 4 >> >> >> << /Resource << /Name (YakumonoZenkaku) /Members << /CodeToClass 0 /PredefinedTag 2 >> >> >> ] /DisplayList [ << /Resource 0 >> << /Resource 1 >> << /Resource 2 >> << /Resource 3 >> << /Resource 4 >> << /Resource 5 >> << /Resource 6 >> << /Resource 7 >> ] >> /KinsokuSet << /Resources [ << /Resource << /Name (None) /Data << /NoStart () /NoEnd () /Keep () /Hanging () /PredefinedTag 0 >> >> >> << /Resource << /Name (PhotoshopKinsokuHard) /Data << /NoStart (!\),.:;?]}    0!! 0000 0 0 0000A0C0E0G0I0c000000000000000000000000 =]) /NoEnd (\([{  00 0 0000 ;[) /Keep (  % &) /Hanging (00 ) /PredefinedTag 1 >> >> >> << /Resource << /Name (PhotoshopKinsokuSoft) /Data << /NoStart (  0000 0 0 00000000 =]) /NoEnd (  00 0 000;[) /Keep (  % &) /Hanging (00 ) /PredefinedTag 2 >> >> >> << /Resource << /Name (Hard) /Data << /NoStart (!\),.:;?]}    0!! 
0000 0 0 0000A0C0E0G0I0c000000000000000000000000 =]) /NoEnd (\([{  00 0 0000 ;[) /Keep (  % &) /Hanging (00 ) /PredefinedTag 1 >> >> >> << /Resource << /Name (Soft) /Data << /NoStart (  0000 0 0 00000000 =]) /NoEnd (  00 0 000;[) /Keep (  % &) /Hanging (00 ) /PredefinedTag 2 >> >> >> ] /DisplayList [ << /Resource 0 >> << /Resource 1 >> << /Resource 2 >> << /Resource 3 >> << /Resource 4 >> ] >> /StyleSheetSet << /Resources [ << /Resource << /Name (Normal Grayscale) /Features << /Font 2 /FontSize 12.0 /FauxBold false /FauxItalic false /AutoLeading true /Leading 0.0 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /BaselineShift 0.0 /CharacterRotation 0.0 /AutoKern 1 /FontCaps 0 /FontBaseline 0 /FontOTPosition 0 /StrikethroughPosition 0 /UnderlinePosition 0 /UnderlineOffset 0.0 /Ligatures true /DiscretionaryLigatures false /ContextualLigatures false /AlternateLigatures false /OldStyle false /Fractions false /Ordinals false /Swash false /Titling false /ConnectionForms false /StylisticAlternates false /Ornaments false /FigureStyle 0 /ProportionalMetrics false /Kana false /Italics false /Ruby false /BaselineDirection 2 /Tsume 0.0 /StyleRunAlignment 2 /Language 0 /JapaneseAlternateFeature 0 /EnableWariChu false /WariChuLineCount 2 /WariChuLineGap 0 /WariChuSubLineAmount << /WariChuSubLineScale .5 >> /WariChuWidowAmount 2 /WariChuOrphanAmount 2 /WariChuJustification 7 /TCYUpDownAdjustment 0 /TCYLeftRightAdjustment 0 /LeftAki -1.0 /RightAki -1.0 /JiDori 0 /NoBreak false /FillColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> >> /StrokeColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> >> /Blend << /StreamTag /SimpleBlender >> /FillFlag true /StrokeFlag false /FillFirst true /FillOverPrint false /StrokeOverPrint false /LineCap 0 /LineJoin 0 /LineWidth 1.0 /MiterLimit 4.0 /LineDashOffset 0.0 /LineDashArray [ ] /Type1EncodingNames [ ] /Kashidas 0 /DirOverride 0 /DigitSet 0 /DiacVPos 4 /DiacXOffset 0.0 /DiacYOffset 0.0 /OverlapSwash false /JustificationAlternates false /StretchedAlternates false /FillVisibleFlag true /StrokeVisibleFlag true /FillBackgroundColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 0.0 ] >> >> /FillBackgroundFlag false /UnderlineStyle 0 /DashedUnderlineGapLength 3.0 /DashedUnderlineDashLength 3.0 /SlashedZero false /StylisticSets 0 /CustomFeature << /StreamTag /SimpleCustomFeature >> >> >> >> ] /DisplayList [ << /Resource 0 >> ] >> /ParagraphSheetSet << /Resources [ << /Resource << /Name (Normal Grayscale) /Features << /Justification 0 /FirstLineIndent 0.0 /StartIndent 0.0 /EndIndent 0.0 /SpaceBefore 0.0 /SpaceAfter 0.0 /DropCaps 1 /AutoLeading 1.2 /LeadingType 0 /AutoHyphenate true /HyphenatedWordSize 6 /PreHyphen 2 /PostHyphen 2 /ConsecutiveHyphens 0 /Zone 36.0 /HyphenateCapitalized true /HyphenationPreference .5 /WordSpacing [ .8 1.0 1.33 ] /LetterSpacing [ 0.0 0.0 0.0 ] /GlyphSpacing [ 1.0 1.0 1.0 ] /SingleWordJustification 6 /Hanging false /AutoTCY 0 /KeepTogether true /BurasagariType 0 /KinsokuOrder 0 /Kinsoku /nil /KurikaeshiMojiShori false /MojiKumiTable /nil /EveryLineComposer false /TabStops << >> /DefaultTabWidth 36.0 /DefaultStyle << >> /ParagraphDirection 0 /JustificationMethod 0 /ComposerEngine 0 /ListStyle /nil /ListTier 0 /ListSkip false /ListOffset 0 >> >> >> ] /DisplayList [ << /Resource 0 >> ] >> /TextFrameSet << /Resources [ << /Resource << /Bezier << /Points [ 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ] >> /Data << /Type 0 /LineOrientation 0 /TextOnPathTRange [ -1.0 -1.0 ] 
/RowGutter 0.0 /ColumnGutter 0.0 /FirstBaselineAlignment << /Flag 1 /Min 0.0 >> /PathData << /Spacing -1 >> >> >> >> ] >> /ListStyleSet << /Resources [ << /Resource << /Name (kPredefinedNumericListStyleTag) /PredefinedTag 1 >> >> << /Resource << /Name (kPredefinedUppercaseAlphaListStyleTag) /PredefinedTag 2 >> >> << /Resource << /Name (kPredefinedLowercaseAlphaListStyleTag) /PredefinedTag 3 >> >> << /Resource << /Name (kPredefinedUppercaseRomanNumListStyleTag) /PredefinedTag 4 >> >> << /Resource << /Name (kPredefinedLowercaseRomanNumListStyleTag) /PredefinedTag 5 >> >> << /Resource << /Name (kPredefinedBulletListStyleTag) /PredefinedTag 6 >> >> ] /DisplayList [ << /Resource 0 >> << /Resource 1 >> << /Resource 2 >> << /Resource 3 >> << /Resource 4 >> << /Resource 5 >> ] >> >> /DocumentObjects << /DocumentSettings << /HiddenGlyphFont << /AlternateGlyphFont 3 /WhitespaceCharacterMapping [ << /WhitespaceCharacter ( ) /AlternateCharacter (1) >> << /WhitespaceCharacter ( ) /AlternateCharacter (6) >> << /WhitespaceCharacter ( ) /AlternateCharacter (0) >> << /WhitespaceCharacter ( \)) /AlternateCharacter (5) >> << /WhitespaceCharacter () /AlternateCharacter (5) >> << /WhitespaceCharacter (0) /AlternateCharacter (1) >> << /WhitespaceCharacter () /AlternateCharacter (3) >> ] >> /NormalStyleSheet 0 /NormalParagraphSheet 0 /SuperscriptSize .583 /SuperscriptPosition .333 /SubscriptSize .583 /SubscriptPosition .333 /SmallCapSize .7 /UseSmartQuotes true /SmartQuoteSets [ << /Language 0 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 1 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 2 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 3 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 4 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 5 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 6 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( 9) /CloseSingleQuote ( :) >> << /Language 7 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 8 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 9 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 10 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 11 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 12 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 13 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 14 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 15 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 16 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 17 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 18 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 19 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( 
) >> << /Language 20 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 21 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 22 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 23 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 24 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 25 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( 9) /CloseSingleQuote ( :) >> << /Language 26 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 27 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 28 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 29 /OpenDoubleQuote (0) /CloseDoubleQuote (0) >> << /Language 30 /OpenDoubleQuote (0 ) /CloseDoubleQuote (0 ) >> << /Language 31 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 32 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 33 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 34 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 35 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 36 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 37 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 38 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 39 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote (<) /CloseSingleQuote (>) >> << /Language 40 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 41 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote (<) /CloseSingleQuote (>) >> << /Language 42 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 43 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 44 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( 9) /CloseSingleQuote ( :) >> << /Language 45 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> ] >> /TextObjects [ << /Model << /Text (RQ ) /ParagraphRun << /RunArray [ << /RunData << /ParagraphSheet << /Name () /Features << /Justification 2 /FirstLineIndent 0.0 /StartIndent 0.0 /EndIndent 0.0 /SpaceBefore 0.0 /SpaceAfter 0.0 /DropCaps 1 /AutoLeading 1.2 /LeadingType 0 /AutoHyphenate true /HyphenatedWordSize 6 /PreHyphen 2 /PostHyphen 2 /ConsecutiveHyphens 0 /Zone 36.0 /HyphenateCapitalized true /HyphenationPreference .5 /WordSpacing [ .8 1.0 1.33 ] /LetterSpacing [ 0.0 0.0 0.0 ] /GlyphSpacing [ 1.0 1.0 1.0 ] /SingleWordJustification 6 /Hanging false /AutoTCY 1 /KeepTogether true /BurasagariType 0 /KinsokuOrder 0 /Kinsoku /nil /KurikaeshiMojiShori false /MojiKumiTable /nil /EveryLineComposer false /TabStops << >> /DefaultTabWidth 36.0 /DefaultStyle << /Font 2 /FontSize 12.0 /FauxBold false /FauxItalic false /AutoLeading true /Leading 0.0 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 
/BaselineShift 0.0 /CharacterRotation 0.0 /AutoKern 1 /FontCaps 0 /FontBaseline 0 /FontOTPosition 0 /StrikethroughPosition 0 /UnderlinePosition 0 /UnderlineOffset 0.0 /Ligatures true /DiscretionaryLigatures false /ContextualLigatures false /AlternateLigatures false /OldStyle false /Fractions false /Ordinals false /Swash false /Titling false /ConnectionForms false /StylisticAlternates false /Ornaments false /FigureStyle 0 /ProportionalMetrics false /Kana false /Italics false /Ruby false /BaselineDirection 2 /Tsume 0.0 /StyleRunAlignment 2 /Language 0 /EnableWariChu false /WariChuLineCount 2 /WariChuLineGap 0 /WariChuSubLineAmount << /WariChuSubLineScale .5 >> /WariChuWidowAmount 2 /WariChuOrphanAmount 2 /WariChuJustification 7 /TCYUpDownAdjustment 0 /TCYLeftRightAdjustment 0 /LeftAki -1.0 /RightAki -1.0 /JiDori 0 /NoBreak false /FillColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 1.0 ] >> >> /StrokeColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 1.0 ] >> >> /FillFlag true /StrokeFlag false /FillFirst true /FillOverPrint false /StrokeOverPrint false /LineCap 0 /LineJoin 0 /LineWidth 1.0 /MiterLimit 4.0 /LineDashOffset 0.0 /LineDashArray [ ] /Type1EncodingNames [ ] /Kashidas 0 /DirOverride 0 /DigitSet 0 /DiacVPos 4 /DiacXOffset 0.0 /DiacYOffset 0.0 /OverlapSwash false /JustificationAlternates false /StretchedAlternates false /FillVisibleFlag true /StrokeVisibleFlag true /FillBackgroundColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 0.0 ] >> >> /FillBackgroundFlag false /UnderlineStyle 0 /DashedUnderlineGapLength 3.0 /DashedUnderlineDashLength 3.0 >> /ParagraphDirection 0 /JustificationMethod 0 /ComposerEngine 0 /ListStyle /nil /ListTier 0 /ListSkip false /ListOffset 0 >> /Parent 0 >> >> /Length 3 >> ] >> /StyleRun << /RunArray [ << /RunData << /StyleSheet << /Name () /Parent 0 /Features << /Font 1 /FontSize 7.0 /FauxBold true /FauxItalic false /AutoLeading false /Leading 53.98064 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /BaselineShift 0.0 /CharacterRotation 0.0 /AutoKern 0 /FontCaps 0 /FontBaseline 0 /FontOTPosition 0 /StrikethroughPosition 0 /UnderlinePosition 0 /UnderlineOffset 0.0 /Ligatures true /DiscretionaryLigatures false /ContextualLigatures true /AlternateLigatures false /OldStyle false /Fractions false /Ordinals false /Swash false /Titling false /ConnectionForms true /StylisticAlternates false /Ornaments false /FigureStyle 0 /ProportionalMetrics false /Kana false /Italics false /Ruby false /BaselineDirection 1 /Tsume 0.0 /StyleRunAlignment 2 /Language 14 /JapaneseAlternateFeature 0 /EnableWariChu false /WariChuLineCount 2 /WariChuLineGap 0 /WariChuSubLineAmount << /WariChuSubLineScale .5 >> /WariChuWidowAmount 2 /WariChuOrphanAmount 2 /WariChuJustification 7 /TCYUpDownAdjustment 0 /TCYLeftRightAdjustment 0 /LeftAki -1.0 /RightAki -1.0 /JiDori 0 /NoBreak false /FillColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 1.0 ] >> >> /StrokeColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 1.0 ] >> >> /Blend << /StreamTag /SimpleBlender >> /FillFlag true /StrokeFlag false /FillFirst false /FillOverPrint false /StrokeOverPrint false /LineCap 0 /LineJoin 0 /LineWidth .39991 /MiterLimit 1.59964 /LineDashOffset 0.0 /LineDashArray [ ] /Type1EncodingNames [ ] /Kashidas 0 /DirOverride 0 /DigitSet 0 /DiacVPos 4 /DiacXOffset 0.0 /DiacYOffset 0.0 /OverlapSwash false /JustificationAlternates false /StretchedAlternates false /FillVisibleFlag true /StrokeVisibleFlag true 
/FillBackgroundColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 0.0 ] >> >> /FillBackgroundFlag false /UnderlineStyle 0 /DashedUnderlineGapLength 3.0 /DashedUnderlineDashLength 3.0 /SlashedZero false /StylisticSets 0 /CustomFeature << /StreamTag /SimpleCustomFeature >> >> >> >> /Length 3 >> ] >> /KernRun << /RunArray [ << /RunData << >> /Length 3 >> ] >> /AlternateGlyphRun << /RunArray [ << /RunData << >> /Length 3 >> ] >> /StorySheet << /AntiAlias 2 >> >> /View << /Frames [ << /Resource 0 >> ] /RenderedData << /RunArray [ << /RunData << /LineCount 1 >> /Length 3 >> ] >> /Strikes [ << /StreamTag /PathSelectGroupCharacter /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 0.0 0.0 0.0 ] /ChildProcession 0 /Children [ << /StreamTag /FrameStrike /Frame 0 /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 0.0 0.0 0.0 ] /ChildProcession 2 /Children [ << /StreamTag /RowColStrike /RowColIndex 0 /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 0.0 0.0 0.0 ] /ChildProcession 1 /Children [ << /StreamTag /RowColStrike /RowColIndex 0 /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 0.0 0.0 0.0 ] /ChildProcession 2 /Children [ << /StreamTag /LineStrike /Baseline 0.0 /Leading 53.98064 /EMHeight 7.0 /DHeight 5.25342 /SelectionAscent -6.00772 /SelectionDescent 1.75 /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 -6.00772 0.0 1.75 ] /ChildProcession 1 /Children [ << /StreamTag /Segment /Mapping << /CharacterCount 3 /GlyphCount 0 /WRValid false >> /FirstCharacterIndexInSegment 0 /Transform << /Origin [ -4.5 0.0 ] >> /Bounds [ 0.0 0.0 0.0 0.0 ] /ChildProcession 1 /Children [ << /StreamTag /GlyphStrike /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 -6.00772 9.0 1.75 ] /Glyphs [ 53 52 3 ] /GlyphAdjustments << /Data [ << >> ] /RunLengths [ 3 ] >> /VisualBounds [ -4.5 -6.00772 4.66594 1.75 ] /RenderedBounds [ -4.5 -6.00772 4.66594 1.75 ] /Invalidation [ -4.5 -6.00772 7.85997 1.75 ] /ShadowStylesRun << /Data [ << /Index 0 /Font 0 /Scale [ 1.0 1.0 ] /Orientation 0 /BaselineDirection 2 /BaselineShift 0.0 /KernType 0 /EmbeddingLevel 0 /ComplementaryFontIndex 0 >> ] /RunLengths [ 3 ] >> /EndsInCR true /SelectionAscent -6.00772 /SelectionDescent 1.75 /MainDir 0 >> ] >> ] >> ] >> ] >> ] >> ] >> ] >> /OpticalAlignment false >> ] /OriginalNormalStyleFeatures << /Font 2 /FontSize 12.0 /FauxBold false /FauxItalic false /AutoLeading true /Leading 0.0 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /BaselineShift 0.0 /CharacterRotation 0.0 /AutoKern 1 /FontCaps 0 /FontBaseline 0 /FontOTPosition 0 /StrikethroughPosition 0 /UnderlinePosition 0 /UnderlineOffset 0.0 /Ligatures true /DiscretionaryLigatures false /ContextualLigatures false /AlternateLigatures false /OldStyle false /Fractions false /Ordinals false /Swash false /Titling false /ConnectionForms false /StylisticAlternates false /Ornaments false /FigureStyle 0 /ProportionalMetrics false /Kana false /Italics false /Ruby false /BaselineDirection 2 /Tsume 0.0 /StyleRunAlignment 2 /Language 0 /JapaneseAlternateFeature 0 /EnableWariChu false /WariChuLineCount 2 /WariChuLineGap 0 /WariChuSubLineAmount << /WariChuSubLineScale .5 >> /WariChuWidowAmount 2 /WariChuOrphanAmount 2 /WariChuJustification 7 /TCYUpDownAdjustment 0 /TCYLeftRightAdjustment 0 /LeftAki -1.0 /RightAki -1.0 /JiDori 0 /NoBreak false /FillColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> >> /StrokeColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> >> /Blend << /StreamTag /SimpleBlender >> /FillFlag true 
/StrokeFlag false /FillFirst true /FillOverPrint false /StrokeOverPrint false /LineCap 0 /LineJoin 0 /LineWidth 1.0 /MiterLimit 4.0 /LineDashOffset 0.0 /LineDashArray [ ] /Type1EncodingNames [ ] /Kashidas 0 /DirOverride 0 /DigitSet 0 /DiacVPos 4 /DiacXOffset 0.0 /DiacYOffset 0.0 /OverlapSwash false /JustificationAlternates false /StretchedAlternates false /FillVisibleFlag true /StrokeVisibleFlag true /FillBackgroundColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 0.0 ] >> >> /FillBackgroundFlag false /UnderlineStyle 0 /DashedUnderlineGapLength 3.0 /DashedUnderlineDashLength 3.0 /SlashedZero false /StylisticSets 0 /CustomFeature << /StreamTag /SimpleCustomFeature >> >> /OriginalNormalParagraphFeatures << /Justification 0 /FirstLineIndent 0.0 /StartIndent 0.0 /EndIndent 0.0 /SpaceBefore 0.0 /SpaceAfter 0.0 /DropCaps 1 /AutoLeading 1.2 /LeadingType 0 /AutoHyphenate true /HyphenatedWordSize 6 /PreHyphen 2 /PostHyphen 2 /ConsecutiveHyphens 0 /Zone 36.0 /HyphenateCapitalized true /HyphenationPreference .5 /WordSpacing [ .8 1.0 1.33 ] /LetterSpacing [ 0.0 0.0 0.0 ] /GlyphSpacing [ 1.0 1.0 1.0 ] /SingleWordJustification 6 /Hanging false /AutoTCY 0 /KeepTogether true /BurasagariType 0 /KinsokuOrder 0 /Kinsoku /nil /KurikaeshiMojiShori false /MojiKumiTable /nil /EveryLineComposer false /TabStops << >> /DefaultTabWidth 36.0 /DefaultStyle << >> /ParagraphDirection 0 /JustificationMethod 0 /ComposerEngine 0 /ListStyle /nil /ListTier 0 /ListSkip false /ListOffset 0 >> >>8BIMlnk2|tliFD$5e69392e-5786-1174-9ab5-d8b86ec60ab5 ribbon.psd8BPS8BIM8BPSy8BIMGZ%GZ%GZ%GZ%GZ%GZ%GZ%GZ%G8BIM%~:'Wh8BIM$] Adobe Photoshop CS5 Macintosh 2011-11-19T12:56:55+01:00 2011-11-25T00:57:33+01:00 2011-11-25T00:57:33+01:00 3 sRGB IEC61966-2.1 RQ RQ Job queues made easy Job queues made easy xmp.did:018011740720681192B0F57C9699AC60 application/vnd.adobe.photoshop xmp.iid:018011740720681197A5E183783236B0 xmp.did:0180117407206811AB08C21D52883AC8 xmp.did:0180117407206811AB08C21D52883AC8 created xmp.iid:0180117407206811AB08C21D52883AC8 2011-11-19T12:56:55+01:00 Adobe Photoshop CS5 Macintosh saved xmp.iid:0480117407206811AB08C21D52883AC8 2011-11-19T13:00:39+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:0680117407206811AB08C21D52883AC8 2011-11-19T13:01:29+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:0780117407206811AB08C21D52883AC8 2011-11-19T13:02:07+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:0880117407206811AB08C21D52883AC8 2011-11-19T13:12:55+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:0980117407206811AB08C21D52883AC8 2011-11-19T13:13:18+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:0A80117407206811AB08C21D52883AC8 2011-11-19T13:15:39+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:2C6EA9690A206811AB08C21D52883AC8 2011-11-19T13:18:06+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:2D6EA9690A206811AB08C21D52883AC8 2011-11-19T13:18:56+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:2E6EA9690A206811AB08C21D52883AC8 2011-11-19T13:22:53+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:306EA9690A206811AB08C21D52883AC8 2011-11-19T13:28:51+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:316EA9690A206811AB08C21D52883AC8 2011-11-19T13:28:52+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:346EA9690A206811AB08C21D52883AC8 2011-11-19T13:31:35+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:7C79D5760C206811AB08C21D52883AC8 2011-11-19T13:32:47+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:018011740720681192B0B2CC836803A7 2011-11-19T13:34:02+01:00 Adobe 
Photoshop CS5 Macintosh / saved xmp.iid:048011740720681192B0B2CC836803A7 2011-11-19T13:45:18+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:078011740720681192B0B2CC836803A7 2011-11-19T13:46:46+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:0A8011740720681192B0B2CC836803A7 2011-11-19T14:00:56+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:EF622A3D0B20681192B0B2CC836803A7 2011-11-19T14:10:43+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:F1622A3D0B20681192B0B2CC836803A7 2011-11-19T14:11:30+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:F4622A3D0B20681192B0B2CC836803A7 2011-11-19T14:12:32+01:00 Adobe Photoshop CS5 Macintosh / saved xmp.iid:018011740720681197A5E183783236B0 2011-11-25T00:57:33+01:00 Adobe Photoshop CS5 Macintosh / 8BIM: printOutputPstSboolInteenumInteClrmprintSixteenBitbool printerNameTEXTEPSON Stylus DX8400 @ Bricktop8BIM;printOutputOptionsCptnboolClbrboolRgsMboolCrnCboolCntCboolLblsboolNgtvboolEmlDboolIntrboolBckgObjcRGBCRd doub@oGrn doub@oBl doub@oBrdTUntF#RltBld UntF#RltRsltUntF#Pxl@R vectorDataboolPgPsenumPgPsPgPCLeftUntF#RltTop UntF#RltScl UntF#Prc@Y8BIMHH8BIM&?8BIMStitches Path Selection8BIM4Stitches Path Selection8BIM5N28BIM528BIM8BIM x8BIM8BIM 8BIM' 8BIMH/fflff/ff2Z5-8BIMp8BIM8BIM 8BIM08BIM-8BIM@@8BIM6nullVrsnlongenabbool numBeforelongnumAfterlongSpcnlong minOpacitylong maxOpacitylong2BlnMlong8BIM3null Vrsnlong frameStepObjcnull numeratorlong denominatorlongX frameRatedoub@>timeObjcnull numeratorlong denominatorlongXdurationObjcnull numeratorlongp denominatorlongX workInTimeObjcnull numeratorlong denominatorlongX workOutTimeObjcnull numeratorlongp denominatorlongXLCntlongglobalTrackListVlLs hasMotionbool8BIM4FnullVrsnlongsheetTimelineOptionsVlLs8BIM8BIM5nullboundsObjcRct1Top longLeftlongBtomlongRghtlongslicesVlLsObjcslicesliceIDlonggroupIDlongoriginenum ESliceOrigin autoGeneratedTypeenum ESliceTypeImg boundsObjcRct1Top longLeftlongBtomlongRghtlongurlTEXTnullTEXTMsgeTEXTaltTagTEXTcellTextIsHTMLboolcellTextTEXT horzAlignenumESliceHorzAligndefault vertAlignenumESliceVertAligndefault bgColorTypeenumESliceBGColorTypeNone topOutsetlong leftOutsetlong bottomOutsetlong rightOutsetlong8BIM( ?8BIM H HLinomntrRGB XYZ  1acspMSFTIEC sRGB-HP cprtP3desclwtptbkptrXYZgXYZ,bXYZ@dmndTpdmddvuedLview$lumimeas $tech0 rTRC< gTRC< bTRC< textCopyright (c) 1998 Hewlett-Packard CompanydescsRGB IEC61966-2.1sRGB IEC61966-2.1XYZ QXYZ XYZ o8XYZ bXYZ $descIEC http://www.iec.chIEC http://www.iec.chdesc.IEC 61966-2.1 Default RGB colour space - sRGB.IEC 61966-2.1 Default RGB colour space - sRGBdesc,Reference Viewing Condition in IEC61966-2.1,Reference Viewing Condition in IEC61966-2.1view_. \XYZ L VPWmeassig CRT curv #(-27;@EJOTY^chmrw| %+28>ELRY`gnu| &/8AKT]gqz !-8COZfr~ -;HUcq~ +:IXgw'7HYj{+=Oat 2FZn  % : O d y  ' = T j " 9 Q i  * C \ u & @ Z t .Id %A^z &Ca~1Om&Ed#Cc'Ij4Vx&IlAe@e Ek*Qw;c*R{Gp@j>i  A l !!H!u!!!"'"U"""# #8#f###$$M$|$$% %8%h%%%&'&W&&&''I'z''( (?(q(())8)k))**5*h**++6+i++,,9,n,,- -A-v--..L.../$/Z///050l0011J1112*2c223 3F3334+4e4455M555676r667$7`7788P8899B999:6:t::;-;k;;<' >`>>?!?a??@#@d@@A)AjAAB0BrBBC:C}CDDGDDEEUEEF"FgFFG5G{GHHKHHIIcIIJ7J}JK KSKKL*LrLMMJMMN%NnNOOIOOP'PqPQQPQQR1R|RSS_SSTBTTU(UuUVV\VVWDWWX/X}XYYiYZZVZZ[E[[\5\\]']x]^^l^__a_``W``aOaabIbbcCccd@dde=eef=ffg=ggh?hhiCiijHjjkOkklWlmm`mnnknooxop+ppq:qqrKrss]sttptu(uuv>vvwVwxxnxy*yyzFz{{c{|!||}A}~~b~#G k͂0WGrׇ;iΉ3dʋ0cʍ1fΏ6n֑?zM _ɖ4 uL$h՛BdҞ@iءG&vVǥ8nRĩ7u\ЭD-u`ֲK³8%yhYѹJº;.! 
zpg_XQKFAǿ=ȼ:ɹ8ʷ6˶5̵5͵6ζ7ϸ9к<Ѿ?DINU\dlvۀ܊ݖޢ)߯6DScs 2F[p(@Xr4Pm8Ww)Km8BIM8BIM!UAdobe PhotoshopAdobe Photoshop CS58BIM".MM*bj(1r2i ' 'Adobe Photoshop CS5 Macintosh2011:11:25 00:57:33&(.HH8BIMJ3J3J3333JtJtJt&K&K&Kn2sDŚw{jwwwQwMwI*wI(paI؞r_Js8BIM Ribbon Path # # # W W Wu߀߀߀vET-K) #`$`#"Q |@8BIMnorm(bg8BIMSoCopnullClr ObjcRGBCRd doub@oGrn doub@oBl doub@o8BIMlunibg8BIMlyid8BIMclbl8BIMinfx8BIMknko8BIMlspf8BIMlclr8BIMshmdH8BIMcust4metadata layerTimedoubAӱGڏ8BIMfxrp7~2228BIMnormf((shade8BIMlunishade8BIMlyid8BIMclbl8BIMinfx8BIMknko8BIMlspf8BIMlclr8BIMshmdH8BIMcust4metadata layerTimedoubAӱmw8BIMfxrp@R8@(D8BIMnormM4( sideshade8BIMluni sideshade8BIMlyid8BIMclbl8BIMinfx8BIMknko8BIMlspf8BIMlclr8BIMshmdH8BIMcust4metadata layerTimedoubAӱϴ8BIMfxrp@c&_ (@_y38BIMnorm8 l( ribbon fill8BIMGdFlnullAnglUntF#Ang@VTypeenumGrdTLnr GradObjc GradientGrdnNm TEXTCustomGrdFenumGrdFCstSIntrdoub@ClrsVlLsObjcClrtClr ObjcRGBCRd doub@f Grn doub?oBl doub@TypeenumClryUsrSLctnlongMdpnlong2ObjcClrtClr ObjcRGBCRd doub@oGrn doub@1}`Bl doub@9kfTypeenumClryUsrSLctnlongMdpnlong2TrnsVlLsObjcTrnSOpctUntF#Prc@YLctnlongMdpnlong2ObjcTrnSOpctUntF#Prc@YLctnlongMdpnlong28BIMlfx2tnullScl UntF#Prc@YmasterFXSwitchboolDrShObjcDrSh enabboolMd enumBlnMMltpClr ObjcRGBCRd doub?hXGrn doub?hXBl doub?hXOpctUntF#Prc@P@uglgboollaglUntF#Ang@^DstnUntF#Pxl@CkmtUntF#PxlblurUntF#Pxl@NoseUntF#PrcAntAboolTrnSObjcShpCNm TEXTLinearCrv VlLsObjcCrPtHrzndoubVrtcdoubObjcCrPtHrzndoub@oVrtcdoub@o layerConcealsbool8BIMlrFX8BIMcmnS8BIMdsdw3x8BIMmul 8BIMisdw3x8BIMmul 8BIMoglw*8BIMscrn8BIMiglw+8BIMscrn8BIMbevlNx8BIMscrn8BIMmul 8BIMsofi"8BIMnorm8BIMvmsk( # # # W W Wu߀߀߀vET-K) #`$`#"Q8BIMluni ribbon fill8BIMlyid8BIMclbl8BIMinfx8BIMknko8BIMlspf8BIMlclr8BIMshmdH8BIMcust4metadata layerTimedoubAӱ긟,8BIMfxrp|Md8BIMnorm*.P(RQ8BIMlfx2nullScl UntF#Prc@YmasterFXSwitchboolDrShObjcDrSh enabboolMd enumBlnM linearBurnClr ObjcRGBCRd doub@b Grn doub?oBl doub?oOpctUntF#Prc@RuglgboollaglUntF#Ang@^DstnUntF#Pxl@CkmtUntF#PxlblurUntF#Pxl@NoseUntF#PrcAntAboolTrnSObjcShpCNm TEXTLinearCrv VlLsObjcCrPtHrzndoubVrtcdoubObjcCrPtHrzndoub@oVrtcdoub@o layerConcealsboolFrFXObjcFrFXenabboolStylenumFStlOutFPntTenumFrFlSClrMd enumBlnMNrmlOpctUntF#Prc@YSz UntF#Pxl@Clr ObjcRGBCRd doub@c @Grn doub@9Bl doub@=8BIMlrFX8BIMcmnS8BIMdsdw3x8BIMlbrn8BIMisdw3x8BIMmul 8BIMoglw*8BIMscrn8BIMiglw+8BIMscrn8BIMbevlNx8BIMscrn8BIMmul 8BIMsofi"8BIMnorm8BIMTySh'?I*aD?@pI*aE@r)<2TxLrTxt TEXTRQ textGriddingenum textGriddingRnd OrntenumOrntHrznAntAenumAnntAnSm TextIndexlong EngineDatatdta&1 << /EngineDict << /Editor << /Text (RQ ) >> /ParagraphRun << /DefaultRunData << /ParagraphSheet << /DefaultStyleSheet 0 /Properties << >> >> /Adjustments << /Axis [ 1.0 0.0 1.0 ] /XY [ 0.0 0.0 ] >> >> /RunArray [ << /ParagraphSheet << /DefaultStyleSheet 0 /Properties << /Justification 2 /FirstLineIndent 0.0 /StartIndent 0.0 /EndIndent 0.0 /SpaceBefore 0.0 /SpaceAfter 0.0 /AutoHyphenate true /HyphenatedWordSize 6 /PreHyphen 2 /PostHyphen 2 /ConsecutiveHyphens 8 /Zone 36.0 /WordSpacing [ .8 1.0 1.33 ] /LetterSpacing [ 0.0 0.0 0.0 ] /GlyphSpacing [ 1.0 1.0 1.0 ] /AutoLeading 1.2 /LeadingType 0 /Hanging false /Burasagari false /KinsokuOrder 0 /EveryLineComposer false >> >> /Adjustments << /Axis [ 1.0 0.0 1.0 ] /XY [ 0.0 0.0 ] >> >> ] /RunLengthArray [ 3 ] /IsJoinable 1 >> /StyleRun << /DefaultRunData << /StyleSheet << /StyleSheetData << >> >> >> /RunArray [ << /StyleSheet << /StyleSheetData << /Font 0 /FontSize 322.84122 /FauxBold true /FauxItalic false /AutoLeading false /Leading 53.98064 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /AutoKerning 
false /Kerning 0 /BaselineShift 0.0 /FontCaps 0 /FontBaseline 0 /Underline false /Strikethrough false /Ligatures true /DLigatures false /BaselineDirection 1 /Tsume 0.0 /StyleRunAlignment 2 /Language 14 /NoBreak false /FillColor << /Type 1 /Values [ 1.0 1.0 1.0 1.0 ] >> /StrokeColor << /Type 1 /Values [ 1.0 1.0 .6 0.0 ] >> /FillFlag true /StrokeFlag false /FillFirst false /YUnderline 1 /OutlineWidth .39991 /CharacterDirection 0 /HindiNumbers false /Kashida 1 /DiacriticPos 2 >> >> >> << /StyleSheet << /StyleSheetData << /Font 0 /FontSize 322.84122 /FauxBold true /FauxItalic false /AutoLeading false /Leading 53.98064 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /AutoKerning true /Kerning 0 /BaselineShift 0.0 /FontCaps 0 /FontBaseline 0 /Underline false /Strikethrough false /Ligatures true /DLigatures false /BaselineDirection 1 /Tsume 0.0 /StyleRunAlignment 2 /Language 14 /NoBreak false /FillColor << /Type 1 /Values [ 1.0 1.0 1.0 1.0 ] >> /StrokeColor << /Type 1 /Values [ 1.0 1.0 .6 0.0 ] >> /FillFlag true /StrokeFlag false /FillFirst false /YUnderline 1 /OutlineWidth .39991 /CharacterDirection 0 /HindiNumbers false /Kashida 1 /DiacriticPos 2 >> >> >> ] /RunLengthArray [ 1 2 ] /IsJoinable 2 >> /GridInfo << /GridIsOn false /ShowGrid false /GridSize 18.0 /GridLeading 22.0 /GridColor << /Type 1 /Values [ 0.0 0.0 0.0 1.0 ] >> /GridLeadingFillColor << /Type 1 /Values [ 0.0 0.0 0.0 1.0 ] >> /AlignLineHeightToGridFlags false >> /AntiAlias 3 /UseFractionalGlyphWidths false /Rendered << /Version 1 /Shapes << /WritingDirection 0 /Children [ << /ShapeType 0 /Procession 0 /Lines << /WritingDirection 0 /Children [ ] >> /Cookie << /Photoshop << /ShapeType 0 /PointBase [ 0.0 0.0 ] /Base << /ShapeType 0 /TransformPoint0 [ 1.0 0.0 ] /TransformPoint1 [ 0.0 1.0 ] /TransformPoint2 [ 0.0 0.0 ] >> >> >> >> ] >> >> >> /ResourceDict << /KinsokuSet [ << /Name (PhotoshopKinsokuHard) /NoStart (00 00    0=]0 0 0 00000000A0C0E0G0I0c000000000000000000?!\)]},.:;!!  
0) /NoEnd (  0;[00 0 00\([{ 0) /Keep (  %) /Hanging (00.,) >> << /Name (PhotoshopKinsokuSoft) /NoStart (00 0   0=]0 0 0 0000000) /NoEnd (  0;[00 0 00) /Keep (  %) /Hanging (00.,) >> ] /MojiKumiSet [ << /InternalName (Photoshop6MojiKumiSet1) >> << /InternalName (Photoshop6MojiKumiSet2) >> << /InternalName (Photoshop6MojiKumiSet3) >> << /InternalName (Photoshop6MojiKumiSet4) >> ] /TheNormalStyleSheet 0 /TheNormalParagraphSheet 0 /ParagraphSheetSet [ << /Name (Normal RGB) /DefaultStyleSheet 0 /Properties << /Justification 0 /FirstLineIndent 0.0 /StartIndent 0.0 /EndIndent 0.0 /SpaceBefore 0.0 /SpaceAfter 0.0 /AutoHyphenate true /HyphenatedWordSize 6 /PreHyphen 2 /PostHyphen 2 /ConsecutiveHyphens 8 /Zone 36.0 /WordSpacing [ .8 1.0 1.33 ] /LetterSpacing [ 0.0 0.0 0.0 ] /GlyphSpacing [ 1.0 1.0 1.0 ] /AutoLeading 1.2 /LeadingType 0 /Hanging false /Burasagari false /KinsokuOrder 0 /EveryLineComposer false >> >> ] /StyleSheetSet [ << /Name (Normal RGB) /StyleSheetData << /Font 1 /FontSize 12.0 /FauxBold false /FauxItalic false /AutoLeading true /Leading 0.0 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /AutoKerning true /Kerning 0 /BaselineShift 0.0 /FontCaps 0 /FontBaseline 0 /Underline false /Strikethrough false /Ligatures true /DLigatures false /BaselineDirection 2 /Tsume 0.0 /StyleRunAlignment 2 /Language 0 /NoBreak false /FillColor << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> /StrokeColor << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> /FillFlag true /StrokeFlag false /FillFirst true /YUnderline 1 /OutlineWidth 1.0 /CharacterDirection 0 /HindiNumbers false /Kashida 1 /DiacriticPos 2 >> >> ] /FontSet [ << /Name (Jinky) /Script 0 /FontType 1 /Synthetic 0 >> << /Name (MyriadPro-Regular) /Script 0 /FontType 0 /Synthetic 0 >> << /Name (AdobeInvisFont) /Script 0 /FontType 0 /Synthetic 0 >> ] /SuperscriptSize .583 /SuperscriptPosition .333 /SubscriptSize .583 /SubscriptPosition .333 /SmallCapSize .7 >> /DocumentResources << /KinsokuSet [ << /Name (PhotoshopKinsokuHard) /NoStart (00 00    0=]0 0 0 00000000A0C0E0G0I0c000000000000000000?!\)]},.:;!!  
0) /NoEnd (  0;[00 0 00\([{ 0) /Keep (  %) /Hanging (00.,) >> << /Name (PhotoshopKinsokuSoft) /NoStart (00 0   0=]0 0 0 0000000) /NoEnd (  0;[00 0 00) /Keep (  %) /Hanging (00.,) >> ] /MojiKumiSet [ << /InternalName (Photoshop6MojiKumiSet1) >> << /InternalName (Photoshop6MojiKumiSet2) >> << /InternalName (Photoshop6MojiKumiSet3) >> << /InternalName (Photoshop6MojiKumiSet4) >> ] /TheNormalStyleSheet 0 /TheNormalParagraphSheet 0 /ParagraphSheetSet [ << /Name (Normal RGB) /DefaultStyleSheet 0 /Properties << /Justification 0 /FirstLineIndent 0.0 /StartIndent 0.0 /EndIndent 0.0 /SpaceBefore 0.0 /SpaceAfter 0.0 /AutoHyphenate true /HyphenatedWordSize 6 /PreHyphen 2 /PostHyphen 2 /ConsecutiveHyphens 8 /Zone 36.0 /WordSpacing [ .8 1.0 1.33 ] /LetterSpacing [ 0.0 0.0 0.0 ] /GlyphSpacing [ 1.0 1.0 1.0 ] /AutoLeading 1.2 /LeadingType 0 /Hanging false /Burasagari false /KinsokuOrder 0 /EveryLineComposer false >> >> ] /StyleSheetSet [ << /Name (Normal RGB) /StyleSheetData << /Font 1 /FontSize 12.0 /FauxBold false /FauxItalic false /AutoLeading true /Leading 0.0 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /AutoKerning true /Kerning 0 /BaselineShift 0.0 /FontCaps 0 /FontBaseline 0 /Underline false /Strikethrough false /Ligatures true /DLigatures false /BaselineDirection 2 /Tsume 0.0 /StyleRunAlignment 2 /Language 0 /NoBreak false /FillColor << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> /StrokeColor << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> /FillFlag true /StrokeFlag false /FillFirst true /YUnderline 1 /OutlineWidth 1.0 /CharacterDirection 0 /HindiNumbers false /Kashida 1 /DiacriticPos 2 >> >> ] /FontSet [ << /Name (Jinky) /Script 0 /FontType 1 /Synthetic 0 >> << /Name (MyriadPro-Regular) /Script 0 /FontType 0 /Synthetic 0 >> << /Name (AdobeInvisFont) /Script 0 /FontType 0 /Synthetic 0 >> ] /SuperscriptSize .583 /SuperscriptPosition .333 /SubscriptSize .583 /SubscriptPosition .333 /SmallCapSize .7 >> >>warp warpStyleenum warpStylewarpNone warpValuedoubwarpPerspectivedoubwarpPerspectiveOtherdoub warpRotateenumOrntHrzn8BIMluniRQ8BIMlnsrrend8BIMlyid8BIMclbl8BIMinfx8BIMknko8BIMlspf8BIMlclr8BIMshmdH8BIMcust4metadata layerTimedoubAӱ@W[u8BIMfxrp@b H,uȄvi @8BIMnorm*/d(Job queues made easy8BIMlfx2nullScl UntF#Prc@YmasterFXSwitchboolDrShObjcDrSh enabboolMd enumBlnM linearBurnClr ObjcRGBCRd doub@b Grn doub?oBl doub?oOpctUntF#Prc@RuglgboollaglUntF#Ang@^DstnUntF#PxlCkmtUntF#PxlblurUntF#Pxl?NoseUntF#PrcAntAboolTrnSObjcShpCNm TEXTLinearCrv VlLsObjcCrPtHrzndoubVrtcdoubObjcCrPtHrzndoub@oVrtcdoub@o layerConcealsboolFrFXObjcFrFXenabboolStylenumFStlOutFPntTenumFrFlSClrMd enumBlnMNrmlOpctUntF#Prc@YSz UntF#Pxl?Clr ObjcRGBCRd doub@b Grn doub?oBl doub?o8BIMlrFX8BIMcmnS8BIMdsdw3x8BIMlbrn8BIMisdw3x8BIMmul 8BIMoglw*8BIMscrn8BIMiglw+8BIMscrn8BIMbevlNx8BIMscrn8BIMmul 8BIMsofi"8BIMnorm8BIMTySh(??@pD}pH@xtMvE2TxLrTxt TEXTJob queues made easy textGriddingenum textGriddingRnd OrntenumOrntHrznAntAenumAnntAnSm TextIndexlong EngineDatatdta& << /EngineDict << /Editor << /Text (Job queues made easy ) >> /ParagraphRun << /DefaultRunData << /ParagraphSheet << /DefaultStyleSheet 0 /Properties << >> >> /Adjustments << /Axis [ 1.0 0.0 1.0 ] /XY [ 0.0 0.0 ] >> >> /RunArray [ << /ParagraphSheet << /DefaultStyleSheet 0 /Properties << /Justification 2 /FirstLineIndent 0.0 /StartIndent 0.0 /EndIndent 0.0 /SpaceBefore 0.0 /SpaceAfter 0.0 /AutoHyphenate true /HyphenatedWordSize 6 /PreHyphen 2 /PostHyphen 2 /ConsecutiveHyphens 8 /Zone 36.0 /WordSpacing [ .8 1.0 1.33 ] /LetterSpacing [ 0.0 0.0 0.0 ] /GlyphSpacing [ 1.0 1.0 1.0 ] /AutoLeading 1.2 
/LeadingType 0 /Hanging false /Burasagari false /KinsokuOrder 0 /EveryLineComposer false >> >> /Adjustments << /Axis [ 1.0 0.0 1.0 ] /XY [ 0.0 0.0 ] >> >> ] /RunLengthArray [ 21 ] /IsJoinable 1 >> /StyleRun << /DefaultRunData << /StyleSheet << /StyleSheetData << >> >> >> /RunArray [ << /StyleSheet << /StyleSheetData << /Font 1 /FontSize 48.0 /FauxBold true /FauxItalic false /AutoLeading false /Leading 53.98064 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /AutoKerning false /Kerning 0 /BaselineShift 0.0 /FontCaps 0 /FontBaseline 0 /Underline false /Strikethrough false /Ligatures true /DLigatures false /BaselineDirection 1 /Tsume 0.0 /StyleRunAlignment 2 /Language 14 /NoBreak false /FillColor << /Type 1 /Values [ 1.0 1.0 1.0 1.0 ] >> /StrokeColor << /Type 1 /Values [ 1.0 1.0 .6 0.0 ] >> /FillFlag true /StrokeFlag false /FillFirst false /YUnderline 1 /OutlineWidth .39991 /CharacterDirection 0 /HindiNumbers false /Kashida 1 /DiacriticPos 2 >> >> >> << /StyleSheet << /StyleSheetData << /Font 1 /FontSize 48.0 /FauxBold true /FauxItalic false /AutoLeading false /Leading 53.98064 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /AutoKerning true /Kerning 0 /BaselineShift 0.0 /FontCaps 0 /FontBaseline 0 /Underline false /Strikethrough false /Ligatures true /DLigatures false /BaselineDirection 1 /Tsume 0.0 /StyleRunAlignment 2 /Language 14 /NoBreak false /FillColor << /Type 1 /Values [ 1.0 1.0 1.0 1.0 ] >> /StrokeColor << /Type 1 /Values [ 1.0 1.0 .6 0.0 ] >> /FillFlag true /StrokeFlag false /FillFirst false /YUnderline 1 /OutlineWidth .39991 /CharacterDirection 0 /HindiNumbers false /Kashida 1 /DiacriticPos 2 >> >> >> ] /RunLengthArray [ 1 20 ] /IsJoinable 2 >> /GridInfo << /GridIsOn false /ShowGrid false /GridSize 18.0 /GridLeading 22.0 /GridColor << /Type 1 /Values [ 0.0 0.0 0.0 1.0 ] >> /GridLeadingFillColor << /Type 1 /Values [ 0.0 0.0 0.0 1.0 ] >> /AlignLineHeightToGridFlags false >> /AntiAlias 3 /UseFractionalGlyphWidths false /Rendered << /Version 1 /Shapes << /WritingDirection 0 /Children [ << /ShapeType 0 /Procession 0 /Lines << /WritingDirection 0 /Children [ ] >> /Cookie << /Photoshop << /ShapeType 0 /PointBase [ 0.0 0.0 ] /Base << /ShapeType 0 /TransformPoint0 [ 1.0 0.0 ] /TransformPoint1 [ 0.0 1.0 ] /TransformPoint2 [ 0.0 0.0 ] >> >> >> >> ] >> >> >> /ResourceDict << /KinsokuSet [ << /Name (PhotoshopKinsokuHard) /NoStart (00 00    0=]0 0 0 00000000A0C0E0G0I0c000000000000000000?!\)]},.:;!!  
0) /NoEnd (  0;[00 0 00\([{ 0) /Keep (  %) /Hanging (00.,) >> << /Name (PhotoshopKinsokuSoft) /NoStart (00 0   0=]0 0 0 0000000) /NoEnd (  0;[00 0 00) /Keep (  %) /Hanging (00.,) >> ] /MojiKumiSet [ << /InternalName (Photoshop6MojiKumiSet1) >> << /InternalName (Photoshop6MojiKumiSet2) >> << /InternalName (Photoshop6MojiKumiSet3) >> << /InternalName (Photoshop6MojiKumiSet4) >> ] /TheNormalStyleSheet 0 /TheNormalParagraphSheet 0 /ParagraphSheetSet [ << /Name (Normal RGB) /DefaultStyleSheet 0 /Properties << /Justification 0 /FirstLineIndent 0.0 /StartIndent 0.0 /EndIndent 0.0 /SpaceBefore 0.0 /SpaceAfter 0.0 /AutoHyphenate true /HyphenatedWordSize 6 /PreHyphen 2 /PostHyphen 2 /ConsecutiveHyphens 8 /Zone 36.0 /WordSpacing [ .8 1.0 1.33 ] /LetterSpacing [ 0.0 0.0 0.0 ] /GlyphSpacing [ 1.0 1.0 1.0 ] /AutoLeading 1.2 /LeadingType 0 /Hanging false /Burasagari false /KinsokuOrder 0 /EveryLineComposer false >> >> ] /StyleSheetSet [ << /Name (Normal RGB) /StyleSheetData << /Font 2 /FontSize 12.0 /FauxBold false /FauxItalic false /AutoLeading true /Leading 0.0 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /AutoKerning true /Kerning 0 /BaselineShift 0.0 /FontCaps 0 /FontBaseline 0 /Underline false /Strikethrough false /Ligatures true /DLigatures false /BaselineDirection 2 /Tsume 0.0 /StyleRunAlignment 2 /Language 0 /NoBreak false /FillColor << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> /StrokeColor << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> /FillFlag true /StrokeFlag false /FillFirst true /YUnderline 1 /OutlineWidth 1.0 /CharacterDirection 0 /HindiNumbers false /Kashida 1 /DiacriticPos 2 >> >> ] /FontSet [ << /Name (Jinky) /Script 0 /FontType 1 /Synthetic 2 >> << /Name (Jinky) /Script 0 /FontType 1 /Synthetic 0 >> << /Name (MyriadPro-Regular) /Script 0 /FontType 0 /Synthetic 0 >> << /Name (AdobeInvisFont) /Script 0 /FontType 0 /Synthetic 0 >> ] /SuperscriptSize .583 /SuperscriptPosition .333 /SubscriptSize .583 /SubscriptPosition .333 /SmallCapSize .7 >> /DocumentResources << /KinsokuSet [ << /Name (PhotoshopKinsokuHard) /NoStart (00 00    0=]0 0 0 00000000A0C0E0G0I0c000000000000000000?!\)]},.:;!!  
7xxV|~lqa}|k  '(!"$)#(#&))%(%# *0.68<|pldz~v{~z|puzptezX~CwTj~n_VnCi]OUGabejYSZVRn@Ub][[c[f_tl?G|E]jqiokM|h8shN^Bb60OSTAqAE06M!D=CM@J2eP_LEK`D]UA ca` (7BGD4+.&' 8LL) Xhsirgo^ze{~dldffqikqRLLK{OL ]OLLPԪMMLKeNLLLZ~JOT KN[А`_~L_ X PZN|IٴJӧt{L|lKKMJNjyzL}zL} ϽvLQMK;LJəLJ֚K"MLξLJəLLՙLڰ LɘLK LhpҘLK#;ϜLMلLLMلL̋~Lhq؅L:KMKMKM̿ҖKLKL̿KisΚKNqKU Hs{%PTJINxthIhIʳ OsGIKsȑMUv\JKs\IsJKuJJK#JLLrL\IwKL:wKrw^rw^JKxJKLN OkTO$NWOzjjH׭Krwvj vjӰFկ)vݔxOĢnJuݔxs(~XexJxԮeԳIIqrI$rIu۔IzF}Hrv#t[LL[~LҒsSKLrSKLӯLҢ{(x|uL{yyH{xKDžKxx ѢƆK֯x5rrJ|yyIsJ|yyIzxL S}I$ځgӦvIgjkkЮ LMM}sxʟtՙWvM|sxIxwxv LIxְywz rrv rvK|szxL xPΰȳPמ ֳO~ZIPd֟ JPeן IKޯ wWoGxwPȋ τ Jް֯OȌU srsG rsGPǍ΅JsPX$vHiXuILOGdGhGdHhfZGNK(]KMJ] L\] K_K GMK\հK9oMJKKKaLZKNLI#qGNLHMOױhNSHMLLIMJڳMLIMIڳMLIvMLQMhrK־L(qLV΁YMLIKֿMqLVpLW LLJ LLgsղ LLK:dLLJILMKLLILMKLLKջLKLLKLMKLV L΅  IӞ ӞO OqПLZKRKRf JS}쥄즄꣄xOUNMLNNs_H_q~Kqq`ñtܼHsstNJӛNINJK=ӑMIڈLMfoOKoNKwMPwMPνҚMJMJ֛KəMJMKLMP νҙLJ)LgoنLۀMLنLvLQHԚK;tLR ϝLLLLLLrڀKLKKLΛLMKL KKKξrLS KMҖKLKLM̿ӗKͿrLT˸sHKtK=HLMuKuu[OOJK tJKu\NOR JK3ɐMUvJLIv\LNHsM+KLrvIw\wKJL\IIw]wJNsKKwJs N¸֭I|ɆJJIFGuݔxI ^r PŢoIPI FHwGyԮJPI J{ ye JkXJJ{FӰI{y FңtvxMzyyGLL|xLzyyGx }uxNJL{yyHLHxLѢMKK{yyIsƆK}HJ{yyIsL УryLNyswyvwMMM}s}xw x ʠtՙWywpvLHwL LpvxsIw TvysLKzsy LKVuȌOʋOGJJxG x VoPɋKGJHxJ фKݯOKGVsτ {GUtJKݰVsxIGOrI cK(KYXK\xJLKLYI]Z+`KGMLpJ `KK]qpJ[FLoK\ZNgqiLJ rMLH(MLHMLRNLHMLQMLPLֿLrLUMLJv̀ZLLIHMLJLLSKPpLVLLSLLKLhsLLSIL eLMLILeLMKLUKgteLMnLXKLU Ӟ  O LQ؜``:NKʔל՛LهLNKLMPNKwMOͽ~LMOђMIMK ֜KMgouLPML@sZiJrLuZOMPuZ[INONQILu[ suIrMu\II膽vJ]rI$gG]rɄIwI sHIܿ@NzyyGPwMzyyHvMzyyI˄LLx vLzyyHxwrLL{yyI߿w6LwvwwJwMx vwxyv sLvHI猹wH vH JwƍHwOɌrrJHkK]$wOXZwcK \KJZNLH]tLSNLHuNLIMLQvrLLIMLJ sLTMLIMLKLLRMLJKO  ؅                                                       rBz cACaZ? [@lB\ BB@MB    B?C>b?j=A=j@?>>?@ ӏ@A?X@ C=A@JC=)j=޼j?A= ?A?B?? Ni@}P??=BCPOOzrO?k>\Oc|iYAoj?B>ysO?9POOmBCzrO?NB\[BO[BO@@>f7BEBs@@zB=k?@taAA>9j=E=b>a@@Y>?lj?B=`AA>:7yB{hAAU`AA>:xBAz@@D>sKi@oi>Z? j=j?C@j@g|C?Xh@q5C?A?AcNAh@qCAAcBCBC  *a?X?NABABjj>i=iAA@Fi=i=C@i@@??=oi>/Akj?ACAHh?B@AA??Y@?Ao@?Ao  *BA?hAA?zBji=i>i@Z@= j=j=C@i@i>.Akh>A@@Yh?B@y?@@y?@? GC@>C@>BAC;j>i=٤BGj=j=C@i?h?0ACB>CCh>ADp>p>  EW@>N@?o@C~C>hj>j=V?|a=j>C@a?i>1n@C}C>hB?@AB@i>u@AwNAmChB<شChB<??g>w??=h;h;A??j>??    
$*@@}    @@AA@@    ݝ                        %          LJI  IXp] \   W Ӳ$}JdktKMLIKHÔJHKIЖ*JJIуJJILsINΔJǷIfkЂJ|ǷʒJIJIJemʒIH)ɊJIJJJIpJOIIIJpƷHJo|IKɑIJIenƆ| +~sJLsXsXNjMTHJssyƋLUtXJsYLrƌLUuJJHIJotJIIIKtZrKuZtuHrJJYHuI賙 } IGHHOlѪ)vOlHHRHuOmINtْH (ƂHѫHHHHNѪ eI а~?݀uJMvvwGLwvvHzs˞w#zrLwvvHLwvvGJwzr܁tKHyہu.uKxvvIʞJJxvvJށ(ށJJ˞łJځuϭ }vuKuușsҔVKv ǚtєWuuKvFȚtҔWvuqJyqwuuuJKuۀۀqJIwwuϭ }OʼnIIHVoJ٫v VoH I&IwUmOËJOŠHIجIIO OKJ٬OϬ&~JZHJGJ[UHIZ[缾IKIKJoJ`JHIZٯٰKHJH]IϬ  ~JIGKINLIGKIG̀ZJdosJRˁ7ZKIHKIGJJQrJS檺ˁZIJGIߩIӹJoJIHKoIJHJIIIdoJJRJJIJISIISIJdpHIKϱ  [˜ !O  JR    颁              #                 ҽ KS ʼ [V ľ\ſ ƺ ӱ  ɿ@ΔK“KGLIdžyKdiJcj ƵǒJ7GJHΔJKdkȋJHŵKIkrINpGHsX0鴗sItHƊMUI6IrsIHIsKt   HuHҮHHOlAtؒvHƂHG  uMvuuHЫ܀t܀tyq5{uۀttށI   tt4ЫutvtƘrєW)Jxp~uuttۀJ    HuIЪOÉOÉUn uOOI S[ЪIIHH I([I_IٮZ    tJQKIGѮJIFIIFZ IҺI*sIRIIHpIHHJIPIHQu  ˜1  LQ                                            Nv aL OpOpM  OoNpL    xZHr*Np~l`NsbNZoXMa|uXL|l_NrONcpLaMX F[-NojNNbMLoOMOU|OPOKjNMNKiMMcMNpXNQ_MMpLOLzXM   +NorMPNNOMPhqMwpLPOiMpLNP YI*NoqLPMOOMMNNMaqLpLOOک`LpMPNMs1yF[rOoqKPNOPMqLpMOO}NMcNLpL܀NML    󸝶!NprKQNOOPqMpLOON]MpMwL  'NprKQNOMLNMxXMqqLpMOOPPnMLiLɲOkOL!NxqKPQjMLNML|rMpLORNMfM}LrMNN    )                              vM{iNOf `L  OMOLiL pM  򸜵PKNNTQKpL   xZHrqN~ZMMKPQX|uXMpMaYWMb|uWMaMX   G[)PQPyMM}NLsNMujMNMKpMOL|MPNMiMMNMOLzXL   *PLΙMO^PKPMPKqNvpL_MNOOhpMwNP  YH*iL`MZOOOOqqLqLpNNMSONNNMbpMPMMs wF[q)OMMpNMLPprMqKpMdMLONpMڀOMM 򸝵(ONLPMLOOPKqKqMاNROPpMvLbMMZMMwNN~OLprK pLaN}}ONxWMpqLNkNL   Op!N~NNLqLpMOMNML|pLrMMN                                  pNON `M`L pNKiNK OLNPM 𷛳pNNeOML ۻN ۻ vYGppNMKYNMKX}k`Mqi˺`M[m(XWX{tWMʂi}l_NqoMN  EZ*pNNRNmiL{OLrNMujNNcNMohLƤOMOV#{NMrNMviNMMKhLiNMcNNoPLpMM^*pKpMQLwLPMPKpMOOpMN"OMPLoNwoMoLOONiXOYH-pKOQzMiLOqpLqKQOoLO$NroLoLoMpMOOOOOV xFZr*pLNMKpKPqpLpKPOoMM OroLpMoLoLONNMMe𷜵)pLQMpLPOQLpLPOoLNNOLoLoMoLOOOO  *jL•jKyOM}MMppKQNiLNNK-wMM}MNroLiLpLPOONMMLL  *yMyMPNNqMQS~NiNN ONNpM~MqNPSO~hNg   MN  OMLMNO  ܠ                         ̷  ĺĻ̀}Vh%g&=wjvjX gijRWE1D3XX@jj X uNZbp"Zc[+,\W ~bz*>LL\XLA W>- |(.yKB |b}(?s- []WcrVd\ J \WcrVc^ I ] z%/z `fM 9*WSGJ;  T  V  YaI TsW^`. ?,Tr Wawe  z[  x]>/ "MNXn c *0 XY N،). 
Vqb- (f  'h Q֋a=YXX   WX XY V  ]  ^*+, YXX L qh NW  3pi PVKnk x Z w ]4EXXX WJ_ K XZK  VVH  IYL  HYLM] 0110RYXX Wi W  X  }hW  VWT U S{jYYYX W X ZV  VWU T SYYXXWH K W ?J VWT J I=XYXX kw YXjy [WVI  z {XXY W  +WI{V (VW   SjYXXW/*Ncn RhWx SgWW  o +6h o +7fލ BHNWNWXWCʧWx9k;gݲkʦWVLE8 EF z_R DD {] T8jݰnl L WISD2[X [LKV kkI rEvJ__m YzsbZ3P qbkXoW oHrNxbnIqOxcM UQ Eل I PU@ A NLLJLLKMXKڱs]Y^sڱص s Nfn ΑMHՙK˻MfnʝMIvMOMgo͘MJ MK MKMKuLPΊ~ʼԄLtLP՘L*MKKMLMLpLLԃLLLLLʼLho˛LJƹvJ ILstwIIMJwIIJ wJ rvXrwXJLIt*xLxZxuKxYtwJJLyH ΃svHwIIIs rG rGIֳJxH |GIծGծHwv vsvvLwr sNzyyLrNyKLհLw sN{yyL РMzyyLРwvv wsvwvMwwwrrvsvLկLwwsw LÛvLÛwwOƍsswOƎ̈́HOŎVrrsI rrH H֯Hx UqHOGۯHH۰Oď̅K_JL\K]L_ J/wJLL_կ `]#vKܵKLKLK]KýNLIvMLIQLJyLTMLIMLJMLR MLJ/lLJLPLKMLJPLKMLJMLRLִ LLTwLV#kLKMLJLLTLgsLLKLhs LLKLzr~h}ХzzƧ w仮廮f^TZgⵝⵝ MkmWR ӟ 2}%wSTW^STQUTRا>؞󭢜؞ӡ32e~ *@2UZVTV[XRܸWR֞䭞ઢ󥝠򪟷3ڠ$$\|PDWSp[WTzZtWPӢŜ죞 !枛!z D^XSkXUVVWW}WPӣ՞ !󼟥果 #\լ5G XSs^YSXPӢ՞ !󱡷!m*!b}TjvXjVTWXƵWPӢ՞Ξ.壝Ο곤ϝ墝ם7 !N!@f`WVX_YVVS[T֣֠؟襞󺝞򝝰($da@}#k % Em" " !ѩC܇n3W؟ $L X\sgRSRSTQzSUTRxhuqgRSRcQSQ 4d|r)?VTbU]WRWbf]lUTblUXWS٠#w$\}WQVTyZt]nf]mURfZe}VUB WRWUVVXY\nf]lWRf^m$Zt ԫ6XRXT\og]l[VSf]kl, zd|TWQVXǴXWpYUTVRf]k!\WYVVUhXVTZXVY`[Yiap%e_#k" K|v\T4 4|wZzop[~|` '&"%+%%')'+,-//2.0)/29iRTgNLe+ - p=,Gaa)]twAm"JECjixx|tttmio_r_UYfXn`zqu_wsv]owigzhquzzltzuWlihaiWFoVJQFLgXo^CmSUC>E06M)-'JH,I Ea;w *:BFB2*+)' 7NL(" "10?::5>1*    "*QLLJOLLK NLKlS|MLKSTUSVSVNLLV URWLRQWݠTtkn}RqR|LRl{ؤ|֣ڡT֤{ l{| ݨSmm הUq~NPWm}߰}ް۩x߯|m}|ݨSRTRTpKQKPMOoK mKoQLlMLGn|LP}zLQN~sLL|LP|zLRO۩qO}yLRLNzLR#nzLRPMLNOMqxLNPNKM tKMtKMwLTxLRKOyKSMMrݮOTQq%rRRpUQqTQqTR TRROGnTSP{RT陚KӆUTRP{RT虚ڮQ޳T癛zSTUUoST$nSmRMTPQOSP쳙U瓻UUSOTQs RTsRSSmSUSOTTVkSMR QPq|pMNTQNVPTSrTPSrTPmU)nms|osXTmr|oۘ(PP|oT|onm|mmPTmSRsRXTsRWTm{ormU{V ݛOWRp˟PKʚڠ|O~U|PKU|OKnMznnPn|(nQUSSKyyNnPn|oPQ}oĬ©}n nn}٩|z}QĬ|Qr RJyyPrRJyyPn|nPmní}ت~N ܩ~NSoN~VQMrr nLLlnnLN|n ١ݍɩnLN}nڬS}nܺȩ}n nn|ԩ}Ll|Sɩ}R~RǪ~RǪn}nLNmǪ|ҪM ݩRqps LRpmWO WOrn VX nn}}n ݈剜Yn}|n۩xS|nt|n nn}| W|xSt}RVRYVRYn}n}nu }WݨNom~$ˤoPVN绐UrRMmTRIVNTQIVNUnULnoqP|(nRSRPOnpO|nڪN| nNO|nnnnN| UL}NOO}9MQOOMOPNo|nmQnNPnOۧONLHSzRNLOnLkpNjPQLH MOMLPMOMLPlzMKxMMmlOұQ mNLyLLNlOұRmNL٩OML lxLLQlllLLP LMO vLLR:JLPPJLMÅvKLQILM…vKLRllOѮSntKLTKLRuKM}LVQP  Ppo R PoL۫LxLxTҺSOLzMݛݛ ۘ{OSVTW߀RWVOWMLOSORRRأ{|X֣{{R|٢WRQեܱR||Wް||R}ڪRRRݯMJ|MOMMO=oLPTKMLMKMKQKlRKl{LO}LPN}{LOMKPK|Ln zLQ4~sLMQL}LLRLQLnXOOLn,{LQLMLL}LLLNQ zLRqکSLL LNPNLpKNwLTKPQP NLqSOsOSWN=WPMWMWTrWRRUqRTTQ 隚|RTUqmSLs UqRT3KӇVTUN}TVUNRW癚SQ+nQMRPp}TUUP mRM贚T۰PUUOSRPPnTUTPS昛 SRҸ|oSR{nTUWTUUm|os{oW T|Q m|otTl}oWTmUWlV{U}oW TTR{P NQXTTRlU|oSRmmUz} pūR|Ȩ|ūĩȫLyyLLLn|oPn|oLyyN}on}n RVSSĬé|zMWKyyN|MWnM| کzȬzMXKyyPĬR|کQڛPKyyPĬRnNz|oíQnm~NLl~ o۷R||ʩʨLLo}nLN|nɨ |}n|n ١܍ɩ|Lɩ}LWnL} ԩLkܿLȩR}ԩS کȩRnLLk}nݵRm nLʸ X} oډR}}ssYVVo|n}}nY }|n|n ݈剜t||LY|WWnW| Wt|LY؉R|xS کPYىRnWV|mيRnnV UL} oNM UO|)OOOOp|nqO|nP||o|nRSRPNO}}LN}Wn+nMULO|LOMP nNNکMNMPnUL}nMPn o NK mOLJŊ zMLN(zMLMnMKw{MLMzMKxzMKwmmOӲPNKlzMLO{llxLLP{SyLLOxLLzKzMLlwLLz+LLPLNkLLyzSvLLQKLMƋLLPOڨNwLLQKLMƋlvKM|KNlKLNȌlKMlvKM|}P  poLxSVRV٣onפ߮onజ:NJNSJMKQJ|LlMKoqPKkPK|LmoLOnMK NMLsOLmLLWUNR際WMVqnTKrUoURUpoSKrUQMoT 雚URsRQUWVUQoXT|PW$ToPUo|Q|oWTVsRUWTMyyNN|&LyyN|pLyyNoRKo|o| ȨoLyyOĬèrRLKyyOʨ J|&ʨ|}ɨoSLo|}|nɨʨ~RLȩX TQ}X |}XoyRVo}||oXs VRVXROR}$Q||PpNp}|TPoP QOPNP|MLLPK{MLM{{MLNnQzMKxm{{MLPlzMLO PKyMLPNLKƒyLLyxLLOKp} Uo²⢔Ꝕ   ΍ᑏ͉ 𗋎 ܫ  "  φɅ𕆈匈 󔆈∆Ђ~ ㇂ }𐁂 ~ }~~z{} |G@FMBF?F@GBh[AB@BB }~~~~~z}~~~ ~~~|}AMB>F>E?@= E@?>>@@ r@ADC?x}~~}}~{z~}~~}~}~|B=]@?vB? E? 
ɾE?k@D AA?B??{}|}zw|}|}y5D>D??A@iOMVREBF@X_eb[~EA}E?B@QCBNMV-YAMQCCDAgNAWOAWAA~?By|yv|}|Hu@h@k?H>IA?g@?EA@>AE?C=E?DA?_@@E>AACA@>B:HA?g?@C>MA@}CA@>BGAUp@@ԻB@p??BAq??ABܾBv{|{|{yv|{|{|{|{x&B?^>էA}C>ALBAE?EAD= E?F?BXE?xjA>D?7AMAB@BYD?AY@AcAdxzz{z{zz{z{z{zzwvz{zz{z{zx'E?BMDNA`@E?E?E@A?s E?E@BXE@?=E@+@E@׽AW?xD@A@AA?>@@A|@@A}tzyzyzyzyzyzyxvyzy zyyzyyx$mAA?\@A>@E>E?E?f@@ F?E?BXE?D@,@D?G@@>oDAAOa>ACa>AC vwyyxyxyxyxwvxxyxxyxyxyxyxxyyxyxxuGB@>B@>AWB?F?EAm@hE?E?BYE>D@,AXA@AXD@AbE@D@ uyyxxyyxxyxyxyx}vyxyxyxyxyxw}4D@MC?WC@MxC>E@E?C>E?E?BXE?D@,C@MwC>B?@ABBD@C@BpD@yBwA@{BwA?rwvwvwx{wvwvwxt%d=[=k??=D>C?@= D=C>@fP=B?+j>?=@C?B?k>?=V??JW??J~tvwwvwvwvwvwwvwvwwtwvwvwvwu  n@]?pvuvuvuvuvuvuruvvuvuvuvuvuvt@@AA@Xstutupoupǔptutututtomtsy mssttstssttstqntsts tsttsstsstso |qrrsrsrsrq|zprsrsrssrsrq krqqrqrqrqrqqrrqrqrrqqnmq rqqrqqrqqrqrqrqo qoqqpqqppqqpqpqpqpqp wroppqqpqpqpqpqpqqpqqppqllpopopopopopopo poppqorooqpopopopopoopopopow iononoonnoonononojggjnononononook{mnmn moljw|mjnn gmnm j|ƈhmnmnmmi  nkllmlmlmlmllmllmlmlihllmlmlmllmmlmllj {GFEFEEDNrJNMNJNNJNwFglklklkllklklivglklklklklklkju RwwŔ iiwi hv窆gkkjkjkjkjjkkjkjkjkjkkjkgijjkkjkjkjkkjkjkkjkjkgwwwϤii v iiw fjijijjijjijijijijjil {hjjiijijijijijmlFFuJEaGDFDqf|xpEG{FEz(Iqg{FDKDFDIEb˯zIhi4rf{EGwJEv詓oEHpFGFGxnEG)cEHiEExFEGEcFE.EFhEEv稓xEFmEIf{ EH{qdijijijijiijijijhreiijjijiijjiijijiijfl?NJJN혇M홇铰H~wIKLJܔ)铰HM엇QFM엇RKړNii钱IMJOGvNNjJKKLM KiKJ(OKFiL떈KJJL땉/fiNgwOJJhLKOLKێeihihihiihihihhihihpeihhihihihihhihi hoKkQPQNQNh錬{vjh苬{QM iQMMQꨣiiig苫}Qiwihvkk鮳P 텱vj'v풦iQNQQMOiiOwjvjJ{P턲fghghhghghghghghufhhghghghhdl鵤麛DJvnnyJ޼IwnnyJNPQNviȻs ?Y \ZwLb@nb{NbMcz >*oMcLb| <+oc? CDɓT AvR n2  Cx~IHHGK~ EF̣ Uqu  ~T h`^`]4 | h_qt  EIJI hJK g 6 ~ DF E UQ7 TSDjFUD SVIJI  JL  C  2{EE  U b T  S  JJJW I KZ Q*+* FE  U | T I FF\ H  II I2 I L1 D^hFE  V   T gS|m h   JJ I XL MKUI K XLMLW{o/12]EE  V  PI T G H6 H  II I <I K >4 EE  U   T FI3  I  II J J K 1  EE  V  i T  _H S  a  JI J J L U OE  U  34 T ^xV ]  IS IES KD E  V  [ TlD  kE  FiwII{ Kz  E!V  ,Xcu T 0O@ƏK /uJ If@CLh?DďN Kb E!,/+/V  GmT-ʐٜ|-ʏ  cJGeI>HeL <ٚ qEwRwRU  ^E2Uv Vt  JqIWqKW ZV]{wzvz SfTa{UF32_|RuIRa_tQb^ t04STIFJm[ ЗI 9K3r 3~ڔ MLLJLLJQTLPJROTX١|עRn}~ޭ}ݭ Rn}sNLnLOnƒNML}MNPKlMLzLO,MKnMKnMKPLlxPKOLl‚O*LNPLLLLnLLPLLLLLLMyLO޹sRQVPKrR윝SRQqOLORQpRQ,QMsQQsRQO떽RSNO雝*QNPlnQOROOQNQPRoPLQsW| sRoW}WW|p+VRsRWRsRWRWnnXVRWRWnnWRnW{sůŨ| ɨrRoĮŨ}ܪKĮĨ}o-îQsRNyySrRNyyRKn|L¯ RMyySȯ mx|MyyTxnïé|۩sȨ} ~Rnɨ| ӨLȨ}nQ~Rɨ ~RɨLn}LRȩ nLn|ȩLnnƩ}Ҫsp|TR op|Uq}n܇RTRW TRV Un|V ܆RVpnT|VU nq|sTRURSPoTRpO TQ|n0QOSPVSPUn} $POURnVL|UWLnSRnO׹}MLO}MLPPLJĈSKo|MLPNLO|MKz }MLQq/QLMOOLKʼn{MLPOLKň|MLP{MKzKr {LL{QL#OLKˑ{LLPnLL{qLMzLLQLNmyLLSLLQR}nRrnqQQ޾Q{|Ѥ952<8785<85<85<85<85 <8;8<885<885?885>85 >5  ;       ՏvÀk ÀjZSWT]jh ة$ g !$dZWU $I~$" "  \SSUTsSTQjTUUTSJ>:5]K#C%%!/P4$>[DpRk$6]q& YUtVSbU~VQأWV#&!RDn%"%X~(KWTcYWUVVXWYXP%"(.&#)"<0$'$( %(9# )%=[TWS\Tz̪YXO%#(.& !C#7[:/$q! # .$t/j;P!/$F ZWVXWcWUVWabU*';D&!"$".%!$$!'# $J!##!$ "!#%!$$!"bT$+# ".%  #&""&(##%"ڭ1'n( 9n'/- ""h#k     h vlRSSqRSRdTUTU_[vhvkRSRn\RSQ #6XT [qUSz_U~VRY*UXUc[UUzWUgVR~)dAjd "%WXVTXTk YYUcZVYXTZT?%VUTX})Y`VUVVYXkYYUb[YaYUc[ # a]&mvYaVTdWYTXUYaXUbZ(;"\X+$Y_ ]Uz̨^sZTVTYaYUbZs0n9 1am cXVWXZXVVZWWYYan\Xg^%+"  " JyZP6 5xxW{on^c  &( &%,&(*)#($$#*+0610399`7,L//=<6#)# BpPiw40D>HJ(<9=)48I ZOUEZT=i=QZX\l>Yohdiryy|woyrrerSWkltkmZtknPdhcqR/=1B;,;1< 3!{ue>b(<=99.B!x=Yb +<@DB2*,'(! 
2PM)" nxc|xw}xāPLLKOLLK NLKsQLLKՅQSQoQoMLLMo SP{PMQP|釒SP kPQٸJPoO{ O QQ x iOɧOQO NM}yoKOKpLvUpmKmK{ NKqLKHOoLxƼPLPl߬}LKoLxƼPLPm|PmOL|LMOL$PmLzNNmLNMN|O{LNNNyKMݻ zKNݻzKNNMlLzǽyKOkL}KMyzȵTp|~Qknۄqnۄp| PxOOHPQPORnRLPQQOORnMznPPQ$PPONQlMOQkzzQQkPzث xPRثxPRPPPOQPNOz mNzMLւMKׂۑxհՃMx԰ՃM|P| PPհz|}}S캱{Pհz{}(MnR|z{Qz{PPy{O|NnSROyرyzرxzPy{԰PRy{ܻ ޛlzQoJPQkڰrڙoKڙnK|PRLQFOPmOQ}nn߂LyyzPmOQ}xRщ|RQ PPQRωxثxLyثyLyP QmNPLO RN{~N| TPzL̰ͯ| PږLLOPP}LMP| Q{P}LMP|r}P P PPLNryذyرyPP{LNOؑN O|O{q|JxzOTM| M|zP PP{ Pp {pP{ PpPwpP SP OPSPwSxٕyٕyPP{PSSMQT$POKɃvڪMr PJKŸPJK֝PKHPPRxPPQN׻PQxPNPNuPPPPhLNOv,MQzMPzNּMPzNּQ POyPOwOj wMLG݆MKQKʮMLPLHQKMKPKMK MQLKpLL NMĀϕmMMKmOLLvMŀϖnMMLLLL MOLLMMMjLLxmLNLNLL,KLlyKLNʔMLMyKLNʔLLMyM MÀΓpPMLMiKMzNLNLp kpM  pNy pN P KׯQMRMQPOlNRĽ̌ ʋNPoQpP|oN{LNNQOQP߱xx慗xwyyQyyrMJSK~Ljv"PKQKMJMJ~MJOKOKQK(qLxŻQkQK~MKOKRK PL4߬}LKOL}LKPLOLQl ML"nLyoLM{LLzLM{Lky lL{|O{LN zLkyLMmLONMzLlyOn LMONTQTi>SNUiQQQRwwQRP)OnRPRRSJ QS3QLPSiPPSixn Px+PNNOP~PQPPm ON{PQQPlPyNQQPOnPn Pz}y{y zROS伢zPy{ְ~y{z}m Pz{~R컨RNrzO|O yz|r zyy{nR kOzyO|z{xP|PPQy֘LySKRKPQmO}QLy } PQ on߂QMLyyzQL|POM P QMLyyQЈ LyyPMNRQxP}OMNKMPy֎ܗLۗLPPLL ~O } PO R{MڕM} PٕM LNMys PyPؒNLOPyO| PْNP﮿yST޳PP| qP } PP {pRMpP TݵMySOw OyPOyPqPKPMSu R0MtNO׻PPRwPNؼ~ PPQQNuMNּگ P+PhLOMN׻MPy PhNNM׻MPyP M PMQzP QθrMKMOLJ̑ QMK/RLKQLKRLKtRLJRLKMNŀѕmNKMPLLu{ MMm PLLPLLvPLKKLL MOLL+kLLwmLMNLLMLLxKLM͒jLLxLxMLLxKLM͓MMLMlLM MLLNΔMKM MMLMpM  zMSOROQOPOPݜ:MI}ܵQjOJ~MJOJRK~MJP|OKOJQKQKP~MK Rj~ML޻NL}LKXQSNnThQSQSIQSP}wQQRJ$QMPPSnRګyyPRɾ{M}Oz}l zPkSP}myRPz }QڱyzMyuL~Ly}Ly PSӇRJP} PLyyz}ګyPLLyyz WG}} PŌrݗLP}O~ڱyەM Nq } PQwO~RPrRڔxOٹLOع~OعQMQ}RPN׺ OuOSvO׺ڮSLKuRpOJRLKu|SLKuOMRLJN}PMKNQLLv PKQLKMLKɒQLLPLLwK PyyWX_ e+)(()(98(+,+,+,*)w0)+ ,+,,++(q4)+ ,++,,++*J'+.{&+,+)E**+)N(+*2%+(b6)+*+*+ ,u&+**+*+*)D**+*+*+*+*+*)R'*++**++*+*++**++*+*)4%+*++))*+*+*++)f8),+*&)29.'&+*,z&+ )-{ܻ^'+*+)I+*++*+ )6%+*+(ۉ V'+*+*-I'+*5 '+(e%++**+(k <(++*+*++,*+*. %+*+*/<(++*+*+(M+)++**+*7H'*++*+*([%+*+*+)7Z'*+*+**)7 '+*+*+)3_&*+*+*+*+'p ?(+*+*0Y&*+*++*+**-   $**++**+*,N'*+*(Q.)+*+*+**++*,@(**+*+*+**( _%*+*+**+**,<)**+*+*)8 '*+*+*+*5(+'t C(+*+),*+*++**+*+*++**/ $**+**+*++**++**('+*+**++*+*+**(SE@DiVADFD?DBJdDAA?pB\A/)+*++*++*+*('*+*+*+* +*++**+)Q?wlA>C>BY@= C@>>?@@ OJ@nA?e%*+* +**+*++*+*'|&+*+*+*+**+*);AHE@?AX¿BYҬľ C>ݿO@r iZ̓@@A??(**+*+*+*(l%*+*+**+*+'z5C>A??^Y?eIIYKxB^CYvJp[lK{tBHC?ʗAYKxB_7dIIZQAUyzKxB_dBAixBA[xBA[aZR\?AF'**+*+*++*+*+(X|%*+*+*+*++)/*F?f?D>D>E@RS??CA@=|BX{B=BXCA?S??C?f@gBA@=~:D@QS?@BGwBA?AA@=~C@a[@H{AQi?[{ARj?[sgnƽJ$+*+*++*+*(Iq&+*+*+*+(Y&A^F=p?APe@x݈A\C>̪BaC= BXDHw@C?puL@>B?7e@y݇@]^?ت@ΫB?k@o?`?׿_?׾پ2)**+*+**+*+*+**'HZ&*+*+*'C?{AyBdi?h?CQBYªC@A> BXB`@C@?=̫BY+h?BR@hc>BZi@@AA>>z@AA}z@AA}i%*+*+*+*(BE'*+* +**+**(>$F@A>E@@>i?CPBYC@Z@] CXBX@C?ϽBY,g?BQD@@>;BZg@tיJ?A`יJ?A_()+*+*+**):@(**+*+*(>vA@=iA@Gg@vA_CYBgG?}BXBX@C?BZ,g@w@a@oϾBYm@ƵºݺBYݺBZK&+*+*)09(*+*+*+*)1$C?yB?B@T]B=BYBYA? 
BXBY@C?BY,A@U\A>A@@ABZBYBAP]B?FBpb@gHBpa@g$*+*),-)*+**+*+*'Z CB=@`+ɳT??FɻW?}B?@`ǷU>?FǙ@??^ɾ@??^ڽ2(*)*),&)**)*)*)+ νٽ۾ڽھӽ̼FIF?ѽ½½m%*)*)*+%)**)*)*)*(@`?@A@>~ ׼(*+*+*+*+*+*+*++*(%**++*+*+**+**+*'ؼ ٥vԼ߼N&*+*+*+*+&#++*+**+*+*+*)2ܽӽ׽ $**++*+**++*+*+*+(Y^%+*+* +*++*++**++'aռ̼6)**+*+*)63(*+*+*+*)ͻ޻һq%*+*+*+*+*+* )$*+*(DĻ޻))+*+*+* +*)1,)*(ƻ󻁻ݻͻQ&*)+UR))*+*)3μ޼߼׼ؼ$*+*&##%*+*+* *'eϻ9(*)* )*(%5DE:''*)**+źܺܺϺz$*)*)*)*)* &;M%*)*)**(IѺȺغ+)*)*)*)*))* )'Lr%*)*)*)*(?876Tg;@@U:@ٺA;U<: V%**)*)*)*(6V&**)*)**)*)**)**(5ڵڹsqFFsEٹErhϹ&**)*)*)*)*)*)*)*('))*)*)*)*)**)*&orrEFqEFrtͻF'*)*)**)**)*)*)**), >(**))**))*)*)*)**.]87v:7c]97\86gg}s=7]R87C(Rygg|\77w;7\77M;7dBSyjEEhg|[79v~;7rnjw<7^Q8V[88xw;7]w;8^FZ88Q8798eY87Y88}EiY78}qnP79w:7^ZgY89yyCvv%*) *)*)*))*)*)*)4 W&))**)*))**))*)*))*'e?>kbګ?F>FH@Fsڔi@۶<`(G@>EعESɾ>Eb٪aDFF3GA?~lDUɾr\˵DDsٔh?۹@> }lDtٔh'Cuٗ:F>E?۴D?јFD?јr]?۳|lkK'*+*+*+*)2R&*+*+*+**1&ɹFɺaɺahEjӾhŸlkŽgEʺa FʺakļFFF3gFʹFFwTԸEjսhøu ʹFjռh'jռIEʺaĸl˺`˹мEF˹мxTŸljսhUDøʹFtƼ(*+*+;)+*+*'{|4F6~@^XX\s~?^XX\sY|[pqEDkZ}Zp}>_XX]t r}>_XX]tF6kFEE Z|Zp||rF7ErEYD| |rEq"F|>_XX]uEC7{>_XX]w| FE|D8DqEIw{|s|$*+*'u'*+**+*+**+*(Nt{7ImqEj:Dk Hlqعۿ{9kFFE ImsrsAFrFl7e:s rErsFعh:Eۼx9z FFysAg:ErEl\st M'*+*)/D'**+*)1 Gץ~f[qFuaf[p֤~aFFEf]Fqp˦8FqFiʹ FpFqGFu֢~ FFq˥9uqEDiFr2)*)*)*)*)*)**(8Q%*)*)**)?B~fnCCBEmqE@{BCnCq:Cn{FFFADn@}f´qq͛8EqFFYqw@~g´pE|(DzoµFB@nB@EF@q͙8@pF= @}g¶s0)*)*)**)**)*)* )(+cȶ{3&*)**)*)*)*)@65n@75ǾB75\B65[`tvE_67y?6`tB76]v A76\@66ȷ?7EEE`t?67pϷvpЖjDwEoxO:>76qиvD>Ff77qзE@76^\77y=78ȿ?77^¶=76ED<66oДj\78yvEC =67sѷr /)*)*)* )**)*)&(,.)&))*)*)*)*)*)׼ӷķƷʷ׸ڽ׼Ѹ׸ҷлٻڻڻ ׻׸ֻٻ ڽ\D׻ٽi'z׻ڻ·׽ӷԺڻڻԺ׽ڽ ׻ؽ;()*)*)*))*)*)*)*)*)*)*)**)*)*)*)*)** )**8>h Bi$))*)**)*)*)*)*)*)*)*)*)*)**)**)* )8۷ٷаncrзķ))*)**)*))**))*)*))*))*)**)**)**)**)*)*))*)*))*)*)*)**)*&v̶$(*)*)*)*)* )*)*)*)**))*)*)*)*)*)*)*)*)*))*))*&@ڶ'%)*)*)*))*)**)*)*)*)*))*)*)*)*)*)*)*)*)%>ٶO'$&()**)*))**))*)*)**))*)**)*)*))*)*)*('%+iyJ4(''&'&'&'&'&'&'&'&'&''&&'&&'& '&''&'&'&&'&''3Hɶ ϵ   S=6>?99   qf DD   prƴDEк к  IBPtp;5YY75yAppZ76qEEY65rccxft:66YO5RAPuY65st:6ZfY64w86`|ݥI`Epהdظ?)`={hDpט8C=ϖ_֨?Hjiкhƶ`sƵFEEƵGfDDiҹ;gtkǵFiѷIEȵθȽ jpE|?\UUZsrz {DDz{Y{YoEqEXDkz{qEzE4 ioEqp~EDp}HjDnDl5b7iq~oqDx׻x7   _oE1qFDEFdYEo EgDz_FoEEӢ{P{oEBqB|dEEB|cA~CmEo EFVou(zA|dDxmEA~ls?5uDA54Y$p?54mͳDD?44m̳^rDv8DlvM>5@44nͳFe64n̲D?55~>44Ŧ@  ͸Գ׹պ*ո׸׸ոóDz׷Բ9׹ZſCγԷwԷ׸ѵϲ²  8=f  ͯl`o̱                                            ± ª   » ¬ĩ   o½° é   T?o3P*½¹¿½¾üé¿   1A¼± ޱ»é  pmpm+½¾ԯ²¾éé >3¼ĭ©¯ĭ© éè     XU2m@P»è©ð˻èéêèé¿     n,»ç¨ðĽçèèħçç-»è¨¬®çè¨çç»Ĩ    ¼èªµʷéèª èé   -ӺЩƪҶĨϩЩĩЩϩϯ                                  ­½ò  #½©ºå l|¦ ðå  R?k2N­ Ƨ¾½Ĥ¼´¾¾   0>zû¦® Ļãůļ  kiji1§¼´ðżõ̫® ~<1z#ºťƤĥâĻȰû¸Q0j>Mţ¸Ģģ˸̽  ~k|)øò¾ģĤ·ûã  )ŨƦ¿Ǥģè««ã»ã    ŤȶĢšDZáжѳ̤ʣ΢̪                  ª»â¢á §£çzhwĠ˨§ʧ  O @G  J I  VV*In  ,Gp YFpD V D U0*+, n J J W K9X   K [M     J J J Ke M  d : dMM =L Y- =LY-8/1Cf I J J K Y W )[ WMM\  +I J J J Ko@  W J  <Y JMMN   >H J J JK ) c ׈+ bMM? 
N PՅH J J J Kv   6db MM  bdHql J J I K  [ MMDu h8tu g7v]Hj ; ; JK=Xm~ w O yMMmBf.}f.~H V U J K<Vz;y:WyMMuwX=V>wHS   J K^E2K  IMMCiSWSVW6LKNc ne4H/ eg PF/f p44F03 E1Rywb1@{M eƻTLKLLINyPKNKNMx֥zPܰ yP۰}MK1QK޹Oi}MJoLtOK}MLQK,|MK޺|MK޹|MKNKOOKNLNl*{Lkw{LK{LKPmLLNLzLLmLLzLMmLwܰORI۪xqOPRJwPQ-OwګxOTګyNTx|OPlxp+NUjNSNPPMNmNSPLNNQKO۹۰S| Tڰy}PS|پ R|y&yڱyxڱyx PQ} xyP|Qx}PQ{}۰ ڪy}PTJ P xګyMy ګyMySK PRK| yMyyz ONMyyzNPֶ۰ ڰy}OJ P xڱx ڱyߘJ PK|y PLPLPP׶ۯSSڑxrPSRRPyڒy ڒx PpyPP RR۪PzDPOSuPPzQmPzO0MQvORvPԻORvPԺة P $MSwPӻOPLPӻLPO{PnܲRLK1SMKOLJΕPJPSLKsMKvSLJSLKS/OLjwNLLΖRLLwNLLΕRLKxRLKK S RLKOK"OLLԖRLLxPLLSpLMQLLyqLMOPLLqLLyPڹPx~Q{PlߴP!""&,",(",((",((%-()%-((%,)(%,()% Ævum锕 um,((%YSVU^nJJ"" ,((%[T`r6'"!,()%USSUS[TS^^UT[v4+P]@:=+WpEX2'oBZ<'A# r]mW,()%VTtUSWTVPρc}V)J539>@޳߳|V#0q* $#',((%Z[XSߦVSXSoI6P9?!3OE'F+Q),)(%\cZTVUVVXVoI69AP9@!3UP"W+),((%\cWSY[pI69@P8> !C'&@$%(,((&YSVSZTȦoH68APo3:%wb{# *v.%c ~$Y,((%YVWXV\VVUzqL9=DV}-K-&'#"To$,((& :@#G $ $,((%8@ʕ8ɰ&&,((%Wcrs#,((%,)(%,((%,((% ,()%,)(%,((%b X,((%kcSSZZSS_[UUTUrowebjcSSZs^SSQ @#^\ r[,((%uUUVTVPWTadYTtUUVTVQ1o)ll %,((%u\VTWTYTadYSs\VSYS)GUT+S,))&mZVUVWYXYTadYTkZadYT!jjV+,((%mZY\ZT acVTlZadYS((?b_%',)(%mZ[TǥeXXTVSkY adYT)w/|d,)(&wb[WVWzZWVlZVWYXvbdg^W&'",))%"1)50*&#&&##$&)(,:C>JUMNLKFROVXYQQRSNONRSSQSXSFKH@EFD?C@@DCAAIJPIIURQOOLJJJJNPTTX``]``_[]\ZVXNRTNMPQQQNMNa^_VfiUWR9DB<8,/,/ 0@p`@0@߯@@ pϏ@ p߿` `߯`@@ @@`P pߟP ` 0P @0  @p``P@00@``P  P0@`0p@ @@P P@P`PP `` `P`00 P0@p0`0``@```߀``` `0`0` ` `0 0`0`ϯp0 `` 0ᅬ@P 0P߀ 0@`P @@0Ͽ@PP P@P  @ 0@@@@P``P`@P@@@@`@ 0@` @@p`pPp `p @ P00@@ 0@@P0 @P@PϿP@@`@P@`p@@ @@0@0 @pP@@`@P00P@  ``@p@00P`@` @0 p@P0ppp P pP`Pp@ ``@@@P`0@p0 @000 P@P` @p0P@@@p@@`@ @@@@ @P`` p@@@P @π`pP 0@P@@``@pp@ 0``0 @@0@0 `@P```@`0`0P@@@ @P@P`P`@0@ p@@@`@0@@@p@@@`0@@@@`@@@ @00 @@pP@p@`pP0 @@@@P@@0@`P@ @p0@@@0@@ 0@ `PP@00@0@00@@0@pPPp@p @ @@ @p@p`0Pp@0@`` @`0P@ @`@``` @ @@`@Pp @@@@`` @pP0@ @` P@0@ `@@@@` @P@ @@@@ @0p`` 0@@@0 @@@@@ @0`@ @p@@@0Pp0@P0@@@0 @`@p@@@0 @00pP @@@`0@p`@ `p`@P@@@00@@ ߀  `@@00@@p@0P  @@`p @@ @p`߀ @`@@ P0P@@`0@p@@@@P @@@` @@@`@@@0P@@@0 @@@`` @@0@0@@@`00@pp0p0@@ `@0@@p`0`P@`@@@`@`@@@ p @ ``0@@0  @@ `PP Pp@`@ @ @@ `@@0@@@@ `0P@0P@p@00`@@@``@PP` @`p@@`` ߏ@P@ @ ppp` `p@@@0``p` 0```P @` `0@p0@P`0PP@ P @@@0@p`@pPp0 p0 `0` `@`p߿@ 0@ ￿p@pp@ 0@pp`PϏP `0@ߟ@00 @@P``P ߀ pPp @   p`0@@P@0p@pp@@`P @pp@@@`@@0`p@Pp@p@PP` @ p@ @@ P ``@P@ 0``0P P ￿P@ p@ p`@ ```@p@  @@`ߏ @@0P@ߐP`@ @`p @p00@p`   濴߿뿺   ************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************** '799{   pp  P 0p0`0 @ π` @  pP0`@@P` P `0@0@0 P`@@@0@ @`P  0P0 @p @```pp @ 0@ @ ``pp0``pp0 p@p0`@P p@0@ Ϗp0P0p@P 0p`@`@` p @p @P@@P@Pp@pP@@P@@00p` P`0 `@ `@0 0 0 @0Pp@ Pp`@``@`p` @@p0@0p@0@p ߿@p@@p`Ϗ@P`@@ @@@0@0` @@@@@@@0@@`@0߯`@`@@`p0 0@0P@ P`@@@@@@@P0`p`@@@`@`@ @p p ` @ ``p@@ @@ P`0pp 0@@@@0`@@@ 0@`0PP 0P@@0@ pߏ@ pߏ0ϏPπ@@@@PP@ P`0  ﯿߏ0πPϿ߀@`ﯟPϿ߀@`ﯟ`߿ P0@@@@ @@0@ߏ 0@ pp `pPp PP0@pPP0@pPP`@P`0`@ `0@0@0 @@@@P`@@@ `@Pϯ@@@`0`@p` 
"*&&Zq^jxpWw,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, "*&&Zq^jxpWw,,,,,*,,,,,,**,,,,,,,***,,,,,**,,,,,,******,,,,,,,,,,,,,,,,*****,,,,,,,,,,,,,,,,,,,,,,,,,,****,,,,,,,,,,,,,,,,,,,,,,******,,,,,,,,,,,,,,,,,,,,*******,,,,,,,,,,,,,,,,,,,,,,,,********,,,,,,,,,,,,,,,,,,,,,,*********,,,,,,,,,,,,,,,,,,,,,,,,,,,**********,,,,,,,,,,,,,,,,,,,,,,,,,,,,*****,,,,,,,,,,,,,,,,,,,******,,,,,,,,,,,,,,,,,,,,,,,,*,,,,*,,,*,,,*,,,*,,,,*,,,*,,,*, "F]NR\T_^E_ ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, "##$$$%& *4ALTTWZ`hepovR.$$               !"!    !#% #    $'())'$   #'*,-,*'#  !&*-/00-*%  $')*)+,'" )::; <>?BEIMRVZ^ac L,)$  2.% V5&!  n;("  r>(#  q>)$ q>*% r>+% r?+& r?,& r?,&  r?,'  r@-'! r@-(! s@-(! s@.(! rA.(! qA.(! q@.(" q@.)" q@.)" qA/)" r@.)" q@/)" p@/)" p@/*# p@/*# p@/*# p@/*# o@/*# o@/*# o@0*# o@0*# n@0*# o@0*# n@0*# n@0*# m@0*$ n@0*$ n@0*# m?/*# m@0*# k?/*# k?/*# k?/*# k?/*# l?/*# k?0*# j?0*# j>0*# j>/)# j>0)# j>/)" j=/)" i=/)" i3-& #g?3.'  #h@3.'  jA4.'  kA4/(! mA4/(! oC4/(! qD50)" !rD50*" #uF51*# #xG61+# %|I62+$ 'J72,$ (L72,% ,N83-% /P83-% 2R93-& 7T93.'  <V93.'  @W:4.'  GY;4.'  Q[;4.'  \^<4/(! h`<4/(! ub=3/(! e>3/'! g?3/'! i@3.'! m@2.'! qA2.'! ȅvC2.'!  Ӆ{E2-'! ޅG1-'  I2-&  )L2,&  4M2+% GO2+% [R2+% oU3*$ X4*$ [4*$ _6(" c6'" τi7'! ߄n9&  s;%  (y=$ <~># R@" hA!  D!  ż  F! Żz#tniea^YUSSQOONOPRSTX\`djov| I  xphaZTOJHD@><<:987656789:;>AEINU]enx K ˟̽|si`XPJE@=;9866554321211234579=BJT^k| O ݥµyoe\TMEA<9754323210/.-./0.-,/4>JZn U Ǹ{pf[RIB=96420/.-,+*)*)*+,+*)'%"  &1ATp Z" %Ⱥ{qg\RJB<731/.-,+*)('&%$#$%&'&%#" .C_ _# :Ǹype\RIA:51.-,+*)('&%$##"!  !"!$;Y e$ Oʼ{ri_ULC<61-+(('&%$#"!  "9Z p& eоwmd\SIA91,'#"! !!"#"!   $9=:9:::<9;;;;?IHLLKLOLLIGIE@ACCHKKKLLAC<;;<?>AB;81.$(**5Ohmmljiid\D&'&$$#(+.IOQRONLMH<+% ,:2,+)(&'$$(:A0 h(68:#~~~~}zyyyxxusttttonnnmmjhhh hhdbbbbb ^Ĵ ]Ýɻ]֣о\\Ͻ\XW ޽WجĹV뵟ǹVᤦV࡬VܢPןPڿ ԟP ͫٲP ӾP OݲOβίK·IؤIŮI޻IȬIIƾάGνʽCӻȺBظأȹC߶֨ȹCܟƭǸCŴǸCȶƷBۙѵǷ<Ѵ< Ѳ< ٳӹ<Öٱǧ<͕ڰ<ؖ؟<ږĨɢ6Ź♤5ŷ훡Μ5Ʋ4ɾ4۸͕4㖨40˼֦-̨-ɕŬ-뽑- ˏ±ԙ- ɷ򶏟-˧ٙ-֫ō-ߩӼ,йк䦏&겐%̷񻐒%ӿǼɶ ɕ%ȳɱ ן%ܮ̻堌%Ѽ ȷ%ퟶͷ ŵ% ɹ➦% Ӯǰ%Ͻɼ%Ľ졽%朾%ݠݠܠ蟿ܠܠ۠۠۠ڠڟٟٟٟ؟؞ܩןܢ{מ̾xםȵƩo֜­j֜Ŀ=՛1ӆ2ӆ2҆2҆1ц0І#0І$0φ)/ζ./2.6/9. =.A- ؿſĿF-ٶοǿƽK-żϾÿǿ μP-żŽ⽿ƿʻ˺鯵ƽ ͬU-½öǾƽؼƽ Y-ĽټŽ޾ϼ׽弿 ؾ],Ƹٸݵ컽Ѵ򸼾 ⳾a-潼ӿľ۹ռ߽Ҹ绾¼ Ѻûg-¼˹տĽܾܽӼѾڽľܾýڿ οͼn-þԾýļսýҾ;þ þƼw.ļҾļ ¾΀.ļӾĻ̇.ܻüú̿ʎ.ۼú̿ʖ.ݾúϾɞ.ȸ¿¹ѼǤ/»ƫ0ſŲ *ĶEB_ù0/ݺջ½(z׺ ۺ0ɿֹ Һٹο9ʿѺѹ ȼٹοAʿϻ˵ ½ҺοHʾ ̺ѶξN˾ƼỺ)Ǹ澽ڻ뻿ݹ͹;U̾ĺſгʺ)ҭ߶곶Ӳᶿ;a37;ƶʺ)ޭ˹鶿;p ̽꺷޲(Ըµջ𶿷굾ͽ| ˽췺۲º(ö޶賶п굾ͽƇ"ʽ⵻(㮺Ĺҷﵾܳµ󷽽ͽő#ʼдմ 쳽˳ӹͼÚ ʼ㵽̺״ͰϷ̳ͼ8ɼߵºúͼ 0/|ɼ۶ŷͼqɻݴͻ#uɻ۲ͻ.xɻǸͻ7vɻ¸ͻA:;Ⱥ̺I%Ǻ̺Z̺k̺z̹%;̹k„̹fĄ۹ jć jÏ }rdUJJG?BJJR`n{#tÕ½jO;/# .=[}-%iI.+Q{6 yaA&!P¼@ÿ{a@=xVºfM,Gi ýpS8 )qºynzV?$h¹^¾bA+p\ŷrT5'. ^÷ѷ{cD)`eʺzjV="4"5X5 (5CL^l|kbUE0 3# .>MVXaacuvwvvwveaaXSKC;.&  &'"9CAGGB>FJEHNLFFCB>?@>:99;;9;;;==?GHKMNNOQLKMKIF@ABAGJHGGJ@A::89<<?;75++% !#(-?RekjjihfcWA,!""!%)BILONKHHED6'                                                                               =dbaZǷ|gZ+   [ ^L    AA  @z % %    y]a`fh !.   `,!   ^лq/  `y:?  P!  #K sn  P     < Mt  n  [# +  u   7M  H   dQ%wµZ  * D  T0%+3    ?(B    nWC`  +  ꂽ    zC +C"    uZ 2}  4   `t ^^ D    G ?.  3`   6  )c|z)  Y   ) 7  m       P     8  B     {q}  C    5U !   ( Z5  <i   ) Df     u:  9     ]> 9q e    HL !  
0O  8X /  f  8o  ^9M c  ) Qa c  $   c    G  c    {k  c    Vc    c    6]g S  c      Q6 f       ,H N    ?    O    {W         r  9U  hv   3 T  _ 5  }D   r ~  xG   gm 6~  i_E  dW    Yc p  :     VfFl ( )  h  D _   rU  DZ       .  [     r !/  %  p! e ,  5T  ݿ> [  2 Ms   D p 2 n  Z  'pU  D^tB     ? HGW   ZdI n J88^    8 9E ^sC   k ]< hw  E a ']?E?\  " 1 =    Z   c$ q     [ x   & 'o 2   = X +7 C}5B D<  v!yAM {{ {{{{ { { { |g;+.:CV@ && 0!)d>4Gz" 3-  yEo Jn%Y W_S: &Ig I4"-= ]S K D G, q @-*WX'5-C\rXtr-my< D G ! Q; <}]nvubT ݘs2XW/q1u aj\D@*̫g.X)+'U Eֈ7IO˛ƺ8 ޼''3T(EЀ6  -U~Cvs ̖|̃6TF/6U/̎V%Dt1wnw,2d}D$ mľrsl'}^ηRl ֛ m   ZrQ sr2j xOʇz [K黎UD nw?x  kY  ,~;?ܨ E wX }{NPx{ N;`v)9w&5t sx}L' yw,q Mf i 2~ W != y   Dޯ  S*L)mU&ݠ1k#"'SR?4o[)z"|zJJ ]29?S4Wie )ØH&+[8zؑY9OҚݲQ9F/KF)٣(z/ՉR; 3nb-oWBHդA3'/rAq7?Nk $(fL~{)nC{:CV Lw)S `6 0f a*eVn lrd+tA ?KA @)Jabw; Y>?3"'!9B@JGC?GKFKPMJGED@CB=<<:;;:>;<=>?GGKMNNRRNMOKHEBDFBHLKGILDB=<:<=;A=;7,.' '(+8Vgjkjjige`B,!""!')@LOPPNKJII?.! $                                       Digf`ɺl_1 ^"   dO    G@  Dy  (  (  ! y]a`fg (5"  `,!  $cѼu3  `y  @C U"   #Kr s P  ;St  n[ +2  t =SH d  T%zµZ ,C T0%+3 ! ?(B n \C ` +  ꂽ   zC  1C"   uZ2   4at^^D  J@ 4   3`  8 +g|z)Y +7m  P :  B  !{r} B  9U  ( ]5<i ) Df  v:9^> ?qe JL! 1O :X 5  f9o^9Mc *Vbc 'c  G c k c [ cc ;] !gX% c  $ W6f   3GN   ?   R   W   r  =Ulv   3 Y c 5 }D #  s yG   gm <~ j_J  fW   Zct   : WfFl ( )l  F_  vU EZ   / [  ! r "/   &%  t! e 0 :T   ݿ>Z 5  Rs Dp4 n &Z  .pU E^wB   ?IG \  Zd I  mK8=^    89E_sC  j ]< hx  E a']BHA\  "  1  =     Z     c$  q    [ x  & 'o2   =X +7   C}  5B        D< v!  y   AM   { {     {   { { { { {  {  | g ;+ .  :  C  U@  '&  0     !    )   d>                    6G     z$    4  /   xG     o Kn  '    [ W  `T  ; '     Jh  J 6#-=  !]T L  C H-   pA   .   *WYӿ'6- D\rXts.  my= E G #Q ;   <}\mvv bU ݘs3 XW0q2u ak]DA  ,̫f.X ),( U G  ֈ9IP˛ƺ: ޼*(   4T(F Ѐ6  "-V~Evs ̖{˃    7TH17U/̎  W &D t1wnx,4 d} D%   !lþs r l'} ^ηRl ֛ n     [rR s r  3j  x Nʇz [L雷VE     nw @   x  kZ -~<?ܧ     G w Y      } { P      Q  x {  O   ;  a     v      +: w'   6  t    s  x  } L(    x w-p Ng i 4~Y  )> "!y  Eݯ!)ߒS*L)mU'ݠ 2j %$(SR?4o[(z"|zJJ ] 39@S5Wif(ÙH',\8zؑY9PҚݲQ9G/LF)٣(z0ՉS= 3n b-o" XBHդB4(/rAq8@Nk %(gM~{*nD{;CWLv* T`60f b*eV n mqd, tA @KAA*Kbcw<"Y?@3     -%#!+wIONLJJJJJGEEEEEB@@@@@=;;;;|;x8x6x6x6x6x6x3x1x1x1x1x1x.x,x,x,x,x,x*x(x(x(v(r(r(r$r#r#r#r#r#r#r rrrrrrrrrrrrqlllllllllgeeeeeb _ _ _ _ _ ^ Y Y Y YYYUSSSSSPLLLLGFFB@@<::733.+'%!  
ߊ ۊ ׊ԊЊˊʊ ȊĊĊĊ &+05:@FÊMĊVĊ`Ȋjʊt͊~Ҋ ֊ ۊ " *5A$L+Y3h<vIZjyʼn׉ +<Oa&u1CVtbQA22/&% )22=K]pjʖ д|[:" %Iw}ޜš~Z3=uὖqP) <|&̟tP*(r7ճX74Hټd?!m ZrD( eoڻyO+ ʨd= ̮sP+ ƫu[?#  '2J`zŰs^O=*"3>COOSg VOOC;0( 28BIMPatt8BIMTxt2t /DocumentResources << /FontSet << /Resources [ << /Resource << /StreamTag /CoolTypeFont /Identifier << /Name (ArchitectsDaughter) /Type 1 /Synthetic 2 >> >> >> << /Resource << /StreamTag /CoolTypeFont /Identifier << /Name (ArchitectsDaughter) /Type 1 >> >> >> << /Resource << /StreamTag /CoolTypeFont /Identifier << /Name (2Dumb) /Type 1 >> >> >> << /Resource << /StreamTag /CoolTypeFont /Identifier << /Name (3dumb) /Type 1 >> >> >> << /Resource << /StreamTag /CoolTypeFont /Identifier << /Name (MyriadPro-Regular) /Type 0 >> >> >> << /Resource << /StreamTag /CoolTypeFont /Identifier << /Name (AdobeInvisFont) /Type 0 >> >> >> << /Resource << /StreamTag /CoolTypeFont /Identifier << /Name (TimesNewRomanPSMT) /Type 1 >> >> >> ] >> /MojiKumiCodeToClassSet << /Resources [ << /Resource << /Name () >> >> ] /DisplayList [ << /Resource 0 >> ] >> /MojiKumiTableSet << /Resources [ << /Resource << /Name (Photoshop6MojiKumiSet4) /Members << /CodeToClass 0 /PredefinedTag 2 >> >> >> << /Resource << /Name (Photoshop6MojiKumiSet3) /Members << /CodeToClass 0 /PredefinedTag 4 >> >> >> << /Resource << /Name (Photoshop6MojiKumiSet2) /Members << /CodeToClass 0 /PredefinedTag 3 >> >> >> << /Resource << /Name (Photoshop6MojiKumiSet1) /Members << /CodeToClass 0 /PredefinedTag 1 >> >> >> << /Resource << /Name (YakumonoHankaku) /Members << /CodeToClass 0 /PredefinedTag 1 >> >> >> << /Resource << /Name (GyomatsuYakumonoHankaku) /Members << /CodeToClass 0 /PredefinedTag 3 >> >> >> << /Resource << /Name (GyomatsuYakumonoZenkaku) /Members << /CodeToClass 0 /PredefinedTag 4 >> >> >> << /Resource << /Name (YakumonoZenkaku) /Members << /CodeToClass 0 /PredefinedTag 2 >> >> >> ] /DisplayList [ << /Resource 0 >> << /Resource 1 >> << /Resource 2 >> << /Resource 3 >> << /Resource 4 >> << /Resource 5 >> << /Resource 6 >> << /Resource 7 >> ] >> /KinsokuSet << /Resources [ << /Resource << /Name (None) /Data << /NoStart () /NoEnd () /Keep () /Hanging () /PredefinedTag 0 >> >> >> << /Resource << /Name (PhotoshopKinsokuHard) /Data << /NoStart (!\),.:;?]}    0!! 0000 0 0 0000A0C0E0G0I0c000000000000000000000000 =]) /NoEnd (\([{  00 0 0000 ;[) /Keep (  % &) /Hanging (00 ) /PredefinedTag 1 >> >> >> << /Resource << /Name (PhotoshopKinsokuSoft) /Data << /NoStart (  0000 0 0 00000000 =]) /NoEnd (  00 0 000;[) /Keep (  % &) /Hanging (00 ) /PredefinedTag 2 >> >> >> << /Resource << /Name (Hard) /Data << /NoStart (!\),.:;?]}    0!! 
0000 0 0 0000A0C0E0G0I0c000000000000000000000000 =]) /NoEnd (\([{  00 0 0000 ;[) /Keep (  % &) /Hanging (00 ) /PredefinedTag 1 >> >> >> << /Resource << /Name (Soft) /Data << /NoStart (  0000 0 0 00000000 =]) /NoEnd (  00 0 000;[) /Keep (  % &) /Hanging (00 ) /PredefinedTag 2 >> >> >> ] /DisplayList [ << /Resource 0 >> << /Resource 1 >> << /Resource 2 >> << /Resource 3 >> << /Resource 4 >> ] >> /StyleSheetSet << /Resources [ << /Resource << /Name (Normal RGB) /Features << /Font 4 /FontSize 12.0 /FauxBold false /FauxItalic false /AutoLeading true /Leading 0.0 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /BaselineShift 0.0 /CharacterRotation 0.0 /AutoKern 1 /FontCaps 0 /FontBaseline 0 /FontOTPosition 0 /StrikethroughPosition 0 /UnderlinePosition 0 /UnderlineOffset 0.0 /Ligatures true /DiscretionaryLigatures false /ContextualLigatures false /AlternateLigatures false /OldStyle false /Fractions false /Ordinals false /Swash false /Titling false /ConnectionForms false /StylisticAlternates false /Ornaments false /FigureStyle 0 /ProportionalMetrics false /Kana false /Italics false /Ruby false /BaselineDirection 2 /Tsume 0.0 /StyleRunAlignment 2 /Language 0 /JapaneseAlternateFeature 0 /EnableWariChu false /WariChuLineCount 2 /WariChuLineGap 0 /WariChuSubLineAmount << /WariChuSubLineScale .5 >> /WariChuWidowAmount 2 /WariChuOrphanAmount 2 /WariChuJustification 7 /TCYUpDownAdjustment 0 /TCYLeftRightAdjustment 0 /LeftAki -1.0 /RightAki -1.0 /JiDori 0 /NoBreak false /FillColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> >> /StrokeColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> >> /Blend << /StreamTag /SimpleBlender >> /FillFlag true /StrokeFlag false /FillFirst true /FillOverPrint false /StrokeOverPrint false /LineCap 0 /LineJoin 0 /LineWidth 1.0 /MiterLimit 4.0 /LineDashOffset 0.0 /LineDashArray [ ] /Type1EncodingNames [ ] /Kashidas 0 /DirOverride 0 /DigitSet 0 /DiacVPos 4 /DiacXOffset 0.0 /DiacYOffset 0.0 /OverlapSwash false /JustificationAlternates false /StretchedAlternates false /FillVisibleFlag true /StrokeVisibleFlag true /FillBackgroundColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 0.0 ] >> >> /FillBackgroundFlag false /UnderlineStyle 0 /DashedUnderlineGapLength 3.0 /DashedUnderlineDashLength 3.0 /SlashedZero false /StylisticSets 0 /CustomFeature << /StreamTag /SimpleCustomFeature >> >> >> >> ] /DisplayList [ << /Resource 0 >> ] >> /ParagraphSheetSet << /Resources [ << /Resource << /Name (Normal RGB) /Features << /Justification 0 /FirstLineIndent 0.0 /StartIndent 0.0 /EndIndent 0.0 /SpaceBefore 0.0 /SpaceAfter 0.0 /DropCaps 1 /AutoLeading 1.2 /LeadingType 0 /AutoHyphenate true /HyphenatedWordSize 6 /PreHyphen 2 /PostHyphen 2 /ConsecutiveHyphens 0 /Zone 36.0 /HyphenateCapitalized true /HyphenationPreference .5 /WordSpacing [ .8 1.0 1.33 ] /LetterSpacing [ 0.0 0.0 0.0 ] /GlyphSpacing [ 1.0 1.0 1.0 ] /SingleWordJustification 6 /Hanging false /AutoTCY 0 /KeepTogether true /BurasagariType 0 /KinsokuOrder 0 /Kinsoku /nil /KurikaeshiMojiShori false /MojiKumiTable /nil /EveryLineComposer false /TabStops << >> /DefaultTabWidth 36.0 /DefaultStyle << >> /ParagraphDirection 0 /JustificationMethod 0 /ComposerEngine 0 /ListStyle /nil /ListTier 0 /ListSkip false /ListOffset 0 >> >> >> ] /DisplayList [ << /Resource 0 >> ] >> /TextFrameSet << /Resources [ << /Resource << /Bezier << /Points [ 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ] >> /Data << /Type 0 /LineOrientation 0 /TextOnPathTRange [ -1.0 -1.0 ] /RowGutter 0.0 
/ColumnGutter 0.0 /FirstBaselineAlignment << /Flag 1 /Min 0.0 >> /PathData << /Spacing -1 >> >> >> >> << /Resource << /Bezier << /Points [ 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ] >> /Data << /Type 0 /LineOrientation 0 /TextOnPathTRange [ -1.0 -1.0 ] /RowGutter 0.0 /ColumnGutter 0.0 /FirstBaselineAlignment << /Flag 1 /Min 0.0 >> /PathData << /Spacing -1 >> >> >> >> ] >> /ListStyleSet << /Resources [ << /Resource << /Name (kPredefinedNumericListStyleTag) /PredefinedTag 1 >> >> << /Resource << /Name (kPredefinedUppercaseAlphaListStyleTag) /PredefinedTag 2 >> >> << /Resource << /Name (kPredefinedLowercaseAlphaListStyleTag) /PredefinedTag 3 >> >> << /Resource << /Name (kPredefinedUppercaseRomanNumListStyleTag) /PredefinedTag 4 >> >> << /Resource << /Name (kPredefinedLowercaseRomanNumListStyleTag) /PredefinedTag 5 >> >> << /Resource << /Name (kPredefinedBulletListStyleTag) /PredefinedTag 6 >> >> ] /DisplayList [ << /Resource 0 >> << /Resource 1 >> << /Resource 2 >> << /Resource 3 >> << /Resource 4 >> << /Resource 5 >> ] >> >> /DocumentObjects << /DocumentSettings << /HiddenGlyphFont << /AlternateGlyphFont 5 /WhitespaceCharacterMapping [ << /WhitespaceCharacter ( ) /AlternateCharacter (1) >> << /WhitespaceCharacter ( ) /AlternateCharacter (6) >> << /WhitespaceCharacter ( ) /AlternateCharacter (0) >> << /WhitespaceCharacter ( \)) /AlternateCharacter (5) >> << /WhitespaceCharacter () /AlternateCharacter (5) >> << /WhitespaceCharacter (0) /AlternateCharacter (1) >> << /WhitespaceCharacter () /AlternateCharacter (3) >> ] >> /NormalStyleSheet 0 /NormalParagraphSheet 0 /SuperscriptSize .583 /SuperscriptPosition .333 /SubscriptSize .583 /SubscriptPosition .333 /SmallCapSize .7 /UseSmartQuotes true /SmartQuoteSets [ << /Language 0 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 1 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 2 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 3 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 4 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 5 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 6 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( 9) /CloseSingleQuote ( :) >> << /Language 7 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 8 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 9 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 10 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 11 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 12 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 13 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 14 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 15 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 16 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 17 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) 
/OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 18 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 19 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 20 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 21 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 22 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 23 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 24 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 25 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( 9) /CloseSingleQuote ( :) >> << /Language 26 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 27 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 28 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 29 /OpenDoubleQuote (0) /CloseDoubleQuote (0) >> << /Language 30 /OpenDoubleQuote (0 ) /CloseDoubleQuote (0 ) >> << /Language 31 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 32 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 33 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 34 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 35 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 36 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 37 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 38 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 39 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote (<) /CloseSingleQuote (>) >> << /Language 40 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 41 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote (<) /CloseSingleQuote (>) >> << /Language 42 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 43 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> << /Language 44 /OpenDoubleQuote () /CloseDoubleQuote () /OpenSingleQuote ( 9) /CloseSingleQuote ( :) >> << /Language 45 /OpenDoubleQuote ( ) /CloseDoubleQuote ( ) /OpenSingleQuote ( ) /CloseSingleQuote ( ) >> ] >> /TextObjects [ << /Model << /Text (RQ ) /ParagraphRun << /RunArray [ << /RunData << /ParagraphSheet << /Name () /Features << /Justification 0 /FirstLineIndent 0.0 /StartIndent 0.0 /EndIndent 0.0 /SpaceBefore 0.0 /SpaceAfter 0.0 /DropCaps 1 /AutoLeading 1.2 /LeadingType 0 /AutoHyphenate true /HyphenatedWordSize 6 /PreHyphen 2 /PostHyphen 2 /ConsecutiveHyphens 0 /Zone 36.0 /HyphenateCapitalized true /HyphenationPreference .5 /WordSpacing [ .8 1.0 1.33 ] /LetterSpacing [ 0.0 0.0 0.0 ] /GlyphSpacing [ 1.0 1.0 1.0 ] /SingleWordJustification 6 /Hanging false /AutoTCY 1 /KeepTogether true /BurasagariType 0 /KinsokuOrder 0 /Kinsoku /nil /KurikaeshiMojiShori 
false /MojiKumiTable /nil /EveryLineComposer false /TabStops << >> /DefaultTabWidth 36.0 /DefaultStyle << /Font 4 /FontSize 12.0 /FauxBold false /FauxItalic false /AutoLeading true /Leading 0.0 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /BaselineShift 0.0 /CharacterRotation 0.0 /AutoKern 1 /FontCaps 0 /FontBaseline 0 /FontOTPosition 0 /StrikethroughPosition 0 /UnderlinePosition 0 /UnderlineOffset 0.0 /Ligatures true /DiscretionaryLigatures false /ContextualLigatures false /AlternateLigatures false /OldStyle false /Fractions false /Ordinals false /Swash false /Titling false /ConnectionForms false /StylisticAlternates false /Ornaments false /FigureStyle 0 /ProportionalMetrics false /Kana false /Italics false /Ruby false /BaselineDirection 2 /Tsume 0.0 /StyleRunAlignment 2 /Language 0 /EnableWariChu false /WariChuLineCount 2 /WariChuLineGap 0 /WariChuSubLineAmount << /WariChuSubLineScale .5 >> /WariChuWidowAmount 2 /WariChuOrphanAmount 2 /WariChuJustification 7 /TCYUpDownAdjustment 0 /TCYLeftRightAdjustment 0 /LeftAki -1.0 /RightAki -1.0 /JiDori 0 /NoBreak false /FillColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> >> /StrokeColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> >> /FillFlag true /StrokeFlag false /FillFirst true /FillOverPrint false /StrokeOverPrint false /LineCap 0 /LineJoin 0 /LineWidth 1.0 /MiterLimit 4.0 /LineDashOffset 0.0 /LineDashArray [ ] /Type1EncodingNames [ ] /Kashidas 0 /DirOverride 0 /DigitSet 0 /DiacVPos 4 /DiacXOffset 0.0 /DiacYOffset 0.0 /OverlapSwash false /JustificationAlternates false /StretchedAlternates false /FillVisibleFlag true /StrokeVisibleFlag true /FillBackgroundColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 0.0 ] >> >> /FillBackgroundFlag false /UnderlineStyle 0 /DashedUnderlineGapLength 3.0 /DashedUnderlineDashLength 3.0 >> /ParagraphDirection 0 /JustificationMethod 0 /ComposerEngine 0 /ListStyle /nil /ListTier 0 /ListSkip false /ListOffset 0 >> /Parent 0 >> >> /Length 3 >> ] >> /StyleRun << /RunArray [ << /RunData << /StyleSheet << /Name () /Parent 0 /Features << /Font 3 /FontSize 157.0 /FauxBold false /FauxItalic false /AutoLeading true /Leading .01 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking -20 /BaselineShift 0.0 /CharacterRotation 0.0 /AutoKern 0 /FontCaps 0 /FontBaseline 0 /FontOTPosition 0 /StrikethroughPosition 0 /UnderlinePosition 0 /UnderlineOffset 0.0 /Ligatures true /DiscretionaryLigatures false /ContextualLigatures true /AlternateLigatures false /OldStyle false /Fractions false /Ordinals false /Swash false /Titling false /ConnectionForms true /StylisticAlternates false /Ornaments false /FigureStyle 0 /ProportionalMetrics false /Kana false /Italics false /Ruby false /BaselineDirection 1 /Tsume 0.0 /StyleRunAlignment 2 /Language 14 /JapaneseAlternateFeature 0 /EnableWariChu false /WariChuLineCount 2 /WariChuLineGap 0 /WariChuSubLineAmount << /WariChuSubLineScale .5 >> /WariChuWidowAmount 2 /WariChuOrphanAmount 2 /WariChuJustification 7 /TCYUpDownAdjustment 0 /TCYLeftRightAdjustment 0 /LeftAki -1.0 /RightAki -1.0 /JiDori 0 /NoBreak false /FillColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 .74902 .16445 .00002 ] >> >> /StrokeColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> >> /Blend << /StreamTag /SimpleBlender >> /FillFlag true /StrokeFlag false /FillFirst false /FillOverPrint false /StrokeOverPrint false /LineCap 0 /LineJoin 0 /LineWidth .39991 /MiterLimit 1.59964 /LineDashOffset 0.0 
/LineDashArray [ ] /Type1EncodingNames [ ] /Kashidas 0 /DirOverride 0 /DigitSet 0 /DiacVPos 4 /DiacXOffset 0.0 /DiacYOffset 0.0 /OverlapSwash false /JustificationAlternates false /StretchedAlternates false /FillVisibleFlag true /StrokeVisibleFlag true /FillBackgroundColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 0.0 ] >> >> /FillBackgroundFlag false /UnderlineStyle 0 /DashedUnderlineGapLength 3.0 /DashedUnderlineDashLength 3.0 /SlashedZero false /StylisticSets 0 /CustomFeature << /StreamTag /SimpleCustomFeature >> >> >> >> /Length 1 >> << /RunData << /StyleSheet << /Name () /Parent 0 /Features << /Font 3 /FontSize 157.0 /FauxBold false /FauxItalic false /AutoLeading true /Leading .01 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking -20 /BaselineShift 0.0 /CharacterRotation 0.0 /AutoKern 0 /FontCaps 0 /FontBaseline 0 /FontOTPosition 0 /StrikethroughPosition 0 /UnderlinePosition 0 /UnderlineOffset 0.0 /Ligatures true /DiscretionaryLigatures false /ContextualLigatures true /AlternateLigatures false /OldStyle false /Fractions false /Ordinals false /Swash false /Titling false /ConnectionForms true /StylisticAlternates false /Ornaments false /FigureStyle 0 /ProportionalMetrics false /Kana false /Italics false /Ruby false /BaselineDirection 1 /Tsume 0.0 /StyleRunAlignment 2 /Language 14 /JapaneseAlternateFeature 0 /EnableWariChu false /WariChuLineCount 2 /WariChuLineGap 0 /WariChuSubLineAmount << /WariChuSubLineScale .5 >> /WariChuWidowAmount 2 /WariChuOrphanAmount 2 /WariChuJustification 7 /TCYUpDownAdjustment 0 /TCYLeftRightAdjustment 0 /LeftAki -1.0 /RightAki -1.0 /JiDori 0 /NoBreak false /FillColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> >> /StrokeColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> >> /Blend << /StreamTag /SimpleBlender >> /FillFlag true /StrokeFlag false /FillFirst false /FillOverPrint false /StrokeOverPrint false /LineCap 0 /LineJoin 0 /LineWidth .39991 /MiterLimit 1.59964 /LineDashOffset 0.0 /LineDashArray [ ] /Type1EncodingNames [ ] /Kashidas 0 /DirOverride 0 /DigitSet 0 /DiacVPos 4 /DiacXOffset 0.0 /DiacYOffset 0.0 /OverlapSwash false /JustificationAlternates false /StretchedAlternates false /FillVisibleFlag true /StrokeVisibleFlag true /FillBackgroundColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 0.0 ] >> >> /FillBackgroundFlag false /UnderlineStyle 0 /DashedUnderlineGapLength 3.0 /DashedUnderlineDashLength 3.0 /SlashedZero false /StylisticSets 0 /CustomFeature << /StreamTag /SimpleCustomFeature >> >> >> >> /Length 2 >> ] >> /KernRun << /RunArray [ << /RunData << >> /Length 3 >> ] >> /AlternateGlyphRun << /RunArray [ << /RunData << >> /Length 3 >> ] >> /StorySheet << >> >> /View << /Frames [ << /Resource 0 >> ] /RenderedData << /RunArray [ << /RunData << /LineCount 1 >> /Length 3 >> ] >> /Strikes [ << /StreamTag /PathSelectGroupCharacter /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 0.0 0.0 0.0 ] /ChildProcession 0 /Children [ << /StreamTag /FrameStrike /Frame 0 /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 0.0 0.0 0.0 ] /ChildProcession 2 /Children [ << /StreamTag /RowColStrike /RowColIndex 0 /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 0.0 0.0 0.0 ] /ChildProcession 1 /Children [ << /StreamTag /RowColStrike /RowColIndex 0 /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 0.0 0.0 0.0 ] /ChildProcession 2 /Children [ << /StreamTag /LineStrike /Baseline 0.0 /Leading 0.0 /EMHeight 0.0 /DHeight 0.0 /SelectionAscent -133.44856 
/FirstLineIndent 0.0 /StartIndent 0.0 /EndIndent 0.0 /SpaceBefore 0.0 /SpaceAfter 0.0 /DropCaps 1 /AutoLeading 1.2 /LeadingType 0 /AutoHyphenate true /HyphenatedWordSize 6 /PreHyphen 2 /PostHyphen 2 /ConsecutiveHyphens 0 /Zone 36.0 /HyphenateCapitalized true /HyphenationPreference .5 /WordSpacing [ .8 1.0 1.33 ] /LetterSpacing [ 0.0 0.0 0.0 ] /GlyphSpacing [ 1.0 1.0 1.0 ] /SingleWordJustification 6 /Hanging false /AutoTCY 1 /KeepTogether true /BurasagariType 0 /KinsokuOrder 0 /Kinsoku /nil /KurikaeshiMojiShori false /MojiKumiTable /nil /EveryLineComposer false /TabStops << >> /DefaultTabWidth 36.0 /DefaultStyle << /Font 2 /FontSize 12.0 /FauxBold false /FauxItalic false /AutoLeading true /Leading 0.0 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /BaselineShift 0.0 /CharacterRotation 0.0 /AutoKern 1 /FontCaps 0 /FontBaseline 0 /FontOTPosition 0 /StrikethroughPosition 0 /UnderlinePosition 0 /UnderlineOffset 0.0 /Ligatures true /DiscretionaryLigatures false /ContextualLigatures false /AlternateLigatures false /OldStyle false /Fractions false /Ordinals false /Swash false /Titling false /ConnectionForms false /StylisticAlternates false /Ornaments false /FigureStyle 0 /ProportionalMetrics false /Kana false /Italics false /Ruby false /BaselineDirection 2 /Tsume 0.0 /StyleRunAlignment 2 /Language 0 /EnableWariChu false /WariChuLineCount 2 /WariChuLineGap 0 /WariChuSubLineAmount << /WariChuSubLineScale .5 >> /WariChuWidowAmount 2 /WariChuOrphanAmount 2 /WariChuJustification 7 /TCYUpDownAdjustment 0 /TCYLeftRightAdjustment 0 /LeftAki -1.0 /RightAki -1.0 /JiDori 0 /NoBreak false /FillColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> >> /StrokeColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> >> /FillFlag true /StrokeFlag false /FillFirst true /FillOverPrint false /StrokeOverPrint false /LineCap 0 /LineJoin 0 /LineWidth 1.0 /MiterLimit 4.0 /LineDashOffset 0.0 /LineDashArray [ ] /Type1EncodingNames [ ] /Kashidas 0 /DirOverride 0 /DigitSet 0 /DiacVPos 4 /DiacXOffset 0.0 /DiacYOffset 0.0 /OverlapSwash false /JustificationAlternates false /StretchedAlternates false /FillVisibleFlag true /StrokeVisibleFlag true /FillBackgroundColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 0.0 ] >> >> /FillBackgroundFlag false /UnderlineStyle 0 /DashedUnderlineGapLength 3.0 /DashedUnderlineDashLength 3.0 >> /ParagraphDirection 0 /JustificationMethod 0 /ComposerEngine 0 /ListStyle /nil /ListTier 0 /ListSkip false /ListOffset 0 >> /Parent 0 >> >> /Length 3 >> ] >> /StyleRun << /RunArray [ << /RunData << /StyleSheet << /Name () /Parent 0 /Features << /Font 1 /FontSize 322.84122 /FauxBold true /FauxItalic false /AutoLeading false /Leading 53.98064 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /BaselineShift 0.0 /CharacterRotation 0.0 /AutoKern 0 /FontCaps 0 /FontBaseline 0 /FontOTPosition 0 /StrikethroughPosition 0 /UnderlinePosition 0 /UnderlineOffset 0.0 /Ligatures true /DiscretionaryLigatures false /ContextualLigatures true /AlternateLigatures false /OldStyle false /Fractions false /Ordinals false /Swash false /Titling false /ConnectionForms true /StylisticAlternates false /Ornaments false /FigureStyle 0 /ProportionalMetrics false /Kana false /Italics false /Ruby false /BaselineDirection 1 /Tsume 0.0 /StyleRunAlignment 2 /Language 14 /JapaneseAlternateFeature 0 /EnableWariChu false /WariChuLineCount 2 /WariChuLineGap 0 /WariChuSubLineAmount << /WariChuSubLineScale .5 >> /WariChuWidowAmount 2 /WariChuOrphanAmount 2 
/WariChuJustification 7 /TCYUpDownAdjustment 0 /TCYLeftRightAdjustment 0 /LeftAki -1.0 /RightAki -1.0 /JiDori 0 /NoBreak false /FillColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 1.0 ] >> >> /StrokeColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 .6 0.0 ] >> >> /Blend << /StreamTag /SimpleBlender >> /FillFlag true /StrokeFlag false /FillFirst false /FillOverPrint false /StrokeOverPrint false /LineCap 0 /LineJoin 0 /LineWidth .39991 /MiterLimit 1.59964 /LineDashOffset 0.0 /LineDashArray [ ] /Type1EncodingNames [ ] /Kashidas 0 /DirOverride 0 /DigitSet 0 /DiacVPos 4 /DiacXOffset 0.0 /DiacYOffset 0.0 /OverlapSwash false /JustificationAlternates false /StretchedAlternates false /FillVisibleFlag true /StrokeVisibleFlag true /FillBackgroundColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 0.0 ] >> >> /FillBackgroundFlag false /UnderlineStyle 0 /DashedUnderlineGapLength 3.0 /DashedUnderlineDashLength 3.0 /SlashedZero false /StylisticSets 0 /CustomFeature << /StreamTag /SimpleCustomFeature >> >> >> >> /Length 3 >> ] >> /KernRun << /RunArray [ << /RunData << >> /Length 3 >> ] >> /AlternateGlyphRun << /RunArray [ << /RunData << >> /Length 3 >> ] >> /FirstKern 0 /StorySheet << >> >> /View << /Frames [ << /Resource 0 >> ] /RenderedData << /RunArray [ << /RunData << /LineCount 1 >> /Length 3 >> ] >> /Strikes [ << /StreamTag /PathSelectGroupCharacter /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 0.0 0.0 0.0 ] /ChildProcession 0 /Children [ << /StreamTag /FrameStrike /Frame 0 /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 0.0 0.0 0.0 ] /ChildProcession 2 /Children [ << /StreamTag /RowColStrike /RowColIndex 0 /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 0.0 0.0 0.0 ] /ChildProcession 1 /Children [ << /StreamTag /RowColStrike /RowColIndex 0 /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 0.0 0.0 0.0 ] /ChildProcession 2 /Children [ << /StreamTag /LineStrike /Baseline 0.0 /Leading 53.98064 /EMHeight 322.84122 /DHeight 200.31761 /SelectionAscent -227.76129 /SelectionDescent 113.64172 /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 -227.76129 0.0 113.64172 ] /ChildProcession 1 /Children [ << /StreamTag /Segment /Mapping << /CharacterCount 3 /GlyphCount 0 /WRValid false >> /FirstCharacterIndexInSegment 0 /Transform << /Origin [ -123.0 0.0 ] >> /Bounds [ 0.0 0.0 0.0 0.0 ] /ChildProcession 1 /Children [ << /StreamTag /GlyphStrike /Transform << /Origin [ 0.0 0.0 ] >> /Bounds [ 0.0 -227.76129 246.0 113.64172 ] /Glyphs [ 53 52 3 ] /GlyphAdjustments << /Data [ << >> ] /RunLengths [ 3 ] >> /VisualBounds [ -130.58136 -227.76129 123.0 113.64172 ] /RenderedBounds [ -130.58136 -227.76129 123.0 113.64172 ] /Invalidation [ -130.58136 -227.76129 277.9624 113.64172 ] /ShadowStylesRun << /Data [ << /Index 0 /Font 0 /Scale [ 1.0 1.0 ] /Orientation 0 /BaselineDirection 2 /BaselineShift 0.0 /KernType 0 /EmbeddingLevel 0 /ComplementaryFontIndex 0 >> << /Index 1 /Font 0 /Scale [ 1.0 1.0 ] /Orientation 0 /BaselineDirection 2 /BaselineShift 0.0 /KernType 0 /EmbeddingLevel 0 /ComplementaryFontIndex 0 >> ] /RunLengths [ 1 2 ] >> /EndsInCR true /SelectionAscent -227.76129 /SelectionDescent 113.64172 /MainDir 0 >> ] >> ] >> ] >> ] >> ] >> ] >> ] >> /OpticalAlignment false >> ] /OriginalNormalStyleFeatures << /Font 2 /FontSize 12.0 /FauxBold false /FauxItalic false /AutoLeading true /Leading 0.0 /HorizontalScale 1.0 /VerticalScale 1.0 /Tracking 0 /BaselineShift 0.0 /CharacterRotation 0.0 /AutoKern 1 /FontCaps 0 /FontBaseline 0 /FontOTPosition 0 
/StrikethroughPosition 0 /UnderlinePosition 0 /UnderlineOffset 0.0 /Ligatures true /DiscretionaryLigatures false /ContextualLigatures false /AlternateLigatures false /OldStyle false /Fractions false /Ordinals false /Swash false /Titling false /ConnectionForms false /StylisticAlternates false /Ornaments false /FigureStyle 0 /ProportionalMetrics false /Kana false /Italics false /Ruby false /BaselineDirection 2 /Tsume 0.0 /StyleRunAlignment 2 /Language 0 /JapaneseAlternateFeature 0 /EnableWariChu false /WariChuLineCount 2 /WariChuLineGap 0 /WariChuSubLineAmount << /WariChuSubLineScale .5 >> /WariChuWidowAmount 2 /WariChuOrphanAmount 2 /WariChuJustification 7 /TCYUpDownAdjustment 0 /TCYLeftRightAdjustment 0 /LeftAki -1.0 /RightAki -1.0 /JiDori 0 /NoBreak false /FillColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> >> /StrokeColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 0.0 0.0 0.0 ] >> >> /Blend << /StreamTag /SimpleBlender >> /FillFlag true /StrokeFlag false /FillFirst true /FillOverPrint false /StrokeOverPrint false /LineCap 0 /LineJoin 0 /LineWidth 1.0 /MiterLimit 4.0 /LineDashOffset 0.0 /LineDashArray [ ] /Type1EncodingNames [ ] /Kashidas 0 /DirOverride 0 /DigitSet 0 /DiacVPos 4 /DiacXOffset 0.0 /DiacYOffset 0.0 /OverlapSwash false /JustificationAlternates false /StretchedAlternates false /FillVisibleFlag true /StrokeVisibleFlag true /FillBackgroundColor << /StreamTag /SimplePaint /Color << /Type 1 /Values [ 1.0 1.0 1.0 0.0 ] >> >> /FillBackgroundFlag false /UnderlineStyle 0 /DashedUnderlineGapLength 3.0 /DashedUnderlineDashLength 3.0 /SlashedZero false /StylisticSets 0 /CustomFeature << /StreamTag /SimpleCustomFeature >> >> /OriginalNormalParagraphFeatures << /Justification 0 /FirstLineIndent 0.0 /StartIndent 0.0 /EndIndent 0.0 /SpaceBefore 0.0 /SpaceAfter 0.0 /DropCaps 1 /AutoLeading 1.2 /LeadingType 0 /AutoHyphenate true /HyphenatedWordSize 6 /PreHyphen 2 /PostHyphen 2 /ConsecutiveHyphens 0 /Zone 36.0 /HyphenateCapitalized true /HyphenationPreference .5 /WordSpacing [ .8 1.0 1.33 ] /LetterSpacing [ 0.0 0.0 0.0 ] /GlyphSpacing [ 1.0 1.0 1.0 ] /SingleWordJustification 6 /Hanging false /AutoTCY 0 /KeepTogether true /BurasagariType 0 /KinsokuOrder 0 /Kinsoku /nil /KurikaeshiMojiShori false /MojiKumiTable /nil /EveryLineComposer false /TabStops << >> /DefaultTabWidth 36.0 /DefaultStyle << >> /ParagraphDirection 0 /JustificationMethod 0 /ComposerEngine 0 /ListStyle /nil /ListTier 0 /ListSkip false /ListOffset 0 >> >>8BIMFMsk 2 /)%$"#"""! ! ##, {xxxxxxxxxxxxxxuiiiiiiiiiiiiiii`ZZZZZZZZZZZZZZWKKKKKKKKKKKKKKKB<<<<<<<<<<<<<<<--------------------!xxxxxxuiiiiiicZZZZZZQKKKKE<<<3---! $--3<<BKKNZZ{]xixixxxxxxxxxxxxxxxxxx-3<KQZir~ -9E*Q3`<lKxWcr~$ 6E0T?fQu`o !'0<BQTcfxu*B'W9oN`̜f30 336ff~u̺fN3 3Hf̥fB*Bl޷iQ3 $N{Ϩ]3T ̙rN'?x$ὖf?H<خc3 iQ꽓f3 Zi̜rEc'~ۮW*u9f3 BQ̜rE{cը~Q'c{޴]3Qf6 B ̙lBT̙uN$6Үf6NÙoK'c̨`3 {޽fE!̥fE0̫fK3̽fB33fƙfc303Wf ̺lfQ333?ffuffc3*33!liFD$1dec8551-5331-1174-b798-cb202e8f50fc'Screen Shot 2011-11-19 at 12.35.31.pngePNG  IHDRxgiCCPICC ProfilexYgXټ.a9眓$GX%( AQDd("@$QQTP1<̼[]U]]U g %22 @xD4PC  y i`gg 1dtOE}0nFE'/:r 1 6poܱ}m'#gB@\@XuA?8}`=[8/LB  , wlFٿ?ц7h 'X!(6rrn;maL5KbCXm761*Xǿ:>ha FmeÖɈbtK?LG`La ۀ9mar0EFm-ZXD`W |` 1{@|@0A<"?%9h10X>n#oM_??r 0 c1ƘcL1)pOGA۶ ?u}ߣs- sl[.f\PJ(U!J(6E4P(=6ܦy#~Y=Wև7W O  0w "{x갽@lQ P;?\?4r= @5^hxGcW'@P@K` ;TKp p jp \w}<4"V!B$!QHR4 ] w "tʀNCP)T]B7hz @hD,^B0@X!x1ID QhCDG IdC "eH#-!!ӑ2dy9F. 
0(fJ9EEEBeQը6-j@ ߉W LqB8B#4n iˣk맛E$FDOb $C|JJ"H$R4$GzIIL/GoAGH_@F?J `ϐ0̰gc4b0b,`qIɖ))iK#>,3Yو|"m9 8KK%!EV2 kk5i6$[[l+i ?899999qp pprfssBqIqs*͵­MN=ɃqI)Y5=ǻƧw:$U;8x4i~22r32N(;y2PZV)̩SO3?=cvFLog䖜#97gq^kA  y 7d\0QjVV&V[)-s Jʌj[5551rjovgk\WU`w5kY ׏]߼c'gf^g}n}o ݶN]7uӸ~_~۠`CjCm5v>7zsxθG;~b'p~C$uLzQe62(&> !/;=5Ze-u|z%g-慟w7*/.*P2uatv)UaՎ5u R.Ӯx7۴hJqۿtL]YՕrM^s7z{oӿ%zt{M*<8>oΨ8v|ћǏ Lt?.ktQh)Է\]ݳ!  !-8a@j"Qh:mLvN/@`&3ۘY>vzNNA.>n6D~;wT M0J" b",&!+ IIC2,Z/OR`UVSVS1UP QO8YuEiG%*ZZz˦fm=lFvNNٽ*fqlkypS}}~SA&q!͡_U#KGsʌ.iw/O "aIG)R6~#!3Dɠ,S:`A|93wv"M E[K/,/*/ZU]eSF\߸aqi]-mڏv] emtM ƛț{?=SޯypchrvgLqc'ǟL6=|״ϫOj |zAcЧ+%/P~oureymMmcVMxC.P /Qv UF?$bqx]q'g"&s"X\YX3nS9Pp,7u[O ,_1/zA3s"NbgŭĿI\!uQQ)*(#7"تWYAyAV5XMFzFVv#Ψnw'\,B`W$^Vs99-k_~x*I6;yd>cdlʳ+3|2x2V2:Yq6{9Ug u9gz^>_@PHغBtiVYUܪS_=Tvn~I9Ķhj>e]~meO͍[wx0PbQS|֙Q<|" 'MGN&@t2 f  d:pL_ g!P*T ݁"98;B!HUd% Dգ+=ၳN,ۄCv8<I`!%S>/Sfl_}|FyS W?_bRMa+~' W$CoLA8W}D(FB [%> XoKnDO" IF & Lde7vCFl$ jxNk.QMg<%|IAB¼"LhџbKS/Hˌˍʏ)VB)e7j i~Ѧ!c_fkhÄT<٢򣵤 eg#{F{L[]8Oݙ^}X) Fh! {#D;IJ{0СK$I֔Qp"!-[!sf(lK>(•2ǫ[jY/lݬ ::&nqkNq=t gx yyȋ)錙YEso9Opt!# OJ~H(ϥ_ZVVV2zV~ZS_]\{^WY[Y[!aQpcuS|afN ͯbpPyղuV|@O1pO_l9py pHYs   IDATx] \TU?``x(4Ll-ق\˖R[̈́]庮bᮭ-ZI>ZJLKPL5MQ;sgsܙ;3 z·s{|Ϲw[bX A @R"A  A @2ĵxPŽ @ k!u @ q-^#ZH /#@\%ARAe@ ;A @ A @2ĵxPŽ @ k!u @ q-^#ZH /#@\%ARAe@ ;A @ A @2ĵxPŽ @ k!u @ q-^#ZH /#@\%ARAe@ ;A @ A @2ĵxPŽ @ k!u @ q-^#Z(Jfʉܘkȣۺmsь=o'ԹǪΏyn ]_Sr/n'9*li 9+@diۧD4Zt^XUiɓ_|qdl.Ҋ>yz:eqJ |_%LOaR<&1NJHH}Th>I٭۱UFWmLHXn; ݷ*=b?s%!Nk+Ω݉^\6f^AB4b4Alx#J䵇"ڂG91%Ҝ4.hHQ|`üW.jwL߄:$,XFׅ+|$iʄ'VSIk'DҔV}y_-WW^xSȺ1EqNnLZwNÕ2-9}3#x La摊Ƈ;ɂ'%!F `g zʄ~[x"*Krx9H^u7N}&skwWq_(< p1ry~)y@TLZOYqxC߂YcE`hY%iFgLns4snH2L:rfjůNۄ ~eikd|k/Ľi{RuԠ^8"0b 3T(ڂ̄b ޖ^] [RsK/s`窘b)"~}9|ۘ[aWJϴ)I :CUGe.J)ʶ >lT9Ψ-ԶtCV(2=t\0_:*,AɶM3wgf~NjW+6h{SSwׂl*ec.W@^!ȱ8٪p 2_LY|3JǾm̐s&L)l%nriqI{Q_}ys5 .ws'W'ԑ!\X%_n۝%3KV?]Ut3!}wq}"N?d{9=%ZO9#V-G|DvSLlG"%~{!֫ ,O/CPe,+)OtiQEHcQ$i_|& >~Qa5h.9SR l?u1T.ϫČA m MPB ʁpb_E TY+>>d.ِ!ZSʡ %@U+L 4&P n F-Jg;V?0<->~_YSFTlhɐ &LSSIZ/([M&ܚqE!rAW+~'(,XdBm(;x7Y. Xmj:? 
*!~Κ.Hegd| zKU m lSW؞ ƻފ_EPiM*(fa2D(+V wE-b kMNzԄP 6/kV_`d]\*;u!J薋| Ӎ܇tUPh   ;0nh(뮪ٞ2FT=@AMZD(vm4 xo(' 969 5u$~RFL*A1~9J:)BO) `Ì(*u0HE?3CuF+ UUw}ψ Ip4{y)P?REi{JXg\P;}(u윸UJ㉪OY[TmqKz;"b2*oB32$SF}calg06BJeK6E!ǯMFJL{g*$08uOsPdz?\7UUa&K"FRt>\@U/6%溶N]Mjʲw?31u,yMA5B%9fPxgW͛QPO״ÃyłG+/ikEۗqJ913S~(E({yjj0FNX[3:^tVjHFΚvX2A WǮ_'mav4jFSXQ1 =@䠒 ]NFA1qR.d"%9[o\7GH:0s2@d]Bo)=!8viqsp\k FIdX3?U]`ޓ.~S=kkҖkTn 5%EQOJ!M Kc+ {+kTUeϛ6qGs$Tomʉ2cؐޔxR IA)3'GƢlIm)!acmʘP֦6ge͙6yȑO>c|OظX#hS\rp'L޴H>EO?1>xcSK'04\'>B [Oe#`ؾc{6g=8.BDsVT٭sl\)؆:3z^"9cn/T[xnnouOޑi"F1q++?ܲm[ff֡cf'&L xR#c3ɁGR2NLķXgi 6S9goڪPsE T[^S{ivT{cQ05g#&A&;(j/f/ܩ:J%2ewsE)bQ+pP]mTm'tBo)ckA]D=n:CtsK}1%+#C&/+sA+|SRGpO=9dظЩCP)k8o {EOYϤ Q;FBӧ읍Tqi[׭ e@R^NÃ{e86]9~y)k?юJI%;^* s5A-"FyYhTգxRt HIJ\:ʈ ޞMqIscub ˄`㺊1KL%3566\|4ezfֵx*_-RFJ[HڼJT]ǩܘM_s!S\29Mt[P8x a%;2N׶}IileK"(?D\1w{`;֡AGeHƊsMg󜜧س+e͂܄TsfI:uܗ(q<$V-:ZVHwv E>nN|>!`h2TNESg^gg:*}vsei"W&ˍq O $h ̬y[1\MuVI:l['4 c1-"Yn ` m>-3LrsLֺe .\ED^Ńc9'(fN[Gq`:@2[`~As[G )CGNӌM/PO99Y3&-͙vY B[_6$E͵P527|3_ LYvhȕqgИa7/ GkFs*ِ~f39n9e˨Uѽ-A(vU޲6}o $K ¥C*v_~?96_%L\0쮓5EkJ:bhUK(sM7yv)a}XPTgÐ[yG2Uh4 B\H~m˝EѡydkfŠ̕v2(Z6+2'X/j˒O|tyyg`vJt{?N%dZ"p`=fT3j2.;mG5y7{fBA7W_^ ;Rj^\$>G>;{ٚqgʜV$Woh8<(`CF^LOqGhy <#-/P(Hwhᴏ',Cuj8 K_wR!TO+gA,y{⦦@\pF~H_wR!>`3aӓOUWހ+8TrYE$uɤ>bL: Y%'r -u.)&PbtL7`@ n4a L0UlHNސSV^VV 4ѓ3yWeTCH+98 sԅWTWM%Cj 7ǯ3Z"}Vťi: \ϗ+P9-yMeP,ڈxB\E-7.3Ƃn@j\uuSAg}ʠ+ۆ?JmѮT5q]%zbȋi!n<#n5p@9 ȯ@fx+J/~Ux_xMSrp&8H x15s<ۇaGhKOwCȅ-T1r BP]S hεei:(y+ҸMdپEW01=Xط6WgM%/lHaN-Nf1Xh+؃0։Jq(3w$D4<<( FIdXS 6_ IDATy4q>}X7UMPכkA}Es}ihװ=E580^̧8A9i1SG)ظBDu웵h쨩tɬU Dl+[mչ-jiUXb9{VLM4G٫;҂6-RT+M<ϻպƒyW78zC1~A:.inbļ  AB9/k=ocI Q.R`R(RI]_ۚɻ&1wQ첌%xz0 {mrQSts+S&QIބٛtʌIKĨKN)0Un`U佯Yfz/)3SNe؇FJ[f&m^8+>(DPÒ' 0? ;RJT]u?:7f)b8;F$l^EMa9X.eAUgoۻg"Qw#J.<쩊C_L 2Jej^Oj>ʈ0 Frs_LI{xJaj,nyWk z .GϹєB pLt6k.(F U„2h]q 1EhZTe)3ZݖC w|'L EUWzΔVav'䡸\Kj u.͜jX}ƴ.ʙ!s-bϱG{qvŃ(- m9[_6V$%/LlI47NlbSMILLT(uwM7yD,[3\5rzQI۝ Dv`,mhJfwBJN<L> :cE(|+# 6MYչ KvNI nb8˶^7/˞jEbA=[-ޱs9ntg`m-zV_GbǍlWI{6Fs=ЎZ:D A @`hb}A @{ -[bA @h'ki'XA p"@\[2A NNAEleA 'b ݋q-eKDw9ADwk԰ o_U筈=Ҁژ8[;(Ӟ#Y BHm.$ kIvi-ҡ]PjA p!pkۍ7 cB_; ۏH &گBUeo\ԍpT~z٥6/fN-ɶM3wgffQHeғ17Hnܒ{2;L?TJiK2Sr'g\qk2SqR}Vz0LMQՒp$] viiN_<([*NO%;&5%506*ݘ~GSY]̱'f+ &&-ȈPwg/ d?;.{Fi.2?GƕҨqvLlfyc&hO%}\VN,X4tD ^s9FA8ü-,&a'y3d!qoe*y3&.\讎Į:ަ=ӖilLDùU[zЩ.㮰lH @Dh4B3F"%Jy j3} gY}іvNh JaH NI.úa^Y船Q9r4k\'"8,b9{ݻ*Ϧ+c LP0@DQ{+>E2}VrOӞ{ 2nMRƼ '&L%[<(*7yqۃb9{VLM4 k'V^B}8;s|dYѨ:<;fp q'%ZDub5EfGMyZKݹ|I~2 ),bݥZq'{<9`I:ddqHd $ݺ"( }mʔ,QI QP5w0*#V%~Y5-:4j\T҄$m^89-ọ̏t.x^gRF<аgrEʌIKKa3\6BQH39a!%vIƟ"08p|R:ִOޝ7),cI0P5s_7ss?qǑ Q7˶y3|Yy?AD 4S4k>hf.l,w])F2P>4eJ-5W{qv|R  ڬ|~|>g7 J#Z\4ǕQٹ"EzLQ6h!xˀh;*m3¨%S(MaE[ͅAsylj_f IB :ͿfqeռBƒ4݇8N,2'p%VgkKޱhSR1pm#p%@cc#|  1? wqs^s؝ ћ @ phZ+̩Œޛ*/3߼" {\,CЂMggq_Zb#Jh(8Wbn(ӝ m@CLKYH"ɀk&ਂew[R[d"𻉦͎p4}P&CgmAl)ߡSICpW iZt{v' @ ܹk+< JML$2*Kd2 6*ǭCXf% a`N'wn&퉀F ]l/]q,+H.CVNo1|.:i`?8-;;0F\XhDeA%T`<];15}Lk,ۘquX:v&}&OYm @{!6vYHw]Id-~)dff]-pW;n ĵ Dt$:MMM{0tPCK?tp-L&4 8ۑӅ1dH:A @FV{߱+LL:w,FXSņXҡc(0UAn1` CR  0UN0XBX'rs &J-CJd"I$wИ>0ci>>/ ?O4bhnlG]Φ~ڃd  f!+|}50HVK) A hhh]a0$mADtI[R"t/:`RiguZ#޳|k QgYa6ws }UDf)WtǍb!$F  gvbF Ô~ϧ9oдVޗw"JZ아=^{v!H&fՇi-6#'xRX-M7y sD)aSd1:`:q-PJ@BޛJ ~=Tjn4'#De(p >Dh&#.}i%񁝒.r8O"w,\ĵiŠ8Sr @ M@;tI'm-`/F>C&KÐblԫ g>6^z9 0 EY߭E Xku )A,1LAhO+ ?gbMPej,v?t1C"3njMc;f!Ip0>8kxJ_Íuj! @ tT>}?8݀ 4~t ED/b$oPcL tq$],Dx@ulo}b0&AĖ1#!fׯ ޣG?;|c"D"a4ZIp_<RA2nr[r1' Ѡ(N3]ȢgI` <3qu |˟B5Z :< C8p$"Y ?AhJC FIhP06Zkp7v3=X[$6V@aycN++722#.VpPB*v\kԫ/xZ mS!!@Ƥ g2Aw Om0UZ` {Gufc+>[hPZ3k_< FJdj_lJBߝGs-=ײbʼn}#3V>ZykN4ph*׽s!7np xc_ok[ S{.)*fs}|l(Sֳv:!_2:VL7=$%'>pW\/ ޵cA!2F Xj~4_054`&I&9|qO#u,HfS䎳>}Z-p[cދ0X_\ Iم VV;Y4lhD CMdsC M&~h\xA_uj,Bro#r&? 
L辝6/#r_bz>zߴ8t6?<69=@D3 %J#k,j$T2Rw:q--bz*gawk<%l?V]lPSM(I7`UoDlǞWˏ/Ů1k?uCz"%G ^֧~D@["Dfj{%gH¸d}7|V^ڹ9Q7ފk̵W cb fcĩۧC쩞â7Ũ)OjJ2p]g" 5V򙗹cz腮IqWK*>tٓ~I7\xQ z^~lo=#T.ٰ1Ϛ]= $zidH7mfct!U&}Qc;?B΍#kqa4ܨZ{9fthܣ^85DWЪ !4(~JZo?;'-"kje #x`)>" 0|xtϳ+t.)J7kݽ9UlǬ!itx0tڐ}R1=ӦRY]؍ASFl~Tuf<;G"LX xF&fA uϿyڪ |2Q$& ?dbg笀Әl1G@5Iϑ(?;\SqL%z`]<3eƿ|I'G<xtE$W@F?[w Krk"+' k>"`|VقBIoˎ[n7좠o"ϟ~'0UR` v<al>Reobп xRe8N4U)ހ7z PL7TI|Qtk eU0웍h MԿ5 r.. B-aaK{Bټ*n jس4oc%t|k54b_9L_Wl.wT$ U`"OE%6XFHr[gOO{mM:WȄ8E}\Gڅ ~bχ~~JMC}׮]T_ n{[r--Ggvପ7AOnJ7K6R!{a4|9 č\F`UϠ.},|9Tw1դx.U>fo/M%}jaݻ_Q}_L&u^]gV5MP3U& sƱx [`0!%Y%]Tے 5_Ҫ,5ӻZ`'J֍tx;t9]| BCLJ',UXШ|,P $ @ %:%mf$WˎOm4ʟLa]Oމ2O*O~J^޵^j+1Wh:!EV;h`cuٶDϻ}Z-HF @@[#;)m޳`N:2萼eHzob4Y2HBOeӎ׃ט0a_@s ^.SaƳ"X"$ @> ZJ_*|;w|b+}G!Oi/}o|$\o9 NZYoyu󚵺/Pjڻ޵mᲽV>A p ")iNzXȽyx} D4/ ϰhLK;' ꆷ7h͵W mփ\i*Ζ/c]X`&B&7RV .^6S]Z07FYF@Ƭ]_x,._"Sr?4הz{Q/?"2N/|)g=S&7lOq4Q>dgazdߧCnmJ& +{ju3}>퍽Yr!x/d!_:xP9|_ǽ$؁Re;E[cҬ<Jf%x# "lplk;[d~3dV:N/ M_͡/1^Ǖo}~>dcGLXᠰΝyGn{/ҎhJj|x?SDq5-=82ճPv}Sr?7G娞CKf+k _7:mw;Nն0iRbĮ7+›f]ML_2wkMN=(ne?YRdXhg: 2Q^$T0mw.~d*meS`ҎEoM-5s0l߇j~x\1;9T_}/n?QiMNz9*2X4ڭrHy!\>/ `.U<鏏ܞ#DqsO0uʎ~MWFv黂ߞ8}5d?}z"ethA:Tݾ~{{.Bcf{;@'QzkyG^wCTj,?[ ( *b/5WY߿PG {泭N7l૬sI#{"CQC<" AShpVܬ8W6*-BRp[WmF?5";V0`}X 1!<} EHO!{v_eG3sRy䤫j+<`ғ}`BA`NZ}3&`I`7 ~'Q0'grKn2~ό9Iӑ2-{liwXyy2?տ7LJJ1?ZqȲ3RI7f 3xQ1pv2ҕ_S/|̀]-WLϘ_MMυwx/>|;rأ*JU&aֈH aѩ-ዣogxkͺw6k)-B7DL %YOg(=2A\5 `>ŖwV,[p:LrA&֐o]pNˁ3F|x x̰`QÃJXPsr#.aCLa>bƥ~ժT}hwj=~_˩/glW}/gw}FdhTejr_1ú^ΔM[Y-*6%αj;d֕&|MV&i#G&_,2}H}Pt{LK/IϘ]qЩs`AkLh^QTO#XƜ߮ZHOS9ӠC%/[,֒y\X}l&4xy+'⻛[,|eh*[f8cN{šuJ#~A(JÏxahj԰nԮ[k||đ ݪ)/ 4]k"bFOW1"lQ@OB7:sWx_OHٯbw V 3WXPa-KV/Ӣp}e`GHiJqb{4>?ɖe& "qr Hbi| ٤ Rl;iJ`r\UQD, +U;߰#N>ɱMP {|;Ԅg/Eb6ULX`Ç^QEƿKj d ?)u2 jO%7tа17ƪk7o6\P|A1d֘${=D@ ϥ4 o}Rq71@5k+JI=̊+88# 5 RsaU ^EҗklPS9<_`)O_Z?c:jN$^sD`J+{Dі) զ8:*Z YQ/d:.q>|B0VH@OP=kftPvFĝ+Um,C[lQ´(`7|(/1s\5߸GT| ZK`hWռ>t%ҔNc}h)tJ]?4`ު?y^z`6Ե2t8^WI; H]nït]ʋWJ!|TaD*.b_3N9Q+Q~nNJ}zIr+gyYJ +\R6nӛ7JԜ5|ܢ>`լ6Z-V}B_yFGcŒ.\WLܠ(C^y%hM0=c"IJ8X]I8eI_w"nz2lm9-u?GyYMQk3{Ѱǰ-@Eogȩ}޴Hyn^˨n9ȃ'?>$rһ,0ԑVanXw9rKeKm9 폷`oRly# mwDX{pF V{z퓲ϮK]:=1q#an S- [0CyKK2שgYPclx 1y-3Kw uc-}fx'a8?wUoXligd15^CZл ee)mB?/',g41?@fIme:X{" EeJ { ]bO U Rһo>oX2ĢØ 3ɲHza/S5dv1L'hxC[[ EE.k,MQ<4=及RLv=xIu1s1. ^KcZ p ^Ck`$KBk:R᫔]ksW~xĤ֜78Fs7&S=Sp oo%_ ۭv܀"6V _۾1az cz]vؾ4C

=Tgyr],9'/P%KY]eo'h(w; ,LKB@OP̷n+LC%-q)cjM?/RЫA2gza&-H[o_In^.6@Znmp(d0]l }¿n|8BL֮p?',f~_%'ELu{api0 Dg gp3Ϲ?Z!U >SVuH|O w-#&>1Zt˥֑㾙kO=mը3k[uq 뽈3VYp^jv`XlsOu6Faӿqϼ%Iĺv+gXoRsk4?vTt_!%%Ppf䵴>X9cf>N K,T%.U |Rk殙xf+XW H80"˞0I sF)m^&. }X0^' 4ՆseHm8 AS/~ KV!Ƕ6@{ZZxp% d*n>DaxOMMDqD¤aه2irM̐9 sU`Tߤs, b("ȤDDNbr˨Qu):J5;"YT8Hc4ĪvQC V#%`Qi S5:SUIDzŢDŰ%fe?Ȃp9j>m)uHF"o򔱋bf~i7xjz_$I&k1t R)~ C~ラ\.RT SdW8\2FL.5yI䴀KgsQHOqRi2US׷Ӷ xNDʎbM#~HVT֔cb8`0J-Co IDAT|ckpC'A+^a6eǕr,22R"Һ*ϕp1-9=5- :qQHO΢89tƹ߬96*\S@ڣPz+^]W$8 S]pc;gBvAD]~zwo?qnm;:so\"X4?]Kϑ@aW,jIEJاZ7$jW[iU/2eY'5^(|ԙhڼb}xL\e-M 0H{IpPfo*n$i:Pмg[brř]twOuY{ fmߵS[:MϘ_zx]CtPƍx>ڨnȲE AB(p0QU AvC\AdFQ Y6n"e딱ƠQaCj2wLKPёM+MܔhRX},J£?@IJ94_BDE1~7sVc>dM"*+o߾mۆ]i0V]Čշc_aݾz7.7oSEdM/%ޚ⻹~KN[[ 0-uQcc Tgd]*#CP`2(Mv6Ʒr\# D`NѪj`M_auiA[5Lf" Ly|ږJC J>Cj /໅º'.]ZQK=[n j L#_!wAA*x% @);z(0e@20\*Sɠe=`trxDڨԨF50W_ Yim]ui͒"ùXtt>sTQZɤ wdȤ4E'j^svL6q$6q1$ml wڪ&:2-#v YxM<( 8(#\Xo<ȵۚRxOh?f¯hk !7T[u%'ǩԷ(i(mn܀EN븳V˨}cLgk<\+ا~:aXieb1^&]M,4bxPIIbN%ՊNYE?9t<`Dc+PXy"볡y >v*C-sey/\LZ8kn=7=kƳ#Imot/!ԼQ0icՁN]Κ1z^4ܤ*ܼlEV֤~zۙkL P^`VVQ*FJ6vTM&}$A&#HI7 ,RtFMHIl{1!Q›L&#C|7 O b,RtH 5SDHK^{3+%`H?cvPCHG5)Ĭc5t!$ xɴ dLa _Jآ^fz2Q2SUgz Hi Ru0͔_DdZ`%@o sCDFڢ3${<^F4ia^?\X@HBA1-pg=W#$F qcZh~=5@gLlD)H $$F/MBAH   $@§n$hޙg^sX P4( & wL ۟t7YaBH 4[~zǴeiT !זxǴ3-XKTO@$mwLy $;Ja֐pOcWAH 7xBүfs0(@w)~{-wil#$z-%#$,n!ZQygk?b)O3ɐGfNdR]#mzAa?Qvi|?QF35r?Ǫn e4A!),f(!&HbyΠ4..@H&JIz MLĆ>1&dG<օT,COJf+Aq".ML[s]_CXOZtg蠤RqWj,%beg.:DDM7 .HnTkȤ9!ղ\6 mJ\5Ȣ6#?hv )^2-PSsm'[}zƁ{iX(@WA xwd2O2!5aל.)st!O7*O0bBD 9 hC|wBw\SMtrA<$@͗7|\sfqwc'/Ly!vss.d>]hETTزh0''jQAiBhpb=a|7ulؕ6+aժq|(ʤnPQa޷T"4RwΜ3BHz i7 {𒷻+idzopQc,|AOy(ӵ:ZɄ vb`TYkʋ\TpB.)"oR L&B$^ hN}]zǴzLɷJQ/)S:ӪM&CϷ^CmzY)#\lh7SM?mi#{^e6I+B4kyϣ E,%6ZI.2qZ>H 4;fi"t2SlcIJDեD! n-"Ce햶S*nɥ1j T6Ǡ"i@ ҘxJ ecZE/ YO\uˆPܪ("Xz"(=0-FMĚ"k ESNQ-甾XD $ИvRR\#ppYQP6@Hu1-쌹Z6^Ȁ 4@k*;}ݫđ@$͠7o䴽cZG-6rQ<@H u%c98㋕uBHu\'1$pz-0ÌnF@H#mL u?#0@z@;kmB"hjġG۟!_[xqJEggTIT=ڕ?ZBDP't64xFddbX^+j(|$M$ :5 zڴ8pDo50>.$cjcH? $T1ET@_ '(~qO#D\N"Xb"f^LIPKw9hb,za Vj]Jhђ) +uol/~ Qo$|wz-A]>?#|\IT<{"tb2Qjě2wի%q*@a2&YAw]j)*C"ʾC$ T([$&:@Xv=}#Ba1-4 裷8vJ~aɔ%BL?HrRJ'?AFėC)/jCȝB9լ IDAT,#R #q3fKc7I}ћeJ&Ա}i44#O6\ gEX2U-|L8;M*Wۚ1s@s#m0iAFgoP;H !S&7yqM΢ӄ &*(#m:Iէe/oMRh{JH(7B053fcC]3ӎu²g X{"MHS{QDS>MO$'%{k]U#BIDк11ڳ7}zEOU+@6&(D5۟?LӜIoy`s$BgҿoJK @g6eY$Rpt#$Ь S5}Z:OQ) *^SWs tOJWqHZbFHJ"$=ECڂCEf1Gk s3"E'&fF) lmb(5TRD2e@AZJAj.$-¢_oס8:9,, e2?4X\"O}Ӫ]@o⥹OӖfå2(5@Z u@ Կ';F̋/gI.??x $м HHcZP>aUjbU,`jŃ@ `P#-B#e1@H 4hZC)H fEMK*.T $xg9P#@^$MwLL{baH pn.};@H MZFnm*az (\`x$s== %^t;{y*^I{H5˞xQ2D*4@4{Q΀ w⏫.Xhڄn' F4$xKMd0ۈ;2vf Z*86D\xCKb(;?xE <.~jtJU)ӊeť"f`R+ͫD:dD#> 6 NTm)l=;%ʐXqOPvЏWUM!$3[eN e^aɄ{(`"ݪ3sI~4\$6$ifp@*\L 8e⯞JHôK[^]UDطx\9.)FDPBTLĘ4AtjډՆ+kS̞=dscu"Y7jLټzR%{{}Jv^ZY' FQ@HO;n1_0u>< >"J(9xܒ?P/,a/2i3ܱuM"-}IR/Q\EMEZ`${PD8k`h34egz+2I, 2^BA)F@eZozCJ8 דmsBL0B NJ T!!mN'`bl$[!ܬW2MUyYf%bD("JVKWk DK=V)Ճ$|C:5*1-,> |{s/,uqk2N{݁t`Ɩ5e&=2 ٗjCȝ|E(\AQ\AC`6 |m5,8L:)rLQ[LJ<$6L::xSH xx[J;rCX"$𷁲3Z <»O?͘91J);w$({{T!ݒi+ʪ5)YiQV-гӼ1:ђH !GT8 RCiռxG}rդ/h|d]~lE]UTœeT sInt&ȆYS2$lEb8y)OWHg N!f}s>H g0ǒaM`zpyh*oEXUIKy`+Q5HtBZ")#TWEɗj ŚL(N*,ż72}vLkȤؼIIM*K DH+ ˺z+I7AwiH /LB">z}eZ8G *ɚs[-Ё i\ )zz_OH`Jۣ2-tKB~n*&rSxh7̱Kf$A4^1-4ׯun0 %jfWA5d!G(K3J+ JLDrg>,Z>6ޓ/WFgR>* @SZB) @H N~пMwJSEH6K)SQS$D@ڲx41Y$/6Z|H "^ _ $!/sbX$?[H 1-'Wżk+1f _#m NZ}C}@͞f_$@k% $=4-;1H _#i||@1-&L @H&(zǴ`[H Ulp( EN_Ҽ~-@ N7$iaF 6x-DH ncZ˂~)$!1-db|Ɲa& Aɍ@>cZb՚r $=4~KZWXWP.f $`G=hf?sugw*jk ( $P L"Kc[_iiP$pF;bδA$hZ1 H "ŷAH 4-~P$o@[ $?(D@Hi@m@~@M"f $[дVy6H ?  
-hZ| `OSq>T=.O ƦI4bT i`$=5eܔ3Y~q>ru*x;Ǯ]{N:0*w-A:eƞ^ kʔ8 *sN~F>n=焛mn gܔ9?\ZYj9#'sG4m[JT2)sVo;rˉ86"@E@dl9Mð>|˷] ievC=Z9rNz8d ]40-?sm*b%B40}Xe2Pgʜ{4Ml)); M{H_ũZ!t.eزث,NIbU9D&N5{s|`3aN5*I]˙Kk0װA M$7ucHyV:u2w1i\.0!l3jzO>9:@gtС-uҵ fI1ulgd>&HURL xL{-k_8~+j&bBXa[gsbi[̓??J0y-6I_Ł[}-ZsUsṐ᪂ a}n݌I -I.[ldۙm.%(!;.d &3%h,qry`$L^ves-G)ٵtӭfX0p-ZώE]>Qgs"~-/w%*d,|ɋyZ*2X. ^()N F$@>ѭ"ѫ+2Rv707MXtpzڌ 0? ۴Oζ:J/⚕ɩ\lYZs4&IZ[#-aKySjx$RΚehu[ss6$-Ka<-I'$Q뽉dn WV!uzg?i-WV],%*m$jT_;=k1w;`CJ{g=s"9;5 bh1>9ǶSڟ?vY\ OK}0pBrӭĚڰ.c<<͛H@UueGonwUy"8n1F:xẆT+C#!8-%Puz. _ɳM8vHnW"[~~MSn6)UIG?uurH%6镯ydIxZR6$P/ ^>1~nԡ?}9_߼`aبv2æY k݁>p af [:|גn\.E_㝩8*+tf3"gMH#en'iI9W w iIJ=ã;:wg=Yڎy5/3l뵑 ?׸ݮ HV趵+30pWIk&9#] 5@= zʙZܝ)ODΐN'k߫$_Rǁn>bZB lzµ׋ b4%7"ԦJn[ܼy{x ;rpgIrp[; G=n.8*`uuKJvoy%%ADi|u"9{C ?0t`)RFvI<+ Jciyf͗~HMXCmfᱽNUll#0?'nk|eޣE\:j,p-s8ٮ}7AQRi+i 7̱Tsvoeb۷nuLךiaC4KhDs~]d,G\,̡Sjyk[lKMhzR6䣍'x/6 ]»p-MZmi$2BI B/$!ffZdVn2]yd5<.0 Ί21h*+ `+3iHyƗTqՌ9s_|m{\g%WM_q:=gl732wXG/}5l )]Ny,r?e+.vg0Z_vͧţ/p&wM=X[S՗v_#g-2)Y8vƺ|[#nUlNM%eϐ@}ﱘzŶ>;&43aLN޺{-ŲJV-b7@d!gЈ>ȿj}RڢcMchBrGB&55e2]=hUMe>s9CnFQoEsPY*ղI;qS=r |,Ж6>ק1-glMI{,4j.)x<#MӢΰɮgEIth'&OEli\km[sڂ >^X[|XQ^2L?lݱ mJ2RZ25}<&#z᭳'sW0;Q_VUdo"3aMmM_;ߢ9/ I68&A8qC=w'e=/lW[fJ/֚`kZ 'Rwu?{~r&@DKʊ]H#"ŵw[SYZRU-]Ȃ[83@}:e anrg#[;<0'ᥴڊ,JƅǓj Dn.sC^Ei901T0ZASZJo#U*k=;z&~# 8, r["@:Xmu2uOdW+r`3i1= 0l%u'EcSYWDep bCx=TF' jIxf7v,)Fnhf+ʛnӫw>g94}_y)|]l5샧m }]y _?hΖa_0JZ])MLa/%2C_X9=1?3 OXxo%\l#``""$6쵸#?5=yL8'^.ϸa1f,azʾ7qREz-0!10p@3w1>Pz+u:"n!.6\iYɹ|F~EEIDATZձ|t9HX` *8@$@ #$p@Cp $@ #$дx!$3hZA$a4$pFM32@@!8@iqF@HChZ<ѐ@H4-Ƞ?@H xHM0@H 8#GH  iFCH gд8#H $!4-hH $gd $<$Cp $@ #$дx!$3hZA$a4$pFM32DW{Q錍cubuPH}Xs<ㆱWLzϧcOd7Wc ߌ~)_r̓>n,qAKl,\6AQx篻M[}`v48u)Xs AYR=OCD"M8{0Hr1}dq6%O9$wqko{(SBUc.~pKIoj>z.uO Cޭie,r02v"ߘ$%mD\I'ɴsUt՝:4Юfkm]D; F+gf`φ/W<牓fϚotԇ@+0Yo-54>2:kO_F.~#Gפm!CH`G&ӷ,x&DzlBz|Ȟ-A@qVWC9cH\Y*| buz;hh0"T//638W^k&5-.魗98_{^ht{~-5N&IaІF ig y/g ٨cN帑I1ssn3iߖ^|a׫d{{_>3$pԐ/NJpOͩ:gۈ!v}hlt  @yɬ]6:m=zJ2db~p\TCQ]pVFQmVQڌ^1eY_Ud Y5ɳ@'it;Z,uiE'O+WiQѣ拋5ԜO^Id@Ee0e'4'wז6[0sWv>7+#[Nfݩ,+ʇM ?gYv5l,'˴˿'ee)"dEisNOA9sYNj!uֶ ,t $@+ӢMg G^r4g容sSy!Ҿ=79iRyVlZxY kvqMDk w`c9sS ٹv_`޲`Om*#sB1FWrhlrRxdX}S-vDV֖l+{7qp\UNgf?IzXƃ ZꝲdGY}K9K7hY{ .lٰtr3c^TW|=jU9zq>`e,;˚ߙw.dutSdβ',y+gQŏxkSl_H[ q&чS9eOj1Z!#F؈$_DJ7sd7fOvǂ7 B.LFZ'Q|>Tml:l]-]$Q]Z: 2by:Unv%K[3Ϻ8}-{mR:/, / d ˲'Yc,X5tt0YnS!љ ˩ẂLA}@$ 8o:[kY~ͺ6&*tA.kMq Ű "x 3괕6O:L* b ER4>v2hΧm2e:dYV0TRI9(Ot"$໦9oy?Za\s;W}yҟC*n q>j5( Hwİl@Hb@дsA_$aD$&E "$дx#"$04-\ $<&ct $ i悾H $14-ÈH $ LM0EH  iFDH ahZ/@H xLM0"@H @"}@HchZزn$)k@e@@M?" 
$SiA01 @HN@кY $;i кNAM@h2֥RīH "H $ HM DH  iDH AhZ'@H xNmOyF4^:t P zraEPC!.^9o)շ 5E+)sY$b{$E1@H@+@4?ҞZB ôTԄ劐ʴպBnE?"eG8|n\򊞐EۄNj.K2s6:O2W>'z4<@" Tdu誚UֽA|p}`*$jx> ivZpD4(j4#OB jrV`iL_KdO&>hvQīŸN }h`'(|r\ fO&&Og;?[xUO؟!餓?k /l1w_K6Чe|ֈ0-+A{D 4#}+mye Unֈ2Øm>Ȟ3 F%+`$|iZ 檖6S +oHׇ'IH/+>W`$q_vt7B`r3/ i X+* d/Kԋ<+wYk.]qz=Ӓ ?B /gsÙ.Jo$YD*%{kFƾ~hB>;S<@Rg!+D ыVe~{|8kSLa Bb3on c;pprlHCֆO"gGpad2x_u'!Ѵhw@_\ mֱa.D}wn%ǯЪ]A†8_B|&mxldt+ӟ'G:΁;،eTH$x"4$n}]ڱ]+iXTx{i0Zb9H+#-qbGQ H~zD܊?X*}]SZl{ϠweV OTNi̐?~a>C&w(D]d:4iQX'-wi1`R2NIt/ hx~f'_|wО nc(՚H?B  ;v_2!;K4eU88$L=9+A}۾$wF9#hKn}Noy5!s̼uH 4;ui*s_~MM`{pJh` m.]vb*`{ 1#và1)tD u Y Kih;rNMYoht u&b2/<%my;X%wI~\OTP-H:1|V},UoVaWܷÏI#F47v"};Ӫî fDn4-Da0Uvε˟ &Q._mC6wе:+c(+$b"2$% T٧oV勣M_]- tbosl+?j;7ɶWR:j7M1; (Hi S#CIaAW-[}M(֙/ !Ò'V,F t[߽"T=K|FӤmg=5}Mi*͹\Q4h}NxW>mns--~9j-(g5:" Pa5CA(rGtUЏʃhm"A}s7B'D$g8!ߦܝ{{Lk=G”b܇?<_gUCc[XM@02?/OT]2ix0>E!pUY2,̽A7d (+W2[ F2/E`՛Rt RWͅ4HBp][wWH!6O}bֺΛjjR{ەkEXx/!w^a؋k+oVQxPס@򮦽izѝ'IcwlĒ?_o%ՅT!G߫; W~}$"qrNpZ-|OC M,o]2[ssr?:1t{1&>|{)؍A/RYi_3v!C[^!1Ы+u0vG>zMr9Xܹc8XRԝ`#@&W-/|R_qtX ~^bSO@-,\eIr9TД#PlfZLpRoB7]Rt'S ۫k~*[*L$W-@BH \0n$QF(+.H=_aA/D(K"h M(]\L:zmCы54a١׺;'M]zWe?:}2k@x@i<(؋(`I&S%{H | Note:

The use of the `Connection` context manager is deprecated. Please don't use `Connection` in your scripts. Instead, use explicit connection management.
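
In the explicit style, every RQ object receives its connection directly. A minimal sketch, assuming a local Redis server (the host, port and queue name here are just placeholders):

```python
from redis import Redis
from rq import Queue

# Pass the connection explicitly to each RQ object you create,
# instead of relying on the (deprecated) Connection context manager.
redis = Redis('localhost', 6379)
q = Queue('foo', connection=redis)
```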

There is a better approach if you want to use multiple connections, though. Each RQ object instance, upon creation, will use the topmost Redis connection on the RQ connection stack, which is a mechanism to temporarily replace the default connection to be used. An example will help to understand it:

```python
from rq import Queue, Connection
from redis import Redis

with Connection(Redis('localhost', 6379)):
    q1 = Queue('foo')
    with Connection(Redis('remote.host.org', 9836)):
        q2 = Queue('bar')
    q3 = Queue('qux')

assert q1.connection != q2.connection
assert q2.connection != q3.connection
assert q1.connection == q3.connection
```

You can think of this as if, within the `Connection` context, every newly created RQ object instance will have the `connection` argument set implicitly. Enqueueing a job with `q2` will enqueue it in the second (remote) Redis backend, even when outside of the connection context.

### Pushing/popping connections

If your code does not allow you to use a `with` statement, for example, if you want to use this to set up a unit test, you can use the `push_connection()` and `pop_connection()` methods instead of using the context manager.

```python
import unittest

from redis import Redis
from rq import Queue
from rq import push_connection, pop_connection


class MyTest(unittest.TestCase):
    def setUp(self):
        push_connection(Redis())

    def tearDown(self):
        pop_connection()

    def test_foo(self):
        """Any queues created here use local Redis."""
        q = Queue()
```

### Sentinel support

To use Redis Sentinel, you must specify a dictionary in the configuration file. Using this setting in conjunction with systemd or Docker containers with the automatic restart option allows workers and RQ to have a fault-tolerant connection to Redis.

```python
SENTINEL: {
    'INSTANCES': [('remote.host1.org', 26379), ('remote.host2.org', 26379), ('remote.host3.org', 26379)],
    'MASTER_NAME': 'master',
    'DB': 2,
    'USERNAME': 'redis-user',
    'PASSWORD': 'redis-secret',
    'SOCKET_TIMEOUT': None,
    'CONNECTION_KWARGS': {  # Additional Redis connection arguments
        'ssl_ca_path': None,
    },
    'SENTINEL_KWARGS': {  # Additional Sentinel connection arguments
        'username': 'sentinel-user',
        'password': 'sentinel-secret',
    },
}
```

### Timeout

To avoid potential issues with hanging Redis commands, specifically the blocking `BLPOP` command, RQ automatically sets a `socket_timeout` value that is 10 seconds higher than the `default_worker_ttl`.

If you prefer to manually set the `socket_timeout` value, make sure that the value being set is higher than the `default_worker_ttl` (which is 420 by default).

```python
from redis import Redis
from rq import Queue

conn = Redis('localhost', 6379, socket_timeout=500)
q = Queue(connection=conn)
```

Setting a `socket_timeout` with a lower value than the `default_worker_ttl` will cause a `TimeoutError`, since it will interrupt the worker while it gets new jobs from the queue.

### Encoding / Decoding

The encoding and decoding of Redis objects occur in multiple locations within the codebase, which means that the `decode_responses=True` argument of the Redis connection is not currently supported.

```python
from redis import Redis
from rq import Queue

conn = Redis(..., decode_responses=True)  # This is not supported
q = Queue(connection=conn)
```
rq-1.16.2/docs/docs/exceptions.md0000644000000000000000000001112113615410400013522 0ustar00---
title: "RQ: Exceptions & Retries"
layout: docs
---

Jobs can fail due to exceptions occurring. When your RQ workers run in the background, how do you get notified of these exceptions?
## Default: FailedJobRegistry

The default safety net for RQ is the `FailedJobRegistry`. Every job that doesn't execute successfully is stored here, along with its exception information (type, value, traceback).

```python
from redis import Redis
from rq import Queue
from rq.job import Job
from rq.registry import FailedJobRegistry

redis = Redis()
queue = Queue(connection=redis)
registry = FailedJobRegistry(queue=queue)

# Show all failed job IDs and the exceptions they caused during runtime
for job_id in registry.get_job_ids():
    job = Job.fetch(job_id, connection=redis)
    print(job_id, job.exc_info)
```

## Retrying Failed Jobs

_New in version 1.5.0_

RQ lets you easily retry failed jobs. To configure retries, use RQ's `Retry` object, which accepts `max` and `interval` arguments. For example:

```python
from redis import Redis
from rq import Retry, Queue

from somewhere import my_func

redis = Redis()
queue = Queue(connection=redis)

# Retry up to 3 times, failed job will be requeued immediately
queue.enqueue(my_func, retry=Retry(max=3))

# Retry up to 3 times, with 60 seconds interval in between executions
queue.enqueue(my_func, retry=Retry(max=3, interval=60))

# Retry up to 3 times, with longer intervals in between retries
queue.enqueue(my_func, retry=Retry(max=3, interval=[10, 30, 60]))
```
Note:

If you use the `interval` argument with `Retry`, don't forget to run your workers using the `--with-scheduler` argument.
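
A minimal sketch of starting such a worker (the queue names are just placeholders):

```bash
# The scheduler is what moves a retried job back onto its queue
# once the configured interval has elapsed.
rq worker --with-scheduler high default low
```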

## Custom Exception Handlers

RQ supports registering custom exception handlers. This makes it possible to inject your own error handling logic into your workers.

This is how you register custom exception handler(s) to an RQ worker:

```python
from rq import Worker

from exception_handlers import foo_handler, bar_handler

w = Worker([q], exception_handlers=[foo_handler, bar_handler])
```

The handler itself is a function that takes the following parameters: `job`, `exc_type`, `exc_value` and `traceback`:

```python
def my_handler(job, exc_type, exc_value, traceback):
    # do custom things here
    # for example, write the exception info to a DB
    pass
```

You might also see the three exception arguments encoded as:

```python
def my_handler(job, *exc_info):
    # do custom things here
    pass
```

If you don't want RQ's default exception handler to run (it's the one that moves failed jobs to the `FailedJobRegistry`), pass `disable_default_exception_handler=True`:

```python
from rq import Worker

from exception_handlers import foo_handler

w = Worker([q], exception_handlers=[foo_handler], disable_default_exception_handler=True)
```

## Chaining Exception Handlers

The handler itself is responsible for deciding whether or not the exception handling is done, or should fall through to the next handler on the stack. The handler can indicate this by returning a boolean. `False` means stop processing exceptions, `True` means continue and fall through to the next exception handler on the stack.

It's important to know for implementers that, by default, when the handler doesn't have an explicit return value (thus `None`), this will be interpreted as `True` (i.e. continue with the next handler).

To prevent the next exception handler in the handler chain from executing, use a custom exception handler that doesn't fall through, for example:

```python
def black_hole(job, *exc_info):
    return False
```

## Work Horse Killed Handler

_New in version 1.13.0._

In addition to job exception handler(s), RQ supports registering a handler for unexpected work horse termination. This handler is called when a work horse is unexpectedly terminated, for example due to OOM.

This is how you set a work horse termination handler to an RQ worker:

```python
from my_handlers import my_work_horse_killed_handler

w = Worker([q], work_horse_killed_handler=my_work_horse_killed_handler)
```

The handler itself is a function that takes the following parameters: `job`, `retpid`, `ret_val` and `rusage`:

```python
from resource import struct_rusage

from rq.job import Job


def my_work_horse_killed_handler(job: Job, retpid: int, ret_val: int, rusage: struct_rusage):
    # do your thing here, for example set job.retries_left to 0
    pass
```

## Built-in Exceptions

These are the RQ exceptions you can get in your job failure callbacks.

### AbandonedJobError

This error means an unfinished job was collected by another worker's maintenance task. This usually happens when a worker is busy with a job and is terminated before it finishes that job. Another worker then collects the job and moves it to the `FailedJobRegistry`.
rq-1.16.2/docs/docs/index.md0000644000000000000000000004240513615410400012461 0ustar00---
title: "RQ: Documentation Overview"
layout: docs
---

A _job_ is a Python object, representing a function that is invoked asynchronously in a worker (background) process. Any Python function can be invoked asynchronously, by simply pushing a reference to the function and its arguments onto a queue. This is called _enqueueing_.

## Enqueueing Jobs

To put jobs on queues, first declare a function:

```python
import requests


def count_words_at_url(url):
    resp = requests.get(url)
    return len(resp.text.split())
```

Noticed anything? There's nothing special about this function! Any Python function call can be put on an RQ queue.
To put this potentially expensive word count for a given URL in the background, simply do this:

```python
from rq import Queue
from redis import Redis
from somewhere import count_words_at_url
import time

# Tell RQ what Redis connection to use
redis_conn = Redis()
q = Queue(connection=redis_conn)  # no args implies the default queue

# Delay execution of count_words_at_url('http://nvie.com')
job = q.enqueue(count_words_at_url, 'http://nvie.com')
print(job.result)  # => None (changed to job.return_value() in RQ >= 1.12.0)

# Now, wait a while, until the worker is finished
time.sleep(2)
print(job.result)  # => 889 (changed to job.return_value() in RQ >= 1.12.0)
```

If you want to put the work on a specific queue, simply specify its name:

```python
q = Queue('low', connection=redis_conn)
q.enqueue(count_words_at_url, 'http://nvie.com')
```

Notice the `Queue('low')` in the example above? You can use any queue name, so you can quite flexibly distribute work to your own desire. A common naming pattern is to name your queues after priorities (e.g. `high`, `medium`, `low`).

In addition, you can add a few options to modify the behaviour of the queued job. By default, these are popped out of the kwargs that will be passed to the job function.

* `job_timeout` specifies the maximum runtime of the job before it's interrupted and marked as `failed`. Its default unit is seconds and it can be an integer or a string representing an integer (e.g. `2`, `'2'`). Furthermore, it can be a string specifying a unit, including hours, minutes and seconds (e.g. `'1h'`, `'3m'`, `'5s'`).
* `result_ttl` specifies how long (in seconds) successful jobs and their results are kept. Expired jobs will be automatically deleted. Defaults to 500 seconds.
* `ttl` specifies the maximum queued time (in seconds) of the job before it's discarded. This argument defaults to `None` (infinite TTL).
* `failure_ttl` specifies how long failed jobs are kept (defaults to 1 year).
* `depends_on` specifies another job (or list of jobs) that must complete before this job will be queued.
* `job_id` allows you to manually specify this job's `job_id`.
* `at_front` will place the job at the *front* of the queue, instead of the back.
* `description` to add additional description to enqueued jobs.
* `on_success` allows you to run a function after a job completes successfully.
* `on_failure` allows you to run a function after a job fails.
* `on_stopped` allows you to run a function after a job is stopped.
* `args` and `kwargs`: use these to explicitly pass arguments and keyword arguments to the underlying job function. This is useful if your function happens to have conflicting argument names with RQ, for example `description` or `ttl`.

In the last case, if you want to pass `description` and `ttl` keyword arguments to your job and not to RQ's enqueue function, this is what you do:

```python
q = Queue('low', connection=redis_conn)
q.enqueue(count_words_at_url,
          ttl=30,  # This ttl will be used by RQ
          args=('http://nvie.com',),
          kwargs={
              'description': 'Function description',  # This is passed on to count_words_at_url
              'ttl': 15  # This is passed on to the count_words_at_url function
          })
```

For cases where the web process doesn't have access to the source code running in the worker (i.e. code base X invokes a delayed function from code base Y), you can pass the function as a string reference, too.
```python
q = Queue('low', connection=redis_conn)
q.enqueue('my_package.my_module.my_func', 3, 4)
```

### Bulk Job Enqueueing

_New in version 1.9.0._

You can also enqueue multiple jobs in bulk with `queue.enqueue_many()` and `Queue.prepare_data()`:

```python
jobs = q.enqueue_many(
    [
        Queue.prepare_data(count_words_at_url, ('http://nvie.com',), job_id='my_job_id'),
        Queue.prepare_data(count_words_at_url, ('http://nvie.com',), job_id='my_other_job_id'),
    ]
)
```

which will enqueue all the jobs in a single Redis `pipeline`, which you can optionally pass in yourself:

```python
with q.connection.pipeline() as pipe:
    jobs = q.enqueue_many(
        [
            Queue.prepare_data(count_words_at_url, ('http://nvie.com',), job_id='my_job_id'),
            Queue.prepare_data(count_words_at_url, ('http://nvie.com',), job_id='my_other_job_id'),
        ],
        pipeline=pipe
    )
    pipe.execute()
```

`Queue.prepare_data` accepts all arguments that `Queue.parse_args` does.

## Job dependencies

RQ allows you to chain the execution of multiple jobs. To execute a job that depends on another job, use the `depends_on` argument:

```python
q = Queue('low', connection=my_redis_conn)
report_job = q.enqueue(generate_report)
q.enqueue(send_report, depends_on=report_job)
```

Specifying multiple dependencies is also supported:

```python
queue = Queue('low', connection=redis)
foo_job = queue.enqueue(foo)
bar_job = queue.enqueue(bar)
baz_job = queue.enqueue(baz, depends_on=[foo_job, bar_job])
```

The ability to handle job dependencies allows you to split a big job into several smaller ones. By default, a job that is dependent on another is enqueued only when its dependency finishes *successfully*.

_New in 1.11.0._

If you want a job to execute regardless of whether its dependencies complete or fail, RQ provides the `Dependency` class, which allows you to dictate how to handle dependency failures. The `Dependency(jobs=...)` parameter accepts:

- a string representing a single job id
- a Job object
- an iterable of job id strings and/or Job objects
- an `enqueue_at_front` boolean parameter to put dependents at the front when they are enqueued

Example:

```python
from redis import Redis
from rq.job import Dependency
from rq import Queue

queue = Queue(connection=Redis())
job_1 = queue.enqueue(div_by_zero)
dependency = Dependency(
    jobs=[job_1],
    allow_failure=True,    # allow_failure defaults to False
    enqueue_at_front=True  # enqueue_at_front defaults to False
)
job_2 = queue.enqueue(say_hello, depends_on=dependency)

"""
job_2 will execute even though its dependency (job_1) fails,
and it will be enqueued at the front of the queue.
"""
```

## Job Callbacks

_New in version 1.9.0._

If you want to execute a function whenever a job completes, fails, or is stopped, RQ provides `on_success`, `on_failure`, and `on_stopped` callbacks.

```python
queue.enqueue(say_hello, on_success=report_success, on_failure=report_failure, on_stopped=report_stopped)
```

### Callback Class and Callback Timeouts

_New in version 1.14.0_

RQ lets you configure the method and timeout for each callback - success, failure, and stopped. To configure callback timeouts, use RQ's `Callback` object, which accepts `func` and `timeout` arguments.
For example:

```python
from rq import Callback

queue.enqueue(say_hello,
              on_success=Callback(report_success),  # default callback timeout (60 seconds)
              on_failure=Callback(report_failure, timeout=10),  # 10 seconds timeout
              on_stopped=Callback(report_stopped, timeout="2m"))  # 2 minute timeout
```

You can also pass the function as a string reference: `Callback('my_package.my_module.my_func')`

### Success Callback

A success callback must be a function that accepts `job`, `connection` and `result` arguments. Your function should also accept `*args` and `**kwargs` so your application doesn't break when additional parameters are added.

```python
def report_success(job, connection, result, *args, **kwargs):
    pass
```

Success callbacks are executed after job execution is complete, before dependents are enqueued. If an exception happens when your callback is executed, the job status will be set to `FAILED` and dependents won't be enqueued.

Callbacks are limited to 60 seconds of execution time. If you want to execute a long running job, consider using RQ's job dependency feature instead.

### Failure Callbacks

Failure callbacks are functions that accept `job`, `connection`, `type`, `value` and `traceback` arguments. `type`, `value` and `traceback` are the values returned by [sys.exc_info()](https://docs.python.org/3/library/sys.html#sys.exc_info) for the exception raised while executing your job.

```python
def report_failure(job, connection, type, value, traceback):
    pass
```

Failure callbacks are limited to 60 seconds of execution time.

### Stopped Callbacks

Stopped callbacks are functions that accept `job` and `connection` arguments.

```python
def report_stopped(job, connection):
    pass
```

Stopped callbacks are executed when a worker receives a command to stop a job that is currently executing. See [Stopping a Job](https://python-rq.org/docs/workers/#stopping-a-job).

### CLI Enqueueing

_New in version 1.10.0._

If you prefer to enqueue jobs via the command line interface, or do not use Python, you can use this.

#### Usage:

```bash
rq enqueue [OPTIONS] FUNCTION [ARGUMENTS]
```

#### Options:

* `-q, --queue [value]` The name of the queue.
* `--timeout [value]` Specifies the maximum runtime of the job before it is interrupted and marked as failed.
* `--result-ttl [value]` Specifies how long successful jobs and their results are kept.
* `--ttl [value]` Specifies the maximum queued time of the job before it is discarded.
* `--failure-ttl [value]` Specifies how long failed jobs are kept.
* `--description [value]` Additional description of the job.
* `--depends-on [value]` Specifies another job id that must complete before this job will be queued.
* `--job-id [value]` The id of this job.
* `--at-front` Will place the job at the front of the queue, instead of the end.
* `--retry-max [value]` Maximum number of retries.
* `--retry-interval [value]` Interval between retries in seconds.
* `--schedule-in [value]` Delay until the function is enqueued (e.g. 10s, 5m, 2d).
* `--schedule-at [value]` Schedule job to be enqueued at a certain time formatted in ISO 8601 without timezone (e.g. 2021-05-27T21:45:00).
* `--quiet` Only logs errors.

#### Function:

There are two options:

* Execute a function: dot-separated string of package, module and function (just like passing a string to `queue.enqueue()`).
* Execute a python file: dot-separated pathname of the file. Because it is technically an import, `__name__ == '__main__'` will not work.
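
As a quick illustration, here is a sketch combining a few of the options above (the queue name, job id and function path are hypothetical):

```bash
# Enqueue path.to.func('abc') on the "low" queue, at the front,
# under a caller-chosen job id.
rq enqueue -q low --job-id my_job_id --at-front path.to.func abc
```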
#### Arguments:

|            | plain text      | json             | [literal-eval](https://docs.python.org/3/library/ast.html#ast.literal_eval) |
| ---------- | --------------- | ---------------- | --------------------------------------------------------------------------- |
| keyword    | `[key]=[value]` | `[key]:=[value]` | `[key]%=[value]`                                                             |
| no keyword | `[value]`       | `:[value]`       | `%[value]`                                                                   |

Where `[key]` is the keyword and `[value]` is the value which is parsed with the corresponding parsing method.

If the first character of `[value]` is `@`, the file at the subsequent path will be read.

##### Examples:

* `rq enqueue path.to.func abc` -> `queue.enqueue(path.to.func, 'abc')`
* `rq enqueue path.to.func abc=def` -> `queue.enqueue(path.to.func, abc='def')`
* `rq enqueue path.to.func ':{"json": "abc"}'` -> `queue.enqueue(path.to.func, {'json': 'abc'})`
* `rq enqueue path.to.func 'key:={"json": "abc"}'` -> `queue.enqueue(path.to.func, key={'json': 'abc'})`
* `rq enqueue path.to.func '%1, 2'` -> `queue.enqueue(path.to.func, (1, 2))`
* `rq enqueue path.to.func '%None'` -> `queue.enqueue(path.to.func, None)`
* `rq enqueue path.to.func '%True'` -> `queue.enqueue(path.to.func, True)`
* `rq enqueue path.to.func 'key%=(1, 2)'` -> `queue.enqueue(path.to.func, key=(1, 2))`
* `rq enqueue path.to.func 'key%={"foo": True}'` -> `queue.enqueue(path.to.func, key={"foo": True})`
* `rq enqueue path.to.func @path/to/file` -> `queue.enqueue(path.to.func, open('path/to/file', 'r').read())`
* `rq enqueue path.to.func key=@path/to/file` -> `queue.enqueue(path.to.func, key=open('path/to/file', 'r').read())`
* `rq enqueue path.to.func :@path/to/file.json` -> `queue.enqueue(path.to.func, json.loads(open('path/to/file.json', 'r').read()))`
* `rq enqueue path.to.func key:=@path/to/file.json` -> `queue.enqueue(path.to.func, key=json.loads(open('path/to/file.json', 'r').read()))`

**Warning:** Do not use plain text without a keyword if you do not know what the value is. If the value starts with `@`, `:` or `%`, or includes `=`, it will be recognised as something else.

## Working with Queues

Besides enqueuing jobs, Queues have a few useful methods:

```python
from rq import Queue
from redis import Redis

redis_conn = Redis()
q = Queue(connection=redis_conn)

# Getting the number of jobs in the queue
# Note: Only queued jobs are counted, not including deferred ones
print(len(q))

# Retrieving jobs
queued_job_ids = q.job_ids  # Gets a list of job IDs from the queue
queued_jobs = q.jobs  # Gets a list of enqueued job instances
job = q.fetch_job('my_id')  # Returns job having ID "my_id"

# Emptying a queue, this will delete all jobs in this queue
q.empty()

# Deleting a queue
q.delete(delete_jobs=True)  # Passing in `True` will remove all jobs in the queue
# queue is now unusable. It can be recreated by enqueueing jobs to it.
```

### On the Design

With RQ, you don't have to set up any queues upfront, and you don't have to specify any channels, exchanges, routing rules, or whatnot. You can just put jobs onto any queue you want. As soon as you enqueue a job to a queue that does not exist yet, it is created on the fly.

RQ does _not_ use an advanced broker to do the message routing for you. You may consider this an awesome advantage or a handicap, depending on the problem you're solving.

Lastly, it does not speak a portable protocol, since it depends on [pickle][p] to serialize the jobs, so it's a Python-only system.

## The delayed result

When jobs get enqueued, the `queue.enqueue()` method returns a `Job` instance.
This is nothing more than a proxy object that can be used to check the outcome of the actual job. For this purpose, it has a convenience `result` accessor property that will return `None` when the job is not yet finished, or a non-`None` value when the job has finished (assuming the job _has_ a return value in the first place, of course).

## The `@job` decorator

If you're familiar with Celery, you might be used to its `@task` decorator. Starting from RQ >= 0.3, there exists a similar decorator:

```python
import time

from rq.decorators import job


# my_redis_conn is an existing Redis connection
@job('low', connection=my_redis_conn, timeout=5)
def add(x, y):
    return x + y


job = add.delay(3, 4)
time.sleep(1)
print(job.result)  # Changed to job.return_value() in RQ >= 1.12.0
```

## Bypassing workers

For testing purposes, you can enqueue jobs without delegating the actual execution to a worker (available since version 0.3.1). To do this, pass the `is_async=False` argument into the Queue constructor:

```python
>>> q = Queue('low', is_async=False, connection=my_redis_conn)
>>> job = q.enqueue(fib, 8)
>>> job.result
21
```

The above code runs without an active worker and executes `fib(8)` synchronously within the same process. You may know this behaviour from Celery as `ALWAYS_EAGER`. Note, however, that you still need a working connection to a Redis instance for storing states related to job execution and completion.

## The worker

To learn about workers, see the [workers][w] documentation.

[w]: {{site.baseurl}}workers/

## Considerations for jobs

Technically, you can put any Python function call on a queue, but that does not mean it's always wise to do so. Some things to consider before putting a job on a queue:

* Make sure that the function's `__module__` is importable by the worker. In particular, this means that you cannot enqueue functions that are declared in the `__main__` module.
* Make sure that the worker and the work generator share _exactly_ the same source code.
* Make sure that the function call does not depend on its context. In particular, global variables are evil (as always), but also _any_ state that the function depends on (for example a "current" user or "current" web request) is not there when the worker will process it. If you want work done for the "current" user, you should resolve that user to a concrete instance and pass a reference to that user object to the job as an argument.

## Limitations

RQ workers will only run on systems that implement `fork()`. Most notably, this means it is not possible to run the workers on Windows without using the [Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/about) and running in a bash shell.

[m]: http://pypi.python.org/pypi/mailer
[p]: http://docs.python.org/library/pickle.html
rq-1.16.2/docs/docs/job_registries.md0000644000000000000000000000554113615410400014364 0ustar00---
title: "RQ: Job Registries"
layout: docs
---

Each queue maintains a set of Job Registries:

* `StartedJobRegistry` Holds currently executing jobs. Jobs are added right before they are executed and removed right after completion (success or failure).
* `FinishedJobRegistry` Holds successfully completed jobs.
* `FailedJobRegistry` Holds jobs that have been executed, but didn't finish successfully.
* `DeferredJobRegistry` Holds deferred jobs (jobs that depend on another job and are waiting for that job to finish).
* `ScheduledJobRegistry` Holds scheduled jobs.
* `CanceledJobRegistry` Holds canceled jobs.

You can get the number of jobs in a registry, the ids of the jobs in the registry, and more.
Below is an example using a `StartedJobRegistry`. ```python import time from redis import Redis from rq import Queue from rq.registry import StartedJobRegistry from somewhere import count_words_at_url redis = Redis() queue = Queue(connection=redis) job = queue.enqueue(count_words_at_url, 'http://nvie.com') # get StartedJobRegistry by queue registry = StartedJobRegistry(queue=queue) # or get StartedJobRegistry by queue name and connection registry2 = StartedJobRegistry(name='my_queue', connection=redis) # sleep for a moment while job is taken off the queue time.sleep(0.1) print('Queue associated with the registry: %s' % registry.get_queue()) print('Number of jobs in registry %s' % registry.count) # get the list of ids for the jobs in the registry print('IDs in registry %s' % registry.get_job_ids()) # test if a job is in the registry using the job instance or job id print('Job in registry %s' % (job in registry)) print('Job in registry %s' % (job.id in registry)) ``` _New in version 1.2.0_ You can quickly access job registries from `Queue` objects. ```python from redis import Redis from rq import Queue redis = Redis() queue = Queue(connection=redis) queue.started_job_registry # Returns StartedJobRegistry queue.deferred_job_registry # Returns DeferredJobRegistry queue.finished_job_registry # Returns FinishedJobRegistry queue.failed_job_registry # Returns FailedJobRegistry queue.scheduled_job_registry # Returns ScheduledJobRegistry ``` ## Removing Jobs _New in version 1.2.0_ To remove a job from a job registry, use `registry.remove()`. This is useful when you want to manually remove jobs from a registry, such as deleting failed jobs before they expire from `FailedJobRegistry`. ```python from redis import Redis from rq import Queue from rq.registry import FailedJobRegistry redis = Redis() queue = Queue(connection=redis) registry = FailedJobRegistry(queue=queue) # This is how to remove a job from a registry for job_id in registry.get_job_ids(): registry.remove(job_id) # If you want to remove a job from a registry AND delete the job, # use `delete_job=True` for job_id in registry.get_job_ids(): registry.remove(job_id, delete_job=True) ``` rq-1.16.2/docs/docs/jobs.md0000644000000000000000000002560613615410400012313 0ustar00--- title: "RQ: Jobs" layout: docs --- For some use cases it might be useful have access to the current job ID or instance from within the job function itself. Or to store arbitrary data on jobs. ## RQ's Job Object ### Job Creation When you enqueue a function, a job will be returned. You may then access the id property, which can later be used to retrieve the job. ```python from rq import Queue from redis import Redis from somewhere import count_words_at_url redis_conn = Redis() q = Queue(connection=redis_conn) # no args implies the default queue # Delay execution of count_words_at_url('http://nvie.com') job = q.enqueue(count_words_at_url, 'http://nvie.com') print('Job id: %s' % job.id) ``` Or if you want a predetermined job id, you may specify it when creating the job. ```python job = q.enqueue(count_words_at_url, 'http://nvie.com', job_id='my_job_id') ``` A job can also be created directly with `Job.create()`. 
```python
from rq.job import Job

job = Job.create(count_words_at_url, 'http://nvie.com')
print('Job id: %s' % job.id)
q.enqueue_job(job)

# create a job with a predetermined id
job = Job.create(count_words_at_url, 'http://nvie.com', id='my_job_id')
```

The keyword arguments accepted by `create()` are:

* `timeout` specifies the maximum runtime of the job before it's interrupted and marked as `failed`. Its default unit is seconds and it can be an integer or a string representing an integer (e.g. `2`, `'2'`). Furthermore, it can be a string specifying a unit, including hours, minutes and seconds (e.g. `'1h'`, `'3m'`, `'5s'`).
* `result_ttl` specifies how long (in seconds) successful jobs and their results are kept. Expired jobs will be automatically deleted. Defaults to 500 seconds.
* `ttl` specifies the maximum queued time (in seconds) of the job before it's discarded. This argument defaults to `None` (infinite TTL).
* `failure_ttl` specifies how long (in seconds) failed jobs are kept (defaults to 1 year).
* `depends_on` specifies another job (or job id) that must complete before this job will be queued.
* `id` allows you to manually specify this job's id.
* `description` adds an additional description to the job.
* `connection`
* `status`
* `origin` where this job was originally enqueued.
* `meta` a dictionary holding custom status information on this job.
* `args` and `kwargs`: use these to explicitly pass arguments and keyword arguments to the underlying job function. This is useful if your function happens to have conflicting argument names with RQ, for example `description` or `ttl`.

In the last case, if you want to pass `description` and `ttl` keyword arguments to your job and not to RQ's enqueue function, this is what you do:

```python
job = Job.create(count_words_at_url,
                 ttl=30,  # This ttl will be used by RQ
                 args=('http://nvie.com',),
                 kwargs={
                     'description': 'Function description',  # This is passed on to count_words_at_url
                     'ttl': 15  # This is passed on to the count_words_at_url function
                 })
```

### Retrieving Jobs

All job information is stored in Redis. You can inspect a job and its attributes by using `Job.fetch()`.

```python
from redis import Redis
from rq.job import Job

redis = Redis()
job = Job.fetch('my_job_id', connection=redis)
print('Status: %s' % job.get_status())
```

Some interesting job attributes include:
* `job.get_status(refresh=True)` Possible values are `queued`, `started`, `deferred`, `finished`, `stopped`, `scheduled`, `canceled` and `failed`. If `refresh` is `True`, fresh values are fetched from Redis.
* `job.get_meta(refresh=True)` Returns the custom `job.meta` dict containing user-stored data. If `refresh` is `True`, fresh values are fetched from Redis.
* `job.origin` queue name of this job
* `job.func_name`
* `job.args` arguments passed to the underlying job function
* `job.kwargs` keyword arguments passed to the underlying job function
* `job.result` stores the return value of the job being executed, will return `None` prior to job execution. Results are kept according to the `result_ttl` parameter (500 seconds by default).
* `job.enqueued_at`
* `job.started_at`
* `job.ended_at`
* `job.exc_info` stores exception information if the job doesn't finish successfully.
* `job.last_heartbeat` the latest timestamp that's periodically updated when the job is executing. Can be used to determine if the job is still active.
* `job.worker_name` returns the name of the worker currently executing this job.
* `job.refresh()` Update the job instance object with values fetched from Redis.
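For example, here is a short sketch that polls a handful of these attributes for the job fetched above (it assumes the job with id `'my_job_id'` from the previous snippet exists):

```python
from redis import Redis
from rq.job import Job

redis = Redis()
job = Job.fetch('my_job_id', connection=redis)

# A snapshot of the job's lifecycle so far
print('Origin queue: %s' % job.origin)
print('Enqueued at:  %s' % job.enqueued_at)
print('Started at:   %s' % job.started_at)
print('Result:       %s' % job.result)  # None until the job has finished
```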
If you want to efficiently fetch a large number of jobs, use `Job.fetch_many()`.

```python
jobs = Job.fetch_many(['foo_id', 'bar_id'], connection=redis)
for job in jobs:
    print('Job %s: %s' % (job.id, job.func_name))
```

## Stopping a Currently Executing Job
_New in version 1.7.0_

You can use `send_stop_job_command()` to tell a worker to immediately stop a currently executing job. A job that's stopped will be sent to [FailedJobRegistry](https://python-rq.org/docs/results/#dealing-with-exceptions).

```python
from redis import Redis
from rq.command import send_stop_job_command

redis = Redis()

# This will raise an exception if job is invalid or not currently executing
send_stop_job_command(redis, job_id)
```

Unlike failed jobs, stopped jobs will *not* be automatically retried if retry is configured. Subclasses of `Worker` which override `handle_job_failure()` should likewise take care to handle jobs with a `stopped` status appropriately.

## Canceling a Job
_New in version 1.10.0_

To prevent a job from running, use `job.cancel()`.

```python
from redis import Redis
from rq.job import Job
from rq.registry import CanceledJobRegistry

redis = Redis()
job = Job.fetch('my_job_id', connection=redis)
job.cancel()
job.get_status()  # Job status is CANCELED

registry = CanceledJobRegistry(job.origin, connection=job.connection)
print(job in registry)  # Job is in CanceledJobRegistry
```

Canceling a job will:

1. Set the job's status to `CANCELED`
2. Remove the job from its queue
3. Put the job into the `CanceledJobRegistry`

Note that `job.cancel()` does **not** delete the job itself from Redis. If you want to delete the job from Redis and reclaim memory, use `job.delete()`.

Note: if you want to enqueue the dependents of the job you are trying to cancel, use the following:

```python
from rq import cancel_job

cancel_job(
    '2eafc1e6-48c2-464b-a0ff-88fd199d039c',
    enqueue_dependents=True
)
```

## Job / Queue Creation with Custom Serializer

When creating a job or queue, you can pass in a custom serializer that will be used for serializing / de-serializing job arguments. Serializers used should have at least `loads` and `dumps` methods. The default serializer used is `pickle`.

```python
from rq import Queue
from rq.job import Job
from rq.serializers import JSONSerializer

job = Job(connection=connection, serializer=JSONSerializer)
queue = Queue(connection=connection, serializer=JSONSerializer)
```

## Accessing The "current" Job from within the job function

Since job functions are regular Python functions, you must retrieve the job in order to inspect or update the job's attributes. To do this from within the function, you can use:

```python
from rq import get_current_job


def add(x, y):
    job = get_current_job()
    print('Current job: %s' % (job.id,))
    return x + y
```

Note that calling `get_current_job()` outside of the context of a job function will return `None`.

## Storing arbitrary data on jobs

_Improved in 0.8.0._

To add/update custom status information on a job, you have access to the `meta` property, which allows you to store arbitrary pickleable data on the job itself:

```python
import socket
import time

from rq import get_current_job


def add(x, y):
    job = get_current_job()
    job.meta['handled_by'] = socket.gethostname()
    job.save_meta()

    # do more work
    time.sleep(1)
    return x + y
```

## Time to live for job in queue

A job has two TTLs, one for the job result, `result_ttl`, and one for the job itself, `ttl`. The latter is used if you have a job that shouldn't be executed after a certain amount of time.
```python
# When creating the job:
job = Job.create(func=say_hello,
                 result_ttl=600,  # how long (in seconds) to keep the job (if successful) and its results
                 ttl=43,  # maximum queued time (in seconds) of the job before it's discarded
                 )

# or when queueing a new job:
job = q.enqueue(count_words_at_url,
                'http://nvie.com',
                result_ttl=600,  # how long to keep the job (if successful) and its results
                ttl=43  # maximum queued time
                )
```

## Job Position in Queue

For user feedback or debugging, it is possible to get the position of a job within the work queue. This allows you to track a job's progress through the queue. The function iterates over all jobs within the queue and therefore performs poorly on very large job queues.

```python
from rq import Queue
from redis import Redis
from hello import say_hello

redis_conn = Redis()
q = Queue(connection=redis_conn)

job = q.enqueue(say_hello)
job2 = q.enqueue(say_hello)

job2.get_position()  # returns 1
q.get_job_position(job)  # returns 0
```

## Failed Jobs

If a job fails during execution, the worker will put the job in a FailedJobRegistry. On the Job instance, the `is_failed` property will be `True`. FailedJobRegistry can be accessed through `queue.failed_job_registry`.

```python
from redis import Redis
from rq import Queue, Worker
from rq.job import Job


def div_by_zero(x):
    return x / 0


connection = Redis()
queue = Queue(connection=connection)
job = queue.enqueue(div_by_zero, 1)
registry = queue.failed_job_registry

worker = Worker([queue])
worker.work(burst=True)

assert len(registry) == 1  # Failed jobs are kept in FailedJobRegistry
```

By default, failed jobs are kept for 1 year. You can change this by specifying `failure_ttl` (in seconds) when enqueueing jobs.

```python
job = queue.enqueue(foo_job, failure_ttl=300)  # 5 minutes in seconds
```

### Requeuing Failed Jobs

If you need to manually requeue failed jobs, here's how to do it:

```python
from redis import Redis
from rq import Queue

connection = Redis()
queue = Queue(connection=connection)
registry = queue.failed_job_registry

# This is how to get jobs from FailedJobRegistry
for job_id in registry.get_job_ids():
    registry.requeue(job_id)  # Puts job back in its original queue

assert len(registry) == 0  # Registry will be empty when job is requeued
```

Starting from version 1.5.0, RQ also allows you to [automatically retry failed jobs](https://python-rq.org/docs/exceptions/#retrying-failed-jobs).

### Requeuing Failed Jobs via CLI

RQ also provides a CLI tool that makes requeuing failed jobs easy.

```console
# This will requeue foo_job_id and bar_job_id from myqueue's failed job registry
rq requeue --queue myqueue -u redis://localhost:6379 foo_job_id bar_job_id

# This command will requeue all jobs in myqueue's failed job registry
rq requeue --queue myqueue -u redis://localhost:6379 --all
```
rq-1.16.2/docs/docs/monitoring.md0000644000000000000000000000455113615410400013537 0ustar00---
title: "RQ: Monitoring"
layout: docs
---

Monitoring is where RQ shines. The easiest way is probably to use the [RQ dashboard][dashboard], a separately distributed tool, which is a lightweight, web-based monitoring frontend for RQ. It looks like this:

[![RQ dashboard](/img/dashboard.png)][dashboard]

To install, just do:

```console
$ pip install rq-dashboard
$ rq-dashboard
```

It can also be integrated easily in your Flask app.
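For example, a minimal sketch of the Flask integration might look like the following (it relies on the `rq_dashboard` package's blueprint and `default_settings` objects; check the rq-dashboard README for the authoritative setup):

```python
from flask import Flask
import rq_dashboard

app = Flask(__name__)

# Pull in rq-dashboard's default configuration (Redis URL, poll interval, ...)
app.config.from_object(rq_dashboard.default_settings)

# Serve the dashboard under /rq alongside the rest of your app
app.register_blueprint(rq_dashboard.blueprint, url_prefix='/rq')

if __name__ == '__main__':
    app.run()
```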
## Monitoring at the console To see what queues exist and what workers are active, just type `rq info`: ```console $ rq info high |██████████████████████████ 20 low |██████████████ 12 default |█████████ 8 3 queues, 45 jobs total Bricktop.19233 idle: low Bricktop.19232 idle: high, default, low Bricktop.18349 idle: default 3 workers, 3 queues ``` ## Querying by queue names You can also query for a subset of queues, if you're looking for specific ones: ```console $ rq info high default high |██████████████████████████ 20 default |█████████ 8 2 queues, 28 jobs total Bricktop.19232 idle: high, default Bricktop.18349 idle: default 2 workers, 2 queues ``` ## Organising workers by queue By default, `rq info` prints the workers that are currently active, and the queues that they are listening on, like this: ```console $ rq info ... Mickey.26421 idle: high, default Bricktop.25458 busy: high, default, low Turkish.25812 busy: high, default 3 workers, 3 queues ``` To see the same data, but organised by queue, use the `-R` (or `--by-queue`) flag: ```console $ rq info -R ... high: Bricktop.25458 (busy), Mickey.26421 (idle), Turkish.25812 (busy) low: Bricktop.25458 (busy) default: Bricktop.25458 (busy), Mickey.26421 (idle), Turkish.25812 (busy) failed: – 3 workers, 4 queues ``` ## Interval polling By default, `rq info` will print stats and exit. You can specify a poll interval, by using the `--interval` flag. ```console $ rq info --interval 1 ``` `rq info` will now update the screen every second. You may specify a float value to indicate fractions of seconds. Be aware that low interval values will increase the load on Redis, of course. ```console $ rq info --interval 0.5 ``` [dashboard]: https://github.com/nvie/rq-dashboard rq-1.16.2/docs/docs/results.md0000644000000000000000000001425213615410400013052 0ustar00--- title: "RQ: Results" layout: docs --- Enqueueing jobs is delayed execution of function calls. This means we're solving a problem, but are getting back a few in return. ## Dealing with Results Python functions may have return values, so jobs can have them, too. If a job returns a non-`None` return value, the worker will write that return value back to the job's Redis hash under the `result` key. The job's Redis hash itself will expire after 500 seconds by default after the job is finished. The party that enqueued the job gets back a `Job` instance as a result of the enqueueing itself. Such a `Job` object is a proxy object that is tied to the job's ID, to be able to poll for results. ### Return Value TTL Return values are written back to Redis with a limited lifetime (via a Redis expiring key), which is merely to avoid ever-growing Redis databases. The TTL value of the job result can be specified using the `result_ttl` keyword argument to `enqueue()` and `enqueue_call()` calls. It can also be used to disable the expiry altogether. You then are responsible for cleaning up jobs yourself, though, so be careful to use that. You can do the following: q.enqueue(foo) # result expires after 500 secs (the default) q.enqueue(foo, result_ttl=86400) # result expires after 1 day q.enqueue(foo, result_ttl=0) # result gets deleted immediately q.enqueue(foo, result_ttl=-1) # result never expires--you should delete jobs manually Additionally, you can use this for keeping around finished jobs without return values, which would be deleted immediately by default. q.enqueue(func_without_rv, result_ttl=500) # job kept explicitly ## Dealing with Exceptions Jobs can fail and throw exceptions. This is a fact of life. 
RQ deals with this in the following way. Furthermore, it should be possible to retry failed jobs. Typically, this is something that needs manual interpretation, since there is no automatic or reliable way of letting RQ judge whether it is safe for certain tasks to be retried or not.

When an exception is thrown inside a job, it is caught by the worker, serialized and stored under the job's Redis hash's `exc_info` key. A reference to the job is put in the `FailedJobRegistry`. By default, failed jobs will be kept for 1 year.

The job itself has some useful properties that can be used to aid inspection:

* the original creation time of the job
* the last enqueue date
* the originating queue
* a textual description of the desired function invocation
* the exception information

This makes it possible to inspect and interpret the problem manually and possibly resubmit the job.

## Dealing with Interruptions

When workers get killed in the polite way (Ctrl+C or `kill`), RQ tries hard not to lose any work. The current job is finished, after which the worker will stop further processing of jobs. This ensures that jobs always get a fair chance to finish themselves.

However, workers can be killed forcefully by `kill -9`, which will not give the workers a chance to finish the job gracefully or to put the job on the `failed` queue. Therefore, killing a worker forcefully could potentially lead to damage. Just sayin'.

If the worker gets killed while a job is running, it will eventually end up in `FailedJobRegistry` because a cleanup task will raise an `AbandonedJobError`. Before 0.14, the behavior was the same, but the cleanup task raised a `Moved to FailedJobRegistry at` error message instead.

## Dealing with Job Timeouts

By default, jobs should execute within 180 seconds. After that, the worker kills the work horse and puts the job onto the `failed` queue, indicating the job timed out.

If a job requires more (or less) time to complete, the default timeout period can be loosened (or tightened) by specifying it as a keyword argument to the `enqueue()` call, like so:

```python
q = Queue()
q.enqueue(mytask, args=(foo,), kwargs={'bar': qux}, job_timeout=600)  # 10 mins
```

You can also change the default timeout for jobs that are enqueued via specific queue instances at once, which can be useful for patterns like this:

```python
# High prio jobs should end in 8 secs, while low prio
# work may take up to 10 mins
high = Queue('high', default_timeout=8)  # 8 secs
low = Queue('low', default_timeout=600)  # 10 mins

# Individual jobs can still override these defaults
low.enqueue(really_really_slow, job_timeout=3600)  # 1 hr
```

Individual jobs can still specify an alternative timeout, as workers will respect these.

## Job Results
_New in version 1.12.0._

If a job is executed multiple times, you can access its execution history by calling `job.results()`. RQ will store up to 10 of the latest execution results.
Calling `job.latest_result()` will return the latest `Result` object, which has the following attributes:
* `type` - an enum of `SUCCESSFUL`, `FAILED` or `STOPPED`
* `created_at` - the time at which the result was created
* `return_value` - the job's return value, only present if the result type is `SUCCESSFUL`
* `exc_string` - the exception raised by the job, only present if the result type is `FAILED`
* `job_id`

```python
job = Job.fetch(id='my_id', connection=redis)
result = job.latest_result()  # returns Result(id=uid, type=SUCCESSFUL)
if result.type == result.Type.SUCCESSFUL:
    print(result.return_value)
else:
    print(result.exc_string)
```

Alternatively, you can also use `job.return_value()` as a shortcut to accessing the return value of the latest result. Note that `job.return_value()` will only return a non-`None` object if the latest result is a successful execution.

```python
job = Job.fetch(id='my_id', connection=redis)
print(job.return_value())  # Shortcut for job.latest_result().return_value
```

To access multiple results, use `job.results()`.

```python
job = Job.fetch(id='my_id', connection=redis)
for result in job.results():
    print(result.created_at, result.type)
```

_New in version 1.16.0._

To block until a result arrives, you can pass a timeout in seconds to `job.latest_result()`. If any results already exist, the latest result is returned immediately. If the timeout is reached without a result arriving, `None` is returned.

```python
job = queue.enqueue(sleep_for_10_seconds)
result = job.latest_result(timeout=60)  # Will hang for about 10 seconds.
```
rq-1.16.2/docs/docs/scheduling.md0000644000000000000000000001025513615410400013475 0ustar00---
title: "RQ: Scheduling Jobs"
layout: docs
---

_New in version 1.2.0._

If you need a battle-tested version of RQ job scheduling, please take a look at https://github.com/rq/rq-scheduler instead.

New in RQ 1.2.0 is `RQScheduler`, a built-in component that allows you to schedule jobs for future execution. This component was developed based on prior experience of developing the external `rq-scheduler` library. The goal of taking this component in house is to allow RQ to have job scheduling capabilities without:

1. Running a separate `rqscheduler` CLI command.
2. Worrying about a separate `Scheduler` class.

Running RQ workers with the scheduler component is simple:

```console
$ rq worker --with-scheduler
```

## Scheduling Jobs for Execution

There are two main APIs to schedule jobs for execution, `enqueue_at()` and `enqueue_in()`.

`queue.enqueue_at()` works almost like `queue.enqueue()`, except that it expects a datetime for its first argument.

```python
from datetime import datetime
from rq import Queue
from redis import Redis
from somewhere import say_hello

queue = Queue(name='default', connection=Redis())

# Schedules job to be run at 9:15, October 8th in the local timezone
job = queue.enqueue_at(datetime(2019, 10, 8, 9, 15), say_hello)
```

Note that if you pass in a naive datetime object, RQ will automatically convert it to the local timezone.

`queue.enqueue_in()` accepts a `timedelta` as its first argument.

```python
from datetime import timedelta
from rq import Queue
from redis import Redis
from somewhere import say_hello

queue = Queue(name='default', connection=Redis())

# Schedules job to be run in 10 seconds
job = queue.enqueue_in(timedelta(seconds=10), say_hello)
```

Jobs that are scheduled for execution are not placed in the queue, but they are stored in `ScheduledJobRegistry`.
```python
from datetime import timedelta

from redis import Redis
from rq import Queue
from rq.registry import ScheduledJobRegistry
from somewhere import say_nothing

redis = Redis()
queue = Queue(name='default', connection=redis)

job = queue.enqueue_in(timedelta(seconds=10), say_nothing)
print(job in queue)  # Outputs False as job is not enqueued

registry = ScheduledJobRegistry(queue=queue)
print(job in registry)  # Outputs True as job is placed in ScheduledJobRegistry
```

## Running the Scheduler

If you use RQ's scheduling features, you need to run RQ workers with the scheduler component enabled.

```console
$ rq worker --with-scheduler
```

You can also run a worker with the scheduler enabled in a programmatic way.

```python
from rq import Worker, Queue
from redis import Redis

redis = Redis()

queue = Queue(connection=redis)
worker = Worker(queues=[queue], connection=redis)
worker.work(with_scheduler=True)
```

Only a single scheduler can run for a specific queue at any one time. If you run multiple workers with the scheduler enabled, only one scheduler will be actively working for a given queue.

Active schedulers are responsible for enqueueing scheduled jobs. Active schedulers will check for scheduled jobs once every second.

Idle schedulers will periodically (every 15 minutes) check whether the queues they're responsible for have active schedulers. If they don't, one of the idle schedulers will start working. This way, if a worker with an active scheduler dies, the scheduling work will be picked up by other workers with the scheduling component enabled.

## Safe importing of the worker module

When running the worker programmatically with the scheduler, you must keep in mind that the import must be protected with `if __name__ == '__main__'`. The scheduler runs in its own process (using `multiprocessing` from the stdlib), so the newly spawned process must be able to safely import the module without causing any side effects (such as starting a new process on top of the main one).

```python
...

# When running `with_scheduler=True` this is necessary
if __name__ == '__main__':
    worker = Worker(queues=[queue], connection=redis)
    worker.work(with_scheduler=True)

...

# When running without the scheduler this is fine
worker = Worker(queues=[queue], connection=redis)
worker.work()
```

More information on the Python official docs [here](https://docs.python.org/3.7/library/multiprocessing.html#the-spawn-and-forkserver-start-methods).
rq-1.16.2/docs/docs/testing.md0000644000000000000000000000355013615410400013025 0ustar00---
title: "RQ: Testing"
layout: docs
---

## Workers inside unit tests

You may wish to include your RQ tasks inside unit tests. However, many frameworks (such as Django) use in-memory databases, which do not play nicely with the default `fork()` behaviour of RQ.

Therefore, you must use the `SimpleWorker` class to avoid `fork()`:

```python
from redis import Redis
from rq import SimpleWorker, Queue

queue = Queue(connection=Redis())
queue.enqueue(my_long_running_job)
worker = SimpleWorker([queue], connection=queue.connection)
worker.work(burst=True)  # Runs enqueued job
# Check for result...
```

## Testing on Windows

If you are testing on a Windows machine you can use the approach above, but with a slight tweak. You will need to subclass `SimpleWorker` to override the default timeout mechanism of the worker.

Reason: the Windows OS does not implement some underlying signals utilized by the default `SimpleWorker`.
To subclass SimpleWorker for Windows you can do the following: ```python from rq import SimpleWorker from rq.timeouts import TimerDeathPenalty class WindowsSimpleWorker(SimpleWorker): death_penalty_class = TimerDeathPenalty ``` Now you can use WindowsSimpleWorker for running tasks on Windows. ## Running Jobs in unit tests Another solution for testing purposes is to use the `is_async=False` queue parameter, that instructs it to instantly perform the job in the same thread instead of dispatching it to the workers. Workers are not required anymore. Additionally, we can use fake redis to mock a redis instance, so we don't have to run a redis server separately. The instance of the fake redis server can be directly passed as the connection argument to the queue: ```python from fakeredis import FakeStrictRedis from rq import Queue queue = Queue(is_async=False, connection=FakeStrictRedis()) job = queue.enqueue(my_long_running_job) assert job.is_finished ``` rq-1.16.2/docs/docs/workers.md0000644000000000000000000004111313615410400013041 0ustar00--- title: "RQ: Workers" layout: docs --- A worker is a Python process that typically runs in the background and exists solely as a work horse to perform lengthy or blocking tasks that you don't want to perform inside web processes. ## Starting Workers To start crunching work, simply start a worker from the root of your project directory: ```console $ rq worker high default low *** Listening for work on high, default, low Got send_newsletter('me@nvie.com') from default Job ended normally without result *** Listening for work on high, default, low ... ``` Workers will read jobs from the given queues (the order is important) in an endless loop, waiting for new work to arrive when all jobs are done. Each worker will process a single job at a time. Within a worker, there is no concurrent processing going on. If you want to perform jobs concurrently, simply start more workers. You should use process managers like [Supervisor](/patterns/supervisor/) or [systemd](/patterns/systemd/) to run RQ workers in production. ### Burst Mode By default, workers will start working immediately and will block and wait for new work when they run out of work. Workers can also be started in _burst mode_ to finish all currently available work and quit as soon as all given queues are emptied. ```console $ rq worker --burst high default low *** Listening for work on high, default, low Got send_newsletter('me@nvie.com') from default Job ended normally without result No more work, burst finished. Registering death. ``` This can be useful for batch work that needs to be processed periodically, or just to scale up your workers temporarily during peak periods. ### Worker Arguments In addition to `--burst`, `rq worker` also accepts these arguments: * `--url` or `-u`: URL describing Redis connection details (e.g `rq worker --url redis://:secrets@example.com:1234/9` or `rq worker --url unix:///var/run/redis/redis.sock`) * `--burst` or `-b`: run worker in burst mode (stops after all jobs in queue have been processed). * `--path` or `-P`: multiple import paths are supported (e.g `rq worker --path foo --path bar`) * `--config` or `-c`: path to module containing RQ settings. * `--results-ttl`: job results will be kept for this number of seconds (defaults to 500). * `--worker-class` or `-w`: RQ Worker class to use (e.g `rq worker --worker-class 'foo.bar.MyWorker'`) * `--job-class` or `-j`: RQ Job class to use. * `--queue-class`: RQ Queue class to use. 
* `--connection-class`: Redis connection class to use, defaults to `redis.StrictRedis`.
* `--log-format`: Format for the worker logs, defaults to `'%(asctime)s %(message)s'`
* `--date-format`: Datetime format for the worker logs, defaults to `'%H:%M:%S'`
* `--disable-job-desc-logging`: Turn off job description logging.
* `--max-jobs`: Maximum number of jobs to execute.

_New in version 1.8.0._
* `--serializer`: Path to serializer object (e.g. "rq.serializers.DefaultSerializer" or "rq.serializers.JSONSerializer")

_New in version 1.14.0._
* `--dequeue-strategy`: The strategy to dequeue jobs from multiple queues (one of `default`, `random` or `round_robin`, defaults to `default`)
* `--max-idle-time`: if specified, the worker will wait for X seconds for a job to arrive before shutting down.
* `--maintenance-interval`: defaults to 600 seconds. Runs maintenance tasks every X seconds.

## Inside the worker

### The Worker Lifecycle

The life-cycle of a worker consists of a few phases:

1. _Boot_. Loading the Python environment.
2. _Birth registration_. The worker registers itself with the system so it knows of this worker.
3. _Start listening_. A job is popped from any of the given Redis queues. If all queues are empty and the worker is running in burst mode, quit now. Else, wait until jobs arrive.
4. _Prepare job execution_. The worker tells the system that it will begin work by setting its status to `busy` and registers the job in the `StartedJobRegistry`.
5. _Fork a child process._ A child process (the "work horse") is forked off to do the actual work in a fail-safe context.
6. _Process work_. This performs the actual job work in the work horse.
7. _Cleanup job execution_. The worker sets its status to `idle` and sets both the job and its result to expire based on `result_ttl`. The job is also removed from `StartedJobRegistry` and added to `FinishedJobRegistry` in the case of successful execution, or `FailedJobRegistry` in the case of failure.
8. _Loop_. Repeat from step 3.

### Performance Notes

Basically the `rq worker` shell script is a simple fetch-fork-execute loop. When a lot of your jobs do lengthy setups, or they all depend on the same set of modules, you pay this overhead each time you run a job (since you're doing the import _after_ the moment of forking). This is clean, because RQ won't ever leak memory this way, but also slow.

A pattern you can use to improve the throughput performance for these kinds of jobs can be to import the necessary modules _before_ the fork. There is no way of telling RQ workers to perform this set up for you, but you can do it yourself before starting the work loop.

To do this, provide your own worker script (instead of using `rq worker`). A simple implementation example:

```python
#!/usr/bin/env python

from redis import Redis
from rq import Worker

# Preload libraries
import library_that_you_want_preloaded

# Provide the worker with the list of queues (str) to listen to.
w = Worker(['default'], connection=Redis())
w.work()
```

### Worker Names

Workers are registered to the system under their names, which are generated randomly during instantiation (see [monitoring][m]). To override this default, specify the name when starting the worker, or use the `--name` CLI option.
```python
from redis import Redis
from rq import Queue, Worker

redis = Redis()
queue = Queue('queue_name')

# Start a worker with a custom name
worker = Worker([queue], connection=redis, name='foo')
```

[m]: /docs/monitoring/

### Retrieving Worker Information

`Worker` instances store their runtime information in Redis. Here's how to retrieve them:

```python
from redis import Redis
from rq import Queue, Worker

# Returns all workers registered in this connection
redis = Redis()
workers = Worker.all(connection=redis)

# Returns all workers in this queue (new in version 0.10.0)
queue = Queue('queue_name')
workers = Worker.all(queue=queue)
worker = workers[0]
print(worker.name)
print('Successful jobs: %s' % worker.successful_job_count)
print('Failed jobs: %s' % worker.failed_job_count)
print('Total working time: %s' % worker.total_working_time)  # In seconds
```

Aside from `worker.name`, workers also have the following properties:
* `hostname` - the host where this worker is run
* `pid` - worker's process ID
* `queues` - queues on which this worker is listening for jobs
* `state` - possible states are `suspended`, `started`, `busy` and `idle`
* `current_job` - the job it's currently executing (if any)
* `last_heartbeat` - the last time this worker was seen
* `birth_date` - time of worker's instantiation
* `successful_job_count` - number of jobs finished successfully
* `failed_job_count` - number of failed jobs processed
* `total_working_time` - amount of time spent executing jobs, in seconds

If you only want to know the number of workers for monitoring purposes, `Worker.count()` is much more performant.

```python
from redis import Redis
from rq import Queue, Worker

redis = Redis()

# Count the number of workers in this Redis connection
workers = Worker.count(connection=redis)

# Count the number of workers for a specific queue
queue = Queue('queue_name', connection=redis)
workers = Worker.count(queue=queue)
```

## Worker with Custom Serializer

When creating a worker, you can pass in a custom serializer that will be implicitly passed to the queue. Serializers used should have at least `loads` and `dumps` methods. An example of creating a custom serializer class can be found in serializers.py (`rq.serializers.JSONSerializer`). The default serializer used is `pickle`.

```python
from rq import Worker
from rq.serializers import JSONSerializer

worker = Worker('foo', serializer=JSONSerializer)
```

or when creating from a queue

```python
from rq import Queue
from rq.serializers import JSONSerializer

queue = Queue('foo', serializer=JSONSerializer)
```

Queues will now use the custom serializer.

## Better worker process title

The worker process will have a better title (as displayed by system tools such as `ps` and `top`) after you install the third-party package `setproctitle`:

```sh
pip install setproctitle
```

## Taking Down Workers

If, at any time, the worker receives `SIGINT` (via Ctrl+C) or `SIGTERM` (via `kill`), the worker waits until the currently running task is finished, stops the work loop and gracefully registers its own death.

If, during this takedown phase, `SIGINT` or `SIGTERM` is received again, the worker will forcefully terminate the child process (sending it `SIGKILL`), but will still try to register its own death.
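For instance, a warm shutdown can be scripted from the shell. This is a sketch that assumes the worker was started with the `--pid` option (which writes the worker's process ID to a file):

```console
# Start a worker and record its PID
$ rq worker --pid /tmp/rq-worker.pid high default low

# From another shell: request a warm shutdown.
# The worker finishes its current job, then exits.
$ kill -SIGTERM $(cat /tmp/rq-worker.pid)
```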
## Using a Config File

If you'd like to configure `rq worker` via a configuration file instead of through command line arguments, you can do this by creating a Python file like `settings.py`:

```python
REDIS_URL = 'redis://localhost:6379/1'

# You can also specify the Redis DB to use
# REDIS_HOST = 'redis.example.com'
# REDIS_PORT = 6380
# REDIS_DB = 3
# REDIS_PASSWORD = 'very secret'

# Queues to listen on
QUEUES = ['high', 'default', 'low']

# If you're using Sentry to collect your runtime exceptions, you can use this
# to configure RQ for it in a single step
# The 'sync+' prefix is required for raven: https://github.com/nvie/rq/issues/350#issuecomment-43592410
SENTRY_DSN = 'sync+http://public:secret@example.com/1'

# If you want custom worker name
# NAME = 'worker-1024'

# If you want to use a dictConfig
# for more complex/consistent logging requirements.
DICT_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {
            'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
        },
    },
    'handlers': {
        'default': {
            'level': 'INFO',
            'formatter': 'standard',
            'class': 'logging.StreamHandler',
            'stream': 'ext://sys.stderr',  # Default is stderr
        },
    },
    'loggers': {
        'root': {  # root logger
            'handlers': ['default'],
            'level': 'INFO',
            'propagate': False
        },
    }
}
```

The example above shows all the options that are currently supported.

To specify which module to read settings from, use the `-c` option:

```console
$ rq worker -c settings
```

Alternatively, you can also pass in these options via environment variables.

## Custom Worker Classes

There are times when you want to customize the worker's behavior. Some of the more common requests so far are:

1. Managing database connectivity prior to running a job.
2. Using a job execution model that does not require `os.fork`.
3. The ability to use different concurrency models such as `multiprocessing` or `gevent`.
4. Using a custom strategy for dequeuing jobs from different queues. See [link](#round-robin-and-random-strategies-for-dequeuing-jobs-from-queues).

You can use the `-w` option to specify a different worker class to use:

```console
$ rq worker -w 'path.to.GeventWorker'
```

## Strategies for Dequeuing Jobs from Queues

The default worker considers the order of queues as their priority order. That is to say, if the supplied queues are `rq worker high low`, the worker will prioritize dequeueing jobs from `high` before `low`. To choose a different strategy, `rq` provides the `--dequeue-strategy / -ds` option.

In certain circumstances, you may want to dequeue jobs in a round-robin fashion. For example, when you have `q1`, `q2`, `q3`, the 1st dequeued job is taken from `q1`, the 2nd from `q2`, the 3rd from `q3`, the 4th from `q1`, the 5th from `q2` and so on. To implement this strategy, use the `-ds round_robin` argument. To dequeue jobs from the different queues randomly, use the `-ds random` argument.

Deprecation Warning: Those strategies were formerly implemented by the custom classes `rq.worker.RoundRobinWorker` and `rq.worker.RandomWorker`. As the `--dequeue-strategy` argument allows this option to be used with any worker, those worker classes are deprecated and will be removed in future versions.

## Custom Job and Queue Classes

You can tell the worker to use a custom class for jobs and queues using `--job-class` and/or `--queue-class`.

```console
$ rq worker --job-class 'custom.JobClass' --queue-class 'custom.QueueClass'
```

Don't forget to use those same classes when enqueueing the jobs.
For example:

```python
from rq import Queue
from rq.job import Job


class CustomJob(Job):
    pass


class CustomQueue(Queue):
    job_class = CustomJob


queue = CustomQueue('default', connection=redis_conn)
queue.enqueue(some_func)
```

## Custom DeathPenalty Classes

When a Job times out, the worker will try to kill it using the supplied `death_penalty_class` (default: `UnixSignalDeathPenalty`). This can be overridden if you wish to attempt to kill jobs in an application-specific or 'cleaner' manner.

DeathPenalty classes are constructed with the following arguments: `BaseDeathPenalty(timeout, JobTimeoutException, job_id=job.id)`.

## Custom Exception Handlers

If you need to handle errors differently for different types of jobs, or simply want to customize RQ's default error handling behavior, run `rq worker` using the `--exception-handler` option:

```console
$ rq worker --exception-handler 'path.to.my.ErrorHandler'

# Multiple exception handlers are also supported
$ rq worker --exception-handler 'path.to.my.ErrorHandler' --exception-handler 'another.ErrorHandler'
```

If you want to disable RQ's default exception handler, use the `--disable-default-exception-handler` option:

```console
$ rq worker --exception-handler 'path.to.my.ErrorHandler' --disable-default-exception-handler
```

## Sending Commands to Worker
_New in version 1.6.0._

Starting in version 1.6.0, workers use Redis' pubsub mechanism to listen to external commands while they're working. The following commands are currently implemented:

### Shutting Down a Worker

`send_shutdown_command()` instructs a worker to shut down. This is similar to sending a SIGINT signal to a worker.

```python
from redis import Redis
from rq.command import send_shutdown_command
from rq.worker import Worker

redis = Redis()

workers = Worker.all(redis)
for worker in workers:
    send_shutdown_command(redis, worker.name)  # Tells worker to shutdown
```

### Killing a Horse

`send_kill_horse_command()` tells a worker to cancel a currently executing job. If the worker is not currently working, this command will be ignored.

```python
from redis import Redis
from rq.command import send_kill_horse_command
from rq.worker import Worker, WorkerStatus

redis = Redis()

workers = Worker.all(redis)
for worker in workers:
    if worker.state == WorkerStatus.BUSY:
        send_kill_horse_command(redis, worker.name)
```

### Stopping a Job
_New in version 1.7.0._

You can use `send_stop_job_command()` to tell a worker to immediately stop a currently executing job. A job that's stopped will be sent to [FailedJobRegistry](https://python-rq.org/docs/results/#dealing-with-exceptions).

```python
from redis import Redis
from rq.command import send_stop_job_command

redis = Redis()

# This will raise an exception if job is invalid or not currently executing
send_stop_job_command(redis, job_id)
```

## Worker Pool

_New in version 1.14.0._
Note:

`WorkerPool` is still in beta, use at your own risk!

WorkerPool allows you to run multiple workers in a single CLI command.

Usage:

```shell
rq worker-pool high default low -n 3
```

Options:
* `-u` or `--url <url>`: as defined in [redis-py's docs](https://redis.readthedocs.io/en/stable/connections.html#redis.Redis.from_url).
* `-w` or `--worker-class <worker-class>`: defaults to `rq.worker.Worker`. `rq.worker.SimpleWorker` is also an option.
* `-n` or `--num-workers <num-workers>`: defaults to 2.
* `-b` or `--burst`: run workers in burst mode (stops after all jobs in queue have been processed).
* `-l` or `--logging-level <level>`: defaults to `INFO`. `DEBUG`, `WARNING`, `ERROR` and `CRITICAL` are supported.
* `-S` or `--serializer <serializer>`: defaults to `rq.serializers.DefaultSerializer`. `rq.serializers.JSONSerializer` is also included.
* `-P` or `--path <path>`: multiple import paths are supported (e.g. `rq worker --path foo --path bar`).
* `-j` or `--job-class <job-class>`: defaults to `rq.job.Job`.

rq-1.16.2/docs/img/bg.png        [binary PNG data omitted]
rq-1.16.2/docs/img/dashboard.png [binary PNG data omitted]
rq-1.16.2/docs/img/logo.png      [binary PNG data omitted]
rq-1.16.2/docs/img/logo2.png     [binary PNG data omitted]
Bxk WA^3mg1("ޞf{j>\0P~t,|I <`(M)]Ē%zV{/%#/Ȑ!*/۩>\D3%AJŚ܏_Sߝ)AFVΪIDq1:GX6|]qOϰ -MxӯOD+qH+'z89%P D`if"4U}T.HK&1j>:Xt hJaʹ;\*FP櫘{- ꮿ!dU,F_27o=G+0CE E3 `|8F d^⋽mLY`m%*E%YOW&@90PYG E0"0z(cD"C CDPx1 N8o?D~dIwJ-f=PS ؀yɛxf(+)B&X^S`U&h IPFoz< A(#"&ՆewͶ"DEk64k眃Je}Vd](w4)/ qĐ!IO${56yZJﺋWEsD'Tx0VfN@H}IE>x0!@QvzoPeaF (0}scV_ LG^{K %`fZ)>ᇸ+In:OkFs>0$"RM~fV Kl1<&yYχ LfcP 4  2AݭyF~>@9v=1ݶIZ6Bj &OB:הRnj"z5 Q;gkSjh𷅧ЙuX q0U6̲f.[uT&WVf(o@7/g_GmmE?K{ MbpS*(ŋu]7^nldo5x2Nj5tMMgSxIMo.dGbWfʍkffʱ2_}Ee|;#JC, *4rd0hӦ1#Yx틵.sϥQ:tj[v] w @֭#=~[E従Jɞ;5\ڇ5$7J:댬ITW3[ JQm/{^?Ne0?XL((Qd3D Uccyoti)TcOZcs3"!$m1 aDD3ED46k/ܼ~<~wqɵj H(ɗ唆#k[z0Ll "i}w(O"aB'P+nc@q1}9d@Y ϊۑ ޚegSP_ϢO-m3䮻kt2ɛs7 CU 6Y%4d d[u vYFwߝ4IJs.m>SW qI?3ƏJЈXy;싨[ޚmgD-8T#VG=4(/j9%(QuV2WVL]%+4FUU#!<߬)W+BJYxg}}饔yC (5;C&P~ʒ_:eC{JHcdۍu6f3MG65!m<fO[aںj"F!C!\#]%>Xwz/d(0Nۼ$iG)&矧qƌZYgӈsRBb 3'k>0?1s7$s|d*P#؇4QT|t* cyhv(aY4n+GDJ3Z(;| ӌ^.sрc5hZ#a} ,cEb NkF tL oQdH>- &Q?ujӅcb~8q$urB0ی5!rL#(,tƾS6/l,Mɓh0PaY}VTKE)E# ψN:[oI|w+0Z4 4^gz)[1{W B/+k… }TUU|cD*hw9=M`QRަ7F JA1"o,.n R9zTVui^ʰ٧o*ի)l#tc&+!Vd f^y>1@z"wYs*UUYR.=*<uXҥX^v " tmD>dF h1fq=Avm L#Xf}l! yD[r rmqϣq\@ͮ{y5;%Dcc9v-"r ֬4?K%Sc`sg.yyR5):YSC||_:B H~aw.=PD4k"zP- ^В ((=(I&SAXtYKo2/c6]7g^c#6~*3Iz'mJEGtz=%}$ʪ~ ^u Y_璭[^'~-Us]6gWMJ% R6TϘDӷFmiIҥF\y9M9hX#l?Q1E %T^ɖrM%QH no -V7qp^ tWJ/{tsċ55 c l%ä;Qd,y xvO@zrʀ]tY2Jt`\{xa§(mZ*+G۬C*YJI+Ƅܪ巳L,ɔwcvJdsm0J8ZII֡zd?4@!~rAP7ϋ5>DyH6C9nό!hmFVXbD$:`[>.;g$b3~FnbY}B%Ncejcv7ѿwͩƬnɞD>Ě5䐩>c܊i޲9,T\N0>q9.[PFtpl DP;Њ6 èu 'GHsNGcn Zg̠FmoѸ_;1s Hf5 uUP42k)2wn# SV*+۰6]P}YO'SZ4 wm.YU㠏Dzf|߰t/#w Ҍ>+{K1dEc` |yJAr"O&`OOu=|~= ʀ|a`XQT NzK qJc_`Yi(m6r R kCmYmEv]1[z3B//?>;Yk&UWQ+iEAmDrŪCP܊7$U[GifyNM1͹ŋi3!dGu Zi3G=}I뙂\E|N7^CޏOOKIPmEL'w$ʪ.m8 b̙ m}MpEz~3π ѿG[Pׁ)QN.QА=h z^w^ۼ}Rc6#`#^lt+hU6B!<$KehYS MMvd"NWQv=L.#b/%dt#좭-`(08h*N:|:frm-/8sa&}謳ЌU vݕ/礻ćF&i :RyGhnB>_"Sڽi\@8pKedSf׶Eo/8kR&]~9WWFL9hh^wi ZӤnۤ(.F ͏XYBh[+\ ys C;]|1͋e|`aAos>cMA:6x%^ݭ yK`ԩNs:tL Ef==KQ0itZ) Yk3XnF*F&dL7Q> vd*b hƟʹ+QV}|L:uL9j?spI:N*ixK,a{uF]*"/s=ds#!O;zv&L9}7d39D"!ĖTcǝ(.}[_0+HqK'O\Nu:g5+Wsop /D/)F6Ob,7H&P_+ n;԰t)/pLP`~ aQӆcnMi\wz'3KP v3{;ϐ.#&i7u4l`Og-*/gmD;FETM]e2r ;Q1<!NFeEhXV8k604ruܦzVsO:_d)yFGf2X߲O;Oq{4!n30Geװg/A9(J[ihhfnzjZyyXW3b=(eE7488 a/Xw B!~h]fbYin[Q 4baѥhq :t'L C*Vrkw1% h%Jd2C)?(> ޟFtnIas۶.斾Э0zs3+￯+a'F}f)l3h|#Pu#DN&!=Dbt ŮDs$ڢLr̸F2\(Ll+f(! Nk @ I77 :ME~ a'Xy1 =̬Ϭ8x;٘0KBBzĂ|;q";\{m\}>TyY /R5g1O] 0kXFc1Hgn VNȌR,oId^~Mu|O"nbcxTϛgT%heoT熞@C&&؄JuڛHdC!ՁdD,vǪ⤓wܑr is]^gLJ&tK'Mb2j`[Q0mNl^\)chfƫVa!z`v$0L ISZvwOtr?DJ^x!sU=3a8s?s'Nd@<(`P _aX} 0;%bD41et3DISOHS3-zJ䳻&Դy3y|5y2?kײC }OCl&q>TXe{2{ vv2o6VyyKrC wE,98CvֽKfDC+r|0_<8^7c:P^=tRf12cq E"k Ue}cUCJx<@ ʔT_޹עs~9=1ܲ1cPy~!yUݴ K<_oyį`& ;~:4>b~ i~zq~{LQ \F_`R| MM54cȑ[`kg͢(؉[͈sm9&)ejA`{i 93g܌557od]_ G-4Dh6a~+8?f S 5tz8Gb1^ܺ:l aFױeb/‹/fݼyBW . Temia#@W([J 2 QT" 2({]l:S rL=L[oStYx^L;TX#yb1Cк_ 4M,Xi&E_iH0MZ@k_nӤNoit $ yqaQ SX[!4U:W:mnƍ2D#7QúFSP`!J$!RR-$z=o]nY L!s* M7R Na-tc#N2C2U9139oDfamu=]CYw8(%br8$쟽Y u4띙Ir3@ܯek_=lD a}S0,s5`Ҵ ]?Փjc65P} DEv&q9fKAonT(7yZ'+? 8S`L}!@Cy[t`$*4ڤ: 7nyy>\ox9ľ-_si1F:7\u /͜vq?K#BeHE/5޶mB5_Z0{W  QqhD^:D^\З:T \2r• 9LVOfd6/Ftqxa,Ps#t4)^;WP wIc8#wI'،=/O=0~ Jx*.SߦAṸoml؈az"Pv+tYvPEhhkڹ^)<9SU!oBp]&f1!TU m10ˆ 8^yt$$-AOFp5tunyŶ\oǍʳ;"+׍s02!L2{~82Usx2M)<<"%p啌9__ʨ{'G @#: (7^S2AeoP,6}Q>%]L>hCw4c_}Hb0f?p(/ #+Ey ]m=drx}=DZ&I[;>s,3?C*q?\%PrZg7 }Rx<0G]T̠f4"pMT$Q}SN!jXJ~FtHw8v}jֽ7 Q#X~S;vCٯƓ7b< ~svnfc^xr(NJ7[HI ?<#ώ9wlir$gcgPDq "f~B)'Ÿ"p)8xQ4`G1 W=Y1i^xڻB)a7H\ÒqgpeYgёG|&h&JN5sU,lG`%orS;aӶхch^Ԋ(m /a~;e R_~IsZh tU,X AͤJr l;< 8aXeO?ŀCjT/fw@#b9Rl B*<Í7͝wD=XNW_1PJ0 4ǥwvu3w?čŰKo E+(Dy(7Tm-'L`)`@HŚpV@? 
2gOmKsđ0r<s02'OhP~v}6мt)iR7mfA!GI70" ԗ_ocψ}6w2 =ܥURwGҬ{=4$a0褓y%o*,vZu4~>Q5> $ʡԽΜψ ((@&8BJ-<X.^.;aZ@%Ȧ&RW# kjh7]hs6n[" *ulAc( JO?uByy|{ݨk/̳pמ/g[Uw?>{}tMo}>Π㳳k|B!MMUT@PC mO31e2JQxq˨+vLVN@g߾H0e WhlKnw0reK݇`]oHøNw Oz5{ x B),q#Š@싚?_!Ѕb'aӽ_Dܟ7`?Odח_fǨewXyhE%Ȕ_DղQoLJ֡SPqqܹDF#D[gɷ4|c.wI0:eԚ5,J*<]#Oh t ncmuײfd3&iF)^Ma=ߠ')=P*/5^fI!򋉽5La$-$ Ql}%jJBKW謳J!+/}add,7=@jylaz/#$k6Ǯ /Y<Qay?eooS3u*;^O /bo(7.|9{y| +(:V!R̊IJj0Q0XYۣk|;nx3aJ8h0[Дqҳ-#b`ॗ2 P2Gͼ803K*@bR`Ij)hcF٬)Bu#j0"@s=6$l/w 7¨fQs/ G;XYToFW_AYz 4ONt8<릾ll@i'mLkYEdLed4Eߞv-]J~$Ys0#^<Ϯapby]7/ ,ֿW&O#Gϳb =2RMBk $FGf\l&l;#{Lp?o1:=5JնkʀF4l9y@z[M06ponJ2Tsȡ~,ƿm]zXM2FvT؝sK_Zid.Z j}4sSJՓ뮬> U &oEYe6_f}Um @en&/'cADQ^~P*HB`.Xu=l="/YizzzizIIٞ'nvF@wz`t#hE|XݴnsIe͟62wDY&E@yiNY tK \i\#hmlAk`n{'|+3 e:ttUC?{^*@$+0-n'l:m]DrCYgR0v,_p6ffYiMfD@hlH8ʢc?`#|[ Cv2^CJʜY EI̛?: 3L@5tv0n j(QԢQ~R8Rn_m5]~_q2+DDfeC`fQϔ$jH~%+<'LVӐ!Xpt W2zRu̘<2OCvY*l*H{ '3P|I8pP^}眞W%JogQel0 qjj33ϑ2n_q9ii^2I/2IjP|FAܑ]RBC]Sz\ }B) vcǓNz!,i_&}Lխ#rUwՙJY~[pڶSڈ4ѓɾ 7tI$鰍SRLR2LJknE@(IìY4> ̞5p2M<ƕ)xhyy$ZZX8B2>X)2I:/S\HRPA)IJ%'Ӥm->Kmd ||:$][gH173yE82+ =Ҥ**yN%$K,&Wb[zSO%]aio%x2M 3jDdưC~/_tȍ(ƍ757))A,Z ݲ<$v /|zƙ4-wd'O=~<3/Oÿ_VW6:&*P=m;/?vy< ޛŷގHTb,#U_%*~#"EE|v}+N>Bzt0 'JJHXAq 7E/Lm"Q"Bd/ՑvW8,$Ej BpYSOceֹnB_x!׭7)2JqcM? %%;]~9Ky5)Z'z?e Z4E#r*R u-{UXȺW^,.foeWy` ;@VO=(.F++aRusV"ĮϲnTil6sL َ1[]H趹jhM4,B< 5xxGYҿVd݈ė!bMmxCO10uY477~z4dC -_jboOkY]dZA?&#bqa\/ {ao!RZJ{v`mۘE؛SݛGDVI5 1h BJd\p@~.g͢a\Npi2?@{eaP3cNWֽ&Z45!/TUs/5+a,g2}~u5 6#> MγK( վe=Loi򂍰(*Hsu?BÖ/eūū^KoSr5.u/-Yhe@MF 9\־2ZC j^o^g?GLs]pib>e? 1^PG 4/Enh6 ` 7baIŅ6Q]䐛W 0+3 DvLs@ ni#1ה譚hq}uO5Iko0!_A"±cYz wɥːc:_\+t4Lm"/80(9^Eo\z#8;Un?C^ bDBHBX@QxD R$,Y!|Dv,H",ŀDPd؞~WWU5.]v۾G*ݯzs=?(gnY9 @EP'_@H7vj3ȖETq ԂZ"kYE!|ڤSAel)hX:aZW#uizH{xx\H(j^r̫?Z93\XAL8*/|~m{_af t?Gj-Ńil9('W|ޗ٩'xWgHeR3"%P;7xTmp~8 عv~_.O1.˗klz9 %gs\8 vz`ݑ"ލ QSڑ'삻C몭5y 5v :h?gotbjj; %v #%."/" ,>̯@i )#:SV3py6lhGU98[hwm߁ ^cZ;h@${@'|EqߋQ iȵBZ(e . VY,>W:13)3Lуy7nGu2&tnuy4Ц҉mNy1eb簑2:{I!;k2)m@i-[2W凐g~y de) fO9]! Y6h-iV]诣mq]Gk6M_Gk ؊؊l(^W`$ $SgmmsWsWVǾk `0Dqi|ˈ䒍?c3h2s^hZhԍnB|%ltUdHe!9 S1L"Hg{Ij=k˞h0Өef)1< a73 ^O=k9)2%FSx1i52`^ƽnɌG]ϸǣ|6sAҏol:k,+cƌLtc ƌ3`3f4f̘Ә11c ƌ03f4f̘Ә11c ƌ03f4f̀i̘11cƆN]_pvIENDB`rq-1.16.2/docs/img/warning.png0000644000000000000000000001623713615410400013033 0ustar00PNG  IHDRddpTtEXtSoftwareAdobe ImageReadyqe<"iTXtXML:com.adobe.xmp AIDATx]xTUږgRAq]E]W 4U1) ˊQDJZ;ef|4Bi9> s_;w e!4H BH !XKq^-~ذaLܷoҽ{.E+V׫Wvm۶myYhHKKk9wQ|-/vZ2`N!`߿saa7tø~mڴ X e˖! z1w;-3ڷos5gH1`j$,K$4K%w(\ =z4*-p&U\@t /MxDK+ri{mǡC oJ*&BOGuYURܞrʃ2gΜAMF-[t*_"\HNt:":!nji^$=GV6//`wS٤Yq*9?Ɠr > ?cjVGn,tcڵ/ŐH$Q+"gW*tV(H@2<<…qWxiW)rqaiy(/$s+#jٳg翬 .L ']6>FITZIѐݎ,C_/bc7QGog@7k1kY: z]<dŸ 2ς=fR͉@Z5}%T,/(ѭYJLv0UKN+Ҕ1/f;ꅖ AZ=CXA6 R0g0|1^uTY]"ߜv1ueg/uImiĦ^_L$_@,:9!H5WJ(&*3| W^Z!Zl63 舺0@??X4XOt \/ ~v>rĨzK.=i! 
    rq>=0.13

Create a file called `run-worker.py` with the following content (assuming you
are using [Heroku Data For Redis][2] with Heroku):

```python
import os

import redis
from redis import Redis

from rq import Queue, Connection
from rq.worker import HerokuWorker as Worker

listen = ['high', 'default', 'low']

redis_url = os.getenv('REDIS_URL')
if not redis_url:
    raise RuntimeError("Set up Heroku Data For Redis first, \
make sure its config var is named 'REDIS_URL'.")

conn = redis.from_url(redis_url)

if __name__ == '__main__':
    with Connection(conn):
        worker = Worker(map(Queue, listen))
        worker.work()
```

Then, add the command to your `Procfile`:

    worker: python -u run-worker.py

Now, all you have to do is spin up a worker:

```console
$ heroku scale worker=1
```

If the `from_url` function fails to parse your credentials, you might need to
build the connection manually:

```python
conn = redis.Redis(
    host=host,
    password=password,
    port=port,
    ssl=True,
    ssl_cert_reqs=None
)
```

The details can be found on the 'Settings' page of your Redis add-on on the
Heroku dashboard.

To use the CLI instead:

```console
rq info --config rq_conf
```

where the `rq_conf.py` file looks like:

```python
REDIS_HOST = "host"
REDIS_PORT = port
REDIS_PASSWORD = "password"
REDIS_SSL = True
REDIS_SSL_CA_CERTS = None
REDIS_DB = 0
REDIS_SSL_CERT_REQS = None
```

## Putting RQ under foreman

[Foreman][3] is probably the process manager you use when you host your app
on Heroku, or just because it's a pretty friendly tool to use in development.

When using RQ under `foreman`, you may experience that the workers are a bit
quiet sometimes. This is because of Python buffering the output, so `foreman`
cannot (yet) echo it. Here's a related [Wiki page][4].

Just change the way you run your worker process, by adding the `-u` option
(to force stdin, stdout and stderr to be totally unbuffered):

    worker: python -u run-worker.py

[1]: https://heroku.com
[2]: https://devcenter.heroku.com/articles/heroku-redis
[3]: https://github.com/ddollar/foreman
[4]: https://github.com/ddollar/foreman/wiki/Missing-Output
[5]: https://elements.heroku.com/addons/heroku-redis
rq-1.16.2/docs/patterns/sentry.md0000644000000000000000000000224713615410400013606 0ustar00---
title: "RQ: Sending exceptions to Sentry"
layout: patterns
---

## Sending Exceptions to Sentry

[Sentry](https://www.getsentry.com/) is a popular exception gathering service.
RQ allows you to very easily send job exceptions to Sentry. To do this, you'll
need to have [sentry-sdk](https://pypi.org/project/sentry-sdk/) installed.

There are a few ways to start sending job exceptions to Sentry.

### Configuring Sentry Through CLI

Simply invoke the `rqworker` script using the ``--sentry-dsn`` argument.

```console
rq worker --sentry-dsn https://my-dsn@sentry.io/123
```

### Configuring Sentry Through a Config File

Declare `SENTRY_DSN` in RQ's config file like this:

```python
SENTRY_DSN = 'https://my-dsn@sentry.io/123'
```

And run RQ's worker with your config file:

```console
rq worker -c my_settings
```

Visit [this page](https://python-rq.org/docs/workers/#using-a-config-file)
to read more about running RQ using a config file.

### Configuring Sentry Through Environment Variable

Simply set `RQ_SENTRY_DSN` in your environment and RQ will automatically start
Sentry integration for you.

```console
RQ_SENTRY_DSN="https://my-dsn@sentry.io/123" rq worker
```
rq-1.16.2/docs/patterns/supervisor.md0000644000000000000000000000475313615410400014507 0ustar00---
title: "Putting RQ under supervisor"
layout: patterns
---

## Putting RQ under supervisor

[Supervisor][1] is a popular tool for managing long-running processes in
production environments. It can automatically restart any crashed processes,
and you gain a single dashboard for all of the running processes that make up
your product.

RQ can be used in combination with supervisor easily. You'd typically want to
use the following supervisor settings:

```
[program:myworker]
; Point the command to the specific rq command you want to run.
; If you use virtualenv, be sure to point it to
; /path/to/virtualenv/bin/rq
; Also, you probably want to include a settings module to configure this
; worker. For more info on that, see http://python-rq.org/docs/workers/
command=/path/to/rq worker -c mysettings high default low
; process_num is required if you specify >1 numprocs
process_name=%(program_name)s-%(process_num)s
; If you want to run more than one worker instance, increase this
numprocs=1
; This is the directory from which RQ is run. Be sure to point this to the
; directory where your source code is importable from
directory=/path/to
; RQ requires the TERM signal to perform a warm shutdown. If RQ does not die
; within 10 seconds, supervisor will forcefully kill it
stopsignal=TERM
; These are up to you
autostart=true
autorestart=true
```

### Conda environments

[Conda][2] virtualenvs can be used for RQ jobs which require non-Python
dependencies. You can use a similar approach as with regular virtualenvs.

```
[program:myworker]
; Point the command to the specific rq command you want to run.
; For conda virtual environments, install RQ into your env.
; Also, you probably want to include a settings module to configure this
; worker. For more info on that, see http://python-rq.org/docs/workers/
environment=PATH='/opt/conda/envs/myenv/bin'
command=/opt/conda/envs/myenv/bin/rq worker -c mysettings high default low
; process_num is required if you specify >1 numprocs
process_name=%(program_name)s-%(process_num)s
; If you want to run more than one worker instance, increase this
numprocs=1
; This is the directory from which RQ is run. Be sure to point this to the
; directory where your source code is importable from
directory=/path/to
; RQ requires the TERM signal to perform a warm shutdown. If RQ does not die
; within 10 seconds, supervisor will forcefully kill it
stopsignal=TERM
; These are up to you
autostart=true
autorestart=true
```
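
After adding or editing a program section, the stock `supervisorctl` commands
are all you need to load it and manage the workers (a sketch; `myworker` is
the program name used in the snippets above):

```console
$ supervisorctl reread           # re-parse changed config files
$ supervisorctl update           # apply the changes
$ supervisorctl status           # list processes and their state
$ supervisorctl restart myworker:*
```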

[1]: http://supervisord.org/
[2]: https://conda.io/docs/
rq-1.16.2/docs/patterns/systemd.md0000644000000000000000000000232413615410400013746 0ustar00---
title: "Running RQ Workers under systemd"
layout: patterns
---

## Running RQ Workers Under systemd

Systemd is a process manager that's built into many popular Linux
distributions.

To run multiple workers under systemd, you'll first need to create a unit
file. We can name this file `rqworker@.service` and put it in the
`/etc/systemd/system` directory (the location may differ between
distributions).

```
[Unit]
Description=RQ Worker Number %i
After=network.target

[Service]
Type=simple
WorkingDirectory=/path/to/working_directory
Environment=LANG=en_US.UTF-8
Environment=LC_ALL=en_US.UTF-8
Environment=LC_LANG=en_US.UTF-8
ExecStart=/path/to/rq worker -c config.py
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
PrivateTmp=true
Restart=always

[Install]
WantedBy=multi-user.target
```

If your unit file is properly installed, you should be able to start workers
by invoking `systemctl start rqworker@1.service`,
`systemctl start rqworker@2.service` from the terminal. You can also reload
all the workers by invoking `systemctl reload rqworker@*`.

You can read more about systemd and unit files
[here](https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files).
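
If you also want the workers to come back after a reboot, you can enable the
instances (illustrative; the instance numbers are up to you, and
`daemon-reload` is only needed after editing the unit file):

```console
$ sudo systemctl daemon-reload
$ sudo systemctl enable rqworker@1.service rqworker@2.service
```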
rq-1.16.2/rq/__init__.py0000644000000000000000000000104513615410400011676 0ustar00# ruff: noqa: F401
from .connections import Connection, get_current_connection, pop_connection, push_connection
from .job import Callback, Retry, cancel_job, get_current_job, requeue_job
from .queue import Queue
from .version import VERSION
from .worker import SimpleWorker, Worker

__all__ = [
    "Connection",
    "get_current_connection",
    "pop_connection",
    "push_connection",
    "Callback",
    "Retry",
    "cancel_job",
    "get_current_job",
    "requeue_job",
    "Queue",
    "SimpleWorker",
    "Worker",
]

__version__ = VERSION
rq-1.16.2/rq/command.py0000644000000000000000000001016213615410400011555 0ustar00import json
import os
import signal
from typing import TYPE_CHECKING, Any, Dict

if TYPE_CHECKING:
    from redis import Redis

    from .worker import Worker

from rq.exceptions import InvalidJobOperation
from rq.job import Job

PUBSUB_CHANNEL_TEMPLATE = 'rq:pubsub:%s'


def send_command(connection: 'Redis', worker_name: str, command: str, **kwargs):
    """
    Sends a command to a worker.
    A command is just a string, available commands are:
        - `shutdown`: Shuts down a worker
        - `kill-horse`: Command for the worker to kill the current working horse
        - `stop-job`: A command for the worker to stop the currently running job

    The command string will be parsed into a dictionary and sent to a PubSub Topic.
    Workers listen to the PubSub, and `handle` the specific command.

    Args:
        connection (Redis): A Redis Connection
        worker_name (str): The name of the worker
    """
    payload = {'command': command}
    if kwargs:
        payload.update(kwargs)
    connection.publish(PUBSUB_CHANNEL_TEMPLATE % worker_name, json.dumps(payload))


def parse_payload(payload: Dict[Any, Any]) -> Dict[Any, Any]:
    """
    Returns a dict of command data

    Args:
        payload (dict): The payload dict to parse.
    """
    return json.loads(payload.get('data').decode())


def send_shutdown_command(connection: 'Redis', worker_name: str):
    """
    Sends a command to shut down a worker.

    Args:
        connection (Redis): A Redis Connection
        worker_name (str): The name of the worker
    """
    send_command(connection, worker_name, 'shutdown')


def send_kill_horse_command(connection: 'Redis', worker_name: str):
    """
    Tell worker to kill its horse

    Args:
        connection (Redis): A Redis Connection
        worker_name (str): The name of the worker
    """
    send_command(connection, worker_name, 'kill-horse')


def send_stop_job_command(connection: 'Redis', job_id: str, serializer=None):
    """
    Instruct a worker to stop a job

    Args:
        connection (Redis): A Redis Connection
        job_id (str): The Job ID
        serializer (): The serializer
    """
    job = Job.fetch(job_id, connection=connection, serializer=serializer)
    if not job.worker_name:
        raise InvalidJobOperation('Job is not currently executing')
    send_command(connection, job.worker_name, 'stop-job', job_id=job_id)


def handle_command(worker: 'Worker', payload: Dict[Any, Any]):
    """Parses payload and routes commands to the worker.

    Args:
        worker (Worker): The worker to use
        payload (Dict[Any, Any]): The Payload
    """
    if payload['command'] == 'stop-job':
        handle_stop_job_command(worker, payload)
    elif payload['command'] == 'shutdown':
        handle_shutdown_command(worker)
    elif payload['command'] == 'kill-horse':
        handle_kill_worker_command(worker, payload)


def handle_shutdown_command(worker: 'Worker'):
    """Perform shutdown command.

    Args:
        worker (Worker): The worker to use.
    """
    worker.log.info('Received shutdown command, sending SIGINT signal.')
    pid = os.getpid()
    os.kill(pid, signal.SIGINT)


def handle_kill_worker_command(worker: 'Worker', payload: Dict[Any, Any]):
    """
    Stops work horse

    Args:
        worker (Worker): The worker to stop
        payload (Dict[Any, Any]): The payload.
    """
    worker.log.info('Received kill horse command.')
    if worker.horse_pid:
        worker.log.info('Killing horse...')
        worker.kill_horse()
    else:
        worker.log.info('Worker is not working, kill horse command ignored')


def handle_stop_job_command(worker: 'Worker', payload: Dict[Any, Any]):
    """Handles stop job command.

    Args:
        worker (Worker): The worker to use
        payload (Dict[Any, Any]): The payload.
    """
    job_id = payload.get('job_id')
    worker.log.debug('Received command to stop job %s', job_id)
    if job_id and worker.get_current_job_id() == job_id:
        # Sets the '_stopped_job_id' so that the job failure handler knows it
        # was intentional.
        worker._stopped_job_id = job_id
        worker.kill_horse()
    else:
        worker.log.info('Not working on job %s, command ignored.', job_id)
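
# Illustrative usage of the commands above (a sketch, not part of this
# module's API; assumes a running worker and a job id from your own app):
#
#     from redis import Redis
#     from rq.command import send_stop_job_command
#
#     redis = Redis()
#     # The worker executing this job receives 'stop-job' over PubSub and
#     # kills its work horse, so the job stops running.
#     send_stop_job_command(redis, job_id)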
rq-1.16.2/rq/connections.py0000644000000000000000000000720713615410400012467 0ustar00import warnings
from contextlib import contextmanager
from typing import Optional, Tuple, Type

from redis import Connection as RedisConnection
from redis import Redis

from .local import LocalStack


class NoRedisConnectionException(Exception):
    pass


@contextmanager
def Connection(connection: Optional['Redis'] = None):  # noqa
    """The context manager for handling connections in a clean way.
    It will push the connection to the LocalStack, and pop the connection
    when leaving the context.

    Example:

    ..codeblock:python::

        with Connection():
            w = Worker()
            w.work()

    This method is deprecated on version 1.12.0 and will be removed in the future.
    Pass the connection to the worker explicitly to handle Redis Connections.

    Args:
        connection (Optional[Redis], optional): A Redis Connection instance. Defaults to None.
    """
    warnings.warn(
        "The Connection context manager is deprecated. Use the `connection` parameter instead.",
        DeprecationWarning,
    )
    if connection is None:
        connection = Redis()
    push_connection(connection)
    try:
        yield
    finally:
        popped = pop_connection()
        assert (
            popped == connection
        ), 'Unexpected Redis connection was popped off the stack. Check your Redis connection setup.'


def push_connection(redis: 'Redis'):
    """
    Pushes the given connection to the stack.

    Args:
        redis (Redis): A Redis connection
    """
    warnings.warn(
        "The `push_connection` function is deprecated. Pass the `connection` explicitly instead.",
        DeprecationWarning,
    )
    _connection_stack.push(redis)


def pop_connection() -> 'Redis':
    """
    Pops the topmost connection from the stack.

    Returns:
        redis (Redis): A Redis connection
    """
    warnings.warn(
        "The `pop_connection` function is deprecated. Pass the `connection` explicitly instead.",
        DeprecationWarning,
    )
    return _connection_stack.pop()


def get_current_connection() -> 'Redis':
    """
    Returns the current Redis connection (i.e. the topmost on the
    connection stack).

    Returns:
        Redis: A Redis Connection
    """
    warnings.warn(
        "The `get_current_connection` function is deprecated. Pass the `connection` explicitly instead.",
        DeprecationWarning,
    )
    return _connection_stack.top


def resolve_connection(connection: Optional['Redis'] = None) -> 'Redis':
    """
    Convenience function to resolve the given or the current connection.
    Raises an exception if it cannot resolve a connection now.

    Args:
        connection (Optional[Redis], optional): A Redis connection. Defaults to None.

    Raises:
        NoRedisConnectionException: If connection couldn't be resolved.

    Returns:
        Redis: A Redis Connection
    """
    warnings.warn(
        "The `resolve_connection` function is deprecated. Pass the `connection` explicitly instead.",
        DeprecationWarning,
    )
    if connection is not None:
        return connection

    connection = get_current_connection()
    if connection is None:
        raise NoRedisConnectionException('Could not resolve a Redis connection')
    return connection


def parse_connection(connection: Redis) -> Tuple[Type[Redis], Type[RedisConnection], dict]:
    connection_pool_kwargs = connection.connection_pool.connection_kwargs.copy()
    connection_pool_class = connection.connection_pool.connection_class

    return connection.__class__, connection_pool_class, connection_pool_kwargs


_connection_stack = LocalStack()


__all__ = ['Connection', 'get_current_connection', 'push_connection', 'pop_connection']
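
# Since the stack-based helpers above are deprecated, the preferred pattern is
# to pass the connection explicitly wherever it's needed (a sketch):
#
#     from redis import Redis
#     from rq import Queue, Worker
#
#     redis = Redis()
#     queue = Queue('default', connection=redis)
#     worker = Worker([queue], connection=redis)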
rq-1.16.2/rq/decorators.py0000644000000000000000000001174713615410400012313 0ustar00from functools import wraps
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Type, Union

if TYPE_CHECKING:
    from redis import Redis

    from .job import Retry

from .defaults import DEFAULT_RESULT_TTL
from .job import Callback
from .queue import Queue
from .utils import backend_class


class job:  # noqa
    queue_class = Queue

    def __init__(
        self,
        queue: Union['Queue', str],
        connection: Optional['Redis'] = None,
        timeout: Optional[int] = None,
        result_ttl: int = DEFAULT_RESULT_TTL,
        ttl: Optional[int] = None,
        queue_class: Optional[Type['Queue']] = None,
        depends_on: Optional[List[Any]] = None,
        at_front: bool = False,
        meta: Optional[Dict[Any, Any]] = None,
        description: Optional[str] = None,
        failure_ttl: Optional[int] = None,
        retry: Optional['Retry'] = None,
        on_failure: Optional[Union[Callback, Callable[..., Any]]] = None,
        on_success: Optional[Union[Callback, Callable[..., Any]]] = None,
        on_stopped: Optional[Union[Callback, Callable[..., Any]]] = None,
    ):
        """A decorator that adds a ``delay`` method to the decorated function,
        which in turn creates a RQ job when called. Accepts a required
        ``queue`` argument that can be either a ``Queue`` instance or a string
        denoting the queue name. For example::

        ..codeblock:python::

            >>> @job(queue='default')
            >>> def simple_add(x, y):
            >>>     return x + y
            >>> ...
            >>> # Puts `simple_add` function into queue
            >>> simple_add.delay(1, 2)

        Args:
            queue (Union['Queue', str]): The queue to use, can be the Queue class itself, or the queue name (str)
            connection (Optional[Redis], optional): Redis Connection. Defaults to None.
            timeout (Optional[int], optional): Job timeout. Defaults to None.
            result_ttl (int, optional): Result time to live. Defaults to DEFAULT_RESULT_TTL.
            ttl (Optional[int], optional): Time to live. Defaults to None.
            queue_class (Optional[Queue], optional): A custom class that inherits from `Queue`. Defaults to None.
            depends_on (Optional[List[Any]], optional): A list of dependent jobs. Defaults to None.
            at_front (Optional[bool], optional): Whether to enqueue the job at front of the queue. Defaults to False.
            meta (Optional[Dict[Any, Any]], optional): Arbitrary metadata about the job, takes a dictionary.
                Defaults to None.
            description (Optional[str], optional): Job description. Defaults to None.
            failure_ttl (Optional[int], optional): Failure time to live. Defaults to None.
            retry (Optional[Retry], optional): A Retry object. Defaults to None.
            on_failure (Optional[Union[Callback, Callable[..., Any]]], optional): Callable to run on failure.
                Defaults to None.
            on_success (Optional[Union[Callback, Callable[..., Any]]], optional): Callable to run on success.
                Defaults to None.
            on_stopped (Optional[Union[Callback, Callable[..., Any]]], optional): Callable to run when stopped.
                Defaults to None.
        """
        self.queue = queue
        self.queue_class = backend_class(self, 'queue_class', override=queue_class)
        self.connection = connection
        self.timeout = timeout
        self.result_ttl = result_ttl
        self.ttl = ttl
        self.meta = meta
        self.depends_on = depends_on
        self.at_front = at_front
        self.description = description
        self.failure_ttl = failure_ttl
        self.retry = retry
        self.on_success = on_success
        self.on_failure = on_failure
        self.on_stopped = on_stopped

    def __call__(self, f):
        @wraps(f)
        def delay(*args, **kwargs):
            if isinstance(self.queue, str):
                queue = self.queue_class(name=self.queue, connection=self.connection)
            else:
                queue = self.queue

            depends_on = kwargs.pop('depends_on', None)
            job_id = kwargs.pop('job_id', None)
            at_front = kwargs.pop('at_front', False)

            if not depends_on:
                depends_on = self.depends_on

            if not at_front:
                at_front = self.at_front

            return queue.enqueue_call(
                f,
                args=args,
                kwargs=kwargs,
                timeout=self.timeout,
                result_ttl=self.result_ttl,
                ttl=self.ttl,
                depends_on=depends_on,
                job_id=job_id,
                at_front=at_front,
                meta=self.meta,
                description=self.description,
                failure_ttl=self.failure_ttl,
                retry=self.retry,
                on_failure=self.on_failure,
                on_success=self.on_success,
                on_stopped=self.on_stopped,
            )

        f.delay = delay
        return f
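
# A sketch of the decorator in use. Note that `depends_on`, `job_id` and
# `at_front` are consumed by `delay()` itself (popped from kwargs above) and
# are not passed through to the wrapped function:
#
#     from redis import Redis
#     from rq.decorators import job
#
#     @job('low', connection=Redis(), timeout=180)
#     def add(x, y):
#         return x + y
#
#     add.delay(3, 4, job_id='custom-add', at_front=True)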
rq-1.16.2/rq/defaults.py0000644000000000000000000000550713615410400011753 0ustar00DEFAULT_JOB_CLASS = 'rq.job.Job'
"""
The path for the default Job class to use.
Defaults to the main `Job` class within the `rq.job` module
"""

DEFAULT_QUEUE_CLASS = 'rq.Queue'
"""
The path for the default Queue class to use.
Defaults to the main `Queue` class within the `rq.queue` module
"""

DEFAULT_WORKER_CLASS = 'rq.Worker'
"""
The path for the default Worker class to use.
Defaults to the main `Worker` class within the `rq.worker` module
"""

DEFAULT_SERIALIZER_CLASS = 'rq.serializers.DefaultSerializer'
"""
The path for the default Serializer class to use.
Defaults to the main `DefaultSerializer` class within the `rq.serializers` module
"""

DEFAULT_CONNECTION_CLASS = 'redis.Redis'
"""
The path for the default Redis client class to use.
Defaults to the main `Redis` class within the `redis` module
As imported like `from redis import Redis`
"""

DEFAULT_WORKER_TTL = 420
"""
The default Time To Live (TTL) for the Worker in seconds
Defines the effective timeout period for a worker
"""

DEFAULT_JOB_MONITORING_INTERVAL = 30
"""
The interval in seconds for Job monitoring
"""

DEFAULT_RESULT_TTL = 500
"""
The Time To Live (TTL) in seconds to keep job results
Means that the results will be expired from Redis after `DEFAULT_RESULT_TTL` seconds
"""

DEFAULT_FAILURE_TTL = 31536000
"""
The Time To Live (TTL) in seconds to keep job failure information
Means that the failure information will be expired from Redis
after `DEFAULT_FAILURE_TTL` seconds. Defaults to 1 YEAR in seconds
"""

DEFAULT_SCHEDULER_FALLBACK_PERIOD = 120
"""
The amount in seconds it will take for a new scheduler
to pickup tasks after a scheduler has died.
This is used as a safety net to avoid race conditions and duplicates
when using multiple schedulers
"""

DEFAULT_MAINTENANCE_TASK_INTERVAL = 10 * 60
"""
The interval to run maintenance tasks in seconds. Defaults to 10 minutes.
"""

CALLBACK_TIMEOUT = 60
"""
The timeout period in seconds for Callback functions
Means that Functions used in `success_callback`, `stopped_callback`,
and `failure_callback` will timeout after N seconds
"""

DEFAULT_LOGGING_DATE_FORMAT = '%H:%M:%S'
"""
The Date Format to use for RQ logging.
Defaults to Hour:Minute:Seconds on 24hour format
eg.: `15:45:23`
"""

DEFAULT_LOGGING_FORMAT = '%(asctime)s %(message)s'
"""
The default Logging Format to use
Uses Python's default attributes as defined
https://docs.python.org/3/library/logging.html#logrecord-attributes
"""

DEFAULT_DEATH_PENALTY_CLASS = 'rq.timeouts.UnixSignalDeathPenalty'
"""
The path for the default Death Penalty class to use.
Defaults to the `UnixSignalDeathPenalty` class within the `rq.timeouts` module
"""

UNSERIALIZABLE_RETURN_VALUE_PAYLOAD = 'Unserializable return value'
"""
The value that we store in the job's _result property or in the Result's
return_value in case the return value of the actual job is not serializable
"""
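
# These module-level values are fallbacks; most of them can be overridden per
# job at enqueue time (a sketch, assuming an existing `queue` and `my_func`):
#
#     queue.enqueue(my_func, result_ttl=60, failure_ttl=3600, ttl=300, job_timeout=180)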
rq-1.16.2/rq/dependency.py0000644000000000000000000000172713615410400012264 0ustar00from typing import List

from redis.client import Pipeline
from redis.exceptions import WatchError

from .job import Job


class Dependency:
    @classmethod
    def get_jobs_with_met_dependencies(cls, jobs: List['Job'], pipeline: Pipeline):
        jobs_with_met_dependencies = []
        jobs_with_unmet_dependencies = []
        for job in jobs:
            while True:
                try:
                    # WATCH all dependency keys so that concurrent status
                    # changes retrigger the check instead of racing it.
                    pipeline.watch(*[Job.key_for(dependency_id) for dependency_id in job._dependency_ids])
                    job.register_dependency(pipeline=pipeline)
                    if job.dependencies_are_met(pipeline=pipeline):
                        jobs_with_met_dependencies.append(job)
                    else:
                        jobs_with_unmet_dependencies.append(job)
                    pipeline.execute()
                except WatchError:
                    continue
                break
        return jobs_with_met_dependencies, jobs_with_unmet_dependencies
rq-1.16.2/rq/exceptions.py0000644000000000000000000000103213615410400012316 0ustar00class NoSuchJobError(Exception):
    pass


class DeserializationError(Exception):
    pass


class InvalidJobDependency(Exception):
    pass


class InvalidJobOperationError(Exception):
    pass


class InvalidJobOperation(Exception):
    pass


class DequeueTimeout(Exception):
    pass


class ShutDownImminentException(Exception):
    def __init__(self, msg, extra_info):
        self.extra_info = extra_info
        super().__init__(msg)


class TimeoutFormatError(Exception):
    pass


class AbandonedJobError(Exception):
    pass
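
# Typical consumer-side handling of one of these exceptions (a sketch,
# assuming an existing `redis` connection):
#
#     from rq.job import Job
#     from rq.exceptions import NoSuchJobError
#
#     try:
#         job = Job.fetch('some-id', connection=redis)
#     except NoSuchJobError:
#         pass  # the job expired or never existed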
rq-1.16.2/rq/intermediate_queue.py0000644000000000000000000000740213615410400014020 0ustar00from datetime import datetime, timedelta, timezone
from typing import TYPE_CHECKING, List, Optional

from redis import Redis

from rq.utils import now

if TYPE_CHECKING:
    from .queue import Queue
    from .worker import BaseWorker


class IntermediateQueue(object):
    def __init__(self, queue_key: str, connection: Redis):
        self.queue_key = queue_key
        self.key = self.get_intermediate_queue_key(queue_key)
        self.connection = connection

    @classmethod
    def get_intermediate_queue_key(cls, queue_key: str) -> str:
        """Returns the intermediate queue key for a given queue key.

        Args:
            queue_key (str): The queue key

        Returns:
            str: The intermediate queue key
        """
        return f'{queue_key}:intermediate'

    def get_first_seen_key(self, job_id: str) -> str:
        """Returns the first seen key for a given job ID.

        Args:
            job_id (str): The job ID

        Returns:
            str: The first seen key
        """
        return f'{self.key}:first_seen:{job_id}'

    def set_first_seen(self, job_id: str) -> bool:
        """Sets the first seen timestamp for a job.

        Args:
            job_id (str): The job ID

        Returns:
            bool: Whether the key was set (it is only set if it did not already exist)
        """
        # TODO: job_id should be changed to execution ID in 2.0
        return bool(self.connection.set(self.get_first_seen_key(job_id), now().timestamp(), nx=True, ex=3600 * 24))

    def get_first_seen(self, job_id: str) -> Optional[datetime]:
        """Returns the first seen timestamp for a job.

        Args:
            job_id (str): The job ID

        Returns:
            Optional[datetime]: The timestamp
        """
        timestamp = self.connection.get(self.get_first_seen_key(job_id))
        if timestamp:
            return datetime.fromtimestamp(float(timestamp), tz=timezone.utc)
        return None

    def should_be_cleaned_up(self, job_id: str) -> bool:
        """Returns whether a job should be cleaned up.
        A job in the intermediate queue should be cleaned up if it has
        been there for more than 1 minute.

        Args:
            job_id (str): The job ID

        Returns:
            bool: Whether the job should be cleaned up
        """
        # TODO: should be changed to execution ID in 2.0
        first_seen = self.get_first_seen(job_id)
        if not first_seen:
            return False
        return now() - first_seen > timedelta(minutes=1)

    def get_job_ids(self) -> List[str]:
        """Returns the job IDs in the intermediate queue.

        Returns:
            List[str]: The job IDs
        """
        return [job_id.decode() for job_id in self.connection.lrange(self.key, 0, -1)]

    def remove(self, job_id: str) -> None:
        """Removes a job from the intermediate queue.

        Args:
            job_id (str): The job ID
        """
        self.connection.lrem(self.key, 1, job_id)

    def cleanup(self, worker: 'BaseWorker', queue: 'Queue') -> None:
        job_ids = self.get_job_ids()

        for job_id in job_ids:
            job = queue.fetch_job(job_id)

            if job_id not in queue.started_job_registry:
                if not job:
                    # If the job doesn't exist in the queue, we can safely
                    # remove it from the intermediate queue.
                    self.remove(job_id)
                    continue

                # If this is the first time we've seen this job, do nothing.
                # `set_first_seen` will return `True` if the key was set, `False` if it already existed.
                if self.set_first_seen(job_id):
                    continue

                if self.should_be_cleaned_up(job_id):
                    worker.handle_job_failure(job, queue, exc_string='Job was stuck in intermediate queue.')
                    self.remove(job_id)
rq-1.16.2/rq/job.py0000644000000000000000000020221613615410400010714 0ustar00import asyncio
import inspect
import json
import logging
import warnings
import zlib
from datetime import datetime, timedelta, timezone
from enum import Enum
from typing import TYPE_CHECKING, Any, Callable, Dict, Iterable, List, Optional, Tuple, Type, Union
from uuid import uuid4

from redis import WatchError

from .defaults import CALLBACK_TIMEOUT, UNSERIALIZABLE_RETURN_VALUE_PAYLOAD
from .timeouts import BaseDeathPenalty, JobTimeoutException

if TYPE_CHECKING:
    from redis import Redis
    from redis.client import Pipeline

    from .queue import Queue
    from .results import Result

from .connections import resolve_connection
from .exceptions import DeserializationError, InvalidJobOperation, NoSuchJobError
from .local import LocalStack
from .serializers import resolve_serializer
from .types import FunctionReferenceType, JobDependencyType
from .utils import (
    as_text,
    decode_redis_hash,
    ensure_list,
    get_call_string,
    get_version,
    import_attribute,
    parse_timeout,
    str_to_date,
    utcformat,
    utcnow,
)

logger = logging.getLogger("rq.job")


class JobStatus(str, Enum):
    """The Status of Job within its lifecycle at any given time."""

    QUEUED = 'queued'
    FINISHED = 'finished'
    FAILED = 'failed'
    STARTED = 'started'
    DEFERRED = 'deferred'
    SCHEDULED = 'scheduled'
    STOPPED = 'stopped'
    CANCELED = 'canceled'


class Dependency:
    def __init__(self, jobs: List[Union['Job', str]], allow_failure: bool = False, enqueue_at_front: bool = False):
        """The definition of a Dependency.

        Args:
            jobs (List[Union[Job, str]]): A list of Job instances or Job IDs.
                Anything different will raise a ValueError
            allow_failure (bool, optional): Whether to allow for failure when running the dependency,
                meaning, the dependencies should continue running even after one of them failed.
                Defaults to False.
            enqueue_at_front (bool, optional): Whether this dependency should be enqueued at the front of the queue.
                Defaults to False.

        Raises:
            ValueError: If the `jobs` param has anything different than `str` or `Job` class or the job list is empty
        """
        dependent_jobs = ensure_list(jobs)
        if not all(isinstance(job, Job) or isinstance(job, str) for job in dependent_jobs if job):
            raise ValueError("jobs: must contain objects of type Job and/or strings representing Job ids")
        elif len(dependent_jobs) < 1:
            raise ValueError("jobs: cannot be empty.")

        self.dependencies = dependent_jobs
        self.allow_failure = allow_failure
        self.enqueue_at_front = enqueue_at_front
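
# A sketch of the Dependency class in use (assumes an existing `queue`, a
# previously enqueued `first` job, and a `cleanup` function):
#
#     from rq.job import Dependency
#
#     dep = Dependency(jobs=[first], allow_failure=True)
#     queue.enqueue(cleanup, depends_on=dep)  # runs even if `first` fails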

UNEVALUATED = object()
"""Sentinel value to mark that some of our lazily evaluated properties have not
yet been evaluated.
"""


def cancel_job(job_id: str, connection: Optional['Redis'] = None, serializer=None, enqueue_dependents: bool = False):
    """Cancels the job with the given job ID, preventing execution.
    Use with caution. This will discard any job info (i.e. it can't be requeued later).

    Args:
        job_id (str): The Job ID
        connection (Optional[Redis], optional): The Redis Connection. Defaults to None.
        serializer (str, optional): The string of the path to the serializer to use. Defaults to None.
        enqueue_dependents (bool, optional): Whether dependents should still be enqueued. Defaults to False.
    """
    Job.fetch(job_id, connection=connection, serializer=serializer).cancel(enqueue_dependents=enqueue_dependents)


def get_current_job(connection: Optional['Redis'] = None, job_class: Optional['Job'] = None) -> Optional['Job']:
    """Returns the Job instance that is currently being executed.
    If this function is invoked from outside a job context, None is returned.

    Args:
        connection (Optional[Redis], optional): The connection to use. Defaults to None.
        job_class (Optional[Job], optional): The job class (DEPRECATED). Defaults to None.

    Returns:
        job (Optional[Job]): The current Job running
    """
    if connection:
        warnings.warn("connection argument for get_current_job is deprecated.", DeprecationWarning)
    if job_class:
        warnings.warn("job_class argument for get_current_job is deprecated.", DeprecationWarning)
    return _job_stack.top


def requeue_job(job_id: str, connection: 'Redis', serializer=None) -> 'Job':
    """Fetches a Job by ID and requeues it using the `requeue()` method.

    Args:
        job_id (str): The Job ID that should be requeued.
        connection (Redis): The Redis Connection to use
        serializer (Optional[str], optional): The serializer. Defaults to None.

    Returns:
        Job: The requeued Job object.
    """
    job = Job.fetch(job_id, connection=connection, serializer=serializer)
    return job.requeue()
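
# `get_current_job` is meant to be called from inside a running task; a
# sketch (assumes `my_task` is enqueued and executed by a worker):
#
#     from rq import get_current_job
#
#     def my_task():
#         job = get_current_job()
#         job.meta['progress'] = 50
#         job.save_meta()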

class Job:
    """A Job is just a convenient datastructure to pass around job (meta) data."""

    redis_job_namespace_prefix = 'rq:job:'

    @classmethod
    def create(
        cls,
        func: FunctionReferenceType,
        args: Union[List[Any], Optional[Tuple]] = None,
        kwargs: Optional[Dict[str, Any]] = None,
        connection: Optional['Redis'] = None,
        result_ttl: Optional[int] = None,
        ttl: Optional[int] = None,
        status: Optional[JobStatus] = None,
        description: Optional[str] = None,
        depends_on: Optional[JobDependencyType] = None,
        timeout: Optional[int] = None,
        id: Optional[str] = None,
        origin: str = '',
        meta: Optional[Dict[str, Any]] = None,
        failure_ttl: Optional[int] = None,
        serializer=None,
        *,
        on_success: Optional[Union['Callback', Callable[..., Any]]] = None,  # Callable is deprecated
        on_failure: Optional[Union['Callback', Callable[..., Any]]] = None,  # Callable is deprecated
        on_stopped: Optional[Union['Callback', Callable[..., Any]]] = None,  # Callable is deprecated
    ) -> 'Job':
        """Creates a new Job instance for the given function, arguments, and keyword arguments.

        Args:
            func (FunctionReference): The function/method/callable for the Job. This can be
                a reference to a concrete callable or a string representing the path of function/method to be
                imported. Effectively this is the only required attribute when creating a new Job.
            args (Union[List[Any], Optional[Tuple]], optional): A Tuple / List of positional arguments to pass the
                callable. Defaults to None, meaning no args being passed.
            kwargs (Optional[Dict], optional): A Dictionary of keyword arguments to pass the callable.
                Defaults to None, meaning no kwargs being passed.
            connection (Optional[Redis], optional): The Redis connection to use. Defaults to None.
                This will be "resolved" using the `resolve_connection` function when initializing the Job Class.
            result_ttl (Optional[int], optional): The amount of time in seconds the results should live.
                Defaults to None.
            ttl (Optional[int], optional): The Time To Live (TTL) for the job itself. Defaults to None.
            status (JobStatus, optional): The Job Status. Defaults to None.
            description (Optional[str], optional): The Job Description. Defaults to None.
            depends_on (Union['Dependency', List[Union['Dependency', 'Job']]], optional): What the job depends on.
                This accepts a variety of different arguments including a `Dependency`, a list of `Dependency`,
                or a `Job` / list of `Job`. Defaults to None.
            timeout (Optional[int], optional): The amount of time in seconds that should be a hard limit for a job
                execution. Defaults to None.
            id (Optional[str], optional): An Optional ID (str) for the Job. Defaults to None.
            origin (Optional[str], optional): The queue of origin. Defaults to None.
            meta (Optional[Dict[str, Any]], optional): Custom metadata about the job, takes a dictionary.
                Defaults to None.
            failure_ttl (Optional[int], optional): The time to live in seconds for failed-jobs information.
                Defaults to None.
            serializer (Optional[str], optional): The serializer class path to use. Should be a string with the
                import path for the serializer to use. eg. `mymodule.myfile.MySerializer` Defaults to None.
            on_success (Optional[Union['Callback', Callable[..., Any]]], optional): A callback to run when/if the Job
                finishes successfully. Defaults to None. Passing a callable is deprecated.
            on_failure (Optional[Union['Callback', Callable[..., Any]]], optional): A callback to run when/if the Job
                fails. Defaults to None. Passing a callable is deprecated.
            on_stopped (Optional[Union['Callback', Callable[..., Any]]], optional): A callback to run when/if the Job
                is stopped. Defaults to None. Passing a callable is deprecated.

        Raises:
            TypeError: If `args` is not a tuple/list
            TypeError: If `kwargs` is not a dict
            TypeError: If the `func` is something other than a string or a Callable reference
            ValueError: If `on_failure` is not a Callback or function or string
            ValueError: If `on_success` is not a Callback or function or string
            ValueError: If `on_stopped` is not a Callback or function or string

        Returns:
            Job: A job instance.
        """
        if args is None:
            args = ()
        if kwargs is None:
            kwargs = {}

        if not isinstance(args, (tuple, list)):
            raise TypeError('{0!r} is not a valid args list'.format(args))
        if not isinstance(kwargs, dict):
            raise TypeError('{0!r} is not a valid kwargs dict'.format(kwargs))

        job = cls(connection=connection, serializer=serializer)
        if id is not None:
            job.set_id(id)
        if origin:
            job.origin = origin

        # Set the core job tuple properties
        job._instance = None
        if inspect.ismethod(func):
            job._instance = func.__self__
            job._func_name = func.__name__
        elif inspect.isfunction(func) or inspect.isbuiltin(func):
            job._func_name = '{0}.{1}'.format(func.__module__, func.__qualname__)
        elif isinstance(func, str):
            job._func_name = as_text(func)
        elif not inspect.isclass(func) and hasattr(func, '__call__'):  # a callable class instance
            job._instance = func
            job._func_name = '__call__'
        else:
            raise TypeError('Expected a callable or a string, but got: {0}'.format(func))
        job._args = args
        job._kwargs = kwargs

        if on_success:
            if not isinstance(on_success, Callback):
                warnings.warn(
                    'Passing a string or function for `on_success` is deprecated, pass `Callback` instead',
                    DeprecationWarning,
                )
                on_success = Callback(on_success)  # backward compatibility
            job._success_callback_name = on_success.name
            job._success_callback_timeout = on_success.timeout

        if on_failure:
            if not isinstance(on_failure, Callback):
                warnings.warn(
                    'Passing a string or function for `on_failure` is deprecated, pass `Callback` instead',
                    DeprecationWarning,
                )
                on_failure = Callback(on_failure)  # backward compatibility
            job._failure_callback_name = on_failure.name
            job._failure_callback_timeout = on_failure.timeout

        if on_stopped:
            if not isinstance(on_stopped, Callback):
                warnings.warn(
                    'Passing a string or function for `on_stopped` is deprecated, pass `Callback` instead',
                    DeprecationWarning,
                )
                on_stopped = Callback(on_stopped)  # backward compatibility
            job._stopped_callback_name = on_stopped.name
            job._stopped_callback_timeout = on_stopped.timeout

        # Extra meta data
        job.description = description or job.get_call_string()
        job.result_ttl = parse_timeout(result_ttl)
        job.failure_ttl = parse_timeout(failure_ttl)
        job.ttl = parse_timeout(ttl)
        job.timeout = parse_timeout(timeout)
        job._status = status
        job.meta = meta or {}

        # dependency could be job instance or id, or iterable thereof
        if depends_on is not None:
            depends_on = ensure_list(depends_on)
            depends_on_list = []
            for depends_on_item in depends_on:
                if isinstance(depends_on_item, Dependency):
                    # If a Dependency has enqueue_at_front or allow_failure set to True, these behaviors are used for
                    # all dependencies.
                    job.enqueue_at_front = job.enqueue_at_front or depends_on_item.enqueue_at_front
                    job.allow_dependency_failures = job.allow_dependency_failures or depends_on_item.allow_failure
                    depends_on_list.extend(depends_on_item.dependencies)
                else:
                    depends_on_list.extend(ensure_list(depends_on_item))
            job._dependency_ids = [dep.id if isinstance(dep, Job) else dep for dep in depends_on_list]

        return job
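
    # Note that `create` only builds the job; pairing it with a queue is what
    # persists and runs it (a sketch, assuming `queue = Queue(connection=redis)`):
    #
    #     job = Job.create(my_func, args=(1,), connection=redis)
    #     queue.enqueue_job(job)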
""" if args is None: args = () if kwargs is None: kwargs = {} if not isinstance(args, (tuple, list)): raise TypeError('{0!r} is not a valid args list'.format(args)) if not isinstance(kwargs, dict): raise TypeError('{0!r} is not a valid kwargs dict'.format(kwargs)) job = cls(connection=connection, serializer=serializer) if id is not None: job.set_id(id) if origin: job.origin = origin # Set the core job tuple properties job._instance = None if inspect.ismethod(func): job._instance = func.__self__ job._func_name = func.__name__ elif inspect.isfunction(func) or inspect.isbuiltin(func): job._func_name = '{0}.{1}'.format(func.__module__, func.__qualname__) elif isinstance(func, str): job._func_name = as_text(func) elif not inspect.isclass(func) and hasattr(func, '__call__'): # a callable class instance job._instance = func job._func_name = '__call__' else: raise TypeError('Expected a callable or a string, but got: {0}'.format(func)) job._args = args job._kwargs = kwargs if on_success: if not isinstance(on_success, Callback): warnings.warn( 'Passing a string or function for `on_success` is deprecated, pass `Callback` instead', DeprecationWarning, ) on_success = Callback(on_success) # backward compatibility job._success_callback_name = on_success.name job._success_callback_timeout = on_success.timeout if on_failure: if not isinstance(on_failure, Callback): warnings.warn( 'Passing a string or function for `on_failure` is deprecated, pass `Callback` instead', DeprecationWarning, ) on_failure = Callback(on_failure) # backward compatibility job._failure_callback_name = on_failure.name job._failure_callback_timeout = on_failure.timeout if on_stopped: if not isinstance(on_stopped, Callback): warnings.warn( 'Passing a string or function for `on_stopped` is deprecated, pass `Callback` instead', DeprecationWarning, ) on_stopped = Callback(on_stopped) # backward compatibility job._stopped_callback_name = on_stopped.name job._stopped_callback_timeout = on_stopped.timeout # Extra meta data job.description = description or job.get_call_string() job.result_ttl = parse_timeout(result_ttl) job.failure_ttl = parse_timeout(failure_ttl) job.ttl = parse_timeout(ttl) job.timeout = parse_timeout(timeout) job._status = status job.meta = meta or {} # dependency could be job instance or id, or iterable thereof if depends_on is not None: depends_on = ensure_list(depends_on) depends_on_list = [] for depends_on_item in depends_on: if isinstance(depends_on_item, Dependency): # If a Dependency has enqueue_at_front or allow_failure set to True, these behaviors are used for # all dependencies. job.enqueue_at_front = job.enqueue_at_front or depends_on_item.enqueue_at_front job.allow_dependency_failures = job.allow_dependency_failures or depends_on_item.allow_failure depends_on_list.extend(depends_on_item.dependencies) else: depends_on_list.extend(ensure_list(depends_on_item)) job._dependency_ids = [dep.id if isinstance(dep, Job) else dep for dep in depends_on_list] return job def get_position(self) -> Optional[int]: """Get's the job's position on the queue Returns: position (Optional[int]): The position """ from .queue import Queue if self.origin: q = Queue(name=self.origin, connection=self.connection) return q.get_job_position(self._id) return None def get_status(self, refresh: bool = True) -> JobStatus: """Gets the Job Status Args: refresh (bool, optional): Whether to refresh the Job. Defaults to True. 

    @property
    def is_finished(self) -> bool:
        return self.get_status() == JobStatus.FINISHED

    @property
    def is_queued(self) -> bool:
        return self.get_status() == JobStatus.QUEUED

    @property
    def is_failed(self) -> bool:
        return self.get_status() == JobStatus.FAILED

    @property
    def is_started(self) -> bool:
        return self.get_status() == JobStatus.STARTED

    @property
    def is_deferred(self) -> bool:
        return self.get_status() == JobStatus.DEFERRED

    @property
    def is_canceled(self) -> bool:
        return self.get_status() == JobStatus.CANCELED

    @property
    def is_scheduled(self) -> bool:
        return self.get_status() == JobStatus.SCHEDULED

    @property
    def is_stopped(self) -> bool:
        return self.get_status() == JobStatus.STOPPED

    @property
    def _dependency_id(self):
        """Returns the first item in self._dependency_ids. Present to
        preserve compatibility with third party packages.
        """
        if self._dependency_ids:
            return self._dependency_ids[0]

    @property
    def dependency(self) -> Optional['Job']:
        """Returns a job's first dependency. To avoid repeated Redis fetches,
        we cache job.dependency as job._dependency.
        """
        if not self._dependency_ids:
            return None
        if hasattr(self, '_dependency'):
            return self._dependency
        job = self.fetch(self._dependency_ids[0], connection=self.connection, serializer=self.serializer)
        self._dependency = job
        return job

    @property
    def dependent_ids(self) -> List[str]:
        """Returns a list of ids of jobs whose execution depends on this
        job's successful execution."""
        return list(map(as_text, self.connection.smembers(self.dependents_key)))

    @property
    def func(self):
        func_name = self.func_name
        if func_name is None:
            return None

        if self.instance:
            return getattr(self.instance, func_name)

        return import_attribute(self.func_name)
""" if not self._dependency_ids: return None if hasattr(self, '_dependency'): return self._dependency job = self.fetch(self._dependency_ids[0], connection=self.connection, serializer=self.serializer) self._dependency = job return job @property def dependent_ids(self) -> List[str]: """Returns a list of ids of jobs whose execution depends on this job's successful execution.""" return list(map(as_text, self.connection.smembers(self.dependents_key))) @property def func(self): func_name = self.func_name if func_name is None: return None if self.instance: return getattr(self.instance, func_name) return import_attribute(self.func_name) @property def success_callback(self): if self._success_callback is UNEVALUATED: if self._success_callback_name: self._success_callback = import_attribute(self._success_callback_name) else: self._success_callback = None return self._success_callback @property def success_callback_timeout(self) -> int: if self._success_callback_timeout is None: return CALLBACK_TIMEOUT return self._success_callback_timeout @property def failure_callback(self): if self._failure_callback is UNEVALUATED: if self._failure_callback_name: self._failure_callback = import_attribute(self._failure_callback_name) else: self._failure_callback = None return self._failure_callback @property def failure_callback_timeout(self) -> int: if self._failure_callback_timeout is None: return CALLBACK_TIMEOUT return self._failure_callback_timeout @property def stopped_callback(self): if self._stopped_callback is UNEVALUATED: if self._stopped_callback_name: self._stopped_callback = import_attribute(self._stopped_callback_name) else: self._stopped_callback = None return self._stopped_callback @property def stopped_callback_timeout(self) -> int: if self._stopped_callback_timeout is None: return CALLBACK_TIMEOUT return self._stopped_callback_timeout def _deserialize_data(self): """Deserializes the Job `data` into a tuple. 

    def _deserialize_data(self):
        """Deserializes the Job `data` into a tuple.
        This includes the `_func_name`, `_instance`, `_args` and `_kwargs`

        Raises:
            DeserializationError: Catches any deserialization error (since serializers are generic)
        """
        try:
            self._func_name, self._instance, self._args, self._kwargs = self.serializer.loads(self.data)
        except Exception as e:
            raise DeserializationError() from e

    @property
    def data(self):
        if self._data is UNEVALUATED:
            if self._func_name is UNEVALUATED:
                raise ValueError('Cannot build the job data')

            if self._instance is UNEVALUATED:
                self._instance = None

            if self._args is UNEVALUATED:
                self._args = ()

            if self._kwargs is UNEVALUATED:
                self._kwargs = {}

            job_tuple = self._func_name, self._instance, self._args, self._kwargs
            self._data = self.serializer.dumps(job_tuple)
        return self._data

    @data.setter
    def data(self, value):
        self._data = value
        self._func_name = UNEVALUATED
        self._instance = UNEVALUATED
        self._args = UNEVALUATED
        self._kwargs = UNEVALUATED

    @property
    def func_name(self):
        if self._func_name is UNEVALUATED:
            self._deserialize_data()
        return self._func_name

    @func_name.setter
    def func_name(self, value):
        self._func_name = value
        self._data = UNEVALUATED

    @property
    def instance(self):
        if self._instance is UNEVALUATED:
            self._deserialize_data()
        return self._instance

    @instance.setter
    def instance(self, value):
        self._instance = value
        self._data = UNEVALUATED

    @property
    def args(self) -> tuple:
        if self._args is UNEVALUATED:
            self._deserialize_data()
        return self._args

    @args.setter
    def args(self, value):
        self._args = value
        self._data = UNEVALUATED

    @property
    def kwargs(self):
        if self._kwargs is UNEVALUATED:
            self._deserialize_data()
        return self._kwargs

    @kwargs.setter
    def kwargs(self, value):
        self._kwargs = value
        self._data = UNEVALUATED

    @classmethod
    def exists(cls, job_id: str, connection: Optional['Redis'] = None) -> bool:
        """Checks whether a Job Hash exists for the given Job ID

        Args:
            job_id (str): The Job ID
            connection (Optional[Redis], optional): Optional connection to use. Defaults to None.

        Returns:
            job_exists (bool): Whether the Job exists
        """
        if not connection:
            connection = resolve_connection()
        job_key = cls.key_for(job_id)
        job_exists = connection.exists(job_key)
        return bool(job_exists)

    @classmethod
    def fetch(cls, id: str, connection: Optional['Redis'] = None, serializer=None) -> 'Job':
        """Fetches a persisted Job from its corresponding Redis key and instantiates it

        Args:
            id (str): The Job to fetch
            connection (Optional['Redis'], optional): An optional Redis connection. Defaults to None.
            serializer (_type_, optional): The serializer to use. Defaults to None.

        Returns:
            Job: The Job instance
        """
        job = cls(id, connection=connection, serializer=serializer)
        job.refresh()
        return job
""" with connection.pipeline() as pipeline: for job_id in job_ids: pipeline.hgetall(cls.key_for(job_id)) results = pipeline.execute() jobs: List[Optional['Job']] = [] for i, job_id in enumerate(job_ids): if not results[i]: jobs.append(None) continue job = cls(job_id, connection=connection, serializer=serializer) job.restore(results[i]) jobs.append(job) return jobs def __init__(self, id: Optional[str] = None, connection: Optional['Redis'] = None, serializer=None): if connection: self.connection = connection else: self.connection = resolve_connection() self._id = id self.created_at = utcnow() self._data = UNEVALUATED self._func_name = UNEVALUATED self._instance = UNEVALUATED self._args = UNEVALUATED self._kwargs = UNEVALUATED self._success_callback_name = None self._success_callback = UNEVALUATED self._failure_callback_name = None self._failure_callback = UNEVALUATED self._stopped_callback_name = None self._stopped_callback = UNEVALUATED self.description: Optional[str] = None self.origin: str = '' self.enqueued_at: Optional[datetime] = None self.started_at: Optional[datetime] = None self.ended_at: Optional[datetime] = None self._result = None self._exc_info = None self.timeout: Optional[float] = None self._success_callback_timeout: Optional[int] = None self._failure_callback_timeout: Optional[int] = None self._stopped_callback_timeout: Optional[int] = None self.result_ttl: Optional[int] = None self.failure_ttl: Optional[int] = None self.ttl: Optional[int] = None self.worker_name: Optional[str] = None self._status = None self._dependency_ids: List[str] = [] self.meta: Dict = {} self.serializer = resolve_serializer(serializer) self.retries_left: Optional[int] = None self.retry_intervals: Optional[List[int]] = None self.redis_server_version: Optional[Tuple[int, int, int]] = None self.last_heartbeat: Optional[datetime] = None self.allow_dependency_failures: Optional[bool] = None self.enqueue_at_front: Optional[bool] = None from .results import Result self._cached_result: Optional[Result] = None def __repr__(self): # noqa # pragma: no cover return '{0}({1!r}, enqueued_at={2!r})'.format(self.__class__.__name__, self._id, self.enqueued_at) def __str__(self): return '<{0} {1}: {2}>'.format(self.__class__.__name__, self.id, self.description) def __eq__(self, other): # noqa return isinstance(other, self.__class__) and self.id == other.id def __hash__(self): # pragma: no cover return hash(self.id) # Data access def get_id(self) -> str: # noqa """The job ID for this job instance. Generates an ID lazily the first time the ID is requested. Returns: job_id (str): The Job ID """ if self._id is None: self._id = str(uuid4()) return self._id def set_id(self, value: str) -> None: """Sets a job ID for the given job Args: value (str): The value to set as Job ID """ if not isinstance(value, str): raise TypeError('id must be a string, not {0}'.format(type(value))) self._id = value def heartbeat(self, timestamp: datetime, ttl: int, pipeline: Optional['Pipeline'] = None, xx: bool = False): """Sets the heartbeat for a job. It will set a hash in Redis with the `last_heartbeat` key and datetime value. If a Redis' pipeline is passed, it will use that, else, it will use the job's own connection. Args: timestamp (datetime): The timestamp to use ttl (int): The time to live pipeline (Optional[Pipeline], optional): Can receive a Redis' pipeline to use. Defaults to None. xx (bool, optional): Only sets the key if already exists. Defaults to False. 
""" self.last_heartbeat = timestamp connection = pipeline if pipeline is not None else self.connection connection.hset(self.key, 'last_heartbeat', utcformat(self.last_heartbeat)) self.started_job_registry.add(self, ttl, pipeline=pipeline, xx=xx) id = property(get_id, set_id) @classmethod def key_for(cls, job_id: str) -> bytes: """The Redis key that is used to store job hash under. Args: job_id (str): The Job ID Returns: redis_job_key (bytes): The Redis fully qualified key for the job """ return (cls.redis_job_namespace_prefix + job_id).encode('utf-8') @classmethod def dependents_key_for(cls, job_id: str) -> str: """The Redis key that is used to store job dependents hash under. Args: job_id (str): The "parent" job id Returns: dependents_key (str): The dependents key """ return '{0}{1}:dependents'.format(cls.redis_job_namespace_prefix, job_id) @property def key(self): """The Redis key that is used to store job hash under.""" return self.key_for(self.id) @property def dependents_key(self): """The Redis key that is used to store job dependents hash under.""" return self.dependents_key_for(self.id) @property def dependencies_key(self): return '{0}:{1}:dependencies'.format(self.redis_job_namespace_prefix, self.id) def fetch_dependencies(self, watch: bool = False, pipeline: Optional['Pipeline'] = None) -> List['Job']: """Fetch all of a job's dependencies. If a pipeline is supplied, and watch is true, then set WATCH on all the keys of all dependencies. Returned jobs will use self's connection, not the pipeline supplied. If a job has been deleted from redis, it is not returned. Args: watch (bool, optional): Wether to WATCH the keys. Defaults to False. pipeline (Optional[Pipeline]): The Redis' pipeline to use. Defaults to None. Returns: jobs (list[Job]): A list of Jobs """ connection = pipeline if pipeline is not None else self.connection if watch and self._dependency_ids: connection.watch(*[self.key_for(dependency_id) for dependency_id in self._dependency_ids]) dependencies_list = self.fetch_many( self._dependency_ids, connection=self.connection, serializer=self.serializer ) jobs = [job for job in dependencies_list if job] return jobs @property def exc_info(self) -> Optional[str]: """ Get the latest result and returns `exc_info` only if the latest result is a failure. """ warnings.warn("job.exc_info is deprecated, use job.latest_result() instead.", DeprecationWarning) from .results import Result if self.supports_redis_streams: if not self._cached_result: self._cached_result = self.latest_result() if self._cached_result and self._cached_result.type == Result.Type.FAILED: return self._cached_result.exc_string return self._exc_info def return_value(self, refresh: bool = False) -> Optional[Any]: """Returns the return value of the latest execution, if it was successful. Args: refresh (bool, optional): Whether to refresh the current status. Defaults to False. Returns: result (Optional[Any]): The job return value. """ from .results import Result if refresh: self._cached_result = None if not self.supports_redis_streams: if self._result is not None: return self._result rv = self.connection.hget(self.key, 'result') if rv is not None: # cache the result self._result = self.serializer.loads(rv) return self._result return None if not self._cached_result: self._cached_result = self.latest_result() if self._cached_result and self._cached_result.type == Result.Type.SUCCESSFUL: return self._cached_result.return_value return None @property def result(self) -> Any: """Returns the return value of the job. 
Initially, right after enqueueing a job, the return value will be None. But when the job has been executed, and had a return value or exception, this will return that value or exception. Note that, when the job has no return value (i.e. returns None), the ReadOnlyJob object is useless, as the result won't be written back to Redis. Also note that you cannot draw the conclusion that a job has _not_ been executed when its return value is None, since return values written back to Redis will expire after a given amount of time (500 seconds by default). """ warnings.warn("job.result is deprecated, use job.return_value instead.", DeprecationWarning) from .results import Result if self.supports_redis_streams: if not self._cached_result: self._cached_result = self.latest_result() if self._cached_result and self._cached_result.type == Result.Type.SUCCESSFUL: return self._cached_result.return_value # Fallback to old behavior of getting result from job hash if self._result is None: rv = self.connection.hget(self.key, 'result') if rv is not None: # cache the result self._result = self.serializer.loads(rv) return self._result def results(self) -> List['Result']: """Returns all Result objects Returns: all_results (List[Result]): A list of 'Result' objects """ from .results import Result return Result.all(self, serializer=self.serializer) def latest_result(self, timeout: int = 0) -> Optional['Result']: """Get the latest job result. Args: timeout (int, optional): Number of seconds to block waiting for a result. Defaults to 0 (no blocking). Returns: result (Result): The Result object """ from .results import Result return Result.fetch_latest(self, serializer=self.serializer, timeout=timeout) def restore(self, raw_data) -> Any: """Overwrite properties with the provided values stored in Redis. 
Args: raw_data (_type_): The raw data to load the job data from Raises: NoSuchJobError: If there was an error getting the job data """ obj = decode_redis_hash(raw_data) try: raw_data = obj['data'] except KeyError: raise NoSuchJobError('Unexpected job format: {0}'.format(obj)) try: self.data = zlib.decompress(raw_data) except zlib.error: # Fallback to uncompressed string self.data = raw_data self.created_at = str_to_date(obj.get('created_at')) self.origin = as_text(obj.get('origin')) if obj.get('origin') else '' self.worker_name = obj.get('worker_name').decode() if obj.get('worker_name') else None self.description = as_text(obj.get('description')) if obj.get('description') else None self.enqueued_at = str_to_date(obj.get('enqueued_at')) self.started_at = str_to_date(obj.get('started_at')) self.ended_at = str_to_date(obj.get('ended_at')) self.last_heartbeat = str_to_date(obj.get('last_heartbeat')) result = obj.get('result') if result: try: self._result = self.serializer.loads(result) except Exception: self._result = UNSERIALIZABLE_RETURN_VALUE_PAYLOAD self.timeout = parse_timeout(obj.get('timeout')) if obj.get('timeout') else None self.result_ttl = int(obj.get('result_ttl')) if obj.get('result_ttl') else None self.failure_ttl = int(obj.get('failure_ttl')) if obj.get('failure_ttl') else None self._status = obj.get('status').decode() if obj.get('status') else None if obj.get('success_callback_name'): self._success_callback_name = obj.get('success_callback_name').decode() if 'success_callback_timeout' in obj: self._success_callback_timeout = int(obj.get('success_callback_timeout')) if obj.get('failure_callback_name'): self._failure_callback_name = obj.get('failure_callback_name').decode() if 'failure_callback_timeout' in obj: self._failure_callback_timeout = int(obj.get('failure_callback_timeout')) if obj.get('stopped_callback_name'): self._stopped_callback_name = obj.get('stopped_callback_name').decode() if 'stopped_callback_timeout' in obj: self._stopped_callback_timeout = int(obj.get('stopped_callback_timeout')) dep_ids = obj.get('dependency_ids') dep_id = obj.get('dependency_id') # for backwards compatibility self._dependency_ids = json.loads(dep_ids.decode()) if dep_ids else [dep_id.decode()] if dep_id else [] allow_failures = obj.get('allow_dependency_failures') self.allow_dependency_failures = bool(int(allow_failures)) if allow_failures else None self.enqueue_at_front = bool(int(obj['enqueue_at_front'])) if 'enqueue_at_front' in obj else None self.ttl = int(obj.get('ttl')) if obj.get('ttl') else None try: self.meta = self.serializer.loads(obj.get('meta')) if obj.get('meta') else {} except Exception: # depends on the serializer self.meta = {'unserialized': obj.get('meta', {})} self.retries_left = int(obj.get('retries_left')) if obj.get('retries_left') else None if obj.get('retry_intervals'): self.retry_intervals = json.loads(obj.get('retry_intervals').decode()) raw_exc_info = obj.get('exc_info') if raw_exc_info: try: self._exc_info = as_text(zlib.decompress(raw_exc_info)) except zlib.error: # Fallback to uncompressed string self._exc_info = as_text(raw_exc_info) # Persistence def refresh(self): # noqa """Overwrite the current instance's properties with the values in the corresponding Redis key. Will raise a NoSuchJobError if no corresponding Redis key exists.
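Example:
    A minimal sketch (assumes `job` was fetched earlier and may have been
    updated in Redis since)::

        job.refresh()  # re-reads status, result, meta etc. from the job hash
        print(job.get_status())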
""" data = self.connection.hgetall(self.key) if not data: raise NoSuchJobError('No such job: {0}'.format(self.key)) self.restore(data) def to_dict(self, include_meta: bool = True, include_result: bool = True) -> dict: """Returns a serialization of the current job instance You can exclude serializing the `meta` dictionary by setting `include_meta=False`. Args: include_meta (bool, optional): Whether to include the Job's metadata. Defaults to True. include_result (bool, optional): Whether to include the Job's result. Defaults to True. Returns: dict: The Job serialized as a dictionary """ obj = { 'created_at': utcformat(self.created_at or utcnow()), 'data': zlib.compress(self.data), 'success_callback_name': self._success_callback_name if self._success_callback_name else '', 'failure_callback_name': self._failure_callback_name if self._failure_callback_name else '', 'stopped_callback_name': self._stopped_callback_name if self._stopped_callback_name else '', 'started_at': utcformat(self.started_at) if self.started_at else '', 'ended_at': utcformat(self.ended_at) if self.ended_at else '', 'last_heartbeat': utcformat(self.last_heartbeat) if self.last_heartbeat else '', 'worker_name': self.worker_name or '', } if self.retries_left is not None: obj['retries_left'] = self.retries_left if self.retry_intervals is not None: obj['retry_intervals'] = json.dumps(self.retry_intervals) if self.origin: obj['origin'] = self.origin if self.description is not None: obj['description'] = self.description if self.enqueued_at is not None: obj['enqueued_at'] = utcformat(self.enqueued_at) if self._result is not None and include_result: try: obj['result'] = self.serializer.dumps(self._result) except: # noqa obj['result'] = "Unserializable return value" if self._exc_info is not None and include_result: obj['exc_info'] = zlib.compress(str(self._exc_info).encode('utf-8')) if self.timeout is not None: obj['timeout'] = self.timeout if self._success_callback_timeout is not None: obj['success_callback_timeout'] = self._success_callback_timeout if self._failure_callback_timeout is not None: obj['failure_callback_timeout'] = self._failure_callback_timeout if self._stopped_callback_timeout is not None: obj['stopped_callback_timeout'] = self._stopped_callback_timeout if self.result_ttl is not None: obj['result_ttl'] = self.result_ttl if self.failure_ttl is not None: obj['failure_ttl'] = self.failure_ttl if self._status is not None: obj['status'] = self._status if self._dependency_ids: obj['dependency_id'] = self._dependency_ids[0] # for backwards compatibility obj['dependency_ids'] = json.dumps(self._dependency_ids) if self.meta and include_meta: obj['meta'] = self.serializer.dumps(self.meta) if self.ttl: obj['ttl'] = self.ttl if self.allow_dependency_failures is not None: # convert boolean to integer to avoid redis.exception.DataError obj["allow_dependency_failures"] = int(self.allow_dependency_failures) if self.enqueue_at_front is not None: obj["enqueue_at_front"] = int(self.enqueue_at_front) return obj def save(self, pipeline: Optional['Pipeline'] = None, include_meta: bool = True, include_result: bool = True): """Dumps the current job instance to its corresponding Redis key. Exclude saving the `meta` dictionary by setting `include_meta=False`. This is useful to prevent clobbering user metadata without an expensive `refresh()` call first. Redis key persistence may be altered by `cleanup()` method. Args: pipeline (Optional[Pipeline], optional): The Redis' pipeline to use. Defaults to None. 
include_meta (bool, optional): Whether to include the job's metadata. Defaults to True. include_result (bool, optional): Whether to include the job's result. Defaults to True. """ key = self.key connection = pipeline if pipeline is not None else self.connection mapping = self.to_dict(include_meta=include_meta, include_result=include_result) if self.get_redis_server_version() >= (4, 0, 0): connection.hset(key, mapping=mapping) else: connection.hmset(key, mapping) @property def supports_redis_streams(self) -> bool: """Whether Redis streams are supported; requires Redis server >= 5.0.""" return self.get_redis_server_version() >= (5, 0, 0) def get_redis_server_version(self) -> Tuple[int, int, int]: """Return Redis server version of connection Returns: redis_server_version (Tuple[int, int, int]): The Redis version within a Tuple of integers, eg (5, 0, 9) """ if self.redis_server_version is None: self.redis_server_version = get_version(self.connection) return self.redis_server_version def save_meta(self): """Stores job meta from the job instance to the corresponding Redis key.""" meta = self.serializer.dumps(self.meta) self.connection.hset(self.key, 'meta', meta) def cancel(self, pipeline: Optional['Pipeline'] = None, enqueue_dependents: bool = False): """Cancels the given job, which will prevent the job from ever being run (or inspected). This method merely exists as a high-level API call to cancel jobs without worrying about the internals required to implement job cancellation. You can optionally enqueue the job's dependents. The pipelining behavior is the same as Queue.enqueue_dependents: it depends on whether or not a pipeline is passed in. Args: pipeline (Optional[Pipeline], optional): The Redis' pipeline to use. Defaults to None. enqueue_dependents (bool, optional): Whether to enqueue dependent jobs. Defaults to False. Raises: InvalidJobOperation: If the job has already been canceled. """ if self.is_canceled: raise InvalidJobOperation("Cannot cancel already canceled job: {}".format(self.get_id())) from .queue import Queue from .registry import CanceledJobRegistry pipe = pipeline or self.connection.pipeline() while True: try: q = Queue( name=self.origin, connection=self.connection, job_class=self.__class__, serializer=self.serializer ) self.set_status(JobStatus.CANCELED, pipeline=pipe) if enqueue_dependents: # Only WATCH if no pipeline passed, otherwise caller is responsible if pipeline is None: pipe.watch(self.dependents_key) q.enqueue_dependents(self, pipeline=pipeline, exclude_job_id=self.id) self._remove_from_registries(pipeline=pipe, remove_from_queue=True) registry = CanceledJobRegistry( self.origin, self.connection, job_class=self.__class__, serializer=self.serializer ) registry.add(self, pipeline=pipe) if pipeline is None: pipe.execute() break except WatchError: if pipeline is None: continue else: # if the pipeline comes from the caller, we re-raise the # exception as it is the responsibility of the caller to # handle it raise def requeue(self, at_front: bool = False) -> 'Job': """Requeues the job Args: at_front (bool, optional): Whether the job should be requeued at the front of the queue. Defaults to False.
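Example:
    A minimal sketch (assumes `job` failed earlier and therefore sits in
    the queue's FailedJobRegistry)::

        job.requeue(at_front=True)  # retry as soon as possible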
Returns: job (Job): The requeued Job instance """ return self.failed_job_registry.requeue(self, at_front=at_front) def _remove_from_registries(self, pipeline: Optional['Pipeline'] = None, remove_from_queue: bool = True): from .registry import BaseRegistry if remove_from_queue: from .queue import Queue q = Queue(name=self.origin, connection=self.connection, serializer=self.serializer) q.remove(self, pipeline=pipeline) registry: BaseRegistry if self.is_finished: from .registry import FinishedJobRegistry registry = FinishedJobRegistry( self.origin, connection=self.connection, job_class=self.__class__, serializer=self.serializer ) registry.remove(self, pipeline=pipeline) elif self.is_deferred: from .registry import DeferredJobRegistry registry = DeferredJobRegistry( self.origin, connection=self.connection, job_class=self.__class__, serializer=self.serializer ) registry.remove(self, pipeline=pipeline) elif self.is_started: from .registry import StartedJobRegistry registry = StartedJobRegistry( self.origin, connection=self.connection, job_class=self.__class__, serializer=self.serializer ) registry.remove(self, pipeline=pipeline) elif self.is_scheduled: from .registry import ScheduledJobRegistry registry = ScheduledJobRegistry( self.origin, connection=self.connection, job_class=self.__class__, serializer=self.serializer ) registry.remove(self, pipeline=pipeline) elif self.is_failed or self.is_stopped: self.failed_job_registry.remove(self, pipeline=pipeline) elif self.is_canceled: from .registry import CanceledJobRegistry registry = CanceledJobRegistry( self.origin, connection=self.connection, job_class=self.__class__, serializer=self.serializer ) registry.remove(self, pipeline=pipeline) def delete( self, pipeline: Optional['Pipeline'] = None, remove_from_queue: bool = True, delete_dependents: bool = False ): """Cancels the job and deletes the job hash from Redis. Jobs depending on this job can optionally be deleted as well. Args: pipeline (Optional[Pipeline], optional): Redis' pipeline. Defaults to None. remove_from_queue (bool, optional): Whether the job should be removed from the queue. Defaults to True. delete_dependents (bool, optional): Whether job dependents should also be deleted. Defaults to False. """ connection = pipeline if pipeline is not None else self.connection self._remove_from_registries(pipeline=pipeline, remove_from_queue=remove_from_queue) if delete_dependents: self.delete_dependents(pipeline=pipeline) connection.delete(self.key, self.dependents_key, self.dependencies_key) def delete_dependents(self, pipeline: Optional['Pipeline'] = None): """Delete jobs depending on this job. Args: pipeline (Optional[Pipeline], optional): Redis' pipeline. Defaults to None. """ connection = pipeline if pipeline is not None else self.connection for dependent_id in self.dependent_ids: try: job = Job.fetch(dependent_id, connection=self.connection, serializer=self.serializer) job.delete(pipeline=pipeline, remove_from_queue=False) except NoSuchJobError: # It could be that the dependent job was never saved to Redis pass connection.delete(self.dependents_key) # Job execution def perform(self) -> Any: # noqa """The main execution method. Invokes the job function with the job arguments. This is the method that actually performs the job - it's what is called by the worker.
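Example:
    A minimal sketch (normally invoked by the worker's work horse;
    calling it directly runs the job in the current process)::

        result = job.perform()  # runs func(*args, **kwargs) synchronously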
Returns: result (Any): The job result """ self.connection.persist(self.key) _job_stack.push(self) try: self._result = self._execute() finally: assert self is _job_stack.pop() return self._result def prepare_for_execution(self, worker_name: str, pipeline: 'Pipeline'): """Prepares the job for execution, setting the worker name, heartbeat information, status and other metadata before execution begins. Args: worker_name (str): The worker that will perform the job pipeline (Pipeline): The Redis' pipeline to use """ self.worker_name = worker_name self.last_heartbeat = utcnow() self.started_at = self.last_heartbeat self._status = JobStatus.STARTED mapping = { 'last_heartbeat': utcformat(self.last_heartbeat), 'status': self._status, 'started_at': utcformat(self.started_at), # type: ignore 'worker_name': worker_name, } if self.get_redis_server_version() >= (4, 0, 0): pipeline.hset(self.key, mapping=mapping) else: pipeline.hmset(self.key, mapping=mapping) def _execute(self) -> Any: """Actually runs the function with its *args and **kwargs. It will use the `func` property, which was already resolved and ready to run at this point. If the function is a coroutine (it's an async function/method), then the `result` will have to be awaited within an event loop. Returns: result (Any): The function result """ result = self.func(*self.args, **self.kwargs) if asyncio.iscoroutine(result): loop = asyncio.new_event_loop() coro_result = loop.run_until_complete(result) return coro_result return result def get_ttl(self, default_ttl: Optional[int] = None) -> Optional[int]: """Returns ttl for a job that determines how long a job will be persisted. In the future, this method will also be responsible for determining ttl for repeated jobs. Args: default_ttl (Optional[int]): The default time to live for the job Returns: ttl (int): The time to live """ return default_ttl if self.ttl is None else self.ttl def get_result_ttl(self, default_ttl: int) -> int: """Returns ttl for a job that determines how long a job's result will be persisted. In the future, this method will also be responsible for determining ttl for repeated jobs. Args: default_ttl (Optional[int]): The default time to live for the job result Returns: ttl (int): The time to live for the result """ return default_ttl if self.result_ttl is None else self.result_ttl # Representation def get_call_string(self) -> Optional[str]: # noqa """Returns a string representation of the call, formatted as a regular Python function invocation statement. Returns: call_repr (str): The string representation """ call_repr = get_call_string(self.func_name, self.args, self.kwargs, max_length=75) return call_repr def cleanup(self, ttl: Optional[int] = None, pipeline: Optional['Pipeline'] = None, remove_from_queue: bool = True): """Prepare job for eventual deletion (if needed). This method is usually called after successful execution. How long we persist the job and its result depends on the value of ttl: - If ttl is 0, cleanup the job immediately. - If it's a positive number, set the job to expire in X seconds. - If ttl is negative, don't set an expiry on it (persist forever) Args: ttl (Optional[int], optional): Time to live. Defaults to None. pipeline (Optional[Pipeline], optional): Redis' pipeline. Defaults to None. remove_from_queue (bool, optional): Whether the job should be removed from the queue. Defaults to True.
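Example:
    A minimal sketch of the ttl semantics described above (each call is
    an alternative, not a sequence)::

        job.cleanup(ttl=0)    # delete the job immediately
        job.cleanup(ttl=600)  # expire the job after 10 minutes
        job.cleanup(ttl=-1)   # persist the job forever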
""" if ttl == 0: self.delete(pipeline=pipeline, remove_from_queue=remove_from_queue) elif not ttl: return elif ttl > 0: connection = pipeline if pipeline is not None else self.connection connection.expire(self.key, ttl) connection.expire(self.dependents_key, ttl) connection.expire(self.dependencies_key, ttl) @property def started_job_registry(self): from .registry import StartedJobRegistry return StartedJobRegistry( self.origin, connection=self.connection, job_class=self.__class__, serializer=self.serializer ) @property def failed_job_registry(self): from .registry import FailedJobRegistry return FailedJobRegistry( self.origin, connection=self.connection, job_class=self.__class__, serializer=self.serializer ) @property def finished_job_registry(self): from .registry import FinishedJobRegistry return FinishedJobRegistry( self.origin, connection=self.connection, job_class=self.__class__, serializer=self.serializer ) def execute_success_callback(self, death_penalty_class: Type[BaseDeathPenalty], result: Any): """Executes success_callback for a job. with timeout . Args: death_penalty_class (Type[BaseDeathPenalty]): The penalty class to use for timeout result (Any): The job's result. """ if not self.success_callback: return logger.debug('Running success callbacks for %s', self.id) with death_penalty_class(self.success_callback_timeout, JobTimeoutException, job_id=self.id): self.success_callback(self, self.connection, result) def execute_failure_callback(self, death_penalty_class: Type[BaseDeathPenalty], *exc_info): """Executes failure_callback with possible timeout""" if not self.failure_callback: return logger.debug('Running failure callbacks for %s', self.id) try: with death_penalty_class(self.failure_callback_timeout, JobTimeoutException, job_id=self.id): self.failure_callback(self, self.connection, *exc_info) except Exception: # noqa logger.exception(f'Job {self.id}: error while executing failure callback') raise def execute_stopped_callback(self, death_penalty_class: Type[BaseDeathPenalty]): """Executes stopped_callback with possible timeout""" logger.debug('Running stopped callbacks for %s', self.id) try: with death_penalty_class(self.stopped_callback_timeout, JobTimeoutException, job_id=self.id): self.stopped_callback(self, self.connection) except Exception: # noqa logger.exception(f'Job {self.id}: error while executing stopped callback') raise def _handle_success(self, result_ttl: int, pipeline: 'Pipeline'): """Saves and cleanup job after successful execution""" # self.log.debug('Setting job %s status to finished', job.id) self.set_status(JobStatus.FINISHED, pipeline=pipeline) # Result should be saved in job hash only if server # doesn't support Redis streams include_result = not self.supports_redis_streams # Don't clobber user's meta dictionary! self.save(pipeline=pipeline, include_meta=False, include_result=include_result) # Result creation should eventually be moved to job.save() after support # for Redis < 5.0 is dropped. job.save(include_result=...) 
is used to test # for backward compatibility if self.supports_redis_streams: from .results import Result Result.create(self, Result.Type.SUCCESSFUL, return_value=self._result, ttl=result_ttl, pipeline=pipeline) if result_ttl != 0: finished_job_registry = self.finished_job_registry finished_job_registry.add(self, result_ttl, pipeline) def _handle_failure(self, exc_string: str, pipeline: 'Pipeline'): failed_job_registry = self.failed_job_registry # Exception should be saved in job hash if server # doesn't support Redis streams _save_exc_to_job = not self.supports_redis_streams failed_job_registry.add( self, ttl=self.failure_ttl, exc_string=exc_string, pipeline=pipeline, _save_exc_to_job=_save_exc_to_job, ) if self.supports_redis_streams: from .results import Result Result.create_failure(self, self.failure_ttl, exc_string=exc_string, pipeline=pipeline) def get_retry_interval(self) -> int: """Returns the desired retry interval. If number of retries is bigger than length of intervals, the first value in the list will be used multiple times. Returns: retry_interval (int): The desired retry interval """ if self.retry_intervals is None: return 0 number_of_intervals = len(self.retry_intervals) index = max(number_of_intervals - self.retries_left, 0) return self.retry_intervals[index] def retry(self, queue: 'Queue', pipeline: 'Pipeline'): """Requeue or schedule this job for execution. If the `retry_interval` was set on the job itself, it will calculate a scheduled time for the job to run, and instead of just regularly `enqueueing` the job, it will `schedule` it. Args: queue (Queue): The queue to retry the job on pipeline (Pipeline): The Redis' pipeline to use """ retry_interval = self.get_retry_interval() self.retries_left = self.retries_left - 1 if retry_interval: scheduled_datetime = datetime.now(timezone.utc) + timedelta(seconds=retry_interval) self.set_status(JobStatus.SCHEDULED) queue.schedule_job(self, scheduled_datetime, pipeline=pipeline) else: queue._enqueue_job(self, pipeline=pipeline) def register_dependency(self, pipeline: Optional['Pipeline'] = None): """Jobs may have dependencies. Jobs are enqueued only if the jobs they depend on are successfully performed. We record this relation as a reverse dependency (a Redis set), with a key that looks something like:: rq:job:job_id:dependents = {'job_id_1', 'job_id_2'} This method adds the job to its dependencies' dependents sets, and adds the job to DeferredJobRegistry. Args: pipeline (Optional[Pipeline]): The Redis' pipeline. Defaults to None """ from .registry import DeferredJobRegistry registry = DeferredJobRegistry( self.origin, connection=self.connection, job_class=self.__class__, serializer=self.serializer ) registry.add(self, pipeline=pipeline) connection = pipeline if pipeline is not None else self.connection for dependency_id in self._dependency_ids: dependents_key = self.dependents_key_for(dependency_id) connection.sadd(dependents_key, self.id) connection.sadd(self.dependencies_key, dependency_id) @property def dependency_ids(self) -> List[bytes]: dependencies = self.connection.smembers(self.dependencies_key) return [Job.key_for(_id.decode()) for _id in dependencies] def dependencies_are_met( self, parent_job: Optional['Job'] = None, pipeline: Optional['Pipeline'] = None, exclude_job_id: Optional[str] = None, ) -> bool: """Returns a boolean indicating if all of this job's dependencies are `FINISHED` If a pipeline is passed, all dependencies are WATCHed.
`parent_job` allows us to directly pass parent_job for the status check. This is useful when enqueueing the dependents of a _successful_ job -- its status of `FINISHED` may not yet be set in Redis, but said job is indeed _done_ and this method is called in the stack while its dependents are being enqueued. Args: parent_job (Optional[Job], optional): The parent Job. Defaults to None. pipeline (Optional[Pipeline], optional): The Redis' pipeline. Defaults to None. exclude_job_id (Optional[str], optional): A job ID to exclude from the dependency check. Defaults to None. Returns: are_met (bool): Whether the dependencies were met. """ connection = pipeline if pipeline is not None else self.connection if pipeline is not None: connection.watch(*[self.key_for(dependency_id) for dependency_id in self._dependency_ids]) dependencies_ids = {_id.decode() for _id in connection.smembers(self.dependencies_key)} if exclude_job_id: dependencies_ids.discard(exclude_job_id) if parent_job and parent_job.id == exclude_job_id: parent_job = None if parent_job: # If parent job is canceled, treat dependency as failed # If parent job is not finished, we should only continue # if this job allows parent job to fail dependencies_ids.discard(parent_job.id) if parent_job.get_status() == JobStatus.CANCELED: return False elif parent_job._status == JobStatus.FAILED and not self.allow_dependency_failures: return False # If the only dependency is parent job, dependency has been met if not dependencies_ids: return True with connection.pipeline() as pipeline: for key in dependencies_ids: pipeline.hget(self.key_for(key), 'status') dependencies_statuses = pipeline.execute() allowed_statuses = [JobStatus.FINISHED] if self.allow_dependency_failures: allowed_statuses.append(JobStatus.FAILED) return all(status.decode() in allowed_statuses for status in dependencies_statuses if status) _job_stack = LocalStack() class Retry: def __init__(self, max: int, interval: Union[int, List[int]] = 0): """The main object to define retry logic for jobs. Args: max (int): The max number of times a job should be retried interval (Union[int, List[int]], optional): The interval between retries. Can be a positive number (int) or a list of ints. Defaults to 0 (meaning no interval between retries). Raises: ValueError: If the `max` argument is lower than 1 ValueError: If the interval param is negative or the list contains negative numbers """ super().__init__() if max < 1: raise ValueError('max: please enter a value greater than 0') if isinstance(interval, int): if interval < 0: raise ValueError('interval: negative numbers are not allowed') intervals = [interval] elif isinstance(interval, Iterable): for i in interval: if i < 0: raise ValueError('interval: negative numbers are not allowed') intervals = interval self.max = max self.intervals = intervals class Callback: def __init__(self, func: Union[str, Callable[..., Any]], timeout: Optional[Any] = None): if not isinstance(func, str) and not inspect.isfunction(func) and not inspect.isbuiltin(func): raise ValueError('Callback `func` must be a string or function') self.func = func self.timeout = parse_timeout(timeout) if timeout else CALLBACK_TIMEOUT @property def name(self) -> str: if isinstance(self.func, str): return self.func return '{0}.{1}'.format(self.func.__module__, self.func.__qualname__) rq-1.16.2/rq/local.py0000644000000000000000000002740213615410400011236 0ustar00# ruff: noqa: E731 """ werkzeug.local ~~~~~~~~~~~~~~ This module implements context-local objects.
:copyright: (c) 2011 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ # Since each thread has its own greenlet we can just use those as identifiers # for the context. If greenlets are not available we fall back to the # current thread ident. try: from greenlet import getcurrent as get_ident except ImportError: try: from threading import get_ident except ImportError: try: from _thread import get_ident except ImportError: from dummy_thread import get_ident def release_local(local): """Releases the contents of the local for the current context. This makes it possible to use locals without a manager. Example:: >>> loc = Local() >>> loc.foo = 42 >>> release_local(loc) >>> hasattr(loc, 'foo') False With this function one can release :class:`Local` objects as well as :class:`StackLocal` objects. However it is not possible to release data held by proxies that way, one always has to retain a reference to the underlying local object in order to be able to release it. .. versionadded:: 0.6.1 """ local.__release_local__() class Local: __slots__ = ('__storage__', '__ident_func__') def __init__(self): object.__setattr__(self, '__storage__', {}) object.__setattr__(self, '__ident_func__', get_ident) def __iter__(self): return iter(self.__storage__.items()) def __call__(self, proxy): """Create a proxy for a name.""" return LocalProxy(self, proxy) def __release_local__(self): self.__storage__.pop(self.__ident_func__(), None) def __getattr__(self, name): try: return self.__storage__[self.__ident_func__()][name] except KeyError: raise AttributeError(name) def __setattr__(self, name, value): ident = self.__ident_func__() storage = self.__storage__ try: storage[ident][name] = value except KeyError: storage[ident] = {name: value} def __delattr__(self, name): try: del self.__storage__[self.__ident_func__()][name] except KeyError: raise AttributeError(name) class LocalStack: """This class works similar to a :class:`Local` but keeps a stack of objects instead. This is best explained with an example:: >>> ls = LocalStack() >>> ls.push(42) >>> ls.top 42 >>> ls.push(23) >>> ls.top 23 >>> ls.pop() 23 >>> ls.top 42 They can be force released by using a :class:`LocalManager` or with the :func:`release_local` function but the correct way is to pop the item from the stack after using. When the stack is empty it will no longer be bound to the current context (and as such released). By calling the stack without arguments it returns a proxy that resolves to the topmost item on the stack. .. versionadded:: 0.6.1 """ def __init__(self): self._local = Local() def __release_local__(self): self._local.__release_local__() def _get__ident_func__(self): return self._local.__ident_func__ def _set__ident_func__(self, value): object.__setattr__(self._local, '__ident_func__', value) __ident_func__ = property(_get__ident_func__, _set__ident_func__) del _get__ident_func__, _set__ident_func__ def __call__(self): def _lookup(): rv = self.top if rv is None: raise RuntimeError('object unbound') return rv return LocalProxy(_lookup) def push(self, obj): """Pushes a new item to the stack""" rv = getattr(self._local, 'stack', None) if rv is None: self._local.stack = rv = [] rv.append(obj) return rv def pop(self): """Removes the topmost item from the stack, will return the old value or `None` if the stack was already empty. 
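Example (based on the implementation below; note that `push` returns
the underlying list)::

    >>> ls = LocalStack()
    >>> ls.pop() is None
    True
    >>> ls.push(42)
    [42]
    >>> ls.pop()
    42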
""" stack = getattr(self._local, 'stack', None) if stack is None: return None elif len(stack) == 1: release_local(self._local) return stack[-1] else: return stack.pop() @property def top(self): """The topmost item on the stack. If the stack is empty, `None` is returned. """ try: return self._local.stack[-1] except (AttributeError, IndexError): return None def __len__(self): stack = getattr(self._local, 'stack', None) if stack is None: return 0 return len(stack) class LocalManager: """Local objects cannot manage themselves. For that you need a local manager. You can pass a local manager multiple locals or add them later by appending them to `manager.locals`. Everytime the manager cleans up it, will clean up all the data left in the locals for this context. The `ident_func` parameter can be added to override the default ident function for the wrapped locals. .. versionchanged:: 0.6.1 Instead of a manager the :func:`release_local` function can be used as well. .. versionchanged:: 0.7 `ident_func` was added. """ def __init__(self, locals=None, ident_func=None): if locals is None: self.locals = [] elif isinstance(locals, Local): self.locals = [locals] else: self.locals = list(locals) if ident_func is not None: self.ident_func = ident_func for local in self.locals: object.__setattr__(local, '__ident_func__', ident_func) else: self.ident_func = get_ident def get_ident(self): """Return the context identifier the local objects use internally for this context. You cannot override this method to change the behavior but use it to link other context local objects (such as SQLAlchemy's scoped sessions) to the Werkzeug locals. .. versionchanged:: 0.7 You can pass a different ident function to the local manager that will then be propagated to all the locals passed to the constructor. """ return self.ident_func() def cleanup(self): """Manually clean up the data in the locals for this context. Call this at the end of the request or use `make_middleware()`. """ for local in self.locals: release_local(local) def __repr__(self): return '<%s storages: %d>' % (self.__class__.__name__, len(self.locals)) class LocalProxy: """Acts as a proxy for a werkzeug local. Forwards all operations to a proxied object. The only operations not supported for forwarding are right handed operands and any kind of assignment. Example usage:: from werkzeug.local import Local l = Local() # these are proxies request = l('request') user = l('user') from werkzeug.local import LocalStack _response_local = LocalStack() # this is a proxy response = _response_local() Whenever something is bound to l.user / l.request the proxy objects will forward all operations. If no object is bound a :exc:`RuntimeError` will be raised. To create proxies to :class:`Local` or :class:`LocalStack` objects, call the object as shown above. If you want to have a proxy to an object looked up by a function, you can (as of Werkzeug 0.6.1) pass a function to the :class:`LocalProxy` constructor:: session = LocalProxy(lambda: get_current_request().session) .. versionchanged:: 0.6.1 The class can be instanciated with a callable as well now. """ __slots__ = ('__local', '__dict__', '__name__') def __init__(self, local, name=None): object.__setattr__(self, '_LocalProxy__local', local) object.__setattr__(self, '__name__', name) def _get_current_object(self): """Return the current object. This is useful if you want the real object behind the proxy at a time for performance reasons or because you want to pass the object into a different context. 
""" if not hasattr(self.__local, '__release_local__'): return self.__local() try: return getattr(self.__local, self.__name__) except AttributeError: raise RuntimeError('no object bound to %s' % self.__name__) @property def __dict__(self): try: return self._get_current_object().__dict__ except RuntimeError: raise AttributeError('__dict__') def __repr__(self): try: obj = self._get_current_object() except RuntimeError: return '<%s unbound>' % self.__class__.__name__ return repr(obj) def __dir__(self): try: return dir(self._get_current_object()) except RuntimeError: return [] def __getattr__(self, name): if name == '__members__': return dir(self._get_current_object()) return getattr(self._get_current_object(), name) def __setitem__(self, key, value): self._get_current_object()[key] = value def __delitem__(self, key): del self._get_current_object()[key] __setattr__ = lambda x, n, v: setattr(x._get_current_object(), n, v) __delattr__ = lambda x, n: delattr(x._get_current_object(), n) __str__ = lambda x: str(x._get_current_object()) __lt__ = lambda x, o: x._get_current_object() < o __le__ = lambda x, o: x._get_current_object() <= o __eq__ = lambda x, o: x._get_current_object() == o __ne__ = lambda x, o: x._get_current_object() != o __gt__ = lambda x, o: x._get_current_object() > o __ge__ = lambda x, o: x._get_current_object() >= o __hash__ = lambda x: hash(x._get_current_object()) __call__ = lambda x, *a, **kw: x._get_current_object()(*a, **kw) __len__ = lambda x: len(x._get_current_object()) __getitem__ = lambda x, i: x._get_current_object()[i] __iter__ = lambda x: iter(x._get_current_object()) __contains__ = lambda x, i: i in x._get_current_object() __add__ = lambda x, o: x._get_current_object() + o __sub__ = lambda x, o: x._get_current_object() - o __mul__ = lambda x, o: x._get_current_object() * o __floordiv__ = lambda x, o: x._get_current_object() // o __mod__ = lambda x, o: x._get_current_object() % o __divmod__ = lambda x, o: x._get_current_object().__divmod__(o) __pow__ = lambda x, o: x._get_current_object() ** o __lshift__ = lambda x, o: x._get_current_object() << o __rshift__ = lambda x, o: x._get_current_object() >> o __and__ = lambda x, o: x._get_current_object() & o __xor__ = lambda x, o: x._get_current_object() ^ o __or__ = lambda x, o: x._get_current_object() | o __div__ = lambda x, o: x._get_current_object().__div__(o) __truediv__ = lambda x, o: x._get_current_object().__truediv__(o) __neg__ = lambda x: -(x._get_current_object()) __pos__ = lambda x: +(x._get_current_object()) __abs__ = lambda x: abs(x._get_current_object()) __invert__ = lambda x: ~(x._get_current_object()) __complex__ = lambda x: complex(x._get_current_object()) __int__ = lambda x: int(x._get_current_object()) __float__ = lambda x: float(x._get_current_object()) __oct__ = lambda x: oct(x._get_current_object()) __hex__ = lambda x: hex(x._get_current_object()) __index__ = lambda x: x._get_current_object().__index__() __enter__ = lambda x: x._get_current_object().__enter__() __exit__ = lambda x, *a, **kw: x._get_current_object().__exit__(*a, **kw) rq-1.16.2/rq/logutils.py0000644000000000000000000001160213615410400012001 0ustar00import logging import sys from typing import Union from rq.defaults import DEFAULT_LOGGING_DATE_FORMAT, DEFAULT_LOGGING_FORMAT class _Colorizer: def __init__(self): esc = "\x1b[" self.codes = {} self.codes[""] = "" self.codes["reset"] = esc + "39;49;00m" self.codes["bold"] = esc + "01m" self.codes["faint"] = esc + "02m" self.codes["standout"] = esc + "03m" self.codes["underline"] = esc + "04m" 
self.codes["blink"] = esc + "05m" self.codes["overline"] = esc + "06m" dark_colors = ["black", "darkred", "darkgreen", "brown", "darkblue", "purple", "teal", "lightgray"] light_colors = ["darkgray", "red", "green", "yellow", "blue", "fuchsia", "turquoise", "white"] x = 30 for dark, light in zip(dark_colors, light_colors): self.codes[dark] = esc + "%im" % x self.codes[light] = esc + "%i;01m" % x x += 1 del dark, light, x self.codes["darkteal"] = self.codes["turquoise"] self.codes["darkyellow"] = self.codes["brown"] self.codes["fuscia"] = self.codes["fuchsia"] self.codes["white"] = self.codes["bold"] if hasattr(sys.stdout, "isatty"): self.notty = not sys.stdout.isatty() else: self.notty = True def colorize(self, color_key, text): if self.notty: return text else: return self.codes[color_key] + text + self.codes["reset"] colorizer = _Colorizer() def make_colorizer(color: str): """Creates a function that colorizes text with the given color. For example:: ..codeblock::python >>> green = make_colorizer('darkgreen') >>> red = make_colorizer('red') >>> >>> # You can then use: >>> print("It's either " + green('OK') + ' or ' + red('Oops')) """ def inner(text): return colorizer.colorize(color, text) return inner green = make_colorizer('darkgreen') yellow = make_colorizer('darkyellow') blue = make_colorizer('darkblue') red = make_colorizer('darkred') class ColorizingStreamHandler(logging.StreamHandler): levels = { logging.WARNING: yellow, logging.ERROR: red, logging.CRITICAL: red, } def __init__(self, exclude=None, *args, **kwargs): self.exclude = exclude super().__init__(*args, **kwargs) @property def is_tty(self): isatty = getattr(self.stream, 'isatty', None) return isatty and isatty() def format(self, record): message = logging.StreamHandler.format(self, record) if self.is_tty: # Don't colorize any traceback parts = message.split('\n', 1) parts[0] = " ".join([parts[0].split(" ", 1)[0], parts[0].split(" ", 1)[1]]) message = '\n'.join(parts) return message def setup_loghandlers( level: Union[int, str, None] = None, date_format: str = DEFAULT_LOGGING_DATE_FORMAT, log_format: str = DEFAULT_LOGGING_FORMAT, name: str = 'rq.worker', ): """Sets up a log handler. Args: level (Union[int, str, None], optional): The log level. Access an integer level (10-50) or a string level ("info", "debug" etc). Defaults to None. date_format (str, optional): The date format to use. Defaults to DEFAULT_LOGGING_DATE_FORMAT ('%H:%M:%S'). log_format (str, optional): The log format to use. Defaults to DEFAULT_LOGGING_FORMAT ('%(asctime)s %(message)s'). name (str, optional): The looger name. Defaults to 'rq.worker'. """ logger = logging.getLogger(name) if not _has_effective_handler(logger): formatter = logging.Formatter(fmt=log_format, datefmt=date_format) handler = ColorizingStreamHandler(stream=sys.stdout) handler.setFormatter(formatter) handler.addFilter(lambda record: record.levelno < logging.ERROR) error_handler = ColorizingStreamHandler(stream=sys.stderr) error_handler.setFormatter(formatter) error_handler.addFilter(lambda record: record.levelno >= logging.ERROR) logger.addHandler(handler) logger.addHandler(error_handler) if level is not None: # The level may be a numeric value (e.g. when using the logging module constants) # Or a string representation of the logging level logger.setLevel(level if isinstance(level, int) else level.upper()) def _has_effective_handler(logger) -> bool: """ Checks if a logger has a handler that will catch its messages in its logger hierarchy. 
Args: logger (logging.Logger): The logger to be checked. Returns: is_configured (bool): True if a handler is found for the logger, False otherwise. """ while True: if logger.handlers: return True if not logger.parent: return False logger = logger.parent rq-1.16.2/rq/maintenance.py0000644000000000000000000000165513615410400012430 0ustar00import warnings from typing import TYPE_CHECKING from .intermediate_queue import IntermediateQueue from .queue import Queue if TYPE_CHECKING: from .worker import BaseWorker def clean_intermediate_queue(worker: 'BaseWorker', queue: Queue) -> None: """ Check whether there are any jobs stuck in the intermediate queue. A job may be stuck in the intermediate queue if a worker has successfully dequeued a job but was not able to push it to the StartedJobRegistry. This may happen in rare cases of hardware or network failure. We consider a job to be stuck in the intermediate queue if it doesn't exist in the StartedJobRegistry. """ warnings.warn( "clean_intermediate_queue is deprecated. Use IntermediateQueue.cleanup instead.", DeprecationWarning, ) intermediate_queue = IntermediateQueue(queue.key, connection=queue.connection) intermediate_queue.cleanup(worker, queue) rq-1.16.2/rq/py.typed0000644000000000000000000000000013615410400011252 0ustar00rq-1.16.2/rq/queue.py0000644000000000000000000015424013615410400011271 0ustar00import logging import sys import traceback import uuid import warnings from collections import namedtuple from datetime import datetime, timedelta, timezone from functools import total_ordering from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple, Type, Union from redis import WatchError from .timeouts import BaseDeathPenalty, UnixSignalDeathPenalty if TYPE_CHECKING: from redis import Redis from redis.client import Pipeline from .job import Retry from .connections import resolve_connection from .defaults import DEFAULT_RESULT_TTL from .dependency import Dependency from .exceptions import DequeueTimeout, NoSuchJobError from .intermediate_queue import IntermediateQueue from .job import Callback, Job, JobStatus from .logutils import blue, green from .serializers import resolve_serializer from .types import FunctionReferenceType, JobDependencyType from .utils import as_text, backend_class, compact, get_version, import_attribute, parse_timeout, utcnow logger = logging.getLogger("rq.queue") class EnqueueData( namedtuple( 'EnqueueData', [ "func", "args", "kwargs", "timeout", "result_ttl", "ttl", "failure_ttl", "description", "depends_on", "job_id", "at_front", "meta", "retry", "on_success", "on_failure", "on_stopped", ], ) ): """Helper type to use when calling enqueue_many NOTE: Does not support `depends_on` yet. """ __slots__ = () @total_ordering class Queue: job_class: Type['Job'] = Job death_penalty_class: Type[BaseDeathPenalty] = UnixSignalDeathPenalty DEFAULT_TIMEOUT: int = 180 # Default timeout seconds. redis_queue_namespace_prefix: str = 'rq:queue:' redis_queues_keys: str = 'rq:queues' @classmethod def all( cls, connection: Optional['Redis'] = None, job_class: Optional[Type['Job']] = None, serializer=None, death_penalty_class: Optional[Type[BaseDeathPenalty]] = None, ) -> List['Queue']: """Returns an iterable of all Queues. Args: connection (Optional[Redis], optional): The Redis Connection. Defaults to None. job_class (Optional[Job], optional): The Job class to use. Defaults to None. serializer (optional): The serializer to use. Defaults to None. death_penalty_class (Optional[Job], optional): The Death Penalty class to use. 
Defaults to None. Returns: queues (List[Queue]): A list of all queues. """ connection = connection or resolve_connection() def to_queue(queue_key: Union[bytes, str]): return cls.from_queue_key( as_text(queue_key), connection=connection, job_class=job_class, serializer=serializer, death_penalty_class=death_penalty_class, ) all_registered_queues = connection.smembers(cls.redis_queues_keys) all_queues = [to_queue(rq_key) for rq_key in all_registered_queues if rq_key] return all_queues @classmethod def from_queue_key( cls, queue_key: str, connection: Optional['Redis'] = None, job_class: Optional[Type['Job']] = None, serializer: Any = None, death_penalty_class: Optional[Type[BaseDeathPenalty]] = None, ) -> 'Queue': """Returns a Queue instance, based on the naming conventions for naming the internal Redis keys. Can be used to reverse-lookup Queues by their Redis keys. Args: queue_key (str): The queue key connection (Optional[Redis], optional): Redis connection. Defaults to None. job_class (Optional[Job], optional): Job class. Defaults to None. serializer (Any, optional): Serializer. Defaults to None. death_penalty_class (Optional[BaseDeathPenalty], optional): Death penalty class. Defaults to None. Raises: ValueError: If the queue_key doesn't start with the defined prefix Returns: queue (Queue): The Queue object """ prefix = cls.redis_queue_namespace_prefix if not queue_key.startswith(prefix): raise ValueError('Not a valid RQ queue key: {0}'.format(queue_key)) name = queue_key[len(prefix) :] return cls( name, connection=connection, job_class=job_class, serializer=serializer, death_penalty_class=death_penalty_class, ) def __init__( self, name: str = 'default', default_timeout: Optional[int] = None, connection: Optional['Redis'] = None, is_async: bool = True, job_class: Optional[Union[str, Type['Job']]] = None, serializer: Any = None, death_penalty_class: Type[BaseDeathPenalty] = UnixSignalDeathPenalty, **kwargs, ): """Initializes a Queue object. Args: name (str, optional): The queue name. Defaults to 'default'. default_timeout (Optional[int], optional): Queue's default timeout. Defaults to None. connection (Optional[Redis], optional): Redis connection. Defaults to None. is_async (bool, optional): Whether jobs should run "async" (using the worker). If `is_async` is false, jobs will run in the same process from where they were enqueued. Defaults to True. job_class (Union[str, Type['Job']], optional): Job class or a string referencing the Job class path. Defaults to None. serializer (Any, optional): Serializer. Defaults to None. death_penalty_class (Type[BaseDeathPenalty], optional): The death penalty class to use. Defaults to UnixSignalDeathPenalty. """ self.connection = connection or resolve_connection() prefix = self.redis_queue_namespace_prefix self.name = name self._key = '{0}{1}'.format(prefix, name) self._default_timeout = parse_timeout(default_timeout) or self.DEFAULT_TIMEOUT self._is_async = is_async self.log = logger if 'async' in kwargs: self._is_async = kwargs['async'] warnings.warn('The `async` keyword is deprecated.
Use `is_async` instead', DeprecationWarning) # override class attribute job_class if one was passed if job_class is not None: if isinstance(job_class, str): job_class = import_attribute(job_class) self.job_class = job_class self.death_penalty_class = death_penalty_class self.serializer = resolve_serializer(serializer) self.redis_server_version: Optional[Tuple[int, int, int]] = None def __len__(self): return self.count def __bool__(self): return True def __iter__(self): yield self def get_redis_server_version(self) -> Tuple[int, int, int]: """Return Redis server version of connection Returns: redis_version (Tuple): A tuple with the parsed Redis version (eg: (5,0,0)) """ if not self.redis_server_version: self.redis_server_version = get_version(self.connection) return self.redis_server_version @property def key(self): """Returns the Redis key for this Queue.""" return self._key @property def intermediate_queue_key(self): """Returns the Redis key for intermediate queue.""" return IntermediateQueue.get_intermediate_queue_key(self._key) @property def intermediate_queue(self) -> IntermediateQueue: """Returns the IntermediateQueue instance for this Queue.""" return IntermediateQueue(self.key, connection=self.connection) @property def registry_cleaning_key(self): """Redis key used to indicate this queue has been cleaned.""" return 'rq:clean_registries:%s' % self.name @property def scheduler_pid(self) -> int: from rq.scheduler import RQScheduler pid = self.connection.get(RQScheduler.get_locking_key(self.name)) return int(pid.decode()) if pid is not None else None def acquire_maintenance_lock(self) -> bool: """Returns a boolean indicating whether a lock to clean this queue is acquired. A lock expires in 899 seconds (15 minutes - 1 second) Returns: lock_acquired (bool) """ lock_acquired = self.connection.set(self.registry_cleaning_key, 1, nx=1, ex=899) if not lock_acquired: return False return lock_acquired def release_maintenance_lock(self): """Deletes the maintenance lock after registries have been cleaned""" self.connection.delete(self.registry_cleaning_key) def empty(self): """Removes all messages on the queue. This is currently done using a Lua script, which iterates over all queue messages and deletes the jobs and their dependents. It registers the Lua script and calls it. The script's return value (the number of deleted jobs) is passed through, even though callers don't strictly need it. Returns: count (int): The number of jobs removed from the queue. """ script = """ local prefix = "{0}" local q = KEYS[1] local count = 0 while true do local job_id = redis.call("lpop", q) if job_id == false then break end -- Delete the relevant keys redis.call("del", prefix..job_id) redis.call("del", prefix..job_id..":dependents") count = count + 1 end return count """.format( self.job_class.redis_job_namespace_prefix ).encode( "utf-8" ) script = self.connection.register_script(script) return script(keys=[self.key]) def delete(self, delete_jobs: bool = True): """Deletes the queue. Args: delete_jobs (bool): If true, removes all the associated messages on the queue first. """ if delete_jobs: self.empty() with self.connection.pipeline() as pipeline: pipeline.srem(self.redis_queues_keys, self._key) pipeline.delete(self._key) pipeline.execute() def is_empty(self) -> bool: """Returns whether the current queue is empty.
Returns: is_empty (bool): Whether the queue is empty """ return self.count == 0 @property def is_async(self) -> bool: """Returns whether the current queue is async.""" return bool(self._is_async) def fetch_job(self, job_id: str) -> Optional['Job']: """Fetch a single job by Job ID. If the job key is not found, runs the `remove` method to drop the stale ID from the queue. If the job's origin matches this queue's name, returns the Job Args: job_id (str): The Job ID Returns: job (Optional[Job]): The job if found """ try: job = self.job_class.fetch(job_id, connection=self.connection, serializer=self.serializer) except NoSuchJobError: self.remove(job_id) else: if job.origin == self.name: return job def get_job_position(self, job_or_id: Union['Job', str]) -> Optional[int]: """Returns the position of a job within the queue With Redis before 6.0.6 or redis-py before 3.5.4 this has complexity worse than O(N) and should not be used on very long job queues. Later Redis and redis-py versions support the LPOS command, which handles job positions within Redis' C implementation. Args: job_or_id (Union[Job, str]): The Job instance or Job ID Returns: position (Optional[int]): The job's position in the queue, or None if not found """ job_id = job_or_id.id if isinstance(job_or_id, self.job_class) else job_or_id if self.get_redis_server_version() >= (6, 0, 6): try: return self.connection.lpos(self.key, job_id) except AttributeError: # not yet implemented by redis-py pass if job_id in self.job_ids: return self.job_ids.index(job_id) return None def get_job_ids(self, offset: int = 0, length: int = -1) -> List[str]: """Returns a slice of job IDs in the queue. Args: offset (int, optional): The offset. Defaults to 0. length (int, optional): The slice length. Defaults to -1 (last element). Returns: job_ids (List[str]): A list of job IDs """ start = offset if length >= 0: end = offset + (length - 1) else: end = length job_ids = [as_text(job_id) for job_id in self.connection.lrange(self.key, start, end)] self.log.debug('Getting jobs for queue %s: %d found.', green(self.name), len(job_ids)) return job_ids def get_jobs(self, offset: int = 0, length: int = -1) -> List['Job']: """Returns a slice of jobs in the queue. Args: offset (int, optional): The offset. Defaults to 0. length (int, optional): The slice length. Defaults to -1. Returns: jobs (List[Job]): A list of jobs """ job_ids = self.get_job_ids(offset, length) return compact([self.fetch_job(job_id) for job_id in job_ids]) @property def job_ids(self) -> List[str]: """Returns a list of all job IDs in the queue.""" return self.get_job_ids() @property def jobs(self) -> List['Job']: """Returns a list of all (valid) jobs in the queue.""" return self.get_jobs() @property def count(self) -> int: """Returns a count of all messages in the queue.""" return self.connection.llen(self.key) @property def failed_job_registry(self): """Returns this queue's FailedJobRegistry.""" from rq.registry import FailedJobRegistry return FailedJobRegistry(queue=self, job_class=self.job_class, serializer=self.serializer) @property def started_job_registry(self): """Returns this queue's StartedJobRegistry.""" from rq.registry import StartedJobRegistry return StartedJobRegistry(queue=self, job_class=self.job_class, serializer=self.serializer) @property def finished_job_registry(self): """Returns this queue's FinishedJobRegistry.""" from rq.registry import FinishedJobRegistry # TODO: Why was job_class only omitted here before? Was it intentional? 
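        # A minimal usage sketch of the registry accessors above (an illustration,
        # not part of the original source; `my_queue` is a hypothetical Queue):
        #
        #     count = my_queue.failed_job_registry.count
        #     for job_id in my_queue.failed_job_registry.get_job_ids():
        #         my_queue.failed_job_registry.requeue(job_id)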
return FinishedJobRegistry(queue=self, job_class=self.job_class, serializer=self.serializer) @property def deferred_job_registry(self): """Returns this queue's DeferredJobRegistry.""" from rq.registry import DeferredJobRegistry return DeferredJobRegistry(queue=self, job_class=self.job_class, serializer=self.serializer) @property def scheduled_job_registry(self): """Returns this queue's ScheduledJobRegistry.""" from rq.registry import ScheduledJobRegistry return ScheduledJobRegistry(queue=self, job_class=self.job_class, serializer=self.serializer) @property def canceled_job_registry(self): """Returns this queue's CanceledJobRegistry.""" from rq.registry import CanceledJobRegistry return CanceledJobRegistry(queue=self, job_class=self.job_class, serializer=self.serializer) def remove(self, job_or_id: Union['Job', str], pipeline: Optional['Pipeline'] = None): """Removes Job from queue, accepts either a Job instance or ID. Args: job_or_id (Union[Job, str]): The Job instance or Job ID string. pipeline (Optional[Pipeline], optional): The Redis Pipeline. Defaults to None. Returns: count (int): The number of entries removed from the queue list (the result of `LREM`) """ job_id: str = job_or_id.id if isinstance(job_or_id, self.job_class) else job_or_id if pipeline is not None: return pipeline.lrem(self.key, 1, job_id) return self.connection.lrem(self.key, 1, job_id) def compact(self): """Removes all "dead" jobs from the queue by cycling through it, while guaranteeing FIFO semantics. """ COMPACT_QUEUE = '{0}_compact:{1}'.format(self.redis_queue_namespace_prefix, uuid.uuid4()) # noqa self.connection.rename(self.key, COMPACT_QUEUE) while True: job_id = self.connection.lpop(COMPACT_QUEUE) if job_id is None: break if self.job_class.exists(as_text(job_id), self.connection): self.connection.rpush(self.key, job_id) def push_job_id(self, job_id: str, pipeline: Optional['Pipeline'] = None, at_front: bool = False): """Pushes a job ID on the corresponding Redis queue. 'at_front' allows you to push the job onto the front instead of the back of the queue Args: job_id (str): The Job ID pipeline (Optional[Pipeline], optional): The Redis Pipeline to use. Defaults to None. at_front (bool, optional): Whether to push the job to the front of the queue. Defaults to False. """ connection = pipeline if pipeline is not None else self.connection push = connection.lpush if at_front else connection.rpush result = push(self.key, job_id) if pipeline is None: self.log.debug('Pushed job %s into %s, %s job(s) are in queue.', blue(job_id), green(self.name), result) else: # Pipelines do not return the number of jobs in the queue. self.log.debug('Pushed job %s into %s', blue(job_id), green(self.name)) def create_job( self, func: 'FunctionReferenceType', args: Union[Tuple, List, None] = None, kwargs: Optional[Dict] = None, timeout: Optional[int] = None, result_ttl: Optional[int] = None, ttl: Optional[int] = None, failure_ttl: Optional[int] = None, description: Optional[str] = None, depends_on: Optional['JobDependencyType'] = None, job_id: Optional[str] = None, meta: Optional[Dict] = None, status: JobStatus = JobStatus.QUEUED, retry: Optional['Retry'] = None, *, on_success: Optional[Union[Callback, Callable]] = None, on_failure: Optional[Union[Callback, Callable]] = None, on_stopped: Optional[Union[Callback, Callable]] = None, ) -> Job: """Creates a job based on the given parameters. Args: func (FunctionReferenceType): The function reference: a callable or the path to one. args (Union[Tuple, List, None], optional): The `*args` to pass to the function. Defaults to None. 
kwargs (Optional[Dict], optional): The `**kwargs` to pass to the function. Defaults to None. timeout (Optional[int], optional): Function timeout. Defaults to None. result_ttl (Optional[int], optional): Result time to live. Defaults to None. ttl (Optional[int], optional): Time to live. Defaults to None. failure_ttl (Optional[int], optional): Failure time to live. Defaults to None. description (Optional[str], optional): The description. Defaults to None. depends_on (Optional[JobDependencyType], optional): The job dependencies. Defaults to None. job_id (Optional[str], optional): Job ID. Defaults to None. meta (Optional[Dict], optional): Job metadata. Defaults to None. status (JobStatus, optional): Job status. Defaults to JobStatus.QUEUED. retry (Optional[Retry], optional): The Retry Object. Defaults to None. on_success (Optional[Union[Callback, Callable[..., Any]]], optional): Callback for on success. Defaults to None. Callable is deprecated. on_failure (Optional[Union[Callback, Callable[..., Any]]], optional): Callback for on failure. Defaults to None. Callable is deprecated. on_stopped (Optional[Union[Callback, Callable[..., Any]]], optional): Callback for on stopped. Defaults to None. Callable is deprecated. Raises: ValueError: If the timeout is 0 ValueError: If the job TTL is 0 or negative Returns: Job: The created job """ timeout = parse_timeout(timeout) if timeout is None: timeout = self._default_timeout elif timeout == 0: raise ValueError('0 timeout is not allowed. Use -1 for infinite timeout') result_ttl = parse_timeout(result_ttl) failure_ttl = parse_timeout(failure_ttl) ttl = parse_timeout(ttl) if ttl is not None and ttl <= 0: raise ValueError('Job ttl must be greater than 0') job = self.job_class.create( func, args=args, kwargs=kwargs, connection=self.connection, result_ttl=result_ttl, ttl=ttl, failure_ttl=failure_ttl, status=status, description=description, depends_on=depends_on, timeout=timeout, id=job_id, origin=self.name, meta=meta, serializer=self.serializer, on_success=on_success, on_failure=on_failure, on_stopped=on_stopped, ) if retry: job.retries_left = retry.max job.retry_intervals = retry.intervals return job def setup_dependencies(self, job: 'Job', pipeline: Optional['Pipeline'] = None) -> 'Job': """If a _dependent_ job depends on any unfinished job, register all the _dependent_ job's dependencies instead of enqueueing it. `Job#fetch_dependencies` sets WATCH on all dependencies. If a WatchError is raised when the pipeline is executed, that means something else has modified either the set of dependencies or the status of one of them. In this case, we simply retry. Args: job (Job): The job pipeline (Optional[Pipeline], optional): The Redis Pipeline. Defaults to None. Returns: job (Job): The Job """ if len(job._dependency_ids) > 0: orig_status = job.get_status(refresh=False) pipe = pipeline if pipeline is not None else self.connection.pipeline() while True: try: # Also calling watch even if caller # passed in a pipeline since Queue#create_job # is called from within this method. pipe.watch(job.dependencies_key) dependencies = job.fetch_dependencies(watch=True, pipeline=pipe) pipe.multi() for dependency in dependencies: if dependency.get_status(refresh=False) != JobStatus.FINISHED: # NOTE: If the following code changes local variables, those values probably have # to be set back to their original values in the handling of WatchError below! 
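                            # At least one dependency is unfinished: the job is marked DEFERRED,
                            # registered on its dependencies' dependents sets, and saved without
                            # being pushed onto the queue; enqueue_dependents() will enqueue it
                            # later, once its parents finish.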
job.set_status(JobStatus.DEFERRED, pipeline=pipe) job.register_dependency(pipeline=pipe) job.save(pipeline=pipe) job.cleanup(ttl=job.ttl, pipeline=pipe) if pipeline is None: pipe.execute() return job break except WatchError: if pipeline is None: # The call to job.set_status(JobStatus.DEFERRED, pipeline=pipe) above has changed the # internal "_status". We have to reset it to its original value (probably QUEUED), so # if during the next run no unfinished dependencies exist anymore, the job gets # enqueued correctly by enqueue_call(). job._status = orig_status continue else: # if pipeline comes from caller, re-raise to them raise elif pipeline is not None: pipeline.multi() # Ensure pipeline in multi mode before returning to caller return job def enqueue_call( self, func: 'FunctionReferenceType', args: Union[Tuple, List, None] = None, kwargs: Optional[Dict] = None, timeout: Optional[int] = None, result_ttl: Optional[int] = None, ttl: Optional[int] = None, failure_ttl: Optional[int] = None, description: Optional[str] = None, depends_on: Optional['JobDependencyType'] = None, job_id: Optional[str] = None, at_front: bool = False, meta: Optional[Dict] = None, retry: Optional['Retry'] = None, on_success: Optional[Union[Callback, Callable[..., Any]]] = None, on_failure: Optional[Union[Callback, Callable[..., Any]]] = None, on_stopped: Optional[Union[Callback, Callable[..., Any]]] = None, pipeline: Optional['Pipeline'] = None, ) -> Job: """Creates a job to represent the delayed function call and enqueues it. It is much like `.enqueue()`, except that it takes the function's args and kwargs as explicit arguments. Any kwargs passed to this function contain options for RQ itself. Args: func (FunctionReferenceType): The reference to the function args (Union[Tuple, List, None], optional): The `*args` to pass to the function. Defaults to None. kwargs (Optional[Dict], optional): The `**kwargs` to pass to the function. Defaults to None. timeout (Optional[int], optional): Function timeout. Defaults to None. result_ttl (Optional[int], optional): Result time to live. Defaults to None. ttl (Optional[int], optional): Time to live. Defaults to None. failure_ttl (Optional[int], optional): Failure time to live. Defaults to None. description (Optional[str], optional): The job description. Defaults to None. depends_on (Optional[JobDependencyType], optional): The job dependencies. Defaults to None. job_id (Optional[str], optional): The job ID. Defaults to None. at_front (bool, optional): Whether to enqueue the job at the front. Defaults to False. meta (Optional[Dict], optional): Metadata to attach to the job. Defaults to None. retry (Optional[Retry], optional): Retry object. Defaults to None. on_success (Optional[Union[Callback, Callable[..., Any]]], optional): Callback for on success. Defaults to None. Callable is deprecated. on_failure (Optional[Union[Callback, Callable[..., Any]]], optional): Callback for on failure. Defaults to None. Callable is deprecated. on_stopped (Optional[Union[Callback, Callable[..., Any]]], optional): Callback for on stopped. Defaults to None. Callable is deprecated. pipeline (Optional[Pipeline], optional): The Redis Pipeline. Defaults to None. 
Returns: Job: The enqueued Job """ job = self.create_job( func, args=args, kwargs=kwargs, result_ttl=result_ttl, ttl=ttl, failure_ttl=failure_ttl, description=description, depends_on=depends_on, job_id=job_id, meta=meta, status=JobStatus.QUEUED, timeout=timeout, retry=retry, on_success=on_success, on_failure=on_failure, on_stopped=on_stopped, ) return self.enqueue_job(job, pipeline=pipeline, at_front=at_front) @staticmethod def prepare_data( func: 'FunctionReferenceType', args: Union[Tuple, List, None] = None, kwargs: Optional[Dict] = None, timeout: Optional[int] = None, result_ttl: Optional[int] = None, ttl: Optional[int] = None, failure_ttl: Optional[int] = None, description: Optional[str] = None, depends_on: Optional[List] = None, job_id: Optional[str] = None, at_front: bool = False, meta: Optional[Dict] = None, retry: Optional['Retry'] = None, on_success: Optional[Union[Callback, Callable]] = None, on_failure: Optional[Union[Callback, Callable]] = None, on_stopped: Optional[Union[Callback, Callable]] = None, ) -> EnqueueData: """Needed until support for Python < 3.7 is dropped; from then on, defaults can be specified on the named tuple itself and this logic can be kept within EnqueueData Args: func (FunctionReferenceType): The reference to the function args (Union[Tuple, List, None], optional): The `*args` to pass to the function. Defaults to None. kwargs (Optional[Dict], optional): The `**kwargs` to pass to the function. Defaults to None. timeout (Optional[int], optional): Function timeout. Defaults to None. result_ttl (Optional[int], optional): Result time to live. Defaults to None. ttl (Optional[int], optional): Time to live. Defaults to None. failure_ttl (Optional[int], optional): Failure time to live. Defaults to None. description (Optional[str], optional): The job description. Defaults to None. depends_on (Optional[JobDependencyType], optional): The job dependencies. Defaults to None. job_id (Optional[str], optional): The job ID. Defaults to None. at_front (bool, optional): Whether to enqueue the job at the front. Defaults to False. meta (Optional[Dict], optional): Metadata to attach to the job. Defaults to None. retry (Optional[Retry], optional): Retry object. Defaults to None. on_success (Optional[Union[Callback, Callable[..., Any]]], optional): Callback for on success. Defaults to None. Callable is deprecated. on_failure (Optional[Union[Callback, Callable[..., Any]]], optional): Callback for on failure. Defaults to None. Callable is deprecated. on_stopped (Optional[Union[Callback, Callable[..., Any]]], optional): Callback for on stopped. Defaults to None. Callable is deprecated. Returns: EnqueueData: The EnqueueData """ return EnqueueData( func, args, kwargs, timeout, result_ttl, ttl, failure_ttl, description, depends_on, job_id, at_front, meta, retry, on_success, on_failure, on_stopped, ) def enqueue_many(self, job_datas: List['EnqueueData'], pipeline: Optional['Pipeline'] = None) -> List[Job]: """Creates multiple jobs (created via `Queue.prepare_data` calls) to represent the delayed function calls and enqueues them. Args: job_datas (List['EnqueueData']): A list of job data pipeline (Optional[Pipeline], optional): The Redis Pipeline. Defaults to None. 
Returns: List[Job]: A list of enqueued jobs """ pipe = pipeline if pipeline is not None else self.connection.pipeline() jobs_without_dependencies = [] jobs_with_unmet_dependencies = [] jobs_with_met_dependencies = [] def get_job_kwargs(job_data, initial_status): return { "func": job_data.func, "args": job_data.args, "kwargs": job_data.kwargs, "result_ttl": job_data.result_ttl, "ttl": job_data.ttl, "failure_ttl": job_data.failure_ttl, "description": job_data.description, "depends_on": job_data.depends_on, "job_id": job_data.job_id, "meta": job_data.meta, "status": initial_status, "timeout": job_data.timeout, "retry": job_data.retry, "on_success": job_data.on_success, "on_failure": job_data.on_failure, "on_stopped": job_data.on_stopped, } # Enqueue jobs without dependencies job_datas_without_dependencies = [job_data for job_data in job_datas if not job_data.depends_on] if job_datas_without_dependencies: jobs_without_dependencies = [ self._enqueue_job( self.create_job(**get_job_kwargs(job_data, JobStatus.QUEUED)), pipeline=pipe, at_front=job_data.at_front, ) for job_data in job_datas_without_dependencies ] if pipeline is None: pipe.execute() job_datas_with_dependencies = [job_data for job_data in job_datas if job_data.depends_on] if job_datas_with_dependencies: # Save all jobs with dependencies as deferred jobs_with_dependencies = [ self.create_job(**get_job_kwargs(job_data, JobStatus.DEFERRED)) for job_data in job_datas_with_dependencies ] for job in jobs_with_dependencies: job.save(pipeline=pipe) if pipeline is None: pipe.execute() # Enqueue the jobs whose dependencies have been met jobs_with_met_dependencies, jobs_with_unmet_dependencies = Dependency.get_jobs_with_met_dependencies( jobs_with_dependencies, pipeline=pipe ) jobs_with_met_dependencies = [ self._enqueue_job(job, pipeline=pipe, at_front=job.enqueue_at_front) for job in jobs_with_met_dependencies ] if pipeline is None: pipe.execute() return jobs_without_dependencies + jobs_with_unmet_dependencies + jobs_with_met_dependencies def run_job(self, job: 'Job') -> Job: """Run the job Args: job (Job): The job to run Returns: Job: The job after it has been performed """ job.perform() result_ttl = job.get_result_ttl(default_ttl=DEFAULT_RESULT_TTL) with self.connection.pipeline() as pipeline: job._handle_success(result_ttl=result_ttl, pipeline=pipeline) job.cleanup(result_ttl, pipeline=pipeline) pipeline.execute() return job @classmethod def parse_args(cls, f: 'FunctionReferenceType', *args, **kwargs): """ Parses arguments passed to `queue.enqueue()` and `queue.enqueue_at()`. The function argument `f` may be any of the following: * A reference to a function * A reference to an object's instance method * A string, representing the location of a function (must be meaningful to the import context of the workers) Args: f (FunctionReferenceType): The function reference args (*args): function args kwargs (*kwargs): function kwargs """ if not isinstance(f, str) and f.__module__ == '__main__': raise ValueError('Functions from the __main__ module cannot be processed by workers') # Detect explicit invocations, i.e. 
of the form: # q.enqueue(foo, args=(1, 2), kwargs={'a': 1}, job_timeout=30) timeout = kwargs.pop('job_timeout', None) description = kwargs.pop('description', None) result_ttl = kwargs.pop('result_ttl', None) ttl = kwargs.pop('ttl', None) failure_ttl = kwargs.pop('failure_ttl', None) depends_on = kwargs.pop('depends_on', None) job_id = kwargs.pop('job_id', None) at_front = kwargs.pop('at_front', False) meta = kwargs.pop('meta', None) retry = kwargs.pop('retry', None) on_success = kwargs.pop('on_success', None) on_failure = kwargs.pop('on_failure', None) on_stopped = kwargs.pop('on_stopped', None) pipeline = kwargs.pop('pipeline', None) if 'args' in kwargs or 'kwargs' in kwargs: assert args == (), 'Extra positional arguments cannot be used when using explicit args and kwargs' # noqa args = kwargs.pop('args', None) kwargs = kwargs.pop('kwargs', None) return ( f, timeout, description, result_ttl, ttl, failure_ttl, depends_on, job_id, at_front, meta, retry, on_success, on_failure, on_stopped, pipeline, args, kwargs, ) def enqueue(self, f: 'FunctionReferenceType', *args, **kwargs) -> 'Job': """Creates a job to represent the delayed function call and enqueues it. Receives the same parameters accepted by the `enqueue_call` method. Args: f (FunctionReferenceType): The function reference args (*args): function args kwargs (*kwargs): function kwargs Returns: job (Job): The created Job """ ( f, timeout, description, result_ttl, ttl, failure_ttl, depends_on, job_id, at_front, meta, retry, on_success, on_failure, on_stopped, pipeline, args, kwargs, ) = Queue.parse_args(f, *args, **kwargs) return self.enqueue_call( func=f, args=args, kwargs=kwargs, timeout=timeout, result_ttl=result_ttl, ttl=ttl, failure_ttl=failure_ttl, description=description, depends_on=depends_on, job_id=job_id, at_front=at_front, meta=meta, retry=retry, on_success=on_success, on_failure=on_failure, on_stopped=on_stopped, pipeline=pipeline, ) def enqueue_at(self, datetime: datetime, f, *args, **kwargs): """Schedules a job to be enqueued at the specified time. Args: datetime (datetime): The time at which the job should be enqueued f (FunctionReferenceType): The function reference Returns: job (Job): The scheduled Job """ ( f, timeout, description, result_ttl, ttl, failure_ttl, depends_on, job_id, at_front, meta, retry, on_success, on_failure, on_stopped, pipeline, args, kwargs, ) = Queue.parse_args(f, *args, **kwargs) job = self.create_job( f, status=JobStatus.SCHEDULED, args=args, kwargs=kwargs, timeout=timeout, result_ttl=result_ttl, ttl=ttl, failure_ttl=failure_ttl, description=description, depends_on=depends_on, job_id=job_id, meta=meta, retry=retry, on_success=on_success, on_failure=on_failure, on_stopped=on_stopped, ) if at_front: job.enqueue_at_front = True return self.schedule_job(job, datetime, pipeline=pipeline) def schedule_job(self, job: 'Job', datetime: datetime, pipeline: Optional['Pipeline'] = None): """Puts job on ScheduledJobRegistry Args: job (Job): The job to schedule datetime (datetime): The time at which the job should be enqueued pipeline (Optional[Pipeline], optional): The Redis Pipeline. Defaults to None. 
Returns: job (Job): The scheduled job """ from .registry import ScheduledJobRegistry registry = ScheduledJobRegistry(queue=self) pipe = pipeline if pipeline is not None else self.connection.pipeline() # Add Queue key set pipe.sadd(self.redis_queues_keys, self.key) job.save(pipeline=pipe) registry.schedule(job, datetime, pipeline=pipe) if pipeline is None: pipe.execute() return job def enqueue_in(self, time_delta: timedelta, func: 'FunctionReferenceType', *args, **kwargs) -> 'Job': """Schedules a job to be executed after the given `timedelta` Args: time_delta (timedelta): The timedelta object func (FunctionReferenceType): The function reference Returns: job (Job): The enqueued Job """ return self.enqueue_at(datetime.now(timezone.utc) + time_delta, func, *args, **kwargs) def enqueue_job(self, job: 'Job', pipeline: Optional['Pipeline'] = None, at_front: bool = False) -> Job: """Enqueues a job for delayed execution, checking dependencies. Args: job (Job): The job to enqueue pipeline (Optional[Pipeline], optional): The Redis pipeline to use. Defaults to None. at_front (bool, optional): Whether to enqueue the job at the front of the queue. Defaults to False. Returns: Job: The enqueued job """ job.origin = self.name job = self.setup_dependencies(job, pipeline=pipeline) # If we do not depend on an unfinished job, enqueue the job. if job.get_status(refresh=False) != JobStatus.DEFERRED: return self._enqueue_job(job, pipeline=pipeline, at_front=at_front) return job def _enqueue_job(self, job: 'Job', pipeline: Optional['Pipeline'] = None, at_front: bool = False) -> Job: """Enqueues a job for delayed execution without checking dependencies. If Queue is instantiated with is_async=False, job is executed immediately. Args: job (Job): The job to enqueue pipeline (Optional[Pipeline], optional): The Redis pipeline to use. Defaults to None. at_front (bool, optional): Whether to enqueue the job at the front of the queue. Defaults to False. Returns: Job: The enqueued job """ pipe = pipeline if pipeline is not None else self.connection.pipeline() # Add Queue key set pipe.sadd(self.redis_queues_keys, self.key) job.redis_server_version = self.get_redis_server_version() job.set_status(JobStatus.QUEUED, pipeline=pipe) job.origin = self.name job.enqueued_at = utcnow() if job.timeout is None: job.timeout = self._default_timeout job.save(pipeline=pipe) job.cleanup(ttl=job.ttl, pipeline=pipe) if self._is_async: self.push_job_id(job.id, pipeline=pipe, at_front=at_front) if pipeline is None: pipe.execute() if not self._is_async: job = self.run_sync(job) return job def run_sync(self, job: 'Job') -> 'Job': """Run a job synchronously, meaning in the same process this method was called from. Args: job (Job): The job to run Returns: Job: The job instance """ with self.connection.pipeline() as pipeline: job.prepare_for_execution('sync', pipeline) try: job = self.run_job(job) except: # noqa with self.connection.pipeline() as pipeline: job.set_status(JobStatus.FAILED, pipeline=pipeline) exc_string = ''.join(traceback.format_exception(*sys.exc_info())) job._handle_failure(exc_string, pipeline) pipeline.execute() if job.failure_callback: job.failure_callback(job, self.connection, *sys.exc_info()) # type: ignore else: if job.success_callback: job.success_callback(job, self.connection, job.return_value()) # type: ignore return job def enqueue_dependents( self, job: 'Job', pipeline: Optional['Pipeline'] = None, exclude_job_id: Optional[str] = None ): """Enqueues all jobs in the given job's dependents set and clears it. 
When called without a pipeline, this method uses WATCH/MULTI/EXEC. If you pass a pipeline, only MULTI is called. The rest is up to the caller. Args: job (Job): The Job whose dependents to enqueue pipeline (Optional[Pipeline], optional): The Redis Pipeline. Defaults to None. exclude_job_id (Optional[str], optional): A job ID to exclude when checking whether dependencies are met. Defaults to None. """ from .registry import DeferredJobRegistry pipe = pipeline if pipeline is not None else self.connection.pipeline() dependents_key = job.dependents_key while True: try: # if a pipeline is passed, the caller is responsible for calling WATCH # to ensure all jobs are enqueued if pipeline is None: pipe.watch(dependents_key) dependent_job_ids = {as_text(_id) for _id in pipe.smembers(dependents_key)} # There are no dependents if not dependent_job_ids: break jobs_to_enqueue = [ dependent_job for dependent_job in self.job_class.fetch_many( dependent_job_ids, connection=self.connection, serializer=self.serializer ) if dependent_job and dependent_job.dependencies_are_met( parent_job=job, pipeline=pipe, exclude_job_id=exclude_job_id, ) and dependent_job.get_status(refresh=False) != JobStatus.CANCELED ] pipe.multi() if not jobs_to_enqueue: break for dependent in jobs_to_enqueue: enqueue_at_front = dependent.enqueue_at_front or False registry = DeferredJobRegistry( dependent.origin, self.connection, job_class=self.job_class, serializer=self.serializer ) registry.remove(dependent, pipeline=pipe) if dependent.origin == self.name: self._enqueue_job(dependent, pipeline=pipe, at_front=enqueue_at_front) else: queue = self.__class__(name=dependent.origin, connection=self.connection) queue._enqueue_job(dependent, pipeline=pipe, at_front=enqueue_at_front) # Only delete dependents_key if all dependents have been enqueued if len(jobs_to_enqueue) == len(dependent_job_ids): pipe.delete(dependents_key) else: enqueued_job_ids = [job.id for job in jobs_to_enqueue] pipe.srem(dependents_key, *enqueued_job_ids) if pipeline is None: pipe.execute() break except WatchError: if pipeline is None: continue else: # if the pipeline comes from the caller, we re-raise the # exception as it is the responsibility of the caller to # handle it raise def pop_job_id(self) -> Optional[str]: """Pops the first job ID off this Redis queue. Returns: job_id (Optional[str]): The job ID, or None if the queue is empty """ return as_text(self.connection.lpop(self.key)) @classmethod def lpop(cls, queue_keys: List[str], timeout: Optional[int], connection: Optional['Redis'] = None): """Helper method to abstract away from some Redis API details where LPOP accepts only a single key, whereas BLPOP accepts multiple. So if we want the non-blocking LPOP, we need to iterate over all queues, do individual LPOPs, and return the result. Until Redis receives a specific method for this, we'll have to wrap it this way. The timeout parameter is interpreted as follows: None - non-blocking (return immediately) > 0 - maximum number of seconds to block Args: queue_keys (List[str]): A list of queue keys to pop from timeout (Optional[int]): The blocking timeout, in seconds connection (Optional[Redis], optional): The Redis connection. Defaults to None. Raises: ValueError: If timeout of 0 was passed DequeueTimeout: BLPOP Timeout Returns: result (Optional[Tuple]): A (queue_key, job_id) tuple, or None when no job was found """ connection = connection or resolve_connection() if timeout is not None: # blocking variant if timeout == 0: raise ValueError('RQ does not support indefinite timeouts. 
Please pick a timeout value > 0') colored_queues = ', '.join(map(str, [green(str(queue)) for queue in queue_keys])) logger.debug(f"Starting BLPOP operation for queues {colored_queues} with timeout of {timeout}") result = connection.blpop(queue_keys, timeout) if result is None: logger.debug(f"BLPOP timeout, no jobs found on queues {colored_queues}") raise DequeueTimeout(timeout, queue_keys) queue_key, job_id = result return queue_key, job_id else: # non-blocking variant for queue_key in queue_keys: blob = connection.lpop(queue_key) if blob is not None: return queue_key, blob return None @classmethod def lmove(cls, connection: 'Redis', queue_key: str, timeout: Optional[int]): """Similar to lpop, but accepts only a single queue key and immediately pushes the result to an intermediate queue. """ intermediate_queue = IntermediateQueue(queue_key, connection) if timeout is not None: # blocking variant if timeout == 0: raise ValueError('RQ does not support indefinite timeouts. Please pick a timeout value > 0') colored_queue = green(queue_key) logger.debug(f"Starting BLMOVE operation for {colored_queue} with timeout of {timeout}") result = connection.blmove(queue_key, intermediate_queue.key, timeout) if result is None: logger.debug(f"BLMOVE timeout, no jobs found on {colored_queue}") raise DequeueTimeout(timeout, queue_key) return queue_key, result else: # non-blocking variant result = connection.lmove(queue_key, intermediate_queue.key) if result is not None: return queue_key, result return None @classmethod def dequeue_any( cls, queues: List['Queue'], timeout: Optional[int], connection: Optional['Redis'] = None, job_class: Optional['Job'] = None, serializer: Any = None, death_penalty_class: Optional[Type[BaseDeathPenalty]] = None, ) -> Tuple['Job', 'Queue']: """Class method returning the job_class instance at the front of the given set of Queues, where the order of the queues is important. When all of the Queues are empty, depending on the `timeout` argument, either blocks execution of this function for the duration of the timeout or until new messages arrive on any of the queues, or returns None. See the documentation of cls.lpop for the interpretation of timeout. Args: queues (List[Queue]): List of queue objects timeout (Optional[int]): Timeout for the LPOP connection (Optional[Redis], optional): Redis Connection. Defaults to None. job_class (Optional[Type[Job]], optional): The job class. Defaults to None. serializer (Any, optional): Serializer to use. Defaults to None. death_penalty_class (Optional[Type[BaseDeathPenalty]], optional): The death penalty class. Defaults to None. 
Raises: Exception: Any exception raised while fetching the job, re-raised with job and queue info attached Returns: job, queue (Tuple[Job, Queue]): A tuple of Job, Queue """ job_class: Job = backend_class(cls, 'job_class', override=job_class) while True: queue_keys = [q.key for q in queues] if len(queue_keys) == 1 and get_version(connection) >= (6, 2, 0): result = cls.lmove(connection, queue_keys[0], timeout) else: result = cls.lpop(queue_keys, timeout, connection=connection) if result is None: return None queue_key, job_id = map(as_text, result) queue = cls.from_queue_key( queue_key, connection=connection, job_class=job_class, serializer=serializer, death_penalty_class=death_penalty_class, ) try: job = job_class.fetch(job_id, connection=connection, serializer=serializer) except NoSuchJobError: # Silently pass on jobs that don't exist (anymore), # and continue in the loop continue except Exception as e: # Attach queue information on the exception for improved error # reporting e.job_id = job_id e.queue = queue raise e return job, queue return None, None # Total ordering definition (the rest of the required Python methods are # auto-generated by the @total_ordering decorator) def __eq__(self, other): # noqa if not isinstance(other, Queue): raise TypeError('Cannot compare queues to other objects') return self.name == other.name def __lt__(self, other): if not isinstance(other, Queue): raise TypeError('Cannot compare queues to other objects') return self.name < other.name def __hash__(self): # pragma: no cover return hash(self.name) def __repr__(self): # noqa # pragma: no cover return '{0}({1!r})'.format(self.__class__.__name__, self.name) def __str__(self): return '<{0} {1}>'.format(self.__class__.__name__, self.name) rq-1.16.2/rq/registry.py0000644000000000000000000004077113615410400012020 0ustar00import calendar import logging import time import traceback from datetime import datetime, timedelta, timezone from typing import TYPE_CHECKING, Any, List, Optional, Type, Union from rq.serializers import resolve_serializer from .timeouts import BaseDeathPenalty, UnixSignalDeathPenalty if TYPE_CHECKING: from redis import Redis from redis.client import Pipeline from .connections import resolve_connection from .defaults import DEFAULT_FAILURE_TTL from .exceptions import AbandonedJobError, InvalidJobOperation, NoSuchJobError from .job import Job, JobStatus from .queue import Queue from .utils import as_text, backend_class, current_timestamp logger = logging.getLogger("rq.registry") class BaseRegistry: """ Base implementation of a job registry, implemented as a Redis sorted set. Each job is stored as a key in the registry, scored by expiration time (unix timestamp). 
""" job_class = Job death_penalty_class = UnixSignalDeathPenalty key_template = 'rq:registry:{0}' def __init__( self, name: str = 'default', connection: Optional['Redis'] = None, job_class: Optional[Type['Job']] = None, queue: Optional['Queue'] = None, serializer: Any = None, death_penalty_class: Optional[Type[BaseDeathPenalty]] = None, ): if queue: self.name = queue.name self.connection = queue.connection or resolve_connection() self.serializer = queue.serializer else: self.name = name self.connection = connection or resolve_connection() self.serializer = resolve_serializer(serializer) self.key = self.key_template.format(self.name) self.job_class = backend_class(self, 'job_class', override=job_class) self.death_penalty_class = backend_class(self, 'death_penalty_class', override=death_penalty_class) def __len__(self): """Returns the number of jobs in this registry""" return self.count def __eq__(self, other): return ( self.name == other.name and self.connection.connection_pool.connection_kwargs == other.connection.connection_pool.connection_kwargs ) def __contains__(self, item: Union[str, 'Job']) -> bool: """ Returns a boolean indicating registry contains the given job instance or job id. Args: item (Union[str, Job]): A Job ID or a Job. """ job_id = item if isinstance(item, self.job_class): job_id = item.id return self.connection.zscore(self.key, job_id) is not None @property def count(self) -> int: """Returns the number of jobs in this registry Returns: int: _description_ """ self.cleanup() return self.connection.zcard(self.key) def add(self, job: 'Job', ttl=0, pipeline: Optional['Pipeline'] = None, xx: bool = False) -> int: """Adds a job to a registry with expiry time of now + ttl, unless it's -1 which is set to +inf Args: job (Job): The Job to add ttl (int, optional): The time to live. Defaults to 0. pipeline (Optional[Pipeline], optional): The Redis Pipeline. Defaults to None. xx (bool, optional): .... Defaults to False. Returns: result (int): The ZADD command result """ score = ttl if ttl < 0 else current_timestamp() + ttl if score == -1: score = '+inf' if pipeline is not None: return pipeline.zadd(self.key, {job.id: score}, xx=xx) return self.connection.zadd(self.key, {job.id: score}, xx=xx) def remove(self, job: 'Job', pipeline: Optional['Pipeline'] = None, delete_job: bool = False): """Removes job from registry and deletes it if `delete_job == True` Args: job (Job): The Job to remove from the registry pipeline (Optional[Pipeline], optional): The Redis Pipeline. Defaults to None. delete_job (bool, optional): If should delete the job.. Defaults to False. """ connection = pipeline if pipeline is not None else self.connection job_id = job.id if isinstance(job, self.job_class) else job result = connection.zrem(self.key, job_id) if delete_job: if isinstance(job, self.job_class): job_instance = job else: job_instance = Job.fetch(job_id, connection=connection, serializer=self.serializer) job_instance.delete() return result def get_expired_job_ids(self, timestamp: Optional[float] = None): """Returns job ids whose score are less than current timestamp. Returns ids for jobs with an expiry time earlier than timestamp, specified as seconds since the Unix epoch. timestamp defaults to call time if unspecified. """ score = timestamp if timestamp is not None else current_timestamp() expired_jobs = self.connection.zrangebyscore(self.key, 0, score) return [as_text(job_id) for job_id in expired_jobs] def get_job_ids(self, start: int = 0, end: int = -1): """Returns list of all job ids. 
Args: start (int, optional): The slice start index. Defaults to 0. end (int, optional): The slice end index. Defaults to -1 (last element). Returns: job_ids (List[str]): A list of job IDs """ self.cleanup() return [as_text(job_id) for job_id in self.connection.zrange(self.key, start, end)] def get_queue(self): """Returns Queue object associated with this registry.""" return Queue(self.name, connection=self.connection, serializer=self.serializer) def get_expiration_time(self, job: 'Job') -> datetime: """Returns job's expiration time. Args: job (Job): The Job to get the expiration time for """ score = self.connection.zscore(self.key, job.id) return datetime.utcfromtimestamp(score) def requeue(self, job_or_id: Union['Job', str], at_front: bool = False) -> 'Job': """Requeues the job with the given job ID. Args: job_or_id (Union['Job', str]): The Job or the Job ID at_front (bool, optional): If the Job should be put at the front of the queue. Defaults to False. Raises: InvalidJobOperation: If nothing is returned from the `ZREM` operation. Returns: Job: The Requeued Job. """ if isinstance(job_or_id, self.job_class): job = job_or_id serializer = job.serializer else: serializer = self.serializer job = self.job_class.fetch(job_or_id, connection=self.connection, serializer=serializer) result = self.connection.zrem(self.key, job.id) if not result: raise InvalidJobOperation with self.connection.pipeline() as pipeline: queue = Queue(job.origin, connection=self.connection, job_class=self.job_class, serializer=serializer) job.started_at = None job.ended_at = None job._exc_info = '' job.save() job = queue._enqueue_job(job, pipeline=pipeline, at_front=at_front) pipeline.execute() return job class StartedJobRegistry(BaseRegistry): """ Registry of currently executing jobs. Each queue maintains a StartedJobRegistry. Jobs in this registry are ones that are currently being executed. Jobs are added to registry right before they are executed and removed right after completion (success or failure). """ key_template = 'rq:wip:{0}' def cleanup(self, timestamp: Optional[float] = None): """Remove abandoned jobs from registry and add them to FailedJobRegistry. Removes jobs with an expiry time earlier than timestamp, specified as seconds since the Unix epoch. timestamp defaults to call time if unspecified. Removed jobs are added to the global failed job queue. Args: timestamp (Optional[float], optional): The timestamp (seconds since the Unix epoch) to use as the limit. 
""" score = timestamp if timestamp is not None else current_timestamp() job_ids = self.get_expired_job_ids(score) if job_ids: failed_job_registry = FailedJobRegistry(self.name, self.connection, serializer=self.serializer) with self.connection.pipeline() as pipeline: for job_id in job_ids: try: job = self.job_class.fetch(job_id, connection=self.connection, serializer=self.serializer) except NoSuchJobError: continue job.execute_failure_callback( self.death_penalty_class, AbandonedJobError, AbandonedJobError(), traceback.extract_stack() ) retry = job.retries_left and job.retries_left > 0 if retry: queue = self.get_queue() job.retry(queue, pipeline) else: exc_string = f"due to {AbandonedJobError.__name__}" logger.warning( f'{self.__class__.__name__} cleanup: Moving job to {FailedJobRegistry.__name__} ' f'({exc_string})' ) job.set_status(JobStatus.FAILED) job._exc_info = f"Moved to {FailedJobRegistry.__name__}, {exc_string}, at {datetime.now()}" job.save(pipeline=pipeline, include_meta=False) job.cleanup(ttl=-1, pipeline=pipeline) failed_job_registry.add(job, job.failure_ttl) pipeline.zremrangebyscore(self.key, 0, score) pipeline.execute() return job_ids class FinishedJobRegistry(BaseRegistry): """ Registry of jobs that have been completed. Jobs are added to this registry after they have successfully completed for monitoring purposes. """ key_template = 'rq:finished:{0}' def cleanup(self, timestamp: Optional[float] = None): """Remove expired jobs from registry. Removes jobs with an expiry time earlier than timestamp, specified as seconds since the Unix epoch. timestamp defaults to call time if unspecified. """ score = timestamp if timestamp is not None else current_timestamp() self.connection.zremrangebyscore(self.key, 0, score) class FailedJobRegistry(BaseRegistry): """ Registry of containing failed jobs. """ key_template = 'rq:failed:{0}' def cleanup(self, timestamp: Optional[float] = None): """Remove expired jobs from registry. Removes jobs with an expiry time earlier than timestamp, specified as seconds since the Unix epoch. timestamp defaults to call time if unspecified. """ score = timestamp if timestamp is not None else current_timestamp() self.connection.zremrangebyscore(self.key, 0, score) def add( self, job: 'Job', ttl=None, exc_string: str = '', pipeline: Optional['Pipeline'] = None, _save_exc_to_job: bool = False, ): """ Adds a job to a registry with expiry time of now + ttl. `ttl` defaults to DEFAULT_FAILURE_TTL if not specified. """ if ttl is None: ttl = DEFAULT_FAILURE_TTL score = ttl if ttl < 0 else current_timestamp() + ttl if pipeline: p = pipeline else: p = self.connection.pipeline() job._exc_info = exc_string job.save(pipeline=p, include_meta=False, include_result=_save_exc_to_job) job.cleanup(ttl=ttl, pipeline=p) p.zadd(self.key, {job.id: score}) if not pipeline: p.execute() class DeferredJobRegistry(BaseRegistry): """ Registry of deferred jobs (waiting for another job to finish). """ key_template = 'rq:deferred:{0}' def cleanup(self): """This method is only here to prevent errors because this method is automatically called by `count()` and `get_job_ids()` methods implemented in BaseRegistry.""" pass class ScheduledJobRegistry(BaseRegistry): """ Registry of scheduled jobs. 
""" key_template = 'rq:scheduled:{0}' def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) # The underlying implementation of get_jobs_to_enqueue() is # the same as get_expired_job_ids, but get_expired_job_ids() doesn't # make sense in this context self.get_jobs_to_enqueue = self.get_expired_job_ids def schedule(self, job: 'Job', scheduled_datetime, pipeline: Optional['Pipeline'] = None): """ Adds job to registry, scored by its execution time (in UTC). If datetime has no tzinfo, it will assume localtimezone. """ # If datetime has no timezone, assume server's local timezone if not scheduled_datetime.tzinfo: tz = timezone(timedelta(seconds=-(time.timezone if time.daylight == 0 else time.altzone))) scheduled_datetime = scheduled_datetime.replace(tzinfo=tz) timestamp = calendar.timegm(scheduled_datetime.utctimetuple()) return self.connection.zadd(self.key, {job.id: timestamp}) def cleanup(self): """This method is only here to prevent errors because this method is automatically called by `count()` and `get_job_ids()` methods implemented in BaseRegistry.""" pass def remove_jobs(self, timestamp: Optional[datetime] = None, pipeline: Optional['Pipeline'] = None): """Remove jobs whose timestamp is in the past from registry. Args: timestamp (Optional[datetime], optional): The timestamp. Defaults to None. pipeline (Optional[Pipeline], optional): The Redis pipeline. Defaults to None. """ connection = pipeline if pipeline is not None else self.connection score = timestamp if timestamp is not None else current_timestamp() return connection.zremrangebyscore(self.key, 0, score) def get_jobs_to_schedule(self, timestamp: Optional[datetime] = None, chunk_size: int = 1000) -> List[str]: """Get's a list of job IDs that should be scheduled. Args: timestamp (Optional[datetime], optional): _description_. Defaults to None. chunk_size (int, optional): _description_. Defaults to 1000. Returns: jobs (List[str]): A list of Job ids """ score = timestamp if timestamp is not None else current_timestamp() jobs_to_schedule = self.connection.zrangebyscore(self.key, 0, score, start=0, num=chunk_size) return [as_text(job_id) for job_id in jobs_to_schedule] def get_scheduled_time(self, job_or_id: Union['Job', str]) -> datetime: """Returns datetime (UTC) at which job is scheduled to be enqueued Args: job_or_id (Union[Job, str]): The Job instance or Job ID Raises: NoSuchJobError: If the job was not found Returns: datetime (datetime): The scheduled time as datetime object """ if isinstance(job_or_id, self.job_class): job_id = job_or_id.id else: job_id = job_or_id score = self.connection.zscore(self.key, job_id) if not score: raise NoSuchJobError return datetime.fromtimestamp(score, tz=timezone.utc) class CanceledJobRegistry(BaseRegistry): key_template = 'rq:canceled:{0}' def get_expired_job_ids(self, timestamp: Optional[datetime] = None): raise NotImplementedError def cleanup(self): """This method is only here to prevent errors because this method is automatically called by `count()` and `get_job_ids()` methods implemented in BaseRegistry.""" pass def clean_registries(queue: 'Queue'): """Cleans StartedJobRegistry, FinishedJobRegistry and FailedJobRegistry of a queue. 
Args: queue (Queue): The queue to clean """ registry = FinishedJobRegistry( name=queue.name, connection=queue.connection, job_class=queue.job_class, serializer=queue.serializer ) registry.cleanup() registry = StartedJobRegistry( name=queue.name, connection=queue.connection, job_class=queue.job_class, serializer=queue.serializer ) registry.cleanup() registry = FailedJobRegistry( name=queue.name, connection=queue.connection, job_class=queue.job_class, serializer=queue.serializer ) registry.cleanup() rq-1.16.2/rq/results.py0000644000000000000000000001612513615410400011645 0ustar00import zlib from base64 import b64decode, b64encode from datetime import datetime, timezone from enum import Enum from typing import Any, Optional from redis import Redis from .defaults import UNSERIALIZABLE_RETURN_VALUE_PAYLOAD from .job import Job from .serializers import resolve_serializer from .utils import decode_redis_hash, now def get_key(job_id): return 'rq:results:%s' % job_id class Result: class Type(Enum): SUCCESSFUL = 1 FAILED = 2 STOPPED = 3 def __init__( self, job_id: str, type: Type, connection: Redis, id: Optional[str] = None, created_at: Optional[datetime] = None, return_value: Optional[Any] = None, exc_string: Optional[str] = None, serializer=None, ): self.return_value = return_value self.exc_string = exc_string self.type = type self.created_at = created_at if created_at else now() self.serializer = resolve_serializer(serializer) self.connection = connection self.job_id = job_id self.id = id def __repr__(self): return f'Result(id={self.id}, type={self.Type(self.type).name})' def __eq__(self, other): try: return self.id == other.id except AttributeError: return False def __bool__(self): return bool(self.id) @classmethod def create(cls, job, type, ttl, return_value=None, exc_string=None, pipeline=None): result = cls( job_id=job.id, type=type, connection=job.connection, return_value=return_value, exc_string=exc_string, serializer=job.serializer, ) result.save(ttl=ttl, pipeline=pipeline) return result @classmethod def create_failure(cls, job, ttl, exc_string, pipeline=None): result = cls( job_id=job.id, type=cls.Type.FAILED, connection=job.connection, exc_string=exc_string, serializer=job.serializer, ) result.save(ttl=ttl, pipeline=pipeline) return result @classmethod def all(cls, job: Job, serializer=None): """Returns all results for job""" # response = job.connection.zrange(cls.get_key(job.id), 0, 10, desc=True, withscores=True) response = job.connection.xrevrange(cls.get_key(job.id), '+', '-') results = [] for result_id, payload in response: results.append( cls.restore(job.id, result_id.decode(), payload, connection=job.connection, serializer=serializer) ) return results @classmethod def count(cls, job: Job) -> int: """Returns the number of job results""" return job.connection.xlen(cls.get_key(job.id)) @classmethod def delete_all(cls, job: Job) -> None: """Delete all job results""" job.connection.delete(cls.get_key(job.id)) @classmethod def restore(cls, job_id: str, result_id: str, payload: dict, connection: Redis, serializer=None) -> 'Result': """Create a Result object from given Redis payload""" created_at = datetime.fromtimestamp(int(result_id.split('-')[0]) / 1000, tz=timezone.utc) payload = decode_redis_hash(payload) # data, timestamp = payload # result_data = json.loads(data) # created_at = datetime.fromtimestamp(timestamp, tz=timezone.utc) serializer = resolve_serializer(serializer) return_value = payload.get('return_value') if return_value is not None: return_value = 
serializer.loads(b64decode(return_value.decode())) exc_string = payload.get('exc_string') if exc_string: exc_string = zlib.decompress(b64decode(exc_string)).decode() return Result( job_id, Result.Type(int(payload['type'])), connection=connection, id=result_id, created_at=created_at, return_value=return_value, exc_string=exc_string, ) @classmethod def fetch(cls, job: Job, serializer=None) -> Optional['Result']: """Fetch a result that matches a given job ID. The current sorted set based implementation does not allow us to fetch a given key by ID so we need to iterate through results, deserialize the payload and look for a matching ID. Future Redis streams based implementation may make this more efficient and scalable. """ return None @classmethod def fetch_latest(cls, job: Job, serializer=None, timeout: int = 0) -> Optional['Result']: """Returns the latest result for the given job instance or ID. If a non-zero timeout is provided, block for a result until timeout is reached. """ if timeout: # Unlike blpop, xread timeout is in milliseconds. "0-0" is the special value for the # first item in the stream, like '-' for xrevrange. timeout_ms = timeout * 1000 response = job.connection.xread({cls.get_key(job.id): "0-0"}, block=timeout_ms) if not response: return None response = response[0] # Querying single stream only. response = response[1] # Xread also returns Result.id, which we don't need. result_id, payload = response[-1] # Take most recent result. else: # If not blocking, use xrevrange to load a single result (as xread will load them all). response = job.connection.xrevrange(cls.get_key(job.id), '+', '-', count=1) if not response: return None result_id, payload = response[0] res = cls.restore(job.id, result_id.decode(), payload, connection=job.connection, serializer=serializer) return res @classmethod def get_key(cls, job_id): return 'rq:results:%s' % job_id def save(self, ttl, pipeline=None): """Save result data to Redis""" key = self.get_key(self.job_id) connection = pipeline if pipeline is not None else self.connection # result = connection.zadd(key, {self.serialize(): self.created_at.timestamp()}) result = connection.xadd(key, self.serialize(), maxlen=10) # If xadd() is called in a pipeline, it returns a pipeline object instead of stream ID if pipeline is None: self.id = result.decode() if ttl is not None: if ttl == -1: connection.persist(key) else: connection.expire(key, ttl) return self.id def serialize(self): data = {'type': self.type.value} if self.exc_string is not None: data['exc_string'] = b64encode(zlib.compress(self.exc_string.encode())).decode() try: serialized = self.serializer.dumps(self.return_value) except: # noqa serialized = self.serializer.dumps(UNSERIALIZABLE_RETURN_VALUE_PAYLOAD) if self.return_value is not None: data['return_value'] = b64encode(serialized).decode() # return json.dumps(data) return data rq-1.16.2/rq/scheduler.py0000644000000000000000000001775113615410400012130 0ustar00import logging import os import signal import time import traceback from datetime import datetime from enum import Enum from multiprocessing import Process from typing import List, Set from redis import ConnectionPool, Redis from .connections import parse_connection from .defaults import DEFAULT_LOGGING_DATE_FORMAT, DEFAULT_LOGGING_FORMAT, DEFAULT_SCHEDULER_FALLBACK_PERIOD from .job import Job from .logutils import setup_loghandlers from .queue import Queue from .registry import ScheduledJobRegistry from .serializers import resolve_serializer from .utils import current_timestamp, parse_names 
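# A minimal usage sketch for the scheduler defined below (an illustration, not
# part of the original source): the scheduler acquires a per-queue lock, then
# periodically moves due jobs from each queue's ScheduledJobRegistry onto the
# queue itself.
#
#     from redis import Redis
#     from rq.scheduler import RQScheduler
#
#     scheduler = RQScheduler(['default'], connection=Redis(), interval=1)
#     scheduler.acquire_locks(auto_start=True)  # forks the scheduler process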
SCHEDULER_KEY_TEMPLATE = 'rq:scheduler:%s' SCHEDULER_LOCKING_KEY_TEMPLATE = 'rq:scheduler-lock:%s' class SchedulerStatus(str, Enum): STARTED = 'started' WORKING = 'working' STOPPED = 'stopped' class RQScheduler: # STARTED: scheduler has been started but sleeping # WORKING: scheduler is in the midst of scheduling jobs # STOPPED: scheduler is in stopped condition Status = SchedulerStatus def __init__( self, queues, connection: Redis, interval=1, logging_level=logging.INFO, date_format=DEFAULT_LOGGING_DATE_FORMAT, log_format=DEFAULT_LOGGING_FORMAT, serializer=None, ): self._queue_names = set(parse_names(queues)) self._acquired_locks: Set[str] = set() self._scheduled_job_registries: List[ScheduledJobRegistry] = [] self.lock_acquisition_time = None self._connection_class, self._pool_class, self._pool_kwargs = parse_connection(connection) self.serializer = resolve_serializer(serializer) self._connection = None self.interval = interval self._stop_requested = False self._status = self.Status.STOPPED self._process = None self.log = logging.getLogger(__name__) setup_loghandlers( level=logging_level, name=__name__, log_format=log_format, date_format=date_format, ) @property def connection(self): if self._connection: return self._connection self._connection = self._connection_class( connection_pool=ConnectionPool(connection_class=self._pool_class, **self._pool_kwargs) ) return self._connection @property def acquired_locks(self): return self._acquired_locks @property def status(self): return self._status @property def should_reacquire_locks(self): """Returns True if lock_acquisition_time is longer than 10 minutes ago""" if self._queue_names == self.acquired_locks: return False if not self.lock_acquisition_time: return True return (datetime.now() - self.lock_acquisition_time).total_seconds() > DEFAULT_SCHEDULER_FALLBACK_PERIOD def acquire_locks(self, auto_start=False): """Returns the names of the queues it successfully acquired locks on""" successful_locks = set() pid = os.getpid() self.log.debug('Trying to acquire locks for %s', ', '.join(self._queue_names)) for name in self._queue_names: if self.connection.set(self.get_locking_key(name), pid, nx=True, ex=self.interval + 60): successful_locks.add(name) # Always reset _scheduled_job_registries when acquiring locks self._scheduled_job_registries = [] self._acquired_locks = self._acquired_locks.union(successful_locks) self.lock_acquisition_time = datetime.now() # If auto_start is requested and scheduler is not started, # run self.start() if self._acquired_locks and auto_start: if not self._process or not self._process.is_alive(): self.start() return successful_locks def prepare_registries(self, queue_names: str = None): """Prepare scheduled job registries for use""" self._scheduled_job_registries = [] if not queue_names: queue_names = self._acquired_locks for name in queue_names: self._scheduled_job_registries.append( ScheduledJobRegistry(name, connection=self.connection, serializer=self.serializer) ) @classmethod def get_locking_key(cls, name: str): """Returns scheduler key for a given queue name""" return SCHEDULER_LOCKING_KEY_TEMPLATE % name def enqueue_scheduled_jobs(self): """Enqueue jobs whose timestamp is in the past""" self._status = self.Status.WORKING if not self._scheduled_job_registries and self._acquired_locks: self.prepare_registries() for registry in self._scheduled_job_registries: timestamp = current_timestamp() # TODO: try to use Lua script to make get_jobs_to_schedule() # and remove_jobs() atomic job_ids = registry.get_jobs_to_schedule(timestamp) 
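            # get_jobs_to_schedule() returns IDs of jobs scored at or before the
            # current timestamp; the batch below is fetched, enqueued and removed
            # from the registry inside a single pipeline.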
            if not job_ids:
                continue

            queue = Queue(registry.name, connection=self.connection, serializer=self.serializer)

            with self.connection.pipeline() as pipeline:
                jobs = Job.fetch_many(job_ids, connection=self.connection, serializer=self.serializer)
                for job in jobs:
                    if job is not None:
                        queue._enqueue_job(job, pipeline=pipeline, at_front=bool(job.enqueue_at_front))
                        registry.remove(job, pipeline=pipeline)
                pipeline.execute()
        self._status = self.Status.STARTED

    def _install_signal_handlers(self):
        """Installs signal handlers for handling SIGINT and SIGTERM
        gracefully.
        """
        signal.signal(signal.SIGINT, self.request_stop)
        signal.signal(signal.SIGTERM, self.request_stop)

    def request_stop(self, signum=None, frame=None):
        """Toggle self._stop_requested that's checked on every loop"""
        self._stop_requested = True

    def heartbeat(self):
        """Updates the TTL on scheduler keys and the locks"""
        self.log.debug('Scheduler sending heartbeat to %s', ', '.join(self.acquired_locks))
        if len(self._acquired_locks) > 1:
            with self.connection.pipeline() as pipeline:
                for name in self._acquired_locks:
                    key = self.get_locking_key(name)
                    pipeline.expire(key, self.interval + 60)
                pipeline.execute()
        elif self._acquired_locks:
            key = self.get_locking_key(next(iter(self._acquired_locks)))
            self.connection.expire(key, self.interval + 60)

    def stop(self):
        self.log.info('Scheduler stopping, releasing locks for %s...', ', '.join(self._acquired_locks))
        self.release_locks()
        self._status = self.Status.STOPPED

    def release_locks(self):
        """Release acquired locks"""
        keys = [self.get_locking_key(name) for name in self._acquired_locks]
        self.connection.delete(*keys)
        self._acquired_locks = set()

    def start(self):
        self._status = self.Status.STARTED
        # Redis instance can't be pickled across processes so we need to
        # clean this up before forking
        self._connection = None
        self._process = Process(target=run, args=(self,), name='Scheduler')
        self._process.start()
        return self._process

    def work(self):
        self._install_signal_handlers()
        while True:
            if self._stop_requested:
                self.stop()
                break

            if self.should_reacquire_locks:
                self.acquire_locks()

            self.enqueue_scheduled_jobs()
            self.heartbeat()
            time.sleep(self.interval)


def run(scheduler):
    scheduler.log.info('Scheduler for %s started with PID %s', ', '.join(scheduler._queue_names), os.getpid())
    try:
        scheduler.work()
    except:  # noqa
        scheduler.log.error('Scheduler [PID %s] raised an exception.\n%s', os.getpid(), traceback.format_exc())
        raise
    scheduler.log.info('Scheduler with PID %d has stopped', os.getpid())
rq-1.16.2/rq/serializers.py0000644000000000000000000000300413615410400012470 0ustar00import json
import pickle
from functools import partial
from typing import Optional, Type, Union

from .utils import import_attribute


class DefaultSerializer:
    dumps = partial(pickle.dumps, protocol=pickle.HIGHEST_PROTOCOL)
    loads = pickle.loads


class JSONSerializer:
    @staticmethod
    def dumps(*args, **kwargs):
        return json.dumps(*args, **kwargs).encode('utf-8')

    @staticmethod
    def loads(s, *args, **kwargs):
        return json.loads(s.decode('utf-8'), *args, **kwargs)


def resolve_serializer(serializer: Optional[Union[Type[DefaultSerializer], str]] = None) -> Type[DefaultSerializer]:
    """Checks the user-defined serializer for ('dumps', 'loads') methods and
    returns it; when no serializer is given, it falls back to the default
    pickle serializer. The returned serializer object implements ('dumps',
    'loads') methods. Also accepts a string path to a serializer, which will
    be imported and used as the serializer.

    Args:
        serializer (Callable): The serializer to resolve.
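
    Example (illustrative doctest, using the serializers defined in this module):
        >>> resolve_serializer() is DefaultSerializer
        True
        >>> resolve_serializer('rq.serializers.JSONSerializer') is JSONSerializer
        True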
    Returns:
        serializer (Callable): An object that implements the SerializerProtocol
    """
    if not serializer:
        return DefaultSerializer

    if isinstance(serializer, str):
        serializer = import_attribute(serializer)

    default_serializer_methods = ('dumps', 'loads')

    for instance_method in default_serializer_methods:
        if not hasattr(serializer, instance_method):
            raise NotImplementedError('Serializer should have (dumps, loads) methods.')

    return serializer
rq-1.16.2/rq/suspension.py0000644000000000000000000000247113615410400012351 0ustar00from typing import TYPE_CHECKING, Optional

if TYPE_CHECKING:
    from redis import Redis

    from rq.worker import Worker


WORKERS_SUSPENDED = 'rq:suspended'


def is_suspended(connection: 'Redis', worker: Optional['Worker'] = None):
    """Checks whether a Worker is suspended on a given connection
    PS: pipeline returns a list of responses
    Ref: https://github.com/andymccurdy/redis-py#pipelines

    Args:
        connection (Redis): The Redis Connection
        worker (Optional[Worker], optional): The Worker. Defaults to None.
    """
    with connection.pipeline() as pipeline:
        if worker is not None:
            worker.heartbeat(pipeline=pipeline)
        pipeline.exists(WORKERS_SUSPENDED)
        return pipeline.execute()[-1]


def suspend(connection: 'Redis', ttl: Optional[int] = None):
    """
    Suspends all workers on a given connection. A TTL of 0 invalidates the
    suspension right away.

    Args:
        connection (Redis): The Redis connection to use.
        ttl (Optional[int], optional): Time to live in seconds. Defaults to `None`
    """
    connection.set(WORKERS_SUSPENDED, 1)
    if ttl is not None:
        connection.expire(WORKERS_SUSPENDED, ttl)


def resume(connection: 'Redis'):
    """
    Resumes job processing for workers on a given connection.

    Args:
        connection (Redis): The Redis connection to use.
    """
    return connection.delete(WORKERS_SUSPENDED)
rq-1.16.2/rq/timeouts.py0000644000000000000000000001000513615410400012004 0ustar00import ctypes
import signal
import threading


class BaseTimeoutException(Exception):
    """Base exception for timeouts."""

    pass


class JobTimeoutException(BaseTimeoutException):
    """Raised when a job takes longer to complete than the allowed maximum
    timeout value.
    """

    pass


class HorseMonitorTimeoutException(BaseTimeoutException):
    """Raised when waiting for a horse exiting takes longer than the maximum
    timeout value.
    """

    pass


class BaseDeathPenalty:
    """Base class to setup job timeouts."""

    def __init__(self, timeout, exception=BaseTimeoutException, **kwargs):
        self._timeout = timeout
        self._exception = exception

    def __enter__(self):
        self.setup_death_penalty()

    def __exit__(self, type, value, traceback):
        # Always cancel immediately, since we're done
        try:
            self.cancel_death_penalty()
        except BaseTimeoutException:
            # Weird case: we're done with the with body, but now the alarm is
            # fired. We may safely ignore this situation and consider the
            # body done.
            pass

        # __exit__ may return True to suppress further exception handling. We
        # don't want to suppress any exceptions here, since all errors should
        # just pass through, BaseTimeoutException being handled normally to the
        # invoking context.
        return False

    def setup_death_penalty(self):
        raise NotImplementedError()

    def cancel_death_penalty(self):
        raise NotImplementedError()


class UnixSignalDeathPenalty(BaseDeathPenalty):
    def handle_death_penalty(self, signum, frame):
        raise self._exception('Task exceeded maximum timeout value ({0} seconds)'.format(self._timeout))

    def setup_death_penalty(self):
        """Sets up an alarm signal and a signal handler that raises
        an exception after the timeout amount (expressed in seconds).
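
        Note: this relies on SIGALRM, which is Unix-only, and Python signal
        handlers can only be installed from the main thread; the thread-timer
        based TimerDeathPenalty below works without either constraint.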
""" signal.signal(signal.SIGALRM, self.handle_death_penalty) signal.alarm(self._timeout) def cancel_death_penalty(self): """Removes the death penalty alarm and puts back the system into default signal handling. """ signal.alarm(0) signal.signal(signal.SIGALRM, signal.SIG_DFL) class TimerDeathPenalty(BaseDeathPenalty): def __init__(self, timeout, exception=JobTimeoutException, **kwargs): super().__init__(timeout, exception, **kwargs) self._target_thread_id = threading.current_thread().ident self._timer = None # Monkey-patch exception with the message ahead of time # since PyThreadState_SetAsyncExc can only take a class def init_with_message(self, *args, **kwargs): # noqa super(exception, self).__init__("Task exceeded maximum timeout value ({0} seconds)".format(timeout)) self._exception.__init__ = init_with_message def new_timer(self): """Returns a new timer since timers can only be used once.""" return threading.Timer(self._timeout, self.handle_death_penalty) def handle_death_penalty(self): """Raises an asynchronous exception in another thread. Reference http://docs.python.org/c-api/init.html#PyThreadState_SetAsyncExc for more info. """ ret = ctypes.pythonapi.PyThreadState_SetAsyncExc( ctypes.c_long(self._target_thread_id), ctypes.py_object(self._exception) ) if ret == 0: raise ValueError("Invalid thread ID {}".format(self._target_thread_id)) elif ret > 1: ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(self._target_thread_id), 0) raise SystemError("PyThreadState_SetAsyncExc failed") def setup_death_penalty(self): """Starts the timer.""" if self._timeout <= 0: return self._timer = self.new_timer() self._timer.start() def cancel_death_penalty(self): """Cancels the timer.""" if self._timeout <= 0: return self._timer.cancel() self._timer = None rq-1.16.2/rq/types.py0000644000000000000000000000122113615410400011277 0ustar00from typing import TYPE_CHECKING, Any, Callable, List, TypeVar, Union if TYPE_CHECKING: from .job import Dependency, Job FunctionReferenceType = TypeVar('FunctionReferenceType', str, Callable[..., Any]) """Custom type definition for what a `func` is in the context of a job. A `func` can be a string with the function import path (eg.: `myfile.mymodule.myfunc`) or a direct callable (function/method). """ JobDependencyType = TypeVar('JobDependencyType', 'Dependency', 'Job', str, List[Union['Dependency', 'Job']]) """Custom type definition for a job dependencies. A simple helper definition for the `depends_on` parameter when creating a job. """ rq-1.16.2/rq/utils.py0000644000000000000000000002545613615410400011313 0ustar00""" Miscellaneous helper functions. The formatter for ANSI colored console output is heavily based on Pygments terminal colorizing code, originally by Georg Brandl. """ import calendar import datetime import datetime as dt import importlib import logging import numbers from collections.abc import Iterable from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple, Union if TYPE_CHECKING: from redis import Redis from .queue import Queue from redis.exceptions import ResponseError from .exceptions import TimeoutFormatError logger = logging.getLogger(__name__) def compact(lst: List[Any]) -> List[Any]: """Excludes `None` values from a list-like object. Args: lst (list): A list (or list-like) oject Returns: object (list): The list without None values """ return [item for item in lst if item is not None] def as_text(v: Union[bytes, str]) -> str: """Converts a bytes value to a string using `utf-8`. 
    Args:
        v (Union[bytes, str]): The value (bytes or string)

    Raises:
        ValueError: If the value is not bytes or string

    Returns:
        value (str): The decoded string
    """
    if isinstance(v, bytes):
        return v.decode('utf-8')
    elif isinstance(v, str):
        return v
    else:
        raise ValueError('Unknown type %r' % type(v))


def decode_redis_hash(h) -> Dict[str, Any]:
    """Decodes the Redis hash, ensuring that keys are strings
    Most importantly, decodes bytes strings, ensuring the dict has str keys.

    Args:
        h (Dict[Any, Any]): The Redis hash

    Returns:
        Dict[str, Any]: The decoded Redis data (Dictionary)
    """
    return dict((as_text(k), h[k]) for k in h)


def import_attribute(name: str) -> Callable[..., Any]:
    """Returns an attribute from a dotted path name. Example: `path.to.func`.

    When the attribute we look for is a staticmethod, the module name in its
    dotted path is not the second-to-last component.

    E.g.: package_a.package_b.module_a.ClassA.my_static_method

    Thus we strip components from the end of the name until we can import it.

    Args:
        name (str): The name (reference) to the path.

    Raises:
        ValueError: If no module is found or invalid attribute name.

    Returns:
        Any: An attribute (normally a Callable)
    """
    name_bits = name.split('.')
    module_name_bits, attribute_bits = name_bits[:-1], [name_bits[-1]]
    module = None
    while len(module_name_bits):
        try:
            module_name = '.'.join(module_name_bits)
            module = importlib.import_module(module_name)
            break
        except ImportError:
            attribute_bits.insert(0, module_name_bits.pop())

    if module is None:
        # maybe it's a builtin
        try:
            return __builtins__[name]
        except KeyError:
            raise ValueError('Invalid attribute name: %s' % name)

    attribute_name = '.'.join(attribute_bits)
    if hasattr(module, attribute_name):
        return getattr(module, attribute_name)
    # staticmethods
    attribute_name = attribute_bits.pop()
    attribute_owner_name = '.'.join(attribute_bits)
    try:
        attribute_owner = getattr(module, attribute_owner_name)
    except:  # noqa
        raise ValueError('Invalid attribute name: %s' % attribute_name)

    if not hasattr(attribute_owner, attribute_name):
        raise ValueError('Invalid attribute name: %s' % name)
    return getattr(attribute_owner, attribute_name)


def utcnow():
    return datetime.datetime.utcnow()


def now():
    """Return now in UTC"""
    return datetime.datetime.now(datetime.timezone.utc)


_TIMESTAMP_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'


def utcformat(dt: dt.datetime) -> str:
    return dt.strftime(as_text(_TIMESTAMP_FORMAT))


def utcparse(string: str) -> dt.datetime:
    try:
        return datetime.datetime.strptime(string, _TIMESTAMP_FORMAT)
    except ValueError:
        # This catches any jobs that remain with the old datetime format
        return datetime.datetime.strptime(string, '%Y-%m-%dT%H:%M:%SZ')


def first(iterable: Iterable, default=None, key=None):
    """Return first element of `iterable` that evaluates true, else return None
    (or an optional default value).

    >>> first([0, False, None, [], (), 42])
    42
    >>> first([0, False, None, [], ()]) is None
    True
    >>> first([0, False, None, [], ()], default='ohai')
    'ohai'
    >>> import re
    >>> m = first(re.match(regex, 'abc') for regex in ['b.*', 'a(.*)'])
    >>> m.group(1)
    'bc'

    The optional `key` argument specifies a one-argument predicate function
    like that used for `filter()`. The `key` argument, if supplied, must be
    in keyword form. For example:
    >>> first([1, 1, 3, 4, 5], key=lambda x: x % 2 == 0)
    4

    Args:
        iterable (Iterable): The iterable to search.
        default (Any, optional): The value returned when no element evaluates true. Defaults to None.
        key (Callable, optional): A one-argument predicate used instead of truthiness. Defaults to None.
    Returns:
        Any: The first matching element, or `default` when nothing matches.
    """
    if key is None:
        for el in iterable:
            if el:
                return el
    else:
        for el in iterable:
            if key(el):
                return el

    return default


def is_nonstring_iterable(obj: Any) -> bool:
    """Returns whether the obj is an iterable, but not a string

    Args:
        obj (Any): The object to check.

    Returns:
        bool: True if `obj` is an iterable other than a string.
    """
    return isinstance(obj, Iterable) and not isinstance(obj, str)


def ensure_list(obj: Any) -> List:
    """When passed an iterable of objects, does nothing, otherwise, it returns
    a list with just that object in it.

    Args:
        obj (Any): The object to wrap.

    Returns:
        List: The object itself if it is a non-string iterable, else `[obj]`.
    """
    return obj if is_nonstring_iterable(obj) else [obj]


def current_timestamp() -> int:
    """Returns current UTC timestamp

    Returns:
        int: The current UTC timestamp (seconds since the epoch).
    """
    return calendar.timegm(datetime.datetime.utcnow().utctimetuple())


def backend_class(holder, default_name, override=None):
    """Get a backend class using its default attribute name or an override

    Args:
        holder (Any): The object holding the default backend class.
        default_name (str): The attribute name of the default backend class.
        override (Optional[Union[type, str]], optional): A class, or dotted path to one,
            used instead of the default. Defaults to None.

    Returns:
        type: The backend class to use.
    """
    if override is None:
        return getattr(holder, default_name)
    elif isinstance(override, str):
        return import_attribute(override)
    else:
        return override


def str_to_date(date_str: Optional[str]) -> Union[dt.datetime, Any]:
    if not date_str:
        return
    else:
        return utcparse(date_str.decode())


def parse_timeout(timeout: Union[int, float, str]) -> int:
    """Converts any supported timeout format (integer, float, or a string such
    as '1h', '23m' or '30s') to an integer number of seconds"""
    if not isinstance(timeout, numbers.Integral) and timeout is not None:
        try:
            timeout = int(timeout)
        except ValueError:
            digit, unit = timeout[:-1], (timeout[-1:]).lower()
            unit_second = {'d': 86400, 'h': 3600, 'm': 60, 's': 1}
            try:
                timeout = int(digit) * unit_second[unit]
            except (ValueError, KeyError):
                raise TimeoutFormatError(
                    'Timeout must be an integer or a string representing an integer, or '
                    'a string with format: digits + unit, unit can be "d", "h", "m", "s", '
                    'such as "1h", "23m".'
                )

    return timeout


def get_version(connection: 'Redis') -> Tuple[int, int, int]:
    """
    Returns tuple of Redis server version.
    This function also correctly handles 4 digit redis server versions.

    Args:
        connection (Redis): The Redis connection.

    Returns:
        version (Tuple[int, int, int]): A tuple representing the semantic versioning format (eg. (5, 0, 9))
    """
    try:
        # Getting the connection info for each job tanks performance, we can cache it on the connection object
        if not getattr(connection, "__rq_redis_server_version", None):
            setattr(
                connection,
                "__rq_redis_server_version",
                tuple(int(i) for i in str(connection.info("server")["redis_version"]).split('.')[:3]),
            )
        return getattr(connection, "__rq_redis_server_version")
    except ResponseError:  # fakeredis doesn't implement Redis' INFO command
        return (5, 0, 9)


def ceildiv(a, b):
    """Ceiling division.
    Returns the ceiling of the quotient of a division operation

    Args:
        a (int): The dividend
        b (int): The divisor

    Returns:
        int: The ceiling of a / b
    """
    return -(-a // b)


def split_list(a_list: List[Any], segment_size: int):
    """Splits a list into multiple smaller lists having size `segment_size`

    Args:
        a_list (List[Any]): A list to split
        segment_size (int): The segment size to split into

    Yields:
        list: The split sublists
    """
    for i in range(0, len(a_list), segment_size):
        yield a_list[i : i + segment_size]


def truncate_long_string(data: str, max_length: Optional[int] = None) -> str:
    """Truncates a string longer than max_length, appending '...' when truncation occurs

    Args:
        data (str): The data to truncate
        max_length (Optional[int], optional): The max length. Defaults to None.

    Returns:
        truncated (str): The truncated string
    """
    if max_length is None:
        return data
    return (data[:max_length] + '...') if len(data) > max_length else data


def get_call_string(
    func_name: Optional[str], args: Any, kwargs: Dict[Any, Any], max_length: Optional[int] = None
) -> Optional[str]:
    """
    Returns a string representation of the call, formatted as a regular
    Python function invocation statement. If max_length is not None, truncates
    arguments whose representation is longer than max_length.

    Args:
        func_name (str): The function name
        args (Any): The function arguments
        kwargs (Dict[Any, Any]): The function kwargs
        max_length (int, optional): The max length. Defaults to None.

    Returns:
        str: A String representation of the function call.
    """
    if func_name is None:
        return None

    arg_list = [as_text(truncate_long_string(repr(arg), max_length)) for arg in args]

    list_kwargs = ['{0}={1}'.format(k, as_text(truncate_long_string(repr(v), max_length))) for k, v in kwargs.items()]
    arg_list += sorted(list_kwargs)
    args = ', '.join(arg_list)

    return '{0}({1})'.format(func_name, args)


def parse_names(queues_or_names: List[Union[str, 'Queue']]) -> List[str]:
    """Given a list of strings or queues, returns queue names"""
    from .queue import Queue

    names = []
    for queue_or_name in queues_or_names:
        if isinstance(queue_or_name, Queue):
            names.append(queue_or_name.name)
        else:
            names.append(str(queue_or_name))
    return names
rq-1.16.2/rq/version.py0000644000000000000000000000002313615410400011621 0ustar00VERSION = '1.16.2'
rq-1.16.2/rq/worker.py0000644000000000000000000017771613615410400011463 0ustar00import contextlib
import errno
import logging
import math
import os
import random
import signal
import socket
import sys
import time
import traceback
import warnings
from datetime import datetime, timedelta
from enum import Enum
from random import shuffle
from types import FrameType
from typing import TYPE_CHECKING, Callable, List, Optional, Tuple, Type, Union
from uuid import uuid4

if TYPE_CHECKING:
    try:
        from resource import struct_rusage
    except ImportError:
        pass

    from redis import Redis
    from redis.client import Pipeline

try:
    from signal import SIGKILL
except ImportError:
    from signal import SIGTERM as SIGKILL

from contextlib import suppress

import redis.exceptions

from . 
import worker_registration from .command import PUBSUB_CHANNEL_TEMPLATE, handle_command, parse_payload from .connections import get_current_connection, pop_connection, push_connection from .defaults import ( DEFAULT_JOB_MONITORING_INTERVAL, DEFAULT_LOGGING_DATE_FORMAT, DEFAULT_LOGGING_FORMAT, DEFAULT_MAINTENANCE_TASK_INTERVAL, DEFAULT_RESULT_TTL, DEFAULT_WORKER_TTL, ) from .exceptions import DequeueTimeout, DeserializationError, ShutDownImminentException from .job import Job, JobStatus from .logutils import blue, green, setup_loghandlers, yellow from .queue import Queue from .registry import StartedJobRegistry, clean_registries from .scheduler import RQScheduler from .serializers import resolve_serializer from .suspension import is_suspended from .timeouts import HorseMonitorTimeoutException, JobTimeoutException, UnixSignalDeathPenalty from .utils import as_text, backend_class, compact, ensure_list, get_version, utcformat, utcnow, utcparse from .version import VERSION try: from setproctitle import setproctitle as setprocname except ImportError: def setprocname(*args, **kwargs): # noqa pass logger = logging.getLogger("rq.worker") class StopRequested(Exception): pass _signames = dict( (getattr(signal, signame), signame) for signame in dir(signal) if signame.startswith('SIG') and '_' not in signame ) def signal_name(signum): try: if sys.version_info[:2] >= (3, 5): return signal.Signals(signum).name else: return _signames[signum] except KeyError: return 'SIG_UNKNOWN' except ValueError: return 'SIG_UNKNOWN' class DequeueStrategy(str, Enum): DEFAULT = "default" ROUND_ROBIN = "round_robin" RANDOM = "random" class WorkerStatus(str, Enum): STARTED = 'started' SUSPENDED = 'suspended' BUSY = 'busy' IDLE = 'idle' class BaseWorker: redis_worker_namespace_prefix = 'rq:worker:' redis_workers_keys = worker_registration.REDIS_WORKER_KEYS death_penalty_class = UnixSignalDeathPenalty queue_class = Queue job_class = Job # `log_result_lifespan` controls whether "Result is kept for XXX seconds" # messages are logged after every job, by default they are. log_result_lifespan = True # `log_job_description` is used to toggle logging an entire jobs description. log_job_description = True # factor to increase connection_wait_time in case of continuous connection failures. exponential_backoff_factor = 2.0 # Max Wait time (in seconds) after which exponential_backoff_factor won't be applicable. 
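    # For example, with the default factor of 2.0 a worker that keeps failing
    # to connect waits 1s, 2s, 4s, 8s, ... between attempts, capped at
    # max_connection_wait_time (see dequeue_job_and_maintain_ttl).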
max_connection_wait_time = 60.0 def __init__( self, queues, name: Optional[str] = None, default_result_ttl=DEFAULT_RESULT_TTL, connection: Optional['Redis'] = None, exc_handler=None, exception_handlers=None, default_worker_ttl=DEFAULT_WORKER_TTL, maintenance_interval: int = DEFAULT_MAINTENANCE_TASK_INTERVAL, job_class: Optional[Type['Job']] = None, queue_class: Optional[Type['Queue']] = None, log_job_description: bool = True, job_monitoring_interval=DEFAULT_JOB_MONITORING_INTERVAL, disable_default_exception_handler: bool = False, prepare_for_work: bool = True, serializer=None, work_horse_killed_handler: Optional[Callable[[Job, int, int, 'struct_rusage'], None]] = None, ): # noqa self.default_result_ttl = default_result_ttl self.worker_ttl = default_worker_ttl self.job_monitoring_interval = job_monitoring_interval self.maintenance_interval = maintenance_interval connection = self._set_connection(connection) self.connection = connection self.redis_server_version = None self.job_class = backend_class(self, 'job_class', override=job_class) self.queue_class = backend_class(self, 'queue_class', override=queue_class) self.version = VERSION self.python_version = sys.version self.serializer = resolve_serializer(serializer) queues = [ ( self.queue_class( name=q, connection=connection, job_class=self.job_class, serializer=self.serializer, death_penalty_class=self.death_penalty_class, ) if isinstance(q, str) else q ) for q in ensure_list(queues) ] self.name: str = name or uuid4().hex self.queues = queues self.validate_queues() self._ordered_queues = self.queues[:] self._exc_handlers: List[Callable] = [] self._work_horse_killed_handler = work_horse_killed_handler self._shutdown_requested_date: Optional[datetime] = None self._state: str = 'starting' self._is_horse: bool = False self._horse_pid: int = 0 self._stop_requested: bool = False self._stopped_job_id = None self.log = logger self.log_job_description = log_job_description self.last_cleaned_at = None self.successful_job_count: int = 0 self.failed_job_count: int = 0 self.total_working_time: int = 0 self.current_job_working_time: float = 0 self.birth_date = None self.scheduler: Optional[RQScheduler] = None self.pubsub = None self.pubsub_thread = None self._dequeue_strategy: DequeueStrategy = DequeueStrategy.DEFAULT self.disable_default_exception_handler = disable_default_exception_handler if prepare_for_work: self.hostname: Optional[str] = socket.gethostname() self.pid: Optional[int] = os.getpid() try: connection.client_setname(self.name) except redis.exceptions.ResponseError: warnings.warn('CLIENT SETNAME command not supported, setting ip_address to unknown', Warning) self.ip_address = 'unknown' else: client_adresses = [client['addr'] for client in connection.client_list() if client['name'] == self.name] if len(client_adresses) > 0: self.ip_address = client_adresses[0] else: warnings.warn('CLIENT LIST command not supported, setting ip_address to unknown', Warning) self.ip_address = 'unknown' else: self.hostname = None self.pid = None self.ip_address = 'unknown' if isinstance(exception_handlers, (list, tuple)): for handler in exception_handlers: self.push_exc_handler(handler) elif exception_handlers is not None: self.push_exc_handler(exception_handlers) @classmethod def all( cls, connection: Optional['Redis'] = None, job_class: Optional[Type['Job']] = None, queue_class: Optional[Type['Queue']] = None, queue: Optional['Queue'] = None, serializer=None, ) -> List['Worker']: """Returns an iterable of all Workers. 
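
        Example (illustrative):
            >>> from redis import Redis
            >>> workers = Worker.all(connection=Redis())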
        Returns:
            workers (List[Worker]): A list of workers
        """
        if queue:
            connection = queue.connection
        elif connection is None:
            connection = get_current_connection()

        worker_keys = worker_registration.get_keys(queue=queue, connection=connection)
        workers = [
            cls.find_by_key(
                key, connection=connection, job_class=job_class, queue_class=queue_class, serializer=serializer
            )
            for key in worker_keys
        ]
        return compact(workers)

    @classmethod
    def all_keys(cls, connection: Optional['Redis'] = None, queue: Optional['Queue'] = None) -> List[str]:
        """List of worker keys

        Args:
            connection (Optional[Redis], optional): A Redis Connection. Defaults to None.
            queue (Optional[Queue], optional): The Queue. Defaults to None.

        Returns:
            list_keys (List[str]): A list of worker keys
        """
        return [as_text(key) for key in worker_registration.get_keys(queue=queue, connection=connection)]

    @classmethod
    def count(cls, connection: Optional['Redis'] = None, queue: Optional['Queue'] = None) -> int:
        """Returns the number of workers by queue or connection.

        Args:
            connection (Optional[Redis], optional): Redis connection. Defaults to None.
            queue (Optional[Queue], optional): The queue to use. Defaults to None.

        Returns:
            length (int): The queue length.
        """
        return len(worker_registration.get_keys(queue=queue, connection=connection))

    @property
    def should_run_maintenance_tasks(self):
        """Maintenance tasks should run on first startup or every `maintenance_interval` seconds
        (10 minutes by default)."""
        if self.last_cleaned_at is None:
            return True
        if (utcnow() - self.last_cleaned_at) > timedelta(seconds=self.maintenance_interval):
            return True
        return False

    def _set_connection(self, connection: Optional['Redis']) -> 'Redis':
        """Configures the Redis connection to have a socket timeout.
        This should time out the connection in case any specific command hangs at any given time (eg. BLPOP).
        If the connection provided already has a `socket_timeout` defined, it is left as-is.

        Args:
            connection (Optional[Redis]): The Redis Connection.
        """
        if connection is None:
            connection = get_current_connection()
        current_socket_timeout = connection.connection_pool.connection_kwargs.get("socket_timeout")
        if current_socket_timeout is None:
            timeout_config = {"socket_timeout": self.connection_timeout}
            connection.connection_pool.connection_kwargs.update(timeout_config)
        return connection

    @property
    def dequeue_timeout(self) -> int:
        return max(1, self.worker_ttl - 15)

    def clean_registries(self):
        """Runs maintenance jobs on each Queue's registries."""
        for queue in self.queues:
            # If there are multiple workers running, we only want 1 worker
            # to run clean_registries().
            if queue.acquire_maintenance_lock():
                self.log.info('Cleaning registries for queue: %s', queue.name)
                clean_registries(queue)
                worker_registration.clean_worker_registry(queue)
                queue.intermediate_queue.cleanup(self, queue)
                queue.release_maintenance_lock()
        self.last_cleaned_at = utcnow()

    def get_redis_server_version(self):
        """Return Redis server version of connection"""
        if not self.redis_server_version:
            self.redis_server_version = get_version(self.connection)
        return self.redis_server_version

    def validate_queues(self):
        """Sanity check for the given queues."""
        for queue in self.queues:
            if not isinstance(queue, self.queue_class):
                raise TypeError('{0} is not of type {1} or string types'.format(queue, self.queue_class))

    def queue_names(self) -> List[str]:
        """Returns the queue names of this worker's queues.

        Returns:
            List[str]: The queue names.
        """
        return [queue.name for queue in self.queues]

    def queue_keys(self) -> List[str]:
        """Returns the Redis keys representing this worker's queues.
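
        Example (illustrative; actual names depend on the configured queues):
            >>> worker.queue_keys()
            ['rq:queue:high', 'rq:queue:default']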
Returns: List[str]: The list of strings with queues keys """ return [queue.key for queue in self.queues] @property def key(self): """Returns the worker's Redis hash key.""" return self.redis_worker_namespace_prefix + self.name @property def pubsub_channel_name(self): """Returns the worker's Redis hash key.""" return PUBSUB_CHANNEL_TEMPLATE % self.name @property def supports_redis_streams(self) -> bool: """Only supported by Redis server >= 5.0 is required.""" return self.get_redis_server_version() >= (5, 0, 0) def _install_signal_handlers(self): """Installs signal handlers for handling SIGINT and SIGTERM gracefully.""" signal.signal(signal.SIGINT, self.request_stop) signal.signal(signal.SIGTERM, self.request_stop) def work( self, burst: bool = False, logging_level: str = "INFO", date_format: str = DEFAULT_LOGGING_DATE_FORMAT, log_format: str = DEFAULT_LOGGING_FORMAT, max_jobs: Optional[int] = None, max_idle_time: Optional[int] = None, with_scheduler: bool = False, dequeue_strategy: DequeueStrategy = DequeueStrategy.DEFAULT, ) -> bool: """Starts the work loop. Pops and performs all jobs on the current list of queues. When all queues are empty, block and wait for new jobs to arrive on any of the queues, unless `burst` mode is enabled. If `max_idle_time` is provided, worker will die when it's idle for more than the provided value. The return value indicates whether any jobs were processed. Args: burst (bool, optional): Whether to work on burst mode. Defaults to False. logging_level (str, optional): Logging level to use. Defaults to "INFO". date_format (str, optional): Date Format. Defaults to DEFAULT_LOGGING_DATE_FORMAT. log_format (str, optional): Log Format. Defaults to DEFAULT_LOGGING_FORMAT. max_jobs (Optional[int], optional): Max number of jobs. Defaults to None. max_idle_time (Optional[int], optional): Max seconds for worker to be idle. Defaults to None. with_scheduler (bool, optional): Whether to run the scheduler in a separate process. Defaults to False. dequeue_strategy (DequeueStrategy, optional): Which strategy to use to dequeue jobs. Defaults to DequeueStrategy.DEFAULT Returns: worked (bool): Will return True if any job was processed, False otherwise. 
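
        Example (illustrative sketch, assuming an existing Redis `connection`):
            >>> worker = Worker(['default'], connection=connection)
            >>> worker.work(burst=True)  # drain the queues once, then return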
""" self.bootstrap(logging_level, date_format, log_format) self._dequeue_strategy = dequeue_strategy completed_jobs = 0 if with_scheduler: self._start_scheduler(burst, logging_level, date_format, log_format) self._install_signal_handlers() try: while True: try: self.check_for_suspension(burst) if self.should_run_maintenance_tasks: self.run_maintenance_tasks() if self._stop_requested: self.log.info('Worker %s: stopping on request', self.key) break timeout = None if burst else self.dequeue_timeout result = self.dequeue_job_and_maintain_ttl(timeout, max_idle_time) if result is None: if burst: self.log.info('Worker %s: done, quitting', self.key) elif max_idle_time is not None: self.log.info('Worker %s: idle for %d seconds, quitting', self.key, max_idle_time) break job, queue = result self.execute_job(job, queue) self.heartbeat() completed_jobs += 1 if max_jobs is not None: if completed_jobs >= max_jobs: self.log.info('Worker %s: finished executing %d jobs, quitting', self.key, completed_jobs) break except redis.exceptions.TimeoutError: self.log.error('Worker %s: Redis connection timeout, quitting...', self.key) break except StopRequested: break except SystemExit: # Cold shutdown detected raise except: # noqa self.log.error('Worker %s: found an unhandled exception, quitting...', self.key, exc_info=True) break finally: self.teardown() return bool(completed_jobs) def handle_job_failure(self, job: 'Job', queue: 'Queue', started_job_registry=None, exc_string=''): """ Handles the failure or an executing job by: 1. Setting the job status to failed 2. Removing the job from StartedJobRegistry 3. Setting the workers current job to None 4. Add the job to FailedJobRegistry `save_exc_to_job` should only be used for testing purposes """ self.log.debug('Handling failed execution of job %s', job.id) with self.connection.pipeline() as pipeline: if started_job_registry is None: started_job_registry = StartedJobRegistry( job.origin, self.connection, job_class=self.job_class, serializer=self.serializer ) # check whether a job was stopped intentionally and set the job # status appropriately if it was this job. job_is_stopped = self._stopped_job_id == job.id retry = job.retries_left and job.retries_left > 0 and not job_is_stopped if job_is_stopped: job.set_status(JobStatus.STOPPED, pipeline=pipeline) self._stopped_job_id = None else: # Requeue/reschedule if retry is configured, otherwise if not retry: job.set_status(JobStatus.FAILED, pipeline=pipeline) started_job_registry.remove(job, pipeline=pipeline) if not self.disable_default_exception_handler and not retry: job._handle_failure(exc_string, pipeline=pipeline) with suppress(redis.exceptions.ConnectionError): pipeline.execute() self.set_current_job_id(None, pipeline=pipeline) self.increment_failed_job_count(pipeline) if job.started_at and job.ended_at: self.increment_total_working_time(job.ended_at - job.started_at, pipeline) if retry: job.retry(queue, pipeline) enqueue_dependents = False else: enqueue_dependents = True try: pipeline.execute() if enqueue_dependents: queue.enqueue_dependents(job) except Exception: # Ensure that custom exception handlers are called # even if Redis is down pass def _start_scheduler( self, burst: bool = False, logging_level: str = "INFO", date_format: str = DEFAULT_LOGGING_DATE_FORMAT, log_format: str = DEFAULT_LOGGING_FORMAT, ): """Starts the scheduler process. This is specifically designed to be run by the worker when running the `work()` method. Instanciates the RQScheduler and tries to acquire a lock. 
        If the lock is acquired, the scheduler is started. If the worker is in
        burst mode, it just enqueues scheduled jobs once and quits; otherwise,
        it starts the scheduler in a separate process.

        Args:
            burst (bool, optional): Whether to work on burst mode. Defaults to False.
            logging_level (str, optional): Logging level to use. Defaults to "INFO".
            date_format (str, optional): Date Format. Defaults to DEFAULT_LOGGING_DATE_FORMAT.
            log_format (str, optional): Log Format. Defaults to DEFAULT_LOGGING_FORMAT.
        """
        self.scheduler = RQScheduler(
            self.queues,
            connection=self.connection,
            logging_level=logging_level,
            date_format=date_format,
            log_format=log_format,
            serializer=self.serializer,
        )
        self.scheduler.acquire_locks()
        if self.scheduler.acquired_locks:
            if burst:
                self.scheduler.enqueue_scheduled_jobs()
                self.scheduler.release_locks()
            else:
                self.scheduler.start()

    def bootstrap(
        self,
        logging_level: str = "INFO",
        date_format: str = DEFAULT_LOGGING_DATE_FORMAT,
        log_format: str = DEFAULT_LOGGING_FORMAT,
    ):
        """Bootstraps the worker.
        Runs the basic tasks that should run when the worker actually starts working.
        Used so that new workers can focus on the work loop implementation rather
        than the full bootstrapping process.

        Args:
            logging_level (str, optional): Logging level to use. Defaults to "INFO".
            date_format (str, optional): Date Format. Defaults to DEFAULT_LOGGING_DATE_FORMAT.
            log_format (str, optional): Log Format. Defaults to DEFAULT_LOGGING_FORMAT.
        """
        setup_loghandlers(logging_level, date_format, log_format)
        self.register_birth()
        self.log.info('Worker %s started with PID %d, version %s', self.key, os.getpid(), VERSION)
        self.subscribe()
        self.set_state(WorkerStatus.STARTED)
        qnames = self.queue_names()
        self.log.info('*** Listening on %s...', green(', '.join(qnames)))

    def check_for_suspension(self, burst: bool):
        """Check to see if workers have been suspended by `rq suspend`"""
        before_state = None
        notified = False

        while not self._stop_requested and is_suspended(self.connection, self):
            if burst:
                self.log.info('Suspended in burst mode, exiting')
                self.log.info('Note: There could still be unfinished jobs on the queue')
                raise StopRequested

            if not notified:
                self.log.info('Worker suspended, run `rq resume` to resume')
                before_state = self.get_state()
                self.set_state(WorkerStatus.SUSPENDED)
                notified = True
            time.sleep(1)

        if before_state:
            self.set_state(before_state)

    def run_maintenance_tasks(self):
        """
        Runs periodic maintenance tasks, these include:
        1. Check if scheduler should be started. This check should not be run
           on first run since worker.work() already calls
           `scheduler.enqueue_scheduled_jobs()` on startup.
        2.
Cleaning registries No need to try to start scheduler on first run """ if self.last_cleaned_at: if self.scheduler and (not self.scheduler._process or not self.scheduler._process.is_alive()): self.scheduler.acquire_locks(auto_start=True) self.clean_registries() def subscribe(self): """Subscribe to this worker's channel""" self.log.info('Subscribing to channel %s', self.pubsub_channel_name) self.pubsub = self.connection.pubsub() self.pubsub.subscribe(**{self.pubsub_channel_name: self.handle_payload}) self.pubsub_thread = self.pubsub.run_in_thread(sleep_time=0.2, daemon=True) def unsubscribe(self): """Unsubscribe from pubsub channel""" if self.pubsub_thread: self.log.info('Unsubscribing from channel %s', self.pubsub_channel_name) self.pubsub_thread.stop() self.pubsub_thread.join() self.pubsub.unsubscribe() self.pubsub.close() def dequeue_job_and_maintain_ttl( self, timeout: Optional[int], max_idle_time: Optional[int] = None ) -> Tuple['Job', 'Queue']: """Dequeues a job while maintaining the TTL. Returns: result (Tuple[Job, Queue]): A tuple with the job and the queue. """ result = None qnames = ','.join(self.queue_names()) self.set_state(WorkerStatus.IDLE) self.procline('Listening on ' + qnames) self.log.debug('*** Listening on %s...', green(qnames)) connection_wait_time = 1.0 idle_since = utcnow() idle_time_left = max_idle_time while True: try: self.heartbeat() if self.should_run_maintenance_tasks: self.run_maintenance_tasks() if timeout is not None and idle_time_left is not None: timeout = min(timeout, idle_time_left) self.log.debug('Dequeueing jobs on queues %s and timeout %s', green(qnames), timeout) result = self.queue_class.dequeue_any( self._ordered_queues, timeout, connection=self.connection, job_class=self.job_class, serializer=self.serializer, death_penalty_class=self.death_penalty_class, ) if result is not None: job, queue = result self.reorder_queues(reference_queue=queue) self.log.debug('Dequeued job %s from %s', blue(job.id), green(queue.name)) job.redis_server_version = self.get_redis_server_version() if self.log_job_description: self.log.info('%s: %s (%s)', green(queue.name), blue(job.description), job.id) else: self.log.info('%s: %s', green(queue.name), job.id) break except DequeueTimeout: if max_idle_time is not None: idle_for = (utcnow() - idle_since).total_seconds() idle_time_left = math.ceil(max_idle_time - idle_for) if idle_time_left <= 0: break except redis.exceptions.ConnectionError as conn_err: self.log.error( 'Could not connect to Redis instance: %s Retrying in %d seconds...', conn_err, connection_wait_time ) time.sleep(connection_wait_time) connection_wait_time *= self.exponential_backoff_factor connection_wait_time = min(connection_wait_time, self.max_connection_wait_time) else: connection_wait_time = 1.0 self.heartbeat() return result def heartbeat(self, timeout: Optional[int] = None, pipeline: Optional['Pipeline'] = None): """Specifies a new worker timeout, typically by extending the expiration time of the worker, effectively making this a "heartbeat" to not expire the worker until the timeout passes. The next heartbeat should come before this time, or the worker will die (at least from the monitoring dashboards). If no timeout is given, the worker_ttl will be used to update the expiration time of the worker. 
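        More precisely, the effective TTL then becomes `worker_ttl + 60` seconds
        (see below), leaving a buffer beyond the blocking dequeue timeout.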
Args: timeout (Optional[int]): Timeout pipeline (Optional[Redis]): A Redis pipeline """ timeout = timeout or self.worker_ttl + 60 connection: Union[Redis, 'Pipeline'] = pipeline if pipeline is not None else self.connection connection.expire(self.key, timeout) connection.hset(self.key, 'last_heartbeat', utcformat(utcnow())) self.log.debug('Sent heartbeat to prevent worker timeout. Next one should arrive in %s seconds.', timeout) class Worker(BaseWorker): @classmethod def find_by_key( cls, worker_key: str, connection: Optional['Redis'] = None, job_class: Optional[Type['Job']] = None, queue_class: Optional[Type['Queue']] = None, serializer=None, ) -> 'Worker': """Returns a Worker instance, based on the naming conventions for naming the internal Redis keys. Can be used to reverse-lookup Workers by their Redis keys. Args: worker_key (str): The worker key connection (Optional[Redis], optional): Redis connection. Defaults to None. job_class (Optional[Type[Job]], optional): The job class if custom class is being used. Defaults to None. queue_class (Optional[Type[Queue]]): The queue class if a custom class is being used. Defaults to None. serializer (Any, optional): The serializer to use. Defaults to None. Raises: ValueError: If the key doesn't start with `rq:worker:`, the default worker namespace prefix. Returns: worker (Worker): The Worker instance. """ prefix = cls.redis_worker_namespace_prefix if not worker_key.startswith(prefix): raise ValueError('Not a valid RQ worker key: %s' % worker_key) if connection is None: connection = get_current_connection() if not connection.exists(worker_key): connection.srem(cls.redis_workers_keys, worker_key) return None name = worker_key[len(prefix) :] worker = cls( [], name, connection=connection, job_class=job_class, queue_class=queue_class, prepare_for_work=False, serializer=serializer, ) worker.refresh() return worker @property def horse_pid(self): """The horse's process ID. Only available in the worker. Will return 0 in the horse part of the fork. """ return self._horse_pid @property def is_horse(self): """Returns whether or not this is the worker or the work horse.""" return self._is_horse @property def connection_timeout(self) -> int: return self.dequeue_timeout + 10 def procline(self, message): """Changes the current procname for the process. This can be used to make `ps -ef` output more readable. 
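
        Example (illustrative):
            worker.procline('Listening on default')
            # process title becomes: rq:worker:<name>: Listening on default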
""" setprocname(f'rq:worker:{self.name}: {message}') def register_birth(self): """Registers its own birth.""" self.log.debug('Registering birth of worker %s', self.name) if self.connection.exists(self.key) and not self.connection.hexists(self.key, 'death'): msg = 'There exists an active worker named {0!r} already' raise ValueError(msg.format(self.name)) key = self.key queues = ','.join(self.queue_names()) with self.connection.pipeline() as p: p.delete(key) now = utcnow() now_in_string = utcformat(now) self.birth_date = now mapping = { 'birth': now_in_string, 'last_heartbeat': now_in_string, 'queues': queues, 'pid': self.pid, 'hostname': self.hostname, 'ip_address': self.ip_address, 'version': self.version, 'python_version': self.python_version, } if self.get_redis_server_version() >= (4, 0, 0): p.hset(key, mapping=mapping) else: p.hmset(key, mapping) worker_registration.register(self, p) p.expire(key, self.worker_ttl + 60) p.execute() def register_death(self): """Registers its own death.""" self.log.debug('Registering death') with self.connection.pipeline() as p: # We cannot use self.state = 'dead' here, because that would # rollback the pipeline worker_registration.unregister(self, p) p.hset(self.key, 'death', utcformat(utcnow())) p.expire(self.key, 60) p.execute() def set_shutdown_requested_date(self): """Sets the date on which the worker received a (warm) shutdown request""" self.connection.hset(self.key, 'shutdown_requested_date', utcformat(self._shutdown_requested_date)) @property def shutdown_requested_date(self): """Fetches shutdown_requested_date from Redis.""" shutdown_requested_timestamp = self.connection.hget(self.key, 'shutdown_requested_date') if shutdown_requested_timestamp is not None: return utcparse(as_text(shutdown_requested_timestamp)) @property def death_date(self): """Fetches death date from Redis.""" death_timestamp = self.connection.hget(self.key, 'death') if death_timestamp is not None: return utcparse(as_text(death_timestamp)) def set_state(self, state: str, pipeline: Optional['Pipeline'] = None): """Sets the worker's state. Args: state (str): The state pipeline (Optional[Pipeline], optional): The pipeline to use. Defaults to None. """ self._state = state connection = pipeline if pipeline is not None else self.connection connection.hset(self.key, 'state', state) def _set_state(self, state): """Raise a DeprecationWarning if ``worker.state = X`` is used""" warnings.warn("worker.state is deprecated, use worker.set_state() instead.", DeprecationWarning) self.set_state(state) def get_state(self) -> str: return self._state def _get_state(self): """Raise a DeprecationWarning if ``worker.state == X`` is used""" warnings.warn("worker.state is deprecated, use worker.get_state() instead.", DeprecationWarning) return self.get_state() state = property(_get_state, _set_state) def set_current_job_working_time(self, current_job_working_time: float, pipeline: Optional['Pipeline'] = None): """Sets the current job working time in seconds Args: current_job_working_time (float): The current job working time in seconds pipeline (Optional[Pipeline], optional): Pipeline to use. Defaults to None. """ self.current_job_working_time = current_job_working_time connection = pipeline if pipeline is not None else self.connection connection.hset(self.key, 'current_job_working_time', current_job_working_time) def set_current_job_id(self, job_id: Optional[str] = None, pipeline: Optional['Pipeline'] = None): """Sets the current job id. If `None` is used it will delete the current job key. 
        Args:
            job_id (Optional[str], optional): The job id. Defaults to None.
            pipeline (Optional[Pipeline], optional): The pipeline to use. Defaults to None.
        """
        connection = pipeline if pipeline is not None else self.connection
        if job_id is None:
            connection.hdel(self.key, 'current_job')
        else:
            connection.hset(self.key, 'current_job', job_id)

    def get_current_job_id(self, pipeline: Optional['Pipeline'] = None) -> Optional[str]:
        """Retrieves the current job id.

        Args:
            pipeline (Optional['Pipeline'], optional): The pipeline to use. Defaults to None.

        Returns:
            job_id (Optional[str]): The job id
        """
        connection = pipeline if pipeline is not None else self.connection
        result = connection.hget(self.key, 'current_job')
        if result is None:
            return None
        return as_text(result)

    def get_current_job(self) -> Optional['Job']:
        """Returns the currently executing job instance.

        Returns:
            job (Job): The job instance.
        """
        job_id = self.get_current_job_id()
        if job_id is None:
            return None
        return self.job_class.fetch(job_id, self.connection, self.serializer)

    def kill_horse(self, sig: signal.Signals = SIGKILL):
        """Kill the horse but catch "No such process" error, as the horse could already be dead.

        Args:
            sig (signal.Signals, optional): The signal to send to the horse's process group. Defaults to SIGKILL.
        """
        try:
            os.killpg(os.getpgid(self.horse_pid), sig)
            self.log.info('Killed horse pid %s', self.horse_pid)
        except OSError as e:
            if e.errno == errno.ESRCH:
                # "No such process" is fine with us
                self.log.debug('Horse already dead')
            else:
                raise

    def wait_for_horse(self) -> Tuple[Optional[int], Optional[int], Optional['struct_rusage']]:
        """Waits for the horse process to complete.
        Uses `0` as the options argument so as to include "any child in the process group of the current process".
        """
        pid = stat = rusage = None
        with contextlib.suppress(ChildProcessError):  # ChildProcessError: [Errno 10] No child processes
            pid, stat, rusage = os.wait4(self.horse_pid, 0)
        return pid, stat, rusage

    def request_force_stop(self, signum: int, frame: Optional[FrameType]):
        """Terminates the application (cold shutdown).

        Args:
            signum (Any): Signum
            frame (Any): Frame

        Raises:
            SystemExit: SystemExit
        """
        # When worker is run through a worker pool, it may receive duplicate signals
        # One is sent by the pool when it calls `pool.stop_worker()` and another is sent by the OS
        # when user hits Ctrl+C. In this case if we receive the second signal within 1 second,
        # we ignore it.
        if (utcnow() - self._shutdown_requested_date) < timedelta(seconds=1):  # type: ignore
            self.log.debug('Shutdown signal ignored, received twice in less than 1 second')
            return

        self.log.warning('Cold shut down')

        # Take down the horse with the worker
        if self.horse_pid:
            self.log.debug('Taking down horse %s with me', self.horse_pid)
            self.kill_horse()
            self.wait_for_horse()
        raise SystemExit()

    def request_stop(self, signum, frame):
        """Stops the current worker loop but waits for child processes to
        end gracefully (warm shutdown).

        Args:
            signum (Any): Signum
            frame (Any): Frame
        """
        self.log.debug('Got signal %s', signal_name(signum))
        self._shutdown_requested_date = utcnow()

        signal.signal(signal.SIGINT, self.request_force_stop)
        signal.signal(signal.SIGTERM, self.request_force_stop)

        self.handle_warm_shutdown_request()
        self._shutdown()

    def _shutdown(self):
        """
        If shutdown is requested in the middle of a job, wait until
        finish before shutting down and save the request in redis
        """
        if self.get_state() == WorkerStatus.BUSY:
            self._stop_requested = True
            self.set_shutdown_requested_date()
            self.log.debug('Stopping after current horse is finished. Press Ctrl+C again for a cold shutdown.')
            if self.scheduler:
                self.stop_scheduler()
        else:
            if self.scheduler:
                self.stop_scheduler()
            raise StopRequested()

    def handle_warm_shutdown_request(self):
        self.log.info('Worker %s [PID %d]: warm shut down requested', self.name, self.pid)

    def reorder_queues(self, reference_queue: 'Queue'):
        """Reorder the queues according to the strategy.
        As this can be defined both in the `Worker` initialization or in the `work` method,
        it doesn't take the strategy directly, but rather uses the private `_dequeue_strategy` attribute.

        Args:
            reference_queue (Queue): The queue the last job was dequeued from, used as the
                reference point for reordering.
        """
        if self._dequeue_strategy is None:
            self._dequeue_strategy = DequeueStrategy.DEFAULT

        if self._dequeue_strategy not in ("default", "random", "round_robin"):
            raise ValueError(
                f"Dequeue strategy {self._dequeue_strategy} is not allowed. Use `default`, `random` or `round_robin`."
            )

        if self._dequeue_strategy == DequeueStrategy.DEFAULT:
            return
        if self._dequeue_strategy == DequeueStrategy.ROUND_ROBIN:
            pos = self._ordered_queues.index(reference_queue)
            self._ordered_queues = self._ordered_queues[pos + 1 :] + self._ordered_queues[: pos + 1]
            return
        if self._dequeue_strategy == DequeueStrategy.RANDOM:
            shuffle(self._ordered_queues)
            return

    def teardown(self):
        if not self.is_horse:
            if self.scheduler:
                self.stop_scheduler()
            self.register_death()
            self.unsubscribe()

    def stop_scheduler(self):
        """Ensure the scheduler process is stopped.
        Sends SIGTERM to the scheduler process; if that raises an OSError it is
        ignored, and the scheduler process is then `join()`ed to wait for it to finish.
        """
        if self.scheduler._process and self.scheduler._process.pid:
            try:
                os.kill(self.scheduler._process.pid, signal.SIGTERM)
            except OSError:
                pass
            self.scheduler._process.join()

    def refresh(self):
        """Refreshes the worker data.
        It will get the data from the datastore and update the Worker's attributes
        """
        data = self.connection.hmget(
            self.key,
            'queues',
            'state',
            'current_job',
            'last_heartbeat',
            'birth',
            'failed_job_count',
            'successful_job_count',
            'total_working_time',
            'current_job_working_time',
            'hostname',
            'ip_address',
            'pid',
            'version',
            'python_version',
        )
        (
            queues,
            state,
            job_id,
            last_heartbeat,
            birth,
            failed_job_count,
            successful_job_count,
            total_working_time,
            current_job_working_time,
            hostname,
            ip_address,
            pid,
            version,
            python_version,
        ) = data
        self.hostname = as_text(hostname) if hostname else None
        self.ip_address = as_text(ip_address) if ip_address else None
        self.pid = int(pid) if pid else None
        self.version = as_text(version) if version else None
        self.python_version = as_text(python_version) if python_version else None
        self._state = as_text(state or '?')
        self._job_id = job_id or None
        if last_heartbeat:
            self.last_heartbeat = utcparse(as_text(last_heartbeat))
        else:
            self.last_heartbeat = None
        if birth:
            self.birth_date = utcparse(as_text(birth))
        else:
            self.birth_date = None
        if failed_job_count:
            self.failed_job_count = int(as_text(failed_job_count))
        if successful_job_count:
            self.successful_job_count = int(as_text(successful_job_count))
        if total_working_time:
            self.total_working_time = float(as_text(total_working_time))
        if current_job_working_time:
            self.current_job_working_time = float(as_text(current_job_working_time))

        if queues:
            queues = as_text(queues)
            self.queues = [
                self.queue_class(
                    queue, connection=self.connection, job_class=self.job_class, serializer=self.serializer
                )
                for queue in queues.split(',')
            ]

    def increment_failed_job_count(self, pipeline: Optional['Pipeline'] = None):
        """Used to keep the worker stats up to date in Redis.
        Increments the failed job count.

        Args:
            pipeline (Optional[Pipeline], optional): A Redis Pipeline. Defaults to None.
        """
        connection = pipeline if pipeline is not None else self.connection
        connection.hincrby(self.key, 'failed_job_count', 1)

    def increment_successful_job_count(self, pipeline: Optional['Pipeline'] = None):
        """Used to keep the worker stats up to date in Redis.
        Increments the successful job count.

        Args:
            pipeline (Optional[Pipeline], optional): A Redis Pipeline. Defaults to None.
        """
        connection = pipeline if pipeline is not None else self.connection
        connection.hincrby(self.key, 'successful_job_count', 1)

    def increment_total_working_time(self, job_execution_time: timedelta, pipeline: 'Pipeline'):
        """Used to keep the worker stats up to date in Redis.
        Increments the time the worker has been working for (in seconds).

        Args:
            job_execution_time (timedelta): A timedelta object.
            pipeline (Optional[Pipeline], optional): A Redis Pipeline. Defaults to None.
        """
        pipeline.hincrbyfloat(self.key, 'total_working_time', job_execution_time.total_seconds())

    def fork_work_horse(self, job: 'Job', queue: 'Queue'):
        """Spawns a work horse to perform the actual work and passes it a job.
        This is where the `fork()` actually happens.

        Args:
            job (Job): The Job that will be run
            queue (Queue): The queue
        """
        child_pid = os.fork()
        os.environ['RQ_WORKER_ID'] = self.name
        os.environ['RQ_JOB_ID'] = job.id
        if child_pid == 0:
            os.setsid()
            self.main_work_horse(job, queue)
            os._exit(0)  # just in case
        else:
            self._horse_pid = child_pid
            self.procline('Forked {0} at {1}'.format(child_pid, time.time()))

    def get_heartbeat_ttl(self, job: 'Job') -> int:
        """Gets the TTL for the next heartbeat.

        Args:
            job (Job): The Job

        Returns:
            int: The heartbeat TTL.
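
        Example (illustrative): with `job.timeout=180`, `current_job_working_time=100`
            and `job_monitoring_interval=30`, this returns `min(80, 30) + 60 == 90`.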
""" if job.timeout and job.timeout > 0: remaining_execution_time = job.timeout - self.current_job_working_time return min(remaining_execution_time, self.job_monitoring_interval) + 60 else: return self.job_monitoring_interval + 60 def monitor_work_horse(self, job: 'Job', queue: 'Queue'): """The worker will monitor the work horse and make sure that it either executes successfully or the status of the job is set to failed Args: job (Job): _description_ queue (Queue): _description_ """ retpid = ret_val = rusage = None job.started_at = utcnow() while True: try: with self.death_penalty_class(self.job_monitoring_interval, HorseMonitorTimeoutException): retpid, ret_val, rusage = self.wait_for_horse() break except HorseMonitorTimeoutException: # Horse has not exited yet and is still running. # Send a heartbeat to keep the worker alive. self.set_current_job_working_time((utcnow() - job.started_at).total_seconds()) # Kill the job from this side if something is really wrong (interpreter lock/etc). if job.timeout != -1 and self.current_job_working_time > (job.timeout + 60): # type: ignore self.heartbeat(self.job_monitoring_interval + 60) self.kill_horse() self.wait_for_horse() break self.maintain_heartbeats(job) except OSError as e: # In case we encountered an OSError due to EINTR (which is # caused by a SIGINT or SIGTERM signal during # os.waitpid()), we simply ignore it and enter the next # iteration of the loop, waiting for the child to end. In # any other case, this is some other unexpected OS error, # which we don't want to catch, so we re-raise those ones. if e.errno != errno.EINTR: raise # Send a heartbeat to keep the worker alive. self.heartbeat() self.set_current_job_working_time(0) self._horse_pid = 0 # Set horse PID to 0, horse has finished working if ret_val == os.EX_OK: # The process exited normally. return job_status = job.get_status() if job_status is None: # Job completed and its ttl has expired return elif self._stopped_job_id == job.id: # Work-horse killed deliberately self.log.warning('Job stopped by user, moving job to FailedJobRegistry') if job.stopped_callback: job.execute_stopped_callback(self.death_penalty_class) self.handle_job_failure(job, queue=queue, exc_string='Job stopped by user, work-horse terminated.') elif job_status not in [JobStatus.FINISHED, JobStatus.FAILED]: if not job.ended_at: job.ended_at = utcnow() # Unhandled failure: move the job to the failed queue signal_msg = f" (signal {os.WTERMSIG(ret_val)})" if ret_val and os.WIFSIGNALED(ret_val) else '' exc_string = f"Work-horse terminated unexpectedly; waitpid returned {ret_val}{signal_msg}; " self.log.warning('Moving job to FailedJobRegistry (%s)', exc_string) self.handle_work_horse_killed(job, retpid, ret_val, rusage) self.handle_job_failure(job, queue=queue, exc_string=exc_string) def execute_job(self, job: 'Job', queue: 'Queue'): """Spawns a work horse to perform the actual work and passes it a job. The worker will wait for the work horse and make sure it executes within the given timeout bounds, or will end the work horse with SIGALRM. """ self.set_state(WorkerStatus.BUSY) self.fork_work_horse(job, queue) self.monitor_work_horse(job, queue) self.set_state(WorkerStatus.IDLE) def maintain_heartbeats(self, job: 'Job'): """Updates worker and job's last heartbeat field. If job was enqueued with `result_ttl=0`, a race condition could happen where this heartbeat arrives after job has been deleted, leaving a job key that contains only `last_heartbeat` field. hset() is used when updating job's timestamp. 
This command returns 1 if a new Redis key is created, 0 otherwise. So in this case we check the return of job's heartbeat() command. If a new key was created, this means the job was already deleted. In this case, we simply send another delete command to remove the key. https://github.com/rq/rq/issues/1450 """ with self.connection.pipeline() as pipeline: self.heartbeat(self.job_monitoring_interval + 60, pipeline=pipeline) ttl = self.get_heartbeat_ttl(job) job.heartbeat(utcnow(), ttl, pipeline=pipeline, xx=True) results = pipeline.execute() if results[2] == 1: self.connection.delete(job.key) def main_work_horse(self, job: 'Job', queue: 'Queue'): """This is the entry point of the newly spawned work horse. After fork()'ing, always ensure we are generating random sequences that are different from the worker. os._exit() is the way to exit from children after a fork(), in contrast to the regular sys.exit() """ random.seed() self.setup_work_horse_signals() self._is_horse = True self.log = logger try: self.perform_job(job, queue) except: # noqa os._exit(1) os._exit(0) def setup_work_horse_signals(self): """Sets up signal handling for the newly spawned work horse. Always ignore Ctrl+C in the work horse, as it might abort the currently running job. The main worker catches the Ctrl+C and requests graceful shutdown after the current work is done. When cold shutdown is requested, it kills the current job anyway. """ signal.signal(signal.SIGINT, signal.SIG_IGN) signal.signal(signal.SIGTERM, signal.SIG_DFL) def prepare_job_execution(self, job: 'Job', remove_from_intermediate_queue: bool = False): """Performs misc bookkeeping like updating states prior to job execution. """ self.log.debug('Preparing for execution of Job ID %s', job.id) with self.connection.pipeline() as pipeline: self.set_current_job_id(job.id, pipeline=pipeline) self.set_current_job_working_time(0, pipeline=pipeline) heartbeat_ttl = self.get_heartbeat_ttl(job) self.heartbeat(heartbeat_ttl, pipeline=pipeline) job.heartbeat(utcnow(), heartbeat_ttl, pipeline=pipeline) job.prepare_for_execution(self.name, pipeline=pipeline) if remove_from_intermediate_queue: from .queue import Queue queue = Queue(job.origin, connection=self.connection) pipeline.lrem(queue.intermediate_queue_key, 1, job.id) pipeline.execute() self.log.debug('Job preparation finished.') msg = 'Processing {0} from {1} since {2}' self.procline(msg.format(job.func_name, job.origin, time.time())) def handle_job_success(self, job: 'Job', queue: 'Queue', started_job_registry: StartedJobRegistry): """Handles the successful execution of a job. It will remove the job from the `StartedJobRegistry`, adding it to the `FinishedJobRegistry`, and run a few maintenance tasks including: - Resetting the current job ID - Enqueuing dependents - Incrementing the job count and working time - Handling of the job's successful execution Runs within a loop that uses the `watch` method to protect interactions with the dependents keys. Args: job (Job): The job that was successful.
queue (Queue): The queue started_job_registry (StartedJobRegistry): The started registry """ self.log.debug('Handling successful execution of job %s', job.id) with self.connection.pipeline() as pipeline: while True: try: # if dependencies are inserted after enqueue_dependents # a WatchError is thrown by execute() pipeline.watch(job.dependents_key) # enqueue_dependents might call multi() on the pipeline queue.enqueue_dependents(job, pipeline=pipeline) if not pipeline.explicit_transaction: # enqueue_dependents didn't call multi after all! # We have to do it ourselves to make sure everything runs in a transaction pipeline.multi() self.set_current_job_id(None, pipeline=pipeline) self.increment_successful_job_count(pipeline=pipeline) self.increment_total_working_time(job.ended_at - job.started_at, pipeline) # type: ignore result_ttl = job.get_result_ttl(self.default_result_ttl) if result_ttl != 0: self.log.debug('Saving job %s\'s successful execution result', job.id) job._handle_success(result_ttl, pipeline=pipeline) job.cleanup(result_ttl, pipeline=pipeline, remove_from_queue=False) self.log.debug('Removing job %s from StartedJobRegistry', job.id) started_job_registry.remove(job, pipeline=pipeline) pipeline.execute() self.log.debug('Finished handling successful execution of job %s', job.id) break except redis.exceptions.WatchError: continue def perform_job(self, job: 'Job', queue: 'Queue') -> bool: """Performs the actual work of a job. Will/should only be called inside the work horse's process. Args: job (Job): The Job queue (Queue): The Queue Returns: bool: True after finished. """ push_connection(self.connection) started_job_registry = queue.started_job_registry self.log.debug('Started Job Registry set.') try: remove_from_intermediate_queue = len(self.queues) == 1 self.prepare_job_execution(job, remove_from_intermediate_queue) job.started_at = utcnow() timeout = job.timeout or self.queue_class.DEFAULT_TIMEOUT with self.death_penalty_class(timeout, JobTimeoutException, job_id=job.id): self.log.debug('Performing Job...') rv = job.perform() self.log.debug('Finished performing Job ID %s', job.id) job.ended_at = utcnow() # Pickle the result in the same try-except block since we need # to use the same exc handling when pickling fails job._result = rv job.heartbeat(utcnow(), job.success_callback_timeout) job.execute_success_callback(self.death_penalty_class, rv) self.handle_job_success(job=job, queue=queue, started_job_registry=started_job_registry) except: # NOQA self.log.debug('Job %s raised an exception.', job.id) job.ended_at = utcnow() exc_info = sys.exc_info() exc_string = ''.join(traceback.format_exception(*exc_info)) try: job.heartbeat(utcnow(), job.failure_callback_timeout) job.execute_failure_callback(self.death_penalty_class, *exc_info) except: # noqa exc_info = sys.exc_info() exc_string = ''.join(traceback.format_exception(*exc_info)) self.handle_job_failure( job=job, exc_string=exc_string, queue=queue, started_job_registry=started_job_registry ) self.handle_exception(job, *exc_info) return False finally: pop_connection() self.log.info('%s: %s (%s)', green(job.origin), blue('Job OK'), job.id) if rv is not None: self.log.debug('Result: %r', yellow(as_text(str(rv)))) if self.log_result_lifespan: result_ttl = job.get_result_ttl(self.default_result_ttl) if result_ttl == 0: self.log.info('Result discarded immediately') elif result_ttl > 0: self.log.info('Result is kept for %s seconds', result_ttl) else: self.log.info('Result will never expire, clean up result key manually') return True def 
handle_exception(self, job: 'Job', *exc_info): """Walks the exception handler stack to delegate exception handling. If the job cannot be deserialized, it will raise when func_name or the other properties are accessed, which will stop exceptions from being properly logged, so we guard against it here. """ self.log.debug('Handling exception for %s.', job.id) exc_string = ''.join(traceback.format_exception(*exc_info)) try: extra = { 'func': job.func_name, 'arguments': job.args, 'kwargs': job.kwargs, } func_name = job.func_name except DeserializationError: extra = {} func_name = '<DeserializationError>' # the properties below should be safe however extra.update({'queue': job.origin, 'job_id': job.id}) self.log.error( '[Job %s]: exception raised while executing (%s)\n%s', job.id, func_name, exc_string, extra=extra ) for handler in self._exc_handlers: self.log.debug('Invoking exception handler %s', handler) fallthrough = handler(job, *exc_info) # Only handlers with explicit return values should disable further # exc handling, so interpret a None return value as True. if fallthrough is None: fallthrough = True if not fallthrough: break def push_exc_handler(self, handler_func): """Pushes an exception handler onto the exc handler stack.""" self._exc_handlers.append(handler_func) def pop_exc_handler(self): """Pops the latest exception handler off of the exc handler stack.""" return self._exc_handlers.pop() def handle_work_horse_killed(self, job, retpid, ret_val, rusage): if self._work_horse_killed_handler is None: return self._work_horse_killed_handler(job, retpid, ret_val, rusage) def __eq__(self, other): """Equality does not take the database/connection into account""" if not isinstance(other, self.__class__): raise TypeError('Cannot compare workers to other types (of workers)') return self.name == other.name def __hash__(self): """The hash does not take the database/connection into account""" return hash(self.name) def handle_payload(self, message): """Handle external commands""" self.log.debug('Received message: %s', message) payload = parse_payload(message) handle_command(self, payload) class SimpleWorker(Worker): def execute_job(self, job: 'Job', queue: 'Queue'): """Execute job in same thread/process, do not fork()""" self.set_state(WorkerStatus.BUSY) self.perform_job(job, queue) self.set_state(WorkerStatus.IDLE) def get_heartbeat_ttl(self, job: 'Job') -> int: """A timeout of -1 means that jobs never time out. In this case, we should _not_ do -1 + 60 = 59. We should just stick to DEFAULT_WORKER_TTL.
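For example (illustrative values): a job with timeout=120 yields a TTL of 180, a job with timeout=None falls back to DEFAULT_WORKER_TTL + 60, and timeout=-1 returns DEFAULT_WORKER_TTL unchanged.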
Args: job (Job): The Job Returns: ttl (int): TTL """ if job.timeout == -1: return DEFAULT_WORKER_TTL else: return (job.timeout or DEFAULT_WORKER_TTL) + 60 class HerokuWorker(Worker): """ Modified version of rq worker which: * stops work horses getting killed with SIGTERM * sends SIGRTMIN to work horses on SIGTERM to the main process which in turn causes the horse to crash `imminent_shutdown_delay` seconds later """ imminent_shutdown_delay = 6 frame_properties = ['f_code', 'f_lasti', 'f_lineno', 'f_locals', 'f_trace'] def setup_work_horse_signals(self): """Modified to ignore SIGINT and SIGTERM and only handle SIGRTMIN""" signal.signal(signal.SIGRTMIN, self.request_stop_sigrtmin) signal.signal(signal.SIGINT, signal.SIG_IGN) signal.signal(signal.SIGTERM, signal.SIG_IGN) def handle_warm_shutdown_request(self): """If horse is alive send it SIGRTMIN""" if self.horse_pid != 0: self.log.info('Worker %s: warm shut down requested, sending horse SIGRTMIN signal', self.key) self.kill_horse(sig=signal.SIGRTMIN) else: self.log.warning('Warm shut down requested, no horse found') def request_stop_sigrtmin(self, signum, frame): if self.imminent_shutdown_delay == 0: self.log.warning('Imminent shutdown, raising ShutDownImminentException immediately') self.request_force_stop_sigrtmin(signum, frame) else: self.log.warning( 'Imminent shutdown, raising ShutDownImminentException in %d seconds', self.imminent_shutdown_delay ) signal.signal(signal.SIGRTMIN, self.request_force_stop_sigrtmin) signal.signal(signal.SIGALRM, self.request_force_stop_sigrtmin) signal.alarm(self.imminent_shutdown_delay) def request_force_stop_sigrtmin(self, signum, frame): info = dict((attr, getattr(frame, attr)) for attr in self.frame_properties) self.log.warning('raising ShutDownImminentException to cancel job...') raise ShutDownImminentException('shut down imminent (signal: %s)' % signal_name(signum), info) class RoundRobinWorker(Worker): """ Modified version of Worker that dequeues jobs from the queues using a round-robin strategy. """ def reorder_queues(self, reference_queue): pos = self._ordered_queues.index(reference_queue) self._ordered_queues = self._ordered_queues[pos + 1 :] + self._ordered_queues[: pos + 1] class RandomWorker(Worker): """ Modified version of Worker that dequeues jobs from the queues using a random strategy. 
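Both this class and RoundRobinWorker can be selected from the CLI via the --worker-class/-w option (e.g. `rq worker -w rq.worker.RandomWorker`), although the worker command below deprecates them in favour of `--dequeue-strategy random` and `--dequeue-strategy round_robin`.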
""" def reorder_queues(self, reference_queue): shuffle(self._ordered_queues) rq-1.16.2/rq/worker_pool.py0000644000000000000000000002326613615410400012512 0ustar00import contextlib import errno import logging import os import signal import time from enum import Enum from multiprocessing import Process from typing import Dict, List, NamedTuple, Optional, Type, Union from uuid import uuid4 from redis import ConnectionPool, Redis from rq.serializers import DefaultSerializer from .connections import parse_connection from .defaults import DEFAULT_LOGGING_DATE_FORMAT, DEFAULT_LOGGING_FORMAT from .job import Job from .logutils import setup_loghandlers from .queue import Queue from .utils import parse_names from .worker import BaseWorker, Worker class WorkerData(NamedTuple): name: str pid: int process: Process class WorkerPool: class Status(Enum): IDLE = 1 STARTED = 2 STOPPED = 3 def __init__( self, queues: List[Union[str, Queue]], connection: Redis, num_workers: int = 1, worker_class: Type[BaseWorker] = Worker, serializer: Type[DefaultSerializer] = DefaultSerializer, job_class: Type[Job] = Job, *args, **kwargs, ): self.num_workers: int = num_workers self._workers: List[Worker] = [] setup_loghandlers('INFO', DEFAULT_LOGGING_DATE_FORMAT, DEFAULT_LOGGING_FORMAT, name=__name__) self.log: logging.Logger = logging.getLogger(__name__) # self.log: logging.Logger = logger self._queue_names: List[str] = parse_names(queues) self.connection = connection self.name: str = uuid4().hex self._burst: bool = True self._sleep: int = 0 self.status: self.Status = self.Status.IDLE # type: ignore self.worker_class: Type[BaseWorker] = worker_class self.serializer: Type[DefaultSerializer] = serializer self.job_class: Type[Job] = job_class # A dictionary of WorkerData keyed by worker name self.worker_dict: Dict[str, WorkerData] = {} self._connection_class, self._pool_class, self._pool_kwargs = parse_connection(connection) @property def queues(self) -> List[Queue]: """Returns a list of Queue objects""" return [Queue(name, connection=self.connection) for name in self._queue_names] @property def number_of_active_workers(self) -> int: """Returns a list of Queue objects""" return len(self.worker_dict) def _install_signal_handlers(self): """Installs signal handlers for handling SIGINT and SIGTERM gracefully. """ signal.signal(signal.SIGINT, self.request_stop) signal.signal(signal.SIGTERM, self.request_stop) def request_stop(self, signum=None, frame=None): """Toggle self._stop_requested that's checked on every loop""" self.log.info('Received SIGINT/SIGTERM, shutting down...') self.status = self.Status.STOPPED self.stop_workers() def all_workers_have_stopped(self) -> bool: """Returns True if all workers have stopped.""" self.reap_workers() # `bool(self.worker_dict)` sometimes returns True even if the dict is empty return self.number_of_active_workers == 0 def reap_workers(self): """Removes dead workers from worker_dict""" self.log.debug('Reaping dead workers') worker_datas = list(self.worker_dict.values()) for data in worker_datas: data.process.join(0.1) if data.process.is_alive(): self.log.debug('Worker %s with pid %d is alive', data.name, data.pid) else: self.handle_dead_worker(data) continue # I'm still not sure why this is sometimes needed, temporarily commenting # this out until I can figure it out. 
# with contextlib.suppress(HorseMonitorTimeoutException): # with UnixSignalDeathPenalty(1, HorseMonitorTimeoutException): # try: # # If wait4 returns, the process is dead # os.wait4(data.process.pid, 0) # type: ignore # self.handle_dead_worker(data) # except ChildProcessError: # # Process is dead # self.handle_dead_worker(data) # continue def handle_dead_worker(self, worker_data: WorkerData): """ Handle a dead worker """ self.log.info('Worker %s with pid %d is dead', worker_data.name, worker_data.pid) with contextlib.suppress(KeyError): self.worker_dict.pop(worker_data.name) def check_workers(self, respawn: bool = True) -> None: """ Check whether workers are still alive """ self.log.debug('Checking worker processes') self.reap_workers() # If we have fewer workers than num_workers, # respawn the difference if respawn and self.status != self.Status.STOPPED: delta = self.num_workers - len(self.worker_dict) if delta: for i in range(delta): self.start_worker(burst=self._burst, _sleep=self._sleep) def get_worker_process( self, name: str, burst: bool, _sleep: float = 0, logging_level: str = "INFO", ) -> Process: """Returns the worker process""" return Process( target=run_worker, args=(name, self._queue_names, self._connection_class, self._pool_class, self._pool_kwargs), kwargs={ '_sleep': _sleep, 'burst': burst, 'logging_level': logging_level, 'worker_class': self.worker_class, 'job_class': self.job_class, 'serializer': self.serializer, }, name=f'Worker {name} (WorkerPool {self.name})', ) def start_worker( self, count: Optional[int] = None, burst: bool = True, _sleep: float = 0, logging_level: str = "INFO", ): """ Starts a worker and adds its data to worker_dict. * _sleep: waits for X seconds before creating the worker, for testing purposes """ name = uuid4().hex process = self.get_worker_process(name, burst=burst, _sleep=_sleep, logging_level=logging_level) process.start() worker_data = WorkerData(name=name, pid=process.pid, process=process) # type: ignore self.worker_dict[name] = worker_data self.log.debug('Spawned worker: %s with PID %d', name, process.pid) def start_workers(self, burst: bool = True, _sleep: float = 0, logging_level: str = "INFO"): """ Run the workers * _sleep: waits for X seconds before creating each worker, only for testing purposes """ self.log.debug(f'Spawning {self.num_workers} workers') for i in range(self.num_workers): self.start_worker(i + 1, burst=burst, _sleep=_sleep, logging_level=logging_level) def stop_worker(self, worker_data: WorkerData, sig=signal.SIGINT): """ Send stop signal to worker and catch "No such process" error if the worker is already dead.
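For instance (an illustrative failure mode): if the worker process exits between two reap_workers() runs, os.kill() raises OSError with errno set to errno.ESRCH, which is swallowed below; any other OSError is re-raised.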
""" try: os.kill(worker_data.pid, sig) self.log.info('Sent shutdown command to worker with %s', worker_data.pid) except OSError as e: if e.errno == errno.ESRCH: # "No such process" is fine with us self.log.debug('Horse already dead') else: raise def stop_workers(self): """Send SIGINT to all workers""" self.log.info('Sending stop signal to %s workers', len(self.worker_dict)) worker_datas = list(self.worker_dict.values()) for worker_data in worker_datas: self.stop_worker(worker_data) def start(self, burst: bool = False, logging_level: str = "INFO"): self._burst = burst respawn = not burst # Don't respawn workers if burst mode is on setup_loghandlers(logging_level, DEFAULT_LOGGING_DATE_FORMAT, DEFAULT_LOGGING_FORMAT, name=__name__) self.log.info(f'Starting worker pool {self.name} with pid %d...', os.getpid()) self.status = self.Status.IDLE self.start_workers(burst=self._burst, logging_level=logging_level) self._install_signal_handlers() while True: if self.status == self.Status.STOPPED: if self.all_workers_have_stopped(): self.log.info('All workers stopped, exiting...') break else: self.log.info('Waiting for workers to shutdown...') time.sleep(1) continue else: self.check_workers(respawn=respawn) if burst and self.number_of_active_workers == 0: self.log.info('All workers stopped, exiting...') break time.sleep(1) def run_worker( worker_name: str, queue_names: List[str], connection_class, connection_pool_class, connection_pool_kwargs: dict, worker_class: Type[BaseWorker] = Worker, serializer: Type[DefaultSerializer] = DefaultSerializer, job_class: Type[Job] = Job, burst: bool = True, logging_level: str = "INFO", _sleep: int = 0, ): connection = connection_class( connection_pool=ConnectionPool(connection_class=connection_pool_class, **connection_pool_kwargs) ) queues = [Queue(name, connection=connection) for name in queue_names] worker = worker_class(queues, name=worker_name, connection=connection, serializer=serializer, job_class=job_class) worker.log.info("Starting worker started with PID %s", os.getpid()) time.sleep(_sleep) worker.work(burst=burst, with_scheduler=True, logging_level=logging_level) rq-1.16.2/rq/worker_registration.py0000644000000000000000000000605413615410400014247 0ustar00from typing import TYPE_CHECKING, Optional, Set if TYPE_CHECKING: from redis import Redis from redis.client import Pipeline from .queue import Queue from .worker import Worker from rq.utils import split_list from .utils import as_text WORKERS_BY_QUEUE_KEY = 'rq:workers:%s' REDIS_WORKER_KEYS = 'rq:workers' MAX_KEYS = 1000 def register(worker: 'Worker', pipeline: Optional['Pipeline'] = None): """ Store worker key in Redis so we can easily discover active workers. Args: worker (Worker): The Worker pipeline (Optional[Pipeline], optional): The Redis Pipeline. Defaults to None. """ connection = pipeline if pipeline is not None else worker.connection connection.sadd(worker.redis_workers_keys, worker.key) for name in worker.queue_names(): redis_key = WORKERS_BY_QUEUE_KEY % name connection.sadd(redis_key, worker.key) def unregister(worker: 'Worker', pipeline: Optional['Pipeline'] = None): """Remove Worker key from Redis Args: worker (Worker): The Worker pipeline (Optional[Pipeline], optional): Redis Pipeline. Defaults to None. 
""" if pipeline is None: connection = worker.connection.pipeline() else: connection = pipeline connection.srem(worker.redis_workers_keys, worker.key) for name in worker.queue_names(): redis_key = WORKERS_BY_QUEUE_KEY % name connection.srem(redis_key, worker.key) if pipeline is None: connection.execute() def get_keys(queue: Optional['Queue'] = None, connection: Optional['Redis'] = None) -> Set[str]: """Returns a list of worker keys for a given queue. Args: queue (Optional['Queue'], optional): The Queue. Defaults to None. connection (Optional['Redis'], optional): The Redis Connection. Defaults to None. Raises: ValueError: If no Queue or Connection is provided. Returns: set: A Set with keys. """ if queue is None and connection is None: raise ValueError('"Queue" or "connection" argument is required') if queue: redis = queue.connection redis_key = WORKERS_BY_QUEUE_KEY % queue.name else: redis = connection # type: ignore redis_key = REDIS_WORKER_KEYS return {as_text(key) for key in redis.smembers(redis_key)} def clean_worker_registry(queue: 'Queue'): """Delete invalid worker keys in registry. Args: queue (Queue): The Queue """ keys = list(get_keys(queue)) with queue.connection.pipeline() as pipeline: for key in keys: pipeline.exists(key) results = pipeline.execute() invalid_keys = [] for i, key_exists in enumerate(results): if not key_exists: invalid_keys.append(keys[i]) if invalid_keys: for invalid_subset in split_list(invalid_keys, MAX_KEYS): pipeline.srem(WORKERS_BY_QUEUE_KEY % queue.name, *invalid_subset) pipeline.srem(REDIS_WORKER_KEYS, *invalid_subset) pipeline.execute() rq-1.16.2/rq/cli/__init__.py0000644000000000000000000000033313615410400012444 0ustar00# ruff: noqa: F401 I001 from .cli import main # TODO: the following imports can be removed when we drop the `rqinfo` and # `rqworkers` commands in favor of just shipping the `rq` command. from .cli import info, worker rq-1.16.2/rq/cli/__main__.py0000644000000000000000000000012013615410400012417 0ustar00import sys from . 
import main if __name__ == '__main__': sys.exit(main()) rq-1.16.2/rq/cli/cli.py0000755000000000000000000004340513615410400011466 0ustar00""" RQ command line tool """ import logging import logging.config import os import sys import warnings from typing import List, Type import click from redis.exceptions import ConnectionError from rq import Connection, Retry from rq import __version__ as version from rq.cli.helpers import ( parse_function_args, parse_schedule, pass_cli_config, read_config_file, refresh, setup_loghandlers_from_args, show_both, show_queues, show_workers, ) # from rq.cli.pool import pool from rq.contrib.legacy import cleanup_ghosts from rq.defaults import ( DEFAULT_JOB_MONITORING_INTERVAL, DEFAULT_LOGGING_DATE_FORMAT, DEFAULT_LOGGING_FORMAT, DEFAULT_MAINTENANCE_TASK_INTERVAL, DEFAULT_RESULT_TTL, DEFAULT_WORKER_TTL, ) from rq.exceptions import InvalidJobOperationError from rq.job import Job, JobStatus from rq.logutils import blue from rq.registry import FailedJobRegistry, clean_registries from rq.serializers import DefaultSerializer from rq.suspension import is_suspended from rq.suspension import resume as connection_resume from rq.suspension import suspend as connection_suspend from rq.utils import get_call_string, import_attribute from rq.worker import Worker from rq.worker_pool import WorkerPool from rq.worker_registration import clean_worker_registry @click.group() @click.version_option(version) def main(): """RQ command line tool.""" pass @main.command() @click.option('--all', '-a', is_flag=True, help='Empty all queues') @click.argument('queues', nargs=-1) @pass_cli_config def empty(cli_config, all, queues, serializer, **options): """Empty given queues.""" if all: queues = cli_config.queue_class.all( connection=cli_config.connection, job_class=cli_config.job_class, death_penalty_class=cli_config.death_penalty_class, serializer=serializer, ) else: queues = [ cli_config.queue_class( queue, connection=cli_config.connection, job_class=cli_config.job_class, serializer=serializer ) for queue in queues ] if not queues: click.echo('Nothing to do') sys.exit(0) for queue in queues: num_jobs = queue.empty() click.echo('{0} jobs removed from {1} queue'.format(num_jobs, queue.name)) @main.command() @click.option('--all', '-a', is_flag=True, help='Requeue all failed jobs') @click.option('--queue', required=True, type=str) @click.argument('job_ids', nargs=-1) @pass_cli_config def requeue(cli_config, queue, all, job_class, serializer, job_ids, **options): """Requeue failed jobs.""" failed_job_registry = FailedJobRegistry( queue, connection=cli_config.connection, job_class=job_class, serializer=serializer ) if all: job_ids = failed_job_registry.get_job_ids() if not job_ids: click.echo('Nothing to do') sys.exit(0) click.echo('Requeueing {0} jobs from failed queue'.format(len(job_ids))) fail_count = 0 with click.progressbar(job_ids) as job_ids: for job_id in job_ids: try: failed_job_registry.requeue(job_id) except InvalidJobOperationError: fail_count += 1 if fail_count > 0: click.secho('Unable to requeue {0} jobs from failed job registry'.format(fail_count), fg='red') @main.command() @click.option('--interval', '-i', type=float, help='Updates stats every N seconds (default: don\'t poll)') @click.option('--raw', '-r', is_flag=True, help='Print only the raw numbers, no bar charts') @click.option('--only-queues', '-Q', is_flag=True, help='Show only queue info') @click.option('--only-workers', '-W', is_flag=True, help='Show only worker info') @click.option('--by-queue', '-R', is_flag=True, 
help='Shows workers by queue') @click.argument('queues', nargs=-1) @pass_cli_config def info(cli_config, interval, raw, only_queues, only_workers, by_queue, queues, **options): """RQ command-line monitor.""" if only_queues: func = show_queues elif only_workers: func = show_workers else: func = show_both try: with Connection(cli_config.connection): if queues: qs = list(map(cli_config.queue_class, queues)) else: qs = cli_config.queue_class.all() for queue in qs: clean_registries(queue) clean_worker_registry(queue) refresh(interval, func, qs, raw, by_queue, cli_config.queue_class, cli_config.worker_class) except ConnectionError as e: click.echo(e) sys.exit(1) except KeyboardInterrupt: click.echo() sys.exit(0) @main.command() @click.option('--burst', '-b', is_flag=True, help='Run in burst mode (quit after all work is done)') @click.option('--logging_level', type=str, default="INFO", help='Set logging level') @click.option('--log-format', type=str, default=DEFAULT_LOGGING_FORMAT, help='Set the format of the logs') @click.option('--date-format', type=str, default=DEFAULT_LOGGING_DATE_FORMAT, help='Set the date format of the logs') @click.option('--name', '-n', help='Specify a different name') @click.option('--results-ttl', type=int, default=DEFAULT_RESULT_TTL, help='Default results timeout to be used') @click.option('--worker-ttl', type=int, default=DEFAULT_WORKER_TTL, help='Worker timeout to be used') @click.option( '--maintenance-interval', type=int, default=DEFAULT_MAINTENANCE_TASK_INTERVAL, help='Maintenance task interval (in seconds) to be used', ) @click.option( '--job-monitoring-interval', type=int, default=DEFAULT_JOB_MONITORING_INTERVAL, help='Default job monitoring interval to be used', ) @click.option('--disable-job-desc-logging', is_flag=True, help='Turn off description logging.') @click.option('--verbose', '-v', is_flag=True, help='Show more output') @click.option('--quiet', '-q', is_flag=True, help='Show less output') @click.option('--sentry-ca-certs', envvar='RQ_SENTRY_CA_CERTS', help='Path to CRT file for Sentry DSN') @click.option('--sentry-debug', envvar='RQ_SENTRY_DEBUG', help='Enable debug') @click.option('--sentry-dsn', envvar='RQ_SENTRY_DSN', help='Report exceptions to this Sentry DSN') @click.option('--exception-handler', help='Exception handler(s) to use', multiple=True) @click.option('--pid', help='Write the process ID number to a file at the specified path') @click.option('--disable-default-exception-handler', '-d', is_flag=True, help='Disable RQ\'s default exception handler') @click.option('--max-jobs', type=int, default=None, help='Maximum number of jobs to execute') @click.option('--max-idle-time', type=int, default=None, help='Maximum seconds to stay alive without jobs to execute') @click.option('--with-scheduler', '-s', is_flag=True, help='Run worker with scheduler') @click.option('--serializer', '-S', default=None, help='Run worker with custom serializer') @click.option( '--dequeue-strategy', '-ds', default='default', help='Sets a custom stratey to dequeue from multiple queues' ) @click.argument('queues', nargs=-1) @pass_cli_config def worker( cli_config, burst, logging_level, name, results_ttl, worker_ttl, maintenance_interval, job_monitoring_interval, disable_job_desc_logging, verbose, quiet, sentry_ca_certs, sentry_debug, sentry_dsn, exception_handler, pid, disable_default_exception_handler, max_jobs, max_idle_time, with_scheduler, queues, log_format, date_format, serializer, dequeue_strategy, **options, ): """Starts an RQ worker.""" settings = 
read_config_file(cli_config.config) if cli_config.config else {} # Worker specific default arguments queues = queues or settings.get('QUEUES', ['default']) sentry_ca_certs = sentry_ca_certs or settings.get('SENTRY_CA_CERTS') sentry_debug = sentry_debug or settings.get('SENTRY_DEBUG') sentry_dsn = sentry_dsn or settings.get('SENTRY_DSN') name = name or settings.get('NAME') dict_config = settings.get('DICT_CONFIG') if dict_config: logging.config.dictConfig(dict_config) if pid: with open(os.path.expanduser(pid), "w") as fp: fp.write(str(os.getpid())) worker_name = cli_config.worker_class.__qualname__ if worker_name in ["RoundRobinWorker", "RandomWorker"]: strategy_alternative = "random" if worker_name == "RandomWorker" else "round_robin" msg = f"WARNING: {worker_name} is deprecated. Use `--dequeue-strategy {strategy_alternative}` instead." warnings.warn(msg, DeprecationWarning) click.secho(msg, fg='yellow') if dequeue_strategy not in ("default", "random", "round_robin"): click.secho( "ERROR: Dequeue Strategy can only be one of `default`, `random` or `round_robin`.", err=True, fg='red' ) sys.exit(1) setup_loghandlers_from_args(verbose, quiet, date_format, log_format) try: cleanup_ghosts(cli_config.connection, worker_class=cli_config.worker_class) exception_handlers = [] for h in exception_handler: exception_handlers.append(import_attribute(h)) if is_suspended(cli_config.connection): click.secho('RQ is currently suspended, to resume job execution run "rq resume"', fg='red') sys.exit(1) queues = [ cli_config.queue_class( queue, connection=cli_config.connection, job_class=cli_config.job_class, serializer=serializer ) for queue in queues ] worker = cli_config.worker_class( queues, name=name, connection=cli_config.connection, default_worker_ttl=worker_ttl, default_result_ttl=results_ttl, maintenance_interval=maintenance_interval, job_monitoring_interval=job_monitoring_interval, job_class=cli_config.job_class, queue_class=cli_config.queue_class, exception_handlers=exception_handlers or None, disable_default_exception_handler=disable_default_exception_handler, log_job_description=not disable_job_desc_logging, serializer=serializer, ) # Should we configure Sentry? if sentry_dsn: sentry_opts = {"ca_certs": sentry_ca_certs, "debug": sentry_debug} from rq.contrib.sentry import register_sentry register_sentry(sentry_dsn, **sentry_opts) # if --verbose or --quiet, override --logging_level if verbose or quiet: logging_level = None worker.work( burst=burst, logging_level=logging_level, date_format=date_format, log_format=log_format, max_jobs=max_jobs, max_idle_time=max_idle_time, with_scheduler=with_scheduler, dequeue_strategy=dequeue_strategy, ) except ConnectionError as e: logging.error(e) sys.exit(1) @main.command() @click.option('--duration', help='Seconds you want the workers to be suspended. Default is forever.', type=int) @pass_cli_config def suspend(cli_config, duration, **options): """Suspends all workers, to resume run `rq resume`""" if duration is not None and duration < 1: click.echo("Duration must be an integer greater than 1") sys.exit(1) connection_suspend(cli_config.connection, duration) if duration: msg = """Suspending workers for {0} seconds. No new jobs will be started during that time, but then will automatically resume""".format( duration ) click.echo(msg) else: click.echo("Suspending workers. No new jobs will be started. 
But current jobs will be completed") @main.command() @pass_cli_config def resume(cli_config, **options): """Resumes processing of queues, that were suspended with `rq suspend`""" connection_resume(cli_config.connection) click.echo("Resuming workers.") @main.command() @click.option('--queue', '-q', help='The name of the queue.', default='default') @click.option( '--timeout', help='Specifies the maximum runtime of the job before it is interrupted and marked as failed.' ) @click.option('--result-ttl', help='Specifies how long successful jobs and their results are kept.') @click.option('--ttl', help='Specifies the maximum queued time of the job before it is discarded.') @click.option('--failure-ttl', help='Specifies how long failed jobs are kept.') @click.option('--description', help='Additional description of the job') @click.option( '--depends-on', help='Specifies another job id that must complete before this job will be queued.', multiple=True ) @click.option('--job-id', help='The id of this job') @click.option('--at-front', is_flag=True, help='Will place the job at the front of the queue, instead of the end') @click.option('--retry-max', help='Maximum amount of retries', default=0, type=int) @click.option('--retry-interval', help='Interval between retries in seconds', multiple=True, type=int, default=[0]) @click.option('--schedule-in', help='Delay until the function is enqueued (e.g. 10s, 5m, 2d).') @click.option( '--schedule-at', help='Schedule job to be enqueued at a certain time formatted in ISO 8601 without ' 'timezone (e.g. 2021-05-27T21:45:00).', ) @click.option('--quiet', is_flag=True, help='Only logs errors.') @click.argument('function') @click.argument('arguments', nargs=-1) @pass_cli_config def enqueue( cli_config, queue, timeout, result_ttl, ttl, failure_ttl, description, depends_on, job_id, at_front, retry_max, retry_interval, schedule_in, schedule_at, quiet, serializer, function, arguments, **options, ): """Enqueues a job from the command line""" args, kwargs = parse_function_args(arguments) function_string = get_call_string(function, args, kwargs) description = description or function_string retry = None if retry_max > 0: retry = Retry(retry_max, retry_interval) schedule = parse_schedule(schedule_in, schedule_at) with Connection(cli_config.connection): queue = cli_config.queue_class(queue, serializer=serializer) if schedule is None: job = queue.enqueue_call( function, args, kwargs, timeout, result_ttl, ttl, failure_ttl, description, depends_on, job_id, at_front, None, retry, ) else: job = queue.create_job( function, args, kwargs, timeout, result_ttl, ttl, failure_ttl, description, depends_on, job_id, None, JobStatus.SCHEDULED, retry, ) queue.schedule_job(job, schedule) if not quiet: click.echo('Enqueued %s with job-id \'%s\'.' 
% (blue(function_string), job.id)) @main.command() @click.option('--burst', '-b', is_flag=True, help='Run in burst mode (quit after all work is done)') @click.option('--logging-level', '-l', type=str, default="INFO", help='Set logging level') @click.option('--sentry-ca-certs', envvar='RQ_SENTRY_CA_CERTS', help='Path to CRT file for Sentry DSN') @click.option('--sentry-debug', envvar='RQ_SENTRY_DEBUG', help='Enable debug') @click.option('--sentry-dsn', envvar='RQ_SENTRY_DSN', help='Report exceptions to this Sentry DSN') @click.option('--verbose', '-v', is_flag=True, help='Show more output') @click.option('--quiet', '-q', is_flag=True, help='Show less output') @click.option('--log-format', type=str, default=DEFAULT_LOGGING_FORMAT, help='Set the format of the logs') @click.option('--date-format', type=str, default=DEFAULT_LOGGING_DATE_FORMAT, help='Set the date format of the logs') @click.option('--job-class', type=str, default=None, help='Dotted path to a Job class') @click.argument('queues', nargs=-1) @click.option('--num-workers', '-n', type=int, default=1, help='Number of workers to start') @pass_cli_config def worker_pool( cli_config, burst: bool, logging_level, queues, serializer, sentry_ca_certs, sentry_debug, sentry_dsn, verbose, quiet, log_format, date_format, worker_class, job_class, num_workers, **options, ): """Starts a RQ worker pool""" settings = read_config_file(cli_config.config) if cli_config.config else {} # Worker specific default arguments queue_names: List[str] = queues or settings.get('QUEUES', ['default']) sentry_ca_certs = sentry_ca_certs or settings.get('SENTRY_CA_CERTS') sentry_debug = sentry_debug or settings.get('SENTRY_DEBUG') sentry_dsn = sentry_dsn or settings.get('SENTRY_DSN') setup_loghandlers_from_args(verbose, quiet, date_format, log_format) if serializer: serializer_class: Type[DefaultSerializer] = import_attribute(serializer) else: serializer_class = DefaultSerializer if worker_class: worker_class = import_attribute(worker_class) else: worker_class = Worker if job_class: job_class = import_attribute(job_class) else: job_class = Job pool = WorkerPool( queue_names, connection=cli_config.connection, num_workers=num_workers, serializer=serializer_class, worker_class=worker_class, job_class=job_class, ) pool.start(burst=burst, logging_level=logging_level) # Should we configure Sentry? 
if sentry_dsn: sentry_opts = {"ca_certs": sentry_ca_certs, "debug": sentry_debug} from rq.contrib.sentry import register_sentry register_sentry(sentry_dsn, **sentry_opts) if __name__ == '__main__': main() rq-1.16.2/rq/cli/helpers.py0000644000000000000000000003423113615410400012353 0ustar00import importlib import os import sys import time from ast import literal_eval from datetime import datetime, timedelta, timezone from enum import Enum from functools import partial, update_wrapper from json import JSONDecodeError, loads from shutil import get_terminal_size import click from redis import Redis from redis.sentinel import Sentinel from rq.defaults import ( DEFAULT_CONNECTION_CLASS, DEFAULT_DEATH_PENALTY_CLASS, DEFAULT_JOB_CLASS, DEFAULT_QUEUE_CLASS, DEFAULT_SERIALIZER_CLASS, DEFAULT_WORKER_CLASS, ) from rq.logutils import setup_loghandlers from rq.utils import import_attribute, parse_timeout from rq.worker import WorkerStatus red = partial(click.style, fg='red') green = partial(click.style, fg='green') yellow = partial(click.style, fg='yellow') def read_config_file(module): """Reads all UPPERCASE variables defined in the given module file.""" settings = importlib.import_module(module) return dict([(k, v) for k, v in settings.__dict__.items() if k.upper() == k]) def get_redis_from_config(settings, connection_class=Redis): """Returns a StrictRedis instance from a dictionary of settings. To use redis sentinel, you must specify a dictionary in the configuration file. Example of a dictionary with keys without values: SENTINEL = {'INSTANCES':, 'SOCKET_TIMEOUT':, 'USERNAME':, 'PASSWORD':, 'DB':, 'MASTER_NAME':, 'SENTINEL_KWARGS':} """ if settings.get('REDIS_URL') is not None: return connection_class.from_url(settings['REDIS_URL']) elif settings.get('SENTINEL') is not None: instances = settings['SENTINEL'].get('INSTANCES', [('localhost', 26379)]) master_name = settings['SENTINEL'].get('MASTER_NAME', 'mymaster') connection_kwargs = { 'db': settings['SENTINEL'].get('DB', 0), 'username': settings['SENTINEL'].get('USERNAME', None), 'password': settings['SENTINEL'].get('PASSWORD', None), 'socket_timeout': settings['SENTINEL'].get('SOCKET_TIMEOUT', None), 'ssl': settings['SENTINEL'].get('SSL', False), } connection_kwargs.update(settings['SENTINEL'].get('CONNECTION_KWARGS', {})) sentinel_kwargs = settings['SENTINEL'].get('SENTINEL_KWARGS', {}) sn = Sentinel(instances, sentinel_kwargs=sentinel_kwargs, **connection_kwargs) return sn.master_for(master_name) ssl = settings.get('REDIS_SSL', False) if isinstance(ssl, str): if ssl.lower() in ['y', 'yes', 't', 'true']: ssl = True elif ssl.lower() in ['n', 'no', 'f', 'false', '']: ssl = False else: raise ValueError('REDIS_SSL is a boolean and must be "True" or "False".') kwargs = { 'host': settings.get('REDIS_HOST', 'localhost'), 'port': settings.get('REDIS_PORT', 6379), 'db': settings.get('REDIS_DB', 0), 'password': settings.get('REDIS_PASSWORD', None), 'ssl': ssl, 'ssl_ca_certs': settings.get('REDIS_SSL_CA_CERTS', None), 'ssl_cert_reqs': settings.get('REDIS_SSL_CERT_REQS', 'required'), } return connection_class(**kwargs) def pad(s, pad_to_length): """Pads the given string to the given length.""" return ('%-' + '%ds' % pad_to_length) % (s,) def get_scale(x): """Finds the lowest scale where x <= scale.""" scales = [20, 50, 100, 200, 400, 600, 800, 1000] for scale in scales: if x <= scale: return scale return x def state_symbol(state): symbols = { WorkerStatus.BUSY: red('busy'), WorkerStatus.IDLE: green('idle'), WorkerStatus.SUSPENDED: yellow('suspended'), } try: 
return symbols[state] except KeyError: return state def show_queues(queues, raw, by_queue, queue_class, worker_class): num_jobs = 0 termwidth = get_terminal_size().columns chartwidth = min(20, termwidth - 20) max_count = 0 counts = dict() for q in queues: count = q.count counts[q] = count max_count = max(max_count, count) scale = get_scale(max_count) ratio = chartwidth * 1.0 / scale for q in queues: count = counts[q] if not raw: chart = green('|' + '█' * int(ratio * count)) line = '%-12s %s %d, %d executing, %d finished, %d failed' % ( q.name, chart, count, q.started_job_registry.count, q.finished_job_registry.count, q.failed_job_registry.count, ) else: line = 'queue %s %d, %d executing, %d finished, %d failed' % ( q.name, count, q.started_job_registry.count, q.finished_job_registry.count, q.failed_job_registry.count, ) click.echo(line) num_jobs += count # print summary when not in raw mode if not raw: click.echo('%d queues, %d jobs total' % (len(queues), num_jobs)) def show_workers(queues, raw, by_queue, queue_class, worker_class): workers = set() if queues: for queue in queues: for worker in worker_class.all(queue=queue): workers.add(worker) else: for worker in worker_class.all(): workers.add(worker) if not by_queue: for worker in workers: queue_names = ', '.join(worker.queue_names()) name = '%s (%s %s %s)' % (worker.name, worker.hostname, worker.ip_address, worker.pid) if not raw: line = '%s: %s %s. jobs: %d finished, %d failed' % ( name, state_symbol(worker.get_state()), queue_names, worker.successful_job_count, worker.failed_job_count, ) click.echo(line) else: line = 'worker %s %s %s. jobs: %d finished, %d failed' % ( name, worker.get_state(), queue_names, worker.successful_job_count, worker.failed_job_count, ) click.echo(line) else: # Display workers by queue queue_dict = {} for queue in queues: queue_dict[queue] = worker_class.all(queue=queue) if queue_dict: max_length = max(len(q.name) for q, in queue_dict.keys()) else: max_length = 0 for queue in queue_dict: if queue_dict[queue]: queues_str = ", ".join( sorted(map(lambda w: '%s (%s)' % (w.name, state_symbol(w.get_state())), queue_dict[queue])) ) else: queues_str = '–' click.echo('%s %s' % (pad(queue.name + ':', max_length + 1), queues_str)) if not raw: click.echo('%d workers, %d queues' % (len(workers), len(queues))) def show_both(queues, raw, by_queue, queue_class, worker_class): show_queues(queues, raw, by_queue, queue_class, worker_class) if not raw: click.echo('') show_workers(queues, raw, by_queue, queue_class, worker_class) if not raw: click.echo('') import datetime click.echo('Updated: %s' % datetime.datetime.now()) def refresh(interval, func, *args): while True: if interval: click.clear() func(*args) if interval: time.sleep(interval) else: break def setup_loghandlers_from_args(verbose, quiet, date_format, log_format): if verbose and quiet: raise RuntimeError("Flags --verbose and --quiet are mutually exclusive.") if verbose: level = 'DEBUG' elif quiet: level = 'WARNING' else: level = 'INFO' setup_loghandlers(level, date_format=date_format, log_format=log_format) def parse_function_arg(argument, arg_pos): class ParsingMode(Enum): PLAIN_TEXT = 0 JSON = 1 LITERAL_EVAL = 2 keyword = None if argument.startswith(':'): # no keyword, json mode = ParsingMode.JSON value = argument[1:] elif argument.startswith('%'): # no keyword, literal_eval mode = ParsingMode.LITERAL_EVAL value = argument[1:] else: index = argument.find('=') if index > 0: if ':' in argument and argument.index(':') + 1 == index: # keyword, json mode = 
ParsingMode.JSON keyword = argument[: index - 1] elif '%' in argument and argument.index('%') + 1 == index: # keyword, literal_eval mode = ParsingMode.LITERAL_EVAL keyword = argument[: index - 1] else: # keyword, text mode = ParsingMode.PLAIN_TEXT keyword = argument[:index] value = argument[index + 1 :] else: # no keyword, text mode = ParsingMode.PLAIN_TEXT value = argument if value.startswith('@'): try: with open(value[1:], 'r') as file: value = file.read() except FileNotFoundError: raise click.FileError(value[1:], 'Not found') if mode == ParsingMode.JSON: # json try: value = loads(value) except JSONDecodeError: raise click.BadParameter('Unable to parse %s as JSON.' % (keyword or '%s. non keyword argument' % arg_pos)) elif mode == ParsingMode.LITERAL_EVAL: # literal_eval try: value = literal_eval(value) except Exception: raise click.BadParameter( 'Unable to eval %s as Python object. See ' 'https://docs.python.org/3/library/ast.html#ast.literal_eval' % (keyword or '%s. non keyword argument' % arg_pos) ) return keyword, value def parse_function_args(arguments): args = [] kwargs = {} for argument in arguments: keyword, value = parse_function_arg(argument, len(args) + 1) if keyword is not None: if keyword in kwargs: raise click.BadParameter('You can\'t specify multiple values for the same keyword.') kwargs[keyword] = value else: args.append(value) return args, kwargs def parse_schedule(schedule_in, schedule_at): if schedule_in is not None: if schedule_at is not None: raise click.BadArgumentUsage('You can\'t specify both --schedule-in and --schedule-at') return datetime.now(timezone.utc) + timedelta(seconds=parse_timeout(schedule_in)) elif schedule_at is not None: return datetime.strptime(schedule_at, '%Y-%m-%dT%H:%M:%S') class CliConfig: """A helper class to be used with click commands, to handle shared options""" def __init__( self, url=None, config=None, worker_class=DEFAULT_WORKER_CLASS, job_class=DEFAULT_JOB_CLASS, death_penalty_class=DEFAULT_DEATH_PENALTY_CLASS, queue_class=DEFAULT_QUEUE_CLASS, connection_class=DEFAULT_CONNECTION_CLASS, path=None, *args, **kwargs, ): self._connection = None self.url = url self.config = config if path: for pth in path: sys.path.append(pth) try: self.worker_class = import_attribute(worker_class) except (ImportError, AttributeError) as exc: raise click.BadParameter(str(exc), param_hint='--worker-class') try: self.job_class = import_attribute(job_class) except (ImportError, AttributeError) as exc: raise click.BadParameter(str(exc), param_hint='--job-class') try: self.death_penalty_class = import_attribute(death_penalty_class) except (ImportError, AttributeError) as exc: raise click.BadParameter(str(exc), param_hint='--death-penalty-class') try: self.queue_class = import_attribute(queue_class) except (ImportError, AttributeError) as exc: raise click.BadParameter(str(exc), param_hint='--queue-class') try: self.connection_class = import_attribute(connection_class) except (ImportError, AttributeError) as exc: raise click.BadParameter(str(exc), param_hint='--connection-class') @property def connection(self): if self._connection is None: if self.url: self._connection = self.connection_class.from_url(self.url) elif self.config: settings = read_config_file(self.config) if self.config else {} self._connection = get_redis_from_config(settings, self.connection_class) else: self._connection = get_redis_from_config(os.environ, self.connection_class) return self._connection shared_options = [ click.option('--url', '-u', envvar='RQ_REDIS_URL', help='URL describing Redis 
connection details.'), click.option('--config', '-c', envvar='RQ_CONFIG', help='Module containing RQ settings.'), click.option( '--worker-class', '-w', envvar='RQ_WORKER_CLASS', default=DEFAULT_WORKER_CLASS, help='RQ Worker class to use' ), click.option('--job-class', '-j', envvar='RQ_JOB_CLASS', default=DEFAULT_JOB_CLASS, help='RQ Job class to use'), click.option('--queue-class', envvar='RQ_QUEUE_CLASS', default=DEFAULT_QUEUE_CLASS, help='RQ Queue class to use'), click.option( '--connection-class', envvar='RQ_CONNECTION_CLASS', default=DEFAULT_CONNECTION_CLASS, help='Redis client class to use', ), click.option('--path', '-P', default=['.'], help='Specify the import path.', multiple=True), click.option( '--serializer', '-S', default=DEFAULT_SERIALIZER_CLASS, help='Path to serializer, defaults to rq.serializers.DefaultSerializer', ), ] def pass_cli_config(func): # add all the shared options to the command for option in shared_options: func = option(func) # pass the cli config object into the command def wrapper(*args, **kwargs): ctx = click.get_current_context() cli_config = CliConfig(**kwargs) return ctx.invoke(func, cli_config, *args[1:], **kwargs) return update_wrapper(wrapper, func) rq-1.16.2/rq/contrib/__init__.py0000644000000000000000000000000013615410400013324 0ustar00rq-1.16.2/rq/contrib/legacy.py0000644000000000000000000000164513615410400013051 0ustar00import logging from rq import Worker, get_current_connection logger = logging.getLogger(__name__) def cleanup_ghosts(conn=None, worker_class=Worker): """ RQ versions < 0.3.6 suffered from a race condition where workers, when abruptly terminated, did not have a chance to clean up their worker registration, leading to reports of ghosted workers in `rqinfo`. Since 0.3.6, new worker registrations automatically expire, and the worker will make sure to refresh the registrations as long as it's alive. This function will clean up any of such legacy ghosted workers. """ conn = conn if conn else get_current_connection() for worker in worker_class.all(connection=conn): if conn.ttl(worker.key) == -1: ttl = worker.worker_ttl conn.expire(worker.key, ttl) logger.info('Marked ghosted worker {0} to expire in {1} seconds.'.format(worker.name, ttl)) rq-1.16.2/rq/contrib/sentry.py0000644000000000000000000000051413615410400013123 0ustar00def register_sentry(sentry_dsn, **opts): """Given a Raven client and an RQ worker, registers exception handlers with the worker so exceptions are logged to Sentry. """ import sentry_sdk from sentry_sdk.integrations.rq import RqIntegration sentry_sdk.init(sentry_dsn, integrations=[RqIntegration()], **opts) rq-1.16.2/tests/Dockerfile0000644000000000000000000000176313615410400012306 0ustar00FROM ubuntu:20.04 ARG DEBIAN_FRONTEND=noninteractive ENV LANG C.UTF-8 ENV LC_ALL C.UTF-8 RUN apt-get update && \ apt-get upgrade -y && \ apt-get install -y \ build-essential \ zlib1g-dev \ libncurses5-dev \ libgdbm-dev \ libnss3-dev \ libssl-dev \ libreadline-dev \ libffi-dev wget \ software-properties-common && \ add-apt-repository ppa:deadsnakes/ppa && \ apt-get update RUN apt-get install -y \ redis-server \ python3-pip \ stunnel \ python3.6 \ python3.7 \ python3.8 \ python3.9 \ python3.10 \ python3.6-distutils \ python3.7-distutils RUN apt-get clean && \ rm -rf /var/lib/apt/lists/* COPY tests/ssl_config/private.pem tests/ssl_config/stunnel.conf /etc/stunnel/ COPY . 
/tmp/rq WORKDIR /tmp/rq RUN set -e && \ python3 -m pip install --upgrade pip && \ python3 -m pip install --no-cache-dir hatch tox CMD stunnel \ & redis-server \ & RUN_SLOW_TESTS_TOO=1 RUN_SSL_TESTS=1 hatch run tox rq-1.16.2/tests/__init__.py0000644000000000000000000000624313615410400012423 0ustar00import logging import os import unittest import pytest from redis import Redis from rq import pop_connection, push_connection def find_empty_redis_database(ssl=False): """Tries to connect to a random Redis database (starting from 4), and will use/connect it when no keys are in there. """ for dbnum in range(4, 17): connection_kwargs = {'db': dbnum} if ssl: connection_kwargs['port'] = 9736 connection_kwargs['ssl'] = True connection_kwargs['ssl_cert_reqs'] = None # disable certificate validation testconn = Redis(**connection_kwargs) empty = testconn.dbsize() == 0 if empty: return testconn assert False, 'No empty Redis database found to run tests in.' def slow(f): f = pytest.mark.slow(f) return unittest.skipUnless(os.environ.get('RUN_SLOW_TESTS_TOO'), "Slow tests disabled")(f) def ssl_test(f): f = pytest.mark.ssl_test(f) return unittest.skipUnless(os.environ.get('RUN_SSL_TESTS'), "SSL tests disabled")(f) class TestCase(unittest.TestCase): """Base class to inherit test cases from for RQ. It sets up the Redis connection (available via self.connection), turns off logging to the terminal and flushes the Redis database before and after running each test. """ @classmethod def setUpClass(cls): # Set up connection to Redis cls.connection = find_empty_redis_database() # Shut up logging logging.disable(logging.ERROR) def setUp(self): # Flush beforewards (we like our hygiene) self.connection.flushdb() def tearDown(self): # Flush afterwards self.connection.flushdb() @classmethod def tearDownClass(cls): logging.disable(logging.NOTSET) class RQTestCase(unittest.TestCase): """Base class to inherit test cases from for RQ. It sets up the Redis connection (available via self.testconn), turns off logging to the terminal and flushes the Redis database before and after running each test. Also offers assertQueueContains(queue, that_func) assertion method. """ @classmethod def setUpClass(cls): # Set up connection to Redis testconn = find_empty_redis_database() push_connection(testconn) # Store the connection (for sanity checking) cls.testconn = testconn cls.connection = testconn # Shut up logging logging.disable(logging.ERROR) def setUp(self): # Flush beforewards (we like our hygiene) self.testconn.flushdb() def tearDown(self): # Flush afterwards self.testconn.flushdb() # Implement assertIsNotNone for Python runtimes < 2.7 or < 3.1 if not hasattr(unittest.TestCase, 'assertIsNotNone'): def assertIsNotNone(self, value, *args): # noqa self.assertNotEqual(value, None, *args) @classmethod def tearDownClass(cls): logging.disable(logging.NOTSET) # Pop the connection to Redis testconn = pop_connection() assert ( testconn == cls.testconn ), 'Wow, something really nasty happened to the Redis connection stack. Check your setup.' rq-1.16.2/tests/fixtures.py0000644000000000000000000001746713615410400012547 0ustar00""" This file contains all jobs that are used in tests. Each of these test fixtures has a slightly different characteristics. 
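For example, the simplest fixtures are plain functions with arguments and a
return value (a doctest-style sketch; ``say_hello`` is defined below):

    >>> say_hello('World')
    'Hi there, World!'
    >>> say_hello()
    'Hi there, Stranger!'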
""" import contextlib import os import signal import subprocess import sys import time from multiprocessing import Process from redis import Redis from rq import Connection, Queue, get_current_connection, get_current_job from rq.command import send_kill_horse_command, send_shutdown_command from rq.decorators import job from rq.job import Job from rq.worker import HerokuWorker, Worker def say_pid(): return os.getpid() def say_hello(name=None): """A job with a single argument and a return value.""" if name is None: name = 'Stranger' return 'Hi there, %s!' % (name,) async def say_hello_async(name=None): """A async job with a single argument and a return value.""" return say_hello(name) def say_hello_unicode(name=None): """A job with a single argument and a return value.""" return str(say_hello(name)) # noqa def do_nothing(): """The best job in the world.""" pass def raise_exc(): raise Exception('raise_exc error') def raise_exc_mock(): return raise_exc def div_by_zero(x): """Prepare for a division-by-zero exception.""" return x / 0 def long_process(): time.sleep(60) return def some_calculation(x, y, z=1): """Some arbitrary calculation with three numbers. Choose z smartly if you want a division by zero exception. """ return x * y / z def rpush(key, value, append_worker_name=False, sleep=0): """Push a value into a list in Redis. Useful for detecting the order in which jobs were executed.""" if sleep: time.sleep(sleep) if append_worker_name: value += ':' + get_current_job().worker_name redis = get_current_connection() redis.rpush(key, value) def check_dependencies_are_met(): return get_current_job().dependencies_are_met() def create_file(path): """Creates a file at the given path. Actually, leaves evidence that the job ran.""" with open(path, 'w') as f: f.write('Just a sentinel.') def create_file_after_timeout(path, timeout): time.sleep(timeout) create_file(path) def create_file_after_timeout_and_setsid(path, timeout): os.setsid() create_file_after_timeout(path, timeout) def launch_process_within_worker_and_store_pid(path, timeout): p = subprocess.Popen(['sleep', str(timeout)]) with open(path, 'w') as f: f.write('{}'.format(p.pid)) p.wait() def access_self(): assert get_current_connection() is not None assert get_current_job() is not None def modify_self(meta): j = get_current_job() j.meta.update(meta) j.save() def modify_self_and_error(meta): j = get_current_job() j.meta.update(meta) j.save() return 1 / 0 def echo(*args, **kwargs): return args, kwargs class Number: def __init__(self, value): self.value = value @classmethod def divide(cls, x, y): return x * y def div(self, y): return self.value / y class CallableObject: def __call__(self): return u"I'm callable" class UnicodeStringObject: def __repr__(self): return u'é' class ClassWithAStaticMethod: @staticmethod def static_method(): return u"I'm a static method" with Connection(): @job(queue='default') def decorated_job(x, y): return x + y def black_hole(job, *exc_info): # Don't fall through to default behaviour (moving to failed queue) return False def add_meta(job, *exc_info): job.meta = {'foo': 1} job.save() return True def save_key_ttl(key): # Stores key ttl in meta job = get_current_job() ttl = job.connection.ttl(key) job.meta = {'ttl': ttl} job.save_meta() def long_running_job(timeout=10): time.sleep(timeout) return 'Done sleeping...' def run_dummy_heroku_worker(sandbox, _imminent_shutdown_delay): """ Run the work horse for a simplified heroku worker where perform_job just creates two sentinel files 2 seconds apart. 
:param sandbox: directory to create files in :param _imminent_shutdown_delay: delay to use for HerokuWorker """ sys.stderr = open(os.path.join(sandbox, 'stderr.log'), 'w') class TestHerokuWorker(HerokuWorker): imminent_shutdown_delay = _imminent_shutdown_delay def perform_job(self, job, queue): create_file(os.path.join(sandbox, 'started')) # have to loop here rather than one sleep to avoid holding the GIL # and preventing signals being received for i in range(20): time.sleep(0.1) create_file(os.path.join(sandbox, 'finished')) w = TestHerokuWorker(Queue('dummy')) w.main_work_horse(None, None) class DummyQueue: pass def kill_worker(pid: int, double_kill: bool, interval: float = 1.5): # wait for the worker to be started over on the main process time.sleep(interval) os.kill(pid, signal.SIGTERM) if double_kill: # give the worker time to switch signal handler time.sleep(interval) os.kill(pid, signal.SIGTERM) class Serializer: def loads(self): pass def dumps(self): pass def start_worker(queue_name, conn_kwargs, worker_name, burst): """ Start a worker. We accept only serializable args, so that this can be executed via multiprocessing. """ # Silence stdout (thanks to ) with open(os.devnull, 'w') as devnull: with contextlib.redirect_stdout(devnull): w = Worker([queue_name], name=worker_name, connection=Redis(**conn_kwargs)) w.work(burst=burst) def start_worker_process(queue_name, connection=None, worker_name=None, burst=False): """ Use multiprocessing to start a new worker in a separate process. """ connection = connection or get_current_connection() conn_kwargs = connection.connection_pool.connection_kwargs p = Process(target=start_worker, args=(queue_name, conn_kwargs, worker_name, burst)) p.start() return p def burst_two_workers(queue, timeout=2, tries=5, pause=0.1): """ Get two workers working simultaneously in burst mode, on a given queue. Return after both workers have finished handling jobs, up to a fixed timeout on the worker that runs in another process. """ w1 = start_worker_process(queue.name, worker_name='w1', burst=True) w2 = Worker(queue, name='w2') jobs = queue.jobs if jobs: first_job = jobs[0] # Give the first worker process time to get started on the first job. # This is helpful in tests where we want to control which worker takes which job. n = 0 while n < tries and not first_job.is_started: time.sleep(pause) n += 1 # Now can start the second worker. 
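# w1 bursts in a child process while w2 (below) bursts in this process, so the
# two workers genuinely compete for jobs on the same queue. An illustrative
# call, assuming `queue` already has jobs enqueued:
#     burst_two_workers(queue, timeout=2)
# returns once w2 is done and w1 has been joined (or hit the timeout).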
w2.work(burst=True) w1.join(timeout) def save_result(job, connection, result): """Store job result in a key""" connection.set('success_callback:%s' % job.id, result, ex=60) def save_exception(job, connection, type, value, traceback): """Store job exception in a key""" connection.set('failure_callback:%s' % job.id, str(value), ex=60) def save_result_if_not_stopped(job, connection, result=""): connection.set('stopped_callback:%s' % job.id, result, ex=60) def erroneous_callback(job): """A callback that's not written properly""" pass def _send_shutdown_command(worker_name, connection_kwargs, delay=0.25): time.sleep(delay) send_shutdown_command(Redis(**connection_kwargs), worker_name) def _send_kill_horse_command(worker_name, connection_kwargs, delay=0.25): """Waits delay before sending kill-horse command""" time.sleep(delay) send_kill_horse_command(Redis(**connection_kwargs), worker_name) class CustomJob(Job): """A custom job class just to test it""" rq-1.16.2/tests/test.json0000644000000000000000000000002513615410400012154 0ustar00{ "test": true } rq-1.16.2/tests/test_callbacks.py0000644000000000000000000003512213615410400013640 0ustar00from datetime import timedelta from rq import Queue, Worker from rq.job import UNEVALUATED, Callback, Job, JobStatus from rq.serializers import JSONSerializer from rq.worker import SimpleWorker from tests import RQTestCase from tests.fixtures import ( div_by_zero, erroneous_callback, long_process, save_exception, save_result, save_result_if_not_stopped, say_hello, ) class QueueCallbackTestCase(RQTestCase): def test_enqueue_with_success_callback(self): """Test enqueue* methods with on_success""" queue = Queue(connection=self.testconn) # Only functions and builtins are supported as callback with self.assertRaises(ValueError): queue.enqueue(say_hello, on_success=Job.fetch) job = queue.enqueue(say_hello, on_success=print) job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.success_callback, print) job = queue.enqueue_in(timedelta(seconds=10), say_hello, on_success=print) job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.success_callback, print) # test string callbacks job = queue.enqueue(say_hello, on_success=Callback("print")) job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.success_callback, print) job = queue.enqueue_in(timedelta(seconds=10), say_hello, on_success=Callback("print")) job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.success_callback, print) def test_enqueue_with_failure_callback(self): """queue.enqueue* methods with on_failure is persisted correctly""" queue = Queue(connection=self.testconn) # Only functions and builtins are supported as callback with self.assertRaises(ValueError): queue.enqueue(say_hello, on_failure=Job.fetch) job = queue.enqueue(say_hello, on_failure=print) job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.failure_callback, print) job = queue.enqueue_in(timedelta(seconds=10), say_hello, on_failure=print) job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.failure_callback, print) # test string callbacks job = queue.enqueue(say_hello, on_failure=Callback("print")) job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.failure_callback, print) job = queue.enqueue_in(timedelta(seconds=10), say_hello, on_failure=Callback("print")) job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.failure_callback, print) def test_enqueue_with_stopped_callback(self): 
"""queue.enqueue* methods with on_stopped is persisted correctly""" queue = Queue(connection=self.testconn) # Only functions and builtins are supported as callback with self.assertRaises(ValueError): queue.enqueue(say_hello, on_stopped=Job.fetch) job = queue.enqueue(long_process, on_stopped=print) job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.stopped_callback, print) job = queue.enqueue_in(timedelta(seconds=10), long_process, on_stopped=print) job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.stopped_callback, print) # test string callbacks job = queue.enqueue(long_process, on_stopped=Callback("print")) job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.stopped_callback, print) job = queue.enqueue_in(timedelta(seconds=10), long_process, on_stopped=Callback("print")) job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.stopped_callback, print) def test_enqueue_many_callback(self): queue = Queue('example', connection=self.testconn) job_data = Queue.prepare_data( func=say_hello, on_success=print, on_failure=save_exception, on_stopped=save_result_if_not_stopped ) jobs = queue.enqueue_many([job_data]) assert jobs[0].success_callback == job_data.on_success assert jobs[0].failure_callback == job_data.on_failure assert jobs[0].stopped_callback == job_data.on_stopped class SyncJobCallback(RQTestCase): def test_success_callback(self): """Test success callback is executed only when job is successful""" queue = Queue(is_async=False) job = queue.enqueue(say_hello, on_success=save_result) self.assertEqual(job.get_status(), JobStatus.FINISHED) self.assertEqual(self.testconn.get('success_callback:%s' % job.id).decode(), job.result) job = queue.enqueue(div_by_zero, on_success=save_result) self.assertEqual(job.get_status(), JobStatus.FAILED) self.assertFalse(self.testconn.exists('success_callback:%s' % job.id)) # test string callbacks job = queue.enqueue(say_hello, on_success=Callback("tests.fixtures.save_result")) self.assertEqual(job.get_status(), JobStatus.FINISHED) self.assertEqual(self.testconn.get('success_callback:%s' % job.id).decode(), job.result) job = queue.enqueue(div_by_zero, on_success=Callback("tests.fixtures.save_result")) self.assertEqual(job.get_status(), JobStatus.FAILED) self.assertFalse(self.testconn.exists('success_callback:%s' % job.id)) def test_failure_callback(self): """queue.enqueue* methods with on_failure is persisted correctly""" queue = Queue(is_async=False) job = queue.enqueue(div_by_zero, on_failure=save_exception) self.assertEqual(job.get_status(), JobStatus.FAILED) self.assertIn('div_by_zero', self.testconn.get('failure_callback:%s' % job.id).decode()) job = queue.enqueue(div_by_zero, on_success=save_result) self.assertEqual(job.get_status(), JobStatus.FAILED) self.assertFalse(self.testconn.exists('failure_callback:%s' % job.id)) # test string callbacks job = queue.enqueue(div_by_zero, on_failure=Callback("tests.fixtures.save_exception")) self.assertEqual(job.get_status(), JobStatus.FAILED) self.assertIn('div_by_zero', self.testconn.get('failure_callback:%s' % job.id).decode()) job = queue.enqueue(div_by_zero, on_success=Callback("tests.fixtures.save_result")) self.assertEqual(job.get_status(), JobStatus.FAILED) self.assertFalse(self.testconn.exists('failure_callback:%s' % job.id)) def test_stopped_callback(self): """queue.enqueue* methods with on_stopped is persisted correctly""" connection = self.testconn queue = Queue('foo', connection=connection, serializer=JSONSerializer) 
worker = SimpleWorker('foo', connection=connection, serializer=JSONSerializer) job = queue.enqueue(long_process, on_stopped=save_result_if_not_stopped) job.execute_stopped_callback( worker.death_penalty_class ) # Calling execute_stopped_callback directly for coverage self.assertTrue(self.testconn.exists('stopped_callback:%s' % job.id)) # test string callbacks job = queue.enqueue(long_process, on_stopped=Callback("tests.fixtures.save_result_if_not_stopped")) job.execute_stopped_callback( worker.death_penalty_class ) # Calling execute_stopped_callback directly for coverage self.assertTrue(self.testconn.exists('stopped_callback:%s' % job.id)) class WorkerCallbackTestCase(RQTestCase): def test_success_callback(self): """Test success callback is executed only when job is successful""" queue = Queue(connection=self.testconn) worker = SimpleWorker([queue]) # Callback is executed when job is successfully executed job = queue.enqueue(say_hello, on_success=save_result) worker.work(burst=True) self.assertEqual(job.get_status(), JobStatus.FINISHED) self.assertEqual(self.testconn.get('success_callback:%s' % job.id).decode(), job.return_value()) job = queue.enqueue(div_by_zero, on_success=save_result) worker.work(burst=True) self.assertEqual(job.get_status(), JobStatus.FAILED) self.assertFalse(self.testconn.exists('success_callback:%s' % job.id)) # test string callbacks job = queue.enqueue(say_hello, on_success=Callback("tests.fixtures.save_result")) worker.work(burst=True) self.assertEqual(job.get_status(), JobStatus.FINISHED) self.assertEqual(self.testconn.get('success_callback:%s' % job.id).decode(), job.return_value()) job = queue.enqueue(div_by_zero, on_success=Callback("tests.fixtures.save_result")) worker.work(burst=True) self.assertEqual(job.get_status(), JobStatus.FAILED) self.assertFalse(self.testconn.exists('success_callback:%s' % job.id)) def test_erroneous_success_callback(self): """Test exception handling when executing success callback""" queue = Queue(connection=self.testconn) worker = Worker([queue]) # If success_callback raises an error, the job is considered failed job = queue.enqueue(say_hello, on_success=erroneous_callback) worker.work(burst=True) self.assertEqual(job.get_status(), JobStatus.FAILED) # test string callbacks job = queue.enqueue(say_hello, on_success=Callback("tests.fixtures.erroneous_callback")) worker.work(burst=True) self.assertEqual(job.get_status(), JobStatus.FAILED) def test_failure_callback(self): """Test failure callback is executed only when a job fails""" queue = Queue(connection=self.testconn) worker = SimpleWorker([queue]) # Callback is executed when job fails job = queue.enqueue(div_by_zero, on_failure=save_exception) worker.work(burst=True) self.assertEqual(job.get_status(), JobStatus.FAILED) job.refresh() print(job.exc_info) self.assertIn('div_by_zero', self.testconn.get('failure_callback:%s' % job.id).decode()) job = queue.enqueue(div_by_zero, on_success=save_result) worker.work(burst=True) self.assertEqual(job.get_status(), JobStatus.FAILED) self.assertFalse(self.testconn.exists('failure_callback:%s' % job.id)) # test string callbacks job = queue.enqueue(div_by_zero, on_failure=Callback("tests.fixtures.save_exception")) worker.work(burst=True) self.assertEqual(job.get_status(), JobStatus.FAILED) job.refresh() print(job.exc_info) self.assertIn('div_by_zero', self.testconn.get('failure_callback:%s' % job.id).decode()) job = queue.enqueue(div_by_zero, on_success=Callback("tests.fixtures.save_result")) worker.work(burst=True)
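# A failing job given only a string-based on_success callback must not leave a
# failure_callback key behind either, which is what the assertions below check.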
self.assertEqual(job.get_status(), JobStatus.FAILED) self.assertFalse(self.testconn.exists('failure_callback:%s' % job.id)) # TODO: add test case for error while executing failure callback class JobCallbackTestCase(RQTestCase): def test_job_creation_with_success_callback(self): """Ensure callbacks are created and persisted properly""" job = Job.create(say_hello) self.assertIsNone(job._success_callback_name) # _success_callback starts with UNEVALUATED self.assertEqual(job._success_callback, UNEVALUATED) self.assertEqual(job.success_callback, None) # _success_callback becomes `None` after `job.success_callback` is called if there's no success callback self.assertEqual(job._success_callback, None) # job.success_callback is assigned properly job = Job.create(say_hello, on_success=print) self.assertIsNotNone(job._success_callback_name) self.assertEqual(job.success_callback, print) job.save() job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.success_callback, print) # test string callbacks job = Job.create(say_hello, on_success=Callback("print")) self.assertIsNotNone(job._success_callback_name) self.assertEqual(job.success_callback, print) job.save() job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.success_callback, print) def test_job_creation_with_failure_callback(self): """Ensure failure callbacks are persisted properly""" job = Job.create(say_hello) self.assertIsNone(job._failure_callback_name) # _failure_callback starts with UNEVALUATED self.assertEqual(job._failure_callback, UNEVALUATED) self.assertEqual(job.failure_callback, None) # _failure_callback becomes `None` after `job.failure_callback` is called if there's no failure callback self.assertEqual(job._failure_callback, None) # job.failure_callback is assigned properly job = Job.create(say_hello, on_failure=print) self.assertIsNotNone(job._failure_callback_name) self.assertEqual(job.failure_callback, print) job.save() job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.failure_callback, print) # test string callbacks job = Job.create(say_hello, on_failure=Callback("print")) self.assertIsNotNone(job._failure_callback_name) self.assertEqual(job.failure_callback, print) job.save() job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.failure_callback, print) def test_job_creation_with_stopped_callback(self): """Ensure stopped callbacks are persisted properly""" job = Job.create(say_hello) self.assertIsNone(job._stopped_callback_name) # _stopped_callback starts with UNEVALUATED self.assertEqual(job._stopped_callback, UNEVALUATED) self.assertEqual(job.stopped_callback, None) # _stopped_callback becomes `None` after `job.stopped_callback` is called if there's no stopped callback self.assertEqual(job._stopped_callback, None) # job.stopped_callback is assigned properly job = Job.create(say_hello, on_stopped=print) self.assertIsNotNone(job._stopped_callback_name) self.assertEqual(job.stopped_callback, print) job.save() job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.stopped_callback, print) # test string callbacks job = Job.create(say_hello, on_stopped=Callback("print")) self.assertIsNotNone(job._stopped_callback_name) self.assertEqual(job.stopped_callback, print) job.save() job = Job.fetch(id=job.id, connection=self.testconn) self.assertEqual(job.stopped_callback, print) rq-1.16.2/tests/test_cli.py0000644000000000000000000010137413615410400012473 0ustar00import json import os from datetime import datetime, timedelta, timezone
from time import sleep from uuid import uuid4 import pytest from click.testing import CliRunner from redis import Redis from rq import Queue from rq.cli import main from rq.cli.helpers import CliConfig, parse_function_arg, parse_schedule, read_config_file from rq.job import Job, JobStatus from rq.registry import FailedJobRegistry, ScheduledJobRegistry from rq.scheduler import RQScheduler from rq.serializers import JSONSerializer from rq.timeouts import UnixSignalDeathPenalty from rq.worker import Worker, WorkerStatus from tests import RQTestCase from tests.fixtures import div_by_zero, say_hello class CLITestCase(RQTestCase): def setUp(self): super().setUp() db_num = self.testconn.connection_pool.connection_kwargs['db'] self.redis_url = 'redis://127.0.0.1:6379/%d' % db_num self.connection = Redis.from_url(self.redis_url) def assert_normal_execution(self, result): if result.exit_code == 0: return True else: print("Non normal execution") print("Exit Code: {}".format(result.exit_code)) print("Output: {}".format(result.output)) print("Exception: {}".format(result.exception)) self.assertEqual(result.exit_code, 0) class TestRQCli(CLITestCase): @pytest.fixture(autouse=True) def set_tmpdir(self, tmpdir): self.tmpdir = tmpdir def assert_normal_execution(self, result): if result.exit_code == 0: return True else: print("Non normal execution") print("Exit Code: {}".format(result.exit_code)) print("Output: {}".format(result.output)) print("Exception: {}".format(result.exception)) self.assertEqual(result.exit_code, 0) """Test rq_cli script""" def setUp(self): super().setUp() job = Job.create(func=div_by_zero, args=(1, 2, 3)) job.origin = 'fake' job.save() def test_config_file(self): settings = read_config_file('tests.config_files.dummy') self.assertIn('REDIS_HOST', settings) self.assertEqual(settings['REDIS_HOST'], 'testhost.example.com') def test_config_file_logging(self): runner = CliRunner() result = runner.invoke(main, ['worker', '-u', self.redis_url, '-b', '-c', 'tests.config_files.dummy_logging']) self.assert_normal_execution(result) def test_config_file_option(self): """""" cli_config = CliConfig(config='tests.config_files.dummy') self.assertEqual( cli_config.connection.connection_pool.connection_kwargs['host'], 'testhost.example.com', ) runner = CliRunner() result = runner.invoke(main, ['info', '--config', cli_config.config]) self.assertEqual(result.exit_code, 1) def test_config_file_default_options(self): """""" cli_config = CliConfig(config='tests.config_files.dummy') self.assertEqual( cli_config.connection.connection_pool.connection_kwargs['host'], 'testhost.example.com', ) self.assertEqual(cli_config.connection.connection_pool.connection_kwargs['port'], 6379) self.assertEqual(cli_config.connection.connection_pool.connection_kwargs['db'], 0) self.assertEqual(cli_config.connection.connection_pool.connection_kwargs['password'], None) def test_config_file_default_options_override(self): """""" cli_config = CliConfig(config='tests.config_files.dummy_override') self.assertEqual( cli_config.connection.connection_pool.connection_kwargs['host'], 'testhost.example.com', ) self.assertEqual(cli_config.connection.connection_pool.connection_kwargs['port'], 6378) self.assertEqual(cli_config.connection.connection_pool.connection_kwargs['db'], 2) self.assertEqual(cli_config.connection.connection_pool.connection_kwargs['password'], '123') def test_config_env_vars(self): os.environ['REDIS_HOST'] = "testhost.example.com" cli_config = CliConfig() self.assertEqual( 
cli_config.connection.connection_pool.connection_kwargs['host'], 'testhost.example.com', ) def test_death_penalty_class(self): cli_config = CliConfig() self.assertEqual(UnixSignalDeathPenalty, cli_config.death_penalty_class) cli_config = CliConfig(death_penalty_class='rq.job.Job') self.assertEqual(Job, cli_config.death_penalty_class) with self.assertRaises(ValueError): CliConfig(death_penalty_class='rq.abcd') def test_empty_nothing(self): """rq empty -u <url>""" runner = CliRunner() result = runner.invoke(main, ['empty', '-u', self.redis_url]) self.assert_normal_execution(result) self.assertEqual(result.output.strip(), 'Nothing to do') def test_requeue(self): """rq requeue -u <url> --all""" connection = Redis.from_url(self.redis_url) queue = Queue('requeue', connection=connection) registry = queue.failed_job_registry runner = CliRunner() job = queue.enqueue(div_by_zero) job2 = queue.enqueue(div_by_zero) job3 = queue.enqueue(div_by_zero) worker = Worker([queue]) worker.work(burst=True) self.assertIn(job, registry) self.assertIn(job2, registry) self.assertIn(job3, registry) result = runner.invoke(main, ['requeue', '-u', self.redis_url, '--queue', 'requeue', job.id]) self.assert_normal_execution(result) # Only the first specified job is requeued self.assertNotIn(job, registry) self.assertIn(job2, registry) self.assertIn(job3, registry) result = runner.invoke(main, ['requeue', '-u', self.redis_url, '--queue', 'requeue', '--all']) self.assert_normal_execution(result) # With --all flag, all failed jobs are requeued self.assertNotIn(job2, registry) self.assertNotIn(job3, registry) def test_requeue_with_serializer(self): """rq requeue -u <url> -S <serializer> --all""" connection = Redis.from_url(self.redis_url) queue = Queue('requeue', connection=connection, serializer=JSONSerializer) registry = queue.failed_job_registry runner = CliRunner() job = queue.enqueue(div_by_zero) job2 = queue.enqueue(div_by_zero) job3 = queue.enqueue(div_by_zero) worker = Worker([queue], serializer=JSONSerializer) worker.work(burst=True) self.assertIn(job, registry) self.assertIn(job2, registry) self.assertIn(job3, registry) result = runner.invoke( main, ['requeue', '-u', self.redis_url, '--queue', 'requeue', '-S', 'rq.serializers.JSONSerializer', job.id] ) self.assert_normal_execution(result) # Only the first specified job is requeued self.assertNotIn(job, registry) self.assertIn(job2, registry) self.assertIn(job3, registry) result = runner.invoke( main, ['requeue', '-u', self.redis_url, '--queue', 'requeue', '-S', 'rq.serializers.JSONSerializer', '--all'], ) self.assert_normal_execution(result) # With --all flag, all failed jobs are requeued self.assertNotIn(job2, registry) self.assertNotIn(job3, registry) def test_info(self): """rq info -u <url>""" runner = CliRunner() result = runner.invoke(main, ['info', '-u', self.redis_url]) self.assert_normal_execution(result) self.assertIn('0 queues, 0 jobs total', result.output) queue = Queue(connection=self.connection) queue.enqueue(say_hello) result = runner.invoke(main, ['info', '-u', self.redis_url]) self.assert_normal_execution(result) self.assertIn('1 queues, 1 jobs total', result.output) def test_info_only_queues(self): """rq info -u <url> --only-queues (-Q)""" runner = CliRunner() result = runner.invoke(main, ['info', '-u', self.redis_url, '--only-queues']) self.assert_normal_execution(result) self.assertIn('0 queues, 0 jobs total', result.output) queue = Queue(connection=self.connection) queue.enqueue(say_hello) result = runner.invoke(main, ['info', '-u', self.redis_url]) self.assert_normal_execution(result)
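# (Note: this second invocation omits the --only-queues flag, so the assertion
# below checks the default summary line, e.g. "1 queues, 1 jobs total".)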
self.assertIn('1 queues, 1 jobs total', result.output) def test_info_only_workers(self): """rq info -u <url> --only-workers (-W)""" runner = CliRunner() result = runner.invoke(main, ['info', '-u', self.redis_url, '--only-workers']) self.assert_normal_execution(result) self.assertIn('0 workers, 0 queue', result.output) result = runner.invoke(main, ['info', '--by-queue', '-u', self.redis_url, '--only-workers']) self.assert_normal_execution(result) self.assertIn('0 workers, 0 queue', result.output) worker = Worker(['default'], connection=self.connection) worker.register_birth() result = runner.invoke(main, ['info', '-u', self.redis_url, '--only-workers']) self.assert_normal_execution(result) self.assertIn('1 workers, 0 queues', result.output) worker.register_death() queue = Queue(connection=self.connection) queue.enqueue(say_hello) result = runner.invoke(main, ['info', '-u', self.redis_url, '--only-workers']) self.assert_normal_execution(result) self.assertIn('0 workers, 1 queues', result.output) foo_queue = Queue(name='foo', connection=self.connection) foo_queue.enqueue(say_hello) bar_queue = Queue(name='bar', connection=self.connection) bar_queue.enqueue(say_hello) worker_1 = Worker([foo_queue, bar_queue], connection=self.connection) worker_1.register_birth() worker_2 = Worker([foo_queue, bar_queue], connection=self.connection) worker_2.register_birth() worker_2.set_state(WorkerStatus.BUSY) result = runner.invoke(main, ['info', 'foo', 'bar', '-u', self.redis_url, '--only-workers']) self.assert_normal_execution(result) self.assertIn('2 workers, 2 queues', result.output) result = runner.invoke(main, ['info', 'foo', 'bar', '--by-queue', '-u', self.redis_url, '--only-workers']) self.assert_normal_execution(result) # Ensure both queues' workers are shown self.assertIn('foo:', result.output) self.assertIn('bar:', result.output) self.assertIn('2 workers, 2 queues', result.output) def test_worker(self): """rq worker -u <url> -b""" runner = CliRunner() result = runner.invoke(main, ['worker', '-u', self.redis_url, '-b']) self.assert_normal_execution(result) def test_worker_pid(self): """rq worker -u <url> /tmp/..""" pid = self.tmpdir.join('rq.pid') runner = CliRunner() result = runner.invoke(main, ['worker', '-u', self.redis_url, '-b', '--pid', str(pid)]) self.assertTrue(len(pid.read()) > 0) self.assert_normal_execution(result) def test_worker_with_scheduler(self): """rq worker -u <url> --with-scheduler""" queue = Queue(connection=self.connection) queue.enqueue_at(datetime(2019, 1, 1, tzinfo=timezone.utc), say_hello) registry = ScheduledJobRegistry(queue=queue) runner = CliRunner() result = runner.invoke(main, ['worker', '-u', self.redis_url, '-b']) self.assert_normal_execution(result) self.assertEqual(len(registry), 1) # 1 job still scheduled result = runner.invoke(main, ['worker', '-u', self.redis_url, '-b', '--with-scheduler']) self.assert_normal_execution(result) self.assertEqual(len(registry), 0) # Job has been enqueued def test_worker_logging_options(self): """--quiet and --verbose logging options are supported""" runner = CliRunner() args = ['worker', '-u', self.redis_url, '-b'] result = runner.invoke(main, args + ['--verbose']) self.assert_normal_execution(result) result = runner.invoke(main, args + ['--quiet']) self.assert_normal_execution(result) # --quiet and --verbose are mutually exclusive result = runner.invoke(main, args + ['--quiet', '--verbose']) self.assertNotEqual(result.exit_code, 0) def test_worker_dequeue_strategy(self): """--dequeue-strategy option is validated and passed to the worker""" runner = CliRunner()
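# Each strategy exercised below ('random', 'round_robin') should start a burst
# worker cleanly, while the unknown value 'wrong' must exit with code 1.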
args = ['worker', '-u', self.redis_url, '-b', '--dequeue-strategy', 'random'] result = runner.invoke(main, args) self.assert_normal_execution(result) args = ['worker', '-u', self.redis_url, '-b', '--dequeue-strategy', 'round_robin'] result = runner.invoke(main, args) self.assert_normal_execution(result) args = ['worker', '-u', self.redis_url, '-b', '--dequeue-strategy', 'wrong'] result = runner.invoke(main, args) self.assertEqual(result.exit_code, 1) def test_exception_handlers(self): """rq worker -u <url> -b --exception-handler <handler>""" connection = Redis.from_url(self.redis_url) q = Queue('default', connection=connection) runner = CliRunner() # If exception handler is not given, no custom exception handler is run job = q.enqueue(div_by_zero) runner.invoke(main, ['worker', '-u', self.redis_url, '-b']) registry = FailedJobRegistry(queue=q) self.assertTrue(job in registry) # If disable-default-exception-handler is given, job is not moved to FailedJobRegistry job = q.enqueue(div_by_zero) runner.invoke(main, ['worker', '-u', self.redis_url, '-b', '--disable-default-exception-handler']) registry = FailedJobRegistry(queue=q) self.assertFalse(job in registry) # Both default and custom exception handler is run job = q.enqueue(div_by_zero) runner.invoke(main, ['worker', '-u', self.redis_url, '-b', '--exception-handler', 'tests.fixtures.add_meta']) registry = FailedJobRegistry(queue=q) self.assertTrue(job in registry) job.refresh() self.assertEqual(job.meta, {'foo': 1}) # Only custom exception handler is run job = q.enqueue(div_by_zero) runner.invoke( main, [ 'worker', '-u', self.redis_url, '-b', '--exception-handler', 'tests.fixtures.add_meta', '--disable-default-exception-handler', ], ) registry = FailedJobRegistry(queue=q) self.assertFalse(job in registry) job.refresh() self.assertEqual(job.meta, {'foo': 1}) def test_suspend_and_resume(self): """rq suspend -u <url> rq worker -u <url> -b rq resume -u <url> """ runner = CliRunner() result = runner.invoke(main, ['suspend', '-u', self.redis_url]) self.assert_normal_execution(result) result = runner.invoke(main, ['worker', '-u', self.redis_url, '-b']) self.assertEqual(result.exit_code, 1) self.assertEqual(result.output.strip(), 'RQ is currently suspended, to resume job execution run "rq resume"') result = runner.invoke(main, ['resume', '-u', self.redis_url]) self.assert_normal_execution(result) def test_suspend_with_ttl(self): """rq suspend -u <url> --duration=2""" runner = CliRunner() result = runner.invoke(main, ['suspend', '-u', self.redis_url, '--duration', 1]) self.assert_normal_execution(result) def test_suspend_with_invalid_ttl(self): """rq suspend -u <url> --duration=0""" runner = CliRunner() result = runner.invoke(main, ['suspend', '-u', self.redis_url, '--duration', 0]) self.assertEqual(result.exit_code, 1) self.assertIn("Duration must be an integer greater than 1", result.output) def test_serializer(self): """rq worker -u <url> --serializer <serializer>""" connection = Redis.from_url(self.redis_url) q = Queue('default', connection=connection, serializer=JSONSerializer) runner = CliRunner() job = q.enqueue(say_hello) runner.invoke(main, ['worker', '-u', self.redis_url, '--serializer rq.serializer.JSONSerializer']) self.assertIn(job.id, q.job_ids) def test_cli_enqueue(self): """rq enqueue -u <url> tests.fixtures.say_hello""" queue = Queue(connection=self.connection) self.assertTrue(queue.is_empty()) runner = CliRunner() result = runner.invoke(main, ['enqueue', '-u', self.redis_url, 'tests.fixtures.say_hello']) self.assert_normal_execution(result) prefix = 'Enqueued tests.fixtures.say_hello() with job-id \'' suffix = '\'.\n' self.assertTrue(result.output.startswith(prefix)) self.assertTrue(result.output.endswith(suffix)) job_id = result.output[len(prefix) : -len(suffix)] queue_key = 'rq:queue:default' self.assertEqual(self.connection.llen(queue_key), 1) self.assertEqual(self.connection.lrange(queue_key, 0, -1)[0].decode('ascii'), job_id) worker = Worker(queue) worker.work(True) self.assertEqual(Job(job_id).result, 'Hi there, Stranger!') def test_cli_enqueue_with_serializer(self): """rq enqueue -u <url> -S rq.serializers.JSONSerializer tests.fixtures.say_hello""" queue = Queue(connection=self.connection, serializer=JSONSerializer) self.assertTrue(queue.is_empty()) runner = CliRunner() result = runner.invoke( main, ['enqueue', '-u', self.redis_url, '-S', 'rq.serializers.JSONSerializer', 'tests.fixtures.say_hello'] ) self.assert_normal_execution(result) prefix = 'Enqueued tests.fixtures.say_hello() with job-id \'' suffix = '\'.\n' self.assertTrue(result.output.startswith(prefix)) self.assertTrue(result.output.endswith(suffix)) job_id = result.output[len(prefix) : -len(suffix)] queue_key = 'rq:queue:default' self.assertEqual(self.connection.llen(queue_key), 1) self.assertEqual(self.connection.lrange(queue_key, 0, -1)[0].decode('ascii'), job_id) worker = Worker(queue, serializer=JSONSerializer) worker.work(True) self.assertEqual(Job(job_id, serializer=JSONSerializer).result, 'Hi there, Stranger!') def test_cli_enqueue_args(self): """rq enqueue -u <url> tests.fixtures.echo hello ':[1, {"key": "value"}]' json:=["abc"] nojson=def""" queue = Queue(connection=self.connection) self.assertTrue(queue.is_empty()) runner = CliRunner() result = runner.invoke( main, [ 'enqueue', '-u', self.redis_url, 'tests.fixtures.echo', 'hello', ':[1, {"key": "value"}]', ':@tests/test.json', '%1, 2', 'json:=[3.0, true]', 'nojson=abc', 'file=@tests/test.json', ], ) self.assert_normal_execution(result) job_id = self.connection.lrange('rq:queue:default', 0, -1)[0].decode('ascii') worker = Worker(queue) worker.work(True) args, kwargs = Job(job_id).result self.assertEqual(args, ('hello', [1, {'key': 'value'}], {"test": True}, (1, 2))) self.assertEqual(kwargs, {'json': [3.0, True], 'nojson': 'abc', 'file': '{\n "test": true\n}\n'}) def test_cli_enqueue_schedule_in(self): """rq enqueue -u <url> tests.fixtures.say_hello --schedule-in 1s""" queue = Queue(connection=self.connection) registry = ScheduledJobRegistry(queue=queue) worker = Worker(queue) scheduler = RQScheduler(queue, self.connection) self.assertTrue(len(queue) == 0) self.assertTrue(len(registry) == 0) runner = CliRunner() result = runner.invoke( main, ['enqueue', '-u', self.redis_url, 'tests.fixtures.say_hello', '--schedule-in', '10s'] ) self.assert_normal_execution(result) scheduler.acquire_locks() scheduler.enqueue_scheduled_jobs() self.assertTrue(len(queue) == 0) self.assertTrue(len(registry) == 1) self.assertFalse(worker.work(True)) sleep(11) scheduler.enqueue_scheduled_jobs() self.assertTrue(len(queue) == 1) self.assertTrue(len(registry) == 0) self.assertTrue(worker.work(True)) def test_cli_enqueue_schedule_at(self): """ rq enqueue -u <url> tests.fixtures.say_hello --schedule-at 2021-01-01T00:00:00 rq enqueue -u <url> tests.fixtures.say_hello --schedule-at 2100-01-01T00:00:00 """ queue = Queue(connection=self.connection) registry = ScheduledJobRegistry(queue=queue) worker = Worker(queue) scheduler = RQScheduler(queue, self.connection) self.assertTrue(len(queue) == 0) self.assertTrue(len(registry) == 0) runner = CliRunner() result = runner.invoke( main, ['enqueue', '-u', self.redis_url,
'tests.fixtures.say_hello', '--schedule-at', '2021-01-01T00:00:00'] ) self.assert_normal_execution(result) scheduler.acquire_locks() self.assertTrue(len(queue) == 0) self.assertTrue(len(registry) == 1) scheduler.enqueue_scheduled_jobs() self.assertTrue(len(queue) == 1) self.assertTrue(len(registry) == 0) self.assertTrue(worker.work(True)) self.assertTrue(len(queue) == 0) self.assertTrue(len(registry) == 0) result = runner.invoke( main, ['enqueue', '-u', self.redis_url, 'tests.fixtures.say_hello', '--schedule-at', '2100-01-01T00:00:00'] ) self.assert_normal_execution(result) self.assertTrue(len(queue) == 0) self.assertTrue(len(registry) == 1) scheduler.enqueue_scheduled_jobs() self.assertTrue(len(queue) == 0) self.assertTrue(len(registry) == 1) self.assertFalse(worker.work(True)) def test_cli_enqueue_retry(self): """rq enqueue -u <url> tests.fixtures.say_hello --retry-max 3 --retry-interval 10 --retry-interval 20 --retry-interval 40""" queue = Queue(connection=self.connection) self.assertTrue(queue.is_empty()) runner = CliRunner() result = runner.invoke( main, [ 'enqueue', '-u', self.redis_url, 'tests.fixtures.say_hello', '--retry-max', '3', '--retry-interval', '10', '--retry-interval', '20', '--retry-interval', '40', ], ) self.assert_normal_execution(result) job = Job.fetch( self.connection.lrange('rq:queue:default', 0, -1)[0].decode('ascii'), connection=self.connection ) self.assertEqual(job.retries_left, 3) self.assertEqual(job.retry_intervals, [10, 20, 40]) def test_cli_enqueue_errors(self): """ rq enqueue -u <url> tests.fixtures.echo :invalid_json rq enqueue -u <url> tests.fixtures.echo %invalid_eval_statement rq enqueue -u <url> tests.fixtures.echo key=value key=value rq enqueue -u <url> tests.fixtures.echo --schedule-in 1s --schedule-at 2000-01-01T00:00:00 rq enqueue -u <url> tests.fixtures.echo @not_existing_file """ runner = CliRunner() result = runner.invoke(main, ['enqueue', '-u', self.redis_url, 'tests.fixtures.echo', ':invalid_json']) self.assertNotEqual(result.exit_code, 0) self.assertIn('Unable to parse 1. non keyword argument as JSON.', result.output) result = runner.invoke( main, ['enqueue', '-u', self.redis_url, 'tests.fixtures.echo', '%invalid_eval_statement'] ) self.assertNotEqual(result.exit_code, 0) self.assertIn('Unable to eval 1.
non keyword argument as Python object.', result.output) result = runner.invoke(main, ['enqueue', '-u', self.redis_url, 'tests.fixtures.echo', 'key=value', 'key=value']) self.assertNotEqual(result.exit_code, 0) self.assertIn('You can\'t specify multiple values for the same keyword.', result.output) result = runner.invoke( main, [ 'enqueue', '-u', self.redis_url, 'tests.fixtures.echo', '--schedule-in', '1s', '--schedule-at', '2000-01-01T00:00:00', ], ) self.assertNotEqual(result.exit_code, 0) self.assertIn('You can\'t specify both --schedule-in and --schedule-at', result.output) result = runner.invoke(main, ['enqueue', '-u', self.redis_url, 'tests.fixtures.echo', '@not_existing_file']) self.assertNotEqual(result.exit_code, 0) self.assertIn('Not found', result.output) def test_parse_schedule(self): """executes the rq.cli.helpers.parse_schedule function""" self.assertEqual(parse_schedule(None, '2000-01-23T23:45:01'), datetime(2000, 1, 23, 23, 45, 1)) start = datetime.now(timezone.utc) + timedelta(minutes=5) middle = parse_schedule('5m', None) end = datetime.now(timezone.utc) + timedelta(minutes=5) self.assertGreater(middle, start) self.assertLess(middle, end) def test_parse_function_arg(self): """executes the rq.cli.helpers.parse_function_arg function""" self.assertEqual(parse_function_arg('abc', 0), (None, 'abc')) self.assertEqual(parse_function_arg(':{"json": true}', 1), (None, {'json': True})) self.assertEqual(parse_function_arg('%1, 2', 2), (None, (1, 2))) self.assertEqual(parse_function_arg('key=value', 3), ('key', 'value')) self.assertEqual(parse_function_arg('jsonkey:=["json", "value"]', 4), ('jsonkey', ['json', 'value'])) self.assertEqual(parse_function_arg('evalkey%=1.2', 5), ('evalkey', 1.2)) self.assertEqual(parse_function_arg(':@tests/test.json', 6), (None, {'test': True})) self.assertEqual(parse_function_arg('@tests/test.json', 7), (None, '{\n "test": true\n}\n')) def test_cli_enqueue_doc_test(self): """tests the examples of the documentation""" runner = CliRunner() id = str(uuid4()) result = runner.invoke(main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', 'abc']) self.assert_normal_execution(result) job = Job.fetch(id) self.assertEqual((job.args, job.kwargs), (['abc'], {})) id = str(uuid4()) result = runner.invoke( main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', 'abc=def'] ) self.assert_normal_execution(result) job = Job.fetch(id) self.assertEqual((job.args, job.kwargs), ([], {'abc': 'def'})) id = str(uuid4()) result = runner.invoke( main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', ':{"json": "abc"}'] ) self.assert_normal_execution(result) job = Job.fetch(id) self.assertEqual((job.args, job.kwargs), ([{'json': 'abc'}], {})) id = str(uuid4()) result = runner.invoke( main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', 'key:={"json": "abc"}'] ) self.assert_normal_execution(result) job = Job.fetch(id) self.assertEqual((job.args, job.kwargs), ([], {'key': {'json': 'abc'}})) id = str(uuid4()) result = runner.invoke(main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', '%1, 2']) self.assert_normal_execution(result) job = Job.fetch(id) self.assertEqual((job.args, job.kwargs), ([(1, 2)], {})) id = str(uuid4()) result = runner.invoke(main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', '%None']) self.assert_normal_execution(result) job = Job.fetch(id) self.assertEqual((job.args, job.kwargs), ([None], {})) id = str(uuid4()) 
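# Arguments prefixed with '%' are eval'ed into Python objects by the CLI, so
# '%True' below arrives at the job as the boolean True, not the string 'True'.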
result = runner.invoke(main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', '%True']) self.assert_normal_execution(result) job = Job.fetch(id) self.assertEqual((job.args, job.kwargs), ([True], {})) id = str(uuid4()) result = runner.invoke( main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', 'key%=(1, 2)'] ) self.assert_normal_execution(result) job = Job.fetch(id) self.assertEqual((job.args, job.kwargs), ([], {'key': (1, 2)})) id = str(uuid4()) result = runner.invoke( main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', 'key%={"foo": True}'] ) self.assert_normal_execution(result) job = Job.fetch(id) self.assertEqual((job.args, job.kwargs), ([], {'key': {"foo": True}})) id = str(uuid4()) result = runner.invoke( main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', '@tests/test.json'] ) self.assert_normal_execution(result) job = Job.fetch(id) self.assertEqual((job.args, job.kwargs), ([open('tests/test.json', 'r').read()], {})) id = str(uuid4()) result = runner.invoke( main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', 'key=@tests/test.json'] ) self.assert_normal_execution(result) job = Job.fetch(id) self.assertEqual((job.args, job.kwargs), ([], {'key': open('tests/test.json', 'r').read()})) id = str(uuid4()) result = runner.invoke( main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', ':@tests/test.json'] ) self.assert_normal_execution(result) job = Job.fetch(id) self.assertEqual((job.args, job.kwargs), ([json.loads(open('tests/test.json', 'r').read())], {})) id = str(uuid4()) result = runner.invoke( main, ['enqueue', '-u', self.redis_url, '--job-id', id, 'tests.fixtures.echo', 'key:=@tests/test.json'] ) self.assert_normal_execution(result) job = Job.fetch(id) self.assertEqual((job.args, job.kwargs), ([], {'key': json.loads(open('tests/test.json', 'r').read())})) class WorkerPoolCLITestCase(CLITestCase): def test_worker_pool_burst_and_num_workers(self): """rq worker-pool -u <url> -b -n 3""" runner = CliRunner() result = runner.invoke(main, ['worker-pool', '-u', self.redis_url, '-b', '-n', '3']) self.assert_normal_execution(result) def test_serializer_and_queue_argument(self): """rq worker-pool foo bar -u <url> -b""" queue = Queue('foo', connection=self.connection, serializer=JSONSerializer) job = queue.enqueue(say_hello, 'Hello') queue = Queue('bar', connection=self.connection, serializer=JSONSerializer) job_2 = queue.enqueue(say_hello, 'Hello') runner = CliRunner() runner.invoke( main, ['worker-pool', 'foo', 'bar', '-u', self.redis_url, '-b', '--serializer', 'rq.serializers.JSONSerializer'], ) self.assertEqual(job.get_status(refresh=True), JobStatus.FINISHED) self.assertEqual(job_2.get_status(refresh=True), JobStatus.FINISHED) def test_worker_class_argument(self): """rq worker-pool -u <url> -b --worker-class rq.Worker""" runner = CliRunner() result = runner.invoke(main, ['worker-pool', '-u', self.redis_url, '-b', '--worker-class', 'rq.Worker']) self.assert_normal_execution(result) result = runner.invoke( main, ['worker-pool', '-u', self.redis_url, '-b', '--worker-class', 'rq.worker.SimpleWorker'] ) self.assert_normal_execution(result) # This one fails because the worker class doesn't exist result = runner.invoke( main, ['worker-pool', '-u', self.redis_url, '-b', '--worker-class', 'rq.worker.NonExistantWorker'] ) self.assertNotEqual(result.exit_code, 0) def test_job_class_argument(self): """rq worker-pool -u <url> -b --job-class rq.job.Job""" runner = CliRunner() result = runner.invoke(main, ['worker-pool', '-u', self.redis_url, '-b', '--job-class', 'rq.job.Job']) self.assert_normal_execution(result) # This one fails because Job class doesn't exist result = runner.invoke( main, ['worker-pool', '-u', self.redis_url, '-b', '--job-class', 'rq.job.NonExistantJob'] ) self.assertNotEqual(result.exit_code, 0) rq-1.16.2/tests/test_commands.py0000644000000000000000000000730013615410400013517 0ustar00import time from multiprocessing import Process from redis import Redis from rq import Queue, Worker from rq.command import send_command, send_kill_horse_command, send_shutdown_command, send_stop_job_command from rq.exceptions import InvalidJobOperation, NoSuchJobError from rq.serializers import JSONSerializer from rq.worker import WorkerStatus from tests import RQTestCase from tests.fixtures import _send_kill_horse_command, _send_shutdown_command, long_running_job def start_work(queue_name, worker_name, connection_kwargs): worker = Worker(queue_name, name=worker_name, connection=Redis(**connection_kwargs)) worker.work() def start_work_burst(queue_name, worker_name, connection_kwargs): worker = Worker(queue_name, name=worker_name, connection=Redis(**connection_kwargs), serializer=JSONSerializer) worker.work(burst=True) class TestCommands(RQTestCase): def test_shutdown_command(self): """Ensure that shutdown command works properly.""" connection = self.testconn worker = Worker('foo', connection=connection) p = Process( target=_send_shutdown_command, args=(worker.name, connection.connection_pool.connection_kwargs.copy()) ) p.start() worker.work() p.join(1) def test_kill_horse_command(self): """Ensure that kill-horse command works properly.""" connection = self.testconn queue = Queue('foo', connection=connection) job = queue.enqueue(long_running_job, 4) worker = Worker('foo', connection=connection) p = Process( target=_send_kill_horse_command, args=(worker.name, connection.connection_pool.connection_kwargs.copy()) ) p.start() worker.work(burst=True) p.join(1) job.refresh() self.assertTrue(job.id in queue.failed_job_registry) p = Process(target=start_work, args=('foo', worker.name, connection.connection_pool.connection_kwargs.copy())) p.start() p.join(2) send_kill_horse_command(connection, worker.name) worker.refresh() # Since worker is not busy, command will be ignored self.assertEqual(worker.get_state(), WorkerStatus.IDLE) send_shutdown_command(connection, worker.name) def test_stop_job_command(self): """Ensure that stop_job command works properly.""" connection = self.testconn queue = Queue('foo', connection=connection, serializer=JSONSerializer) job = queue.enqueue(long_running_job, 3) worker = Worker('foo', connection=connection, serializer=JSONSerializer) # If job is not executing, an error is raised with self.assertRaises(InvalidJobOperation): send_stop_job_command(connection, job_id=job.id, serializer=JSONSerializer) # An exception is raised if job ID is invalid with self.assertRaises(NoSuchJobError): send_stop_job_command(connection, job_id='1', serializer=JSONSerializer) p = Process( target=start_work_burst, args=('foo', worker.name, connection.connection_pool.connection_kwargs.copy()) ) p.start() p.join(1) time.sleep(0.1) send_command(connection, worker.name, 'stop-job', job_id=1) time.sleep(0.25) # Worker still working due to job_id mismatch worker.refresh() self.assertEqual(worker.get_state(), WorkerStatus.BUSY) send_stop_job_command(connection, job_id=job.id, serializer=JSONSerializer) time.sleep(0.25) # Job status is set appropriately
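# (send_stop_job_command kills the work horse for the matching job id and
# marks the job as stopped, which the assertions below verify.)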
self.assertTrue(job.is_stopped) # Worker has stopped working worker.refresh() self.assertEqual(worker.get_state(), WorkerStatus.IDLE) rq-1.16.2/tests/test_connection.py0000644000000000000000000000353313615410400014061 0ustar00from redis import ConnectionPool, Redis, SSLConnection, UnixDomainSocketConnection from rq import Connection, Queue from rq.connections import parse_connection from tests import RQTestCase, find_empty_redis_database from tests.fixtures import do_nothing def new_connection(): return find_empty_redis_database() class TestConnectionInheritance(RQTestCase): def test_connection_detection(self): """Automatic detection of the connection.""" q = Queue() self.assertEqual(q.connection, self.testconn) def test_connection_stacking(self): """Connection stacking.""" conn1 = Redis(db=4) conn2 = Redis(db=5) with Connection(conn1): q1 = Queue() with Connection(conn2): q2 = Queue() self.assertNotEqual(q1.connection, q2.connection) def test_connection_pass_thru(self): """Connection passed through from queues to jobs.""" q1 = Queue(connection=self.testconn) with Connection(new_connection()): q2 = Queue() job1 = q1.enqueue(do_nothing) job2 = q2.enqueue(do_nothing) self.assertEqual(q1.connection, job1.connection) self.assertEqual(q2.connection, job2.connection) def test_parse_connection(self): """Test parsing the connection""" conn_class, pool_class, pool_kwargs = parse_connection(Redis(ssl=True)) self.assertEqual(conn_class, Redis) self.assertEqual(pool_class, SSLConnection) path = '/tmp/redis.sock' pool = ConnectionPool(connection_class=UnixDomainSocketConnection, path=path) conn_class, pool_class, pool_kwargs = parse_connection(Redis(connection_pool=pool)) self.assertEqual(conn_class, Redis) self.assertEqual(pool_class, UnixDomainSocketConnection) self.assertEqual(pool_kwargs, {"path": path}) rq-1.16.2/tests/test_decorator.py0000644000000000000000000002116713615410400013707 0ustar00from unittest import mock from redis import Redis from rq.decorators import job from rq.job import Job, Retry from rq.queue import Queue from rq.worker import DEFAULT_RESULT_TTL from tests import RQTestCase from tests.fixtures import decorated_job class TestDecorator(RQTestCase): def setUp(self): super().setUp() def test_decorator_preserves_functionality(self): """Ensure that a decorated function's functionality is still preserved.""" self.assertEqual(decorated_job(1, 2), 3) def test_decorator_adds_delay_attr(self): """Ensure that decorator adds a delay attribute to function that returns a Job instance when called. """ self.assertTrue(hasattr(decorated_job, 'delay')) result = decorated_job.delay(1, 2) self.assertTrue(isinstance(result, Job)) # Ensure that job returns the right result when performed self.assertEqual(result.perform(), 3) def test_decorator_accepts_queue_name_as_argument(self): """Ensure that passing in queue name to the decorator puts the job in the right queue. 
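(A job's ``origin`` attribute records the name of the queue it was enqueued
on, which is what the assertion below relies on.)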
""" @job(queue='queue_name') def hello(): return 'Hi' result = hello.delay() self.assertEqual(result.origin, 'queue_name') def test_decorator_accepts_result_ttl_as_argument(self): """Ensure that passing in result_ttl to the decorator sets the result_ttl on the job """ # Ensure default result = decorated_job.delay(1, 2) self.assertEqual(result.result_ttl, DEFAULT_RESULT_TTL) @job('default', result_ttl=10) def hello(): return 'Why hello' result = hello.delay() self.assertEqual(result.result_ttl, 10) def test_decorator_accepts_ttl_as_argument(self): """Ensure that passing in ttl to the decorator sets the ttl on the job""" # Ensure default result = decorated_job.delay(1, 2) self.assertEqual(result.ttl, None) @job('default', ttl=30) def hello(): return 'Hello' result = hello.delay() self.assertEqual(result.ttl, 30) def test_decorator_accepts_meta_as_argument(self): """Ensure that passing in meta to the decorator sets the meta on the job""" # Ensure default result = decorated_job.delay(1, 2) self.assertEqual(result.meta, {}) test_meta = { 'metaKey1': 1, 'metaKey2': 2, } @job('default', meta=test_meta) def hello(): return 'Hello' result = hello.delay() self.assertEqual(result.meta, test_meta) def test_decorator_accepts_result_depends_on_as_argument(self): """Ensure that passing in depends_on to the decorator sets the correct dependency on the job """ # Ensure default result = decorated_job.delay(1, 2) self.assertEqual(result.dependency, None) self.assertEqual(result._dependency_id, None) @job(queue='queue_name') def foo(): return 'Firstly' foo_job = foo.delay() @job(queue='queue_name', depends_on=foo_job) def bar(): return 'Secondly' bar_job = bar.delay() self.assertEqual(foo_job._dependency_ids, []) self.assertIsNone(foo_job._dependency_id) self.assertEqual(foo_job.dependency, None) self.assertEqual(bar_job.dependency, foo_job) self.assertEqual(bar_job.dependency.id, foo_job.id) def test_decorator_delay_accepts_depends_on_as_argument(self): """Ensure that passing in depends_on to the delay method of a decorated function overrides the depends_on set in the constructor. """ # Ensure default result = decorated_job.delay(1, 2) self.assertEqual(result.dependency, None) self.assertEqual(result._dependency_id, None) @job(queue='queue_name') def foo(): return 'Firstly' @job(queue='queue_name') def bar(): return 'Firstly' foo_job = foo.delay() bar_job = bar.delay() @job(queue='queue_name', depends_on=foo_job) def baz(): return 'Secondly' baz_job = bar.delay(depends_on=bar_job) self.assertIsNone(foo_job._dependency_id) self.assertIsNone(bar_job._dependency_id) self.assertEqual(foo_job._dependency_ids, []) self.assertEqual(bar_job._dependency_ids, []) self.assertEqual(baz_job._dependency_id, bar_job.id) self.assertEqual(baz_job.dependency, bar_job) self.assertEqual(baz_job.dependency.id, bar_job.id) def test_decorator_accepts_on_failure_function_as_argument(self): """Ensure that passing in on_failure function to the decorator sets the correct on_failure function on the job. 
""" # Only functions and builtins are supported as callback @job('default', on_failure=Job.fetch) def foo(): return 'Foo' with self.assertRaises(ValueError): result = foo.delay() @job('default', on_failure=print) def hello(): return 'Hello' result = hello.delay() result_job = Job.fetch(id=result.id, connection=self.testconn) self.assertEqual(result_job.failure_callback, print) def test_decorator_accepts_on_success_function_as_argument(self): """Ensure that passing in on_failure function to the decorator sets the correct on_success function on the job. """ # Only functions and builtins are supported as callback @job('default', on_failure=Job.fetch) def foo(): return 'Foo' with self.assertRaises(ValueError): result = foo.delay() @job('default', on_success=print) def hello(): return 'Hello' result = hello.delay() result_job = Job.fetch(id=result.id, connection=self.testconn) self.assertEqual(result_job.success_callback, print) @mock.patch('rq.queue.resolve_connection') def test_decorator_connection_laziness(self, resolve_connection): """Ensure that job decorator resolve connection in `lazy` way""" resolve_connection.return_value = Redis() @job(queue='queue_name') def foo(): return 'do something' self.assertEqual(resolve_connection.call_count, 0) foo() self.assertEqual(resolve_connection.call_count, 0) foo.delay() self.assertEqual(resolve_connection.call_count, 1) def test_decorator_custom_queue_class(self): """Ensure that a custom queue class can be passed to the job decorator""" class CustomQueue(Queue): pass CustomQueue.enqueue_call = mock.MagicMock(spec=lambda *args, **kwargs: None, name='enqueue_call') custom_decorator = job(queue='default', queue_class=CustomQueue) self.assertIs(custom_decorator.queue_class, CustomQueue) @custom_decorator def custom_queue_class_job(x, y): return x + y custom_queue_class_job.delay(1, 2) self.assertEqual(CustomQueue.enqueue_call.call_count, 1) def test_decorate_custom_queue(self): """Ensure that a custom queue instance can be passed to the job decorator""" class CustomQueue(Queue): pass CustomQueue.enqueue_call = mock.MagicMock(spec=lambda *args, **kwargs: None, name='enqueue_call') queue = CustomQueue() @job(queue=queue) def custom_queue_job(x, y): return x + y custom_queue_job.delay(1, 2) self.assertEqual(queue.enqueue_call.call_count, 1) def test_decorator_custom_failure_ttl(self): """Ensure that passing in failure_ttl to the decorator sets the failure_ttl on the job """ # Ensure default result = decorated_job.delay(1, 2) self.assertEqual(result.failure_ttl, None) @job('default', failure_ttl=10) def hello(): return 'Why hello' result = hello.delay() self.assertEqual(result.failure_ttl, 10) def test_decorator_custom_retry(self): """Ensure that passing in retry to the decorator sets the retry on the job """ # Ensure default result = decorated_job.delay(1, 2) self.assertEqual(result.retries_left, None) self.assertEqual(result.retry_intervals, None) @job('default', retry=Retry(3, [2])) def hello(): return 'Why hello' result = hello.delay() self.assertEqual(result.retries_left, 3) self.assertEqual(result.retry_intervals, [2]) rq-1.16.2/tests/test_dependencies.py0000644000000000000000000002117713615410400014354 0ustar00from rq import Queue, SimpleWorker, Worker from rq.job import Dependency, Job, JobStatus from tests import RQTestCase from tests.fixtures import check_dependencies_are_met, div_by_zero, say_hello class TestDependencies(RQTestCase): def test_allow_failure_is_persisted(self): """Ensure that job.allow_dependency_failures is properly set when 
providing Dependency object to depends_on.""" dep_job = Job.create(func=say_hello) # default to False, maintaining current behavior job = Job.create(func=say_hello, depends_on=Dependency([dep_job])) job.save() Job.fetch(job.id, connection=self.testconn) self.assertFalse(job.allow_dependency_failures) job = Job.create(func=say_hello, depends_on=Dependency([dep_job], allow_failure=True)) job.save() job = Job.fetch(job.id, connection=self.testconn) self.assertTrue(job.allow_dependency_failures) jobs = Job.fetch_many([job.id], connection=self.testconn) self.assertTrue(jobs[0].allow_dependency_failures) def test_job_dependency(self): """Enqueue dependent jobs only when appropriate""" q = Queue(connection=self.testconn) w = SimpleWorker([q], connection=q.connection) # enqueue dependent job when parent successfully finishes parent_job = q.enqueue(say_hello) job = q.enqueue_call(say_hello, depends_on=parent_job) w.work(burst=True) job = Job.fetch(job.id, connection=self.testconn) self.assertEqual(job.get_status(), JobStatus.FINISHED) q.empty() # don't enqueue dependent job when parent fails parent_job = q.enqueue(div_by_zero) job = q.enqueue_call(say_hello, depends_on=parent_job) w.work(burst=True) job = Job.fetch(job.id, connection=self.testconn) self.assertNotEqual(job.get_status(), JobStatus.FINISHED) q.empty() # don't enqueue dependent job when Dependency.allow_failure=False (the default) parent_job = q.enqueue(div_by_zero) dependency = Dependency(jobs=parent_job) job = q.enqueue_call(say_hello, depends_on=dependency) w.work(burst=True) job = Job.fetch(job.id, connection=self.testconn) self.assertNotEqual(job.get_status(), JobStatus.FINISHED) # enqueue dependent job when Dependency.allow_failure=True parent_job = q.enqueue(div_by_zero) dependency = Dependency(jobs=parent_job, allow_failure=True) job = q.enqueue_call(say_hello, depends_on=dependency) job = Job.fetch(job.id, connection=self.testconn) self.assertTrue(job.allow_dependency_failures) w.work(burst=True) job = Job.fetch(job.id, connection=self.testconn) self.assertEqual(job.get_status(), JobStatus.FINISHED) # When a failing job has multiple dependents, only enqueue those # with allow_failure=True parent_job = q.enqueue(div_by_zero) job_allow_failure = q.enqueue(say_hello, depends_on=Dependency(jobs=parent_job, allow_failure=True)) job = q.enqueue(say_hello, depends_on=Dependency(jobs=parent_job, allow_failure=False)) w.work(burst=True, max_jobs=1) self.assertEqual(parent_job.get_status(), JobStatus.FAILED) self.assertEqual(job_allow_failure.get_status(), JobStatus.QUEUED) self.assertEqual(job.get_status(), JobStatus.DEFERRED) q.empty() # only enqueue dependent job when all dependencies have finished/failed first_parent_job = q.enqueue(div_by_zero) second_parent_job = q.enqueue(say_hello) dependencies = Dependency(jobs=[first_parent_job, second_parent_job], allow_failure=True) job = q.enqueue_call(say_hello, depends_on=dependencies) w.work(burst=True, max_jobs=1) self.assertEqual(first_parent_job.get_status(), JobStatus.FAILED) self.assertEqual(second_parent_job.get_status(), JobStatus.QUEUED) self.assertEqual(job.get_status(), JobStatus.DEFERRED) # When second job finishes, dependent job should be queued w.work(burst=True, max_jobs=1) self.assertEqual(second_parent_job.get_status(), JobStatus.FINISHED) self.assertEqual(job.get_status(), JobStatus.QUEUED) w.work(burst=True) job = Job.fetch(job.id, connection=self.testconn) self.assertEqual(job.get_status(), JobStatus.FINISHED) # Test dependant is enqueued at front q.empty() parent_job 
= q.enqueue(say_hello) q.enqueue(say_hello, job_id='fake_job_id_1', depends_on=Dependency(jobs=[parent_job])) q.enqueue(say_hello, job_id='fake_job_id_2', depends_on=Dependency(jobs=[parent_job], enqueue_at_front=True)) w.work(burst=True, max_jobs=1) self.assertEqual(q.job_ids, ["fake_job_id_2", "fake_job_id_1"]) def test_multiple_jobs_with_dependencies(self): """enqueue_many() sets correct statuses for jobs with and without dependencies""" q = Queue(connection=self.testconn) w = SimpleWorker([q], connection=q.connection) # Multiple jobs are enqueued with correct status parent_job = q.enqueue(say_hello) job_no_deps = Queue.prepare_data(say_hello) job_with_deps = Queue.prepare_data(say_hello, depends_on=parent_job) jobs = q.enqueue_many([job_no_deps, job_with_deps]) self.assertEqual(jobs[0].get_status(), JobStatus.QUEUED) self.assertEqual(jobs[1].get_status(), JobStatus.DEFERRED) w.work(burst=True, max_jobs=1) self.assertEqual(jobs[1].get_status(), JobStatus.QUEUED) job_with_met_deps = Queue.prepare_data(say_hello, depends_on=parent_job) jobs = q.enqueue_many([job_with_met_deps]) self.assertEqual(jobs[0].get_status(), JobStatus.QUEUED) q.empty() def test_dependency_list_in_depends_on(self): """Enqueue with Dependency list in depends_on""" q = Queue(connection=self.testconn) w = SimpleWorker([q], connection=q.connection) # enqueue dependent job when parent successfully finishes parent_job1 = q.enqueue(say_hello) parent_job2 = q.enqueue(say_hello) job = q.enqueue_call(say_hello, depends_on=[Dependency([parent_job1]), Dependency([parent_job2])]) w.work(burst=True) self.assertEqual(job.get_status(), JobStatus.FINISHED) def test_enqueue_job_dependency(self): """Enqueue via Queue.enqueue_job() with dependency""" q = Queue(connection=self.testconn) w = SimpleWorker([q], connection=q.connection) # enqueue dependent job when parent successfully finishes parent_job = Job.create(say_hello) parent_job.save() job = Job.create(say_hello, depends_on=parent_job) q.enqueue_job(job) w.work(burst=True) self.assertEqual(job.get_status(), JobStatus.DEFERRED) q.enqueue_job(parent_job) w.work(burst=True) self.assertEqual(parent_job.get_status(), JobStatus.FINISHED) self.assertEqual(job.get_status(), JobStatus.FINISHED) def test_dependencies_are_met_if_parent_is_canceled(self): """When parent job is canceled, it should be treated as failed""" queue = Queue(connection=self.testconn) job = queue.enqueue(say_hello) job.set_status(JobStatus.CANCELED) dependent_job = queue.enqueue(say_hello, depends_on=job) # dependencies_are_met() should return False, whether or not # parent_job is provided self.assertFalse(dependent_job.dependencies_are_met(job)) self.assertFalse(dependent_job.dependencies_are_met()) def test_can_enqueue_job_if_dependency_is_deleted(self): queue = Queue(connection=self.testconn) dependency_job = queue.enqueue(say_hello, result_ttl=0) w = Worker([queue]) w.work(burst=True) assert queue.enqueue(say_hello, depends_on=dependency_job) def test_dependencies_are_met_if_dependency_is_deleted(self): queue = Queue(connection=self.testconn) dependency_job = queue.enqueue(say_hello, result_ttl=0) dependent_job = queue.enqueue(say_hello, depends_on=dependency_job) w = Worker([queue]) w.work(burst=True, max_jobs=1) assert dependent_job.dependencies_are_met() assert dependent_job.get_status() == JobStatus.QUEUED def test_dependencies_are_met_at_execution_time(self): queue = Queue(connection=self.testconn) queue.empty() queue.enqueue(say_hello, job_id="A") queue.enqueue(say_hello, job_id="B") job_c = queue.enqueue(check_dependencies_are_met,
job_id="C", depends_on=["A", "B"]) job_c.dependencies_are_met() w = Worker([queue]) w.work(burst=True) assert job_c.result rq-1.16.2/tests/test_fixtures.py0000644000000000000000000000104613615410400013570 0ustar00from rq import Queue from tests import RQTestCase, fixtures class TestFixtures(RQTestCase): def test_rpush_fixture(self): fixtures.rpush('foo', 'bar') assert self.testconn.lrange('foo', 0, 0)[0].decode() == 'bar' def test_start_worker_fixture(self): queue = Queue(name='testing', connection=self.testconn) queue.enqueue(fixtures.say_hello) conn_kwargs = self.testconn.connection_pool.connection_kwargs fixtures.start_worker(queue.name, conn_kwargs, 'w1', True) assert not queue.jobs rq-1.16.2/tests/test_helpers.py0000644000000000000000000000641013615410400013361 0ustar00from unittest import mock from rq.cli.helpers import get_redis_from_config from tests import RQTestCase class TestHelpers(RQTestCase): @mock.patch('rq.cli.helpers.Sentinel') def test_get_redis_from_config(self, sentinel_class_mock): """Ensure Redis connection params are properly parsed""" settings = {'REDIS_URL': 'redis://localhost:1/1'} # Ensure REDIS_URL is read redis = get_redis_from_config(settings) connection_kwargs = redis.connection_pool.connection_kwargs self.assertEqual(connection_kwargs['db'], 1) self.assertEqual(connection_kwargs['port'], 1) settings = { 'REDIS_URL': 'redis://localhost:1/1', 'REDIS_HOST': 'foo', 'REDIS_DB': 2, 'REDIS_PORT': 2, 'REDIS_PASSWORD': 'bar', } # Ensure REDIS_URL is preferred redis = get_redis_from_config(settings) connection_kwargs = redis.connection_pool.connection_kwargs self.assertEqual(connection_kwargs['db'], 1) self.assertEqual(connection_kwargs['port'], 1) # Ensure fall back to regular connection parameters settings['REDIS_URL'] = None redis = get_redis_from_config(settings) connection_kwargs = redis.connection_pool.connection_kwargs self.assertEqual(connection_kwargs['host'], 'foo') self.assertEqual(connection_kwargs['db'], 2) self.assertEqual(connection_kwargs['port'], 2) self.assertEqual(connection_kwargs['password'], 'bar') # Add Sentinel to the settings settings.update( { 'SENTINEL': { 'INSTANCES': [ ('remote.host1.org', 26379), ('remote.host2.org', 26379), ('remote.host3.org', 26379), ], 'MASTER_NAME': 'master', 'DB': 2, 'USERNAME': 'redis-user', 'PASSWORD': 'redis-secret', 'SOCKET_TIMEOUT': None, 'CONNECTION_KWARGS': { 'ssl_ca_path': None, }, 'SENTINEL_KWARGS': { 'username': 'sentinel-user', 'password': 'sentinel-secret', }, }, } ) # Ensure SENTINEL is preferred against REDIS_* parameters redis = get_redis_from_config(settings) sentinel_init_sentinels_args = sentinel_class_mock.call_args[0] sentinel_init_sentinel_kwargs = sentinel_class_mock.call_args[1] self.assertEqual( sentinel_init_sentinels_args, ([('remote.host1.org', 26379), ('remote.host2.org', 26379), ('remote.host3.org', 26379)],), ) self.assertDictEqual( sentinel_init_sentinel_kwargs, { 'db': 2, 'ssl': False, 'username': 'redis-user', 'password': 'redis-secret', 'socket_timeout': None, 'ssl_ca_path': None, 'sentinel_kwargs': { 'username': 'sentinel-user', 'password': 'sentinel-secret', }, }, ) rq-1.16.2/tests/test_intermediate_queue.py0000644000000000000000000002134113615410400015575 0ustar00import unittest from datetime import datetime, timedelta, timezone from unittest.mock import patch from redis import Redis from rq import Queue, Worker from rq.intermediate_queue import IntermediateQueue from rq.maintenance import clean_intermediate_queue from rq.utils import get_version from tests import RQTestCase from 
tests.fixtures import say_hello @unittest.skipIf(get_version(Redis()) < (6, 2, 0), 'Skip if Redis server < 6.2.0') class TestIntermediateQueue(RQTestCase): def test_set_first_seen(self): """Ensure that the first_seen attribute is set correctly.""" queue = Queue('foo', connection=self.connection) intermediate_queue = IntermediateQueue(queue.key, connection=self.connection) job = queue.enqueue(say_hello) # set_first_seen() should only succeed the first time around self.assertTrue(intermediate_queue.set_first_seen(job.id)) self.assertFalse(intermediate_queue.set_first_seen(job.id)) # It should succeed again after deleting the key self.connection.delete(intermediate_queue.get_first_seen_key(job.id)) self.assertTrue(intermediate_queue.set_first_seen(job.id)) def test_get_first_seen(self): """Ensure that the first_seen attribute is set correctly.""" queue = Queue('foo', connection=self.connection) intermediate_queue = IntermediateQueue(queue.key, connection=self.connection) job = queue.enqueue(say_hello) self.assertIsNone(intermediate_queue.get_first_seen(job.id)) # Check first seen was set correctly intermediate_queue.set_first_seen(job.id) timestamp = intermediate_queue.get_first_seen(job.id) self.assertTrue(datetime.now(tz=timezone.utc) - timestamp < timedelta(seconds=5)) # type: ignore def test_should_be_cleaned_up(self): """Job in the intermediate queue should be cleaned up if it was seen more than 1 minute ago.""" queue = Queue('foo', connection=self.connection) intermediate_queue = IntermediateQueue(queue.key, connection=self.connection) job = queue.enqueue(say_hello) # Returns False if there's no first seen timestamp self.assertFalse(intermediate_queue.should_be_cleaned_up(job.id)) # Returns False since first seen timestamp is less than 1 minute ago intermediate_queue.set_first_seen(job.id) self.assertFalse(intermediate_queue.should_be_cleaned_up(job.id)) first_seen_key = intermediate_queue.get_first_seen_key(job.id) two_minutes_ago = datetime.now(tz=timezone.utc) - timedelta(minutes=2) self.connection.set(first_seen_key, two_minutes_ago.timestamp(), ex=10) self.assertTrue(intermediate_queue.should_be_cleaned_up(job.id)) def test_get_job_ids(self): """Dequeueing job from a single queue moves job to intermediate queue.""" queue = Queue('foo', connection=self.connection) job_1 = queue.enqueue(say_hello) intermediate_queue = IntermediateQueue(queue.key, connection=self.connection) # Ensure that the intermediate queue is empty self.connection.delete(intermediate_queue.key) # Job ID is not in intermediate queue self.assertEqual(intermediate_queue.get_job_ids(), []) job, queue = Queue.dequeue_any([queue], timeout=None, connection=self.testconn) # After job is dequeued, the job ID is in the intermediate queue self.assertEqual(intermediate_queue.get_job_ids(), [job_1.id]) # Test the blocking version job_2 = queue.enqueue(say_hello) job, queue = Queue.dequeue_any([queue], timeout=1, connection=self.testconn) # After job is dequeued, the job ID is in the intermediate queue self.assertEqual(intermediate_queue.get_job_ids(), [job_1.id, job_2.id]) # After job_1.id is removed, only job_2.id is in the intermediate queue intermediate_queue.remove(job_1.id) self.assertEqual(intermediate_queue.get_job_ids(), [job_2.id]) def test_cleanup_intermediate_queue_in_maintenance(self): """Ensure jobs stuck in the intermediate queue are cleaned up.""" queue = Queue('foo', connection=self.connection) job = queue.enqueue(say_hello) intermediate_queue = IntermediateQueue(queue.key, connection=self.connection) 
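# The three tests above pin down the intermediate queue's crash-recovery contract:
# a dequeued job ID is parked in the intermediate queue, stamped once via
# set_first_seen(), and becomes eligible for should_be_cleaned_up() only after
# that timestamp is more than a minute old. A minimal sketch of that lifecycle,
# with `iq` and `job` standing in for the objects built in these tests:
#
#   iq.set_first_seen(job.id)        # True on the first sighting only
#   iq.should_be_cleaned_up(job.id)  # False until the stamp goes stale
#   iq.remove(job.id)                # drop the ID once the job is handled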
self.connection.delete(intermediate_queue.key) # If job execution fails after it's dequeued, job should be in the intermediate queue # and its status is still QUEUED with patch.object(Worker, 'execute_job'): worker = Worker(queue, connection=self.testconn) worker.work(burst=True) # If worker.execute_job() does nothing, job status should be `queued` # even though it's not in the queue, but it should be in the intermediate queue self.assertEqual(job.get_status(), 'queued') self.assertFalse(job.id in queue.get_job_ids()) self.assertEqual(intermediate_queue.get_job_ids(), [job.id]) self.assertIsNone(intermediate_queue.get_first_seen(job.id)) clean_intermediate_queue(worker, queue) # After clean_intermediate_queue is called, the job should be marked as seen, # but since it's been less than 1 minute, it should not be cleaned up self.assertIsNotNone(intermediate_queue.get_first_seen(job.id)) self.assertFalse(intermediate_queue.should_be_cleaned_up(job.id)) self.assertEqual(intermediate_queue.get_job_ids(), [job.id]) # If we set the first seen timestamp to 2 minutes ago, the job should be cleaned up first_seen_key = intermediate_queue.get_first_seen_key(job.id) two_minutes_ago = datetime.now(tz=timezone.utc) - timedelta(minutes=2) self.connection.set(first_seen_key, two_minutes_ago.timestamp(), ex=10) clean_intermediate_queue(worker, queue) self.assertEqual(intermediate_queue.get_job_ids(), []) self.assertEqual(job.get_status(), 'failed') job = queue.enqueue(say_hello) worker.work(burst=True) self.assertEqual(intermediate_queue.get_job_ids(), [job.id]) # If job is gone, it should be immediately removed from the intermediate queue job.delete() clean_intermediate_queue(worker, queue) self.assertEqual(intermediate_queue.get_job_ids(), []) def test_cleanup_intermediate_queue(self): """Ensure jobs stuck in the intermediate queue are cleaned up.""" queue = Queue('foo', connection=self.connection) job = queue.enqueue(say_hello) intermediate_queue = IntermediateQueue(queue.key, connection=self.connection) self.connection.delete(intermediate_queue.key) # If job execution fails after it's dequeued, job should be in the intermediate queue # and its status is still QUEUED with patch.object(Worker, 'execute_job'): worker = Worker(queue, connection=self.testconn) worker.work(burst=True) # If worker.execute_job() does nothing, job status should be `queued` # even though it's not in the queue, but it should be in the intermediate queue self.assertEqual(job.get_status(), 'queued') self.assertFalse(job.id in queue.get_job_ids()) self.assertEqual(intermediate_queue.get_job_ids(), [job.id]) self.assertIsNone(intermediate_queue.get_first_seen(job.id)) intermediate_queue.cleanup(worker, queue) # After cleanup() is called, the job should be marked as seen, # but since it's been less than 1 minute, it should not be cleaned up self.assertIsNotNone(intermediate_queue.get_first_seen(job.id)) self.assertFalse(intermediate_queue.should_be_cleaned_up(job.id)) self.assertEqual(intermediate_queue.get_job_ids(), [job.id]) # If we set the first seen timestamp to 2 minutes ago, the job should be cleaned up first_seen_key = intermediate_queue.get_first_seen_key(job.id) two_minutes_ago = datetime.now(tz=timezone.utc) - timedelta(minutes=2) self.connection.set(first_seen_key, two_minutes_ago.timestamp(), ex=10) intermediate_queue.cleanup(worker, queue) self.assertEqual(intermediate_queue.get_job_ids(), []) self.assertEqual(job.get_status(), 'failed') job = queue.enqueue(say_hello) worker.work(burst=True)
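# As in the maintenance-helper variant above, the worked job's ID stays parked
# in the intermediate queue until a cleanup pass runs; this test appears to
# exercise the same logic through IntermediateQueue.cleanup() directly instead
# of the clean_intermediate_queue() helper, e.g.:
#
#   intermediate_queue.cleanup(worker, queue)  # stamp unseen IDs, reap stale ones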
self.assertEqual(intermediate_queue.get_job_ids(), [job.id]) # If job is gone, it should be immediately removed from the intermediate queue job.delete() intermediate_queue.cleanup(worker, queue) self.assertEqual(intermediate_queue.get_job_ids(), []) rq-1.16.2/tests/test_job.py0000644000000000000000000014262713615410400012474 0ustar00import json import queue import time import unittest import zlib from datetime import datetime, timedelta from pickle import dumps, loads from redis import Redis, WatchError from rq.defaults import CALLBACK_TIMEOUT from rq.exceptions import DeserializationError, InvalidJobOperation, NoSuchJobError from rq.job import Callback, Dependency, Job, JobStatus, cancel_job, get_current_job from rq.queue import Queue from rq.registry import ( CanceledJobRegistry, DeferredJobRegistry, FailedJobRegistry, FinishedJobRegistry, ScheduledJobRegistry, StartedJobRegistry, ) from rq.serializers import JSONSerializer from rq.utils import as_text, get_version, utcformat, utcnow from rq.worker import Worker from tests import RQTestCase, fixtures class TestJob(RQTestCase): def test_unicode(self): """Unicode in job description [issue405]""" job = Job.create( 'myfunc', args=[12, "☃"], kwargs=dict(snowman="☃", null=None), ) self.assertEqual( job.description, "myfunc(12, '☃', null=None, snowman='☃')", ) def test_create_empty_job(self): """Creation of new empty jobs.""" job = Job() job.description = 'test job' # Jobs have a random UUID and a creation date self.assertIsNotNone(job.id) self.assertIsNotNone(job.created_at) self.assertEqual(str(job), "<Job %s: test job>" % job.id) # ...and nothing else self.assertEqual(job.origin, '') self.assertIsNone(job.enqueued_at) self.assertIsNone(job.started_at) self.assertIsNone(job.ended_at) self.assertIsNone(job.result) self.assertIsNone(job.exc_info) with self.assertRaises(DeserializationError): job.func with self.assertRaises(DeserializationError): job.instance with self.assertRaises(DeserializationError): job.args with self.assertRaises(DeserializationError): job.kwargs def test_create_param_errors(self): """Creation of jobs may result in errors""" self.assertRaises(TypeError, Job.create, fixtures.say_hello, args="string") self.assertRaises(TypeError, Job.create, fixtures.say_hello, kwargs="string") self.assertRaises(TypeError, Job.create, func=42) def test_create_typical_job(self): """Creation of jobs for function calls.""" job = Job.create(func=fixtures.some_calculation, args=(3, 4), kwargs=dict(z=2)) # Jobs have a random UUID self.assertIsNotNone(job.id) self.assertIsNotNone(job.created_at) self.assertIsNotNone(job.description) self.assertIsNone(job.instance) # Job data is set...
self.assertEqual(job.func, fixtures.some_calculation) self.assertEqual(job.args, (3, 4)) self.assertEqual(job.kwargs, {'z': 2}) # ...but metadata is not self.assertEqual(job.origin, '') self.assertIsNone(job.enqueued_at) self.assertIsNone(job.result) def test_create_instance_method_job(self): """Creation of jobs for instance methods.""" n = fixtures.Number(2) job = Job.create(func=n.div, args=(4,)) # Job data is set self.assertEqual(job.func, n.div) self.assertEqual(job.instance, n) self.assertEqual(job.args, (4,)) def test_create_job_with_serializer(self): """Creation of jobs with serializer for instance methods.""" # Test using json serializer n = fixtures.Number(2) job = Job.create(func=n.div, args=(4,), serializer=json) self.assertIsNotNone(job.serializer) self.assertEqual(job.func, n.div) self.assertEqual(job.instance, n) self.assertEqual(job.args, (4,)) def test_create_job_from_string_function(self): """Creation of jobs using string specifier.""" job = Job.create(func='tests.fixtures.say_hello', args=('World',)) # Job data is set self.assertEqual(job.func, fixtures.say_hello) self.assertIsNone(job.instance) self.assertEqual(job.args, ('World',)) def test_create_job_from_callable_class(self): """Creation of jobs using a callable class specifier.""" kallable = fixtures.CallableObject() job = Job.create(func=kallable) self.assertEqual(job.func, kallable.__call__) self.assertEqual(job.instance, kallable) def test_job_properties_set_data_property(self): """Data property gets derived from the job tuple.""" job = Job() job.func_name = 'foo' fname, instance, args, kwargs = loads(job.data) self.assertEqual(fname, job.func_name) self.assertEqual(instance, None) self.assertEqual(args, ()) self.assertEqual(kwargs, {}) def test_data_property_sets_job_properties(self): """Job tuple gets derived lazily from data property.""" job = Job() job.data = dumps(('foo', None, (1, 2, 3), {'bar': 'qux'})) self.assertEqual(job.func_name, 'foo') self.assertEqual(job.instance, None) self.assertEqual(job.args, (1, 2, 3)) self.assertEqual(job.kwargs, {'bar': 'qux'}) def test_save(self): # noqa """Storing jobs.""" job = Job.create(func=fixtures.some_calculation, args=(3, 4), kwargs=dict(z=2)) # Saving creates a Redis hash self.assertEqual(self.testconn.exists(job.key), False) job.save() self.assertEqual(self.testconn.type(job.key), b'hash') # Saving writes pickled job data unpickled_data = loads(zlib.decompress(self.testconn.hget(job.key, 'data'))) self.assertEqual(unpickled_data[0], 'tests.fixtures.some_calculation') def test_fetch(self): """Fetching jobs.""" # Prepare test self.testconn.hset( 'rq:job:some_id', 'data', "(S'tests.fixtures.some_calculation'\nN(I3\nI4\nt(dp1\nS'z'\nI2\nstp2\n." 
) self.testconn.hset('rq:job:some_id', 'created_at', '2012-02-07T22:13:24.123456Z') # Fetch returns a job job = Job.fetch('some_id') self.assertEqual(job.id, 'some_id') self.assertEqual(job.func_name, 'tests.fixtures.some_calculation') self.assertIsNone(job.instance) self.assertEqual(job.args, (3, 4)) self.assertEqual(job.kwargs, dict(z=2)) self.assertEqual(job.created_at, datetime(2012, 2, 7, 22, 13, 24, 123456)) def test_fetch_many(self): """Fetching many jobs at once.""" data = { 'func': fixtures.some_calculation, 'args': (3, 4), 'kwargs': dict(z=2), 'connection': self.testconn, } job = Job.create(**data) job.save() job2 = Job.create(**data) job2.save() jobs = Job.fetch_many([job.id, job2.id, 'invalid_id'], self.testconn) self.assertEqual(jobs, [job, job2, None]) def test_persistence_of_empty_jobs(self): # noqa """Storing empty jobs.""" job = Job() with self.assertRaises(ValueError): job.save() def test_persistence_of_typical_jobs(self): """Storing typical jobs.""" job = Job.create(func=fixtures.some_calculation, args=(3, 4), kwargs=dict(z=2)) job.save() stored_date = self.testconn.hget(job.key, 'created_at').decode('utf-8') self.assertEqual(stored_date, utcformat(job.created_at)) # ... and no other keys are stored self.assertEqual( { b'created_at', b'data', b'description', b'ended_at', b'last_heartbeat', b'started_at', b'worker_name', b'success_callback_name', b'failure_callback_name', b'stopped_callback_name', }, set(self.testconn.hkeys(job.key)), ) self.assertEqual(job.last_heartbeat, None) self.assertEqual(job.last_heartbeat, None) ts = utcnow() job.heartbeat(ts, 0) self.assertEqual(job.last_heartbeat, ts) def test_persistence_of_parent_job(self): """Storing jobs with parent job, either instance or key.""" parent_job = Job.create(func=fixtures.some_calculation) parent_job.save() job = Job.create(func=fixtures.some_calculation, depends_on=parent_job) job.save() stored_job = Job.fetch(job.id) self.assertEqual(stored_job._dependency_id, parent_job.id) self.assertEqual(stored_job._dependency_ids, [parent_job.id]) self.assertEqual(stored_job.dependency.id, parent_job.id) self.assertEqual(stored_job.dependency, parent_job) job = Job.create(func=fixtures.some_calculation, depends_on=parent_job.id) job.save() stored_job = Job.fetch(job.id) self.assertEqual(stored_job._dependency_id, parent_job.id) self.assertEqual(stored_job._dependency_ids, [parent_job.id]) self.assertEqual(stored_job.dependency.id, parent_job.id) self.assertEqual(stored_job.dependency, parent_job) def test_persistence_of_callbacks(self): """Storing jobs with success and/or failure callbacks.""" job = Job.create( func=fixtures.some_calculation, on_success=Callback(fixtures.say_hello, timeout=10), on_failure=fixtures.say_pid, on_stopped=fixtures.say_hello, ) # deprecated callable job.save() stored_job = Job.fetch(job.id) self.assertEqual(fixtures.say_hello, stored_job.success_callback) self.assertEqual(10, stored_job.success_callback_timeout) self.assertEqual(fixtures.say_pid, stored_job.failure_callback) self.assertEqual(fixtures.say_hello, stored_job.stopped_callback) self.assertEqual(CALLBACK_TIMEOUT, stored_job.failure_callback_timeout) self.assertEqual(CALLBACK_TIMEOUT, stored_job.stopped_callback_timeout) # None(s) job = Job.create(func=fixtures.some_calculation, on_failure=None) job.save() stored_job = Job.fetch(job.id) self.assertIsNone(stored_job.success_callback) self.assertEqual(CALLBACK_TIMEOUT, job.success_callback_timeout) # timeout should be never none self.assertEqual(CALLBACK_TIMEOUT, 
stored_job.success_callback_timeout) self.assertIsNone(stored_job.failure_callback) self.assertEqual(CALLBACK_TIMEOUT, job.failure_callback_timeout) # timeout should be never none self.assertEqual(CALLBACK_TIMEOUT, stored_job.failure_callback_timeout) self.assertEqual(CALLBACK_TIMEOUT, job.stopped_callback_timeout) # timeout should be never none self.assertIsNone(stored_job.stopped_callback) def test_store_then_fetch(self): """Store, then fetch.""" job = Job.create(func=fixtures.some_calculation, timeout='1h', args=(3, 4), kwargs=dict(z=2)) job.save() job2 = Job.fetch(job.id) self.assertEqual(job.func, job2.func) self.assertEqual(job.args, job2.args) self.assertEqual(job.kwargs, job2.kwargs) self.assertEqual(job.timeout, job2.timeout) # Mathematical equation self.assertEqual(job, job2) def test_fetching_can_fail(self): """Fetching fails for non-existing jobs.""" with self.assertRaises(NoSuchJobError): Job.fetch('b4a44d44-da16-4620-90a6-798e8cd72ca0') def test_fetching_unreadable_data(self): """Fetching succeeds on unreadable data, but lazy props fail.""" # Set up job = Job.create(func=fixtures.some_calculation, args=(3, 4), kwargs=dict(z=2)) job.save() # Just replace the data hkey with some random noise self.testconn.hset(job.key, 'data', 'this is no pickle string') job.refresh() for attr in ('func_name', 'instance', 'args', 'kwargs'): with self.assertRaises(Exception): getattr(job, attr) def test_job_is_unimportable(self): """Jobs that cannot be imported throw exception on access.""" job = Job.create(func=fixtures.say_hello, args=('Lionel',)) job.save() # Now slightly modify the job to make it unimportable (this is # equivalent to a worker not having the most up-to-date source code # and unable to import the function) job_data = job.data unimportable_data = job_data.replace(b'say_hello', b'nay_hello') self.testconn.hset(job.key, 'data', zlib.compress(unimportable_data)) job.refresh() with self.assertRaises(ValueError): job.func # accessing the func property should fail def test_compressed_exc_info_handling(self): """Jobs handle both compressed and uncompressed exc_info""" exception_string = 'Some exception' job = Job.create(func=fixtures.say_hello, args=('Lionel',)) job._exc_info = exception_string job.save() # exc_info is stored in compressed format exc_info = self.testconn.hget(job.key, 'exc_info') self.assertEqual(as_text(zlib.decompress(exc_info)), exception_string) job.refresh() self.assertEqual(job.exc_info, exception_string) # Uncompressed exc_info is also handled self.testconn.hset(job.key, 'exc_info', exception_string) job.refresh() self.assertEqual(job.exc_info, exception_string) def test_compressed_job_data_handling(self): """Jobs handle both compressed and uncompressed data""" job = Job.create(func=fixtures.say_hello, args=('Lionel',)) job.save() # Job data is stored in compressed format job_data = job.data self.assertEqual(zlib.compress(job_data), self.testconn.hget(job.key, 'data')) self.testconn.hset(job.key, 'data', job_data) job.refresh() self.assertEqual(job.data, job_data) def test_custom_meta_is_persisted(self): """Additional meta data on jobs are stored persisted correctly.""" job = Job.create(func=fixtures.say_hello, args=('Lionel',)) job.meta['foo'] = 'bar' job.save() raw_data = self.testconn.hget(job.key, 'meta') self.assertEqual(loads(raw_data)['foo'], 'bar') job2 = Job.fetch(job.id) self.assertEqual(job2.meta['foo'], 'bar') def test_get_meta(self): """Test get_meta() function""" job = Job.create(func=fixtures.say_hello, args=('Lionel',)) job.meta['foo'] = 'bar' 
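# job.meta is a plain dict that round-trips through Redis: save()/save_meta()
# write it and get_meta() re-reads it, as the assertions below show. A hedged
# usage sketch (the 'progress' field is illustrative, not part of the API):
#
#   job.meta['progress'] = 0.5  # any serializer-compatible value
#   job.save_meta()             # persist meta without touching other fields
#   job.get_meta()['progress']  # fresh read from Redis -> 0.5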
job.save() self.assertEqual(job.get_meta()['foo'], 'bar') # manually write different data in meta self.testconn.hset(job.key, 'meta', dumps({'fee': 'boo'})) # check if refresh=False keeps old data self.assertEqual(job.get_meta(False)['foo'], 'bar') # check if meta is updated self.assertEqual(job.get_meta()['fee'], 'boo') def test_custom_meta_is_rewriten_by_save_meta(self): """New meta data can be stored by save_meta.""" job = Job.create(func=fixtures.say_hello, args=('Lionel',)) job.save() serialized = job.to_dict() job.meta['foo'] = 'bar' job.save_meta() raw_meta = self.testconn.hget(job.key, 'meta') self.assertEqual(loads(raw_meta)['foo'], 'bar') job2 = Job.fetch(job.id) self.assertEqual(job2.meta['foo'], 'bar') # nothing else was changed serialized2 = job2.to_dict() serialized2.pop('meta') self.assertDictEqual(serialized, serialized2) def test_unpickleable_result(self): """Unpickleable job result doesn't crash job.save() and job.refresh()""" job = Job.create(func=fixtures.say_hello, args=('Lionel',)) job._result = queue.Queue() job.save() self.assertEqual(self.testconn.hget(job.key, 'result').decode('utf-8'), 'Unserializable return value') job = Job.fetch(job.id) self.assertEqual(job.result, 'Unserializable return value') def test_result_ttl_is_persisted(self): """Ensure that job's result_ttl is set properly""" job = Job.create(func=fixtures.say_hello, args=('Lionel',), result_ttl=10) job.save() Job.fetch(job.id, connection=self.testconn) self.assertEqual(job.result_ttl, 10) job = Job.create(func=fixtures.say_hello, args=('Lionel',)) job.save() Job.fetch(job.id, connection=self.testconn) self.assertEqual(job.result_ttl, None) def test_failure_ttl_is_persisted(self): """Ensure job.failure_ttl is set and restored properly""" job = Job.create(func=fixtures.say_hello, args=('Lionel',), failure_ttl=15) job.save() Job.fetch(job.id, connection=self.testconn) self.assertEqual(job.failure_ttl, 15) job = Job.create(func=fixtures.say_hello, args=('Lionel',)) job.save() Job.fetch(job.id, connection=self.testconn) self.assertEqual(job.failure_ttl, None) def test_description_is_persisted(self): """Ensure that job's custom description is set properly""" job = Job.create(func=fixtures.say_hello, args=('Lionel',), description='Say hello!') job.save() Job.fetch(job.id, connection=self.testconn) self.assertEqual(job.description, 'Say hello!') # Ensure job description is constructed from function call string job = Job.create(func=fixtures.say_hello, args=('Lionel',)) job.save() Job.fetch(job.id, connection=self.testconn) self.assertEqual(job.description, "tests.fixtures.say_hello('Lionel')") def test_dependency_parameter_constraints(self): """Ensures the proper constraints are in place for values passed in as job references.""" dep_job = Job.create(func=fixtures.say_hello) # raise error on empty jobs self.assertRaises(ValueError, Dependency, jobs=[]) # raise error on non-str/Job value in jobs iterable self.assertRaises(ValueError, Dependency, jobs=[dep_job, 1]) def test_multiple_dependencies_are_accepted_and_persisted(self): """Ensure job._dependency_ids accepts different input formats, and is set and restored properly""" job_A = Job.create(func=fixtures.some_calculation, args=(3, 1, 4), id="A") job_B = Job.create(func=fixtures.some_calculation, args=(2, 7, 2), id="B") # No dependencies job = Job.create(func=fixtures.say_hello) job.save() Job.fetch(job.id, connection=self.testconn) self.assertEqual(job._dependency_ids, []) # Various ways of specifying dependencies cases = [ ["A", ["A"]], [job_A, ["A"]], 
[["A", "B"], ["A", "B"]], [[job_A, job_B], ["A", "B"]], [["A", job_B], ["A", "B"]], [("A", "B"), ["A", "B"]], [(job_A, job_B), ["A", "B"]], [(job_A, "B"), ["A", "B"]], [Dependency("A"), ["A"]], [Dependency(job_A), ["A"]], [Dependency(["A", "B"]), ["A", "B"]], [Dependency([job_A, job_B]), ["A", "B"]], [Dependency(["A", job_B]), ["A", "B"]], [Dependency(("A", "B")), ["A", "B"]], [Dependency((job_A, job_B)), ["A", "B"]], [Dependency((job_A, "B")), ["A", "B"]], ] for given, expected in cases: job = Job.create(func=fixtures.say_hello, depends_on=given) job.save() Job.fetch(job.id, connection=self.testconn) self.assertEqual(job._dependency_ids, expected) def test_prepare_for_execution(self): """job.prepare_for_execution works properly""" job = Job.create(func=fixtures.say_hello) job.save() with self.testconn.pipeline() as pipeline: job.prepare_for_execution("worker_name", pipeline) pipeline.execute() job.refresh() self.assertEqual(job.worker_name, "worker_name") self.assertEqual(job.get_status(), JobStatus.STARTED) self.assertIsNotNone(job.last_heartbeat) self.assertIsNotNone(job.started_at) def test_job_access_outside_job_fails(self): """The current job is accessible only within a job context.""" self.assertIsNone(get_current_job()) def test_job_access_within_job_function(self): """The current job is accessible within the job function.""" q = Queue() job = q.enqueue(fixtures.access_self) w = Worker([q]) w.work(burst=True) # access_self calls get_current_job() and executes successfully self.assertEqual(job.get_status(), JobStatus.FINISHED) def test_job_access_within_synchronous_job_function(self): queue = Queue(is_async=False) queue.enqueue(fixtures.access_self) def test_job_async_status_finished(self): queue = Queue(is_async=False) job = queue.enqueue(fixtures.say_hello) self.assertEqual(job.result, 'Hi there, Stranger!') self.assertEqual(job.get_status(), JobStatus.FINISHED) def test_enqueue_job_async_status_finished(self): queue = Queue(is_async=False) job = Job.create(func=fixtures.say_hello) job = queue.enqueue_job(job) self.assertEqual(job.result, 'Hi there, Stranger!') self.assertEqual(job.get_status(), JobStatus.FINISHED) def test_get_result_ttl(self): """Getting job result TTL.""" job_result_ttl = 1 default_ttl = 2 job = Job.create(func=fixtures.say_hello, result_ttl=job_result_ttl) job.save() self.assertEqual(job.get_result_ttl(default_ttl=default_ttl), job_result_ttl) job = Job.create(func=fixtures.say_hello) job.save() self.assertEqual(job.get_result_ttl(default_ttl=default_ttl), default_ttl) def test_get_job_ttl(self): """Getting job TTL.""" ttl = 1 job = Job.create(func=fixtures.say_hello, ttl=ttl) job.save() self.assertEqual(job.get_ttl(), ttl) job = Job.create(func=fixtures.say_hello) job.save() self.assertEqual(job.get_ttl(), None) def test_ttl_via_enqueue(self): ttl = 1 queue = Queue(connection=self.testconn) job = queue.enqueue(fixtures.say_hello, ttl=ttl) self.assertEqual(job.get_ttl(), ttl) def test_never_expire_during_execution(self): """Test what happens when job expires during execution""" ttl = 1 queue = Queue(connection=self.testconn) job = queue.enqueue(fixtures.long_running_job, args=(2,), ttl=ttl) self.assertEqual(job.get_ttl(), ttl) job.save() job.perform() self.assertEqual(job.get_ttl(), ttl) self.assertTrue(job.exists(job.id)) self.assertEqual(job.result, 'Done sleeping...') def test_cleanup(self): """Test that jobs and results are expired properly.""" job = Job.create(func=fixtures.say_hello) job.save() # Jobs with negative TTLs don't expire job.cleanup(ttl=-1) 
self.assertEqual(self.testconn.ttl(job.key), -1) # Jobs with positive TTLs are eventually deleted job.cleanup(ttl=100) self.assertEqual(self.testconn.ttl(job.key), 100) # Jobs with 0 TTL are immediately deleted job.cleanup(ttl=0) self.assertRaises(NoSuchJobError, Job.fetch, job.id, self.testconn) def test_cleanup_expires_dependency_keys(self): dependency_job = Job.create(func=fixtures.say_hello) dependency_job.save() dependent_job = Job.create(func=fixtures.say_hello, depends_on=dependency_job) dependent_job.register_dependency() dependent_job.save() dependent_job.cleanup(ttl=100) dependency_job.cleanup(ttl=100) self.assertEqual(self.testconn.ttl(dependent_job.dependencies_key), 100) self.assertEqual(self.testconn.ttl(dependency_job.dependents_key), 100) def test_job_get_position(self): queue = Queue(connection=self.testconn) job = queue.enqueue(fixtures.say_hello) job2 = queue.enqueue(fixtures.say_hello) job3 = Job(fixtures.say_hello) self.assertEqual(0, job.get_position()) self.assertEqual(1, job2.get_position()) self.assertEqual(None, job3.get_position()) def test_job_with_dependents_delete_parent(self): """job.delete() deletes itself from Redis but not dependents. Without a save, the dependent job is never saved into redis. The delete method will get and pass a NoSuchJobError. """ queue = Queue(connection=self.testconn, serializer=JSONSerializer) job = queue.enqueue(fixtures.say_hello) job2 = Job.create(func=fixtures.say_hello, depends_on=job, serializer=JSONSerializer) job2.register_dependency() job.delete() self.assertFalse(self.testconn.exists(job.key)) self.assertFalse(self.testconn.exists(job.dependents_key)) # By default, dependents are not deleted, but the job is in redis only # if it was saved! self.assertFalse(self.testconn.exists(job2.key)) self.assertNotIn(job.id, queue.get_job_ids()) def test_job_delete_removes_itself_from_registries(self): """job.delete() should remove itself from job registries""" job = Job.create( func=fixtures.say_hello, status=JobStatus.FAILED, connection=self.testconn, origin='default', serializer=JSONSerializer, ) job.save() registry = FailedJobRegistry(connection=self.testconn, serializer=JSONSerializer) registry.add(job, 500) job.delete() self.assertFalse(job in registry) job = Job.create( func=fixtures.say_hello, status=JobStatus.STOPPED, connection=self.testconn, origin='default', serializer=JSONSerializer, ) job.save() registry = FailedJobRegistry(connection=self.testconn, serializer=JSONSerializer) registry.add(job, 500) job.delete() self.assertFalse(job in registry) job = Job.create( func=fixtures.say_hello, status=JobStatus.FINISHED, connection=self.testconn, origin='default', serializer=JSONSerializer, ) job.save() registry = FinishedJobRegistry(connection=self.testconn, serializer=JSONSerializer) registry.add(job, 500) job.delete() self.assertFalse(job in registry) job = Job.create( func=fixtures.say_hello, status=JobStatus.STARTED, connection=self.testconn, origin='default', serializer=JSONSerializer, ) job.save() registry = StartedJobRegistry(connection=self.testconn, serializer=JSONSerializer) registry.add(job, 500) job.delete() self.assertFalse(job in registry) job = Job.create( func=fixtures.say_hello, status=JobStatus.DEFERRED, connection=self.testconn, origin='default', serializer=JSONSerializer, ) job.save() registry = DeferredJobRegistry(connection=self.testconn, serializer=JSONSerializer) registry.add(job, 500) job.delete() self.assertFalse(job in registry) job = Job.create( func=fixtures.say_hello, status=JobStatus.SCHEDULED,
connection=self.testconn, origin='default', serializer=JSONSerializer, ) job.save() registry = ScheduledJobRegistry(connection=self.testconn, serializer=JSONSerializer) registry.add(job, 500) job.delete() self.assertFalse(job in registry) def test_job_with_dependents_delete_parent_with_saved(self): """job.delete() deletes itself from Redis but not dependents. If the dependent job was saved, it will remain in redis.""" queue = Queue(connection=self.testconn, serializer=JSONSerializer) job = queue.enqueue(fixtures.say_hello) job2 = Job.create(func=fixtures.say_hello, depends_on=job, serializer=JSONSerializer) job2.register_dependency() job2.save() job.delete() self.assertFalse(self.testconn.exists(job.key)) self.assertFalse(self.testconn.exists(job.dependents_key)) # By default, dependents are not deleted, but the job is in redis only # if it was saved! self.assertTrue(self.testconn.exists(job2.key)) self.assertNotIn(job.id, queue.get_job_ids()) def test_job_with_dependents_deleteall(self): """job.delete() deletes itself from Redis. Dependents need to be deleted explicitly.""" queue = Queue(connection=self.testconn, serializer=JSONSerializer) job = queue.enqueue(fixtures.say_hello) job2 = Job.create(func=fixtures.say_hello, depends_on=job, serializer=JSONSerializer) job2.register_dependency() job.delete(delete_dependents=True) self.assertFalse(self.testconn.exists(job.key)) self.assertFalse(self.testconn.exists(job.dependents_key)) self.assertFalse(self.testconn.exists(job2.key)) self.assertNotIn(job.id, queue.get_job_ids()) def test_job_with_dependents_delete_all_with_saved(self): """job.delete() deletes itself from Redis. Dependents need to be deleted explicitly. Here the dependent job was saved, so delete_dependents removes it from redis as well. """ queue = Queue(connection=self.testconn, serializer=JSONSerializer) job = queue.enqueue(fixtures.say_hello) job2 = Job.create(func=fixtures.say_hello, depends_on=job, serializer=JSONSerializer) job2.register_dependency() job2.save() job.delete(delete_dependents=True) self.assertFalse(self.testconn.exists(job.key)) self.assertFalse(self.testconn.exists(job.dependents_key)) self.assertFalse(self.testconn.exists(job2.key)) self.assertNotIn(job.id, queue.get_job_ids()) def test_dependent_job_creates_dependencies_key(self): queue = Queue(connection=self.testconn) dependency_job = queue.enqueue(fixtures.say_hello) dependent_job = Job.create(func=fixtures.say_hello, depends_on=dependency_job) dependent_job.register_dependency() dependent_job.save() self.assertTrue(self.testconn.exists(dependent_job.dependencies_key)) def test_dependent_job_deletes_dependencies_key(self): """ job.delete() deletes itself from Redis.
""" queue = Queue(connection=self.testconn, serializer=JSONSerializer) dependency_job = queue.enqueue(fixtures.say_hello) dependent_job = Job.create(func=fixtures.say_hello, depends_on=dependency_job, serializer=JSONSerializer) dependent_job.register_dependency() dependent_job.save() dependent_job.delete() self.assertTrue(self.testconn.exists(dependency_job.key)) self.assertFalse(self.testconn.exists(dependent_job.dependencies_key)) self.assertFalse(self.testconn.exists(dependent_job.key)) def test_create_job_with_id(self): """test creating jobs with a custom ID""" queue = Queue(connection=self.testconn) job = queue.enqueue(fixtures.say_hello, job_id="1234") self.assertEqual(job.id, "1234") job.perform() self.assertRaises(TypeError, queue.enqueue, fixtures.say_hello, job_id=1234) def test_create_job_with_async(self): """test creating jobs with async function""" queue = Queue(connection=self.testconn) async_job = queue.enqueue(fixtures.say_hello_async, job_id="async_job") sync_job = queue.enqueue(fixtures.say_hello, job_id="sync_job") self.assertEqual(async_job.id, "async_job") self.assertEqual(sync_job.id, "sync_job") async_task_result = async_job.perform() sync_task_result = sync_job.perform() self.assertEqual(sync_task_result, async_task_result) def test_get_call_string_unicode(self): """test call string with unicode keyword arguments""" queue = Queue(connection=self.testconn) job = queue.enqueue(fixtures.echo, arg_with_unicode=fixtures.UnicodeStringObject()) self.assertIsNotNone(job.get_call_string()) job.perform() def test_create_job_from_static_method(self): """test creating jobs with static method""" queue = Queue(connection=self.testconn) job = queue.enqueue(fixtures.ClassWithAStaticMethod.static_method) self.assertIsNotNone(job.get_call_string()) job.perform() def test_create_job_with_ttl_should_have_ttl_after_enqueued(self): """test creating jobs with ttl and checks if get_jobs returns it properly [issue502]""" queue = Queue(connection=self.testconn) queue.enqueue(fixtures.say_hello, job_id="1234", ttl=10) job = queue.get_jobs()[0] self.assertEqual(job.ttl, 10) def test_create_job_with_ttl_should_expire(self): """test if a job created with ttl expires [issue502]""" queue = Queue(connection=self.testconn) queue.enqueue(fixtures.say_hello, job_id="1234", ttl=1) time.sleep(1.1) self.assertEqual(0, len(queue.get_jobs())) def test_create_and_cancel_job(self): """Ensure job.cancel() works properly""" queue = Queue(connection=self.testconn) job = queue.enqueue(fixtures.say_hello) self.assertEqual(1, len(queue.get_jobs())) cancel_job(job.id) self.assertEqual(0, len(queue.get_jobs())) registry = CanceledJobRegistry(connection=self.testconn, queue=queue) self.assertIn(job, registry) self.assertEqual(job.get_status(), JobStatus.CANCELED) # If job is deleted, it's also removed from CanceledJobRegistry job.delete() self.assertNotIn(job, registry) def test_create_and_cancel_job_fails_already_canceled(self): """Ensure job.cancel() fails on already canceled job""" queue = Queue(connection=self.testconn) job = queue.enqueue(fixtures.say_hello, job_id='fake_job_id') self.assertEqual(1, len(queue.get_jobs())) # First cancel should be fine cancel_job(job.id) self.assertEqual(0, len(queue.get_jobs())) registry = CanceledJobRegistry(connection=self.testconn, queue=queue) self.assertIn(job, registry) self.assertEqual(job.get_status(), JobStatus.CANCELED) # Second cancel should fail self.assertRaisesRegex( InvalidJobOperation, r'Cannot cancel already canceled job: fake_job_id', cancel_job, job.id ) def 
test_create_and_cancel_job_enqueue_dependents(self): """Ensure job.cancel() works properly with enqueue_dependents=True""" queue = Queue(connection=self.testconn) dependency = queue.enqueue(fixtures.say_hello) dependent = queue.enqueue(fixtures.say_hello, depends_on=dependency) self.assertEqual(1, len(queue.get_jobs())) self.assertEqual(1, len(queue.deferred_job_registry)) cancel_job(dependency.id, enqueue_dependents=True) self.assertEqual(1, len(queue.get_jobs())) self.assertEqual(0, len(queue.deferred_job_registry)) registry = CanceledJobRegistry(connection=self.testconn, queue=queue) self.assertIn(dependency, registry) self.assertEqual(dependency.get_status(), JobStatus.CANCELED) self.assertIn(dependent, queue.get_jobs()) self.assertEqual(dependent.get_status(), JobStatus.QUEUED) # If job is deleted, it's also removed from CanceledJobRegistry dependency.delete() self.assertNotIn(dependency, registry) def test_create_and_cancel_job_enqueue_dependents_in_registry(self): """Ensure job.cancel() works properly with enqueue_dependents=True and when the job is in a registry""" queue = Queue(connection=self.testconn) dependency = queue.enqueue(fixtures.raise_exc) dependent = queue.enqueue(fixtures.say_hello, depends_on=dependency) print('# Post enqueue', self.testconn.smembers(dependency.dependents_key)) self.assertTrue(dependency.dependent_ids) self.assertEqual(1, len(queue.get_jobs())) self.assertEqual(1, len(queue.deferred_job_registry)) w = Worker([queue]) w.work(burst=True, max_jobs=1) self.assertTrue(dependency.dependent_ids) print('# Post work', self.testconn.smembers(dependency.dependents_key)) dependency.refresh() dependent.refresh() self.assertEqual(0, len(queue.get_jobs())) self.assertEqual(1, len(queue.deferred_job_registry)) self.assertEqual(1, len(queue.failed_job_registry)) print('# Pre cancel', self.testconn.smembers(dependency.dependents_key)) cancel_job(dependency.id, enqueue_dependents=True) dependency.refresh() dependent.refresh() print('#Post cancel', self.testconn.smembers(dependency.dependents_key)) self.assertEqual(1, len(queue.get_jobs())) self.assertEqual(0, len(queue.deferred_job_registry)) self.assertEqual(0, len(queue.failed_job_registry)) self.assertEqual(1, len(queue.canceled_job_registry)) registry = CanceledJobRegistry(connection=self.testconn, queue=queue) self.assertIn(dependency, registry) self.assertEqual(dependency.get_status(), JobStatus.CANCELED) self.assertNotIn(dependency, queue.failed_job_registry) self.assertIn(dependent, queue.get_jobs()) self.assertEqual(dependent.get_status(), JobStatus.QUEUED) # If job is deleted, it's also removed from CanceledJobRegistry dependency.delete() self.assertNotIn(dependency, registry) def test_create_and_cancel_job_enqueue_dependents_with_pipeline(self): """Ensure job.cancel() works properly with enqueue_dependents=True""" queue = Queue(connection=self.testconn) dependency = queue.enqueue(fixtures.say_hello) dependent = queue.enqueue(fixtures.say_hello, depends_on=dependency) self.assertEqual(1, len(queue.get_jobs())) self.assertEqual(1, len(queue.deferred_job_registry)) self.testconn.set('some:key', b'some:value') with self.testconn.pipeline() as pipe: pipe.watch('some:key') self.assertEqual(self.testconn.get('some:key'), b'some:value') dependency.cancel(pipeline=pipe, enqueue_dependents=True) pipe.set('some:key', b'some:other:value') pipe.execute() self.assertEqual(self.testconn.get('some:key'), b'some:other:value') self.assertEqual(1, len(queue.get_jobs())) self.assertEqual(0, len(queue.deferred_job_registry)) 
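# The WATCH/MULTI dance above is the point of this test: job.cancel() accepts
# an outer pipeline, so the cancellation and the caller's own writes commit
# together on pipe.execute(), and a concurrent change to the watched key would
# abort the whole transaction. The pattern, with an illustrative key name:
#
#   with connection.pipeline() as pipe:
#       pipe.watch('some:key')
#       job.cancel(pipeline=pipe, enqueue_dependents=True)
#       pipe.set('some:key', b'new-value')
#       pipe.execute()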
registry = CanceledJobRegistry(connection=self.testconn, queue=queue) self.assertIn(dependency, registry) self.assertEqual(dependency.get_status(), JobStatus.CANCELED) self.assertIn(dependent, queue.get_jobs()) self.assertEqual(dependent.get_status(), JobStatus.QUEUED) # If job is deleted, it's also removed from CanceledJobRegistry dependency.delete() self.assertNotIn(dependency, registry) def test_create_and_cancel_job_with_serializer(self): """test creating and using cancel_job (with serializer) deletes job properly""" queue = Queue(connection=self.testconn, serializer=JSONSerializer) job = queue.enqueue(fixtures.say_hello) self.assertEqual(1, len(queue.get_jobs())) cancel_job(job.id, serializer=JSONSerializer) self.assertEqual(0, len(queue.get_jobs())) def test_dependents_key_for_should_return_prefixed_job_id(self): """test redis key to store job dependents hash under""" job_id = 'random' key = Job.dependents_key_for(job_id=job_id) assert key == Job.redis_job_namespace_prefix + job_id + ':dependents' def test_key_for_should_return_prefixed_job_id(self): """test redis key to store job hash under""" job_id = 'random' key = Job.key_for(job_id=job_id) assert key == (Job.redis_job_namespace_prefix + job_id).encode('utf-8') def test_dependencies_key_should_have_prefixed_job_id(self): job_id = 'random' job = Job(id=job_id) expected_key = Job.redis_job_namespace_prefix + ":" + job_id + ':dependencies' assert job.dependencies_key == expected_key def test_fetch_dependencies_returns_dependency_jobs(self): queue = Queue(connection=self.testconn) dependency_job = queue.enqueue(fixtures.say_hello) dependent_job = Job.create(func=fixtures.say_hello, depends_on=dependency_job) dependent_job.register_dependency() dependent_job.save() dependencies = dependent_job.fetch_dependencies(pipeline=self.testconn) self.assertListEqual(dependencies, [dependency_job]) def test_fetch_dependencies_returns_empty_if_not_dependent_job(self): dependent_job = Job.create(func=fixtures.say_hello) dependent_job.register_dependency() dependent_job.save() dependencies = dependent_job.fetch_dependencies(pipeline=self.testconn) self.assertListEqual(dependencies, []) def test_fetch_dependencies_raises_if_dependency_deleted(self): queue = Queue(connection=self.testconn) dependency_job = queue.enqueue(fixtures.say_hello) dependent_job = Job.create(func=fixtures.say_hello, depends_on=dependency_job) dependent_job.register_dependency() dependent_job.save() dependency_job.delete() self.assertNotIn(dependent_job.id, [job.id for job in dependent_job.fetch_dependencies(pipeline=self.testconn)]) def test_fetch_dependencies_watches(self): queue = Queue(connection=self.testconn) dependency_job = queue.enqueue(fixtures.say_hello) dependent_job = Job.create(func=fixtures.say_hello, depends_on=dependency_job) dependent_job.register_dependency() dependent_job.save() with self.testconn.pipeline() as pipeline: dependent_job.fetch_dependencies(watch=True, pipeline=pipeline) pipeline.multi() with self.assertRaises(WatchError): self.testconn.set(Job.key_for(dependency_job.id), 'somethingelsehappened') pipeline.touch(dependency_job.id) pipeline.execute() def test_dependencies_finished_returns_false_if_dependencies_queued(self): queue = Queue(connection=self.testconn) dependency_job_ids = [queue.enqueue(fixtures.say_hello).id for _ in range(5)] dependent_job = Job.create(func=fixtures.say_hello) dependent_job._dependency_ids = dependency_job_ids dependent_job.register_dependency() dependencies_finished = dependent_job.dependencies_are_met() 
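# dependencies_are_met() reports whether every dependency has reached a
# terminal state: the surrounding tests show it is False while dependencies
# are merely queued or started, True once all have finished, and False when
# a parent was canceled. Conceptually (a paraphrase, not the implementation):
#
#   all(dep.get_status() == JobStatus.FINISHED for dep in dependencies)
#
# with failed/canceled parents additionally tolerated when the dependent was
# enqueued via Dependency(..., allow_failure=True).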
self.assertFalse(dependencies_finished) def test_dependencies_finished_returns_true_if_no_dependencies(self): dependent_job = Job.create(func=fixtures.say_hello) dependent_job.register_dependency() dependencies_finished = dependent_job.dependencies_are_met() self.assertTrue(dependencies_finished) def test_dependencies_finished_returns_true_if_all_dependencies_finished(self): dependency_jobs = [Job.create(fixtures.say_hello) for _ in range(5)] dependent_job = Job.create(func=fixtures.say_hello) dependent_job._dependency_ids = [job.id for job in dependency_jobs] dependent_job.register_dependency() now = utcnow() # Set ended_at timestamps for i, job in enumerate(dependency_jobs): job._status = JobStatus.FINISHED job.ended_at = now - timedelta(seconds=i) job.save() dependencies_finished = dependent_job.dependencies_are_met() self.assertTrue(dependencies_finished) def test_dependencies_finished_returns_false_if_unfinished_job(self): dependency_jobs = [Job.create(fixtures.say_hello) for _ in range(2)] dependency_jobs[0]._status = JobStatus.FINISHED dependency_jobs[0].ended_at = utcnow() dependency_jobs[0].save() dependency_jobs[1]._status = JobStatus.STARTED dependency_jobs[1].ended_at = None dependency_jobs[1].save() dependent_job = Job.create(func=fixtures.say_hello) dependent_job._dependency_ids = [job.id for job in dependency_jobs] dependent_job.register_dependency() dependencies_finished = dependent_job.dependencies_are_met() self.assertFalse(dependencies_finished) def test_dependencies_finished_watches_job(self): queue = Queue(connection=self.testconn) dependency_job = queue.enqueue(fixtures.say_hello) dependent_job = Job.create(func=fixtures.say_hello) dependent_job._dependency_ids = [dependency_job.id] dependent_job.register_dependency() with self.testconn.pipeline() as pipeline: dependent_job.dependencies_are_met( pipeline=pipeline, ) dependency_job.set_status(JobStatus.FAILED, pipeline=self.testconn) pipeline.multi() with self.assertRaises(WatchError): pipeline.touch(Job.key_for(dependent_job.id)) pipeline.execute() def test_execution_order_with_sole_dependency(self): queue = Queue(connection=self.testconn) key = 'test_job:job_order' # When there are no dependencies, the two fast jobs ("A" and "B") run in the order enqueued. # Worker 1 will be busy with the slow job, so worker 2 will complete both fast jobs. job_slow = queue.enqueue(fixtures.rpush, args=[key, "slow", True, 0.5], job_id='slow_job') job_A = queue.enqueue(fixtures.rpush, args=[key, "A", True]) job_B = queue.enqueue(fixtures.rpush, args=[key, "B", True]) fixtures.burst_two_workers(queue) time.sleep(0.75) jobs_completed = [v.decode() for v in self.testconn.lrange(key, 0, 2)] self.assertEqual(queue.count, 0) self.assertTrue(all(job.is_finished for job in [job_slow, job_A, job_B])) self.assertEqual(jobs_completed, ["A:w2", "B:w2", "slow:w1"]) self.testconn.delete(key) # When job "A" depends on the slow job, then job "B" finishes before "A". # There is no clear requirement on which worker should take job "A", so we stay silent on that. 
job_slow = queue.enqueue(fixtures.rpush, args=[key, "slow", True, 0.5], job_id='slow_job') job_A = queue.enqueue(fixtures.rpush, args=[key, "A", False], depends_on='slow_job') job_B = queue.enqueue(fixtures.rpush, args=[key, "B", True]) fixtures.burst_two_workers(queue) time.sleep(0.75) jobs_completed = [v.decode() for v in self.testconn.lrange(key, 0, 2)] self.assertEqual(queue.count, 0) self.assertTrue(all(job.is_finished for job in [job_slow, job_A, job_B])) self.assertEqual(jobs_completed, ["B:w2", "slow:w1", "A"]) def test_execution_order_with_dual_dependency(self): queue = Queue(connection=self.testconn) key = 'test_job:job_order' # When there are no dependencies, the two fast jobs ("A" and "B") run in the order enqueued. job_slow_1 = queue.enqueue(fixtures.rpush, args=[key, "slow_1", True, 0.5], job_id='slow_1') job_slow_2 = queue.enqueue(fixtures.rpush, args=[key, "slow_2", True, 0.75], job_id='slow_2') job_A = queue.enqueue(fixtures.rpush, args=[key, "A", True]) job_B = queue.enqueue(fixtures.rpush, args=[key, "B", True]) fixtures.burst_two_workers(queue) time.sleep(1) jobs_completed = [v.decode() for v in self.testconn.lrange(key, 0, 3)] self.assertEqual(queue.count, 0) self.assertTrue(all(job.is_finished for job in [job_slow_1, job_slow_2, job_A, job_B])) self.assertEqual(jobs_completed, ["slow_1:w1", "A:w1", "B:w1", "slow_2:w2"]) self.testconn.delete(key) # This time job "A" depends on two slow jobs, while job "B" depends only on the faster of # the two. Job "B" should be completed before job "A". # There is no clear requirement on which worker should take job "A", so we stay silent on that. job_slow_1 = queue.enqueue(fixtures.rpush, args=[key, "slow_1", True, 0.5], job_id='slow_1') job_slow_2 = queue.enqueue(fixtures.rpush, args=[key, "slow_2", True, 0.75], job_id='slow_2') job_A = queue.enqueue(fixtures.rpush, args=[key, "A", False], depends_on=['slow_1', 'slow_2']) job_B = queue.enqueue(fixtures.rpush, args=[key, "B", True], depends_on=['slow_1']) fixtures.burst_two_workers(queue) time.sleep(1) jobs_completed = [v.decode() for v in self.testconn.lrange(key, 0, 3)] self.assertEqual(queue.count, 0) self.assertTrue(all(job.is_finished for job in [job_slow_1, job_slow_2, job_A, job_B])) self.assertEqual(jobs_completed, ["slow_1:w1", "B:w1", "slow_2:w2", "A"]) @unittest.skipIf(get_version(Redis()) < (5, 0, 0), 'Skip if Redis server < 5.0') def test_blocking_result_fetch(self): # Ensure blocking waits for the time to run the job, but not right up until the timeout. 
job_sleep_seconds = 2 block_seconds = 5 queue_name = "test_blocking_queue" q = Queue(queue_name) job = q.enqueue(fixtures.long_running_job, job_sleep_seconds) started_at = time.time() fixtures.start_worker_process(queue_name, burst=True) result = job.latest_result(timeout=block_seconds) blocked_for = time.time() - started_at self.assertEqual(job.get_status(), JobStatus.FINISHED) self.assertIsNotNone(result) self.assertGreaterEqual(blocked_for, job_sleep_seconds) self.assertLess(blocked_for, block_seconds) rq-1.16.2/tests/test_queue.py0000644000000000000000000010053013615410400013041 0ustar00import json import unittest from datetime import datetime, timedelta, timezone from unittest.mock import patch from redis import Redis from rq import Queue, Retry from rq.job import Job, JobStatus from rq.registry import ( CanceledJobRegistry, DeferredJobRegistry, FailedJobRegistry, FinishedJobRegistry, ScheduledJobRegistry, StartedJobRegistry, ) from rq.serializers import JSONSerializer from rq.utils import get_version from rq.worker import Worker from tests import RQTestCase from tests.fixtures import echo, say_hello class MultipleDependencyJob(Job): """ Allows for the patching of `_dependency_ids` to simulate multi-dependency support without modifying the public interface of `Job` """ create_job = Job.create @classmethod def create(cls, *args, **kwargs): dependency_ids = kwargs.pop('kwargs').pop('_dependency_ids') _job = cls.create_job(*args, **kwargs) _job._dependency_ids = dependency_ids return _job class TestQueue(RQTestCase): def test_create_queue(self): """Creating queues.""" q = Queue('my-queue') self.assertEqual(q.name, 'my-queue') self.assertEqual(str(q), '<Queue my-queue>') def test_create_queue_with_serializer(self): """Creating queues with serializer.""" # Test using json serializer q = Queue('queue-with-serializer', serializer=json) self.assertEqual(q.name, 'queue-with-serializer') self.assertEqual(str(q), '<Queue queue-with-serializer>') self.assertIsNotNone(q.serializer) def test_create_default_queue(self): """Instantiating the default queue.""" q = Queue() self.assertEqual(q.name, 'default') def test_equality(self): """Mathematical equality of queues.""" q1 = Queue('foo') q2 = Queue('foo') q3 = Queue('bar') self.assertEqual(q1, q2) self.assertEqual(q2, q1) self.assertNotEqual(q1, q3) self.assertNotEqual(q2, q3) self.assertGreater(q1, q3) self.assertRaises(TypeError, lambda: q1 == 'some string') self.assertRaises(TypeError, lambda: q1 < 'some string') def test_empty_queue(self): """Emptying queues.""" q = Queue('example') self.testconn.rpush('rq:queue:example', 'foo') self.testconn.rpush('rq:queue:example', 'bar') self.assertEqual(q.is_empty(), False) q.empty() self.assertEqual(q.is_empty(), True) self.assertIsNone(self.testconn.lpop('rq:queue:example')) def test_empty_removes_jobs(self): """Emptying a queue deletes the associated job objects""" q = Queue('example') job = q.enqueue(say_hello) self.assertTrue(Job.exists(job.id)) q.empty() self.assertFalse(Job.exists(job.id)) def test_queue_is_empty(self): """Detecting empty queues.""" q = Queue('example') self.assertEqual(q.is_empty(), True) self.testconn.rpush('rq:queue:example', 'sentinel message') self.assertEqual(q.is_empty(), False) def test_queue_delete(self): """Test queue.delete properly removes queue""" q = Queue('example') job = q.enqueue(say_hello) job2 = q.enqueue(say_hello) self.assertEqual(2, len(q.get_job_ids())) q.delete() self.assertEqual(0, len(q.get_job_ids())) self.assertEqual(False, self.testconn.exists(job.key)) self.assertEqual(False,
                         self.testconn.exists(job2.key))
        self.assertEqual(0, len(self.testconn.smembers(Queue.redis_queues_keys)))
        self.assertEqual(False, self.testconn.exists(q.key))

    def test_queue_delete_but_keep_jobs(self):
        """Test queue.delete properly removes queue but keeps the job keys in the redis store"""
        q = Queue('example')
        job = q.enqueue(say_hello)
        job2 = q.enqueue(say_hello)

        self.assertEqual(2, len(q.get_job_ids()))

        q.delete(delete_jobs=False)

        self.assertEqual(0, len(q.get_job_ids()))
        self.assertEqual(True, self.testconn.exists(job.key))
        self.assertEqual(True, self.testconn.exists(job2.key))
        self.assertEqual(0, len(self.testconn.smembers(Queue.redis_queues_keys)))
        self.assertEqual(False, self.testconn.exists(q.key))

    def test_position(self):
        """Test queue.get_job_position returns the right position for a job (or job id)"""
        q = Queue('example')
        job = q.enqueue(say_hello)
        job2 = q.enqueue(say_hello)
        job3 = q.enqueue(say_hello)

        self.assertEqual(0, q.get_job_position(job.id))
        self.assertEqual(1, q.get_job_position(job2.id))
        self.assertEqual(2, q.get_job_position(job3))
        self.assertEqual(None, q.get_job_position("no_real_job"))

    def test_remove(self):
        """Ensure queue.remove properly removes Job from queue."""
        q = Queue('example', serializer=JSONSerializer)
        job = q.enqueue(say_hello)
        self.assertIn(job.id, q.job_ids)
        q.remove(job)
        self.assertNotIn(job.id, q.job_ids)

        job = q.enqueue(say_hello)
        self.assertIn(job.id, q.job_ids)
        q.remove(job.id)
        self.assertNotIn(job.id, q.job_ids)

    def test_jobs(self):
        """Getting jobs out of a queue."""
        q = Queue('example')
        self.assertEqual(q.jobs, [])
        job = q.enqueue(say_hello)
        self.assertEqual(q.jobs, [job])

        # Deleting job removes it from queue
        job.delete()
        self.assertEqual(q.job_ids, [])

    def test_compact(self):
        """Queue.compact() removes non-existing jobs."""
        q = Queue()
        q.enqueue(say_hello, 'Alice')
        q.enqueue(say_hello, 'Charlie')
        self.testconn.lpush(q.key, '1', '2')

        self.assertEqual(q.count, 4)
        self.assertEqual(len(q), 4)

        q.compact()

        self.assertEqual(q.count, 2)
        self.assertEqual(len(q), 2)

    def test_enqueue(self):
        """Enqueueing job onto queues."""
        q = Queue()
        self.assertEqual(q.is_empty(), True)

        # say_hello spec holds which queue this is sent to
        job = q.enqueue(say_hello, 'Nick', foo='bar')
        job_id = job.id
        self.assertEqual(job.origin, q.name)

        # Inspect data inside Redis
        q_key = 'rq:queue:default'
        self.assertEqual(self.testconn.llen(q_key), 1)
        self.assertEqual(self.testconn.lrange(q_key, 0, -1)[0].decode('ascii'), job_id)

    def test_enqueue_sets_metadata(self):
        """Enqueueing job onto queues modifies meta data."""
        q = Queue()
        job = Job.create(func=say_hello, args=('Nick',), kwargs=dict(foo='bar'))

        # Preconditions
        self.assertIsNone(job.enqueued_at)

        # Action
        q.enqueue_job(job)

        # Postconditions
        self.assertIsNotNone(job.enqueued_at)

    def test_pop_job_id(self):
        """Popping job IDs from queues."""
        # Set up
        q = Queue()
        uuid = '112188ae-4e9d-4a5b-a5b3-f26f2cb054da'
        q.push_job_id(uuid)

        # Pop it off the queue...
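        # A queue is backed by a plain Redis list keyed 'rq:queue:<name>' (see the raw
        # rpush/lrange calls in the tests above), so push_job_id/pop_job_id are thin
        # wrappers over list pushes and pops. Illustrative peek, not part of the test:
        #     self.testconn.lrange('rq:queue:default', 0, -1)  # -> [b'<job_id>', ...]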
        self.assertEqual(q.count, 1)
        self.assertEqual(q.pop_job_id(), uuid)

        # ...and assert the queue count went down
        self.assertEqual(q.count, 0)

    def test_dequeue_any(self):
        """Fetching work from any given queue."""
        fooq = Queue('foo', connection=self.testconn)
        barq = Queue('bar', connection=self.testconn)

        self.assertRaises(ValueError, Queue.dequeue_any, [fooq, barq], timeout=0, connection=self.testconn)

        self.assertEqual(Queue.dequeue_any([fooq, barq], None), None)

        # Enqueue a single item
        barq.enqueue(say_hello)
        job, queue = Queue.dequeue_any([fooq, barq], None)
        self.assertEqual(job.func, say_hello)
        self.assertEqual(queue, barq)

        # Enqueue items on both queues
        barq.enqueue(say_hello, 'for Bar')
        fooq.enqueue(say_hello, 'for Foo')

        job, queue = Queue.dequeue_any([fooq, barq], None)
        self.assertEqual(queue, fooq)
        self.assertEqual(job.func, say_hello)
        self.assertEqual(job.origin, fooq.name)
        self.assertEqual(job.args[0], 'for Foo', 'Foo should be dequeued first.')

        job, queue = Queue.dequeue_any([fooq, barq], None)
        self.assertEqual(queue, barq)
        self.assertEqual(job.func, say_hello)
        self.assertEqual(job.origin, barq.name)
        self.assertEqual(job.args[0], 'for Bar', 'Bar should be dequeued second.')

    @unittest.skipIf(get_version(Redis()) < (6, 2, 0), 'Skip if Redis server < 6.2.0')
    def test_dequeue_any_reliable(self):
        """Dequeueing job from a single queue moves job to intermediate queue."""
        # (Reliable dequeue presumably relies on Redis' LMOVE to atomically move the job
        # ID onto the intermediate queue; LMOVE needs Redis >= 6.2, hence the skip.)
        foo_queue = Queue('foo', connection=self.testconn)
        job_1 = foo_queue.enqueue(say_hello)
        self.assertRaises(ValueError, Queue.dequeue_any, [foo_queue], timeout=0, connection=self.testconn)

        # Job ID is not in intermediate queue
        self.assertIsNone(self.testconn.lpos(foo_queue.intermediate_queue_key, job_1.id))
        job, queue = Queue.dequeue_any([foo_queue], timeout=None, connection=self.testconn)
        self.assertEqual(queue, foo_queue)
        self.assertEqual(job.func, say_hello)
        # After job is dequeued, the job ID is in the intermediate queue
        self.assertEqual(self.testconn.lpos(foo_queue.intermediate_queue_key, job.id), 0)

        # Test the blocking version
        foo_queue.enqueue(say_hello)
        job, queue = Queue.dequeue_any([foo_queue], timeout=1, connection=self.testconn)
        self.assertEqual(queue, foo_queue)
        self.assertEqual(job.func, say_hello)
        # After job is dequeued, the job ID is in the intermediate queue
        self.assertEqual(self.testconn.lpos(foo_queue.intermediate_queue_key, job.id), 1)

    @unittest.skipIf(get_version(Redis()) < (6, 2, 0), 'Skip if Redis server < 6.2.0')
    def test_intermediate_queue(self):
        """Job should be stuck in intermediate queue if execution fails after dequeued."""
        queue = Queue('foo', connection=self.testconn)
        job = queue.enqueue(say_hello)

        # If job execution fails after it's dequeued, job should be in the intermediate queue
        # and its status is still QUEUED
        with patch.object(Worker, 'execute_job'):
            # mocked.execute_job.side_effect = Exception()
            worker = Worker(queue, connection=self.testconn)
            worker.work(burst=True)

            # Job status is still QUEUED even though it's already dequeued
            self.assertEqual(job.get_status(refresh=True), JobStatus.QUEUED)
            self.assertFalse(job.id in queue.get_job_ids())
            self.assertIsNotNone(self.testconn.lpos(queue.intermediate_queue_key, job.id))

    def test_dequeue_any_ignores_nonexisting_jobs(self):
        """Dequeuing (from any queue) silently ignores non-existing jobs."""
        q = Queue('low')
        uuid = '49f205ab-8ea3-47dd-a1b5-bfa186870fc8'
        q.push_job_id(uuid)

        # Dequeue simply ignores the missing job and returns None
        self.assertEqual(q.count, 1)
        self.assertEqual(Queue.dequeue_any([Queue(), Queue('low')], None), None)  # noqa
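        # The stale ID is still popped off the list even though its job hash is gone,
        # so the queue count drops to zero below.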
        self.assertEqual(q.count, 0)

    def test_enqueue_with_ttl(self):
        """Negative TTL value is not allowed"""
        queue = Queue()
        self.assertRaises(ValueError, queue.enqueue, echo, 1, ttl=0)
        self.assertRaises(ValueError, queue.enqueue, echo, 1, ttl=-1)

    def test_enqueue_sets_status(self):
        """Enqueueing a job sets its status to "queued"."""
        q = Queue()
        job = q.enqueue(say_hello)
        self.assertEqual(job.get_status(), JobStatus.QUEUED)

    def test_enqueue_meta_arg(self):
        """enqueue() can set the job.meta contents."""
        q = Queue()
        job = q.enqueue(say_hello, meta={'foo': 'bar', 'baz': 42})
        self.assertEqual(job.meta['foo'], 'bar')
        self.assertEqual(job.meta['baz'], 42)

    def test_enqueue_with_failure_ttl(self):
        """enqueue() properly sets job.failure_ttl"""
        q = Queue()
        job = q.enqueue(say_hello, failure_ttl=10)
        job.refresh()
        self.assertEqual(job.failure_ttl, 10)

    def test_job_timeout(self):
        """Timeout can be passed via job_timeout argument"""
        queue = Queue()
        job = queue.enqueue(echo, 1, job_timeout=15)
        self.assertEqual(job.timeout, 15)

        # Not passing job_timeout will use queue._default_timeout
        job = queue.enqueue(echo, 1)
        self.assertEqual(job.timeout, queue._default_timeout)

        # job_timeout = 0 is not allowed
        self.assertRaises(ValueError, queue.enqueue, echo, 1, job_timeout=0)

    def test_default_timeout(self):
        """The queue's default timeout is used when job_timeout isn't specified"""
        queue = Queue()
        job = queue.enqueue(echo, 1)
        self.assertEqual(job.timeout, queue.DEFAULT_TIMEOUT)

        job = Job.create(func=echo)
        job = queue.enqueue_job(job)
        self.assertEqual(job.timeout, queue.DEFAULT_TIMEOUT)

        queue = Queue(default_timeout=15)
        job = queue.enqueue(echo, 1)
        self.assertEqual(job.timeout, 15)

        job = Job.create(func=echo)
        job = queue.enqueue_job(job)
        self.assertEqual(job.timeout, 15)

    def test_synchronous_timeout(self):
        queue = Queue(is_async=False)
        self.assertFalse(queue.is_async)

        no_expire_job = queue.enqueue(echo, result_ttl=-1)
        # Redis TTL semantics: -1 means the key exists with no expiry, -2 means the key is gone
        self.assertEqual(queue.connection.ttl(no_expire_job.key), -1)

        delete_job = queue.enqueue(echo, result_ttl=0)
        self.assertEqual(queue.connection.ttl(delete_job.key), -2)

        keep_job = queue.enqueue(echo, result_ttl=100)
        self.assertLessEqual(queue.connection.ttl(keep_job.key), 100)

    def test_enqueue_explicit_args(self):
        """enqueue() works for both implicit/explicit args."""
        q = Queue()

        # Implicit args/kwargs mode
        job = q.enqueue(echo, 1, job_timeout=1, result_ttl=1, bar='baz')
        self.assertEqual(job.timeout, 1)
        self.assertEqual(job.result_ttl, 1)
        self.assertEqual(job.perform(), ((1,), {'bar': 'baz'}))

        # Explicit kwargs mode
        kwargs = {
            'timeout': 1,
            'result_ttl': 1,
        }
        job = q.enqueue(echo, job_timeout=2, result_ttl=2, args=[1], kwargs=kwargs)
        self.assertEqual(job.timeout, 2)
        self.assertEqual(job.result_ttl, 2)
        self.assertEqual(job.perform(), ((1,), {'timeout': 1, 'result_ttl': 1}))

        # Explicit args and kwargs should also work with enqueue_at
        time = datetime.now(timezone.utc) + timedelta(seconds=10)
        job = q.enqueue_at(time, echo, job_timeout=2, result_ttl=2, args=[1], kwargs=kwargs)
        self.assertEqual(job.timeout, 2)
        self.assertEqual(job.result_ttl, 2)
        self.assertEqual(job.perform(), ((1,), {'timeout': 1, 'result_ttl': 1}))

        # Positional arguments are not allowed if explicit args and kwargs are used
        self.assertRaises(Exception, q.enqueue, echo, 1, kwargs=kwargs)

    def test_all_queues(self):
        """All queues"""
        q1 = Queue('first-queue')
        q2 = Queue('second-queue')
        q3 = Queue('third-queue')

        # Ensure a queue is added only once a job is enqueued
        self.assertEqual(len(Queue.all()), 0)
        q1.enqueue(say_hello)
        self.assertEqual(len(Queue.all()), 1)

        # Ensure this
        # holds true for multiple queues
        q2.enqueue(say_hello)
        q3.enqueue(say_hello)
        names = [q.name for q in Queue.all()]
        self.assertEqual(len(Queue.all()), 3)

        # Verify names
        self.assertTrue('first-queue' in names)
        self.assertTrue('second-queue' in names)
        self.assertTrue('third-queue' in names)

        # Now empty two queues
        w = Worker([q2, q3])
        w.work(burst=True)

        # Queue.all() should still report the empty queues
        self.assertEqual(len(Queue.all()), 3)

    def test_all_custom_job(self):
        class CustomJob(Job):
            pass

        q = Queue('all-queue')
        q.enqueue(say_hello)
        queues = Queue.all(job_class=CustomJob)
        self.assertEqual(len(queues), 1)
        self.assertIs(queues[0].job_class, CustomJob)

    def test_from_queue_key(self):
        """Ensure being able to get a Queue instance manually from Redis"""
        q = Queue()
        key = Queue.redis_queue_namespace_prefix + 'default'
        reverse_q = Queue.from_queue_key(key)
        self.assertEqual(q, reverse_q)

    def test_from_queue_key_error(self):
        """Ensure that an exception is raised if the queue prefix is wrong"""
        key = 'some:weird:prefix:' + 'default'
        self.assertRaises(ValueError, Queue.from_queue_key, key)

    def test_enqueue_dependents(self):
        """Enqueueing dependent jobs pushes all jobs in the depends set to the queue
        and removes them from DeferredJobRegistry."""
        q = Queue()
        parent_job = Job.create(func=say_hello)
        parent_job.save()
        job_1 = q.enqueue(say_hello, depends_on=parent_job)
        job_2 = q.enqueue(say_hello, depends_on=parent_job)

        registry = DeferredJobRegistry(q.name, connection=self.testconn)
        parent_job.set_status(JobStatus.FINISHED)

        self.assertEqual(set(registry.get_job_ids()), set([job_1.id, job_2.id]))
        # After dependents are enqueued, job_1 and job_2 should be in queue
        self.assertEqual(q.job_ids, [])
        q.enqueue_dependents(parent_job)
        self.assertEqual(set(q.job_ids), set([job_2.id, job_1.id]))
        self.assertFalse(self.testconn.exists(parent_job.dependents_key))

        # DeferredJobRegistry should also be empty
        self.assertEqual(registry.get_job_ids(), [])

    def test_enqueue_dependents_on_multiple_queues(self):
        """Enqueueing dependent jobs on multiple queues pushes jobs in the queues
        and removes them from DeferredJobRegistry for each different queue."""
        q_1 = Queue("queue_1")
        q_2 = Queue("queue_2")
        parent_job = Job.create(func=say_hello)
        parent_job.save()
        job_1 = q_1.enqueue(say_hello, depends_on=parent_job)
        job_2 = q_2.enqueue(say_hello, depends_on=parent_job)

        # Each queue has its own DeferredJobRegistry
        registry_1 = DeferredJobRegistry(q_1.name, connection=self.testconn)
        self.assertEqual(set(registry_1.get_job_ids()), set([job_1.id]))
        registry_2 = DeferredJobRegistry(q_2.name, connection=self.testconn)

        parent_job.set_status(JobStatus.FINISHED)

        self.assertEqual(set(registry_2.get_job_ids()), set([job_2.id]))

        # After dependents are enqueued, job_1 should be in queue_1
        # and job_2 should be in queue_2
        self.assertEqual(q_1.job_ids, [])
        self.assertEqual(q_2.job_ids, [])
        q_1.enqueue_dependents(parent_job)
        q_2.enqueue_dependents(parent_job)
        self.assertEqual(set(q_1.job_ids), set([job_1.id]))
        self.assertEqual(set(q_2.job_ids), set([job_2.id]))
        self.assertFalse(self.testconn.exists(parent_job.dependents_key))

        # DeferredJobRegistry should also be empty
        self.assertEqual(registry_1.get_job_ids(), [])
        self.assertEqual(registry_2.get_job_ids(), [])

    def test_enqueue_job_with_dependency(self):
        """Jobs are enqueued only when their dependencies are finished."""
        # Job with unfinished dependency is not immediately enqueued
        parent_job = Job.create(func=say_hello)
        parent_job.save()
        q = Queue()
        job = q.enqueue_call(say_hello, depends_on=parent_job)
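        # With an unfinished dependency the job is parked in DeferredJobRegistry with
        # status DEFERRED instead of being pushed onto the queue, as asserted below.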
        self.assertEqual(q.job_ids, [])
        self.assertEqual(job.get_status(), JobStatus.DEFERRED)

        # Jobs dependent on finished jobs are immediately enqueued
        parent_job.set_status(JobStatus.FINISHED)
        parent_job.save()
        job = q.enqueue_call(say_hello, depends_on=parent_job)
        self.assertEqual(q.job_ids, [job.id])
        self.assertEqual(job.timeout, Queue.DEFAULT_TIMEOUT)
        self.assertEqual(job.get_status(), JobStatus.QUEUED)

    def test_enqueue_job_with_dependency_and_pipeline(self):
        """Jobs are enqueued only when their dependencies are finished, and by the caller when passing a pipeline."""
        # Job with unfinished dependency is not immediately enqueued
        parent_job = Job.create(func=say_hello)
        parent_job.save()
        q = Queue()
        with q.connection.pipeline() as pipe:
            job = q.enqueue_call(say_hello, depends_on=parent_job, pipeline=pipe)
            self.assertEqual(q.job_ids, [])
            self.assertEqual(job.get_status(refresh=False), JobStatus.DEFERRED)
            # Not in registry before execute, since passed in pipeline
            self.assertEqual(len(q.deferred_job_registry), 0)
            pipe.execute()
        # Only in registry after execute, since passed in pipeline
        self.assertEqual(len(q.deferred_job_registry), 1)

        # Jobs dependent on finished jobs are immediately enqueued
        parent_job.set_status(JobStatus.FINISHED)
        parent_job.save()
        with q.connection.pipeline() as pipe:
            job = q.enqueue_call(say_hello, depends_on=parent_job, pipeline=pipe)
            # Pre execute conditions
            self.assertEqual(q.job_ids, [])
            self.assertEqual(job.timeout, Queue.DEFAULT_TIMEOUT)
            self.assertEqual(job.get_status(refresh=False), JobStatus.QUEUED)
            pipe.execute()
        # Post execute conditions
        self.assertEqual(q.job_ids, [job.id])
        self.assertEqual(job.timeout, Queue.DEFAULT_TIMEOUT)
        self.assertEqual(job.get_status(refresh=False), JobStatus.QUEUED)

    def test_enqueue_job_with_no_dependency_prior_watch_and_pipeline(self):
        """Jobs with no dependencies are enqueued by the caller when passing a pipeline, even after a watch."""
        q = Queue()
        with q.connection.pipeline() as pipe:
            pipe.watch(b'fake_key')  # Test watch then enqueue
            job = q.enqueue_call(say_hello, pipeline=pipe)
            self.assertEqual(q.job_ids, [])
            self.assertEqual(job.get_status(refresh=False), JobStatus.QUEUED)
            # Not in queue before execute, since passed in pipeline
            self.assertEqual(len(q), 0)
            # Make sure modifying the watched key doesn't cause issues; if in multi mode, it won't fail
            pipe.set(b'fake_key', b'fake_value')
            pipe.execute()
        # Only in queue after execute, since passed in pipeline
        self.assertEqual(len(q), 1)

    def test_enqueue_many_internal_pipeline(self):
        """Jobs should be enqueued in bulk with an internal pipeline, enqueued in order provided
        (but at_front still applies)"""
        q = Queue()
        job_1_data = Queue.prepare_data(say_hello, job_id='fake_job_id_1', at_front=False)
        job_2_data = Queue.prepare_data(say_hello, job_id='fake_job_id_2', at_front=False)
        job_3_data = Queue.prepare_data(say_hello, job_id='fake_job_id_3', at_front=True)
        jobs = q.enqueue_many(
            [job_1_data, job_2_data, job_3_data],
        )
        for job in jobs:
            self.assertEqual(job.get_status(refresh=False), JobStatus.QUEUED)
        # All three jobs are on the queue once enqueue_many() returns
        self.assertEqual(len(q), 3)
        self.assertEqual(q.job_ids, ['fake_job_id_3', 'fake_job_id_1', 'fake_job_id_2'])

    def test_enqueue_many_with_passed_pipeline(self):
        """Jobs should be enqueued in bulk with a passed pipeline, enqueued in order provided
        (but at_front still applies)"""
        q = Queue()
        with q.connection.pipeline() as pipe:
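            # Typical bulk-enqueue pattern: build the job payloads with Queue.prepare_data(),
            # then hand the whole batch plus the outer pipeline to enqueue_many(); nothing
            # is sent to Redis until pipe.execute() runs.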
            job_1_data = Queue.prepare_data(say_hello, job_id='fake_job_id_1', at_front=False)
            job_2_data = Queue.prepare_data(say_hello, job_id='fake_job_id_2', at_front=False)
            job_3_data = Queue.prepare_data(say_hello, job_id='fake_job_id_3', at_front=True)
            jobs = q.enqueue_many([job_1_data, job_2_data, job_3_data], pipeline=pipe)
            self.assertEqual(q.job_ids, [])
            for job in jobs:
                self.assertEqual(job.get_status(refresh=False), JobStatus.QUEUED)
            pipe.execute()
        # Only in queue after execute, since passed in pipeline
        self.assertEqual(len(q), 3)
        self.assertEqual(q.job_ids, ['fake_job_id_3', 'fake_job_id_1', 'fake_job_id_2'])

    def test_enqueue_job_with_dependency_by_id(self):
        """Can specify job dependency with job object or job id."""
        parent_job = Job.create(func=say_hello)
        parent_job.save()

        q = Queue()
        q.enqueue_call(say_hello, depends_on=parent_job.id)
        self.assertEqual(q.job_ids, [])

        # Jobs dependent on finished jobs are immediately enqueued
        parent_job.set_status(JobStatus.FINISHED)
        parent_job.save()
        job = q.enqueue_call(say_hello, depends_on=parent_job.id)
        self.assertEqual(q.job_ids, [job.id])
        self.assertEqual(job.timeout, Queue.DEFAULT_TIMEOUT)

    def test_enqueue_job_with_dependency_and_timeout(self):
        """Jobs remember their timeout when enqueued as a dependency."""
        # Job with unfinished dependency is not immediately enqueued
        parent_job = Job.create(func=say_hello)
        parent_job.save()
        q = Queue()
        job = q.enqueue_call(say_hello, depends_on=parent_job, timeout=123)
        self.assertEqual(q.job_ids, [])
        self.assertEqual(job.timeout, 123)

        # Jobs dependent on finished jobs are immediately enqueued
        parent_job.set_status(JobStatus.FINISHED)
        parent_job.save()
        job = q.enqueue_call(say_hello, depends_on=parent_job, timeout=123)
        self.assertEqual(q.job_ids, [job.id])
        self.assertEqual(job.timeout, 123)

    def test_enqueue_job_with_multiple_queued_dependencies(self):
        parent_jobs = [Job.create(func=say_hello) for _ in range(2)]

        for job in parent_jobs:
            job._status = JobStatus.QUEUED
            job.save()

        q = Queue()
        with patch('rq.queue.Job.create', new=MultipleDependencyJob.create):
            job = q.enqueue(say_hello, depends_on=parent_jobs[0], _dependency_ids=[job.id for job in parent_jobs])

        self.assertEqual(job.get_status(), JobStatus.DEFERRED)
        self.assertEqual(q.job_ids, [])
        self.assertEqual(job.fetch_dependencies(), parent_jobs)

    def test_enqueue_job_with_multiple_finished_dependencies(self):
        parent_jobs = [Job.create(func=say_hello) for _ in range(2)]

        for job in parent_jobs:
            job._status = JobStatus.FINISHED
            job.save()

        q = Queue()
        with patch('rq.queue.Job.create', new=MultipleDependencyJob.create):
            job = q.enqueue(say_hello, depends_on=parent_jobs[0], _dependency_ids=[job.id for job in parent_jobs])

        self.assertEqual(job.get_status(), JobStatus.QUEUED)
        self.assertEqual(q.job_ids, [job.id])
        self.assertEqual(job.fetch_dependencies(), parent_jobs)

    def test_enqueues_dependent_if_other_dependencies_finished(self):
        parent_jobs = [Job.create(func=say_hello) for _ in range(3)]
        parent_jobs[0]._status = JobStatus.STARTED
        parent_jobs[0].save()

        parent_jobs[1]._status = JobStatus.FINISHED
        parent_jobs[1].save()

        parent_jobs[2]._status = JobStatus.FINISHED
        parent_jobs[2].save()

        q = Queue()
        with patch('rq.queue.Job.create', new=MultipleDependencyJob.create):
            # dependent job deferred, b/c parent_job 0 is still 'started'
            dependent_job = q.enqueue(
                say_hello, depends_on=parent_jobs[0], _dependency_ids=[job.id for job in parent_jobs]
            )
            self.assertEqual(dependent_job.get_status(), JobStatus.DEFERRED)

        # now set parent job 0 to 'finished'
        parent_jobs[0].set_status(JobStatus.FINISHED)
        q.enqueue_dependents(parent_jobs[0])
        self.assertEqual(dependent_job.get_status(), JobStatus.QUEUED)
        self.assertEqual(q.job_ids, [dependent_job.id])

    def test_does_not_enqueue_dependent_if_other_dependencies_not_finished(self):
        started_dependency = Job.create(func=say_hello, status=JobStatus.STARTED)
        started_dependency.save()

        queued_dependency = Job.create(func=say_hello, status=JobStatus.QUEUED)
        queued_dependency.save()

        q = Queue()
        with patch('rq.queue.Job.create', new=MultipleDependencyJob.create):
            dependent_job = q.enqueue(
                say_hello,
                depends_on=[started_dependency],
                _dependency_ids=[started_dependency.id, queued_dependency.id],
            )
            self.assertEqual(dependent_job.get_status(), JobStatus.DEFERRED)

        q.enqueue_dependents(started_dependency)
        self.assertEqual(dependent_job.get_status(), JobStatus.DEFERRED)
        self.assertEqual(q.job_ids, [])

    def test_fetch_job_successful(self):
        """Fetch a job from a queue."""
        q = Queue('example')
        job_orig = q.enqueue(say_hello)
        job_fetch: Job = q.fetch_job(job_orig.id)  # type: ignore
        self.assertIsNotNone(job_fetch)
        self.assertEqual(job_orig.id, job_fetch.id)
        self.assertEqual(job_orig.description, job_fetch.description)

    def test_fetch_job_missing(self):
        """Fetching a job ID that doesn't exist returns None."""
        q = Queue('example')
        job = q.fetch_job('123')
        self.assertIsNone(job)

    def test_fetch_job_different_queue(self):
        """Fetching a job enqueued on a different queue returns None."""
        q1 = Queue('example1')
        q2 = Queue('example2')
        job_orig = q1.enqueue(say_hello)
        job_fetch = q2.fetch_job(job_orig.id)
        self.assertIsNone(job_fetch)

        job_fetch = q1.fetch_job(job_orig.id)
        self.assertIsNotNone(job_fetch)

    def test_getting_registries(self):
        """Getting job registries from queue object"""
        queue = Queue('example')
        self.assertEqual(queue.scheduled_job_registry, ScheduledJobRegistry(queue=queue))
        self.assertEqual(queue.started_job_registry, StartedJobRegistry(queue=queue))
        self.assertEqual(queue.failed_job_registry, FailedJobRegistry(queue=queue))
        self.assertEqual(queue.deferred_job_registry, DeferredJobRegistry(queue=queue))
        self.assertEqual(queue.finished_job_registry, FinishedJobRegistry(queue=queue))
        self.assertEqual(queue.canceled_job_registry, CanceledJobRegistry(queue=queue))

    def test_getting_registries_with_serializer(self):
        """Getting job registries from queue object (with custom serializer)"""
        queue = Queue('example', serializer=JSONSerializer)
        self.assertEqual(queue.scheduled_job_registry, ScheduledJobRegistry(queue=queue))
        self.assertEqual(queue.started_job_registry, StartedJobRegistry(queue=queue))
        self.assertEqual(queue.failed_job_registry, FailedJobRegistry(queue=queue))
        self.assertEqual(queue.deferred_job_registry, DeferredJobRegistry(queue=queue))
        self.assertEqual(queue.finished_job_registry, FinishedJobRegistry(queue=queue))
        self.assertEqual(queue.canceled_job_registry, CanceledJobRegistry(queue=queue))

        # Make sure we don't use the default serializer when the queue has a custom one
        self.assertEqual(queue.scheduled_job_registry.serializer, JSONSerializer)
        self.assertEqual(queue.started_job_registry.serializer, JSONSerializer)
        self.assertEqual(queue.failed_job_registry.serializer, JSONSerializer)
        self.assertEqual(queue.deferred_job_registry.serializer, JSONSerializer)
        self.assertEqual(queue.finished_job_registry.serializer, JSONSerializer)
        self.assertEqual(queue.canceled_job_registry.serializer, JSONSerializer)

    def test_enqueue_with_retry(self):
        """Enqueueing with retry_strategy works"""
        queue = Queue('example', connection=self.testconn)
        job = \
            queue.enqueue(say_hello, retry=Retry(max=3, interval=5))

        job = Job.fetch(job.id, connection=self.testconn)
        self.assertEqual(job.retries_left, 3)
        self.assertEqual(job.retry_intervals, [5])


class TestJobScheduling(RQTestCase):
    def test_enqueue_at(self):
        """enqueue_at() creates a job in ScheduledJobRegistry"""
        queue = Queue(connection=self.testconn)
        scheduled_time = datetime.now(timezone.utc) + timedelta(seconds=10)
        job = queue.enqueue_at(scheduled_time, say_hello)
        registry = ScheduledJobRegistry(queue=queue)
        self.assertIn(job, registry)
        # Note: assertTrue() here only checks truthiness; the second argument acts as a message
        self.assertTrue(registry.get_expiration_time(job), scheduled_time)


rq-1.16.2/tests/test_registry.py

from datetime import datetime, timedelta
from unittest import mock
from unittest.mock import ANY

from rq.defaults import DEFAULT_FAILURE_TTL
from rq.exceptions import AbandonedJobError, InvalidJobOperation
from rq.job import Job, JobStatus, requeue_job
from rq.queue import Queue
from rq.registry import (
    CanceledJobRegistry,
    DeferredJobRegistry,
    FailedJobRegistry,
    FinishedJobRegistry,
    StartedJobRegistry,
    clean_registries,
)
from rq.serializers import JSONSerializer
from rq.utils import as_text, current_timestamp
from rq.worker import Worker
from tests import RQTestCase
from tests.fixtures import div_by_zero, say_hello


class CustomJob(Job):
    """A custom job class just to test it"""


class TestRegistry(RQTestCase):
    def setUp(self):
        super().setUp()
        self.registry = StartedJobRegistry(connection=self.testconn)

    def test_init(self):
        """Registry can be instantiated with queue or name/Redis connection"""
        queue = Queue('foo', connection=self.testconn)
        registry = StartedJobRegistry(queue=queue)
        self.assertEqual(registry.name, queue.name)
        self.assertEqual(registry.connection, queue.connection)
        self.assertEqual(registry.serializer, queue.serializer)

        registry = StartedJobRegistry('bar', self.testconn, serializer=JSONSerializer)
        self.assertEqual(registry.name, 'bar')
        self.assertEqual(registry.connection, self.testconn)
        self.assertEqual(registry.serializer, JSONSerializer)

    def test_key(self):
        self.assertEqual(self.registry.key, 'rq:wip:default')

    def test_custom_job_class(self):
        registry = StartedJobRegistry(job_class=CustomJob)
        self.assertFalse(registry.job_class == self.registry.job_class)

    def test_contains(self):
        registry = StartedJobRegistry(connection=self.testconn)
        queue = Queue(connection=self.testconn)
        job = queue.enqueue(say_hello)

        self.assertFalse(job in registry)
        self.assertFalse(job.id in registry)

        registry.add(job, 5)

        self.assertTrue(job in registry)
        self.assertTrue(job.id in registry)

    def test_get_expiration_time(self):
        """registry.get_expiration_time() returns correct datetime objects"""
        registry = StartedJobRegistry(connection=self.testconn)
        queue = Queue(connection=self.testconn)
        job = queue.enqueue(say_hello)

        registry.add(job, 5)
        time = registry.get_expiration_time(job)
        expected_time = (datetime.utcnow() + timedelta(seconds=5)).replace(microsecond=0)
        self.assertGreaterEqual(time, expected_time - timedelta(seconds=2))
        self.assertLessEqual(time, expected_time + timedelta(seconds=2))

    def test_add_and_remove(self):
        """Adding and removing job to StartedJobRegistry."""
        timestamp = current_timestamp()

        queue = Queue(connection=self.testconn)
        job = queue.enqueue(say_hello)

        # Test that job is added with the right score
        self.registry.add(job, 1000)
        self.assertLess(self.testconn.zscore(self.registry.key, job.id), timestamp + 1002)

        # Ensure that a timeout of -1 results in a score of inf
        self.registry.add(job, -1)
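        # Registries are Redis sorted sets: the score is the Unix timestamp at which the
        # entry expires, and a timeout of -1 maps to +inf, i.e. "never expires".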
        self.assertEqual(self.testconn.zscore(self.registry.key, job.id), float('inf'))

        # Ensure that job is removed from sorted set, but job key is not deleted
        self.registry.remove(job)
        self.assertIsNone(self.testconn.zscore(self.registry.key, job.id))
        self.assertTrue(self.testconn.exists(job.key))

        self.registry.add(job, -1)

        # registry.remove() also accepts job.id
        self.registry.remove(job.id)
        self.assertIsNone(self.testconn.zscore(self.registry.key, job.id))

        self.registry.add(job, -1)

        # delete_job = True deletes job key
        self.registry.remove(job, delete_job=True)
        self.assertIsNone(self.testconn.zscore(self.registry.key, job.id))
        self.assertFalse(self.testconn.exists(job.key))

        job = queue.enqueue(say_hello)

        self.registry.add(job, -1)

        # delete_job = True also works with job.id
        self.registry.remove(job.id, delete_job=True)
        self.assertIsNone(self.testconn.zscore(self.registry.key, job.id))
        self.assertFalse(self.testconn.exists(job.key))

    def test_add_and_remove_with_serializer(self):
        """Adding and removing job to StartedJobRegistry (with serializer)."""
        # delete_job = True also works with job.id and custom serializer
        queue = Queue(connection=self.testconn, serializer=JSONSerializer)
        registry = StartedJobRegistry(connection=self.testconn, serializer=JSONSerializer)
        job = queue.enqueue(say_hello)
        registry.add(job, -1)
        registry.remove(job.id, delete_job=True)
        self.assertIsNone(self.testconn.zscore(registry.key, job.id))
        self.assertFalse(self.testconn.exists(job.key))

    def test_get_job_ids(self):
        """Getting job ids from StartedJobRegistry."""
        timestamp = current_timestamp()
        self.testconn.zadd(self.registry.key, {'foo': timestamp + 10})
        self.testconn.zadd(self.registry.key, {'bar': timestamp + 20})
        self.assertEqual(self.registry.get_job_ids(), ['foo', 'bar'])

    def test_get_expired_job_ids(self):
        """Getting expired job ids from StartedJobRegistry."""
        timestamp = current_timestamp()

        self.testconn.zadd(self.registry.key, {'foo': 1})
        self.testconn.zadd(self.registry.key, {'bar': timestamp + 10})
        self.testconn.zadd(self.registry.key, {'baz': timestamp + 30})

        self.assertEqual(self.registry.get_expired_job_ids(), ['foo'])
        self.assertEqual(self.registry.get_expired_job_ids(timestamp + 20), ['foo', 'bar'])

        # CanceledJobRegistry does not implement get_expired_job_ids()
        registry = CanceledJobRegistry(connection=self.testconn)
        self.assertRaises(NotImplementedError, registry.get_expired_job_ids)

    def test_cleanup_moves_jobs_to_failed_job_registry(self):
        """Moving expired jobs to FailedJobRegistry."""
        queue = Queue(connection=self.testconn)
        failed_job_registry = FailedJobRegistry(connection=self.testconn)
        job = queue.enqueue(say_hello)

        self.testconn.zadd(self.registry.key, {job.id: 2})

        # Job has not been moved to FailedJobRegistry
        self.registry.cleanup(1)
        self.assertNotIn(job, failed_job_registry)
        self.assertIn(job, self.registry)

        with mock.patch.object(Job, 'execute_failure_callback') as mocked:
            self.registry.cleanup()
            mocked.assert_called_once_with(queue.death_penalty_class, AbandonedJobError, ANY, ANY)
        self.assertIn(job.id, failed_job_registry)
        self.assertNotIn(job, self.registry)

        job.refresh()
        self.assertEqual(job.get_status(), JobStatus.FAILED)
        self.assertTrue(job.exc_info)  # explanation is written to exc_info

    def test_job_execution(self):
        """Job is removed from StartedJobRegistry after execution."""
        registry = StartedJobRegistry(connection=self.testconn)
        queue = Queue(connection=self.testconn)
        worker = Worker([queue])

        job = queue.enqueue(say_hello)
        self.assertTrue(job.is_queued)

        worker.prepare_job_execution(job)
        self.assertIn(job.id, registry.get_job_ids())
        self.assertTrue(job.is_started)

        worker.perform_job(job, queue)
        self.assertNotIn(job.id, registry.get_job_ids())
        self.assertTrue(job.is_finished)

        # Job that fails
        job = queue.enqueue(div_by_zero)

        worker.prepare_job_execution(job)
        self.assertIn(job.id, registry.get_job_ids())

        worker.perform_job(job, queue)
        self.assertNotIn(job.id, registry.get_job_ids())

    def test_job_deletion(self):
        """Ensure job is removed from StartedJobRegistry when deleted."""
        registry = StartedJobRegistry(connection=self.testconn)
        queue = Queue(connection=self.testconn)
        worker = Worker([queue])

        job = queue.enqueue(say_hello)
        self.assertTrue(job.is_queued)

        worker.prepare_job_execution(job)
        self.assertIn(job.id, registry.get_job_ids())

        job.delete()
        self.assertNotIn(job.id, registry.get_job_ids())

    def test_get_job_count(self):
        """StartedJobRegistry returns the right job count."""
        timestamp = current_timestamp() + 10
        self.testconn.zadd(self.registry.key, {'foo': timestamp})
        self.testconn.zadd(self.registry.key, {'bar': timestamp})
        self.assertEqual(self.registry.count, 2)
        self.assertEqual(len(self.registry), 2)  # Make sure len() agrees with .count

    def test_clean_registries(self):
        """clean_registries() cleans Started and Finished job registries."""
        queue = Queue(connection=self.testconn)

        finished_job_registry = FinishedJobRegistry(connection=self.testconn)
        self.testconn.zadd(finished_job_registry.key, {'foo': 1})

        started_job_registry = StartedJobRegistry(connection=self.testconn)
        self.testconn.zadd(started_job_registry.key, {'foo': 1})

        failed_job_registry = FailedJobRegistry(connection=self.testconn)
        self.testconn.zadd(failed_job_registry.key, {'foo': 1})

        clean_registries(queue)
        self.assertEqual(self.testconn.zcard(finished_job_registry.key), 0)
        self.assertEqual(self.testconn.zcard(started_job_registry.key), 0)
        self.assertEqual(self.testconn.zcard(failed_job_registry.key), 0)

    def test_clean_registries_with_serializer(self):
        """clean_registries() cleans Started and Finished job registries (with serializer)."""
        queue = Queue(connection=self.testconn, serializer=JSONSerializer)

        finished_job_registry = FinishedJobRegistry(connection=self.testconn, serializer=JSONSerializer)
        self.testconn.zadd(finished_job_registry.key, {'foo': 1})

        started_job_registry = StartedJobRegistry(connection=self.testconn, serializer=JSONSerializer)
        self.testconn.zadd(started_job_registry.key, {'foo': 1})

        failed_job_registry = FailedJobRegistry(connection=self.testconn, serializer=JSONSerializer)
        self.testconn.zadd(failed_job_registry.key, {'foo': 1})

        clean_registries(queue)
        self.assertEqual(self.testconn.zcard(finished_job_registry.key), 0)
        self.assertEqual(self.testconn.zcard(started_job_registry.key), 0)
        self.assertEqual(self.testconn.zcard(failed_job_registry.key), 0)

    def test_get_queue(self):
        """registry.get_queue() returns the right Queue object."""
        registry = StartedJobRegistry(connection=self.testconn)
        self.assertEqual(registry.get_queue(), Queue(connection=self.testconn))

        registry = StartedJobRegistry('foo', connection=self.testconn, serializer=JSONSerializer)
        self.assertEqual(registry.get_queue(), Queue('foo', connection=self.testconn, serializer=JSONSerializer))


class TestFinishedJobRegistry(RQTestCase):
    def setUp(self):
        super().setUp()
        self.registry = FinishedJobRegistry(connection=self.testconn)

    def test_key(self):
        self.assertEqual(self.registry.key, 'rq:finished:default')

    def test_cleanup(self):
        """Finished job registry removes expired jobs."""
        timestamp = current_timestamp()
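        # cleanup() trims entries whose score (expiry timestamp) is older than the given
        # cutoff, defaulting to the current time.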
        self.testconn.zadd(self.registry.key, {'foo': 1})
        self.testconn.zadd(self.registry.key, {'bar': timestamp + 10})
        self.testconn.zadd(self.registry.key, {'baz': timestamp + 30})

        self.registry.cleanup()
        self.assertEqual(self.registry.get_job_ids(), ['bar', 'baz'])

        self.registry.cleanup(timestamp + 20)
        self.assertEqual(self.registry.get_job_ids(), ['baz'])

        # CanceledJobRegistry now implements noop cleanup, should not raise exception
        registry = CanceledJobRegistry(connection=self.testconn)
        registry.cleanup()

    def test_jobs_are_put_in_registry(self):
        """Completed jobs are added to FinishedJobRegistry."""
        self.assertEqual(self.registry.get_job_ids(), [])
        queue = Queue(connection=self.testconn)
        worker = Worker([queue])

        # Completed jobs are put in FinishedJobRegistry
        job = queue.enqueue(say_hello)
        worker.perform_job(job, queue)
        self.assertEqual(self.registry.get_job_ids(), [job.id])

        # When job is deleted, it should be removed from FinishedJobRegistry
        self.assertEqual(job.get_status(), JobStatus.FINISHED)
        job.delete()
        self.assertEqual(self.registry.get_job_ids(), [])

        # Failed jobs are not put in FinishedJobRegistry
        failed_job = queue.enqueue(div_by_zero)
        worker.perform_job(failed_job, queue)
        self.assertEqual(self.registry.get_job_ids(), [])


class TestDeferredRegistry(RQTestCase):
    def setUp(self):
        super().setUp()
        self.registry = DeferredJobRegistry(connection=self.testconn)

    def test_key(self):
        self.assertEqual(self.registry.key, 'rq:deferred:default')

    def test_add(self):
        """Adding a job to DeferredJobsRegistry."""
        job = Job()
        self.registry.add(job)
        job_ids = [as_text(job_id) for job_id in self.testconn.zrange(self.registry.key, 0, -1)]
        self.assertEqual(job_ids, [job.id])

    def test_register_dependency(self):
        """Ensure job creation and deletion works with DeferredJobRegistry."""
        queue = Queue(connection=self.testconn)
        job = queue.enqueue(say_hello)
        job2 = queue.enqueue(say_hello, depends_on=job)

        registry = DeferredJobRegistry(connection=self.testconn)
        self.assertEqual(registry.get_job_ids(), [job2.id])

        # When deleted, job removes itself from DeferredJobRegistry
        job2.delete()
        self.assertEqual(registry.get_job_ids(), [])


class TestFailedJobRegistry(RQTestCase):
    def test_default_failure_ttl(self):
        """Job TTL defaults to DEFAULT_FAILURE_TTL"""
        queue = Queue(connection=self.testconn)
        job = queue.enqueue(say_hello)
        registry = FailedJobRegistry(connection=self.testconn)
        key = registry.key

        timestamp = current_timestamp()
        registry.add(job)
        score = self.testconn.zscore(key, job.id)
        self.assertLess(score, timestamp + DEFAULT_FAILURE_TTL + 2)
        self.assertGreater(score, timestamp + DEFAULT_FAILURE_TTL - 2)

        # Job key will also expire
        job_ttl = self.testconn.ttl(job.key)
        self.assertLess(job_ttl, DEFAULT_FAILURE_TTL + 2)
        self.assertGreater(job_ttl, DEFAULT_FAILURE_TTL - 2)

        timestamp = current_timestamp()
        ttl = 5
        registry.add(job, ttl=ttl)
        score = self.testconn.zscore(key, job.id)
        self.assertLess(score, timestamp + ttl + 2)
        self.assertGreater(score, timestamp + ttl - 2)

        job_ttl = self.testconn.ttl(job.key)
        self.assertLess(job_ttl, ttl + 2)
        self.assertGreater(job_ttl, ttl - 2)

    def test_requeue(self):
        """FailedJobRegistry.requeue works properly"""
        queue = Queue(connection=self.testconn)
        job = queue.enqueue(div_by_zero, failure_ttl=5)

        worker = Worker([queue])
        worker.work(burst=True)

        registry = FailedJobRegistry(connection=worker.connection)
        self.assertTrue(job in registry)

        registry.requeue(job.id)
        self.assertFalse(job in registry)
        self.assertIn(job.id, queue.get_job_ids())

        job.refresh()
        self.assertEqual(job.get_status(),
                         JobStatus.QUEUED)
        self.assertEqual(job.started_at, None)
        self.assertEqual(job.ended_at, None)

        worker.work(burst=True)
        self.assertTrue(job in registry)

        # Should also work with job instance
        registry.requeue(job)
        self.assertFalse(job in registry)
        self.assertIn(job.id, queue.get_job_ids())

        job.refresh()
        self.assertEqual(job.get_status(), JobStatus.QUEUED)

        worker.work(burst=True)
        self.assertTrue(job in registry)

        # requeue_job should work the same way
        requeue_job(job.id, connection=self.testconn)
        self.assertFalse(job in registry)
        self.assertIn(job.id, queue.get_job_ids())

        job.refresh()
        self.assertEqual(job.get_status(), JobStatus.QUEUED)

        worker.work(burst=True)
        self.assertTrue(job in registry)

        # And so does job.requeue()
        job.requeue()
        self.assertFalse(job in registry)
        self.assertIn(job.id, queue.get_job_ids())

        job.refresh()
        self.assertEqual(job.get_status(), JobStatus.QUEUED)

    def test_requeue_with_serializer(self):
        """FailedJobRegistry.requeue works properly (with serializer)"""
        queue = Queue(connection=self.testconn, serializer=JSONSerializer)
        job = queue.enqueue(div_by_zero, failure_ttl=5)

        worker = Worker([queue], serializer=JSONSerializer)
        worker.work(burst=True)

        registry = FailedJobRegistry(connection=worker.connection, serializer=JSONSerializer)
        self.assertTrue(job in registry)

        registry.requeue(job.id)
        self.assertFalse(job in registry)
        self.assertIn(job.id, queue.get_job_ids())

        job.refresh()
        self.assertEqual(job.get_status(), JobStatus.QUEUED)
        self.assertEqual(job.started_at, None)
        self.assertEqual(job.ended_at, None)

        worker.work(burst=True)
        self.assertTrue(job in registry)

        # Should also work with job instance
        registry.requeue(job)
        self.assertFalse(job in registry)
        self.assertIn(job.id, queue.get_job_ids())

        job.refresh()
        self.assertEqual(job.get_status(), JobStatus.QUEUED)

        worker.work(burst=True)
        self.assertTrue(job in registry)

        # requeue_job should work the same way
        requeue_job(job.id, connection=self.testconn, serializer=JSONSerializer)
        self.assertFalse(job in registry)
        self.assertIn(job.id, queue.get_job_ids())

        job.refresh()
        self.assertEqual(job.get_status(), JobStatus.QUEUED)

        worker.work(burst=True)
        self.assertTrue(job in registry)

        # And so does job.requeue()
        job.requeue()
        self.assertFalse(job in registry)
        self.assertIn(job.id, queue.get_job_ids())

        job.refresh()
        self.assertEqual(job.get_status(), JobStatus.QUEUED)

    def test_invalid_job(self):
        """Requeuing a job that's not in FailedJobRegistry raises an error."""
        queue = Queue(connection=self.testconn)
        job = queue.enqueue(say_hello)

        registry = FailedJobRegistry(connection=self.testconn)
        with self.assertRaises(InvalidJobOperation):
            registry.requeue(job)

    def test_worker_handle_job_failure(self):
        """Failed jobs are added to FailedJobRegistry"""
        q = Queue(connection=self.testconn)

        w = Worker([q])
        registry = FailedJobRegistry(connection=w.connection)

        timestamp = current_timestamp()

        job = q.enqueue(div_by_zero, failure_ttl=5)
        w.handle_job_failure(job, q)
        # job is added to FailedJobRegistry with default failure ttl
        self.assertIn(job.id, registry.get_job_ids())
        self.assertLess(self.testconn.zscore(registry.key, job.id), timestamp + DEFAULT_FAILURE_TTL + 5)

        # job is added to FailedJobRegistry with specified ttl
        job = q.enqueue(div_by_zero, failure_ttl=5)
        w.handle_job_failure(job, q)
        self.assertLess(self.testconn.zscore(registry.key, job.id), timestamp + 7)


rq-1.16.2/tests/test_results.py

import tempfile
import time
import unittest
from datetime import timedelta
from unittest.mock import (
    PropertyMock,
    patch,
)

from redis import Redis

from rq.defaults import UNSERIALIZABLE_RETURN_VALUE_PAYLOAD
from rq.job import Job
from rq.queue import Queue
from rq.registry import StartedJobRegistry
from rq.results import Result, get_key
from rq.utils import get_version, utcnow
from rq.worker import Worker
from tests import RQTestCase

from .fixtures import div_by_zero, say_hello


@unittest.skipIf(get_version(Redis()) < (5, 0, 0), 'Skip if Redis server < 5.0')
class TestResults(RQTestCase):
    def test_save_and_get_result(self):
        """Ensure data is saved properly"""
        queue = Queue(connection=self.connection)
        job = queue.enqueue(say_hello)

        result = Result.fetch_latest(job)
        self.assertIsNone(result)

        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=1)
        result = Result.fetch_latest(job)
        self.assertEqual(result.return_value, 1)
        self.assertEqual(job.latest_result().return_value, 1)

        # Check that ttl is properly set
        key = get_key(job.id)
        ttl = self.connection.pttl(key)
        self.assertTrue(5000 < ttl <= 10000)

        # Check job with None return value
        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=None)
        result = Result.fetch_latest(job)
        self.assertIsNone(result.return_value)
        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=2)
        result = Result.fetch_latest(job)
        self.assertEqual(result.return_value, 2)

    def test_create_failure(self):
        """Ensure failure data is saved properly"""
        queue = Queue(connection=self.connection)
        job = queue.enqueue(say_hello)

        Result.create_failure(job, ttl=10, exc_string='exception')
        result = Result.fetch_latest(job)
        self.assertEqual(result.exc_string, 'exception')

        # Check that ttl is properly set
        key = get_key(job.id)
        ttl = self.connection.pttl(key)
        self.assertTrue(5000 < ttl <= 10000)

    def test_getting_results(self):
        """Check getting all execution results"""
        queue = Queue(connection=self.connection)
        job = queue.enqueue(say_hello)

        # latest_result() returns None when there's no result
        self.assertIsNone(job.latest_result())

        result_1 = Result.create_failure(job, ttl=10, exc_string='exception')
        result_2 = Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=1)
        result_3 = Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=1)

        # Result.fetch_latest() returns the latest result
        result = Result.fetch_latest(job)
        self.assertEqual(result, result_3)
        self.assertEqual(job.latest_result(), result_3)

        # Result.all() and job.results() return all results, newest first
        results = Result.all(job)
        self.assertEqual(results, [result_3, result_2, result_1])
        self.assertEqual(job.results(), [result_3, result_2, result_1])

    def test_count(self):
        """Result.count(job) returns number of results"""
        queue = Queue(connection=self.connection)
        job = queue.enqueue(say_hello)
        self.assertEqual(Result.count(job), 0)
        Result.create_failure(job, ttl=10, exc_string='exception')
        self.assertEqual(Result.count(job), 1)
        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=1)
        self.assertEqual(Result.count(job), 2)

    def test_delete_all(self):
        """Result.delete_all(job) deletes all results from Redis"""
        queue = Queue(connection=self.connection)
        job = queue.enqueue(say_hello)
        Result.create_failure(job, ttl=10, exc_string='exception')
        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=1)
        Result.delete_all(job)
        self.assertEqual(Result.count(job), 0)

    def test_job_successful_result_fallback(self):
        """Changes to job.result handling should be backwards compatible."""
        queue = Queue(connection=self.connection)
        job = queue.enqueue(say_hello)
        worker = Worker([queue])
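        # Results are appended to a per-job Redis stream (hence the class-level skip on
        # Redis < 5.0); the patched `supports_redis_streams` blocks below simulate the
        # legacy path where the result is stored directly on the job hash.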
        worker.register_birth()

        self.assertEqual(worker.failed_job_count, 0)
        self.assertEqual(worker.successful_job_count, 0)
        self.assertEqual(worker.total_working_time, 0)

        # These should only run on workers that support Redis streams
        registry = StartedJobRegistry(connection=self.connection)
        job.started_at = utcnow()
        job.ended_at = job.started_at + timedelta(seconds=0.75)
        job._result = 'Success'
        worker.handle_job_success(job, queue, registry)

        payload = self.connection.hgetall(job.key)
        self.assertFalse(b'result' in payload.keys())
        self.assertEqual(job.result, 'Success')

        with patch('rq.worker.Worker.supports_redis_streams', new_callable=PropertyMock) as mock:
            with patch('rq.job.Job.supports_redis_streams', new_callable=PropertyMock) as job_mock:
                job_mock.return_value = False
                mock.return_value = False
                worker = Worker([queue])
                worker.register_birth()
                job = queue.enqueue(say_hello)
                job._result = 'Success'
                job.started_at = utcnow()
                job.ended_at = job.started_at + timedelta(seconds=0.75)

                # If `save_result_to_job` = True, result will be saved to job
                # hash, simulating older versions of RQ
                worker.handle_job_success(job, queue, registry)
                payload = self.connection.hgetall(job.key)
                self.assertTrue(b'result' in payload.keys())

                # Delete all new result objects so we only have the result stored in the job hash;
                # this should simulate a job that was executed in an earlier RQ version
                self.assertEqual(job.result, 'Success')

    def test_job_failed_result_fallback(self):
        """Changes to job.result failure handling should be backwards compatible."""
        queue = Queue(connection=self.connection)
        job = queue.enqueue(say_hello)
        worker = Worker([queue])
        worker.register_birth()

        self.assertEqual(worker.failed_job_count, 0)
        self.assertEqual(worker.successful_job_count, 0)
        self.assertEqual(worker.total_working_time, 0)

        registry = StartedJobRegistry(connection=self.connection)
        job.started_at = utcnow()
        job.ended_at = job.started_at + timedelta(seconds=0.75)
        worker.handle_job_failure(job, exc_string='Error', queue=queue, started_job_registry=registry)

        job = Job.fetch(job.id, connection=self.connection)
        payload = self.connection.hgetall(job.key)
        self.assertFalse(b'exc_info' in payload.keys())
        self.assertEqual(job.exc_info, 'Error')

        with patch('rq.worker.Worker.supports_redis_streams', new_callable=PropertyMock) as mock:
            with patch('rq.job.Job.supports_redis_streams', new_callable=PropertyMock) as job_mock:
                job_mock.return_value = False
                mock.return_value = False
                worker = Worker([queue])
                worker.register_birth()
                job = queue.enqueue(say_hello)
                job.started_at = utcnow()
                job.ended_at = job.started_at + timedelta(seconds=0.75)

                # If `save_result_to_job` = True, result will be saved to job
                # hash, simulating older versions of RQ
                worker.handle_job_failure(job, exc_string='Error', queue=queue, started_job_registry=registry)
                payload = self.connection.hgetall(job.key)
                self.assertTrue(b'exc_info' in payload.keys())

                # Delete all new result objects so we only have the result stored in the job hash;
                # this should simulate a job that was executed in an earlier RQ version
                Result.delete_all(job)
                job = Job.fetch(job.id, connection=self.connection)
                self.assertEqual(job.exc_info, 'Error')

    def test_job_return_value(self):
        """Test job.return_value"""
        queue = Queue(connection=self.connection)
        job = queue.enqueue(say_hello)

        # Returns None when there's no result
        self.assertIsNone(job.return_value())

        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=1)
        self.assertEqual(job.return_value(), 1)

        # Returns None if latest result is a failure
        Result.create_failure(job,
                              ttl=10, exc_string='exception')
        self.assertIsNone(job.return_value(refresh=True))

    def test_job_return_value_sync(self):
        """Test job.return_value when queue.is_async=False"""
        queue = Queue(connection=self.connection, is_async=False)
        job = queue.enqueue(say_hello)

        # Returns the result immediately since the job ran synchronously
        self.assertIsNotNone(job.return_value())

        job = queue.enqueue(div_by_zero)
        self.assertEqual(job.latest_result().type, Result.Type.FAILED)

    def test_job_return_value_result_ttl_infinity(self):
        """Test job.return_value when queue.result_ttl=-1"""
        queue = Queue(connection=self.connection, result_ttl=-1)
        job = queue.enqueue(say_hello)

        # Returns None when there's no result
        self.assertIsNone(job.return_value())

        Result.create(job, Result.Type.SUCCESSFUL, ttl=-1, return_value=1)
        self.assertEqual(job.return_value(), 1)

    def test_job_return_value_result_ttl_zero(self):
        """Test job.return_value when queue.result_ttl=0"""
        queue = Queue(connection=self.connection, result_ttl=0)
        job = queue.enqueue(say_hello)

        # Returns None when there's no result
        self.assertIsNone(job.return_value())

        Result.create(job, Result.Type.SUCCESSFUL, ttl=0, return_value=1)
        self.assertIsNone(job.return_value())

    def test_job_return_value_unserializable(self):
        """Test job.return_value when it is not serializable"""
        queue = Queue(connection=self.connection, result_ttl=0)
        job = queue.enqueue(say_hello)

        # Returns None when there's no result
        self.assertIsNone(job.return_value())

        # tempfile.NamedTemporaryFile() is not picklable
        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=tempfile.NamedTemporaryFile())
        self.assertEqual(job.return_value(), UNSERIALIZABLE_RETURN_VALUE_PAYLOAD)
        self.assertEqual(Result.count(job), 1)

        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=1)
        self.assertEqual(Result.count(job), 2)

    def test_blocking_results(self):
        queue = Queue(connection=self.connection)
        job = queue.enqueue(say_hello)

        # Should block if there's no result.
        timeout = 1
        self.assertIsNone(Result.fetch_latest(job))
        started_at = time.time()
        self.assertIsNone(Result.fetch_latest(job, timeout=timeout))
        blocked_for = time.time() - started_at
        self.assertGreaterEqual(blocked_for, timeout)

        # Shouldn't block if there's already a result present.
        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=1)
        timeout = 1
        result_sync = Result.fetch_latest(job)
        started_at = time.time()
        result_blocking = Result.fetch_latest(job, timeout=timeout)
        blocked_for = time.time() - started_at
        self.assertEqual(result_sync.return_value, result_blocking.return_value)
        self.assertGreater(timeout, blocked_for)

        # Should return the latest result if there are multiple.
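        # (The blocking fetch presumably sits on the job's result stream, e.g. an XREAD
        # with a block timeout, so it wakes up as soon as a new entry is appended.)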
        Result.create(job, Result.Type.SUCCESSFUL, ttl=10, return_value=2)
        result_blocking = Result.fetch_latest(job, timeout=1)
        self.assertEqual(result_blocking.return_value, 2)


rq-1.16.2/tests/test_retry.py

from datetime import datetime, timedelta, timezone

from rq.job import Job, JobStatus, Retry
from rq.queue import Queue
from rq.registry import FailedJobRegistry, StartedJobRegistry
from rq.worker import Worker
from tests import RQTestCase, fixtures
from tests.fixtures import div_by_zero, say_hello


class TestRetry(RQTestCase):
    def test_persistence_of_retry_data(self):
        """Retry related data is stored and restored properly"""
        job = Job.create(func=fixtures.some_calculation)
        job.retries_left = 3
        job.retry_intervals = [1, 2, 3]
        job.save()

        job.retries_left = None
        job.retry_intervals = None
        job.refresh()
        self.assertEqual(job.retries_left, 3)
        self.assertEqual(job.retry_intervals, [1, 2, 3])

    def test_retry_class(self):
        """Retry parses `max` and `interval` correctly"""
        retry = Retry(max=1)
        self.assertEqual(retry.max, 1)
        self.assertEqual(retry.intervals, [0])
        self.assertRaises(ValueError, Retry, max=0)

        retry = Retry(max=2, interval=5)
        self.assertEqual(retry.max, 2)
        self.assertEqual(retry.intervals, [5])

        retry = Retry(max=3, interval=[5, 10])
        self.assertEqual(retry.max, 3)
        self.assertEqual(retry.intervals, [5, 10])

        # interval can't be negative
        self.assertRaises(ValueError, Retry, max=1, interval=-5)
        self.assertRaises(ValueError, Retry, max=1, interval=[1, -5])

    def test_get_retry_interval(self):
        """get_retry_interval() returns the right retry interval"""
        job = Job.create(func=fixtures.say_hello)

        # Handle case where self.retry_intervals is None
        job.retries_left = 2
        self.assertEqual(job.get_retry_interval(), 0)

        # Handle the most common case
        job.retry_intervals = [1, 2]
        self.assertEqual(job.get_retry_interval(), 1)
        job.retries_left = 1
        self.assertEqual(job.get_retry_interval(), 2)

        # Handle cases where number of retries > length of interval
        job.retries_left = 4
        job.retry_intervals = [1, 2, 3]
        self.assertEqual(job.get_retry_interval(), 1)
        job.retries_left = 3
        self.assertEqual(job.get_retry_interval(), 1)
        job.retries_left = 2
        self.assertEqual(job.get_retry_interval(), 2)
        job.retries_left = 1
        self.assertEqual(job.get_retry_interval(), 3)

    def test_job_retry(self):
        """Test job.retry() works properly"""
        queue = Queue(connection=self.testconn)
        retry = Retry(max=3, interval=5)
        job = queue.enqueue(div_by_zero, retry=retry)

        with self.testconn.pipeline() as pipeline:
            job.retry(queue, pipeline)
            pipeline.execute()

        self.assertEqual(job.retries_left, 2)
        # status should be scheduled since it's retried with 5 seconds interval
        self.assertEqual(job.get_status(), JobStatus.SCHEDULED)

        retry = Retry(max=3)
        job = queue.enqueue(div_by_zero, retry=retry)

        with self.testconn.pipeline() as pipeline:
            job.retry(queue, pipeline)
            pipeline.execute()

        self.assertEqual(job.retries_left, 2)
        # status should be queued
        self.assertEqual(job.get_status(), JobStatus.QUEUED)

    def test_retry_interval(self):
        """Retries with intervals are scheduled"""
        connection = self.testconn
        queue = Queue(connection=connection)
        retry = Retry(max=1, interval=5)
        job = queue.enqueue(div_by_zero, retry=retry)

        worker = Worker([queue])
        registry = queue.scheduled_job_registry
        # If job is configured to retry with interval, it will be scheduled,
        # not directly put back in the queue
        queue.empty()
        worker.handle_job_failure(job, queue)
        job.refresh()
        self.assertEqual(job.get_status(), JobStatus.SCHEDULED)
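        # With a non-zero interval the failed job is parked in ScheduledJobRegistry and
        # re-enqueued by the scheduler once the interval elapses; retries_left is
        # decremented on every failure, as the assertions below verify.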
        self.assertEqual(job.retries_left, 0)
        self.assertEqual(len(registry), 1)
        self.assertEqual(queue.job_ids, [])

        # Scheduled time is roughly 5 seconds from now
        scheduled_time = registry.get_scheduled_time(job)
        now = datetime.now(timezone.utc)
        self.assertTrue(now + timedelta(seconds=4) < scheduled_time < now + timedelta(seconds=10))

    def test_cleanup_handles_retries(self):
        """Expired jobs should also be retried"""
        queue = Queue(connection=self.testconn)
        registry = StartedJobRegistry(connection=self.testconn)
        failed_job_registry = FailedJobRegistry(connection=self.testconn)
        job = queue.enqueue(say_hello, retry=Retry(max=1))

        # Add job to StartedJobRegistry with past expiration time
        self.testconn.zadd(registry.key, {job.id: 2})

        registry.cleanup()
        self.assertEqual(len(queue), 2)
        self.assertEqual(job.get_status(), JobStatus.QUEUED)
        self.assertNotIn(job, failed_job_registry)

        self.testconn.zadd(registry.key, {job.id: 2})

        # Job goes to FailedJobRegistry because it's only retried once
        registry.cleanup()
        self.assertEqual(len(queue), 2)
        self.assertEqual(job.get_status(), JobStatus.FAILED)
        self.assertIn(job, failed_job_registry)


rq-1.16.2/tests/test_scheduler.py

import os
from datetime import datetime, timedelta, timezone
from multiprocessing import Process
from unittest import mock

import redis

from rq import Queue
from rq.defaults import DEFAULT_MAINTENANCE_TASK_INTERVAL
from rq.exceptions import NoSuchJobError
from rq.job import Job, Retry
from rq.registry import FinishedJobRegistry, ScheduledJobRegistry
from rq.scheduler import RQScheduler
from rq.serializers import JSONSerializer
from rq.utils import current_timestamp
from rq.worker import Worker
from tests import RQTestCase, find_empty_redis_database, ssl_test

from .fixtures import kill_worker, say_hello


class CustomRedisConnection(redis.Connection):
    """Custom redis connection with a custom arg, used in test_custom_connection_pool"""

    def __init__(self, *args, custom_arg=None, **kwargs):
        self.custom_arg = custom_arg
        super().__init__(*args, **kwargs)

    def get_custom_arg(self):
        return self.custom_arg


class TestScheduledJobRegistry(RQTestCase):
    def test_get_jobs_to_enqueue(self):
        """Getting job ids to enqueue from ScheduledJobRegistry."""
        queue = Queue(connection=self.testconn)
        registry = ScheduledJobRegistry(queue=queue)
        timestamp = current_timestamp()

        self.testconn.zadd(registry.key, {'foo': 1})
        self.testconn.zadd(registry.key, {'bar': timestamp + 10})
        self.testconn.zadd(registry.key, {'baz': timestamp + 30})

        self.assertEqual(registry.get_jobs_to_enqueue(), ['foo'])
        self.assertEqual(registry.get_jobs_to_enqueue(timestamp + 20), ['foo', 'bar'])

    def test_get_jobs_to_schedule_with_chunk_size(self):
        """Max amount of jobs returned by get_jobs_to_schedule() equals chunk_size"""
        queue = Queue(connection=self.testconn)
        registry = ScheduledJobRegistry(queue=queue)
        timestamp = current_timestamp()
        chunk_size = 5

        for index in range(0, chunk_size * 2):
            self.testconn.zadd(registry.key, {'foo_{}'.format(index): 1})

        self.assertEqual(len(registry.get_jobs_to_schedule(timestamp, chunk_size)), chunk_size)
        self.assertEqual(len(registry.get_jobs_to_schedule(timestamp, chunk_size * 2)), chunk_size * 2)

    def test_get_scheduled_time(self):
        """get_scheduled_time() returns job's scheduled datetime"""
        queue = Queue(connection=self.testconn)
        registry = ScheduledJobRegistry(queue=queue)

        job = Job.create('myfunc', connection=self.testconn)
        job.save()
        dt = datetime(2019, 1, 1, tzinfo=timezone.utc)
        registry.schedule(job,
                          datetime(2019, 1, 1, tzinfo=timezone.utc))
        self.assertEqual(registry.get_scheduled_time(job), dt)
        # get_scheduled_time() should also work with job ID
        self.assertEqual(registry.get_scheduled_time(job.id), dt)

        # registry.get_scheduled_time() raises NoSuchJobError if
        # job.id is not found
        self.assertRaises(NoSuchJobError, registry.get_scheduled_time, '123')

    def test_schedule(self):
        """Adding job with the correct score to ScheduledJobRegistry"""
        queue = Queue(connection=self.testconn)
        job = Job.create('myfunc', connection=self.testconn)
        job.save()

        registry = ScheduledJobRegistry(queue=queue)

        from datetime import timezone

        # If we pass in a datetime with no timezone, `schedule()`
        # assumes local timezone so depending on your local timezone,
        # the timestamp may be different
        #
        # we need to account for the difference between a timezone
        # with DST active and without DST active. The time.timezone
        # property isn't accurate when time.daylight is non-zero,
        # we'll test both.
        #
        # first, time.daylight == 0 (not in DST).
        # mock the situation for America/New_York not in DST (UTC - 5)
        # time.timezone = 18000
        # time.daylight = 0
        # time.altzone = 14400
        mock_day = mock.patch('time.daylight', 0)
        mock_tz = mock.patch('time.timezone', 18000)
        mock_atz = mock.patch('time.altzone', 14400)
        with mock_tz, mock_day, mock_atz:
            registry.schedule(job, datetime(2019, 1, 1))
            self.assertEqual(
                self.testconn.zscore(registry.key, job.id), 1546300800 + 18000
            )  # 2019-01-01 UTC in Unix timestamp

        # second, time.daylight != 0 (in DST)
        # mock the situation for America/New_York in DST (UTC - 4)
        # time.timezone = 18000
        # time.daylight = 1
        # time.altzone = 14400
        mock_day = mock.patch('time.daylight', 1)
        mock_tz = mock.patch('time.timezone', 18000)
        mock_atz = mock.patch('time.altzone', 14400)
        with mock_tz, mock_day, mock_atz:
            registry.schedule(job, datetime(2019, 1, 1))
            self.assertEqual(
                self.testconn.zscore(registry.key, job.id), 1546300800 + 14400
            )  # 2019-01-01 UTC in Unix timestamp

        # Score is always stored in UTC even if datetime is in a different tz
        tz = timezone(timedelta(hours=7))
        job = Job.create('myfunc', connection=self.testconn)
        job.save()
        registry.schedule(job, datetime(2019, 1, 1, 7, tzinfo=tz))
        self.assertEqual(self.testconn.zscore(registry.key, job.id), 1546300800)  # 2019-01-01 UTC in Unix timestamp


class TestScheduler(RQTestCase):
    def test_init(self):
        """Scheduler can be instantiated with queues or queue names"""
        foo_queue = Queue('foo', connection=self.testconn)
        scheduler = RQScheduler([foo_queue, 'bar'], connection=self.testconn)
        self.assertEqual(scheduler._queue_names, {'foo', 'bar'})
        self.assertEqual(scheduler.status, RQScheduler.Status.STOPPED)

    def test_should_reacquire_locks(self):
        """scheduler.should_reacquire_locks works properly"""
        queue = Queue(connection=self.testconn)
        scheduler = RQScheduler([queue], connection=self.testconn)
        self.assertTrue(scheduler.should_reacquire_locks)

        scheduler.acquire_locks()
        self.assertIsNotNone(scheduler.lock_acquisition_time)

        # scheduler.should_reacquire_locks always returns False if
        # scheduler.acquired_locks and scheduler._queue_names are the same
        self.assertFalse(scheduler.should_reacquire_locks)
        scheduler.lock_acquisition_time = datetime.now() - timedelta(seconds=DEFAULT_MAINTENANCE_TASK_INTERVAL + 6)
        self.assertFalse(scheduler.should_reacquire_locks)

        scheduler._queue_names = set(['default', 'foo'])
        self.assertTrue(scheduler.should_reacquire_locks)
        scheduler.acquire_locks()
        self.assertFalse(scheduler.should_reacquire_locks)

    def test_lock_acquisition(self):
        """Test lock
acquisition""" name_1 = 'lock-test-1' name_2 = 'lock-test-2' name_3 = 'lock-test-3' scheduler = RQScheduler([name_1], self.testconn) self.assertEqual(scheduler.acquire_locks(), {name_1}) self.assertEqual(scheduler._acquired_locks, {name_1}) self.assertEqual(scheduler.acquire_locks(), set([])) # Only name_2 is returned since name_1 is already locked scheduler = RQScheduler([name_1, name_2], self.testconn) self.assertEqual(scheduler.acquire_locks(), {name_2}) self.assertEqual(scheduler._acquired_locks, {name_2}) # When a new lock is successfully acquired, _acquired_locks is added scheduler._queue_names.add(name_3) self.assertEqual(scheduler.acquire_locks(), {name_3}) self.assertEqual(scheduler._acquired_locks, {name_2, name_3}) def test_lock_acquisition_with_auto_start(self): """Test lock acquisition with auto_start=True""" scheduler = RQScheduler(['auto-start'], self.testconn) with mock.patch.object(scheduler, 'start') as mocked: scheduler.acquire_locks(auto_start=True) self.assertEqual(mocked.call_count, 1) # If process has started, scheduler.start() won't be called running_process = mock.MagicMock() running_process.is_alive.return_value = True scheduler = RQScheduler(['auto-start2'], self.testconn) scheduler._process = running_process with mock.patch.object(scheduler, 'start') as mocked: scheduler.acquire_locks(auto_start=True) self.assertEqual(mocked.call_count, 0) self.assertEqual(running_process.is_alive.call_count, 1) # If the process has stopped for some reason, the scheduler should restart scheduler = RQScheduler(['auto-start3'], self.testconn) stopped_process = mock.MagicMock() stopped_process.is_alive.return_value = False scheduler._process = stopped_process with mock.patch.object(scheduler, 'start') as mocked: scheduler.acquire_locks(auto_start=True) self.assertEqual(mocked.call_count, 1) self.assertEqual(stopped_process.is_alive.call_count, 1) def test_lock_release(self): """Test that scheduler.release_locks() only releases acquired locks""" name_1 = 'lock-test-1' name_2 = 'lock-test-2' scheduler_1 = RQScheduler([name_1], self.testconn) self.assertEqual(scheduler_1.acquire_locks(), {name_1}) self.assertEqual(scheduler_1._acquired_locks, {name_1}) # Only name_2 is returned since name_1 is already locked scheduler_1_2 = RQScheduler([name_1, name_2], self.testconn) self.assertEqual(scheduler_1_2.acquire_locks(), {name_2}) self.assertEqual(scheduler_1_2._acquired_locks, {name_2}) self.assertTrue(self.testconn.exists(scheduler_1.get_locking_key(name_1))) self.assertTrue(self.testconn.exists(scheduler_1_2.get_locking_key(name_1))) self.assertTrue(self.testconn.exists(scheduler_1_2.get_locking_key(name_2))) scheduler_1_2.release_locks() self.assertEqual(scheduler_1_2._acquired_locks, set()) self.assertEqual(scheduler_1._acquired_locks, {name_1}) self.assertTrue(self.testconn.exists(scheduler_1.get_locking_key(name_1))) self.assertTrue(self.testconn.exists(scheduler_1_2.get_locking_key(name_1))) self.assertFalse(self.testconn.exists(scheduler_1_2.get_locking_key(name_2))) def test_queue_scheduler_pid(self): queue = Queue(connection=self.testconn) scheduler = RQScheduler( [ queue, ], connection=self.testconn, ) scheduler.acquire_locks() assert queue.scheduler_pid == os.getpid() def test_heartbeat(self): """Test that heartbeat updates locking keys TTL""" name_1 = 'lock-test-1' name_2 = 'lock-test-2' name_3 = 'lock-test-3' scheduler = RQScheduler([name_3], self.testconn) scheduler.acquire_locks() scheduler = RQScheduler([name_1, name_2, name_3], self.testconn) scheduler.acquire_locks() 
locking_key_1 = RQScheduler.get_locking_key(name_1) locking_key_2 = RQScheduler.get_locking_key(name_2) locking_key_3 = RQScheduler.get_locking_key(name_3) with self.testconn.pipeline() as pipeline: pipeline.expire(locking_key_1, 1000) pipeline.expire(locking_key_2, 1000) pipeline.expire(locking_key_3, 1000) pipeline.execute() scheduler.heartbeat() self.assertEqual(self.testconn.ttl(locking_key_1), 61) self.assertEqual(self.testconn.ttl(locking_key_2), 61) self.assertEqual(self.testconn.ttl(locking_key_3), 1000) # scheduler.stop() releases locks and sets status to STOPPED scheduler._status = scheduler.Status.WORKING scheduler.stop() self.assertFalse(self.testconn.exists(locking_key_1)) self.assertFalse(self.testconn.exists(locking_key_2)) self.assertTrue(self.testconn.exists(locking_key_3)) self.assertEqual(scheduler.status, scheduler.Status.STOPPED) # Heartbeat also works properly for schedulers with a single queue scheduler = RQScheduler([name_1], self.testconn) scheduler.acquire_locks() self.testconn.expire(locking_key_1, 1000) scheduler.heartbeat() self.assertEqual(self.testconn.ttl(locking_key_1), 61) def test_enqueue_scheduled_jobs(self): """Scheduler can enqueue scheduled jobs""" queue = Queue(connection=self.testconn) registry = ScheduledJobRegistry(queue=queue) job = Job.create('myfunc', connection=self.testconn) job.save() registry.schedule(job, datetime(2019, 1, 1, tzinfo=timezone.utc)) scheduler = RQScheduler([queue], connection=self.testconn) scheduler.acquire_locks() scheduler.enqueue_scheduled_jobs() self.assertEqual(len(queue), 1) # After job is scheduled, registry should be empty self.assertEqual(len(registry), 0) # Jobs scheduled in the far future should not be affected registry.schedule(job, datetime(2100, 1, 1, tzinfo=timezone.utc)) scheduler.enqueue_scheduled_jobs() self.assertEqual(len(queue), 1) def test_prepare_registries(self): """prepare_registries() creates self._scheduled_job_registries""" foo_queue = Queue('foo', connection=self.testconn) bar_queue = Queue('bar', connection=self.testconn) scheduler = RQScheduler([foo_queue, bar_queue], connection=self.testconn) self.assertEqual(scheduler._scheduled_job_registries, []) scheduler.prepare_registries([foo_queue.name]) self.assertEqual(scheduler._scheduled_job_registries, [ScheduledJobRegistry(queue=foo_queue)]) scheduler.prepare_registries([foo_queue.name, bar_queue.name]) self.assertEqual( scheduler._scheduled_job_registries, [ScheduledJobRegistry(queue=foo_queue), ScheduledJobRegistry(queue=bar_queue)], ) class TestWorker(RQTestCase): def test_work_burst(self): """worker.work() with scheduler enabled works properly""" queue = Queue(connection=self.testconn) worker = Worker(queues=[queue], connection=self.testconn) worker.work(burst=True, with_scheduler=False) self.assertIsNone(worker.scheduler) worker = Worker(queues=[queue], connection=self.testconn) worker.work(burst=True, with_scheduler=True) self.assertIsNotNone(worker.scheduler) self.assertIsNone(self.testconn.get(worker.scheduler.get_locking_key('default'))) @mock.patch.object(RQScheduler, 'acquire_locks') def test_run_maintenance_tasks(self, mocked): """scheduler.acquire_locks() is called only when scheduled is enabled""" queue = Queue(connection=self.testconn) worker = Worker(queues=[queue], connection=self.testconn) worker.run_maintenance_tasks() self.assertEqual(mocked.call_count, 0) # if scheduler object exists and it's a first start, acquire locks should not run worker.last_cleaned_at = None worker.scheduler = RQScheduler([queue], 
                                        connection=self.testconn)
        worker.run_maintenance_tasks()
        self.assertEqual(mocked.call_count, 0)

        # the scheduler exists and it's NOT a first start, since the process doesn't exist,
        # should call acquire_locks to start the process
        worker.last_cleaned_at = datetime.now()
        worker.run_maintenance_tasks()
        self.assertEqual(mocked.call_count, 1)

        # the scheduler exists, the process exists, but the process is not alive
        running_process = mock.MagicMock()
        running_process.is_alive.return_value = False
        worker.scheduler._process = running_process
        worker.run_maintenance_tasks()
        self.assertEqual(mocked.call_count, 2)
        self.assertEqual(running_process.is_alive.call_count, 1)

        # the scheduler exists, the process exists, and it is alive. acquire_locks shouldn't run
        running_process.is_alive.return_value = True
        worker.run_maintenance_tasks()
        self.assertEqual(mocked.call_count, 2)
        self.assertEqual(running_process.is_alive.call_count, 2)

    def test_work(self):
        queue = Queue(connection=self.testconn)
        worker = Worker(queues=[queue], connection=self.testconn)
        p = Process(target=kill_worker, args=(os.getpid(), False, 5))
        p.start()
        queue.enqueue_at(datetime(2019, 1, 1, tzinfo=timezone.utc), say_hello)
        worker.work(burst=False, with_scheduler=True)
        p.join(1)
        self.assertIsNotNone(worker.scheduler)
        registry = FinishedJobRegistry(queue=queue)
        self.assertEqual(len(registry), 1)

    @ssl_test
    def test_work_with_ssl(self):
        connection = find_empty_redis_database(ssl=True)
        queue = Queue(connection=connection)
        worker = Worker(queues=[queue], connection=connection)
        p = Process(target=kill_worker, args=(os.getpid(), False, 5))
        p.start()
        queue.enqueue_at(datetime(2019, 1, 1, tzinfo=timezone.utc), say_hello)
        worker.work(burst=False, with_scheduler=True)
        p.join(1)
        self.assertIsNotNone(worker.scheduler)
        registry = FinishedJobRegistry(queue=queue)
        self.assertEqual(len(registry), 1)

    def test_work_with_serializer(self):
        queue = Queue(connection=self.testconn, serializer=JSONSerializer)
        worker = Worker(queues=[queue], connection=self.testconn, serializer=JSONSerializer)
        p = Process(target=kill_worker, args=(os.getpid(), False, 5))
        p.start()
        queue.enqueue_at(datetime(2019, 1, 1, tzinfo=timezone.utc), say_hello, meta={'foo': 'bar'})
        worker.work(burst=False, with_scheduler=True)
        p.join(1)
        self.assertIsNotNone(worker.scheduler)
        registry = FinishedJobRegistry(queue=queue)
        self.assertEqual(len(registry), 1)


class TestQueue(RQTestCase):
    def test_enqueue_at(self):
        """queue.enqueue_at() puts job in the ScheduledJobRegistry"""
        queue = Queue(connection=self.testconn)
        registry = ScheduledJobRegistry(queue=queue)
        scheduler = RQScheduler([queue], connection=self.testconn)
        scheduler.acquire_locks()
        # Jobs created using enqueue_at are put in the ScheduledJobRegistry
        job = queue.enqueue_at(datetime(2019, 1, 1, tzinfo=timezone.utc), say_hello)
        self.assertEqual(len(queue), 0)
        self.assertEqual(len(registry), 1)
        # enqueue_at sets job status to "scheduled"
        self.assertTrue(job.get_status() == 'scheduled')

        # After enqueue_scheduled_jobs() is called, the registry is empty
        # and job is enqueued
        scheduler.enqueue_scheduled_jobs()
        self.assertEqual(len(queue), 1)
        self.assertEqual(len(registry), 0)

    def test_enqueue_at_at_front(self):
        """queue.enqueue_at() accepts at_front argument.
When true, job will be put at position 0 of the queue when the time comes for the job to be scheduled""" queue = Queue(connection=self.testconn) registry = ScheduledJobRegistry(queue=queue) scheduler = RQScheduler([queue], connection=self.testconn) scheduler.acquire_locks() # Jobs created using enqueue_at is put in the ScheduledJobRegistry # job_first should be enqueued first job_first = queue.enqueue_at(datetime(2019, 1, 1, tzinfo=timezone.utc), say_hello) # job_second will be enqueued second, but "at_front" job_second = queue.enqueue_at(datetime(2019, 1, 2, tzinfo=timezone.utc), say_hello, at_front=True) self.assertEqual(len(queue), 0) self.assertEqual(len(registry), 2) # enqueue_at set job status to "scheduled" self.assertTrue(job_first.get_status() == 'scheduled') self.assertTrue(job_second.get_status() == 'scheduled') # After enqueue_scheduled_jobs() is called, the registry is empty # and job is enqueued scheduler.enqueue_scheduled_jobs() self.assertEqual(len(queue), 2) self.assertEqual(len(registry), 0) self.assertEqual(0, queue.get_job_position(job_second.id)) self.assertEqual(1, queue.get_job_position(job_first.id)) def test_enqueue_in(self): """queue.enqueue_in() schedules job correctly""" queue = Queue(connection=self.testconn) registry = ScheduledJobRegistry(queue=queue) job = queue.enqueue_in(timedelta(seconds=30), say_hello) now = datetime.now(timezone.utc) scheduled_time = registry.get_scheduled_time(job) # Ensure that job is scheduled roughly 30 seconds from now self.assertTrue(now + timedelta(seconds=28) < scheduled_time < now + timedelta(seconds=32)) def test_enqueue_in_with_retry(self): """Ensure that the retry parameter is passed to the enqueue_at function from enqueue_in. """ queue = Queue(connection=self.testconn) job = queue.enqueue_in(timedelta(seconds=30), say_hello, retry=Retry(3, [2])) self.assertEqual(job.retries_left, 3) self.assertEqual(job.retry_intervals, [2]) def test_custom_connection_pool(self): """Connection pool customizing. 
Ensure that we can properly set a custom connection pool class and pass extra arguments""" custom_conn = redis.Redis( connection_pool=redis.ConnectionPool( connection_class=CustomRedisConnection, db=4, custom_arg="foo", ) ) queue = Queue(connection=custom_conn) scheduler = RQScheduler([queue], connection=custom_conn) scheduler_connection = scheduler.connection.connection_pool.get_connection('info') self.assertEqual(scheduler_connection.__class__, CustomRedisConnection) self.assertEqual(scheduler_connection.get_custom_arg(), "foo") def test_no_custom_connection_pool(self): """Connection pool customizing must not interfere if we're using a standard connection (non-pooled)""" standard_conn = redis.Redis(db=5) queue = Queue(connection=standard_conn) scheduler = RQScheduler([queue], connection=standard_conn) scheduler_connection = scheduler.connection.connection_pool.get_connection('info') self.assertEqual(scheduler_connection.__class__, redis.Connection) rq-1.16.2/tests/test_sentry.py0000644000000000000000000000362713615410400013252 0ustar00from unittest import mock from click.testing import CliRunner from rq import Queue from rq.cli import main from rq.cli.helpers import read_config_file from rq.contrib.sentry import register_sentry from rq.worker import SimpleWorker from tests import RQTestCase from tests.fixtures import div_by_zero class FakeSentry: servers = [] def captureException(self, *args, **kwds): # noqa pass # we cannot check this, because worker forks class TestSentry(RQTestCase): def setUp(self): super().setUp() db_num = self.testconn.connection_pool.connection_kwargs['db'] self.redis_url = 'redis://127.0.0.1:6379/%d' % db_num def test_reading_dsn_from_file(self): settings = read_config_file('tests.config_files.sentry') self.assertIn('SENTRY_DSN', settings) self.assertEqual(settings['SENTRY_DSN'], 'https://123@sentry.io/123') @mock.patch('rq.contrib.sentry.register_sentry') def test_cli_flag(self, mocked): """rq worker -u -b --exception-handler """ # connection = Redis.from_url(self.redis_url) runner = CliRunner() runner.invoke(main, ['worker', '-u', self.redis_url, '-b', '--sentry-dsn', 'https://1@sentry.io/1']) self.assertEqual(mocked.call_count, 1) runner.invoke(main, ['worker-pool', '-u', self.redis_url, '-b', '--sentry-dsn', 'https://1@sentry.io/1']) self.assertEqual(mocked.call_count, 2) def test_failure_capture(self): """Test failure is captured by Sentry SDK""" from sentry_sdk import Hub hub = Hub.current self.assertIsNone(hub.last_event_id()) queue = Queue(connection=self.testconn) queue.enqueue(div_by_zero) worker = SimpleWorker(queues=[queue], connection=self.testconn) register_sentry('https://123@sentry.io/123') worker.work(burst=True) self.assertIsNotNone(hub.last_event_id()) rq-1.16.2/tests/test_serializers.py0000644000000000000000000000255213615410400014256 0ustar00import json import pickle import pickletools import queue import unittest from rq.serializers import DefaultSerializer, resolve_serializer class TestSerializers(unittest.TestCase): def test_resolve_serializer(self): """Ensure function resolve_serializer works correctly""" serializer = resolve_serializer(None) self.assertIsNotNone(serializer) self.assertEqual(serializer, DefaultSerializer) # Test round trip with default serializer test_data = {'test': 'data'} serialized_data = serializer.dumps(test_data) self.assertEqual(serializer.loads(serialized_data), test_data) self.assertEqual(next(pickletools.genops(serialized_data))[1], pickle.HIGHEST_PROTOCOL) # Test using json serializer serializer = 
resolve_serializer(json)
        self.assertIsNotNone(serializer)
        self.assertTrue(hasattr(serializer, 'dumps'))
        self.assertTrue(hasattr(serializer, 'loads'))

        # Test raise NotImplementedError
        with self.assertRaises(NotImplementedError):
            resolve_serializer(object)

        # Test raise Exception
        with self.assertRaises(Exception):
            resolve_serializer(queue.Queue())

        # Test using path.to.serializer string
        serializer = resolve_serializer('tests.fixtures.Serializer')
        self.assertIsNotNone(serializer)
rq-1.16.2/tests/test_timeouts.py0000644000000000000000000000351513615410400013573 0ustar00
import time

from rq import Queue, SimpleWorker
from rq.registry import FailedJobRegistry, FinishedJobRegistry
from rq.timeouts import TimerDeathPenalty
from tests import RQTestCase


class TimerBasedWorker(SimpleWorker):
    death_penalty_class = TimerDeathPenalty


def thread_friendly_sleep_func(seconds):
    end_at = time.time() + seconds
    while True:
        if time.time() > end_at:
            break


class TestTimeouts(RQTestCase):
    def test_timer_death_penalty(self):
        """Ensure TimerDeathPenalty works correctly."""
        q = Queue(connection=self.testconn)
        q.empty()
        finished_job_registry = FinishedJobRegistry(connection=self.testconn)
        failed_job_registry = FailedJobRegistry(connection=self.testconn)

        # make sure death_penalty_class persists
        w = TimerBasedWorker([q], connection=self.testconn)
        self.assertIsNotNone(w)
        self.assertEqual(w.death_penalty_class, TimerDeathPenalty)

        # Test short-running job doesn't raise JobTimeoutException
        job = q.enqueue(thread_friendly_sleep_func, args=(1,), job_timeout=3)
        w.work(burst=True)
        job.refresh()
        self.assertIn(job, finished_job_registry)

        # Test long-running job raises JobTimeoutException
        job = q.enqueue(thread_friendly_sleep_func, args=(5,), job_timeout=3)
        w.work(burst=True)
        self.assertIn(job, failed_job_registry)
        job.refresh()
        self.assertIn("rq.timeouts.JobTimeoutException", job.exc_info)

        # Test negative timeout doesn't raise JobTimeoutException,
        # which implies an unintended immediate timeout.
job = q.enqueue(thread_friendly_sleep_func, args=(1,), job_timeout=-1) w.work(burst=True) job.refresh() self.assertIn(job, finished_job_registry) rq-1.16.2/tests/test_utils.py0000644000000000000000000001531313615410400013061 0ustar00import datetime import re from unittest.mock import Mock from redis import Redis from rq.exceptions import TimeoutFormatError from rq.utils import ( backend_class, ceildiv, ensure_list, first, get_call_string, get_version, import_attribute, is_nonstring_iterable, parse_timeout, split_list, truncate_long_string, utcparse, ) from rq.worker import SimpleWorker from tests import RQTestCase, fixtures class TestUtils(RQTestCase): def test_parse_timeout(self): """Ensure function parse_timeout works correctly""" self.assertEqual(12, parse_timeout(12)) self.assertEqual(12, parse_timeout('12')) self.assertEqual(12, parse_timeout('12s')) self.assertEqual(720, parse_timeout('12m')) self.assertEqual(3600, parse_timeout('1h')) self.assertEqual(3600, parse_timeout('1H')) def test_parse_timeout_coverage_scenarios(self): """Test parse_timeout edge cases for coverage""" timeouts = ['h12', 'h', 'm', 's', '10k'] self.assertEqual(None, parse_timeout(None)) with self.assertRaises(TimeoutFormatError): for timeout in timeouts: parse_timeout(timeout) def test_first(self): """Ensure function first works correctly""" self.assertEqual(42, first([0, False, None, [], (), 42])) self.assertEqual(None, first([0, False, None, [], ()])) self.assertEqual('ohai', first([0, False, None, [], ()], default='ohai')) self.assertEqual('bc', first(re.match(regex, 'abc') for regex in ['b.*', 'a(.*)']).group(1)) self.assertEqual(4, first([1, 1, 3, 4, 5], key=lambda x: x % 2 == 0)) def test_is_nonstring_iterable(self): """Ensure function is_nonstring_iterable works correctly""" self.assertEqual(True, is_nonstring_iterable([])) self.assertEqual(False, is_nonstring_iterable('test')) self.assertEqual(True, is_nonstring_iterable({})) self.assertEqual(True, is_nonstring_iterable(())) def test_ensure_list(self): """Ensure function ensure_list works correctly""" self.assertEqual([], ensure_list([])) self.assertEqual(['test'], ensure_list('test')) self.assertEqual({}, ensure_list({})) self.assertEqual((), ensure_list(())) def test_utcparse(self): """Ensure function utcparse works correctly""" utc_formated_time = '2017-08-31T10:14:02.123456Z' self.assertEqual(datetime.datetime(2017, 8, 31, 10, 14, 2, 123456), utcparse(utc_formated_time)) def test_utcparse_legacy(self): """Ensure function utcparse works correctly""" utc_formated_time = '2017-08-31T10:14:02Z' self.assertEqual(datetime.datetime(2017, 8, 31, 10, 14, 2), utcparse(utc_formated_time)) def test_backend_class(self): """Ensure function backend_class works correctly""" self.assertEqual(fixtures.DummyQueue, backend_class(fixtures, 'DummyQueue')) self.assertNotEqual(fixtures.say_pid, backend_class(fixtures, 'DummyQueue')) self.assertEqual(fixtures.DummyQueue, backend_class(fixtures, 'DummyQueue', override=fixtures.DummyQueue)) self.assertEqual( fixtures.DummyQueue, backend_class(fixtures, 'DummyQueue', override='tests.fixtures.DummyQueue') ) def test_get_redis_version(self): """Ensure get_version works properly""" redis = Redis() self.assertTrue(isinstance(get_version(redis), tuple)) # Parses 3 digit version numbers correctly class DummyRedis(Redis): def info(*args): return {'redis_version': '4.0.8'} self.assertEqual(get_version(DummyRedis()), (4, 0, 8)) # Parses 3 digit version numbers correctly class DummyRedis(Redis): def info(*args): return {'redis_version': 
'3.0.7.9'}

        self.assertEqual(get_version(DummyRedis()), (3, 0, 7))

    def test_get_redis_version_gets_cached(self):
        """Ensure get_version works properly"""
        # Parses 3 digit version numbers correctly
        redis = Mock(spec=['info'])
        redis.info = Mock(return_value={'redis_version': '4.0.8'})
        self.assertEqual(get_version(redis), (4, 0, 8))
        self.assertEqual(get_version(redis), (4, 0, 8))
        redis.info.assert_called_once()

    def test_import_attribute(self):
        """Ensure import_attribute works properly"""
        self.assertEqual(import_attribute('rq.utils.get_version'), get_version)
        self.assertEqual(import_attribute('rq.worker.SimpleWorker'), SimpleWorker)
        self.assertRaises(ValueError, import_attribute, 'non.existent.module')
        self.assertRaises(ValueError, import_attribute, 'rq.worker.WrongWorker')

    def test_ceildiv_even(self):
        """When a number is evenly divisible by another, ceildiv returns the quotient"""
        dividend = 12
        divisor = 4
        self.assertEqual(ceildiv(dividend, divisor), dividend // divisor)

    def test_ceildiv_uneven(self):
        """When a number is not evenly divisible by another, ceildiv returns the quotient plus one"""
        dividend = 13
        divisor = 4
        self.assertEqual(ceildiv(dividend, divisor), dividend // divisor + 1)

    def test_split_list(self):
        """Ensure split_list works properly"""
        BIG_LIST_SIZE = 42
        SEGMENT_SIZE = 5

        big_list = ['1'] * BIG_LIST_SIZE
        small_lists = list(split_list(big_list, SEGMENT_SIZE))

        expected_small_list_count = ceildiv(BIG_LIST_SIZE, SEGMENT_SIZE)
        self.assertEqual(len(small_lists), expected_small_list_count)

    def test_truncate_long_string(self):
        """Ensure truncate_long_string works properly"""
        assert truncate_long_string("12", max_length=3) == "12"
        assert truncate_long_string("123", max_length=3) == "123"
        assert truncate_long_string("1234", max_length=3) == "123..."
        assert truncate_long_string("12345", max_length=3) == "123..."
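        # For illustration only (not part of the test suite): as the assertions
        # above show, truncate_long_string keeps the first max_length characters
        # and appends '...' only when the input is actually longer, e.g.:
        #
        #   truncate_long_string("hello world", max_length=5)  # -> 'hello...'
        #   truncate_long_string("hello", max_length=5)        # -> 'hello'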
s = "long string but no max_length provided so no truncating should occur" * 10 assert truncate_long_string(s) == s def test_get_call_string(self): """Ensure a case, when func_name, args and kwargs are not None, works properly""" cs = get_call_string("f", ('some', 'args', 42), {"key1": "value1", "key2": True}) assert cs == "f('some', 'args', 42, key1='value1', key2=True)" def test_get_call_string_with_max_length(self): """Ensure get_call_string works properly when max_length is provided""" func_name = "f" args = (1234, 12345, 123456) kwargs = {"len4": 1234, "len5": 12345, "len6": 123456} cs = get_call_string(func_name, args, kwargs, max_length=5) assert cs == "f(1234, 12345, 12345..., len4=1234, len5=12345, len6=12345...)" rq-1.16.2/tests/test_worker.py0000644000000000000000000017137413615410400013244 0ustar00import json import os import shutil import signal import subprocess import sys import threading import time import zlib from datetime import datetime, timedelta from multiprocessing import Process from time import sleep from unittest import mock, skipIf from unittest.mock import Mock import psutil import pytest import redis.exceptions from redis import Redis from rq import Queue, SimpleWorker, Worker, get_current_connection from rq.defaults import DEFAULT_MAINTENANCE_TASK_INTERVAL from rq.job import Job, JobStatus, Retry from rq.registry import FailedJobRegistry, FinishedJobRegistry, StartedJobRegistry from rq.results import Result from rq.serializers import JSONSerializer from rq.suspension import resume, suspend from rq.utils import as_text, get_version, utcnow from rq.version import VERSION from rq.worker import HerokuWorker, RandomWorker, RoundRobinWorker, WorkerStatus from tests import RQTestCase, slow from tests.fixtures import ( CustomJob, access_self, create_file, create_file_after_timeout, create_file_after_timeout_and_setsid, div_by_zero, do_nothing, kill_worker, launch_process_within_worker_and_store_pid, long_running_job, modify_self, modify_self_and_error, raise_exc_mock, run_dummy_heroku_worker, save_key_ttl, say_hello, say_pid, ) class CustomQueue(Queue): pass class TestWorker(RQTestCase): def test_create_worker(self): """Worker creation using various inputs.""" # With single string argument w = Worker('foo') self.assertEqual(w.queues[0].name, 'foo') # With list of strings w = Worker(['foo', 'bar']) self.assertEqual(w.queues[0].name, 'foo') self.assertEqual(w.queues[1].name, 'bar') self.assertEqual(w.queue_keys(), [w.queues[0].key, w.queues[1].key]) self.assertEqual(w.queue_names(), ['foo', 'bar']) # With iterable of strings w = Worker(iter(['foo', 'bar'])) self.assertEqual(w.queues[0].name, 'foo') self.assertEqual(w.queues[1].name, 'bar') # With single Queue w = Worker(Queue('foo')) self.assertEqual(w.queues[0].name, 'foo') # With iterable of Queues w = Worker(iter([Queue('foo'), Queue('bar')])) self.assertEqual(w.queues[0].name, 'foo') self.assertEqual(w.queues[1].name, 'bar') # With list of Queues w = Worker([Queue('foo'), Queue('bar')]) self.assertEqual(w.queues[0].name, 'foo') self.assertEqual(w.queues[1].name, 'bar') # With string and serializer w = Worker('foo', serializer=json) self.assertEqual(w.queues[0].name, 'foo') # With queue having serializer w = Worker(Queue('foo'), serializer=json) self.assertEqual(w.queues[0].name, 'foo') def test_work_and_quit(self): """Worker processes work, then quits.""" fooq, barq = Queue('foo'), Queue('bar') w = Worker([fooq, barq]) self.assertEqual(w.work(burst=True), False, 'Did not expect any work on the queue.') 
fooq.enqueue(say_hello, name='Frank') self.assertEqual(w.work(burst=True), True, 'Expected at least some work done.') def test_work_and_quit_custom_serializer(self): """Worker processes work, then quits.""" fooq, barq = Queue('foo', serializer=JSONSerializer), Queue('bar', serializer=JSONSerializer) w = Worker([fooq, barq], serializer=JSONSerializer) self.assertEqual(w.work(burst=True), False, 'Did not expect any work on the queue.') fooq.enqueue(say_hello, name='Frank') self.assertEqual(w.work(burst=True), True, 'Expected at least some work done.') def test_worker_all(self): """Worker.all() works properly""" foo_queue = Queue('foo') bar_queue = Queue('bar') w1 = Worker([foo_queue, bar_queue], name='w1') w1.register_birth() w2 = Worker([foo_queue], name='w2') w2.register_birth() self.assertEqual(set(Worker.all(connection=foo_queue.connection)), set([w1, w2])) self.assertEqual(set(Worker.all(queue=foo_queue)), set([w1, w2])) self.assertEqual(set(Worker.all(queue=bar_queue)), set([w1])) w1.register_death() w2.register_death() def test_find_by_key(self): """Worker.find_by_key restores queues, state and job_id.""" queues = [Queue('foo'), Queue('bar')] w = Worker(queues) w.register_death() w.register_birth() w.set_state(WorkerStatus.STARTED) worker = Worker.find_by_key(w.key) self.assertEqual(worker.queues, queues) self.assertEqual(worker.get_state(), WorkerStatus.STARTED) self.assertEqual(worker._job_id, None) self.assertTrue(worker.key in Worker.all_keys(worker.connection)) self.assertEqual(worker.version, VERSION) # If worker is gone, its keys should also be removed worker.connection.delete(worker.key) Worker.find_by_key(worker.key) self.assertFalse(worker.key in Worker.all_keys(worker.connection)) self.assertRaises(ValueError, Worker.find_by_key, 'foo') def test_worker_ttl(self): """Worker ttl.""" w = Worker([]) w.register_birth() [worker_key] = self.testconn.smembers(Worker.redis_workers_keys) self.assertIsNotNone(self.testconn.ttl(worker_key)) w.register_death() def test_work_via_string_argument(self): """Worker processes work fed via string arguments.""" q = Queue('foo') w = Worker([q]) job = q.enqueue('tests.fixtures.say_hello', name='Frank') self.assertEqual(w.work(burst=True), True, 'Expected at least some work done.') expected_result = 'Hi there, Frank!' self.assertEqual(job.result, expected_result) # Only run if Redis server supports streams if job.supports_redis_streams: self.assertEqual(Result.fetch_latest(job).return_value, expected_result) self.assertIsNone(job.worker_name) def test_job_times(self): """job times are set correctly.""" q = Queue('foo') w = Worker([q]) before = utcnow() before = before.replace(microsecond=0) job = q.enqueue(say_hello) self.assertIsNotNone(job.enqueued_at) self.assertIsNone(job.started_at) self.assertIsNone(job.ended_at) self.assertEqual(w.work(burst=True), True, 'Expected at least some work done.') self.assertEqual(job.result, 'Hi there, Stranger!') after = utcnow() job.refresh() self.assertTrue(before <= job.enqueued_at <= after, 'Not %s <= %s <= %s' % (before, job.enqueued_at, after)) self.assertTrue(before <= job.started_at <= after, 'Not %s <= %s <= %s' % (before, job.started_at, after)) self.assertTrue(before <= job.ended_at <= after, 'Not %s <= %s <= %s' % (before, job.ended_at, after)) def test_work_is_unreadable(self): """Unreadable jobs are put on the failed job registry.""" q = Queue() self.assertEqual(q.count, 0) # NOTE: We have to fake this enqueueing for this test case. 
# What we're simulating here is a call to a function that is not # importable from the worker process. job = Job.create(func=div_by_zero, args=(3,), origin=q.name) job.save() job_data = job.data invalid_data = job_data.replace(b'div_by_zero', b'nonexisting') assert job_data != invalid_data self.testconn.hset(job.key, 'data', zlib.compress(invalid_data)) # We use the low-level internal function to enqueue any data (bypassing # validity checks) q.push_job_id(job.id) self.assertEqual(q.count, 1) # All set, we're going to process it w = Worker([q]) w.work(burst=True) # should silently pass self.assertEqual(q.count, 0) failed_job_registry = FailedJobRegistry(queue=q) self.assertTrue(job in failed_job_registry) def test_meta_is_unserializable(self): """Unserializable jobs are put on the failed job registry.""" q = Queue() self.assertEqual(q.count, 0) # NOTE: We have to fake this enqueueing for this test case. # What we're simulating here is a call to a function that is not # importable from the worker process. job = Job.create(func=do_nothing, origin=q.name, meta={'key': 'value'}) job.save() invalid_meta = '{{{{{{{{INVALID_JSON' self.testconn.hset(job.key, 'meta', invalid_meta) job.refresh() self.assertIsInstance(job.meta, dict) self.assertTrue('unserialized' in job.meta.keys()) @mock.patch('rq.worker.logger.error') def test_deserializing_failure_is_handled(self, mock_logger_error): """ Test that exceptions are properly handled for a job that fails to deserialize. """ q = Queue() self.assertEqual(q.count, 0) # as in test_work_is_unreadable(), we create a fake bad job job = Job.create(func=div_by_zero, args=(3,), origin=q.name) job.save() # setting data to b'' ensures that pickling will completely fail job_data = job.data invalid_data = job_data.replace(b'div_by_zero', b'') assert job_data != invalid_data self.testconn.hset(job.key, 'data', zlib.compress(invalid_data)) # We use the low-level internal function to enqueue any data (bypassing # validity checks) q.push_job_id(job.id) self.assertEqual(q.count, 1) # Now we try to run the job... 
w = Worker([q]) job, queue = w.dequeue_job_and_maintain_ttl(10) w.perform_job(job, queue) # An exception should be logged here at ERROR level self.assertIn("Traceback", mock_logger_error.call_args[0][3]) def test_heartbeat(self): """Heartbeat saves last_heartbeat""" q = Queue() w = Worker([q]) w.register_birth() self.assertEqual(str(w.pid), as_text(self.testconn.hget(w.key, 'pid'))) self.assertEqual(w.hostname, as_text(self.testconn.hget(w.key, 'hostname'))) last_heartbeat = self.testconn.hget(w.key, 'last_heartbeat') self.assertIsNotNone(self.testconn.hget(w.key, 'birth')) self.assertTrue(last_heartbeat is not None) w = Worker.find_by_key(w.key) self.assertIsInstance(w.last_heartbeat, datetime) # worker.refresh() shouldn't fail if last_heartbeat is None # for compatibility reasons self.testconn.hdel(w.key, 'last_heartbeat') w.refresh() # worker.refresh() shouldn't fail if birth is None # for compatibility reasons self.testconn.hdel(w.key, 'birth') w.refresh() def test_maintain_heartbeats(self): """worker.maintain_heartbeats() shouldn't create new job keys""" queue = Queue(connection=self.testconn) worker = Worker([queue], connection=self.testconn) job = queue.enqueue(say_hello) worker.maintain_heartbeats(job) self.assertTrue(self.testconn.exists(worker.key)) self.assertTrue(self.testconn.exists(job.key)) self.testconn.delete(job.key) worker.maintain_heartbeats(job) self.assertFalse(self.testconn.exists(job.key)) @slow def test_heartbeat_survives_lost_connection(self): with mock.patch.object(Worker, 'heartbeat') as mocked: # None -> Heartbeat is first called before the job loop mocked.side_effect = [None, redis.exceptions.ConnectionError()] q = Queue() w = Worker([q]) w.work(burst=True) # First call is prior to job loop, second raises the error, # third is successful, after "recovery" assert mocked.call_count == 3 def test_job_timeout_moved_to_failed_job_registry(self): """Jobs that run long are moved to FailedJobRegistry""" queue = Queue() worker = Worker([queue]) job = queue.enqueue(long_running_job, 5, job_timeout=1) worker.work(burst=True) self.assertIn(job, job.failed_job_registry) job.refresh() self.assertIn('rq.timeouts.JobTimeoutException', job.exc_info) @slow def test_heartbeat_busy(self): """Periodic heartbeats while horse is busy with long jobs""" q = Queue() w = Worker([q], job_monitoring_interval=5) for timeout, expected_heartbeats in [(2, 0), (7, 1), (12, 2)]: job = q.enqueue(long_running_job, args=(timeout,), job_timeout=30, result_ttl=-1) with mock.patch.object(w, 'heartbeat', wraps=w.heartbeat) as mocked: w.execute_job(job, q) self.assertEqual(mocked.call_count, expected_heartbeats) job = Job.fetch(job.id) self.assertEqual(job.get_status(), JobStatus.FINISHED) def test_work_fails(self): """Failing jobs are put on the failed queue.""" q = Queue() self.assertEqual(q.count, 0) # Action job = q.enqueue(div_by_zero) self.assertEqual(q.count, 1) # keep for later enqueued_at_date = str(job.enqueued_at) w = Worker([q]) w.work(burst=True) # Postconditions self.assertEqual(q.count, 0) failed_job_registry = FailedJobRegistry(queue=q) self.assertTrue(job in failed_job_registry) self.assertEqual(w.get_current_job_id(), None) # Check the job job = Job.fetch(job.id) self.assertEqual(job.origin, q.name) # Should be the original enqueued_at date, not the date of enqueueing # to the failed queue self.assertEqual(str(job.enqueued_at), enqueued_at_date) self.assertTrue(job.exc_info) # should contain exc_info if job.supports_redis_streams: result = Result.fetch_latest(job) 
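            # For illustration only (not part of the test suite): besides
            # Result.fetch_latest(job), the documented results API exposes this
            # data on the job itself, e.g.:
            #
            #   latest = job.latest_result()   # most recent Result, if any
            #   history = job.results()        # every stored Result, newest first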
self.assertEqual(result.exc_string, job.exc_info) self.assertEqual(result.type, Result.Type.FAILED) def test_horse_fails(self): """Tests that job status is set to FAILED even if horse unexpectedly fails""" q = Queue() self.assertEqual(q.count, 0) # Action job = q.enqueue(say_hello) self.assertEqual(q.count, 1) # keep for later enqueued_at_date = str(job.enqueued_at) w = Worker([q]) with mock.patch.object(w, 'perform_job', new_callable=raise_exc_mock): w.work(burst=True) # should silently pass # Postconditions self.assertEqual(q.count, 0) failed_job_registry = FailedJobRegistry(queue=q) self.assertTrue(job in failed_job_registry) self.assertEqual(w.get_current_job_id(), None) # Check the job job = Job.fetch(job.id) self.assertEqual(job.origin, q.name) # Should be the original enqueued_at date, not the date of enqueueing # to the failed queue self.assertEqual(str(job.enqueued_at), enqueued_at_date) self.assertTrue(job.exc_info) # should contain exc_info def test_statistics(self): """Successful and failed job counts are saved properly""" queue = Queue(connection=self.connection) job = queue.enqueue(div_by_zero) worker = Worker([queue]) worker.register_birth() self.assertEqual(worker.failed_job_count, 0) self.assertEqual(worker.successful_job_count, 0) self.assertEqual(worker.total_working_time, 0) registry = StartedJobRegistry(connection=worker.connection) job.started_at = utcnow() job.ended_at = job.started_at + timedelta(seconds=0.75) worker.handle_job_failure(job, queue) worker.handle_job_success(job, queue, registry) worker.refresh() self.assertEqual(worker.failed_job_count, 1) self.assertEqual(worker.successful_job_count, 1) self.assertEqual(worker.total_working_time, 1.5) # 1.5 seconds worker.handle_job_failure(job, queue) worker.handle_job_success(job, queue, registry) worker.refresh() self.assertEqual(worker.failed_job_count, 2) self.assertEqual(worker.successful_job_count, 2) self.assertEqual(worker.total_working_time, 3.0) def test_handle_retry(self): """handle_job_failure() handles retry properly""" connection = self.testconn queue = Queue(connection=connection) retry = Retry(max=2) job = queue.enqueue(div_by_zero, retry=retry) registry = FailedJobRegistry(queue=queue) worker = Worker([queue]) # If job is configured to retry, it will be put back in the queue # and not put in the FailedJobRegistry. 
        # This is the original execution
        queue.empty()
        worker.handle_job_failure(job, queue)
        job.refresh()
        self.assertEqual(job.retries_left, 1)
        self.assertEqual([job.id], queue.job_ids)
        self.assertFalse(job in registry)

        # First retry
        queue.empty()
        worker.handle_job_failure(job, queue)
        job.refresh()
        self.assertEqual(job.retries_left, 0)
        self.assertEqual([job.id], queue.job_ids)

        # Second retry
        queue.empty()
        worker.handle_job_failure(job, queue)
        job.refresh()
        self.assertEqual(job.retries_left, 0)
        self.assertEqual([], queue.job_ids)
        # If a job has no retries left, it's put in FailedJobRegistry
        self.assertTrue(job in registry)

    def test_total_working_time(self):
        """worker.total_working_time is stored properly"""
        queue = Queue()
        job = queue.enqueue(long_running_job, 0.05)
        worker = Worker([queue])
        worker.register_birth()

        worker.perform_job(job, queue)
        worker.refresh()
        # total_working_time should be a little bit more than 0.05 seconds
        self.assertGreaterEqual(worker.total_working_time, 0.05)
        # in multi-user environments delays might be unpredictable,
        # please adjust this magic limit accordingly in case it takes even longer to run
        self.assertLess(worker.total_working_time, 1)

    def test_max_jobs(self):
        """Worker exits after number of jobs complete."""
        queue = Queue()
        job1 = queue.enqueue(do_nothing)
        job2 = queue.enqueue(do_nothing)
        worker = Worker([queue])
        worker.work(max_jobs=1)

        self.assertEqual(JobStatus.FINISHED, job1.get_status())
        self.assertEqual(JobStatus.QUEUED, job2.get_status())

    def test_disable_default_exception_handler(self):
        """
        Job is not moved to FailedJobRegistry when the default exception handler is disabled.
        """
        queue = Queue(name='default', connection=self.testconn)

        job = queue.enqueue(div_by_zero)
        worker = Worker([queue], disable_default_exception_handler=False)
        worker.work(burst=True)

        registry = FailedJobRegistry(queue=queue)
        self.assertTrue(job in registry)

        # Job is not added to FailedJobRegistry if
        # disable_default_exception_handler is True
        job = queue.enqueue(div_by_zero)
        worker = Worker([queue], disable_default_exception_handler=True)
        worker.work(burst=True)
        self.assertFalse(job in registry)

    def test_custom_exc_handling(self):
        """Custom exception handling."""

        def first_handler(job, *exc_info):
            job.meta = {'first_handler': True}
            job.save_meta()
            return True

        def second_handler(job, *exc_info):
            job.meta.update({'second_handler': True})
            job.save_meta()

        def black_hole(job, *exc_info):
            # Don't fall through to default behaviour (moving to failed queue)
            return False

        q = Queue()
        self.assertEqual(q.count, 0)

        job = q.enqueue(div_by_zero)
        w = Worker([q], exception_handlers=first_handler)
        w.work(burst=True)

        # Check the job
        job.refresh()
        self.assertEqual(job.is_failed, True)
        self.assertTrue(job.meta['first_handler'])

        job = q.enqueue(div_by_zero)
        w = Worker([q], exception_handlers=[first_handler, second_handler])
        w.work(burst=True)

        # Both custom exception handlers are run
        job.refresh()
        self.assertEqual(job.is_failed, True)
        self.assertTrue(job.meta['first_handler'])
        self.assertTrue(job.meta['second_handler'])

        job = q.enqueue(div_by_zero)
        w = Worker([q], exception_handlers=[first_handler, black_hole, second_handler])
        w.work(burst=True)

        # second_handler is not run since it's interrupted by black_hole
        job.refresh()
        self.assertEqual(job.is_failed, True)
        self.assertTrue(job.meta['first_handler'])
        self.assertEqual(job.meta.get('second_handler'), None)

    def test_deleted_jobs_arent_executed(self):
        """Cancelling jobs."""
        SENTINEL_FILE = '/tmp/rq-tests.txt'  # noqa

        try:
            # Remove the sentinel if it is
leftover from a previous test run os.remove(SENTINEL_FILE) except OSError as e: if e.errno != 2: raise q = Queue() job = q.enqueue(create_file, SENTINEL_FILE) # Here, we cancel the job, so the sentinel file may not be created self.testconn.delete(job.key) w = Worker([q]) w.work(burst=True) assert q.count == 0 # Should not have created evidence of execution self.assertEqual(os.path.exists(SENTINEL_FILE), False) def test_cancel_running_parent_job(self): """Cancel a running parent job and verify that dependent jobs are not started.""" def cancel_parent_job(job): while job.is_queued: time.sleep(1) job.cancel() return q = Queue( "low", ) parent_job = q.enqueue(long_running_job, 5) job = q.enqueue(say_hello, depends_on=parent_job) job2 = q.enqueue(say_hello, depends_on=job) status_thread = threading.Thread(target=cancel_parent_job, args=(parent_job,)) status_thread.start() w = Worker([q]) w.work( burst=True, ) status_thread.join() self.assertNotEqual(parent_job.result, None) self.assertEqual(job.get_status(), JobStatus.DEFERRED) self.assertEqual(job.result, None) self.assertEqual(job2.get_status(), JobStatus.DEFERRED) self.assertEqual(job2.result, None) self.assertEqual(q.count, 0) def test_cancel_dependent_job(self): """Cancel job and verify that when the parent job is finished, the dependent job is not started.""" q = Queue( "low", ) parent_job = q.enqueue(long_running_job, 5, job_id="parent_job") job = q.enqueue(say_hello, depends_on=parent_job, job_id="job1") job2 = q.enqueue(say_hello, depends_on=job, job_id="job2") job.cancel() w = Worker([q]) w.work( burst=True, ) self.assertTrue(job.is_canceled) self.assertNotEqual(parent_job.result, None) self.assertEqual(job.get_status(), JobStatus.CANCELED) self.assertEqual(job.result, None) self.assertEqual(job2.result, None) self.assertEqual(job2.get_status(), JobStatus.DEFERRED) self.assertEqual(q.count, 0) def test_cancel_job_enqueue_dependent(self): """Cancel a job in a chain and enqueue the dependent jobs.""" q = Queue( "low", ) parent_job = q.enqueue(long_running_job, 5, job_id="parent_job") job = q.enqueue(say_hello, depends_on=parent_job, job_id="job1") job2 = q.enqueue(say_hello, depends_on=job, job_id="job2") job3 = q.enqueue(say_hello, depends_on=job2, job_id="job3") job.cancel(enqueue_dependents=True) w = Worker([q]) w.work( burst=True, ) self.assertTrue(job.is_canceled) self.assertNotEqual(parent_job.result, None) self.assertEqual(job.get_status(), JobStatus.CANCELED) self.assertEqual(job.result, None) self.assertNotEqual(job2.result, None) self.assertEqual(job2.get_status(), JobStatus.FINISHED) self.assertEqual(job3.get_status(), JobStatus.FINISHED) self.assertEqual(q.count, 0) @slow def test_max_idle_time(self): q = Queue() w = Worker([q]) q.enqueue(say_hello, args=('Frank',)) self.assertIsNotNone(w.dequeue_job_and_maintain_ttl(1)) # idle for 1 second self.assertIsNone(w.dequeue_job_and_maintain_ttl(1, max_idle_time=1)) # idle for 3 seconds now = utcnow() self.assertIsNone(w.dequeue_job_and_maintain_ttl(1, max_idle_time=3)) self.assertLess((utcnow() - now).total_seconds(), 5) # 5 for some buffer # idle for 2 seconds because idle_time is less than timeout now = utcnow() self.assertIsNone(w.dequeue_job_and_maintain_ttl(3, max_idle_time=2)) self.assertLess((utcnow() - now).total_seconds(), 4) # 4 for some buffer # idle for 3 seconds because idle_time is less than two rounds of timeout now = utcnow() self.assertIsNone(w.dequeue_job_and_maintain_ttl(2, max_idle_time=3)) self.assertLess((utcnow() - now).total_seconds(), 5) # 5 for some 
buffer @slow # noqa def test_timeouts(self): """Worker kills jobs after timeout.""" sentinel_file = '/tmp/.rq_sentinel' q = Queue() w = Worker([q]) # Put it on the queue with a timeout value res = q.enqueue(create_file_after_timeout, args=(sentinel_file, 4), job_timeout=1) try: os.unlink(sentinel_file) except OSError as e: if e.errno == 2: pass self.assertEqual(os.path.exists(sentinel_file), False) w.work(burst=True) self.assertEqual(os.path.exists(sentinel_file), False) # TODO: Having to do the manual refresh() here is really ugly! res.refresh() self.assertIn('JobTimeoutException', as_text(res.exc_info)) def test_dequeue_job_and_maintain_ttl_non_blocking(self): """Not passing a timeout should return immediately with None as a result""" q = Queue() w = Worker([q]) self.assertIsNone(w.dequeue_job_and_maintain_ttl(None)) def test_worker_ttl_param_resolves_timeout(self): """ Ensures the worker_ttl param is being considered in the dequeue_timeout and connection_timeout params, takes into account 15 seconds gap (hard coded) """ q = Queue() w = Worker([q]) self.assertEqual(w.dequeue_timeout, 405) self.assertEqual(w.connection_timeout, 415) w = Worker([q], default_worker_ttl=500) self.assertEqual(w.dequeue_timeout, 485) self.assertEqual(w.connection_timeout, 495) def test_worker_sets_result_ttl(self): """Ensure that Worker properly sets result_ttl for individual jobs.""" q = Queue() job = q.enqueue(say_hello, args=('Frank',), result_ttl=10) w = Worker([q]) self.assertIn(job.get_id().encode(), self.testconn.lrange(q.key, 0, -1)) w.work(burst=True) self.assertNotEqual(self.testconn.ttl(job.key), 0) self.assertNotIn(job.get_id().encode(), self.testconn.lrange(q.key, 0, -1)) # Job with -1 result_ttl don't expire job = q.enqueue(say_hello, args=('Frank',), result_ttl=-1) w = Worker([q]) self.assertIn(job.get_id().encode(), self.testconn.lrange(q.key, 0, -1)) w.work(burst=True) self.assertEqual(self.testconn.ttl(job.key), -1) self.assertNotIn(job.get_id().encode(), self.testconn.lrange(q.key, 0, -1)) # Job with result_ttl = 0 gets deleted immediately job = q.enqueue(say_hello, args=('Frank',), result_ttl=0) w = Worker([q]) self.assertIn(job.get_id().encode(), self.testconn.lrange(q.key, 0, -1)) w.work(burst=True) self.assertEqual(self.testconn.get(job.key), None) self.assertNotIn(job.get_id().encode(), self.testconn.lrange(q.key, 0, -1)) def test_worker_sets_job_status(self): """Ensure that worker correctly sets job status.""" q = Queue() w = Worker([q]) job = q.enqueue(say_hello) self.assertEqual(job.get_status(), JobStatus.QUEUED) self.assertEqual(job.is_queued, True) self.assertEqual(job.is_finished, False) self.assertEqual(job.is_failed, False) w.work(burst=True) job = Job.fetch(job.id) self.assertEqual(job.get_status(), JobStatus.FINISHED) self.assertEqual(job.is_queued, False) self.assertEqual(job.is_finished, True) self.assertEqual(job.is_failed, False) # Failed jobs should set status to "failed" job = q.enqueue(div_by_zero, args=(1,)) w.work(burst=True) job = Job.fetch(job.id) self.assertEqual(job.get_status(), JobStatus.FAILED) self.assertEqual(job.is_queued, False) self.assertEqual(job.is_finished, False) self.assertEqual(job.is_failed, True) def test_get_current_job(self): """Ensure worker.get_current_job() works properly""" q = Queue() worker = Worker([q]) job = q.enqueue_call(say_hello) self.assertEqual(self.testconn.hget(worker.key, 'current_job'), None) worker.set_current_job_id(job.id) self.assertEqual(worker.get_current_job_id(), as_text(self.testconn.hget(worker.key, 'current_job'))) 
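        # For illustration only (not part of the test suite): from inside a
        # running task, the documented way to reach the current job is
        # rq.get_current_job(); `my_task` here is hypothetical:
        #
        #   from rq import get_current_job
        #
        #   def my_task():
        #       job = get_current_job()
        #       job.meta['progress'] = 100
        #       job.save_meta()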
self.assertEqual(worker.get_current_job(), job) def test_custom_job_class(self): """Ensure Worker accepts custom job class.""" q = Queue() worker = Worker([q], job_class=CustomJob) self.assertEqual(worker.job_class, CustomJob) def test_custom_queue_class(self): """Ensure Worker accepts custom queue class.""" q = CustomQueue() worker = Worker([q], queue_class=CustomQueue) self.assertEqual(worker.queue_class, CustomQueue) def test_custom_queue_class_is_not_global(self): """Ensure Worker custom queue class is not global.""" q = CustomQueue() worker_custom = Worker([q], queue_class=CustomQueue) q_generic = Queue() worker_generic = Worker([q_generic]) self.assertEqual(worker_custom.queue_class, CustomQueue) self.assertEqual(worker_generic.queue_class, Queue) self.assertEqual(Worker.queue_class, Queue) def test_custom_job_class_is_not_global(self): """Ensure Worker custom job class is not global.""" q = Queue() worker_custom = Worker([q], job_class=CustomJob) q_generic = Queue() worker_generic = Worker([q_generic]) self.assertEqual(worker_custom.job_class, CustomJob) self.assertEqual(worker_generic.job_class, Job) self.assertEqual(Worker.job_class, Job) def test_work_via_simpleworker(self): """Worker processes work, with forking disabled, then returns.""" fooq, barq = Queue('foo'), Queue('bar') w = SimpleWorker([fooq, barq]) self.assertEqual(w.work(burst=True), False, 'Did not expect any work on the queue.') job = fooq.enqueue(say_pid) self.assertEqual(w.work(burst=True), True, 'Expected at least some work done.') self.assertEqual(job.result, os.getpid(), 'PID mismatch, fork() is not supposed to happen here') def test_simpleworker_heartbeat_ttl(self): """SimpleWorker's key must last longer than job.timeout when working""" queue = Queue('foo') worker = SimpleWorker([queue]) job_timeout = 300 job = queue.enqueue(save_key_ttl, worker.key, job_timeout=job_timeout) worker.work(burst=True) job.refresh() self.assertGreater(job.meta['ttl'], job_timeout) def test_prepare_job_execution(self): """Prepare job execution does the necessary bookkeeping.""" queue = Queue(connection=self.testconn) job = queue.enqueue(say_hello) worker = Worker([queue]) worker.prepare_job_execution(job) # Updates working queue registry = StartedJobRegistry(connection=self.testconn) self.assertEqual(registry.get_job_ids(), [job.id]) # Updates worker's current job self.assertEqual(worker.get_current_job_id(), job.id) # job status is also updated self.assertEqual(job._status, JobStatus.STARTED) self.assertEqual(job.worker_name, worker.name) @skipIf(get_version(Redis()) < (6, 2, 0), 'Skip if Redis server < 6.2.0') def test_prepare_job_execution_removes_key_from_intermediate_queue(self): """Prepare job execution removes job from intermediate queue.""" queue = Queue(connection=self.testconn) job = queue.enqueue(say_hello) Queue.dequeue_any([queue], timeout=None, connection=self.testconn) self.assertIsNotNone(self.testconn.lpos(queue.intermediate_queue_key, job.id)) worker = Worker([queue]) worker.prepare_job_execution(job, remove_from_intermediate_queue=True) self.assertIsNone(self.testconn.lpos(queue.intermediate_queue_key, job.id)) self.assertEqual(queue.count, 0) @skipIf(get_version(Redis()) < (6, 2, 0), 'Skip if Redis server < 6.2.0') def test_work_removes_key_from_intermediate_queue(self): """Worker removes job from intermediate queue.""" queue = Queue(connection=self.testconn) job = queue.enqueue(say_hello) worker = Worker([queue]) worker.work(burst=True) self.assertIsNone(self.testconn.lpos(queue.intermediate_queue_key, job.id)) 
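    # For illustration only (not part of the test suite): the intermediate queue
    # checked above backs the reliable-queue pattern -- LMOVE atomically moves a
    # job id from the queue list to an intermediate list, so a crash between
    # dequeue and execution can't lose the id. Roughly, in redis-py terms:
    #
    #   job_id = connection.lmove(queue.key, queue.intermediate_queue_key)
    #   # ... execute the job, then remove job_id from the intermediate list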
    def test_work_unicode_friendly(self):
        """Worker processes work with unicode description, then quits."""
        q = Queue('foo')
        w = Worker([q])
        job = q.enqueue('tests.fixtures.say_hello', name='Adam', description='你好 世界!')
        self.assertEqual(w.work(burst=True), True, 'Expected at least some work done.')
        self.assertEqual(job.result, 'Hi there, Adam!')
        self.assertEqual(job.description, '你好 世界!')

    def test_work_log_unicode_friendly(self):
        """Worker processes work with unicode or str other than pure ascii content,
        logging work properly"""
        q = Queue("foo")
        w = Worker([q])

        job = q.enqueue('tests.fixtures.say_hello', name='阿达姆', description='你好 世界!')
        w.work(burst=True)
        self.assertEqual(job.get_status(), JobStatus.FINISHED)

        job = q.enqueue('tests.fixtures.say_hello_unicode', name='阿达姆', description='你好 世界!')
        w.work(burst=True)
        self.assertEqual(job.get_status(), JobStatus.FINISHED)

    def test_suspend_worker_execution(self):
        """Test Pause Worker Execution"""
        SENTINEL_FILE = '/tmp/rq-tests.txt'  # noqa

        try:
            # Remove the sentinel if it is leftover from a previous test run
            os.remove(SENTINEL_FILE)
        except OSError as e:
            if e.errno != 2:
                raise

        q = Queue()
        q.enqueue(create_file, SENTINEL_FILE)

        w = Worker([q])

        suspend(self.testconn)

        w.work(burst=True)
        assert q.count == 1

        # Should not have created evidence of execution
        self.assertEqual(os.path.exists(SENTINEL_FILE), False)

        resume(self.testconn)
        w.work(burst=True)
        assert q.count == 0
        self.assertEqual(os.path.exists(SENTINEL_FILE), True)

    @slow
    def test_suspend_with_duration(self):
        q = Queue()
        for _ in range(5):
            q.enqueue(do_nothing)

        w = Worker([q])

        # This suspends workers from working for 2 seconds
        suspend(self.testconn, 2)

        # So when this burst of work happens the queue should remain at 5
        w.work(burst=True)
        assert q.count == 5

        sleep(3)

        # The suspension should be expired now, and a burst of work should now clear the queue
        w.work(burst=True)
        assert q.count == 0

    def test_worker_hash_(self):
        """Workers are hashed by their .name attribute"""
        q = Queue('foo')
        w1 = Worker([q], name="worker1")
        w2 = Worker([q], name="worker2")
        w3 = Worker([q], name="worker1")
        worker_set = set([w1, w2, w3])
        self.assertEqual(len(worker_set), 2)

    def test_worker_sets_birth(self):
        """Ensure worker correctly sets worker birth date."""
        q = Queue()
        w = Worker([q])

        w.register_birth()

        birth_date = w.birth_date
        self.assertIsNotNone(birth_date)
        self.assertEqual(type(birth_date).__name__, 'datetime')

    def test_worker_sets_death(self):
        """Ensure worker correctly sets worker death date."""
        q = Queue()
        w = Worker([q])

        w.register_death()

        death_date = w.death_date
        self.assertIsNotNone(death_date)
        self.assertIsInstance(death_date, datetime)

    def test_clean_queue_registries(self):
        """worker.clean_registries sets last_cleaned_at and cleans registries."""
        foo_queue = Queue('foo', connection=self.testconn)
        foo_registry = StartedJobRegistry('foo', connection=self.testconn)
        self.testconn.zadd(foo_registry.key, {'foo': 1})
        self.assertEqual(self.testconn.zcard(foo_registry.key), 1)

        bar_queue = Queue('bar', connection=self.testconn)
        bar_registry = StartedJobRegistry('bar', connection=self.testconn)
        self.testconn.zadd(bar_registry.key, {'bar': 1})
        self.assertEqual(self.testconn.zcard(bar_registry.key), 1)

        worker = Worker([foo_queue, bar_queue])
        self.assertEqual(worker.last_cleaned_at, None)
        worker.clean_registries()
        self.assertNotEqual(worker.last_cleaned_at, None)
        self.assertEqual(len(foo_registry), 0)
        self.assertEqual(len(bar_registry), 0)

    def test_should_run_maintenance_tasks(self):
        """Workers should run maintenance tasks on
    def test_should_run_maintenance_tasks(self):
        """Workers should run maintenance tasks on startup and every hour."""
        queue = Queue(connection=self.testconn)
        worker = Worker(queue)
        self.assertTrue(worker.should_run_maintenance_tasks)

        worker.last_cleaned_at = utcnow()
        self.assertFalse(worker.should_run_maintenance_tasks)
        worker.last_cleaned_at = utcnow() - timedelta(seconds=DEFAULT_MAINTENANCE_TASK_INTERVAL + 100)
        self.assertTrue(worker.should_run_maintenance_tasks)

        # custom maintenance_interval
        worker = Worker(queue, maintenance_interval=10)
        self.assertTrue(worker.should_run_maintenance_tasks)
        worker.last_cleaned_at = utcnow()
        self.assertFalse(worker.should_run_maintenance_tasks)
        worker.last_cleaned_at = utcnow() - timedelta(seconds=11)
        self.assertTrue(worker.should_run_maintenance_tasks)

    def test_worker_calls_clean_registries(self):
        """Worker calls clean_registries when run."""
        queue = Queue(connection=self.connection)
        registry = StartedJobRegistry(connection=self.connection)
        self.testconn.zadd(registry.key, {'foo': 1})
        worker = Worker(queue, connection=self.connection)
        worker.work(burst=True)
        self.assertEqual(len(registry), 0)

    def test_job_dependency_race_condition(self):
        """Dependencies added while the job gets finished shouldn't get lost."""
        # This patches enqueue_dependents to enqueue a new dependency AFTER
        # the original code was executed.
        orig_enqueue_dependents = Queue.enqueue_dependents

        def new_enqueue_dependents(self, job, *args, **kwargs):
            orig_enqueue_dependents(self, job, *args, **kwargs)
            if hasattr(Queue, '_add_enqueue') and Queue._add_enqueue is not None and Queue._add_enqueue.id == job.id:
                Queue._add_enqueue = None
                Queue().enqueue_call(say_hello, depends_on=job)

        Queue.enqueue_dependents = new_enqueue_dependents

        q = Queue()
        w = Worker([q])
        with mock.patch.object(Worker, 'execute_job', wraps=w.execute_job) as mocked:
            parent_job = q.enqueue(say_hello, result_ttl=0)
            Queue._add_enqueue = parent_job
            job = q.enqueue_call(say_hello, depends_on=parent_job)
            w.work(burst=True)
            job = Job.fetch(job.id)
            self.assertEqual(job.get_status(), JobStatus.FINISHED)

            # The created spy checks two issues:
            # * before the fix of #739, 2 of the 3 jobs were executed due
            #   to the race condition
            # * during the development another issue was fixed:
            #   due to a missing pipeline usage in Queue.enqueue_job, the job
            #   which was enqueued before the "rollback" was executed twice.
            # So before that fix the call count was 4 instead of 3
            self.assertEqual(mocked.call_count, 3)

    def test_self_modification_persistence(self):
        """Make sure that any meta modification done by the job itself
        persists completely through the queue/worker/job stack."""
        q = Queue()
        # Also make sure that previously existing metadata persists properly
        job = q.enqueue(modify_self, meta={'foo': 'bar', 'baz': 42}, args=[{'baz': 10, 'newinfo': 'waka'}])

        w = Worker([q])
        w.work(burst=True)

        job_check = Job.fetch(job.id)
        self.assertEqual(job_check.meta['foo'], 'bar')
        self.assertEqual(job_check.meta['baz'], 10)
        self.assertEqual(job_check.meta['newinfo'], 'waka')

    def test_self_modification_persistence_with_error(self):
        """Make sure that any meta modification done by the job itself
        persists completely through the queue/worker/job stack -- even if
        the job errored"""
        q = Queue()
        # Also make sure that previously existing metadata persists properly
        job = q.enqueue(modify_self_and_error, meta={'foo': 'bar', 'baz': 42}, args=[{'baz': 10, 'newinfo': 'waka'}])

        w = Worker([q])
        w.work(burst=True)

        # Postconditions
        self.assertEqual(q.count, 0)
        failed_job_registry = FailedJobRegistry(queue=q)
        self.assertTrue(job in failed_job_registry)
        self.assertEqual(w.get_current_job_id(), None)

        job_check = Job.fetch(job.id)
        self.assertEqual(job_check.meta['foo'], 'bar')
        self.assertEqual(job_check.meta['baz'], 10)
        self.assertEqual(job_check.meta['newinfo'], 'waka')

    @mock.patch('rq.worker.logger.info')
    def test_log_result_lifespan_true(self, mock_logger_info):
        """Check that log_result_lifespan True causes job lifespan to be logged."""
        q = Queue()
        w = Worker([q])
        job = q.enqueue(say_hello, args=('Frank',), result_ttl=10)
        w.perform_job(job, q)
        mock_logger_info.assert_called_with('Result is kept for %s seconds', 10)
        self.assertIn('Result is kept for %s seconds', [c[0][0] for c in mock_logger_info.call_args_list])

    @mock.patch('rq.worker.logger.info')
    def test_log_result_lifespan_false(self, mock_logger_info):
        """Check that log_result_lifespan False causes job lifespan to not be logged."""
        q = Queue()

        class TestWorker(Worker):
            log_result_lifespan = False

        w = TestWorker([q])
        job = q.enqueue(say_hello, args=('Frank',), result_ttl=10)
        w.perform_job(job, q)
        self.assertNotIn('Result is kept for 10 seconds', [c[0][0] for c in mock_logger_info.call_args_list])

    @mock.patch('rq.worker.logger.info')
    def test_log_job_description_true(self, mock_logger_info):
        """Check that log_job_description True causes the job description to be logged."""
        q = Queue()
        w = Worker([q])
        q.enqueue(say_hello, args=('Frank',), result_ttl=10)
        w.dequeue_job_and_maintain_ttl(10)
        self.assertIn("Frank", mock_logger_info.call_args[0][2])

    @mock.patch('rq.worker.logger.info')
    def test_log_job_description_false(self, mock_logger_info):
        """Check that log_job_description False causes the job description to not be logged."""
        q = Queue()
        w = Worker([q], log_job_description=False)
        q.enqueue(say_hello, args=('Frank',), result_ttl=10)
        w.dequeue_job_and_maintain_ttl(10)
        self.assertNotIn("Frank", mock_logger_info.call_args[0][2])

    def test_worker_configures_socket_timeout(self):
        """Ensures that the worker correctly updates Redis client connection to have a socket_timeout"""
        q = Queue()
        _ = Worker([q])
        connection_kwargs = q.connection.connection_pool.connection_kwargs
        self.assertEqual(connection_kwargs["socket_timeout"], 415)

    def test_worker_version(self):
        q = Queue()
        w = Worker([q])
        w.version = '0.0.0'
        w.register_birth()
        self.assertEqual(w.version, '0.0.0')
        w.refresh()
        self.assertEqual(w.version, '0.0.0')
        # making sure that version is preserved when worker is retrieved by key
        worker = Worker.find_by_key(w.key)
        self.assertEqual(worker.version, '0.0.0')

    def test_python_version(self):
        python_version = sys.version
        q = Queue()
        w = Worker([q])
        w.register_birth()
        self.assertEqual(w.python_version, python_version)
        # now patching version
        python_version = 'X.Y.Z.final'  # dummy version
        self.assertNotEqual(python_version, sys.version)  # otherwise tests are pointless
        w2 = Worker([q])
        w2.python_version = python_version
        w2.register_birth()
        self.assertEqual(w2.python_version, python_version)
        # making sure that version is preserved when worker is retrieved by key
        worker = Worker.find_by_key(w2.key)
        self.assertEqual(worker.python_version, python_version)

    def test_dequeue_random_strategy(self):
        qs = [Queue('q%d' % i) for i in range(5)]

        for i in range(5):
            for j in range(3):
                qs[i].enqueue(say_pid, job_id='q%d_%d' % (i, j))

        w = Worker(qs)
        w.work(burst=True, dequeue_strategy="random")

        start_times = []
        for i in range(5):
            for j in range(3):
                job = Job.fetch('q%d_%d' % (i, j))
                start_times.append(('q%d_%d' % (i, j), job.started_at))
        sorted_by_time = sorted(start_times, key=lambda tup: tup[1])
        sorted_ids = [tup[0] for tup in sorted_by_time]
        expected_rr = ['q%d_%d' % (i, j) for j in range(3) for i in range(5)]
        expected_ser = ['q%d_%d' % (i, j) for i in range(5) for j in range(3)]

        self.assertNotEqual(sorted_ids, expected_rr)
        self.assertNotEqual(sorted_ids, expected_ser)
        expected_rr.reverse()
        expected_ser.reverse()
        self.assertNotEqual(sorted_ids, expected_rr)
        self.assertNotEqual(sorted_ids, expected_ser)

        sorted_ids.sort()
        expected_ser.sort()
        self.assertEqual(sorted_ids, expected_ser)

    def test_request_force_stop_ignores_consecutive_signals(self):
        """Ignore signals sent within 1 second of the last signal"""
        queue = Queue(connection=self.connection)
        worker = Worker([queue])
        worker._horse_pid = 1
        worker._shutdown_requested_date = utcnow()
        with mock.patch.object(worker, 'kill_horse') as mocked:
            worker.request_force_stop(1, frame=None)
            self.assertEqual(mocked.call_count, 0)

        # If signal is sent a few seconds after, kill_horse() is called
        worker._shutdown_requested_date = utcnow() - timedelta(seconds=2)
        with mock.patch.object(worker, 'kill_horse') as mocked:
            self.assertRaises(SystemExit, worker.request_force_stop, 1, frame=None)

    def test_dequeue_round_robin(self):
        qs = [Queue('q%d' % i) for i in range(5)]

        for i in range(5):
            for j in range(3):
                qs[i].enqueue(say_pid, job_id='q%d_%d' % (i, j))

        w = Worker(qs)
        w.work(burst=True, dequeue_strategy="round_robin")

        start_times = []
        for i in range(5):
            for j in range(3):
                job = Job.fetch('q%d_%d' % (i, j))
                start_times.append(('q%d_%d' % (i, j), job.started_at))
        sorted_by_time = sorted(start_times, key=lambda tup: tup[1])
        sorted_ids = [tup[0] for tup in sorted_by_time]
        expected = [
            'q0_0', 'q1_0', 'q2_0', 'q3_0', 'q4_0',
            'q0_1', 'q1_1', 'q2_1', 'q3_1', 'q4_1',
            'q0_2', 'q1_2', 'q2_2', 'q3_2', 'q4_2',
        ]

        self.assertEqual(expected, sorted_ids)


def wait_and_kill_work_horse(pid, time_to_wait=0.0):
    time.sleep(time_to_wait)
    os.kill(pid, signal.SIGKILL)


class TimeoutTestCase:
    def setUp(self):
        # we want tests to fail if signals are ignored and the work remains
        # running, so set an alarm to kill them after X seconds
        self.killtimeout = 15
        signal.signal(signal.SIGALRM, self._timeout)
        signal.alarm(self.killtimeout)

    def _timeout(self, signal, frame):
        raise AssertionError(
            "test still running after %i seconds, likely the worker wasn't shut down correctly" % self.killtimeout
        )
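# The shutdown tests below spawn a helper process that sends SIGTERM back to
# this (worker) process: a single SIGTERM requests a warm shutdown, letting
# any ongoing job finish, while a second SIGTERM forces a cold shutdown that
# kills the work horse immediately.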
class WorkerShutdownTestCase(TimeoutTestCase, RQTestCase):
    @slow
    def test_idle_worker_warm_shutdown(self):
        """worker with no ongoing job receiving single SIGTERM signal and shutting down"""
        w = Worker('foo')
        self.assertFalse(w._stop_requested)
        p = Process(target=kill_worker, args=(os.getpid(), False))
        p.start()

        w.work()

        p.join(1)
        self.assertFalse(w._stop_requested)

    @slow
    def test_working_worker_warm_shutdown(self):
        """worker with an ongoing job receiving single SIGTERM signal,
        allowing job to finish then shutting down"""
        fooq = Queue('foo')
        w = Worker(fooq)

        sentinel_file = '/tmp/.rq_sentinel_warm'
        fooq.enqueue(create_file_after_timeout, sentinel_file, 2)
        self.assertFalse(w._stop_requested)
        p = Process(target=kill_worker, args=(os.getpid(), False))
        p.start()

        w.work()

        p.join(2)
        self.assertFalse(p.is_alive())
        self.assertTrue(w._stop_requested)
        self.assertTrue(os.path.exists(sentinel_file))

        self.assertIsNotNone(w.shutdown_requested_date)
        self.assertEqual(type(w.shutdown_requested_date).__name__, 'datetime')

    @slow
    def test_working_worker_cold_shutdown(self):
        """Busy worker shuts down immediately on double SIGTERM signal"""
        fooq = Queue('foo')
        w = Worker(fooq)

        sentinel_file = '/tmp/.rq_sentinel_cold'
        self.assertFalse(
            os.path.exists(sentinel_file), f'{sentinel_file} file should not exist yet, delete that file and try again.'
        )
        fooq.enqueue(create_file_after_timeout, sentinel_file, 5)
        self.assertFalse(w._stop_requested)
        p = Process(target=kill_worker, args=(os.getpid(), True))
        p.start()

        self.assertRaises(SystemExit, w.work)

        p.join(1)
        self.assertTrue(w._stop_requested)
        self.assertFalse(os.path.exists(sentinel_file))

        shutdown_requested_date = w.shutdown_requested_date
        self.assertIsNotNone(shutdown_requested_date)
        self.assertEqual(type(shutdown_requested_date).__name__, 'datetime')

    @slow
    def test_work_horse_death_sets_job_failed(self):
        """worker with an ongoing job whose work horse dies unexpectedly (before
        completing the job) should set the job's status to FAILED
        """
        fooq = Queue('foo')
        self.assertEqual(fooq.count, 0)
        w = Worker(fooq)
        sentinel_file = '/tmp/.rq_sentinel_work_horse_death'
        if os.path.exists(sentinel_file):
            os.remove(sentinel_file)
        fooq.enqueue(create_file_after_timeout, sentinel_file, 100)
        job, queue = w.dequeue_job_and_maintain_ttl(5)
        w.fork_work_horse(job, queue)
        p = Process(target=wait_and_kill_work_horse, args=(w._horse_pid, 0.5))
        p.start()
        w.monitor_work_horse(job, queue)
        job_status = job.get_status()
        p.join(1)
        self.assertEqual(job_status, JobStatus.FAILED)
        failed_job_registry = FailedJobRegistry(queue=fooq)
        self.assertTrue(job in failed_job_registry)
        self.assertEqual(fooq.count, 0)
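    # monitor_work_horse() checks on the forked work horse every
    # job_monitoring_interval seconds and calls handle_work_horse_killed()
    # if the horse died without finishing its job.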
""" fooq = Queue('foo') self.assertEqual(fooq.count, 0) w = Worker([fooq], job_monitoring_interval=1) sentinel_file = '/tmp/.rq_sentinel_work_horse_death' if os.path.exists(sentinel_file): os.remove(sentinel_file) job = fooq.enqueue(launch_process_within_worker_and_store_pid, sentinel_file, 100) _, queue = w.dequeue_job_and_maintain_ttl(5) w.prepare_job_execution(job) w.fork_work_horse(job, queue) job.timeout = 5 time.sleep(1) with open(sentinel_file) as f: subprocess_pid = int(f.read().strip()) self.assertTrue(psutil.pid_exists(subprocess_pid)) with mock.patch.object(w, 'handle_work_horse_killed', wraps=w.handle_work_horse_killed) as mocked: w.monitor_work_horse(job, queue) self.assertEqual(mocked.call_count, 1) fudge_factor = 1 total_time = w.job_monitoring_interval + 65 + fudge_factor now = utcnow() self.assertTrue((utcnow() - now).total_seconds() < total_time) self.assertEqual(job.get_status(), JobStatus.FAILED) failed_job_registry = FailedJobRegistry(queue=fooq) self.assertTrue(job in failed_job_registry) self.assertEqual(fooq.count, 0) self.assertFalse(psutil.pid_exists(subprocess_pid)) def schedule_access_self(): q = Queue('default', connection=get_current_connection()) q.enqueue(access_self) @pytest.mark.skipif(sys.platform == 'darwin', reason='Fails on OS X') class TestWorkerSubprocess(RQTestCase): def setUp(self): super().setUp() db_num = self.testconn.connection_pool.connection_kwargs['db'] self.redis_url = 'redis://127.0.0.1:6379/%d' % db_num def test_run_empty_queue(self): """Run the worker in its own process with an empty queue""" subprocess.check_call(['rqworker', '-u', self.redis_url, '-b']) def test_run_access_self(self): """Schedule a job, then run the worker as subprocess""" q = Queue() job = q.enqueue(access_self) subprocess.check_call(['rqworker', '-u', self.redis_url, '-b']) registry = FinishedJobRegistry(queue=q) self.assertTrue(job in registry) assert q.count == 0 @skipIf('pypy' in sys.version.lower(), 'often times out with pypy') def test_run_scheduled_access_self(self): """Schedule a job that schedules a job, then run the worker as subprocess""" q = Queue() job = q.enqueue(schedule_access_self) subprocess.check_call(['rqworker', '-u', self.redis_url, '-b']) registry = FinishedJobRegistry(queue=q) self.assertTrue(job in registry) assert q.count == 0 @pytest.mark.skipif(sys.platform == 'darwin', reason='requires Linux signals') @skipIf('pypy' in sys.version.lower(), 'these tests often fail on pypy') class HerokuWorkerShutdownTestCase(TimeoutTestCase, RQTestCase): def setUp(self): super().setUp() self.sandbox = '/tmp/rq_shutdown/' os.makedirs(self.sandbox) def tearDown(self): shutil.rmtree(self.sandbox, ignore_errors=True) @slow def test_immediate_shutdown(self): """Heroku work horse shutdown with immediate (0 second) kill""" p = Process(target=run_dummy_heroku_worker, args=(self.sandbox, 0)) p.start() time.sleep(0.5) os.kill(p.pid, signal.SIGRTMIN) p.join(2) self.assertEqual(p.exitcode, 1) self.assertTrue(os.path.exists(os.path.join(self.sandbox, 'started'))) self.assertFalse(os.path.exists(os.path.join(self.sandbox, 'finished'))) @slow def test_1_sec_shutdown(self): """Heroku work horse shutdown with 1 second kill""" p = Process(target=run_dummy_heroku_worker, args=(self.sandbox, 1)) p.start() time.sleep(0.5) os.kill(p.pid, signal.SIGRTMIN) time.sleep(0.1) self.assertEqual(p.exitcode, None) p.join(2) self.assertEqual(p.exitcode, 1) self.assertTrue(os.path.exists(os.path.join(self.sandbox, 'started'))) self.assertFalse(os.path.exists(os.path.join(self.sandbox, 
    @slow
    def test_shutdown_double_sigrtmin(self):
        """Heroku work horse shutdown with long delay but SIGRTMIN sent twice"""
        p = Process(target=run_dummy_heroku_worker, args=(self.sandbox, 10))
        p.start()
        time.sleep(0.5)

        os.kill(p.pid, signal.SIGRTMIN)
        # we have to wait a short while, otherwise the second signal won't be processed.
        time.sleep(0.1)
        os.kill(p.pid, signal.SIGRTMIN)

        p.join(2)
        self.assertEqual(p.exitcode, 1)
        self.assertTrue(os.path.exists(os.path.join(self.sandbox, 'started')))
        self.assertFalse(os.path.exists(os.path.join(self.sandbox, 'finished')))

    @mock.patch('rq.worker.logger.info')
    def test_handle_shutdown_request(self, mock_logger_info):
        """Mutate HerokuWorker so _horse_pid refers to an artificial process
        and test handle_warm_shutdown_request"""
        w = HerokuWorker('foo')

        path = os.path.join(self.sandbox, 'shouldnt_exist')
        p = Process(target=create_file_after_timeout_and_setsid, args=(path, 2))
        p.start()
        self.assertEqual(p.exitcode, None)
        time.sleep(0.1)

        w._horse_pid = p.pid
        w.handle_warm_shutdown_request()
        p.join(2)
        # would expect p.exitcode to be -34
        self.assertEqual(p.exitcode, -34)
        self.assertFalse(os.path.exists(path))
        mock_logger_info.assert_called_with('Killed horse pid %s', p.pid)

    def test_handle_shutdown_request_no_horse(self):
        """Mutate HerokuWorker so _horse_pid refers to a non-existent process
        and test handle_warm_shutdown_request"""
        w = HerokuWorker('foo')

        w._horse_pid = 19999
        w.handle_warm_shutdown_request()


class TestExceptionHandlerMessageEncoding(RQTestCase):
    def setUp(self):
        super().setUp()
        self.worker = Worker("foo")
        self.worker._exc_handlers = []
        # Mimic how exception info is actually passed forwards
        try:
            raise Exception(u"💪")
        except Exception:
            self.exc_info = sys.exc_info()

    def test_handle_exception_handles_non_ascii_in_exception_message(self):
        """worker.handle_exception doesn't crash on non-ascii in exception message."""
        self.worker.handle_exception(Mock(), *self.exc_info)


class TestRoundRobinWorker(RQTestCase):
    def test_round_robin(self):
        qs = [Queue('q%d' % i) for i in range(5)]

        for i in range(5):
            for j in range(3):
                qs[i].enqueue(say_pid, job_id='q%d_%d' % (i, j))

        w = RoundRobinWorker(qs)
        w.work(burst=True)
        start_times = []
        for i in range(5):
            for j in range(3):
                job = Job.fetch('q%d_%d' % (i, j))
                start_times.append(('q%d_%d' % (i, j), job.started_at))
        sorted_by_time = sorted(start_times, key=lambda tup: tup[1])
        sorted_ids = [tup[0] for tup in sorted_by_time]
        expected = [
            'q0_0', 'q1_0', 'q2_0', 'q3_0', 'q4_0',
            'q0_1', 'q1_1', 'q2_1', 'q3_1', 'q4_1',
            'q0_2', 'q1_2', 'q2_2', 'q3_2', 'q4_2',
        ]
        self.assertEqual(expected, sorted_ids)


class TestRandomWorker(RQTestCase):
    def test_random_worker(self):
        qs = [Queue('q%d' % i) for i in range(5)]

        for i in range(5):
            for j in range(3):
                qs[i].enqueue(say_pid, job_id='q%d_%d' % (i, j))

        w = RandomWorker(qs)
        w.work(burst=True)
        start_times = []
        for i in range(5):
            for j in range(3):
                job = Job.fetch('q%d_%d' % (i, j))
                start_times.append(('q%d_%d' % (i, j), job.started_at))
        sorted_by_time = sorted(start_times, key=lambda tup: tup[1])
        sorted_ids = [tup[0] for tup in sorted_by_time]
        expected_rr = ['q%d_%d' % (i, j) for j in range(3) for i in range(5)]
        expected_ser = ['q%d_%d' % (i, j) for i in range(5) for j in range(3)]
        self.assertNotEqual(sorted_ids, expected_rr)
        self.assertNotEqual(sorted_ids, expected_ser)
        expected_rr.reverse()
        expected_ser.reverse()
        self.assertNotEqual(sorted_ids, expected_rr)
        self.assertNotEqual(sorted_ids, expected_ser)
        sorted_ids.sort()
        expected_ser.sort()
        self.assertEqual(sorted_ids, expected_ser)

rq-1.16.2/tests/test_worker_pool.py

import os
import signal
from multiprocessing import Process
from time import sleep

from rq.connections import parse_connection
from rq.job import JobStatus
from rq.queue import Queue
from rq.serializers import JSONSerializer
from rq.worker import SimpleWorker
from rq.worker_pool import WorkerPool, run_worker
from tests import TestCase
from tests.fixtures import CustomJob, _send_shutdown_command, long_running_job, say_hello


def wait_and_send_shutdown_signal(pid, time_to_wait=0.0):
    sleep(time_to_wait)
    os.kill(pid, signal.SIGTERM)


class TestWorkerPool(TestCase):
    def test_queues(self):
        """Test queue parsing"""
        pool = WorkerPool(['default', 'foo'], connection=self.connection)
        self.assertEqual(
            set(pool.queues), {Queue('default', connection=self.connection), Queue('foo', connection=self.connection)}
        )

    # def test_spawn_workers(self):
    #     """Test spawning workers"""
    #     pool = WorkerPool(['default', 'foo'], connection=self.connection, num_workers=2)
    #     pool.start_workers(burst=False)
    #     self.assertEqual(len(pool.worker_dict.keys()), 2)
    #     pool.stop_workers()

    def test_check_workers(self):
        """Test check_workers()"""
        pool = WorkerPool(['default'], connection=self.connection, num_workers=2)
        pool.start_workers(burst=False)

        # There should be two workers
        pool.check_workers()
        self.assertEqual(len(pool.worker_dict.keys()), 2)

        worker_data = list(pool.worker_dict.values())[0]
        sleep(0.5)
        _send_shutdown_command(worker_data.name, self.connection.connection_pool.connection_kwargs.copy(), delay=0)
        # 1 worker should be dead since we sent a shutdown command
        sleep(0.75)
        pool.check_workers(respawn=False)
        self.assertEqual(len(pool.worker_dict.keys()), 1)

        # If we call `check_workers` with `respawn=True`, the worker should be respawned
        pool.check_workers(respawn=True)
        self.assertEqual(len(pool.worker_dict.keys()), 2)

        pool.stop_workers()

    def test_reap_workers(self):
        """Dead workers are removed from worker_dict"""
        pool = WorkerPool(['default'], connection=self.connection, num_workers=2)
        pool.start_workers(burst=False)

        # There should be two workers
        pool.reap_workers()
        self.assertEqual(len(pool.worker_dict.keys()), 2)

        worker_data = list(pool.worker_dict.values())[0]
        sleep(0.5)
        _send_shutdown_command(worker_data.name, self.connection.connection_pool.connection_kwargs.copy(), delay=0)
        # 1 worker should be dead since we sent a shutdown command
        sleep(0.75)
        pool.reap_workers()
        self.assertEqual(len(pool.worker_dict.keys()), 1)
        pool.stop_workers()

    def test_start(self):
        """Test start()"""
        pool = WorkerPool(['default'], connection=self.connection, num_workers=2)

        p = Process(target=wait_and_send_shutdown_signal, args=(os.getpid(), 0.5))
        p.start()
        pool.start()
        self.assertEqual(pool.status, pool.Status.STOPPED)
        self.assertTrue(pool.all_workers_have_stopped())
        # We need this line so the test doesn't hang
        pool.stop_workers()
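    # `pool.start()` blocks until all workers have exited, so each start() test
    # first spawns a helper process that SIGTERMs this test process shortly
    # after the pool comes up.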
    def test_pool_ignores_consecutive_shutdown_signals(self):
        """If two shutdown signals are sent within one second, only the first one is processed"""
        # Send two shutdown signals within one second while the worker is
        # working on a long-running job. The job should still complete (not be killed).
        pool = WorkerPool(['foo'], connection=self.connection, num_workers=2)
        process_1 = Process(target=wait_and_send_shutdown_signal, args=(os.getpid(), 0.5))
        process_1.start()
        process_2 = Process(target=wait_and_send_shutdown_signal, args=(os.getpid(), 0.5))
        process_2.start()

        queue = Queue('foo', connection=self.connection)
        job = queue.enqueue(long_running_job, 1)
        pool.start(burst=True)
        self.assertEqual(job.get_status(refresh=True), JobStatus.FINISHED)
        # We need this line so the test doesn't hang
        pool.stop_workers()

    def test_run_worker(self):
        """Ensure run_worker() properly spawns a Worker"""
        queue = Queue('foo', connection=self.connection)
        queue.enqueue(say_hello)
        connection_class, pool_class, pool_kwargs = parse_connection(self.connection)
        run_worker('test-worker', ['foo'], connection_class, pool_class, pool_kwargs)
        # Worker should have processed the job
        self.assertEqual(len(queue), 0)

    def test_worker_pool_arguments(self):
        """Ensure arguments are properly used to create the right workers"""
        queue = Queue('foo', connection=self.connection)
        job = queue.enqueue(say_hello)
        pool = WorkerPool([queue], connection=self.connection, num_workers=2, worker_class=SimpleWorker)
        pool.start(burst=True)
        # Worker should have processed the job
        self.assertEqual(job.get_status(refresh=True), JobStatus.FINISHED)

        queue = Queue('json', connection=self.connection, serializer=JSONSerializer)
        job = queue.enqueue(say_hello, 'Hello')
        pool = WorkerPool(
            [queue], connection=self.connection, num_workers=2, worker_class=SimpleWorker, serializer=JSONSerializer
        )
        pool.start(burst=True)
        # Worker should have processed the job
        self.assertEqual(job.get_status(refresh=True), JobStatus.FINISHED)

        pool = WorkerPool([queue], connection=self.connection, num_workers=2, job_class=CustomJob)
        pool.start(burst=True)
        # Worker should have processed the job
        self.assertEqual(job.get_status(refresh=True), JobStatus.FINISHED)


rq-1.16.2/tests/test_worker_registration.py

from unittest.mock import patch

from rq import Queue, Worker
from rq.utils import ceildiv
from rq.worker_registration import (
    REDIS_WORKER_KEYS,
    WORKERS_BY_QUEUE_KEY,
    clean_worker_registry,
    get_keys,
    register,
    unregister,
)
from tests import RQTestCase
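# Worker registration is tracked in plain Redis sets: one global set of all
# worker keys (REDIS_WORKER_KEYS) plus one set per queue
# (WORKERS_BY_QUEUE_KEY % queue.name) listing the workers watching that queue.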
class TestWorkerRegistry(RQTestCase):
    def test_worker_registration(self):
        """Ensure worker.key is correctly set in Redis."""
        foo_queue = Queue(name='foo')
        bar_queue = Queue(name='bar')
        worker = Worker([foo_queue, bar_queue])

        register(worker)
        redis = worker.connection

        self.assertTrue(redis.sismember(worker.redis_workers_keys, worker.key))
        self.assertEqual(Worker.count(connection=redis), 1)
        self.assertTrue(redis.sismember(WORKERS_BY_QUEUE_KEY % foo_queue.name, worker.key))
        self.assertEqual(Worker.count(queue=foo_queue), 1)
        self.assertTrue(redis.sismember(WORKERS_BY_QUEUE_KEY % bar_queue.name, worker.key))
        self.assertEqual(Worker.count(queue=bar_queue), 1)

        unregister(worker)
        self.assertFalse(redis.sismember(worker.redis_workers_keys, worker.key))
        self.assertFalse(redis.sismember(WORKERS_BY_QUEUE_KEY % foo_queue.name, worker.key))
        self.assertFalse(redis.sismember(WORKERS_BY_QUEUE_KEY % bar_queue.name, worker.key))

    def test_get_keys_by_queue(self):
        """get_keys_by_queue only returns active workers for that queue"""
        foo_queue = Queue(name='foo')
        bar_queue = Queue(name='bar')
        baz_queue = Queue(name='baz')

        worker1 = Worker([foo_queue, bar_queue])
        worker2 = Worker([foo_queue])
        worker3 = Worker([baz_queue])

        self.assertEqual(set(), get_keys(foo_queue))

        register(worker1)
        register(worker2)
        register(worker3)

        # get_keys(queue) will return worker keys for that queue
        self.assertEqual(set([worker1.key, worker2.key]), get_keys(foo_queue))
        self.assertEqual(set([worker1.key]), get_keys(bar_queue))

        # get_keys(connection=connection) will return all worker keys
        self.assertEqual(set([worker1.key, worker2.key, worker3.key]), get_keys(connection=worker1.connection))

        # Calling get_keys without arguments raises an exception
        self.assertRaises(ValueError, get_keys)

        unregister(worker1)
        unregister(worker2)
        unregister(worker3)

    def test_clean_registry(self):
        """clean_registry removes worker keys that don't exist in Redis"""
        queue = Queue(name='foo')
        worker = Worker([queue])

        register(worker)
        redis = worker.connection

        self.assertTrue(redis.sismember(worker.redis_workers_keys, worker.key))
        self.assertTrue(redis.sismember(REDIS_WORKER_KEYS, worker.key))

        clean_worker_registry(queue)
        self.assertFalse(redis.sismember(worker.redis_workers_keys, worker.key))
        self.assertFalse(redis.sismember(REDIS_WORKER_KEYS, worker.key))

    def test_clean_large_registry(self):
        """
        clean_registry() splits invalid_keys into multiple lists for set removal
        to avoid sending more than redis can receive
        """
        worker_count = 11
        MAX_KEYS = 6
        SREM_CALL_COUNT = 2

        queue = Queue(name='foo', connection=self.connection)
        for i in range(worker_count):
            worker = Worker([queue], connection=self.connection)
            register(worker)

        # Since we registered 11 workers and set the maximum keys to be deleted in each command to 6,
        # `srem` command should be called a total of 4 times.
        # `srem` is called twice per invalid key group; once for WORKERS_BY_QUEUE_KEY and once for REDIS_WORKER_KEYS
        with patch('rq.worker_registration.MAX_KEYS', MAX_KEYS), patch('redis.client.Pipeline.srem') as mock:
            clean_worker_registry(queue)
            expected_call_count = (ceildiv(worker_count, MAX_KEYS)) * SREM_CALL_COUNT
            self.assertEqual(mock.call_count, expected_call_count)


rq-1.16.2/tests/config_files/__init__.py

rq-1.16.2/tests/config_files/dummy.py

REDIS_HOST = "testhost.example.com"

rq-1.16.2/tests/config_files/dummy_logging.py

# example config taken from
DICT_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {'format': 'MY_LOG_FMT: %(asctime)s [%(levelname)s] %(name)s: %(message)s'},
    },
    'handlers': {
        'default': {
            'level': 'DEBUG',
            'formatter': 'standard',
            'class': 'logging.StreamHandler',
            'stream': 'ext://sys.stdout',  # Default is stderr
        },
    },
    'loggers': {
        'root': {'handlers': ['default'], 'level': 'DEBUG', 'propagate': False},  # root logger
    },
}

rq-1.16.2/tests/config_files/dummy_override.py

REDIS_HOST = "testhost.example.com"
REDIS_PORT = 6378
REDIS_DB = 2
REDIS_PASSWORD = '123'

rq-1.16.2/tests/config_files/sentry.py

REDIS_HOST = "testhost.example.com"
SENTRY_DSN = 'https://123@sentry.io/123'

rq-1.16.2/tests/ssl_config/private.pem

-----BEGIN RSA PRIVATE KEY-----
MIIJKwIBAAKCAgEAwN/TmlUJWSo8rWLAf94FUqWlFieMnitFbeOkpZsVI5ROdUVl NvvCF1h/o6+PTff6kRuRDWMdxQed22Pk40K79mGz8rjgNCRBJehPIUgi27BZZac3 diae4aTgHsp6I0sw4+vT/4xbwfQoF+S2WdRfeoOV3odbFOKrxz2FKNb/p0I8/IbK Dgp/IpcX6z/LmYA0yD77eGxL9TzTW06hoLZByifKp0Q/MmQe6n4h4S1bG2dhAg5G
2twa+B4+lh5j45/WA+OvWzCMkRjI8NuDidxFKdx+ddqqmJdXR6Aivi15oCDzJsvA eRHtFddgHa7+jj2+rx6+D8E9bkwiTQHS23rLWVnB0Fydm2a+G7PyXUGk+Ss+ekyT +83HZfoPDN58k4ZPPG7xhOLYC5bDCNmRo0P4L4CkNj91KQYMdhpuX2LjOtYRR2B7 fmOXAlWIkeo8rJ+i+hCepkXTRTPG0FOzRVnYQfN2IbCFwSizqqRDSO7wlOBs7Q1U bDzgQi2JmpxuUf+/7A6WSAJirxXgTVEhj9YaxKZuGXzx/1+AQ2Dzp1u4Dh0dygxD BghornbBr5KdXRyAC71jszRnFNdHZriijwvgmKV70Jz5WGNxennHcE45HEUiFbI6 AZCJ+zqqlJfZGt5lWO1EPCALrBn5dKm8BzcYniIx1+AGC+mG7oy4NVePc9sCAwEA AQKCAgEAm6SDx6kTsCaLbIeiPA1YUkdlnykvKnxUvMbVGOa6+kk1vyDO+r3S9K/v 4JFNnWedhfeu6BSx80ugMWi9Tj+OGtbhNd/G3YzcHdEH+h2SM6JtocB82xVzZTd9 vJs8ULreqy6llzUW3r8+k3l3RapBmkYRbM/hykrYwCF/EWPeToT/XfEPoKEL00gG f0qt7CMvdOCOYbFS4oXBMY+UknJBSPcvbCeAsBNnd2dtw56sRML534TR3M992/fc HZxMk2VqeR0FZxsYdAaCMQuTbG6aSZurWUOqIxUN07kAEGP2ICg2z3ngylKS9esl nw6WUQa2l+7BBUm1XwqFK4trMr421W5hwdsUCt4iwgYjBdc/uJtOPsnF8wVol7I9 YWooHfnSvztaIYq4qVNU8iCn6KYZ6s+2CMafto/gugQlTNGksUhP0cu70oh3t8bC oeNf8O9ZRfwZzhsSTScVWpNpJxTB19Ofm2o/yU0JUiiH4fGVSSlTzVmP6/9g2tqU iuTjcuM55sOtFmTIWDY3aeKvnGz2peQEgtfdxQa5nkRwt719SplsP3iyjJdArgE/ x2xC162CwDVGCrq1H3JD9/fpZedC3CaYrXDMqI1vAsBcoKBbF3lNAxDnT+8tP2g5 1pGuvaR3+UOUG6sd/8bHycPZU5ba9XcpqXTNG7JRAlji/bdunaECggEBAOzhi6a+ Pmf6Ou6cAveEeGwki5d7qY+4Sw9q7eqOGE/4n3gF7ZQjbkdjSvE4D9Tk4soOcXv2 1o4Hh+Cmgwp4U6BLGgBW94mTUVqXtVkD0HFLpORjSd4HLSu9NCJfCjWH1Gtc/IyM vq6zeSwLIFDm7TZe8hvrfN5sxI6FMsi5T87sXQS1GjlBTVSiIAm2m/q27Hmkrs7u wI22yYmVgnWy7LbReSfhweYzdBQSMItYL+aXQvRsLhHWm+rLzdu8nslZ1gBgiqrs 8lly9SasM1d1E4vFvbtt1w4ZLTdetyq5FgWackgrj1dpHis116onxBa9lTRnAumw O4Dqr1JroTD6anMCggEBANBxAsl/LkhTIUW5biJ8DI6zavI38iZhcrYbDl8G+9i/ JUj4tuSgq8YX3wdoMqkaDX8s6L7YFxYY7Dom4wBhDYrxqiih7RLyxu9UxLjx5TeO f9m9SBwCxa+i05M8gAEyK9jmd/k76yuAqDDeZGuy/rH/itP+BJpsC3QX+8chKIjh /lN3le1OM3TmE9OdGwFG7CxPelKeghd9ES1yvq7yyL7RpCLcwNkKer8X+PQISrUe Q77vmc94p+Zgdacmt2Eu3hgCOk+swtouTmp4W1k0oJTcOIeT+2OF2U2/mZA5B1us smhFvpxObh3RHaxG3R1ciK5xWHWyx78qooc/n1Id7vkCggEBAI+XfV8bbZr7/aNM oSPHgnQThybRiIyda6qx5/zKHATGMmzAMy8cdyoBD5m/oSEtiihvru01SQQZno1Y gpDjNdYyEFXqYe1chvFCi2SlQkKbVx427b0QXppn++Vl9TtT1jkqydCtNJ2UH7zK FdHU2jCeR2cTTcNK7a9zIMC6TJ2jfBNxcK8KXcUS7hbVQiItppVqdajs435EMlEb d1S/nGyJ+EZrvG09/Xx5NkIRuB+wy558wUSA8kzXNDeiVCK8OVRLMWPBdHsyi1bh BdJbHvkYahXm1HkwW893s9LLFYVaBTKobSDQkMAiyFPV/TDHxV1ZoFNmR/uyx4pP wgt9kO8CggEBAMN2NjbdnHkV+0125WBREzV96fvZmqmDGB7MoF1cHy7RkBUtpdQf FvVbzTkU7OzGEYIAiwDrgjqmhF7DuHrSh/CTTg1sSvRJ1WL5CsCjlV7TsfBtHwGl V9urxNt9EEwO0C9Fb5u4JH9W1mF9Ko4T++LOz1CcE5T7XIIxO1kwLuKtieCbc2xk uLwWROFbocdAypeCsCJpoXSFQ2ZrA4TrBnRqApDukaj1usUXpcyxOd091CloZcO4 UTonmix0keIAISRCcovkZZRTeBU/Z+nu/+aX3CrHCiX5jhzqXwZvdAbzmxlMzcGl in1La5fxm8e8zi9G+rzkOYt6X46UisJmb4ECggEBAM2NtCiX85y0YswAx8GpZXz7 8yM9qmR1RJwDA8mnsJYRpyohIbHiPvGGd67W/MyOe2j8EPlMraK9PG/Q9PfkChc0 su5kjH/o2etgSYjykV0e3xKIuGb57gkQjgN6ZXTMBRxo+PqOp8BG/PkiTEbJErod K72zYfnvF1/YfrTHF+uGhF7rUl8Z66nNh1uZLURVE/O1+YRbJrFVi9hxdT+3FGv6 ilq32bGCMopgFOee0CRS4IYJtYJufq+EgmXBt5l6yjr6A1OLUcNQ0tsT88VDgTQe rvaAxK/9DXs3J7gjgsu4Qc/I6oLg+KSCEOSEbZsaYuICas143lC1cLfThlxAYoM= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIF0TCCA7mgAwIBAgIUH0n4JVFqZVeehn7EeRAkjWh0wrowDQYJKoZIhvcNAQEL BQAweDEfMB0GCSqGSIb3DQEJARYQdGVzdEBnZXRyZXNxLmNvbTEPMA0GA1UEAwwG cnEuY29tMQswCQYDVQQKDAJSUTEMMAoGA1UECwwDRW5nMQswCQYDVQQGEwJDQTEN MAsGA1UECAwEVGVzdDENMAsGA1UEBwwEVGVzdDAeFw0yMDExMjUxOTAzMzJaFw0y NTExMjUxOTAzMzJaMHgxHzAdBgkqhkiG9w0BCQEWEHRlc3RAZ2V0cmVzcS5jb20x DzANBgNVBAMMBnJxLmNvbTELMAkGA1UECgwCUlExDDAKBgNVBAsMA0VuZzELMAkG A1UEBhMCQ0ExDTALBgNVBAgMBFRlc3QxDTALBgNVBAcMBFRlc3QwggIiMA0GCSqG SIb3DQEBAQUAA4ICDwAwggIKAoICAQDA39OaVQlZKjytYsB/3gVSpaUWJ4yeK0Vt 
46SlmxUjlE51RWU2+8IXWH+jr49N9/qRG5ENYx3FB53bY+TjQrv2YbPyuOA0JEEl 6E8hSCLbsFllpzd2Jp7hpOAeynojSzDj69P/jFvB9CgX5LZZ1F96g5Xeh1sU4qvH PYUo1v+nQjz8hsoOCn8ilxfrP8uZgDTIPvt4bEv1PNNbTqGgtkHKJ8qnRD8yZB7q fiHhLVsbZ2ECDkba3Br4Hj6WHmPjn9YD469bMIyRGMjw24OJ3EUp3H512qqYl1dH oCK+LXmgIPMmy8B5Ee0V12Adrv6OPb6vHr4PwT1uTCJNAdLbestZWcHQXJ2bZr4b s/JdQaT5Kz56TJP7zcdl+g8M3nyThk88bvGE4tgLlsMI2ZGjQ/gvgKQ2P3UpBgx2 Gm5fYuM61hFHYHt+Y5cCVYiR6jysn6L6EJ6mRdNFM8bQU7NFWdhB83YhsIXBKLOq pENI7vCU4GztDVRsPOBCLYmanG5R/7/sDpZIAmKvFeBNUSGP1hrEpm4ZfPH/X4BD YPOnW7gOHR3KDEMGCGiudsGvkp1dHIALvWOzNGcU10dmuKKPC+CYpXvQnPlYY3F6 ecdwTjkcRSIVsjoBkIn7OqqUl9ka3mVY7UQ8IAusGfl0qbwHNxieIjHX4AYL6Ybu jLg1V49z2wIDAQABo1MwUTAdBgNVHQ4EFgQUFBBOTl94RoNjXrxR9+idaPA6WMEw HwYDVR0jBBgwFoAUFBBOTl94RoNjXrxR9+idaPA6WMEwDwYDVR0TAQH/BAUwAwEB /zANBgkqhkiG9w0BAQsFAAOCAgEAltcc8+Vz+sLnoVrappVJ3iRa20T8J9XwrRt8 zs7WiMORHIh3PIKJVSjd328HwdFBHUJEMc5Vgrwg8rVQYoxRoz2kFj9fMF0fYync ipjL+p4bLGdyWDEHIziJSLULkjgypsW3rRi4MdB8kV8r8zHWVz4enFrztnw8e2Qz i/7FIIxc5i07kttCY4+u8VVZWrzaNt3KUrDQ3yJiBODp1pIMcmCUgx6AG7vhi9Js v1y27GKRW88pIGSHPWDcko2X9JuJuNHdBPYBU2rJXkhA6bh36LUuSJ0ZY2tvHPUw NZWi2DoYb3xaevdUDHS25+LUhFullQRvuS/1r9l8sCRp17xZBUh0rtDJa+keoq3O EADybpmoRKOfNoZLMeJabo/VbQX9qNYVN3rgzCZ/yOdotEKOrr90tw/JSS4CTtMw athKFIHWQwqcL1/xTM3EQ/HpxA6d1qayozMPVj5NnfpYjaBK+PncBTN01u/O45Pw +GGvvILPCsRYLIXp1lM5O3kbL9qffNLYHngQ/yW+R85AzMqbBIB9aaY3M0b4zdVo eIr8vDfTUh1bnzyKLiVWugOPVwfeU0ePg06Kr2yVPwtia4dW7YXm0dXHxn+7sMjg stJ4aqjlOiudLyb3wsRgnFDSzM5YZwtz3hCnbKhgDf5Qayywj/9VJWGpVbuQkmoq QQRVNAs= -----END CERTIFICATE-----

rq-1.16.2/tests/ssl_config/stunnel.conf

cert=/etc/stunnel/private.pem
fips=no
foreground=yes
sslVersion=all
socket=l:TCP_NODELAY=1
socket=r:TCP_NODELAY=1
pid=/var/run/stunnel.pid
debug=0
output=/etc/stunnel/stunnel.log

[redis]
accept = 0.0.0.0:9736
connect = 127.0.0.1:6379

rq-1.16.2/.gitignore

*.pyc
*.egg-info
.DS_Store
/dump.rdb
/.direnv
/.envrc
/.tox
/dist
/build
.tox
.pytest_cache/
.vagrant
Vagrantfile
.idea/
.coverage*
/.cache
Gemfile
Gemfile.lock
_site/
.venv/
.vscode/

rq-1.16.2/LICENSE

Copyright 2012 Vincent Driessen. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY VINCENT DRIESSEN ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
EVENT SHALL VINCENT DRIESSEN OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The views and conclusions contained in the software and documentation are
those of the authors and should not be interpreted as representing official
policies, either expressed or implied, of Vincent Driessen.

rq-1.16.2/README.md

RQ (_Redis Queue_) is a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis, it is designed to have a low barrier to entry, and it can be integrated into your web stack easily.

RQ requires Redis >= 3.0.0.

[![Build status](https://github.com/rq/rq/workflows/Test%20rq/badge.svg)](https://github.com/rq/rq/actions?query=workflow%3A%22Test+rq%22)
[![PyPI](https://img.shields.io/pypi/pyversions/rq.svg)](https://pypi.python.org/pypi/rq)
[![Coverage](https://codecov.io/gh/rq/rq/branch/master/graph/badge.svg)](https://codecov.io/gh/rq/rq)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

Full documentation can be found [here][d].

## Support RQ

If you find RQ useful, please consider supporting this project via [Tidelift](https://tidelift.com/subscription/pkg/pypi-rq?utm_source=pypi-rq&utm_medium=referral&utm_campaign=readme).

## Getting started

First, run a Redis server, of course:

```console
$ redis-server
```

To put jobs on queues, you don't have to do anything special, just define your typically lengthy or blocking function:

```python
import requests

def count_words_at_url(url):
    """Just an example function that's called async."""
    resp = requests.get(url)
    return len(resp.text.split())
```

You do use the excellent [requests][r] package, don't you?

Then, create an RQ queue:

```python
from redis import Redis
from rq import Queue

queue = Queue(connection=Redis())
```

And enqueue the function call:

```python
from my_module import count_words_at_url
job = queue.enqueue(count_words_at_url, 'http://nvie.com')
```
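If you hold on to the job object (or its id), you can check on it later. A minimal sketch, assuming a worker is running and `queue` / `count_words_at_url` are defined as above:

```python
import time

job = queue.enqueue(count_words_at_url, 'http://nvie.com')
time.sleep(2)  # give a running worker a moment to pick the job up
print(job.get_status())  # e.g. 'finished'
print(job.result)        # the function's return value (None while unfinished)
```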
Scheduling jobs is also similarly easy:

```python
# Schedule job to run at 9:15, October 10th
job = queue.enqueue_at(datetime(2019, 10, 10, 9, 15), say_hello)

# Schedule job to run in 10 seconds
job = queue.enqueue_in(timedelta(seconds=10), say_hello)
```

Retrying failed jobs is also supported:

```python
from rq import Retry

# Retry up to 3 times, failed job will be requeued immediately
queue.enqueue(say_hello, retry=Retry(max=3))

# Retry up to 3 times, with configurable intervals between retries
queue.enqueue(say_hello, retry=Retry(max=3, interval=[10, 30, 60]))
```

For a more complete example, refer to the [docs][d]. But this is the essence.

### The worker

To start executing enqueued function calls in the background, start a worker from your project's directory:

```console
$ rq worker --with-scheduler
*** Listening for work on default
Got count_words_at_url('http://nvie.com') from default
Job result = 818
*** Listening for work on default
```

That's about it.

## Installation

Simply use the following command to install the latest released version:

    pip install rq

If you want the cutting edge version (that may well be broken), use this:

    pip install git+https://github.com/rq/rq.git@master#egg=rq

## Related Projects

Check out the repos below, which might be useful in your RQ-based project.

- [rq-dashboard](https://github.com/Parallels/rq-dashboard)
- [rqmonitor](https://github.com/pranavgupta1234/rqmonitor)
- [django-rq](https://github.com/rq/django-rq)
- [Flask-RQ2](https://github.com/rq/Flask-RQ2)
- [rq-scheduler](https://github.com/rq/rq-scheduler)

## Project history

This project has been inspired by the good parts of [Celery][1], [Resque][2] and [this snippet][3], and has been created as a lightweight alternative to the heaviness of Celery or other AMQP-based queueing implementations.

[r]: http://python-requests.org
[d]: http://python-rq.org/
[m]: http://pypi.python.org/pypi/mailer
[p]: http://docs.python.org/library/pickle.html
[1]: http://docs.celeryq.dev/
[2]: https://github.com/resque/resque
[3]: https://github.com/fengsp/flask-snippets/blob/1f65833a4291c5b833b195a09c365aa815baea4e/utilities/rq.py

rq-1.16.2/pyproject.toml

[build-system]
build-backend = "hatchling.build"
requires = [
    "hatchling",
]

[project]
name = "rq"
description = "RQ is a simple, lightweight, library for creating background jobs, and processing them."
readme = "README.md"
license = "BSD-2-Clause"
maintainers = [
    {name = "Selwin Ong"},
]
authors = [
    { name = "Selwin Ong", email = "selwin.ong@gmail.com" },
    { name = "Vincent Driessen", email = "vincent@3rdcloud.com" },
]
requires-python = ">=3.7"
classifiers = [
    "Development Status :: 5 - Production/Stable",
    "Intended Audience :: Developers",
    "Intended Audience :: End Users/Desktop",
    "Intended Audience :: Information Technology",
    "Intended Audience :: Science/Research",
    "Intended Audience :: System Administrators",
    "License :: OSI Approved :: BSD License",
    "Operating System :: MacOS",
    "Operating System :: POSIX",
    "Operating System :: Unix",
    "Programming Language :: Python",
    "Programming Language :: Python :: 3 :: Only",
    "Programming Language :: Python :: 3.7",
    "Programming Language :: Python :: 3.8",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Topic :: Internet",
    "Topic :: Scientific/Engineering",
    "Topic :: Software Development :: Libraries :: Python Modules",
    "Topic :: System :: Distributed Computing",
    "Topic :: System :: Monitoring",
    "Topic :: System :: Systems Administration",
]
dynamic = [
    "version",
]
dependencies = [
    "click>=5",
    "redis>=3.5",
]

[project.urls]
changelog = "https://github.com/rq/rq/blob/master/CHANGES.md"
documentation = "https://python-rq.org/docs/"
homepage = "https://python-rq.org/"
repository = "https://github.com/rq/rq/"

[project.scripts]
rq = "rq.cli:main"
rqinfo = "rq.cli:info"  # TODO [v2]: Remove
rqworker = "rq.cli:worker"  # TODO [v2]: Remove

[tool.hatch.version]
path = "rq/version.py"

[tool.hatch.build.targets.sdist]
include = [
    "/docs",
    "/rq",
    "/tests",
    "CHANGES.md",
    "LICENSE",
    "pyproject.toml",
    "README.md",
    "requirements.txt",
    "tox.ini",
]

[tool.hatch.envs.test]
dependencies = [
    "black",
    "coverage",
    "packaging",
    "psutil",
    "pytest",
    "pytest-cov",
    "ruff",
    "sentry-sdk<2",
    "tox",
]

[tool.hatch.envs.test.scripts]
cov = "pytest --cov=rq --cov-config=.coveragerc --cov-report=xml {args:tests}"

[tool.black]
line-length = 120
target-version = ["py38"]
skip-string-normalization = true

[tool.ruff]
# Set what ruff should check for.
# See https://beta.ruff.rs/docs/rules/ for a list of rules.
select = [
    "E",  # pycodestyle errors
    "F",  # pyflakes errors
    "I",  # import sorting
    "W",  # pycodestyle warnings
]
line-length = 120  # To match black.
target-version = "py38"

[tool.ruff.isort]
known-first-party = ["rq"]
section-order = ["future", "standard-library", "third-party", "first-party", "local-folder"]

rq-1.16.2/PKG-INFO

Metadata-Version: 2.1
Name: rq
Version: 1.16.2
Summary: RQ is a simple, lightweight, library for creating background jobs, and processing them.
Project-URL: changelog, https://github.com/rq/rq/blob/master/CHANGES.md
Project-URL: documentation, https://python-rq.org/docs/
Project-URL: homepage, https://python-rq.org/
Project-URL: repository, https://github.com/rq/rq/
Author-email: Selwin Ong <selwin.ong@gmail.com>, Vincent Driessen <vincent@3rdcloud.com>
Maintainer: Selwin Ong
License-Expression: BSD-2-Clause
License-File: LICENSE
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: End Users/Desktop
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: MacOS
Classifier: Operating System :: POSIX
Classifier: Operating System :: Unix
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Internet
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Distributed Computing
Classifier: Topic :: System :: Monitoring
Classifier: Topic :: System :: Systems Administration
Requires-Python: >=3.7
Requires-Dist: click>=5
Requires-Dist: redis>=3.5
Description-Content-Type: text/markdown