gearman-2.0.2/0000755000076600000240000000000011513151461011741 5ustar mtaistaffgearman-2.0.2/AUTHORS.txt0000644000076600000240000000107211450513501013624 0ustar mtaistaffpython-gearman 2.x: ======================================== Primary authors: * Matthew Tai Contributors (in order of contribution, latest first): * Khaled alHabache :: python 2.4 backward compatibility * Eskil Olsen :: Connection fixes * Julian Krause :: Architectural design python-gearman 1.x ======================================== Primary authors: * Samuel Stauffer Contributors: * Justin Azoff * Kristopher * Eric Sumner gearman-2.0.2/CHANGES.txt0000644000076600000240000000134511513151154013554 0ustar mtaistaffv2.0.2, 2011-01-11 -- Major bug fix release * GearmanClient - Fixed a memory leak in the handler where we never de-allocated completed jobs [GH-6] * GearmanClient - Updated GET_STATUS to ask about a job that wasn't previously submitted [GH-1] * Gearman library - Fixed logging errors when NullHandler wasn't provided [GH-3] v2.0.1, 2010-10-12 -- Minor bug fix release * GearmanJobRequest - Combined `server_status` and `status_updates` into a shared `status` field * GearmanJobRequest.status - `numerator` and `denominator` are now cast to integers * GearmanWorker.send_* - Updated to immediately send commands instead of waiting for the work select loop v2.0.0, 2010-09-28 -- Initial release v2.0.0.beta, 2010-06-15 -- Beta release gearman-2.0.2/docs/0000755000076600000240000000000011513151461012671 5ustar mtaistaffgearman-2.0.2/docs/1to2.rst0000644000076600000240000000555611450424015014221 0ustar mtaistaff============================================== Transitioning from python-gearman 1.x to 2.0.0 ============================================== Client (single task) ==================== :: # python-gearman 1.x old_client = gearman.GearmanClient(['localhost:4730']) old_result = old_client.do_task(Task("echo", "foo")) # python-gearman 2.x new_client = gearman.GearmanClient(['localhost:4730']) current_request = new_client.submit_job('echo', 'foo') new_result = current_request.result Client (multiple tasks) ======================= :: # python-gearman 1.x old_client = gearman.GearmanClient(['localhost:4730']) ts = Taskset([ Task(func="echo", arg="foo"), Task(func="echo", arg="bar"), ]) old_client.do_taskset(ts) for task in ts.values(): assert task.result == task.arg # python-gearman 2.x new_client = gearman.GearmanClient(['localhost:4730']) new_jobs = [ dict(task='echo', data='foo'), dict(task='echo', data='bar'), ] completed_requests = new_client.submit_multiple_jobs(new_jobs) for current_request in completed_requests: assert current_request.result == current_request.job.data Worker ====== :: # python-gearman 1.x class WorkerHook(object): def start(self, current_job): print "Job started" def fail(self, current_job, exc_info): print "Job failed, can't stop last gasp GEARMAN_COMMAND_WORK_FAIL" def complete(self, current_job, result): print "Job complete, can't stop last gasp GEARMAN_COMMAND_WORK_COMPLETE" def callback_fxn(idle, last_job_time): return False old_worker = gearman.GearmanWorker(['localhost:4730']) old_worker.register_function("echo", lambda job:job.arg) old_worker.work(stop_if=callback_fxn, hooks=WorkerHook()) # python-gearman 2.x class CustomGearmanWorker(gearman.GearmanWorker): def on_job_execute(self, current_job): print "Job started" return super(CustomGearmanWorker, self).on_job_execute(current_job) def on_job_exception(self, current_job, exc_info): print "Job failed, CAN stop last gasp 
GEARMAN_COMMAND_WORK_FAIL" return super(CustomGearmanWorker, self).on_job_exception(current_job, exc_info) def on_job_complete(self, current_job, job_result): print "Job failed, CAN stop last gasp GEARMAN_COMMAND_WORK_FAIL" return super(CustomGearmanWorker, self).send_job_complete(current_job, job_result) def after_poll(self, any_activity): # Return True if you want to continue polling, replaces callback_fxn return True def task_callback(gearman_worker, job): return job.data new_worker = CustomGearmanWorker(['localhost:4730']) new_worker.register_task("echo", task_callback) new_worker.work() gearman-2.0.2/docs/admin_client.rst0000644000076600000240000000211211450424015016043 0ustar mtaistaff:mod:`gearman.admin_client` --- Gearman Admin client ==================================================== .. module:: gearman.admin_client :synopsis: Gearman admin client - public interface for querying about server status .. autoclass:: GearmanAdminClient Interacting with a server ------------------------- .. automethod:: GearmanAdminClient.send_maxqueue .. automethod:: GearmanAdminClient.send_shutdown .. automethod:: GearmanAdminClient.get_status .. automethod:: GearmanAdminClient.get_version .. automethod:: GearmanAdminClient.get_workers Checking server state:: gm_admin_client = gearman.GearmanAdminClient(['localhost:4730']) # Inspect server state status_response = gm_admin_client.get_status() version_response = gm_admin_client.get_version() workers_response = gm_admin_client.get_workers() Testing server response times ----------------------------- .. automethod:: GearmanAdminClient.ping_server Checking server response time:: gm_admin_client = gearman.GearmanAdminClient(['localhost:4730']) response_time = gm_admin_client.ping_server() gearman-2.0.2/docs/architecture.rst0000644000076600000240000000231611450424015016105 0ustar mtaistaff=============== Design document =============== Architectural design document for developers GearmanConnectionManager - Bridges low-level I/O <-> command handlers ===================================================================== * Only class that an API user should directly interact with * Manages all I/O: polls connections, reconnects failed connections, etc... * Forwards commands between Connections <-> CommandHandlers * Manages multiple Connections and multple CommandHandlers * Manages global state of an interaction with Gearman (global job lock) GearmanConnection - Manages low-level I/O ========================================= * A single connection between a client/worker and a server * Thinly wrapped socket that can reconnect * Converts binary strings <-> Gearman commands * Manages in/out data buffers for socket-level operations * Manages in/out command buffers for gearman-level operations GearmanCommandHandler - Manages commands ======================================== * Represents the state machine of a single GearmanConnection * 1-1 mapping to a GearmanConnection (via GearmanConnectionManager) * Sends/receives commands ONLY - does no buffering * Handles all command generation / interpretation gearman-2.0.2/docs/client.rst0000644000076600000240000001165011455160537014715 0ustar mtaistaff:mod:`gearman.client` --- Gearman client ========================================== .. module:: gearman.client :synopsis: Gearman client - public interface for requesting jobs Function available to all examples:: def check_request_status(job_request): if job_request.complete: print "Job %s finished! 
Result: %s - %s" % (job_request.job.unique, job_request.state, job_request.result) elif job_request.timed_out: print "Job %s timed out!" % job_request.unique elif job_request.state == JOB_UNKNOWN: print "Job %s connection failed!" % job_request.unique .. autoclass:: GearmanClient Submitting jobs --------------- .. automethod:: GearmanClient.submit_job Sending a simple job as a blocking call:: gm_client = gearman.GearmanClient(['localhost:4730', 'otherhost:4730']) # See gearman/job.py to see attributes on the GearmanJobRequest # Defaults to PRIORITY_NONE, background=False (synchronous task), wait_until_complete=True completed_job_request = gm_client.submit_job("task_name", "arbitrary binary data") check_request_status(completed_job_request) Sending a high priority, background, blocking call:: gm_client = gearman.GearmanClient(['localhost:4730', 'otherhost:4730']) # See gearman/job.py to see attributes on the GearmanJobRequest submitted_job_request = gm_client.submit_job("task_name", "arbitrary binary data", priority=gearman.PRIORITY_HIGH, background=True) check_request_status(submitted_job_request) .. automethod:: GearmanClient.submit_multiple_jobs Sending multiple jobs all at once and behave like a non-blocking call (wait_until_complete=False):: import time gm_client = gearman.GearmanClient(['localhost:4730']) list_of_jobs = [dict(task="task_name", data="binary data"), dict(task="other_task", data="other binary data")] submitted_requests = gm_client.submit_multiple_jobs(list_of_jobs, background=False, wait_until_complete=False) # Once we know our jobs are accepted, we can do other stuff and wait for results later in the function # Similar to multithreading and doing a join except this is all done in a single process time.sleep(1.0) # Wait at most 5 seconds before timing out incomplete requests completed_requests = gm_client.wait_until_jobs_completed(submitted_requests, poll_timeout=5.0) for completed_job_request in completed_requests: check_request_status(completed_job_request) .. automethod:: GearmanClient.submit_multiple_requests Recovering from failed connections:: import time gm_client = gearman.GearmanClient(['localhost:4730']) list_of_jobs = [dict(task="task_name", data="task binary string"), dict(task="other_task", data="other binary string")] failed_requests = gm_client.submit_multiple_jobs(list_of_jobs, background=False) # Let's pretend our assigned requests' Gearman servers all failed assert all(request.state == JOB_UNKNOWN for request in failed_requests), "All connections didn't fail!" # Let's pretend our assigned requests' don't fail but some simply timeout retried_connection_failed_requests = gm_client.submit_multiple_requests(failed_requests, wait_until_complete=True, poll_timeout=1.0) timed_out_requests = [job_request for job_request in retried_requests if job_request.timed_out] # For our timed out requests, lets wait a little longer until they're complete retried_timed_out_requests = gm_client.submit_multiple_requests(timed_out_requests, wait_until_complete=True, poll_timeout=4.0) .. automethod:: GearmanClient.wait_until_jobs_accepted .. automethod:: GearmanClient.wait_until_jobs_completed Retrieving job status --------------------- .. automethod:: GearmanClient.get_job_status .. automethod:: GearmanClient.get_job_statuses Extending the client -------------------- .. 
autoattribute:: GearmanClient.data_encoder Send/receive Python objects (not just byte strings):: # By default, GearmanClient's can only send off byte-strings # If we want to be able to send out Python objects, we can specify a data encoder # This will automatically convert byte strings <-> Python objects for ALL commands that have the 'data' field # # See http://gearman.org/index.php?id=protocol for client commands that send/receive 'opaque data' import pickle class PickleDataEncoder(gearman.DataEncoder): @classmethod def encode(cls, encodable_object): return pickle.dumps(encodable_object) @classmethod def decode(cls, decodable_string): return pickle.loads(decodable_string) class PickleExampleClient(gearman.GearmanClient): data_encoder = PickleDataEncoder my_python_object = {'hello': 'there'} gm_client = PickleExampleClient(['localhost:4730']) gm_client.submit_job("task_name", my_python_object) gearman-2.0.2/docs/index.rst0000644000076600000240000000102711513150122014523 0ustar mtaistaff.. python-gearman documentation master file, created by sphinx-quickstart on Wed Aug 25 14:44:14 2010. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. python-gearman 2.x ================== Python Gearman API - Client, worker, and admin client interfaces For information on the Gearman protocol and a Gearman server, see http://www.gearman.org/ .. toctree:: :maxdepth: 3 :numbered: library.rst 1to2.rst architecture.rst * :ref:`search` gearman-2.0.2/docs/job.rst0000644000076600000240000000757211513150122014201 0ustar mtaistaff:mod:`gearman.job` --- Gearman job definitions ============================================== .. module:: gearman.job :synopsis: Gearman jobs - Common job classes used within each interface GearmanJob - Basic information about a requested job ---------------------------------------------------- .. autoclass:: GearmanJob Server identifiers ^^^^^^^^^^^^^^^^^^ .. attribute:: GearmanJob.connection :const:`GearmanConnection` - Server assignment. Could be :const:`None` prior to client job submission .. attribute:: GearmanJob.handle :const:`string` - Job's server handle. Handles are NOT interchangeable across different gearman servers Job parameters ^^^^^^^^^^^^^^ .. attribute:: GearmanJob.task :const:`string` - Job's task .. attribute:: GearmanJob.unique :const:`string` - Job's unique identifier (client assigned) .. attribute:: GearmanJob.data :const:`binary` - Job's binary payload GearmanJobRequest - State tracker for requested jobs ---------------------------------------------------- .. autoclass:: GearmanJobRequest Tracking job submission ^^^^^^^^^^^^^^^^^^^^^^^ .. attribute:: GearmanJobRequest.gearman_job :const:`GearmanJob` - Job that is being tracked by this :const:`GearmanJobRequest` object .. attribute:: GearmanJobRequest.priority * :const:`PRIORITY_NONE` [default] * :const:`PRIORITY_LOW` * :const:`PRIORITY_HIGH` .. attribute:: GearmanJobRequest.background :const:`boolean` - Is this job backgrounded? .. attribute:: GearmanJobRequest.connection_attempts :const:`integer` - Number of connection attempts made so far .. attribute:: GearmanJobRequest.max_connection_attempts :const:`integer` - Maximum number of connection attempts allowed before raising an exception Tracking job progress ^^^^^^^^^^^^^^^^^^^^^ .. attribute:: GearmanJobRequest.result :const:`binary` - Job's returned binary payload - Populated if and only if JOB_COMPLETE .. attribute:: GearmanJobRequest.exception :const:`binary` - Job's exception binary payload .. 
attribute:: GearmanJobRequest.state * :const:`JOB_UNKNOWN` - Request state is currently unknown, either unsubmitted or connection failed * :const:`JOB_PENDING` - Request has been submitted, pending handle * :const:`JOB_CREATED` - Request has been accepted * :const:`JOB_FAILED` - Request received an explicit job failure (job done but errored out) * :const:`JOB_COMPLETE` - Request received an explicit job completion (job done with results) .. attribute:: GearmanJobRequest.timed_out :const:`boolean` - Did the client hit its polling_timeout prior to a job finishing? .. attribute:: GearmanJobRequest.complete :const:`boolean` - Does the client need to continue to poll for more updates from this job? Tracking in-flight job updates ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Certain GearmanJob's may send back data prior to actually completing. :const:`GearmanClient` uses these queues to keep track of what/when we received certain updates. .. attribute:: GearmanJobRequest.warning_updates :const:`collections.deque` - Job's warning binary payloads .. attribute:: GearmanJobRequest.data_updates :const:`collections.deque` - Job's data binary payloads .. attribute:: GearmanJobRequest.status :const:`dictionary` - Job's status * `handle` - :const:`string` - Job handle * `known` - :const:`boolean` - Is the server aware of this request? * `running` - :const:`boolean` - Is the request currently being processed by a worker? * `numerator` - :const:`integer` * `denominator` - :const:`integer` * `time_received` - :const:`integer` - Time last updated .. versionadded:: 2.0.1 Replaces GearmanJobRequest.status_updates and GearmanJobRquest.server_status .. attribute:: GearmanJobRequest.status_updates .. deprecated:: 2.0.1 Replaced by GearmanJobRequest.status .. attribute:: GearmanJobRequest.server_status .. deprecated:: 2.0.1 Replaced by GearmanJobRequest.status gearman-2.0.2/docs/library.rst0000644000076600000240000000023211450424015015062 0ustar mtaistaffGearman Library documentation ============================= .. toctree:: :maxdepth: 3 client.rst worker.rst admin_client.rst job.rst gearman-2.0.2/docs/Makefile0000644000076600000240000001101611450424015014326 0ustar mtaistaff# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
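# Note: the trailing "." in ALLSPHINXOPTS above is the documentation source directory handed to sphinx-build by every target below.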
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/python-gearman.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/python-gearman.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/python-gearman" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/python-gearman" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." make -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 
changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." gearman-2.0.2/docs/worker.rst0000644000076600000240000000647211450740276014745 0ustar mtaistaff:mod:`gearman.worker` --- Gearman worker ======================================== .. module:: gearman.worker :synopsis: Gearman worker - public interface for accepting/executing jobs .. autoclass:: GearmanWorker Job processing -------------- .. automethod:: GearmanWorker.set_client_id .. automethod:: GearmanWorker.register_task .. automethod:: GearmanWorker.unregister_task .. automethod:: GearmanWorker.work Setting up a basic worker that reverses a given byte-string:: gm_worker = gearman.GearmanWorker(['localhost:4730']) # See gearman/job.py to see attributes on the GearmanJob # Send back a reversed version of the 'data' byte string (slicing keeps the result a byte string) def task_listener_reverse(gearman_worker, gearman_job): return gearman_job.data[::-1] # gm_worker.set_client_id is optional gm_worker.set_client_id('your_worker_client_id_name') gm_worker.register_task('reverse', task_listener_reverse) # Enter our work loop and call gm_worker.after_poll() after each time we timeout/see socket activity gm_worker.work() Sending in-flight job updates ----------------------------- .. automethod:: GearmanWorker.send_job_data .. automethod:: GearmanWorker.send_job_status .. automethod:: GearmanWorker.send_job_warning Callback function sending back inflight job updates:: gm_worker = gearman.GearmanWorker(['localhost:4730']) # See gearman/job.py to see attributes on the GearmanJob # Send back a reversed version of the 'data' string through WORK_DATA instead of WORK_COMPLETE def task_listener_reverse_inflight(gearman_worker, gearman_job): reversed_data = gearman_job.data[::-1] total_chars = len(reversed_data) for idx, character in enumerate(reversed_data): gearman_worker.send_job_data(gearman_job, str(character)) gearman_worker.send_job_status(gearman_job, idx + 1, total_chars) return None # gm_worker.set_client_id is optional gm_worker.register_task('reverse', task_listener_reverse_inflight) # Enter our work loop and call gm_worker.after_poll() after each time we timeout/see socket activity gm_worker.work() Extending the worker -------------------- .. autoattribute:: GearmanWorker.data_encoder .. 
automethod:: GearmanWorker.after_poll Send/receive Python objects and do work between polls:: # By default, GearmanWorker's can only send off byte-strings # If we want to be able to send out Python objects, we can specify a data encoder # This will automatically convert byte strings <-> Python objects for ALL commands that have the 'data' field # # See http://gearman.org/index.php?id=protocol for Worker commands that send/receive 'opaque data' # import json # Or similarly styled library class JSONDataEncoder(gearman.DataEncoder): @classmethod def encode(cls, encodable_object): return json.dumps(encodable_object) @classmethod def decode(cls, decodable_string): return json.loads(decodable_string) class DBRollbackJSONWorker(gearman.GearmanWorker): data_encoder = JSONDataEncoder def after_poll(self, any_activity): # After every select loop, let's rollback our DB connections just to be safe continue_working = True self.db_connections.rollback() return continue_working gearman-2.0.2/gearman/0000755000076600000240000000000011513151461013353 5ustar mtaistaffgearman-2.0.2/gearman/__init__.py0000644000076600000240000000112011513150122015447 0ustar mtaistaff""" Gearman API - Client, worker, and admin client interfaces """ __version__ = '2.0.2' from gearman.admin_client import GearmanAdminClient from gearman.client import GearmanClient from gearman.worker import GearmanWorker from gearman.connection_manager import DataEncoder from gearman.constants import PRIORITY_NONE, PRIORITY_LOW, PRIORITY_HIGH, JOB_PENDING, JOB_CREATED, JOB_FAILED, JOB_COMPLETE import logging class NullHandler(logging.Handler): def emit(self, record): pass gearman_root_logger = logging.getLogger('gearman') gearman_root_logger.addHandler(NullHandler())gearman-2.0.2/gearman/admin_client.py0000644000076600000240000001126411455160537016370 0ustar mtaistaffimport logging import time from gearman import util from gearman.connection_manager import GearmanConnectionManager from gearman.admin_client_handler import GearmanAdminClientCommandHandler from gearman.errors import ConnectionError, InvalidAdminClientState, ServerUnavailable from gearman.protocol import GEARMAN_COMMAND_ECHO_RES, GEARMAN_COMMAND_ECHO_REQ, \ GEARMAN_SERVER_COMMAND_STATUS, GEARMAN_SERVER_COMMAND_VERSION, GEARMAN_SERVER_COMMAND_WORKERS, \ GEARMAN_SERVER_COMMAND_MAXQUEUE, GEARMAN_SERVER_COMMAND_SHUTDOWN gearman_logger = logging.getLogger(__name__) ECHO_STRING = "ping? pong!" 
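# ECHO_STRING is the payload that ping_server() sends via GEARMAN_COMMAND_ECHO_REQ and expects to be echoed back unchanged.
# DEFAULT_ADMIN_CLIENT_TIMEOUT is the default poll_timeout (in seconds) applied to every blocking admin call below.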
DEFAULT_ADMIN_CLIENT_TIMEOUT = 10.0 class GearmanAdminClient(GearmanConnectionManager): """GearmanAdminClient :: Interface to send/receive administrative commands to a Gearman server This client acts as a BLOCKING client and each call will poll until it receives a satisfactory server response http://gearman.org/index.php?id=protocol See section 'Administrative Protocol' """ command_handler_class = GearmanAdminClientCommandHandler def __init__(self, host_list=None, poll_timeout=DEFAULT_ADMIN_CLIENT_TIMEOUT): super(GearmanAdminClient, self).__init__(host_list=host_list) self.poll_timeout = poll_timeout self.current_connection = util.unlist(self.connection_list) self.current_handler = None def establish_admin_connection(self): try: self.establish_connection(self.current_connection) except ConnectionError: raise ServerUnavailable('Found no valid connections in list: %r' % self.connection_list) self.current_handler = self.connection_to_handler_map[self.current_connection] def ping_server(self): """Sends off a debugging string to execute an application ping on the Gearman server""" start_time = time.time() self.establish_admin_connection() self.current_handler.send_echo_request(ECHO_STRING) server_response = self.wait_until_server_responds(GEARMAN_COMMAND_ECHO_REQ) if server_response != ECHO_STRING: raise InvalidAdminClientState("Echo string mismatch: got %s, expected %s" % (server_response, ECHO_STRING)) elapsed_time = time.time() - start_time return elapsed_time def send_maxqueue(self, task, max_size): """Sends a request to change the maximum queue size for a given task""" self.establish_admin_connection() self.current_handler.send_text_command('%s %s %s' % (GEARMAN_SERVER_COMMAND_MAXQUEUE, task, max_size)) return self.wait_until_server_responds(GEARMAN_SERVER_COMMAND_MAXQUEUE) def send_shutdown(self, graceful=True): """Sends a request to shutdown the connected gearman server""" actual_command = GEARMAN_SERVER_COMMAND_SHUTDOWN if graceful: actual_command += ' graceful' self.establish_admin_connection() self.current_handler.send_text_command(actual_command) return self.wait_until_server_responds(GEARMAN_SERVER_COMMAND_SHUTDOWN) def get_status(self): """Retrieves a list of all registered tasks and reports how many items/workers are in the queue""" self.establish_admin_connection() self.current_handler.send_text_command(GEARMAN_SERVER_COMMAND_STATUS) return self.wait_until_server_responds(GEARMAN_SERVER_COMMAND_STATUS) def get_version(self): """Retrieves the version number of the Gearman server""" self.establish_admin_connection() self.current_handler.send_text_command(GEARMAN_SERVER_COMMAND_VERSION) return self.wait_until_server_responds(GEARMAN_SERVER_COMMAND_VERSION) def get_workers(self): """Retrieves a list of workers and reports what tasks they're operating on""" self.establish_admin_connection() self.current_handler.send_text_command(GEARMAN_SERVER_COMMAND_WORKERS) return self.wait_until_server_responds(GEARMAN_SERVER_COMMAND_WORKERS) def wait_until_server_responds(self, expected_type): current_handler = self.current_handler def continue_while_no_response(any_activity): return (not current_handler.response_ready) self.poll_connections_until_stopped([self.current_connection], continue_while_no_response, timeout=self.poll_timeout) if not self.current_handler.response_ready: raise InvalidAdminClientState('Admin client timed out after %f second(s)' % self.poll_timeout) cmd_type, cmd_resp = self.current_handler.pop_response() if cmd_type != expected_type: raise InvalidAdminClientState('Received 
an unexpected response... got command %r, expecting command %r' % (cmd_type, expected_type)) return cmd_resp gearman-2.0.2/gearman/admin_client_handler.py0000644000076600000240000001535711435067246020075 0ustar mtaistaffimport collections import logging from gearman.command_handler import GearmanCommandHandler from gearman.errors import ProtocolError, InvalidAdminClientState from gearman.protocol import GEARMAN_COMMAND_ECHO_REQ, GEARMAN_COMMAND_TEXT_COMMAND, \ GEARMAN_SERVER_COMMAND_STATUS, GEARMAN_SERVER_COMMAND_VERSION, \ GEARMAN_SERVER_COMMAND_WORKERS, GEARMAN_SERVER_COMMAND_MAXQUEUE, GEARMAN_SERVER_COMMAND_SHUTDOWN gearman_logger = logging.getLogger(__name__) EXPECTED_GEARMAN_SERVER_COMMANDS = set([GEARMAN_SERVER_COMMAND_STATUS, GEARMAN_SERVER_COMMAND_VERSION, \ GEARMAN_SERVER_COMMAND_WORKERS, GEARMAN_SERVER_COMMAND_MAXQUEUE, GEARMAN_SERVER_COMMAND_SHUTDOWN]) class GearmanAdminClientCommandHandler(GearmanCommandHandler): """Special GEARMAN_COMMAND_TEXT_COMMAND command handler that'll parse text responses from the server""" STATUS_FIELDS = 4 WORKERS_FIELDS = 4 def __init__(self, connection_manager=None): super(GearmanAdminClientCommandHandler, self).__init__(connection_manager=connection_manager) self._sent_commands = collections.deque() self._recv_responses = collections.deque() self._status_response = [] self._workers_response = [] ####################################################################### ##### Public interface methods to be called by GearmanAdminClient ##### ####################################################################### @property def response_ready(self): return bool(self._recv_responses) def pop_response(self): if not self._sent_commands or not self._recv_responses: raise InvalidAdminClientState('Attempted to pop a response for a command that is not ready') sent_command = self._sent_commands.popleft() recv_response = self._recv_responses.popleft() return sent_command, recv_response def send_text_command(self, command_line): """Send our administrative text command""" expected_server_command = None for server_command in EXPECTED_GEARMAN_SERVER_COMMANDS: if command_line.startswith(server_command): expected_server_command = server_command break if not expected_server_command: raise ProtocolError('Attempted to send an unknown server command: %r' % command_line) self._sent_commands.append(expected_server_command) output_text = '%s\n' % command_line self.send_command(GEARMAN_COMMAND_TEXT_COMMAND, raw_text=output_text) def send_echo_request(self, echo_string): """Send our administrative text command""" self._sent_commands.append(GEARMAN_COMMAND_ECHO_REQ) self.send_command(GEARMAN_COMMAND_ECHO_REQ, data=echo_string) ########################################################### ### Callbacks when we receive a command from the server ### ########################################################### def recv_echo_res(self, data): self._recv_responses.append(data) return False def recv_text_command(self, raw_text): """Catch GEARMAN_COMMAND_TEXT_COMMAND's and forward them onto their respective recv_server_* callbacks""" if not self._sent_commands: raise InvalidAdminClientState('Received an unexpected server response') # Peek at the first command cmd_type = self._sent_commands[0] recv_server_command_function_name = 'recv_server_%s' % cmd_type cmd_callback = getattr(self, recv_server_command_function_name, None) if not cmd_callback: gearman_logger.error('Could not handle command: %r - %r' % (cmd_type, raw_text)) raise ValueError('Could not handle command: %r - %r' % (cmd_type, 
raw_text)) # This must match the parameter names as defined in the command handler completed_work = cmd_callback(raw_text) return completed_work def recv_server_status(self, raw_text): """Slowly assemble a server status message line by line""" # If we received a '.', we've finished parsing this status message # Pack up our output and reset our response queue if raw_text == '.': output_response = tuple(self._status_response) self._recv_responses.append(output_response) self._status_response = [] return False # If we didn't get a final response, split our line and interpret all the data split_tokens = raw_text.split('\t') if len(split_tokens) != self.STATUS_FIELDS: raise ProtocolError('Received %d tokens, expected %d tokens: %r' % (len(split_tokens), self.STATUS_FIELDS, split_tokens)) # Label our fields and make the results Python friendly task, queued_count, running_count, worker_count = split_tokens status_dict = {} status_dict['task'] = task status_dict['queued'] = int(queued_count) status_dict['running'] = int(running_count) status_dict['workers'] = int(worker_count) self._status_response.append(status_dict) return True def recv_server_version(self, raw_text): """Version response is a simple passthrough""" self._recv_responses.append(raw_text) return False def recv_server_workers(self, raw_text): """Slowly assemble a server workers message line by line""" # If we received a '.', we've finished parsing this workers message # Pack up our output and reset our response queue if raw_text == '.': output_response = tuple(self._workers_response) self._recv_responses.append(output_response) self._workers_response = [] return False split_tokens = raw_text.split(' ') if len(split_tokens) < self.WORKERS_FIELDS: raise ProtocolError('Received %d tokens, expected >= 4 tokens: %r' % (len(split_tokens), split_tokens)) if split_tokens[3] != ':': raise ProtocolError('Malformed worker response: %r' % (split_tokens, )) # Label our fields and make the results Python friendly worker_dict = {} worker_dict['file_descriptor'] = split_tokens[0] worker_dict['ip'] = split_tokens[1] worker_dict['client_id'] = split_tokens[2] worker_dict['tasks'] = tuple(split_tokens[4:]) self._workers_response.append(worker_dict) return True def recv_server_maxqueue(self, raw_text): """Maxqueue response is a simple passthrough""" if raw_text != 'OK': raise ProtocolError("Expected 'OK', received: %s" % raw_text) self._recv_responses.append(raw_text) return False def recv_server_shutdown(self, raw_text): """Shutdown response is a simple passthrough""" self._recv_responses.append(None) return Falsegearman-2.0.2/gearman/client.py0000644000076600000240000002613311455425120015211 0ustar mtaistaffimport collections from gearman import compat import logging import os import random import gearman.util from gearman.connection_manager import GearmanConnectionManager from gearman.client_handler import GearmanClientCommandHandler from gearman.constants import PRIORITY_NONE, PRIORITY_LOW, PRIORITY_HIGH, JOB_UNKNOWN, JOB_PENDING from gearman.errors import ConnectionError, ExceededConnectionAttempts, ServerUnavailable gearman_logger = logging.getLogger(__name__) # This number must be <= GEARMAN_UNIQUE_SIZE in gearman/libgearman/constants.h RANDOM_UNIQUE_BYTES = 16 class GearmanClient(GearmanConnectionManager): """ GearmanClient :: Interface to submit jobs to a Gearman server """ command_handler_class = GearmanClientCommandHandler def __init__(self, host_list=None, random_unique_bytes=RANDOM_UNIQUE_BYTES): super(GearmanClient, 
self).__init__(host_list=host_list) self.random_unique_bytes = random_unique_bytes # The authoritative copy of all requests that this client knows about # Ignores the fact if a request has been bound to a connection or not self.request_to_rotating_connection_queue = compat.defaultdict(collections.deque) def submit_job(self, task, data, unique=None, priority=PRIORITY_NONE, background=False, wait_until_complete=True, max_retries=0, poll_timeout=None): """Submit a single job to any gearman server""" job_info = dict(task=task, data=data, unique=unique, priority=priority) completed_job_list = self.submit_multiple_jobs([job_info], background=background, wait_until_complete=wait_until_complete, max_retries=max_retries, poll_timeout=poll_timeout) return gearman.util.unlist(completed_job_list) def submit_multiple_jobs(self, jobs_to_submit, background=False, wait_until_complete=True, max_retries=0, poll_timeout=None): """Takes a list of jobs_to_submit with dicts of {'task': task, 'data': data, 'unique': unique, 'priority': priority} """ assert type(jobs_to_submit) in (list, tuple, set), "Expected multiple jobs, received 1?" # Convert all job dicts to job request objects requests_to_submit = [self._create_request_from_dictionary(job_info, background=background, max_retries=max_retries) for job_info in jobs_to_submit] return self.submit_multiple_requests(requests_to_submit, wait_until_complete=wait_until_complete, poll_timeout=poll_timeout) def submit_multiple_requests(self, job_requests, wait_until_complete=True, poll_timeout=None): """Take GearmanJobRequests, assign them connections, and request that they be done. * Blocks until our jobs are accepted (should be fast) OR times out * Optionally blocks until jobs are all complete You MUST check the status of your requests after calling this function as "timed_out" or "state == JOB_UNKNOWN" maybe True """ assert type(job_requests) in (list, tuple, set), "Expected multiple job requests, received 1?" stopwatch = gearman.util.Stopwatch(poll_timeout) # We should always wait until our job is accepted, this should be fast time_remaining = stopwatch.get_time_remaining() processed_requests = self.wait_until_jobs_accepted(job_requests, poll_timeout=time_remaining) # Optionally, we'll allow a user to wait until all jobs are complete with the same poll_timeout time_remaining = stopwatch.get_time_remaining() if wait_until_complete and bool(time_remaining != 0.0): processed_requests = self.wait_until_jobs_completed(processed_requests, poll_timeout=time_remaining) return processed_requests def wait_until_jobs_accepted(self, job_requests, poll_timeout=None): """Go into a select loop until all our jobs have moved to STATE_PENDING""" assert type(job_requests) in (list, tuple, set), "Expected multiple job requests, received 1?" 
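        # is_request_pending() below flags requests still waiting for their JOB_CREATED handle; the poll callback
        # re-submits any request that fell back to JOB_UNKNOWN and keeps polling while any request remains pending.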
def is_request_pending(current_request): return bool(current_request.state == JOB_PENDING) # Poll until we know we've gotten acknowledgement that our job's been accepted # If our connection fails while we're waiting for it to be accepted, automatically retry right here def continue_while_jobs_pending(any_activity): for current_request in job_requests: if current_request.state == JOB_UNKNOWN: self.send_job_request(current_request) return compat.any(is_request_pending(current_request) for current_request in job_requests) self.poll_connections_until_stopped(self.connection_list, continue_while_jobs_pending, timeout=poll_timeout) # Mark any job still in the queued state to poll_timeout for current_request in job_requests: current_request.timed_out = is_request_pending(current_request) return job_requests def wait_until_jobs_completed(self, job_requests, poll_timeout=None): """Go into a select loop until all our jobs have completed or failed""" assert type(job_requests) in (list, tuple, set), "Expected multiple job requests, received 1?" def is_request_incomplete(current_request): return not current_request.complete # Poll until we get responses for all our functions # Do NOT attempt to auto-retry connection failures as we have no idea how for a worker got def continue_while_jobs_incomplete(any_activity): for current_request in job_requests: if is_request_incomplete(current_request) and current_request.state != JOB_UNKNOWN: return True return False self.poll_connections_until_stopped(self.connection_list, continue_while_jobs_incomplete, timeout=poll_timeout) # Mark any job still in the queued state to poll_timeout for current_request in job_requests: current_request.timed_out = is_request_incomplete(current_request) if not current_request.timed_out: self.request_to_rotating_connection_queue.pop(current_request, None) return job_requests def get_job_status(self, current_request, poll_timeout=None): """Fetch the job status of a single request""" request_list = self.get_job_statuses([current_request], poll_timeout=poll_timeout) return gearman.util.unlist(request_list) def get_job_statuses(self, job_requests, poll_timeout=None): """Fetch the job status of a multiple requests""" assert type(job_requests) in (list, tuple, set), "Expected multiple job requests, received 1?" for current_request in job_requests: current_request.status['last_time_received'] = current_request.status.get('time_received') current_connection = current_request.job.connection current_command_handler = self.connection_to_handler_map[current_connection] current_command_handler.send_get_status_of_job(current_request) return self.wait_until_job_statuses_received(job_requests, poll_timeout=poll_timeout) def wait_until_job_statuses_received(self, job_requests, poll_timeout=None): """Go into a select loop until we received statuses on all our requests""" assert type(job_requests) in (list, tuple, set), "Expected multiple job requests, received 1?" 
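        # is_status_not_updated() below compares the current 'time_received' against the 'last_time_received' snapshot
        # taken in get_job_statuses(); polling continues until every reachable request reports a fresh STATUS_RES.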
def is_status_not_updated(current_request): current_status = current_request.status return bool(current_status.get('time_received') == current_status.get('last_time_received')) # Poll to make sure we send out our request for a status update def continue_while_status_not_updated(any_activity): for current_request in job_requests: if is_status_not_updated(current_request) and current_request.state != JOB_UNKNOWN: return True return False self.poll_connections_until_stopped(self.connection_list, continue_while_status_not_updated, timeout=poll_timeout) for current_request in job_requests: current_request.status = current_request.status or {} current_request.timed_out = is_status_not_updated(current_request) return job_requests def _create_request_from_dictionary(self, job_info, background=False, max_retries=0): """Takes a dictionary with fields {'task': task, 'unique': unique, 'data': data, 'priority': priority, 'background': background}""" # Make sure we have a unique identifier for ALL our tasks job_unique = job_info.get('unique') if job_unique == '-': job_unique = job_info['data'] elif not job_unique: job_unique = os.urandom(self.random_unique_bytes).encode('hex') current_job = self.job_class(connection=None, handle=None, task=job_info['task'], unique=job_unique, data=job_info['data']) initial_priority = job_info.get('priority', PRIORITY_NONE) max_attempts = max_retries + 1 current_request = self.job_request_class(current_job, initial_priority=initial_priority, background=background, max_attempts=max_attempts) return current_request def establish_request_connection(self, current_request): """Return a live connection for the given hash""" # We'll keep track of the connections we're attempting to use so if we ever have to retry, we can use this history rotating_connections = self.request_to_rotating_connection_queue.get(current_request, None) if not rotating_connections: shuffled_connection_list = list(self.connection_list) random.shuffle(shuffled_connection_list) rotating_connections = collections.deque(shuffled_connection_list) self.request_to_rotating_connection_queue[current_request] = rotating_connections failed_connections = 0 chosen_connection = None for possible_connection in rotating_connections: try: chosen_connection = self.establish_connection(possible_connection) break except ConnectionError: # Rotate our server list so we'll skip all our broken servers failed_connections += 1 if not chosen_connection: raise ServerUnavailable('Found no valid connections: %r' % self.connection_list) # Rotate our server list so we'll skip all our broken servers rotating_connections.rotate(-failed_connections) return chosen_connection def send_job_request(self, current_request): """Attempt to send out a job request""" if current_request.connection_attempts >= current_request.max_connection_attempts: raise ExceededConnectionAttempts('Exceeded %d connection attempt(s) :: %r' % (current_request.max_connection_attempts, current_request)) chosen_connection = self.establish_request_connection(current_request) current_request.job.connection = chosen_connection current_request.connection_attempts += 1 current_request.timed_out = False current_command_handler = self.connection_to_handler_map[chosen_connection] current_command_handler.send_job_request(current_request) return current_request gearman-2.0.2/gearman/client_handler.py0000644000076600000240000001575611513150122016707 0ustar mtaistaffimport collections import time import logging from gearman.command_handler import GearmanCommandHandler from 
gearman.constants import JOB_UNKNOWN, JOB_PENDING, JOB_CREATED, JOB_FAILED, JOB_COMPLETE from gearman.errors import InvalidClientState from gearman.protocol import GEARMAN_COMMAND_GET_STATUS, submit_cmd_for_background_priority gearman_logger = logging.getLogger(__name__) class GearmanClientCommandHandler(GearmanCommandHandler): """Maintains the state of this connection on behalf of a GearmanClient""" def __init__(self, connection_manager=None): super(GearmanClientCommandHandler, self).__init__(connection_manager=connection_manager) # When we first submit jobs, we don't have a handle assigned yet... these handles will be returned in the order of submission self.requests_awaiting_handles = collections.deque() self.handle_to_request_map = dict() ################################################################## ##### Public interface methods to be called by GearmanClient ##### ################################################################## def send_job_request(self, current_request): """Register a newly created job request""" self._assert_request_state(current_request, JOB_UNKNOWN) gearman_job = current_request.job # Handle the I/O for requesting a job - determine which COMMAND we need to send cmd_type = submit_cmd_for_background_priority(current_request.background, current_request.priority) outbound_data = self.encode_data(gearman_job.data) self.send_command(cmd_type, task=gearman_job.task, unique=gearman_job.unique, data=outbound_data) # Once this command is sent, our request needs to wait for a handle current_request.state = JOB_PENDING self.requests_awaiting_handles.append(current_request) def send_get_status_of_job(self, current_request): """Forward the status of a job""" self._register_request(current_request) self.send_command(GEARMAN_COMMAND_GET_STATUS, job_handle=current_request.job.handle) def on_io_error(self): for pending_request in self.requests_awaiting_handles: pending_request.state = JOB_UNKNOWN for inflight_request in self.handle_to_request_map.itervalues(): inflight_request.state = JOB_UNKNOWN def _register_request(self, current_request): self.handle_to_request_map[current_request.job.handle] = current_request def _unregister_request(self, current_request): # De-allocate this request for all jobs return self.handle_to_request_map.pop(current_request.job.handle, None) ################################################################## ## Gearman command callbacks with kwargs defined by protocol.py ## ################################################################## def _assert_request_state(self, current_request, expected_state): if current_request.state != expected_state: raise InvalidClientState('Expected handle (%s) to be in state %r, got %r' % (current_request.job.handle, expected_state, current_request.state)) def recv_job_created(self, job_handle): if not self.requests_awaiting_handles: raise InvalidClientState('Received a job_handle with no pending requests') # If our client got a JOB_CREATED, our request now has a server handle current_request = self.requests_awaiting_handles.popleft() self._assert_request_state(current_request, JOB_PENDING) # Update the state of this request current_request.job.handle = job_handle current_request.state = JOB_CREATED self._register_request(current_request) return True def recv_work_data(self, job_handle, data): # Queue a WORK_DATA update current_request = self.handle_to_request_map[job_handle] self._assert_request_state(current_request, JOB_CREATED) current_request.data_updates.append(self.decode_data(data)) return True def 
recv_work_warning(self, job_handle, data): # Queue a WORK_WARNING update current_request = self.handle_to_request_map[job_handle] self._assert_request_state(current_request, JOB_CREATED) current_request.warning_updates.append(self.decode_data(data)) return True def recv_work_status(self, job_handle, numerator, denominator): # Queue a WORK_STATUS update current_request = self.handle_to_request_map[job_handle] self._assert_request_state(current_request, JOB_CREATED) # The protocol spec is ambiguous as to what type the numerator and denominator is... # But according to Eskil, gearmand interprets these as integers current_request.status = { 'handle': job_handle, 'known': True, 'running': True, 'numerator': int(numerator), 'denominator': int(denominator), 'time_received': time.time() } return True def recv_work_complete(self, job_handle, data): # Update the state of our request and store our returned result current_request = self.handle_to_request_map[job_handle] self._assert_request_state(current_request, JOB_CREATED) current_request.result = self.decode_data(data) current_request.state = JOB_COMPLETE self._unregister_request(current_request) return True def recv_work_fail(self, job_handle): # Update the state of our request and mark this job as failed current_request = self.handle_to_request_map[job_handle] self._assert_request_state(current_request, JOB_CREATED) current_request.state = JOB_FAILED self._unregister_request(current_request) return True def recv_work_exception(self, job_handle, data): # Using GEARMAND_COMMAND_WORK_EXCEPTION is not recommended at time of this writing [2010-02-24] # http://groups.google.com/group/gearman/browse_thread/thread/5c91acc31bd10688/529e586405ed37fe # current_request = self.handle_to_request_map[job_handle] self._assert_request_state(current_request, JOB_CREATED) current_request.exception = self.decode_data(data) return True def recv_status_res(self, job_handle, known, running, numerator, denominator): # If we received a STATUS_RES update about this request, update our known status current_request = self.handle_to_request_map[job_handle] job_known = bool(known == '1') # Make our status response Python friendly current_request.status = { 'handle': job_handle, 'known': job_known, 'running': bool(running == '1'), 'numerator': int(numerator), 'denominator': int(denominator), 'time_received': time.time() } # If the server doesn't know about this request, we no longer need to track it if not job_known: self._unregister_request(current_request) return True gearman-2.0.2/gearman/command_handler.py0000644000076600000240000000577611435067246017071 0ustar mtaistaffimport logging from gearman.errors import UnknownCommandError from gearman.protocol import get_command_name gearman_logger = logging.getLogger(__name__) class GearmanCommandHandler(object): """A command handler manages the state which we should be in given a certain stream of commands GearmanCommandHandler does no I/O and only understands sending/receiving commands """ def __init__(self, connection_manager=None): self.connection_manager = connection_manager def initial_state(self, *largs, **kwargs): """Called by a Connection Manager after we've been instantiated and we're ready to send off commands""" pass def on_io_error(self): pass def decode_data(self, data): """Convenience function :: handle binary string -> object unpacking""" return self.connection_manager.data_encoder.decode(data) def encode_data(self, data): """Convenience function :: handle object -> binary string packing""" return 
self.connection_manager.data_encoder.encode(data) def fetch_commands(self): """Called by a Connection Manager to notify us that we have pending commands""" continue_working = True while continue_working: cmd_tuple = self.connection_manager.read_command(self) if cmd_tuple is None: break cmd_type, cmd_args = cmd_tuple continue_working = self.recv_command(cmd_type, **cmd_args) def send_command(self, cmd_type, **cmd_args): """Hand off I/O to the connection mananger""" self.connection_manager.send_command(self, cmd_type, cmd_args) def recv_command(self, cmd_type, **cmd_args): """Maps any command to a recv_* callback function""" completed_work = None gearman_command_name = get_command_name(cmd_type) if bool(gearman_command_name == cmd_type) or not gearman_command_name.startswith('GEARMAN_COMMAND_'): unknown_command_msg = 'Could not handle command: %r - %r' % (gearman_command_name, cmd_args) gearman_logger.error(unknown_command_msg) raise ValueError(unknown_command_msg) recv_command_function_name = gearman_command_name.lower().replace('gearman_command_', 'recv_') cmd_callback = getattr(self, recv_command_function_name, None) if not cmd_callback: missing_callback_msg = 'Could not handle command: %r - %r' % (get_command_name(cmd_type), cmd_args) gearman_logger.error(missing_callback_msg) raise UnknownCommandError(missing_callback_msg) # Expand the arguments as passed by the protocol # This must match the parameter names as defined in the command handler completed_work = cmd_callback(**cmd_args) return completed_work def recv_error(self, error_code, error_text): """When we receive an error from the server, notify the connection manager that we have a gearman error""" return self.connection_manager.on_gearman_error(error_code, error_text) gearman-2.0.2/gearman/compat.py0000644000076600000240000000505311450513501015210 0ustar mtaistaff""" Gearman compatibility module """ # Required for python2.4 backward compatibilty # Add a module attribute called "any" which is equivalent to "any" try: any = any except NameError: def any(iterable): """Return True if any element of the iterable is true. 
If the iterable is empty, return False""" for element in iterable: if element: return True return False # Required for python2.4 backward compatibilty # Add a module attribute called "all" which is equivalent to "all" try: all = all except NameError: def all(iterable): """Return True if all elements of the iterable are true (or if the iterable is empty)""" for element in iterable: if not element: return False return True # Required for python2.4 backward compatibilty # Add a class called "defaultdict" which is equivalent to "collections.defaultdict" try: from collections import defaultdict except ImportError: class defaultdict(dict): """A pure-Python version of Python 2.5's defaultdict taken from http://code.activestate.com/recipes/523034-emulate-collectionsdefaultdict/""" def __init__(self, default_factory=None, * a, ** kw): if (default_factory is not None and not hasattr(default_factory, '__call__')): raise TypeError('first argument must be callable') dict.__init__(self, * a, ** kw) self.default_factory = default_factory def __getitem__(self, key): try: return dict.__getitem__(self, key) except KeyError: return self.__missing__(key) def __missing__(self, key): if self.default_factory is None: raise KeyError(key) self[key] = value = self.default_factory() return value def __reduce__(self): if self.default_factory is None: args = tuple() else: args = self.default_factory, return type(self), args, None, None, self.items() def copy(self): return self.__copy__() def __copy__(self): return type(self)(self.default_factory, self) def __deepcopy__(self, memo): import copy return type(self)(self.default_factory, copy.deepcopy(self.items())) def __repr__(self): return 'defaultdict(%s, %s)' % (self.default_factory, dict.__repr__(self)) gearman-2.0.2/gearman/connection.py0000644000076600000240000002214011455160537016074 0ustar mtaistaffimport collections import logging import socket import struct import time from gearman.errors import ConnectionError, ProtocolError, ServerUnavailable from gearman.constants import DEFAULT_GEARMAN_PORT, _DEBUG_MODE_ from gearman.protocol import GEARMAN_PARAMS_FOR_COMMAND, GEARMAN_COMMAND_TEXT_COMMAND, NULL_CHAR, \ get_command_name, pack_binary_command, parse_binary_command, parse_text_command, pack_text_command gearman_logger = logging.getLogger(__name__) class GearmanConnection(object): """A connection between a client/worker and a server. 
Can be used to reconnect (unlike a socket) Wraps a socket and provides the following functionality: Full read/write methods for Gearman BINARY commands and responses Full read/write methods for Gearman SERVER commands and responses (using GEARMAN_COMMAND_TEXT_COMMAND) Manages raw data buffers for socket-level operations Manages command buffers for gearman-level operations All I/O and buffering should be done in this class """ connect_cooldown_seconds = 1.0 def __init__(self, host=None, port=DEFAULT_GEARMAN_PORT): port = port or DEFAULT_GEARMAN_PORT self.gearman_host = host self.gearman_port = port if host is None: raise ServerUnavailable("No host specified") self._reset_connection() def _reset_connection(self): """Reset the state of this connection""" self.connected = False self.gearman_socket = None self.allowed_connect_time = 0.0 self._is_client_side = None self._is_server_side = None # Reset all our raw data buffers self._incoming_buffer = '' self._outgoing_buffer = '' # Toss all commands we may have sent or received self._incoming_commands = collections.deque() self._outgoing_commands = collections.deque() def fileno(self): """Implements fileno() for use with select.select()""" if not self.gearman_socket: self.throw_exception(message='no socket set') return self.gearman_socket.fileno() def get_address(self): """Returns the host and port""" return (self.gearman_host, self.gearman_port) def writable(self): """Returns True if we have data to write""" return self.connected and bool(self._outgoing_commands or self._outgoing_buffer) def readable(self): """Returns True if we might have data to read""" return self.connected def connect(self): """Connect to the server. Raise ConnectionError if connection fails.""" if self.connected: self.throw_exception(message='connection already established') current_time = time.time() if current_time < self.allowed_connect_time: self.throw_exception(message='attempted to connect before required cooldown') self.allowed_connect_time = current_time + self.connect_cooldown_seconds self._reset_connection() self._create_client_socket() self.connected = True self._is_client_side = True self._is_server_side = False def _create_client_socket(self): """Creates a client side socket and subsequently binds/configures our socket options""" try: client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) client_socket.connect((self.gearman_host, self.gearman_port)) except socket.error, socket_exception: self.throw_exception(exception=socket_exception) self.set_socket(client_socket) def set_socket(self, current_socket): """Setup common options for all Gearman-related sockets""" if self.gearman_socket: self.throw_exception(message='socket already bound') current_socket.setblocking(0) current_socket.settimeout(0.0) current_socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, struct.pack('L', 1)) self.gearman_socket = current_socket def read_command(self): """Reads a single command from the command queue""" if not self._incoming_commands: return None return self._incoming_commands.popleft() def read_commands_from_buffer(self): """Reads data from buffer --> command_queue""" received_commands = 0 while True: cmd_type, cmd_args, cmd_len = self._unpack_command(self._incoming_buffer) if not cmd_len: break received_commands += 1 # Store our command on the command queue # Move the self._incoming_buffer forward by the number of bytes we just read self._incoming_commands.append((cmd_type, cmd_args)) self._incoming_buffer = self._incoming_buffer[cmd_len:] return 
received_commands def read_data_from_socket(self, bytes_to_read=4096): """Reads data from socket --> buffer""" if not self.connected: self.throw_exception(message='disconnected') recv_buffer = '' try: recv_buffer = self.gearman_socket.recv(bytes_to_read) except socket.error, socket_exception: self.throw_exception(exception=socket_exception) if len(recv_buffer) == 0: self.throw_exception(message='remote disconnected') self._incoming_buffer += recv_buffer return len(self._incoming_buffer) def _unpack_command(self, given_buffer): """Conditionally unpack a binary command or a text based server command""" assert self._is_client_side is not None, "Ambiguous connection state" if not given_buffer: cmd_type = None cmd_args = None cmd_len = 0 elif given_buffer[0] == NULL_CHAR: # We'll be expecting a response if we know we're a client side command is_response = bool(self._is_client_side) cmd_type, cmd_args, cmd_len = parse_binary_command(given_buffer, is_response=is_response) else: cmd_type, cmd_args, cmd_len = parse_text_command(given_buffer) if _DEBUG_MODE_ and cmd_type is not None: gearman_logger.debug('%s - Recv - %s - %r', hex(id(self)), get_command_name(cmd_type), cmd_args) return cmd_type, cmd_args, cmd_len def send_command(self, cmd_type, cmd_args): """Adds a single gearman command to the outgoing command queue""" self._outgoing_commands.append((cmd_type, cmd_args)) def send_commands_to_buffer(self): """Sends and packs commands -> buffer""" if not self._outgoing_commands: return packed_data = [self._outgoing_buffer] while self._outgoing_commands: cmd_type, cmd_args = self._outgoing_commands.popleft() packed_command = self._pack_command(cmd_type, cmd_args) packed_data.append(packed_command) self._outgoing_buffer = ''.join(packed_data) def send_data_to_socket(self): """Send data from buffer -> socket Returns remaining size of the output buffer """ if not self.connected: self.throw_exception(message='disconnected') if not self._outgoing_buffer: return 0 try: bytes_sent = self.gearman_socket.send(self._outgoing_buffer) except socket.error, socket_exception: self.throw_exception(exception=socket_exception) if bytes_sent == 0: self.throw_exception(message='remote disconnected') self._outgoing_buffer = self._outgoing_buffer[bytes_sent:] return len(self._outgoing_buffer) def _pack_command(self, cmd_type, cmd_args): """Converts a command to its raw binary format""" if cmd_type not in GEARMAN_PARAMS_FOR_COMMAND: raise ProtocolError('Unknown command: %r' % get_command_name(cmd_type)) if _DEBUG_MODE_: gearman_logger.debug('%s - Send - %s - %r', hex(id(self)), get_command_name(cmd_type), cmd_args) if cmd_type == GEARMAN_COMMAND_TEXT_COMMAND: return pack_text_command(cmd_type, cmd_args) else: # We'll be sending a response if we know we're a server side command is_response = bool(self._is_server_side) return pack_binary_command(cmd_type, cmd_args, is_response) def close(self): """Shutdown our existing socket and reset all of our connection data""" try: if self.gearman_socket: self.gearman_socket.close() except socket.error: pass self._reset_connection() def throw_exception(self, message=None, exception=None): # Mark us as disconnected but do NOT call self._reset_connection() # Allows catchers of ConnectionError a chance to inspect the last state of this connection self.connected = False if exception: message = repr(exception) rewritten_message = "<%s:%d> %s" % (self.gearman_host, self.gearman_port, message) raise ConnectionError(rewritten_message) def __repr__(self): return ('' % (self.gearman_host, 
self.gearman_port, self.connected)) gearman-2.0.2/gearman/connection_manager.py0000644000076600000240000002556411450513501017567 0ustar mtaistaffimport logging import select as select_lib import gearman.util from gearman.connection import GearmanConnection from gearman.constants import _DEBUG_MODE_ from gearman.errors import ConnectionError, ServerUnavailable from gearman.job import GearmanJob, GearmanJobRequest from gearman import compat gearman_logger = logging.getLogger(__name__) class DataEncoder(object): @classmethod def encode(cls, encodable_object): raise NotImplementedError @classmethod def decode(cls, decodable_string): raise NotImplementedError class NoopEncoder(DataEncoder): """Provide common object dumps for all communications over gearman""" @classmethod def _enforce_byte_string(cls, given_object): if type(given_object) != str: raise TypeError("Expecting byte string, got %r" % type(given_object)) @classmethod def encode(cls, encodable_object): cls._enforce_byte_string(encodable_object) return encodable_object @classmethod def decode(cls, decodable_string): cls._enforce_byte_string(decodable_string) return decodable_string class GearmanConnectionManager(object): """Abstract base class for any Gearman-type client that needs to connect/listen to multiple connections Mananges and polls a group of gearman connections Forwards all communication between a connection and a command handler The state of a connection is represented within the command handler Automatically encodes all 'data' fields as specified in protocol.py """ command_handler_class = None connection_class = GearmanConnection job_class = GearmanJob job_request_class = GearmanJobRequest data_encoder = NoopEncoder def __init__(self, host_list=None): assert self.command_handler_class is not None, 'GearmanClientBase did not receive a command handler class' self.connection_list = [] host_list = host_list or [] for hostport_tuple in host_list: self.add_connection(hostport_tuple) self.handler_to_connection_map = {} self.connection_to_handler_map = {} self.handler_initial_state = {} def shutdown(self): # Shutdown all our connections one by one for gearman_connection in self.connection_list: gearman_connection.close() ################################### # Connection management functions # ################################### def add_connection(self, hostport_tuple): """Add a new connection to this connection manager""" gearman_host, gearman_port = gearman.util.disambiguate_server_parameter(hostport_tuple) client_connection = self.connection_class(host=gearman_host, port=gearman_port) self.connection_list.append(client_connection) return client_connection def establish_connection(self, current_connection): """Attempt to connect... if not previously connected, create a new CommandHandler to manage this connection's state !NOTE! This function can throw a ConnectionError which deriving ConnectionManagers should catch """ assert current_connection in self.connection_list, "Unknown connection - %r" % current_connection if current_connection.connected: return current_connection # !NOTE! 
May throw a ConnectionError current_connection.connect() # Initiate a new command handler every time we start a new connection current_handler = self.command_handler_class(connection_manager=self) # Handler to connection map for CommandHandler -> Connection interactions # Connection to handler map for Connection -> CommandHandler interactions self.handler_to_connection_map[current_handler] = current_connection self.connection_to_handler_map[current_connection] = current_handler current_handler.initial_state(**self.handler_initial_state) return current_connection def poll_connections_once(self, submitted_connections, timeout=None): """Does a single robust select, catching socket errors""" select_connections = set(current_connection for current_connection in submitted_connections if current_connection.connected) rd_connections = set() wr_connections = set() ex_connections = set() if timeout is not None and timeout < 0.0: return rd_connections, wr_connections, ex_connections successful_select = False while not successful_select and select_connections: select_connections -= ex_connections check_rd_connections = [current_connection for current_connection in select_connections if current_connection.readable()] check_wr_connections = [current_connection for current_connection in select_connections if current_connection.writable()] try: rd_list, wr_list, ex_list = gearman.util.select(check_rd_connections, check_wr_connections, select_connections, timeout=timeout) rd_connections |= set(rd_list) wr_connections |= set(wr_list) ex_connections |= set(ex_list) successful_select = True except (select_lib.error, ConnectionError): # On any exception, we're going to assume we ran into a socket exception # We'll need to fish for bad connections as suggested at # # http://www.amk.ca/python/howto/sockets/ for conn_to_test in select_connections: try: _, _, _ = gearman.util.select([conn_to_test], [], [], timeout=0) except (select_lib.error, ConnectionError): rd_connections.discard(conn_to_test) wr_connections.discard(conn_to_test) ex_connections.add(conn_to_test) gearman_logger.error('select error: %r' % conn_to_test) if _DEBUG_MODE_: gearman_logger.debug('select :: Poll - %d :: Read - %d :: Write - %d :: Error - %d', \ len(select_connections), len(rd_connections), len(wr_connections), len(ex_connections)) return rd_connections, wr_connections, ex_connections def handle_connection_activity(self, rd_connections, wr_connections, ex_connections): """Process all connection activity... 
executes all handle_* callbacks""" dead_connections = set() for current_connection in rd_connections: try: self.handle_read(current_connection) except ConnectionError: dead_connections.add(current_connection) for current_connection in wr_connections: try: self.handle_write(current_connection) except ConnectionError: dead_connections.add(current_connection) for current_connection in ex_connections: self.handle_error(current_connection) for current_connection in dead_connections: self.handle_error(current_connection) failed_connections = ex_connections | dead_connections return rd_connections, wr_connections, failed_connections def poll_connections_until_stopped(self, submitted_connections, callback_fxn, timeout=None): """Continue to poll our connections until we receive a stopping condition""" stopwatch = gearman.util.Stopwatch(timeout) any_activity = False callback_ok = callback_fxn(any_activity) connection_ok = compat.any(current_connection.connected for current_connection in submitted_connections) while connection_ok and callback_ok: time_remaining = stopwatch.get_time_remaining() if time_remaining == 0.0: break # Do a single robust select and handle all connection activity read_connections, write_connections, dead_connections = self.poll_connections_once(submitted_connections, timeout=time_remaining) self.handle_connection_activity(read_connections, write_connections, dead_connections) any_activity = compat.any([read_connections, write_connections, dead_connections]) callback_ok = callback_fxn(any_activity) connection_ok = compat.any(current_connection.connected for current_connection in submitted_connections) # We should raise here if we have no alive connections (don't go into a select polling loop with no connections) if not connection_ok: raise ServerUnavailable('Found no valid connections in list: %r' % self.connection_list) return bool(connection_ok and callback_ok) def handle_read(self, current_connection): """Handle all our pending socket data""" current_handler = self.connection_to_handler_map[current_connection] # Transfer data from socket -> buffer current_connection.read_data_from_socket() # Transfer command from buffer -> command queue current_connection.read_commands_from_buffer() # Notify the handler that we have commands to fetch current_handler.fetch_commands() def handle_write(self, current_connection): # Transfer command from command queue -> buffer current_connection.send_commands_to_buffer() # Transfer data from buffer -> socket current_connection.send_data_to_socket() def handle_error(self, current_connection): dead_handler = self.connection_to_handler_map.pop(current_connection, None) if dead_handler: dead_handler.on_io_error() self.handler_to_connection_map.pop(dead_handler, None) current_connection.close() ################################## # Callbacks for Command Handlers # ################################## def read_command(self, command_handler): """CommandHandlers call this function to fetch pending commands NOTE: CommandHandlers have NO knowledge as to which connection they're representing ConnectionManagers must forward inbound commands to CommandHandlers """ gearman_connection = self.handler_to_connection_map[command_handler] cmd_tuple = gearman_connection.read_command() if cmd_tuple is None: return cmd_tuple cmd_type, cmd_args = cmd_tuple return cmd_type, cmd_args def send_command(self, command_handler, cmd_type, cmd_args): """CommandHandlers call this function to send pending commands NOTE: CommandHandlers have NO knowledge as to which connection 
they're representing ConnectionManagers must forward outbound commands to Connections """ gearman_connection = self.handler_to_connection_map[command_handler] gearman_connection.send_command(cmd_type, cmd_args) def on_gearman_error(self, error_code, error_text): gearman_logger.error('Received error from server: %s: %s' % (error_code, error_text)) return False gearman-2.0.2/gearman/constants.py0000644000076600000240000000073211435067246015755 0ustar mtaistaff_DEBUG_MODE_ = False DEFAULT_GEARMAN_PORT = 4730 PRIORITY_NONE = None PRIORITY_LOW = 'LOW' PRIORITY_HIGH = 'HIGH' JOB_UNKNOWN = 'UNKNOWN' # Request state is currently unknown, either unsubmitted or connection failed JOB_PENDING = 'PENDING' # Request has been submitted, pending handle JOB_CREATED = 'CREATED' # Request has been accepted JOB_FAILED = 'FAILED' # Request received an explicit fail JOB_COMPLETE = 'COMPLETE' # Request received an explicit complete gearman-2.0.2/gearman/errors.py0000644000076600000240000000067511435067246015263 0ustar mtaistaffclass GearmanError(Exception): pass class ConnectionError(GearmanError): pass class ServerUnavailable(GearmanError): pass class ProtocolError(GearmanError): pass class UnknownCommandError(GearmanError): pass class ExceededConnectionAttempts(GearmanError): pass class InvalidClientState(GearmanError): pass class InvalidWorkerState(GearmanError): pass class InvalidAdminClientState(GearmanError): pass gearman-2.0.2/gearman/job.py0000644000076600000240000000561611455425120014510 0ustar mtaistaffimport collections from gearman.constants import PRIORITY_NONE, JOB_UNKNOWN, JOB_PENDING, JOB_CREATED, JOB_FAILED, JOB_COMPLETE class GearmanJob(object): """Represents the basics of a job... used in GearmanClient / GearmanWorker to represent job states""" def __init__(self, connection, handle, task, unique, data): self.connection = connection self.handle = handle self.task = task self.unique = unique self.data = data def to_dict(self): return dict(task=self.task, job_handle=self.handle, unique=self.unique, data=self.data) def __repr__(self): return '' % (self.connection, self.handle, self.task, self.unique, self.data) class GearmanJobRequest(object): """Represents a job request... 
used in GearmanClient to represent job states""" def __init__(self, gearman_job, initial_priority=PRIORITY_NONE, background=False, max_attempts=1): self.gearman_job = gearman_job self.priority = initial_priority self.background = background self.connection_attempts = 0 self.max_connection_attempts = max_attempts self.initialize_request() def initialize_request(self): # Holds WORK_COMPLETE responses self.result = None # Holds WORK_EXCEPTION responses self.exception = None # Queues to hold WORK_WARNING, WORK_DATA responses self.warning_updates = collections.deque() self.data_updates = collections.deque() # Holds WORK_STATUS / STATUS_REQ responses self.status = {} self.state = JOB_UNKNOWN self.timed_out = False def reset(self): self.initialize_request() self.connection = None self.handle = None @property def status_updates(self): """Deprecated since 2.0.1, removing in next major release""" output_queue = collections.deque() if self.status: output_queue.append((self.status.get('numerator', 0), self.status.get('denominator', 0))) return output_queue @property def server_status(self): """Deprecated since 2.0.1, removing in next major release""" return self.status @property def job(self): return self.gearman_job @property def complete(self): background_complete = bool(self.background and self.state in (JOB_CREATED)) foreground_complete = bool(not self.background and self.state in (JOB_FAILED, JOB_COMPLETE)) actually_complete = background_complete or foreground_complete return actually_complete def __repr__(self): formatted_representation = '' return formatted_representation % (self.job.task, self.job.unique, self.priority, self.background, self.state, self.timed_out) gearman-2.0.2/gearman/protocol.py0000644000076600000240000002717311450513501015575 0ustar mtaistaffimport struct from gearman.constants import PRIORITY_NONE, PRIORITY_LOW, PRIORITY_HIGH from gearman.errors import ProtocolError from gearman import compat # Protocol specific constants NULL_CHAR = '\x00' MAGIC_RES_STRING = '%sRES' % NULL_CHAR MAGIC_REQ_STRING = '%sREQ' % NULL_CHAR COMMAND_HEADER_SIZE = 12 # Gearman commands 1-9 GEARMAN_COMMAND_CAN_DO = 1 GEARMAN_COMMAND_CANT_DO = 2 GEARMAN_COMMAND_RESET_ABILITIES = 3 GEARMAN_COMMAND_PRE_SLEEP = 4 GEARMAN_COMMAND_NOOP = 6 GEARMAN_COMMAND_SUBMIT_JOB = 7 GEARMAN_COMMAND_JOB_CREATED = 8 GEARMAN_COMMAND_GRAB_JOB = 9 # Gearman commands 10-19 GEARMAN_COMMAND_NO_JOB = 10 GEARMAN_COMMAND_JOB_ASSIGN = 11 GEARMAN_COMMAND_WORK_STATUS = 12 GEARMAN_COMMAND_WORK_COMPLETE = 13 GEARMAN_COMMAND_WORK_FAIL = 14 GEARMAN_COMMAND_GET_STATUS = 15 GEARMAN_COMMAND_ECHO_REQ = 16 GEARMAN_COMMAND_ECHO_RES = 17 GEARMAN_COMMAND_SUBMIT_JOB_BG = 18 GEARMAN_COMMAND_ERROR = 19 # Gearman commands 20-29 GEARMAN_COMMAND_STATUS_RES = 20 GEARMAN_COMMAND_SUBMIT_JOB_HIGH = 21 GEARMAN_COMMAND_SET_CLIENT_ID = 22 GEARMAN_COMMAND_CAN_DO_TIMEOUT = 23 GEARMAN_COMMAND_ALL_YOURS = 24 GEARMAN_COMMAND_WORK_EXCEPTION = 25 GEARMAN_COMMAND_OPTION_REQ = 26 GEARMAN_COMMAND_OPTION_RES = 27 GEARMAN_COMMAND_WORK_DATA = 28 GEARMAN_COMMAND_WORK_WARNING = 29 # Gearman commands 30-39 GEARMAN_COMMAND_GRAB_JOB_UNIQ = 30 GEARMAN_COMMAND_JOB_ASSIGN_UNIQ = 31 GEARMAN_COMMAND_SUBMIT_JOB_HIGH_BG = 32 GEARMAN_COMMAND_SUBMIT_JOB_LOW = 33 GEARMAN_COMMAND_SUBMIT_JOB_LOW_BG = 34 # Fake command code GEARMAN_COMMAND_TEXT_COMMAND = 9999 GEARMAN_PARAMS_FOR_COMMAND = { # Gearman commands 1-9 GEARMAN_COMMAND_CAN_DO: ['task'], GEARMAN_COMMAND_CANT_DO: ['task'], GEARMAN_COMMAND_RESET_ABILITIES: [], GEARMAN_COMMAND_PRE_SLEEP: [], GEARMAN_COMMAND_NOOP: [], 
GEARMAN_COMMAND_SUBMIT_JOB: ['task', 'unique', 'data'], GEARMAN_COMMAND_JOB_CREATED: ['job_handle'], GEARMAN_COMMAND_GRAB_JOB: [], # Gearman commands 10-19 GEARMAN_COMMAND_NO_JOB: [], GEARMAN_COMMAND_JOB_ASSIGN: ['job_handle', 'task', 'data'], GEARMAN_COMMAND_WORK_STATUS: ['job_handle', 'numerator', 'denominator'], GEARMAN_COMMAND_WORK_COMPLETE: ['job_handle', 'data'], GEARMAN_COMMAND_WORK_FAIL: ['job_handle'], GEARMAN_COMMAND_GET_STATUS: ['job_handle'], GEARMAN_COMMAND_ECHO_REQ: ['data'], GEARMAN_COMMAND_ECHO_RES: ['data'], GEARMAN_COMMAND_SUBMIT_JOB_BG: ['task', 'unique', 'data'], GEARMAN_COMMAND_ERROR: ['error_code', 'error_text'], # Gearman commands 20-29 GEARMAN_COMMAND_STATUS_RES: ['job_handle', 'known', 'running', 'numerator', 'denominator'], GEARMAN_COMMAND_SUBMIT_JOB_HIGH: ['task', 'unique', 'data'], GEARMAN_COMMAND_SET_CLIENT_ID: ['client_id'], GEARMAN_COMMAND_CAN_DO_TIMEOUT: ['task', 'timeout'], GEARMAN_COMMAND_ALL_YOURS: [], GEARMAN_COMMAND_WORK_EXCEPTION: ['job_handle', 'data'], GEARMAN_COMMAND_OPTION_REQ: ['option_name'], GEARMAN_COMMAND_OPTION_RES: ['option_name'], GEARMAN_COMMAND_WORK_DATA: ['job_handle', 'data'], GEARMAN_COMMAND_WORK_WARNING: ['job_handle', 'data'], # Gearman commands 30-39 GEARMAN_COMMAND_GRAB_JOB_UNIQ: [], GEARMAN_COMMAND_JOB_ASSIGN_UNIQ: ['job_handle', 'task', 'unique', 'data'], GEARMAN_COMMAND_SUBMIT_JOB_HIGH_BG: ['task', 'unique', 'data'], GEARMAN_COMMAND_SUBMIT_JOB_LOW: ['task', 'unique', 'data'], GEARMAN_COMMAND_SUBMIT_JOB_LOW_BG: ['task', 'unique', 'data'], # Fake gearman command GEARMAN_COMMAND_TEXT_COMMAND: ['raw_text'] } GEARMAN_COMMAND_TO_NAME = { GEARMAN_COMMAND_CAN_DO: 'GEARMAN_COMMAND_CAN_DO', GEARMAN_COMMAND_CANT_DO: 'GEARMAN_COMMAND_CANT_DO', GEARMAN_COMMAND_RESET_ABILITIES: 'GEARMAN_COMMAND_RESET_ABILITIES', GEARMAN_COMMAND_PRE_SLEEP: 'GEARMAN_COMMAND_PRE_SLEEP', GEARMAN_COMMAND_NOOP: 'GEARMAN_COMMAND_NOOP', GEARMAN_COMMAND_SUBMIT_JOB: 'GEARMAN_COMMAND_SUBMIT_JOB', GEARMAN_COMMAND_JOB_CREATED: 'GEARMAN_COMMAND_JOB_CREATED', GEARMAN_COMMAND_GRAB_JOB: 'GEARMAN_COMMAND_GRAB_JOB', # Gearman commands 10-19 GEARMAN_COMMAND_NO_JOB: 'GEARMAN_COMMAND_NO_JOB', GEARMAN_COMMAND_JOB_ASSIGN: 'GEARMAN_COMMAND_JOB_ASSIGN', GEARMAN_COMMAND_WORK_STATUS: 'GEARMAN_COMMAND_WORK_STATUS', GEARMAN_COMMAND_WORK_COMPLETE: 'GEARMAN_COMMAND_WORK_COMPLETE', GEARMAN_COMMAND_WORK_FAIL: 'GEARMAN_COMMAND_WORK_FAIL', GEARMAN_COMMAND_GET_STATUS: 'GEARMAN_COMMAND_GET_STATUS', GEARMAN_COMMAND_ECHO_REQ: 'GEARMAN_COMMAND_ECHO_REQ', GEARMAN_COMMAND_ECHO_RES: 'GEARMAN_COMMAND_ECHO_RES', GEARMAN_COMMAND_SUBMIT_JOB_BG: 'GEARMAN_COMMAND_SUBMIT_JOB_BG', GEARMAN_COMMAND_ERROR: 'GEARMAN_COMMAND_ERROR', # Gearman commands 20-29 GEARMAN_COMMAND_STATUS_RES: 'GEARMAN_COMMAND_STATUS_RES', GEARMAN_COMMAND_SUBMIT_JOB_HIGH: 'GEARMAN_COMMAND_SUBMIT_JOB_HIGH', GEARMAN_COMMAND_SET_CLIENT_ID: 'GEARMAN_COMMAND_SET_CLIENT_ID', GEARMAN_COMMAND_CAN_DO_TIMEOUT: 'GEARMAN_COMMAND_CAN_DO_TIMEOUT', GEARMAN_COMMAND_ALL_YOURS: 'GEARMAN_COMMAND_ALL_YOURS', GEARMAN_COMMAND_WORK_EXCEPTION: 'GEARMAN_COMMAND_WORK_EXCEPTION', GEARMAN_COMMAND_OPTION_REQ: 'GEARMAN_COMMAND_OPTION_REQ', GEARMAN_COMMAND_OPTION_RES: 'GEARMAN_COMMAND_OPTION_RES', GEARMAN_COMMAND_WORK_DATA: 'GEARMAN_COMMAND_WORK_DATA', GEARMAN_COMMAND_WORK_WARNING: 'GEARMAN_COMMAND_WORK_WARNING', # Gearman commands 30-39 GEARMAN_COMMAND_GRAB_JOB_UNIQ: 'GEARMAN_COMMAND_GRAB_JOB_UNIQ', GEARMAN_COMMAND_JOB_ASSIGN_UNIQ: 'GEARMAN_COMMAND_JOB_ASSIGN_UNIQ', GEARMAN_COMMAND_SUBMIT_JOB_HIGH_BG: 'GEARMAN_COMMAND_SUBMIT_JOB_HIGH_BG', GEARMAN_COMMAND_SUBMIT_JOB_LOW: 
'GEARMAN_COMMAND_SUBMIT_JOB_LOW', GEARMAN_COMMAND_SUBMIT_JOB_LOW_BG: 'GEARMAN_COMMAND_SUBMIT_JOB_LOW_BG', GEARMAN_COMMAND_TEXT_COMMAND: 'GEARMAN_COMMAND_TEXT_COMMAND' } GEARMAN_SERVER_COMMAND_STATUS = 'status' GEARMAN_SERVER_COMMAND_VERSION = 'version' GEARMAN_SERVER_COMMAND_WORKERS = 'workers' GEARMAN_SERVER_COMMAND_MAXQUEUE = 'maxqueue' GEARMAN_SERVER_COMMAND_SHUTDOWN = 'shutdown' def get_command_name(cmd_type): return GEARMAN_COMMAND_TO_NAME.get(cmd_type, cmd_type) def submit_cmd_for_background_priority(background, priority): cmd_type_lookup = { (True, PRIORITY_NONE): GEARMAN_COMMAND_SUBMIT_JOB_BG, (True, PRIORITY_LOW): GEARMAN_COMMAND_SUBMIT_JOB_LOW_BG, (True, PRIORITY_HIGH): GEARMAN_COMMAND_SUBMIT_JOB_HIGH_BG, (False, PRIORITY_NONE): GEARMAN_COMMAND_SUBMIT_JOB, (False, PRIORITY_LOW): GEARMAN_COMMAND_SUBMIT_JOB_LOW, (False, PRIORITY_HIGH): GEARMAN_COMMAND_SUBMIT_JOB_HIGH } lookup_tuple = (background, priority) cmd_type = cmd_type_lookup[lookup_tuple] return cmd_type def parse_binary_command(in_buffer, is_response=True): """Parse data and return (command type, command arguments dict, command size) or (None, None, data) if there's not enough data for a complete command. """ in_buffer_size = len(in_buffer) magic = None cmd_type = None cmd_args = None cmd_len = 0 expected_packet_size = None # If we don't have enough data to parse, error early if in_buffer_size < COMMAND_HEADER_SIZE: return cmd_type, cmd_args, cmd_len # By default, we'll assume we're dealing with a gearman command magic, cmd_type, cmd_len = struct.unpack('!4sII', in_buffer[:COMMAND_HEADER_SIZE]) received_bad_response = is_response and bool(magic != MAGIC_RES_STRING) received_bad_request = not is_response and bool(magic != MAGIC_REQ_STRING) if received_bad_response or received_bad_request: raise ProtocolError('Malformed Magic') expected_cmd_params = GEARMAN_PARAMS_FOR_COMMAND.get(cmd_type, None) # GEARMAN_COMMAND_TEXT_COMMAND is a faked command that we use to support server text-based commands if expected_cmd_params is None or cmd_type == GEARMAN_COMMAND_TEXT_COMMAND: raise ProtocolError('Received unknown binary command: %s' % cmd_type) # If everything indicates this is a valid command, we should check to see if we have enough stuff to read in our buffer expected_packet_size = COMMAND_HEADER_SIZE + cmd_len if in_buffer_size < expected_packet_size: return None, None, 0 binary_payload = in_buffer[COMMAND_HEADER_SIZE:expected_packet_size] split_arguments = [] if len(expected_cmd_params) > 0: split_arguments = binary_payload.split(NULL_CHAR, len(expected_cmd_params) - 1) elif binary_payload: raise ProtocolError('Expected no binary payload: %s' % get_command_name(cmd_type)) # This is a sanity check on the binary_payload.split() phase # We should never be able to get here with any VALID gearman data if len(split_arguments) != len(expected_cmd_params): raise ProtocolError('Received %d argument(s), expecting %d argument(s): %s' % (len(split_arguments), len(expected_cmd_params), get_command_name(cmd_type))) # Iterate through the split arguments and assign them labels based on their order cmd_args = dict((param_label, param_value) for param_label, param_value in zip(expected_cmd_params, split_arguments)) return cmd_type, cmd_args, expected_packet_size def pack_binary_command(cmd_type, cmd_args, is_response=False): """Packs the given command using the parameter ordering specified in GEARMAN_PARAMS_FOR_COMMAND. *NOTE* Expects that all arguments in cmd_args are already str's. 
""" expected_cmd_params = GEARMAN_PARAMS_FOR_COMMAND.get(cmd_type, None) if expected_cmd_params is None or cmd_type == GEARMAN_COMMAND_TEXT_COMMAND: raise ProtocolError('Received unknown binary command: %s' % get_command_name(cmd_type)) expected_parameter_set = set(expected_cmd_params) received_parameter_set = set(cmd_args.keys()) if expected_parameter_set != received_parameter_set: raise ProtocolError('Received arguments did not match expected arguments: %r != %r' % (expected_parameter_set, received_parameter_set)) # Select the right expected magic if is_response: magic = MAGIC_RES_STRING else: magic = MAGIC_REQ_STRING # !NOTE! str should be replaced with bytes in Python 3.x # We will iterate in ORDER and str all our command arguments if compat.any(type(param_value) != str for param_value in cmd_args.itervalues()): raise ProtocolError('Received non-binary arguments: %r' % cmd_args) data_items = [cmd_args[param] for param in expected_cmd_params] binary_payload = NULL_CHAR.join(data_items) # Pack the header in the !4sII format then append the binary payload payload_size = len(binary_payload) packing_format = '!4sII%ds' % payload_size return struct.pack(packing_format, magic, cmd_type, payload_size, binary_payload) def parse_text_command(in_buffer): """Parse a text command and return a single line at a time""" cmd_type = None cmd_args = None cmd_len = 0 if '\n' not in in_buffer: return cmd_type, cmd_args, cmd_len text_command, in_buffer = in_buffer.split('\n', 1) if NULL_CHAR in text_command: raise ProtocolError('Received unexpected character: %s' % text_command) # Fake gearman command "TEXT_COMMAND" used to process server admin client responses cmd_type = GEARMAN_COMMAND_TEXT_COMMAND cmd_args = dict(raw_text=text_command) cmd_len = len(text_command) + 1 return cmd_type, cmd_args, cmd_len def pack_text_command(cmd_type, cmd_args): """Parse a text command and return a single line at a time""" if cmd_type != GEARMAN_COMMAND_TEXT_COMMAND: raise ProtocolError('Unknown cmd_type: Received %s, expecting %s' % (get_command_name(cmd_type), get_command_name(GEARMAN_COMMAND_TEXT_COMMAND))) cmd_line = cmd_args.get('raw_text') if cmd_line is None: raise ProtocolError('Did not receive arguments any valid arguments: %s' % cmd_args) return str(cmd_line)gearman-2.0.2/gearman/util.py0000644000076600000240000000442211450513501014701 0ustar mtaistaff#!/usr/bin/env python """ Gearman Client Utils """ import errno import select as select_lib import time from gearman.constants import DEFAULT_GEARMAN_PORT class Stopwatch(object): """Timer class that keeps track of time remaining""" def __init__(self, time_remaining): if time_remaining is not None: self.stop_time = time.time() + time_remaining else: self.stop_time = None def get_time_remaining(self): if self.stop_time is None: return None current_time = time.time() if not self.has_time_remaining(current_time): return 0.0 time_remaining = self.stop_time - current_time return time_remaining def has_time_remaining(self, time_comparison=None): time_comparison = time_comparison or self.get_time_remaining() if self.stop_time is None: return True return bool(time_comparison < self.stop_time) def disambiguate_server_parameter(hostport_tuple): """Takes either a tuple of (address, port) or a string of 'address:port' and disambiguates them for us""" if type(hostport_tuple) is tuple: gearman_host, gearman_port = hostport_tuple elif ':' in hostport_tuple: gearman_host, gearman_possible_port = hostport_tuple.split(':') gearman_port = int(gearman_possible_port) else: gearman_host 
= hostport_tuple gearman_port = DEFAULT_GEARMAN_PORT return gearman_host, gearman_port def select(rlist, wlist, xlist, timeout=None): """Behave similar to select.select, except ignoring certain types of exceptions""" rd_list = [] wr_list = [] ex_list = [] select_args = [rlist, wlist, xlist] if timeout is not None: select_args.append(timeout) try: rd_list, wr_list, ex_list = select_lib.select(*select_args) except select_lib.error, exc: # Ignore interrupted system call, reraise anything else if exc[0] != errno.EINTR: raise return rd_list, wr_list, ex_list def unlist(given_list): """Convert the (possibly) single item list into a single item""" list_size = len(given_list) if list_size == 0: return None elif list_size == 1: return given_list[0] else: raise ValueError(list_size) gearman-2.0.2/gearman/worker.py0000644000076600000240000002332211455425120015241 0ustar mtaistaffimport logging import random import sys from gearman import compat from gearman.connection_manager import GearmanConnectionManager from gearman.worker_handler import GearmanWorkerCommandHandler from gearman.errors import ConnectionError gearman_logger = logging.getLogger(__name__) POLL_TIMEOUT_IN_SECONDS = 60.0 class GearmanWorker(GearmanConnectionManager): """ GearmanWorker :: Interface to accept jobs from a Gearman server """ command_handler_class = GearmanWorkerCommandHandler def __init__(self, host_list=None): super(GearmanWorker, self).__init__(host_list=host_list) self.randomized_connections = None self.worker_abilities = {} self.worker_client_id = None self.command_handler_holding_job_lock = None self._update_initial_state() def _update_initial_state(self): self.handler_initial_state['abilities'] = self.worker_abilities.keys() self.handler_initial_state['client_id'] = self.worker_client_id ######################################################## ##### Public methods for general GearmanWorker use ##### ######################################################## def register_task(self, task, callback_function): """Register a function with this worker def function_callback(calling_gearman_worker, current_job): return current_job.data """ self.worker_abilities[task] = callback_function self._update_initial_state() for command_handler in self.handler_to_connection_map.iterkeys(): command_handler.set_abilities(self.handler_initial_state['abilities']) return task def unregister_task(self, task): """Unregister a function with worker""" self.worker_abilities.pop(task, None) self._update_initial_state() for command_handler in self.handler_to_connection_map.iterkeys(): command_handler.set_abilities(self.handler_initial_state['abilities']) return task def set_client_id(self, client_id): """Notify the server that we should be identified as this client ID""" self.worker_client_id = client_id self._update_initial_state() for command_handler in self.handler_to_connection_map.iterkeys(): command_handler.set_client_id(self.handler_initial_state['client_id']) return client_id def work(self, poll_timeout=POLL_TIMEOUT_IN_SECONDS): """Loop indefinitely, complete tasks from all connections.""" continue_working = True worker_connections = [] def continue_while_connections_alive(any_activity): return self.after_poll(any_activity) # Shuffle our connections after the poll timeout while continue_working: worker_connections = self.establish_worker_connections() continue_working = self.poll_connections_until_stopped(worker_connections, continue_while_connections_alive, timeout=poll_timeout) # If we were kicked out of the worker loop, we should 
shutdown all our connections for current_connection in worker_connections: current_connection.close() def shutdown(self): self.command_handler_holding_job_lock = None super(GearmanWorker, self).shutdown() ############################################################### ## Methods to override when dealing with connection polling ## ############################################################## def establish_worker_connections(self): """Return a shuffled list of connections that are alive, and try to reconnect to dead connections if necessary.""" self.randomized_connections = list(self.connection_list) random.shuffle(self.randomized_connections) output_connections = [] for current_connection in self.randomized_connections: try: valid_connection = self.establish_connection(current_connection) output_connections.append(valid_connection) except ConnectionError: pass return output_connections def after_poll(self, any_activity): """Polling callback to notify any outside listeners whats going on with the GearmanWorker. Return True to continue polling, False to exit the work loop""" return True def handle_error(self, current_connection): """If we discover that a connection has a problem, we better release the job lock""" current_handler = self.connection_to_handler_map.get(current_connection) if current_handler: self.set_job_lock(current_handler, lock=False) super(GearmanWorker, self).handle_error(current_connection) ############################################################# ## Public methods so Gearman jobs can send Gearman updates ## ############################################################# def _get_handler_for_job(self, current_job): return self.connection_to_handler_map[current_job.connection] def wait_until_updates_sent(self, multiple_gearman_jobs, poll_timeout=None): connection_set = set([current_job.connection for current_job in multiple_gearman_jobs]) def continue_while_updates_pending(any_activity): return compat.any(current_connection.writable() for current_connection in connection_set) self.poll_connections_until_stopped(connection_set, continue_while_updates_pending, timeout=poll_timeout) def send_job_status(self, current_job, numerator, denominator, poll_timeout=None): """Send a Gearman JOB_STATUS update for an inflight job""" current_handler = self._get_handler_for_job(current_job) current_handler.send_job_status(current_job, numerator=numerator, denominator=denominator) self.wait_until_updates_sent([current_job], poll_timeout=poll_timeout) def send_job_complete(self, current_job, data, poll_timeout=None): current_handler = self._get_handler_for_job(current_job) current_handler.send_job_complete(current_job, data=data) self.wait_until_updates_sent([current_job], poll_timeout=poll_timeout) def send_job_failure(self, current_job, poll_timeout=None): """Removes a job from the queue if its backgrounded""" current_handler = self._get_handler_for_job(current_job) current_handler.send_job_failure(current_job) self.wait_until_updates_sent([current_job], poll_timeout=poll_timeout) def send_job_exception(self, current_job, data, poll_timeout=None): """Removes a job from the queue if its backgrounded""" # Using GEARMAND_COMMAND_WORK_EXCEPTION is not recommended at time of this writing [2010-02-24] # http://groups.google.com/group/gearman/browse_thread/thread/5c91acc31bd10688/529e586405ed37fe # current_handler = self._get_handler_for_job(current_job) current_handler.send_job_exception(current_job, data=data) current_handler.send_job_failure(current_job) 
self.wait_until_updates_sent([current_job], poll_timeout=poll_timeout) def send_job_data(self, current_job, data, poll_timeout=None): """Send a Gearman JOB_DATA update for an inflight job""" current_handler = self._get_handler_for_job(current_job) current_handler.send_job_data(current_job, data=data) self.wait_until_updates_sent([current_job], poll_timeout=poll_timeout) def send_job_warning(self, current_job, data, poll_timeout=None): """Send a Gearman JOB_WARNING update for an inflight job""" current_handler = self._get_handler_for_job(current_job) current_handler.send_job_warning(current_job, data=data) self.wait_until_updates_sent([current_job], poll_timeout=poll_timeout) ##################################################### ##### Callback methods for GearmanWorkerHandler ##### ##################################################### def create_job(self, command_handler, job_handle, task, unique, data): """Create a new job using our self.job_class""" current_connection = self.handler_to_connection_map[command_handler] return self.job_class(current_connection, job_handle, task, unique, data) def on_job_execute(self, current_job): try: function_callback = self.worker_abilities[current_job.task] job_result = function_callback(self, current_job) except Exception: return self.on_job_exception(current_job, sys.exc_info()) return self.on_job_complete(current_job, job_result) def on_job_exception(self, current_job, exc_info): self.send_job_failure(current_job) return False def on_job_complete(self, current_job, job_result): self.send_job_complete(current_job, job_result) return True def set_job_lock(self, command_handler, lock): """Set a worker level job lock so we don't try to hold onto 2 jobs at anytime""" if command_handler not in self.handler_to_connection_map: return False failed_lock = bool(lock and self.command_handler_holding_job_lock is not None) failed_unlock = bool(not lock and self.command_handler_holding_job_lock != command_handler) # If we've already been locked, we should say the lock failed # If we're attempting to unlock something when we don't have a lock, we're in a bad state if failed_lock or failed_unlock: return False if lock: self.command_handler_holding_job_lock = command_handler else: self.command_handler_holding_job_lock = None return True def check_job_lock(self, command_handler): """Check to see if we hold the job lock""" return bool(self.command_handler_holding_job_lock == command_handler) gearman-2.0.2/gearman/worker_handler.py0000644000076600000240000001470211435067246016751 0ustar mtaistaffimport logging from gearman.command_handler import GearmanCommandHandler from gearman.errors import InvalidWorkerState from gearman.protocol import GEARMAN_COMMAND_PRE_SLEEP, GEARMAN_COMMAND_RESET_ABILITIES, GEARMAN_COMMAND_CAN_DO, GEARMAN_COMMAND_SET_CLIENT_ID, GEARMAN_COMMAND_GRAB_JOB_UNIQ, \ GEARMAN_COMMAND_WORK_STATUS, GEARMAN_COMMAND_WORK_COMPLETE, GEARMAN_COMMAND_WORK_FAIL, GEARMAN_COMMAND_WORK_EXCEPTION, GEARMAN_COMMAND_WORK_WARNING, GEARMAN_COMMAND_WORK_DATA gearman_logger = logging.getLogger(__name__) class GearmanWorkerCommandHandler(GearmanCommandHandler): """GearmanWorker state machine on a per connection basis A worker can be in the following distinct states: SLEEP -> Doing nothing, can be awoken AWAKE -> Transitional state (for NOOP) AWAITING_JOB -> Holding worker level job lock and awaiting a server response EXECUTING_JOB -> Transitional state (for ASSIGN_JOB) """ def __init__(self, connection_manager=None): super(GearmanWorkerCommandHandler, 
self).__init__(connection_manager=connection_manager) self._handler_abilities = [] self._client_id = None def initial_state(self, abilities=None, client_id=None): self.set_client_id(client_id) self.set_abilities(abilities) self._sleep() ################################################################## ##### Public interface methods to be called by GearmanWorker ##### ################################################################## def set_abilities(self, connection_abilities_list): assert type(connection_abilities_list) in (list, tuple) self._handler_abilities = connection_abilities_list self.send_command(GEARMAN_COMMAND_RESET_ABILITIES) for task in self._handler_abilities: self.send_command(GEARMAN_COMMAND_CAN_DO, task=task) def set_client_id(self, client_id): self._client_id = client_id if self._client_id is not None: self.send_command(GEARMAN_COMMAND_SET_CLIENT_ID, client_id=self._client_id) ############################################################### #### Convenience methods for typical gearman jobs to call ##### ############################################################### def send_job_status(self, current_job, numerator, denominator): assert type(numerator) in (int, float), 'Numerator must be a numeric value' assert type(denominator) in (int, float), 'Denominator must be a numeric value' self.send_command(GEARMAN_COMMAND_WORK_STATUS, job_handle=current_job.handle, numerator=str(numerator), denominator=str(denominator)) def send_job_complete(self, current_job, data): """Removes a job from the queue if its backgrounded""" self.send_command(GEARMAN_COMMAND_WORK_COMPLETE, job_handle=current_job.handle, data=self.encode_data(data)) def send_job_failure(self, current_job): """Removes a job from the queue if its backgrounded""" self.send_command(GEARMAN_COMMAND_WORK_FAIL, job_handle=current_job.handle) def send_job_exception(self, current_job, data): # Using GEARMAND_COMMAND_WORK_EXCEPTION is not recommended at time of this writing [2010-02-24] # http://groups.google.com/group/gearman/browse_thread/thread/5c91acc31bd10688/529e586405ed37fe # self.send_command(GEARMAN_COMMAND_WORK_EXCEPTION, job_handle=current_job.handle, data=self.encode_data(data)) def send_job_data(self, current_job, data): self.send_command(GEARMAN_COMMAND_WORK_DATA, job_handle=current_job.handle, data=self.encode_data(data)) def send_job_warning(self, current_job, data): self.send_command(GEARMAN_COMMAND_WORK_WARNING, job_handle=current_job.handle, data=self.encode_data(data)) ########################################################### ### Callbacks when we receive a command from the server ### ########################################################### def _grab_job(self): self.send_command(GEARMAN_COMMAND_GRAB_JOB_UNIQ) def _sleep(self): self.send_command(GEARMAN_COMMAND_PRE_SLEEP) def _check_job_lock(self): return self.connection_manager.check_job_lock(self) def _acquire_job_lock(self): return self.connection_manager.set_job_lock(self, lock=True) def _release_job_lock(self): if not self.connection_manager.set_job_lock(self, lock=False): raise InvalidWorkerState("Unable to release job lock for %r" % self) return True def recv_noop(self): """Transition from being SLEEP --> AWAITING_JOB / SLEEP AWAITING_JOB -> AWAITING_JOB :: Noop transition, we're already awaiting a job SLEEP -> AWAKE -> AWAITING_JOB :: Transition if we can acquire the worker job lock SLEEP -> AWAKE -> SLEEP :: Transition if we can NOT acquire a worker job lock """ if self._check_job_lock(): pass elif self._acquire_job_lock(): self._grab_job() 
        else:
            self._sleep()

        return True

    def recv_no_job(self):
        """Transition from being AWAITING_JOB --> SLEEP

        AWAITING_JOB -> SLEEP :: Always transition to sleep if we have nothing to do
        """
        self._release_job_lock()
        self._sleep()

        return True

    def recv_job_assign_uniq(self, job_handle, task, unique, data):
        """Transition from being AWAITING_JOB --> EXECUTE_JOB --> SLEEP

        AWAITING_JOB -> EXECUTE_JOB -> SLEEP :: Always transition once we're given a job
        """
        assert task in self._handler_abilities, '%s not found in %r' % (task, self._handler_abilities)

        # After this point, we know this connection handler is holding onto the job lock so we don't need to acquire it again
        if not self.connection_manager.check_job_lock(self):
            raise InvalidWorkerState("Received a job when we weren't expecting one")

        gearman_job = self.connection_manager.create_job(self, job_handle, task, unique, self.decode_data(data))

        # Create a new job
        self.connection_manager.on_job_execute(gearman_job)

        # Release the job lock once we're done and go back to sleep
        self._release_job_lock()
        self._sleep()

        return True

    def recv_job_assign(self, job_handle, task, data):
        """JOB_ASSIGN and JOB_ASSIGN_UNIQ are essentially the same"""
        # Delegate to the JOB_ASSIGN_UNIQ handler with no unique value
        return self.recv_job_assign_uniq(job_handle=job_handle, task=task, unique=None, data=data)
gearman-2.0.2/gearman.egg-info/0000755000076600000240000000000011513151461015045 5ustar mtaistaffgearman-2.0.2/gearman.egg-info/dependency_links.txt0000644000076600000240000000000111513151461021113 0ustar mtaistaff
gearman-2.0.2/gearman.egg-info/PKG-INFO0000644000076600000240000000303111513151461016143 0ustar mtaistaffMetadata-Version: 1.0 Name: gearman Version: 2.0.2 Summary: Gearman API - Client, worker, and admin client interfaces Home-page: http://github.com/Yelp/python-gearman/ Author: Matthew Tai Author-email: mtai@yelp.com License: Apache Description: ============== python-gearman ============== Description =========== Python Gearman API - Client, worker, and admin client interfaces For information on Gearman and a C-based Gearman server, see http://www.gearman.org/ Installation ============ * easy_install gearman * pip install gearman Links ===== * 2.x source * 2.x documentation * 1.x source * 1.x documentation Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: Apache Software License Classifier: Natural Language :: English Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2.4 Classifier: Programming Language :: Python :: 2.5 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Topic :: Software Development :: Libraries :: Python Modules
gearman-2.0.2/gearman.egg-info/SOURCES.txt0000644000076600000240000000130611513151461016731 0ustar mtaistaffAUTHORS.txt CHANGES.txt LICENSE.txt MANIFEST.in README.txt TODO.txt setup.cfg setup.py docs/1to2.rst docs/Makefile docs/admin_client.rst docs/architecture.rst docs/client.rst docs/index.rst docs/job.rst docs/library.rst docs/worker.rst gearman/__init__.py gearman/admin_client.py gearman/admin_client_handler.py gearman/client.py gearman/client_handler.py gearman/command_handler.py gearman/compat.py gearman/connection.py gearman/connection_manager.py gearman/constants.py gearman/errors.py gearman/job.py gearman/protocol.py gearman/util.py gearman/worker.py gearman/worker_handler.py gearman.egg-info/PKG-INFO gearman.egg-info/SOURCES.txt
gearman.egg-info/dependency_links.txt gearman.egg-info/top_level.txtgearman-2.0.2/gearman.egg-info/top_level.txt0000644000076600000240000000001011513151461017566 0ustar mtaistaffgearman gearman-2.0.2/LICENSE.txt0000644000076600000240000000106111435067246013574 0ustar mtaistaffCopyright 2010 Yelp Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. gearman-2.0.2/MANIFEST.in0000644000076600000240000000012311513150122013464 0ustar mtaistaffinclude *.txt include docs/Makefile recursive-include docs *.rst prune docs/_build gearman-2.0.2/PKG-INFO0000644000076600000240000000303111513151461013033 0ustar mtaistaffMetadata-Version: 1.0 Name: gearman Version: 2.0.2 Summary: Gearman API - Client, worker, and admin client interfaces Home-page: http://github.com/Yelp/python-gearman/ Author: Matthew Tai Author-email: mtai@yelp.com License: Apache Description: ============== python-gearman ============== Description =========== Python Gearman API - Client, worker, and admin client interfaces For information on Gearman and a C-based Gearman server, see http://www.gearman.org/ Installation ============ * easy_install gearman * pip install gearman Links ===== * 2.x source * 2.x documentation * 1.x source * 1.x documentation Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: Apache Software License Classifier: Natural Language :: English Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2.4 Classifier: Programming Language :: Python :: 2.5 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Topic :: Software Development :: Libraries :: Python Modules gearman-2.0.2/README.txt0000644000076600000240000000105311455160537013447 0ustar mtaistaff============== python-gearman ============== Description =========== Python Gearman API - Client, worker, and admin client interfaces For information on Gearman and a C-based Gearman server, see http://www.gearman.org/ Installation ============ * easy_install gearman * pip install gearman Links ===== * 2.x source * 2.x documentation * 1.x source * 1.x documentation gearman-2.0.2/setup.cfg0000644000076600000240000000026311513151461013563 0ustar mtaistaff[build_sphinx] all_files = 1 build-dir = docs/_build source-dir = docs/ [egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 [upload_sphinx] upload-dir = docs/_build/html gearman-2.0.2/setup.py0000755000076600000240000000200211455160537013461 0ustar mtaistaff#!/usr/bin/env python from setuptools import setup from gearman import __version__ as version setup( name = 'gearman', version = version, author = 'Matthew Tai', author_email = 'mtai@yelp.com', description = 'Gearman API - Client, worker, and admin client interfaces', long_description=open('README.txt').read(), url = 'http://github.com/Yelp/python-gearman/', packages = ['gearman'], license='Apache', classifiers = [ 'Development Status :: 5 - Production/Stable', 
        'Intended Audience :: Developers',
        'License :: OSI Approved :: Apache Software License',
        'Natural Language :: English',
        'Operating System :: OS Independent',
        'Programming Language :: Python',
        'Programming Language :: Python :: 2.4',
        'Programming Language :: Python :: 2.5',
        'Programming Language :: Python :: 2.6',
        'Programming Language :: Python :: 2.7',
        'Topic :: Software Development :: Libraries :: Python Modules',
    ],
)
gearman-2.0.2/TODO.txt0000644000076600000240000000034411455160537013261 0ustar mtaistaffRequested features (contributions welcome)
==========================================

* Update ConnectionManager code to play well with Twisted
* Update Worker to handle multiple jobs at once instead of processing one at a time
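
Example usage (illustrative sketch)
===================================

A minimal end-to-end sketch based on the public GearmanWorker and GearmanClient
interfaces in this distribution; the host list ('localhost:4730'), the 'echo' task
name, the client id, and the 'ping' payload are illustrative placeholders rather
than anything shipped with the package::

    import gearman

    # --- Worker process: register a callback for the 'echo' task and start polling ---
    def task_callback(gearman_worker, gearman_job):
        # Echo the job payload straight back as the job result
        return gearman_job.data

    worker = gearman.GearmanWorker(['localhost:4730'])
    worker.set_client_id('example-echo-worker')   # illustrative client id
    worker.register_task('echo', task_callback)
    worker.work()  # Blocks; loops on poll_connections_until_stopped()

    # --- Client process (run separately): submit a foreground job, block on the result ---
    client = gearman.GearmanClient(['localhost:4730'])
    completed_request = client.submit_job('echo', 'ping')
    assert completed_request.state == 'COMPLETE'  # JOB_COMPLETE from gearman/constants.py
    assert completed_request.result == 'ping'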