
Merge branch 'master' into dbschedule

Conflicts:
	celery/task/base.py
	celery/task/schedules.py
Ask Solem 15 years ago
Parent commit: bffe590b54
100 changed files with 3467 additions and 867 deletions
  1. AUTHORS (+1, -1)
  2. Changelog (+272, -107)
  3. FAQ (+3, -0)
  4. README.rst (+1, -1)
  5. celery/__init__.py (+2, -2)
  6. celery/backends/__init__.py (+1, -2)
  7. celery/backends/amqp.py (+32, -8)
  8. celery/backends/base.py (+2, -8)
  9. celery/backends/database.py (+6, -1)
  10. celery/backends/mongodb.py (+1, -1)
  11. celery/bin/celeryd.py (+31, -7)
  12. celery/bin/celeryd_multi.py (+8, -8)
  13. celery/bin/celeryev.py (+17, -15)
  14. celery/concurrency/__init__.py (+0, -0)
  15. celery/concurrency/processes/__init__.py (+10, -3)
  16. celery/concurrency/processes/pool.py (+970, -0)
  17. celery/concurrency/threads.py (+68, -0)
  18. celery/conf.py (+44, -16)
  19. celery/datastructures.py (+15, -4)
  20. celery/db/session.py (+0, -2)
  21. celery/decorators.py (+1, -2)
  22. celery/events/state.py (+133, -56)
  23. celery/exceptions.py (+5, -2)
  24. celery/execute/__init__.py (+44, -29)
  25. celery/log.py (+23, -19)
  26. celery/messaging.py (+20, -5)
  27. celery/result.py (+21, -26)
  28. celery/routes.py (+5, -5)
  29. celery/schedules.py (+12, -11)
  30. celery/serialization.py (+166, -0)
  31. celery/states.py (+13, -10)
  32. celery/task/__init__.py (+7, -5)
  33. celery/task/base.py (+29, -191)
  34. celery/task/builtins.py (+41, -1)
  35. celery/task/control.py (+7, -0)
  36. celery/task/http.py (+2, -2)
  37. celery/task/schedules.py (+1, -1)
  38. celery/task/sets.py (+167, -0)
  39. celery/tests/test_backends/test_base.py (+5, -5)
  40. celery/tests/test_buckets.py (+1, -1)
  41. celery/tests/test_events_state.py (+176, -0)
  42. celery/tests/test_pickle.py (+1, -1)
  43. celery/tests/test_pool.py (+5, -4)
  44. celery/tests/test_routes.py (+1, -1)
  45. celery/tests/test_serialization.py (+3, -3)
  46. celery/tests/test_task.py (+60, -53)
  47. celery/tests/test_task_builtins.py (+2, -3)
  48. celery/tests/test_task_http.py (+1, -1)
  49. celery/tests/test_worker.py (+7, -7)
  50. celery/tests/test_worker_controllers.py (+2, -2)
  51. celery/tests/test_worker_job.py (+18, -19)
  52. celery/tests/utils.py (+15, -1)
  53. celery/utils/__init__.py (+25, -14)
  54. celery/utils/functional.py (+135, -0)
  55. celery/utils/mail.py (+11, -12)
  56. celery/utils/timeutils.py (+2, -20)
  57. celery/worker/__init__.py (+30, -23)
  58. celery/worker/buckets.py (+27, -19)
  59. celery/worker/control/__init__.py (+3, -7)
  60. celery/worker/control/builtins.py (+9, -4)
  61. celery/worker/control/registry.py (+1, -0)
  62. celery/worker/job.py (+28, -24)
  63. celery/worker/listener.py (+145, -12)
  64. contrib/release/doc4allmods (+1, -0)
  65. contrib/requirements/default.txt (+0, -1)
  66. docs/_theme/classy/layout.html (+4, -0)
  67. docs/_theme/classy/static/classy.css_t (+281, -0)
  68. docs/_theme/classy/static/logo.png (BIN)
  69. docs/_theme/classy/theme.conf (+4, -0)
  70. docs/conf.py (+1, -1)
  71. docs/configuration.rst (+62, -5)
  72. docs/cookbook/daemonizing.rst (+3, -0)
  73. docs/cookbook/tasks.rst (+5, -2)
  74. docs/getting-started/broker-installation.rst (+3, -0)
  75. docs/getting-started/first-steps-with-celery.rst (+38, -22)
  76. docs/getting-started/periodic-tasks.rst (+23, -10)
  77. docs/getting-started/resources.rst (+4, -0)
  78. docs/includes/introduction.txt (+8, -5)
  79. docs/internals/deprecation.rst (+9, -5)
  80. docs/internals/events.rst (+12, -3)
  81. docs/internals/moduleindex.rst (+3, -0)
  82. docs/internals/protocol.rst (+34, -26)
  83. docs/internals/reference/celery.backends.amqp.rst (+3, -0)
  84. docs/internals/reference/celery.backends.base.rst (+3, -0)
  85. docs/internals/reference/celery.backends.database.rst (+3, -0)
  86. docs/internals/reference/celery.backends.mongodb.rst (+3, -0)
  87. docs/internals/reference/celery.backends.pyredis.rst (+3, -0)
  88. docs/internals/reference/celery.backends.rst (+3, -0)
  89. docs/internals/reference/celery.backends.tyrant.rst (+3, -0)
  90. docs/internals/reference/celery.beat.rst (+3, -0)
  91. docs/internals/reference/celery.concurrency.processes.pool.rst (+11, -0)
  92. docs/internals/reference/celery.concurrency.processes.rst (+11, -0)
  93. docs/internals/reference/celery.concurrency.threads.rst (+11, -0)
  94. docs/internals/reference/celery.datastructures.rst (+3, -0)
  95. docs/internals/reference/celery.db.models.rst (+3, -0)
  96. docs/internals/reference/celery.db.session.rst (+3, -0)
  97. docs/internals/reference/celery.execute.trace.rst (+3, -0)
  98. docs/internals/reference/celery.log.rst (+3, -0)
  99. docs/internals/reference/celery.platform.rst (+3, -0)
  100. docs/internals/reference/celery.routes.rst (+3, -0)

+ 1 - 1
AUTHORS

@@ -1,5 +1,5 @@
 Ordered by date of first contribution:
-  Ask Solem <askh@opera.com>
+  Ask Solem <ask@celeryproject.org>
   Grégoire Cachet <gregoire@audacy.fr>
   Vitaly Babiy <vbabiy86@gmail.com>
   Brian Rosner <brosner@gmail.com>

+ 272 - 107
Changelog

@@ -2,8 +2,29 @@
  Change history
 ================
 
-1.2.0 [xxxx-xx-xx xx:xx x.x xxxx]
-=================================
+.. contents::
+    :local:
+
+1.2.0
+=====
+:release-date: NOT RELEASED
+:branch: master
+:state: beta
+
+Celery 1.2 contains backward incompatible changes, the most important
+being that the Django dependency has been removed, so Celery no longer
+supports Django out of the box, but instead as an add-on package
+called `django-celery`_.
+
+We're very sorry for breaking backwards compatibility, but there are
+also many new and exciting features to make up for the time you lose
+upgrading, so be sure to read the :ref:`News <120news>` section.
+
+Quite a lot of potential users have been upset about the Django dependency,
+so maybe this is a chance to get wider adoption by the Python community as
+well.
+
+Big thanks to all contributors, testers and users!
 
 Upgrading for Django-users
 --------------------------
@@ -90,6 +111,25 @@ the ``CELERY_RESULT_ENGINE_OPTIONS`` setting::
 Backward incompatible changes
 -----------------------------
 
+* Default (python) loader now prints warning on missing ``celeryconfig.py``
+  instead of raising :exc:`ImportError`.
+
+    celeryd raises :exc:`~celery.exceptions.ImproperlyConfigured` if the configuration
+    is not set up. This makes it possible to use ``--help`` etc, without having a
+    working configuration.
+
+    Also this makes it possible to use the client side of celery without being
+    configured::
+
+        >>> from carrot.connection import Connection
+        >>> conn = Connection("localhost", "guest", "guest", "/")
+        >>> from celery.execute import send_task
+        >>> r = send_task("celery.ping", args=(), kwargs={}, connection=conn)
+        >>> from celery.backends.amqp import AMQPBackend
+        >>> r.backend = AMQPBackend(connection=conn)
+        >>> r.get()
+        'pong'
+
 * The following deprecated settings have been removed (as scheduled by
   the `deprecation timeline`_):
 
@@ -122,10 +162,79 @@ Backward incompatible changes
 
         CELERY_LOADER = "myapp.loaders.Loader"
 
+.. _120news:
+
 News
 ----
 
-* now depends on billiard >= 0.4.0
+* **celeryev**: Curses Celery Monitor and Event Viewer.
+
+    This is a simple monitor allowing you to see what tasks are
+    executing in real-time and investigate tracebacks and results of ready
+    tasks. It also enables you to set new rate limits and revoke tasks.
+
+    Screenshot:
+
+    .. image:: http://celeryproject.org/img/celeryevshotsm.jpg
+
+    If you run ``celeryev`` with the ``-d`` switch it will act as an event
+    dumper, simply dumping the events it receives to standard out::
+
+        $ celeryev -d
+        -> celeryev: starting capture...
+        casper.local [2010-06-04 10:42:07.020000] heartbeat
+        casper.local [2010-06-04 10:42:14.750000] task received:
+            tasks.add(61a68756-27f4-4879-b816-3cf815672b0e) args=[2, 2] kwargs={}
+            eta=2010-06-04T10:42:16.669290, retries=0
+        casper.local [2010-06-04 10:42:17.230000] task started
+            tasks.add(61a68756-27f4-4879-b816-3cf815672b0e) args=[2, 2] kwargs={}
+        casper.local [2010-06-04 10:42:17.960000] task succeeded:
+            tasks.add(61a68756-27f4-4879-b816-3cf815672b0e)
+            args=[2, 2] kwargs={} result=4, runtime=0.782663106918
+
+        The fields here are, in order: *sender hostname*, *timestamp*, *event type* and
+        *additional event fields*.
+
+* :mod:`billiard` has been moved back to the celery repository.
+
+    =====================================  =====================================
+    **Module name**                        **celery equivalent**
+    =====================================  =====================================
+    ``billiard.pool``                      ``celery.concurrency.processes.pool``
+    ``billiard.serialization``             ``celery.serialization``
+    ``billiard.utils.functional``          ``celery.utils.functional``
+    =====================================  =====================================
+
+    The :mod:`billiard` distribution may be maintained, depending on interest.
+
+* now depends on :mod:`carrot` >= 0.10.5
+
+* now depends on :mod:`pyparsing`
+
+* Added support for using complex crontab-expressions in periodic tasks. For
+  example, you can now use::
+
+    >>> crontab(minute="*/15")
+
+  or even::
+
+    >>> crontab(minute="*/30", hour="8-17,1-2", day_of_week="thu-fri")
+
+  See :doc:`getting-started/periodic-tasks`.
+
+* celeryd: Now waits for available pool processes before applying new
+  tasks to the pool.
+
+    This means it doesn't have to wait for dozens of tasks to finish at shutdown
+    because it has already applied n prefetched tasks without any pool
+    processes to immediately accept them.
+
+    This adds some overhead for very short tasks, but then shutdown time
+    probably doesn't matter much either, so it can be disabled with::
+
+        CELERYD_POOL_PUTLOCKS = False
+
+    See http://github.com/ask/celery/issues/closed#issue/122
+    (a standalone sketch of this put-lock pattern follows this Changelog diff).
 
 * Added support for task soft and hard timelimits.
 
@@ -153,18 +262,8 @@ News
     Also when the hard time limit is exceeded, the task result should
     be a ``TimeLimitExceeded`` exception.
 
-* celeryd now waits for available pool processes before applying new tasks to the pool.
-
-    This means it doesn't have to wait for dozens of tasks to finish at shutdown
-    because it applied n prefetched tasks at once.
-
-    Some overhead for very short tasks though, then the shutdown probably doesn't
-    matter either so the feature can disable by the  ``CELERYD_POOL_PUTLOCKS``
-    setting::
-
-        CELERYD_POOL_PUTLOCKS = False
-
-    See http://github.com/ask/celery/issues/#issue/122
+* Test suite is now passing without a running broker, using the carrot
+  in-memory backend.
 
 * Log output is now available in colors.
 
@@ -246,7 +345,7 @@ News
 * celeryd: Added ``CELERYD_MAX_TASKS_PER_CHILD`` /
   :option:`--maxtasksperchild`
 
-    Defineds the maximum number of tasks a pool worker can process before
+    Defines the maximum number of tasks a pool worker can process before
     the process is terminated and replaced by a new one.
 
 * Revoked tasks now marked with state ``REVOKED``, and ``result.get()``
@@ -269,6 +368,25 @@ News
 
         $ celeryd -Q image,video
 
+* celeryd: New return value for the ``revoke`` control command:
+
+    Now returns::
+
+        {"ok": "task $id revoked"}
+
+    instead of ``True``.
+
+* celeryd: Can now enable/disable events using remote control
+
+    Example usage:
+
+        >>> from celery.task.control import broadcast
+        >>> broadcast("enable_events")
+        >>> broadcast("disable_events")
+
+
+* celeryd: New option ``--version``: Dump version info and exit.
+
 * :mod:`celeryd-multi <celeryd.bin.celeryd_multi>`: Tool for shell scripts
   to start multiple workers.
 
@@ -329,14 +447,10 @@ News
         celeryd-multi -n baz.myhost -c 10
         celeryd-multi -n xuzzy.myhost -c 3
 
-
-
-
-
-
-
-1.0.4 [2010-05-31 09:54 A.M CEST]
-=================================
+1.0.5
+=====
+:release-date: 2010-06-01 02:36 P.M CEST
+:md5: c93f7522c2ce98a32e1cc1a970a7dba1
 
 Critical
 --------
@@ -352,6 +466,12 @@ Critical
 
 * Now depends on :mod:`billiard` >= 0.3.1
 
+* celeryd: Previously exceptions raised by worker components could stall startup,
+  now it correctly logs the exceptions and shuts down.
+
+* celeryd: The prefetch count was set too late. QoS is now set as early as possible,
+  so celeryd can't slurp in all the messages at start-up.
+
 Changes
 -------
 
@@ -360,6 +480,9 @@ Changes
     Tasks that defines steps of execution, the task can then
     be aborted after each step has completed.
 
+* :class:`~celery.events.EventDispatcher`: No longer creates AMQP channel
+  if events are disabled
+
 * Added required RPM package names under ``[bdist_rpm]`` section, to support building RPMs
   from the sources using setup.py
 
@@ -379,8 +502,15 @@ Changes
     * Should I use retry or acks_late?
     * Can I execute a task by name?
 
-1.0.3 [2010-05-15 03:00 P.M CEST]
-=================================
+1.0.4
+=====
+:release-date: 2010-05-31 09:54 A.M CEST
+
+* Changelog merged with 1.0.5 as the release was never announced.
+
+1.0.3
+=====
+:release-date: 2010-05-15 03:00 P.M CEST
 
 Important notes
 ---------------
@@ -409,6 +539,10 @@ Important notes
 
         ALTER TABLE celery_taskmeta MODIFY result TEXT NULL
 
+    PostgreSQL::
+
+        ALTER TABLE celery_taskmeta ALTER COLUMN result DROP NOT NULL
+
 * Removed ``Task.rate_limit_queue_type``, as it was not really useful
   and made it harder to refactor some parts.
 
@@ -572,7 +706,7 @@ Remote control commands
 
         >>> from celery.task.control import broadcast
         >>> broadcast("dump_reserved", reply=True)
-        [{'myworker1': [<TaskWrapper ....>]}]
+        [{'myworker1': [<TaskRequest ....>]}]
 
 * New remote control command: ``dump_schedule``
 
@@ -584,19 +718,19 @@ Remote control commands
         >>> broadcast("dump_schedule", reply=True)
         [{'w1': []},
          {'w3': []},
-         {'w2': ['0. 2010-05-12 11:06:00 pri0 <TaskWrapper:
+         {'w2': ['0. 2010-05-12 11:06:00 pri0 <TaskRequest
                     {name:"opalfeeds.tasks.refresh_feed_slice",
                      id:"95b45760-4e73-4ce8-8eac-f100aa80273a",
                      args:"(<Feeds freq_max:3600 freq_min:60
                                    start:2184.0 stop:3276.0>,)",
                      kwargs:"{'page': 2}"}>']},
-         {'w4': ['0. 2010-05-12 11:00:00 pri0 <TaskWrapper:
+         {'w4': ['0. 2010-05-12 11:00:00 pri0 <TaskRequest
                     {name:"opalfeeds.tasks.refresh_feed_slice",
                      id:"c053480b-58fb-422f-ae68-8d30a464edfe",
                      args:"(<Feeds freq_max:3600 freq_min:60
                                    start:1092.0 stop:2184.0>,)",
                      kwargs:"{\'page\': 1}"}>',
-                '1. 2010-05-12 11:12:00 pri0 <TaskWrapper:
+                '1. 2010-05-12 11:12:00 pri0 <TaskRequest
                     {name:"opalfeeds.tasks.refresh_feed_slice",
                      id:"ab8bc59e-6cf8-44b8-88d0-f1af57789758",
                      args:"(<Feeds freq_max:3600 freq_min:60
@@ -616,13 +750,14 @@ Fixes
   (http://github.com/ask/celery/issues/issue/98)
 
 * Now handles exceptions with unicode messages correctly in
-  ``TaskWrapper.on_failure``.
+  ``TaskRequest.on_failure``.
 
 * Database backend: ``TaskMeta.result``: default value should be ``None``
   not empty string.
 
-1.0.2 [2010-03-31 12:50 P.M CET]
-================================
+1.0.2
+=====
+:release-date: 2010-03-31 12:50 P.M CET
 
 * Deprecated: ``CELERY_BACKEND``, please use ``CELERY_RESULT_BACKEND``
   instead.
@@ -665,7 +800,7 @@ Fixes
 
     .. code-block:: python
 
-        CELERYD_POOL = "celery.worker.pool.TaskPool"
+        CELERYD_POOL = "celery.concurrency.processes.TaskPool"
         CELERYD_MEDIATOR = "celery.worker.controllers.Mediator"
         CELERYD_ETA_SCHEDULER = "celery.worker.controllers.ScheduleController"
         CELERYD_LISTENER = "celery.worker.listener.CarrotListener"
@@ -705,8 +840,9 @@ Fixes
 * celeryd: Now handles messages with encoding problems by acking them and
   emitting an error message.
 
-1.0.1 [2010-02-24 07:05 P.M CET]
-================================
+1.0.1
+=====
+:release-date: 2010-02-24 07:05 P.M CET
 
 * Tasks are now acknowledged early instead of late.
 
@@ -857,10 +993,11 @@ Fixes
   not executeable. Does not modify ``CELERYD`` when using django with
   virtualenv.
 
-1.0.0 [2010-02-10 04:00 P.M CET]
-================================
+1.0.0
+=====
+:release-date: 2010-02-10 04:00 P.M CET
 
-BACKWARD INCOMPATIBLE CHANGES
+Backward incompatible changes
 -----------------------------
 
 * Celery does not support detaching anymore, so you have to use the tools
@@ -1000,7 +1137,7 @@ BACKWARD INCOMPATIBLE CHANGES
 
         loader = current_loader()
 
-DEPRECATIONS
+Deprecations
 ------------
 
 * The following configuration variables has been renamed and will be
@@ -1029,7 +1166,7 @@ DEPRECATIONS
     ``TaskSet.run`` has now been deprecated, and is scheduled for
     removal in v1.2.
 
-NEWS
+News
 ----
 
 * Rate limiting support (per task type, or globally).
@@ -1090,7 +1227,7 @@ NEWS
 * The results of tasksets are now cached by storing it in the result
   backend.
 
-CHANGES
+Changes
 -------
 
 * Now depends on carrot >= 0.8.1
@@ -1158,20 +1295,21 @@ CHANGES
 * celeryd now correctly handles malformed messages by throwing away and
   acknowledging the message, instead of crashing.
 
-BUGS
+Bugs
 ----
 
 * Fixed a race condition that could happen while storing task results in the
   database.
 
-DOCUMENTATION
+Documentation
 -------------
 
 * Reference now split into two sections; API reference and internal module
   reference.
 
-0.8.4 [2010-02-05 01:52 P.M CEST]
----------------------------------
+0.8.4
+=====
+:release-date: 2010-02-05 01:52 P.M CEST
 
 * Now emits a warning if the --detach argument is used.
   --detach should not be used anymore, as it has several not easily fixed
@@ -1185,8 +1323,9 @@ DOCUMENTATION
 * Error e-mails are not sent anymore when the task is retried.
 
 
-0.8.3 [2009-12-22 09:43 A.M CEST]
----------------------------------
+0.8.3
+=====
+:release-date: 2009-12-22 09:43 A.M CEST
 
 * Fixed a possible race condition that could happen when storing/querying
   task results using the database backend.
@@ -1194,17 +1333,19 @@ DOCUMENTATION
 * Now has console script entry points in the setup.py file, so tools like
   buildout will correctly install the programs celerybin and celeryinit.
 
-0.8.2 [2009-11-20 03:40 P.M CEST]
----------------------------------
+0.8.2
+=====
+:release-date: 2009-11-20 03:40 P.M CEST
 
 * QOS Prefetch count was not applied properly, as it was set for every message
   received (which apparently behaves like, "receive one more"), instead of only 
   set when our wanted value changed.
 
-0.8.1 [2009-11-16 05:21 P.M CEST]
+0.8.1
 =================================
+:release-date: 2009-11-16 05:21 P.M CEST
 
-VERY IMPORTANT NOTE
+Very important note
 -------------------
 
 This release (with carrot 0.8.0) enables AMQP QoS (quality of service), which
@@ -1212,7 +1353,7 @@ means the workers will only receive as many messages as it can handle at a
 time. As with any release, you should test this version upgrade on your
 development servers before rolling it out to production!
 
-IMPORTANT CHANGES
+Important changes
 -----------------
 
 * If you're using Python < 2.6 and you use the multiprocessing backport, then
@@ -1249,7 +1390,7 @@ IMPORTANT CHANGES
 
 * New version requirement for carrot: 0.8.0
 
-CHANGES
+Changes
 -------
 
 * Incorporated the multiprocessing backport patch that fixes the
@@ -1280,10 +1421,11 @@ CHANGES
 
 * SQLite no concurrency limit should only be effective if the db backend is used.
 
-0.8.0 [2009-09-22 03:06 P.M CEST]
-=================================
+0.8.0
+=====
+:release-date: 2009-09-22 03:06 P.M CEST
 
-BACKWARD INCOMPATIBLE CHANGES
+Backward incompatible changes
 -----------------------------
 
 * Add traceback to result value on failure.
@@ -1302,7 +1444,7 @@ BACKWARD INCOMPATIBLE CHANGES
 
 * Now depends on python-daemon 1.4.8
 
-IMPORTANT CHANGES
+Important changes
 -----------------
 
 * Celery can now be used in pure Python (outside of a Django project).
@@ -1365,7 +1507,7 @@ IMPORTANT CHANGES
     * AMQP_CONNECTION_MAX_RETRIES.
         Maximum number of restarts before we give up. Default: ``100``.
 
-NEWS
+News
 ----
 
 *  Fix an incompatibility between python-daemon and multiprocessing,
@@ -1414,10 +1556,11 @@ NEWS
 * Fix documentation typo ``.. import map`` -> ``.. import dmap``.
 	Thanks mikedizon
 
-0.6.0 [2009-08-07 06:54 A.M CET]
-================================
+0.6.0
+=====
+:release-date: 2009-08-07 06:54 A.M CET
 
-IMPORTANT CHANGES
+Important changes
 -----------------
 
 * Fixed a bug where tasks raising unpickleable exceptions crashed pool
@@ -1436,7 +1579,7 @@ IMPORTANT CHANGES
 	we didn't do this before. Some documentation is updated to not manually
 	specify a task name.
 
-NEWS
+News
 ----
 
 * Tested with Django 1.1
@@ -1488,14 +1631,16 @@ NEWS
 
 * Convert statistics data to unicode for use as kwargs. Thanks Lucy!
 
-0.4.1 [2009-07-02 01:42 P.M CET]
-================================
+0.4.1
+=====
+:release-date: 2009-07-02 01:42 P.M CET
 
 * Fixed a bug with parsing the message options (``mandatory``,
   ``routing_key``, ``priority``, ``immediate``)
 
-0.4.0 [2009-07-01 07:29 P.M CET]
-================================
+0.4.0
+=====
+:release-date: 2009-07-01 07:29 P.M CET
 
 * Adds eager execution. ``celery.execute.apply``|``Task.apply`` executes the
   function blocking until the task is done, for API compatibility it
@@ -1507,8 +1652,9 @@ NEWS
 
 * 99% coverage using python ``coverage`` 3.0.
 
-0.3.20 [2009-06-25 08:42 P.M CET]
-=================================
+0.3.20
+======
+:release-date: 2009-06-25 08:42 P.M CET
 
 * New arguments to ``apply_async`` (the advanced version of
   ``delay_task``), ``countdown`` and ``eta``;
@@ -1582,8 +1728,9 @@ NEWS
 		Built-in tasks: ``PingTask``, ``DeleteExpiredTaskMetaTask``.
 
 
-0.3.7 [2008-06-16 11:41 P.M CET] 
---------------------------------
+0.3.7
+=====
+:release-date: 2008-06-16 11:41 P.M CET
 
 * **IMPORTANT** Now uses AMQP's ``basic.consume`` instead of
   ``basic.get``. This means we're no longer polling the broker for
@@ -1645,30 +1792,34 @@ NEWS
 * Tyrant Backend: Now re-establishes the connection for every task
   executed.
 
-0.3.3 [2009-06-08 01:07 P.M CET]
-================================
+0.3.3
+=====
+:release-date: 2009-06-08 01:07 P.M CET
 
 * The ``PeriodicWorkController`` now sleeps for 1 second between checking
   for periodic tasks to execute.
 
-0.3.2 [2009-06-08 01:07 P.M CET]
-================================
+0.3.2
+=====
+:release-date: 2009-06-08 01:07 P.M CET
 
 * celeryd: Added option ``--discard``: Discard (delete!) all waiting
   messages in the queue.
 
 * celeryd: The ``--wakeup-after`` option was not handled as a float.
 
-0.3.1 [2009-06-08 01:07 P.M CET]
-================================
+0.3.1
+=====
+:release-date: 2009-06-08 01:07 P.M CET
 
 * The ``PeriodicTask`` worker is now running in its own thread instead
   of blocking the ``TaskController`` loop.
 
 * Default ``QUEUE_WAKEUP_AFTER`` has been lowered to ``0.1`` (was ``0.3``)
 
-0.3.0 [2009-06-08 12:41 P.M CET]
-================================
+0.3.0
+=====
+:release-date: 2009-06-08 12:41 P.M CET
 
 **NOTE** This is a development version, for the stable release, please
 see versions 0.2.x.
@@ -1741,8 +1892,9 @@ arguments, so be sure to flush your task queue before you upgrade.
 * The pool algorithm has been refactored for greater performance and
   stability.
 
-0.2.0 [2009-05-20 05:14 P.M CET]
-================================
+0.2.0
+=====
+:release-date: 2009-05-20 05:14 P.M CET
 
 * Final release of 0.2.0
 
@@ -1751,21 +1903,24 @@ arguments, so be sure to flush your task queue before you upgrade.
 * Fixes some syntax errors related to fetching results
   from the database backend.
 
-0.2.0-pre3 [2009-05-20 05:14 P.M CET]
-=====================================
+0.2.0-pre3
+==========
+:release-date: 2009-05-20 05:14 P.M CET
 
 * *Internal release*. Improved handling of unpickled exceptions,
   ``get_result`` now tries to recreate something looking like the
   original exception.
 
-0.2.0-pre2 [2009-05-20 01:56 P.M CET]
-=====================================
+0.2.0-pre2
+==========
+:release-date: 2009-05-20 01:56 P.M CET
 
 * Now handles unpickleable exceptions (like the dynamically generated
   subclasses of ``django.core.exception.MultipleObjectsReturned``).
 
-0.2.0-pre1 [2009-05-20 12:33 P.M CET]
-=====================================
+0.2.0-pre1
+==========
+:release-date: 2009-05-20 12:33 P.M CET
 
 * It's getting quite stable, with a lot of new features, so bump
   version to 0.2. This is a pre-release.
@@ -1774,21 +1929,24 @@ arguments, so be sure to flush your task queue before you upgrade.
   been removed. Use ``celery.backends.default_backend.mark_as_read()``, 
   and ``celery.backends.default_backend.mark_as_failure()`` instead.
 
-0.1.15 [2009-05-19 04:13 P.M CET]
-=================================
+0.1.15
+======
+:release-date: 2009-05-19 04:13 P.M CET
 
 * The celery daemon was leaking AMQP connections, this should be fixed,
   if you have any problems with too many files open (like ``emfile``
   errors in ``rabbit.log``), please contact us!
 
-0.1.14 [2009-05-19 01:08 P.M CET]
-=================================
+0.1.14
+======
+:release-date: 2009-05-19 01:08 P.M CET
 
 * Fixed a syntax error in the ``TaskSet`` class.  (No such variable
   ``TimeOutError``).
 
-0.1.13 [2009-05-19 12:36 P.M CET]
-=================================
+0.1.13
+======
+:release-date: 2009-05-19 12:36 P.M CET
 
 * Forgot to add ``yadayada`` to install requirements.
 
@@ -1808,8 +1966,9 @@ arguments, so be sure to flush your task queue before you upgrade.
 
   and the result will be in ``docs/.build/html``.
 
-0.1.12 [2009-05-18 04:38 P.M CET]
-=================================
+0.1.12
+======
+:release-date: 2009-05-18 04:38 P.M CET
 
 * ``delay_task()`` etc. now returns ``celery.task.AsyncResult`` object,
   which lets you check the result and any failure that might have
@@ -1846,14 +2005,16 @@ arguments, so be sure to flush your task queue before you upgrade.
 		TT_HOST = "localhost"; # Hostname for the Tokyo Tyrant server.
 		TT_PORT = 6657; # Port of the Tokyo Tyrant server.
 
-0.1.11 [2009-05-12 02:08 P.M CET]
-=================================
+0.1.11
+======
+:release-date: 2009-05-12 02:08 P.M CET
 
 * The logging system was leaking file descriptors, resulting in
   servers stopping with the EMFILES (too many open files) error. (fixed)
 
-0.1.10 [2009-05-11 12:46 P.M CET]
-=================================
+0.1.10
+======
+:release-date: 2009-05-11 12:46 P.M CET
 
 * Tasks now supports both positional arguments and keyword arguments.
 
@@ -1861,16 +2022,18 @@ arguments, so be sure to flush your task queue before you upgrade.
 
 * The daemon now tries to reconnect if the connection is lost.
 
-0.1.8 [2009-05-07 12:27 P.M CET]
-================================
+0.1.8
+=====
+:release-date: 2009-05-07 12:27 P.M CET
 
 * Better test coverage
 * More documentation
 * celeryd doesn't emit ``Queue is empty`` message if
   ``settings.CELERYD_EMPTY_MSG_EMIT_EVERY`` is 0.
 
-0.1.7 [2009-04-30 1:50 P.M CET]
-===============================
+0.1.7
+=====
+:release-date: 2009-04-30 1:50 P.M CET
 
 * Added some unittests
 
@@ -1884,8 +2047,9 @@ arguments, so be sure to flush your task queue before you upgrade.
   ``settings.CELERY_AMQP_EXCHANGE``, ``settings.CELERY_AMQP_ROUTING_KEY``,
   and ``settings.CELERY_AMQP_CONSUMER_QUEUE``.
 
-0.1.6 [2009-04-28 2:13 P.M CET]
-===============================
+0.1.6
+=====
+:release-date: 2009-04-28 2:13 P.M CET
 
 * Introducing ``TaskSet``. A set of subtasks is executed and you can
   find out how many, or if all them, are done (excellent for progress
@@ -1927,7 +2091,8 @@ arguments, so be sure to flush your task queue before you upgrade.
 * Project changed name from ``crunchy`` to ``celery``. The details of
   the name change request is in ``docs/name_change_request.txt``.
 
-0.1.0 [2009-04-24 11:28 A.M CET]
-================================
+0.1.0
+=====
+:release-date: 2009-04-24 11:28 A.M CET
 
 * Initial release
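
The ``CELERYD_POOL_PUTLOCKS`` item in the News section above boils down to a
bounded-dispatch pattern: acquire a semaphore slot before handing a task to the
pool, and release it when a result comes back. A minimal standalone sketch of
that idea using only the standard library (names and pool size here are
illustrative, not celery internals)::

    import threading
    from multiprocessing import Pool

    putlock = threading.Semaphore(2)          # one slot per pool process

    def release_slot(result):                 # called when a result arrives
        putlock.release()

    pool = Pool(processes=2)
    for x in range(10):
        putlock.acquire()                     # block until a process is free
        pool.apply_async(pow, (x, 2), callback=release_slot)
    pool.close()
    pool.join()

Because dispatch blocks on the semaphore, at most two prefetched tasks are in
flight at a time, which is what keeps shutdown from waiting on a long backlog.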

+ 3 - 0
FAQ

@@ -2,6 +2,9 @@
  Frequently Asked Questions
 ============================
 
+.. contents::
+    :local:
+
 General
 =======
 

+ 1 - 1
README.rst

@@ -4,7 +4,7 @@
 
 .. image:: http://cloud.github.com/downloads/ask/celery/celery_favicon_128.png
 
-:Version: 1.1.0
+:Version: 1.1.1
 :Web: http://celeryproject.org/
 :Download: http://pypi.python.org/pypi/celery/
 :Source: http://github.com/ask/celery/

+ 2 - 2
celery/__init__.py

@@ -1,10 +1,10 @@
 """Distributed Task Queue"""
 
-VERSION = (1, 1, 0)
+VERSION = (1, 1, 1)
 
 __version__ = ".".join(map(str, VERSION[0:3])) + "".join(VERSION[3:])
 __author__ = "Ask Solem"
-__contact__ = "askh@opera.com"
+__contact__ = "ask@celeryproject.org"
 __homepage__ = "http://github.com/ask/celery/"
 __docformat__ = "restructuredtext"
 

+ 1 - 2
celery/backends/__init__.py

@@ -1,7 +1,6 @@
-from billiard.utils.functional import curry
-
 from celery import conf
 from celery.utils import get_cls_by_name
+from celery.utils.functional import curry
 from celery.loaders import current_loader
 
 BACKEND_ALIASES = {

+ 32 - 8
celery/backends/amqp.py

@@ -91,18 +91,39 @@ class AMQPBackend(BaseDictBackend):
 
         return result
 
-    def wait_for(self, task_id, timeout=None):
-        try:
-            meta = self._get_task_meta_for(task_id, timeout)
-        except socket.timeout:
-            raise TimeoutError("The operation timed out.")
+    def wait_for(self, task_id, timeout=None, cache=True):
+        if task_id in self._cache:
+            meta = self._cache[task_id]
+        else:
+            try:
+                meta = self.consume(task_id, timeout=timeout)
+            except socket.timeout:
+                raise TimeoutError("The operation timed out.")
 
         if meta["status"] == states.SUCCESS:
-            return self.get_result(task_id)
+            return meta["result"]
         elif meta["status"] in states.PROPAGATE_STATES:
-            raise self.get_result(task_id)
+            raise self.exception_to_python(meta["result"])
 
-    def _get_task_meta_for(self, task_id, timeout=None):
+    def poll(self, task_id):
+        routing_key = task_id.replace("-", "")
+        consumer = self._create_consumer(task_id, self.connection)
+        result = consumer.fetch()
+        payload = None
+        if result:
+            payload = self._cache[task_id] = result.payload
+            consumer.backend.queue_delete(routing_key)
+        else:
+            # Use previously received status if any.
+            if task_id in self._cache:
+                payload = self._cache[task_id]
+            else:
+                payload = {"status": states.PENDING, "result": None}
+
+        consumer.close()
+        return payload
+
+    def consume(self, task_id, timeout=None):
         results = []
 
         def callback(message_data, message):
@@ -124,6 +145,9 @@ class AMQPBackend(BaseDictBackend):
         self._cache[task_id] = results[0]
         return results[0]
 
+    def get_task_meta(self, task_id, cache=True):
+        return self.poll(task_id)
+
     def reload_task_result(self, task_id):
         raise NotImplementedError(
                 "reload_task_result is not supported by this backend.")

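The ``poll``/``consume`` split added to the AMQP backend above can be driven the
same way as the client-side example in the Changelog. A hypothetical interactive
session (broker address and credentials are assumptions)::

    >>> from carrot.connection import Connection
    >>> from celery.execute import send_task
    >>> from celery.backends.amqp import AMQPBackend

    >>> conn = Connection("localhost", "guest", "guest", "/")
    >>> result = send_task("celery.ping", args=(), kwargs={}, connection=conn)
    >>> result.backend = AMQPBackend(connection=conn)

    >>> result.backend.poll(result.task_id)   # single non-blocking fetch
    {'status': 'PENDING', 'result': None}
    >>> result.get()                          # blocks via consume()/wait_for()
    'pong'
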
+ 2 - 8
celery/backends/base.py

@@ -1,13 +1,11 @@
 """celery.backends.base"""
 import time
 
-from billiard.serialization import pickle
-from billiard.serialization import get_pickled_exception
-from billiard.serialization import get_pickleable_exception
-
 from celery import conf
 from celery import states
 from celery.exceptions import TimeoutError, TaskRevokedError
+from celery.serialization import pickle, get_pickled_exception
+from celery.serialization import get_pickleable_exception
 from celery.datastructures import LocalCache
 
 
@@ -69,10 +67,6 @@ class BaseBackend(object):
         """Prepare value for storage."""
         return result
 
-    def is_successful(self, task_id):
-        """Returns ``True`` if the task was successfully executed."""
-        return self.get_status(task_id) == states.SUCCESS
-
     def wait_for(self, task_id, timeout=None):
         """Wait for task and return its result.
 

+ 6 - 1
celery/backends/database.py

@@ -2,9 +2,10 @@ from datetime import datetime
 
 
 from celery import conf
+from celery.backends.base import BaseDictBackend
 from celery.db.models import Task, TaskSet
 from celery.db.session import ResultSession
-from celery.backends.base import BaseDictBackend
+from celery.exceptions import ImproperlyConfigured
 
 
 class DatabaseBackend(BaseDictBackend):
@@ -12,6 +13,10 @@ class DatabaseBackend(BaseDictBackend):
 
     def __init__(self, dburi=conf.RESULT_DBURI,
             engine_options=None, **kwargs):
+        if not dburi:
+            raise ImproperlyConfigured(
+                    "Missing connection string! Do you have "
+                    "CELERY_RESULT_DBURI set to a real value?")
         self.dburi = dburi
         self.engine_options = dict(engine_options or {},
                                    **conf.RESULT_ENGINE_OPTIONS or {})
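
With the guard above, a missing connection string now fails fast with
``ImproperlyConfigured`` instead of a late SQLAlchemy error. A hypothetical
minimal ``celeryconfig.py`` for the database backend (the setting names are
assumed to map onto ``conf.RESULT_DBURI`` and ``conf.RESULT_ENGINE_OPTIONS``)::

    CELERY_RESULT_BACKEND = "database"
    CELERY_RESULT_DBURI = "sqlite:///results.db"      # any SQLAlchemy URI
    CELERY_RESULT_ENGINE_OPTIONS = {"echo": False}    # extra engine options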

+ 1 - 1
celery/backends/mongodb.py

@@ -1,7 +1,6 @@
 """MongoDB backend for celery."""
 from datetime import datetime
 
-from billiard.serialization import pickle
 try:
     import pymongo
 except ImportError:
@@ -12,6 +11,7 @@ from celery import states
 from celery.loaders import load_settings
 from celery.backends.base import BaseDictBackend
 from celery.exceptions import ImproperlyConfigured
+from celery.serialization import pickle
 
 
 class Bunch:

+ 31 - 7
celery/bin/celeryd.py

@@ -42,7 +42,7 @@
 
     Send events that can be captured by monitors like ``celerymon``.
 
-.. cmdoption:: --discard
+.. cmdoption:: --purge, --discard
 
     Discard all waiting tasks before the daemon is started.
     **WARNING**: This is unrecoverable, and the tasks will be
@@ -68,14 +68,12 @@ import socket
 import logging
 import optparse
 import warnings
-import traceback
 import multiprocessing
 
 import celery
 from celery import conf
 from celery import signals
 from celery import platform
-from celery.log import emergency_error
 from celery.task import discard_all
 from celery.utils import info
 from celery.utils import get_full_cls_name
@@ -111,7 +109,7 @@ OPTION_LIST = (
     optparse.make_option('-V', '--version',
             action="callback", callback=dump_version,
             help="Show version information and exit."),
-    optparse.make_option('--discard', default=False,
+    optparse.make_option('--purge', '--discard', default=False,
             action="store_true", dest="discard",
             help="Discard all waiting tasks before the server is started. "
                  "WARNING: This is unrecoverable, and the tasks will be "
@@ -190,11 +188,12 @@ class Worker(object):
             self.loglevel = conf.LOG_LEVELS[self.loglevel.upper()]
 
     def run(self):
+        self.init_loader()
+        self.init_queues()
+        self.redirect_stdouts_to_logger()
         print("celery@%s v%s is starting." % (self.hostname,
                                               celery.__version__))
 
-        self.init_loader()
-        self.init_queues()
 
         if conf.RESULT_BACKEND == "database" \
                 and self.settings.DATABASE_ENGINE == "sqlite3" and \
@@ -237,6 +236,13 @@ class Worker(object):
             raise ImproperlyConfigured(
                     "Celery needs to be configured to run celeryd.")
 
+    def redirect_stdouts_to_logger(self):
+        from celery import log
+        # Redirect stdout/stderr to our logger.
+        logger = log.setup_logger(loglevel=self.loglevel,
+                                  logfile=self.logfile)
+        log.redirect_stdouts_to_logger(logger, loglevel=logging.WARNING)
+
     def purge_messages(self):
         discarded_count = discard_all()
         what = discarded_count > 1 and "messages" or "message"
@@ -299,17 +305,35 @@ class Worker(object):
 
 def install_worker_int_handler(worker):
 
+    def _stop(signum, frame):
+        process_name = multiprocessing.current_process().name
+        if process_name == "MainProcess":
+            worker.logger.warn(
+                "celeryd: Hitting Ctrl+C again will terminate "
+                "all running tasks!")
+            install_worker_int_again_handler(worker)
+            worker.logger.warn("celeryd: Warm shutdown (%s)" % (
+                process_name))
+            worker.stop()
+        raise SystemExit()
+
+    platform.install_signal_handler("SIGINT", _stop)
+
+
+def install_worker_int_again_handler(worker):
+
     def _stop(signum, frame):
         process_name = multiprocessing.current_process().name
         if process_name == "MainProcess":
             worker.logger.warn("celeryd: Cold shutdown (%s)" % (
-                                    process_name))
+                process_name))
             worker.terminate()
         raise SystemExit()
 
     platform.install_signal_handler("SIGINT", _stop)
 
 
+
 def install_worker_term_handler(worker):
 
     def _stop(signum, frame):
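
The new handlers above implement a two-stage Ctrl+C: the first SIGINT re-arms
the handler and starts a warm shutdown, and a second SIGINT during that shutdown
terminates running tasks. A standalone sketch of the pattern (not celery's
actual code; celeryd additionally checks that it runs in the ``MainProcess``)::

    import signal

    def install_two_stage_sigint(warm_stop, cold_stop):

        def second(signum, frame):
            cold_stop()                           # e.g. worker.terminate()
            raise SystemExit()

        def first(signum, frame):
            signal.signal(signal.SIGINT, second)  # re-arm before stopping
            warm_stop()                           # e.g. worker.stop(), may block
            raise SystemExit()

        signal.signal(signal.SIGINT, first)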

+ 8 - 8
celery/bin/celeryd_multi.py

@@ -1,9 +1,7 @@
 import sys
-import shlex
 import socket
 
 from celery.utils.compat import defaultdict
-from carrot.utils import rpartition
 
 EXAMPLES = """
 Some examples:
@@ -85,7 +83,7 @@ class NamespacedOptionParser(object):
                     self.process_long_opt(arg[2:])
                 else:
                     value = None
-                    if rargs[pos + 1][0] != '-':
+                    if len(rargs) > pos + 1 and rargs[pos + 1][0] != '-':
                         value = rargs[pos + 1]
                         pos += 1
                     self.process_short_opt(arg[1:], value)
@@ -143,8 +141,9 @@ def abbreviations(map):
 
     def expand(S):
         ret = S
-        for short, long in map.items():
-            ret = ret.replace(short, long)
+        if S is not None:
+            for short, long in map.items():
+                ret = ret.replace(short, long)
         return ret
 
     return expand
@@ -176,8 +175,9 @@ def multi_args(p, cmd="celeryd", append="", prefix="", suffix=""):
                                 "%n": name})
         line = expand(cmd) + " " + " ".join(
                 format_opt(opt, expand(value))
-                    for opt, value in p.optmerge(name, options).items()) + \
-               " " + expand(append)
+                    for opt, value in p.optmerge(name, options).items())
+        if append:
+            line += " %s" % expand(append)
         yield this_name, line, expand
 
 
@@ -201,7 +201,7 @@ class MultiTool(object):
 
         try:
             return self.commands[argv[0]](argv[1:], cmd)
-        except KeyError, exc:
+        except KeyError:
             say("Invalid command: %s" % argv[0])
             self.usage()
             sys.exit(1)

+ 17 - 15
celery/bin/celeryev.py

@@ -1,16 +1,16 @@
 import sys
 import time
 import curses
-import atexit
 import socket
 import optparse
 import threading
 
-from pprint import pformat
 from datetime import datetime
 from textwrap import wrap
 from itertools import count
 
+from carrot.utils import rpartition
+
 import celery
 from celery import states
 from celery.task import control
@@ -20,9 +20,11 @@ from celery.messaging import establish_connection
 from celery.datastructures import LocalCache
 
 TASK_NAMES = LocalCache(0xFFF)
+
 HUMAN_TYPES = {"worker-offline": "shutdown",
                "worker-online": "started",
                "worker-heartbeat": "heartbeat"}
+
 OPTION_LIST = (
     optparse.make_option('-d', '--DUMP',
         action="store_true", dest="dump",
@@ -30,7 +32,6 @@ OPTION_LIST = (
 )
 
 
-
 def humanize_type(type):
     try:
         return HUMAN_TYPES[type.lower()]
@@ -116,7 +117,7 @@ class CursesMonitor(object):
                           "L": self.selection_rate_limit}
         self.keymap = dict(default_keymap, **self.keymap)
 
-    def format_row(self, uuid, worker, task, time, state):
+    def format_row(self, uuid, worker, task, timestamp, state):
         my, mx = self.win.getmaxyx()
         mx = mx - 3
         uuid_max = 36
@@ -126,8 +127,8 @@ class CursesMonitor(object):
         worker = abbr(worker, 16).ljust(16)
         task = abbrtask(task, 16).ljust(16)
         state = abbr(state, 8).ljust(8)
-        time = time.ljust(8)
-        row = "%s %s %s %s %s " % (uuid, worker, task, time, state)
+        timestamp = timestamp.ljust(8)
+        row = "%s %s %s %s %s " % (uuid, worker, task, timestamp, state)
         if self.screen_width is None:
             self.screen_width = len(row[:mx])
         return row[:mx]
@@ -177,7 +178,8 @@ class CursesMonitor(object):
             self.win.addstr(y(), 3, title, curses.A_BOLD | curses.A_UNDERLINE)
             blank_line()
         callback(my, mx, y())
-        self.win.addstr(my - 1, 0, "Press any key to continue...", curses.A_BOLD)
+        self.win.addstr(my - 1, 0, "Press any key to continue...",
+                        curses.A_BOLD)
         self.win.refresh()
         while 1:
             try:
@@ -333,7 +335,8 @@ class CursesMonitor(object):
                     attr = curses.A_NORMAL
                     if task.uuid == self.selected_task:
                         attr = curses.A_STANDOUT
-                    timestamp = datetime.fromtimestamp(task.timestamp or time.time())
+                    timestamp = datetime.fromtimestamp(
+                                    task.timestamp or time.time())
                     timef = timestamp.strftime("%H:%M:%S")
                     line = self.format_row(uuid, task.name,
                                            task.worker.hostname,
@@ -409,9 +412,12 @@ class CursesMonitor(object):
         curses.init_pair(4, curses.COLOR_MAGENTA, self.background)
         # greeting
         curses.init_pair(5, curses.COLOR_BLUE, self.background)
+        # started state
+        curses.init_pair(6, curses.COLOR_YELLOW, self.foreground)
 
         self.state_colors = {states.SUCCESS: curses.color_pair(3),
-                             states.REVOKED: curses.color_pair(4)}
+                             states.REVOKED: curses.color_pair(4),
+                             states.STARTED: curses.color_pair(6)}
         for state in states.EXCEPTION_STATES:
             self.state_colors[state] = curses.color_pair(2)
 
@@ -467,7 +473,7 @@ def eventtop():
                 conn.connection.drain_events()
             except socket.timeout:
                 pass
-    except Exception, exc:
+    except Exception:
         refresher.shutdown = True
         refresher.join()
         display.resetscreen()
@@ -490,7 +496,7 @@ def eventdump():
         conn and conn.close()
 
 
-def run_celeryev(dump=False):
+def run_celeryev(dump=False, **kwargs):
     if dump:
         return eventdump()
     return eventtop()
@@ -507,9 +513,5 @@ def main():
     options = parse_options(sys.argv[1:])
     return run_celeryev(**vars(options))
 
-
-
-
-
 if __name__ == "__main__":
     main()

+ 0 - 0
celery/concurrency/__init__.py


+ 10 - 3
celery/worker/pool.py → celery/concurrency/processes/__init__.py

@@ -3,11 +3,12 @@
 Process Pools.
 
 """
-from billiard.pool import Pool, RUN
-from billiard.utils.functional import curry
 
 from celery import log
 from celery.datastructures import ExceptionInfo
+from celery.utils.functional import curry
+
+from celery.concurrency.processes.pool import Pool, RUN
 
 
 class TaskPool(object):
@@ -52,12 +53,18 @@ class TaskPool(object):
                           maxtasksperchild=self.maxtasksperchild)
 
     def stop(self):
-        """Terminate the pool."""
+        """Gracefully stop the pool."""
         if self._pool is not None and self._pool._state == RUN:
             self._pool.close()
             self._pool.join()
             self._pool = None
 
+    def terminate(self):
+        """Force terminate the pool."""
+        if self._pool is not None:
+            self._pool.terminate()
+            self._pool = None
+
     def apply_async(self, target, args=None, kwargs=None, callbacks=None,
             errbacks=None, accept_callback=None, timeout_callback=None,
             **compat):
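
``TaskPool.stop()`` and the new ``TaskPool.terminate()`` above map onto the
graceful and forceful shutdown of the vendored pool added below. A hedged usage
sketch, assuming the vendored ``Pool`` keeps the stdlib ``multiprocessing.Pool``
semantics for ``apply_async``/``close``/``join``/``terminate``::

    from celery.concurrency.processes.pool import Pool

    pool = Pool(processes=2)          # Pool(..., soft_timeout=60, timeout=120)
                                      # would enable the new per-pool time limits
    result = pool.apply_async(pow, (2, 10))
    print(result.get())               # 1024

    pool.close()                      # graceful: what TaskPool.stop() does
    pool.join()
    # pool.terminate()                # forceful: what TaskPool.terminate() does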

+ 970 - 0
celery/concurrency/processes/pool.py

@@ -0,0 +1,970 @@
+#
+# Module providing the `Pool` class for managing a process pool
+#
+# multiprocessing/pool.py
+#
+# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt
+#
+
+__all__ = ['Pool']
+
+#
+# Imports
+#
+
+import os
+import errno
+import threading
+import Queue
+import itertools
+import collections
+import time
+import signal
+
+from multiprocessing import Process, cpu_count, TimeoutError
+from multiprocessing.util import Finalize, debug
+
+from celery.exceptions import SoftTimeLimitExceeded, TimeLimitExceeded
+
+#
+# Constants representing the state of a pool
+#
+
+RUN = 0
+CLOSE = 1
+TERMINATE = 2
+
+# Signal used for soft time limits.
+SIG_SOFT_TIMEOUT = getattr(signal, "SIGUSR1", None)
+
+#
+# Miscellaneous
+#
+
+job_counter = itertools.count()
+
+def mapstar(args):
+    return map(*args)
+
+#
+# Code run by worker processes
+#
+
+
+def soft_timeout_sighandler(signum, frame):
+    raise SoftTimeLimitExceeded()
+
+
+def worker(inqueue, outqueue, ackqueue, initializer=None, initargs=(),
+        maxtasks=None):
+    assert maxtasks is None or (type(maxtasks) == int and maxtasks > 0)
+    pid = os.getpid()
+    put = outqueue.put
+    get = inqueue.get
+    ack = ackqueue.put
+    if hasattr(inqueue, '_writer'):
+        inqueue._writer.close()
+        outqueue._reader.close()
+
+    if initializer is not None:
+        initializer(*initargs)
+
+    if SIG_SOFT_TIMEOUT is not None:
+        signal.signal(SIG_SOFT_TIMEOUT, soft_timeout_sighandler)
+
+    completed = 0
+    while maxtasks is None or (maxtasks and completed < maxtasks):
+        try:
+            task = get()
+        except (EOFError, IOError):
+            debug('worker got EOFError or IOError -- exiting')
+            break
+
+        if task is None:
+            debug('worker got sentinel -- exiting')
+            break
+
+        job, i, func, args, kwds = task
+        ack((job, i, time.time(), pid))
+        try:
+            result = (True, func(*args, **kwds))
+        except Exception, e:
+            result = (False, e)
+        put((job, i, result))
+        completed += 1
+    debug('worker exiting after %d tasks' % completed)
+
+
+#
+# Class representing a process pool
+#
+
+
+class PoolThread(threading.Thread):
+
+    def __init__(self, *args, **kwargs):
+        threading.Thread.__init__(self)
+        self._state = RUN
+        self.daemon = True
+
+    def terminate(self):
+        self._state = TERMINATE
+
+    def close(self):
+        self._state = CLOSE
+
+
+class Supervisor(PoolThread):
+
+    def __init__(self, pool):
+        self.pool = pool
+        super(Supervisor, self).__init__()
+
+    def run(self):
+        debug('worker handler starting')
+        while self._state == RUN and self.pool._state == RUN:
+            self.pool._maintain_pool()
+            time.sleep(0.1)
+        debug('worker handler exiting')
+
+
+class TaskHandler(PoolThread):
+
+    def __init__(self, taskqueue, put, outqueue, pool):
+        self.taskqueue = taskqueue
+        self.put = put
+        self.outqueue = outqueue
+        self.pool = pool
+        super(TaskHandler, self).__init__()
+
+    def run(self):
+        taskqueue = self.taskqueue
+        outqueue = self.outqueue
+        put = self.put
+        pool = self.pool
+
+        for taskseq, set_length in iter(taskqueue.get, None):
+            i = -1
+            for i, task in enumerate(taskseq):
+                if self._state:
+                    debug('task handler found thread._state != RUN')
+                    break
+                try:
+                    put(task)
+                except IOError:
+                    debug('could not put task on queue')
+                    break
+            else:
+                if set_length:
+                    debug('doing set_length()')
+                    set_length(i+1)
+                continue
+            break
+        else:
+            debug('task handler got sentinel')
+
+        try:
+            # tell result handler to finish when cache is empty
+            debug('task handler sending sentinel to result handler')
+            outqueue.put(None)
+
+            # tell workers there is no more work
+            debug('task handler sending sentinel to workers')
+            for p in pool:
+                put(None)
+        except IOError:
+            debug('task handler got IOError when sending sentinels')
+
+        debug('task handler exiting')
+
+
+class AckHandler(PoolThread):
+
+    def __init__(self, ackqueue, get, cache):
+        self.ackqueue = ackqueue
+        self.get = get
+        self.cache = cache
+
+        super(AckHandler, self).__init__()
+
+    def run(self):
+        debug('ack handler starting')
+        get = self.get
+        cache = self.cache
+
+        while 1:
+            try:
+                task = get()
+            except (IOError, EOFError), exc:
+                debug('ack handler got %s -- exiting',
+                        exc.__class__.__name__)
+
+            if self._state:
+                assert self._state == TERMINATE
+                debug('ack handler found thread._state=TERMINATE')
+                break
+
+            if task is None:
+                debug('ack handler got sentinel')
+                break
+
+            job, i, time_accepted, pid = task
+            try:
+                cache[job]._ack(i, time_accepted, pid)
+            except (KeyError, AttributeError), exc:
+                # Object gone, or doesn't support _ack (e.g. IMapIterator)
+                pass
+
+        while cache and self._state != TERMINATE:
+            try:
+                task = get()
+            except (IOError, EOFError), exc:
+                debug('ack handler got %s -- exiting',
+                        exc.__class__.__name__)
+                return
+
+            if task is None:
+                debug('result handler ignoring extra sentinel')
+                continue
+
+            job, i, time_accepted, pid = task
+            try:
+                cache[job]._ack(i, time_accepted, pid)
+            except KeyError:
+                pass
+
+        debug('ack handler exiting: len(cache)=%s, thread._state=%s',
+                len(cache), self._state)
+
+
+class TimeoutHandler(PoolThread):
+
+    def __init__(self, processes, sentinel_event, cache, t_soft, t_hard):
+        self.sentinel_event = sentinel_event
+        self.cache = cache
+        self.t_soft = t_soft
+        self.t_hard = t_hard
+        super(TimeoutHandler, self).__init__()
+
+    def run(self):
+        processes = self.processes
+        cache = self.cache
+        t_hard, t_soft = self.t_hard, self.t_soft
+        dirty = set()
+
+        def _process_by_pid(pid):
+            for index, process in enumerate(processes):
+                if process.pid == pid:
+                    return process, index
+            return None, None
+
+        def _pop_by_pid(pid):
+            process, index = _process_by_pid(pid)
+            if not process:
+                return
+            p = processes.pop(index)
+            assert p is process
+            return process
+
+        def _timed_out(start, timeout):
+            if not start or not timeout:
+                return False
+            if time.time() >= start + timeout:
+                return True
+
+        def _on_soft_timeout(job, i):
+            debug('soft time limit exceeded for %i' % i)
+            process, _index = _process_by_pid(job._accept_pid)
+            if not process:
+                return
+
+            # Run timeout callback
+            if job._timeout_callback is not None:
+                job._timeout_callback(soft=True)
+
+            try:
+                os.kill(job._accept_pid, SIG_SOFT_TIMEOUT)
+            except OSError, exc:
+                if exc.errno == errno.ESRCH:
+                    pass
+                else:
+                    raise
+
+            dirty.add(i)
+
+        def _on_hard_timeout(job, i):
+            debug('hard time limit exceeded for %i', i)
+            # Remove from _pool
+            process = _pop_by_pid(job._accept_pid)
+            # Remove from cache and set return value to an exception
+            job._set(i, (False, TimeLimitExceeded()))
+            # Run timeout callback
+            if job._timeout_callback is not None:
+                job._timeout_callback(soft=False)
+            if not process:
+                return
+            # Terminate the process
+            process.terminate()
+
+        # Inner-loop
+        while self._state == RUN:
+
+            # Remove dirty items not in cache anymore
+            if dirty:
+                dirty = set(k for k in dirty if k in cache)
+
+            for i, job in cache.items():
+                ack_time = job._time_accepted
+                if _timed_out(ack_time, t_hard):
+                    _on_hard_timeout(job, i)
+                elif i not in dirty and _timed_out(ack_time, t_soft):
+                    _on_soft_timeout(job, i)
+
+            time.sleep(0.5) # Don't waste CPU cycles.
+
+        debug('timeout handler exiting')
+
+
+class ResultHandler(PoolThread):
+
+    def __init__(self, outqueue, get, cache, putlock):
+        self.outqueue = outqueue
+        self.get = get
+        self.cache = cache
+        self.putlock = putlock
+        super(ResultHandler, self).__init__()
+
+    def run(self):
+        get = self.get
+        outqueue = self.outqueue
+        cache = self.cache
+        putlock = self.putlock
+
+        debug('result handler starting')
+        while 1:
+            try:
+                task = get()
+            except (IOError, EOFError), exc:
+                debug('result handler got %s -- exiting',
+                        exc.__class__.__name__)
+                return
+
+            if putlock is not None:
+                putlock.release()
+
+            if self._state:
+                assert self._state == TERMINATE
+                debug('result handler found thread._state=TERMINATE')
+                break
+
+            if task is None:
+                debug('result handler got sentinel')
+                break
+
+            job, i, obj = task
+            try:
+                cache[job]._set(i, obj)
+            except KeyError:
+                pass
+
+        if putlock is not None:
+            putlock.release()
+
+        while cache and self._state != TERMINATE:
+            try:
+                task = get()
+            except (IOError, EOFError), exc:
+                debug('result handler got %s -- exiting',
+                        exc.__class__.__name__)
+                return
+
+            if task is None:
+                debug('result handler ignoring extra sentinel')
+                continue
+            job, i, obj = task
+            try:
+                cache[job]._set(i, obj)
+            except KeyError:
+                pass
+
+        if hasattr(outqueue, '_reader'):
+            debug('ensuring that outqueue is not full')
+            # If we don't make room available in outqueue then
+            # attempts to add the sentinel (None) to outqueue may
+            # block.  There is guaranteed to be no more than 2 sentinels.
+            try:
+                for i in range(10):
+                    if not outqueue._reader.poll():
+                        break
+                    get()
+            except (IOError, EOFError):
+                pass
+
+        debug('result handler exiting: len(cache)=%s, thread._state=%s',
+              len(cache), self._state)
+
+
+class Pool(object):
+    '''
+    Class which supports an async version of the `apply()` builtin
+    '''
+    Process = Process
+    Supervisor = Supervisor
+    TaskHandler = TaskHandler
+    AckHandler = AckHandler
+    TimeoutHandler = TimeoutHandler
+    ResultHandler = ResultHandler
+    SoftTimeLimitExceeded = SoftTimeLimitExceeded
+
+    def __init__(self, processes=None, initializer=None, initargs=(),
+            maxtasksperchild=None, timeout=None, soft_timeout=None):
+        self._setup_queues()
+        self._taskqueue = Queue.Queue()
+        self._cache = {}
+        self._state = RUN
+        self.timeout = timeout
+        self.soft_timeout = soft_timeout
+        self._maxtasksperchild = maxtasksperchild
+        self._initializer = initializer
+        self._initargs = initargs
+
+        if self.soft_timeout and SIG_SOFT_TIMEOUT is None:
+            raise NotImplementedError("Soft timeouts not supported: "
+                    "Your platform does not have the SIGUSR1 signal.")
+
+        if processes is None:
+            try:
+                processes = cpu_count()
+            except NotImplementedError:
+                processes = 1
+        self._processes = processes
+
+        if initializer is not None and not hasattr(initializer, '__call__'):
+            raise TypeError('initializer must be a callable')
+
+        self._pool = []
+        for i in range(processes):
+            self._create_worker_process()
+
+        self._worker_handler = self.Supervisor(self)
+        self._worker_handler.start()
+
+        self._putlock = threading.Semaphore(self._processes)
+
+        self._task_handler = self.TaskHandler(self._taskqueue, self._quick_put,
+                                         self._outqueue, self._pool)
+        self._task_handler.start()
+
+        # Thread processing acknowledgements from the ackqueue.
+        self._ack_handler = self.AckHandler(self._ackqueue,
+                self._quick_get_ack, self._cache)
+        self._ack_handler.start()
+
+        # Thread killing timedout jobs.
+        if self.timeout or self.soft_timeout:
+            self._timeout_handler = self.TimeoutHandler(
+                    self._pool, self._cache,
+                    self.soft_timeout, self.timeout)
+            self._timeout_handler.start()
+        else:
+            self._timeout_handler = None
+
+        # Thread processing results in the outqueue.
+        self._result_handler = self.ResultHandler(self._outqueue,
+                                        self._quick_get, self._cache,
+                                        self._putlock)
+        self._result_handler.start()
+
+        self._terminate = Finalize(
+            self, self._terminate_pool,
+            args=(self._taskqueue, self._inqueue, self._outqueue,
+                  self._ackqueue, self._pool, self._ack_handler,
+                  self._worker_handler, self._task_handler,
+                  self._result_handler, self._cache,
+                  self._timeout_handler),
+            exitpriority=15,
+            )
+
+    def _create_worker_process(self):
+        w = self.Process(
+            target=worker,
+            args=(self._inqueue, self._outqueue, self._ackqueue,
+                    self._initializer, self._initargs,
+                    self._maxtasksperchild),
+            )
+        self._pool.append(w)
+        w.name = w.name.replace('Process', 'PoolWorker')
+        w.daemon = True
+        w.start()
+        return w
+
+    def _join_exited_workers(self):
+        """Cleanup after any worker processes which have exited due to
+        reaching their specified lifetime. Returns True if any workers were
+        cleaned up.
+        """
+        for i in reversed(range(len(self._pool))):
+            worker = self._pool[i]
+            if worker.exitcode is not None:
+                # worker exited
+                debug('cleaning up worker %d' % i)
+                worker.join()
+                del self._pool[i]
+        return len(self._pool) < self._processes
+
+    def _repopulate_pool(self):
+        """Bring the number of pool processes up to the specified number,
+        for use after reaping workers which have exited.
+        """
+        debug('repopulating pool')
+        for i in range(self._processes - len(self._pool)):
+            self._create_worker_process()
+            debug('added worker')
+
+    def _maintain_pool(self):
+        """"Clean up any exited workers and start replacements for them.
+        """
+        if self._join_exited_workers():
+            self._repopulate_pool()
+
+    def _setup_queues(self):
+        from multiprocessing.queues import SimpleQueue
+        self._inqueue = SimpleQueue()
+        self._outqueue = SimpleQueue()
+        self._ackqueue = SimpleQueue()
+        self._quick_put = self._inqueue._writer.send
+        self._quick_get = self._outqueue._reader.recv
+        self._quick_get_ack = self._ackqueue._reader.recv
+
+    def apply(self, func, args=(), kwds={}):
+        '''
+        Equivalent of `apply()` builtin
+        '''
+        assert self._state == RUN
+        return self.apply_async(func, args, kwds).get()
+
+    def map(self, func, iterable, chunksize=None):
+        '''
+        Equivalent of `map()` builtin
+        '''
+        assert self._state == RUN
+        return self.map_async(func, iterable, chunksize).get()
+
+    def imap(self, func, iterable, chunksize=1):
+        '''
+        Equivalent of `itertools.imap()` -- can be MUCH slower
+        than `Pool.map()`
+        '''
+        assert self._state == RUN
+        if chunksize == 1:
+            result = IMapIterator(self._cache)
+            self._taskqueue.put((((result._job, i, func, (x,), {})
+                         for i, x in enumerate(iterable)), result._set_length))
+            return result
+        else:
+            assert chunksize > 1
+            task_batches = Pool._get_tasks(func, iterable, chunksize)
+            result = IMapIterator(self._cache)
+            self._taskqueue.put((((result._job, i, mapstar, (x,), {})
+                     for i, x in enumerate(task_batches)), result._set_length))
+            return (item for chunk in result for item in chunk)
+
+    def imap_unordered(self, func, iterable, chunksize=1):
+        '''
+        Like `imap()` method but ordering of results is arbitrary
+        '''
+        assert self._state == RUN
+        if chunksize == 1:
+            result = IMapUnorderedIterator(self._cache)
+            self._taskqueue.put((((result._job, i, func, (x,), {})
+                         for i, x in enumerate(iterable)), result._set_length))
+            return result
+        else:
+            assert chunksize > 1
+            task_batches = Pool._get_tasks(func, iterable, chunksize)
+            result = IMapUnorderedIterator(self._cache)
+            self._taskqueue.put((((result._job, i, mapstar, (x,), {})
+                     for i, x in enumerate(task_batches)), result._set_length))
+            return (item for chunk in result for item in chunk)
+
+    def apply_async(self, func, args=(), kwds={},
+            callback=None, accept_callback=None, timeout_callback=None,
+            waitforslot=False):
+        '''
+        Asynchronous equivalent of `apply()` builtin.
+
+        The callback is called when the function's return value is ready.
+        The accept callback is called when the job is accepted for execution.
+
+        Simplified, the flow is like this:
+
+            >>> if accept_callback:
+            ...     accept_callback()
+            >>> retval = func(*args, **kwds)
+            >>> if callback:
+            ...     callback(retval)
+
+        '''
+        assert self._state == RUN
+        result = ApplyResult(self._cache, callback,
+                             accept_callback, timeout_callback)
+        if waitforslot:
+            self._putlock.acquire()
+        self._taskqueue.put(([(result._job, None, func, args, kwds)], None))
+        return result
+
+    def map_async(self, func, iterable, chunksize=None, callback=None):
+        '''
+        Asynchronous equivalent of `map()` builtin
+        '''
+        assert self._state == RUN
+        if not hasattr(iterable, '__len__'):
+            iterable = list(iterable)
+
+        if chunksize is None:
+            chunksize, extra = divmod(len(iterable), len(self._pool) * 4)
+            if extra:
+                chunksize += 1
+        if len(iterable) == 0:
+            chunksize = 0
+
+        task_batches = Pool._get_tasks(func, iterable, chunksize)
+        result = MapResult(self._cache, chunksize, len(iterable), callback)
+        self._taskqueue.put((((result._job, i, mapstar, (x,), {})
+                              for i, x in enumerate(task_batches)), None))
+        return result
+
+    @staticmethod
+    def _get_tasks(func, it, size):
+        it = iter(it)
+        while 1:
+            x = tuple(itertools.islice(it, size))
+            if not x:
+                return
+            yield (func, x)
+
+    def __reduce__(self):
+        raise NotImplementedError(
+              'pool objects cannot be passed between '
+              'processes or pickled')
+
+    def close(self):
+        debug('closing pool')
+        if self._state == RUN:
+            self._state = CLOSE
+            self._worker_handler.close()
+            self._taskqueue.put(None)
+
+    def terminate(self):
+        debug('terminating pool')
+        self._state = TERMINATE
+        self._worker_handler.terminate()
+        self._terminate()
+
+    def join(self):
+        assert self._state in (CLOSE, TERMINATE)
+        self._worker_handler.join()
+        self._task_handler.join()
+        self._result_handler.join()
+        for p in self._pool:
+            p.join()
+        debug('after join()')
+
+    @staticmethod
+    def _help_stuff_finish(inqueue, task_handler, size):
+        # task_handler may be blocked trying to put items on inqueue
+        debug('removing tasks from inqueue until task handler finished')
+        inqueue._rlock.acquire()
+        while task_handler.is_alive() and inqueue._reader.poll():
+            inqueue._reader.recv()
+            time.sleep(0)
+
+    @classmethod
+    def _terminate_pool(cls, taskqueue, inqueue, outqueue, ackqueue, pool,
+                        ack_handler, worker_handler, task_handler,
+                        result_handler, cache, timeout_handler):
+
+        # this is guaranteed to only be called once
+        debug('finalizing pool')
+
+        worker_handler.terminate()
+
+        task_handler.terminate()
+        taskqueue.put(None)                 # sentinel
+
+        debug('helping task handler/workers to finish')
+        cls._help_stuff_finish(inqueue, task_handler, len(pool))
+
+        assert result_handler.is_alive() or len(cache) == 0
+
+        result_handler.terminate()
+        outqueue.put(None)                  # sentinel
+
+        ack_handler.terminate()
+        ackqueue.put(None)                  # sentinel
+
+        if timeout_handler is not None:
+            timeout_handler.terminate()
+
+        # Terminate workers which haven't already finished
+        if pool and hasattr(pool[0], 'terminate'):
+            debug('terminating workers')
+            for p in pool:
+                if p.exitcode is None:
+                    p.terminate()
+
+        debug('joining task handler')
+        task_handler.join(1e100)
+
+        debug('joining result handler')
+        result_handler.join(1e100)
+
+        debug('joining ack handler')
+        ack_handler.join(1e100)
+
+        if timeout_handler is not None:
+            debug('joining timeout handler')
+            timeout_handler.join(1e100)
+
+        if pool and hasattr(pool[0], 'terminate'):
+            debug('joining pool workers')
+            for p in pool:
+                if p.is_alive():
+                    # worker has not yet exited
+                    debug('cleaning up worker %d' % p.pid)
+                    p.join()
+DynamicPool = Pool
+
+#
+# Class whose instances are returned by `Pool.apply_async()`
+#
+
+class ApplyResult(object):
+
+    def __init__(self, cache, callback, accept_callback=None,
+            timeout_callback=None):
+        self._cond = threading.Condition(threading.Lock())
+        self._job = job_counter.next()
+        self._cache = cache
+        self._accepted = False
+        self._accept_pid = None
+        self._time_accepted = None
+        self._ready = False
+        self._callback = callback
+        self._accept_callback = accept_callback
+        self._timeout_callback = timeout_callback
+        cache[self._job] = self
+
+    def ready(self):
+        return self._ready
+
+    def accepted(self):
+        return self._accepted
+
+    def successful(self):
+        assert self._ready
+        return self._success
+
+    def wait(self, timeout=None):
+        self._cond.acquire()
+        try:
+            if not self._ready:
+                self._cond.wait(timeout)
+        finally:
+            self._cond.release()
+
+    def get(self, timeout=None):
+        self.wait(timeout)
+        if not self._ready:
+            raise TimeoutError
+        if self._success:
+            return self._value
+        else:
+            raise self._value
+
+    def _set(self, i, obj):
+        self._success, self._value = obj
+        if self._callback and self._success:
+            self._callback(self._value)
+        self._cond.acquire()
+        try:
+            self._ready = True
+            self._cond.notify()
+        finally:
+            self._cond.release()
+        if self._accepted:
+            del self._cache[self._job]
+
+    def _ack(self, i, time_accepted, pid):
+        self._accepted = True
+        self._time_accepted = time_accepted
+        self._accept_pid = pid
+        if self._accept_callback:
+            self._accept_callback()
+        if self._ready:
+            del self._cache[self._job]
+
+#
+# Class whose instances are returned by `Pool.map_async()`
+#
+
+class MapResult(ApplyResult):
+
+    def __init__(self, cache, chunksize, length, callback):
+        ApplyResult.__init__(self, cache, callback)
+        self._success = True
+        self._value = [None] * length
+        self._chunksize = chunksize
+        if chunksize <= 0:
+            self._number_left = 0
+            self._ready = True
+        else:
+            self._number_left = length//chunksize + bool(length % chunksize)
+
+    def _set(self, i, success_result):
+        success, result = success_result
+        if success:
+            self._value[i*self._chunksize:(i+1)*self._chunksize] = result
+            self._number_left -= 1
+            if self._number_left == 0:
+                if self._callback:
+                    self._callback(self._value)
+                del self._cache[self._job]
+                self._cond.acquire()
+                try:
+                    self._ready = True
+                    self._cond.notify()
+                finally:
+                    self._cond.release()
+
+        else:
+            self._success = False
+            self._value = result
+            del self._cache[self._job]
+            self._cond.acquire()
+            try:
+                self._ready = True
+                self._cond.notify()
+            finally:
+                self._cond.release()
+
+#
+# Class whose instances are returned by `Pool.imap()`
+#
+
+class IMapIterator(object):
+
+    def __init__(self, cache):
+        self._cond = threading.Condition(threading.Lock())
+        self._job = job_counter.next()
+        self._cache = cache
+        self._items = collections.deque()
+        self._index = 0
+        self._length = None
+        self._unsorted = {}
+        cache[self._job] = self
+
+    def __iter__(self):
+        return self
+
+    def next(self, timeout=None):
+        self._cond.acquire()
+        try:
+            try:
+                item = self._items.popleft()
+            except IndexError:
+                if self._index == self._length:
+                    raise StopIteration
+                self._cond.wait(timeout)
+                try:
+                    item = self._items.popleft()
+                except IndexError:
+                    if self._index == self._length:
+                        raise StopIteration
+                    raise TimeoutError
+        finally:
+            self._cond.release()
+
+        success, value = item
+        if success:
+            return value
+        raise value
+
+    __next__ = next                    # XXX
+
+    def _set(self, i, obj):
+        self._cond.acquire()
+        try:
+            if self._index == i:
+                self._items.append(obj)
+                self._index += 1
+                while self._index in self._unsorted:
+                    obj = self._unsorted.pop(self._index)
+                    self._items.append(obj)
+                    self._index += 1
+                self._cond.notify()
+            else:
+                self._unsorted[i] = obj
+
+            if self._index == self._length:
+                del self._cache[self._job]
+        finally:
+            self._cond.release()
+
+    def _set_length(self, length):
+        self._cond.acquire()
+        try:
+            self._length = length
+            if self._index == self._length:
+                self._cond.notify()
+                del self._cache[self._job]
+        finally:
+            self._cond.release()
+
+#
+# Class whose instances are returned by `Pool.imap_unordered()`
+#
+
+class IMapUnorderedIterator(IMapIterator):
+
+    def _set(self, i, obj):
+        self._cond.acquire()
+        try:
+            self._items.append(obj)
+            self._index += 1
+            self._cond.notify()
+            if self._index == self._length:
+                del self._cache[self._job]
+        finally:
+            self._cond.release()
+
+#
+#
+#
+
+class ThreadPool(Pool):
+
+    from multiprocessing.dummy import Process as DummyProcess
+    Process = DummyProcess
+
+    def __init__(self, processes=None, initializer=None, initargs=()):
+        Pool.__init__(self, processes, initializer, initargs)
+
+    def _setup_queues(self):
+        self._inqueue = Queue.Queue()
+        self._outqueue = Queue.Queue()
+        self._ackqueue = Queue.Queue()
+        self._quick_put = self._inqueue.put
+        self._quick_get = self._outqueue.get
+        self._quick_get_ack = self._ackqueue.get
+
+    @staticmethod
+    def _help_stuff_finish(inqueue, task_handler, size):
+        # put sentinels at head of inqueue to make workers finish
+        inqueue.not_empty.acquire()
+        try:
+            inqueue.queue.clear()
+            inqueue.queue.extend([None] * size)
+            inqueue.not_empty.notify_all()
+        finally:
+            inqueue.not_empty.release()
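
A minimal usage sketch for the ack/timeout-aware pool added above, assuming the
file is importable as ``celery.concurrency.processes.pool`` and a Unix platform
(``soft_timeout`` needs ``SIGUSR1``). Per ``ApplyResult`` above, the accept
callback is invoked with no arguments once a worker picks the job up, and the
result callback receives the return value:

    import time
    from celery.concurrency.processes.pool import Pool

    def work(x):
        time.sleep(1)
        return x * 2

    def on_accept():
        print("job accepted by a worker process")

    def on_ready(retval):
        print("result ready: %r" % (retval, ))

    if __name__ == "__main__":
        pool = Pool(processes=2, soft_timeout=5, timeout=10)
        result = pool.apply_async(work, (21, ),
                                  accept_callback=on_accept,
                                  callback=on_ready,
                                  waitforslot=True)
        print(result.get(timeout=30))       # 42
        pool.close()
        pool.join()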

+ 68 - 0
celery/concurrency/threads.py

@@ -0,0 +1,68 @@
+
+import threading
+from threadpool import ThreadPool, WorkRequest
+
+from celery import log
+from celery.utils.functional import curry
+from celery.datastructures import ExceptionInfo
+
+
+accept_lock = threading.Lock()
+
+
+def do_work(target, args=(), kwargs={}, callback=None,
+        accept_callback=None):
+    # Serialize accept callbacks, and guard against a missing one.
+    if accept_callback is not None:
+        accept_lock.acquire()
+        try:
+            accept_callback()
+        finally:
+            accept_lock.release()
+    callback(target(*args, **kwargs))
+
+
+class TaskPool(object):
+
+    def __init__(self, limit, logger=None, **kwargs):
+        self.limit = limit
+        self.logger = logger or log.get_default_logger()
+        self._pool = None
+
+    def start(self):
+        self._pool = ThreadPool(self.limit)
+
+    def stop(self):
+        self._pool.dismissWorkers(self.limit, do_join=True)
+
+    def apply_async(self, target, args=None, kwargs=None, callbacks=None,
+            errbacks=None, accept_callback=None, **compat):
+        args = args or []
+        kwargs = kwargs or {}
+        callbacks = callbacks or []
+        errbacks = errbacks or []
+
+        on_ready = curry(self.on_ready, callbacks, errbacks)
+
+        self.logger.debug("ThreadPool: Apply %s (args:%s kwargs:%s)" % (
+            target, args, kwargs))
+
+        req = WorkRequest(do_work, (target, args, kwargs, on_ready,
+                                    accept_callback))
+        self._pool.putRequest(req)
+        # threadpool also has callback support,
+        # but for some reason the callback is not triggered
+        # before you've collected the results.
+        # Clear the results (if any), so it doesn't grow too large.
+        self._pool._results_queue.queue.clear()
+        return req
+
+    def on_ready(self, callbacks, errbacks, ret_value):
+        """What to do when a worker task is ready and its return value has
+        been collected."""
+
+        if isinstance(ret_value, ExceptionInfo):
+            if isinstance(ret_value.exception, (
+                    SystemExit, KeyboardInterrupt)): # pragma: no cover
+                raise ret_value.exception
+            [errback(ret_value) for errback in errbacks]
+        else:
+            [callback(ret_value) for callback in callbacks]
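
A sketch of driving the thread-based ``TaskPool`` above. It assumes the
third-party ``threadpool`` package (which this module imports) is installed and
behaves as the module expects; ``callbacks`` receive the target's return value,
while ``errbacks`` only fire when the target returns an ``ExceptionInfo``:

    import time
    from celery.concurrency.threads import TaskPool

    def add(x, y):
        return x + y

    def on_accept():
        print("request accepted")

    def on_result(retval):
        print("got %r" % (retval, ))

    pool = TaskPool(limit=4)
    pool.start()
    pool.apply_async(add, args=(2, 2),
                     callbacks=[on_result],
                     errbacks=[],
                     accept_callback=on_accept)
    time.sleep(1)       # toy example: give a worker thread time to run it
    pool.stop()         # dismisses the worker threads and joins them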

+ 44 - 16
celery/conf.py

@@ -40,12 +40,12 @@ _DEFAULTS = {
     "CELERY_DEFAULT_EXCHANGE": "celery",
     "CELERY_DEFAULT_EXCHANGE_TYPE": "direct",
     "CELERY_DEFAULT_DELIVERY_MODE": 2, # persistent
-    "CELERY_BROKER_CONNECTION_TIMEOUT": 4,
-    "CELERY_BROKER_CONNECTION_RETRY": True,
-    "CELERY_BROKER_CONNECTION_MAX_RETRIES": 100,
+    "BROKER_CONNECTION_TIMEOUT": 4,
+    "BROKER_CONNECTION_RETRY": True,
+    "BROKER_CONNECTION_MAX_RETRIES": 100,
     "CELERY_ACKS_LATE": False,
     "CELERYD_POOL_PUTLOCKS": True,
-    "CELERYD_POOL": "celery.worker.pool.TaskPool",
+    "CELERYD_POOL": "celery.concurrency.processes.TaskPool",
     "CELERYD_MEDIATOR": "celery.worker.controllers.Mediator",
     "CELERYD_ETA_SCHEDULER": "celery.worker.controllers.ScheduleController",
     "CELERYD_LISTENER": "celery.worker.listener.CarrotListener",
@@ -78,9 +78,21 @@ _DEFAULTS = {
     "CELERY_RESULT_PERSISTENT": False,
     "CELERY_MAX_CACHED_RESULTS": 5000,
     "CELERY_TRACK_STARTED": False,
+
+    # Default e-mail settings.
+    "SERVER_EMAIL": "celery@localhost",
+    "EMAIL_HOST": "localhost",
+    "EMAIL_PORT": 25,
+    "ADMINS": (),
 }
 
 
+def isatty(fh):
+    # Fixes bug with mod_wsgi:
+    #   mod_wsgi.Log object has no attribute isatty.
+    return getattr(fh, "isatty", None) and fh.isatty()
+
+
 _DEPRECATION_FMT = """
 %s is deprecated in favor of %s and is scheduled for removal in celery v1.4.
 """.strip()
@@ -133,15 +145,14 @@ CELERYD_TASK_TIME_LIMIT = _get("CELERYD_TASK_TIME_LIMIT")
 CELERYD_TASK_SOFT_TIME_LIMIT = _get("CELERYD_TASK_SOFT_TIME_LIMIT")
 CELERYD_MAX_TASKS_PER_CHILD = _get("CELERYD_MAX_TASKS_PER_CHILD")
 STORE_ERRORS_EVEN_IF_IGNORED = _get("CELERY_STORE_ERRORS_EVEN_IF_IGNORED")
-CELERY_SEND_TASK_ERROR_EMAILS = _get("CELERY_SEND_TASK_ERROR_EMAILS",
-                                     not settings.DEBUG,
+CELERY_SEND_TASK_ERROR_EMAILS = _get("CELERY_SEND_TASK_ERROR_EMAILS", False,
                                      compat=["SEND_CELERY_TASK_ERROR_EMAILS"])
 CELERYD_LOG_FORMAT = _get("CELERYD_LOG_FORMAT",
                           compat=["CELERYD_DAEMON_LOG_FORMAT"])
 CELERYD_TASK_LOG_FORMAT = _get("CELERYD_TASK_LOG_FORMAT")
 CELERYD_LOG_FILE = _get("CELERYD_LOG_FILE")
-CELERYD_LOG_COLOR = _get("CELERYD_LOG_COLOR", \
-                       CELERYD_LOG_FILE is None and sys.stderr.isatty())
+CELERYD_LOG_COLOR = _get("CELERYD_LOG_COLOR",
+                       CELERYD_LOG_FILE is None and isatty(sys.stderr))
 CELERYD_LOG_LEVEL = _get("CELERYD_LOG_LEVEL",
                             compat=["CELERYD_DAEMON_LOG_LEVEL"])
 CELERYD_LOG_LEVEL = LOG_LEVELS[CELERYD_LOG_LEVEL.upper()]
@@ -154,6 +165,31 @@ CELERYD_LISTENER = _get("CELERYD_LISTENER")
 CELERYD_MEDIATOR = _get("CELERYD_MEDIATOR")
 CELERYD_ETA_SCHEDULER = _get("CELERYD_ETA_SCHEDULER")
 
+# :--- Email settings                               <-   --   --- - ----- -- #
+ADMINS = _get("ADMINS")
+SERVER_EMAIL = _get("SERVER_EMAIL")
+EMAIL_HOST = _get("EMAIL_HOST")
+EMAIL_HOST_USER = _get("EMAIL_HOST_USER")
+EMAIL_HOST_PASSWORD = _get("EMAIL_HOST_PASSWORD")
+EMAIL_PORT = _get("EMAIL_PORT")
+
+
+# :--- Broker connections                           <-   --   --- - ----- -- #
+BROKER_HOST = _get("BROKER_HOST")
+BROKER_PORT = _get("BROKER_PORT")
+BROKER_USER = _get("BROKER_USER")
+BROKER_PASSWORD = _get("BROKER_PASSWORD")
+BROKER_VHOST = _get("BROKER_VHOST")
+BROKER_USE_SSL = _get("BROKER_USE_SSL")
+BROKER_INSIST = _get("BROKER_INSIST")
+BROKER_CONNECTION_TIMEOUT = _get("BROKER_CONNECTION_TIMEOUT",
+                                compat=["CELERY_BROKER_CONNECTION_TIMEOUT"])
+BROKER_CONNECTION_RETRY = _get("BROKER_CONNECTION_RETRY",
+                                compat=["CELERY_BROKER_CONNECTION_RETRY"])
+BROKER_CONNECTION_MAX_RETRIES = _get("BROKER_CONNECTION_MAX_RETRIES",
+                            compat=["CELERY_BROKER_CONNECTION_MAX_RETRIES"])
+BROKER_BACKEND = _get("BROKER_BACKEND") or _get("CARROT_BACKEND")
+
 # <--- Message routing                             <-   --   --- - ----- -- #
 DEFAULT_QUEUE = _get("CELERY_DEFAULT_QUEUE")
 DEFAULT_ROUTING_KEY = _get("CELERY_DEFAULT_ROUTING_KEY")
@@ -179,14 +215,6 @@ EVENT_EXCHANGE_TYPE = _get("CELERY_EVENT_EXCHANGE_TYPE")
 EVENT_ROUTING_KEY = _get("CELERY_EVENT_ROUTING_KEY")
 EVENT_SERIALIZER = _get("CELERY_EVENT_SERIALIZER")
 
-# :--- Broker connections                           <-   --   --- - ----- -- #
-BROKER_CONNECTION_TIMEOUT = _get("CELERY_BROKER_CONNECTION_TIMEOUT",
-                                compat=["CELERY_AMQP_CONNECTION_TIMEOUT"])
-BROKER_CONNECTION_RETRY = _get("CELERY_BROKER_CONNECTION_RETRY",
-                                compat=["CELERY_AMQP_CONNECTION_RETRY"])
-BROKER_CONNECTION_MAX_RETRIES = _get("CELERY_BROKER_CONNECTION_MAX_RETRIES",
-                                compat=["CELERY_AMQP_CONNECTION_MAX_RETRIES"])
-
 # :--- AMQP Backend settings                        <-   --   --- - ----- -- #
 
 RESULT_EXCHANGE = _get("CELERY_RESULT_EXCHANGE")
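
The renamed broker and e-mail settings above can be set directly in a config
module; the old ``CELERY_BROKER_*`` names are still honoured through the
``compat`` aliases. A sketch of a ``celeryconfig.py`` using the new names and
the defaults shown in this diff:

    BROKER_HOST = "localhost"
    BROKER_PORT = 5672
    BROKER_USER = "guest"
    BROKER_PASSWORD = "guest"
    BROKER_VHOST = "/"
    BROKER_CONNECTION_TIMEOUT = 4       # was CELERY_BROKER_CONNECTION_TIMEOUT
    BROKER_CONNECTION_RETRY = True
    BROKER_CONNECTION_MAX_RETRIES = 100

    # Task error e-mails are now off by default.
    CELERY_SEND_TASK_ERROR_EMAILS = False
    SERVER_EMAIL = "celery@localhost"
    EMAIL_HOST = "localhost"
    EMAIL_PORT = 25
    ADMINS = (("Admin", "admin@example.com"), )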

+ 15 - 4
celery/datastructures.py

@@ -1,17 +1,28 @@
 from __future__ import generators
-"""
 
-Custom Datastructures
-
-"""
 import time
 import traceback
+
 from UserList import UserList
 from Queue import Queue, Empty as QueueEmpty
 
 from celery.utils.compat import OrderedDict
 
 
+class AttributeDict(dict):
+    """Dict subclass with attribute access."""
+
+    def __getattr__(self, key):
+        try:
+            return self[key]
+        except KeyError:
+            raise AttributeError("'%s' object has no attribute '%s'" % (
+                    self.__class__.__name__, key))
+
+    def __setattr__(self, key, value):
+        self[key] = value
+
+
 class PositionQueue(UserList):
     """A positional queue of a specific length, with slots that are either
     filled or unfilled. When all of the positions are filled, the queue
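
A quick illustration of the new ``AttributeDict`` helper: keys double as
attributes, so it can be used anywhere a lightweight record is needed:

    from celery.datastructures import AttributeDict

    ctx = AttributeDict(hostname="worker1.example.com", loglevel="INFO")
    ctx.concurrency = 8                 # same as ctx["concurrency"] = 8
    print("%s@%s" % (ctx.loglevel, ctx["hostname"]))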

+ 0 - 2
celery/db/session.py

@@ -1,5 +1,3 @@
-import os
-
 from sqlalchemy import create_engine
 from sqlalchemy.orm import sessionmaker
 from sqlalchemy.ext.declarative import declarative_base

+ 1 - 2
celery/decorators.py

@@ -5,9 +5,8 @@ Decorators
 """
 from inspect import getargspec
 
-from billiard.utils.functional import wraps
-
 from celery.task.base import Task, PeriodicTask
+from celery.utils.functional import wraps
 
 
 def task(*args, **options):

+ 133 - 56
celery/events/state.py

@@ -1,53 +1,83 @@
+import time
+import heapq
+
 from carrot.utils import partition
 
 from celery import states
+from celery.datastructures import LocalCache
+from celery.utils import kwdict
+
+HEARTBEAT_EXPIRE = 150 # 2 minutes, 30 seconds
 
 
-class Thing(object):
+class Element(dict):
+    """Base class for types."""
     visited = False
 
     def __init__(self, **fields):
-        self.update(fields)
+        dict.__init__(self, fields)
 
-    def update(self, fields, **extra):
-        for field_name, field_value in dict(fields, **extra).items():
-            setattr(self, field_name, field_value)
+    def __getattr__(self, key):
+        try:
+            return self[key]
+        except KeyError:
+            raise AttributeError("'%s' object has no attribute '%s'" % (
+                    self.__class__.__name__, key))
 
+    def __setattr__(self, key, value):
+        self[key] = value
 
 
-class Worker(Thing):
-    alive = False
+class Worker(Element):
+    """Worker State."""
 
     def __init__(self, **fields):
         super(Worker, self).__init__(**fields)
         self.heartbeats = []
 
-    def online(self, **kwargs):
-        self.alive = True
+    def on_online(self, timestamp=None, **kwargs):
+        self._heartpush(timestamp)
 
-    def offline(self, **kwargs):
-        self.alive = False
+    def on_offline(self, **kwargs):
+        self.heartbeats = []
+
+    def on_heartbeat(self, timestamp=None, **kwargs):
+        self._heartpush(timestamp)
 
-    def heartbeat(self, timestamp=None, **kwargs):
-        self.heartbeats.append(timestamp)
-        self.alive = True
+    def _heartpush(self, timestamp):
+        if timestamp:
+            heapq.heappush(self.heartbeats, timestamp)
 
+    @property
+    def alive(self):
+        return (self.heartbeats and
+                time.time() < self.heartbeats[-1] + HEARTBEAT_EXPIRE)
 
-class Task(Thing):
+
+class Task(Element):
+    """Task State."""
     _info_fields = ("args", "kwargs", "retries",
                     "result", "eta", "runtime",
                     "exception")
-    uuid = None
-    name = None
-    state = states.PENDING
-    received = False
-    accepted = False
-    args = None
-    kwargs = None
-    eta = None
-    retries = 0
-    worker = None
-    timestamp = None
+
+    _defaults = dict(uuid=None,
+                     name=None,
+                     state=states.PENDING,
+                     received=False,
+                     started=False,
+                     succeeded=False,
+                     failed=False,
+                     retried=False,
+                     revoked=False,
+                     args=None,
+                     kwargs=None,
+                     eta=None,
+                     retries=None,
+                     worker=None,
+                     timestamp=None)
+
+    def __init__(self, **fields):
+        super(Task, self).__init__(**dict(self._defaults, **fields))
 
     def info(self, fields=None, extra=[]):
         if fields is None:
@@ -62,52 +92,58 @@ class Task(Thing):
         return self.state in states.READY_STATES
 
     def update(self, d, **extra):
-        d = dict(d, **extra)
         if self.worker:
-            self.worker.online()
-        return super(Task, self).update(d)
+            self.worker.on_heartbeat(timestamp=time.time())
+        return super(Task, self).update(d, **extra)
 
-    def received(self, timestamp=None, **fields):
+    def on_received(self, timestamp=None, **fields):
+        print("ON RECEIVED")
         self.received = timestamp
         self.state = "RECEIVED"
         self.update(fields, timestamp=timestamp)
 
-    def accepted(self, timestamp=None, **fields):
-        self.state = "ACCEPTED"
-        self.accepted = timestamp
-        self.update(fields)
+    def on_started(self, timestamp=None, **fields):
+        self.state = states.STARTED
+        self.started = timestamp
+        self.update(fields, timestamp=timestamp)
 
-    def failed(self, timestamp=None, **fields):
+    def on_failed(self, timestamp=None, **fields):
         self.state = states.FAILURE
         self.failed = timestamp
         self.update(fields, timestamp=timestamp)
 
-    def retried(self, timestamp=None, **fields):
+    def on_retried(self, timestamp=None, **fields):
         self.state = states.RETRY
         self.retried = timestamp
         self.update(fields, timestamp=timestamp)
 
-    def succeeded(self, timestamp=None, **fields):
+    def on_succeeded(self, timestamp=None, **fields):
         self.state = states.SUCCESS
-        self.suceeded = timestamp
+        self.succeeded = timestamp
         self.update(fields, timestamp=timestamp)
 
-    def revoked(self, timestamp=None):
+    def on_revoked(self, timestamp=None, **fields):
         self.state = states.REVOKED
+        self.revoked = timestamp
+        self.update(fields, timestamp=timestamp)
 
 
 class State(object):
+    """Represents a snapshot of a clusters state."""
     event_count = 0
     task_count = 0
 
-    def __init__(self, callback=None):
-        self.workers = {}
-        self.tasks = {}
-        self.callback = callback
+    def __init__(self, callback=None,
+            max_workers_in_memory=5000, max_tasks_in_memory=10000):
+        self.workers = LocalCache(max_workers_in_memory)
+        self.tasks = LocalCache(max_tasks_in_memory)
+        self.event_callback = callback
         self.group_handlers = {"worker": self.worker_event,
                                "task": self.task_event}
 
-    def get_worker(self, hostname, **kwargs):
+    def get_or_create_worker(self, hostname, **kwargs):
+        """Get or create worker by hostname."""
         try:
             worker = self.workers[hostname]
             worker.update(kwargs)
@@ -116,7 +152,8 @@ class State(object):
                     hostname=hostname, **kwargs)
         return worker
 
-    def get_task(self, uuid, **kwargs):
+    def get_or_create_task(self, uuid, **kwargs):
+        """Get or create task by uuid."""
         try:
             task = self.tasks[uuid]
             task.update(kwargs)
@@ -125,34 +162,74 @@ class State(object):
         return task
 
     def worker_event(self, type, fields):
+        """Process worker event."""
         hostname = fields.pop("hostname")
-        worker = self.workers[hostname] = Worker(hostname=hostname)
-        handler = getattr(worker, type)
+        worker = self.get_or_create_worker(hostname)
+        handler = getattr(worker, "on_%s" % type, None)
         if handler:
             handler(**fields)
 
     def task_event(self, type, fields):
+        """Process task event."""
         uuid = fields.pop("uuid")
         hostname = fields.pop("hostname")
-        worker = self.get_worker(hostname)
-        task = self.get_task(uuid, worker=worker)
-        handler = getattr(task, type)
+        worker = self.get_or_create_worker(hostname)
+        task = self.get_or_create_task(uuid)
+        handler = getattr(task, "on_%s" % type, None)
         if type == "received":
             self.task_count += 1
         if handler:
             handler(**fields)
+        task.worker = worker
 
     def event(self, event):
-        event = dict((key.encode("utf-8"), value)
-                        for key, value in event.items())
+        """Process event."""
         self.event_count += 1
+        event = kwdict(event)
         group, _, type = partition(event.pop("type"), "-")
         self.group_handlers[group](type, event)
-        if self.callback:
-            self.callback(self, event)
+        if self.event_callback:
+            self.event_callback(self, event)
 
     def tasks_by_timestamp(self):
-        return sorted(self.tasks.items(), key=lambda t: t[1].timestamp,
-                reverse=True)
+        """Get tasks by timestamp.
+
+        Returns a list of ``(uuid, task)`` tuples.
+
+        """
+        return self._sort_tasks_by_time(self.tasks.items())
+
+    def _sort_tasks_by_time(self, tasks):
+        """Sort task items by time."""
+        return sorted(tasks, key=lambda t: t[1].timestamp, reverse=True)
+
+    def tasks_by_type(self, name):
+        """Get all tasks by type.
+
+        Returns a list of ``(uuid, task)`` tuples.
+
+        """
+        return self._sort_tasks_by_time([(uuid, task)
+                for uuid, task in self.tasks.items()
+                    if task.name == name])
+
+    def tasks_by_worker(self, hostname):
+        """Get all tasks by worker.
+
+        Returns a list of ``(uuid, task)`` tuples.
+
+        """
+        return self._sort_tasks_by_time([(uuid, task)
+                for uuid, task in self.tasks.items()
+                    if task.worker.hostname == hostname])
+
+    def task_types(self):
+        """Returns a list of all seen task types."""
+        return list(set(task.name for task in self.tasks.values()))
+
+    def alive_workers(self):
+        """Returns a list of (seemingly) alive workers."""
+        return [w for w in self.workers.values() if w.alive]
+
 
 state = State()
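
A sketch of replaying a few events through the ``State`` class above; the
field names (``hostname``, ``uuid``, ``timestamp``, ...) follow the handlers in
this file, and the uuid/host values are made up for illustration:

    import time
    from celery.events.state import State

    cluster = State()
    cluster.event({"type": "worker-online",
                   "hostname": "worker1.example.com",
                   "timestamp": time.time()})
    cluster.event({"type": "task-received",
                   "uuid": "d9e6f3a0-0000-0000-0000-000000000000",
                   "name": "tasks.add", "args": "[2, 2]", "kwargs": "{}",
                   "hostname": "worker1.example.com",
                   "timestamp": time.time()})
    cluster.event({"type": "task-succeeded",
                   "uuid": "d9e6f3a0-0000-0000-0000-000000000000",
                   "result": "4", "hostname": "worker1.example.com",
                   "timestamp": time.time()})

    print(cluster.task_types())                     # ["tasks.add"]
    print(len(cluster.alive_workers()))             # 1
    uuid, task = cluster.tasks_by_worker("worker1.example.com")[0]
    print(task.state)                               # "SUCCESS"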

+ 5 - 2
celery/exceptions.py

@@ -3,7 +3,6 @@
 Common Exceptions
 
 """
-from billiard.pool import SoftTimeLimitExceeded as _SoftTimeLimitExceeded
 
 UNREGISTERED_FMT = """
 Task of kind %s is not registered, please make sure it's imported.
@@ -14,7 +13,11 @@ class RouteNotFound(KeyError):
     """Task routed to a queue not in the routing table (CELERY_QUEUES)."""
 
 
-class SoftTimeLimitExceeded(_SoftTimeLimitExceeded):
+class TimeLimitExceeded(Exception):
+    """The time limit has been exceeded and the job has been terminated."""
+
+
+class SoftTimeLimitExceeded(Exception):
     """The soft time limit has been exceeded. This exception is raised
     to give the task a chance to clean up."""
     pass

+ 44 - 29
celery/execute/__init__.py

@@ -19,54 +19,60 @@ def apply_async(task, args=None, kwargs=None, countdown=None, eta=None,
         **options):
     """Run a task asynchronously by the celery daemon(s).
 
-    :param task: The task to run (a callable object, or a :class:`Task`
-        instance
+    :param task: The :class:`~celery.task.base.Task` to run.
 
     :keyword args: The positional arguments to pass on to the
-        task (a ``list``).
+      task (a :class:`list` or :class:`tuple`).
 
-    :keyword kwargs: The keyword arguments to pass on to the task (a ``dict``)
+    :keyword kwargs: The keyword arguments to pass on to the
+      task (a :class:`dict`)
 
     :keyword countdown: Number of seconds into the future that the task should
-        execute. Defaults to immediate delivery (Do not confuse that with
-        the ``immediate`` setting, they are unrelated).
+      execute. Defaults to immediate delivery (Do not confuse that with
+      the ``immediate`` setting, they are unrelated).
 
-    :keyword eta: A :class:`datetime.datetime` object that describes the
-        absolute time when the task should execute. May not be specified
-        if ``countdown`` is also supplied. (Do not confuse this with the
-        ``immediate`` setting, they are unrelated).
+    :keyword eta: A :class:`~datetime.datetime` object that describes the
+      absolute time when the task should execute. May not be specified
+      if ``countdown`` is also supplied. (Do not confuse this with the
+      ``immediate`` setting, they are unrelated).
+
+    :keyword connection: Re-use existing broker connection instead
+      of establishing a new one. The ``connect_timeout`` argument is
+      not respected if this is set.
+
+    :keyword connect_timeout: The timeout in seconds, before we give up
+      on establishing a connection to the AMQP server.
 
     :keyword routing_key: The routing key used to route the task to a worker
-        server.
+      server. Defaults to the task's
+      :attr:`~celery.task.base.Task.routing_key` attribute.
 
     :keyword exchange: The named exchange to send the task to. Defaults to
-        :attr:`celery.task.base.Task.exchange`.
+      the task's :attr:`~celery.task.base.Task.exchange` attribute.
 
     :keyword exchange_type: The exchange type to initialize the exchange as
-        if not already declared.
-        Defaults to :attr:`celery.task.base.Task.exchange_type`.
+      if not already declared. Defaults to the task's
+      :attr:`~celery.task.base.Task.exchange_type` attribute.
 
     :keyword immediate: Request immediate delivery. Will raise an exception
-        if the task cannot be routed to a worker immediately.
-        (Do not confuse this parameter with the ``countdown`` and ``eta``
-        settings, as they are unrelated).
+      if the task cannot be routed to a worker immediately.
+      (Do not confuse this parameter with the ``countdown`` and ``eta``
+      settings, as they are unrelated). Defaults to the task's
+      :attr:`~celery.task.base.Task.immediate` attribute.
 
     :keyword mandatory: Mandatory routing. Raises an exception if there's
-        no running workers able to take on this task.
-
-    :keyword connection: Re-use existing AMQP connection.
-        The ``connect_timeout`` argument is not respected if this is set.
-
-    :keyword connect_timeout: The timeout in seconds, before we give up
-        on establishing a connection to the AMQP server.
+      no running workers able to take on this task. Defaults to the task's
+      :attr:`~celery.task.base.Task.mandatory` attribute.
 
     :keyword priority: The task priority, a number between ``0`` and ``9``.
+      Defaults to the task's :attr:`~celery.task.base.Task.priority` attribute.
 
     :keyword serializer: A string identifying the default serialization
-        method to use. Defaults to the ``CELERY_TASK_SERIALIZER`` setting.
-        Can be ``pickle`` ``json``, ``yaml``, or any custom serialization
-        methods that have been registered with
-        :mod:`carrot.serialization.registry`.
+      method to use. Defaults to the ``CELERY_TASK_SERIALIZER`` setting.
+      Can be ``pickle`` ``json``, ``yaml``, or any custom serialization
+      methods that have been registered with
+      :mod:`carrot.serialization.registry`. Defaults to the tasks
+      :attr:`~celery.task.base.Task.serializer` attribute.
 
     **Note**: If the ``CELERY_ALWAYS_EAGER`` setting is set, it will be
     replaced by a local :func:`apply` call instead.
@@ -95,7 +101,16 @@ def apply_async(task, args=None, kwargs=None, countdown=None, eta=None,
 def send_task(name, args=None, kwargs=None, countdown=None, eta=None,
         task_id=None, publisher=None, connection=None, connect_timeout=None,
         result_cls=AsyncResult, **options):
+    """Send task by name.
+
+    Useful if you don't have access to the :class:`~celery.task.base.Task`
+    class.
+
+    :param name: Name of task to execute.
 
+    Supports the same arguments as :func:`apply_async`.
+
+    """
     exchange = options.get("exchange")
     exchange_type = options.get("exchange_type")
 
@@ -124,7 +139,7 @@ def delay_task(task_name, *args, **kwargs):
 
     Example
 
-        >>> r = delay_task("update_record", name="George Constanza", age=32)
+        >>> r = delay_task("update_record", name="George Costanza", age=32)
         >>> r.ready()
         True
         >>> r.result
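
A sketch of the new ``send_task`` helper documented above, dispatching a task
by name; it assumes a broker reachable through the ``BROKER_*`` settings and
that a task registered as "tasks.add" exists somewhere:

    from celery.execute import send_task

    result = send_task("tasks.add", args=[2, 2], kwargs={},
                       countdown=10)        # run ~10 seconds from now
    print(result.task_id)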

+ 23 - 19
celery/log.py

@@ -15,26 +15,29 @@ _monkeypatched = False
 
 BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = range(8)
 RESET_SEQ = "\033[0m"
-COLOUR_SEQ = "\033[1;%dm"
+COLOR_SEQ = "\033[1;%dm"
 BOLD_SEQ = "\033[1m"
-COLOURS = {
-    'WARNING': YELLOW,
-    'DEBUG': BLUE,
-    'CRITICAL': MAGENTA,
-    'ERROR': RED
+COLORS = {
+    "WARNING": YELLOW,
+    "DEBUG": BLUE,
+    "CRITICAL": MAGENTA,
+    "ERROR": RED,
 }
 
-class ColourFormatter(logging.Formatter):
-    def __init__(self, msg, use_colour=True):
+
+class ColorFormatter(logging.Formatter):
+    def __init__(self, msg, use_color=True):
         logging.Formatter.__init__(self, msg)
-        self.use_colour = use_colour
+        self.use_color = use_color
 
     def format(self, record):
         levelname = record.levelname
-        if self.use_colour and levelname in COLOURS:
-            record.msg = COLOUR_SEQ % (30 + COLOURS[levelname]) + record.msg + RESET_SEQ
+        if self.use_color and levelname in COLORS:
+            record.msg = COLOR_SEQ % (
+                    30 + COLORS[levelname]) + record.msg + RESET_SEQ
         return logging.Formatter.format(self, record)
 
+
 def get_task_logger(loglevel=None):
     ensure_process_aware_logger()
     logger = logging.getLogger("celery.Task")
@@ -85,7 +88,7 @@ def get_default_logger(loglevel=None):
 
 
 def setup_logger(loglevel=conf.CELERYD_LOG_LEVEL, logfile=None,
-        format=conf.CELERYD_LOG_FORMAT, colourize=conf.CELERYD_LOG_COLOR,
+        format=conf.CELERYD_LOG_FORMAT, colorize=conf.CELERYD_LOG_COLOR,
         **kwargs):
     """Setup the ``multiprocessing`` logger. If ``logfile`` is not specified,
     then ``stderr`` is used.
@@ -94,11 +97,11 @@ def setup_logger(loglevel=conf.CELERYD_LOG_LEVEL, logfile=None,
 
     """
     return _setup_logger(get_default_logger(loglevel),
-                         logfile, format, colourize, **kwargs)
+                         logfile, format, colorize, **kwargs)
 
 
 def setup_task_logger(loglevel=conf.CELERYD_LOG_LEVEL, logfile=None,
-        format=conf.CELERYD_TASK_LOG_FORMAT, colourize=conf.CELERYD_LOG_COLOR,
+        format=conf.CELERYD_TASK_LOG_FORMAT, colorize=conf.CELERYD_LOG_COLOR,
         task_kwargs=None, **kwargs):
     """Setup the task logger. If ``logfile`` is not specified, then
     ``stderr`` is used.
@@ -111,17 +114,17 @@ def setup_task_logger(loglevel=conf.CELERYD_LOG_LEVEL, logfile=None,
     task_kwargs.setdefault("task_id", "-?-")
     task_kwargs.setdefault("task_name", "-?-")
     logger = _setup_logger(get_task_logger(loglevel),
-                           logfile, format, colourize, **kwargs)
+                           logfile, format, colorize, **kwargs)
     return LoggerAdapter(logger, task_kwargs)
 
 
-def _setup_logger(logger, logfile, format, colourize,
-        formatter=ColourFormatter, **kwargs):
+def _setup_logger(logger, logfile, format, colorize,
+        formatter=ColorFormatter, **kwargs):
 
     if logger.handlers: # Logger already configured
         return logger
     handler = _detect_handler(logfile)
-    handler.setFormatter(formatter(format, use_colour=colourize))
+    handler.setFormatter(formatter(format, use_color=colorize))
     logger.addHandler(handler)
     return logger
 
@@ -204,7 +207,8 @@ class LoggingProxy(object):
 
     def write(self, data):
         """Write message to logging object."""
-        if not self.closed:
+        data = data.strip()
+        if data and not self.closed:
             self.logger.log(self.loglevel, data)
 
     def writelines(self, sequence):
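
A sketch of the renamed ``colorize`` argument (previously ``colourize``) in
use; ``setup_logger`` falls back to stderr when no ``logfile`` is given:

    import logging
    from celery.log import setup_logger

    logger = setup_logger(loglevel=logging.INFO, colorize=False)
    logger.info("worker is ready to accept tasks")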

+ 20 - 5
celery/messaging.py

@@ -7,13 +7,13 @@ import socket
 from datetime import datetime, timedelta
 from itertools import count
 
-from carrot.connection import DjangoBrokerConnection
+from carrot.connection import BrokerConnection
 from carrot.messaging import Publisher, Consumer, ConsumerSet as _ConsumerSet
-from billiard.utils.functional import wraps
 
 from celery import conf
 from celery import signals
 from celery.utils import gen_unique_id, mitemgetter, noop
+from celery.utils.functional import wraps
 from celery.routes import lookup_route, expand_destination
 from celery.loaders import load_settings
 
@@ -219,10 +219,25 @@ class BroadcastConsumer(Consumer):
         super(BroadcastConsumer, self).__init__(*args, **kwargs)
 
 
-def establish_connection(connect_timeout=conf.BROKER_CONNECTION_TIMEOUT):
+def establish_connection(hostname=None, userid=None, password=None,
+        virtual_host=None, port=None, ssl=None, insist=None,
+        connect_timeout=None, backend_cls=None):
     """Establish a connection to the message broker."""
-    return DjangoBrokerConnection(connect_timeout=connect_timeout,
-                                  settings=load_settings())
+    if insist is None:
+        insist = conf.BROKER_INSIST
+    if ssl is None:
+        ssl = conf.BROKER_USE_SSL
+    if connect_timeout is None:
+        connect_timeout = conf.BROKER_CONNECTION_TIMEOUT
+
+    return BrokerConnection(hostname or conf.BROKER_HOST,
+                            userid or conf.BROKER_USER,
+                            password or conf.BROKER_PASSWORD,
+                            virtual_host or conf.BROKER_VHOST,
+                            port or conf.BROKER_PORT,
+                            backend_cls=backend_cls or conf.BROKER_BACKEND,
+                            insist=insist, ssl=ssl,
+                            connect_timeout=connect_timeout)
 
 
 def with_connection(fun):
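
A sketch of the reworked ``establish_connection``: every argument is optional
and falls back to the corresponding ``BROKER_*`` setting. The credentials below
are the usual RabbitMQ defaults, used purely as an example:

    from celery.messaging import establish_connection

    connection = establish_connection(hostname="localhost", port=5672,
                                      userid="guest", password="guest",
                                      virtual_host="/", connect_timeout=4)
    try:
        pass    # publish or consume using this connection
    finally:
        connection.close()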

+ 21 - 26
celery/result.py

@@ -1,19 +1,16 @@
 from __future__ import generators
-"""
 
-Asynchronous result types.
-
-"""
 import time
-from itertools import imap
+
 from copy import copy
+from itertools import imap
 
 from celery import states
-from celery.utils import any, all
 from celery.backends import default_backend
-from celery.messaging import with_connection
-from celery.exceptions import TimeoutError
 from celery.datastructures import PositionQueue
+from celery.exceptions import TimeoutError
+from celery.messaging import with_connection
+from celery.utils import any, all
 
 
 class BaseAsyncResult(object):
@@ -38,7 +35,6 @@ class BaseAsyncResult(object):
         self.task_id = task_id
         self.backend = backend
 
-    @with_connection
     def revoke(self, connection=None, connect_timeout=None):
         """Send revoke signal to all workers.
 
@@ -46,11 +42,8 @@ class BaseAsyncResult(object):
 
         """
         from celery.task import control
-        control.revoke(self.task_id)
-
-    def get(self, timeout=None):
-        """Alias to :meth:`wait`."""
-        return self.wait(timeout=timeout)
+        control.revoke(self.task_id, connection=connection,
+                       connect_timeout=connect_timeout)
 
     def wait(self, timeout=None):
         """Wait for task, and return the result when it arrives.
@@ -67,6 +60,10 @@ class BaseAsyncResult(object):
         """
         return self.backend.wait_for(self.task_id, timeout=timeout)
 
+    def get(self, timeout=None):
+        """Alias to :meth:`wait`."""
+        return self.wait(timeout=timeout)
+
     def ready(self):
         """Returns ``True`` if the task executed successfully, or raised
         an exception. If the task is still running, pending, or is waiting
@@ -75,8 +72,7 @@ class BaseAsyncResult(object):
         :rtype: bool
 
         """
-        status = self.backend.get_status(self.task_id)
-        return status not in self.backend.UNREADY_STATES
+        return self.status not in self.backend.UNREADY_STATES
 
     def successful(self):
         """Returns ``True`` if the task executed successfully.
@@ -84,13 +80,14 @@ class BaseAsyncResult(object):
         :rtype: bool
 
         """
-        return self.backend.is_successful(self.task_id)
+        return self.status == states.SUCCESS
 
     def __str__(self):
-        """``str(self)`` -> ``self.task_id``"""
+        """``str(self) -> self.task_id``"""
         return self.task_id
 
     def __hash__(self):
+        """``hash(self) -> hash(self.task_id)``"""
         return hash(self.task_id)
 
     def __repr__(self):
@@ -168,21 +165,19 @@ class AsyncResult(BaseAsyncResult):
     """
 
     def __init__(self, task_id, backend=None):
-        backend = backend or default_backend
-        super(AsyncResult, self).__init__(task_id, backend)
+        super(AsyncResult, self).__init__(task_id, backend or default_backend)
 
 
 class TaskSetResult(object):
-    """Working with :class:`celery.task.TaskSet` results.
+    """Working with :class:`~celery.task.TaskSet` results.
 
     An instance of this class is returned by
-    :meth:`celery.task.TaskSet.apply_async()`. It lets you inspect the
-    status and return values of the taskset as a single entity.
+    ``TaskSet``'s :meth:`~celery.task.TaskSet.apply_async()`. It enables
+    inspection of the subtasks' status and return values as a single entity.
 
     :option taskset_id: see :attr:`taskset_id`.
     :option subtasks: see :attr:`subtasks`.
 
-
     .. attribute:: taskset_id
 
         The UUID of the taskset itself.
@@ -344,7 +339,7 @@ class TaskSetResult(object):
 
     @property
     def total(self):
-        """The total number of tasks in the :class:`celery.task.TaskSet`."""
+        """The total number of tasks in the :class:`~celery.task.TaskSet`."""
         return len(self.subtasks)
 
 
@@ -374,7 +369,7 @@ class EagerResult(BaseAsyncResult):
             raise self.result
 
     def revoke(self):
-        pass
+        self._status = states.REVOKED
 
     @property
     def result(self):
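
A sketch of polling a result by task id with the API touched above (the id is
a made-up placeholder); ``status``, ``ready()`` and ``successful()`` all go
through the configured result backend:

    from celery.result import AsyncResult

    res = AsyncResult("2f4062bc-0000-0000-0000-000000000000")
    if res.ready():
        if res.successful():
            print(res.get())
        else:
            print("task failed: %s" % res.status)
    else:
        print("not ready yet: %s" % res.status)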

+ 5 - 5
celery/routes.py

@@ -1,9 +1,9 @@
-from celery.utils import instantiate
 from celery.exceptions import RouteNotFound
+from celery.utils import instantiate
 
 
-# Route from mapping
 class MapRoute(object):
+    """Makes a router out of a :class:`dict`."""
 
     def __init__(self, map):
         self.map = map
@@ -16,7 +16,7 @@ def expand_destination(route, routing_table):
     if isinstance(route, basestring):
         try:
             dest = dict(routing_table[route])
-        except KeyError, exc:
+        except KeyError:
             raise RouteNotFound(
                 "Route %s does not exist in the routing table "
                 "(CELERY_QUEUES)" % route)
@@ -41,8 +41,8 @@ def prepare(routes):
 
 
 def firstmatcher(method):
-    """With a list of instances, find the first instance that returns a
-    value for the given method."""
+    """Returns a functions that with a list of instances,
+    finds the first instance that returns a value for the given method."""
 
     def _matcher(seq, *args, **kwargs):
         for cls in seq:

+ 12 - 11
celery/schedules.py

@@ -1,11 +1,10 @@
 from datetime import datetime
-from pyparsing import Word, Literal, ZeroOrMore, Optional, Group, StringEnd, alphas
+from pyparsing import (Word, Literal, ZeroOrMore, Optional,
+                       Group, StringEnd, alphas)
 
 from celery.utils import is_iterable
 from celery.utils.timeutils import timedelta_seconds, weekday, remaining
 
-__all__ = ["schedule", "crontab"]
-
 
 class schedule(object):
     relative = False
@@ -92,7 +91,7 @@ class crontab_parser(object):
     @staticmethod
     def _expand_range(toks):
         if len(toks) > 1:
-            return range(toks[0], int(toks[2])+1)
+            return range(toks[0], int(toks[2]) + 1)
         else:
             return toks[0]
 
@@ -190,15 +189,17 @@ class crontab(schedule):
         elif is_iterable(cronspec):
             result = set(cronspec)
         else:
-            raise TypeError("Argument cronspec needs to be of any of the " + \
-                    "following types: int, basestring, or an iterable type. " + \
+            raise TypeError(
+                    "Argument cronspec needs to be of any of the "
+                    "following types: int, basestring, or an iterable type. "
                     "'%s' was given." % type(cronspec))
 
         # assure the result does not exceed the max
         for number in result:
             if number >= max_:
-                raise ValueError("Invalid crontab pattern. Valid " + \
-                "range is 0-%d. '%d' was found." % (max_, number))
+                raise ValueError(
+                        "Invalid crontab pattern. Valid "
+                        "range is 0-%d. '%d' was found." % (max_, number))
 
         return result
 
@@ -219,7 +220,7 @@ class crontab(schedule):
         last = now - last_run_at
         due, when = False, 1
         if last.days > 0 or last.seconds > 60:
-            due = now.isoweekday() % 7 in self.day_of_week and \
-                  now.hour in self.hour and \
-                  now.minute in self.minute
+            due = (now.isoweekday() % 7 in self.day_of_week and
+                   now.hour in self.hour and
+                   now.minute in self.minute)
         return due, when
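
A sketch of the crontab schedule exercised directly (it needs the
``pyparsing`` dependency imported at the top of this file); ``is_due`` returns
a ``(due, next_check)`` pair, and ``due`` is only true on Mondays at 07:30:

    from datetime import datetime, timedelta
    from celery.schedules import crontab

    monday_morning = crontab(hour=7, minute=30, day_of_week=1)
    due, next_check = monday_morning.is_due(datetime.now() - timedelta(days=1))
    print("due=%s, check again in %ss" % (due, next_check))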

+ 166 - 0
celery/serialization.py

@@ -0,0 +1,166 @@
+import inspect
+import sys
+import types
+
+from copy import deepcopy
+
+import pickle as pypickle
+try:
+    import cPickle as cpickle
+except ImportError:
+    cpickle = None
+
+if sys.version_info < (2, 6):
+    # cPickle is broken in Python <= 2.5.
+    # It unsafely and incorrectly uses relative instead of absolute imports,
+    # so e.g.:
+    #       exceptions.KeyError
+    # becomes:
+    #       celery.exceptions.KeyError
+    #
+    # Your best choice is to upgrade to Python 2.6,
+    # as while the pure pickle version has worse performance,
+    # it is the only safe option for older Python versions.
+    pickle = pypickle
+else:
+    pickle = cpickle or pypickle
+
+
+# BaseException was introduced in Python 2.5.
+try:
+    _error_bases = (BaseException, )
+except NameError:
+    _error_bases = (SystemExit, KeyboardInterrupt)
+
+unwanted_base_classes = (StandardError, Exception) + _error_bases + (object, )
+
+
+if sys.version_info < (2, 5):
+
+    # Prior to Python 2.5, Exception was an old-style class
+    def subclass_exception(name, parent, unused):
+        return types.ClassType(name, (parent,), {})
+else:
+    def subclass_exception(name, parent, module):
+        return type(name, (parent,), {'__module__': module})
+
+
+def find_nearest_pickleable_exception(exc):
+    """With an exception instance, iterate over its super classes (by mro)
+    and find the first super exception that is pickleable. It does
+    not go below :exc:`Exception` (i.e. it skips :exc:`Exception`,
+    :class:`BaseException` and :class:`object`). If no pickleable
+    superclass is found, use :class:`UnpickleableExceptionWrapper` instead.
+
+    :param exc: An exception instance.
+
+    :returns: the nearest pickleable exception, or ``None`` if only
+        :exc:`Exception` or lower-level base classes remain.
+
+    :rtype: :exc:`Exception`
+
+    """
+    cls = exc.__class__
+    getmro_ = getattr(cls, "mro", None)
+
+    # old-style classes don't have mro()
+    if not getmro_:
+        # all Py2.4 exceptions have a baseclass.
+        if not getattr(cls, "__bases__", ()):
+            return
+        # Use inspect.getmro() to traverse bases instead.
+        getmro_ = lambda: inspect.getmro(cls)
+
+    for supercls in getmro_():
+        if supercls in unwanted_base_classes:
+            # only BaseException and object, from here on down,
+            # we don't care about these.
+            return
+        try:
+            exc_args = getattr(exc, "args", [])
+            superexc = supercls(*exc_args)
+            pickle.dumps(superexc)
+        except:
+            pass
+        else:
+            return superexc
+    return
+
+
+def create_exception_cls(name, module, parent=None):
+    """Dynamically create an exception class."""
+    if not parent:
+        parent = Exception
+    return subclass_exception(name, parent, module)
+
+
+class UnpickleableExceptionWrapper(Exception):
+    """Wraps unpickleable exceptions.
+
+    :param exc_module: see :attr:`exc_module`.
+
+    :param exc_cls_name: see :attr:`exc_cls_name`.
+
+    :param exc_args: see :attr:`exc_args`
+
+    .. attribute:: exc_module
+
+        The module of the original exception.
+
+    .. attribute:: exc_cls_name
+
+        The name of the original exception class.
+
+    .. attribute:: exc_args
+
+        The arguments for the original exception.
+
+    Example
+
+        >>> try:
+        ...     something_raising_unpickleable_exc()
+        ... except Exception, e:
+        ...     exc = UnpickleableExceptionWrapper(
+        ...             e.__class__.__module__,
+        ...             e.__class__.__name__,
+        ...             e.args)
+        ...     pickle.dumps(exc)  # Works fine.
+
+    """
+
+    def __init__(self, exc_module, exc_cls_name, exc_args):
+        self.exc_module = exc_module
+        self.exc_cls_name = exc_cls_name
+        self.exc_args = exc_args
+        Exception.__init__(self, exc_module, exc_cls_name, exc_args)
+
+    @classmethod
+    def from_exception(cls, exc):
+        return cls(exc.__class__.__module__,
+                   exc.__class__.__name__,
+                   getattr(exc, "args", []))
+
+    def restore(self):
+        return create_exception_cls(self.exc_cls_name,
+                                    self.exc_module)(*self.exc_args)
+
+
+def get_pickleable_exception(exc):
+    """Make sure exception is pickleable."""
+    nearest = find_nearest_pickleable_exception(exc)
+    if nearest:
+        return nearest
+
+    try:
+        pickle.dumps(deepcopy(exc))
+    except Exception:
+        return UnpickleableExceptionWrapper.from_exception(exc)
+    return exc
+
+
+def get_pickled_exception(exc):
+    """Get original exception from exception pickled using
+    :meth:`get_pickleable_exception`."""
+    if isinstance(exc, UnpickleableExceptionWrapper):
+        return exc.restore()
+    return exc
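
A rough usage sketch of the helpers above (``LockError`` is a made-up, deliberately unpickleable exception used only for illustration):

    >>> import threading
    >>> from celery.serialization import (get_pickleable_exception,
    ...                                   get_pickled_exception, pickle)

    >>> class LockError(Exception):               # hypothetical example
    ...     def __init__(self, lock):
    ...         self.lock = lock                  # locks can't be pickled
    ...         Exception.__init__(self)

    >>> safe = get_pickleable_exception(LockError(threading.Lock()))
    >>> data = pickle.dumps(safe)                 # wrapped, so this works
    >>> restored = get_pickled_exception(safe)    # unwrapped on the consumer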

+ 13 - 10
celery/states.py

@@ -1,4 +1,7 @@
-""" Task States
+"""
+
+States
+------
 
 .. data:: PENDING
 
@@ -24,16 +27,9 @@
 
     Task has been revoked.
 
-"""
-PENDING = "PENDING"
-STARTED = "STARTED"
-SUCCESS = "SUCCESS"
-FAILURE = "FAILURE"
-REVOKED = "REVOKED"
-RETRY = "RETRY"
-
+Sets
+----
 
-"""
 .. data:: READY_STATES
 
     Set of states meaning the task result is ready (has been executed).
@@ -55,6 +51,13 @@ RETRY = "RETRY"
     Set of all possible states.
 
 """
+PENDING = "PENDING"
+STARTED = "STARTED"
+SUCCESS = "SUCCESS"
+FAILURE = "FAILURE"
+REVOKED = "REVOKED"
+RETRY = "RETRY"
+
 READY_STATES = frozenset([SUCCESS, FAILURE, REVOKED])
 UNREADY_STATES = frozenset([PENDING, STARTED, RETRY])
 EXCEPTION_STATES = frozenset([RETRY, FAILURE, REVOKED])
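
The frozensets above are meant for simple membership tests, e.g. (illustrative sketch):

    >>> from celery import states

    >>> states.SUCCESS in states.READY_STATES
    True
    >>> states.RETRY in states.EXCEPTION_STATES
    True
    >>> # e.g. polling until a task result has been executed:
    >>> # while result.status not in states.READY_STATES:
    >>> #     time.sleep(1)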

+ 7 - 5
celery/task/__init__.py

@@ -3,13 +3,15 @@
 Working with tasks and task sets.
 
 """
-from billiard.serialization import pickle
 
 from celery.execute import apply_async
 from celery.registry import tasks
-from celery.task.base import Task, TaskSet, PeriodicTask, ExecuteRemoteTask
+from celery.serialization import pickle
+from celery.task.base import Task, PeriodicTask
+from celery.task.sets import TaskSet
+from celery.task.builtins import PingTask, ExecuteRemoteTask
+from celery.task.builtins import AsynchronousMapTask, _dmap
 from celery.task.control import discard_all
-from celery.task.builtins import PingTask
 from celery.task.http import HttpDispatchTask
 
 __all__ = ["Task", "TaskSet", "PeriodicTask", "tasks", "discard_all",
@@ -27,7 +29,7 @@ def dmap(fun, args, timeout=None):
         [4, 8, 16]
 
     """
-    return TaskSet.map(fun, args, timeout=timeout)
+    return _dmap(fun, args, timeout)
 
 
 def dmap_async(fun, args, timeout=None):
@@ -49,7 +51,7 @@ def dmap_async(fun, args, timeout=None):
         [4, 8, 16]
 
     """
-    return TaskSet.map_async(fun, args, timeout=timeout)
+    return AsynchronousMapTask.delay(pickle.dumps(fun), args, timeout=timeout)
 
 
 def execute_remote(fun, *args, **kwargs):

+ 29 - 191
celery/task/base.py

@@ -1,22 +1,21 @@
 import sys
 import warnings
-from datetime import timedelta
 
-from billiard.serialization import pickle
+from datetime import timedelta
 
 from celery import conf
-from celery.log import setup_task_logger
-from celery.utils import gen_unique_id, padlist
-from celery.utils.timeutils import timedelta_seconds
-from celery.result import BaseAsyncResult, TaskSetResult, EagerResult
-from celery.execute import apply_async, apply
-from celery.registry import tasks
 from celery.backends import default_backend
+from celery.exceptions import MaxRetriesExceededError, RetryTaskError
+from celery.execute import apply_async, apply
+from celery.log import setup_task_logger
 from celery.messaging import TaskPublisher, TaskConsumer
 from celery.messaging import establish_connection as _establish_connection
-from celery.exceptions import MaxRetriesExceededError, RetryTaskError
-
+from celery.registry import tasks
+from celery.result import BaseAsyncResult, EagerResult
 from celery.schedules import schedule
+from celery.utils.timeutils import timedelta_seconds
+
+from celery.task.sets import TaskSet, subtask
 
 PERIODIC_DEPRECATION_TEXT = """\
 Periodic task classes has been deprecated and will be removed
@@ -83,9 +82,11 @@ class Task(object):
     :meth:`run` method.
 
     .. attribute:: name
+
         Name of the task.
 
     .. attribute:: abstract
+
         If ``True`` the task is an abstract base class.
 
     .. attribute:: type
@@ -163,6 +164,7 @@ class Task(object):
         The result store backend used for this task.
 
     .. attribute:: autoregister
+
         If ``True`` the task is automatically registered in the task
         registry, which is the default behaviour.
 
@@ -341,15 +343,15 @@ class Task(object):
         :param args: Positional arguments to retry with.
         :param kwargs: Keyword arguments to retry with.
         :keyword exc: Optional exception to raise instead of
-            :exc:`MaxRestartsExceededError` when the max restart limit has
-            been exceeded.
+            :exc:`~celery.exceptions.MaxRetriesExceededError` when the max
+            restart limit has been exceeded.
         :keyword countdown: Time in seconds to delay the retry for.
         :keyword eta: Explicit time and date to run the retry at (must be a
             :class:`datetime.datetime` instance).
         :keyword \*\*options: Any extra options to pass on to
             :meth:`apply_async`. See :func:`celery.execute.apply_async`.
         :keyword throw: If this is ``False``, do not raise the
-            :exc:`celery.exceptions.RetryTaskError` exception,
+            :exc:`~celery.exceptions.RetryTaskError` exception,
             that tells the worker to mark the task as being retried.
             Note that this means the task will be marked as failed
             if the task raises an exception, or successful if it
@@ -436,7 +438,7 @@ class Task(object):
         :param args: Original arguments for the retried task.
         :param kwargs: Original keyword arguments for the retried task.
 
-        :keyword einfo: :class:`celery.datastructures.ExceptionInfo` instance,
+        :keyword einfo: :class:`~celery.datastructures.ExceptionInfo` instance,
            containing the traceback.
 
         The return value of this handler is ignored.
@@ -453,7 +455,7 @@ class Task(object):
         :param args: Original arguments for the task that failed.
         :param kwargs: Original keyword arguments for the task that failed.
 
-        :keyword einfo: :class:`celery.datastructures.ExceptionInfo` instance,
+        :keyword einfo: :class:`~celery.datastructures.ExceptionInfo` instance,
            containing the traceback (if any).
 
         The return value of this handler is ignored.
@@ -471,7 +473,7 @@ class Task(object):
         :param args: Original arguments for the task that failed.
         :param kwargs: Original keyword arguments for the task that failed.
 
-        :keyword einfo: :class:`celery.datastructures.ExceptionInfo` instance,
+        :keyword einfo: :class:`~celery.datastructures.ExceptionInfo` instance,
            containing the traceback.
 
         The return value of this handler is ignored.
@@ -497,8 +499,8 @@ class Task(object):
     def execute(self, wrapper, pool, loglevel, logfile):
         """The method the worker calls to execute the task.
 
-        :param wrapper: A :class:`celery.worker.job.TaskWrapper`.
-        :param pool: A :class:`celery.worker.pool.TaskPool` object.
+        :param wrapper: A :class:`~celery.worker.job.TaskRequest`.
+        :param pool: A task pool.
         :param loglevel: Current loglevel.
         :param logfile: Name of the currently used logfile.
 
@@ -513,176 +515,12 @@ class Task(object):
             kind = "%s(Task)" % self.__class__.__name__
         return "<%s: %s (%s)>" % (kind, self.name, self.type)
 
-
-class ExecuteRemoteTask(Task):
-    """Execute an arbitrary function or object.
-
-    *Note* You probably want :func:`execute_remote` instead, which this
-    is an internal component of.
-
-    The object must be pickleable, so you can't use lambdas or functions
-    defined in the REPL (that is the python shell, or ``ipython``).
-
-    """
-    name = "celery.execute_remote"
-
-    def run(self, ser_callable, fargs, fkwargs, **kwargs):
-        """
-        :param ser_callable: A pickled function or callable object.
-        :param fargs: Positional arguments to apply to the function.
-        :param fkwargs: Keyword arguments to apply to the function.
-
-        """
-        return pickle.loads(ser_callable)(*fargs, **fkwargs)
-
-
-class AsynchronousMapTask(Task):
-    """Task used internally by :func:`dmap_async` and
-    :meth:`TaskSet.map_async`.  """
-    name = "celery.map_async"
-
-    def run(self, ser_callable, args, timeout=None, **kwargs):
-        """:see :meth:`TaskSet.dmap_async`."""
-        return TaskSet.map(pickle.loads(ser_callable), args, timeout=timeout)
-
-
-class TaskSet(object):
-    """A task containing several subtasks, making it possible
-    to track how many, or when all of the tasks has been completed.
-
-    :param task: The task class or name.
-        Can either be a fully qualified task name, or a task class.
-
-    :param args: A list of args, kwargs pairs.
-        e.g. ``[[args1, kwargs1], [args2, kwargs2], ..., [argsN, kwargsN]]``
-
-
-    .. attribute:: task_name
-
-        The name of the task.
-
-    .. attribute:: arguments
-
-        The arguments, as passed to the task set constructor.
-
-    .. attribute:: total
-
-        Total number of tasks in this task set.
-
-    Example
-
-        >>> from djangofeeds.tasks import RefreshFeedTask
-        >>> taskset = TaskSet(RefreshFeedTask, args=[
-        ...                 ([], {"feed_url": "http://cnn.com/rss"}),
-        ...                 ([], {"feed_url": "http://bbc.com/rss"}),
-        ...                 ([], {"feed_url": "http://xkcd.com/rss"})
-        ... ])
-
-        >>> taskset_result = taskset.apply_async()
-        >>> list_of_return_values = taskset_result.join()
-
-    """
-
-    def __init__(self, task, args):
-        try:
-            task_name = task.name
-            task_obj = task
-        except AttributeError:
-            task_name = task
-            task_obj = tasks[task_name]
-
-        # Get task instance
-        task_obj = tasks[task_obj.name]
-
-        self.task = task_obj
-        self.task_name = task_name
-        self.arguments = args
-        self.total = len(args)
-
-    def apply_async(self, connect_timeout=conf.BROKER_CONNECTION_TIMEOUT):
-        """Run all tasks in the taskset.
-
-        :returns: A :class:`celery.result.TaskSetResult` instance.
-
-        Example
-
-            >>> ts = TaskSet(RefreshFeedTask, args=[
-            ...         (["http://foo.com/rss"], {}),
-            ...         (["http://bar.com/rss"], {}),
-            ... ])
-            >>> result = ts.apply_async()
-            >>> result.taskset_id
-            "d2c9b261-8eff-4bfb-8459-1e1b72063514"
-            >>> result.subtask_ids
-            ["b4996460-d959-49c8-aeb9-39c530dcde25",
-            "598d2d18-ab86-45ca-8b4f-0779f5d6a3cb"]
-            >>> result.waiting()
-            True
-            >>> time.sleep(10)
-            >>> result.ready()
-            True
-            >>> result.successful()
-            True
-            >>> result.failed()
-            False
-            >>> result.join()
-            [True, True]
-
-        """
-        if conf.ALWAYS_EAGER:
-            return self.apply()
-
-        taskset_id = gen_unique_id()
-        conn = self.task.establish_connection(connect_timeout=connect_timeout)
-        publisher = self.task.get_publisher(connection=conn)
-        try:
-            subtasks = [self.apply_part(arglist, taskset_id, publisher)
-                            for arglist in self.arguments]
-        finally:
-            publisher.close()
-            conn.close()
-
-        return TaskSetResult(taskset_id, subtasks)
-
-    def apply_part(self, arglist, taskset_id, publisher):
-        """Apply a single part of the taskset."""
-        args, kwargs, opts = padlist(arglist, 3, default={})
-        return apply_async(self.task, args, kwargs,
-                           taskset_id=taskset_id, publisher=publisher, **opts)
-
-    def apply(self):
-        """Applies the taskset locally."""
-        taskset_id = gen_unique_id()
-        subtasks = [apply(self.task, args, kwargs)
-                        for args, kwargs in self.arguments]
-
-        # This will be filled with EagerResults.
-        return TaskSetResult(taskset_id, subtasks)
-
-    @classmethod
-    def remote_execute(cls, func, args):
-        """Apply ``args`` to function by distributing the args to the
-        celery server(s)."""
-        pickled = pickle.dumps(func)
-        arguments = [((pickled, arg, {}), {}) for arg in args]
-        return cls(ExecuteRemoteTask, arguments)
-
-    @classmethod
-    def map(cls, func, args, timeout=None):
-        """Distribute processing of the arguments and collect the results."""
-        remote_task = cls.remote_execute(func, args)
-        return remote_task.apply_async().join(timeout=timeout)
-
     @classmethod
-    def map_async(cls, func, args, timeout=None):
-        """Distribute processing of the arguments and collect the results
-        asynchronously.
-
-        :returns: :class:`celery.result.AsyncResult` instance.
-
-        """
-        serfunc = pickle.dumps(func)
-        return AsynchronousMapTask.delay(serfunc, args, timeout=timeout)
+    def subtask(cls, *args, **kwargs):
+        """Returns a :class:`~celery.task.sets.subtask` object for
+        this task that wraps arguments and execution options
+        for a single task invocation."""
+        return subtask(cls, *args, **kwargs)
 
 
 class PeriodicTask(Task):
@@ -693,8 +531,9 @@ class PeriodicTask(Task):
     .. attribute:: run_every
 
         *REQUIRED* Defines how often the task is run (its interval),
-        it can be a :class:`datetime.timedelta` object, a :class:`crontab`
-        object or an integer specifying the time in seconds.
+        it can be a :class:`~datetime.timedelta` object, a
+        :class:`~celery.task.schedules.crontab` object or an integer
+        specifying the time in seconds.
 
     .. attribute:: relative
 
@@ -751,7 +590,6 @@ class PeriodicTask(Task):
             raise NotImplementedError(
                     "Periodic tasks must have a run_every attribute")
 
-
         warnings.warn(PERIODIC_DEPRECATION_TEXT,
                         DeprecationWarning)
         conf.CELERYBEAT_SCHEDULE[self.name] = {
@@ -766,7 +604,7 @@ class PeriodicTask(Task):
         super(PeriodicTask, self).__init__()
 
     def timedelta_seconds(self, delta):
-        """Convert :class:`datetime.timedelta` to seconds.
+        """Convert :class:`~datetime.timedelta` to seconds.
 
         Doesn't account for negative timedeltas.
 

+ 41 - 1
celery/task/builtins.py

@@ -1,7 +1,9 @@
 from datetime import timedelta
 
-from celery.task.base import Task, PeriodicTask
 from celery.backends import default_backend
+from celery.serialization import pickle
+from celery.task.base import Task, PeriodicTask
+from celery.task.sets import TaskSet
 
 
 class DeleteExpiredTaskMetaTask(PeriodicTask):
@@ -28,3 +30,41 @@ class PingTask(Task):
     def run(self, **kwargs):
         """:returns: the string ``"pong"``."""
         return "pong"
+
+
+def _dmap(fun, args, timeout=None):
+    pickled = pickle.dumps(fun)
+    arguments = [((pickled, arg, {}), {}) for arg in args]
+    ts = TaskSet(ExecuteRemoteTask, arguments)
+    return ts.apply_async().join(timeout=timeout)
+
+
+class AsynchronousMapTask(Task):
+    """Task used internally by :func:`dmap_async` and
+    :meth:`TaskSet.map_async`.  """
+    name = "celery.map_async"
+
+    def run(self, serfun, args, timeout=None, **kwargs):
+        return _dmap(pickle.loads(serfun), args, timeout=timeout)
+
+
+class ExecuteRemoteTask(Task):
+    """Execute an arbitrary function or object.
+
+    *Note* You probably want :func:`execute_remote` instead, which this
+    is an internal component of.
+
+    The object must be pickleable, so you can't use lambdas or functions
+    defined in the REPL (that is the python shell, or ``ipython``).
+
+    """
+    name = "celery.execute_remote"
+
+    def run(self, ser_callable, fargs, fkwargs, **kwargs):
+        """
+        :param ser_callable: A pickled function or callable object.
+        :param fargs: Positional arguments to apply to the function.
+        :param fkwargs: Keyword arguments to apply to the function.
+
+        """
+        return pickle.loads(ser_callable)(*fargs, **fkwargs)

+ 7 - 0
celery/task/control.py

@@ -110,6 +110,13 @@ def broadcast(command, arguments=None, destination=None, connection=None,
     arguments = arguments or {}
     reply_ticket = reply and gen_unique_id() or None
 
+    if destination is not None and not isinstance(destination, (list, tuple)):
+        raise ValueError("destination must be a list/tuple not %s" % (
+                type(destination)))
+
+    # Set reply limit to the number of destinations (if specified).
+    if limit is None and destination:
+        limit = destination and len(destination) or None
 
     broadcast = BroadcastPublisher(connection)
     try:
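
With this guard in place, ``destination`` must be a list or tuple of worker hostnames, and ``limit`` now defaults to the number of destinations. A hedged sketch of a call that satisfies the check (the command name and hostname are illustrative):

    >>> from celery.task.control import broadcast

    >>> broadcast("rate_limit",
    ...           arguments={"task_name": "myapp.mytask",
    ...                      "rate_limit": "200/m"},
    ...           destination=["worker1.example.com"],  # list/tuple required
    ...           reply=True)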

+ 2 - 2
celery/task/http.py

@@ -52,9 +52,9 @@ class MutableURL(object):
         >>> str(url)
         'http://www.google.com:6580/foo/bar?y=4&x=3#foo'
         >>> url.query["x"] = 10
-        >>> url.query.update({"George": "Constanza"})
+        >>> url.query.update({"George": "Costanza"})
         >>> str(url)
-        'http://www.google.com:6580/foo/bar?y=4&x=10&George=Constanza#foo'
+        'http://www.google.com:6580/foo/bar?y=4&x=10&George=Costanza#foo'
 
     """
     def __init__(self, url):

+ 1 - 1
celery/task/schedules.py

@@ -1 +1 @@
-from celery.schedules import *
+from celery.schedules import schedule, crontab_parser, crontab

+ 167 - 0
celery/task/sets.py

@@ -0,0 +1,167 @@
+from UserList import UserList
+
+from celery import conf
+from celery import registry
+from celery.datastructures import AttributeDict
+from celery.messaging import establish_connection, with_connection
+from celery.messaging import TaskPublisher
+from celery.result import TaskSetResult
+from celery.utils import gen_unique_id
+
+
+class subtask(AttributeDict):
+    """Class that wraps the arguments and execution options
+    for a single task invocation.
+
+    Used as the parts in a :class:`TaskSet` or to safely
+    pass tasks around as callbacks.
+
+    :param task: Either a task class/instance, or the name of a task.
+    :keyword args: Positional arguments to apply.
+    :keyword kwargs: Keyword arguments to apply.
+    :keyword options: Additional options to
+      :func:`celery.execute.apply_async`.
+
+    Note that if the first argument is a :class:`dict`, the other
+    arguments will be ignored and the values in the dict will be used
+    instead.
+
+        >>> s = subtask("tasks.add", args=(2, 2))
+        >>> subtask(s)
+        {"task": "tasks.add", args=(2, 2), kwargs={}, options={}}
+
+    """
+
+    def __init__(self, task=None, args=None, kwargs=None, options=None,
+            **extra):
+        init = super(subtask, self).__init__
+
+        if isinstance(task, dict):
+            # Use the values from a dict.
+            return init(task)
+
+        # Also supports using task class/instance instead of string name.
+        try:
+            task_name = task.name
+        except AttributeError:
+            task_name = task
+
+        init(task=task_name, args=tuple(args or ()), kwargs=kwargs or (),
+             options=options or ())
+
+    def apply(self, *argmerge, **execopts):
+        """Apply this task locally."""
+        # For callbacks: extra args are prepended to the stored args.
+        args = tuple(argmerge) + tuple(self.args)
+        return self.get_type().apply(args, self.kwargs,
+                                     **dict(self.options, **execopts))
+
+    def apply_async(self, *argmerge, **execopts):
+        """Apply this task asynchronously."""
+        # For callbacks: extra args are prepended to the stored args.
+        args = tuple(argmerge) + tuple(self.args)
+        return self.get_type().apply_async(args, self.kwargs,
+                                           **dict(self.options, **execopts))
+
+    def get_type(self):
+        # For JSON serialization, the task class is lazily loaded,
+        # and not stored in the dict itself.
+        return registry.tasks[self.task]
+
+
+class TaskSet(UserList):
+    """A task containing several subtasks, making it possible
+    to track how many, or when all of the tasks have been completed.
+
+    :param tasks: A list of :class:`subtask` instances.
+
+    .. attribute:: total
+
+        Total number of subtasks in this task set.
+
+    Example::
+
+        >>> from djangofeeds.tasks import RefreshFeedTask
+        >>> from celery.task.sets import TaskSet, subtask
+        >>> urls = ("http://cnn.com/rss",
+        ...         "http://bbc.co.uk/rss",
+        ...         "http://xkcd.com/rss")
+        >>> subtasks = [RefreshFeedTask.subtask(kwargs={"feed_url": url})
+        ...                 for url in urls]
+        >>> taskset = TaskSet(tasks=subtasks)
+        >>> taskset_result = taskset.apply_async()
+        >>> list_of_return_values = taskset_result.join()
+
+    """
+    task = None # compat
+    task_name = None # compat
+
+    def __init__(self, task=None, tasks=None):
+        # Previously TaskSet only supported applying one kind of task.
+        # The signature then was TaskSet(task, arglist),
+        # so we convert the arguments to subtasks here.
+        if task is not None:
+            tasks = [subtask(task, *arglist) for arglist in tasks]
+            self.task = task
+            # ``task`` may be a class/instance or just a task name.
+            self.task_name = getattr(task, "name", task)
+
+        self.data = list(tasks)
+        self.total = len(self.tasks)
+
+    @with_connection
+    def apply_async(self, connection=None,
+            connect_timeout=conf.BROKER_CONNECTION_TIMEOUT):
+        """Run all tasks in the taskset.
+
+        Returns a :class:`celery.result.TaskSetResult` instance.
+
+        Example
+
+            >>> ts = TaskSet(tasks=(
+            ...         RefreshFeedTask.subtask(["http://foo.com/rss"]),
+            ...         RefreshFeedTask.subtask(["http://bar.com/rss"]),
+            ... ))
+            >>> result = ts.apply_async()
+            >>> result.taskset_id
+            "d2c9b261-8eff-4bfb-8459-1e1b72063514"
+            >>> result.subtask_ids
+            ["b4996460-d959-49c8-aeb9-39c530dcde25",
+            "598d2d18-ab86-45ca-8b4f-0779f5d6a3cb"]
+            >>> result.waiting()
+            True
+            >>> time.sleep(10)
+            >>> result.ready()
+            True
+            >>> result.successful()
+            True
+            >>> result.failed()
+            False
+            >>> result.join()
+            [True, True]
+
+        """
+        if conf.ALWAYS_EAGER:
+            return self.apply()
+
+        taskset_id = gen_unique_id()
+        publisher = TaskPublisher(connection=connection)
+        try:
+            results = [task.apply_async(taskset_id=taskset_id,
+                                        publisher=publisher)
+                            for task in self.tasks]
+        finally:
+            publisher.close()
+
+        return TaskSetResult(taskset_id, results)
+
+    def apply(self):
+        """Applies the taskset locally."""
+        taskset_id = gen_unique_id()
+
+        # This will be filled with EagerResults.
+        return TaskSetResult(taskset_id, [task.apply(taskset_id=taskset_id)
+                                            for task in self.tasks])
+
+    @property
+    def tasks(self):
+        return self.data
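
Because ``apply``/``apply_async`` prepend extra positional arguments to the stored ones, a subtask also works as a callback that receives the previous result first. A small sketch (``tasks.add`` is the same illustrative task name used in the subtask docstring above):

    >>> from celery.task.sets import subtask

    >>> add_two = subtask("tasks.add", args=(2, ))
    >>> add_two.apply_async(8)      # extra arg prepended, roughly add(8, 2)

    >>> # subtasks round-trip through plain dicts (e.g. for JSON transport):
    >>> subtask(dict(add_two)) == add_two
    True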

+ 5 - 5
celery/tests/test_backends/test_base.py

@@ -2,10 +2,10 @@ import sys
 import types
 import unittest2 as unittest
 
-from billiard.serialization import subclass_exception
-from billiard.serialization import find_nearest_pickleable_exception as fnpe
-from billiard.serialization import UnpickleableExceptionWrapper
-from billiard.serialization import get_pickleable_exception as gpe
+from celery.serialization import subclass_exception
+from celery.serialization import find_nearest_pickleable_exception as fnpe
+from celery.serialization import UnpickleableExceptionWrapper
+from celery.serialization import get_pickleable_exception as gpe
 
 from celery import states
 from celery.backends.base import BaseBackend, KeyValueStoreBackend
@@ -28,7 +28,7 @@ class TestBaseBackendInterface(unittest.TestCase):
 
     def test_get_status(self):
         self.assertRaises(NotImplementedError,
-                b.is_successful, "SOMExx-N0Nex1stant-IDxx-")
+                b.get_status, "SOMExx-N0Nex1stant-IDxx-")
 
     def test_store_result(self):
         self.assertRaises(NotImplementedError,

+ 1 - 1
celery/tests/test_buckets.py

@@ -6,11 +6,11 @@ import time
 import unittest2 as unittest
 from itertools import chain, izip
 
-from billiard.utils.functional import curry
 
 from celery.task.base import Task
 from celery.utils import timeutils
 from celery.utils import gen_unique_id
+from celery.utils.functional import curry
 from celery.worker import buckets
 from celery.registry import TaskRegistry
 

+ 176 - 0
celery/tests/test_events_state.py

@@ -0,0 +1,176 @@
+import time
+import unittest2 as unittest
+
+from itertools import count
+
+from celery import states
+from celery.events import Event
+from celery.events.state import State, HEARTBEAT_EXPIRE
+from celery.utils import gen_unique_id
+
+
+class replay(object):
+
+    def __init__(self, state):
+        self.state = state
+        self.rewind()
+
+    def __iter__(self):
+        return self
+
+    def next(self):
+        try:
+            self.state.event(self.events[self.position()])
+        except IndexError:
+            raise StopIteration()
+
+    def rewind(self):
+        self.position = count(0).next
+        return self
+
+    def play(self):
+        for _ in self:
+            pass
+
+
+class ev_worker_online_offline(replay):
+    events = [
+        Event("worker-online", hostname="utest1"),
+        Event("worker-offline", hostname="utest1"),
+    ]
+
+
+class ev_worker_heartbeats(replay):
+    events = [
+        Event("worker-heartbeat", hostname="utest1",
+              timestamp=time.time() - HEARTBEAT_EXPIRE * 2),
+        Event("worker-heartbeat", hostname="utest1"),
+    ]
+
+
+class ev_task_states(replay):
+    uuid = gen_unique_id()
+    events = [
+        Event("task-received", uuid=uuid, name="task1",
+              args="(2, 2)", kwargs="{'foo': 'bar'}",
+              retries=0, eta=None, hostname="utest1"),
+        Event("task-started", uuid=uuid, hostname="utest1"),
+        Event("task-succeeded", uuid=uuid, result="4",
+              runtime=0.1234, hostname="utest1"),
+        Event("task-failed", uuid=uuid, exception="KeyError('foo')",
+              traceback="line 1 at main", hostname="utest1"),
+        Event("task-retried", uuid=uuid, exception="KeyError('bar')",
+              traceback="line 2 at main", hostname="utest1"),
+        Event("task-revoked", uuid=uuid, hostname="utest1"),
+    ]
+
+
+class ev_snapshot(replay):
+    events = [
+        Event("worker-online", hostname="utest1"),
+        Event("worker-online", hostname="utest2"),
+        Event("worker-online", hostname="utest3"),
+    ]
+    for i in range(20):
+        worker = not i % 2 and "utest2" or "utest1"
+        type = not i % 2 and "task2" or "task1"
+        events.append(Event("task-received", name=type,
+                      uuid=gen_unique_id(), hostname=worker))
+
+
+class test_State(unittest.TestCase):
+
+    def test_worker_online_offline(self):
+        r = ev_worker_online_offline(State())
+        r.next()
+        self.assertTrue(r.state.alive_workers())
+        self.assertTrue(r.state.workers["utest1"].alive)
+        r.play()
+        self.assertFalse(r.state.alive_workers())
+        self.assertFalse(r.state.workers["utest1"].alive)
+
+    def test_worker_heartbeat_expire(self):
+        r = ev_worker_heartbeats(State())
+        r.next()
+        self.assertFalse(r.state.alive_workers())
+        self.assertFalse(r.state.workers["utest1"].alive)
+        r.play()
+        self.assertTrue(r.state.alive_workers())
+        self.assertTrue(r.state.workers["utest1"].alive)
+
+    def test_task_states(self):
+        r = ev_task_states(State())
+
+        # RECEIVED
+        r.next()
+        self.assertTrue(r.uuid in r.state.tasks)
+        task = r.state.tasks[r.uuid]
+        self.assertEqual(task.state, "RECEIVED")
+        self.assertTrue(task.received)
+        self.assertEqual(task.timestamp, task.received)
+        self.assertEqual(task.worker.hostname, "utest1")
+
+        # STARTED
+        r.next()
+        self.assertTrue(r.state.workers["utest1"].alive,
+                "any task event adds worker heartbeat")
+        self.assertEqual(task.state, states.STARTED)
+        self.assertTrue(task.started)
+        self.assertEqual(task.timestamp, task.started)
+        self.assertEqual(task.worker.hostname, "utest1")
+
+        # SUCCESS
+        r.next()
+        self.assertEqual(task.state, states.SUCCESS)
+        self.assertTrue(task.succeeded)
+        self.assertEqual(task.timestamp, task.succeeded)
+        self.assertEqual(task.worker.hostname, "utest1")
+        self.assertEqual(task.result, "4")
+        self.assertEqual(task.runtime, 0.1234)
+
+        # FAILURE
+        r.next()
+        self.assertEqual(task.state, states.FAILURE)
+        self.assertTrue(task.failed)
+        self.assertEqual(task.timestamp, task.failed)
+        self.assertEqual(task.worker.hostname, "utest1")
+        self.assertEqual(task.exception, "KeyError('foo')")
+        self.assertEqual(task.traceback, "line 1 at main")
+
+        # RETRY
+        r.next()
+        self.assertEqual(task.state, states.RETRY)
+        self.assertTrue(task.retried)
+        self.assertEqual(task.timestamp, task.retried)
+        self.assertEqual(task.worker.hostname, "utest1")
+        self.assertEqual(task.exception, "KeyError('bar')")
+        self.assertEqual(task.traceback, "line 2 at main")
+
+        # REVOKED
+        r.next()
+        self.assertEqual(task.state, states.REVOKED)
+        self.assertTrue(task.revoked)
+        self.assertEqual(task.timestamp, task.revoked)
+        self.assertEqual(task.worker.hostname, "utest1")
+
+    def test_tasks_by_timestamp(self):
+        r = ev_snapshot(State())
+        r.play()
+        self.assertEqual(len(r.state.tasks_by_timestamp()), 20)
+
+    def test_tasks_by_type(self):
+        r = ev_snapshot(State())
+        r.play()
+        self.assertEqual(len(r.state.tasks_by_type("task1")), 10)
+        self.assertEqual(len(r.state.tasks_by_type("task2")), 10)
+
+    def test_alive_workers(self):
+        r = ev_snapshot(State())
+        r.play()
+        self.assertEqual(len(r.state.alive_workers()), 3)
+
+    def test_tasks_by_worker(self):
+        r = ev_snapshot(State())
+        r.play()
+        self.assertEqual(len(r.state.tasks_by_worker("utest1")), 10)
+        self.assertEqual(len(r.state.tasks_by_worker("utest2")), 10)

+ 1 - 1
celery/tests/test_pickle.py

@@ -1,6 +1,6 @@
 import unittest2 as unittest
 
-from billiard.serialization import pickle
+from celery.serialization import pickle
 
 
 class RegularException(Exception):

+ 5 - 4
celery/tests/test_pool.py

@@ -1,10 +1,11 @@
-import unittest2 as unittest
+import sys
+import time
 import logging
 import itertools
-import time
-from celery.worker.pool import TaskPool
+import unittest2 as unittest
+
+from celery.concurrency.processes import TaskPool
 from celery.datastructures import ExceptionInfo
-import sys
 
 
 def do_something(i):

+ 1 - 1
celery/tests/test_routes.py

@@ -1,10 +1,10 @@
 import unittest2 as unittest
 
-from billiard.utils.functional import wraps
 
 from celery import conf
 from celery import routes
 from celery.utils import gen_unique_id
+from celery.utils.functional import wraps
 from celery.exceptions import RouteNotFound
 
 

+ 3 - 3
celery/tests/test_serialization.py

@@ -7,10 +7,10 @@ from celery.tests.utils import execute_context, mask_modules
 class TestAAPickle(unittest.TestCase):
 
     def test_no_cpickle(self):
-        prev = sys.modules.pop("billiard.serialization")
+        prev = sys.modules.pop("celery.serialization")
         try:
             def with_cPickle_masked(_val):
-                from billiard.serialization import pickle
+                from celery.serialization import pickle
                 import pickle as orig_pickle
                 self.assertIs(pickle.dumps, orig_pickle.dumps)
 
@@ -18,4 +18,4 @@ class TestAAPickle(unittest.TestCase):
             execute_context(context, with_cPickle_masked)
 
         finally:
-            sys.modules["billiard.serialization"] = prev
+            sys.modules["celery.serialization"] = prev

+ 60 - 53
celery/tests/test_task.py

@@ -4,7 +4,6 @@ from datetime import datetime, timedelta
 
 from pyparsing import ParseException
 
-from billiard.utils.functional import wraps
 
 from celery import conf
 from celery import task
@@ -12,6 +11,7 @@ from celery import messaging
 from celery.task.schedules import crontab, crontab_parser
 from celery.utils import timeutils
 from celery.utils import gen_unique_id
+from celery.utils.functional import wraps
 from celery.result import EagerResult
 from celery.execute import send_task
 from celery.backends import default_backend
@@ -19,6 +19,9 @@ from celery.decorators import task as task_dec
 from celery.exceptions import RetryTaskError
 from celery.worker.listener import parse_iso8601
 
+from celery.tests.utils import with_eager_tasks
+
+
 def return_True(*args, **kwargs):
     # Task run functions can't be closures/lambdas, as they're pickled.
     return True
@@ -216,36 +219,28 @@ class TestCeleryTasks(unittest.TestCase):
         self.assertEqual(result.backend, RetryTask.backend)
         self.assertEqual(result.task_id, task_id)
 
+    @with_eager_tasks
     def test_ping(self):
-        from celery import conf
-        conf.ALWAYS_EAGER = True
         self.assertEqual(task.ping(), 'pong')
-        conf.ALWAYS_EAGER = False
 
+    @with_eager_tasks
     def test_execute_remote(self):
-        from celery import conf
-        conf.ALWAYS_EAGER = True
         self.assertEqual(task.execute_remote(return_True, ["foo"]).get(),
-                          True)
-        conf.ALWAYS_EAGER = False
+                         True)
 
+    @with_eager_tasks
     def test_dmap(self):
-        from celery import conf
         import operator
-        conf.ALWAYS_EAGER = True
         res = task.dmap(operator.add, zip(xrange(10), xrange(10)))
         self.assertEqual(sum(res), sum(operator.add(x, x)
-                                        for x in xrange(10)))
-        conf.ALWAYS_EAGER = False
+                                    for x in xrange(10)))
 
+    @with_eager_tasks
     def test_dmap_async(self):
-        from celery import conf
         import operator
-        conf.ALWAYS_EAGER = True
         res = task.dmap_async(operator.add, zip(xrange(10), xrange(10)))
         self.assertEqual(sum(res.get()), sum(operator.add(x, x)
-                                                for x in xrange(10)))
-        conf.ALWAYS_EAGER = False
+                                            for x in xrange(10)))
 
     def assertNextTaskDataEqual(self, consumer, presult, task_name,
             test_eta=False, **kwargs):
@@ -300,9 +295,9 @@ class TestCeleryTasks(unittest.TestCase):
         self.assertNextTaskDataEqual(consumer, presult, t1.name)
 
         # With arguments.
-        presult2 = t1.apply_async(kwargs=dict(name="George Constanza"))
+        presult2 = t1.apply_async(kwargs=dict(name="George Costanza"))
         self.assertNextTaskDataEqual(consumer, presult2, t1.name,
-                name="George Constanza")
+                name="George Costanza")
 
         # send_task
         sresult = send_task(t1.name, kwargs=dict(name="Elaine M. Benes"))
@@ -310,16 +305,16 @@ class TestCeleryTasks(unittest.TestCase):
                 name="Elaine M. Benes")
 
         # With eta.
-        presult2 = task.apply_async(t1, kwargs=dict(name="George Constanza"),
+        presult2 = task.apply_async(t1, kwargs=dict(name="George Costanza"),
                                     eta=datetime.now() + timedelta(days=1))
         self.assertNextTaskDataEqual(consumer, presult2, t1.name,
-                name="George Constanza", test_eta=True)
+                name="George Costanza", test_eta=True)
 
         # With countdown.
-        presult2 = task.apply_async(t1, kwargs=dict(name="George Constanza"),
+        presult2 = task.apply_async(t1, kwargs=dict(name="George Costanza"),
                                     countdown=10)
         self.assertNextTaskDataEqual(consumer, presult2, t1.name,
-                name="George Constanza", test_eta=True)
+                name="George Costanza", test_eta=True)
 
         # Discarding all tasks.
         consumer.discard_all()
@@ -355,16 +350,13 @@ class TestCeleryTasks(unittest.TestCase):
 
 class TestTaskSet(unittest.TestCase):
 
+    @with_eager_tasks
     def test_function_taskset(self):
-        from celery import conf
-        conf.ALWAYS_EAGER = True
         ts = task.TaskSet(return_True_task.name, [
-            ([1], {}), [[2], {}], [[3], {}], [[4], {}], [[5], {}]])
+              ([1], {}), [[2], {}], [[3], {}], [[4], {}], [[5], {}]])
         res = ts.apply_async()
         self.assertListEqual(res.join(), [True, True, True, True, True])
 
-        conf.ALWAYS_EAGER = False
-
     def test_counter_taskset(self):
         IncrementCounterTask.count = 0
         ts = task.TaskSet(IncrementCounterTask, [
@@ -475,7 +467,8 @@ class TestPeriodicTask(unittest.TestCase):
     def test_is_due_not_due(self):
         due, remaining = MyPeriodic().is_due(datetime.now())
         self.assertFalse(due)
-        # TODO: This assertion may fail if executed in the first minute of an hour
+        # TODO This assertion may fail if executed in the
+        # first minute of an hour
         self.assertGreater(remaining, 60)
 
     def test_is_due(self):
@@ -532,25 +525,35 @@ class test_crontab_parser(unittest.TestCase):
         self.assertEquals(crontab_parser(7).parse('*'), set(range(7)))
 
     def test_parse_range(self):
-        self.assertEquals(crontab_parser(60).parse('1-10'), set(range(1,10+1)))
-        self.assertEquals(crontab_parser(24).parse('0-20'), set(range(0,20+1)))
-        self.assertEquals(crontab_parser().parse('2-10'), set(range(2,10+1)))
+        self.assertEquals(crontab_parser(60).parse('1-10'),
+                          set(range(1, 10 + 1)))
+        self.assertEquals(crontab_parser(24).parse('0-20'),
+                          set(range(0, 20 + 1)))
+        self.assertEquals(crontab_parser().parse('2-10'),
+                          set(range(2, 10 + 1)))
 
     def test_parse_groups(self):
-        self.assertEquals(crontab_parser().parse('1,2,3,4'), set([1,2,3,4]))
-        self.assertEquals(crontab_parser().parse('0,15,30,45'), set([0,15,30,45]))
+        self.assertEquals(crontab_parser().parse('1,2,3,4'),
+                          set([1, 2, 3, 4]))
+        self.assertEquals(crontab_parser().parse('0,15,30,45'),
+                          set([0, 15, 30, 45]))
 
     def test_parse_steps(self):
-        self.assertEquals(crontab_parser(8).parse('*/2'), set([0,2,4,6]))
-        self.assertEquals(crontab_parser().parse('*/2'), set([ i*2 for i in xrange(30) ]))
-        self.assertEquals(crontab_parser().parse('*/3'), set([ i*3 for i in xrange(20) ]))
+        self.assertEquals(crontab_parser(8).parse('*/2'),
+                          set([0, 2, 4, 6]))
+        self.assertEquals(crontab_parser().parse('*/2'),
+                          set(i * 2 for i in xrange(30)))
+        self.assertEquals(crontab_parser().parse('*/3'),
+                          set(i * 3 for i in xrange(20)))
 
     def test_parse_composite(self):
-        self.assertEquals(crontab_parser(8).parse('*/2'), set([0,2,4,6]))
+        self.assertEquals(crontab_parser(8).parse('*/2'), set([0, 2, 4, 6]))
         self.assertEquals(crontab_parser().parse('2-9/5'), set([5]))
-        self.assertEquals(crontab_parser().parse('2-10/5'), set([5,10]))
-        self.assertEquals(crontab_parser().parse('2-11/5,3'), set([3,5,10]))
-        self.assertEquals(crontab_parser().parse('2-4/3,*/5,0-21/4'), set([0,3,4,5,8,10,12,15,16,20,25,30,35,40,45,50,55]))
+        self.assertEquals(crontab_parser().parse('2-10/5'), set([5, 10]))
+        self.assertEquals(crontab_parser().parse('2-11/5,3'), set([3, 5, 10]))
+        self.assertEquals(crontab_parser().parse('2-4/3,*/5,0-21/4'),
+                set([0, 3, 4, 5, 8, 10, 12, 15, 16,
+                    20, 25, 30, 35, 40, 45, 50, 55]))
 
     def test_parse_errors_on_empty_string(self):
         self.assertRaises(ParseException, crontab_parser(60).parse, '')
@@ -584,10 +587,10 @@ class test_crontab_is_due(unittest.TestCase):
         self.assertEquals(c.minute, set([30]))
         c = crontab(minute='30')
         self.assertEquals(c.minute, set([30]))
-        c = crontab(minute=(30,40,50))
-        self.assertEquals(c.minute, set([30,40,50]))
-        c = crontab(minute=set([30,40,50]))
-        self.assertEquals(c.minute, set([30,40,50]))
+        c = crontab(minute=(30, 40, 50))
+        self.assertEquals(c.minute, set([30, 40, 50]))
+        c = crontab(minute=set([30, 40, 50]))
+        self.assertEquals(c.minute, set([30, 40, 50]))
 
     def test_crontab_spec_invalid_minute(self):
         self.assertRaises(ValueError, crontab, minute=60)
@@ -598,8 +601,8 @@ class test_crontab_is_due(unittest.TestCase):
         self.assertEquals(c.hour, set([6]))
         c = crontab(hour='5')
         self.assertEquals(c.hour, set([5]))
-        c = crontab(hour=(4,8,12))
-        self.assertEquals(c.hour, set([4,8,12]))
+        c = crontab(hour=(4, 8, 12))
+        self.assertEquals(c.hour, set([4, 8, 12]))
 
     def test_crontab_spec_invalid_hour(self):
         self.assertRaises(ValueError, crontab, hour=24)
@@ -613,11 +616,11 @@ class test_crontab_is_due(unittest.TestCase):
         c = crontab(day_of_week='fri')
         self.assertEquals(c.day_of_week, set([5]))
         c = crontab(day_of_week='tuesday,sunday,fri')
-        self.assertEquals(c.day_of_week, set([0,2,5]))
+        self.assertEquals(c.day_of_week, set([0, 2, 5]))
         c = crontab(day_of_week='mon-fri')
-        self.assertEquals(c.day_of_week, set([1,2,3,4,5]))
+        self.assertEquals(c.day_of_week, set([1, 2, 3, 4, 5]))
         c = crontab(day_of_week='*/2')
-        self.assertEquals(c.day_of_week, set([0,2,4,6]))
+        self.assertEquals(c.day_of_week, set([0, 2, 4, 6]))
 
     def test_crontab_spec_invalid_dow(self):
         self.assertRaises(ValueError, crontab, day_of_week='fooday-barday')
@@ -675,25 +678,29 @@ class test_crontab_is_due(unittest.TestCase):
 
     @patch_crontab_nowfun(QuarterlyPeriodic, datetime(2010, 5, 10, 10, 15))
     def test_first_quarter_execution_is_due(self):
-        due, remaining = QuarterlyPeriodic().is_due(datetime(2010, 5, 10, 6, 30))
+        due, remaining = QuarterlyPeriodic().is_due(
+                            datetime(2010, 5, 10, 6, 30))
         self.assertTrue(due)
         self.assertEquals(remaining, 1)
 
     @patch_crontab_nowfun(QuarterlyPeriodic, datetime(2010, 5, 10, 10, 30))
     def test_second_quarter_execution_is_due(self):
-        due, remaining = QuarterlyPeriodic().is_due(datetime(2010, 5, 10, 6, 30))
+        due, remaining = QuarterlyPeriodic().is_due(
+                            datetime(2010, 5, 10, 6, 30))
         self.assertTrue(due)
         self.assertEquals(remaining, 1)
 
     @patch_crontab_nowfun(QuarterlyPeriodic, datetime(2010, 5, 10, 10, 14))
     def test_first_quarter_execution_is_not_due(self):
-        due, remaining = QuarterlyPeriodic().is_due(datetime(2010, 5, 10, 6, 30))
+        due, remaining = QuarterlyPeriodic().is_due(
+                            datetime(2010, 5, 10, 6, 30))
         self.assertFalse(due)
         self.assertEquals(remaining, 1)
 
     @patch_crontab_nowfun(QuarterlyPeriodic, datetime(2010, 5, 10, 10, 29))
     def test_second_quarter_execution_is_not_due(self):
-        due, remaining = QuarterlyPeriodic().is_due(datetime(2010, 5, 10, 6, 30))
+        due, remaining = QuarterlyPeriodic().is_due(
+                            datetime(2010, 5, 10, 6, 30))
         self.assertFalse(due)
         self.assertEquals(remaining, 1)
 

+ 2 - 3
celery/tests/test_task_builtins.py

@@ -1,9 +1,8 @@
 import unittest2 as unittest
 
-from billiard.serialization import pickle
-
-from celery.task.base import ExecuteRemoteTask
+from celery.task.builtins import ExecuteRemoteTask
 from celery.task.builtins import PingTask, DeleteExpiredTaskMetaTask
+from celery.serialization import pickle
 
 
 def some_func(i):

+ 1 - 1
celery/tests/test_task_http.py

@@ -13,10 +13,10 @@ try:
 except ImportError:
     from StringIO import StringIO
 
-from billiard.utils.functional import wraps
 from anyjson import serialize
 
 from celery.task import http
+from celery.utils.functional import wraps
 
 from celery.tests.utils import eager_tasks, execute_context
 

+ 7 - 7
celery/tests/test_worker.py

@@ -5,17 +5,17 @@ from multiprocessing import get_logger
 
 from carrot.connection import BrokerConnection
 from carrot.backends.base import BaseMessage
-from billiard.serialization import pickle
 
 from celery import conf
 from celery.utils import gen_unique_id
 from celery.worker import WorkController
-from celery.worker.job import TaskWrapper
+from celery.worker.job import TaskRequest
 from celery.worker.buckets import FastQueue
 from celery.worker.listener import CarrotListener, QoS, RUN
 from celery.worker.scheduler import Scheduler
 from celery.decorators import task as task_dec
 from celery.decorators import periodic_task as periodic_task_dec
+from celery.serialization import pickle
 
 from celery.tests.utils import execute_context
 from celery.tests.compat import catch_warnings
@@ -243,7 +243,7 @@ class TestCarrotListener(unittest.TestCase):
         l.receive_message(m.decode(), m)
 
         in_bucket = self.ready_queue.get_nowait()
-        self.assertIsInstance(in_bucket, TaskWrapper)
+        self.assertIsInstance(in_bucket, TaskRequest)
         self.assertEqual(in_bucket.task_name, foo_task.name)
         self.assertEqual(in_bucket.execute(), 2 * 4 * 8)
         self.assertTrue(self.eta_schedule.empty())
@@ -325,7 +325,7 @@ class TestCarrotListener(unittest.TestCase):
         in_hold = self.eta_schedule.queue[0]
         self.assertEqual(len(in_hold), 4)
         eta, priority, task, on_accept = in_hold
-        self.assertIsInstance(task, TaskWrapper)
+        self.assertIsInstance(task, TaskRequest)
         self.assertTrue(callable(on_accept))
         self.assertEqual(task.task_name, foo_task.name)
         self.assertEqual(task.execute(), 2 * 4 * 8)
@@ -353,7 +353,7 @@ class TestWorkController(unittest.TestCase):
         backend = MockBackend()
         m = create_message(backend, task=foo_task.name, args=[4, 8, 10],
                            kwargs={})
-        task = TaskWrapper.from_message(m, m.decode())
+        task = TaskRequest.from_message(m, m.decode())
         worker.process_task(task)
         worker.pool.stop()
 
@@ -363,7 +363,7 @@ class TestWorkController(unittest.TestCase):
         backend = MockBackend()
         m = create_message(backend, task=foo_task.name, args=[4, 8, 10],
                            kwargs={})
-        task = TaskWrapper.from_message(m, m.decode())
+        task = TaskRequest.from_message(m, m.decode())
         worker.process_task(task)
         worker.pool.stop()
 
@@ -373,7 +373,7 @@ class TestWorkController(unittest.TestCase):
         backend = MockBackend()
         m = create_message(backend, task=foo_task.name, args=[4, 8, 10],
                            kwargs={})
-        task = TaskWrapper.from_message(m, m.decode())
+        task = TaskRequest.from_message(m, m.decode())
         worker.process_task(task)
         worker.pool.stop()
 

+ 2 - 2
celery/tests/test_worker_controllers.py

@@ -75,11 +75,11 @@ class TestMediator(unittest.TestCase):
             got["value"] = value.value
 
         m = Mediator(ready_queue, mycallback)
-        ready_queue.put(MockTask("George Constanza"))
+        ready_queue.put(MockTask("George Costanza"))
 
         m.on_iteration()
 
-        self.assertEqual(got["value"], "George Constanza")
+        self.assertEqual(got["value"], "George Costanza")
 
     def test_mediator_on_iteration_revoked(self):
         ready_queue = Queue()

+ 18 - 19
celery/tests/test_worker_job.py

@@ -12,8 +12,7 @@ from celery.log import setup_logger
 from celery.task.base import Task
 from celery.utils import gen_unique_id
 from celery.result import AsyncResult
-from celery.worker.job import WorkerTaskTrace, TaskWrapper
-from celery.worker.pool import TaskPool
+from celery.worker.job import WorkerTaskTrace, TaskRequest
 from celery.backends import default_backend
 from celery.exceptions import RetryTaskError, NotRegistered
 from celery.decorators import task as task_dec
@@ -102,14 +101,14 @@ class MockEventDispatcher(object):
         self.sent.append(event)
 
 
-class TestTaskWrapper(unittest.TestCase):
+class TestTaskRequest(unittest.TestCase):
 
     def test_task_wrapper_repr(self):
-        tw = TaskWrapper(mytask.name, gen_unique_id(), [1], {"f": "x"})
+        tw = TaskRequest(mytask.name, gen_unique_id(), [1], {"f": "x"})
         self.assertTrue(repr(tw))
 
     def test_send_event(self):
-        tw = TaskWrapper(mytask.name, gen_unique_id(), [1], {"f": "x"})
+        tw = TaskRequest(mytask.name, gen_unique_id(), [1], {"f": "x"})
         tw.eventer = MockEventDispatcher()
         tw.send_event("task-frobulated")
         self.assertIn("task-frobulated", tw.eventer.sent)
@@ -127,7 +126,7 @@ class TestTaskWrapper(unittest.TestCase):
         job.mail_admins = mock_mail_admins
         conf.CELERY_SEND_TASK_ERROR_EMAILS = True
         try:
-            tw = TaskWrapper(mytask.name, gen_unique_id(), [1], {"f": "x"})
+            tw = TaskRequest(mytask.name, gen_unique_id(), [1], {"f": "x"})
             try:
                 raise KeyError("foo")
             except KeyError:
@@ -206,14 +205,14 @@ class TestTaskWrapper(unittest.TestCase):
 
     def test_executed_bit(self):
         from celery.worker.job import AlreadyExecutedError
-        tw = TaskWrapper(mytask.name, gen_unique_id(), [], {})
+        tw = TaskRequest(mytask.name, gen_unique_id(), [], {})
         self.assertFalse(tw.executed)
         tw._set_executed_bit()
         self.assertTrue(tw.executed)
         self.assertRaises(AlreadyExecutedError, tw._set_executed_bit)
 
     def test_task_wrapper_mail_attrs(self):
-        tw = TaskWrapper(mytask.name, gen_unique_id(), [], {})
+        tw = TaskRequest(mytask.name, gen_unique_id(), [], {})
         x = tw.success_msg % {"name": tw.task_name,
                               "id": tw.task_id,
                               "return_value": 10}
@@ -235,8 +234,8 @@ class TestTaskWrapper(unittest.TestCase):
         m = BaseMessage(body=simplejson.dumps(body), backend="foo",
                         content_type="application/json",
                         content_encoding="utf-8")
-        tw = TaskWrapper.from_message(m, m.decode())
-        self.assertIsInstance(tw, TaskWrapper)
+        tw = TaskRequest.from_message(m, m.decode())
+        self.assertIsInstance(tw, TaskRequest)
         self.assertEqual(tw.task_name, body["task"])
         self.assertEqual(tw.task_id, body["id"])
         self.assertEqual(tw.args, body["args"])
@@ -251,12 +250,12 @@ class TestTaskWrapper(unittest.TestCase):
         m = BaseMessage(body=simplejson.dumps(body), backend="foo",
                         content_type="application/json",
                         content_encoding="utf-8")
-        self.assertRaises(NotRegistered, TaskWrapper.from_message,
+        self.assertRaises(NotRegistered, TaskRequest.from_message,
                           m, m.decode())
 
     def test_execute(self):
         tid = gen_unique_id()
-        tw = TaskWrapper(mytask.name, tid, [4], {"f": "x"})
+        tw = TaskRequest(mytask.name, tid, [4], {"f": "x"})
         self.assertEqual(tw.execute(), 256)
         meta = default_backend._get_task_meta_for(tid)
         self.assertEqual(meta["result"], 256)
@@ -264,7 +263,7 @@ class TestTaskWrapper(unittest.TestCase):
 
     def test_execute_success_no_kwargs(self):
         tid = gen_unique_id()
-        tw = TaskWrapper(mytask_no_kwargs.name, tid, [4], {})
+        tw = TaskRequest(mytask_no_kwargs.name, tid, [4], {})
         self.assertEqual(tw.execute(), 256)
         meta = default_backend._get_task_meta_for(tid)
         self.assertEqual(meta["result"], 256)
@@ -272,7 +271,7 @@ class TestTaskWrapper(unittest.TestCase):
 
     def test_execute_success_some_kwargs(self):
         tid = gen_unique_id()
-        tw = TaskWrapper(mytask_some_kwargs.name, tid, [4], {})
+        tw = TaskRequest(mytask_some_kwargs.name, tid, [4], {})
         self.assertEqual(tw.execute(logfile="foobaz.log"), 256)
         meta = default_backend._get_task_meta_for(tid)
         self.assertEqual(some_kwargs_scratchpad.get("logfile"), "foobaz.log")
@@ -281,7 +280,7 @@ class TestTaskWrapper(unittest.TestCase):
 
     def test_execute_ack(self):
         tid = gen_unique_id()
-        tw = TaskWrapper(mytask.name, tid, [4], {"f": "x"},
+        tw = TaskRequest(mytask.name, tid, [4], {"f": "x"},
                         on_ack=on_ack)
         self.assertEqual(tw.execute(), 256)
         meta = default_backend._get_task_meta_for(tid)
@@ -291,7 +290,7 @@ class TestTaskWrapper(unittest.TestCase):
 
     def test_execute_fail(self):
         tid = gen_unique_id()
-        tw = TaskWrapper(mytask_raising.name, tid, [4], {"f": "x"})
+        tw = TaskRequest(mytask_raising.name, tid, [4], {"f": "x"})
         self.assertIsInstance(tw.execute(), ExceptionInfo)
         meta = default_backend._get_task_meta_for(tid)
         self.assertEqual(meta["status"], states.FAILURE)
@@ -299,7 +298,7 @@ class TestTaskWrapper(unittest.TestCase):
 
     def test_execute_using_pool(self):
         tid = gen_unique_id()
-        tw = TaskWrapper(mytask.name, tid, [4], {"f": "x"})
+        tw = TaskRequest(mytask.name, tid, [4], {"f": "x"})
 
         class MockPool(object):
             target = None
@@ -326,7 +325,7 @@ class TestTaskWrapper(unittest.TestCase):
 
     def test_default_kwargs(self):
         tid = gen_unique_id()
-        tw = TaskWrapper(mytask.name, tid, [4], {"f": "x"})
+        tw = TaskRequest(mytask.name, tid, [4], {"f": "x"})
         self.assertDictEqual(
                 tw.extend_with_default_kwargs(10, "some_logfile"), {
                     "f": "x",
@@ -340,7 +339,7 @@ class TestTaskWrapper(unittest.TestCase):
 
     def _test_on_failure(self, exception):
         tid = gen_unique_id()
-        tw = TaskWrapper(mytask.name, tid, [4], {"f": "x"})
+        tw = TaskRequest(mytask.name, tid, [4], {"f": "x"})
         try:
             raise exception
         except Exception:

+ 15 - 1
celery/tests/utils.py

@@ -5,9 +5,10 @@ import sys
 import __builtin__
 from StringIO import StringIO
 
-from billiard.utils.functional import wraps
 from nose import SkipTest
 
+from celery.utils.functional import wraps
+
 
 class GeneratorContextManager(object):
     def __init__(self, gen):
@@ -78,6 +79,20 @@ def eager_tasks():
     conf.ALWAYS_EAGER = prev
 
 
+def with_eager_tasks(fun):
+
+    @wraps(fun)
+    def _inner(*args, **kwargs):
+        from celery import conf
+        prev = conf.ALWAYS_EAGER
+        conf.ALWAYS_EAGER = True
+        try:
+            return fun(*args, **kwargs)
+        finally:
+            conf.ALWAYS_EAGER = prev
+    return _inner
+
+
 def with_environ(env_name, env_value):
 
     def _envpatched(fun):

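A note on the new ``with_eager_tasks`` decorator (with the ``return _inner``
added above): it is the per-test-function counterpart of the existing
``eager_tasks`` context manager. A hedged usage sketch -- the ``add`` task and
the test case below are illustrative only, not part of this changeset::

    import unittest

    from celery.decorators import task
    from celery.tests.utils import with_eager_tasks


    @task
    def add(x, y):
        return x + y


    class TestEagerExecution(unittest.TestCase):

        @with_eager_tasks
        def test_add_runs_inline(self):
            # ALWAYS_EAGER is patched only for the duration of this test,
            # so delay() runs the task locally and returns an EagerResult.
            result = add.delay(2, 2)
            self.assertEqual(result.get(), 4)
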
+ 25 - 14
celery/utils/__init__.py

@@ -16,10 +16,10 @@ from inspect import getargspec
 from itertools import islice
 
 from carrot.utils import rpartition
-from billiard.utils.functional import curry
 
 from celery.utils.compat import all, any, defaultdict
 from celery.utils.timeutils import timedelta_seconds # was here before
+from celery.utils.functional import curry
 
 
 def noop(*args, **kwargs):
@@ -31,6 +31,17 @@ def noop(*args, **kwargs):
     pass
 
 
+def kwdict(kwargs):
+    """Make sure keyword argument keys are not unicode.
+
+    This should be fixed in newer Python versions;
+    see: http://bugs.python.org/issue4978.
+
+    """
+    return dict((key.encode("utf-8"), value)
+                    for key, value in kwargs.items())
+
+
 def first(predicate, iterable):
     """Returns the first element in ``iterable`` that ``predicate`` returns a
     ``True`` value for."""
@@ -78,13 +89,13 @@ def padlist(container, size, default=None):
 
     Examples:
 
-        >>> first, last, city = padlist(["George", "Constanza", "NYC"], 3)
-        ("George", "Constanza", "NYC")
-        >>> first, last, city = padlist(["George", "Constanza"], 3)
-        ("George", "Constanza", None)
-        >>> first, last, city, planet = padlist(["George", "Constanza",
+        >>> first, last, city = padlist(["George", "Costanza", "NYC"], 3)
+        ("George", "Costanza", "NYC")
+        >>> first, last, city = padlist(["George", "Costanza"], 3)
+        ("George", "Costanza", None)
+        >>> first, last, city, planet = padlist(["George", "Costanza",
                                                  "NYC"], 4, default="Earth")
-        ("George", "Constanza", "NYC", "Earth")
+        ("George", "Costanza", "NYC", "Earth")
 
     """
     return list(container)[:size] + [default] * (size - len(container))
@@ -207,23 +218,23 @@ def get_cls_by_name(name, aliases={}):
 
     Example::
 
-        celery.worker.pool.TaskPool
-                           ^- class name
+        celery.concurrency.processes.TaskPool
+                                    ^- class name
 
     If ``aliases`` is provided, a dict containing short name/long name
     mappings, the name is looked up in the aliases first.
 
     Examples:
 
-        >>> get_cls_by_name("celery.worker.pool.TaskPool")
-        <class 'celery.worker.pool.TaskPool'>
+        >>> get_cls_by_name("celery.concurrency.processes.TaskPool")
+        <class 'celery.concurrency.processes.TaskPool'>
 
         >>> get_cls_by_name("default", {
-        ...     "default": "celery.worker.pool.TaskPool"})
-        <class 'celery.worker.pool.TaskPool'>
+        ...     "default": "celery.concurrency.processes.TaskPool"})
+        <class 'celery.concurrency.processes.TaskPool'>
 
         # Does not try to look up non-string names.
-        >>> from celery.worker.pool import TaskPool
+        >>> from celery.concurrency.processes import TaskPool
         >>> get_cls_by_name(TaskPool) is TaskPool
         True
 

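The new ``kwdict`` helper exists because keyword arguments decoded from JSON
messages arrive with unicode keys, and some Python 2.x releases refuse unicode
keys in ``**kwargs`` (see the referenced bug). A quick doctest-style sketch of
the conversion::

    >>> from celery.utils import kwdict

    >>> def greet(name="world"):
    ...     return "Hello, %s!" % name

    >>> message_kwargs = {u"name": u"Jerry"}     # as decoded from a JSON payload
    >>> print(greet(**kwdict(message_kwargs)))   # keys re-encoded to byte strings
    Hello, Jerry!
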
+ 135 - 0
celery/utils/functional.py

@@ -0,0 +1,135 @@
+"""Functional utilities for Python 2.4 compatibility."""
+# License for code in this file that was taken from Python 2.5.
+
+# PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
+# --------------------------------------------
+#
+# 1. This LICENSE AGREEMENT is between the Python Software Foundation
+# ("PSF"), and the Individual or Organization ("Licensee") accessing and
+# otherwise using this software ("Python") in source or binary form and
+# its associated documentation.
+#
+# 2. Subject to the terms and conditions of this License Agreement, PSF
+# hereby grants Licensee a nonexclusive, royalty-free, world-wide
+# license to reproduce, analyze, test, perform and/or display publicly,
+# prepare derivative works, distribute, and otherwise use Python
+# alone or in any derivative version, provided, however, that PSF's
+# License Agreement and PSF's notice of copyright, i.e., "Copyright (c)
+# 2001, 2002, 2003, 2004, 2005, 2006, 2007 Python Software Foundation;
+# All Rights Reserved" are retained in Python alone or in any derivative
+# version prepared by Licensee.
+#
+# 3. In the event Licensee prepares a derivative work that is based on
+# or incorporates Python or any part thereof, and wants to make
+# the derivative work available to others as provided herein, then
+# Licensee hereby agrees to include in any such work a brief summary of
+# the changes made to Python.
+#
+# 4. PSF is making Python available to Licensee on an "AS IS"
+# basis.  PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
+# IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
+# DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
+# FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
+# INFRINGE ANY THIRD PARTY RIGHTS.
+#
+# 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
+# FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
+# A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
+# OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
+#
+# 6. This License Agreement will automatically terminate upon a material
+# breach of its terms and conditions.
+#
+# 7. Nothing in this License Agreement shall be deemed to create any
+# relationship of agency, partnership, or joint venture between PSF and
+# Licensee.  This License Agreement does not grant permission to use PSF
+# trademarks or trade name in a trademark sense to endorse or promote
+# products or services of Licensee, or any third party.
+#
+# 8. By copying, installing or otherwise using Python, Licensee
+# agrees to be bound by the terms and conditions of this License
+# Agreement.
+
+### Begin from Python 2.5 functools.py ########################################
+
+# Summary of changes made to the Python 2.5 code below:
+#   * swapped ``partial`` for ``curry`` to maintain backwards-compatibility
+#     in Django.
+#   * Wrapped the ``setattr`` call in ``update_wrapper`` with a try-except
+#     block to make it compatible with Python 2.3, which doesn't allow
+#     assigning to ``__name__``.
+
+# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007 Python Software
+# Foundation. All Rights Reserved.
+
+###############################################################################
+
+# update_wrapper() and wraps() are tools to help write
+# wrapper functions that can handle naive introspection
+
+def _compat_curry(fun, *args, **kwargs):
+    """New function with partial application of the given arguments
+    and keywords."""
+
+    def _curried(*addargs, **addkwargs):
+        return fun(*(args+addargs), **dict(kwargs, **addkwargs))
+    return _curried
+
+
+try:
+    from functools import partial as curry
+except ImportError:
+    curry = _compat_curry
+
+WRAPPER_ASSIGNMENTS = ('__module__', '__name__', '__doc__')
+WRAPPER_UPDATES = ('__dict__',)
+def _compat_update_wrapper(wrapper, wrapped, assigned=WRAPPER_ASSIGNMENTS,
+        updated=WRAPPER_UPDATES):
+    """Update a wrapper function to look like the wrapped function
+
+       wrapper is the function to be updated
+       wrapped is the original function
+       assigned is a tuple naming the attributes assigned directly
+       from the wrapped function to the wrapper function (defaults to
+       functools.WRAPPER_ASSIGNMENTS)
+       updated is a tuple naming the attributes off the wrapper that
+       are updated with the corresponding attribute from the wrapped
+       function (defaults to functools.WRAPPER_UPDATES)
+
+    """
+    for attr in assigned:
+        try:
+            setattr(wrapper, attr, getattr(wrapped, attr))
+        except TypeError: # Python 2.3 doesn't allow assigning to __name__.
+            pass
+    for attr in updated:
+        getattr(wrapper, attr).update(getattr(wrapped, attr))
+    # Return the wrapper so this can be used as a decorator via curry()
+    return wrapper
+
+try:
+    from functools import update_wrapper
+except ImportError:
+    update_wrapper = _compat_update_wrapper
+
+
+def _compat_wraps(wrapped, assigned=WRAPPER_ASSIGNMENTS,
+        updated=WRAPPER_UPDATES):
+    """Decorator factory to apply update_wrapper() to a wrapper function
+
+    Returns a decorator that invokes update_wrapper() with the decorated
+    function as the wrapper argument and the arguments to wraps() as the
+    remaining arguments. Default arguments are as for update_wrapper().
+    This is a convenience function to simplify applying curry() to
+    update_wrapper().
+
+    """
+    return curry(update_wrapper, wrapped=wrapped,
+                 assigned=assigned, updated=updated)
+
+try:
+    from functools import wraps
+except ImportError:
+    wraps = _compat_wraps
+
+### End from Python 2.5 functools.py ##########################################

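The module above vendors ``curry`` (an alias for ``functools.partial``) and
``wraps`` so that Python 2.4, which has no ``functools``, keeps working, while
newer interpreters transparently get the stdlib versions. A small sketch of
what the exported names give you::

    >>> from celery.utils.functional import curry, wraps

    >>> def debugged(fun):
    ...     @wraps(fun)
    ...     def _inner(*args, **kwargs):
    ...         print("calling %s" % fun.__name__)
    ...         return fun(*args, **kwargs)
    ...     return _inner

    >>> @debugged
    ... def add(x, y):
    ...     return x + y

    >>> add.__name__                  # metadata copied over by wraps()
    'add'

    >>> add_two = curry(add, 2)       # behaves like functools.partial
    >>> add_two(3)
    calling add
    5
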
+ 11 - 12
celery/utils/mail.py

@@ -1,22 +1,21 @@
 from mailer import Message, Mailer
 
-from celery.loaders import load_settings
-
-
 def mail_admins(subject, message, fail_silently=False):
-    """Send a message to the admins in settings.ADMINS."""
-    settings = load_settings()
-    if not settings.ADMINS:
+    """Send a message to the admins in conf.ADMINS."""
+    from celery import conf
+
+    if not conf.ADMINS:
         return
-    to = ", ".join(admin_email for _, admin_email in settings.ADMINS)
-    username = settings.EMAIL_HOST_USER
-    password = settings.EMAIL_HOST_PASSWORD
 
-    message = Message(From=settings.SERVER_EMAIL, To=to,
-                      Subject=subject, Message=message)
+    to = ", ".join(admin_email for _, admin_email in conf.ADMINS)
+    username = conf.EMAIL_HOST_USER
+    password = conf.EMAIL_HOST_PASSWORD
+
+    message = Message(From=conf.SERVER_EMAIL, To=to,
+                      Subject=subject, Body=message)
 
     try:
-        mailer = Mailer(settings.EMAIL_HOST, settings.EMAIL_PORT)
+        mailer = Mailer(conf.EMAIL_HOST, conf.EMAIL_PORT)
         username and mailer.login(username, password)
         mailer.send(message)
     except Exception:

+ 2 - 20
celery/utils/timeutils.py

@@ -29,20 +29,6 @@ def delta_resolution(dt, delta):
     will be rounded to the nearest hour, and so on until seconds
     which will just return the original datetime.
 
-    Examples::
-
-        >>> now = datetime.now()
-        >>> now
-        datetime.datetime(2010, 3, 30, 11, 50, 58, 41065)
-        >>> delta_resolution(now, timedelta(days=2))
-        datetime.datetime(2010, 3, 30, 0, 0)
-        >>> delta_resolution(now, timedelta(hours=2))
-        datetime.datetime(2010, 3, 30, 11, 0)
-        >>> delta_resolution(now, timedelta(minutes=2))
-        datetime.datetime(2010, 3, 30, 11, 50)
-        >>> delta_resolution(now, timedelta(seconds=2))
-        datetime.datetime(2010, 3, 30, 11, 50, 58, 41065)
-
     """
     delta = timedelta_seconds(delta)
 
@@ -107,12 +93,8 @@ def rate(rate):
 def weekday(name):
     """Return the position of a weekday (0 - 7, where 0 is Sunday).
 
-        >>> weekday("sunday")
-        0
-        >>> weekday("sun")
-        0
-        >>> weekday("mon")
-        1
+        >>> weekday("sunday"), weekday("sun"), weekday("mon")
+        (0, 0, 1)
 
     """
     abbreviation = name[0:3].lower()

+ 30 - 23
celery/worker/__init__.py

@@ -3,7 +3,6 @@
 The Multiprocessing Worker Server
 
 """
-import time
 import socket
 import logging
 import traceback
@@ -20,8 +19,17 @@ from celery.utils import noop, instantiate
 from celery.worker.buckets import TaskBucket, FastQueue
 from celery.worker.scheduler import Scheduler
 
+RUN = 0x1
+CLOSE = 0x2
+TERMINATE = 0x3
+
 
 def process_initializer():
+    """Initializes the process so it can be used to process tasks.
+
+    Used for multiprocessing environments.
+
+    """
     # There seems to be a bug in multiprocessing (backport?)
     # when detached, where the worker gets EOFErrors from time to time
     # and the logger is left from the parent process causing a crash.
@@ -49,7 +57,6 @@ class WorkController(object):
     :param embed_clockservice: see :attr:`run_clockservice`.
     :param send_events: see :attr:`send_events`.
 
-
     .. attribute:: concurrency
 
         The number of simultaneous processes doing work (default:
@@ -182,7 +189,7 @@ class WorkController(object):
 
     def start(self):
         """Starts the workers main loop."""
-        self._state = "RUN"
+        self._state = RUN
 
         try:
             for i, component in enumerate(self.components):
@@ -192,6 +199,7 @@ class WorkController(object):
                 component.start()
         finally:
             self.stop()
+
     def process_task(self, wrapper):
         """Process task by sending it to the pool of workers."""
         try:
@@ -205,32 +213,31 @@ class WorkController(object):
             self.stop()
 
     def stop(self):
-        """Gracefully shutdown the worker server."""
-        if self._state != "RUN":
-            return
-        if self._running != len(self.components):
-            return
+        """Graceful shutdown of the worker server."""
+        self._shutdown(warm=True)
 
-        signals.worker_shutdown.send(sender=self)
-        for component in reversed(self.components):
-            self.logger.debug("Stopping thread %s..." % (
-                              component.__class__.__name__))
-            component.stop()
+    def terminate(self):
+        """Not so graceful shutdown of the worker server."""
+        self._shutdown(warm=False)
 
-        self.listener.close_connection()
-        self._state = "STOP"
+    def _shutdown(self, warm=True):
+        """Gracefully shutdown the worker server."""
+        what = (warm and "stopping" or "terminating").capitalize()
 
-    def terminate(self):
-        """Not so gracefully shutdown the worker server."""
-        if self._state != "RUN":
+        if self._state != RUN or self._running != len(self.components):
+            # Not fully started, can safely exit.
             return
 
+        self._state = CLOSE
         signals.worker_shutdown.send(sender=self)
+
         for component in reversed(self.components):
-            self.logger.debug("Terminating thread %s..." % (
-                              component.__class__.__name__))
-            terminate = getattr(component, "terminate", component.stop)
-            terminate()
+            self.logger.debug("%s thread %s..." % (
+                    what, component.__class__.__name__))
+            stop = component.stop
+            if not warm:
+                stop = getattr(component, "terminate", stop)
+            stop()
 
         self.listener.close_connection()
-        self._state = "STOP"
+        self._state = TERMINATE

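The warm and cold shutdown paths now share a single ``_shutdown`` helper:
components that define ``terminate()`` get it on a cold shutdown, everything
else falls back to ``stop()``. A standalone sketch of that fallback (the
``Mediator``/``Pool`` classes here are placeholders, not the real components)::

    def shutdown_component(component, warm=True):
        # Prefer terminate() for a cold shutdown, but fall back to stop()
        # when the component does not implement it.
        stop = component.stop
        if not warm:
            stop = getattr(component, "terminate", stop)
        stop()


    class Mediator(object):               # placeholder: only supports stop()
        def stop(self):
            print("Mediator: warm stop")


    class Pool(object):                   # placeholder: supports both
        def stop(self):
            print("Pool: warm stop")

        def terminate(self):
            print("Pool: terminated")


    for component in (Mediator(), Pool()):
        shutdown_component(component, warm=False)
    # -> "Mediator: warm stop" (fallback), then "Pool: terminated"
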
+ 27 - 19
celery/worker/buckets.py

@@ -1,11 +1,14 @@
 import time
-from Queue import Queue, Empty as QueueEmpty
+
+from collections import deque
 from itertools import chain
+from Queue import Queue, Empty as QueueEmpty
 
 from celery.utils import all
 from celery.utils import timeutils
 from celery.utils.compat import izip_longest
 
+
 class RateLimitExceeded(Exception):
     """The token buckets rate limit has been exceeded."""
 
@@ -13,7 +16,8 @@ class RateLimitExceeded(Exception):
 class TaskBucket(object):
     """This is a collection of token buckets, each task type having
     its own token bucket. If the task type doesn't have a rate limit,
-    it will have a plain Queue object instead of a token bucket queue.
+    it will have a plain :class:`Queue` object instead of a
+    :class:`TokenBucketQueue`.
 
     The :meth:`put` operation forwards the task to its appropriate bucket,
     while the :meth:`get` operation iterates over the buckets and retrieves
@@ -36,30 +40,35 @@ class TaskBucket(object):
 
 
     """
-    min_wait = 0.0
 
     def __init__(self, task_registry):
         self.task_registry = task_registry
         self.buckets = {}
         self.init_with_registry()
-        self.immediate = Queue()
-
-    def put(self, job):
-        """Put a task into the appropiate bucket."""
-        if job.task_name not in self.buckets:
-            self.add_bucket_for_type(job.task_name)
-        self.buckets[job.task_name].put_nowait(job)
+        self.immediate = deque()
+
+    def put(self, request):
+        """Put a :class:`~celery.worker.job.TaskRequest` into
+        the appropriate bucket."""
+        if request.task_name not in self.buckets:
+            self.add_bucket_for_type(request.task_name)
+        self.buckets[request.task_name].put_nowait(request)
     put_nowait = put
 
+    def _get_immediate(self):
+        try:
+            return self.immediate.popleft()
+        except IndexError: # Empty
+            raise QueueEmpty()
+
     def _get(self):
         # If the first bucket is always returning items, we would never
         # get to fetch items from the other buckets. So we always iterate over
         # all the buckets and put any ready items into a queue called
         # "immediate". This queue is always checked for cached items first.
-        if self.immediate:
-            try:
-                return 0, self.immediate.get_nowait()
-            except QueueEmpty:
+        try:
+            return 0, self._get_immediate()
+        except QueueEmpty:
                 pass
 
         remaining_times = []
@@ -68,7 +77,7 @@ class TaskBucket(object):
             if not remaining:
                 try:
                     # Just put any ready items into the immediate queue.
-                    self.immediate.put_nowait(bucket.get_nowait())
+                    self.immediate.append(bucket.get_nowait())
                 except QueueEmpty:
                     pass
                 except RateLimitExceeded:
@@ -78,7 +87,7 @@ class TaskBucket(object):
 
         # Try the immediate queue again.
         try:
-            return 0, self.immediate.get_nowait()
+            return 0, self._get_immediate()
         except QueueEmpty:
             if not remaining_times:
                 # No items in any of the buckets.
@@ -238,8 +247,7 @@ class TokenBucketQueue(object):
         Also see :meth:`Queue.Queue.put`.
 
         """
-        put = block and self.queue.put or self.queue.put_nowait
-        put(item)
+        self.queue.put(item, block=block)
 
     def put_nowait(self, item):
         """Put an item into the queue without blocking.
@@ -264,7 +272,7 @@ class TokenBucketQueue(object):
         get = block and self.queue.get or self.queue.get_nowait
 
         if not self.can_consume(1):
-            raise RateLimitExceeded
+            raise RateLimitExceeded()
 
         return get()
 

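A toy illustration (not the celery classes) of the starvation-avoidance idea
described in ``TaskBucket._get`` above: ready items from every bucket are
drained into the shared ``immediate`` deque before anything is served, so a
bucket that always has work cannot shadow the others::

    from collections import deque

    # Two fake buckets; in the real code these are TokenBucketQueue/Queue
    # instances and only items allowed by the rate limit count as "ready".
    buckets = {"tasks.add": deque([1, 2, 3]), "tasks.slow": deque(["a"])}
    immediate = deque()

    for name, bucket in buckets.items():
        if bucket:                        # take one ready item per bucket
            immediate.append(bucket.popleft())

    while immediate:
        print(immediate.popleft())        # both buckets get served
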
+ 3 - 7
celery/worker/control/__init__.py

@@ -1,7 +1,8 @@
 from celery import log
+from celery.messaging import ControlReplyPublisher, with_connection
+from celery.utils import kwdict
 from celery.worker.control.registry import Panel
 from celery.worker.control import builtins
-from celery.messaging import ControlReplyPublisher, with_connection
 
 
 class ControlDispatch(object):
@@ -55,12 +56,7 @@ class ControlDispatch(object):
         except KeyError:
             self.logger.error("No such control command: %s" % command)
         else:
-            # need to make sure keyword arguments are not in unicode
-            # this should be fixed in newer Python's
-            # (see: http://bugs.python.org/issue4978)
-            kwargs = dict((k.encode("utf8"), v)
-                            for k, v in kwargs.iteritems())
-            reply = control(self.panel, **kwargs)
+            reply = control(self.panel, **kwdict(kwargs))
             if reply_to:
                 self.reply({self.hostname: reply},
                            exchange=reply_to["exchange"],

+ 9 - 4
celery/worker/control/builtins.py

@@ -1,12 +1,11 @@
-import os
-import signal
 from datetime import datetime
 
 from celery import conf
+from celery.backends import default_backend
 from celery.registry import tasks
+from celery.utils import timeutils
 from celery.worker.revoke import revoked
 from celery.worker.control.registry import Panel
-from celery.backends import default_backend
 
 TASK_INFO_FIELDS = ("exchange", "routing_key", "rate_limit")
 
@@ -54,6 +53,12 @@ def rate_limit(panel, task_name, rate_limit, **kwargs):
     :param rate_limit: New rate limit.
 
     """
+
+    try:
+        timeutils.rate(rate_limit)
+    except ValueError, exc:
+        return {"error": "Invalid rate limit string: %s" % exc}
+
     try:
         tasks[task_name].rate_limit = rate_limit
     except KeyError:
@@ -135,4 +140,4 @@ def ping(panel, **kwargs):
 @Panel.register
 def shutdown(panel, **kwargs):
     panel.logger.critical("Got shutdown from remote.")
-    os.kill(os.getpid(), signal.SIGTERM)
+    raise SystemExit

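With the validation added above, a malformed rate limit string is now rejected
before the task registry is touched. A hedged sketch; the ``panel`` argument is
not used on this early-error path, so ``None`` is enough for illustration::

    from celery.worker.control.builtins import rate_limit

    reply = rate_limit(None, "tasks.add", "ten per minute")   # not "10/m"
    print(reply)
    # Expected shape: {"error": "Invalid rate limit string: ..."} -- the bad
    # value is reported back instead of being stored on the task class.
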
+ 1 - 0
celery/worker/control/registry.py

@@ -12,6 +12,7 @@ class Panel(UserDict):
     @classmethod
     def register(cls, method, name=None):
         cls.data[name or method.__name__] = method
+        return method
 
     @classmethod
     def unregister(cls, name_or_method):

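Returning ``method`` from ``register`` makes it usable as a plain decorator
without rebinding the decorated name to ``None``, which is how the builtin
commands above now use it. A minimal sketch with a made-up command::

    from celery.worker.control.registry import Panel


    @Panel.register
    def dump_example(panel, **kwargs):
        """A hypothetical remote control command."""
        return {"ok": True}


    # Before this change Panel.register returned None, so the module-level
    # name would have been rebound to None; now the function object survives.
    assert callable(dump_example)
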
+ 28 - 24
celery/worker/job.py

@@ -12,7 +12,7 @@ import warnings
 from celery import conf
 from celery import platform
 from celery.log import get_default_logger
-from celery.utils import noop, fun_takes_kwargs
+from celery.utils import noop, kwdict, fun_takes_kwargs
 from celery.utils.mail import mail_admins
 from celery.worker.revoke import revoked
 from celery.loaders import current_loader
@@ -135,16 +135,12 @@ def execute_and_trace(task_name, *args, **kwargs):
         platform.set_mp_process_title("celeryd")
 
 
-class TaskWrapper(object):
-    """Class wrapping a task to be passed around and finally
-    executed inside of the worker.
+class TaskRequest(object):
+    """A request for task execution.
 
     :param task_name: see :attr:`task_name`.
-
     :param task_id: see :attr:`task_id`.
-
     :param args: see :attr:`args`
-
     :param kwargs: see :attr:`kwargs`.
 
     .. attribute:: task_name
@@ -163,16 +159,25 @@ class TaskWrapper(object):
 
         Mapping of keyword arguments to apply to the task.
 
+    .. attribute:: on_ack
+
+        Callback called when the task should be acknowledged.
+
     .. attribute:: message
 
         The original message sent. Used for acknowledging the message.
 
-    .. attribute executed
+    .. attribute:: executed
 
         Set to ``True`` if the task has been executed.
         A task should only be executed once.
 
-    .. attribute acknowledged
+    .. attribute:: delivery_info
+
+        Additional delivery info, e.g. it contains the path
+        from producer to consumer.
+
+    .. attribute:: acknowledged
 
         Set to ``True`` if the task has been acknowledged.
 
@@ -190,7 +195,7 @@ class TaskWrapper(object):
     time_start = None
 
     def __init__(self, task_name, task_id, args, kwargs,
-            on_ack=noop, retries=0, delivery_info=None, **opts):
+            on_ack=noop, retries=0, delivery_info=None, hostname=None, **opts):
         self.task_name = task_name
         self.task_id = task_id
         self.retries = retries
@@ -199,11 +204,13 @@ class TaskWrapper(object):
         self.on_ack = on_ack
         self.delivery_info = delivery_info or {}
         self.task = tasks[self.task_name]
+        self.hostname = hostname or socket.gethostname()
         self._already_revoked = False
 
         for opt in ("success_msg", "fail_msg", "fail_email_subject",
-                "fail_email_body", "logger", "eventer"):
+                    "fail_email_body", "logger", "eventer"):
             setattr(self, opt, opts.get(opt, getattr(self, opt, None)))
+
         if not self.logger:
             self.logger = get_default_logger()
 
@@ -226,14 +233,15 @@ class TaskWrapper(object):
         return False
 
     @classmethod
-    def from_message(cls, message, message_data, logger=None, eventer=None):
-        """Create a :class:`TaskWrapper` from a task message sent by
+    def from_message(cls, message, message_data, logger=None, eventer=None,
+            hostname=None):
+        """Create a :class:`TaskRequest` from a task message sent by
         :class:`celery.messaging.TaskPublisher`.
 
         :raises UnknownTaskError: if the message does not describe a task,
             the message is also rejected.
 
-        :returns: :class:`TaskWrapper` instance.
+        :returns: :class:`TaskRequest` instance.
 
         """
         task_name = message_data["task"]
@@ -250,14 +258,10 @@ class TaskWrapper(object):
         if not hasattr(kwargs, "items"):
             raise InvalidTaskError("Task kwargs must be a dictionary.")
 
-        # Convert any unicode keys in the keyword arguments to ascii.
-        kwargs = dict((key.encode("utf-8"), value)
-                        for key, value in kwargs.items())
-
-        return cls(task_name, task_id, args, kwargs,
-                    retries=retries, on_ack=message.ack,
-                    delivery_info=delivery_info,
-                    logger=logger, eventer=eventer)
+        return cls(task_name, task_id, args, kwdict(kwargs),
+                   retries=retries, on_ack=message.ack,
+                   delivery_info=delivery_info, logger=logger,
+                   eventer=eventer, hostname=hostname)
 
     def extend_with_default_kwargs(self, loglevel, logfile):
         """Extend the tasks keyword arguments with standard task arguments.
@@ -350,7 +354,7 @@ class TaskWrapper(object):
     def on_accepted(self):
         if not self.task.acks_late:
             self.acknowledge()
-        self.send_event("task-accepted", uuid=self.task_id)
+        self.send_event("task-started", uuid=self.task_id)
         self.logger.debug("Task accepted: %s[%s]" % (
             self.task_name, self.task_id))
 
@@ -395,7 +399,7 @@ class TaskWrapper(object):
                                        traceback=exc_info.traceback)
 
         context = {
-            "hostname": socket.gethostname(),
+            "hostname": self.hostname,
             "id": self.task_id,
             "name": self.task_name,
             "exc": repr(exc_info.exception),

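The renamed ``TaskRequest`` keeps the constructor shape the tests above use,
plus the new optional ``hostname``. A hedged sketch of building one by hand;
the ``mytask`` task is illustrative and must be registered (here via the
``@task`` decorator) before the registry lookup in ``__init__`` can succeed::

    from celery.decorators import task
    from celery.utils import gen_unique_id
    from celery.worker.job import TaskRequest


    @task
    def mytask(x, **kwargs):
        return x * 2


    request = TaskRequest(mytask.name, gen_unique_id(), [21], {},
                          hostname="worker1.example.com")
    print("%s[%s] on %s" % (request.task_name, request.task_id,
                            request.hostname))
    # The worker would now hand this request to the pool, e.g. via
    # execute_using_pool(), or run it inline with execute().
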
+ 145 - 12
celery/worker/listener.py

@@ -1,7 +1,83 @@
+"""
+
+This module contains the component responsible for consuming messages
+from the broker, processing the messages and keeping the broker connections
+up and running.
+
+
+* :meth:`~CarrotListener.start` is an infinite loop, which only iterates
+  again if the connection is lost. For each iteration (at start, or if the
+  connection is lost) it calls :meth:`~CarrotListener.reset_connection`,
+  and starts the consumer by calling :meth:`~CarrotListener.consume_messages`.
+
+* :meth:`~CarrotListener.reset_connection` clears the internal queues,
+  establishes a new connection to the broker, and sets up the task
+  consumer (+ QoS) and the broadcast remote control command consumer.
+
+  Also, if events are enabled, it configures the event dispatcher and starts
+  up the heartbeat thread.
+
+* Finally it can consume messages. :meth:`~CarrotListener.consume_messages`
+  is simply an infinite loop waiting for events on the AMQP channels.
+
+  Both the task consumer and the broadcast consumer use the same
+  callback: :meth:`~CarrotListener.receive_message`.
+  The reason is that some carrot backends don't support consuming
+  from several channels simultaneously, so we use a little nasty trick
+  (:meth:`~CarrotListener._detect_wait_method`) to select the best
+  possible channel distribution depending on the functionality supported
+  by the carrot backend.
+
+* So for each message received the :meth:`~CarrotListener.receive_message`
+  method is called, this checks the payload of the message for either
+  a ``task`` key or a ``control`` key.
+
+  If the message is a task, it verifies the validity of the message,
+  converts it to a :class:`celery.worker.job.TaskRequest`, and sends
+  it to :meth:`~CarrotListener.on_task`.
+
+  If the message is a control command the message is passed to
+  :meth:`~CarrotListener.on_control`, which in turn dispatches
+  the control command using the control dispatcher.
+
+  It also tries to handle malformed or invalid messages properly,
+  so the worker doesn't choke on them and die. Any invalid messages
+  are acknowledged immediately and logged, so the message is not resent
+  again and again.
+
+* If the task has an ETA/countdown, the task is moved to the ``eta_schedule``
+  so the :class:`~celery.worker.scheduler.Scheduler` can schedule it at its
+  deadline. Tasks without an eta are moved immediately to the ``ready_queue``,
+  so they can be picked up by the :class:`~celery.worker.controllers.Mediator`
+  to be sent to the pool.
+
+* When a task with an ETA is received the QoS prefetch count is also
+  incremented, so another message can be reserved. When the ETA is met
+  the prefetch count is decremented again, though this cannot happen
+  immediately because amqplib doesn't support doing broker requests
+  across threads. Instead, the current prefetch count is kept as a
+  shared counter, so as soon as :meth:`~CarrotListener.consume_messages`
+  detects that the value has changed it will send out the actual
+  QoS event to the broker.
+
+* Notice that when the connection is lost all internal queues are cleared
+  because we can no longer ack the messages reserved in memory.
+  However, this is not dangerous as the broker will resend them
+  to another worker when the channel is closed.
+
+* **WARNING**: :meth:`~CarrotListener.stop` does not close the connection!
+  This is because some pre-acked messages may be in processing,
+  and they need to be finished before the channel is closed.
+  For celeryd this means the pool must finish the tasks it has acked
+  early, *then* close the connection.
+
+"""
+
 from __future__ import generators
 
 import socket
 import warnings
+
 from datetime import datetime
 
 from dateutil.parser import parse as parse_iso8601
@@ -9,7 +85,7 @@ from carrot.connection import AMQPConnectionException
 
 from celery import conf
 from celery.utils import noop, retry_over_time
-from celery.worker.job import TaskWrapper, InvalidTaskError
+from celery.worker.job import TaskRequest, InvalidTaskError
 from celery.worker.control import ControlDispatch
 from celery.worker.heartbeat import Heart
 from celery.events import EventDispatcher
@@ -23,6 +99,15 @@ CLOSE = 0x1
 
 
 class QoS(object):
+    """Quality of Service for Channel.
+
+    For thread-safe increment/decrement of a channels prefetch count value.
+
+    :param consumer: A :class:`carrot.messaging.Consumer` instance.
+    :param initial_value: Initial prefetch count value.
+    :param logger: Logger used to log debug messages.
+
+    """
     prev = None
 
     def __init__(self, consumer, initial_value, logger):
@@ -30,24 +115,32 @@ class QoS(object):
         self.logger = logger
         self.value = SharedCounter(initial_value)
 
-        self.set(int(self.value))
-
     def increment(self):
+        """Increment the current prefetch count value by one."""
         return self.set(self.value.increment())
 
     def decrement(self):
+        """Decrement the current prefetch count value by one."""
         return self.set(self.value.decrement())
 
     def decrement_eventually(self):
+        """Decrement the value, but do not update the qos.
+
+        The MainThread will be responsible for calling :meth:`update`
+        when necessary.
+
+        """
         self.value.decrement()
 
     def set(self, pcount):
+        """Set channel prefetch_count setting."""
         self.logger.debug("basic.qos: prefetch_count->%s" % pcount)
         self.consumer.qos(prefetch_count=pcount)
         self.prev = pcount
         return pcount
 
     def update(self):
+        """Update prefetch count with current value."""
         return self.set(self.next)
 
     @property
@@ -64,18 +157,49 @@ class CarrotListener(object):
 
     .. attribute:: ready_queue
 
-        The queue that holds tasks ready for processing immediately.
+        The queue that holds tasks ready for immediate processing.
 
     .. attribute:: eta_schedule
 
         Scheduler for paused tasks. Reasons for being paused include
         a countdown/eta or that it's waiting for retry.
 
+    .. attribute:: send_events
+
+        Whether sending of events is enabled.
+
+    .. attribute:: init_callback
+
+        Callback to be called the first time the connection is active.
+
+    .. attribute:: hostname
+
+        Current hostname. Defaults to the system hostname.
+
+    .. attribute:: initial_prefetch_count
+
+        Initial QoS prefetch count for the task channel.
+
+    .. attribute:: control_dispatch
+
+        Control command dispatcher.
+        See :class:`celery.worker.control.ControlDispatch`.
+
+    .. attribute:: event_dispatcher
+
+        See :class:`celery.events.EventDispatcher`.
+
+    .. attribute:: heart
+
+        :class:`~celery.worker.heartbeat.Heart` sending out heart beats
+        if events are enabled.
+
     .. attribute:: logger
 
         The logger used.
 
     """
+    _state = None
 
     def __init__(self, ready_queue, eta_schedule, logger,
             init_callback=noop, send_events=False, hostname=None,
@@ -89,12 +213,11 @@ class CarrotListener(object):
         self.logger = logger
         self.hostname = hostname or socket.gethostname()
         self.initial_prefetch_count = initial_prefetch_count
+        self.event_dispatcher = None
+        self.heart = None
         self.control_dispatch = ControlDispatch(logger=logger,
                                                 hostname=self.hostname,
                                                 listener=self)
-        self.event_dispatcher = None
-        self.heart = None
-        self._state = None
 
     def start(self):
         """Start the consumer.
@@ -116,8 +239,6 @@ class CarrotListener(object):
 
     def consume_messages(self):
         """Consume messages forever (or until an exception is raised)."""
-        task_consumer = self.task_consumer
-
         self.logger.debug("CarrotListener: Starting message consumer...")
         wait_for_message = self._detect_wait_method()(limit=None).next
         self.logger.debug("CarrotListener: Ready to accept tasks!")
@@ -155,14 +276,19 @@ class CarrotListener(object):
                     task.task_name, task.task_id))
             self.ready_queue.put(task)
 
+    def on_control(self, control):
+        """Handle received remote control command."""
+        return self.control_dispatch.dispatch_from_message(control)
+
     def receive_message(self, message_data, message):
         """The callback called when a new message is received. """
 
         # Handle task
         if message_data.get("task"):
             try:
-                task = TaskWrapper.from_message(message, message_data,
+                task = TaskRequest.from_message(message, message_data,
                                                 logger=self.logger,
+                                                hostname=self.hostname,
                                                 eventer=self.event_dispatcher)
             except NotRegistered, exc:
                 self.logger.error("Unknown task ignored: %s: %s" % (
@@ -179,8 +305,7 @@ class CarrotListener(object):
         # Handle control command
         control = message_data.get("control")
         if control:
-            self.control_dispatch.dispatch_from_message(control)
-            return
+            return self.on_control(control)
 
         warnings.warn(RuntimeWarning(
             "Received and deleted unknown message. Wrong destination?!? \
@@ -196,6 +321,7 @@ class CarrotListener(object):
         self.connection = self.connection and self.connection.close()
 
     def stop_consumers(self, close=True):
+        """Stop consuming."""
         if not self._state == RUN:
             return
         self._state = CLOSE
@@ -229,6 +355,7 @@ class CarrotListener(object):
         message.ack()
 
     def reset_connection(self):
+        """Re-establish connection and set up consumers."""
         self.logger.debug(
                 "CarrotListener: Re-establishing connection to the broker...")
         self.stop_consumers()
@@ -243,6 +370,7 @@ class CarrotListener(object):
         # QoS: Reset prefetch window.
         self.qos = QoS(self.task_consumer,
                        self.initial_prefetch_count, self.logger)
+        self.qos.update() # enable prefetch_count QoS.
 
         self.task_consumer.on_decode_error = self.on_decode_error
         self.broadcast_consumer = BroadcastConsumer(self.connection,
@@ -297,5 +425,10 @@ class CarrotListener(object):
         return conn
 
     def stop(self):
+        """Stop consuming.
+
+        Does not close connection.
+
+        """
         self.logger.debug("CarrotListener: Stopping consumers...")
         self.stop_consumers(close=False)

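A minimal standalone sketch of the payload-based dispatch the module docstring
describes: ``receive_message`` treats a body with a ``task`` key as a task, a
body with a ``control`` key as a remote control command, and anything else as
an unknown message that is acked and warned about. This mirrors the logic only;
it is not the worker code::

    def classify(message_data):
        """Mimic CarrotListener.receive_message's top-level dispatch."""
        if message_data.get("task"):
            return "task"        # -> TaskRequest.from_message() -> on_task()
        if message_data.get("control"):
            return "control"     # -> on_control() -> ControlDispatch
        return "unknown"         # -> ack + RuntimeWarning in the real worker


    print(classify({"task": "tasks.add", "id": "fa32cb01", "args": [2, 2],
                    "kwargs": {}, "retries": 0}))
    print(classify({"control": {"command": "rate_limit",
                                "task_name": "tasks.add",
                                "rate_limit": "10/m"}}))
    print(classify({"unexpected": "payload"}))
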
+ 1 - 0
contrib/release/doc4allmods

@@ -5,6 +5,7 @@ SKIP_PACKAGES="$PACKAGE tests management urls"
 SKIP_FILES="celery.bin.rst celery.contrib.rst
             celery.contrib.batches.rst
             celery.models.rst
+            celery.concurrency.rst
             celery.db.rst
             celery.db.a805d4bd.rst"
 

+ 0 - 1
contrib/requirements/default.txt

@@ -4,5 +4,4 @@ sqlalchemy
 anyjson
 carrot>=0.10.5
 django-picklefield
-billiard>=0.3.0
 pyparsing

+ 4 - 0
docs/_theme/classy/layout.html

@@ -0,0 +1,4 @@
+{% extends "basic/layout.html" %}
+{% block sidebar1 %}{% endblock %}
+{% block sidebar2 %}{% endblock %}
+

+ 281 - 0
docs/_theme/classy/static/classy.css_t

@@ -0,0 +1,281 @@
+/*
+ * classy.css_t
+ * ~~~~~~~~~~~~
+ *
+ * Sphinx stylesheet -- classy theme, based on the flasky and nature themes.
+ *
+ * :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+ * :license: BSD, see LICENSE for details.
+ *
+ */
+ 
+@import url("basic.css");
+ 
+/* -- page layout ----------------------------------------------------------- */
+ 
+body {
+    font-family: 'Georgia', serif;
+    font-size: 17px;
+    color: #000;
+    background: white;
+    margin: 0;
+    padding: 0;
+}
+
+div.documentwrapper {
+    float: left;
+    width: 100%;
+}
+
+div.bodywrapper {
+    margin: 40px auto 0 auto;
+    width: 700px;
+}
+
+hr {
+    border: 1px solid #B1B4B6;
+}
+ 
+div.body {
+    background-color: #ffffff;
+    color: #3E4349;
+    padding: 0 30px 30px 30px;
+}
+
+img.floatingflask {
+    padding: 0 0 10px 10px;
+    float: right;
+}
+ 
+div.footer {
+    text-align: right;
+    color: #888;
+    padding: 10px;
+    font-size: 14px;
+    width: 650px;
+    margin: 0 auto 40px auto;
+}
+ 
+div.footer a {
+    color: #888;
+    text-decoration: underline;
+}
+ 
+div.related {
+    line-height: 32px;
+    color: #888;
+}
+
+div.related ul {
+    padding: 0 0 0 10px;
+}
+ 
+div.related a {
+    color: #444;
+}
+ 
+/* -- body styles ----------------------------------------------------------- */
+ 
+a {
+    color: #004B6B;
+    text-decoration: underline;
+}
+ 
+a:hover {
+    color: #6D4100;
+    text-decoration: underline;
+}
+
+div.body {
+    padding-bottom: 40px; /* saved for footer */
+}
+ 
+div.body h1,
+div.body h2,
+div.body h3,
+div.body h4,
+div.body h5,
+div.body h6 {
+    font-family: 'Garamond', 'Georgia', serif;
+    font-weight: normal;
+    margin: 30px 0px 10px 0px;
+    padding: 0;
+}
+ 
+#classy-classes-for-javascript h1 {
+    text-indent: -999999px;
+    background: url(classyjs.png) no-repeat center;
+    height: 460px;
+    margin: 0;
+}
+
+div.body h2 { font-size: 180%; }
+div.body h3 { font-size: 150%; }
+div.body h4 { font-size: 130%; }
+div.body h5 { font-size: 100%; }
+div.body h6 { font-size: 100%; }
+ 
+a.headerlink {
+    color: white;
+    padding: 0 4px;
+    text-decoration: none;
+}
+ 
+a.headerlink:hover {
+    color: #444;
+    background: #eaeaea;
+}
+ 
+div.body p, div.body dd, div.body li {
+    line-height: 1.4em;
+}
+
+div.admonition {
+    background: #fafafa;
+    margin: 20px -30px;
+    padding: 10px 30px;
+    border-top: 1px solid #ccc;
+    border-bottom: 1px solid #ccc;
+}
+
+div.admonition p.admonition-title {
+    font-family: 'Garamond', 'Georgia', serif;
+    font-weight: normal;
+    font-size: 24px;
+    margin: 0 0 10px 0;
+    padding: 0;
+    line-height: 1;
+}
+
+div.admonition p.last {
+    margin-bottom: 0;
+}
+
+div.highlight{
+    background-color: white;
+}
+
+dt:target, .highlight {
+    background: #FAF3E8;
+}
+
+div.note {
+    background-color: #eee;
+    border: 1px solid #ccc;
+}
+ 
+div.seealso {
+    background-color: #ffc;
+    border: 1px solid #ff6;
+}
+ 
+div.topic {
+    background-color: #eee;
+}
+ 
+div.warning {
+    background-color: #ffe4e4;
+    border: 1px solid #f66;
+}
+ 
+p.admonition-title {
+    display: inline;
+}
+ 
+p.admonition-title:after {
+    content: ":";
+}
+
+pre, tt {
+    font-family: 'Consolas', 'Menlo', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace;
+    font-size: 0.85em;
+}
+
+img.screenshot {
+}
+
+tt.descname, tt.descclassname {
+    font-size: 0.95em;
+}
+
+tt.descname {
+    padding-right: 0.08em;
+}
+
+img.screenshot {
+    -moz-box-shadow: 2px 2px 4px #eee;
+    -webkit-box-shadow: 2px 2px 4px #eee;
+    box-shadow: 2px 2px 4px #eee;
+}
+
+table.docutils {
+    border: 1px solid #888;
+    -moz-box-shadow: 2px 2px 4px #eee;
+    -webkit-box-shadow: 2px 2px 4px #eee;
+    box-shadow: 2px 2px 4px #eee;
+}
+
+table.docutils td, table.docutils th {
+    border: 1px solid #888;
+    padding: 0.25em 0.7em;
+}
+
+table.field-list, table.footnote {
+    border: none;
+    -moz-box-shadow: none;
+    -webkit-box-shadow: none;
+    box-shadow: none;
+}
+
+table.footnote {
+    margin: 15px 0;
+    width: 100%;
+    border: 1px solid #eee;
+}
+
+table.field-list th {
+    padding: 0 0.8em 0 0;
+}
+
+table.field-list td {
+    padding: 0;
+}
+
+table.footnote td {
+    padding: 0.5em;
+}
+
+dl {
+    margin: 0;
+    padding: 0;
+}
+
+dl dd {
+    margin-left: 30px;
+}
+ 
+pre {
+    padding: 0;
+    margin: 15px -8px;
+    padding: 8px;
+    line-height: 1.3em;
+    border: 1px solid #A4D4EC;
+    background: #E9F5FC;
+    border-radius: 2px;
+    -moz-border-radius: 2px;
+    -webkit-border-radius: 2px;
+}
+
+tt {
+    background-color: #ecf0f3;
+    color: #222;
+    /* padding: 1px 2px; */
+}
+
+tt.xref, a tt {
+    background-color: #FBFBFB;
+}
+
+a:hover tt {
+    background: #EEE;
+}

BIN
docs/_theme/classy/static/logo.png


+ 4 - 0
docs/_theme/classy/theme.conf

@@ -0,0 +1,4 @@
+[theme]
+inherit = basic
+stylesheet = classy.css
+nosidebar = false

+ 1 - 1
docs/conf.py

@@ -62,5 +62,5 @@ latex_documents = [
    ur'Ask Solem', 'manual'),
 ]
 
-html_theme = "ADCTheme"
+html_theme = "classy"
 html_theme_path = ["_theme"]

+ 62 - 5
docs/configuration.rst

@@ -7,6 +7,9 @@ This document describes the configuration options available.
 If you're using the default loader, you must create the ``celeryconfig.py``
 module and make sure it is available on the Python path.
 
+.. contents::
+    :local:
+
 Example configuration file
 ==========================
 
@@ -483,15 +486,69 @@ Worker: celeryd
             except SoftTimeLimitExceeded:
                 cleanup_in_a_hurry()
 
+* CELERY_STORE_ERRORS_EVEN_IF_IGNORED
+
+    If set, the worker stores all task errors in the result store even if
+    ``Task.ignore_result`` is on.
+
+Error E-Mails
+-------------
+
 * CELERY_SEND_TASK_ERROR_EMAILS
 
     If set to ``True``, errors in tasks will be sent to admins by e-mail.
-    If unset, it will send the e-mails if ``settings.DEBUG`` is False.
 
-* CELERY_STORE_ERRORS_EVEN_IF_IGNORED
+* ADMINS
 
-    If set, the worker stores all task errors in the result store even if
-    ``Task.ignore_result`` is on.
+    List of ``(name, email_address)`` tuples for the admins that should
+    receive error e-mails.
+
+* SERVER_EMAIL
+
+    The e-mail address this worker sends e-mails from.
+    Default is ``"celery@localhost"``.
+
+* EMAIL_HOST
+
+    The mail server to use. Default is ``"localhost"``.
+
+* EMAIL_HOST_USER
+
+    Username (if required) to log on to the mail server with.
+
+* EMAIL_HOST_PASSWORD
+
+    Password (if required) to log on to the mail server with.
+
+* EMAIL_PORT
+
+    The port the mail server is listening on. Default is ``25``.
+
+Example E-Mail configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This configuration enables the sending of error e-mails to
+``george@vandelay.com`` and ``kramer@vandelay.com``:
+
+.. code-block:: python
+
+    # Enables error e-mails.
+    CELERY_SEND_TASK_ERROR_EMAILS = True
+
+    # Name and e-mail addresses of recipients
+    ADMINS = (
+        ("George Costanza", "george@vandelay.com"),
+        ("Cosmo Kramer", "kramer@vandelay.com"),
+    )
+
+    # E-mail address used as sender (From field).
+    SERVER_EMAIL = "no-reply@vandelay.com"
+
+    # Mailserver configuration
+    EMAIL_HOST = "mail.vandelay.com"
+    EMAIL_PORT = 25
+    # EMAIL_HOST_USER = "servers"
+    # EMAIL_HOST_PASSWORD = "s3cr3t"
 
 Events
 ------
@@ -588,7 +645,7 @@ Custom Component Classes (advanced)
 * CELERYD_POOL
 
     Name of the task pool class used by the worker.
-    Default is ``"celery.worker.pool.TaskPool"``.
+    Default is ``"celery.concurrency.processes.TaskPool"``.
 
 * CELERYD_LISTENER
 

+ 3 - 0
docs/cookbook/daemonizing.rst

@@ -5,6 +5,9 @@
 Celery does not daemonize itself, please use one of the following
 daemonization tools.
 
+.. contents::
+    :local:
+
 
 start-stop-daemon
 =================

+ 5 - 2
docs/cookbook/tasks.rst

@@ -2,9 +2,12 @@
  Creating Tasks
 ================
 
+.. contents::
+    :local:
+
 
 Ensuring a task is only executed one at a time
-----------------------------------------------
+==============================================
 
 You can accomplish this by using a lock.
 
@@ -57,4 +60,4 @@ The cache key expires after some time in case something unexpected happens
             logger.debug(
                 "Feed %s is already being imported by another worker" % (
                     feed_url))
-            return
+            return

+ 3 - 0
docs/getting-started/broker-installation.rst

@@ -2,6 +2,9 @@
  Broker Installation
 =====================
 
+.. contents::
+    :local:
+
 Installing RabbitMQ
 ===================
 

+ 38 - 22
docs/getting-started/first-steps-with-celery.rst

@@ -2,6 +2,9 @@
  First steps with Celery
 ========================
 
+.. contents::
+    :local:
+
 Creating a simple task
 ======================
 
@@ -26,23 +29,24 @@ Our addition task looks like this:
 All celery tasks are classes that inherit from the ``Task``
 class. In this case we're using a decorator that wraps the add
 function in an appropriate class for us automatically. The full
-documentation on how to create tasks and task classes are in
-:doc:`Executing Tasks<../userguide/tasks>`.
+documentation on how to create tasks and task classes is in the
+:doc:`../userguide/tasks` part of the user guide.
 
 
 
 Configuration
 =============
 
-Celery is configured by using a configuration module. By convention,
-this module is called ``celeryconfig.py``. This module must be in the
-Python path so it can be imported.
+Celery is configured by using a configuration module. By default
+this module is called ``celeryconfig.py``.
+
+:Note: This configuration module must be on the Python path so it
+  can be imported.
 
 You can set a custom name for the configuration module with the
-``CELERY_CONFIG_MODULE`` variable. In these examples we use the
+``CELERY_CONFIG_MODULE`` variable, but in these examples we use the
 default name.
 
-
 Let's create our ``celeryconfig.py``.
 
 1. Configure how we communicate with the broker::
@@ -58,10 +62,14 @@ Let's create our ``celeryconfig.py``.
 
         CELERY_RESULT_BACKEND = "amqp"
 
+   The AMQP backend is non-persistent by default, and you can only
+   fetch the result of a task once (as it's sent as a message).
+
 3. Finally, we list the modules to import, that is, all the modules
-   that contain tasks. This is so celery knows about what tasks it can
-   be asked to perform. We only have a single task module,
-   ``tasks.py``, which we added earlier::
+   that contain tasks. This is so Celery knows about what tasks it can
+   be asked to perform.
+
+   We only have a single task module, ``tasks.py``, which we added earlier::
 
         CELERY_IMPORTS = ("tasks", )
 
@@ -98,43 +106,51 @@ For info on how to run celery as standalone daemon, see
 Executing the task
 ==================
 
-Whenever we want to execute our task, we can use the ``delay`` method
-of the task class.
+Whenever we want to execute our task, we can use the
+:meth:`~celery.task.base.Task.delay` method of the task class.
 
-This is a handy shortcut to the ``apply_async`` method which gives
-greater control of the task execution.
-See :doc:`Executing Tasks<../userguide/executing>` for more information.
+This is a handy shortcut to the :meth:`~celery.task.base.Task.apply_async`
+method which gives greater control of the task execution. Read the
+:doc:`Executing Tasks<../userguide/executing>` part of the user guide
+for more information about executing tasks.
 
     >>> from tasks import add
     >>> add.delay(4, 4)
     <AsyncResult: 889143a6-39a2-4e52-837b-d80d33efb22d>
 
 At this point, the task has been sent to the message broker. The message
-broker will hold on to the task until a celery worker server has successfully
+broker will hold on to the task until a worker server has successfully
 picked it up.
 
 *Note:* If everything is just hanging when you execute ``delay``, please check
 that RabbitMQ is running, and that the user/password has access to the virtual
 host you configured earlier.
 
-Right now we have to check the celery worker log files to know what happened
-with the task. This is because we didn't keep the ``AsyncResult`` object
-returned by ``delay``.
+Right now we have to check the worker log files to know what happened
+with the task. This is because we didn't keep the :class:`~celery.result.AsyncResult`
+object returned by :meth:`~celery.task.base.Task.delay`.
 
-The ``AsyncResult`` lets us find the state of the task, wait for the task to
-finish and get its return value (or exception if the task failed).
+The :class:`~celery.result.AsyncResult` lets us find the state of the task, wait for
+the task to finish, get its return value (or exception if the task failed),
+and more.
 
-So, let's execute the task again, but this time we'll keep track of the task:
+So, let's execute the task again, but this time we'll keep track of the task
+by keeping the :class:`~celery.result.AsyncResult`::
 
     >>> result = add.delay(4, 4)
+
     >>> result.ready() # returns True if the task has finished processing.
     False
+
     >>> result.result # task is not ready, so no return value yet.
     None
+
     >>> result.get()   # Waits until the task is done and returns the retval.
     8
+
     >>> result.result # direct access to result, doesn't re-raise errors.
     8
+
     >>> result.successful() # returns True if the task didn't end in failure.
     True
 

+ 23 - 10
docs/getting-started/periodic-tasks.rst

@@ -2,7 +2,16 @@
  Periodic Tasks
 ================
 
-You can schedule tasks to run at intervals like ``cron``.
+.. contents::
+    :local:
+
+Introduction
+============
+
+The :mod:`~celery.bin.celerybeat` service enables you to schedule tasks to
+run at intervals.
+
+Periodic tasks are defined as special task classes.
 Here's an example of a periodic task:
 
 .. code-block:: python
@@ -11,13 +20,15 @@ Here's an example of a periodic task:
     from datetime import timedelta
 
     @periodic_task(run_every=timedelta(seconds=30))
-    def every_30_seconds(**kwargs):
-        logger = self.get_logger(**kwargs)
-        logger.info("Running periodic task!")
+    def every_30_seconds():
+        print("Running periodic task!")
 
-If you want a little more control over when the task is executed, for example,
-a particular time of day or day of the week, you can use the ``crontab`` schedule
-type:
+Crontab-like schedules
+======================
+
+If you want a little more control over when the task is executed, for
+example, a particular time of day or day of the week, you can use
+the ``crontab`` schedule type:
 
 .. code-block:: python
 
@@ -25,9 +36,8 @@ type:
     from celery.decorators import periodic_task
 
     @periodic_task(run_every=crontab(hour=7, minute=30, day_of_week=1))
-    def every_monday_morning(**kwargs):
-        logger = self.get_logger(**kwargs)
-        logger.info("Execute every Monday at 7:30AM.")
+    def every_monday_morning():
+        print("Execute every Monday at 7:30AM.")
 
 The syntax of these crontab expressions is very flexible.  Some examples:
 
@@ -71,6 +81,9 @@ The syntax of these crontab expressions is very flexible.  Some examples:
 |                                     | every hour during office hours (8am-5pm).  |
 +-------------------------------------+--------------------------------------------+
 
+Starting celerybeat
+===================
+
 If you want to use periodic tasks you need to start the ``celerybeat``
 service. You have to make sure only one instance of this server is running at
 any time, or else you will end up with multiple executions of the same task.

+ 4 - 0
docs/getting-started/resources.rst

@@ -2,4 +2,8 @@
  Resources
 ===========
 
+.. contents::
+    :local:
+    :depth: 2
+
 .. include:: ../includes/resources.txt

+ 8 - 5
docs/includes/introduction.txt

@@ -1,6 +1,6 @@
 .. image:: http://cloud.github.com/downloads/ask/celery/celery_favicon_128.png
 
-:Version: 1.1.0
+:Version: 1.1.1
 :Web: http://celeryproject.org/
 :Download: http://pypi.python.org/pypi/celery/
 :Source: http://github.com/ask/celery/
@@ -9,7 +9,7 @@
 
 --
 
-Celery is a task queue/job queue based on distributed message passing.
+Celery is an asynchronous task queue/job queue based on distributed message passing.
 It is focused on real-time operation, but supports scheduling as well.
 
 The execution units, called tasks, are executed concurrently on a single or
@@ -24,8 +24,8 @@ language. It can also `operate with other languages using webhooks`_.
 The recommended message broker is `RabbitMQ`_, but support for `Redis`_ and
 databases (`SQLAlchemy`_) is also available.
 
-You may also be pleased to know that full Django integration exists
-via the `django-celery`_ package.
+You may also be pleased to know that full Django integration exists,
+delivered by the `django-celery`_ package.
 
 .. _`RabbitMQ`: http://www.rabbitmq.com/
 .. _`Redis`: http://code.google.com/p/redis/
@@ -34,6 +34,9 @@ via the `django-celery`_ package.
 .. _`operate with other languages using webhooks`:
     http://ask.github.com/celery/userguide/remote-tasks.html
 
+.. contents::
+    :local:
+
 Overview
 ========
 
@@ -43,7 +46,7 @@ This is a high level overview of the architecture.
 
 The broker pushes tasks to the worker servers.
 A worker server is a networked machine running ``celeryd``. This can be one or
-more machines, depending on the workload.
+more machines depending on the workload.
 
 The result of the task can be stored for later retrieval (called its
 "tombstone").

+ 9 - 5
docs/internals/deprecation.rst

@@ -2,9 +2,13 @@
  Celery Deprecation Timeline
 =============================
 
-* 1.2
+.. contents::
+    :local:
 
-  * The following settings will be removed:
+Removals for version 1.2
+============================
+
+* The following settings will be removed:
 
     =====================================  =====================================
     **Setting name**                       **Replace with**
@@ -17,12 +21,12 @@
     ``CELERY_AMQP_PUBLISHER_ROUTING_KEY``  ``CELERY_DEFAULT_ROUTING_KEY``
     =====================================  =====================================
 
-  * ``CELERY_LOADER`` definitions without class name.
+* ``CELERY_LOADER`` definitions without class name.
 
    E.g. ``celery.loaders.default`` needs to include the class name:
     ``celery.loaders.default.Loader``.
 
-  * :meth:`TaskSet.run`. Use :meth:`celery.task.base.TaskSet.apply_async`
+* :meth:`TaskSet.run`. Use :meth:`celery.task.base.TaskSet.apply_async`
     instead.
 
-  * The module :mod:`celery.task.rest`; use :mod:`celery.task.http` instead.
+* The module :mod:`celery.task.rest`; use :mod:`celery.task.http` instead.
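
To migrate, only the names change; a minimal sketch of a configuration module using the replacement names listed above (the values shown are illustrative):

.. code-block:: python

    # celeryconfig.py
    # Old name: CELERY_AMQP_PUBLISHER_ROUTING_KEY
    CELERY_DEFAULT_ROUTING_KEY = "celery"

    # CELERY_LOADER must now include the class name:
    CELERY_LOADER = "celery.loaders.default.Loader"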

+ 12 - 3
docs/internals/events.rst

@@ -5,14 +5,18 @@
 This is the list of events sent by the worker.
 The monitor uses these to visualize the state of the cluster.
 
+.. contents::
+    :local:
+
+
 Task Events
------------
+===========
 
 * task-received(uuid, name, args, kwargs, retries, eta, hostname, timestamp)
 
     Sent when the worker receives a task.
 
-* task-accepted(uuid, hostname, timestamp)
+* task-started(uuid, hostname, timestamp)
 
     Sent just before the worker executes the task.
 
@@ -27,13 +31,18 @@ Task Events
 
     Sent if the execution of the task failed.
 
+* task-revoked(uuid)
+
+    Sent if the task has been revoked (note that this event is likely
+    to be sent by more than one worker).
+
 * task-retried(uuid, exception, traceback, hostname, delay, timestamp)
 
     Sent if the task failed, but will be retried in the future.
     (**NOT IMPLEMENTED**)
 
 Worker Events
--------------
+=============
 
 * worker-online(hostname, timestamp)
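
For orientation, a hedged sketch of how a ``task-started`` event might look as a Python dictionary, using only the fields listed above (the ``type`` key and all values are illustrative assumptions, not taken from this diff):

.. code-block:: python

    event = {
        "type": "task-started",
        "uuid": "d7f8f1f0-...",            # hypothetical task id (truncated)
        "hostname": "worker1.example.com",  # hypothetical worker host
        "timestamp": 1268061258.51,         # UNIX timestamp
    }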
 

+ 3 - 0
docs/internals/moduleindex.rst

@@ -2,6 +2,9 @@
  Module Index
 ==============
 
+.. contents::
+    :local:
+
 Worker
 ======
 

+ 34 - 26
docs/internals/protocol.rst

@@ -2,45 +2,53 @@
  Task Message Protocol
 =======================
 
-    * task
-        ``string``
+.. contents::
+    :local:
 
-        Name of the task. **required**
+Message format
+==============
 
-    * id
-        ``string``
+* task
+    ``string``
 
-        Unique id of the task (UUID). **required**
+    Name of the task. **required**
 
-    * args
-        ``list``
+* id
+    ``string``
 
-        List of arguments. Will be an empty list if not provided.
+    Unique id of the task (UUID). **required**
 
-    * kwargs
-        ``dictionary``
+* args
+    ``list``
 
-        Dictionary of keyword arguments. Will be an empty dictionary if not
-        provided.
+    List of arguments. Will be an empty list if not provided.
 
-    * retries
-        ``int``
+* kwargs
+    ``dictionary``
 
-        Current number of times this task has been retried.
-        Defaults to ``0`` if not specified.
+    Dictionary of keyword arguments. Will be an empty dictionary if not
+    provided.
 
-    * eta
-        ``string`` (ISO 8601)
+* retries
+    ``int``
 
-        Estimated time of arrival. This is the date and time in ISO 8601
-        format. If not provided the message is not scheduled, but will be
-        executed asap.
+    Current number of times this task has been retried.
+    Defaults to ``0`` if not specified.
 
-Example
-=======
+* eta
+    ``string`` (ISO 8601)
+
+    Estimated time of arrival. This is the date and time in ISO 8601
+    format. If not provided, the message is not scheduled, but will be
+    executed as soon as possible.
+
+Example message
+===============
 
 This is an example invocation of the ``celery.task.PingTask`` task in JSON
-format::
+format:
+
+.. code-block:: javascript
 
     {"task": "celery.task.PingTask",
      "args": [],
@@ -48,7 +56,6 @@ format::
      "retries": 0,
      "eta": "2009-11-17T12:30:56.527191"}
 
-
 Serialization
 =============
 
@@ -63,4 +70,5 @@ The MIME-types supported by default are shown in the following table.
     json            application/json
     yaml            application/x-yaml
     pickle          application/x-python-serialize
+    msgpack         application/x-msgpack
     =============== =================================
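
The serializer used for outgoing task messages can be selected in the configuration; a minimal sketch (``msgpack`` support is assumed to require the ``msgpack-python`` package to be installed):

.. code-block:: python

    # celeryconfig.py
    CELERY_TASK_SERIALIZER = "msgpack"    # or "json", "yaml", "pickle"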

+ 3 - 0
docs/internals/reference/celery.backends.amqp.rst

@@ -2,7 +2,10 @@
 Backend: AMQP - celery.backends.amqp
 =======================================
 
+.. contents::
+    :local:
 .. currentmodule:: celery.backends.amqp
 
 .. automodule:: celery.backends.amqp
     :members:
+    :undoc-members:

+ 3 - 0
docs/internals/reference/celery.backends.base.rst

@@ -2,9 +2,12 @@
 Backend: Base - celery.backends.base
 =====================================
 
+.. contents::
+    :local:
 .. currentmodule:: celery.backends.base
 
 .. automodule:: celery.backends.base
     :members:
+    :undoc-members:
 
 

+ 3 - 0
docs/internals/reference/celery.backends.database.rst

@@ -2,7 +2,10 @@
  Backend: SQLAlchemy Database - celery.backends.database
 =========================================================
 
+.. contents::
+    :local:
 .. currentmodule:: celery.backends.database
 
 .. automodule:: celery.backends.database
     :members:
+    :undoc-members:

+ 3 - 0
docs/internals/reference/celery.backends.mongodb.rst

@@ -2,7 +2,10 @@
  Backend: MongoDB - celery.backends.mongodb
 ============================================
 
+.. contents::
+    :local:
 .. currentmodule:: celery.backends.mongodb
 
 .. automodule:: celery.backends.mongodb
     :members:
+    :undoc-members:

+ 3 - 0
docs/internals/reference/celery.backends.pyredis.rst

@@ -2,7 +2,10 @@
  Backend: Redis - celery.backends.pyredis
 ==========================================
 
+.. contents::
+    :local:
 .. currentmodule:: celery.backends.pyredis
 
 .. automodule:: celery.backends.pyredis
     :members:
+    :undoc-members:

+ 3 - 0
docs/internals/reference/celery.backends.rst

@@ -2,7 +2,10 @@
 Backends - celery.backends
 ===========================
 
+.. contents::
+    :local:
 .. currentmodule:: celery.backends
 
 .. automodule:: celery.backends
     :members:
+    :undoc-members:

+ 3 - 0
docs/internals/reference/celery.backends.tyrant.rst

@@ -2,7 +2,10 @@
 Backend: Tokyo Tyrant - celery.backends.tyrant
 ===============================================
 
+.. contents::
+    :local:
 .. currentmodule:: celery.backends.tyrant
 
 .. automodule:: celery.backends.tyrant
     :members:
+    :undoc-members:

+ 3 - 0
docs/internals/reference/celery.beat.rst

@@ -2,7 +2,10 @@
 Clock Service - celery.beat
 ========================================
 
+.. contents::
+    :local:
 .. currentmodule:: celery.beat
 
 .. automodule:: celery.beat
     :members:
+    :undoc-members:

+ 11 - 0
docs/internals/reference/celery.concurrency.processes.pool.rst

@@ -0,0 +1,11 @@
+===================================================================
+ extended multiprocessing.pool - celery.concurrency.processes.pool
+===================================================================
+
+.. contents::
+    :local:
+.. currentmodule:: celery.concurrency.processes.pool
+
+.. automodule:: celery.concurrency.processes.pool
+    :members:
+    :undoc-members:

+ 11 - 0
docs/internals/reference/celery.concurrency.processes.rst

@@ -0,0 +1,11 @@
+=============================================================
+ Multiprocessing Pool Support - celery.concurrency.processes
+=============================================================
+
+.. contents::
+    :local:
+.. currentmodule:: celery.concurrency.processes
+
+.. automodule:: celery.concurrency.processes
+    :members:
+    :undoc-members:

+ 11 - 0
docs/internals/reference/celery.concurrency.threads.rst

@@ -0,0 +1,11 @@
+===================================================================
+ Thread Pool Support **EXPERIMENTAL** - celery.concurrency.threads
+===================================================================
+
+.. contents::
+    :local:
+.. currentmodule:: celery.concurrency.threads
+
+.. automodule:: celery.concurrency.threads
+    :members:
+    :undoc-members:

+ 3 - 0
docs/internals/reference/celery.datastructures.rst

@@ -2,7 +2,10 @@
 Datastructures - celery.datastructures
 =======================================
 
+.. contents::
+    :local:
 .. currentmodule:: celery.datastructures
 
 .. automodule:: celery.datastructures
     :members:
+    :undoc-members:

+ 3 - 0
docs/internals/reference/celery.db.models.rst

@@ -2,7 +2,10 @@
  SQLAlchemy Models - celery.db.models
 ======================================
 
+.. contents::
+    :local:
 .. currentmodule:: celery.db.models
 
 .. automodule:: celery.db.models
     :members:
+    :undoc-members:

+ 3 - 0
docs/internals/reference/celery.db.session.rst

@@ -2,7 +2,10 @@
  SQLAlchemy Session - celery.db.session
 ========================================
 
+.. contents::
+    :local:
 .. currentmodule:: celery.db.session
 
 .. automodule:: celery.db.session
     :members:
+    :undoc-members:

+ 3 - 0
docs/internals/reference/celery.execute.trace.rst

@@ -2,7 +2,10 @@
  Tracing Execution - celery.execute.trace
 ==========================================
 
+.. contents::
+    :local:
 .. currentmodule:: celery.execute.trace
 
 .. automodule:: celery.execute.trace
     :members:
+    :undoc-members:

+ 3 - 0
docs/internals/reference/celery.log.rst

@@ -2,7 +2,10 @@
 Logging - celery.log
 ==========================
 
+.. contents::
+    :local:
 .. currentmodule:: celery.log
 
 .. automodule:: celery.log
     :members:
+    :undoc-members:

+ 3 - 0
docs/internals/reference/celery.platform.rst

@@ -2,7 +2,10 @@
  Platform Specific - celery.platform
 =====================================
 
+.. contents::
+    :local:
 .. currentmodule:: celery.platform
 
 .. automodule:: celery.platform
     :members:
+    :undoc-members:

+ 3 - 0
docs/internals/reference/celery.routes.rst

@@ -2,7 +2,10 @@
  Message Routers - celery.routes
 =================================
 
+.. contents::
+    :local:
 .. currentmodule:: celery.routes
 
 .. automodule:: celery.routes
     :members:
+    :undoc-members:

Some files were not shown because too many files changed in this diff