
Merge branch 'master' into routers

Ask Solem, 15 years ago
parent
commit
39b95c979a
Changed 100 files with 3,680 additions and 2,485 deletions
  1. .gitignore (+3, -0)
  2. AUTHORS (+4, -0)
  3. Changelog (+609, -185)
  4. FAQ (+175, -156)
  5. MANIFEST.in (+1, -0)
  6. README.rst (+14, -10)
  7. bin/celerybeat (+1, -1)
  8. bin/celeryd-multi (+5, -0)
  9. bin/celeryinit (+0, -8)
  10. celery/__init__.py (+2, -2)
  11. celery/backends/__init__.py (+6, -6)
  12. celery/backends/amqp.py (+78, -51)
  13. celery/backends/base.py (+6, -6)
  14. celery/backends/cache.py (+0, -62)
  15. celery/backends/database.py (+73, -13)
  16. celery/backends/mongodb.py (+9, -12)
  17. celery/backends/pyredis.py (+1, -1)
  18. celery/bin/celerybeat.py (+62, -49)
  19. celery/bin/celeryd.py (+60, -2)
  20. celery/bin/celeryd_multi.py (+247, -0)
  21. celery/bin/celeryinit.py (+0, -14)
  22. celery/conf.py (+33, -58)
  23. celery/contrib/abortable.py (+149, -0)
  24. celery/contrib/test_runner.py (+0, -19)
  25. celery/db/__init__.py (+0, -0)
  26. celery/db/a805d4bd.py (+66, -0)
  27. celery/db/models.py (+70, -0)
  28. celery/db/session.py (+36, -0)
  29. celery/exceptions.py (+15, -0)
  30. celery/execute/__init__.py (+11, -2)
  31. celery/execute/trace.py (+19, -6)
  32. celery/loaders/__init__.py (+7, -84)
  33. celery/loaders/base.py (+1, -0)
  34. celery/loaders/default.py (+19, -10)
  35. celery/loaders/djangoapp.py (+0, -100)
  36. celery/management/commands/__init__.py (+0, -0)
  37. celery/management/commands/camqadm.py (+0, -18)
  38. celery/management/commands/celerybeat.py (+0, -18)
  39. celery/management/commands/celeryd.py (+0, -18)
  40. celery/management/commands/celerymon.py (+0, -37)
  41. celery/managers.py (+0, -149)
  42. celery/messaging.py (+68, -4)
  43. celery/models.py (+27, -51)
  44. celery/result.py (+8, -8)
  45. celery/signals.py (+2, -1)
  46. celery/states.py (+13, -5)
  47. celery/task/base.py (+84, -70)
  48. celery/task/control.py (+49, -9)
  49. celery/task/rest.py (+0, -19)
  50. celery/task/schedules.py (+93, -0)
  51. celery/tests/runners.py (+0, -19)
  52. celery/tests/test_backends/__init__.py (+2, -5)
  53. celery/tests/test_backends/test_amqp.py (+23, -21)
  54. celery/tests/test_backends/test_cache.py (+0, -127)
  55. celery/tests/test_backends/test_database.py (+0, -68)
  56. celery/tests/test_buckets.py (+6, -5)
  57. celery/tests/test_conf.py (+0, -36)
  58. celery/tests/test_discovery.py (+0, -28)
  59. celery/tests/test_loaders.py (+6, -61)
  60. celery/tests/test_models.py (+0, -74)
  61. celery/tests/test_pool.py (+0, -7)
  62. celery/tests/test_task.py (+112, -3)
  63. celery/tests/test_task_abortable.py (+31, -0)
  64. celery/tests/test_task_control.py (+5, -0)
  65. celery/tests/test_views.py (+0, -127)
  66. celery/tests/test_worker.py (+6, -5)
  67. celery/tests/test_worker_control.py (+27, -7)
  68. celery/tests/test_worker_job.py (+41, -89)
  69. celery/urls.py (+0, -16)
  70. celery/utils/__init__.py (+9, -11)
  71. celery/utils/compat.py (+21, -0)
  72. celery/utils/dispatch/__init__.py (+1, -0)
  73. celery/utils/dispatch/license.txt (+36, -0)
  74. celery/utils/dispatch/saferef.py (+277, -0)
  75. celery/utils/dispatch/signal.py (+211, -0)
  76. celery/utils/info.py (+1, -1)
  77. celery/utils/mail.py (+24, -0)
  78. celery/utils/timeutils.py (+123, -0)
  79. celery/views.py (+0, -106)
  80. celery/worker/__init__.py (+15, -11)
  81. celery/worker/buckets.py (+65, -44)
  82. celery/worker/control.py (+0, -128)
  83. celery/worker/control/__init__.py (+68, -0)
  84. celery/worker/control/builtins.py (+118, -0)
  85. celery/worker/control/registry.py (+21, -0)
  86. celery/worker/job.py (+43, -10)
  87. celery/worker/listener.py (+8, -10)
  88. celery/worker/pool.py (+20, -19)
  89. celery/worker/scheduler.py (+5, -0)
  90. contrib/debian/init.d/celeryd (+5, -27)
  91. contrib/debian/init.d/celeryd-multi (+183, -0)
  92. contrib/release/doc4allmods (+1, -1)
  93. contrib/requirements/default.txt (+4, -3)
  94. contrib/requirements/test.txt (+1, -2)
  95. contrib/supervisord/celerybeat.conf (+3, -1)
  96. contrib/supervisord/celeryd.conf (+7, -1)
  97. contrib/supervisord/django/celerybeat.conf (+0, -18)
  98. contrib/supervisord/django/celeryd.conf (+0, -18)
  99. docs/_ext/djangodocs.py (+0, -112)
  100. docs/_theme/ADCTheme/LICENSE (+25, -0)

+ 3 - 0
.gitignore

@@ -4,6 +4,8 @@
 .*.sw[po]
 dist/
 *.egg-info
+*.egg
+*.egg/
 doc/__build/*
 build/
 .build/
@@ -11,3 +13,4 @@ pip-log.txt
 .directory
 erl_crash.dump
 *.db
+Documentation/

+ 4 - 0
AUTHORS

@@ -25,3 +25,7 @@ Ordered by date of first contribution:
  Felix Berger <bflat1@gmx.net
  Reza Lotun <rlotun@gmail.com>
  Mikhail Korobov <kmike84@gmail.com>
+  Jeff Balogh <me@jeffbalogh.org>
+  Patrick Altman <paltman@gmail.com>
+  Vincent Driessen <vincent@datafox.nl>
+  Hari <haridara@gmail.com>

+ 609 - 185
Changelog

@@ -2,6 +2,399 @@
  Change history
  Change history
 ================
 ================
 
 
+1.2.0 [xxxx-xx-xx xx:xx x.x xxxx]
+=================================
+
+Upgrading for Django-users
+--------------------------
+
+Django integration has been moved to a separate package: `django-celery`_.
+
+* To upgrade you need to install the `django-celery`_ module and change::
+
+    INSTALLED_APPS = "celery"
+
+  to::
+
+    INSTALLED_APPS = "djcelery"
+
+
+* The following modules have been moved to `django-celery`_:
+
+    =====================================  =====================================
+    **Module name**                        **Replace with**
+    =====================================  =====================================
+    ``celery.models``                      ``djcelery.models``
+    ``celery.managers``                    ``djcelery.managers``
+    ``celery.views``                       ``djcelery.views``
+    ``celery.urls``                        ``djcelery.urls``
+    ``celery.management``                  ``djcelery.management``
+    ``celery.loaders.djangoapp``           ``djcelery.loaders``
+    ``celery.backends.database``           ``djcelery.backends.database``
+    ``celery.backends.cache``              ``djcelery.backends.cache``
+    =====================================  =====================================
+
+Importing ``djcelery`` will automatically set up celery to use the Django
+loader by setting the :envvar:`CELERY_LOADER` environment variable (it won't
+change it if it's already defined).
+
+When the Django loader is used, the "database" and "cache" backend aliases
+will point to the ``djcelery`` backends instead of the built-in backends.
+
+.. _`django-celery`: http://pypi.python.org/pypi/django-celery
+
+
+Upgrading for others
+--------------------
+
+The database backend is now using `SQLAlchemy`_ instead of the Django ORM,
+see `Supported Databases`_ for a table of supported databases.
+
+
+The ``DATABASE_*`` settings have been replaced by a single setting:
+``CELERY_RESULT_DBURI``. The value here should be an `SQLAlchemy Connection
+String`_, some examples include:
+
+.. code-block:: python
+
+    # sqlite (filename)
+    CELERY_RESULT_DBURI = "sqlite:///celerydb.sqlite"
+
+    # mysql
+    CELERY_RESULT_DBURI = "mysql://scott:tiger@localhost/foo"
+
+    # postgresql
+    CELERY_RESULT_DBURI = "postgresql://scott:tiger@localhost/mydatabase"
+
+    # oracle
+    CELERY_RESULT_DBURI = "oracle://scott:tiger@127.0.0.1:1521/sidname"
+
+See `SQLAlchemy Connection Strings`_ for more information about connection
+strings.
+
+To specify additional SQLAlchemy database engine options you can use
+the ``CELERY_RESULT_ENGINE_OPTIONS`` setting::
+
+    # echo enables verbose logging from SQLAlchemy.
+    CELERY_RESULT_ENGINE_OPTIONS = {"echo": True}
+
+.. _`SQLAlchemy`:
+    http://www.sqlalchemy.org
+.. _`Supported Databases`:
+    http://www.sqlalchemy.org/docs/dbengine.html#supported-databases
+.. _`SQLAlchemy Connection String`:
+    http://www.sqlalchemy.org/docs/dbengine.html#create-engine-url-arguments
+.. _`SQLAlchemy Connection Strings`:
+    http://www.sqlalchemy.org/docs/dbengine.html#create-engine-url-arguments
+
+Backward incompatible changes
+-----------------------------
+
+* The following deprecated settings have been removed (as scheduled by
+  the `deprecation timeline`_):
+
+    =====================================  =====================================
+    **Setting name**                       **Replace with**
+    =====================================  =====================================
+    ``CELERY_AMQP_CONSUMER_QUEUES``        ``CELERY_QUEUES``
+    ``CELERY_AMQP_EXCHANGE``               ``CELERY_DEFAULT_EXCHANGE``
+    ``CELERY_AMQP_EXCHANGE_TYPE``          ``CELERY_DEFAULT_EXCHANGE_TYPE``
+    ``CELERY_AMQP_CONSUMER_ROUTING_KEY``   ``CELERY_QUEUES``
+    ``CELERY_AMQP_PUBLISHER_ROUTING_KEY``  ``CELERY_DEFAULT_ROUTING_KEY``
+    =====================================  =====================================
+
+.. _`deprecation timeline`:
+    http://ask.github.com/celery/internals/deprecation.html
+
+* The ``celery.task.rest`` module has been removed, use ``celery.task.http``
+  instead (as scheduled by the `deprecation timeline`_).
+
+* It's no longer allowed to skip the class name in loader names
+  (as scheduled by the `deprecation timeline`_):
+
+    The implicit ``Loader`` class name is no longer supported, so if
+    you use e.g.::
+
+        CELERY_LOADER = "myapp.loaders"
+
+    You need to include the loader class name, like this::
+
+        CELERY_LOADER = "myapp.loaders.Loader"
+
+News
+----
+
+* Now depends on billiard >= 0.4.0
+
+* Added support for task soft and hard time limits.
+
+    New settings added:
+
+    * CELERYD_TASK_TIME_LIMIT
+
+        Hard time limit. The worker processing the task will be killed and
+        replaced with a new one when this is exceeded.
+    * CELERYD_SOFT_TASK_TIME_LIMIT
+
+        Soft time limit. The celery.exceptions.SoftTimeLimitExceeded exception
+        will be raised when this is exceeded. The task can catch this to
+        e.g. clean up before the hard time limit comes.
+
+    New command line arguments to celeryd added:
+    ``--time-limit`` and ``--soft-time-limit``.
+
+    What's left?
+
+    This won't work on platforms not supporting signals (and specifically
+    the ``SIGUSR1`` signal) yet, so an alternative (or the ability to disable
+    the feature altogether on nonconforming platforms) must be implemented.
+
+    Also when the hard time limit is exceeded, the task result should
+    be a ``TimeLimitExceeded`` exception.
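+
+    A task can catch the soft limit to clean up before the hard limit hits.
+    A minimal sketch (``download_feed`` and ``cleanup_partial_download`` are
+    only illustrative helpers):
+
+    .. code-block:: python
+
+        from celery.decorators import task
+        from celery.exceptions import SoftTimeLimitExceeded
+
+        @task()
+        def process_feed(feed_url, **kwargs):
+            try:
+                # Potentially long-running work.
+                return download_feed(feed_url)
+            except SoftTimeLimitExceeded:
+                # Soft limit exceeded: clean up, then give up before the
+                # hard limit kills the worker process.
+                cleanup_partial_download(feed_url)
+                raise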
+
+1.0.3 [2010-05-15 03:00 P.M CEST]
+=================================
+
+Important notes
+---------------
+
+* Messages are now acked *just before* the task function is executed.
+
+    This is the behavior we've wanted all along, but couldn't have because of
+    limitations in the multiprocessing module.
+    The previous behavior was not good, and the situation worsened with the
+    release of 1.0.1, so this change will definitely improve
+    reliability, performance and operations in general.
+
+    For more information please see http://bit.ly/9hom6T
+
+* Database result backend: result now explicitly sets ``null=True`` as
+  ``django-picklefield`` version 0.1.5 changed the default behavior
+  right under our noses :(
+
+    See: http://bit.ly/d5OwMr
+
+    This means those who created their celery tables (via syncdb or
+    celeryinit) with picklefield versions >= 0.1.5 have to manually alter
+    their tables to allow the result field to be ``NULL``.
+
+    MySQL::
+
+        ALTER TABLE celery_taskmeta MODIFY result TEXT NULL
+
+* Removed ``Task.rate_limit_queue_type``, as it was not really useful
+  and made it harder to refactor some parts.
+
+* Now depends on carrot >= 0.10.4
+
+* Now depends on billiard >= 0.3.0
+
+News
+----
+
+* AMQP backend: Added timeout support for ``result.get()`` /
+  ``result.wait()``.
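+
+    For example (a sketch, assuming a registered ``add`` task)::
+
+        >>> result = add.delay(2, 2)
+        >>> result.get(timeout=5)  # give up if no result within 5 seconds
+        4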
+
+* New task option: ``Task.acks_late`` (default: ``CELERY_ACKS_LATE``)
+
+    Late ack means the task messages will be acknowledged **after** the task
+    has been executed, not *just before*, which is the default behavior.
+
+    Note that this means the tasks may be executed twice if the worker
+    crashes in the middle of their execution. Not acceptable for most
+    applications, but desirable for others.
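+
+    Enabling late acks for a single task could look like this (a sketch;
+    the task body is only illustrative):
+
+    .. code-block:: python
+
+        from celery.decorators import task
+
+        @task(acks_late=True)
+        def process_payment(order_id, **kwargs):
+            ...  # the message is acked only after this returns
+
+    or globally by setting ``CELERY_ACKS_LATE = True`` in the configuration.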
+
+* Added crontab-like scheduling to periodic tasks.
+
+    Like a cron job, you can specify units of time of when
+    you would like the task to execute. While not a full implementation
+    of cron's features, it should provide a fair degree of common scheduling
+    needs.
+
+    You can specify a minute (0-59), an hour (0-23), and/or a day of the
+    week (0-6 where 0 is Sunday, or by names: sun, mon, tue, wed, thu, fri,
+    sat).
+
+    Examples:
+
+    .. code-block:: python
+
+        from celery.task.schedules import crontab
+        from celery.decorators import periodic_task
+
+        @periodic_task(run_every=crontab(hour=7, minute=30))
+        def every_morning():
+            print("Runs every morning at 7:30a.m")
+
+        @periodic_task(run_every=crontab(hour=7, minute=30, day_of_week="mon"))
+        def every_monday_morning():
+            print("Run every monday morning at 7:30a.m")
+
+        @periodic_task(run_every=crontab(minute=30))
+        def every_hour():
+            print("Runs every hour on the clock. e.g. 1:30, 2:30, 3:30 etc.")
+
+    Note that this is a late addition. While we have unittests, due to the
+    nature of this feature we haven't been able to completely test this
+    in practice, so consider this experimental.
+
+* ``TaskPool.apply_async``: Now supports the ``accept_callback`` argument.
+
+* ``apply_async``: Now raises :exc:`ValueError` if task args is not a list,
+  or kwargs is not a tuple (http://github.com/ask/celery/issues/issue/95).
+
+* ``Task.max_retries`` can now be ``None``, which means it will retry forever.
+
+* Celerybeat: Now reuses the same connection when publishing large
+  sets of tasks.
+
+* Modified the task locking example in the documentation to use
+  ``cache.add`` for atomic locking.
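+
+    The idea, roughly (a sketch using the Django cache API; the helper
+    names are only illustrative)::
+
+        from django.core.cache import cache
+
+        LOCK_EXPIRE = 60 * 5  # lock expires after five minutes
+
+        def acquire_lock(lock_id):
+            # cache.add only stores the key if it does not already exist,
+            # so it works as an atomic test-and-set.
+            return cache.add(lock_id, "true", LOCK_EXPIRE)
+
+        def release_lock(lock_id):
+            cache.delete(lock_id)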
+
+* Added experimental support for a *started* status on tasks.
+
+    If ``Task.track_started`` is enabled the task will report its status
+    as "started" when the task is executed by a worker.
+
+    The default value is ``False`` as the normal behaviour is to not
+    report that level of granularity. Tasks are either pending, finished,
+    or waiting to be retried. Having a "started" status can be useful when
+    there are long-running tasks and there is a need to report which task
+    is currently running.
+
+    The global default can be overridden by the ``CELERY_TRACK_STARTED``
+    setting.
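+
+    Enabling it per task might look like this (a sketch)::
+
+        from celery.decorators import task
+
+        @task(track_started=True)
+        def import_contacts(filename, **kwargs):
+            ...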
+
+* User Guide: New section ``Tips and Best Practices``.
+
+    Contributions welcome!
+
+Remote control commands
+-----------------------
+
+* Remote control commands can now send replies back to the caller.
+
+    Existing commands have been improved to send replies, and the client
+    interface in ``celery.task.control`` has new keyword arguments: ``reply``,
+    ``timeout`` and ``limit``: ``reply`` means it will wait for replies,
+    ``timeout`` is the time in seconds to stop waiting for replies, and
+    ``limit`` is the maximum number of replies to get.
+
+    By default, it will wait for as many replies as possible for one second.
+
+    * rate_limit(task_name, destination=all, reply=False, timeout=1, limit=0)
+
+        Worker returns ``{"ok": message}`` on success,
+        or ``{"failure": message}`` on failure.
+
+            >>> from celery.task.control import rate_limit
+            >>> rate_limit("tasks.add", "10/s", reply=True)
+            [{'worker1': {'ok': 'new rate limit set successfully'}},
+             {'worker2': {'ok': 'new rate limit set successfully'}}]
+
+    * ping(destination=all, reply=False, timeout=1, limit=0)
+
+        Worker returns the simple message ``"pong"``.
+
+            >>> from celery.task.control import ping
+            >>> ping(reply=True)
+            [{'worker1': 'pong'},
+             {'worker2': 'pong'}]
+
+    * revoke(destination=all, reply=False, timeout=1, limit=0)
+
+        Worker simply returns ``True``.
+
+            >>> from celery.task.control import revoke
+            >>> revoke("419e46eb-cf6a-4271-86a8-442b7124132c", reply=True)
+            [{'worker1': True},
+             {'worker2': True}]
+
+* You can now add your own remote control commands!
+
+    Remote control commands are functions registered in the command
+    registry. Registering a command is done using
+    :meth:`celery.worker.control.Panel.register`:
+
+    .. code-block:: python
+
+        from celery.worker.control import Panel
+
+        @Panel.register
+        def reset_broker_connection(panel, **kwargs):
+            panel.listener.reset_connection()
+            return {"ok": "connection re-established"}
+
+    With this module imported in the worker, you can launch the command
+    using ``celery.task.control.broadcast``::
+
+        >>> from celery.task.control import broadcast
+        >>> broadcast("reset_broker_connection", reply=True)
+        [{'worker1': {'ok': 'connection re-established'}},
+         {'worker2': {'ok': 'connection re-established'}}]
+
+    **TIP** You can choose the worker(s) to receive the command
+    by using the ``destination`` argument::
+
+        >>> broadcast("reset_broker_connection", destination=["worker1"])
+        [{'worker1': {'ok': 'connection re-established'}}]
+
+* New remote control command: ``dump_reserved``
+
+    Dumps tasks reserved by the worker, waiting to be executed::
+
+        >>> from celery.task.control import broadcast
+        >>> broadcast("dump_reserved", reply=True)
+        [{'myworker1': [<TaskWrapper ....>]}]
+
+* New remote control command: ``dump_schedule``
+
+    Dumps the worker's currently registered ETA schedule.
+    These are tasks with an ``eta`` (or ``countdown``) argument
+    waiting to be executed by the worker.
+
+        >>> from celery.task.control import broadcast
+        >>> broadcast("dump_schedule", reply=True)
+        [{'w1': []},
+         {'w3': []},
+         {'w2': ['0. 2010-05-12 11:06:00 pri0 <TaskWrapper:
+                    {name:"opalfeeds.tasks.refresh_feed_slice",
+                     id:"95b45760-4e73-4ce8-8eac-f100aa80273a",
+                     args:"(<Feeds freq_max:3600 freq_min:60
+                                   start:2184.0 stop:3276.0>,)",
+                     kwargs:"{'page': 2}"}>']},
+         {'w4': ['0. 2010-05-12 11:00:00 pri0 <TaskWrapper:
+                    {name:"opalfeeds.tasks.refresh_feed_slice",
+                     id:"c053480b-58fb-422f-ae68-8d30a464edfe",
+                     args:"(<Feeds freq_max:3600 freq_min:60
+                                   start:1092.0 stop:2184.0>,)",
+                     kwargs:"{\'page\': 1}"}>',
+                '1. 2010-05-12 11:12:00 pri0 <TaskWrapper:
+                    {name:"opalfeeds.tasks.refresh_feed_slice",
+                     id:"ab8bc59e-6cf8-44b8-88d0-f1af57789758",
+                     args:"(<Feeds freq_max:3600 freq_min:60
+                                   start:3276.0 stop:4365>,)",
+                     kwargs:"{\'page\': 3}"}>']}]
+
+Fixes
+-----
+
+* Mediator thread no longer blocks for more than 1 second.
+
+    With rate limits enabled and when there was a lot of remaining time,
+    the mediator thread could block shutdown (and potentially block other
+    jobs from coming in).
+
+* Remote rate limits were not properly applied
+  (http://github.com/ask/celery/issues/issue/98)
+
+* Now handles exceptions with unicode messages correctly in
+  ``TaskWrapper.on_failure``.
+
+* Database backend: ``TaskMeta.result``: default value should be ``None``
+  not empty string.
+
 1.0.2 [2010-03-31 12:50 P.M CET]
 1.0.2 [2010-03-31 12:50 P.M CET]
 ================================
 ================================
 
 
@@ -10,19 +403,20 @@
 
 
 * We now use a custom logger in tasks. This logger supports task magic
 * We now use a custom logger in tasks. This logger supports task magic
   keyword arguments in formats.
   keyword arguments in formats.
-  The default format for tasks (``CELERYD_TASK_LOG_FORMAT``) now includes
-  the id and the name of tasks so the origin of task log messages can
-  easily be traced.
 
 
-  Example output::
-  	[2010-03-25 13:11:20,317: INFO/PoolWorker-1]
-  		[tasks.add(a6e1c5ad-60d9-42a0-8b24-9e39363125a4)] Hello from add
+    The default format for tasks (``CELERYD_TASK_LOG_FORMAT``) now includes
+    the id and the name of tasks so the origin of task log messages can
+    easily be traced.
+
+    Example output::
+        [2010-03-25 13:11:20,317: INFO/PoolWorker-1]
+            [tasks.add(a6e1c5ad-60d9-42a0-8b24-9e39363125a4)] Hello from add
 
 
-  To revert to the previous behavior you can set::
+    To revert to the previous behavior you can set::
 
 
-	CELERYD_TASK_LOG_FORMAT = """
-	[%(asctime)s: %(levelname)s/%(processName)s] %(message)s
-	""".strip()
+        CELERYD_TASK_LOG_FORMAT = """
+            [%(asctime)s: %(levelname)s/%(processName)s] %(message)s
+        """.strip()
 
 
 * Unittests: Don't disable the django test database teardown,
 * Unittests: Don't disable the django test database teardown,
   instead fixed the underlying issue which was caused by modifications
   instead fixed the underlying issue which was caused by modifications
@@ -31,36 +425,36 @@
 * Django Loader: New config ``CELERY_DB_REUSE_MAX`` (max number of tasks
 * Django Loader: New config ``CELERY_DB_REUSE_MAX`` (max number of tasks
   to reuse the same database connection)
   to reuse the same database connection)
 
 
-  The default is to use a new connection for every task.
-  We would very much like to reuse the connection, but a safe number of
-  reuses is not known, and we don't have any way to handle the errors
-  that might happen, which may even be database dependent.
+    The default is to use a new connection for every task.
+    We would very much like to reuse the connection, but a safe number of
+    reuses is not known, and we don't have any way to handle the errors
+    that might happen, which may even be database dependent.
 
 
-  See: http://bit.ly/94fwdd
+    See: http://bit.ly/94fwdd
 
 
 * celeryd: The worker components are now configurable: ``CELERYD_POOL``,
 * celeryd: The worker components are now configurable: ``CELERYD_POOL``,
-	``CELERYD_LISTENER``, ``CELERYD_MEDIATOR``, and ``CELERYD_ETA_SCHEDULER``.
+  ``CELERYD_LISTENER``, ``CELERYD_MEDIATOR``, and ``CELERYD_ETA_SCHEDULER``.
 
 
-	The default configuration is as follows:
+    The default configuration is as follows:
 
 
-  .. code-block:: python
+    .. code-block:: python
 
 
-    CELERYD_POOL = "celery.worker.pool.TaskPool"
-    CELERYD_MEDIATOR = "celery.worker.controllers.Mediator"
-    CELERYD_ETA_SCHEDULER = "celery.worker.controllers.ScheduleController"
-    CELERYD_LISTENER = "celery.worker.listener.CarrotListener"
+        CELERYD_POOL = "celery.worker.pool.TaskPool"
+        CELERYD_MEDIATOR = "celery.worker.controllers.Mediator"
+        CELERYD_ETA_SCHEDULER = "celery.worker.controllers.ScheduleController"
+        CELERYD_LISTENER = "celery.worker.listener.CarrotListener"
 
 
-  THe ``CELERYD_POOL`` setting makes it easy to swap out the multiprocessing
-  pool with a threaded pool, or how about a twisted/eventlet pool?
+    The ``CELERYD_POOL`` setting makes it easy to swap out the multiprocessing
+    pool with a threaded pool, or how about a twisted/eventlet pool?
 
 
-  Consider the competition for the first pool plug-in started!
+    Consider the competition for the first pool plug-in started!
 
 
 
 
 * Debian init scripts: Use ``-a`` not ``&&``
 * Debian init scripts: Use ``-a`` not ``&&``
   (http://github.com/ask/celery/issues/82).
   (http://github.com/ask/celery/issues/82).
 
 
 * Debian init scripts: Now always preserves ``$CELERYD_OPTS`` from the
 * Debian init scripts: Now always preserves ``$CELERYD_OPTS`` from the
-	``/etc/default/celeryd`` and ``/etc/default/celerybeat``.
+  ``/etc/default/celeryd`` and ``/etc/default/celerybeat``.
 
 
 * celery.beat.Scheduler: Fixed a bug where the schedule was not properly
 * celery.beat.Scheduler: Fixed a bug where the schedule was not properly
   flushed to disk if the schedule had not been properly initialized.
   flushed to disk if the schedule had not been properly initialized.
@@ -90,24 +484,24 @@
 
 
 * Tasks are now acknowledged early instead of late.
 * Tasks are now acknowledged early instead of late.
 
 
-  This is done because messages can only be acked within the same
-  connection channel, so if the connection is lost we would have to refetch
-  the message again to acknowledge it.
+    This is done because messages can only be acked within the same
+    connection channel, so if the connection is lost we would have to refetch
+    the message again to acknowledge it.
 
 
-  This might or might not affect you, but mostly those running tasks with a
-  really long execution time are affected, as all tasks that has made it
-  all the way into the pool needs to be executed before the worker can
-  safely terminate (this is at most the number of pool workers, multiplied
-  by the ``CELERYD_PREFETCH_MULTIPLIER`` setting.)
+    This might or might not affect you, but mostly those running tasks with a
+    really long execution time are affected, as all tasks that has made it
+    all the way into the pool needs to be executed before the worker can
+    safely terminate (this is at most the number of pool workers, multiplied
+    by the ``CELERYD_PREFETCH_MULTIPLIER`` setting.)
 
 
-  We multiply the prefetch count by default to increase the performance at
-  times with bursts of tasks with a short execution time. If this doesn't
-  apply to your use case, you should be able to set the prefetch multiplier
-  to zero, without sacrificing performance.
+    We multiply the prefetch count by default to increase the performance at
+    times with bursts of tasks with a short execution time. If this doesn't
+    apply to your use case, you should be able to set the prefetch multiplier
+    to zero, without sacrificing performance.
 
 
-  Please note that a patch to :mod:`multiprocessing` is currently being
-  worked on, this patch would enable us to use a better solution, and is
-  scheduled for inclusion in the ``1.2.0`` release.
+    Please note that a patch to :mod:`multiprocessing` is currently being
+    worked on, this patch would enable us to use a better solution, and is
+    scheduled for inclusion in the ``1.2.0`` release.
 
 
 * celeryd now shutdowns cleanly when receving the ``TERM`` signal.
 * celeryd now shutdowns cleanly when receving the ``TERM`` signal.
 
 
@@ -118,61 +512,67 @@
   to implement this functionality in the base classes.
   to implement this functionality in the base classes.
 
 
 * Caches are now also limited in size, so their memory usage doesn't grow
 * Caches are now also limited in size, so their memory usage doesn't grow
-  out of control. You can set the maximum number of results the cache
-  can hold using the ``CELERY_MAX_CACHED_RESULTS`` setting (the default
-  is five thousand results). In addition, you can refetch already retrieved
-  results using ``backend.reload_task_result`` +
-  ``backend.reload_taskset_result`` (that's for those who want to send
-  results incrementally).
+  out of control.
+  
+    You can set the maximum number of results the cache
+    can hold using the ``CELERY_MAX_CACHED_RESULTS`` setting (the default
+    is five thousand results). In addition, you can refetch already retrieved
+    results using ``backend.reload_task_result`` +
+    ``backend.reload_taskset_result`` (that's for those who want to send
+    results incrementally).
+
+* ``celeryd`` now works on Windows again.
 
 
-* ``celeryd`` now works on Windows again. Note that if running with Django,
-  you can't use ``project.settings`` as the settings module name, but the
-  following should work::
+    Note that if running with Django,
+    you can't use ``project.settings`` as the settings module name, but the
+    following should work::
 
 
-		$ python manage.py celeryd --settings=settings
+        $ python manage.py celeryd --settings=settings
 
 
 * Execution: ``.messaging.TaskPublisher.send_task`` now
 * Execution: ``.messaging.TaskPublisher.send_task`` now
-  incorporates all the functionality apply_async previously did (like
-  converting countdowns to eta), so :func:`celery.execute.apply_async` is
-  now simply a convenient front-end to
-  :meth:`celery.messaging.TaskPublisher.send_task`, using
-  the task classes default options.
+  incorporates all the functionality apply_async previously did.
+  
+    Like converting countdowns to eta, so :func:`celery.execute.apply_async` is
+    now simply a convenient front-end to
+    :meth:`celery.messaging.TaskPublisher.send_task`, using
+    the task classes default options.
 
 
-  Also :func:`celery.execute.send_task` has been
-  introduced, which can apply tasks using just the task name (useful
-  if the client does not have the destination task in its task registry).
+    Also :func:`celery.execute.send_task` has been
+    introduced, which can apply tasks using just the task name (useful
+    if the client does not have the destination task in its task registry).
 
 
-  Example:
+    Example:
 
 
-		>>> from celery.execute import send_task
-		>>> result = send_task("celery.ping", args=[], kwargs={})
-		>>> result.get()
-		'pong'
+        >>> from celery.execute import send_task
+        >>> result = send_task("celery.ping", args=[], kwargs={})
+        >>> result.get()
+        'pong'
 
 
 * ``camqadm``: This is a new utility for command line access to the AMQP API.
 * ``camqadm``: This is a new utility for command line access to the AMQP API.
-  Excellent for deleting queues/bindings/exchanges, experimentation and
-  testing::
 
 
-	$ camqadm
-	1> help
+    Excellent for deleting queues/bindings/exchanges, experimentation and
+    testing::
+
+        $ camqadm
+        1> help
 
 
-  Gives an interactive shell, type ``help`` for a list of commands.
+    Gives an interactive shell, type ``help`` for a list of commands.
 
 
-  When using Django, use the management command instead::
+    When using Django, use the management command instead::
 
 
-  	$ python manage.py camqadm
-  	1> help
+        $ python manage.py camqadm
+        1> help
 
 
 * Redis result backend: To conform to recent Redis API changes, the following
 * Redis result backend: To conform to recent Redis API changes, the following
   settings has been deprecated:
   settings has been deprecated:
-  
-		* ``REDIS_TIMEOUT``
-		* ``REDIS_CONNECT_RETRY``
 
 
-  These will emit a ``DeprecationWarning`` if used.
+        * ``REDIS_TIMEOUT``
+        * ``REDIS_CONNECT_RETRY``
+
+    These will emit a ``DeprecationWarning`` if used.
 
 
-  A ``REDIS_PASSWORD`` setting has been added, so you can use the new
-  simple authentication mechanism in Redis.
+    A ``REDIS_PASSWORD`` setting has been added, so you can use the new
+    simple authentication mechanism in Redis.
 
 
 * The redis result backend no longer calls ``SAVE`` when disconnecting,
 * The redis result backend no longer calls ``SAVE`` when disconnecting,
   as this is apparently better handled by Redis itself.
   as this is apparently better handled by Redis itself.
@@ -183,9 +583,10 @@
 * The ETA scheduler now sleeps at most two seconds between iterations.
 * The ETA scheduler now sleeps at most two seconds between iterations.
 
 
 * The ETA scheduler now deletes any revoked tasks it might encounter.
 * The ETA scheduler now deletes any revoked tasks it might encounter.
-  As revokes are not yet persistent, this is done to make sure the task
-  is revoked even though it's currently being hold because its eta is e.g.
-  a week into the future.
+
+    As revokes are not yet persistent, this is done to make sure the task
+    is revoked even though it's currently being hold because its eta is e.g.
+    a week into the future.
 
 
 * The ``task_id`` argument is now respected even if the task is executed 
 * The ``task_id`` argument is now respected even if the task is executed 
   eagerly (either using apply, or ``CELERY_ALWAYS_EAGER``).
   eagerly (either using apply, or ``CELERY_ALWAYS_EAGER``).
@@ -193,11 +594,12 @@
 * The internal queues are now cleared if the connection is reset.
 * The internal queues are now cleared if the connection is reset.
 
 
 * New magic keyword argument: ``delivery_info``.
 * New magic keyword argument: ``delivery_info``.
-	Used by retry() to resend the task to its original destination using the same
-	exchange/routing_key.
+
+    Used by retry() to resend the task to its original destination using the same
+    exchange/routing_key.
 
 
 * Events: Fields was not passed by ``.send()`` (fixes the uuid keyerrors
 * Events: Fields was not passed by ``.send()`` (fixes the uuid keyerrors
-	in celerymon)
+  in celerymon)
 
 
 * Added ``--schedule``/``-s`` option to celeryd, so it is possible to
 * Added ``--schedule``/``-s`` option to celeryd, so it is possible to
   specify a custom schedule filename when using an embedded celerybeat
   specify a custom schedule filename when using an embedded celerybeat
@@ -217,8 +619,10 @@
 * TaskPublisher: Declarations are now done once (per process).
 * TaskPublisher: Declarations are now done once (per process).
 
 
 * Added ``Task.delivery_mode`` and the ``CELERY_DEFAULT_DELIVERY_MODE``
 * Added ``Task.delivery_mode`` and the ``CELERY_DEFAULT_DELIVERY_MODE``
-  setting. These can be used to mark messages non-persistent (i.e. so they are
-  lost if the broker is restarted).
+  setting.
+
+    These can be used to mark messages non-persistent (i.e. so they are
+    lost if the broker is restarted).
 
 
 * Now have our own ``ImproperlyConfigured`` exception, instead of using the
 * Now have our own ``ImproperlyConfigured`` exception, instead of using the
   Django one.
   Django one.
@@ -237,92 +641,97 @@ BACKWARD INCOMPATIBLE CHANGES
   available on your platform, or something like supervisord to make
   available on your platform, or something like supervisord to make
   celeryd/celerybeat/celerymon into background processes.
   celeryd/celerybeat/celerymon into background processes.
 
 
-  We've had too many problems with celeryd daemonizing itself, so it was
-  decided it has to be removed. Example startup scripts has been added to
-  ``contrib/``:
+    We've had too many problems with celeryd daemonizing itself, so it was
+    decided it has to be removed. Example startup scripts has been added to
+    ``contrib/``:
 
 
-      * Debian, Ubuntu, (start-stop-daemon)
+    * Debian, Ubuntu, (start-stop-daemon)
 
 
-           ``contrib/debian/init.d/celeryd``
-           ``contrib/debian/init.d/celerybeat``
+        ``contrib/debian/init.d/celeryd``
+        ``contrib/debian/init.d/celerybeat``
 
 
-      * Mac OS X launchd
+    * Mac OS X launchd
 
 
-            ``contrib/mac/org.celeryq.celeryd.plist``
-            ``contrib/mac/org.celeryq.celerybeat.plist``
-            ``contrib/mac/org.celeryq.celerymon.plist``
+        ``contrib/mac/org.celeryq.celeryd.plist``
+        ``contrib/mac/org.celeryq.celerybeat.plist``
+        ``contrib/mac/org.celeryq.celerymon.plist``
 
 
-      * Supervisord (http://supervisord.org)
+    * Supervisord (http://supervisord.org)
 
 
-            ``contrib/supervisord/supervisord.conf``
+        ``contrib/supervisord/supervisord.conf``
 
 
-  In addition to ``--detach``, the following program arguments has been
-  removed: ``--uid``, ``--gid``, ``--workdir``, ``--chroot``, ``--pidfile``,
-  ``--umask``. All good daemonization tools should support equivalent
-  functionality, so don't worry.
+    In addition to ``--detach``, the following program arguments has been
+    removed: ``--uid``, ``--gid``, ``--workdir``, ``--chroot``, ``--pidfile``,
+    ``--umask``. All good daemonization tools should support equivalent
+    functionality, so don't worry.
 
 
-  Also the following configuration keys has been removed:
-  ``CELERYD_PID_FILE``, ``CELERYBEAT_PID_FILE``, ``CELERYMON_PID_FILE``.
+    Also the following configuration keys has been removed:
+    ``CELERYD_PID_FILE``, ``CELERYBEAT_PID_FILE``, ``CELERYMON_PID_FILE``.
 
 
 * Default celeryd loglevel is now ``WARN``, to enable the previous log level
 * Default celeryd loglevel is now ``WARN``, to enable the previous log level
   start celeryd with ``--loglevel=INFO``.
   start celeryd with ``--loglevel=INFO``.
 
 
 * Tasks are automatically registered.
 * Tasks are automatically registered.
 
 
-  This means you no longer have to register your tasks manually.
-  You don't have to change your old code right away, as it doesn't matter if
-  a task is registered twice.
+    This means you no longer have to register your tasks manually.
+    You don't have to change your old code right away, as it doesn't matter if
+    a task is registered twice.
+
+    If you don't want your task to be automatically registered you can set
+    the ``abstract`` attribute
+
+    .. code-block:: python
 
 
-  If you don't want your task to be automatically registered you can set
-  the ``abstract`` attribute
+        class MyTask(Task):
+            abstract = True
 
 
-  .. code-block:: python
+    By using ``abstract`` only tasks subclassing this task will be automatically
+    registered (this works like the Django ORM).
 
 
-		class MyTask(Task):
-			abstract = True
+    If you don't want subclasses to be registered either, you can set the
+    ``autoregister`` attribute to ``False``.
 
 
-  By using ``abstract`` only tasks subclassing this task will be automatically
-  registered (this works like the Django ORM).
+    Incidentally, this change also fixes the problems with automatic name
+    assignment and relative imports. So you also don't have to specify a task name
+    anymore if you use relative imports.
 
 
-  If you don't want subclasses to be registered either, you can set the
-  ``autoregister`` attribute to ``False``.
+* You can no longer use regular functions as tasks.
 
 
-  Incidentally, this change also fixes the problems with automatic name
-  assignment and relative imports. So you also don't have to specify a task name
-  anymore if you use relative imports.
+    This change was added
+    because it makes the internals a lot more clean and simple. However, you can
+    now turn functions into tasks by using the ``@task`` decorator:
 
 
-* You can no longer use regular functions as tasks. This change was added
-  because it makes the internals a lot more clean and simple. However, you can
-  now turn functions into tasks by using the ``@task`` decorator:
+    .. code-block:: python
 
 
-  .. code-block:: python
+        from celery.decorators import task
 
 
-		from celery.decorators import task
+        @task
+        def add(x, y):
+            return x + y
 
 
-		@task
-		def add(x, y):
-			return x + y
+    See the User Guide: :doc:`userguide/tasks` for more information.
 
 
-  See the User Guide: :doc:`userguide/tasks` for more information.
+* The periodic task system has been rewritten to a centralized solution.
 
 
-* The periodic task system has been rewritten to a centralized solution, this
-  means ``celeryd`` no longer schedules periodic tasks by default, but a new
-  daemon has been introduced: ``celerybeat``.
+    This means ``celeryd`` no longer schedules periodic tasks by default,
+    but a new daemon has been introduced: ``celerybeat``.
 
 
-  To launch the periodic task scheduler you have to run celerybeat::
+    To launch the periodic task scheduler you have to run celerybeat::
 
 
-		$ celerybeat
+        $ celerybeat
 
 
-  Make sure this is running on one server only, if you run it twice, all
-  periodic tasks will also be executed twice.
+    Make sure this is running on one server only, if you run it twice, all
+    periodic tasks will also be executed twice.
 
 
-  If you only have one worker server you can embed it into celeryd like this::
+    If you only have one worker server you can embed it into celeryd like this::
 
 
-		$ celeryd --beat # Embed celerybeat in celeryd.
+        $ celeryd --beat # Embed celerybeat in celeryd.
 
 
-* The supervisor has been removed, please use something like
-  http://supervisord.org instead. This means the ``-S`` and ``--supervised``
-  options to ``celeryd`` is no longer supported.
+* The supervisor has been removed.
+
+    This means the ``-S`` and ``--supervised`` options to ``celeryd`` are
+    no longer supported. Please use something like http://supervisord.org
+    instead.
 
 
 * ``TaskSet.join`` has been removed, use ``TaskSetResult.join`` instead.
 * ``TaskSet.join`` has been removed, use ``TaskSetResult.join`` instead.
 
 
@@ -344,23 +753,26 @@ BACKWARD INCOMPATIBLE CHANGES
   now in ``celery.loaders.djangoapp``. Reason: Internal API.
   now in ``celery.loaders.djangoapp``. Reason: Internal API.
 
 
 * ``CELERY_LOADER`` now needs loader class name in addition to module name,
 * ``CELERY_LOADER`` now needs loader class name in addition to module name,
-  e.g. where you previously had: ``"celery.loaders.default"``, you now need
-  ``"celery.loaders.default.Loader"``, using the previous syntax will result
-  in a DeprecationWarning.
+
+    E.g. where you previously had: ``"celery.loaders.default"``, you now need
+    ``"celery.loaders.default.Loader"``, using the previous syntax will result
+    in a DeprecationWarning.
 
 
 * Detecting the loader is now lazy, and so is not done when importing
 * Detecting the loader is now lazy, and so is not done when importing
-  ``celery.loaders``. To make this happen ``celery.loaders.settings`` has
-  been renamed to ``load_settings`` and is now a function returning the
-  settings object. ``celery.loaders.current_loader`` is now also
-  a function, returning the current loader.
+  ``celery.loaders``.
 
 
-  So::
+    To make this happen ``celery.loaders.settings`` has
+    been renamed to ``load_settings`` and is now a function returning the
+    settings object. ``celery.loaders.current_loader`` is now also
+    a function, returning the current loader.
 
 
-    	loader = current_loader
+    So::
 
 
-  needs to be changed to::
+        loader = current_loader
 
 
-    	loader = current_loader()
+    needs to be changed to::
+
+        loader = current_loader()
 
 
 DEPRECATIONS
 DEPRECATIONS
 ------------
 ------------
@@ -368,25 +780,28 @@ DEPRECATIONS
 * The following configuration variables has been renamed and will be
 * The following configuration variables has been renamed and will be
   deprecated in v1.2:
   deprecated in v1.2:
 
 
-  	* CELERYD_DAEMON_LOG_FORMAT -> CELERYD_LOG_FORMAT
-  	* CELERYD_DAEMON_LOG_LEVEL -> CELERYD_LOG_LEVEL
-  	* CELERY_AMQP_CONNECTION_TIMEOUT -> CELERY_BROKER_CONNECTION_TIMEOUT
-  	* CELERY_AMQP_CONNECTION_RETRY -> CELERY_BROKER_CONNECTION_RETRY
-  	* CELERY_AMQP_CONNECTION_MAX_RETRIES -> CELERY_BROKER_CONNECTION_MAX_RETRIES
-  	* SEND_CELERY_TASK_ERROR_EMAILS -> CELERY_SEND_TASK_ERROR_EMAILS
+    * CELERYD_DAEMON_LOG_FORMAT -> CELERYD_LOG_FORMAT
+    * CELERYD_DAEMON_LOG_LEVEL -> CELERYD_LOG_LEVEL
+    * CELERY_AMQP_CONNECTION_TIMEOUT -> CELERY_BROKER_CONNECTION_TIMEOUT
+    * CELERY_AMQP_CONNECTION_RETRY -> CELERY_BROKER_CONNECTION_RETRY
+    * CELERY_AMQP_CONNECTION_MAX_RETRIES -> CELERY_BROKER_CONNECTION_MAX_RETRIES
+    * SEND_CELERY_TASK_ERROR_EMAILS -> CELERY_SEND_TASK_ERROR_EMAILS
 
 
 * The public api names in celery.conf has also changed to a consistent naming
 * The public api names in celery.conf has also changed to a consistent naming
   scheme.
   scheme.
 
 
-* We now support consuming from an arbitrary number of queues, but to do this
-  we had to rename the configuration syntax. If you use any of the custom
-  AMQP routing options (queue/exchange/routing_key, etc), you should read the
-  new FAQ entry: http://bit.ly/aiWoH. The previous syntax is deprecated and
-  scheduled for removal in v1.2.
+* We now support consuming from an arbitrary number of queues.
+
+    To do this we had to rename the configuration syntax. If you use any of
+    the custom AMQP routing options (queue/exchange/routing_key, etc), you
+    should read the new FAQ entry: http://bit.ly/aiWoH.
+
+    The previous syntax is deprecated and scheduled for removal in v1.2.
 
 
 * ``TaskSet.run`` has been renamed to ``TaskSet.apply_async``.
 * ``TaskSet.run`` has been renamed to ``TaskSet.apply_async``.
-  ``run`` is still deprecated, and is scheduled for removal in v1.2.
 
 
+    ``TaskSet.run`` has now been deprecated, and is scheduled for
+    removal in v1.2.
 
 
 NEWS
 NEWS
 ----
 ----
@@ -400,12 +815,14 @@ NEWS
 * New cool task decorator syntax.
 * New cool task decorator syntax.
 
 
 * celeryd now sends events if enabled with the ``-E`` argument.
 * celeryd now sends events if enabled with the ``-E`` argument.
-  Excellent for monitoring tools, one is already in the making
-  (http://github.com/ask/celerymon).
 
 
-  Current events include: worker-heartbeat,
-  task-[received/succeeded/failed/retried],
-  worker-online, worker-offline.
+
+    Excellent for monitoring tools, one is already in the making
+    (http://github.com/ask/celerymon).
+
+    Current events include: worker-heartbeat,
+    task-[received/succeeded/failed/retried],
+    worker-online, worker-offline.
 
 
 * You can now delete (revoke) tasks that has already been applied.
 * You can now delete (revoke) tasks that has already been applied.
 
 
@@ -419,10 +836,11 @@ NEWS
 
 
 * ``celeryd`` now responds to the ``HUP`` signal by restarting itself.
 * ``celeryd`` now responds to the ``HUP`` signal by restarting itself.
 
 
-* Periodic tasks are now scheduled on the clock, i.e. ``timedelta(hours=1)``
-  means every hour at :00 minutes, not every hour from the server starts.
-  To revert to the previous behaviour you can set
-  ``PeriodicTask.relative = True``.
+* Periodic tasks are now scheduled on the clock.
+
+    I.e. ``timedelta(hours=1)`` means every hour at :00 minutes, not every
+    hour from the server starts.  To revert to the previous behaviour you
+    can set ``PeriodicTask.relative = True``.
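+
+    For example (a sketch)::
+
+        from datetime import timedelta
+        from celery.task import PeriodicTask
+
+        class SyncFeeds(PeriodicTask):
+            run_every = timedelta(hours=1)
+            relative = True  # revert to scheduling relative to server start
+
+            def run(self, **kwargs):
+                ...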
 
 
 * Now supports passing execute options to a TaskSets list of args, e.g.:
 * Now supports passing execute options to a TaskSets list of args, e.g.:
 
 
@@ -432,14 +850,16 @@ NEWS
     >>> ts.run()
     >>> ts.run()
 
 
 * Got a 3x performance gain by setting the prefetch count to four times the 
 * Got a 3x performance gain by setting the prefetch count to four times the 
-  concurrency, (from an average task round-trip of 0.1s to 0.03s!). A new
-  setting has been added: ``CELERYD_PREFETCH_MULTIPLIER``, which is set
-  to ``4`` by default.
+  concurrency, (from an average task round-trip of 0.1s to 0.03s!).
+
+    A new setting has been added: ``CELERYD_PREFETCH_MULTIPLIER``, which
+    is set to ``4`` by default.
 
 
 * Improved support for webhook tasks.
 * Improved support for webhook tasks.
-  ``celery.task.rest`` is now deprecated, replaced with the new and shiny
-  :mod:`celery.task.http`. With more reflective names, sensible interface, and
-  it's possible to override the methods used to perform HTTP requests.
+
+    ``celery.task.rest`` is now deprecated, replaced with the new and shiny
+    :mod:`celery.task.http`. With more reflective names, sensible interface,
+    and it's possible to override the methods used to perform HTTP requests.
 
 
 * The results of tasksets are now cached by storing it in the result
 * The results of tasksets are now cached by storing it in the result
   backend.
   backend.
@@ -456,8 +876,9 @@ CHANGES
 * The ``uuid`` distribution is added as a dependency when running Python 2.4.
 * The ``uuid`` distribution is added as a dependency when running Python 2.4.
 
 
 * Now remembers the previously detected loader by keeping it in
 * Now remembers the previously detected loader by keeping it in
-  the ``CELERY_LOADER`` environment variable. This may help on windows where
-  fork emulation is used.
+  the ``CELERY_LOADER`` environment variable.
+
+    This may help on windows where fork emulation is used.
 
 
 * ETA no longer sends datetime objects, but uses ISO 8601 date format in a
 * ETA no longer sends datetime objects, but uses ISO 8601 date format in a
   string for better compatibility with other platforms.
   string for better compatibility with other platforms.
@@ -469,9 +890,10 @@ CHANGES
 * Refactored the ExecuteWrapper, ``apply`` and ``CELERY_ALWAYS_EAGER`` now
 * Refactored the ExecuteWrapper, ``apply`` and ``CELERY_ALWAYS_EAGER`` now
   also executes the task callbacks and signals.
   also executes the task callbacks and signals.
 
 
-* Now using a proper scheduler for the tasks with an ETA. This means waiting
-  eta tasks are sorted by time, so we don't have to poll the whole list all the
-  time.
+* Now using a proper scheduler for the tasks with an ETA.
+
+    This means waiting eta tasks are sorted by time, so we don't have
+    to poll the whole list all the time.
 
 
 * Now also imports modules listed in CELERY_IMPORTS when running
 * Now also imports modules listed in CELERY_IMPORTS when running
   with django (as documented).
   with django (as documented).
@@ -484,8 +906,10 @@ CHANGES
   connection to the broker.
   connection to the broker.
 
 
 * When running as a separate service the periodic task scheduler does some
 * When running as a separate service the periodic task scheduler does some
-  smart moves to not poll too regularly, if you need faster poll times you
-  can lower the value of ``CELERYBEAT_MAX_LOOP_INTERVAL``.
+  smart moves to not poll too regularly.
+
+    If you need faster poll times you can lower the value
+    of ``CELERYBEAT_MAX_LOOP_INTERVAL``.
 
 
 * You can now change periodic task intervals at runtime, by making
 * You can now change periodic task intervals at runtime, by making
   ``run_every`` a property, or subclassing ``PeriodicTask.is_due``.
   ``run_every`` a property, or subclassing ``PeriodicTask.is_due``.

+ 175 - 156
FAQ

@@ -58,41 +58,11 @@ Is celery for Django only?
 
 
 **Answer:** No.
 **Answer:** No.
 
 
-You can use all of the features without using Django.
+Celery does not depend on Django anymore. To use Celery with Django you have
+to use the `django-celery`_ package:
 
 
 
 
-Why is Django a dependency?
----------------------------
-
-Celery uses the Django ORM for database access when using the database result
-backend, the Django cache framework when using the cache result backend, and the Django signal
-dispatch mechanisms for signaling.
-
-This doesn't mean you need to have a Django project to use celery, it
-just means that sometimes we use internal Django components.
-
-The long term plan is to replace these with other solutions, (e.g. `SQLAlchemy`_ as the ORM,
-and `louie`_, for signaling). The celery distribution will be split into two:
-
-    * celery
-
-        The core. Using SQLAlchemy for the database backend.
-
-    * django-celery
-
-        Celery integration for Django, using the Django ORM for the database
-        backend.
-
-We're currently seeking people with `SQLAlchemy`_ experience, so please
-contact the project if you want this done sooner.
-
-The reason for the split is for purity only. It shouldn't affect you much as a
-user, so please don't worry about the Django dependency, just have a good time
-using celery.
-
-.. _`SQLAlchemy`: http://www.sqlalchemy.org/
-.. _`louie`: http://pypi.python.org/pypi/Louie/
-
+.. _`django-celery`: http://pypi.python.org/pypi/django-celery
 
 
 Do I have to use AMQP/RabbitMQ?
 Do I have to use AMQP/RabbitMQ?
 -------------------------------
 -------------------------------
@@ -222,9 +192,7 @@ with::
 Why won't my Task run?
 Why won't my Task run?
 ----------------------
 ----------------------
 
 
-**Answer:** Did you register the task in the applications ``tasks.py`` module?
-(or in some other module Django loads by default, like ``models.py``?).
-Also there might be syntax errors preventing the tasks module being imported.
+**Answer:** There might be syntax errors preventing the tasks module from being imported.
 
 
 You can find out if celery is able to run the task by executing the
 You can find out if celery is able to run the task by executing the
 task manually:
 task manually:
@@ -274,6 +242,25 @@ Windows: The ``-B`` / ``--beat`` option to celeryd doesn't work?
 **Answer**: That's right. Run ``celerybeat`` and ``celeryd`` as separate
 **Answer**: That's right. Run ``celerybeat`` and ``celeryd`` as separate
 services instead.
 services instead.
 
 
+Tasks
+=====
+
+How can I reuse the same connection when applying tasks?
+--------------------------------------------------------
+
+**Answer**: See :doc:`userguide/executing`.
+
+Can I execute a task by name?
+-----------------------------
+
+**Answer**: Yes. Use :func:`celery.execute.send_task`.
+You can also execute a task by name from any language
+that has an AMQP client.
+
+    >>> from celery.execute import send_task
+    >>> send_task("tasks.add", args=[2, 2], kwargs={})
+    <AsyncResult: 373550e8-b9a0-4666-bc61-ace01fa4f91d>
+
 Results
 Results
 =======
 =======
 
 
@@ -298,6 +285,44 @@ If you need to specify a custom result backend you should use
 Brokers
 Brokers
 =======
 =======
 
 
+Why is RabbitMQ crashing?
+-------------------------
+
+RabbitMQ will crash if it runs out of memory. This will be fixed in a
+future release of RabbitMQ. Please refer to the RabbitMQ FAQ:
+http://www.rabbitmq.com/faq.html#node-runs-out-of-memory
+
+Some common Celery misconfigurations can crash RabbitMQ:
+
+* Events.
+
+Running ``celeryd`` with the ``-E``/``--events`` option will send messages
+for events happening inside of the worker. If these event messages
+are not consumed, you will eventually run out of memory.
+
+Events should only be enabled if you have an active monitor consuming them.
+
+* AMQP backend results.
+
+When running with the AMQP result backend, every task result will be sent
+as a message. If you don't collect these results, they will build up and
+RabbitMQ will eventually run out of memory.
+
+If you don't use the results for a task, make sure you set the
+``ignore_result`` option:
+
+.. code-block:: python
+
+    @task(ignore_result=True)
+    def mytask():
+        ...
+
+    class MyTask(Task):
+        ignore_result = True
+
+Results can also be disabled globally using the ``CELERY_IGNORE_RESULT``
+setting.
+
 Can I use celery with ActiveMQ/STOMP?
 Can I use celery with ActiveMQ/STOMP?
 -------------------------------------
 -------------------------------------
 
 
@@ -382,6 +407,58 @@ using the STOMP backend:
 Features
 Features
 ========
 ========
 
 
+How can I run a task once another task has finished?
+----------------------------------------------------
+
+**Answer**: You can safely launch a task inside a task.
+Also, a common pattern is to use callback tasks:
+
+.. code-block:: python
+
+    @task()
+    def add(x, y, callback=None):
+        result = x + y
+        if callback:
+            callback.delay(result)
+        return result
+
+
+    @task(ignore_result=True)
+    def log_result(result, **kwargs):
+        logger = log_result.get_logger(**kwargs)
+        logger.info("log_result got: %s" % (result, ))
+
+
+    >>> add.delay(2, 2, callback=log_result)
+
+Can I cancel the execution of a task?
+-------------------------------------
+**Answer**: Yes. Use ``result.revoke``::
+
+    >>> result = add.apply_async(args=[2, 2], countdown=120)
+    >>> result.revoke()
+
+or if you only have the task id::
+
+    >>> from celery.task.control import revoke
+    >>> revoke(task_id)
+
+Why aren't my remote control commands received by all workers?
+--------------------------------------------------------------
+
+**Answer**: To receive broadcast remote control commands, every ``celeryd``
+uses its hostname to create a unique queue name to listen to,
+so if you have more than one worker with the same hostname, the
+control commands will be received in round-robin between them.
+
+To work around this you can explicitly set the hostname for every worker
+using the ``--hostname`` argument to ``celeryd``::
+
+    $ celeryd --hostname=$(hostname).1
+    $ celeryd --hostname=$(hostname).2
+
+etc, etc.
+
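+The new ``celeryd-multi`` helper can generate such commands for you;
+for example (taken from its built-in help)::
+
+    $ celeryd-multi start 2 -n worker.example.com -c 3
+    celeryd -n celeryd1.worker.example.com -c 3
+    celeryd -n celeryd2.worker.example.com -c 3
+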
 Can I send some tasks to only some servers?
 Can I send some tasks to only some servers?
 --------------------------------------------
 --------------------------------------------
 
 
@@ -503,121 +580,6 @@ could also be useful as a source of information.
 .. _`Standard Exchange Types`: http://bit.ly/EEWca
 .. _`Standard Exchange Types`: http://bit.ly/EEWca
 .. _`RabbitMQ FAQ`: http://www.rabbitmq.com/faq.html
 .. _`RabbitMQ FAQ`: http://www.rabbitmq.com/faq.html
 
 
-Can I use celery without Django?
---------------------------------
-
-**Answer:** Yes.
-
-Celery uses something called loaders to read/setup configuration, import
-modules that register tasks and to decide what happens when a task is
-executed. Currently there are two loaders, the default loader and the Django
-loader. If you want to use celery without a Django project, you either have to
-use the default loader, or write a loader of your own.
-
-The rest of this answer describes how to use the default loader.
-
-While it is possible to use Celery from outside of Django, we still need
-Django itself to run, this is to use the ORM and cache-framework.
-Duplicating these features would be time consuming and mostly pointless, so
-while me might rewrite these in the future, this is a good solution in the
-mean time.
-Install Django using your favorite install tool, ``easy_install``, ``pip``, or
-whatever::
-
-    # easy_install django # as root
-
-You need a configuration file named ``celeryconfig.py``, either in the
-directory you run ``celeryd`` in, or in a Python library path where it is
-able to find it. The configuration file can contain any of the settings
-described in :mod:`celery.conf`. In addition; if you're using the
-database backend you have to configure the database. Here is an example
-configuration using the database backend with MySQL:
-
-.. code-block:: python
-
-    # Broker configuration
-    BROKER_HOST = "localhost"
-    BROKER_PORT = "5672"
-    BROKER_VHOST = "celery"
-    BROKER_USER = "celery"
-    BROKER_PASSWORD = "celerysecret"
-    CARROT_BACKEND="amqp"
-
-    # Using the database backend.
-    CELERY_RESULT_BACKEND = "database"
-    DATABASE_ENGINE = "mysql" # see Django docs for a description of these.
-    DATABASE_NAME = "mydb"
-    DATABASE_HOST = "mydb.example.org"
-    DATABASE_USER = "myuser"
-    DATABASE_PASSWORD = "mysecret"
-
-    # Number of processes that processes tasks simultaneously.
-    CELERYD_CONCURRENCY = 8
-
-    # Modules to import when celeryd starts.
-    # This must import every module where you register tasks so celeryd
-    # is able to find and run them.
-    CELERY_IMPORTS = ("mytaskmodule1", "mytaskmodule2")
-    
-With this configuration file in the current directory you have to
-run ``celeryinit`` to create the database tables::
-
-    $ celeryinit
-
-At this point you should be able to successfully run ``celeryd``::
-
-    $ celeryd --loglevel=INFO
-
-and send a task from a python shell (note that it must be able to import
-``celeryconfig.py``):
-
-    >>> from celery.task.builtins import PingTask
-    >>> result = PingTask.apply_async()
-    >>> result.get()
-    'pong'
-
-The celery test-suite is failing
---------------------------------
-
-**Answer**: If you're running tests from your Django project, and the celery
-test suite is failing in that context, then follow the steps below. If the
-celery tests are failing in another context, please report an issue to our
-issue tracker at GitHub:
-
-    http://github.com/ask/celery/issues/
-
-That Django is running tests for all applications in ``INSTALLED_APPS``
-by default is a pet peeve for many. You should use a test runner that either
-
-    1) Explicitly lists the apps you want to run tests for, or
-
-    2) Make a test runner that skips tests for apps you don't want to run.
-
-For example the test runner that celery is using:
-
-    http://bit.ly/NVKep
-
-To use this test runner, add the following to your ``settings.py``:
-
-.. code-block:: python
-
-    TEST_RUNNER = "celery.tests.runners.run_tests"
-    TEST_APPS = (
-        "app1",
-        "app2",
-        "app3",
-        "app4",
-    )
-
-Or, if you just want to skip the celery tests:
-
-.. code-block:: python
-
-    INSTALLED_APPS = (.....)
-    TEST_RUNNER = "celery.tests.runners.run_tests"
-    TEST_APPS = filter(lambda k: k != "celery", INSTALLED_APPS)
-
-
 Can I change the interval of a periodic task at runtime?
 Can I change the interval of a periodic task at runtime?
 --------------------------------------------------------
 --------------------------------------------------------
 
 
@@ -647,6 +609,50 @@ to different servers. In the real world this may actually work better than per m
 priorities. You can use this in combination with rate limiting to achieve a
 priorities. You can use this in combination with rate limiting to achieve a
 highly performant system.
 highly performant system.
 
 
+Should I use retry or acks_late?
+--------------------------------
+
+**Answer**: Depends. It's not necessarily one or the other, you may want
+to use both.
+
+``Task.retry`` is used to retry tasks, notably for expected errors that
+are catchable with a ``try:`` block. The AMQP transaction is not used
+for these errors: **if the task raises an exception it is still acked!**
+
+The ``acks_late`` setting would be used when you need the task to be
+executed again if the worker (for some reason) crashes mid-execution.
+It's important to note that the worker is not expected to crash, and if
+it does, it is usually an unrecoverable error that requires human
+intervention (a bug in the worker or task code).
+
+In an ideal world you could safely retry any task that has failed, but
+this is rarely the case. Imagine the following task:
+
+.. code-block:: python
+
+    @task()
+    def process_upload(filename, tmpfile):
+        # Increment a file count stored in a database
+        increment_file_counter()
+        add_file_metadata_to_db(filename, tmpfile)
+        copy_file_to_destination(filename, tmpfile)
+
+If this crashed in the middle of copying the file to its destination,
+the world would be left in an incomplete state. This is not a critical
+scenario of course, but you can probably imagine something far more
+sinister. So for ease of programming we trade away some reliability;
+it's a good default, and users who require it and know what they
+are doing can still enable ``acks_late`` (and in the future hopefully
+use manual acknowledgement).
+
+In addition, ``Task.retry`` has features not available in AMQP
+transactions: delay between retries, max retries, etc.
+
+So use ``retry`` for expected Python errors, and if your task is
+reentrant, combine that with ``acks_late`` when that level of
+reliability is required.
+
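+For example, a minimal sketch combining the two (``fetch_page`` is just
+an illustrative task, not part of celery):
+
+.. code-block:: python
+
+    import urllib2
+
+    from celery.decorators import task
+
+    @task(acks_late=True, max_retries=3, default_retry_delay=60)
+    def fetch_page(url, **kwargs):
+        try:
+            return urllib2.urlopen(url).read()
+        except IOError, exc:
+            # An expected, catchable error: retry instead of failing hard.
+            fetch_page.retry(args=[url], kwargs=kwargs, exc=exc)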
+
 Can I schedule tasks to execute at a specific time?
 Can I schedule tasks to execute at a specific time?
 ---------------------------------------------------
 ---------------------------------------------------
 
 
@@ -654,11 +660,18 @@ Can I schedule tasks to execute at a specific time?
 
 
 **Answer**: Yes. You can use the ``eta`` argument of :meth:`Task.apply_async`.
 **Answer**: Yes. You can use the ``eta`` argument of :meth:`Task.apply_async`.
 
 
-However, you can't schedule a periodic task at a specific time yet.
-The good news is, if anyone is willing
-to implement it, it shouldn't be that hard. Some pointers to achieve this has
-been written here: http://bit.ly/99UQNO
+Or to schedule a periodic task at a specific time, use the
+:class:`celery.task.schedules.crontab` schedule behavior:
+
+
+.. code-block:: python
+
+    from celery.task.schedules import crontab
+    from celery.decorators import periodic_task
 
 
+    @periodic_task(run_every=crontab(hour=7, minute=30, day_of_week="mon"))
+    def every_monday_morning():
+        print("This is run every Monday morning at 7:30")
 
 
 How do I shut down ``celeryd`` safely?
 How do I shut down ``celeryd`` safely?
 --------------------------------------
 --------------------------------------
@@ -668,4 +681,10 @@ executing jobs and shut down as soon as possible. No tasks should be lost.
 
 
 You should never stop ``celeryd`` with the ``KILL`` signal (``-9``),
 You should never stop ``celeryd`` with the ``KILL`` signal (``-9``),
 unless you've tried ``TERM`` a few times and waited a few minutes to let it
 unless you've tried ``TERM`` a few times and waited a few minutes to let it
-get a chance to shut down.
+get a chance to shut down. If you do, tasks may be terminated mid-execution,
+and they will not be re-run unless you have the ``acks_late`` option set
+(``Task.acks_late`` / ``CELERY_ACKS_LATE``).
+
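+For example, to enable late acknowledgement for all tasks, set
+``CELERY_ACKS_LATE`` in your configuration::
+
+    CELERY_ACKS_LATE = True
+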
+How do I run celeryd in the background on [platform]?
+-----------------------------------------------------
+
+**Answer**: Please see :doc:`cookbook/daemonizing`.

+ 1 - 0
MANIFEST.in

@@ -8,6 +8,7 @@ include TODO
 include THANKS
 include THANKS
 include pavement.py
 include pavement.py
 include setup.cfg
 include setup.cfg
+recursive-include bin *
 recursive-include celery *.py
 recursive-include celery *.py
 recursive-include docs *
 recursive-include docs *
 recursive-include tests *
 recursive-include tests *

+ 14 - 10
README.rst

@@ -4,12 +4,12 @@
 
 
 .. image:: http://cloud.github.com/downloads/ask/celery/celery_favicon_128.png
 .. image:: http://cloud.github.com/downloads/ask/celery/celery_favicon_128.png
 
 
-:Version: 1.0.1
+:Version: 1.1.0
 :Web: http://celeryproject.org/
 :Web: http://celeryproject.org/
 :Download: http://pypi.python.org/pypi/celery/
 :Download: http://pypi.python.org/pypi/celery/
 :Source: http://github.com/ask/celery/
 :Source: http://github.com/ask/celery/
 :Keywords: task queue, job queue, asynchronous, rabbitmq, amqp, redis,
 :Keywords: task queue, job queue, asynchronous, rabbitmq, amqp, redis,
-  django, python, webhooks, queue, distributed
+  python, webhooks, queue, distributed
 
 
 --
 --
 
 
@@ -22,14 +22,20 @@ more worker servers. Tasks can execute asynchronously (in the background) or syn
 
 
 Celery is already used in production to process millions of tasks a day.
 Celery is already used in production to process millions of tasks a day.
 
 
-Celery was originally created for use with Django, but is now usable
-from any Python project. It can
-also `operate with other languages via webhooks`_.
+Celery is written in Python, but the protocol can be implemented in any
+language. It can also `operate with other languages using webhooks`_.
 
 
-The recommended message broker is `RabbitMQ`_, but support for Redis and
-databases is also available.
+The recommended message broker is `RabbitMQ`_, but support for `Redis`_ and
+databases (`SQLAlchemy`_) is also available.
 
 
-.. _`operate with other languages via webhooks`:
+You may also be pleased to know that full Django integration exists
+via the `django-celery`_ package.
+
+.. _`RabbitMQ`: http://www.rabbitmq.com/
+.. _`Redis`: http://code.google.com/p/redis/
+.. _`SQLAlchemy`: http://www.sqlalchemy.org/
+.. _`django-celery`: http://pypi.python.org/pypi/django-celery
+.. _`operate with other languages using webhooks`:
     http://ask.github.com/celery/userguide/remote-tasks.html
     http://ask.github.com/celery/userguide/remote-tasks.html
 
 
 Overview
 Overview
@@ -150,12 +156,10 @@ Features
     +-----------------+----------------------------------------------------+
     +-----------------+----------------------------------------------------+
 
 
 
 
-.. _`RabbitMQ`: http://www.rabbitmq.com/
 .. _`clustering`: http://www.rabbitmq.com/clustering.html
 .. _`clustering`: http://www.rabbitmq.com/clustering.html
 .. _`AMQP`: http://www.amqp.org/
 .. _`AMQP`: http://www.amqp.org/
 .. _`Stomp`: http://stomp.codehaus.org/
 .. _`Stomp`: http://stomp.codehaus.org/
 .. _`MongoDB`: http://www.mongodb.org/
 .. _`MongoDB`: http://www.mongodb.org/
-.. _`Redis`: http://code.google.com/p/redis/
 .. _`Tokyo Tyrant`: http://tokyocabinet.sourceforge.net/
 .. _`Tokyo Tyrant`: http://tokyocabinet.sourceforge.net/
 
 
 Documentation
 Documentation

+ 1 - 1
bin/celerybeat

@@ -7,4 +7,4 @@ from celery.bin import celerybeat
 
 
 if __name__ == "__main__":
 if __name__ == "__main__":
     options = celerybeat.parse_options(sys.argv[1:])
     options = celerybeat.parse_options(sys.argv[1:])
-    celerybeat.run_clockservice(**vars(options))
+    celerybeat.run_celerybeat(**vars(options))

+ 5 - 0
bin/celeryd-multi

@@ -0,0 +1,5 @@
+#!/usr/bin/env python
+from celery.bin.celeryd_multi import main
+
+if __name__ == "__main__":
+    main()

+ 0 - 8
bin/celeryinit

@@ -1,8 +0,0 @@
-#!/usr/bin/env python
-import sys
-if not '' in sys.path:
-    sys.path.insert(0, '')
-from celery.bin import celeryinit
-
-if __name__ == "__main__":
-    celeryinit.main()

+ 2 - 2
celery/__init__.py

@@ -1,8 +1,8 @@
 """Distributed Task Queue"""
 """Distributed Task Queue"""
 
 
-VERSION = (1, 0, 2)
+VERSION = (1, 1, 0)
 
 
-__version__ = ".".join(map(str, VERSION))
+__version__ = ".".join(map(str, VERSION[0:3])) + "".join(VERSION[3:])
 __author__ = "Ask Solem"
 __author__ = "Ask Solem"
 __contact__ = "askh@opera.com"
 __contact__ = "askh@opera.com"
 __homepage__ = "http://github.com/ask/celery/"
 __homepage__ = "http://github.com/ask/celery/"

+ 6 - 6
celery/backends/__init__.py

@@ -2,15 +2,14 @@ from billiard.utils.functional import curry
 
 
 from celery import conf
 from celery import conf
 from celery.utils import get_cls_by_name
 from celery.utils import get_cls_by_name
+from celery.loaders import current_loader
 
 
 BACKEND_ALIASES = {
 BACKEND_ALIASES = {
     "amqp": "celery.backends.amqp.AMQPBackend",
     "amqp": "celery.backends.amqp.AMQPBackend",
-    "database": "celery.backends.database.DatabaseBackend",
-    "db": "celery.backends.database.DatabaseBackend",
     "redis": "celery.backends.pyredis.RedisBackend",
     "redis": "celery.backends.pyredis.RedisBackend",
-    "cache": "celery.backends.cache.CacheBackend",
     "mongodb": "celery.backends.mongodb.MongoBackend",
     "mongodb": "celery.backends.mongodb.MongoBackend",
     "tyrant": "celery.backends.tyrant.TyrantBackend",
     "tyrant": "celery.backends.tyrant.TyrantBackend",
+    "database": "celery.backends.database.DatabaseBackend",
 }
 }
 
 
 _backend_cache = {}
 _backend_cache = {}
@@ -19,14 +18,15 @@ _backend_cache = {}
 def get_backend_cls(backend):
 def get_backend_cls(backend):
     """Get backend class by name/alias"""
     """Get backend class by name/alias"""
     if backend not in _backend_cache:
     if backend not in _backend_cache:
-        _backend_cache[backend] = get_cls_by_name(backend, BACKEND_ALIASES)
+        aliases = dict(BACKEND_ALIASES, **current_loader().override_backends)
+        _backend_cache[backend] = get_cls_by_name(backend, aliases)
     return _backend_cache[backend]
     return _backend_cache[backend]
 
 
 
 
 """
 """
 .. function:: get_default_backend_cls()
 .. function:: get_default_backend_cls()
 
 
-    Get the backend class specified in :setting:`CELERY_RESULT_BACKEND`.
+    Get the backend class specified in the ``CELERY_RESULT_BACKEND`` setting.
 
 
 """
 """
 get_default_backend_cls = curry(get_backend_cls, conf.RESULT_BACKEND)
 get_default_backend_cls = curry(get_backend_cls, conf.RESULT_BACKEND)
@@ -36,7 +36,7 @@ get_default_backend_cls = curry(get_backend_cls, conf.RESULT_BACKEND)
 .. class:: DefaultBackend
 .. class:: DefaultBackend
 
 
     The default backend class used for storing task results and status,
     The default backend class used for storing task results and status,
-    specified in :setting:`CELERY_RESULT_BACKEND`.
+    specified in the ``CELERY_RESULT_BACKEND`` setting.
 
 
 """
 """
 DefaultBackend = get_default_backend_cls()
 DefaultBackend = get_default_backend_cls()

+ 78 - 51
celery/backends/amqp.py

@@ -1,11 +1,41 @@
 """celery.backends.amqp"""
 """celery.backends.amqp"""
+import socket
+
 from carrot.messaging import Consumer, Publisher
 from carrot.messaging import Consumer, Publisher
 
 
 from celery import conf
 from celery import conf
+from celery import states
+from celery.exceptions import TimeoutError
 from celery.backends.base import BaseDictBackend
 from celery.backends.base import BaseDictBackend
 from celery.messaging import establish_connection
 from celery.messaging import establish_connection
 
 
 
 
+class ResultPublisher(Publisher):
+    exchange = conf.RESULT_EXCHANGE
+    exchange_type = conf.RESULT_EXCHANGE_TYPE
+    delivery_mode = conf.RESULT_PERSISTENT and 2 or 1
+    serializer = conf.RESULT_SERIALIZER
+    durable = conf.RESULT_PERSISTENT
+
+    def __init__(self, connection, task_id, **kwargs):
+        super(ResultPublisher, self).__init__(connection,
+                        routing_key=task_id.replace("-", ""),
+                        **kwargs)
+
+
+class ResultConsumer(Consumer):
+    exchange = conf.RESULT_EXCHANGE
+    exchange_type = conf.RESULT_EXCHANGE_TYPE
+    durable = conf.RESULT_PERSISTENT
+    no_ack = True
+    auto_delete = True
+
+    def __init__(self, connection, task_id, **kwargs):
+        routing_key = task_id.replace("-", "")
+        super(ResultConsumer, self).__init__(connection,
+                queue=routing_key, routing_key=routing_key, **kwargs)
+
+
 class AMQPBackend(BaseDictBackend):
 class AMQPBackend(BaseDictBackend):
     """AMQP backend. Publish results by sending messages to the broker
     """AMQP backend. Publish results by sending messages to the broker
     using the task id as routing key.
     using the task id as routing key.
@@ -17,50 +47,28 @@ class AMQPBackend(BaseDictBackend):
     """
     """
 
 
     exchange = conf.RESULT_EXCHANGE
     exchange = conf.RESULT_EXCHANGE
-    capabilities = ["ResultStore"]
+    exchange_type = conf.RESULT_EXCHANGE_TYPE
+    persistent = conf.RESULT_PERSISTENT
+    serializer = conf.RESULT_SERIALIZER
     _connection = None
     _connection = None
-    _use_debug_tracking = False
-    _seen = set()
 
 
-    def __init__(self, *args, **kwargs):
-        super(AMQPBackend, self).__init__(*args, **kwargs)
+    def _create_publisher(self, task_id, connection):
+        delivery_mode = self.persistent and 2 or 1
 
 
-    @property
-    def connection(self):
-        if not self._connection:
-            self._connection = establish_connection()
-        return self._connection
+        # Declares the queue.
+        self._create_consumer(task_id, connection).close()
 
 
-    def _declare_queue(self, task_id, connection):
-        routing_key = task_id.replace("-", "")
-        backend = connection.create_backend()
-        backend.queue_declare(queue=routing_key, durable=True,
-                                exclusive=False, auto_delete=True)
-        backend.exchange_declare(exchange=self.exchange,
-                                 type="direct",
-                                 durable=True,
-                                 auto_delete=False)
-        backend.queue_bind(queue=routing_key, exchange=self.exchange,
-                           routing_key=routing_key)
-        backend.close()
-
-    def _publisher_for_task_id(self, task_id, connection):
-        routing_key = task_id.replace("-", "")
-        self._declare_queue(task_id, connection)
-        p = Publisher(connection, exchange=self.exchange,
-                      exchange_type="direct",
-                      routing_key=routing_key)
-        return p
+        return ResultPublisher(connection, task_id,
+                               exchange=self.exchange,
+                               exchange_type=self.exchange_type,
+                               delivery_mode=delivery_mode,
+                               serializer=self.serializer)
 
 
-    def _consumer_for_task_id(self, task_id, connection):
-        routing_key = task_id.replace("-", "")
-        self._declare_queue(task_id, connection)
-        return Consumer(connection, queue=routing_key,
-                        exchange=self.exchange,
-                        exchange_type="direct",
-                        no_ack=False, auto_ack=False,
-                        auto_delete=True,
-                        routing_key=routing_key)
+    def _create_consumer(self, task_id, connection):
+        return ResultConsumer(connection, task_id,
+                              exchange=self.exchange,
+                              exchange_type=self.exchange_type,
+                              durable=self.persistent)
 
 
     def store_result(self, task_id, result, status, traceback=None):
     def store_result(self, task_id, result, status, traceback=None):
         """Send task return value and status."""
         """Send task return value and status."""
@@ -71,33 +79,42 @@ class AMQPBackend(BaseDictBackend):
                 "status": status,
                 "status": status,
                 "traceback": traceback}
                 "traceback": traceback}
 
 
-        connection = self.connection
-        publisher = self._publisher_for_task_id(task_id, connection)
-        publisher.send(meta, serializer="pickle")
-        publisher.close()
+        publisher = self._create_publisher(task_id, self.connection)
+        try:
+            publisher.send(meta)
+        finally:
+            publisher.close()
 
 
         return result
         return result
 
 
-    def _get_task_meta_for(self, task_id):
-        assert task_id not in self._seen
-        self._use_debug_tracking and self._seen.add(task_id)
+    def wait_for(self, task_id, timeout=None):
+        try:
+            meta = self._get_task_meta_for(task_id, timeout)
+        except socket.timeout:
+            raise TimeoutError("The operation timed out.")
+
+        if meta["status"] == states.SUCCESS:
+            return self.get_result(task_id)
+        elif meta["status"] in states.PROPAGATE_STATES:
+            raise self.get_result(task_id)
 
 
+    def _get_task_meta_for(self, task_id, timeout=None):
         results = []
         results = []
 
 
         def callback(message_data, message):
         def callback(message_data, message):
             results.append(message_data)
             results.append(message_data)
-            message.ack()
 
 
         routing_key = task_id.replace("-", "")
         routing_key = task_id.replace("-", "")
 
 
-        connection = self.connection
-        consumer = self._consumer_for_task_id(task_id, connection)
+        wait = self.connection.connection.wait_multi
+        consumer = self._create_consumer(task_id, self.connection)
         consumer.register_callback(callback)
         consumer.register_callback(callback)
 
 
+        consumer.consume()
         try:
         try:
-            consumer.iterconsume().next()
+            wait([consumer.backend.channel], timeout=timeout)
         finally:
         finally:
-            consumer.backend.channel.queue_delete(routing_key)
+            consumer.backend.queue_delete(routing_key)
             consumer.close()
             consumer.close()
 
 
         self._cache[task_id] = results[0]
         self._cache[task_id] = results[0]
@@ -121,3 +138,13 @@ class AMQPBackend(BaseDictBackend):
         """Get the result of a taskset."""
         """Get the result of a taskset."""
         raise NotImplementedError(
         raise NotImplementedError(
                 "restore_taskset is not supported by this backend.")
                 "restore_taskset is not supported by this backend.")
+
+    def close(self):
+        if self._connection is not None:
+            self._connection.close()
+
+    @property
+    def connection(self):
+        if not self._connection:
+            self._connection = establish_connection()
+        return self._connection

+ 6 - 6
celery/backends/base.py

@@ -7,7 +7,7 @@ from billiard.serialization import get_pickleable_exception
 
 
 from celery import conf
 from celery import conf
 from celery import states
 from celery import states
-from celery.exceptions import TimeoutError
+from celery.exceptions import TimeoutError, TaskRevokedError
 from celery.datastructures import LocalCache
 from celery.datastructures import LocalCache
 
 
 
 
@@ -20,8 +20,6 @@ class BaseBackend(object):
 
 
     TimeoutError = TimeoutError
     TimeoutError = TimeoutError
 
 
-    capabilities = []
-
     def __init__(self, *args, **kwargs):
     def __init__(self, *args, **kwargs):
         pass
         pass
 
 
@@ -55,6 +53,10 @@ class BaseBackend(object):
         return self.store_result(task_id, exc, status=states.RETRY,
         return self.store_result(task_id, exc, status=states.RETRY,
                                  traceback=traceback)
                                  traceback=traceback)
 
 
+    def mark_as_revoked(self, task_id):
+        return self.store_result(task_id, TaskRevokedError(),
+                                 status=states.REVOKED, traceback=None)
+
     def prepare_exception(self, exc):
     def prepare_exception(self, exc):
         """Prepare exception for serialization."""
         """Prepare exception for serialization."""
         return get_pickleable_exception(exc)
         return get_pickleable_exception(exc)
@@ -90,7 +92,7 @@ class BaseBackend(object):
             status = self.get_status(task_id)
             status = self.get_status(task_id)
             if status == states.SUCCESS:
             if status == states.SUCCESS:
                 return self.get_result(task_id)
                 return self.get_result(task_id)
-            elif status == states.FAILURE:
+            elif status in states.PROPAGATE_STATES:
                 raise self.get_result(task_id)
                 raise self.get_result(task_id)
             # avoid hammering the CPU checking status.
             # avoid hammering the CPU checking status.
             time.sleep(sleep_inbetween)
             time.sleep(sleep_inbetween)
@@ -145,8 +147,6 @@ class BaseBackend(object):
 
 
 class BaseDictBackend(BaseBackend):
 class BaseDictBackend(BaseBackend):
 
 
-    capabilities = ["ResultStore"]
-
     def __init__(self, *args, **kwargs):
     def __init__(self, *args, **kwargs):
         super(BaseDictBackend, self).__init__(*args, **kwargs)
         super(BaseDictBackend, self).__init__(*args, **kwargs)
         self._cache = LocalCache(limit=conf.MAX_CACHED_RESULTS)
         self._cache = LocalCache(limit=conf.MAX_CACHED_RESULTS)

+ 0 - 62
celery/backends/cache.py

@@ -1,62 +0,0 @@
-"""celery.backends.cache"""
-from datetime import timedelta
-
-from django.utils.encoding import smart_str
-from django.core.cache import cache, get_cache
-from django.core.cache.backends.base import InvalidCacheBackendError
-
-from celery import conf
-from celery.utils import timedelta_seconds
-from celery.backends.base import KeyValueStoreBackend
-
-# CELERY_CACHE_BACKEND overrides the django-global(tm) backend settings.
-if conf.CELERY_CACHE_BACKEND:
-    cache = get_cache(conf.CELERY_CACHE_BACKEND)
-
-
-class DjangoMemcacheWrapper(object):
-    """Wrapper class to django's memcache backend class, that overrides the
-    :meth:`get` method in order to remove the forcing of unicode strings
-    since it may cause binary or pickled data to break."""
-
-    def __init__(self, cache):
-        self.cache = cache
-
-    def get(self, key, default=None):
-        val = self.cache._cache.get(smart_str(key))
-        if val is None:
-            return default
-        else:
-            return val
-
-    def set(self, key, value, timeout=0):
-        self.cache.set(key, value, timeout)
-
-# Check if django is using memcache as the cache backend. If so, wrap the
-# cache object in a DjangoMemcacheWrapper that fixes a bug with retrieving
-# pickled data
-from django.core.cache.backends.base import InvalidCacheBackendError
-try:
-    from django.core.cache.backends.memcached import CacheClass
-except InvalidCacheBackendError:
-    pass
-else:
-    if isinstance(cache, CacheClass):
-        cache = DjangoMemcacheWrapper(cache)
-
-
-class CacheBackend(KeyValueStoreBackend):
-    """Backend using the Django cache framework to store task metadata."""
-
-    def __init__(self, *args, **kwargs):
-        super(CacheBackend, self).__init__(self, *args, **kwargs)
-        expires = conf.TASK_RESULT_EXPIRES
-        if isinstance(expires, timedelta):
-            expires = timedelta_seconds(conf.TASK_RESULT_EXPIRES)
-        self.expires = expires
-
-    def get(self, key):
-        return cache.get(key)
-
-    def set(self, key, value):
-        cache.set(key, value, self.expires)

+ 73 - 13
celery/backends/database.py

@@ -1,34 +1,94 @@
-from celery.models import TaskMeta, TaskSetMeta
+from datetime import datetime
+
+
+from celery import conf
+from celery.db.models import Task, TaskSet
+from celery.db.session import ResultSession
 from celery.backends.base import BaseDictBackend
 from celery.backends.base import BaseDictBackend
 
 
 
 
 class DatabaseBackend(BaseDictBackend):
 class DatabaseBackend(BaseDictBackend):
-    """The database backends. Using Django models to store task metadata."""
+    """The database result backend."""
+
+    def __init__(self, dburi=conf.RESULT_DBURI,
+            engine_options=None, **kwargs):
+        self.dburi = dburi
+        self.engine_options = dict(engine_options or {},
+                                   **conf.RESULT_ENGINE_OPTIONS or {})
+        super(DatabaseBackend, self).__init__(**kwargs)
+
+    def ResultSession(self):
+        return ResultSession(dburi=self.dburi, **self.engine_options)
 
 
     def _store_result(self, task_id, result, status, traceback=None):
     def _store_result(self, task_id, result, status, traceback=None):
         """Store return value and status of an executed task."""
         """Store return value and status of an executed task."""
-        TaskMeta.objects.store_result(task_id, result, status,
-                                      traceback=traceback)
+        session = self.ResultSession()
+        try:
+            tasks = session.query(Task).filter(Task.task_id == task_id).all()
+            if not tasks:
+                task = Task(task_id)
+                session.add(task)
+                session.flush()
+            else:
+                task = tasks[0]
+            task.result = result
+            task.status = status
+            task.traceback = traceback
+            session.commit()
+        finally:
+            session.close()
         return result
         return result
 
 
     def _save_taskset(self, taskset_id, result):
     def _save_taskset(self, taskset_id, result):
         """Store the result of an executed taskset."""
         """Store the result of an executed taskset."""
-        TaskSetMeta.objects.store_result(taskset_id, result)
+        taskset = TaskSet(taskset_id, result)
+        session = self.ResultSession()
+        try:
+            session.add(taskset)
+            session.flush()
+            session.commit()
+        finally:
+            session.close()
         return result
         return result
 
 
     def _get_task_meta_for(self, task_id):
     def _get_task_meta_for(self, task_id):
         """Get task metadata for a task by id."""
         """Get task metadata for a task by id."""
-        meta = TaskMeta.objects.get_task(task_id)
-        if meta:
-            return meta.to_dict()
+        session = self.ResultSession()
+        try:
+            task = None
+            for task in session.query(Task).filter(Task.task_id == task_id):
+                break
+            if not task:
+                task = Task(task_id)
+                session.add(task)
+                session.flush()
+                session.commit()
+            if task:
+                return task.to_dict()
+        finally:
+            session.close()
 
 
     def _restore_taskset(self, taskset_id):
     def _restore_taskset(self, taskset_id):
         """Get taskset metadata for a taskset by id."""
         """Get taskset metadata for a taskset by id."""
-        meta = TaskSetMeta.objects.restore_taskset(taskset_id)
-        if meta:
-            return meta.to_dict()
+        session = self.ResultSession()
+        try:
+            qs = session.query(TaskSet)
+            for taskset in qs.filter(TaskSet.taskset_id == taskset_id):
+                return taskset.to_dict()
+        finally:
+            session.close()
 
 
     def cleanup(self):
     def cleanup(self):
         """Delete expired metadata."""
         """Delete expired metadata."""
-        TaskMeta.objects.delete_expired()
-        TaskSetMeta.objects.delete_expired()
+        expires = conf.TASK_RESULT_EXPIRES
+        session = self.ResultSession()
+        try:
+            for task in session.query(Task).filter(
+                    Task.date_done < (datetime.now() - expires)):
+                session.delete(task)
+            for taskset in session.query(TaskSet).filter(
+                    TaskSet.date_done < (datetime.now() - expires)):
+                session.delete(taskset)
+            session.commit()
+        finally:
+            session.close()

+ 9 - 12
celery/backends/mongodb.py

@@ -21,15 +21,12 @@ class Bunch:
 
 
 
 
 class MongoBackend(BaseDictBackend):
 class MongoBackend(BaseDictBackend):
-
-    capabilities = ["ResultStore"]
-
-    mongodb_host = 'localhost'
+    mongodb_host = "localhost"
     mongodb_port = 27017
     mongodb_port = 27017
     mongodb_user = None
     mongodb_user = None
     mongodb_password = None
     mongodb_password = None
-    mongodb_database = 'celery'
-    mongodb_taskmeta_collection = 'celery_taskmeta'
+    mongodb_database = "celery"
+    mongodb_taskmeta_collection = "celery_taskmeta"
 
 
     def __init__(self, *args, **kwargs):
     def __init__(self, *args, **kwargs):
         """Initialize MongoDB backend instance.
         """Initialize MongoDB backend instance.
@@ -52,15 +49,15 @@ class MongoBackend(BaseDictBackend):
                 raise ImproperlyConfigured(
                 raise ImproperlyConfigured(
                     "MongoDB backend settings should be grouped in a dict")
                     "MongoDB backend settings should be grouped in a dict")
 
 
-            self.mongodb_host = config.get('host', self.mongodb_host)
-            self.mongodb_port = int(config.get('port', self.mongodb_port))
-            self.mongodb_user = config.get('user', self.mongodb_user)
+            self.mongodb_host = config.get("host", self.mongodb_host)
+            self.mongodb_port = int(config.get("port", self.mongodb_port))
+            self.mongodb_user = config.get("user", self.mongodb_user)
             self.mongodb_password = config.get(
             self.mongodb_password = config.get(
-                    'password', self.mongodb_password)
+                    "password", self.mongodb_password)
             self.mongodb_database = config.get(
             self.mongodb_database = config.get(
-                    'database', self.mongodb_database)
+                    "database", self.mongodb_database)
             self.mongodb_taskmeta_collection = config.get(
             self.mongodb_taskmeta_collection = config.get(
-                'taskmeta_collection', self.mongodb_taskmeta_collection)
+                "taskmeta_collection", self.mongodb_taskmeta_collection)
 
 
         super(MongoBackend, self).__init__(*args, **kwargs)
         super(MongoBackend, self).__init__(*args, **kwargs)
         self._connection = None
         self._connection = None

+ 1 - 1
celery/backends/pyredis.py

@@ -25,7 +25,7 @@ class RedisBackend(KeyValueStoreBackend):
         The port to the Redis server.
         The port to the Redis server.
 
 
         Raises :class:`celery.exceptions.ImproperlyConfigured` if
         Raises :class:`celery.exceptions.ImproperlyConfigured` if
-        :setting:`REDIS_HOST` or :setting:`REDIS_PORT` is not set.
+        the ``REDIS_HOST`` or ``REDIS_PORT`` settings are not set.
 
 
     """
     """
     redis_host = "localhost"
     redis_host = "localhost"

+ 62 - 49
celery/bin/celerybeat.py

@@ -53,64 +53,73 @@ OPTION_LIST = (
 )
 )
 
 
 
 
-def run_clockservice(loglevel=conf.CELERYBEAT_LOG_LEVEL,
-        logfile=conf.CELERYBEAT_LOG_FILE,
-        schedule=conf.CELERYBEAT_SCHEDULE_FILENAME, **kwargs):
-    """Starts the celerybeat clock server."""
-
-    print("celerybeat %s is starting." % celery.__version__)
-
-    # Setup logging
-    if not isinstance(loglevel, int):
-        loglevel = conf.LOG_LEVELS[loglevel.upper()]
-
-    # Run the worker init handler.
-    # (Usually imports task modules and such.)
-    from celery.loaders import current_loader
-    current_loader().init_worker()
-
-
-    # Dump configuration to screen so we have some basic information
-    # when users sends e-mails.
-
-    print(STARTUP_INFO_FMT % {
-            "conninfo": info.format_broker_info(),
-            "logfile": logfile or "@stderr",
-            "loglevel": conf.LOG_LEVELS[loglevel],
-            "schedule": schedule,
-    })
-
-    print("celerybeat has started.")
-    arg_start = "manage" in sys.argv[0] and 2 or 1
-    platform.set_process_title("celerybeat",
-                               info=" ".join(sys.argv[arg_start:]))
-
-    def _run_clock():
+class Beat(object):
+
+    def __init__(self, loglevel=conf.CELERYBEAT_LOG_LEVEL,
+            logfile=conf.CELERYBEAT_LOG_FILE,
+            schedule=conf.CELERYBEAT_SCHEDULE_FILENAME, **kwargs):
+        """Starts the celerybeat task scheduler."""
+
+        self.loglevel = loglevel
+        self.logfile = logfile
+        self.schedule = schedule
+        # Setup logging
+        if not isinstance(self.loglevel, int):
+            self.loglevel = conf.LOG_LEVELS[self.loglevel.upper()]
+
+    def run(self):
+        print("celerybeat %s is starting." % celery.__version__)
+        self.init_loader()
+        print(self.startup_info())
+        self.set_process_title()
+        print("celerybeat has started.")
+        self.start_scheduler()
+
+    def start_scheduler(self):
         from celery.log import setup_logger
         from celery.log import setup_logger
-        logger = setup_logger(loglevel, logfile)
-        clockservice = ClockService(logger=logger, schedule_filename=schedule)
+        logger = setup_logger(self.loglevel, self.logfile)
+        beat = ClockService(logger,
+                            schedule_filename=self.schedule)
 
 
         try:
         try:
-            install_sync_handler(clockservice)
-            clockservice.start()
-        except Exception, e:
+            self.install_sync_handler(beat)
+            beat.start()
+        except Exception, exc:
             emergency_error(logfile,
             emergency_error(logfile,
                     "celerybeat raised exception %s: %s\n%s" % (
                     "celerybeat raised exception %s: %s\n%s" % (
-                            e.__class__, e, traceback.format_exc()))
+                            exc.__class__, exc, traceback.format_exc()))
+
+    def init_loader(self):
+        # Run the worker init handler.
+        # (Usually imports task modules and such.)
+        from celery.loaders import current_loader
+        current_loader().init_worker()
 
 
-    _run_clock()
+    def startup_info(self):
+        return STARTUP_INFO_FMT % {
+            "conninfo": info.format_broker_info(),
+            "logfile": self.logfile or "@stderr",
+            "loglevel": conf.LOG_LEVELS[self.loglevel],
+            "schedule": self.schedule,
+        }
+
+    def set_process_title(self):
+        arg_start = "manage" in sys.argv[0] and 2 or 1
+        platform.set_process_title("celerybeat",
+                               info=" ".join(sys.argv[arg_start:]))
 
 
+    def install_sync_handler(self, beat):
+        """Install a ``SIGTERM`` + ``SIGINT`` handler that saves
+        the celerybeat schedule."""
 
 
-def install_sync_handler(beat):
-    """Install a ``SIGTERM`` + ``SIGINT`` handler that saves
-    the celerybeat schedule."""
+        def _sync(signum, frame):
+            beat.sync()
+            raise SystemExit()
+
+        platform.install_signal_handler("SIGTERM", _sync)
+        platform.install_signal_handler("SIGINT", _sync)
 
 
-    def _sync(signum, frame):
-        beat.sync()
-        raise SystemExit()
 
 
-    platform.install_signal_handler("SIGTERM", _sync)
-    platform.install_signal_handler("SIGINT", _sync)
 
 
 
 
 def parse_options(arguments):
 def parse_options(arguments):
@@ -120,9 +129,13 @@ def parse_options(arguments):
     return options
     return options
 
 
 
 
+def run_celerybeat(**options):
+    Beat(**options).run()
+
+
 def main():
 def main():
     options = parse_options(sys.argv[1:])
     options = parse_options(sys.argv[1:])
-    run_clockservice(**vars(options))
+    run_celerybeat(**vars(options))
 
 
 if __name__ == "__main__":
 if __name__ == "__main__":
     main()
     main()

+ 60 - 2
celery/bin/celeryd.py

@@ -26,6 +26,12 @@
     Also run the ``celerybeat`` periodic task scheduler. Please note that
     Also run the ``celerybeat`` periodic task scheduler. Please note that
     there must only be one instance of this service.
     there must only be one instance of this service.
 
 
+.. cmdoption:: -Q, --queues
+
+    List of queues to enable for this worker separated by comma.
+    By default all configured queues are enabled.
+    Example: ``-Q video,image``
+
 .. cmdoption:: -s, --schedule
 .. cmdoption:: -s, --schedule
 
 
     Path to the schedule database if running with the ``-B`` option.
     Path to the schedule database if running with the ``-B`` option.
@@ -42,6 +48,19 @@
     **WARNING**: This is unrecoverable, and the tasks will be
     **WARNING**: This is unrecoverable, and the tasks will be
     deleted from the messaging server.
     deleted from the messaging server.
 
 
+.. cmdoption:: --time-limit
+
+    Enables a hard time limit (in seconds) for tasks.
+
+.. cmdoption:: --soft-time-limit
+
+    Enables a soft time limit (in seconds) for tasks.
+
+.. cmdoption:: --maxtasksperchild
+
+    Maximum number of tasks a pool worker can execute before it's 
+    terminated and replaced by a new worker.
+
 """
 """
 import os
 import os
 import sys
 import sys
@@ -110,6 +129,24 @@ OPTION_LIST = (
     optparse.make_option('-E', '--events', default=conf.SEND_EVENTS,
     optparse.make_option('-E', '--events', default=conf.SEND_EVENTS,
             action="store_true", dest="events",
             action="store_true", dest="events",
             help="Send events so celery can be monitored by e.g. celerymon."),
             help="Send events so celery can be monitored by e.g. celerymon."),
+    optparse.make_option('--time-limit',
+            default=conf.CELERYD_TASK_TIME_LIMIT,
+            action="store", type="int", dest="task_time_limit",
+            help="Enables a hard time limit (in seconds) for tasks."),
+    optparse.make_option('--soft-time-limit',
+            default=conf.CELERYD_TASK_SOFT_TIME_LIMIT,
+            action="store", type="int", dest="task_soft_time_limit",
+            help="Enables a soft time limit (in seconds) for tasks."),
+    optparse.make_option('--maxtasksperchild',
+            default=conf.CELERYD_MAX_TASKS_PER_CHILD,
+            action="store", type="int", dest="max_tasks_per_child",
+            help="Maximum number of tasks a pool worker can execute"
+                 "before it's terminated and replaced by a new worker."),
+    optparse.make_option('--queues', '-Q', default=[],
+            action="store", dest="queues",
+            help="Comma separated list of queues to enable for this worker. "
+                 "By default all configured queues are enabled. "
+                 "Example: -Q video,image"),
 )
 )
 
 
 
 
@@ -119,7 +156,10 @@ class Worker(object):
             loglevel=conf.CELERYD_LOG_LEVEL, logfile=conf.CELERYD_LOG_FILE,
             loglevel=conf.CELERYD_LOG_LEVEL, logfile=conf.CELERYD_LOG_FILE,
             hostname=None, discard=False, run_clockservice=False,
             hostname=None, discard=False, run_clockservice=False,
             schedule=conf.CELERYBEAT_SCHEDULE_FILENAME,
             schedule=conf.CELERYBEAT_SCHEDULE_FILENAME,
-            events=False, **kwargs):
+            task_time_limit=conf.CELERYD_TASK_TIME_LIMIT,
+            task_soft_time_limit=conf.CELERYD_TASK_SOFT_TIME_LIMIT,
+            max_tasks_per_child=conf.CELERYD_MAX_TASKS_PER_CHILD,
+            queues=None, events=False, **kwargs):
         self.concurrency = concurrency or multiprocessing.cpu_count()
         self.concurrency = concurrency or multiprocessing.cpu_count()
         self.loglevel = loglevel
         self.loglevel = loglevel
         self.logfile = logfile
         self.logfile = logfile
@@ -128,6 +168,14 @@ class Worker(object):
         self.run_clockservice = run_clockservice
         self.run_clockservice = run_clockservice
         self.schedule = schedule
         self.schedule = schedule
         self.events = events
         self.events = events
+        self.task_time_limit = task_time_limit
+        self.task_soft_time_limit = task_soft_time_limit
+        self.max_tasks_per_child = max_tasks_per_child
+        self.queues = queues or []
+
+        if isinstance(self.queues, basestring):
+            self.queues = self.queues.split(",")
+
         if not isinstance(self.loglevel, int):
         if not isinstance(self.loglevel, int):
             self.loglevel = conf.LOG_LEVELS[self.loglevel.upper()]
             self.loglevel = conf.LOG_LEVELS[self.loglevel.upper()]
 
 
@@ -136,6 +184,7 @@ class Worker(object):
                                               celery.__version__))
                                               celery.__version__))
 
 
         self.init_loader()
         self.init_loader()
+        self.init_queues()
 
 
         if conf.RESULT_BACKEND == "database" \
         if conf.RESULT_BACKEND == "database" \
                 and self.settings.DATABASE_ENGINE == "sqlite3" and \
                 and self.settings.DATABASE_ENGINE == "sqlite3" and \
@@ -164,6 +213,12 @@ class Worker(object):
         signals.worker_ready.send(sender=listener)
         signals.worker_ready.send(sender=listener)
         print("celery@%s has started." % self.hostname)
         print("celery@%s has started." % self.hostname)
 
 
+    def init_queues(self):
+        if self.queues:
+            conf.QUEUES = dict((queue, options)
+                                for queue, options in conf.QUEUES.items()
+                                    if queue in self.queues)
+
     def init_loader(self):
     def init_loader(self):
         from celery.loaders import current_loader, load_settings
         from celery.loaders import current_loader, load_settings
         self.loader = current_loader()
         self.loader = current_loader()
@@ -215,7 +270,10 @@ class Worker(object):
                                 ready_callback=self.on_listener_ready,
                                 ready_callback=self.on_listener_ready,
                                 embed_clockservice=self.run_clockservice,
                                 embed_clockservice=self.run_clockservice,
                                 schedule_filename=self.schedule,
                                 schedule_filename=self.schedule,
-                                send_events=self.events)
+                                send_events=self.events,
+                                max_tasks_per_child=self.max_tasks_per_child,
+                                task_time_limit=self.task_time_limit,
+                                task_soft_time_limit=self.task_soft_time_limit)
 
 
         # Install signal handler so SIGHUP restarts the worker.
         # Install signal handler so SIGHUP restarts the worker.
         install_worker_restart_handler(worker)
         install_worker_restart_handler(worker)

+ 247 - 0
celery/bin/celeryd_multi.py

@@ -0,0 +1,247 @@
+import sys
+import shlex
+import socket
+
+from celery.utils.compat import defaultdict
+from carrot.utils import rpartition
+
+
+class OptionParser(object):
+
+    def __init__(self, args):
+        self.args = args
+        self.options = {}
+        self.values = []
+        self.parse()
+
+    def parse(self):
+        rargs = list(self.args)
+        pos = 0
+        while pos < len(rargs):
+            arg = rargs[pos]
+            if arg[0] == "-":
+                if arg[1] == "-":
+                    self.process_long_opt(arg[2:])
+                else:
+                    value = None
+                    if rargs[pos + 1][0] != '-':
+                        value = rargs[pos + 1]
+                        pos += 1
+                    self.process_short_opt(arg[1:], value)
+            else:
+                self.values.append(arg)
+            pos += 1
+
+    def process_long_opt(self, arg, value=None):
+        if "=" in arg:
+            arg, value = arg.split("=", 1)
+        self.add_option(arg, value, short=False)
+
+    def process_short_opt(self, arg, value=None):
+        self.add_option(arg, value, short=True)
+
+    def set_option(self, arg, value, short=False):
+        prefix = short and "-" or "--"
+        self.options[prefix + arg] = value
+
+
+class NamespacedOptionParser(OptionParser):
+
+    def __init__(self, args):
+        self.namespaces = defaultdict(lambda: {})
+        super(NamespacedOptionParser, self).__init__(args)
+
+    def add_option(self, name, value, short=False, ns=None):
+        prefix = short and "-" or "--"
+        dest = self.options
+        if ":" in name:
+            name, ns = name.split(":")
+            dest = self.namespaces[ns]
+        dest[prefix + name] = value
+
+    def optmerge(self, ns, defaults=None):
+        if defaults is None:
+            defaults = self.options
+        return dict(defaults, **self.namespaces[ns])
+
+
+def quote(v):
+    return "\\'".join("'" + p + "'" for p in v.split("'"))
+
+
+def format_opt(opt, value):
+    if not value:
+        return opt
+    if opt[0:2] == "--":
+        return "%s=%s" % (opt, value)
+    return "%s %s" % (opt, value)
+
+
+def parse_ns_range(ns, ranges=False):
+    ret = []
+    for space in "," in ns and ns.split(",") or [ns]:
+        if ranges and "-" in space:
+            start, stop = space.split("-")
+            x = map(str, range(int(start), int(stop) + 1))
+            ret.extend(x)
+        else:
+            ret.append(space)
+    return ret
+
+
+def abbreviations(map):
+
+    def expand(S):
+        ret = S
+        for short, long in map.items():
+            ret = ret.replace(short, long)
+        return ret
+
+    return expand
+
+
+def multi_args(p, cmd="celeryd", append="", prefix="", suffix=""):
+    names = p.values
+    options = dict(p.options)
+    ranges = len(names) == 1
+    if ranges:
+        names = map(str, range(1, int(names[0]) + 1))
+        prefix = "celery"
+    cmd = options.pop("--cmd", cmd)
+    append = options.pop("--append", append) 
+    hostname = options.pop("--hostname",
+                   options.pop("-n", socket.gethostname()))
+    prefix = options.pop("--prefix", prefix) or ""
+    suffix = options.pop("--suffix", suffix) or "." + hostname
+
+    for ns_name, ns_opts in p.namespaces.items():
+        if "," in ns_name or (ranges and "-" in ns_name):
+            for subns in parse_ns_range(ns_name, ranges):
+                p.namespaces[subns].update(ns_opts)
+            p.namespaces.pop(ns_name)
+
+    for name in names:
+        this_name = options["-n"] = prefix + name + suffix
+        expand = abbreviations({"%h": this_name,
+                                "%n": name})
+        line = expand(cmd) + " " + " ".join(
+                format_opt(opt, expand(value))
+                    for opt, value in p.optmerge(name, options).items()) + \
+               " " + expand(append)
+        yield this_name, line, expand
+
+
+
+
+def names(argv, cmd):
+    p = NamespacedOptionParser(argv)
+    print("\n".join(hostname
+                        for hostname, _, _ in multi_args(p, cmd)))
+
+def get(argv, cmd):
+    wanted = argv[0]
+    p = NamespacedOptionParser(argv[1:])
+    for name, worker, _ in multi_args(p, cmd):
+        if name == wanted:
+            print(worker)
+            return
+
+
+def start(argv, cmd):
+    p = NamespacedOptionParser(argv)
+    print("\n".join(worker
+                        for _, worker, _ in multi_args(p, cmd)))
+
+def expand(argv, cmd=None):
+    template = argv[0]
+    p = NamespacedOptionParser(argv[1:])
+    for _, _, expander in multi_args(p, cmd):
+        print(expander(template))
+
+def help(argv, cmd=None):
+    print("""Some examples:
+
+    # Advanced example with 10 workers:
+    #   * Three of the workers process the images and video queue
+    #   * Two of the workers process the data queue with loglevel DEBUG
+    #   * the rest processes the default queue.
+    $ celeryd-multi start 10 -l INFO -Q:1-3 images,video -Q:4,5 data
+        -Q default -L:4,5 DEBUG
+
+    # get commands to start 10 workers, with 3 processes each
+    $ celeryd-multi start 3 -c 3
+    celeryd -n celeryd1.myhost -c 3
+    celeryd -n celeryd2.myhost -c 3
+    celeryd -n celeryd3.myhost -c 3
+
+    # start 3 named workers
+    $ celeryd-multi start image video data -c 3
+    celeryd -n image.myhost -c 3
+    celeryd -n video.myhost -c 3
+    celeryd -n data.myhost -c 3
+
+    # specify custom hostname
+    $ celeryd-multi start 2 -n worker.example.com -c 3
+    celeryd -n celeryd1.worker.example.com -c 3
+    celeryd -n celeryd2.worker.example.com -c 3
+
+    # Additional options are added to each celeryd,
+    # but you can also modify the options for ranges of or single workers
+
+    # 3 workers: Two with 3 processes, and one with 10 processes.
+    $ celeryd-multi start 3 -c 3 -c:1 10
+    celeryd -n celeryd1.myhost -c 10
+    celeryd -n celeryd2.myhost -c 3
+    celeryd -n celeryd3.myhost -c 3
+
+    # can also specify options for named workers
+    $ celeryd-multi start image video data -c 3 -c:image 10
+    celeryd -n image.myhost -c 10
+    celeryd -n video.myhost -c 3
+    celeryd -n data.myhost -c 3
+
+    # ranges and lists of workers in options are also allowed:
+    # (-c:1-3 can also be written as -c:1,2,3)
+    $ celeryd-multi start 5 -c 3  -c:1-3 10
+    celeryd -n celeryd1.myhost -c 10
+    celeryd -n celeryd2.myhost -c 10
+    celeryd -n celeryd3.myhost -c 10
+    celeryd -n celeryd4.myhost -c 3
+    celeryd -n celeryd5.myhost -c 3
+
+    # lists also work with named workers
+    $ celeryd-multi start foo bar baz xuzzy -c 3 -c:foo,bar,baz 10
+    celeryd -n foo.myhost -c 10
+    celeryd -n bar.myhost -c 10
+    celeryd -n baz.myhost -c 10
+    celeryd -n xuzzy.myhost -c 3
+""")
+
+
+COMMANDS = {"start": start,
+            "names": names,
+            "expand": expand,
+            "get": get,
+            "help": help}
+
+def usage():
+    print("Please use one of the following commands: %s" % ", ".join(COMMANDS.keys()))
+
+def celeryd_multi(argv, cmd="celeryd"):
+    if len(argv) == 0:
+        usage()
+        sys.exit(0)
+
+    try:
+        return COMMANDS[argv[0]](argv[1:], cmd)
+    except KeyError, e:
+        print("Invalid command: %s" % argv[0])
+        usage()
+        sys.exit(1)
+
+def main():
+    celeryd_multi(sys.argv[1:])
+
+
+if __name__ == "__main__":
+    main()

+ 0 - 14
celery/bin/celeryinit.py

@@ -1,14 +0,0 @@
-import sys
-
-
-def main():
-    from celery.loaders.default import Loader
-    loader = Loader()
-    conf = loader.read_configuration()
-    from django.core.management import call_command, setup_environ
-    sys.stderr.write("Creating database tables...\n")
-    setup_environ(conf)
-    call_command("syncdb")
-
-if __name__ == "__main__":
-    main()

+ 33 - 58
celery/conf.py

@@ -22,12 +22,16 @@ settings = load_settings()
 _DEFAULTS = {
 _DEFAULTS = {
     "CELERY_RESULT_BACKEND": "database",
     "CELERY_RESULT_BACKEND": "database",
     "CELERY_ALWAYS_EAGER": False,
     "CELERY_ALWAYS_EAGER": False,
+    "CELERY_EAGER_PROPAGATES_EXCEPTIONS": False,
     "CELERY_TASK_RESULT_EXPIRES": timedelta(days=5),
     "CELERY_TASK_RESULT_EXPIRES": timedelta(days=5),
     "CELERY_SEND_EVENTS": False,
     "CELERY_SEND_EVENTS": False,
     "CELERY_IGNORE_RESULT": False,
     "CELERY_IGNORE_RESULT": False,
     "CELERY_STORE_ERRORS_EVEN_IF_IGNORED": False,
     "CELERY_STORE_ERRORS_EVEN_IF_IGNORED": False,
     "CELERY_TASK_SERIALIZER": "pickle",
     "CELERY_TASK_SERIALIZER": "pickle",
     "CELERY_DISABLE_RATE_LIMITS": False,
     "CELERY_DISABLE_RATE_LIMITS": False,
+    "CELERYD_TASK_TIME_LIMIT": None,
+    "CELERYD_TASK_SOFT_TIME_LIMIT": None,
+    "CELERYD_MAX_TASKS_PER_CHILD": None,
     "CELERY_DEFAULT_ROUTING_KEY": "celery",
     "CELERY_DEFAULT_ROUTING_KEY": "celery",
     "CELERY_DEFAULT_QUEUE": "celery",
     "CELERY_DEFAULT_QUEUE": "celery",
     "CELERY_DEFAULT_EXCHANGE": "celery",
     "CELERY_DEFAULT_EXCHANGE": "celery",
@@ -36,6 +40,7 @@ _DEFAULTS = {
     "CELERY_BROKER_CONNECTION_TIMEOUT": 4,
     "CELERY_BROKER_CONNECTION_TIMEOUT": 4,
     "CELERY_BROKER_CONNECTION_RETRY": True,
     "CELERY_BROKER_CONNECTION_RETRY": True,
     "CELERY_BROKER_CONNECTION_MAX_RETRIES": 100,
     "CELERY_BROKER_CONNECTION_MAX_RETRIES": 100,
+    "CELERY_ACKS_LATE": False,
     "CELERYD_POOL": "celery.worker.pool.TaskPool",
     "CELERYD_POOL": "celery.worker.pool.TaskPool",
     "CELERYD_MEDIATOR": "celery.worker.controllers.Mediator",
     "CELERYD_MEDIATOR": "celery.worker.controllers.Mediator",
     "CELERYD_ETA_SCHEDULER": "celery.worker.controllers.ScheduleController",
     "CELERYD_ETA_SCHEDULER": "celery.worker.controllers.ScheduleController",
@@ -60,13 +65,18 @@ _DEFAULTS = {
     "CELERY_EVENT_EXCHANGE": "celeryevent",
     "CELERY_EVENT_EXCHANGE": "celeryevent",
     "CELERY_EVENT_EXCHANGE_TYPE": "direct",
     "CELERY_EVENT_EXCHANGE_TYPE": "direct",
     "CELERY_EVENT_ROUTING_KEY": "celeryevent",
     "CELERY_EVENT_ROUTING_KEY": "celeryevent",
+    "CELERY_EVENT_SERIALIZER": "json",
     "CELERY_RESULT_EXCHANGE": "celeryresults",
     "CELERY_RESULT_EXCHANGE": "celeryresults",
+    "CELERY_RESULT_EXCHANGE_TYPE": "direct",
+    "CELERY_RESULT_SERIALIZER": "pickle",
+    "CELERY_RESULT_PERSISTENT": False,
     "CELERY_MAX_CACHED_RESULTS": 5000,
     "CELERY_MAX_CACHED_RESULTS": 5000,
     "CELERY_TRACK_STARTED": False,
     "CELERY_TRACK_STARTED": False,
 }
 }
 
 
+
 _DEPRECATION_FMT = """
 _DEPRECATION_FMT = """
-%s is deprecated in favor of %s and is scheduled for removal in celery v1.2.
+%s is deprecated in favor of %s and is scheduled for removal in celery v1.4.
 """.strip()
 """.strip()
 
 
 def _get(name, default=None, compat=None):
 def _get(name, default=None, compat=None):
@@ -86,6 +96,7 @@ def _get(name, default=None, compat=None):
 
 
 # <--- Task                                        <-   --   --- - ----- -- #
 # <--- Task                                        <-   --   --- - ----- -- #
 ALWAYS_EAGER = _get("CELERY_ALWAYS_EAGER")
 ALWAYS_EAGER = _get("CELERY_ALWAYS_EAGER")
+EAGER_PROPAGATES_EXCEPTIONS = _get("CELERY_EAGER_PROPAGATES_EXCEPTIONS")
 RESULT_BACKEND = _get("CELERY_RESULT_BACKEND", compat=["CELERY_BACKEND"])
 RESULT_BACKEND = _get("CELERY_RESULT_BACKEND", compat=["CELERY_BACKEND"])
 CELERY_BACKEND = RESULT_BACKEND # FIXME Remove in 1.4
 CELERY_BACKEND = RESULT_BACKEND # FIXME Remove in 1.4
 CELERY_CACHE_BACKEND = _get("CELERY_CACHE_BACKEND")
 CELERY_CACHE_BACKEND = _get("CELERY_CACHE_BACKEND")
@@ -93,10 +104,16 @@ TASK_SERIALIZER = _get("CELERY_TASK_SERIALIZER")
 TASK_RESULT_EXPIRES = _get("CELERY_TASK_RESULT_EXPIRES")
 TASK_RESULT_EXPIRES = _get("CELERY_TASK_RESULT_EXPIRES")
 IGNORE_RESULT = _get("CELERY_IGNORE_RESULT")
 IGNORE_RESULT = _get("CELERY_IGNORE_RESULT")
 TRACK_STARTED = _get("CELERY_TRACK_STARTED")
 TRACK_STARTED = _get("CELERY_TRACK_STARTED")
+ACKS_LATE = _get("CELERY_ACKS_LATE")
 # Make sure TASK_RESULT_EXPIRES is a timedelta.
 # Make sure TASK_RESULT_EXPIRES is a timedelta.
 if isinstance(TASK_RESULT_EXPIRES, int):
 if isinstance(TASK_RESULT_EXPIRES, int):
     TASK_RESULT_EXPIRES = timedelta(seconds=TASK_RESULT_EXPIRES)
     TASK_RESULT_EXPIRES = timedelta(seconds=TASK_RESULT_EXPIRES)
 
 
+# <--- SQLAlchemy                                  <-   --   --- - ----- -- #
+RESULT_DBURI = _get("CELERY_RESULT_DBURI")
+RESULT_ENGINE_OPTIONS = _get("CELERY_RESULT_ENGINE_OPTIONS")
+
+
 # <--- Client                                      <-   --   --- - ----- -- #
 # <--- Client                                      <-   --   --- - ----- -- #
 
 
 MAX_CACHED_RESULTS = _get("CELERY_MAX_CACHED_RESULTS")
 MAX_CACHED_RESULTS = _get("CELERY_MAX_CACHED_RESULTS")
@@ -106,6 +123,9 @@ MAX_CACHED_RESULTS = _get("CELERY_MAX_CACHED_RESULTS")
 SEND_EVENTS = _get("CELERY_SEND_EVENTS")
 SEND_EVENTS = _get("CELERY_SEND_EVENTS")
 DEFAULT_RATE_LIMIT = _get("CELERY_DEFAULT_RATE_LIMIT")
 DEFAULT_RATE_LIMIT = _get("CELERY_DEFAULT_RATE_LIMIT")
 DISABLE_RATE_LIMITS = _get("CELERY_DISABLE_RATE_LIMITS")
 DISABLE_RATE_LIMITS = _get("CELERY_DISABLE_RATE_LIMITS")
+CELERYD_TASK_TIME_LIMIT = _get("CELERYD_TASK_TIME_LIMIT")
+CELERYD_TASK_SOFT_TIME_LIMIT = _get("CELERYD_TASK_SOFT_TIME_LIMIT")
+CELERYD_MAX_TASKS_PER_CHILD = _get("CELERYD_MAX_TASKS_PER_CHILD")
 STORE_ERRORS_EVEN_IF_IGNORED = _get("CELERY_STORE_ERRORS_EVEN_IF_IGNORED")
 STORE_ERRORS_EVEN_IF_IGNORED = _get("CELERY_STORE_ERRORS_EVEN_IF_IGNORED")
 CELERY_SEND_TASK_ERROR_EMAILS = _get("CELERY_SEND_TASK_ERROR_EMAILS",
 CELERY_SEND_TASK_ERROR_EMAILS = _get("CELERY_SEND_TASK_ERROR_EMAILS",
                                      not settings.DEBUG,
                                      not settings.DEBUG,
@@ -115,7 +135,7 @@ CELERYD_LOG_FORMAT = _get("CELERYD_LOG_FORMAT",
 CELERYD_TASK_LOG_FORMAT = _get("CELERYD_TASK_LOG_FORMAT")
 CELERYD_TASK_LOG_FORMAT = _get("CELERYD_TASK_LOG_FORMAT")
 CELERYD_LOG_FILE = _get("CELERYD_LOG_FILE")
 CELERYD_LOG_FILE = _get("CELERYD_LOG_FILE")
 CELERYD_LOG_LEVEL = _get("CELERYD_LOG_LEVEL",
 CELERYD_LOG_LEVEL = _get("CELERYD_LOG_LEVEL",
-                        compat=["CELERYD_DAEMON_LOG_LEVEL"])
+                            compat=["CELERYD_DAEMON_LOG_LEVEL"])
 CELERYD_LOG_LEVEL = LOG_LEVELS[CELERYD_LOG_LEVEL.upper()]
 CELERYD_LOG_LEVEL = LOG_LEVELS[CELERYD_LOG_LEVEL.upper()]
 CELERYD_CONCURRENCY = _get("CELERYD_CONCURRENCY")
 CELERYD_CONCURRENCY = _get("CELERYD_CONCURRENCY")
 CELERYD_PREFETCH_MULTIPLIER = _get("CELERYD_PREFETCH_MULTIPLIER")
 CELERYD_PREFETCH_MULTIPLIER = _get("CELERYD_PREFETCH_MULTIPLIER")
@@ -126,65 +146,15 @@ CELERYD_MEDIATOR = _get("CELERYD_MEDIATOR")
 CELERYD_ETA_SCHEDULER = _get("CELERYD_ETA_SCHEDULER")
 CELERYD_ETA_SCHEDULER = _get("CELERYD_ETA_SCHEDULER")
 
 
 # <--- Message routing                             <-   --   --- - ----- -- #
 # <--- Message routing                             <-   --   --- - ----- -- #
-QUEUES = _get("CELERY_QUEUES")
 DEFAULT_QUEUE = _get("CELERY_DEFAULT_QUEUE")
 DEFAULT_QUEUE = _get("CELERY_DEFAULT_QUEUE")
 DEFAULT_ROUTING_KEY = _get("CELERY_DEFAULT_ROUTING_KEY")
 DEFAULT_ROUTING_KEY = _get("CELERY_DEFAULT_ROUTING_KEY")
 DEFAULT_EXCHANGE = _get("CELERY_DEFAULT_EXCHANGE")
 DEFAULT_EXCHANGE = _get("CELERY_DEFAULT_EXCHANGE")
 DEFAULT_EXCHANGE_TYPE = _get("CELERY_DEFAULT_EXCHANGE_TYPE")
 DEFAULT_EXCHANGE_TYPE = _get("CELERY_DEFAULT_EXCHANGE_TYPE")
 DEFAULT_DELIVERY_MODE = _get("CELERY_DEFAULT_DELIVERY_MODE")
 DEFAULT_DELIVERY_MODE = _get("CELERY_DEFAULT_DELIVERY_MODE")
-
-_DEPRECATIONS = {"CELERY_AMQP_CONSUMER_QUEUES": "CELERY_QUEUES",
-                 "CELERY_AMQP_CONSUMER_QUEUE": "CELERY_QUEUES",
-                 "CELERY_AMQP_EXCHANGE": "CELERY_DEFAULT_EXCHANGE",
-                 "CELERY_AMQP_EXCHANGE_TYPE": "CELERY_DEFAULT_EXCHANGE_TYPE",
-                 "CELERY_AMQP_CONSUMER_ROUTING_KEY": "CELERY_QUEUES",
-                 "CELERY_AMQP_PUBLISHER_ROUTING_KEY":
-                 "CELERY_DEFAULT_ROUTING_KEY"}
-
-
-_DEPRECATED_QUEUE_SETTING_FMT = """
-%s is deprecated in favor of %s and scheduled for removal in celery v1.0.
-Please visit http://bit.ly/5DsSuX for more information.
-
-We're sorry for the inconvenience.
-""".strip()
-
-
-def _find_deprecated_queue_settings():
-    global DEFAULT_QUEUE, DEFAULT_ROUTING_KEY
-    global DEFAULT_EXCHANGE, DEFAULT_EXCHANGE_TYPE
-    binding_key = None
-
-    multi = _get("CELERY_AMQP_CONSUMER_QUEUES")
-    if multi:
-        return multi
-
-    single = _get("CELERY_AMQP_CONSUMER_QUEUE")
-    if single:
-        DEFAULT_QUEUE = single
-        DEFAULT_EXCHANGE = _get("CELERY_AMQP_EXCHANGE", DEFAULT_EXCHANGE)
-        DEFAULT_EXCHANGE_TYPE = _get("CELERY_AMQP_EXCHANGE_TYPE",
-                                     DEFAULT_EXCHANGE_TYPE)
-        binding_key = _get("CELERY_AMQP_CONSUMER_ROUTING_KEY",
-                            DEFAULT_ROUTING_KEY)
-        DEFAULT_ROUTING_KEY = _get("CELERY_AMQP_PUBLISHER_ROUTING_KEY",
-                                   DEFAULT_ROUTING_KEY)
-    binding_key = binding_key or DEFAULT_ROUTING_KEY
-    return {DEFAULT_QUEUE: {"exchange": DEFAULT_EXCHANGE,
-                            "exchange_type": DEFAULT_EXCHANGE_TYPE,
-                            "binding_key": binding_key}}
-
-
-def _warn_if_deprecated_queue_settings():
-    for setting, new_setting in _DEPRECATIONS.items():
-        if _get(setting):
-            warnings.warn(DeprecationWarning(_DEPRECATED_QUEUE_SETTING_FMT % (
-                setting, _DEPRECATIONS[setting])))
-            break
-
-_warn_if_deprecated_queue_settings()
-if not QUEUES:
-    QUEUES = _find_deprecated_queue_settings()
+QUEUES = _get("CELERY_QUEUES") or {DEFAULT_QUEUE: {
+                                       "exchange": DEFAULT_EXCHANGE,
+                                       "exchange_type": DEFAULT_EXCHANGE_TYPE,
+                                       "binding_key": DEFAULT_ROUTING_KEY}}
 
 
 # :--- Broadcast queue settings                     <-   --   --- - ----- -- #
 # :--- Broadcast queue settings                     <-   --   --- - ----- -- #
 
 
@@ -198,6 +168,7 @@ EVENT_QUEUE = _get("CELERY_EVENT_QUEUE")
 EVENT_EXCHANGE = _get("CELERY_EVENT_EXCHANGE")
 EVENT_EXCHANGE = _get("CELERY_EVENT_EXCHANGE")
 EVENT_EXCHANGE_TYPE = _get("CELERY_EVENT_EXCHANGE_TYPE")
 EVENT_EXCHANGE_TYPE = _get("CELERY_EVENT_EXCHANGE_TYPE")
 EVENT_ROUTING_KEY = _get("CELERY_EVENT_ROUTING_KEY")
 EVENT_ROUTING_KEY = _get("CELERY_EVENT_ROUTING_KEY")
+EVENT_SERIALIZER = _get("CELERY_EVENT_SERIALIZER")
 
 
 # :--- Broker connections                           <-   --   --- - ----- -- #
 # :--- Broker connections                           <-   --   --- - ----- -- #
 BROKER_CONNECTION_TIMEOUT = _get("CELERY_BROKER_CONNECTION_TIMEOUT",
 BROKER_CONNECTION_TIMEOUT = _get("CELERY_BROKER_CONNECTION_TIMEOUT",
@@ -207,9 +178,12 @@ BROKER_CONNECTION_RETRY = _get("CELERY_BROKER_CONNECTION_RETRY",
 BROKER_CONNECTION_MAX_RETRIES = _get("CELERY_BROKER_CONNECTION_MAX_RETRIES",
 BROKER_CONNECTION_MAX_RETRIES = _get("CELERY_BROKER_CONNECTION_MAX_RETRIES",
                                 compat=["CELERY_AMQP_CONNECTION_MAX_RETRIES"])
                                 compat=["CELERY_AMQP_CONNECTION_MAX_RETRIES"])
 
 
-# :--- Backend settings                             <-   --   --- - ----- -- #
+# :--- AMQP Backend settings                        <-   --   --- - ----- -- #
 
 
 RESULT_EXCHANGE = _get("CELERY_RESULT_EXCHANGE")
 RESULT_EXCHANGE = _get("CELERY_RESULT_EXCHANGE")
+RESULT_EXCHANGE_TYPE = _get("CELERY_RESULT_EXCHANGE_TYPE")
+RESULT_SERIALIZER = _get("CELERY_RESULT_SERIALIZER")
+RESULT_PERSISTENT = _get("CELERY_RESULT_PERSISTENT")
 
 
 # :--- Celery Beat                                  <-   --   --- - ----- -- #
 # :--- Celery Beat                                  <-   --   --- - ----- -- #
 CELERYBEAT_LOG_LEVEL = _get("CELERYBEAT_LOG_LEVEL")
 CELERYBEAT_LOG_LEVEL = _get("CELERYBEAT_LOG_LEVEL")
@@ -234,4 +208,5 @@ def _init_routing_table(queues):
 
 
     return dict((queue, _defaults(opts)) for queue, opts in queues.items())
     return dict((queue, _defaults(opts)) for queue, opts in queues.items())
 
 
-routing_table = _init_routing_table(QUEUES)
+def get_routing_table():
+    return _init_routing_table(QUEUES)
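For reference, a hedged sketch of an explicit ``CELERY_QUEUES`` setting with the same shape as the new fallback; the queue names and keys below are illustrative only:

    CELERY_QUEUES = {
        "celery": {"exchange": "celery",
                   "exchange_type": "direct",
                   "binding_key": "celery"},
        "images": {"exchange": "media",
                   "exchange_type": "direct",
                   "binding_key": "media.images"},
    }
    # With CELERY_QUEUES unset, QUEUES falls back to a single entry built from
    # CELERY_DEFAULT_QUEUE/EXCHANGE/ROUTING_KEY, and get_routing_table()
    # now derives the routing table from it on demand.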

+ 149 - 0
celery/contrib/abortable.py

@@ -0,0 +1,149 @@
+"""
+=========================
+Abortable tasks overview
+=========================
+
+For long-running :class:`Task` objects, it can be desirable to support
+aborting during execution. Of course, these tasks should be built to
+support abortion specifically.
+
+The :class:`AbortableTask` serves as a base class for all :class:`Task`
+objects that should support abortion by producers.
+
+* Producers may invoke the :meth:`abort` method on
+  :class:`AbortableAsyncResult` instances, to request abortion.
+
+* Consumers (workers) should periodically check (and honor!) the
+  :meth:`is_aborted` method at controlled points in their task's
+  :meth:`run` method. The more often, the better.
+
+The necessary intermediate communication is dealt with by the
+:class:`AbortableTask` implementation.
+
+Usage example
+-------------
+
+In the consumer:
+
+.. code-block:: python
+
+   from celery.contrib.abortable import AbortableTask
+
+   class MyLongRunningTask(AbortableTask):
+
+       def run(self, **kwargs):
+           logger = self.get_logger(**kwargs)
+           results = []
+           for x in xrange(100):
+               # Check after every 5 loops..
+               if x % 5 == 0:  # alternatively, check when some timer is due
+                   if self.is_aborted(**kwargs):
+                       # Respect the aborted status and terminate
+                       # gracefully
+                       logger.warning("Task aborted.")
+                       return None
+               y = do_something_expensive(x)
+               results.append(y)
+           logger.info("Task finished.")
+           return results
+
+
+In the producer:
+
+.. code-block:: python
+
+   from myproject.tasks import MyLongRunningTask
+
+   def myview(request):
+
+       async_result = MyLongRunningTask.delay()
+       # async_result is of type AbortableAsyncResult
+
+       # After 10 seconds, abort the task
+       time.sleep(10)
+       async_result.abort()
+
+       ...
+
+After the ``async_result.abort()`` call, the task execution is not
+aborted immediately. In fact, it is not guaranteed to abort at all. Keep
+checking the ``async_result`` status, or call ``async_result.wait()`` to
+have it block until the task is finished.
+
+"""
+from celery.task.base import Task
+from celery.result import AsyncResult
+
+
+""" Task States
+
+.. data:: ABORTED
+
+    Task has been marked as aborted (typically by the producer) and
+    should terminate as soon as possible.
+
+"""
+ABORTED = "ABORTED"
+
+
+class AbortableAsyncResult(AsyncResult):
+    """Represents an abortable result.
+
+    Specifically, this gives the ``AsyncResult`` an :meth:`abort()` method,
+    which sets the state of the underlying Task to ``"ABORTED"``.
+
+    """
+
+    def is_aborted(self):
+        """Returns :const:`True` if the task is (being) aborted."""
+        return self.backend.get_status(self.task_id) == ABORTED
+
+    def abort(self):
+        """Set the state of the task to :const:`ABORTED`.
+
+        Abortable tasks monitor their state at regular intervals and
+        terminate execution if it is :const:`ABORTED`.
+
+        Be aware that invoking this method does not guarantee when the
+        task will be aborted (or even if the task will be aborted at
+        all).
+
+        """
+        # TODO: store_result requires all four arguments to be set,
+        # but only status should be updated here
+        return self.backend.store_result(self.task_id, result=None,
+                                         status=ABORTED, traceback=None)
+
+
+class AbortableTask(Task):
+    """A celery task that serves as a base class for all :class:`Task`
+    subclasses that support aborting during execution.
+
+    All subclasses of :class:`AbortableTask` must call the
+    :meth:`is_aborted` method periodically and act accordingly when
+    the call evaluates to :const:`True`.
+
+    """
+
+    @classmethod
+    def AsyncResult(cls, task_id):
+        """Returns the accompanying AbortableAsyncResult instance."""
+        return AbortableAsyncResult(task_id, backend=cls.backend)
+
+    def is_aborted(self, **kwargs):
+        """Checks against the backend whether this
+        :class:`AbortableAsyncResult` is :const:`ABORTED`.
+
+        Always returns :const:`False` in case the `task_id` parameter
+        refers to a regular (non-abortable) :class:`Task`.
+
+        Be aware that invoking this method will cause a hit in the
+        backend (for example a database query), so find a good balance
+        between calling it regularly enough to stay responsive and not
+        so often that it hurts performance.
+
+        """
+        result = self.AsyncResult(kwargs["task_id"])
+        if not isinstance(result, AbortableAsyncResult):
+            return False
+        return result.is_aborted()

+ 0 - 19
celery/contrib/test_runner.py

@@ -1,19 +0,0 @@
-from django.conf import settings
-from django.test.simple import run_tests as run_tests_orig
-
-USAGE = """\
-Custom test runner to allow testing of celery delayed tasks.
-"""
-
-def run_tests(test_labels, *args, **kwargs):
-    """Django test runner allowing testing of celery delayed tasks.
-
-    All tasks are run locally, not in a worker.
-
-    To use this runner set ``settings.TEST_RUNNER``::
-
-        TEST_RUNNER = "celery.contrib.test_runner.run_tests"
-
-    """
-    settings.CELERY_ALWAYS_EAGER = True
-    return run_tests_orig(test_labels, *args, **kwargs)

+ 0 - 0
celery/management/__init__.py → celery/db/__init__.py


+ 66 - 0
celery/db/a805d4bd.py

@@ -0,0 +1,66 @@
+"""
+a805d4bd
+This module fixes a bug with pickling and relative imports in Python < 2.6.
+
+The problem is with pickling, e.g., an ``exceptions.KeyError`` instance.
+As SQLAlchemy has its own ``exceptions`` module, pickle will try to
+lookup ``KeyError`` in the wrong module, resulting in this exception::
+
+    cPickle.PicklingError: Can't pickle <type 'exceptions.KeyError'>:
+        attribute lookup exceptions.KeyError failed
+
+Doing ``import exceptions`` just before the dump in ``sqlalchemy.types``
+reveals the source of the bug::
+
+    EXCEPTIONS: <module 'sqlalchemy.exc' from '/var/lib/hudson/jobs/celery/
+        workspace/buildenv/lib/python2.5/site-packages/sqlalchemy/exc.pyc'>
+
+Hence the random module name "a805d4bd" is taken to decrease the chances of
+a collision.
+
+"""
+from sqlalchemy.types import PickleType as _PickleType
+
+
+class PickleType(_PickleType):
+
+    def bind_processor(self, dialect):
+        impl_processor = self.impl.bind_processor(dialect)
+        dumps = self.pickler.dumps
+        protocol = self.protocol
+        if impl_processor:
+            def process(value):
+                if value is not None:
+                    value = dumps(value, protocol)
+                return impl_processor(value)
+
+        else:
+            def process(value):
+                if value is not None:
+                    value = dumps(value, protocol)
+                return value
+        return process
+
+    def result_processor(self, dialect, coltype):
+        impl_processor = self.impl.result_processor(dialect, coltype)
+        loads = self.pickler.loads
+        if impl_processor:
+
+            def process(value):
+                value = impl_processor(value)
+                if value is None:
+                    return None
+                return loads(value)
+        else:
+
+            def process(value):
+                if value is None:
+                    return None
+                return loads(value)
+        return process
+
+    def copy_value(self, value):
+        if self.mutable:
+            return self.pickler.loads(self.pickler.dumps(value, self.protocol))
+        else:
+            return value

+ 70 - 0
celery/db/models.py

@@ -0,0 +1,70 @@
+from datetime import datetime
+
+from sqlalchemy import Column, Sequence
+from sqlalchemy import Integer, String, Text, DateTime
+
+from celery import states
+from celery.db.session import ResultModelBase
+# See docstring of a805d4bd for an explanation for this workaround ;)
+from celery.db.a805d4bd import PickleType
+
+
+class Task(ResultModelBase):
+    """Task result/status."""
+    __tablename__ = "celery_taskmeta"
+    __table_args__ = {"sqlite_autoincrement": True}
+
+    id = Column("id", Integer, Sequence("task_id_sequence"), primary_key=True,
+            autoincrement=True)
+    task_id = Column("task_id", String(255))
+    status = Column("status", String(50), default=states.PENDING)
+    result = Column("result", PickleType, nullable=True)
+    date_done = Column("date_done", DateTime, default=datetime.now,
+                       onupdate=datetime.now, nullable=True)
+    traceback = Column("traceback", Text, nullable=True)
+
+    def __init__(self, task_id):
+        self.task_id = task_id
+
+    def __str__(self):
+        return "<Task(%s, %s, %s, %s)>" % (self.task_id,
+                                           self.result,
+                                           self.status,
+                                           self.traceback)
+
+    def to_dict(self):
+        return {"task_id": self.task_id,
+                "status": self.status,
+                "result": self.result,
+                "date_done": self.date_done,
+                "traceback": self.traceback}
+
+    def __unicode__(self):
+        return u"<Task: %s successful: %s>" % (self.task_id, self.status)
+
+
+class TaskSet(ResultModelBase):
+    """TaskSet result"""
+    __tablename__ = "celery_tasksetmeta"
+    __table_args__ = {"sqlite_autoincrement": True}
+
+    id = Column("id", Integer, Sequence("taskset_id_sequence"),
+                autoincrement=True, primary_key=True)
+    taskset_id = Column("taskset_id", String(255))
+    result = Column("result", PickleType, nullable=True)
+    date_done = Column("date_done", DateTime, default=datetime.now,
+                       nullable=True)
+
+    def __init__(self, taskset_id):
+        self.taskset_id = taskset_id
+
+    def __str__(self):
+        return "<TaskSet(%s, %s)>" % (self.taskset_id, self.result)
+
+    def to_dict(self):
+        return {"taskset_id": self.taskset_id,
+                "result": self.result,
+                "date_done": self.date_done}
+
+    def __unicode__(self):
+        return u"<TaskSet: %s>" % (self.taskset_id)

+ 36 - 0
celery/db/session.py

@@ -0,0 +1,36 @@
+import os
+
+from sqlalchemy import create_engine
+from sqlalchemy.orm import sessionmaker
+from sqlalchemy.ext.declarative import declarative_base
+
+from celery import conf
+from celery.utils.compat import defaultdict
+
+ResultModelBase = declarative_base()
+
+_SETUP = defaultdict(lambda: False)
+_ENGINES = {}
+
+
+def get_engine(dburi, **kwargs):
+    if dburi not in _ENGINES:
+        _ENGINES[dburi] = create_engine(dburi, **kwargs)
+    return _ENGINES[dburi]
+
+
+def create_session(dburi, **kwargs):
+    engine = get_engine(dburi, **kwargs)
+    return engine, sessionmaker(bind=engine)
+
+
+def setup_results(engine):
+    if not _SETUP["results"]:
+        ResultModelBase.metadata.create_all(engine)
+        _SETUP["results"] = True
+
+
+def ResultSession(dburi=conf.RESULT_DBURI, **kwargs):
+    engine, session = create_session(dburi, **kwargs)
+    setup_results(engine)
+    return session()
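A short usage sketch, assuming ``CELERY_RESULT_DBURI`` points at a reachable database; the sqlite URI below is only an example:

    from celery.db.session import ResultSession
    from celery.db.models import Task

    # Opens a session, lazily creating the result tables on first use.
    session = ResultSession("sqlite:///celery.sqlite")
    meta = session.query(Task).filter(Task.task_id == "some-task-id").first()
    if meta is not None:
        print(meta.to_dict())
    session.close()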

+ 15 - 0
celery/exceptions.py

@@ -3,14 +3,22 @@
 Common Exceptions
 Common Exceptions
 
 
 """
 """
+from billiard.pool import SoftTimeLimitExceeded as _SoftTimeLimitExceeded
 
 
 UNREGISTERED_FMT = """
 UNREGISTERED_FMT = """
 Task of kind %s is not registered, please make sure it's imported.
 Task of kind %s is not registered, please make sure it's imported.
 """.strip()
 """.strip()
 
 
 
 
+class SoftTimeLimitExceeded(_SoftTimeLimitExceeded):
+    """The soft time limit has been exceeded. This exception is raised
+    to give the task a chance to clean up."""
+    pass
+
+
 class ImproperlyConfigured(Exception):
 class ImproperlyConfigured(Exception):
     """Celery is somehow improperly configured."""
     """Celery is somehow improperly configured."""
+    pass
 
 
 
 
 class NotRegistered(KeyError):
 class NotRegistered(KeyError):
@@ -28,10 +36,12 @@ class AlreadyRegistered(Exception):
 
 
 class TimeoutError(Exception):
 class TimeoutError(Exception):
     """The operation timed out."""
     """The operation timed out."""
+    pass
 
 
 
 
 class MaxRetriesExceededError(Exception):
 class MaxRetriesExceededError(Exception):
     """The tasks max restart limit has been exceeded."""
     """The tasks max restart limit has been exceeded."""
+    pass
 
 
 
 
 class RetryTaskError(Exception):
 class RetryTaskError(Exception):
@@ -40,3 +50,8 @@ class RetryTaskError(Exception):
     def __init__(self, message, exc, *args, **kwargs):
     def __init__(self, message, exc, *args, **kwargs):
         self.exc = exc
         self.exc = exc
         Exception.__init__(self, message, exc, *args, **kwargs)
         Exception.__init__(self, message, exc, *args, **kwargs)
+
+
+class TaskRevokedError(Exception):
+    """The task has been revoked, so no result available."""
+    pass
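A minimal sketch of how a task body might honour the soft time limit by catching the new exception to clean up; the task and helper functions are hypothetical, and the ``celery.decorators.task`` decorator is assumed to be available in this release:

    from celery.decorators import task
    from celery.exceptions import SoftTimeLimitExceeded

    @task
    def crunch(data, **kwargs):
        try:
            return expensive_work(data)          # hypothetical helper
        except SoftTimeLimitExceeded:
            cleanup_partial_results(data)        # hypothetical helper
            raise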

+ 11 - 2
celery/execute/__init__.py

@@ -5,6 +5,7 @@ from celery.execute.trace import TaskTrace
 from celery.registry import tasks
 from celery.registry import tasks
 from celery.messaging import with_connection
 from celery.messaging import with_connection
 from celery.messaging import TaskPublisher
 from celery.messaging import TaskPublisher
+from celery.datastructures import ExceptionInfo
 
 
 extract_exec_options = mattrgetter("routing_key", "exchange",
 extract_exec_options = mattrgetter("routing_key", "exchange",
                                    "immediate", "mandatory",
                                    "immediate", "mandatory",
@@ -136,6 +137,9 @@ def delay_task(task_name, *args, **kwargs):
 def apply(task, args, kwargs, **options):
 def apply(task, args, kwargs, **options):
     """Apply the task locally.
     """Apply the task locally.
 
 
+    :keyword throw: Re-raise task exceptions. Defaults to
+        the ``CELERY_EAGER_PROPAGATES_EXCEPTIONS`` setting.
+
     This will block until the task completes, and returns a
     This will block until the task completes, and returns a
     :class:`celery.result.EagerResult` instance.
     :class:`celery.result.EagerResult` instance.
 
 
@@ -144,6 +148,7 @@ def apply(task, args, kwargs, **options):
     kwargs = kwargs or {}
     kwargs = kwargs or {}
     task_id = options.get("task_id", gen_unique_id())
     task_id = options.get("task_id", gen_unique_id())
     retries = options.get("retries", 0)
     retries = options.get("retries", 0)
+    throw = options.pop("throw", conf.EAGER_PROPAGATES_EXCEPTIONS)
 
 
     task = tasks[task.name] # Make sure we get the instance, not class.
     task = tasks[task.name] # Make sure we get the instance, not class.
 
 
@@ -151,9 +156,9 @@ def apply(task, args, kwargs, **options):
                       "task_id": task_id,
                       "task_id": task_id,
                       "task_retries": retries,
                       "task_retries": retries,
                       "task_is_eager": True,
                       "task_is_eager": True,
-                      "logfile": None,
+                      "logfile": options.get("logfile"),
                       "delivery_info": {"is_eager": True},
                       "delivery_info": {"is_eager": True},
-                      "loglevel": 0}
+                      "loglevel": options.get("loglevel", 0)}
     supported_keys = fun_takes_kwargs(task.run, default_kwargs)
     supported_keys = fun_takes_kwargs(task.run, default_kwargs)
     extend_with = dict((key, val) for key, val in default_kwargs.items()
     extend_with = dict((key, val) for key, val in default_kwargs.items()
                             if key in supported_keys)
                             if key in supported_keys)
@@ -161,4 +166,8 @@ def apply(task, args, kwargs, **options):
 
 
     trace = TaskTrace(task.name, task_id, args, kwargs, task=task)
     trace = TaskTrace(task.name, task_id, args, kwargs, task=task)
     retval = trace.execute()
     retval = trace.execute()
+    if isinstance(retval, ExceptionInfo):
+        if throw:
+            raise retval.exception
+        retval = retval.exception
     return EagerResult(task_id, retval, trace.status, traceback=trace.strtb)
     return EagerResult(task_id, retval, trace.status, traceback=trace.strtb)
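A hedged sketch of the new ``throw`` behaviour when applying a task eagerly; ``mytask`` is hypothetical:

    from celery.execute import apply

    # With throw=True (or CELERY_EAGER_PROPAGATES_EXCEPTIONS enabled), an
    # exception raised inside the task is re-raised here instead of being
    # returned wrapped in the EagerResult.
    result = apply(mytask, [2, 2], {}, throw=True)
    print(result.get())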

+ 19 - 6
celery/execute/trace.py

@@ -67,9 +67,18 @@ class TaskTrace(object):
         trace = TraceInfo.trace(self.task, self.args, self.kwargs)
         trace = TraceInfo.trace(self.task, self.args, self.kwargs)
         self.status = trace.status
         self.status = trace.status
         self.strtb = trace.strtb
         self.strtb = trace.strtb
+        self.handle_after_return(trace.status, trace.retval,
+                                 trace.exc_type, trace.tb, trace.strtb)
         handler = self._trace_handlers[trace.status]
         handler = self._trace_handlers[trace.status]
         return handler(trace.retval, trace.exc_type, trace.tb, trace.strtb)
         return handler(trace.retval, trace.exc_type, trace.tb, trace.strtb)
 
 
+    def handle_after_return(self, status, retval, type_, tb, strtb):
+        einfo = None
+        if status in states.EXCEPTION_STATES:
+            einfo = ExceptionInfo((retval, type_, tb))
+        self.task.after_return(status, retval, self.task_id,
+                               self.args, self.kwargs, einfo=einfo)
+
     def handle_success(self, retval, *args):
     def handle_success(self, retval, *args):
         """Handle successful execution."""
         """Handle successful execution."""
         self.task.on_success(retval, self.task_id, self.args, self.kwargs)
         self.task.on_success(retval, self.task_id, self.args, self.kwargs)
@@ -77,7 +86,6 @@ class TaskTrace(object):
 
 
     def handle_retry(self, exc, type_, tb, strtb):
     def handle_retry(self, exc, type_, tb, strtb):
         """Handle retry exception."""
         """Handle retry exception."""
-        self.task.on_retry(exc, self.task_id, self.args, self.kwargs)
 
 
         # Create a simpler version of the RetryTaskError that stringifies
         # Create a simpler version of the RetryTaskError that stringifies
         # the original exception instead of including the exception instance.
         # the original exception instead of including the exception instance.
@@ -85,11 +93,16 @@ class TaskTrace(object):
         # guaranteeing pickleability.
         # guaranteeing pickleability.
         message, orig_exc = exc.args
         message, orig_exc = exc.args
         expanded_msg = "%s: %s" % (message, str(orig_exc))
         expanded_msg = "%s: %s" % (message, str(orig_exc))
-        return ExceptionInfo((type_,
-                              type_(expanded_msg, None),
-                              tb))
+        einfo = ExceptionInfo((type_,
+                               type_(expanded_msg, None),
+                               tb))
+        self.task.on_retry(exc, self.task_id,
+                           self.args, self.kwargs, einfo=einfo)
+        return einfo
 
 
     def handle_failure(self, exc, type_, tb, strtb):
     def handle_failure(self, exc, type_, tb, strtb):
         """Handle exception."""
         """Handle exception."""
-        self.task.on_failure(exc, self.task_id, self.args, self.kwargs)
-        return ExceptionInfo((type_, exc, tb))
+        einfo = ExceptionInfo((type_, exc, tb))
+        self.task.on_failure(exc, self.task_id,
+                             self.args, self.kwargs, einfo=einfo)
+        return einfo

+ 7 - 84
celery/loaders/__init__.py

@@ -1,104 +1,27 @@
 import os
 import os
-import string
-import warnings
-import importlib
 
 
-from carrot.utils import rpartition
+from celery.utils import get_cls_by_name
 
 
-from celery.utils import get_full_cls_name
-from celery.loaders.default import Loader as DefaultLoader
-from celery.loaders.djangoapp import Loader as DjangoLoader
-
-_DEFAULT_LOADER_CLASS_NAME = "Loader"
-LOADER_ALIASES = {"django": "celery.loaders.djangoapp.Loader",
-                  "default": "celery.loaders.default.Loader"}
-_loader_cache = {}
+LOADER_ALIASES = {"default": "celery.loaders.default.Loader",
+                  "django": "djcelery.loaders.DjangoLoader"}
 _loader = None
 _loader = None
 _settings = None
 _settings = None
 
 
 
 
-def first_letter(s):
-    for char in s:
-        if char in string.letters:
-            return char
-
-
-def resolve_loader(loader):
-    loader = LOADER_ALIASES.get(loader, loader)
-    loader_module_name, _, loader_cls_name = rpartition(loader, ".")
-    if first_letter(loader_cls_name) not in string.uppercase:
-        warnings.warn(DeprecationWarning(
-            "CELERY_LOADER now needs loader class name, e.g. %s.%s" % (
-                loader, _DEFAULT_LOADER_CLASS_NAME)))
-        return loader, _DEFAULT_LOADER_CLASS_NAME
-    return loader_module_name, loader_cls_name
-
-
-def _get_loader_cls(loader):
-    loader_module_name, loader_cls_name = resolve_loader(loader)
-    loader_module = importlib.import_module(loader_module_name)
-    return getattr(loader_module, loader_cls_name)
-
-
 def get_loader_cls(loader):
 def get_loader_cls(loader):
     """Get loader class by name/alias"""
     """Get loader class by name/alias"""
-    if loader not in _loader_cache:
-        _loader_cache[loader] = _get_loader_cls(loader)
-    return _loader_cache[loader]
-
-
-def detect_loader():
-    loader = os.environ.get("CELERY_LOADER")
-    if loader:
-        return get_loader_cls(loader)
-
-    loader = _detect_loader()
-    os.environ["CELERY_LOADER"] = get_full_cls_name(loader)
-
-    return loader
-
+    return get_cls_by_name(loader, LOADER_ALIASES)
 
 
-def _detect_loader(): # pragma: no cover
-    from django.conf import settings
-    if settings.configured:
-        return DjangoLoader
-    try:
-        # A settings module may be defined, but Django didn't attempt to
-        # load it yet. As an alternative to calling the private _setup(),
-        # we could also check whether DJANGO_SETTINGS_MODULE is set.
-        settings._setup()
-    except ImportError:
-        if not callable(getattr(os, "fork", None)):
-            # Platform doesn't support fork()
-            # XXX On systems without fork, multiprocessing seems to be
-            # launching the processes in some other way which does
-            # not copy the memory of the parent process. This means
-            # any configured env might be lost. This is a hack to make
-            # it work on Windows.
-            # A better way might be to use os.environ to set the currently
-            # used configuration method so to propogate it to the "child"
-            # processes. But this has to be experimented with.
-            # [asksol/heyman]
-            from django.core.management import setup_environ
-            try:
-                settings_mod = os.environ.get("DJANGO_SETTINGS_MODULE",
-                                                "settings")
-                project_settings = __import__(settings_mod, {}, {}, [''])
-                setup_environ(project_settings)
-                return DjangoLoader
-            except ImportError:
-                pass
-    else:
-        return DjangoLoader
 
 
-    return DefaultLoader
+def setup_loader():
+    return get_loader_cls(os.environ.setdefault("CELERY_LOADER", "default"))()
 
 
 
 
 def current_loader():
 def current_loader():
     """Detect and return the current loader."""
     """Detect and return the current loader."""
     global _loader
     global _loader
     if _loader is None:
     if _loader is None:
-        _loader = detect_loader()()
+        _loader = setup_loader()
     return _loader
     return _loader
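A small sketch of how the simplified loader lookup is driven by the ``CELERY_LOADER`` environment variable; the dotted path mentioned in the comment is a hypothetical example:

    import os
    from celery.loaders import current_loader

    # "default" resolves via LOADER_ALIASES to celery.loaders.default.Loader;
    # a dotted path such as "myproject.loaders.Loader" would also be accepted.
    os.environ.setdefault("CELERY_LOADER", "default")
    loader = current_loader()
    settings = loader.read_configuration()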
 
 
 
 

+ 1 - 0
celery/loaders/base.py

@@ -19,6 +19,7 @@ class BaseLoader(object):
     """
     """
     _conf_cache = None
     _conf_cache = None
     worker_initialized = False
     worker_initialized = False
+    override_backends = {}
 
 
     def on_task_init(self, task_id, task):
     def on_task_init(self, task_id, task):
         """This method is called before a task is executed."""
         """This method is called before a task is executed."""

+ 19 - 10
celery/loaders/default.py

@@ -1,4 +1,5 @@
 import os
 import os
+from importlib import import_module
 
 
 from celery.loaders.base import BaseLoader
 from celery.loaders.base import BaseLoader
 
 
@@ -6,9 +7,11 @@ DEFAULT_CONFIG_MODULE = "celeryconfig"
 
 
 DEFAULT_SETTINGS = {
 DEFAULT_SETTINGS = {
     "DEBUG": False,
     "DEBUG": False,
+    "ADMINS": (),
     "DATABASE_ENGINE": "sqlite3",
     "DATABASE_ENGINE": "sqlite3",
     "DATABASE_NAME": "celery.sqlite",
     "DATABASE_NAME": "celery.sqlite",
     "INSTALLED_APPS": ("celery", ),
     "INSTALLED_APPS": ("celery", ),
+    "CELERY_IMPORTS": (),
 }
 }
 
 
 
 
@@ -16,6 +19,18 @@ def wanted_module_item(item):
     return not item.startswith("_")
     return not item.startswith("_")
 
 
 
 
+class Settings(dict):
+
+    def __getattr__(self, key):
+        try:
+            return self[key]
+        except KeyError:
+            raise AttributeError(key)
+
+    def __setattr__(self, key, value):
+        self[key] = value
+
+
 class Loader(BaseLoader):
 class Loader(BaseLoader):
     """The default loader.
     """The default loader.
 
 
@@ -23,14 +38,8 @@ class Loader(BaseLoader):
 
 
     """
     """
 
 
-    def setup_django_env(self, settingsdict):
-        config = dict(DEFAULT_SETTINGS, **settingsdict)
-
-        from django.conf import settings
-        if not settings.configured:
-            settings.configure()
-        for config_key, config_value in config.items():
-            setattr(settings, config_key, config_value)
+    def setup_settings(self, settingsdict):
+        settings = Settings(DEFAULT_SETTINGS, **settingsdict)
         installed_apps = set(list(DEFAULT_SETTINGS["INSTALLED_APPS"]) + \
         installed_apps = set(list(DEFAULT_SETTINGS["INSTALLED_APPS"]) + \
                              list(settings.INSTALLED_APPS))
                              list(settings.INSTALLED_APPS))
         settings.INSTALLED_APPS = tuple(installed_apps)
         settings.INSTALLED_APPS = tuple(installed_apps)
@@ -42,11 +51,11 @@ class Loader(BaseLoader):
         celery and Django so it can be used by regular Python."""
         celery and Django so it can be used by regular Python."""
         configname = os.environ.get("CELERY_CONFIG_MODULE",
         configname = os.environ.get("CELERY_CONFIG_MODULE",
                                     DEFAULT_CONFIG_MODULE)
                                     DEFAULT_CONFIG_MODULE)
-        celeryconfig = __import__(configname, {}, {}, [''])
+        celeryconfig = import_module(configname)
         usercfg = dict((key, getattr(celeryconfig, key))
         usercfg = dict((key, getattr(celeryconfig, key))
                             for key in dir(celeryconfig)
                             for key in dir(celeryconfig)
                                 if wanted_module_item(key))
                                 if wanted_module_item(key))
-        return self.setup_django_env(usercfg)
+        return self.setup_settings(usercfg)
 
 
     def on_worker_init(self):
     def on_worker_init(self):
         """Imports modules at worker init so tasks can be registered
         """Imports modules at worker init so tasks can be registered

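For reference, a minimal sketch of the ``celeryconfig`` module this loader imports (the module name comes from ``CELERY_CONFIG_MODULE``, defaulting to ``celeryconfig``; the values below are illustrative):

    # celeryconfig.py -- every attribute not starting with an underscore
    # becomes a setting.
    CELERY_IMPORTS = ("myapp.tasks", )              # hypothetical module
    CELERY_RESULT_BACKEND = "database"
    CELERY_RESULT_DBURI = "sqlite:///celery.sqlite"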
+ 0 - 100
celery/loaders/djangoapp.py

@@ -1,100 +0,0 @@
-import imp
-import importlib
-
-from celery.loaders.base import BaseLoader
-
-_RACE_PROTECTION = False
-
-
-class Loader(BaseLoader):
-    """The Django loader."""
-    _db_reuse = 0
-
-    def read_configuration(self):
-        """Load configuration from Django settings."""
-        from django.conf import settings
-        return settings
-
-    def close_database(self):
-        from django.db import connection
-        db_reuse_max = getattr(self.conf, "CELERY_DB_REUSE_MAX", None)
-        if not db_reuse_max:
-            return connection.close()
-        if self._db_reuse >= db_reuse_max:
-            self._db_reuse = 0
-            return connection.close()
-        self._db_reuse += 1
-
-    def on_task_init(self, task_id, task):
-        """This method is called before a task is executed.
-
-        Does everything necessary for Django to work in a long-living,
-        multiprocessing environment.
-
-        """
-        # See http://groups.google.com/group/django-users/
-        #            browse_thread/thread/78200863d0c07c6d/
-        self.close_database()
-
-        # ## Reset cache connection only if using memcached/libmemcached
-        from django.core import cache
-        # XXX At Opera we use a custom memcached backend that uses
-        # libmemcached instead of libmemcache (cmemcache). Should find a
-        # better solution for this, but for now "memcached" should probably
-        # be unique enough of a string to not make problems.
-        cache_backend = cache.settings.CACHE_BACKEND
-        try:
-            parse_backend = cache.parse_backend_uri
-        except AttributeError:
-            parse_backend = lambda backend: backend.split(":", 1)
-        cache_scheme = parse_backend(cache_backend)[0]
-
-        if "memcached" in cache_scheme:
-            cache.cache.close()
-
-    def on_worker_init(self):
-        """Called when the worker starts.
-
-        Automatically discovers any ``tasks.py`` files in the applications
-        listed in ``INSTALLED_APPS``.
-
-        """
-        self.import_default_modules()
-        autodiscover()
-
-
-def autodiscover():
-    """Include tasks for all applications in :setting:`INSTALLED_APPS`."""
-    from django.conf import settings
-    global _RACE_PROTECTION
-
-    if _RACE_PROTECTION:
-        return
-    _RACE_PROTECTION = True
-    try:
-        return filter(None, [find_related_module(app, "tasks")
-                                for app in settings.INSTALLED_APPS])
-    finally:
-        _RACE_PROTECTION = False
-
-
-def find_related_module(app, related_name):
-    """Given an application name and a module name, tries to find that
-    module in the application."""
-
-    try:
-        app_path = importlib.import_module(app).__path__
-    except AttributeError:
-        return
-
-    try:
-        imp.find_module(related_name, app_path)
-    except ImportError:
-        return
-
-    module = importlib.import_module("%s.%s" % (app, related_name))
-
-    try:
-        return getattr(module, related_name)
-    except AttributeError:
-        return

+ 0 - 0
celery/management/commands/__init__.py


+ 0 - 18
celery/management/commands/camqadm.py

@@ -1,18 +0,0 @@
-"""
-
-Celery AMQP Administration Tool using the AMQP API.
-
-"""
-from django.core.management.base import BaseCommand
-
-from celery.bin.camqadm import camqadm, OPTION_LIST
-
-
-class Command(BaseCommand):
-    """Run the celery daemon."""
-    option_list = BaseCommand.option_list + OPTION_LIST
-    help = 'Celery AMQP Administration Tool using the AMQP API.'
-
-    def handle(self, *args, **options):
-        """Handle the management command."""
-        camqadm(*args, **options)

+ 0 - 18
celery/management/commands/celerybeat.py

@@ -1,18 +0,0 @@
-"""
-
-Start the celery clock service from the Django management command.
-
-"""
-from django.core.management.base import BaseCommand
-
-from celery.bin.celerybeat import run_clockservice, OPTION_LIST
-
-
-class Command(BaseCommand):
-    """Run the celery periodic task scheduler."""
-    option_list = BaseCommand.option_list + OPTION_LIST
-    help = 'Run the celery periodic task scheduler'
-
-    def handle(self, *args, **options):
-        """Handle the management command."""
-        run_clockservice(**options)

+ 0 - 18
celery/management/commands/celeryd.py

@@ -1,18 +0,0 @@
-"""
-
-Start the celery daemon from the Django management command.
-
-"""
-from django.core.management.base import BaseCommand
-
-from celery.bin.celeryd import run_worker, OPTION_LIST
-
-
-class Command(BaseCommand):
-    """Run the celery daemon."""
-    option_list = BaseCommand.option_list + OPTION_LIST
-    help = 'Run the celery daemon'
-
-    def handle(self, *args, **options):
-        """Handle the management command."""
-        run_worker(**options)

+ 0 - 37
celery/management/commands/celerymon.py

@@ -1,37 +0,0 @@
-"""
-
-Start the celery clock service from the Django management command.
-
-"""
-import sys
-from django.core.management.base import BaseCommand
-
-#try:
-from celerymonitor.bin.celerymond import run_monitor, OPTION_LIST
-#except ImportError:
-#    OPTION_LIST = ()
-#    run_monitor = None
-
-MISSING = """
-You don't have celerymon installed, please install it by running the following
-command:
-
-    $ easy_install celerymon
-
-or if you're using pip (like you should be):
-
-    $ pip install celerymon
-"""
-
-
-class Command(BaseCommand):
-    """Run the celery monitor."""
-    option_list = BaseCommand.option_list + OPTION_LIST
-    help = 'Run the celery monitor'
-
-    def handle(self, *args, **options):
-        """Handle the management command."""
-        if run_monitor is None:
-            sys.stderr.write(MISSING)
-        else:
-            run_monitor(**options)

+ 0 - 149
celery/managers.py

@@ -1,149 +0,0 @@
-from datetime import datetime
-from itertools import count
-
-from billiard.utils.functional import wraps
-
-from django.db import models
-from django.db import transaction
-from django.db.models.query import QuerySet
-
-
-def transaction_retry(max_retries=1):
-    """Decorator for methods doing database operations.
-
-    If the database operation fails, it will retry the operation
-    at most ``max_retries`` times.
-
-    """
-    def _outer(fun):
-
-        @wraps(fun)
-        def _inner(*args, **kwargs):
-            _max_retries = kwargs.pop("exception_retry_count", max_retries)
-            for retries in count(0):
-                try:
-                    return fun(*args, **kwargs)
-                except Exception: # pragma: no cover
-                    # Depending on the database backend used we can experience
-                    # various exceptions. E.g. psycopg2 raises an exception
-                    # if some operation breaks the transaction, so saving
-                    # the task result won't be possible until we rollback
-                    # the transaction.
-                    if retries >= _max_retries:
-                        raise
-                    transaction.rollback_unless_managed()
-
-        return _inner
-
-    return _outer
-
-
-def update_model_with_dict(obj, fields):
-    [setattr(obj, attr_name, attr_value)
-        for attr_name, attr_value in fields.items()]
-    obj.save()
-    return obj
-
-
-class ExtendedQuerySet(QuerySet):
-
-    def update_or_create(self, **kwargs):
-        obj, created = self.get_or_create(**kwargs)
-
-        if not created:
-            fields = dict(kwargs.pop("defaults", {}))
-            fields.update(kwargs)
-            update_model_with_dict(obj, fields)
-
-        return obj
-
-
-class ExtendedManager(models.Manager):
-
-    def get_query_set(self):
-        return ExtendedQuerySet(self.model)
-
-    def update_or_create(self, **kwargs):
-        return self.get_query_set().update_or_create(**kwargs)
-
-
-class ResultManager(ExtendedManager):
-
-    def get_all_expired(self):
-        """Get all expired task results."""
-        from celery import conf
-        expires = conf.TASK_RESULT_EXPIRES
-        return self.filter(date_done__lt=datetime.now() - expires)
-
-    def delete_expired(self):
-        """Delete all expired taskset results."""
-        self.get_all_expired().delete()
-
-
-class TaskManager(ResultManager):
-    """Manager for :class:`celery.models.Task` models."""
-
-    @transaction_retry(max_retries=1)
-    def get_task(self, task_id):
-        """Get task meta for task by ``task_id``.
-
-        :keyword exception_retry_count: How many times to retry by
-            transaction rollback on exception. This could theoretically
-            happen in a race condition if another worker is trying to
-            create the same task. The default is to retry once.
-
-        """
-        task, created = self.get_or_create(task_id=task_id)
-        return task
-
-    @transaction_retry(max_retries=2)
-    def store_result(self, task_id, result, status, traceback=None):
-        """Store the result and status of a task.
-
-        :param task_id: task id
-
-        :param result: The return value of the task, or an exception
-            instance raised by the task.
-
-        :param status: Task status. See
-            :meth:`celery.result.AsyncResult.get_status` for a list of
-            possible status values.
-
-        :keyword traceback: The traceback at the point of exception (if the
-            task failed).
-
-        :keyword exception_retry_count: How many times to retry by
-            transaction rollback on exception. This could theoretically
-            happen in a race condition if another worker is trying to
-            create the same task. The default is to retry twice.
-
-        """
-        return self.update_or_create(task_id=task_id, defaults={
-                                        "status": status,
-                                        "result": result,
-                                        "traceback": traceback})
-
-
-class TaskSetManager(ResultManager):
-    """Manager for :class:`celery.models.TaskSet` models."""
-
-
-    @transaction_retry(max_retries=1)
-    def restore_taskset(self, taskset_id):
-        """Get taskset meta for task by ``taskset_id``."""
-        try:
-            return self.get(taskset_id=taskset_id)
-        except self.model.DoesNotExist:
-            return None
-
-    @transaction_retry(max_retries=2)
-    def store_result(self, taskset_id, result):
-        """Store the result of a taskset.
-
-        :param taskset_id: task set id
-
-        :param result: The return value of the taskset
-
-        """
-        return self.update_or_create(taskset_id=taskset_id,
-                                     defaults={"result": result})

+ 68 - 4
celery/messaging.py

@@ -5,6 +5,7 @@ Sending and Receiving Messages
 """
 """
 import socket
 import socket
 from datetime import datetime, timedelta
 from datetime import datetime, timedelta
+from itertools import count
 
 
 from carrot.connection import DjangoBrokerConnection
 from carrot.connection import DjangoBrokerConnection
 from carrot.messaging import Publisher, Consumer, ConsumerSet as _ConsumerSet
 from carrot.messaging import Publisher, Consumer, ConsumerSet as _ConsumerSet
@@ -13,6 +14,7 @@ from billiard.utils.functional import wraps
 from celery import conf
 from celery import conf
 from celery import signals
 from celery import signals
 from celery.utils import gen_unique_id, mitemgetter, noop
 from celery.utils import gen_unique_id, mitemgetter, noop
+from celery.loaders import load_settings
 
 
 
 
 MSG_OPTIONS = ("mandatory", "priority",
 MSG_OPTIONS = ("mandatory", "priority",
@@ -21,7 +23,7 @@ MSG_OPTIONS = ("mandatory", "priority",
 
 
 get_msg_options = mitemgetter(*MSG_OPTIONS)
 get_msg_options = mitemgetter(*MSG_OPTIONS)
 extract_msg_options = lambda d: dict(zip(MSG_OPTIONS, get_msg_options(d)))
 extract_msg_options = lambda d: dict(zip(MSG_OPTIONS, get_msg_options(d)))
-default_queue = conf.routing_table[conf.DEFAULT_QUEUE]
+default_queue = conf.get_routing_table()[conf.DEFAULT_QUEUE]
 
 
 _queues_declared = False
 _queues_declared = False
 _exchanges_declared = {}
 _exchanges_declared = {}
@@ -57,6 +59,13 @@ class TaskPublisher(Publisher):
         if countdown: # Convert countdown to ETA.
         if countdown: # Convert countdown to ETA.
             eta = datetime.now() + timedelta(seconds=countdown)
             eta = datetime.now() + timedelta(seconds=countdown)
 
 
+        task_args = task_args or []
+        task_kwargs = task_kwargs or {}
+        if not isinstance(task_args, (list, tuple)):
+            raise ValueError("task args must be a list or tuple")
+        if not isinstance(task_kwargs, dict):
+            raise ValueError("task kwargs must be a dictionary")
+
         message_data = {
         message_data = {
             "task": task_name,
             "task": task_name,
             "id": task_id,
             "id": task_id,
@@ -117,6 +126,7 @@ class EventPublisher(Publisher):
     exchange = conf.EVENT_EXCHANGE
     exchange = conf.EVENT_EXCHANGE
     exchange_type = conf.EVENT_EXCHANGE_TYPE
     exchange_type = conf.EVENT_EXCHANGE_TYPE
     routing_key = conf.EVENT_ROUTING_KEY
     routing_key = conf.EVENT_ROUTING_KEY
+    serializer = conf.EVENT_SERIALIZER
 
 
 
 
 class EventConsumer(Consumer):
 class EventConsumer(Consumer):
@@ -128,15 +138,60 @@ class EventConsumer(Consumer):
     no_ack = True
     no_ack = True
 
 
 
 
+class ControlReplyConsumer(Consumer):
+    exchange = "celerycrq"
+    exchange_type = "direct"
+    durable = False
+    exclusive = False
+    auto_delete = True
+    no_ack = True
+
+    def __init__(self, connection, ticket, **kwargs):
+        self.ticket = ticket
+        queue = "%s.%s" % (self.exchange, ticket)
+        super(ControlReplyConsumer, self).__init__(connection,
+                                                   queue=queue,
+                                                   routing_key=ticket,
+                                                   **kwargs)
+
+    def collect(self, limit=None, timeout=1):
+        responses = []
+
+        def callback(message_data, message):
+            responses.append(message_data)
+
+        self.callbacks = [callback]
+        self.consume()
+        for i in limit and range(limit) or count():
+            try:
+                self.connection.drain_events(timeout=timeout)
+            except socket.timeout:
+                break
+
+        return responses
+
+
+class ControlReplyPublisher(Publisher):
+    exchange = "celerycrq"
+    exchange_type = "direct"
+    delivery_mode = "non-persistent"
+
+
 class BroadcastPublisher(Publisher):
 class BroadcastPublisher(Publisher):
     """Publish broadcast commands"""
     """Publish broadcast commands"""
+
+    ReplyTo = ControlReplyConsumer
+
     exchange = conf.BROADCAST_EXCHANGE
     exchange = conf.BROADCAST_EXCHANGE
     exchange_type = conf.BROADCAST_EXCHANGE_TYPE
     exchange_type = conf.BROADCAST_EXCHANGE_TYPE
 
 
-    def send(self, type, arguments, destination=None):
+    def send(self, type, arguments, destination=None, reply_ticket=None):
         """Send broadcast command."""
         """Send broadcast command."""
         arguments["command"] = type
         arguments["command"] = type
         arguments["destination"] = destination
         arguments["destination"] = destination
+        if reply_ticket:
+            arguments["reply_to"] = {"exchange": self.ReplyTo.exchange,
+                                     "routing_key": reply_ticket}
         super(BroadcastPublisher, self).send({"control": arguments})
         super(BroadcastPublisher, self).send({"control": arguments})
 
 
 
 
@@ -155,7 +210,8 @@ class BroadcastConsumer(Consumer):
 
 
 def establish_connection(connect_timeout=conf.BROKER_CONNECTION_TIMEOUT):
 def establish_connection(connect_timeout=conf.BROKER_CONNECTION_TIMEOUT):
     """Establish a connection to the message broker."""
     """Establish a connection to the message broker."""
-    return DjangoBrokerConnection(connect_timeout=connect_timeout)
+    return DjangoBrokerConnection(connect_timeout=connect_timeout,
+                                  settings=load_settings())
 
 
 
 
 def with_connection(fun):
 def with_connection(fun):
@@ -186,7 +242,7 @@ def get_consumer_set(connection, queues=None, **options):
     Defaults to the queues in ``CELERY_QUEUES``.
     Defaults to the queues in ``CELERY_QUEUES``.
 
 
     """
     """
-    queues = queues or conf.routing_table
+    queues = queues or conf.get_routing_table()
     cset = ConsumerSet(connection)
     cset = ConsumerSet(connection)
     for queue_name, queue_options in queues.items():
     for queue_name, queue_options in queues.items():
         queue_options = dict(queue_options)
         queue_options = dict(queue_options)
@@ -195,3 +251,11 @@ def get_consumer_set(connection, queues=None, **options):
                             backend=cset.backend, **queue_options)
                             backend=cset.backend, **queue_options)
         cset.consumers.append(consumer)
         cset.consumers.append(consumer)
     return cset
     return cset
+
+
+@with_connection
+def reply(data, exchange, routing_key, connection=None, connect_timeout=None,
+        **kwargs):
+    pub = Publisher(connection, exchange=exchange,
+                    routing_key=routing_key, **kwargs)
+    pub.send(data)
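A rough sketch of the reply round-trip these classes enable, assuming a running worker that answers the (hypothetical) ``"ping"`` control command:

    from celery.messaging import (establish_connection, BroadcastPublisher,
                                  ControlReplyConsumer)
    from celery.utils import gen_unique_id

    connection = establish_connection()
    ticket = gen_unique_id()
    # Consumer bound to this ticket's reply queue; workers publish replies
    # to the "celerycrq" exchange using the ticket as routing key.
    replies = ControlReplyConsumer(connection, ticket)
    BroadcastPublisher(connection).send("ping", {}, reply_ticket=ticket)
    print(replies.collect(limit=1, timeout=2))
    connection.close()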

+ 27 - 51
celery/models.py

@@ -1,67 +1,43 @@
-import django
-from django.db import models
-from django.utils.translation import ugettext_lazy as _
+"""
 
 
-from picklefield.fields import PickledObjectField
+celery.models has been moved to djcelery.models.
 
 
-from celery import conf
-from celery import states
-from celery.managers import TaskManager, TaskSetManager
+This file is deprecated and will be removed in Celery v1.4.0.
 
 
-TASK_STATUSES_CHOICES = zip(states.ALL_STATES, states.ALL_STATES)
+"""
+from django.core.exceptions import ImproperlyConfigured
 
 
+raise ImproperlyConfigured("""
 
 
-class TaskMeta(models.Model):
-    """Task result/status."""
-    task_id = models.CharField(_(u"task id"), max_length=255, unique=True)
-    status = models.CharField(_(u"task status"), max_length=50,
-            default=states.PENDING, choices=TASK_STATUSES_CHOICES)
-    result = PickledObjectField(null=True)
-    date_done = models.DateTimeField(_(u"done at"), auto_now=True)
-    traceback = models.TextField(_(u"traceback"), blank=True, null=True)
+======================================================
+ERROR: celery can't be added to INSTALLED_APPS anymore
+======================================================
 
 
-    objects = TaskManager()
+Please install the django-celery package and add:
 
 
-    class Meta:
-        """Model meta-data."""
-        verbose_name = _(u"task meta")
-        verbose_name_plural = _(u"task meta")
+    INSTALLED_APPS = "djcelery"
 
 
-    def to_dict(self):
-        return {"task_id": self.task_id,
-                "status": self.status,
-                "result": self.result,
-                "date_done": self.date_done,
-                "traceback": self.traceback}
+To install django-celery you can do one of the following:
 
 
-    def __unicode__(self):
-        return u"<Task: %s successful: %s>" % (self.task_id, self.status)
+* Download from PyPI:
 
 
+    http://pypi.python.org/pypi/django-celery
 
 
-class TaskSetMeta(models.Model):
-    """TaskSet result"""
-    taskset_id = models.CharField(_(u"task id"), max_length=255, unique=True)
-    result = PickledObjectField()
-    date_done = models.DateTimeField(_(u"done at"), auto_now=True)
+* Install with pip:
 
 
-    objects = TaskSetManager()
+    pip install django-celery
 
 
-    class Meta:
-        """Model meta-data."""
-        verbose_name = _(u"taskset meta")
-        verbose_name_plural = _(u"taskset meta")
+* Install with easy_install:
 
 
-    def to_dict(self):
-        return {"taskset_id": self.taskset_id,
-                "result": self.result,
-                "date_done": self.date_done}
+    easy_install django-celery
 
 
-    def __unicode__(self):
-        return u"<TaskSet: %s>" % (self.taskset_id)
+* Clone the development repository:
 
 
-if (django.VERSION[0], django.VERSION[1]) >= (1, 1):
-    # keep models away from syncdb/reset if database backend is not
-    # being used.
-    if conf.RESULT_BACKEND != 'database':
-        TaskMeta._meta.managed = False
-        TaskSetMeta._meta.managed = False
+    http://github.com/ask/django-celery
+
+
+If you weren't aware of this already you should read the
+Celery 1.2.0 Changelog as well:
+    http://github.com/ask/celery/tree/djangofree/Changelog
+
+""")
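Editor's note: for reference, the conventional Django form of that setting lists the app inside the ``INSTALLED_APPS`` tuple. A hedged sketch (project layout and other app names are placeholders, not part of this change):

    # settings.py of a hypothetical Django project
    INSTALLED_APPS = (
        "django.contrib.contenttypes",
        "django.contrib.sessions",
        "djcelery",
    )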

+ 8 - 8
celery/result.py

@@ -69,8 +69,8 @@ class BaseAsyncResult(object):
 
     def ready(self):
         """Returns ``True`` if the task executed successfully, or raised
-        an exception. If the task is still running, pending, or is waiting for retry
-        then ``False`` is returned.
+        an exception. If the task is still running, pending, or is waiting
+        for retry then ``False`` is returned.
 
         :rtype: bool
 
@@ -176,8 +176,8 @@ class TaskSetResult(object):
     """Working with :class:`celery.task.TaskSet` results.
 
     An instance of this class is returned by
-    :meth:`celery.task.TaskSet.run()`. It lets you inspect the status and
-    return values of the taskset as a single entity.
+    :meth:`celery.task.TaskSet.apply_async()`. It lets you inspect the
+    status and return values of the taskset as a single entity.
 
     :option taskset_id: see :attr:`taskset_id`.
     :option subtasks: see :attr:`subtasks`.
@@ -283,7 +283,7 @@ class TaskSetResult(object):
                     except ValueError:
                         pass
                     yield result.result
-                elif result.status == states.FAILURE:
+                elif result.status in states.PROPAGATE_STATES:
                     raise result.result
 
     def join(self, timeout=None):
@@ -315,7 +315,7 @@ class TaskSetResult(object):
             for position, pending_result in enumerate(self.subtasks):
                 if pending_result.status == states.SUCCESS:
                     results[position] = pending_result.result
-                elif pending_result.status == states.FAILURE:
+                elif pending_result.status in states.PROPAGATE_STATES:
                     raise pending_result.result
             if results.full():
                 # Make list copy, so the returned type is not a position
@@ -370,8 +370,8 @@ class EagerResult(BaseAsyncResult):
         """Wait until the task has been executed and return its result."""
         if self.status == states.SUCCESS:
             return self.result
-        elif self.status == states.FAILURE:
-            raise self.result.exception
+        elif self.status in states.PROPAGATE_STATES:
+            raise self.result
 
     def revoke(self):
         pass
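Editor's note: with the switch to ``PROPAGATE_STATES``, a revoked subtask now raises from ``join()``/``iterate()`` just like a failed one. A hedged sketch of the caller-side handling (the task module and task name are illustrative only):

    from celery.task import TaskSet
    from myapp.tasks import add    # hypothetical task

    result = TaskSet(add, [((2, 2), {}), ((4, 4), {})]).apply_async()
    try:
        values = result.join(timeout=10)
    except Exception, exc:
        # Raised for any subtask that ended in FAILURE *or* REVOKED.
        print("taskset did not complete: %r" % (exc, ))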

+ 2 - 1
celery/signals.py

@@ -1,4 +1,4 @@
-from django.dispatch import Signal
+from celery.utils.dispatch import Signal
 
 task_sent = Signal(providing_args=["task_id", "task",
                                    "args", "kwargs",
@@ -11,5 +11,6 @@ task_postrun = Signal(providing_args=["task_id", "task",
                                       "args", "kwargs", "retval"])
 
 worker_init = Signal(providing_args=[])
+worker_process_init = Signal(providing_args=[])
 worker_ready = Signal(providing_args=[])
 worker_shutdown = Signal(providing_args=[])
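Editor's note: a hedged sketch of hooking the new signal; the receiver name is made up, and the signal is presumably dispatched in each pool worker process as it starts:

    from celery.signals import worker_process_init

    def on_pool_process_init(**kwargs):
        # e.g. re-establish per-process resources such as database connections
        pass

    worker_process_init.connect(on_pool_process_init)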

+ 13 - 5
celery/states.py

@@ -20,11 +20,16 @@
 
     Task is being retried.
 
+.. data:: REVOKED
+
+    Task has been revoked.
+
 """
 PENDING = "PENDING"
 STARTED = "STARTED"
 SUCCESS = "SUCCESS"
 FAILURE = "FAILURE"
+REVOKED = "REVOKED"
 RETRY = "RETRY"
 
 
@@ -41,15 +46,18 @@ RETRY = "RETRY"
 
     Set of states meaning the task returned an exception.
 
+.. data:: PROPAGATE_STATES
+
+    Set of exception states that should propagate exceptions to the user.
+
 .. data:: ALL_STATES
 
     Set of all possible states.
 
 """
-READY_STATES = frozenset([SUCCESS, FAILURE])
+READY_STATES = frozenset([SUCCESS, FAILURE, REVOKED])
 UNREADY_STATES = frozenset([PENDING, STARTED, RETRY])
-EXCEPTION_STATES = frozenset([RETRY, FAILURE])
-
-ALL_STATES = frozenset([PENDING, STARTED, SUCCESS, FAILURE, RETRY])
-
+EXCEPTION_STATES = frozenset([RETRY, FAILURE, REVOKED])
+PROPAGATE_STATES = frozenset([FAILURE, REVOKED])
 
+ALL_STATES = frozenset([PENDING, STARTED, SUCCESS, FAILURE, RETRY, REVOKED])
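Editor's note: the relationships between the sets can be read straight off the definitions above, for example:

    from celery import states

    assert states.REVOKED in states.READY_STATES
    assert states.REVOKED in states.PROPAGATE_STATES
    assert states.RETRY in states.EXCEPTION_STATES
    assert states.RETRY not in states.PROPAGATE_STATES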

+ 84 - 70
celery/task/base.py

@@ -1,13 +1,12 @@
 import sys
-import warnings
-from datetime import datetime, timedelta
-from Queue import Queue
+from datetime import timedelta
 
 from billiard.serialization import pickle
 
 from celery import conf
 from celery.log import setup_task_logger
-from celery.utils import gen_unique_id, padlist, timedelta_seconds
+from celery.utils import gen_unique_id, padlist
+from celery.utils.timeutils import timedelta_seconds
 from celery.result import BaseAsyncResult, TaskSetResult, EagerResult
 from celery.execute import apply_async, apply
 from celery.registry import tasks
@@ -16,6 +15,8 @@ from celery.messaging import TaskPublisher, TaskConsumer
 from celery.messaging import establish_connection as _establish_connection
 from celery.exceptions import MaxRetriesExceededError, RetryTaskError
 
+from celery.task.schedules import schedule
+
 
 class TaskType(type):
     """Metaclass for tasks.
@@ -64,6 +65,9 @@ class Task(object):
     The :meth:`run` method can take use of the default keyword arguments,
     as listed in the :meth:`run` documentation.
 
+    The resulting class is callable, which if called will apply the
+    :meth:`run` method.
+
     .. attribute:: name
         Name of the task.
 
@@ -106,6 +110,7 @@ class Task(object):
         can't be routed to a worker immediately.
 
     .. attribute:: priority:
+
         The message priority. A number from ``0`` to ``9``, where ``0`` is the
         highest. Note that RabbitMQ doesn't support priorities yet.
 
@@ -125,12 +130,6 @@ class Task(object):
         limit), ``"100/s"`` (hundred tasks a second), ``"100/m"`` (hundred
         tasks a minute), ``"100/h"`` (hundred tasks an hour)
 
-    .. attribute:: rate_limit_queue_type
-
-        Type of queue used by the rate limiter for this kind of tasks.
-        Default is a :class:`Queue.Queue`, but you can change this to
-        a :class:`Queue.LifoQueue` or an invention of your own.
-
     .. attribute:: ignore_result
 
         Don't store the return value of this task.
@@ -166,8 +165,18 @@ class Task(object):
         The global default can be overridden by the ``CELERY_TRACK_STARTED``
         setting.
 
 
-    The resulting class is callable, which if called will apply the
-    :meth:`run` method.
+    .. attribute:: acks_late
+
+        If set to ``True`` messages for this task will be acknowledged
+        **after** the task has been executed, not *just before*, which is
+        the default behavior.
+
+        Note that this means the task may be executed twice if the worker
+        crashes in the middle of execution, which may be acceptable for some
+        applications.
+
+        The global default can be overridden by the ``CELERY_ACKS_LATE``
+        setting.
 
     """
     __metaclass__ = TaskType
@@ -187,11 +196,11 @@ class Task(object):
     default_retry_delay = 3 * 60
     serializer = conf.TASK_SERIALIZER
     rate_limit = conf.DEFAULT_RATE_LIMIT
-    rate_limit_queue_type = Queue
     backend = default_backend
     exchange_type = conf.DEFAULT_EXCHANGE_TYPE
     delivery_mode = conf.DEFAULT_DELIVERY_MODE
     track_started = conf.TRACK_STARTED
+    acks_late = conf.ACKS_LATE
 
     MaxRetriesExceededError = MaxRetriesExceededError
 
@@ -403,7 +412,7 @@ class Task(object):
         """
         return BaseAsyncResult(task_id, backend=self.backend)
 
-    def on_retry(self, exc, task_id, args, kwargs):
+    def on_retry(self, exc, task_id, args, kwargs, einfo=None):
         """Retry handler.
 
         This is run by the worker when the task is to be retried.
@@ -413,12 +422,32 @@ class Task(object):
         :param args: Original arguments for the retried task.
         :param kwargs: Original keyword arguments for the retried task.
 
+        :keyword einfo: :class:`celery.datastructures.ExceptionInfo` instance,
+           containing the traceback.
+
+        The return value of this handler is ignored.
+
+        """
+        pass
+
+    def after_return(self, status, retval, task_id, args, kwargs, einfo=None):
+        """Handler called after the task returns.
+
+        :param status: Current task state.
+        :param retval: Task return value/exception.
+        :param task_id: Unique id of the task.
+        :param args: Original arguments for the task that failed.
+        :param kwargs: Original keyword arguments for the task that failed.
+
+        :keyword einfo: :class:`celery.datastructures.ExceptionInfo` instance,
+           containing the traceback (if any).
+
         The return value of this handler is ignored.
 
         """
         pass
 
-    def on_failure(self, exc, task_id, args, kwargs):
+    def on_failure(self, exc, task_id, args, kwargs, einfo=None):
         """Error handler.
 
         This is run by the worker when the task fails.
@@ -428,6 +457,9 @@ class Task(object):
         :param args: Original arguments for the task that failed.
         :param kwargs: Original keyword arguments for the task that failed.
 
+        :keyword einfo: :class:`celery.datastructures.ExceptionInfo` instance,
+           containing the traceback.
+
         The return value of this handler is ignored.
 
         """
@@ -553,13 +585,6 @@ class TaskSet(object):
         self.arguments = args
         self.total = len(args)
 
-    def run(self, *args, **kwargs):
-        """Deprecated alias to :meth:`apply_async`"""
-        warnings.warn(DeprecationWarning(
-            "TaskSet.run will be deprecated in favor of TaskSet.apply_async "
-            "in celery v1.2.0"))
-        return self.apply_async(*args, **kwargs)
-
     def apply_async(self, connect_timeout=conf.BROKER_CONNECTION_TIMEOUT):
         """Run all tasks in the taskset.
 
@@ -654,8 +679,8 @@ class PeriodicTask(Task):
     .. attribute:: run_every
 
         *REQUIRED* Defines how often the task is run (its interval),
-        it can be either a :class:`datetime.timedelta` object or an
-        integer specifying the time in seconds.
+        it can be a :class:`datetime.timedelta` object, a :class:`crontab`
+        object or an integer specifying the time in seconds.
 
     .. attribute:: relative
 
@@ -670,12 +695,36 @@ class PeriodicTask(Task):
 
         >>> from celery.task import tasks, PeriodicTask
         >>> from datetime import timedelta
-        >>> class MyPeriodicTask(PeriodicTask):
+        >>> class EveryThirtySecondsTask(PeriodicTask):
         ...     run_every = timedelta(seconds=30)
        ...
         ...     def run(self, **kwargs):
         ...         logger = self.get_logger(**kwargs)
-        ...         logger.info("Running MyPeriodicTask")
+        ...         logger.info("Execute every 30 seconds")
+
+        >>> from celery.task import PeriodicTask
+        >>> from celery.task.schedules import crontab
+
+        >>> class EveryMondayMorningTask(PeriodicTask):
+        ...     run_every = crontab(hour=7, minute=30, day_of_week=1)
+        ...
+        ...     def run(self, **kwargs):
+        ...         logger = self.get_logger(**kwargs)
+        ...         logger.info("Execute every Monday at 7:30AM.")
+
+        >>> class EveryMorningTask(PeriodicTask):
+        ...     run_every = crontab(hour=7, minute=30)
+        ...
+        ...     def run(self, **kwargs):
+        ...         logger = self.get_logger(**kwargs)
+        ...         logger.info("Execute every day at 7:30AM.")
+
+        >>> class EveryQuarterPastTheHourTask(PeriodicTask):
+        ...     run_every = crontab(minute=15)
+        ...
+        ...     def run(self, **kwargs):
+        ...         logger = self.get_logger(**kwargs)
+        ...         logger.info("Execute every 0:15 past the hour every day.")
 
     """
     abstract = True
@@ -694,14 +743,12 @@ class PeriodicTask(Task):
         if isinstance(self.__class__.run_every, int):
             self.__class__.run_every = timedelta(seconds=self.run_every)
 
-        super(PeriodicTask, self).__init__()
+        # Convert timedelta to instance of schedule.
+        if isinstance(self.__class__.run_every, timedelta):
+            self.__class__.run_every = schedule(self.__class__.run_every,
+                                                self.relative)
 
-    def remaining_estimate(self, last_run_at):
-        """Returns when the periodic task should run next as a timedelta."""
-        next_run_at = last_run_at + self.run_every
-        if not self.relative:
-            next_run_at = self.delta_resolution(next_run_at, self.run_every)
-        return next_run_at - datetime.now()
+        super(PeriodicTask, self).__init__()
 
     def timedelta_seconds(self, delta):
         """Convert :class:`datetime.timedelta` to seconds.
@@ -732,41 +779,8 @@ class PeriodicTask(Task):
         responsiveness if of importance to you.
 
         """
-        rem_delta = self.remaining_estimate(last_run_at)
-        rem = self.timedelta_seconds(rem_delta)
-        if rem == 0:
-            return True, self.timedelta_seconds(self.run_every)
-        return False, rem
-
-    def delta_resolution(self, dt, delta):
-        """Round a datetime to the resolution of a timedelta.
-
-        If the timedelta is in days, the datetime will be rounded
-        to the nearest days, if the timedelta is in hours the datetime
-        will be rounded to the nearest hour, and so on until seconds
-        which will just return the original datetime.
-
-            >>> now = datetime.now()
-            >>> now
-            datetime.datetime(2010, 3, 30, 11, 50, 58, 41065)
-            >>> delta_resolution(now, timedelta(days=2))
-            datetime.datetime(2010, 3, 30, 0, 0)
-            >>> delta_resolution(now, timedelta(hours=2))
-            datetime.datetime(2010, 3, 30, 11, 0)
-            >>> delta_resolution(now, timedelta(minutes=2))
-            datetime.datetime(2010, 3, 30, 11, 50)
-            >>> delta_resolution(now, timedelta(seconds=2))
-            datetime.datetime(2010, 3, 30, 11, 50, 58, 41065)
-
-        """
-        delta = self.timedelta_seconds(delta)
+        return self.run_every.is_due(last_run_at)
 
-        resolutions = ((3, lambda x: x / 86400),
-                       (4, lambda x: x / 3600),
-                       (5, lambda x: x / 60))
-
-        args = dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second
-        for res, predicate in resolutions:
-            if predicate(delta) >= 1.0:
-                return datetime(*args[:res])
-        return dt
+    def remaining_estimate(self, last_run_at):
+        """Returns when the periodic task should run next as a timedelta."""
+        return self.run_every.remaining_estimate(last_run_at)
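Editor's note: a short sketch of a task using the new ``acks_late`` attribute and ``after_return`` handler; the class and task name are illustrative, not part of this change:

    from celery.task.base import Task

    class ProcessOrderTask(Task):
        name = "orders.process"      # hypothetical task name
        acks_late = True             # ack the message only after run() returns

        def run(self, order_id, **kwargs):
            logger = self.get_logger(**kwargs)
            logger.info("Processing order %s" % (order_id, ))

        def after_return(self, status, retval, task_id, args, kwargs,
                einfo=None):
            # einfo is an ExceptionInfo instance when the task failed or
            # is being retried, otherwise None.
            pass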

+ 49 - 9
celery/task/control.py

@@ -1,5 +1,6 @@
 from celery import conf
-from celery.messaging import BroadcastPublisher
+from celery.utils import gen_unique_id
+from celery.messaging import BroadcastPublisher, ControlReplyConsumer
 from celery.messaging import with_connection, get_consumer_set
 
 
@@ -21,8 +22,7 @@ def discard_all(connection=None,
         consumers.close()
 
 
-def revoke(task_id, destination=None, connection=None,
-        connect_timeout=conf.BROKER_CONNECTION_TIMEOUT):
+def revoke(task_id, destination=None, **kwargs):
     """Revoke a task by id.
 
     If a task is revoked, the workers will ignore the task and not execute
@@ -35,14 +35,36 @@ def revoke(task_id, destination=None, connection=None,
         a connection will be established automatically.
     :keyword connect_timeout: Timeout for new connection if a custom
         connection is not provided.
+    :keyword reply: Wait for and return the reply.
+    :keyword timeout: Timeout in seconds to wait for the reply.
+    :keyword limit: Limit number of replies.
 
     """
     return broadcast("revoke", destination=destination,
-                               arguments={"task_id": task_id})
+                               arguments={"task_id": task_id}, **kwargs)
 
 
-def rate_limit(task_name, rate_limit, destination=None, connection=None,
-        connect_timeout=conf.BROKER_CONNECTION_TIMEOUT):
+def ping(destination=None, timeout=1, **kwargs):
+    """Ping workers.
+
+    Returns answer from alive workers.
+
+    :keyword destination: If set, a list of the hosts to send the command to,
+        when empty broadcast to all workers.
+    :keyword connection: Custom broker connection to use, if not set,
+        a connection will be established automatically.
+    :keyword connect_timeout: Timeout for new connection if a custom
+        connection is not provided.
+    :keyword reply: Wait for and return the reply.
+    :keyword timeout: Timeout in seconds to wait for the reply.
+    :keyword limit: Limit number of replies.
+
+    """
+    return broadcast("ping", reply=True, destination=destination,
+                     timeout=timeout, **kwargs)
+
+
+def rate_limit(task_name, rate_limit, destination=None, **kwargs):
     """Set rate limit for task by type.
 
     :param task_name: Type of task to change rate limit for.
@@ -55,16 +77,21 @@ def rate_limit(task_name, rate_limit, destination=None, connection=None,
         a connection will be established automatically.
     :keyword connect_timeout: Timeout for new connection if a custom
         connection is not provided.
+    :keyword reply: Wait for and return the reply.
+    :keyword timeout: Timeout in seconds to wait for the reply.
+    :keyword limit: Limit number of replies.
 
     """
     return broadcast("rate_limit", destination=destination,
                                    arguments={"task_name": task_name,
-                                              "rate_limit": rate_limit})
+                                              "rate_limit": rate_limit},
+                                   **kwargs)
 
 
 @with_connection
 def broadcast(command, arguments=None, destination=None, connection=None,
-        connect_timeout=conf.BROKER_CONNECTION_TIMEOUT):
+        connect_timeout=conf.BROKER_CONNECTION_TIMEOUT, reply=False,
+        timeout=1, limit=None):
     """Broadcast a control command to the celery workers.
 
     :param command: Name of command to send.
@@ -75,12 +102,25 @@ def broadcast(command, arguments=None, destination=None, connection=None,
         a connection will be established automatically.
     :keyword connect_timeout: Timeout for new connection if a custom
         connection is not provided.
+    :keyword reply: Wait for and return the reply.
+    :keyword timeout: Timeout in seconds to wait for the reply.
+    :keyword limit: Limit number of replies.
 
     """
     arguments = arguments or {}
+    reply_ticket = reply and gen_unique_id() or None
+
 
     broadcast = BroadcastPublisher(connection)
     try:
-        broadcast.send(command, arguments, destination=destination)
+        broadcast.send(command, arguments, destination=destination,
+                       reply_ticket=reply_ticket)
     finally:
         broadcast.close()
+
+    if reply_ticket:
+        crq = ControlReplyConsumer(connection, reply_ticket)
+        try:
+            return crq.collect(limit=limit, timeout=timeout)
+        finally:
+            crq.close()
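Editor's note: typical use of the new reply-aware control commands, as a sketch; the task name and task id shown are placeholders, and the shape of the replies depends on the worker side:

    from celery.task.control import broadcast, ping, rate_limit, revoke

    replies = ping(timeout=2)        # one entry per worker that answered
    rate_limit("tasks.add", "10/s", reply=True, timeout=2)
    revoke("d9078da5-9915-40a0-bfa1-392c7bde42ed")

    # Any command can ask for replies explicitly:
    replies = broadcast("ping", reply=True, timeout=2, limit=None)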

+ 0 - 19
celery/task/rest.py

@@ -1,19 +0,0 @@
-from celery.task.http import (InvalidResponseError, RemoteExecuteError,
-                              UnknownStatusError)
-from celery.task.http import URL
-from celery.task.http import HttpDispatch as RESTProxy
-from celery.task.http import HttpDispatchTask as RESTProxyTask
-
-import warnings
-warnings.warn(DeprecationWarning(
-"""celery.task.rest has been deprecated and is scheduled for removal in
-v1.2. Please use celery.task.http instead.
-
-The following objects has been renamed:
-
-    celery.task.rest.RESTProxy -> celery.task.http.HttpDispatch
-    celery.task.rest.RESTProxyTask -> celery.task.http.HttpDispatchTask
-
-Other objects have the same name, just moved to the celery.task.http module.
-
-"""))

+ 93 - 0
celery/task/schedules.py

@@ -0,0 +1,93 @@
+from datetime import datetime
+
+from celery.utils.timeutils import timedelta_seconds, weekday, remaining
+
+
+class schedule(object):
+    relative = False
+
+    def __init__(self, run_every=None, relative=False):
+        self.run_every = run_every
+        self.relative = relative
+
+    def remaining_estimate(self, last_run_at):
+        """Returns when the periodic task should run next as a timedelta."""
+        return remaining(last_run_at, self.run_every, relative=self.relative)
+
+    def is_due(self, last_run_at):
+        """Returns tuple of two items ``(is_due, next_time_to_run)``,
+        where next time to run is in seconds.
+
+        See :meth:`celery.task.base.PeriodicTask.is_due` for more information.
+
+        """
+        rem_delta = self.remaining_estimate(last_run_at)
+        rem = timedelta_seconds(rem_delta)
+        if rem == 0:
+            return True, timedelta_seconds(self.run_every)
+        return False, rem
+
+
+class crontab(schedule):
+    """A crontab can be used as the ``run_every`` value of a
+    :class:`PeriodicTask` to add cron-like scheduling.
+
+    Like a :manpage:`cron` job, you can specify units of time of when
+    you would like the task to execute. While not a full implementation
+    of cron's features, it should provide a fair degree of common scheduling
+    needs.
+
+    You can specify a minute, an hour, and/or a day of the week.
+
+    .. attribute:: minute
+
+        An integer from 0-59 that represents the minute of an hour of when
+        execution should occur.
+
+    .. attribute:: hour
+
+        An integer from 0-23 that represents the hour of a day of when
+        execution should occur.
+
+    .. attribute:: day_of_week
+
+        An integer from 0-6, where Sunday = 0 and Saturday = 6, that
+        represents the day of week that execution should occur.
+
+    """
+
+    def __init__(self, minute=None, hour=None, day_of_week=None,
+            nowfun=datetime.now):
+        self.hour = hour                  # (0 - 23)
+        self.minute = minute              # (0 - 59)
+        self.day_of_week = day_of_week    # (0 - 6) (Sunday=0)
+        self.nowfun = nowfun
+
+        if isinstance(self.day_of_week, basestring):
+            self.day_of_week = weekday(self.day_of_week)
+
+    def remaining_estimate(self, last_run_at):
+        # remaining_estimate controls the frequency of scheduler
+        # ticks. The scheduler needs to wake up every second in this case.
+        return 1
+
+    def is_due(self, last_run_at):
+        now = self.nowfun()
+        last = now - last_run_at
+        due, when = False, 1
+        if last.days > 0 or last.seconds > 60:
+            if self.day_of_week in (None, now.isoweekday()):
+                due, when = self._check_hour_minute(now)
+        return due, when
+
+    def _check_hour_minute(self, now):
+        due, when = False, 1
+        if self.hour is None and self.minute is None:
+            due, when = True, 1
+        if self.hour is None and self.minute == now.minute:
+            due, when = True, 1
+        if self.hour == now.hour and self.minute is None:
+            due, when = True, 1
+        if self.hour == now.hour and self.minute == now.minute:
+            due, when = True, 1
+        return due, when
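Editor's note: the schedule classes can also be driven by hand, which is essentially what the beat scheduler does through ``PeriodicTask.is_due()``. A small sketch:

    from datetime import datetime, timedelta
    from celery.task.schedules import crontab, schedule

    every_minute = schedule(run_every=timedelta(minutes=1))
    monday_morning = crontab(hour=7, minute=30, day_of_week=1)

    last_run_at = datetime.now() - timedelta(hours=2)
    due, next_time = every_minute.is_due(last_run_at)
    due, next_time = monday_morning.is_due(last_run_at)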

+ 0 - 19
celery/tests/runners.py

@@ -1,19 +0,0 @@
-from django.conf import settings
-from django.test.simple import run_tests as django_test_runner
-
-
-def run_tests(test_labels, verbosity=1, interactive=True, extra_tests=None,
-        **kwargs):
-    """ Test runner that only runs tests for the apps
-    listed in ``settings.TEST_APPS``.
-    """
-    extra_tests = extra_tests or []
-    app_labels = getattr(settings, "TEST_APPS", test_labels)
-
-    # Seems to be deleting the test database file twice :(
-    from celery.utils import noop
-    from django.db import connection
-    connection.creation.destroy_test_db = noop
-    return django_test_runner(app_labels,
-                              verbosity=verbosity, interactive=interactive,
-                              extra_tests=extra_tests, **kwargs)

+ 2 - 5
celery/tests/test_backends/__init__.py

@@ -1,18 +1,15 @@
 import unittest2 as unittest
 
 from celery import backends
-from celery.backends.database import DatabaseBackend
 from celery.backends.amqp import AMQPBackend
-from celery.backends.pyredis import RedisBackend
+from celery.backends.database import DatabaseBackend
 
 
 class TestBackends(unittest.TestCase):
 
     def test_get_backend_aliases(self):
         expects = [("amqp", AMQPBackend),
-                   ("database", DatabaseBackend),
-                   ("db", DatabaseBackend),
-                   ("redis", RedisBackend)]
+                   ("database", DatabaseBackend)]
         for expect_name, expect_cls in expects:
             self.assertIsInstance(backends.get_backend_cls(expect_name)(),
                                   expect_cls)

+ 23 - 21
celery/tests/test_backends/test_amqp.py

@@ -13,48 +13,50 @@ class SomeClass(object):
         self.data = data
 
 
-class TestRedisBackend(unittest.TestCase):
+class test_AMQPBackend(unittest.TestCase):
 
-    def setUp(self):
-        self.backend = AMQPBackend()
-        self.backend._use_debug_tracking = True
+    def create_backend(self):
+        return AMQPBackend(serializer="pickle", persistent=False)
 
     def test_mark_as_done(self):
-        tb = self.backend
+        tb1 = self.create_backend()
+        tb2 = self.create_backend()
 
         tid = gen_unique_id()
 
-        tb.mark_as_done(tid, 42)
-        self.assertTrue(tb.is_successful(tid))
-        self.assertEqual(tb.get_status(tid), states.SUCCESS)
-        self.assertEqual(tb.get_result(tid), 42)
-        self.assertTrue(tb._cache.get(tid))
-        self.assertTrue(tb.get_result(tid), 42)
+        tb1.mark_as_done(tid, 42)
+        self.assertTrue(tb2.is_successful(tid))
+        self.assertEqual(tb2.get_status(tid), states.SUCCESS)
+        self.assertEqual(tb2.get_result(tid), 42)
+        self.assertTrue(tb2._cache.get(tid))
+        self.assertTrue(tb2.get_result(tid), 42)
 
     def test_is_pickled(self):
-        tb = self.backend
+        tb1 = self.create_backend()
+        tb2 = self.create_backend()
 
         tid2 = gen_unique_id()
         result = {"foo": "baz", "bar": SomeClass(12345)}
-        tb.mark_as_done(tid2, result)
+        tb1.mark_as_done(tid2, result)
         # is serialized properly.
-        rindb = tb.get_result(tid2)
+        rindb = tb2.get_result(tid2)
         self.assertEqual(rindb.get("foo"), "baz")
         self.assertEqual(rindb.get("bar").data, 12345)
 
     def test_mark_as_failure(self):
-        tb = self.backend
+        tb1 = self.create_backend()
+        tb2 = self.create_backend()
 
         tid3 = gen_unique_id()
         try:
             raise KeyError("foo")
         except KeyError, exception:
             einfo = ExceptionInfo(sys.exc_info())
-        tb.mark_as_failure(tid3, exception, traceback=einfo.traceback)
-        self.assertFalse(tb.is_successful(tid3))
-        self.assertEqual(tb.get_status(tid3), states.FAILURE)
-        self.assertIsInstance(tb.get_result(tid3), KeyError)
-        self.assertEqual(tb.get_traceback(tid3), einfo.traceback)
+        tb1.mark_as_failure(tid3, exception, traceback=einfo.traceback)
+        self.assertFalse(tb2.is_successful(tid3))
+        self.assertEqual(tb2.get_status(tid3), states.FAILURE)
+        self.assertIsInstance(tb2.get_result(tid3), KeyError)
+        self.assertEqual(tb2.get_traceback(tid3), einfo.traceback)
 
     def test_process_cleanup(self):
-        self.backend.process_cleanup()
+        self.create_backend().process_cleanup()

+ 0 - 127
celery/tests/test_backends/test_cache.py

@@ -1,127 +0,0 @@
-import sys
-import unittest2 as unittest
-
-from billiard.serialization import pickle
-from django.core.cache.backends.base import InvalidCacheBackendError
-
-from celery import result
-from celery import states
-from celery.utils import gen_unique_id
-from celery.backends.cache import CacheBackend
-from celery.datastructures import ExceptionInfo
-
-
-class SomeClass(object):
-
-    def __init__(self, data):
-        self.data = data
-
-
-class TestCacheBackend(unittest.TestCase):
-
-    def test_mark_as_done(self):
-        cb = CacheBackend()
-
-        tid = gen_unique_id()
-
-        self.assertFalse(cb.is_successful(tid))
-        self.assertEqual(cb.get_status(tid), states.PENDING)
-        self.assertIsNone(cb.get_result(tid))
-
-        cb.mark_as_done(tid, 42)
-        self.assertTrue(cb.is_successful(tid))
-        self.assertEqual(cb.get_status(tid), states.SUCCESS)
-        self.assertEqual(cb.get_result(tid), 42)
-        self.assertTrue(cb.get_result(tid), 42)
-
-    def test_save_restore_taskset(self):
-        backend = CacheBackend()
-        taskset_id = gen_unique_id()
-        subtask_ids = [gen_unique_id() for i in range(10)]
-        subtasks = map(result.AsyncResult, subtask_ids)
-        res = result.TaskSetResult(taskset_id, subtasks)
-        res.save(backend=backend)
-        saved = result.TaskSetResult.restore(taskset_id, backend=backend)
-        self.assertListEqual(saved.subtasks, subtasks)
-        self.assertEqual(saved.taskset_id, taskset_id)
-
-    def test_is_pickled(self):
-        cb = CacheBackend()
-
-        tid2 = gen_unique_id()
-        result = {"foo": "baz", "bar": SomeClass(12345)}
-        cb.mark_as_done(tid2, result)
-        # is serialized properly.
-        rindb = cb.get_result(tid2)
-        self.assertEqual(rindb.get("foo"), "baz")
-        self.assertEqual(rindb.get("bar").data, 12345)
-
-    def test_mark_as_failure(self):
-        cb = CacheBackend()
-
-        einfo = None
-        tid3 = gen_unique_id()
-        try:
-            raise KeyError("foo")
-        except KeyError, exception:
-            einfo = ExceptionInfo(sys.exc_info())
-            pass
-        cb.mark_as_failure(tid3, exception, traceback=einfo.traceback)
-        self.assertFalse(cb.is_successful(tid3))
-        self.assertEqual(cb.get_status(tid3), states.FAILURE)
-        self.assertIsInstance(cb.get_result(tid3), KeyError)
-        self.assertEqual(cb.get_traceback(tid3), einfo.traceback)
-
-    def test_process_cleanup(self):
-        cb = CacheBackend()
-        cb.process_cleanup()
-
-
-class TestCustomCacheBackend(unittest.TestCase):
-
-    def test_custom_cache_backend(self):
-        from celery import conf
-        prev_backend = conf.CELERY_CACHE_BACKEND
-        prev_module = sys.modules["celery.backends.cache"]
-        conf.CELERY_CACHE_BACKEND = "dummy://"
-        sys.modules.pop("celery.backends.cache")
-        try:
-            from celery.backends.cache import cache
-            from django.core.cache import cache as django_cache
-            self.assertEqual(cache.__class__.__module__,
-                              "django.core.cache.backends.dummy")
-            self.assertIsNot(cache, django_cache)
-        finally:
-            conf.CELERY_CACHE_BACKEND = prev_backend
-            sys.modules["celery.backends.cache"] = prev_module
-
-
-class TestMemcacheWrapper(unittest.TestCase):
-
-    def test_memcache_wrapper(self):
-
-        try:
-            from django.core.cache.backends import memcached
-            from django.core.cache.backends import locmem
-        except InvalidCacheBackendError:
-            sys.stderr.write(
-                "\n* Memcache library is not installed. Skipping test.\n")
-            return
-        prev_cache_cls = memcached.CacheClass
-        memcached.CacheClass = locmem.CacheClass
-        prev_backend_module = sys.modules.pop("celery.backends.cache")
-        try:
-            from celery.backends.cache import cache, DjangoMemcacheWrapper
-            self.assertIsInstance(cache, DjangoMemcacheWrapper)
-
-            key = "cu.test_memcache_wrapper"
-            val = "The quick brown fox."
-            default = "The lazy dog."
-
-            self.assertEqual(cache.get(key, default=default), default)
-            cache.set(key, val)
-            self.assertEqual(pickle.loads(cache.get(key, default=default)),
-                              val)
-        finally:
-            memcached.CacheClass = prev_cache_cls
-            sys.modules["celery.backends.cache"] = prev_backend_module

+ 0 - 68
celery/tests/test_backends/test_database.py

@@ -1,68 +0,0 @@
-import unittest2 as unittest
-from datetime import timedelta
-
-from celery import states
-from celery.task import PeriodicTask
-from celery.utils import gen_unique_id
-from celery.backends.database import DatabaseBackend
-
-
-class SomeClass(object):
-
-    def __init__(self, data):
-        self.data = data
-
-
-class MyPeriodicTask(PeriodicTask):
-    name = "c.u.my-periodic-task-244"
-    run_every = timedelta(seconds=1)
-
-    def run(self, **kwargs):
-        return 42
-
-
-class TestDatabaseBackend(unittest.TestCase):
-
-    def test_backend(self):
-        b = DatabaseBackend()
-        tid = gen_unique_id()
-
-        self.assertFalse(b.is_successful(tid))
-        self.assertEqual(b.get_status(tid), states.PENDING)
-        self.assertIsNone(b.get_result(tid))
-
-        b.mark_as_done(tid, 42)
-        self.assertTrue(b.is_successful(tid))
-        self.assertEqual(b.get_status(tid), states.SUCCESS)
-        self.assertEqual(b.get_result(tid), 42)
-
-        tid2 = gen_unique_id()
-        result = {"foo": "baz", "bar": SomeClass(12345)}
-        b.mark_as_done(tid2, result)
-        # is serialized properly.
-        rindb = b.get_result(tid2)
-        self.assertEqual(rindb.get("foo"), "baz")
-        self.assertEqual(rindb.get("bar").data, 12345)
-
-        tid3 = gen_unique_id()
-        try:
-            raise KeyError("foo")
-        except KeyError, exception:
-            pass
-        b.mark_as_failure(tid3, exception)
-        self.assertFalse(b.is_successful(tid3))
-        self.assertEqual(b.get_status(tid3), states.FAILURE)
-        self.assertIsInstance(b.get_result(tid3), KeyError)
-
-    def test_taskset_store(self):
-        b = DatabaseBackend()
-        tid = gen_unique_id()
-
-        self.assertIsNone(b.restore_taskset(tid))
-
-        result = {"foo": "baz", "bar": SomeClass(12345)}
-        b.save_taskset(tid, result)
-        rindb = b.restore_taskset(tid)
-        self.assertIsNotNone(rindb)
-        self.assertEqual(rindb.get("foo"), "baz")
-        self.assertEqual(rindb.get("bar").data, 12345)

+ 6 - 5
celery/tests/test_buckets.py

@@ -9,6 +9,7 @@ from itertools import chain, izip
 from billiard.utils.functional import curry
 
 from celery.task.base import Task
+from celery.utils import timeutils
 from celery.utils import gen_unique_id
 from celery.worker import buckets
 from celery.registry import TaskRegistry
@@ -97,15 +98,15 @@ class TestRateLimitString(unittest.TestCase):
 
     @skip_if_disabled
     def test_conversion(self):
-        self.assertEqual(buckets.parse_ratelimit_string(999), 999)
-        self.assertEqual(buckets.parse_ratelimit_string("1456/s"), 1456)
-        self.assertEqual(buckets.parse_ratelimit_string("100/m"),
+        self.assertEqual(timeutils.rate(999), 999)
+        self.assertEqual(timeutils.rate("1456/s"), 1456)
+        self.assertEqual(timeutils.rate("100/m"),
                          100 / 60.0)
-        self.assertEqual(buckets.parse_ratelimit_string("10/h"),
+        self.assertEqual(timeutils.rate("10/h"),
                          10 / 60.0 / 60.0)
 
         for zero in (0, None, "0", "0/m", "0/h", "0/s"):
-            self.assertEqual(buckets.parse_ratelimit_string(zero), 0)
+            self.assertEqual(timeutils.rate(zero), 0)
 
 
 class TaskA(Task):

+ 0 - 36
celery/tests/test_conf.py

@@ -1,36 +0,0 @@
-import unittest2 as unittest
-
-from django.conf import settings
-
-from celery import conf
-
-
-SETTING_VARS = (
-    ("CELERY_DEFAULT_QUEUE", "DEFAULT_QUEUE"),
-    ("CELERY_DEFAULT_ROUTING_KEY", "DEFAULT_ROUTING_KEY"),
-    ("CELERY_DEFAULT_EXCHANGE_TYPE", "DEFAULT_EXCHANGE_TYPE"),
-    ("CELERY_DEFAULT_EXCHANGE", "DEFAULT_EXCHANGE"),
-    ("CELERYD_CONCURRENCY", "CELERYD_CONCURRENCY"),
-    ("CELERYD_LOG_FILE", "CELERYD_LOG_FILE"),
-    ("CELERYD_LOG_FORMAT", "CELERYD_LOG_FORMAT"),
-)
-
-
-class TestConf(unittest.TestCase):
-
-    def assertDefaultSetting(self, setting_name, result_var):
-        if hasattr(settings, setting_name):
-            self.assertEqual(getattr(conf, result_var),
-                              getattr(settings, setting_name),
-                              "Overwritten setting %s is written to %s" % (
-                                  setting_name, result_var))
-        else:
-            self.assertEqual(conf._DEFAULTS.get(setting_name),
-                             getattr(conf, result_var),
-                             "Default setting %s is written to %s" % (
-                                 setting_name, result_var))
-
-    def test_configuration_cls(self):
-        for setting_name, result_var in SETTING_VARS:
-            self.assertDefaultSetting(setting_name, result_var)
-        self.assertIsInstance(conf.CELERYD_LOG_LEVEL, int)

+ 0 - 28
celery/tests/test_discovery.py

@@ -1,28 +0,0 @@
-import unittest2 as unittest
-
-from django.conf import settings
-
-from celery.loaders.djangoapp import autodiscover
-from celery.task import tasks
-
-
-class TestDiscovery(unittest.TestCase):
-
-    def assertDiscovery(self):
-        apps = autodiscover()
-        self.assertTrue(apps)
-        self.assertIn("c.unittest.SomeAppTask", tasks)
-        self.assertEqual(tasks["c.unittest.SomeAppTask"].run(), 42)
-
-    def test_discovery(self):
-        if "someapp" in settings.INSTALLED_APPS:
-            self.assertDiscovery()
-
-    def test_discovery_with_broken(self):
-        if "someapp" in settings.INSTALLED_APPS:
-            installed_apps = list(settings.INSTALLED_APPS)
-            settings.INSTALLED_APPS = installed_apps + ["xxxnot.aexist"]
-            try:
-                self.assertRaises(ImportError, autodiscover)
-            finally:
-                settings.INSTALLED_APPS = installed_apps

+ 6 - 61
celery/tests/test_loaders.py

@@ -5,7 +5,6 @@ import unittest2 as unittest
 from celery import task
 from celery import loaders
 from celery.loaders import base
-from celery.loaders import djangoapp
 from celery.loaders import default
 
 from celery.tests.utils import with_environ
@@ -15,19 +14,12 @@ class TestLoaders(unittest.TestCase):
 
     def test_get_loader_cls(self):
 
-        self.assertEqual(loaders.get_loader_cls("django"),
-                          loaders.DjangoLoader)
         self.assertEqual(loaders.get_loader_cls("default"),
-                          loaders.DefaultLoader)
-        # Execute cached branch.
-        self.assertEqual(loaders.get_loader_cls("django"),
-                          loaders.DjangoLoader)
-        self.assertEqual(loaders.get_loader_cls("default"),
-                          loaders.DefaultLoader)
+                          default.Loader)
 
     @with_environ("CELERY_LOADER", "default")
     def test_detect_loader_CELERY_LOADER(self):
-        self.assertEqual(loaders.detect_loader(), loaders.DefaultLoader)
+        self.assertIsInstance(loaders.setup_loader(), default.Loader)
 
 
 class DummyLoader(base.BaseLoader):
@@ -64,51 +56,6 @@ class TestLoaderBase(unittest.TestCase):
                               [os, sys, task])
 
 
-class TestDjangoLoader(unittest.TestCase):
-
-    def setUp(self):
-        self.loader = loaders.DjangoLoader()
-
-    def test_on_worker_init(self):
-        from django.conf import settings
-        old_imports = getattr(settings, "CELERY_IMPORTS", None)
-        settings.CELERY_IMPORTS = ("xxx.does.not.exist", )
-        try:
-            self.assertRaises(ImportError, self.loader.on_worker_init)
-        finally:
-            settings.CELERY_IMPORTS = old_imports
-
-    def test_race_protection(self):
-        djangoapp._RACE_PROTECTION = True
-        try:
-            self.assertFalse(self.loader.on_worker_init())
-        finally:
-            djangoapp._RACE_PROTECTION = False
-
-    def test_find_related_module_no_path(self):
-        self.assertFalse(djangoapp.find_related_module("sys", "tasks"))
-
-    def test_find_related_module_no_related(self):
-        self.assertFalse(djangoapp.find_related_module("someapp",
-                                                       "frobulators"))
-
-
-def modifies_django_env(fun):
-
-    def _protected(*args, **kwargs):
-        from django.conf import settings
-        current = dict((key, getattr(settings, key))
-                        for key in settings.get_all_members()
-                            if key.isupper())
-        try:
-            return fun(*args, **kwargs)
-        finally:
-            for key, value in current.items():
-                setattr(settings, key, value)
-
-    return _protected
-
-
 class TestDefaultLoader(unittest.TestCase):
 
     def test_wanted_module_item(self):
@@ -117,7 +64,6 @@ class TestDefaultLoader(unittest.TestCase):
         self.assertFalse(default.wanted_module_item("_foo"))
         self.assertFalse(default.wanted_module_item("__foo"))
 
-    @modifies_django_env
     def test_read_configuration(self):
         from types import ModuleType
 
@@ -126,17 +72,16 @@ class TestDefaultLoader(unittest.TestCase):
 
         celeryconfig = ConfigModule("celeryconfig")
         celeryconfig.CELERY_IMPORTS = ("os", "sys")
+        configname = os.environ.get("CELERY_CONFIG_MODULE") or "celeryconfig"
 
-        sys.modules["celeryconfig"] = celeryconfig
+        prevconfig = sys.modules[configname]
+        sys.modules[configname] = celeryconfig
         try:
             l = default.Loader()
             settings = l.read_configuration()
             self.assertTupleEqual(settings.CELERY_IMPORTS, ("os", "sys"))
-            from django.conf import settings
-            settings.configured = False
             settings = l.read_configuration()
             self.assertTupleEqual(settings.CELERY_IMPORTS, ("os", "sys"))
-            self.assertTrue(settings.configured)
             l.on_worker_init()
         finally:
-            sys.modules.pop("celeryconfig", None)
+            sys.modules[configname] = prevconfig

+ 0 - 74
celery/tests/test_models.py

@@ -1,74 +0,0 @@
-import unittest2 as unittest
-from datetime import datetime, timedelta
-
-from celery import states
-from celery.utils import gen_unique_id
-from celery.models import TaskMeta, TaskSetMeta
-
-
-class TestModels(unittest.TestCase):
-
-    def createTaskMeta(self):
-        id = gen_unique_id()
-        taskmeta, created = TaskMeta.objects.get_or_create(task_id=id)
-        return taskmeta
-
-    def createTaskSetMeta(self):
-        id = gen_unique_id()
-        tasksetmeta, created = TaskSetMeta.objects.get_or_create(taskset_id=id)
-        return tasksetmeta
-
-    def test_taskmeta(self):
-        m1 = self.createTaskMeta()
-        m2 = self.createTaskMeta()
-        m3 = self.createTaskMeta()
-        self.assertTrue(unicode(m1).startswith("<Task:"))
-        self.assertTrue(m1.task_id)
-        self.assertIsInstance(m1.date_done, datetime)
-
-        self.assertEqual(TaskMeta.objects.get_task(m1.task_id).task_id,
-                m1.task_id)
-        self.assertNotEqual(TaskMeta.objects.get_task(m1.task_id).status,
-                            states.SUCCESS)
-        TaskMeta.objects.store_result(m1.task_id, True, status=states.SUCCESS)
-        TaskMeta.objects.store_result(m2.task_id, True, status=states.SUCCESS)
-        self.assertEqual(TaskMeta.objects.get_task(m1.task_id).status,
-                         states.SUCCESS)
-        self.assertEqual(TaskMeta.objects.get_task(m2.task_id).status,
-                         states.SUCCESS)
-
-        # Have to avoid save() because it applies the auto_now=True.
-        TaskMeta.objects.filter(task_id=m1.task_id).update(
-                date_done=datetime.now() - timedelta(days=10))
-
-        expired = TaskMeta.objects.get_all_expired()
-        self.assertIn(m1, expired)
-        self.assertNotIn(m2, expired)
-        self.assertNotIn(m3, expired)
-
-        TaskMeta.objects.delete_expired()
-        self.assertNotIn(m1, TaskMeta.objects.all())
-
-    def test_tasksetmeta(self):
-        m1 = self.createTaskSetMeta()
-        m2 = self.createTaskSetMeta()
-        m3 = self.createTaskSetMeta()
-        self.assertTrue(unicode(m1).startswith("<TaskSet:"))
-        self.assertTrue(m1.taskset_id)
-        self.assertIsInstance(m1.date_done, datetime)
-
-        self.assertEqual(
-                TaskSetMeta.objects.restore_taskset(m1.taskset_id).taskset_id,
-                m1.taskset_id)
-
-        # Have to avoid save() because it applies the auto_now=True.
-        TaskSetMeta.objects.filter(taskset_id=m1.taskset_id).update(
-                date_done=datetime.now() - timedelta(days=10))
-
-        expired = TaskSetMeta.objects.get_all_expired()
-        self.assertIn(m1, expired)
-        self.assertNotIn(m2, expired)
-        self.assertNotIn(m3, expired)
-
-        TaskSetMeta.objects.delete_expired()
-        self.assertNotIn(m1, TaskSetMeta.objects.all())

+ 0 - 7
celery/tests/test_pool.py

@@ -30,13 +30,6 @@ class TestTaskPool(unittest.TestCase):
         self.assertIsInstance(p.logger, logging.Logger)
         self.assertIsNone(p._pool)
 
-    def test_start_stop(self):
-        p = TaskPool(limit=2)
-        p.start()
-        self.assertIsNotNone(p._pool)
-        p.stop()
-        self.assertIsNone(p._pool)
-
     def x_apply(self):
         p = TaskPool(limit=2)
         p.start()

+ 112 - 3
celery/tests/test_task.py

@@ -2,8 +2,13 @@ import unittest2 as unittest
 from StringIO import StringIO
 from datetime import datetime, timedelta
 
+from billiard.utils.functional import wraps
+
+from celery import conf
 from celery import task
 from celery import messaging
+from celery.task.schedules import crontab
+from celery.utils import timeutils
 from celery.utils import gen_unique_id
 from celery.result import EagerResult
 from celery.execute import send_task
@@ -261,6 +266,14 @@ class TestCeleryTasks(unittest.TestCase):
 
         self.assertRaises(NotImplementedError, IncompleteTask().run)
 
+    def test_task_kwargs_must_be_dictionary(self):
+        self.assertRaises(ValueError, IncrementCounterTask.apply_async,
+                          [], "str")
+
+    def test_task_args_must_be_list(self):
+        self.assertRaises(ValueError, IncrementCounterTask.apply_async,
+                          "str", {})
+
     def test_regular_task(self):
         T1 = self.createTaskCls("T1", "c.unittest.t.t1")
         self.assertIsInstance(T1(), T1)
@@ -384,6 +397,16 @@ class TestTaskSet(unittest.TestCase):
 
 class TestTaskApply(unittest.TestCase):
 
+    def test_apply_throw(self):
+        self.assertRaises(KeyError, RaisingTask.apply, throw=True)
+
+    def test_apply_with_CELERY_EAGER_PROPAGATES_EXCEPTIONS(self):
+        conf.EAGER_PROPAGATES_EXCEPTIONS = True
+        try:
+            self.assertRaises(KeyError, RaisingTask.apply)
+        finally:
+            conf.EAGER_PROPAGATES_EXCEPTIONS = False
+
     def test_apply(self):
         IncrementCounterTask.count = 0
 
@@ -437,7 +460,7 @@ class TestPeriodicTask(unittest.TestCase):
             self.assertEqual(MyPeriodic().timedelta_seconds(delta), seconds)
 
     def test_delta_resolution(self):
-        D = MyPeriodic().delta_resolution
+        D = timeutils.delta_resolution
 
         dt = datetime(2010, 3, 30, 11, 50, 58, 41065)
         deltamap = ((timedelta(days=2), datetime(2010, 3, 30, 0, 0)),
@@ -454,6 +477,92 @@
 
     def test_is_due(self):
         p = MyPeriodic()
-        due, remaining = p.is_due(datetime.now() - p.run_every)
+        due, remaining = p.is_due(datetime.now() - p.run_every.run_every)
+        self.assertTrue(due)
+        self.assertEqual(remaining,
+                         p.timedelta_seconds(p.run_every.run_every))
+
+
+class EveryMinutePeriodic(task.PeriodicTask):
+    run_every = crontab()
+
+
+class HourlyPeriodic(task.PeriodicTask):
+    run_every = crontab(minute=30)
+
+
+class DailyPeriodic(task.PeriodicTask):
+    run_every = crontab(hour=7, minute=30)
+
+
+class WeeklyPeriodic(task.PeriodicTask):
+    run_every = crontab(hour=7, minute=30, day_of_week="thursday")
+
+
+def patch_crontab_nowfun(cls, retval):
+
+    def create_patcher(fun):
+
+        @wraps(fun)
+        def __inner(*args, **kwargs):
+            prev_nowfun = cls.run_every.nowfun
+            cls.run_every.nowfun = lambda: retval
+            try:
+                return fun(*args, **kwargs)
+            finally:
+                cls.run_every.nowfun = prev_nowfun
+
+        return __inner
+
+    return create_patcher
+
+
+class test_crontab(unittest.TestCase):
+
+    def test_every_minute_execution_is_due(self):
+        last_ran = datetime.now() - timedelta(seconds=61)
+        due, remaining = EveryMinutePeriodic().is_due(last_ran)
+        self.assertTrue(due)
+        self.assertEquals(remaining, 1)
+
+    def test_every_minute_execution_is_not_due(self):
+        last_ran = datetime.now() - timedelta(seconds=30)
+        due, remaining = EveryMinutePeriodic().is_due(last_ran)
+        self.assertFalse(due)
+        self.assertEquals(remaining, 1)
+
+    @patch_crontab_nowfun(HourlyPeriodic, datetime(2010, 5, 10, 10, 30))
+    def test_every_hour_execution_is_due(self):
+        due, remaining = HourlyPeriodic().is_due(datetime(2010, 5, 10, 6, 30))
         self.assertTrue(due)
-        self.assertEqual(remaining, p.timedelta_seconds(p.run_every))
+        self.assertEquals(remaining, 1)
+
+    @patch_crontab_nowfun(HourlyPeriodic, datetime(2010, 5, 10, 10, 29))
+    def test_every_hour_execution_is_not_due(self):
+        due, remaining = HourlyPeriodic().is_due(datetime(2010, 5, 10, 6, 30))
+        self.assertFalse(due)
+        self.assertEquals(remaining, 1)
+
+    @patch_crontab_nowfun(DailyPeriodic, datetime(2010, 5, 10, 7, 30))
+    def test_daily_execution_is_due(self):
+        due, remaining = DailyPeriodic().is_due(datetime(2010, 5, 9, 7, 30))
+        self.assertTrue(due)
+        self.assertEquals(remaining, 1)
+
+    @patch_crontab_nowfun(DailyPeriodic, datetime(2010, 5, 10, 10, 30))
+    def test_daily_execution_is_not_due(self):
+        due, remaining = DailyPeriodic().is_due(datetime(2010, 5, 10, 6, 29))
+        self.assertFalse(due)
+        self.assertEquals(remaining, 1)
+
+    @patch_crontab_nowfun(WeeklyPeriodic, datetime(2010, 5, 6, 7, 30))
+    def test_weekly_execution_is_due(self):
+        due, remaining = WeeklyPeriodic().is_due(datetime(2010, 4, 30, 7, 30))
+        self.assertTrue(due)
+        self.assertEquals(remaining, 1)
+
+    @patch_crontab_nowfun(WeeklyPeriodic, datetime(2010, 5, 7, 10, 30))
+    def test_weekly_execution_is_not_due(self):
+        due, remaining = WeeklyPeriodic().is_due(datetime(2010, 4, 30, 6, 29))
+        self.assertFalse(due)
+        self.assertEquals(remaining, 1)
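
The crontab schedule exercised by these tests plugs straight into a PeriodicTask through its run_every attribute. A minimal sketch, using the same import style as the tests above (the task name and body are illustrative only):

    from celery import task
    from celery.task.schedules import crontab

    class WeeklyReport(task.PeriodicTask):
        # Fire every Thursday at 7:30 worker-local time, matching the
        # WeeklyPeriodic case in the tests above.
        run_every = crontab(hour=7, minute=30, day_of_week="thursday")

        def run(self, **kwargs):
            # Hypothetical body; any regular task code goes here.
            return "report generated"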

+ 31 - 0
celery/tests/test_task_abortable.py

@@ -0,0 +1,31 @@
+import unittest2 as unittest
+
+from celery.contrib.abortable import AbortableTask, AbortableAsyncResult
+
+
+class MyAbortableTask(AbortableTask):
+
+    def run(self, **kwargs):
+        return True
+
+
+class TestAbortableTask(unittest.TestCase):
+
+    def test_async_result_is_abortable(self):
+        t = MyAbortableTask()
+        result = t.apply_async()
+        tid = result.task_id
+        self.assertIsInstance(t.AsyncResult(tid), AbortableAsyncResult)
+
+    def test_is_not_aborted(self):
+        t = MyAbortableTask()
+        result = t.apply_async()
+        tid = result.task_id
+        self.assertFalse(t.is_aborted(task_id=tid))
+
+    def test_abort_yields_aborted(self):
+        t = MyAbortableTask()
+        result = t.apply_async()
+        result.abort()
+        tid = result.task_id
+        self.assertTrue(t.is_aborted(task_id=tid))
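
The behaviour tested here comes from celery.contrib.abortable: the caller invokes abort() on the AbortableAsyncResult, and the task polls is_aborted() to bail out early. A rough sketch of the intended usage, assuming a hypothetical do_step() helper:

    from celery.contrib.abortable import AbortableTask

    class LongRunningTask(AbortableTask):

        def run(self, **kwargs):
            for step in range(100):
                # is_aborted() consults the result backend, so an abort()
                # issued by the caller becomes visible inside the task.
                if self.is_aborted(**kwargs):
                    return "aborted"
                do_step(step)               # hypothetical unit of work
            return "done"

    result = LongRunningTask.delay()        # an AbortableAsyncResult
    result.abort()                          # marks the task as ABORTED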

+ 5 - 0
celery/tests/test_task_control.py

@@ -48,6 +48,11 @@ class TestBroadcast(unittest.TestCase):
         control.revoke("foozbaaz")
         control.revoke("foozbaaz")
         self.assertIn("revoke", MockBroadcastPublisher.sent)
         self.assertIn("revoke", MockBroadcastPublisher.sent)
 
 
+    @with_mock_broadcast
+    def test_ping(self):
+        control.ping()
+        self.assertIn("ping", MockBroadcastPublisher.sent)
+
     @with_mock_broadcast
     def test_revoke_from_result(self):
         from celery.result import AsyncResult

+ 0 - 127
celery/tests/test_views.py

@@ -1,127 +0,0 @@
-import sys
-
-from django.http import HttpResponse
-from django.test.testcases import TestCase as DjangoTestCase
-from django.core.urlresolvers import reverse
-from django.template import TemplateDoesNotExist
-
-from anyjson import deserialize as JSON_load
-from billiard.utils.functional import curry
-
-from celery import conf
-from celery import states
-from celery.utils import gen_unique_id, get_full_cls_name
-from celery.backends import default_backend
-from celery.exceptions import RetryTaskError
-from celery.decorators import task
-from celery.datastructures import ExceptionInfo
-
-def reversestar(name, **kwargs):
-    return reverse(name, kwargs=kwargs)
-
-task_is_successful = curry(reversestar, "celery-is_task_successful")
-task_status = curry(reversestar, "celery-task_status")
-task_apply = curry(reverse, "celery-apply")
-
-scratch = {}
-@task()
-def mytask(x, y):
-    ret = scratch["result"] = int(x) * int(y)
-    return ret
-
-
-def create_exception(name, base=Exception):
-    return type(name, (base, ), {})
-
-
-def catch_exception(exception):
-    try:
-        raise exception
-    except exception.__class__, exc:
-        exc = default_backend.prepare_exception(exc)
-        return exc, ExceptionInfo(sys.exc_info()).traceback
-
-
-class ViewTestCase(DjangoTestCase):
-
-    def assertJSONEqual(self, json, py):
-        json = isinstance(json, HttpResponse) and json.content or json
-        try:
-            self.assertEqual(JSON_load(json), py)
-        except TypeError, exc:
-            raise TypeError("%s: %s" % (exc, json))
-
-
-class TestTaskApply(ViewTestCase):
-
-    def test_apply(self):
-        conf.ALWAYS_EAGER = True
-        try:
-            self.client.get(task_apply(kwargs={"task_name":
-                mytask.name}) + "?x=4&y=4")
-            self.assertEqual(scratch["result"], 16)
-        finally:
-            conf.ALWAYS_EAGER = False
-
-    def test_apply_raises_404_on_unregistered_task(self):
-        conf.ALWAYS_EAGER = True
-        try:
-            name = "xxx.does.not.exist"
-            action = curry(self.client.get, task_apply(kwargs={
-                        "task_name": name}) + "?x=4&y=4")
-            self.assertRaises(TemplateDoesNotExist, action)
-        finally:
-            conf.ALWAYS_EAGER = False
-
-
-class TestTaskStatus(ViewTestCase):
-
-    def assertStatusForIs(self, status, res, traceback=None):
-        uuid = gen_unique_id()
-        default_backend.store_result(uuid, res, status,
-                                     traceback=traceback)
-        json = self.client.get(task_status(task_id=uuid))
-        expect = dict(id=uuid, status=status, result=res)
-        if status in default_backend.EXCEPTION_STATES:
-            instore = default_backend.get_result(uuid)
-            self.assertEqual(str(instore.args), str(res.args))
-            expect["result"] = str(res.args[0])
-            expect["exc"] = get_full_cls_name(res.__class__)
-            expect["traceback"] = traceback
-
-        self.assertJSONEqual(json, dict(task=expect))
-
-    def test_task_status_success(self):
-        self.assertStatusForIs(states.SUCCESS, "The quick brown fox")
-
-    def test_task_status_failure(self):
-        exc, tb = catch_exception(KeyError("foo"))
-        self.assertStatusForIs(states.FAILURE, exc, tb)
-
-    def test_task_status_retry(self):
-        oexc, _ = catch_exception(KeyError("Resource not available"))
-        exc, tb = catch_exception(RetryTaskError(str(oexc), oexc))
-        self.assertStatusForIs(states.RETRY, exc, tb)
-
-
-class TestTaskIsSuccessful(ViewTestCase):
-
-    def assertStatusForIs(self, status, outcome):
-        uuid = gen_unique_id()
-        result = gen_unique_id()
-        default_backend.store_result(uuid, result, status)
-        json = self.client.get(task_is_successful(task_id=uuid))
-        self.assertJSONEqual(json, {"task": {"id": uuid,
-                                             "executed": outcome}})
-
-    def test_is_successful_success(self):
-        self.assertStatusForIs(states.SUCCESS, True)
-
-    def test_is_successful_pending(self):
-        self.assertStatusForIs(states.PENDING, False)
-
-    def test_is_successful_failure(self):
-        self.assertStatusForIs(states.FAILURE, False)
-
-    def test_is_successful_retry(self):
-        self.assertStatusForIs(states.RETRY, False)

+ 6 - 5
celery/tests/test_worker.py

@@ -1,5 +1,5 @@
 import unittest2 as unittest
-from Queue import Queue, Empty
+from Queue import Empty
 from datetime import datetime, timedelta
 from multiprocessing import get_logger
 
@@ -10,8 +10,9 @@ from billiard.serialization import pickle
 from celery import conf
 from celery.utils import gen_unique_id
 from celery.worker import WorkController
-from celery.worker.listener import CarrotListener, RUN
 from celery.worker.job import TaskWrapper
+from celery.worker.buckets import FastQueue
+from celery.worker.listener import CarrotListener, RUN
 from celery.worker.scheduler import Scheduler
 from celery.decorators import task as task_dec
 from celery.decorators import periodic_task as periodic_task_dec
@@ -125,7 +126,7 @@ def create_message(backend, **data):
 class TestCarrotListener(unittest.TestCase):
 
     def setUp(self):
-        self.ready_queue = Queue()
+        self.ready_queue = FastQueue()
         self.eta_schedule = Scheduler(self.ready_queue)
         self.logger = get_logger()
         self.logger.setLevel(0)
@@ -139,7 +140,7 @@ class TestCarrotListener(unittest.TestCase):
             def drain_events(self):
                 return "draining"
 
-        l.connection = PlaceHolder()
+        l.connection = MockConnection()
         l.connection.connection = MockConnection()
 
         it = l._mainloop()
@@ -266,7 +267,7 @@ class TestCarrotListener(unittest.TestCase):
         self.assertTrue(found)
 
     def test_revoke(self):
-        ready_queue = Queue()
+        ready_queue = FastQueue()
         l = CarrotListener(ready_queue, self.eta_schedule, self.logger,
                            send_events=False)
         backend = MockBackend()

+ 27 - 7
celery/tests/test_worker_control.py

@@ -9,10 +9,14 @@ from celery.registry import tasks
 
 hostname = socket.gethostname()
 
+
 class TestControlPanel(unittest.TestCase):
 
     def setUp(self):
-        self.panel = control.ControlDispatch(hostname=hostname)
+        self.panel = self.create_panel(listener=object())
+
+    def create_panel(self, **kwargs):
+        return control.ControlDispatch(hostname=hostname, **kwargs)
 
     def test_shutdown(self):
         self.assertRaises(SystemExit, self.panel.execute, "shutdown")
@@ -21,17 +25,33 @@ class TestControlPanel(unittest.TestCase):
         self.panel.execute("dump_tasks")
         self.panel.execute("dump_tasks")
 
 
     def test_rate_limit(self):
     def test_rate_limit(self):
+
+        class Listener(object):
+
+            class ReadyQueue(object):
+                fresh = False
+
+                def refresh(self):
+                    self.fresh = True
+
+            def __init__(self):
+                self.ready_queue = self.ReadyQueue()
+
+        listener = Listener()
+        panel = self.create_panel(listener=listener)
+
         task = tasks[PingTask.name]
         old_rate_limit = task.rate_limit
         try:
-            self.panel.execute("rate_limit", kwargs=dict(
-                                                task_name=task.name,
-                                                rate_limit="100/m"))
+            panel.execute("rate_limit", kwargs=dict(task_name=task.name,
+                                                    rate_limit="100/m"))
             self.assertEqual(task.rate_limit, "100/m")
-            self.panel.execute("rate_limit", kwargs=dict(
-                                                task_name=task.name,
-                                                rate_limit=0))
+            self.assertTrue(listener.ready_queue.fresh)
+            listener.ready_queue.fresh = False
+            panel.execute("rate_limit", kwargs=dict(task_name=task.name,
+                                                    rate_limit=0))
             self.assertEqual(task.rate_limit, 0)
+            self.assertTrue(listener.ready_queue.fresh)
         finally:
             task.rate_limit = old_rate_limit
 

+ 41 - 89
celery/tests/test_worker_job.py

@@ -5,17 +5,16 @@ import unittest2 as unittest
 import simplejson
 from StringIO import StringIO
 
-from django.core import cache
 from carrot.backends.base import BaseMessage
 
 from celery import states
 from celery.log import setup_logger
 from celery.task.base import Task
 from celery.utils import gen_unique_id
-from celery.models import TaskMeta
 from celery.result import AsyncResult
 from celery.worker.job import WorkerTaskTrace, TaskWrapper
 from celery.worker.pool import TaskPool
+from celery.backends import default_backend
 from celery.exceptions import RetryTaskError, NotRegistered
 from celery.decorators import task as task_dec
 from celery.datastructures import ExceptionInfo
@@ -63,13 +62,6 @@ def mytask_raising(i, **kwargs):
     raise KeyError(i)
 
 
-@task_dec()
-def get_db_connection(i, **kwargs):
-    from django.db import connection
-    return id(connection)
-get_db_connection.ignore_result = True
-
-
 class TestRetryTaskError(unittest.TestCase):
 
     def test_retry_task_error(self):
@@ -100,65 +92,6 @@ class TestJail(unittest.TestCase):
         self.assertEqual(ret, 256)
         self.assertFalse(AsyncResult(task_id).ready())
 
-    def test_django_db_connection_is_closed(self):
-        from django.db import connection
-        connection._was_closed = False
-        old_connection_close = connection.close
-
-        def monkeypatched_connection_close(*args, **kwargs):
-            connection._was_closed = True
-            return old_connection_close(*args, **kwargs)
-
-        connection.close = monkeypatched_connection_close
-        try:
-            jail(gen_unique_id(), get_db_connection.name, [2], {})
-            self.assertTrue(connection._was_closed)
-        finally:
-            connection.close = old_connection_close
-
-    def test_django_cache_connection_is_closed(self):
-        old_cache_close = getattr(cache.cache, "close", None)
-        old_backend = cache.settings.CACHE_BACKEND
-        cache.settings.CACHE_BACKEND = "libmemcached"
-        cache._was_closed = False
-        old_cache_parse_backend = getattr(cache, "parse_backend_uri", None)
-        if old_cache_parse_backend: # checks to make sure attr exists
-            delattr(cache, 'parse_backend_uri')
-
-        def monkeypatched_cache_close(*args, **kwargs):
-            cache._was_closed = True
-
-        cache.cache.close = monkeypatched_cache_close
-
-        jail(gen_unique_id(), mytask.name, [4], {})
-        self.assertTrue(cache._was_closed)
-        cache.cache.close = old_cache_close
-        cache.settings.CACHE_BACKEND = old_backend
-        if old_cache_parse_backend:
-            cache.parse_backend_uri = old_cache_parse_backend
-
-    def test_django_cache_connection_is_closed_django_1_1(self):
-        old_cache_close = getattr(cache.cache, "close", None)
-        old_backend = cache.settings.CACHE_BACKEND
-        cache.settings.CACHE_BACKEND = "libmemcached"
-        cache._was_closed = False
-        old_cache_parse_backend = getattr(cache, "parse_backend_uri", None)
-        cache.parse_backend_uri = lambda uri: ["libmemcached", "1", "2"]
-
-        def monkeypatched_cache_close(*args, **kwargs):
-            cache._was_closed = True
-
-        cache.cache.close = monkeypatched_cache_close
-
-        jail(gen_unique_id(), mytask.name, [4], {})
-        self.assertTrue(cache._was_closed)
-        cache.cache.close = old_cache_close
-        cache.settings.CACHE_BACKEND = old_backend
-        if old_cache_parse_backend:
-            cache.parse_backend_uri = old_cache_parse_backend
-        else:
-            del(cache.parse_backend_uri)
-
 
 class MockEventDispatcher(object):
 
@@ -325,53 +258,71 @@ class TestTaskWrapper(unittest.TestCase):
         tid = gen_unique_id()
         tw = TaskWrapper(mytask.name, tid, [4], {"f": "x"})
         self.assertEqual(tw.execute(), 256)
-        meta = TaskMeta.objects.get(task_id=tid)
-        self.assertEqual(meta.result, 256)
-        self.assertEqual(meta.status, states.SUCCESS)
+        meta = default_backend._get_task_meta_for(tid)
+        self.assertEqual(meta["result"], 256)
+        self.assertEqual(meta["status"], states.SUCCESS)
 
     def test_execute_success_no_kwargs(self):
         tid = gen_unique_id()
         tw = TaskWrapper(mytask_no_kwargs.name, tid, [4], {})
         self.assertEqual(tw.execute(), 256)
-        meta = TaskMeta.objects.get(task_id=tid)
-        self.assertEqual(meta.result, 256)
-        self.assertEqual(meta.status, states.SUCCESS)
+        meta = default_backend._get_task_meta_for(tid)
+        self.assertEqual(meta["result"], 256)
+        self.assertEqual(meta["status"], states.SUCCESS)
 
     def test_execute_success_some_kwargs(self):
         tid = gen_unique_id()
         tw = TaskWrapper(mytask_some_kwargs.name, tid, [4], {})
         self.assertEqual(tw.execute(logfile="foobaz.log"), 256)
-        meta = TaskMeta.objects.get(task_id=tid)
+        meta = default_backend._get_task_meta_for(tid)
         self.assertEqual(some_kwargs_scratchpad.get("logfile"), "foobaz.log")
-        self.assertEqual(meta.result, 256)
-        self.assertEqual(meta.status, states.SUCCESS)
+        self.assertEqual(meta["result"], 256)
+        self.assertEqual(meta["status"], states.SUCCESS)
 
     def test_execute_ack(self):
         tid = gen_unique_id()
         tw = TaskWrapper(mytask.name, tid, [4], {"f": "x"},
                         on_ack=on_ack)
         self.assertEqual(tw.execute(), 256)
-        meta = TaskMeta.objects.get(task_id=tid)
+        meta = default_backend._get_task_meta_for(tid)
         self.assertTrue(scratch["ACK"])
-        self.assertEqual(meta.result, 256)
-        self.assertEqual(meta.status, states.SUCCESS)
+        self.assertEqual(meta["result"], 256)
+        self.assertEqual(meta["status"], states.SUCCESS)
 
     def test_execute_fail(self):
         tid = gen_unique_id()
         tw = TaskWrapper(mytask_raising.name, tid, [4], {"f": "x"})
         self.assertIsInstance(tw.execute(), ExceptionInfo)
-        meta = TaskMeta.objects.get(task_id=tid)
-        self.assertEqual(meta.status, states.FAILURE)
-        self.assertIsInstance(meta.result, KeyError)
+        meta = default_backend._get_task_meta_for(tid)
+        self.assertEqual(meta["status"], states.FAILURE)
+        self.assertIsInstance(meta["result"], KeyError)
 
     def test_execute_using_pool(self):
         tid = gen_unique_id()
         tw = TaskWrapper(mytask.name, tid, [4], {"f": "x"})
-        p = TaskPool(2)
-        p.start()
-        asyncres = tw.execute_using_pool(p)
-        self.assertEqual(asyncres.get(), 256)
-        p.stop()
+
+        class MockPool(object):
+            target = None
+            args = None
+            kwargs = None
+
+            def __init__(self, *args, **kwargs):
+                pass
+
+            def apply_async(self, target, args=None, kwargs=None,
+                    *margs, **mkwargs):
+                self.target = target
+                self.args = args
+                self.kwargs = kwargs
+
+        p = MockPool()
+        tw.execute_using_pool(p)
+        self.assertTrue(p.target)
+        self.assertEqual(p.args[0], mytask.name)
+        self.assertEqual(p.args[1], tid)
+        self.assertEqual(p.args[2], [4])
+        self.assertIn("f", p.args[3])
+        self.assertIn([4], p.args)
 
     def test_default_kwargs(self):
         tid = gen_unique_id()
@@ -417,4 +368,5 @@ class TestTaskWrapper(unittest.TestCase):
         self._test_on_failure(Exception(u"Бобры атакуют"))
         self._test_on_failure(Exception(u"Бобры атакуют"))
 
 
     def test_on_failure_utf8_exception(self):
     def test_on_failure_utf8_exception(self):
-        self._test_on_failure(Exception(u"Бобры атакуют".encode('utf8')))
+        self._test_on_failure(Exception(
+            u"Бобры атакуют".encode('utf8')))

+ 0 - 16
celery/urls.py

@@ -1,16 +0,0 @@
-"""
-
-URLs defined for celery.
-
-"""
-from django.conf.urls.defaults import patterns, url
-
-from celery import views
-
-
-urlpatterns = patterns("",
-    url(r'^(?P<task_id>[\w\d\-]+)/done/?$', views.is_task_successful,
-        name="celery-is_task_successful"),
-    url(r'^(?P<task_id>[\w\d\-]+)/status/?$', views.task_status,
-        name="celery-task_status"),
-)

+ 9 - 11
celery/utils/__init__.py

@@ -19,6 +19,7 @@ from carrot.utils import rpartition
 from billiard.utils.functional import curry
 
 from celery.utils.compat import all, any, defaultdict
+from celery.utils.timeutils import timedelta_seconds # was here before
 
 
 def noop(*args, **kwargs):
@@ -30,6 +31,14 @@ def noop(*args, **kwargs):
     pass
 
 
+def first(predicate, iterable):
+    """Returns the first element in ``iterable`` that ``predicate`` returns a
+    ``True`` value for."""
+    for item in iterable:
+        if predicate(item):
+            return item
+
+
 def chunks(it, n):
     """Split an iterator into chunks with ``n`` elements each.
 
@@ -181,17 +190,6 @@ def fun_takes_kwargs(fun, kwlist=[]):
     return filter(curry(operator.contains, args), kwlist)
 
 
-def timedelta_seconds(delta):
-    """Convert :class:`datetime.timedelta` to seconds.
-
-    Doesn't account for negative values.
-
-    """
-    if delta.days < 0:
-        return 0
-    return delta.days * 86400 + delta.seconds + (delta.microseconds / 10e5)
-
-
 def get_cls_by_name(name, aliases={}):
     """Get class by name.
 

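The first() helper added above simply returns the first item the predicate accepts, or None when nothing matches; a quick illustrative usage:

    from celery.utils import first

    first(lambda n: n % 2 == 0, [1, 3, 4, 5])   # -> 4
    first(lambda n: n > 10, [1, 2, 3])          # -> None (no match)
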
+ 21 - 0
celery/utils/compat.py

@@ -332,3 +332,24 @@ except ImportError:
         def log(self, level, msg, *args, **kwargs):
             msg, kwargs = self.process(msg, kwargs)
             self.logger.log(level, msg, *args, **kwargs)
+
+############## itertools.izip_longest #######################################
+
+try:
+    from itertools import izip_longest
+except ImportError:
+    import itertools
+    def izip_longest(*args, **kwds):
+        fillvalue = kwds.get("fillvalue")
+
+        def sentinel(counter=([fillvalue] * (len(args) - 1)).pop):
+            yield counter() # yields the fillvalue, or raises IndexError
+
+        fillers = itertools.repeat(fillvalue)
+        iters = [itertools.chain(it, sentinel(), fillers)
+                    for it in args]
+        try:
+            for tup in itertools.izip(*iters):
+                yield tup
+        except IndexError:
+            pass
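
This fallback mirrors itertools.izip_longest from Python 2.6: shorter iterables are padded with the fillvalue. For example:

    from celery.utils.compat import izip_longest

    list(izip_longest("ABCD", "xy", fillvalue="-"))
    # -> [('A', 'x'), ('B', 'y'), ('C', '-'), ('D', '-')]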

+ 1 - 0
celery/utils/dispatch/__init__.py

@@ -0,0 +1 @@
+from celery.utils.dispatch.signal import Signal

+ 36 - 0
celery/utils/dispatch/license.txt

@@ -0,0 +1,36 @@
+django.dispatch was originally forked from PyDispatcher.
+
+PyDispatcher License:
+
+    Copyright (c) 2001-2003, Patrick K. O'Brien and Contributors
+    All rights reserved.
+    
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+    
+        Redistributions of source code must retain the above copyright
+        notice, this list of conditions and the following disclaimer.
+    
+        Redistributions in binary form must reproduce the above
+        copyright notice, this list of conditions and the following
+        disclaimer in the documentation and/or other materials
+        provided with the distribution.
+    
+        The name of Patrick K. O'Brien, or the name of any Contributor,
+        may not be used to endorse or promote products derived from this 
+        software without specific prior written permission.
+    
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+    FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+    COPYRIGHT HOLDERS AND CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+    INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+    (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+    SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+    HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+    STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+    OF THE POSSIBILITY OF SUCH DAMAGE. 
+

+ 277 - 0
celery/utils/dispatch/saferef.py

@@ -0,0 +1,277 @@
+"""
+"Safe weakrefs", originally from pyDispatcher.
+
+Provides a way to safely weakref any function, including bound methods (which
+aren't handled by the core weakref module).
+"""
+
+import weakref
+import traceback
+
+
+def safe_ref(target, on_delete=None):
+    """Return a *safe* weak reference to a callable target
+
+    :param target: the object to be weakly referenced, if it's a
+        bound method reference, will create a :class:`BoundMethodWeakref`,
+        otherwise creates a simple :class:`weakref.ref`.
+
+    :keyword on_delete: if provided, will have a hard reference stored
+        to the callable to be called after the safe reference
+        goes out of scope with the reference object, (either a
+        :class:`weakref.ref` or a :class:`BoundMethodWeakref`) as argument.
+    """
+    if getattr(target, "im_self", None) is not None:
+        # Turn a bound method into a BoundMethodWeakref instance.
+        # Keep track of these instances for lookup by disconnect().
+        assert hasattr(target, 'im_func'), \
+            """safe_ref target %r has im_self, but no im_func, " \
+            "don't know how to create reference""" % (target, )
+        return get_bound_method_weakref(target=target,
+                                        on_delete=on_delete)
+    if callable(on_delete):
+        return weakref.ref(target, on_delete)
+    else:
+        return weakref.ref(target)
+
+
+class BoundMethodWeakref(object):
+    """'Safe' and reusable weak references to instance methods.
+
+    BoundMethodWeakref objects provide a mechanism for
+    referencing a bound method without requiring that the
+    method object itself (which is normally a transient
+    object) is kept alive.  Instead, the BoundMethodWeakref
+    object keeps weak references to both the object and the
+    function which together define the instance method.
+
+    .. attribute:: key
+        the identity key for the reference, calculated
+        by the class's :meth:`calculate_key` method applied to the
+        target instance method
+
+    .. attribute:: deletion_methods
+
+        sequence of callable objects taking
+        single argument, a reference to this object which
+        will be called when *either* the target object or
+        target function is garbage collected (i.e. when
+        this object becomes invalid).  These are specified
+        as the on_delete parameters of :func:`safe_ref` calls.
+
+    .. attribute:: weak_self
+        weak reference to the target object
+
+    .. attribute:: weak_func
+        weak reference to the target function
+
+    .. attribute:: _all_instances
+        class attribute pointing to all live
+        BoundMethodWeakref objects indexed by the class's
+        :meth:`calculate_key(target)` method applied to the target
+        objects. This weak value dictionary is used to
+        short-circuit creation so that multiple references
+        to the same (object, function) pair produce the
+        same BoundMethodWeakref instance.
+
+    """
+
+    _all_instances = weakref.WeakValueDictionary()
+
+    def __new__(cls, target, on_delete=None, *arguments, **named):
+        """Create new instance or return current instance
+
+        Basically this method of construction allows us to
+        short-circuit creation of references to already-
+        referenced instance methods.  The key corresponding
+        to the target is calculated, and if there is already
+        an existing reference, that is returned, with its
+        deletionMethods attribute updated.  Otherwise the
+        new instance is created and registered in the table
+        of already-referenced methods.
+
+        """
+        key = cls.calculate_key(target)
+        current = cls._all_instances.get(key)
+        if current is not None:
+            current.deletion_methods.append(on_delete)
+            return current
+        else:
+            base = super(BoundMethodWeakref, cls).__new__(cls)
+            cls._all_instances[key] = base
+            base.__init__(target, on_delete, *arguments, **named)
+            return base
+
+    def __init__(self, target, on_delete=None):
+        """Return a weak-reference-like instance for a bound method
+
+        :param target: the instance-method target for the weak
+            reference, must have ``im_self`` and ``im_func`` attributes
+            and be reconstructable via::
+
+                target.im_func.__get__(target.im_self)
+
+            which is true of built-in instance methods.
+
+        :keyword on_delete: optional callback which will be called
+            when this weak reference ceases to be valid
+            (i.e. either the object or the function is garbage
+            collected).  Should take a single argument,
+            which will be passed a pointer to this object.
+
+        """
+        def remove(weak, self=self):
+            """Set self.is_dead to true when method or instance is destroyed"""
+            methods = self.deletion_methods[:]
+            del(self.deletion_methods[:])
+            try:
+                del(self.__class__._all_instances[self.key])
+            except KeyError:
+                pass
+            for function in methods:
+                try:
+                    if callable(function):
+                        function(self)
+                except Exception, exc:
+                    try:
+                        traceback.print_exc()
+                    except AttributeError:
+                        print("Exception during saferef %s cleanup function "
+                              "%s: %s" % (self, function, exc))
+
+        self.deletion_methods = [on_delete]
+        self.key = self.calculate_key(target)
+        self.weak_self = weakref.ref(target.im_self, remove)
+        self.weak_func = weakref.ref(target.im_func, remove)
+        self.self_name = str(target.im_self)
+        self.func_name = str(target.im_func.__name__)
+
+    def calculate_key(cls, target):
+        """Calculate the reference key for this reference
+
+        Currently this is a two-tuple of the ``id()``'s of the
+        target object and the target function respectively.
+        """
+        return id(target.im_self), id(target.im_func)
+    calculate_key = classmethod(calculate_key)
+
+    def __str__(self):
+        """Give a friendly representation of the object"""
+        return """%s( %s.%s )""" % (
+            self.__class__.__name__,
+            self.self_name,
+            self.func_name,
+        )
+
+    __repr__ = __str__
+
+    def __nonzero__(self):
+        """Whether we are still a valid reference"""
+        return self() is not None
+
+    def __cmp__(self, other):
+        """Compare with another reference"""
+        if not isinstance(other, self.__class__):
+            return cmp(self.__class__, type(other))
+        return cmp(self.key, other.key)
+
+    def __call__(self):
+        """Return a strong reference to the bound method
+
+        If the target cannot be retrieved, then will
+        return None, otherwise returns a bound instance
+        method for our object and function.
+
+        Note:
+            You may call this method any number of times,
+            as it does not invalidate the reference.
+        """
+        target = self.weak_self()
+        if target is not None:
+            function = self.weak_func()
+            if function is not None:
+                return function.__get__(target)
+        return None
+
+
+class BoundNonDescriptorMethodWeakref(BoundMethodWeakref):
+    """A specialized :class:`BoundMethodWeakref`, for platforms where
+    instance methods are not descriptors.
+
+    It assumes that the function name and the target attribute name are the
+    same, instead of assuming that the function is a descriptor. This approach
+    is equally fast, but not 100% reliable because functions can be stored on
+    an attribute named differently than the function's name such as in::
+
+        >>> class A(object):
+        ...     pass
+
+        >>> def foo(self):
+        ...     return "foo"
+        >>> A.bar = foo
+
+    But this shouldn't be a common use case. So, on platforms where methods
+    aren't descriptors (such as Jython) this implementation has the advantage
+    of working in the most cases.
+
+    """
+    def __init__(self, target, on_delete=None):
+        """Return a weak-reference-like instance for a bound method
+
+        :param target: the instance-method target for the weak
+            reference, must have ``im_self`` and ``im_func`` attributes
+            and be reconstructable via::
+
+                target.im_func.__get__(target.im_self)
+
+            which is true of built-in instance methods.
+
+        :keyword on_delete: optional callback which will be called
+            when this weak reference ceases to be valid
+            (i.e. either the object or the function is garbage
+            collected). Should take a single argument,
+            which will be passed a pointer to this object.
+
+        """
+        assert getattr(target.im_self, target.__name__) == target, \
+               "method %s isn't available as the attribute %s of %s" % (
+                    target, target.__name__, target.im_self)
+        super(BoundNonDescriptorMethodWeakref, self).__init__(target,
+                                                              on_delete)
+
+    def __call__(self):
+        """Return a strong reference to the bound method
+
+        If the target cannot be retrieved, then will
+        return None, otherwise returns a bound instance
+        method for our object and function.
+
+        Note:
+            You may call this method any number of times,
+            as it does not invalidate the reference.
+
+        """
+        target = self.weak_self()
+        if target is not None:
+            function = self.weak_func()
+            if function is not None:
+                # Using curry() would be another option, but it erases the
+                # "signature" of the function. That is, after a function is
+                # curried, the inspect module can't be used to determine how
+                # many arguments the function expects, nor what keyword
+                # arguments it supports, and pydispatcher needs this
+                # information.
+                return getattr(target, function.__name__)
+        return None
+
+
+def get_bound_method_weakref(target, on_delete):
+    """Instantiates the appropiate :class:`BoundMethodWeakRef`, depending
+    on the details of the underlying class method implementation."""
+    if hasattr(target, '__get__'):
+        # target method is a descriptor, so the default implementation works:
+        return BoundMethodWeakref(target=target, on_delete=on_delete)
+    else:
+        # no luck, use the alternative implementation:
+        return BoundNonDescriptorMethodWeakref(target=target,
+                                               on_delete=on_delete)
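
In short, safe_ref() hands back a plain weakref.ref for functions and a BoundMethodWeakref for bound methods, so the referenced instance can still be garbage collected. A small usage sketch (the Consumer class is made up for illustration):

    from celery.utils.dispatch.saferef import safe_ref

    class Consumer(object):

        def on_message(self, message):
            print("received: %r" % (message, ))

    consumer = Consumer()
    ref = safe_ref(consumer.on_message)   # a BoundMethodWeakref under the hood
    ref()("hello")                        # ref() returns a strong bound method
    del(consumer)                         # once collected, ref() returns None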

+ 211 - 0
celery/utils/dispatch/signal.py

@@ -0,0 +1,211 @@
+"""Signal class."""
+
+import weakref
+try:
+    set
+except NameError:
+    from sets import Set as set # Python 2.3 fallback
+
+from celery.utils.dispatch import saferef
+
+WEAKREF_TYPES = (weakref.ReferenceType, saferef.BoundMethodWeakref)
+
+
+def _make_id(target):
+    if hasattr(target, 'im_func'):
+        return (id(target.im_self), id(target.im_func))
+    return id(target)
+
+
+class Signal(object):
+    """Base class for all signals
+
+
+    .. attribute:: receivers
+        Internal attribute, holds a dictionary of
+        ``{receiverkey (id): weakref(receiver)}`` mappings.
+
+    """
+
+    def __init__(self, providing_args=None):
+        """Create a new signal.
+
+        :param providing_args: A list of the arguments this signal can pass
+            along in a :meth:`send` call.
+
+        """
+        self.receivers = []
+        if providing_args is None:
+            providing_args = []
+        self.providing_args = set(providing_args)
+
+    def connect(self, receiver, sender=None, weak=True, dispatch_uid=None):
+        """Connect receiver to sender for signal.
+
+        :param receiver: A function or an instance method which is to
+            receive signals. Receivers must be hashable objects.
+
+            if weak is ``True``, then receiver must be weak-referencable (more
+            precisely :func:`saferef.safe_ref()` must be able to create a
+            reference to the receiver).
+
+            Receivers must be able to accept keyword arguments.
+
+            If receivers have a ``dispatch_uid`` attribute, the receiver will
+            not be added if another receiver already exists with that
+            ``dispatch_uid``.
+
+        :keyword sender: The sender to which the receiver should respond.
+            Must either be of type :class:`Signal`, or ``None`` to receive
+            events from any sender.
+
+        :keyword weak: Whether to use weak references to the receiver.
+            By default, the module will attempt to use weak references to the
+            receiver objects. If this parameter is false, then strong
+            references will be used.
+
+        :keyword dispatch_uid: An identifier used to uniquely identify a
+            particular instance of a receiver. This will usually be a
+            string, though it may be anything hashable.
+
+        """
+        if dispatch_uid:
+            lookup_key = (dispatch_uid, _make_id(sender))
+        else:
+            lookup_key = (_make_id(receiver), _make_id(sender))
+
+        if weak:
+            receiver = saferef.safe_ref(receiver,
+                                        on_delete=self._remove_receiver)
+
+        for r_key, _ in self.receivers:
+            if r_key == lookup_key:
+                break
+        else:
+            self.receivers.append((lookup_key, receiver))
+
+    def disconnect(self, receiver=None, sender=None, weak=True,
+            dispatch_uid=None):
+        """Disconnect receiver from sender for signal.
+
+        If weak references are used, disconnect need not be called. The
+        receiver will be removed from dispatch automatically.
+
+        :keyword receiver: The registered receiver to disconnect. May be
+            none if ``dispatch_uid`` is specified.
+
+        :keyword sender: The registered sender to disconnect.
+
+        :keyword weak: The weakref state to disconnect.
+
+        :keyword dispatch_uid: the unique identifier of the receiver
+            to disconnect
+
+        """
+        if dispatch_uid:
+            lookup_key = (dispatch_uid, _make_id(sender))
+        else:
+            lookup_key = (_make_id(receiver), _make_id(sender))
+
+        for index in xrange(len(self.receivers)):
+            (r_key, _) = self.receivers[index]
+            if r_key == lookup_key:
+                del self.receivers[index]
+                break
+
+    def send(self, sender, **named):
+        """Send signal from sender to all connected receivers.
+
+        If any receiver raises an error, the error propagates back through
+        send, terminating the dispatch loop, so it is quite possible to not
+        have all receivers called if a receiver raises an error.
+
+        :param sender: The sender of the signal. Either a specific
+            object or ``None``.
+
+        :keyword \*\*named: Named arguments which will be passed to receivers.
+
+        :returns: a list of tuple pairs: ``[(receiver, response), ... ]``.
+
+        """
+        responses = []
+        if not self.receivers:
+            return responses
+
+        for receiver in self._live_receivers(_make_id(sender)):
+            response = receiver(signal=self, sender=sender, **named)
+            responses.append((receiver, response))
+        return responses
+
+    def send_robust(self, sender, **named):
+        """Send signal from sender to all connected receivers catching errors.
+
+        :param sender: The sender of the signal. Can be any python object
+            (normally one registered with a connect if you actually want
+            something to occur).
+
+        :keyword \*\*named: Named arguments which will be passed to receivers.
+            These arguments must be a subset of the argument names defined in
+            :attr:`providing_args`.
+
+        :returns: a list of tuple pairs: ``[(receiver, response), ... ]``.
+
+        :raises DispatcherKeyError:
+
+        if any receiver raises an error (specifically any subclass of
+        :exc:`Exception`), the error instance is returned as the result
+        for that receiver.
+
+        """
+        responses = []
+        if not self.receivers:
+            return responses
+
+        # Call each receiver with whatever arguments it can accept.
+        # Return a list of tuple pairs [(receiver, response), ... ].
+        for receiver in self._live_receivers(_make_id(sender)):
+            try:
+                response = receiver(signal=self, sender=sender, **named)
+            except Exception, err:
+                responses.append((receiver, err))
+            else:
+                responses.append((receiver, response))
+        return responses
+
+    def _live_receivers(self, senderkey):
+        """Filter sequence of receivers to get resolved, live receivers.
+
+        This checks for weak references and resolves them, then returning only
+        live receivers.
+
+        """
+        none_senderkey = _make_id(None)
+        receivers = []
+
+        for (receiverkey, r_senderkey), receiver in self.receivers:
+            if r_senderkey == none_senderkey or r_senderkey == senderkey:
+                if isinstance(receiver, WEAKREF_TYPES):
+                    # Dereference the weak reference.
+                    receiver = receiver()
+                    if receiver is not None:
+                        receivers.append(receiver)
+                else:
+                    receivers.append(receiver)
+        return receivers
+
+    def _remove_receiver(self, receiver):
+        """Remove dead receivers from connections."""
+
+        to_remove = []
+        for key, connected_receiver in self.receivers:
+            if connected_receiver == receiver:
+                to_remove.append(key)
+        for key in to_remove:
+            for idx, (r_key, _) in enumerate(self.receivers):
+                if r_key == key:
+                    del self.receivers[idx]
+
+    def __repr__(self):
+        return '<Signal: %s>' % (self.__class__.__name__, )
+
+    __str__ = __repr__
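
Receivers connect to a Signal instance and are called with signal and sender keyword arguments plus whatever named arguments send() passes along, so they should accept **kwargs. A minimal sketch (the signal and receiver names are illustrative):

    from celery.utils.dispatch import Signal

    task_sent = Signal(providing_args=["task_id"])

    def on_task_sent(sender=None, task_id=None, **kwargs):
        print("task %s sent by %r" % (task_id, sender))

    task_sent.connect(on_task_sent)
    task_sent.send(sender="example", task_id="42")   # -> [(on_task_sent, None)]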

+ 1 - 1
celery/utils/info.py

@@ -33,7 +33,7 @@ def textindent(t, indent=0):
 
 def format_routing_table(table=None, indent=0):
     """Format routing table into string for log dumps."""
-    table = table or conf.routing_table
+    table = table or conf.get_routing_table()
     format = lambda **route: ROUTE_FORMAT.strip() % route
     routes = "\n".join(format(name=name, **route)
                             for name, route in table.items())

+ 24 - 0
celery/utils/mail.py

@@ -0,0 +1,24 @@
+from mailer import Message, Mailer
+
+from celery.loaders import load_settings
+
+
+def mail_admins(subject, message, fail_silently=False):
+    """Send a message to the admins in settings.ADMINS."""
+    settings = load_settings()
+    if not settings.ADMINS:
+        return
+    to = ", ".join(admin_email for _, admin_email in settings.ADMINS)
+    username = settings.EMAIL_HOST_USER
+    password = settings.EMAIL_HOST_PASSWORD
+
+    message = Message(From=settings.SERVER_EMAIL, To=to,
+                      Subject=subject, Message=message)
+
+    try:
+        mailer = Mailer(settings.EMAIL_HOST, settings.EMAIL_PORT)
+        username and mailer.login(username, password)
+        mailer.send(message)
+    except Exception:
+        if not fail_silently:
+            raise
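
mail_admins() is a thin wrapper over the mailer library, driven entirely by the loader's settings (ADMINS, SERVER_EMAIL, EMAIL_HOST and friends). A usage sketch, assuming those settings are configured:

    from celery.utils.mail import mail_admins

    mail_admins("Worker alert",
                "celeryd stopped responding on this host.",
                fail_silently=True)   # swallow SMTP errors instead of raising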

+ 123 - 0
celery/utils/timeutils.py

@@ -0,0 +1,123 @@
+from datetime import datetime
+
+from carrot.utils import partition
+
+DAYNAMES = "sun", "mon", "tue", "wed", "thu", "fri", "sat"
+WEEKDAYS = dict((name, dow) for name, dow in zip(DAYNAMES, range(7)))
+
+RATE_MODIFIER_MAP = {"s": lambda n: n,
+                     "m": lambda n: n / 60.0,
+                     "h": lambda n: n / 60.0 / 60.0}
+
+
+def timedelta_seconds(delta):
+    """Convert :class:`datetime.timedelta` to seconds.
+
+    Doesn't account for negative values.
+
+    """
+    if delta.days < 0:
+        return 0
+    return delta.days * 86400 + delta.seconds + (delta.microseconds / 10e5)
+
+
+def delta_resolution(dt, delta):
+    """Round a datetime to the resolution of a timedelta.
+
+    If the timedelta is in days, the datetime will be rounded
+    to the nearest days, if the timedelta is in hours the datetime
+    will be rounded to the nearest hour, and so on until seconds
+    which will just return the original datetime.
+
+    Examples::
+
+        >>> now = datetime.now()
+        >>> now
+        datetime.datetime(2010, 3, 30, 11, 50, 58, 41065)
+        >>> delta_resolution(now, timedelta(days=2))
+        datetime.datetime(2010, 3, 30, 0, 0)
+        >>> delta_resolution(now, timedelta(hours=2))
+        datetime.datetime(2010, 3, 30, 11, 0)
+        >>> delta_resolution(now, timedelta(minutes=2))
+        datetime.datetime(2010, 3, 30, 11, 50)
+        >>> delta_resolution(now, timedelta(seconds=2))
+        datetime.datetime(2010, 3, 30, 11, 50, 58, 41065)
+
+    """
+    delta = timedelta_seconds(delta)
+
+    resolutions = ((3, lambda x: x / 86400),
+                   (4, lambda x: x / 3600),
+                   (5, lambda x: x / 60))
+
+    args = dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second
+    for res, predicate in resolutions:
+        if predicate(delta) >= 1.0:
+            return datetime(*args[:res])
+    return dt
+
+
+def remaining(start, ends_in, now=None, relative=True):
+    """Calculate the remaining time for a start date and a timedelta.
+
+    E.g. "how many seconds left for 30 seconds after ``start``?"
+
+    :param start: Start :class:`datetime.datetime`.
+    :param ends_in: The end delta as a :class:`datetime.timedelta`.
+
+    :keyword relative: If set to ``False``, the end time will be calculated
+        using :func:`delta_resolution` (i.e. rounded to the resolution
+        of ``ends_in``).
+    :keyword now: The current time, defaults to :func:`datetime.now`.
+
+    Examples::
+
+        >>> remaining(datetime.now(), ends_in=timedelta(seconds=30))
+        '0:0:29.999948'
+
+        >>> str(remaining(datetime.now() - timedelta(minutes=29),
+                ends_in=timedelta(hours=2)))
+        '1:30:59.999938'
+
+        >>> str(remaining(datetime.now() - timedelta(minutes=29),
+                ends_in=timedelta(hours=2),
+                relative=False))
+        '1:11:18.458437'
+
+    """
+    now = now or datetime.now()
+
+    end_date = start + ends_in
+    if not relative:
+        end_date = delta_resolution(end_date, ends_in)
+    return end_date - now
+
+
+def rate(rate):
+    """Parses rate strings, such as ``"100/m"`` or ``"2/h"``
+    and converts them to seconds."""
+    if rate:
+        if isinstance(rate, basestring):
+            ops, _, modifier = partition(rate, "/")
+            return RATE_MODIFIER_MAP[modifier or "s"](int(ops)) or 0
+        return rate or 0
+    return 0
+
+
+def weekday(name):
+    """Return the position of a weekday (0 - 7, where 0 is Sunday).
+
+        >>> weekday("sunday")
+        0
+        >>> weekday("sun")
+        0
+        >>> weekday("mon")
+        1
+
+    """
+    abbreviation = name[0:3].lower()
+    try:
+        return WEEKDAYS[abbreviation]
+    except KeyError:
+        # Show original day name in exception, instead of abbr.
+        raise KeyError(name)
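
A rough sketch of how the new helpers behave, assuming the module above is
importable as ``celery.utils.timeutils`` (the float values are approximate)::

    from datetime import timedelta
    from celery.utils.timeutils import rate, timedelta_seconds, weekday

    rate("100/m")        # -> 100 / 60.0, i.e. ~1.67 operations per second
    rate("2/h")          # -> 2 / 3600.0, i.e. ~0.00056 operations per second
    rate(None)           # -> 0, meaning no rate limit
    timedelta_seconds(timedelta(hours=1, seconds=30))   # -> 3630.0
    weekday("Wednesday")                                 # -> 3 (Sunday is 0)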

+ 0 - 106
celery/views.py

@@ -1,106 +0,0 @@
-from django.http import HttpResponse, Http404
-
-from anyjson import serialize as JSON_dump
-from billiard.utils.functional import wraps
-
-from celery.utils import get_full_cls_name
-from celery.result import AsyncResult
-from celery.registry import tasks
-from celery.backends import default_backend
-
-
-def task_view(task):
-    """Decorator turning any task into a view that applies the task
-    asynchronously.
-
-    Returns a JSON dictionary containing the keys ``ok``, and
-        ``task_id``.
-
-    """
-
-    def _applier(request, **options):
-        kwargs = request.method == "POST" and \
-            request.POST.copy() or request.GET.copy()
-        kwargs = dict((key.encode("utf-8"), value)
-                    for key, value in kwargs.items())
-
-        result = task.apply_async(kwargs=kwargs)
-        response_data = {"ok": "true", "task_id": result.task_id}
-        return HttpResponse(JSON_dump(response_data),
-                            mimetype="application/json")
-
-    return _applier
-
-
-def apply(request, task_name):
-    """View applying a task.
-
-    **Note:** Please use this with caution. Preferably you shouldn't make this
-        publicly accessible without ensuring your code is safe!
-
-    """
-    try:
-        task = tasks[task_name]
-    except KeyError:
-        raise Http404("apply: no such task")
-    return task_view(task)(request)
-
-
-def is_task_successful(request, task_id):
-    """Returns task execute status in JSON format."""
-    response_data = {"task": {"id": task_id,
-                              "executed": AsyncResult(task_id).successful()}}
-    return HttpResponse(JSON_dump(response_data), mimetype="application/json")
-
-
-def task_status(request, task_id):
-    """Returns task status and result in JSON format."""
-    status = default_backend.get_status(task_id)
-    res = default_backend.get_result(task_id)
-    response_data = dict(id=task_id, status=status, result=res)
-    if status in default_backend.EXCEPTION_STATES:
-        traceback = default_backend.get_traceback(task_id)
-        response_data.update({"result": str(res.args[0]),
-                              "exc": get_full_cls_name(res.__class__),
-                              "traceback": traceback})
-
-    return HttpResponse(JSON_dump({"task": response_data}),
-            mimetype="application/json")
-
-
-def task_webhook(fun):
-    """Decorator turning a function into a task webhook.
-
-    If an exception is raised within the function, the decorated
-    function catches this and returns an error JSON response, otherwise
-    it returns the result as a JSON response.
-
-
-    Example:
-
-    .. code-block:: python
-
-        @task_webhook
-        def add(request):
-            x = int(request.GET["x"])
-            y = int(request.GET["y"])
-            return x + y
-
-        >>> response = add(request)
-        >>> response.content
-        '{"status": "success", "retval": 100}'
-
-    """
-
-    @wraps(fun)
-    def _inner(*args, **kwargs):
-        try:
-            retval = fun(*args, **kwargs)
-        except Exception, exc:
-            response = {"status": "failure", "reason": str(exc)}
-        else:
-            response = {"status": "success", "retval": retval}
-
-        return HttpResponse(JSON_dump(response), mimetype="application/json")
-
-    return _inner

+ 15 - 11
celery/worker/__init__.py

@@ -6,7 +6,6 @@ The Multiprocessing Worker Server
 import socket
 import socket
 import logging
 import logging
 import traceback
 import traceback
-from Queue import Queue
 from multiprocessing.util import Finalize
 from multiprocessing.util import Finalize
 
 
 from celery import conf
 from celery import conf
@@ -17,7 +16,7 @@ from celery.log import setup_logger, _hijack_multiprocessing_logger
 from celery.beat import EmbeddedClockService
 from celery.beat import EmbeddedClockService
 from celery.utils import noop, instantiate
 from celery.utils import noop, instantiate
 
 
-from celery.worker.buckets import TaskBucket
+from celery.worker.buckets import TaskBucket, FastQueue
 from celery.worker.scheduler import Scheduler
 from celery.worker.scheduler import Scheduler
 
 
 
 
@@ -36,6 +35,8 @@ def process_initializer():
     from celery.loaders import current_loader
     from celery.loaders import current_loader
     current_loader().init_worker()
     current_loader().init_worker()
 
 
+    signals.worker_process_init.send(sender=None)
+
 
 
 class WorkController(object):
 class WorkController(object):
     """Executes tasks waiting in the task queue.
     """Executes tasks waiting in the task queue.
@@ -84,12 +85,6 @@ class WorkController(object):
         The :class:`Queue.Queue` that holds tasks ready for immediate
         The :class:`Queue.Queue` that holds tasks ready for immediate
         processing.
         processing.
 
 
-    .. attribute:: hold_queue
-
-        The :class:`Queue.Queue` that holds paused tasks. Reasons for holding
-        back the task include waiting for ``eta`` to pass or the task is being
-        retried.
-
     .. attribute:: schedule_controller
     .. attribute:: schedule_controller
 
 
         Instance of :class:`celery.worker.controllers.ScheduleController`.
         Instance of :class:`celery.worker.controllers.ScheduleController`.
@@ -114,7 +109,10 @@ class WorkController(object):
             pool_cls=conf.CELERYD_POOL, listener_cls=conf.CELERYD_LISTENER,
             pool_cls=conf.CELERYD_POOL, listener_cls=conf.CELERYD_LISTENER,
             mediator_cls=conf.CELERYD_MEDIATOR,
             mediator_cls=conf.CELERYD_MEDIATOR,
             eta_scheduler_cls=conf.CELERYD_ETA_SCHEDULER,
             eta_scheduler_cls=conf.CELERYD_ETA_SCHEDULER,
-            schedule_filename=conf.CELERYBEAT_SCHEDULE_FILENAME):
+            schedule_filename=conf.CELERYBEAT_SCHEDULE_FILENAME,
+            task_time_limit=conf.CELERYD_TASK_TIME_LIMIT,
+            task_soft_time_limit=conf.CELERYD_TASK_SOFT_TIME_LIMIT,
+            max_tasks_per_child=conf.CELERYD_MAX_TASKS_PER_CHILD):
 
 
         # Options
         # Options
         self.loglevel = loglevel or self.loglevel
         self.loglevel = loglevel or self.loglevel
@@ -125,11 +123,14 @@ class WorkController(object):
         self.embed_clockservice = embed_clockservice
         self.embed_clockservice = embed_clockservice
         self.ready_callback = ready_callback
         self.ready_callback = ready_callback
         self.send_events = send_events
         self.send_events = send_events
+        self.task_time_limit = task_time_limit
+        self.task_soft_time_limit = task_soft_time_limit
+        self.max_tasks_per_child = max_tasks_per_child
         self._finalize = Finalize(self, self.stop, exitpriority=20)
         self._finalize = Finalize(self, self.stop, exitpriority=20)
 
 
         # Queues
         # Queues
         if conf.DISABLE_RATE_LIMITS:
         if conf.DISABLE_RATE_LIMITS:
-            self.ready_queue = Queue()
+            self.ready_queue = FastQueue()
         else:
         else:
             self.ready_queue = TaskBucket(task_registry=registry.tasks)
             self.ready_queue = TaskBucket(task_registry=registry.tasks)
         self.eta_schedule = Scheduler(self.ready_queue, logger=self.logger)
         self.eta_schedule = Scheduler(self.ready_queue, logger=self.logger)
@@ -139,7 +140,10 @@ class WorkController(object):
         # Threads + Pool + Consumer
         # Threads + Pool + Consumer
         self.pool = instantiate(pool_cls, self.concurrency,
         self.pool = instantiate(pool_cls, self.concurrency,
                                 logger=self.logger,
                                 logger=self.logger,
-                                initializer=process_initializer)
+                                initializer=process_initializer,
+                                maxtasksperchild=self.max_tasks_per_child,
+                                timeout=self.task_time_limit,
+                                soft_timeout=self.task_soft_time_limit)
         self.mediator = instantiate(mediator_cls, self.ready_queue,
         self.mediator = instantiate(mediator_cls, self.ready_queue,
                                     callback=self.process_task,
                                     callback=self.process_task,
                                     logger=self.logger)
                                     logger=self.logger)
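
The three new keyword arguments are read from the ``conf`` settings shown
above; a minimal sketch of a configuration module exercising them (the
numbers are only examples, not recommendations)::

    # celeryconfig.py
    CELERYD_TASK_TIME_LIMIT = 300        # hard limit in seconds; the pool terminates the task
    CELERYD_TASK_SOFT_TIME_LIMIT = 240   # soft limit; triggers the soft timeout warning first
    CELERYD_MAX_TASKS_PER_CHILD = 100    # recycle each pool process after 100 tasks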

+ 65 - 44
celery/worker/buckets.py

@@ -1,35 +1,15 @@
 import time
 import time
 from Queue import Queue, Empty as QueueEmpty
 from Queue import Queue, Empty as QueueEmpty
-
-from carrot.utils import partition
+from itertools import chain
 
 
 from celery.utils import all
 from celery.utils import all
-
-RATE_MODIFIER_MAP = {"s": lambda n: n,
-                     "m": lambda n: n / 60.0,
-                     "h": lambda n: n / 60.0 / 60.0}
-
+from celery.utils import timeutils
+from celery.utils.compat import izip_longest
 
 
 class RateLimitExceeded(Exception):
 class RateLimitExceeded(Exception):
     """The token buckets rate limit has been exceeded."""
     """The token buckets rate limit has been exceeded."""
 
 
 
 
-def parse_ratelimit_string(rate_limit):
-    """Parse rate limit configurations such as ``"100/m"`` or ``"2/h"``
-        and convert them into seconds.
-
-    Returns ``0`` for no rate limit.
-
-    """
-
-    if rate_limit:
-        if isinstance(rate_limit, basestring):
-            ops, _, modifier = partition(rate_limit, "/")
-            return RATE_MODIFIER_MAP[modifier or "s"](int(ops)) or 0
-        return rate_limit or 0
-    return 0
-
-
 class TaskBucket(object):
 class TaskBucket(object):
     """This is a collection of token buckets, each task type having
     """This is a collection of token buckets, each task type having
     its own token bucket. If the task type doesn't have a rate limit,
     its own token bucket. If the task type doesn't have a rate limit,
@@ -123,7 +103,7 @@ class TaskBucket(object):
             if remaining_time:
             if remaining_time:
                 if not block or did_timeout():
                 if not block or did_timeout():
                     raise QueueEmpty
                     raise QueueEmpty
-                time.sleep(remaining_time)
+                time.sleep(min(remaining_time, timeout or 1))
             else:
             else:
                 return item
                 return item
 
 
@@ -134,34 +114,48 @@ class TaskBucket(object):
         """Initialize with buckets for all the task types in the registry."""
         """Initialize with buckets for all the task types in the registry."""
         map(self.add_bucket_for_type, self.task_registry.keys())
         map(self.add_bucket_for_type, self.task_registry.keys())
 
 
+    def refresh(self):
+        """Refresh rate limits for all task types in the registry."""
+        map(self.update_bucket_for_type, self.task_registry.keys())
+
     def get_bucket_for_type(self, task_name):
     def get_bucket_for_type(self, task_name):
         """Get the bucket for a particular task type."""
         """Get the bucket for a particular task type."""
         if task_name not in self.buckets:
         if task_name not in self.buckets:
             return self.add_bucket_for_type(task_name)
             return self.add_bucket_for_type(task_name)
         return self.buckets[task_name]
         return self.buckets[task_name]
 
 
-    def add_bucket_for_type(self, task_name):
-        """Add a bucket for a task type.
+    def _get_queue_for_type(self, task_name):
+        bucket = self.buckets[task_name]
+        if isinstance(bucket, TokenBucketQueue):
+            return bucket.queue
+        return bucket
 
 
-        Will read the tasks rate limit and create a :class:`TokenBucketQueue`
-        if it has one. If the task doesn't have a rate limit a regular Queue
-        will be used.
-
-        """
-        if task_name in self.buckets:
-            return
+    def update_bucket_for_type(self, task_name):
         task_type = self.task_registry[task_name]
         task_type = self.task_registry[task_name]
-        task_queue = task_type.rate_limit_queue_type()
         rate_limit = getattr(task_type, "rate_limit", None)
         rate_limit = getattr(task_type, "rate_limit", None)
-        rate_limit = parse_ratelimit_string(rate_limit)
+        rate_limit = timeutils.rate(rate_limit)
+        if task_name in self.buckets:
+            task_queue = self._get_queue_for_type(task_name)
+        else:
+            task_queue = FastQueue()
+
         if rate_limit:
         if rate_limit:
             task_queue = TokenBucketQueue(rate_limit, queue=task_queue)
             task_queue = TokenBucketQueue(rate_limit, queue=task_queue)
-        else:
-            task_queue.expected_time = lambda: 0
 
 
         self.buckets[task_name] = task_queue
         self.buckets[task_name] = task_queue
         return task_queue
         return task_queue
 
 
+    def add_bucket_for_type(self, task_name):
+        """Add a bucket for a task type.
+
+        Will read the tasks rate limit and create a :class:`TokenBucketQueue`
+        if it has one. If the task doesn't have a rate limit a regular Queue
+        will be used.
+
+        """
+        if task_name not in self.buckets:
+            return self.update_bucket_for_type(task_name)
+
     def qsize(self):
     def qsize(self):
         """Get the total size of all the queues."""
         """Get the total size of all the queues."""
         return sum(bucket.qsize() for bucket in self.buckets.values())
         return sum(bucket.qsize() for bucket in self.buckets.values())
@@ -171,12 +165,35 @@ class TaskBucket(object):
 
 
     def clear(self):
     def clear(self):
         for bucket in self.buckets.values():
         for bucket in self.buckets.values():
-            try:
-                bucket.clear()
-            except AttributeError:
-                # Probably a Queue, not a TokenBucketQueue, so clear the
-                # underlying deque instead.
-                bucket.queue.clear()
+            bucket.clear()
+
+    @property
+    def items(self):
+        # for queues with contents [(1, 2), (3, 4), (5, 6), (7, 8)]
+        # zips and flattens to [1, 3, 5, 7, 2, 4, 6, 8]
+        return filter(None, chain.from_iterable(izip_longest(*[bucket.items
+                                    for bucket in self.buckets.values()])))
+
+
+class FastQueue(Queue):
+    """:class:`Queue.Queue` supporting the interface of
+    :class:`TokenBucketQueue`."""
+
+    def clear(self):
+        return self.queue.clear()
+
+    def expected_time(self, tokens=1):
+        return 0
+
+    def can_consume(self, tokens=1):
+        return True
+
+    def wait(self, block=True):
+        return self.get(block=block)
+
+    @property
+    def items(self):
+        return self.queue
 
 
 
 
 class TokenBucketQueue(object):
 class TokenBucketQueue(object):
@@ -275,7 +292,7 @@ class TokenBucketQueue(object):
         return self.queue.empty()
         return self.queue.empty()
 
 
     def clear(self):
     def clear(self):
-        return self.queue.queue.clear()
+        return self.items.clear()
 
 
     def wait(self, block=False):
     def wait(self, block=False):
         """Wait until a token can be retrieved from the bucket and return
         """Wait until a token can be retrieved from the bucket and return
@@ -307,3 +324,7 @@ class TokenBucketQueue(object):
             self._tokens = min(self.capacity, self._tokens + delta)
             self._tokens = min(self.capacity, self._tokens + delta)
             self.timestamp = now
             self.timestamp = now
         return self._tokens
         return self._tokens
+
+    @property
+    def items(self):
+        return self.queue.queue
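
The ``items`` property interleaves waiting tasks across buckets rather than
concatenating them; a standalone sketch of the same zip-and-flatten idiom,
with plain lists standing in for the per-type queues::

    from itertools import chain
    from celery.utils.compat import izip_longest

    queues = [[1, 2], [3, 4], [5, 6], [7, 8]]
    # izip_longest pads shorter queues with None; filter(None, ...) drops the padding.
    flat = filter(None, chain.from_iterable(izip_longest(*queues)))
    assert flat == [1, 3, 5, 7, 2, 4, 6, 8]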

+ 0 - 128
celery/worker/control.py

@@ -1,128 +0,0 @@
-import socket
-
-from celery import log
-from celery.registry import tasks
-from celery.worker.revoke import revoked
-
-TASK_INFO_FIELDS = ("exchange", "routing_key", "rate_limit")
-
-def expose(fun):
-    """Expose method as a celery worker control command, allowed to be called
-    from a message."""
-    fun.exposed = True
-    return fun
-
-
-class Control(object):
-    """The worker control panel.
-
-    :param logger: The current logger to use.
-
-    """
-
-    def __init__(self, logger, hostname=None):
-        self.logger = logger
-        self.hostname = hostname or socket.gethostname()
-
-    @expose
-    def revoke(self, task_id, **kwargs):
-        """Revoke task by task id."""
-        revoked.add(task_id)
-        self.logger.warn("Task %s revoked." % task_id)
-
-    @expose
-    def rate_limit(self, task_name, rate_limit, **kwargs):
-        """Set new rate limit for a task type.
-
-        See :attr:`celery.task.base.Task.rate_limit`.
-
-        :param task_name: Type of task.
-        :param rate_limit: New rate limit.
-
-        """
-        try:
-            tasks[task_name].rate_limit = rate_limit
-        except KeyError:
-            return
-
-        if not rate_limit:
-            self.logger.warn("Disabled rate limits for tasks of type %s" % (
-                                task_name))
-        else:
-            self.logger.warn("New rate limit for tasks of type %s: %s." % (
-                                task_name, rate_limit))
-
-    @expose
-    def shutdown(self, **kwargs):
-        self.logger.critical("Got shutdown from remote.")
-        raise SystemExit
-
-    @expose
-    def dump_tasks(self, **kwargs):
-        from celery import registry
-
-        def _extract_info(task):
-            fields = dict((field, str(getattr(task, field, None)))
-                            for field in TASK_INFO_FIELDS
-                                if getattr(task, field, None) is not None)
-            info = map("=".join, fields.items())
-            if not info:
-                return "\t%s" % task.name
-            return "\t%s [%s]" % (task.name, " ".join(info))
-
-        tasks = sorted(registry.tasks.keys())
-        tasks = [registry.tasks[task] for task in tasks]
-
-        self.logger.warn("* Dump of currently registered tasks:\n%s" % (
-            "\n".join(map(_extract_info, tasks))))
-
-
-class ControlDispatch(object):
-    """Execute worker control panel commands."""
-
-    panel_cls = Control
-
-    def __init__(self, logger=None, hostname=None):
-        self.logger = logger or log.get_default_logger()
-        self.hostname = hostname
-        self.panel = self.panel_cls(self.logger, hostname=self.hostname)
-
-    def dispatch_from_message(self, message):
-        """Dispatch by using message data received by the broker.
-
-        Example:
-
-            >>> def receive_message(message_data, message):
-            ...     control = message_data.get("control")
-            ...     if control:
-            ...         ControlDispatch().dispatch_from_message(control)
-
-        """
-        message = dict(message) # don't modify callers message.
-        command = message.pop("command")
-        destination = message.pop("destination", None)
-        if not destination or self.hostname in destination:
-            return self.execute(command, message)
-
-    def execute(self, command, kwargs=None):
-        """Execute control command by name and keyword arguments.
-
-        :param command: Name of the command to execute.
-        :param kwargs: Keyword arguments.
-
-        """
-        kwargs = kwargs or {}
-        control = None
-        try:
-            control = getattr(self.panel, command)
-        except AttributeError:
-            pass
-        if control is None or not control.exposed:
-            self.logger.error("No such control command: %s" % command)
-        else:
-            # need to make sure keyword arguments are not in unicode
-            # this should be fixed in newer Python's
-            # (see: http://bugs.python.org/issue4978)
-            kwargs = dict((k.encode('utf8'), v)
-                            for (k, v) in kwargs.iteritems())
-            return control(**kwargs)

+ 68 - 0
celery/worker/control/__init__.py

@@ -0,0 +1,68 @@
+from celery import log
+from celery.worker.control.registry import Panel
+from celery.worker.control import builtins
+from celery.messaging import ControlReplyPublisher, with_connection
+
+
+class ControlDispatch(object):
+    """Execute worker control panel commands."""
+    panel_cls = Panel
+
+    def __init__(self, logger=None, hostname=None, listener=None):
+        self.logger = logger or log.get_default_logger()
+        self.hostname = hostname
+        self.listener = listener
+        self.panel = self.panel_cls(self.logger, self.listener, self.hostname)
+
+    @with_connection
+    def reply(self, data, exchange, routing_key, connection=None,
+            connect_timeout=None):
+        crq = ControlReplyPublisher(connection, exchange=exchange)
+        try:
+            crq.send(data, routing_key=routing_key)
+        finally:
+            crq.close()
+
+    def dispatch_from_message(self, message):
+        """Dispatch by using message data received by the broker.
+
+        Example:
+
+            >>> def receive_message(message_data, message):
+            ...     control = message_data.get("control")
+            ...     if control:
+            ...         ControlDispatch().dispatch_from_message(control)
+
+        """
+        message = dict(message) # don't modify callers message.
+        command = message.pop("command")
+        destination = message.pop("destination", None)
+        reply_to = message.pop("reply_to", None)
+        if not destination or self.hostname in destination:
+            return self.execute(command, message, reply_to=reply_to)
+
+    def execute(self, command, kwargs=None, reply_to=None):
+        """Execute control command by name and keyword arguments.
+
+        :param command: Name of the command to execute.
+        :param kwargs: Keyword arguments.
+
+        """
+        kwargs = kwargs or {}
+        control = None
+        try:
+            control = self.panel[command]
+        except KeyError:
+            self.logger.error("No such control command: %s" % command)
+        else:
+            # need to make sure keyword arguments are not in unicode
+            # this should be fixed in newer Python's
+            # (see: http://bugs.python.org/issue4978)
+            kwargs = dict((k.encode("utf8"), v)
+                            for k, v in kwargs.iteritems())
+            reply = control(self.panel, **kwargs)
+            if reply_to:
+                self.reply({self.hostname: reply},
+                           exchange=reply_to["exchange"],
+                           routing_key=reply_to["routing_key"])
+            return reply
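
``dispatch_from_message`` expects a plain dict; a sketch of the shape it
handles, with invented values for the reply destination::

    from celery.worker.control import ControlDispatch

    # "command" must be a registered Panel command (the builtin "ping" here).
    # "destination" is either None or a list of worker hostnames.
    # The reply_to exchange/routing key are made up for illustration.
    message = {"command": "ping",
               "destination": None,
               "reply_to": {"exchange": "example.replies",
                            "routing_key": "example.key"}}
    dispatch = ControlDispatch(hostname="worker1.example.com", listener=None)
    dispatch.dispatch_from_message(message)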

+ 118 - 0
celery/worker/control/builtins.py

@@ -0,0 +1,118 @@
+from datetime import datetime
+
+from celery import conf
+from celery.registry import tasks
+from celery.worker.revoke import revoked
+from celery.worker.control.registry import Panel
+from celery.backends import default_backend
+
+TASK_INFO_FIELDS = ("exchange", "routing_key", "rate_limit")
+
+
+@Panel.register
+def revoke(panel, task_id, task_name=None, **kwargs):
+    """Revoke task by task id."""
+    revoked.add(task_id)
+    backend = default_backend
+    if task_name: # Use custom task backend (if any)
+        try:
+            backend = tasks[task_name].backend
+        except KeyError:
+            pass
+    backend.mark_as_revoked(task_id)
+    panel.logger.warn("Task %s revoked" % (task_id, ))
+    return True
+
+
+@Panel.register
+def rate_limit(panel, task_name, rate_limit, **kwargs):
+    """Set new rate limit for a task type.
+
+    See :attr:`celery.task.base.Task.rate_limit`.
+
+    :param task_name: Type of task.
+    :param rate_limit: New rate limit.
+
+    """
+    try:
+        tasks[task_name].rate_limit = rate_limit
+    except KeyError:
+        panel.logger.error("Rate limit attempt for unknown task %s" % (
+            task_name, ))
+        return {"error": "unknown task"}
+
+    if conf.DISABLE_RATE_LIMITS:
+        panel.logger.error("Rate limit attempt, but rate limits disabled.")
+        return {"error": "rate limits disabled"}
+
+    panel.listener.ready_queue.refresh()
+
+    if not rate_limit:
+        panel.logger.warn("Disabled rate limits for tasks of type %s" % (
+                            task_name, ))
+        return {"ok": "rate limit disabled successfully"}
+
+    panel.logger.warn("New rate limit for tasks of type %s: %s." % (
+                task_name, rate_limit))
+    return {"ok": "new rate limit set successfully"}
+
+
+@Panel.register
+def dump_schedule(panel, **kwargs):
+    schedule = panel.listener.eta_schedule
+    if not schedule.queue:
+        panel.logger.info("--Empty schedule--")
+        return []
+
+    formatitem = lambda (i, item): "%s. %s pri%s %r" % (i,
+            datetime.fromtimestamp(item["eta"]),
+            item["priority"],
+            item["item"])
+    info = map(formatitem, enumerate(schedule.info()))
+    panel.logger.info("* Dump of current schedule:\n%s" % (
+                            "\n".join(info, )))
+    return info
+
+
+@Panel.register
+def dump_reserved(panel, **kwargs):
+    ready_queue = panel.listener.ready_queue
+    reserved = ready_queue.items
+    if not reserved:
+        panel.logger.info("--Empty queue--")
+        return []
+    info = map(repr, reserved)
+    panel.logger.info("* Dump of currently reserved tasks:\n%s" % (
+                            "\n".join(info, )))
+    return info
+
+
+@Panel.register
+def dump_tasks(panel, **kwargs):
+
+    def _extract_info(task):
+        fields = dict((field, str(getattr(task, field, None)))
+                        for field in TASK_INFO_FIELDS
+                            if getattr(task, field, None) is not None)
+        info = map("=".join, fields.items())
+        if not info:
+            return task.name
+        return "%s [%s]" % (task.name, " ".join(info))
+
+    info = map(_extract_info, (tasks[task]
+                                        for task in sorted(tasks.keys())))
+    panel.logger.warn("* Dump of currently registered tasks:\n%s" % (
+                "\n".join(info)))
+
+    return info
+
+
+@Panel.register
+def ping(panel, **kwargs):
+    return "pong"
+
+
+@Panel.register
+def shutdown(panel, **kwargs):
+    panel.logger.critical("Got shutdown from remote.")
+    raise SystemExit

+ 21 - 0
celery/worker/control/registry.py

@@ -0,0 +1,21 @@
+from UserDict import UserDict
+
+
+class Panel(UserDict):
+    data = dict() # Global registry.
+
+    def __init__(self, logger, listener, hostname=None):
+        self.logger = logger
+        self.hostname = hostname
+        self.listener = listener
+
+    @classmethod
+    def register(cls, method, name=None):
+        cls.data[name or method.__name__] = method
+
+    @classmethod
+    def unregister(cls, name_or_method):
+        name = name_or_method
+        if not isinstance(name_or_method, basestring):
+            name = name_or_method.__name__
+        cls.data.pop(name)
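
Control commands register themselves in the shared ``Panel.data`` mapping; a
hypothetical extra command would follow the same pattern as the builtins::

    from celery.worker.control.registry import Panel

    @Panel.register
    def reserved_count(panel, **kwargs):
        # panel.listener is the CarrotListener the dispatcher was created
        # with, so the command can inspect its ready_queue.
        return panel.listener.ready_queue.qsize()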

+ 43 - 10
celery/worker/job.py

@@ -8,12 +8,12 @@ import time
 import socket
 import socket
 import warnings
 import warnings
 
 
-from django.core.mail import mail_admins
 
 
 from celery import conf
 from celery import conf
 from celery import platform
 from celery import platform
 from celery.log import get_default_logger
 from celery.log import get_default_logger
 from celery.utils import noop, fun_takes_kwargs
 from celery.utils import noop, fun_takes_kwargs
+from celery.utils.mail import mail_admins
 from celery.loaders import current_loader
 from celery.loaders import current_loader
 from celery.execute.trace import TaskTrace
 from celery.execute.trace import TaskTrace
 from celery.registry import tasks
 from celery.registry import tasks
@@ -168,8 +168,12 @@ class TaskWrapper(object):
 
 
     .. attribute executed
     .. attribute executed
 
 
-    Set if the task has been executed. A task should only be executed
-    once.
+        Set to ``True`` if the task has been executed.
+        A task should only be executed once.
+
+    .. attribute acknowledged
+
+        Set to ``True`` if the task has been acknowledged.
 
 
     """
     """
     success_msg = "Task %(name)s[%(id)s] processed: %(return_value)s"
     success_msg = "Task %(name)s[%(id)s] processed: %(return_value)s"
@@ -181,6 +185,7 @@ class TaskWrapper(object):
     """
     """
     fail_email_body = TASK_FAIL_EMAIL_BODY
     fail_email_body = TASK_FAIL_EMAIL_BODY
     executed = False
     executed = False
+    acknowledged = False
     time_start = None
     time_start = None
 
 
     def __init__(self, task_name, task_id, args, kwargs,
     def __init__(self, task_name, task_id, args, kwargs,
@@ -289,10 +294,13 @@ class TaskWrapper(object):
         self._set_executed_bit()
         self._set_executed_bit()
 
 
         # acknowledge task as being processed.
         # acknowledge task as being processed.
-        self.on_ack()
+        if not self.task.acks_late:
+            self.acknowledge()
 
 
         tracer = WorkerTaskTrace(*self._get_tracer_args(loglevel, logfile))
         tracer = WorkerTaskTrace(*self._get_tracer_args(loglevel, logfile))
-        return tracer.execute()
+        retval = tracer.execute()
+        self.acknowledge()
+        return retval
 
 
     def send_event(self, type, **fields):
     def send_event(self, type, **fields):
         if self.eventer:
         if self.eventer:
@@ -313,22 +321,44 @@ class TaskWrapper(object):
         # Make sure task has not already been executed.
         # Make sure task has not already been executed.
         self._set_executed_bit()
         self._set_executed_bit()
 
 
-        self.send_event("task-accepted", uuid=self.task_id)
-
         args = self._get_tracer_args(loglevel, logfile)
         args = self._get_tracer_args(loglevel, logfile)
         self.time_start = time.time()
         self.time_start = time.time()
         result = pool.apply_async(execute_and_trace, args=args,
         result = pool.apply_async(execute_and_trace, args=args,
+                    accept_callback=self.on_accepted,
+                    timeout_callback=self.on_timeout,
                     callbacks=[self.on_success], errbacks=[self.on_failure])
                     callbacks=[self.on_success], errbacks=[self.on_failure])
-        self.on_ack()
         return result
         return result
 
 
+    def on_accepted(self):
+        if not self.task.acks_late:
+            self.acknowledge()
+        self.send_event("task-accepted", uuid=self.task_id)
+        self.logger.debug("Task accepted: %s[%s]" % (
+            self.task_name, self.task_id))
+
+    def on_timeout(self, soft):
+        if soft:
+            self.logger.warning("Soft time limit exceeded for %s[%s]" % (
+                self.task_name, self.task_id))
+        else:
+            self.logger.error("Hard time limit exceeded for %s[%s]" % (
+                self.task_name, self.task_id))
+
+    def acknowledge(self):
+        if not self.acknowledged:
+            self.on_ack()
+            self.acknowledged = True
+
     def on_success(self, ret_value):
     def on_success(self, ret_value):
         """The handler used if the task was successfully processed (
         """The handler used if the task was successfully processed (
         without raising an exception)."""
         without raising an exception)."""
 
 
+        if self.task.acks_late:
+            self.acknowledge()
+
         runtime = time.time() - self.time_start
         runtime = time.time() - self.time_start
         self.send_event("task-succeeded", uuid=self.task_id,
         self.send_event("task-succeeded", uuid=self.task_id,
-                        result=ret_value, runtime=runtime)
+                        result=repr(ret_value), runtime=runtime)
 
 
         msg = self.success_msg.strip() % {
         msg = self.success_msg.strip() % {
                 "id": self.task_id,
                 "id": self.task_id,
@@ -339,8 +369,11 @@ class TaskWrapper(object):
     def on_failure(self, exc_info):
     def on_failure(self, exc_info):
         """The handler used if the task raised an exception."""
         """The handler used if the task raised an exception."""
 
 
+        if self.task.acks_late:
+            self.acknowledge()
+
         self.send_event("task-failed", uuid=self.task_id,
         self.send_event("task-failed", uuid=self.task_id,
-                                       exception=exc_info.exception,
+                                       exception=repr(exc_info.exception),
                                        traceback=exc_info.traceback)
                                        traceback=exc_info.traceback)
 
 
         context = {
         context = {

+ 8 - 10
celery/worker/listener.py

@@ -1,4 +1,5 @@
 from __future__ import generators
 from __future__ import generators
+
 import socket
 import socket
 import warnings
 import warnings
 from datetime import datetime
 from datetime import datetime
@@ -56,7 +57,8 @@ class CarrotListener(object):
         self.logger = logger
         self.logger = logger
         self.hostname = hostname or socket.gethostname()
         self.hostname = hostname or socket.gethostname()
         self.control_dispatch = ControlDispatch(logger=logger,
         self.control_dispatch = ControlDispatch(logger=logger,
-                                                hostname=self.hostname)
+                                                hostname=self.hostname,
+                                                listener=self)
         self.prefetch_count = SharedCounter(initial_prefetch_count)
         self.prefetch_count = SharedCounter(initial_prefetch_count)
         self.event_dispatcher = None
         self.event_dispatcher = None
         self.heart = None
         self.heart = None
@@ -111,8 +113,8 @@ class CarrotListener(object):
             return task.on_ack()
             return task.on_ack()
 
 
         self.event_dispatcher.send("task-received", uuid=task.task_id,
         self.event_dispatcher.send("task-received", uuid=task.task_id,
-                name=task.task_name, args=task.args, kwargs=task.kwargs,
-                retries=task.retries, eta=eta)
+                name=task.task_name, args=repr(task.args),
+                kwargs=repr(task.kwargs), retries=task.retries, eta=eta)
 
 
         if eta:
         if eta:
             if not isinstance(eta, datetime):
             if not isinstance(eta, datetime):
@@ -201,12 +203,8 @@ class CarrotListener(object):
                 "CarrotListener: Re-establishing connection to the broker...")
                 "CarrotListener: Re-establishing connection to the broker...")
         self.stop_consumers()
         self.stop_consumers()
 
 
-        try:
-            # TaskBucket supports clear directly.
-            self.ready_queue.clear()
-        except AttributeError:
-            # Use the underlying deque of regular Queue
-            self.ready_queue.queue.clear()
+        # Clear internal queues.
+        self.ready_queue.clear()
         self.eta_schedule.clear()
         self.eta_schedule.clear()
 
 
         self.connection = self._open_connection()
         self.connection = self._open_connection()
@@ -225,7 +223,7 @@ class CarrotListener(object):
 
 
     def _mainloop(self, **kwargs):
     def _mainloop(self, **kwargs):
         while 1:
         while 1:
-            yield self.connection.connection.drain_events()
+            yield self.connection.drain_events()
 
 
     def _detect_wait_method(self):
     def _detect_wait_method(self):
         if hasattr(self.connection.connection, "drain_events"):
         if hasattr(self.connection.connection, "drain_events"):

+ 20 - 19
celery/worker/pool.py

@@ -3,7 +3,7 @@
 Process Pools.
 Process Pools.
 
 
 """
 """
-from billiard.pool import DynamicPool
+from billiard.pool import Pool, RUN
 from billiard.utils.functional import curry
 from billiard.utils.functional import curry
 
 
 from celery import log
 from celery import log
@@ -27,10 +27,14 @@ class TaskPool(object):
 
 
     """
     """
 
 
-    def __init__(self, limit, logger=None, initializer=None):
+    def __init__(self, limit, logger=None, initializer=None,
+            maxtasksperchild=None, timeout=None, soft_timeout=None):
         self.limit = limit
         self.limit = limit
         self.logger = logger or log.get_default_logger()
         self.logger = logger or log.get_default_logger()
         self.initializer = initializer
         self.initializer = initializer
+        self.maxtasksperchild = maxtasksperchild
+        self.timeout = timeout
+        self.soft_timeout = soft_timeout
         self._pool = None
         self._pool = None
 
 
     def start(self):
     def start(self):
@@ -39,25 +43,22 @@ class TaskPool(object):
         Will pre-fork all workers so they're ready to accept tasks.
         Will pre-fork all workers so they're ready to accept tasks.
 
 
         """
         """
-        self._pool = DynamicPool(processes=self.limit,
-                                 initializer=self.initializer)
+        self._pool = Pool(processes=self.limit,
+                          initializer=self.initializer,
+                          timeout=self.timeout,
+                          soft_timeout=self.soft_timeout,
+                          maxtasksperchild=self.maxtasksperchild)
 
 
     def stop(self):
     def stop(self):
         """Terminate the pool."""
         """Terminate the pool."""
-        self._pool.close()
-        self._pool.join()
-        self._pool = None
-
-    def replace_dead_workers(self):
-        self.logger.debug("TaskPool: Finding dead pool processes...")
-        dead_count = self._pool.replace_dead_workers()
-        if dead_count: # pragma: no cover
-            self.logger.info(
-                "TaskPool: Replaced %d dead pool workers..." % (
-                    dead_count))
+        if self._pool is not None and self._pool._state == RUN:
+            self._pool.close()
+            self._pool.join()
+            self._pool = None
 
 
     def apply_async(self, target, args=None, kwargs=None, callbacks=None,
     def apply_async(self, target, args=None, kwargs=None, callbacks=None,
-            errbacks=None, **compat):
+            errbacks=None, accept_callback=None, timeout_callback=None,
+            **compat):
         """Equivalent of the :func:``apply`` built-in function.
         """Equivalent of the :func:``apply`` built-in function.
 
 
         All ``callbacks`` and ``errbacks`` should complete immediately since
         All ``callbacks`` and ``errbacks`` should complete immediately since
@@ -74,10 +75,10 @@ class TaskPool(object):
         self.logger.debug("TaskPool: Apply %s (args:%s kwargs:%s)" % (
         self.logger.debug("TaskPool: Apply %s (args:%s kwargs:%s)" % (
             target, args, kwargs))
             target, args, kwargs))
 
 
-        self.replace_dead_workers()
-
         return self._pool.apply_async(target, args, kwargs,
         return self._pool.apply_async(target, args, kwargs,
-                                        callback=on_ready)
+                                      callback=on_ready,
+                                      accept_callback=accept_callback,
+                                      timeout_callback=timeout_callback)
 
 
     def on_ready(self, callbacks, errbacks, ret_value):
     def on_ready(self, callbacks, errbacks, ret_value):
         """What to do when a worker task is ready and its return value has
         """What to do when a worker task is ready and its return value has

+ 5 - 0
celery/worker/scheduler.py

@@ -1,4 +1,5 @@
 from __future__ import generators
 from __future__ import generators
+
 import time
 import time
 import heapq
 import heapq
 
 
@@ -81,6 +82,10 @@ class Scheduler(object):
     def clear(self):
     def clear(self):
         self._queue = []
         self._queue = []
 
 
+    def info(self):
+        return ({"eta": eta, "priority": priority, "item": item}
+                    for eta, priority, item, _ in self.queue)
+
     @property
     @property
     def queue(self):
     def queue(self):
         events = list(self._queue)
         events = list(self._queue)

+ 5 - 27
contrib/debian/init.d/celeryd

@@ -9,16 +9,8 @@
 # Short-Description:	celery task worker daemon
 # Short-Description:	celery task worker daemon
 ### END INIT INFO
 ### END INIT INFO
 
 
-# To use this with Django set your DJANGO_PROJECT_DIR in /etc/default/celeryd:
-#
-#   echo "DJANGO_PROJECT_DIR=/opt/Myapp" > /etc/default/celeryd
-#
-# The django project dir is the directory that contains settings and
-# manage.py.
-
 set -e
 set -e
 
 
-DJANGO_SETTINGS_MODULE=settings
 CELERYD_PID_FILE="/var/run/celeryd.pid"
 CELERYD_PID_FILE="/var/run/celeryd.pid"
 CELERYD_LOG_FILE="/var/log/celeryd.log"
 CELERYD_LOG_FILE="/var/log/celeryd.log"
 CELERYD_LOG_LEVEL="INFO"
 CELERYD_LOG_LEVEL="INFO"
@@ -30,22 +22,10 @@ if test -f /etc/default/celeryd; then
     . /etc/default/celeryd
     . /etc/default/celeryd
 fi
 fi
 
 
-export DJANGO_SETTINGS_MODULE
-export DJANGO_PROJECT_DIR
-
-if [ -z "$CELERYD" ]; then
-    if [ ! -z "$DJANGO_PROJECT_DIR" ]; then
-        CELERYD="$DJANGO_PROJECT_DIR/manage.py"
-        CELERYD_OPTS="celeryd $CELERYD_OPTS"
-    else
-        CELERYD=$DEFAULT_CELERYD
-    fi
-fi
+export CELERY_LOADER
 
 
 . /lib/lsb/init-functions
 . /lib/lsb/init-functions
 
 
-cd $DJANGO_PROJECT_DIR
-
 CELERYD_OPTS="$CELERYD_OPTS -f $CELERYD_LOG_FILE -l $CELERYD_LOG_LEVEL"
 CELERYD_OPTS="$CELERYD_OPTS -f $CELERYD_LOG_FILE -l $CELERYD_LOG_LEVEL"
 
 
 if [ -n "$2" ]; then
 if [ -n "$2" ]; then
@@ -83,9 +63,7 @@ check_dev_null() {
 export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"
 export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"
 if [ ! -z "$VIRTUALENV" ]; then
 if [ ! -z "$VIRTUALENV" ]; then
     export PATH="$VIRTUALENV/bin:$PATH"
     export PATH="$VIRTUALENV/bin:$PATH"
-    if [ -z "$DJANGO_PROJECT_DIR" ]; then
-    	CELERYD="$VIRTUALENV/bin/$CELERYD"
-    fi
+    CELERYD="$VIRTUALENV/bin/$CELERYD"
 fi
 fi
 
 
 
 
@@ -102,7 +80,7 @@ case "$1" in
   start)
   start)
     check_dev_null
     check_dev_null
     log_daemon_msg "Starting celery task worker server" "celeryd"
     log_daemon_msg "Starting celery task worker server" "celeryd"
-    if start-stop-daemon --start $DAEMON_OPTS --quiet --oknodo --background --chdir $DJANGO_PROJECT_DIR --make-pidfile --pidfile $CELERYD_PID_FILE --exec $CELERYD -- $CELERYD_OPTS; then
+    if start-stop-daemon --start $DAEMON_OPTS --quiet --oknodo --background --make-pidfile --pidfile $CELERYD_PID_FILE --exec $CELERYD -- $CELERYD_OPTS; then
         log_end_msg 0
         log_end_msg 0
     else
     else
         log_end_msg 1
         log_end_msg 1
@@ -124,7 +102,7 @@ case "$1" in
     log_daemon_msg "Restarting celery task worker server" "celeryd"
     log_daemon_msg "Restarting celery task worker server" "celeryd"
     start-stop-daemon --stop --quiet --oknodo --retry 30 --pidfile $CELERYD_PID_FILE
     start-stop-daemon --stop --quiet --oknodo --retry 30 --pidfile $CELERYD_PID_FILE
     check_dev_null log_end_msg
     check_dev_null log_end_msg
-    if start-stop-daemon --start $DAEMON_OPTS --quiet --oknodo --background --chdir $DJANGO_PROJECT_DIR --make-pidfile --pidfile $CELERYD_PID_FILE --exec $CELERYD -- $CELERYD_OPTS; then log_end_msg 0
+    if start-stop-daemon --start $DAEMON_OPTS --quiet --oknodo --background --make-pidfile --pidfile $CELERYD_PID_FILE --exec $CELERYD -- $CELERYD_OPTS; then log_end_msg 0
     else
     else
         log_end_msg 1
         log_end_msg 1
     fi
     fi
@@ -140,7 +118,7 @@ case "$1" in
         0)
         0)
 		# old daemon stopped
 		# old daemon stopped
 		check_dev_null log_end_msg
 		check_dev_null log_end_msg
-		if start-stop-daemon --start $DAEMON_OPTS --quiet --oknodo --background --make-pidfile --chdir $DJANGO_PROJECT_DIR --pidfile $CELERYD_PID_FILE --exec $CELERYD -- $CELERYD_OPTS; then
+		if start-stop-daemon --start $DAEMON_OPTS --quiet --oknodo --background --make-pidfile --pidfile $CELERYD_PID_FILE --exec $CELERYD -- $CELERYD_OPTS; then
 		    log_end_msg 0
 		    log_end_msg 0
 		else
 		else
 		    log_end_msg 1
 		    log_end_msg 1

+ 183 - 0
contrib/debian/init.d/celeryd-multi

@@ -0,0 +1,183 @@
+#!/bin/bash
+
+### BEGIN INIT INFO
+# Provides:		celeryd
+# Required-Start:	
+# Required-Stop:	
+# Default-Start:	2 3 4 5
+# Default-Stop:		1
+# Short-Description:	celery task worker daemon
+### END INIT INFO
+
+# OS X Debug replacements to lsb-functions.
+#log_action_msg () {
+#    echo $*
+#}
+#log_daemon_msg () {
+#    echo $*
+#}
+#log_end_msg () {
+#    if [ $1 -eq 0 ]; then
+#        echo "ok"
+#    else
+#        echo "failed!"
+#    fi
+#}
+
+set -e
+
+CELERYD_PID_FILE="/var/run/celeryd-%n.pid"
+CELERYD_LOG_FILE="/var/log/celeryd-%n.log"
+CELERYD_LOG_LEVEL="INFO"
+CELERYD_NUM_WORKERS=2
+DEFAULT_CELERYD="celeryd"
+
+# /etc/init.d/celeryd-multi start and stop the celery task worker daemon.
+
+if test -f /etc/default/celeryd; then
+    . /etc/default/celeryd
+fi
+
+export CELERY_LOADER
+
+. /lib/lsb/init-functions
+
+CELERYD_OPTS="$CELERYD_OPTS -f $CELERYD_LOG_FILE -l $CELERYD_LOG_LEVEL"
+
+if [ -n "$2" ]; then
+    CELERYD_OPTS="$CELERYD_OPTS $2"
+fi
+
+# Extra start-stop-daemon options, like user/group.
+if [ -n "$CELERYD_USER" ]; then
+    DAEMON_OPTS="$DAEMON_OPTS --chuid $CELERYD_USER"
+fi
+if [ -n "$CELERYD_GROUP" ]; then
+    DAEMON_OPTS="$DAEMON_OPTS --group $CELERYD_GROUP"
+fi
+
+
+# Are we running from init?
+run_by_init() {
+    ([ "$previous" ] && [ "$runlevel" ]) || [ "$runlevel" = S ]
+}
+
+
+check_dev_null() {
+    if [ ! -c /dev/null ]; then
+	if [ "$1" = log_end_msg ]; then
+	    log_end_msg 1 || true
+	fi
+	if ! run_by_init; then
+	    log_action_msg "/dev/null is not a character device!"
+	fi
+	exit 1
+    fi
+}
+
+
+export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"
+if [ ! -z "$VIRTUALENV" ]; then
+    export PATH="$VIRTUALENV/bin:$PATH"
+    CELERYD="$VIRTUALENV/bin/$CELERYD"
+fi
+
+
+if [ -f "$CELERYD" -a ! -x "$CELERYD" ]; then
+    echo "ERROR: $CELERYD is not executable."
+    echo "Please make it executable by doing: chmod +x '$CELERYD'"
+
+    echo "celeryd is disabled"
+    exit
+fi
+
+WORKERS=$CELERYD_NUM_WORKERS
+
+stop_worker () {
+    cmd="start-stop-daemon  --stop --quiet $* --pidfile $CELERYD_PID_FILE"
+    w=`celeryd-multi start $WORKERS --cmd="start-stop-daemon --stop \
+                                        --quiet $* \
+                                        --pidfile $CELERYD_PID_FILE"`
+    for wname in `celeryd-multi names $WORKERS $CELERYD_OPTS`; do
+        log_daemon_msg "Stopping celery task worker" "$wname"
+        stopcmd=`celeryd-multi get "$wname" $WORKERS --cmd="$cmd" $CELERYD_OPTS`
+        if `$stopcmd`; then
+            log_end_msg 0
+        else
+            log_end_msg 1
+        fi
+    done
+}
+
+start_worker () {
+    check_dev_null
+    cmd="start-stop-daemon --start $DAEMON_OPTS \
+                                    --quiet --oknodo --background \
+                                    --make-pidfile $* \
+                                    --pidfile $CELERYD_PID_FILE \
+                                    --exec $CELERYD --"
+    for wname in `celeryd-multi names $WORKERS $CELERYD_OPTS`; do
+        log_daemon_msg "Starting celery task worker" "$wname"
+        startcmd=`celeryd-multi get "$wname" $WORKERS --cmd="$cmd" $CELERYD_OPTS`
+        if `$startcmd`; then
+            log_end_msg 0
+        else
+            log_end_msg 1
+        fi
+    done
+}
+
+case "$1" in
+  start)
+    start_worker
+    ;;
+  stop)
+    stop_worker --oknodo
+    ;;
+
+  reload|force-reload)
+    echo "Use start+stop"
+    ;;
+
+  restart)
+    stop_worker --retry 30 --oknodo
+    start_worker
+    ;;
+
+  try-restart)
+    log_daemon_msg "Restarting celery task worker server" "celeryd"
+    set +e
+    stop_worker --retry 30
+    RET="$?"
+    set -e
+    case $RET in
+        0)
+		# old daemon stopped
+		check_dev_null log_end_msg
+        start_worker
+		;;
+	    1)
+		# daemon not running
+		log_progress_msg "(not running)"
+		log_end_msg 0
+		;;
+	    *)
+		# failed to stop
+		log_progress_msg "(failed to stop)"
+		log_end_msg 1
+		;;
+	esac
+	;;
+
+  status)
+    pidfiles=`celeryd-multi expand "$CELERYD_PID_FILE" $WORKERS $DAEMON_OPTS`
+    for pidfile in $pidfiles; do
+        status_of_proc -p $pidfile $CELERYD celeryd && exit 0 || exit $?
+    done
+	;;
+  *)
+	log_action_msg "Usage: /etc/init.d/celeryd-multi {start|stop|force-reload|restart|try-restart|status}"
+	exit 1
+esac
+
+exit 0

+ 1 - 1
contrib/release/doc4allmods

@@ -2,7 +2,7 @@
 
 
 PACKAGE="$1"
 PACKAGE="$1"
 SKIP_PACKAGES="$PACKAGE tests management urls"
 SKIP_PACKAGES="$PACKAGE tests management urls"
-SKIP_FILES="celery.bin.rst celery.task.rest.rst celery.contrib.rst
+SKIP_FILES="celery.bin.rst celery.contrib.rst
             celery.contrib.batches.rst"
             celery.contrib.batches.rst"
 
 
 modules=$(find "$PACKAGE" -name "*.py")
 modules=$(find "$PACKAGE" -name "*.py")

+ 4 - 3
contrib/requirements/default.txt

@@ -1,6 +1,7 @@
-django
+mailer
 python-dateutil
 python-dateutil
+sqlalchemy
 anyjson
 anyjson
-carrot>=0.10.3
+carrot>=0.10.4
 django-picklefield
 django-picklefield
-billiard>=0.2.1
+billiard>=0.3.0

+ 1 - 2
contrib/requirements/test.txt

@@ -2,9 +2,8 @@ unittest2>=0.4.0
 simplejson
 simplejson
 nose
 nose
 nose-cover3
 nose-cover3
-django-nose
 coverage>=3.0
 coverage>=3.0
+mock>=0.6.0
 pytyrant
 pytyrant
 redis
 redis
 pymongo
 pymongo
-git+git://github.com/exogen/nose-achievements.git

+ 3 - 1
contrib/supervisord/celerybeat.conf

@@ -3,7 +3,9 @@
 ; ============================
 ; ============================
 
 
 ; NOTE: If you're using Django, you shouldn't use this file.
 ; NOTE: If you're using Django, you shouldn't use this file.
-; Use django/celerybeat.conf instead!
+; Use
+; http://github.com/ask/django-celery/tree/master/contrib/supervisord/celerybeat.conf
+; instead!
 
 
 [program:celerybeat]
 [program:celerybeat]
 command=celerybeat --schedule /var/lib/celery/celerybeat-schedule --loglevel=INFO
 command=celerybeat --schedule /var/lib/celery/celerybeat-schedule --loglevel=INFO

+ 7 - 1
contrib/supervisord/celeryd.conf

@@ -3,7 +3,9 @@
 ; ============================
 ; ============================
 
 
 ; NOTE: If you're using Django, you shouldn't use this file.
 ; NOTE: If you're using Django, you shouldn't use this file.
-; Use django/celeryd.conf instead!
+; Use
+; http://github.com/ask/django-celery/tree/master/contrib/supervisord/celeryd.conf
+; instead!
 
 
 [program:celery]
 [program:celery]
 command=celeryd --loglevel=INFO
 command=celeryd --loglevel=INFO
@@ -20,6 +22,10 @@ autostart=true
 autorestart=true
 autorestart=true
 startsecs=10
 startsecs=10
 
 
+; Need to wait for currently executing tasks to finish at shutdown.
+; Increase this if you have very long running tasks.
+stopwaitsecs = 600
+
 ; if rabbitmq is supervised, set its priority higher
 ; if rabbitmq is supervised, set its priority higher
 ; so it starts first
 ; so it starts first
 priority=998
 priority=998

+ 0 - 18
contrib/supervisord/django/celerybeat.conf

@@ -1,18 +0,0 @@
-; ==========================================
-;  celerybeat supervisor example for Django
-; ==========================================
-
-[program:celerybeat]
-command=/path/to/project/manage.py celerybeat --schedule=/var/lib/celery/celerybeat-schedule --loglevel=INFO
-directory=/path/to/project
-user=nobody
-numprocs=1
-stdout_logfile=/var/log/celerybeat.log
-stderr_logfile=/var/log/celerybeat.log
-autostart=true
-autorestart=true
-startsecs=10
-
-; if rabbitmq is supervised, set its priority higher
-; so it starts first
-priority=999

+ 0 - 18
contrib/supervisord/django/celeryd.conf

@@ -1,18 +0,0 @@
-; =======================================
-;  celeryd supervisor example for Django
-; =======================================
-
-[program:celery]
-command=/path/to/project/manage.py celeryd --loglevel=INFO
-directory=/path/to/project
-user=nobody
-numprocs=1
-stdout_logfile=/var/log/celeryd.log
-stderr_logfile=/var/log/celeryd.log
-autostart=true
-autorestart=true
-startsecs=10
-
-; if rabbitmq is supervised, set its priority higher
-; so it starts first
-priority=998

+ 0 - 112
docs/_ext/djangodocs.py

@@ -1,112 +0,0 @@
-"""
-Sphinx plugins for Django documentation.
-"""
-
-import docutils.nodes
-import docutils.transforms
-import sphinx
-import sphinx.addnodes
-import sphinx.directives
-import sphinx.environment
-import sphinx.roles
-from docutils import nodes
-
-
-def setup(app):
-    app.add_crossref_type(
-        directivename = "setting",
-        rolename = "setting",
-        indextemplate = "pair: %s; setting",
-    )
-    app.add_crossref_type(
-        directivename = "templatetag",
-        rolename = "ttag",
-        indextemplate = "pair: %s; template tag",
-    )
-    app.add_crossref_type(
-        directivename = "templatefilter",
-        rolename = "tfilter",
-        indextemplate = "pair: %s; template filter",
-    )
-    app.add_crossref_type(
-        directivename = "fieldlookup",
-        rolename = "lookup",
-        indextemplate = "pair: %s, field lookup type",
-    )
-    app.add_description_unit(
-        directivename = "django-admin",
-        rolename = "djadmin",
-        indextemplate = "pair: %s; django-admin command",
-        parse_node = parse_django_admin_node,
-    )
-    app.add_description_unit(
-        directivename = "django-admin-option",
-        rolename = "djadminopt",
-        indextemplate = "pair: %s; django-admin command-line option",
-        parse_node = lambda env, sig, signode: \
-                sphinx.directives.parse_option_desc(signode, sig),
-    )
-    app.add_config_value('django_next_version', '0.0', True)
-    app.add_directive('versionadded', parse_version_directive, 1, (1, 1, 1))
-    app.add_directive('versionchanged', parse_version_directive, 1, (1, 1, 1))
-    app.add_transform(SuppressBlockquotes)
-
-
-def parse_version_directive(name, arguments, options, content, lineno,
-                      content_offset, block_text, state, state_machine):
-    env = state.document.settings.env
-    is_nextversion = env.config.django_next_version == arguments[0]
-    ret = []
-    node = sphinx.addnodes.versionmodified()
-    ret.append(node)
-    if not is_nextversion:
-        if len(arguments) == 1:
-            linktext = 'Please, see the release notes <releases-%s>' % (
-                    arguments[0])
-            xrefs = sphinx.roles.xfileref_role('ref', linktext, linktext,
-                                               lineno, state)
-            node.extend(xrefs[0])
-        node['version'] = arguments[0]
-    else:
-        node['version'] = "Development version"
-    node['type'] = name
-    if len(arguments) == 2:
-        inodes, messages = state.inline_text(arguments[1], lineno+1)
-        node.extend(inodes)
-        if content:
-            state.nested_parse(content, content_offset, node)
-        ret = ret + messages
-    env.note_versionchange(node['type'], node['version'], node, lineno)
-    return ret
-
-
-class SuppressBlockquotes(docutils.transforms.Transform):
-    """
-    Remove the default blockquotes that encase indented list, tables, etc.
-    """
-    default_priority = 300
-
-    suppress_blockquote_child_nodes = (
-        docutils.nodes.bullet_list,
-        docutils.nodes.enumerated_list,
-        docutils.nodes.definition_list,
-        docutils.nodes.literal_block,
-        docutils.nodes.doctest_block,
-        docutils.nodes.line_block,
-        docutils.nodes.table,
-    )
-
-    def apply(self):
-        for node in self.document.traverse(docutils.nodes.block_quote):
-            if len(node.children) == 1 and \
-                    isinstance(node.children[0],
-                               self.suppress_blockquote_child_nodes):
-                node.replace_self(node.children[0])
-
-
-def parse_django_admin_node(env, sig, signode):
-    command = sig.split(' ')[0]
-    env._django_curr_admin_command = command
-    title = "django-admin.py %s" % sig
-    signode += sphinx.addnodes.desc_name(title, title)
-    return sig

+ 25 - 0
docs/_theme/ADCTheme/LICENSE

@@ -0,0 +1,25 @@
+Copyright (c) 2009, Corey Oordt
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification, 
+are permitted provided that the following conditions are met:
+
+    * Redistributions of source code must retain the above copyright notice, 
+      this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright notice, 
+      this list of conditions and the following disclaimer in the documentation 
+      and/or other materials provided with the distribution.
+    * Neither the name of Corey Oordt nor the names of its contributors 
+      may be used to endorse or promote products derived from this software 
+      without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Some files were not shown because too many files changed in this diff