
Merge branch 'master' into kombu2

Conflicts:
	celery/app/amqp.py
	celery/messaging.py
	celery/task/base.py
	celery/utils/__init__.py
	celery/worker/__init__.py
	docs/configuration.rst
	docs/internals/worker.rst
	docs/reference/celery.conf.rst
	docs/tutorials/clickcounter.rst
Ask Solem · 14 years ago
commit 0606fda57c
93 changed files with 1790 additions and 1523 deletions
  1. Changelog (+161 -161)
  2. FAQ (+30 -24)
  3. INSTALL (+7 -7)
  4. README.rst (+10 -10)
  5. celery/app/__init__.py (+47 -54)
  6. celery/app/amqp.py (+9 -1)
  7. celery/app/base.py (+78 -113)
  8. celery/app/defaults.py (+1 -0)
  9. celery/apps/beat.py (+1 -1)
  10. celery/apps/worker.py (+12 -2)
  11. celery/backends/base.py (+2 -2)
  12. celery/backends/pyredis.py (+1 -1)
  13. celery/backends/tyrant.py (+1 -1)
  14. celery/beat.py (+1 -1)
  15. celery/bin/camqadm.py (+10 -10)
  16. celery/bin/celerybeat.py (+4 -4)
  17. celery/bin/celeryd.py (+22 -9)
  18. celery/concurrency/processes/__init__.py (+8 -2)
  19. celery/concurrency/processes/pool.py (+42 -4)
  20. celery/contrib/abortable.py (+4 -4)
  21. celery/datastructures.py (+102 -34)
  22. celery/db/a805d4bd.py (+4 -4)
  23. celery/events/__init__.py (+3 -3)
  24. celery/events/state.py (+3 -3)
  25. celery/loaders/base.py (+11 -1)
  26. celery/loaders/default.py (+1 -1)
  27. celery/log.py (+8 -8)
  28. celery/messaging.py (+3 -3)
  29. celery/registry.py (+2 -2)
  30. celery/result.py (+50 -67)
  31. celery/routes.py (+4 -4)
  32. celery/schedules.py (+5 -5)
  33. celery/serialization.py (+15 -17)
  34. celery/task/base.py (+203 -242)
  35. celery/task/builtins.py (+2 -2)
  36. celery/task/control.py (+1 -1)
  37. celery/task/http.py (+3 -3)
  38. celery/task/sets.py (+1 -1)
  39. celery/tests/test_buckets.py (+3 -3)
  40. celery/tests/test_worker_job.py (+2 -1)
  41. celery/tests/utils.py (+3 -3)
  42. celery/utils/__init__.py (+6 -7)
  43. celery/utils/dispatch/saferef.py (+4 -4)
  44. celery/utils/dispatch/signal.py (+6 -6)
  45. celery/utils/functional.py (+2 -2)
  46. celery/utils/timeutils.py (+3 -3)
  47. celery/worker/__init__.py (+52 -56)
  48. celery/worker/buckets.py (+33 -39)
  49. celery/worker/consumer.py (+7 -4)
  50. celery/worker/control/builtins.py (+11 -0)
  51. celery/worker/controllers.py (+75 -15)
  52. celery/worker/heartbeat.py (+6 -7)
  53. celery/worker/job.py (+165 -151)
  54. celery/worker/state.py (+20 -20)
  55. contrib/debian/init.d/celerybeat (+2 -2)
  56. contrib/debian/init.d/celeryd (+2 -2)
  57. contrib/debian/init.d/celeryevcam (+2 -2)
  58. contrib/generic-init.d/celeryd (+3 -3)
  59. contrib/requirements/README.rst (+4 -4)
  60. docs/configuration.rst (+45 -45)
  61. docs/cookbook/daemonizing.rst (+13 -13)
  62. docs/cookbook/tasks.rst (+2 -2)
  63. docs/getting-started/broker-installation.rst (+8 -8)
  64. docs/getting-started/first-steps-with-celery.rst (+2 -2)
  65. docs/homepage/index.html (+7 -7)
  66. docs/includes/installation.txt (+6 -6)
  67. docs/includes/introduction.txt (+9 -9)
  68. docs/includes/resources.txt (+3 -3)
  69. docs/internals/app-overview.rst (+5 -5)
  70. docs/internals/deprecation.rst (+8 -8)
  71. docs/internals/protocol.rst (+10 -10)
  72. docs/internals/worker.rst (+6 -6)
  73. docs/links.rst (+1 -1)
  74. docs/reference/celery.app.rst (+57 -0)
  75. docs/reference/celery.conf.rst (+37 -38)
  76. docs/reference/celery.signals.rst (+2 -2)
  77. docs/reference/index.rst (+1 -0)
  78. docs/releases/1.0/announcement.rst (+11 -11)
  79. docs/tutorials/clickcounter.rst (+21 -19)
  80. docs/tutorials/otherqueues.rst (+2 -2)
  81. docs/userguide/executing.rst (+27 -27)
  82. docs/userguide/monitoring.rst (+36 -36)
  83. docs/userguide/periodic-tasks.rst (+16 -16)
  84. docs/userguide/remote-tasks.rst (+2 -2)
  85. docs/userguide/routing.rst (+40 -40)
  86. docs/userguide/tasks.rst (+89 -25)
  87. docs/userguide/tasksets.rst (+6 -6)
  88. docs/userguide/workers.rst (+15 -15)
  89. examples/celery_http_gateway/README.rst (+4 -4)
  90. examples/ghetto-queue/README.rst (+6 -6)
  91. examples/httpexample/README.rst (+2 -2)
  92. examples/pythonproject/demoapp/README.rst (+1 -1)
  93. setup.cfg (+7 -0)

File diff suppressed because it is too large
+ 161 - 161
Changelog


+ 30 - 24
FAQ

@@ -130,8 +130,8 @@ Troubleshooting
 MySQL is throwing deadlock errors, what can I do?
 -------------------------------------------------
 
-**Answer:** MySQL has default isolation level set to ``REPEATABLE-READ``,
-if you don't really need that, set it to ``READ-COMMITTED``.
+**Answer:** MySQL has default isolation level set to `REPEATABLE-READ`,
+if you don't really need that, set it to `READ-COMMITTED`.
 You can do that by adding the following to your :file:`my.cnf`::
 
     [mysqld]
@@ -178,7 +178,7 @@ http://www.playingwithwire.com/2009/10/how-to-get-celeryd-to-work-on-freebsd/
 
 .. _faq-duplicate-key-errors:
 
-I'm having ``IntegrityError: Duplicate Key`` errors. Why?
+I'm having `IntegrityError: Duplicate Key` errors. Why?
 ---------------------------------------------------------
 
 **Answer:** See `MySQL is throwing deadlock errors, what can I do?`_.
@@ -284,7 +284,7 @@ Results
 How do I get the result of a task if I have the ID that points there?
 ----------------------------------------------------------------------
 
-**Answer**: Use ``Task.AsyncResult``::
+**Answer**: Use `Task.AsyncResult`::
 
     >>> result = MyTask.AsyncResult(task_id)
     >>> result.get()
@@ -340,7 +340,7 @@ as a message. If you don't collect these results, they will build up and
 RabbitMQ will eventually run out of memory.
 
 If you don't use the results for a task, make sure you set the
-``ignore_result`` option:
+`ignore_result` option:
 
 .. code-block python
 
@@ -383,14 +383,14 @@ The STOMP carrot backend requires the `stompy`_ library::
 
 .. _`stompy`: http://pypi.python.org/pypi/stompy
 
-In this example we will use a queue called ``celery`` which we created in
+In this example we will use a queue called `celery` which we created in
 the ActiveMQ web admin interface.
 
-**Note**: When using ActiveMQ the queue name needs to have ``"/queue/"``
-prepended to it. i.e. the queue ``celery`` becomes ``/queue/celery``.
+**Note**: When using ActiveMQ the queue name needs to have `"/queue/"`
+prepended to it. i.e. the queue `celery` becomes `/queue/celery`.
 
 Since STOMP doesn't have exchanges and the routing capabilities of AMQP,
-you need to set ``exchange`` name to the same as the queue name. This is
+you need to set `exchange` name to the same as the queue name. This is
 a minor inconvenience since carrot needs to maintain the same interface
 for both AMQP and STOMP.
 
@@ -474,11 +474,17 @@ For more information see :ref:`task-request-info`.
 Can I specify a custom task_id?
 -------------------------------
 
-**Answer**: Yes. Use the ``task_id`` argument to
+**Answer**: Yes.  Use the `task_id` argument to
 :meth:`~celery.execute.apply_async`::
 
     >>> task.apply_async(args, kwargs, task_id="...")
 
+
+Can I use decorators with tasks?
+--------------------------------
+
+**Answer**: Yes.  But please see note at :ref:`tasks-decorating`.
+
 .. _faq-natural-task-ids:
 
 Can I use natural task ids?
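
Returning to the custom `task_id` answer in this hunk, a minimal sketch
(the `tasks` module and its `add` task are hypothetical stand-ins)::

    import uuid

    from tasks import add  # hypothetical task module

    # Supply your own id instead of letting apply_async generate one.
    custom_id = str(uuid.uuid4())
    result = add.apply_async(args=[2, 2], task_id=custom_id)
    assert result.task_id == custom_id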
@@ -523,7 +529,7 @@ See :doc:`userguide/tasksets` for more information.
 
 Can I cancel the execution of a task?
 -------------------------------------
-**Answer**: Yes. Use ``result.revoke``::
+**Answer**: Yes. Use `result.revoke`::
 
     >>> result = add.apply_async(args=[2, 2], countdown=120)
     >>> result.revoke()
@@ -566,8 +572,8 @@ See :doc:`userguide/routing` for more information.
 Can I change the interval of a periodic task at runtime?
 --------------------------------------------------------
 
-**Answer**: Yes. You can override ``PeriodicTask.is_due`` or turn
-``PeriodicTask.run_every`` into a property:
+**Answer**: Yes. You can override `PeriodicTask.is_due` or turn
+`PeriodicTask.run_every` into a property:
 
 .. code-block:: python
 
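
The hunk truncates the original example; the property approach it
describes looks roughly like this (`get_interval_seconds` is a stand-in
for a runtime lookup such as a database read)::

    from datetime import timedelta

    from celery.task import PeriodicTask


    def get_interval_seconds():
        # Stand-in for a runtime lookup (database, config service, ...).
        return 30


    class RefreshFeeds(PeriodicTask):

        def run(self, **kwargs):
            pass  # do the actual work here

        @property
        def run_every(self):
            # Re-evaluated by the scheduler, so the interval can
            # change between runs.
            return timedelta(seconds=get_interval_seconds())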
@@ -601,11 +607,11 @@ Should I use retry or acks_late?
 **Answer**: Depends. It's not necessarily one or the other, you may want
 to use both.
 
-``Task.retry`` is used to retry tasks, notably for expected errors that
-is catchable with the ``try:`` block. The AMQP transaction is not used
+`Task.retry` is used to retry tasks, notably for expected errors that
+is catchable with the `try:` block. The AMQP transaction is not used
 for these errors: **if the task raises an exception it is still acked!**.
 
-The ``acks_late`` setting would be used when you need the task to be
+The `acks_late` setting would be used when you need the task to be
 executed again if the worker (for some reason) crashes mid-execution.
 It's important to note that the worker is not known to crash, and if
 it does it is usually an unrecoverable error that requires human
@@ -631,11 +637,11 @@ It's a good default, users who require it and know what they
 are doing can still enable acks_late (and in the future hopefully
 use manual acknowledgement)
 
-In addition ``Task.retry`` has features not available in AMQP
+In addition `Task.retry` has features not available in AMQP
 transactions: delay between retries, max retries, etc.
 
 So use retry for Python errors, and if your task is reentrant
-combine that with ``acks_late`` if that level of reliability
+combine that with `acks_late` if that level of reliability
 is required.
 
 .. _faq-schedule-at-specific-time:
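
Combining both mechanisms as this answer suggests, a sketch of a
reentrant task that retries on an expected error and also enables
`acks_late` (the URL fetch is illustrative)::

    import socket
    import urllib2

    from celery.decorators import task


    @task(acks_late=True, max_retries=3)
    def refresh_feed(url, **kwargs):
        try:
            return urllib2.urlopen(url).read()
        except (socket.error, urllib2.URLError), exc:
            # Expected, catchable error: retry later instead of failing.
            # acks_late additionally re-delivers the message if the
            # worker crashes mid-execution (the task must be reentrant).
            refresh_feed.retry(args=[url], kwargs=kwargs, exc=exc,
                               countdown=10)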
@@ -645,7 +651,7 @@ Can I schedule tasks to execute at a specific time?
 
 .. module:: celery.task.base
 
-**Answer**: Yes. You can use the ``eta`` argument of :meth:`Task.apply_async`.
+**Answer**: Yes. You can use the `eta` argument of :meth:`Task.apply_async`.
 
 Or to schedule a periodic task at a specific time, use the
 :class:`celery.task.schedules.crontab` schedule behavior:
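
A crontab-based periodic task using the referenced schedule class, as a
sketch (the `periodic_task` decorator import is assumed)::

    from celery.decorators import periodic_task
    from celery.task.schedules import crontab


    # Runs every Monday morning at 7:30.
    @periodic_task(run_every=crontab(hour=7, minute=30, day_of_week=1))
    def every_monday_morning():
        print("Execute every Monday at 7:30AM.")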
@@ -662,7 +668,7 @@ Or to schedule a periodic task at a specific time, use the
 
 .. _faq-safe-worker-shutdown:
 
-How do I shut down ``celeryd`` safely?
+How do I shut down `celeryd` safely?
 --------------------------------------
 
 **Answer**: Use the :sig:`TERM` signal, and the worker will finish all currently
@@ -672,7 +678,7 @@ You should never stop :mod:`~celery.bin.celeryd` with the :sig:`KILL` signal
 (:option:`-9`), unless you've tried :sig:`TERM` a few times and waited a few
 minutes to let it get a chance to shut down.  As if you do tasks may be
 terminated mid-execution, and they will not be re-run unless you have the
-``acks_late`` option set (``Task.acks_late`` / :setting:`CELERY_ACKS_LATE`).
+`acks_late` option set (`Task.acks_late` / :setting:`CELERY_ACKS_LATE`).
 
 .. seealso::
 
@@ -705,14 +711,14 @@ See http://bit.ly/bo9RSw
 
 .. _faq-windows-worker-embedded-beat:
 
-The ``-B`` / ``--beat`` option to celeryd doesn't work?
+The `-B` / `--beat` option to celeryd doesn't work?
 ----------------------------------------------------------------
-**Answer**: That's right. Run ``celerybeat`` and ``celeryd`` as separate
+**Answer**: That's right. Run `celerybeat` and `celeryd` as separate
 services instead.
 
 .. _faq-windows-django-settings:
 
-``django-celery`` can't find settings?
+`django-celery` can't find settings?
 --------------------------------------
 
 **Answer**: You need to specify the :option:`--settings` argument to

+ 7 - 7
INSTALL

@@ -1,19 +1,19 @@
-Installing celery
+Installing Celery
 =================
 
-You can install ``celery`` either via the Python Package Index (PyPI)
+You can install Celery either via the Python Package Index (PyPI)
 or from source.
 
-To install using ``pip``,::
+To install using `pip`::
 
-    $ pip install celery
+    $ pip install Celery
 
-To install using ``easy_install``,::
+To install using `easy_install`::
 
-    $ easy_install celery
+    $ easy_install Celery
 
 If you have downloaded a source tarball you can install it
-by doing the following,::
+by doing the following::
 
     $ python setup.py build
     # python setup.py install # as root

+ 10 - 10
README.rst

@@ -41,7 +41,7 @@ the `django-celery`_, `celery-pylons`_ and `Flask-Celery`_ add-on packages.
 .. _`Pylons`: http://pylonshq.com/
 .. _`Flask`: http://flask.pocoo.org/
 .. _`django-celery`: http://pypi.python.org/pypi/django-celery
-.. _`celery-pylons`: http://bitbucket.org/ianschenck/celery-pylons
+.. _`celery-pylons`: http://pypi.python.org/pypi/celery-pylons
 .. _`Flask-Celery`: http://github.com/ask/flask-celery/
 .. _`operate with other languages using webhooks`:
     http://ask.github.com/celery/userguide/remote-tasks.html
@@ -59,7 +59,7 @@ This is a high level overview of the architecture.
 .. image:: http://cloud.github.com/downloads/ask/celery/Celery-Overview-v4.jpg
 
 The broker delivers tasks to the worker servers.
-A worker server is a networked machine running ``celeryd``.  This can be one or
+A worker server is a networked machine running `celeryd`.  This can be one or
 more machines depending on the workload.
 
 The result of the task can be stored for later retrieval (called its
@@ -107,7 +107,7 @@ Features
     |                 | while the queue is temporarily overloaded).        |
     +-----------------+----------------------------------------------------+
     | Concurrency     | Tasks are executed in parallel using the           |
-    |                 | ``multiprocessing`` module.                        |
+    |                 | `multiprocessing` module.                          |
     +-----------------+----------------------------------------------------+
     | Scheduling      | Supports recurring tasks like cron, or specifying  |
     |                 | an exact date or countdown for when after the task |
@@ -194,14 +194,14 @@ is hosted at Github.
 Installation
 ============
 
-You can install ``celery`` either via the Python Package Index (PyPI)
+You can install `celery` either via the Python Package Index (PyPI)
 or from source.
 
-To install using ``pip``,::
+To install using `pip`,::
 
     $ pip install celery
 
-To install using ``easy_install``,::
+To install using `easy_install`,::
 
     $ easy_install celery
 
@@ -210,7 +210,7 @@ To install using ``easy_install``,::
 Downloading and installing from source
 --------------------------------------
 
-Download the latest version of ``celery`` from
+Download the latest version of `celery` from
 http://pypi.python.org/pypi/celery/
 
 You can install it by doing the following,::
@@ -275,10 +275,10 @@ http://wiki.github.com/ask/celery/
 Contributing
 ============
 
-Development of ``celery`` happens at Github: http://github.com/ask/celery
+Development of `celery` happens at Github: http://github.com/ask/celery
 
 You are highly encouraged to participate in the development
-of ``celery``. If you don't like Github (for some reason) you're welcome
+of `celery`. If you don't like Github (for some reason) you're welcome
 to send regular patches.
 
 .. _license:
@@ -286,7 +286,7 @@ to send regular patches.
 License
 =======
 
-This software is licensed under the ``New BSD License``. See the ``LICENSE``
+This software is licensed under the `New BSD License`. See the ``LICENSE``
 file in the top distribution directory for the full license text.
 
 .. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround

+ 47 - 54
celery/app/__init__.py

@@ -1,3 +1,13 @@
+"""
+celery.app
+==========
+
+Celery Application.
+
+:copyright: (c) 2009 - 2010 by Ask Solem.
+:license: BSD, see LICENSE for more details.
+
+"""
 import os
 
 from inspect import getargspec
@@ -6,44 +16,24 @@ from celery import registry
 from celery.app import base
 from celery.utils.functional import wraps
 
+# Apps with the :attr:`~celery.app.base.BaseApp.set_as_current` attribute
+# sets this, so it will always contain the last instantiated app,
+# and is the default app returned by :func:`app_or_default`.
 _current_app = None
 
 
 class App(base.BaseApp):
     """Celery Application.
 
-    Inherits from :class:`celery.app.base.BaseApp`.
-
     :keyword loader: The loader class, or the name of the loader class to use.
-        Default is :class:`celery.loaders.app.AppLoader`.
+                     Default is :class:`celery.loaders.app.AppLoader`.
     :keyword backend: The result store backend class, or the name of the
-        backend class to use. Default is the value of the
-        ``CELERY_RESULT_BACKEND`` setting.
-
-    .. attribute:: amqp
-
-        Sending/receiving messages.
-        See :class:`celery.app.amqp.AMQP`.
-
-    .. attribute:: backend
-
-        Storing/retreiving task state.
-        See :class:`celery.backend.base.BaseBackend`.
+                      backend class to use. Default is the value of the
+                      :setting:`CELERY_RESULT_BACKEND` setting.
 
-    .. attribute:: conf
+    .. seealso::
 
-        Current configuration. Supports both the dict interface and
-        attribute access.
-
-    .. attribute:: control
-
-        Controlling worker nodes.
-        See :class:`celery.task.control.Control`.
-
-    .. attribute:: log
-
-        Logging.
-        See :class:`celery.log.Logging`.
+        The app base class; :class:`~celery.app.base.BaseApp`.
 
     """
 
@@ -59,53 +49,55 @@ class App(base.BaseApp):
         return create_task_cls(app=self)
 
     def Worker(self, **kwargs):
-        """Create new :class:`celery.apps.worker.Worker` instance."""
+        """Create new :class:`~celery.apps.worker.Worker` instance."""
         from celery.apps.worker import Worker
         return Worker(app=self, **kwargs)
 
     def Beat(self, **kwargs):
-        """Create new :class:`celery.apps.beat.Beat` instance."""
+        """Create new :class:`~celery.apps.beat.Beat` instance."""
         from celery.apps.beat import Beat
         return Beat(app=self, **kwargs)
 
     def TaskSet(self, *args, **kwargs):
-        """Create new :class:`celery.task.sets.TaskSet`."""
+        """Create new :class:`~celery.task.sets.TaskSet`."""
        from celery.task.sets import TaskSet
        kwargs["app"] = self
        return TaskSet(*args, **kwargs)
 
     def worker_main(self, argv=None):
+        """Run :program:`celeryd` using `argv`.  Uses :data:`sys.argv`
+        if `argv` is not specified."""
         from celery.bin.celeryd import WorkerCommand
         return WorkerCommand(app=self).execute_from_commandline(argv)
 
     def task(self, *args, **options):
         """Decorator to create a task class out of any callable.
 
-        Examples:
+        .. admonition:: Examples
 
-        .. code-block:: python
+            .. code-block:: python
 
-            @task()
-            def refresh_feed(url):
-                return Feed.objects.get(url=url).refresh()
+                @task()
+                def refresh_feed(url):
+                    return Feed.objects.get(url=url).refresh()
 
-        With setting extra options and using retry.
+            With setting extra options and using retry.
 
-        .. code-block:: python
+            .. code-block:: python
 
-            @task(exchange="feeds")
-            def refresh_feed(url, **kwargs):
-                try:
-                    return Feed.objects.get(url=url).refresh()
-                except socket.error, exc:
-                    refresh_feed.retry(args=[url], kwargs=kwargs, exc=exc)
+                @task(exchange="feeds")
+                def refresh_feed(url, **kwargs):
+                    try:
+                        return Feed.objects.get(url=url).refresh()
+                    except socket.error, exc:
+                        refresh_feed.retry(args=[url], kwargs=kwargs, exc=exc)
 
-        Calling the resulting task:
+            Calling the resulting task:
 
-            >>> refresh_feed("http://example.com/rss") # Regular
-            <Feed: http://example.com/rss>
-            >>> refresh_feed.delay("http://example.com/rss") # Async
-            <AsyncResult: 8998d0f4-da0b-4669-ba03-d5ab5ac6ad5d>
+                >>> refresh_feed("http://example.com/rss") # Regular
+                <Feed: http://example.com/rss>
+                >>> refresh_feed.delay("http://example.com/rss") # Async
+                <AsyncResult: 8998d0f4-da0b-4669-ba03-d5ab5ac6ad5d>
 
         """
 
@@ -135,8 +127,10 @@ class App(base.BaseApp):
             return inner_create_task_cls()(*args)
         return inner_create_task_cls(**options)
 
-# The "default" loader is the default loader used by old applications.
+#: The "default" loader is the default loader used by old applications.
 default_loader = os.environ.get("CELERY_LOADER") or "default"
+
+#: Global fallback app instance.
 default_app = App(loader=default_loader, set_as_current=False)
 
 if os.environ.get("CELERY_TRACE_APP"):
@@ -160,10 +154,9 @@ else:
     def app_or_default(app=None):
         """Returns the app provided or the default app if none.
 
-        If the environment variable :envvar:`CELERY_TRACE_APP` is set,
-        any time there is no active app and exception is raised. This
-        is used to trace app leaks (when someone forgets to pass
-        along the app instance).
+        The environment variable :envvar:`CELERY_TRACE_APP` is used to
+        trace app leaks.  When enabled an exception is raised if there
+        is no active app.
 
         """
         global _current_app
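
For context on the `task` docstring changes above, a minimal sketch of
defining and calling a task through an `App` instance (names are
illustrative)::

    from celery.app import App

    app = App()  # uses the loader/backend defaults described above


    @app.task()
    def add(x, y):
        return x + y

    print(add(2, 2))          # runs in-process -> 4
    result = add.delay(2, 2)  # sends to a worker -> AsyncResult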

+ 9 - 1
celery/app/amqp.py

@@ -1,5 +1,13 @@
-import os
+"""
+celery.app.amqp
+===============
+
+AMQ related functionality.
 
+:copyright: (c) 2009 - 2010 by Ask Solem.
+:license: BSD, see LICENSE for more details.
+
+"""
 from datetime import datetime, timedelta
 from UserDict import UserDict
 

+ 78 - 113
celery/app/base.py

@@ -1,78 +1,25 @@
+"""
+celery.app.base
+===============
+
+Application Base Class.
+
+:copyright: (c) 2009 - 2010 by Ask Solem.
+:license: BSD, see LICENSE for more details.
+
+"""
 import sys
 import platform as _platform
 
 from datetime import timedelta
-from itertools import chain
 
 from celery import routes
 from celery.app.defaults import DEFAULTS
-from celery.datastructures import AttributeDictMixin
+from celery.datastructures import MultiDictView
 from celery.utils import noop, isatty
 from celery.utils.functional import wraps
 
 
-class MultiDictView(AttributeDictMixin):
-    """View for one more more dicts.
-
-    * When getting a key, the dicts are searched in order.
-    * When setting a key, the key is added to the first dict.
-
-    >>> d1 = {"x": 3"}
-    >>> d2 = {"x": 1, "y": 2, "z": 3}
-    >>> x = MultiDictView([d1, d2])
-
-    >>> x["x"]
-    3
-
-    >>>  x["y"]
-    2
-
-    """
-    dicts = None
-
-    def __init__(self, *dicts):
-        self.__dict__["dicts"] = dicts
-
-    def __getitem__(self, key):
-        for d in self.__dict__["dicts"]:
-            try:
-                return d[key]
-            except KeyError:
-                pass
-        raise KeyError(key)
-
-    def __setitem__(self, key, value):
-        self.__dict__["dicts"][0][key] = value
-
-    def get(self, key, default=None):
-        try:
-            return self[key]
-        except KeyError:
-            return default
-
-    def setdefault(self, key, default):
-        try:
-            return self[key]
-        except KeyError:
-            self[key] = default
-            return default
-
-    def update(self, *args, **kwargs):
-        return self.__dict__["dicts"][0].update(*args, **kwargs)
-
-    def __contains__(self, key):
-        for d in self.__dict__["dicts"]:
-            if key in d:
-                return True
-        return False
-
-    def __repr__(self):
-        return repr(dict(iter(self)))
-
-    def __iter__(self):
-        return chain(*[d.iteritems() for d in self.__dict__["dicts"]])
-
-
 class BaseApp(object):
     """Base class for apps."""
     SYSTEM = _platform.system()
@@ -133,30 +80,13 @@ class BaseApp(object):
         for key, value in config.items():
             self.conf[key] = value
 
-    def either(self, default_key, *values):
-        """Fallback to the value of a configuration key if none of the
-        ``*values`` are true."""
-        for value in values:
-            if value is not None:
-                return value
-        return self.conf.get(default_key)
-
-    def merge(self, a, b):
-        """Like ``dict(a, **b)`` except it will keep values from ``a``
-        if the value in ``b`` is :const:`None`."""
-        b = dict(b)
-        for key, value in a.items():
-            if b.get(key) is None:
-                b[key] = value
-        return b
-
     def send_task(self, name, args=None, kwargs=None, countdown=None,
             eta=None, task_id=None, publisher=None, connection=None,
             connect_timeout=None, result_cls=None, expires=None,
             **options):
         """Send task by name.
 
-        :param name: Name of task to execute (e.g. ``"tasks.add"``).
+        :param name: Name of task to execute (e.g. `"tasks.add"`).
         :keyword result_cls: Specify custom result class. Default is
             using :meth:`AsyncResult`.
 
@@ -185,21 +115,33 @@ class BaseApp(object):
         return self.with_default_connection(_do_publish)(
                 connection=connection, connect_timeout=connect_timeout)
 
+    def AsyncResult(self, task_id, backend=None):
+        """Create :class:`celery.result.BaseAsyncResult` instance."""
+        from celery.result import BaseAsyncResult
+        return BaseAsyncResult(task_id, app=self,
+                               backend=backend or self.backend)
+
+    def TaskSetResult(self, taskset_id, results, **kwargs):
+        """Create :class:`celery.result.TaskSetResult` instance."""
+        from celery.result import TaskSetResult
+        return TaskSetResult(taskset_id, results, app=self)
+
     def broker_connection(self, hostname=None, userid=None,
             password=None, virtual_host=None, port=None, ssl=None,
             insist=None, connect_timeout=None, transport=None, **kwargs):
         """Establish a connection to the message broker.
 
-        :keyword hostname: defaults to the ``BROKER_HOST`` setting.
-        :keyword userid: defaults to the ``BROKER_USER`` setting.
-        :keyword password: defaults to the ``BROKER_PASSWORD`` setting.
-        :keyword virtual_host: defaults to the ``BROKER_VHOST`` setting.
-        :keyword port: defaults to the ``BROKER_PORT`` setting.
-        :keyword ssl: defaults to the ``BROKER_USE_SSL`` setting.
-        :keyword insist: defaults to the ``BROKER_INSIST`` setting.
+        :keyword hostname: defaults to the :setting:`BROKER_HOST` setting.
+        :keyword userid: defaults to the :setting:`BROKER_USER` setting.
+        :keyword password: defaults to the :setting:`BROKER_PASSWORD` setting.
+        :keyword virtual_host: defaults to the :setting:`BROKER_VHOST` setting.
+        :keyword port: defaults to the :setting:`BROKER_PORT` setting.
+        :keyword ssl: defaults to the :setting:`BROKER_USE_SSL` setting.
+        :keyword insist: defaults to the :setting:`BROKER_INSIST` setting.
         :keyword connect_timeout: defaults to the
-            ``BROKER_CONNECTION_TIMEOUT`` setting.
-        :keyword backend_cls: defaults to the ``BROKER_BACKEND`` setting.
+            :setting:`BROKER_CONNECTION_TIMEOUT` setting.
+        :keyword backend_cls: defaults to the :setting:`BROKER_BACKEND`
+            setting.
 
         :returns :class:`kombu.connection.BrokerConnection`:
 
@@ -217,7 +159,7 @@ class BaseApp(object):
                                 "BROKER_CONNECTION_TIMEOUT", connect_timeout))
 
     def with_default_connection(self, fun):
-        """With any function accepting ``connection`` and ``connect_timeout``
+        """With any function accepting `connection` and `connect_timeout`
         keyword arguments, establishes a default connection if one is
         not already passed to it.
 
@@ -272,33 +214,34 @@ class BaseApp(object):
                     seconds=c.CELERY_TASK_RESULT_EXPIRES)
         return c
 
-    def mail_admins(self, subject, message, fail_silently=False):
+    def mail_admins(self, subject, body, fail_silently=False):
         """Send an e-mail to the admins in conf.ADMINS."""
-        from celery.utils import mail
-
         if not self.conf.ADMINS:
             return
-
         to = [admin_email for _, admin_email in self.conf.ADMINS]
-        message = mail.Message(sender=self.conf.SERVER_EMAIL,
-                               to=to, subject=subject, body=message)
-
-        mailer = mail.Mailer(self.conf.EMAIL_HOST,
-                             self.conf.EMAIL_PORT,
-                             self.conf.EMAIL_HOST_USER,
-                             self.conf.EMAIL_HOST_PASSWORD)
-        mailer.send(message, fail_silently=fail_silently)
+        self.loader.mail_admins(subject, body, fail_silently,
+                                to=to, sender=self.conf.SERVER_EMAIL,
+                                host=self.conf.EMAIL_HOST,
+                                port=self.conf.EMAIL_PORT,
+                                user=self.conf.EMAIL_USER,
+                                password=self.conf.EMAIL_PASSWORD)
 
-    def AsyncResult(self, task_id, backend=None):
-        """Create :class:`celery.result.BaseAsyncResult` instance."""
-        from celery.result import BaseAsyncResult
-        return BaseAsyncResult(task_id, app=self,
-                               backend=backend or self.backend)
+    def either(self, default_key, *values):
+        """Fallback to the value of a configuration key if none of the
+        `*values` are true."""
+        for value in values:
+            if value is not None:
+                return value
+        return self.conf.get(default_key)
 
-    def TaskSetResult(self, taskset_id, results, **kwargs):
-        """Create :class:`celery.result.TaskSetResult` instance."""
-        from celery.result import TaskSetResult
-        return TaskSetResult(taskset_id, results, app=self)
+    def merge(self, a, b):
+        """Like `dict(a, **b)` except it will keep values from `a`
+        if the value in `b` is :const:`None`."""
+        b = dict(b)
+        for key, value in a.items():
+            if b.get(key) is None:
+                b[key] = value
+        return b
 
     def _get_backend(self):
         from celery.backends import get_backend_cls
@@ -312,6 +255,11 @@ class BaseApp(object):
 
     @property
     def amqp(self):
+        """Sending/receiving messages.
+
+        See :class:`~celery.app.amqp.AMQP`.
+
+        """
         if self._amqp is None:
             from celery.app.amqp import AMQP
             self._amqp = AMQP(self)
@@ -319,12 +267,18 @@ class BaseApp(object):
 
     @property
     def backend(self):
+        """Storing/retreiving task state.
+
+        See :class:`~celery.backend.base.BaseBackend`.
+
+        """
         if self._backend is None:
             self._backend = self._get_backend()
         return self._backend
 
     @property
     def loader(self):
+        """Current loader."""
         if self._loader is None:
             from celery.loaders import get_loader_cls
             self._loader = get_loader_cls(self.loader_cls)(app=self)
@@ -332,12 +286,18 @@ class BaseApp(object):
 
     @property
     def conf(self):
+        """Current configuration (dict and attribute access)."""
         if self._conf is None:
             self._conf = self._get_config()
         return self._conf
 
     @property
     def control(self):
+        """Controlling worker nodes.
+
+        See :class:`~celery.task.control.Control`.
+
+        """
         if self._control is None:
             from celery.task.control import Control
             self._control = Control(app=self)
@@ -345,6 +305,11 @@ class BaseApp(object):
 
     @property
     def log(self):
+        """Logging utilities.
+
+        See :class:`~celery.log.Logging`.
+
+        """
         if self._log is None:
             from celery.log import Logging
             self._log = Logging(app=self)
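
The `MultiDictView` class removed above is not gone: the new import at
the top of the file pulls it from `celery.datastructures` instead. Its
lookup-order semantics, per the deleted code (assuming the moved class
keeps the same interface)::

    from celery.datastructures import MultiDictView

    defaults = {"x": 1, "y": 2, "z": 3}
    overrides = {"x": 3}

    conf = MultiDictView(overrides, defaults)

    print(conf["x"])   # 3  -- dicts are searched in order
    print(conf["y"])   # 2  -- falls through to the later dict
    conf["y"] = 10     # writes always go to the first dict
    print(overrides)   # {'x': 3, 'y': 10}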

+ 1 - 0
celery/app/defaults.py

@@ -97,6 +97,7 @@ NAMESPACES = {
         "REDIRECT_STDOUTS_LEVEL": Option("WARNING"),
     },
     "CELERYD": {
+        "AUTOSCALER": Option("celery.worker.controllers.Autoscaler"),
         "CONCURRENCY": Option(0, type="int"),
         "ETA_SCHEDULER": Option("celery.utils.timer2.Timer"),
         "ETA_SCHEDULER_PRECISION": Option(1.0, type="float"),

+ 1 - 1
celery/apps/beat.py

@@ -111,7 +111,7 @@ class Beat(object):
                                info=" ".join(sys.argv[arg_start:]))
 
     def install_sync_handler(self, beat):
-        """Install a ``SIGTERM`` + ``SIGINT`` handler that saves
+        """Install a `SIGTERM` + `SIGINT` handler that saves
         the celerybeat schedule."""
 
         def _sync(signum, frame):

+ 12 - 2
celery/apps/worker.py

@@ -6,6 +6,8 @@ import socket
 import sys
 import warnings
 
+from carrot.utils import partition
+
 from celery import __version__
 from celery import platforms
 from celery import signals
@@ -40,7 +42,8 @@ class Worker(object):
             schedule=None, task_time_limit=None, task_soft_time_limit=None,
             max_tasks_per_child=None, queues=None, events=False, db=None,
             include=None, app=None, pidfile=None,
-            redirect_stdouts=None, redirect_stdouts_level=None, **kwargs):
+            redirect_stdouts=None, redirect_stdouts_level=None,
+            autoscale=None, scheduler_cls=None, **kwargs):
         self.app = app = app_or_default(app)
         self.concurrency = (concurrency or
                             app.conf.CELERYD_CONCURRENCY or
@@ -53,6 +56,7 @@ class Worker(object):
         self.discard = discard
         self.run_clockservice = run_clockservice
         self.schedule = schedule or app.conf.CELERYBEAT_SCHEDULE_FILENAME
+        self.scheduler_cls = scheduler_cls or app.conf.CELERYBEAT_SCHEDULER
         self.events = events
         self.task_time_limit = (task_time_limit or
                                 app.conf.CELERYD_TASK_TIME_LIMIT)
@@ -69,6 +73,10 @@ class Worker(object):
         self.queues = None
         self.include = include or []
         self.pidfile = pidfile
+        self.autoscale = None
+        if autoscale:
+            max_c, _, min_c = partition(autoscale, ",")
+            self.autoscale = [int(max_c), min_c and int(min_c) or 0]
         self._isatty = sys.stdout.isatty()
 
         self.colored = term.colored(enabled=app.conf.CELERYD_LOG_COLOR)
@@ -192,12 +200,14 @@ class Worker(object):
                                 ready_callback=self.on_consumer_ready,
                                 embed_clockservice=self.run_clockservice,
                                 schedule_filename=self.schedule,
+                                scheduler_cls=self.scheduler_cls,
                                 send_events=self.events,
                                 db=self.db,
                                 queues=self.queues,
                                 max_tasks_per_child=self.max_tasks_per_child,
                                 task_time_limit=self.task_time_limit,
-                                task_soft_time_limit=self.task_soft_time_limit)
+                                task_soft_time_limit=self.task_soft_time_limit,
+                                autoscale=self.autoscale)
         self.install_platform_tweaks(worker)
         worker.start()
 
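The `--autoscale` value is split with carrot's `partition` helper; the
same logic stands alone like this (Python's built-in `str.partition`
behaves identically)::

    def parse_autoscale(value):
        """Parse "max,min" (e.g. "10,3"); min defaults to 0."""
        max_c, _, min_c = value.partition(",")
        return [int(max_c), min_c and int(min_c) or 0]

    print(parse_autoscale("10,3"))  # [10, 3]
    print(parse_autoscale("10"))    # [10, 0]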

+ 2 - 2
celery/backends/base.py

@@ -77,9 +77,9 @@ class BaseBackend(object):
         If the task raises an exception, this exception
         will be re-raised by :func:`wait_for`.
 
-        If ``timeout`` is not ``None``, this raises the
+        If `timeout` is not :const:`None`, this raises the
         :class:`celery.exceptions.TimeoutError` exception if the operation
-        takes longer than ``timeout`` seconds.
+        takes longer than `timeout` seconds.
 
         """
 

+ 1 - 1
celery/backends/pyredis.py

@@ -87,7 +87,7 @@ class RedisBackend(KeyValueStoreBackend):
         self._connection = None
 
     def open(self):
-        """Get :class:`redis.Redis`` instance with the current
+        """Get :class:`redis.Redis` instance with the current
         server configuration.
 
         The connection is then cached until you do an

+ 1 - 1
celery/backends/tyrant.py

@@ -51,7 +51,7 @@ class TyrantBackend(KeyValueStoreBackend):
         self._connection = None
 
     def open(self):
-        """Get :class:`pytyrant.PyTyrant`` instance with the current
+        """Get :class:`pytyrant.PyTyrant` instance with the current
         server configuration.
 
         The connection is then cached until you do an

+ 1 - 1
celery/beat.py

@@ -401,7 +401,7 @@ def EmbeddedService(*args, **kwargs):
     """Return embedded clock service.
 
     :keyword thread: Run threaded instead of as a separate process.
-        Default is ``False``.
+        Default is :const:`False`.
 
     """
     if kwargs.pop("thread", False):

+ 10 - 10
celery/bin/camqadm.py

@@ -54,12 +54,12 @@ class Spec(object):
     .. attribute args::
 
         List of arguments this command takes. Should
-        contain ``(argument_name, argument_type)`` tuples.
+        contain `(argument_name, argument_type)` tuples.
 
     .. attribute returns:
 
         Helpful human string representation of what this command returns.
-        May be ``None``, to signify the return type is unknown.
+        May be :const:`None`, to signify the return type is unknown.
 
     """
     def __init__(self, *args, **kwargs):
@@ -69,7 +69,7 @@ class Spec(object):
     def coerce(self, index, value):
         """Coerce value for argument at index.
 
-        E.g. if :attr:`args` is ``[("is_active", bool)]``:
+        E.g. if :attr:`args` is `[("is_active", bool)]`:
 
             >>> coerce(0, "False")
             False
@@ -131,7 +131,7 @@ class AMQShell(cmd.Cmd):
     :keyword connect: Function used to connect to the server, must return
         connection object.
 
-    :keyword silent: If ``True``, the commands won't have annoying output not
+    :keyword silent: If :const:`True`, the commands won't have annoying output not
         relevant when running in non-shell mode.
 
 
@@ -198,7 +198,7 @@ class AMQShell(cmd.Cmd):
         self._reconnect()
 
     def say(self, m):
-        """Say something to the user. Disabled if :attr:`silent``."""
+        """Say something to the user. Disabled if :attr:`silent`."""
         if not self.silent:
             say(m)
 
@@ -207,7 +207,7 @@ class AMQShell(cmd.Cmd):
         to Python values and find the corresponding method on the AMQP channel
         object.
 
-        :returns: tuple of ``(method, processed_args)``.
+        :returns: tuple of `(method, processed_args)`.
 
         Example:
 
@@ -225,7 +225,7 @@ class AMQShell(cmd.Cmd):
         return getattr(self.chan, attr_name), args, spec.format_response
 
     def do_exit(self, *args):
-        """The ``"exit"`` command."""
+        """The `"exit"` command."""
         self.say("\n-> please, don't leave!")
         sys.exit(0)
 
@@ -249,7 +249,7 @@ class AMQShell(cmd.Cmd):
         return set(self.builtins.keys() + self.amqp.keys())
 
     def completenames(self, text, *ignored):
-        """Return all commands starting with ``text``, for tab-completion."""
+        """Return all commands starting with `text`, for tab-completion."""
         names = self.get_names()
         first = [cmd for cmd in names
                         if cmd.startswith(text.replace("_", "."))]
@@ -274,7 +274,7 @@ class AMQShell(cmd.Cmd):
         """Parse input line.
 
         :returns: tuple of three items:
-            ``(command_name, arglist, original_line)``
+            `(command_name, arglist, original_line)`
 
         E.g::
 
@@ -327,7 +327,7 @@ class AMQShell(cmd.Cmd):
 
 
 class AMQPAdmin(object):
-    """The celery ``camqadm`` utility."""
+    """The celery :program:`camqadm` utility."""
 
     def __init__(self, *args, **kwargs):
         self.app = app_or_default(kwargs.get("app"))

+ 4 - 4
celery/bin/celerybeat.py

@@ -5,7 +5,7 @@
 
 .. cmdoption:: -s, --schedule
 
-    Path to the schedule database. Defaults to ``celerybeat-schedule``.
+    Path to the schedule database. Defaults to `celerybeat-schedule`.
     The extension ".db" will be appended to the filename.
 
 .. cmdoption:: -S, --scheduler
@@ -14,12 +14,12 @@
 
 .. cmdoption:: -f, --logfile
 
-    Path to log file. If no logfile is specified, ``stderr`` is used.
+    Path to log file. If no logfile is specified, `stderr` is used.
 
 .. cmdoption:: -l, --loglevel
 
-    Logging level, choose between ``DEBUG``, ``INFO``, ``WARNING``,
-    ``ERROR``, ``CRITICAL``, or ``FATAL``.
+    Logging level, choose between `DEBUG`, `INFO`, `WARNING`,
+    `ERROR`, `CRITICAL`, or `FATAL`.
 
 """
 from celery.bin.base import Command, Option

+ 22 - 9
celery/bin/celeryd.py

@@ -10,12 +10,12 @@
 
 .. cmdoption:: -f, --logfile
 
-    Path to log file. If no logfile is specified, ``stderr`` is used.
+    Path to log file. If no logfile is specified, `stderr` is used.
 
 .. cmdoption:: -l, --loglevel
 
-    Logging level, choose between ``DEBUG``, ``INFO``, ``WARNING``,
-    ``ERROR``, ``CRITICAL``, or ``FATAL``.
+    Logging level, choose between `DEBUG`, `INFO`, `WARNING`,
+    `ERROR`, `CRITICAL`, or `FATAL`.
 
 .. cmdoption:: -n, --hostname
 
@@ -23,14 +23,14 @@
 
 .. cmdoption:: -B, --beat
 
-    Also run the ``celerybeat`` periodic task scheduler. Please note that
+    Also run the `celerybeat` periodic task scheduler. Please note that
     there must only be one instance of this service.
 
 .. cmdoption:: -Q, --queues
 
     List of queues to enable for this worker, separated by comma.
     By default all configured queues are enabled.
-    Example: ``-Q video,image``
+    Example: `-Q video,image`
 
 .. cmdoption:: -I, --include
 
@@ -39,13 +39,17 @@
 
 .. cmdoption:: -s, --schedule
 
-    Path to the schedule database if running with the ``-B`` option.
-    Defaults to ``celerybeat-schedule``. The extension ".db" will be
+    Path to the schedule database if running with the `-B` option.
+    Defaults to `celerybeat-schedule`. The extension ".db" will be
     appended to the filename.
 
+.. cmdoption:: --scheduler
+
+    Scheduler class to use. Default is celery.beat.PersistentScheduler
+
 .. cmdoption:: -E, --events
 
-    Send events that can be captured by monitors like ``celerymon``.
+    Send events that can be captured by monitors like `celerymon`.
 
 .. cmdoption:: --purge, --discard
 
@@ -114,7 +118,11 @@ class WorkerCommand(Command):
                      "option. The extension '.db' will be appended to the "
                     "filename. Default: %s" % (
                         conf.CELERYBEAT_SCHEDULE_FILENAME, )),
-
+            Option('--scheduler',
+                default=None,
+                action="store", dest="scheduler_cls",
+                help="Scheduler class. Default is "
+                     "celery.beat.PersistentScheduler"),
             Option('-S', '--statedb', default=conf.CELERYD_STATE_DB,
                 action="store", dest="db",
                 help="Path to the state database. The extension '.db' will "
@@ -150,6 +158,11 @@ class WorkerCommand(Command):
                 help="Optional file used to store the workers pid. "
                      "The worker will not start if this file already exists "
                      "and the pid is still alive."),
+            Option('--autoscale', default=None,
+                help="Enable autoscaling by providing "
+                     "max_concurrency,min_concurrency. Example: "
+                     "--autoscale=10,3 (always keep 3 processes, "
+                     "but grow to 10 if necessary)."),
         )
 

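The new `--autoscale` option takes a single `max,min` string. A minimal
sketch of how such a value can be parsed (the `parse_autoscale` helper below
is illustrative only, not part of this commit)::

    def parse_autoscale(value):
        # "10,3" -> (10, 3): grow to at most 10, always keep at least 3.
        max_c, _, min_c = value.partition(",")
        return int(max_c), int(min_c or 0)

    assert parse_autoscale("10,3") == (10, 3)
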
+ 8 - 2
celery/concurrency/processes/__init__.py

@@ -78,9 +78,9 @@ class TaskPool(object):
     def apply_async(self, target, args=None, kwargs=None, callbacks=None,
             errbacks=None, accept_callback=None, timeout_callback=None,
             **compat):
-        """Equivalent of the :func:``apply`` built-in function.
+        """Equivalent of the :func:`apply` built-in function.
 
-        All ``callbacks`` and ``errbacks`` should complete immediately since
+        All `callbacks` and `errbacks` should complete immediately since
         otherwise the thread which handles the result will get blocked.
 
         """
@@ -102,6 +102,12 @@ class TaskPool(object):
                                       error_callback=on_worker_error,
                                       waitforslot=self.putlocks)
 
+    def grow(self, n=1):
+        return self._pool.grow(n)
+
+    def shrink(self, n=1):
+        return self._pool.shrink(n)
+
     def on_worker_error(self, errbacks, exc):
         einfo = ExceptionInfo((exc.__class__, exc, None))
         [errback(einfo) for errback in errbacks]

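`TaskPool.grow` and `TaskPool.shrink` simply delegate to the underlying
process pool. A hedged usage sketch (pool construction and startup elided)::

    # `pool` is an already-started TaskPool instance.
    pool.grow(2)     # ask the process pool for two more workers
    pool.shrink(1)   # terminate one inactive worker; raises ValueError
                     # if every worker is currently busy
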
+ 42 - 4
celery/concurrency/processes/pool.py

@@ -81,8 +81,8 @@ def soft_timeout_sighandler(signum, frame):
 
 
 def worker(inqueue, outqueue, initializer=None, initargs=(), maxtasks=None):
-    assert maxtasks is None or (type(maxtasks) == int and maxtasks > 0)
     pid = os.getpid()
+    assert maxtasks is None or (type(maxtasks) == int and maxtasks > 0)
     put = outqueue.put
     get = inqueue.get
 
@@ -108,6 +108,7 @@ def worker(inqueue, outqueue, initializer=None, initargs=(), maxtasks=None):
     if SIG_SOFT_TIMEOUT is not None:
         signal.signal(SIG_SOFT_TIMEOUT, soft_timeout_sighandler)
 
+
     completed = 0
     while maxtasks is None or (maxtasks and completed < maxtasks):
         try:
@@ -137,7 +138,6 @@ def worker(inqueue, outqueue, initializer=None, initargs=(), maxtasks=None):
         completed += 1
     debug('worker exiting after %d tasks' % completed)
 
-
 #
 # Class representing a process pool
 #
@@ -527,6 +527,44 @@ class Pool(object):
             return True
         return False
 
+    def shrink(self, n=1):
+        for i, worker in enumerate(self._iterinactive()):
+            self._processes -= 1
+            if self._putlock:
+                self._putlock._initial_value -= 1
+                self._putlock.acquire()
+            worker.terminate()
+            if i == n - 1:
+                return
+        raise ValueError("Can't shrink pool. All processes busy!")
+
+    def grow(self, n=1):
+        for i in xrange(n):
+            #assert len(self._pool) == self._processes
+            self._processes += 1
+            if self._putlock:
+                cond = self._putlock._Semaphore__cond
+                cond.acquire()
+                try:
+                    self._putlock._initial_value += 1
+                    self._putlock._Semaphore__value += 1
+                    cond.notify()
+                finally:
+                    cond.release()
+
+    def _iterinactive(self):
+        for worker in self._pool:
+            if not self._worker_active(worker):
+                yield worker
+
+    def _worker_active(self, worker):
+        for job in self._cache.values():
+            if worker.pid in job.worker_pids():
+                return True
+        return False
+
     def _repopulate_pool(self):
         """Bring the number of pool processes up to the specified number,
         for use after reaping workers which have exited.
@@ -541,8 +579,8 @@
     def _maintain_pool(self):
         """Clean up any exited workers and start replacements for them.
         """
-        if self._join_exited_workers():
-            self._repopulate_pool()
+        self._join_exited_workers()
+        self._repopulate_pool()
 
     def _setup_queues(self):
         from multiprocessing.queues import SimpleQueue

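`Pool.grow` has to widen a `threading.Semaphore` bound in place, which the
standard library does not support directly, hence the name-mangled attribute
access above. A similar effect can be had without touching internals by
releasing the semaphore once per added slot; a sketch under that assumption::

    import threading

    putlock = threading.Semaphore(3)     # admits three holders

    def grow_semaphore(sem, n=1):
        # each release() frees one more slot and wakes a waiting thread,
        # roughly what bumping the internal counter above achieves
        for _ in range(n):
            sem.release()

    grow_semaphore(putlock, 2)           # putlock now admits five holders
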
+ 4 - 4
celery/contrib/abortable.py

@@ -65,9 +65,9 @@ In the producer:
 
        ...
 
-After the ``async_result.abort()`` call, the task execution is not
+After the `async_result.abort()` call, the task execution is not
 aborted immediately. In fact, it is not guaranteed to abort at all. Keep
-checking the ``async_result`` status, or call ``async_result.wait()`` to
+checking the `async_result` status, or call `async_result.wait()` to
 have it block until the task is finished.
 
 .. note::
@@ -101,8 +101,8 @@ ABORTED = "ABORTED"
 class AbortableAsyncResult(AsyncResult):
     """Represents an abortable result.
 
-    Specifically, this gives the ``AsyncResult`` a :meth:`abort()` method,
-    which sets the state of the underlying Task to ``"ABORTED"``.
+    Specifically, this gives the `AsyncResult` a :meth:`abort()` method,
+    which sets the state of the underlying Task to `"ABORTED"`.
 
     """
 

+ 102 - 34
celery/datastructures.py

@@ -3,6 +3,7 @@ from __future__ import generators
 import time
 import traceback
 
+from itertools import chain
 from UserList import UserList
 from Queue import Queue, Empty as QueueEmpty
 
@@ -28,6 +29,7 @@ class AttributeDict(dict, AttributeDictMixin):
 
 
 class DictAttribute(object):
+    """Dict interface using attributes."""
 
     def __init__(self, obj):
         self.obj = obj
@@ -61,20 +63,80 @@
         return vars(self.obj).iteritems()
 
 
+class MultiDictView(AttributeDictMixin):
+    """View for one or more dicts.
+
+    * When getting a key, the dicts are searched in order.
+    * When setting a key, the key is added to the first dict.
+
+    >>> d1 = {"x": 3}
+    >>> d2 = {"x": 1, "y": 2, "z": 3}
+    >>> x = MultiDictView(d1, d2)
+
+    >>> x["x"]
+    3
+
+    >>> x["y"]
+    2
+
+    """
+    dicts = None
+
+    def __init__(self, *dicts):
+        self.__dict__["dicts"] = dicts
+
+    def __getitem__(self, key):
+        for d in self.__dict__["dicts"]:
+            try:
+                return d[key]
+            except KeyError:
+                pass
+        raise KeyError(key)
+
+    def __setitem__(self, key, value):
+        self.__dict__["dicts"][0][key] = value
+
+    def get(self, key, default=None):
+        try:
+            return self[key]
+        except KeyError:
+            return default
+
+    def setdefault(self, key, default):
+        try:
+            return self[key]
+        except KeyError:
+            self[key] = default
+            return default
+
+    def update(self, *args, **kwargs):
+        return self.__dict__["dicts"][0].update(*args, **kwargs)
+
+    def __contains__(self, key):
+        for d in self.__dict__["dicts"]:
+            if key in d:
+                return True
+        return False
+
+    def __repr__(self):
+        return repr(dict(iter(self)))
+
+    def __iter__(self):
+        return chain(*[d.iteritems() for d in self.__dict__["dicts"]])
+
+
 class PositionQueue(UserList):
     """A positional queue of a specific length, with slots that are either
     filled or unfilled. When all of the positions are filled, the queue
     is considered :meth:`full`.
 
-    :param length: see :attr:`length`.
-
-
-    .. attribute:: length
-
-        The number of items required for the queue to be considered full.
+    :param length: Number of items to fill.
 
     """
 
+    #: The number of items required for the queue to be considered full.
+    length = None
+
     class UnfilledPosition(object):
         """Describes an unfilled slot."""
 
@@ -88,16 +150,16 @@ class PositionQueue(UserList):
         self.data = map(self.UnfilledPosition, xrange(length))
 
     def full(self):
-        """Returns ``True`` if all of the slots has been filled."""
+        """Returns :const:`True` if all of the slots have been filled."""
         return len(self) >= self.length
 
     def __len__(self):
-        """``len(self)`` -> number of slots filled with real values."""
+        """`len(self)` -> number of slots filled with real values."""
         return len(self.filled)
 
     @property
     def filled(self):
-        """Returns the filled slots as a list."""
+        """All filled slots as a list."""
         return [slot for slot in self.data
                     if not isinstance(slot, self.UnfilledPosition)]
 
@@ -108,15 +170,13 @@ class ExceptionInfo(object):
     :param exc_info: The exception tuple info as returned by
         :func:`traceback.format_exception`.
 
-    .. attribute:: exception
-
-        The original exception.
-
-    .. attribute:: traceback
+    """
 
-        A traceback from the point when :attr:`exception` was raised.
+    #: The original exception.
+    exception = None
 
-    """
+    #: A traceback from the point when :attr:`exception` was raised.
+    traceback = None
 
     def __init__(self, exc_info):
         type_, exception, tb = exc_info
@@ -163,7 +223,7 @@ class SharedCounter(object):
     that you should not update the value by using a previous value, the only
     reliable operations are increment and decrement.
 
-    Example
+    Example::
 
         >>> max_clients = SharedCounter(initial_value=10)
 
@@ -177,7 +237,6 @@ class SharedCounter(object):
         >>> if client >= int(max_clients): # Max clients now at 8
         ...    wait()
 
-
        >>> max_client = max_clients + 10 # NOT OK (unsafe)
 
     """
@@ -201,17 +260,17 @@ class SharedCounter(object):
         return self._value
 
     def __iadd__(self, y):
-        """``self += y``"""
+        """`self += y`"""
         self._modify_queue.put(y * +1)
         return self
 
     def __isub__(self, y):
-        """``self -= y``"""
+        """`self -= y`"""
         self._modify_queue.put(y * -1)
         return self
 
     def __int__(self):
-        """``int(self) -> int``"""
+        """`int(self) -> int`"""
         return self._update_value()
 
     def __repr__(self):
@@ -221,12 +280,12 @@ class SharedCounter(object):
 class LimitedSet(object):
     """Kind-of Set with limitations.
 
-    Good for when you need to test for membership (``a in set``),
+    Good for when you need to test for membership (`a in set`),
     but the list might become too big, so you want to limit it so it doesn't
     consume too many resources.
 
     :keyword maxlen: Maximum number of members before we start
-        deleting expired members.
+                     deleting expired members.
     :keyword expires: Time in seconds, before a membership expires.
 
     """
@@ -316,22 +375,23 @@ class TokenBucket(object):
     Most of this code was stolen from an entry in the ASPN Python Cookbook:
     http://code.activestate.com/recipes/511490/
 
-    :param fill_rate: see :attr:`fill_rate`.
-    :keyword capacity: see :attr:`capacity`.
+    .. admonition:: Thread safety
 
-    .. attribute:: fill_rate
+        This implementation is not thread safe.
 
-        The rate in tokens/second that the bucket will be refilled.
+    :param fill_rate: Refill rate in tokens/second.
+    :keyword capacity: Max number of tokens.  Default is 1.
 
-    .. attribute:: capacity
-
-        Maximum number of tokens in the bucket. Default is ``1``.
+    """
 
-    .. attribute:: timestamp
+    #: The rate in tokens/second that the bucket will be refilled.
+    fill_rate = None
 
-        Timestamp of the last time a token was taken out of the bucket.
+    #: Maximum number of tokens in the bucket.
+    capacity = 1
 
-    """
+    #: Timestamp of the last time a token was taken out of the bucket.
+    timestamp = None
 
     def __init__(self, fill_rate, capacity=1):
         self.capacity = float(capacity)
@@ -340,6 +400,8 @@ class TokenBucket(object):
         self.timestamp = time.time()
 
     def can_consume(self, tokens=1):
+        """Returns :const:`True` if `tokens` number of tokens can be consumed
+        from the bucket."""
         if tokens <= self._get_tokens():
             self._tokens -= tokens
             return True
@@ -347,7 +409,13 @@ class TokenBucket(object):
 
     def expected_time(self, tokens=1):
         """Returns the expected time in seconds when a new token should be
-        available. *Note: consumes a token from the bucket*"""
+        available.
+
+        .. admonition:: Warning
+
+            This consumes a token from the bucket.
+
+        """
         _tokens = self._get_tokens()
         tokens = max(tokens, _tokens)
         return (tokens - _tokens) / self.fill_rate

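A worked example of the `TokenBucket` semantics documented above (the rates
and work items are illustrative)::

    from time import sleep

    bucket = TokenBucket(fill_rate=2, capacity=2)   # 2 tokens/sec, burst of 2

    for item in ["a", "b", "c"]:                    # stand-in work items
        while not bucket.can_consume(1):            # one token per item
            sleep(0.1)                              # back off until refilled
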
+ 4 - 4
celery/db/a805d4bd.py

@@ -2,14 +2,14 @@
 a805d4bd
 This module fixes a bug with pickling and relative imports in Python < 2.6.
 
-The problem is with pickling an e.g. ``exceptions.KeyError`` instance.
-As SQLAlchemy has its own ``exceptions`` module, pickle will try to
-lookup ``KeyError`` in the wrong module, resulting in this exception::
+The problem is with pickling an e.g. `exceptions.KeyError` instance.
+As SQLAlchemy has its own `exceptions` module, pickle will try to
+look up :exc:`KeyError` in the wrong module, resulting in this exception::
 
     cPickle.PicklingError: Can't pickle <type 'exceptions.KeyError'>:
         attribute lookup exceptions.KeyError failed
 
-doing ``import exceptions`` just before the dump in ``sqlalchemy.types``
+doing `import exceptions` just before the dump in `sqlalchemy.types`
 reveals the source of the bug::
 
     EXCEPTIONS: <module 'sqlalchemy.exc' from '/var/lib/hudson/jobs/celery/

+ 3 - 3
celery/events/__init__.py

@@ -33,7 +33,7 @@ class EventDispatcher(object):
     :keyword hostname: Hostname to identify ourselves as,
         by default uses the hostname returned by :func:`socket.gethostname`.
 
-    :keyword enabled: Set to ``False`` to not actually publish any events,
+    :keyword enabled: Set to :const:`False` to not actually publish any events,
         making :meth:`send` a noop operation.
 
     You need to :meth:`close` this after use.
@@ -104,8 +104,8 @@ class EventReceiver(object):
     :param connection: Carrot connection.
     :keyword handlers: Event handlers.
 
-    :attr:`handlers`` is a dict of event types and their handlers,
-    the special handler ``"*`"`` captures all events that doesn't have a
+    :attr:`handlers` is a dict of event types and their handlers,
+    the special handler `"*"` captures all events that don't have a
     handler.
 
     """

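For orientation, the `handlers` mapping dispatches on the event type, with
`"*"` as the catch-all. A hedged sketch (the handler names are ours, the
connection setup is elided, and `"task-succeeded"` is one example event type
from celery's event protocol)::

    def on_task_succeeded(event):
        pass                        # handle "task-succeeded" events here

    def on_any_event(event):
        pass                        # everything without its own handler

    receiver = EventReceiver(connection,
                             handlers={"task-succeeded": on_task_succeeded,
                                       "*": on_any_event})
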
+ 3 - 3
celery/events/state.py

@@ -253,7 +253,7 @@ class State(object):
     def tasks_by_timestamp(self, limit=None):
         """Get tasks by timestamp.
 
-        Returns a list of ``(uuid, task)`` tuples.
+        Returns a list of `(uuid, task)` tuples.
 
         """
         return self._sort_tasks_by_time(self.tasks.items()[:limit])
@@ -266,7 +266,7 @@ class State(object):
     def tasks_by_type(self, name, limit=None):
         """Get all tasks by type.
 
-        Returns a list of ``(uuid, task)`` tuples.
+        Returns a list of `(uuid, task)` tuples.
 
         """
         return self._sort_tasks_by_time([(uuid, task)
@@ -276,7 +276,7 @@ class State(object):
     def tasks_by_worker(self, hostname, limit=None):
         """Get all tasks by worker.
 
-        Returns a list of ``(uuid, task)`` tuples.
+        Returns a list of `(uuid, task)` tuples.
 
         """
         return self._sort_tasks_by_time([(uuid, task)

+ 11 - 1
celery/loaders/base.py

@@ -42,7 +42,8 @@ class BaseLoader(object):
         pass
 
     def on_worker_init(self):
-        """This method is called when the worker (``celeryd``) starts."""
+        """This method is called when the worker (:program:`celeryd`)
+        starts."""
         pass
 
     def import_task_module(self, module):
@@ -112,6 +113,15 @@ class BaseLoader(object):
 
         return dict(map(getarg, args))
 
+    def mail_admins(self, subject, message, fail_silently=False,
+            sender=None, to=None, host=None, port=None,
+            user=None, password=None):
+        from celery.utils import mail
+        message = mail.Message(sender=sender,
+                               to=to, subject=subject, body=message)
+        mailer = mail.Mailer(host, port, user, password)
+        mailer.send(message, fail_silently=fail_silently)
+
     @property
     def conf(self):
         """Loader configuration."""

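The new `BaseLoader.mail_admins` builds a `mail.Message` and sends it through
`mail.Mailer`. A usage sketch (addresses and host are placeholders; `loader`
is a loader instance obtained elsewhere)::

    loader.mail_admins("[celery] task failure",
                       "A task raised an unhandled exception.",
                       fail_silently=True,
                       sender="celery@localhost", to="admin@localhost",
                       host="localhost", port=25)
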
+ 1 - 1
celery/loaders/default.py

@@ -42,7 +42,7 @@ class Loader(BaseLoader):
         return settings
 
     def read_configuration(self):
-        """Read configuration from ``celeryconfig.py`` and configure
+        """Read configuration from :file:`celeryconfig.py` and configure
         celery and Django so it can be used by regular Python."""
         configname = os.environ.get("CELERY_CONFIG_MODULE",
                                     DEFAULT_CONFIG_MODULE)

+ 8 - 8
celery/log.py

@@ -35,7 +35,7 @@ class ColorFormatter(logging.Formatter):
     def formatException(self, ei):
         r = logging.Formatter.formatException(self, ei)
         if type(r) in [types.StringType]:
-            r = r.decode('utf-8', 'replace') # Convert to unicode
+            r = r.decode("utf-8", "replace") # Convert to unicode
         return r
 
     def format(self, record):
@@ -101,7 +101,7 @@ class Logging(object):
 
     def _detect_handler(self, logfile=None):
         """Create log handler with either a filename, an open stream
-        or ``None`` (stderr)."""
+        or :const:`None` (stderr)."""
         if not logfile or hasattr(logfile, "write"):
             return logging.StreamHandler(logfile)
         return logging.FileHandler(logfile)
@@ -120,9 +120,9 @@ class Logging(object):
     def setup_logger(self, loglevel=None, logfile=None,
             format=None, colorize=None, name="celery", root=True,
             app=None, **kwargs):
-        """Setup the ``multiprocessing`` logger.
+        """Setup the :mod:`multiprocessing` logger.
 
-        If ``logfile`` is not specified, then ``sys.stderr`` is used.
+        If `logfile` is not specified, then `sys.stderr` is used.
 
         Returns logger object.
 
@@ -142,7 +142,7 @@ class Logging(object):
             colorize=None, task_kwargs=None, app=None, **kwargs):
         """Setup the task logger.
 
-        If ``logfile`` is not specified, then ``sys.stderr`` is used.
+        If `logfile` is not specified, then `sys.stderr` is used.
 
         Returns logger object.
 
@@ -215,7 +215,7 @@ class LoggingProxy(object):
 
     def _safewrap_handlers(self):
         """Make the logger handlers dump internal errors to
-        ``sys.__stderr__`` instead of ``sys.stderr`` to circumvent
+        `sys.__stderr__` instead of `sys.stderr` to circumvent
         infinite loops."""
 
         def wrap_handler(handler):                  # pragma: no cover
@@ -253,7 +253,7 @@ class LoggingProxy(object):
                 self._thread.recurse_protection = False
 
     def writelines(self, sequence):
-        """``writelines(sequence_of_strings) -> None``.
+        """`writelines(sequence_of_strings) -> None`.
 
         Write the strings to the file.
 
@@ -275,7 +275,7 @@ class LoggingProxy(object):
         self.closed = True
 
     def isatty(self):
-        """Always returns ``False``. Just here for file support."""
+        """Always returns :const:`False`. Just here for file support."""
         return False
 
     def fileno(self):

+ 3 - 3
celery/messaging.py

@@ -21,17 +21,17 @@ def establish_connection(**kwargs):
 
 def with_connection(fun):
     """Decorator for providing default message broker connection for functions
-    supporting the ``connection`` and ``connect_timeout`` keyword
+    supporting the `connection` and `connect_timeout` keyword
     arguments."""
     # FIXME: Deprecate!
     return default_app.with_default_connection(fun)
 
 
 def get_consumer_set(connection, queues=None, **options):
-    """Get the :class:`kombu.messaging.Consumer`` for a queue
+    """Get the :class:`kombu.messaging.Consumer` for a queue
     configuration.
 
-    Defaults to the queues in :const:`CELERY_QUEUES`.
+    Defaults to the queues in :setting:`CELERY_QUEUES`.
 
     """
     # FIXME: Deprecate!

+ 2 - 2
celery/registry.py

@@ -1,12 +1,12 @@
 """celery.registry"""
 import inspect
+
 from UserDict import UserDict
 
 from celery.exceptions import NotRegistered
 
 
 class TaskRegistry(UserDict):
-    """Site registry for tasks."""
 
     NotRegistered = NotRegistered
 
@@ -36,7 +36,7 @@ class TaskRegistry(UserDict):
         """Unregister task by name.
 
         :param name: name of the task to unregister, or a
-            :class:`celery.task.base.Task` with a valid ``name`` attribute.
+            :class:`celery.task.base.Task` with a valid `name` attribute.
 
         :raises celery.exceptions.NotRegistered: if the task has not
             been registered.

+ 50 - 67
celery/result.py

@@ -18,18 +18,17 @@ class BaseAsyncResult(object):
     :param task_id: see :attr:`task_id`.
     :param backend: see :attr:`backend`.
 
-    .. attribute:: task_id
-
-        The unique identifier for this task.
-
-    .. attribute:: backend
-
-        The task result backend used.
-
     """
 
+    #: Error raised for timeouts.
     TimeoutError = TimeoutError
 
+    #: The task uuid.
+    task_id = None
+
+    #: The task result backend to use.
+    backend = None
+
     def __init__(self, task_id, backend, app=None):
         self.task_id = task_id
         self.backend = backend
@@ -42,24 +41,25 @@ class BaseAsyncResult(object):
     def revoke(self, connection=None, connect_timeout=None):
         """Send revoke signal to all workers.
 
-        The workers will ignore the task if received.
+        Any worker receiving the task, or having reserved the
+        task, *must* ignore it.
 
         """
         self.app.control.revoke(self.task_id, connection=connection,
                                 connect_timeout=connect_timeout)
 
     def wait(self, timeout=None):
-        """Wait for task, and return the result when it arrives.
+        """Wait for task, and return the result.
 
         :keyword timeout: How long to wait, in seconds, before the
-            operation times out.
+                          operation times out.
 
-        :raises celery.exceptions.TimeoutError: if ``timeout`` is not
-            :const:`None` and the result does not arrive within ``timeout``
+        :raises celery.exceptions.TimeoutError: if `timeout` is not
+            :const:`None` and the result does not arrive within `timeout`
             seconds.
 
-        If the remote call raised an exception then that
-        exception will be re-raised.
+        If the remote call raised an exception then that exception will
+        be re-raised.
 
         """
         return self.backend.wait_for(self.task_id, timeout=timeout)
@@ -69,8 +69,7 @@ class BaseAsyncResult(object):
         return self.wait(timeout=timeout)
 
     def ready(self):
-        """Returns :const:`True` if the task executed successfully, or raised
-        an exception.
+        """Returns :const:`True` if the task has been executed.
 
         If the task is still running, pending, or is waiting
         for retry then :const:`False` is returned.
@@ -87,11 +86,11 @@ class BaseAsyncResult(object):
         return self.status == states.FAILURE
 
     def __str__(self):
-        """``str(self) -> self.task_id``"""
+        """`str(self) -> self.task_id`"""
         return self.task_id
 
     def __hash__(self):
-        """``hash(self) -> hash(self.task_id)``"""
+        """`hash(self) -> hash(self.task_id)`"""
         return hash(self.task_id)
 
     def __repr__(self):
@@ -108,19 +107,12 @@ class BaseAsyncResult(object):
     @property
     def result(self):
         """When the task has been executed, this contains the return value.
-
-        If the task raised an exception, this will be the exception instance.
-
-        """
+        If the task raised an exception, this will be the exception instance."""
         return self.backend.get_result(self.task_id)
 
     @property
     def info(self):
-        """Get state metadata.
-
-        Alias to :meth:`result`.
-
-        """
+        """Get state metadata.  Alias to :meth:`result`."""
         return self.result
 
     @property
@@ -135,9 +127,9 @@ class BaseAsyncResult(object):
 
     @property
     def state(self):
-        """The current status of the task.
+        """The task's current state.
 
-        Can be one of the following:
+        Possible values include:
 
             *PENDING*
 
@@ -169,18 +161,15 @@ class AsyncResult(BaseAsyncResult):
 class AsyncResult(BaseAsyncResult):
     """Pending task result using the default backend.
 
-    :param task_id: see :attr:`task_id`.
+    :param task_id: The task's uuid.
 
+    """
 
-    .. attribute:: task_id
-
-        The unique identifier for this task.
-
-    .. attribute:: backend
-
-        Instance of :class:`celery.backends.DefaultBackend`.
+    #: The task's uuid.
+    uuid = None
 
-    """
+    #: Task result store backend to use.
+    backend = None
 
     def __init__(self, task_id, backend=None, app=None):
         app = app_or_default(app)
@@ -189,24 +178,22 @@ class AsyncResult(BaseAsyncResult):
 
 
 class TaskSetResult(object):
-    """Working with :class:`~celery.task.TaskSet` results.
+    """Working with :class:`~celery.task.sets.TaskSet` results.
 
     An instance of this class is returned by
-    ``TaskSet``'s :meth:`~celery.task.TaskSet.apply_async()`. It enables
-    inspection of the subtasks status and return values as a single entity.
-
-    :option taskset_id: see :attr:`taskset_id`.
-    :option subtasks: see :attr:`subtasks`.
+    `TaskSet`'s :meth:`~celery.task.TaskSet.apply_async()`.  It enables
+    inspection of the subtasks' state and return values as a single entity.
 
-    .. attribute:: taskset_id
+    :param taskset_id: The id of the taskset.
+    :param subtasks: List of result instances.
 
-        The UUID of the taskset itself.
-
-    .. attribute:: subtasks
+    """
 
-        A list of :class:`AsyncResult` instances for all of the subtasks.
+    #: The UUID of the taskset.
+    taskset_id = None
 
-    """
+    #: A list of :class:`AsyncResult` instances for all of the subtasks.
+    subtasks = None
 
     def __init__(self, taskset_id, subtasks, app=None):
         self.taskset_id = taskset_id
@@ -278,6 +265,7 @@ class TaskSetResult(object):
             subtask.forget()
 
     def revoke(self, connection=None, connect_timeout=None):
+        """Revoke all subtasks."""
 
         def _do_revoke(connection=None, connect_timeout=None):
             for subtask in self.subtasks:
@@ -287,10 +275,11 @@ class TaskSetResult(object):
                 connection=connection, connect_timeout=connect_timeout)
 
     def __iter__(self):
-        """``iter(res)`` -> ``res.iterate()``."""
+        """`iter(res)` -> `res.iterate()`."""
         return self.iterate()
 
     def __getitem__(self, index):
+        """`res[i] -> res.subtasks[i]`"""
         return self.subtasks[index]
 
     def iterate(self):
@@ -319,25 +308,19 @@ class TaskSetResult(object):
         """Gather the results of all tasks in the taskset,
         and returns a list ordered by the order of the set.
 
-        :keyword timeout: The number of seconds to wait for results
-            before the operation times out.
+        :keyword timeout: The number of seconds to wait for results before
+                          the operation times out.
 
         :keyword propagate: If any of the subtasks raises an exception, the
-            exception will be reraised.
+                            exception will be reraised.
 
-        :raises celery.exceptions.TimeoutError: if ``timeout`` is not
-            :const:`None` and the operation takes longer than ``timeout``
+        :raises celery.exceptions.TimeoutError: if `timeout` is not
+            :const:`None` and the operation takes longer than `timeout`
             seconds.
 
-        :returns: list of return values for all subtasks in order.
-
         """
 
         time_start = time.time()
-
-        def on_timeout():
-            raise TimeoutError("The operation timed out.")
-
         results = PositionQueue(length=self.total)
 
         while True:
@@ -354,12 +337,12 @@ class TaskSetResult(object):
             else:
                 if (timeout is not None and
                         time.time() >= time_start + timeout):
-                    on_timeout()
+                    raise TimeoutError("join operation timed out.")
 
     def save(self, backend=None):
         """Save taskset result for later retrieval using :meth:`restore`.
 
-        Example:
+        Example::
 
             >>> result.save()
            >>> result = TaskSetResult.restore(task_id)
@@ -378,12 +361,12 @@ class TaskSetResult(object):
 
     @property
     def total(self):
-        """The total number of tasks in the :class:`~celery.task.TaskSet`."""
+        """Total number of subtasks in the set."""
         return len(self.subtasks)
 
 
 class EagerResult(BaseAsyncResult):
-    """Result that we know has already been executed.  """
+    """Result that we know has already been executed."""
     TimeoutError = TimeoutError
 
     def __init__(self, task_id, ret_value, status, traceback=None):

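The `wait` contract above in practice: the call either returns the task's
result, re-raises the task's exception, or raises
:exc:`celery.exceptions.TimeoutError`. A sketch (`task_id` obtained
elsewhere)::

    from celery.exceptions import TimeoutError

    result = AsyncResult(task_id)
    try:
        value = result.wait(timeout=10)   # re-raises the task's exception
    except TimeoutError:
        value = None                      # no result within 10 seconds
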
+ 4 - 4
celery/routes.py

@@ -5,13 +5,13 @@ _first_route = firstmethod("route_for_task")
 
 
 def merge(a, b):
-    """Like ``dict(a, **b)`` except it will keep values from ``a``,
-    if the value in ``b`` is :const:`None`."""
+    """Like `dict(a, **b)` except it will keep values from `a`, if the value
+    in `b` is :const:`None`."""
     return dict(a, **dict((k, v) for k, v in b.iteritems() if v is not None))
 
 
 class MapRoute(object):
-    """Makes a router out of a :class:`dict`."""
+    """Creates a router out of a :class:`dict`."""
 
     def __init__(self, map):
         self.map = map
@@ -75,7 +75,7 @@ class Router(object):
 
 
 def prepare(routes):
-    """Expand ROUTES setting."""
+    """Expands the :setting:`CELERY_ROUTES` setting."""
 
     def expand_route(route):
         if isinstance(route, dict):

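A worked example of the `merge` semantics: values from `b` win, except where
they are :const:`None`::

    >>> merge({"queue": "default", "priority": 3},
    ...       {"queue": "video", "priority": None})
    {'queue': 'video', 'priority': 3}
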
+ 5 - 5
celery/schedules.py

@@ -19,15 +19,15 @@ class schedule(object):
         return remaining(last_run_at, self.run_every, relative=self.relative)
 
     def is_due(self, last_run_at):
-        """Returns tuple of two items ``(is_due, next_time_to_run)``,
+        """Returns tuple of two items `(is_due, next_time_to_run)`,
         where next time to run is in seconds.
 
         e.g.
 
-        * ``(True, 20)``, means the task should be run now, and the next
+        * `(True, 20)`, means the task should be run now, and the next
            time to run is in 20 seconds.
 
-        * ``(False, 12)``, means the task should be run in 12 seconds.
+        * `(False, 12)`, means the task should be run in 12 seconds.
 
         You can override this to decide the interval at runtime,
         but keep in mind the value of :setting:`CELERYBEAT_MAX_LOOP_INTERVAL`,
@@ -145,7 +145,7 @@ class crontab_parser(object):
 
 
 class crontab(schedule):
-    """A crontab can be used as the ``run_every`` value of a
+    """A crontab can be used as the `run_every` value of a
     :class:`PeriodicTask` to add cron-like scheduling.
 
     Like a :manpage:`cron` job, you can specify units of time of when
@@ -292,7 +292,7 @@ class crontab(schedule):
         return remaining(last_run_at, delta, now=self.nowfun())
 
     def is_due(self, last_run_at):
-        """Returns tuple of two items ``(is_due, next_time_to_run)``,
+        """Returns tuple of two items `(is_due, next_time_to_run)`,
         where next time to run is in seconds.
 
         See :meth:`celery.schedules.schedule.is_due` for more information.

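Since `is_due` can be overridden, a minimal custom schedule might look like
this (illustrative subclass; for brevity it ignores gaps longer than a day)::

    from datetime import datetime

    class every_twenty_seconds(schedule):

        def is_due(self, last_run_at):
            elapsed = (datetime.now() - last_run_at).seconds
            if elapsed >= 20:
                return (True, 20)            # run now, check again in 20s
            return (False, 20 - elapsed)     # run when 20s have passed
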
+ 15 - 17
celery/serialization.py

@@ -32,6 +32,7 @@ try:
 except NameError:
     _error_bases = (SystemExit, KeyboardInterrupt)
 
+#: List of base classes we probably don't want to reduce to.
 unwanted_base_classes = (StandardError, Exception) + _error_bases + (object, )
 
 
@@ -47,15 +48,15 @@ else:
 
 def find_nearest_pickleable_exception(exc):
     """With an exception instance, iterate over its super classes (by mro)
-    and find the first super exception that is pickleable. It does
+    and find the first super exception that is pickleable.  It does
     not go below :exc:`Exception` (i.e. it skips :exc:`Exception`,
-    :class:`BaseException` and :class:`object`). If that happens
+    :class:`BaseException` and :class:`object`).  If that happens
     you should use :exc:`UnpickleableException` instead.
 
     :param exc: An exception instance.
 
     :returns: the nearest exception if it's not :exc:`Exception` or below,
-        if it is it returns :const:`None`.
+              if it is it returns :const:`None`.
 
     :rtype :exc:`Exception`:
 
@@ -98,24 +99,12 @@ class UnpickleableExceptionWrapper(Exception):
     """Wraps unpickleable exceptions.
 
     :param exc_module: see :attr:`exc_module`.
-
     :param exc_cls_name: see :attr:`exc_cls_name`.
-
     :param exc_args: see :attr:`exc_args`
 
-    .. attribute:: exc_module
-
-        The module of the original exception.
-
-    .. attribute:: exc_cls_name
-
-        The name of the original exception class.
+    **Example**
 
-    .. attribute:: exc_args
-
-        The arguments for the original exception.
-
-    Example
+    .. code-block:: python
 
         >>> try:
         ...     something_raising_unpickleable_exc()
@@ -127,6 +116,15 @@ class UnpickleableExceptionWrapper(Exception):
 
     """
 
+    #: The module of the original exception.
+    exc_module = None
+
+    #: The name of the original exception class.
+    exc_cls_name = None
+
+    #: The arguments for the original exception.
+    exc_args = None
+
     def __init__(self, exc_module, exc_cls_name, exc_args):
         self.exc_module = exc_module
         self.exc_cls_name = exc_cls_name

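The MRO walk in `find_nearest_pickleable_exception` can be pictured with this
simplified stand-in (our illustration, not the module's code)::

    import pickle

    def nearest_pickleable_class(exc):
        # walk the superclasses, skipping the unwanted bases, until an
        # ancestor survives a pickle round-trip
        for cls in exc.__class__.mro():
            if cls in unwanted_base_classes:
                return None
            try:
                pickle.loads(pickle.dumps(cls(*exc.args)))
            except Exception:
                continue
            return cls
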
+ 203 - 242
celery/task/base.py

@@ -43,6 +43,7 @@ _default_context = {"logfile": None,
                     "is_eager": False,
                     "delivery_info": None}
 
+
 def _unpickle_task(name):
     return tasks[name]
 
@@ -64,9 +65,9 @@ class TaskType(type):
     """Metaclass for tasks.
 
     Automatically registers the task in the task registry, except
-    if the ``abstract`` attribute is set.
+    if the `abstract` attribute is set.
 
-    If no ``name`` attribute is provided, the name is automatically
+    If no `name` attribute is provided, the name is automatically
     set to the name of the module it was defined in, and the class name.
 
     """
@@ -105,7 +106,7 @@ class BaseTask(object):
     """A celery task.
 
     All subclasses of :class:`Task` must define the :meth:`run` method,
-    which is the actual method the ``celery`` daemon executes.
+    which is the actual method the `celery` daemon executes.
 
     The :meth:`run` method can make use of the default keyword arguments,
     as listed in the :meth:`run` documentation.
@@ -113,177 +114,128 @@
     The resulting class is callable, which if called will apply the
     :meth:`run` method.
 
-    .. attribute:: app
-
-        The application instance associated with this task class.
-
-    .. attribute:: name
-
-        Name of the task.
-
-    .. attribute:: abstract
-
-        If :const:`True` the task is an abstract base class.
-
-    .. attribute:: type
-
-        The type of task, currently unused.
-
-    .. attribute:: queue
-
-        Select a destination queue for this task. The queue needs to exist
-        in :setting:`CELERY_QUEUES`. The ``routing_key``, ``exchange`` and
-        ``exchange_type`` attributes will be ignored if this is set.
-
-    .. attribute:: routing_key
-
-        Override the global default ``routing_key`` for this task.
-
-    .. attribute:: exchange
-
-        Override the global default ``exchange`` for this task.
-
-    .. attribute:: exchange_type
-
-        Override the global default exchange type for this task.
-
-    .. attribute:: delivery_mode
-
-        Override the global default delivery mode for this task.
-        By default this is set to ``2`` (persistent). You can change this
-        to ``1`` to get non-persistent behavior, which means the messages
-        are lost if the broker is restarted.
-
-    .. attribute:: mandatory
-
-        Mandatory message routing. An exception will be raised if the task
-        can't be routed to a queue.
-
-    .. attribute:: immediate:
-
-        Request immediate delivery. An exception will be raised if the task
-        can't be routed to a worker immediately.
-
-    .. attribute:: priority:
-
-        The message priority. A number from ``0`` to ``9``, where ``0``
-        is the highest. Note that RabbitMQ doesn't support priorities yet.
-
-    .. attribute:: max_retries
-
-        Maximum number of retries before giving up.
-        If set to :const:`None`, it will never stop retrying.
-
-    .. attribute:: default_retry_delay
-
-        Default time in seconds before a retry of the task should be
-        executed. Default is a 3 minute delay.
-
-    .. attribute:: rate_limit
-
-        Set the rate limit for this task type, Examples: :const:`None` (no
-        rate limit), ``"100/s"`` (hundred tasks a second), ``"100/m"``
-        (hundred tasks a minute), ``"100/h"`` (hundred tasks an hour)
-
-    .. attribute:: ignore_result
+    """
+    __metaclass__ = TaskType
 
 
-        Don't store the return value of this task.
+    MaxRetriesExceededError = MaxRetriesExceededError
 
 
-    .. attribute:: store_errors_even_if_ignored
+    #: The application instance associated with this task class.
+    app = None
 
 
-        If true, errors will be stored even if the task is configured
-        to ignore results.
+    #: Name of the task.
+    name = None
 
 
-    .. attribute:: send_error_emails
+    #: If :const:`True` the task is an abstract base class.
+    abstract = True
 
 
-        If true, an e-mail will be sent to the admins whenever
-        a task of this type raises an exception.
+    #: If disabled the worker will not forward magic keyword arguments.
+    accept_magic_kwargs = True
 
 
-    .. attribute:: error_whitelist
+    #: Current request context (when task is executed).
+    request = Context()
 
 
-        List of exception types to send error e-mails for.
+    #: Destination queue.  The queue needs to exist
+    #: in :setting:`CELERY_QUEUES`.  The `routing_key`, `exchange` and
+    #: `exchange_type` attributes will be ignored if this is set.
+    queue = None
 
 
-    .. attribute:: serializer
+    #: Overrides the apps default `routing_key` for this task.
+    routing_key = None
 
 
-        The name of a serializer that has been registered with
-        :mod:`kombu.serialization.registry`. Example: ``"json"``.
+    #: Overrides the apps default `exchange` for this task.
+    exchange = None
 
 
-    .. attribute:: backend
+    #: Overrides the apps default exchange type for this task.
+    exchange_type = None
 
 
-        The result store backend used for this task.
+    #: Override the apps default delivery mode for this task.  Default is
+    #: `"persistent"`, but you can change this to `"transient"`, which means
+    #: messages will be lost if the broker is restarted.  Consult your broker
+    #: manual for any additional delivery modes.
+    delivery_mode = None
 
 
-    .. attribute:: autoregister
+    #: Mandatory message routing.
+    mandatory = False
 
 
-        If :const:`True` the task is automatically registered in the task
-        registry, which is the default behaviour.
+    #: Request immediate delivery.
+    immediate = False
 
 
-    .. attribute:: track_started
+    #: Default message priority.  A number between 0 to 9, where 0 is the
+    #: highest.  Note that RabbitMQ does not support priorities.
+    priority = None
 
 
-        If :const:`True` the task will report its status as "started"
-        when the task is executed by a worker.
-        The default value is ``False`` as the normal behaviour is to not
-        report that level of granularity. Tasks are either pending,
-        finished, or waiting to be retried.
+    #: Maximum number of retries before giving up.  If set to :const:`None`,
+    #: it will **never** stop retrying.
+    max_retries = 3
 
 
-        Having a "started" status can be useful for when there are long
-        running tasks and there is a need to report which task is
-        currently running.
+    #: Default time in seconds before a retry of the task should be
+    #: executed.  3 minutes by default.
+    default_retry_delay = 3 * 60
 
 
-        The global default can be overridden with the
-        :setting:`CELERY_TRACK_STARTED` setting.
+    #: Rate limit for this task type.  Examples: :const:`None` (no rate
+    #: limit), `"100/s"` (hundred tasks a second), `"100/m"` (hundred tasks
+    #: a minute),`"100/h"` (hundred tasks an hour)
+    rate_limit = None
 
 
-    .. attribute:: acks_late
+    #: If enabled the worker will not store task state and return values
+    #: for this task.  Defaults to the :setting:`CELERY_IGNORE_RESULT`
+    #: setting.
+    ignore_result = False
 
 
-        If set to :const:`True` messages for this task will be acknowledged
-        **after** the task has been executed, not *just before*, which is
-        the default behavior.
+    #: When enabled errors will be stored even if the task is otherwise
+    #: configured to ignore results.
+    store_errors_even_if_ignored = False
 
 
-        Note that this means the task may be executed twice if the worker
-        crashes in the middle of execution, which may be acceptable for some
-        applications.
+    #: If enabled, an e-mail will be sent to :setting:`ADMINS` whenever a task
+    #: of this type fails.
+    send_error_emails = False
 
 
-        The global default can be overriden by the :setting:`CELERY_ACKS_LATE`
-        setting.
+    disable_error_emails = False                            # FIXME
 
 
-    .. attribute:: expires
+    #: List of exception types to send error e-mails for.
+    error_whitelist = ()
 
 
-        Default task expiry time in seconds or a :class:`~datetime.datetime`.
+    #: The name of a serializer that has been registered with
+    #: :mod:`kombu.serialization.registry`.  Default is `"pickle"`.
+    serializer = "pickle"
 
 
-    """
-    __metaclass__ = TaskType
+    #: The result store backend used for this task.
+    backend = None
 
 
-    app = None
-    name = None
-    abstract = True
+    #: If disabled, the task will not be automatically registered
+    #: in the task registry.
     autoregister = True
-    type = "regular"
-    accept_magic_kwargs = True
-    request = Context()
-
-    queue = None
-    routing_key = None
-    exchange = None
-    exchange_type = None
-    delivery_mode = None
-    immediate = False
-    mandatory = False
-    priority = None
 
 
-    ignore_result = False
-    store_errors_even_if_ignored = False
-    send_error_emails = False
-    error_whitelist = ()
-    disable_error_emails = False                            # FIXME
-    max_retries = 3
-    default_retry_delay = 3 * 60
-    serializer = "pickle"
-    rate_limit = None
-    backend = None
+    #: If enabled, the task will report its status as "started" when the task
+    #: is executed by a worker.  Disabled by default as the normal behaviour
+    #: is to not report that level of granularity.  Tasks are either pending,
+    #: finished, or waiting to be retried.
+    #:
+    #: Having a "started" status can be useful for when there are long
+    #: running tasks and there is a need to report which task is currently
+    #: running.
+    #:
+    #: The application default can be overridden using the
+    #: :setting:`CELERY_TRACK_STARTED` setting.
     track_started = False
+
+    #: When enabled, messages for this task will be acknowledged **after**
+    #: the task has been executed, and not *just before*, which is the
+    #: default behavior.
+    #:
+    #: Please note that this means the task may be executed twice if the
+    #: worker crashes mid-execution (which may be acceptable for some
+    #: applications).
+    #:
+    #: The application default can be overridden with the
+    #: :setting:`CELERY_ACKS_LATE` setting.
     acks_late = False
+
+    #: Default task expiry time.
     expires = None
 
 
-    MaxRetriesExceededError = MaxRetriesExceededError
+    #: The type of task *(no longer used)*.
+    type = "regular"
 
 
     def __call__(self, *args, **kwargs):
         return self.run(*args, **kwargs)
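
For example, a task type can override several of these attributes at once
(a minimal sketch, assuming the class-based API above; the task name and
body are illustrative):

.. code-block:: python

    from celery.task.base import Task

    class RefreshFeedTask(Task):
        routing_key = "feed.refresh"    # instead of the app default
        rate_limit = "100/m"            # at most a hundred tasks a minute
        max_retries = 5
        default_retry_delay = 60        # retry after one minute
        ignore_result = True            # don't store return values

        def run(self, feed_url, **kwargs):
            return feed_url             # placeholder body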
@@ -298,19 +250,22 @@ class BaseTask(object):
         automatically passed by the worker if the function/method
         supports them:
 
 
-            * task_id
-            * task_name
-            * task_retries
-            * task_is_eager
-            * logfile
-            * loglevel
-            * delivery_info
+            * `task_id`
+            * `task_name`
+            * `task_retries`
+            * `task_is_eager`
+            * `logfile`
+            * `loglevel`
+            * `delivery_info`
 
 
-        Additional standard keyword arguments may be added in the future.
         To take these default arguments, the task can either list the ones
         it wants explicitly or just take an arbitrary list of keyword
         arguments (\*\*kwargs).
 
 
+        Magic keyword arguments can be disabled using the
+        :attr:`accept_magic_kwargs` flag.  The information can then
+        be found in the :attr:`request` attribute.
+
         """
         """
         raise NotImplementedError("Tasks must define the run method.")
         raise NotImplementedError("Tasks must define the run method.")
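
For example, with magic keyword arguments disabled the same information
is available from the request context (a minimal sketch; the task is
illustrative):

.. code-block:: python

    from celery.task.base import Task

    class AddTask(Task):
        accept_magic_kwargs = False

        def run(self, x, y):
            print("id: %s retries: %s" % (
                self.request.id, self.request.retries))
            return x + y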
 
 
@@ -341,11 +296,11 @@ class BaseTask(object):
 
 
         :rtype :class:`~celery.app.amqp.TaskPublisher`:
 
 
-        Please be sure to close the AMQP connection when you're done
-        with this object, i.e.:
+        Please be sure to close the AMQP connection after you're done
+        with this object.  Example::
 
 
             >>> publisher = self.get_publisher()
-            >>> # do something with publisher
+            >>> # ... do something with publisher
             >>> publisher.connection.close()
 
 
         """
         """
@@ -361,12 +316,12 @@ class BaseTask(object):
 
 
     @classmethod
     def get_consumer(self, connection=None, connect_timeout=None):
-        """Get a celery task message consumer.
+        """Get message consumer.
 
 
         :rtype :class:`~celery.app.amqp.TaskConsumer`:
 
 
         Please be sure to close the AMQP connection when you're done
-        with this object. i.e.:
+        with this object.  Example::
 
 
             >>> consumer = self.get_consumer()
             >>> # do something with consumer
@@ -380,8 +335,8 @@ class BaseTask(object):
 
 
     @classmethod
     def delay(self, *args, **kwargs):
-        """Shortcut to :meth:`apply_async`, with star arguments,
-        but doesn't support the extra options.
+        """Shortcut to :meth:`apply_async` giving star arguments, but without
+        options.
 
 
         :param \*args: positional arguments passed on to the task.
         :param \*\*kwargs: keyword arguments passed on to the task.
@@ -399,74 +354,78 @@ class BaseTask(object):
         """Run a task asynchronously by the celery daemon(s).
         """Run a task asynchronously by the celery daemon(s).
 
 
         :keyword args: The positional arguments to pass on to the
-            task (a :class:`list` or :class:`tuple`).
+                       task (a :class:`list` or :class:`tuple`).
 
 
         :keyword kwargs: The keyword arguments to pass on to the
-            task (a :class:`dict`)
+                         task (a :class:`dict`)
 
 
         :keyword countdown: Number of seconds into the future that the
-            task should execute. Defaults to immediate delivery (Do not
-            confuse that with the ``immediate`` setting, they are
-            unrelated).
+                            task should execute. Defaults to immediate
+                            delivery (do not confuse with the
+                            `immediate` flag, as they are unrelated).
 
 
-        :keyword eta: A :class:`~datetime.datetime` object that describes
-            the absolute time and date of when the task should execute.
-            May not be specified if ``countdown`` is also supplied. (Do
-            not confuse this with the ``immediate`` setting, they are
-            unrelated).
+        :keyword eta: A :class:`~datetime.datetime` object describing
+                      the absolute time and date of when the task should
+                      be executed.  May not be specified if `countdown`
+                      is also supplied.  (Do not confuse this with the
+                      `immediate` flag, as they are unrelated).
 
 
         :keyword expires: Either a :class:`int`, describing the number of
-            seconds, or a :class:`~datetime.datetime` object that
-            describes the absolute time and date of when the task should
-            expire. The task will not be executed after the
-            expiration time.
+                          seconds, or a :class:`~datetime.datetime` object
+                          that describes the absolute time and date of when
+                          the task should expire.  The task will not be
+                          executed after the expiration time.
 
 
         :keyword connection: Re-use existing broker connection instead
-            of establishing a new one. The ``connect_timeout`` argument
-            is not respected if this is set.
+                             of establishing a new one.  The `connect_timeout`
+                             argument is not respected if this is set.
 
 
-        :keyword connect_timeout: The timeout in seconds, before we give
-            up on establishing a connection to the AMQP server.
+        :keyword connect_timeout: The timeout in seconds, before we give up
+                                  on establishing a connection to the AMQP
+                                  server.
 
 
         :keyword routing_key: The routing key used to route the task to a
-            worker server. Defaults to the tasks
-            :attr:`routing_key` attribute.
+                              worker server.  Defaults to the
+                              :attr:`routing_key` attribute.
 
 
         :keyword exchange: The named exchange to send the task to.
-            Defaults to the tasks :attr:`exchange` attribute.
+                           Defaults to the :attr:`exchange` attribute.
 
 
-        :keyword exchange_type: The exchange type to initalize the
-            exchange if not already declared. Defaults to the tasks
-            :attr:`exchange_type` attribute.
+        :keyword exchange_type: The exchange type to initialize the exchange
+                                if not already declared.  Defaults to the
+                                :attr:`exchange_type` attribute.
 
 
-        :keyword immediate: Request immediate delivery. Will raise an
-            exception if the task cannot be routed to a worker
-            immediately.  (Do not confuse this parameter with
-            the ``countdown`` and ``eta`` settings, as they are
-            unrelated). Defaults to the tasks :attr:`immediate` attribute.
+        :keyword immediate: Request immediate delivery.  Will raise an
+                            exception if the task cannot be routed to a worker
+                            immediately.  (Do not confuse this parameter with
+                            the `countdown` and `eta` settings, as they are
+                            unrelated).  Defaults to the :attr:`immediate`
+                            attribute.
 
 
         :keyword mandatory: Mandatory routing. Raises an exception if
-            there's no running workers able to take on this task.
-            Defaults to the tasks :attr:`mandatory` attribute.
+                            there are no running workers able to take on this
+                            task.  Defaults to the :attr:`mandatory`
+                            attribute.
 
 
         :keyword priority: The task priority, a number between 0 and 9.
-            Defaults to the tasks :attr:`priority` attribute.
+                           Defaults to the :attr:`priority` attribute.
 
 
         :keyword serializer: A string identifying the default
-            serialization method to use. Defaults to the
-            ``CELERY_TASK_SERIALIZER`` setting. Can be ``pickle``,
-            ``json``, ``yaml``, or any custom serialization method
-            that has been registered with
-            :mod:`kombu.serialization.registry`.  Defaults to the tasks
-            :attr:`serializer` attribute.
+                             serialization method to use.  Can be `pickle`,
+                             `json`, `yaml`, `msgpack` or any custom
+                             serialization method that has been registered
+                             with :mod:`kombu.serialization.registry`.
+                             Defaults to the :attr:`serializer` attribute.
 
 
         :keyword compression: A string identifying the compression method
-            to use.  Defaults to the :setting:`CELERY_MESSAGE_COMPRESSION`
-            setting.  Can be one of ``zlib``, ``bzip2``, or any custom
-            compression methods registered with
-            :func:`kombu.compression.register`.  **Only supported by Kombu.**
-
-        **Note**: If the ``CELERY_ALWAYS_EAGER`` setting is set, it will
+                              to use.  Can be one of `zlib`, `bzip2`,
+                              or any custom compression methods registered with
+                              :func:`kombu.compression.register`. Defaults to
+                              the :setting:`CELERY_MESSAGE_COMPRESSION`
+                              setting.
+
+        .. note::
+            If the :setting:`CELERY_ALWAYS_EAGER` setting is set, it will
             be replaced by a local :func:`apply` call instead.
 
 
         """
         """
@@ -507,38 +466,41 @@ class BaseTask(object):
         :param args: Positional arguments to retry with.
         :param kwargs: Keyword arguments to retry with.
         :keyword exc: Optional exception to raise instead of
-            :exc:`~celery.exceptions.MaxRetriesExceededError` when the max
-            restart limit has been exceeded.
+                      :exc:`~celery.exceptions.MaxRetriesExceededError`
+                      when the max restart limit has been exceeded.
         :keyword countdown: Time in seconds to delay the retry for.
         :keyword eta: Explicit time and date to run the retry at
-        (must be a :class:`~datetime.datetime` instance).
+                      (must be a :class:`~datetime.datetime` instance).
         :keyword \*\*options: Any extra options to pass on to
-            meth:`apply_async`. See :func:`celery.execute.apply_async`.
-        :keyword throw: If this is ``False``, do not raise the
-            :exc:`~celery.exceptions.RetryTaskError` exception,
-            that tells the worker to mark the task as being retried.
-            Note that this means the task will be marked as failed
-            if the task raises an exception, or successful if it
-            returns.
+                              :meth:`apply_async`.
+        :keyword throw: If this is :const:`False`, do not raise the
+                        :exc:`~celery.exceptions.RetryTaskError` exception,
+                        that tells the worker to mark the task as being
+                        retried.  Note that this means the task will be
+                        marked as failed if the task raises an exception,
+                        or successful if it returns.
 
 
         :raises celery.exceptions.RetryTaskError: To tell the worker that
             the task has been re-sent for retry. This always happens,
-            unless the ``throw`` keyword argument has been explicitly set
-            to ``False``, and is considered normal operation.
-
-        Example
-
-            >>> class TwitterPostStatusTask(Task):
-            ...
-            ...     def run(self, username, password, message, **kwargs):
-            ...         twitter = Twitter(username, password)
-            ...         try:
-            ...             twitter.post_status(message)
-            ...         except twitter.FailWhale, exc:
-            ...             # Retry in 5 minutes.
-            ...             self.retry([username, password, message],
-            ...                        kwargs,
-            ...                        countdown=60 * 5, exc=exc)
+            unless the `throw` keyword argument has been explicitly set
+            to :const:`False`, and is considered normal operation.
+
+        **Example**
+
+        .. code-block:: python
+
+            >>> @task
+            ... def tweet(auth, message):
+            ...     twitter = Twitter(oauth=auth)
+            ...     try:
+            ...         twitter.post_status_update(message)
+            ...     except twitter.FailWhale, exc:
+            ...         # Retry in 5 minutes.
+            ...         return tweet.retry(countdown=60 * 5, exc=exc)
+
+        Although the task will never return here, since `retry` raises an
+        exception to notify the worker, we use `return` in front of the
+        retry to convey that the rest of the block will not be executed.
 
 
         """
         """
         request = self.request
         request = self.request
@@ -555,7 +517,7 @@ class BaseTask(object):
         options["retries"] = request.retries + 1
         options["retries"] = request.retries + 1
         options["task_id"] = request.id
         options["task_id"] = request.id
         options["countdown"] = options.get("countdown",
         options["countdown"] = options.get("countdown",
-                                        self.default_retry_delay)
+                                           self.default_retry_delay)
         max_exc = exc or self.MaxRetriesExceededError(
                 "Can't retry %s[%s] args:%s kwargs:%s" % (
                     self.name, options["task_id"], args, kwargs))
@@ -584,13 +546,12 @@ class BaseTask(object):
 
 
         :param args: positional arguments passed on to the task.
         :param kwargs: keyword arguments passed on to the task.
-        :keyword throw: Re-raise task exceptions. Defaults to
-            the :setting:`CELERY_EAGER_PROPAGATES_EXCEPTIONS` setting.
+        :keyword throw: Re-raise task exceptions.  Defaults to
+                        the :setting:`CELERY_EAGER_PROPAGATES_EXCEPTIONS`
+                        setting.
 
 
         :rtype :class:`celery.result.EagerResult`:
 
 
-        See :func:`celery.execute.apply`.
-
         """
         """
         args = args or []
         args = args or []
         kwargs = kwargs or {}
         kwargs = kwargs or {}
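
For example (a minimal sketch; `add` is an illustrative task):

.. code-block:: python

    >>> result = add.apply(args=[2, 2])
    >>> result.get()
    4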
@@ -664,7 +625,7 @@ class BaseTask(object):
         :param kwargs: Original keyword arguments for the retried task.
 
 
         :keyword einfo: :class:`~celery.datastructures.ExceptionInfo`
-        instance, containing the traceback.
+                        instance, containing the traceback.
 
 
         The return value of this handler is ignored.
 
 
@@ -680,10 +641,10 @@ class BaseTask(object):
         :param task_id: Unique id of the task.
         :param args: Original arguments for the task that failed.
         :param kwargs: Original keyword arguments for the task
-            that failed.
+                       that failed.
 
 
         :keyword einfo: :class:`~celery.datastructures.ExceptionInfo`
-        instance, containing the traceback (if any).
+                        instance, containing the traceback (if any).
 
 
         The return value of this handler is ignored.
 
 
@@ -699,10 +660,10 @@ class BaseTask(object):
         :param task_id: Unique id of the failed task.
         :param args: Original arguments for the task that failed.
         :param kwargs: Original keyword arguments for the task
-            that failed.
+                       that failed.
 
 
         :keyword einfo: :class:`~celery.datastructures.ExceptionInfo`
-            instance, containing the traceback.
+                        instance, containing the traceback.
 
 
         The return value of this handler is ignored.
 
 
@@ -736,7 +697,7 @@ class BaseTask(object):
         wrapper.execute_using_pool(pool, loglevel, logfile)
 
 
     def __repr__(self):
-        """repr(task)"""
+        """`repr(task)`"""
         try:
             kind = self.__class__.mro()[1].__name__
         except (AttributeError, IndexError):            # pragma: no cover
@@ -745,8 +706,8 @@ class BaseTask(object):
 
 
     @classmethod
     def subtask(cls, *args, **kwargs):
-        """Returns a :class:`~celery.task.sets.subtask` object for
-        this task that wraps arguments and execution options
+        """Returns :class:`~celery.task.sets.subtask` object for
+        this task, wrapping arguments and execution options
         for a single task invocation."""
         for a single task invocation."""
         return subtask(cls, *args, **kwargs)
         return subtask(cls, *args, **kwargs)
 
 
@@ -873,7 +834,7 @@ class PeriodicTask(Task):
         return timedelta_seconds(delta)
 
 
     def is_due(self, last_run_at):
-        """Returns tuple of two items ``(is_due, next_time_to_run)``,
+        """Returns tuple of two items `(is_due, next_time_to_run)`,
         where next time to run is in seconds.
 
 
         See :meth:`celery.schedules.schedule.is_due` for more information.

+ 2 - 2
celery/task/builtins.py

@@ -26,7 +26,7 @@ class PingTask(Task):
     name = "celery.ping"
     name = "celery.ping"
 
 
     def run(self, **kwargs):
-        """:returns: the string ``"pong"``."""
+        """:returns: the string `"pong"`."""
         return "pong"
         return "pong"
 
 
 
 
@@ -53,7 +53,7 @@ class ExecuteRemoteTask(Task):
     is an internal component of.
 
 
     The object must be pickleable, so you can't use lambdas or functions
-    defined in the REPL (that is the python shell, or ``ipython``).
+    defined in the REPL (that is the python shell, or :program:`ipython`).
 
 
     """
     """
     name = "celery.execute_remote"
     name = "celery.execute_remote"

+ 1 - 1
celery/task/control.py

@@ -148,7 +148,7 @@ class Control(object):
 
 
         :param task_name: Type of task to change rate limit for.
         :param rate_limit: The rate limit as tasks per second, or a rate limit
-            string (``"100/m"``, etc.
+            string (`"100/m"`, etc.
             see :attr:`celery.task.base.Task.rate_limit` for
             more information).
         :keyword destination: If set, a list of the hosts to send the
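
For example, assuming the module-level shortcut is available (the task
name and host are illustrative):

.. code-block:: python

    >>> from celery.task.control import rate_limit
    >>> rate_limit("feed.refresh", "100/m")
    >>> rate_limit("feed.refresh", "10/s",
    ...            destination=["worker1.example.com"])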

+ 3 - 3
celery/task/http.py

@@ -106,8 +106,8 @@ class HttpDispatch(object):
     """Make task HTTP request and collect the task result.
     """Make task HTTP request and collect the task result.
 
 
     :param url: The URL to request.
-    :param method: HTTP method used. Currently supported methods are ``GET``
-        and ``POST``.
+    :param method: HTTP method used. Currently supported methods are `GET`
+        and `POST`.
     :param task_kwargs: Task keyword arguments.
     :param logger: Logger used for user/system feedback.
 
 
@@ -151,7 +151,7 @@ class HttpDispatchTask(BaseTask):
 
 
     :keyword url: The URL location of the HTTP callback task.
     :keyword method: Method to use when dispatching the callback. Usually
-        ``GET`` or ``POST``.
+        `GET` or `POST`.
     :keyword \*\*kwargs: Keyword arguments to pass on to the HTTP callback.
 
 
     .. attribute:: url

+ 1 - 1
celery/task/sets.py

@@ -73,7 +73,7 @@ class subtask(AttributeDict):
                              options=options or {})
 
 
     def delay(self, *argmerge, **kwmerge):
-        """Shortcut to ``apply_async(argmerge, kwargs)``."""
+        """Shortcut to `apply_async(argmerge, kwargs)`."""
         return self.apply_async(args=argmerge, kwargs=kwmerge)
 
 
     def apply(self, args=(), kwargs={}, **options):

+ 3 - 3
celery/tests/test_buckets.py

@@ -45,7 +45,7 @@ class test_TokenBucketQueue(unittest.TestCase):
     @skip_if_disabled
     def empty_queue_yields_QueueEmpty(self):
         x = buckets.TokenBucketQueue(fill_rate=10)
-        self.assertRaises(buckets.QueueEmpty, x.get)
+        self.assertRaises(buckets.Empty, x.get)
 
 
     @skip_if_disabled
     def test_bucket__put_get(self):
@@ -135,7 +135,7 @@ class test_TaskBucket(unittest.TestCase):
     @skip_if_disabled
     def test_get_nowait(self):
         x = buckets.TaskBucket(task_registry=self.registry)
-        self.assertRaises(buckets.QueueEmpty, x.get_nowait)
+        self.assertRaises(buckets.Empty, x.get_nowait)
 
 
     @skip_if_disabled
     def test_refresh(self):
@@ -197,7 +197,7 @@ class test_TaskBucket(unittest.TestCase):
     @skip_if_disabled
     def test_on_empty_buckets__get_raises_empty(self):
         b = buckets.TaskBucket(task_registry=self.registry)
-        self.assertRaises(buckets.QueueEmpty, b.get, block=False)
+        self.assertRaises(buckets.Empty, b.get, block=False)
         self.assertEqual(b.qsize(), 0)
 
 
     @skip_if_disabled

+ 2 - 1
celery/tests/test_worker_job.py

@@ -322,7 +322,8 @@ class test_TaskRequest(unittest.TestCase):
         tw = TaskRequest(mytask.name, gen_unique_id(), [], {})
         x = tw.success_msg % {"name": tw.task_name,
                               "id": tw.task_id,
-                              "return_value": 10}
+                              "return_value": 10,
+                              "runtime": 0.1376}
         self.assertTrue(x)
         x = tw.error_msg % {"name": tw.task_name,
                            "id": tw.task_id,

+ 3 - 3
celery/tests/utils.py

@@ -169,7 +169,7 @@ def skip(reason):
 
 
 
 
 def skip_if(predicate, reason):
-    """Skip test if predicate is ``True``."""
+    """Skip test if predicate is :const:`True`."""
 
 
     def _inner(fun):
         return predicate and skip(reason)(fun) or fun
@@ -178,7 +178,7 @@ def skip_if(predicate, reason):
 
 
 
 
 def skip_unless(predicate, reason):
-    """Skip test if predicate is ``False``."""
+    """Skip test if predicate is :const:`False`."""
     return skip_if(not predicate, reason)
 
 
 
 
@@ -218,7 +218,7 @@ def mask_modules(*modnames):
 
 
 @contextmanager
 def override_stdouts():
-    """Override ``sys.stdout`` and ``sys.stderr`` with ``StringIO``."""
+    """Override `sys.stdout` and `sys.stderr` with `StringIO`."""
     prev_out, prev_err = sys.stdout, sys.stderr
     mystdout, mystderr = StringIO(), StringIO()
     sys.stdout = sys.__stdout__ = mystdout

+ 6 - 7
celery/utils/__init__.py

@@ -112,7 +112,7 @@ def kwdict(kwargs):
 
 
 
 
 def first(predicate, iterable):
-    """Returns the first element in ``iterable`` that ``predicate`` returns a
+    """Returns the first element in `iterable` that `predicate` returns a
     :const:`True` value for."""
     for item in iterable:
         if predicate(item):
@@ -139,7 +139,7 @@ def firstmethod(method):
 
 
 
 
 def chunks(it, n):
-    """Split an iterator into chunks with ``n`` elements each.
+    """Split an iterator into chunks with `n` elements each.
 
 
     Examples
 
 
@@ -206,8 +206,8 @@ def fun_takes_kwargs(fun, kwlist=[]):
     """With a function, and a list of keyword arguments, returns arguments
     """With a function, and a list of keyword arguments, returns arguments
     in the list which the function takes.
     in the list which the function takes.
 
 
-    If the object has an ``argspec`` attribute that is used instead
-    of using the :meth:`inspect.getargspec`` introspection.
+    If the object has an `argspec` attribute, that is used instead
+    of the :func:`inspect.getargspec` introspection.
 
 
     :param fun: The function to inspect arguments of.
     :param kwlist: The list of keyword arguments.
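
For example (a minimal sketch; the handler is illustrative):

.. code-block:: python

    >>> from celery.utils import fun_takes_kwargs
    >>> def handler(x, logfile=None, loglevel=None):
    ...     pass
    >>> fun_takes_kwargs(handler, ["task_id", "logfile", "loglevel"])
    ["logfile", "loglevel"]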
@@ -243,7 +243,7 @@ def get_cls_by_name(name, aliases={}, imp=None):
         celery.concurrency.processes.TaskPool
                                     ^- class name
 
 
-    If ``aliases`` is provided, a dict containing short name/long name
+    If `aliases` is provided, a dict containing short name/long name
     mappings, the name is looked up in the aliases first.
 
 
     Examples:
@@ -326,7 +326,7 @@ def import_from_cwd(module, imp=None):
     located in the current directory.
 
 
     Modules located in the current directory has
-    precedence over modules located in ``sys.path``.
+    precedence over modules located in `sys.path`.
     """
     """
     if imp is None:
     if imp is None:
         imp = importlib.import_module
         imp = importlib.import_module
@@ -341,4 +341,3 @@ def import_from_cwd(module, imp=None):
             sys.path.remove(cwd)
         except ValueError:
             pass
-

+ 4 - 4
celery/utils/dispatch/saferef.py

@@ -72,7 +72,7 @@ class BoundMethodWeakref(object):
 
 
         class attribute pointing to all live
         BoundMethodWeakref objects indexed by the class's
-        ``calculate_key(target)`` method applied to the target
+        `calculate_key(target)` method applied to the target
         objects. This weak value dictionary is used to
         short-circuit creation so that multiple references
         to the same (object, function) pair produce the
@@ -110,7 +110,7 @@ class BoundMethodWeakref(object):
         """Return a weak-reference-like instance for a bound method
         """Return a weak-reference-like instance for a bound method
 
 
         :param target: the instance-method target for the weak
-            reference, must have ``im_self`` and ``im_func`` attributes
+            reference, must have `im_self` and `im_func` attributes
             and be reconstructable via::
 
 
                 target.im_func.__get__(target.im_self)
@@ -153,7 +153,7 @@ class BoundMethodWeakref(object):
     def calculate_key(cls, target):
         """Calculate the reference key for this reference
 
 
-        Currently this is a two-tuple of the ``id()``'s of the
+        Currently this is a two-tuple of the `id()`'s of the
         target object and the target function respectively.
         """
         return id(target.im_self), id(target.im_func)
@@ -223,7 +223,7 @@ class BoundNonDescriptorMethodWeakref(BoundMethodWeakref):
         """Return a weak-reference-like instance for a bound method
         """Return a weak-reference-like instance for a bound method
 
 
         :param target: the instance-method target for the weak
-            reference, must have ``im_self`` and ``im_func`` attributes
+            reference, must have `im_self` and `im_func` attributes
             and be reconstructable via::
 
 
                 target.im_func.__get__(target.im_self)

+ 6 - 6
celery/utils/dispatch/signal.py

@@ -23,7 +23,7 @@ class Signal(object):
 
 
     .. attribute:: receivers
         Internal attribute, holds a dictionary of
-        ``{receriverkey (id): weakref(receiver)}`` mappings.
+        `{receiverkey (id): weakref(receiver)}` mappings.
 
 
     """
     """
 
 
@@ -51,9 +51,9 @@ class Signal(object):
 
 
             Receivers must be able to accept keyword arguments.
 
 
-            If receivers have a ``dispatch_uid`` attribute, the receiver will
+            If receivers have a `dispatch_uid` attribute, the receiver will
             not be added if another receiver already exists with that
-            ``dispatch_uid``.
+            `dispatch_uid`.
 
 
         :keyword sender: The sender to which the receiver should respond.
             Must either be of type :class:`Signal`, or :const:`None` to receive
@@ -92,7 +92,7 @@ class Signal(object):
         receiver will be removed from dispatch automatically.
 
 
         :keyword receiver: The registered receiver to disconnect. May be
-            none if ``dispatch_uid`` is specified.
+            :const:`None` if `dispatch_uid` is specified.
 
 
         :keyword sender: The registered sender to disconnect.
 
 
@@ -125,7 +125,7 @@ class Signal(object):
 
 
         :keyword \*\*named: Named arguments which will be passed to receivers.
 
 
-        :returns: a list of tuple pairs: ``[(receiver, response), ... ]``.
+        :returns: a list of tuple pairs: `[(receiver, response), ... ]`.
 
 
         """
         """
         responses = []
         responses = []
@@ -148,7 +148,7 @@ class Signal(object):
             These arguments must be a subset of the argument names defined in
             :attr:`providing_args`.
 
 
-        :returns: a list of tuple pairs: ``[(receiver, response), ... ]``.
+        :returns: a list of tuple pairs: `[(receiver, response), ... ]`.
 
 
         :raises DispatcherKeyError:
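
A short sketch of the dispatch API documented above (the signal and
receiver are illustrative):

.. code-block:: python

    from celery.utils.dispatch import Signal

    task_done = Signal(providing_args=["result"])

    def on_done(sender=None, result=None, **kwargs):
        print("got: %r" % (result, ))

    task_done.connect(on_done, dispatch_uid="on_done.v1")
    task_done.send(sender=None, result=42)    # -> [(on_done, None)]
    task_done.disconnect(on_done, dispatch_uid="on_done.v1")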
 
 

+ 2 - 2
celery/utils/functional.py

@@ -53,9 +53,9 @@
 ### Begin from Python 2.5 functools.py ########################################
 
 
 # Summary of changes made to the Python 2.5 code below:
-#   * Wrapped the ``setattr`` call in ``update_wrapper`` with a try-except
+#   * Wrapped the `setattr` call in `update_wrapper` with a try-except
 #     block to make it compatible with Python 2.3, which doesn't allow
-#     assigning to ``__name__``.
+#     assigning to `__name__`.
 
 
 # Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007 Python Software
 # Foundation. All Rights Reserved.

+ 3 - 3
celery/utils/timeutils.py

@@ -65,7 +65,7 @@ def remaining(start, ends_in, now=None, relative=True):
     :param ends_in: The end delta as a :class:`~datetime.timedelta`.
     :keyword relative: If set to :const:`False`, the end time will be
         calculated using :func:`delta_resolution` (i.e. rounded to the
-        resolution of ``ends_in``).
+        resolution of `ends_in`).
     :keyword now: Function returning the current time and date,
         defaults to :func:`datetime.now`.
 
 
@@ -79,7 +79,7 @@ def remaining(start, ends_in, now=None, relative=True):
 
 
 
 
 def rate(rate):
-    """Parses rate strings, such as ``"100/m"`` or ``"2/h"``
+    """Parses rate strings, such as `"100/m"` or `"2/h"`
     and converts them to seconds."""
     if rate:
         if isinstance(rate, basestring):
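
For example (a sketch of the parser's output, in tasks per second):

.. code-block:: python

    >>> from celery.utils.timeutils import rate
    >>> rate("100/s")    # -> 100
    >>> rate("100/m")    # -> 100 / 60.0
    >>> rate("2/h")      # -> 2 / 3600.0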
@@ -118,7 +118,7 @@ def humanize_seconds(secs, prefix=""):
 
 
 
 
 def maybe_iso8601(dt):
-    """``Either datetime | str -> datetime or None -> None``"""
+    """`Either datetime | str -> datetime or None -> None`"""
     if not dt:
         return
     if isinstance(dt, datetime):

+ 52 - 56
celery/worker/__init__.py

@@ -1,8 +1,3 @@
-"""
-
-The Multiprocessing Worker Server
-
-"""
 import socket
 import logging
 import traceback
@@ -23,10 +18,13 @@ RUN = 0x1
 CLOSE = 0x2
 TERMINATE = 0x3
 
 
+#: List of signals to reset when a child process starts.
 WORKER_SIGRESET = frozenset(["SIGTERM",
                              "SIGHUP",
                              "SIGTTIN",
                              "SIGTTOU"])
+
+#: List of signals to ignore when a child process starts.
 WORKER_SIGIGNORE = frozenset(["SIGINT"])
 
 
 
 
@@ -50,65 +48,46 @@ def process_initializer(app, hostname):
 
 
 
 
 class WorkController(object):
-    """Executes tasks waiting in the task queue.
-
-    :param concurrency: see :attr:`concurrency`.
-    :param logfile: see :attr:`logfile`.
-    :param loglevel: see :attr:`loglevel`.
-    :param embed_clockservice: see :attr:`embed_clockservice`.
-    :param send_events: see :attr:`send_events`.
-
-    .. attribute:: concurrency
-
-        The number of simultaneous processes doing work (default:
-        ``conf.CELERYD_CONCURRENCY``)
-
-    .. attribute:: loglevel
-
-        The loglevel used (default: :const:`logging.INFO`)
-
-    .. attribute:: logfile
-
-        The logfile used, if no logfile is specified it uses ``stderr``
-        (default: `celery.conf.CELERYD_LOG_FILE`).
+    """Unmanaged worker instance."""
 
 
-    .. attribute:: embed_clockservice
+    #: The number of simultaneous processes doing work (default:
+    #: :setting:`CELERYD_CONCURRENCY`)
+    concurrency = None
 
 
-        If :const:`True`, celerybeat is embedded, running in the main worker
-        process as a thread.
-
-    .. attribute:: send_events
-
-        Enable the sending of monitoring events, these events can be captured
-        by monitors (celerymon).
-
-    .. attribute:: logger
-
-        The :class:`logging.Logger` instance used for logging.
-
-    .. attribute:: pool
+    #: The loglevel used (default: :const:`logging.INFO`)
+    loglevel = logging.ERROR
 
 
-        The :class:`multiprocessing.Pool` instance used.
+    #: The logfile used, if no logfile is specified it uses `stderr`
+    #: (default: :setting:`CELERYD_LOG_FILE`).
+    logfile = None
 
 
-    .. attribute:: ready_queue
+    #: If :const:`True`, celerybeat is embedded, running in the main worker
+    #: process as a thread.
+    embed_clockservice = None
 
 
-        The :class:`Queue.Queue` that holds tasks ready for immediate
-        processing.
+    #: Enable the sending of monitoring events, these events can be captured
+    #: by monitors (celerymon).
+    send_events = False
 
 
-    .. attribute:: schedule_controller
+    #: The :class:`logging.Logger` instance used for logging.
+    logger = None
 
 
-        Instance of :class:`celery.worker.controllers.ScheduleController`.
+    #: The pool instance used.
+    pool = None
 
 
-    .. attribute:: mediator
+    #: The internal queue object that holds tasks ready for immediate
+    #: processing.
+    ready_queue = None
 
 
-        Instance of :class:`celery.worker.controllers.Mediator`.
+    #: Instance of :class:`celery.worker.controllers.ScheduleController`.
+    schedule_controller = None
 
 
-    .. attribute:: consumer
+    #: Instance of :class:`celery.worker.controllers.Mediator`.
+    mediator = None
 
 
-        Instance of :class:`celery.worker.consumer.Consumer`.
+    #: Consumer instance.
+    consumer = None
 
 
-    """
-    loglevel = logging.ERROR
     _state = None
     _running = 0
 
 
@@ -120,7 +99,8 @@ class WorkController(object):
             task_soft_time_limit=None, max_tasks_per_child=None,
             pool_putlocks=None, db=None, prefetch_multiplier=None,
             eta_scheduler_precision=None, queues=None,
-            disable_rate_limits=None, app=None):
+            disable_rate_limits=None, autoscale=None,
+            autoscaler_cls=None, scheduler_cls=None, app=None):
 
 
         self.app = app_or_default(app)
         conf = self.app.conf
@@ -138,8 +118,11 @@ class WorkController(object):
         self.mediator_cls = mediator_cls or conf.CELERYD_MEDIATOR
         self.eta_scheduler_cls = eta_scheduler_cls or \
                                     conf.CELERYD_ETA_SCHEDULER
+        self.autoscaler_cls = autoscaler_cls or \
+                                    conf.CELERYD_AUTOSCALER
         self.schedule_filename = schedule_filename or \
                                     conf.CELERYBEAT_SCHEDULE_FILENAME
+        self.scheduler_cls = scheduler_cls or conf.CELERYBEAT_SCHEDULER
         self.hostname = hostname or socket.gethostname()
         self.embed_clockservice = embed_clockservice
         self.ready_callback = ready_callback
@@ -178,7 +161,13 @@ class WorkController(object):
         self.logger.debug("Instantiating thread components...")
 
 
         # Threads + Pool + Consumer
-        self.pool = instantiate(self.pool_cls, self.concurrency,
+        self.autoscaler = None
+        max_concurrency = None
+        min_concurrency = concurrency
+        if autoscale:
+            max_concurrency, min_concurrency = autoscale
+
+        self.pool = instantiate(self.pool_cls, min_concurrency,
                                 logger=self.logger,
                                 initializer=process_initializer,
                                 initargs=(self.app, self.hostname),
@@ -187,6 +176,12 @@ class WorkController(object):
                                 soft_timeout=self.task_soft_time_limit,
                                 putlocks=self.pool_putlocks)
 
 
+        if autoscale:
+            self.autoscaler = instantiate(self.autoscaler_cls, self.pool,
+                                          max_concurrency=max_concurrency,
+                                          min_concurrency=min_concurrency,
+                                          logger=self.logger)
+
         self.mediator = None
         if not disable_rate_limits:
             self.mediator = instantiate(self.mediator_cls, self.ready_queue,
@@ -202,7 +197,8 @@ class WorkController(object):
         if self.embed_clockservice:
             self.beat = beat.EmbeddedService(app=self.app,
                                 logger=self.logger,
-                                schedule_filename=self.schedule_filename)
+                                schedule_filename=self.schedule_filename,
+                                scheduler_cls=self.scheduler_cls)
 
 
         prefetch_count = self.concurrency * self.prefetch_multiplier
         self.consumer = instantiate(self.consumer_cls,
@@ -224,6 +220,7 @@ class WorkController(object):
                                         self.mediator,
                                         self.scheduler,
                                         self.beat,
+                                        self.autoscaler,
                                         self.consumer))
 
 
     def start(self):
@@ -257,7 +254,6 @@ class WorkController(object):
         self._shutdown(warm=False)
 
 
     def _shutdown(self, warm=True):
-        """Gracefully shutdown the worker server."""
         what = (warm and "stopping" or "terminating").capitalize()
 
 
         if self._state != RUN or self._running != len(self.components):
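
A sketch of the new autoscale option added above (the other keyword
arguments are illustrative):

.. code-block:: python

    from celery.worker import WorkController

    # autoscale=(max, min): start with two pool processes, and let the
    # autoscaler grow the pool to at most ten under load.
    worker = WorkController(concurrency=2, autoscale=(10, 2))
    worker.start()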

+ 33 - 39
celery/worker/buckets.py

@@ -2,7 +2,7 @@ import threading
 import time
 import time
 
 from collections import deque
 from collections import deque
+from Queue import Queue, Empty
 
 
 from celery.datastructures import TokenBucket
 from celery.datastructures import TokenBucket
 from celery.utils import timeutils
 
 
 class TaskBucket(object):
 class TaskBucket(object):
     """This is a collection of token buckets, each task type having
-    its own token bucket. If the task type doesn't have a rate limit,
-    it will have a plain :class:`Queue` object instead of a
+    its own token bucket.  If the task type doesn't have a rate limit,
+    it will have a plain :class:`~Queue.Queue` object instead of a
     :class:`TokenBucketQueue`.
 
 
     The :meth:`put` operation forwards the task to its appropriate bucket,
     while the :meth:`get` operation iterates over the buckets and retrieves
     the first available item.
 
 
-    Say we have three types of tasks in the registry: ``celery.ping``,
-    ``feed.refresh`` and ``video.compress``, the TaskBucket will consist
+    Say we have three types of tasks in the registry: `celery.ping`,
+    `feed.refresh` and `video.compress`, the TaskBucket will consist
     of the following items::
 
 
         {"celery.ping": TokenBucketQueue(fill_rate=300),
         {"celery.ping": TokenBucketQueue(fill_rate=300),
@@ -32,12 +32,11 @@ class TaskBucket(object):
          "video.compress": TokenBucketQueue(fill_rate=2)}
          "video.compress": TokenBucketQueue(fill_rate=2)}
 
 
     The get operation will iterate over these until one of the buckets
-    is able to return an item. The underlying datastructure is a ``dict``,
+    is able to return an item.  The underlying datastructure is a `dict`,
     so the order is ignored here.
 
 
     :param task_registry: The task registry used to get the task
-        type class for a given task name.
-
+                          type class for a given task name.
 
 
     """
     """
 
 
@@ -65,8 +64,8 @@ class TaskBucket(object):
     def _get_immediate(self):
         try:
             return self.immediate.popleft()
-        except IndexError:                      # Empty
-            raise QueueEmpty()
+        except IndexError:
+            raise Empty()
 
 
     def _get(self):
         # If the first bucket is always returning items, we would never
@@ -75,8 +74,8 @@ class TaskBucket(object):
         # "immediate". This queue is always checked for cached items first.
         # "immediate". This queue is always checked for cached items first.
         try:
         try:
             return 0, self._get_immediate()
             return 0, self._get_immediate()
-        except QueueEmpty:
-                pass
+        except Empty:
+            pass
 
 
         remaining_times = []
         for bucket in self.buckets.values():
@@ -85,7 +84,7 @@ class TaskBucket(object):
                 try:
                     # Just put any ready items into the immediate queue.
                     self.immediate.append(bucket.get_nowait())
-                except QueueEmpty:
+                except Empty:
                     pass
                 except RateLimitExceeded:
                     remaining_times.append(bucket.expected_time())
@@ -95,7 +94,7 @@ class TaskBucket(object):
         # Try the immediate queue again.
         try:
             return 0, self._get_immediate()
-        except QueueEmpty:
+        except Empty:
             if not remaining_times:
             if not remaining_times:
                 # No items in any of the buckets.
                 # No items in any of the buckets.
                 raise
                 raise
@@ -119,14 +118,14 @@ class TaskBucket(object):
             while True:
             while True:
                 try:
                 try:
                     remaining_time, item = self._get()
                     remaining_time, item = self._get()
-                except QueueEmpty:
+                except Empty:
                     if not block or did_timeout():
                     if not block or did_timeout():
                         raise
                         raise
                     self.not_empty.wait(timeout)
                     self.not_empty.wait(timeout)
                     continue
                     continue
                 if remaining_time:
                 if remaining_time:
                     if not block or did_timeout():
                     if not block or did_timeout():
-                        raise QueueEmpty
+                        raise Empty()
                     time.sleep(min(remaining_time, timeout or 1))
                     time.sleep(min(remaining_time, timeout or 1))
                 else:
                 else:
                     return item
                     return item
@@ -178,8 +177,8 @@ class TaskBucket(object):
         """Add a bucket for a task type.
         """Add a bucket for a task type.
 
 
         Will read the task's rate limit and create a :class:`TokenBucketQueue`
-        if it has one. If the task doesn't have a rate limit a regular Queue
-        will be used.
+        if it has one.  If the task doesn't have a rate limit
+        :class:`FastQueue` will be used instead.

         """
         """
         if task_name not in self.buckets:
         if task_name not in self.buckets:
@@ -190,14 +189,17 @@ class TaskBucket(object):
         return sum(bucket.qsize() for bucket in self.buckets.values())
         return sum(bucket.qsize() for bucket in self.buckets.values())
 
 
     def empty(self):
     def empty(self):
+        """Returns :const:`True` if all of the buckets are empty."""
         return all(bucket.empty() for bucket in self.buckets.values())
         return all(bucket.empty() for bucket in self.buckets.values())
 
 
     def clear(self):
     def clear(self):
+        """Delete the data in all of the buckets."""
         for bucket in self.buckets.values():
         for bucket in self.buckets.values():
             bucket.clear()
             bucket.clear()
 
 
     @property
     @property
     def items(self):
     def items(self):
+        """Flattens the data in all of the buckets into a single list."""
         # for queues with contents [(1, 2), (3, 4), (5, 6), (7, 8)]
         # for queues with contents [(1, 2), (3, 4), (5, 6), (7, 8)]
         # zips and flattens to [1, 3, 5, 7, 2, 4, 6, 8]
         # zips and flattens to [1, 3, 5, 7, 2, 4, 6, 8]
         return filter(None, chain_from_iterable(izip_longest(*[bucket.items
         return filter(None, chain_from_iterable(izip_longest(*[bucket.items
@@ -229,8 +231,9 @@ class TokenBucketQueue(object):
     operations.
     operations.
 
 
     :param fill_rate: The rate in tokens/second that the bucket will
     :param fill_rate: The rate in tokens/second that the bucket will
-      be refilled.
-    :keyword capacity: Maximum number of tokens in the bucket. Default is 1.
+                      be refilled.
+    :keyword capacity: Maximum number of tokens in the bucket.
+                       Default is 1.
 
 
     """
     """
     RateLimitExceeded = RateLimitExceeded
     RateLimitExceeded = RateLimitExceeded
@@ -242,11 +245,7 @@ class TokenBucketQueue(object):
             self.queue = Queue()
             self.queue = Queue()
 
 
     def put(self, item, block=True):
     def put(self, item, block=True):
-        """Put an item into the queue.
-
-        Also see :meth:`Queue.Queue.put`.
-
-        """
+        """Put an item onto the queue."""
         self.queue.put(item, block=block)
         self.queue.put(item, block=block)
 
 
     def put_nowait(self, item):
     def put_nowait(self, item):
@@ -254,8 +253,6 @@ class TokenBucketQueue(object):
 
 
         :raises Queue.Full: If a free slot is not immediately available.
         :raises Queue.Full: If a free slot is not immediately available.
 
 
-        Also see :meth:`Queue.Queue.put_nowait`
-
         """
         """
         return self.put(item, block=False)
         return self.put(item, block=False)
 
 
@@ -263,11 +260,10 @@ class TokenBucketQueue(object):
         """Remove and return an item from the queue.
         """Remove and return an item from the queue.
 
 
         :raises RateLimitExceeded: If a token could not be consumed from the
         :raises RateLimitExceeded: If a token could not be consumed from the
-            token bucket (consuming from the queue too fast).
+                                   token bucket (consuming from the queue
+                                   too fast).
         :raises Queue.Empty: If an item is not immediately available.
         :raises Queue.Empty: If an item is not immediately available.
 
 
-        Also see :meth:`Queue.Queue.get`.
-
         """
         """
         get = block and self.queue.get or self.queue.get_nowait
         get = block and self.queue.get or self.queue.get_nowait
 
 
@@ -280,26 +276,23 @@ class TokenBucketQueue(object):
         """Remove and return an item from the queue without blocking.
         """Remove and return an item from the queue without blocking.
 
 
         :raises RateLimitExceeded: If a token could not be consumed from the
         :raises RateLimitExceeded: If a token could not be consumed from the
-            token bucket (consuming from the queue too fast).
+                                   token bucket (consuming from the queue
+                                   too fast).
         :raises Queue.Empty: If an item is not immediately available.
         :raises Queue.Empty: If an item is not immediately available.
 
 
-        Also see :meth:`Queue.Queue.get_nowait`.
-
         """
         """
         return self.get(block=False)
         return self.get(block=False)
 
 
     def qsize(self):
     def qsize(self):
-        """Returns the size of the queue.
-
-        See :meth:`Queue.Queue.qsize`.
-
-        """
+        """Returns the size of the queue."""
         return self.queue.qsize()
         return self.queue.qsize()
 
 
     def empty(self):
     def empty(self):
+        """Returns :const:`True` if the queue is empty."""
         return self.queue.empty()
         return self.queue.empty()
 
 
     def clear(self):
     def clear(self):
+        """Delete all data in the queue."""
         return self.items.clear()
         return self.items.clear()
 
 
     def wait(self, block=False):
     def wait(self, block=False):
@@ -312,10 +305,11 @@ class TokenBucketQueue(object):
             time.sleep(remaining)
             time.sleep(remaining)
 
 
     def expected_time(self, tokens=1):
     def expected_time(self, tokens=1):
-        """Returns the expected time in seconds when a new token should be
+        """Returns the expected time in seconds of when a new token should be
         available."""
         available."""
         return self._bucket.expected_time(tokens)
         return self._bucket.expected_time(tokens)
 
 
     @property
     @property
     def items(self):
     def items(self):
+        """Underlying data.  Do not modify."""
         return self.queue.queue
         return self.queue.queue
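
For readers skimming the diff, the scheduling above rests on the classic
token bucket algorithm.  A minimal standalone sketch (illustrative only;
this is not the `_bucket` implementation used by the class, and all names
here are made up)::

    import time

    class SimpleTokenBucket(object):
        """Bucket refilled at `fill_rate` tokens/second, up to `capacity`."""

        def __init__(self, fill_rate, capacity=1):
            self.capacity = float(capacity)
            self.fill_rate = float(fill_rate)
            self._tokens = self.capacity
            self._timestamp = time.time()

        def can_consume(self, tokens=1):
            # Take `tokens` out of the bucket if available right now.
            if tokens <= self._refill():
                self._tokens -= tokens
                return True
            return False

        def expected_time(self, tokens=1):
            # Seconds until `tokens` become available (0 if available now).
            return max(tokens - self._refill(), 0) / self.fill_rate

        def _refill(self):
            # Credit the tokens accumulated since the last call.
            now = time.time()
            self._tokens = min(self.capacity,
                               self._tokens +
                               self.fill_rate * (now - self._timestamp))
            self._timestamp = now
            return self._tokens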

+ 7 - 4
celery/worker/consumer.py

@@ -25,7 +25,7 @@ up and running.

 * So for each message received the :meth:`~Consumer.receive_message`
   method is called; this checks the payload of the message for either
-  a ``task`` key or a ``control`` key.
+  a `task` key or a `control` key.

   If the message is a task, it verifies the validity of the message,
   converts it to a :class:`celery.worker.job.TaskRequest`, and sends
@@ -40,9 +40,9 @@ up and running.
   are acknowledged immediately and logged, so the message is not resent
   again, and again.

-* If the task has an ETA/countdown, the task is moved to the ``eta_schedule``
+* If the task has an ETA/countdown, the task is moved to the `eta_schedule`
   so the :class:`timer2.Timer` can schedule it at its
-  deadline. Tasks without an eta are moved immediately to the ``ready_queue``,
+  deadline. Tasks without an eta are moved immediately to the `ready_queue`,
   so they can be picked up by the :class:`~celery.worker.controllers.Mediator`
   to be sent to the pool.

@@ -80,6 +80,7 @@ from celery.events import EventDispatcher
 from celery.exceptions import NotRegistered
 from celery.utils import noop
 from celery.utils.timer2 import to_timestamp
+from celery.worker import state
 from celery.worker.job import TaskRequest, InvalidTaskError
 from celery.worker.control.registry import Panel
 from celery.worker.heartbeat import Heart
@@ -256,7 +257,7 @@ class Consumer(object):
     def on_task(self, task):
         """Handle received task.

-        If the task has an ``eta`` we enter it into the ETA schedule,
+        If the task has an `eta` we enter it into the ETA schedule,
         otherwise we move it to the ready queue for immediate processing.

         """
         """
@@ -285,6 +286,7 @@ class Consumer(object):
                 self.eta_schedule.apply_at(eta,
                 self.eta_schedule.apply_at(eta,
                                            self.apply_eta_task, (task, ))
                                            self.apply_eta_task, (task, ))
         else:
         else:
+            state.task_reserved(task)
             self.ready_queue.put(task)
             self.ready_queue.put(task)
 
 
     def on_control(self, message, message_data):
     def on_control(self, message, message_data):
@@ -294,6 +296,7 @@ class Consumer(object):
             self.logger.error("No such control command: %s" % command)
             self.logger.error("No such control command: %s" % command)
 
 
     def apply_eta_task(self, task):
     def apply_eta_task(self, task):
+        state.task_reserved(task)
         self.ready_queue.put(task)
         self.ready_queue.put(task)
         self.qos.decrement_eventually()
         self.qos.decrement_eventually()
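
Taken together, the receive path for a task now looks roughly like this
(a simplified sketch of the logic in this diff, not the actual method
body)::

    def on_task(self, task):
        # Route a task either to the ETA timer or to the ready queue.
        if task.eta:
            self.eta_schedule.apply_at(to_timestamp(task.eta),
                                       self.apply_eta_task, (task, ))
        else:
            state.task_reserved(task)   # bookkeeping for the autoscaler
            self.ready_queue.put(task)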
 
 

+ 11 - 0
celery/worker/control/builtins.py

@@ -167,6 +167,17 @@ def ping(panel, **kwargs):
     return "pong"


+@Panel.register
+def pool_grow(panel, n=1, **kwargs):
+    panel.listener.pool.grow(n)
+    return {"ok": "spawned worker processes"}
+
+@Panel.register
+def pool_shrink(panel, n=1, **kwargs):
+    panel.listener.pool.shrink(n)
+    return {"ok": "terminated worker processes"}
+
+
 @Panel.register
 def shutdown(panel, **kwargs):
     panel.logger.critical("Got shutdown from remote.")
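
These new commands are driven through the worker's remote-control
interface; with the 2.x control API an invocation would look something like
the following (a sketch; the `arguments` mapping is assumed to be passed
through as the command's keyword arguments, as with other panel commands)::

    from celery.task.control import broadcast

    broadcast("pool_grow", arguments={"n": 2}, reply=True)    # add 2 procs
    broadcast("pool_shrink", arguments={"n": 1}, reply=True)  # remove 1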

+ 75 - 15
celery/worker/controllers.py

@@ -7,28 +7,88 @@ import logging
 import sys
 import threading
 import traceback
-from Queue import Empty as QueueEmpty
+
+from time import sleep, time
+from Queue import Empty

 from celery.app import app_or_default
 from celery.utils.compat import log_with_extra
+from celery.worker import state


-class Mediator(threading.Thread):
-    """Thread continuously sending tasks in the queue to the pool.
+class Autoscaler(threading.Thread):
+
+    def __init__(self, pool, max_concurrency, min_concurrency=0,
+            keepalive=30, logger=None):
+        threading.Thread.__init__(self)
+        self.pool = pool
+        self.max_concurrency = max_concurrency
+        self.min_concurrency = min_concurrency
+        self.keepalive = keepalive
+        self.logger = logger or log.get_default_logger()
+        self._last_action = None
+        self._shutdown = threading.Event()
+        self._stopped = threading.Event()
+        self.setDaemon(True)
+        self.setName(self.__class__.__name__)
+
+        assert self.keepalive, "can't scale down too fast."
+
+    def scale(self):
+        current = min(self.qty, self.max_concurrency)
+        if current > self.processes:
+            self.scale_up(current - self.processes)
+        elif current < self.processes:
+            self.scale_down((self.processes - current) - self.min_concurrency)
+        sleep(1.0)
+
+    def scale_up(self, n):
+        self.logger.info("Scaling up %s processes." % (n, ))
+        self._last_action = time()
+        return self.pool.grow(n)

-    .. attribute:: ready_queue
+    def scale_down(self, n):
+        if not self._last_action or not n:
+            return
+        if time() - self._last_action > self.keepalive:
+            self.logger.info("Scaling down %s processes." % (n, ))
+            self._last_action = time()
+            try:
+                self.pool.shrink(n)
+            except Exception, exc:
+                import traceback
+                traceback.print_stack()
+                self.logger.error("Autoscaler: scale_down: %r" % (exc, ))

-        The task queue, a :class:`Queue.Queue` instance.
+    def run(self):
+        while not self._shutdown.isSet():
+            self.scale()
+        self._stopped.set()

-    .. attribute:: callback
+    def stop(self):
+        self._shutdown.set()
+        self._stopped.wait()
+        self.join(1e100)
+
+    @property
+    def qty(self):
+        return len(state.reserved_requests)
+
+    @property
+    def processes(self):
+        return self.pool._pool._processes
+
+
+class Mediator(threading.Thread):
+    """Thread continuously moving tasks in the ready queue to the pool."""

-        The callback used to process tasks retrieved from the
-        :attr:`ready_queue`.
+    #: The task queue, a :class:`~Queue.Queue` instance.
+    ready_queue = None

-    """
+    #: Callback called when a task is obtained.
+    callback = None

-    def __init__(self, ready_queue, callback, logger=None,
-            app=None):
+    def __init__(self, ready_queue, callback, logger=None, app=None):
         threading.Thread.__init__(self)
         self.app = app_or_default(app)
         self.logger = logger or self.app.log.get_default_logger()
@@ -41,9 +101,8 @@ class Mediator(threading.Thread):

     def move(self):
         try:
-            # This blocks until there's a message in the queue.
             task = self.ready_queue.get(timeout=1.0)
-        except QueueEmpty:
+        except Empty:
             return

         if task.revoked():
@@ -65,12 +124,13 @@ class Mediator(threading.Thread):
                                            "name": task.task_name}})

     def run(self):
+        """Move tasks forver or until :meth:`stop` is called."""
         while not self._shutdown.isSet():
             self.move()
-        self._stopped.set()                 # indicate that we are stopped
+        self._stopped.set()

     def stop(self):
         """Gracefully shutdown the thread."""
         self._shutdown.set()
-        self._stopped.wait()                # block until this thread is done
+        self._stopped.wait()
         self.join(1e100)
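
The autoscaler's policy is easiest to read as a pure function: grow toward
the number of reserved tasks (capped at `max_concurrency`), and shrink back
no further than `min_concurrency`, no faster than every `keepalive`
seconds.  Condensed (a hypothetical rewrite for illustration, ignoring the
keepalive timing)::

    def autoscale_delta(reserved, procs, max_c, min_c):
        """Positive result: processes to add; negative: to remove."""
        target = min(reserved, max_c)
        if target > procs:
            return target - procs
        if target < procs:
            return -((procs - target) - min_c)
        return 0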

+ 6 - 7
celery/worker/heartbeat.py

@@ -1,19 +1,18 @@
 import threading
+
 from time import time, sleep


 class Heart(threading.Thread):
-    """Thread sending heartbeats at an interval.
+    """Thread sending heartbeats at regular intervals.

     :param eventer: Event dispatcher used to send the event.
     :keyword interval: Time in seconds between heartbeats.
-        Default is 2 minutes.
-
-    .. attribute:: bpm
-
-        Beats per minute.
+                       Default is 2 minutes.

     """
+
+    #: Beats per minute.
     bpm = 0.5

     def __init__(self, eventer, interval=None):
@@ -64,6 +63,6 @@ class Heart(threading.Thread):
             return
         self._state = "CLOSE"
         self._shutdown.set()
-        self._stopped.wait()            # block until this thread is done
+        self._stopped.wait()            # blocks until this thread is done
         if self.isAlive():
             self.join(1e100)
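
With `bpm = 0.5` the default interval works out to the documented two
minutes; the arithmetic (an illustrative helper, not code from this
module)::

    def beat_interval(bpm):
        # beats per minute -> seconds between heartbeats
        return 60.0 / bpm

    assert beat_interval(0.5) == 120.0    # two minutes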

+ 165 - 151
celery/worker/job.py

@@ -23,6 +23,8 @@ from celery.worker import state
 # pep8.py borks on an inline signature separator and
 # says "trailing whitespace" ;)
 EMAIL_SIGNATURE_SEP = "-- "
+
+#: format string for the body of an error e-mail.
 TASK_ERROR_EMAIL_BODY = """
 Task %%(name)s with id %%(id)s raised exception:\n%%(exc)s

@@ -39,16 +41,20 @@ celeryd at %%(hostname)s.
 """ % {"EMAIL_SIGNATURE_SEP": EMAIL_SIGNATURE_SEP}


+#: Keys to keep from the message delivery info.  The values
+#: of these keys must be pickleable.
 WANTED_DELIVERY_INFO = ("exchange", "routing_key", "consumer_tag", )


 class InvalidTaskError(Exception):
     """The task has invalid data or is not properly constructed."""
+    pass


 class AlreadyExecutedError(Exception):
     """Tasks can only be executed once, as they might change
     world-wide state."""
+    pass


 class WorkerTaskTrace(TaskTrace):
@@ -57,14 +63,14 @@ class WorkerTaskTrace(TaskTrace):
     meta backend.

     If the call was successful, it saves the result to the task result
-    backend, and sets the task status to ``"SUCCESS"``.
+    backend, and sets the task status to `"SUCCESS"`.

     If the call raises :exc:`celery.exceptions.RetryTaskError`, it extracts
     the original exception, uses that as the result and sets the task status
-    to ``"RETRY"``.
+    to `"RETRY"`.

     If the call results in an exception, it saves the exception as the task
-    result, and sets the task status to ``"FAILURE"``.
+    result, and sets the task status to `"FAILURE"`.

     :param task_name: The name of the task to execute.
     :param task_id: The unique id of the task.
@@ -76,6 +82,12 @@ class WorkerTaskTrace(TaskTrace):

     """

+    #: Current loader.
+    loader = None
+
+    #: Hostname to report as.
+    hostname = None
+
     def __init__(self, *args, **kwargs):
         self.loader = kwargs.get("loader") or app_or_default().loader
         self.hostname = kwargs.get("hostname") or socket.gethostname()
@@ -152,71 +164,72 @@ def execute_and_trace(task_name, *args, **kwargs):


 class TaskRequest(object):
-    """A request for task execution.
-
-    :param task_name: see :attr:`task_name`.
-    :param task_id: see :attr:`task_id`.
-    :param args: see :attr:`args`
-    :param kwargs: see :attr:`kwargs`.
-
-    .. attribute:: task_name
-
-        Kind of task. Must be a name registered in the task registry.
+    """A request for task execution."""

-    .. attribute:: task_id
+    #: Kind of task.  Must be a name registered in the task registry.
+    name = None

-        UUID of the task.
+    #: The task class (set by constructor using :attr:`task_name`).
+    task = None

-    .. attribute:: args
+    #: UUID of the task.
+    task_id = None

-        List of positional arguments to apply to the task.
+    #: List of positional arguments to apply to the task.
+    args = None

-    .. attribute:: kwargs
+    #: Mapping of keyword arguments to apply to the task.
+    kwargs = None

-        Mapping of keyword arguments to apply to the task.
+    #: Number of times the task has been retried.
+    retries = 0

-    .. attribute:: on_ack
+    #: The task's eta (for information only).
+    eta = None

-        Callback called when the task should be acknowledged.
+    #: When the task expires.
+    expires = None

-    .. attribute:: message
+    #: Callback called when the task should be acknowledged.
+    on_ack = None

-        The original message sent. Used for acknowledging the message.
+    #: The message object.  Used to acknowledge the message.
+    message = None

-    .. attribute:: executed
-
-        Set to :const:`True` if the task has been executed.
-        A task should only be executed once.
-
-    .. attribute:: delivery_info
-
-        Additional delivery info, e.g. the contains the path
-        from producer to consumer.
+    #: Flag set when the task has been executed.
+    executed = False

-    .. attribute:: acknowledged
+    #: Additional delivery info, e.g. contains the path from
+    #: Producer to consumer.
+    delivery_info = None

-        Set to :const:`True` if the task has been acknowledged.
+    #: Flag set when the task has been acknowledged.
+    acknowledged = False

+    #: Format string used to log task success.
+    success_msg = """\
+        Task %(name)s[%(id)s] succeeded in %(runtime)ss: %(return_value)s
     """
-    # Logging output
-    success_msg = "Task %(name)s[%(id)s] processed: %(return_value)s"
-    error_msg = """
+
+    #: Format string used to log task failure.
+    error_msg = """\
         Task %(name)s[%(id)s] raised exception: %(exc)s\n%(traceback)s
     """
-    retry_msg = """
-        Task %(name)s[%(id)s] retry: %(exc)s
-    """

-    # E-mails
-    email_subject = """
+    #: Format string used to log task retry.
+    retry_msg = """Task %(name)s[%(id)s] retry: %(exc)s"""
+
+    #: Format string used to generate error e-mail subjects.
+    email_subject = """\
         [celery@%(hostname)s] Error: Task %(name)s (%(id)s): %(exc)s
     """
+
+    #: Format string used to generate error e-mail content.
     email_body = TASK_ERROR_EMAIL_BODY

-    # Internal flags
-    executed = False
-    acknowledged = False
+    #: Timestamp set when the task is started.
     time_start = None
+
     _already_revoked = False

     def __init__(self, task_name, task_id, args, kwargs,
@@ -244,58 +257,32 @@ class TaskRequest(object):
         if self.task.ignore_result:
             self._store_errors = self.task.store_errors_even_if_ignored

-    def maybe_expire(self):
-        if self.expires and datetime.now() > self.expires:
-            state.revoked.add(self.task_id)
-            if self._store_errors:
-                self.task.backend.mark_as_revoked(self.task_id)
-
-    def revoked(self):
-        if self._already_revoked:
-            return True
-        if self.expires:
-            self.maybe_expire()
-        if self.task_id in state.revoked:
-            self.logger.warn("Skipping revoked task: %s[%s]" % (
-                self.task_name, self.task_id))
-            self.send_event("task-revoked", uuid=self.task_id)
-            self.acknowledge()
-            self._already_revoked = True
-            return True
-        return False
-
     @classmethod
-    def from_message(cls, message, message_data, logger=None, eventer=None,
-            hostname=None, app=None):
-        """Create a :class:`TaskRequest` from a task message sent by
-        :class:`celery.app.amqp.TaskPublisher`.
+    def from_message(cls, message, message_data, **kw):
+        """Create request from a task message.

         :raises UnknownTaskError: if the message does not describe a task,
             the message is also rejected.

-        :returns :class:`TaskRequest`:
-
         """
-        task_name = message_data["task"]
-        task_id = message_data["id"]
-        args = message_data["args"]
-        kwargs = message_data["kwargs"]
-        retries = message_data.get("retries", 0)
-        eta = maybe_iso8601(message_data.get("eta"))
-        expires = maybe_iso8601(message_data.get("expires"))
-
         _delivery_info = getattr(message, "delivery_info", {})
         delivery_info = dict((key, _delivery_info.get(key))
                                 for key in WANTED_DELIVERY_INFO)

+        kwargs = message_data["kwargs"]
         if not hasattr(kwargs, "items"):
-            raise InvalidTaskError("Task kwargs must be a dictionary.")
-
-        return cls(task_name, task_id, args, kwdict(kwargs),
-                   retries=retries, on_ack=message.ack,
-                   delivery_info=delivery_info, logger=logger,
-                   eventer=eventer, hostname=hostname,
-                   eta=eta, expires=expires, app=app)
+            raise InvalidTaskError("Task keyword arguments must be a mapping.")
+
+        return cls(task_name=message_data["task"],
+                   task_id=message_data["id"],
+                   args=message_data["args"],
+                   kwargs=kwdict(kwargs),
+                   retries=message_data.get("retries", 0),
+                   eta=maybe_iso8601(message_data.get("eta")),
+                   expires=maybe_iso8601(message_data.get("expires")),
+                   on_ack=message.ack,
+                   delivery_info=delivery_info,
+                   **kw)

     def get_instance_attrs(self, loglevel, logfile):
         return {"logfile": logfile,
@@ -308,8 +295,8 @@ class TaskRequest(object):
     def extend_with_default_kwargs(self, loglevel, logfile):
         """Extend the tasks keyword arguments with standard task arguments.
         """Extend the tasks keyword arguments with standard task arguments.
 
 
-        Currently these are ``logfile``, ``loglevel``, ``task_id``,
-        ``task_name``, ``task_retries``, and ``delivery_info``.
+        Currently these are `logfile`, `loglevel`, `task_id`,
+        `task_name`, `task_retries`, and `delivery_info`.

         See :meth:`celery.task.base.Task.run` for more information.

@@ -331,18 +318,33 @@ class TaskRequest(object):
         kwargs.update(extend_with)
         return kwargs

-    def _get_tracer_args(self, loglevel=None, logfile=None):
-        """Get the :class:`WorkerTaskTrace` tracer for this task."""
-        task_func_kwargs = self.extend_with_default_kwargs(loglevel, logfile)
-        return self.task_name, self.task_id, self.args, task_func_kwargs
+    def execute_using_pool(self, pool, loglevel=None, logfile=None):
+        """Like :meth:`execute`, but using the :mod:`multiprocessing` pool.

-    def _set_executed_bit(self):
-        """Set task as executed to make sure it's not executed again."""
-        if self.executed:
-            raise AlreadyExecutedError(
-                   "Task %s[%s] has already been executed" % (
-                       self.task_name, self.task_id))
-        self.executed = True
+        :param pool: A :class:`multiprocessing.Pool` instance.
+
+        :keyword loglevel: The loglevel used by the task.
+
+        :keyword logfile: The logfile used by the task.
+
+        """
+        if self.revoked():
+            return
+        # Make sure task has not already been executed.
+        self._set_executed_bit()
+
+        args = self._get_tracer_args(loglevel, logfile)
+        instance_attrs = self.get_instance_attrs(loglevel, logfile)
+        self.time_start = time.time()
+        result = pool.apply_async(execute_and_trace,
+                                  args=args,
+                                  kwargs={"hostname": self.hostname,
+                                          "request": instance_attrs},
+                                  accept_callback=self.on_accepted,
+                                  timeout_callback=self.on_timeout,
+                                  callbacks=[self.on_success],
+                                  errbacks=[self.on_failure])
+        return result

     def execute(self, loglevel=None, logfile=None):
         """Execute the task in a :class:`WorkerTaskTrace`.
@@ -354,6 +356,7 @@ class TaskRequest(object):
         """
         if self.revoked():
             return
+
         # Make sure task has not already been executed.
         self._set_executed_bit()

@@ -370,39 +373,34 @@ class TaskRequest(object):
         self.acknowledge()
         return retval

+    def maybe_expire(self):
+        """If expired, mark the task as revoked."""
+        if self.expires and datetime.now() > self.expires:
+            state.revoked.add(self.task_id)
+            if self._store_errors:
+                self.task.backend.mark_as_revoked(self.task_id)
+
+    def revoked(self):
+        """If revoked, skip task and mark state."""
+        if self._already_revoked:
+            return True
+        if self.expires:
+            self.maybe_expire()
+        if self.task_id in state.revoked:
+            self.logger.warn("Skipping revoked task: %s[%s]" % (
+                self.task_name, self.task_id))
+            self.send_event("task-revoked", uuid=self.task_id)
+            self.acknowledge()
+            self._already_revoked = True
+            return True
+        return False
+
     def send_event(self, type, **fields):
         if self.eventer:
             self.eventer.send(type, **fields)

-    def execute_using_pool(self, pool, loglevel=None, logfile=None):
-        """Like :meth:`execute`, but using the :mod:`multiprocessing` pool.
-
-        :param pool: A :class:`multiprocessing.Pool` instance.
-
-        :keyword loglevel: The loglevel used by the task.
-
-        :keyword logfile: The logfile used by the task.
-
-        """
-        if self.revoked():
-            return
-        # Make sure task has not already been executed.
-        self._set_executed_bit()
-
-        args = self._get_tracer_args(loglevel, logfile)
-        instance_attrs = self.get_instance_attrs(loglevel, logfile)
-        self.time_start = time.time()
-        result = pool.apply_async(execute_and_trace,
-                                  args=args,
-                                  kwargs={"hostname": self.hostname,
-                                          "request": instance_attrs},
-                                  accept_callback=self.on_accepted,
-                                  timeout_callback=self.on_timeout,
-                                  callbacks=[self.on_success],
-                                  errbacks=[self.on_failure])
-        return result
-
     def on_accepted(self):
+        """Handler called when task is accepted by worker pool."""
         state.task_accepted(self)
         if not self.task.acks_late:
             self.acknowledge()
@@ -411,6 +409,7 @@ class TaskRequest(object):
             self.task_name, self.task_id))

     def on_timeout(self, soft):
+        """Handler called if the task times out."""
         state.task_ready(self)
         if soft:
             self.logger.warning("Soft time limit exceeded for %s[%s]" % (
@@ -424,14 +423,8 @@ class TaskRequest(object):
         if self._store_errors:
             self.task.backend.mark_as_failure(self.task_id, exc)

-    def acknowledge(self):
-        if not self.acknowledged:
-            self.on_ack()
-            self.acknowledged = True
-
     def on_success(self, ret_value):
-        """The handler used if the task was successfully processed (
-        without raising an exception)."""
+        """Handler called if the task was successfully processed."""
         state.task_ready(self)

         if self.task.acks_late:
@@ -441,29 +434,25 @@ class TaskRequest(object):
         self.send_event("task-succeeded", uuid=self.task_id,
                         result=repr(ret_value), runtime=runtime)

-        msg = self.success_msg.strip() % {
+        self.logger.info(self.success_msg.strip() % {
                 "id": self.task_id,
                 "name": self.task_name,
-                "return_value": self.repr_result(ret_value)}
-        self.logger.info(msg)
-
-    def repr_result(self, result, maxlen=46):
-        # 46 is the length needed to fit
-        #     "the quick brown fox jumps over the lazy dog" :)
-        return truncate_text(repr(result), maxlen)
+                "return_value": self.repr_result(ret_value),
+                "runtime": runtime})

     def on_retry(self, exc_info):
+        """Handler called if the task should be retried."""
         self.send_event("task-retried", uuid=self.task_id,
                                         exception=repr(exc_info.exception.exc),
                                         traceback=repr(exc_info.traceback))
-        msg = self.retry_msg.strip() % {
+
+        self.logger.info(self.retry_msg.strip() % {
                 "id": self.task_id,
                 "name": self.task_name,
-                "exc": repr(exc_info.exception.exc)}
-        self.logger.info(msg)
+                "exc": repr(exc_info.exception.exc)})

     def on_failure(self, exc_info):
-        """The handler used if the task raised an exception."""
+        """Handler called if the task raised an exception."""
         state.task_ready(self)

         if self.task.acks_late:
@@ -493,6 +482,7 @@ class TaskRequest(object):

         log_with_extra(self.logger, logging.ERROR,
                        self.error_msg.strip() % context,
+                       exc_info=exc_info,
                        extra={"data": {"hostname": self.hostname,
                                       "id": self.task_id,
                                       "name": self.task_name}})
@@ -502,6 +492,12 @@ class TaskRequest(object):
                              enabled=task_obj.send_error_emails,
                              whitelist=task_obj.error_whitelist)

+    def acknowledge(self):
+        """Acknowledge task."""
+        if not self.acknowledged:
+            self.on_ack()
+            self.acknowledged = True
+
     def send_error_email(self, task, context, exc,
             whitelist=None, enabled=False, fail_silently=True):
         if enabled and not task.disable_error_emails:
@@ -512,11 +508,10 @@ class TaskRequest(object):
             body = self.email_body.strip() % context
             self.app.mail_admins(subject, body, fail_silently=fail_silently)

-    def __repr__(self):
-        return '<%s: {name:"%s", id:"%s", args:"%s", kwargs:"%s"}>' % (
-                self.__class__.__name__,
-                self.task_name, self.task_id,
-                self.args, self.kwargs)
+    def repr_result(self, result, maxlen=46):
+        # 46 is the length needed to fit
+        #     "the quick brown fox jumps over the lazy dog" :)
+        return truncate_text(repr(result), maxlen)

     def info(self, safe=False):
         args = self.args
@@ -540,3 +535,22 @@ class TaskRequest(object):
                    self.task_id,
                    self.eta and " eta:[%s]" % (self.eta, ) or "",
                    self.expires and " expires:[%s]" % (self.expires, ) or "")
+
+    def __repr__(self):
+        return '<%s: {name:"%s", id:"%s", args:"%s", kwargs:"%s"}>' % (
+                self.__class__.__name__,
+                self.task_name, self.task_id, self.args, self.kwargs)
+
+    def _get_tracer_args(self, loglevel=None, logfile=None):
+        """Get the :class:`WorkerTaskTrace` tracer for this task."""
+        task_func_kwargs = self.extend_with_default_kwargs(loglevel, logfile)
+        return self.task_name, self.task_id, self.args, task_func_kwargs
+
+    def _set_executed_bit(self):
+        """Set task as executed to make sure it's not executed again."""
+        if self.executed:
+            raise AlreadyExecutedError(
+                   "Task %s[%s] has already been executed" % (
+                       self.task_name, self.task_id))
+        self.executed = True
+
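
For reference, the `message_data` mapping consumed by `from_message` above
is the plain task message body; a hypothetical example (all field values
are illustrative)::

    message_data = {"task": "feed.refresh",       # registered task name
                    "id": "9c25a961-ca4e-4482-88a9-1d211a74b307",
                    "args": ["http://example.com/feed.xml"],
                    "kwargs": {},                  # must be a mapping
                    "retries": 0,
                    "eta": "2010-11-01T12:30:00",  # ISO-8601, or None
                    "expires": None}

    # request = TaskRequest.from_message(message, message_data,
    #                                    logger=logger, hostname=hostname)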

+ 20 - 20
celery/worker/state.py

@@ -3,30 +3,29 @@ import shelve
 from celery.utils.compat import defaultdict
 from celery.datastructures import LimitedSet

-# Maximum number of revokes to keep in memory.
+#: maximum number of revokes to keep in memory.
 REVOKES_MAX = 10000

-# How many seconds a revoke will be active before
-# being expired when the max limit has been exceeded.
-REVOKE_EXPIRES = 3600                       # One hour.
+#: how many seconds a revoke will be active before
+#: being expired when the max limit has been exceeded.
+REVOKE_EXPIRES = 3600

-"""
-.. data:: active_requests
+#: set of all reserved :class:`~celery.worker.job.TaskRequest`'s.
+reserved_requests = set()

-Set of currently active :class:`~celery.worker.job.TaskRequest`'s.
-
-.. data:: total_count
+#: set of currently active :class:`~celery.worker.job.TaskRequest`'s.
+active_requests = set()

-Count of tasks executed by the worker, sorted by type.
+#: count of tasks executed by the worker, sorted by type.
+total_count = defaultdict(lambda: 0)

-.. data:: revoked
+#: the list of currently revoked tasks.  Persistent if statedb set.
+revoked = LimitedSet(maxlen=REVOKES_MAX, expires=REVOKE_EXPIRES)

-The list of currently revoked tasks. (PERSISTENT if statedb set).

-"""
-active_requests = set()
-total_count = defaultdict(lambda: 0)
-revoked = LimitedSet(maxlen=REVOKES_MAX, expires=REVOKE_EXPIRES)
+def task_reserved(request):
+    """Updates global state when a task has been reserved."""
+    reserved_requests.add(request)


 def task_accepted(request):
@@ -38,6 +37,7 @@ def task_accepted(request):
 def task_ready(request):
     """Updates global state when a task is ready."""
     active_requests.discard(request)
+    reserved_requests.discard(request)


 class Persistent(object):
@@ -48,10 +48,6 @@ class Persistent(object):
         self.filename = filename
         self._load()

-    def _load(self):
-        self.merge(self.db)
-        self.close()
-
     def save(self):
         self.sync(self.db).sync()
         self.close()
@@ -74,6 +70,10 @@ class Persistent(object):
             self._open.close()
             self._open = None

+    def _load(self):
+        self.merge(self.db)
+        self.close()
+
     @property
     def db(self):
         if self._open is None:
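
The new `reserved_requests` set is what gives the autoscaler its demand
signal.  The intended lifecycle, roughly (an illustrative sequence using a
stand-in object for the request)::

    from celery.worker import state

    request = object()              # stands in for a TaskRequest

    state.task_reserved(request)    # consumer received the task
    state.task_accepted(request)    # a pool process started executing it
    state.task_ready(request)       # finished: dropped from both sets

    assert request not in state.reserved_requests
    assert request not in state.active_requests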

+ 2 - 2
contrib/debian/init.d/celerybeat

@@ -48,7 +48,7 @@
 # =================
 #
 #   * CELERYBEAT_OPTS
-#       Additional arguments to celerybeat, see ``celerybeat --help`` for a
+#       Additional arguments to celerybeat, see `celerybeat --help` for a
 #       list.
 #
 #   * CELERYBEAT_PID_FILE
@@ -61,7 +61,7 @@
 #       Log level to use for celeryd. Default is INFO.
 #
 #   * CELERYBEAT
-#       Path to the celeryd program. Default is ``celeryd``.
+#       Path to the celeryd program. Default is `celeryd`.
 #       You can point this to a virtualenv, or even use manage.py for Django.
 #
 #   * CELERYBEAT_USER

+ 2 - 2
contrib/debian/init.d/celeryd

@@ -41,7 +41,7 @@
 # =================
 #
 #   * CELERYD_OPTS
-#       Additional arguments to celeryd, see ``celeryd --help`` for a list.
+#       Additional arguments to celeryd, see `celeryd --help` for a list.
 #
 #   * CELERYD_CHDIR
 #       Path to chdir at start. Default is to stay in the current directory.
@@ -56,7 +56,7 @@
 #       Log level to use for celeryd. Default is INFO.
 #
 #   * CELERYD
-#       Path to the celeryd program. Default is ``celeryd``.
+#       Path to the celeryd program. Default is `celeryd`.
 #       You can point this to a virtualenv, or even use manage.py for Django.
 #
 #   * CELERYD_USER

+ 2 - 2
contrib/debian/init.d/celeryevcam

@@ -47,7 +47,7 @@
 # =================
 #
 #   * CELERYEV_OPTS
-#       Additional arguments to celeryd, see ``celeryd --help`` for a list.
+#       Additional arguments to celeryd, see `celeryd --help` for a list.
 #
 #   * CELERYD_CHDIR
 #       Path to chdir at start. Default is to stay in the current directory.
@@ -62,7 +62,7 @@
 #       Log level to use for celeryd. Default is INFO.
 #
 #   * CELERYEV
-#       Path to the celeryev program. Default is ``celeryev``.
+#       Path to the celeryev program. Default is `celeryev`.
 #       You can point this to a virtualenv, or even use manage.py for Django.
 #
 #   * CELERYEV_USER

+ 3 - 3
contrib/generic-init.d/celeryd

@@ -51,8 +51,8 @@
 #       nodes, to start
 #
 #   * CELERYD_OPTS
-#       Additional arguments to celeryd-multi, see ``celeryd-multi --help``
-#       and ``celeryd --help`` for help.
+#       Additional arguments to celeryd-multi, see `celeryd-multi --help`
+#       and `celeryd --help` for help.
 #
 #   * CELERYD_CHDIR
 #       Path to chdir at start. Default is to stay in the current directory.
@@ -67,7 +67,7 @@
 #       Log level to use for celeryd. Default is INFO.
 #
 #   * CELERYD
-#       Path to the celeryd program. Default is ``celeryd``.
+#       Path to the celeryd program. Default is `celeryd`.
 #       You can point this to a virtualenv, or even use manage.py for Django.
 #
 #   * CELERYD_USER

+ 4 - 4
contrib/requirements/README.rst

@@ -6,19 +6,19 @@
 Index
 =====

-* ``requirements/default.txt``
+* `requirements/default.txt`

     The default requirements (Python 2.6+).

-* ``requirements/py25.txt``
+* `requirements/py25.txt`

     Extra requirements needed to run on Python 2.5.

-* ``requirements/py26.txt``
+* `requirements/py26.txt`

     Extra requirements needed to run on Python 2.4.

-* ``requirements/test.txt``
+* `requirements/test.txt`

     Requirements needed to run the full unittest suite.


+ 45 - 45
docs/configuration.rst

@@ -202,14 +202,14 @@ The time in seconds of which the task result queues should expire.
 CELERY_RESULT_EXCHANGE
 ~~~~~~~~~~~~~~~~~~~~~~

-Name of the exchange to publish results in.  Default is ``"celeryresults"``.
+Name of the exchange to publish results in.  Default is `"celeryresults"`.

 .. setting:: CELERY_RESULT_EXCHANGE_TYPE

 CELERY_RESULT_EXCHANGE_TYPE
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

-The exchange type of the result exchange.  Default is to use a ``direct``
+The exchange type of the result exchange.  Default is to use a `direct`
 exchange.

 .. setting:: CELERY_RESULT_SERIALIZER
@@ -217,7 +217,7 @@ exchange.
 CELERY_RESULT_SERIALIZER
 ~~~~~~~~~~~~~~~~~~~~~~~~

-Result message serialization format.  Default is ``"pickle"``. See
+Result message serialization format.  Default is `"pickle"`. See
 :ref:`executing-serializers`.

 .. setting:: CELERY_RESULT_PERSISTENT
@@ -326,7 +326,7 @@ Redis backend settings
     The Redis backend requires the :mod:`redis` library:
     http://pypi.python.org/pypi/redis/0.5.5

-    To install the redis package use ``pip`` or ``easy_install``::
+    To install the redis package use `pip` or `easy_install`::

         $ pip install redis

@@ -337,14 +337,14 @@ This backend requires the following configuration directives to be set.
 REDIS_HOST
 ~~~~~~~~~~

-Hostname of the Redis database server. e.g. ``"localhost"``.
+Hostname of the Redis database server. e.g. `"localhost"`.

 .. setting:: REDIS_PORT

 REDIS_PORT
 ~~~~~~~~~~

-Port to the Redis database server. e.g. ``6379``.
+Port to the Redis database server. e.g. `6379`.
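
Putting these directives together, a minimal Redis result-backend
configuration might look like this (a sketch; values are illustrative)::

    # celeryconfig.py
    CELERY_RESULT_BACKEND = "redis"
    REDIS_HOST = "localhost"
    REDIS_PORT = 6379
    REDIS_DB = 0        # see the REDIS_DB setting below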
 
 
 .. setting:: REDIS_DB

@@ -437,8 +437,8 @@ CELERY_QUEUES
 The mapping of queues the worker consumes from.  This is a dictionary
 of queue name/options.  See :ref:`guide-routing` for more information.

-The default is a queue/exchange/binding key of ``"celery"``, with
-exchange type ``direct``.
+The default is a queue/exchange/binding key of `"celery"`, with
+exchange type `direct`.
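
Spelled out, that default is equivalent to the following (an illustrative
rendering; see :ref:`guide-routing` for the full set of queue options)::

    CELERY_QUEUES = {"celery": {"exchange": "celery",
                                "exchange_type": "direct",
                                "binding_key": "celery"}}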
 
 
 You don't have to care about this unless you want custom routing facilities.

@@ -466,7 +466,7 @@ CELERY_DEFAULT_QUEUE
 ~~~~~~~~~~~~~~~~~~~~

 The queue used by default, if no custom queue is specified.  This queue must
-be listed in :setting:`CELERY_QUEUES`.  The default is: ``celery``.
+be listed in :setting:`CELERY_QUEUES`.  The default is: `celery`.

 .. seealso::

@@ -478,7 +478,7 @@ CELERY_DEFAULT_EXCHANGE
 ~~~~~~~~~~~~~~~~~~~~~~~

 Name of the default exchange to use when no custom exchange is
-specified.  The default is: ``celery``.
+specified.  The default is: `celery`.

 .. setting:: CELERY_DEFAULT_EXCHANGE_TYPE

@@ -486,7 +486,7 @@ CELERY_DEFAULT_EXCHANGE_TYPE
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Default exchange type used when no custom exchange is specified.
-The default is: ``direct``.
+The default is: `direct`.

 .. setting:: CELERY_DEFAULT_ROUTING_KEY

@@ -494,14 +494,14 @@ CELERY_DEFAULT_ROUTING_KEY
 ~~~~~~~~~~~~~~~~~~~~~~~~~~

 The default routing key used when sending tasks.
-The default is: ``celery``.
+The default is: `celery`.

 .. setting:: CELERY_DEFAULT_DELIVERY_MODE

 CELERY_DEFAULT_DELIVERY_MODE
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Can be ``transient`` or ``persistent``.  The default is to send
+Can be `transient` or `persistent`.  The default is to send
 persistent messages.

 .. _conf-broker-connection:
@@ -514,7 +514,7 @@ Broker Settings
 BROKER_BACKEND
 ~~~~~~~~~~~~~~

-The messaging backend to use. Default is ``"amqplib"``.
+The messaging backend to use. Default is `"amqplib"`.

 .. setting:: BROKER_HOST

@@ -550,7 +550,7 @@ Password to connect with.
 BROKER_VHOST
 ~~~~~~~~~~~~

-Virtual host.  Default is ``"/"``.
+Virtual host.  Default is `"/"`.

 .. setting:: BROKER_USE_SSL

@@ -604,7 +604,7 @@ CELERY_ALWAYS_EAGER
 ~~~~~~~~~~~~~~~~~~~

 If this is :const:`True`, all tasks will be executed locally by blocking
-until it is finished.  ``apply_async`` and ``Task.delay`` will return
+until it is finished.  `apply_async` and `Task.delay` will return
 a :class:`~celery.result.EagerResult` which emulates the behavior of
 :class:`~celery.result.AsyncResult`, except the result has already
 been evaluated.
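
A common use is running the full stack inline in unit tests; a minimal
configuration fragment (a sketch)::

    # celeryconfig.py
    CELERY_ALWAYS_EAGER = True
    CELERY_EAGER_PROPAGATES_EXCEPTIONS = True   # see the next setting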
@@ -617,10 +617,10 @@ instead.
 CELERY_EAGER_PROPAGATES_EXCEPTIONS
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-If this is :const:`True`, eagerly executed tasks (using ``.apply``, or with
+If this is :const:`True`, eagerly executed tasks (using `.apply`, or with
 :setting:`CELERY_ALWAYS_EAGER` on), will raise exceptions.

-It's the same as always running ``apply`` with ``throw=True``.
+It's the same as always running `apply` with `throw=True`.

 .. setting:: CELERY_IGNORE_RESULT

@@ -648,7 +648,7 @@ A built-in periodic task will delete the results after this time
     backends. For the AMQP backend see
     :setting:`CELERY_AMQP_TASK_RESULT_EXPIRES`.

-    When using the database or MongoDB backends, ``celerybeat`` must be
+    When using the database or MongoDB backends, `celerybeat` must be
     running for the results to be expired.


@@ -678,7 +678,7 @@ CELERY_TASK_SERIALIZER
 ~~~~~~~~~~~~~~~~~~~~~~

 A string identifying the default serialization method to use.  Can be
-``pickle`` (default), ``json``, ``yaml``, or any custom serialization
+`pickle` (default), `json`, `yaml`, `msgpack` or any custom serialization
 methods that have been registered with :mod:`kombu.serialization.registry`.
 
 
 .. seealso::
 .. seealso::
@@ -784,7 +784,7 @@ CELERYD_STATE_DB
 ~~~~~~~~~~~~~~~~

 Name of the file used to store persistent worker state (like revoked tasks).
-Can be a relative or absolute path, but be aware that the suffix ``.db``
+Can be a relative or absolute path, but be aware that the suffix `.db`
 may be appended to the file name (depending on Python version).

 Can also be set via the :option:`--statedb` argument to
@@ -813,7 +813,7 @@ Error E-Mails
 CELERY_SEND_TASK_ERROR_EMAILS
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-The default value for the ``Task.send_error_emails`` attribute, which if
+The default value for the `Task.send_error_emails` attribute, which if
 set to :const:`True` means errors occurring during task execution will be
 sent to :setting:`ADMINS` by e-mail.

@@ -829,7 +829,7 @@ A whitelist of exceptions to send error e-mails for.
 ADMINS
 ~~~~~~

-List of ``(name, email_address)`` tuples for the admins that should
+List of `(name, email_address)` tuples for the admins that should
 receive error e-mails.

 .. setting:: SERVER_EMAIL
@@ -845,7 +845,7 @@ Default is celery@localhost.
 MAIL_HOST
 ~~~~~~~~~

-The mail server to use.  Default is ``"localhost"``.
+The mail server to use.  Default is `"localhost"`.

 .. setting:: MAIL_HOST_USER

@@ -866,7 +866,7 @@ Password (if required) to log on to the mail server with.
 MAIL_PORT
 ~~~~~~~~~

-The port the mail server is listening on.  Default is ``25``.
+The port the mail server is listening on.  Default is `25`.

 .. _conf-example-error-mail-config:

@@ -906,7 +906,7 @@ Events
 CELERY_SEND_EVENTS
 ~~~~~~~~~~~~~~~~~~

-Send events so the worker can be monitored by tools like ``celerymon``.
+Send events so the worker can be monitored by tools like `celerymon`.

 .. setting:: CELERY_EVENT_QUEUE

@@ -914,21 +914,21 @@ CELERY_EVENT_QUEUE
 ~~~~~~~~~~~~~~~~~~

 Name of the queue to consume event messages from. Default is
-``"celeryevent"``.
+`"celeryevent"`.

 .. setting:: CELERY_EVENT_EXCHANGE

 CELERY_EVENT_EXCHANGE
 ~~~~~~~~~~~~~~~~~~~~~

-Name of the exchange to send event messages to.  Default is ``"celeryevent"``.
+Name of the exchange to send event messages to.  Default is `"celeryevent"`.

 .. setting:: CELERY_EVENT_EXCHANGE_TYPE

 CELERY_EVENT_EXCHANGE_TYPE
 ~~~~~~~~~~~~~~~~~~~~~~~~~~

-The exchange type of the event exchange.  Default is to use a ``"direct"``
+The exchange type of the event exchange.  Default is to use a `"direct"`
 exchange.

 .. setting:: CELERY_EVENT_ROUTING_KEY
@@ -936,7 +936,7 @@ exchange.
 CELERY_EVENT_ROUTING_KEY
 ~~~~~~~~~~~~~~~~~~~~~~~~

-Routing key used when sending event messages.  Default is ``"celeryevent"``.
+Routing key used when sending event messages.  Default is `"celeryevent"`.

 .. setting:: CELERY_EVENT_SERIALIZER

@@ -944,7 +944,7 @@ CELERY_EVENT_SERIALIZER
 ~~~~~~~~~~~~~~~~~~~~~~~

 Message serialization format used when sending event messages.
-Default is ``"json"``. See :ref:`executing-serializers`.
+Default is `"json"`. See :ref:`executing-serializers`.

 .. _conf-broadcast:

@@ -960,7 +960,7 @@ Name prefix for the queue used when listening for broadcast messages.
 The worker's hostname will be appended to the prefix to create the final
 queue name.

-Default is ``"celeryctl"``.
+Default is `"celeryctl"`.

 .. setting:: CELERY_BROADCASTS_EXCHANGE

@@ -969,14 +969,14 @@ CELERY_BROADCAST_EXCHANGE

 Name of the exchange used for broadcast messages.

-Default is ``"celeryctl"``.
+Default is `"celeryctl"`.

 .. setting:: CELERY_BROADCAST_EXCHANGE_TYPE

 CELERY_BROADCAST_EXCHANGE_TYPE
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Exchange type used for broadcast messages.  Default is ``"fanout"``.
+Exchange type used for broadcast messages.  Default is `"fanout"`.

 .. _conf-logging:

@@ -991,7 +991,7 @@ CELERYD_LOG_FILE
 The default file name the worker daemon logs messages to.  Can be overridden
 using the :option:`--logfile` option to :mod:`~celery.bin.celeryd`.

-The default is :const:`None` (``stderr``)
+The default is :const:`None` (`stderr`)

 .. setting:: CELERYD_LOG_LEVEL

@@ -1013,7 +1013,7 @@ CELERYD_LOG_FORMAT

 The format to use for log messages.

-Default is ``[%(asctime)s: %(levelname)s/%(processName)s] %(message)s``
+Default is `[%(asctime)s: %(levelname)s/%(processName)s] %(message)s`

 See the Python :mod:`logging` module for more information about log
 formats.
@@ -1039,7 +1039,7 @@ formats.
 CELERY_REDIRECT_STDOUTS
 ~~~~~~~~~~~~~~~~~~~~~~~

-If enabled ``stdout`` and ``stderr`` will be redirected
+If enabled `stdout` and `stderr` will be redirected
 to the current logger.

 Enabled by default.
@@ -1050,7 +1050,7 @@ Used by :program:`celeryd` and :program:`celerybeat`.
 CELERY_REDIRECT_STDOUTS_LEVEL
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-The loglevel output to ``stdout`` and ``stderr`` is logged as.
+The loglevel output to `stdout` and `stderr` is logged as.
 Can be one of :const:`DEBUG`, :const:`INFO`, :const:`WARNING`,
 :const:`ERROR` or :const:`CRITICAL`.

@@ -1112,7 +1112,7 @@ CELERYBEAT_SCHEDULER
 ~~~~~~~~~~~~~~~~~~~~

 The default scheduler class.  Default is
-``"celery.beat.PersistentScheduler"``.
+`"celery.beat.PersistentScheduler"`.

 Can also be set via the :option:`-S` argument to
 :mod:`~celery.bin.celerybeat`.
@@ -1122,9 +1122,9 @@ Can also be set via the :option:`-S` argument to
 CELERYBEAT_SCHEDULE_FILENAME
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Name of the file used by ``PersistentScheduler`` to store the last run times
+Name of the file used by `PersistentScheduler` to store the last run times
 of periodic tasks.  Can be a relative or absolute path, but be aware that the
-suffix ``.db`` may be appended to the file name (depending on Python version).
+suffix `.db` may be appended to the file name (depending on Python version).

 Can also be set via the :option:`--schedule` argument to
 :mod:`~celery.bin.celerybeat`.
@@ -1143,9 +1143,9 @@ CELERYBEAT_LOG_FILE
 ~~~~~~~~~~~~~~~~~~~

 The default file name to log messages to.  Can be overridden using
-the `--logfile`` option to :mod:`~celery.bin.celerybeat`.
+the `--logfile` option to :mod:`~celery.bin.celerybeat`.

-The default is :const:`None` (``stderr``).
+The default is :const:`None` (`stderr`).

 .. setting:: CELERYBEAT_LOG_LEVEL

@@ -1171,9 +1171,9 @@ CELERYMON_LOG_FILE
 ~~~~~~~~~~~~~~~~~~

 The default file name to log messages to.  Can be overridden using
-the :option:`--logfile` argument to ``celerymon``.
+the :option:`--logfile` argument to `celerymon`.

-The default is :const:`None` (``stderr``)
+The default is :const:`None` (`stderr`)

 .. setting:: CELERYMON_LOG_LEVEL

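Taken together, the defaults touched in this file amount to the following
`celeryconfig.py` sketch; every value shown restates a documented default,
so setting them explicitly is a no-op:

.. code-block:: python

    # celeryconfig.py -- the documented defaults, restated.
    CELERY_DEFAULT_EXCHANGE = "celery"
    CELERY_DEFAULT_EXCHANGE_TYPE = "direct"
    CELERY_DEFAULT_ROUTING_KEY = "celery"
    CELERY_DEFAULT_DELIVERY_MODE = "persistent"

    BROKER_BACKEND = "amqplib"
    BROKER_VHOST = "/"

    CELERY_TASK_SERIALIZER = "pickle"   # or "json", "yaml", "msgpack"
    CELERY_EVENT_SERIALIZER = "json"
    CELERY_SEND_EVENTS = False          # enable for celerymon monitoring
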

+ 13 - 13
docs/cookbook/daemonizing.rst

@@ -20,7 +20,7 @@ start-stop-daemon (Debian/Ubuntu/++)
 See the `contrib/debian/init.d/`_ directory in the celery distribution; this
 directory contains init scripts for celeryd and celerybeat.

-These scripts are configured in ``/etc/default/celeryd``.
+These scripts are configured in :file:`/etc/default/celeryd`.

 .. _`contrib/debian/init.d/`:
     http://github.com/ask/celery/tree/master/contrib/debian/
@@ -30,7 +30,7 @@ These scripts are configured in ``/etc/default/celeryd``.
 Init script: celeryd
 --------------------

-:Usage: ``/etc/init.d/celeryd {start|stop|force-reload|restart|try-restart|status}``
+:Usage: `/etc/init.d/celeryd {start|stop|force-reload|restart|try-restart|status}`
 :Configuration file: /etc/default/celeryd

 To configure celeryd you probably need to at least tell it where to chdir
@@ -43,7 +43,7 @@ Example configuration

 This is an example configuration for a Python project.

-``/etc/default/celeryd``::
+:file:`/etc/default/celeryd`:

     # Where to chdir at start.
     CELERYD_CHDIR="/opt/Myproject/"
@@ -59,7 +59,7 @@ This is an example configuration for a Python project.
 Example Django configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-This is an example configuration for those using ``django-celery``::
+This is an example configuration for those using `django-celery`::

     # Where the Django project is.
     CELERYD_CHDIR="/opt/Project/"
@@ -76,7 +76,7 @@ Available options
 ~~~~~~~~~~~~~~~~~~

 * CELERYD_OPTS
-    Additional arguments to celeryd, see ``celeryd --help`` for a list.
+    Additional arguments to celeryd, see `celeryd --help` for a list.

 * CELERYD_CHDIR
     Path to chdir at start. Default is to stay in the current directory.
@@ -91,7 +91,7 @@ Available options
     Log level to use for celeryd. Default is INFO.

 * CELERYD
-    Path to the celeryd program. Default is ``celeryd``.
+    Path to the celeryd program. Default is `celeryd`.
     You can point this to a virtualenv, or even use manage.py for Django.

 * CELERYD_USER
@@ -104,7 +104,7 @@ Available options

 Init script: celerybeat
 -----------------------
-:Usage: ``/etc/init.d/celerybeat {start|stop|force-reload|restart|try-restart|status}``
+:Usage: `/etc/init.d/celerybeat {start|stop|force-reload|restart|try-restart|status}`
 :Configuration file: /etc/default/celerybeat or /etc/default/celeryd

 .. _debian-initd-celerybeat-example:
@@ -114,7 +114,7 @@ Example configuration

 This is an example configuration for a Python project:

-``/etc/default/celeryd``::
+`/etc/default/celeryd`::

     # Where to chdir at start.
     CELERYD_CHDIR="/opt/Myproject/"
@@ -133,7 +133,7 @@ This is an example configuration for a Python project:
 Example Django configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-This is an example configuration for those using ``django-celery``::
+This is an example configuration for those using `django-celery`::

     # Where the Django project is.
     CELERYD_CHDIR="/opt/Project/"
@@ -156,7 +156,7 @@ Available options
 ~~~~~~~~~~~~~~~~~

 * CELERYBEAT_OPTS
-    Additional arguments to celerybeat, see ``celerybeat --help`` for a
+    Additional arguments to celerybeat, see `celerybeat --help` for a
     list.

 * CELERYBEAT_PIDFILE
@@ -169,7 +169,7 @@ Available options
     Log level to use for celeryd. Default is INFO.

 * CELERYBEAT
-    Path to the celeryd program. Default is ``celeryd``.
+    Path to the celeryd program. Default is `celeryd`.
     You can point this to a virtualenv, or even use manage.py for Django.

 * CELERYBEAT_USER
@@ -193,14 +193,14 @@ This can reveal hints as to why the service won't start.
 Also you will see the commands generated, so you can try to run the celeryd
 command manually to read the resulting error output.

-For example my ``sh -x`` output does this::
+For example my `sh -x` output does this::

     ++ start-stop-daemon --start --chdir /opt/Opal/release/opal --quiet \
         --oknodo --background --make-pidfile --pidfile /var/run/celeryd.pid \
         --exec /opt/Opal/release/opal/manage.py celeryd -- --time-limit=300 \
         -f /var/log/celeryd.log -l INFO

-Run the celeryd command after ``--exec`` (without the ``--``) to show the
+Run the celeryd command after `--exec` (without the `--`) to show the
 actual resulting output::

     $ /opt/Opal/release/opal/manage.py celeryd --time-limit=300 \

+ 2 - 2
docs/cookbook/tasks.rst

@@ -17,9 +17,9 @@ You can accomplish this by using a lock.
 In this example we'll be using the cache framework to set a lock that is
 accessible for all workers.

-It's part of an imaginary RSS feed importer called ``djangofeeds``.
+It's part of an imaginary RSS feed importer called `djangofeeds`.
 The task takes a feed URL as a single argument, and imports that feed into
-a Django model called ``Feed``. We ensure that it's not possible for two or
+a Django model called `Feed`. We ensure that it's not possible for two or
 more workers to import the same feed at the same time by setting a cache key
 consisting of the md5sum of the feed URL.

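A sketch of how that lock can look (the `Feed` model and `djangofeeds`
importer are the imaginary ones named above; the lock timeout is an
arbitrary choice):

.. code-block:: python

    from hashlib import md5
    from celery.decorators import task
    from django.core.cache import cache

    LOCK_EXPIRE = 60 * 5  # lock expires in 5 minutes (arbitrary)

    @task
    def import_feed(feed_url, **kwargs):
        # The cache key is the md5sum of the feed URL, as described above.
        lock_id = "import-feed-lock-%s" % md5(feed_url).hexdigest()

        # cache.add() only succeeds if the key does not already exist,
        # so it doubles as an atomic test-and-set.
        if not cache.add(lock_id, "locked", LOCK_EXPIRE):
            return  # another worker is importing this feed right now

        try:
            from djangofeeds.models import Feed  # the imaginary model
            Feed.objects.import_feed(feed_url)
        finally:
            cache.delete(lock_id)
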

+ 8 - 8
docs/getting-started/broker-installation.rst

@@ -19,7 +19,7 @@ see `Installing RabbitMQ on OS X`_.

 .. note::

-    If you're getting ``nodedown`` errors after installing and using
+    If you're getting `nodedown` errors after installing and using
     :program:`rabbitmqctl` then this blog post can help you identify
     the source of the problem:

@@ -53,7 +53,7 @@ Installing RabbitMQ on OS X
 The easiest way to install RabbitMQ on Snow Leopard is using `Homebrew`_, the new
 and shiny package management system for OS X.

-In this example we'll install homebrew into ``/lol``, but you can
+In this example we'll install homebrew into :file:`/lol`, but you can
 choose whichever destination, even in your home directory if you want, as one of
 the strengths of homebrew is that it's relocatable.

@@ -62,14 +62,14 @@ install git. Download and install from the disk image at
 http://code.google.com/p/git-osx-installer/downloads/list?can=3

 When git is installed you can finally clone the repo, storing it at the
-``/lol`` location::
+:file:`/lol` location::

     $ git clone git://github.com/mxcl/homebrew /lol


 Brew comes with a simple utility called :program:`brew`, used to install, remove and
 query packages. To use it you first have to add it to :envvar:`PATH`, by
-adding the following line to the end of your ``~/.profile``::
+adding the following line to the end of your :file:`~/.profile`::

     export PATH="/lol/bin:/lol/sbin:$PATH"

@@ -99,12 +99,12 @@ Use the :program:`scutil` command to permanently set your hostname::

     sudo scutil --set HostName myhost.local

-Then add that hostname to ``/etc/hosts`` so it's possible to resolve it
+Then add that hostname to :file:`/etc/hosts` so it's possible to resolve it
 back into an IP address::

     127.0.0.1       localhost myhost myhost.local

-If you start the rabbitmq server, your rabbit node should now be ``rabbit@myhost``,
+If you start the rabbitmq server, your rabbit node should now be `rabbit@myhost`,
 as verified by :program:`rabbitmqctl`::

     $ sudo rabbitmqctl status
@@ -120,8 +120,8 @@ as verified by :program:`rabbitmqctl`::
     ...done.

 This is especially important if your DHCP server gives you a hostname
-starting with an IP address, (e.g. ``23.10.112.31.comcast.net``), because
-then RabbitMQ will try to use ``rabbit@23``, which is an illegal hostname.
+starting with an IP address, (e.g. `23.10.112.31.comcast.net`), because
+then RabbitMQ will try to use `rabbit@23`, which is an illegal hostname.

 .. _rabbitmq-osx-start-stop:


+ 2 - 2
docs/getting-started/first-steps-with-celery.rst

@@ -166,8 +166,8 @@ by holding on to the :class:`~celery.result.AsyncResult`::
     >>> result.successful() # returns True if the task didn't end in failure.
     True

-If the task raises an exception, the return value of ``result.successful()``
-will be :const:`False`, and ``result.result`` will contain the exception instance
+If the task raises an exception, the return value of `result.successful()`
+will be :const:`False`, and `result.result` will contain the exception instance
 raised by the task.

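For example, reusing the `add` task from earlier in that tutorial (a
sketch; the failing task is hypothetical):

.. code-block:: python

    >>> result = add.delay(4, 4)
    >>> result.successful()
    True
    >>> result.result
    8

    >>> result = broken_task.delay()  # a task that raises KeyError
    >>> result.successful()
    False
    >>> result.result                 # the exception instance itself
    KeyError('...')
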
 Where to go from here

+ 7 - 7
docs/homepage/index.html

@@ -75,7 +75,7 @@ pageTracker._trackPageview();

         Celery is easy to integrate with Django, Pylons and Flask, using
         the <a href="http://pypi.python.org/pypi/django-celery">django-celery</a>,
-        <a href="http://bitbucket.org/ianschenck/celery-pylons">celery-pylons</a>
+        <a href="http://pypi.python.org/pypi/celery-pylons">celery-pylons</a>
         and <a href="http://github.com/ask/flask-celery">flask-celery</a> add-on packages.

         <h3>Example</h3>
@@ -98,7 +98,7 @@ pageTracker._trackPageview();
     <h3>Getting Started</h3>

     <ol>
-        <li>Install celery by download or <code>pip install -U celery</code></li>
+        <li>Install celery by download or <code>pip install -U Celery</code></li>
         <li>Set up <a href="http://celeryq.org/docs/getting-started/broker-installation.html">RabbitMQ</a>
         or one of the <a href="http://celeryq.org/docs/tutorials/otherqueues.html">ghetto queue</a>
         solutions.
@@ -178,7 +178,7 @@ pageTracker._trackPageview();
       instructions please read the full
       <a href="http://celeryproject.org/docs/changelog.html">changelog</a>.
       Download from <a href="http://pypi.python.org/pypi/celery/1.0.6">PyPI</a>,
-      or simply install the upgrade using <code>pip install -U celery==1.0.6</code>.
+      or simply install the upgrade using <code>pip install -U Celery==1.0.6</code>.
       <hr>
       </span>

@@ -189,7 +189,7 @@ pageTracker._trackPageview();
       broker connection loss, as well as some other minor fixes. Also
       AbortableTask has been added to contrib. Please read the full <a href="http://celeryproject.org/docs/changelog.html">changelog</a>
       before you upgrade. Download from <a href="http://pypi.python.org/pypi/celery/1.0.5">PyPI</a>,
-      or simply install the upgrade using <code>pip install -U celery</code>.
+      or simply install the upgrade using <code>pip install -U Celery</code>.
       <hr>
       </span>

@@ -199,7 +199,7 @@ pageTracker._trackPageview();
       <p>This release contains a drastic improvement in reliability and
       performance. Please read the full <a href="http://celeryproject.org/docs/changelog.html">changelog</a>
       before you upgrade. Download from <a href="http://pypi.python.org/pypi/celery/1.0.3">PyPI</a>,
-      or simply install the upgrade using <code>pip install -U celery</code>.
+      or simply install the upgrade using <code>pip install -U Celery</code>.
       <hr>
       </span>

@@ -211,7 +211,7 @@ pageTracker._trackPageview();
       2.4. Read the full <a href="http://celeryproject.org/docs/changelog.html">Changelog</a>
       for more information. Download from <a
           href="http://pypi.python.org/pypi/celery/1.0.1">PyPI</a>,
-      or simply install the upgrade using <code>pip install -U celery</code>.
+      or simply install the upgrade using <code>pip install -U Celery</code>.
       <hr>
       </span>

@@ -221,7 +221,7 @@ pageTracker._trackPageview();
       <p>Celery 1.0 has finally been released! It is available on <a
           href="http://pypi.python.org/pypi/celery/1.0.0">PyPI</a> for
       downloading. You can also install it via <code>pip install
-          celery</code>. You can read the announcement <a href="http://celeryproject.org/celery_1.0_released.html">here</a>.
+          Celery</code>. You can read the announcement <a href="http://celeryproject.org/celery_1.0_released.html">here</a>.
       <hr>
       </span>


+ 6 - 6
docs/includes/installation.txt

@@ -1,18 +1,18 @@
-You can install ``celery`` either via the Python Package Index (PyPI)
+You can install Celery either via the Python Package Index (PyPI)
 or from source.

-To install using ``pip``,::
+To install using `pip`,::

-    $ pip install celery
+    $ pip install Celery

-To install using ``easy_install``,::
+To install using `easy_install`,::

-    $ easy_install celery
+    $ easy_install Celery

 Downloading and installing from source
 --------------------------------------

-Download the latest version of ``celery`` from
+Download the latest version of `celery` from
 http://pypi.python.org/pypi/celery/

 You can install it by doing the following,::

+ 9 - 9
docs/includes/introduction.txt

@@ -35,7 +35,7 @@ the `django-celery`_, `celery-pylons`_ and `Flask-Celery`_ add-on packages.
 .. _`Pylons`: http://pylonshq.com/
 .. _`Flask`: http://flask.pocoo.org/
 .. _`django-celery`: http://pypi.python.org/pypi/django-celery
-.. _`celery-pylons`: http://bitbucket.org/ianschenck/celery-pylons
+.. _`celery-pylons`: http://pypi.python.org/pypi/celery-pylons
 .. _`Flask-Celery`: http://github.com/ask/flask-celery/
 .. _`operate with other languages using webhooks`:
     http://ask.github.com/celery/userguide/remote-tasks.html
@@ -53,7 +53,7 @@ This is a high level overview of the architecture.
 .. image:: http://cloud.github.com/downloads/ask/celery/Celery-Overview-v4.jpg

 The broker delivers tasks to the worker servers.
-A worker server is a networked machine running ``celeryd``.  This can be one or
+A worker server is a networked machine running `celeryd`.  This can be one or
 more machines depending on the workload.

 The result of the task can be stored for later retrieval (called its
@@ -102,7 +102,7 @@ Features
     |                 | while the queue is temporarily overloaded).        |
     +-----------------+----------------------------------------------------+
     | Concurrency     | Tasks are executed in parallel using the           |
-    |                 | ``multiprocessing`` module.                        |
+    |                 | `multiprocessing` module.                          |
     +-----------------+----------------------------------------------------+
     | Scheduling      | Supports recurring tasks like cron, or specifying  |
     |                 | an exact date or countdown for when after the task |
@@ -189,23 +189,23 @@ is hosted at Github.
 Installation
 ============

-You can install ``celery`` either via the Python Package Index (PyPI)
+You can install Celery either via the Python Package Index (PyPI)
 or from source.

-To install using ``pip``,::
+To install using `pip`,::

-    $ pip install celery
+    $ pip install Celery

-To install using ``easy_install``,::
+To install using `easy_install`,::

-    $ easy_install celery
+    $ easy_install Celery

 .. _celery-installing-from-source:

 Downloading and installing from source
 --------------------------------------

-Download the latest version of ``celery`` from
+Download the latest version of Celery from
 http://pypi.python.org/pypi/celery/

 You can install it by doing the following,::

+ 3 - 3
docs/includes/resources.txt

@@ -44,10 +44,10 @@ http://wiki.github.com/ask/celery/
 Contributing
 ============

-Development of ``celery`` happens at Github: http://github.com/ask/celery
+Development of `celery` happens at Github: http://github.com/ask/celery

 You are highly encouraged to participate in the development
-of ``celery``. If you don't like Github (for some reason) you're welcome
+of `celery`. If you don't like Github (for some reason) you're welcome
 to send regular patches.

 .. _license:
@@ -55,7 +55,7 @@ to send regular patches.
 License
 =======

-This software is licensed under the ``New BSD License``. See the :file:`LICENSE`
+This software is licensed under the `New BSD License`. See the :file:`LICENSE`
 file in the top distribution directory for the full license text.

 .. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround

+ 5 - 5
docs/internals/app-overview.rst

@@ -101,19 +101,19 @@ Deprecations
 Removed deprecations
 ====================

-* ``celery.utils.timedelta_seconds``
+* `celery.utils.timedelta_seconds`
     Use: :func:`celery.utils.timeutils.timedelta_seconds`

-* ``celery.utils.defaultdict``
+* `celery.utils.defaultdict`
     Use: :func:`celery.utils.compat.defaultdict`

-* ``celery.utils.all``
+* `celery.utils.all`
     Use: :func:`celery.utils.compat.all`

-* ``celery.task.apply_async``
+* `celery.task.apply_async`
     Use app.send_task

-* ``celery.task.tasks``
+* `celery.task.tasks`
     Use :data:`celery.registry.tasks`

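The first of these removals is a pure module move; updating the import is
all that changes (a sketch):

.. code-block:: python

    from datetime import timedelta

    # Before: from celery.utils import timedelta_seconds
    from celery.utils.timeutils import timedelta_seconds

    timedelta_seconds(timedelta(minutes=2))  # 120 seconds
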
 Aliases (Pending deprecation)

+ 8 - 8
docs/internals/deprecation.rst

@@ -17,18 +17,18 @@ Removals for version 2.0
     =====================================  =====================================
     **Setting name**                       **Replace with**
     =====================================  =====================================
-    ``CELERY_AMQP_CONSUMER_QUEUES``        ``CELERY_QUEUES``
-    ``CELERY_AMQP_CONSUMER_QUEUES``        ``CELERY_QUEUES``
-    ``CELERY_AMQP_EXCHANGE``               ``CELERY_DEFAULT_EXCHANGE``
-    ``CELERY_AMQP_EXCHANGE_TYPE``          ``CELERY_DEFAULT_AMQP_EXCHANGE_TYPE``
-    ``CELERY_AMQP_CONSUMER_ROUTING_KEY``   ``CELERY_QUEUES``
-    ``CELERY_AMQP_PUBLISHER_ROUTING_KEY``  ``CELERY_DEFAULT_ROUTING_KEY``
+    `CELERY_AMQP_CONSUMER_QUEUES`          `CELERY_QUEUES`
+    `CELERY_AMQP_CONSUMER_QUEUES`          `CELERY_QUEUES`
+    `CELERY_AMQP_EXCHANGE`                 `CELERY_DEFAULT_EXCHANGE`
+    `CELERY_AMQP_EXCHANGE_TYPE`            `CELERY_DEFAULT_AMQP_EXCHANGE_TYPE`
+    `CELERY_AMQP_CONSUMER_ROUTING_KEY`     `CELERY_QUEUES`
+    `CELERY_AMQP_PUBLISHER_ROUTING_KEY`    `CELERY_DEFAULT_ROUTING_KEY`
     =====================================  =====================================

 * :envvar:`CELERY_LOADER` definitions without class name.

-    E.g. ``celery.loaders.default``, needs to include the class name:
-    ``celery.loaders.default.Loader``.
+    E.g. `celery.loaders.default`, needs to include the class name:
+    `celery.loaders.default.Loader`.

 * :meth:`TaskSet.run`. Use :meth:`celery.task.base.TaskSet.apply_async`
     instead.
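
In practice the loader change is just a longer dotted path, e.g. when set
through the environment (a sketch):

.. code-block:: python

    import os

    # Deprecated: module path without the class name.
    os.environ["CELERY_LOADER"] = "celery.loaders.default"

    # Required from 2.0 on: include the class name.
    os.environ["CELERY_LOADER"] = "celery.loaders.default.Loader"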

+ 10 - 10
docs/internals/protocol.rst

@@ -11,41 +11,41 @@ Message format
 ==============

 * task
-    ``string``
+    `string`

     Name of the task. **required**

 * id
-    ``string``
+    `string`

     Unique id of the task (UUID). **required**

 * args
-    ``list``
+    `list`

     List of arguments. Will be an empty list if not provided.

 * kwargs
-    ``dictionary``
+    `dictionary`

     Dictionary of keyword arguments. Will be an empty dictionary if not
     provided.

 * retries
-    ``int``
+    `int`

     Current number of times this task has been retried.
-    Defaults to ``0`` if not specified.
+    Defaults to `0` if not specified.

 * eta
-    ``string`` (ISO 8601)
+    `string` (ISO 8601)

     Estimated time of arrival. This is the date and time in ISO 8601
     format. If not provided the message is not scheduled, but will be
     executed asap.

 * expires (introduced after v2.0.2)
-    ``string`` (ISO 8601)
+    `string` (ISO 8601)

     Expiration date. This is the date and time in ISO 8601 format.
     If not provided the message will never expire. The message
@@ -55,7 +55,7 @@ Message format
 Example message
 ===============

-This is an example invocation of the ``celery.task.PingTask`` task in JSON
+This is an example invocation of the `celery.task.PingTask` task in JSON
 format:

 .. code-block:: javascript
@@ -70,7 +70,7 @@ Serialization
 =============

 The protocol supports several serialization formats using the
-``content_type`` message header.
+`content_type` message header.

 The MIME-types supported by default are shown in the following table.

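Putting the fields together, a task message body is just a dictionary like
the following before serialization (all values here are illustrative, not
taken from the original example):

.. code-block:: python

    import uuid
    from datetime import datetime, timedelta

    message = {
        "task": "celery.task.PingTask",  # required
        "id": str(uuid.uuid4()),         # required (UUID)
        "args": [],
        "kwargs": {},
        "retries": 0,
        "eta": datetime.now().isoformat(),                            # ISO 8601
        "expires": (datetime.now() + timedelta(days=1)).isoformat(),  # ISO 8601
    }
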

+ 6 - 6
docs/internals/worker.rst

@@ -23,7 +23,7 @@ ready_queue
 -----------

 The ready queue is either an instance of :class:`Queue.Queue`, or
-`celery.buckets.TaskBucket`. The latter if rate limiting is enabled.
+:class:`celery.buckets.TaskBucket`.  The latter if rate limiting is enabled.

 eta_schedule
 ------------
@@ -44,20 +44,20 @@ Receives messages from the broker using `Kombu`_.
 When a message is received it's converted into a
 :class:`celery.worker.job.TaskRequest` object.

-Tasks with an ETA are entered into the ``eta_schedule``, messages that can
-be immediately processed are moved directly to the ``ready_queue``.
+Tasks with an ETA are entered into the `eta_schedule`, messages that can
+be immediately processed are moved directly to the `ready_queue`.

 ScheduleController
 ------------------

-The schedule controller is running the ``eta_schedule``.
-If the scheduled tasks eta has passed it is moved to the ``ready_queue``,
+The schedule controller is running the `eta_schedule`.
+If the scheduled tasks eta has passed it is moved to the `ready_queue`,
 otherwise the thread sleeps until the eta is met (remember that the schedule
 is sorted by time).

 Mediator
 --------
-The mediator simply moves tasks in the ``ready_queue`` over to the
+The mediator simply moves tasks in the `ready_queue` over to the
 task pool for execution using
 :meth:`celery.worker.job.TaskRequest.execute_using_pool`.

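A rough sketch of what the mediator does, heavily simplified from the real
:class:`celery.worker.controllers.Mediator`:

.. code-block:: python

    class Mediator(object):
        """Sketch: move ready tasks into the pool (simplified)."""

        def __init__(self, ready_queue, pool):
            self.ready_queue = ready_queue
            self.pool = pool

        def move(self):
            # Blocks until the consumer (or the schedule controller)
            # puts a TaskRequest on the ready queue.
            request = self.ready_queue.get()
            request.execute_using_pool(self.pool)
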

+ 1 - 1
docs/links.rst

@@ -9,7 +9,7 @@
 celery
 ------

-* IRC logs from ``#celery`` (Freenode):
+* IRC logs from `#celery` (Freenode):
     http://botland.oebfare.com/logger/celery/

 .. _links-amqp:

+ 57 - 0
docs/reference/celery.app.rst

@@ -0,0 +1,57 @@
+.. currentmodule:: celery.app
+
+.. automodule:: celery.app
+
+    .. contents::
+        :local:
+
+    Functions
+    ---------
+
+    .. autofunction:: app_or_default
+
+    Application
+    -----------
+
+    .. autoclass:: App
+
+        .. attribute:: main
+
+            Name of the `__main__` module.  Required for standalone scripts.
+
+            If set this will be used instead of `__main__` when automatically
+            generating task names.
+
+        .. autoattribute:: main
+        .. autoattribute:: amqp
+        .. autoattribute:: backend
+        .. autoattribute:: loader
+        .. autoattribute:: conf
+        .. autoattribute:: control
+        .. autoattribute:: log
+
+        .. automethod:: config_from_object
+        .. automethod:: config_from_envvar
+        .. automethod:: config_from_cmdline
+
+        .. automethod:: task
+        .. automethod:: create_task_cls
+        .. automethod:: TaskSet
+        .. automethod:: send_task
+        .. automethod:: AsyncResult
+        .. automethod:: TaskSetResult
+
+        .. automethod:: worker_main
+        .. automethod:: Worker
+        .. automethod:: Beat
+
+        .. automethod:: broker_connection
+        .. automethod:: with_default_connection
+
+        .. automethod:: mail_admins
+
+        .. automethod:: pre_config_merge
+        .. automethod:: post_config_merge
+
+        .. automethod:: either
+        .. automethod:: merge
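
A short usage sketch for the class documented above (assuming a
`celeryconfig` module on the import path; the `add` task is hypothetical):

.. code-block:: python

    from celery.app import App

    app = App()
    app.config_from_object("celeryconfig")

    @app.task
    def add(x, y):
        return x + y

    print(add.delay(2, 2).get())  # 4, once a worker picks it up

    # Tasks can also be sent by name, without importing them:
    app.send_task("tasks.add", args=[2, 2], kwargs={})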

+ 37 - 38
docs/reference/celery.conf.rst

@@ -27,8 +27,8 @@ Queues

 .. data:: DEFAULT_DELIVERY_MODE

-    Default delivery mode (``"persistent"`` or ``"non-persistent"``).
-    Default is ``"persistent"``.
+    Default delivery mode (`"persistent"` or `"non-persistent"`).
+    Default is `"persistent"`.

 .. data:: DEFAULT_ROUTING_KEY

@@ -45,65 +45,65 @@ Queues
     broadcast messages. The worker's hostname will be appended
     to the prefix to create the final queue name.

-    Default is ``"celeryctl"``.
+    Default is `"celeryctl"`.

 .. data:: BROADCAST_EXCHANGE

     Name of the exchange used for broadcast messages.

-    Default is ``"celeryctl"``.
+    Default is `"celeryctl"`.

 .. data:: BROADCAST_EXCHANGE_TYPE

-    Exchange type used for broadcast messages. Default is ``"fanout"``.
+    Exchange type used for broadcast messages. Default is `"fanout"`.

 .. data:: EVENT_QUEUE

     Name of queue used to listen for event messages. Default is
-    ``"celeryevent"``.
+    `"celeryevent"`.

 .. data:: EVENT_EXCHANGE

-    Exchange used to send event messages. Default is ``"celeryevent"``.
+    Exchange used to send event messages. Default is `"celeryevent"`.

 .. data:: EVENT_EXCHANGE_TYPE

-    Exchange type used for the event exchange. Default is ``"topic"``.
+    Exchange type used for the event exchange. Default is `"topic"`.

 .. data:: EVENT_ROUTING_KEY

-    Routing key used for events. Default is ``"celeryevent"``.
+    Routing key used for events. Default is `"celeryevent"`.

 .. data:: EVENT_SERIALIZER

     Type of serialization method used to serialize events. Default is
-    ``"json"``.
+    `"json"`.

 .. data:: RESULT_EXCHANGE

     Exchange used by the AMQP result backend to publish task results.
-    Default is ``"celeryresult"``.
+    Default is `"celeryresult"`.

 Sending E-Mails
 ===============

 .. data:: CELERY_SEND_TASK_ERROR_EMAILS

-    If set to ``True``, errors in tasks will be sent to :data:`ADMINS` by e-mail.
+    If set to `True`, errors in tasks will be sent to :data:`ADMINS` by e-mail.

 .. data:: ADMINS

-    List of ``(name, email_address)`` tuples for the admins that should
+    List of `(name, email_address)` tuples for the admins that should
     receive error e-mails.

 .. data:: SERVER_EMAIL

     The e-mail address this worker sends e-mails from.
-    Default is ``"celery@localhost"``.
+    Default is `"celery@localhost"`.

 .. data:: MAIL_HOST

-    The mail server to use. Default is ``"localhost"``.
+    The mail server to use. Default is `"localhost"`.

 .. data:: MAIL_HOST_USER

@@ -115,7 +115,7 @@ Sending E-Mails

 .. data:: MAIL_PORT

-    The port the mail server is listening on. Default is ``25``.
+    The port the mail server is listening on. Default is `25`.

 Execution
 =========
@@ -126,8 +126,8 @@ Execution

 .. data:: EAGER_PROPAGATES_EXCEPTIONS

-    If set to ``True``, :func:`celery.execute.apply` will re-raise task exceptions.
-    It's the same as always running apply with ``throw=True``.
+    If set to `True`, :func:`celery.execute.apply` will re-raise task exceptions.
+    It's the same as always running apply with `throw=True`.

 .. data:: TASK_RESULT_EXPIRES

@@ -149,7 +149,7 @@ Execution

 .. data:: STORE_ERRORS_EVEN_IF_IGNORED

-    If enabled, task errors will be stored even though ``Task.ignore_result``
+    If enabled, task errors will be stored even though `Task.ignore_result`
     is enabled.

 .. data:: MAX_CACHED_RESULTS
@@ -159,12 +159,11 @@ Execution

 .. data:: TASK_SERIALIZER

-    A string identifying the default serialization
-    method to use. Can be ``pickle`` (default),
-    ``json``, ``yaml``, or any custom serialization methods that have
-    been registered with :mod:`kombu.serialization.registry`.
+    A string identifying the default serialization method to use.

-    Default is ``pickle``.
+    Can be `pickle` (default), `json`, `yaml`, `msgpack` or any custom
+    serialization methods that have been registered with
+    :mod:`kombu.serialization.registry`.

 .. data:: RESULT_BACKEND

@@ -177,8 +176,8 @@ Execution
 .. data:: SEND_EVENTS

     If set, celery will send events that can be captured by monitors like
-    ``celerymon``.
-    Default is: ``False``.
+    `celerymon`.
+    Default is: `False`.

 .. data:: DEFAULT_RATE_LIMIT

@@ -187,7 +186,7 @@ Execution

 .. data:: DISABLE_RATE_LIMITS

-    If ``True`` all rate limits will be disabled and all tasks will be executed
+    If `True` all rate limits will be disabled and all tasks will be executed
     as soon as possible.

 Broker
@@ -203,9 +202,9 @@ Broker
     Maximum number of retries before we give up re-establishing a connection
     to the broker.

-    If this is set to ``0`` or :const:`None`, we will retry forever.
+    If this is set to `0` or :const:`None`, we will retry forever.

-    Default is ``100`` retries.
+    Default is `100` retries.

 Celerybeat
 ==========
@@ -213,7 +212,7 @@ Celerybeat
 .. data:: CELERYBEAT_LOG_LEVEL

     Default log level for celerybeat.
-    Default is: ``INFO``.
+    Default is: `INFO`.

 .. data:: CELERYBEAT_LOG_FILE

@@ -223,7 +222,7 @@ Celerybeat
 .. data:: CELERYBEAT_SCHEDULE_FILENAME

     Name of the persistent schedule database file.
-    Default is: ``celerybeat-schedule``.
+    Default is: `celerybeat-schedule`.

 .. data:: CELERYBEAT_MAX_LOOP_INTERVAL

@@ -241,7 +240,7 @@ Celerymon
 .. data:: CELERYMON_LOG_LEVEL

     Default log level for celerymon.
-    Default is: ``INFO``.
+    Default is: `INFO`.

 .. data:: CELERYMON_LOG_FILE

@@ -275,31 +274,31 @@ Celeryd
 .. data:: CELERYD_CONCURRENCY

     The number of concurrent worker processes.
-    If set to ``0`` (the default), the total number of available CPUs/cores
+    If set to `0` (the default), the total number of available CPUs/cores
     will be used.

 .. data:: CELERYD_PREFETCH_MULTIPLIER

     The number of concurrent workers is multiplied by this number to yield
     the wanted AMQP QoS message prefetch count.
-    Default is: ``4``
+    Default is: `4`

 .. data:: CELERYD_POOL

     Name of the task pool class used by the worker.
-    Default is ``"celery.concurrency.processes.TaskPool"``.
+    Default is `"celery.concurrency.processes.TaskPool"`.

 .. data:: CELERYD_CONSUMER

     Name of the consumer class used by the worker.
-    Default is ``"celery.worker.consumer.Consumer"``.
+    Default is `"celery.worker.consumer.Consumer"`.

 .. data:: CELERYD_MEDIATOR

     Name of the mediator class used by the worker.
-    Default is ``"celery.worker.controllers.Mediator"``.
+    Default is `"celery.worker.controllers.Mediator"`.

 .. data:: CELERYD_ETA_SCHEDULER

     Name of the ETA scheduler class used by the worker.
-    Default is ``"celery.worker.controllers.ScheduleController"``.
+    Default is `"celery.worker.controllers.ScheduleController"`.
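
Since these are plain module attributes, the effective values are easy to
inspect from a shell (a small sketch; the output assumes the documented
defaults are unchanged):

.. code-block:: python

    >>> from celery import conf
    >>> conf.TASK_SERIALIZER
    'pickle'
    >>> conf.BROADCAST_EXCHANGE_TYPE
    'fanout'
    >>> conf.CELERYD_PREFETCH_MULTIPLIER
    4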

+ 2 - 2
docs/reference/celery.signals.rst

@@ -31,8 +31,8 @@ Example connecting to the :data:`task_sent` signal:

 Some signals also have a sender which you can filter by. For example the
 :data:`task_sent` signal uses the task name as a sender, so you can
-connect your handler to be called only when tasks with name ``"tasks.add"``
-has been sent by providing the ``sender`` argument to
+connect your handler to be called only when tasks with name `"tasks.add"`
+has been sent by providing the `sender` argument to
 :class:`~celery.utils.dispatch.signal.Signal.connect`:

 .. code-block:: python
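
    # The original example is truncated in this hunk; the following is a
    # sketch of such a handler (argument names assumed, not taken from
    # the docs):
    from celery.signals import task_sent

    def task_sent_handler(sender=None, task_id=None, **kwargs):
        print("task_sent: %s dispatched with id %s" % (sender, task_id))

    # Only invoked for the task named "tasks.add":
    task_sent.connect(task_sent_handler, sender="tasks.add")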

+ 1 - 0
docs/reference/index.rst

@@ -8,6 +8,7 @@
 .. toctree::
     :maxdepth: 2

+    celery.app
     celery.decorators
     celery.task.base
     celery.task.sets

+ 11 - 11
docs/releases/1.0/announcement.rst

@@ -39,11 +39,11 @@ API will be deprecated; so, for example, if we decided to remove a function
 that existed in Celery 1.0:

 * Celery 1.2 will contain a backwards-compatible replica of the function which
-  will raise a ``PendingDeprecationWarning``.
+  will raise a `PendingDeprecationWarning`.
   This warning is silent by default; you need to explicitly turn on display
   of these warnings.
 * Celery 1.4 will contain the backwards-compatible replica, but the warning
-  will be promoted to a full-fledged ``DeprecationWarning``. This warning
+  will be promoted to a full-fledged `DeprecationWarning`. This warning
   is loud by default, and will likely be quite annoying.
 * Celery 1.6 will remove the feature outright.

@@ -89,15 +89,15 @@ What's new?

 * New periodic task service.

-    Periodic tasks are no longer dispatched by ``celeryd``, but instead by a
-    separate service called ``celerybeat``. This is an optimized, centralized
+    Periodic tasks are no longer dispatched by `celeryd`, but instead by a
+    separate service called `celerybeat`. This is an optimized, centralized
     service dedicated to your periodic tasks, which means you don't have to
     worry about deadlocks or race conditions any more. But that does mean you
     have to make sure only one instance of this service is running at any one
     time.

-  **TIP:** If you're only running a single ``celeryd`` server, you can embed
-  ``celerybeat`` inside it. Just add the ``--beat`` argument.
+  **TIP:** If you're only running a single `celeryd` server, you can embed
+  `celerybeat` inside it. Just add the `--beat` argument.


 * Broadcast commands
@@ -120,12 +120,12 @@ What's new?
 * Platform agnostic message format.

   The message format has been standardized and is now using the ISO-8601 format
-  for dates instead of Python ``datetime`` objects. This means you can write task
-  consumers in other languages than Python (``eceleryd`` anyone?)
+  for dates instead of Python `datetime` objects. This means you can write task
+  consumers in other languages than Python (`eceleryd` anyone?)

 * Timely

-  Periodic tasks are now scheduled on the clock, i.e. ``timedelta(hours=1)``
+  Periodic tasks are now scheduled on the clock, i.e. `timedelta(hours=1)`
   means every hour at :00 minutes, not every hour from when the server starts.
   To revert to the previous behavior you have the option to enable
   :attr:`PeriodicTask.relative`.
@@ -140,8 +140,8 @@ change set before you continue.
 .. _`changelog`: http://ask.github.com/celery/changelog.html
 .. _`changelog`: http://ask.github.com/celery/changelog.html
 
 
 **TIP:** If you install the :mod:`setproctitle` module you can see which
 **TIP:** If you install the :mod:`setproctitle` module you can see which
-task each worker process is currently executing in ``ps`` listings.
-Just install it using pip: ``pip install setproctitle``.
+task each worker process is currently executing in `ps` listings.
+Just install it using pip: `pip install setproctitle`.
 
 
 Resources
 Resources
 =========
 =========

+ 21 - 19
docs/tutorials/clickcounter.rst

@@ -18,8 +18,8 @@ you are likely to bump into problems. One database write for every click is
 not good if you have millions of clicks a day.
 
 So what can you do? In this tutorial we will send the individual clicks as
-messages using ``kombu``, and then process them later with a ``celery``
-periodic task.
+messages using `kombu`, and then process them later with a Celery periodic
+task.
 
 Celery and Kombu are excellent in tandem, and while this might not be
 the perfect example, you'll at least see one example of how they can be used
 to solve a task.
@@ -28,9 +28,9 @@ to solve a task.
 The model
 =========
 
-The model is simple, ``Click`` has the URL as primary key and a number of
-clicks for that URL. Its manager, ``ClickManager`` implements the
-``increment_clicks`` method, which takes a URL and by how much to increment
+The model is simple: `Click` has the URL as primary key and a number of
+clicks for that URL. Its manager, `ClickManager`, implements the
+`increment_clicks` method, which takes a URL and how much to increment
 its count by.
 
 
@@ -75,22 +75,22 @@ Using Kombu to send clicks as messages
 
 The model is normal django stuff, nothing new there. But now we get on to
 the messaging. It has been a tradition for me to put the project's messaging
-related code in its own ``messaging.py`` module, and I will continue to do so
+related code in its own `messaging.py` module, and I will continue to do so
 here so maybe you can adopt this practice. In this module we have two
 functions:
 
-* ``send_increment_clicks``
+* `send_increment_clicks`
 
   This function sends a simple message to the broker. The message body only
   contains the URL we want to increment as plain-text, so the exchange and
-  routing key play a role here. We use an exchange called ``clicks``, with a
-  routing key of ``increment_click``, so any consumer binding a queue to
+  routing key play a role here. We use an exchange called `clicks`, with a
+  routing key of `increment_click`, so any consumer binding a queue to
   this exchange using this routing key will receive these messages.
 
-* ``process_clicks``
+* `process_clicks`
 
   This function processes all currently gathered clicks sent using
-  ``send_increment_clicks``. Instead of issuing one database query for every
+  `send_increment_clicks`. Instead of issuing one database query for every
   click it processes all of the messages first, calculates the new click count
   and issues one update per URL. A message that has been received will not be
   deleted from the broker until it has been acknowledged by the receiver, so
@@ -98,11 +98,13 @@ functions:
   re-sent at a later point in time. This guarantees delivery and we respect
   this feature here by not acknowledging the message until the clicks have
   actually been written to disk.
-  
-  **Note**: This could probably be optimized further with
-  some hand-written SQL, but it will do for now. Let's say it's an exercise
-  left for the picky reader, albeit a discouraged one if you can survive
-  without doing it.
+
+  .. note::
+
+    This could probably be optimized further with
+    some hand-written SQL, but it will do for now. Let's say it's an exercise
+    left for the picky reader, albeit a discouraged one if you can survive
+    without doing it.
 
 On to the code...
 
@@ -174,7 +176,7 @@ would want to count the clicks for, you replace the URL with:
 
     http://mysite/clickmuncher/count/?u=http://google.com
 
-and the ``count`` view will send off an increment message and forward you to
+and the `count` view will send off an increment message and forward you to
 that site.
 
 *clickmuncher/views.py*:
@@ -223,8 +225,8 @@ Processing the clicks every 30 minutes is easy using celery periodic tasks.
         def run(self, **kwargs):
             process_clicks()
 
-We subclass from :class:`celery.task.base.PeriodicTask`, set the ``run_every``
-attribute and in the body of the task just call the ``process_clicks``
+We subclass from :class:`celery.task.base.PeriodicTask`, set the `run_every`
+attribute and in the body of the task just call the `process_clicks`
 function we wrote earlier.
 
 

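For reference (not part of the diff): a rough sketch of what the two messaging
functions could look like against kombu's Producer/Consumer API. The `Click`
model, its `increment_clicks` manager method, and the `clickmuncher` app are
the tutorial's; the exact kombu calls here are illustrative assumptions, not
the committed code.

.. code-block:: python

    import socket

    from kombu import BrokerConnection, Exchange, Queue
    from kombu.messaging import Consumer, Producer

    from clickmuncher.models import Click

    click_exchange = Exchange("clicks", type="direct")
    click_queue = Queue("clicks", click_exchange,
                        routing_key="increment_click")

    def send_increment_clicks(for_url):
        """Send a message with the URL to increment as the body."""
        connection = BrokerConnection()
        channel = connection.channel()
        producer = Producer(channel, exchange=click_exchange,
                            routing_key="increment_click")
        producer.publish(for_url)
        connection.close()

    def process_clicks():
        """Drain all waiting click messages, issue one update per URL,
        and only then acknowledge the messages."""
        connection = BrokerConnection()
        channel = connection.channel()
        consumer = Consumer(channel, click_queue)
        clicks_for_url = {}
        messages = []

        def receive_click(body, message):
            clicks_for_url[body] = clicks_for_url.get(body, 0) + 1
            messages.append(message)

        consumer.register_callback(receive_click)
        consumer.consume()
        try:
            while True:
                connection.drain_events(timeout=1)
        except socket.timeout:
            pass

        for url, clicks in clicks_for_url.items():
            Click.objects.increment_clicks(url, clicks)
        # Acknowledge only after the clicks have been written to disk.
        for message in messages:
            message.ack()
        connection.close()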
+ 2 - 2
docs/tutorials/otherqueues.rst

@@ -45,7 +45,7 @@ Database
 Configuration
 -------------
 
-The database backend uses the Django ``DATABASE_*`` settings for database
+The database backend uses the Django `DATABASE_*` settings for database
 configuration values.
 
 #. Set your carrot backend::
@@ -53,7 +53,7 @@ configuration values.
     CARROT_BACKEND = "ghettoq.taproot.Database"
 
 
-#. Add :mod:`ghettoq` to ``INSTALLED_APPS``::
+#. Add :mod:`ghettoq` to `INSTALLED_APPS`::
 
     INSTALLED_APPS = ("ghettoq", )
 

+ 27 - 27
docs/userguide/executing.rst

@@ -16,28 +16,28 @@ Basics
 Executing tasks is done with :meth:`~celery.task.Base.Task.apply_async`,
 and the shortcut: :meth:`~celery.task.Base.Task.delay`.
 
-``delay`` is simple and convenient, as it looks like calling a regular
+`delay` is simple and convenient, as it looks like calling a regular
 function:
 
 .. code-block:: python
 
     Task.delay(arg1, arg2, kwarg1="x", kwarg2="y")
 
-The same using ``apply_async`` is written like this:
+The same using `apply_async` is written like this:
 
 .. code-block:: python
 
     Task.apply_async(args=[arg1, arg2], kwargs={"kwarg1": "x", "kwarg2": "y"})
 
 
-While ``delay`` is convenient, it doesn't give you as much control as using
-``apply_async``.  With ``apply_async`` you can override the execution options
-available as attributes on the ``Task`` class (see :ref:`task-options`).
+While `delay` is convenient, it doesn't give you as much control as using
+`apply_async`.  With `apply_async` you can override the execution options
+available as attributes on the `Task` class (see :ref:`task-options`).
 In addition you can set countdown/eta, task expiry, provide a custom broker
 connection and more.
 
 Let's go over these in more detail.  All the examples use a simple task,
-called ``add``, taking two positional arguments and returning the sum:
+called `add`, taking two positional arguments and returning the sum:
 
 .. code-block:: python
 
@@ -62,7 +62,7 @@ ETA and countdown
 =================
 
 The ETA (estimated time of arrival) lets you set a specific date and time that
-is the earliest time at which your task will be executed.  ``countdown`` is
+is the earliest time at which your task will be executed.  `countdown` is
 a shortcut to set eta by seconds into the future.
 
 .. code-block:: python
@@ -79,7 +79,7 @@ are executed in a timely manner you should monitor queue lengths. Use
 Munin, or similar tools, to receive alerts, so appropriate action can be
 taken to ease the workload.  See :ref:`monitoring-munin`.
 
-While ``countdown`` is an integer, ``eta`` must be a :class:`~datetime.datetime`
+While `countdown` is an integer, `eta` must be a :class:`~datetime.datetime`
 object, specifying an exact date and time (including millisecond precision,
 and timezone information):
 
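For reference (not part of the diff): the two options in practice, using the
`add` example task from this guide:

.. code-block:: python

    from datetime import datetime, timedelta

    # Executes at the earliest 10 seconds from now:
    add.apply_async(args=[10, 10], countdown=10)

    # Or at an exact date and time:
    tomorrow = datetime.now() + timedelta(days=1)
    add.apply_async(args=[10, 10], eta=tomorrow)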
@@ -95,7 +95,7 @@ and timezone information):
 Expiration
 ==========
 
-The ``expires`` argument defines an optional expiry time,
+The `expires` argument defines an optional expiry time,
 either as seconds after task publish, or a specific date and time using
 :class:`~datetime.datetime`:
 
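For reference (not part of the diff): both forms of the `expires` argument,
again using the `add` example task:

.. code-block:: python

    from datetime import datetime, timedelta

    # Expires 60 seconds after publishing:
    add.apply_async(args=[10, 10], expires=60)

    # Or expires at an exact date and time:
    add.apply_async(args=[10, 10],
                    expires=datetime.now() + timedelta(days=1))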
@@ -121,8 +121,8 @@ Serializers
 Data transferred between clients and workers needs to be serialized.
 The default serializer is :mod:`pickle`, but you can
 change this globally or for each individual task.
-There is built-in support for :mod:`pickle`, ``JSON``, ``YAML``
-and ``msgpack``, and you can also add your own custom serializers by registering
+There is built-in support for :mod:`pickle`, `JSON`, `YAML`
+and `msgpack`, and you can also add your own custom serializers by registering
 them into the Kombu serializer registry (see
 `Kombu: Serialization of Data`_).
 
@@ -182,12 +182,12 @@ be available for the worker.
 The client uses the following order to decide which serializer
 to use when sending a task:
 
-    1. The ``serializer`` argument to ``apply_async``
-    2. The tasks ``serializer`` attribute
+    1. The `serializer` argument to `apply_async`
+    2. The task's `serializer` attribute
     3. The default :setting:`CELERY_TASK_SERIALIZER` setting.
 
 
-*Using the ``serializer`` argument to ``apply_async``*:
+*Using the `serializer` argument to `apply_async`*:
 
 .. code-block:: python
 
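For reference (not part of the diff): a sketch of what using the argument
looks like:

.. code-block:: python

    add.apply_async(args=[10, 10], serializer="json")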
@@ -199,7 +199,7 @@ Connections and connection timeouts.
 ====================================
 
 Currently there is no support for broker connection pools, so
-``apply_async`` establishes and closes a new connection every time
+`apply_async` establishes and closes a new connection every time
 it is called.  This is something you need to be aware of when sending
 more than one task at a time.
 
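For reference (not part of the diff): the pattern this section leads to is
reusing one publisher when sending many tasks. A sketch, assuming the
`Task.get_publisher` helper and the `publisher` argument to `apply_async`
available in this Celery version:

.. code-block:: python

    numbers = [(2, 2), (4, 4), (8, 8), (16, 16)]

    results = []
    publisher = add.get_publisher()
    try:
        for args in numbers:
            res = add.apply_async(args=args, publisher=publisher)
            results.append(res)
    finally:
        publisher.close()
        publisher.connection.close()

    print([res.get() for res in results])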
@@ -231,7 +231,7 @@ publisher:
 
 The connection timeout is the number of seconds to wait before giving up
 on establishing the connection.  You can set this by using the
-``connect_timeout`` argument to ``apply_async``:
+`connect_timeout` argument to `apply_async`:
 
 .. code-block:: python
 
@@ -258,11 +258,11 @@ process video, others process images, and some gather collective intelligence
 about its users.  Some of these tasks are more important, so we want to make
 sure the high priority tasks get sent to dedicated nodes.
 
-For the sake of this example we have a single exchange called ``tasks``.
+For the sake of this example we have a single exchange called `tasks`.
 There are different types of exchanges, each type interpreting the routing
 key in different ways, implementing different messaging scenarios.
 
-The most common types used with Celery are ``direct`` and ``topic``.
+The most common types used with Celery are `direct` and `topic`.
 
 * direct
 
@@ -271,14 +271,14 @@ The most common types used with Celery are ``direct`` and ``topic``.
 * topic
 
     In the topic exchange the routing key is made up of words separated by
-    dots (``.``).  Words can be matched by the wild cards ``*`` and ``#``,
-    where ``*`` matches one exact word, and ``#`` matches zero or more words.
+    dots (`.`).  Words can be matched by the wild cards `*` and `#`,
+    where `*` matches one exact word, and `#` matches zero or more words.
 
-    For example, ``*.stock.#`` matches the routing keys ``usd.stock`` and
-    ``euro.stock.db`` but not ``stock.nasdaq``.
+    For example, `*.stock.#` matches the routing keys `usd.stock` and
+    `euro.stock.db` but not `stock.nasdaq`.
 
-We create three queues, ``video``, ``image`` and ``lowpri`` that binds to
-the ``tasks`` exchange.  For the queues we use the following binding keys::
+We create three queues, `video`, `image` and `lowpri`, that bind to
+the `tasks` exchange.  For the queues we use the following binding keys::
 
     video: video.#
    image: image.#
@@ -301,8 +301,8 @@ listen to different queues:
 
 
 Later, if the crop task is consuming a lot of resources,
-we can bind new workers to handle just the ``"image.crop"`` task,
-by creating a new queue that binds to ``"image.crop``".
+we can bind new workers to handle just the `"image.crop"` task,
+by creating a new queue that binds to `"image.crop"`.
 
 .. seealso::
 
@@ -329,7 +329,7 @@ Not supported by :mod:`amqplib`.
 
 * priority
 
-A number between ``0`` and ``9``, where ``0`` is the highest priority.
+A number between `0` and `9`, where `0` is the highest priority.
 
 .. note::
 

+ 36 - 36
docs/userguide/monitoring.rst

@@ -67,7 +67,7 @@ Commands
        $ celeryctl inspect scheduled
 
    These are tasks reserved by the worker because they have the
-    ``eta`` or ``countdown`` argument set.
+    `eta` or `countdown` argument set.
 
 * **inspect reserved**: List reserved tasks
    ::
@@ -106,7 +106,7 @@ Commands
 
 .. note::
 
-    All ``inspect`` commands supports a ``--timeout`` argument,
+    All `inspect` commands support a `--timeout` argument.
    This is the number of seconds to wait for responses.
    You may have to increase this timeout if you're getting empty responses
    due to latency.
@@ -118,7 +118,7 @@ Specifying destination nodes
 
 By default the inspect commands operate on all workers.
 You can specify a single, or a list of workers by using the
-``--destination`` argument::
+`--destination` argument::
 
    $ celeryctl inspect -d w1,w2 reserved
 
@@ -161,7 +161,7 @@ If you haven't already enabled the sending of events you need to do so::
 
    $ python manage.py celeryctl inspect enable_events
 
-:Tip: You can enable events when the worker starts using the ``-E`` argument
+:Tip: You can enable events when the worker starts using the `-E` argument
      to :mod:`~celery.bin.celeryd`.
 
 Now that the camera has been started, and events have been enabled
@@ -179,21 +179,21 @@ Shutter frequency
 
 By default the camera takes a snapshot every second; if this is too frequent
 or you want to have higher precision, then you can change this using the
-``--frequency`` argument.  This is a float describing how often, in seconds,
+`--frequency` argument.  This is a float describing how often, in seconds,
 it should wake up to check if there are any new events::
 
    $ python manage.py celerycam --frequency=3.0
 
-The camera also supports rate limiting using the ``--maxrate`` argument.
+The camera also supports rate limiting using the `--maxrate` argument.
 While the frequency controls how often the camera thread wakes up,
 the rate limit controls how often it will actually take a snapshot.
 
 The rate limits can be specified in seconds, minutes or hours
-by appending ``/s``, ``/m`` or ``/h`` to the value.
-Example: ``--maxrate=100/m``, means "hundred writes a minute".
+by appending `/s`, `/m` or `/h` to the value.
+Example: `--maxrate=100/m` means "a hundred writes a minute".
 
 The rate limit is off by default, which means it will take a snapshot
-for every ``--frequency`` seconds.
+every `--frequency` seconds.
 
 The events also expire after some time, so the database doesn't fill up.
 Successful tasks are deleted after 1 day, failed tasks after 3 days,
 and tasks in other states after 5 days.
@@ -204,7 +204,7 @@ and tasks in other states after 5 days.
 Using outside of Django
 ~~~~~~~~~~~~~~~~~~~~~~~
 
-``django-celery`` also installs the :program:`djcelerymon` program. This
+`django-celery` also installs the :program:`djcelerymon` program. This
 can be used by non-Django users, and runs both a webserver and a snapshot
 camera in the same process.
 
@@ -226,12 +226,12 @@ and sets up the Django environment using the same settings::
    $ djcelerymon
 
 Database tables will be created the first time the monitor is run.
-By default an ``sqlite3`` database file named
+By default an `sqlite3` database file named
 :file:`djcelerymon.db` is used, so make sure this file is writeable by the
 user running the monitor.
 
 If you want to store the events in a different database, e.g. MySQL,
-then you can configure the ``DATABASE*`` settings directly in your Celery
+then you can configure the `DATABASE*` settings directly in your Celery
 config module.  See http://docs.djangoproject.com/en/dev/ref/settings/#databases
 for more information about the database options available.
 
@@ -260,7 +260,7 @@ Now that the service is started you can visit the monitor
 at http://127.0.0.1:8000, and log in using the user you created.
 
 For a list of the command line options supported by :program:`djcelerymon`,
-please see ``djcelerymon --help``.
+please see `djcelerymon --help`.
 
 .. _monitoring-celeryev:
 
@@ -286,7 +286,7 @@ and it includes a tool to dump events to stdout::
 
    $ celeryev --dump
 
-For a complete list of options use ``--help``::
+For a complete list of options use `--help`::
 
    $ celeryev --help
 
@@ -322,10 +322,10 @@ as manage users, virtual hosts and their permissions.
 
 .. note::
 
-    The default virtual host (``"/"``) is used in these
+    The default virtual host (`"/"`) is used in these
    examples; if you use a custom virtual host you have to add
-    the ``-p`` argument to the command, e.g:
-    ``rabbitmqctl list_queues -p my_vhost ....``
+    the `-p` argument to the command, e.g.:
+    `rabbitmqctl list_queues -p my_vhost ....`
 
 .. _`rabbitmqctl(1)`: http://www.rabbitmq.com/man/rabbitmqctl.1.man.html
 
@@ -341,11 +341,11 @@ Finding the number of tasks in a queue::
                              messages_unacknowledged
 
 
-Here ``messages_ready`` is the number of messages ready
-for delivery (sent but not received), ``messages_unacknowledged``
+Here `messages_ready` is the number of messages ready
+for delivery (sent but not received), `messages_unacknowledged`
 is the number of messages that have been received by a worker but
 not acknowledged yet (meaning it is in progress, or has been reserved).
-``messages`` is the sum of ready and unacknowledged messages combined.
+`messages` is the sum of ready and unacknowledged messages combined.
 
 
 Finding the number of workers currently consuming from a queue::
@@ -356,7 +356,7 @@ Finding the amount of memory allocated to a queue::
 
    $ rabbitmqctl list_queues name memory
 
-:Tip: Adding the ``-q`` option to `rabbitmqctl(1)`_ makes the output
+:Tip: Adding the `-q` option to `rabbitmqctl(1)`_ makes the output
      easier to parse.
 
 
@@ -373,12 +373,12 @@ maintaining a Celery cluster.
    http://github.com/ask/rabbitmq-munin
 
 * celery_tasks: Monitors the number of times each task type has
-  been executed (requires ``celerymon``).
+  been executed (requires `celerymon`).
 
    http://exchange.munin-monitoring.org/plugins/celery_tasks-2/details
 
 * celery_task_states: Monitors the number of tasks in each state
-  (requires ``celerymon``).
+  (requires `celerymon`).
 
    http://exchange.munin-monitoring.org/plugins/celery_tasks/details
 
@@ -412,7 +412,7 @@ write it to a database, send it by e-mail or something else entirely.
 
 :program:`celeryev` is then used to take snapshots with the camera,
 for example if you want to capture state every 2 seconds using the
-camera ``myapp.Camera`` you run :program:`celeryev` with the following
+camera `myapp.Camera` you run :program:`celeryev` with the following
 arguments::
 
    $ celeryev -c myapp.Camera --frequency=2.0
@@ -446,8 +446,8 @@ Here is an example camera, dumping the snapshot to screen:
 See the API reference for :mod:`celery.events.state` to read more
 about state objects.
 
-Now you can use this cam with ``celeryev`` by specifying
-it with the ``-c`` option::
+Now you can use this cam with `celeryev` by specifying
+it with the `-c` option::
 
    $ celeryev -c myapp.DumpCam --frequency=2.0
 
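For reference (not part of the diff): a sketch of such a camera, assuming the
`Polaroid` base class from :mod:`celery.events.snapshot` and the `on_shutter`
hook it defines:

.. code-block:: python

    from celery.events.snapshot import Polaroid

    class DumpCam(Polaroid):

        def on_shutter(self, state):
            if not state.event_count:
                # No new events since the last snapshot.
                return
            print("Workers: %s" % (state.workers, ))
            print("Tasks: %s" % (state.tasks, ))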
@@ -481,16 +481,16 @@ This list contains the events sent by the worker, and their arguments.
 Task Events
 ~~~~~~~~~~~
 
-* ``task-received(uuid, name, args, kwargs, retries, eta, hostname,
-  timestamp)``
+* `task-received(uuid, name, args, kwargs, retries, eta, hostname,
+  timestamp)`
 
    Sent when the worker receives a task.
 
-* ``task-started(uuid, hostname, timestamp)``
+* `task-started(uuid, hostname, timestamp)`
 
    Sent just before the worker executes the task.
 
-* ``task-succeeded(uuid, result, runtime, hostname, timestamp)``
+* `task-succeeded(uuid, result, runtime, hostname, timestamp)`
 
    Sent if the task executed successfully.
 
@@ -498,16 +498,16 @@ Task Events
    (Starting from the task is sent to the worker pool, and ending when the
    pool result handler callback is called).
 
-* ``task-failed(uuid, exception, traceback, hostname, timestamp)``
+* `task-failed(uuid, exception, traceback, hostname, timestamp)`
 
    Sent if the execution of the task failed.
 
-* ``task-revoked(uuid)``
+* `task-revoked(uuid)`
 
    Sent if the task has been revoked (Note that this is likely
    to be sent by more than one worker).
 
-* ``task-retried(uuid, exception, traceback, hostname, timestamp)``
+* `task-retried(uuid, exception, traceback, hostname, timestamp)`
 
    Sent if the task failed, but will be retried in the future.
 
@@ -516,15 +516,15 @@ Task Events
 Worker Events
 ~~~~~~~~~~~~~
 
-* ``worker-online(hostname, timestamp)``
+* `worker-online(hostname, timestamp)`
 
    The worker has connected to the broker and is online.
 
-* ``worker-heartbeat(hostname, timestamp)``
+* `worker-heartbeat(hostname, timestamp)`
 
    Sent every minute, if the worker has not sent a heartbeat in 2 minutes,
    it is considered to be offline.
 
-* ``worker-offline(hostname, timestamp)``
+* `worker-offline(hostname, timestamp)`
 
    The worker has disconnected from the broker.

+ 16 - 16
docs/userguide/periodic-tasks.rst

@@ -30,7 +30,7 @@ Entries
 To schedule a task periodically you have to add an entry to the
 :setting:`CELERYBEAT_SCHEDULE` setting.
 
-Example: Run the ``tasks.add`` task every 30 seconds.
+Example: Run the `tasks.add` task every 30 seconds.
 
 .. code-block:: python
 
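For reference (not part of the diff): the schedule entry referred to here is
along these lines (the task name and arguments are the guide's example
values):

.. code-block:: python

    from datetime import timedelta

    CELERYBEAT_SCHEDULE = {
        "runs-every-30-seconds": {
            "task": "tasks.add",
            "schedule": timedelta(seconds=30),
            "args": (16, 16),
        },
    }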
@@ -46,7 +46,7 @@ Example: Run the ``tasks.add`` task every 30 seconds.
 
 
 Using a :class:`~datetime.timedelta` for the schedule means the task will
-be executed 30 seconds after ``celerybeat`` starts, and then every 30 seconds
+be executed 30 seconds after `celerybeat` starts, and then every 30 seconds
 after the last run.  A crontab-like schedule also exists; see the section
 on `Crontab schedules`_.
 
@@ -55,11 +55,11 @@ on `Crontab schedules`_.
 Available Fields
 ----------------
 
-* ``task``
+* `task`
 
    The name of the task to execute.
 
-* ``schedule``
+* `schedule`
 
    The frequency of execution.
 
@@ -68,28 +68,28 @@ Available Fields
    You can also define your own custom schedule types, by extending the
    interface of :class:`~celery.schedules.schedule`.
 
-* ``args``
+* `args`
 
    Positional arguments (:class:`list` or :class:`tuple`).
 
-* ``kwargs``
+* `kwargs`
 
    Keyword arguments (:class:`dict`).
 
-* ``options``
+* `options`
 
    Execution options (:class:`dict`).
 
    This can be any argument supported by :meth:`~celery.execute.apply_async`,
-    e.g. ``exchange``, ``routing_key``, ``expires``, and so on.
+    e.g. `exchange`, `routing_key`, `expires`, and so on.
 
-* ``relative``
+* `relative`
 
    By default :class:`~datetime.timedelta` schedules are scheduled
    "by the clock". This means the frequency is rounded to the nearest
    second, minute, hour or day depending on the period of the timedelta.
 
-    If ``relative`` is true the frequency is not rounded and will be
+    If `relative` is true the frequency is not rounded and will be
    relative to the time when :program:`celerybeat` was started.
 
 .. _beat-crontab:
@@ -99,7 +99,7 @@ Crontab schedules
 
 If you want more control over when the task is executed, for
 example, a particular time of day or day of the week, you can use
-the ``crontab`` schedule type:
+the `crontab` schedule type:
 
 .. code-block:: python
 
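For reference (not part of the diff): a sketch of a crontab entry, assuming
:class:`~celery.schedules.crontab`:

.. code-block:: python

    from celery.schedules import crontab

    CELERYBEAT_SCHEDULE = {
        # Executes every Monday morning at 7:30 A.M.
        "every-monday-morning": {
            "task": "tasks.add",
            "schedule": crontab(hour=7, minute=30, day_of_week=1),
            "args": (16, 16),
        },
    }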
@@ -165,13 +165,13 @@ To start the :program:`celerybeat` service::
 
    $ celerybeat
 
-You can also start ``celerybeat`` with ``celeryd`` by using the ``-B`` option,
+You can also start `celerybeat` with `celeryd` by using the `-B` option;
 this is convenient if you only intend to use one worker node::
 
    $ celeryd -B
 
 Celerybeat needs to store the last run times of the tasks in a local database
-file (named ``celerybeat-schedule`` by default), so it needs access to
+file (named `celerybeat-schedule` by default), so it needs access to
 write in the current directory, or alternatively you can specify a custom
 location for this file::
 
@@ -187,15 +187,15 @@ location for this file::
 Using custom scheduler classes
 ------------------------------
 
-Custom scheduler classes can be specified on the command line (the ``-S``
+Custom scheduler classes can be specified on the command line (the `-S`
 argument).  The default scheduler is :class:`celery.beat.PersistentScheduler`,
 which simply keeps track of the last run times in a local database file
 (a :mod:`shelve`).
 
-``django-celery`` also ships with a scheduler that stores the schedule in the
+`django-celery` also ships with a scheduler that stores the schedule in the
 Django database::
 
    $ celerybeat -S djcelery.schedulers.DatabaseScheduler
 
-Using ``django-celery``'s scheduler you can add, modify and remove periodic
+Using `django-celery`'s scheduler you can add, modify and remove periodic
 tasks from the Django Admin.

+ 2 - 2
docs/userguide/remote-tasks.rst

@@ -109,7 +109,7 @@ task being executed::
            [f2cc8efc-2a14-40cd-85ad-f1c77c94beeb] processed: 100
 
 Since applying tasks can be done via HTTP using the
-``djcelery.views.apply`` view, executing tasks from other languages is easy.
+`djcelery.views.apply` view, executing tasks from other languages is easy.
 For an example service exposing tasks via HTTP you should have a look at
-``examples/celery_http_gateway`` in the Celery distribution:
+`examples/celery_http_gateway` in the Celery distribution:
 http://github.com/ask/celery/tree/master/examples/celery_http_gateway/

+ 40 - 40
docs/userguide/routing.rst

@@ -32,17 +32,17 @@ With this setting on, a named queue that is not already defined in
 :setting:`CELERY_QUEUES` will be created automatically.  This makes it easy to
 perform simple routing tasks.
 
-Say you have two servers, ``x``, and ``y`` that handles regular tasks,
-and one server ``z``, that only handles feed related tasks.  You can use this
+Say you have two servers, `x` and `y`, that handle regular tasks,
+and one server `z`, that only handles feed related tasks.  You can use this
 configuration::
 
    CELERY_ROUTES = {"feed.tasks.import_feed": {"queue": "feeds"}}
 
 With this route enabled import feed tasks will be routed to the
-``"feeds"`` queue, while all other tasks will be routed to the default queue
-(named ``"celery"`` for historic reasons).
+`"feeds"` queue, while all other tasks will be routed to the default queue
+(named `"celery"` for historic reasons).
 
-Now you can start server ``z`` to only process the feeds queue like this::
+Now you can start server `z` to only process the feeds queue like this::
 
    (z)$ celeryd -Q feeds
 
@@ -74,7 +74,7 @@ The point with this feature is to hide the complex AMQP protocol for users
 with only basic needs. However -- you may still be interested in how these queues
 are declared.
 
-A queue named ``"video"`` will be created with the following settings:
+A queue named `"video"` will be created with the following settings:
 
 .. code-block:: python
 
@@ -82,7 +82,7 @@ A queue named `"video"` will be created with the following settings:
    "exchange_type": "direct",
    "routing_key": "video"}
 
-The non-AMQP backends like ``ghettoq`` does not support exchanges, so they
+The non-AMQP backends like `ghettoq` do not support exchanges, so they
 require the exchange to have the same name as the queue. Using this design
 ensures it will work for them as well.
 
@@ -91,8 +91,8 @@ ensures it will work for them as well.
 Manual routing
 --------------
 
-Say you have two servers, ``x``, and ``y`` that handles regular tasks,
-and one server ``z``, that only handles feed related tasks, you can use this
+Say you have two servers, `x` and `y`, that handle regular tasks,
+and one server `z`, that only handles feed related tasks. You can use this
 configuration:
 
 .. code-block:: python
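For reference (not part of the diff): a sketch of the kind of configuration
this section describes, with a topic exchange and a catch-all default queue
(names are the guide's example values):

.. code-block:: python

    CELERY_DEFAULT_QUEUE = "default"
    CELERY_QUEUES = {
        "default": {
            "binding_key": "task.#",
        },
        "feed_tasks": {
            "binding_key": "feed.#",
        },
    }
    CELERY_DEFAULT_EXCHANGE = "tasks"
    CELERY_DEFAULT_EXCHANGE_TYPE = "topic"
    CELERY_DEFAULT_ROUTING_KEY = "task.default"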
@@ -115,7 +115,7 @@ exchange/type/binding_key, if you don't set exchange or exchange type, they
 will be taken from the :setting:`CELERY_DEFAULT_EXCHANGE` and
 :setting:`CELERY_DEFAULT_EXCHANGE_TYPE` settings.
 
-To route a task to the ``feed_tasks`` queue, you can add an entry in the
+To route a task to the `feed_tasks` queue, you can add an entry in the
 :setting:`CELERY_ROUTES` setting:
 
 .. code-block:: python
@@ -128,7 +128,7 @@ To route a task to the `feed_tasks` queue, you can add an entry in the
    }
 
 
-You can also override this using the ``routing_key`` argument to
+You can also override this using the `routing_key` argument to
 :func:`~celery.execute.apply_async`, or :func:`~celery.execute.send_task`:
 
    >>> from feeds.tasks import import_feed
@@ -137,12 +137,12 @@ You can also override this using the ``routing_key`` argument to
    ...                         routing_key="feed.import")
 
 
-To make server ``z`` consume from the feed queue exclusively you can
-start it with the ``-Q`` option::
+To make server `z` consume from the feed queue exclusively you can
+start it with the `-Q` option::
 
    (z)$ celeryd -Q feed_tasks --hostname=z.example.com
 
-Servers ``x`` and ``y`` must be configured to consume from the default queue::
+Servers `x` and `y` must be configured to consume from the default queue::
 
    (x)$ celeryd -Q default --hostname=x.example.com
    (y)$ celeryd -Q default --hostname=y.example.com
@@ -243,7 +243,7 @@ The steps required to send and receive messages are:
 3. Bind the queue to the exchange.
 
 Celery automatically creates the entities necessary for the queues in
-:setting:`CELERY_QUEUES` to work (except if the queue's ``auto_declare``
+:setting:`CELERY_QUEUES` to work (except if the queue's `auto_declare`
 setting is set to :const:`False`).
 
 Here's an example queue configuration with three queues;
@@ -270,8 +270,8 @@ One for video, one for images and one default queue for everything else:
 
 .. note::
 
-    In Celery the ``routing_key`` is the key used to send the message,
-    while ``binding_key`` is the key the queue is bound with.  In the AMQP API
+    In Celery the `routing_key` is the key used to send the message,
+    while `binding_key` is the key the queue is bound with.  In the AMQP API
    they are both referred to as the routing key.
 
 .. _amqp-exchange-types:
 
@@ -280,8 +280,8 @@ Exchange types
 --------------
 
 The exchange type defines how the messages are routed through the exchange.
-The exchange types defined in the standard are ``direct``, ``topic``,
-``fanout`` and ``headers``.  Also non-standard exchange types are available
+The exchange types defined in the standard are `direct`, `topic`,
+`fanout` and `headers`.  Also non-standard exchange types are available
 as plugins to RabbitMQ, like the `last-value-cache plug-in`_ by Michael
 Bridgen.
 
@@ -294,7 +294,7 @@ Direct exchanges
 ~~~~~~~~~~~~~~~~
 
 Direct exchanges match by exact routing keys, so a queue bound by
-the routing key ``video`` only receives messages with that routing key.
+the routing key `video` only receives messages with that routing key.
 
 .. _amqp-exchange-type-topic:
 
@@ -302,12 +302,12 @@ Topic exchanges
 ~~~~~~~~~~~~~~~
 
 Topic exchanges match routing keys using dot-separated words, and the
-wildcard characters: ``*`` (matches a single word), and ``#`` (matches
+wildcard characters: `*` (matches a single word), and `#` (matches
 zero or more words).
 
-With routing keys like ``usa.news``, ``usa.weather``, ``norway.news`` and
-``norway.weather``, bindings could be ``*.news`` (all news), ``usa.#`` (all
-items in the USA) or ``usa.weather`` (all USA weather items).
+With routing keys like `usa.news`, `usa.weather`, `norway.news` and
+`norway.weather`, bindings could be `*.news` (all news), `usa.#` (all
+items in the USA) or `usa.weather` (all USA weather items).
 
 .. _amqp-api:
 
@@ -334,7 +334,7 @@ Related API commands
    Declares a queue by name.
 
    Exclusive queues can only be consumed from by the current connection.
-    Exclusive also implies ``auto_delete``.
+    Exclusive also implies `auto_delete`.
 
 .. method:: queue.bind(queue_name, exchange_name, routing_key)
 
@@ -367,7 +367,7 @@ It's used for command-line access to the AMQP API, enabling access to
 administration tasks like creating/deleting queues and exchanges, purging
 queues or sending messages.
 
-You can write commands directly in the arguments to ``camqadm``, or just start
+You can write commands directly in the arguments to `camqadm`, or just start
 with no arguments to start it in shell-mode::
 
    $ camqadm
@@ -375,10 +375,10 @@ with no arguments to start it in shell-mode::
    -> connected.
    1>
 
-Here ``1>`` is the prompt.  The number 1, is the number of commands you
-have executed so far.  Type ``help`` for a list of commands available.
+Here `1>` is the prompt.  The number 1 is the number of commands you
+have executed so far.  Type `help` for a list of commands available.
 It also supports autocompletion, so you can start typing a command and then
-hit the ``tab`` key to show a list of possible matches.
+hit the `tab` key to show a list of possible matches.
 
 Let's create a queue we can send messages to::
 
@@ -389,19 +389,19 @@ Let's create a queue we can send messages to::
    3> queue.bind testqueue testexchange testkey
    ok.
 
-This created the direct exchange ``testexchange``, and a queue
-named ``testqueue``.  The queue is bound to the exchange using
-the routing key ``testkey``.
+This created the direct exchange `testexchange`, and a queue
+named `testqueue`.  The queue is bound to the exchange using
+the routing key `testkey`.
 
-From now on all messages sent to the exchange ``testexchange`` with routing
-key ``testkey`` will be moved to this queue.  We can send a message by
-using the ``basic.publish`` command::
+From now on all messages sent to the exchange `testexchange` with routing
+key `testkey` will be moved to this queue.  We can send a message by
+using the `basic.publish` command::
 
    4> basic.publish "This is a message!" testexchange testkey
    ok.
 
 Now that the message is sent we can retrieve it again.  We use the
-``basic.get`` command here, which pops a single message off the queue,
+`basic.get` command here, which pops a single message off the queue;
 this command is not recommended for production as it implies polling; any
 real application would declare consumers instead.
 
@@ -426,9 +426,9 @@ Note the delivery tag listed in the structure above; Within a connection channel
 every received message has a unique delivery tag.
 This tag is used to acknowledge the message.  Also note that
 delivery tags are not unique across connections, so in another client
-the delivery tag ``1`` might point to a different message than in this channel.
+the delivery tag `1` might point to a different message than in this channel.
 
-You can acknowledge the message we received using ``basic.ack``::
+You can acknowledge the message we received using `basic.ack`::
 
    6> basic.ack 1
    ok.
@@ -510,7 +510,7 @@ Routers
 A router is a class that decides the routing options for a task.
 
 All you need to define a new router is to create a class with a
-``route_for_task`` method:
+`route_for_task` method:
 
 .. code-block:: python
 
@@ -523,7 +523,7 @@ All you need to define a new router is to create a class with a
                    "routing_key": "video.compress"}
            return None
 
-If you return the ``queue`` key, it will expand with the defined settings of
+If you return the `queue` key, it will expand with the defined settings of
 that queue in :setting:`CELERY_QUEUES`::
 
    {"queue": "video", "routing_key": "video.compress"}

+ 89 - 25
docs/userguide/tasks.rst

@@ -18,8 +18,8 @@ Basics
 ======
 
 A task is a class that encapsulates a function and its execution options.
-Given a function ``create_user``, that takes two arguments: ``username`` and
-``password``, you can create a task like this:
+Given a function `create_user`, that takes two arguments: `username` and
+`password`, you can create a task like this:
 
 .. code-block:: python
 
@@ -30,7 +30,7 @@ Given a function ``create_user``, that takes two arguments: ``username`` and
        User.objects.create(username=username, password=password)
 
 
-Task options are added as arguments to ``task``::
+Task options are added as arguments to `task`:
 
 .. code-block:: python
 
@@ -43,7 +43,7 @@ Task options are added as arguments to ``task``::
 Task Request Info
 =================
 
-The ``task.request`` attribute contains information about
+The `task.request` attribute contains information about
 the task being executed, and contains the following attributes:
 
 :id: The unique id of the executing task.
@@ -53,7 +53,7 @@ the task being executed, and contains the following attributes:
 :kwargs: Keyword arguments.
 
 :retries: How many times the current task has been retried.
-          An integer starting at ``0``.
+          An integer starting at `0`.
 
 :is_eager: Set to :const:`True` if the task is executed locally in
           the client, and not by a worker.
@@ -97,10 +97,10 @@ the worker log:
        logger.info("Adding %s + %s" % (x, y))
        return x + y
 
-There are several logging levels available, and the workers ``loglevel``
+There are several logging levels available, and the worker's `loglevel`
 setting decides whether or not they will be written to the log file.
 
-Of course, you can also simply use ``print`` as anything written to standard
+Of course, you can also simply use `print`, as anything written to standard
 out/-err will be written to the logfile as well.
 
 .. _task-retry:
@@ -122,11 +122,11 @@ It will do the right thing, and respect the
        except (Twitter.FailWhaleError, Twitter.LoginError), exc:
            send_twitter_status.retry(exc=exc)
 
-Here we used the ``exc`` argument to pass the current exception to
+Here we used the `exc` argument to pass the current exception to
 :meth:`~celery.task.base.BaseTask.retry`. At each step of the retry this exception
 is available as the tombstone (result) of the task. When
 :attr:`~celery.task.base.BaseTask.max_retries` has been exceeded this is the
-exception raised.  However, if an ``exc`` argument is not provided the
+exception raised.  However, if an `exc` argument is not provided the
 :exc:`~celery.exceptions.RetryTaskError` exception is raised instead.
 
 .. _task-retry-custom-delay:
@@ -140,7 +140,7 @@ before doing so. The default delay is in the
 attribute on the task. By default this is set to 3 minutes. Note that the
 unit for setting the delay is in seconds (int or float).
 
-You can also provide the ``countdown`` argument to
+You can also provide the `countdown` argument to
 :meth:`~celery.task.base.BaseTask.retry` to override this default.
 
 .. code-block:: python
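For reference (not part of the diff): the hunk ends at the code-block
directive; a sketch of the example it introduces, using the section's task
options:

.. code-block:: python

    @task(default_retry_delay=30 * 60)  # retry in 30 minutes
    def add(x, y):
        try:
            return x + y
        except Exception, exc:
            # Overrides the default delay, retrying after 1 minute instead.
            add.retry(exc=exc, countdown=60)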
@@ -205,8 +205,8 @@ General
     If it is an integer, it is interpreted as "tasks per second". 
     If it is an integer, it is interpreted as "tasks per second". 
 
 
     The rate limits can be specified in seconds, minutes or hours
     The rate limits can be specified in seconds, minutes or hours
-    by appending ``"/s"``, ``"/m"`` or ``"/h"`` to the value.
-    Example: ``"100/m"`` (hundred tasks a minute).  Default is the
+    by appending `"/s"`, `"/m"` or `"/h"` to the value.
+    Example: `"100/m"` (hundred tasks a minute).  Default is the
     :setting:`CELERY_DEFAULT_RATE_LIMIT` setting, which if not specified means
     :setting:`CELERY_DEFAULT_RATE_LIMIT` setting, which if not specified means
     rate limiting for tasks is disabled by default.
     rate limiting for tasks is disabled by default.
 
 
@@ -236,7 +236,7 @@ General
 
 
     A string identifying the default serialization
     A string identifying the default serialization
     method to use. Defaults to the :setting:`CELERY_TASK_SERIALIZER`
     method to use. Defaults to the :setting:`CELERY_TASK_SERIALIZER`
-    setting.  Can be ``pickle`` ``json``, ``yaml``, or any custom
+    setting.  Can be `pickle` `json`, `yaml`, or any custom
     serialization methods that have been registered with
     serialization methods that have been registered with
     :mod:`kombu.serialization.registry`.
     :mod:`kombu.serialization.registry`.
 
 
@@ -273,7 +273,7 @@ General
     task is currently running.
     task is currently running.
 
 
     The hostname and pid of the worker executing the task
     The hostname and pid of the worker executing the task
-    will be avaiable in the state metadata (e.g. ``result.info["pid"]``)
+    will be avaiable in the state metadata (e.g. `result.info["pid"]`)
 
 
     The global default can be overridden by the
     The global default can be overridden by the
     :setting:`CELERY_TRACK_STARTED` setting.
     :setting:`CELERY_TRACK_STARTED` setting.
@@ -296,11 +296,11 @@ Message and routing options
 
 
 .. attribute:: Task.exchange
 .. attribute:: Task.exchange
 
 
-    Override the global default ``exchange`` for this task.
+    Override the global default `exchange` for this task.
 
 
 .. attribute:: Task.routing_key
 .. attribute:: Task.routing_key
 
 
-    Override the global default ``routing_key`` for this task.
+    Override the global default `routing_key` for this task.
 
 
 .. attribute:: Task.mandatory
 .. attribute:: Task.mandatory
 
 
@@ -392,7 +392,7 @@ For example if the client imports the module "myapp.tasks" as ".tasks", and
 the worker imports the module as "myapp.tasks", the generated names won't match
 and an :exc:`~celery.exceptions.NotRegistered` error will be raised by the worker.

-This is also the case if using Django and using ``project.myapp``::
+This is also the case when using Django with `project.myapp`::

     INSTALLED_APPS = ("project.myapp", )
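
The path workaround mentioned below presumably amounts to something
like this sketch in `settings.py` (`myapp` is an illustrative app
name):

.. code-block:: python

    import os
    import sys
    sys.path.append(os.getcwd())

    # Refer to the app by its own name rather than project.myapp,
    # so client and worker generate the same task names.
    INSTALLED_APPS = ("myapp", )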
 
 
@@ -417,6 +417,70 @@ add the project directory to the Python path::
 
 
 This makes more sense from the reusable app perspective anyway.
 
 
+.. _tasks-decorating:
+
+Decorating tasks
+================
+
+Using decorators with tasks requires extra steps because of the magic keyword
+arguments.
+
+If you have the following task and decorator:
+
+.. code-block:: python
+
+    from celery.utils.functional import wraps
+
+    def decorator(task):
+
+        @wraps(task)
+        def _decorated(*args, **kwargs):
+            print("inside decorator")
+            return task(*args, **kwargs)
+        return _decorated
+
+
+    @decorator
+    @task
+    def add(x, y):
+        return x + y
+
+Then the worker will see that the task accepts keyword arguments,
+while it really doesn't, resulting in an error.
+
+The workaround is to either have your task accept arbitrary keyword
+arguments:
+
+.. code-block:: python
+
+    @decorator
+    @task
+    def add(x, y, **kwargs):
+        return x + y
+
+or patch the decorator to preserve the original signature:
+
+.. code-block:: python
+
+    from inspect import getargspec
+    from celery.utils.functional import wraps
+
+    def decorator(task):
+
+        @wraps(task)
+        def _decorated(*args, **kwargs):
+            print("in decorator")
+            return task(*args, **kwargs)
+        _decorated.argspec = getargspec(task)
+        return _decorated
+
+Also note the use of :func:`~celery.utils.functional.wraps` here;
+it is necessary to keep the original function name and docstring.
+
+.. note::
+
+    The magic keyword arguments will be deprecated: they are being
+    replaced by the `task.request` attribute in 2.2, and the keyword
+    arguments will be removed entirely in 3.0.
+
 .. _task-states:

 Task States
@@ -460,7 +524,7 @@ STARTED
 Task has been started.
 Not reported by default, to enable please see :ref:`task-track-started`.

-:metadata: ``pid`` and ``hostname`` of the worker process executing
+:metadata: `pid` and `hostname` of the worker process executing
            the task.

 .. state:: SUCCESS
@@ -470,7 +534,7 @@ SUCCESS
 
 
 Task has been successfully executed.

-:metadata: ``result`` contains the return value of the task.
+:metadata: `result` contains the return value of the task.
 :propagates: Yes
 :ready: Yes
 
 
@@ -481,7 +545,7 @@ FAILURE
 
 
 Task execution resulted in failure.

-:metadata: ``result`` contains the exception occured, and ``traceback``
+:metadata: `result` contains the exception that occurred, and `traceback`
            contains the backtrace of the stack at the point when the
            exception was raised.
 :propagates: Yes
@@ -493,8 +557,8 @@ RETRY
 
 
 Task is being retried.

-:metadata: ``result`` contains the exception that caused the retry,
-           and ``traceback`` contains the backtrace of the stack at the point
+:metadata: `result` contains the exception that caused the retry,
+           and `traceback` contains the backtrace of the stack at the point
            when the exception was raised.
 :propagates: No
 
 
@@ -525,9 +589,9 @@ update a tasks state::
                 meta={"current": i, "total": len(filenames)})


-Here we created the state ``"PROGRESS"``, which tells any application
+Here we created the state `"PROGRESS"`, which tells any application
 aware of this state that the task is currently in progress, and also where
-it is in the process by having ``current`` and ``total`` counts as part of the
+it is in the process by having `current` and `total` counts as part of the
 state metadata.  This can then be used to create e.g. progress bars.
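
Pieced together, the example above presumably resembles this sketch
(assuming the `update_state(task_id, state, meta=None)` method of this
release; `upload_files` and `upload` are illustrative):

.. code-block:: python

    from celery.decorators import task

    @task
    def upload_files(filenames, **kwargs):
        for i, filename in enumerate(filenames):
            upload(filename)
            # Report progress through the custom state's metadata.
            upload_files.update_state(kwargs["task_id"], "PROGRESS",
                    meta={"current": i, "total": len(filenames)})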
 
 
 .. _task-how-they-work:
@@ -559,7 +623,7 @@ yourself:
         <Task: celery.ping (regular)>}

 This is the list of tasks built-in to celery.  Note that we had to import
-``celery.task`` first for these to show up.  This is because the tasks will
+`celery.task` first for these to show up.  This is because the tasks will
 only be registered when the module they are defined in is imported.

 The default loader imports any modules listed in the

+ 6 - 6
docs/userguide/tasksets.rst

@@ -40,7 +40,7 @@ This makes it excellent as a means to pass callbacks around to tasks.
 Callbacks
 ---------

-Let's improve our ``add`` task so it can accept a callback that
+Let's improve our `add` task so it can accept a callback that
 takes the result as an argument::

     from celery.decorators import task
@@ -57,25 +57,25 @@ takes the result as an argument::
 asynchronously by :meth:`~celery.task.sets.subtask.delay`, and
 eagerly by :meth:`~celery.task.sets.subtask.apply`.

-The best thing is that any arguments you add to ``subtask.delay``,
+The best thing is that any arguments you add to `subtask.delay`
 will be prepended to the arguments specified by the subtask itself!

 If you have the subtask::

     >>> add.subtask(args=(10, ))

-``subtask.delay(result)`` becomes::
+`subtask.delay(result)` becomes::

     >>> add.apply_async(args=(result, 10))

 ...

-Now let's execute our new ``add`` task with a callback::
+Now let's execute our new `add` task with a callback::

     >>> add.delay(2, 2, callback=add.subtask((8, )))

-As expected this will first launch one task calculating ``2 + 2``, then
-another task calculating ``4 + 8``.
+As expected this will first launch one task calculating `2 + 2`, then
+another task calculating `4 + 8`.
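
The callback-aware `add` used above presumably looks like this sketch
(assuming `subtask` from `celery.task.sets`):

.. code-block:: python

    from celery.decorators import task
    from celery.task.sets import subtask

    @task
    def add(x, y, callback=None):
        result = x + y
        if callback is not None:
            # The result is prepended to the callback's own arguments.
            subtask(callback).delay(result)
        return result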
 
 
 .. _sets-taskset:
 
 

+ 15 - 15
docs/userguide/workers.rst

@@ -17,8 +17,8 @@ You can start celeryd to run in the foreground by executing the command::
     $ celeryd --loglevel=INFO

 You probably want to use a daemonization tool to start
-``celeryd`` in the background.  See :ref:`daemonizing` for help
-using ``celeryd`` with popular daemonization tools.
+`celeryd` in the background.  See :ref:`daemonizing` for help
+using `celeryd` with popular daemonization tools.

 For a full list of available command line options see
 :mod:`~celery.bin.celeryd`, or simply do::
@@ -27,7 +27,7 @@ For a full list of available command line options see
 
 
 You can also start multiple workers on the same machine. If you do so
 be sure to give a unique name to each individual worker by specifying a
-hostname with the ``--hostname|-n`` argument::
+hostname with the `--hostname|-n` argument::

     $ celeryd --loglevel=INFO --concurrency=10 -n worker1.example.com
     $ celeryd --loglevel=INFO --concurrency=10 -n worker2.example.com
@@ -76,7 +76,7 @@ Concurrency
 ===========

 Multiprocessing is used to perform concurrent execution of tasks.  The number
-of worker processes can be changed using the ``--concurrency`` argument and
+of worker processes can be changed using the `--concurrency` argument and
 defaults to the number of CPUs available on the machine.

 More worker processes are usually better, but there's a cut-off point where
@@ -96,7 +96,7 @@ Revoking tasks works by sending a broadcast message to all the workers,
 the workers then keep a list of revoked tasks in memory.

 If you want tasks to remain revoked after worker restart you need to
-specify a file for these to be stored in, either by using the ``--statedb``
+specify a file for these to be stored in, either by using the `--statedb`
 argument to :mod:`~celery.bin.celeryd` or the :setting:`CELERYD_STATE_DB`
 setting.  See :setting:`CELERYD_STATE_DB` for more information.
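
For reference, revoking a task by id uses
:func:`~celery.task.control.revoke` (the uuid below is illustrative)::

    >>> from celery.task.control import revoke
    >>> revoke("d9078da5-9915-40a0-bfa1-392c7bde42ed")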
 
 
@@ -112,9 +112,9 @@ waiting for some event that will never happen you will block the worker
 from processing new tasks indefinitely.  The best way to defend against
 this scenario happening is enabling time limits.

-The time limit (``--time-limit``) is the maximum number of seconds a task
+The time limit (`--time-limit`) is the maximum number of seconds a task
 may run before the process executing it is terminated and replaced by a
-new process.  You can also enable a soft time limit (``--soft-time-limit``),
+new process.  You can also enable a soft time limit (`--soft-time-limit`);
 this raises an exception the task can catch to clean up before the hard
 time limit kills it:
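
A hedged sketch of catching the soft limit (`do_work` and `cleanup`
are illustrative placeholders):

.. code-block:: python

    from celery.decorators import task
    from celery.exceptions import SoftTimeLimitExceeded

    @task
    def mytask():
        try:
            return do_work()
        except SoftTimeLimitExceeded:
            # The soft limit fired; tidy up before the hard limit
            # terminates the worker process.
            cleanup()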
 
 
@@ -150,8 +150,8 @@ a worker can execute before it's replaced by a new process.
 This is useful if you have memory leaks you have no control over,
 for example from closed source C extensions.

-The option can be set using the ``--maxtasksperchild`` argument
-to ``celeryd`` or using the :setting:`CELERYD_MAX_TASKS_PER_CHILD` setting.
+The option can be set using the `--maxtasksperchild` argument
+to `celeryd` or using the :setting:`CELERYD_MAX_TASKS_PER_CHILD` setting.
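
For example, recycling each pool process after a hundred tasks (an
illustrative value)::

    $ celeryd --maxtasksperchild=100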
 
 
 .. _worker-remote-control:
 
 
@@ -201,7 +201,7 @@ Sending the :control:`rate_limit` command and keyword arguments::
     ...                                    "rate_limit": "200/m"})

 This will send the command asynchronously, without waiting for a reply.
-To request a reply you have to use the ``reply`` argument::
+To request a reply you have to use the `reply` argument::

     >>> broadcast("rate_limit", {"task_name": "myapp.mytask",
     ...                          "rate_limit": "200/m"}, reply=True)
@@ -209,7 +209,7 @@ To request a reply you have to use the ``reply`` argument::
      {'worker2.example.com': 'New rate limit set successfully'},
      {'worker3.example.com': 'New rate limit set successfully'}]

-Using the ``destination`` argument you can specify a list of workers
+Using the `destination` argument you can specify a list of workers
 to receive the command::

     >>> broadcast
@@ -230,7 +230,7 @@ using :func:`~celery.task.control.broadcast`.
 Rate limits
 -----------

-Example changing the rate limit for the ``myapp.mytask`` task to accept
+Example changing the rate limit for the `myapp.mytask` task to accept
 200 tasks a minute on all servers:

     >>> from celery.task.control import rate_limit
@@ -274,7 +274,7 @@ a custom timeout::
      {'worker2.example.com': 'pong'},
      {'worker3.example.com': 'pong'}]

-:func:`~celery.task.control.ping` also supports the ``destination`` argument,
+:func:`~celery.task.control.ping` also supports the `destination` argument,
 so you can specify which workers to ping::

     >>> ping(['worker2.example.com', 'worker3.example.com'])
@@ -289,8 +289,8 @@ so you can specify which workers to ping::
 Enable/disable events
 ---------------------

-You can enable/disable events by using the ``enable_events``,
-``disable_events`` commands.  This is useful to temporarily monitor
+You can enable/disable events by using the `enable_events` and
+`disable_events` commands.  This is useful for temporarily monitoring
 a worker using :program:`celeryev`/:program:`celerymon`.

 .. code-block:: python
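
    # A hedged sketch of the elided example, assuming the broadcast
    # control API shown earlier in this guide:
    >>> from celery.task.control import broadcast
    >>> broadcast("enable_events")
    >>> broadcast("disable_events")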

+ 4 - 4
examples/celery_http_gateway/README.rst

@@ -7,7 +7,7 @@ statuses/results over HTTP.
 
 
 Some familiarity with Django is recommended.

-``settings.py`` contains the celery settings, you probably want to configure
+`settings.py` contains the celery settings; you probably want to configure
 at least the broker related settings.

 To run the service you have to run the following commands::
@@ -20,7 +20,7 @@ To run the service you have to run the following commands::
 The service is now running at http://localhost:8000


-You can apply tasks, with the ``/apply/<task_name>`` URL::
+You can apply tasks with the `/apply/<task_name>` URL::

     $ curl http://localhost:8000/apply/celery.ping/
     {"ok": "true", "task_id": "e3a95109-afcd-4e54-a341-16c18fddf64b"}
@@ -32,9 +32,9 @@ Then you can use the resulting task-id to get the return value::
 
 
 
 
 If you don't want to expose all tasks there are a few possible
-approaches. For instance you can extend the ``apply`` view to only
+approaches. For instance you can extend the `apply` view to only
 accept a whitelist. Another possibility is to just make views for every task you want to
-expose. We made on such view for ping in ``views.ping``::
+expose. We made one such view for ping in `views.ping`::

     $ curl http://localhost:8000/ping/
     {"ok": "true", "task_id": "383c902c-ba07-436b-b0f3-ea09cc22107c"}

+ 6 - 6
examples/ghetto-queue/README.rst

@@ -43,14 +43,14 @@ supports `Redis`_ and relational databases via the Django ORM.
 .. _`Redis`: http://code.google.com/p/redis/


-The provided ``celeryconfig.py`` configures the settings used to drive celery.
+The provided `celeryconfig.py` configures the settings used to drive celery.

-Next we have to create the database tables by issuing the ``celeryinit``
+Next we have to create the database tables by issuing the `celeryinit`
 command::

     $ celeryinit

-We're using SQLite3, so this creates a database file (``celery.db`` as
+We're using SQLite3, so this creates a database file (`celery.db` as
 specified in the config file). SQLite is great, but when used in combination
 with Django it doesn't handle concurrency well. To protect your program from
 lock problems, celeryd will only spawn one worker process. With
@@ -68,7 +68,7 @@ the foreground, we have to open up another terminal to run our test program::
     $ python test.py


-The test program simply runs the ``add`` task, which is a simple task adding
+The test program simply runs the `add` task, which is a simple task adding
 numbers. You can also run the task manually if you want::

     >>> from tasks import add
@@ -80,7 +80,7 @@ Using Redis instead
 ===================

 To use redis instead, you have to configure the following directives in
-``celeryconfig.py``::
+`celeryconfig.py`::

     CARROT_BACKEND = "ghettoq.taproot.Redis"
     BROKER_HOST = "localhost"
@@ -97,7 +97,7 @@ Modules
 
 
         Tasks are defined in this module. This module is automatically
         imported by the worker because it's listed in
-        celeryconfig's ``CELERY_IMPORTS`` directive.
+        celeryconfig's `CELERY_IMPORTS` directive.

     * test.py
 
 

+ 2 - 2
examples/httpexample/README.rst

@@ -5,8 +5,8 @@
 This example is a simple Django HTTP service exposing a single task
 multiplying two numbers:

-The multiply http callback task is in ``views.py``, mapped to a URL using
-``urls.py``.
+The multiply HTTP callback task is in `views.py`, mapped to a URL using
+`urls.py`.

 There are no models, so to start it do::
 
 

+ 1 - 1
examples/pythonproject/demoapp/README.rst

@@ -14,7 +14,7 @@ Modules
 
 
         Tasks are defined in this module. This module is automatically
         imported by the worker because it's listed in
-        celeryconfig's ``CELERY_IMPORTS`` directive.
+        celeryconfig's `CELERY_IMPORTS` directive.

     * test.py
 
 

+ 7 - 0
setup.cfg

@@ -1,3 +1,10 @@
+[egg_info]
+tag_build = dev
+tag_date = true
+
+[aliases]
+release = egg_info -RDb ''
+
 [nosetests]
 where = celery/tests
 cover3-branch = 1

Some files were not shown because too many files changed in this diff