
Docs: Replaced all occurrences of ``literal`` with `literal`

Author: Ask Solem (14 years ago)
Commit: 0afa1efa28

78 changed files with 739 additions and 737 deletions
   1. +161 -161  Changelog
   2. +24 -24    FAQ
   3. +4 -4      INSTALL
   4. +9 -9      README.rst
   5. +1 -1      celery/app/__init__.py
   6. +15 -14    celery/app/base.py
   7. +1 -1      celery/apps/beat.py
   8. +2 -2      celery/backends/base.py
   9. +1 -1      celery/backends/pyredis.py
  10. +1 -1      celery/backends/tyrant.py
  11. +1 -1      celery/beat.py
  12. +10 -10    celery/bin/camqadm.py
  13. +4 -4      celery/bin/celerybeat.py
  14. +8 -8      celery/bin/celeryd.py
  15. +2 -2      celery/concurrency/processes/__init__.py
  16. +4 -4      celery/contrib/abortable.py
  17. +7 -7      celery/datastructures.py
  18. +4 -4      celery/db/a805d4bd.py
  19. +3 -3      celery/events/__init__.py
  20. +3 -3      celery/events/state.py
  21. +2 -1      celery/loaders/base.py
  22. +1 -1      celery/loaders/default.py
  23. +7 -7      celery/log.py
  24. +2 -2      celery/messaging.py
  25. +1 -1      celery/registry.py
  26. +8 -8      celery/result.py
  27. +2 -2      celery/routes.py
  28. +5 -5      celery/schedules.py
  29. +26 -26    celery/task/base.py
  30. +2 -2      celery/task/builtins.py
  31. +1 -1      celery/task/control.py
  32. +3 -3      celery/task/http.py
  33. +1 -1      celery/task/sets.py
  34. +3 -3      celery/tests/utils.py
  35. +9 -9      celery/utils/__init__.py
  36. +4 -4      celery/utils/dispatch/saferef.py
  37. +6 -6      celery/utils/dispatch/signal.py
  38. +2 -2      celery/utils/functional.py
  39. +3 -3      celery/utils/timeutils.py
  40. +3 -3      celery/worker/__init__.py
  41. +3 -3      celery/worker/buckets.py
  42. +5 -5      celery/worker/job.py
  43. +4 -4      celery/worker/listener.py
  44. +2 -2      contrib/debian/init.d/celerybeat
  45. +2 -2      contrib/debian/init.d/celeryd
  46. +2 -2      contrib/debian/init.d/celeryevcam
  47. +3 -3      contrib/generic-init.d/celeryd
  48. +4 -4      contrib/requirements/README.rst
  49. +45 -45    docs/configuration.rst
  50. +13 -13    docs/cookbook/daemonizing.rst
  51. +2 -2      docs/cookbook/tasks.rst
  52. +8 -8      docs/getting-started/broker-installation.rst
  53. +2 -2      docs/getting-started/first-steps-with-celery.rst
  54. +4 -4      docs/includes/installation.txt
  55. +6 -6      docs/includes/introduction.txt
  56. +3 -3      docs/includes/resources.txt
  57. +5 -5      docs/internals/app-overview.rst
  58. +8 -8      docs/internals/deprecation.rst
  59. +10 -10    docs/internals/protocol.rst
  60. +6 -6      docs/internals/worker.rst
  61. +1 -1      docs/links.rst
  62. +36 -36    docs/reference/celery.conf.rst
  63. +2 -2      docs/reference/celery.signals.rst
  64. +11 -11    docs/releases/1.0/announcement.rst
  65. +13 -13    docs/tutorials/clickcounter.rst
  66. +2 -2      docs/tutorials/otherqueues.rst
  67. +27 -27    docs/userguide/executing.rst
  68. +36 -36    docs/userguide/monitoring.rst
  69. +16 -16    docs/userguide/periodic-tasks.rst
  70. +2 -2      docs/userguide/remote-tasks.rst
  71. +40 -40    docs/userguide/routing.rst
  72. +26 -26    docs/userguide/tasks.rst
  73. +6 -6      docs/userguide/tasksets.rst
  74. +15 -15    docs/userguide/workers.rst
  75. +4 -4      examples/celery_http_gateway/README.rst
  76. +6 -6      examples/ghetto-queue/README.rst
  77. +2 -2      examples/httpexample/README.rst
  78. +1 -1      examples/pythonproject/demoapp/README.rst

+ 161 - 161
Changelog

(File diff suppressed because it is too large.)


+ 24 - 24
FAQ

@@ -130,8 +130,8 @@ Troubleshooting
 MySQL is throwing deadlock errors, what can I do?
 -------------------------------------------------
 
-**Answer:** MySQL has default isolation level set to ``REPEATABLE-READ``,
-if you don't really need that, set it to ``READ-COMMITTED``.
+**Answer:** MySQL has default isolation level set to `REPEATABLE-READ`,
+if you don't really need that, set it to `READ-COMMITTED`.
 You can do that by adding the following to your :file:`my.cnf`::
 
     [mysqld]
@@ -178,7 +178,7 @@ http://www.playingwithwire.com/2009/10/how-to-get-celeryd-to-work-on-freebsd/
 
 .. _faq-duplicate-key-errors:
 
-I'm having ``IntegrityError: Duplicate Key`` errors. Why?
+I'm having `IntegrityError: Duplicate Key` errors. Why?
 ---------------------------------------------------------
 
 **Answer:** See `MySQL is throwing deadlock errors, what can I do?`_.
@@ -284,7 +284,7 @@ Results
 How do I get the result of a task if I have the ID that points there?
 ----------------------------------------------------------------------
 
-**Answer**: Use ``Task.AsyncResult``::
+**Answer**: Use `Task.AsyncResult`::
 
     >>> result = MyTask.AsyncResult(task_id)
     >>> result.get()
@@ -340,7 +340,7 @@ as a message. If you don't collect these results, they will build up and
 RabbitMQ will eventually run out of memory.
 
 If you don't use the results for a task, make sure you set the
-``ignore_result`` option:
+`ignore_result` option:
 
 .. code-block python
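
    # (The block body is elided by the hunk; a hedged sketch using the
    #  class-based task API of this era.)
    from celery.task import Task

    class MyTask(Task):
        ignore_result = True    # result is never stored in the backend

        def run(self, x, y):
            return x + y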
 
@@ -383,14 +383,14 @@ The STOMP carrot backend requires the `stompy`_ library::
 
 .. _`stompy`: http://pypi.python.org/pypi/stompy
 
-In this example we will use a queue called ``celery`` which we created in
+In this example we will use a queue called `celery` which we created in
 the ActiveMQ web admin interface.
 
-**Note**: When using ActiveMQ the queue name needs to have ``"/queue/"``
-prepended to it. i.e. the queue ``celery`` becomes ``/queue/celery``.
+**Note**: When using ActiveMQ the queue name needs to have `"/queue/"`
+prepended to it. i.e. the queue `celery` becomes `/queue/celery`.
 
 Since STOMP doesn't have exchanges and the routing capabilities of AMQP,
-you need to set ``exchange`` name to the same as the queue name. This is
+you need to set `exchange` name to the same as the queue name. This is
 a minor inconvenience since carrot needs to maintain the same interface
 for both AMQP and STOMP.
 
@@ -474,7 +474,7 @@ For more information see :ref:`task-request-info`.
 Can I specify a custom task_id?
 -------------------------------
 
-**Answer**: Yes.  Use the ``task_id`` argument to
+**Answer**: Yes.  Use the `task_id` argument to
 :meth:`~celery.execute.apply_async`::
 
     >>> task.apply_async(args, kwargs, task_id="...")
@@ -529,7 +529,7 @@ See :doc:`userguide/tasksets` for more information.
 
 Can I cancel the execution of a task?
 -------------------------------------
-**Answer**: Yes. Use ``result.revoke``::
+**Answer**: Yes. Use `result.revoke`::
 
     >>> result = add.apply_async(args=[2, 2], countdown=120)
     >>> result.revoke()
@@ -572,8 +572,8 @@ See :doc:`userguide/routing` for more information.
 Can I change the interval of a periodic task at runtime?
 --------------------------------------------------------
 
-**Answer**: Yes. You can override ``PeriodicTask.is_due`` or turn
-``PeriodicTask.run_every`` into a property:
+**Answer**: Yes. You can override `PeriodicTask.is_due` or turn
+`PeriodicTask.run_every` into a property:
 
 .. code-block:: python
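
    # (The block body is elided by the hunk; a hedged sketch of the
    #  property approach. `get_interval` is a hypothetical runtime lookup.)
    from datetime import timedelta
    from celery.task import PeriodicTask

    class MyPeriodic(PeriodicTask):

        @property
        def run_every(self):
            # recomputed every time the scheduler checks is_due()
            return timedelta(seconds=get_interval())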
 
@@ -607,11 +607,11 @@ Should I use retry or acks_late?
 **Answer**: Depends. It's not necessarily one or the other, you may want
 to use both.
 
-``Task.retry`` is used to retry tasks, notably for expected errors that
-is catchable with the ``try:`` block. The AMQP transaction is not used
+`Task.retry` is used to retry tasks, notably for expected errors that
+is catchable with the `try:` block. The AMQP transaction is not used
 for these errors: **if the task raises an exception it is still acked!**.
 
-The ``acks_late`` setting would be used when you need the task to be
+The `acks_late` setting would be used when you need the task to be
 executed again if the worker (for some reason) crashes mid-execution.
 It's important to note that the worker is not known to crash, and if
 it does it is usually an unrecoverable error that requires human
@@ -637,11 +637,11 @@ It's a good default, users who require it and know what they
 are doing can still enable acks_late (and in the future hopefully
 use manual acknowledgement)
 
-In addition ``Task.retry`` has features not available in AMQP
+In addition `Task.retry` has features not available in AMQP
 transactions: delay between retries, max retries, etc.
 
 So use retry for Python errors, and if your task is reentrant
-combine that with ``acks_late`` if that level of reliability
+combine that with `acks_late` if that level of reliability
 is required.
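
To illustrate combining the two, a hedged sketch in the Python 2 style of
the era (`refresh_feed` is a hypothetical helper)::

    from celery.task import Task

    class RefreshFeed(Task):
        max_retries = 3
        acks_late = True    # redeliver if the worker crashes mid-execution

        def run(self, feed_url, **kwargs):
            try:
                refresh_feed(feed_url)
            except IOError, exc:
                # an expected, catchable error: use retry
                self.retry(args=[feed_url], kwargs=kwargs, exc=exc)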
 
 .. _faq-schedule-at-specific-time:
@@ -651,7 +651,7 @@ Can I schedule tasks to execute at a specific time?
 
 .. module:: celery.task.base
 
-**Answer**: Yes. You can use the ``eta`` argument of :meth:`Task.apply_async`.
+**Answer**: Yes. You can use the `eta` argument of :meth:`Task.apply_async`.
 
 Or to schedule a periodic task at a specific time, use the
 :class:`celery.task.schedules.crontab` schedule behavior:
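
(The example is elided by the hunk; a hedged sketch using the era's API,
where `send_weekly_report` is hypothetical.)

.. code-block:: python

    from celery.task import PeriodicTask
    from celery.task.schedules import crontab

    class EveryMondayMorning(PeriodicTask):
        # run every Monday at 7:30am
        run_every = crontab(hour=7, minute=30, day_of_week=1)

        def run(self, **kwargs):
            send_weekly_report()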
@@ -668,7 +668,7 @@ Or to schedule a periodic task at a specific time, use the
 
 .. _faq-safe-worker-shutdown:
 
-How do I shut down ``celeryd`` safely?
+How do I shut down `celeryd` safely?
 --------------------------------------
 
 **Answer**: Use the :sig:`TERM` signal, and the worker will finish all currently
@@ -678,7 +678,7 @@ You should never stop :mod:`~celery.bin.celeryd` with the :sig:`KILL` signal
 (:option:`-9`), unless you've tried :sig:`TERM` a few times and waited a few
 minutes to let it get a chance to shut down.  As if you do tasks may be
 terminated mid-execution, and they will not be re-run unless you have the
-``acks_late`` option set (``Task.acks_late`` / :setting:`CELERY_ACKS_LATE`).
+`acks_late` option set (`Task.acks_late` / :setting:`CELERY_ACKS_LATE`).
 
 .. seealso::
 
@@ -711,14 +711,14 @@ See http://bit.ly/bo9RSw
 
 .. _faq-windows-worker-embedded-beat:
 
-The ``-B`` / ``--beat`` option to celeryd doesn't work?
+The `-B` / `--beat` option to celeryd doesn't work?
 ----------------------------------------------------------------
-**Answer**: That's right. Run ``celerybeat`` and ``celeryd`` as separate
+**Answer**: That's right. Run `celerybeat` and `celeryd` as separate
 services instead.
 
 .. _faq-windows-django-settings:
 
-``django-celery`` can’t find settings?
+`django-celery` can’t find settings?
 --------------------------------------
 
 **Answer**: You need to specify the :option:`--settings` argument to

+ 4 - 4
INSTALL

@@ -1,19 +1,19 @@
 Installing celery
 =================
 
-You can install ``celery`` either via the Python Package Index (PyPI)
+You can install `celery` either via the Python Package Index (PyPI)
 or from source.
 
-To install using ``pip``,::
+To install using `pip`::
 
     $ pip install celery
 
-To install using ``easy_install``,::
+To install using `easy_install`::
 
     $ easy_install celery
 
 If you have downloaded a source tarball you can install it
-by doing the following,::
+by doing the following::
 
     $ python setup.py build
     # python setup.py install # as root

+ 9 - 9
README.rst

@@ -59,7 +59,7 @@ This is a high level overview of the architecture.
 .. image:: http://cloud.github.com/downloads/ask/celery/Celery-Overview-v4.jpg
 
 The broker delivers tasks to the worker servers.
-A worker server is a networked machine running ``celeryd``.  This can be one or
+A worker server is a networked machine running `celeryd`.  This can be one or
 more machines depending on the workload.
 
 The result of the task can be stored for later retrieval (called its
@@ -107,7 +107,7 @@ Features
     |                 | while the queue is temporarily overloaded).        |
     +-----------------+----------------------------------------------------+
     | Concurrency     | Tasks are executed in parallel using the           |
-    |                 | ``multiprocessing`` module.                        |
+    |                 | `multiprocessing` module.                          |
     +-----------------+----------------------------------------------------+
     | Scheduling      | Supports recurring tasks like cron, or specifying  |
     |                 | an exact date or countdown for when after the task |
@@ -194,14 +194,14 @@ is hosted at Github.
 Installation
 ============
 
-You can install ``celery`` either via the Python Package Index (PyPI)
+You can install `celery` either via the Python Package Index (PyPI)
 or from source.
 
-To install using ``pip``,::
+To install using `pip`,::
 
     $ pip install celery
 
-To install using ``easy_install``,::
+To install using `easy_install`,::
 
     $ easy_install celery
 
@@ -210,7 +210,7 @@ To install using ``easy_install``,::
 Downloading and installing from source
 --------------------------------------
 
-Download the latest version of ``celery`` from
+Download the latest version of `celery` from
 http://pypi.python.org/pypi/celery/
 
 You can install it by doing the following,::
@@ -275,10 +275,10 @@ http://wiki.github.com/ask/celery/
 Contributing
 ============
 
-Development of ``celery`` happens at Github: http://github.com/ask/celery
+Development of `celery` happens at Github: http://github.com/ask/celery
 
 You are highly encouraged to participate in the development
-of ``celery``. If you don't like Github (for some reason) you're welcome
+of `celery`. If you don't like Github (for some reason) you're welcome
 to send regular patches.
 
 .. _license:
@@ -286,7 +286,7 @@ to send regular patches.
 License
 =======
 
-This software is licensed under the ``New BSD License``. See the ``LICENSE``
+This software is licensed under the `New BSD License`. See the ``LICENSE``
 file in the top distribution directory for the full license text.
 
 .. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround

+ 1 - 1
celery/app/__init__.py

@@ -18,7 +18,7 @@ class App(base.BaseApp):
         Default is :class:`celery.loaders.app.AppLoader`.
     :keyword backend: The result store backend class, or the name of the
         backend class to use. Default is the value of the
-        ``CELERY_RESULT_BACKEND`` setting.
+        :setting:`CELERY_RESULT_BACKEND` setting.
 
     .. attribute:: amqp
 

+ 15 - 14
celery/app/base.py

@@ -135,15 +135,15 @@ class BaseApp(object):
 
     def either(self, default_key, *values):
         """Fallback to the value of a configuration key if none of the
-        ``*values`` are true."""
+        `*values` are true."""
         for value in values:
             if value is not None:
                 return value
         return self.conf.get(default_key)
 
     def merge(self, a, b):
-        """Like ``dict(a, **b)`` except it will keep values from ``a``
-        if the value in ``b`` is :const:`None`."""
+        """Like `dict(a, **b)` except it will keep values from `a`
+        if the value in `b` is :const:`None`."""
         b = dict(b)
         for key, value in a.items():
             if b.get(key) is None:
@@ -156,7 +156,7 @@ class BaseApp(object):
             **options):
         """Send task by name.
 
-        :param name: Name of task to execute (e.g. ``"tasks.add"``).
+        :param name: Name of task to execute (e.g. `"tasks.add"`).
         :keyword result_cls: Specify custom result class. Default is
             using :meth:`AsyncResult`.
 
@@ -190,16 +190,17 @@ class BaseApp(object):
             insist=None, connect_timeout=None, backend_cls=None):
         """Establish a connection to the message broker.
 
-        :keyword hostname: defaults to the ``BROKER_HOST`` setting.
-        :keyword userid: defaults to the ``BROKER_USER`` setting.
-        :keyword password: defaults to the ``BROKER_PASSWORD`` setting.
-        :keyword virtual_host: defaults to the ``BROKER_VHOST`` setting.
-        :keyword port: defaults to the ``BROKER_PORT`` setting.
-        :keyword ssl: defaults to the ``BROKER_USE_SSL`` setting.
-        :keyword insist: defaults to the ``BROKER_INSIST`` setting.
+        :keyword hostname: defaults to the :setting:`BROKER_HOST` setting.
+        :keyword userid: defaults to the :setting:`BROKER_USER` setting.
+        :keyword password: defaults to the :setting:`BROKER_PASSWORD` setting.
+        :keyword virtual_host: defaults to the :setting:`BROKER_VHOST` setting.
+        :keyword port: defaults to the :setting:`BROKER_PORT` setting.
+        :keyword ssl: defaults to the :setting:`BROKER_USE_SSL` setting.
+        :keyword insist: defaults to the :setting:`BROKER_INSIST` setting.
         :keyword connect_timeout: defaults to the
-            ``BROKER_CONNECTION_TIMEOUT`` setting.
-        :keyword backend_cls: defaults to the ``BROKER_BACKEND`` setting.
+            :setting:`BROKER_CONNECTION_TIMEOUT` setting.
+        :keyword backend_cls: defaults to the :setting:`BROKER_BACKEND`
+            setting.
 
         :returns :class:`carrot.connection.BrokerConnection`:
 
@@ -217,7 +218,7 @@ class BaseApp(object):
                                 "BROKER_CONNECTION_TIMEOUT", connect_timeout))
 
     def with_default_connection(self, fun):
-        """With any function accepting ``connection`` and ``connect_timeout``
+        """With any function accepting `connection` and `connect_timeout`
         keyword arguments, establishes a default connection if one is
         not already passed to it.
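
A hedged usage sketch of `broker_connection` (`publish` is a hypothetical
function)::

    from celery.app import default_app

    # explicit arguments win; anything left unset falls back to the
    # corresponding BROKER_* setting
    connection = default_app.broker_connection(hostname="localhost")
    try:
        publish(connection)
    finally:
        connection.close()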
 

+ 1 - 1
celery/apps/beat.py

@@ -111,7 +111,7 @@ class Beat(object):
                                info=" ".join(sys.argv[arg_start:]))
 
     def install_sync_handler(self, beat):
-        """Install a ``SIGTERM`` + ``SIGINT`` handler that saves
+        """Install a `SIGTERM` + `SIGINT` handler that saves
         the celerybeat schedule."""
 
         def _sync(signum, frame):

+ 2 - 2
celery/backends/base.py

@@ -77,9 +77,9 @@ class BaseBackend(object):
         If the task raises an exception, this exception
         will be re-raised by :func:`wait_for`.
 
-        If ``timeout`` is not ``None``, this raises the
+        If `timeout` is not :const:`None`, this raises the
         :class:`celery.exceptions.TimeoutError` exception if the operation
-        takes longer than ``timeout`` seconds.
+        takes longer than `timeout` seconds.
 
         """
 

+ 1 - 1
celery/backends/pyredis.py

@@ -87,7 +87,7 @@ class RedisBackend(KeyValueStoreBackend):
         self._connection = None
 
     def open(self):
-        """Get :class:`redis.Redis`` instance with the current
+        """Get :class:`redis.Redis` instance with the current
         server configuration.
 
         The connection is then cached until you do an

+ 1 - 1
celery/backends/tyrant.py

@@ -51,7 +51,7 @@ class TyrantBackend(KeyValueStoreBackend):
         self._connection = None
 
     def open(self):
-        """Get :class:`pytyrant.PyTyrant`` instance with the current
+        """Get :class:`pytyrant.PyTyrant` instance with the current
         server configuration.
 
         The connection is then cached until you do an

+ 1 - 1
celery/beat.py

@@ -401,7 +401,7 @@ def EmbeddedService(*args, **kwargs):
     """Return embedded clock service.
 
     :keyword thread: Run threaded instead of as a separate process.
-        Default is ``False``.
+        Default is :const:`False`.
 
     """
     if kwargs.pop("thread", False):

+ 10 - 10
celery/bin/camqadm.py

@@ -54,12 +54,12 @@ class Spec(object):
     .. attribute args::
 
         List of arguments this command takes. Should
-        contain ``(argument_name, argument_type)`` tuples.
+        contain `(argument_name, argument_type)` tuples.
 
     .. attribute returns:
 
         Helpful human string representation of what this command returns.
-        May be ``None``, to signify the return type is unknown.
+        May be :const:`None`, to signify the return type is unknown.
 
     """
     def __init__(self, *args, **kwargs):
@@ -69,7 +69,7 @@ class Spec(object):
     def coerce(self, index, value):
         """Coerce value for argument at index.
 
-        E.g. if :attr:`args` is ``[("is_active", bool)]``:
+        E.g. if :attr:`args` is `[("is_active", bool)]`:
 
             >>> coerce(0, "False")
             False
@@ -131,7 +131,7 @@ class AMQShell(cmd.Cmd):
     :keyword connect: Function used to connect to the server, must return
         connection object.
 
-    :keyword silent: If ``True``, the commands won't have annoying output not
+    :keyword silent: If :const:`True`, the commands won't have annoying output not
         relevant when running in non-shell mode.
 
 
@@ -198,7 +198,7 @@ class AMQShell(cmd.Cmd):
         self._reconnect()
 
     def say(self, m):
-        """Say something to the user. Disabled if :attr:`silent``."""
+        """Say something to the user. Disabled if :attr:`silent`."""
         if not self.silent:
             say(m)
 
@@ -207,7 +207,7 @@ class AMQShell(cmd.Cmd):
         to Python values and find the corresponding method on the AMQP channel
         object.
 
-        :returns: tuple of ``(method, processed_args)``.
+        :returns: tuple of `(method, processed_args)`.
 
         Example:
 
@@ -225,7 +225,7 @@ class AMQShell(cmd.Cmd):
         return getattr(self.chan, attr_name), args, spec.format_response
 
     def do_exit(self, *args):
-        """The ``"exit"`` command."""
+        """The `"exit"` command."""
         self.say("\n-> please, don't leave!")
         sys.exit(0)
 
@@ -249,7 +249,7 @@ class AMQShell(cmd.Cmd):
         return set(self.builtins.keys() + self.amqp.keys())
 
     def completenames(self, text, *ignored):
-        """Return all commands starting with ``text``, for tab-completion."""
+        """Return all commands starting with `text`, for tab-completion."""
         names = self.get_names()
         first = [cmd for cmd in names
                         if cmd.startswith(text.replace("_", "."))]
@@ -274,7 +274,7 @@ class AMQShell(cmd.Cmd):
         """Parse input line.
 
         :returns: tuple of three items:
-            ``(command_name, arglist, original_line)``
+            `(command_name, arglist, original_line)`
 
         E.g::
 
@@ -327,7 +327,7 @@ class AMQShell(cmd.Cmd):
 
 
 class AMQPAdmin(object):
-    """The celery ``camqadm`` utility."""
+    """The celery :program:`camqadm` utility."""
 
     def __init__(self, *args, **kwargs):
         self.app = app_or_default(kwargs.get("app"))

+ 4 - 4
celery/bin/celerybeat.py

@@ -5,7 +5,7 @@
 
 .. cmdoption:: -s, --schedule
 
-    Path to the schedule database. Defaults to ``celerybeat-schedule``.
+    Path to the schedule database. Defaults to `celerybeat-schedule`.
     The extension ".db" will be appended to the filename.
 
 .. cmdoption:: -S, --scheduler
@@ -14,12 +14,12 @@
 
 .. cmdoption:: -f, --logfile
 
-    Path to log file. If no logfile is specified, ``stderr`` is used.
+    Path to log file. If no logfile is specified, `stderr` is used.
 
 .. cmdoption:: -l, --loglevel
 
-    Logging level, choose between ``DEBUG``, ``INFO``, ``WARNING``,
-    ``ERROR``, ``CRITICAL``, or ``FATAL``.
+    Logging level, choose between `DEBUG`, `INFO`, `WARNING`,
+    `ERROR`, `CRITICAL`, or `FATAL`.
 
 """
 from celery.bin.base import Command, Option

+ 8 - 8
celery/bin/celeryd.py

@@ -10,12 +10,12 @@
 
 .. cmdoption:: -f, --logfile
 
-    Path to log file. If no logfile is specified, ``stderr`` is used.
+    Path to log file. If no logfile is specified, `stderr` is used.
 
 .. cmdoption:: -l, --loglevel
 
-    Logging level, choose between ``DEBUG``, ``INFO``, ``WARNING``,
-    ``ERROR``, ``CRITICAL``, or ``FATAL``.
+    Logging level, choose between `DEBUG`, `INFO`, `WARNING`,
+    `ERROR`, `CRITICAL`, or `FATAL`.
 
 .. cmdoption:: -n, --hostname
 
@@ -23,14 +23,14 @@
 
 .. cmdoption:: -B, --beat
 
-    Also run the ``celerybeat`` periodic task scheduler. Please note that
+    Also run the `celerybeat` periodic task scheduler. Please note that
     there must only be one instance of this service.
 
 .. cmdoption:: -Q, --queues
 
     List of queues to enable for this worker, separated by comma.
     By default all configured queues are enabled.
-    Example: ``-Q video,image``
+    Example: `-Q video,image`
 
 .. cmdoption:: -I, --include
 
@@ -39,8 +39,8 @@
 
 .. cmdoption:: -s, --schedule
 
-    Path to the schedule database if running with the ``-B`` option.
-    Defaults to ``celerybeat-schedule``. The extension ".db" will be
+    Path to the schedule database if running with the `-B` option.
+    Defaults to `celerybeat-schedule`. The extension ".db" will be
     appended to the filename.
 
 .. cmdoption:: --scheduler
@@ -49,7 +49,7 @@
 
 .. cmdoption:: -E, --events
 
-    Send events that can be captured by monitors like ``celerymon``.
+    Send events that can be captured by monitors like `celerymon`.
 
 .. cmdoption:: --purge, --discard
 

+ 2 - 2
celery/concurrency/processes/__init__.py

@@ -78,9 +78,9 @@ class TaskPool(object):
     def apply_async(self, target, args=None, kwargs=None, callbacks=None,
             errbacks=None, accept_callback=None, timeout_callback=None,
             **compat):
-        """Equivalent of the :func:``apply`` built-in function.
+        """Equivalent of the :func:`apply` built-in function.
 
-        All ``callbacks`` and ``errbacks`` should complete immediately since
+        All `callbacks` and `errbacks` should complete immediately since
         otherwise the thread which handles the result will get blocked.
 
         """

+ 4 - 4
celery/contrib/abortable.py

@@ -65,9 +65,9 @@ In the producer:
 
        ...
 
-After the ``async_result.abort()`` call, the task execution is not
+After the `async_result.abort()` call, the task execution is not
 aborted immediately. In fact, it is not guaranteed to abort at all. Keep
-checking the ``async_result`` status, or call ``async_result.wait()`` to
+checking the `async_result` status, or call `async_result.wait()` to
 have it block until the task is finished.
 
 .. note::
@@ -101,8 +101,8 @@ ABORTED = "ABORTED"
 class AbortableAsyncResult(AsyncResult):
     """Represents a abortable result.
 
-    Specifically, this gives the ``AsyncResult`` a :meth:`abort()` method,
-    which sets the state of the underlying Task to ``"ABORTED"``.
+    Specifically, this gives the `AsyncResult` a :meth:`abort()` method,
+    which sets the state of the underlying Task to `"ABORTED"`.
 
     """
 

+ 7 - 7
celery/datastructures.py

@@ -88,11 +88,11 @@ class PositionQueue(UserList):
         self.data = map(self.UnfilledPosition, xrange(length))
 
     def full(self):
-        """Returns ``True`` if all of the slots has been filled."""
+        """Returns :const:`True` if all of the slots has been filled."""
         return len(self) >= self.length
 
     def __len__(self):
-        """``len(self)`` -> number of slots filled with real values."""
+        """`len(self)` -> number of slots filled with real values."""
         return len(self.filled)
 
     @property
@@ -201,17 +201,17 @@ class SharedCounter(object):
         return self._value
 
     def __iadd__(self, y):
-        """``self += y``"""
+        """`self += y`"""
         self._modify_queue.put(y * +1)
         return self
 
     def __isub__(self, y):
-        """``self -= y``"""
+        """`self -= y`"""
         self._modify_queue.put(y * -1)
         return self
 
     def __int__(self):
-        """``int(self) -> int``"""
+        """`int(self) -> int`"""
         return self._update_value()
 
     def __repr__(self):
@@ -221,7 +221,7 @@ class SharedCounter(object):
 class LimitedSet(object):
     """Kind-of Set with limitations.
 
-    Good for when you need to test for membership (``a in set``),
+    Good for when you need to test for membership (`a in set`),
     but the list might become to big, so you want to limit it so it doesn't
     consume too much resources.
 
@@ -325,7 +325,7 @@ class TokenBucket(object):
 
     .. attribute:: capacity
 
-        Maximum number of tokens in the bucket. Default is ``1``.
+        Maximum number of tokens in the bucket. Default is `1`.
 
     .. attribute:: timestamp
 

+ 4 - 4
celery/db/a805d4bd.py

@@ -2,14 +2,14 @@
 a805d4bd
 This module fixes a bug with pickling and relative imports in Python < 2.6.
 
-The problem is with pickling an e.g. ``exceptions.KeyError`` instance.
-As SQLAlchemy has its own ``exceptions`` module, pickle will try to
-lookup ``KeyError`` in the wrong module, resulting in this exception::
+The problem is with pickling an e.g. `exceptions.KeyError` instance.
+As SQLAlchemy has its own `exceptions` module, pickle will try to
+lookup :exc:`KeyError` in the wrong module, resulting in this exception::
 
     cPickle.PicklingError: Can't pickle <type 'exceptions.KeyError'>:
         attribute lookup exceptions.KeyError failed
 
-doing ``import exceptions`` just before the dump in ``sqlalchemy.types``
+doing `import exceptions` just before the dump in `sqlalchemy.types`
 reveals the source of the bug::
 
     EXCEPTIONS: <module 'sqlalchemy.exc' from '/var/lib/hudson/jobs/celery/

+ 3 - 3
celery/events/__init__.py

@@ -33,7 +33,7 @@ class EventDispatcher(object):
     :keyword hostname: Hostname to identify ourselves as,
         by default uses the hostname returned by :func:`socket.gethostname`.
 
-    :keyword enabled: Set to ``False`` to not actually publish any events,
+    :keyword enabled: Set to :const:`False` to not actually publish any events,
         making :meth:`send` a noop operation.
 
     You need to :meth:`close` this after use.
@@ -104,8 +104,8 @@ class EventReceiver(object):
     :param connection: Carrot connection.
     :keyword handlers: Event handlers.
 
-    :attr:`handlers`` is a dict of event types and their handlers,
-    the special handler ``"*`"`` captures all events that doesn't have a
+    :attr:`handlers` is a dict of event types and their handlers,
+    the special handler `"*"` captures all events that doesn't have a
     handler.
 
     """

+ 3 - 3
celery/events/state.py

@@ -253,7 +253,7 @@ class State(object):
     def tasks_by_timestamp(self, limit=None):
         """Get tasks by timestamp.
 
-        Returns a list of ``(uuid, task)`` tuples.
+        Returns a list of `(uuid, task)` tuples.
 
         """
         return self._sort_tasks_by_time(self.tasks.items()[:limit])
@@ -266,7 +266,7 @@ class State(object):
     def tasks_by_type(self, name, limit=None):
         """Get all tasks by type.
 
-        Returns a list of ``(uuid, task)`` tuples.
+        Returns a list of `(uuid, task)` tuples.
 
         """
         return self._sort_tasks_by_time([(uuid, task)
@@ -276,7 +276,7 @@ class State(object):
     def tasks_by_worker(self, hostname, limit=None):
         """Get all tasks by worker.
 
-        Returns a list of ``(uuid, task)`` tuples.
+        Returns a list of `(uuid, task)` tuples.
 
         """
         return self._sort_tasks_by_time([(uuid, task)

+ 2 - 1
celery/loaders/base.py

@@ -42,7 +42,8 @@ class BaseLoader(object):
         pass
 
     def on_worker_init(self):
-        """This method is called when the worker (``celeryd``) starts."""
+        """This method is called when the worker (:program:`celeryd`)
+        starts."""
         pass
 
     def import_task_module(self, module):

+ 1 - 1
celery/loaders/default.py

@@ -42,7 +42,7 @@ class Loader(BaseLoader):
         return settings
 
     def read_configuration(self):
-        """Read configuration from ``celeryconfig.py`` and configure
+        """Read configuration from :file:`celeryconfig.py` and configure
         celery and Django so it can be used by regular Python."""
         configname = os.environ.get("CELERY_CONFIG_MODULE",
                                     DEFAULT_CONFIG_MODULE)

+ 7 - 7
celery/log.py

@@ -101,7 +101,7 @@ class Logging(object):
 
     def _detect_handler(self, logfile=None):
         """Create log handler with either a filename, an open stream
-        or ``None`` (stderr)."""
+        or :const:`None` (stderr)."""
         if not logfile or hasattr(logfile, "write"):
             return logging.StreamHandler(logfile)
         return logging.FileHandler(logfile)
@@ -120,9 +120,9 @@ class Logging(object):
     def setup_logger(self, loglevel=None, logfile=None,
             format=None, colorize=None, name="celery", root=True,
             app=None, **kwargs):
-        """Setup the ``multiprocessing`` logger.
+        """Setup the :mod:`multiprocessing` logger.
 
-        If ``logfile`` is not specified, then ``sys.stderr`` is used.
+        If `logfile` is not specified, then `sys.stderr` is used.
 
         Returns logger object.
 
@@ -142,7 +142,7 @@ class Logging(object):
             colorize=None, task_kwargs=None, app=None, **kwargs):
         """Setup the task logger.
 
-        If ``logfile`` is not specified, then ``sys.stderr`` is used.
+        If `logfile` is not specified, then `sys.stderr` is used.
 
         Returns logger object.
 
@@ -215,7 +215,7 @@ class LoggingProxy(object):
 
     def _safewrap_handlers(self):
         """Make the logger handlers dump internal errors to
-        ``sys.__stderr__`` instead of ``sys.stderr`` to circumvent
+        `sys.__stderr__` instead of `sys.stderr` to circumvent
         infinite loops."""
 
         def wrap_handler(handler):                  # pragma: no cover
@@ -253,7 +253,7 @@ class LoggingProxy(object):
                 self._thread.recurse_protection = False
 
     def writelines(self, sequence):
-        """``writelines(sequence_of_strings) -> None``.
+        """`writelines(sequence_of_strings) -> None`.
 
         Write the strings to the file.
 
@@ -275,7 +275,7 @@ class LoggingProxy(object):
         self.closed = True
 
     def isatty(self):
-        """Always returns ``False``. Just here for file support."""
+        """Always returns :const:`False`. Just here for file support."""
         return False
 
     def fileno(self):

+ 2 - 2
celery/messaging.py

@@ -21,14 +21,14 @@ def establish_connection(**kwargs):
 
 def with_connection(fun):
     """Decorator for providing default message broker connection for functions
-    supporting the ``connection`` and ``connect_timeout`` keyword
+    supporting the `connection` and `connect_timeout` keyword
     arguments."""
     # FIXME: Deprecate!
     return default_app.with_default_connection(fun)
 
 
 def get_consumer_set(connection, queues=None, **options):
-    """Get the :class:`carrot.messaging.ConsumerSet`` for a queue
+    """Get the :class:`carrot.messaging.ConsumerSet` for a queue
     configuration.
 
     Defaults to the queues in :const:`CELERY_QUEUES`.

+ 1 - 1
celery/registry.py

@@ -36,7 +36,7 @@ class TaskRegistry(UserDict):
         """Unregister task by name.
 
         :param name: name of the task to unregister, or a
-            :class:`celery.task.base.Task` with a valid ``name`` attribute.
+            :class:`celery.task.base.Task` with a valid `name` attribute.
 
         :raises celery.exceptions.NotRegistered: if the task has not
             been registered.

+ 8 - 8
celery/result.py

@@ -54,8 +54,8 @@ class BaseAsyncResult(object):
         :keyword timeout: How long to wait, in seconds, before the
             operation times out.
 
-        :raises celery.exceptions.TimeoutError: if ``timeout`` is not
-            :const:`None` and the result does not arrive within ``timeout``
+        :raises celery.exceptions.TimeoutError: if `timeout` is not
+            :const:`None` and the result does not arrive within `timeout`
             seconds.
 
         If the remote call raised an exception then that
@@ -87,11 +87,11 @@ class BaseAsyncResult(object):
         return self.status == states.FAILURE
 
     def __str__(self):
-        """``str(self) -> self.task_id``"""
+        """`str(self) -> self.task_id`"""
         return self.task_id
 
     def __hash__(self):
-        """``hash(self) -> hash(self.task_id)``"""
+        """`hash(self) -> hash(self.task_id)`"""
         return hash(self.task_id)
 
     def __repr__(self):
@@ -192,7 +192,7 @@ class TaskSetResult(object):
     """Working with :class:`~celery.task.TaskSet` results.
 
     An instance of this class is returned by
-    ``TaskSet``'s :meth:`~celery.task.TaskSet.apply_async()`. It enables
+    `TaskSet`'s :meth:`~celery.task.TaskSet.apply_async()`. It enables
     inspection of the subtasks status and return values as a single entity.
 
     :option taskset_id: see :attr:`taskset_id`.
@@ -287,7 +287,7 @@ class TaskSetResult(object):
                 connection=connection, connect_timeout=connect_timeout)
 
     def __iter__(self):
-        """``iter(res)`` -> ``res.iterate()``."""
+        """`iter(res)` -> `res.iterate()`."""
         return self.iterate()
 
     def __getitem__(self, index):
@@ -325,8 +325,8 @@ class TaskSetResult(object):
         :keyword propagate: If any of the subtasks raises an exception, the
             exception will be reraised.
 
-        :raises celery.exceptions.TimeoutError: if ``timeout`` is not
-            :const:`None` and the operation takes longer than ``timeout``
+        :raises celery.exceptions.TimeoutError: if `timeout` is not
+            :const:`None` and the operation takes longer than `timeout`
             seconds.
 
         :returns: list of return values for all subtasks in order.
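
A hedged usage sketch with the TaskSet API of this era (assuming an `add`
task that sums its arguments)::

    >>> from celery.task.sets import TaskSet, subtask
    >>> result = TaskSet(tasks=[subtask(add, (2, 2)),
    ...                         subtask(add, (4, 4))]).apply_async()
    >>> result.join(timeout=10)
    [4, 8]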

+ 2 - 2
celery/routes.py

@@ -5,8 +5,8 @@ _first_route = firstmethod("route_for_task")
 
 
 def merge(a, b):
-    """Like ``dict(a, **b)`` except it will keep values from ``a``,
-    if the value in ``b`` is :const:`None`."""
+    """Like `dict(a, **b)` except it will keep values from `a`,
+    if the value in `b` is :const:`None`."""
     return dict(a, **dict((k, v) for k, v in b.iteritems() if v is not None))
 
 

+ 5 - 5
celery/schedules.py

@@ -19,15 +19,15 @@ class schedule(object):
         return remaining(last_run_at, self.run_every, relative=self.relative)
 
     def is_due(self, last_run_at):
-        """Returns tuple of two items ``(is_due, next_time_to_run)``,
+        """Returns tuple of two items `(is_due, next_time_to_run)`,
         where next time to run is in seconds.
 
         e.g.
 
-        * ``(True, 20)``, means the task should be run now, and the next
+        * `(True, 20)`, means the task should be run now, and the next
             time to run is in 20 seconds.
 
-        * ``(False, 12)``, means the task should be run in 12 seconds.
+        * `(False, 12)`, means the task should be run in 12 seconds.
 
         You can override this to decide the interval at runtime,
         but keep in mind the value of :setting:`CELERYBEAT_MAX_LOOP_INTERVAL`,
@@ -145,7 +145,7 @@ class crontab_parser(object):
 
 
 class crontab(schedule):
-    """A crontab can be used as the ``run_every`` value of a
+    """A crontab can be used as the `run_every` value of a
     :class:`PeriodicTask` to add cron-like scheduling.
 
     Like a :manpage:`cron` job, you can specify units of time of when
@@ -292,7 +292,7 @@ class crontab(schedule):
         return remaining(last_run_at, delta, now=self.nowfun())
 
     def is_due(self, last_run_at):
-        """Returns tuple of two items ``(is_due, next_time_to_run)``,
+        """Returns tuple of two items `(is_due, next_time_to_run)`,
         where next time to run is in seconds.
 
         See :meth:`celery.schedules.schedule.is_due` for more information.

+ 26 - 26
celery/task/base.py

@@ -63,9 +63,9 @@ class TaskType(type):
     """Metaclass for tasks.
 
     Automatically registers the task in the task registry, except
-    if the ``abstract`` attribute is set.
+    if the `abstract` attribute is set.
 
-    If no ``name`` attribute is provided, the name is automatically
+    If no `name` attribute is provided, the name is automatically
     set to the name of the module it was defined in, and the class name.
 
     """
@@ -104,7 +104,7 @@ class BaseTask(object):
     """A celery task.
 
     All subclasses of :class:`Task` must define the :meth:`run` method,
-    which is the actual method the ``celery`` daemon executes.
+    which is the actual method the `celery` daemon executes.
 
     The :meth:`run` method can take use of the default keyword arguments,
     as listed in the :meth:`run` documentation.
@@ -131,16 +131,16 @@ class BaseTask(object):
     .. attribute:: queue
 
         Select a destination queue for this task. The queue needs to exist
-        in :setting:`CELERY_QUEUES`. The ``routing_key``, ``exchange`` and
-        ``exchange_type`` attributes will be ignored if this is set.
+        in :setting:`CELERY_QUEUES`. The `routing_key`, `exchange` and
+        `exchange_type` attributes will be ignored if this is set.
 
     .. attribute:: routing_key
 
-        Override the global default ``routing_key`` for this task.
+        Override the global default `routing_key` for this task.
 
     .. attribute:: exchange
 
-        Override the global default ``exchange`` for this task.
+        Override the global default `exchange` for this task.
 
     .. attribute:: exchange_type
 
@@ -149,8 +149,8 @@ class BaseTask(object):
     .. attribute:: delivery_mode
 
         Override the global default delivery mode for this task.
-        By default this is set to ``2`` (persistent). You can change this
-        to ``1`` to get non-persistent behavior, which means the messages
+        By default this is set to `2` (persistent). You can change this
+        to `1` to get non-persistent behavior, which means the messages
         are lost if the broker is restarted.
 
     .. attribute:: mandatory
@@ -165,7 +165,7 @@ class BaseTask(object):
 
     .. attribute:: priority:
 
-        The message priority. A number from ``0`` to ``9``, where ``0``
+        The message priority. A number from `0` to `9`, where `0`
         is the highest. Note that RabbitMQ doesn't support priorities yet.
 
     .. attribute:: max_retries
@@ -181,8 +181,8 @@ class BaseTask(object):
     .. attribute:: rate_limit
 
         Set the rate limit for this task type, Examples: :const:`None` (no
-        rate limit), ``"100/s"`` (hundred tasks a second), ``"100/m"``
-        (hundred tasks a minute), ``"100/h"`` (hundred tasks an hour)
+        rate limit), `"100/s"` (hundred tasks a second), `"100/m"`
+        (hundred tasks a minute), `"100/h"` (hundred tasks an hour)
 
     .. attribute:: ignore_result
 
@@ -205,7 +205,7 @@ class BaseTask(object):
     .. attribute:: serializer
 
         The name of a serializer that has been registered with
-        :mod:`carrot.serialization.registry`. Example: ``"json"``.
+        :mod:`carrot.serialization.registry`. Example: `"json"`.
 
     .. attribute:: backend
 
@@ -220,7 +220,7 @@ class BaseTask(object):
 
         If :const:`True` the task will report its status as "started"
         when the task is executed by a worker.
-        The default value is ``False`` as the normal behaviour is to not
+        The default value is :const:`False` as the normal behaviour is to not
         report that level of granularity. Tasks are either pending,
         finished, or waiting to be retried.
 
@@ -405,13 +405,13 @@ class BaseTask(object):
 
         :keyword countdown: Number of seconds into the future that the
             task should execute. Defaults to immediate delivery (Do not
-            confuse that with the ``immediate`` setting, they are
+            confuse that with the `immediate` setting, they are
             unrelated).
 
         :keyword eta: A :class:`~datetime.datetime` object that describes
             the absolute time and date of when the task should execute.
-            May not be specified if ``countdown`` is also supplied. (Do
-            not confuse this with the ``immediate`` setting, they are
+            May not be specified if `countdown` is also supplied. (Do
+            not confuse this with the `immediate` setting, they are
             unrelated).
 
         :keyword expires: Either a :class:`int`, describing the number of
@@ -421,7 +421,7 @@ class BaseTask(object):
             expiration time.
 
         :keyword connection: Re-use existing broker connection instead
-            of establishing a new one. The ``connect_timeout`` argument
+            of establishing a new one. The `connect_timeout` argument
             is not respected if this is set.
 
         :keyword connect_timeout: The timeout in seconds, before we give
@@ -441,7 +441,7 @@ class BaseTask(object):
         :keyword immediate: Request immediate delivery. Will raise an
             exception if the task cannot be routed to a worker
             immediately.  (Do not confuse this parameter with
-            the ``countdown`` and ``eta`` settings, as they are
+            the `countdown` and `eta` settings, as they are
             unrelated). Defaults to the tasks :attr:`immediate` attribute.
 
         :keyword mandatory: Mandatory routing. Raises an exception if
@@ -453,13 +453,13 @@ class BaseTask(object):
 
         :keyword serializer: A string identifying the default
             serialization method to use. Defaults to the
-            ``CELERY_TASK_SERIALIZER`` setting. Can be ``pickle``,
-            ``json``, ``yaml``, or any custom serialization method
+            :setting:`CELERY_TASK_SERIALIZER` setting. Can be `pickle`,
+            `json`, `yaml`, or any custom serialization method
             that has been registered with
             :mod:`carrot.serialization.registry`. Defaults to the tasks
             :attr:`serializer` attribute.
 
-        **Note**: If the ``CELERY_ALWAYS_EAGER`` setting is set, it will
+        **Note**: If the :setting:`CELERY_ALWAYS_EAGER` setting is set, it will
             be replaced by a local :func:`apply` call instead.
 
         """
@@ -506,7 +506,7 @@ class BaseTask(object):
         (must be a :class:`~datetime.datetime` instance).
         :keyword \*\*options: Any extra options to pass on to
             meth:`apply_async`. See :func:`celery.execute.apply_async`.
-        :keyword throw: If this is ``False``, do not raise the
+        :keyword throw: If this is :const:`False`, do not raise the
             :exc:`~celery.exceptions.RetryTaskError` exception,
             that tells the worker to mark the task as being retried.
             Note that this means the task will be marked as failed
@@ -515,8 +515,8 @@ class BaseTask(object):
 
         :raises celery.exceptions.RetryTaskError: To tell the worker that
             the task has been re-sent for retry. This always happens,
-            unless the ``throw`` keyword argument has been explicitly set
-            to ``False``, and is considered normal operation.
+            unless the `throw` keyword argument has been explicitly set
+            to :const:`False`, and is considered normal operation.
 
         Example
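
            (elided by the hunk; a hedged reconstruction, with `Twitter`
            standing in for any flaky external service)

            >>> class TwitterPostStatusTask(Task):
            ...
            ...     def run(self, username, password, message, **kwargs):
            ...         twitter = Twitter(username, password)
            ...         try:
            ...             twitter.post_status(message)
            ...         except twitter.FailWhaleError, exc:
            ...             # retry in 5 minutes
            ...             self.retry(args=[username, password, message],
            ...                        kwargs=kwargs, countdown=60 * 5,
            ...                        exc=exc)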
 
@@ -865,7 +865,7 @@ class PeriodicTask(Task):
         return timedelta_seconds(delta)
 
     def is_due(self, last_run_at):
-        """Returns tuple of two items ``(is_due, next_time_to_run)``,
+        """Returns tuple of two items `(is_due, next_time_to_run)`,
         where next time to run is in seconds.
 
         See :meth:`celery.schedules.schedule.is_due` for more information.

+ 2 - 2
celery/task/builtins.py

@@ -26,7 +26,7 @@ class PingTask(Task):
     name = "celery.ping"
 
     def run(self, **kwargs):
-        """:returns: the string ``"pong"``."""
+        """:returns: the string `"pong"`."""
         return "pong"
 
 
@@ -53,7 +53,7 @@ class ExecuteRemoteTask(Task):
     is an internal component of.
 
     The object must be pickleable, so you can't use lambdas or functions
-    defined in the REPL (that is the python shell, or ``ipython``).
+    defined in the REPL (that is the python shell, or :program:`ipython`).
 
     """
     name = "celery.execute_remote"

+ 1 - 1
celery/task/control.py

@@ -145,7 +145,7 @@ class Control(object):
 
         :param task_name: Type of task to change rate limit for.
         :param rate_limit: The rate limit as tasks per second, or a rate limit
-            string (``"100/m"``, etc.
+            string (`"100/m"`, etc.
             see :attr:`celery.task.base.Task.rate_limit` for
             more information).
         :keyword destination: If set, a list of the hosts to send the

+ 3 - 3
celery/task/http.py

@@ -106,8 +106,8 @@ class HttpDispatch(object):
     """Make task HTTP request and collect the task result.
 
     :param url: The URL to request.
-    :param method: HTTP method used. Currently supported methods are ``GET``
-        and ``POST``.
+    :param method: HTTP method used. Currently supported methods are `GET`
+        and `POST`.
     :param task_kwargs: Task keyword arguments.
     :param logger: Logger used for user/system feedback.
 
@@ -151,7 +151,7 @@ class HttpDispatchTask(BaseTask):
 
     :keyword url: The URL location of the HTTP callback task.
     :keyword method: Method to use when dispatching the callback. Usually
-        ``GET`` or ``POST``.
+        `GET` or `POST`.
     :keyword \*\*kwargs: Keyword arguments to pass on to the HTTP callback.
 
     .. attribute:: url
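
A hedged usage sketch, in the spirit of the remote-tasks user guide of
this era (the example URL is illustrative)::

    >>> from celery.task.http import HttpDispatchTask
    >>> res = HttpDispatchTask.delay(url="http://example.com/multiply",
    ...                              method="GET", x=10, y=10)
    >>> res.get()
    100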

+ 1 - 1
celery/task/sets.py

@@ -73,7 +73,7 @@ class subtask(AttributeDict):
                              options=options or {})
 
     def delay(self, *argmerge, **kwmerge):
-        """Shortcut to ``apply_async(argmerge, kwargs)``."""
+        """Shortcut to `apply_async(argmerge, kwargs)`."""
         return self.apply_async(args=argmerge, kwargs=kwmerge)
 
     def apply(self, args=(), kwargs={}, **options):

+ 3 - 3
celery/tests/utils.py

@@ -169,7 +169,7 @@ def skip(reason):
 
 
 def skip_if(predicate, reason):
-    """Skip test if predicate is ``True``."""
+    """Skip test if predicate is :const:`True`."""
 
     def _inner(fun):
         return predicate and skip(reason)(fun) or fun
@@ -178,7 +178,7 @@ def skip_if(predicate, reason):
 
 
 def skip_unless(predicate, reason):
-    """Skip test if predicate is ``False``."""
+    """Skip test if predicate is :const:`False`."""
     return skip_if(not predicate, reason)
 
 
@@ -218,7 +218,7 @@ def mask_modules(*modnames):
 
 @contextmanager
 def override_stdouts():
-    """Override ``sys.stdout`` and ``sys.stderr`` with ``StringIO``."""
+    """Override `sys.stdout` and `sys.stderr` with `StringIO`."""
     prev_out, prev_err = sys.stdout, sys.stderr
     mystdout, mystderr = StringIO(), StringIO()
     sys.stdout = sys.__stdout__ = mystdout

+ 9 - 9
celery/utils/__init__.py

@@ -117,7 +117,7 @@ def kwdict(kwargs):
 
 
 def first(predicate, iterable):
-    """Returns the first element in ``iterable`` that ``predicate`` returns a
+    """Returns the first element in `iterable` that `predicate` returns a
     :const:`True` value for."""
     for item in iterable:
         if predicate(item):
@@ -144,7 +144,7 @@ def firstmethod(method):
 
 
 def chunks(it, n):
-    """Split an iterator into chunks with ``n`` elements each.
+    """Split an iterator into chunks with `n` elements each.
 
     Examples
 
@@ -242,9 +242,9 @@ def retry_over_time(fun, catch, args=[], kwargs={}, errback=noop,
         exception class.
     :keyword args: Positional arguments passed on to the function.
     :keyword kwargs: Keyword arguments passed on to the function.
-    :keyword errback: Callback for when an exception in ``catch`` is raised.
-        The callback must take two arguments: ``exc`` and ``interval``, where
-        ``exc`` is the exception instance, and ``interval`` is the time in
+    :keyword errback: Callback for when an exception in `catch` is raised.
+        The callback must take two arguments: `exc` and `interval`, where
+        `exc` is the exception instance, and `interval` is the time in
         seconds to sleep next..
     :keyword max_retries: Maximum number of retries before we give up.
         If this is not set, we will retry forever.
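
A hedged usage sketch (`establish_connection` stands in for the callable
being retried)::

    from celery.utils import retry_over_time

    def errback(exc, interval):
        print("Error: %r. Retrying in %s seconds..." % (exc, interval))

    retry_over_time(establish_connection, (IOError,),
                    errback=errback, max_retries=10)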
@@ -277,8 +277,8 @@ def fun_takes_kwargs(fun, kwlist=[]):
     """With a function, and a list of keyword arguments, returns arguments
     in the list which the function takes.
 
-    If the object has an ``argspec`` attribute that is used instead
-    of using the :meth:`inspect.getargspec`` introspection.
+    If the object has an `argspec` attribute that is used instead
+    of using the :meth:`inspect.getargspec` introspection.
 
     :param fun: The function to inspect arguments of.
     :param kwlist: The list of keyword arguments.
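
For illustration, a hedged example (`proj_handler` is hypothetical)::

    >>> def proj_handler(task_id=None, logfile=None):
    ...     pass
    >>> fun_takes_kwargs(proj_handler, ["task_id", "args", "logfile"])
    ["task_id", "logfile"]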
@@ -314,7 +314,7 @@ def get_cls_by_name(name, aliases={}, imp=None):
         celery.concurrency.processes.TaskPool
                                     ^- class name
 
-    If ``aliases`` is provided, a dict containing short name/long name
+    If `aliases` is provided, a dict containing short name/long name
     mappings, the name is looked up in the aliases first.
 
     Examples:
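
        (elided by the hunk; a hedged reconstruction)

        >>> get_cls_by_name("celery.concurrency.processes.TaskPool")
        <class 'celery.concurrency.processes.TaskPool'>

        >>> get_cls_by_name("processes", aliases={
        ...     "processes": "celery.concurrency.processes.TaskPool"})
        <class 'celery.concurrency.processes.TaskPool'>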
@@ -397,7 +397,7 @@ def import_from_cwd(module, imp=None):
     located in the current directory.
 
     Modules located in the current directory has
-    precedence over modules located in ``sys.path``.
+    precedence over modules located in `sys.path`.
     """
     if imp is None:
         imp = importlib.import_module

+ 4 - 4
celery/utils/dispatch/saferef.py

@@ -72,7 +72,7 @@ class BoundMethodWeakref(object):
 
         class attribute pointing to all live
         BoundMethodWeakref objects indexed by the class's
-        ``calculate_key(target)`` method applied to the target
+        `calculate_key(target)` method applied to the target
         objects. This weak value dictionary is used to
         short-circuit creation so that multiple references
         to the same (object, function) pair produce the
@@ -110,7 +110,7 @@ class BoundMethodWeakref(object):
         """Return a weak-reference-like instance for a bound method
 
         :param target: the instance-method target for the weak
-            reference, must have ``im_self`` and ``im_func`` attributes
+            reference, must have `im_self` and `im_func` attributes
             and be reconstructable via::
 
                 target.im_func.__get__(target.im_self)
@@ -153,7 +153,7 @@ class BoundMethodWeakref(object):
     def calculate_key(cls, target):
         """Calculate the reference key for this reference
 
-        Currently this is a two-tuple of the ``id()``'s of the
+        Currently this is a two-tuple of the `id()`'s of the
         target object and the target function respectively.
         """
         return id(target.im_self), id(target.im_func)
@@ -223,7 +223,7 @@ class BoundNonDescriptorMethodWeakref(BoundMethodWeakref):
         """Return a weak-reference-like instance for a bound method
 
         :param target: the instance-method target for the weak
-            reference, must have ``im_self`` and ``im_func`` attributes
+            reference, must have `im_self` and `im_func` attributes
             and be reconstructable via::
 
                 target.im_func.__get__(target.im_self)

+ 6 - 6
celery/utils/dispatch/signal.py

@@ -23,7 +23,7 @@ class Signal(object):
 
     .. attribute:: receivers
         Internal attribute, holds a dictionary of
-        ``{receriverkey (id): weakref(receiver)}`` mappings.
+        `{receriverkey (id): weakref(receiver)}` mappings.
 
     """
 
@@ -51,9 +51,9 @@ class Signal(object):
 
             Receivers must be able to accept keyword arguments.
 
-            If receivers have a ``dispatch_uid`` attribute, the receiver will
+            If receivers have a `dispatch_uid` attribute, the receiver will
             not be added if another receiver already exists with that
-            ``dispatch_uid``.
+            `dispatch_uid`.
 
         :keyword sender: The sender to which the receiver should respond.
             Must either be of type :class:`Signal`, or :const:`None` to receive
@@ -92,7 +92,7 @@ class Signal(object):
         receiver will be removed from dispatch automatically.
 
         :keyword receiver: The registered receiver to disconnect. May be
-            none if ``dispatch_uid`` is specified.
+            none if `dispatch_uid` is specified.
 
         :keyword sender: The registered sender to disconnect.
 
@@ -125,7 +125,7 @@ class Signal(object):
 
         :keyword \*\*named: Named arguments which will be passed to receivers.
 
-        :returns: a list of tuple pairs: ``[(receiver, response), ... ]``.
+        :returns: a list of tuple pairs: `[(receiver, response), ... ]`.
 
         """
         responses = []
@@ -148,7 +148,7 @@ class Signal(object):
             These arguments must be a subset of the argument names defined in
             :attr:`providing_args`.
 
-        :returns: a list of tuple pairs: ``[(receiver, response), ... ]``.
+        :returns: a list of tuple pairs: `[(receiver, response), ... ]`.
 
         :raises DispatcherKeyError:
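
Tying connect and send together, a hedged sketch::

    from celery.utils.dispatch import Signal

    task_done = Signal(providing_args=["task_id"])

    def on_done(sender=None, task_id=None, **kwargs):
        print("task %s done" % task_id)

    task_done.connect(on_done, dispatch_uid="log-task-done")
    responses = task_done.send(sender=None, task_id="abc123")
    # responses -> [(on_done, None)]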
 

+ 2 - 2
celery/utils/functional.py

@@ -53,9 +53,9 @@
 ### Begin from Python 2.5 functools.py ########################################
 
 # Summary of changes made to the Python 2.5 code below:
-#   * Wrapped the ``setattr`` call in ``update_wrapper`` with a try-except
+#   * Wrapped the `setattr` call in `update_wrapper` with a try-except
 #     block to make it compatible with Python 2.3, which doesn't allow
-#     assigning to ``__name__``.
+#     assigning to `__name__`.
 
 # Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007 Python Software
 # Foundation. All Rights Reserved.

+ 3 - 3
celery/utils/timeutils.py

@@ -65,7 +65,7 @@ def remaining(start, ends_in, now=None, relative=True):
     :param ends_in: The end delta as a :class:`~datetime.timedelta`.
     :keyword relative: If set to :const:`False`, the end time will be
         calculated using :func:`delta_resolution` (i.e. rounded to the
-        resolution of ``ends_in``).
+        resolution of `ends_in`).
     :keyword now: Function returning the current time and date,
         defaults to :func:`datetime.now`.
 
@@ -79,7 +79,7 @@ def remaining(start, ends_in, now=None, relative=True):
 
 
 def rate(rate):
-    """Parses rate strings, such as ``"100/m"`` or ``"2/h"``
+    """Parses rate strings, such as `"100/m"` or `"2/h"`
     and converts them to seconds."""
     if rate:
         if isinstance(rate, basestring):
@@ -118,7 +118,7 @@ def humanize_seconds(secs, prefix=""):
 
 
 def maybe_iso8601(dt):
-    """``Either datetime | str -> datetime or None -> None``"""
+    """`Either datetime | str -> datetime or None -> None`"""
     if not dt:
         return
     if isinstance(dt, datetime):
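
A rough re-implementation of the rate parser described above, for
illustration only (the real one lives in this module)::

    RATE_MODIFIERS = {"s": 1, "m": 60, "h": 60 * 60}

    def parse_rate(rate):
        # "100/m" means 100 tasks per minute, i.e. ~1.67 tasks per second.
        if not rate:
            return 0
        if "/" not in rate:
            return float(rate)
        ops, _, modifier = rate.partition("/")
        return float(ops) / RATE_MODIFIERS[modifier]

    assert parse_rate("2/h") == 2 / 3600.0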

+ 3 - 3
celery/worker/__init__.py

@@ -61,7 +61,7 @@ class WorkController(object):
     .. attribute:: concurrency
 
         The number of simultaneous processes doing work (default:
-        ``conf.CELERYD_CONCURRENCY``)
+        :setting:`CELERYD_CONCURRENCY`)
 
     .. attribute:: loglevel
 
@@ -69,8 +69,8 @@ class WorkController(object):
 
     .. attribute:: logfile
 
-        The logfile used, if no logfile is specified it uses ``stderr``
-        (default: `celery.conf.CELERYD_LOG_FILE`).
+        The logfile used; if no logfile is specified it uses `stderr`
+        (default: :setting:`CELERYD_LOG_FILE`).
 
     .. attribute:: embed_clockservice
 

+ 3 - 3
celery/worker/buckets.py

@@ -23,8 +23,8 @@ class TaskBucket(object):
     while the :meth:`get` operation iterates over the buckets and retrieves
     the first available item.
 
-    Say we have three types of tasks in the registry: ``celery.ping``,
-    ``feed.refresh`` and ``video.compress``, the TaskBucket will consist
+    Say we have three types of tasks in the registry: `celery.ping`,
+    `feed.refresh` and `video.compress`, the TaskBucket will consist
     of the following items::
 
         {"celery.ping": TokenBucketQueue(fill_rate=300),
@@ -32,7 +32,7 @@ class TaskBucket(object):
          "video.compress": TokenBucketQueue(fill_rate=2)}
 
     The get operation will iterate over these until one of the buckets
-    is able to return an item. The underlying datastructure is a ``dict``,
+    is able to return an item. The underlying datastructure is a `dict`,
     so the order is ignored here.
 
     :param task_registry: The task registry used to get the task
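
The token bucket behind each queue can be sketched in a few lines (a
simplified stand-alone version, not the actual `TokenBucketQueue`)::

    import time

    class SimpleTokenBucket(object):

        def __init__(self, fill_rate, capacity=1):
            self.fill_rate = float(fill_rate)   # tokens added per second
            self.capacity = float(capacity)
            self._tokens = self.capacity
            self._last = time.time()

        def can_consume(self, tokens=1):
            # Refill according to the time passed, then try to take
            # the requested number of tokens.
            now = time.time()
            self._tokens = min(self.capacity,
                               self._tokens + self.fill_rate * (now - self._last))
            self._last = now
            if tokens <= self._tokens:
                self._tokens -= tokens
                return True
            return False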

+ 5 - 5
celery/worker/job.py

@@ -57,14 +57,14 @@ class WorkerTaskTrace(TaskTrace):
     meta backend.
 
     If the call was successful, it saves the result to the task result
-    backend, and sets the task status to ``"SUCCESS"``.
+    backend, and sets the task status to `"SUCCESS"`.
 
     If the call raises :exc:`celery.exceptions.RetryTaskError`, it extracts
     the original exception, uses that as the result and sets the task status
-    to ``"RETRY"``.
+    to `"RETRY"`.
 
     If the call results in an exception, it saves the exception as the task
-    result, and sets the task status to ``"FAILURE"``.
+    result, and sets the task status to `"FAILURE"`.
 
     :param task_name: The name of the task to execute.
     :param task_id: The unique id of the task.
@@ -308,8 +308,8 @@ class TaskRequest(object):
     def extend_with_default_kwargs(self, loglevel, logfile):
         """Extend the tasks keyword arguments with standard task arguments.
 
-        Currently these are ``logfile``, ``loglevel``, ``task_id``,
-        ``task_name``, ``task_retries``, and ``delivery_info``.
+        Currently these are `logfile`, `loglevel`, `task_id`,
+        `task_name`, `task_retries`, and `delivery_info`.
 
         See :meth:`celery.task.base.Task.run` for more information.
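
This is what makes the classic task signature work; the extra keyword
arguments can be forwarded to the logger (a sketch in the style of the
docs for this series)::

    from celery.decorators import task

    @task()
    def add(x, y, **kwargs):
        # kwargs now includes loglevel, logfile, task_id and friends.
        logger = add.get_logger(**kwargs)
        logger.info("Adding %s + %s" % (x, y))
        return x + y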
 

+ 4 - 4
celery/worker/listener.py

@@ -30,7 +30,7 @@ up and running.
 
 * So for each message received the :meth:`~CarrotListener.receive_message`
   method is called; this checks the payload of the message for either
-  a ``task`` key or a ``control`` key.
+  a `task` key or a `control` key.
 
   If the message is a task, it verifies the validity of the message,
   converts it to a :class:`celery.worker.job.TaskRequest`, and sends
@@ -45,9 +45,9 @@ up and running.
   are acknowledged immediately and logged, so the message is not resent
   again and again.
 
-* If the task has an ETA/countdown, the task is moved to the ``eta_schedule``
+* If the task has an ETA/countdown, the task is moved to the `eta_schedule`
   so the :class:`timer2.Timer` can schedule it at its
-  deadline. Tasks without an eta are moved immediately to the ``ready_queue``,
+  deadline. Tasks without an eta are moved immediately to the `ready_queue`,
   so they can be picked up by the :class:`~celery.worker.controllers.Mediator`
   to be sent to the pool.
 
@@ -257,7 +257,7 @@ class CarrotListener(object):
     def on_task(self, task):
         """Handle received task.
 
-        If the task has an ``eta`` we enter it into the ETA schedule,
+        If the task has an `eta` we enter it into the ETA schedule,
         otherwise we move it to the ready queue for immediate processing.
 
         """

+ 2 - 2
contrib/debian/init.d/celerybeat

@@ -48,7 +48,7 @@
 # =================
 #
 #   * CELERYBEAT_OPTS
-#       Additional arguments to celerybeat, see ``celerybeat --help`` for a
+#       Additional arguments to celerybeat, see `celerybeat --help` for a
 #       list.
 #
 #   * CELERYBEAT_PID_FILE
@@ -61,7 +61,7 @@
 #       Log level to use for celerybeat. Default is INFO.
 #
 #   * CELERYBEAT
-#       Path to the celeryd program. Default is ``celeryd``.
+#       Path to the celeryd program. Default is `celeryd`.
 #       You can point this to a virtualenv, or even use manage.py for Django.
 #
 #   * CELERYBEAT_USER

+ 2 - 2
contrib/debian/init.d/celeryd

@@ -41,7 +41,7 @@
 # =================
 #
 #   * CELERYD_OPTS
-#       Additional arguments to celeryd, see ``celeryd --help`` for a list.
+#       Additional arguments to celeryd, see `celeryd --help` for a list.
 #
 #   * CELERYD_CHDIR
 #       Path to chdir at start. Default is to stay in the current directory.
@@ -56,7 +56,7 @@
 #       Log level to use for celeryd. Default is INFO.
 #
 #   * CELERYD
-#       Path to the celeryd program. Default is ``celeryd``.
+#       Path to the celeryd program. Default is `celeryd`.
 #       You can point this to a virtualenv, or even use manage.py for Django.
 #
 #   * CELERYD_USER

+ 2 - 2
contrib/debian/init.d/celeryevcam

@@ -47,7 +47,7 @@
 # =================
 #
 #   * CELERYEV_OPTS
-#       Additional arguments to celeryd, see ``celeryd --help`` for a list.
+#       Additional arguments to celeryev, see `celeryev --help` for a list.
 #
 #   * CELERYD_CHDIR
 #       Path to chdir at start. Default is to stay in the current directory.
@@ -62,7 +62,7 @@
 #       Log level to use for celeryev. Default is INFO.
 #
 #   * CELERYEV
-#       Path to the celeryev program. Default is ``celeryev``.
+#       Path to the celeryev program. Default is `celeryev`.
 #       You can point this to a virtualenv, or even use manage.py for Django.
 #
 #   * CELERYEV_USER

+ 3 - 3
contrib/generic-init.d/celeryd

@@ -51,8 +51,8 @@
 #       nodes, to start
 #
 #   * CELERYD_OPTS
-#       Additional arguments to celeryd-multi, see ``celeryd-multi --help``
-#       and ``celeryd --help`` for help.
+#       Additional arguments to celeryd-multi, see `celeryd-multi --help`
+#       and `celeryd --help` for help.
 #
 #   * CELERYD_CHDIR
 #       Path to chdir at start. Default is to stay in the current directory.
@@ -67,7 +67,7 @@
 #       Log level to use for celeryd. Default is INFO.
 #
 #   * CELERYD
-#       Path to the celeryd program. Default is ``celeryd``.
+#       Path to the celeryd program. Default is `celeryd`.
 #       You can point this to a virtualenv, or even use manage.py for Django.
 #
 #   * CELERYD_USER

+ 4 - 4
contrib/requirements/README.rst

@@ -6,19 +6,19 @@
 Index
 =====
 
-* ``requirements/default.txt``
+* `requirements/default.txt`
 
     The default requirements (Python 2.6+).
 
-* ``requirements/py25.txt``
+* `requirements/py25.txt`
 
     Extra requirements needed to run on Python 2.5.
 
-* ``requirements/py26.txt``
+* `requirements/py26.txt`
 
     Extra requirements needed to run on Python 2.4.
 
-* ``requirements/test.txt``
+* `requirements/test.txt`
 
     Requirements needed to run the full unittest suite.
 

+ 45 - 45
docs/configuration.rst

@@ -202,14 +202,14 @@ The time in seconds of which the task result queues should expire.
 CELERY_RESULT_EXCHANGE
 ~~~~~~~~~~~~~~~~~~~~~~
 
-Name of the exchange to publish results in.  Default is ``"celeryresults"``.
+Name of the exchange to publish results in.  Default is `"celeryresults"`.
 
 .. setting:: CELERY_RESULT_EXCHANGE_TYPE
 
 CELERY_RESULT_EXCHANGE_TYPE
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The exchange type of the result exchange.  Default is to use a ``direct``
+The exchange type of the result exchange.  Default is to use a `direct`
 exchange.
 
 .. setting:: CELERY_RESULT_SERIALIZER
@@ -217,7 +217,7 @@ exchange.
 CELERY_RESULT_SERIALIZER
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-Result message serialization format.  Default is ``"pickle"``. See
+Result message serialization format.  Default is `"pickle"`. See
 :ref:`executing-serializers`.
 
 .. setting:: CELERY_RESULT_PERSISTENT
@@ -326,7 +326,7 @@ Redis backend settings
     The Redis backend requires the :mod:`redis` library:
     http://pypi.python.org/pypi/redis/0.5.5
 
-    To install the redis package use ``pip`` or ``easy_install``::
+    To install the redis package use `pip` or `easy_install`::
 
         $ pip install redis
 
@@ -337,14 +337,14 @@ This backend requires the following configuration directives to be set.
 REDIS_HOST
 ~~~~~~~~~~
 
-Hostname of the Redis database server. e.g. ``"localhost"``.
+Hostname of the Redis database server. e.g. `"localhost"`.
 
 .. setting:: REDIS_PORT
 
 REDIS_PORT
 ~~~~~~~~~~
 
-Port to the Redis database server. e.g. ``6379``.
+Port to the Redis database server. e.g. `6379`.
 
 .. setting:: REDIS_DB
 
@@ -437,8 +437,8 @@ CELERY_QUEUES
 The mapping of queues the worker consumes from.  This is a dictionary
 of queue name/options.  See :ref:`guide-routing` for more information.
 
-The default is a queue/exchange/binding key of ``"celery"``, with
-exchange type ``direct``.
+The default is a queue/exchange/binding key of `"celery"`, with
+exchange type `direct`.
 
 You don't have to care about this unless you want custom routing facilities.
 
@@ -466,7 +466,7 @@ CELERY_DEFAULT_QUEUE
 ~~~~~~~~~~~~~~~~~~~~
 
 The queue used by default, if no custom queue is specified.  This queue must
-be listed in :setting:`CELERY_QUEUES`.  The default is: ``celery``.
+be listed in :setting:`CELERY_QUEUES`.  The default is: `celery`.
 
 .. seealso::
 
@@ -478,7 +478,7 @@ CELERY_DEFAULT_EXCHANGE
 ~~~~~~~~~~~~~~~~~~~~~~~
 
 Name of the default exchange to use when no custom exchange is
-specified.  The default is: ``celery``.
+specified.  The default is: `celery`.
 
 .. setting:: CELERY_DEFAULT_EXCHANGE_TYPE
 
@@ -486,7 +486,7 @@ CELERY_DEFAULT_EXCHANGE_TYPE
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Default exchange type used when no custom exchange is specified.
-The default is: ``direct``.
+The default is: `direct`.
 
 .. setting:: CELERY_DEFAULT_ROUTING_KEY
 
@@ -494,14 +494,14 @@ CELERY_DEFAULT_ROUTING_KEY
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The default routing key used when sending tasks.
-The default is: ``celery``.
+The default is: `celery`.
 
 .. setting:: CELERY_DEFAULT_DELIVERY_MODE
 
 CELERY_DEFAULT_DELIVERY_MODE
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Can be ``transient`` or ``persistent``.  The default is to send
+Can be `transient` or `persistent`.  The default is to send
 persistent messages.
 
 .. _conf-broker-connection:
@@ -514,7 +514,7 @@ Broker Settings
 BROKER_BACKEND
 ~~~~~~~~~~~~~~
 
-The messaging backend to use. Default is ``"amqplib"``.
+The messaging backend to use. Default is `"amqplib"`.
 
 .. setting:: BROKER_HOST
 
@@ -550,7 +550,7 @@ Password to connect with.
 BROKER_VHOST
 ~~~~~~~~~~~~
 
-Virtual host.  Default is ``"/"``.
+Virtual host.  Default is `"/"`.
 
 .. setting:: BROKER_USE_SSL
 
@@ -604,7 +604,7 @@ CELERY_ALWAYS_EAGER
 ~~~~~~~~~~~~~~~~~~~
 
 If this is :const:`True`, all tasks will be executed locally by blocking
+until the task is finished.  `apply_async` and `Task.delay` will return
+until it is finished.  `apply_async` and `Task.delay` will return
 a :class:`~celery.result.EagerResult` which emulates the behavior of
 :class:`~celery.result.AsyncResult`, except the result has already
 been evaluated.
@@ -617,10 +617,10 @@ instead.
 CELERY_EAGER_PROPAGATES_EXCEPTIONS
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-If this is :const:`True`, eagerly executed tasks (using ``.apply``, or with
+If this is :const:`True`, eagerly executed tasks (using `.apply`, or with
 :setting:`CELERY_ALWAYS_EAGER` on), will raise exceptions.
 
-It's the same as always running ``apply`` with ``throw=True``.
+It's the same as always running `apply` with `throw=True`.
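
Together the two settings give a fully synchronous setup, which is handy
in unit tests. A minimal sketch, assuming a registered task `add`::

    CELERY_ALWAYS_EAGER = True
    CELERY_EAGER_PROPAGATES_EXCEPTIONS = True

    # add.delay() now runs inline and re-raises any error:
    assert add.delay(2, 2).get() == 4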
 
 .. setting:: CELERY_IGNORE_RESULT
 
@@ -648,7 +648,7 @@ A built-in periodic task will delete the results after this time
     backends. For the AMQP backend see
     :setting:`CELERY_AMQP_TASK_RESULT_EXPIRES`.
 
-    When using the database or MongoDB backends, ``celerybeat`` must be
+    When using the database or MongoDB backends, `celerybeat` must be
     running for the results to be expired.
 
 
@@ -678,7 +678,7 @@ CELERY_TASK_SERIALIZER
 ~~~~~~~~~~~~~~~~~~~~~~
 
 A string identifying the default serialization method to use.  Can be
-``pickle`` (default), ``json``, ``yaml``, or any custom serialization
+`pickle` (default), `json`, `yaml`, or any custom serialization
 methods that have been registered with :mod:`carrot.serialization.registry`.
 
 .. seealso::
@@ -784,7 +784,7 @@ CELERYD_STATE_DB
 ~~~~~~~~~~~~~~~~
 
 Name of the file used to store persistent worker state (like revoked tasks).
-Can be a relative or absolute path, but be aware that the suffix ``.db``
+Can be a relative or absolute path, but be aware that the suffix `.db`
 may be appended to the file name (depending on Python version).
 
 Can also be set via the :option:`--statedb` argument to
@@ -813,7 +813,7 @@ Error E-Mails
 CELERY_SEND_TASK_ERROR_EMAILS
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The default value for the ``Task.send_error_emails`` attribute, which if
+The default value for the `Task.send_error_emails` attribute, which if
 set to :const:`True` means errors occurring during task execution will be
 sent to :setting:`ADMINS` by e-mail.
 
@@ -829,7 +829,7 @@ A whitelist of exceptions to send error e-mails for.
 ADMINS
 ~~~~~~
 
-List of ``(name, email_address)`` tuples for the admins that should
+List of `(name, email_address)` tuples for the admins that should
 receive error e-mails.
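
For example (names and addresses are placeholders)::

    ADMINS = (
        ("Admin", "admin@example.com"),
    )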
 
 .. setting:: SERVER_EMAIL
@@ -845,7 +845,7 @@ Default is celery@localhost.
 MAIL_HOST
 ~~~~~~~~~
 
-The mail server to use.  Default is ``"localhost"``.
+The mail server to use.  Default is `"localhost"`.
 
 .. setting:: MAIL_HOST_USER
 
@@ -866,7 +866,7 @@ Password (if required) to log on to the mail server with.
 MAIL_PORT
 ~~~~~~~~~
 
-The port the mail server is listening on.  Default is ``25``.
+The port the mail server is listening on.  Default is `25`.
 
 .. _conf-example-error-mail-config:
 
@@ -906,7 +906,7 @@ Events
 CELERY_SEND_EVENTS
 ~~~~~~~~~~~~~~~~~~
 
-Send events so the worker can be monitored by tools like ``celerymon``.
+Send events so the worker can be monitored by tools like `celerymon`.
 
 .. setting:: CELERY_EVENT_QUEUE
 
@@ -914,21 +914,21 @@ CELERY_EVENT_QUEUE
 ~~~~~~~~~~~~~~~~~~
 
 Name of the queue to consume event messages from. Default is
-``"celeryevent"``.
+`"celeryevent"`.
 
 .. setting:: CELERY_EVENT_EXCHANGE
 
 CELERY_EVENT_EXCHANGE
 ~~~~~~~~~~~~~~~~~~~~~
 
-Name of the exchange to send event messages to.  Default is ``"celeryevent"``.
+Name of the exchange to send event messages to.  Default is `"celeryevent"`.
 
 .. setting:: CELERY_EVENT_EXCHANGE_TYPE
 
 CELERY_EVENT_EXCHANGE_TYPE
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The exchange type of the event exchange.  Default is to use a ``"direct"``
+The exchange type of the event exchange.  Default is to use a `"direct"`
 exchange.
 
 .. setting:: CELERY_EVENT_ROUTING_KEY
@@ -936,7 +936,7 @@ exchange.
 CELERY_EVENT_ROUTING_KEY
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-Routing key used when sending event messages.  Default is ``"celeryevent"``.
+Routing key used when sending event messages.  Default is `"celeryevent"`.
 
 .. setting:: CELERY_EVENT_SERIALIZER
 
@@ -944,7 +944,7 @@ CELERY_EVENT_SERIALIZER
 ~~~~~~~~~~~~~~~~~~~~~~~
 
 Message serialization format used when sending event messages.
-Default is ``"json"``. See :ref:`executing-serializers`.
+Default is `"json"`. See :ref:`executing-serializers`.
 
 .. _conf-broadcast:
 
@@ -960,7 +960,7 @@ Name prefix for the queue used when listening for broadcast messages.
 The worker's hostname will be appended to the prefix to create the final
 queue name.
 
-Default is ``"celeryctl"``.
+Default is `"celeryctl"`.
 
 .. setting:: CELERY_BROADCAST_EXCHANGE
 
@@ -969,14 +969,14 @@ CELERY_BROADCAST_EXCHANGE
 
 Name of the exchange used for broadcast messages.
 
-Default is ``"celeryctl"``.
+Default is `"celeryctl"`.
 
 .. setting:: CELERY_BROADCAST_EXCHANGE_TYPE
 
 CELERY_BROADCAST_EXCHANGE_TYPE
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Exchange type used for broadcast messages.  Default is ``"fanout"``.
+Exchange type used for broadcast messages.  Default is `"fanout"`.
 
 .. _conf-logging:
 
@@ -991,7 +991,7 @@ CELERYD_LOG_FILE
 The default file name the worker daemon logs messages to.  Can be overridden
 using the :option:`--logfile` option to :mod:`~celery.bin.celeryd`.
 
-The default is :const:`None` (``stderr``)
+The default is :const:`None` (`stderr`)
 
 .. setting:: CELERYD_LOG_LEVEL
 
@@ -1013,7 +1013,7 @@ CELERYD_LOG_FORMAT
 
 The format to use for log messages.
 
-Default is ``[%(asctime)s: %(levelname)s/%(processName)s] %(message)s``
+Default is `[%(asctime)s: %(levelname)s/%(processName)s] %(message)s`
 
 See the Python :mod:`logging` module for more information about log
 formats.
@@ -1039,7 +1039,7 @@ formats.
 CELERY_REDIRECT_STDOUTS
 ~~~~~~~~~~~~~~~~~~~~~~~
 
-If enabled ``stdout`` and ``stderr`` will be redirected
+If enabled `stdout` and `stderr` will be redirected
 to the current logger.
 
 Enabled by default.
@@ -1050,7 +1050,7 @@ Used by :program:`celeryd` and :program:`celerybeat`.
 CELERY_REDIRECT_STDOUTS_LEVEL
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The loglevel output to ``stdout`` and ``stderr`` is logged as.
+The loglevel output to `stdout` and `stderr` is logged as.
 Can be one of :const:`DEBUG`, :const:`INFO`, :const:`WARNING`,
 :const:`ERROR` or :const:`CRITICAL`.
 
@@ -1112,7 +1112,7 @@ CELERYBEAT_SCHEDULER
 ~~~~~~~~~~~~~~~~~~~~
 
 The default scheduler class.  Default is
-``"celery.beat.PersistentScheduler"``.
+`"celery.beat.PersistentScheduler"`.
 
 Can also be set via the :option:`-S` argument to
 :mod:`~celery.bin.celerybeat`.
@@ -1122,9 +1122,9 @@ Can also be set via the :option:`-S` argument to
 CELERYBEAT_SCHEDULE_FILENAME
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Name of the file used by ``PersistentScheduler`` to store the last run times
+Name of the file used by `PersistentScheduler` to store the last run times
 of periodic tasks.  Can be a relative or absolute path, but be aware that the
-suffix ``.db`` may be appended to the file name (depending on Python version).
+suffix `.db` may be appended to the file name (depending on Python version).
 
 Can also be set via the :option:`--schedule` argument to
 :mod:`~celery.bin.celerybeat`.
@@ -1143,9 +1143,9 @@ CELERYBEAT_LOG_FILE
 ~~~~~~~~~~~~~~~~~~~
 
 The default file name to log messages to.  Can be overridden using
-the `--logfile`` option to :mod:`~celery.bin.celerybeat`.
+the `--logfile` option to :mod:`~celery.bin.celerybeat`.
 
-The default is :const:`None` (``stderr``).
+The default is :const:`None` (`stderr`).
 
 .. setting:: CELERYBEAT_LOG_LEVEL
 
@@ -1171,9 +1171,9 @@ CELERYMON_LOG_FILE
 ~~~~~~~~~~~~~~~~~~
 
 The default file name to log messages to.  Can be overridden using
-the :option:`--logfile` argument to ``celerymon``.
+the :option:`--logfile` argument to `celerymon`.
 
-The default is :const:`None` (``stderr``)
+The default is :const:`None` (`stderr`)
 
 .. setting:: CELERYMON_LOG_LEVEL
 

+ 13 - 13
docs/cookbook/daemonizing.rst

@@ -20,7 +20,7 @@ start-stop-daemon (Debian/Ubuntu/++)
 See the `contrib/debian/init.d/`_ directory in the celery distribution, this
 directory contains init scripts for celeryd and celerybeat.
 
-These scripts are configured in ``/etc/default/celeryd``.
+These scripts are configured in :file:`/etc/default/celeryd`.
 
 .. _`contrib/debian/init.d/`:
     http://github.com/ask/celery/tree/master/contrib/debian/
@@ -30,7 +30,7 @@ These scripts are configured in ``/etc/default/celeryd``.
 Init script: celeryd
 --------------------
 
-:Usage: ``/etc/init.d/celeryd {start|stop|force-reload|restart|try-restart|status}``
+:Usage: `/etc/init.d/celeryd {start|stop|force-reload|restart|try-restart|status}`
 :Configuration file: /etc/default/celeryd
 
 To configure celeryd you probably need to at least tell it where to chdir
@@ -43,7 +43,7 @@ Example configuration
 
 This is an example configuration for a Python project.
 
-``/etc/default/celeryd``::
+:file:`/etc/default/celeryd`:
 
     # Where to chdir at start.
     CELERYD_CHDIR="/opt/Myproject/"
@@ -59,7 +59,7 @@ This is an example configuration for a Python project.
 Example Django configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-This is an example configuration for those using ``django-celery``::
+This is an example configuration for those using `django-celery`::
 
     # Where the Django project is.
     CELERYD_CHDIR="/opt/Project/"
@@ -76,7 +76,7 @@ Available options
 ~~~~~~~~~~~~~~~~~~
 
 * CELERYD_OPTS
-    Additional arguments to celeryd, see ``celeryd --help`` for a list.
+    Additional arguments to celeryd, see `celeryd --help` for a list.
 
 * CELERYD_CHDIR
     Path to chdir at start. Default is to stay in the current directory.
@@ -91,7 +91,7 @@ Available options
     Log level to use for celeryd. Default is INFO.
 
 * CELERYD
-    Path to the celeryd program. Default is ``celeryd``.
+    Path to the celeryd program. Default is `celeryd`.
     You can point this to a virtualenv, or even use manage.py for Django.
 
 * CELERYD_USER
@@ -104,7 +104,7 @@ Available options
 
 Init script: celerybeat
 -----------------------
-:Usage: ``/etc/init.d/celerybeat {start|stop|force-reload|restart|try-restart|status}``
+:Usage: `/etc/init.d/celerybeat {start|stop|force-reload|restart|try-restart|status}`
 :Configuration file: /etc/default/celerybeat or /etc/default/celeryd
 
 .. _debian-initd-celerybeat-example:
@@ -114,7 +114,7 @@ Example configuration
 
 This is an example configuration for a Python project:
 
-``/etc/default/celeryd``::
+`/etc/default/celeryd`::
 
     # Where to chdir at start.
     CELERYD_CHDIR="/opt/Myproject/"
@@ -133,7 +133,7 @@ This is an example configuration for a Python project:
 Example Django configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-This is an example configuration for those using ``django-celery``::
+This is an example configuration for those using `django-celery`::
 
     # Where the Django project is.
     CELERYD_CHDIR="/opt/Project/"
@@ -156,7 +156,7 @@ Available options
 ~~~~~~~~~~~~~~~~~
 
 * CELERYBEAT_OPTS
-    Additional arguments to celerybeat, see ``celerybeat --help`` for a
+    Additional arguments to celerybeat, see `celerybeat --help` for a
     list.
 
 * CELERYBEAT_PIDFILE
@@ -169,7 +169,7 @@ Available options
     Log level to use for celeryd. Default is INFO.
 
 * CELERYBEAT
-    Path to the celeryd program. Default is ``celeryd``.
+    Path to the celeryd program. Default is `celeryd`.
     You can point this to a virtualenv, or even use manage.py for Django.
 
 * CELERYBEAT_USER
@@ -193,14 +193,14 @@ This can reveal hints as to why the service won't start.
 Also you will see the commands generated, so you can try to run the celeryd
 command manually to read the resulting error output.
 
-For example my ``sh -x`` output does this::
+For example my `sh -x` output does this::
 
     ++ start-stop-daemon --start --chdir /opt/Opal/release/opal --quiet \
         --oknodo --background --make-pidfile --pidfile /var/run/celeryd.pid \
         --exec /opt/Opal/release/opal/manage.py celeryd -- --time-limit=300 \
         -f /var/log/celeryd.log -l INFO
 
-Run the celeryd command after ``--exec`` (without the ``--``) to show the
+Run the celeryd command after `--exec` (without the `--`) to show the
 actual resulting output::
 
     $ /opt/Opal/release/opal/manage.py celeryd --time-limit=300 \

+ 2 - 2
docs/cookbook/tasks.rst

@@ -17,9 +17,9 @@ You can accomplish this by using a lock.
 In this example we'll be using the cache framework to set a lock that is
 accessible for all workers.
 
-It's part of an imaginary RSS feed importer called ``djangofeeds``.
+It's part of an imaginary RSS feed importer called `djangofeeds`.
 The task takes a feed URL as a single argument, and imports that feed into
-a Django model called ``Feed``. We ensure that it's not possible for two or
+a Django model called `Feed`. We ensure that it's not possible for two or
 more workers to import the same feed at the same time by setting a cache key
 consisting of the md5sum of the feed URL.
 

+ 8 - 8
docs/getting-started/broker-installation.rst

@@ -19,7 +19,7 @@ see `Installing RabbitMQ on OS X`_.
 
 .. note::
 
-    If you're getting ``nodedown`` errors after installing and using
+    If you're getting `nodedown` errors after installing and using
     :program:`rabbitmqctl` then this blog post can help you identify
     the source of the problem:
 
@@ -53,7 +53,7 @@ Installing RabbitMQ on OS X
 The easiest way to install RabbitMQ on Snow Leopard is using `Homebrew`_, the new
 and shiny package management system for OS X.
 
-In this example we'll install homebrew into ``/lol``, but you can
+In this example we'll install homebrew into :file:`/lol`, but you can
 choose whichever destination, even in your home directory if you want, as one of
 the strengths of homebrew is that it's relocatable.
 
@@ -62,14 +62,14 @@ install git. Download and install from the disk image at
 http://code.google.com/p/git-osx-installer/downloads/list?can=3
 
 When git is installed you can finally clone the repo, storing it at the
-``/lol`` location::
+:file:`/lol` location::
 
     $ git clone git://github.com/mxcl/homebrew /lol
 
 
 Brew comes with a simple utility called :program:`brew`, used to install, remove and
 query packages. To use it you first have to add it to :envvar:`PATH`, by
-adding the following line to the end of your ``~/.profile``::
+adding the following line to the end of your :file:`~/.profile`::
 
     export PATH="/lol/bin:/lol/sbin:$PATH"
 
@@ -99,12 +99,12 @@ Use the :program:`scutil` command to permanently set your hostname::
 
     sudo scutil --set HostName myhost.local
 
-Then add that hostname to ``/etc/hosts`` so it's possible to resolve it
+Then add that hostname to :file:`/etc/hosts` so it's possible to resolve it
 back into an IP address::
 
     127.0.0.1       localhost myhost myhost.local
 
-If you start the rabbitmq server, your rabbit node should now be ``rabbit@myhost``,
+If you start the rabbitmq server, your rabbit node should now be `rabbit@myhost`,
 as verified by :program:`rabbitmqctl`::
 
     $ sudo rabbitmqctl status
@@ -120,8 +120,8 @@ as verified by :program:`rabbitmqctl`::
     ...done.
 
 This is especially important if your DHCP server gives you a hostname
-starting with an IP address, (e.g. ``23.10.112.31.comcast.net``), because
-then RabbitMQ will try to use ``rabbit@23``, which is an illegal hostname.
+starting with an IP address (e.g. `23.10.112.31.comcast.net`), because
+then RabbitMQ will try to use `rabbit@23`, which is an illegal hostname.
 
 .. _rabbitmq-osx-start-stop:
 

+ 2 - 2
docs/getting-started/first-steps-with-celery.rst

@@ -166,8 +166,8 @@ by holding on to the :class:`~celery.result.AsyncResult`::
     >>> result.successful() # returns True if the task didn't end in failure.
     True
 
-If the task raises an exception, the return value of ``result.successful()``
-will be :const:`False`, and ``result.result`` will contain the exception instance
+If the task raises an exception, the return value of `result.successful()`
+will be :const:`False`, and `result.result` will contain the exception instance
 raised by the task.
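
Continuing the session above, the failure case could look like this (a
hypothetical run; the exact exception depends on the task)::

    >>> result = add.delay(4, "x")   # the task raises TypeError
    >>> result.successful()
    False
    >>> result.result                # the exception instance
    TypeError("unsupported operand type(s) for +: 'int' and 'str'")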
 
 Where to go from here

+ 4 - 4
docs/includes/installation.txt

@@ -1,18 +1,18 @@
-You can install ``celery`` either via the Python Package Index (PyPI)
+You can install `celery` either via the Python Package Index (PyPI)
 or from source.
 
-To install using ``pip``,::
+To install using `pip`,::
 
     $ pip install celery
 
-To install using ``easy_install``,::
+To install using `easy_install`,::
 
     $ easy_install celery
 
 Downloading and installing from source
 --------------------------------------
 
-Download the latest version of ``celery`` from
+Download the latest version of `celery` from
 http://pypi.python.org/pypi/celery/
 
 You can install it by doing the following,::

+ 6 - 6
docs/includes/introduction.txt

@@ -53,7 +53,7 @@ This is a high level overview of the architecture.
 .. image:: http://cloud.github.com/downloads/ask/celery/Celery-Overview-v4.jpg
 
 The broker delivers tasks to the worker servers.
-A worker server is a networked machine running ``celeryd``.  This can be one or
+A worker server is a networked machine running `celeryd`.  This can be one or
 more machines depending on the workload.
 
 The result of the task can be stored for later retrieval (called its
@@ -102,7 +102,7 @@ Features
     |                 | while the queue is temporarily overloaded).        |
     +-----------------+----------------------------------------------------+
     | Concurrency     | Tasks are executed in parallel using the           |
-    |                 | ``multiprocessing`` module.                        |
+    |                 | `multiprocessing` module.                          |
     +-----------------+----------------------------------------------------+
     | Scheduling      | Supports recurring tasks like cron, or specifying  |
     |                 | an exact date or countdown for when after the task |
@@ -189,14 +189,14 @@ is hosted at Github.
 Installation
 ============
 
-You can install ``celery`` either via the Python Package Index (PyPI)
+You can install `celery` either via the Python Package Index (PyPI)
 or from source.
 
-To install using ``pip``,::
+To install using `pip`,::
 
     $ pip install celery
 
-To install using ``easy_install``,::
+To install using `easy_install`,::
 
     $ easy_install celery
 
@@ -205,7 +205,7 @@ To install using ``easy_install``,::
 Downloading and installing from source
 --------------------------------------
 
-Download the latest version of ``celery`` from
+Download the latest version of `celery` from
 http://pypi.python.org/pypi/celery/
 
 You can install it by doing the following,::

+ 3 - 3
docs/includes/resources.txt

@@ -44,10 +44,10 @@ http://wiki.github.com/ask/celery/
 Contributing
 ============
 
-Development of ``celery`` happens at Github: http://github.com/ask/celery
+Development of `celery` happens at Github: http://github.com/ask/celery
 
 You are highly encouraged to participate in the development
-of ``celery``. If you don't like Github (for some reason) you're welcome
+of `celery`. If you don't like Github (for some reason) you're welcome
 to send regular patches.
 
 .. _license:
@@ -55,7 +55,7 @@ to send regular patches.
 License
 =======
 
-This software is licensed under the ``New BSD License``. See the :file:`LICENSE`
+This software is licensed under the `New BSD License`. See the :file:`LICENSE`
 file in the top distribution directory for the full license text.
 
 .. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround

+ 5 - 5
docs/internals/app-overview.rst

@@ -101,19 +101,19 @@ Deprecations
 Removed deprecations
 ====================
 
-* ``celery.utils.timedelta_seconds``
+* `celery.utils.timedelta_seconds`
     Use: :func:`celery.utils.timeutils.timedelta_seconds`
 
-* ``celery.utils.defaultdict``
+* `celery.utils.defaultdict`
     Use: :func:`celery.utils.compat.defaultdict`
 
-* ``celery.utils.all``
+* `celery.utils.all`
     Use: :func:`celery.utils.compat.all`
 
-* ``celery.task.apply_async``
+* `celery.task.apply_async`
     Use app.send_task
 
-* ``celery.task.tasks``
+* `celery.task.tasks`
     Use :data:`celery.registry.tasks`
 
 Aliases (Pending deprecation)

+ 8 - 8
docs/internals/deprecation.rst

@@ -17,18 +17,18 @@ Removals for version 2.0
     =====================================  =====================================
     **Setting name**                       **Replace with**
     =====================================  =====================================
-    ``CELERY_AMQP_CONSUMER_QUEUES``        ``CELERY_QUEUES``
-    ``CELERY_AMQP_CONSUMER_QUEUES``        ``CELERY_QUEUES``
-    ``CELERY_AMQP_EXCHANGE``               ``CELERY_DEFAULT_EXCHANGE``
-    ``CELERY_AMQP_EXCHANGE_TYPE``          ``CELERY_DEFAULT_AMQP_EXCHANGE_TYPE``
-    ``CELERY_AMQP_CONSUMER_ROUTING_KEY``   ``CELERY_QUEUES``
-    ``CELERY_AMQP_PUBLISHER_ROUTING_KEY``  ``CELERY_DEFAULT_ROUTING_KEY``
+    `CELERY_AMQP_CONSUMER_QUEUES`          `CELERY_QUEUES`
+    `CELERY_AMQP_CONSUMER_QUEUES`          `CELERY_QUEUES`
+    `CELERY_AMQP_EXCHANGE`                 `CELERY_DEFAULT_EXCHANGE`
+    `CELERY_AMQP_EXCHANGE_TYPE`            `CELERY_DEFAULT_AMQP_EXCHANGE_TYPE`
+    `CELERY_AMQP_CONSUMER_ROUTING_KEY`     `CELERY_QUEUES`
+    `CELERY_AMQP_PUBLISHER_ROUTING_KEY`    `CELERY_DEFAULT_ROUTING_KEY`
     =====================================  =====================================
 
 * :envvar:`CELERY_LOADER` definitions without class name.
 
-    E.g. ``celery.loaders.default``, needs to include the class name:
-    ``celery.loaders.default.Loader``.
+    E.g. `celery.loaders.default` needs to include the class name:
+    `celery.loaders.default.Loader`.
 
 * :meth:`TaskSet.run`. Use :meth:`celery.task.base.TaskSet.apply_async`
     instead.

+ 10 - 10
docs/internals/protocol.rst

@@ -11,41 +11,41 @@ Message format
 ==============
 
 * task
-    ``string``
+    `string`
 
     Name of the task. **required**
 
 * id
-    ``string``
+    `string`
 
     Unique id of the task (UUID). **required**
 
 * args
-    ``list``
+    `list`
 
     List of arguments. Will be an empty list if not provided.
 
 * kwargs
-    ``dictionary``
+    `dictionary`
 
     Dictionary of keyword arguments. Will be an empty dictionary if not
     provided.
 
 * retries
-    ``int``
+    `int`
 
     Current number of times this task has been retried.
-    Defaults to ``0`` if not specified.
+    Defaults to `0` if not specified.
 
 * eta
-    ``string`` (ISO 8601)
+    `string` (ISO 8601)
 
     Estimated time of arrival. This is the date and time in ISO 8601
     format. If not provided the message is not scheduled, but will be
     executed asap.
 
 * expires (introduced after v2.0.2)
-    ``string`` (ISO 8601)
+    `string` (ISO 8601)
 
     Expiration date. This is the date and time in ISO 8601 format.
     If not provided the message will never expire. The message
@@ -55,7 +55,7 @@ Message format
 Example message
 ===============
 
-This is an example invocation of the ``celery.task.PingTask`` task in JSON
+This is an example invocation of the `celery.task.PingTask` task in JSON
 format:
 
 .. code-block:: javascript
@@ -70,7 +70,7 @@ Serialization
 =============
 
 The protocol supports several serialization formats using the
-``content_type`` message header.
+`content_type` message header.
 
 The MIME-types supported by default are shown in the following table.
 
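Putting the fields above together, a message body might look like this
before serialization (illustrative values)::

    import uuid
    from datetime import datetime, timedelta

    message = {"task": "celery.task.PingTask",
               "id": str(uuid.uuid4()),
               "args": [],
               "kwargs": {},
               "retries": 0,
               "eta": (datetime.now() + timedelta(hours=1)).isoformat()}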

+ 6 - 6
docs/internals/worker.rst

@@ -37,25 +37,25 @@ Components
 CarrotListener
 --------------
 
-Receives messages from the broker using ``carrot``.
+Receives messages from the broker using `carrot`.
 
 When a message is received it's converted into a
 :class:`celery.worker.job.TaskRequest` object.
 
-Tasks with an ETA are entered into the ``eta_schedule``, messages that can
-be immediately processed are moved directly to the ``ready_queue``.
+Tasks with an ETA are entered into the `eta_schedule`; messages that can
+be immediately processed are moved directly to the `ready_queue`.
 
 ScheduleController
 ------------------
 
-The schedule controller is running the ``eta_schedule``.
-If the scheduled tasks eta has passed it is moved to the ``ready_queue``,
+The schedule controller is running the `eta_schedule`.
+If the scheduled task's ETA has passed it is moved to the `ready_queue`,
 otherwise the thread sleeps until the eta is met (remember that the schedule
 is sorted by time).
 
 Mediator
 --------
-The mediator simply moves tasks in the ``ready_queue`` over to the
+The mediator simply moves tasks in the `ready_queue` over to the
 task pool for execution using
 :meth:`celery.worker.job.TaskRequest.execute_using_pool`.
 

+ 1 - 1
docs/links.rst

@@ -9,7 +9,7 @@
 celery
 ------
 
-* IRC logs from ``#celery`` (Freenode):
+* IRC logs from `#celery` (Freenode):
     http://botland.oebfare.com/logger/celery/
 
 .. _links-amqp:

+ 36 - 36
docs/reference/celery.conf.rst

@@ -27,8 +27,8 @@ Queues
 
 .. data:: DEFAULT_DELIVERY_MODE
 
-    Default delivery mode (``"persistent"`` or ``"non-persistent"``).
-    Default is ``"persistent"``.
+    Default delivery mode (`"persistent"` or `"non-persistent"`).
+    Default is `"persistent"`.
 
 .. data:: DEFAULT_ROUTING_KEY
 
@@ -45,65 +45,65 @@ Queues
     broadcast messages. The worker's hostname will be appended
     to the prefix to create the final queue name.
 
-    Default is ``"celeryctl"``.
+    Default is `"celeryctl"`.
 
 .. data:: BROADCAST_EXCHANGE
 
     Name of the exchange used for broadcast messages.
 
-    Default is ``"celeryctl"``.
+    Default is `"celeryctl"`.
 
 .. data:: BROADCAST_EXCHANGE_TYPE
 
-    Exchange type used for broadcast messages. Default is ``"fanout"``.
+    Exchange type used for broadcast messages. Default is `"fanout"`.
 
 .. data:: EVENT_QUEUE
 
     Name of queue used to listen for event messages. Default is
-    ``"celeryevent"``.
+    `"celeryevent"`.
 
 .. data:: EVENT_EXCHANGE
 
-    Exchange used to send event messages. Default is ``"celeryevent"``.
+    Exchange used to send event messages. Default is `"celeryevent"`.
 
 .. data:: EVENT_EXCHANGE_TYPE
 
-    Exchange type used for the event exchange. Default is ``"topic"``.
+    Exchange type used for the event exchange. Default is `"topic"`.
 
 .. data:: EVENT_ROUTING_KEY
 
-    Routing key used for events. Default is ``"celeryevent"``.
+    Routing key used for events. Default is `"celeryevent"`.
 
 .. data:: EVENT_SERIALIZER
 
     Type of serialization method used to serialize events. Default is
-    ``"json"``.
+    `"json"`.
 
 .. data:: RESULT_EXCHANGE
 
     Exchange used by the AMQP result backend to publish task results.
-    Default is ``"celeryresult"``.
+    Default is `"celeryresult"`.
 
 Sending E-Mails
 ===============
 
 .. data:: CELERY_SEND_TASK_ERROR_EMAILS
 
-    If set to ``True``, errors in tasks will be sent to :data:`ADMINS` by e-mail.
+    If set to `True`, errors in tasks will be sent to :data:`ADMINS` by e-mail.
 
 .. data:: ADMINS
 
-    List of ``(name, email_address)`` tuples for the admins that should
+    List of `(name, email_address)` tuples for the admins that should
     receive error e-mails.
 
 .. data:: SERVER_EMAIL
 
     The e-mail address this worker sends e-mails from.
-    Default is ``"celery@localhost"``.
+    Default is `"celery@localhost"`.
 
 .. data:: MAIL_HOST
 
-    The mail server to use. Default is ``"localhost"``.
+    The mail server to use. Default is `"localhost"`.
 
 .. data:: MAIL_HOST_USER
 
@@ -115,7 +115,7 @@ Sending E-Mails
 
 .. data:: MAIL_PORT
 
-    The port the mail server is listening on. Default is ``25``.
+    The port the mail server is listening on. Default is `25`.
 
 Execution
 =========
@@ -126,8 +126,8 @@ Execution
 
 .. data:: EAGER_PROPAGATES_EXCEPTIONS
 
-    If set to ``True``, :func:`celery.execute.apply` will re-raise task exceptions.
-    It's the same as always running apply with ``throw=True``.
+    If set to `True`, :func:`celery.execute.apply` will re-raise task exceptions.
+    It's the same as always running apply with `throw=True`.
 
 .. data:: TASK_RESULT_EXPIRES
 
@@ -149,7 +149,7 @@ Execution
 
 .. data:: STORE_ERRORS_EVEN_IF_IGNORED
 
-    If enabled, task errors will be stored even though ``Task.ignore_result``
+    If enabled, task errors will be stored even though `Task.ignore_result`
     is enabled.
 
 .. data:: MAX_CACHED_RESULTS
@@ -160,11 +160,11 @@ Execution
 .. data:: TASK_SERIALIZER
 
     A string identifying the default serialization
-    method to use. Can be ``pickle`` (default),
-    ``json``, ``yaml``, or any custom serialization methods that have
+    method to use. Can be `pickle` (default),
+    `json`, `yaml`, or any custom serialization methods that have
     been registered with :mod:`carrot.serialization.registry`.
 
-    Default is ``pickle``.
+    Default is `pickle`.
 
 .. data:: RESULT_BACKEND
 
@@ -177,8 +177,8 @@ Execution
 .. data:: SEND_EVENTS
 
     If set, celery will send events that can be captured by monitors like
-    ``celerymon``.
-    Default is: ``False``.
+    `celerymon`.
+    Default is: `False`.
 
 .. data:: DEFAULT_RATE_LIMIT
 
@@ -187,7 +187,7 @@ Execution
 
 .. data:: DISABLE_RATE_LIMITS
 
-    If ``True`` all rate limits will be disabled and all tasks will be executed
+    If `True` all rate limits will be disabled and all tasks will be executed
     as soon as possible.
 
 Broker
@@ -203,9 +203,9 @@ Broker
     Maximum number of retries before we give up re-establishing a connection
     to the broker.
 
-    If this is set to ``0`` or :const:`None`, we will retry forever.
+    If this is set to `0` or :const:`None`, we will retry forever.
 
-    Default is ``100`` retries.
+    Default is `100` retries.
 
 Celerybeat
 ==========
@@ -213,7 +213,7 @@ Celerybeat
 .. data:: CELERYBEAT_LOG_LEVEL
 
     Default log level for celerybeat.
-    Default is: ``INFO``.
+    Default is: `INFO`.
 
 .. data:: CELERYBEAT_LOG_FILE
 
@@ -223,7 +223,7 @@ Celerybeat
 .. data:: CELERYBEAT_SCHEDULE_FILENAME
 
     Name of the persistent schedule database file.
-    Default is: ``celerybeat-schedule``.
+    Default is: `celerybeat-schedule`.
 
 .. data:: CELERYBEAT_MAX_LOOP_INTERVAL
 
@@ -241,7 +241,7 @@ Celerymon
 .. data:: CELERYMON_LOG_LEVEL
 
     Default log level for celerymon.
-    Default is: ``INFO``.
+    Default is: `INFO`.
 
 .. data:: CELERYMON_LOG_FILE
 
@@ -275,31 +275,31 @@ Celeryd
 .. data:: CELERYD_CONCURRENCY
 
     The number of concurrent worker processes.
-    If set to ``0`` (the default), the total number of available CPUs/cores
+    If set to `0` (the default), the total number of available CPUs/cores
     will be used.
 
 .. data:: CELERYD_PREFETCH_MULTIPLIER
 
     The number of concurrent workers is multiplied by this number to yield
     the wanted AMQP QoS message prefetch count.
-    Default is: ``4``
+    Default is: `4`
 
 .. data:: CELERYD_POOL
 
     Name of the task pool class used by the worker.
-    Default is ``"celery.concurrency.processes.TaskPool"``.
+    Default is `"celery.concurrency.processes.TaskPool"`.
 
 .. data:: CELERYD_LISTENER
 
     Name of the listener class used by the worker.
-    Default is ``"celery.worker.listener.CarrotListener"``.
+    Default is `"celery.worker.listener.CarrotListener"`.
 
 .. data:: CELERYD_MEDIATOR
 
     Name of the mediator class used by the worker.
-    Default is ``"celery.worker.controllers.Mediator"``.
+    Default is `"celery.worker.controllers.Mediator"`.
 
 .. data:: CELERYD_ETA_SCHEDULER
 
     Name of the ETA scheduler class used by the worker.
-    Default is ``"celery.worker.controllers.ScheduleController"``.
+    Default is `"celery.worker.controllers.ScheduleController"`.

+ 2 - 2
docs/reference/celery.signals.rst

@@ -31,8 +31,8 @@ Example connecting to the :data:`task_sent` signal:
 
 Some signals also have a sender which you can filter by. For example the
 :data:`task_sent` signal uses the task name as a sender, so you can
-connect your handler to be called only when tasks with name ``"tasks.add"``
-has been sent by providing the ``sender`` argument to
+connect your handler to be called only when tasks with name `"tasks.add"`
+have been sent by providing the `sender` argument to
 :class:`~celery.utils.dispatch.signal.Signal.connect`:
 
 .. code-block:: python

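(The example body falls outside this hunk; a sketch of such a
sender-filtered handler, with the handler body made up)::

    from celery.signals import task_sent

    def when_add_sent(sender=None, task_id=None, **kwargs):
        print("tasks.add queued with id %s" % task_id)

    # Only called when the sender is the "tasks.add" task:
    task_sent.connect(when_add_sent, sender="tasks.add")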
+ 11 - 11
docs/releases/1.0/announcement.rst

@@ -39,11 +39,11 @@ API will be deprecated; so, for example, if we decided to remove a function
 that existed in Celery 1.0:
 
 * Celery 1.2 will contain a backwards-compatible replica of the function which
-  will raise a ``PendingDeprecationWarning``.
+  will raise a `PendingDeprecationWarning`.
   This warning is silent by default; you need to explicitly turn on display
   of these warnings.
 * Celery 1.4 will contain the backwards-compatible replica, but the warning
-  will be promoted to a full-fledged ``DeprecationWarning``. This warning
+  will be promoted to a full-fledged `DeprecationWarning`. This warning
   is loud by default, and will likely be quite annoying.
 * Celery 1.6 will remove the feature outright.
 
@@ -89,15 +89,15 @@ What's new?
 
 * New periodic task service.
 
-    Periodic tasks are no longer dispatched by ``celeryd``, but instead by a
-    separate service called ``celerybeat``. This is an optimized, centralized
+    Periodic tasks are no longer dispatched by `celeryd`, but instead by a
+    separate service called `celerybeat`. This is an optimized, centralized
     service dedicated to your periodic tasks, which means you don't have to
     worry about deadlocks or race conditions any more. But that does mean you
     have to make sure only one instance of this service is running at any one
     time.
 
-  **TIP:** If you're only running a single ``celeryd`` server, you can embed
-  ``celerybeat`` inside it. Just add the ``--beat`` argument.
+  **TIP:** If you're only running a single `celeryd` server, you can embed
+  `celerybeat` inside it. Just add the `--beat` argument.
 
 
 * Broadcast commands
@@ -120,12 +120,12 @@ What's new?
 * Platform agnostic message format.
 
   The message format has been standardized and is now using the ISO-8601 format
-  for dates instead of Python ``datetime`` objects. This means you can write task
-  consumers in other languages than Python (``eceleryd`` anyone?)
+  for dates instead of Python `datetime` objects. This means you can write task
+  consumers in languages other than Python (`eceleryd` anyone?)
 
 * Timely
 
-  Periodic tasks are now scheduled on the clock, i.e. ``timedelta(hours=1)``
+  Periodic tasks are now scheduled on the clock, i.e. `timedelta(hours=1)`
   means every hour at :00 minutes, not every hour from when the server starts.
   To revert to the previous behavior you have the option to enable
   :attr:`PeriodicTask.relative`.
@@ -140,8 +140,8 @@ change set before you continue.
 .. _`changelog`: http://ask.github.com/celery/changelog.html
 
 **TIP:** If you install the :mod:`setproctitle` module you can see which
-task each worker process is currently executing in ``ps`` listings.
-Just install it using pip: ``pip install setproctitle``.
+task each worker process is currently executing in `ps` listings.
+Just install it using pip: `pip install setproctitle`.
 
 Resources
 =========

+ 13 - 13
docs/tutorials/clickcounter.rst

@@ -18,7 +18,7 @@ you are likely to bump into problems. One database write for every click is
 not good if you have millions of clicks a day.
 
 So what can you do? In this tutorial we will send the individual clicks as
-messages using ``carrot``, and then process them later with a ``celery``
+messages using `carrot`, and then process them later with a `celery`
 periodic task.
 
 Celery and carrot are excellent in tandem, and while this might not be
@@ -28,9 +28,9 @@ to solve a task.
 The model
 =========
 
-The model is simple, ``Click`` has the URL as primary key and a number of
-clicks for that URL. Its manager, ``ClickManager`` implements the
-``increment_clicks`` method, which takes a URL and by how much to increment
+The model is simple: `Click` has the URL as primary key and a number of
+clicks for that URL. Its manager, `ClickManager`, implements the
+`increment_clicks` method, which takes a URL and the amount to increment
 its count by.
 
 
@@ -75,22 +75,22 @@ Using carrot to send clicks as messages
 
 The model is normal django stuff, nothing new there. But now we get on to
 the messaging. It has been a tradition for me to put the project's messaging
-related code in its own ``messaging.py`` module, and I will continue to do so
+related code in its own `messaging.py` module, and I will continue to do so
 here so maybe you can adopt this practice. In this module we have two
 functions:
 
-* ``send_increment_clicks``
+* `send_increment_clicks`
 
   This function sends a simple message to the broker. The message body only
   contains the URL we want to increment as plain-text, so the exchange and
-  routing key play a role here. We use an exchange called ``clicks``, with a
-  routing key of ``increment_click``, so any consumer binding a queue to
+  routing key play a role here. We use an exchange called `clicks`, with a
+  routing key of `increment_click`, so any consumer binding a queue to
   this exchange using this routing key will receive these messages.
 
-* ``process_clicks``
+* `process_clicks`
 
   This function processes all currently gathered clicks sent using
-  ``send_increment_clicks``. Instead of issuing one database query for every
+  `send_increment_clicks`. Instead of issuing one database query for every
   click it processes all of the messages first, calculates the new click count
   and issues one update per URL. A message that has been received will not be
   deleted from the broker until it has been acknowledged by the receiver, so
@@ -174,7 +174,7 @@ would want to count the clicks for, you replace the URL with:
 
     http://mysite/clickmuncher/count/?u=http://google.com
 
-and the ``count`` view will send off an increment message and forward you to
+and the `count` view will send off an increment message and forward you to
 that site.
 
 *clickmuncher/views.py*:
@@ -223,8 +223,8 @@ Processing the clicks every 30 minutes is easy using celery periodic tasks.
         def run(self, **kwargs):
             process_clicks()
 
-We subclass from :class:`celery.task.base.PeriodicTask`, set the ``run_every``
-attribute and in the body of the task just call the ``process_clicks``
+We subclass from :class:`celery.task.base.PeriodicTask`, set the `run_every`
+attribute and in the body of the task just call the `process_clicks`
 function we wrote earlier. 
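
For reference, `send_increment_clicks` boils down to something like this
with carrot (connection configuration elided; exchange and routing key as
described above)::

    from carrot.connection import DjangoBrokerConnection
    from carrot.messaging import Publisher

    def send_increment_clicks(for_url):
        connection = DjangoBrokerConnection()
        publisher = Publisher(connection=connection,
                              exchange="clicks",
                              routing_key="increment_click")
        publisher.send(for_url)
        publisher.close()
        connection.close()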
 
 

+ 2 - 2
docs/tutorials/otherqueues.rst

@@ -56,7 +56,7 @@ Database
 Configuration
 -------------
 
-The database backend uses the Django ``DATABASE_*`` settings for database
+The database backend uses the Django `DATABASE_*` settings for database
 configuration values.
 
 #. Set your carrot backend::
@@ -64,7 +64,7 @@ configuration values.
     CARROT_BACKEND = "ghettoq.taproot.Database"
 
 
-#. Add :mod:`ghettoq` to ``INSTALLED_APPS``::
+#. Add :mod:`ghettoq` to `INSTALLED_APPS`::
 
     INSTALLED_APPS = ("ghettoq", )
 

+ 27 - 27
docs/userguide/executing.rst

@@ -16,28 +16,28 @@ Basics
 Executing tasks is done with :meth:`~celery.task.Base.Task.apply_async`,
 and the shortcut: :meth:`~celery.task.Base.Task.delay`.
 
-``delay`` is simple and convenient, as it looks like calling a regular
+`delay` is simple and convenient, as it looks like calling a regular
 function:
 
 .. code-block:: python
 
     Task.delay(arg1, arg2, kwarg1="x", kwarg2="y")
 
-The same using ``apply_async`` is written like this:
+The same using `apply_async` is written like this:
 
 .. code-block:: python
 
     Task.apply_async(args=[arg1, arg2], kwargs={"kwarg1": "x", "kwarg2": "y"})
 
 
-While ``delay`` is convenient, it doesn't give you as much control as using
-``apply_async``.  With ``apply_async`` you can override the execution options
-available as attributes on the ``Task`` class (see :ref:`task-options`).
+While `delay` is convenient, it doesn't give you as much control as using
+`apply_async`.  With `apply_async` you can override the execution options
+available as attributes on the `Task` class (see :ref:`task-options`).
 In addition you can set countdown/eta, task expiry, provide a custom broker
 connection and more.
 
 Let's go over these in more detail.  All the examples use a simple task,
-called ``add``, taking two positional arguments and returning the sum:
+called `add`, taking two positional arguments and returning the sum:
 
 .. code-block:: python
 
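(The definition itself is unchanged and therefore outside this hunk; it is
the familiar example, something like)::

    from celery.decorators import task

    @task
    def add(x, y):
        return x + y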
@@ -62,7 +62,7 @@ ETA and countdown
 =================
 
 The ETA (estimated time of arrival) lets you set a specific date and time that
-is the earliest time at which your task will be executed.  ``countdown`` is
+is the earliest time at which your task will be executed.  `countdown` is
 a shortcut to set eta by seconds into the future.
 
 .. code-block:: python
 are executed in a timely manner you should monitor queue lengths. Use
 Munin, or similar tools, to receive alerts, so appropriate action can be
 taken to ease the workload.  See :ref:`monitoring-munin`.
 
-While ``countdown`` is an integer, ``eta`` must be a :class:`~datetime.datetime`
+While `countdown` is an integer, `eta` must be a :class:`~datetime.datetime`
 object, specifying an exact date and time (including millisecond precision,
 and timezone information):
 
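(The example that belongs here is along these lines)::

    from datetime import datetime, timedelta

    tomorrow = datetime.now() + timedelta(days=1)
    add.apply_async(args=[10, 10], eta=tomorrow)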
@@ -95,7 +95,7 @@ and timezone information):
 Expiration
 ==========
 
-The ``expires`` argument defines an optional expiry time,
+The `expires` argument defines an optional expiry time,
 either as seconds after task publish, or a specific date and time using
 :class:`~datetime.datetime`:
 
@@ -121,8 +121,8 @@ Serializers
 Data transferred between clients and workers needs to be serialized.
 The default serializer is :mod:`pickle`, but you can
 change this globally or for each individual task.
-There is built-in support for :mod:`pickle`, ``JSON``, ``YAML``
-and ``msgpack``, and you can also add your own custom serializers by registering
+There is built-in support for :mod:`pickle`, `JSON`, `YAML`
+and `msgpack`, and you can also add your own custom serializers by registering
 them into the Carrot serializer registry (see
 `Carrot: Serialization of Data`_).
 
@@ -182,12 +182,12 @@ be available for the worker.
 The client uses the following order to decide which serializer
 to use when sending a task:
 
-    1. The ``serializer`` argument to ``apply_async``
-    2. The tasks ``serializer`` attribute
+    1. The `serializer` argument to `apply_async`
+    2. The task's `serializer` attribute
     3. The default :setting:`CELERY_TASK_SERIALIZER` setting.
 
 
-*Using the ``serializer`` argument to ``apply_async``*:
+*Using the `serializer` argument to `apply_async`*:
 
 .. code-block:: python
 
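(i.e. something like)::

    add.apply_async(args=[10, 10], serializer="json")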
@@ -199,7 +199,7 @@ Connections and connection timeouts.
 ====================================
 
 Currently there is no support for broker connection pools, so 
-``apply_async`` establishes and closes a new connection every time
+`apply_async` establishes and closes a new connection every time
 it is called.  This is something you need to be aware of when sending
 more than one task at a time.
 
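One way to amortize that cost when sending many tasks is to reuse a single
publisher; a sketch, assuming the `Task.get_publisher` helper of this
series::

    numbers = [(2, 2), (4, 4), (8, 8), (16, 16)]

    publisher = add.get_publisher()
    try:
        results = [add.apply_async(args=args, publisher=publisher)
                   for args in numbers]
    finally:
        publisher.close()
        publisher.connection.close()

    print([res.get() for res in results])   # -> [4, 8, 16, 32]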
@@ -231,7 +231,7 @@ publisher:
 
 The connection timeout is the number of seconds to wait before giving up
 on establishing the connection.  You can set this by using the
-``connect_timeout`` argument to ``apply_async``:
+`connect_timeout` argument to `apply_async`:
 
 .. code-block:: python
 
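(e.g.)::

    add.apply_async([10, 10], connect_timeout=3)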
@@ -258,11 +258,11 @@ process video, others process images, and some gather collective intelligence
 about its users.  Some of these tasks are more important, so we want to make
 sure the high priority tasks get sent to dedicated nodes.
 
-For the sake of this example we have a single exchange called ``tasks``.
+For the sake of this example we have a single exchange called `tasks`.
 There are different types of exchanges, each type interpreting the routing
 key in different ways, implementing different messaging scenarios.
 
-The most common types used with Celery are ``direct`` and ``topic``.
+The most common types used with Celery are `direct` and `topic`.
 
 * direct
 
@@ -271,14 +271,14 @@ The most common types used with Celery are ``direct`` and ``topic``.
 * topic
 
     In the topic exchange the routing key is made up of words separated by
-    dots (``.``).  Words can be matched by the wild cards ``*`` and ``#``,
-    where ``*`` matches one exact word, and ``#`` matches one or many words.
+    dots (`.`).  Words can be matched by the wildcards `*` and `#`,
+    where `*` matches exactly one word, and `#` matches zero or more words.
 
-    For example, ``*.stock.#`` matches the routing keys ``usd.stock`` and
-    ``euro.stock.db`` but not ``stock.nasdaq``.
+    For example, `*.stock.#` matches the routing keys `usd.stock` and
+    `euro.stock.db` but not `stock.nasdaq`.
 
-We create three queues, ``video``, ``image`` and ``lowpri`` that binds to
-the ``tasks`` exchange.  For the queues we use the following binding keys::
+We create three queues, `video`, `image` and `lowpri` that bind to
+the `tasks` exchange.  For the queues we use the following binding keys::
 
     video: video.#
     image: image.#
@@ -301,8 +301,8 @@ listen to different queues:
 
 
 Later, if the crop task is consuming a lot of resources,
-we can bind new workers to handle just the ``"image.crop"`` task,
-by creating a new queue that binds to ``"image.crop``".
+we can bind new workers to handle just the `"image.crop"` task,
+by creating a new queue that binds to `"image.crop"`.
 
 .. seealso::
 
@@ -329,7 +329,7 @@ Not supported by :mod:`amqplib`.
 
 * priority
 
-A number between ``0`` and ``9``, where ``0`` is the highest priority.
+A number between `0` and `9`, where `0` is the highest priority.
 
 .. note::
 

+ 36 - 36
docs/userguide/monitoring.rst

@@ -67,7 +67,7 @@ Commands
         $ celeryctl inspect scheduled
 
     These are tasks reserved by the worker because they have the
-    ``eta`` or ``countdown`` argument set.
+    `eta` or `countdown` argument set.
 
 * **inspect reserved**: List reserved tasks
     ::
@@ -106,7 +106,7 @@ Commands
 
 .. note::
 
-    All ``inspect`` commands supports a ``--timeout`` argument,
+    All `inspect` commands support a `--timeout` argument.
     This is the number of seconds to wait for responses.
     You may have to increase this timeout if you're getting empty responses
     due to latency.
@@ -118,7 +118,7 @@ Specifying destination nodes
 
 By default the inspect commands operate on all workers.
 You can specify a single worker, or a list of workers, by using the
-``--destination`` argument::
+`--destination` argument::
 
     $ celeryctl inspect -d w1,w2 reserved
 
@@ -161,7 +161,7 @@ If you haven't already enabled the sending of events you need to do so::
 
     $ python manage.py celeryctl inspect enable_events
 
-:Tip: You can enable events when the worker starts using the ``-E`` argument
+:Tip: You can enable events when the worker starts using the `-E` argument
       to :mod:`~celery.bin.celeryd`.
 
 Now that the camera has been started, and events have been enabled
@@ -179,21 +179,21 @@ Shutter frequency
 
 By default the camera takes a snapshot every second.  If this is too frequent,
 or you want higher precision, then you can change this using the
-``--frequency`` argument.  This is a float describing how often, in seconds,
+`--frequency` argument.  This is a float describing how often, in seconds,
 it should wake up to check if there are any new events::
 
     $ python manage.py celerycam --frequency=3.0
 
-The camera also supports rate limiting using the ``--maxrate`` argument.
+The camera also supports rate limiting using the `--maxrate` argument.
 While the frequency controls how often the camera thread wakes up,
 the rate limit controls how often it will actually take a snapshot.
 
 The rate limits can be specified in seconds, minutes or hours
-by appending ``/s``, ``/m`` or ``/h`` to the value.
-Example: ``--maxrate=100/m``, means "hundred writes a minute".
+by appending `/s`, `/m` or `/h` to the value.
+Example: `--maxrate=100/m` means "a hundred writes a minute".
 
 The rate limit is off by default, which means it will take a snapshot
-for every ``--frequency`` seconds.
+every `--frequency` seconds.
 
 The events also expire after some time, so the database doesn't fill up.
 Successful tasks are deleted after 1 day, failed tasks after 3 days,
@@ -204,7 +204,7 @@ and tasks in other states after 5 days.
 Using outside of Django
 ~~~~~~~~~~~~~~~~~~~~~~~
 
-``django-celery`` also installs the :program:`djcelerymon` program. This
+`django-celery` also installs the :program:`djcelerymon` program. This
 can be used by non-Django users, and runs both a webserver and a snapshot
 camera in the same process.
 
@@ -226,12 +226,12 @@ and sets up the Django environment using the same settings::
     $ djcelerymon
 
 Database tables will be created the first time the monitor is run.
-By default an ``sqlite3`` database file named
+By default an `sqlite3` database file named
 :file:`djcelerymon.db` is used, so make sure this file is writable by the
 user running the monitor.
 
 If you want to store the events in a different database, e.g. MySQL,
-then you can configure the ``DATABASE*`` settings directly in your Celery
+then you can configure the `DATABASE*` settings directly in your Celery
 config module.  See http://docs.djangoproject.com/en/dev/ref/settings/#databases
 for more information about the database options available.
 
@@ -260,7 +260,7 @@ Now that the service is started you can visit the monitor
 at http://127.0.0.1:8000, and log in using the user you created.
 
 For a list of the command line options supported by :program:`djcelerymon`,
-please see ``djcelerymon --help``.
+please see `djcelerymon --help`.
 
 .. _monitoring-celeryev:
 
@@ -286,7 +286,7 @@ and it includes a tool to dump events to stdout::
 
     $ celeryev --dump
 
-For a complete list of options use ``--help``::
+For a complete list of options use `--help`::
 
     $ celeryev --help
 
@@ -322,10 +322,10 @@ as manage users, virtual hosts and their permissions.
 
 .. note::
 
-    The default virtual host (``"/"``) is used in these
+    The default virtual host (`"/"`) is used in these
     examples, if you use a custom virtual host you have to add
-    the ``-p`` argument to the command, e.g:
-    ``rabbitmqctl list_queues -p my_vhost ....``
+    the `-p` argument to the command, e.g:
+    `rabbitmqctl list_queues -p my_vhost ....`
 
 .. _`rabbitmqctl(1)`: http://www.rabbitmq.com/man/rabbitmqctl.1.man.html
 
@@ -341,11 +341,11 @@ Finding the number of tasks in a queue::
                               messages_unacknowledged
 
 
-Here ``messages_ready`` is the number of messages ready
-for delivery (sent but not received), ``messages_unacknowledged``
+Here `messages_ready` is the number of messages ready
+for delivery (sent but not received), `messages_unacknowledged`
 is the number of messages that have been received by a worker but
 not acknowledged yet (meaning it is in progress, or has been reserved).
-``messages`` is the sum of ready and unacknowledged messages combined.
+`messages` is the sum of ready and unacknowledged messages.
 
 
 Finding the number of workers currently consuming from a queue::
@@ -356,7 +356,7 @@ Finding the amount of memory allocated to a queue::
 
     $ rabbitmqctl list_queues name memory
 
-:Tip: Adding the ``-q`` option to `rabbitmqctl(1)`_ makes the output
+:Tip: Adding the `-q` option to `rabbitmqctl(1)`_ makes the output
       easier to parse.
 
 
@@ -373,12 +373,12 @@ maintaining a Celery cluster.
     http://github.com/ask/rabbitmq-munin
 
 * celery_tasks: Monitors the number of times each task type has
-  been executed (requires ``celerymon``).
+  been executed (requires `celerymon`).
 
     http://exchange.munin-monitoring.org/plugins/celery_tasks-2/details
 
 * celery_task_states: Monitors the number of tasks in each state
-  (requires ``celerymon``).
+  (requires `celerymon`).
 
     http://exchange.munin-monitoring.org/plugins/celery_tasks/details
 
@@ -412,7 +412,7 @@ write it to a database, send it by e-mail or something else entirely.
 
 :program:`celeryev` is then used to take snapshots with the camera,
 for example if you want to capture state every 2 seconds using the
-camera ``myapp.Camera`` you run :program:`celeryev` with the following
+camera `myapp.Camera`, you run :program:`celeryev` with the following
 arguments::
 
     $ celeryev -c myapp.Camera --frequency=2.0
@@ -446,8 +446,8 @@ Here is an example camera, dumping the snapshot to screen:
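
.. code-block:: python

    # A sketch of such a camera (the `DumpCam` name matches the usage
    # shown below).
    from pprint import pformat

    from celery.events.snapshot import Polaroid

    class DumpCam(Polaroid):

        def on_shutter(self, state):
            if not state.event_count:
                # No new events since the last snapshot.
                return
            print("Workers: %s" % pformat(state.workers, indent=4))
            print("Tasks: %s" % pformat(state.tasks, indent=4))
            print("Total: %s events, %s tasks" % (
                state.event_count, state.task_count))
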
 See the API reference for :mod:`celery.events.state` to read more
 about state objects.
 
-Now you can use this cam with ``celeryev`` by specifying
-it with the ``-c`` option::
+Now you can use this cam with `celeryev` by specifying
+it with the `-c` option::
 
     $ celeryev -c myapp.DumpCam --frequency=2.0
 
@@ -481,16 +481,16 @@ This list contains the events sent by the worker, and their arguments.
 Task Events
 ~~~~~~~~~~~
 
-* ``task-received(uuid, name, args, kwargs, retries, eta, hostname,
-  timestamp)``
+* `task-received(uuid, name, args, kwargs, retries, eta, hostname,
+  timestamp)`
 
     Sent when the worker receives a task.
 
-* ``task-started(uuid, hostname, timestamp)``
+* `task-started(uuid, hostname, timestamp)`
 
     Sent just before the worker executes the task.
 
-* ``task-succeeded(uuid, result, runtime, hostname, timestamp)``
+* `task-succeeded(uuid, result, runtime, hostname, timestamp)`
 
     Sent if the task executed successfully.
 
@@ -498,16 +498,16 @@ Task Events
     (Starting from when the task is sent to the worker pool, and ending when the
     pool result handler callback is called).
 
-* ``task-failed(uuid, exception, traceback, hostname, timestamp)``
+* `task-failed(uuid, exception, traceback, hostname, timestamp)`
 
     Sent if the execution of the task failed.
 
-* ``task-revoked(uuid)``
+* `task-revoked(uuid)`
 
     Sent if the task has been revoked (Note that this is likely
     to be sent by more than one worker).
 
-* ``task-retried(uuid, exception, traceback, hostname, timestamp)``
+* `task-retried(uuid, exception, traceback, hostname, timestamp)`
 
     Sent if the task failed, but will be retried in the future.
 
@@ -516,15 +516,15 @@ Task Events
 Worker Events
 ~~~~~~~~~~~~~
 
-* ``worker-online(hostname, timestamp)``
+* `worker-online(hostname, timestamp)`
 
     The worker has connected to the broker and is online.
 
-* ``worker-heartbeat(hostname, timestamp)``
+* `worker-heartbeat(hostname, timestamp)`
 
     Sent every minute.  If the worker has not sent a heartbeat in 2 minutes,
     it is considered to be offline.
 
-* ``worker-offline(hostname, timestamp)``
+* `worker-offline(hostname, timestamp)`
 
     The worker has disconnected from the broker.

+ 16 - 16
docs/userguide/periodic-tasks.rst

@@ -30,7 +30,7 @@ Entries
 To schedule a task periodically you have to add an entry to the
 :setting:`CELERYBEAT_SCHEDULE` setting.
 
-Example: Run the ``tasks.add`` task every 30 seconds.
+Example: Run the `tasks.add` task every 30 seconds.
 
 .. code-block:: python
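
    # A sketch of the entry; assumes a task registered as "tasks.add".
    from datetime import timedelta

    CELERYBEAT_SCHEDULE = {
        "add-every-30-seconds": {
            "task": "tasks.add",
            "schedule": timedelta(seconds=30),
            "args": (16, 16),
        },
    }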
 
@@ -46,7 +46,7 @@ Example: Run the ``tasks.add`` task every 30 seconds.
 
 
 Using a :class:`~datetime.timedelta` for the schedule means the task will
-be executed 30 seconds after ``celerybeat`` starts, and then every 30 seconds
+be executed 30 seconds after `celerybeat` starts, and then every 30 seconds
 after the last run.  A crontab-like schedule also exists; see the section
 on `Crontab schedules`_.
 
@@ -55,11 +55,11 @@ on `Crontab schedules`_.
 Available Fields
 ----------------
 
-* ``task``
+* `task`
 
     The name of the task to execute.
 
-* ``schedule``
+* `schedule`
 
     The frequency of execution.
 
@@ -68,28 +68,28 @@ Available Fields
     You can also define your own custom schedule types, by extending the
     interface of :class:`~celery.schedules.schedule`.
 
-* ``args``
+* `args`
 
     Positional arguments (:class:`list` or :class:`tuple`).
 
-* ``kwargs``
+* `kwargs`
 
     Keyword arguments (:class:`dict`).
 
-* ``options``
+* `options`
 
     Execution options (:class:`dict`).
 
     This can be any argument supported by :meth:`~celery.execute.apply_async`,
-    e.g. ``exchange``, ``routing_key``, ``expires``, and so on.
+    e.g. `exchange`, `routing_key`, `expires`, and so on.
 
-* ``relative``
+* `relative`
 
     By default :class:`~datetime.timedelta` schedules are scheduled
     "by the clock". This means the frequency is rounded to the nearest
     second, minute, hour or day depending on the period of the timedelta.
 
-    If ``relative`` is true the frequency is not rounded and will be
+    If `relative` is true, the frequency is not rounded and will be
     relative to the time when :program:`celerybeat` was started.
 
 .. _beat-crontab:
@@ -99,7 +99,7 @@ Crontab schedules
 
 If you want more control over when the task is executed, for
 example, a particular time of day or day of the week, you can use
-the ``crontab`` schedule type:
+the `crontab` schedule type:
 
 .. code-block:: python
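
    # A sketch: run tasks.add every Monday morning at 7:30 a.m.
    from celery.schedules import crontab

    CELERYBEAT_SCHEDULE = {
        "add-every-monday-morning": {
            "task": "tasks.add",
            "schedule": crontab(hour=7, minute=30, day_of_week=1),
            "args": (16, 16),
        },
    }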
 
@@ -165,13 +165,13 @@ To start the :program:`celerybeat` service::
 
     $ celerybeat
 
-You can also start ``celerybeat`` with ``celeryd`` by using the ``-B`` option,
+You can also start `celerybeat` with `celeryd` by using the `-B` option;
 this is convenient if you only intend to use one worker node::
 
     $ celeryd -B
 
 Celerybeat needs to store the last run times of the tasks in a local database
-file (named ``celerybeat-schedule`` by default), so it needs access to
+file (named `celerybeat-schedule` by default), so it needs access to
 write in the current directory, or alternatively you can specify a custom
 location for this file::
 
@@ -187,15 +187,15 @@ location for this file::
 Using custom scheduler classes
 ------------------------------
 
-Custom scheduler classes can be specified on the command line (the ``-S``
+Custom scheduler classes can be specified on the command line (the `-S`
 argument).  The default scheduler is :class:`celery.beat.PersistentScheduler`,
 which simply keeps track of the last run times in a local database file
 (a :mod:`shelve`).
 
-``django-celery`` also ships with a scheduler that stores the schedule in the
+`django-celery` also ships with a scheduler that stores the schedule in the
 Django database::
 
     $ celerybeat -S djcelery.schedulers.DatabaseScheduler
 
-Using ``django-celery``'s scheduler you can add, modify and remove periodic
+Using `django-celery`'s scheduler you can add, modify and remove periodic
 tasks from the Django Admin.

+ 2 - 2
docs/userguide/remote-tasks.rst

@@ -109,7 +109,7 @@ task being executed::
             [f2cc8efc-2a14-40cd-85ad-f1c77c94beeb] processed: 100
 
 Since applying tasks can be done via HTTP using the
-``djcelery.views.apply`` view, executing tasks from other languages is easy.
+`djcelery.views.apply` view, executing tasks from other languages is easy.
 For an example service exposing tasks via HTTP you should have a look at
-``examples/celery_http_gateway`` in the Celery distribution:
+`examples/celery_http_gateway` in the Celery distribution:
 http://github.com/ask/celery/tree/master/examples/celery_http_gateway/

+ 40 - 40
docs/userguide/routing.rst

@@ -32,17 +32,17 @@ With this setting on, a named queue that is not already defined in
 :setting:`CELERY_QUEUES` will be created automatically.  This makes it easy to
 perform simple routing tasks.
 
-Say you have two servers, ``x``, and ``y`` that handles regular tasks,
-and one server ``z``, that only handles feed related tasks.  You can use this
+Say you have two servers, `x` and `y`, that handle regular tasks,
+and one server `z`, that only handles feed-related tasks.  You can use this
 configuration::
 
     CELERY_ROUTES = {"feed.tasks.import_feed": {"queue": "feeds"}}
 
 With this route enabled, import feed tasks will be routed to the
-``"feeds"`` queue, while all other tasks will be routed to the default queue
-(named ``"celery"`` for historic reasons).
+`"feeds"` queue, while all other tasks will be routed to the default queue
+(named `"celery"` for historic reasons).
 
-Now you can start server ``z`` to only process the feeds queue like this::
+Now you can start server `z` to only process the feeds queue like this::
 
     (z)$ celeryd -Q feeds
 
@@ -74,7 +74,7 @@ The point of this feature is to hide the complex AMQP protocol from users
 with only basic needs. However -- you may still be interested in how these queues
 are declared.
 
-A queue named ``"video"`` will be created with the following settings:
+A queue named `"video"` will be created with the following settings:
 
 .. code-block:: python
 
@@ -82,7 +82,7 @@ A queue named ``"video"`` will be created with the following settings:
      "exchange_type": "direct",
      "routing_key": "video"}
 
-The non-AMQP backends like ``ghettoq`` does not support exchanges, so they
+The non-AMQP backends like `ghettoq` do not support exchanges, so they
 require the exchange to have the same name as the queue. Using this design
 ensures it will work for them as well.
 
@@ -91,8 +91,8 @@ ensures it will work for them as well.
 Manual routing
 --------------
 
-Say you have two servers, ``x``, and ``y`` that handles regular tasks,
-and one server ``z``, that only handles feed related tasks, you can use this
+Say you have two servers, `x` and `y`, that handle regular tasks,
+and one server `z`, that only handles feed-related tasks, you can use this
 configuration:
 
 .. code-block:: python
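
    # A sketch of such a configuration; the binding keys are illustrative.
    CELERY_DEFAULT_QUEUE = "default"
    CELERY_QUEUES = {
        "default": {
            "binding_key": "task.#",
        },
        "feed_tasks": {
            "binding_key": "feed.#",
        },
    }
    CELERY_DEFAULT_EXCHANGE = "tasks"
    CELERY_DEFAULT_EXCHANGE_TYPE = "topic"
    CELERY_DEFAULT_ROUTING_KEY = "task.default"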
@@ -115,7 +115,7 @@ exchange/type/binding_key, if you don't set exchange or exchange type, they
 will be taken from the :setting:`CELERY_DEFAULT_EXCHANGE` and
 :setting:`CELERY_DEFAULT_EXCHANGE_TYPE` settings.
 
-To route a task to the ``feed_tasks`` queue, you can add an entry in the
+To route a task to the `feed_tasks` queue, you can add an entry in the
 :setting:`CELERY_ROUTES` setting:
 
 .. code-block:: python
@@ -128,7 +128,7 @@ To route a task to the ``feed_tasks`` queue, you can add an entry in the
     }
 
 
-You can also override this using the ``routing_key`` argument to
+You can also override this using the `routing_key` argument to
 :func:`~celery.execute.apply_async`, or :func:`~celery.execute.send_task`::
 
     >>> from feeds.tasks import import_feed
@@ -137,12 +137,12 @@ You can also override this using the ``routing_key`` argument to
     ...                         routing_key="feed.import")
 
 
-To make server ``z`` consume from the feed queue exclusively you can
-start it with the ``-Q`` option::
+To make server `z` consume from the feed queue exclusively you can
+start it with the `-Q` option::
 
     (z)$ celeryd -Q feed_tasks --hostname=z.example.com
 
-Servers ``x`` and ``y`` must be configured to consume from the default queue::
+Servers `x` and `y` must be configured to consume from the default queue::
 
     (x)$ celeryd -Q default --hostname=x.example.com
     (y)$ celeryd -Q default --hostname=y.example.com
@@ -243,7 +243,7 @@ The steps required to send and receive messages are:
 3. Bind the queue to the exchange.
 
 Celery automatically creates the entities necessary for the queues in
-:setting:`CELERY_QUEUES` to work (except if the queue's ``auto_declare``
+:setting:`CELERY_QUEUES` to work (except if the queue's `auto_declare`
 setting is set to :const:`False`).
 
 Here's an example queue configuration with three queues;
@@ -270,8 +270,8 @@ One for video, one for images and one default queue for everything else:
 
 .. note::
 
-    In Celery the ``routing_key`` is the key used to send the message,
-    while ``binding_key`` is the key the queue is bound with.  In the AMQP API
+    In Celery the `routing_key` is the key used to send the message,
+    while `binding_key` is the key the queue is bound with.  In the AMQP API
     they are both referred to as the routing key.
 
 .. _amqp-exchange-types:
@@ -280,8 +280,8 @@ Exchange types
 --------------
 
 The exchange type defines how the messages are routed through the exchange.
-The exchange types defined in the standard are ``direct``, ``topic``,
-``fanout`` and ``headers``.  Also non-standard exchange types are available
+The exchange types defined in the standard are `direct`, `topic`,
+`fanout` and `headers`.  Also non-standard exchange types are available
 as plugins to RabbitMQ, like the `last-value-cache plug-in`_ by Michael
 Bridgen.
 
@@ -294,7 +294,7 @@ Direct exchanges
 ~~~~~~~~~~~~~~~~
 
 Direct exchanges match by exact routing keys, so a queue bound by
-the routing key ``video`` only receives messages with that routing key.
+the routing key `video` only receives messages with that routing key.
 
 .. _amqp-exchange-type-topic:
 
@@ -302,12 +302,12 @@ Topic exchanges
 ~~~~~~~~~~~~~~~
 
 Topic exchanges match routing keys using dot-separated words, and the
-wildcard characters: ``*`` (matches a single word), and ``#`` (matches
+wildcard characters: `*` (matches a single word), and `#` (matches
 zero or more words).
 
-With routing keys like ``usa.news``, ``usa.weather``, ``norway.news`` and
-``norway.weather``, bindings could be ``*.news`` (all news), ``usa.#`` (all
-items in the USA) or ``usa.weather`` (all USA weather items).
+With routing keys like `usa.news`, `usa.weather`, `norway.news` and
+`norway.weather`, bindings could be `*.news` (all news), `usa.#` (all
+items in the USA) or `usa.weather` (all USA weather items).
 
 .. _amqp-api:
 
@@ -334,7 +334,7 @@ Related API commands
     Declares a queue by name.
 
     Exclusive queues can only be consumed from by the current connection.
-    Exclusive also implies ``auto_delete``.
+    Exclusive also implies `auto_delete`.
 
 .. method:: queue.bind(queue_name, exchange_name, routing_key)
 
@@ -367,7 +367,7 @@ It's used for command-line access to the AMQP API, enabling access to
 administration tasks like creating/deleting queues and exchanges, purging
 queues or sending messages.
 
-You can write commands directly in the arguments to ``camqadm``, or just start
+You can write commands directly as arguments to `camqadm`, or just start
 it with no arguments to enter shell-mode::
 
     $ camqadm
@@ -375,10 +375,10 @@ with no arguments to start it in shell-mode::
     -> connected.
     1>
 
-Here ``1>`` is the prompt.  The number 1, is the number of commands you
-have executed so far.  Type ``help`` for a list of commands available.
+Here `1>` is the prompt.  The number 1 is the number of commands you
+have executed so far.  Type `help` for a list of available commands.
 It also supports autocompletion, so you can start typing a command and then
-hit the ``tab`` key to show a list of possible matches.
+hit the `tab` key to show a list of possible matches.
 
 Let's create a queue we can send messages to::
 
@@ -389,19 +389,19 @@ Let's create a queue we can send messages to::
     3> queue.bind testqueue testexchange testkey
     ok.
 
-This created the direct exchange ``testexchange``, and a queue
-named ``testqueue``.  The queue is bound to the exchange using
-the routing key ``testkey``.
+This created the direct exchange `testexchange`, and a queue
+named `testqueue`.  The queue is bound to the exchange using
+the routing key `testkey`.
 
-From now on all messages sent to the exchange ``testexchange`` with routing
-key ``testkey`` will be moved to this queue.  We can send a message by
-using the ``basic.publish`` command::
+From now on all messages sent to the exchange `testexchange` with routing
+key `testkey` will be moved to this queue.  We can send a message by
+using the `basic.publish` command::
 
     4> basic.publish "This is a message!" testexchange testkey
     ok.
 
 Now that the message is sent we can retrieve it again.  We use the
-``basic.get`` command here, which pops a single message off the queue,
+`basic.get` command here, which pops a single message off the queue.
 This command is not recommended for production, as it implies polling; any
 real application would declare consumers instead.
 
@@ -426,9 +426,9 @@ Note the delivery tag listed in the structure above; Within a connection channel
 every received message has a unique delivery tag.
 This tag is used to acknowledge the message.  Also note that
 delivery tags are not unique across connections, so in another client
-the delivery tag ``1`` might point to a different message than in this channel.
+the delivery tag `1` might point to a different message than in this channel.
 
-You can acknowledge the message we received using ``basic.ack``::
+You can acknowledge the message we received using `basic.ack`::
 
     6> basic.ack 1
     ok.
@@ -510,7 +510,7 @@ Routers
 A router is a class that decides the routing options for a task.
 
 All you need to define a new router is to create a class with a
-``route_for_task`` method:
+`route_for_task` method:
 
 .. code-block:: python
 
@@ -523,7 +523,7 @@ All you need to define a new router is to create a class with a
                         "routing_key": "video.compress"}
             return None
 
-If you return the ``queue`` key, it will expand with the defined settings of
+If you return the `queue` key, it will expand with the defined settings of
 that queue in :setting:`CELERY_QUEUES`::
 
     {"queue": "video", "routing_key": "video.compress"}

+ 26 - 26
docs/userguide/tasks.rst

@@ -18,8 +18,8 @@ Basics
 ======
 
 A task is a class that encapsulates a function and its execution options.
-Given a function ``create_user``, that takes two arguments: ``username`` and
-``password``, you can create a task like this:
+Given a function `create_user`, that takes two arguments: `username` and
+`password`, you can create a task like this:
 
 .. code-block:: python
 
@@ -30,7 +30,7 @@ Given a function ``create_user``, that takes two arguments: ``username`` and
         User.objects.create(username=username, password=password)
 
 
-Task options are added as arguments to ``task``::
+Task options are added as arguments to `task`:
 
 .. code-block:: python
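
    # A sketch: the same `create_user` task with an explicit serializer.
    @task(serializer="json")
    def create_user(username, password):
        User.objects.create(username=username, password=password)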
 
@@ -43,7 +43,7 @@ Task options are added as arguments to ``task``::
 Task Request Info
 =================
 
-The ``task.request`` attribute contains information about
+The `task.request` attribute contains information about
 the task being executed, and contains the following attributes:
 
 :id: The unique id of the executing task.
@@ -53,7 +53,7 @@ the task being executed, and contains the following attributes:
 :kwargs: Keyword arguments.
 
 :retries: How many times the current task has been retried.
-          An integer starting at ``0``.
+          An integer starting at `0`.
 
 :is_eager: Set to :const:`True` if the task is executed locally in
            the client, and not by a worker.
@@ -97,10 +97,10 @@ the worker log:
         logger.info("Adding %s + %s" % (x, y))
         return x + y
 
-There are several logging levels available, and the workers ``loglevel``
+There are several logging levels available, and the worker's `loglevel`
 setting decides whether or not they will be written to the log file.
 
-Of course, you can also simply use ``print`` as anything written to standard
+Of course, you can also simply use `print`, as anything written to standard
 out/-err will be written to the logfile as well.
 
 .. _task-retry:
@@ -122,11 +122,11 @@ It will do the right thing, and respect the
         except (Twitter.FailWhaleError, Twitter.LoginError), exc:
             send_twitter_status.retry(exc=exc)
 
-Here we used the ``exc`` argument to pass the current exception to
+Here we used the `exc` argument to pass the current exception to
 :meth:`~celery.task.base.BaseTask.retry`. At each step of the retry this exception
 is available as the tombstone (result) of the task. When
 :attr:`~celery.task.base.BaseTask.max_retries` has been exceeded this is the
-exception raised.  However, if an ``exc`` argument is not provided the
+exception raised.  However, if an `exc` argument is not provided the
 :exc:`~celery.exceptions.RetryTaskError` exception is raised instead.
 
 .. _task-retry-custom-delay:
@@ -140,7 +140,7 @@ before doing so. The default delay is in the
 attribute on the task. By default this is set to 3 minutes. Note that the
 unit for setting the delay is in seconds (int or float).
 
-You can also provide the ``countdown`` argument to
+You can also provide the `countdown` argument to
 :meth:`~celery.task.base.BaseTask.retry` to override this default.
 
 .. code-block:: python
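
    # A sketch; the delay and countdown values are illustrative.
    @task(default_retry_delay=30 * 60)  # retry in 30 minutes by default.
    def add(x, y):
        try:
            return x + y
        except Exception, exc:
            # Override the default delay and retry in 1 minute instead.
            add.retry(exc=exc, countdown=60)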
@@ -205,8 +205,8 @@ General
     If it is an integer, it is interpreted as "tasks per second". 
 
     The rate limits can be specified in seconds, minutes or hours
-    by appending ``"/s"``, ``"/m"`` or ``"/h"`` to the value.
-    Example: ``"100/m"`` (hundred tasks a minute).  Default is the
+    by appending `"/s"`, `"/m"` or `"/h"` to the value.
+    Example: `"100/m"` (hundred tasks a minute).  Default is the
     :setting:`CELERY_DEFAULT_RATE_LIMIT` setting, which if not specified means
     rate limiting for tasks is disabled by default.
 
@@ -236,7 +236,7 @@ General
 
     A string identifying the default serialization
     method to use. Defaults to the :setting:`CELERY_TASK_SERIALIZER`
-    setting.  Can be ``pickle`` ``json``, ``yaml``, or any custom
+    setting.  Can be `pickle`, `json`, `yaml`, or any custom
     serialization methods that have been registered with
     :mod:`carrot.serialization.registry`.
 
@@ -273,7 +273,7 @@ General
     task is currently running.
 
     The hostname and pid of the worker executing the task
-    will be avaiable in the state metadata (e.g. ``result.info["pid"]``)
+    will be available in the state metadata (e.g. `result.info["pid"]`).
 
     The global default can be overridden by the
     :setting:`CELERY_TRACK_STARTED` setting.
@@ -296,11 +296,11 @@ Message and routing options
 
 .. attribute:: Task.exchange
 
-    Override the global default ``exchange`` for this task.
+    Override the global default `exchange` for this task.
 
 .. attribute:: Task.routing_key
 
-    Override the global default ``routing_key`` for this task.
+    Override the global default `routing_key` for this task.
 
 .. attribute:: Task.mandatory
 
@@ -392,7 +392,7 @@ For example if the client imports the module "myapp.tasks" as ".tasks", and
 the worker imports the module as "myapp.tasks", the generated names won't match
 and an :exc:`~celery.exceptions.NotRegistered` error will be raised by the worker.
 
-This is also the case if using Django and using ``project.myapp``::
+This is also the case if using Django and using `project.myapp`::
 
     INSTALLED_APPS = ("project.myapp", )
 
@@ -478,7 +478,7 @@ this is necessary to keep the original function name and docstring.
 .. note::
 
     The magic keyword arguments will be deprecated in the future,
-    replaced by the ``task.request`` attribute in 2.2, and the
+    replaced by the `task.request` attribute in 2.2, and the
     keyword arguments will be removed in 3.0.
 
 .. _task-states:
@@ -524,7 +524,7 @@ STARTED
 Task has been started.
 Not reported by default, to enable please see :ref:`task-track-started`.
 
-:metadata: ``pid`` and ``hostname`` of the worker process executing
+:metadata: `pid` and `hostname` of the worker process executing
            the task.
 
 .. state:: SUCCESS
@@ -534,7 +534,7 @@ SUCCESS
 
 Task has been successfully executed.
 
-:metadata: ``result`` contains the return value of the task.
+:metadata: `result` contains the return value of the task.
 :propagates: Yes
 :ready: Yes
 
@@ -545,7 +545,7 @@ FAILURE
 
 Task execution resulted in failure.
 
-:metadata: ``result`` contains the exception occured, and ``traceback``
+:metadata: `result` contains the exception that occurred, and `traceback`
            contains the backtrace of the stack at the point when the
            exception was raised.
 :propagates: Yes
@@ -557,8 +557,8 @@ RETRY
 
 Task is being retried.
 
-:metadata: ``result`` contains the exception that caused the retry,
-           and ``traceback`` contains the backtrace of the stack at the point
+:metadata: `result` contains the exception that caused the retry,
+           and `traceback` contains the backtrace of the stack at the point
            when the exception was raised.
 :propagates: No
 
@@ -589,9 +589,9 @@ update a task's state::
                 meta={"current": i, "total": len(filenames)})
 
 
-Here we created the state ``"PROGRESS"``, which tells any application
+Here we created the state `"PROGRESS"`, which tells any application
 aware of this state that the task is currently in progress, and also where
-it is in the process by having ``current`` and ``total`` counts as part of the
+it is in the process by having `current` and `total` counts as part of the
 state metadata.  This can then be used to create e.g. progress bars.
 
 .. _task-how-they-work:
@@ -623,7 +623,7 @@ yourself:
         <Task: celery.ping (regular)>}
 
 This is the list of tasks built into celery.  Note that we had to import
-``celery.task`` first for these to show up.  This is because the tasks will
+`celery.task` first for these to show up.  This is because the tasks will
 only be registered when the module they are defined in is imported.
 
 The default loader imports any modules listed in the

+ 6 - 6
docs/userguide/tasksets.rst

@@ -40,7 +40,7 @@ This makes it excellent as a means to pass callbacks around to tasks.
 Callbacks
 ---------
 
-Let's improve our ``add`` task so it can accept a callback that
+Let's improve our `add` task so it can accept a callback that
 takes the result as an argument::
 
     from celery.decorators import task
@@ -57,25 +57,25 @@ takes the result as an argument::
 asynchronously by :meth:`~celery.task.sets.subtask.delay`, and
 eagerly by :meth:`~celery.task.sets.subtask.apply`.
 
-The best thing is that any arguments you add to ``subtask.delay``,
+The best thing is that any arguments you add to `subtask.delay`
 will be prepended to the arguments specified by the subtask itself!
 
 If you have the subtask::
 
     >>> add.subtask(args=(10, ))
 
-``subtask.delay(result)`` becomes::
+`subtask.delay(result)` becomes::
 
     >>> add.apply_async(args=(result, 10))
 
 ...
 
-Now let's execute our new ``add`` task with a callback::
+Now let's execute our new `add` task with a callback::
 
     >>> add.delay(2, 2, callback=add.subtask((8, )))
 
-As expected this will first launch one task calculating ``2 + 2``, then 
-another task calculating ``4 + 8``.
+As expected, this will first launch one task calculating `2 + 2`, then
+another task calculating `4 + 8`.
 
 .. _sets-taskset:
 

+ 15 - 15
docs/userguide/workers.rst

@@ -17,8 +17,8 @@ You can start celeryd to run in the foreground by executing the command::
     $ celeryd --loglevel=INFO
 
 You probably want to use a daemonization tool to start
-``celeryd`` in the background.  See :ref:`daemonizing` for help
-using ``celeryd`` with popular daemonization tools.
+`celeryd` in the background.  See :ref:`daemonizing` for help
+using `celeryd` with popular daemonization tools.
 
 For a full list of available command line options see
 :mod:`~celery.bin.celeryd`, or simply do::
@@ -27,7 +27,7 @@ For a full list of available command line options see
 
 You can also start multiple workers on the same machine. If you do so
 be sure to give a unique name to each individual worker by specifying a
-hostname with the ``--hostname|-n`` argument::
+hostname with the `--hostname|-n` argument::
 
     $ celeryd --loglevel=INFO --concurrency=10 -n worker1.example.com
     $ celeryd --loglevel=INFO --concurrency=10 -n worker2.example.com
@@ -76,7 +76,7 @@ Concurrency
 ===========
 
 Multiprocessing is used to perform concurrent execution of tasks.  The number
-of worker processes can be changed using the ``--concurrency`` argument and
+of worker processes can be changed using the `--concurrency` argument and
 defaults to the number of CPUs available on the machine.
 
 More worker processes are usually better, but there's a cut-off point where
@@ -96,7 +96,7 @@ Revoking tasks works by sending a broadcast message to all the workers,
 the workers then keep a list of revoked tasks in memory.
 
 If you want tasks to remain revoked after worker restart you need to
-specify a file for these to be stored in, either by using the ``--statedb``
+specify a file for these to be stored in, either by using the `--statedb`
 argument to :mod:`~celery.bin.celeryd` or the :setting:`CELERYD_STATE_DB`
 setting.  See :setting:`CELERYD_STATE_DB` for more information.
 
@@ -112,9 +112,9 @@ waiting for some event that will never happen, you will block the worker
 from processing new tasks indefinitely.  The best way to defend against
 this scenario happening is enabling time limits.
 
-The time limit (``--time-limit``) is the maximum number of seconds a task
+The time limit (`--time-limit`) is the maximum number of seconds a task
 may run before the process executing it is terminated and replaced by a
-new process.  You can also enable a soft time limit (``--soft-time-limit``),
+new process.  You can also enable a soft time limit (`--soft-time-limit`),
 this raises an exception the task can catch to clean up before the hard
 time limit kills it:
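
.. code-block:: python

    # A sketch; `do_work` and `clean_up_in_a_hurry` are hypothetical
    # helpers standing in for the real task body.
    from celery.decorators import task
    from celery.exceptions import SoftTimeLimitExceeded

    @task()
    def mytask():
        try:
            return do_work()
        except SoftTimeLimitExceeded:
            clean_up_in_a_hurry()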
 
@@ -150,8 +150,8 @@ a worker can execute before it's replaced by a new process.
 This is useful if you have memory leaks you have no control over,
 for example from closed-source C extensions.
 
-The option can be set using the ``--maxtasksperchild`` argument
-to ``celeryd`` or using the :setting:`CELERYD_MAX_TASKS_PER_CHILD` setting.
+The option can be set using the `--maxtasksperchild` argument
+to `celeryd` or using the :setting:`CELERYD_MAX_TASKS_PER_CHILD` setting.
 
 .. _worker-remote-control:
 
@@ -201,7 +201,7 @@ Sending the :control:`rate_limit` command and keyword arguments::
     ...                                    "rate_limit": "200/m"})
 
 This will send the command asynchronously, without waiting for a reply.
-To request a reply you have to use the ``reply`` argument::
+To request a reply you have to use the `reply` argument::
 
     >>> broadcast("rate_limit", {"task_name": "myapp.mytask",
     ...                          "rate_limit": "200/m"}, reply=True)
@@ -209,7 +209,7 @@ To request a reply you have to use the ``reply`` argument::
      {'worker2.example.com': 'New rate limit set successfully'},
      {'worker3.example.com': 'New rate limit set successfully'}]
 
-Using the ``destination`` argument you can specify a list of workers
+Using the `destination` argument you can specify a list of workers
 to receive the command::
 
     >>> broadcast
@@ -230,7 +230,7 @@ using :func:`~celery.task.control.broadcast`.
 Rate limits
 -----------
 
-Example changing the rate limit for the ``myapp.mytask`` task to accept
+Example changing the rate limit for the `myapp.mytask` task to accept
 200 tasks a minute on all servers::
 
     >>> from celery.task.control import rate_limit
@@ -274,7 +274,7 @@ a custom timeout::
      {'worker2.example.com': 'pong'},
      {'worker3.example.com': 'pong'}]
 
-:func:`~celery.task.control.ping` also supports the ``destination`` argument,
+:func:`~celery.task.control.ping` also supports the `destination` argument,
 so you can specify which workers to ping::
 
     >>> ping(['worker2.example.com', 'worker3.example.com'])
@@ -289,8 +289,8 @@ so you can specify which workers to ping::
 Enable/disable events
 ---------------------
 
-You can enable/disable events by using the ``enable_events``,
-``disable_events`` commands.  This is useful to temporarily monitor
+You can enable/disable events by using the `enable_events`,
+`disable_events` commands.  This is useful to temporarily monitor
 a worker using :program:`celeryev`/:program:`celerymon`.
 
 .. code-block:: python

+ 4 - 4
examples/celery_http_gateway/README.rst

@@ -7,7 +7,7 @@ statuses/results over HTTP.
 
 Some familiarity with Django is recommended.
 
-``settings.py`` contains the celery settings, you probably want to configure
+`settings.py` contains the celery settings; you probably want to configure
 at least the broker-related settings.
 
 To run the service you have to run the following commands::
@@ -20,7 +20,7 @@ To run the service you have to run the following commands::
 The service is now running at http://localhost:8000
 
 
-You can apply tasks, with the ``/apply/<task_name>`` URL::
+You can apply tasks, with the `/apply/<task_name>` URL::
 
     $ curl http://localhost:8000/apply/celery.ping/
     {"ok": "true", "task_id": "e3a95109-afcd-4e54-a341-16c18fddf64b"}
@@ -32,9 +32,9 @@ Then you can use the resulting task-id to get the return value::
 
 
 If you don't want to expose all tasks there are a few possible
-approaches. For instance you can extend the ``apply`` view to only
+approaches. For instance you can extend the `apply` view to only
 accept a whitelist. Another possibility is to just make views for every task you want to
-expose. We made on such view for ping in ``views.ping``::
+expose. We made one such view for ping in `views.ping`::
 
     $ curl http://localhost:8000/ping/
     {"ok": "true", "task_id": "383c902c-ba07-436b-b0f3-ea09cc22107c"}

+ 6 - 6
examples/ghetto-queue/README.rst

@@ -43,14 +43,14 @@ supports `Redis`_ and relational databases via the Django ORM.
 .. _`Redis`: http://code.google.com/p/redis/
 
 
-The provided ``celeryconfig.py`` configures the settings used to drive celery.
+The provided `celeryconfig.py` configures the settings used to drive celery.
 
-Next we have to create the database tables by issuing the ``celeryinit``
+Next we have to create the database tables by issuing the `celeryinit`
 command::
 
     $ celeryinit
 
-We're using SQLite3, so this creates a database file (``celery.db`` as
+We're using SQLite3, so this creates a database file (`celery.db` as
 specified in the config file). SQLite is great, but when used in combination
 with Django it doesn't handle concurrency well. To protect your program from
 lock problems, celeryd will only spawn one worker process. With
@@ -68,7 +68,7 @@ the foreground, we have to open up another terminal to run our test program::
     $ python test.py
 
 
-The test program simply runs the ``add`` task, which is a simple task adding
+The test program simply runs the `add` task, which is a simple task adding
 numbers. You can also run the task manually if you want::
 
     >>> from tasks import add
@@ -80,7 +80,7 @@ Using Redis instead
 ===================
 
 To use redis instead, you have to configure the following directives in 
-``celeryconfig.py``::
+`celeryconfig.py`::
 
     CARROT_BACKEND = "ghettoq.taproot.Redis"
     BROKER_HOST = "localhost"
@@ -97,7 +97,7 @@ Modules
 
         Tasks are defined in this module. This module is automatically
         imported by the worker because it's listed in
-        celeryconfig's ``CELERY_IMPORTS`` directive.
+        celeryconfig's `CELERY_IMPORTS` directive.
 
     * test.py
 

+ 2 - 2
examples/httpexample/README.rst

@@ -5,8 +5,8 @@
 This example is a simple Django HTTP service exposing a single task
 multiplying two numbers.
 
-The multiply http callback task is in ``views.py``, mapped to a URL using
-``urls.py``.
+The multiply HTTP callback task is in `views.py`, mapped to a URL using
+`urls.py`.
 
 There are no models, so to start it do::
 

+ 1 - 1
examples/pythonproject/demoapp/README.rst

@@ -14,7 +14,7 @@ Modules
 
         Tasks are defined in this module. This module is automatically
         imported by the worker because it's listed in
-        celeryconfig's ``CELERY_IMPORTS`` directive.
+        celeryconfig's `CELERY_IMPORTS` directive.
 
     * test.py
 

Some files were not shown because too many files changed in this diff