
Loads of documentation improvements

Ask Solem, 14 years ago (commit f6199fb919)

+ 25 - 25
Changelog

@@ -45,7 +45,7 @@ Important Notes
 
 * No longer depends on SQLAlchemy, this needs to be installed separately
   if the database backend is used (does not apply to users of
-  ``django-celery``).
+  `django-celery`_).
 
 .. _v210-news:
 
@@ -94,18 +94,18 @@ News
     lets you perform some actions, like revoking and rate limiting tasks,
     and shutting down worker nodes.
 
-    There's also a Debian init.d script for ``celeryev`` available,
+    There's also a Debian init.d script for :mod:`~celery.bin.celeryev` available,
     see :doc:`cookbook/daemonizing` for more information.
 
     New command-line arguments to celeryev:
 
-        * ``-c|--camera``: Snapshot camera class to use.
-        * ``--logfile|-f``: Logfile
-        * ``--loglevel|-l``: Loglevel
-        * ``--maxrate|-r``: Shutter rate limit.
-        * ``--freq|-F``: Shutter frequency
+        * :option:`-c|--camera`: Snapshot camera class to use.
+        * :option:`--logfile|-f`: Logfile
+        * :option:`--loglevel|-l`: Loglevel
+        * :option:`--maxrate|-r`: Shutter rate limit.
+        * :option:`--freq|-F`: Shutter frequency
 
-    The ``--camera`` argument is the name of a class used to take
+    The :option:`--camera` argument is the name of a class used to take
     snapshots with. It must support the interface defined by
     :class:`celery.events.snapshot.Polaroid`.
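
    As an illustration, a minimal camera might look like this (a sketch,
    assuming ``Polaroid`` subclasses override an ``on_shutter`` hook that
    receives the current cluster state; ``DumpCam`` is a hypothetical name):

    .. code-block:: python

        from celery.events.snapshot import Polaroid

        class DumpCam(Polaroid):

            def on_shutter(self, state):
                # Called on every shutter tick; ``state`` holds the
                # workers and tasks seen so far.
                print("Workers: %s" % (state.workers, ))
                print("Tasks: %s" % (state.tasks, ))

    It could then be selected with ``celeryev -c myapp.DumpCam``.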
 
@@ -122,7 +122,7 @@ News
     anything new.
 
     The rate limit is off by default, which means it will take a snapshot
-    for every ``--frequency`` seconds.
+    every :option:`--frequency` seconds.
 
     The django-celery camera also automatically deletes old events.
     It deletes successful tasks after 1 day, failed tasks after 3 days,
@@ -201,8 +201,8 @@ News
         signals.setup_logging.connect(setup_logging)
 
     If there are no receivers for this signal, the logging subsystem
-    will be configured using the ``--loglevel/--logfile argument``,
-    this will be used for *all defined loggers*.
+    will be configured using the :option:`--loglevel`/:option:`--logfile`
+    argument, this will be used for *all defined loggers*.
 
     Remember that celeryd also redirects stdout and stderr 
     to the celery logger, if you want to manually configure logging
@@ -219,7 +219,7 @@ News
             stdouts = logging.getLogger("mystdoutslogger")
             log.redirect_stdouts_to_logger(stdouts, loglevel=logging.WARNING)
 
-* celeryd: Added command-line option ``-I|--include``:
+* celeryd: Added command-line option :option:`-I`/:option:`--include`:
   Additional (task) modules to be imported
 
 * :func:`celery.messaging.establish_connection`: Ability to override defaults
@@ -269,10 +269,11 @@ News
 * The crontab schedule no longer wakes up every second, but implements
   ``remaining_estimate``.
 
-* celeryd:  Store FAILURE result if the ``WorkerLostError`` exception occurs
-  (worker process disappeared).
+* celeryd: Store :state:`FAILURE` result if the
+  :exc:`~celery.exceptions.WorkerLostError` exception occurs (worker process
+  disappeared).
 
-* celeryd: Store FAILURE result if one of the ``*TimeLimitExceeded``
+* celeryd: Store :state:`FAILURE` result if one of the ``*TimeLimitExceeded``
   exceptions occurs.
 
 * Refactored the periodic task responsible for cleaning up results.
@@ -300,9 +301,8 @@ News
 
     See issue #184.
 
-*    Added ``Task.update_state(task_id, state, meta)``.
-
-    as a shortcut to ``task.backend.store_result(task_id, meta, state)``.
+* Added ``Task.update_state(task_id, state, meta)``
+  as a shortcut to ``task.backend.store_result(task_id, meta, state)``.
 
     The backend interface is "private" and the terminology outdated,
     so better to move this to :class:`~celery.task.base.Task` so it can be
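
  As a usage sketch, a task reporting a custom progress state might call it
  like this (``PROGRESS`` and ``render_pages`` are hypothetical names; the
  ``task_id`` magic keyword argument is assumed to be enabled):

  .. code-block:: python

      from celery.decorators import task

      @task
      def render_pages(pages, **kwargs):
          rendered = []
          for i, page in enumerate(pages):
              # Record a custom "PROGRESS" state with some metadata.
              render_pages.update_state(kwargs["task_id"], "PROGRESS",
                                        {"current": i, "total": len(pages)})
              rendered.append(page.upper())  # stand-in for real work
          return rendered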
@@ -420,7 +420,7 @@ Fixes
 * celeryd: Events are now buffered if the connection is down,
   then sent when the connection is re-established.
 
-* No longer depends on the ``mailer`` package.
+* No longer depends on the :mod:`mailer` package.
 
     This package had a namespace collision with ``django-mailer``,
     so its functionality was replaced.
@@ -461,7 +461,7 @@ Fixes
 * :func:`~celery.execute.apply`: Make sure ``kwargs["task_id"]`` is
   always set.
 
-* ``AsyncResult.traceback``: Now returns ``None``, instead of raising
+* ``AsyncResult.traceback``: Now returns :const:`None`, instead of raising
   :exc:`KeyError` if traceback is missing.
 
 * :class:`~celery.task.control.inspect`: Replies did not work correctly
@@ -1250,7 +1250,7 @@ News
     Defines the maximum number of tasks a pool worker can process before
     the process is terminated and replaced by a new one.
 
-* Revoked tasks now marked with state ``REVOKED``, and ``result.get()``
+* Revoked tasks now marked with state :state:`REVOKED`, and ``result.get()``
   will now raise :exc:`~celery.exceptions.TaskRevokedError`.
 
 * :func:`celery.task.control.ping` now works as expected.
@@ -2426,7 +2426,7 @@ Changes
  (There are probably still problems running on 2.4, but it will eventually
  be supported)
 
-* Prepare exception to pickle when saving RETRY status for all backends.
+* Prepare exception to pickle when saving :state:`RETRY` status for all backends.
 
 * SQLite no concurrency limit should only be effective if the db backend
   is used.
@@ -2568,10 +2568,10 @@ News
 
 * ``views.apply`` now correctly sets mimetype to "application/json"
 
-* ``views.task_status`` now returns exception if status is RETRY
+* ``views.task_status`` now returns exception if state is :state:`RETRY`
 
-* ``views.task_status`` now returns traceback if status is "FAILURE"
-    or "RETRY"
+* ``views.task_status`` now returns traceback if state is :state:`FAILURE`
+  or :state:`RETRY`
 
 * Documented default task arguments.
 

+ 31 - 25
FAQ

@@ -107,10 +107,10 @@ Is Celery multilingual?
 
 **Answer:** Yes.
 
-``celeryd`` is an implementation of Celery in python. If the language has
-an AMQP client, there shouldn't be much work to create a worker in your
-language.  A Celery worker is just a program connecting to the broker to
-process messages.
+:mod:`~celery.bin.celeryd` is an implementation of Celery in Python. If the
+language has an AMQP client, there shouldn't be much work to create a worker
+in your language.  A Celery worker is just a program connecting to the broker
+to process messages.
 
 Also, there's another way to be language independent, and that is to use REST
 tasks: instead of your tasks being functions, they're URLs. With this
@@ -132,7 +132,7 @@ MySQL is throwing deadlock errors, what can I do?
 
 **Answer:** MySQL has default isolation level set to ``REPEATABLE-READ``,
 if you don't really need that, set it to ``READ-COMMITTED``.
-You can do that by adding the following to your ``my.cnf``::
+You can do that by adding the following to your :file:`my.cnf`::
 
     [mysqld]
     transaction-isolation = READ-COMMITTED
@@ -160,7 +160,7 @@ Why is Task.delay/apply\*/celeryd just hanging?
 **Answer:** There is a bug in some AMQP clients that will make it hang if
 it's not able to authenticate the current user, the password doesn't match or
 the user does not have access to the virtual host specified. Be sure to check
-your broker logs (for RabbitMQ that is ``/var/log/rabbitmq/rabbit.log`` on
+your broker logs (for RabbitMQ that is :file:`/var/log/rabbitmq/rabbit.log` on
 most systems), it usually contains a message describing the reason.
 
 .. _faq-celeryd-on-freebsd:
@@ -247,16 +247,16 @@ Why won't my Periodic Task run?
 How do I discard all waiting tasks?
 ------------------------------------
 
-**Answer:** Use ``celery.task.discard_all()``, like this:
+**Answer:** Use :func:`~celery.task.control.discard_all`, like this:
 
-    >>> from celery.task import discard_all
+    >>> from celery.task.control import discard_all
     >>> discard_all()
     1753
 
-The number ``1753`` is the number of messages deleted.
+The number 1753 is the number of messages deleted.
 
-You can also start celeryd with the ``--discard`` argument which will
-accomplish the same thing.
+You can also start :mod:`~celery.bin.celeryd` with the
+:option:`--discard` argument which will accomplish the same thing.
 
 .. _faq-messages-left-after-purge:
 
 server). When that connection is closed (e.g. because the worker was stopped)
 the tasks will be re-sent by the broker to the next available worker (or the
 same worker when it has been restarted), so to properly purge the queue of
 waiting tasks you have to stop all the workers, and then discard the tasks
-using ``discard_all``.
+using :func:`~celery.task.control.discard_all`.
 
 .. _faq-results:
 
@@ -289,7 +289,7 @@ How do I get the result of a task if I have the ID that points there?
     >>> result = MyTask.AsyncResult(task_id)
     >>> result.get()
 
-This will give you a :class:`celery.result.BaseAsyncResult` instance
+This will give you a :class:`~celery.result.BaseAsyncResult` instance
 using the task's current result backend.
 
 If you need to specify a custom result backend you should use
@@ -327,8 +327,8 @@ important that you are aware of them.
 
 * Events.
 
-Running ``celeryd`` with the ``-E``/``--events`` option will send messages
-for events happening inside of the worker.
+Running :mod:`~celery.bin.celeryd` with the :option:`-E`/:option:`--events`
+option will send messages for events happening inside of the worker.
 
 Events should only be enabled if you have an active monitor consuming them,
 or if you purge the event queue periodically.
@@ -394,7 +394,8 @@ you need to set ``exchange`` name to the same as the queue name. This is
 a minor inconvenience since carrot needs to maintain the same interface
 for both AMQP and STOMP.
 
-Use the following settings in your ``celeryconfig.py``/django ``settings.py``:
+Use the following settings in your :file:`celeryconfig.py`/
+Django :file:`settings.py`:
 
 .. code-block:: python
 
@@ -540,13 +541,13 @@ or if you only have the task id::
 Why aren't my remote control commands received by all workers?
 --------------------------------------------------------------
 
-**Answer**: To receive broadcast remote control commands, every ``celeryd``
+**Answer**: To receive broadcast remote control commands, every worker node
 uses its hostname to create a unique queue name to listen to,
 so if you have more than one worker with the same hostname, the
 control commands will be received in round-robin between them.
 
 To work around this you can explicitly set the hostname for every worker
-using the ``--hostname`` argument to ``celeryd``::
+using the :option:`--hostname` argument to :mod:`~celery.bin.celeryd`::
 
     $ celeryd --hostname=$(hostname).1
     $ celeryd --hostname=$(hostname).2
@@ -667,14 +668,18 @@ Or to schedule a periodic task at a specific time, use the
 How do I shut down ``celeryd`` safely?
 --------------------------------------
 
-**Answer**: Use the ``TERM`` signal, and the worker will finish all currently
+**Answer**: Use the :sig:`TERM` signal, and the worker will finish all currently
 executing jobs and shut down as soon as possible. No tasks should be lost.
 
-You should never stop ``celeryd`` with the ``KILL`` signal (``-9``),
-unless you've tried ``TERM`` a few times and waited a few minutes to let it
-get a chance to shut down. As if you do tasks may be terminated mid-execution,
-and they will not be re-run unless you have the ``acks_late`` option set.
-(``Task.acks_late`` / :setting:`CELERY_ACKS_LATE`).
+You should never stop :mod:`~celery.bin.celeryd` with the :sig:`KILL` signal
+(:option:`-9`), unless you've tried :sig:`TERM` a few times and waited a few
+minutes to give it a chance to shut down.  If you do, tasks may be
+terminated mid-execution, and they will not be re-run unless you have the
+``acks_late`` option set (``Task.acks_late`` / :setting:`CELERY_ACKS_LATE`).
+
+.. seealso::
+
+    :ref:`worker-stopping`
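
For reference, ``acks_late`` can be enabled per task like this (a minimal
sketch; ``process_order`` is a hypothetical task):

.. code-block:: python

    from celery.decorators import task

    @task(acks_late=True)
    def process_order(order_id):
        # The message is acknowledged only *after* the task returns,
        # so a killed worker leaves it to be redelivered.
        return order_id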
 
 .. _faq-daemonizing:
 
@@ -713,7 +718,8 @@ services instead.
 ``django-celery`` can't find settings?
 --------------------------------------
 
-**Answer**: You need to specify the ``--settings`` argument to ``manage.py``::
+**Answer**: You need to specify the :option:`--settings` argument to
+:program:`manage.py`::
 
     $ python manage.py celeryd start --settings=settings
 

+ 37 - 5
README.rst

@@ -13,13 +13,15 @@
 
 --
 
+.. _celery-synopsis:
+
 Celery is an open source asynchronous task queue/job queue based on
 distributed message passing. It is focused on real-time operation,
 but supports scheduling as well.
 
-The execution units, called tasks, are executed concurrently on a single or
-more worker servers. Tasks can execute asynchronously (in the background) or synchronously
-(wait until ready).
+The execution units, called tasks, are executed concurrently on one or
+more worker nodes. Tasks can execute asynchronously (in the background) or
+synchronously (wait until ready).
 
 Celery is already used in production to process millions of tasks a day.
 
@@ -29,6 +31,9 @@ language. It can also `operate with other languages using webhooks`_.
 The recommended message broker is `RabbitMQ`_, but support for `Redis`_ and
 databases (`SQLAlchemy`_) is also available.
 
+Celery can be easily used with Django and Pylons using
+`django-celery`_ and `celery-pylons`_.
+
 You may also be pleased to know that full Django integration exists,
 delivered by the `django-celery`_ package.
 
@@ -36,12 +41,15 @@ delivered by the `django-celery`_ package.
 .. _`Redis`: http://code.google.com/p/redis/
 .. _`SQLAlchemy`: http://www.sqlalchemy.org/
 .. _`django-celery`: http://pypi.python.org/pypi/django-celery
+.. _`celery-pylons`: http://bitbucket.org/ianschenck/celery-pylons
 .. _`operate with other languages using webhooks`:
     http://ask.github.com/celery/userguide/remote-tasks.html
 
 .. contents::
     :local:
 
+.. _celery-overview:
+
 Overview
 ========
 
@@ -56,6 +64,8 @@ more machines depending on the workload.
 The result of the task can be stored for later retrieval (called its
 "tombstone").
 
+.. _celery-example:
+
 Example
 =======
 
@@ -77,6 +87,8 @@ You can execute the task in the background, or wait for it to finish::
 
 Simple!
 
+.. _celery-features:
+
 Features
 ========
 
@@ -166,6 +178,8 @@ Features
 .. _`MongoDB`: http://www.mongodb.org/
 .. _`Tokyo Tyrant`: http://tokyocabinet.sourceforge.net/
 
+.. _celery-documentation:
+
 Documentation
 =============
 
@@ -174,8 +188,10 @@ is hosted at Github.
 
 .. _`latest documentation`: http://ask.github.com/celery/
 
+.. _celery-installation:
+
 Installation
-=============
+============
 
 You can install ``celery`` either via the Python Package Index (PyPI)
 or from source.
@@ -188,6 +204,8 @@ To install using ``easy_install``,::
 
     $ easy_install celery
 
+.. _celery-installing-from-source:
+
 Downloading and installing from source
 --------------------------------------
 
@@ -201,17 +219,22 @@ You can install it by doing the following,::
     $ python setup.py build
     # python setup.py install # as root
 
+.. _celery-installing-from-git:
+
 Using the development version
-------------------------------
+-----------------------------
 
 You can clone the repository by doing the following::
 
     $ git clone git://github.com/ask/celery.git
 
+.. _getting-help:
 
 Getting Help
 ============
 
+.. _mailing-list:
+
 Mailing list
 ------------
 
@@ -220,6 +243,8 @@ please join the `celery-users`_ mailing list.
 
 .. _`celery-users`: http://groups.google.com/group/celery-users/
 
+.. _irc-channel:
+
 IRC
 ---
 
@@ -229,6 +254,7 @@ network.
 .. _`#celery`: irc://irc.freenode.net/celery
 .. _`Freenode`: http://freenode.net
 
+.. _bug-tracker:
 
 Bug tracker
 ===========
@@ -236,11 +262,15 @@ Bug tracker
 If you have any suggestions, bug reports or annoyances please report them
 to our issue tracker at http://github.com/ask/celery/issues/
 
+.. _wiki:
+
 Wiki
 ====
 
 http://wiki.github.com/ask/celery/
 
+.. _contributing:
+
 Contributing
 ============
 
@@ -250,6 +280,8 @@ You are highly encouraged to participate in the development
 of ``celery``. If you don't like Github (for some reason) you're welcome
 to send regular patches.
 
+.. _license:
+
 License
 =======
 

+ 9 - 4
celery/contrib/abortable.py

@@ -82,12 +82,17 @@ from celery.task.base import Task
 from celery.result import AsyncResult
 
 
-""" Task States
+"""
+Task States
+-----------
+
+.. state:: ABORTED
 
-.. data:: ABORTED
+ABORTED
+~~~~~~~
 
-    Task is aborted (typically by the producer) and should be
-    aborted as soon as possible.
+Task is aborted (typically by the producer) and should be
+aborted as soon as possible.
 
 """
 ABORTED = "ABORTED"
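
For context, abortable tasks are used roughly like this (a condensed
sketch of the pattern shown in the module's own docstring):

.. code-block:: python

    from celery.contrib.abortable import AbortableTask

    class LongRunningTask(AbortableTask):

        def run(self, items, **kwargs):
            done = []
            for item in items:
                if self.is_aborted(**kwargs):
                    # The producer requested abortion: stop cleanly.
                    return done
                done.append(item)
            return done

The producer side gets an ``AbortableAsyncResult`` whose ``abort()`` method
moves the task into the ABORTED state.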

+ 55 - 22
celery/states.py

@@ -5,52 +5,85 @@
 States
 ------
 
-.. data:: PENDING
+.. state:: PENDING
 
-    Task is waiting for execution or unknown.
+PENDING
+~~~~~~~
 
-.. data:: STARTED
+Task is waiting for execution or unknown.
 
-    Task has been started.
+.. state:: STARTED
 
-.. data:: SUCCESS
+STARTED
+~~~~~~~
 
-    Task has been successfully executed.
+Task has been started.
 
-.. data:: FAILURE
+.. state:: SUCCESS
 
-    Task execution resulted in failure.
+SUCCESS
+~~~~~~~
 
-.. data:: RETRY
+Task has been successfully executed.
 
-    Task is being retried.
+.. state:: FAILURE
 
-.. data:: REVOKED
+FAILURE
+~~~~~~~
 
-    Task has been revoked.
+Task execution resulted in failure.
+
+.. state:: RETRY
+
+RETRY
+~~~~~
+
+Task is being retried.
+
+.. state:: REVOKED
+
+REVOKED
+~~~~~~~
+
+Task has been revoked.
 
 Sets
 ----
 
-.. data:: READY_STATES
+.. state:: READY_STATES
+
+READY_STATES
+~~~~~~~~~~~~
+
+Set of states meaning the task result is ready (has been executed).
+
+.. state:: UNREADY_STATES
+
+UNREADY_STATES
+~~~~~~~~~~~~~~
+
+Set of states meaning the task result is not ready (has not been executed).
 
-    Set of states meaning the task result is ready (has been executed).
+.. state:: EXCEPTION_STATES
 
-.. data:: UNREADY_STATES
+EXCEPTION_STATES
+~~~~~~~~~~~~~~~~
 
-    Set of states meaning the task result is not ready (has not been executed).
+Set of states meaning the task returned an exception.
 
-.. data:: EXCEPTION_STATES
+.. state:: PROPAGATE_STATES
 
-    Set of states meaning the task returned an exception.
+PROPAGATE_STATES
+~~~~~~~~~~~~~~~~
 
-.. data:: PROPAGATE_STATES
+Set of exception states that should propagate exceptions to the user.
 
-    Set of exception states that should propagate exceptions to the user.
+.. state:: ALL_STATES
 
-.. data:: ALL_STATES
+ALL_STATES
+~~~~~~~~~~
 
-    Set of all possible states.
+Set of all possible states.
 
 """
 

+ 15 - 0
docs/_ext/celerydocs.py

@@ -4,3 +4,18 @@ def setup(app):
         rolename      = "setting",
         indextemplate = "pair: %s; setting",
     )
+    app.add_crossref_type(
+        directivename = "sig",
+        rolename      = "sig",
+        indextemplate = "pair: %s; sig",
+    )
+    app.add_crossref_type(
+        directivename = "state",
+        rolename      = "state",
+        indextemplate = "pair: %s; state",
+    )
+    app.add_crossref_type(
+        directivename = "control",
+        rolename      = "control",
+        indextemplate = "pair: %s; control",
+    )

+ 200 - 80
docs/configuration.rst

@@ -55,7 +55,7 @@ Configuration Directives
 Concurrency settings
 --------------------
 
-.. _CELERYD_CONCURRENCY:
+.. setting:: CELERYD_CONCURRENCY
 
 CELERYD_CONCURRENCY
 ~~~~~~~~~~~~~~~~~~~
@@ -64,7 +64,7 @@ The number of concurrent worker processes, executing tasks simultaneously.
 
 Defaults to the number of CPUs/cores available.
 
-.. _CELERYD_PREFETCH_MULTIPLIER:
+.. setting:: CELERYD_PREFETCH_MULTIPLIER
 
 CELERYD_PREFETCH_MULTIPLIER
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -82,7 +82,7 @@ to the workers.
 Task result backend settings
 ----------------------------
 
-.. _CELERY_RESULT_BACKEND:
+.. setting:: CELERY_RESULT_BACKEND
 
 CELERY_RESULT_BACKEND
 ~~~~~~~~~~~~~~~~~~~~~
@@ -130,7 +130,7 @@ Can be one of the following:
 Database backend settings
 -------------------------
 
-.. _CELERY_RESULT_DBURI:
+.. setting:: CELERY_RESULT_DBURI
 
 CELERY_RESULT_DBURI
 ~~~~~~~~~~~~~~~~~~~
@@ -156,7 +156,7 @@ To use this backend you need to configure it with an
 See `Connection String`_ for more information about connection
 strings.
 
-.. _CELERY_RESULT_ENGINE_OPTIONS:
+.. setting:: CELERY_RESULT_ENGINE_OPTIONS
 
 CELERY_RESULT_ENGINE_OPTIONS
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -186,7 +186,7 @@ Example configuration
 AMQP backend settings
 ---------------------
 
-.. _CELERY_AMQP_TASK_RESULT_EXPIRES:
+.. setting:: CELERY_AMQP_TASK_RESULT_EXPIRES
 
 CELERY_AMQP_TASK_RESULT_EXPIRES
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -197,14 +197,14 @@ The time in seconds of which the task result queues should expire.
 
     AMQP result expiration requires RabbitMQ versions 2.1.0 and higher.
 
-.. _CELERY_RESULT_EXCHANGE:
+.. setting:: CELERY_RESULT_EXCHANGE
 
 CELERY_RESULT_EXCHANGE
 ~~~~~~~~~~~~~~~~~~~~~~
 
 Name of the exchange to publish results in.  Default is ``"celeryresults"``.
 
-.. _CELERY_RESULT_EXCHANGE_TYPE:
+.. setting:: CELERY_RESULT_EXCHANGE_TYPE
 
 CELERY_RESULT_EXCHANGE_TYPE
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -212,14 +212,15 @@ CELERY_RESULT_EXCHANGE_TYPE
 The exchange type of the result exchange.  Default is to use a ``direct``
 exchange.
 
-.. _CELERY_RESULT_SERIALIZER:
+.. setting:: CELERY_RESULT_SERIALIZER
 
 CELERY_RESULT_SERIALIZER
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-Result message serialization format.  Default is ``"pickle"``.
+Result message serialization format.  Default is ``"pickle"``. See
+:ref:`executing-serializers`.
 
-.. _CELERY_RESULT_PERSISTENT:
+.. setting:: CELERY_RESULT_PERSISTENT
 
 CELERY_RESULT_PERSISTENT
 ~~~~~~~~~~~~~~~~~~~~~~~~
@@ -246,7 +247,7 @@ Cache backend settings
     The cache backend supports the `pylibmc`_ and `python-memcached`
     libraries.  The latter is used only if `pylibmc`_ is not installed.
 
-.. _CELERY_CACHE_BACKEND:
+.. setting:: CELERY_CACHE_BACKEND
 
 CELERY_CACHE_BACKEND
 ~~~~~~~~~~~~~~~~~~~~
@@ -264,7 +265,7 @@ Using multiple memcached servers:
     CELERY_RESULT_BACKEND = "cache"
     CELERY_CACHE_BACKEND = 'memcached://172.19.26.240:11211;172.19.26.242:11211/'
 
-.. _CELERY_CACHE_BACKEND_OPTIONS:
+.. setting:: CELERY_CACHE_BACKEND_OPTIONS
 
 CELERY_CACHE_BACKEND_OPTIONS
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -291,14 +292,14 @@ Tokyo Tyrant backend settings
 
 This backend requires the following configuration directives to be set:
 
-.. _TT_HOST:
+.. setting:: TT_HOST
 
 TT_HOST
 ~~~~~~~
 
 Hostname of the Tokyo Tyrant server.
 
-.. _TT_PORT:
+.. setting:: TT_PORT
 
 TT_PORT
 ~~~~~~~
@@ -331,28 +332,28 @@ Redis backend settings
 
 This backend requires the following configuration directives to be set.
 
-.. _REDIS_HOST:
+.. setting:: REDIS_HOST
 
 REDIS_HOST
 ~~~~~~~~~~
 
 Hostname of the Redis database server, e.g. ``"localhost"``.
 
-.. _REDIS_PORT:
+.. setting:: REDIS_PORT
 
 REDIS_PORT
 ~~~~~~~~~~
 
 Port of the Redis database server, e.g. ``6379``.
 
-.. _REDIS_DB:
+.. setting:: REDIS_DB
 
 REDIS_DB
 ~~~~~~~~
 
 Database number to use. Default is 0.
 
-.. _REDIS_PASSWORD:
+.. setting:: REDIS_PASSWORD
 
 REDIS_PASSWORD
 ~~~~~~~~~~~~~~
@@ -380,7 +381,7 @@ MongoDB backend settings
     The MongoDB backend requires the :mod:`pymongo` library:
     http://github.com/mongodb/mongo-python-driver/tree/master
 
-.. _CELERY_MONGODB_BACKEND_SETTINGS:
+.. setting:: CELERY_MONGODB_BACKEND_SETTINGS
 
 CELERY_MONGODB_BACKEND_SETTINGS
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -428,7 +429,7 @@ Message Routing
 
 .. _conf-messaging-routing:
 
-.. _CELERY_QUEUES:
+.. setting:: CELERY_QUEUES
 
 CELERY_QUEUES
 ~~~~~~~~~~~~~
@@ -441,7 +442,25 @@ exchange type ``direct``.
 
 You don't have to care about this unless you want custom routing facilities.
 
-.. _CELERY_DEFAULT_QUEUE:
+.. setting:: CELERY_ROUTES
+
+CELERY_ROUTES
+~~~~~~~~~~~~~
+
+A list of routers, or a single router used to route tasks to queues.
+When deciding the final destination of a task the routers are consulted
+in order.  See :ref:`routers` for more information.
+
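For example, the dict shorthand can route a single task to a dedicated
queue (``feed.tasks.import_feed`` and the ``feeds`` queue are hypothetical
names):

.. code-block:: python

    CELERY_ROUTES = {"feed.tasks.import_feed": {"queue": "feeds"}}
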
+.. setting:: CELERY_CREATE_MISSING_QUEUES
+
+CELERY_CREATE_MISSING_QUEUES
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If enabled (default), any queues specified that are not defined in
+:setting:`CELERY_QUEUES` will be automatically created. See
+:ref:`routing-automatic`.
+
+.. setting:: CELERY_DEFAULT_QUEUE
 
 CELERY_DEFAULT_QUEUE
 ~~~~~~~~~~~~~~~~~~~~
@@ -449,7 +468,11 @@ CELERY_DEFAULT_QUEUE
 The queue used by default, if no custom queue is specified.  This queue must
 be listed in :setting:`CELERY_QUEUES`.  The default is: ``celery``.
 
-.. _CELERY_DEFAULT_EXCHANGE:
+.. seealso::
+
+    :ref:`routing-changing-default-queue`
+
+.. setting:: CELERY_DEFAULT_EXCHANGE
 
 CELERY_DEFAULT_EXCHANGE
 ~~~~~~~~~~~~~~~~~~~~~~~
@@ -457,7 +480,7 @@ CELERY_DEFAULT_EXCHANGE
 Name of the default exchange to use when no custom exchange is
 specified.  The default is: ``celery``.
 
-.. _CELERY_DEFAULT_EXCHANGE_TYPE:
+.. setting:: CELERY_DEFAULT_EXCHANGE_TYPE
 
 CELERY_DEFAULT_EXCHANGE_TYPE
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -465,7 +488,7 @@ CELERY_DEFAULT_EXCHANGE_TYPE
 Default exchange type used when no custom exchange is specified.
 The default is: ``direct``.
 
-.. _CELERY_DEFAULT_ROUTING_KEY:
+.. setting:: CELERY_DEFAULT_ROUTING_KEY
 
 CELERY_DEFAULT_ROUTING_KEY
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -473,7 +496,7 @@ CELERY_DEFAULT_ROUTING_KEY
 The default routing key used when sending tasks.
 The default is: ``celery``.
 
-.. _CELERY_DEFAULT_DELIVERY_MODE:
+.. setting:: CELERY_DEFAULT_DELIVERY_MODE
 
 CELERY_DEFAULT_DELIVERY_MODE
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -486,18 +509,69 @@ persistent messages.
 Broker Settings
 ---------------
 
-.. _CELERY_BROKER_CONNECTION_TIMEOUT:
+.. setting:: BROKER_BACKEND
+
+BROKER_BACKEND
+~~~~~~~~~~~~~~
+
+The messaging backend to use. Default is ``"amqplib"``.
+
+.. setting:: BROKER_HOST
+
+BROKER_HOST
+~~~~~~~~~~~
 
-CELERY_BROKER_CONNECTION_TIMEOUT
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Hostname of the broker.
+
+.. setting:: BROKER_PORT
+
+BROKER_PORT
+~~~~~~~~~~~
+
+Custom port of the broker.  Default is to use the default port for the
+selected backend.
+
+.. setting:: BROKER_USER
+
+BROKER_USER
+~~~~~~~~~~~
+
+Username to connect as.
+
+.. setting:: BROKER_PASSWORD
+
+BROKER_PASSWORD
+~~~~~~~~~~~~~~~
+
+Password to connect with.
+
+.. setting:: BROKER_VHOST
+
+BROKER_VHOST
+~~~~~~~~~~~~
+
+Virtual host.  Default is ``"/"``.
+
+.. setting:: BROKER_USE_SSL
+
+BROKER_USE_SSL
+~~~~~~~~~~~~~~
+
+Use SSL to connect to the broker.  Off by default.  This may not be supported
+by all transports.
+
+.. setting:: BROKER_CONNECTION_TIMEOUT
+
+BROKER_CONNECTION_TIMEOUT
+~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The default timeout in seconds before we give up establishing a connection
 to the AMQP server.  Default is 4 seconds.
 
-.. _CELERY_BROKER_CONNECTION_RETRY:
+.. setting:: BROKER_CONNECTION_RETRY
 
-CELERY_BROKER_CONNECTION_RETRY
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+BROKER_CONNECTION_RETRY
+~~~~~~~~~~~~~~~~~~~~~~~
 
 Automatically try to re-establish the connection to the AMQP broker if lost.
 
@@ -507,7 +581,7 @@ exceeded.
 
 This behavior is on by default.
 
-.. _CELERY_BROKER_CONNECTION_MAX_RETRIES:
+.. setting:: CELERY_BROKER_CONNECTION_MAX_RETRIES
 
 CELERY_BROKER_CONNECTION_MAX_RETRIES
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -524,7 +598,7 @@ Default is 100 retries.
 Task execution settings
 -----------------------
 
-.. _CELERY_ALWAYS_EAGER:
+.. setting:: CELERY_ALWAYS_EAGER
 
 CELERY_ALWAYS_EAGER
 ~~~~~~~~~~~~~~~~~~~
@@ -538,7 +612,7 @@ been evaluated.
 Tasks will never be sent to the queue, but executed locally
 instead.
 
-.. _CELERY_EAGER_PROPAGATES_EXCEPTIONS:
+.. setting:: CELERY_EAGER_PROPAGATES_EXCEPTIONS
 
 CELERY_EAGER_PROPAGATES_EXCEPTIONS
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -548,7 +622,7 @@ If this is :const:`True`, eagerly executed tasks (using ``.apply``, or with
 
 It's the same as always running ``apply`` with ``throw=True``.
 
-.. _CELERY_IGNORE_RESULT:
+.. setting:: CELERY_IGNORE_RESULT
 
 CELERY_IGNORE_RESULT
 ~~~~~~~~~~~~~~~~~~~~
@@ -557,7 +631,7 @@ Whether to store the task return values or not (tombstones).
 If you still want to store errors, just not successful return values,
 you can set :setting:`CELERY_STORE_ERRORS_EVEN_IF_IGNORED`.
 
-.. _CELERY_TASK_RESULT_EXPIRES:
+.. setting:: CELERY_TASK_RESULT_EXPIRES
 
 CELERY_TASK_RESULT_EXPIRES
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -578,7 +652,7 @@ A built-in periodic task will delete the results after this time
     running for the results to be expired.
 
 
-.. _CELERY_MAX_CACHED_RESULTS:
+.. setting:: CELERY_MAX_CACHED_RESULTS
 
 CELERY_MAX_CACHED_RESULTS
 ~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -586,7 +660,7 @@ CELERY_MAX_CACHED_RESULTS
 Total number of results to store before results are evicted from the
 result cache.  The default is 5000.
 
-.. _CELERY_TRACK_STARTED:
+.. setting:: CELERY_TRACK_STARTED
 
 CELERY_TRACK_STARTED
 ~~~~~~~~~~~~~~~~~~~~
@@ -598,7 +672,7 @@ are either pending, finished, or waiting to be retried.  Having a "started"
 state can be useful for when there are long running tasks and there is a
 need to report which task is currently running.
 
-.. _CELERY_TASK_SERIALIZER:
+.. setting:: CELERY_TASK_SERIALIZER
 
 CELERY_TASK_SERIALIZER
 ~~~~~~~~~~~~~~~~~~~~~~
@@ -607,7 +681,11 @@ A string identifying the default serialization method to use.  Can be
 ``pickle`` (default), ``json``, ``yaml``, or any custom serialization
 methods that have been registered with :mod:`carrot.serialization.registry`.
 
-.. _CELERY_DEFAULT_RATE_LIMIT:
+.. seealso::
+
+    :ref:`executing-serializers`.
+
+.. setting:: CELERY_DEFAULT_RATE_LIMIT
 
 CELERY_DEFAULT_RATE_LIMIT
 ~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -617,14 +695,14 @@ The global default rate limit for tasks.
 This value is used for tasks that do not have a custom rate limit set.
 The default is no rate limit.
 
-.. _CELERY_DISABLE_RATE_LIMITS:
+.. setting:: CELERY_DISABLE_RATE_LIMITS
 
 CELERY_DISABLE_RATE_LIMITS
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Disable all rate limits, even if tasks have explicit rate limits set.
 
-.. _CELERY_ACKS_LATE:
+.. setting:: CELERY_ACKS_LATE
 
 CELERY_ACKS_LATE
 ~~~~~~~~~~~~~~~~
@@ -641,7 +719,7 @@ has been executed, not *just before*, which is the default behavior.
 Worker: celeryd
 ---------------
 
-.. _CELERY_IMPORTS:
+.. setting:: CELERY_IMPORTS
 
 CELERY_IMPORTS
 ~~~~~~~~~~~~~~
@@ -651,7 +729,7 @@ A sequence of modules to import when the celery daemon starts.
 This is used to specify the task modules to import, but also
 to import signal handlers and additional remote control commands, etc.
 
-.. _CELERYD_MAX_TASKS_PER_CHILD:
+.. setting:: CELERYD_MAX_TASKS_PER_CHILD
 
 CELERYD_MAX_TASKS_PER_CHILD
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -659,7 +737,7 @@ CELERYD_MAX_TASKS_PER_CHILD
 Maximum number of tasks a pool worker process can execute before
 it's replaced with a new one.  Default is no limit.
 
-.. _CELERYD_TASK_TIME_LIMIT:
+.. setting:: CELERYD_TASK_TIME_LIMIT
 
 CELERYD_TASK_TIME_LIMIT
 ~~~~~~~~~~~~~~~~~~~~~~~
@@ -667,7 +745,7 @@ CELERYD_TASK_TIME_LIMIT
 Task hard time limit in seconds.  The worker processing the task will
 be killed and replaced with a new one when this is exceeded.
 
-.. _CELERYD_SOFT_TASK_TIME_LIMIT:
+.. setting:: CELERYD_SOFT_TASK_TIME_LIMIT
 
 CELERYD_SOFT_TASK_TIME_LIMIT
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -692,7 +770,7 @@ Example:
         except SoftTimeLimitExceeded:
             cleanup_in_a_hurry()
 
-.. _CELERY_STORE_ERRORS_EVEN_IF_IGNORED:
+.. setting:: CELERY_STORE_ERRORS_EVEN_IF_IGNORED
 
 CELERY_STORE_ERRORS_EVEN_IF_IGNORED
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -700,19 +778,53 @@ CELERY_STORE_ERRORS_EVEN_IF_IGNORED
 If set, the worker stores all task errors in the result store even if
 :attr:`Task.ignore_result <celery.task.base.Task.ignore_result>` is on.
 
+.. setting:: CELERYD_STATE_DB
+
+CELERYD_STATE_DB
+~~~~~~~~~~~~~~~~
+
+Name of the file used to store persistent worker state (like revoked tasks).
+Can be a relative or absolute path, but be aware that the suffix ``.db``
+may be appended to the file name (depending on Python version).
+
+Can also be set via the :option:`--statedb` argument to
+:mod:`~celery.bin.celeryd`.
+
+Not enabled by default.
+
+.. setting:: CELERYD_ETA_SCHEDULER_PRECISION
+
+CELERYD_ETA_SCHEDULER_PRECISION
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Set the maximum time in seconds that the ETA scheduler can sleep between
+rechecking the schedule.  Default is 1 second.
+
+Setting this value to 1 second means the scheduler's precision will
+be 1 second. If you need near millisecond precision you can set this to 0.1.
+
 .. _conf-error-mails:
 
 Error E-Mails
 -------------
 
-.. _CELERYD_SEND_TASK_ERROR_EMAILS:
+.. setting:: CELERY_SEND_TASK_ERROR_EMAILS
 
 CELERY_SEND_TASK_ERROR_EMAILS
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-If set to ``True``, errors in tasks will be sent to admins by e-mail.
+The default value for the ``Task.send_error_emails`` attribute, which if
+set to :const:`True` means errors occurring during task execution will be
+sent to :setting:`ADMINS` by e-mail.
 
-.. _ADMINS:
+.. setting:: CELERY_TASK_ERROR_WHITELIST
+
+CELERY_TASK_ERROR_WHITELIST
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A whitelist of exceptions to send error e-mails for.
+
+.. setting:: ADMINS
 
 ADMINS
 ~~~~~~
@@ -720,7 +832,7 @@ ADMINS
 List of ``(name, email_address)`` tuples for the admins that should
 receive error e-mails.
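
For example (addresses borrowed from the error e-mail example later in
this section; the display names are made up):

.. code-block:: python

    ADMINS = (
        ("George Costanza", "george@vandelay.com"),
        ("Cosmo Kramer", "kramer@vandelay.com"),
    )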
 
-.. _SERVER_EMAIL:
+.. setting:: SERVER_EMAIL
 
 SERVER_EMAIL
 ~~~~~~~~~~~~
@@ -728,28 +840,28 @@ SERVER_EMAIL
 The e-mail address this worker sends e-mails from.
 Default is celery@localhost.
 
-.. _MAIL_HOST:
+.. setting:: MAIL_HOST
 
 MAIL_HOST
 ~~~~~~~~~
 
 The mail server to use.  Default is ``"localhost"``.
 
-.. _MAIL_HOST_USER:
+.. setting:: MAIL_HOST_USER
 
 MAIL_HOST_USER
 ~~~~~~~~~~~~~~
 
 Username (if required) to log on to the mail server with.
 
-.. _MAIL_HOST_PASSWORD:
+.. setting:: MAIL_HOST_PASSWORD
 
 MAIL_HOST_PASSWORD
 ~~~~~~~~~~~~~~~~~~
 
 Password (if required) to log on to the mail server with.
 
-.. _MAIL_PORT:
+.. setting:: MAIL_PORT
 
 MAIL_PORT
 ~~~~~~~~~
@@ -789,21 +901,29 @@ george@vandelay.com and kramer@vandelay.com:
 Events
 ------
 
-.. _CELERY_SEND_EVENTS:
+.. setting:: CELERY_SEND_EVENTS
 
 CELERY_SEND_EVENTS
 ~~~~~~~~~~~~~~~~~~
 
 Send events so the worker can be monitored by tools like ``celerymon``.
 
-.. _CELERY_EVENT_EXCHANGE:
+.. setting:: CELERY_EVENT_QUEUE
+
+CELERY_EVENT_QUEUE
+~~~~~~~~~~~~~~~~~~
+
+Name of the queue to consume event messages from. Default is
+``"celeryevent"``.
+
+.. setting:: CELERY_EVENT_EXCHANGE
 
 CELERY_EVENT_EXCHANGE
 ~~~~~~~~~~~~~~~~~~~~~
 
 Name of the exchange to send event messages to.  Default is ``"celeryevent"``.
 
-.. _CELERY_EVENT_EXCHANGE_TYPE:
+.. setting:: CELERY_EVENT_EXCHANGE_TYPE
 
 CELERY_EVENT_EXCHANGE_TYPE
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -811,27 +931,27 @@ CELERY_EVENT_EXCHANGE_TYPE
 The exchange type of the event exchange.  Default is to use a ``"direct"``
 exchange.
 
-.. _CELERY_EVENT_ROUTING_KEY:
+.. setting:: CELERY_EVENT_ROUTING_KEY
 
 CELERY_EVENT_ROUTING_KEY
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
 Routing key used when sending event messages.  Default is ``"celeryevent"``.
 
-.. _CELERY_EVENT_SERIALIZER:
+.. setting:: CELERY_EVENT_SERIALIZER
 
 CELERY_EVENT_SERIALIZER
 ~~~~~~~~~~~~~~~~~~~~~~~
 
 Message serialization format used when sending event messages.
-Default is ``"json"``.
+Default is ``"json"``. See :ref:`executing-serializers`.
 
 .. _conf-broadcast:
 
 Broadcast Commands
 ------------------
 
-.. _CELERY_BROADCAST_QUEUE:
+.. setting:: CELERY_BROADCAST_QUEUE
 
 CELERY_BROADCAST_QUEUE
 ~~~~~~~~~~~~~~~~~~~~~~
@@ -842,7 +962,7 @@ queue name.
 
 Default is ``"celeryctl"``.
 
-.. _CELERY_BROADCASTS_EXCHANGE:
+.. setting:: CELERY_BROADCAST_EXCHANGE
 
 CELERY_BROADCAST_EXCHANGE
 ~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -851,7 +971,7 @@ Name of the exchange used for broadcast messages.
 
 Default is ``"celeryctl"``.
 
-.. _CELERY_BROADCAST_EXCHANGE_TYPE:
+.. setting:: CELERY_BROADCAST_EXCHANGE_TYPE
 
 CELERY_BROADCAST_EXCHANGE_TYPE
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -863,7 +983,7 @@ Exchange type used for broadcast messages.  Default is ``"fanout"``.
 Logging
 -------
 
-.. _CELERYD_LOG_FILE:
+.. setting:: CELERYD_LOG_FILE
 
 CELERYD_LOG_FILE
 ~~~~~~~~~~~~~~~~
@@ -873,7 +993,7 @@ using the :option:`--logfile` option to :mod:`~celery.bin.celeryd`.
 
 The default is :const:`None` (``stderr``)
 
-.. _CELERYD_LOG_LEVEL:
+.. setting:: CELERYD_LOG_LEVEL
 
 CELERYD_LOG_LEVEL
 ~~~~~~~~~~~~~~~~~
@@ -886,7 +1006,7 @@ Can also be set via the :option:`--loglevel` argument to
 
 See the :mod:`logging` module for more information.
 
-.. _CELERYD_LOG_FORMAT:
+.. setting:: CELERYD_LOG_FORMAT
 
 CELERYD_LOG_FORMAT
 ~~~~~~~~~~~~~~~~~~
@@ -898,7 +1018,7 @@ Default is ``[%(asctime)s: %(levelname)s/%(processName)s] %(message)s``
 See the Python :mod:`logging` module for more information about log
 formats.
 
-.. _CELERYD_TASK_LOG_FORMAT:
+.. setting:: CELERYD_TASK_LOG_FORMAT
 
 CELERYD_TASK_LOG_FORMAT
 ~~~~~~~~~~~~~~~~~~~~~~~
@@ -919,7 +1039,7 @@ formats.
 Custom Component Classes (advanced)
 -----------------------------------
 
-.. _CELERYD_POOL:
+.. setting:: CELERYD_POOL
 
 CELERYD_POOL
 ~~~~~~~~~~~~
@@ -927,7 +1047,7 @@ CELERYD_POOL
 Name of the task pool class used by the worker.
 Default is :class:`celery.concurrency.processes.TaskPool`.
 
-.. _CELERYD_LISTENER:
+.. setting:: CELERYD_LISTENER
 
 CELERYD_LISTENER
 ~~~~~~~~~~~~~~~~
@@ -935,7 +1055,7 @@ CELERYD_LISTENER
 Name of the listener class used by the worker.
 Default is :class:`celery.worker.listener.CarrotListener`.
 
-.. _CELERYD_MEDIATOR:
+.. setting:: CELERYD_MEDIATOR
 
 CELERYD_MEDIATOR
 ~~~~~~~~~~~~~~~~
@@ -943,7 +1063,7 @@ CELERYD_MEDIATOR
 Name of the mediator class used by the worker.
 Default is :class:`celery.worker.controllers.Mediator`.
 
-.. _CELERYD_ETA_SCHEDULER:
+.. setting:: CELERYD_ETA_SCHEDULER
 
 CELERYD_ETA_SCHEDULER
 ~~~~~~~~~~~~~~~~~~~~~
@@ -956,7 +1076,7 @@ Default is :class:`celery.worker.controllers.ScheduleController`.
 Periodic Task Server: celerybeat
 --------------------------------
 
-.. _CELERYBEAT_SCHEDULE:
+.. setting:: CELERYBEAT_SCHEDULE
 
 CELERYBEAT_SCHEDULE
 ~~~~~~~~~~~~~~~~~~~
@@ -964,7 +1084,7 @@ CELERYBEAT_SCHEDULE
 The periodic task schedule used by :mod:`~celery.bin.celerybeat`.
 See :ref:`beat-entries`.
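
A minimal schedule sketch (``tasks.add`` is a hypothetical task name; see
:ref:`beat-entries` for the full entry format):

.. code-block:: python

    from datetime import timedelta

    CELERYBEAT_SCHEDULE = {
        "add-every-30-seconds": {
            "task": "tasks.add",
            "schedule": timedelta(seconds=30),
            "args": (16, 16),
        },
    }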
 
-.. _CELERYBEAT_SCHEDULE_FILENAME:
+.. setting:: CELERYBEAT_SCHEDULE_FILENAME
 
 CELERYBEAT_SCHEDULE_FILENAME
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -976,7 +1096,7 @@ may be appended to the file name (depending on Python version).
 Can also be set via the :option:`--schedule` argument to
 :mod:`~celery.bin.celerybeat`.
 
-.. _CELERYBEAT_MAX_LOOP_INTERVAL:
+.. setting:: CELERYBEAT_MAX_LOOP_INTERVAL
 
 CELERYBEAT_MAX_LOOP_INTERVAL
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -984,7 +1104,7 @@ CELERYBEAT_MAX_LOOP_INTERVAL
 The maximum number of seconds :mod:`~celery.bin.celerybeat` can sleep
 between checking the schedule.  Default is 300 seconds (5 minutes).
 
-.. _CELERYBEAT_LOG_FILE:
+.. setting:: CELERYBEAT_LOG_FILE
 
 CELERYBEAT_LOG_FILE
 ~~~~~~~~~~~~~~~~~~~
 the ``--logfile`` option to :mod:`~celery.bin.celerybeat`.
 
 The default is :const:`None` (``stderr``).
 
-.. _CELERYBEAT_LOG_LEVEL:
+.. setting:: CELERYBEAT_LOG_LEVEL
 
 CELERYBEAT_LOG_LEVEL
 ~~~~~~~~~~~~~~~~~~~~
@@ -1012,7 +1132,7 @@ See the :mod:`logging` module for more information.
 Monitor Server: celerymon
 -------------------------
 
-.. _CELERYMON_LOG_FILE:
+.. setting:: CELERYMON_LOG_FILE
 
 CELERYMON_LOG_FILE
 ~~~~~~~~~~~~~~~~~~
@@ -1022,7 +1142,7 @@ the :option:`--logfile` argument to ``celerymon``.
 
 The default is :const:`None` (``stderr``)
 
-.. _CELERYMON_LOG_LEVEL:
+.. setting:: CELERYMON_LOG_LEVEL
 
 CELERYMON_LOG_LEVEL
 ~~~~~~~~~~~~~~~~~~~

+ 8 - 8
docs/getting-started/broker-installation.rst

@@ -59,8 +59,8 @@ When git is installed you can finally clone the repo, storing it at the
     $ git clone git://github.com/mxcl/homebrew /lol
 
 
-Brew comes with a simple utility called ``brew``, used to install, remove and
-query packages. To use it you first have to add it to ``PATH``, by
+Brew comes with a simple utility called :program:`brew`, used to install, remove and
+query packages. To use it you first have to add it to :envvar:`PATH`, by
 adding the following line to the end of your ``~/.profile``::
 
     export PATH="/lol/bin:/lol/sbin:$PATH"
@@ -70,7 +70,7 @@ Save your profile and reload it::
     $ source ~/.profile
 
 
-Finally, we can install rabbitmq using ``brew``::
+Finally, we can install rabbitmq using :program:`brew`::
 
     $ brew install rabbitmq
 
@@ -87,7 +87,7 @@ If you're using a DHCP server that is giving you a random hostname, you need
 to permanently configure the hostname. This is because RabbitMQ uses the hostname
 to communicate with nodes.
 
-Use the ``scutil`` command to permanently set your hostname::
+Use the :program:`scutil` command to permanently set your hostname::
 
     $ sudo scutil --set HostName myhost.local
 
@@ -97,7 +97,7 @@ back into an IP address::
     127.0.0.1       localhost myhost myhost.local
 
 If you start the rabbitmq server, your rabbit node should now be ``rabbit@myhost``,
-as verified by ``rabbitmqctl``::
+as verified by :program:`rabbitmqctl`::
 
     $ sudo rabbitmqctl status
     Status of node rabbit@myhost ...
@@ -124,13 +124,13 @@ To start the server::
 
     $ sudo rabbitmq-server
 
-you can also run it in the background by adding the ``-detached`` option
+You can also run it in the background by adding the :option:`-detached` option
 (note: only one dash)::
 
     $ sudo rabbitmq-server -detached
 
-Never use ``kill`` to stop the RabbitMQ server, but rather use the
-``rabbitmqctl`` command::
+Never use :program:`kill` to stop the RabbitMQ server, but rather use the
+:program:`rabbitmqctl` command::
 
     $ sudo rabbitmqctl stop
 

+ 10 - 12
docs/getting-started/first-steps-with-celery.rst

@@ -15,11 +15,11 @@ Creating a simple task
 In this example we are creating a simple task that adds two
 numbers. Tasks are defined in a normal python module. The module can
 be named whatever you like, but the convention is to call it
-``tasks.py``.
+:file:`tasks.py`.
 
 Our addition task looks like this:
 
-``tasks.py``:
+:file:`tasks.py`:
 
 .. code-block:: python
 
@@ -42,7 +42,7 @@ Configuration
 =============
 
 Celery is configured by using a configuration module. By default
-this module is called ``celeryconfig.py``.
+this module is called :file:`celeryconfig.py`.
 
 .. note::
 
@@ -52,7 +52,7 @@ this module is called ``celeryconfig.py``.
     You can also set a custom name for the configuration module using
     the :envvar:`CELERY_CONFIG_MODULE` environment variable.
 
-Let's create our ``celeryconfig.py``.
+Let's create our :file:`celeryconfig.py`.
 
 1. Configure how we communicate with the broker::
 
@@ -74,28 +74,26 @@ Let's create our ``celeryconfig.py``.
    that contain tasks. This is so Celery knows what tasks it can
    be asked to perform.
 
-   We only have a single task module, ``tasks.py``, which we added earlier::
+   We only have a single task module, :file:`tasks.py`, which we added earlier::
 
         CELERY_IMPORTS = ("tasks", )
 
 That's it.
 
-
 There are more options available, like how many processes you want to
 process work in parallel (the :setting:`CELERYD_CONCURRENCY` setting), and we
 could use a persistent result store backend, but for now, this should
-do. For all of the options available, see the 
-:doc:`configuration directive reference<../configuration>`.
+do. For all of the options available, see :ref:`configuration`.
 
 .. note::
 
-    You can also specify modules to import using the ``-I`` option to
-    ``celeryd``::
+    You can also specify modules to import using the :option:`-I` option to
+    :mod:`~celery.bin.celeryd`::
 
         $ celeryd -l info -I tasks,handlers
 
     This can be a single module, or a comma-separated list of task modules to import when
-    ``celeryd`` starts.
+    :mod:`~celery.bin.celeryd` starts.
 
 
 .. _celerytut-running-celeryd:
@@ -175,7 +173,7 @@ by keeping the :class:`~celery.result.AsyncResult`::
     True
 
 If the task raises an exception, the return value of ``result.successful()``
-will be ``False``, and ``result.result`` will contain the exception instance
+will be :const:`False`, and ``result.result`` will contain the exception instance
 raised by the task.
 
 Where to go from here

+ 7 - 3
docs/includes/introduction.txt

@@ -13,9 +13,9 @@ Celery is an open source asynchronous task queue/job queue based on
 distributed message passing. It is focused on real-time operation,
 but supports scheduling as well.
 
-The execution units, called tasks, are executed concurrently on a single or
-more worker servers. Tasks can execute asynchronously (in the background) or synchronously
-(wait until ready).
+The execution units, called tasks, are executed concurrently on one or
+more worker nodes. Tasks can execute asynchronously (in the background) or
+synchronously (wait until ready).
 
 Celery is already used in production to process millions of tasks a day.
 
@@ -25,6 +25,9 @@ language. It can also `operate with other languages using webhooks`_.
 The recommended message broker is `RabbitMQ`_, but support for `Redis`_ and
 databases (`SQLAlchemy`_) is also available.
 
+Celery can be easily used with Django and Pylons using
+`django-celery`_ and `celery-pylons`_.
+
 You may also be pleased to know that full Django integration exists,
 delivered by the `django-celery`_ package.
 
@@ -32,6 +35,7 @@ delivered by the `django-celery`_ package.
 .. _`Redis`: http://code.google.com/p/redis/
 .. _`SQLAlchemy`: http://www.sqlalchemy.org/
 .. _`django-celery`: http://pypi.python.org/pypi/django-celery
+.. _`celery-pylons`: http://bitbucket.org/ianschenck/celery-pylons
 .. _`operate with other languages using webhooks`:
     http://ask.github.com/celery/userguide/remote-tasks.html
 

+ 1 - 1
docs/includes/resources.txt

@@ -55,7 +55,7 @@ to send regular patches.
 License
 =======
 
-This software is licensed under the ``New BSD License``. See the ``LICENSE``
+This software is licensed under the ``New BSD License``. See the :file:`LICENSE`
 file in the top distribution directory for the full license text.
 
 .. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround

+ 6 - 6
docs/reference/celery.conf.rst

@@ -183,7 +183,7 @@ Execution
 .. data:: DEFAULT_RATE_LIMIT
 
     The default rate limit applied to all tasks which don't have a custom
-    rate limit defined. (Default: None)
+    rate limit defined. (Default: :const:`None`)
 
 .. data:: DISABLE_RATE_LIMITS
 
@@ -203,7 +203,7 @@ Broker
     Maximum number of retries before we give up re-establishing a connection
     to the broker.
 
-    If this is set to ``0`` or ``None``, we will retry forever.
+    If this is set to ``0`` or :const:`None`, we will retry forever.
 
     Default is ``100`` retries.
 
@@ -218,7 +218,7 @@ Celerybeat
 .. data:: CELERYBEAT_LOG_FILE
 
     Default log file for celerybeat.
-    Default is: ``None`` (stderr)
+    Default is: :const:`None` (stderr)
 
 .. data:: CELERYBEAT_SCHEDULE_FILENAME
 
@@ -246,7 +246,7 @@ Celerymon
 .. data:: CELERYMON_LOG_FILE
 
     Default log file for celerymon.
-    Default is: ``None`` (stderr)
+    Default is: :const:`None` (stderr)
 
 Celeryd
 =======
@@ -266,11 +266,11 @@ Celeryd
 .. data:: CELERYD_LOG_FILE
 
     Filename of the daemon log file.
-    Default is: ``None`` (stderr)
+    Default is: :const:`None` (stderr)
 
 .. data:: CELERYD_LOG_LEVEL
 
-    Default log level for daemons. (``WARN``)
+    Default log level for daemons. (:const:`WARN`)
 
 .. data:: CELERYD_CONCURRENCY
 

+ 1 - 1
docs/tutorials/otherqueues.rst

@@ -64,7 +64,7 @@ configuration values.
     CARROT_BACKEND = "ghettoq.taproot.Database"
 
 
-#. Add ``ghettoq`` to ``INSTALLED_APPS``::
+#. Add :mod:`ghettoq` to ``INSTALLED_APPS``::
 
     INSTALLED_APPS = ("ghettoq", )
 

+ 8 - 8
docs/userguide/executing.rst

@@ -30,11 +30,11 @@ The same thing using ``apply_async`` is written like this:
     Task.apply_async(args=[arg1, arg2], kwargs={"kwarg1": "x", "kwarg2": "y"})
 
 
-While ``delay`` is convenient, it doesn't give you as much control as using ``apply_async``.
-With ``apply_async`` you can override the execution options available as attributes on
-the ``Task`` class: ``routing_key``, ``exchange``, ``immediate``, ``mandatory``,
-``priority``, and ``serializer``.  In addition you can set a countdown/eta, or provide
-a custom broker connection.
+While ``delay`` is convenient, it doesn't give you as much control as using
+``apply_async``.  With ``apply_async`` you can override the execution options
+available as attributes on the ``Task`` class:  ``routing_key``, ``exchange``,
+``immediate``, ``mandatory``, ``priority``, and ``serializer``.
+In addition you can set a countdown/eta, or provide a custom broker connection.
 
 Let's go over these in more detail. The following examples use this simple
 task, which adds together two numbers:
@@ -97,9 +97,9 @@ Serializers
 Data passed between celery and workers has to be serialized to be
 transferred. The default serializer is :mod:`pickle`, but you can 
 change this for each
-task. There is built-in support for using :mod:`pickle`, ``JSON`` and ``YAML``,
-and you can add your own custom serializers by registering them into the
-carrot serializer registry.
+task. There is built-in support for using :mod:`pickle`, ``JSON``, ``YAML``
+and ``msgpack``. You can also add your own custom serializers by registering
+them into the Carrot serializer registry.
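
For illustration, registering a custom serializer might look like this (a
sketch, assuming Carrot's registry takes a name, encoder, decoder, content
type and content encoding; ``myjson`` is a hypothetical name):

.. code-block:: python

    import json

    from carrot.serialization import registry

    def myjson_dumps(obj):
        return json.dumps(obj)

    def myjson_loads(data):
        return json.loads(data)

    registry.register("myjson", myjson_dumps, myjson_loads,
                      content_type="application/x-myjson",
                      content_encoding="utf-8")

Tasks can then select it with ``serializer="myjson"``.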
 
 The default serializer (pickle) supports Python objects, like ``datetime`` and
 any custom datatypes you define yourself. But since pickle has poor support

+ 9 - 9
docs/userguide/tasks.rst

@@ -215,13 +215,13 @@ General
     Set the rate limit for this task type, i.e. how many times the
     task is allowed to run in a given period of time.
 
-    If this is ``None`` no rate limit is in effect.
+    If this is :const:`None` no rate limit is in effect.
     If it is an integer, it is interpreted as "tasks per second". 
 
     The rate limits can be specified in seconds, minutes or hours
     by appending ``"/s"``, ``"/m"`` or ``"/h"`` to the value.
     Example: ``"100/m"`` (hundred tasks a minute). Default is the
-    ``CELERY_DEFAULT_RATE_LIMIT`` setting, which if not specified means
+    :setting:`CELERY_DEFAULT_RATE_LIMIT` setting, which if not specified means
     rate limiting for tasks is turned off by default.
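
    For example (a sketch using the task decorator; ``send_email`` is a
    hypothetical task):

    .. code-block:: python

        from celery.decorators import task

        @task(rate_limit="100/m")  # at most a hundred tasks a minute
        def send_email(to):
            print("Sending e-mail to %s" % (to, ))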
 
 .. attribute:: Task.ignore_result
@@ -233,13 +233,13 @@ General
 .. attribute:: Task.send_error_emails
 
     Send an e-mail whenever a task of this type fails.
-    Defaults to the ``CELERY_SEND_TASK_ERROR_EMAILS`` setting.
+    Defaults to the :setting:`CELERY_SEND_TASK_ERROR_EMAILS` setting.
     See :ref:`conf-error-mails` for more information.
 
 .. attribute:: Task.serializer
 
     A string identifying the default serialization
-    method to use. Defaults to the ``CELERY_TASK_SERIALIZER``
+    method to use. Defaults to the :setting:`CELERY_TASK_SERIALIZER`
     setting.  Can be ``pickle`` ``json``, ``yaml``, or any custom
     serialization methods that have been registered with
     :mod:`carrot.serialization.registry`.
@@ -253,7 +253,7 @@ Message and routing options
 
 .. attribute:: Task.queue
 
-    Use the routing settings from a queue defined in ``CELERY_QUEUES``.
+    Use the routing settings from a queue defined in :setting:`CELERY_QUEUES`.
     If defined the :attr:`exchange` and :attr:`routing_key` options will be
     ignored.
 
@@ -457,7 +457,7 @@ This is the list of tasks built-in to celery. Note that we had to import
 only be registered when the module they are defined in is imported.
 
 The default loader imports any modules listed in the
-``CELERY_IMPORTS`` setting. 
+:setting:`CELERY_IMPORTS` setting. 
 
 The entity responsible for registering your task in the registry is a
 meta class, :class:`~celery.task.base.TaskType`. This is the default
@@ -500,8 +500,8 @@ wastes time and resources.
     def mytask(...):
         something()
 
-Results can even be disabled globally using the ``CELERY_IGNORE_RESULT``
-setting.
+Results can even be disabled globally using the
+:setting:`CELERY_IGNORE_RESULT` setting.
 
 .. _task-disable-rate-limits:
 
@@ -512,7 +512,7 @@ Disabling rate limits altogether is recommended if you don't have
 any tasks using them. This is because the rate limit subsystem introduces
 quite a lot of complexity.
 
-Set the ``CELERY_DISABLE_RATE_LIMITS`` setting to globally disable
+Set the :setting:`CELERY_DISABLE_RATE_LIMITS` setting to globally disable
 rate limits:
 
 .. code-block:: python

+ 13 - 8
docs/userguide/workers.rst

@@ -38,20 +38,20 @@ hostname with the ``--hostname|-n`` argument::
 Stopping the worker
 ===================
 
-Shutdown should be accomplished using the ``TERM`` signal.
+Shutdown should be accomplished using the :sig:`TERM` signal.
 
 When shutdown is initiated the worker will finish any tasks it's currently
 executing before it terminates, so if these tasks are important you should
-wait for it to finish before doing anything drastic (like sending the ``KILL``
+wait for it to finish before doing anything drastic (like sending the :sig:`KILL`
 signal).
 
 If the worker won't shut down after a considerable time, for example because
-of tasks stuck in an infinite-loop, you can use the ``KILL`` signal to
+of tasks stuck in an infinite loop, you can use the :sig:`KILL` signal to
 force terminate the worker, but be aware that currently executing tasks will
 be lost (unless the tasks have the :attr:`~celery.task.base.Task.acks_late`
 option set).
 
-Also, since the ``KILL`` signal can't be catched by processes the worker will
+Also, since the :sig:`KILL` signal can't be catched by processes the worker will
 not be able to reap its children so make sure you do it manually. This
 command usually does the trick::
 
@@ -63,7 +63,7 @@ Restarting the worker
 =====================
 
 Other than stopping then starting the worker to restart, you can also
-restart the worker using the ``HUP`` signal::
+restart the worker using the :sig:`HUP` signal::
 
     $ kill -HUP $pid
 
@@ -175,7 +175,7 @@ Some remote control commands also have higher-level interfaces using
 :func:`~celery.task.control.broadcast` in the background, like
 :func:`~celery.task.control.rate_limit` and :func:`~celery.task.control.ping`.
 
-Sending the ``rate_limit`` command and keyword arguments::
+Sending the :control:`rate_limit` command and keyword arguments::
 
     >>> from celery.task.control import broadcast
     >>> broadcast("rate_limit", arguments={"task_name": "myapp.mytask",
@@ -206,6 +206,8 @@ using :func:`~celery.task.control.broadcast`.
 
 .. _worker-rate-limits:
 
+.. control:: rate_limit
+
 Rate limits
 -----------
 
@@ -227,7 +229,7 @@ destination hostname::
     :setting:`CELERY_DISABLE_RATE_LIMITS` setting on. To re-enable rate limits
     then you have to restart the worker.
 
-.. _worker-remote-shutdown:
+.. control:: shutdown
 
 Remote shutdown
 ---------------
@@ -237,7 +239,7 @@ This command will gracefully shut down the worker remotely::
     >>> broadcast("shutdown") # shutdown all workers
     >>> broadcast("shutdown, destination="worker1.example.com")
 
-.. _worker-ping:
+.. control:: ping
 
 Ping
 ----
@@ -262,6 +264,9 @@ so you can specify which workers to ping::
 
 .. _worker-enable-events:
 
+.. control:: enable_events
+.. control:: disable_events
+
 Enable/disable events
 ---------------------