
More Documentation fixes

Ask Solem 14 years ago
parent
commit
0e7e060e6a

+ 73 - 66
Changelog

@@ -54,8 +54,8 @@ News
 
 * Added support for expiration of AMQP results (requires RabbitMQ 2.1.0)
 
-    The new configuration option ``CELERY_AMQP_TASK_RESULT_EXPIRES`` sets
-    the expiry time in seconds (can be int or float):
+    The new configuration option :setting:`CELERY_AMQP_TASK_RESULT_EXPIRES`
+    sets the expiry time in seconds (can be int or float):
 
     .. code-block:: python
 
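         # illustrative value, not from the commit: expire AMQP results
         # after 30 minutes
         CELERY_AMQP_TASK_RESULT_EXPIRES = 30 * 60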
@@ -278,7 +278,7 @@ News
 * Refactored the periodic task responsible for cleaning up results.
 
     * The backend cleanup task is now only added to the schedule if
-        ``CELERY_TASK_RESULT_EXPIRES`` is set.
+        :setting:`CELERY_TASK_RESULT_EXPIRES` is set.
 
     * If the schedule already contains a periodic task named
       "celery.backend_cleanup" it won't change it, so the behavior of the
@@ -433,8 +433,9 @@ Fixes
 
     See issue #170.
 
-* ``CELERY_ROUTES``: Values defined in the route should now have precedence
-  over values defined in ``CELERY_QUEUES`` when merging the two.
+* :setting:`CELERY_ROUTES`: Values defined in the route should now have
+  precedence over values defined in :setting:`CELERY_QUEUES` when merging
+  the two.
 
     With the following settings::
 
@@ -452,9 +453,9 @@ Fixes
          "serializer": "json"}
 
     This was not the case before: the values
-    in ``CELERY_QUEUES`` would take precedence.
+    in :setting:`CELERY_QUEUES` would take precedence.
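
     For illustration, a sketch of the merge (queue and task names assumed)::

         CELERY_QUEUES = {"cpubound": {"exchange": "cpubound",
                                       "routing_key": "cpubound"}}

         CELERY_ROUTES = {"tasks.add": {"queue": "cpubound",
                                        "routing_key": "tasks.add",
                                        "serializer": "json"}}

         # the route's routing_key and serializer now win over the queue's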
 
-* Worker crashed if the value of ``CELERY_TASK_ERROR_WHITELIST`` was
+* Worker crashed if the value of :setting:`CELERY_TASK_ERROR_WHITELIST` was
   not an iterable
 
 * :func:`~celery.execute.apply`: Make sure ``kwargs["task_id"]`` is
@@ -520,8 +521,8 @@ Documentation
 
     See issue #169.
 
-* Documented the default values for the ``CELERYD_CONCURRENCY``
-  and ``CELERYD_PREFETCH_MULTIPLIER`` settings.
+* Documented the default values for the :setting:`CELERYD_CONCURRENCY`
+  and :setting:`CELERYD_PREFETCH_MULTIPLIER` settings.
 
 * Tasks Userguide: Fixed typos in the subtask example
 
@@ -689,7 +690,7 @@ Documentation
     tasks like a hot knife through butter.
 
     In addition a new setting has been added to control the minimum sleep
-    interval; ``CELERYD_ETA_SCHEDULER_PRECISION``. A good
+    interval: :setting:`CELERYD_ETA_SCHEDULER_PRECISION`. A good
     value for this would be a float between 0 and 1, depending
     on the needed precision. A value of 0.8 means that when the ETA of a task
     is met, it will take at most 0.8 seconds for the task to be moved to the
@@ -716,13 +717,13 @@ Documentation
 * Fixed "pending_xref" errors shown in the HTML rendering of the
   documentation. Apparently this was caused by new changes in Sphinx 1.0b2.
 
-* Router classes in ``CELERY_ROUTES`` are now imported lazily.
+* Router classes in :setting:`CELERY_ROUTES` are now imported lazily.
 
     Importing a router class in a module that also loads the Celery
     environment would cause a circular dependency. This is solved
     by importing it when needed after the environment is set up.
 
-* ``CELERY_ROUTES`` was broken if set to a single dict.
+* :setting:`CELERY_ROUTES` was broken if set to a single dict.
 
     This example in the docs should now work again::
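         # a sketch of the single-dict form (task and queue names assumed):
         CELERY_ROUTES = {"feed.tasks.import_feed": "feeds"}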
 
@@ -863,7 +864,7 @@ The database result backend is now using `SQLAlchemy`_ instead of the
 Django ORM, see `Supported Databases`_ for a table of supported databases.
 
 The ``DATABASE_*`` settings have been replaced by a single setting:
-``CELERY_RESULT_DBURI``. The value here should be an
+:setting:`CELERY_RESULT_DBURI`. The value here should be an
 `SQLAlchemy Connection String`_, some examples include:
 
 .. code-block:: python
@@ -884,7 +885,7 @@ See `SQLAlchemy Connection Strings`_ for more information about connection
 strings.
 
 To specify additional SQLAlchemy database engine options you can use
-the ``CELERY_RESULT_ENGINE_OPTIONS`` setting::
+the :setting:`CELERY_RESULT_ENGINE_OPTIONS` setting::
 
     # echo enables verbose logging from SQLAlchemy.
     CELERY_RESULT_ENGINE_OPTIONS = {"echo": True}
@@ -974,7 +975,7 @@ Backward incompatible changes
 
         CELERY_LOADER = "myapp.loaders.Loader"
 
-* ``CELERY_TASK_RESULT_EXPIRES`` now defaults to 1 day.
+* :setting:`CELERY_TASK_RESULT_EXPIRES` now defaults to 1 day.
 
     Previous default setting was to expire in 5 days.
 
@@ -1113,7 +1114,8 @@ News
 
 * Missing queue definitions are now created automatically.
 
-    You can disable this using the CELERY_CREATE_MISSING_QUEUES setting.
+    You can disable this using the :setting:`CELERY_CREATE_MISSING_QUEUES`
+    setting.
 
     The missing queues are created with the following options::
 
@@ -1132,18 +1134,19 @@ News
 * New Task option: ``Task.queue``
 
     If set, message options will be taken from the corresponding entry
-    in ``CELERY_QUEUES``. ``exchange``, ``exchange_type`` and ``routing_key``
+    in :setting:`CELERY_QUEUES`. ``exchange``, ``exchange_type`` and ``routing_key``
     will be ignored.
 
 * Added support for task soft and hard timelimits.
 
     New settings added:
 
-    * CELERYD_TASK_TIME_LIMIT
+    * :setting:`CELERYD_TASK_TIME_LIMIT`
 
         Hard time limit. The worker processing the task will be killed and
         replaced with a new one when this is exceeded.
-    * CELERYD_SOFT_TASK_TIME_LIMIT
+
+    * :setting:`CELERYD_SOFT_TASK_TIME_LIMIT`
 
         Soft time limit. The celery.exceptions.SoftTimeLimitExceeded exception
         will be raised when this is exceeded. The task can catch this to
@@ -1177,11 +1180,11 @@ News
 
     This is only enabled when the log output is a tty.
     You can explicitly enable/disable this feature using the
-    ``CELERYD_LOG_COLOR`` setting.
+    :setting:`CELERYD_LOG_COLOR` setting.
 
 * Added support for task router classes (like the django multidb routers)
 
-    * New setting: CELERY_ROUTES
+    * New setting: :setting:`CELERY_ROUTES`
 
     This is a single router, or a list of routers, to traverse when
     sending tasks. Dicts in this list convert to a
@@ -1210,7 +1213,7 @@ News
                     return "default"
 
     route_for_task may return a string or a dict. A string then means
-    it's a queue name in ``CELERY_QUEUES``, a dict means it's a custom route.
+    it's a queue name in :setting:`CELERY_QUEUES`, a dict means it's a custom route.
 
     When sending tasks, the routers are consulted in order. The first
     router that doesn't return ``None`` is the route to use. The message options
@@ -1241,7 +1244,7 @@ News
    :meth:`~celery.task.base.Task.on_retry`/
    :meth:`~celery.task.base.Task.on_failure` as einfo keyword argument.
 
-* celeryd: Added ``CELERYD_MAX_TASKS_PER_CHILD`` /
+* celeryd: Added :setting:`CELERYD_MAX_TASKS_PER_CHILD` /
   :option:`--maxtasksperchild`
 
     Defines the maximum number of tasks a pool worker can process before
@@ -1252,8 +1255,8 @@ News
 
 * :func:`celery.task.control.ping` now works as expected.
 
-* ``apply(throw=True)`` / ``CELERY_EAGER_PROPAGATES_EXCEPTIONS``: Makes eager
-  execution re-raise task errors.
+* ``apply(throw=True)`` / :setting:`CELERY_EAGER_PROPAGATES_EXCEPTIONS`:
+  Makes eager execution re-raise task errors.
 
 * New signal: :data:`~celery.signals.worker_process_init`: Sent inside the
   pool worker process at init.
@@ -1261,9 +1264,10 @@ News
 * celeryd :option:`-Q` option: Ability to specify a list of queues to use,
   disabling other configured queues.
 
-    For example, if ``CELERY_QUEUES`` defines four queues: ``image``, ``video``,
-    ``data`` and ``default``, the following command would make celeryd only
-    consume from the ``image`` and ``video`` queues::
+    For example, if :setting:`CELERY_QUEUES` defines four
+    queues: ``image``, ``video``, ``data`` and ``default``, the following
+    command would make celeryd only consume from the ``image`` and ``video``
+    queues::
 
         $ celeryd -Q image,video
 
@@ -1286,9 +1290,9 @@ News
 * Removed top-level tests directory. Test config now in celery.tests.config
 
     This means running the unittests doesn't require any special setup.
-    ``celery/tests/__init__`` now configures the ``CELERY_CONFIG_MODULE`` and
-    ``CELERY_LOADER``, so when ``nosetests`` imports that, the unit test
-    environment is all set up.
+    ``celery/tests/__init__`` now configures the :envvar:`CELERY_CONFIG_MODULE`
+    and :envvar:`CELERY_LOADER` environment variables, so when ``nosetests``
+    imports that, the unit test environment is all set up.
 
     Before you run the tests you need to install the test requirements::
 
@@ -1516,7 +1520,7 @@ News
 * AMQP backend: Added timeout support for ``result.get()`` /
   ``result.wait()``.
 
-* New task option: ``Task.acks_late`` (default: ``CELERY_ACKS_LATE``)
+* New task option: ``Task.acks_late`` (default: :setting:`CELERY_ACKS_LATE`)
 
     Late ack means the task messages will be acknowledged **after** the task
     has been executed, not *just before*, which is the default behavior.
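
     For example, a task might opt in like this (class name assumed)::

         from celery.task import Task

         class ProcessOrder(Task):
             acks_late = True  # acknowledged after execution, so the message
                               # is redelivered if the worker dies mid-task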
@@ -1586,7 +1590,7 @@ News
     when there are long running tasks and there is a need to report which
     task is currently running.
 
-    The global default can be overridden by the ``CELERY_TRACK_STARTED``
+    The global default can be overridden by the :setting:`CELERY_TRACK_STARTED`
     setting.
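
     For example::

         CELERY_TRACK_STARTED = True  # report the "started" state for all tasks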
 
 * User Guide: New section ``Tips and Best Practices``.
@@ -1727,15 +1731,15 @@ Fixes
 =====
 :release-date: 2010-03-31 12:50 P.M CET
 
-* Deprecated: ``CELERY_BACKEND``, please use ``CELERY_RESULT_BACKEND``
-  instead.
+* Deprecated: :setting:`CELERY_BACKEND`, please use
+  :setting:`CELERY_RESULT_BACKEND` instead.
 
 * We now use a custom logger in tasks. This logger supports task magic
   keyword arguments in formats.
 
-    The default format for tasks (``CELERYD_TASK_LOG_FORMAT``) now includes
-    the id and the name of tasks so the origin of task log messages can
-    easily be traced.
+    The default format for tasks (:setting:`CELERYD_TASK_LOG_FORMAT`) now
+    includes the id and the name of tasks so the origin of task log messages
+    can easily be traced.
 
     Example output::

         [2010-03-25 13:11:20,317: INFO/PoolWorker-1]
@@ -1751,8 +1755,8 @@ Fixes
   instead fixed the underlying issue which was caused by modifications
   to the ``DATABASE_NAME`` setting (Issue #82).
 
-* Django Loader: New config ``CELERY_DB_REUSE_MAX`` (max number of tasks
-  to reuse the same database connection)
+* Django Loader: New config :setting:`CELERY_DB_REUSE_MAX` (max number of
+  tasks to reuse the same database connection)
 
     The default is to use a new connection for every task.
     We would very much like to reuse the connection, but a safe number of
@@ -1761,8 +1765,9 @@ Fixes
 
     See: http://bit.ly/94fwdd
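
     For example (the value is an assumption)::

         CELERY_DB_REUSE_MAX = 100  # recycle the connection after 100 tasks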
 
-* celeryd: The worker components are now configurable: ``CELERYD_POOL``,
-  ``CELERYD_LISTENER``, ``CELERYD_MEDIATOR``, and ``CELERYD_ETA_SCHEDULER``.
+* celeryd: The worker components are now configurable: :setting:`CELERYD_POOL`,
+  :setting:`CELERYD_LISTENER`, :setting:`CELERYD_MEDIATOR`, and
+  :setting:`CELERYD_ETA_SCHEDULER`.
 
     The default configuration is as follows:
 
@@ -1773,8 +1778,9 @@ Fixes
         CELERYD_ETA_SCHEDULER = "celery.worker.controllers.ScheduleController"
         CELERYD_LISTENER = "celery.worker.listener.CarrotListener"
 
-    The ``CELERYD_POOL`` setting makes it easy to swap out the multiprocessing
-    pool with a threaded pool, or how about a twisted/eventlet pool?
+    The :setting:`CELERYD_POOL` setting makes it easy to swap out the
+    multiprocessing pool with a threaded pool, or how about a
+    twisted/eventlet pool?
 
     Consider the competition for the first pool plug-in started!
 
@@ -1823,7 +1829,7 @@ Fixes
     really long execution time are affected, as all tasks that have made it
     all the way into the pool need to be executed before the worker can
     safely terminate (this is at most the number of pool workers, multiplied
-    by the ``CELERYD_PREFETCH_MULTIPLIER`` setting.)
+    by the :setting:`CELERYD_PREFETCH_MULTIPLIER` setting.)
 
     We multiply the prefetch count by default to increase the performance at
     times with bursts of tasks with a short execution time. If this doesn't
@@ -1848,9 +1854,9 @@ Fixes
   out of control.
   
     You can set the maximum number of results the cache
-    can hold using the ``CELERY_MAX_CACHED_RESULTS`` setting (the default
-    is five thousand results). In addition, you can refetch already retrieved
-    results using ``backend.reload_task_result`` +
+    can hold using the :setting:`CELERY_MAX_CACHED_RESULTS` setting (the
+    default is five thousand results). In addition, you can refetch already
+    retrieved results using ``backend.reload_task_result`` +
     ``backend.reload_taskset_result`` (that's for those who want to send
     results incrementally).
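
     For example (value assumed)::

         CELERY_MAX_CACHED_RESULTS = 1000  # lower the limit from the 5000 default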
 
@@ -1923,7 +1929,7 @@ Fixes
     a week into the future.
 
 * The ``task_id`` argument is now respected even if the task is executed 
-  eagerly (either using apply, or ``CELERY_ALWAYS_EAGER``).
+  eagerly (either using apply, or :setting:`CELERY_ALWAYS_EAGER`).
 
 * The internal queues are now cleared if the connection is reset.
 
@@ -1952,7 +1958,7 @@ Fixes
 
 * TaskPublisher: Declarations are now done once (per process).
 
-* Added ``Task.delivery_mode`` and the ``CELERY_DEFAULT_DELIVERY_MODE``
+* Added ``Task.delivery_mode`` and the :setting:`CELERY_DEFAULT_DELIVERY_MODE`
   setting.
 
     These can be used to mark messages non-persistent (i.e. so they are
@@ -2083,7 +2089,7 @@ Backward incompatible changes
 
 * The worker no longer stores errors if ``Task.ignore_result`` is set, to
   revert to the previous behaviour set
-  ``CELERY_STORE_ERRORS_EVEN_IF_IGNORED`` to ``True``.
+  :setting:`CELERY_STORE_ERRORS_EVEN_IF_IGNORED` to ``True``.
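
     For example::

         CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True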
 
 * The statistics functionality has been removed in favor of events,
   so the ``-S`` and ``--statistics`` switches have been removed.
@@ -2093,7 +2099,8 @@ Backward incompatible changes
 * ``celery.discovery`` has been removed, and its ``autodiscover`` function is
   now in ``celery.loaders.djangoapp``. Reason: Internal API.
 
-* ``CELERY_LOADER`` now needs loader class name in addition to module name,
+* The :envvar:`CELERY_LOADER` environment variable now needs loader class name
+  in addition to the module name.
 
     E.g. where you previously had: ``"celery.loaders.default"``, you now need
     ``"celery.loaders.default.Loader"``, using the previous syntax will result
@@ -2173,7 +2180,7 @@ News
 * You can now set the hostname celeryd identifies as using the ``--hostname``
   argument.
 
-* Cache backend now respects ``CELERY_TASK_RESULT_EXPIRES``.
+* Cache backend now respects the :setting:`CELERY_TASK_RESULT_EXPIRES` setting.
 
 * Message format has been standardized and now uses ISO-8601 format
   for dates instead of datetime.
@@ -2196,7 +2203,7 @@ News
 * Got a 3x performance gain by setting the prefetch count to four times the 
   concurrency (from an average task round-trip of 0.1s to 0.03s!).
 
-    A new setting has been added: ``CELERYD_PREFETCH_MULTIPLIER``, which
+    A new setting has been added: :setting:`CELERYD_PREFETCH_MULTIPLIER`, which
     is set to ``4`` by default.
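
     For example, with eight worker processes (numbers assumed)::

         CELERYD_PREFETCH_MULTIPLIER = 4  # 8 procs * 4 = 32 prefetched messages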
 
 * Improved support for webhook tasks.
@@ -2222,7 +2229,7 @@ Changes
 * The ``uuid`` distribution is added as a dependency when running Python 2.4.
 
 * Now remembers the previously detected loader by keeping it in
-  the ``CELERY_LOADER`` environment variable.
+  the :envvar:`CELERY_LOADER` environment variable.
 
     This may help on Windows where fork emulation is used.
 
@@ -2233,15 +2240,15 @@ Changes
 
 * Task can now override the backend used to store results.
 
-* Refactored the ExecuteWrapper, ``apply`` and ``CELERY_ALWAYS_EAGER`` now
-  also executes the task callbacks and signals.
+* Refactored the ExecuteWrapper; ``apply`` and :setting:`CELERY_ALWAYS_EAGER`
+  now also execute the task callbacks and signals.
 
 * Now using a proper scheduler for the tasks with an ETA.
 
     This means waiting eta tasks are sorted by time, so we don't have
     to poll the whole list all the time.
 
-* Now also imports modules listed in CELERY_IMPORTS when running
+* Now also imports modules listed in :setting:`CELERY_IMPORTS` when running
   with django (as documented).
 
 * Loglevel for stdout/stderr changed from INFO to ERROR
@@ -2255,7 +2262,7 @@ Changes
   smart moves to not poll too regularly.
 
     If you need faster poll times you can lower the value
-    of ``CELERYBEAT_MAX_LOOP_INTERVAL``.
+    of :setting:`CELERYBEAT_MAX_LOOP_INTERVAL`.
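
     For example (value assumed)::

         CELERYBEAT_MAX_LOOP_INTERVAL = 30  # sleep at most 30s between checks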
 
 * You can now change periodic task intervals at runtime, by making
   ``run_every`` a property, or subclassing ``PeriodicTask.is_due``.
@@ -2270,10 +2277,10 @@ Changes
 * :exc:`celery.exceptions.NotRegistered` now inherits from :exc:`KeyError`,
   and ``TaskRegistry.__getitem__``+``pop`` raises ``NotRegistered`` instead
 
-* You can set the loader via the ``CELERY_LOADER`` environment variable.
+* You can set the loader via the :envvar:`CELERY_LOADER` environment variable.
 
-* You can now set ``CELERY_IGNORE_RESULT`` to ignore task results by default
-  (if enabled, tasks doesn't save results or errors to the backend used).
+* You can now set :setting:`CELERY_IGNORE_RESULT` to ignore task results by
+  default (if enabled, tasks don't save results or errors to the backend used).
 
 * celeryd now correctly handles malformed messages by throwing away and
   acknowledging the message, instead of crashing.
@@ -2412,8 +2419,8 @@ Changes
 * Added a Django test runner to contrib that sets
   ``CELERY_ALWAYS_EAGER = True`` for testing with the database backend.
 
-* Added a CELERY_CACHE_BACKEND setting for using something other than
-  the django-global cache backend.
+* Added a :setting:`CELERY_CACHE_BACKEND` setting for using something other
+  than the django-global cache backend.
 
 * Use custom implementation of functools.partial (curry) for Python 2.4 support
   (Probably still problems with running on 2.4, but it will eventually be
@@ -2613,7 +2620,7 @@ News
     startup instead of for each check (which has been a forgotten TODO/XXX
     in the code for a long time)
 
-* New settings variable: ``CELERY_TASK_RESULT_EXPIRES``
+* New settings variable: :setting:`CELERY_TASK_RESULT_EXPIRES`
     Time (in seconds, or a `datetime.timedelta` object) after which
     stored task results are deleted. For the moment this only works for the
     database backend.
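
     For example (the six-hour value is an assumption)::

         from datetime import timedelta

         CELERY_TASK_RESULT_EXPIRES = timedelta(hours=6)  # or seconds: 6 * 3600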
@@ -2673,7 +2680,7 @@ News
   function blocking until the task is done, for API compatibility it
   returns a ``celery.result.EagerResult`` instance. You can configure
   celery to always run tasks locally by setting the
-  ``CELERY_ALWAYS_EAGER`` setting to ``True``.
+  :setting:`CELERY_ALWAYS_EAGER` setting to ``True``.
 
 * Now depends on ``anyjson``.
 

+ 4 - 4
FAQ

@@ -351,8 +351,8 @@ If you don't use the results for a task, make sure you set the
     class MyTask(Task):
         ignore_result = True
 
-Results can also be disabled globally using the ``CELERY_IGNORE_RESULT``
-setting.
+Results can also be disabled globally using the
+:setting:`CELERY_IGNORE_RESULT` setting.
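
 For example, in ``celeryconfig.py``::

     CELERY_IGNORE_RESULT = True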
 
 .. note::
 
@@ -360,7 +360,7 @@ setting.
     AMQP result backend results.
 
     To use this you need to run RabbitMQ 2.1 or higher and enable
-    the ``CELERY_AMQP_TASK_RESULT_EXPIRES`` setting.
+    the :setting:`CELERY_AMQP_TASK_RESULT_EXPIRES` setting.
 
 .. _faq-use-celery-with-stomp:
 
@@ -674,7 +674,7 @@ You should never stop ``celeryd`` with the ``KILL`` signal (``-9``),
 unless you've tried ``TERM`` a few times and waited a few minutes to let it
 get a chance to shut down. If you do, tasks may be terminated mid-execution,
 and they will not be re-run unless you have the ``acks_late`` option set.
-(``Task.acks_late`` / ``CELERY_ACKS_LATE``).
+(``Task.acks_late`` / :setting:`CELERY_ACKS_LATE`).
 
 .. _faq-daemonizing:
 

+ 6 - 0
docs/_ext/celerydocs.py

@@ -0,0 +1,6 @@
+def setup(app):
+    app.add_crossref_type(
+        directivename = "setting",
+        rolename      = "setting",
+        indextemplate = "pair: %s; setting",
+    )
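
With the crossref type registered, the docs can define a setting once and
reference it from anywhere; a sketch (the setting name is only an example)::

    .. setting:: CELERY_RESULT_BACKEND

    See the :setting:`CELERY_RESULT_BACKEND` setting.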

+ 3 - 1
docs/conf.py

@@ -7,6 +7,7 @@ import os
 # is relative to the documentation root, use os.path.abspath to make it
 # absolute, like shown here.
 sys.path.append(os.path.join(os.pardir, "tests"))
+sys.path.append("_ext")
 import celery
 
 # General configuration
@@ -14,7 +15,8 @@ import celery
 
 extensions = ['sphinx.ext.autodoc',
               'sphinx.ext.coverage',
-              'sphinxcontrib.issuetracker']
+              'sphinxcontrib.issuetracker',
+              'celerydocs']
 
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ['.templates']

+ 7 - 6
docs/configuration.rst

@@ -130,7 +130,7 @@ See `Connection String`_ for more information about connection
 strings.
 
 To specify additional SQLAlchemy database engine options you can use
-the ``CELERY_RESULT_ENGINE_OPTIONS`` setting::
+the :setting:`CELERY_RESULT_ENGINE_OPTIONS` setting::
 
     # echo enables verbose logging from SQLAlchemy.
     CELERY_RESULT_ENGINE_OPTIONS = {"echo": True}
@@ -202,7 +202,7 @@ Using multiple memcached servers:
     CELERY_RESULT_BACKEND = "cache"
     CELERY_CACHE_BACKEND = 'memcached://172.19.26.240:11211;172.19.26.242:11211/'
 
-You can set pylibmc options using the ``CELERY_CACHE_BACKEND_OPTIONS``
+You can set pylibmc options using the :setting:`CELERY_CACHE_BACKEND_OPTIONS`
 setting:
 
 .. code-block:: python
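     # illustrative pylibmc options (values assumed, not from the commit):
     CELERY_CACHE_BACKEND_OPTIONS = {"binary": True,
                                     "behaviors": {"tcp_nodelay": True}}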
@@ -347,7 +347,7 @@ Routing
 
 * CELERY_DEFAULT_QUEUE
     The queue used by default, if no custom queue is specified.
-    This queue must be listed in ``CELERY_QUEUES``.
+    This queue must be listed in :setting:`CELERY_QUEUES`.
     The default is: ``celery``.
 
 * CELERY_DEFAULT_EXCHANGE
@@ -382,7 +382,8 @@ Connection
     it's lost.
 
     The time between retries is increased for each retry, and is
-    not exhausted before ``CELERY_BROKER_CONNECTION_MAX_RETRIES`` is exceeded.
+    not exhausted before :setting:`CELERY_BROKER_CONNECTION_MAX_RETRIES` is
+    exceeded.
 
     This behavior is on by default.
 
@@ -412,7 +413,7 @@ Task execution settings
 * CELERY_EAGER_PROPAGATES_EXCEPTIONS
 
     If this is ``True``, eagerly executed tasks (using ``.apply``, or with
-    ``CELERY_ALWAYS_EAGER`` on), will raise exceptions.
+    :setting:`CELERY_ALWAYS_EAGER` on) will raise exceptions.
 
     It's the same as always running ``apply`` with ``throw=True``.
 
@@ -420,7 +421,7 @@ Task execution settings
 
     Whether to store the task return values or not (tombstones).
     If you still want to store errors, just not successful return values,
-    you can set ``CELERY_STORE_ERRORS_EVEN_IF_IGNORED``.
+    you can set :setting:`CELERY_STORE_ERRORS_EVEN_IF_IGNORED`.
 
 * CELERY_TASK_RESULT_EXPIRES
     Time (in seconds, or a :class:`datetime.timedelta` object) after which

+ 28 - 17
docs/getting-started/first-steps-with-celery.rst

@@ -44,12 +44,13 @@ Configuration
 Celery is configured by using a configuration module. By default
 this module is called ``celeryconfig.py``.
 
-:Note: This configuration module must be on the Python path so it
-  can be imported.
+.. note::
 
-You can set a custom name for the configuration module with the
-``CELERY_CONFIG_MODULE`` variable, but in these examples we use the
-default name.
+    The configuration module must be on the Python path so it
+    can be imported.
+
+    You can also set a custom name for the configuration module using
+    the :envvar:`CELERY_CONFIG_MODULE` environment variable.
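
     For example (module name assumed)::

         $ CELERY_CONFIG_MODULE="myproject.celeryconfig" celeryd --loglevel=INFO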
 
 Let's create our ``celeryconfig.py``.
 
@@ -75,20 +76,28 @@ Let's create our ``celeryconfig.py``.
 
    We only have a single task module, ``tasks.py``, which we added earlier::
 
-        import os
-        import sys
-        sys.path.insert(0, os.getcwd())
-
         CELERY_IMPORTS = ("tasks", )
 
 That's it.
 
+
 There are more options available, like how many processes you want to
-process work in parallel (the ``CELERY_CONCURRENCY`` setting), and we
+process work in parallel (the :setting:`CELERY_CONCURRENCY` setting), and we
 could use a persistent result store backend, but for now, this should
 do. For all of the options available, see the 
 :doc:`configuration directive reference<../configuration>`.
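
 For instance (the value is an assumption)::

     CELERY_CONCURRENCY = 8  # run eight worker processes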
 
+.. note::
+
+    You can also specify modules to import using the ``-I`` option to
+    ``celeryd``::
+
+        $ celeryd -l info -I tasks,handlers
+
+    This can be a single module, or a comma-separated list of task modules
+    to import when ``celeryd`` starts.
+
+
 .. _celerytut-running-celeryd:
 
 Running the celery worker server
@@ -108,8 +117,7 @@ help command::
 
     $  celeryd --help
 
-For info on how to run celery as standalone daemon, see 
-:doc:`daemon mode reference<../cookbook/daemonizing>`
+For info on how to run celery as standalone daemon, see :ref:`daemonizing`.
 
 .. _`supervisord`: http://supervisord.org
 
@@ -135,15 +143,15 @@ broker will hold on to the task until a worker server has successfully
 picked it up.
 
 *Note:* If everything is just hanging when you execute ``delay``, please check
-that RabbitMQ is running, and that the user/password has access to the virtual
-host you configured earlier.
+that RabbitMQ is running, and that the user/password combination does have access to the
+virtual host you configured earlier.
 
 Right now we have to check the worker log files to know what happened
 with the task. This is because we didn't keep the :class:`~celery.result.AsyncResult`
 object returned by :meth:`~celery.task.base.Task.delay`.
 
 The :class:`~celery.result.AsyncResult` lets us find the state of the task, wait for
-the task to finish, get its return value (or exception if the task failed),
+the task to finish, get its return value (or exception + traceback if the task failed),
 and more.
 
 So, let's execute the task again, but this time we'll keep track of the task
@@ -170,5 +178,8 @@ If the task raises an exception, the return value of ``result.successful()``
 will be ``False``, and ``result.result`` will contain the exception instance
 raised by the task.
 
-That's all for now! After this you should probably read the :doc:`User
-Guide<../userguide/index>`.
+Where to go from here
+=====================
+
+After this you should read the :ref:`guide`. Specifically
+:ref:`guide-tasks` and :ref:`guide-executing`.

+ 1 - 1
docs/internals/deprecation.rst

@@ -25,7 +25,7 @@ Removals for version 2.0
     ``CELERY_AMQP_PUBLISHER_ROUTING_KEY``  ``CELERY_DEFAULT_ROUTING_KEY``
     =====================================  =====================================
 
-* ``CELERY_LOADER`` definitions without class name.
+* :envvar:`CELERY_LOADER` definitions without class name.
 
     E.g. ``celery.loaders.default``, needs to include the class name:
     ``celery.loaders.default.Loader``.

+ 2 - 2
docs/userguide/executing.rst

@@ -113,8 +113,8 @@ also be registered in the worker.
 
 When sending a task the serialization method is taken from the following
 places in order: The ``serializer`` argument to ``apply_async``, the
-Task's ``serializer`` attribute, and finally the global default ``CELERY_SERIALIZER``
-configuration directive.
+Task's ``serializer`` attribute, and finally the global default
+:setting:`CELERY_TASK_SERIALIZER` configuration directive.
 
 .. code-block:: python
 

+ 2 - 2
docs/userguide/periodic-tasks.rst

@@ -13,7 +13,7 @@ Introduction
 Celerybeat is a scheduler.  It kicks off tasks at regular intervals,
 which are then executed by worker nodes available in the cluster.
 
-By default the entries are taken from the ``CELERYBEAT_SCHEDULE`` setting,
+By default the entries are taken from the :setting:`CELERYBEAT_SCHEDULE` setting,
 but custom stores can also be used, like storing the entries
 in an SQL database.
 
@@ -28,7 +28,7 @@ Entries
 =======
 
 To schedule a task periodically you have to add an entry to the
-``CELERYBEAT_SCHEDULE`` setting:
+:setting:`CELERYBEAT_SCHEDULE` setting:
 
 .. code-block:: python
 
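     # a sketch (task name and interval assumed):
     from datetime import timedelta

     CELERYBEAT_SCHEDULE = {
         "add-every-30-seconds": {
             "task": "tasks.add",
             "schedule": timedelta(seconds=30),
             "args": (16, 16),
         },
     }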

+ 44 - 37
docs/userguide/routing.rst

@@ -4,11 +4,11 @@
  Routing Tasks
 ===============
 
-**NOTE** This document refers to functionality only available in brokers
-using AMQP. Other brokers may implement some functionality, see their
-respective documenation for more information, or contact the `mailinglist`_.
+.. warning::
 
-.. _`mailinglist`: http://groups.google.com/group/celery-users
+    This document refers to functionality only available in brokers
+    using AMQP. Other brokers may implement some functionality, see their
+    respective documentation for more information, or contact the :ref:`mailing-list`.
 
 .. contents::
     :local:
@@ -24,12 +24,12 @@ Basics
 Automatic routing
 -----------------
 
-The simplest way to do routing is to use the ``CELERY_CREATE_MISSING_QUEUES``
-setting (on by default).
+The simplest way to do routing is to use the
+:setting:`CELERY_CREATE_MISSING_QUEUES` setting (on by default).
 
 With this setting on, a named queue that is not already defined in
-``CELERY_QUEUES`` will be created automatically. This makes it easy to perform
-simple routing tasks.
+:setting:`CELERY_QUEUES` will be created automatically. This makes it easy to
+perform simple routing tasks.
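
 To turn the automatic creation off, a one-liner::

     CELERY_CREATE_MISSING_QUEUES = False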
 
 Say you have two servers, ``x`` and ``y``, that handle regular tasks,
 and one server ``z``, that only handles feed related tasks. You can use this
@@ -109,12 +109,13 @@ configuration:
     CELERY_DEFAULT_EXCHANGE_TYPE = "topic"
     CELERY_DEFAULT_ROUTING_KEY = "task.default"
 
-``CELERY_QUEUES`` is a map of queue names and their exchange/type/binding_key,
-if you don't set exchange or exchange type, they will be taken from the
-``CELERY_DEFAULT_EXCHANGE``/``CELERY_DEFAULT_EXCHANGE_TYPE`` settings.
+:setting:`CELERY_QUEUES` is a map of queue names and their
+exchange/type/binding_key; if you don't set the exchange or exchange type,
+they will be taken from the :setting:`CELERY_DEFAULT_EXCHANGE` and
+:setting:`CELERY_DEFAULT_EXCHANGE_TYPE` settings.
 
 To route a task to the ``feed_tasks`` queue, you can add an entry in the
-``CELERY_ROUTES`` setting:
+:setting:`CELERY_ROUTES` setting:
 
 .. code-block:: python
 
@@ -171,11 +172,13 @@ just specify a custom exchange and exchange type:
 
 If you're confused about these terms, you should read up on AMQP concepts.
 
-In addition to the :ref:`amqp-primer` below, there's
-`Rabbits and Warrens`_, an excellent blog post describing queues and
-exchanges. There's also AMQP in 10 minutes*: `Flexible Routing Model`_,
-and `Standard Exchange Types`_. For users of RabbitMQ the `RabbitMQ FAQ`_
-could be useful as a source of information.
+.. seealso::
+
+    In addition to the :ref:`amqp-primer` below, there's
+    `Rabbits and Warrens`_, an excellent blog post describing queues and
+    exchanges. There's also *AMQP in 10 minutes*: `Flexible Routing Model`_,
+    and `Standard Exchange Types`_. For users of RabbitMQ the `RabbitMQ FAQ`_
+    could be useful as a source of information.
 
 .. _`Rabbits and Warrens`: http://blogs.digitar.com/jjww/2009/01/rabbits-and-warrens/
 .. _`Flexible Routing Model`: http://bit.ly/95XFO1
@@ -238,8 +241,8 @@ The steps required to send and receive messages are:
 3. Bind the queue to the exchange.
 
 Celery automatically creates the entities necessary for the queues in
-``CELERY_QUEUES`` to work (except if the queue's ``auto_declare`` setting
-is set to :const:`False`).
+:setting:`CELERY_QUEUES` to work (except if the queue's ``auto_declare``
+setting is set to :const:`False`).
 
 Here's an example queue configuration with three queues;
 One for video, one for images and finally, one default queue for everything else:
@@ -263,10 +266,11 @@ One for video, one for images and finally, one default queue for everything else
     CELERY_DEFAULT_EXCHANGE_TYPE = "direct"
     CELERY_DEFAULT_ROUTING_KEY = "default"
 
+.. note::
 
-**NOTE**: In Celery the ``routing_key`` is the key used to send the message,
-while ``binding_key`` is the key the queue is bound with. In the AMQP API
-they are both referred to as the routing key.
+    In Celery the ``routing_key`` is the key used to send the message,
+    while ``binding_key`` is the key the queue is bound with. In the AMQP API
+    they are both referred to as the routing key.
 
 .. _amqp-exchange-types:
 
@@ -343,11 +347,13 @@ Related API commands
 
     Deletes an exchange.
 
-:Note: Declaring does not necessarily mean "create". When you declare you
-       *assert* that the entity exists and that it's operable. There is no
-       rule as to whom should initially create the exchange/queue/binding,
-       whether consumer or producer. Usually the first one to need it will
-       be the one to create it.
+.. note::
+
+    Declaring does not necessarily mean "create". When you declare you
+    *assert* that the entity exists and that it's operable. There is no
+    rule as to who should initially create the exchange/queue/binding,
+    whether consumer or producer. Usually the first one to need it will
+    be the one to create it.
 
 .. _amqp-api-hands-on:
 
@@ -415,7 +421,7 @@ if it has not been acknowledged before the client connection is closed.
 
 Note the delivery tag listed in the structure above; within a connection channel,
 every received message has a unique delivery tag.
-This tag is used to acknowledge the message. Note that
+This tag is used to acknowledge the message. Also note that
 delivery tags are not unique across connections, so in another client
 the delivery tag ``1`` might point to a different message than in this channel.
 
@@ -442,7 +448,7 @@ Routing Tasks
 Defining queues
 ---------------
 
-In Celery the queues are defined by the ``CELERY_QUEUES`` setting.
+In Celery the queues are defined by the :setting:`CELERY_QUEUES` setting.
 
 Here's an example queue configuration with three queues;
 One for video, one for images and finally, one default queue for everything else:
@@ -469,12 +475,12 @@ One for video, one for images and finally, one default queue for everything else
     CELERY_DEFAULT_EXCHANGE_TYPE = "direct"
     CELERY_DEFAULT_ROUTING_KEY = "default"
 
-Here, the ``CELERY_DEFAULT_QUEUE`` will be used to route tasks that doesn't
-have an explicit route.
+Here, the :setting:`CELERY_DEFAULT_QUEUE` will be used to route tasks that
+don't have an explicit route.
 
 The default exchange, exchange type and routing key will be used as the
 default routing values for tasks, and as the default values for entries
-in ``CELERY_QUEUES``.
+in :setting:`CELERY_QUEUES`.
 
 .. _routing-task-destination:
 
@@ -483,9 +489,10 @@ Specifying task destination
 
 The destination for a task is decided by the following (in order):
 
-1. The :ref:`routers` defined in ``CELERY_ROUTES``.
+1. The :ref:`routers` defined in :setting:`CELERY_ROUTES`.
 2. The routing arguments to :func:`~celery.execute.apply_async`.
-3. Routing related attributes defined on the :class:`~celery.task.base.Task` itself.
+3. Routing related attributes defined on the :class:`~celery.task.base.Task`
+   itself.
 
 It is considered best practice to not hard-code these settings, but rather
 leave that as configuration options by using :ref:`routers`;
@@ -514,7 +521,7 @@ All you need to define a new router is to create a class with a
             return None
 
 If you return the ``queue`` key, it will expand with the defined settings of
-that queue in ``CELERY_QUEUES``::
+that queue in :setting:`CELERY_QUEUES`::
 
     {"queue": "video", "routing_key": "video.compress"}
 
@@ -526,7 +533,7 @@ that queue in ``CELERY_QUEUES``::
          "routing_key": "video.compress"}
 
 
-You install router classes by adding it to the ``CELERY_ROUTES`` setting::
+You install router classes by adding it to the :setting:`CELERY_ROUTES` setting::
 
     CELERY_ROUTES = (MyRouter, )
 
@@ -536,7 +543,7 @@ Router classes can also be added by name::
 
 
 For simple task name -> route mappings like the router example above, you can simply
-drop a dict into ``CELERY_ROUTES`` to get the same result::
+drop a dict into :setting:`CELERY_ROUTES` to get the same result::
 
     CELERY_ROUTES = ({"myapp.tasks.compress_video": {
                         "queue": "video",

+ 24 - 10
docs/userguide/tasks.rst

@@ -25,7 +25,7 @@ Given a function ``create_user``, that takes two arguments: ``username`` and
             create_user(username, password)
 
 For convenience there is a shortcut decorator that turns any function into
-a task, :func:`celery.decorators.task`:
+a task:
 
 .. code-block:: python
 
@@ -283,10 +283,13 @@ Message and routing options
 .. attribute:: Task.priority
 
     The message priority. A number from 0 to 9, where 0 is the
-    highest priority. **Note:** RabbitMQ does not support priorities yet.
+    highest priority. **Note:** At the time of writing, RabbitMQ did not yet
+    support priorities.
 
-Also see :ref:`executing-routing` for more information about message options,
-and :ref:`guide-routing`.
+.. seealso::
+
+    :ref:`executing-routing` for more information about message options,
+    and :ref:`guide-routing`.
 
 .. _task-example:
 
@@ -583,8 +586,12 @@ Good:
 We use :class:`~celery.task.sets.subtask` here to safely pass
 around the callback task. :class:`~celery.task.sets.subtask` is a 
 subclass of dict used to wrap the arguments and execution options
-for a single task invocation. See :doc:`tasksets` for more information about
-subtasks.
+for a single task invocation.
+
+
+.. seealso::
+
+    :ref:`sets-subtasks` for more information about subtasks.
 
 .. _task-performance-and-strategies:
 
@@ -607,8 +614,10 @@ However, executing a task does have overhead. A message needs to be sent, data
 may not be local, etc. So if the tasks are too fine-grained the additional
 overhead may not be worth it in the end.
 
-See the book `Art of Concurrency`_ for more information about task
-granularity.
+.. seealso::
+
+    The book `Art of Concurrency`_ has a whole section dedicated to the topic
+    of task granularity.
 
 .. _`Art of Concurrency`: http://oreilly.com/catalog/9780596521547
 
@@ -628,8 +637,13 @@ is going to be used.
 The easiest way to share data between workers is to use a distributed caching
 system, like `memcached`_.
 
-For more information about data-locality, please read
-http://research.microsoft.com/pubs/70001/tr-2003-24.pdf
+.. seealso::
+
+    The paper `Distributed Computing Economics`_ by Jim Gray is an excellent
+    introduction to the topic of data locality.
+
+.. _`Distributed Computing Economics`:
+    http://research.microsoft.com/pubs/70001/tr-2003-24.pdf
 
 .. _`memcached`: http://memcached.org/
 

+ 2 - 0
docs/userguide/tasksets.rst

@@ -12,6 +12,8 @@
 Subtasks
 ========
 
+.. versionadded:: 2.0
+
 The :class:`~celery.task.sets.subtask` class is used to wrap the arguments and
 execution options for a single task invocation::
 

+ 20 - 9
docs/userguide/workers.rst

@@ -17,8 +17,8 @@ You can start celeryd to run in the foreground by executing the command::
     $ celeryd --loglevel=INFO
 
 You probably want to use a daemonization tool to start
-``celeryd`` in the background. See :doc:`../cookbook/daemonizing` for help
-starting celeryd with some of the most popular daemonization tools.
+``celeryd`` in the background. See :ref:`daemonizing` for help
+using ``celeryd`` with popular daemonization tools.
 
 For a full list of available command line options see
 :mod:`~celery.bin.celeryd`, or simply execute the command::
@@ -92,6 +92,8 @@ run times and other factors.
 Time limits
 ===========
 
+.. versionadded:: 2.0
+
 A single task can potentially run forever; if you have lots of tasks
 waiting for some event that will never happen you will block the worker
 from processing new tasks indefinitely. The best way to defend against
@@ -115,16 +117,20 @@ time limit kills it:
         except SoftTimeLimitExceeded:
             clean_up_in_a_hurry()
 
-Time limits can also be set using the ``CELERYD_TASK_TIME_LIMIT`` /
-``CELERYD_SOFT_TASK_TIME_LIMIT`` settings.
+Time limits can also be set using the :setting:`CELERYD_TASK_TIME_LIMIT` /
+:setting:`CELERYD_SOFT_TASK_TIME_LIMIT` settings.
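
 For example (values assumed)::

     CELERYD_TASK_TIME_LIMIT = 120       # hard limit: the worker is killed
     CELERYD_SOFT_TASK_TIME_LIMIT = 110  # soft limit: SoftTimeLimitExceeded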
+
+.. note::
 
-**NOTE** Time limits does not currently work on Windows.
+    Time limits do not currently work on Windows.
 
 .. _worker-maxtasksperchild:
 
 Max tasks per child setting
 ===========================
 
+.. versionadded:: 2.0
+
 With this option you can configure the maximum number of tasks
 a worker can execute before it's replaced by a new process.
 
@@ -132,13 +138,15 @@ This is useful if you have memory leaks you have no control over
 for example from closed source C extensions.
 
 The option can be set using the ``--maxtasksperchild`` argument
-to ``celeryd`` or using the ``CELERYD_MAX_TASKS_PER_CHILD`` setting.
+to ``celeryd`` or using the :setting:`CELERYD_MAX_TASKS_PER_CHILD` setting.
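
 For example (value assumed)::

     $ celeryd --maxtasksperchild=100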
 
 .. _worker-remote-control:
 
 Remote control
 ==============
 
+.. versionadded:: 2.0
+
 Workers have the ability to be remote controlled using a high-priority
 broadcast message queue. The commands can be directed to all, or a specific
 list of workers.
@@ -213,8 +221,11 @@ destination hostname::
     >>> rate_limit("myapp.mytask", "200/m",
     ...            destination=["worker1.example.com"])
 
-**NOTE** This won't affect workers with the ``CELERY_DISABLE_RATE_LIMITS``
-setting on. To re-enable rate limits then you have to restart the worker.
+.. warning::
+
+    This won't affect workers with the
+    :setting:`CELERY_DISABLE_RATE_LIMITS` setting on. To re-enable rate limits
+    you then have to restart the worker.
 
 .. _worker-remote-shutdown:
 
@@ -286,7 +297,7 @@ Here's an example control command that restarts the broker connection:
 
 
 These can be added to task modules, or you can keep them in their own module
-then import them using the ``CELERY_IMPORTS`` setting::
+then import them using the :setting:`CELERY_IMPORTS` setting::
 
     CELERY_IMPORTS = ("myapp.worker.control", )