Doc improvements

Ask Solem 8 years ago
commit 1b2c7bd2c6
3 changed files with 92 additions and 30 deletions
  1. docs/userguide/configuration.rst  +23 -6
  2. docs/userguide/tasks.rst  +8 -0
  3. docs/whatsnew-4.0.rst  +61 -24

docs/userguide/configuration.rst  +23 -6

@@ -394,7 +394,10 @@ This requires the :pypi:`tblib` library, that can be installed using
 
 
 .. code-block:: console
 
 
-    $ pip install 'tblib>=1.3.0'
+    $ pip install celery[tblib]
+
+See :ref:`bundles` for information on combining multiple extension
+requirements.
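
For example, multiple extras can be combined in a single install command
(the exact set of bundle names available depends on your Celery version):

.. code-block:: console

    $ pip install "celery[redis,tblib]"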
 
 
 .. setting:: task_ignore_result
 
 
@@ -838,7 +841,10 @@ Configuring the backend URL
 
 
     .. code-block:: console
 
 
-        $ pip install redis
+        $ pip install celery[redis]
+
+    See :ref:`bundles` for information on combining multiple extension
+    requirements.
 
 
 This backend requires the :setting:`result_backend`
 setting to be set to a Redis URL::
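
A typical value looks like this (host, port, and database number are
placeholders for your own deployment):

.. code-block:: python

    result_backend = 'redis://localhost:6379/0'
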
@@ -905,8 +911,10 @@ Cassandra backend settings
 
 
     .. code-block:: console
 
 
-        $ pip install cassandra-driver
+        $ pip install celery[cassandra]
 
 
+    See :ref:`bundles` for information on combining multiple extension
+    requirements.
+
 This backend requires the following configuration directives to be set.
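
A minimal sketch of such a configuration (the keyspace and table names are
placeholders; the individual directives are documented below):

.. code-block:: python

    cassandra_servers = ['localhost']
    cassandra_keyspace = 'celery'
    cassandra_table = 'tasks'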
 
 
 .. setting:: cassandra_servers
@@ -1047,7 +1055,10 @@ Riak backend settings
 
 
     .. code-block:: console
 
 
-        $ pip install riak
+        $ pip install celery[riak]
+
+    See :ref:`bundles` for information on combining multiple extension
+    requirements.
 
 
 This backend requires the :setting:`result_backend`
 setting to be set to a Riak URL::
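
For example (host, port, and bucket name are placeholders):

.. code-block:: python

    result_backend = 'riak://localhost:8087/celery'
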
@@ -1143,7 +1154,10 @@ Couchbase backend settings
 
 
     .. code-block:: console
 
 
-        $ pip install couchbase
+        $ pip install celery[couchbase]
+
+    See :ref:`bundles` for information on combining multiple extension
+    requirements.
 
 
 This backend can be configured via the :setting:`result_backend`
 set to a Couchbase URL:
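
For example (credentials, host, port, and bucket name are placeholders):

.. code-block:: python

    result_backend = 'couchbase://username:password@localhost:8091/mybucket'
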
@@ -1195,7 +1209,10 @@ CouchDB backend settings
 
 
     .. code-block:: console
 
 
-        $ pip install pycouchdb
+        $ pip install celery[couchdb]
+
+    See :ref:`bundles` for information on combining multiple extension
+    requirements.
 
 
 This backend can be configured via the :setting:`result_backend`
 set to a CouchDB URL::
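
For example (credentials, host, port, and container name are placeholders):

.. code-block:: python

    result_backend = 'couchdb://username:password@localhost:5984/mycontainer'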

docs/userguide/tasks.rst  +8 -0

@@ -738,6 +738,14 @@ in a :keyword:`try` ... :keyword:`except` statement:
         except FailWhaleError as exc:
             raise div.retry(exc=exc, max_retries=5)
 
 
+If you want to automatically retry on any error, simply use:
+
+.. code-block:: python
+
+    @app.task(autoretry_for=(Exception,))
+    def x():
+        ...
+
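The number of automatic retries can also be bounded; a minimal sketch using
the ``retry_kwargs`` option described in :ref:`task-autoretry`:

.. code-block:: python

    @app.task(autoretry_for=(Exception,), retry_kwargs={'max_retries': 5})
    def x():
        ...
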
 .. _task-options:
 
 
 List of Options

docs/whatsnew-4.0.rst  +61 -24

@@ -859,7 +859,7 @@ Amazon SQS transport now officially supported
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 
 The SQS broker transport has been rewritten to use async I/O and as such
-joins RabbitMQ and Redis as officially supported transports.
+joins RabbitMQ, Redis and Qpid as officially supported transports.
 
 
 The new implementation also takes advantage of long polling,
 and closes several issues related to using SQS as a broker.
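
For example, an SQS broker is selected with an ``sqs://`` broker URL
(the credentials below are placeholders, and can also be taken from the
environment if omitted):

.. code-block:: python

    broker_url = 'sqs://ABCDEFGH:SECRET@'
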
@@ -882,7 +882,15 @@ that we now have built-in support for it.
 
 
 For this a new ``autoretry_for`` argument is now supported by
 the task decorators, where you can specify a tuple of exceptions
-to automatically retry for.
+to automatically retry for:
+
+.. code-block:: python
+
+    from twitter.exceptions import FailWhaleError
+
+    @app.task(autoretry_for=(FailWhaleError,))
+    def refresh_timeline(user):
+        return twitter.refresh_timeline(user)
 
 
 See :ref:`task-autoretry` for more information.
 
 
@@ -890,31 +898,27 @@ Contributed by **Dmitry Malinovsky**.
 
 
 .. :sha:`75246714dd11e6c463b9dc67f4311690643bff24`
 
 
-``Task.replace``
-~~~~~~~~~~~~~~~~
+``Task.replace`` Improvements
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 
-Task.replace changed, removes Task.replace_in_chord.
+- ``self.replace(signature)`` can now replace any task, chord or group,
+  and the signature to replace with can be a chord, group or any other
+  type of signature.
 
 
-The two methods had almost the same functionality, but the old
-``Task.replace`` would force the new task to inherit the
-callbacks/errbacks of the existing task.
+- No longer inherits the callbacks and errbacks of the existing task.
 
 
-If you replace a node in a tree, then you wouldn't expect the new node to
-inherit the children of the old node, so this seems like unexpected
-behavior.
+    If you replace a node in a tree, then you wouldn't expect the new node to
+    inherit the children of the old node.
 
 
-So ``self.replace(sig)`` now works for any task, in addition ``sig`` can now
-be a group.
+- ``Task.replace_in_chord`` has been removed, use ``.replace`` instead.
 
 
-Groups are automatically converted to a chord, where the callback
-will "accumulate" the results of the group tasks.
+- If the replacement is a group, that group will be automatically converted
+  to a chord, where the callback "accumulates" the results of the group tasks.
 
 
-A new built-in task (`celery.accumulate` was added for this purpose)
+    A new built-in task (``celery.accumulate``) was added for this purpose.
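
A minimal sketch of the new behavior (``process_item`` is a hypothetical
task; ``self.replace`` accepts any signature, including a group):

.. code-block:: python

    from celery import group

    @app.task(bind=True)
    def process_all(self, items):
        # ``process_item`` is assumed to be another task defined elsewhere.
        # Replacing with a group turns it into a chord whose callback
        # accumulates the results of the group tasks.
        raise self.replace(group(process_item.s(item) for item in items))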
 
 
 Contributed by **Steeve Morin**, and **Ask Solem**.
 
 
-Closes #817
-
 Remote Task Tracebacks
 ~~~~~~~~~~~~~~~~~~~~~~
 
 
@@ -1124,7 +1128,6 @@ See :ref:`beat-solar` for more information.
 
 
 Contributed by **Mark Parncutt**.
 
 
-
 Result Backends
 ---------------
 
 
@@ -1132,7 +1135,7 @@ RPC Result Backend matured
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 
 Lots of bugs in the previously experimental RPC result backend have been fixed
-and we now consider it production ready.
+and it can now be considered ready for production use.
 
 
 Contributed by **Ask Solem**, **Morris Tweed**.
 
 
@@ -1208,6 +1211,15 @@ This package is fully Python 3 compliant just as this backend is:
 
 
 That installs the required package to talk to Consul's HTTP API from Python.
 
 
+You can also specify consul as an extension in your dependency on Celery:
+
+.. code-block:: console
+
+    $ pip install celery[consul]
+
+See :ref:`bundles` for more information.
+
+
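The result backend itself is then selected with a Consul URL; a sketch
assuming a local Consul agent on the default port:

.. code-block:: python

    result_backend = 'consul://localhost:8500/'
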
 Contributed by **Wido den Hollander**.
 
 
 Brand new Cassandra result backend
@@ -1219,6 +1231,15 @@ library is replacing the old result backend using the older
 
 
 See :ref:`conf-cassandra-result-backend` for more information.
 
 
+To depend on Celery with Cassandra as the result backend, use:
+
+.. code-block:: console
+
+    $ pip install celery[cassandra]
+
+You can also combine multiple extension requirements,
+please see :ref:`bundles` for more information.
+
 .. # XXX What changed?
 
 
 New Elasticsearch result backend introduced
@@ -1226,6 +1247,15 @@ New Elasticsearch result backend introduced
 
 
 See :ref:`conf-elasticsearch-result-backend` for more information.
 
 
+To depend on Celery with Elasticsearch as the result backend, use:
+
+.. code-block:: console
+
+    $ pip install celery[elasticsearch]
+
+You can also combine multiple extension requirements,
+please see :ref:`bundles` for more information.
+
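The backend is then selected with an Elasticsearch URL (the index and
document type names here are placeholders):

.. code-block:: python

    result_backend = 'elasticsearch://localhost:9200/celery/task_result'
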
 Contributed by **Ahmet Demir**.
 
 
 New File-system result backend introduced
@@ -1811,15 +1841,20 @@ Result
 TaskSet
 ~~~~~~~
 
 
-TaskSet has been renamed to group and TaskSet will be removed in version 4.0.
+TaskSet has been removed, as it was replaced by the ``group`` construct in
+Celery 3.0.
 
 
-Old::
+If you have code like this:
+
+.. code-block:: pycon
 
 
     >>> from celery.task import TaskSet
 
 
     >>> TaskSet(add.subtask((i, i)) for i in xrange(10)).apply_async()
 
 
-New::
+You need to replace that with:
+
+.. code-block:: pycon
 
 
     >>> from celery import group
     >>> group(add.s(i, i) for i in xrange(10))()
@@ -1966,7 +2001,9 @@ Changes to internal API
 
 
 - ``celery.utils.is_iterable`` has been removed.
 
 
-    Instead use::
+    Instead use:
+
+    .. code-block:: python
 
 
         isinstance(x, collections.Iterable)