
Merge branch 'PaulMcMillan/docs_fixes'

Ask Solem · 12 years ago · commit 992ed6f26e
2 changed files with 25 additions and 21 deletions
  1. docs/userguide/canvas.rst (+15, -15)
  2. docs/whatsnew-3.0.rst (+10, -6)

docs/userguide/canvas.rst (+15, -15)

@@ -151,7 +151,7 @@ Callbacks
 .. versionadded:: 3.0

 Callbacks can be added to any task using the ``link`` argument
-to ``apply_async``:
+to ``apply_async``::

     add.apply_async((2, 2), link=other_task.subtask())

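For context, a minimal runnable sketch of the corrected ``link`` example
(the app setup and task bodies here are illustrative, not part of this
commit)::

    from celery import Celery

    celery = Celery(broker='amqp://guest@localhost//')  # illustrative broker

    @celery.task
    def add(x, y):
        return x + y

    @celery.task
    def log_result(result):
        print('result: %r' % (result,))

    # log_result is applied with add's return value once add completes
    add.apply_async((2, 2), link=log_result.subtask())
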
@@ -232,7 +232,7 @@ The Primitives
 The primitives are also subtasks themselves, so that they can be combined
 in any number of ways to compose complex workflows.

-Here's some examples::
+Here's some examples:

 - Simple chain

@@ -478,13 +478,13 @@ you chain tasks together:

     # (4 + 4) * 8 * 10
     >>> res = chain(add.s(4, 4), mul.s(8), mul.s(10))
-    proj.tasks.add(4, 4) | proj.tasks.mul(8)
+    proj.tasks.add(4, 4) | proj.tasks.mul(8) | proj.tasks.mul(10)


 Calling the chain will call the tasks in the current process
 and return the result of the last task in the chain::

-    >>> res = chain(add.s(4, 4), mul.s(8), mul.s(10))
+    >>> res = chain(add.s(4, 4), mul.s(8), mul.s(10))()
     >>> res.get()
     640

@@ -492,7 +492,7 @@ And calling ``apply_async`` will create a dedicated
 task so that the act of calling the chain happens
 in a worker::

-    >>> res = chain(add.s(4, 4), mul.s(8), mul.s(10))
+    >>> res = chain(add.s(4, 4), mul.s(8), mul.s(10)).apply_async()
     >>> res.get()
     640

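Taken together, the two fixed hunks above distinguish the ways of running a
chain; a short sketch, assuming the guide's ``proj.tasks`` example module::

    from celery import chain
    from proj.tasks import add, mul

    # building only: equivalent to add.s(4, 4) | mul.s(8) | mul.s(10)
    sig = chain(add.s(4, 4), mul.s(8), mul.s(10))

    res = sig()              # calls the tasks in the current process
    print(res.get())         # -> 640

    res = sig.apply_async()  # the act of calling the chain happens in a worker
    print(res.get())         # -> 640
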
@@ -612,8 +612,8 @@ that it works on the group as a whole::
     [4, 8, 16, 32, 64]

 The :class:`~celery.result.GroupResult` takes a list of
-:class:`~celery.result.AsyncResult` instances and operates on them as if it was a
-single task.
+:class:`~celery.result.AsyncResult` instances and operates on them as
+if it was a single task.

 It supports the following operations:

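A quick sketch of that behaviour, again assuming the guide's ``add`` task::

    from celery import group

    # apply_async returns a GroupResult wrapping one AsyncResult per task
    res = group(add.s(i, i) for i in range(5)).apply_async()
    print(res.get())  # -> [0, 2, 4, 6, 8]
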
@@ -738,11 +738,11 @@ Example implementation:
         raise unlock_chord.retry(countdown=interval, max_retries=max_retries)


-This is used by all result backends except Redis and Memcached, which increment a
-counter after each task in the header, then applying the callback when the
-counter exceeds the number of tasks in the set. *Note:* chords do not properly
-work with Redis before version 2.2; you will need to upgrade to at least 2.2 to
-use them.
+This is used by all result backends except Redis and Memcached, which
+increment a counter after each task in the header, then applying the callback
+when the counter exceeds the number of tasks in the set. *Note:* chords do not
+properly work with Redis before version 2.2; you will need to upgrade to at
+least 2.2 to use them.

 The Redis and Memcached approach is a much better solution, but not easily
 implemented in other backends (suggestions welcome!).
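
The counter technique described above can be sketched roughly as follows;
this is an illustration of the idea only, not Celery's actual backend code
(the key name and helper function are made up)::

    import redis

    connection = redis.Redis()  # illustrative connection

    def on_header_task_done(group_id, total, callback):
        # INCR is atomic, so concurrent workers cannot both observe the
        # final count; whichever worker reaches it applies the callback
        finished = connection.incr('chord-counter-%s' % group_id)
        if finished >= total:
            callback.delay()
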
@@ -815,9 +815,9 @@ to call the starmap after 10 seconds::
 Chunks
 ------

--- Chunking lets you divide an iterable of work into pieces,
-   so that if you have one million objects, you can create
-   10 tasks with hundred thousand objects each.
+Chunking lets you divide an iterable of work into pieces, so that if
+you have one million objects, you can create 10 tasks with hundred
+thousand objects each.

 Some may worry that chunking your tasks results in a degradation
 of parallelism, but this is rarely true for a busy cluster
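
The paragraph rewritten above describes the ``chunks`` factory; a small
sketch, assuming the guide's ``add`` task::

    # divide 100 (x, y) pairs into 10 tasks of 10 items each
    # (list() so the iterable can be sliced on Python 3)
    res = add.chunks(list(zip(range(100), range(100))), 10)()
    print(res.get())  # -> ten lists of ten sums
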
docs/whatsnew-3.0.rst (+10, -6)

@@ -299,7 +299,7 @@ Tasks can now have callbacks and errbacks, and dependencies are recorded

                $ dot -Tpng graph.dot -o graph.png

-- A new special subtask called ``chain`` is also included::
+- A new special subtask called ``chain`` is also included:

     .. code-block:: python

@@ -351,7 +351,7 @@ transport option, which must be a list of numbers in **sorted order**::

 Priorities implemented in this way is not as reliable as
 priorities on the server side, which is why
-nickname the feature "quasi-priorities";
+the feature is nicknamed "quasi-priorities";
 **Using routing is still the suggested way of ensuring
 quality of service**, as client implemented priorities
 fall short in a number of ways, e.g. if the worker
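
For reference, the quasi-priorities being described are enabled through the
Redis transport's ``priority_steps`` option; a configuration sketch with
illustrative values::

    # celeryconfig.py
    BROKER_URL = 'redis://localhost:6379/0'
    BROKER_TRANSPORT_OPTIONS = {
        'priority_steps': [0, 3, 6, 9],  # must be in sorted order
    }
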
@@ -383,7 +383,9 @@ accidentally changed while switching to using blocking pop.
   since it was very difficult to migrate the TaskSet class to become
   a subtask.

-- A new shortcut has been added to tasks::
+- A new shortcut has been added to tasks:
+
+    ::

        >>> task.s(arg1, arg2, kw=1)

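A sketch of what the new shortcut expands to, assuming an ``add`` task as in
the canvas guide::

    s1 = add.s(2, 2, debug=True)
    s2 = add.subtask((2, 2), {'debug': True})
    assert s1 == s2  # .s() is shorthand for .subtask()
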
@@ -395,7 +397,9 @@ accidentally changed while switching to using blocking pop.

        >>> (add.s(2, 2), pow.s(2)).apply_async()

-- Subtasks can be "evaluated" using the ``~`` operator::
+- Subtasks can be "evaluated" using the ``~`` operator:
+
+    ::

        >>> ~add.s(2, 2)
        4
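
An illustration of the ``~`` shortcut, assuming the same ``add`` task;
evaluation applies the subtask and waits for its result::

    result = ~add.s(2, 2)  # roughly add.s(2, 2).apply_async().get()
    assert result == 4
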
@@ -839,7 +843,7 @@ In Other News
 - Worker/Celerybeat no longer logs the startup banner.

     Previously it would be logged with severity warning,
-    no it's only written to stdout.
+    now it's only written to stdout.

 - The ``contrib/`` directory in the distribution has been renamed to
   ``extra/``.
@@ -878,7 +882,7 @@ Internals
     :mod:`celery.utils.functional`

 - Now using :mod:`kombu.utils.encoding` instead of
-  `:mod:`celery.utils.encoding`.
+  :mod:`celery.utils.encoding`.

 - Renamed module ``celery.routes`` -> :mod:`celery.app.routes`.