.. _guide-sets:
.. _guide-groups:

=======================================
 Groups, Chords, Chains and Callbacks
=======================================

.. contents::
    :local:

.. _sets-subtasks:
.. _groups-subtasks:

Subtasks
========

.. versionadded:: 2.0

The :class:`~celery.task.sets.subtask` type is used to wrap the arguments and
execution options for a single task invocation:

.. code-block:: python

    from celery import subtask

    subtask(task_name_or_cls, args, kwargs, options)

For convenience every task also has a shortcut to create subtasks:

.. code-block:: python

    task.subtask(args, kwargs, options)
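For example, to describe calling ``add(2, 2)`` with a ten second countdown
(a small sketch reusing the ``add`` task from the examples below; ``countdown``
is an ordinary ``apply_async`` execution option):

.. code-block:: python

    # both forms describe the same invocation: add(2, 2), delayed by 10 seconds
    add.subtask((2, 2), {}, {"countdown": 10})
    subtask("tasks.add", args=(2, 2), kwargs={}, options={"countdown": 10})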
:class:`~celery.task.sets.subtask` is actually a :class:`dict` subclass,
which means it can be serialized with JSON or other encodings that don't
support complex Python objects.

Also it can be regarded as a type, as the following usage works::

    >>> s = subtask("tasks.add", args=(2, 2), kwargs={})

    >>> subtask(dict(s))  # coerce dict into subtask

This makes it excellent as a means to pass callbacks around to tasks.
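Because it is just a dict, a subtask survives a plain JSON round-trip
(a minimal sketch; the :mod:`json` module is used here only for illustration,
normally the message serializer handles this for you):

.. code-block:: python

    import json

    from celery import subtask

    s = subtask("tasks.add", args=(2, 2), kwargs={})
    wire = json.dumps(dict(s))      # serialize to a JSON string
    s2 = subtask(json.loads(wire))  # coerce the decoded dict back into a subtask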
.. _sets-callbacks:
.. _groups-callbacks:

Callbacks
---------

Callbacks can be added to any task using the ``link`` argument
to ``apply_async``:

.. code-block:: python

    add.apply_async((2, 2), link=other_task.subtask())

The callback will only be applied if the task exited successfully,
and it will be applied with the return value of the parent task as argument.
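So a callback task simply receives the parent's return value as its first
positional argument.  A minimal sketch (``log_result`` is a hypothetical task
used only for illustration):

.. code-block:: python

    @celery.task
    def log_result(result):
        print("the parent task returned: %r" % (result, ))

    # log_result(4) will be applied once add(2, 2) has succeeded
    add.apply_async((2, 2), link=log_result.subtask())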
The best thing is that any arguments you add to ``subtask``
will be prepended to the arguments specified by the subtask itself!

If you have the subtask::

    >>> add.subtask(args=(10, ))

then ``subtask.delay(result)`` becomes::

    >>> add.apply_async(args=(result, 10))
    ...

Now let's execute our ``add`` task with a callback using partial
arguments::

    >>> add.apply_async((2, 2), link=add.subtask((8, )))

As expected this will first launch one task calculating :math:`2 + 2`, then
another task calculating :math:`4 + 8`.
.. _sets-taskset:
.. _groups-group:

Groups
======

The :class:`~celery.task.sets.group` enables easy invocation of several
tasks at once, and is then able to join the results in the same order as the
tasks were invoked.

``group`` takes a list of :class:`~celery.task.sets.subtask`'s::

    >>> from celery import group
    >>> from tasks import add

    >>> job = group([
    ...     add.subtask((2, 2)),
    ...     add.subtask((4, 4)),
    ...     add.subtask((8, 8)),
    ...     add.subtask((16, 16)),
    ...     add.subtask((32, 32)),
    ... ])

    >>> result = job.apply_async()

    >>> result.ready()  # have all subtasks completed?
    True
    >>> result.successful()  # were all subtasks successful?
    True
    >>> result.join()
    [4, 8, 16, 32, 64]

The first argument can alternatively be an iterator, like::

    >>> group(add.subtask((i, i)) for i in range(100))
.. _sets-results:

Results
-------

When a :class:`~celery.task.sets.group` is applied it returns a
:class:`~celery.result.TaskSetResult` object.

:class:`~celery.result.TaskSetResult` takes a list of
:class:`~celery.result.AsyncResult` instances and operates on them as if it
were a single task.

It supports the following operations (a short usage sketch follows the list):

* :meth:`~celery.result.TaskSetResult.successful`

    Returns :const:`True` if all of the subtasks finished
    successfully (e.g. did not raise an exception).

* :meth:`~celery.result.TaskSetResult.failed`

    Returns :const:`True` if any of the subtasks failed.

* :meth:`~celery.result.TaskSetResult.waiting`

    Returns :const:`True` if any of the subtasks
    is not ready yet.

* :meth:`~celery.result.TaskSetResult.ready`

    Returns :const:`True` if all of the subtasks
    are ready.

* :meth:`~celery.result.TaskSetResult.completed_count`

    Returns the number of completed subtasks.

* :meth:`~celery.result.TaskSetResult.revoke`

    Revokes all of the subtasks.

* :meth:`~celery.result.TaskSetResult.iterate`

    Iterates over the return values of the subtasks
    as they finish, one by one.

* :meth:`~celery.result.TaskSetResult.join`

    Gathers the results of all the subtasks
    and returns a list with them ordered by the order in which they
    were called.
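A short sketch tying a few of these together (assuming the ``job`` group from
the example above):

.. code-block:: python

    result = job.apply_async()

    result.completed_count()  # e.g. 3, while the group is still running
    result.waiting()          # True until every subtask is ready

    if result.ready() and result.successful():
        print(result.join())  # [4, 8, 16, 32, 64]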
.. _chords:

Chords
======

.. versionadded:: 2.3

A chord is a task that only executes after all of the tasks in a taskset have
finished executing.

Let's calculate the sum of the expression
:math:`1 + 1 + 2 + 2 + 3 + 3 \ldots n + n` where :math:`n` goes up to a
hundred.

First we need two tasks, :func:`add` and :func:`tsum` (:func:`sum` is
already a standard function):
.. code-block:: python

    @celery.task
    def add(x, y):
        return x + y

    @celery.task
    def tsum(numbers):
        return sum(numbers)
Now we can use a chord to calculate each addition step in parallel, and then
get the sum of the resulting numbers::

    >>> from celery import chord
    >>> from tasks import add, tsum

    >>> chord(add.subtask((i, i))
    ...       for i in xrange(100))(tsum.subtask()).get()
    9900

This is obviously a very contrived example; the overhead of messaging and
synchronization makes this a lot slower than its Python counterpart::

    sum(i + i for i in xrange(100))
The synchronization step is costly, so you should avoid using chords as much
as possible. Still, the chord is a powerful primitive to have in your toolbox
as synchronization is a required step for many parallel algorithms.

Let's break the chord expression down::

    >>> callback = tsum.subtask()
    >>> header = [add.subtask((i, i)) for i in xrange(100)]
    >>> result = chord(header)(callback)
    >>> result.get()
    9900
Remember, the callback can only be executed after all of the tasks in the
header have returned. Each step in the header is executed as a task, in
parallel, possibly on different nodes. The callback is then applied with
the return value of each task in the header. The task id returned by
:meth:`chord` is the id of the callback, so you can wait for it to complete
and get the final return value (but remember to :ref:`never have a task wait
for other tasks <task-synchronous-subtasks>`).
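In other words, the result you get back behaves like an ordinary
:class:`~celery.result.AsyncResult` for the callback task.  A minimal sketch,
reusing ``header`` and ``callback`` from the breakdown above:

.. code-block:: python

    result = chord(header)(callback)

    result.ready()  # True once the callback itself has finished
    result.get()    # blocks until then, and returns 9900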
.. _chord-important-notes:

Important Notes
---------------

By default the synchronization step is implemented by having a recurring task
poll the completion of the taskset every second, applying the subtask when
ready.

Example implementation:
.. code-block:: python

    @celery.task
    def unlock_chord(taskset, callback, interval=1, max_retries=None):
        if taskset.ready():
            # every task in the header is done: apply the callback
            # with the list of return values.
            return subtask(callback).delay(taskset.join())
        # not ready yet: check again in `interval` seconds.
        raise unlock_chord.retry(countdown=interval, max_retries=max_retries)
This is used by all result backends except Redis and Memcached, which
increment a counter after each task in the header, then apply the callback
when the counter exceeds the number of tasks in the set.

*Note:* chords do not properly work with Redis before version 2.2; you will
need to upgrade to at least 2.2 to use them.

The Redis and Memcached approach is a much better solution, but not easily
implemented in other backends (suggestions welcome!).
.. note::

    If you are using chords with the Redis result backend and also overriding
    the :meth:`Task.after_return` method, you need to make sure to call the
    super method or else the chord callback will not be applied.

    .. code-block:: python

        class MyTask(celery.Task):

            def after_return(self, *args, **kwargs):
                do_something()  # your own cleanup
                super(MyTask, self).after_return(*args, **kwargs)