.. _guide-canvas:

=============================
 Canvas: Designing Workflows
=============================

.. contents::
    :local:
    :depth: 2

.. _canvas-subtasks:
.. _canvas-signatures:

Signatures
==========

.. versionadded:: 2.0
You just learned how to call a task using the task's ``delay`` method
in the :ref:`calling <guide-calling>` guide, and this is often all you need,
but sometimes you may want to pass the signature of a task invocation to
another process or as an argument to another function.

A :func:`~celery.signature` wraps the arguments, keyword arguments, and execution options
of a single task invocation in a way such that it can be passed to functions
or even serialized and sent across the wire.
- You can create a signature for the ``add`` task using its name like this:

  .. code-block:: pycon

      >>> from celery import signature
      >>> signature('tasks.add', args=(2, 2), countdown=10)
      tasks.add(2, 2)

  This task has a signature of arity 2 (two arguments): ``(2, 2)``,
  and sets the countdown execution option to 10.

- or you can create one using the task's ``signature`` method:

  .. code-block:: pycon

      >>> add.signature((2, 2), countdown=10)
      tasks.add(2, 2)

- There is also a shortcut using star arguments:

  .. code-block:: pycon

      >>> add.s(2, 2)
      tasks.add(2, 2)

- Keyword arguments are also supported:

  .. code-block:: pycon

      >>> add.s(2, 2, debug=True)
      tasks.add(2, 2, debug=True)

- From any signature instance you can inspect the different fields:

  .. code-block:: pycon

      >>> s = add.signature((2, 2), {'debug': True}, countdown=10)
      >>> s.args
      (2, 2)
      >>> s.kwargs
      {'debug': True}
      >>> s.options
      {'countdown': 10}

- It supports the "Calling API", which means it supports ``delay`` and
  ``apply_async``, or being called directly.

  Calling the signature will execute the task inline in the current process:

  .. code-block:: pycon

      >>> add(2, 2)
      4

      >>> add.s(2, 2)()
      4

  ``delay`` is our beloved shortcut to ``apply_async`` taking star-arguments:

  .. code-block:: pycon

      >>> result = add.delay(2, 2)
      >>> result.get()
      4

  ``apply_async`` takes the same arguments as the
  :meth:`Task.apply_async <@Task.apply_async>` method:

  .. code-block:: pycon

      >>> add.apply_async(args, kwargs, **options)
      >>> add.signature(args, kwargs, **options).apply_async()

      >>> add.apply_async((2, 2), countdown=1)
      >>> add.signature((2, 2), countdown=1).apply_async()

- You can't define options with :meth:`~@Task.s`, but a chaining
  ``set`` call takes care of that:

  .. code-block:: pycon

      >>> add.s(2, 2).set(countdown=1)
      proj.tasks.add(2, 2)
Partials
--------

With a signature, you can execute the task in a worker:

.. code-block:: pycon

    >>> add.s(2, 2).delay()
    >>> add.s(2, 2).apply_async(countdown=1)

Or you can call it directly in the current process:

.. code-block:: pycon

    >>> add.s(2, 2)()
    4

Specifying additional args, kwargs, or options to ``apply_async``/``delay``
creates partials:

- Any arguments added will be prepended to the args in the signature:

  .. code-block:: pycon

      >>> partial = add.s(2)           # incomplete signature
      >>> partial.delay(4)             # 4 + 2
      >>> partial.apply_async((4,))    # same

- Any keyword arguments added will be merged with the kwargs in the signature,
  with the new keyword arguments taking precedence:

  .. code-block:: pycon

      >>> s = add.s(2, 2)
      >>> s.delay(debug=True)                    # -> add(2, 2, debug=True)
      >>> s.apply_async(kwargs={'debug': True})  # same

- Any options added will be merged with the options in the signature,
  with the new options taking precedence:

  .. code-block:: pycon

      >>> s = add.signature((2, 2), countdown=10)
      >>> s.apply_async(countdown=1)  # countdown is now 1
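The three merge rules above can be illustrated with a plain-Python sketch. The
``merge_partial`` helper below is hypothetical (not part of Celery's API); it
only mimics the semantics this list describes:

.. code-block:: python

    # Hypothetical helper mimicking the partial-application rules above;
    # this is NOT Celery's implementation, just an illustration.
    def merge_partial(sig_args, sig_kwargs, sig_options,
                      args=(), kwargs=None, options=None):
        return (
            tuple(args) + tuple(sig_args),       # new args are prepended
            {**sig_kwargs, **(kwargs or {})},    # new kwargs take precedence
            {**sig_options, **(options or {})},  # new options take precedence
        )

    # add.s(2) called with (4,) behaves like add(4, 2):
    print(merge_partial((2,), {}, {'countdown': 10},
                        args=(4,), options={'countdown': 1}))
    # → ((4, 2), {}, {'countdown': 1})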
You can also clone signatures to create derivatives:

.. code-block:: pycon

    >>> s = add.s(2)
    proj.tasks.add(2)

    >>> s.clone(args=(4,), kwargs={'debug': True})
    proj.tasks.add(4, 2, debug=True)
Immutability
------------

.. versionadded:: 3.0

Partials are meant to be used with callbacks: any tasks linked, or chord
callbacks, will be applied with the result of the parent task.
Sometimes you want to specify a callback that does not take
additional arguments, and in that case you can set the signature
to be immutable:

.. code-block:: pycon

    >>> add.apply_async((2, 2), link=reset_buffers.signature(immutable=True))

The ``.si()`` shortcut can also be used to create immutable signatures:

.. code-block:: pycon

    >>> add.apply_async((2, 2), link=reset_buffers.si())

Only the execution options can be set when a signature is immutable,
so it's not possible to call the signature with partial args/kwargs.

.. note::

    In this tutorial I sometimes apply the prefix operator `~` to signatures.
    You probably shouldn't use it in your production code, but it's a handy shortcut
    when experimenting in the Python shell:

    .. code-block:: pycon

        >>> ~sig

        >>> # is the same as
        >>> sig.delay().get()
.. _canvas-callbacks:

Callbacks
---------

.. versionadded:: 3.0

Callbacks can be added to any task using the ``link`` argument
to ``apply_async``:

.. code-block:: pycon

    >>> add.apply_async((2, 2), link=other_task.s())

The callback will only be applied if the task exited successfully,
and it will be applied with the return value of the parent task as argument.

As I mentioned earlier, any arguments you add to a signature
will be prepended to the arguments specified by the signature itself!

If you have the signature:

.. code-block:: pycon

    >>> sig = add.s(10)

then `sig.delay(result)` becomes:

.. code-block:: pycon

    >>> add.apply_async(args=(result, 10))
    ...

Now let's call our ``add`` task with a callback using partial
arguments:

.. code-block:: pycon

    >>> add.apply_async((2, 2), link=add.s(8))

As expected this will first launch one task calculating :math:`2 + 2`, then
another task calculating :math:`4 + 8`.
The Primitives
==============

.. versionadded:: 3.0

.. topic:: Overview

    - ``group``

      The group primitive is a signature that takes a list of tasks that should
      be applied in parallel.

    - ``chain``

      The chain primitive lets us link together signatures so that one is called
      after the other, essentially forming a *chain* of callbacks.

    - ``chord``

      A chord is just like a group but with a callback. A chord consists
      of a header group and a body, where the body is a task that should execute
      after all of the tasks in the header are complete.

    - ``map``

      The map primitive works like the built-in ``map`` function, but creates
      a temporary task where a list of arguments is applied to the task.
      E.g. ``task.map([1, 2])`` results in a single task
      being called, applying the arguments in order to the task function so
      that the result is:

      .. code-block:: python

          res = [task(1), task(2)]

    - ``starmap``

      Works exactly like map except the arguments are applied as ``*args``.
      For example ``add.starmap([(2, 2), (4, 4)])`` results in a single
      task calling:

      .. code-block:: python

          res = [add(2, 2), add(4, 4)]

    - ``chunks``

      Chunking splits a long list of arguments into parts, e.g. the operation:

      .. code-block:: pycon

          >>> items = zip(xrange(1000), xrange(1000))  # 1000 items
          >>> add.chunks(items, 10)

      will split the list of items into chunks of 10, resulting in 100
      tasks (each processing 10 items in sequence).
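The chunk arithmetic above can be sketched in plain Python (an illustration
only, not Celery's internals; the ``chunked`` helper is hypothetical):

.. code-block:: python

    # Plain-Python sketch of the chunking described above (not Celery internals).
    def chunked(items, n):
        """Split `items` into consecutive chunks of at most `n` elements."""
        items = list(items)
        return [items[i:i + n] for i in range(0, len(items), n)]

    items = list(zip(range(1000), range(1000)))  # 1000 (x, y) pairs
    chunks = chunked(items, 10)
    print(len(chunks))     # 100 chunks
    print(len(chunks[0]))  # each holding 10 items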
The primitives are also signature objects themselves, so they can be combined
in any number of ways to compose complex workflows.

Here are some examples:

- Simple chain

  Here's a simple chain; the first task executes, passing its return value
  to the next task in the chain, and so on.

  .. code-block:: pycon

      >>> from celery import chain

      >>> # 2 + 2 + 4 + 8
      >>> res = chain(add.s(2, 2), add.s(4), add.s(8))()
      >>> res.get()
      16

  This can also be written using pipes:

  .. code-block:: pycon

      >>> (add.s(2, 2) | add.s(4) | add.s(8))().get()
      16
- Immutable signatures

  Signatures can be partial, so arguments can be
  added to the existing arguments, but you may not always want that,
  for example if you don't want the result of the previous task in a chain.

  In that case you can mark the signature as immutable, so that the arguments
  cannot be changed:

  .. code-block:: pycon

      >>> add.signature((2, 2), immutable=True)

  There's also an ``.si`` shortcut for this:

  .. code-block:: pycon

      >>> add.si(2, 2)

  Now you can create a chain of independent tasks instead:

  .. code-block:: pycon

      >>> res = (add.si(2, 2) | add.si(4, 4) | add.s(8, 8))()
      >>> res.get()
      16

      >>> res.parent.get()
      8

      >>> res.parent.parent.get()
      4
- Simple group

  You can easily create a group of tasks to execute in parallel:

  .. code-block:: pycon

      >>> from celery import group
      >>> res = group(add.s(i, i) for i in xrange(10))()
      >>> res.get(timeout=1)
      [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
- Simple chord

  The chord primitive enables us to add a callback to be called when
  all of the tasks in a group have finished executing, which is often
  required for algorithms that aren't embarrassingly parallel:

  .. code-block:: pycon

      >>> from celery import chord
      >>> res = chord((add.s(i, i) for i in xrange(10)), xsum.s())()
      >>> res.get()
      90

  The above example creates 10 tasks that all start in parallel,
  and when all of them are complete the return values are combined
  into a list and sent to the ``xsum`` task.

  The body of a chord can also be immutable, so that the return value
  of the group is not passed on to the callback:

  .. code-block:: pycon

      >>> chord((import_contact.s(c) for c in contacts),
      ...       notify_complete.si(import_id)).apply_async()

  Note the use of ``.si`` above, which creates an immutable signature.
- Blow your mind by combining

  Chains can be partial too:

  .. code-block:: pycon

      >>> c1 = (add.s(4) | mul.s(8))

      # (16 + 4) * 8
      >>> res = c1(16)
      >>> res.get()
      160

  Which means that you can combine chains:

  .. code-block:: pycon

      # ((4 + 16) * 2 + 4) * 8
      >>> c2 = (add.s(4, 16) | mul.s(2) | (add.s(4) | mul.s(8)))
      >>> res = c2()
      >>> res.get()
      352

  Chaining a group together with another task will automatically
  upgrade it to be a chord:

  .. code-block:: pycon

      >>> c3 = (group(add.s(i, i) for i in xrange(10)) | xsum.s())
      >>> res = c3()
      >>> res.get()
      90

  Groups and chords accept partial arguments too, so in a chain
  the return value of the previous task is forwarded to all tasks in the group:

  .. code-block:: pycon

      >>> new_user_workflow = (create_user.s() | group(
      ...                      import_contacts.s(),
      ...                      send_welcome_email.s()))
      ... new_user_workflow.delay(username='artv',
      ...                         first='Art',
      ...                         last='Vandelay',
      ...                         email='art@vandelay.com')

  If you don't want to forward arguments to the group, then
  you can make the signatures in the group immutable:

  .. code-block:: pycon

      >>> res = (add.s(4, 4) | group(add.si(i, i) for i in xrange(10)))()
      >>> res.get()
      <GroupResult: de44df8c-821d-4c84-9a6a-44769c738f98 [
          bc01831b-9486-4e51-b046-480d7c9b78de,
          2650a1b8-32bf-4771-a645-b0a35dcc791b,
          dcbee2a5-e92d-4b03-b6eb-7aec60fd30cf,
          59f92e0a-23ea-41ce-9fad-8645a0e7759c,
          26e1e707-eccf-4bf4-bbd8-1e1729c3cce3,
          2d10a5f4-37f0-41b2-96ac-a973b1df024d,
          e13d3bdb-7ae3-4101-81a4-6f17ee21df2d,
          104b2be0-7b75-44eb-ac8e-f9220bdfa140,
          c5c551a5-0386-4973-aa37-b65cbeb2624b,
          83f72d71-4b71-428e-b604-6f16599a9f37]>

      >>> res.parent.get()
      8
.. _canvas-chain:

Chains
------

.. versionadded:: 3.0

Tasks can be linked together, which in practice means adding
a callback task:

.. code-block:: pycon

    >>> res = add.apply_async((2, 2), link=mul.s(16))
    >>> res.get()
    4

The linked task will be applied with the result of its parent
task as the first argument, which in the above case will result
in ``mul(4, 16)`` since the result is 4.

The results will keep track of any subtasks called by the original task,
and this can be accessed from the result instance:

.. code-block:: pycon

    >>> res.children
    [<AsyncResult: 8c350acf-519d-4553-8a53-4ad3a5c5aeb4>]

    >>> res.children[0].get()
    64
The result instance also has a :meth:`~@AsyncResult.collect` method
that treats the result as a graph, enabling you to iterate over
the results:

.. code-block:: pycon

    >>> list(res.collect())
    [(<AsyncResult: 7b720856-dc5f-4415-9134-5c89def5664e>, 4),
     (<AsyncResult: 8c350acf-519d-4553-8a53-4ad3a5c5aeb4>, 64)]

By default :meth:`~@AsyncResult.collect` will raise an
:exc:`~@IncompleteStream` exception if the graph is not fully
formed (one of the tasks has not completed yet),
but you can get an intermediate representation of the graph
too:

.. code-block:: pycon

    >>> for result, value in res.collect(intermediate=True):
    ...     ...

You can link together as many tasks as you like,
and signatures can be linked too:

.. code-block:: pycon

    >>> s = add.s(2, 2)
    >>> s.link(mul.s(4))
    >>> s.link(log_result.s())
You can also add *error callbacks* using the ``link_error`` argument:

.. code-block:: pycon

    >>> add.apply_async((2, 2), link_error=log_error.s())

    >>> add.signature((2, 2), link_error=log_error.s())

Since exceptions can only be serialized when pickle is used,
the error callbacks take the id of the parent task as argument instead:

.. code-block:: python

    from __future__ import print_function

    import os

    from proj.celery import app

    @app.task
    def log_error(task_id):
        result = app.AsyncResult(task_id)
        result.get(propagate=False)  # make sure result written.
        with open(os.path.join('/var/errors', task_id), 'a') as fh:
            print('--\n\n{0} {1} {2}'.format(
                task_id, result.result, result.traceback), file=fh)
To make it even easier to link tasks together there is
a special signature called :class:`~celery.chain` that lets
you chain tasks together:

.. code-block:: pycon

    >>> from celery import chain
    >>> from proj.tasks import add, mul

    >>> # (4 + 4) * 8 * 10
    >>> res = chain(add.s(4, 4), mul.s(8), mul.s(10))
    proj.tasks.add(4, 4) | proj.tasks.mul(8) | proj.tasks.mul(10)

Calling the chain will call the tasks in the current process
and return the result of the last task in the chain:

.. code-block:: pycon

    >>> res = chain(add.s(4, 4), mul.s(8), mul.s(10))()
    >>> res.get()
    640

It also sets ``parent`` attributes so that you can
work your way up the chain to get intermediate results:

.. code-block:: pycon

    >>> res.parent.get()
    64

    >>> res.parent.parent.get()
    8

    >>> res.parent.parent
    <AsyncResult: eeaad925-6778-4ad1-88c8-b2a63d017933>

Chains can also be made using the ``|`` (pipe) operator:

.. code-block:: pycon

    >>> (add.s(2, 2) | mul.s(8) | mul.s(10)).apply_async()
Graphs
~~~~~~

In addition you can work with the result graph as a
:class:`~celery.datastructures.DependencyGraph`:

.. code-block:: pycon

    >>> res = chain(add.s(4, 4), mul.s(8), mul.s(10))()

    >>> res.parent.parent.graph
    285fa253-fcf8-42ef-8b95-0078897e83e6(1)
        463afec2-5ed4-4036-b22d-ba067ec64f52(0)
    872c3995-6fa0-46ca-98c2-5a19155afcf0(2)
        285fa253-fcf8-42ef-8b95-0078897e83e6(1)
            463afec2-5ed4-4036-b22d-ba067ec64f52(0)

You can even convert these graphs to *dot* format:

.. code-block:: pycon

    >>> with open('graph.dot', 'w') as fh:
    ...     res.parent.parent.graph.to_dot(fh)

and create images:

.. code-block:: console

    $ dot -Tpng graph.dot -o graph.png

.. image:: ../images/result_graph.png
.. _canvas-group:

Groups
------

.. versionadded:: 3.0

A group can be used to execute several tasks in parallel.

The :class:`~celery.group` function takes a list of signatures:

.. code-block:: pycon

    >>> from celery import group
    >>> from proj.tasks import add

    >>> group(add.s(2, 2), add.s(4, 4))
    (proj.tasks.add(2, 2), proj.tasks.add(4, 4))

If you **call** the group, the tasks will be applied
one after the other in the current process, and a :class:`~celery.result.GroupResult`
instance is returned which can be used to keep track of the results,
or tell how many tasks are ready, and so on:

.. code-block:: pycon

    >>> g = group(add.s(2, 2), add.s(4, 4))
    >>> res = g()
    >>> res.get()
    [4, 8]

Group also supports iterators:

.. code-block:: pycon

    >>> group(add.s(i, i) for i in xrange(100))()

A group is a signature object, so it can be used in combination
with other signatures.
Group Results
~~~~~~~~~~~~~

The group task returns a special result too;
this result works just like normal task results, except
that it works on the group as a whole:

.. code-block:: pycon

    >>> from celery import group
    >>> from tasks import add

    >>> job = group([
    ...     add.s(2, 2),
    ...     add.s(4, 4),
    ...     add.s(8, 8),
    ...     add.s(16, 16),
    ...     add.s(32, 32),
    ... ])

    >>> result = job.apply_async()

    >>> result.ready()  # have all subtasks completed?
    True
    >>> result.successful()  # were all subtasks successful?
    True
    >>> result.get()
    [4, 8, 16, 32, 64]

The :class:`~celery.result.GroupResult` takes a list of
:class:`~celery.result.AsyncResult` instances and operates on them as
if it was a single task.
It supports the following operations:

* :meth:`~celery.result.GroupResult.successful`

  Return :const:`True` if all of the subtasks finished
  successfully (e.g. did not raise an exception).

* :meth:`~celery.result.GroupResult.failed`

  Return :const:`True` if any of the subtasks failed.

* :meth:`~celery.result.GroupResult.waiting`

  Return :const:`True` if any of the subtasks
  is not ready yet.

* :meth:`~celery.result.GroupResult.ready`

  Return :const:`True` if all of the subtasks
  are ready.

* :meth:`~celery.result.GroupResult.completed_count`

  Return the number of completed subtasks.

* :meth:`~celery.result.GroupResult.revoke`

  Revoke all of the subtasks.

* :meth:`~celery.result.GroupResult.join`

  Gather the results of all of the subtasks
  and return a list with them ordered by the order in which they
  were called.
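The aggregate operations above reduce over the states of the subtasks. As a
toy model in plain Python (not Celery's implementation), with each subtask
represented by a ``(ready, failed)`` pair instead of a real ``AsyncResult``:

.. code-block:: python

    # Toy model of the GroupResult aggregates above; `subtasks` is a stand-in
    # list of (ready, failed) flags, NOT real AsyncResult instances.
    subtasks = [(True, False), (True, True), (False, False)]

    successful = all(ready and not failed for ready, failed in subtasks)
    failed = any(failed for _, failed in subtasks)
    waiting = any(not ready for ready, _ in subtasks)
    ready = all(ready for ready, _ in subtasks)
    completed_count = sum(1 for ready, _ in subtasks if ready)

    print(successful, failed, waiting, ready, completed_count)
    # → False True True False 2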
.. _canvas-chord:

Chords
------

.. versionadded:: 2.3

.. note::

    Tasks used within a chord must *not* ignore their results. If the result
    backend is disabled for *any* task (header or body) in your chord you
    should read ":ref:`chord-important-notes`".

A chord is a task that only executes after all of the tasks in a group have
finished executing.

Let's calculate the sum of the expression
:math:`1 + 1 + 2 + 2 + 3 + 3 ... n + n` up to :math:`n = 100`.

First you need two tasks, :func:`add` and :func:`tsum` (:func:`sum` is
already a standard function):

.. code-block:: python

    @app.task
    def add(x, y):
        return x + y

    @app.task
    def tsum(numbers):
        return sum(numbers)

Now you can use a chord to calculate each addition step in parallel, and then
get the sum of the resulting numbers:

.. code-block:: pycon

    >>> from celery import chord
    >>> from tasks import add, tsum

    >>> chord(add.s(i, i)
    ...       for i in xrange(100))(tsum.s()).get()
    9900

This is obviously a very contrived example; the overhead of messaging and
synchronization makes this a lot slower than its Python counterpart:

.. code-block:: pycon

    >>> sum(i + i for i in xrange(100))

The synchronization step is costly, so you should avoid using chords as much
as possible. Still, the chord is a powerful primitive to have in your toolbox,
as synchronization is a required step for many parallel algorithms.

Let's break the chord expression down:

.. code-block:: pycon

    >>> callback = tsum.s()
    >>> header = [add.s(i, i) for i in range(100)]
    >>> result = chord(header)(callback)
    >>> result.get()
    9900

Remember, the callback can only be executed after all of the tasks in the
header have returned. Each step in the header is executed as a task, in
parallel, possibly on different nodes. The callback is then applied with
the return value of each task in the header. The task id returned by
:meth:`chord` is the id of the callback, so you can wait for it to complete
and get the final return value (but remember to :ref:`never have a task wait
for other tasks <task-synchronous-subtasks>`).
.. _chord-errors:

Error handling
~~~~~~~~~~~~~~

So what happens if one of the tasks raises an exception?

Errors will propagate to the callback, but the callback will not be executed;
instead the callback changes to the failure state, and the error is set
to the :exc:`~@ChordError` exception:

.. code-block:: pycon

    >>> c = chord([add.s(4, 4), raising_task.s(), add.s(8, 8)])
    >>> result = c()
    >>> result.get()

.. code-block:: pytb

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "*/celery/result.py", line 120, in get
        interval=interval)
      File "*/celery/backends/amqp.py", line 150, in wait_for
        raise meta['result']
    celery.exceptions.ChordError: Dependency 97de6f3f-ea67-4517-a21c-d867c61fcb47
        raised ValueError('something something',)

While the traceback may be different depending on which result backend is
being used, you can see that the error description includes the id of the task
that failed and a string representation of the original exception. You can also
find the original traceback in ``result.traceback``.

Note that the rest of the tasks will still execute, so the third task
(``add.s(8, 8)``) is still executed even though the middle task failed.
Also, the :exc:`~@ChordError` only shows the task that failed
first (in time): it does not respect the ordering of the header group.
.. _chord-important-notes:

Important Notes
~~~~~~~~~~~~~~~

Tasks used within a chord must *not* ignore their results. In practice this
means that you must enable a :const:`result_backend` in order to use
chords. Additionally, if :const:`task_ignore_result` is set to :const:`True`
in your configuration, be sure that the individual tasks to be used within
the chord are defined with :const:`ignore_result=False`. This applies to both
Task subclasses and decorated tasks.

Example Task subclass:

.. code-block:: python

    class MyTask(Task):
        abstract = True
        ignore_result = False

Example decorated task:

.. code-block:: python

    @app.task(ignore_result=False)
    def another_task(project):
        do_something()
By default the synchronization step is implemented by having a recurring task
poll the completion of the group every second, calling the signature when
ready.

Example implementation:

.. code-block:: python

    from celery import maybe_signature

    @app.task(bind=True)
    def unlock_chord(self, group, callback, interval=1, max_retries=None):
        if group.ready():
            return maybe_signature(callback).delay(group.join())
        raise self.retry(countdown=interval, max_retries=max_retries)

This is used by all result backends except Redis and Memcached: they
increment a counter after each task in the header, then apply the callback
when the counter exceeds the number of tasks in the set.

*Note:* chords do not work properly with Redis before version 2.2; you will
need to upgrade to at least 2.2 to use them.

The Redis and Memcached approach is a much better solution, but not easily
implemented in other backends (suggestions welcome!).
.. note::

    If you are using chords with the Redis result backend and also overriding
    the :meth:`Task.after_return` method, you need to make sure to call the
    super method or else the chord callback will not be applied.

    .. code-block:: python

        def after_return(self, *args, **kwargs):
            do_something()
            super(MyTask, self).after_return(*args, **kwargs)
.. _canvas-map:

Map & Starmap
-------------

:class:`~celery.map` and :class:`~celery.starmap` are built-in tasks
that call the task for every element in a sequence.

They differ from group in that

- only one task message is sent

- the operation is sequential.

For example using ``map``:

.. code-block:: pycon

    >>> from proj.tasks import add, xsum

    >>> ~xsum.map([range(10), range(100)])
    [45, 4950]

is the same as having a task doing:

.. code-block:: python

    @app.task
    def temp():
        return [xsum(range(10)), xsum(range(100))]

and using ``starmap``:

.. code-block:: pycon

    >>> ~add.starmap(zip(range(10), range(10)))
    [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

is the same as having a task doing:

.. code-block:: python

    @app.task
    def temp():
        return [add(i, i) for i in range(10)]

Both ``map`` and ``starmap`` are signature objects, so they can be used as
other signatures and combined in groups etc., for example
to call the starmap after 10 seconds:

.. code-block:: pycon

    >>> add.starmap(zip(range(10), range(10))).apply_async(countdown=10)
.. _canvas-chunks:

Chunks
------

Chunking lets you divide an iterable of work into pieces, so that if
you have one million objects, you can create 10 tasks with a hundred
thousand objects each.

Some may worry that chunking your tasks results in a degradation
of parallelism, but this is rarely true for a busy cluster; in practice,
since you are avoiding the overhead of messaging,
it may considerably increase performance.

To create a chunks signature you can use :meth:`@Task.chunks`:

.. code-block:: pycon

    >>> add.chunks(zip(range(100), range(100)), 10)

As with :class:`~celery.group`, the act of sending the messages for
the chunks will happen in the current process when called:

.. code-block:: pycon

    >>> from proj.tasks import add

    >>> res = add.chunks(zip(range(100), range(100)), 10)()
    >>> res.get()
    [[0, 2, 4, 6, 8, 10, 12, 14, 16, 18],
     [20, 22, 24, 26, 28, 30, 32, 34, 36, 38],
     [40, 42, 44, 46, 48, 50, 52, 54, 56, 58],
     [60, 62, 64, 66, 68, 70, 72, 74, 76, 78],
     [80, 82, 84, 86, 88, 90, 92, 94, 96, 98],
     [100, 102, 104, 106, 108, 110, 112, 114, 116, 118],
     [120, 122, 124, 126, 128, 130, 132, 134, 136, 138],
     [140, 142, 144, 146, 148, 150, 152, 154, 156, 158],
     [160, 162, 164, 166, 168, 170, 172, 174, 176, 178],
     [180, 182, 184, 186, 188, 190, 192, 194, 196, 198]]
while calling ``.apply_async`` will create a dedicated
task so that the individual tasks are applied in a worker
instead:

.. code-block:: pycon

    >>> add.chunks(zip(range(100), range(100)), 10).apply_async()

You can also convert chunks to a group:

.. code-block:: pycon

    >>> group = add.chunks(zip(range(100), range(100)), 10).group()

and with the group skew the countdown of each task by increments
of one:

.. code-block:: pycon

    >>> group.skew(start=1, stop=10)()

which means that the first task will have a countdown of 1, the second
a countdown of 2, and so on.
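The countdown sequence that skew assigns can be sketched in plain Python. The
``skewed_countdowns`` helper below is hypothetical, assuming only the
"increments of one" behaviour described above:

.. code-block:: python

    # Hypothetical sketch of the countdowns skew assigns, per the text above;
    # this is an illustration, not Celery's implementation.
    def skewed_countdowns(n_tasks, start=1, step=1):
        # first task gets `start`; each following task gets one `step` more
        return [start + i * step for i in range(n_tasks)]

    print(skewed_countdowns(5))  # → [1, 2, 3, 4, 5]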