.. _whatsnew-2.6:

==========================
 What's new in Celery 2.6
==========================

Celery aims to be a flexible and reliable, best-of-breed solution
to process vast amounts of messages in a distributed fashion, while
providing operations with the tools to maintain such a system.

Celery has a large and diverse community of users and contributors;
you should come join us :ref:`on IRC <irc-channel>`
or :ref:`our mailing-list <mailing-list>`.

To read more about Celery you should visit our `website`_.

While this version is backward compatible with previous versions
it is important that you read the following section.

If you use Celery in combination with Django you must also
read the `django-celery changelog`_ and upgrade to `django-celery 2.6`_.

This version is officially supported on CPython 2.5, 2.6, 2.7, 3.2 and 3.3,
as well as PyPy and Jython.

.. _`website`: http://celeryproject.org/
.. _`django-celery changelog`: http://bit.ly/djcelery-26-changelog
.. _`django-celery 2.6`: http://pypi.python.org/pypi/django-celery/

.. contents::
    :local:
    :depth: 1

.. _v260-important:

Important Notes
===============

Eventloop
---------

The worker is now running *without threads* when used with AMQP or Redis as a
broker, resulting in:

- Much better performance overall.

- Fixes several edge case race conditions.

- Sub-millisecond timer precision.

- Faster shutdown times.

The transports supported are: ``amqplib``, ``librabbitmq``, and ``redis``.

Hopefully this can be extended to include additional broker transports
in the future.

For increased reliability the :setting:`CELERY_FORCE_EXECV` setting is enabled
by default if the eventloop is not used.

Now depends on :mod:`billiard`.
-------------------------------

Billiard is a fork of the multiprocessing module containing
the no-execv patch by sbt (http://bugs.python.org/issue8713),
and also contains the pool improvements previously located in Celery.

This fork was necessary as changes to the C extension code were required
for the no-execv patch to work.

- Issue #625
- Issue #627
- Issue #640
- `django-celery #122 <http://github.com/celery/django-celery/issues/122>`_
- `django-celery #124 <http://github.com/celery/django-celery/issues/124>`_

Last version to support Python 2.5
----------------------------------

The 2.6 series will be the last series to support Python 2.5.

With several other distributions taking the step to discontinue
Python 2.5 support, we feel that it is time to do the same.

Python 2.6 should be widely available at this point, and we urge
you to upgrade, but if that is not possible you still have the option
to continue using the Celery 2.6 series, and important bug fixes
introduced in Celery 2.7 will be back-ported to Celery 2.6 upon request.

.. _v260-news:

News
====

Chaining Tasks
--------------

Tasks can now have callbacks and errbacks, and dependencies are recorded.

- The task message format has been updated with two new extension keys.

    Both keys can be empty/undefined or a list of subtasks.

    - ``callbacks``

        Applied if the task exits successfully, with the result
        of the task as an argument.

    - ``errbacks``

        Applied if an error occurred while executing the task,
        with the uuid of the task as an argument.  Since it may not be possible
        to serialize the exception instance, it passes the uuid of the task
        instead.  The uuid can then be used to retrieve the exception and
        traceback of the task from the result backend.

- ``link`` and ``link_error`` keyword arguments have been added
  to ``apply_async``.

    These add callbacks and errbacks to the task, and
    you can read more about them at :ref:`calling-links`.
    A short usage sketch is included at the end of this section.

- We now track what subtasks a task sends, and some result backends
  support retrieving this information.

    - task.request.children

        Contains the result instances of the subtasks
        the currently executing task has applied.

    - AsyncResult.children

        Returns the task's dependencies, as a list of
        ``AsyncResult``/``ResultSet`` instances.

    - AsyncResult.iterdeps

        Recursively iterates over the task's dependencies,
        yielding `(parent, node)` tuples.

        Raises ``IncompleteStream`` if any of the dependencies
        has not returned yet.

    - AsyncResult.graph

        A ``DependencyGraph`` of the task's dependencies.

        This can also be used to convert to dot format:

        .. code-block:: python

            with open("graph.dot", "w") as fh:
                result.graph.to_dot(fh)

        which can then be used to produce an image::

            $ dot -Tpng graph.dot -o graph.png

- A new special subtask called ``chain`` is also included:

    .. code-block:: python

        >>> from celery import chain

        # (2 + 2) * 8 / 2
        >>> res = chain(add.subtask((2, 2)),
        ...             mul.subtask((8, )),
        ...             div.subtask((2,))).apply_async()
        >>> res.get() == 16

        >>> res.parent.get() == 32

        >>> res.parent.parent.get() == 4

- Adds :meth:`AsyncResult.get_leaf`

    Waits and returns the result of the leaf subtask.
    That is the last node found when traversing the graph,
    but this means that the graph can be 1-dimensional only (in effect
    a list).

- Adds ``subtask.link(subtask)`` + ``subtask.link_error(subtask)``

    Shortcut to ``s.options.setdefault("link", []).append(subtask)``

- Adds ``subtask.flatten_links()``

    Returns a flattened list of all dependencies (recursively).
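
As a rough sketch of how the callbacks and errbacks described above might be
wired up, assuming the usual ``add`` example task and a hypothetical
``log_error`` errback task:

.. code-block:: python

    @celery.task()
    def log_error(task_id):
        # Errbacks only receive the uuid of the failed task; the exception
        # and traceback can be fetched from the result backend.
        result = celery.AsyncResult(task_id)
        print("Task %s raised: %r" % (task_id, result.result))

    # Call add(2, 2); on success apply add(result, 16),
    # on failure apply log_error(uuid).
    add.apply_async((2, 2),
                    link=add.subtask((16, )),
                    link_error=log_error.subtask())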

`group`/`chord`/`chain` are now subtasks
----------------------------------------

- The source code for these, including ``subtask``, has been moved
  to a new module: ``celery.canvas``.

- ``group`` is no longer an alias to ``TaskSet``, but a new class altogether,
  since it was very difficult to migrate the ``TaskSet`` class to become
  a subtask.

- A new shortcut has been added to tasks::

    >>> task.s(arg1, arg2, kw=1)

  as a shortcut to::

    >>> task.subtask((arg1, arg2), {"kw": 1})

- Tasks can be chained by using the ``|`` operator::

    >>> (add.s(2, 2) | pow.s(2)).apply_async()

- Subtasks can be "evaluated" using the ``~`` operator::

    >>> ~add.s(2, 2)
    4

    >>> ~(add.s(2, 2) | pow.s(2))

  is the same as::

    >>> chain(add.s(2, 2), pow.s(2)).apply_async().get()

- A new ``subtask_type`` key has been added to the subtask dicts.

    This can be the string ``"chord"``, ``"group"``, ``"chain"``, ``"chunks"``,
    ``"xmap"``, or ``"xstarmap"``.

- ``maybe_subtask`` now uses ``subtask_type`` to reconstruct
  the object, to be used when using non-pickle serializers.

- The logic for these operations has been moved to dedicated
  tasks: ``celery.chord``, ``celery.chain`` and ``celery.group``.

- ``subtask`` no longer inherits from ``AttributeDict``.

    It's now a pure dict subclass with properties for attribute
    access to the relevant keys.

- The repr's now outputs how the sequence would look imperatively::

    >>> from celery import chord

    >>> (chord([add.s(i, i) for i in xrange(10)], xsum.s())
    ...   | pow.s(2))

    tasks.xsum([tasks.add(0, 0),
                tasks.add(1, 1),
                tasks.add(2, 2),
                tasks.add(3, 3),
                tasks.add(4, 4),
                tasks.add(5, 5),
                tasks.add(6, 6),
                tasks.add(7, 7),
                tasks.add(8, 8),
                tasks.add(9, 9)]) | tasks.pow(2)
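
Since subtasks are plain dicts underneath, the new ``subtask_type`` key is
what allows the correct class to be rebuilt after a round-trip through a
non-pickle serializer.  A rough sketch (the exact import path of
``maybe_subtask`` may differ):

.. code-block:: python

    from celery import chain
    from celery.canvas import maybe_subtask

    sig = chain(add.s(2, 2), pow.s(2))
    as_dict = dict(sig)              # what ends up in the message body
    as_dict["subtask_type"]          # -> "chain"

    # maybe_subtask() uses subtask_type to turn the plain dict back into
    # a chain, e.g. after it has been deserialized from JSON.
    rebuilt = maybe_subtask(as_dict)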

Additional control commands made public
----------------------------------------

- ``add_consumer``/``cancel_consumer``

    Tells workers to consume from a new queue, or cancel consuming from a
    queue.  This command has also been changed so that the worker remembers
    the queues added, so that the change will persist even if
    the connection is re-connected.

    These commands are available programmatically as
    :meth:`@control.add_consumer` / :meth:`@control.cancel_consumer`:

    .. code-block:: python

        >>> celery.control.add_consumer(queue_name,
        ...                             destination=["w1.example.com"])
        >>> celery.control.cancel_consumer(queue_name,
        ...                                destination=["w1.example.com"])

    or using the :program:`celery control` command::

        $ celery control -d w1.example.com add_consumer queue
        $ celery control -d w1.example.com cancel_consumer queue

    .. note::

        Remember that a control command without *destination* will be
        sent to **all workers**.

- ``autoscale``

    Tells workers with `--autoscale` enabled to change autoscale
    max/min concurrency settings.

    This command is available programmatically as :meth:`@control.autoscale`:

    .. code-block:: python

        >>> celery.control.autoscale(max=10, min=5,
        ...                          destination=["w1.example.com"])

    or using the :program:`celery control` command::

        $ celery control -d w1.example.com autoscale 10 5

- ``pool_grow``/``pool_shrink``

    Tells workers to add or remove pool processes.

    These commands are available programmatically as
    :meth:`@control.pool_grow` / :meth:`@control.pool_shrink`:

    .. code-block:: python

        >>> celery.control.pool_grow(2, destination=["w1.example.com"])
        >>> celery.control.pool_shrink(2, destination=["w1.example.com"])

    or using the :program:`celery control` command::

        $ celery control -d w1.example.com pool_grow 2
        $ celery control -d w1.example.com pool_shrink 2

- :program:`celery control` now supports ``rate_limit`` & ``time_limit``
  commands.

    See ``celery control --help`` for details.

Crontab now supports Day of Month, and Month of Year arguments
----------------------------------------------------------------

See the updated list of examples at :ref:`beat-crontab`.
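
For instance, a small sketch of a schedule entry using the new arguments
(the task name is purely illustrative):

.. code-block:: python

    from celery.schedules import crontab

    CELERYBEAT_SCHEDULE = {
        "monthly-report": {
            "task": "tasks.generate_report",   # hypothetical task
            # midnight on the 2nd day of every month
            "schedule": crontab(minute=0, hour=0, day_of_month="2"),
        },
    }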

Immutable subtasks
------------------

``subtask``'s can now be immutable, which means that the arguments
will not be modified when applying callbacks::

    >>> chain(add.s(2, 2), clear_static_electricity.si())

means it will not receive the argument of the parent task,
and ``.si()`` is a shortcut to::

    >>> clear_static_electricity.subtask(immutable=True)
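
To make the contrast explicit, a small sketch (``fetch``, ``store`` and
``notify_admin`` are hypothetical tasks):

.. code-block:: python

    # store() is called with the return value of fetch() as an extra argument.
    chain(fetch.s("http://example.com"), store.s()).apply_async()

    # notify_admin() is immutable: it is called with only its own arguments
    # and ignores the result of the preceding task.
    chain(fetch.s("http://example.com"), store.s(),
          notify_admin.si("import done")).apply_async()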

Logging Improvements
--------------------

Logging support now conforms better with best practices.

- Classes used by the worker no longer use ``app.get_default_logger``, but use
  `celery.utils.log.get_logger` which simply gets the logger without setting
  the level, and adds a ``NullHandler``.

- Loggers are no longer passed around, instead every module using logging
  defines a module global logger that is used throughout.

- All loggers inherit from a common logger called "celery".

- Before, ``task.get_logger`` would set up a new logger for every task,
  and even set the loglevel.  This is no longer the case.

    - Instead all task loggers now inherit from a common "celery.task" logger
      that is set up when programs call `setup_logging_subsystem`.

    - Instead of using ``LoggerAdapter`` to augment the formatter with
      the ``task_id`` and ``task_name`` fields, the task base logger now uses
      a special formatter adding these values at runtime from the
      currently executing task.

- In fact, ``task.get_logger`` is no longer recommended, it is better
  to add a module-level logger to your tasks module.

    For example, like this:

    .. code-block:: python

        from celery.utils.log import get_task_logger

        logger = get_task_logger(__name__)

        @celery.task()
        def add(x, y):
            logger.debug("Adding %r + %r" % (x, y))
            return x + y

    The resulting logger will then inherit from the ``"celery.task"`` logger
    so that the current task name and id are included in logging output.

- Redirected output from stdout/stderr is now logged to a "celery.redirected"
  logger.

- In addition a few ``warnings.warn`` have been replaced with ``logger.warn``.

- Now avoids the 'no handlers for logger multiprocessing' warning.
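
The same module-level convention applies outside of task modules; a minimal
sketch of what that looks like:

.. code-block:: python

    from celery.utils.log import get_logger

    # One module-global logger per module; get_logger() simply fetches the
    # logger and attaches a NullHandler, it does not set a level.
    logger = get_logger(__name__)

    def flush_events(events):
        logger.info("flushing %d events", len(events))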

Task registry no longer global
------------------------------

Every Celery instance now has its own task registry.

You can make apps share registries by specifying it::

    >>> app1 = Celery()
    >>> app2 = Celery(tasks=app1.tasks)

Note that tasks are shared between registries by default, so that
tasks will be added to every subsequently created task registry.

As an alternative tasks can be private to specific task registries
by setting the ``shared`` argument to the ``@task`` decorator::

    @celery.task(shared=False)
    def add(x, y):
        return x + y

Abstract tasks are now lazily bound.
-------------------------------------

The :class:`~celery.task.Task` class is no longer bound to an app
by default, it will first be bound (and configured) when
a concrete subclass is created.

This means that you can safely import and make task base classes,
without also initializing the default app environment::

    from celery.task import Task

    class DebugTask(Task):
        abstract = True

        def __call__(self, *args, **kwargs):
            print("CALLING %r" % (self, ))
            return self.run(*args, **kwargs)

    >>> DebugTask
    <unbound DebugTask>

    >>> @celery1.task(base=DebugTask)
    ... def add(x, y):
    ...     return x + y
    >>> add.__class__
    <class add of <Celery default:0x101510d10>>

Lazy task decorators
--------------------

The ``@task`` decorator is now lazy when used with custom apps.

That is, if ``accept_magic_kwargs`` is enabled (hereby called "compat mode"),
the task decorator executes inline like before, however for custom apps the
``@task`` decorator now returns a special ``PromiseProxy`` object that is only
evaluated on access.

All promises will be evaluated when ``app.finalize`` is called, or implicitly
when the task registry is first used.
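
A small sketch of what this means in practice (names are illustrative):

.. code-block:: python

    from celery import Celery

    celery = Celery(broker="amqp://")

    @celery.task()
    def add(x, y):
        return x + y

    # `add` is a PromiseProxy at this point; the real task class is only
    # created when the proxy is accessed, or when the app is finalized:
    celery.finalize()

    add.delay(2, 2)    # by now the proxy has been evaluated into a task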

Smart `--app` option
--------------------

The :option:`--app` option now 'auto-detects':

- If the provided path is a module it tries to get an
  attribute named 'celery'.

- If the provided path is a package it tries
  to import a submodule named 'celery',
  and get the celery attribute from that module.

E.g. if you have a project named 'proj' where the
celery app is located in 'from proj.celery import celery',
then the following will be equivalent::

    $ celery worker --app=proj
    $ celery worker --app=proj.celery:
    $ celery worker --app=proj.celery:celery

In Other News
-------------

- New :setting:`CELERYD_WORKER_LOST_WAIT` to control the timeout in
  seconds before :exc:`billiard.WorkerLostError` is raised
  when a worker can not be signalled (Issue #595).

    Contributed by Brendon Crawford.

- Redis event monitor queues are now automatically deleted (Issue #436).

- App instance factory methods have been converted to be cached
  descriptors that create a new subclass on access.

    This means that e.g. ``celery.Worker`` is an actual class
    and will work as expected when subclassed::

        class Worker(celery.Worker):
            ...

- New signal: :signal:`task-success`.

- Multiprocessing logs are now only emitted if the :envvar:`MP_LOG`
  environment variable is set.

- The Celery instance can now be created with a broker URL:

    .. code-block:: python

        celery = Celery(broker="redis://")

- Result backends can now be set using a URL.

    Currently only supported by Redis.  Example use::

        CELERY_RESULT_BACKEND = "redis://localhost/1"

- Heartbeat frequency now every 5s, and frequency sent with event.

    The heartbeat frequency is now available in the worker event messages,
    so that clients can decide when to consider workers offline based on
    this value.

- Module ``celery.actors`` has been removed, and will be part of `cl` instead.

- Introduces new ``celery`` command, which is an entry point for all other
  commands.

    The main program for this command can be run by calling
    ``celery.start()``.

- Annotations now support decorators if the key starts with '@'.

    E.g.:

    .. code-block:: python

        from functools import wraps

        def debug_args(fun):

            @wraps(fun)
            def _inner(*args, **kwargs):
                print("ARGS: %r" % (args, ))
                return fun(*args, **kwargs)
            return _inner

        CELERY_ANNOTATIONS = {
            "tasks.add": {"@__call__": debug_args},
        }

    Also tasks are now always bound by class so that
    annotated methods end up being bound.

- Bugreport now available as a command and broadcast command.

    - Get it from a Python repl::

        >>> import celery
        >>> print(celery.bugreport())

    - Using the ``celery`` command-line program::

        $ celery report

    - Get it from remote workers::

        $ celery inspect report

- Module ``celery.log`` moved to :mod:`celery.app.log`.

- Module ``celery.task.control`` moved to :mod:`celery.app.control`.

- ``AsyncResult.task_id`` renamed to ``AsyncResult.id``.

- ``TaskSetResult.taskset_id`` renamed to ``.id``.

- ``xmap(task, sequence)`` and ``xstarmap(task, sequence)``

    Returns a list of the results applying the task to every item
    in the sequence.

    Example::

        >>> from celery import xstarmap

        >>> xstarmap(add, zip(range(10), range(10))).apply_async()
        [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

- ``chunks(task, sequence, chunksize)``
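
    Splits a long list of task invocations into a smaller number of "chunk"
    tasks.  A rough sketch following the signature above, assuming ``chunks``
    can be imported from ``celery`` just like ``xstarmap``, and that the
    ``task`` argument is given as a subtask::

        >>> from celery import chunks

        # 100 add() calls sent as 10 messages of 10 calls each
        >>> chunks(add.s(), zip(range(100), range(100)), 10).apply_async()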

- ``group.skew(start=, stop=, step=)``

    Skew will skew the countdown for the individual tasks in a group,
    e.g. with a group::

        >>> g = group(add.s(i, i) for i in xrange(10))

    Skewing the tasks from 0 seconds to 10 seconds::

        >>> g.skew(stop=10)

    Will have the first task execute in 0 seconds, the second in 1 second,
    the third in 2 seconds and so on.

- 99% test coverage.

- :setting:`CELERY_QUEUES` can now be a list/tuple of :class:`~kombu.Queue`
  instances.

    Internally :attr:`@amqp.queues` is now a mapping of name/Queue instances,
    instead of converting on the fly.
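
    A minimal sketch (queue and exchange names are illustrative):

    .. code-block:: python

        from kombu import Exchange, Queue

        CELERY_QUEUES = (
            Queue("default", Exchange("default"), routing_key="default"),
            Queue("images", Exchange("media"), routing_key="media.image"),
        )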

- Can now specify connection for :class:`@control.inspect`.

    .. code-block:: python

        i = celery.control.inspect(connection=BrokerConnection("redis://"))
        i.active_queues()

- Module :mod:`celery.app.task` is now a module instead of a package.

    The setup.py install script will try to remove the old package,
    if that doesn't work for some reason you have to remove
    it manually; you can do so by executing the command::

        $ rm -r $(dirname $(python -c 'import celery;print(celery.__file__)'))/app/task/

- :setting:`CELERY_FORCE_EXECV` is now enabled by default.

    If the old behavior is wanted the setting can be set to False,
    or by using the new :option:`--no-execv` option to :program:`celery worker`.

- Deprecated module ``celery.conf`` has been removed.

- The :setting:`CELERY_TIMEZONE` setting now always requires the :mod:`pytz`
  library to be installed (except if the timezone is set to `UTC`).

- The Tokyo Tyrant backend has been removed and is no longer supported.

- Now uses :func:`~kombu.common.maybe_declare` to cache queue declarations.

- There is no longer a global default for the
  :setting:`CELERYBEAT_MAX_LOOP_INTERVAL` setting, it is instead
  set by individual schedulers.

- Worker: now truncates very long message bodies in error reports.

- The :envvar:`CELERY_BENCH` environment variable will now also list
  memory usage statistics at worker shutdown.

- Worker: now only ever uses a single timer for all timing needs,
  and instead sets different priorities.

Internals
---------

* Compat modules are now generated dynamically upon use.

    These modules are ``celery.messaging``, ``celery.log``,
    ``celery.decorators`` and ``celery.registry``.

* :mod:`celery.utils` refactored into multiple modules:

    - :mod:`celery.utils.text`
    - :mod:`celery.utils.imports`
    - :mod:`celery.utils.functional`

* Now using :mod:`kombu.utils.encoding` instead of
  :mod:`celery.utils.encoding`.

* Renamed module ``celery.routes`` -> :mod:`celery.app.routes`.

* Renamed package ``celery.db`` -> :mod:`celery.backends.database`.

* Renamed module ``celery.abstract`` -> :mod:`celery.worker.abstract`.

* Command-line docs are now parsed from the module docstrings.

* Test suite directory has been reorganized.

* :program:`setup.py` now reads docs from the :file:`requirements/` directory.

.. _v260-experimental:

Experimental
============

:mod:`celery.contrib.methods`: Task decorator for methods
------------------------------------------------------------

This is an experimental module containing a task
decorator, and a task decorator filter, that can be used
to create tasks out of methods::

    from celery.contrib.methods import task_method

    class Counter(object):

        def __init__(self):
            self.value = 1

        @celery.task(name="Counter.increment", filter=task_method)
        def increment(self, n=1):
            self.value += n
            return self.value

See :mod:`celery.contrib.methods` for more information.

.. _v260-unscheduled-removals:

Unscheduled Removals
====================

Usually we don't make backward incompatible removals,
but these removals should have no major effect.

- The following settings have been renamed:

    - ``CELERYD_ETA_SCHEDULER`` -> ``CELERYD_TIMER``
    - ``CELERYD_ETA_SCHEDULER_PRECISION`` -> ``CELERYD_TIMER_PRECISION``

.. _v260-deprecations:

Deprecations
============

See the :ref:`deprecation-timeline`.

The following undocumented APIs have been moved:

- ``control.inspect.add_consumer`` -> :meth:`@control.add_consumer`.
- ``control.inspect.cancel_consumer`` -> :meth:`@control.cancel_consumer`.
- ``control.inspect.enable_events`` -> :meth:`@control.enable_events`.
- ``control.inspect.disable_events`` -> :meth:`@control.disable_events`.

This way ``inspect()`` is only used for commands that do not
modify anything, while idempotent control commands that make changes
are on the control objects.

Fixes
=====

- Retry sqlalchemy backend operations on DatabaseError/OperationalError
  (Issue #634)