.. _whatsnew-3.0:

===========================================
 What's new in Celery 3.0 (Chiastic Slide)
===========================================

Celery is a simple, flexible and reliable distributed system to
process vast amounts of messages, while providing operations with
the tools required to maintain such a system.

It's a task queue with focus on real-time processing, while also
supporting task scheduling.

Celery has a large and diverse community of users and contributors;
you should come join us :ref:`on IRC <irc-channel>`
or :ref:`our mailing-list <mailing-list>`.

To read more about Celery you should go read the :ref:`introduction <intro>`.

While this version is backward compatible with previous versions
it's important that you read the following section.

If you use Celery in combination with Django you must also
read the `django-celery changelog`_ and upgrade to `django-celery 3.0`_.

This version is officially supported on CPython 2.5, 2.6, 2.7, 3.2, and 3.3,
as well as PyPy and Jython.
Highlights
==========

.. topic:: Overview

    - A new and improved API, that is both simpler and more powerful.

      Everyone must read the new :ref:`first-steps` tutorial,
      and the new :ref:`next-steps` tutorial.  Oh, and
      why not reread the user guide while you're at it :)

      There are no current plans to deprecate the old API,
      so you don't have to be in a hurry to port your applications.

    - The worker is now thread-less, giving great performance improvements.

    - The new "Canvas" makes it easy to define complex workflows.

      Ever wanted to chain tasks together? This is possible, but
      not just that, now you can even chain together groups and chords,
      or even combine multiple chains.

      Read more in the :ref:`Canvas <guide-canvas>` user guide.

    - All of Celery's command-line programs are now available from a single
      :program:`celery` umbrella command.

    - This is the last version to support Python 2.5.

      Starting with Celery 3.1, Python 2.6 or later is required.

    - Support for the new librabbitmq C client.

      Celery will automatically use the :mod:`librabbitmq` module
      if installed, which is a very fast and memory-optimized
      replacement for the py-amqp module.

    - Redis support is more reliable, with improved ack emulation.

    - Celery now always uses UTC.

    - Over 600 commits, 30k additions/36k deletions.

      In comparison, 1.0 ➝ 2.0 had 18k additions/8k deletions.

.. _`website`: http://celeryproject.org/

.. _`django-celery changelog`:
    http://github.com/celery/django-celery/tree/master/Changelog

.. _`django-celery 3.0`: http://pypi.python.org/pypi/django-celery/

.. contents::
    :local:
    :depth: 2
.. _v300-important:

Important Notes
===============

Broadcast exchanges renamed
---------------------------

The worker's remote control command exchanges have been renamed
(to use a new pidbox name).  This was necessary because the ``auto_delete``
flag on the exchanges has been removed, which makes them incompatible with
earlier versions.

You can manually delete the old exchanges if you want,
using the :program:`celery amqp` command (previously called ``camqadm``):

.. code-block:: bash

    $ celery amqp exchange.delete celeryd.pidbox
    $ celery amqp exchange.delete reply.celeryd.pidbox
Eventloop
---------

The worker now runs *without threads* when used with RabbitMQ (AMQP)
or Redis as a broker, resulting in:

- Much better overall performance.
- Several edge-case race conditions fixed.
- Sub-millisecond timer precision.
- Faster shutdown times.

The transports supported are ``py-amqp``, ``librabbitmq``, ``redis``,
and ``amqplib``.
Hopefully this can be extended to include additional broker transports
in the future.

For increased reliability the :setting:`CELERYD_FORCE_EXECV` setting is
enabled by default if the eventloop is not used.
New ``celery`` umbrella command
-------------------------------

All of Celery's command-line programs are now available from a single
:program:`celery` umbrella command.

You can see a list of subcommands and options by running:

.. code-block:: bash

    $ celery help

Commands include:

- ``celery worker`` (previously ``celeryd``).

- ``celery beat`` (previously ``celerybeat``).

- ``celery amqp`` (previously ``camqadm``).

The old programs are still available (``celeryd``, ``celerybeat``, etc.),
but you are discouraged from using them.
Now depends on :mod:`billiard`.
-------------------------------

Billiard is a fork of the multiprocessing module containing
the no-execv patch by sbt (http://bugs.python.org/issue8713),
and also contains the pool improvements previously located in Celery.

This fork was necessary as changes to the C extension code were required
for the no-execv patch to work.

- Issue #625
- Issue #627
- Issue #640
- `django-celery #122 <http://github.com/celery/django-celery/issues/122>`_
- `django-celery #124 <http://github.com/celery/django-celery/issues/124>`_
:mod:`celery.app.task` no longer a package
------------------------------------------

The :mod:`celery.app.task` module is now a module instead of a package.

The setup.py install script will try to remove the old package,
but if that doesn't work for some reason you have to remove
it manually.  This command helps:

.. code-block:: bash

    $ rm -r $(dirname $(python -c '
    import celery;print(celery.__file__)'))/app/task/

If you experience an error like ``ImportError: cannot import name
_unpickle_task``, you just have to remove the old package and everything
is fine.
Last version to support Python 2.5
----------------------------------

The 3.0 series will be the last version to support Python 2.5,
and starting from 3.1 Python 2.6 and later will be required.

With several other distributions taking the step to discontinue
Python 2.5 support, we feel that it is time to do so as well.

Python 2.6 should be widely available at this point, and we urge
you to upgrade, but if that is not possible you still have the option
to continue using Celery 3.0, and important bug fixes
introduced in Celery 3.1 will be back-ported to Celery 3.0 upon request.
UTC timezone is now used
------------------------

This means that ETA/countdown in messages are not compatible with Celery
versions prior to 2.5.

You can disable UTC and revert back to the old local time behavior by
disabling the :setting:`CELERY_ENABLE_UTC` setting.
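
For example, a minimal configuration sketch for reverting to local time
(the timezone value below is illustrative, not a recommendation):

.. code-block:: python

    # Disable UTC so ETA/countdown values are interpreted in local time.
    CELERY_ENABLE_UTC = False

    # Illustrative local timezone; replace with your own zone name.
    CELERY_TIMEZONE = 'Europe/Oslo'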
Redis: Ack emulation improvements
---------------------------------

Reducing the possibility of data loss.

Acks are now implemented by storing a copy of the message when the message
is consumed.  The copy is not removed until the consumer acknowledges
or rejects it.

This means that unacknowledged messages will be redelivered either
when the connection is closed, or when the visibility timeout is exceeded.

- Visibility timeout

  This is a timeout for acks, so that if the consumer
  does not ack the message within this time limit, the message
  is redelivered to another consumer.

  The timeout is set to one hour by default, but
  can be changed by configuring a transport option::

      BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 18000}  # 5 hours

.. note::

    Messages that have not been acked will be redelivered
    if the visibility timeout is exceeded; for Celery users
    this means that ETA/countdown tasks that are scheduled to execute
    with a time that exceeds the visibility timeout will be executed
    twice (or more).  If you plan on using long ETA/countdowns you
    should tweak the visibility timeout accordingly.

    Setting a long timeout means that it will take a long time
    for messages to be redelivered in the event of a power failure,
    but if that happens you could temporarily set the visibility timeout
    lower to flush out messages when you start up the systems again.
.. _v300-news:

News
====

Chaining Tasks
--------------

Tasks can now have callbacks and errbacks, and dependencies are recorded.

- The task message format has been updated with two new extension keys.

  Both keys can be empty/undefined or a list of subtasks.

  - ``callbacks``

    Applied if the task exits successfully, with the result
    of the task as an argument.

  - ``errbacks``

    Applied if an error occurred while executing the task,
    with the uuid of the task as an argument.  Since it may not be possible
    to serialize the exception instance, it passes the uuid of the task
    instead.  The uuid can then be used to retrieve the exception and
    traceback of the task from the result backend.

- ``link`` and ``link_error`` keyword arguments have been added
  to ``apply_async``.

  These add callbacks and errbacks to the task, and
  you can read more about them at :ref:`calling-links`.
  See the sketch after this list for a short example.

- We now track what subtasks a task sends, and some result backends
  support retrieving this information.

  - ``task.request.children``

    Contains the result instances of the subtasks
    the currently executing task has applied.

  - ``AsyncResult.children``

    Returns the task's dependencies, as a list of
    ``AsyncResult``/``ResultSet`` instances.

  - ``AsyncResult.iterdeps``

    Recursively iterates over the task's dependencies,
    yielding `(parent, node)` tuples.

    Raises ``IncompleteStream`` if any of the dependencies
    have not returned yet.

  - ``AsyncResult.graph``

    A ``DependencyGraph`` of the task's dependencies.
    This can also be used to convert to dot format:

    .. code-block:: python

        with open('graph.dot', 'w') as fh:
            result.graph.to_dot(fh)

    which can then be used to produce an image:

    .. code-block:: bash

        $ dot -Tpng graph.dot -o graph.png

- A new special subtask called ``chain`` is also included:

  .. code-block:: python

      >>> from celery import chain

      # (2 + 2) * 8 / 2
      >>> res = chain(add.subtask((2, 2)),
      ...             mul.subtask((8, )),
      ...             div.subtask((2,))).apply_async()
      >>> res.get() == 16

      >>> res.parent.get() == 32

      >>> res.parent.parent.get() == 4

- Adds :meth:`AsyncResult.get_leaf`

  Waits and returns the result of the leaf subtask.
  That is the last node found when traversing the graph,
  but this means that the graph can be 1-dimensional only (in effect
  a list).

- Adds ``subtask.link(subtask)`` + ``subtask.link_error(subtask)``

  Shortcut to ``s.options.setdefault('link', []).append(subtask)``

- Adds ``subtask.flatten_links()``

  Returns a flattened list of all dependencies (recursively).
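
As referenced above, here is a minimal sketch of the new ``link``/``link_error``
keyword arguments to ``apply_async``.  The ``log_result`` and ``log_error``
callback tasks are hypothetical and assumed to be defined elsewhere:

.. code-block:: python

    # log_result is called with the parent task's return value,
    # log_error is called with the uuid of the failed task.
    add.apply_async((2, 2),
                    link=log_result.s(),
                    link_error=log_error.s())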
Redis: Priority support.
------------------------

The message's ``priority`` field is now respected by the Redis
transport by having multiple lists for each named queue.
The queues are then consumed in order of priority.

The priority field is a number in the range of 0 - 9, where
0 is the default and highest priority.

The priority range is collapsed into four steps by default, since it is
unlikely that nine steps will yield more benefit than using four steps.
The number of steps can be configured by setting the ``priority_steps``
transport option, which must be a list of numbers in **sorted order**::

    >>> BROKER_TRANSPORT_OPTIONS = {
    ...     'priority_steps': [0, 2, 4, 6, 8, 9],
    ... }

Priorities implemented in this way are not as reliable as
priorities on the server side, which is why
the feature is nicknamed "quasi-priorities";
**Using routing is still the suggested way of ensuring
quality of service**, as client-implemented priorities
fall short in a number of ways, e.g. if the worker
is busy with long running tasks, has prefetched many messages,
or the queues are congested.

Still, it is possible that using priorities in combination
with routing can be more beneficial than using routing
or priorities alone.  Experimentation and monitoring
should be used to prove this.

Contributed by Germán M. Bravo.
Redis: Now cycles queues so that consuming is fair.
----------------------------------------------------

This ensures that a very busy queue won't block messages
from other queues, and ensures that all queues have
an equal chance of being consumed from.

This used to be the case before, but the behavior was
accidentally changed while switching to using blocking pop.
`group`/`chord`/`chain` are now subtasks
----------------------------------------

- group is no longer an alias to TaskSet, but is an entirely new
  implementation, since it was very difficult to migrate the TaskSet class
  to become a subtask.

- A new shortcut has been added to tasks::

    >>> task.s(arg1, arg2, kw=1)

  as a shortcut to::

    >>> task.subtask((arg1, arg2), {'kw': 1})

- Tasks can be chained by using the ``|`` operator::

    >>> (add.s(2, 2) | pow.s(2)).apply_async()

- Subtasks can be "evaluated" using the ``~`` operator::

    >>> ~add.s(2, 2)
    4

    >>> ~(add.s(2, 2) | pow.s(2))

  is the same as::

    >>> chain(add.s(2, 2), pow.s(2)).apply_async().get()

- A new ``subtask_type`` key has been added to the subtask dicts.

  This can be the string ``"chord"``, ``"group"``, ``"chain"``, ``"chunks"``,
  ``"xmap"``, or ``"xstarmap"``.

- ``maybe_subtask`` now uses ``subtask_type`` to reconstruct
  the object, to be used when using non-pickle serializers.

- The logic for these operations has been moved to dedicated
  tasks ``celery.chord``, ``celery.chain`` and ``celery.group``.

- ``subtask`` no longer inherits from ``AttributeDict``.

  It's now a pure dict subclass with properties for attribute
  access to the relevant keys.

- The repr now outputs how the sequence would look imperatively::

    >>> from celery import chord

    >>> (chord([add.s(i, i) for i in xrange(10)], xsum.s())
    ...  | pow.s(2))
    tasks.xsum([tasks.add(0, 0),
                tasks.add(1, 1),
                tasks.add(2, 2),
                tasks.add(3, 3),
                tasks.add(4, 4),
                tasks.add(5, 5),
                tasks.add(6, 6),
                tasks.add(7, 7),
                tasks.add(8, 8),
                tasks.add(9, 9)]) | tasks.pow(2)
New remote control commands
---------------------------

These commands were previously experimental, but they have proven
stable and are now documented as part of the official API.

- :control:`add_consumer`/:control:`cancel_consumer`

  Tells workers to consume from a new queue, or cancel consuming from a
  queue.  This command has also been changed so that the worker remembers
  the queues added, so that the change will persist even if
  the connection is re-connected.

  These commands are available programmatically as
  :meth:`@control.add_consumer` / :meth:`@control.cancel_consumer`:

  .. code-block:: python

      >>> celery.control.add_consumer(queue_name,
      ...     destination=['w1.example.com'])
      >>> celery.control.cancel_consumer(queue_name,
      ...     destination=['w1.example.com'])

  or using the :program:`celery control` command:

  .. code-block:: bash

      $ celery control -d w1.example.com add_consumer queue
      $ celery control -d w1.example.com cancel_consumer queue

  .. note::

      Remember that a control command without *destination* will be
      sent to **all workers**.

- :control:`autoscale`

  Tells workers with ``--autoscale`` enabled to change autoscale
  max/min concurrency settings.

  This command is available programmatically as :meth:`@control.autoscale`:

  .. code-block:: python

      >>> celery.control.autoscale(max=10, min=5,
      ...     destination=['w1.example.com'])

  or using the :program:`celery control` command:

  .. code-block:: bash

      $ celery control -d w1.example.com autoscale 10 5

- :control:`pool_grow`/:control:`pool_shrink`

  Tells workers to add or remove pool processes.

  These commands are available programmatically as
  :meth:`@control.pool_grow` / :meth:`@control.pool_shrink`:

  .. code-block:: python

      >>> celery.control.pool_grow(2, destination=['w1.example.com'])
      >>> celery.control.pool_shrink(2, destination=['w1.example.com'])

  or using the :program:`celery control` command:

  .. code-block:: bash

      $ celery control -d w1.example.com pool_grow 2
      $ celery control -d w1.example.com pool_shrink 2

- :program:`celery control` now supports :control:`rate_limit` and
  :control:`time_limit` commands.

  See ``celery control --help`` for details.
Crontab now supports Day of Month, and Month of Year arguments
---------------------------------------------------------------

See the updated list of examples at :ref:`beat-crontab`.
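
For instance, a minimal sketch of a beat schedule using the new arguments
(the task names are hypothetical):

.. code-block:: python

    from celery.schedules import crontab

    CELERYBEAT_SCHEDULE = {
        # run at midnight on the 2nd day of every month
        'monthly-report': {
            'task': 'tasks.generate_report',   # hypothetical task
            'schedule': crontab(minute=0, hour=0, day_of_month='2'),
        },
        # run every day at noon, but only during July
        'july-only': {
            'task': 'tasks.summer_job',        # hypothetical task
            'schedule': crontab(minute=0, hour=12, month_of_year='7'),
        },
    }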
Immutable subtasks
------------------

Subtasks can now be immutable, which means that the arguments
will not be modified when calling callbacks::

    >>> chain(add.s(2, 2), clear_static_electricity.si())

means it will not receive the argument of the parent task,
and ``.si()`` is a shortcut to::

    >>> clear_static_electricity.subtask(immutable=True)
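
To make the difference concrete, here is a small sketch (the ``notify`` task
is hypothetical): a regular signature receives the parent's result as its
first argument, while an immutable signature keeps only its own arguments:

.. code-block:: python

    from celery import chain

    # add(2, 2) -> 4, then the callback is called as add(4, 8) -> 12
    chain(add.s(2, 2), add.s(8)).apply_async()

    # add(2, 2) -> 4, but the callback is called as notify('done'),
    # ignoring the parent result entirely.
    chain(add.s(2, 2), notify.si('done')).apply_async()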
Logging Improvements
--------------------

Logging support now conforms better with best practices.

- Classes used by the worker no longer use app.get_default_logger, but use
  `celery.utils.log.get_logger`, which simply gets the logger without setting
  the level, and adds a NullHandler.

- Loggers are no longer passed around, instead every module using logging
  defines a module global logger that is used throughout.

- All loggers inherit from a common logger called "celery".

- Before, task.get_logger would set up a new logger for every task,
  and even set the log level.  This is no longer the case.

- Instead all task loggers now inherit from a common "celery.task" logger
  that is set up when programs call `setup_logging_subsystem`.

- Instead of using LoggerAdapter to augment the formatter with
  the task_id and task_name field, the task base logger now uses
  a special formatter adding these values at runtime from the
  currently executing task.

- In fact, ``task.get_logger`` is no longer recommended, it is better
  to add a module-level logger to your tasks module.

  For example, like this:

  .. code-block:: python

      from celery.utils.log import get_task_logger

      logger = get_task_logger(__name__)

      @celery.task
      def add(x, y):
          logger.debug('Adding %r + %r' % (x, y))
          return x + y

  The resulting logger will then inherit from the ``"celery.task"`` logger
  so that the current task name and id are included in logging output.

- Redirected output from stdout/stderr is now logged to a "celery.redirected"
  logger.

- In addition a few warnings.warn have been replaced with logger.warn.

- Now avoids the 'no handlers for logger multiprocessing' warning.
Task registry no longer global
------------------------------

Every Celery instance now has its own task registry.

You can make apps share registries by specifying it::

    >>> app1 = Celery()
    >>> app2 = Celery(tasks=app1.tasks)

Note that tasks are shared between registries by default, so that
tasks will be added to every subsequently created task registry.
As an alternative, tasks can be private to specific task registries
by setting the ``shared`` argument to the ``@task`` decorator::

    @celery.task(shared=False)
    def add(x, y):
        return x + y
Abstract tasks are now lazily bound.
------------------------------------

The :class:`~celery.task.Task` class is no longer bound to an app
by default, it will first be bound (and configured) when
a concrete subclass is created.

This means that you can safely import and make task base classes,
without also initializing the app environment::

    from celery.task import Task

    class DebugTask(Task):
        abstract = True

        def __call__(self, *args, **kwargs):
            print('CALLING %r' % (self, ))
            return self.run(*args, **kwargs)


    >>> DebugTask
    <unbound DebugTask>

    >>> @celery1.task(base=DebugTask)
    ... def add(x, y):
    ...     return x + y
    >>> add.__class__
    <class add of <Celery default:0x101510d10>>
Lazy task decorators
--------------------

The ``@task`` decorator is now lazy when used with custom apps.

That is, if ``accept_magic_kwargs`` is enabled (hereafter called "compat
mode"), the task decorator executes inline like before; however, for custom
apps the ``@task`` decorator now returns a special PromiseProxy object that
is only evaluated on access.

All promises will be evaluated when :meth:`@finalize` is called, or implicitly
when the task registry is first used.
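
A minimal sketch of what this looks like in practice, assuming a plain
custom app:

.. code-block:: python

    from celery import Celery

    celery = Celery()

    @celery.task
    def add(x, y):
        return x + y

    # ``add`` is a PromiseProxy at this point; the real task class is
    # created lazily when the promise is evaluated, e.g. when the app
    # is finalized or the task registry is first used.
    celery.finalize()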
Smart `--app` option
--------------------

The :option:`--app` option now 'auto-detects' the app instance:

- If the provided path is a module it tries to get an
  attribute named 'celery'.

- If the provided path is a package it tries
  to import a submodule named 'celery',
  and get the celery attribute from that module.

E.g. if you have a project named 'proj' where the
celery app is located in 'from proj.celery import app',
then the following will be equivalent:

.. code-block:: bash

    $ celery worker --app=proj
    $ celery worker --app=proj.celery:
    $ celery worker --app=proj.celery:app
In Other News
-------------

- New :setting:`CELERYD_WORKER_LOST_WAIT` to control the timeout in
  seconds before :exc:`billiard.WorkerLostError` is raised
  when a worker cannot be signalled (Issue #595).

  Contributed by Brendon Crawford.

- Redis event monitor queues are now automatically deleted (Issue #436).

- App instance factory methods have been converted to be cached
  descriptors that create a new subclass on access.

  This means that e.g. ``app.Worker`` is an actual class
  and will work as expected when::

      class Worker(app.Worker):
          ...

- New signal: :signal:`task_success`.

- Multiprocessing logs are now only emitted if the :envvar:`MP_LOG`
  environment variable is set.

- The Celery instance can now be created with a broker URL:

  .. code-block:: python

      app = Celery(broker='redis://')

- Result backends can now be set using a URL.

  Currently only supported by Redis.  Example use::

      CELERY_RESULT_BACKEND = 'redis://localhost/1'

- Heartbeat frequency now every 5s, and frequency sent with event.

  The heartbeat frequency is now available in the worker event messages,
  so that clients can decide when to consider workers offline based on
  this value.
- Module celery.actors has been removed, and will be part of cl instead.

- Introduces new ``celery`` command, which is an entry point for all other
  commands.

  The main program for this command can be run by calling ``celery.start()``.

- Annotations now support decorators if the key starts with ``'@'``.

  E.g.:

  .. code-block:: python

      from functools import wraps

      def debug_args(fun):

          @wraps(fun)
          def _inner(*args, **kwargs):
              print('ARGS: %r' % (args, ))
              return fun(*args, **kwargs)
          return _inner

      CELERY_ANNOTATIONS = {
          'tasks.add': {'@__call__': debug_args},
      }

  Also tasks are now always bound by class so that
  annotated methods end up being bound.

- Bugreport now available as a command and broadcast command.

  - Get it from a Python repl::

        >>> import celery
        >>> print(celery.bugreport())

  - Using the ``celery`` command line program:

    .. code-block:: bash

        $ celery report

  - Get it from remote workers:

    .. code-block:: bash

        $ celery inspect report
- Module ``celery.log`` moved to :mod:`celery.app.log`.

- Module ``celery.task.control`` moved to :mod:`celery.app.control`.

- New signal: :signal:`task_revoked`

  Sent in the main process when the task is revoked or terminated.

- ``AsyncResult.task_id`` renamed to ``AsyncResult.id``.

- ``TasksetResult.taskset_id`` renamed to ``.id``.

- ``xmap(task, sequence)`` and ``xstarmap(task, sequence)``

  Returns a list of the results of applying the task function to every item
  in the sequence.

  Example::

      >>> from celery import xstarmap

      >>> xstarmap(add, zip(range(10), range(10))).apply_async().get()
      [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

- ``chunks(task, sequence, chunksize)`` (see the sketch after this list).

- ``group.skew(start=, stop=, step=)``

  Skew will stagger the countdown for the individual tasks in a group,
  e.g. with a group::

      >>> g = group(add.s(i, i) for i in xrange(10))

  Skewing the tasks from 0 seconds to 10 seconds::

      >>> g.skew(stop=10)

  Will have the first task execute in 0 seconds, the second in 1 second,
  the third in 2 seconds and so on.
- 99% test coverage.

- :setting:`CELERY_QUEUES` can now be a list/tuple of :class:`~kombu.Queue`
  instances.

  Internally :attr:`@amqp.queues` is now a mapping of name/Queue instances,
  instead of converting on the fly.

- Can now specify connection for :class:`@control.inspect`.

  .. code-block:: python

      from kombu import Connection

      i = celery.control.inspect(connection=Connection('redis://'))
      i.active_queues()

- :setting:`CELERYD_FORCE_EXECV` is now enabled by default.

  If the old behavior is wanted the setting can be set to False,
  or the new :option:`--no-execv` option can be passed to
  :program:`celery worker`.

- Deprecated module ``celery.conf`` has been removed.

- The :setting:`CELERY_TIMEZONE` setting now always requires the :mod:`pytz`
  library to be installed (except if the timezone is set to `UTC`).

- The Tokyo Tyrant backend has been removed and is no longer supported.

- Now uses :func:`~kombu.common.maybe_declare` to cache queue declarations.

- There is no longer a global default for the
  :setting:`CELERYBEAT_MAX_LOOP_INTERVAL` setting; it is instead
  set by individual schedulers.

- Worker: now truncates very long message bodies in error reports.

- No longer deepcopies exceptions when trying to serialize errors.

- The :envvar:`CELERY_BENCH` environment variable will now also list
  memory usage statistics at worker shutdown.

- Worker: now only ever uses a single timer for all timing needs,
  setting different priorities instead.

- An exception's arguments are now safely pickled.

  Contributed by Matt Long.

- Worker/Celerybeat no longer logs the startup banner.

  Previously it would be logged with severity warning,
  now it's only written to stdout.

- The ``contrib/`` directory in the distribution has been renamed to
  ``extra/``.

- celery.contrib.migrate: Many improvements, including
  filtering, queue migration, and support for acking messages on the broker
  that's being migrated from.

  Contributed by John Watson.

- Worker: Prefetch count increments are now optimized and grouped together.

- Worker: No longer calls ``consume`` on the remote control command queue
  twice.

  Probably didn't cause any problems, but was unnecessary.
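
To illustrate the ``chunks`` item referenced earlier in this list, here is a
hedged sketch assuming an existing ``add(x, y)`` task; the exact result
layout may vary:

.. code-block:: python

    # Split 100 (x, y) pairs into 10 chunks of 10 add() calls each,
    # so only 10 task messages are sent instead of 100.
    res = add.chunks(zip(range(100), range(100)), 10).apply_async()

    # Roughly: a list of per-chunk result lists, e.g.
    # [[0, 2, 4, ...], [20, 22, 24, ...], ...]
    print(res.get())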
Internals
---------

- ``app.broker_connection`` is now ``app.connection``.

  Both names still work.

- Compat modules are now generated dynamically upon use.

  These modules are ``celery.messaging``, ``celery.log``,
  ``celery.decorators`` and ``celery.registry``.

- :mod:`celery.utils` refactored into multiple modules:

  - :mod:`celery.utils.text`
  - :mod:`celery.utils.imports`
  - :mod:`celery.utils.functional`

- Now using :mod:`kombu.utils.encoding` instead of
  :mod:`celery.utils.encoding`.

- Renamed module ``celery.routes`` -> :mod:`celery.app.routes`.

- Renamed package ``celery.db`` -> :mod:`celery.backends.database`.

- Renamed module ``celery.abstract`` -> :mod:`celery.worker.bootsteps`.

- Command line docs are now parsed from the module docstrings.

- Test suite directory has been reorganized.

- :program:`setup.py` now reads docs from the :file:`requirements/` directory.

- Celery commands no longer wrap output (Issue #700).

  Contributed by Thomas Johansson.
.. _v300-experimental:

Experimental
============

:mod:`celery.contrib.methods`: Task decorator for methods
----------------------------------------------------------

This is an experimental module containing a task
decorator, and a task decorator filter, that can be used
to create tasks out of methods::

    from celery.contrib.methods import task_method

    class Counter(object):

        def __init__(self):
            self.value = 1

        @celery.task(name='Counter.increment', filter=task_method)
        def increment(self, n=1):
            self.value += n
            return self.value

See :mod:`celery.contrib.methods` for more information.
.. _v300-unscheduled-removals:

Unscheduled Removals
====================

Usually we don't make backward incompatible removals,
but these removals should have no major effect.

- The following settings have been renamed:

  - ``CELERYD_ETA_SCHEDULER`` -> ``CELERYD_TIMER``
  - ``CELERYD_ETA_SCHEDULER_PRECISION`` -> ``CELERYD_TIMER_PRECISION``

.. _v300-deprecations:

Deprecations
============

See the :ref:`deprecation-timeline`.
- The ``celery.backends.pyredis`` compat module has been removed.

  Use :mod:`celery.backends.redis` instead!

- The following undocumented APIs have been moved:

  - ``control.inspect.add_consumer`` -> :meth:`@control.add_consumer`.
  - ``control.inspect.cancel_consumer`` -> :meth:`@control.cancel_consumer`.
  - ``control.inspect.enable_events`` -> :meth:`@control.enable_events`.
  - ``control.inspect.disable_events`` -> :meth:`@control.disable_events`.

  This way ``inspect()`` is only used for commands that do not
  modify anything, while idempotent control commands that make changes
  are on the control objects.
Fixes
=====

- Retry SQLAlchemy backend operations on DatabaseError/OperationalError
  (Issue #634).

- Tasks that called ``retry`` were not acknowledged if acks late was enabled.

  Fix contributed by David Markey.

- The message priority argument was not properly propagated to Kombu
  (Issue #708).

  Fix contributed by Eran Rundstein.