.. _whatsnew-3.1:

===========================================
What's new in Celery 3.1 (Cipater)
===========================================
:Author: Ask Solem (``ask at celeryproject.org``)

.. sidebar:: Change history

    What's new documents describe the changes in major versions;
    we also have a :ref:`changelog` that lists the changes in bugfix
    releases (0.0.x), while older series are archived under the
    :ref:`history` section.

Celery is a simple, flexible, and reliable distributed system to
process vast amounts of messages, while providing operations with
the tools required to maintain such a system.

It's a task queue with a focus on real-time processing, while also
supporting task scheduling.

Celery has a large and diverse community of users and contributors;
you should come join us :ref:`on IRC <irc-channel>`
or :ref:`our mailing-list <mailing-list>`.

To read more about Celery you should go read the :ref:`introduction <intro>`.

While this version is backward compatible with previous versions,
it's important that you read the following section.

This version is officially supported on CPython 2.6, 2.7, and 3.3,
and is also supported on PyPy.

.. _`website`: http://celeryproject.org/

.. topic:: Table of Contents

    Make sure you read the important notes before upgrading to this version.

.. contents::
    :local:
    :depth: 2
Preface
=======

Deadlocks have long plagued our workers, and while uncommon they are
not acceptable. They are also infamous for being extremely hard to diagnose
and reproduce, so to make this job easier I wrote a stress test suite that
bombards the worker with different tasks in an attempt to break it.

What happens if thousands of worker child processes are killed every
second? What if we also kill the broker connection every 10
seconds? These are examples of what the stress test suite will do to the
worker, and it reruns these tests using different configuration combinations
to find edge case bugs.

The end result was that I had to rewrite the prefork pool to avoid the use
of the POSIX semaphore. This was extremely challenging, but after
months of hard work the worker now finally passes the stress test suite.

There are probably more bugs to find, but the good news is
that we now have a tool to reproduce them, so should you be so unlucky as to
experience a bug then we'll write a test for it and squash it!

Note that I have also moved many broker transports into experimental status:
the only transports recommended for production use today are RabbitMQ and
Redis.

I don't have the resources to maintain all of them, so bugs are left
unresolved. I hope that someone will step up and take responsibility for
these transports or donate resources to improve them, but as the situation
is now I don't think the quality is up to par with the rest of the code-base,
so I cannot recommend them for production use.

The next major version, Celery 4.0, will focus on performance and removing
rarely used parts of the library. Work has also started on a new message
protocol, supporting multiple languages and more. The initial draft can
be found :ref:`here <message-protocol-task-v2>`.

This has probably been the hardest release I've worked on, so no
introduction to this changelog would be complete without a massive
thank you to everyone who contributed and helped me test it!

Thank you for your support!

*— Ask Solem*
.. _v310-important:

Important Notes
===============

Dropped support for Python 2.5
------------------------------

Celery now requires Python 2.6 or later.

The new dual code base runs on both Python 2 and 3, without
requiring the ``2to3`` porting tool.

.. note::

    This is also the last version to support Python 2.6! From Celery 4.0
    onwards, Python 2.7 or later will be required.

.. _last-version-to-enable-pickle:

Last version to enable Pickle by default
----------------------------------------

Starting from Celery 4.0 the default serializer will be json.

If you depend on pickle being accepted you should be prepared
for this change by explicitly allowing your worker
to consume pickled messages using the :setting:`CELERY_ACCEPT_CONTENT`
setting:

.. code-block:: python

    CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']

Make sure you only select the serialization formats you'll actually be using,
and make sure you have properly secured your broker from unwanted access
(see the :ref:`Security Guide <guide-security>`).

The worker will emit a deprecation warning if you don't define this setting.

.. topic:: for Kombu users

    Kombu 3.0 no longer accepts pickled messages by default, so if you
    use Kombu directly then you have to configure your consumers:
    see the :ref:`Kombu 3.0 Changelog <kombu:version-3.0.0>` for more
    information.

Old command-line programs removed and deprecated
------------------------------------------------

Everyone should move to the new :program:`celery` umbrella
command, so we are incrementally deprecating the old command names.

In this version we've removed all commands that are not used
in init-scripts. The rest will be removed in 4.0.
+-------------------+--------------+-------------------------------------+
| Program           | New Status   | Replacement                         |
+===================+==============+=====================================+
| ``celeryd``       | *DEPRECATED* | :program:`celery worker`            |
+-------------------+--------------+-------------------------------------+
| ``celerybeat``    | *DEPRECATED* | :program:`celery beat`              |
+-------------------+--------------+-------------------------------------+
| ``celeryd-multi`` | *DEPRECATED* | :program:`celery multi`             |
+-------------------+--------------+-------------------------------------+
| ``celeryctl``     | **REMOVED**  | :program:`celery inspect|control`   |
+-------------------+--------------+-------------------------------------+
| ``celeryev``      | **REMOVED**  | :program:`celery events`            |
+-------------------+--------------+-------------------------------------+
| ``camqadm``       | **REMOVED**  | :program:`celery amqp`              |
+-------------------+--------------+-------------------------------------+

If this is not a new installation then you may want to remove the old
commands:

.. code-block:: console

    $ pip uninstall celery
    $ # repeat until it fails
    # ...
    $ pip uninstall celery
    $ pip install celery

Please run :program:`celery --help` for help using the umbrella command.
.. _v310-news:

News
====

Prefork Pool Improvements
-------------------------

These improvements are only active if you use an async capable
transport. This means only RabbitMQ (AMQP) and Redis are supported
at this point, and other transports will still use the thread-based fallback
implementation.

- Pool is now using one IPC queue per child process.

  Previously the pool shared one queue between all child processes,
  using a POSIX semaphore as a mutex to achieve exclusive read and write
  access.

  The POSIX semaphore has now been removed and each child process
  gets a dedicated queue. This means that the worker will require more
  file descriptors (two descriptors per process), but it also means
  that performance is improved and we can send work to individual child
  processes.

  POSIX semaphores are not released when a process is killed, so killing
  processes could lead to a deadlock if it happened while the semaphore was
  acquired. There is no good way to fix this, so the best option
  was to remove the semaphore.

- Asynchronous write operations

  The pool now uses async I/O to send work to the child processes.

- Lost process detection is now immediate.

  If a child process is killed or exits mysteriously the pool previously
  had to wait for 30 seconds before marking the task with a
  :exc:`~celery.exceptions.WorkerLostError`. It had to do this because
  the out-queue was shared between all processes, and the pool could not
  be certain whether the process completed the task or not. So an arbitrary
  timeout of 30 seconds was chosen, as it was believed that the out-queue
  would have been drained by this point.

  This timeout is no longer necessary, and so the task can be marked as
  failed as soon as the pool gets the notification that the process exited.

- Rare race conditions fixed

  Most of these bugs were never reported to us, but were discovered while
  running the new stress test suite.
Caveats
~~~~~~~

.. topic:: Long running tasks

    The new pool will send tasks to a child process as long as the process's
    in-queue is writable, and since the socket is buffered this means
    that the processes are, in effect, prefetching tasks.

    This benefits performance but it also means that other tasks may be stuck
    waiting for a long running task to complete::

        -> send T1 to Process A
        # A executes T1
        -> send T2 to Process B
        # B executes T2
        <- T2 complete

        -> send T3 to Process A
        # A still executing T1, T3 stuck in local buffer and
        # will not start until T1 returns

    The buffer size varies based on the operating system: some may
    have a buffer as small as 64KB, but on recent Linux versions the buffer
    size is 1MB (can only be changed system-wide).

    You can disable this prefetching behavior by enabling the
    :option:`-Ofair <celery worker -O>` worker option:

    .. code-block:: console

        $ celery -A proj worker -l info -Ofair

    With this option enabled the worker will only write to processes that are
    available for work, disabling the prefetch behavior.

.. topic:: Max tasks per child

    If a process exits and pool prefetch is enabled, the worker may have
    already written many tasks to the process's in-queue, and these tasks
    must then be moved back and rewritten to a new process.

    This is very expensive if you have the
    :option:`--maxtasksperchild <celery worker --maxtasksperchild>` option
    set to a low value (e.g. less than 10), so if you need to enable this
    option you should also enable :option:`-Ofair <celery worker -O>` to
    turn off the prefetching behavior.
Django supported out of the box
-------------------------------

Celery 3.0 introduced a shiny new API, but unfortunately did not
have a solution for Django users.

The situation changes with this version as Django is now supported
in core and new Django users coming to Celery are now expected
to use the new API directly.

The Django community has a convention where there's a separate
``django-x`` package for every library, acting like a bridge between
Django and the library.

Having a separate project for Django users has been a pain for Celery,
with multiple issue trackers and multiple documentation
sources, and then lastly since 3.0 we even had different APIs.

With this version we challenge that convention and Django users will
use the same library, the same API and the same documentation as
everyone else.

There is no rush to port your existing code to use the new API,
but if you would like to experiment with it you should know that:

- You need to use a Celery application instance.

  The new Celery API introduced in 3.0 requires users to instantiate the
  library by creating an application:

  .. code-block:: python

      from celery import Celery

      app = Celery()

- You need to explicitly integrate Celery with Django.

  Celery will not automatically use the Django settings, so you can
  either configure Celery separately or you can tell it to use the Django
  settings with:

  .. code-block:: python

      app.config_from_object('django.conf:settings')

  Neither will it automatically traverse your installed apps to find task
  modules. If you want this behavior you must explicitly pass the list of
  installed apps to the Celery instance:

  .. code-block:: python

      from django.conf import settings

      app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

- You no longer use ``manage.py``.

  Instead you use the :program:`celery` command directly:

  .. code-block:: console

      $ celery -A proj worker -l info

  For this to work your app module must store the
  :envvar:`DJANGO_SETTINGS_MODULE` environment variable; see the example
  in the :ref:`Django guide <django-first-steps>`.

To get started with the new API you should first read the :ref:`first-steps`
tutorial, and then you should read the Django-specific instructions in
:ref:`django-first-steps`.

The fixes and improvements applied by the :pypi:`django-celery` library
are now automatically applied by core Celery when it detects that
the :envvar:`DJANGO_SETTINGS_MODULE` environment variable is set.

The distribution ships with a new example project using Django
in :file:`examples/django`:

https://github.com/celery/celery/tree/3.1/examples/django

Some features still require the :pypi:`django-celery` library:

- Celery does not implement the Django database or cache result backends.
- Celery does not ship with the database-based periodic task
  scheduler.

.. note::

    If you're still using the old API when you upgrade to Celery 3.1,
    then you must make sure that your settings module contains
    the ``djcelery.setup_loader()`` line, since this will
    no longer happen as a side-effect of importing the :pypi:`django-celery`
    module.

    New users (or those who have ported to the new API) don't need the
    ``setup_loader`` line anymore, and should make sure to remove it.
Events are now ordered using logical time
-----------------------------------------

Keeping physical clocks in perfect sync is impossible, so using
time-stamps to order events in a distributed system is not reliable.

Celery event messages have included a logical clock value for some time,
but starting with this version that field is also used to order them.

Also, events now record timezone information
by including a new ``utcoffset`` field in the event message.
This is a signed integer telling the difference from UTC time in hours,
so e.g. an event sent from the Europe/London timezone in daylight savings
time will have an offset of 1.

:class:`@events.Receiver` will automatically convert the time-stamps
to the local timezone.

.. note::

    The logical clock is synchronized with other nodes
    in the same cluster (neighbors), so this means that the logical
    epoch will start at the point when the first worker in the cluster
    starts.

    If all of the workers are shut down the clock value will be lost
    and reset to 0. To protect against this, you should specify the
    :option:`celery worker --statedb` option such that the worker can
    persist the clock value at shutdown.

    You may notice that the logical clock is an integer value and
    increases very rapidly. Don't worry about the value overflowing
    though, as even in the busiest clusters it may take several
    millennia before the clock exceeds a 64-bit value.
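As a quick illustration of how the new fields surface to a monitor, here's a
minimal event-consumer sketch (the broker URL, the printed fields, and the
event type handled are assumptions for the example):

.. code-block:: python

    from celery import Celery

    app = Celery(broker='amqp://guest@localhost//')

    def on_worker_heartbeat(event):
        # ``utcoffset`` is the sender's offset from UTC in hours, and
        # ``clock`` is the logical clock value used for ordering.
        print('heartbeat from %s (utcoffset=%s, clock=%s)' % (
            event['hostname'], event.get('utcoffset'), event.get('clock')))

    with app.connection() as connection:
        receiver = app.events.Receiver(connection, handlers={
            'worker-heartbeat': on_worker_heartbeat,
        })
        receiver.capture(limit=10)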
New worker node name format (``name@host``)
-------------------------------------------

Node names are now constructed from two elements: name and host-name,
separated by '@'.

This change was made to more easily identify multiple instances running
on the same machine.

If a custom name is not specified then the
worker will use the name 'celery' by default, resulting in a
fully qualified node name of 'celery@hostname':

.. code-block:: console

    $ celery worker -n example.com
    celery@example.com

To also set the name you must include the @:

.. code-block:: console

    $ celery worker -n worker1@example.com
    worker1@example.com

The worker will identify itself using the fully qualified
node name in events and broadcast messages, so where before
a worker would identify itself as 'worker1.example.com', it will now
use 'celery@worker1.example.com'.

Remember that the :option:`-n <celery worker -n>` argument also supports
simple variable substitutions, so if the current host-name
is *george.example.com* then the ``%h`` macro will expand into that:

.. code-block:: console

    $ celery worker -n worker1@%h
    worker1@george.example.com

The available substitutions are as follows:

+---------------+----------------------------------------+
| Variable      | Substitution                           |
+===============+========================================+
| ``%h``        | Full host-name (including domain name) |
+---------------+----------------------------------------+
| ``%d``        | Domain name only                       |
+---------------+----------------------------------------+
| ``%n``        | Host-name only (without domain name)   |
+---------------+----------------------------------------+
| ``%%``        | The character ``%``                    |
+---------------+----------------------------------------+
Bound tasks
-----------

The task decorator can now create "bound tasks", which means that the
task will receive the ``self`` argument.

.. code-block:: python

    @app.task(bind=True)
    def send_twitter_status(self, oauth, tweet):
        try:
            twitter = Twitter(oauth)
            twitter.update_status(tweet)
        except (Twitter.FailWhaleError, Twitter.LoginError) as exc:
            raise self.retry(exc=exc)

Using *bound tasks* is now the recommended approach whenever
you need access to the task instance or request context.
Previously one would have to refer to the name of the task
instead (``send_twitter_status.retry``), but this could lead to problems
in some configurations.
Mingle: Worker synchronization
------------------------------

The worker will now attempt to synchronize with other workers in
the same cluster.

Synchronized data currently includes revoked tasks and the logical clock.

This only happens at start-up and causes a one second start-up delay
to collect broadcast responses from other workers.

You can disable this bootstep using the
:option:`celery worker --without-mingle` option.

Gossip: Worker <-> Worker communication
---------------------------------------

Workers are now passively subscribing to worker related events like
heartbeats.

This means that a worker knows what other workers are doing and
can detect if they go offline. Currently this is only used for clock
synchronization, but there are many possibilities for future additions,
and you can already write extensions that take advantage of this.

Some ideas include consensus protocols, rerouting tasks to the best worker
(based on resource usage or data locality), or restarting workers when they
crash.

We believe that although this is a small addition, it opens up
amazing possibilities.

You can disable this bootstep using the
:option:`celery worker --without-gossip` option.
Bootsteps: Extending the worker
-------------------------------

By writing bootsteps you can now easily extend the consumer part
of the worker to add additional features, like custom message consumers.

The worker has been using bootsteps for some time, but these were never
documented. In this version the consumer part of the worker
has also been rewritten to use bootsteps, and the new :ref:`guide-extending`
guide documents examples of extending the worker, including adding
custom message consumers.

See the :ref:`guide-extending` guide for more information.

.. note::

    Bootsteps written for older versions will not be compatible
    with this version, as the API has changed significantly.

    The old API was experimental and internal, but should you be so unlucky
    as to use it then please contact the mailing-list and we will help you
    port the bootstep to the new API.
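As a taste of the new API, here's a minimal sketch of a custom bootstep
attached to the worker blueprint (the step itself is hypothetical and only
prints a message when the blueprint is built; see the extending guide for
real use cases):

.. code-block:: python

    from celery import Celery
    from celery import bootsteps

    class InfoStep(bootsteps.Step):

        def __init__(self, parent, **kwargs):
            # called while the blueprint containing this step is applied.
            print('{0!r} is in init'.format(parent))

    app = Celery(broker='amqp://')
    app.steps['worker'].add(InfoStep)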
New RPC result backend
----------------------

This new experimental version of the ``amqp`` result backend is a good
alternative to use in classical RPC scenarios, where the process that
initiates the task is always the process to retrieve the result.

It uses Kombu to send and retrieve results, and each client
uses a unique queue for replies to be sent to. This avoids
the significant overhead of the original amqp result backend which creates
one queue per task.

By default results sent using this backend will not persist, so they won't
survive a broker restart. You can enable
the :setting:`CELERY_RESULT_PERSISTENT` setting to change that.

.. code-block:: python

    CELERY_RESULT_BACKEND = 'rpc'
    CELERY_RESULT_PERSISTENT = True

Note that chords are currently not supported by the RPC backend.
Time limits can now be set by the client
----------------------------------------

Two new options have been added to the Calling API: ``time_limit`` and
``soft_time_limit``:

.. code-block:: pycon

    >>> res = add.apply_async((2, 2), time_limit=10, soft_time_limit=8)

    >>> res = add.subtask((2, 2), time_limit=10, soft_time_limit=8).delay()

    >>> res = add.s(2, 2).set(time_limit=10, soft_time_limit=8).delay()

Contributed by Mher Movsisyan.

Redis: Broadcast messages and virtual hosts
-------------------------------------------

Broadcast messages are currently seen by all virtual hosts when
using the Redis transport. You can now fix this by enabling a prefix to all
channels so that the messages are separated:

.. code-block:: python

    BROKER_TRANSPORT_OPTIONS = {'fanout_prefix': True}

Note that you'll not be able to communicate with workers running older
versions or workers that don't have this setting enabled.

This setting will be the default in a future version.

Related to Issue #1490.
:pypi:`pytz` replaces :pypi:`python-dateutil` dependency
--------------------------------------------------------

Celery no longer depends on the :pypi:`python-dateutil` library,
but instead a new dependency on the :pypi:`pytz` library was added.

The :pypi:`pytz` library was already recommended for accurate timezone
support.

This also means that dependencies are the same for both Python 2 and
Python 3, and that the :file:`requirements/default-py3k.txt` file has
been removed.

Support for :pypi:`setuptools` extra requirements
-------------------------------------------------

Pip now supports the :pypi:`setuptools` extra requirements format,
so we have removed the old bundles concept, and instead specify
setuptools extras.

You install extras by specifying them inside brackets:

.. code-block:: console

    $ pip install celery[redis,mongodb]

The above will install the dependencies for Redis and MongoDB. You can list
as many extras as you want.

.. warning::

    You can't use the ``celery-with-*`` packages anymore, as these will not
    be updated to use Celery 3.1.

+-------------+-------------------------+---------------------------+
| Extension   | Requirement entry       | Type                      |
+=============+=========================+===========================+
| Redis       | ``celery[redis]``       | transport, result backend |
+-------------+-------------------------+---------------------------+
| MongoDB     | ``celery[mongodb]``     | transport, result backend |
+-------------+-------------------------+---------------------------+
| CouchDB     | ``celery[couchdb]``     | transport                 |
+-------------+-------------------------+---------------------------+
| Beanstalk   | ``celery[beanstalk]``   | transport                 |
+-------------+-------------------------+---------------------------+
| ZeroMQ      | ``celery[zeromq]``      | transport                 |
+-------------+-------------------------+---------------------------+
| Zookeeper   | ``celery[zookeeper]``   | transport                 |
+-------------+-------------------------+---------------------------+
| SQLAlchemy  | ``celery[sqlalchemy]``  | transport, result backend |
+-------------+-------------------------+---------------------------+
| librabbitmq | ``celery[librabbitmq]`` | transport (C amqp client) |
+-------------+-------------------------+---------------------------+

The complete list with examples is found in the :ref:`bundles` section.
``subtask.__call__()`` now executes the task directly
-----------------------------------------------------

A misunderstanding led to ``Signature.__call__`` being an alias of
``.delay``, but this does not conform to the calling API of ``Task``, which
calls the underlying task method.

This means that:

.. code-block:: python

    @app.task
    def add(x, y):
        return x + y

    add.s(2, 2)()

now does the same as calling the task directly:

.. code-block:: pycon

    >>> add(2, 2)

In Other News
-------------
- Now depends on :ref:`Kombu 3.0 <kombu:version-3.0.0>`.

- Now depends on :pypi:`billiard` version 3.3.

- Worker will now crash if running as the root user with pickle enabled.

- Canvas: ``group.apply_async`` and ``chain.apply_async`` no longer start
  a separate task.

  That the group and chord primitives supported the "calling API" like other
  subtasks was a nice idea, but it was useless in practice and often
  confused users. If you still want this behavior you can define a
  task to do it for you.

- New method ``Signature.freeze()`` can be used to "finalize"
  signatures/subtasks.

  Regular signature:

  .. code-block:: pycon

      >>> s = add.s(2, 2)
      >>> result = s.freeze()
      >>> result
      <AsyncResult: ffacf44b-f8a1-44e9-80a3-703150151ef2>
      >>> s.delay()
      <AsyncResult: ffacf44b-f8a1-44e9-80a3-703150151ef2>

  Group:

  .. code-block:: pycon

      >>> g = group(add.s(2, 2), add.s(4, 4))
      >>> result = g.freeze()
      <GroupResult: e1094b1d-08fc-4e14-838e-6d601b99da6d [
          70c0fb3d-b60e-4b22-8df7-aa25b9abc86d,
          58fcd260-2e32-4308-a2ea-f5be4a24f7f4]>
      >>> g()
      <GroupResult: e1094b1d-08fc-4e14-838e-6d601b99da6d [70c0fb3d-b60e-4b22-8df7-aa25b9abc86d, 58fcd260-2e32-4308-a2ea-f5be4a24f7f4]>
- Chord exception behavior defined (Issue #1172).

  From this version the chord callback will change state to FAILURE
  when a task that's part of a chord raises an exception.

  See more at :ref:`chord-errors`.

- New ability to specify additional command line options
  to the worker and beat programs.

  The :attr:`@user_options` attribute can be used
  to add additional command-line arguments, and expects
  :mod:`optparse`-style options:

  .. code-block:: python

      from celery import Celery
      from celery.bin import Option

      app = Celery()
      app.user_options['worker'].add(
          Option('--my-argument'),
      )

  See the :ref:`guide-extending` guide for more information.

- All events now include a ``pid`` field, which is the process id of the
  process that sent the event.

- Event heartbeats are now calculated based on the time when the event
  was received by the monitor, and not the time reported by the worker.

  This means that a worker with an out-of-sync clock will no longer
  show as 'Offline' in monitors.

  A warning is now emitted if the difference between the sender's
  time and the internal time is greater than 15 seconds, suggesting
  that the clocks are out of sync.
- Monotonic clock support.

  A monotonic clock is now used for timeouts and scheduling.

  The monotonic clock function is built-in starting from Python 3.4,
  but we also have fallback implementations for Linux and macOS.

- :program:`celery worker` now supports a new
  :option:`--detach <celery worker --detach>` argument to start
  the worker as a daemon in the background.

- :class:`@events.Receiver` now sets a ``local_received`` field for incoming
  events, which is set to the time when the event was received.

- :class:`@events.Dispatcher` now accepts a ``groups`` argument
  which decides a white-list of event groups that will be sent.

  The type of an event is a string separated by '-', where the part
  before the first '-' is the group. Currently there are only
  two groups: ``worker`` and ``task``.

  A dispatcher instantiated as follows:

  .. code-block:: pycon

      >>> app.events.Dispatcher(connection, groups=['worker'])

  will only send worker related events and silently drop any attempts
  to send events related to any other group.

- New :setting:`BROKER_FAILOVER_STRATEGY` setting.

  This setting can be used to change the transport fail-over strategy. It
  can either be a callable returning an iterable, or the name of a
  Kombu built-in failover strategy. The default is "round-robin".
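  A custom strategy could be sketched like this (a rough illustration only;
  it assumes the callable receives the list of broker URLs and must return
  an iterator over them, mirroring the built-in round-robin behavior):

  .. code-block:: python

      from itertools import cycle
      import random

      def shuffled_round_robin(hosts):
          # shuffle once, then cycle over the brokers forever.
          hosts = list(hosts)
          random.shuffle(hosts)
          return cycle(hosts)

      BROKER_FAILOVER_STRATEGY = shuffled_round_robin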
  Contributed by Matt Wise.

- ``Result.revoke`` will no longer wait for replies.

  You can add the ``reply=True`` argument if you really want to wait for
  responses from the workers.

- Better support for link and link_error tasks for chords.

  Contributed by Steeve Morin.

- Worker: Now emits a warning if the :setting:`CELERYD_POOL` setting is set
  to enable the eventlet/gevent pools.

  The `-P` option should always be used to select the eventlet/gevent pool
  to ensure that the patches are applied as early as possible.

  If you start the worker in a wrapper (like Django's :file:`manage.py`)
  then you must apply the patches manually, e.g. by creating an alternative
  wrapper that monkey patches at the start of the program before importing
  any other modules.

- There's now an ``inspect clock`` command which will collect the current
  logical clock value from workers.

- ``celery inspect stats`` now contains the process id of the worker's main
  process.

  Contributed by Mher Movsisyan.

- New remote control command to dump a worker's configuration.

  Example:

  .. code-block:: console

      $ celery inspect conf

  Configuration values will be converted to values supported by JSON
  where possible.

  Contributed by Mher Movsisyan.

- New settings :setting:`CELERY_EVENT_QUEUE_TTL` and
  :setting:`CELERY_EVENT_QUEUE_EXPIRES`.

  These control when a monitor's event queue is deleted, and for how long
  events published to that queue will be visible. Only supported on
  RabbitMQ.
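  For example (the values are illustrative, and both settings are assumed
  to be given in seconds):

  .. code-block:: python

      # messages in the monitor's event queue expire after 5 seconds
      CELERY_EVENT_QUEUE_TTL = 5
      # the event queue itself is deleted after 60 seconds of disuse
      CELERY_EVENT_QUEUE_EXPIRES = 60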
- New Couchbase result backend.

  This result backend enables you to store and retrieve task results
  using `Couchbase`_.

  See :ref:`conf-couchbase-result-backend` for more information
  about configuring this result backend.

  Contributed by Alain Masiero.

  .. _`Couchbase`: http://www.couchbase.com

- CentOS init-script now supports starting multiple worker instances.

  See the script header for details.

  Contributed by Jonathan Jordan.

- ``AsyncResult.iter_native`` now sets the default interval parameter
  to 0.5.

  Fix contributed by Idan Kamara.

- New setting :setting:`BROKER_LOGIN_METHOD`.

  This setting can be used to specify an alternate login method
  for the AMQP transports.
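  For example (an assumed value for illustration; ``EXTERNAL`` is typically
  used together with SSL client certificate authentication on RabbitMQ):

  .. code-block:: python

      BROKER_LOGIN_METHOD = 'EXTERNAL'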
  Contributed by Adrien Guinet.

- The ``dump_conf`` remote control command will now give the string
  representation for types that are not JSON compatible.

- Function ``celery.security.setup_security`` is now :func:`@setup_security`.

- Task retry now propagates the message expiry value (Issue #980).

  The value is forwarded as is, so the expiry time will not change.
  To update the expiry time you would have to pass a new ``expires``
  argument to ``retry()``.
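  A rough sketch of passing a new expiry on retry (assuming an
  ``app = Celery()`` instance and the :pypi:`requests` library; the values
  are illustrative):

  .. code-block:: python

      import requests

      @app.task(bind=True)
      def fetch(self, url):
          try:
              return requests.get(url).text
          except requests.RequestException as exc:
              # without ``expires`` the original message expiry is forwarded
              # as-is; passing a new value here updates it for the retry.
              raise self.retry(exc=exc, expires=120)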
- Worker now crashes if a channel error occurs.

  Channel errors are transport specific and are the exceptions listed in
  ``Connection.channel_errors``.

  For RabbitMQ this means that Celery will crash if the equivalence
  checks for one of the queues in :setting:`CELERY_QUEUES` fail, which
  makes sense since this is a scenario where manual intervention is
  required.

- Calling ``AsyncResult.get()`` on a chain now propagates errors for previous
  tasks (Issue #1014).

- The parent attribute of ``AsyncResult`` is now reconstructed when using
  JSON serialization (Issue #1014).

- Worker disconnection logs are now logged with severity warning instead of
  error.

  Contributed by Chris Adams.

- ``events.State`` no longer crashes when it receives unknown event types.

- SQLAlchemy Result Backend: New :setting:`CELERY_RESULT_DB_TABLENAMES`
  setting can be used to change the name of the database tables used.
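  A possible configuration could look like this (the dictionary keys and
  table names shown are assumptions for illustration):

  .. code-block:: python

      CELERY_RESULT_DB_TABLENAMES = {
          'task': 'myapp_taskmeta',
          'group': 'myapp_groupmeta',
      }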
  Contributed by Ryan Petrello.

- SQLAlchemy Result Backend: Now calls ``engine.dispose`` after fork
  (Issue #1564).

  If you create your own SQLAlchemy engines then you must also
  make sure that these are closed after fork in the worker:

  .. code-block:: python

      from multiprocessing.util import register_after_fork

      engine = create_engine(*engine_args)
      register_after_fork(engine, engine.dispose)

- A stress test suite for the Celery worker has been written.

  This is located in the ``funtests/stress`` directory in the git
  repository. There's a README file there to get you started.

- The logger named ``celery.concurrency`` has been renamed to
  ``celery.pool``.

- New command line utility ``celery graph``.

  This utility creates graphs in GraphViz dot format.

  You can create graphs from the currently installed bootsteps:

  .. code-block:: console

      # Create graph of currently installed bootsteps in both the worker
      # and consumer name-spaces.
      $ celery graph bootsteps | dot -T png -o steps.png

      # Graph of the consumer name-space only.
      $ celery graph bootsteps consumer | dot -T png -o consumer_only.png

      # Graph of the worker name-space only.
      $ celery graph bootsteps worker | dot -T png -o worker_only.png

  Or graphs of workers in a cluster:

  .. code-block:: console

      # Create graph from the current cluster
      $ celery graph workers | dot -T png -o workers.png

      # Create graph from a specified list of workers
      $ celery graph workers nodes:w1,w2,w3 | dot -T png workers.png

      # also specify the number of threads in each worker
      $ celery graph workers nodes:w1,w2,w3 threads:2,4,6

      # …also specify the broker and backend URLs shown in the graph
      $ celery graph workers broker:amqp:// backend:redis://

      # …also specify the max number of workers/threads shown (wmax/tmax),
      # enumerating anything that exceeds that number.
      $ celery graph workers wmax:10 tmax:3
- Changed the way that app instances are pickled.

  Apps can now define a ``__reduce_keys__`` method that is used instead
  of the old ``AppPickler`` attribute. E.g. if your app defines a custom
  'foo' attribute that needs to be preserved when pickling you can define
  a ``__reduce_keys__`` as such:

  .. code-block:: python

      import celery

      class Celery(celery.Celery):

          def __init__(self, *args, **kwargs):
              super(Celery, self).__init__(*args, **kwargs)
              self.foo = kwargs.get('foo')

          def __reduce_keys__(self):
              keys = super(Celery, self).__reduce_keys__()
              keys.update(foo=self.foo)
              return keys

  This is a much more convenient way to add support for pickling custom
  attributes. The old ``AppPickler`` is still supported but its use is
  discouraged and we would like to remove it in a future version.

- Ability to trace imports for debugging purposes.

  The :envvar:`C_IMPDEBUG` environment variable can be set to trace imports
  as they occur:

  .. code-block:: console

      $ C_IMPDEBUG=1 celery worker -l info

  .. code-block:: console

      $ C_IMPDEBUG=1 celery shell

- Message headers now available as part of the task request.

  Example adding and retrieving a header value:

  .. code-block:: python

      @app.task(bind=True)
      def t(self):
          return self.request.headers.get('sender')

      >>> t.apply_async(headers={'sender': 'George Costanza'})

- New :signal:`before_task_publish` signal dispatched before a task message
  is sent and can be used to modify the final message fields (Issue #1281).
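  For instance, a handler along these lines could stamp extra headers onto
  every outgoing task message (the header name and value are made up for
  the example):

  .. code-block:: python

      from celery.signals import before_task_publish

      @before_task_publish.connect
      def stamp_sender(sender=None, headers=None, body=None, **kwargs):
          # ``sender`` is the name of the task being published;
          # mutate the outgoing headers in place.
          if headers is not None:
              headers.setdefault('sender_host', 'web-1')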
- New :signal:`after_task_publish` signal replaces the old :signal:`task_sent`
  signal.

  The :signal:`task_sent` signal is now deprecated and should not be used.

- New :signal:`worker_process_shutdown` signal is dispatched in the
  prefork pool child processes as they exit.

  Contributed by Daniel M. Taub.

- ``celery.platforms.PIDFile`` renamed to :class:`celery.platforms.Pidfile`.

- MongoDB Backend: Can now be configured using a URL.

  See :ref:`example-mongodb-result-config`.
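  A URL-style configuration could look like this (the host, credentials,
  and database name are placeholders):

  .. code-block:: python

      CELERY_RESULT_BACKEND = 'mongodb://user:password@mongo.example.com:27017/celery_results'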
- MongoDB Backend: No longer using deprecated ``pymongo.Connection``.

- MongoDB Backend: Now disables ``auto_start_request``.

- MongoDB Backend: Now enables ``use_greenlets`` when eventlet/gevent is used.

- ``subtask()`` / ``maybe_subtask()`` renamed to
  ``signature()`` / ``maybe_signature()``.

  Aliases are still available for backwards compatibility.

- The ``correlation_id`` message property is now automatically set to the
  id of the task.

- The task message ``eta`` and ``expires`` fields now include timezone
  information.

- All result backends' ``store_result``/``mark_as_*`` methods must now accept
  a ``request`` keyword argument.

- Events now emit a warning if the broken ``yajl`` library is used.

- The :signal:`celeryd_init` signal now takes an extra keyword argument:
  ``option``.

  This is the mapping of parsed command line arguments, and can be used to
  prepare new preload arguments (``app.user_options['preload']``).

- New callback: :meth:`@on_configure`.

  This callback is called when an app is about to be configured (a
  configuration key is required).

- Worker: No longer forks on :sig:`HUP`.

  This means that the worker will reuse the same pid for better
  support with external process supervisors.

  Contributed by Jameel Al-Aziz.

- Worker: The log message ``Got task from broker …`` was changed to
  ``Received task …``.

- Worker: The log message ``Skipping revoked task …`` was changed
  to ``Discarding revoked task …``.

- Optimization: Improved performance of ``ResultSet.join_native()``.

  Contributed by Stas Rudakou.

- The :signal:`task_revoked` signal now accepts a new ``request`` argument
  (Issue #1555).

  The revoked signal is dispatched after the task request is removed from
  the stack, so it must instead use the
  :class:`~celery.worker.request.Request` object to get information
  about the task.
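  A handler might therefore look something like this (a sketch; the printed
  fields are examples of what the request object exposes):

  .. code-block:: python

      from celery.signals import task_revoked

      @task_revoked.connect
      def on_task_revoked(sender=None, request=None, terminated=False,
                          signum=None, expired=False, **kwargs):
          # the task id and name now come from the request object.
          print('revoked %s (%s)' % (request.id, request.name))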
- Worker: New :option:`-X <celery worker -X>` command line argument to
  exclude queues (Issue #1399).

  The :option:`-X <celery worker -X>` argument is the inverse of the
  :option:`-Q <celery worker -Q>` argument and accepts a list of queues
  to exclude (not consume from):

  .. code-block:: console

      # Consume from all queues in CELERY_QUEUES, but not the 'foo' queue.
      $ celery worker -A proj -l info -X foo

- Adds :envvar:`C_FAKEFORK` environment variable for simple
  init-script/:program:`celery multi` debugging.

  This means that you can now do:

  .. code-block:: console

      $ C_FAKEFORK=1 celery multi start 10

  or:

  .. code-block:: console

      $ C_FAKEFORK=1 /etc/init.d/celeryd start

  to skip the daemonization step, so that errors that would otherwise be
  hidden due to missing stdout/stderr become visible.

  A ``dryrun`` command has been added to the generic init-script that
  enables this option.

- New public API to push and pop from the current task stack:

  :func:`celery.app.push_current_task` and
  :func:`celery.app.pop_current_task`.

- ``RetryTaskError`` has been renamed to :exc:`~celery.exceptions.Retry`.

  The old name is still available for backwards compatibility.

- New semi-predicate exception :exc:`~celery.exceptions.Reject`.

  This exception can be raised to ``reject``/``requeue`` the task message;
  see :ref:`task-semipred-reject` for examples.
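  As a rough illustration (``acks_late`` is what makes rejection meaningful
  here, and the task body is only a placeholder):

  .. code-block:: python

      from celery.exceptions import Reject

      @app.task(bind=True, acks_late=True)
      def process(self, data):
          try:
              return {'length': len(data)}
          except MemoryError as exc:
              # requeue=False rejects the message outright
              # (it may be dead-lettered if the broker is configured for it).
              raise Reject(exc, requeue=False)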
- :ref:`Semipredicates <task-semipredicates>` documented: (Retry/Ignore/Reject).

.. _v310-removals:

Scheduled Removals
==================

- The ``BROKER_INSIST`` setting and the ``insist`` argument
  to ``~@connection`` are no longer supported.

- The ``CELERY_AMQP_TASK_RESULT_CONNECTION_MAX`` setting is no longer
  supported.

  Use :setting:`BROKER_POOL_LIMIT` instead.

- The ``CELERY_TASK_ERROR_WHITELIST`` setting is no longer supported.

  You should set the :class:`~celery.utils.mail.ErrorMail` attribute
  of the task class instead. You can also do this using
  :setting:`CELERY_ANNOTATIONS`:

  .. code-block:: python

      from celery import Celery
      from celery.utils.mail import ErrorMail

      class MyErrorMail(ErrorMail):
          whitelist = (KeyError, ImportError)

          def should_send(self, context, exc):
              return isinstance(exc, self.whitelist)

      app = Celery()
      app.conf.CELERY_ANNOTATIONS = {
          '*': {
              'ErrorMail': MyErrorMail,
          }
      }

- Functions that create broker connections no longer
  support the ``connect_timeout`` argument.

  This can now only be set using the :setting:`BROKER_CONNECTION_TIMEOUT`
  setting. This is because functions no longer create connections
  directly, but instead get them from the connection pool.

- The ``CELERY_AMQP_TASK_RESULT_EXPIRES`` setting is no longer supported.

  Use :setting:`CELERY_TASK_RESULT_EXPIRES` instead.
.. _v310-deprecations:

Deprecation Time-line Changes
=============================

See the :ref:`deprecation-timeline`.

.. _v310-fixes:

Fixes
=====

- AMQP Backend: join did not convert exceptions when using the json
  serializer.

- Non-abstract task classes are now shared between apps (Issue #1150).

  Note that non-abstract task classes should not be used in the
  new API. You should only create custom task classes when you
  use them as a base class in the ``@task`` decorator.

  This fix ensures backwards compatibility with older Celery versions,
  so that non-abstract task classes work even if a module is imported
  multiple times so that the app is also instantiated multiple times.

- Worker: Workaround for Unicode errors in logs (Issue #427).

- Task methods: ``.apply_async`` now works properly if the args list is None
  (Issue #1459).

- Eventlet/gevent/solo/threads pools now properly handle :exc:`BaseException`
  errors raised by tasks.

- ``autoscale`` and :control:`pool_grow`/:control:`pool_shrink` remote
  control commands will now also automatically increase and decrease the
  consumer prefetch count.

  Fix contributed by Daniel M. Taub.

- ``celery control pool_`` commands did not coerce string arguments to int.

- Redis/Cache chords: Callback result is now set to failure if the group
  disappeared from the database (Issue #1094).

- Worker: Now makes sure that the shutdown process is not initiated more
  than once.

- Programs: :program:`celery multi` now properly handles both ``-f`` and
  :option:`--logfile <celery worker --logfile>` options (Issue #1541).
.. _v310-internal:

Internal changes
================

- Module ``celery.task.trace`` has been renamed to :mod:`celery.app.trace`.

- Module ``celery.concurrency.processes`` has been renamed to
  :mod:`celery.concurrency.prefork`.

- Classes that no longer fall back to using the default app:

  - Result backends (:class:`celery.backends.base.BaseBackend`)
  - :class:`celery.worker.WorkController`
  - :class:`celery.worker.Consumer`
  - :class:`celery.worker.request.Request`

  This means that you have to pass a specific app when instantiating
  these classes.

- ``EventDispatcher.copy_buffer`` renamed to
  :meth:`@events.Dispatcher.extend_buffer`.

- Removed unused and never documented global instance
  ``celery.events.state.state``.

- :class:`@events.Receiver` is now a :class:`kombu.mixins.ConsumerMixin`
  subclass.

- :class:`celery.apps.worker.Worker` has been refactored as a subclass of
  :class:`celery.worker.WorkController`.

  This removes a lot of duplicate functionality.

- The ``Celery.with_default_connection`` method has been removed in favor
  of ``with app.connection_or_acquire`` (:meth:`@connection_or_acquire`).

- The ``celery.results.BaseDictBackend`` class has been removed and is
  replaced by :class:`celery.results.BaseBackend`.