
.. _whatsnew-3.1:

===========================================
What's new in Celery 3.1 (Cipater)
===========================================

.. sidebar:: Change history

    What's new documents describe the changes in major versions,
    we also have a :ref:`changelog` that lists the changes in bugfix
    releases (0.0.x), while older series are archived under the :ref:`history`
    section.

Celery is a simple, flexible and reliable distributed system to
process vast amounts of messages, while providing operations with
the tools required to maintain such a system.

It's a task queue with focus on real-time processing, while also
supporting task scheduling.

Celery has a large and diverse community of users and contributors;
you should come join us :ref:`on IRC <irc-channel>`
or :ref:`our mailing-list <mailing-list>`.

To read more about Celery you should go read the :ref:`introduction <intro>`.

While this version is backward compatible with previous versions,
it's important that you read the following section.

This version is officially supported on CPython 2.6, 2.7, 3.2 and 3.3,
as well as PyPy and Jython.

Highlights
==========

.. topic:: Overview

    - Now supports Django out of the box.

      See the new tutorial at :ref:`django-first-steps`.

    - XXX2

    - XXX3

      YYY3

.. _`website`: http://celeryproject.org/

.. _`django-celery changelog`:
    http://github.com/celery/django-celery/tree/master/Changelog

.. _`django-celery 3.0`: http://pypi.python.org/pypi/django-celery/

.. contents::
    :local:
    :depth: 2

.. _v310-important:

Important Notes
===============

XXX
---

YYY

.. _v310-news:

News
====

XXX
---

YYY

In Other News
-------------

- No longer supports Python 2.5.

    From this version Celery requires Python 2.6 or later.

    Instead of using the 2to3 porting tool we now have
    a dual codebase that runs on both Python 2 and Python 3.

- Now depends on :ref:`Kombu 3.0 <kombu:version-3.0.0>`.

- Now depends on :mod:`billiard` version 3.3.

- No longer depends on ``python-dateutil``.

    Instead a dependency on :mod:`pytz` has been added, which was already
    recommended in the documentation for accurate timezone support.

    This also means that the dependencies are the same for both Python 2
    and Python 3, and that the :file:`requirements/default-py3k.txt` file
    has been removed.

- Time limits can now be set by the client for individual tasks (Issue #802).

    You can set both hard and soft time limits using the ``timeout`` and
    ``soft_timeout`` calling options:

    .. code-block:: python

        >>> res = add.apply_async((2, 2), timeout=10, soft_timeout=8)

        >>> res = add.subtask((2, 2), timeout=10, soft_timeout=8)()

        >>> res = add.s(2, 2).set(timeout=10, soft_timeout=8)()

    Contributed by Mher Movsisyan.

- Old command-line programs deprecated and removed.

    The goal is that everyone should move to the new :program:`celery`
    umbrella command, so with this version we deprecate the old command
    names, and remove commands that are not used in init scripts.

    +-------------------+--------------+--------------------------+
    | Program           | New Status   | Replacement              |
    +===================+==============+==========================+
    | ``celeryd``       | *DEPRECATED* | :program:`celery worker` |
    +-------------------+--------------+--------------------------+
    | ``celerybeat``    | *DEPRECATED* | :program:`celery beat`   |
    +-------------------+--------------+--------------------------+
    | ``celeryd-multi`` | *DEPRECATED* | :program:`celery multi`  |
    +-------------------+--------------+--------------------------+
    | ``celeryctl``     | **REMOVED**  | :program:`celery`        |
    +-------------------+--------------+--------------------------+
    | ``celeryev``      | **REMOVED**  | :program:`celery events` |
    +-------------------+--------------+--------------------------+
    | ``camqadm``       | **REMOVED**  | :program:`celery amqp`   |
    +-------------------+--------------+--------------------------+

    Please see :program:`celery --help` for help using the umbrella command.

- Celery now supports Django out of the box.

    The fixes and improvements previously applied by the django-celery
    library are now automatically applied by core Celery when it detects
    that the :envvar:`DJANGO_SETTINGS_MODULE` environment variable is set.

    The distribution ships with a new example project using Django
    in :file:`examples/django`:

    http://github.com/celery/celery/tree/master/examples/django

    There are still some cases where you may want to keep using
    django-celery:

    - Celery does not implement the Django database or cache backends.
    - Celery does not automatically read configuration from Django settings.
    - Celery does not ship with the database-based periodic task
      scheduler.

    If you are using django-celery then it is crucial that you have
    ``djcelery.setup_loader()`` in your settings module, as this
    no longer happens as a side-effect of importing the :mod:`djcelery`
    module.

- Canvas: ``group.apply_async()`` and ``chain.apply_async()`` no longer
  start a separate task.

    Having the group and chord primitives support the "calling API" like
    other subtasks was a nice idea, but it was useless in practice and often
    confused users. If you still want this behavior you can create a task to
    do it for you, as sketched below.
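
    A minimal sketch of such a helper task (``call_canvas`` and the idea of
    passing the signature in its serialized dict form are illustrative, not
    an official API):

    .. code-block:: python

        from celery import signature

        @app.task
        def call_canvas(sig):
            # Rebuild the signature from its serialized (dict) form and
            # send it off, returning the id of the resulting task or group.
            return signature(sig).apply_async().id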

- Redis: Option to separate broadcast messages by virtual host (Issue #1490).

    Broadcast messages are seen by all virtual hosts when using the Redis
    transport. You can now fix this by enabling a prefix on all channels
    so that the messages are separated by virtual host::

        BROKER_TRANSPORT_OPTIONS = {'fanout_prefix': True}

    Note that you will not be able to communicate with workers running older
    versions or workers that don't have this setting enabled.

    This setting will be the default in the future, so better to migrate
    sooner rather than later.

- ``Signature.freeze()`` can now be used to "finalize" subtasks.

    Regular subtask:

    .. code-block:: python

        >>> s = add.s(2, 2)
        >>> result = s.freeze()
        >>> result
        <AsyncResult: ffacf44b-f8a1-44e9-80a3-703150151ef2>
        >>> s.delay()
        <AsyncResult: ffacf44b-f8a1-44e9-80a3-703150151ef2>

    Group:

    .. code-block:: python

        >>> g = group(add.s(2, 2), add.s(4, 4))
        >>> result = g.freeze()
        >>> result
        <GroupResult: e1094b1d-08fc-4e14-838e-6d601b99da6d [
            70c0fb3d-b60e-4b22-8df7-aa25b9abc86d,
            58fcd260-2e32-4308-a2ea-f5be4a24f7f4]>
        >>> g()
        <GroupResult: e1094b1d-08fc-4e14-838e-6d601b99da6d [70c0fb3d-b60e-4b22-8df7-aa25b9abc86d, 58fcd260-2e32-4308-a2ea-f5be4a24f7f4]>

- The consumer part of the worker has been rewritten to use Bootsteps.

    By writing bootsteps you can now easily extend the consumer part
    of the worker to add additional features, or even message consumers.

    See the :ref:`guide-extending` guide for more information.
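
    As a minimal sketch (adapted from the extending guide; the step shown
    here only prints a message and is purely illustrative):

    .. code-block:: python

        from celery import Celery
        from celery import bootsteps

        class InfoStep(bootsteps.Step):

            def __init__(self, parent, **kwargs):
                # ``parent`` is the Worker or Consumer instance the step
                # is being added to.
                print('{0!r} is in init'.format(parent))

        app = Celery(broker='amqp://')
        app.steps['worker'].add(InfoStep)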

- New Bootsteps implementation.

    The bootsteps and namespaces have been refactored for the better,
    but sadly this means that bootsteps written for older versions will
    not be compatible with this version.

    Bootsteps were never publicly documented and were considered
    experimental, so chances are no one has ever implemented custom
    bootsteps, but if you did please contact the mailing-list
    and we'll help you port them.

    - Module ``celery.worker.bootsteps`` renamed to :mod:`celery.bootsteps`.
    - The name of a bootstep no longer contains the name of the namespace.
    - A bootstep can now be part of multiple namespaces.
    - Namespaces must instantiate individual bootsteps, and
      there's no global registry of bootsteps.

- New result backend with RPC semantics (``rpc``).

    This version of the ``amqp`` result backend is a very good alternative
    for classic RPC scenarios, where the process that initiates the task
    is always the process that retrieves the result.

    It uses Kombu to send and retrieve results, and each client
    will create a unique queue for replies to be sent to, avoiding
    the significant overhead of the original ``amqp`` backend which creates
    one queue per task. Keep in mind that it will not be possible to
    retrieve the result from another process, and that results sent using
    this backend are not persistent, so they will not survive a broker
    restart.

    It has only been tested with the AMQP and Redis transports.
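
    A minimal configuration sketch (the broker URL is just an example):

    .. code-block:: python

        from celery import Celery

        # ``rpc://`` selects the new RPC result backend.
        app = Celery('proj', broker='amqp://', backend='rpc://')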

- App instances can now add additional command line options
  to the worker and beat programs.

    The :attr:`@Celery.user_options` attribute can be used
    to add additional command-line arguments, and expects
    optparse-style options:

    .. code-block:: python

        from celery import Celery
        from optparse import make_option as Option

        app = Celery()
        app.user_options['worker'].add(
            Option('--my-argument'),
        )

    See :ref:`guide-extending` for more information.

- Events are now ordered using logical time.

    Timestamps are not a reliable way to order events in a distributed
    system: for one, the floating point value does not have enough
    precision, but it's also impossible to keep physical clocks in sync.

    Celery event messages have included a logical clock value for some time,
    but starting with this version that field is also used to order them
    (that is, if the monitor is using :mod:`celery.events.state`).

    The logical clock is currently implemented using Lamport timestamps,
    which do not have a high degree of accuracy, but should be good
    enough to casually order the events.
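
    For illustration, the ordering rule behind Lamport timestamps can be
    sketched in a few lines (this is a simplification, not Celery's internal
    implementation):

    .. code-block:: python

        class LamportClock(object):
            """Each node keeps a counter that is incremented for local
            events and fast-forwarded when a message carrying a higher
            value is received."""

            def __init__(self):
                self.value = 0

            def forward(self):
                # Called before an event is sent.
                self.value += 1
                return self.value

            def adjust(self, other):
                # Called when an event carrying clock value ``other`` arrives.
                self.value = max(self.value, other) + 1
                return self.value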

- All events now include a ``pid`` field, which is the process id of the
  process that sent the event.

- Events now support timezones.

    A new ``utcoffset`` field is now sent with every event. This is a
    signed integer telling the difference from UTC time in hours,
    so e.g. an event sent from the Europe/London timezone in daylight
    saving time will have an offset of 1.

    :class:`@events.Receiver` will automatically convert the timestamps
    to the destination timezone.

- Event heartbeats are now calculated based on the time when the event
  was received by the monitor, and not the time reported by the worker.

    This means that a worker with an out-of-sync clock will no longer
    show as 'Offline' in monitors.

    A warning is now emitted if the difference between the sender's
    time and the internal time is greater than 15 seconds, suggesting
    that the clocks are out of sync.

- :program:`celery worker` now supports a ``--detach`` argument to start
  the worker as a daemon in the background.

- :class:`@events.Receiver` now sets a ``local_received`` field for incoming
  events, which is set to the time when the event was received.
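
    For example, a small monitor sketch that prints the new field (the
    handler below is purely illustrative):

    .. code-block:: python

        from celery import Celery

        app = Celery(broker='amqp://')

        def on_event(event):
            # ``local_received`` is the local timestamp set by the Receiver.
            print(event['type'], event.get('local_received'))

        with app.connection() as connection:
            recv = app.events.Receiver(connection, handlers={'*': on_event})
            recv.capture(limit=10)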

- :class:`@events.Dispatcher` now accepts a ``groups`` argument
  that acts as a whitelist of the event groups that will be sent.

    The type of an event is a string separated by '-', where the part
    before the first '-' is the group. Currently there are only
    two groups: ``worker`` and ``task``.

    A dispatcher instantiated as follows:

    .. code-block:: python

        app.events.Dispatcher(connection, groups=['worker'])

    will only send worker-related events and silently drop any attempts
    to send events related to any other group.

- Better support for ``link`` and ``link_error`` tasks for chords.

    Contributed by Steeve Morin.

- There's now an ``inspect clock`` command which will collect the current
  logical clock value from workers.

- ``celery inspect stats`` now contains the process id of the worker's main
  process.

    Contributed by Mher Movsisyan.

- New remote control command to dump a worker's configuration.

    Example:

    .. code-block:: bash

        $ celery inspect conf

    Configuration values will be converted to values supported by JSON
    where possible.

    Contributed by Mher Movsisyan.

- Now supports Setuptools extra requirements.

    +-------------+-------------------------+---------------------------+
    | Extension   | Requirement entry       | Type                      |
    +=============+=========================+===========================+
    | Redis       | ``celery[redis]``       | transport, result backend |
    +-------------+-------------------------+---------------------------+
    | MongoDB     | ``celery[mongodb]``     | transport, result backend |
    +-------------+-------------------------+---------------------------+
    | CouchDB     | ``celery[couchdb]``     | transport                 |
    +-------------+-------------------------+---------------------------+
    | Beanstalk   | ``celery[beanstalk]``   | transport                 |
    +-------------+-------------------------+---------------------------+
    | ZeroMQ      | ``celery[zeromq]``      | transport                 |
    +-------------+-------------------------+---------------------------+
    | Zookeeper   | ``celery[zookeeper]``   | transport                 |
    +-------------+-------------------------+---------------------------+
    | SQLAlchemy  | ``celery[sqlalchemy]``  | transport, result backend |
    +-------------+-------------------------+---------------------------+
    | librabbitmq | ``celery[librabbitmq]`` | transport (C amqp client) |
    +-------------+-------------------------+---------------------------+

    Examples using :program:`pip install`:

    .. code-block:: bash

        pip install celery[redis]
        pip install celery[librabbitmq]
        pip install celery[redis,librabbitmq]

        pip install celery[mongodb]
        pip install celery[couchdb]
        pip install celery[beanstalk]
        pip install celery[zeromq]
        pip install celery[zookeeper]
        pip install celery[sqlalchemy]

- New settings :setting:`CELERY_EVENT_QUEUE_TTL` and
  :setting:`CELERY_EVENT_QUEUE_EXPIRES`.

    These control when a monitor's event queue is deleted, and for how long
    events published to that queue will be visible. Only supported on
    RabbitMQ.
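
    A configuration sketch (the values are in seconds and purely
    illustrative):

    .. code-block:: python

        CELERY_EVENT_QUEUE_TTL = 5        # unconsumed events expire after 5s.
        CELERY_EVENT_QUEUE_EXPIRES = 60   # idle event queues are deleted after 60s.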

- New Couchbase result backend.

    This result backend enables you to store and retrieve task results
    using `Couchbase`_.

    See :ref:`conf-couchbase-result-backend` for more information
    about configuring this result backend.
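
    A configuration sketch (the URL details shown here are assumptions;
    see the configuration reference for the supported options):

    .. code-block:: python

        CELERY_RESULT_BACKEND = 'couchbase://user:password@localhost:8091/bucket'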

    Contributed by Alain Masiero.

    .. _`Couchbase`: http://www.couchbase.com

- The CentOS init script now supports starting multiple worker instances.

    See the script header for details.

    Contributed by Jonathan Jordan.

- ``AsyncResult.iter_native`` now sets the default ``interval`` parameter
  to 0.5.

    Fix contributed by Idan Kamara.

- Worker node names now consist of a name and a hostname separated by '@'.

    This change was made to more easily identify multiple instances running
    on the same machine.

    If a custom name is not specified then the
    worker will use the name 'celery' by default, resulting in a
    fully qualified node name of 'celery@hostname':

    .. code-block:: bash

        $ celery worker -n example.com
        celery@example.com

    To set the name you must include the @:

    .. code-block:: bash

        $ celery worker -n worker1@example.com
        worker1@example.com

    This also means that the worker will identify itself using the full
    nodename in events and broadcast messages, so where before
    a worker would identify itself as 'worker1.example.com', it will now
    use 'celery@worker1.example.com'.

    Remember that the ``-n`` argument also supports simple variable
    substitutions, so if the current hostname is *jerry.example.com*
    then ``%h`` will expand into that:

    .. code-block:: bash

        $ celery worker -n worker1@%h
        worker1@jerry.example.com

    The table of substitutions is as follows:

    +---------------+----------------------------------------+
    | Variable      | Substitution                           |
    +===============+========================================+
    | ``%h``        | Full hostname (including domain name)  |
    +---------------+----------------------------------------+
    | ``%d``        | Domain name only                       |
    +---------------+----------------------------------------+
    | ``%n``        | Hostname only (without domain name)    |
    +---------------+----------------------------------------+
    | ``%%``        | The character ``%``                    |
    +---------------+----------------------------------------+

- The task decorator can now create "bound tasks".

    This means that the function will be a method in the resulting
    task class and so will have a ``self`` argument that can be used
    to refer to the current task:

    .. code-block:: python

        @app.task(bind=True)
        def send_twitter_status(self, oauth, tweet):
            try:
                twitter = Twitter(oauth)
                twitter.update_status(tweet)
            except (Twitter.FailWhaleError, Twitter.LoginError) as exc:
                raise self.retry(exc=exc)

    Using *bound tasks* is now the recommended approach whenever
    you need access to the current task or request context.
    Previously one would have to refer to the name of the task
    instead (``send_twitter_status.retry``), but this could lead to problems
    in some instances where the registered task was no longer the same
    object.

- Workers now synchronize revoked tasks with their neighbors.

    This happens at startup and causes a one second startup delay
    to collect broadcast responses from other workers.

- The worker's logical clock value is now persisted so that the clock
  is not reset when a worker restarts.

    The logical clock is also synchronized with other nodes
    in the same cluster (neighbors), so this means that the logical
    epoch will start at the point when the first worker in the cluster
    starts.

    You may notice that the logical clock is an integer value that increases
    very rapidly. It will take several millennia before the clock overflows
    64 bits, so this is not a concern.

- New setting :setting:`BROKER_LOGIN_METHOD`.

    This setting can be used to specify an alternate login method
    for the AMQP transports.

    Contributed by Adrien Guinet.
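
    For example, to use the AMQP ``EXTERNAL`` login method (commonly used
    together with client SSL certificates):

    .. code-block:: python

        BROKER_LOGIN_METHOD = 'EXTERNAL'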

- The ``dump_conf`` remote control command will now give the string
  representation for types that are not JSON compatible.

- Calling a subtask will now execute the task directly, as documented.

    A misunderstanding led to ``Signature.__call__`` being an alias of
    ``.delay``, but this does not conform to the calling API of ``Task``,
    which should call the underlying task method.

    This means that:

    .. code-block:: python

        @app.task
        def add(x, y):
            return x + y

        add.s(2, 2)()

    does the same as calling the task directly:

    .. code-block:: python

        add(2, 2)

- The function ``celery.security.setup_security`` is now
  :func:`celery.setup_security`.

- The message ``expires`` value is now forwarded at retry (Issue #980).

    The value is forwarded as is, so the expiry time will not change.
    To update the expiry time you would have to pass the expires
    argument to ``retry()``.
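
    A sketch of refreshing the expiry time when retrying (``handle`` and
    ``TemporaryError`` are hypothetical; assumes a bound task):

    .. code-block:: python

        @app.task(bind=True)
        def process(self, record):
            try:
                handle(record)
            except TemporaryError as exc:
                # ``expires`` is passed on to the retried message,
                # replacing the forwarded value.
                raise self.retry(exc=exc, countdown=10, expires=120)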

- The worker now crashes if a channel error occurs.

    Channel errors are transport specific and are the exceptions listed in
    ``Connection.channel_errors``.

    For RabbitMQ this means that Celery will crash if the equivalence
    check for one of the queues in :setting:`CELERY_QUEUES` fails, which
    makes sense since this is a scenario where manual intervention is
    required.

- Calling ``AsyncResult.get()`` on a chain now propagates errors for previous
  tasks (Issue #1014).

- The parent attribute of ``AsyncResult`` is now reconstructed when using
  JSON serialization (Issue #1014).

- Worker disconnection logs are now logged with severity warning instead of
  error.

    Contributed by Chris Adams.

- ``events.State`` no longer crashes when it receives unknown event types.

- SQLAlchemy result backend: the new :setting:`CELERY_RESULT_DB_TABLENAMES`
  setting can be used to change the name of the database tables used.

    Contributed by Ryan Petrello.
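
    A configuration sketch (the key names and table names shown here are
    assumptions; see the setting's documentation for details):

    .. code-block:: python

        CELERY_RESULT_DB_TABLENAMES = {
            'task': 'myapp_taskmeta',
            'group': 'myapp_groupmeta',
        }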

- A stress test suite for the Celery worker has been written.

    This is located in the ``funtests/stress`` directory in the git
    repository. There's a README file there to get you started.

- The logger named ``celery.concurrency`` has been renamed to ``celery.pool``.

- New command line utility ``celery graph``.

    This utility creates graphs in GraphViz dot format.

    You can create graphs from the currently installed bootsteps:

    .. code-block:: bash

        # Create graph of currently installed bootsteps in both the worker
        # and consumer namespaces.
        $ celery graph bootsteps | dot -T png -o steps.png

        # Graph of the consumer namespace only.
        $ celery graph bootsteps consumer | dot -T png -o consumer_only.png

        # Graph of the worker namespace only.
        $ celery graph bootsteps worker | dot -T png -o worker_only.png

    Or graphs of workers in a cluster:

    .. code-block:: bash

        # Create graph from the current cluster
        $ celery graph workers | dot -T png -o workers.png

        # Create graph from a specified list of workers
        $ celery graph workers nodes:w1,w2,w3 | dot -T png -o workers.png

        # also specify the number of threads in each worker
        $ celery graph workers nodes:w1,w2,w3 threads:2,4,6

        # ...also specify the broker and backend URLs shown in the graph
        $ celery graph workers broker:amqp:// backend:redis://

        # ...also specify the max number of workers/threads shown (wmax/tmax),
        # enumerating anything that exceeds that number.
        $ celery graph workers wmax:10 tmax:3

- Changed the way that app instances are pickled.

    Apps can now define a ``__reduce_keys__`` method that is used instead
    of the old ``AppPickler`` attribute. E.g. if your app defines a custom
    'foo' attribute that needs to be preserved when pickling you can define
    a ``__reduce_keys__`` like this:

    .. code-block:: python

        import celery

        class Celery(celery.Celery):

            def __init__(self, *args, **kwargs):
                super(Celery, self).__init__(*args, **kwargs)
                self.foo = kwargs.get('foo')

            def __reduce_keys__(self):
                # Merge into a new dict: ``dict.update()`` returns None,
                # so the parent keys must be copied instead.
                return dict(super(Celery, self).__reduce_keys__(),
                            foo=self.foo)

    This is a much more convenient way to add support for pickling custom
    attributes. The old ``AppPickler`` is still supported but its use is
    discouraged and we would like to remove it in a future version.

- Ability to trace imports for debugging purposes.

    The :envvar:`C_IMPDEBUG` environment variable can be set to trace
    imports as they occur:

    .. code-block:: bash

        $ C_IMPDEBUG=1 celery worker -l info

    .. code-block:: bash

        $ C_IMPDEBUG=1 celery shell

- :class:`celery.apps.worker.Worker` has been refactored as a subclass of
  :class:`celery.worker.WorkController`.

    This removes a lot of duplicate functionality.

- :class:`@events.Receiver` is now a :class:`kombu.mixins.ConsumerMixin`
  subclass.

- ``celery.platforms.PIDFile`` renamed to :class:`celery.platforms.Pidfile`.

- ``celery.results.BaseDictBackend`` has been removed, replaced by
  :class:`celery.results.BaseBackend`.

.. _v310-experimental:

Experimental
============

XXX
---

YYY

.. _v310-removals:

Scheduled Removals
==================

- The ``BROKER_INSIST`` setting is no longer supported.

- The ``CELERY_AMQP_TASK_RESULT_CONNECTION_MAX`` setting is no longer
  supported.

    Use :setting:`BROKER_POOL_LIMIT` instead.

- The ``CELERY_TASK_ERROR_WHITELIST`` setting is no longer supported.

    You should set the :class:`~celery.utils.mail.ErrorMail` attribute
    of the task class instead. You can also do this using
    :setting:`CELERY_ANNOTATIONS`:

    .. code-block:: python

        from celery import Celery
        from celery.utils.mail import ErrorMail

        class MyErrorMail(ErrorMail):
            whitelist = (KeyError, ImportError)

            def should_send(self, context, exc):
                return isinstance(exc, self.whitelist)

        app = Celery()
        app.conf.CELERY_ANNOTATIONS = {
            '*': {
                'ErrorMail': MyErrorMail,
            }
        }

- The ``CELERY_AMQP_TASK_RESULT_EXPIRES`` setting is no longer supported.

    Use :setting:`CELERY_TASK_RESULT_EXPIRES` instead.

- Functions that establish broker connections no longer
  support the ``connect_timeout`` argument.

    This can now only be set using the :setting:`BROKER_CONNECTION_TIMEOUT`
    setting. This is because the functions no longer create connections
    directly, but instead get them from the connection pool.

- The ``Celery.with_default_connection`` method has been removed in favor
  of ``with app.connection_or_acquire()``.
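
    A minimal sketch of the replacement pattern:

    .. code-block:: python

        with app.connection_or_acquire() as connection:
            # Use the connection; it is returned to the pool afterwards.
            ...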

.. _v310-deprecations:

Deprecations
============

See the :ref:`deprecation-timeline`.

- XXX

    YYY

.. _v310-fixes:

Fixes
=====

- XXX

.. _v310-internal:

Internal changes
================

- Module ``celery.task.trace`` has been renamed to :mod:`celery.app.trace`.

- Classes that no longer fall back to using the default app:

    - Result backends (:class:`celery.backends.base.BaseBackend`)
    - :class:`celery.worker.WorkController`
    - :class:`celery.worker.Consumer`
    - :class:`celery.worker.job.Request`

    This means that you have to pass a specific app when instantiating
    these classes.

- ``EventDispatcher.copy_buffer`` renamed to
  ``EventDispatcher.extend_buffer``.

- Removed the unused and never documented global instance
  ``celery.events.state.state``.