.. _whatsnew-3.1:

===========================================
 What's new in Celery 3.1 (Cipater)
===========================================

.. sidebar:: Change history

    What's new documents describe the changes in major versions,
    we also have a :ref:`changelog` that lists the changes in bugfix
    releases (0.0.x), while older series are archived under the :ref:`history`
    section.

Celery is a simple, flexible and reliable distributed system to
process vast amounts of messages, while providing operations with
the tools required to maintain such a system.

It's a task queue with focus on real-time processing, while also
supporting task scheduling.

Celery has a large and diverse community of users and contributors,
you should come join us :ref:`on IRC <irc-channel>`
or :ref:`our mailing-list <mailing-list>`.

To read more about Celery you should go read the :ref:`introduction <intro>`.

While this version is backward compatible with previous versions
it's important that you read the following section.

This version is officially supported on CPython 2.6, 2.7, 3.2 and 3.3,
as well as PyPy and Jython.

Highlights
==========

.. topic:: Overview

    - Now supports Django out of the box.

      See the new tutorial at :ref:`django-first-steps`.

    - XXX2

    - XXX3

      YYY3

.. _`website`: http://celeryproject.org/

.. _`django-celery changelog`:
    http://github.com/celery/django-celery/tree/master/Changelog

.. _`django-celery 3.0`: http://pypi.python.org/pypi/django-celery/

.. contents::
    :local:
    :depth: 2

.. _v310-important:

Important Notes
===============

XXX
---

YYY

.. _v310-news:

News
====

XXX
---

YYY

In Other News
-------------

- No longer supports Python 2.5.

    From this version Celery requires Python 2.6 or later.

    Instead of using the 2to3 porting tool we now have
    a dual codebase that runs on both Python 2 and Python 3.

- Now depends on :ref:`Kombu 3.0 <kombu:version-3.0.0>`.

- Now depends on :mod:`billiard` version 3.3.

- No longer depends on ``python-dateutil``.

    Instead a dependency on :mod:`pytz` has been added, which was already
    recommended in the documentation for accurate timezone support.

    This also means that the dependencies are the same on both Python 2 and
    Python 3, and that the :file:`requirements/default-py3k.txt` file has
    been removed.

- Time limits can now be set by the client for individual tasks (Issue #802).

    You can set both hard and soft time limits using the ``timeout`` and
    ``soft_timeout`` calling options:

    .. code-block:: python

        >>> res = add.apply_async((2, 2), timeout=10, soft_timeout=8)

        >>> res = add.subtask((2, 2), timeout=10, soft_timeout=8)()

        >>> res = add.s(2, 2).set(timeout=10, soft_timeout=8)()

    Contributed by Mher Movsisyan.

- Old command-line programs removed and deprecated.

    The goal is that everyone should move to the new :program:`celery` umbrella
    command, so with this version we deprecate the old command names,
    and remove commands that are not used in init scripts.

    +-------------------+--------------+-------------------------------------+
    | Program           | New Status   | Replacement                         |
    +===================+==============+=====================================+
    | ``celeryd``       | *DEPRECATED* | :program:`celery worker`            |
    +-------------------+--------------+-------------------------------------+
    | ``celerybeat``    | *DEPRECATED* | :program:`celery beat`              |
    +-------------------+--------------+-------------------------------------+
    | ``celeryd-multi`` | *DEPRECATED* | :program:`celery multi`             |
    +-------------------+--------------+-------------------------------------+
    | ``celeryctl``     | **REMOVED**  | :program:`celery`                   |
    +-------------------+--------------+-------------------------------------+
    | ``celeryev``      | **REMOVED**  | :program:`celery events`            |
    +-------------------+--------------+-------------------------------------+
    | ``camqadm``       | **REMOVED**  | :program:`celery amqp`              |
    +-------------------+--------------+-------------------------------------+

    Please see :program:`celery --help` for help using the umbrella command.

- Celery now supports Django out of the box.

    The fixes and improvements applied by the django-celery library are now
    automatically applied by core Celery when it detects that
    the :envvar:`DJANGO_SETTINGS_MODULE` environment variable is set.

    The distribution ships with a new example project using Django
    in :file:`examples/django`:

    http://github.com/celery/celery/tree/master/examples/django

    There are still cases where you would want to use django-celery, as:

    - Celery does not implement the Django database or cache backends.
    - Celery does not automatically read configuration from Django settings.
    - Celery does not ship with the database-based periodic task
      scheduler.

    If you are using django-celery then it is crucial that you have
    ``djcelery.setup_loader()`` in your settings module, as this
    no longer happens as a side-effect of importing the :mod:`djcelery`
    module.
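
    For example (a minimal sketch; the broker URL is illustrative and the
    rest of the Django settings module is omitted):

    .. code-block:: python

        # settings.py (sketch)
        import djcelery
        djcelery.setup_loader()

        BROKER_URL = 'amqp://guest:guest@localhost:5672//'  # illustrative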

- ``Signature.freeze()`` can now be used to "finalize" subtasks.

    Regular subtask:

    .. code-block:: python

        >>> s = add.s(2, 2)
        >>> result = s.freeze()
        >>> result
        <AsyncResult: ffacf44b-f8a1-44e9-80a3-703150151ef2>
        >>> s.delay()
        <AsyncResult: ffacf44b-f8a1-44e9-80a3-703150151ef2>

    Group:

    .. code-block:: python

        >>> g = group(add.s(2, 2), add.s(4, 4))
        >>> result = g.freeze()
        <GroupResult: e1094b1d-08fc-4e14-838e-6d601b99da6d [
            70c0fb3d-b60e-4b22-8df7-aa25b9abc86d,
            58fcd260-2e32-4308-a2ea-f5be4a24f7f4]>
        >>> g()
        <GroupResult: e1094b1d-08fc-4e14-838e-6d601b99da6d [70c0fb3d-b60e-4b22-8df7-aa25b9abc86d, 58fcd260-2e32-4308-a2ea-f5be4a24f7f4]>

- The consumer part of the worker has been rewritten to use Bootsteps.

    By writing bootsteps you can now easily extend the consumer part
    of the worker to add additional features, or even message consumers.

    See the :ref:`guide-extending` guide for more information,
    and the sketch after the next item for what a custom bootstep
    may look like.

- New Bootsteps implementation.

    The bootsteps and namespaces have been refactored for the better,
    sadly this means that bootsteps written for older versions will
    not be compatible with this version.

    Bootsteps were never publicly documented and were considered
    experimental, so chances are no one has ever implemented custom
    bootsteps, but if you did please contact the mailing-list
    and we'll help you port them.

    - Module ``celery.worker.bootsteps`` renamed to :mod:`celery.bootsteps`.
    - The name of a bootstep no longer contains the name of the namespace.
    - A bootstep can now be part of multiple namespaces.
    - Namespaces must instantiate individual bootsteps, and
      there's no global registry of bootsteps.
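
    A minimal sketch of what a custom bootstep may look like under the new
    API (the step name and the printed messages are illustrative only):

    .. code-block:: python

        from celery import Celery, bootsteps

        class InfoStep(bootsteps.StartStopStep):
            """Illustrative step that prints a message at worker
            startup and shutdown."""

            def start(self, worker):
                print('worker is starting')

            def stop(self, worker):
                print('worker is shutting down')

        app = Celery(broker='amqp://')
        # Register the step with the worker namespace (blueprint).
        app.steps['worker'].add(InfoStep)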

- New result backend with RPC semantics (``rpc``).

    This version of the ``amqp`` result backend is a very good alternative
    to use in classical RPC scenarios, where the process that initiates
    the task is always the process to retrieve the result.

    It uses Kombu to send and retrieve results, and each client
    will create a unique queue for replies to be sent to. This avoids
    the significant overhead of the original ``amqp`` backend, which creates
    one queue per task. Note, however, that the result can only be retrieved
    by the process that initiated the task, and that results sent using this
    backend are not persistent, so they will not survive a broker restart.

    It has only been tested with the AMQP and Redis transports.
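
    To select it, configure the result backend as usual (a minimal sketch;
    the broker URL is illustrative):

    .. code-block:: python

        from celery import Celery

        # 'rpc://' selects the new RPC result backend.
        app = Celery('proj', broker='amqp://', backend='rpc://')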

- App instances can now add additional command line options
  to the worker and beat programs.

    The :attr:`@Celery.user_options` attribute can be used
    to add additional command-line arguments, and expects
    optparse-style options:

    .. code-block:: python

        from celery import Celery
        from optparse import make_option as Option

        app = Celery()
        app.user_options['worker'].add(
            Option('--my-argument'),
        )

    See :ref:`guide-extending` for more information.

- Events are now ordered using logical time.

    Timestamps are not a reliable way to order events in a distributed system,
    for one the floating point value does not have enough precision, but
    it's also impossible to keep physical clocks in sync.

    Celery event messages have included a logical clock value for some time,
    but starting with this version that field is also used to order them
    (that is, if the monitor is using :mod:`celery.events.state`).

    The logical clock is currently implemented using Lamport timestamps,
    which does not have a high degree of accuracy, but should be good
    enough to causally order the events.
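
    As an illustration of the general technique (this is not Celery's
    internal implementation), a Lamport clock keeps a counter that is
    ticked on every local event and merged with the sender's value on
    every receive:

    .. code-block:: python

        # Illustrative Lamport clock sketch.
        class LamportClock(object):

            def __init__(self):
                self.value = 0

            def forward(self):
                # Tick before sending a message or recording a local event.
                self.value += 1
                return self.value

            def adjust(self, other):
                # Merge with the clock value received from another node.
                self.value = max(self.value, other) + 1
                return self.value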

- All events now include a ``pid`` field, which is the process id of the
  process that sent the event.

- Events now support timezones.

    A new ``utcoffset`` field is now sent with every event. This is a
    signed integer telling the difference from UTC time in hours,
    so e.g. an event sent from the Europe/London timezone in daylight saving
    time will have an offset of 1.

    :class:`@events.Receiver` will automatically convert the timestamps
    to the destination timezone.

- Event heartbeats are now calculated based on the time when the event
  was received by the monitor, and not the time reported by the worker.

    This means that a worker with an out-of-sync clock will no longer
    show as 'Offline' in monitors.

    A warning is now emitted if the difference between the sender's
    time and the internal time is greater than 15 seconds, suggesting
    that the clocks are out of sync.

- :program:`celery worker` now supports a ``--detach`` argument to start
  the worker as a daemon in the background.
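
    For example (the log file and pid file locations are illustrative):

    .. code-block:: bash

        $ celery worker -l info --detach \
            --logfile=/var/log/celery/worker.log \
            --pidfile=/var/run/celery/worker.pid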

- :class:`@events.Receiver` now sets a ``local_received`` field for incoming
  events, which is set to the time when the event was received.

- :class:`@events.Dispatcher` now accepts a ``groups`` argument
  which specifies a whitelist of event groups that will be sent.

    The type of an event is a string separated by '-', where the part
    before the first '-' is the group. Currently there are only
    two groups: ``worker`` and ``task``.

    A dispatcher instantiated as follows:

    .. code-block:: python

        app.events.Dispatcher(connection, groups=['worker'])

    will only send worker related events and silently drop any attempts
    to send events related to any other group.

- Better support for ``link`` and ``link_error`` tasks for chords.

    Contributed by Steeve Morin.

- There's now an ``inspect clock`` command which will collect the current
  logical clock value from workers.
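
    For example:

    .. code-block:: bash

        $ celery inspect clock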

- ``celery inspect stats`` now contains the process id of the worker's main
  process.

    Contributed by Mher Movsisyan.

- New remote control command to dump a worker's configuration.

    Example:

    .. code-block:: bash

        $ celery inspect conf

    Configuration values will be converted to values supported by JSON
    where possible.

    Contributed by Mher Movsisyan.

- Now supports Setuptools extra requirements.

    +-------------+-------------------------+---------------------------+
    | Extension   | Requirement entry       | Type                      |
    +=============+=========================+===========================+
    | Redis       | ``celery[redis]``       | transport, result backend |
    +-------------+-------------------------+---------------------------+
    | MongoDB     | ``celery[mongodb]``     | transport, result backend |
    +-------------+-------------------------+---------------------------+
    | CouchDB     | ``celery[couchdb]``     | transport                 |
    +-------------+-------------------------+---------------------------+
    | Beanstalk   | ``celery[beanstalk]``   | transport                 |
    +-------------+-------------------------+---------------------------+
    | ZeroMQ      | ``celery[zeromq]``      | transport                 |
    +-------------+-------------------------+---------------------------+
    | Zookeeper   | ``celery[zookeeper]``   | transport                 |
    +-------------+-------------------------+---------------------------+
    | SQLAlchemy  | ``celery[sqlalchemy]``  | transport, result backend |
    +-------------+-------------------------+---------------------------+
    | librabbitmq | ``celery[librabbitmq]`` | transport (C amqp client) |
    +-------------+-------------------------+---------------------------+

    Examples using :program:`pip install`:

    .. code-block:: bash

        pip install celery[redis]
        pip install celery[librabbitmq]

        pip install celery[redis,librabbitmq]

        pip install celery[mongodb]
        pip install celery[couchdb]
        pip install celery[beanstalk]
        pip install celery[zeromq]
        pip install celery[zookeeper]
        pip install celery[sqlalchemy]

- New settings :setting:`CELERY_EVENT_QUEUE_TTL` and
  :setting:`CELERY_EVENT_QUEUE_EXPIRES`.

    These control when a monitor's event queue is deleted, and for how long
    events published to that queue will be visible. Only supported on
    RabbitMQ.
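
    A minimal configuration sketch (the values are illustrative and assumed
    here to be expressed in seconds):

    .. code-block:: python

        # Event messages expire after 5 seconds, and the event queue
        # itself is deleted after 60 seconds of disuse.
        app.conf.CELERY_EVENT_QUEUE_TTL = 5.0
        app.conf.CELERY_EVENT_QUEUE_EXPIRES = 60.0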

- New Couchbase result backend.

    This result backend enables you to store and retrieve task results
    using `Couchbase`_.

    See :ref:`conf-couchbase-result-backend` for more information
    about configuring this result backend.

    Contributed by Alain Masiero.

    .. _`Couchbase`: http://www.couchbase.com

- CentOS init script now supports starting multiple worker instances.

    See the script header for details.

    Contributed by Jonathan Jordan.

- ``AsyncResult.iter_native`` now sets the default ``interval`` parameter to 0.5.

    Fix contributed by Idan Kamara.

- Worker node names now consist of a name and a hostname separated by '@'.

    This change makes it easier to identify multiple instances running
    on the same machine.

    If a custom name is not specified then the
    worker will use the name 'celery' by default, resulting in a
    fully qualified node name of 'celery@hostname':

    .. code-block:: bash

        $ celery worker -n example.com
        celery@example.com

    To set the name you must include the @:

    .. code-block:: bash

        $ celery worker -n worker1@example.com
        worker1@example.com

    This also means that the worker will identify itself using the full
    nodename in events and broadcast messages, so where before
    a worker would identify itself as 'worker1.example.com', it will now
    use 'celery@worker1.example.com'.

    Remember that the ``-n`` argument also supports simple variable
    substitutions, so if the current hostname is *jerry.example.com*
    then ``%h`` will expand into that:

    .. code-block:: bash

        $ celery worker -n worker1@%h
        worker1@jerry.example.com

    The table of substitutions is as follows:

    +---------------+---------------------------------------+
    | Variable      | Substitution                          |
    +===============+=======================================+
    | ``%h``        | Full hostname (including domain name) |
    +---------------+---------------------------------------+
    | ``%d``        | Domain name only                      |
    +---------------+---------------------------------------+
    | ``%n``        | Hostname only (without domain name)   |
    +---------------+---------------------------------------+
    | ``%%``        | The character ``%``                   |
    +---------------+---------------------------------------+

- Task decorator can now create "bound tasks".

    This means that the function will be a method in the resulting
    task class and so will have a ``self`` argument that can be used
    to refer to the current task:

    .. code-block:: python

        @app.task(bind=True)
        def send_twitter_status(self, oauth, tweet):
            try:
                twitter = Twitter(oauth)
                twitter.update_status(tweet)
            except (Twitter.FailWhaleError, Twitter.LoginError) as exc:
                raise self.retry(exc=exc)

    Using *bound tasks* is now the recommended approach whenever
    you need access to the current task or request context.
    Previously one would have to refer to the name of the task
    instead (``send_twitter_status.retry``), but this could lead to problems
    in some instances where the registered task was no longer the same
    object.

- Workers now synchronize revoked tasks with their neighbors.

    This happens at startup and causes a one second startup delay
    to collect broadcast responses from other workers.

- The worker's logical clock value is now persisted, so that the clock
  is not reset when a worker restarts.

    The logical clock is also synchronized with other nodes
    in the same cluster (neighbors), so this means that the logical
    epoch will start at the point when the first worker in the cluster
    starts.

    You may notice that the logical clock is an integer value that increases
    very rapidly. It will take several millennia before the clock overflows
    64 bits, so this is not a concern.

- New setting :setting:`BROKER_LOGIN_METHOD`.

    This setting can be used to specify an alternate login method
    for the AMQP transports.

    Contributed by Adrien Guinet.
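
    For example, to authenticate using an SSL client certificate rather
    than a user name and password, assuming the broker is set up for the
    ``EXTERNAL`` mechanism:

    .. code-block:: python

        app.conf.BROKER_LOGIN_METHOD = 'EXTERNAL'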

- The ``dump_conf`` remote control command will now give the string
  representation for types that are not JSON compatible.

- Calling a subtask will now execute the task directly, as documented.

    A misunderstanding led to ``Signature.__call__`` being an alias of
    ``.delay``, but this does not conform to the calling API of ``Task``,
    which should call the underlying task method.

    This means that:

    .. code-block:: python

        @app.task
        def add(x, y):
            return x + y

        add.s(2, 2)()

    does the same as calling the task directly:

    .. code-block:: python

        add(2, 2)

- Function ``celery.security.setup_security`` is now :func:`celery.setup_security`.

- Message expires value is now forwarded at retry (Issue #980).

    The value is forwarded as is, so the expiry time will not change.
    To update the expiry time you would have to pass the ``expires``
    argument to ``retry()``.
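
    A sketch of what that could look like (``download``, ``TemporaryError``
    and the 60 second expiry are illustrative placeholders):

    .. code-block:: python

        @app.task(bind=True)
        def fetch(self, url):
            try:
                return download(url)
            except TemporaryError as exc:
                # ``expires`` is forwarded to the new message sent by
                # retry(), giving the retried task a fresh expiry time.
                raise self.retry(exc=exc, expires=60)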

- Worker now crashes if a channel error occurs.

    Channel errors are transport specific and are the exceptions listed
    in ``Connection.channel_errors``.

    For RabbitMQ this means that Celery will crash if the equivalence
    check for one of the queues in :setting:`CELERY_QUEUES` fails, which
    makes sense since this is a scenario where manual intervention is
    required.

- Calling ``AsyncResult.get()`` on a chain now propagates errors for previous
  tasks (Issue #1014).

- The parent attribute of ``AsyncResult`` is now reconstructed when using JSON
  serialization (Issue #1014).

- Worker disconnection logs are now logged with severity warning instead of
  error.

    Contributed by Chris Adams.

- ``events.State`` no longer crashes when it receives unknown event types.

- SQLAlchemy Result Backend: New :setting:`CELERY_RESULT_DB_TABLENAMES`
  setting can be used to change the name of the database tables used.

    Contributed by Ryan Petrello.
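
    A configuration sketch (the table names, and the ``task``/``group``
    keys shown here, are illustrative assumptions):

    .. code-block:: python

        app.conf.CELERY_RESULT_DB_TABLENAMES = {
            'task': 'myapp_taskmeta',
            'group': 'myapp_groupmeta',
        }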

- A stress test suite for the Celery worker has been written.

    This is located in the ``funtests/stress`` directory in the git
    repository. There's a README file there to get you started.

- The logger named ``celery.concurrency`` has been renamed to ``celery.pool``.

- New command line utility ``celery graph``.

    This utility creates graphs in GraphViz dot format.

    You can create graphs from the currently installed bootsteps:

    .. code-block:: bash

        # Create graph of currently installed bootsteps in both the worker
        # and consumer namespaces.
        $ celery graph bootsteps | dot -T png -o steps.png

        # Graph of the consumer namespace only.
        $ celery graph bootsteps consumer | dot -T png -o consumer_only.png

        # Graph of the worker namespace only.
        $ celery graph bootsteps worker | dot -T png -o worker_only.png

    Or graphs of workers in a cluster:

    .. code-block:: bash

        # Create graph from the current cluster
        $ celery graph workers | dot -T png -o workers.png

        # Create graph from a specified list of workers
        $ celery graph workers nodes:w1,w2,w3 | dot -T png -o workers.png

        # also specify the number of threads in each worker
        $ celery graph workers nodes:w1,w2,w3 threads:2,4,6

        # ...also specify the broker and backend URLs shown in the graph
        $ celery graph workers broker:amqp:// backend:redis://

        # ...also specify the max number of workers/threads shown (wmax/tmax),
        # enumerating anything that exceeds that number.
        $ celery graph workers wmax:10 tmax:3

- Changed the way that app instances are pickled.

    Apps can now define a ``__reduce_keys__`` method that is used instead
    of the old ``AppPickler`` attribute. E.g. if your app defines a custom
    'foo' attribute that needs to be preserved when pickling you can define
    a ``__reduce_keys__`` as such:

    .. code-block:: python

        import celery

        class Celery(celery.Celery):

            def __init__(self, *args, **kwargs):
                super(Celery, self).__init__(*args, **kwargs)
                self.foo = kwargs.get('foo')

            def __reduce_keys__(self):
                # Note: dict.update() returns None, so build the mapping
                # explicitly before returning it.
                keys = super(Celery, self).__reduce_keys__()
                keys.update(foo=self.foo)
                return keys

    This is a much more convenient way to add support for pickling custom
    attributes. The old ``AppPickler`` is still supported but its use is
    discouraged and we would like to remove it in a future version.

- Ability to trace imports for debugging purposes.

    The :envvar:`C_IMPDEBUG` environment variable can be set to trace imports
    as they occur:

    .. code-block:: bash

        $ C_IMPDEBUG=1 celery worker -l info

    .. code-block:: bash

        $ C_IMPDEBUG=1 celery shell

- :class:`celery.apps.worker.Worker` has been refactored as a subclass of
  :class:`celery.worker.WorkController`.

    This removes a lot of duplicate functionality.

- :class:`@events.Receiver` is now a :class:`kombu.mixins.ConsumerMixin`
  subclass.

- ``celery.platforms.PIDFile`` renamed to :class:`celery.platforms.Pidfile`.

- ``celery.results.BaseDictBackend`` has been removed, replaced by
  ``celery.results.BaseBackend``.

.. _v310-experimental:

Experimental
============

XXX
---

YYY

.. _v310-removals:

Scheduled Removals
==================

- The ``BROKER_INSIST`` setting is no longer supported.

- The ``CELERY_AMQP_TASK_RESULT_CONNECTION_MAX`` setting is no longer
  supported.

    Use :setting:`BROKER_POOL_LIMIT` instead.

- The ``CELERY_TASK_ERROR_WHITELIST`` setting is no longer supported.

    You should set the :class:`~celery.utils.mail.ErrorMail` attribute
    of the task class instead. You can also do this using
    :setting:`CELERY_ANNOTATIONS`:

    .. code-block:: python

        from celery import Celery
        from celery.utils.mail import ErrorMail

        class MyErrorMail(ErrorMail):
            whitelist = (KeyError, ImportError)

            def should_send(self, context, exc):
                return isinstance(exc, self.whitelist)

        app = Celery()
        app.conf.CELERY_ANNOTATIONS = {
            '*': {
                'ErrorMail': MyErrorMail,
            }
        }

- The ``CELERY_AMQP_TASK_RESULT_EXPIRES`` setting is no longer supported.

    Use :setting:`CELERY_TASK_RESULT_EXPIRES` instead.

- Functions that establish broker connections no longer
  support the ``connect_timeout`` argument.

    This can now only be set using the :setting:`BROKER_CONNECTION_TIMEOUT`
    setting. This is because the functions no longer create connections
    directly, and instead get them from the connection pool.

- The ``Celery.with_default_connection`` method has been removed in favor
  of ``with app.connection_or_acquire()``.
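
    A minimal sketch of the replacement pattern:

    .. code-block:: python

        # Acquire a connection from the broker pool (or reuse the default
        # connection) and release it automatically when the block exits.
        with app.connection_or_acquire() as connection:
            pass  # use ``connection`` here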

.. _v310-deprecations:

Deprecations
============

See the :ref:`deprecation-timeline`.

- XXX

    YYY

.. _v310-fixes:

Fixes
=====

- XXX

.. _v310-internal:

Internal changes
================

- Module ``celery.task.trace`` has been renamed to :mod:`celery.app.trace`.

- Classes that no longer fall back to using the default app:

    - Result backends (:class:`celery.backends.base.BaseBackend`)
    - :class:`celery.worker.WorkController`
    - :class:`celery.worker.Consumer`
    - :class:`celery.worker.job.Request`

    This means that you have to pass a specific app when instantiating
    these classes.

- ``EventDispatcher.copy_buffer`` renamed to ``EventDispatcher.extend_buffer``.

- Removed unused and never documented global instance
  ``celery.events.state.state``.