.. _whatsnew-3.1:

===========================================
 What's new in Celery 3.1 (Cipater)
===========================================

.. sidebar:: Change history

    What's new documents describe the changes in major versions;
    we also have a :ref:`changelog` that lists the changes in bugfix
    releases (0.0.x), while older series are archived under the :ref:`history`
    section.

Celery is a simple, flexible and reliable distributed system to
process vast amounts of messages, while providing operations with
the tools required to maintain such a system.

It's a task queue with focus on real-time processing, while also
supporting task scheduling.

Celery has a large and diverse community of users and contributors;
you should come join us :ref:`on IRC <irc-channel>`
or :ref:`our mailing-list <mailing-list>`.

To read more about Celery you should go read the :ref:`introduction <intro>`.

While this version is backward compatible with previous versions
it's important that you read the following section.

This version is officially supported on CPython 2.6, 2.7, 3.2 and 3.3,
as well as PyPy and Jython.

Highlights
==========

.. topic:: Overview

    - Now supports Django out of the box.

      See the new tutorial at :ref:`django-first-steps`.

    - XXX2

    - XXX3
      YYY3

.. _`website`: http://celeryproject.org/

.. _`django-celery changelog`:
    http://github.com/celery/django-celery/tree/master/Changelog

.. _`django-celery 3.0`: http://pypi.python.org/pypi/django-celery/

.. contents::
    :local:
    :depth: 2

.. _v310-important:

Important Notes
===============

XXX
---

YYY

.. _v310-news:

News
====

XXX
---

YYY

In Other News
-------------

- No longer supports Python 2.5

    From this version Celery requires Python 2.6 or later.

- No longer depends on ``python-dateutil``

    Instead a dependency on :mod:`pytz` has been added, which was already
    recommended in the documentation for accurate timezone support.

    This also means that dependencies are the same for both Python 2 and
    Python 3, and that the :file:`requirements/default-py3k.txt` file has
    been removed.

- Time limits can now be set by the client for individual tasks (Issue #802).

    You can set both hard and soft time limits using the ``timeout`` and
    ``soft_timeout`` calling options:

    .. code-block:: python

        >>> res = add.apply_async((2, 2), timeout=10, soft_timeout=8)

        >>> res = add.subtask((2, 2), timeout=10, soft_timeout=8)()

        >>> res = add.s(2, 2).set(timeout=10, soft_timeout=8)()

    Contributed by Mher Movsisyan.

- Old command-line programs removed and deprecated

    The goal is that everyone should move to the new :program:`celery` umbrella
    command, so with this version we deprecate the old command names,
    and remove commands that are not used in init scripts.

    +-------------------+--------------+-------------------------------------+
    | Program           | New Status   | Replacement                         |
    +===================+==============+=====================================+
    | ``celeryd``       | *DEPRECATED* | :program:`celery worker`            |
    +-------------------+--------------+-------------------------------------+
    | ``celerybeat``    | *DEPRECATED* | :program:`celery beat`              |
    +-------------------+--------------+-------------------------------------+
    | ``celeryd-multi`` | *DEPRECATED* | :program:`celery multi`             |
    +-------------------+--------------+-------------------------------------+
    | ``celeryctl``     | **REMOVED**  | :program:`celery`                   |
    +-------------------+--------------+-------------------------------------+
    | ``celeryev``      | **REMOVED**  | :program:`celery events`            |
    +-------------------+--------------+-------------------------------------+
    | ``camqadm``       | **REMOVED**  | :program:`celery amqp`              |
    +-------------------+--------------+-------------------------------------+

    Please see :program:`celery --help` for help using the umbrella command.

- Celery now supports Django out of the box.

    The fixes and improvements applied by the django-celery library are now
    automatically applied by core Celery when it detects that
    the :envvar:`DJANGO_SETTINGS_MODULE` environment variable is set.

    The distribution ships with a new example project using Django
    in :file:`examples/django`:

    http://github.com/celery/celery/tree/master/examples/django

    There are still some cases where you may want to use django-celery:

    - Celery does not implement the Django database or cache backends.
    - Celery does not automatically read configuration from Django settings.
    - Celery does not ship with the database-based periodic task
      scheduler.

    If you are using django-celery then it is crucial that you have
    ``djcelery.setup_loader()`` in your settings module, as this
    no longer happens as a side-effect of importing the :mod:`djcelery`
    module.

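    A minimal sketch of what the app module in such a project might look
    like (the ``proj`` package and its settings module are hypothetical
    names; see :ref:`django-first-steps` for the full tutorial):

    .. code-block:: python

        # proj/celery.py
        from __future__ import absolute_import

        import os

        from celery import Celery

        # Setting DJANGO_SETTINGS_MODULE is what triggers the built-in
        # Django support described above.
        os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

        from django.conf import settings

        app = Celery('proj')

        # Read configuration from the Django settings module.
        app.config_from_object('django.conf:settings')
        # Discover tasks.py modules in all installed Django apps.
        app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
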
- ``Signature.freeze()`` can now be used to "finalize" subtasks

    Regular subtask:

    .. code-block:: python

        >>> s = add.s(2, 2)
        >>> result = s.freeze()
        >>> result
        <AsyncResult: ffacf44b-f8a1-44e9-80a3-703150151ef2>
        >>> s.delay()
        <AsyncResult: ffacf44b-f8a1-44e9-80a3-703150151ef2>

    Group:

    .. code-block:: python

        >>> g = group(add.s(2, 2), add.s(4, 4))
        >>> result = g.freeze()
        <GroupResult: e1094b1d-08fc-4e14-838e-6d601b99da6d [
            70c0fb3d-b60e-4b22-8df7-aa25b9abc86d,
            58fcd260-2e32-4308-a2ea-f5be4a24f7f4]>
        >>> g()
        <GroupResult: e1094b1d-08fc-4e14-838e-6d601b99da6d [70c0fb3d-b60e-4b22-8df7-aa25b9abc86d, 58fcd260-2e32-4308-a2ea-f5be4a24f7f4]>

- The consumer part of the worker has been rewritten to use Bootsteps.

    By writing bootsteps you can now easily extend the consumer part
    of the worker to add additional features, or even message consumers.

    See the :ref:`guide-extending` guide for more information.

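    To give a flavor of the new API, here's a minimal sketch of a custom
    consumer bootstep (the step name and printed messages are made up for
    illustration; see :ref:`guide-extending` for complete examples):

    .. code-block:: python

        from celery import Celery, bootsteps

        app = Celery(broker='amqp://')


        class InfoStep(bootsteps.StartStopStep):
            """Log when the consumer part of the worker starts and stops."""

            def start(self, c):
                # ``c`` is the Consumer instance this step is attached to.
                print('Consumer for {0!r} starting'.format(c.hostname))

            def stop(self, c):
                print('Consumer for {0!r} stopping'.format(c.hostname))

        # Register the step with the consumer blueprint.
        app.steps['consumer'].add(InfoStep)
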
- New Bootsteps implementation.

    The bootsteps and namespaces have been refactored for the better;
    sadly this means that bootsteps written for older versions will
    not be compatible with this version.

    Bootsteps were never publicly documented and were considered
    experimental, so chances are no one has ever implemented custom
    bootsteps, but if you did please contact the mailing-list
    and we'll help you port them.

    - Module ``celery.worker.bootsteps`` renamed to :mod:`celery.bootsteps`.
    - The name of a bootstep no longer contains the name of the namespace.
    - A bootstep can now be part of multiple namespaces.
    - Namespaces must instantiate individual bootsteps, and
      there's no global registry of bootsteps.

- New result backend with RPC semantics (``rpc``).

    This version of the ``amqp`` result backend is a very good alternative
    to use in classical RPC scenarios, where the process that initiates
    the task is always the process to retrieve the result.

    It uses Kombu to send and retrieve results, and each client
    will create a unique queue for replies to be sent to.  This avoids
    the significant overhead of the original amqp backend, which creates
    one queue per task.  Note that it will not be possible to retrieve
    the result from another process, and that results sent using this
    backend are not persistent, so they will not survive a broker restart.

    It has only been tested with the AMQP and Redis transports.

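    Enabling it should be a matter of pointing the result backend setting
    at it, e.g. in your configuration module (a minimal sketch):

    .. code-block:: python

        CELERY_RESULT_BACKEND = 'rpc://'
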
- App instances can now add additional command line options
  to the worker and beat programs.

    The :attr:`@Celery.user_options` attribute can be used
    to add additional command-line arguments, and expects
    optparse-style options:

    .. code-block:: python

        from celery import Celery
        from optparse import make_option as Option

        celery = Celery()

        celery.user_options['worker'].add(
            Option('--my-argument'),
        )

    See :ref:`guide-extending` for more information.

- Events are now ordered using logical time.

    Timestamps are not a reliable way to order events in a distributed
    system: for one, the floating point value does not have enough
    precision, and it's also impossible to keep physical clocks in sync.

    Celery event messages have included a logical clock value for some time,
    but starting with this version that field is also used to order them
    (if the monitor is using ``celery.events.state``).

    The logical clock is currently implemented using Lamport timestamps,
    which does not have a high degree of accuracy, but should be good
    enough to casually order the events.

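    A toy sketch of the Lamport timestamp rules involved (illustration
    only; the actual clock implementation used by Celery lives in Kombu):

    .. code-block:: python

        class LamportClock(object):

            def __init__(self):
                self.value = 0

            def forward(self):
                # A local event happened, or a message is about to be sent.
                self.value += 1
                return self.value

            def adjust(self, other):
                # A message carrying a remote clock value was received:
                # take max(local, remote) + 1.
                self.value = max(self.value, other) + 1
                return self.value
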
- All events now include a ``pid`` field, which is the process id of the
  process that sent the event.

- Events now support timezones.

    A new ``utcoffset`` field is now sent with every event.  This is a
    signed integer telling the difference from UTC time in hours,
    so e.g. an event sent from the Europe/London timezone in daylight
    savings time will have an offset of 1.

    :class:`@events.Receiver` will automatically convert the timestamps
    to the destination timezone.

- Event heartbeats are now calculated based on the time when the event
  was received by the monitor, and not the time reported by the worker.

    This means that a worker with an out-of-sync clock will no longer
    show as 'Offline' in monitors.

    A warning is now emitted if the difference between the sender's
    time and the internal time is greater than 15 seconds, suggesting
    that the clocks are out of sync.

- :program:`celery worker` now supports a ``--detach`` argument to start
  the worker as a daemon in the background.

- :class:`@events.Receiver` now sets a ``local_received`` field for incoming
  events, which is set to the time when the event was received.

- :class:`@events.Dispatcher` now accepts a ``groups`` argument
  which decides a whitelist of event groups that will be sent.

    The type of an event is a string separated by '-', where the part
    before the first '-' is the group.  Currently there are only
    two groups: ``worker`` and ``task``.

    A dispatcher instantiated as follows:

    .. code-block:: python

        app.events.Dispatcher(connection, groups=['worker'])

    will only send worker related events and silently drop any attempts
    to send events related to any other group.

- Better support for link and link_error tasks for chords.

    Contributed by Steeve Morin.

- There's now an ``inspect clock`` command which will collect the current
  logical clock value from workers.

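    For example (the output shown is illustrative):

    .. code-block:: bash

        $ celery inspect clock
        -> celery@example.com: OK
            clock: 1281
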
- ``celery inspect stats`` now contains the process id of the worker's main
  process.

    Contributed by Mher Movsisyan.

- New remote control command to dump a worker's configuration.

    Example:

    .. code-block:: bash

        $ celery inspect conf

    Configuration values will be converted to values supported by JSON
    where possible.

    Contributed by Mher Movsisyan.

- Now supports Setuptools extra requirements.

    +-------------+-------------------------+---------------------------+
    | Extension   | Requirement entry       | Type                      |
    +=============+=========================+===========================+
    | Redis       | ``celery[redis]``       | transport, result backend |
    +-------------+-------------------------+---------------------------+
    | MongoDB     | ``celery[mongodb]``     | transport, result backend |
    +-------------+-------------------------+---------------------------+
    | CouchDB     | ``celery[couchdb]``     | transport                 |
    +-------------+-------------------------+---------------------------+
    | Beanstalk   | ``celery[beanstalk]``   | transport                 |
    +-------------+-------------------------+---------------------------+
    | ZeroMQ      | ``celery[zeromq]``      | transport                 |
    +-------------+-------------------------+---------------------------+
    | Zookeeper   | ``celery[zookeeper]``   | transport                 |
    +-------------+-------------------------+---------------------------+
    | SQLAlchemy  | ``celery[sqlalchemy]``  | transport, result backend |
    +-------------+-------------------------+---------------------------+
    | librabbitmq | ``celery[librabbitmq]`` | transport (C amqp client) |
    +-------------+-------------------------+---------------------------+

    Examples using :program:`pip install`:

    .. code-block:: bash

        pip install celery[redis]
        pip install celery[librabbitmq]
        pip install celery[redis,librabbitmq]
        pip install celery[mongodb]
        pip install celery[couchdb]
        pip install celery[beanstalk]
        pip install celery[zeromq]
        pip install celery[zookeeper]
        pip install celery[sqlalchemy]

- New settings :setting:`CELERY_EVENT_QUEUE_TTL` and
  :setting:`CELERY_EVENT_QUEUE_EXPIRES`.

    These control when a monitor's event queue is deleted, and for how long
    events published to that queue will be visible.  Only supported on
    RabbitMQ.

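    A minimal configuration sketch (the values are examples only, given
    in seconds):

    .. code-block:: python

        CELERY_EVENT_QUEUE_TTL = 5.0
        CELERY_EVENT_QUEUE_EXPIRES = 60.0
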
- New Couchbase result backend.

    This result backend enables you to store and retrieve task results
    using `Couchbase`_.

    See :ref:`conf-couchbase-result-backend` for more information
    about configuring this result backend.

    Contributed by Alain Masiero.

    .. _`Couchbase`: http://www.couchbase.com

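    A minimal configuration sketch (the host, credentials and bucket name
    are placeholders):

    .. code-block:: python

        CELERY_RESULT_BACKEND = 'couchbase://user:password@localhost:8091/bucket'
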
- CentOS init script now supports starting multiple worker instances.

    See the script header for details.

    Contributed by Jonathan Jordan.

- ``AsyncResult.iter_native`` now sets default interval parameter to 0.5

    Fix contributed by Idan Kamara.

- Worker node names now consist of a name and a hostname separated by '@'.

    This change is to more easily identify multiple instances running
    on the same machine.

    If a custom name is not specified then the
    worker will use the name 'celery' by default, resulting in a
    fully qualified node name of 'celery@hostname':

    .. code-block:: bash

        $ celery worker -n example.com
        celery@example.com

    To set the name you must include the @:

    .. code-block:: bash

        $ celery worker -n worker1@example.com
        worker1@example.com

    This also means that the worker will identify itself using the full
    nodename in events and broadcast messages, so where before
    a worker would identify as 'worker1.example.com', it will now
    use 'celery@worker1.example.com'.

    Remember that the ``-n`` argument also supports simple variable
    substitutions, so if the current hostname is *jerry.example.com*
    then ``%h`` will expand into that:

    .. code-block:: bash

        $ celery worker -n worker1@%h
        worker1@jerry.example.com

    The table of substitutions is as follows:

    +---------------+---------------------------------------+
    | Variable      | Substitution                          |
    +===============+=======================================+
    | ``%h``        | Full hostname (including domain name) |
    +---------------+---------------------------------------+
    | ``%d``        | Domain name only                      |
    +---------------+---------------------------------------+
    | ``%n``        | Hostname only (without domain name)   |
    +---------------+---------------------------------------+
    | ``%%``        | The character ``%``                   |
    +---------------+---------------------------------------+

- Task decorator can now create "bound tasks"

    This means that the function will be a method in the resulting
    task class and so will have a ``self`` argument that can be used
    to refer to the current task:

    .. code-block:: python

        @app.task(bind=True)
        def send_twitter_status(self, oauth, tweet):
            try:
                twitter = Twitter(oauth)
                twitter.update_status(tweet)
            except (Twitter.FailWhaleError, Twitter.LoginError) as exc:
                raise self.retry(exc=exc)

    Using *bound tasks* is now the recommended approach whenever
    you need access to the current task or request context.
    Previously one would have to refer to the name of the task
    instead (``send_twitter_status.retry``), but this could lead to problems
    in some instances where the registered task was no longer the same
    object.

- Workers now synchronize revoked tasks with their neighbors.

    This happens at startup and causes a one second startup delay
    to collect broadcast responses from other workers.

- The worker's logical clock value is now persisted so that the clock
  is not reset when a worker restarts.

    The logical clock is also synchronized with other nodes
    in the same cluster (neighbors), so this means that the logical
    epoch will start at the point when the first worker in the cluster
    starts.

    You may notice that the logical clock is an integer value that
    increases very rapidly.  It will take several millennia before the
    clock overflows 64 bits, so this is not a concern.

- New setting :setting:`BROKER_LOGIN_METHOD`.

    This setting can be used to specify an alternate login method
    for the AMQP transports.

    Contributed by Adrien Guinet.

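    For example, to authenticate with a client certificate over SSL you
    might use the ``EXTERNAL`` login method (a sketch; the login methods
    actually available depend on your broker configuration):

    .. code-block:: python

        BROKER_LOGIN_METHOD = 'EXTERNAL'
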
- The ``dump_conf`` remote control command will now give the string
  representation for types that are not JSON compatible.

- Calling a subtask will now execute the task directly as documented.

    A misunderstanding led to ``Signature.__call__`` being an alias of
    ``.delay``, but this does not conform to the calling API of ``Task``,
    which should call the underlying task method.

    This means that:

    .. code-block:: python

        @app.task
        def add(x, y):
            return x + y

        add.s(2, 2)()

    does the same as calling the task directly:

    .. code-block:: python

        add(2, 2)

- Function ``celery.security.setup_security`` is now :func:`celery.setup_security`.

- Message expires value is now forwarded at retry (Issue #980).

    The value is forwarded as is, so the expiry time will not change.
    To update the expiry time you would have to pass the expires
    argument to ``retry()``.

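    For example (a sketch: ``handle`` and ``TemporaryError`` are
    placeholder names, and the 120 second value is arbitrary):

    .. code-block:: python

        @app.task(bind=True)
        def process(self, data):
            try:
                handle(data)
            except TemporaryError as exc:
                # Pass a new expires value to override the forwarded one.
                raise self.retry(exc=exc, expires=120)
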
- Worker now crashes if a channel error occurs.

    Channel errors are transport specific and are listed among the
    exceptions returned by ``Connection.channel_errors``.

    For RabbitMQ this means that Celery will crash if the equivalence
    checks for one of the queues in :setting:`CELERY_QUEUES` fail, which
    makes sense since this is a scenario where manual intervention is
    required.

- Calling ``AsyncResult.get()`` on a chain now propagates errors for previous
  tasks (Issue #1014).

- The parent attribute of ``AsyncResult`` is now reconstructed when using JSON
  serialization (Issue #1014).

- Worker disconnection logs are now logged with severity warning instead of
  error.

    Contributed by Chris Adams.

- ``events.State`` no longer crashes when it receives unknown event types.

- SQLAlchemy Result Backend: New :setting:`CELERY_RESULT_DB_TABLENAMES`
  setting can be used to change the name of the database tables used.

    Contributed by Ryan Petrello.

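    A minimal sketch (the table names shown are placeholders):

    .. code-block:: python

        CELERY_RESULT_DB_TABLENAMES = {
            'task': 'myapp_taskmeta',
            'group': 'myapp_groupmeta',
        }
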
- A stress test suite for the Celery worker has been written.

    This is located in the ``funtests/stress`` directory in the git
    repository.  There's a README file there to get you started.

- The logger named ``celery.concurrency`` has been renamed to ``celery.pool``.

- New command line utility ``celery graph``.

    This utility creates graphs in GraphViz dot format.

    You can create graphs from the currently installed bootsteps:

    .. code-block:: bash

        # Create graph of currently installed bootsteps in both the worker
        # and consumer namespaces.
        $ celery graph bootsteps | dot -T png -o steps.png

        # Graph of the consumer namespace only.
        $ celery graph bootsteps consumer | dot -T png -o consumer_only.png

        # Graph of the worker namespace only.
        $ celery graph bootsteps worker | dot -T png -o worker_only.png

    Or graphs of workers in a cluster:

    .. code-block:: bash

        # Create graph from the current cluster
        $ celery graph workers | dot -T png -o workers.png

        # Create graph from a specified list of workers
        $ celery graph workers nodes:w1,w2,w3 | dot -T png -o workers.png

        # also specify the number of threads in each worker
        $ celery graph workers nodes:w1,w2,w3 threads:2,4,6

        # ...also specify the broker and backend URLs shown in the graph
        $ celery graph workers broker:amqp:// backend:redis://

        # ...also specify the max number of workers/threads shown (wmax/tmax),
        # enumerating anything that exceeds that number.
        $ celery graph workers wmax:10 tmax:3

- Changed the way that app instances are pickled.

    Apps can now define a ``__reduce_keys__`` method that is used instead
    of the old ``AppPickler`` attribute.  E.g. if your app defines a custom
    'foo' attribute that needs to be preserved when pickling you can define
    a ``__reduce_keys__`` as such:

    .. code-block:: python

        import celery

        class Celery(celery.Celery):

            def __init__(self, *args, **kwargs):
                super(Celery, self).__init__(*args, **kwargs)
                self.foo = kwargs.get('foo')

            def __reduce_keys__(self):
                # Note: dict.update() returns None, so build a new dict
                # from the parent's keys instead of returning .update().
                return dict(
                    super(Celery, self).__reduce_keys__(),
                    foo=self.foo,
                )

    This is a much more convenient way to add support for pickling custom
    attributes.  The old ``AppPickler`` is still supported but its use is
    discouraged and we would like to remove it in a future version.

- Ability to trace imports for debugging purposes.

    The :envvar:`C_IMPDEBUG` environment variable can be set to trace
    imports as they occur:

    .. code-block:: bash

        $ C_IMPDEBUG=1 celery worker -l info

    .. code-block:: bash

        $ C_IMPDEBUG=1 celery shell

- :class:`celery.apps.worker.Worker` has been refactored as a subclass of
  :class:`celery.worker.WorkController`.

    This removes a lot of duplicate functionality.

- :class:`@events.Receiver` is now a :class:`kombu.mixins.ConsumerMixin`
  subclass.

- ``celery.platforms.PIDFile`` renamed to :class:`celery.platforms.Pidfile`.

- ``celery.results.BaseDictBackend`` has been removed, replaced by
  ``celery.results.BaseBackend``.

.. _v310-experimental:

Experimental
============

XXX
---

YYY

.. _v310-removals:

Scheduled Removals
==================

- The ``BROKER_INSIST`` setting is no longer supported.

- The ``CELERY_AMQP_TASK_RESULT_CONNECTION_MAX`` setting is no longer
  supported.

    Use :setting:`BROKER_POOL_LIMIT` instead.

- The ``CELERY_TASK_ERROR_WHITELIST`` setting is no longer supported.

    You should set the :class:`~celery.utils.mail.ErrorMail` attribute
    of the task class instead.  You can also do this using
    :setting:`CELERY_ANNOTATIONS`:

    .. code-block:: python

        from celery import Celery
        from celery.utils.mail import ErrorMail

        class MyErrorMail(ErrorMail):
            whitelist = (KeyError, ImportError)

            def should_send(self, context, exc):
                return isinstance(exc, self.whitelist)

        app = Celery()
        app.conf.CELERY_ANNOTATIONS = {
            '*': {
                'ErrorMail': MyErrorMail,
            }
        }

- The ``CELERY_AMQP_TASK_RESULT_EXPIRES`` setting is no longer supported.

    Use :setting:`CELERY_TASK_RESULT_EXPIRES` instead.

- Functions that establish broker connections no longer
  support the ``connect_timeout`` argument.

    This can now only be set using the :setting:`BROKER_CONNECTION_TIMEOUT`
    setting.  This is because functions rarely establish connections directly,
    but instead acquire connections from the connection pool.

- The ``Celery.with_default_connection`` method has been removed in favor
  of ``with app.connection_or_acquire``.

.. _v310-deprecations:

Deprecations
============

See the :ref:`deprecation-timeline`.

- XXX

  YYY

.. _v310-fixes:

Fixes
=====

- XXX

.. _v310-internal:

Internal changes
================

- Module ``celery.task.trace`` has been renamed to :mod:`celery.app.trace`.

- Classes that no longer fall back to using the default app:

    - Result backends (:class:`celery.backends.base.BaseBackend`)
    - :class:`celery.worker.WorkController`
    - :class:`celery.worker.Consumer`
    - :class:`celery.worker.job.Request`

    This means that you have to pass a specific app when instantiating
    these classes.

- ``EventDispatcher.copy_buffer`` renamed to ``EventDispatcher.extend_buffer``.

- Removed unused and never documented global instance
  ``celery.events.state.state``.