.. _whatsnew-3.1:

===========================================
 What's new in Celery 3.1 (Cipater)
===========================================

.. sidebar:: Change history

    What's new documents describe the changes in major versions,
    we also have a :ref:`changelog` that lists the changes in bugfix
    releases (0.0.x), while older series are archived under the :ref:`history`
    section.

Celery is a simple, flexible, and reliable distributed system to
process vast amounts of messages, while providing operations with
the tools required to maintain such a system.

It's a task queue with a focus on real-time processing, while also
supporting task scheduling.

Celery has a large and diverse community of users and contributors;
you should come join us :ref:`on IRC <irc-channel>`
or :ref:`our mailing-list <mailing-list>`.

To read more about Celery you should go read the :ref:`introduction <intro>`.

While this version is backward compatible with previous versions
it's important that you read the following section.

This version is officially supported on CPython 2.6, 2.7, 3.2, and 3.3,
as well as PyPy and Jython.
Highlights
==========

.. topic:: Overview

    - Now supports Django out of the box.

      See the new tutorial at :ref:`django-first-steps`.

    - XXX2

    - XXX3

      YYY3

.. _`website`: http://celeryproject.org/

.. _`django-celery changelog`:
    http://github.com/celery/django-celery/tree/master/Changelog

.. _`django-celery 3.0`: http://pypi.python.org/pypi/django-celery/

.. contents::
    :local:
    :depth: 2
.. _v310-important:

Important Notes
===============

No longer supports Python 2.5
-----------------------------

Celery now requires Python 2.6 or later.

We now have a dual codebase that runs on both Python 2 and 3 without
using the ``2to3`` porting tool.
Last version to enable Pickle by default
----------------------------------------

Starting from Celery 3.2 the default serializer will be json.

If you depend on pickle being accepted you should be prepared
for this change by explicitly allowing your worker
to consume pickled messages using the :setting:`CELERY_ACCEPT_CONTENT`
setting::

    CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']

Make sure you select only the serialization formats you will actually be using,
and make sure you have properly secured your broker from unwanted access
(see the :ref:`guide-security` guide).

The worker will show a deprecation warning if you don't define this setting.
Old command-line programs removed and deprecated
------------------------------------------------

The goal is that everyone should move to the new :program:`celery` umbrella
command, so with this version we deprecate the old command names,
and remove commands that are not used in init scripts.

+-------------------+--------------+-------------------------------------+
| Program           | New Status   | Replacement                         |
+===================+==============+=====================================+
| ``celeryd``       | *DEPRECATED* | :program:`celery worker`            |
+-------------------+--------------+-------------------------------------+
| ``celerybeat``    | *DEPRECATED* | :program:`celery beat`              |
+-------------------+--------------+-------------------------------------+
| ``celeryd-multi`` | *DEPRECATED* | :program:`celery multi`             |
+-------------------+--------------+-------------------------------------+
| ``celeryctl``     | **REMOVED**  | :program:`celery inspect|control`   |
+-------------------+--------------+-------------------------------------+
| ``celeryev``      | **REMOVED**  | :program:`celery events`            |
+-------------------+--------------+-------------------------------------+
| ``camqadm``       | **REMOVED**  | :program:`celery amqp`              |
+-------------------+--------------+-------------------------------------+

Please see :program:`celery --help` for help using the umbrella command.
.. _v310-news:

News
====

Now supports Django out of the box
----------------------------------

The fixes and improvements applied by the django-celery library are now
automatically applied by core Celery when it detects that
the :envvar:`DJANGO_SETTINGS_MODULE` environment variable is set.

The distribution ships with a new example project using Django
in :file:`examples/django`:

http://github.com/celery/celery/tree/3.1/examples/django

Some features still require the :mod:`django-celery` library:

    - Celery does not implement the Django database or cache result backends.
    - Celery does not ship with the database-based periodic task
      scheduler.

.. note::

    If you are using django-celery then it is crucial that you have
    ``djcelery.setup_loader()`` in your settings module, as this
    no longer happens as a side-effect of importing the :mod:`djcelery`
    module.
Multiprocessing Pool improvements
---------------------------------

XXX TODO
:mod:`pytz` replaces ``python-dateutil`` dependency
---------------------------------------------------

Celery no longer depends on the ``python-dateutil`` library;
instead a new dependency on the :mod:`pytz` library was added.

The :mod:`pytz` library was already recommended for accurate timezone support.

This also means that dependencies are the same for both Python 2 and
Python 3, and that the :file:`requirements/default-py3k.txt` file has
been removed.
Bootsteps: Extending the worker
-------------------------------

By writing bootsteps you can now easily extend the consumer part
of the worker to add additional features, or even message consumers.

The worker has been using bootsteps for some time, but these were never
documented. In this version the consumer part of the worker
has also been rewritten to use bootsteps, and the new :ref:`guide-extending`
guide documents examples of extending the worker, including adding
custom message consumers.

See the :ref:`guide-extending` guide for more information.

.. note::

    Bootsteps written for older versions will not be compatible
    with this version, as the API has changed significantly.

    The old API was experimental and internal, so hopefully no one
    is depending on it. Should you happen to be using it then please
    contact the mailing-list and we will help you port to the new version.
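As a minimal sketch of the new API (the step name and print statements are
illustrative only; see the extending guide for the full interface), a custom
worker bootstep can be defined and registered like this:

.. code-block:: python

    from celery import Celery, bootsteps

    class ExampleStep(bootsteps.StartStopStep):
        """Hypothetical bootstep logging worker start/stop."""

        def start(self, worker):
            print('worker is starting')

        def stop(self, worker):
            print('worker is stopping')

    app = Celery('proj')
    app.steps['worker'].add(ExampleStep)

The ``start``/``stop`` methods are called by the worker at the corresponding
points in its lifecycle.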
New result backend with RPC semantics
-------------------------------------

This version of the ``amqp`` result backend is a very good alternative
to use in classical RPC scenarios, where the process that initiates
the task is always the process to retrieve the result.

It uses Kombu to send and retrieve results, and each client
will create a unique queue for replies to be sent to. This avoids
the significant overhead of the original amqp backend, which creates
one queue per task.

Results sent using this backend are not persistent, and so will
not survive a broker restart, but you can set
the :setting:`CELERY_RESULT_PERSISTENT` setting to change that.

.. code-block:: python

    CELERY_RESULT_BACKEND = 'rpc'

Note that chords are currently not supported by the RPC backend.
Time limits can now be set by the client
----------------------------------------

You can set both hard and soft time limits using the ``time_limit`` and
``soft_time_limit`` calling options:

.. code-block:: python

    >>> res = add.apply_async((2, 2), time_limit=10, soft_time_limit=8)

    >>> res = add.subtask((2, 2), time_limit=10, soft_time_limit=8).delay()

    >>> res = add.s(2, 2).set(time_limit=10, soft_time_limit=8).delay()

Contributed by Mher Movsisyan.
Redis: Separate broadcast messages by virtual host
--------------------------------------------------

Broadcast messages are seen by all virtual hosts when using the Redis
transport. You can fix this by enabling a prefix to all channels
so that the messages are separated by virtual host::

    BROKER_TRANSPORT_OPTIONS = {'fanout_prefix': True}

Note that you will not be able to communicate with workers running older
versions or workers that do not have this setting enabled.

This setting will be the default in the future, so better to migrate
sooner rather than later.

Related to Issue #1490.
Events are now ordered using logical time
-----------------------------------------

Timestamps are not a reliable way to order events in a distributed system:
for one, the floating point value does not have enough precision, but
it's also impossible to keep physical clocks in sync.

Celery event messages have included a logical clock value for some time,
but starting with this version that field is also used to order them
(that is, if the monitor is using :mod:`celery.events.state`).

The logical clock is currently implemented using Lamport timestamps,
which does not have a high degree of accuracy, but should be good
enough to casually order the events.

Events also now record timezone information for better timestamp
accuracy, with a new ``utcoffset`` field included in the event.
This is a signed integer telling the difference from UTC time in hours,
so e.g. an event sent from the Europe/London timezone in daylight saving
time will have an offset of 1.

:class:`@events.Receiver` will automatically convert the timestamps
to the destination timezone.

.. note::

    The logical clock is synchronized with other nodes
    in the same cluster (neighbors), so this means that the logical
    epoch will start at the point when the first worker in the cluster
    starts.

    If all of the workers are shutdown the clock value will be lost
    and reset to 0. To protect against this, you should specify
    :option:`--statedb` so that the worker can persist the clock
    value at shutdown.

    You may notice that the logical clock is an integer value and
    increases very rapidly. Don't worry about the value overflowing
    though, as even in the most busy clusters it may take several
    millennia before the clock exceeds a 64-bit value.
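To illustrate the idea (this is a generic sketch of the algorithm, not
Celery's internal implementation), a Lamport clock needs only two rules:
increment on every local event, and jump to ``max(local, received) + 1``
when receiving a message:

.. code-block:: python

    class LamportClock(object):
        """Minimal Lamport clock sketch: a counter ordered by causality."""

        def __init__(self):
            self.value = 0

        def forward(self):
            # A local event (e.g. sending a message) advances the clock.
            self.value += 1
            return self.value

        def adjust(self, other):
            # On receiving a message, jump past the sender's clock value.
            self.value = max(self.value, other) + 1
            return self.value

    clock = LamportClock()
    clock.forward()    # local event
    clock.adjust(10)   # received an event stamped with clock value 10

Events carrying a higher clock value are guaranteed not to have caused
events with a lower value, which is what makes the ordering safe even with
unsynchronized wall clocks.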
New worker node name format (``name@host``)
-------------------------------------------

Node names are now constructed from a node name and a hostname separated by '@'.

This change was made to more easily identify multiple instances running
on the same machine.

If a custom name is not specified then the
worker will use the name 'celery' by default, resulting in a
fully qualified node name of 'celery@hostname':

.. code-block:: bash

    $ celery worker -n example.com
    celery@example.com

To also set the name you must include the @:

.. code-block:: bash

    $ celery worker -n worker1@example.com
    worker1@example.com

The worker will identify itself using the fully qualified
node name in events and broadcast messages, so where before
a worker would identify itself as 'worker1.example.com', it will now
use 'celery@worker1.example.com'.

Remember that the ``-n`` argument also supports simple variable
substitutions, so if the current hostname is *jerry.example.com*
then ``%h`` will expand into that:

.. code-block:: bash

    $ celery worker -n worker1@%h
    worker1@jerry.example.com

The available substitutions are as follows:

+---------------+---------------------------------------+
| Variable      | Substitution                          |
+===============+=======================================+
| ``%h``        | Full hostname (including domain name) |
+---------------+---------------------------------------+
| ``%d``        | Domain name only                      |
+---------------+---------------------------------------+
| ``%n``        | Hostname only (without domain name)   |
+---------------+---------------------------------------+
| ``%%``        | The character ``%``                   |
+---------------+---------------------------------------+
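The expansion rules above can be sketched in plain Python (a simplified
illustration written for this document, not Celery's actual implementation):

.. code-block:: python

    def expand_nodename(template, hostname):
        """Expand %h/%n/%d/%% in a node name template (sketch)."""
        name, _, domain = hostname.partition('.')
        # '%%' is replaced first with a placeholder so a literal '%'
        # survives the other substitutions.
        return (template.replace('%%', '\0')
                        .replace('%h', hostname)
                        .replace('%n', name)
                        .replace('%d', domain)
                        .replace('\0', '%'))

    print(expand_nodename('worker1@%h', 'jerry.example.com'))
    # worker1@jerry.example.com

Note how the order of substitution matters: handling ``%%`` first means a
template like ``100%%@%n`` expands to ``100%@jerry`` rather than recursively
re-expanding the escaped percent sign.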
Bound tasks
-----------

The task decorator can now create "bound tasks", which means that the
task will receive the ``self`` argument.

.. code-block:: python

    @app.task(bind=True)
    def send_twitter_status(self, oauth, tweet):
        try:
            twitter = Twitter(oauth)
            twitter.update_status(tweet)
        except (Twitter.FailWhaleError, Twitter.LoginError) as exc:
            raise self.retry(exc=exc)

Using *bound tasks* is now the recommended approach whenever
you need access to the current task or request context.
Previously one would have to refer to the name of the task
instead (``send_twitter_status.retry``), but this could lead to problems
in some instances where the registered task was no longer the same
object.
Gossip: Worker <-> Worker communication
---------------------------------------

Workers now synchronize revoked tasks with their neighbors.

This happens at startup and causes a one second startup delay
to collect broadcast responses from other workers.
Now supports Setuptools extra requirements
------------------------------------------

Pip now supports installing setuptools extra requirements
so we have deprecated the old bundles, replacing them with these
little creatures, which are more convenient since you can easily
specify multiple extras (e.g. ``pip install celery[redis,mongodb]``).

+-------------+-------------------------+---------------------------+
| Extension   | Requirement entry       | Type                      |
+=============+=========================+===========================+
| Redis       | ``celery[redis]``       | transport, result backend |
+-------------+-------------------------+---------------------------+
| MongoDB     | ``celery[mongodb]``     | transport, result backend |
+-------------+-------------------------+---------------------------+
| CouchDB     | ``celery[couchdb]``     | transport                 |
+-------------+-------------------------+---------------------------+
| Beanstalk   | ``celery[beanstalk]``   | transport                 |
+-------------+-------------------------+---------------------------+
| ZeroMQ      | ``celery[zeromq]``      | transport                 |
+-------------+-------------------------+---------------------------+
| Zookeeper   | ``celery[zookeeper]``   | transport                 |
+-------------+-------------------------+---------------------------+
| SQLAlchemy  | ``celery[sqlalchemy]``  | transport, result backend |
+-------------+-------------------------+---------------------------+
| librabbitmq | ``celery[librabbitmq]`` | transport (C amqp client) |
+-------------+-------------------------+---------------------------+

There are more examples in the :ref:`bundles` section.
Calling a subtask will now execute the task directly
----------------------------------------------------

A misunderstanding led to ``Signature.__call__`` being an alias of
``.delay``, but this does not conform to the calling API of ``Task``, which
calls the underlying task method.

This means that:

.. code-block:: python

    @app.task
    def add(x, y):
        return x + y

    add.s(2, 2)()

now does the same as calling the task directly:

.. code-block:: python

    add(2, 2)
In Other News
-------------

- Now depends on :ref:`Kombu 3.0 <kombu:version-3.0.0>`.

- Now depends on :mod:`billiard` version 3.3.

- Worker will now crash if running as the root user with pickle enabled.

- Canvas: ``group.apply_async`` and ``chain.apply_async`` no longer start
  a separate task.

    That the group and chord primitives supported the "calling API", like
    other subtasks, was a nice idea, but it was useless in practice, often
    confusing users. If you still want this behavior you can create a task
    to do it for you.

- New method ``Signature.freeze()`` can be used to "finalize"
  signatures/subtasks.

    Regular signature:

    .. code-block:: python

        >>> s = add.s(2, 2)
        >>> result = s.freeze()
        >>> result
        <AsyncResult: ffacf44b-f8a1-44e9-80a3-703150151ef2>
        >>> s.delay()
        <AsyncResult: ffacf44b-f8a1-44e9-80a3-703150151ef2>

    Group:

    .. code-block:: python

        >>> g = group(add.s(2, 2), add.s(4, 4))
        >>> result = g.freeze()
        <GroupResult: e1094b1d-08fc-4e14-838e-6d601b99da6d [
            70c0fb3d-b60e-4b22-8df7-aa25b9abc86d,
            58fcd260-2e32-4308-a2ea-f5be4a24f7f4]>
        >>> g()
        <GroupResult: e1094b1d-08fc-4e14-838e-6d601b99da6d [70c0fb3d-b60e-4b22-8df7-aa25b9abc86d, 58fcd260-2e32-4308-a2ea-f5be4a24f7f4]>
- New ability to specify additional command line options
  to the worker and beat programs.

    The :attr:`@Celery.user_options` attribute can be used
    to add additional command-line arguments, and expects
    optparse-style options:

    .. code-block:: python

        from celery import Celery
        from optparse import make_option as Option

        app = Celery()
        app.user_options['worker'].add(
            Option('--my-argument'),
        )

    See the :ref:`guide-extending` guide for more information.
- All events now include a ``pid`` field, which is the process id of the
  process that sent the event.

- Event heartbeats are now calculated based on the time when the event
  was received by the monitor, and not the time reported by the worker.

    This means that a worker with an out-of-sync clock will no longer
    show as 'Offline' in monitors.

    A warning is now emitted if the difference between the sender's
    time and the internal time is greater than 15 seconds, suggesting
    that the clocks are out of sync.

- Many parts of the Celery codebase now use a monotonic clock.

    The monotonic clock function is built-in starting from Python 3.4,
    but we also have fallback implementations for Linux and OS X.
- :program:`celery worker` now supports a ``--detach`` argument to start
  the worker as a daemon in the background.

- :class:`@events.Receiver` now sets a ``local_received`` field for incoming
  events, which is set to the time when the event was received.

- :class:`@events.Dispatcher` now accepts a ``groups`` argument
  which decides a whitelist of event groups that will be sent.

    The type of an event is a string separated by '-', where the part
    before the first '-' is the group. Currently there are only
    two groups: ``worker`` and ``task``.

    A dispatcher instantiated as follows:

    .. code-block:: python

        app.events.Dispatcher(connection, groups=['worker'])

    will only send worker related events and silently drop any attempts
    to send events related to any other group.
- ``Result.revoke`` will no longer wait for replies.

    You can add the ``reply=True`` argument if you really want to wait for
    responses from the workers.

- Better support for link and link_error tasks for chords.

    Contributed by Steeve Morin.

- There's now an ``inspect clock`` command which will collect the current
  logical clock value from workers.

- ``celery inspect stats`` now contains the process id of the worker's main
  process.

    Contributed by Mher Movsisyan.

- New remote control command to dump a worker's configuration.

    Example:

    .. code-block:: bash

        $ celery inspect conf

    Configuration values will be converted to values supported by JSON
    where possible.

    Contributed by Mher Movsisyan.

- New settings :setting:`CELERY_EVENT_QUEUE_TTL` and
  :setting:`CELERY_EVENT_QUEUE_EXPIRES`.

    These control when a monitor's event queue is deleted, and for how long
    events published to that queue will be visible. Only supported on
    RabbitMQ.

- New Couchbase result backend.

    This result backend enables you to store and retrieve task results
    using `Couchbase`_.

    See :ref:`conf-couchbase-result-backend` for more information
    about configuring this result backend.

    Contributed by Alain Masiero.

    .. _`Couchbase`: http://www.couchbase.com

- CentOS init script now supports starting multiple worker instances.

    See the script header for details.

    Contributed by Jonathan Jordan.

- ``AsyncResult.iter_native`` now sets the default interval parameter
  to 0.5.

    Fix contributed by Idan Kamara.

- New setting :setting:`BROKER_LOGIN_METHOD`.

    This setting can be used to specify an alternate login method
    for the AMQP transports.

    Contributed by Adrien Guinet.
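    For example, to authenticate using an SSL client certificate with the
    AMQP ``EXTERNAL`` login method (shown as an illustrative sketch; the
    default login method is ``AMQPLAIN``):

    .. code-block:: python

        BROKER_LOGIN_METHOD = 'EXTERNAL'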
- The ``dump_conf`` remote control command will now give the string
  representation for types that are not JSON compatible.

- Function ``celery.security.setup_security`` is now
  :func:`celery.setup_security`.

- The message ``expires`` value is now forwarded at retry (Issue #980).

    The value is forwarded as is, so the expiry time will not change.
    To update the expiry time you would have to pass the ``expires``
    argument to ``retry()``.

- Worker now crashes if a channel error occurs.

    Channel errors are transport specific and are the exceptions listed in
    ``Connection.channel_errors``.
    For RabbitMQ this means that Celery will crash if the equivalence
    checks for one of the queues in :setting:`CELERY_QUEUES` mismatch, which
    makes sense since this is a scenario where manual intervention is
    required.

- Calling ``AsyncResult.get()`` on a chain now propagates errors for previous
  tasks (Issue #1014).

- The parent attribute of ``AsyncResult`` is now reconstructed when using JSON
  serialization (Issue #1014).

- Worker disconnection logs are now logged with severity warning instead of
  error.

    Contributed by Chris Adams.

- ``events.State`` no longer crashes when it receives unknown event types.

- SQLAlchemy Result Backend: New :setting:`CELERY_RESULT_DB_TABLENAMES`
  setting can be used to change the name of the database tables used.

    Contributed by Ryan Petrello.
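    For example (a sketch; the ``task`` and ``group`` keys map to the
    task-result and group-result tables respectively, and the table names
    shown are hypothetical):

    .. code-block:: python

        CELERY_RESULT_DB_TABLENAMES = {
            'task': 'myapp_taskmeta',
            'group': 'myapp_groupmeta',
        }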
- A stress test suite for the Celery worker has been written.

    This is located in the ``funtests/stress`` directory in the git
    repository. There's a README file there to get you started.

- The logger named ``celery.concurrency`` has been renamed to
  ``celery.pool``.

- New command line utility ``celery graph``.

    This utility creates graphs in GraphViz dot format.

    You can create graphs from the currently installed bootsteps:

    .. code-block:: bash

        # Create graph of currently installed bootsteps in both the worker
        # and consumer namespaces.
        $ celery graph bootsteps | dot -T png -o steps.png

        # Graph of the consumer namespace only.
        $ celery graph bootsteps consumer | dot -T png -o consumer_only.png

        # Graph of the worker namespace only.
        $ celery graph bootsteps worker | dot -T png -o worker_only.png

    Or graphs of workers in a cluster:

    .. code-block:: bash

        # Create graph from the current cluster
        $ celery graph workers | dot -T png -o workers.png

        # Create graph from a specified list of workers
        $ celery graph workers nodes:w1,w2,w3 | dot -T png -o workers.png

        # also specify the number of threads in each worker
        $ celery graph workers nodes:w1,w2,w3 threads:2,4,6

        # ...also specify the broker and backend URLs shown in the graph
        $ celery graph workers broker:amqp:// backend:redis://

        # ...also specify the max number of workers/threads shown (wmax/tmax),
        # enumerating anything that exceeds that number.
        $ celery graph workers wmax:10 tmax:3
- Changed the way that app instances are pickled.

    Apps can now define a ``__reduce_keys__`` method that is used instead
    of the old ``AppPickler`` attribute. E.g. if your app defines a custom
    'foo' attribute that needs to be preserved when pickling you can define
    a ``__reduce_keys__`` as such:

    .. code-block:: python

        import celery

        class Celery(celery.Celery):

            def __init__(self, *args, **kwargs):
                super(Celery, self).__init__(*args, **kwargs)
                self.foo = kwargs.get('foo')

            def __reduce_keys__(self):
                # Merge into a new dict instead of returning the result
                # of dict.update(), which is always None.
                return dict(
                    super(Celery, self).__reduce_keys__(),
                    foo=self.foo,
                )

    This is a much more convenient way to add support for pickling custom
    attributes. The old ``AppPickler`` is still supported but its use is
    discouraged and we would like to remove it in a future version.

- Ability to trace imports for debugging purposes.

    The :envvar:`C_IMPDEBUG` environment variable can be set to trace
    imports as they occur:

    .. code-block:: bash

        $ C_IMPDEBUG=1 celery worker -l info

    .. code-block:: bash

        $ C_IMPDEBUG=1 celery shell
- Message headers now available as part of the task request.

    Example setting and retrieving a header value::

        @app.task(bind=True)
        def t(self):
            return self.request.headers.get('sender')

        >>> t.apply_async(headers={'sender': 'George Costanza'})

- :class:`@events.Receiver` is now a :class:`kombu.mixins.ConsumerMixin`
  subclass.

- New :signal:`task_send` signal dispatched before a task message
  is sent and can be used to modify the final message fields (Issue #1281).

- ``celery.platforms.PIDFile`` renamed to :class:`celery.platforms.Pidfile`.

- ``celery.results.BaseDictBackend`` has been removed, replaced by
  :class:`celery.results.BaseBackend`.

- MongoDB Backend: Can now be configured using a URL.

    See :ref:`example-mongodb-result-config`.

- MongoDB Backend: No longer using deprecated ``pymongo.Connection``.

- MongoDB Backend: Now disables ``auto_start_request``.

- MongoDB Backend: Now enables ``use_greenlets`` when eventlet/gevent is
  used.

- ``subtask()`` / ``maybe_subtask()`` renamed to
  ``signature()``/``maybe_signature()``.

    Aliases still available for backwards compatibility.
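    A small sketch using the new names (the task name ``tasks.add`` is
    hypothetical):

    .. code-block:: python

        from celery import signature
        from celery.canvas import maybe_signature

        # Create a signature by task name with partial arguments.
        s = signature('tasks.add', args=(2, 2))

        # maybe_signature upgrades plain dicts to Signature instances,
        # and passes through Signature instances and None unchanged.
        s2 = maybe_signature(dict(s))

    This is handy when accepting either a dict or a signature as an
    argument, e.g. in callbacks deserialized from a message.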
- The ``correlation_id`` message property is now automatically set to the
  id of the task.

- The task message ``eta`` and ``expires`` fields now include timezone
  information.

- All result backends' ``store_result``/``mark_as_*`` methods must now accept
  a ``request`` keyword argument.

- Events now emit a warning if the broken ``yajl`` library is used.

- The :signal:`celeryd_init` signal now takes an extra keyword argument:
  ``options``.

    This is the mapping of parsed command line arguments, and can be used to
    prepare new preload arguments (``app.user_options['preload']``).
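    A connected handler could look like this (a sketch; the handler name is
    arbitrary, and handlers should accept ``**kwargs`` to stay compatible
    with future signal arguments):

    .. code-block:: python

        from celery.signals import celeryd_init

        @celeryd_init.connect
        def configure_worker(sender=None, conf=None, options=None, **kwargs):
            print('worker %s starting with options: %r' % (sender, options))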
- New callback: ``Celery.on_configure``.

    This callback is called when an app is about to be configured (a
    configuration key is required).

- Eventlet/gevent/solo/threads pools now properly handle ``BaseException``
  errors raised by tasks.

- Worker: No longer forks on :sig:`HUP`.

    This means that the worker will reuse the same pid, which makes it
    easier for process supervisors.

    Contributed by Jameel Al-Aziz.

- Optimization: Improved performance of ``ResultSet.join_native()``.

    Contributed by Stas Rudakou.

- The :signal:`task_revoked` signal now accepts a new ``request`` argument
  (Issue #1555).

    The revoked signal is dispatched after the task request is removed from
    the stack, so it must instead use the :class:`~celery.worker.job.Request`
    object to get information about the task.
  525. - Worker: New :option:`-X` command line argument to exclude queues
  526. (Issue #1399).
  527. The :option:`-X` argument is the inverse of the :option:`-Q` argument
  528. and accepts a list of queues to exclude (not consume from):
  529. .. code-block:: bash
  530. # Consume from all queues in CELERY_QUEUES, but not the 'foo' queue.
  531. $ celery worker -A proj -l info -X foo
- Adds :envvar:`C_FAKEFORK` envvar for simple init script/multi debugging.

    This means that you can now do:

    .. code-block:: bash

        $ C_FAKEFORK=1 celery multi start 10

    or:

    .. code-block:: bash

        $ C_FAKEFORK=1 /etc/init.d/celeryd start

    to skip the daemonization step and see errors that would otherwise be
    invisible due to missing stdout/stderr.

    A ``dryrun`` command has been added to the generic init script that
    enables this option.
- New public API to push and pop from the current task stack:
  :func:`celery.app.push_current_task` and
  :func:`celery.app.pop_current_task`.
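The stack semantics can be sketched in a few lines of plain Python (illustrative only; the names mirror the API above, but this is not Celery's implementation):

```python
# Illustrative sketch of a LIFO current-task stack, not Celery's code.
_task_stack = []

def push_current_task(task):
    _task_stack.append(task)

def pop_current_task():
    return _task_stack.pop()

def current_task():
    return _task_stack[-1] if _task_stack else None

# Nested invocation (e.g. a task calling another task directly)
# pushes and pops in LIFO order.
push_current_task('outer')
push_current_task('inner')
assert current_task() == 'inner'
pop_current_task()
assert current_task() == 'outer'
pop_current_task()
assert current_task() is None
```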
- ``RetryTaskError`` has been renamed to :exc:`~celery.exceptions.Retry`.

    The old name is still available for backwards compatibility.

- New semi-predicate exception :exc:`~celery.exceptions.Reject`.

    This exception can be raised to reject/requeue the task message,
    see :ref:`task-semipred-reject` for examples.
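The control flow can be sketched as follows (an illustrative toy using a stand-in ``Reject`` class and a hypothetical ``handle_message`` helper, not Celery's actual consumer code; the real exception lives in ``celery.exceptions``):

```python
# Illustrative only: a semipredicate exception carries a decision
# (requeue or not) from the task body back to the message handler.
class Reject(Exception):
    def __init__(self, reason=None, requeue=False):
        self.reason = reason
        self.requeue = requeue

def handle_message(task, message):
    try:
        task(message)
        return 'ack'
    except Reject as exc:
        return 'requeue' if exc.requeue else 'reject'

def flaky_task(message):
    # A message we can never process should be rejected outright.
    if message.get('malformed'):
        raise Reject('cannot decode', requeue=False)

def overloaded_task(message):
    # A transient condition: put the message back on the queue.
    raise Reject('try again later', requeue=True)

assert handle_message(flaky_task, {'malformed': True}) == 'reject'
assert handle_message(flaky_task, {}) == 'ack'
assert handle_message(overloaded_task, {}) == 'requeue'
```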
- :ref:`Semipredicates <task-semipredicates>` documented: (Retry/Ignore/Reject).

.. _v310-experimental:

Experimental
============

XXX
---

YYY

.. _v310-removals:

Scheduled Removals
==================
- The ``BROKER_INSIST`` setting is no longer supported.

- The ``CELERY_AMQP_TASK_RESULT_CONNECTION_MAX`` setting is no longer
  supported.

    Use :setting:`BROKER_POOL_LIMIT` instead.

- The ``CELERY_TASK_ERROR_WHITELIST`` setting is no longer supported.

    You should set the :class:`~celery.utils.mail.ErrorMail` attribute
    of the task class instead. You can also do this using
    :setting:`CELERY_ANNOTATIONS`:

    .. code-block:: python

        from celery import Celery
        from celery.utils.mail import ErrorMail

        class MyErrorMail(ErrorMail):
            whitelist = (KeyError, ImportError)

            def should_send(self, context, exc):
                return isinstance(exc, self.whitelist)

        app = Celery()
        app.conf.CELERY_ANNOTATIONS = {
            '*': {
                'ErrorMail': MyErrorMail,
            }
        }
- Functions that create broker connections no longer
  support the ``connect_timeout`` argument.

    This can now only be set using the :setting:`BROKER_CONNECTION_TIMEOUT`
    setting. This is because functions no longer create connections
    directly, but instead get them from the connection pool.
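For example (a minimal sketch; the app name and timeout value are placeholders):

```python
from celery import Celery

app = Celery('proj')

# connect_timeout can no longer be passed per call;
# set the timeout globally instead (value is an example).
app.conf.BROKER_CONNECTION_TIMEOUT = 10  # seconds
```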
- The ``CELERY_AMQP_TASK_RESULT_EXPIRES`` setting is no longer supported.

    Use :setting:`CELERY_TASK_RESULT_EXPIRES` instead.

.. _v310-deprecations:

Deprecations
============

See the :ref:`deprecation-timeline`.

.. _v310-fixes:

Fixes
=====
- AMQP Backend: join did not convert exceptions when using the json
  serializer.

- Worker: Workaround for unicode errors in logs (Issue #427).

- Task methods: ``.apply_async`` now works properly if the args list is None
  (Issue #1459).

- Autoscale and ``pool_grow``/``pool_shrink`` remote control commands
  will now also automatically increase and decrease the consumer prefetch count.

    Fix contributed by Daniel M. Taub.
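The effect on the prefetch count can be sketched with simple arithmetic (illustrative only; assumes the default ``CELERYD_PREFETCH_MULTIPLIER`` of 4, and ``prefetch_for`` is a hypothetical helper, not a Celery API):

```python
# Illustrative arithmetic: the consumer prefetch count tracks the pool
# size, scaled by the prefetch multiplier (default 4).
PREFETCH_MULTIPLIER = 4

def prefetch_for(pool_size):
    return pool_size * PREFETCH_MULTIPLIER

before = prefetch_for(10)     # pool of 10 -> prefetch 40
after = prefetch_for(10 + 2)  # pool_grow by 2 -> prefetch 48
assert after - before == 2 * PREFETCH_MULTIPLIER
```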
- ``celery control pool_`` commands did not coerce string arguments to int.

- Redis/Cache chords: Callback result is now set to failure if the group
  disappeared from the database (Issue #1094).

- Worker: Now makes sure that the shutdown process is not initiated multiple
  times.

- Multi: Now properly handles both ``-f`` and ``--logfile`` options
  (Issue #1541).

.. _v310-internal:

Internal changes
================

- Module ``celery.task.trace`` has been renamed to :mod:`celery.app.trace`.

- Module ``celery.concurrency.processes`` has been renamed to
  :mod:`celery.concurrency.prefork`.

- Classes that no longer fall back to using the default app:

    - Result backends (:class:`celery.backends.base.BaseBackend`)
    - :class:`celery.worker.WorkController`
    - :class:`celery.worker.Consumer`
    - :class:`celery.worker.job.Request`

    This means that you have to pass a specific app when instantiating
    these classes.

- ``EventDispatcher.copy_buffer`` renamed to ``EventDispatcher.extend_buffer``.

- Removed unused and never documented global instance
  ``celery.events.state.state``.

- :class:`celery.apps.worker.Worker` has been refactored as a subclass of
  :class:`celery.worker.WorkController`.

    This removes a lot of duplicate functionality.

- The ``Celery.with_default_connection`` method has been removed in favor
  of ``with app.connection_or_acquire``.