.. _whatsnew-2.5:

==========================
 What's new in Celery 2.5
==========================

Celery aims to be a flexible and reliable, best-of-breed solution
to process vast amounts of messages in a distributed fashion, while
providing operations with the tools to maintain such a system.

Celery has a large and diverse community of users and contributors;
you should come join us :ref:`on IRC <irc-channel>`
or :ref:`our mailing-list <mailing-list>`.

To read more about Celery you should visit our `website`_.

While this version is backward compatible with previous versions,
it is important that you read the following section.

If you use Celery in combination with Django you must also
read the `django-celery changelog <djcelery:version-2.5.0>` and upgrade
to `django-celery 2.5`_.

This version is officially supported on CPython 2.5, 2.6, 2.7, 3.2 and 3.3,
as well as PyPy and Jython.

.. _`website`: http://celeryproject.org/
.. _`django-celery 2.5`: http://pypi.python.org/pypi/django-celery/

.. contents::
    :local:

.. _v250-important:

Important Notes
===============

Broker connection pool now enabled by default
---------------------------------------------

The default limit is 10 connections; if you have many threads/green-threads
using connections at the same time you may want to tweak this limit
to avoid contention.

See the :setting:`BROKER_POOL_LIMIT` setting for more information.

Also note that publishing tasks will be retried by default.  To change
this default, or the default retry policy, see
:setting:`CELERY_TASK_PUBLISH_RETRY` and
:setting:`CELERY_TASK_PUBLISH_RETRY_POLICY`.
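
As a rough sketch only (the numbers here are illustrative, not
recommendations), tuning the pool size and the publish retry policy in your
configuration module could look like this:

.. code-block:: python

    # Allow up to 20 simultaneous broker connections in the pool.
    BROKER_POOL_LIMIT = 20

    # Publishing retries are on by default; this makes it explicit and
    # customizes the retry policy.
    CELERY_TASK_PUBLISH_RETRY = True
    CELERY_TASK_PUBLISH_RETRY_POLICY = {
        'max_retries': 3,       # give up after three attempts
        'interval_start': 0,    # retry immediately the first time
        'interval_step': 0.2,   # then add 0.2 seconds per retry
        'interval_max': 0.2,    # but never wait more than 0.2 seconds
    }
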
Rabbit Result Backend: Exchange is no longer *auto delete*
------------------------------------------------------------

The exchange used for results in the Rabbit (AMQP) result backend
used to have the *auto_delete* flag set, which could result in a
race condition leading to an annoying warning.

.. admonition:: For RabbitMQ users

    Old exchanges created with the *auto_delete* flag enabled have
    to be removed.

    The :program:`camqadm` command can be used to delete the
    previous exchange:

    .. code-block:: bash

        $ camqadm exchange.delete celeryresults

    As an alternative to deleting the old exchange you can
    configure a new name for the exchange::

        CELERY_RESULT_EXCHANGE = 'celeryresults2'

    But you have to make sure that all clients and workers
    use this new setting, so they are updated to use the same
    exchange name.

Solution for hanging workers (but must be manually enabled)
-------------------------------------------------------------

The :setting:`CELERYD_FORCE_EXECV` setting has been added to solve
a problem with deadlocks that originate when threads and fork are mixed
together:

.. code-block:: python

    CELERYD_FORCE_EXECV = True

This setting is recommended for all users using the prefork pool,
but especially for users also using time limits or a max tasks per child
setting.

- See `Python Issue 6721`_ to read more about this issue, and why
  resorting to :func:`~os.execv` is the only safe solution.

Enabling this option will result in a slight performance penalty
when new child worker processes are started, and it will also increase
memory usage (but many platforms are optimized, so the impact may be
minimal).  Considering that it ensures reliability when replacing
lost worker processes, it should be worth it.

- It's already the default behavior on Windows.

- It will be the default behavior for all platforms in a future version.

.. _`Python Issue 6721`: http://bugs.python.org/issue6721#msg140215

.. _v250-optimizations:

Optimizations
=============

- The code path used when the worker executes a task has been heavily
  optimized, meaning the worker is able to process a great deal
  more tasks/second compared to previous versions.  As an example the solo
  pool can now process up to 15000 tasks/second on a 4 core MacBook Pro
  when using the `pylibrabbitmq`_ transport, where it previously
  could only do 5000 tasks/second.

- The task error tracebacks are now much shorter.

- Fixed a noticeable delay in task processing when rate limits are enabled.

.. _`pylibrabbitmq`: http://pypi.python.org/pypi/pylibrabbitmq/

.. _v250-deprecations:

Deprecations
============

Removals
--------

* The old :class:`TaskSet` signature of ``(task_name, list_of_tasks)``
  can no longer be used (originally scheduled for removal in 2.4).
  The deprecated ``.task_name`` and ``.task`` attributes have also been
  removed.

* The functions ``celery.execute.delay_task``, ``celery.execute.apply``,
  and ``celery.execute.apply_async`` have been removed (originally
  scheduled for removal in 2.3).

* The built-in ``ping`` task has been removed (originally scheduled
  for removal in 2.3).  Please use the ping broadcast command
  instead.

* It is no longer possible to import ``subtask`` and ``TaskSet``
  from :mod:`celery.task.base`; please import them from :mod:`celery.task`
  instead (originally scheduled for removal in 2.4).
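
If you are upgrading code that used the old import location, the change is
a one-line sketch (using the module names stated above):

.. code-block:: python

    # Before (no longer possible):
    #   from celery.task.base import subtask, TaskSet

    # After:
    from celery.task import subtask, TaskSet
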
Deprecations
------------

* The :mod:`celery.decorators` module has changed status
  from pending deprecation to deprecated, and is scheduled for removal
  in version 4.0.  The ``celery.task`` module must be used instead.
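
Migrating is usually just a matter of changing the import; a minimal sketch
(the ``add`` task here is only an illustration):

.. code-block:: python

    # Deprecated, scheduled for removal in 4.0:
    #   from celery.decorators import task

    # Use the celery.task module instead:
    from celery.task import task

    @task
    def add(x, y):
        return x + y
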
.. _v250-news:

News
====

Timezone support
----------------

Celery can now be configured to treat all incoming and outgoing dates
as UTC, and the local timezone can be configured.

This is not yet enabled by default, since enabling
time zone support means workers running versions pre-2.5
will be out of sync with upgraded workers.

To enable UTC you have to set :setting:`CELERY_ENABLE_UTC`::

    CELERY_ENABLE_UTC = True

When UTC is enabled, dates and times in task messages will be
converted to UTC, and then converted back to the local timezone
when received by a worker.

You can change the local timezone using the :setting:`CELERY_TIMEZONE`
setting.  Installing the :mod:`pytz` library is recommended when
using a custom timezone, to keep timezone definitions up-to-date,
but it will fall back to a system definition of the timezone if available.
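
For example, to enable UTC and use a custom local timezone (the zone name
here is only an illustration):

.. code-block:: python

    CELERY_ENABLE_UTC = True
    CELERY_TIMEZONE = 'Europe/Oslo'
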
UTC will be enabled by default in version 3.0.

.. note::

    django-celery will use the local timezone as specified by the
    ``TIME_ZONE`` setting, and it will also honor the new `USE_TZ`_ setting
    introduced in Django 1.4.

.. _`USE_TZ`: https://docs.djangoproject.com/en/dev/topics/i18n/timezones/

New security serializer using cryptographic signing
-----------------------------------------------------

A new serializer has been added that signs and verifies the signature
of messages.

The name of the new serializer is ``auth``, and it needs additional
configuration to work (see :ref:`conf-security`).
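
A rough sketch of what that additional configuration might look like
(the certificate paths are placeholders; see :ref:`conf-security` for the
authoritative list of settings):

.. code-block:: python

    # Sign outgoing messages and verify incoming ones using the
    # 'auth' serializer.
    CELERY_TASK_SERIALIZER = 'auth'

    # Private key and certificate used for signing, and the directory of
    # certificates trusted for verification (placeholder paths).
    CELERY_SECURITY_KEY = '/etc/ssl/private/worker.key'
    CELERY_SECURITY_CERTIFICATE = '/etc/ssl/certs/worker.pem'
    CELERY_SECURITY_CERT_STORE = '/etc/ssl/certs/*.pem'
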
.. seealso::

    :ref:`guide-security`

Contributed by Mher Movsisyan.

Experimental support for automatic module reloading
-----------------------------------------------------

Starting :program:`celeryd` with the :option:`--autoreload` option will
enable the worker to watch for file system changes to all imported task
modules (and also any non-task modules added to the
:setting:`CELERY_IMPORTS` setting or the :option:`-I|--include` option).

This is an experimental feature intended for use in development only;
using auto-reload in production is discouraged, as the behavior of reloading
a module in Python is undefined and may cause hard-to-diagnose bugs and
crashes.  Celery uses the same approach as the auto-reloader found in e.g.
the Django ``runserver`` command.

When auto-reload is enabled the worker starts an additional thread
that watches for changes in the file system.  New modules are imported,
and already imported modules are reloaded, whenever a change is detected.
If the prefork pool is used the child processes will finish the work
they are doing and exit, so that they can be replaced by fresh processes,
effectively reloading the code.

File system notification backends are pluggable, and Celery comes with three
implementations:

* inotify (Linux)

  Used if the :mod:`pyinotify` library is installed.
  If you are running on Linux this is the recommended implementation;
  to install the :mod:`pyinotify` library run the following command:

  .. code-block:: bash

      $ pip install pyinotify

* kqueue (OS X/BSD)

* stat

  The fallback implementation simply polls the files using ``stat`` and is
  very expensive.

You can force an implementation by setting the :envvar:`CELERYD_FSNOTIFY`
environment variable:

.. code-block:: bash

    $ env CELERYD_FSNOTIFY=stat celeryd -l info --autoreload

Contributed by Mher Movsisyan.

New :setting:`CELERY_ANNOTATIONS` setting
-----------------------------------------

This new setting enables you to modify task classes
and their attributes from the configuration.

The setting can be a dict, or a list of annotation objects that filter
for tasks and return a map of attributes to change.

As an example, this is an annotation to change the ``rate_limit`` attribute
for the ``tasks.add`` task:

.. code-block:: python

    CELERY_ANNOTATIONS = {'tasks.add': {'rate_limit': '10/s'}}

or change the same for all tasks:

.. code-block:: python

    CELERY_ANNOTATIONS = {'*': {'rate_limit': '10/s'}}

You can change methods too, for example the ``on_failure`` handler:

.. code-block:: python

    def my_on_failure(self, exc, task_id, args, kwargs, einfo):
        print('Oh no! Task failed: %r' % (exc, ))

    CELERY_ANNOTATIONS = {'*': {'on_failure': my_on_failure}}

If you need more flexibility then you can also create objects
that filter for tasks to annotate:

.. code-block:: python

    class MyAnnotate(object):

        def annotate(self, task):
            if task.name.startswith('tasks.'):
                return {'rate_limit': '10/s'}

    CELERY_ANNOTATIONS = (MyAnnotate(), {…})

``current`` provides the currently executing task
---------------------------------------------------

The new :data:`celery.task.current` proxy will always give the currently
executing task.

**Example**:

.. code-block:: python

    from celery.task import current, task

    @task
    def update_twitter_status(auth, message):
        twitter = Twitter(auth)
        try:
            twitter.update_status(message)
        except twitter.FailWhale, exc:
            # retry in 10 seconds.
            current.retry(countdown=10, exc=exc)

Previously you would have to type ``update_twitter_status.retry(…)``
here, which can be annoying for long task names.

.. note::

    This will not work if the task function is called directly, i.e.
    ``update_twitter_status(a, b)``.  For that to work ``apply`` must
    be used: ``update_twitter_status.apply((a, b))``.

In Other News
-------------

- Now depends on Kombu 2.1.0.

- Efficient Chord support for the memcached backend (Issue #533).

  This means memcached joins Redis in the ability to do non-polling
  chords.

  Contributed by Dan McGee.

- Adds Chord support for the Rabbit result backend (amqp).

  The Rabbit result backend can now use the fallback chord solution.

- Sending :sig:`QUIT` to celeryd will now cause it to cold terminate.

  That is, it will not finish executing the tasks it is currently
  working on.

  Contributed by Alec Clowes.

- New "detailed" mode for the Cassandra backend.

  The idea is to keep all task states using Cassandra wide columns:
  new states are appended to the row as new columns, the last state
  being the last column.

  See the :setting:`CASSANDRA_DETAILED_MODE` setting.

  Contributed by Steeve Morin.

- The crontab parser now matches Vixie Cron behavior when parsing ranges
  with steps (e.g. 1-59/2).

  Contributed by Daniel Hepper.

- celerybeat can now be configured on the command-line like celeryd.

  Additional configuration must be added at the end of the argument list
  followed by ``--``, for example:

  .. code-block:: bash

      $ celerybeat -l info -- celerybeat.max_loop_interval=10.0

- Now limits the number of frames in a traceback so that celeryd does not
  crash on maximum recursion limit exceeded exceptions (Issue #615).

  The limit is set to the current recursion limit divided by 8 (which
  is 125 by default).

  To get or set the current recursion limit use
  :func:`sys.getrecursionlimit` and :func:`sys.setrecursionlimit`.

- More information is now preserved in the pickleable traceback.

  This has been added so that Sentry can show more details.

  Contributed by Sean O'Connor.

- CentOS init script has been updated and should be more flexible.

  Contributed by Andrew McFague.

- MongoDB result backend now supports ``forget()``.

  Contributed by Andrew McFague.

- ``task.retry()`` now re-raises the original exception keeping
  the original stack trace.

  Suggested by ojii.

- The ``--uid`` argument to daemons now uses ``initgroups()`` to set
  groups to all the groups the user is a member of.

  Contributed by Łukasz Oleś.

- celeryctl: Added ``shell`` command.

  The shell will have the current app (``celery``) and all tasks
  automatically added to locals.

- celeryctl: Added ``migrate`` command.

  The migrate command moves all tasks from one broker to another.
  Note that this is experimental and you should have a backup
  of the data before proceeding.

  **Examples**:

  .. code-block:: bash

      $ celeryctl migrate redis://localhost amqp://localhost
      $ celeryctl migrate amqp://localhost//v1 amqp://localhost//v2
      $ python manage.py celeryctl migrate django:// redis://

- Routers can now override the ``exchange`` and ``routing_key`` used
  to create missing queues (Issue #577).

  By default this will always use the name of the queue,
  but you can now have a router return ``exchange`` and ``routing_key`` keys
  to set them.

  This is useful when using routing classes which decide a destination
  at runtime.

  Contributed by Akira Matsuzaki.

- Redis result backend: Adds support for a ``max_connections`` parameter.

  It is now possible to configure the maximum number of
  simultaneous connections in the Redis connection pool used for
  results.

  The default max connections setting can be configured using the
  :setting:`CELERY_REDIS_MAX_CONNECTIONS` setting,
  or it can be changed individually by ``RedisBackend(max_connections=int)``.

  Contributed by Steeve Morin.

- Redis result backend: Adds the ability to wait for results without polling.

  Contributed by Steeve Morin.

- MongoDB result backend: Now supports save and restore of tasksets.

  Contributed by Julien Poissonnier.

- There's a new :ref:`guide-security` guide in the documentation.

- The init scripts have been updated, and many bugs have been fixed.

  Contributed by Chris Streeter.

- User (tilde) is now expanded in command-line arguments.

- Can now configure the :envvar:`CELERYCTL` environment variable
  in :file:`/etc/default/celeryd`.

  While not necessary for operation, :program:`celeryctl` is used for the
  ``celeryd status`` command, and the path to :program:`celeryctl` must be
  configured for that to work.

  The daemonization cookbook contains examples.

  Contributed by Jude Nagurney.

- The MongoDB result backend can now use Replica Sets.

  Contributed by Ivan Metzlar.

- gevent: Now supports autoscaling (Issue #599).

  Contributed by Mark Lavin.

- multiprocessing: The mediator thread is now always enabled,
  even when rate limits are disabled, as the pool semaphore
  is known to block the main thread, causing broadcast commands and
  shutdown to depend on the semaphore being released.

Fixes
=====

- Exceptions that are re-raised with a new exception object now keep
  the original stack trace.

- Windows: Fixed the ``no handlers found for multiprocessing`` warning.

- Windows: The ``celeryd`` program can now be used.

  Previously Windows users had to launch celeryd using
  ``python -m celery.bin.celeryd``.

- Redis result backend: Now uses the ``SETEX`` command to set the result key
  and expiry atomically.

  Suggested by yaniv-aknin.

- celeryd: Fixed a problem where shutdown hung when Ctrl+C was used to
  terminate.

- celeryd: No longer crashes when channel errors occur.

  Fix contributed by Roger Hu.

- Fixed memory leak in the eventlet pool, caused by the
  use of ``greenlet.getcurrent``.

  Fix contributed by Ignas Mikalajūnas.

- Cassandra backend: No longer uses :func:`pycassa.connect`, which is
  deprecated since :mod:`pycassa` 1.4.

  Fix contributed by Jeff Terrace.

- Fixed unicode decode errors that could occur while sending error emails.

  Fix contributed by Seong Wun Mun.

- ``celery.bin`` programs now always define ``__package__`` as recommended
  by PEP-366.

- ``send_task`` now emits a warning when used in combination with
  :setting:`CELERY_ALWAYS_EAGER` (Issue #581).

  Contributed by Mher Movsisyan.

- ``apply_async`` now forwards the original keyword arguments to ``apply``
  when :setting:`CELERY_ALWAYS_EAGER` is enabled.

- celeryev now tries to re-establish the connection if the connection
  to the broker is lost (Issue #574).

- celeryev: Fixed a crash occurring if a task has no associated worker
  information.

  Fix contributed by Matt Williamson.

- The current date and time is now consistently taken from the current
  loader's ``now`` method.

- Now shows a helpful error message when given a config module ending in
  ``.py`` that can't be imported.

- celeryctl: The ``--expires`` and ``--eta`` arguments to the apply command
  can now be ISO-8601 formatted strings.

- celeryctl now exits with exit status ``EX_UNAVAILABLE`` (69) if no replies
  have been received.