.. _changelog-2.2:

===============================
Change history for Celery 2.2
===============================

.. contents::
    :local:

.. _version-2.2.8:

2.2.8
=====
:release-date: 2011-11-25 16:00 GMT
:by: Ask Solem

.. _v228-security-fixes:

Security Fixes
--------------

* [Security: `CELERYSA-0001`_] Daemons would set effective id's rather than
  real id's when the :option:`--uid`/:option:`--gid` arguments to
  :program:`celery multi`, :program:`celeryd_detach`,
  :program:`celery beat` and :program:`celery events` were used.

  This means privileges weren't properly dropped, and that it would
  be possible to regain supervisor privileges later.

.. _`CELERYSA-0001`:
    http://github.com/celery/celery/tree/master/docs/sec/CELERYSA-0001.txt

.. _version-2.2.7:

2.2.7
=====
:release-date: 2011-06-13 16:00 BST

* New signals: :signal:`after_setup_logger` and
  :signal:`after_setup_task_logger`.

  These signals can be used to augment logging configuration
  after Celery has set up logging.
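
  A minimal sketch of how a handler could be attached (the exact keyword
  arguments delivered by the signal are an assumption here; handlers should
  accept ``**kwargs`` regardless):

  .. code-block:: python

      import logging

      from celery.signals import after_setup_logger

      def augment_logging(sender=None, logger=None, **kwargs):
          # Attach an extra handler once Celery has configured logging.
          logger.addHandler(logging.FileHandler("worker.extra.log"))

      after_setup_logger.connect(augment_logging)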

* Redis result backend now works with Redis 2.4.4.

* multi: The :option:`--gid` option now works correctly.

* worker: Retry wrongfully used the repr of the traceback instead
  of the string representation.

* App.config_from_object: Now loads the module, not an attribute of the module.

* Fixed issue where logging of objects would give "<Unrepresentable: ...>".

.. _version-2.2.6:

2.2.6
=====
:release-date: 2011-04-15 16:00 CEST

.. _v226-important:

Important Notes
---------------

* Now depends on Kombu 1.1.2.

* Dependency lists now explicitly specify that we don't want python-dateutil
  2.x, as that version only supports py3k.

  If you have installed dateutil 2.0 by accident you should downgrade
  to the 1.5.0 version::

      pip install -U python-dateutil==1.5.0

  or by easy_install::

      easy_install -U python-dateutil==1.5.0

.. _v226-fixes:

Fixes
-----

* The new ``WatchedFileHandler`` broke Python 2.5 support (Issue #367).

* Task: Don't use ``app.main`` if the task name is set explicitly.

* Sending emails did not work on Python 2.5, due to a bug in
  the version detection code (Issue #378).

* Beat: Adds method ``ScheduleEntry._default_now``.

  This method can be overridden to change the default value
  of ``last_run_at``.
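
  A minimal sketch of such an override, assuming the scheduler exposes its
  entry class through an ``Entry`` attribute (the class names here are
  hypothetical):

  .. code-block:: python

      from datetime import datetime, timedelta

      from celery.beat import ScheduleEntry, Scheduler

      class BackdatedEntry(ScheduleEntry):

          def _default_now(self):
              # Pretend the entry already ran an hour ago, so it becomes
              # due shortly after startup instead of a full interval later.
              return datetime.now() - timedelta(hours=1)

      class BackdatedScheduler(Scheduler):
          Entry = BackdatedEntry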

* An error occurring in process cleanup could mask task errors.

  We no longer propagate errors happening at process cleanup,
  but log them instead. This way they will not interfere with publishing
  the task result (Issue #365).

* Defining tasks did not work properly when using the Django
  ``shell_plus`` utility (Issue #366).

* ``AsyncResult.get`` did not accept the ``interval`` and ``propagate``
  arguments.

* worker: Fixed a bug where the worker would not shut down if a
  :exc:`socket.error` was raised.

.. _version-2.2.5:

2.2.5
=====
:release-date: 2011-03-28 06:00 p.m. CEST

.. _v225-important:

Important Notes
---------------

* Now depends on Kombu 1.0.7.

.. _v225-news:

News
----

* Our documentation is now hosted by Read the Docs
  (http://docs.celeryproject.org), and all links have been changed to point to
  the new URL.

* Logging: Now supports log rotation using external tools like `logrotate.d`_
  (Issue #321).

  This is accomplished by using the ``WatchedFileHandler``, which re-opens
  the file if it is renamed or deleted.

  .. _`logrotate.d`:
      http://www.ducea.com/2006/06/06/rotating-linux-log-files-part-2-logrotate/

* :ref:`tut-otherqueues` now documents how to configure the Redis/Database result
  backends.

* gevent: Now supports ETA tasks.

  But gevent still needs ``CELERY_DISABLE_RATE_LIMITS=True`` to work.

* TaskSet User Guide: now contains TaskSet callback recipes.

* Eventlet: New signals:

  * ``eventlet_pool_started``
  * ``eventlet_pool_preshutdown``
  * ``eventlet_pool_postshutdown``
  * ``eventlet_pool_apply``

  See :mod:`celery.signals` for more information.

* New :setting:`BROKER_TRANSPORT_OPTIONS` setting can be used to pass
  additional arguments to a particular broker transport.
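
  Transport options are passed as a dictionary mapping option names to
  values. For example (the ``visibility_timeout`` key shown here is just an
  illustration of the mechanism; which keys are honored depends on the
  transport in use):

  .. code-block:: python

      # celeryconfig.py
      BROKER_TRANSPORT_OPTIONS = {"visibility_timeout": 3600}  # seconds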

* worker: ``worker_pid`` is now part of the request info as returned by
  broadcast commands.

* ``TaskSet.apply``/``TaskSet.apply_async`` now accept an optional ``taskset_id``
  argument.

* The taskset_id (if any) is now available in the Task request context.

* SQLAlchemy result backend: ``task_id`` and ``taskset_id`` columns now have a
  unique constraint (tables need to be recreated for this to take effect).

* Task User Guide: Added section about choosing a result backend.

* Removed unused attribute ``AsyncResult.uuid``.

.. _v225-fixes:

Fixes
-----

* multiprocessing.Pool: Fixes race condition when marking a job with
  ``WorkerLostError`` (Issue #268).

  The process may have published a result before it was terminated,
  but we have no reliable way to detect that this is the case.
  So we have to wait for 10 seconds before marking the result with
  ``WorkerLostError``. This gives the result handler a chance to retrieve the
  result.

* multiprocessing.Pool: Shutdown could hang if rate limits were disabled.

  There was a race condition when the MainThread was waiting for the pool
  semaphore to be released. The ResultHandler now terminates after 5
  seconds if there are unacked jobs, but no worker processes left to start
  them (it needs to time out because there could still be an ack+result
  that we haven't consumed from the result queue; it is unlikely we will
  receive any after 5 seconds with no worker processes).

* celerybeat: Now creates the pidfile even if the ``--detach`` option is not set.

* eventlet/gevent: The broadcast command consumer is now running in a separate
  greenthread.

  This ensures broadcast commands will take priority even if there are many
  active tasks.

* Internal module ``celery.worker.controllers`` renamed to
  ``celery.worker.mediator``.

* worker: Threads now terminate the program by calling ``os._exit``, as it
  is the only way to ensure exit in the case of syntax errors, or other
  unrecoverable errors.

* Fixed typo in ``maybe_timedelta`` (Issue #352).

* worker: Broadcast commands now log with loglevel debug instead of warning.

* AMQP Result Backend: Now resets the cached channel if the connection is lost.

* Polling results with the AMQP result backend was not working properly.

* Rate limits: No longer sleeps if there are no tasks, but rather waits for
  the task received condition (performance improvement).

* ConfigurationView: ``iter(dict)`` should return keys, not items (Issue #362).

* celerybeat: PersistentScheduler now automatically removes a corrupted
  schedule file (Issue #346).

* Programs that don't support positional command-line arguments now provide
  a user-friendly error message.

* Programs no longer try to load the configuration file when showing
  ``--version`` (Issue #347).

* Autoscaler: The "all processes busy" log message is now severity debug
  instead of error.

* worker: If the message body can't be decoded, it is now passed through
  ``safe_str`` when logging.

  This is to ensure we don't get additional decoding errors when trying to log
  the failure.

* ``app.config_from_object``/``app.config_from_envvar`` now works for all
  loaders.

* Now emits a user-friendly error message if the result backend name is
  unknown (Issue #349).

* :mod:`celery.contrib.batches`: Now sets loglevel and logfile in the task
  request so ``task.get_logger`` works with batch tasks (Issue #357).

* worker: An exception was raised if using the amqp transport and the prefetch
  count value exceeded 65535 (Issue #359).

  The prefetch count is incremented for every received task with an
  ETA/countdown defined. The prefetch count is a short, so it can only support
  a maximum value of 65535. If the value exceeds the maximum value we now
  disable the prefetch count; it is re-enabled as soon as the value is below
  the limit again.

* cursesmon: Fixed unbound local error (Issue #303).

* eventlet/gevent is now imported on demand so autodoc can import the modules
  without having eventlet/gevent installed.

* worker: Ack callback now properly handles ``AttributeError``.

* ``Task.after_return`` is now always called *after* the result has been
  written.

* Cassandra Result Backend: Should now work with the latest ``pycassa``
  version.

* multiprocessing.Pool: No longer cares if the putlock semaphore is released
  too many times (this can happen if one or more worker processes are
  killed).

* SQLAlchemy Result Backend: Now returns the accidentally removed ``date_done``
  again (Issue #325).

* Task.request context is now always initialized, to ensure calling the task
  function directly works even if it actively uses the request context.

* Fixed exception occurring when iterating over the result from ``TaskSet.apply``.

* eventlet: Now properly schedules tasks with an ETA in the past.

.. _version-2.2.4:

2.2.4
=====
:release-date: 2011-02-19 12:00 a.m. CET

.. _v224-fixes:

Fixes
-----

* worker: 2.2.3 broke error logging, resulting in tracebacks not being logged.

* AMQP result backend: Polling task states did not work properly if there was
  more than one result message in the queue.

* ``TaskSet.apply_async()`` and ``TaskSet.apply()`` now support an optional
  ``taskset_id`` keyword argument (Issue #331).
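
  For example (a minimal sketch; the standard library ``uuid`` module is used
  to generate the id, and ``add`` stands in for any existing task):

  .. code-block:: python

      from uuid import uuid4

      from celery.task import TaskSet

      ts = TaskSet(tasks=[add.subtask((i, i)) for i in range(10)])
      result = ts.apply_async(taskset_id=str(uuid4()))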

* The current taskset id (if any) is now available in the task context as
  ``request.taskset`` (Issue #329).

* SQLAlchemy result backend: ``date_done`` was no longer part of the results as it
  had been accidentally removed. It is now available again (Issue #325).

* SQLAlchemy result backend: Added unique constraint on `Task.id` and
  `TaskSet.taskset_id`. Tables need to be recreated for this to take effect.

* Fixed exception raised when iterating over the result of ``TaskSet.apply()``.

* Tasks User Guide: Added section on choosing a result backend.

.. _version-2.2.3:

2.2.3
=====
:release-date: 2011-02-12 04:00 p.m. CET

.. _v223-fixes:

Fixes
-----

* Now depends on Kombu 1.0.3.

* Task.retry now supports a ``max_retries`` argument, used to change the
  default value.
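
  For example (a sketch only; ``fetch_url`` is a hypothetical helper):

  .. code-block:: python

      from celery.task import task

      @task()
      def refresh(url):
          try:
              return fetch_url(url)
          except IOError as exc:
              # Allow more attempts than the task's default max_retries.
              refresh.retry(exc=exc, max_retries=10)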

* `multiprocessing.cpu_count` may raise :exc:`NotImplementedError` on
  platforms where this is not supported (Issue #320).

* Coloring of log messages broke if the logged object was not a string.

* Fixed several typos in the init-script documentation.

* A regression caused `Task.exchange` and `Task.routing_key` to no longer
  have any effect. This is now fixed.

* Routing User Guide: Fixes typo, routers in :setting:`CELERY_ROUTES` must be
  instances, not classes.

* :program:`celeryev` did not create the pidfile even though the
  :option:`--pidfile` argument was set.

* Task logger format was no longer used (Issue #317).

  The id and name of the task are now part of the log message again.

* A safe version of ``repr()`` is now used in strategic places to ensure
  objects with a broken ``__repr__`` don't crash the worker, or otherwise
  make errors hard to understand (Issue #298).

* Remote control command ``active_queues``: did not account for queues added
  at runtime.

  In addition, the dictionary replied by this command now has a different
  structure: the exchange key is now a dictionary containing the
  exchange declaration in full.

* The :option:`-Q` option to :program:`celery worker` removed unused queue
  declarations, so routing of tasks could fail.

  Queues are no longer removed, but rather `app.amqp.queues.consume_from()`
  is used as the list of queues to consume from.

  This ensures all queues are available for routing purposes.

* celeryctl: Now supports the `inspect active_queues` command.

.. _version-2.2.2:

2.2.2
=====
:release-date: 2011-02-03 04:00 p.m. CET

.. _v222-fixes:

Fixes
-----

* Celerybeat could not read the schedule properly, so entries in
  :setting:`CELERYBEAT_SCHEDULE` would not be scheduled.

* Task error log message now includes `exc_info` again.

* The `eta` argument can now be used with `task.retry`.

  Previously it was overwritten by the `countdown` argument.
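
  For example (a sketch only; ``ready_to_send`` is a hypothetical check):

  .. code-block:: python

      from datetime import datetime, timedelta

      from celery.task import task

      @task()
      def deliver(message):
          if not ready_to_send(message):
              # Retry at an absolute point in time rather than a countdown.
              deliver.retry(eta=datetime.now() + timedelta(minutes=5))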

* celery multi/celeryd_detach: Now logs errors occurring when executing
  the `celery worker` command.

* daemonizing tutorial: Fixed typo ``--time-limit 300`` ->
  ``--time-limit=300``.

* Colors in logging broke non-string objects in log messages.

* ``setup_task_logger`` no longer makes assumptions about magic task kwargs.

.. _version-2.2.1:

2.2.1
=====
:release-date: 2011-02-02 04:00 p.m. CET

.. _v221-fixes:

Fixes
-----

* Eventlet pool was leaking memory (Issue #308).

* Deprecated function ``celery.execute.delay_task`` was accidentally removed,
  now available again.

* ``BasePool.on_terminate`` stub did not exist.

* celeryd_detach: Adds readable error messages if the user/group name does not
  exist.

* Smarter handling of unicode decode errors when logging errors.

.. _version-2.2.0:

2.2.0
=====
:release-date: 2011-02-01 10:00 a.m. CET

.. _v220-important:

Important Notes
---------------

* Carrot has been replaced with `Kombu`_.

  Kombu is the next-generation messaging framework for Python,
  fixing several flaws present in Carrot that were hard to fix
  without breaking backwards compatibility.

  It also adds:

  * First-class support for virtual transports: Redis, Django ORM,
    SQLAlchemy, Beanstalk, MongoDB, CouchDB and in-memory.
  * Consistent error handling with introspection.
  * The ability to ensure that an operation is performed by gracefully
    handling connection and channel errors.
  * Message compression (zlib, bzip2, or custom compression schemes).

  This means that `ghettoq` is no longer needed, as the
  functionality it provided is already available in Celery by default.
  The virtual transports are also more feature complete, with support
  for exchanges (direct and topic). The Redis transport even supports
  fanout exchanges, so it is able to perform worker remote control
  commands.

.. _`Kombu`: http://pypi.python.org/pypi/kombu

* Magic keyword arguments pending deprecation.

  The magic keyword arguments were responsible for many problems
  and quirks: notably issues with tasks and decorators, and name
  collisions in keyword arguments for the unaware.

  It wasn't easy to find a way to deprecate the magic keyword arguments,
  but we think this is a solution that makes sense and it will not
  have any adverse effects for existing code.

  The path to a magic keyword argument free world is:

  * the `celery.decorators` module is deprecated and the decorators
    can now be found in `celery.task`.
  * The decorators in `celery.task` disable keyword arguments by
    default.
  * All examples in the documentation have been changed to use
    `celery.task`.

  This means that the following will have magic keyword arguments
  enabled (old style):

  .. code-block:: python

      from celery.decorators import task

      @task()
      def add(x, y, **kwargs):
          print("In task %s" % kwargs["task_id"])
          return x + y

  And this will not use magic keyword arguments (new style):

  .. code-block:: python

      from celery.task import task

      @task()
      def add(x, y):
          print("In task %s" % add.request.id)
          return x + y

  In addition, tasks can choose not to accept magic keyword arguments by
  setting the `task.accept_magic_kwargs` attribute.

  .. admonition:: Deprecation

      Using the decorators in :mod:`celery.decorators` emits a
      :class:`PendingDeprecationWarning` with a helpful message urging
      you to change your code. In version 2.4 this will be replaced with
      a :class:`DeprecationWarning`, and in version 4.0 the
      :mod:`celery.decorators` module will be removed and no longer exist.

      Similarly, the `task.accept_magic_kwargs` attribute will no
      longer have any effect starting from version 4.0.

* The magic keyword arguments are now available as `task.request`.

  This is called *the context*. Using thread-local storage, the
  context contains state that is related to the current request.

  It is mutable, and you can add custom attributes that will only be seen
  by the current task request.

  The following context attributes are always available:

  =====================================  ===================================
  **Magic Keyword Argument**             **Replace with**
  =====================================  ===================================
  `kwargs["task_id"]`                    `self.request.id`
  `kwargs["delivery_info"]`              `self.request.delivery_info`
  `kwargs["task_retries"]`               `self.request.retries`
  `kwargs["logfile"]`                    `self.request.logfile`
  `kwargs["loglevel"]`                   `self.request.loglevel`
  `kwargs["task_is_eager"]`              `self.request.is_eager`
  **NEW**                                `self.request.args`
  **NEW**                                `self.request.kwargs`
  =====================================  ===================================

  In addition, the following methods now automatically use the current
  context, so you don't have to pass `kwargs` manually anymore:

  * `task.retry`
  * `task.get_logger`
  * `task.update_state`
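
  For example, inside a task body the context replaces the old ``kwargs``
  lookups (a sketch; ``states.STARTED`` is the standard built-in state):

  .. code-block:: python

      from celery import states
      from celery.task import task

      @task()
      def add(x, y):
          logger = add.get_logger()
          logger.info("Executing %s with args %r" % (
              add.request.id, add.request.args))
          add.update_state(state=states.STARTED)
          return x + y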

* `Eventlet`_ support.

  This is great news for I/O-bound tasks!

  To change pool implementations you use the :option:`-P|--pool` argument
  to :program:`celery worker`, or globally using the
  :setting:`CELERYD_POOL` setting. This can be the full name of a class,
  or one of the following aliases: `processes`, `eventlet`, `gevent`.

  For more information please see the :ref:`concurrency-eventlet` section
  in the User Guide.
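
  For example, to select the Eventlet pool globally (a configuration sketch):

  .. code-block:: python

      # celeryconfig.py
      CELERYD_POOL = "eventlet"

  On the command line the equivalent is ``celery worker -P eventlet``.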

  .. admonition:: Why not gevent?

      For our first alternative concurrency implementation we have focused
      on `Eventlet`_, but there is also an experimental `gevent`_ pool
      available. This is missing some features, notably the ability to
      schedule ETA tasks.

      Hopefully the `gevent`_ support will be feature complete by
      version 2.3, but this depends on user demand (and contributions).

  .. _`Eventlet`: http://eventlet.net
  .. _`gevent`: http://gevent.org

* Python 2.4 support deprecated!

  We're happy^H^H^H^H^Hsad to announce that this is the last version
  to support Python 2.4.

  You are urged to make some noise if you're currently stuck with
  Python 2.4. Complain to your package maintainers, sysadmins and bosses:
  tell them it's time to move on!

  Apart from wanting to take advantage of with statements, coroutines,
  conditional expressions and enhanced try blocks, the code base
  now contains so many 2.4-related hacks and workarounds that it's no longer
  just a compromise, but a sacrifice.

  If it really isn't your choice, and you don't have the option to upgrade
  to a newer version of Python, you can just continue to use Celery 2.2.
  Important fixes can be backported for as long as there is interest.

* worker: Now supports autoscaling of child worker processes.

  The :option:`--autoscale` option can be used to configure the minimum
  and maximum number of child worker processes::

      --autoscale=AUTOSCALE
          Enable autoscaling by providing max_concurrency,min_concurrency.
          Example: --autoscale=10,3 (always keep 3 processes, but grow to
          10 if necessary).

* Remote Debugging of Tasks

  ``celery.contrib.rdb`` is an extended version of :mod:`pdb` that
  enables remote debugging of processes that don't have terminal
  access.

  Example usage:

  .. code-block:: python

      from celery.contrib import rdb
      from celery.task import task

      @task()
      def add(x, y):
          result = x + y
          rdb.set_trace()  # <- set breakpoint
          return result

  :func:`~celery.contrib.rdb.set_trace` sets a breakpoint at the current
  location and creates a socket you can telnet into to remotely debug
  your task.

  The debugger may be started by multiple processes at the same time,
  so rather than using a fixed port the debugger will search for an
  available port, starting from the base port (6900 by default).
  The base port can be changed using the environment variable
  :envvar:`CELERY_RDB_PORT`.

  By default the debugger will only be available from the local host;
  to enable access from the outside you have to set the environment
  variable :envvar:`CELERY_RDB_HOST`.

  When the worker encounters your breakpoint it will log the following
  information::

      [INFO/MainProcess] Received task:
          tasks.add[d7261c71-4962-47e5-b342-2448bedd20e8]
      [WARNING/PoolWorker-1] Remote Debugger:6900:
          Please telnet 127.0.0.1 6900. Type `exit` in session to continue.
      [2011-01-18 14:25:44,119: WARNING/PoolWorker-1] Remote Debugger:6900:
          Waiting for client...

  If you telnet the port specified you will be presented
  with a ``pdb`` shell:

  .. code-block:: bash

      $ telnet localhost 6900
      Connected to localhost.
      Escape character is '^]'.
      > /opt/devel/demoapp/tasks.py(128)add()
      -> return result
      (Pdb)

  Enter ``help`` to get a list of available commands.
  It may be a good idea to read the `Python Debugger Manual`_ if
  you have never used `pdb` before.

  .. _`Python Debugger Manual`: http://docs.python.org/library/pdb.html

* Events are now transient and use a topic exchange (instead of direct).

  The `CELERYD_EVENT_EXCHANGE`, `CELERYD_EVENT_ROUTING_KEY` and
  `CELERYD_EVENT_EXCHANGE_TYPE` settings are no longer in use.

  This means events will not be stored until there is a consumer, and the
  events will be gone as soon as the consumer stops. Also it means there
  can be multiple monitors running at the same time.

  The routing key of an event is the type of event (e.g. `worker.started`,
  `worker.heartbeat`, `task.succeeded`, etc.). This means a consumer can
  filter on specific types, to only be alerted of the events it cares about.

  Each consumer will create a unique queue, meaning it is in effect a
  broadcast exchange.

  This opens up a lot of possibilities. For example the workers could listen
  for worker events to know what workers are in the neighborhood, and even
  restart workers when they go down (or use this information to optimize
  tasks/autoscaling).

  .. note::

      The event exchange has been renamed from "celeryevent" to "celeryev"
      so it does not collide with older versions.

      If you would like to remove the old exchange you can do so
      by executing the following command:

      .. code-block:: bash

          $ camqadm exchange.delete celeryevent

* The worker now starts without configuration, and configuration can be
  specified directly on the command-line.

  Configuration options must appear after the last argument, separated
  by two dashes:

  .. code-block:: bash

      $ celery worker -l info -I tasks -- broker.host=localhost broker.vhost=/app

* Configuration is now an alias to the original configuration, so changes
  to the original will be reflected in Celery at runtime.

* `celery.conf` has been deprecated, and modifying `celery.conf.ALWAYS_EAGER`
  will no longer have any effect.

  The default configuration is now available in the
  :mod:`celery.app.defaults` module. The available configuration options
  and their types can now be introspected.
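
  A minimal sketch of such introspection (this assumes the module exposes
  the defaults as a ``NAMESPACES`` mapping, as later versions do; treat the
  attribute name as an assumption):

  .. code-block:: python

      from celery.app import defaults

      # List every known setting, grouped by namespace.
      for namespace, options in defaults.NAMESPACES.items():
          for name in options:
              print("%s %s" % (namespace, name))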

* Remote control commands are now provided by `kombu.pidbox`, the generic
  process mailbox.

* Internal module `celery.worker.listener` has been renamed to
  `celery.worker.consumer`, and `.CarrotListener` is now `.Consumer`.

* Previously deprecated modules `celery.models` and
  `celery.management.commands` have now been removed as per the deprecation
  timeline.

* [Security: Low severity] Removed `celery.task.RemoteExecuteTask` and
  accompanying functions: `dmap`, `dmap_async`, and `execute_remote`.

  Executing arbitrary code using pickle is a potential security issue if
  someone gains unrestricted access to the message broker.

  If you really need this functionality, then you would have to add
  it to your own project.

* [Security: Low severity] The `stats` command no longer transmits the
  broker password.

  One would have needed an authenticated broker connection to receive
  this password in the first place, but sniffing the password at the
  wire level would have been possible if using unencrypted communication.

.. _v220-news:

News
----

* The internal module `celery.task.builtins` has been removed.

* The module `celery.task.schedules` is deprecated, and
  `celery.schedules` should be used instead.

  For example if you have::

      from celery.task.schedules import crontab

  You should replace that with::

      from celery.schedules import crontab

  The module needs to be renamed because it must be possible
  to import schedules without importing the `celery.task` module.

* The following functions have been deprecated and are scheduled for
  removal in version 2.3:

  * `celery.execute.apply_async`

    Use `task.apply_async()` instead.

  * `celery.execute.apply`

    Use `task.apply()` instead.

  * `celery.execute.delay_task`

    Use `registry.tasks[name].delay()` instead.

* Importing `TaskSet` from `celery.task.base` is now deprecated.

  You should use::

      >>> from celery.task import TaskSet

  instead.

* New remote control commands:

  * `active_queues`

    Returns the queue declarations a worker is currently consuming from.

* Added the ability to retry publishing the task message in
  the event of connection loss or failure.

  This is disabled by default but can be enabled using the
  :setting:`CELERY_TASK_PUBLISH_RETRY` setting, and tweaked by
  the :setting:`CELERY_TASK_PUBLISH_RETRY_POLICY` setting.

  In addition, `retry` and `retry_policy` keyword arguments have
  been added to `Task.apply_async`.

  .. note::

      Using the `retry` argument to `apply_async` requires you to
      handle the publisher/connection manually.
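
  A configuration sketch of what enabling this could look like (the
  retry-policy keys shown mirror the ones used by Kombu's retry machinery
  and should be treated as an assumption):

  .. code-block:: python

      # celeryconfig.py
      CELERY_TASK_PUBLISH_RETRY = True
      CELERY_TASK_PUBLISH_RETRY_POLICY = {
          "max_retries": 3,        # give up after 3 attempts
          "interval_start": 0,     # first retry is immediate
          "interval_step": 0.2,    # add 0.2s of delay per subsequent retry
          "interval_max": 0.2,     # never wait longer than 0.2s
      }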

* Periodic Task classes (`@periodic_task`/`PeriodicTask`) will *not* be
  deprecated as previously indicated in the source code.

  But you are encouraged to use the more flexible
  :setting:`CELERYBEAT_SCHEDULE` setting.
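
  For example, the equivalent of a periodic task running every 30 seconds
  can be expressed as a schedule entry (``tasks.add`` is a placeholder
  task name):

  .. code-block:: python

      from datetime import timedelta

      CELERYBEAT_SCHEDULE = {
          "add-every-30-seconds": {
              "task": "tasks.add",
              "schedule": timedelta(seconds=30),
              "args": (16, 16),
          },
      }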

* Built-in daemonization support of the worker using `celery multi`
  is no longer experimental and is considered production quality.

  See :ref:`daemon-generic` if you want to use the new generic init
  scripts.

* Added support for message compression using the
  :setting:`CELERY_MESSAGE_COMPRESSION` setting, or the `compression` argument
  to `apply_async`. This can also be set using routers.
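
  For example (a sketch, assuming an existing ``add`` task):

  .. code-block:: python

      # Compress every task message by default...
      CELERY_MESSAGE_COMPRESSION = "zlib"

      # ...or only for a single call:
      add.apply_async((2, 2), compression="zlib")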

* worker: Now logs the stacktrace of all threads when receiving the
  `SIGUSR1` signal (doesn't work on CPython 2.4, Windows or Jython).

  Inspired by https://gist.github.com/737056

* Can now remotely terminate/kill the worker process currently processing
  a task.

  The `revoke` remote control command now supports a `terminate` argument.
  The default signal is `TERM`, but it can be specified using the `signal`
  argument. Signal can be the uppercase name of any signal defined
  in the :mod:`signal` module in the Python Standard Library.

  Terminating a task also revokes it.

  Example::

      >>> from celery.task.control import revoke

      >>> revoke(task_id, terminate=True)
      >>> revoke(task_id, terminate=True, signal="KILL")
      >>> revoke(task_id, terminate=True, signal="SIGKILL")

* `TaskSetResult.join_native`: Backend-optimized version of `join()`.

  If available, this version uses the backend's ability to retrieve
  multiple results at once, unlike `join()` which fetches the results
  one by one.

  So far only supported by the AMQP result backend. Support for Memcached
  and Redis may be added later.

* Improved implementations of `TaskSetResult.join` and `AsyncResult.wait`.

  An `interval` keyword argument has been added to both so the
  polling interval can be specified (the default interval is 0.5 seconds).

  A `propagate` keyword argument has been added to `result.wait()`;
  errors will be returned instead of raised if this is set to False.

  .. warning::

      You should decrease the polling interval when using the database
      result backend, as frequent polling can result in high database load.
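
  For example (a sketch, assuming an existing ``add`` task):

  .. code-block:: python

      result = add.delay(2, 2)

      # Poll every 2 seconds, and return the exception instead of
      # raising it if the task failed.
      value = result.wait(interval=2.0, propagate=False)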

* The PID of the child worker process accepting a task is now sent as a field
  with the :event:`task-started` event.

* The following fields have been added to all events in the worker class:

  * `sw_ident`: Name of worker software (e.g. py-celery).
  * `sw_ver`: Software version (e.g. 2.2.0).
  * `sw_sys`: Operating System (e.g. Linux, Windows, Darwin).

* For better accuracy the start time reported by the multiprocessing worker
  process is used when calculating task duration.

  Previously the time reported by the accept callback was used.

* `celerybeat`: New built-in daemonization support using the `--detach`
  option.

* `celeryev`: New built-in daemonization support using the `--detach`
  option.

* `TaskSet.apply_async`: Now supports custom publishers by using the
  `publisher` argument.

* Added the :setting:`CELERY_SEND_TASK_SENT_EVENT` setting.

  If enabled, an event will be sent with every task, so monitors can
  track tasks before the workers receive them.

* `celerybeat`: Now reuses the broker connection when calling
  scheduled tasks.

* The configuration module and loader to use can now be specified on
  the command-line.

  For example:

  .. code-block:: bash

      $ celery worker --config=celeryconfig.py --loader=myloader.Loader

* Added signals: `beat_init` and `beat_embedded_init`.

  * :signal:`celery.signals.beat_init`

    Dispatched when :program:`celerybeat` starts (either standalone or
    embedded). Sender is the :class:`celery.beat.Service` instance.

  * :signal:`celery.signals.beat_embedded_init`

    Dispatched in addition to the :signal:`beat_init` signal when
    :program:`celerybeat` is started as an embedded process. Sender
    is the :class:`celery.beat.Service` instance.
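
  A minimal sketch of connecting a handler to one of these signals:

  .. code-block:: python

      from celery.signals import beat_init

      def on_beat_init(sender=None, **kwargs):
          # ``sender`` is the celery.beat.Service instance.
          print("celerybeat started: %r" % (sender, ))

      beat_init.connect(on_beat_init)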

* Redis result backend: Removed deprecated settings `REDIS_TIMEOUT` and
  `REDIS_CONNECT_RETRY`.

* CentOS init script for :program:`celery worker` is now available in
  `extra/centos`.

* Now depends on `pyparsing` version 1.5.0 or higher.

  There have been reported issues using Celery with pyparsing 1.4.x,
  so please upgrade to the latest version.

* Lots of new unit tests written, now with a total coverage of 95%.

.. _v220-fixes:

Fixes
-----

* `celeryev` Curses Monitor: Improved resize handling and UI layout
  (Issue #274 + Issue #276).

* AMQP Backend: Exceptions occurring while sending task results are now
  propagated instead of silenced.

  The worker will then show the full traceback of these errors in the log.

* AMQP Backend: No longer deletes the result queue after a successful
  poll, as this should be handled by the
  :setting:`CELERY_AMQP_TASK_RESULT_EXPIRES` setting instead.

* AMQP Backend: Now ensures queues are declared before polling results.

* Windows: worker: Show error if running with the `-B` option.

  Running celerybeat embedded is known not to work on Windows, so
  users are encouraged to run celerybeat as a separate service instead.

* Windows: Utilities no longer output ANSI color codes on Windows.

* camqadm: Now properly handles Ctrl+C by simply exiting instead of showing a
  confusing traceback.

* Windows: All tests are now passing on Windows.

* Removed the bin/ directory, and the `scripts` section from setup.py.

  This means we now rely completely on setuptools entrypoints.

.. _v220-experimental:

Experimental
------------

* Jython: worker now runs on Jython using the threaded pool.

  All tests pass, but there may still be bugs lurking around the corners.

* PyPy: worker now runs on PyPy.

  It runs without any pool, so to get parallel execution you must start
  multiple instances (e.g. using :program:`multi`).

  Sadly an initial benchmark seems to show a 30% performance decrease on
  pypy-1.4.1 + JIT. We would like to find out why this is, so stay tuned.

* :class:`PublisherPool`: Experimental pool of task publishers and
  connections to be used with the `retry` argument to `apply_async`.

  The example code below will re-use connections and channels, and
  retry sending of the task message if the connection is lost.

  .. code-block:: python

      from celery import current_app

      # Global pool
      pool = current_app().amqp.PublisherPool(limit=10)

      def my_view(request):
          with pool.acquire() as publisher:
              add.apply_async((2, 2), publisher=publisher, retry=True)