.. _changelog-2.3:

===============================
Change history for Celery 2.3
===============================

.. contents::
    :local:

.. _version-2.3.4:

2.3.4
=====
:release-date: 2011-11-25 04:00 P.M GMT
:release-by: Ask Solem

.. _v234-security-fixes:

Security Fixes
--------------
* [Security: `CELERYSA-0001`_] Daemons would set effective id's rather than
  real id's when the :option:`--uid`/:option:`--gid` arguments to
  :program:`celery multi`, :program:`celeryd_detach`,
  :program:`celery beat` and :program:`celery events` were used.

  This means privileges weren't properly dropped, and that it would
  be possible to regain supervisor privileges later.

.. _`CELERYSA-0001`:
    http://github.com/celery/celery/tree/master/docs/sec/CELERYSA-0001.txt

Fixes
-----

* Backported fix for #455 from 2.4 to 2.3.

* Statedb was not saved at shutdown.

* Fixes worker sometimes hanging when hard time limit exceeded.

.. _version-2.3.3:

2.3.3
=====
:release-date: 2011-09-16 05:00 P.M BST
:release-by: Mher Movsisyan

* Monkey patching :attr:`sys.stdout` could result in the worker
  crashing if the replacing object did not define :meth:`isatty`
  (Issue #477).

* ``CELERYD`` option in :file:`/etc/default/celeryd` should not
  be used with generic init scripts.

.. _version-2.3.2:

2.3.2
=====
:release-date: 2011-10-07 05:00 P.M BST
:release-by: Ask Solem

.. _v232-news:

News
----

* Improved Contributing guide.

  If you'd like to contribute to Celery you should read the
  :ref:`Contributing Guide <contributing>`.

  We are looking for contributors at all skill levels, so don't
  hesitate!

* Now depends on Kombu 1.3.1

* ``Task.request`` now contains the current worker host name (Issue #460).

  Available as ``task.request.hostname``.
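
  A minimal sketch of reading it from inside a task body (the ``where_am_i``
  task name is made up for illustration):

  .. code-block:: python

      from celery.task import task

      @task
      def where_am_i():
          # The task's request context now carries the host name of the
          # worker executing it.
          return where_am_i.request.hostname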

* It is now easier for app subclasses to extend how they are pickled.
  (see :class:`celery.app.AppPickler`).

.. _v232-fixes:

Fixes
-----

* `purge/discard_all` was not working correctly (Issue #455).

* The coloring of log messages didn't handle non-ASCII data well
  (Issue #427).

* [Windows] the multiprocessing pool tried to import ``os.kill``
  even though this is not available there (Issue #450).

* Fixes case where the worker could become unresponsive because of tasks
  exceeding the hard time limit.

* The :event:`task-sent` event was missing from the event reference.

* ``ResultSet.iterate`` now returns results as they finish (Issue #459).

  This was not the case previously, even though the documentation
  states this was the expected behavior.

* Retries will no longer be performed when tasks are called directly
  (using ``__call__``).

  Instead the exception passed to ``retry`` will be re-raised.

* Eventlet no longer crashes if autoscale is enabled.

  Growing and shrinking eventlet pools is still not supported.

* py24 target removed from :file:`tox.ini`.

.. _version-2.3.1:

2.3.1
=====
:release-date: 2011-08-07 08:00 P.M BST
:release-by: Ask Solem

Fixes
-----

* The :setting:`CELERY_AMQP_TASK_RESULT_EXPIRES` setting did not work,
  resulting in an AMQP related error about not being able to serialize
  floats while trying to publish task states (Issue #446).

.. _version-2.3.0:

2.3.0
=====
:release-date: 2011-08-05 12:00 P.M BST
:tested: cPython: 2.5, 2.6, 2.7; PyPy: 1.5; Jython: 2.5.2
:release-by: Ask Solem

.. _v230-important:

Important Notes
---------------

* Now requires Kombu 1.2.1

* Results are now disabled by default.

  The AMQP backend was not a good default because often the users were
  not consuming the results, resulting in thousands of queues.

  While the queues can be configured to expire if left unused, it was not
  possible to enable this by default because this was only available in
  recent RabbitMQ versions (2.1.1+).

  With this change enabling a result backend will be a conscious choice,
  which will hopefully lead the user to read the documentation and be aware
  of any common pitfalls with the particular backend.

  The default backend is now a dummy backend
  (:class:`celery.backends.base.DisabledBackend`). Saving state is simply a
  no-op operation, and ``AsyncResult.wait()``, ``.result``, ``.state``, etc. will raise
  a :exc:`NotImplementedError` telling the user to configure the result backend.

  For help choosing a backend please see :ref:`task-result-backends`.

  If you depend on the previous default which was the AMQP backend, then
  you have to set this explicitly before upgrading::

      CELERY_RESULT_BACKEND = "amqp"

  .. note::

      For django-celery users the default backend is still ``database``,
      and results are not disabled by default.

* The Debian init scripts have been deprecated in favor of the generic-init.d
  init scripts.

  In addition, generic init scripts for celerybeat and celeryev have been
  added.

.. _v230-news:

News
----

* Automatic connection pool support.

  The pool is used by everything that requires a broker connection, for
  example calling tasks, sending broadcast commands, retrieving results
  with the AMQP result backend, and so on.

  The pool is disabled by default, but you can enable it by configuring the
  :setting:`BROKER_POOL_LIMIT` setting::

      BROKER_POOL_LIMIT = 10

  A limit of 10 means a maximum of 10 simultaneous connections can co-exist.
  Only a single connection will ever be used in a single-thread
  environment, but in a concurrent environment (threads, greenlets, etc., but
  not processes) when the limit has been exceeded, any attempt to acquire a
  connection will block the thread and wait for a connection to be released.
  This is something to take into consideration when choosing a limit.

  A limit of :const:`None` or 0 means no limit, and connections will be
  established and closed every time.

* Introducing Chords (taskset callbacks).

  A chord is a task that only executes after all of the tasks in a taskset
  have finished executing. It's a fancy term for "taskset callbacks"
  adopted from
  `Cω <http://research.microsoft.com/en-us/um/cambridge/projects/comega/>`_.

  It works with all result backends, but the best implementation is
  currently provided by the Redis result backend.

  Here's an example chord::

      >>> chord(add.subtask((i, i))
      ...     for i in xrange(100))(tsum.subtask()).get()
      9900

  Please read the :ref:`Chords section in the user guide <canvas-chord>` if you
  want to know more.
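
  A minimal sketch of what the ``add`` and ``tsum`` tasks used above might
  look like (these definitions are assumed for illustration, not part of the
  release notes):

  .. code-block:: python

      from celery.task import task

      @task
      def add(x, y):
          # Executed once for every subtask in the chord header.
          return x + y

      @task
      def tsum(numbers):
          # The chord callback: receives the list of all header results.
          return sum(numbers)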

* Time limits can now be set for individual tasks.

  To set the soft and hard time limits for a task use the ``time_limit``
  and ``soft_time_limit`` attributes:

  .. code-block:: python

      import time

      @task(time_limit=60, soft_time_limit=30)
      def sleeptask(seconds):
          time.sleep(seconds)

  If the attributes are not set, then the worker's default time limits
  will be used.

  New in this version you can also change the time limits for a task
  at runtime using the :func:`time_limit` remote control command::

      >>> from celery.task import control
      >>> control.time_limit("tasks.sleeptask",
      ...                    soft=60, hard=120, reply=True)
      [{'worker1.example.com': {'ok': 'time limits set successfully'}}]

  Only tasks that start executing after the time limit change will be affected.

  .. note::

      Soft time limits will still not work on Windows or other platforms
      that do not have the ``SIGUSR1`` signal.

* Redis backend configuration directive names changed to include the
  ``CELERY_`` prefix.

  =====================================  ===================================
  **Old setting name**                   **Replace with**
  =====================================  ===================================
  `REDIS_HOST`                           `CELERY_REDIS_HOST`
  `REDIS_PORT`                           `CELERY_REDIS_PORT`
  `REDIS_DB`                             `CELERY_REDIS_DB`
  `REDIS_PASSWORD`                       `CELERY_REDIS_PASSWORD`
  =====================================  ===================================

  The old names are still supported but pending deprecation.
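
  A configuration sketch using the new names (the host, port, db and
  password values below are placeholders):

  .. code-block:: python

      CELERY_RESULT_BACKEND = "redis"
      CELERY_REDIS_HOST = "localhost"
      CELERY_REDIS_PORT = 6379
      CELERY_REDIS_DB = 0
      CELERY_REDIS_PASSWORD = None  # set this if your Redis server requires auth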

* PyPy: The default pool implementation used is now multiprocessing
  if running on PyPy 1.5.

* multi: now supports "pass through" options.

  Pass through options make it easier to use celery without a
  configuration file, or just add last-minute options on the command
  line.

  Example use:

  .. code-block:: bash

      $ celery multi start 4 -c 2 -- broker.host=amqp.example.com \
                                     broker.vhost=/ \
                                     celery.disable_rate_limits=yes

* celerybeat: Now retries establishing the connection (Issue #419).

* celeryctl: New ``list bindings`` command.

  Lists the current or all available bindings, depending on the
  broker transport used.

* Heartbeat is now sent every 30 seconds (previously every 2 minutes).

* ``ResultSet.join_native()`` and ``iter_native()`` are now supported by
  the Redis and Cache result backends.

  This is an optimized version of ``join()`` using the underlying
  backends' ability to fetch multiple results at once.
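
  A rough sketch of how it's used (the ``add`` task and the way the result
  set is built here are assumptions for illustration):

  .. code-block:: python

      from celery.task.sets import TaskSet

      taskset = TaskSet(tasks=[add.subtask((i, i)) for i in range(10)])
      result = taskset.apply_async()

      # join() fetches results one at a time; join_native() lets the Redis
      # or Cache backend fetch several results per round-trip instead.
      values = result.join_native()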

* Can now use SSL when sending error e-mails by enabling the
  :setting:`EMAIL_USE_SSL` setting.
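
  A configuration sketch, assuming the usual error e-mail settings are
  already in place (the addresses and host below are placeholders):

  .. code-block:: python

      CELERY_SEND_TASK_ERROR_EMAILS = True
      ADMINS = [("Ops", "ops@example.com")]
      SERVER_EMAIL = "celery@example.com"
      EMAIL_HOST = "smtp.example.com"
      EMAIL_PORT = 465
      EMAIL_USE_SSL = True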

* ``events.default_dispatcher()``: Context manager to easily obtain
  an event dispatcher instance using the connection pool.
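
  A hedged sketch of how it might be used (the ``app_or_default`` lookup and
  the ``"task-progress"`` event type are assumptions for illustration):

  .. code-block:: python

      from celery.app import app_or_default

      app = app_or_default()

      # The dispatcher borrows a broker connection from the pool and gives
      # it back when the block exits.
      with app.events.default_dispatcher() as dispatcher:
          dispatcher.send("task-progress", done=50, total=100)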

* Import errors in the configuration module will not be silenced anymore.

* ResultSet.iterate: Now supports the ``timeout``, ``propagate`` and
  ``interval`` arguments.
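
  A small usage sketch (``result`` stands for a previously obtained
  ``TaskSetResult``; the argument values are only examples):

  .. code-block:: python

      # timeout bounds how long to wait, interval sets the polling
      # frequency, and propagate=False keeps task errors from being
      # re-raised while iterating.
      for value in result.iterate(timeout=30, propagate=False, interval=0.5):
          print(value)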

* ``with_default_connection`` -> ``with default_connection``

* TaskPool.apply_async: Keyword arguments ``callbacks`` and ``errbacks``
  have been renamed to ``callback`` and ``errback`` and take a single scalar
  value instead of a list.

* No longer propagates errors occurring during process cleanup (Issue #365).

* Added ``TaskSetResult.delete()``, which will delete a previously
  saved taskset result.

* Celerybeat now syncs every 3 minutes instead of only at
  shutdown (Issue #382).

* Monitors now properly handle unknown events, so user-defined events
  are displayed.

* Terminating a task on Windows now also terminates all of the task's child
  processes (Issue #384).

* worker: ``-I|--include`` option now always searches the current directory
  to import the specified modules.

* Cassandra backend: Now expires results by using TTLs.

* Functional test suite in ``funtests`` is now actually working properly, and
  passing tests.

.. _v230-fixes:

Fixes
-----

* celeryev was trying to create the pidfile twice.

* celery.contrib.batches: Fixed problem where tasks failed
  silently (Issue #393).

* Fixed an issue where logging objects would give "<Unrepresentable",
  even though the objects were representable.

* ``CELERY_TASK_ERROR_WHITE_LIST`` is now properly initialized
  in all loaders.

* celeryd_detach now passes through command line configuration.

* Remote control command ``add_consumer`` now does nothing if the
  queue is already being consumed from.