
[docs] More whatsnew-4.0 changes

Ask Solem 9 years ago
parent
commit
0a23d31631
7 changed files with 297 additions and 113 deletions
  1. celery/app/base.py (+13 -0)
  2. docs/reference/celery.rst (+24 -2)
  3. docs/userguide/periodic-tasks.rst (+7 -10)
  4. docs/userguide/routing.rst (+46 -17)
  5. docs/userguide/tasks.rst (+31 -22)
  6. docs/userguide/workers.rst (+14 -12)
  7. docs/whatsnew-4.0.rst (+162 -50)

+ 13 - 0
celery/app/base.py

@@ -90,6 +90,12 @@ def _after_fork_cleanup_app(app):


 class PendingConfiguration(UserDict, AttributeDictMixin):
+    # `app.conf` will be of this type before being explicitly configured,
+    # which means the app can keep any configuration set directly
+    # on `app.conf` before the `app.config_from_object` call.
+    #
+    # Accessing any key will finalize the configuration,
+    # replacing `app.conf` with a concrete settings object.

     callback = None
     data = None
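A minimal sketch of the behavior the new comment describes (the app and setting values here are made up; ``config_from_object`` is the real API):

.. code-block:: python

    from celery import Celery

    app = Celery('proj')
    # app.conf is still a PendingConfiguration at this point, so values
    # assigned directly to it are remembered:
    app.conf.task_default_queue = 'early'
    app.config_from_object('proj.celeryconfig')

    # The first key access finalizes the configuration, and the
    # directly-set value is kept:
    assert app.conf.task_default_queue == 'early'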
@@ -1058,10 +1064,17 @@ class Celery(object):

     @property
     def current_worker_task(self):
+        """The task currently being executed by a worker or :const:`None`.
+
+        Differs from :data:`current_task` in that it's not affected
+        by tasks calling other tasks directly, or eagerly.
+
+        """
         return get_current_worker_task()

     @cached_property
     def oid(self):
+        """Universally unique identifier for this app."""
         return oid_from(self)

     @cached_property

+ 24 - 2
docs/reference/celery.rst

@@ -38,6 +38,8 @@ and creating Celery applications.

     .. autoattribute:: current_task

+    .. autoattribute:: current_worker_task
+
     .. autoattribute:: amqp

     .. autoattribute:: backend
@@ -52,6 +54,8 @@ and creating Celery applications.
     .. autoattribute:: producer_pool
     .. autoattribute:: Task
     .. autoattribute:: timezone
+    .. autoattribute:: builtin_fixups
+    .. autoattribute:: oid

     .. automethod:: close

@@ -67,6 +71,8 @@ and creating Celery applications.

     .. automethod:: add_defaults

+    .. automethod:: add_periodic_task
+
     .. automethod:: setup_security

     .. automethod:: start
@@ -75,6 +81,8 @@ and creating Celery applications.

     .. automethod:: send_task

+    .. automethod:: gen_task_name
+
     .. autoattribute:: AsyncResult

     .. autoattribute:: GroupResult
@@ -87,6 +95,10 @@ and creating Celery applications.

     .. autoattribute:: Beat

+    .. automethod:: connection_for_read
+
+    .. automethod:: connection_for_write
+
     .. automethod:: connection

     .. automethod:: connection_or_acquire
@@ -101,8 +113,14 @@ and creating Celery applications.

     .. automethod:: set_current

+    .. automethod:: set_default
+
     .. automethod:: finalize

+    .. automethod:: on_init
+
+    .. automethod:: prepare_config
+
     .. data:: on_configure

         Signal sent when app is loading configuration.
@@ -115,6 +133,10 @@ and creating Celery applications.

         Signal sent after app has been finalized.

+    .. data:: on_after_fork
+
+        Signal sent in child process after fork.
+
 Canvas primitives
 -----------------

@@ -202,8 +224,8 @@ See :ref:`guide-canvas` for more about creating task workflows.
     arguments will be ignored and the values in the dict will be used
     instead.

-        >>> s = signature('tasks.add', args=(2, 2))
-        >>> signature(s)
+        >>> s = app.signature('tasks.add', args=(2, 2))
+        >>> app.signature(s)
         {'task': 'tasks.add', args=(2, 2), kwargs={}, options={}}

     .. method:: signature.__call__(*args \*\*kwargs)

+ 7 - 10
docs/userguide/periodic-tasks.rst

@@ -37,14 +37,12 @@ An example time zone could be `Europe/London`:

     timezone = 'Europe/London'

-
 This setting must be added to your app, either by configuring it directly
 using ``app.conf.timezone = 'Europe/London'``, or by adding
 it to your configuration module if you have set one up using
 ``app.config_from_object``.  See :ref:`celerytut-configuration` for
 more information about configuration options.

-
 The default scheduler (storing the schedule in the :file:`celerybeat-schedule`
 file) will automatically detect that the time zone has changed, and so will
 reset the schedule itself, but other schedulers may not be so smart (e.g. the
@@ -103,10 +101,10 @@ beat schedule list.
         print(arg)


-Setting these up from within the ``on_after_configure`` handler means
+Setting these up from within the :data:`~@on_after_configure` handler means
 that we will not evaluate the app at module level when using ``test.s()``.

-The `@add_periodic_task` function will add the entry to the
+The :meth:`~@add_periodic_task` function will add the entry to the
 :setting:`beat_schedule` setting behind the scenes, which also
 can be used to set up periodic tasks manually:
 
 
@@ -114,15 +112,14 @@ Example: Run the `tasks.add` task every 30 seconds.

 .. code-block:: python

-    beat_schedule = {
+    app.conf.beat_schedule = {
         'add-every-30-seconds': {
             'task': 'tasks.add',
             'schedule': 30.0,
             'args': (16, 16)
         },
     }
-
-    timezone = 'UTC'
+    app.conf.timezone = 'UTC'


 .. note::
@@ -131,7 +128,7 @@ Example: Run the `tasks.add` task every 30 seconds.
     please see :ref:`celerytut-configuration`.  You can either
     set these options on your app directly or you can keep
     a separate module for configuration.
-
+
     If you want to use a single item tuple for `args`, don't forget
     that the constructor is a comma and not a pair of parentheses.

@@ -203,7 +200,7 @@ the :class:`~celery.schedules.crontab` schedule type:

     from celery.schedules import crontab

-    beat_schedule = {
+    app.conf.beat_schedule = {
         # Executes every Monday morning at 7:30 a.m.
         'add-every-monday-morning': {
             'task': 'tasks.add',
@@ -285,7 +282,7 @@ sunset, dawn or dusk, you can use the

     from celery.schedules import solar

-    beat_schedule = {
+    app.conf.beat_schedule = {
         # Executes at sunset in Melbourne
         'add-at-melbourne-sunset': {
             'task': 'tasks.add',

+ 46 - 17
docs/userguide/routing.rst

@@ -87,8 +87,8 @@ configuration:

     from kombu import Exchange, Queue

-    task_default_queue = 'default'
-    task_queues = (
+    app.conf.task_default_queue = 'default'
+    app.conf.task_queues = (
         Queue('default', Exchange('default'), routing_key='default'),
     )

@@ -126,8 +126,8 @@ configuration:

     from kombu import Queue

-    task_default_queue = 'default'
-    task_queues = (
+    app.conf.task_default_queue = 'default'
+    app.conf.task_queues = (
         Queue('default',    routing_key='task.#'),
         Queue('feed_tasks', routing_key='feed.#'),
     )
@@ -191,7 +191,7 @@ just specify a custom exchange and exchange type:

     from kombu import Exchange, Queue

-    task_queues = (
+    app.conf.task_queues = (
         Queue('feed_tasks',    routing_key='feed.#'),
         Queue('regular_tasks', routing_key='task.#'),
         Queue('image_tasks',   exchange=Exchange('mediatasks', type='direct'),
@@ -213,6 +213,34 @@ If you're confused about these terms, you should read up on AMQP.
 .. _`Standard Exchange Types`: http://bit.ly/EEWca
 .. _`RabbitMQ FAQ`: http://www.rabbitmq.com/faq.html

+.. _routing-special_options:
+
+Special Routing Options
+=======================
+
+.. _routing-option-rabbitmq-priorities:
+
+RabbitMQ Message Priorities
+---------------------------
+:supported transports: rabbitmq
+
+.. versionadded:: 4.0
+
+Queues can be configured to support priorities by setting the
+``x-max-priority`` argument:
+
+.. code-block:: python
+
+    from kombu import Exchange, Queue
+
+    app.conf.task_queues = [
+        Queue('tasks', Exchange('tasks'), routing_key='tasks',
+              queue_arguments={'x-max-priority': 10}),
+    ]
+
+A default value for all queues can be set using the
+:setting:`task_queue_max_priority` setting.
+
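As a usage sketch on the producer side (the task and queue names are made up; ``priority`` is the standard ``apply_async`` option, with higher values delivered first on a priority queue):

.. code-block:: python

    # Publish to the priority-enabled 'tasks' queue with priority 8.
    add.apply_async((2, 2), queue='tasks', priority=8)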
 .. _amqp-primer:

 AMQP Primer
@@ -280,14 +308,14 @@ One for video, one for images and one default queue for everything else:

     from kombu import Exchange, Queue

-    task_queues = (
+    app.conf.task_queues = (
         Queue('default', Exchange('default'), routing_key='default'),
         Queue('videos',  Exchange('media'),   routing_key='media.video'),
         Queue('images',  Exchange('media'),   routing_key='media.image'),
     )
-    task_default_queue = 'default'
-    task_default_exchange_type = 'direct'
-    task_default_routing_key = 'default'
+    app.conf.task_default_queue = 'default'
+    app.conf.task_default_exchange_type = 'direct'
+    app.conf.task_default_routing_key = 'default'

 .. _amqp-exchange-types:

@@ -501,14 +529,14 @@ One for video, one for images and one default queue for everything else:
     default_exchange = Exchange('default', type='direct')
     media_exchange = Exchange('media', type='direct')

-    task_queues = (
+    app.conf.task_queues = (
         Queue('default', default_exchange, routing_key='default'),
         Queue('videos', media_exchange, routing_key='media.video'),
         Queue('images', media_exchange, routing_key='media.image')
     )
-    task_default_queue = 'default'
-    task_default_exchange = 'default'
-    task_default_routing_key = 'default'
+    app.conf.task_default_queue = 'default'
+    app.conf.task_default_exchange = 'default'
+    app.conf.task_default_routing_key = 'default'

 Here, the :setting:`task_default_queue` will be used to route tasks that
 don't have an explicit route.
@@ -613,8 +641,8 @@ copies of tasks to all workers connected to it:

     from kombu.common import Broadcast

-    task_queues = (Broadcast('broadcast_tasks'),)
-    task_routes = {'tasks.reload_cache': {'queue': 'broadcast_tasks'}}
+    app.conf.task_queues = (Broadcast('broadcast_tasks'),)
+    app.conf.task_routes = {'tasks.reload_cache': {'queue': 'broadcast_tasks'}}

 Now the ``tasks.reload_cache`` task will be sent to every
 worker consuming from this queue.
@@ -627,9 +655,10 @@ a celerybeat schedule:
     from kombu.common import Broadcast
     from celery.schedules import crontab

-    task_queues = (Broadcast('broadcast_tasks'),)
+    app.conf.task_queues = (Broadcast('broadcast_tasks'),)

-    beat_schedule = {'test-task': {
+    app.conf.beat_schedule = {
+        'test-task': {
             'task': 'tasks.reload_cache',
             'schedule': crontab(minute=0, hour='*/3'),
             'options': {'exchange': 'broadcast_tasks'}

+ 31 - 22
docs/userguide/tasks.rst

@@ -532,27 +532,21 @@ override this default.
         try:
         except Exception as exc:
-            raise self.retry(exc=exc, countdown=60)  # override the default and
-                                                     # retry in 1 minute
+            # overrides the default delay to retry after 1 minute
+            raise self.retry(exc=exc, countdown=60)

-Autoretrying
-------------
+.. _task-autoretry:

-.. versionadded:: 4.0
-
-Sometimes you may want to retry a task on particular exception. To do so,
-you should wrap a task body with :keyword:`try` ... :keyword:`except`
-statement, for example:
+Automatic retry for known exceptions
+------------------------------------

-.. code-block:: python
+.. versionadded:: 4.0

-    @app.task
-    def div(a, b):
-        try:
-            return a / b
-        except ZeroDivisionError as exc:
-            raise div.retry(exc=exc)
+Sometimes you just want to retry a task whenever a particular exception
+is raised.

+As this is such a common pattern we have built-in support for it
+with the ``autoretry_for`` argument, described below.
 This may not be acceptable all the time, since you may have a lot of such
 tasks.

@@ -561,19 +555,34 @@ Fortunately, you can tell Celery to automatically retry a task using

 .. code-block:: python

-    @app.task(autoretry_for(ZeroDivisionError,))
-    def div(a, b):
-        return a / b
+    from twitter.exceptions import FailWhaleError
+
+    @app.task(autoretry_for=(FailWhaleError,))
+    def refresh_timeline(user):
+        return twitter.refresh_timeline(user)

 If you want to specify custom arguments for the internal `~@Task.retry`
 call, pass the `retry_kwargs` argument to the `~@Celery.task` decorator:

 .. code-block:: python

-    @app.task(autoretry_for=(ZeroDivisionError,),
+    @app.task(autoretry_for=(FailWhaleError,),
               retry_kwargs={'max_retries': 5})
-    def div(a, b):
-        return a / b
+    def refresh_timeline(user):
+        return twitter.refresh_timeline(user)
+
+This is provided as an alternative to manually handling the exceptions,
+and the example above will do the same as wrapping the task body
+in a :keyword:`try` ... :keyword:`except` statement, i.e.:
+
+.. code-block:: python
+
+    @app.task
+    def refresh_timeline(user):
+        try:
+            twitter.refresh_timeline(user)
+        except FailWhaleError as exc:
+            raise refresh_timeline.retry(exc=exc, max_retries=5)

 .. _task-options:


+ 14 - 12
docs/userguide/workers.rst

@@ -229,8 +229,8 @@ Remote control
     commands from the command-line.  It supports all of the commands
     listed below.  See :ref:`monitoring-control` for more information.

-pool support: *prefork, eventlet, gevent*, blocking:*threads/solo* (see note)
-broker support: *amqp, redis*
+:pool support: *prefork, eventlet, gevent*, blocking:*threads/solo* (see note)
+:broker support: *amqp, redis*

 Workers have the ability to be remote controlled using a high-priority
 broadcast message queue.  The commands can be directed to all, or a specific
@@ -419,7 +419,7 @@ Time Limits

 .. versionadded:: 2.0

-pool support: *prefork/gevent*
+:pool support: *prefork/gevent*

 .. sidebar:: Soft, or hard?

@@ -464,7 +464,7 @@ Changing time limits at runtime
 -------------------------------
 .. versionadded:: 2.3

-broker support: *amqp, redis*
+:broker support: *amqp, redis*

 There is a remote control command that enables you to change both soft
 and hard time limits for a task — named ``time_limit``.
@@ -519,7 +519,7 @@ Max tasks per child setting

 .. versionadded:: 2.0

-pool support: *prefork*
+:pool support: *prefork*

 With this option you can configure the maximum number of tasks
 a worker can execute before it's replaced by a new process.
@@ -527,15 +527,17 @@ a worker can execute before it's replaced by a new process.
 This is useful if you have memory leaks you have no control over,
 for example from closed source C extensions.

-The option can be set using the workers `--maxtasksperchild` argument
+The option can be set using the worker's :option:`--maxtasksperchild` argument
 or using the :setting:`worker_max_tasks_per_child` setting.

+.. _worker-maxmemperchild:
+
 Max memory per child setting
 ============================

-.. versionadded:: TODO
+.. versionadded:: 4.0

-pool support: *prefork*
+:pool support: *prefork*

 With this option you can configure the maximum amount of resident
 memory a worker can consume before it's replaced by a new process.
@@ -543,8 +545,8 @@ memory a worker can execute before it's replaced by a new process.
 This is useful if you have memory leaks you have no control over,
 for example from closed source C extensions.

-The option can be set using the workers `--maxmemperchild` argument
-or using the :setting:`CELERYD_MAX_MEMORY_PER_CHILD` setting.
+The option can be set using the worker's :option:`--maxmemperchild` argument
+or using the :setting:`worker_max_memory_per_child` setting.

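For example, a minimal configuration sketch (the value is made up; per the 4.0 release notes the setting is specified in kilobytes):

.. code-block:: python

    # Replace each prefork child once it exceeds ~12 MB resident memory;
    # the replacement happens after the currently executing task returns.
    app.conf.worker_max_memory_per_child = 12000  # 12,000 KB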
 .. _worker-autoscaling:

@@ -553,7 +555,7 @@ Autoscaling

 .. versionadded:: 2.2

-pool support: *prefork*, *gevent*
+:pool support: *prefork*, *gevent*

 The *autoscaler* component is used to dynamically resize the pool
 based on load:
@@ -728,7 +730,7 @@ Autoreloading

 .. versionadded:: 2.5

-pool support: *prefork, eventlet, gevent, threads, solo*
+:pool support: *prefork, eventlet, gevent, threads, solo*

 Starting :program:`celery worker` with the :option:`--autoreload` option will
 enable the worker to watch for file system changes to all imported task

+ 162 - 50
docs/whatsnew-4.0.rst

@@ -45,6 +45,44 @@ Preface
 =======


+Wall of Contributors
+--------------------
+
+Aaron McMillin, Adam Renberg, Adrien Guinet, Ahmet Demir, Aitor Gómez-Goiri,
+Albert Wang, Alex Koshelev, Alex Rattray, Alex Williams, Alexander Koshelev,
+Alexander Lebedev, Alexander Oblovatniy, Alexey Kotlyarov, Ali Bozorgkhan,
+Alice Zoë Bevan–McGregor, Allard Hoeve, Alman One, Andrea Rabbaglietti,
+Andrea Rosa, Andrei Fokau, Andrew Rodionoff, Andriy Yurchuk,
+Aneil Mallavarapu, Areski Belaid, Artyom Koval, Ask Solem, Balthazar Rouberol,
+Berker Peksag, Bert Vanderbauwhede, Brian Bouterse, Chris Duryee, Chris Erway,
+Chris Harris, Chris Martin, Corey Farwell, Craig Jellick, Cullen Rhodes,
+Dallas Marlow, Daniel Wallace, Danilo Bargen, Davanum Srinivas, Dave Smith,
+David Baumgold, David Harrigan, David Pravec, Dennis Brakhane, Derek Anderson,
+Dmitry Malinovsky, Dudás Ádám, Dustin J. Mitchell, Ed Morley, Fatih Sucu,
+Feanil Patel, Felix Schwarz, Fernando Rocha, Flavio Grossi, Frantisek Holop,
+Gao Jiangmiao, Gerald Manipon, Gilles Dartiguelongue, Gino Ledesma,
+Hank John, Hogni Gylfason, Ilya Georgievsky, Ionel Cristian Mărieș,
+James Pulec, Jared Lewis, Jason Veatch, Jasper Bryant-Greene, Jeremy Tillman,
+Jocelyn Delalande, Joe Jevnik, John Anderson, John Kirkham, John Whitlock,
+Joshua Harlow, Juan Rossi, Justin Patrin, Kai Groner, Kevin Harvey,
+Konstantinos Koukopoulos, Kouhei Maeda, Kracekumar Ramaraju,
+Krzysztof Bujniewicz, Latitia M. Haskins, Len Buckens, Lorenzo Mancini,
+Lucas Wiman, Luke Pomfrey, Marcio Ribeiro, Marin Atanasov Nikolov,
+Mark Parncutt, Maxime Vdb, Mher Movsisyan, Michael (michael-k),
+Michael Duane Mooring, Michael Permana, Mickaël Penhard, Mike Attwood,
+Morton Fox, Môshe van der Sterre, Nat Williams, Nathan Van Gheem, Nik Nyby,
+Omer Katz, Omer Korner, Ori Hoch, Paul Pearce, Paulo Bu, Philip Garnero,
+Piotr Maślanka, Radek Czajka, Raghuram Srinivasan, Randy Barlow,
+Rodolfo Carvalho, Roger Hu, Rongze Zhu, Ross Deane, Ryan Luckie,
+Rémy Greinhofer, Samuel Jaillet, Sergey Azovskov, Sergey Tikhonov,
+Seungha Kim, Steve Peak, Sukrit Khera, Tadej Janež, Tewfik Sadaoui,
+Thomas French, Thomas Grainger, Tobias Schottdorf, Tocho Tochev,
+Valentyn Klindukh, Vic Kumar, Vladimir Bolshakov, Vladimir Gorbunov,
+Wayne Chang, Wil Langford, Will Thompson, William King, Yury Selivanov,
+Zoran Pavlovic, 許邱翔, @allenling, @bee-keeper, @ffeast, @flyingfoxlee,
+@gdw2, @gitaarik, @hankjin, @m-vdb, @mdk, @nokrik, @ocean1, @orlo666,
+@raducc, @wanglei, @worldexception.
+
 .. _v400-important:

 Important Notes
@@ -278,6 +316,31 @@ and the Django handler will automatically find your installed apps:
 The Django integration :ref:`example in the documentation
 <django-first-steps>` has been updated to use the argument-less call.

+Worker direct queues no longer use auto-delete.
+===============================================
+
+Workers/clients running 4.0 will no longer be able to send
+worker direct messages to workers running older versions, and vice versa.
+
+If you're relying on worker direct messages you should upgrade
+your 3.x workers and clients to use the new routing settings first,
+by replacing :func:`celery.utils.worker_direct` with this implementation:
+
+.. code-block:: python
+
+    from kombu import Exchange, Queue
+
+    worker_direct_exchange = Exchange('C.dq2')
+
+    def worker_direct(hostname):
+        return Queue(
+            '{hostname}.dq2'.format(hostname=hostname),
+            exchange=worker_direct_exchange,
+            routing_key=hostname,
+        )
+
+(This feature closed Issue #2492.)
+

 Old command-line programs removed
 ---------------------------------
@@ -441,8 +504,8 @@ log file can cause corruption.
 You are encouraged to upgrade your init scripts and multi arguments
 to use this new option.

-Ability to configure separate broker urls for read/write
-========================================================
+Configure broker URL for read/write separately.
+===============================================

 New :setting:`broker_read_url` and :setting:`broker_write_url` settings
 have been added so that separate broker URLs can be provided
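A minimal sketch of using the two settings together (the URLs are placeholders):

.. code-block:: python

    # Consume (read) from one broker, and publish (write) to another.
    app.conf.broker_read_url = 'amqp://user:pass@broker-reader.example.com//'
    app.conf.broker_write_url = 'amqp://user:pass@broker-writer.example.com//'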
@@ -476,6 +539,9 @@ the intent of the required connection.
 Canvas Refactor
 ===============

+The canvas/workflow implementation has been heavily refactored
+to fix some long-outstanding issues.
+
 # BLALBLABLA
 d79dcd8e82c5e41f39abd07ffed81ca58052bcd2
 1e9dd26592eb2b93f1cb16deb771cfc65ab79612
@@ -485,7 +551,7 @@ e442df61b2ff1fe855881c1e2ff9acc970090f54
 - Now unrolls groups within groups into a single group (Issue #1509).
 - chunks/map/starmap tasks now route based on the target task.
 - chords and chains can now be immutable.
-- Fixed bug where serialized signature were not converted back into
+- Fixed bug where serialized signatures were not converted back into
   signatures (Issue #2078)

     Fix contributed by Ross Deane.
@@ -521,8 +587,13 @@ See :ref:`beat-solar` for more information.

 Contributed by Mark Parncutt.

-App can now configure periodic tasks
-====================================
+New API for configuring periodic tasks
+======================================
+
+This new API enables you to use signatures when defining periodic tasks,
+removing the chance of mistyping task names.
+
+An example of the new API is :ref:`here <beat-entries>`.

 # bc18d0859c1570f5eb59f5a969d1d32c63af764b
 # 132d8d94d38f4050db876f56a841d5a5e487b25b

@@ -530,84 +601,119 @@ App can now configure periodic tasks
 RabbitMQ Priority queue support
 ===============================

-# 1d4cbbcc921aa34975bde4b503b8df9c2f1816e0
+See :ref:`routing-option-rabbitmq-priorities` for more information.

 Contributed by Gerald Manipon.

-Incompatible: Worker direct queues are no longer using auto-delete.
-===================================================================
+Prefork: Limit child process resident memory size.
+==================================================
+# 5cae0e754128750a893524dcba4ae030c414de33

-Issue #2492.
+You can now limit the maximum amount of memory allocated per prefork
+pool child process by setting the worker :option:`--maxmemperchild` option,
+or the :setting:`worker_max_memory_per_child` setting.

-Prefork: Limits for child process resident memory size.
-=======================================================
+The limit is for RSS/resident memory size and is specified in kilobytes.

-This version introduces the new :setting:`worker_max_memory_per_child` setting,
-which BLA BLA BLA
+A child process having exceeded the limit will be terminated and replaced
+with a new process after the currently executing task returns.

-# 5cae0e754128750a893524dcba4ae030c414de33
+See :ref:`worker-maxmemperchild` for more information.

 Contributed by Dave Smith.

 Redis: Result backend optimizations
 ===================================

-Pub/sub results
----------------
+RPC is now using pub/sub for streaming task results.
+----------------------------------------------------
+
+Calling ``result.get()`` when using the Redis result backend
+used to be extremely expensive as it was using polling to wait
+for the result to become available. A default polling
+interval of 0.5 seconds did not help performance, but was
+necessary to avoid a spin loop.
+
+The new implementation is using Redis Pub/Sub mechanisms to
+publish and retrieve results immediately, greatly improving
+task round-trip times.
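As a sketch of the round trip this speeds up (the app and task names are made up):

.. code-block:: python

    from celery import Celery

    app = Celery('proj', broker='redis://', backend='redis://')

    @app.task
    def add(x, y):
        return x + y

    # get() now returns as soon as the worker publishes the result
    # over Pub/Sub, instead of polling every 0.5 seconds.
    result = add.delay(2, 2)
    print(result.get(timeout=10))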

 Contributed by Yaroslav Zhavoronkov and Ask Solem.

-Chord join
-----------
+New optimized chord join implementation.
+----------------------------------------

 This was an experimental feature introduced in Celery 3.1,
-but is now enabled by default.
+that could only be enabled by adding ``?new_join=1`` to the
+result backend URL configuration.

-?new_join BLABLABLA
+We feel that the implementation has been tested thoroughly enough
+to be considered stable and enabled by default.

-Riak Result Backend
-===================
+The new implementation greatly reduces the overhead of chords,
+and especially with larger chords the performance benefit can be massive.

-Contributed by Gilles Dartiguelongue, Alman One and NoKriK.
+New Riak result backend introduced.
+===================================

-Bla bla
+See :ref:`conf-riak-result-backend` for more information.

-- blah blah
+Contributed by Gilles Dartiguelongue, Alman One and NoKriK.
+
+New CouchDB result backend introduced.
+======================================

-CouchDB Result Backend
-======================
+See :ref:`conf-couchdb-result-backend` for more information.

 Contributed by Nathan Van Gheem.

-New Cassandra Backend
-=====================
+Brand new Cassandra result backend.
+===================================

-The new Cassandra backend utilizes the python-driver library.
-Old backend is deprecated and everyone using cassandra is required to upgrade
-to be using the new driver.
+A brand new Cassandra backend utilizing the new :pypi:`cassandra-driver`
+library is replacing the old result backend which was using the older
+:pypi:`pycassa` library.
+
+See :ref:`conf-cassandra-result-backend` for more information.

 # XXX What changed?

+New Elasticsearch result backend introduced.
+============================================

-Elasticsearch Result Backend
-============================
+See :ref:`conf-elasticsearch-result-backend` for more information.

 Contributed by Ahmet Demir.

-Filesystem Result Backend
-=========================
+New Filesystem result backend introduced.
+=========================================
+
+See :ref:`conf-filesystem-result-backend` for more information.

 Contributed by Môshe van der Sterre.

 Event Batching
 ==============

-Events are now buffered in the worker and sent as a list, and
-events are sent as transient messages by default so that they are not written
-to disk by RabbitMQ.
+Events are now buffered in the worker and sent as a list, which reduces
+the overhead required to send monitoring events.
 
 
-03399b4d7c26fb593e61acf34f111b66b340ba4e
+For authors of custom event monitors there will be no action
+required as long as you're using the Python celery
+helpers (:class:`~@events.Receiver`) to implement your monitor.
+However, if you're manually receiving event messages you must now account
+for batched event messages which differ from normal event messages
+in the following way:
+
+    - The routing key for a batch of event messages will be set to
+      ``<event-group>.multi`` where the only batched event group
+      is currently ``task`` (giving a routing key of ``task.multi``).
+    - The message body will be a serialized list-of-dictionaries instead
+      of a dictionary.  Each item in the list can be regarded
+      as a normal event message body.
+
+03399b4d7c26fb593e61acf34f111b66b340ba4e
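For monitors receiving raw messages, a minimal normalization sketch under the description above (the helper name is made up):

.. code-block:: python

    def iter_event_bodies(routing_key, body):
        # Batched messages (routing key '<group>.multi', e.g. 'task.multi')
        # carry a list of event dicts; plain messages carry a single dict.
        if routing_key.endswith('.multi'):
            return body
        return [body]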

 Task.replace
 ============
@@ -636,19 +742,26 @@ Closes #817
 Optimized Beat implementation
 =============================

-heapq
-20340d79b55137643d5ac0df063614075385daaa
+The :program:`celery beat` implementation has been optimized
+for millions of periodic tasks by using a heap to schedule entries.

 Contributed by Ask Solem and Alexander Koshelev.

-
 Task Autoretry Decorator
 ========================
-75246714dd11e6c463b9dc67f4311690643bff24
+Writing custom retry handling for exception events is so common
+that we now have built-in support for it.
+
+For this a new ``autoretry_for`` argument is now supported by
+the task decorators, where you can specify a tuple of exceptions
+to automatically retry for.
+
+See :ref:`task-autoretry` for more information.
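A minimal sketch of the decorator argument (the task body and exception choice are made up):

.. code-block:: python

    import requests

    @app.task(autoretry_for=(requests.ConnectionError,))
    def fetch(url):
        # Any ConnectionError raised here triggers an automatic retry.
        return requests.get(url).text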

 Contributed by Dmitry Malinovsky.

+# 75246714dd11e6c463b9dc67f4311690643bff24

 Async Result API
 ================
@@ -657,12 +770,6 @@ eventlet/gevent drainers, promises, BLA BLA

 Closed issue #2529.

-
-:setting:`task_routes` can now contain glob patterns and regexes.
-=================================================================
-
-See examples in :setting:`task_routes` and :ref:`routing-automatic`.
-
 In Other News
 -------------

@@ -680,6 +787,11 @@ In Other News
   This increases performance as it completely bypasses the routing table,
   in addition, it also improves reliability for the Redis broker transport.

+- **Tasks**: :setting:`task_routes` can now contain glob patterns and
+  regexes.
+
+    See new examples in :setting:`task_routes` and :ref:`routing-automatic`.
+
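    A sketch of both styles (the queue names here are made up):

    .. code-block:: python

        import re

        app.conf.task_routes = {
            # Glob pattern matching task names:
            'feed.tasks.*': {'queue': 'feeds'},
            # Regular expression matching task names:
            re.compile(r'(video|image)\.tasks\..*'): {'queue': 'media'},
        }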
 - **Eventlet/Gevent**: Fixed race condition leading to "simultaneous read"
   errors (Issue #2812).