
Merge branch '3.0'

Conflicts:
	Changelog
	celery/__init__.py
	docs/includes/introduction.txt
Ask Solem 12 years ago
parent commit efbb804184

+ 131 - 0
Changelog

@@ -15,6 +15,137 @@
 - `Task.apply_async` now supports timeout and soft_timeout arguments (Issue #802)
 - `App.control.Inspect.conf` can be used for inspecting worker configuration
 
+.. _version-3.0.4:
+
+3.0.4
+=====
+:release-date: 2012-07-26 07:00 P.M BST
+
+- Now depends on Kombu 2.3
+
+- New experimental standalone Celery monitor: Flower
+
+    See :ref:`monitoring-flower` to read more about it!
+
+    Contributed by Mher Movsisyan.
+
+- Now supports AMQP heartbeats if using the new ``pyamqp://`` transport.
+
+    - The py-amqp transport requires the :mod:`amqp` library to be installed::
+
+        $ pip install amqp
+
+    - Then you need to set the transport URL prefix to ``pyamqp://``,
+      as shown in the example below.
+
+    - The default heartbeat value is 10 seconds, but this can be changed using
+      the :setting:`BROKER_HEARTBEAT` setting::
+
+        BROKER_HEARTBEAT = 5.0
+
+    - If the broker heartbeat is set to 10 seconds, the heartbeats will be
+      monitored every 5 seconds (twice the heartbeat rate).
+
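+    For example, a minimal configuration sketch enabling heartbeats (the
+    broker credentials below are placeholders)::
+
+        # placeholder broker URL; heartbeats require the pyamqp transport.
+        BROKER_URL = 'pyamqp://guest:guest@localhost:5672//'
+        BROKER_HEARTBEAT = 10.0
+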
+    See the `Kombu 2.3 changelog`_ for more information.
+
+.. _`Kombu 2.3 changelog`:
+    http://kombu.readthedocs.org/en/latest/changelog.html#version-2-3-0
+
+- Now supports RabbitMQ Consumer Cancel Notifications, using the ``pyamqp://``
+  transport.
+
+    This is essential when running RabbitMQ in a cluster.
+
+    See the `Kombu 2.3 changelog`_ for more information.
+
+- Delivery info is no longer passed directly through.
+
+    It was discovered that the SQS transport adds objects that can't
+    be pickled to the delivery info mapping, so we had to go back
+    to using the whitelist again.
+
+    Fixing this bug also means that the SQS transport is now working again.
+
+- The semaphore was not properly released when a task was revoked (Issue #877).
+
+    This could lead to tasks being swallowed and not released until a worker
+    restart.
+
+    Thanks to Hynek Schlawack for debugging the issue.
+
+- Retrying a task now also forwards any linked tasks.
+
+    This means that if a task is part of a chain (or linked in some other
+    way) and is retried, the next task in the chain will still be executed
+    when the retry eventually succeeds.
+
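+    For example (a sketch; ``fetch`` and ``process`` stand in for your own
+    tasks)::
+
+        from celery import chain
+
+        # fetch and process are placeholder tasks defined elsewhere.
+        chain(fetch.s('http://example.com'), process.s()).apply_async()
+
+    If ``fetch`` retries itself, ``process`` will still be invoked once the
+    retried ``fetch`` eventually succeeds.
+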
+- Chords: Now supports setting the interval and other keyword arguments
+  to the chord unlock task.
+
+    - The interval can now be set as part of the chord subtask's kwargs::
+
+        chord(header)(body, interval=10.0)
+
+    - In addition, the chord unlock task now honors the
+      ``Task.default_retry_delay`` option (used when no interval is
+      specified), which means that the default interval can also be changed
+      using annotations:
+
+        .. code-block:: python
+
+            CELERY_ANNOTATIONS = {
+                'celery.chord_unlock': {
+                    'default_retry_delay': 10.0,
+                }
+            }
+
+- New :meth:`@Celery.add_defaults` method can add new default configuration
+  dicts to the application's configuration.
+
+    For example::
+
+        config = {'FOO': 10}
+
+        celery.add_defaults(config)
+
+    is the same as ``celery.conf.update(config)`` except that data will not be
+    copied, and that it will not be pickled when the worker spawns child
+    processes.
+
+    In addition the method accepts a callable::
+
+        def initialize_config():
+            # insert heavy stuff that can't be done at import time here,
+            # then return the resulting configuration mapping.
+            return {'FOO': 10}
+
+        celery.add_defaults(initialize_config)
+
+    which means the same as the above except that it will not happen
+    until the celery configuration is actually used.
+
+    As an example, Celery can lazily use the configuration of a Flask app::
+
+        flask_app = Flask(__name__)
+        celery = Celery()
+        celery.add_defaults(lambda: flask_app.config)
+
+- Revoked tasks were not marked as revoked in the result backend (Issue #871).
+
+    Fix contributed by Hynek Schlawack.
+
+- Eventloop now properly handles the case when the epoll poller object
+  has been closed (Issue #882).
+
+- Fixed syntax error in ``funtests/test_leak.py``.
+
+    Fix contributed by Catalin Iacob.
+
+- group/chunks: Now accepts empty task list (Issue #873).
+
+- New method names:
+
+    - ``Celery.default_connection()`` ➠  :meth:`~@Celery.connection_or_acquire`.
+    - ``Celery.default_producer()``   ➠  :meth:`~@Celery.producer_or_acquire`.
+
+    The old names still work for backward compatibility.
+
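+    A minimal usage sketch of the new name (assuming an app instance named
+    ``celery``, as in the examples above)::
+
+        with celery.connection_or_acquire() as connection:
+            # the connection is drawn from the pool (or the default
+            # connection is reused) and released when the block exits.
+            connection.connect()
+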
 .. _version-3.0.3:
 
 3.0.3

+ 1 - 1
celery/app/defaults.py

@@ -68,7 +68,7 @@ NAMESPACES = {
         'CONNECTION_TIMEOUT': Option(4, type='float'),
         'CONNECTION_RETRY': Option(True, type='bool'),
         'CONNECTION_MAX_RETRIES': Option(100, type='int'),
-        'HEARTBEAT': Option(3, type='int'),
+        'HEARTBEAT': Option(10, type='int'),
         'POOL_LIMIT': Option(10, type='int'),
         'USE_SSL': Option(False, type='bool'),
         'TRANSPORT': Option(type='string'),

+ 7 - 8
celery/contrib/migrate.py

@@ -122,22 +122,19 @@ def move(predicate, connection=None, exchange=None, routing_key=None,
         source=None, app=None, callback=None, limit=None, transform=None,
         **kwargs):
     """Find tasks by filtering them and move the tasks to a new queue.
-    :param predicate: Predicate function used to filter messages to move.
-        Must accept the standard signature of ``(body, message)`` used
-        by Kombu consumer callbacks.  If the predicate wants the message
-        to be moved it should return the hostname of the worker to move it
-        to, otherwise it should return :const:`False`
-    :keyword queues: A list of queues to consume from, if not specified
-        it will consume from all configured task queues in ``CELERY_QUEUES``.
 
     :param predicate: Filter function used to decide which messages
         to move.  Must accept the standard signature of ``(body, message)``
         used by Kombu consumer callbacks. If the predicate wants the message
-        to be moved it must return either
+        to be moved it must return either:
+
             1) a tuple of ``(exchange, routing_key)``, or
+
             2) a :class:`~kombu.entity.Queue` instance, or
+
             3) any other true value which means the specified
                ``exchange`` and ``routing_key`` arguments will be used.
+
     :keyword connection: Custom connection to use.
     :keyword source: Optional list of source queues to use instead of the
         default (which is the queues in :setting:`CELERY_QUEUES`).
@@ -166,6 +163,8 @@ def move(predicate, connection=None, exchange=None, routing_key=None,
 
     or with a transform:
 
+    .. code-block:: python
+
         def transform(value):
             if isinstance(value, basestring):
                 return Queue(value, Exchange(value), value)

+ 0 - 1
celery/tests/worker/test_mediator.py

@@ -6,7 +6,6 @@ from Queue import Queue
 
 from mock import Mock, patch
 
-from celery.utils import uuid
 from celery.worker.mediator import Mediator
 from celery.worker.state import revoked as revoked_tasks
 from celery.tests.utils import Case

+ 1 - 0
docs/conf.py

@@ -71,6 +71,7 @@ intersphinx_mapping = {
         "http://kombu.readthedocs.org/en/latest/": None,
         "http://django-celery.readthedocs.org/en/latest": None,
         "http://cyme.readthedocs.org/en/latest": None,
+        "http://amqp.readthedocs.org/en/latest": None,
 }
 
 # The name of the Pygments (syntax highlighting) style to use.

+ 20 - 0
docs/configuration.rst

@@ -729,6 +729,26 @@ It can also be a fully qualified path to your own transport implementation.
 
 See the Kombu documentation for more information about broker URLs.
 
+.. setting:: BROKER_HEARTBEAT
+
+BROKER_HEARTBEAT
+~~~~~~~~~~~~~~~~
+:transports supported: ``pyamqp``
+
+It's not always possible to detect connection loss in a timely
+manner using TCP/IP alone, so AMQP defines something called heartbeats
+that is used both by the client and the broker to detect if
+a connection was closed.
+
+Heartbeats are currently only supported by the ``pyamqp://`` transport,
+and this requires the :mod:`amqp` module::
+
+    $ pip install amqp
+
+The default heartbeat value is 10 seconds; the heartbeat is then monitored
+at twice the heartbeat rate (so for the default of 10 seconds, the heartbeat
+is checked every 5 seconds).
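+
+For example, to enable heartbeats you would configure both the transport and
+the heartbeat interval (a sketch; the broker URL is a placeholder)::
+
+    # placeholder broker URL; heartbeats require the pyamqp transport.
+    BROKER_URL = 'pyamqp://guest:guest@localhost:5672//'
+    BROKER_HEARTBEAT = 10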
+
 .. setting:: BROKER_USE_SSL
 
 BROKER_USE_SSL

BIN
docs/images/dashboard.png


+ 13 - 11
docs/userguide/monitoring.rst

@@ -151,7 +151,7 @@ You can specify a single, or a list of workers by using the
     $ celery inspect -d w1,w2 reserved
 
 
-.. _monitoring-django-admin:
+.. _monitoring-flower:
 
 Celery Flower: Web interface
 ----------------------------
@@ -161,17 +161,17 @@ Celery Flower is a web based, real-time monitor and administration tool.
 Features
 ~~~~~~~~
 
-* Workers monitoring and management
-* Configuration viewer
-* Worker pool control
-* Broker options viewer
-* Queues management
-* Tasks execution statistics
-* Task viewer
+- Workers monitoring and management
+- Configuration viewer
+- Worker pool control
+- Broker options viewer
+- Queues management
+- Tasks execution statistics
+- Task viewer
 
-*Screenshot*
+**Screenshot**
 
-.. figure:: https://github.com/mher/flower/raw/master/docs/screenshots/dashborad.png
+.. figure:: ../images/dashboard.png
 
 More screenshots_:
 
@@ -188,6 +188,8 @@ Launch Celery Flower and open http://localhost:8008 in browser: ::
 
     $ celery flower
 
+.. _monitoring-django-admin:
+
 Django Admin Monitor
 --------------------
 
@@ -581,7 +583,7 @@ Task Events
 ~~~~~~~~~~~
 
 * ``task-sent(uuid, name, args, kwargs, retries, eta, expires,
-              queue, exchange, routing_key)``
+  queue, exchange, routing_key)``
 
    Sent when a task message is published and
    the :setting:`CELERY_SEND_TASK_SENT_EVENT` setting is enabled.

+ 1 - 1
requirements/default-py3k.txt

@@ -1,4 +1,4 @@
 billiard>=2.7.3.10
 python-dateutil>=2.0
 pytz
-kombu>=2.2.5
+kombu>=2.3