Ask Solem 11 years ago
parent
commit
de77364e62

+ 5 - 5
README.rst

@@ -81,8 +81,8 @@ getting started tutorials:
 .. _`Next steps`:
     http://docs.celeryproject.org/en/latest/getting-started/next-steps.html
 
 
-Celery is...
-============
+Celery is…
+==========
 
 
 - **Simple**
 
 
@@ -119,8 +119,8 @@ Celery is...
     Custom pool implementations, serializers, compression schemes, logging,
     schedulers, consumers, producers, autoscalers, broker transports and much more.
 
 
-It supports...
-==============
+It supports…
+============
 
 
     - **Message Transports**
 
 
@@ -128,7 +128,7 @@ It supports...
         - MongoDB_ (experimental), Amazon SQS (experimental),
         - CouchDB_ (experimental), SQLAlchemy_ (experimental),
         - Django ORM (experimental), `IronMQ`_
-        - and more...
+        - and more…
 
 
     - **Concurrency**
 
 

+ 3 - 3
celery/utils/dispatch/signal.py

@@ -140,7 +140,7 @@ class Signal(object):  # pragma: no cover
 
 
         :keyword \*\*named: Named arguments which will be passed to receivers.
 
 
-        :returns: a list of tuple pairs: `[(receiver, response), ... ]`.
+        :returns: a list of tuple pairs: `[(receiver, response), … ]`.
 
 
         """
         """
         responses = []
         responses = []
@@ -163,7 +163,7 @@ class Signal(object):  # pragma: no cover
             These arguments must be a subset of the argument names defined in
             :attr:`providing_args`.
 
 
-        :returns: a list of tuple pairs: `[(receiver, response), ... ]`.
+        :returns: a list of tuple pairs: `[(receiver, response), … ]`.
 
 
         :raises DispatcherKeyError:
 
 
@@ -177,7 +177,7 @@ class Signal(object):  # pragma: no cover
             return responses
 
 
         # Call each receiver with whatever arguments it can accept.
-        # Return a list of tuple pairs [(receiver, response), ... ].
+        # Return a list of tuple pairs [(receiver, response), … ].
         for receiver in self._live_receivers(_make_id(sender)):
             try:
                 response = receiver(signal=self, sender=sender, **named)
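
For context, a minimal sketch of how this dispatch API is used (the signal
name, ``value`` argument and handler are illustrative):

.. code-block:: python

    from celery.utils.dispatch import Signal

    my_signal = Signal(providing_args=['value'])

    def handler(sender=None, value=None, **kwargs):
        # receivers also get ``signal`` and ``sender`` keyword arguments
        return value * 2

    my_signal.connect(handler)
    responses = my_signal.send(sender=None, value=21)
    # responses == [(handler, 42)]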

+ 1 - 1
docs/configuration.rst

@@ -127,7 +127,7 @@ instead of a dict to choose which tasks to annotate:
             if task.name.startswith('tasks.'):
                 return {'rate_limit': '10/s'}
 
 
-    CELERY_ANNOTATIONS = (MyAnnotate(), {...})
+    CELERY_ANNOTATIONS = (MyAnnotate(), {…})
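
For reference, the annotation object used above implements an ``annotate``
method; a minimal sketch consistent with the surrounding snippet:

.. code-block:: python

    class MyAnnotate(object):

        def annotate(self, task):
            # return a dict of attributes to set on matching tasks
            if task.name.startswith('tasks.'):
                return {'rate_limit': '10/s'}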
 
 
 
 
 
 

+ 9 - 10
docs/faq.rst

@@ -528,7 +528,7 @@ If you don't use the results for a task, make sure you set the
 
 
     @celery.task(ignore_result=True)
     def mytask():
-        ...
+        …
 
 
     class MyTask(Task):
         ignore_result = True
@@ -633,7 +633,7 @@ Can I specify a custom task_id?
 
 
 **Answer**: Yes.  Use the `task_id` argument to :meth:`Task.apply_async`::
 
 
-    >>> task.apply_async(args, kwargs, task_id="...")
+    >>> task.apply_async(args, kwargs, task_id='…')
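
For example, the id can be generated up front so it can be stored before the
task is even sent (a sketch using the stdlib; ``task``, ``args`` and
``kwargs`` are the FAQ's placeholders)::

    >>> import uuid
    >>> task_id = str(uuid.uuid4())
    >>> task.apply_async(args, kwargs, task_id=task_id)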
 
 
 
 
 Can I use decorators with tasks?
@@ -730,19 +730,18 @@ Can I change the interval of a periodic task at runtime?
 --------------------------------------------------------
 
 
 **Answer**: Yes. You can use the Django database scheduler, or you can
-override `PeriodicTask.is_due` or turn `PeriodicTask.run_every` into a
-property:
+create a new schedule subclass and override
+:meth:`~celery.schedules.schedule.is_due`:
 
 
 .. code-block:: python
 
 
-    class MyPeriodic(PeriodicTask):
+    from celery.schedules import schedule
 
 
-        def run(self):
-            # ...
 
 
-        @property
-        def run_every(self):
-            return get_interval_from_database(...)
+    class my_schedule(schedule):
+
+        def is_due(self, last_run_at):
+            return …
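
In a full subclass ``is_due`` must return a ``(is_due, next_time_to_check)``
tuple; a sketch reading the interval from a database
(``get_interval_from_database`` is a hypothetical helper, and naive UTC
datetimes are assumed):

.. code-block:: python

    from datetime import datetime

    from celery.schedules import schedule


    class my_schedule(schedule):

        def is_due(self, last_run_at):
            interval = get_interval_from_database()  # hypothetical helper
            elapsed = (datetime.utcnow() - last_run_at).total_seconds()
            if elapsed >= interval:
                # run now, and check again in ``interval`` seconds
                return True, interval
            return False, interval - elapsed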
 
 
 .. _faq-task-priorities:
 
 

+ 2 - 2
docs/getting-started/first-steps-with-celery.rst

@@ -266,7 +266,7 @@ If the task raised an exception you can also gain access to the
 original traceback::
 
 
     >>> result.traceback
-    ...
+    …
 
 
 See :mod:`celery.result` for the complete result object reference.
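
A quick sketch of inspecting a result, assuming the tutorial's ``add`` task::

    >>> result = add.delay(4, 4)
    >>> result.ready()    # True once the task has finished
    True
    >>> result.get(timeout=1)
    8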
 
 
@@ -456,5 +456,5 @@ the task id after all).
 
 
     .. code-block:: python
 
 
-        >>> result = task.delay(...)
+        >>> result = task.delay(…)
         >>> print(result.backend)

+ 3 - 3
docs/getting-started/next-steps.rst

@@ -397,8 +397,8 @@ There is also a shortcut using star arguments::
     >>> add.s(2, 2)
     tasks.add(2, 2)
 
 
-And there's that calling API again...
--------------------------------------
+And there's that calling API again…
+-----------------------------------
 
 
 Subtask instances also support the calling API, which means that they
 have the ``delay`` and ``apply_async`` methods.
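
For example (assuming the tutorial's ``add`` task)::

    >>> add.s(2, 2).delay()
    >>> add.s(2, 2).apply_async(countdown=1)
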
@@ -449,7 +449,7 @@ As stated subtasks supports the calling API, which means that:
   existing keys.
 
 
 So this all seems very useful, but what can you actually do with these?
-To get to that I must introduce the canvas primitives...
+To get to that I must introduce the canvas primitives…
 
 
 The Primitives
 --------------

+ 5 - 5
docs/includes/introduction.txt

@@ -75,8 +75,8 @@ getting started tutorials:
 .. _`Next steps`:
     http://docs.celeryproject.org/en/latest/getting-started/next-steps.html
 
 
-Celery is...
-============
+Celery is…
+==========
 
 
 - **Simple**
 
 
@@ -113,8 +113,8 @@ Celery is...
     Custom pool implementations, serializers, compression schemes, logging,
     schedulers, consumers, producers, autoscalers, broker transports and much more.
 
 
-It supports...
-==============
+It supports…
+============
 
 
     - **Message Transports**
 
 
@@ -122,7 +122,7 @@ It supports...
         - MongoDB_ (experimental), Amazon SQS (experimental),
         - CouchDB_ (experimental), SQLAlchemy_ (experimental),
         - Django ORM (experimental), `IronMQ`_
-        - and more...
+        - and more…
 
 
     - **Concurrency**
 
 

+ 19 - 2
docs/internals/deprecation.rst

@@ -16,11 +16,20 @@ Removals for version 3.2
   as the ``celery.task`` package is being phased out.  The compat module
   will be removed in version 3.2 so please change any import from::
 
 
-    from celery.task.trace import ...
+    from celery.task.trace import …
 
 
   to::
 
 
-    from celery.app.trace import ...
+    from celery.app.trace import …
+
+- ``AsyncResult.serializable()`` and ``celery.result.from_serializable``
+  will be removed.
+
+    Use instead::
+
+        >>> tup = result.as_tuple()
+        >>> from celery.result import result_from_tuple
+        >>> result = result_from_tuple(tup)
 
 
 .. _deprecations-v4.0:
 
 
@@ -166,6 +175,14 @@ Apply to: :class:`~celery.result.AsyncResult`,
 - ``load_settings()`` -> ``current_app.conf``
 
 
 
 
+Task_sent signal
+----------------
+
+The :signal:`task_sent` signal will be removed in version 4.0.
+Please use the :signal:`before_task_publish` and :signal:`after_task_publish`
+signals instead.
+
+
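
A sketch of an equivalent replacement handler (the header key is
illustrative):

.. code-block:: python

    from celery.signals import before_task_publish

    @before_task_publish.connect
    def on_before_publish(headers=None, **kwargs):
        # runs in the publishing process before the message is sent;
        # mutating ``headers`` changes the final message
        headers['sender'] = 'example'
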
 Modules to Remove
 -----------------
 
 

+ 1 - 1
docs/internals/guide.rst

@@ -16,7 +16,7 @@ The API>RCP Precedence Rule
 - The API is more important than Readability
 - Readability is more important than Convention
 - Convention is more important than Performance
-    - ...unless the code is a proven hotspot.
+    - …unless the code is a proven hotspot.
 
 
 More important than anything else is the end-user API.
 Conventions must step aside, and any suffering is always alleviated

+ 13 - 13
docs/reference/celery.rst

@@ -29,7 +29,7 @@ and creating Celery applications.
 
 
 .. versionadded:: 2.5
 
 
-.. class:: Celery(main='__main__', broker='amqp://localhost//', ...)
+.. class:: Celery(main='__main__', broker='amqp://localhost//', …)
 
 
     :param main: Name of the main module if running as `__main__`.
         This is used as a prefix for task names.
@@ -205,7 +205,7 @@ and creating Celery applications.
         it's important that the same configuration happens at import time
         when pickle restores the object on the other side.
 
 
-    .. method:: Celery.setup_security(...)
+    .. method:: Celery.setup_security(…)
 
 
         Setup the message-signing serializer.
         This will affect all application instances (a global operation).
@@ -235,7 +235,7 @@ and creating Celery applications.
 
 
         Uses :data:`sys.argv` if `argv` is not specified.
 
 
-    .. method:: Celery.task(fun, ...)
+    .. method:: Celery.task(fun, …)
 
 
         Decorator to create a task class out of any callable.
 
 
@@ -245,7 +245,7 @@ and creating Celery applications.
 
 
             @celery.task
             def refresh_feed(url):
-                return ...
+                return …
 
 
         with setting extra options:
 
 
@@ -253,7 +253,7 @@ and creating Celery applications.
 
 
             @celery.task(exchange="feeds")
             @celery.task(exchange="feeds")
             def refresh_feed(url):
             def refresh_feed(url):
-                return ...
+                return …
 
 
         .. admonition:: App Binding
 
 
@@ -266,7 +266,7 @@ and creating Celery applications.
             application is fully set up (finalized).
 
 
 
 
-    .. method:: Celery.send_task(name[, args[, kwargs[, ...]]])
+    .. method:: Celery.send_task(name[, args[, kwargs[, …]]])
 
 
         Send task by name.
 
 
@@ -371,7 +371,7 @@ Canvas primitives
 
 
 See :ref:`guide-canvas` for more about creating task workflows.
 
 
-.. class:: group(task1[, task2[, task3[,... taskN]]])
+.. class:: group(task1[, task2[, task3[,… taskN]]])
 
 
     Creates a group of tasks to be executed in parallel.
 
 
@@ -388,7 +388,7 @@ See :ref:`guide-canvas` for more about creating task workflows.
     tasks in the group (and return a :class:`GroupResult` instance
     that can be used to inspect the state of the group).
 
 
-.. class:: chain(task1[, task2[, task3[,... taskN]]])
+.. class:: chain(task1[, task2[, task3[,… taskN]]])
 
 
     Chains tasks together, so that the tasks follow each other
     by being applied as a callback of the previous task.
@@ -465,7 +465,7 @@ See :ref:`guide-canvas` for more about creating task workflows.
 
 
         Shortcut to :meth:`apply_async`.
 
 
-    .. method:: signature.apply_async(args=(), kwargs={}, ...)
+    .. method:: signature.apply_async(args=(), kwargs={}, …)
 
 
         Apply this task asynchronously.
 
 
@@ -476,7 +476,7 @@ See :ref:`guide-canvas` for more about creating task workflows.
 
 
         See :meth:`~@Task.apply_async`.
 
 
-    .. method:: signature.apply(args=(), kwargs={}, ...)
+    .. method:: signature.apply(args=(), kwargs={}, …)
 
 
         Same as :meth:`apply_async` but executes the task inline instead
         of sending a task message.
@@ -490,7 +490,7 @@ See :ref:`guide-canvas` for more about creating task workflows.
 
 
         :returns: :class:`@AsyncResult` instance.
 
 
-    .. method:: signature.clone(args=(), kwargs={}, ...)
+    .. method:: signature.clone(args=(), kwargs={}, …)
 
 
         Return a copy of this signature.
 
 
@@ -518,9 +518,9 @@ See :ref:`guide-canvas` for more about creating task workflows.
 
 
         :returns: ``other_signature`` (to work with :func:`~functools.reduce`)
 
 
-    .. method:: signature.set(...)
+    .. method:: signature.set(…)
 
 
-        Set arbitrary options (same as ``.options.update(...)``).
+        Set arbitrary options (same as ``.options.update(…)``).
 
 
         This is a chaining method call (i.e. it will return ``self``).
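
Since ``set`` returns ``self`` it chains naturally; a small sketch assuming
the tutorial's ``add`` task::

    >>> s = add.s(2, 2).set(countdown=10, queue='default')
    >>> s.apply_async()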
 
 

+ 2 - 2
docs/userguide/calling.rst

@@ -19,7 +19,7 @@ used by task instances and the :ref:`canvas <guide-canvas>`.
 
 
 The API defines a standard set of execution options, as well as three methods:
 
 
-    - ``apply_async(args[, kwargs[, ...]])``
+    - ``apply_async(args[, kwargs[, …]])``
 
 
         Sends a task message.
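
A quick sketch of the three methods for a hypothetical ``add`` task::

    >>> add.apply_async((2, 2))                # send a task message
    >>> add.apply_async((2, 2), countdown=10)  # with execution options
    >>> add.delay(2, 2)                        # star-argument shortcut
    >>> add(2, 2)                              # executes locally, in-process
    4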
 
 
@@ -92,7 +92,7 @@ called `add`, returning the sum of two arguments:
         return x + y
 
 
 
 
-.. topic:: There's another way...
+.. topic:: There's another way…
 
 
     You will learn more about this later while reading about the :ref:`Canvas
     <guide-canvas>`, but :class:`~celery.subtask`'s are objects used to pass around

+ 1 - 1
docs/userguide/monitoring.rst

@@ -319,7 +319,7 @@ as manage users, virtual hosts and their permissions.
     The default virtual host (``"/"``) is used in these
     examples; if you use a custom virtual host you have to add
     the ``-p`` argument to the command, e.g.:
-    ``rabbitmqctl list_queues -p my_vhost ....``
+    ``rabbitmqctl list_queues -p my_vhost …``
 
 
 .. _`rabbitmqctl(1)`: http://www.rabbitmq.com/man/rabbitmqctl.1.man.html
 
 

+ 1 - 1
docs/userguide/remote-tasks.rst

@@ -35,7 +35,7 @@ Whether to use GET or POST is up to you and your requirements.
 The web page should then return a response in the following format
 if the execution was successful::
 
 
-    {'status': 'success', 'retval': ....}
+    {'status': 'success', 'retval': …}
 
 
 or if there was an error::
 
 

+ 5 - 5
docs/userguide/tasks.rst

@@ -384,7 +384,7 @@ override this default.
     @app.task(bind=True, default_retry_delay=30 * 60)  # retry in 30 minutes.
     def add(self, x, y):
         try:
-            ...
+            …
         except Exception as exc:
             raise self.retry(exc=exc, countdown=60)  # override the default and
                                                      # retry in 1 minute
@@ -1002,7 +1002,7 @@ that can be added to tasks like this:
     @app.task(base=DatabaseTask)
     def process_rows():
         for row in process_rows.db.table.all():
-            ...
+            …
 
 
 The ``db`` attribute of the ``process_rows`` task will then
 always stay the same in each process.
@@ -1159,7 +1159,7 @@ wastes time and resources.
 .. code-block:: python
 
 
     @app.task(ignore_result=True)
-    def mytask(...)
+    def mytask(…):
         something()
 
 
 Results can even be disabled globally using the :setting:`CELERY_IGNORE_RESULT`
@@ -1377,7 +1377,7 @@ Let's have a look at another example:
 
 
     @transaction.commit_on_success
     def create_article(request):
-        article = Article.objects.create(....)
+        article = Article.objects.create(…)
         expand_abbreviations.delay(article.pk)
 
 
 This is a Django view creating an article object in the database,
@@ -1397,7 +1397,7 @@ depending on state from the current transaction*:
     @transaction.commit_manually
     def create_article(request):
         try:
-            article = Article.objects.create(...)
+            article = Article.objects.create(…)
         except:
             transaction.rollback()
             raise
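
Completing the snippet the way the surrounding text implies, the task is sent
only after the transaction commits (the ``create(…)`` elision is kept from
the original):

.. code-block:: python

    @transaction.commit_manually
    def create_article(request):
        try:
            article = Article.objects.create(…)
        except:
            transaction.rollback()
            raise
        else:
            transaction.commit()
            # the row is now committed and visible to the worker,
            # so it is safe to send the task
            expand_abbreviations.delay(article.pk)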

+ 2 - 2
docs/whatsnew-2.5.rst

@@ -303,7 +303,7 @@ that filter for tasks to annotate:
             if task.name.startswith('tasks.'):
                 return {'rate_limit': '10/s'}
 
 
-    CELERY_ANNOTATIONS = (MyAnnotate(), {...})
+    CELERY_ANNOTATIONS = (MyAnnotate(), {…})
 
 
 ``current`` provides the currently executing task
 -------------------------------------------------
@@ -326,7 +326,7 @@ executing task.
             # retry in 10 seconds.
             current.retry(countdown=10, exc=exc)
 
 
-Previously you would have to type ``update_twitter_status.retry(...)``
+Previously you would have to type ``update_twitter_status.retry(…)``
 here, which can be annoying for long task names.
 
 
 .. note::

+ 57 - 3
docs/whatsnew-3.1.rst

@@ -148,7 +148,10 @@ but hopefully more transports will be supported in the future.
     This timeout is no longer necessary, and so the task can be marked as
     failed as soon as the pool gets the notification that the process exited.
 
 
-.. admonition:: Long running tasks
+Caveats
+~~~~~~~
+
+.. topic:: Long running tasks
 
 
     The new pool will asynchronously send as many tasks to the processes
     as it can and this means that the processes are, in effect, prefetching
@@ -182,6 +185,15 @@ but hopefully more transports will be supported in the future.
     With this option enabled the worker will only write to processes that are
     available for work, disabling the prefetch behavior.
 
 
+.. topic:: Max tasks per child
+
+    If a process exits and pool prefetch is enabled the worker may have
+    already written many tasks to the process inqueue, and these tasks
+    must then be moved back and rewritten to a new process.
+
+    This is very expensive if you have ``--maxtasksperchild`` set to a low
+    value (e.g. less than 10), so if you need that you should also
+    enable ``-Ofair`` to turn off the prefetching behavior.
 
 
 Django supported out of the box
 -------------------------------
@@ -672,6 +684,14 @@ In Other News
     will only send worker related events and silently drop any attempts
     to send events related to any other group.
 
 
+- New :setting:`BROKER_FAILOVER_STRATEGY` setting.
+
+    This setting can be used to change the transport failover strategy;
+    it can either be a callable returning an iterable, or the name of a
+    Kombu built-in failover strategy.  Default is "round-robin".
+
+    Contributed by Matt Wise.
+
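
A sketch of the callable form (the strategy below is illustrative; a failover
strategy is expected to yield the broker URLs to try in turn):

.. code-block:: python

    import random

    def shuffled_failover(servers):
        servers = list(servers)
        while True:
            random.shuffle(servers)
            for server in servers:
                yield server

    BROKER_FAILOVER_STRATEGY = shuffled_failover
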
 - ``Result.revoke`` will no longer wait for replies.
 
 
     You can add the ``reply=True`` argument if you really want to wait for
@@ -681,6 +701,17 @@ In Other News
 
 
     Contributed by Steeve Morin.
 
 
+- Worker: Now emits a warning if the :setting:`CELERYD_POOL` setting is set
+  to enable the eventlet/gevent pools.
+
+    The `-P` option should always be used to select the eventlet/gevent pool
+    to ensure that the patches are applied as early as possible.
+
+    If you start the worker in a wrapper (like Django's manage.py)
+    then you must apply the patches manually, e.g. by creating an alternative
+    wrapper that monkey patches at the start of the program before importing
+    any other modules.
+
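
A sketch of such a wrapper (module names are illustrative):

.. code-block:: python

    # apply the patches before anything else is imported
    import eventlet
    eventlet.monkey_patch()

    # only now import code that may create sockets, locks, etc.
    from django.core.management import execute_from_command_line

    if __name__ == '__main__':
        import sys
        execute_from_command_line(sys.argv)
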
 - There's now an 'inspect clock' command which will collect the current
   logical clock value from workers.
 
 
@@ -776,6 +807,19 @@ In Other News
 
 
     Contributed by Ryan Petrello.
 
 
+- SQLAlchemy Result Backend: Now calls ``engine.dispose`` after fork
+  (Issue #1564).
+
+    If you create your own sqlalchemy engines then you must also
+    make sure that these are closed after fork in the worker:
+
+    .. code-block:: python
+
+        from multiprocessing.util import register_after_fork
+
+        engine = create_engine(...)
+        register_after_fork(engine, engine.dispose)
+
 - A stress test suite for the Celery worker has been written.
 
 
     This is located in the ``funtests/stress`` directory in the git
@@ -872,10 +916,10 @@ In Other News
 
 
         >>> t.apply_async(headers={'sender': 'George Costanza'})
 
 
-- New :signal:`task_before_publish`` signal dispatched before a task message
+- New :signal:`before_task_publish` signal dispatched before a task message
   is sent and can be used to modify the final message fields (Issue #1281).
 
 
-- New :signal:`task_after_publish` signal replaces the old :signal:`task_sent`
+- New :signal:`after_task_publish` signal replaces the old :signal:`task_sent`
   signal.
 
 
     The :signal:`task_sent` signal is now deprecated and should not be used.
@@ -1062,6 +1106,16 @@ Fixes
 - AMQP Backend: join did not convert exceptions when using the json
   serializer.
 
 
+- Non-abstract task classes are now shared between apps (Issue #1150).
+
+    Note that non-abstract task classes should not be used in the
+    new API.  You should only create custom task classes when you
+    use them as a base class in the ``@task`` decorator.
+
+    This fix ensures backwards compatibility with older Celery versions
+    so that non-abstract task classes work even if a module is imported
+    multiple times and the app is instantiated multiple times.
+
 - Worker: Workaround for Unicode errors in logs (Issue #427)
 
 
 - Task methods: ``.apply_async`` now works properly if args list is None

+ 1 - 1
funtests/stress/README.rst

@@ -159,7 +159,7 @@ Using a different result backend
 You can set the environment variable ``CSTRESS_BACKEND`` to change
 the result backend used::
 
 
-    $ CSTRESS_BACKEND='amqp://' celery -A stress worker #...
+    $ CSTRESS_BACKEND='amqp://' celery -A stress worker #…
     $ CSTRESS_BACKEND='amqp://' python -m stress
 
 
 Using a custom queue