Browse Source

Merge branch 'master' into kombuRPC

Ask Solem 12 years ago
Commit
8cdfea46c1
100 changed files with 2787 additions and 1864 deletions
  1. CONTRIBUTORS.txt (+2 -1)
  2. Changelog (+124 -13)
  3. README.rst (+3 -2)
  4. celery/__init__.py (+15 -0)
  5. celery/_state.py (+6 -4)
  6. celery/app/__init__.py (+0 -5)
  7. celery/app/base.py (+12 -1)
  8. celery/app/builtins.py (+15 -7)
  9. celery/app/defaults.py (+8 -6)
  10. celery/app/log.py (+13 -13)
  11. celery/app/task.py (+37 -26)
  12. celery/apps/beat.py (+11 -8)
  13. celery/apps/worker.py (+29 -20)
  14. celery/beat.py (+1 -1)
  15. celery/bin/__init__.py (+3 -0)
  16. celery/bin/base.py (+33 -23)
  17. celery/bin/celery.py (+216 -30)
  18. celery/bin/celerybeat.py (+10 -7)
  19. celery/bin/celeryd.py (+1 -1)
  20. celery/bin/celeryd_multi.py (+1 -1)
  21. celery/bin/celeryev.py (+11 -8)
  22. celery/bootsteps.py (+381 -0)
  23. celery/concurrency/base.py (+1 -1)
  24. celery/concurrency/processes.py (+1 -1)
  25. celery/contrib/batches.py (+28 -21)
  26. celery/contrib/methods.py (+30 -4)
  27. celery/datastructures.py (+136 -10)
  28. celery/events/__init__.py (+3 -1)
  29. celery/exceptions.py (+5 -1)
  30. celery/fixups/__init__.py (+0 -0)
  31. celery/fixups/django.py (+185 -0)
  32. celery/loaders/base.py (+75 -2)
  33. celery/loaders/default.py (+3 -33)
  34. celery/platforms.py (+16 -9)
  35. celery/result.py (+6 -4)
  36. celery/states.py (+1 -0)
  37. celery/task/base.py (+6 -5)
  38. celery/task/trace.py (+14 -7)
  39. celery/tests/app/test_app.py (+1 -1)
  40. celery/tests/app/test_loaders.py (+4 -12)
  41. celery/tests/bin/test_celery.py (+3 -2)
  42. celery/tests/bin/test_celerybeat.py (+1 -0)
  43. celery/tests/bin/test_celeryd.py (+4 -9)
  44. celery/tests/contrib/test_abortable.py (+21 -9)
  45. celery/tests/tasks/test_sets.py (+1 -0)
  46. celery/tests/tasks/test_tasks.py (+64 -37)
  47. celery/tests/utilities/test_datastructures.py (+2 -4)
  48. celery/tests/utilities/test_platforms.py (+4 -0)
  49. celery/tests/utilities/test_timer2.py (+7 -3)
  50. celery/tests/worker/test_bootsteps.py (+49 -98)
  51. celery/tests/worker/test_control.py (+8 -8)
  52. celery/tests/worker/test_request.py (+5 -4)
  53. celery/tests/worker/test_worker.py (+246 -181)
  54. celery/utils/imports.py (+4 -12)
  55. celery/utils/threads.py (+11 -0)
  56. celery/utils/timer2.py (+10 -7)
  57. celery/utils/timeutils.py (+6 -10)
  58. celery/worker/__init__.py (+68 -113)
  59. celery/worker/autoreload.py (+6 -4)
  60. celery/worker/autoscale.py (+8 -6)
  61. celery/worker/bootsteps.py (+0 -210)
  62. celery/worker/components.py (+87 -59)
  63. celery/worker/consumer.py (+270 -667)
  64. celery/worker/control.py (+8 -6)
  65. celery/worker/heartbeat.py (+1 -2)
  66. celery/worker/hub.py (+2 -2)
  67. celery/worker/job.py (+51 -27)
  68. celery/worker/loops.py (+156 -0)
  69. celery/worker/mediator.py (+6 -4)
  70. celery/worker/pidbox.py (+103 -0)
  71. celery/worker/state.py (+8 -0)
  72. docs/.templates/page.html (+1 -1)
  73. docs/Makefile (+1 -1)
  74. docs/configuration.rst (+27 -4)
  75. docs/contributing.rst (+4 -6)
  76. docs/django/first-steps-with-django.rst (+1 -1)
  77. docs/faq.rst (+1 -20)
  78. docs/getting-started/first-steps-with-celery.rst (+1 -1)
  79. docs/getting-started/introduction.rst (+1 -3)
  80. docs/getting-started/next-steps.rst (+14 -6)
  81. docs/glossary.rst (+7 -0)
  82. docs/history/changelog-1.0.rst (+5 -12)
  83. docs/history/changelog-2.0.rst (+4 -7)
  84. docs/history/changelog-2.1.rst (+3 -3)
  85. docs/history/changelog-2.2.rst (+3 -3)
  86. docs/history/changelog-2.3.rst (+2 -2)
  87. docs/history/changelog-2.4.rst (+1 -1)
  88. docs/history/changelog-2.5.rst (+2 -2)
  89. docs/images/consumer_graph.png (BIN)
  90. docs/images/graph.png (BIN)
  91. docs/images/result_graph.png (BIN)
  92. docs/images/worker_graph.png (BIN)
  93. docs/images/worker_graph_full.png (BIN)
  94. docs/includes/introduction.txt (+1 -1)
  95. docs/includes/resources.txt (+2 -1)
  96. docs/internals/guide.rst (+2 -1)
  97. docs/internals/reference/index.rst (+0 -1)
  98. docs/reference/celery.app.amqp.rst (+2 -10)
  99. docs/reference/celery.bootsteps.rst (+3 -3)
  100. docs/reference/celery.rst (+32 -1)

+ 2 - 1
CONTRIBUTORS.txt

@@ -118,4 +118,5 @@ Paul McMillan, 2012/07/26
 Mitar, 2012/07/28
 Adam DePue, 2012/08/22
 Thomas Meson, 2012/08/28
-Daniel Lundin, 2012/08/30 
+Daniel Lundin, 2012/08/30
+Alexey Zatelepin, 2012/09/18

+ 124 - 13
Changelog

@@ -7,7 +7,7 @@
 .. contents::
     :local:
 
-If you're looking for versions prior to 3.x you should see :ref:`history`.
+If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 .. _version-3.1.0:
 
@@ -20,15 +20,95 @@ If you're looking for versions prior to 3.x you should see :ref:`history`.
 - `Task.apply_async` now supports timeout and soft_timeout arguments (Issue #802)
 - `App.control.Inspect.conf` can be used for inspecting worker configuration
 
+.. _version-3.0.11:
+
+3.0.11
+======
+:release-date: 2012-09-26 04:00 P.M UTC
+
+- [security:low] generic-init.d scripts changed permissions of /var/log & /var/run
+
+    In the daemonization tutorial the recommended directories were as follows:
+
+    .. code-block:: bash
+
+        CELERYD_LOG_FILE="/var/log/celery/%n.log"
+        CELERYD_PID_FILE="/var/run/celery/%n.pid"
+
+    But in the scripts themselves the default files were ``/var/log/celery%n.log``
+    and ``/var/run/celery%n.pid``, so if the user did not change the location
+    by configuration, the directories ``/var/log`` and ``/var/run`` would be
+    created - and worse have their permissions and owners changed.
+
+    This change means that:
+
+        - Default pid file is ``/var/run/celery/%n.pid``
+        - Default log file is ``/var/log/celery/%n.log``
+
+        - The directories are only created and have their permissions
+          changed if *no custom locations are set*.
+
+    Users can force paths to be created by calling the ``create-paths``
+    subcommand:
+
+    .. code-block:: bash
+
+        $ sudo /etc/init.d/celeryd create-paths
+
+    .. admonition:: Upgrading Celery will not update init scripts
+
+        To update the init scripts you have to re-download
+        the files from source control and update them manually.
+        You can find the init scripts for version 3.0.x at:
+
+            http://github.com/celery/celery/tree/3.0/extra/generic-init.d
+
+- Now depends on billiard 2.7.3.17
+
+- Fixes request stack protection when app is initialized more than
+  once (Issue #1003).
+
+- ETA tasks now properly work when the system timezone is not the same
+  as the configured timezone (Issue #1004).
+
+- Terminating a task now works if the task has been sent to the
+  pool but not yet acknowledged by a pool process (Issue #1007).
+
+    Fix contributed by Alexey Zatelepin.
+
+- Terminating a task now properly updates the state of the task to revoked,
+  and sends a ``task-revoked`` event.
+
+- Multi: No longer parses --app option (Issue #1008).
+
+- Generic worker init script now waits for workers to shut down by default.
+
+- Multi: stop_verify command renamed to stopwait.
+
+- Daemonization: Now delays creating the pidfile/logfile until after
+  changing into the working directory.
+
+- :program:`celery worker` and :program:`celery beat` commands now respect
+  the :option:`--no-color` option (Issue #999).
+
+- Fixed typos in eventlet examples (Issue #1000)
+
+    Fix contributed by Bryan Bishop.
+    Congratulations on opening bug #1000!
+
+- Tasks that raise :exc:`~celery.exceptions.Ignore` are now acknowledged.
+
+- Beat: Now shows the name of the entry in ``sending due task`` logs.
+
 .. _version-3.0.10:
 
 3.0.10
 ======
-:release-date: TBA
+:release-date: 2012-09-20 05:30 P.M BST
 
 - Now depends on kombu 2.4.7
 
-- Now depends on billiard 2.7.3.13
+- Now depends on billiard 2.7.3.14
 
     - Fixes crash at startup when using Django and pre-1.4 projects
       (setup_environ).
@@ -39,7 +119,7 @@ If you're looking for versions prior to 3.x you should see :ref:`history`.
     - Billiard now installs even if the C extension cannot be built.
 
         It's still recommended to build the C extension if you are using
-        a transport other than rabbitmq/redis (or use force_execv for some
+        a transport other than rabbitmq/redis (or use forced execv for some
         other reason).
 
     - Pool now sets a ``current_process().index`` attribute that can be used to create
@@ -48,7 +128,9 @@ If you're looking for versions prior to 3.x you should see :ref:`history`.
 - Canvas: chord/group/chain no longer modifies the state when called
 
     Previously calling a chord/group/chain would modify the ids of subtasks
-    so that::
+    so that:
+
+    .. code-block:: python
 
         >>> c = chord([add.s(2, 2), add.s(4, 4)], xsum.s())
         >>> c()
@@ -58,23 +140,37 @@ If you're looking for versions prior to 3.x you should see :ref:`history`.
     previous invocation.  This is now fixed, so that calling a subtask
     won't mutate any options.
 
-- Canvas: Chaining a chord to another task now works.
+- Canvas: Chaining a chord to another task now works (Issue #965).
 
 - Worker: Fixed a bug where the request stack could be corrupted if
   relative imports are used.
 
     Problem usually manifested itself as an exception while trying to
-    send a failed task result (NoneType does not have id attribute).
+    send a failed task result (``NoneType does not have id attribute``).
 
     Fix contributed by Sam Cooke.
 
+- Tasks can now raise :exc:`~celery.exceptions.Ignore` to skip updating states
+  or events after return.
+
+    Example:
+
+    .. code-block:: python
+
+        from celery.exceptions import Ignore
+
+        @task
+        def custom_revokes():
+            if redis.sismember('tasks.revoked', custom_revokes.request.id):
+                raise Ignore()
+
 - The worker now makes sure the request/task stacks are not modified
   by the initial ``Task.__call__``.
 
     This would previously be a problem if a custom task class defined
     ``__call__`` and also called ``super()``.
 
-- Because of many bugs the fast local optimization has been disabled,
+- Because of problems the fast local optimization has been disabled,
   and can only be enabled by setting the :envvar:`USE_FAST_LOCALS` attribute.
 
 - Worker: Now sets a default socket timeout of 5 seconds at shutdown
@@ -82,7 +178,7 @@ If you're looking for versions prior to 3.x you should see :ref:`history`.
 
 - More fixes related to late eventlet/gevent patching.
 
-- Documentation for the settings out of sync with reality:
+- Documentation for settings out of sync with reality:
 
     - :setting:`CELERY_TASK_PUBLISH_RETRY`
 
@@ -98,9 +194,7 @@ If you're looking for versions prior to 3.x you should see :ref:`history`.
 
     Fix contributed by Matt Long.
 
-- Worker: Log messages when connection established and lost have been improved
-  so that they are more useful when used with the upcoming multiple broker
-  hostlist for failover that is coming in the next Kombu version.
+- Worker: Log messages when connection established and lost have been improved.
 
 - The repr of a crontab schedule value of '0' should be '*'  (Issue #972).
 
@@ -109,7 +203,24 @@ If you're looking for versions prior to 3.x you should see :ref:`history`.
 
     Fix contributed by Alexey Zatelepin.
 
-- gevent: Now supports hard time limits using ``gevent.Timeout`.
+- gevent: Now supports hard time limits using ``gevent.Timeout``.
+
+- Documentation: Links to init scripts now point to the 3.0 branch instead
+  of the development branch (master).
+
+- Documentation: Fixed typo in signals user guide (Issue #986).
+
+    ``instance.app.queues`` -> ``instance.app.amqp.queues``.
+
+- Eventlet/gevent: The worker did not properly set the custom app
+  for new greenlets.
+
+- Eventlet/gevent: Fixed a bug where the worker could not recover
+  from connection loss (Issue #959).
+
+    Also, because of a suspected bug in gevent, the
+    :setting:`BROKER_CONNECTION_TIMEOUT` setting has been disabled
+    when using gevent.
 
 3.0.9
 =====
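
The ``gevent.Timeout`` entry in the 3.0.10 changelog above refers to gevent's
own timeout primitive. As a rough, hedged sketch of the mechanism (not the
worker's actual wiring), a call can be given a hard limit like this:

.. code-block:: python

    from gevent import Timeout

    def call_with_hard_limit(fn, seconds, *args, **kwargs):
        # Timeout raises in the current greenlet once `seconds` elapse,
        # which is what makes a hard limit enforceable under gevent.
        with Timeout(seconds):
            return fn(*args, **kwargs)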

+ 3 - 2
README.rst

@@ -37,7 +37,7 @@ by using webhooks.
 .. _RCelery: http://leapfrogdevelopment.github.com/rcelery/
 .. _`PHP client`: https://github.com/gjedeer/celery-php
 .. _`using webhooks`:
-    http://celery.github.com/celery/userguide/remote-tasks.html
+    http://docs.celeryproject.org/en/latest/userguide/remote-tasks.html
 
 What do I need?
 ===============
@@ -344,7 +344,8 @@ to send regular patches.
 Be sure to also read the `Contributing to Celery`_ section in the
 documentation.
 
-.. _`Contributing to Celery`: http://celery.github.com/celery/contributing.html
+.. _`Contributing to Celery`:
+    http://docs.celeryproject.org/en/master/contributing.html
 
 .. _license:
 

+ 15 - 0
celery/__init__.py

@@ -24,6 +24,21 @@ VERSION_BANNER = '{0} ({1})'.format(__version__, SERIES)
 
 # -eof meta-
 
+import os
+if os.environ.get('C_IMPDEBUG'):
+    import sys
+    import __builtin__
+    real_import = __builtin__.__import__
+
+    def debug_import(name, locals=None, globals=None, fromlist=None,
+            level=-1):
+        glob = globals or getattr(sys, 'emarfteg_'[::-1])(1).f_globals
+        importer_name = glob and glob.get('__name__') or 'unknown'
+        print('-- {0} imports {1}'.format(importer_name, name))
+        return real_import(name, locals, globals, fromlist, level)
+    __builtin__.__import__ = debug_import
+
+
 # This is for static analyzers
 Celery = object
 bugreport = lambda *a, **kw: None
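
The import hook added above is opt-in via the ``C_IMPDEBUG`` environment
variable and only takes effect if the variable is set before ``celery`` is
first imported. A minimal way to try it:

.. code-block:: python

    import os
    os.environ['C_IMPDEBUG'] = '1'  # must be set before the first `import celery`

    import celery
    # every import performed from here on is printed as
    # `-- <importer> imports <name>`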

+ 6 - 4
celery/_state.py

@@ -36,14 +36,16 @@ _task_stack = LocalStack()
 
 def set_default_app(app):
     global default_app
-    if default_app is None:
-        default_app = app
+    default_app = app
 
 
 def get_current_app():
     if default_app is None:
-        # creates the default app, but we want to defer that.
-        import celery.app  # noqa
+        #: creates the global fallback app instance.
+        from celery.app import Celery, default_loader
+        set_default_app(Celery('default', loader=default_loader,
+                                          set_as_current=False,
+                                          accept_magic_kwargs=True))
     return _tls.current_app or default_app
 
 

+ 0 - 5
celery/app/__init__.py

@@ -36,11 +36,6 @@ app_or_default = None
 #: The 'default' loader is the default loader used by old applications.
 default_loader = os.environ.get('CELERY_LOADER') or 'default'
 
-#: Global fallback app instance.
-set_default_app(Celery('default', loader=default_loader,
-                                  set_as_current=False,
-                                  accept_magic_kwargs=True))
-
 
 def bugreport():
     return current_app().bugreport()
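
With the eager instantiation removed here, the global fallback app is now only
created the first time it is actually needed (see the ``celery/_state.py`` hunk
above). A minimal sketch of what that looks like from calling code:

.. code-block:: python

    from celery._state import get_current_app

    # No current app has been set at this point, so the call below lazily
    # creates the fallback Celery('default') instance and returns it.
    app = get_current_app()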

+ 12 - 1
celery/app/base.py

@@ -11,7 +11,7 @@ from __future__ import absolute_import
 import threading
 import warnings
 
-from collections import deque
+from collections import defaultdict, deque
 from contextlib import contextmanager
 from copy import deepcopy
 from functools import wraps
@@ -35,6 +35,10 @@ from .defaults import DEFAULTS, find_deprecated_settings
 from .registry import TaskRegistry
 from .utils import AppPickler, Settings, bugreport, _unpickle_app
 
+DEFAULT_FIXUPS = (
+    'celery.fixups.django:DjangoFixup',
+)
+
 
 def _unpickle_appattr(reverse_name, args):
     """Given an attribute name and a list of args, gets
@@ -72,6 +76,8 @@ class Celery(object):
         self.set_as_current = set_as_current
         self.registry_cls = symbol_by_name(self.registry_cls)
         self.accept_magic_kwargs = accept_magic_kwargs
+        self.user_options = defaultdict(set)
+        self.steps = defaultdict(set)
 
         self.configured = False
         self._pending_defaults = deque()
@@ -90,6 +96,8 @@ class Celery(object):
             self._preconf['BROKER_URL'] = broker
         if include:
             self._preconf['CELERY_IMPORTS'] = include
+        self.fixups = list(filter(None, (symbol_by_name(f).include(self)
+                                        for f in DEFAULT_FIXUPS)))
 
         if self.set_as_current:
             self.set_current()
@@ -193,6 +201,9 @@ class Celery(object):
     def config_from_cmdline(self, argv, namespace='celery'):
         self.conf.update(self.loader.cmdline_config_parser(argv, namespace))
 
+    def autodiscover_tasks(self, packages, related_name='tasks'):
+        self.loader.autodiscover_tasks(packages, related_name)
+
     def send_task(self, name, args=None, kwargs=None, countdown=None,
             eta=None, task_id=None, producer=None, connection=None,
             result_cls=None, expires=None, queues=None, publisher=None,
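
The new ``Celery.autodiscover_tasks`` method simply delegates to the loader.
A hedged usage sketch (the package names below are placeholders):

.. code-block:: python

    from celery import Celery

    app = Celery('proj', broker='amqp://')

    # Looks for a `tasks` module (the default `related_name`)
    # inside each of the listed packages.
    app.autodiscover_tasks(['proj.orders', 'proj.billing'])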

+ 15 - 7
celery/app/builtins.py

@@ -229,7 +229,7 @@ def add_chain_task(app):
             return tasks, results
 
         def apply_async(self, args=(), kwargs={}, group_id=None, chord=None,
-                task_id=None, **options):
+                task_id=None, link=None, link_error=None, **options):
             if self.app.conf.CELERY_ALWAYS_EAGER:
                 return self.apply(args, kwargs, **options)
             options.pop('publisher', None)
@@ -242,6 +242,13 @@ def add_chain_task(app):
             if task_id:
                 tasks[-1].set(task_id=task_id)
                 result = tasks[-1].type.AsyncResult(task_id)
+            # make sure we can do a link() and link_error() on a chain object.
+            if link:
+                tasks[-1].set(link=link)
+            # and if any task in the chain fails, call the errbacks
+            if link_error:
+                for task in tasks:
+                    task.set(link_error=link_error)
             tasks[0].apply_async()
             return result
 
@@ -307,16 +314,17 @@ def add_chord_task(app):
         def apply_async(self, args=(), kwargs={}, task_id=None, **options):
             if self.app.conf.CELERY_ALWAYS_EAGER:
                 return self.apply(args, kwargs, **options)
-            group_id = options.pop('group_id', None)
-            chord = options.pop('chord', None)
             header = kwargs.pop('header')
             body = kwargs.pop('body')
             header, body = (list(maybe_subtask(header)),
                             maybe_subtask(body))
-            if group_id:
-                body.set(group_id=group_id)
-            if chord:
-                body.set(chord=chord)
+            # forward certain options to body
+            for opt_name in ['group_id', 'chord']:
+                opt_value = options.pop(opt_name, None)
+                if opt_value:
+                    body.set(**{opt_name: opt_value})
+            map(body.link, options.pop('link', []))
+            map(body.link_error, options.pop('link_error', []))
             callback_id = body.options.setdefault('task_id', task_id or uuid())
             parent = super(Chord, self).apply_async((header, body, args),
                                                      kwargs, **options)
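
The chain changes above attach ``link`` to the last task and ``link_error`` to
every task in the chain, so errbacks fire no matter where the chain fails.
A hedged sketch under those assumptions (the tasks are made up):

.. code-block:: python

    from celery import Celery, chain

    app = Celery('example', broker='amqp://')

    @app.task
    def add(x, y):
        return x + y

    @app.task
    def log_failure(task_id):
        print('chain task {0} failed'.format(task_id))

    # link_error is now copied to every task in the chain, so a failure
    # at any point applies the errback; link is set on the final task only.
    result = chain(add.s(2, 2), add.s(4)).apply_async(
        link_error=log_failure.s(),
    )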

+ 8 - 6
celery/app/defaults.py

@@ -150,22 +150,24 @@ NAMESPACES = {
         'WORKER_DIRECT': Option(False, type='bool'),
     },
     'CELERYD': {
-        'AUTOSCALER': Option('celery.worker.autoscale.Autoscaler'),
-        'AUTORELOADER': Option('celery.worker.autoreload.Autoreloader'),
-        'BOOT_STEPS': Option((), type='tuple'),
+        'AGENT': Option(None, type='string'),
+        'AUTOSCALER': Option('celery.worker.autoscale:Autoscaler'),
+        'AUTORELOADER': Option('celery.worker.autoreload:Autoreloader'),
+        'BOOTSTEPS': Option((), type='tuple'),
+        'CONSUMER_BOOTSTEPS': Option((), type='tuple'),
         'CONCURRENCY': Option(0, type='int'),
         'TIMER': Option(type='string'),
         'TIMER_PRECISION': Option(1.0, type='float'),
         'FORCE_EXECV': Option(True, type='bool'),
         'HIJACK_ROOT_LOGGER': Option(True, type='bool'),
-        'CONSUMER': Option(type='string'),
+        'CONSUMER': Option('celery.worker.consumer:Consumer', type='string'),
         'LOG_FORMAT': Option(DEFAULT_PROCESS_LOG_FMT),
         'LOG_COLOR': Option(type='bool'),
         'LOG_LEVEL': Option('WARN', deprecate_by='2.4', remove_by='4.0',
                             alt='--loglevel argument'),
         'LOG_FILE': Option(deprecate_by='2.4', remove_by='4.0',
                             alt='--logfile argument'),
-        'MEDIATOR': Option('celery.worker.mediator.Mediator'),
+        'MEDIATOR': Option('celery.worker.mediator:Mediator'),
         'MAX_TASKS_PER_CHILD': Option(type='int'),
         'POOL': Option(DEFAULT_POOL),
         'POOL_PUTLOCKS': Option(True, type='bool'),
@@ -179,7 +181,7 @@ NAMESPACES = {
     },
     'CELERYBEAT': {
         'SCHEDULE': Option({}, type='dict'),
-        'SCHEDULER': Option('celery.beat.PersistentScheduler'),
+        'SCHEDULER': Option('celery.beat:PersistentScheduler'),
         'SCHEDULE_FILENAME': Option('celerybeat-schedule'),
         'MAX_LOOP_INTERVAL': Option(0, type='float'),
         'LOG_LEVEL': Option('INFO', deprecate_by='2.4', remove_by='4.0',
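
Note that ``BOOT_STEPS`` is renamed to ``BOOTSTEPS`` and a separate
``CONSUMER_BOOTSTEPS`` option is introduced, both tuples, while the class
options now use the ``module:attribute`` path form. A hedged configuration
sketch (the step paths are placeholders):

.. code-block:: python

    # celeryconfig.py
    CELERYD_BOOTSTEPS = ('myproj.steps:InfoStep', )
    CELERYD_CONSUMER_BOOTSTEPS = ('myproj.steps:ConsumerStep', )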

+ 13 - 13
celery/app/log.py

@@ -38,7 +38,7 @@ class TaskFormatter(ColorFormatter):
 
     def format(self, record):
         task = get_current_task()
-        if task:
+        if task and task.request:
             record.__dict__.update(task_id=task.request.id,
                                    task_name=task.name)
         else:
@@ -61,8 +61,10 @@ class Logging(object):
         self.colorize = self.app.conf.CELERYD_LOG_COLOR
 
     def setup(self, loglevel=None, logfile=None, redirect_stdouts=False,
-            redirect_level='WARNING'):
-        handled = self.setup_logging_subsystem(loglevel, logfile)
+            redirect_level='WARNING', colorize=None):
+        handled = self.setup_logging_subsystem(
+            loglevel, logfile, colorize=colorize,
+        )
         if not handled:
             logger = get_logger('celery.redirected')
             if redirect_stdouts:
@@ -81,8 +83,7 @@ class Logging(object):
         Logging._setup = True
         loglevel = mlevel(loglevel or self.loglevel)
         format = format or self.format
-        if colorize is None:
-            colorize = self.supports_color(logfile)
+        colorize = self.supports_color(colorize, logfile)
         reset_multiprocessing_logger()
         if not is_py3k:
             ensure_process_aware_logger()
@@ -126,8 +127,7 @@ class Logging(object):
         """
         loglevel = mlevel(loglevel or self.loglevel)
         format = format or self.task_format
-        if colorize is None:
-            colorize = self.supports_color(logfile)
+        colorize = self.supports_color(colorize, logfile)
 
         logger = self.setup_handlers(get_logger('celery.task'),
                                      logfile, format, colorize,
@@ -156,24 +156,24 @@ class Logging(object):
             sys.stderr = proxy
         return proxy
 
-    def supports_color(self, logfile=None):
+    def supports_color(self, colorize=None, logfile=None):
+        colorize = self.colorize if colorize is None else colorize
         if self.app.IS_WINDOWS:
             # Windows does not support ANSI color codes.
             return False
-        if self.colorize is None:
+        if colorize or colorize is None:
             # Only use color if there is no active log file
             # and stderr is an actual terminal.
             return logfile is None and isatty(sys.stderr)
-        return self.colorize
+        return colorize
 
-    def colored(self, logfile=None):
-        return colored(enabled=self.supports_color(logfile))
+    def colored(self, logfile=None, enabled=None):
+        return colored(enabled=self.supports_color(enabled, logfile))
 
     def setup_handlers(self, logger, logfile, format, colorize,
             formatter=ColorFormatter, **kwargs):
         if self._is_configured(logger):
             return logger
-
         handler = self._detect_handler(logfile)
         handler.setFormatter(formatter(format, use_color=colorize))
         logger.addHandler(handler)
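
``Logging.setup`` and ``supports_color`` now accept an explicit ``colorize``
argument so callers (e.g. the ``--no-color`` handling in the worker and beat
apps) can override tty auto-detection. Roughly:

.. code-block:: python

    from celery import Celery

    app = Celery('example')

    # Force colors off regardless of whether stderr is a terminal;
    # this mirrors what the worker does when --no-color is passed.
    app.log.setup(loglevel='INFO', logfile=None,
                  redirect_stdouts=False, colorize=False)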

+ 37 - 26
celery/app/task.py

@@ -236,6 +236,10 @@ class Task(object):
     #: Default task expiry time.
     expires = None
 
+    #: Some may expect a request to exist even if the task has not been
+    #: called.  This should probably be deprecated.
+    _default_request = None
+
     __bound__ = False
 
     from_config = (
@@ -274,7 +278,6 @@ class Task(object):
 
             from celery.utils.threads import LocalStack
             self.request_stack = LocalStack()
-            self.request_stack.push(Context())
 
         # PeriodicTask uses this to add itself to the PeriodicTask schedule.
         self.on_bound(app)
@@ -318,6 +321,9 @@ class Task(object):
         _task_stack.push(self)
         self.push_request()
         try:
+            # add self if this is a bound task
+            if self.__self__ is not None:
+                return self.run(self.__self__, *args, **kwargs)
             return self.run(*args, **kwargs)
         finally:
             self.pop_request()
@@ -394,28 +400,20 @@ class Task(object):
         :keyword retry_policy:  Override the retry policy used.  See the
                                 :setting:`CELERY_TASK_PUBLISH_RETRY` setting.
 
-        :keyword routing_key: The routing key used to route the task to a
-                              worker server.  Defaults to the
-                              :attr:`routing_key` attribute.
-
-        :keyword exchange: The named exchange to send the task to.
-                           Defaults to the :attr:`exchange` attribute.
+        :keyword routing_key: Custom routing key used to route the task to a
+                              worker server. If in combination with a
+                              ``queue`` argument only used to specify custom
+                              routing keys to topic exchanges.
 
-        :keyword exchange_type: The exchange type to initialize the exchange
-                                if not already declared.  Defaults to the
-                                :attr:`exchange_type` attribute.
+        :keyword queue: The queue to route the task to.  This must be a key
+                        present in :setting:`CELERY_QUEUES`, or
+                        :setting:`CELERY_CREATE_MISSING_QUEUES` must be
+                        enabled.  See :ref:`guide-routing` for more
+                        information.
 
-        :keyword immediate: Request immediate delivery.  Will raise an
-                            exception if the task cannot be routed to a worker
-                            immediately.  (Do not confuse this parameter with
-                            the `countdown` and `eta` settings, as they are
-                            unrelated).  Defaults to the :attr:`immediate`
-                            attribute.
-
-        :keyword mandatory: Mandatory routing. Raises an exception if
-                            there's no running workers able to take on this
-                            task.  Defaults to the :attr:`mandatory`
-                            attribute.
+        :keyword exchange: Named custom exchange to send the task to.
+                           Usually not used in combination with the ``queue``
+                           argument.
 
         :keyword priority: The task priority, a number between 0 and 9.
                            Defaults to the :attr:`priority` attribute.
@@ -445,6 +443,9 @@ class Task(object):
             attribute.
         :keyword publisher: Deprecated alias to ``producer``.
 
+        Also supports all keyword arguments supported by
+        :meth:`kombu.messaging.Producer.publish`.
+
         .. note::
             If the :setting:`CELERY_ALWAYS_EAGER` setting is set, it will
             be replaced by a local :func:`apply` call instead.
@@ -595,7 +596,10 @@ class Task(object):
         from celery.task.trace import eager_trace_task
 
         app = self._get_app()
-        args = args or []
+        args = args or ()
+        # add 'self' if this is a bound method.
+        if self.__self__ is not None:
+            args = (self.__self__, ) + tuple(args)
         kwargs = kwargs or {}
         task_id = options.get('task_id') or uuid()
         retries = options.get('retries', 0)
@@ -785,10 +789,17 @@ class Task(object):
         """`repr(task)`"""
         return '<@task: {0.name}>'.format(self)
 
-    @property
-    def request(self):
-        """Current request object."""
-        return self.request_stack.top
+    def _get_request(self):
+        """Get current request object."""
+        req = self.request_stack.top
+        if req is None:
+            # task was not called, but some may still expect a request
+            # to be there, perhaps that should be deprecated.
+            if self._default_request is None:
+                self._default_request = Context()
+            return self._default_request
+        return req
+    request = property(_get_request)
 
     @property
     def __name__(self):
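
Because a ``Context`` is no longer pushed onto the request stack at bind time,
``Task.request`` now falls back to a lazily-created default request when the
task has not been called. A small illustration:

.. code-block:: python

    from celery import Celery

    app = Celery('example')

    @app.task
    def add(x, y):
        return x + y

    # Outside of task execution the request stack is empty, so this returns
    # the lazily-created default Context instead of None.
    print(add.request)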

+ 11 - 8
celery/apps/beat.py

@@ -47,14 +47,17 @@ class Beat(configurated):
     redirect_stdouts_level = from_config()
 
     def __init__(self, max_interval=None, app=None,
-            socket_timeout=30, pidfile=None, **kwargs):
+            socket_timeout=30, pidfile=None, no_color=None, **kwargs):
         """Starts the celerybeat task scheduler."""
         self.app = app = app_or_default(app or self.app)
         self.setup_defaults(kwargs, namespace='celerybeat')
 
         self.max_interval = max_interval
         self.socket_timeout = socket_timeout
-        self.colored = app.log.colored(self.logfile)
+        self.no_color = no_color
+        self.colored = app.log.colored(self.logfile,
+            enabled=not no_color if no_color is not None else no_color,
+        )
         self.pidfile = pidfile
 
         if not isinstance(self.loglevel, int):
@@ -67,12 +70,12 @@ class Beat(configurated):
         self.set_process_title()
         self.start_scheduler()
 
-    def setup_logging(self):
-        handled = self.app.log.setup_logging_subsystem(loglevel=self.loglevel,
-                                                       logfile=self.logfile)
-        if self.redirect_stdouts and not handled:
-            self.app.log.redirect_stdouts_to_logger(logger,
-                    loglevel=self.redirect_stdouts_level)
+    def setup_logging(self, colorize=None):
+        if colorize is None and self.no_color is not None:
+            colorize = not self.no_color
+        self.app.log.setup(self.loglevel, self.logfile,
+                           self.redirect_stdouts, self.redirect_stdouts_level,
+                           colorize=colorize)
 
     def start_scheduler(self):
         c = self.colored

+ 29 - 20
celery/apps/worker.py

@@ -22,6 +22,7 @@ from functools import partial
 from billiard import current_process
 
 from celery import VERSION_BANNER, platforms, signals
+from celery.app.abstract import from_config
 from celery.exceptions import SystemTerminate
 from celery.loaders.app import AppLoader
 from celery.task import trace
@@ -80,35 +81,39 @@ EXTRA_INFO_FMT = """
 
 
 class Worker(WorkController):
+    redirect_stdouts = from_config()
+    redirect_stdouts_level = from_config()
 
-    def on_before_init(self, purge=False, redirect_stdouts=None,
-            redirect_stdouts_level=None, **kwargs):
+    def on_before_init(self, purge=False, no_color=None, **kwargs):
         # apply task execution optimizations
         trace.setup_worker_optimizations(self.app)
 
         # this signal can be used to set up configuration for
         # workers by name.
         conf = self.app.conf
-        signals.celeryd_init.send(sender=self.hostname, instance=self,
-                                  conf=conf)
+        signals.celeryd_init.send(
+            sender=self.hostname, instance=self, conf=conf,
+        )
         self.purge = purge
+        self.no_color = no_color
         self._isatty = isatty(sys.stdout)
-        self.colored = self.app.log.colored(self.logfile)
-        if redirect_stdouts is None:
-            redirect_stdouts = conf.CELERY_REDIRECT_STDOUTS,
-        if redirect_stdouts_level is None:
-            redirect_stdouts_level = conf.CELERY_REDIRECT_STDOUTS_LEVEL
-        self.redirect_stdouts = redirect_stdouts
-        self.redirect_stdouts_level = redirect_stdouts_level
+        self.colored = self.app.log.colored(self.logfile,
+            enabled=not no_color if no_color is not None else no_color
+        )
 
-    def on_start(self):
+    def on_init_namespace(self):
+        self.setup_logging()
         # apply task execution optimizations
         trace.setup_worker_optimizations(self.app)
 
+    def on_start(self):
+        WorkController.on_start(self)
+
         # this signal can be used to e.g. change queues after
         # the -Q option has been applied.
-        signals.celeryd_after_setup.send(sender=self.hostname, instance=self,
-                                         conf=self.app.conf)
+        signals.celeryd_after_setup.send(
+            sender=self.hostname, instance=self, conf=self.app.conf,
+        )
 
         if getattr(os, 'getuid', None) and os.getuid() == 0:
             warnings.warn(RuntimeWarning(
@@ -119,19 +124,23 @@ class Worker(WorkController):
 
         # Dump configuration to screen so we have some basic information
         # for when users sends bug reports.
-        print(str(self.colored.cyan(' \n', self.startup_info())) +
-              str(self.colored.reset(self.extra_info() or '')))
+        sys.__stdout__.write(
+            str(self.colored.cyan(' \n', self.startup_info())) +
+            str(self.colored.reset(self.extra_info() or '')) + '\n'
+        )
         self.set_process_status('-active-')
-        self.redirect_stdouts_to_logger()
         self.install_platform_tweaks(self)
 
     def on_consumer_ready(self, consumer):
         signals.worker_ready.send(sender=consumer)
         print('celery@{0.hostname} ready.'.format(self))
 
-    def redirect_stdouts_to_logger(self):
+    def setup_logging(self, colorize=None):
+        if colorize is None and self.no_color is not None:
+            colorize = not self.no_color
         self.app.log.setup(self.loglevel, self.logfile,
-                           self.redirect_stdouts, self.redirect_stdouts_level)
+                           self.redirect_stdouts, self.redirect_stdouts_level,
+                           colorize=colorize)
 
     def purge_messages(self):
         count = self.app.control.purge()
@@ -141,7 +150,7 @@ class Worker(WorkController):
     def tasklist(self, include_builtins=True):
         tasks = self.app.tasks
         if not include_builtins:
-            tasks = [t for t in tasks if not t.startswith('celery.')]
+            tasks = (t for t in tasks if not t.startswith('celery.'))
         return '\n'.join('  . {0}'.format(task) for task in sorted(tasks))
 
     def extra_info(self):

+ 1 - 1
celery/beat.py

@@ -172,7 +172,7 @@ class Scheduler(object):
         is_due, next_time_to_run = entry.is_due()
 
         if is_due:
-            info('Scheduler: Sending due task %s', entry.task)
+            info('Scheduler: Sending due task %s (%s)', entry.name, entry.task)
             try:
                 result = self.apply_async(entry, publisher=publisher)
             except Exception as exc:

+ 3 - 0
celery/bin/__init__.py

@@ -0,0 +1,3 @@
+from __future__ import absolute_import
+
+from .base import Option  # noqa

+ 33 - 23
celery/bin/base.py

@@ -20,7 +20,7 @@ Preload Options
 
 .. cmdoption:: --config
 
-    name of the configuration module (default: `celeryconfig`)
+    Name of the configuration module
 
 .. _daemon-options:
 
@@ -80,7 +80,7 @@ for warning in (CDeprecationWarning, CPendingDeprecationWarning):
     warnings.simplefilter('once', warning, 0)
 
 ARGV_DISABLED = """
-Unrecognized command line arguments: {0}
+Unrecognized command-line arguments: {0}
 
 Try --help?
 """
@@ -103,7 +103,7 @@ class HelpFormatter(IndentedHelpFormatter):
 
 
 class Command(object):
-    """Base class for command line applications.
+    """Base class for command-line applications.
 
     :keyword app: The current app.
     :keyword get_app: Callable returning the current app if no app provided.
@@ -127,12 +127,16 @@ class Command(object):
     # module Rst documentation to parse help from (if any)
     doc = None
 
+    # Some programs (multi) do not want to load the app specified
+    # (Issue #1008).
+    respects_app_option = True
+
     #: List of options to parse before parsing other options.
     preload_options = (
         Option('-A', '--app', default=None),
         Option('-b', '--broker', default=None),
         Option('--loader', default=None),
-        Option('--config', default='celeryconfig', dest='config_module'),
+        Option('--config', default=None),
     )
 
     #: Enable if the application should support config from the cmdline.
@@ -159,9 +163,9 @@ class Command(object):
         raise NotImplementedError('subclass responsibility')
 
     def execute_from_commandline(self, argv=None):
-        """Execute application from command line.
+        """Execute application from command-line.
 
-        :keyword argv: The list of command line arguments.
+        :keyword argv: The list of command-line arguments.
                        Defaults to ``sys.argv``.
 
         """
@@ -195,7 +199,7 @@ class Command(object):
         return '%%prog [options] {0.args}'.format(self)
 
     def get_options(self):
-        """Get supported command line options."""
+        """Get supported command-line options."""
         return self.option_list
 
     def expanduser(self, value):
@@ -204,7 +208,7 @@ class Command(object):
         return value
 
     def handle_argv(self, prog_name, argv):
-        """Parses command line arguments from ``argv`` and dispatches
+        """Parses command-line arguments from ``argv`` and dispatches
         to :meth:`run`.
 
         :param prog_name: The program name (``argv[0]``).
@@ -286,15 +290,18 @@ class Command(object):
         broker = preload_options.get('broker', None)
         if broker:
             os.environ['CELERY_BROKER_URL'] = broker
-        config_module = preload_options.get('config_module')
-        if config_module:
-            os.environ['CELERY_CONFIG_MODULE'] = config_module
-        if app:
-            self.app = self.find_app(app)
-        elif self.app is None:
-            self.app = self.get_app(loader=loader)
-        if self.enable_config_from_cmdline:
-            argv = self.process_cmdline_config(argv)
+        config = preload_options.get('config')
+        if config:
+            os.environ['CELERY_CONFIG_MODULE'] = config
+        if self.respects_app_option:
+            if app and self.respects_app_option:
+                self.app = self.find_app(app)
+            elif self.app is None:
+                self.app = self.get_app(loader=loader)
+            if self.enable_config_from_cmdline:
+                argv = self.process_cmdline_config(argv)
+        else:
+            self.app = celery.Celery()
         return argv
 
     def find_app(self, app):
@@ -304,10 +311,13 @@ class Command(object):
             # last part was not an attribute, but a module
             sym = import_from_cwd(app)
         if isinstance(sym, ModuleType):
-            if getattr(sym, '__path__', None):
-                return self.find_app('{0}.celery:'.format(
-                            app.replace(':', '')))
-            return sym.celery
+            try:
+                return sym.celery
+            except AttributeError:
+                if getattr(sym, '__path__', None):
+                    return self.find_app('{0}.celery:'.format(
+                                app.replace(':', '')))
+                raise
         return sym
 
     def symbol_by_name(self, name):
@@ -377,8 +387,8 @@ class Command(object):
             return match.sub(lambda m: keys[m.expand(expand)], s)
 
     def _get_default_app(self, *args, **kwargs):
-        from celery.app import default_app
-        return default_app._get_current_object()  # omit proxy
+        from celery._state import get_current_app
+        return get_current_app()  # omit proxy
 
 
 def daemon_options(default_pidfile=None, default_logfile=None):
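
``find_app`` now tries the module's ``celery`` attribute before recursing into
``<package>.celery``, so both common ``-A`` layouts resolve. For example, a
single-module layout (names are illustrative):

.. code-block:: python

    # proj.py -- `celery -A proj worker` now picks up this attribute directly,
    # instead of first looking for a `proj.celery` submodule.
    from celery import Celery

    celery = Celery('proj', broker='amqp://')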

+ 216 - 30
celery/bin/celery.py

@@ -6,17 +6,19 @@ The :program:`celery` umbrella command.
 .. program:: celery
 
 """
-from __future__ import absolute_import, print_function
+from __future__ import absolute_import, print_function, unicode_literals
 
 import anyjson
 import heapq
+import os
 import sys
 import warnings
 
 from importlib import import_module
-from itertools import imap
+from operator import itemgetter
 from pprint import pformat
 
+from celery.datastructures import DependencyGraph, GraphFormatter
 from celery.platforms import EX_OK, EX_FAILURE, EX_UNAVAILABLE, EX_USAGE
 from celery.utils import term
 from celery.utils import text
@@ -26,6 +28,7 @@ from celery.utils.timeutils import maybe_iso8601
 
 from celery.bin.base import Command as BaseCommand, Option
 
+
 HELP = """
 ---- -- - - ---- Commands- -------------- --- ------------
 
@@ -40,13 +43,18 @@ Migrating task {state.count}/{state.strtotal}: \
 {body[task]}[{body[id]}]\
 """
 
-commands = {}
+DEBUG = os.environ.get('C_DEBUG', False)
 
+commands = {}
 command_classes = [
     ('Main', ['worker', 'events', 'beat', 'shell', 'multi', 'amqp'], 'green'),
     ('Remote Control', ['status', 'inspect', 'control'], 'blue'),
     ('Utils', ['purge', 'list', 'migrate', 'call', 'result', 'report'], None),
 ]
+if DEBUG:
+    command_classes.append(
+        ('Debug', ['graph'], 'red'),
+    )
 
 
 @memoize()
@@ -94,6 +102,28 @@ def load_extension_commands(namespace='celery.commands'):
             command(cls, name=ep.name)
 
 
+def determine_exit_status(ret):
+    if isinstance(ret, int):
+        return ret
+    return EX_OK if ret else EX_FAILURE
+
+
+def main(argv=None):
+    # Fix for setuptools generated scripts, so that it will
+    # work with multiprocessing fork emulation.
+    # (see multiprocessing.forking.get_preparation_data())
+    try:
+        if __name__ != '__main__':  # pragma: no cover
+            sys.modules['__main__'] = sys.modules[__name__]
+        cmd = CeleryCommand()
+        cmd.maybe_patch_concurrency()
+        from billiard import freeze_support
+        freeze_support()
+        cmd.execute_from_commandline(argv)
+    except KeyboardInterrupt:
+        pass
+
+
 class Command(BaseCommand):
     help = ''
     args = ''
@@ -103,7 +133,7 @@ class Command(BaseCommand):
 
     option_list = (
         Option('--quiet', '-q', action='store_true'),
-        Option('--no-color', '-C', action='store_true'),
+        Option('--no-color', '-C', action='store_true', default=None),
     )
 
     def __init__(self, app=None, no_color=False, stdout=sys.stdout,
@@ -120,14 +150,15 @@ class Command(BaseCommand):
         try:
             ret = self.run(*args, **kwargs)
         except Error as exc:
-            self.error(self.colored.red('Error: {0!r}'.format(exc)))
+            self.error(self.colored.red('Error: {0}'.format(exc)))
             return exc.status
 
         return ret if ret is not None else EX_OK
 
-    def show_help(self, command):
+    def exit_help(self, command):
+        # this never exits due to OptionParser.parse_options
         self.run_from_argv(self.prog_name, [command, '--help'])
-        return EX_USAGE
+        sys.exit(EX_USAGE)
 
     def error(self, s):
         self.out(s, fh=self.stderr)
@@ -148,7 +179,7 @@ class Command(BaseCommand):
         return self(*args, **options)
 
     def usage(self, command):
-        return '%%prog {0} [options] {self.args}'.format(command, self=self)
+        return '%prog {0} [options] {self.args}'.format(command, self=self)
 
     def prettify_list(self, n):
         c = self.colored
@@ -223,6 +254,7 @@ class Delegate(Command):
 @command
 class multi(Command):
     """Start multiple worker instances."""
+    respects_app_option = False
 
     def get_options(self):
         return ()
@@ -502,7 +534,7 @@ class _RemoteControl(Command):
         ])
 
     def usage(self, command):
-        return '%%prog {0} [options] {1} <command> [arg1 .. argN]'.format(
+        return '%prog {0} [options] {1} <command> [arg1 .. argN]'.format(
                 command, self.args)
 
     def call(self, *args, **kwargs):
@@ -523,7 +555,7 @@ class _RemoteControl(Command):
         destination = kwargs.get('destination')
         timeout = kwargs.get('timeout') or self.choices[method][0]
         if destination and isinstance(destination, basestring):
-            destination = list(imap(str.strip, destination.split(',')))
+            destination = [dest.strip() for dest in destination.split(',')]
 
         try:
             handler = getattr(self, method)
@@ -700,7 +732,7 @@ class migrate(Command):
 
     def run(self, *args, **kwargs):
         if len(args) != 2:
-            return self.show_help('migrate')
+            return self.exit_help('migrate')
         from kombu import Connection
         from celery.contrib.migrate import migrate_tasks
 
@@ -828,7 +860,7 @@ class help(Command):
     """Show help screen and exit."""
 
     def usage(self, command):
-        return '%%prog <command> [options] {0.args}'.format(self)
+        return '%prog <command> [options] {0.args}'.format(self)
 
     def run(self, *args, **kwargs):
         self.parser.print_help()
@@ -887,6 +919,9 @@ class CeleryCommand(BaseCommand):
         return self.execute(command, argv)
 
     def execute_from_commandline(self, argv=None):
+        argv = sys.argv if argv is None else argv
+        if 'multi' in argv[1:3]:  # Issue 1008
+            self.respects_app_option = False
         try:
             sys.exit(determine_exit_status(
                 super(CeleryCommand, self).execute_from_commandline(argv)))
@@ -929,26 +964,177 @@ class CeleryCommand(BaseCommand):
         load_extension_commands()
 
 
-def determine_exit_status(ret):
-    if isinstance(ret, int):
-        return ret
-    return EX_OK if ret else EX_FAILURE
+@command
+class graph(Command):
+    args = """<TYPE> [arguments]
+            .....  bootsteps [worker] [consumer]
+            .....  workers   [enumerate]
+    """
 
+    def run(self, what=None, *args, **kwargs):
+        map = {'bootsteps': self.bootsteps, 'workers': self.workers}
+        not what and self.exit_help('graph')
+        if what not in map:
+            raise Error('no graph {0} in {1}'.format(what, '|'.join(map)))
+        return map[what](*args, **kwargs)
+
+    def bootsteps(self, *args, **kwargs):
+        worker = self.app.WorkController()
+        include = set(arg.lower() for arg in args or ['worker', 'consumer'])
+        if 'worker' in include:
+            graph = worker.namespace.graph
+            if 'consumer' in include:
+                worker.namespace.connect_with(worker.consumer.namespace)
+        else:
+            graph = worker.consumer.namespace.graph
+        graph.to_dot(self.stdout)
 
-def main(argv=None):
-    # Fix for setuptools generated scripts, so that it will
-    # work with multiprocessing fork emulation.
-    # (see multiprocessing.forking.get_preparation_data())
-    try:
-        if __name__ != '__main__':  # pragma: no cover
-            sys.modules['__main__'] = sys.modules[__name__]
-        cmd = CeleryCommand()
-        cmd.maybe_patch_concurrency()
-        from billiard import freeze_support
-        freeze_support()
-        cmd.execute_from_commandline(argv)
-    except KeyboardInterrupt:
-        pass
+    def workers(self, *args, **kwargs):
+
+        def simplearg(arg):
+            return maybe_list(itemgetter(0, 2)(arg.partition(':')))
+
+        def maybe_list(l, sep=','):
+            return (l[0], l[1].split(sep) if sep in l[1] else l[1])
+
+        args = dict(map(simplearg, args))
+        generic = 'generic' in args
+
+        def generic_label(node):
+            return '{0} ({1}://)'.format(type(node).__name__,
+                                         node._label.split('://')[0])
+
+        class Node(object):
+            force_label = None
+            scheme = {}
+
+            def __init__(self, label, pos=None):
+                self._label = label
+                self.pos = pos
+
+            def label(self):
+                return self._label
+
+            def __str__(self):
+                return self.label()
+
+        class Thread(Node):
+            scheme = {'fillcolor': 'lightcyan4', 'fontcolor': 'yellow',
+                      'shape': 'oval', 'fontsize': 10, 'width': 0.3,
+                      'color': 'black'}
+
+            def __init__(self, label, **kwargs):
+                self._label = 'thr-{0}'.format(next(tids))
+                self.real_label = label
+                self.pos = 0
+
+        class Formatter(GraphFormatter):
+
+            def label(self, obj):
+                return obj and obj.label()
+
+            def node(self, obj):
+                scheme = dict(obj.scheme) if obj.pos else obj.scheme
+                if isinstance(obj, Thread):
+                    scheme['label'] = obj.real_label
+                return self.draw_node(
+                    obj, dict(self.node_scheme, **scheme),
+                )
+
+            def terminal_node(self, obj):
+                return self.draw_node(
+                    obj, dict(self.term_scheme, **obj.scheme),
+                )
+
+            def edge(self, a, b, **attrs):
+                if isinstance(a, Thread):
+                    attrs.update(arrowhead='none', arrowtail='tee')
+                return self.draw_edge(a, b, self.edge_scheme, attrs)
+
+        def subscript(n):
+            S = {'0': '₀', '1': '₁', '2': '₂', '3': '₃', '4': '₄',
+                 '5': '₅', '6': '₆', '7': '₇', '8': '₈', '9': '₉'}
+            return ''.join([S[i] for i in str(n)])
+
+        class Worker(Node):
+            pass
+
+        class Backend(Node):
+            scheme = {'shape': 'folder', 'width': 2,
+                      'height': 1, 'color': 'black',
+                      'fillcolor': 'peachpuff3', 'color': 'peachpuff4'}
+
+            def label(self):
+                return generic_label(self) if generic else self._label
+
+        class Broker(Node):
+            scheme = {'shape': 'circle', 'fillcolor': 'cadetblue3',
+                      'color': 'cadetblue4', 'height': 1}
+
+            def label(self):
+                return generic_label(self) if generic else self._label
+
+        from itertools import count
+        tids = count(1)
+        Wmax = int(args.get('wmax', 4) or 0)
+        Tmax = int(args.get('tmax', 3) or 0)
+
+        def maybe_abbr(l, name, max=Wmax):
+            size = len(l)
+            abbr = max and size > max
+            if 'enumerate' in args:
+                l = ['{0}{1}'.format(name, subscript(i + 1))
+                        for i, obj in enumerate(l)]
+            if abbr:
+                l = l[0:max - 1] + [l[size - 1]]
+                l[max - 2] = '{0}⎨…{1}⎬'.format(
+                    name[0], subscript(size - (max - 1)))
+            return l
+
+        try:
+            workers = args['nodes']
+            threads = args.get('threads') or []
+        except KeyError:
+            replies = self.app.control.inspect().stats()
+            workers, threads = [], []
+            for worker, reply in replies.iteritems():
+                workers.append(worker)
+                threads.append(reply['pool']['max-concurrency'])
+
+        wlen = len(workers)
+        backend = args.get('backend', self.app.conf.CELERY_RESULT_BACKEND)
+        threads_for = {}
+        workers = maybe_abbr(workers, 'Worker')
+        if Wmax and wlen > Wmax:
+            threads = threads[0:3] + [threads[-1]]
+        for i, threads in enumerate(threads):
+            threads_for[workers[i]] = maybe_abbr(
+                range(int(threads)), 'P', Tmax,
+            )
+
+        broker = Broker(args.get('broker', self.app.connection().as_uri()))
+        backend = Backend(backend) if backend else None
+        graph = DependencyGraph(formatter=Formatter())
+        graph.add_arc(broker)
+        if backend:
+            graph.add_arc(backend)
+        curworker = [0]
+        for i, worker in enumerate(workers):
+            worker = Worker(worker, pos=i)
+            graph.add_arc(worker)
+            graph.add_edge(worker, broker)
+            if backend:
+                graph.add_edge(worker, backend)
+            threads = threads_for.get(worker._label)
+            if threads:
+                for thread in threads:
+                    thread = Thread(thread)
+                    graph.add_arc(thread)
+                    graph.add_edge(thread, worker)
+
+            curworker[0] += 1
+
+        graph.to_dot(self.stdout)
 
 
 if __name__ == '__main__':          # pragma: no cover

+ 10 - 7
celery/bin/celerybeat.py

@@ -75,13 +75,16 @@ class BeatCommand(Command):
     def get_options(self):
         c = self.app.conf
 
-        return (
-            Option('--detach', action='store_true'),
-            Option('-s', '--schedule', default=c.CELERYBEAT_SCHEDULE_FILENAME),
-            Option('--max-interval', type='float'),
-            Option('-S', '--scheduler', dest='scheduler_cls'),
-            Option('-l', '--loglevel', default=c.CELERYBEAT_LOG_LEVEL),
-        ) + daemon_options(default_pidfile='celerybeat.pid')
+        return ((
+                Option('--detach', action='store_true'),
+                Option('-s', '--schedule',
+                    default=c.CELERYBEAT_SCHEDULE_FILENAME),
+                Option('--max-interval', type='float'),
+                Option('-S', '--scheduler', dest='scheduler_cls'),
+                Option('-l', '--loglevel', default=c.CELERYBEAT_LOG_LEVEL))
+            + daemon_options(default_pidfile='celerybeat.pid')
+            + tuple(self.app.user_options['beat'])
+        )
 
 
 def main():

+ 1 - 1
celery/bin/celeryd.py

@@ -197,7 +197,7 @@ class WorkerCommand(Command):
             Option('--autoreload', action='store_true'),
             Option('--no-execv', action='store_true', default=False),
             Option('-D', '--detach', action='store_true'),
-        ) + daemon_options()
+        ) + daemon_options() + tuple(self.app.user_options['worker'])
 
 
 def main():
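
Together with the new ``app.user_options`` registry and the ``Option``
re-export in ``celery/bin/__init__.py``, applications can now add their own
command-line options to the worker (celerybeat and celeryev do the same via
the ``'beat'`` and ``'events'`` keys). A hedged sketch, with a made-up option:

.. code-block:: python

    from celery import Celery
    from celery.bin import Option

    app = Celery('example')

    # Options registered under 'worker' are appended to the
    # `celery worker` / celeryd option list.
    app.user_options['worker'].add(
        Option('--my-flag', action='store_true', default=False),
    )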

+ 1 - 1
celery/bin/celeryd_multi.py

@@ -155,7 +155,7 @@ class MultiTool(object):
                          'show': self.show,
                          'stop': self.stop,
                          'stopwait': self.stopwait,
-                         'stop_verify': self.stopwait,
+                         'stop_verify': self.stopwait,  # compat alias
                          'restart': self.restart,
                          'kill': self.kill,
                          'names': self.names,

+ 11 - 8
celery/bin/celeryev.py

@@ -104,14 +104,17 @@ class EvCommand(Command):
         return set_process_title(prog, info=info)
 
     def get_options(self):
-        return (
-            Option('-d', '--dump', action='store_true'),
-            Option('-c', '--camera'),
-            Option('--detach', action='store_true'),
-            Option('-F', '--frequency', '--freq', type='float', default=1.0),
-            Option('-r', '--maxrate'),
-            Option('-l', '--loglevel', default='INFO'),
-        ) + daemon_options(default_pidfile='celeryev.pid')
+        return ((
+                Option('-d', '--dump', action='store_true'),
+                Option('-c', '--camera'),
+                Option('--detach', action='store_true'),
+                Option('-F', '--frequency', '--freq',
+                    type='float', default=1.0),
+                Option('-r', '--maxrate'),
+                Option('-l', '--loglevel', default='INFO'))
+            + daemon_options(default_pidfile='celeryev.pid')
+            + tuple(self.app.user_options['events'])
+        )
 
 
 def main():

+ 381 - 0
celery/bootsteps.py

@@ -0,0 +1,381 @@
+# -*- coding: utf-8 -*-
+"""
+    celery.bootsteps
+    ~~~~~~~~~~~~~~~~
+
+    The bootsteps!
+
+"""
+from __future__ import absolute_import, unicode_literals
+
+from collections import deque
+from importlib import import_module
+from threading import Event
+
+from kombu.common import ignore_errors
+from kombu.utils import symbol_by_name
+
+from .datastructures import DependencyGraph, GraphFormatter
+from .utils.imports import instantiate, qualname
+from .utils.log import get_logger
+from .utils.threads import default_socket_timeout
+
+try:
+    from greenlet import GreenletExit
+    IGNORE_ERRORS = (GreenletExit, )
+except ImportError:  # pragma: no cover
+    IGNORE_ERRORS = ()
+
+#: Default socket timeout at shutdown.
+SHUTDOWN_SOCKET_TIMEOUT = 5.0
+
+#: States
+RUN = 0x1
+CLOSE = 0x2
+TERMINATE = 0x3
+
+logger = get_logger(__name__)
+debug = logger.debug
+
+
+def _pre(ns, fmt):
+    return '| {0}: {1}'.format(ns.alias, fmt)
+
+
+def _label(s):
+    return s.name.rsplit('.', 1)[-1]
+
+
+class StepFormatter(GraphFormatter):
+
+    namespace_prefix = '⧉'
+    conditional_prefix = '∘'
+    namespace_scheme = {
+        'shape': 'parallelogram',
+        'color': 'slategray4',
+        'fillcolor': 'slategray3',
+    }
+
+    def label(self, step):
+        return step and '{0}{1}'.format(self._get_prefix(step),
+            (step.label or _label(step)).encode('utf-8', 'ignore'),
+        )
+
+    def _get_prefix(self, step):
+        if step.last:
+            return self.namespace_prefix
+        if step.conditional:
+            return self.conditional_prefix
+        return ''
+
+    def node(self, obj, **attrs):
+        scheme = self.namespace_scheme if obj.last else self.node_scheme
+        return self.draw_node(obj, scheme, attrs)
+
+    def edge(self, a, b, **attrs):
+        if a.last:
+            attrs.update(arrowhead='none', color='darkseagreen3')
+        return self.draw_edge(a, b, self.edge_scheme, attrs)
+
+
+class Namespace(object):
+    """A namespace containing bootsteps.
+
+    :keyword steps: List of steps.
+    :keyword name: Set explicit name for this namespace.
+    :keyword app: Set the Celery app for this namespace.
+    :keyword on_start: Optional callback applied after namespace start.
+    :keyword on_close: Optional callback applied before namespace close.
+    :keyword on_stopped: Optional callback applied after namespace stopped.
+
+    """
+    GraphFormatter = StepFormatter
+
+    name = None
+    state = None
+    started = 0
+    default_steps = set()
+
+    def __init__(self, steps=None, name=None, app=None, on_start=None,
+            on_close=None, on_stopped=None):
+        self.app = app
+        self.name = name or self.name or qualname(type(self))
+        self.types = set(steps or []) | set(self.default_steps)
+        self.on_start = on_start
+        self.on_close = on_close
+        self.on_stopped = on_stopped
+        self.shutdown_complete = Event()
+        self.steps = {}
+
+    def start(self, parent):
+        self.state = RUN
+        if self.on_start:
+            self.on_start()
+        for i, step in enumerate(filter(None, parent.steps)):
+            self._debug('Starting %s', step.alias)
+            self.started = i + 1
+            step.start(parent)
+            debug('^-- substep ok')
+
+    def close(self, parent):
+        if self.on_close:
+            self.on_close()
+        for step in parent.steps:
+            close = getattr(step, 'close', None)
+            if close:
+                close(parent)
+
+    def restart(self, parent, description='Restarting', attr='stop'):
+        with default_socket_timeout(SHUTDOWN_SOCKET_TIMEOUT):  # Issue 975
+            for step in reversed(parent.steps):
+                if step:
+                    self._debug('%s %s...', description, step.alias)
+                    fun = getattr(step, attr, None)
+                    if fun:
+                        fun(parent)
+
+    def stop(self, parent, close=True, terminate=False):
+        what = 'Terminating' if terminate else 'Stopping'
+        if self.state in (CLOSE, TERMINATE):
+            return
+
+        self.close(parent)
+
+        if self.state != RUN or self.started != len(parent.steps):
+            # Not fully started, can safely exit.
+            self.state = TERMINATE
+            self.shutdown_complete.set()
+            return
+        self.state = CLOSE
+        self.restart(parent, what, 'terminate' if terminate else 'stop')
+
+        if self.on_stopped:
+            self.on_stopped()
+        self.state = TERMINATE
+        self.shutdown_complete.set()
+
+    def join(self, timeout=None):
+        try:
+            # Will only get here when running green (eventlet/gevent);
+            # makes sure all greenthreads have exited.
+            self.shutdown_complete.wait(timeout=timeout)
+        except IGNORE_ERRORS:
+            pass
+
+    def apply(self, parent, **kwargs):
+        """Apply the steps in this namespace to an object.
+
+        This will apply the ``__init__`` and ``include`` methods
+        of each step with the object as argument.
+
+        For :class:`StartStopStep` the services created
+        will also be added to the object's ``steps`` attribute.
+
+        """
+        self._debug('Preparing bootsteps.')
+        order = self.order = []
+        steps = self.steps = self.claim_steps()
+
+        self._debug('Building graph...')
+        for S in self._finalize_steps(steps):
+            step = S(parent, **kwargs)
+            steps[step.name] = step
+            order.append(step)
+        self._debug('New boot order: {%s}',
+                    ', '.join(s.alias for s in self.order))
+        for step in order:
+            step.include(parent)
+        return self
+
+    def connect_with(self, other):
+        self.graph.adjacent.update(other.graph.adjacent)
+        self.graph.add_edge(type(other.order[0]), type(self.order[-1]))
+
+    def import_module(self, module):
+        return import_module(module)
+
+    def __getitem__(self, name):
+        return self.steps[name]
+
+    def _find_last(self):
+        for C in self.steps.itervalues():
+            if C.last:
+                return C
+
+    def _firstpass(self, steps):
+        stream = deque(step.requires for step in steps.itervalues())
+        while stream:
+            for node in stream.popleft():
+                node = symbol_by_name(node)
+                if node.name not in self.steps:
+                    steps[node.name] = node
+                stream.append(node.requires)
+
+    def _finalize_steps(self, steps):
+        last = self._find_last()
+        self._firstpass(steps)
+        it = ((C, C.requires) for C in steps.itervalues())
+        G = self.graph = DependencyGraph(it,
+            formatter=self.GraphFormatter(root=last),
+        )
+        if last:
+            for obj in G:
+                if obj != last:
+                    G.add_edge(last, obj)
+        try:
+            return G.topsort()
+        except KeyError as exc:
+            raise KeyError('unknown bootstep: %s' % exc)
+
+    def claim_steps(self):
+        return dict(self.load_step(step) for step in self._all_steps())
+
+    def _all_steps(self):
+        return self.types | self.app.steps[self.name.lower()]
+
+    def load_step(self, step):
+        step = symbol_by_name(step)
+        return step.name, step
+
+    def _debug(self, msg, *args):
+        return debug(_pre(self, msg), *args)
+
+    @property
+    def alias(self):
+        return _label(self)
+
+
+class StepType(type):
+    """Metaclass for steps."""
+
+    def __new__(cls, name, bases, attrs):
+        module = attrs.get('__module__')
+        qname = '{0}.{1}'.format(module, name) if module else name
+        attrs.update(
+            __qualname__=qname,
+            name=attrs.get('name') or qname,
+            requires=attrs.get('requires', ()),
+        )
+        return super(StepType, cls).__new__(cls, name, bases, attrs)
+
+    def __str__(self):
+        return self.name
+
+    def __repr__(self):
+        return 'step:{0.name}{{{0.requires!r}}}'.format(self)
+
+
+class Step(object):
+    """A Bootstep.
+
+    The :meth:`__init__` method is called when the step
+    is bound to a parent object, and can therefore be used
+    to initialize attributes on the parent object at
+    instantiation time.
+
+    """
+    __metaclass__ = StepType
+
+    #: Optional step name, will use qualname if not specified.
+    name = None
+
+    #: Optional short name used for graph outputs and in logs.
+    label = None
+
+    #: Set this to true if the step is enabled based on some condition.
+    conditional = False
+
+    #: List of other steps that must be started before this step.
+    #: Note that all dependencies must be in the same namespace.
+    requires = ()
+
+    #: This flag is reserved for the workers Consumer,
+    #: since it is required to always be started last.
+    #: There can only be one object marked as last
+    #: in every namespace.
+    last = False
+
+    #: This provides the default for :meth:`include_if`.
+    enabled = True
+
+    def __init__(self, parent, **kwargs):
+        pass
+
+    def include_if(self, parent):
+        """An optional predicate that decides whether this
+        step should be created."""
+        return self.enabled
+
+    def instantiate(self, name, *args, **kwargs):
+        return instantiate(name, *args, **kwargs)
+
+    def _should_include(self, parent):
+        if self.include_if(parent):
+            return True, self.create(parent)
+        return False, None
+
+    def include(self, parent):
+        return self._should_include(parent)[0]
+
+    def create(self, parent):
+        """Create the step."""
+        pass
+
+    def __repr__(self):
+        return '<step: {0.alias}>'.format(self)
+
+    @property
+    def alias(self):
+        return self.label or _label(self)
+
+
+class StartStopStep(Step):
+
+    #: Optional object created by the :meth:`create` method.
+    #: This is used by :class:`StartStopStep` to keep the
+    #: original service object.
+    obj = None
+
+    def start(self, parent):
+        if self.obj:
+            return self.obj.start()
+
+    def stop(self, parent):
+        if self.obj:
+            return self.obj.stop()
+
+    def close(self, parent):
+        pass
+
+    def terminate(self, parent):
+        self.stop(parent)
+
+    def include(self, parent):
+        inc, ret = self._should_include(parent)
+        if inc:
+            self.obj = ret
+            parent.steps.append(self)
+        return inc
+
+
+class ConsumerStep(StartStopStep):
+    requires = ('Connection', )
+    consumers = None
+
+    def get_consumers(self, channel):
+        raise NotImplementedError('missing get_consumers')
+
+    def start(self, c):
+        self.consumers = self.get_consumers(c.connection)
+        for consumer in self.consumers or []:
+            consumer.consume()
+
+    def stop(self, c):
+        for consumer in self.consumers or []:
+            ignore_errors(c.connection, consumer.cancel)
+
+    def shutdown(self, c):
+        self.stop(c)
+        for consumer in self.consumers or []:
+            if consumer.channel:
+                ignore_errors(c.connection, consumer.channel.close)
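
The new :class:`StartStopStep` above drives any object exposing ``start``/``stop``. A minimal sketch of plugging a custom step into the worker, assuming the worker namespace consults ``app.steps['worker']`` as ``Namespace._all_steps()`` suggests; the ``Monitor`` service is made up:

.. code-block:: python

    from celery import Celery, bootsteps


    class Monitor(object):
        """Toy service following the start/stop protocol."""

        def start(self):
            print('monitor started')

        def stop(self):
            print('monitor stopped')


    class MonitorStep(bootsteps.StartStopStep):

        def create(self, worker):
            # The returned object is kept as self.obj and the step is
            # appended to worker.steps, so StartStopStep.start()/stop()
            # delegate to Monitor.start()/Monitor.stop().
            return Monitor()


    app = Celery('myapp')
    app.steps['worker'].add(MonitorStep)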

+ 1 - 1
celery/concurrency/base.py

@@ -17,7 +17,7 @@ from kombu.utils.encoding import safe_repr
 from celery.utils import timer2
 from celery.utils.log import get_logger
 
-logger = get_logger('celery.concurrency')
+logger = get_logger('celery.pool')
 
 
 def apply_target(target, args=(), kwargs={}, callback=None,

+ 1 - 1
celery/concurrency/processes.py

@@ -142,4 +142,4 @@ class TaskPool(BasePool):
 
     @property
     def timers(self):
-        return {self.maintain_pool: 30.0}
+        return {self.maintain_pool: 5.0}

+ 28 - 21
celery/contrib/batches.py

@@ -12,7 +12,7 @@ A click counter that flushes the buffer every 100 messages, and every
 
 .. code-block:: python
 
-    from celery.task import task
+    from celery import task
     from celery.contrib.batches import Batches
 
     # Flush after 100 messages, or 10 seconds.
@@ -43,9 +43,8 @@ from itertools import count
 from Queue import Empty, Queue
 
 from celery.task import Task
-from celery.utils import timer2
 from celery.utils.log import get_logger
-from celery.worker import state
+from celery.worker.job import Request
 
 
 logger = get_logger(__name__)
@@ -136,30 +135,39 @@ class Batches(Task):
         self._count = count(1).next
         self._tref = None
         self._pool = None
-        self._logging = None
 
     def run(self, requests):
         raise NotImplementedError('must implement run(requests)')
 
-    def flush(self, requests):
-        return self.apply_buffer(requests, ([SimpleRequest.from_request(r)
-                                                for r in requests], ))
+    def Strategy(self, task, app, consumer):
+        self._pool = consumer.pool
+        hostname = consumer.hostname
+        eventer = consumer.event_dispatcher
+        Req = Request
+        connection_errors = consumer.connection_errors
+        timer = consumer.timer
+        put_buffer = self._buffer.put
+        flush_buffer = self._do_flush
+
+        def task_message_handler(message, body, ack):
+            request = Req(body, on_ack=ack, app=app, hostname=hostname,
+                          events=eventer, task=task,
+                          connection_errors=connection_errors,
+                          delivery_info=message.delivery_info)
+            put_buffer(request)
 
-    def execute(self, request, pool, loglevel, logfile):
-        if not self._pool:         # just take pool from first task.
-            self._pool = pool
-        if not self._logging:
-            self._logging = loglevel, logfile
+            if self._tref is None:     # first request starts flush timer.
+                self._tref = timer.apply_interval(self.flush_interval * 1000.0,
+                                                  flush_buffer)
 
-        state.task_ready(request)  # immediately remove from worker state.
-        self._buffer.put(request)
+            if not self._count() % self.flush_every:
+                flush_buffer()
 
-        if self._tref is None:     # first request starts flush timer.
-            self._tref = timer2.apply_interval(self.flush_interval * 1000,
-                                               self._do_flush)
+        return task_message_handler
 
-        if not self._count() % self.flush_every:
-            self._do_flush()
+    def flush(self, requests):
+        return self.apply_buffer(requests, ([SimpleRequest.from_request(r)
+                                                for r in requests], ))
 
     def _do_flush(self):
         logger.debug('Batches: Wake-up to flush buffer...')
@@ -185,8 +193,7 @@ class Batches(Task):
         def on_return(result):
             [req.acknowledge() for req in acks_late[True]]
 
-        loglevel, logfile = self._logging
         return self._pool.apply_async(apply_batches_task,
-                    (self, args, loglevel, logfile),
+                    (self, args, 0, None),
                     accept_callback=on_accepted,
                     callback=acks_late[True] and on_return or None)
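
For reference, the kind of task the updated module docstring describes, following its own click-counter example (sketch only):

.. code-block:: python

    from collections import Counter

    from celery import task
    from celery.contrib.batches import Batches

    # Flush after 100 messages, or 10 seconds.
    @task(base=Batches, flush_every=100, flush_interval=10)
    def count_click(requests):
        count = Counter(request.kwargs['url'] for request in requests)
        for url, n in count.items():
            print('>>> Clicks: {0} -> {1}'.format(url, n))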

+ 30 - 4
celery/contrib/methods.py

@@ -30,6 +30,33 @@ or with any task decorator:
         def add(self, x, y):
             return x + y
 
+.. note::
+
+    The task must use the new Task base class (:class:`celery.Task`),
+    not the old base class that uses classmethods (``celery.task.Task``,
+    ``celery.task.base.Task``).
+
+    This means that you have to use the task decorator from a Celery app
+    instance, and not the old-API:
+
+    .. code-block:: python
+
+
+        from celery import task       # BAD
+        from celery.task import task  # ALSO BAD
+
+        # GOOD:
+        celery = Celery(...)
+
+        @celery.task(filter=task_method)
+        def foo(self): pass
+
+        # ALSO GOOD:
+        from celery import current_app
+
+        @current_app.task(filter=task_method)
+        def foo(self): pass
+
 Caveats
 -------
 
@@ -71,9 +98,7 @@ Caveats
 
 from __future__ import absolute_import
 
-from functools import partial
-
-from celery import task as _task
+from celery import current_app
 
 
 class task_method(object):
@@ -89,4 +114,5 @@ class task_method(object):
         return task
 
 
-task = partial(_task, filter=task_method)
+def task(*args, **kwargs):
+    return current_app.task(*args, **dict(kwargs, filter=task_method))

+ 136 - 10
celery/datastructures.py

@@ -6,7 +6,7 @@
     Custom types and data structures.
 
 """
-from __future__ import absolute_import, print_function
+from __future__ import absolute_import, print_function, unicode_literals
 
 import sys
 import time
@@ -16,10 +16,106 @@ from functools import partial
 from itertools import chain
 
 from billiard.einfo import ExceptionInfo  # noqa
+from kombu.utils.encoding import safe_str
 from kombu.utils.limits import TokenBucket  # noqa
 
 from .utils.functional import LRUCache, first, uniq  # noqa
 
+DOT_HEAD = """
+{IN}{type} {id} {{
+{INp}graph [{attrs}]
+"""
+DOT_ATTR = '{name}={value}'
+DOT_NODE = '{INp}"{0}" [{attrs}]'
+DOT_EDGE = '{INp}"{0}" {dir} "{1}" [{attrs}]'
+DOT_ATTRSEP = ', '
+DOT_DIRS = {'graph': '--', 'digraph': '->'}
+DOT_TAIL = '{IN}}}'
+
+
+class GraphFormatter(object):
+    _attr = DOT_ATTR.strip()
+    _node = DOT_NODE.strip()
+    _edge = DOT_EDGE.strip()
+    _head = DOT_HEAD.strip()
+    _tail = DOT_TAIL.strip()
+    _attrsep = DOT_ATTRSEP
+    _dirs = dict(DOT_DIRS)
+
+    scheme = {
+        'shape': 'box',
+        'arrowhead': 'vee',
+        'style': 'filled',
+        'fontname': 'Helvetica Neue',
+    }
+    edge_scheme = {
+        'color': 'darkseagreen4',
+        'arrowcolor': 'black',
+        'arrowsize': 0.7,
+    }
+    node_scheme = {'fillcolor': 'palegreen3', 'color': 'palegreen4'}
+    term_scheme = {'fillcolor': 'palegreen1', 'color': 'palegreen2'}
+    graph_scheme = {'bgcolor': 'mintcream'}
+
+    def __init__(self, root=None, type=None, id=None,
+            indent=0, inw=' ' * 4, **scheme):
+        self.id = id or 'dependencies'
+        self.root = root
+        self.type = type or 'digraph'
+        self.direction = self._dirs[self.type]
+        self.IN = inw * (indent or 0)
+        self.INp = self.IN + inw
+        self.scheme = dict(self.scheme, **scheme)
+        self.graph_scheme = dict(self.graph_scheme, root=self.label(self.root))
+
+    def attr(self, name, value):
+        value = '"{0}"'.format(value)
+        return self.FMT(self._attr, name=name, value=value)
+
+    def attrs(self, d, scheme=None):
+        d = dict(self.scheme, **dict(scheme, **d or {}) if scheme else d)
+        return self._attrsep.join(
+            safe_str(self.attr(k, v)) for k, v in d.iteritems()
+        )
+
+    def head(self, **attrs):
+        return self.FMT(self._head, id=self.id, type=self.type,
+            attrs=self.attrs(attrs, self.graph_scheme),
+        )
+
+    def tail(self):
+        return self.FMT(self._tail)
+
+    def label(self, obj):
+        return obj
+
+    def node(self, obj, **attrs):
+        return self.draw_node(obj, self.node_scheme, attrs)
+
+    def terminal_node(self, obj, **attrs):
+        return self.draw_node(obj, self.term_scheme, attrs)
+
+    def edge(self, a, b, **attrs):
+        return self.draw_edge(a, b, **attrs)
+
+    def _enc(self, s):
+        return s.encode('utf-8', 'ignore')
+
+    def FMT(self, fmt, *args, **kwargs):
+        return self._enc(fmt.format(
+            *args, **dict(kwargs, IN=self.IN, INp=self.INp)
+        ))
+
+    def draw_edge(self, a, b, scheme=None, attrs=None):
+        return self.FMT(self._edge, self.label(a), self.label(b),
+            dir=self.direction, attrs=self.attrs(attrs, self.edge_scheme),
+        )
+
+    def draw_node(self, obj, scheme=None, attrs=None):
+        return self.FMT(self._node, self.label(obj),
+            attrs=self.attrs(attrs, scheme),
+        )
+
 
 class CycleError(Exception):
     """A cycle was detected in an acyclic graph."""
@@ -40,7 +136,8 @@ class DependencyGraph(object):
 
     """
 
-    def __init__(self, it=None):
+    def __init__(self, it=None, formatter=None):
+        self.formatter = formatter or GraphFormatter()
         self.adjacent = {}
         if it is not None:
             self.update(it)
@@ -54,6 +151,15 @@ class DependencyGraph(object):
         (``A`` depends on ``B``)."""
         self[A].append(B)
 
+    def find_last(self, g):
+        for obj in g.adjacent.keys():
+            if obj.last:
+                return obj
+
+    def connect(self, graph):
+        """Add nodes from another graph."""
+        self.adjacent.update(graph.adjacent)
+
     def topsort(self):
         """Sort the graph topologically.
 
@@ -158,20 +264,32 @@ class DependencyGraph(object):
 
         return result
 
-    def to_dot(self, fh, ws=' ' * 4):
+    def to_dot(self, fh, formatter=None):
         """Convert the graph to DOT format.
 
         :param fh: A file, or a file-like object to write the graph to.
 
         """
+        seen = set()
+        draw = formatter or self.formatter
         P = partial(print, file=fh)
-        P('digraph dependencies {')
+
+        def if_not_seen(fun, obj):
+            if draw.label(obj) not in seen:
+                P(fun(obj))
+                seen.add(draw.label(obj))
+
+        P(draw.head())
         for obj, adjacent in self.iteritems():
             if not adjacent:
-                P(ws + '"{0}"'.format(obj))
+                if_not_seen(draw.terminal_node, obj)
             for req in adjacent:
-                P(ws + '"{0}" -> "{1}"'.format(obj, req))
-        P('}')
+                if_not_seen(draw.node, obj)
+                P(draw.edge(obj, req))
+        P(draw.tail())
+
+    def format(self, obj):
+        return self.formatter(obj) if self.formatter else obj
 
     def __iter__(self):
         return iter(self.adjacent)
@@ -234,9 +352,16 @@ class DictAttribute(object):
     `obj[k] -> obj.k`
 
     """
+    obj = None
 
     def __init__(self, obj):
-        self.obj = obj
+        object.__setattr__(self, 'obj', obj)
+
+    def __getattr__(self, key):
+        return getattr(self.obj, key)
+
+    def __setattr__(self, key, value):
+        return setattr(self.obj, key, value)
 
     def get(self, key, default=None):
         try:
@@ -264,14 +389,15 @@ class DictAttribute(object):
         return hasattr(self.obj, key)
 
     def _iterate_keys(self):
-        return iter(vars(self.obj))
+        return iter(dir(self.obj))
     iterkeys = _iterate_keys
 
     def __iter__(self):
         return self._iterate_keys()
 
     def _iterate_items(self):
-        return vars(self.obj).iteritems()
+        for key in self._iterate_keys():
+            yield key, getattr(self.obj, key)
     iteritems = _iterate_items
 
     if sys.version_info[0] == 3:  # pragma: no cover
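
A small sketch of the new DOT formatting path, with arbitrary node names; ``add_arc`` adds a vertex and ``add_edge(A, B)`` records that ``A`` depends on ``B``:

.. code-block:: python

    import sys

    from celery.datastructures import DependencyGraph, GraphFormatter

    g = DependencyGraph(formatter=GraphFormatter(root='A', id='example'))
    g.add_arc('A')
    g.add_arc('B')
    g.add_edge('A', 'B')   # 'A' depends on 'B'
    g.to_dot(sys.stdout)   # writes DOT text that graphviz can render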

+ 3 - 1
celery/events/__init__.py

@@ -84,6 +84,7 @@ class EventDispatcher(object):
         self.serializer = serializer or self.app.conf.CELERY_EVENT_SERIALIZER
         self.on_enabled = set()
         self.on_disabled = set()
+        self.tzoffset = [-time.timezone, -time.altzone]
 
         self.enabled = enabled
         if not connection and channel:
@@ -128,7 +129,8 @@ class EventDispatcher(object):
         if self.enabled:
             with self.mutex:
                 event = Event(type, hostname=self.hostname,
-                                    clock=self.app.clock.forward(), **fields)
+                                    clock=self.app.clock.forward(),
+                                    tzoffset=self.tzoffset, **fields)
                 try:
                     self.publisher.publish(event,
                                            routing_key=type.replace('-', '.'))

+ 5 - 1
celery/exceptions.py

@@ -9,7 +9,7 @@
 from __future__ import absolute_import
 
 from billiard.exceptions import (  # noqa
-    SoftTimeLimitExceeded, TimeLimitExceeded, WorkerLostError,
+    SoftTimeLimitExceeded, TimeLimitExceeded, WorkerLostError, Terminated,
 )
 
 UNREGISTERED_FMT = """\
@@ -25,6 +25,10 @@ class SecurityError(Exception):
     """
 
 
+class Ignore(Exception):
+    """A task can raise this to tell the worker to skip state updates."""
+
+
 class SystemTerminate(SystemExit):
     """Signals that the worker should terminate."""
 

+ 0 - 0
celery/fixups/__init__.py


+ 185 - 0
celery/fixups/django.py

@@ -0,0 +1,185 @@
+from __future__ import absolute_import
+
+import os
+import sys
+import warnings
+
+from datetime import datetime
+
+from celery import signals
+
+SETTINGS_MODULE = os.environ.get('DJANGO_SETTINGS_MODULE')
+
+
+def _maybe_close_fd(fh):
+    try:
+        os.close(fh.fileno())
+    except (AttributeError, OSError, TypeError):
+        # TypeError added for celery#962
+        pass
+
+
+class DjangoFixup(object):
+    _db_recycles = 0
+
+    @classmethod
+    def include(cls, app):
+        if SETTINGS_MODULE:
+            self = cls(app)
+            self.install()
+            return self
+
+    def __init__(self, app):
+        from django import db
+        from django.core import cache
+        from django.conf import settings
+        from django.core.mail import mail_admins
+
+        # Current time and date
+        try:
+            from django.utils.timezone import now
+        except ImportError:  # pre django-1.4
+            now = datetime.now  # noqa
+
+        # Database-related exceptions.
+        from django.db import DatabaseError
+        try:
+            import MySQLdb as mysql
+            _my_database_errors = (mysql.DatabaseError,
+                                   mysql.InterfaceError,
+                                   mysql.OperationalError)
+        except ImportError:
+            _my_database_errors = ()      # noqa
+        try:
+            import psycopg2 as pg
+            _pg_database_errors = (pg.DatabaseError,
+                                   pg.InterfaceError,
+                                   pg.OperationalError)
+        except ImportError:
+            _pg_database_errors = ()      # noqa
+        try:
+            import sqlite3
+            _lite_database_errors = (sqlite3.DatabaseError,
+                                     sqlite3.InterfaceError,
+                                     sqlite3.OperationalError)
+        except ImportError:
+            _lite_database_errors = ()    # noqa
+        try:
+            import cx_Oracle as oracle
+            _oracle_database_errors = (oracle.DatabaseError,
+                                       oracle.InterfaceError,
+                                       oracle.OperationalError)
+        except ImportError:
+            _oracle_database_errors = ()  # noqa
+
+        self.app = app
+        self.db_reuse_max = self.app.conf.get('CELERY_DB_REUSE_MAX', None)
+        self._cache = cache
+        self._settings = settings
+        self._db = db
+        self._mail_admins = mail_admins
+        self._now = now
+        self.database_errors = (
+            (DatabaseError, ) +
+            _my_database_errors +
+            _pg_database_errors +
+            _lite_database_errors +
+            _oracle_database_errors,
+        )
+
+    def install(self):
+        # Need to add project directory to path
+        sys.path.append(os.getcwd())
+        signals.beat_embedded_init.connect(self.close_database)
+        signals.worker_ready.connect(self.on_worker_ready)
+        signals.task_prerun.connect(self.on_task_prerun)
+        signals.task_postrun.connect(self.on_task_postrun)
+        signals.worker_init.connect(self.on_worker_init)
+        signals.worker_process_init.connect(self.on_worker_process_init)
+
+        self.app.loader.now = self.now
+        self.app.loader.mail_admins = self.mail_admins
+
+    def now(self, utc=False):
+        return datetime.utcnow() if utc else self._now()
+
+    def mail_admins(self, subject, body, fail_silently=False, **kwargs):
+        return self._mail_admins(subject, body, fail_silently=fail_silently)
+
+    def on_worker_init(self, **kwargs):
+        """Called when the worker starts.
+
+        Automatically discovers any ``tasks.py`` files in the applications
+        listed in ``INSTALLED_APPS``.
+
+        """
+        self.close_database()
+        self.close_cache()
+
+    def on_worker_process_init(self, **kwargs):
+        # the parent process may have established these,
+        # so need to close them.
+
+        # calling db.close() on some DB connections will cause
+        # the inherited DB conn to also get broken in the parent
+        # process so we need to remove it without triggering any
+        # network IO that close() might cause.
+        try:
+            for c in self._db.connections.all():
+                if c and c.connection:
+                    _maybe_close_fd(c.connection)
+        except AttributeError:
+            if self._db.connection and self._db.connection.connection:
+                _maybe_close_fd(self._db.connection.connection)
+
+        # use the _ version to avoid DB_REUSE preventing the conn.close() call
+        self._close_database()
+        self.close_cache()
+
+    def on_task_prerun(self, sender, **kwargs):
+        """Called before every task."""
+        if not getattr(sender.request, 'is_eager', False):
+            self.close_database()
+
+    def on_task_postrun(self, **kwargs):
+        """Does everything necessary for Django to work in a long-living,
+        multiprocessing environment.
+
+        """
+        # See http://groups.google.com/group/django-users/
+        #            browse_thread/thread/78200863d0c07c6d/
+        self.close_database()
+        self.close_cache()
+
+    def close_database(self, **kwargs):
+        if not self.db_reuse_max:
+            return self._close_database()
+        if self._db_recycles >= self.db_reuse_max * 2:
+            self._db_recycles = 0
+            self._close_database()
+        self._db_recycles += 1
+
+    def _close_database(self):
+        try:
+            funs = [conn.close for conn in self._db.connections]
+        except AttributeError:
+            funs = [self._db.close_connection]  # pre multidb
+
+        for close in funs:
+            try:
+                close()
+            except self.database_errors, exc:
+                str_exc = str(exc)
+                if 'closed' not in str_exc and 'not connected' not in str_exc:
+                    raise
+
+    def close_cache(self):
+        try:
+            self._cache.cache.close()
+        except (TypeError, AttributeError):
+            pass
+
+    def on_worker_ready(self, **kwargs):
+        if self._settings.DEBUG:
+            warnings.warn('Using settings.DEBUG leads to a memory leak, never '
+                          'use this setting in production environments!')
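
The fixup keys off :envvar:`DJANGO_SETTINGS_MODULE`, which is read at import time. A rough sketch of wiring it up by hand; the project and settings names are placeholders and a working Django project on the path is assumed:

.. code-block:: python

    import os
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

    from celery import Celery
    from celery.fixups.django import DjangoFixup

    app = Celery('proj')

    # include() returns None unless DJANGO_SETTINGS_MODULE was set before
    # the module was imported; otherwise it connects the worker/task
    # signals and patches loader.now/mail_admins.
    fixup = DjangoFixup.include(app)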

+ 75 - 2
celery/loaders/base.py

@@ -9,9 +9,11 @@
 from __future__ import absolute_import
 
 import anyjson
+import imp
 import importlib
 import os
 import re
+import sys
 
 from datetime import datetime
 from itertools import imap
@@ -21,7 +23,9 @@ from kombu.utils.encoding import safe_str
 
 from celery.datastructures import DictAttribute
 from celery.exceptions import ImproperlyConfigured
-from celery.utils.imports import import_from_cwd, symbol_by_name
+from celery.utils.imports import (
+    import_from_cwd, symbol_by_name, NotAPackage, find_module,
+)
 from celery.utils.functional import maybe_list
 
 BUILTIN_MODULES = frozenset()
@@ -32,6 +36,16 @@ and as such the configuration could not be loaded.
 Please set this variable and make it point to
 a configuration module.""")
 
+_RACE_PROTECTION = False
+CONFIG_INVALID_NAME = """
+Error: Module '{module}' doesn't exist, or it's not a valid \
+Python module name.
+"""
+
+CONFIG_WITH_SUFFIX = CONFIG_INVALID_NAME + """
+Did you mean '{suggest}'?
+"""
+
 
 class BaseLoader(object):
     """The base class for loaders.
@@ -147,6 +161,24 @@ class BaseLoader(object):
         self._conf = obj
         return True
 
+    def _import_config_module(self, name):
+        try:
+            self.find_module(name)
+        except NotAPackage:
+            if name.endswith('.py'):
+                raise NotAPackage, NotAPackage(
+                        CONFIG_WITH_SUFFIX.format(
+                            module=name,
+                            suggest=name[:-3])), sys.exc_info()[2]
+            raise NotAPackage, NotAPackage(
+                    CONFIG_INVALID_NAME.format(
+                        module=name)), sys.exc_info()[2]
+        else:
+            return self.import_from_cwd(name)
+
+    def find_module(self, module):
+        return find_module(module)
+
     def cmdline_config_parser(self, args, namespace='celery',
                 re_type=re.compile(r'\((\w+)\)'),
                 extra_types={'json': anyjson.loads},
@@ -159,7 +191,7 @@ class BaseLoader(object):
 
         def getarg(arg):
             """Parse a single configuration definition from
-            the command line."""
+            the command-line."""
 
             ## find key/value
             # ns.key=value|ns_key=value (case insensitive)
@@ -207,8 +239,20 @@ class BaseLoader(object):
         mailer.send(message, fail_silently=fail_silently)
 
     def read_configuration(self):
+        try:
+            custom_config = os.environ['CELERY_CONFIG_MODULE']
+        except KeyError:
+            pass
+        else:
+            usercfg = self._import_config_module(custom_config)
+            return DictAttribute(usercfg)
         return {}
 
+    def autodiscover_tasks(self, packages, related_name='tasks'):
+        self.task_modules.update(mod.__name__
+            for mod in autodiscover_tasks(packages, related_name) if mod
+        )
+
     @property
     def conf(self):
         """Loader configuration."""
@@ -219,3 +263,32 @@ class BaseLoader(object):
     @cached_property
     def mail(self):
         return self.import_module('celery.utils.mail')
+
+
+def autodiscover_tasks(packages, related_name='tasks'):
+    global _RACE_PROTECTION
+
+    if _RACE_PROTECTION:
+        return
+    _RACE_PROTECTION = True
+    try:
+        return [find_related_module(pkg, related_name) for pkg in packages]
+    finally:
+        _RACE_PROTECTION = False
+
+
+def find_related_module(package, related_name):
+    """Given a package name and a module name, tries to find that
+    module."""
+
+    try:
+        pkg_path = importlib.import_module(package).__path__
+    except AttributeError:
+        return
+
+    try:
+        imp.find_module(related_name, pkg_path)
+    except ImportError:
+        return
+
+    return importlib.import_module('{0}.{1}'.format(package, related_name))
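
The new autodiscovery helpers can be exercised on their own; ``myapp`` and ``otherapp`` are placeholder package names:

.. code-block:: python

    from celery.loaders.base import autodiscover_tasks, find_related_module

    # Returns the myapp.tasks module if myapp is importable and ships a
    # tasks.py, otherwise None.
    mod = find_related_module('myapp', 'tasks')

    # The same over several packages, guarded by _RACE_PROTECTION.
    modules = autodiscover_tasks(['myapp', 'otherapp'], related_name='tasks')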

+ 3 - 33
celery/loaders/default.py

@@ -9,13 +9,11 @@
 from __future__ import absolute_import
 
 import os
-import sys
 import warnings
 
-from celery.datastructures import AttributeDict
+from celery.datastructures import DictAttribute
 from celery.exceptions import NotConfigured
 from celery.utils import strtobool
-from celery.utils.imports import NotAPackage, find_module
 
 from .base import BaseLoader
 
@@ -24,24 +22,12 @@ DEFAULT_CONFIG_MODULE = 'celeryconfig'
 #: Warns if configuration file is missing if :envvar:`C_WNOCONF` is set.
 C_WNOCONF = strtobool(os.environ.get('C_WNOCONF', False))
 
-CONFIG_INVALID_NAME = """
-Error: Module '{module}' doesn't exist, or it's not a valid \
-Python module name.
-"""
-
-CONFIG_WITH_SUFFIX = CONFIG_INVALID_NAME + """
-Did you mean '{suggest}'?
-"""
-
 
 class Loader(BaseLoader):
     """The loader used by the default app."""
 
     def setup_settings(self, settingsdict):
-        return AttributeDict(settingsdict)
-
-    def find_module(self, module):
-        return find_module(module)
+        return DictAttribute(settingsdict)
 
     def read_configuration(self):
         """Read configuration from :file:`celeryconfig.py` and configure
@@ -49,16 +35,7 @@ class Loader(BaseLoader):
         configname = os.environ.get('CELERY_CONFIG_MODULE',
                                      DEFAULT_CONFIG_MODULE)
         try:
-            self.find_module(configname)
-        except NotAPackage:
-            if configname.endswith('.py'):
-                raise NotAPackage, NotAPackage(
-                        CONFIG_WITH_SUFFIX.format(
-                            module=configname,
-                            suggest=configname[:-3])), sys.exc_info()[2]
-            raise NotAPackage, NotAPackage(
-                    CONFIG_INVALID_NAME.format(
-                        module=configname)), sys.exc_info()[2]
+            usercfg = self._import_config_module(configname)
         except ImportError:
             # billiard sets this if forked using execv
             if C_WNOCONF and not os.environ.get('FORKED_BY_MULTIPROCESSING'):
@@ -67,12 +44,5 @@ class Loader(BaseLoader):
                     'is available to Python.'.format(module=configname)))
             return self.setup_settings({})
         else:
-            celeryconfig = self.import_from_cwd(configname)
-            usercfg = dict((key, getattr(celeryconfig, key))
-                            for key in dir(celeryconfig)
-                                if self.wanted_module_item(key))
             self.configured = True
             return self.setup_settings(usercfg)
-
-    def wanted_module_item(self, item):
-        return not item.startswith('_')

+ 16 - 9
celery/platforms.py

@@ -262,10 +262,11 @@ class DaemonContext(object):
     _is_open = False
 
     def __init__(self, pidfile=None, workdir=None, umask=None,
-            fake=False, **kwargs):
+            fake=False, after_chdir=None, **kwargs):
         self.workdir = workdir or DAEMON_WORKDIR
         self.umask = DAEMON_UMASK if umask is None else umask
         self.fake = fake
+        self.after_chdir = after_chdir
         self.stdfds = (sys.stdin, sys.stdout, sys.stderr)
 
     def redirect_to_null(self, fd):
@@ -281,6 +282,9 @@ class DaemonContext(object):
             os.chdir(self.workdir)
             os.umask(self.umask)
 
+            if self.after_chdir:
+                self.after_chdir()
+
             preserve = [fileno(f) for f in self.stdfds if fileno(f)]
             for fd in reversed(range(get_fdmax(default=2048))):
                 if fd not in preserve:
@@ -353,14 +357,17 @@ def detached(logfile=None, pidfile=None, uid=None, gid=None, umask=0,
         # no point trying to setuid unless we're root.
         maybe_drop_privileges(uid=uid, gid=gid)
 
-    # Since without stderr any errors will be silently suppressed,
-    # we need to know that we have access to the logfile.
-    logfile and open(logfile, 'a').close()
-    # Doesn't actually create the pidfile, but makes sure it's not stale.
-    if pidfile:
-        _create_pidlock(pidfile).release()
-
-    return DaemonContext(umask=umask, workdir=workdir, fake=fake)
+    def after_chdir_do():
+        # Since without stderr any errors will be silently suppressed,
+        # we need to know that we have access to the logfile.
+        logfile and open(logfile, 'a').close()
+        # Doesn't actually create the pidfile, but makes sure it's not stale.
+        if pidfile:
+            _create_pidlock(pidfile).release()
+
+    return DaemonContext(
+        umask=umask, workdir=workdir, fake=fake, after_chdir=after_chdir_do,
+    )
 
 
 def parse_uid(uid):
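
With this change the logfile/pidfile checks run from inside the daemon's working directory, via the new ``after_chdir`` callback. Calling code stays the same; a sketch with placeholder paths, assuming :class:`DaemonContext` is usable as a context manager as in the existing ``detached()`` docs:

.. code-block:: python

    from celery.platforms import detached

    def run_program():
        pass  # hypothetical service entry point

    with detached(logfile='/var/log/app.log', pidfile='/var/run/app.pid'):
        # Detached now: after_chdir_do() has opened the logfile and
        # checked the pidfile for staleness, after chdir to the workdir.
        run_program()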

+ 6 - 4
celery/result.py

@@ -20,7 +20,7 @@ from kombu.utils.compat import OrderedDict
 from . import current_app
 from . import states
 from .app import app_or_default
-from .datastructures import DependencyGraph
+from .datastructures import DependencyGraph, GraphFormatter
 from .exceptions import IncompleteStream, TimeoutError
 
 
@@ -187,11 +187,13 @@ class AsyncResult(ResultBase):
         """Returns :const:`True` if the task failed."""
         return self.state == states.FAILURE
 
-    def build_graph(self, intermediate=False):
-        graph = DependencyGraph()
+    def build_graph(self, intermediate=False, formatter=None):
+        graph = DependencyGraph(
+            formatter=formatter or GraphFormatter(root=self.id, shape='oval'),
+        )
         for parent, node in self.iterdeps(intermediate=intermediate):
+            graph.add_arc(node)
             if parent:
-                graph.add_arc(parent)
                 graph.add_edge(parent, node)
         return graph
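
Since the graph now carries a formatter rooted at the result id, dumping a result's dependency graph becomes a one-liner. Sketch only: ``add`` stands for any registered task, and a result backend that records children is assumed:

.. code-block:: python

    import sys

    res = add.delay(2, 2)
    # Walks parent/child links and writes DOT using the default
    # GraphFormatter(root=res.id, shape='oval') set up in build_graph().
    res.build_graph(intermediate=True).to_dot(sys.stdout)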
 

+ 1 - 0
celery/states.py

@@ -133,6 +133,7 @@ FAILURE = 'FAILURE'
 REVOKED = 'REVOKED'
 #: Task is waiting for retry.
 RETRY = 'RETRY'
+IGNORED = 'IGNORED'
 
 READY_STATES = frozenset([SUCCESS, FAILURE, REVOKED])
 UNREADY_STATES = frozenset([PENDING, RECEIVED, STARTED, RETRY])

+ 6 - 5
celery/task/base.py

@@ -6,7 +6,7 @@
     The task implementation has been moved to :mod:`celery.app.task`.
 
     This contains the backward compatible Task class used in the old API,
-    and shouldn't be used anymore.
+    and shouldn't be used in new applications.
 
 """
 from __future__ import absolute_import
@@ -21,7 +21,8 @@ from celery.utils.log import get_task_logger
 
 #: list of methods that must be classmethods in the old API.
 _COMPAT_CLASSMETHODS = (
-    'delay', 'apply_async', 'retry', 'apply', 'AsyncResult', 'subtask',
+    'delay', 'apply_async', 'retry', 'apply',
+    'AsyncResult', 'subtask', '_get_request',
 )
 
 
@@ -61,10 +62,10 @@ class Task(BaseTask):
     for name in _COMPAT_CLASSMETHODS:
         locals()[name] = reclassmethod(getattr(BaseTask, name))
 
+    @class_property
     @classmethod
-    def _get_request(self):
-        return self.request_stack.top
-    request = class_property(_get_request)
+    def request(cls):
+        return cls._get_request()
 
     @classmethod
     def get_logger(self, **kwargs):

+ 14 - 7
celery/task/trace.py

@@ -29,20 +29,18 @@ from celery._state import _task_stack
 from celery.app import set_default_app
 from celery.app.task import Task as BaseTask, Context
 from celery.datastructures import ExceptionInfo
-from celery.exceptions import RetryTaskError
+from celery.exceptions import Ignore, RetryTaskError
 from celery.utils.serialization import get_pickleable_exception
 from celery.utils.log import get_logger
 
 _logger = get_logger(__name__)
 
 send_prerun = signals.task_prerun.send
-prerun_receivers = signals.task_prerun.receivers
 send_postrun = signals.task_postrun.send
-postrun_receivers = signals.task_postrun.receivers
 send_success = signals.task_success.send
-success_receivers = signals.task_success.receivers
 STARTED = states.STARTED
 SUCCESS = states.SUCCESS
+IGNORED = states.IGNORED
 RETRY = states.RETRY
 FAILURE = states.FAILURE
 EXCEPTION_STATES = states.EXCEPTION_STATES
@@ -197,6 +195,10 @@ def build_tracer(name, task, loader=None, hostname=None, store_errors=True,
     pop_task = _task_stack.pop
     on_chord_part_return = backend.on_chord_part_return
 
+    prerun_receivers = signals.task_prerun.receivers
+    postrun_receivers = signals.task_postrun.receivers
+    success_receivers = signals.task_success.receivers
+
     from celery import canvas
     subtask = canvas.subtask
 
@@ -222,6 +224,8 @@ def build_tracer(name, task, loader=None, hostname=None, store_errors=True,
                 try:
                     R = retval = fun(*args, **kwargs)
                     state = SUCCESS
+                except Ignore as exc:
+                    I, R = Info(IGNORED, exc), ExceptionInfo(internal=True)
                 except RetryTaskError as exc:
                     I = Info(RETRY, exc)
                     state, retval = I.state, I.retval
@@ -342,9 +346,12 @@ def setup_worker_optimizations(app):
 
     trace_task_ret = _fast_trace_task
     try:
-        sys.modules['celery.worker.job'].trace_task_ret = _fast_trace_task
+        job = sys.modules['celery.worker.job']
     except KeyError:
         pass
+    else:
+        job.trace_task_ret = _fast_trace_task
+        job.__optimize__()
 
 
 def reset_worker_optimizations():
@@ -384,8 +391,8 @@ def _install_stack_protection():
         def __protected_call__(self, *args, **kwargs):
             stack = self.request_stack
             req = stack.top
-            if req and not req._protected and len(stack) == 2 and \
-                    not req.called_directly:
+            if req and not req._protected and \
+                    len(stack) == 1 and not req.called_directly:
                 req._protected = 1
                 return self.run(*args, **kwargs)
             return orig(self, *args, **kwargs)
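
The ``Ignore`` handling added above (together with the new exception in ``celery/exceptions.py``) lets a task opt out of result and state updates; a sketch with a made-up task:

.. code-block:: python

    from celery import task
    from celery.exceptions import Ignore

    @task()
    def maybe_process(x):
        if x is None:
            # The worker will neither store a result nor update state;
            # the trace records an IGNORED info instead.
            raise Ignore()
        return x * 2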

+ 1 - 1
celery/tests/app/test_app.py

@@ -336,7 +336,7 @@ class test_App(Case):
 
     def test_Windows_log_color_disabled(self):
         self.app.IS_WINDOWS = True
-        self.assertFalse(self.app.log.supports_color())
+        self.assertFalse(self.app.log.supports_color(True))
 
     def test_compat_setting_CARROT_BACKEND(self):
         self.app.config_from_object(Object(CARROT_BACKEND='set_by_us'))

+ 4 - 12
celery/tests/app/test_loaders.py

@@ -153,22 +153,14 @@ class test_LoaderBase(Case):
 
 class test_DefaultLoader(Case):
 
-    def test_wanted_module_item(self):
-        l = default.Loader()
-        self.assertTrue(l.wanted_module_item('FOO'))
-        self.assertTrue(l.wanted_module_item('Foo'))
-        self.assertFalse(l.wanted_module_item('_FOO'))
-        self.assertFalse(l.wanted_module_item('__FOO'))
-        self.assertTrue(l.wanted_module_item('foo'))
-
-    @patch('celery.loaders.default.find_module')
+    @patch('celery.loaders.base.find_module')
     def test_read_configuration_not_a_package(self, find_module):
         find_module.side_effect = NotAPackage()
         l = default.Loader()
         with self.assertRaises(NotAPackage):
             l.read_configuration()
 
-    @patch('celery.loaders.default.find_module')
+    @patch('celery.loaders.base.find_module')
     def test_read_configuration_py_in_name(self, find_module):
         prev = os.environ['CELERY_CONFIG_MODULE']
         os.environ['CELERY_CONFIG_MODULE'] = 'celeryconfig.py'
@@ -180,7 +172,7 @@ class test_DefaultLoader(Case):
         finally:
             os.environ['CELERY_CONFIG_MODULE'] = prev
 
-    @patch('celery.loaders.default.find_module')
+    @patch('celery.loaders.base.find_module')
     def test_read_configuration_importerror(self, find_module):
         default.C_WNOCONF = True
         find_module.side_effect = ImportError()
@@ -238,7 +230,7 @@ class test_DefaultLoader(Case):
 
         with warnings.catch_warnings(record=True):
             l = _Loader()
-            self.assertDictEqual(l.conf, {})
+            self.assertFalse(l.configured)
             context_executed[0] = True
         self.assertTrue(context_executed[0])
 

+ 3 - 2
celery/tests/bin/test_celery.py

@@ -45,9 +45,10 @@ class test_Command(AppCase):
         self.err = WhateverIO()
         self.cmd = Command(self.app, stdout=self.out, stderr=self.err)
 
-    def test_show_help(self):
+    def test_exit_help(self):
         self.cmd.run_from_argv = Mock()
-        self.assertEqual(self.cmd.show_help('foo'), EX_USAGE)
+        with self.assertRaises(SystemExit):
+            self.cmd.exit_help('foo')
         self.cmd.run_from_argv.assert_called_with(
                 self.cmd.prog_name, ['foo', '--help']
         )

+ 1 - 0
celery/tests/bin/test_celerybeat.py

@@ -114,6 +114,7 @@ class test_Beat(AppCase):
             pass
         b = beatapp.Beat()
         b.redirect_stdouts = False
+        b.app.log.__class__._setup = False
         b.setup_logging()
         with self.assertRaises(AttributeError):
             sys.stdout.logger

+ 4 - 9
celery/tests/bin/test_celeryd.py

@@ -60,10 +60,7 @@ def disable_stdouts(fun):
 
 
 class Worker(cd.Worker):
-
-    def __init__(self, *args, **kwargs):
-        super(Worker, self).__init__(*args, **kwargs)
-        self.redirect_stdouts = False
+    redirect_stdouts = False
 
     def start(self, *args, **kwargs):
         self.on_start()
@@ -292,9 +289,7 @@ class test_Worker(WorkerAppCase):
 
     @disable_stdouts
     def test_redirect_stdouts(self):
-        worker = self.Worker()
-        worker.redirect_stdouts = False
-        worker.redirect_stdouts_to_logger()
+        self.Worker(redirect_stdouts=False)
         with self.assertRaises(AttributeError):
             sys.stdout.logger
 
@@ -306,9 +301,9 @@ class test_Worker(WorkerAppCase):
             logging_setup[0] = True
 
         try:
-            worker = self.Worker()
+            worker = self.Worker(redirect_stdouts=False)
             worker.app.log.__class__._setup = False
-            worker.redirect_stdouts_to_logger()
+            worker.setup_logging()
             self.assertTrue(logging_setup[0])
             with self.assertRaises(AttributeError):
                 sys.stdout.logger

+ 21 - 9
celery/tests/contrib/test_abortable.py

@@ -21,19 +21,31 @@ class test_AbortableTask(Case):
 
     def test_is_not_aborted(self):
         t = MyAbortableTask()
-        result = t.apply_async()
-        tid = result.id
-        self.assertFalse(t.is_aborted(task_id=tid))
+        t.push_request()
+        try:
+            result = t.apply_async()
+            tid = result.id
+            self.assertFalse(t.is_aborted(task_id=tid))
+        finally:
+            t.pop_request()
 
     def test_is_aborted_not_abort_result(self):
         t = MyAbortableTask()
         t.AsyncResult = AsyncResult
-        t.request.id = 'foo'
-        self.assertFalse(t.is_aborted())
+        t.push_request()
+        try:
+            t.request.id = 'foo'
+            self.assertFalse(t.is_aborted())
+        finally:
+            t.pop_request()
 
     def test_abort_yields_aborted(self):
         t = MyAbortableTask()
-        result = t.apply_async()
-        result.abort()
-        tid = result.id
-        self.assertTrue(t.is_aborted(task_id=tid))
+        t.push_request()
+        try:
+            result = t.apply_async()
+            result.abort()
+            tid = result.id
+            self.assertTrue(t.is_aborted(task_id=tid))
+        finally:
+            t.pop_request()

+ 1 - 0
celery/tests/tasks/test_sets.py

@@ -148,6 +148,7 @@ class test_TaskSet(Case):
         def xyz():
             pass
         from celery._state import _task_stack
+        xyz.push_request()
         _task_stack.push(xyz)
         try:
             ts.apply_async(publisher=Publisher())

+ 64 - 37
celery/tests/tasks/test_tasks.py

@@ -141,27 +141,37 @@ class test_task_retries(Case):
         self.assertEqual(retry_task_noargs.iterations, 4)
 
     def test_retry_kwargs_can_be_empty(self):
-        with self.assertRaises(RetryTaskError):
-            retry_task_mockapply.retry(args=[4, 4], kwargs=None)
-
-    def test_retry_not_eager(self):
-        retry_task_mockapply.request.called_directly = False
-        exc = Exception('baz')
+        retry_task_mockapply.push_request()
         try:
-            retry_task_mockapply.retry(args=[4, 4], kwargs={'task_retries': 0},
-                                       exc=exc, throw=False)
-            self.assertTrue(retry_task_mockapply.__class__.applied)
+            with self.assertRaises(RetryTaskError):
+                retry_task_mockapply.retry(args=[4, 4], kwargs=None)
         finally:
-            retry_task_mockapply.__class__.applied = 0
+            retry_task_mockapply.pop_request()
 
+    def test_retry_not_eager(self):
+        retry_task_mockapply.push_request()
         try:
-            with self.assertRaises(RetryTaskError):
+            retry_task_mockapply.request.called_directly = False
+            exc = Exception('baz')
+            try:
                 retry_task_mockapply.retry(
                     args=[4, 4], kwargs={'task_retries': 0},
-                    exc=exc, throw=True)
-            self.assertTrue(retry_task_mockapply.__class__.applied)
+                    exc=exc, throw=False,
+                )
+                self.assertTrue(retry_task_mockapply.__class__.applied)
+            finally:
+                retry_task_mockapply.__class__.applied = 0
+
+            try:
+                with self.assertRaises(RetryTaskError):
+                    retry_task_mockapply.retry(
+                        args=[4, 4], kwargs={'task_retries': 0},
+                        exc=exc, throw=True)
+                self.assertTrue(retry_task_mockapply.__class__.applied)
+            finally:
+                retry_task_mockapply.__class__.applied = 0
         finally:
-            retry_task_mockapply.__class__.applied = 0
+            retry_task_mockapply.pop_request()
 
     def test_retry_with_kwargs(self):
         retry_task_customexc.__class__.max_retries = 3
@@ -322,11 +332,16 @@ class test_tasks(Case):
         self.assertTrue(publisher.exchange)
 
     def test_context_get(self):
-        request = self.createTask('c.unittest.t.c.g').request
-        request.foo = 32
-        self.assertEqual(request.get('foo'), 32)
-        self.assertEqual(request.get('bar', 36), 36)
-        request.clear()
+        task = self.createTask('c.unittest.t.c.g')
+        task.push_request()
+        try:
+            request = task.request
+            request.foo = 32
+            self.assertEqual(request.get('foo'), 32)
+            self.assertEqual(request.get('bar', 36), 36)
+            request.clear()
+        finally:
+            task.pop_request()
 
     def test_task_class_repr(self):
         task = self.createTask('c.unittest.t.repr')
@@ -350,9 +365,13 @@ class test_tasks(Case):
 
     def test_after_return(self):
         task = self.createTask('c.unittest.t.after_return')
-        task.request.chord = return_True_task.s()
-        task.after_return('SUCCESS', 1.0, 'foobar', (), {}, None)
-        task.request.clear()
+        task.push_request()
+        try:
+            task.request.chord = return_True_task.s()
+            task.after_return('SUCCESS', 1.0, 'foobar', (), {}, None)
+            task.request.clear()
+        finally:
+            task.pop_request()
 
     def test_send_task_sent_event(self):
         T1 = self.createTask('c.unittest.t.t1')
@@ -393,15 +412,19 @@ class test_tasks(Case):
         def yyy():
             pass
 
-        tid = uuid()
-        yyy.update_state(tid, 'FROBULATING', {'fooz': 'baaz'})
-        self.assertEqual(yyy.AsyncResult(tid).status, 'FROBULATING')
-        self.assertDictEqual(yyy.AsyncResult(tid).result, {'fooz': 'baaz'})
-
-        yyy.request.id = tid
-        yyy.update_state(state='FROBUZATING', meta={'fooz': 'baaz'})
-        self.assertEqual(yyy.AsyncResult(tid).status, 'FROBUZATING')
-        self.assertDictEqual(yyy.AsyncResult(tid).result, {'fooz': 'baaz'})
+        yyy.push_request()
+        try:
+            tid = uuid()
+            yyy.update_state(tid, 'FROBULATING', {'fooz': 'baaz'})
+            self.assertEqual(yyy.AsyncResult(tid).status, 'FROBULATING')
+            self.assertDictEqual(yyy.AsyncResult(tid).result, {'fooz': 'baaz'})
+
+            yyy.request.id = tid
+            yyy.update_state(state='FROBUZATING', meta={'fooz': 'baaz'})
+            self.assertEqual(yyy.AsyncResult(tid).status, 'FROBUZATING')
+            self.assertDictEqual(yyy.AsyncResult(tid).result, {'fooz': 'baaz'})
+        finally:
+            yyy.pop_request()
 
     def test_repr(self):
 
@@ -421,13 +444,17 @@ class test_tasks(Case):
 
     def test_get_logger(self):
         t1 = self.createTask('c.unittest.t.t1')
-        logfh = WhateverIO()
-        logger = t1.get_logger(logfile=logfh, loglevel=0)
-        self.assertTrue(logger)
+        t1.push_request()
+        try:
+            logfh = WhateverIO()
+            logger = t1.get_logger(logfile=logfh, loglevel=0)
+            self.assertTrue(logger)
 
-        t1.request.loglevel = 3
-        logger = t1.get_logger(logfile=logfh, loglevel=None)
-        self.assertTrue(logger)
+            t1.request.loglevel = 3
+            logger = t1.get_logger(logfile=logfh, loglevel=None)
+            self.assertTrue(logger)
+        finally:
+            t1.pop_request()
 
 
 class test_TaskSet(Case):

+ 2 - 4
celery/tests/utilities/test_datastructures.py

@@ -45,10 +45,8 @@ class test_DictAttribute(Case):
         obj.attr1 = 1
         x = DictAttribute(obj)
         x['attr2'] = 2
-        self.assertDictEqual(dict(x.iteritems()),
-                             dict(attr1=1, attr2=2))
-        self.assertDictEqual(dict(x.items()),
-                             dict(attr1=1, attr2=2))
+        self.assertEqual(x['attr1'], 1)
+        self.assertEqual(x['attr2'], 2)
 
 
 class test_ConfigurationView(Case):

+ 4 - 0
celery/tests/utilities/test_platforms.py

@@ -275,11 +275,15 @@ if not current_app.IS_WINDOWS:
             geteuid.return_value = 5001
             context = detached(uid='user', gid='group', logfile='/foo/bar')
             self.assertIsInstance(context, DaemonContext)
+            self.assertTrue(context.after_chdir)
+            context.after_chdir()
             open.assert_called_with('/foo/bar', 'a')
             open.return_value.close.assert_called_with()
 
             context = detached(pidfile='/foo/bar/pid')
             self.assertIsInstance(context, DaemonContext)
+            self.assertTrue(context.after_chdir)
+            context.after_chdir()
             pidlock.assert_called_with('/foo/bar/pid')
 
     class test_DaemonContext(Case):

+ 7 - 3
celery/tests/utilities/test_timer2.py

@@ -42,7 +42,7 @@ class test_Schedule(Case):
 
     def test_handle_error(self):
         from datetime import datetime
-        mktime = timer2.mktime
+        to_timestamp = timer2.to_timestamp
         scratch = [None]
 
         def _overflow(x):
@@ -53,7 +53,7 @@ class test_Schedule(Case):
 
         s = timer2.Schedule(on_error=on_error)
 
-        timer2.mktime = _overflow
+        timer2.to_timestamp = _overflow
         try:
             s.enter(timer2.Entry(lambda: None, (), {}),
                     eta=datetime.now())
@@ -64,7 +64,7 @@ class test_Schedule(Case):
                 s.enter(timer2.Entry(lambda: None, (), {}),
                         eta=datetime.now())
         finally:
-            timer2.mktime = mktime
+            timer2.to_timestamp = to_timestamp
 
         exc = scratch[0]
         self.assertIsInstance(exc, OverflowError)
@@ -82,8 +82,12 @@ class test_Timer(Case):
                 done[0] = True
 
             t.apply_after(300, set_done)
+            mss = 0
             while not done[0]:
+                if mss >= 2.0:
+                    raise Exception('test timed out')
                 time.sleep(0.1)
+                mss += 0.1
         finally:
             t.stop()
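
The added counter above only guards the test against hanging; apply_after() itself just schedules a callback a number of milliseconds ahead on the timer thread. A self-contained sketch, assuming the Timer API used by this test:

    import time
    from celery.utils.timer2 import Timer

    def announce():
        print('fired roughly 300ms after scheduling')

    t = Timer()
    try:
        t.apply_after(300, announce)   # milliseconds, as in the test above
        time.sleep(0.5)                # give the timer thread time to fire
    finally:
        t.stop()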
 

+ 49 - 98
celery/tests/worker/test_bootsteps.py

@@ -2,37 +2,29 @@ from __future__ import absolute_import
 
 from mock import Mock
 
-from celery.worker import bootsteps
+from celery import bootsteps
 
 from celery.tests.utils import AppCase, Case
 
 
-class test_Component(Case):
+class test_Step(Case):
 
-    class Def(bootsteps.Component):
-        name = 'test_Component.Def'
+    class Def(bootsteps.StartStopStep):
+        name = 'test_Step.Def'
 
-    def test_components_must_be_named(self):
-        with self.assertRaises(NotImplementedError):
-
-            class X(bootsteps.Component):
-                pass
-
-        class Y(bootsteps.Component):
-            abstract = True
+    def setUp(self):
+        self.steps = []
 
     def test_namespace_name(self, ns='test_namespace_name'):
 
-        class X(bootsteps.Component):
+        class X(bootsteps.Step):
             namespace = ns
             name = 'X'
-        self.assertEqual(X.namespace, ns)
         self.assertEqual(X.name, 'X')
 
-        class Y(bootsteps.Component):
-            name = '%s.Y' % (ns, )
-        self.assertEqual(Y.namespace, ns)
-        self.assertEqual(Y.name, 'Y')
+        class Y(bootsteps.Step):
+            name = '%s.Y' % ns
+        self.assertEqual(Y.name, '%s.Y' % ns)
 
     def test_init(self):
         self.assertTrue(self.Def(self))
@@ -70,13 +62,13 @@ class test_Component(Case):
         self.assertFalse(x.create.call_count)
 
 
-class test_StartStopComponent(Case):
+class test_StartStopStep(Case):
 
-    class Def(bootsteps.StartStopComponent):
-        name = 'test_StartStopComponent.Def'
+    class Def(bootsteps.StartStopStep):
+        name = 'test_StartStopStep.Def'
 
     def setUp(self):
-        self.components = []
+        self.steps = []
 
     def test_start__stop(self):
         x = self.Def(self)
@@ -84,42 +76,31 @@ class test_StartStopComponent(Case):
 
         # include creates the underlying object and sets
         # its x.obj attribute to it, as well as appending
-        # it to the parent.components list.
+        # it to the parent.steps list.
         x.include(self)
-        self.assertTrue(self.components)
-        self.assertIs(self.components[0], x.obj)
+        self.assertTrue(self.steps)
+        self.assertIs(self.steps[0], x)
 
-        x.start()
+        x.start(self)
         x.obj.start.assert_called_with()
 
-        x.stop()
+        x.stop(self)
         x.obj.stop.assert_called_with()
 
     def test_include_when_disabled(self):
         x = self.Def(self)
         x.enabled = False
         x.include(self)
-        self.assertFalse(self.components)
-
-    def test_terminate_when_terminable(self):
-        x = self.Def(self)
-        x.terminable = True
-        x.create = Mock()
-
-        x.include(self)
-        x.terminate()
-        x.obj.terminate.assert_called_with()
-        self.assertFalse(x.obj.stop.call_count)
+        self.assertFalse(self.steps)
 
-    def test_terminate_calls_stop_when_not_terminable(self):
+    def test_terminate(self):
         x = self.Def(self)
         x.terminable = False
         x.create = Mock()
 
         x.include(self)
-        x.terminate()
+        x.terminate(self)
         x.obj.stop.assert_called_with()
-        self.assertFalse(x.obj.terminate.call_count)
 
 
 class test_Namespace(AppCase):
@@ -127,47 +108,29 @@ class test_Namespace(AppCase):
     class NS(bootsteps.Namespace):
         name = 'test_Namespace'
 
-    class ImportingNS(bootsteps.Namespace):
-
-        def __init__(self, *args, **kwargs):
-            bootsteps.Namespace.__init__(self, *args, **kwargs)
-            self.imported = []
-
-        def modules(self):
-            return ['A', 'B', 'C']
-
-        def import_module(self, module):
-            self.imported.append(module)
-
-    def test_components_added_to_unclaimed(self):
+    def test_steps_added_to_unclaimed(self):
 
-        class tnA(bootsteps.Component):
+        class tnA(bootsteps.Step):
             name = 'test_Namespace.A'
 
-        class tnB(bootsteps.Component):
+        class tnB(bootsteps.Step):
             name = 'test_Namespace.B'
 
-        class xxA(bootsteps.Component):
+        class xxA(bootsteps.Step):
             name = 'xx.A'
 
-        self.assertIn('A', self.NS._unclaimed['test_Namespace'])
-        self.assertIn('B', self.NS._unclaimed['test_Namespace'])
-        self.assertIn('A', self.NS._unclaimed['xx'])
-        self.assertNotIn('B', self.NS._unclaimed['xx'])
+        class NS(self.NS):
+            default_steps = [tnA, tnB]
+        ns = NS(app=self.app)
+
+        self.assertIn(tnA, ns._all_steps())
+        self.assertIn(tnB, ns._all_steps())
+        self.assertNotIn(xxA, ns._all_steps())
 
     def test_init(self):
         ns = self.NS(app=self.app)
         self.assertIs(ns.app, self.app)
         self.assertEqual(ns.name, 'test_Namespace')
-        self.assertFalse(ns.services)
-
-    def test_interface_modules(self):
-        self.NS(app=self.app).modules()
-
-    def test_load_modules(self):
-        x = self.ImportingNS(app=self.app)
-        x.load_modules()
-        self.assertListEqual(x.imported, ['A', 'B', 'C'])
 
     def test_apply(self):
 
@@ -177,44 +140,32 @@ class test_Namespace(AppCase):
             def modules(self):
                 return ['A', 'B']
 
-        class A(bootsteps.Component):
-            name = 'test_apply.A'
-            requires = ['C']
-
-        class B(bootsteps.Component):
+        class B(bootsteps.Step):
             name = 'test_apply.B'
 
-        class C(bootsteps.Component):
+        class C(bootsteps.Step):
             name = 'test_apply.C'
-            requires = ['B']
+            requires = [B]
+
+        class A(bootsteps.Step):
+            name = 'test_apply.A'
+            requires = [C]
 
-        class D(bootsteps.Component):
+        class D(bootsteps.Step):
             name = 'test_apply.D'
             last = True
 
-        x = MyNS(app=self.app)
-        x.import_module = Mock()
+        x = MyNS([A, D], app=self.app)
         x.apply(self)
 
-        self.assertItemsEqual(x.components.values(), [A, B, C, D])
-        self.assertTrue(x.import_module.call_count)
-
-        for boot_step in x.boot_steps:
-            self.assertEqual(boot_step.namespace, x)
-
-        self.assertIsInstance(x.boot_steps[0], B)
-        self.assertIsInstance(x.boot_steps[1], C)
-        self.assertIsInstance(x.boot_steps[2], A)
-        self.assertIsInstance(x.boot_steps[3], D)
-
-        self.assertIs(x['A'], A)
-
-    def test_import_module(self):
-        x = self.NS(app=self.app)
-        import os
-        self.assertIs(x.import_module('os'), os)
+        self.assertIsInstance(x.order[0], B)
+        self.assertIsInstance(x.order[1], C)
+        self.assertIsInstance(x.order[2], A)
+        self.assertIsInstance(x.order[3], D)
+        self.assertIn(A, x.types)
+        self.assertIs(x[A.name], x.order[2])
 
-    def test_find_last_but_no_components(self):
+    def test_find_last_but_no_steps(self):
 
         class MyNS(bootsteps.Namespace):
             name = 'qwejwioqjewoqiej'
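
These tests track the move from celery.worker.bootsteps components to the new celery.bootsteps API: dependencies are expressed as step classes via requires, and a Namespace is seeded from default_steps instead of importing modules. A condensed sketch of how the pieces fit together, using illustrative names that are not part of this changeset:

    from celery import bootsteps

    class DummyResource(object):          # stand-in object managed by a step
        def start(self):
            print('started')

        def stop(self):
            print('stopped')

    class Database(bootsteps.StartStopStep):
        name = 'Pipeline.Database'

        def create(self, parent):
            # return the object whose start()/stop() this step drives
            return DummyResource()

    class Cache(bootsteps.StartStopStep):
        name = 'Pipeline.Cache'
        requires = [Database]             # dependency given as the step class itself

    class Pipeline(bootsteps.Namespace):
        name = 'Pipeline'
        default_steps = [Database, Cache]

Applying a Pipeline instance to a parent object sorts the steps by their requires graph (Database before Cache here) and then drives start/stop/terminate through the namespace, which is what the test_apply and test_start__stop cases above expect.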

+ 8 - 8
celery/tests/worker/test_control.py

@@ -351,17 +351,17 @@ class test_ControlPanel(Case):
     def test_revoke_terminate(self):
         request = Mock()
         request.id = tid = uuid()
-        state.active_requests.add(request)
+        state.reserved_requests.add(request)
         try:
             r = control.revoke(Mock(), tid, terminate=True)
             self.assertIn(tid, revoked)
             self.assertTrue(request.terminate.call_count)
-            self.assertIn('terminated', r['ok'])
+            self.assertIn('terminating', r['ok'])
             # unknown task id only revokes
             r = control.revoke(Mock(), uuid(), terminate=True)
-            self.assertIn('revoked', r['ok'])
+            self.assertIn('not found', r['ok'])
         finally:
-            state.active_requests.discard(request)
+            state.reserved_requests.discard(request)
 
     def test_autoscale(self):
         self.panel.state.consumer = Mock()
@@ -382,7 +382,7 @@ class test_ControlPanel(Case):
         m = {'method': 'ping',
              'destination': hostname}
         r = self.panel.handle_message(m, None)
-        self.assertEqual(r, 'pong')
+        self.assertEqual(r, {'ok': 'pong'})
 
     def test_shutdown(self):
         m = {'method': 'shutdown',
@@ -405,8 +405,8 @@ class test_ControlPanel(Case):
                       mailbox=self.app.control.mailbox)
         r = panel.dispatch('ping', reply_to={'exchange': 'x',
                                              'routing_key': 'x'})
-        self.assertEqual(r, 'pong')
-        self.assertDictEqual(replies[0], {panel.hostname: 'pong'})
+        self.assertEqual(r, {'ok': 'pong'})
+        self.assertDictEqual(replies[0], {panel.hostname: {'ok': 'pong'}})
 
     def test_pool_restart(self):
         consumer = Consumer()
@@ -439,7 +439,7 @@ class test_ControlPanel(Case):
         self.assertEqual([(('foo',), {}), (('bar',), {})],
                           _import.call_args_list)
 
-    def test_pool_restart_relaod_modules(self):
+    def test_pool_restart_reload_modules(self):
         consumer = Consumer()
         consumer.controller = _WC(app=current_app)
         consumer.controller.pool.restart = Mock()
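
The expected replies change because control command handlers now wrap their results in an {'ok': ...} / {'error': ...} envelope. Seen from the client side (the worker name in the comment is illustrative):

    from celery import current_app

    replies = current_app.control.ping(timeout=1.0)
    # e.g. [{'worker1@example.com': {'ok': 'pong'}}] instead of a bare 'pong'
    print(replies)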

+ 5 - 4
celery/tests/worker/test_request.py

@@ -619,7 +619,7 @@ class test_TaskRequest(AppCase):
     def test_worker_task_trace_handle_retry(self):
         from celery.exceptions import RetryTaskError
         tid = uuid()
-        mytask.request.update({'id': tid})
+        mytask.push_request(id=tid)
         try:
             raise ValueError('foo')
         except Exception as exc:
@@ -634,12 +634,13 @@ class test_TaskRequest(AppCase):
                 self.assertEqual(mytask.backend.get_status(tid),
                                  states.RETRY)
         finally:
-            mytask.request.clear()
+            mytask.pop_request()
 
     def test_worker_task_trace_handle_failure(self):
         tid = uuid()
-        mytask.request.update({'id': tid})
+        mytask.push_request()
         try:
+            mytask.request.id = tid
             try:
                 raise ValueError('foo')
             except Exception as exc:
@@ -651,7 +652,7 @@ class test_TaskRequest(AppCase):
                 self.assertEqual(mytask.backend.get_status(tid),
                                  states.FAILURE)
         finally:
-            mytask.request.clear()
+            mytask.pop_request()
 
     def test_task_wrapper_mail_attrs(self):
         tw = TaskRequest(mytask.name, uuid(), [], {})

+ 246 - 181
celery/tests/worker/test_worker.py

@@ -9,6 +9,7 @@ from Queue import Empty
 
 from billiard.exceptions import WorkerLostError
 from kombu import Connection
+from kombu.common import QoS, PREFETCH_COUNT_MAX, ignore_errors
 from kombu.exceptions import StdChannelError
 from kombu.transport.base import Message
 from mock import Mock, patch
@@ -16,6 +17,7 @@ from nose import SkipTest
 
 from celery import current_app
 from celery.app.defaults import DEFAULTS
+from celery.bootsteps import RUN, CLOSE, TERMINATE, StartStopStep
 from celery.concurrency.base import BasePool
 from celery.datastructures import AttributeDict
 from celery.exceptions import SystemTerminate
@@ -23,33 +25,51 @@ from celery.task import task as task_dec
 from celery.task import periodic_task as periodic_task_dec
 from celery.utils import uuid
 from celery.worker import WorkController
-from celery.worker.components import Queues, Timers, EvLoop, Pool
+from celery.worker import components
 from celery.worker.buckets import FastQueue
 from celery.worker.job import Request
-from celery.worker.consumer import BlockingConsumer
-from celery.worker.consumer import QoS, RUN, PREFETCH_COUNT_MAX, CLOSE
+from celery.worker import consumer
+from celery.worker.consumer import Consumer
 from celery.utils.serialization import pickle
 from celery.utils.timer2 import Timer
 
 from celery.tests.utils import AppCase, Case
 
 
+def MockStep(step=None):
+    step = Mock() if step is None else step
+    step.namespace = Mock()
+    step.namespace.name = 'MockNS'
+    step.name = 'MockStep'
+    return step
+
+
 class PlaceHolder(object):
         pass
 
 
-class MyKombuConsumer(BlockingConsumer):
+def find_step(obj, typ):
+    return obj.namespace.steps[typ.name]
+
+
+class _MyKombuConsumer(Consumer):
     broadcast_consumer = Mock()
     task_consumer = Mock()
 
     def __init__(self, *args, **kwargs):
         kwargs.setdefault('pool', BasePool(2))
-        super(MyKombuConsumer, self).__init__(*args, **kwargs)
+        super(_MyKombuConsumer, self).__init__(*args, **kwargs)
 
     def restart_heartbeat(self):
         self.heart = None
 
 
+class MyKombuConsumer(Consumer):
+
+    def loop(self, *args, **kwargs):
+        pass
+
+
 class MockNode(object):
     commands = []
 
@@ -166,19 +186,19 @@ class test_QoS(Case):
         self.assertEqual(qos.value, PREFETCH_COUNT_MAX - 1)
 
     def test_consumer_increment_decrement(self):
-        consumer = Mock()
-        qos = QoS(consumer, 10)
+        mconsumer = Mock()
+        qos = QoS(mconsumer.qos, 10)
         qos.update()
         self.assertEqual(qos.value, 10)
-        consumer.qos.assert_called_with(prefetch_count=10)
+        mconsumer.qos.assert_called_with(prefetch_count=10)
         qos.decrement_eventually()
         qos.update()
         self.assertEqual(qos.value, 9)
-        consumer.qos.assert_called_with(prefetch_count=9)
+        mconsumer.qos.assert_called_with(prefetch_count=9)
         qos.decrement_eventually()
         self.assertEqual(qos.value, 8)
-        consumer.qos.assert_called_with(prefetch_count=9)
-        self.assertIn({'prefetch_count': 9}, consumer.qos.call_args)
+        mconsumer.qos.assert_called_with(prefetch_count=9)
+        self.assertIn({'prefetch_count': 9}, mconsumer.qos.call_args)
 
         # Does not decrement 0 value
         qos.value = 0
@@ -188,8 +208,8 @@ class test_QoS(Case):
         self.assertEqual(qos.value, 0)
 
     def test_consumer_decrement_eventually(self):
-        consumer = Mock()
-        qos = QoS(consumer, 10)
+        mconsumer = Mock()
+        qos = QoS(mconsumer.qos, 10)
         qos.decrement_eventually()
         self.assertEqual(qos.value, 9)
         qos.value = 0
@@ -197,8 +217,8 @@ class test_QoS(Case):
         self.assertEqual(qos.value, 0)
 
     def test_set(self):
-        consumer = Mock()
-        qos = QoS(consumer, 10)
+        mconsumer = Mock()
+        qos = QoS(mconsumer.qos, 10)
         qos.set(12)
         self.assertEqual(qos.prev, 12)
         qos.set(qos.prev)
@@ -215,7 +235,8 @@ class test_Consumer(Case):
 
     def test_info(self):
         l = MyKombuConsumer(self.ready_queue, timer=self.timer)
-        l.qos = QoS(l.task_consumer, 10)
+        l.task_consumer = Mock()
+        l.qos = QoS(l.task_consumer.qos, 10)
         info = l.info
         self.assertEqual(info['prefetch_count'], 10)
         self.assertFalse(info['broker'])
@@ -226,90 +247,102 @@ class test_Consumer(Case):
 
     def test_start_when_closed(self):
         l = MyKombuConsumer(self.ready_queue, timer=self.timer)
-        l._state = CLOSE
+        l.namespace.state = CLOSE
         l.start()
 
     def test_connection(self):
         l = MyKombuConsumer(self.ready_queue, timer=self.timer)
 
-        l.reset_connection()
+        l.namespace.start(l)
         self.assertIsInstance(l.connection, Connection)
 
-        l._state = RUN
+        l.namespace.state = RUN
         l.event_dispatcher = None
-        l.stop_consumers(close_connection=False)
+        l.namespace.restart(l)
         self.assertTrue(l.connection)
 
-        l._state = RUN
-        l.stop_consumers()
+        l.namespace.state = RUN
+        l.shutdown()
         self.assertIsNone(l.connection)
         self.assertIsNone(l.task_consumer)
 
-        l.reset_connection()
+        l.namespace.start(l)
         self.assertIsInstance(l.connection, Connection)
-        l.stop_consumers()
+        l.namespace.restart(l)
 
         l.stop()
-        l.close_connection()
+        l.shutdown()
         self.assertIsNone(l.connection)
         self.assertIsNone(l.task_consumer)
 
     def test_close_connection(self):
         l = MyKombuConsumer(self.ready_queue, timer=self.timer)
-        l._state = RUN
-        l.close_connection()
+        l.namespace.state = RUN
+        step = find_step(l, consumer.Connection)
+        conn = l.connection = Mock()
+        step.shutdown(l)
+        self.assertTrue(conn.close.called)
+        self.assertIsNone(l.connection)
 
         l = MyKombuConsumer(self.ready_queue, timer=self.timer)
         eventer = l.event_dispatcher = Mock()
         eventer.enabled = True
         heart = l.heart = MockHeart()
-        l._state = RUN
-        l.stop_consumers()
+        l.namespace.state = RUN
+        Events = find_step(l, consumer.Events)
+        Events.shutdown(l)
+        Heart = find_step(l, consumer.Heart)
+        Heart.shutdown(l)
         self.assertTrue(eventer.close.call_count)
         self.assertTrue(heart.closed)
 
     @patch('celery.worker.consumer.warn')
     def test_receive_message_unknown(self, warn):
-        l = MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l = _MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l.steps.pop()
         backend = Mock()
         m = create_message(backend, unknown={'baz': '!!!'})
         l.event_dispatcher = Mock()
-        l.pidbox_node = MockNode()
+        l.node = MockNode()
 
-        l.receive_message(m.decode(), m)
+        callback = self._get_on_message(l)
+        callback(m.decode(), m)
         self.assertTrue(warn.call_count)
 
-    @patch('celery.utils.timer2.to_timestamp')
+    @patch('celery.worker.consumer.to_timestamp')
     def test_receive_message_eta_OverflowError(self, to_timestamp):
         to_timestamp.side_effect = OverflowError()
-        l = MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l = _MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l.steps.pop()
         m = create_message(Mock(), task=foo_task.name,
                                    args=('2, 2'),
                                    kwargs={},
                                    eta=datetime.now().isoformat())
         l.event_dispatcher = Mock()
-        l.pidbox_node = MockNode()
+        l.node = MockNode()
         l.update_strategies()
 
-        l.receive_message(m.decode(), m)
+        callback = self._get_on_message(l)
+        callback(m.decode(), m)
         self.assertTrue(m.acknowledged)
         self.assertTrue(to_timestamp.call_count)
 
     @patch('celery.worker.consumer.error')
     def test_receive_message_InvalidTaskError(self, error):
-        l = MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l = _MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l.steps.pop()
         m = create_message(Mock(), task=foo_task.name,
                            args=(1, 2), kwargs='foobarbaz', id=1)
         l.update_strategies()
         l.event_dispatcher = Mock()
-        l.pidbox_node = MockNode()
 
-        l.receive_message(m.decode(), m)
+        callback = self._get_on_message(l)
+        callback(m.decode(), m)
         self.assertIn('Received invalid task message', error.call_args[0][0])
 
     @patch('celery.worker.consumer.crit')
     def test_on_decode_error(self, crit):
-        l = MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l = Consumer(self.ready_queue, timer=self.timer)
 
         class MockMessage(Mock):
             content_type = 'application/x-msgpack'
@@ -321,14 +354,25 @@ class test_Consumer(Case):
         self.assertTrue(message.ack.call_count)
         self.assertIn("Can't decode message body", crit.call_args[0][0])
 
+    def _get_on_message(self, l):
+        l.qos = Mock()
+        l.event_dispatcher = Mock()
+        l.task_consumer = Mock()
+        l.connection = Mock()
+        l.connection.drain_events.side_effect = SystemExit()
+
+        with self.assertRaises(SystemExit):
+            l.loop(*l.loop_args())
+        self.assertTrue(l.task_consumer.register_callback.called)
+        return l.task_consumer.register_callback.call_args[0][0]
+
     def test_receieve_message(self):
-        l = MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l = Consumer(self.ready_queue, timer=self.timer)
         m = create_message(Mock(), task=foo_task.name,
                            args=[2, 4, 8], kwargs={})
         l.update_strategies()
-
-        l.event_dispatcher = Mock()
-        l.receive_message(m.decode(), m)
+        callback = self._get_on_message(l)
+        callback(m.decode(), m)
 
         in_bucket = self.ready_queue.get_nowait()
         self.assertIsInstance(in_bucket, Request)
@@ -338,10 +382,10 @@ class test_Consumer(Case):
 
     def test_start_connection_error(self):
 
-        class MockConsumer(BlockingConsumer):
+        class MockConsumer(Consumer):
             iterations = 0
 
-            def consume_messages(self):
+            def loop(self, *args, **kwargs):
                 if not self.iterations:
                     self.iterations = 1
                     raise KeyError('foo')
@@ -359,10 +403,10 @@ class test_Consumer(Case):
         # Regression test for AMQPChannelExceptions that can occur within the
         # consumer. (i.e. 404 errors)
 
-        class MockConsumer(BlockingConsumer):
+        class MockConsumer(Consumer):
             iterations = 0
 
-            def consume_messages(self):
+            def loop(self, *args, **kwargs):
                 if not self.iterations:
                     self.iterations = 1
                     raise KeyError('foo')
@@ -376,7 +420,7 @@ class test_Consumer(Case):
         l.heart.stop()
         l.timer.stop()
 
-    def test_consume_messages_ignores_socket_timeout(self):
+    def test_loop_ignores_socket_timeout(self):
 
         class Connection(current_app.connection().__class__):
             obj = None
@@ -389,10 +433,10 @@ class test_Consumer(Case):
         l.connection = Connection()
         l.task_consumer = Mock()
         l.connection.obj = l
-        l.qos = QoS(l.task_consumer, 10)
-        l.consume_messages()
+        l.qos = QoS(l.task_consumer.qos, 10)
+        l.loop(*l.loop_args())
 
-    def test_consume_messages_when_socket_error(self):
+    def test_loop_when_socket_error(self):
 
         class Connection(current_app.connection().__class__):
             obj = None
@@ -401,20 +445,20 @@ class test_Consumer(Case):
                 self.obj.connection = None
                 raise socket.error('foo')
 
-        l = MyKombuConsumer(self.ready_queue, timer=self.timer)
-        l._state = RUN
+        l = Consumer(self.ready_queue, timer=self.timer)
+        l.namespace.state = RUN
         c = l.connection = Connection()
         l.connection.obj = l
         l.task_consumer = Mock()
-        l.qos = QoS(l.task_consumer, 10)
+        l.qos = QoS(l.task_consumer.qos, 10)
         with self.assertRaises(socket.error):
-            l.consume_messages()
+            l.loop(*l.loop_args())
 
-        l._state = CLOSE
+        l.namespace.state = CLOSE
         l.connection = c
-        l.consume_messages()
+        l.loop(*l.loop_args())
 
-    def test_consume_messages(self):
+    def test_loop(self):
 
         class Connection(current_app.connection().__class__):
             obj = None
@@ -422,17 +466,16 @@ class test_Consumer(Case):
             def drain_events(self, **kwargs):
                 self.obj.connection = None
 
-        l = MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l = Consumer(self.ready_queue, timer=self.timer)
         l.connection = Connection()
         l.connection.obj = l
         l.task_consumer = Mock()
-        l.qos = QoS(l.task_consumer, 10)
+        l.qos = QoS(l.task_consumer.qos, 10)
 
-        l.consume_messages()
-        l.consume_messages()
+        l.loop(*l.loop_args())
+        l.loop(*l.loop_args())
         self.assertTrue(l.task_consumer.consume.call_count)
         l.task_consumer.qos.assert_called_with(prefetch_count=10)
-        l.task_consumer.qos = Mock()
         self.assertEqual(l.qos.value, 10)
         l.qos.decrement_eventually()
         self.assertEqual(l.qos.value, 9)
@@ -440,15 +483,15 @@ class test_Consumer(Case):
         self.assertEqual(l.qos.value, 9)
         l.task_consumer.qos.assert_called_with(prefetch_count=9)
 
-    def test_maybe_conn_error(self):
+    def test_ignore_errors(self):
         l = MyKombuConsumer(self.ready_queue, timer=self.timer)
         l.connection_errors = (KeyError, )
         l.channel_errors = (SyntaxError, )
-        l.maybe_conn_error(Mock(side_effect=AttributeError('foo')))
-        l.maybe_conn_error(Mock(side_effect=KeyError('foo')))
-        l.maybe_conn_error(Mock(side_effect=SyntaxError('foo')))
+        ignore_errors(l, Mock(side_effect=AttributeError('foo')))
+        ignore_errors(l, Mock(side_effect=KeyError('foo')))
+        ignore_errors(l, Mock(side_effect=SyntaxError('foo')))
         with self.assertRaises(IndexError):
-            l.maybe_conn_error(Mock(side_effect=IndexError('foo')))
+            ignore_errors(l, Mock(side_effect=IndexError('foo')))
 
     def test_apply_eta_task(self):
         from celery.worker import state
@@ -463,18 +506,20 @@ class test_Consumer(Case):
         self.assertIs(self.ready_queue.get_nowait(), task)
 
     def test_receieve_message_eta_isoformat(self):
-        l = MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l = _MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l.steps.pop()
         m = create_message(Mock(), task=foo_task.name,
                            eta=datetime.now().isoformat(),
                            args=[2, 4, 8], kwargs={})
 
         l.task_consumer = Mock()
-        l.qos = QoS(l.task_consumer, l.initial_prefetch_count)
+        l.qos = QoS(l.task_consumer.qos, 1)
         current_pcount = l.qos.value
         l.event_dispatcher = Mock()
         l.enabled = False
         l.update_strategies()
-        l.receive_message(m.decode(), m)
+        callback = self._get_on_message(l)
+        callback(m.decode(), m)
         l.timer.stop()
         l.timer.join(1)
 
@@ -487,28 +532,30 @@ class test_Consumer(Case):
         self.assertGreater(l.qos.value, current_pcount)
         l.timer.stop()
 
-    def test_on_control(self):
+    def test_pidbox_callback(self):
         l = MyKombuConsumer(self.ready_queue, timer=self.timer)
-        l.pidbox_node = Mock()
-        l.reset_pidbox_node = Mock()
+        con = find_step(l, consumer.Control).box
+        con.node = Mock()
+        con.reset = Mock()
 
-        l.on_control('foo', 'bar')
-        l.pidbox_node.handle_message.assert_called_with('foo', 'bar')
+        con.on_message('foo', 'bar')
+        con.node.handle_message.assert_called_with('foo', 'bar')
 
-        l.pidbox_node = Mock()
-        l.pidbox_node.handle_message.side_effect = KeyError('foo')
-        l.on_control('foo', 'bar')
-        l.pidbox_node.handle_message.assert_called_with('foo', 'bar')
+        con.node = Mock()
+        con.node.handle_message.side_effect = KeyError('foo')
+        con.on_message('foo', 'bar')
+        con.node.handle_message.assert_called_with('foo', 'bar')
 
-        l.pidbox_node = Mock()
-        l.pidbox_node.handle_message.side_effect = ValueError('foo')
-        l.on_control('foo', 'bar')
-        l.pidbox_node.handle_message.assert_called_with('foo', 'bar')
-        l.reset_pidbox_node.assert_called_with()
+        con.node = Mock()
+        con.node.handle_message.side_effect = ValueError('foo')
+        con.on_message('foo', 'bar')
+        con.node.handle_message.assert_called_with('foo', 'bar')
+        self.assertTrue(con.reset.called)
 
     def test_revoke(self):
         ready_queue = FastQueue()
-        l = MyKombuConsumer(ready_queue, timer=self.timer)
+        l = _MyKombuConsumer(ready_queue, timer=self.timer)
+        l.steps.pop()
         backend = Mock()
         id = uuid()
         t = create_message(backend, task=foo_task.name, args=[2, 4, 8],
@@ -516,16 +563,19 @@ class test_Consumer(Case):
         from celery.worker.state import revoked
         revoked.add(id)
 
-        l.receive_message(t.decode(), t)
+        callback = self._get_on_message(l)
+        callback(t.decode(), t)
         self.assertTrue(ready_queue.empty())
 
     def test_receieve_message_not_registered(self):
-        l = MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l = _MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l.steps.pop()
         backend = Mock()
         m = create_message(backend, task='x.X.31x', args=[2, 4, 8], kwargs={})
 
         l.event_dispatcher = Mock()
-        self.assertFalse(l.receive_message(m.decode(), m))
+        callback = self._get_on_message(l)
+        self.assertFalse(callback(m.decode(), m))
         with self.assertRaises(Empty):
             self.ready_queue.get_nowait()
         self.assertTrue(self.timer.empty())
@@ -533,7 +583,7 @@ class test_Consumer(Case):
     @patch('celery.worker.consumer.warn')
     @patch('celery.worker.consumer.logger')
     def test_receieve_message_ack_raises(self, logger, warn):
-        l = MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l = Consumer(self.ready_queue, timer=self.timer)
         backend = Mock()
         m = create_message(backend, args=[2, 4, 8], kwargs={})
 
@@ -541,7 +591,8 @@ class test_Consumer(Case):
         l.connection_errors = (socket.error, )
         m.reject = Mock()
         m.reject.side_effect = socket.error('foo')
-        self.assertFalse(l.receive_message(m.decode(), m))
+        callback = self._get_on_message(l)
+        self.assertFalse(callback(m.decode(), m))
         self.assertTrue(warn.call_count)
         with self.assertRaises(Empty):
             self.ready_queue.get_nowait()
@@ -550,7 +601,8 @@ class test_Consumer(Case):
         self.assertTrue(logger.critical.call_count)
 
     def test_receieve_message_eta(self):
-        l = MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l = _MyKombuConsumer(self.ready_queue, timer=self.timer)
+        l.steps.pop()
         l.event_dispatcher = Mock()
         l.event_dispatcher._outbound_buffer = deque()
         backend = Mock()
@@ -559,16 +611,17 @@ class test_Consumer(Case):
                            eta=(datetime.now() +
                                timedelta(days=1)).isoformat())
 
-        l.reset_connection()
+        l.namespace.start(l)
         p = l.app.conf.BROKER_CONNECTION_RETRY
         l.app.conf.BROKER_CONNECTION_RETRY = False
         try:
-            l.reset_connection()
+            l.namespace.start(l)
         finally:
             l.app.conf.BROKER_CONNECTION_RETRY = p
-        l.stop_consumers()
+        l.namespace.restart(l)
         l.event_dispatcher = Mock()
-        l.receive_message(m.decode(), m)
+        callback = self._get_on_message(l)
+        callback(m.decode(), m)
         l.timer.stop()
         in_hold = l.timer.queue[0]
         self.assertEqual(len(in_hold), 3)
@@ -582,24 +635,33 @@ class test_Consumer(Case):
 
     def test_reset_pidbox_node(self):
         l = MyKombuConsumer(self.ready_queue, timer=self.timer)
-        l.pidbox_node = Mock()
-        chan = l.pidbox_node.channel = Mock()
+        con = find_step(l, consumer.Control).box
+        con.node = Mock()
+        chan = con.node.channel = Mock()
         l.connection = Mock()
         chan.close.side_effect = socket.error('foo')
         l.connection_errors = (socket.error, )
-        l.reset_pidbox_node()
+        con.reset()
         chan.close.assert_called_with()
 
     def test_reset_pidbox_node_green(self):
-        l = MyKombuConsumer(self.ready_queue, timer=self.timer)
-        l.pool = Mock()
-        l.pool.is_green = True
-        l.reset_pidbox_node()
-        l.pool.spawn_n.assert_called_with(l._green_pidbox_node)
+        from celery.worker.pidbox import gPidbox
+        pool = Mock()
+        pool.is_green = True
+        l = MyKombuConsumer(self.ready_queue, timer=self.timer, pool=pool)
+        con = find_step(l, consumer.Control)
+        self.assertIsInstance(con.box, gPidbox)
+        con.start(l)
+        l.pool.spawn_n.assert_called_with(
+            con.box.loop, l,
+        )
 
     def test__green_pidbox_node(self):
-        l = MyKombuConsumer(self.ready_queue, timer=self.timer)
-        l.pidbox_node = Mock()
+        pool = Mock()
+        pool.is_green = True
+        l = MyKombuConsumer(self.ready_queue, timer=self.timer, pool=pool)
+        l.node = Mock()
+        controller = find_step(l, consumer.Control)
 
         class BConsumer(Mock):
 
@@ -610,7 +672,7 @@ class test_Consumer(Case):
             def __exit__(self, *exc_info):
                 self.cancel()
 
-        l.pidbox_node.listen = BConsumer()
+        controller.box.node.listen = BConsumer()
         connections = []
 
         class Connection(object):
@@ -639,25 +701,26 @@ class test_Consumer(Case):
                     self.calls += 1
                     raise socket.timeout()
                 self.obj.connection = None
-                self.obj._pidbox_node_shutdown.set()
+                controller.box._node_shutdown.set()
 
             def close(self):
                 self.closed = True
 
         l.connection = Mock()
-        l._open_connection = lambda: Connection(obj=l)
-        l._green_pidbox_node()
+        l.connect = lambda: Connection(obj=l)
+        controller = find_step(l, consumer.Control)
+        controller.box.loop(l)
 
-        l.pidbox_node.listen.assert_called_with(callback=l.on_control)
-        self.assertTrue(l.broadcast_consumer)
-        l.broadcast_consumer.consume.assert_called_with()
+        self.assertTrue(controller.box.node.listen.called)
+        self.assertTrue(controller.box.consumer)
+        controller.box.consumer.consume.assert_called_with()
 
         self.assertIsNone(l.connection)
         self.assertTrue(connections[0].closed)
 
     @patch('kombu.connection.Connection._establish_connection')
     @patch('kombu.utils.sleep')
-    def test_open_connection_errback(self, sleep, connect):
+    def test_connect_errback(self, sleep, connect):
         l = MyKombuConsumer(self.ready_queue, timer=self.timer)
         from kombu.transport.memory import Transport
         Transport.connection_errors = (StdChannelError, )
@@ -667,17 +730,18 @@ class test_Consumer(Case):
                 return
             raise StdChannelError()
         connect.side_effect = effect
-        l._open_connection()
+        l.connect()
         connect.assert_called_with()
 
     def test_stop_pidbox_node(self):
         l = MyKombuConsumer(self.ready_queue, timer=self.timer)
-        l._pidbox_node_stopped = Event()
-        l._pidbox_node_shutdown = Event()
-        l._pidbox_node_stopped.set()
-        l.stop_pidbox_node()
+        cont = find_step(l, consumer.Control)
+        cont._node_stopped = Event()
+        cont._node_shutdown = Event()
+        cont._node_stopped.set()
+        cont.stop(l)
 
-    def test_start__consume_messages(self):
+    def test_start__loop(self):
 
         class _QoS(object):
             prev = 3
@@ -702,18 +766,17 @@ class test_Consumer(Case):
         l.connection = Connection()
         l.iterations = 0
 
-        def raises_KeyError(limit=None):
+        def raises_KeyError(*args, **kwargs):
             l.iterations += 1
             if l.qos.prev != l.qos.value:
                 l.qos.update()
             if l.iterations >= 2:
                 raise KeyError('foo')
 
-        l.consume_messages = raises_KeyError
+        l.loop = raises_KeyError
         with self.assertRaises(KeyError):
             l.start()
-        self.assertTrue(init_callback.call_count)
-        self.assertEqual(l.iterations, 1)
+        self.assertEqual(l.iterations, 2)
         self.assertEqual(l.qos.prev, l.qos.value)
 
         init_callback.reset_mock()
@@ -723,25 +786,25 @@ class test_Consumer(Case):
         l.task_consumer = Mock()
         l.broadcast_consumer = Mock()
         l.connection = Connection()
-        l.consume_messages = Mock(side_effect=socket.error('foo'))
+        l.loop = Mock(side_effect=socket.error('foo'))
         with self.assertRaises(socket.error):
             l.start()
-        self.assertTrue(init_callback.call_count)
-        self.assertTrue(l.consume_messages.call_count)
+        self.assertTrue(l.loop.call_count)
 
     def test_reset_connection_with_no_node(self):
-        l = BlockingConsumer(self.ready_queue, timer=self.timer)
+        l = Consumer(self.ready_queue, timer=self.timer)
+        l.steps.pop()
         self.assertEqual(None, l.pool)
-        l.reset_connection()
+        l.namespace.start(l)
 
     def test_on_task_revoked(self):
-        l = BlockingConsumer(self.ready_queue, timer=self.timer)
+        l = Consumer(self.ready_queue, timer=self.timer)
         task = Mock()
         task.revoked.return_value = True
         l.on_task(task)
 
     def test_on_task_no_events(self):
-        l = BlockingConsumer(self.ready_queue, timer=self.timer)
+        l = Consumer(self.ready_queue, timer=self.timer)
         task = Mock()
         task.revoked.return_value = False
         l.event_dispatcher = Mock()
@@ -756,7 +819,6 @@ class test_WorkController(AppCase):
     def setup(self):
         self.worker = self.create_worker()
         from celery import worker
-        from celery.worker import components
         self._logger = worker.logger
         self._comp_logger = components.logger
         self.logger = worker.logger = Mock()
@@ -764,20 +826,19 @@ class test_WorkController(AppCase):
 
     def teardown(self):
         from celery import worker
-        from celery.worker import components
         worker.logger = self._logger
         components.logger = self._comp_logger
 
     def create_worker(self, **kw):
         worker = self.app.WorkController(concurrency=1, loglevel=0, **kw)
-        worker._shutdown_complete.set()
+        worker.namespace.shutdown_complete.set()
         return worker
 
     @patch('celery.platforms.create_pidlock')
     def test_use_pidfile(self, create_pidlock):
         create_pidlock.return_value = Mock()
         worker = self.create_worker(pidfile='pidfilelockfilepid')
-        worker.components = []
+        worker.steps = []
         worker.start()
         self.assertTrue(create_pidlock.called)
         worker.stop()
@@ -824,12 +885,12 @@ class test_WorkController(AppCase):
         self.assertTrue(worker.pool)
         self.assertTrue(worker.consumer)
         self.assertTrue(worker.mediator)
-        self.assertTrue(worker.components)
+        self.assertTrue(worker.steps)
 
     def test_with_embedded_celerybeat(self):
         worker = WorkController(concurrency=1, loglevel=0, beat=True)
         self.assertTrue(worker.beat)
-        self.assertIn(worker.beat, worker.components)
+        self.assertIn(worker.beat, [w.obj for w in worker.steps])
 
     def test_with_autoscaler(self):
         worker = self.create_worker(autoscale=[10, 3], send_events=False,
@@ -839,17 +900,17 @@ class test_WorkController(AppCase):
     def test_dont_stop_or_terminate(self):
         worker = WorkController(concurrency=1, loglevel=0)
         worker.stop()
-        self.assertNotEqual(worker._state, worker.CLOSE)
+        self.assertNotEqual(worker.namespace.state, CLOSE)
         worker.terminate()
-        self.assertNotEqual(worker._state, worker.CLOSE)
+        self.assertNotEqual(worker.namespace.state, CLOSE)
 
         sigsafe, worker.pool.signal_safe = worker.pool.signal_safe, False
         try:
-            worker._state = worker.RUN
+            worker.namespace.state = RUN
             worker.stop(in_sighandler=True)
-            self.assertNotEqual(worker._state, worker.CLOSE)
+            self.assertNotEqual(worker.namespace.state, CLOSE)
             worker.terminate(in_sighandler=True)
-            self.assertNotEqual(worker._state, worker.CLOSE)
+            self.assertNotEqual(worker.namespace.state, CLOSE)
         finally:
             worker.pool.signal_safe = sigsafe
 
@@ -859,14 +920,14 @@ class test_WorkController(AppCase):
         try:
             raise KeyError('foo')
         except KeyError as exc:
-            Timers(worker).on_timer_error(exc)
+            components.Timer(worker).on_timer_error(exc)
             msg, args = self.comp_logger.error.call_args[0]
             self.assertIn('KeyError', msg % args)
 
     def test_on_timer_tick(self):
         worker = WorkController(concurrency=1, loglevel=10)
 
-        Timers(worker).on_timer_tick(30.0)
+        components.Timer(worker).on_timer_tick(30.0)
         xargs = self.comp_logger.debug.call_args[0]
         fmt, arg = xargs[0], xargs[1]
         self.assertEqual(30.0, arg)
@@ -891,11 +952,11 @@ class test_WorkController(AppCase):
         m = create_message(backend, task=foo_task.name, args=[4, 8, 10],
                            kwargs={})
         task = Request.from_message(m, m.decode())
-        worker.components = []
-        worker._state = worker.RUN
+        worker.steps = []
+        worker.namespace.state = RUN
         with self.assertRaises(KeyboardInterrupt):
             worker.process_task(task)
-        self.assertEqual(worker._state, worker.TERMINATE)
+        self.assertEqual(worker.namespace.state, TERMINATE)
 
     def test_process_task_raise_SystemTerminate(self):
         worker = self.worker
@@ -905,11 +966,11 @@ class test_WorkController(AppCase):
         m = create_message(backend, task=foo_task.name, args=[4, 8, 10],
                            kwargs={})
         task = Request.from_message(m, m.decode())
-        worker.components = []
-        worker._state = worker.RUN
+        worker.steps = []
+        worker.namespace.state = RUN
         with self.assertRaises(SystemExit):
             worker.process_task(task)
-        self.assertEqual(worker._state, worker.TERMINATE)
+        self.assertEqual(worker.namespace.state, TERMINATE)
 
     def test_process_task_raise_regular(self):
         worker = self.worker
@@ -924,17 +985,18 @@ class test_WorkController(AppCase):
 
     def test_start_catches_base_exceptions(self):
         worker1 = self.create_worker()
-        stc = Mock()
+        stc = MockStep()
         stc.start.side_effect = SystemTerminate()
-        worker1.components = [stc]
+        worker1.steps = [stc]
         worker1.start()
+        stc.start.assert_called_with(worker1)
         self.assertTrue(stc.terminate.call_count)
 
         worker2 = self.create_worker()
-        sec = Mock()
+        sec = MockStep()
         sec.start.side_effect = SystemExit()
         sec.terminate = None
-        worker2.components = [sec]
+        worker2.steps = [sec]
         worker2.start()
         self.assertTrue(sec.stop.call_count)
 
@@ -986,14 +1048,19 @@ class test_WorkController(AppCase):
 
     def test_start__stop(self):
         worker = self.worker
-        worker._shutdown_complete.set()
-        worker.components = [Mock(), Mock(), Mock(), Mock()]
+        worker.namespace.shutdown_complete.set()
+        worker.steps = [MockStep(StartStopStep(self)) for _ in range(4)]
+        worker.namespace.state = RUN
+        worker.namespace.started = 4
+        for w in worker.steps:
+            w.start = Mock()
+            w.stop = Mock()
 
         worker.start()
-        for w in worker.components:
+        for w in worker.steps:
             self.assertTrue(w.start.call_count)
         worker.stop()
-        for component in worker.components:
+        for w in worker.steps:
             self.assertTrue(w.stop.call_count)
 
         # Doesn't close pool if no pool.
@@ -1002,15 +1069,15 @@ class test_WorkController(AppCase):
         worker.stop()
 
         # test that stop of None is not attempted
-        worker.components[-1] = None
+        worker.steps[-1] = None
         worker.start()
         worker.stop()
 
-    def test_component_raises(self):
+    def test_step_raises(self):
         worker = self.worker
-        comp = Mock()
-        worker.components = [comp]
-        comp.start.side_effect = TypeError()
+        step = Mock()
+        worker.steps = [step]
+        step.start.side_effect = TypeError()
         worker.stop = Mock()
         worker.start()
         worker.stop.assert_called_with()
@@ -1020,36 +1087,34 @@ class test_WorkController(AppCase):
 
     def test_start__terminate(self):
         worker = self.worker
-        worker._shutdown_complete.set()
-        worker.components = [Mock(), Mock(), Mock(), Mock(), Mock()]
-        for component in worker.components[:3]:
-            component.terminate = None
-
+        worker.namespace.shutdown_complete.set()
+        worker.namespace.started = 5
+        worker.namespace.state = RUN
+        worker.steps = [MockStep() for _ in range(5)]
         worker.start()
-        for w in worker.components[:3]:
+        for w in worker.steps[:3]:
             self.assertTrue(w.start.call_count)
-        self.assertTrue(worker._running, len(worker.components))
-        self.assertEqual(worker._state, RUN)
+        self.assertTrue(worker.namespace.started, len(worker.steps))
+        self.assertEqual(worker.namespace.state, RUN)
         worker.terminate()
-        for component in worker.components[:3]:
-            self.assertTrue(component.stop.call_count)
-        self.assertTrue(worker.components[4].terminate.call_count)
+        for step in worker.steps:
+            self.assertTrue(step.terminate.call_count)
 
     def test_Queues_pool_not_rlimit_safe(self):
         w = Mock()
         w.pool_cls.rlimit_safe = False
-        Queues(w).create(w)
+        components.Queues(w).create(w)
         self.assertTrue(w.disable_rate_limits)
 
     def test_Queues_pool_no_sem(self):
         w = Mock()
         w.pool_cls.uses_semaphore = False
-        Queues(w).create(w)
+        components.Queues(w).create(w)
         self.assertIs(w.ready_queue.put, w.process_task)
 
-    def test_EvLoop_crate(self):
+    def test_Hub_crate(self):
         w = Mock()
-        x = EvLoop(w)
+        x = components.Hub(w)
         hub = x.create(w)
         self.assertTrue(w.timer.max_interval)
         self.assertIs(w.hub, hub)
@@ -1058,7 +1123,7 @@ class test_WorkController(AppCase):
         w = Mock()
         w.pool_cls = Mock()
         w.use_eventloop = False
-        pool = Pool(w)
+        pool = components.Pool(w)
         pool.create(w)
 
     def test_Pool_create(self):
@@ -1070,7 +1135,7 @@ class test_WorkController(AppCase):
         P = w.pool_cls.return_value = Mock()
         P.timers = {Mock(): 30}
         w.use_eventloop = True
-        pool = Pool(w)
+        pool = components.Pool(w)
         pool.create(w)
         self.assertIsInstance(w.semaphore, BoundedSemaphore)
         self.assertTrue(w.hub.on_init)
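
A pattern repeated throughout this file: QoS now comes from kombu.common and wraps the qos *callable* instead of the consumer or channel object, while consumer internals are reached through bootstep lookups such as namespace.steps. The QoS change in isolation, with a Mock standing in for a kombu Consumer:

    from kombu.common import QoS
    from mock import Mock

    task_consumer = Mock()
    qos = QoS(task_consumer.qos, 10)   # pass the callable, not the consumer
    qos.update()                       # -> task_consumer.qos(prefetch_count=10)
    qos.decrement_eventually()         # recorded locally...
    qos.update()                       # ...and applied on the next update()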

+ 4 - 12
celery/utils/imports.py

@@ -24,18 +24,10 @@ class NotAPackage(Exception):
     pass
 
 
-if sys.version_info >= (3, 3):  # pragma: no cover
-
-    def qualname(obj):
-        return obj.__qualname__
-
-else:
-
-    def qualname(obj):  # noqa
-        if not hasattr(obj, '__name__') and hasattr(obj, '__class__'):
-            return qualname(obj.__class__)
-
-        return '%s.%s' % (obj.__module__, obj.__name__)
+def qualname(obj):  # noqa
+    if not hasattr(obj, '__name__') and hasattr(obj, '__class__'):
+        obj = obj.__class__
+    return '%s.%s' % (obj.__module__, obj.__name__)
 
 
 def instantiate(name, *args, **kwargs):
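
With the version check gone, qualname() always builds a 'module.name' string, falling back to the instance's class when the object has no __name__. For example:

    from celery.utils.imports import qualname

    class Worker(object):
        pass

    assert qualname(Worker) == Worker.__module__ + '.Worker'
    assert qualname(Worker()) == qualname(Worker)   # instances resolve via __class__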

+ 11 - 0
celery/utils/threads.py

@@ -9,10 +9,13 @@
 from __future__ import absolute_import, print_function
 
 import os
+import socket
 import sys
 import threading
 import traceback
 
+from contextlib import contextmanager
+
 from celery.local import Proxy
 from celery.utils.compat import THREAD_TIMEOUT_MAX
 
@@ -284,6 +287,14 @@ class LocalManager(object):
             self.__class__.__name__, len(self.locals))
 
 
+@contextmanager
+def default_socket_timeout(timeout):
+    prev = socket.getdefaulttimeout()
+    socket.setdefaulttimeout(timeout)
+    yield
+    socket.setdefaulttimeout(prev)
+
+
 class _FastLocalStack(threading.local):
 
     def __init__(self):
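
default_socket_timeout() temporarily overrides the process-wide socket timeout and restores the previous value afterwards; the worker shutdown path uses it instead of juggling getdefaulttimeout/setdefaulttimeout inline. Usage, with a hypothetical cleanup call:

    from celery.utils.threads import default_socket_timeout

    with default_socket_timeout(5.0):
        # blocking socket operations in here time out after 5 seconds
        close_broker_connections()   # hypothetical cleanup routine
    # the previous default timeout is restored on normal exit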

+ 10 - 7
celery/utils/timer2.py

@@ -14,12 +14,13 @@ import os
 import sys
 import threading
 
-from datetime import datetime, timedelta
+from datetime import datetime
 from functools import wraps
 from itertools import count, imap
-from time import time, sleep, mktime
+from time import time, sleep
 
 from celery.utils.compat import THREAD_TIMEOUT_MAX
+from celery.utils.timeutils import timedelta_seconds, timezone
 from kombu.log import get_logger
 
 VERSION = (1, 0, 0)
@@ -31,6 +32,7 @@ __docformat__ = 'restructuredtext'
 
 DEFAULT_MAX_INTERVAL = 2
 TIMER_DEBUG = os.environ.get('TIMER_DEBUG')
+EPOCH = datetime.utcfromtimestamp(0).replace(tzinfo=timezone.utc)
 
 logger = get_logger('timer2')
 
@@ -71,7 +73,9 @@ class Entry(object):
 
 def to_timestamp(d):
     if isinstance(d, datetime):
-        return mktime(d.timetuple())
+        if d.tzinfo is None:
+            d = d.replace(tzinfo=timezone.utc)
+        return timedelta_seconds(d - EPOCH)
     return d
 
 
@@ -110,11 +114,11 @@ class Schedule(object):
 
         """
         if eta is None:
-            eta = datetime.now()
+            eta = time()
         if isinstance(eta, datetime):
             try:
                 eta = to_timestamp(eta)
-            except OverflowError as exc:
+            except Exception as exc:
                 if not self.handle_error(exc):
                     raise
                 return
@@ -128,8 +132,7 @@ class Schedule(object):
         return self.enter(self.Entry(fun, args, kwargs), eta, priority)
 
     def enter_after(self, msecs, entry, priority=0):
-        eta = datetime.now() + timedelta(seconds=msecs / 1000.0)
-        return self.enter(entry, eta, priority)
+        return self.enter(entry, time() + (msecs / 1000.0), priority)
 
     def apply_after(self, msecs, fun, args=(), kwargs={}, priority=0):
         return self.enter_after(msecs, self.Entry(fun, args, kwargs), priority)
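
to_timestamp() now measures seconds from a fixed UTC epoch rather than going through time.mktime, so naive datetimes are interpreted as UTC and plain numbers pass straight through:

    from datetime import datetime
    from celery.utils.timer2 import to_timestamp

    assert to_timestamp(123.45) == 123.45                      # numbers unchanged
    assert to_timestamp(datetime.utcfromtimestamp(0)) == 0.0   # naive => UTC
    assert to_timestamp(datetime.utcfromtimestamp(90)) == 90.0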

+ 6 - 10
celery/utils/timeutils.py

@@ -90,13 +90,6 @@ class LocalTimezone(tzinfo):
         return tt.tm_isdst > 0
 
 
-def _get_local_timezone():
-    global _local_timezone
-    if _local_timezone is None:
-        _local_timezone = LocalTimezone()
-    return _local_timezone
-
-
 class _Zone(object):
 
     def tz_or_local(self, tzinfo=None):
@@ -109,10 +102,13 @@ class _Zone(object):
             dt = make_aware(dt, orig or self.utc)
         return localize(dt, self.tz_or_local(local))
 
+    def to_system(self, dt):
+        return localize(dt, self.local)
+
     def to_local_fallback(self, dt, *args, **kwargs):
         if is_naive(dt):
-            return make_aware(dt, _get_local_timezone())
-        return localize(dt, _get_local_timezone())
+            return make_aware(dt, self.local)
+        return localize(dt, self.local)
 
     def get_timezone(self, zone):
         if isinstance(zone, basestring):
@@ -121,7 +117,7 @@ class _Zone(object):
 
     @cached_property
     def local(self):
-        return _get_local_timezone()
+        return LocalTimezone()
 
     @cached_property
     def utc(self):
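
The module-level timezone helper now creates LocalTimezone lazily through the cached local property, and the new to_system() converts an aware datetime into that local zone. Roughly, as a sketch of the helper this hunk touches:

    from datetime import datetime
    from celery.utils.timeutils import timezone

    utc_now = datetime.utcnow().replace(tzinfo=timezone.utc)
    local_now = timezone.to_system(utc_now)   # same instant, host-local tzinfo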

+ 68 - 113
celery/worker/__init__.py

@@ -5,8 +5,8 @@
 
     :class:`WorkController` can be used to instantiate in-process workers.
 
-    The worker consists of several components, all managed by boot-steps
-    (mod:`celery.worker.bootsteps`).
+    The worker consists of several components, all managed by bootsteps
+    (mod:`celery.bootsteps`).
 
 """
 from __future__ import absolute_import
@@ -15,12 +15,11 @@ import socket
 import sys
 import traceback
 
-from threading import Event
-
 from billiard import cpu_count
 from kombu.syn import detect_environment
 from kombu.utils.finalize import Finalize
 
+from celery import bootsteps
 from celery import concurrency as _concurrency
 from celery import platforms
 from celery import signals
@@ -30,23 +29,11 @@ from celery.exceptions import (
     ImproperlyConfigured, SystemTerminate, TaskRevokedError,
 )
 from celery.utils import worker_direct
-from celery.utils.imports import qualname, reload_from_cwd
+from celery.utils.imports import reload_from_cwd
 from celery.utils.log import mlevel, worker_logger as logger
 
-from . import bootsteps
 from . import state
 
-try:
-    from greenlet import GreenletExit
-    IGNORE_ERRORS = (GreenletExit, )
-except ImportError:  # pragma: no cover
-    IGNORE_ERRORS = ()
-
-#: Worker states
-RUN = 0x1
-CLOSE = 0x2
-TERMINATE = 0x3
-
 UNKNOWN_QUEUE = """\
 Trying to select queue subset of {0!r}, but queue {1} is not
 defined in the CELERY_QUEUES setting.
@@ -55,34 +42,9 @@ If you want to automatically declare unknown queues you can
 enable the CELERY_CREATE_MISSING_QUEUES setting.
 """
 
-#: Default socket timeout at shutdown.
-SHUTDOWN_SOCKET_TIMEOUT = 5.0
-
-
-class Namespace(bootsteps.Namespace):
-    """This is the boot-step namespace of the :class:`WorkController`.
-
-    It loads modules from :setting:`CELERYD_BOOT_STEPS`, and its
-    own set of built-in boot-step modules.
-
-    """
-    name = 'worker'
-    builtin_boot_steps = ('celery.worker.components',
-                          'celery.worker.autoscale',
-                          'celery.worker.autoreload',
-                          'celery.worker.consumer',
-                          'celery.worker.mediator')
-
-    def modules(self):
-        return self.builtin_boot_steps + self.app.conf.CELERYD_BOOT_STEPS
-
 
 class WorkController(configurated):
     """Unmanaged worker instance."""
-    RUN = RUN
-    CLOSE = CLOSE
-    TERMINATE = TERMINATE
-
     app = None
     concurrency = from_config()
     loglevel = from_config('log_level')
@@ -108,32 +70,43 @@ class WorkController(configurated):
     disable_rate_limits = from_config()
     worker_lost_wait = from_config()
 
-    _state = None
-    _running = 0
     pidlock = None
 
+    class Namespace(bootsteps.Namespace):
+        """This is the bootstep namespace for the
+        :class:`WorkController` class.
+
+        It loads modules from :setting:`CELERYD_BOOTSTEPS`, and its
+        own set of built-in bootsteps.
+
+        """
+        name = 'Worker'
+        default_steps = set([
+            'celery.worker.components:Hub',
+            'celery.worker.components:Queues',
+            'celery.worker.components:Pool',
+            'celery.worker.components:Beat',
+            'celery.worker.components:Timer',
+            'celery.worker.components:StateDB',
+            'celery.worker.components:Consumer',
+            'celery.worker.autoscale:WorkerComponent',
+            'celery.worker.autoreload:WorkerComponent',
+            'celery.worker.mediator:WorkerComponent',
+
+        ])
+
     def __init__(self, app=None, hostname=None, **kwargs):
         self.app = app_or_default(app or self.app)
         self.hostname = hostname or socket.gethostname()
+        self.app.loader.init_worker()
         self.on_before_init(**kwargs)
 
         self._finalize = Finalize(self, self.stop, exitpriority=1)
-        self._shutdown_complete = Event()
         self.setup_instance(**self.prepare_args(**kwargs))
 
-    def on_before_init(self, **kwargs):
-        pass
-
-    def on_start(self):
-        pass
-
-    def on_consumer_ready(self, consumer):
-        pass
-
     def setup_instance(self, queues=None, ready_callback=None,
             pidfile=None, include=None, **kwargs):
         self.pidfile = pidfile
-        self.app.loader.init_worker()
         self.setup_defaults(kwargs, namespace='celeryd')
         self.setup_queues(queues)
         self.setup_includes(include)
@@ -149,13 +122,42 @@ class WorkController(configurated):
         self.loglevel = mlevel(self.loglevel)
         self.ready_callback = ready_callback or self.on_consumer_ready
         self.use_eventloop = self.should_use_eventloop()
+        self.options = kwargs
 
         signals.worker_init.send(sender=self)
 
-        # Initialize boot steps
+        # Initialize bootsteps
         self.pool_cls = _concurrency.get_implementation(self.pool_cls)
-        self.components = []
-        self.namespace = Namespace(app=self.app).apply(self, **kwargs)
+        self.steps = []
+        self.on_init_namespace()
+        self.namespace = self.Namespace(app=self.app,
+                                        on_start=self.on_start,
+                                        on_close=self.on_close,
+                                        on_stopped=self.on_stopped)
+        self.namespace.apply(self, **kwargs)
+
+    def on_init_namespace(self):
+        pass
+
+    def on_before_init(self, **kwargs):
+        pass
+
+    def on_start(self):
+        if self.pidfile:
+            self.pidlock = platforms.create_pidlock(self.pidfile)
+
+    def on_consumer_ready(self, consumer):
+        pass
+
+    def on_close(self):
+        self.app.loader.shutdown_worker()
+
+    def on_stopped(self):
+        self.timer.stop()
+        self.consumer.shutdown()
+
+        if self.pidlock:
+            self.pidlock.release()
 
     def setup_queues(self, queues):
         if isinstance(queues, basestring):
@@ -187,34 +189,16 @@ class WorkController(configurated):
 
     def start(self):
         """Starts the workers main loop."""
-        self.on_start()
-        self._state = self.RUN
-        if self.pidfile:
-            self.pidlock = platforms.create_pidlock(self.pidfile)
         try:
-            for i, component in enumerate(self.components):
-                logger.debug('Starting %s...', qualname(component))
-                self._running = i + 1
-                if component:
-                    component.start()
-                logger.debug('%s OK!', qualname(component))
+            self.namespace.start(self)
         except SystemTerminate:
             self.terminate()
         except Exception as exc:
-            logger.error('Unrecoverable error: %r', exc,
-                         exc_info=True)
+            logger.error('Unrecoverable error: %r', exc, exc_info=True)
             self.stop()
         except (KeyboardInterrupt, SystemExit):
             self.stop()
 
-        try:
-            # Will only get here if running green,
-            # makes sure all greenthreads have exited.
-            self._shutdown_complete.wait()
-        except IGNORE_ERRORS:
-            pass
-    run = start   # XXX Compat
-
     def process_task_sem(self, req):
         return self._quick_acquire(self.process_task, req)
 
@@ -260,41 +244,8 @@ class WorkController(configurated):
             self._shutdown(warm=False)
 
     def _shutdown(self, warm=True):
-        what = 'Stopping' if warm else 'Terminating'
-        socket_timeout = socket.getdefaulttimeout()
-        socket.setdefaulttimeout(SHUTDOWN_SOCKET_TIMEOUT)  # Issue 975
-
-        if self._state in (self.CLOSE, self.TERMINATE):
-            return
-
-        self.app.loader.shutdown_worker()
-
-        if self.pool:
-            self.pool.close()
-
-        if self._state != self.RUN or self._running != len(self.components):
-            # Not fully started, can safely exit.
-            self._state = self.TERMINATE
-            self._shutdown_complete.set()
-            return
-        self._state = self.CLOSE
-
-        for component in reversed(self.components):
-            logger.debug('%s %s...', what, qualname(component))
-            if component:
-                stop = component.stop
-                if not warm:
-                    stop = getattr(component, 'terminate', None) or stop
-                stop()
-
-        self.timer.stop()
-        self.consumer.close_connection()
-
-        if self.pidlock:
-            self.pidlock.release()
-        self._state = self.TERMINATE
-        socket.setdefaulttimeout(socket_timeout)
-        self._shutdown_complete.set()
+        self.namespace.stop(self, terminate=not warm)
+        self.namespace.join()
 
     def reload(self, modules=None, reload=False, reloader=None):
         modules = self.app.loader.task_modules if modules is None else modules
@@ -309,6 +260,10 @@ class WorkController(configurated):
                 reload_from_cwd(sys.modules[module], reloader)
         self.pool.restart()
 
+    @property
+    def _state(self):
+        return self.namespace.state
+
     @property
     def state(self):
         return state
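
The net effect of this hunk is that `WorkController` no longer tracks RUN/CLOSE/TERMINATE itself: `start()` just hands control to `self.namespace.start(self)`, shutdown becomes `namespace.stop()` followed by `namespace.join()`, and the lifecycle is exposed through the `on_init_namespace` / `on_before_init` / `on_start` / `on_close` / `on_stopped` hooks. Below is a minimal sketch of how an embedding program might use those hooks; the `MyWorker` subclass, the broker URL and the printed messages are illustrative assumptions rather than code from this commit, and it only does useful work against a running broker:

    # Hypothetical subclass illustrating the new lifecycle hooks; only the hook
    # names and the WorkController constructor come from this commit.
    from celery import Celery
    from celery.worker import WorkController

    celery = Celery('sketch', broker='amqp://guest@localhost//')  # assumed broker URL

    class MyWorker(WorkController):
        app = celery

        def on_init_namespace(self):
            # called right before self.Namespace is instantiated
            print('building the worker namespace')

        def on_start(self):
            # the base implementation acquires the pidlock when a pidfile is set
            super(MyWorker, self).on_start()
            print('starting bootsteps')

        def on_consumer_ready(self, consumer):
            print('consumer ready: %r' % (consumer, ))

    if __name__ == '__main__':
        MyWorker(concurrency=1).start()

The important point is that `_shutdown()` no longer walks `self.components` in reverse: the same namespace object that started the steps also stops and joins them.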

+ 6 - 4
celery/worker/autoreload.py

@@ -18,12 +18,13 @@ from threading import Event
 
 from kombu.utils import eventio
 
+from celery import bootsteps
 from celery.platforms import ignore_errno
 from celery.utils.imports import module_file
 from celery.utils.log import get_logger
 from celery.utils.threads import bgThread
 
-from .bootsteps import StartStopComponent
+from .components import Pool
 
 try:                        # pragma: no cover
     import pyinotify
@@ -35,9 +36,10 @@ except ImportError:         # pragma: no cover
 logger = get_logger(__name__)
 
 
-class WorkerComponent(StartStopComponent):
-    name = 'worker.autoreloader'
-    requires = ('pool', )
+class WorkerComponent(bootsteps.StartStopStep):
+    label = 'Autoreloader'
+    conditional = True
+    requires = (Pool, )
 
     def __init__(self, w, autoreload=None, **kwargs):
         self.enabled = w.autoreload = autoreload
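
The autoreloader (and, in the next file, the autoscaler) now follows a uniform pattern: a `bootsteps.StartStopStep` subclass with a human-readable `label`, `conditional = True`, a class-based `requires = (Pool, )` dependency, and an `__init__` that derives `self.enabled` from the worker. A hedged sketch of a third-party step written in the same style is shown below; everything named Watchdog/watchdog is invented for illustration, and it assumes the new `StartStopStep` keeps the old contract (as the deleted `StartStopComponent` further down did) of starting and stopping whatever object `create()` returns:

    # Illustrative custom worker bootstep in the new class-based style.
    import threading

    from celery import bootsteps
    from celery.worker.components import Pool

    class Watchdog(threading.Thread):
        """Toy background service exposing the start()/stop() protocol
        that a StartStopStep is expected to drive."""

        def __init__(self, worker):
            threading.Thread.__init__(self)
            self.worker = worker
            self._shutdown = threading.Event()
            self.daemon = True

        def run(self):
            while not self._shutdown.is_set():
                self._shutdown.wait(1.0)    # periodic checks would go here

        def stop(self):
            self._shutdown.set()
            self.join()

    class WatchdogStep(bootsteps.StartStopStep):
        label = 'Watchdog'
        conditional = True      # skipped entirely unless self.enabled is true
        requires = (Pool, )     # class-based dependency: start after the pool

        def __init__(self, w, watchdog=True, **kwargs):
            # same pattern as the autoreload/autoscale steps:
            # the step decides for itself whether it is enabled.
            self.enabled = w.watchdog = watchdog

        def create(self, w):
            w.watchdog_thread = Watchdog(w)
            return w.watchdog_thread

Nothing in this hunk shows how third-party steps are registered; the namespace docstring above only says extra modules are loaded from :setting:`CELERYD_BOOTSTEPS`, so the sketch stops at the class definition.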

+ 8 - 6
celery/worker/autoscale.py

@@ -7,8 +7,8 @@
     for growing and shrinking the pool according to the
     current autoscale settings.
 
-    The autoscale thread is only enabled if autoscale
-    has been enabled on the command line.
+    The autoscale thread is only enabled if :option:`--autoscale`
+    has been enabled on the command-line.
 
 """
 from __future__ import absolute_import
@@ -18,20 +18,22 @@ import threading
 from functools import partial
 from time import sleep, time
 
+from celery import bootsteps
 from celery.utils.log import get_logger
 from celery.utils.threads import bgThread
 
 from . import state
-from .bootsteps import StartStopComponent
+from .components import Pool
 from .hub import DummyLock
 
 logger = get_logger(__name__)
 debug, info, error = logger.debug, logger.info, logger.error
 
 
-class WorkerComponent(StartStopComponent):
-    name = 'worker.autoscaler'
-    requires = ('pool', )
+class WorkerComponent(bootsteps.StartStopStep):
+    label = 'Autoscaler'
+    conditional = True
+    requires = (Pool, )
 
     def __init__(self, w, **kwargs):
         self.enabled = w.autoscale

+ 0 - 210
celery/worker/bootsteps.py

@@ -1,210 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-    celery.worker.bootsteps
-    ~~~~~~~~~~~~~~~~~~~~~~~
-
-    The boot-step components.
-
-"""
-from __future__ import absolute_import
-
-from collections import defaultdict
-from importlib import import_module
-
-from celery.datastructures import DependencyGraph
-from celery.utils.imports import instantiate
-from celery.utils.log import get_logger
-
-logger = get_logger(__name__)
-
-
-class Namespace(object):
-    """A namespace containing components.
-
-    Every component must belong to a namespace.
-
-    When component classes are created they are added to the
-    mapping of unclaimed components.  The components will be
-    claimed when the namespace they belong to is created.
-
-    :keyword name: Set the name of this namespace.
-    :keyword app: Set the Celery app for this namespace.
-
-    """
-    name = None
-    _unclaimed = defaultdict(dict)
-    _started_count = 0
-
-    def __init__(self, name=None, app=None):
-        self.app = app
-        self.name = name or self.name
-        self.services = []
-
-    def modules(self):
-        """Subclasses can override this to return a
-        list of modules to import before components are claimed."""
-        return []
-
-    def load_modules(self):
-        """Will load the component modules this namespace depends on."""
-        for m in self.modules():
-            self.import_module(m)
-
-    def apply(self, parent, **kwargs):
-        """Apply the components in this namespace to an object.
-
-        This will apply the ``__init__`` and ``include`` methods
-        of each component with the object as argument.
-
-        For ``StartStopComponents`` the services created
-        will also be added to the object's ``components`` attribute.
-
-        """
-        self._debug('Loading modules.')
-        self.load_modules()
-        self._debug('Claiming components.')
-        self.components = self._claim()
-        self._debug('Building boot step graph.')
-        self.boot_steps = [self.bind_component(name, parent, **kwargs)
-                                for name in self._finalize_boot_steps()]
-        self._debug('New boot order: {%s}',
-                ', '.join(c.name for c in self.boot_steps))
-
-        for component in self.boot_steps:
-            component.include(parent)
-        return self
-
-    def bind_component(self, name, parent, **kwargs):
-        """Bind component to parent object and this namespace."""
-        comp = self[name](parent, **kwargs)
-        comp.namespace = self
-        return comp
-
-    def import_module(self, module):
-        return import_module(module)
-
-    def __getitem__(self, name):
-        return self.components[name]
-
-    def _find_last(self):
-        for C in self.components.itervalues():
-            if C.last:
-                return C
-
-    def _finalize_boot_steps(self):
-        G = self.graph = DependencyGraph((C.name, C.requires)
-                            for C in self.components.itervalues())
-        last = self._find_last()
-        if last:
-            for obj in G:
-                if obj != last.name:
-                    G.add_edge(last.name, obj)
-        return G.topsort()
-
-    def _claim(self):
-        return self._unclaimed[self.name]
-
-    def _debug(self, msg, *args):
-        return logger.debug('[%s] ' + msg,
-                            *(self.name.capitalize(), ) + args)
-
-
-class ComponentType(type):
-    """Metaclass for components."""
-
-    def __new__(cls, name, bases, attrs):
-        abstract = attrs.pop('abstract', False)
-        if not abstract:
-            try:
-                cname = attrs['name']
-            except KeyError:
-                raise NotImplementedError('Components must be named')
-            namespace = attrs.get('namespace', None)
-            if not namespace:
-                attrs['namespace'], _, attrs['name'] = cname.partition('.')
-        cls = super(ComponentType, cls).__new__(cls, name, bases, attrs)
-        if not abstract:
-            Namespace._unclaimed[cls.namespace][cls.name] = cls
-        return cls
-
-
-class Component(object):
-    """A component.
-
-    The :meth:`__init__` method is called when the component
-    is bound to a parent object, and can as such be used
-    to initialize attributes in the parent object at
-    parent instantiation-time.
-
-    """
-    __metaclass__ = ComponentType
-
-    #: The name of the component, or the namespace
-    #: and the name of the component separated by dot.
-    name = None
-
-    #: List of component names this component depends on.
-    #: Note that the dependencies must be in the same namespace.
-    requires = ()
-
-    #: can be used to specify the namespace,
-    #: if the name does not include it.
-    namespace = None
-
-    #: if set the component will not be registered,
-    #: but can be used as a component base class.
-    abstract = True
-
-    #: Optional obj created by the :meth:`create` method.
-    #: This is used by StartStopComponents to keep the
-    #: original service object.
-    obj = None
-
-    #: This flag is reserved for the workers Consumer,
-    #: since it is required to always be started last.
-    #: There can only be one object marked with last
-    #: in every namespace.
-    last = False
-
-    #: This provides the default for :meth:`include_if`.
-    enabled = True
-
-    def __init__(self, parent, **kwargs):
-        pass
-
-    def create(self, parent):
-        """Create the component."""
-        pass
-
-    def include_if(self, parent):
-        """An optional predicate that decided whether this
-        component should be created."""
-        return self.enabled
-
-    def instantiate(self, qualname, *args, **kwargs):
-        return instantiate(qualname, *args, **kwargs)
-
-    def include(self, parent):
-        if self.include_if(parent):
-            self.obj = self.create(parent)
-            return True
-
-
-class StartStopComponent(Component):
-    abstract = True
-    terminable = False
-
-    def start(self):
-        return self.obj.start()
-
-    def stop(self):
-        return self.obj.stop()
-
-    def terminate(self):
-        if self.terminable:
-            return self.obj.terminate()
-        return self.obj.stop()
-
-    def include(self, parent):
-        if super(StartStopComponent, self).include(parent):
-            parent.components.append(self.obj)
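
Although the whole `celery.worker.bootsteps` module is deleted, the ordering rule it implemented is what the class-based steps still rely on: build a dependency graph from each step's `requires`, force the step marked `last` (the consumer) after everything else, and topologically sort. The removed `_finalize_boot_steps()` used `celery.datastructures.DependencyGraph` for this; the standalone snippet below (not part of the commit, with step names borrowed from the new `default_steps` for illustration) reproduces that rule directly:

    # Standalone illustration of the ordering rule from the removed
    # _finalize_boot_steps(): build a DependencyGraph from (name, requires)
    # pairs, wire the `last` step after everything else, then topsort.
    from celery.datastructures import DependencyGraph

    steps = {
        'Hub':      (),
        'Queues':   ('Hub', ),
        'Pool':     ('Queues', ),
        'Timer':    ('Pool', ),
        'Consumer': (),          # marked last=True in the real code
    }

    graph = DependencyGraph(steps.items())
    for name in graph:
        if name != 'Consumer':
            graph.add_edge('Consumer', name)   # Consumer depends on everything else

    print(graph.topsort())   # e.g. ['Hub', 'Queues', 'Pool', 'Timer', 'Consumer']

The practical difference in the new code is that `requires` holds the step classes themselves (e.g. `requires = (Pool, )`) instead of dotted names such as `'pool'`, so a misspelled dependency now fails at import time rather than producing a wrong graph.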

+ 87 - 59
celery/worker/components.py

@@ -3,7 +3,7 @@
     celery.worker.components
     ~~~~~~~~~~~~~~~~~~~~~~~~
 
-    Default worker boot-steps.
+    Default worker bootsteps.
 
 """
 from __future__ import absolute_import
@@ -15,16 +15,59 @@ from functools import partial
 
 from billiard.exceptions import WorkerLostError
 
+from celery import bootsteps
 from celery.utils.log import worker_logger as logger
 from celery.utils.timer2 import Schedule
 
-from . import bootsteps
+from . import hub
 from .buckets import TaskBucket, FastQueue
-from .hub import Hub, BoundedSemaphore
 
 
-class Pool(bootsteps.StartStopComponent):
-    """The pool component.
+class Hub(bootsteps.StartStopStep):
+
+    def __init__(self, w, **kwargs):
+        w.hub = None
+
+    def include_if(self, w):
+        return w.use_eventloop
+
+    def create(self, w):
+        w.timer = Schedule(max_interval=10)
+        w.hub = hub.Hub(w.timer)
+        return w.hub
+
+
+class Queues(bootsteps.Step):
+    """This bootstep initializes the internal queues
+    used by the worker."""
+    label = 'Queues (intra)'
+    requires = (Hub, )
+
+    def __init__(self, w, **kwargs):
+        w.start_mediator = False
+
+    def create(self, w):
+        w.start_mediator = True
+        if not w.pool_cls.rlimit_safe:
+            w.disable_rate_limits = True
+        if w.disable_rate_limits:
+            w.ready_queue = FastQueue()
+            if w.use_eventloop:
+                w.start_mediator = False
+                if w.pool_putlocks and w.pool_cls.uses_semaphore:
+                    w.ready_queue.put = w.process_task_sem
+                else:
+                    w.ready_queue.put = w.process_task
+            elif not w.pool_cls.requires_mediator:
+                # just send task directly to pool, skip the mediator.
+                w.ready_queue.put = w.process_task
+                w.start_mediator = False
+        else:
+            w.ready_queue = TaskBucket(task_registry=w.app.tasks)
+
+
+class Pool(bootsteps.StartStopStep):
+    """Bootstep managing the worker pool.
 
     Describes how to initialize the worker pool, and starts and stops
     the pool during worker startup/shutdown.
@@ -37,8 +80,7 @@ class Pool(bootsteps.StartStopComponent):
         * min_concurrency
 
     """
-    name = 'worker.pool'
-    requires = ('queues', )
+    requires = (Queues, )
 
     def __init__(self, w, autoscale=None, autoreload=None,
             no_execv=False, **kwargs):
@@ -54,6 +96,14 @@ class Pool(bootsteps.StartStopComponent):
             w.max_concurrency, w.min_concurrency = w.autoscale
         self.autoreload_enabled = autoreload
 
+    def close(self, w):
+        if w.pool:
+            w.pool.close()
+
+    def terminate(self, w):
+        if w.pool:
+            w.pool.terminate()
+
     def on_poll_init(self, pool, hub):
         apply_after = hub.timer.apply_after
         apply_at = hub.timer.apply_at
@@ -107,7 +157,7 @@ class Pool(bootsteps.StartStopComponent):
         procs = w.min_concurrency
         forking_enable = not threaded or (w.no_execv or not w.force_execv)
         if not threaded:
-            semaphore = w.semaphore = BoundedSemaphore(procs)
+            semaphore = w.semaphore = hub.BoundedSemaphore(procs)
             w._quick_acquire = w.semaphore.acquire
             w._quick_release = w.semaphore.release
             max_restarts = 100
@@ -129,14 +179,15 @@ class Pool(bootsteps.StartStopComponent):
         return pool
 
 
-class Beat(bootsteps.StartStopComponent):
-    """Component used to embed a celerybeat process.
+class Beat(bootsteps.StartStopStep):
+    """Step used to embed a celerybeat process.
 
     This will only be enabled if the ``beat``
     argument is set.
 
     """
-    name = 'worker.beat'
+    label = 'Beat'
+    conditional = True
 
     def __init__(self, w, beat=False, **kwargs):
         self.enabled = w.beat = beat
@@ -150,51 +201,9 @@ class Beat(bootsteps.StartStopComponent):
         return b
 
 
-class Queues(bootsteps.Component):
-    """This component initializes the internal queues
-    used by the worker."""
-    name = 'worker.queues'
-    requires = ('ev', )
-
-    def create(self, w):
-        w.start_mediator = True
-        if not w.pool_cls.rlimit_safe:
-            w.disable_rate_limits = True
-        if w.disable_rate_limits:
-            w.ready_queue = FastQueue()
-            if w.use_eventloop:
-                w.start_mediator = False
-                if w.pool_putlocks and w.pool_cls.uses_semaphore:
-                    w.ready_queue.put = w.process_task_sem
-                else:
-                    w.ready_queue.put = w.process_task
-            elif not w.pool_cls.requires_mediator:
-                # just send task directly to pool, skip the mediator.
-                w.ready_queue.put = w.process_task
-                w.start_mediator = False
-        else:
-            w.ready_queue = TaskBucket(task_registry=w.app.tasks)
-
-
-class EvLoop(bootsteps.StartStopComponent):
-    name = 'worker.ev'
-
-    def __init__(self, w, **kwargs):
-        w.hub = None
-
-    def include_if(self, w):
-        return w.use_eventloop
-
-    def create(self, w):
-        w.timer = Schedule(max_interval=10)
-        hub = w.hub = Hub(w.timer)
-        return hub
-
-
-class Timers(bootsteps.Component):
-    """This component initializes the internal timers used by the worker."""
-    name = 'worker.timers'
-    requires = ('pool', )
+class Timer(bootsteps.Step):
+    """This step initializes the internal timer used by the worker."""
+    requires = (Pool, )
 
     def include_if(self, w):
         return not w.use_eventloop
@@ -216,9 +225,8 @@ class Timers(bootsteps.Component):
         logger.debug('Timer wake-up! Next eta %s secs.', delay)
 
 
-class StateDB(bootsteps.Component):
-    """This component sets up the workers state db if enabled."""
-    name = 'worker.state-db'
+class StateDB(bootsteps.Step):
+    """This bootstep sets up the workers state db if enabled."""
 
     def __init__(self, w, **kwargs):
         self.enabled = w.state_db
@@ -227,3 +235,23 @@ class StateDB(bootsteps.Component):
     def create(self, w):
         w._persistence = w.state.Persistent(w.state_db)
         atexit.register(w._persistence.save)
+
+
+class Consumer(bootsteps.StartStopStep):
+    last = True
+
+    def create(self, w):
+        prefetch_count = w.concurrency * w.prefetch_multiplier
+        c = w.consumer = self.instantiate(w.consumer_cls,
+                w.ready_queue,
+                hostname=w.hostname,
+                send_events=w.send_events,
+                init_callback=w.ready_callback,
+                initial_prefetch_count=prefetch_count,
+                pool=w.pool,
+                timer=w.timer,
+                app=w.app,
+                controller=w,
+                hub=w.hub,
+                worker_options=w.options)
+        return c
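
The old `EvLoop`/`Timers` pair becomes `Hub`/`Timer`, and which of the two ends up in the boot graph is decided per worker by `include_if()`: `Hub` only when `w.use_eventloop` is true, `Timer` only when it is not, while `Consumer` (marked `last = True`) is always built at the end and now receives `worker_options=w.options` so the consumer's own namespace can configure its steps. A small sketch of that selection is shown below; `FakeWorker` is a stand-in invented for the example, and it assumes the new `bootsteps.Step.__init__` keeps the old `(parent, **kwargs)` signature:

    # Illustration (not from the commit) of how include_if() makes Hub and
    # Timer mutually exclusive.  FakeWorker is a minimal stand-in.
    from celery.worker import components

    class FakeWorker(object):
        def __init__(self, use_eventloop):
            self.use_eventloop = use_eventloop

    def selected(use_eventloop):
        w = FakeWorker(use_eventloop)
        return {
            'Hub': components.Hub(w).include_if(w),
            'Timer': components.Timer(w).include_if(w),
        }

    print(selected(True))    # {'Hub': True, 'Timer': False}
    print(selected(False))   # {'Hub': False, 'Timer': True}

Keeping the choice inside `include_if()` means the decision is made when the namespace applies the steps, so only the variant that will actually run ends up in the boot graph.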

+ 270 - 667
celery/worker/consumer.py

@@ -3,109 +3,54 @@
 celery.worker.consumer
 ~~~~~~~~~~~~~~~~~~~~~~
 
-This module contains the component responsible for consuming messages
+This module contains the components responsible for consuming messages
 from the broker, processing the messages and keeping the broker connections
 up and running.
 
-
-* :meth:`~Consumer.start` is an infinite loop, which only iterates
-  again if the connection is lost. For each iteration (at start, or if the
-  connection is lost) it calls :meth:`~Consumer.reset_connection`,
-  and starts the consumer by calling :meth:`~Consumer.consume_messages`.
-
-* :meth:`~Consumer.reset_connection`, clears the internal queues,
-  establishes a new connection to the broker, sets up the task
-  consumer (+ QoS), and the broadcast remote control command consumer.
-
-  Also if events are enabled it configures the event dispatcher and starts
-  up the heartbeat thread.
-
-* Finally it can consume messages. :meth:`~Consumer.consume_messages`
-  is simply an infinite loop waiting for events on the AMQP channels.
-
-  Both the task consumer and the broadcast consumer use the same
-  callback: :meth:`~Consumer.receive_message`.
-
-* So for each message received the :meth:`~Consumer.receive_message`
-  method is called, this checks the payload of the message for either
-  a `task` key or a `control` key.
-
-  If the message is a task, it verifies the validity of the message,
-  converts it to a :class:`celery.worker.job.Request`, and sends
-  it to :meth:`~Consumer.on_task`.
-
-  If the message is a control command the message is passed to
-  :meth:`~Consumer.on_control`, which in turn dispatches
-  the control command using the control dispatcher.
-
-  It also tries to handle malformed or invalid messages properly,
-  so the worker doesn't choke on them and die. Any invalid messages
-  are acknowledged immediately and logged, so the message is not resent
-  again and again.
-
-* If the task has an ETA/countdown, the task is moved to the `timer`
-  so the :class:`timer2.Timer` can schedule it at its
-  deadline. Tasks without an eta are moved immediately to the `ready_queue`,
-  so they can be picked up by the :class:`~celery.worker.mediator.Mediator`
-  to be sent to the pool.
-
-* When a task with an ETA is received the QoS prefetch count is also
-  incremented, so another message can be reserved. When the ETA is met
-  the prefetch count is decremented again, though this cannot happen
-  immediately because amqplib doesn't support doing broker requests
-  across threads. Instead the current prefetch count is kept as a
-  shared counter, so as soon as  :meth:`~Consumer.consume_messages`
-  detects that the value has changed it will send out the actual
-  QoS event to the broker.
-
-* Notice that when the connection is lost all internal queues are cleared
-  because we can no longer ack the messages reserved in memory.
-  However, this is not dangerous as the broker will resend them
-  to another worker when the channel is closed.
-
-* **WARNING**: :meth:`~Consumer.stop` does not close the connection!
-  This is because some pre-acked messages may be in processing,
-  and they need to be finished before the channel is closed.
-  For celeryd this means the pool must finish the tasks it has acked
-  early, *then* close the connection.
-
 """
 from __future__ import absolute_import
 
 import logging
 import socket
-import threading
-
-from time import sleep
-from Queue import Empty
 
+from kombu.common import QoS, ignore_errors
 from kombu.syn import _detect_environment
 from kombu.utils.encoding import safe_repr
-from kombu.utils.eventio import READ, WRITE, ERR
 
+from celery import bootsteps
 from celery.app import app_or_default
-from celery.datastructures import AttributeDict
-from celery.exceptions import InvalidTaskError, SystemTerminate
 from celery.task.trace import build_tracer
-from celery.utils import text
-from celery.utils import timer2
+from celery.utils.timer2 import default_timer, to_timestamp
 from celery.utils.functional import noop
 from celery.utils.log import get_logger
-from celery.utils.timeutils import humanize_seconds
+from celery.utils.text import truncate
+from celery.utils.timeutils import humanize_seconds, timezone
 
-from . import state
-from .bootsteps import StartStopComponent
-from .control import Panel
-from .heartbeat import Heart
+from . import heartbeat, loops, pidbox
+from .state import task_reserved, maybe_shutdown
 
-RUN = 0x1
-CLOSE = 0x2
+CLOSE = bootsteps.CLOSE
+logger = get_logger(__name__)
+debug, info, warn, error, crit = (logger.debug, logger.info, logger.warn,
+                                  logger.error, logger.critical)
 
-#: Heartbeat check is called every heartbeat_seconds / rate seconds.
-AMQHEARTBEAT_RATE = 2.0
+CONNECTION_RETRY = """\
+consumer: Connection to broker lost. \
+Trying to re-establish the connection...\
+"""
+
+CONNECTION_RETRY_STEP = """\
+Trying again {when}...\
+"""
+
+CONNECTION_ERROR = """\
+consumer: Cannot connect to %s: %s.
+%s
+"""
 
-#: Prefetch count can't exceed short.
-PREFETCH_COUNT_MAX = 0xFFFF
+CONNECTION_FAILOVER = """\
+Will retry using next failover.\
+"""
 
 UNKNOWN_FORMAT = """\
 Received and deleted unknown message. Wrong destination?!?
@@ -143,169 +88,20 @@ body: {0} {{content_type:{1} content_encoding:{2} delivery_info:{3}}}\
 """
 
 
-RETRY_CONNECTION = """\
-consumer: Connection to broker lost. \
-Trying to re-establish the connection...\
-"""
-
-CONNECTION_ERROR = """\
-consumer: Cannot connect to %s: %s.
-%s
-"""
-
-CONNECTION_RETRY = """\
-Trying again {when}...\
-"""
-
-CONNECTION_FAILOVER = """\
-Will retry using next failover.\
-"""
-
-task_reserved = state.task_reserved
-
-logger = get_logger(__name__)
-info, warn, error, crit = (logger.info, logger.warn,
-                           logger.error, logger.critical)
-
-
-def debug(msg, *args, **kwargs):
-    logger.debug('consumer: {0}'.format(msg), *args, **kwargs)
-
-
 def dump_body(m, body):
-    return '{0} ({1}b)'.format(text.truncate(safe_repr(body), 1024),
+    return '{0} ({1}b)'.format(truncate(safe_repr(body), 1024),
                                len(m.body))
 
 
-class Component(StartStopComponent):
-    name = 'worker.consumer'
-    last = True
-
-    def Consumer(self, w):
-        return (w.consumer_cls or
-                Consumer if w.hub else BlockingConsumer)
-
-    def create(self, w):
-        prefetch_count = w.concurrency * w.prefetch_multiplier
-        c = w.consumer = self.instantiate(self.Consumer(w),
-                w.ready_queue,
-                hostname=w.hostname,
-                send_events=w.send_events,
-                init_callback=w.ready_callback,
-                initial_prefetch_count=prefetch_count,
-                pool=w.pool,
-                timer=w.timer,
-                app=w.app,
-                controller=w,
-                hub=w.hub)
-        return c
-
-
-class QoS(object):
-    """Thread safe increment/decrement of a channels prefetch_count.
-
-    :param consumer: A :class:`kombu.messaging.Consumer` instance.
-    :param initial_value: Initial prefetch count value.
-
-    """
-    prev = None
-
-    def __init__(self, consumer, initial_value):
-        self.consumer = consumer
-        self._mutex = threading.RLock()
-        self.value = initial_value or 0
-
-    def increment_eventually(self, n=1):
-        """Increment the value, but do not update the channels QoS.
-
-        The MainThread will be responsible for calling :meth:`update`
-        when necessary.
-
-        """
-        with self._mutex:
-            if self.value:
-                self.value = self.value + max(n, 0)
-        return self.value
-
-    def decrement_eventually(self, n=1):
-        """Decrement the value, but do not update the channels QoS.
-
-        The MainThread will be responsible for calling :meth:`update`
-        when necessary.
-
-        """
-        with self._mutex:
-            if self.value:
-                self.value -= n
-        return self.value
-
-    def set(self, pcount):
-        """Set channel prefetch_count setting."""
-        if pcount != self.prev:
-            new_value = pcount
-            if pcount > PREFETCH_COUNT_MAX:
-                warn('QoS: Disabled: prefetch_count exceeds %r',
-                     PREFETCH_COUNT_MAX)
-                new_value = 0
-            debug('basic.qos: prefetch_count->%s', new_value)
-            self.consumer.qos(prefetch_count=new_value)
-            self.prev = pcount
-        return pcount
-
-    def update(self):
-        """Update prefetch count with current value."""
-        with self._mutex:
-            return self.set(self.value)
-
-
 class Consumer(object):
-    """Listen for messages received from the broker and
-    move them to the ready queue for task processing.
-
-    :param ready_queue: See :attr:`ready_queue`.
-    :param timer: See :attr:`timer`.
-
-    """
 
-    #: The queue that holds tasks ready for immediate processing.
+    #: Intra-queue for tasks ready to be handled
     ready_queue = None
 
-    #: Enable/disable events.
-    send_events = False
-
-    #: Optional callback to be called when the connection is established.
-    #: Will only be called once, even if the connection is lost and
-    #: re-established.
+    #: Optional callback called the first time the worker
+    #: is ready to receive tasks.
     init_callback = None
 
-    #: The current hostname.  Defaults to the system hostname.
-    hostname = None
-
-    #: Initial QoS prefetch count for the task channel.
-    initial_prefetch_count = 0
-
-    #: A :class:`celery.events.EventDispatcher` for sending events.
-    event_dispatcher = None
-
-    #: The thread that sends event heartbeats at regular intervals.
-    #: The heartbeats are used by monitors to detect that a worker
-    #: went offline/disappeared.
-    heart = None
-
-    #: The broker connection.
-    connection = None
-
-    #: The consumer used to consume task messages.
-    task_consumer = None
-
-    #: The consumer used to consume broadcast commands.
-    broadcast_consumer = None
-
-    #: The process mailbox (kombu pidbox node).
-    pidbox_node = None
-    _pidbox_node_shutdown = None   # used for greenlets
-    _pidbox_node_stopped = None    # used for greenlets
-
     #: The current worker pool instance.
     pool = None
 
@@ -313,187 +109,186 @@ class Consumer(object):
     #: as sending heartbeats.
     timer = None
 
-    # Consumer state, can be RUN or CLOSE.
-    _state = None
+    class Namespace(bootsteps.Namespace):
+        name = 'Consumer'
+        default_steps = [
+            'celery.worker.consumer:Connection',
+            'celery.worker.consumer:Events',
+            'celery.worker.consumer:Heart',
+            'celery.worker.consumer:Control',
+            'celery.worker.consumer:Tasks',
+            'celery.worker.consumer:Evloop',
+            'celery.worker.consumer:Agent',
+        ]
+
+        def shutdown(self, parent):
+            self.restart(parent, 'Shutdown', 'shutdown')
 
     def __init__(self, ready_queue,
-            init_callback=noop, send_events=False, hostname=None,
-            initial_prefetch_count=2, pool=None, app=None,
+            init_callback=noop, hostname=None,
+            pool=None, app=None,
             timer=None, controller=None, hub=None, amqheartbeat=None,
-            **kwargs):
+            worker_options=None, **kwargs):
         self.app = app_or_default(app)
-        self.connection = None
-        self.task_consumer = None
         self.controller = controller
-        self.broadcast_consumer = None
         self.ready_queue = ready_queue
-        self.send_events = send_events
         self.init_callback = init_callback
         self.hostname = hostname or socket.gethostname()
-        self.initial_prefetch_count = initial_prefetch_count
-        self.event_dispatcher = None
-        self.heart = None
         self.pool = pool
-        self.timer = timer or timer2.default_timer
-        pidbox_state = AttributeDict(app=self.app,
-                                     hostname=self.hostname,
-                                     listener=self,     # pre 2.2
-                                     consumer=self)
-        self.pidbox_node = self.app.control.mailbox.Node(self.hostname,
-                                                         state=pidbox_state,
-                                                         handlers=Panel.data)
+        self.timer = timer or default_timer
+        self.strategies = {}
         conninfo = self.app.connection()
         self.connection_errors = conninfo.connection_errors
         self.channel_errors = conninfo.channel_errors
 
         self._does_info = logger.isEnabledFor(logging.INFO)
-        self.strategies = {}
-        if hub:
-            hub.on_init.append(self.on_poll_init)
-        self.hub = hub
         self._quick_put = self.ready_queue.put
-        self.amqheartbeat = amqheartbeat
-        if self.amqheartbeat is None:
-            self.amqheartbeat = self.app.conf.BROKER_HEARTBEAT
-        if not hub:
+
+        if hub:
+            self.amqheartbeat = amqheartbeat
+            if self.amqheartbeat is None:
+                self.amqheartbeat = self.app.conf.BROKER_HEARTBEAT
+            self.hub = hub
+            self.hub.on_init.append(self.on_poll_init)
+        else:
+            self.hub = None
             self.amqheartbeat = 0
 
+        if not hasattr(self, 'loop'):
+            self.loop = loops.asynloop if hub else loops.synloop
+
         if _detect_environment() == 'gevent':
             # there's a gevent bug that causes timeouts to not be reset,
             # so if the connection timeout is exceeded once, it can NEVER
             # connect again.
             self.app.conf.BROKER_CONNECTION_TIMEOUT = None
 
-    def update_strategies(self):
-        S = self.strategies
-        app = self.app
-        loader = app.loader
-        hostname = self.hostname
-        for name, task in self.app.tasks.iteritems():
-            S[name] = task.start_strategy(app, self)
-            task.__trace__ = build_tracer(name, task, loader, hostname)
+        self.steps = []
+        self.namespace = self.Namespace(
+            app=self.app, on_close=self.on_close,
+        )
+        self.namespace.apply(self, **dict(worker_options or {}, **kwargs))
 
     def start(self):
-        """Start the consumer.
+        ns, loop = self.namespace, self.loop
+        while ns.state != CLOSE:
+            maybe_shutdown()
+            try:
+                ns.start(self)
+            except self.connection_errors + self.channel_errors:
+                maybe_shutdown()
+                if ns.state != CLOSE and self.connection:
+                    warn(CONNECTION_RETRY, exc_info=True)
+                    ns.restart(self)
 
-        Automatically survives intermittent connection failure,
-        and will retry establishing the connection and restart
-        consuming messages.
+    def shutdown(self):
+        self.namespace.shutdown(self)
 
-        """
+    def stop(self):
+        self.namespace.stop(self)
 
-        self.init_callback(self)
+    def on_ready(self):
+        callback, self.init_callback = self.init_callback, None
+        if callback:
+            callback(self)
 
-        while self._state != CLOSE:
-            self.maybe_shutdown()
-            try:
-                self.reset_connection()
-                self.consume_messages()
-            except self.connection_errors + self.channel_errors:
-                error(RETRY_CONNECTION, exc_info=True)
+    def loop_args(self):
+        return (self, self.connection, self.task_consumer,
+                self.strategies, self.namespace, self.hub, self.qos,
+                self.amqheartbeat, self.handle_unknown_message,
+                self.handle_unknown_task, self.handle_invalid_task)
 
     def on_poll_init(self, hub):
         hub.update_readers(self.connection.eventmap)
         self.connection.transport.on_poll_init(hub.poller)
 
-    def consume_messages(self, sleep=sleep, min=min, Empty=Empty,
-            hbrate=AMQHEARTBEAT_RATE):
-        """Consume messages forever (or until an exception is raised)."""
-
-        with self.hub as hub:
-            qos = self.qos
-            update_qos = qos.update
-            update_readers = hub.update_readers
-            readers, writers = hub.readers, hub.writers
-            poll = hub.poller.poll
-            fire_timers = hub.fire_timers
-            scheduled = hub.timer._queue
-            connection = self.connection
-            hb = self.amqheartbeat
-            hbtick = connection.heartbeat_check
-            on_poll_start = connection.transport.on_poll_start
-            on_poll_empty = connection.transport.on_poll_empty
-            strategies = self.strategies
-            drain_nowait = connection.drain_nowait
-            on_task_callbacks = hub.on_task
-            keep_draining = connection.transport.nb_keep_draining
-
-            if hb and connection.supports_heartbeats:
-                hub.timer.apply_interval(
-                    hb * 1000.0 / hbrate, hbtick, (hbrate, ))
-
-            def on_task_received(body, message):
-                if on_task_callbacks:
-                    [callback() for callback in on_task_callbacks]
-                try:
-                    name = body['task']
-                except (KeyError, TypeError):
-                    return self.handle_unknown_message(body, message)
-                try:
-                    strategies[name](message, body, message.ack_log_error)
-                except KeyError as exc:
-                    self.handle_unknown_task(body, message, exc)
-                except InvalidTaskError as exc:
-                    self.handle_invalid_task(body, message, exc)
-                #fire_timers()
-
-            self.task_consumer.callbacks = [on_task_received]
-            self.task_consumer.consume()
-
-            debug('Ready to accept tasks!')
-
-            while self._state != CLOSE and self.connection:
-                # shutdown if signal handlers told us to.
-                if state.should_stop:
-                    raise SystemExit()
-                elif state.should_terminate:
-                    raise SystemTerminate()
-
-                # fire any ready timers, this also returns
-                # the number of seconds until we need to fire timers again.
-                poll_timeout = fire_timers() if scheduled else 1
-
-                # We only update QoS when there is no more messages to read.
-                # This groups together qos calls, and makes sure that remote
-                # control commands will be prioritized over task messages.
-                if qos.prev != qos.value:
-                    update_qos()
-
-                update_readers(on_poll_start())
-                if readers or writers:
-                    connection.more_to_read = True
-                    while connection.more_to_read:
-                        try:
-                            events = poll(poll_timeout)
-                        except ValueError:  # Issue 882
-                            return
-                        if not events:
-                            on_poll_empty()
-                        for fileno, event in events or ():
-                            try:
-                                if event & READ:
-                                    readers[fileno](fileno, event)
-                                if event & WRITE:
-                                    writers[fileno](fileno, event)
-                                if event & ERR:
-                                    for handlermap in readers, writers:
-                                        try:
-                                            handlermap[fileno](fileno, event)
-                                        except KeyError:
-                                            pass
-                            except (KeyError, Empty):
-                                continue
-                            except socket.error:
-                                if self._state != CLOSE:  # pragma: no cover
-                                    raise
-                        if keep_draining:
-                            drain_nowait()
-                            poll_timeout = 0
-                        else:
-                            connection.more_to_read = False
-                else:
-                    # no sockets yet, startup is probably not done.
-                    sleep(min(poll_timeout, 0.1))
+    def on_decode_error(self, message, exc):
+        """Callback called if an error occurs while decoding
+        a message received.
+
+        Simply logs the error and acknowledges the message so it
+        doesn't enter a loop.
+
+        :param message: The message with errors.
+        :param exc: The original exception instance.
+
+        """
+        crit("Can't decode message body: %r (type:%r encoding:%r raw:%r')",
+             exc, message.content_type, message.content_encoding,
+             dump_body(message, message.body))
+        message.ack()
+
+    def on_close(self):
+        # Clear internal queues to get rid of old messages.
+        # They can't be acked anyway, as a delivery tag is specific
+        # to the current channel.
+        self.ready_queue.clear()
+        self.timer.clear()
+
+    def connect(self):
+        """Establish the broker connection.
+
+        Will retry establishing the connection if the
+        :setting:`BROKER_CONNECTION_RETRY` setting is enabled.
+
+        """
+        conn = self.app.connection(heartbeat=self.amqheartbeat)
+
+        # Callback called for each retry while the connection
+        # can't be established.
+        def _error_handler(exc, interval, next_step=CONNECTION_RETRY_STEP):
+            if getattr(conn, 'alt', None) and interval == 0:
+                next_step = CONNECTION_FAILOVER
+            error(CONNECTION_ERROR, conn.as_uri(), exc,
+                  next_step.format(when=humanize_seconds(interval, 'in', ' ')))
+
+        # remember that the connection is lazy, it won't establish
+        # until it's needed.
+        if not self.app.conf.BROKER_CONNECTION_RETRY:
+            # retry disabled, just call connect directly.
+            conn.connect()
+            return conn
+
+        return conn.ensure_connection(_error_handler,
+                    self.app.conf.BROKER_CONNECTION_MAX_RETRIES,
+                    callback=maybe_shutdown)
+
+    def add_task_queue(self, queue, exchange=None, exchange_type=None,
+            routing_key=None, **options):
+        cset = self.task_consumer
+        try:
+            q = self.app.amqp.queues[queue]
+        except KeyError:
+            exchange = queue if exchange is None else exchange
+            exchange_type = 'direct' if exchange_type is None \
+                                     else exchange_type
+            q = self.app.amqp.queues.select_add(queue,
+                    exchange=exchange,
+                    exchange_type=exchange_type,
+                    routing_key=routing_key, **options)
+        if not cset.consuming_from(queue):
+            cset.add_queue(q)
+            cset.consume()
+            info('Started consuming from %r', queue)
+
+    def cancel_task_queue(self, queue):
+        self.app.amqp.queues.select_remove(queue)
+        self.task_consumer.cancel_by_queue(queue)
+
+    @property
+    def info(self):
+        """Returns information about this consumer instance
+        as a dict.
+
+        This is also the consumer-related info returned by
+        ``celeryctl stats``.
+
+        """
+        conninfo = {}
+        if self.connection:
+            conninfo = self.connection.info()
+            conninfo.pop('password', None)  # don't send password.
+        return {'broker': conninfo, 'prefetch_count': self.qos.value}
 
     def on_task(self, task, task_reserved=task_reserved):
         """Handle received task.
@@ -517,30 +312,22 @@ class Consumer(object):
                     expires=task.expires and task.expires.isoformat())
 
         if task.eta:
+            eta = timezone.to_system(task.eta) if task.utc else task.eta
             try:
-                eta = timer2.to_timestamp(task.eta)
+                eta = to_timestamp(eta)
             except OverflowError as exc:
                 error("Couldn't convert eta %s to timestamp: %r. Task: %r",
                       task.eta, exc, task.info(safe=True), exc_info=True)
                 task.acknowledge()
             else:
                 self.qos.increment_eventually()
-                self.timer.apply_at(eta, self.apply_eta_task, (task, ),
-                                    priority=6)
+                self.timer.apply_at(
+                    eta, self.apply_eta_task, (task, ), priority=6,
+                )
         else:
             task_reserved(task)
             self._quick_put(task)
 
-    def on_control(self, body, message):
-        """Process remote control command message."""
-        try:
-            self.pidbox_node.handle_message(body, message)
-        except KeyError as exc:
-            error('No such control command: %s', exc)
-        except Exception as exc:
-            error('Control command error: %r', exc, exc_info=True)
-            self.reset_pidbox_node()
-
     def apply_eta_task(self, task):
         """Method called by the timer to apply a task with an
         ETA/countdown."""
@@ -566,307 +353,123 @@ class Consumer(object):
         error(INVALID_TASK_ERROR, exc, dump_body(message, body), exc_info=True)
         message.reject_log_error(logger, self.connection_errors)
 
-    def receive_message(self, body, message):
-        """Handles incoming messages.
+    def update_strategies(self):
+        loader = self.app.loader
+        for name, task in self.app.tasks.iteritems():
+            self.strategies[name] = task.start_strategy(self.app, self)
+            task.__trace__ = build_tracer(name, task, loader, self.hostname)
 
-        :param body: The message body.
-        :param message: The kombu message object.
 
-        """
-        try:
-            name = body['task']
-        except (KeyError, TypeError):
-            return self.handle_unknown_message(body, message)
+class Connection(bootsteps.StartStopStep):
 
-        try:
-            self.strategies[name](message, body, message.ack_log_error)
-        except KeyError as exc:
-            self.handle_unknown_task(body, message, exc)
-        except InvalidTaskError as exc:
-            self.handle_invalid_task(body, message, exc)
-
-    def maybe_conn_error(self, fun):
-        """Applies function but ignores any connection or channel
-        errors raised."""
-        try:
-            fun()
-        except (AttributeError, ) + \
-                self.connection_errors + \
-                self.channel_errors:
-            pass
+    def __init__(self, c, **kwargs):
+        c.connection = None
 
-    def close_connection(self):
-        """Closes the current broker connection and all open channels."""
+    def start(self, c):
+        c.connection = c.connect()
+        info('Connected to %s', c.connection.as_uri())
 
+    def shutdown(self, c):
         # We must set self.connection to None here, so
         # that the green pidbox thread exits.
-        connection, self.connection = self.connection, None
-
-        if self.task_consumer:
-            debug('Closing consumer channel...')
-            self.task_consumer = \
-                    self.maybe_conn_error(self.task_consumer.close)
-
-        self.stop_pidbox_node()
-
+        connection, c.connection = c.connection, None
         if connection:
-            debug('Closing broker connection...')
-            self.maybe_conn_error(connection.close)
+            ignore_errors(connection, connection.close)
 
-    def stop_consumers(self, close_connection=True, join=True):
-        """Stop consuming tasks and broadcast commands, also stops
-        the heartbeat thread and event dispatcher.
 
-        :keyword close_connection: Set to False to skip closing the broker
-                                    connection.
+class Events(bootsteps.StartStopStep):
+    requires = (Connection, )
 
-        """
-        if not self._state == RUN:
-            return
-
-        if self.heart:
-            # Stop the heartbeat thread if it's running.
-            debug('Heart: Going into cardiac arrest...')
-            self.heart = self.heart.stop()
-
-        debug('Cancelling task consumer...')
-        if join and self.task_consumer:
-            self.maybe_conn_error(self.task_consumer.cancel)
-
-        if self.event_dispatcher:
-            debug('Shutting down event dispatcher...')
-            self.event_dispatcher = \
-                    self.maybe_conn_error(self.event_dispatcher.close)
-
-        debug('Cancelling broadcast consumer...')
-        if join and self.broadcast_consumer:
-            self.maybe_conn_error(self.broadcast_consumer.cancel)
-
-        if close_connection:
-            self.close_connection()
-
-    def on_decode_error(self, message, exc):
-        """Callback called if an error occurs while decoding
-        a message received.
-
-        Simply logs the error and acknowledges the message so it
-        doesn't enter a loop.
-
-        :param message: The message with errors.
-        :param exc: The original exception instance.
-
-        """
-        crit("Can't decode message body: %r (type:%r encoding:%r raw:%r')",
-             exc, message.content_type, message.content_encoding,
-             dump_body(message, message.body))
-        message.ack()
-
-    def reset_pidbox_node(self):
-        """Sets up the process mailbox."""
-        self.stop_pidbox_node()
-        # close previously opened channel if any.
-        if self.pidbox_node.channel:
-            try:
-                self.pidbox_node.channel.close()
-            except self.connection_errors + self.channel_errors:
-                pass
-
-        if self.pool is not None and self.pool.is_green:
-            return self.pool.spawn_n(self._green_pidbox_node)
-        self.pidbox_node.channel = self.connection.channel()
-        self.broadcast_consumer = self.pidbox_node.listen(
-                                        callback=self.on_control)
-
-    def stop_pidbox_node(self):
-        if self._pidbox_node_stopped:
-            self._pidbox_node_shutdown.set()
-            debug('Waiting for broadcast thread to shutdown...')
-            self._pidbox_node_stopped.wait()
-            self._pidbox_node_stopped = self._pidbox_node_shutdown = None
-        elif self.broadcast_consumer:
-            debug('Closing broadcast channel...')
-            self.broadcast_consumer = \
-                self.maybe_conn_error(self.broadcast_consumer.channel.close)
-
-    def _green_pidbox_node(self):
-        """Sets up the process mailbox when running in a greenlet
-        environment."""
-        # THIS CODE IS TERRIBLE
-        # Luckily work has already started rewriting the Consumer for 4.0.
-        self._pidbox_node_shutdown = threading.Event()
-        self._pidbox_node_stopped = threading.Event()
-        try:
-            with self._open_connection() as conn:
-                info('pidbox: Connected to %s.', conn.as_uri())
-                self.pidbox_node.channel = conn.default_channel
-                self.broadcast_consumer = self.pidbox_node.listen(
-                                            callback=self.on_control)
-                with self.broadcast_consumer:
-                    while not self._pidbox_node_shutdown.isSet():
-                        try:
-                            conn.drain_events(timeout=1.0)
-                        except socket.timeout:
-                            pass
-        finally:
-            self._pidbox_node_stopped.set()
-
-    def reset_connection(self):
-        """Re-establish the broker connection and set up consumers,
-        heartbeat and the event dispatcher."""
-        debug('Re-establishing connection to the broker...')
-        self.stop_consumers(join=False)
-
-        # Clear internal queues to get rid of old messages.
-        # They can't be acked anyway, as a delivery tag is specific
-        # to the current channel.
-        self.ready_queue.clear()
-        self.timer.clear()
-
-        # Re-establish the broker connection and setup the task consumer.
-        self.connection = self._open_connection()
-        info('consumer: Connected to %s.', self.connection.as_uri())
-        self.task_consumer = self.app.amqp.TaskConsumer(self.connection,
-                                    on_decode_error=self.on_decode_error)
-        # QoS: Reset prefetch window.
-        self.qos = QoS(self.task_consumer, self.initial_prefetch_count)
-        self.qos.update()
-
-        # Setup the process mailbox.
-        self.reset_pidbox_node()
+    def __init__(self, c, send_events=None, **kwargs):
+        self.send_events = send_events
+        c.event_dispatcher = None
 
+    def start(self, c):
         # Flush events sent while connection was down.
-        prev_event_dispatcher = self.event_dispatcher
-        self.event_dispatcher = self.app.events.Dispatcher(self.connection,
-                                                hostname=self.hostname,
-                                                enabled=self.send_events)
-        if prev_event_dispatcher:
-            self.event_dispatcher.copy_buffer(prev_event_dispatcher)
-            self.event_dispatcher.flush()
+        prev = c.event_dispatcher
+        dis = c.event_dispatcher = c.app.events.Dispatcher(
+            c.connection, hostname=c.hostname, enabled=self.send_events,
+        )
+        if prev:
+            dis.copy_buffer(prev)
+            dis.flush()
 
-        # Restart heartbeat thread.
-        self.restart_heartbeat()
+    def stop(self, c):
+        if c.event_dispatcher:
+            ignore_errors(c, c.event_dispatcher.close)
+            c.event_dispatcher = None
+    shutdown = stop
 
-        # reload all task's execution strategies.
-        self.update_strategies()
 
-        # We're back!
-        self._state = RUN
+class Heart(bootsteps.StartStopStep):
+    requires = (Events, )
 
-    def restart_heartbeat(self):
-        """Restart the heartbeat thread.
+    def __init__(self, c, **kwargs):
+        c.heart = None
 
-        This thread sends heartbeat events at intervals so monitors
-        can tell if the worker is off-line/missing.
+    def start(self, c):
+        c.heart = heartbeat.Heart(c.timer, c.event_dispatcher)
+        c.heart.start()
 
-        """
-        self.heart = Heart(self.timer, self.event_dispatcher)
-        self.heart.start()
+    def stop(self, c):
+        c.heart = c.heart and c.heart.stop()
+    shutdown = stop
 
-    def _open_connection(self):
-        """Establish the broker connection.
 
-        Will retry establishing the connection if the
-        :setting:`BROKER_CONNECTION_RETRY` setting is enabled
+class Control(bootsteps.StartStopStep):
+    requires = (Events, )
 
-        """
-        conn = self.app.connection(heartbeat=self.amqheartbeat)
-
-        # Callback called for each retry while the connection
-        # can't be established.
-        def _error_handler(exc, interval, next_step=CONNECTION_RETRY):
-            if getattr(conn, 'alt', None) and interval == 0:
-                next_step = CONNECTION_FAILOVER
-            error(CONNECTION_ERROR, conn.as_uri(), exc,
-                  next_step.format(when=humanize_seconds(interval, 'in', ' ')))
-
-        # remember that the connection is lazy, it won't establish
-        # until it's needed.
-        if not self.app.conf.BROKER_CONNECTION_RETRY:
-            # retry disabled, just call connect directly.
-            conn.connect()
-            return conn
-
-        return conn.ensure_connection(_error_handler,
-                    self.app.conf.BROKER_CONNECTION_MAX_RETRIES,
-                    callback=self.maybe_shutdown)
-
-    def stop(self):
-        """Stop consuming.
-
-        Does not close the broker connection, so be sure to call
-        :meth:`close_connection` when you are finished with it.
-
-        """
-        # Notifies other threads that this instance can't be used
-        # anymore.
-        self.close()
-        debug('Stopping consumers...')
-        self.stop_consumers(close_connection=False, join=True)
+    def __init__(self, c, **kwargs):
+        self.is_green = c.pool is not None and c.pool.is_green
+        self.box = (pidbox.gPidbox if self.is_green else pidbox.Pidbox)(c)
+        self.start = self.box.start
+        self.stop = self.box.stop
+        self.shutdown = self.box.shutdown
 
-    def close(self):
-        self._state = CLOSE
 
-    def maybe_shutdown(self):
-        if state.should_stop:
-            raise SystemExit()
-        elif state.should_terminate:
-            raise SystemTerminate()
+class Tasks(bootsteps.StartStopStep):
+    requires = (Control, )
 
-    def add_task_queue(self, queue, exchange=None, exchange_type=None,
-            routing_key=None, **options):
-        cset = self.task_consumer
-        try:
-            q = self.app.amqp.queues[queue]
-        except KeyError:
-            exchange = queue if exchange is None else exchange
-            exchange_type = 'direct' if exchange_type is None \
-                                     else exchange_type
-            q = self.app.amqp.queues.select_add(queue,
-                    exchange=exchange,
-                    exchange_type=exchange_type,
-                    routing_key=routing_key, **options)
-        if not cset.consuming_from(queue):
-            cset.add_queue(q)
-            cset.consume()
-            logger.info('Started consuming from %r', queue)
-
-    def cancel_task_queue(self, queue):
-        self.app.amqp.queues.select_remove(queue)
-        self.task_consumer.cancel_by_queue(queue)
+    def __init__(self, c, initial_prefetch_count=2, **kwargs):
+        c.task_consumer = c.qos = None
+        self.initial_prefetch_count = initial_prefetch_count
 
-    @property
-    def info(self):
-        """Returns information about this consumer instance
-        as a dict.
+    def start(self, c):
+        c.update_strategies()
+        c.task_consumer = c.app.amqp.TaskConsumer(
+            c.connection, on_decode_error=c.on_decode_error,
+        )
+        c.qos = QoS(c.task_consumer.qos, self.initial_prefetch_count)
+        c.qos.update()  # set initial prefetch count
+
+    def stop(self, c):
+        if c.task_consumer:
+            debug('Cancelling task consumer...')
+            ignore_errors(c, c.task_consumer.cancel)
+
+    def shutdown(self, c):
+        if c.task_consumer:
+            self.stop(c)
+            debug('Closing consumer channel...')
+            ignore_errors(c, c.task_consumer.close)
+            c.task_consumer = None
 
-        This is also the consumer related info returned by
-        ``celeryctl stats``.
 
-        """
-        conninfo = {}
-        if self.connection:
-            conninfo = self.connection.info()
-            conninfo.pop('password', None)  # don't send password.
-        return {'broker': conninfo, 'prefetch_count': self.qos.value}
+class Agent(bootsteps.StartStopStep):
+    conditional = True
+    requires = (Connection, )
 
+    def __init__(self, c, **kwargs):
+        self.agent_cls = self.enabled = c.app.conf.CELERYD_AGENT
 
-class BlockingConsumer(Consumer):
+    def create(self, c):
+        agent = c.agent = self.instantiate(self.agent_cls, c.connection)
+        return agent
 
-    def consume_messages(self):
-        # receive_message handles incoming messages.
-        self.task_consumer.register_callback(self.receive_message)
-        self.task_consumer.consume()
 
-        debug('Ready to accept tasks!')
+class Evloop(bootsteps.StartStopStep):
+    label = 'event loop'
+    last = True
 
-        while self._state != CLOSE and self.connection:
-            self.maybe_shutdown()
-            if self.qos.prev != self.qos.value:     # pragma: no cover
-                self.qos.update()
-            try:
-                self.connection.drain_events(timeout=10.0)
-            except socket.timeout:
-                pass
-            except socket.error:
-                if self._state != CLOSE:            # pragma: no cover
-                    raise
+    def start(self, c):
+        c.loop(*c.loop_args())
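
Taken together, the classes above replace the old monolithic consumer with small ``StartStopStep`` units that each receive the ``Consumer`` instance ``c``. As a hedged illustration only (the ``DeliveryWatcher`` name, its log message and the idea of shipping it in a user module are invented for this sketch, not part of the commit), a user-defined consumer step could follow the same pattern:

    from celery import bootsteps
    from celery.utils.log import get_logger

    logger = get_logger(__name__)


    class DeliveryWatcher(bootsteps.StartStopStep):
        """Log the broker URI whenever the consumer (re)starts."""

        def __init__(self, c, **kwargs):
            # Like the built-in steps, initialize any attribute the step
            # adds to the consumer instance here.
            c.delivery_watcher = None

        def start(self, c):
            # c.connection is assumed to be set up by the built-in
            # Connection step before dependent steps are started.
            c.delivery_watcher = self
            logger.info('consumer bound to %s', c.connection.as_uri())

        def stop(self, c):
            c.delivery_watcher = None
        shutdown = stop

A module containing such a class could then be listed in the new ``CELERYD_CONSUMER_BOOTSTEPS`` setting (see the documentation hunk further down) so the worker loads it at boot.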

+ 8 - 6
celery/worker/control.py

@@ -39,17 +39,19 @@ class Panel(UserDict):
 def revoke(panel, task_id, terminate=False, signal=None, **kwargs):
     """Revoke task by task id."""
     revoked.add(task_id)
-    action = 'revoked'
     if terminate:
         signum = _signals.signum(signal or 'TERM')
-        for request in state.active_requests:
+        for request in state.reserved_requests:
             if request.id == task_id:
-                action = 'terminated ({0})'.format(signum)
+                logger.info('Terminating %s (%s)', task_id, signum)
                 request.terminate(panel.consumer.pool, signal=signum)
                 break
+        else:
+            return {'ok': 'terminate: task {0} not found'.format(task_id)}
+        return {'ok': 'terminating {0} ({1})'.format(task_id, signal)}
 
-    logger.info('Task %s %s.', task_id, action)
-    return {'ok': 'task {0} {1}'.format(task_id, action)}
+    logger.info('Revoking task %s', task_id)
+    return {'ok': 'revoking task {0}'.format(task_id)}
 
 
 @Panel.register
@@ -212,7 +214,7 @@ def dump_tasks(panel, taskinfoitems=None, **kwargs):
 
 @Panel.register
 def ping(panel, **kwargs):
-    return 'pong'
+    return {'ok': 'pong'}
 
 
 @Panel.register
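
The structured replies introduced here surface directly in the remote-control API. A hedged usage sketch (the broker URL, task id and timeout are placeholders):

    from celery import Celery

    app = Celery(broker='amqp://')   # placeholder broker URL

    # Revoke a task by id; with terminate=True the revoke command above
    # also signals the worker process currently executing it.
    app.control.revoke('d9078da5-9915-40a0-bfa1-392c7bde42ed',
                       terminate=True, signal='SIGTERM')

    # ping() now returns the structured {'ok': 'pong'} reply per worker
    # instead of a bare string.
    print(app.control.ping(timeout=1.0))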

+ 1 - 2
celery/worker/heartbeat.py

@@ -28,8 +28,7 @@ class Heart(object):
         self.interval = float(interval or 5.0)
         self.tref = None
 
-        # Make event dispatcher start/stop us when it's
-        # enabled/disabled.
+        # Make event dispatcher start/stop us when enabled/disabled.
         self.eventer.on_enabled.add(self.start)
         self.eventer.on_disabled.add(self.stop)
 

+ 2 - 2
celery/worker/hub.py

@@ -132,11 +132,11 @@ class Hub(object):
         self.on_task = []
 
     def start(self):
-        """Called by StartStopComponent at worker startup."""
+        """Called by Hub bootstep at worker startup."""
         self.poller = eventio.poll()
 
     def stop(self):
-        """Called by StartStopComponent at worker shutdown."""
+        """Called by Hub bootstep at worker shutdown."""
         self.poller.close()
 
     def init(self):

+ 51 - 27
celery/worker/job.py

@@ -23,7 +23,7 @@ from celery import exceptions
 from celery import signals
 from celery.app import app_or_default
 from celery.datastructures import ExceptionInfo
-from celery.exceptions import TaskRevokedError
+from celery.exceptions import Ignore, TaskRevokedError
 from celery.platforms import signals as _signals
 from celery.task.trace import (
     trace_task,
@@ -34,20 +34,27 @@ from celery.utils.functional import noop
 from celery.utils.log import get_logger
 from celery.utils.serialization import get_pickled_exception
 from celery.utils.text import truncate
-from celery.utils.timeutils import maybe_iso8601, timezone
+from celery.utils.timeutils import maybe_iso8601, timezone, maybe_make_aware
 
 from . import state
 
 logger = get_logger(__name__)
 debug, info, warn, error = (logger.debug, logger.info,
                             logger.warn, logger.error)
-_does_debug = logger.isEnabledFor(logging.DEBUG)
-_does_info = logger.isEnabledFor(logging.INFO)
+_does_info = False
+_does_debug = False
+
+
+def __optimize__():
+    global _does_debug
+    global _does_info
+    _does_debug = logger.isEnabledFor(logging.DEBUG)
+    _does_info = logger.isEnabledFor(logging.INFO)
+__optimize__()
 
 # Localize
-tz_to_local = timezone.to_local
-tz_or_local = timezone.tz_or_local
 tz_utc = timezone.utc
+tz_or_local = timezone.tz_or_local
 send_revoked = signals.task_revoked.send
 
 task_accepted = state.task_accepted
@@ -64,8 +71,9 @@ class Request(object):
                  'eventer', 'connection_errors',
                  'task', 'eta', 'expires',
                  'request_dict', 'acknowledged', 'success_msg',
-                 'error_msg', 'retry_msg', 'time_start', 'worker_pid',
-                 '_already_revoked', '_terminate_on_ack', '_tzlocal')
+                 'error_msg', 'retry_msg', 'ignore_msg', 'utc',
+                 'time_start', 'worker_pid', '_already_revoked',
+                 '_terminate_on_ack', '_tzlocal')
 
     #: Format string used to log task success.
     success_msg = """\
@@ -82,6 +90,10 @@ class Request(object):
         Task %(name)s[%(id)s] INTERNAL ERROR: %(exc)s
     """
 
+    ignored_msg = """\
+        Task %(name)s[%(id)s] ignored
+    """
+
     #: Format string used to log task retry.
     retry_msg = """Task %(name)s[%(id)s] retry: %(exc)s"""
 
@@ -103,7 +115,7 @@ class Request(object):
             self.kwargs = kwdict(self.kwargs)
         eta = body.get('eta')
         expires = body.get('expires')
-        utc = body.get('utc', False)
+        utc = self.utc = body.get('utc', False)
         self.on_ack = on_ack
         self.hostname = hostname or socket.gethostname()
         self.eventer = eventer
@@ -116,14 +128,15 @@ class Request(object):
         # timezone means the message is timezone-aware, and the only timezone
         # supported at this point is UTC.
         if eta is not None:
-            tz = tz_utc if utc else self.tzlocal
-            self.eta = tz_to_local(maybe_iso8601(eta), self.tzlocal, tz)
+            self.eta = maybe_iso8601(eta)
+            if utc:
+                self.eta = maybe_make_aware(self.eta, self.tzlocal)
         else:
             self.eta = None
         if expires is not None:
-            tz = tz_utc if utc else self.tzlocal
-            self.expires = tz_to_local(maybe_iso8601(expires),
-                                       self.tzlocal, tz)
+            self.expires = maybe_iso8601(expires)
+            if utc:
+                self.expires = maybe_make_aware(self.expires, self.tzlocal)
         else:
             self.expires = None
 
@@ -236,9 +249,11 @@ class Request(object):
 
     def maybe_expire(self):
         """If expired, mark the task as revoked."""
-        if self.expires and datetime.now(self.tzlocal) > self.expires:
-            revoked_tasks.add(self.id)
-            return True
+        if self.expires:
+            now = datetime.now(tz_or_local(self.tzlocal) if self.utc else None)
+            if now > self.expires:
+                revoked_tasks.add(self.id)
+                return True
 
     def terminate(self, pool, signal=None):
         if self.time_start:
@@ -350,19 +365,21 @@ class Request(object):
         task_ready(self)
 
         if not exc_info.internal:
+            exc = exc_info.exception
 
-            if isinstance(exc_info.exception, exceptions.RetryTaskError):
+            if isinstance(exc, exceptions.RetryTaskError):
                 return self.on_retry(exc_info)
 
-            # This is a special case as the process would not have had
+            # These are special cases where the process would not have had
             # time to write the result.
-            if isinstance(exc_info.exception, exceptions.WorkerLostError) and \
-                    self.store_errors:
-                self.task.backend.mark_as_failure(self.id, exc_info.exception)
+            if self.store_errors:
+                if isinstance(exc, exceptions.WorkerLostError):
+                    self.task.backend.mark_as_failure(self.id, exc)
+                elif isinstance(exc, exceptions.Terminated):
+                    self._announce_revoked('terminated', True, str(exc), False)
             # (acks_late) acknowledge after result stored.
             if self.task.acks_late:
                 self.acknowledge()
-
         self._log_error(exc_info)
 
     def _log_error(self, einfo):
@@ -383,9 +400,16 @@ class Request(object):
                          traceback=traceback)
 
         if internal:
-            format = self.internal_error_msg
-            description = 'INTERNAL ERROR'
-            severity = logging.CRITICAL
+            if isinstance(einfo.exception, Ignore):
+                format = self.ignored_msg
+                description = 'ignored'
+                severity = logging.INFO
+                exc_info = None
+                self.acknowledge()
+            else:
+                format = self.internal_error_msg
+                description = 'INTERNAL ERROR'
+                severity = logging.CRITICAL
 
         context = {
             'hostname': self.hostname,
@@ -444,7 +468,7 @@ class Request(object):
     @property
     def tzlocal(self):
         if self._tzlocal is None:
-            self._tzlocal = tz_or_local(self.app.conf.CELERY_TIMEZONE)
+            self._tzlocal = self.app.conf.CELERY_TIMEZONE
         return self._tzlocal
 
     @property
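
The new ``Ignore`` handling means a task can bail out quietly: the worker acknowledges the message and logs the ``ignored_msg`` line added above instead of recording a failure. A hedged sketch (the app instance, broker URL and the ``seen`` set are placeholders for real application logic):

    from celery import Celery
    from celery.exceptions import Ignore

    app = Celery(broker='amqp://')   # placeholder broker URL
    seen = set()                     # placeholder deduplication state


    @app.task()
    def import_feed(feed_url):
        if feed_url in seen:
            # The worker logs "Task ...[id] ignored" and acks the message.
            raise Ignore()
        seen.add(feed_url)
        # ... actual import work would go here ...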

+ 156 - 0
celery/worker/loops.py

@@ -0,0 +1,156 @@
+"""
+celery.worker.loops
+~~~~~~~~~~~~~~~~~~~
+
+The consumer's highly optimized inner loop.
+
+"""
+from __future__ import absolute_import
+
+import socket
+
+from time import sleep
+from Queue import Empty
+
+from kombu.utils.eventio import READ, WRITE, ERR
+
+from celery.bootsteps import CLOSE
+from celery.exceptions import InvalidTaskError, SystemTerminate
+
+from . import state
+
+#: Heartbeat check is called every ``heartbeat / rate`` seconds
+#: (e.g. every 5 seconds for a 10 second heartbeat at the default rate of 2).
+AMQHEARTBEAT_RATE = 2.0
+
+
+def asynloop(obj, connection, consumer, strategies, ns, hub, qos,
+        heartbeat, handle_unknown_message, handle_unknown_task,
+        handle_invalid_task, sleep=sleep, min=min, Empty=Empty,
+        hbrate=AMQHEARTBEAT_RATE):
+    """Non-blocking eventloop consuming messages until connection is lost,
+    or shutdown is requested."""
+
+    with hub as hub:
+        update_qos = qos.update
+        update_readers = hub.update_readers
+        readers, writers = hub.readers, hub.writers
+        poll = hub.poller.poll
+        fire_timers = hub.fire_timers
+        scheduled = hub.timer._queue
+        hbtick = connection.heartbeat_check
+        on_poll_start = connection.transport.on_poll_start
+        on_poll_empty = connection.transport.on_poll_empty
+        drain_nowait = connection.drain_nowait
+        on_task_callbacks = hub.on_task
+        keep_draining = connection.transport.nb_keep_draining
+
+        if heartbeat and connection.supports_heartbeats:
+            hub.timer.apply_interval(
+                heartbeat * 1000.0 / hbrate, hbtick, (hbrate, ))
+
+        def on_task_received(body, message):
+            if on_task_callbacks:
+                [callback() for callback in on_task_callbacks]
+            try:
+                name = body['task']
+            except (KeyError, TypeError):
+                return handle_unknown_message(body, message)
+            try:
+                strategies[name](message, body, message.ack_log_error)
+            except KeyError as exc:
+                handle_unknown_task(body, message, exc)
+            except InvalidTaskError as exc:
+                handle_invalid_task(body, message, exc)
+
+        consumer.callbacks = [on_task_received]
+        consumer.consume()
+        obj.on_ready()
+
+        while ns.state != CLOSE and obj.connection:
+            # shutdown if signal handlers told us to.
+            if state.should_stop:
+                raise SystemExit()
+            elif state.should_terminate:
+                raise SystemTerminate()
+
+            # fire any ready timers, this also returns
+            # the number of seconds until we need to fire timers again.
+            poll_timeout = fire_timers() if scheduled else 1
+
+            # We only update QoS when there is no more messages to read.
+            # This groups together qos calls, and makes sure that remote
+            # control commands will be prioritized over task messages.
+            if qos.prev != qos.value:
+                update_qos()
+
+            update_readers(on_poll_start())
+            if readers or writers:
+                connection.more_to_read = True
+                while connection.more_to_read:
+                    try:
+                        events = poll(poll_timeout)
+                    except ValueError:  # Issue 882
+                        return
+                    if not events:
+                        on_poll_empty()
+                    for fileno, event in events or ():
+                        try:
+                            if event & READ:
+                                readers[fileno](fileno, event)
+                            if event & WRITE:
+                                writers[fileno](fileno, event)
+                            if event & ERR:
+                                for handlermap in readers, writers:
+                                    try:
+                                        handlermap[fileno](fileno, event)
+                                    except KeyError:
+                                        pass
+                        except (KeyError, Empty):
+                            continue
+                        except socket.error:
+                            if ns.state != CLOSE:  # pragma: no cover
+                                raise
+                    if keep_draining:
+                        drain_nowait()
+                        poll_timeout = 0
+                    else:
+                        connection.more_to_read = False
+            else:
+                # no sockets yet, startup is probably not done.
+                sleep(min(poll_timeout, 0.1))
+
+
+def synloop(obj, connection, consumer, strategies, ns, hub, qos,
+        heartbeat, handle_unknown_message, handle_unknown_task,
+        handle_invalid_task, **kwargs):
+    """Fallback blocking eventloop for transports that doesn't support AIO."""
+
+    def on_task_received(body, message):
+        try:
+            name = body['task']
+        except (KeyError, TypeError):
+            return handle_unknown_message(body, message)
+
+        try:
+            strategies[name](message, body, message.ack_log_error)
+        except KeyError as exc:
+            handle_unknown_task(body, message, exc)
+        except InvalidTaskError as exc:
+            handle_invalid_task(body, message, exc)
+
+    consumer.register_callback(on_task_received)
+    consumer.consume()
+
+    obj.on_ready()
+
+    while ns.state != CLOSE and obj.connection:
+        state.maybe_shutdown()
+        if qos.prev != qos.value:         # pragma: no cover
+            qos.update()
+        try:
+            connection.drain_events(timeout=2.0)
+        except socket.timeout:
+            pass
+        except socket.error:
+            if ns.state != CLOSE:  # pragma: no cover
+                raise

+ 6 - 4
celery/worker/mediator.py

@@ -20,17 +20,19 @@ import logging
 from Queue import Empty
 
 from celery.app import app_or_default
+from celery.bootsteps import StartStopStep
 from celery.utils.threads import bgThread
 from celery.utils.log import get_logger
 
-from .bootsteps import StartStopComponent
+from . import components
 
 logger = get_logger(__name__)
 
 
-class WorkerComponent(StartStopComponent):
-    name = 'worker.mediator'
-    requires = ('pool', 'queues', )
+class WorkerComponent(StartStopStep):
+    label = 'Mediator'
+    conditional = True
+    requires = (components.Pool, components.Queues, )
 
     def __init__(self, w, **kwargs):
         w.mediator = None

+ 103 - 0
celery/worker/pidbox.py

@@ -0,0 +1,103 @@
+from __future__ import absolute_import
+
+import socket
+import threading
+
+from kombu.common import ignore_errors
+
+from celery.datastructures import AttributeDict
+from celery.utils.log import get_logger
+
+from . import control
+
+logger = get_logger(__name__)
+debug, error, info = logger.debug, logger.error, logger.info
+
+
+class Pidbox(object):
+    consumer = None
+
+    def __init__(self, c):
+        self.c = c
+        self.hostname = c.hostname
+        self.node = c.app.control.mailbox.Node(c.hostname,
+            handlers=control.Panel.data,
+            state=AttributeDict(app=c.app, hostname=c.hostname, consumer=c),
+        )
+
+    def on_message(self, body, message):
+        try:
+            self.node.handle_message(body, message)
+        except KeyError as exc:
+            error('No such control command: %s', exc)
+        except Exception as exc:
+            error('Control command error: %r', exc, exc_info=True)
+            self.reset()
+
+    def start(self, c):
+        self.node.channel = c.connection.channel()
+        self.consumer = self.node.listen(callback=self.on_message)
+
+    def stop(self, c):
+        self.consumer = self._close_channel(c)
+
+    def reset(self):
+        """Sets up the process mailbox."""
+        self.stop(self.c)
+        self.start(self.c)
+
+    def _close_channel(self, c):
+        if self.node and self.node.channel:
+            ignore_errors(c, self.node.channel.close)
+
+    def shutdown(self, c):
+        if self.consumer:
+            debug('Cancelling broadcast consumer...')
+            ignore_errors(c, self.consumer.cancel)
+        self.stop(self.c)
+
+
+class gPidbox(Pidbox):
+    _node_shutdown = None
+    _node_stopped = None
+    _resets = 0
+
+    def start(self, c):
+        c.pool.spawn_n(self.loop, c)
+
+    def stop(self, c):
+        if self._node_stopped:
+            self._node_shutdown.set()
+            debug('Waiting for broadcast thread to shutdown...')
+            self._node_stopped.wait()
+            self._node_stopped = self._node_shutdown = None
+        super(gPidbox, self).stop(c)
+
+    def reset(self):
+        self._resets += 1
+
+    def _do_reset(self, c, connection):
+        self._close_channel(c)
+        self.node.channel = connection.channel()
+        self.consumer = self.node.listen(callback=self.on_message)
+        self.consumer.consume()
+
+    def loop(self, c):
+        resets = [self._resets]
+        shutdown = self._node_shutdown = threading.Event()
+        stopped = self._node_stopped = threading.Event()
+        try:
+            with c.connect() as connection:
+
+                info('pidbox: Connected to %s.', connection.as_uri())
+                self._do_reset(c, connection)
+                while not shutdown.is_set() and c.connection:
+                    if resets[0] < self._resets:
+                        resets[0] += 1
+                        self._do_reset(c, connection)
+                    try:
+                        connection.drain_events(timeout=1.0)
+                    except socket.timeout:
+                        pass
+        finally:
+            stopped.set()

+ 8 - 0
celery/worker/state.py

@@ -20,6 +20,7 @@ from collections import defaultdict
 from kombu.utils import cached_property
 
 from celery import __version__
+from celery.exceptions import SystemTerminate
 from celery.datastructures import LimitedSet
 
 #: Worker software/platform information.
@@ -53,6 +54,13 @@ should_stop = False
 should_terminate = False
 
 
+def maybe_shutdown():
+    if should_stop:
+        raise SystemExit()
+    elif should_terminate:
+        raise SystemTerminate()
+
+
 def task_accepted(request):
     """Updates global state when a task has been accepted."""
     active_requests.add(request)

+ 1 - 1
docs/.templates/page.html

@@ -12,7 +12,7 @@
         {% else %}
         <p>
         This document describes Celery {{ version }}. For development docs,
-        <a href="http://celery.github.com/celery/{{ pagename }}{{ file_suffix }}">go here</a>.
+        <a href="http://docs.celeryproject.org/en/master/{{ pagename }}{{ file_suffix }}">go here</a>.
         </p>
     {% endif %}
 

+ 1 - 1
docs/Makefile

@@ -1,7 +1,7 @@
 # Makefile for Sphinx documentation
 #
 
-# You can set these variables from the command line.
+# You can set these variables from the command-line.
 SPHINXOPTS    =
 SPHINXBUILD   = sphinx-build
 PAPER         =

+ 27 - 4
docs/configuration.rst

@@ -556,6 +556,14 @@ use the ``TimeUUID`` type as a comparator::
 
     create column family task_results with comparator = TimeUUIDType;
 
+CASSANDRA_OPTIONS
+~~~~~~~~~~~~~~~~~
+
+Options to be passed to the `pycassa connection pool`_ (optional).
+
+.. _`pycassa connection pool`: http://pycassa.github.com/pycassa/api/pycassa/pool.html
+.. setting:: CASSANDRA_DETAILED_MODE
+
 Example configuration
 ~~~~~~~~~~~~~~~~~~~~~
 
@@ -567,6 +575,10 @@ Example configuration
     CASSANDRA_READ_CONSISTENCY = "ONE"
     CASSANDRA_WRITE_CONSISTENCY = "ONE"
     CASSANDRA_DETAILED_MODE = True
+    CASSANDRA_OPTIONS = {
+        'timeout': 300,
+        'max_retries': 10
+    }
 
 .. _conf-messaging:
 
@@ -1402,15 +1414,26 @@ The directory containing X.509 certificates used for
 Custom Component Classes (advanced)
 -----------------------------------
 
-.. setting:: CELERYD_BOOT_STEPS
+.. setting:: CELERYD_BOOTSTEPS
 
-CELERYD_BOOT_STEPS
-~~~~~~~~~~~~~~~~~~
+CELERYD_BOOTSTEPS
+~~~~~~~~~~~~~~~~~
 
 This setting enables you to add additional components to the worker process.
-It should be a list of module names with :class:`celery.abstract.Component`
+It should be a list of module names with
+:class:`celery.bootsteps.Step`
 classes that augment functionality in the worker.
 
+.. setting:: CELERYD_CONSUMER_BOOTSTEPS
+
+CELERYD_CONSUMER_BOOTSTEPS
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This setting enables you to add additional components to the worker's consumer.
+It should be a list of module names with
+:class:`celery.bootsteps.Step` classes that augment
+functionality in the consumer.
+
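A hedged configuration sketch for the two settings (the module paths are placeholders; per the text above, each module is expected to contain ``celery.bootsteps.Step`` subclasses):

    # celeryconfig.py (sketch)
    CELERYD_BOOTSTEPS = ['myapp.worker_steps']             # extends the worker
    CELERYD_CONSUMER_BOOTSTEPS = ['myapp.consumer_steps']  # extends the consumer
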
 .. setting:: CELERYD_POOL
 
 CELERYD_POOL

+ 4 - 6
docs/contributing.rst

@@ -880,12 +880,10 @@ Releasing
 
 Commands to make a new public stable release::
 
-    $ paver releaseok     # checks pep8, autodoc index and runs tests
-    $ paver removepyc  # Remove .pyc files.
-    $ git clean -xdn # Check that there's no left-over files in the repository.
-    $ python2.5 setup.py sdist upload # Upload package to PyPI
-    $ paver upload_pypi_docs
-    $ paver ghdocs # Build and upload documentation to Github.
+    $ paver releaseok  # checks pep8, autodoc index, runs tests and more
+    $ paver removepyc  # Remove .pyc files
+    $ git clean -xdn   # Check that there's no left-over files in the repo
+    $ python setup.py sdist upload  # Upload package to PyPI
 
 If this is a new release series then you also need to do the
 following:

+ 1 - 1
docs/django/first-steps-with-django.rst

@@ -113,7 +113,7 @@ development it is useful to be able to start a worker instance by using the
 
     $ python manage.py celery worker --loglevel=info
 
-For a complete listing of the command line options available,
+For a complete listing of the command-line options available,
 use the help command:
 
 .. code-block:: bash

+ 1 - 20
docs/faq.rst

@@ -231,10 +231,7 @@ to process messages.
 Also, there's another way to be language independent, and that is to use REST
 tasks, instead of your tasks being functions, they're URLs. With this
 information you can even create simple web servers that enable preloading of
-code. See: `User Guide: Remote Tasks`_.
-
-.. _`User Guide: Remote Tasks`:
-    http://celery.github.com/celery/userguide/remote-tasks.html
+code. See: :ref:`User Guide: Remote Tasks <guide-webhooks>`.
 
 .. _faq-troubleshooting:
 
@@ -891,22 +888,6 @@ Several database tables are created by default, these relate to
 Windows
 =======
 
-.. _faq-windows-worker-spawn-loop:
-
-celeryd keeps spawning processes at startup
--------------------------------------------
-
-**Answer**: This is a known issue on Windows.
-You have to start celeryd with the command:
-
-.. code-block:: bash
-
-    $ python -m celery.bin.celeryd
-
-Any additional arguments can be appended to this command.
-
-See http://bit.ly/bo9RSw
-
 .. _faq-windows-worker-embedded-beat:
 
 The `-B` / `--beat` option to celeryd doesn't work?

+ 1 - 1
docs/getting-started/first-steps-with-celery.rst

@@ -170,7 +170,7 @@ background as a daemon.  To do this you need to use the tools provided
 by your platform, or something like `supervisord`_ (see :ref:`daemonizing`
 for more information).
 
-For a complete listing of the command line options available, do:
+For a complete listing of the command-line options available, do:
 
 .. code-block:: bash
 

+ 1 - 3
docs/getting-started/introduction.rst

@@ -31,8 +31,6 @@ by :ref:`using webhooks <guide-webhooks>`.
 
 .. _RCelery: http://leapfrogdevelopment.github.com/rcelery/
 .. _`PHP client`: https://github.com/gjedeer/celery-php
-.. _`using webhooks`:
-    http://celery.github.com/celery/userguide/remote-tasks.html
 
 What do I need?
 ===============
@@ -213,7 +211,7 @@ Features
         - **User Components**
 
             Each worker component can be customized, and additional components
-            can be defined by the user.  The worker is built up using "boot steps" — a
+            can be defined by the user.  The worker is built up using "bootsteps" — a
             dependency graph enabling fine grained control of the worker's
             internals.
 

+ 14 - 6
docs/getting-started/next-steps.rst

@@ -93,7 +93,7 @@ When the worker starts you should see a banner and some messages::
      [2012-06-08 16:23:51,078: WARNING/MainProcess] celery@halcyon.local has started.
 
 -- The *broker* is the URL you specifed in the broker argument in our ``celery``
-module, you can also specify a different broker on the command line by using
+module, you can also specify a different broker on the command-line by using
 the :option:`-b` option.
 
 -- *Concurrency* is the number of multiprocessing worker process used
@@ -125,7 +125,7 @@ as a means for Quality of Service, separation of concerns,
 and emulating priorities, all described in the :ref:`Routing Guide
 <guide-routing>`.
 
-You can get a complete list of command line arguments
+You can get a complete list of command-line arguments
 by passing in the `--help` flag:
 
 .. code-block:: bash
@@ -177,12 +177,20 @@ or stop it:
 
     $ celery multi stop -w1 -A proj -l info
 
+The ``stop`` command is asynchronous, so it will not wait for the
+worker to shut down.  You will probably want to use the ``stopwait``
+command instead, which ensures all currently executing tasks are completed:
+
+.. code-block:: bash
+
+    $ celery multi stopwait -w1 -A proj -l info
+
 .. note::
 
     :program:`celery multi` doesn't store information about workers
-    so you need to use the same command line parameters when restarting.
-    Also the same pidfile and logfile arguments must be used when
-    stopping/killing.
+    so you need to use the same command-line arguments when
+    restarting.  When stopping, only the same pidfile and logfile
+    arguments need to be used.
 
 By default it will create pid and log files in the current directory,
 to protect against multiple workers launching on top of each other
@@ -196,7 +204,7 @@ you are encouraged to put these in a dedicated directory:
                                             --logfile=/var/log/celery/%n.pid
 
 With the multi command you can start multiple workers, and there is a powerful
-command line syntax to specify arguments for different workers too,
+command-line syntax to specify arguments for different workers too,
 e.g:
 
 .. code-block:: bash

+ 7 - 0
docs/glossary.rst

@@ -15,6 +15,13 @@ Glossary
         Sends a task message so that the task function is
         :term:`executed <executing>` by a worker.
 
+    kombu
+        Python messaging library used by Celery to send and receive messages.
+
+    billiard
+        Fork of the Python multiprocessing library containing improvements
+        required by Celery.
+
     executing
         Workers *execute* task :term:`requests <request>`.
 

+ 5 - 12
docs/history/changelog-1.0.rst

@@ -522,7 +522,7 @@ Fixes
         >>> result.get()
         'pong'
 
-* `camqadm`: This is a new utility for command line access to the AMQP API.
+* `camqadm`: This is a new utility for command-line access to the AMQP API.
 
     Excellent for deleting queues/bindings/exchanges, experimentation and
     testing:
@@ -786,7 +786,7 @@ Deprecations
 
     To do this we had to rename the configuration syntax. If you use any of
     the custom AMQP routing options (queue/exchange/routing_key, etc.), you
-    should read the new FAQ entry: http://bit.ly/aiWoH.
+    should read the new FAQ entry: :ref:`faq-task-routing`.
 
     The previous syntax is deprecated and scheduled for removal in v2.0.
 
@@ -1121,10 +1121,7 @@ Important changes
 
 * Celery now supports task retries.
 
-    See `Cookbook: Retrying Tasks`_ for more information.
-
-.. _`Cookbook: Retrying Tasks`:
-    http://celery.github.com/celery/cookbook/task-retries.html
+    See :ref:`task-retry` for more information.
 
 * We now have an AMQP result store backend.
 
@@ -1556,12 +1553,8 @@ arguments, so be sure to flush your task queue before you upgrade.
         CELERY_AMQP_CONSUMER_QUEUE
         CELERY_AMQP_EXCHANGE_TYPE
 
-  See the entry `Can I send some tasks to only some servers?`_ in the
-  `FAQ`_ for more information.
-
-.. _`Can I send some tasks to only some servers?`:
-        http://bit.ly/celery_AMQP_routing
-.. _`FAQ`: http://celery.github.com/celery/faq.html
+  See the entry :ref:`faq-task-routing` in the
+  :ref:`FAQ <faq>` for more information.
 
 * Task errors are now logged using log level `ERROR` instead of `INFO`,
   and stacktraces are dumped. Thanks to Grégoire Cachet.

+ 4 - 7
docs/history/changelog-2.0.rst

@@ -556,7 +556,7 @@ Backward incompatible changes
         'pong'
 
 * The following deprecated settings has been removed (as scheduled by
-  the `deprecation timeline`_):
+  the :ref:`deprecation-timeline`):
 
     =====================================  =====================================
     **Setting name**                       **Replace with**
@@ -568,14 +568,11 @@ Backward incompatible changes
     `CELERY_AMQP_PUBLISHER_ROUTING_KEY`    `CELERY_DEFAULT_ROUTING_KEY`
     =====================================  =====================================
 
-.. _`deprecation timeline`:
-    http://celery.github.com/celery/internals/deprecation.html
-
 * The `celery.task.rest` module has been removed, use :mod:`celery.task.http`
-  instead (as scheduled by the `deprecation timeline`_).
+  instead (as scheduled by the :ref:`deprecation-timeline`).
 
 * It's no longer allowed to skip the class name in loader names.
-  (as scheduled by the `deprecation timeline`_):
+  (as scheduled by the :ref:`deprecation-timeline`):
 
     Assuming the implicit `Loader` class name is no longer supported,
     if you use e.g.::
@@ -769,7 +766,7 @@ News
         exception will be raised when this is exceeded.  The task can catch
         this to e.g. clean up before the hard time limit comes.
 
-    New command line arguments to celeryd added:
+    New command-line arguments to celeryd added:
     `--time-limit` and `--soft-time-limit`.
 
     What's left?

+ 3 - 3
docs/history/changelog-2.1.rst

@@ -147,13 +147,13 @@ Fixes
 * :program:`celeryd-multi`: Fixed `set changed size during iteration` bug
     occurring in the restart command.
 
-* celeryd: Accidentally tried to use additional command line arguments.
+* celeryd: Accidentally tried to use additional command-line arguments.
 
    This would lead to an error like:
 
     `got multiple values for keyword argument 'concurrency'`.
 
-    Additional command line arguments are now ignored, and does not
+    Additional command-line arguments are now ignored, and do not
     produce this error.  However -- we do reserve the right to use
     positional arguments in the future, so please do not depend on this
     behavior.
@@ -360,7 +360,7 @@ News
     There's also a Debian init.d script for :mod:`~celery.bin.celeryev` available,
     see :ref:`daemonizing` for more information.
 
-    New command line arguments to celeryev:
+    New command-line arguments to celeryev:
 
         * :option:`-c|--camera`: Snapshot camera class to use.
         * :option:`--logfile|-f`: Log file

+ 3 - 3
docs/history/changelog-2.2.rst

@@ -233,7 +233,7 @@ Fixes
 * celerybeat:  PersistentScheduler now automatically removes a corrupted
   schedule file (Issue #346).
 
-* Programs that doesn't support positional command line arguments now provides
+* Programs that don't support positional command-line arguments now provide
   a user friendly error message.
 
 * Programs no longer tries to load the configuration file when showing
@@ -708,7 +708,7 @@ Important Notes
             $ camqadm exchange.delete celeryevent
 
 * `celeryd` now starts without configuration, and configuration can be
-  specified directly on the command line.
+  specified directly on the command-line.
 
   Configuration options must appear after the last argument, separated
   by two dashes:
@@ -912,7 +912,7 @@ News
    scheduled tasks.
 
 * The configuration module and loader to use can now be specified on
-  the command line.
+  the command-line.
 
     For example:
 

+ 2 - 2
docs/history/changelog-2.3.rst

@@ -69,8 +69,8 @@ News
 
 * Improved Contributing guide.
 
-    If you'd like to contribute to Celery you should read this
-    guide: http://celery.github.com/celery/contributing.html
+    If you'd like to contribute to Celery you should read the
+    :ref:`Contributing Guide <contributing>`.
 
     We are looking for contributors at all skill levels, so don't
     hesitate!

+ 1 - 1
docs/history/changelog-2.4.rst

@@ -203,7 +203,7 @@ Important Notes
     then the value from the configuration will be used as default.
 
     Also, programs now support the :option:`-b|--broker` option to specify
-    a broker URL on the command line:
+    a broker URL on the command-line:
 
     .. code-block:: bash
 

+ 2 - 2
docs/history/changelog-2.5.rst

@@ -133,10 +133,10 @@ Fixes
 
     Fix contributed by Martin Melin.
 
-- celeryctl can now be configured on the command line.
+- celeryctl can now be configured on the command-line.
 
     Like with celeryd it is now possible to configure celery settings
-    on the command line for celeryctl:
+    on the command-line for celeryctl:
 
     .. code-block:: bash
 

BIN
docs/images/consumer_graph.png


BIN
docs/images/graph.png


BIN
docs/images/result_graph.png


BIN
docs/images/worker_graph.png


BIN
docs/images/worker_graph_full.png


+ 1 - 1
docs/includes/introduction.txt

@@ -31,7 +31,7 @@ by using webhooks.
 .. _RCelery: http://leapfrogdevelopment.github.com/rcelery/
 .. _`PHP client`: https://github.com/gjedeer/celery-php
 .. _`using webhooks`:
-    http://celery.github.com/celery/userguide/remote-tasks.html
+    http://docs.celeryproject.org/en/latest/userguide/remote-tasks.html
 
 What do I need?
 ===============

+ 2 - 1
docs/includes/resources.txt

@@ -52,7 +52,8 @@ to send regular patches.
 Be sure to also read the `Contributing to Celery`_ section in the
 documentation.
 
-.. _`Contributing to Celery`: http://celery.github.com/celery/contributing.html
+.. _`Contributing to Celery`:
+    http://docs.celeryproject.org/en/master/contributing.html
 
 .. _license:
 

+ 2 - 1
docs/internals/guide.rst

@@ -260,9 +260,10 @@ Module Overview
 - celery.apps
 
     Major user applications: ``celeryd``, and ``celerybeat``
+
 - celery.bin
 
-    Command line applications.
+    Command-line applications.
     setup.py creates setuptools entrypoints for these.
 
 - celery.concurrency

+ 0 - 1
docs/internals/reference/index.rst

@@ -21,7 +21,6 @@
     celery.worker.strategy
     celery.worker.autoreload
     celery.worker.autoscale
-    celery.worker.bootsteps
     celery.concurrency
     celery.concurrency.solo
     celery.concurrency.processes

+ 2 - 10
docs/reference/celery.app.amqp.rst

@@ -39,16 +39,8 @@
     ------
 
     .. autoclass:: Queues
-
-        .. automethod:: add
-
-        .. automethod:: format
-
-        .. automethod:: select_subset
-
-        .. automethod:: new_missing
-
-        .. autoattribute:: consume_from
+        :members:
+        :undoc-members:
 
     TaskPublisher
     -------------

+ 3 - 3
docs/internals/reference/celery.worker.bootsteps.rst → docs/reference/celery.bootsteps.rst

@@ -1,11 +1,11 @@
 ==========================================
- celery.worker.bootsteps
+ celery.bootsteps
 ==========================================
 
 .. contents::
     :local:
-.. currentmodule:: celery.worker.bootsteps
+.. currentmodule:: celery.bootsteps
 
-.. automodule:: celery.worker.bootsteps
+.. automodule:: celery.bootsteps
     :members:
     :undoc-members:

+ 32 - 1
docs/reference/celery.rst

@@ -40,6 +40,16 @@ Application
 
         Current configuration.
 
+    .. attribute:: user_options
+
+        Custom options for command-line programs.
+        See :ref:`extending-commandoptions`.
+
+    .. attribute:: steps
+
+        Custom bootsteps to extend and modify the worker.
+        See :ref:`extending-bootsteps`.
+
     .. attribute:: Celery.current_task
 
         The instance of the task that is being executed, or :const:`None`.
@@ -89,7 +99,7 @@ Application
         Only necessary for dynamically created apps for which you can
         use the with statement::
 
-            with Celery(...) as app:
+            with Celery(set_as_current=False) as app:
                 with app.connection() as conn:
                     pass
 
@@ -124,6 +134,27 @@ Application
             >>> os.environ["CELERY_CONFIG_MODULE"] = "myapp.celeryconfig"
             >>> celery.config_from_envvar("CELERY_CONFIG_MODULE")
 
+    .. method:: Celery.autodiscover_tasks(packages, related_name="tasks")
+
+        With a list of packages, try to import modules of a specific name (by
+        default 'tasks').
+
+        For example, if you have an (imagined) directory tree like this::
+
+            foo/__init__.py
+               tasks.py
+               models.py
+
+            bar/__init__.py
+                tasks.py
+                models.py
+
+            baz/__init__.py
+                models.py
+
+        Then calling ``app.autodiscover_tasks(['foo', 'bar', 'baz'])`` will
+        result in the modules ``foo.tasks`` and ``bar.tasks`` being imported.
+
     .. method:: Celery.add_defaults(d)
 
         Add default configuration from dict ``d``.

Because a large number of files were changed in this diff, some files are not shown.