Ask Solem 14 years ago
Parent
Commit
6a7b03b298

+ 9 - 0
celery/result.py

@@ -114,6 +114,15 @@ class BaseAsyncResult(object):
         """
         return self.backend.get_result(self.task_id)
 
+    @property
+    def info(self):
+        """Get state metadata.
+
+        Alias to :attr:`result`.
+
+        """
+        return self.result
+
     @property
     def traceback(self):
         """Get the traceback of a failed task."""

+ 47 - 53
docs/getting-started/first-steps-with-celery.rst

@@ -12,14 +12,12 @@
 Creating a simple task
 ======================
 
-In this example we are creating a simple task that adds two
-numbers. Tasks are defined in a normal python module. The module can
-be named whatever you like, but the convention is to call it
-:file:`tasks.py`.
+In this tutorial we are creating a simple task that adds two
+numbers.  Tasks are defined in normal Python modules.
 
-Our addition task looks like this:
+By convention we will call our module :file:`tasks.py`, and it looks like this:
 
-:file:`tasks.py`:
+**:file:`tasks.py`:**
 
 .. code-block:: python
 
@@ -30,31 +28,33 @@ Our addition task looks like this:
         return x + y
 
 
-All celery tasks are classes that inherit from the ``Task``
-class. In this case we're using a decorator that wraps the add
-function in an appropriate class for us automatically. The full
-documentation on how to create tasks and task classes is in the
-:doc:`../userguide/tasks` part of the user guide.
+All Celery tasks are classes that inherit from the
+:class:`~celery.task.base.Task` class.  In this example we're using a
+decorator that wraps the add function in an appropriate class for us
+automatically.
+
+.. seealso::
+
+    The full documentation on how to create tasks and task classes is in the
+    :doc:`../userguide/tasks` part of the user guide.
 
 .. _celerytut-conf:
 
 Configuration
 =============
 
-Celery is configured by using a configuration module. By default
+Celery is configured by using a configuration module.  By default
 this module is called :file:`celeryconfig.py`.
 
-.. note::
-
-    The configuration module must be on the Python path so it
-    can be imported.
+The configuration module must either be in the current directory
+or on the Python path, so that it can be imported.
 
-    You can also set a custom name for the configuration module using
-    the :envvar:`CELERY_CONFIG_MODULE` environment variable.
+You can also set a custom name for the configuration module by using
+the :envvar:`CELERY_CONFIG_MODULE` environment variable.
 
 Let's create our :file:`celeryconfig.py`.
 
-1. Configure how we communicate with the broker::
+1. Configure how we communicate with the broker (RabbitMQ in this example)::
 
         BROKER_HOST = "localhost"
         BROKER_PORT = 5672
@@ -62,17 +62,18 @@ Let's create our :file:`celeryconfig.py`.
         BROKER_PASSWORD = "mypassword"
         BROKER_VHOST = "myvhost"
 
-2. In this example we don't want to store the results of the tasks, so
-   we'll use the simplest backend available; the AMQP backend::
+2. Define the backend used to store task metadata and return values::
 
         CELERY_RESULT_BACKEND = "amqp"
 
    The AMQP backend is non-persistent by default, and you can only
    fetch the result of a task once (as it's sent as a message).
 
-3. Finally, we list the modules to import, that is, all the modules
-   that contain tasks. This is so Celery knows about what tasks it can
-   be asked to perform.
+   For a list of available backends and related options see
+   :ref:`conf-result-backend`.
+
+3. Finally we list the modules the worker should import.  This includes
+   the modules containing your tasks.
 
    We only have a single task module, :file:`tasks.py`, which we added earlier::
 
@@ -81,9 +82,9 @@ Let's create our :file:`celeryconfig.py`.
 That's it.
 
 There are more options available, like how many processes you want to
-process work in parallel (the :setting:`CELERY_CONCURRENCY` setting), and we
-could use a persistent result store backend, but for now, this should
-do. For all of the options available, see :ref:`configuration`.
+use to process work in parallel (the :setting:`CELERY_CONCURRENCY` setting),
+and we could use a persistent result store backend, but for now, this should
+do.  For all of the options available, see :ref:`configuration`.
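
Putting the three steps together, a complete :file:`celeryconfig.py` would look roughly like this (a sketch using the example values from the steps above; ``BROKER_USER`` is the matching user setting, not shown in the hunk above)::

    BROKER_HOST = "localhost"
    BROKER_PORT = 5672
    BROKER_USER = "myuser"
    BROKER_PASSWORD = "mypassword"
    BROKER_VHOST = "myvhost"

    CELERY_RESULT_BACKEND = "amqp"

    CELERY_IMPORTS = ("tasks", )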
 
 .. note::
 
@@ -92,8 +93,8 @@ do. For all of the options available, see :ref:`configuration`.
 
         $ celeryd -l info -I tasks,handlers
 
-    This can be a single, or a comma separated list of task modules to import when
-    :mod:`~celery.bin.celeryd` starts.
+    This can be a single, or a comma separated list of task modules to import
+    when :program:`celeryd` starts.
 
 
 .. _celerytut-running-celeryd:
@@ -106,17 +107,15 @@ see what's going on in the terminal::
 
     $ celeryd --loglevel=INFO
 
-However, in production you probably want to run the worker in the
-background as a daemon. To do this you need to use to tools provided
-by your platform, or something like `supervisord`_.
+In production you will probably want to run the worker in the
+background as a daemon.  To do this you need to use the tools provided
+by your platform, or something like `supervisord`_ (see :ref:`daemonization`
+for more information).
 
-For a complete listing of the command line options available, use the
-help command::
+For a complete listing of the command line options available, do::
 
     $  celeryd --help
 
-For info on how to run celery as standalone daemon, see :ref:`daemonizing`.
-
 .. _`supervisord`: http://supervisord.org
 
 .. _celerytut-executing-task:
@@ -124,36 +123,31 @@ For info on how to run celery as standalone daemon, see :ref:`daemonizing`.
 Executing the task
 ==================
 
-Whenever we want to execute our task, we can use the
+Whenever we want to execute our task, we use the
 :meth:`~celery.task.base.Task.delay` method of the task class.
 
 This is a handy shortcut to the :meth:`~celery.task.base.Task.apply_async`
-method which gives greater control of the task execution. Read the
-:doc:`Executing Tasks<../userguide/executing>` part of the user guide
-for more information about executing tasks.
+method which gives greater control of the task execution (see
+:ref:`guide-executing`).
 
     >>> from tasks import add
     >>> add.delay(4, 4)
     <AsyncResult: 889143a6-39a2-4e52-837b-d80d33efb22d>
 
 At this point, the task has been sent to the message broker. The message
-broker will hold on to the task until a worker server has successfully
-picked it up.
-
-*Note:* If everything is just hanging when you execute ``delay``, please check
-that RabbitMQ is running, and that the user/password combination does have access to the
-virtual host you configured earlier.
+broker will hold on to the task until a worker server has consumed and
+executed it.
 
 Right now we have to check the worker log files to know what happened
-with the task. This is because we didn't keep the :class:`~celery.result.AsyncResult`
-object returned by :meth:`~celery.task.base.Task.delay`.
+with the task.  This is because we didn't keep the
+:class:`~celery.result.AsyncResult` object returned.
 
-The :class:`~celery.result.AsyncResult` lets us find the state of the task, wait for
-the task to finish, get its return value (or exception + traceback if the task failed),
-and more.
+The :class:`~celery.result.AsyncResult` lets us check the state of the task,
+wait for the task to finish, get its return value or exception/traceback
+if the task failed, and more.
 
-So, let's execute the task again, but this time we'll keep track of the task
-by keeping the :class:`~celery.result.AsyncResult`::
+Let's execute the task again -- but this time we'll keep track of the task
+by holding on to the :class:`~celery.result.AsyncResult`::
 
     >>> result = add.delay(4, 4)
 

+ 7 - 10
docs/includes/introduction.txt

@@ -10,26 +10,23 @@
 .. _celery-synopsis:
 
 Celery is an open source asynchronous task queue/job queue based on
-distributed message passing. It is focused on real-time operation,
+distributed message passing.  It is focused on real-time operation,
 but supports scheduling as well.
 
 The execution units, called tasks, are executed concurrently on one or
-more worker nodes. Tasks can execute asynchronously (in the background) or
+more worker nodes.  Tasks can execute asynchronously (in the background) or
 synchronously (wait until ready).
 
 Celery is already used in production to process millions of tasks a day.
 
 Celery is written in Python, but the protocol can be implemented in any
-language. It can also `operate with other languages using webhooks`_.
+language.  It can also `operate with other languages using webhooks`_.
 
 The recommended message broker is `RabbitMQ`_, but support for `Redis`_ and
 databases (`SQLAlchemy`_) is also available.
 
-Celery can be easily used with Django and Pylons using
-`django-celery`_ and `celery-pylons`_.
-
-You may also be pleased to know that full Django integration exists,
-delivered by the `django-celery`_ package.
+Celery is easy to integrate with Django and Pylons, using
+the `django-celery`_ and `celery-pylons`_ add-on packages.
 
 .. _`RabbitMQ`: http://www.rabbitmq.com/
 .. _`Redis`: http://code.google.com/p/redis/
@@ -51,8 +48,8 @@ This is a high level overview of the architecture.
 
 .. image:: http://cloud.github.com/downloads/ask/celery/Celery-Overview-v4.jpg
 
-The broker pushes tasks to the worker servers.
-A worker server is a networked machine running ``celeryd``. This can be one or
+The broker delivers tasks to the worker servers.
+A worker server is a networked machine running ``celeryd``.  This can be one or
 more machines depending on the workload.
 
 The result of the task can be stored for later retrieval (called its

+ 31 - 31
docs/userguide/monitoring.rst

@@ -11,6 +11,7 @@ Introduction
 ============
 
 There are several tools available to monitor and inspect Celery clusters.
+
 This document describes some of these, as well as
 features related to monitoring, like events and broadcast commands.
 
@@ -29,7 +30,7 @@ celeryctl: Management Utility
 :mod:`~celery.bin.celeryctl` is a command line utility to inspect
 and manage worker nodes (and to some degree tasks).
 
-To list all the commands from the command line do::
+To list all the commands available do::
 
     $ celeryctl help
 
@@ -92,14 +93,6 @@ Commands
 
         $ celeryctl inspect stats
 
-* **inspect diagnose**: Diagnose the pool processes.
-    ::
-
-        $ celeryctl inspect diagnose
-
-    This will verify that the workers pool processes are available
-    to do work.  Note that this will not work if the worker is busy.
-
 * **inspect enable_events**: Enable events
     ::
 
@@ -177,7 +170,7 @@ you should be able to see your workers and the tasks in the admin interface
 
 The admin interface shows tasks, worker nodes, and even
 lets you perform some actions, like revoking and rate limiting tasks,
-and shutting down worker nodes.
+or shutting down worker nodes.
 
 .. _monitoring-django-frequency:
 
@@ -185,7 +178,7 @@ Shutter frequency
 ~~~~~~~~~~~~~~~~~
 
 By default the camera takes a snapshot every second, if this is too frequent
-or you want higher precision then you can change this using the
+or you want to have higher precision, then you can change this using the
 ``--frequency`` argument.  This is a float describing how often, in seconds,
 it should wake up to check if there are any new events::
 
@@ -200,7 +193,7 @@ by appending ``/s``, ``/m`` or ``/h`` to the value.
 Example: ``--maxrate=100/m``, means "hundred writes a minute".
 
 The rate limit is off by default, which means it will take a snapshot
-for every ``--frequency`` seconds. 
+for every ``--frequency`` seconds.
 
 The events also expire after some time, so the database doesn't fill up.
 Successful tasks are deleted after 1 day, failed tasks after 3 days,
@@ -238,8 +231,9 @@ By default an ``sqlite3`` database file named
 user running the monitor.
 
 If you want to store the events in a different database, e.g. MySQL,
-then you can configure the ``DATABASE*`` settings in your Celery
-config module. See http://docs.djangoproject.com/en/dev/ref/settings/#databases.
+then you can configure the ``DATABASE*`` settings directly in your Celery
+config module.  See http://docs.djangoproject.com/en/dev/ref/settings/#databases
+for more information about the database options available.
 
 You will also be asked to create a superuser (and you need to create one
 to be able to log into the admin later)::
@@ -276,9 +270,9 @@ celeryev: Curses Monitor
 .. versionadded:: 2.0
 
 :mod:`~celery.bin.celeryev` is a simple curses monitor displaying
-task and worker history. You can inspect the result and traceback of tasks,
-and it also supports some management commands like rate limiting and shutdown
-of workers.
+task and worker history.  You can inspect the result and traceback of tasks,
+and it also supports some management commands like rate limiting and shutting
+down workers.
 
 .. image:: http://celeryproject.org/img/celeryevshotsm.jpg
 
@@ -304,12 +298,12 @@ celerymon: Web monitor
 
 `celerymon`_ is the ongoing work to create a web monitor.
 It's far from complete yet, and does currently only support
-a JSON API. Help is desperately needed for this project, so if you,
-or someone you knowi, would like to contribute templates, design, code
+a JSON API.  Help is desperately needed for this project, so if you,
+or someone you know would like to contribute templates, design, code
 or help this project in any way, please get in touch!
 
 :Tip: The Django admin monitor can be used even though you're not using
-      Celery with a Django project. See :ref:`monitoring-nodjango`.
+      Celery with a Django project.  See :ref:`monitoring-nodjango`.
 
 .. _`celerymon`: http://github.com/ask/celerymon/
 
@@ -395,8 +389,8 @@ Events
 ======
 
 The worker has the ability to send a message whenever some event
-happens. These events are then captured by tools like ``celerymon`` and 
-``celeryev`` to monitor the cluster.
+happens.  These events are then captured by tools like :program:`celerymon`
+and :program:`celeryev` to monitor the cluster.
 
 .. _monitoring-snapshots:
 
@@ -406,19 +400,20 @@ Snapshots
 .. versionadded: 2.1
 
 Even a single worker can produce a huge amount of events, so storing
-history of events on disk may be very expensive.
+the history of all events on disk may be very expensive.
 
 A sequence of events describes the cluster state in that time period,
 by taking periodic snapshots of this state we can keep all history, but
 still only periodically write it to disk.
 
 To take snapshots you need a Camera class, with this you can define
-what should happen every time the state is captured. You can
-write it to a database, send it by e-mail or something else entirely).
+what should happen every time the state is captured.  You can
+write it to a database, send it by e-mail or something else entirely.
 
-``celeryev`` is then used to take snapshots with the camera,
+:program:`celeryev` is then used to take snapshots with the camera,
 for example if you want to capture state every 2 seconds using the
-camera ``myapp.Camera`` you run ``celeryev`` with the following arguments::
+camera ``myapp.Camera`` you run :program:`celeryev` with the following
+arguments::
 
     $ celeryev -c myapp.Camera --frequency=2.0
 
@@ -428,7 +423,7 @@ camera ``myapp.Camera`` you run ``celeryev`` with the following arguments::
 Custom Camera
 ~~~~~~~~~~~~~
 
-Here is an example camera, dumping the snapshot to the screen:
+Here is an example camera, dumping the snapshot to screen:
 
 .. code-block:: python
 
@@ -436,6 +431,7 @@ Here is an example camera, dumping the snapshot to the screen:
 
     from celery.events.snapshot import Polaroid
 
+
     class DumpCam(Polaroid):
 
         def shutter(self, state):
@@ -447,6 +443,9 @@ Here is an example camera, dumping the snapshot to the screen:
             print("Total: %s events, %s tasks" % (
                 state.event_count, state.task_count))
 
+See the API reference for :mod:`celery.events.state` to read more
+about state objects.
+
 Now you can use this cam with ``celeryev`` by specifying
 it with the ``-c`` option::
 
@@ -494,9 +493,10 @@ Task Events
 * ``task-succeeded(uuid, result, runtime, hostname, timestamp)``
 
     Sent if the task executed successfully.
+
     Runtime is the time it took to execute the task using the pool.
-    (Time starting from the task is sent to the pool, and ending when the
-    pool result handlers callback is called).
+    (Starting from when the task is sent to the worker pool, and ending when the
+    pool result handler callback is called).
 
 * ``task-failed(uuid, exception, traceback, hostname, timestamp)``
 
@@ -505,7 +505,7 @@ Task Events
 * ``task-revoked(uuid)``
 
     Sent if the task has been revoked (Note that this is likely
-    to be sent by more than one worker)
+    to be sent by more than one worker).
 
 * ``task-retried(uuid, exception, traceback, hostname, delay, timestamp)``
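
Relating back to the Custom Camera section above, a hedged sketch of a camera that appends the same counters to a file instead of printing them (the ``Polaroid``/``shutter`` interface and the ``event_count``/``task_count`` attributes are taken from the ``DumpCam`` example; the log path is illustrative):

.. code-block:: python

    from celery.events.snapshot import Polaroid


    class FileCam(Polaroid):

        def shutter(self, state):
            # Append a single line per snapshot.
            logfile = open("/var/log/celery-snapshots.log", "a")
            logfile.write("%s events, %s tasks\n" % (
                state.event_count, state.task_count))
            logfile.close()

It would be started the same way as the example camera above, e.g. ``celeryev -c myapp.FileCam --frequency=2.0``.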
 

+ 22 - 17
docs/userguide/periodic-tasks.rst

@@ -10,15 +10,15 @@
 Introduction
 ============
 
-Celerybeat is a scheduler.  It kicks off tasks at regular intervals,
-which are then executed by worker nodes available in the cluster.
+:program:`celerybeat` is a scheduler.  It kicks off tasks at regular intervals,
+which are then executed by the worker nodes available in the cluster.
 
 By default the entries are taken from the :setting:`CELERYBEAT_SCHEDULE` setting,
 but custom stores can also be used, like storing the entries
 in an SQL database.
 
 You have to ensure only a single scheduler is running for a schedule
-at a time, otherwise you would end up with duplicate tasks. Using
+at a time, otherwise you would end up with duplicate tasks.  Using
 a centralized approach means the schedule does not have to be synchronized,
 and the service can operate without using locks.
 
@@ -28,7 +28,9 @@ Entries
 =======
 
 To schedule a task periodically you have to add an entry to the
-:setting:`CELERYBEAT_SCHEDULE` setting:
+:setting:`CELERYBEAT_SCHEDULE` setting.
+
+Example: Run the ``tasks.add`` task every 30 seconds.
 
 .. code-block:: python
 
@@ -43,11 +45,9 @@ To schedule a task periodically you have to add an entry to the
     }
 
 
-Here we run the ``tasks.add`` task every 30 seconds.
-
-Using a :class:`~datetime.timedelta` means the task will be executed
-30 seconds after ``celerybeat`` starts, and then every 30 seconds
-after the last run. A crontab like schedule also exists, see the section
+Using a :class:`~datetime.timedelta` for the schedule means the task will
+be executed 30 seconds after ``celerybeat`` starts, and then every 30 seconds
+after the last run.  A crontab like schedule also exists, see the section
 on `Crontab schedules`_.
 
 .. _beat-entry-fields:
@@ -65,8 +65,8 @@ Available Fields
 
     This can be the number of seconds as an integer, a
     :class:`~datetime.timedelta`, or a :class:`~celery.schedules.crontab`.
-    You can also define your own custom schedule types, just make sure
-    it supports the :class:`~celery.schedules.schedule` interface.
+    You can also define your own custom schedule types, by extending the
+    interface of :class:`~celery.schedules.schedule`.
 
 * ``args``
 
@@ -90,7 +90,7 @@ Available Fields
     second, minute, hour or day depending on the period of the timedelta.
 
     If ``relative`` is true the frequency is not rounded and will be
-    relative to the time ``celerybeat`` was started.
+    relative to the time when :program:`celerybeat` was started.
 
 .. _beat-crontab:
 
@@ -161,7 +161,7 @@ The syntax of these crontab expressions are very flexible.  Some examples:
 Starting celerybeat
 ===================
 
-To start the ``celerybeat`` service::
+To start the :program:`celerybeat` service::
 
     $ celerybeat
 
@@ -171,12 +171,17 @@ this is convenient if you only intend to use one worker node::
     $ celeryd -B
 
 Celerybeat needs to store the last run times of the tasks in a local database
-file (named ``celerybeat-schedule`` by default), so you need access to
-write to the current directory, or alternatively you can specify a custom
+file (named ``celerybeat-schedule`` by default), so it needs access to
+write in the current directory, or alternatively you can specify a custom
 location for this file::
 
     $ celerybeat -s /home/celery/var/run/celerybeat-schedule
 
+
+.. note::
+
+    To daemonize celerybeat see :ref:`daemonizing`.
+
 .. _beat-custom-schedulers:
 
 Using custom scheduler classes
@@ -187,8 +192,8 @@ argument).  The default scheduler is :class:`celery.beat.PersistentScheduler`,
 which is simply keeping track of the last run times in a local database file
 (a :mod:`shelve`).
 
-``django-celery`` also ships with a scheduler that stores the schedule in a
-database::
+``django-celery`` also ships with a scheduler that stores the schedule in the
+Django database::
 
     $ celerybeat -S djcelery.schedulers.DatabaseScheduler
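
To complement the :class:`~datetime.timedelta` entry shown earlier, here is a hedged sketch of a crontab-based entry, assuming the :class:`~celery.schedules.crontab` class referenced above accepts ``minute`` and ``hour`` arguments:

.. code-block:: python

    from celery.schedules import crontab

    CELERYBEAT_SCHEDULE = {
        # Run tasks.add every morning at 7:30.
        "add-every-morning": {
            "task": "tasks.add",
            "schedule": crontab(hour=7, minute=30),
            "args": (16, 16),
        },
    }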
 

+ 11 - 6
docs/userguide/remote-tasks.rst

@@ -17,8 +17,8 @@ Basics
 If you need to call into another language, framework or similar, you can
 do so by using HTTP callback tasks.
 
-The HTTP callback tasks use GET/POST arguments and a simple JSON response
-to return results. The scheme to call a task is::
+The HTTP callback tasks use GET/POST data to pass arguments and return the
+result as a JSON response.  The scheme to call a task is::
 
     GET http://example.com/mytask/?arg1=a&arg2=b&arg3=c
 
@@ -26,7 +26,10 @@ or using POST::
 
     POST http://example.com/mytask
 
-**Note:** POST data has to be form encoded.
+.. note::
+
+    POST data needs to be form encoded.
+
 Whether to use GET or POST is up to you and your requirements.
 
 The web page should then return a response in the following format
@@ -99,12 +102,14 @@ functionality.
     >>> res.get()
     100
 
-The output of celeryd (or the logfile if you've enabled it) should show the task being processed::
+The output of :program:`celeryd` (or the logfile if enabled) should show the
+task being executed::
 
     [INFO/MainProcess] Task celery.task.http.HttpDispatchTask
             [f2cc8efc-2a14-40cd-85ad-f1c77c94beeb] processed: 100
 
 Since applying tasks can be done via HTTP using the
-``celery.views.apply`` view, executing tasks from other languages is easy.
+``djcelery.views.apply`` view, executing tasks from other languages is easy.
 For an example service exposing tasks via HTTP you should have a look at
-``examples/celery_http_gateway``.
+``examples/celery_http_gateway`` in the Celery distribution:
+    http://github.com/ask/celery/tree/master/examples/celery_http_gateway/
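
The log output above names ``celery.task.http.HttpDispatchTask``; a hedged sketch of dispatching the example URL scheme through it directly, assuming it accepts ``url`` and ``method`` keyword arguments and passes the remaining keyword arguments as GET/POST data::

    >>> from celery.task.http import HttpDispatchTask
    >>> res = HttpDispatchTask.delay(
    ...           url="http://example.com/mytask",
    ...           method="GET", arg1="a", arg2="b", arg3="c")
    >>> res.get()    # the task's return value, decoded from the JSON response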

+ 54 - 48
docs/userguide/routing.rst

@@ -7,8 +7,9 @@
 .. warning::
 
     This document refers to functionality only available in brokers
-    using AMQP. Other brokers may implement some functionality, see their
-    respective documenation for more information, or contact the :ref:`mailing-list`.
+    using AMQP.  Other brokers may implement some functionality, see their
+    respective documentation for more information, or contact the
+    :ref:`mailing-list`.
 
 .. contents::
     :local:
@@ -28,11 +29,11 @@ The simplest way to do routing is to use the
 :setting:`CELERY_CREATE_MISSING_QUEUES` setting (on by default).
 
 With this setting on, a named queue that is not already defined in
-:setting:`CELERY_QUEUES` will be created automatically. This makes it easy to
+:setting:`CELERY_QUEUES` will be created automatically.  This makes it easy to
 perform simple routing tasks.
 
 Say you have two servers, ``x``, and ``y`` that handles regular tasks,
-and one server ``z``, that only handles feed related tasks. You can use this
+and one server ``z``, that only handles feed related tasks.  You can use this
 configuration::
 
     CELERY_ROUTES = {"feed.tasks.import_feed": {"queue": "feeds"}}
@@ -70,8 +71,8 @@ How the queues are defined
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The point with this feature is to hide the complex AMQP protocol for users
-with only basic needs. However  you may still be interested in how these queues
-are defined.
+with only basic needs. However -- you may still be interested in how these queues
+are declared.
 
 A queue named ``"video"`` will be created with the following settings:
 
@@ -170,7 +171,7 @@ just specify a custom exchange and exchange type:
             },
         }
 
-If you're confused about these terms, you should read up on AMQP concepts.
+If you're confused about these terms, you should read up on AMQP.
 
 .. seealso::
 
@@ -193,12 +194,12 @@ AMQP Primer
 Messages
 --------
 
-A message consists of headers and a body. Celery uses headers to store
-the content type of the message and its content encoding. In Celery the
+A message consists of headers and a body.  Celery uses headers to store
+the content type of the message and its content encoding.  The
 content type is usually the serialization format used to serialize the
-message, and the body contains the name of the task to execute, the
+message. The body contains the name of the task to execute, the
 task id (UUID), the arguments to execute it with and some additional
-metadata - like the number of retries and its ETA (if any).
+metadata -- like the number of retries or an ETA.
 
 This is an example task message represented as a Python dictionary:
 
@@ -229,9 +230,10 @@ Exchanges, queues and routing keys.
 -----------------------------------
 
 1. Messages are sent to exchanges.
-2. An exchange routes messages to one or more queues. Several exchange types
-   exists, providing different ways to do routing.
-3. The message waits in the queue until someone consumes from it.
+2. An exchange routes messages to one or more queues.  Several exchange types
+   exist, providing different ways to do routing, or implementing
+   different messaging scenarios.
+3. The message waits in the queue until someone consumes it.
 4. The message is deleted from the queue when it has been acknowledged.
 
 The steps required to send and receive messages are:
@@ -245,7 +247,7 @@ Celery automatically creates the entities necessary for the queues in
 setting is set to :const:`False`).
 
 Here's an example queue configuration with three queues;
-One for video, one for images and finally, one default queue for everything else:
+One for video, one for images and one default queue for everything else:
 
 .. code-block:: python
 
@@ -269,7 +271,7 @@ One for video, one for images and finally, one default queue for everything else
 .. note::
 
     In Celery the ``routing_key`` is the key used to send the message,
-    while ``binding_key`` is the key the queue is bound with. In the AMQP API
+    while ``binding_key`` is the key the queue is bound with.  In the AMQP API
     they are both referred to as the routing key.
 
 .. _amqp-exchange-types:
@@ -279,9 +281,9 @@ Exchange types
 
 The exchange type defines how the messages are routed through the exchange.
 The exchange types defined in the standard are ``direct``, ``topic``,
-``fanout`` and ``headers``. Also non-standard exchange types are available
+``fanout`` and ``headers``.  Also non-standard exchange types are available
 as plugins to RabbitMQ, like the `last-value-cache plug-in`_ by Michael
-Bridgen. 
+Bridgen.
 
 .. _`last-value-cache plug-in`:
     http://github.com/squaremo/rabbitmq-lvc-plugin
@@ -291,17 +293,17 @@ Bridgen.
 Direct exchanges
 ~~~~~~~~~~~~~~~~
 
-Direct exchanges match by exact routing keys, so a queue bound with
-the routing key ``video`` only receives messages with the same routing key.
+Direct exchanges match by exact routing keys, so a queue bound by
+the routing key ``video`` only receives messages with that routing key.
 
 .. _amqp-exchange-type-topic:
 
 Topic exchanges
 ~~~~~~~~~~~~~~~
 
-Topic exchanges matches routing keys using dot-separated words, and can
-include wildcard characters: ``*`` matches a single word, ``#`` matches
-zero or more words.
+Topic exchanges match routing keys using dot-separated words, and the
+wildcard characters: ``*`` (matches a single word), and ``#`` (matches
+zero or more words).
 
 With routing keys like ``usa.news``, ``usa.weather``, ``norway.news`` and
 ``norway.weather``, bindings could be ``*.news`` (all news), ``usa.#`` (all
@@ -320,7 +322,7 @@ Related API commands
     :keyword passive: Passive means the exchange won't be created, but you
         can use this to check if the exchange already exists.
 
-    :keyword durable: Durable exchanges are persistent. That is - they survive
+    :keyword durable: Durable exchanges are persistent.  That is - they survive
         a broker restart.
 
     :keyword auto_delete: This means the queue will be deleted by the broker
@@ -349,10 +351,10 @@ Related API commands
 
 .. note::
 
-    Declaring does not necessarily mean "create". When you declare you
-    *assert* that the entity exists and that it's operable. There is no
+    Declaring does not necessarily mean "create".  When you declare you
+    *assert* that the entity exists and that it's operable.  There is no
     rule as to whom should initially create the exchange/queue/binding,
-    whether consumer or producer. Usually the first one to need it will
+    whether consumer or producer.  Usually the first one to need it will
     be the one to create it.
 
 .. _amqp-api-hands-on:
@@ -360,10 +362,10 @@ Related API commands
 Hands-on with the API
 ---------------------
 
-Celery comes with a tool called ``camqadm`` (short for celery AMQP admin).
-It's used for simple admnistration tasks like creating/deleting queues and
-exchanges, purging queues and sending messages. In short it's for simple
-command-line access to the AMQP API.
+Celery comes with a tool called :program:`camqadm` (short for Celery AMQ Admin).
+It's used for command-line access to the AMQP API, enabling access to
+administration tasks like creating/deleting queues and exchanges, purging
+queues or sending messages.
 
 You can write commands directly in the arguments to ``camqadm``, or just start
 with no arguments to start it in shell-mode::
@@ -373,12 +375,12 @@ with no arguments to start it in shell-mode::
     -> connected.
     1>
 
-Here ``1>`` is the prompt. The number is counting the number of commands you
-have executed. Type ``help`` for a list of commands. It also has
-autocompletion, so you can start typing a command and then hit the
-``tab`` key to show a list of possible matches.
+Here ``1>`` is the prompt.  The number 1 is the number of commands you
+have executed so far.  Type ``help`` for a list of commands available.
+It also supports autocompletion, so you can start typing a command and then
+hit the ``tab`` key to show a list of possible matches.
 
-Now let's create a queue we can send messages to::
+Let's create a queue we can send messages to::
 
     1> exchange.declare testexchange direct
     ok.
@@ -392,13 +394,13 @@ named ``testqueue``.  The queue is bound to the exchange using
 the routing key ``testkey``.
 
 From now on all messages sent to the exchange ``testexchange`` with routing
-key ``testkey`` will be moved to this queue. We can send a message by
+key ``testkey`` will be moved to this queue.  We can send a message by
 using the ``basic.publish`` command::
 
     4> basic.publish "This is a message!" testexchange testkey
     ok.
 
-Now that the message is sent we can retrieve it again. We use the
+Now that the message is sent we can retrieve it again.  We use the
 ``basic.get`` command here, which pops a single message off the queue,
 this command is not recommended for production as it implies polling, any
 real application would declare consumers instead.
@@ -416,12 +418,13 @@ Pop a message off the queue::
 
 
 AMQP uses acknowledgment to signify that a message has been received
-and processed successfully. The message is sent to the next receiver
-if it has not been acknowledged before the client connection is closed.
+and processed successfully.  If the message has not been acknowledged
+and the consumer channel is closed, the message will be delivered to
+another consumer.
 
 Note the delivery tag listed in the structure above; Within a connection channel,
 every received message has a unique delivery tag,
-This tag is used to acknowledge the message. Also note that
+This tag is used to acknowledge the message.  Also note that
 delivery tags are not unique across connections, so in another client
 the delivery tag ``1`` might point to a different message than in this channel.
 
@@ -448,10 +451,10 @@ Routing Tasks
 Defining queues
 ---------------
 
-In Celery the queues are defined by the :setting:`CELERY_QUEUES` setting.
+In Celery the available queues are defined by the :setting:`CELERY_QUEUES` setting.
 
 Here's an example queue configuration with three queues;
-One for video, one for images and finally, one default queue for everything else:
+One for video, one for images and one default queue for everything else:
 
 .. code-block:: python
 
@@ -533,7 +536,7 @@ that queue in :setting:`CELERY_QUEUES`::
          "routing_key": "video.compress"}
 
 
-You install router classes by adding it to the :setting:`CELERY_ROUTES` setting::
+You install router classes by adding them to the :setting:`CELERY_ROUTES` setting::
 
     CELERY_ROUTES = (MyRouter, )
 
@@ -543,11 +546,14 @@ Router classes can also be added by name::
 
 
 For simple task name -> route mappings like the router example above, you can simply
-drop a dict into :setting:`CELERY_ROUTES` to get the same result::
+drop a dict into :setting:`CELERY_ROUTES` to get the same behavior:
+
+.. code-block:: python
 
     CELERY_ROUTES = ({"myapp.tasks.compress_video": {
-                        "queue": "video",
-                        "routing_key": "video.compress"}}, )
+                            "queue": "video",
+                            "routing_key": "video.compress"
+                     }}, )
 
 The routers will then be traversed in order, it will stop at the first router
-returning a value and use that as the final route for the task.
+returning a true value, and use that as the final route for the task.
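
For reference, a hedged sketch of a router class like the ``MyRouter`` referred to above, assuming the router interface is a ``route_for_task`` method that returns either a route dictionary or :const:`None` to pass on to the next router:

.. code-block:: python

    class MyRouter(object):

        def route_for_task(self, task, args=None, kwargs=None):
            # Route the compress_video task to the video queue.
            if task == "myapp.tasks.compress_video":
                return {"queue": "video",
                        "routing_key": "video.compress"}
            return None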

+ 169 - 75
docs/userguide/tasks.rst

@@ -180,7 +180,6 @@ You can also provide the ``countdown`` argument to
                            countdown=60) # override the default and
                                          # - retry in 1 minute
 
-
 .. _task-options:
 
 Task options
@@ -196,7 +195,8 @@ General
     The name the task is registered as.
 
     You can set this name manually, or just use the default which is
-    automatically generated using the module and class name.
+    automatically generated using the module and class name.  See
+    :ref:`task-names`.
 
 .. attribute:: Task.abstract
 
@@ -207,13 +207,13 @@ General
 
     The maximum number of attempted retries before giving up.
     If this exceeds the :exc:`~celery.exceptions.MaxRetriesExceeded`
-    an exception will be raised. *NOTE:* You have to :meth:`retry`
+    an exception will be raised.  *NOTE:* You have to :meth:`retry`
     manually, it's not something that happens automatically.
 
 .. attribute:: Task.default_retry_delay
 
     Default time in seconds before a retry of the task
-    should be executed. Can be either an ``int`` or a ``float``.
+    should be executed.  Can be either :class:`int` or :class:`float`.
     Default is a 3 minute delay.
 
 .. attribute:: Task.rate_limit
@@ -226,19 +226,19 @@ General
 
     The rate limits can be specified in seconds, minutes or hours
     by appending ``"/s"``, ``"/m"`` or ``"/h"`` to the value.
-    Example: ``"100/m"`` (hundred tasks a minute). Default is the
+    Example: ``"100/m"`` (hundred tasks a minute).  Default is the
     :setting:`CELERY_DEFAULT_RATE_LIMIT` setting, which if not specified means
-    rate limiting for tasks is turned off by default.
+    rate limiting for tasks is disabled by default.
 
 .. attribute:: Task.ignore_result
 
-    Don't store task state. This means you can't use the
+    Don't store task state.  Note that this means you can't use
     :class:`~celery.result.AsyncResult` to check if the task is ready,
     or get its return value.
 
 .. attribute:: Task.store_errors_even_if_ignored
 
-    If true, errors will be stored even if the task is configured
+    If :const:`True`, errors will be stored even if the task is configured
     to ignore results.
 
 .. attribute:: Task.send_error_emails
@@ -264,7 +264,7 @@ General
 
 .. attribute:: Task.backend
 
-    The result store backend to use for this task. Defaults to the
+    The result store backend to use for this task.  Defaults to the
     :setting:`CELERY_RESULT_BACKEND` setting.
 
 .. attribute:: Task.acks_late
@@ -286,12 +286,15 @@ General
 
     If :const:`True` the task will report its status as "started"
     when the task is executed by a worker.
-    The default value is ``False`` as the normal behaviour is to not
+    The default value is :const:`False` as the normal behaviour is to not
     report that level of granularity. Tasks are either pending, finished,
-    or waiting to be retried. Having a "started" status can be useful for
+    or waiting to be retried.  Having a "started" status can be useful for
     when there are long running tasks and there is a need to report which
     task is currently running.
 
+    The hostname and pid of the worker executing the task
+    will be available in the state metadata (e.g. ``result.info["pid"]``).
+
     The global default can be overridden by the
     :setting:`CELERY_TRACK_STARTED` setting.
 
@@ -326,6 +329,8 @@ Message and routing options
     However -- If the task is mandatory, an exception will be raised
     instead.
 
+    Not supported by amqplib.
+
 .. attribute:: Task.immediate
 
     Request immediate delivery.  If the task cannot be routed to a
@@ -334,27 +339,115 @@ Message and routing options
     queue the task, but with no guarantee that the task will ever
     be executed.
 
+    Not supported by amqplib.
+
 .. attribute:: Task.priority
 
     The message priority. A number from 0 to 9, where 0 is the
-    highest priority. **Note:** At the time writing this, RabbitMQ did not yet support
-    priorities
+    highest priority.
+
+    Not supported by RabbitMQ.
 
 .. seealso::
 
     :ref:`executing-routing` for more information about message options,
     and :ref:`guide-routing`.
 
+.. _task-names:
+
+Task names
+==========
+
+The task type is identified by the *task name*.
+
+If not provided a name will be automatically generated using the module
+and class name.
+
+For example:
+
+.. code-block:: python
+
+    >>> @task(name="sum-of-two-numbers")
+    >>> def add(x, y):
+    ...     return x + y
+
+    >>> add.name
+    'sum-of-two-numbers'
+
+
+The best practice is to use the module name as a prefix to classify the
+tasks using namespaces.  This way the name won't collide with the name from
+another module:
+
+.. code-block:: python
+
+    >>> @task(name="tasks.add")
+    >>> def add(x, y):
+    ...     return x + y
+
+    >>> add.name
+    'tasks.add'
+
+
+Which is exactly the name that is automatically generated for this
+task if the module is named "tasks.py":
+
+.. code-block:: python
+
+    >>> @task()
+    >>> def add(x, y):
+    ...     return x + y
+
+    >>> add.name
+    'tasks.add'
+
+.. _task-naming-relative-imports:
+
+Automatic naming and relative imports
+-------------------------------------
+
+Relative imports and automatic name generation do not go well together,
+so if you're using relative imports you should set the name explicitly.
+
+For example if the client imports the module "myapp.tasks" as ".tasks", and
+the worker imports the module as "myapp.tasks", the generated names won't match
+and an :exc:`~celery.exceptions.NotRegistered` error will be raised by the worker.
+
+This is also the case when using Django and adding the app as ``project.myapp``::
+
+    INSTALLED_APPS = ("project.myapp", )
+
+The worker will have the tasks registered as "project.myapp.tasks.*", 
+while this is what happens in the client if the module is imported as
+"myapp.tasks":
+
+.. code-block:: python
+
+    >>> from myapp.tasks import add
+    >>> add.name
+    'myapp.tasks.add'
+
+For this reason you should never use "project.app", but rather
+add the project directory to the Python path::
+
+    import os
+    import sys
+    sys.path.append(os.getcwd())
+
+    INSTALLED_APPS = ("myapp", )
+
+This makes more sense from the reusable app perspective anyway.
+
 .. _task-states:
 
 Task States
 ===========
 
-During its lifetime a task will transition through several states,
+During its lifetime a task will transition through several possible states,
 and each state may have arbitrary metadata attached to it.  When a task
-moves into another state the previous state is
-forgotten, but some transitions can be deducted, (e.g. a task now
-in the :state:`FAILED` state, is implied to have, been in the
+moves into a new state the previous state is
+forgotten about, but some transitions can be deduced (e.g. a task now
+in the :state:`FAILED` state is implied to have been in the
 :state:`STARTED` state at some point).
 
 There are also sets of states, like the set of
@@ -362,8 +455,8 @@ There are also sets of states, like the set of
 :state:`ready states <READY_STATES>`.
 
 The client uses the membership of these sets to decide whether
-the exception should be re-raised (:state:`PROPAGATE_STATES`), or if the result can
-be cached (it can if the task is ready).
+the exception should be re-raised (:state:`PROPAGATE_STATES`), or whether
+the result can be cached (it can if the task is ready).
 
 You can also define :ref:`custom-states`.
 
@@ -439,13 +532,12 @@ Custom states
 -------------
 
 You can easily define your own states, all you need is a unique name.
-The name of the state is usually an uppercase string.
-As an example you could have a look at
-:mod:`abortable tasks <~celery.contrib.abortable>` wich defines
-the :state:`ABORTED` state.
+The name of the state is usually an uppercase string.  As an example
+you could have a look at :mod:`abortable tasks <celery.contrib.abortable>`
+which defines its own custom :state:`ABORTED` state.
 
-To set the state of a task you use :meth:`Task.update_state
-<celery.task.base.Task.update_state>`::
+Use :meth:`Task.update_state <celery.task.base.Task.update_state>` to
+update a task's state::
 
     @task
     def upload_files(filenames, **kwargs):
@@ -456,9 +548,9 @@ To set the state of a task you use :meth:`Task.update_state
 
 
 Here we created the state ``"PROGRESS"``, which tells any application
-aware of this state that the task is currently in progress, and where it is
-in the process by having ``current`` and ``total`` counts as part of the
-state metadata. This can then be used to create progressbars or similar.
+aware of this state that the task is currently in progress, and also where
+it is in the process by having ``current`` and ``total`` counts as part of the
+state metadata.  This can then be used to create e.g. progress bars.
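
On the client side, a hedged sketch of reading this custom state back through :class:`~celery.result.AsyncResult`, assuming ``state`` holds the current state name and ``info`` (the alias added in :file:`celery/result.py` above) returns the metadata dict passed to ``update_state``::

    >>> result = upload_files.delay(filenames)
    >>> result.state
    'PROGRESS'
    >>> result.info
    {'current': 23, 'total': 50}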
 
 .. _task-how-they-work:
 
@@ -468,8 +560,8 @@ How it works
 Here comes the technical details, this part isn't something you need to know,
 but you may be interested.
 
-All defined tasks are listed in a registry. The registry contains
-a list of task names and their task classes. You can investigate this registry
+All defined tasks are listed in a registry.  The registry contains
+a list of task names and their task classes.  You can investigate this registry
 yourself:
 
 .. code-block:: python
@@ -488,32 +580,33 @@ yourself:
      'celery.ping':
         <Task: celery.ping (regular)>}
 
-This is the list of tasks built-in to celery. Note that we had to import
-``celery.task`` first for these to show up. This is because the tasks will
+This is the list of tasks built-in to celery.  Note that we had to import
+``celery.task`` first for these to show up.  This is because the tasks will
 only be registered when the module they are defined in is imported.
 
 The default loader imports any modules listed in the
-:setting:`CELERY_IMPORTS` setting. 
+:setting:`CELERY_IMPORTS` setting.
 
 The entity responsible for registering your task in the registry is a
-meta class, :class:`~celery.task.base.TaskType`. This is the default
-meta class for :class:`~celery.task.base.Task`. If you want to register
-your task manually you can set the :attr:`~celery.task.base.Task.abstract`
-attribute:
+meta class, :class:`~celery.task.base.TaskType`.  This is the default
+meta class for :class:`~celery.task.base.Task`.
+
+If you want to register your task manually you can mark the
+task as :attr:`~celery.task.base.Task.abstract`:
 
 .. code-block:: python
 
     class MyTask(Task):
         abstract = True
 
-This way the task won't be registered, but any task subclassing it will.
+This way the task won't be registered, but any task subclassing it will be.
 
-When tasks are sent, we don't send the function code, just the name
-of the task. When the worker receives the message it can just look it up in
-the task registry to find the execution code.
+When tasks are sent, we don't send any actual function code, just the name
+of the task to execute.  When the worker then receives the message it can look
+up the name in its task registry to find the execution code.
 
 This means that your workers should always be updated with the same software
-as the client. This is a drawback, but the alternative is a technical
+as the client.  This is a drawback, but the alternative is a technical
 challenge that has yet to be solved.
 
 .. _task-best-practices:
@@ -536,8 +629,8 @@ wastes time and resources.
     def mytask(...)
         something()
 
-Results can even be disabled globally using the
-:setting:`CELERY_IGNORE_RESULT` setting.
+Results can even be disabled globally using the :setting:`CELERY_IGNORE_RESULT`
+setting.
 
 .. _task-disable-rate-limits:
 
@@ -545,7 +638,7 @@ Disable rate limits if they're not used
 ---------------------------------------
 
 Disabling rate limits altogether is recommended if you don't have
-any tasks using them. This is because the rate limit subsystem introduces
+any tasks using them.  This is because the rate limit subsystem introduces
 quite a lot of complexity.
 
 Set the :setting:`CELERY_DISABLE_RATE_LIMITS` setting to globally disable
@@ -565,8 +658,7 @@ and may even cause a deadlock if the worker pool is exhausted.
 
 Make your design asynchronous instead, for example by using *callbacks*.
 
-
-Bad:
+**Bad**:
 
 .. code-block:: python
 
@@ -589,7 +681,7 @@ Bad:
         return PageInfo.objects.create(url, info)
 
 
-Good:
+**Good**:
 
 .. code-block:: python
 
@@ -620,7 +712,7 @@ Good:
 
 
 We use :class:`~celery.task.sets.subtask` here to safely pass
-around the callback task. :class:`~celery.task.sets.subtask` is a 
+around the callback task.  :class:`~celery.task.sets.subtask` is a
 subclass of dict used to wrap the arguments and execution options
 for a single task invocation.
 
@@ -640,8 +732,8 @@ Granularity
 -----------
 
 The task granularity is the amount of computation needed by each subtask.
-It's generally better to split your problem up in many small tasks, than
-having a few long running ones.
+In general it is better to split the problem up into many small tasks, than
+have a few long running tasks.
 
 With smaller tasks you can process more tasks in parallel and the tasks
 won't run long enough to block the worker from processing other waiting tasks.
@@ -663,14 +755,14 @@ Data locality
 -------------
 
 The worker processing the task should be as close to the data as
-possible. The best would be to have a copy in memory, the worst being a
+possible.  The best would be to have a copy in memory, the worst would be a
 full transfer from another continent.
 
 If the data is far away, you could try to run another worker at location, or
-if that's not possible, cache often used data, or preload data you know
+if that's not possible - cache often used data, or preload data you know
 is going to be used.
 
-The easiest way to share data between workers is to use a distributed caching
+The easiest way to share data between workers is to use a distributed cache
 system, like `memcached`_.
 
 .. seealso::
@@ -688,24 +780,25 @@ system, like `memcached`_.
 State
 -----
 
-Since celery is a distributed system, you can't know in which process, or even
-on what machine the task will run. Indeed you can't even know if the task will
-run in a timely manner, so please be wary of the state you pass on to tasks.
+Since celery is a distributed system, you can't know in which process, or
+on what machine the task will be executed.  You can't even know if the task will
+run in a timely manner.
 
 The ancient async sayings tells us that “asserting the world is the
 responsibility of the task”.  What this means is that the world view may
 have changed since the task was requested, so the task is responsible for
-making sure the world is how it should be.  For example if you have a task
+making sure the world is how it should be.  If you have a task
 that reindexes a search engine, and the search engine should only be reindexed
 at maximum every 5 minutes, then it must be the tasks responsibility to assert
 that, not the callers.
 
-Another gotcha is Django model objects. They shouldn't be passed on as arguments
-to task classes, it's almost always better to re-fetch the object from the
-database instead, as there are possible race conditions involved.
+Another gotcha is Django model objects.  They shouldn't be passed on as arguments
+to tasks.  It's almost always better to re-fetch the object from the
+database when the task is running instead, as using old data may lead
+to race conditions.
 
 Imagine the following scenario where you have an article and a task
-that automatically expands some abbreviations in it.
+that automatically expands some abbreviations in it:
 
 .. code-block:: python
 
@@ -724,10 +817,10 @@ clicks on a button that initiates the abbreviation task.
     >>> article = Article.objects.get(id=102)
     >>> expand_abbreviations.delay(model_object)
 
-Now, the queue is very busy, so the task won't be run for another 2 minutes,
-in the meantime another author makes some changes to the article,
+Now, the queue is very busy, so the task won't be run for another 2 minutes.
+In the meantime another author makes changes to the article, so
 when the task is finally run, the body of the article is reverted to the old
-version, because the task had the old body in its argument.
+version because the task had the old body in its argument.
 
 Fixing the race condition is easy, just use the article id instead, and
 re-fetch the article in the task body:
@@ -750,7 +843,7 @@ messages may be expensive.
 Database transactions
 ---------------------
 
-Let's look at another example:
+Let's have a look at another example:
 
 .. code-block:: python
 
@@ -762,16 +855,16 @@ Let's look at another example:
         expand_abbreviations.delay(article.pk)
 
 This is a Django view creating an article object in the database,
-then passing its primary key to a task. It uses the `commit_on_success`
+then passing the primary key to a task.  It uses the `commit_on_success`
 decorator, which will commit the transaction when the view returns, or
 roll back if the view raises an exception.
 
 There is a race condition if the task starts executing
-before the transaction has been committed: the database object does not exist
+before the transaction has been committed; the database object does not exist
 yet!
 
-The solution is to **always commit transactions before applying tasks
-that depends on state from the current transaction**:
+The solution is to *always commit transactions before sending tasks
+depending on state from the current transaction*:
 
 .. code-block:: python
 
@@ -792,11 +885,11 @@ Example
 =======
 
 Let's take a real world example; A blog where comments posted needs to be
-filtered for spam. When the comment is created, the spam filter runs in the
+filtered for spam.  When the comment is created, the spam filter runs in the
 background, so the user doesn't have to wait for it to finish.
 
 We have a Django blog application allowing comments
-on blog posts. We'll describe parts of the models/views and tasks for this
+on blog posts.  We'll describe parts of the models/views and tasks for this
 application.
 
 blog/models.py
@@ -872,11 +965,11 @@ blog/views.py
 
 To filter spam in comments we use `Akismet`_, the service
 used to filter spam in comments posted to the free weblog platform
-`Wordpress`. `Akismet`_ is free for personal use, but for commercial use you
-need to pay. You have to sign up to their service to get an API key.
+`Wordpress`.  `Akismet`_ is free for personal use, but for commercial use you
+need to pay.  You have to sign up to their service to get an API key.
 
 To make API calls to `Akismet`_ we use the `akismet.py`_ library written by
-Michael Foord.
+`Michael Foord`_.
 
 .. _task-example-blog-tasks:
 
@@ -918,3 +1011,4 @@ blog/tasks.py
 
 .. _`Akismet`: http://akismet.com/faq/
 .. _`akismet.py`: http://www.voidspace.org.uk/downloads/akismet.py
+.. _`Michael Foord`: http://www.voidspace.org.uk/

+ 7 - 5
docs/userguide/tasksets.rst

@@ -14,16 +14,16 @@ Subtasks
 
 .. versionadded:: 2.0
 
-The :class:`~celery.task.sets.subtask` class is used to wrap the arguments and
+The :class:`~celery.task.sets.subtask` type is used to wrap the arguments and
 execution options for a single task invocation::
 
     subtask(task_name_or_cls, args, kwargs, options)
 
-For convenience every task also has a shortcut to create subtask instances::
+For convenience every task also has a shortcut to create subtasks::
 
     task.subtask(args, kwargs, options)
 
-:class:`~celery.task.sets.subtask` is actually a subclass of :class:`dict`,
+:class:`~celery.task.sets.subtask` is actually a :class:`dict` subclass,
 which means it can be serialized with JSON or other encodings that doesn't
 support complex Python objects.
 
@@ -53,14 +53,14 @@ takes the result as an argument::
             subtask(callback).delay(result)
         return result
 
-See? :class:`~celery.task.sets.subtask` also knows how it should be applied,
+:class:`~celery.task.sets.subtask` also knows how it should be applied,
 asynchronously by :meth:`~celery.task.sets.subtask.delay`, and
 eagerly by :meth:`~celery.task.sets.subtask.apply`.
 
 The best thing is that any arguments you add to ``subtask.delay``,
 will be prepended to the arguments specified by the subtask itself!
 
-So if you have the subtask::
+If you have the subtask::
 
     >>> add.subtask(args=(10, ))
 
@@ -68,6 +68,8 @@ So if you have the subtask::
 
     >>> add.apply_async(args=(result, 10))
 
+...
+
 Now let's execute our new ``add`` task with a callback::
 
     >>> add.delay(2, 2, callback=add.subtask((8, )))
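
Tracing the call above under the rules just described (a sketch, assuming the ``add`` task with the ``callback`` argument defined earlier in this chapter): the first task computes ``add(2, 2) == 4``, and the callback is then applied as ``add.apply_async(args=(4, 8))``, so the callback task would return ``12`` while the original result stays ``4``::

    >>> result = add.delay(2, 2, callback=add.subtask((8, )))
    >>> result.get()    # value of the first add; the callback runs as a separate task
    4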

+ 34 - 26
docs/userguide/workers.rst

@@ -17,15 +17,15 @@ You can start celeryd to run in the foreground by executing the command::
     $ celeryd --loglevel=INFO
 
 You probably want to use a daemonization tool to start
-``celeryd`` in the background. See :ref:`daemonizing` for help
+``celeryd`` in the background.  See :ref:`daemonizing` for help
 using ``celeryd`` with popular daemonization tools.
 
 For a full list of available command line options see
-:mod:`~celery.bin.celeryd`, or simply execute the command::
+:mod:`~celery.bin.celeryd`, or simply do::
 
     $ celeryd --help
 
-You can also start multiple celeryd's on the same machine. If you do so
+You can also start multiple workers on the same machine. If you do so
 be sure to give a unique name to each individual worker by specifying a
 hostname with the ``--hostname|-n`` argument::
 
@@ -40,8 +40,8 @@ Stopping the worker
 
 Shutdown should be accomplished using the :sig:`TERM` signal.
 
-When shutdown is initiated the worker will finish any tasks it's currently
-executing before it terminates, so if these tasks are important you should
+When shutdown is initiated the worker will finish all currently executing
+tasks before it actually terminates, so if these tasks are important you should
 wait for it to finish before doing anything drastic (like sending the :sig:`KILL`
 signal).
 
@@ -51,8 +51,8 @@ force terminate the worker, but be aware that currently executing tasks will
 be lost (unless the tasks have the :attr:`~celery.task.base.Task.acks_late`
 option set).
 
-Also, since the :sig:`KILL` signal can't be catched by processes the worker will
-not be able to reap its children so make sure you do it manually. This
+Also as processes can't override the :sig:`KILL` signal, the worker will
+not be able to reap its children, so make sure to do so manually.  This
 command usually does the trick::
 
     $ ps auxww | grep celeryd | awk '{print $2}' | xargs kill -9
@@ -75,16 +75,16 @@ arguments as it was started with.
 Concurrency
 ===========
 
-Multiprocessing is used to perform concurrent execution of tasks. The number
+Multiprocessing is used to perform concurrent execution of tasks.  The number
 of worker processes can be changed using the ``--concurrency`` argument and
-defaults to the number of CPUs available.
+defaults to the number of CPUs available on the machine.
 
 More worker processes are usually better, but there's a cut-off point where
 adding more processes affects performance in negative ways.
 There is even some evidence to support that having multiple celeryd's running,
-may perform better than having a single worker. For example 3 celeryd's with
-10 worker processes each, but you need to experiment to find the values that
-works best for you as this varies based on application, work load, task
+may perform better than having a single worker.  For example 3 celeryd's with
+10 worker processes each.  You need to experiment to find the numbers that
+work best for you, as this varies based on application, work load, task
 run times and other factors.
 
 .. _worker-persistent-revokes:
@@ -98,7 +98,7 @@ the workers then keep a list of revoked tasks in memory.
 If you want tasks to remain revoked after worker restart you need to
 specify a file for these to be stored in, either by using the ``--statedb``
 argument to :mod:`~celery.bin.celeryd` or the :setting:`CELERYD_STATE_DB`
-setting. See :setting:`CELERYD_STATE_DB` for more information.
+setting.  See :setting:`CELERYD_STATE_DB` for more information.
 
 .. _worker-time-limits:
 
@@ -109,12 +109,12 @@ Time limits
 
 A single task can potentially run forever, if you have lots of tasks
 waiting for some event that will never happen you will block the worker
-from processing new tasks indefinitely. The best way to defend against
+from processing new tasks indefinitely.  The best way to defend against
 this scenario happening is enabling time limits.
 
 The time limit (``--time-limit``) is the maximum number of seconds a task
 may run before the process executing it is terminated and replaced by a
-new process. You can also enable a soft time limit (``--soft-time-limit``),
+new process.  You can also enable a soft time limit (``--soft-time-limit``),
 this raises an exception the task can catch to clean up before the hard
 time limit kills it:
 
@@ -161,23 +161,29 @@ Remote control
 .. versionadded:: 2.0
 
 Workers have the ability to be remote controlled using a high-priority
-broadcast message queue. The commands can be directed to all, or a specific
+broadcast message queue.  The commands can be directed to all, or a specific
 list of workers.
 
-Commands can also have replies. The client can then wait for and collect
-those replies, but since there's no central authority to know how many
+Commands can also have replies.  The client can then wait for and collect
+those replies.  Since there's no central authority to know how many
 workers are available in the cluster, there is also no way to estimate
-how many workers may send a reply. Therefore the client has a configurable
-timeout — the deadline in seconds for replies to arrive in. This timeout
-defaults to one second. If the worker doesn't reply within the deadline
+how many workers may send a reply, so the client has a configurable
+timeout — the deadline in seconds for replies to arrive in.  This timeout
+defaults to one second.  If the worker doesn't reply within the deadline
 it doesn't necessarily mean the worker didn't reply, or worse is dead, but
 may simply be caused by network latency or the worker being slow at processing
 commands, so adjust the timeout accordingly.
 
 In addition to timeouts, the client can specify the maximum number
-of replies to wait for. If a destination is specified this limit is set
+of replies to wait for.  If a destination is specified, this limit is set
 to the number of destination hosts.
 
+.. seealso::
+
+    The :program:`celeryctl` program is used to execute remote control
+    commands from the commandline.  It supports all of the commands
+    listed below.  See :ref:`monitoring-celeryctl` for more information.
+
 .. _worker-broadcast-fun:
 
 The :func:`~celery.task.control.broadcast` function.
@@ -284,8 +290,10 @@ Enable/disable events
 ---------------------
 
 You can enable/disable events by using the ``enable_events``,
-``disable_events`` commands. This is useful to temporarily monitor
-a worker using celeryev/celerymon.
+``disable_events`` commands.  This is useful to temporarily monitor
+a worker using :program:`celeryev`/:program:`celerymon`.
+
+.. code-block:: python
 
     >>> broadcast("enable_events")
     >>> broadcast("disable_events")
@@ -324,8 +332,8 @@ then import them using the :setting:`CELERY_IMPORTS` setting::
 Inspecting workers
 ==================
 
-:class:`celery.task.control.inspect` lets you inspect running workers. It uses
-remote control commands under the hood.
+:class:`celery.task.control.inspect` lets you inspect running workers.  It
+uses remote control commands under the hood.
 
 .. code-block:: python
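
    # A hedged sketch (continuing the code block opened above): the inspect()
    # constructor comes from celery.task.control as stated in the text, and the
    # stats command name is assumed from "celeryctl inspect stats" in the
    # monitoring guide.
    >>> from celery.task.control import inspect
    >>> i = inspect()     # by default, inspect all available workers
    >>> i.stats()         # same information as "celeryctl inspect stats"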