Browse source

Which, which, which

Ask Solem 8 years ago
parent
commit
2b2e4b27ce
42 changed files with 250 additions and 232 deletions
  1. celery/app/amqp.py (+1 -1)
  2. celery/app/base.py (+6 -6)
  3. celery/app/control.py (+2 -2)
  4. celery/app/task.py (+4 -5)
  5. celery/canvas.py (+5 -5)
  6. celery/concurrency/asynpool.py (+6 -4)
  7. celery/contrib/abortable.py (+1 -1)
  8. celery/contrib/migrate.py (+3 -3)
  9. celery/contrib/rdb.py (+2 -2)
  10. celery/platforms.py (+5 -5)
  11. celery/schedules.py (+1 -1)
  12. celery/utils/collections.py (+1 -1)
  13. celery/utils/timeutils.py (+1 -1)
  14. celery/worker/consumer/consumer.py (+2 -2)
  15. celery/worker/request.py (+1 -1)
  16. docs/THANKS (+1 -1)
  17. docs/contributing.rst (+1 -1)
  18. docs/django/first-steps-with-django.rst (+1 -1)
  19. docs/faq.rst (+2 -2)
  20. docs/getting-started/brokers/rabbitmq.rst (+2 -2)
  21. docs/getting-started/brokers/sqs.rst (+3 -3)
  22. docs/getting-started/first-steps-with-celery.rst (+19 -14)
  23. docs/getting-started/introduction.rst (+3 -3)
  24. docs/getting-started/next-steps.rst (+15 -14)
  25. docs/internals/app-overview.rst (+1 -1)
  26. docs/internals/deprecation.rst (+2 -2)
  27. docs/internals/protocol.rst (+3 -3)
  28. docs/userguide/application.rst (+2 -2)
  29. docs/userguide/calling.rst (+13 -13)
  30. docs/userguide/canvas.rst (+24 -23)
  31. docs/userguide/concurrency/eventlet.rst (+1 -1)
  32. docs/userguide/configuration.rst (+23 -17)
  33. docs/userguide/daemonizing.rst (+9 -8)
  34. docs/userguide/extending.rst (+10 -11)
  35. docs/userguide/monitoring.rst (+1 -1)
  36. docs/userguide/optimizing.rst (+4 -4)
  37. docs/userguide/periodic-tasks.rst (+7 -7)
  38. docs/userguide/routing.rst (+7 -4)
  39. docs/userguide/signals.rst (+4 -4)
  40. docs/userguide/tasks.rst (+29 -29)
  41. docs/userguide/workers.rst (+5 -5)
  42. docs/whatsnew-4.0.rst (+17 -16)

+ 1 - 1
celery/app/amqp.py

@@ -228,7 +228,7 @@ class AMQP(object):
 
     # Exchange class/function used when defining automatic queues.
     # E.g. you can use ``autoexchange = lambda n: None`` to use the
-    # AMQP default exchange, which is a shortcut to bypass routing
+    # AMQP default exchange: a shortcut to bypass routing
     # and instead send directly to the queue named in the routing key.
     autoexchange = None
 

+ 6 - 6
celery/app/base.py

@@ -88,7 +88,7 @@ def _after_fork_cleanup_app(app):
 
 class PendingConfiguration(UserDict, AttributeDictMixin):
     # `app.conf` will be of this type before being explicitly configured,
-    # which means the app can keep any configuration set directly
+    # meaning the app can keep any configuration set directly
     # on `app.conf` before the `app.config_from_object` call.
     #
     # accessing any key will finalize the configuration,
@@ -216,7 +216,7 @@ class Celery(object):
             self._tasks = TaskRegistry(self._tasks or {})
 
         # If the class defines a custom __reduce_args__ we need to use
-        # the old way of pickling apps, which is pickling a list of
+        # the old way of pickling apps: pickling a list of
         # args instead of the new way that pickles a dict of keywords.
         self._using_v1_reduce = app_has_custom(self, '__reduce_args__')
 
@@ -284,8 +284,8 @@ class Celery(object):
     def close(self):
         """Clean up after the application.
 
-        Only necessary for dynamically created apps for which you can
-        use the :keyword:`with` statement instead
+        Only necessary for dynamically created apps, and you should
+        probably use the :keyword:`with` statement instead.
 
         Example:
             >>> with Celery(set_as_current=False) as app:
@@ -575,8 +575,8 @@ class Celery(object):
                 This argument may also be a callable, in which case the
                 value returned is used (for lazy evaluation).
             related_name (str): The name of the module to find.  Defaults
-                to "tasks", which means it look for "module.tasks" for every
-                module in ``packages``.
+                to "tasks": meaning "look for 'module.tasks' for every
+                module in ``packages``."
             force (bool): By default this call is lazy so that the actual
                 auto-discovery won't happen until an application imports
                 the default modules.  Forcing will cause the auto-discovery

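A sketch of the ``related_name`` default described in the docstring above (the package names are made up): ``autodiscover_tasks`` looks for a ``tasks`` submodule inside each listed package.

.. code-block:: python

    from celery import Celery

    app = Celery('proj', broker='amqp://localhost')

    # With the default related_name='tasks' this looks for
    # proj.orders.tasks and proj.users.tasks.
    app.autodiscover_tasks(['proj.orders', 'proj.users'])

    # force=True skips the lazy behavior and scans immediately.
    app.autodiscover_tasks(['proj.orders'], related_name='tasks', force=True)
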
+ 2 - 2
celery/app/control.py

@@ -79,8 +79,8 @@ class Inspect(object):
 
     def active(self, safe=None):
         # safe is ignored since 4.0
-        # as we now have argsrepr/kwargsrepr which means no objects
-        # will need to be serialized.
+        # as no objects will need serialization now that we
+        # have argsrepr/kwargsrepr.
         return self._request('active')
 
     def scheduled(self, safe=None):
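
A short sketch of the inspect API these methods belong to (it assumes at least one running worker to answer the broadcast):

.. code-block:: python

    from celery import Celery

    app = Celery('proj', broker='amqp://localhost')

    inspector = app.control.inspect()   # broadcasts to all workers
    print(inspector.active())           # tasks currently being executed
    print(inspector.scheduled())        # ETA/countdown tasks not yet due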

+ 4 - 5
celery/app/task.py

@@ -213,7 +213,7 @@ class Task(object):
     #: finished, or waiting to be retried.
     #:
     #: Having a 'started' status can be useful for when there are long
-    #: running tasks and there's a need to report which task is currently
    #: running tasks and there's a need to report what task is currently
     #: running.
     #:
     #: The application default can be overridden using the
@@ -221,12 +221,11 @@ class Task(object):
     track_started = None
 
     #: When enabled messages for this task will be acknowledged **after**
-    #: the task has been executed, and not *just before* which is the
-    #: default behavior.
+    #: the task has been executed, and not *just before* (the
+    #: default behavior).
     #:
     #: Please note that this means the task may be executed twice if the
-    #: worker crashes mid execution (which may be acceptable for some
-    #: applications).
+    #: worker crashes mid execution.
     #:
     #: The application default can be overridden with the
     #: :setting:`task_acks_late` setting.
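
A hedged sketch of the two attributes documented above (the app and task names are made up); with late acknowledgment the task body should be idempotent, since redelivery after a crash is possible:

.. code-block:: python

    from celery import Celery

    app = Celery('proj', broker='amqp://localhost')

    @app.task(acks_late=True, track_started=True)
    def import_feed(url):
        # May run more than once if the worker crashes mid-execution,
        # so keep the body idempotent.
        ...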

+ 5 - 5
celery/canvas.py

@@ -130,7 +130,7 @@ class Signature(dict):
 
     Signatures can also be created from tasks:
 
-    - Using the ``.signature()`` method which has the same signature
+    - Using the ``.signature()`` method that has the same signature
       as ``Task.apply_async``:
 
         .. code-block:: pycon
@@ -512,8 +512,8 @@ class chain(Signature):
 
     Note:
         If called with only one argument, then that argument must
-        be an iterable of tasks to chain, which means you can
-        use this with a generator expression.
+        be an iterable of tasks to chain: this allows us
+        to use generator expressions.
 
     Example:
         This is effectively :math:`((2 + 2) + 4)`:
@@ -853,8 +853,8 @@ class group(Signature):
 
     Note:
         If only one argument is passed, and that argument is an iterable
-        then that'll be used as the list of tasks instead, which
-        means you can use ``group`` with generator expressions.
+        then that'll be used as the list of tasks instead: this
+        allows us to use ``group`` with generator expressions.
 
     Example:
         >>> lazy_group = group([add.s(2, 2), add.s(4, 4)])
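
A pycon sketch of the single-iterable form both notes describe (it assumes an ``add`` task):

.. code-block:: pycon

    >>> from celery import group
    >>> # One iterable argument, so a generator expression works:
    >>> res = group(add.s(i, i) for i in range(5))()
    >>> res.get()
    [0, 2, 4, 6, 8]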

+ 6 - 4
celery/concurrency/asynpool.py

@@ -331,7 +331,7 @@ class ResultHandler(_pool.ResultHandler):
             proc = process_index[fd]
         except KeyError:
             # process already found terminated
-            # which means its outqueue has already been processed
+            # this means its outqueue has already been processed
             # by the worker lost handler.
             return remove(fd)
 
@@ -1045,8 +1045,10 @@ class AsynPool(_pool.Pool):
 
     def on_process_alive(self, pid):
         """Handler called when the :const:`WORKER_UP` message is received
-        from a child process, which marks the process as ready
-        to receive work."""
+        from a child process.
+
+        Marks the process as ready to receive work.
+        """
         try:
             proc = next(w for w in self._pool if w.pid == pid)
         except StopIteration:
@@ -1142,7 +1144,7 @@ class AsynPool(_pool.Pool):
             raise ValueError(proc)
 
     def _setup_queues(self):
-        # this is only used by the original pool which uses a shared
+        # this is only used by the original pool that uses a shared
         # queue for all processes.
 
         # these attributes makes no sense for us, but we'll still

+ 1 - 1
celery/contrib/abortable.py

@@ -111,7 +111,7 @@ class AbortableAsyncResult(AsyncResult):
     """Represents a abortable result.
 
     Specifically, this gives the `AsyncResult` a :meth:`abort()` method,
-    which sets the state of the underlying Task to `'ABORTED'`.
+    that sets the state of the underlying Task to `'ABORTED'`.
     """
 
     def is_aborted(self):

+ 3 - 3
celery/contrib/migrate.py

@@ -125,7 +125,7 @@ def move(predicate, connection=None, exchange=None, routing_key=None,
     """Find tasks by filtering them and move the tasks to a new queue.
 
     Arguments:
-        predicate (Callable): Filter function used to decide which messages
+        predicate (Callable): Filter function used to decide the messages
             to move.  Must accept the standard signature of ``(body, message)``
             used by Kombu consumer callbacks.  If the predicate wants the
             message to be moved it must return either:
@@ -134,11 +134,11 @@ def move(predicate, connection=None, exchange=None, routing_key=None,
 
                 2) a :class:`~kombu.entity.Queue` instance, or
 
-                3) any other true value which means the specified
+                3) any other true value, meaning the specified
                     ``exchange`` and ``routing_key`` arguments will be used.
         connection (kombu.Connection): Custom connection to use.
         source: List[Union[str, kombu.Queue]]: Optional list of source
-            queues to use instead of the default (which is the queues
+            queues to use instead of the default (queues
             in :setting:`task_queues`).  This list can also contain
             :class:`~kombu.entity.Queue` instances.
         exchange (str, kombu.Exchange): Default destination exchange.

+ 2 - 2
celery/contrib/rdb.py

@@ -29,8 +29,8 @@ Environment Variables
 ``CELERY_RDB_HOST``
 -------------------
 
-    Hostname to bind to.  Default is '127.0.01', which means the socket
-    will only be accessible from the local host.
+    Hostname to bind to.  Default is '127.0.0.1' (only accessible from
+    localhost).
 
 .. envvar:: CELERY_RDB_PORT
 

+ 5 - 5
celery/platforms.py

@@ -79,7 +79,7 @@ User information: uid={uid} euid={euid} gid={gid} egid={egid}
 """
 
 ROOT_DISCOURAGED = """\
-You're running the worker with superuser privileges, which is
+You're running the worker with superuser privileges: this is
 absolutely not recommended!
 
 Please specify a different user using the -u option.
@@ -127,8 +127,8 @@ class Pidfile(object):
 
     See Also:
         Best practice is to not use this directly but rather use
-        the :func:`create_pidlock` function instead,
-        which is more convenient and also removes stale pidfiles (when
+        the :func:`create_pidlock` function instead:
+        it's more convenient and also removes stale pidfiles (when
         the process holding the lock is no longer running).
     """
 
@@ -481,7 +481,7 @@ def setgroups(groups):
 
 
 def initgroups(uid, gid):
-    """Compat version of :func:`os.initgroups` which was first
+    """Compat version of :func:`os.initgroups` that was first
     added to Python 2.7."""
     if not pwd:  # pragma: no cover
         return
@@ -725,7 +725,7 @@ def get_errno_name(n):
 def ignore_errno(*errnos, **kwargs):
     """Context manager to ignore specific POSIX error codes.
 
-    Takes a list of error codes to ignore, which can be either
+    Takes a list of error codes to ignore: each can be either
     the name of the code, or the code integer itself::
 
         >>> with ignore_errno('ENOENT'):

+ 1 - 1
celery/schedules.py

@@ -105,7 +105,7 @@ class schedule(object):
         it does not need to be accurate but will influence the precision
         of your schedule.  You must also keep in mind
         the value of :setting:`beat_max_loop_interval`,
-        which decides the maximum number of seconds the scheduler can
+        as it decides the maximum number of seconds the scheduler can
         sleep between re-checking the periodic task intervals.  So if you
         have a task that changes schedule at run-time then your next_run_at
         check will decide how long it will take before a change to the

+ 1 - 1
celery/utils/collections.py

@@ -426,7 +426,7 @@ class LimitedSet(object):
     ``maxlen`` is enforced at all times, so if the limit is reached
     we'll also remove non-expired items.
 
-    You can also configure ``minlen``, which is the minimal residual size
+    You can also configure ``minlen``: this is the minimal residual size
     of the set.
 
     All arguments are optional, and no limits are enabled by default.

+ 1 - 1
celery/utils/timeutils.py

@@ -172,7 +172,7 @@ def delta_resolution(dt, delta):
     :class:`~datetime.datetime` will be rounded to the nearest days,
     if the :class:`~datetime.timedelta` is in hours the
     :class:`~datetime.datetime` will be rounded to the nearest hour,
-    and so on until seconds which will just return the original
+    and so on until seconds, which will just return the original
     :class:`~datetime.datetime`.
     """
     delta = max(delta.total_seconds(), 0)

+ 2 - 2
celery/worker/consumer/consumer.py

@@ -242,8 +242,8 @@ class Consumer(object):
 
         Note:
             Currently pool grow operations will end up with an offset
-            of +1 if the initial size of the pool was 0 (which could
-            be the case with old deprecated autoscale option, may consider
+            of +1 if the initial size of the pool was 0 (this could
+            be the case with the old deprecated autoscale option, may consider
             removing this now that it's no longer supported).
         """
         num_processes = self.pool.num_processes

+ 1 - 1
celery/worker/request.py

@@ -1,5 +1,5 @@
 # -*- coding: utf-8 -*-
-"""This module defines the :class:`Request` class, which specifies
+"""This module defines the :class:`Request` class, that specifies
 how tasks are executed."""
 from __future__ import absolute_import, unicode_literals
 

+ 1 - 1
docs/THANKS

@@ -1,6 +1,6 @@
 Thanks to Rune Halvorsen <runeh@opera.com> for the name.
 Thanks to Anton Tsigularov <antont@opera.com> for the previous name (crunchy)
-    which we had to abandon because of an existing project with that name.
+    that we had to abandon because of an existing project with that name.
 Thanks to Armin Ronacher for the Sphinx theme.
 Thanks to Brian K. Jones for bunny.py (https://github.com/bkjones/bunny), the
     tool that inspired 'celery amqp'.

+ 1 - 1
docs/contributing.rst

@@ -728,7 +728,7 @@ is following the conventions.
         set textwidth=78
 
   If adhering to this limit makes the code less readable, you have one more
-  character to go on, which means 78 is a soft limit, and 79 is the hard
+  character to go on. This means 78 is a soft limit, and 79 is the hard
   limit :)
 
 * Import order

+ 1 - 1
docs/django/first-steps-with-django.rst

@@ -64,7 +64,7 @@ for the :program:`celery` command-line program:
 
 You don't need this line, but it saves you from always passing in the
 settings module to the ``celery`` program. It must always come before
-creating the app instances, which is what we do next:
+creating the app instances, as we do next:
 
 .. code-block:: python
 

+ 2 - 2
docs/faq.rst

@@ -270,7 +270,7 @@ When using the RabbitMQ (AMQP) and Redis transports it should work
 out of the box.
 
 For other transports the compatibility prefork pool is
-used which requires a working POSIX semaphore implementation,
+used and requires a working POSIX semaphore implementation,
 this is enabled in FreeBSD by default since FreeBSD 8.x.
 For older version of FreeBSD, you have to enable
 POSIX semaphores in the kernel and manually recompile billiard.
@@ -445,7 +445,7 @@ setting to "json" or "yaml" instead of pickle.
 Similarly for task results you can set :setting:`result_serializer`.
 
 For more details of the formats used and the lookup order when
-checking which format to use for a task see :ref:`calling-serializers`
+checking what format to use for a task see :ref:`calling-serializers`
 
 Can messages be encrypted?
 --------------------------

+ 2 - 2
docs/getting-started/brokers/rabbitmq.rst

@@ -140,8 +140,8 @@ be `rabbit@myhost`, as verified by :command:`rabbitmqctl`:
     ...done.
 
 This is especially important if your DHCP server gives you a host name
-starting with an IP address, (e.g. `23.10.112.31.comcast.net`), because
-then RabbitMQ will try to use `rabbit@23`, which is an illegal host name.
+starting with an IP address (e.g. `23.10.112.31.comcast.net`).  In this
+case RabbitMQ will try to use `rabbit@23`: an illegal host name.
 
 .. _rabbitmq-macOS-start-stop:
 

+ 3 - 3
docs/getting-started/brokers/sqs.rst

@@ -78,8 +78,8 @@ Polling Interval
 
 The polling interval decides the number of seconds to sleep between
 unsuccessful polls. This value can be either an int or a float.
-By default the value is 1 second, which means that the worker will
-sleep for one second whenever there are no more messages to read.
+By default the value is *one second*: this means the worker will
+sleep for one second when there are no more messages to read.
 
 You must note that **more frequent polling is also more expensive, so increasing
 the polling interval can save you money**.
@@ -89,7 +89,7 @@ setting::
 
     broker_transport_options = {'polling_interval': 0.3}
 
-Very frequent polling intervals can cause *busy loops*, which results in the
+Very frequent polling intervals can cause *busy loops*, resulting in the
 worker using a lot of CPU time. If you need sub-millisecond precision you
 should consider using another transport, like `RabbitMQ <broker-amqp>`,
 or `Redis <broker-redis>`.

+ 19 - 14
docs/getting-started/first-steps-with-celery.rst

@@ -27,7 +27,7 @@ will get you started in no time. It's deliberately kept simple, so
 to not confuse you with advanced features.
 After you have finished this tutorial
 it's a good idea to browse the rest of the documentation,
-for example the :ref:`next-steps` tutorial, which will
+for example the :ref:`next-steps` tutorial: it will
 showcase Celery's capabilities.
 
 .. contents::
@@ -103,8 +103,8 @@ with standard Python tools like ``pip`` or ``easy_install``:
 Application
 ===========
 
-The first thing you need is a Celery instance, which is called the Celery
-application or just "app" for short. Since this instance is used as
+The first thing you need is a Celery instance.  We call this the *Celery
+application* or just *app* for short. As this instance is used as
 the entry-point for everything you want to do in Celery, like creating tasks and
 managing workers, it must be possible for other modules to import it.
 
@@ -125,14 +125,17 @@ Let's create the file :file:`tasks.py`:
         return x + y
 
 The first argument to :class:`~celery.app.Celery` is the name of the current module,
-this is needed so that names can be automatically generated, the second
-argument is the broker keyword argument which specifies the URL of the
-message broker you want to use, using RabbitMQ here, which is already the
-default option. See :ref:`celerytut-broker` above for more choices,
+this is only needed so names can be automatically generated when the tasks are
+defined in the `__main__` module.
+
+The second argument is the broker keyword argument, specifying the URL of the
+message broker you want to use. Here we use RabbitMQ (also the default option).
+
+See :ref:`celerytut-broker` above for more choices,
 e.g. for RabbitMQ you can use ``amqp://localhost``, or for Redis you can
 use ``redis://localhost``.
 
-You defined a single task, called ``add``, which returns the sum of two numbers.
+You defined a single task, called ``add``, returning the sum of two numbers.
 
 .. _celerytut-running-the-worker:
 
@@ -178,7 +181,7 @@ Calling the task
 To call our task you can use the :meth:`~@Task.delay` method.
 
 This is a handy shortcut to the :meth:`~@Task.apply_async`
-method which gives greater control of the task execution (see
+method that gives greater control of the task execution (see
 :ref:`guide-calling`)::
 
     >>> from tasks import add
@@ -187,11 +190,13 @@ method which gives greater control of the task execution (see
 The task has now been processed by the worker you started earlier,
 and you can verify that by looking at the workers console output.
 
-Calling a task returns an :class:`~@AsyncResult` instance,
-which can be used to check the state of the task, wait for the task to finish
+Calling a task returns an :class:`~@AsyncResult` instance:
+this can be used to check the state of the task, wait for the task to finish,
 or get its return value (or if the task failed, the exception and traceback).
-But this isn't enabled by default, and you have to configure Celery to
-use a result backend, which is detailed in the next section.
+
+Results aren't enabled by default, so if you want to do RPC or keep track
+of task results in a database you have to configure Celery to use a result
+backend.  This is described in the next section.
 
 .. _celerytut-keeping-results:
 
@@ -209,7 +214,7 @@ and -- or you can define your own.
 .. _`SQLAlchemy`: http://www.sqlalchemy.org/
 .. _`Django`: http://djangoproject.com
 
-For this example we use the `rpc` result backend, which sends states
+For this example we use the `rpc` result backend: it sends states
 back as transient messages. The backend is specified via the ``backend`` argument to
 :class:`@Celery`, (or via the :setting:`task_result_backend` setting if
 you choose to use a configuration module):

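Putting the pieces from this file together, a condensed :file:`tasks.py` might look like the following sketch (the broker and backend URLs are examples):

.. code-block:: python

    from celery import Celery

    # RabbitMQ as the broker, the rpc backend for transient results.
    app = Celery('tasks', broker='amqp://localhost', backend='rpc://')

    @app.task
    def add(x, y):
        return x + y
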
+ 3 - 3
docs/getting-started/introduction.rst

@@ -18,8 +18,8 @@ A task queue's input is a unit of work called a task. Dedicated worker
 processes constantly monitor task queues for new work to perform.
 
 Celery communicates via messages, usually using a broker
-to mediate between clients and workers. To initiate a task, a client adds a
-message to the queue, which the broker then delivers to a worker.
+to mediate between clients and workers. To initiate a task, the client adds a
+message to the queue; the broker then delivers that message to a worker.
 
 A Celery system can consist of multiple workers and brokers, giving way
 to high availability and horizontal scaling.
@@ -213,7 +213,7 @@ Features
 Framework Integration
 =====================
 
-Celery is easy to integrate with web frameworks, some of which even have
+Celery is easy to integrate with web frameworks, and some of them even have
 integration packages:
 
     +--------------------+------------------------+

+ 15 - 14
docs/getting-started/next-steps.rst

@@ -116,7 +116,7 @@ Eventlet, Gevent, and running in a single thread (see :ref:`concurrency`).
 -- *Events* is an option that when enabled causes Celery to send
 monitoring messages (events) for actions occurring in the worker.
 These can be used by monitor programs like ``celery events``,
-and Flower - the real-time Celery monitor, which you can read about in
+and Flower - the real-time Celery monitor, described in
 the :ref:`Monitoring and Management guide <guide-monitoring>`.
 
 -- *Queues* is the list of queues that the worker will consume
@@ -178,9 +178,10 @@ or stop it:
 
     $ celery multi stop w1 -A proj -l info
 
-The ``stop`` command is asynchronous so it'll not wait for the
+The ``stop`` command is asynchronous so it won't wait for the
 worker to shutdown. You'll probably want to use the ``stopwait`` command
-instead which will ensure all currently executing tasks is completed:
+instead: this ensures all currently executing tasks are completed
+before exiting:
 
 .. code-block:: console
 
@@ -283,7 +284,7 @@ so that no message is sent:
     4
 
 These three methods - :meth:`delay`, :meth:`apply_async`, and applying
-(``__call__``), represents the Celery calling API, which are also used for
+(``__call__``), represent the Celery calling API, and are also used for
 signatures.
 
 A more detailed overview of the Calling API can be found in the
@@ -293,7 +294,7 @@ Every task invocation will be given a unique identifier (an UUID), this
 is the task id.
 
 The ``delay`` and ``apply_async`` methods return an :class:`~@AsyncResult`
-instance, which can be used to keep track of the tasks execution state.
+instance that can be used to keep track of the task's execution state.
 But for this you need to enable a :ref:`result backend <task-result-backends>` so that
 the state can be stored somewhere.
 
@@ -376,7 +377,7 @@ The started state is a special state that's only recorded if the
 ``@task(track_started=True)`` option is set for the task.
 
 The pending state is actually not a recorded state, but rather
-the default state for any task id that's unknown, which you can see
+the default state for any task id that's unknown, as you can see
 from this example:
 
 .. code-block:: pycon
@@ -430,7 +431,7 @@ There's also a shortcut using star arguments:
 And there's that calling API again…
 -----------------------------------
 
-Signature instances also supports the calling API, which means that they
+Signature instances also support the calling API: meaning they
 have the ``delay`` and ``apply_async`` methods.
 
 But there's a difference in that the signature may already have
@@ -462,7 +463,7 @@ and this can be resolved when calling the signature:
     >>> res.get()
     10
 
-Here you added the argument 8, which was prepended to the existing argument 2
+Here you added the argument 8 that was prepended to the existing argument 2
 forming a complete signature of ``add(8, 2)``.
 
 Keyword arguments can also be added later, these are then merged with any
@@ -473,7 +474,7 @@ existing keyword arguments, but with new arguments taking precedence:
     >>> s3 = add.s(2, 2, debug=True)
     >>> s3.delay(debug=False)   # debug is now False.
 
-As stated signatures supports the calling API, which means that:
+As stated, signatures support the calling API, meaning that:
 
 - ``sig.apply_async(args=(), kwargs={}, **options)``
 
@@ -665,8 +666,8 @@ This is implemented by using broadcast messaging, so all remote
 control commands are received by every worker in the cluster.
 
 You can also specify one or more workers to act on the request
-using the :option:`--destination <celery inspect --destination>` option,
-which is a comma separated list of worker host names:
+using the :option:`--destination <celery inspect --destination>` option.
+This is a comma separated list of worker host names:
 
 .. code-block:: console
 
@@ -684,7 +685,7 @@ For a list of inspect commands you can execute:
 
     $ celery -A proj inspect --help
 
-Then there's the :program:`celery control` command, which contains
+Then there's the :program:`celery control` command: it contains
 commands that actually changes things in the worker at runtime:
 
 .. code-block:: console
@@ -752,8 +753,8 @@ If you have strict fair scheduling requirements, or want to optimize
 for throughput then you should read the :ref:`Optimizing Guide
 <guide-optimizing>`.
 
-If you're using RabbitMQ then you should install the :pypi:`librabbitmq`
-module, which is an AMQP client implemented in C:
+If you're using RabbitMQ then you can install the :pypi:`librabbitmq`
+module, an AMQP client implemented in C:
 
 .. code-block:: console
 

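A pycon sketch of the partial-signature behavior described in the hunks above (it assumes the tutorial's ``add`` task):

.. code-block:: pycon

    >>> s2 = add.s(2)        # partial signature: one argument still missing
    >>> res = s2.delay(8)    # 8 is prepended, completing add(8, 2)
    >>> res.get()
    10
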
+ 1 - 1
docs/internals/app-overview.rst

@@ -5,7 +5,7 @@
 The `app` branch is a work-in-progress to remove
 the use of a global configuration in Celery.
 
-Celery can now be instantiated, which means several
+Celery can now be instantiated, and several
 instances of Celery may exist in the same process space.
 Also, large parts can be customized without resorting to monkey
 patching.

+ 2 - 2
docs/internals/deprecation.rst

@@ -22,7 +22,7 @@ Compat Task Modules
 
 - Module ``celery.decorators`` will be removed:
 
-    Which means you need to change:
+    This means you need to change:
 
     .. code-block:: python
 
@@ -102,7 +102,7 @@ Modules to Remove
 
 - ``celery.execute``
 
-  This module only contains ``send_task``, which must be replaced with
+  This module only contains ``send_task``: this must be replaced with
   :attr:`@send_task` instead.
 
 - ``celery.decorators``

+ 3 - 3
docs/internals/protocol.rst

@@ -108,7 +108,7 @@ Changes from version 1
 
     This means that workers/intermediates can inspect the message
     and make decisions based on the headers without decoding
-    the payload (which may be language specific, e.g. serialized by the
+    the payload (that may be language specific, e.g. serialized by the
     Python specific pickle serializer).
 
 - Always UTC
@@ -182,8 +182,8 @@ Changes from version 1
 Version 1
 ---------
 
-In version 1 of the protocol all fields are stored in the message body,
-which means workers and intermediate consumers must deserialize the payload
+In version 1 of the protocol all fields are stored in the message body:
+meaning workers and intermediate consumers must deserialize the payload
 to read the fields.
 
 Message body

+ 2 - 2
docs/userguide/application.rst

@@ -24,8 +24,8 @@ Let's create one now:
     >>> app
     <Celery __main__:0x100469fd0>
 
-The last line shows the textual representation of the application,
-which includes the name of the app class (``Celery``), the name of the
+The last line shows the textual representation of the application:
+including the name of the app class (``Celery``), the name of the
 current main module (``__main__``), and the memory address of the object
 (``0x100469fd0``).
 

+ 13 - 13
docs/userguide/calling.rst

@@ -119,10 +119,11 @@ as a partial argument:
 
 .. sidebar:: What's ``s``?
 
-    The ``add.s`` call used here is called a signature, I talk
-    more about signatures in the :ref:`canvas guide <guide-canvas>`,
-    where you can also learn about :class:`~celery.chain`, which
-    is a simpler way to chain tasks together.
+    The ``add.s`` call used here is called a signature. If you
+    don't know what they are, you should read about them in the
+    :ref:`canvas guide <guide-canvas>`.
+    There you can also learn about :class:`~celery.chain`: a simpler
+    way to chain tasks together.
 
     In practice the ``link`` execution option is considered an internal
     primitive, and you'll probably not use it directly, but
@@ -269,8 +270,7 @@ and can contain the following keys:
 - `interval_start`
 
     Defines the number of seconds (float or integer) to wait between
-    retries. Default is 0, which means the first retry will be
-    instantaneous.
+    retries. Default is 0 (the first retry will be instantaneous).
 
 - `interval_step`
 
@@ -386,9 +386,9 @@ json -- JSON is supported in many programming languages, is now
     data types: strings, Unicode, floats, Boolean, dictionaries, and lists.
     Decimals and dates are notably missing.
 
-    Also, binary data will be transferred using Base64 encoding, which will
-    cause the transferred data to be around 34% larger than an encoding which
-    supports native binary types.
+    Binary data will be transferred using Base64 encoding,
+    increasing the size of the transferred data by 34% compared to an encoding
+    format where native binary types are supported.
 
     However, if your data fits inside the above constraints and you need
     cross-language support, the default setting of JSON is probably your
@@ -426,8 +426,8 @@ The encoding used is available as a message header, so the worker knows how to
 deserialize any task. If you use a custom serializer, this serializer must
 be available for the worker.
 
-The following order is used to decide which serializer
-to use when sending a task:
+The following order is used to decide the serializer
+to use when sending a task:
 
     1. The `serializer` execution option.
     2. The :attr:`@-Task.serializer` attribute
@@ -449,8 +449,8 @@ Celery can compress the messages using either *gzip*, or *bzip2*.
 You can also create your own compression schemes and register
 them in the :func:`kombu compression registry <kombu.compression.register>`.
 
-The following order is used to decide which compression scheme
-to use when sending a task:
+The following order is used to decide the compression scheme
+to use when sending a task:
 
     1. The `compression` execution option.
     2. The :attr:`@-Task.compression` attribute.
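
Both the serializer and the compression scheme can also be chosen per call through the execution options listed here; a small sketch (the ``add`` task is assumed):

.. code-block:: python

    # Overrides the serializer and compression for this call only.
    add.apply_async((2, 2), serializer='json', compression='bzip2')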

+ 24 - 23
docs/userguide/canvas.rst

@@ -70,8 +70,8 @@ or even serialized and sent across the wire.
         >>> s.options
         {'countdown': 10}
 
-- It supports the "Calling API" which means it supports ``delay`` and
-  ``apply_async`` or being called directly.
+- It supports the "Calling API" of ``delay``,
+  ``apply_async``, etc., including being called directly (``__call__``).
 
     Calling the signature will execute the task inline in the current process:
 
@@ -343,7 +343,7 @@ Here's some examples:
 
         >>> add.signature((2, 2), immutable=True)
 
-    There's also a ``.si()`` shortcut for this, which is the preffered way of
+    There's also a ``.si()`` shortcut for this, and it's the preferred way of
     creating signatures:
 
     .. code-block:: pycon
@@ -377,9 +377,9 @@ Here's some examples:
 
 - Simple chord
 
-    The chord primitive enables us to add callback to be called when
-    all of the tasks in a group have finished executing, which is often
-    required for algorithms that aren't embarrassingly parallel:
+    The chord primitive enables us to add a callback to be called when
+    all of the tasks in a group have finished executing.  This is often
+    required for algorithms that aren't *embarrassingly parallel*:
 
     .. code-block:: pycon
 
@@ -400,7 +400,9 @@ Here's some examples:
         >>> chord((import_contact.s(c) for c in contacts),
         ...       notify_complete.si(import_id)).apply_async()
 
-    Note the use of ``.si`` above which creates an immutable signature.
+    Note the use of ``.si`` above; this creates an immutable signature,
+    meaning any new arguments passed (including the return value of the
+    previous task) will be ignored.
 
 - Blow your mind by combining
 
@@ -415,7 +417,7 @@ Here's some examples:
         >>> res.get()
         160
 
-    Which means that you can combine chains:
+    This means that you can combine chains:
 
     .. code-block:: pycon
 
@@ -481,8 +483,8 @@ Chains
 
 .. versionadded:: 3.0
 
-Tasks can be linked together, which in practice means adding
-a callback task:
+Tasks can be linked together: the linked task is called when the task
+returns successfully:
 
 .. code-block:: pycon
 
@@ -491,8 +493,8 @@ a callback task:
     4
 
 The linked task will be applied with the result of its parent
-task as the first argument, which in the above case will result
-in ``mul(4, 16)`` since the result is 4.
+task as the first argument. In the above case where the result was 4,
+this will result in ``mul(4, 16)``.
 
 The results will keep track of any subtasks called by the original task,
 and this can be accessed from the result instance:
@@ -541,7 +543,7 @@ You can also add *error callbacks* using the `on_error` method:
 
     >>> add.s(2, 2).on_error(log_error.s()).delay()
 
-Which will resut in the following ``.apply_async`` call when the signature
+This will result in the following ``.apply_async`` call when the signature
 is applied:
 
 .. code-block:: pycon
@@ -666,7 +668,7 @@ The :class:`~celery.group` function takes a list of signatures:
 
 If you **call** the group, the tasks will be applied
 one after another in the current process, and a :class:`~celery.result.GroupResult`
-instance is returned which can be used to keep track of the results,
+instance is returned that can be used to keep track of the results,
 or tell how many tasks are ready and so on:
 
 .. code-block:: pycon
@@ -749,9 +751,8 @@ It supports the following operations:
 
 * :meth:`~celery.result.GroupResult.join`
 
-    Gather the results for all of the subtasks
-    and return a list with them ordered by the order of which they
-    were called.
+    Gather the results of all subtasks
+    and return them in the same order as they were called (as a list).
 
 .. _canvas-chord:
 
@@ -857,8 +858,8 @@ to the :exc:`~@ChordError` exception:
     celery.exceptions.ChordError: Dependency 97de6f3f-ea67-4517-a21c-d867c61fcb47
         raised ValueError('something something',)
 
-While the traceback may be different depending on which result backend is
-being used, you can see the error description includes the id of the task that failed
+While the traceback may be different depending on the result backend used,
+you can see that the error description includes the id of the task that failed
 and a string representation of the original exception. You can also
 find the original traceback in ``result.traceback``.
 
@@ -926,11 +927,11 @@ Example implementation:
         raise self.retry(countdown=interval, max_retries=max_retries)
 
 
-This is used by all result backends except Redis and Memcached, which
-increment a counter after each task in the header, then applying the callback
+This is used by all result backends except Redis and Memcached: they
+increment a counter after each task in the header, then apply the callback
 when the counter exceeds the number of tasks in the set. *Note:* chords don't
 properly work with Redis before version 2.2; you'll need to upgrade to at
-least 2.2 to use them.
+least *redis-server* 2.2 to use them.
 
 The Redis and Memcached approach is a much better solution, but not easily
 implemented in other backends (suggestions welcome!).
@@ -1063,5 +1064,5 @@ of one:
 
     >>> group.skew(start=1, stop=10)()
 
-which means that the first task will have a countdown of one second, the second
+This means that the first task will have a countdown of one second, the second
 task a countdown of two seconds, and so on.

+ 1 - 1
docs/userguide/concurrency/eventlet.rst

@@ -18,7 +18,7 @@ change how you run your code, not how you write it.
     * `Coroutines`_ ensure that the developer uses a blocking style of
       programming that's similar to threading, but provide the benefits of
       non-blocking I/O.
-    * The event dispatch is implicit, which means you can easily use Eventlet
+    * The event dispatch is implicit: meaning you can easily use Eventlet
       from the Python interpreter, or as a small part of a larger application.
 
 Celery supports Eventlet as an alternative execution pool implementation.

+ 23 - 17
docs/userguide/configuration.rst

@@ -265,7 +265,7 @@ You can change methods too, for example the ``on_failure`` handler:
     task_annotations = {'*': {'on_failure': my_on_failure}}
 
 If you need more flexibility then you can use objects
-instead of a dict to choose which tasks to annotate:
+instead of a dict to choose the tasks to annotate:
 
 .. code-block:: python
 
@@ -358,7 +358,7 @@ Default: Disabled.
 
 If this is :const:`True`, all tasks will be executed locally by blocking until
 the task returns. ``apply_async()`` and ``Task.delay()`` will return
-an :class:`~celery.result.EagerResult` instance, which emulates the API
+an :class:`~celery.result.EagerResult` instance emulating the API
 and behavior of :class:`~celery.result.AsyncResult`, except the result
 is already evaluated.
 
@@ -388,7 +388,7 @@ Default: Disabled.
 If enabled task results will include the workers stack when re-raising
 task errors.
 
-This requires the :pypi:`tblib` library, which can be installed using
+This requires the :pypi:`tblib` library; install it using
 :command:`pip`:
 
 .. code-block:: console
@@ -428,7 +428,7 @@ task is executed by a worker. The default value is :const:`False` as
 the normal behavior is to not report that level of granularity. Tasks
 are either pending, finished, or waiting to be retried. Having a 'started'
 state can be useful for when there are long running tasks and there's a
-need to report which task is currently running.
+need to report what task is currently running.
 
 .. setting:: task_time_limit
 
@@ -474,7 +474,7 @@ Example:
 Default: Disabled.
 
 Late ack means the task messages will be acknowledged **after** the task
-has been executed, not *just before*, which is the default behavior.
+has been executed, not *just before* (the default behavior).
 
 .. seealso::
 
@@ -644,7 +644,9 @@ on backend specifications).
 
 Default: Disabled by default.
 
-Enables client caching of results, which can be useful for the old deprecated
+Enables client caching of results.
+
+This can be useful for the old deprecated
 'amqp' backend where the result is unavailable as soon as one result instance
 consumes it.
 
@@ -688,7 +690,7 @@ Examples::
 
 Please see `Supported Databases`_ for a table of supported databases,
 and `Connection String`_ for more information about connection
-strings (which is the part of the URI that comes after the ``db+`` prefix).
+strings (this is the part of the URI that comes after the ``db+`` prefix).
 
 .. _`Supported Databases`:
     http://www.sqlalchemy.org/docs/core/engines.html#supported-databases
@@ -874,7 +876,7 @@ For example::
 
     result_backend = 'redis://localhost/0'
 
-which is the same as::
+is the same as::
 
     result_backend = 'redis://'
 
@@ -1083,7 +1085,7 @@ For example::
 
     result_backend = 'riak://localhost/celery
 
-which is the same as::
+is the same as::
 
     result_backend = 'riak://'
 
@@ -1330,7 +1332,7 @@ A router can be specified as either:
 
 *  A function with the signature ``(name, args, kwargs,
    options, task=None, **kwargs)``
-*  A string which provides the path to a router function.
+*  A string providing the path to a router function.
 *  A dict containing router specification:
      Will be converted to a :class:`celery.routes.MapRoute` instance.
 * A list of ``(pattern, route)`` tuples:
@@ -1576,11 +1578,15 @@ Only the scheme part (``transport://``) is required, the rest
 is optional, and defaults to the specific transports default values.
 
 The transport part is the broker implementation to use, and the
-default is ``amqp``, which uses ``librabbitmq`` by default or falls back to
-``pyamqp`` if that's not installed. Also there are many other choices including
+default is ``amqp`` (uses ``librabbitmq`` if installed or falls back to
+``pyamqp``). There are also many other choices including:
 ``redis``, ``beanstalk``, ``sqlalchemy``, ``django``, ``mongodb``,
-``couchdb``.
-It can also be a fully qualified path to your own transport implementation.
+and ``couchdb``.
+
+The scheme can also be a fully qualified path to your own transport
+implementation::
+
+    broker_url = 'proj.transports.MyTransport://localhost'
 
 More than one broker URL, of the same transport, can also be specified.
 The broker URLs can be passed in as a single string that's semicolon delimited::
@@ -1658,9 +1664,9 @@ a connection was closed.
 
 If the heartbeat value is 10 seconds, then
 the heartbeat will be monitored at the interval specified
-by the :setting:`broker_heartbeat_checkrate` setting, which by default is
-double the rate of the heartbeat value
-(so for the default 10 seconds, the heartbeat is checked every 5 seconds).
+by the :setting:`broker_heartbeat_checkrate` setting (by default
+this is set to double the rate of the heartbeat value,
+so with the 10-second heartbeat above, it's checked every 5 seconds).
 
 .. setting:: broker_heartbeat_checkrate
 

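Returning to the ``broker_url`` hunk above, a sketch of the URL forms it mentions (host names are placeholders), including the semicolon-delimited failover form:

.. code-block:: python

    # A single broker.
    broker_url = 'amqp://user:password@broker1.example.com:5672/myvhost'

    # Failover between two brokers of the same transport.
    broker_url = 'amqp://broker1.example.com;amqp://broker2.example.com'
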
+ 9 - 8
docs/userguide/daemonizing.rst

@@ -35,10 +35,10 @@ tell it where to change
 directory to when it starts (to find the module containing your app, or your
 configuration module).
 
-The daemonization script is configured by the file :file:`/etc/default/celeryd`,
-which is a shell (:command:`sh`) script. You can add environment variables and the
-configuration options below to this file. To add environment variables you
-must also export them (e.g. :command:`export DISPLAY=":0"`)
+The daemonization script is configured by the file :file:`/etc/default/celeryd`.
+This is a shell (:command:`sh`) script where you can add environment variables like
+the configuration options below.  To add real environment variables affecting
+the worker you must also export them (e.g. :command:`export DISPLAY=":0"`)
 
 .. Admonition:: Superuser privileges required
 
@@ -344,8 +344,8 @@ and now you should be able to see the errors.
 Commonly such errors are caused by insufficient permissions
 to read from, or write to a file, and also by syntax errors
 in configuration modules, user modules, third-party libraries,
-or even from Celery itself (if you've found a bug, in which case
-you should :ref:`report it <reporting-bugs>`).
+or even from Celery itself (if you've found a bug you
+should :ref:`report it <reporting-bugs>`).
 
 
 .. _daemon-systemd-generic:
@@ -479,10 +479,11 @@ use the Environment in :file:`celery.service`.
 
 Running the worker with superuser privileges (root)
 ======================================================================
+
 Running the worker with superuser privileges is a very dangerous practice.
 There should always be a workaround to avoid running as root. Celery may
-run arbitrary code in messages serialized with pickle - which is dangerous,
-especially if run as root.
+run arbitrary code in messages serialized with pickle - this is dangerous,
+especially when run as root.
 
 By default Celery won't run workers as root. The associated error
 message may not be visible in the logs but may be seen if :envvar:`C_FAKEFORK`

+ 10 - 11
docs/userguide/extending.rst

@@ -16,7 +16,7 @@ Custom Message Consumers
 You may want to embed custom Kombu consumers to manually process your messages.
 
 For that purpose a special :class:`~celery.bootstep.ConsumerStep` bootstep class
-exists, where you only need to define the ``get_consumers`` method, which must
+exists, where you only need to define the ``get_consumers`` method.  It must
 return a list of :class:`kombu.Consumer` objects to start
 whenever the connection is established:
 
@@ -62,12 +62,11 @@ whenever the connection is established:
 .. note::
 
     Kombu Consumers can take use of two different message callback dispatching
-    mechanisms. The first one is the ``callbacks`` argument which accepts
+    mechanisms. The first one is the ``callbacks`` argument that accepts
     a list of callbacks with a ``(body, message)`` signature,
-    the second one is the ``on_message`` argument which takes a single
+    the second one is the ``on_message`` argument that takes a single
     callback with a ``(message,)`` signature. The latter won't
-    automatically decode and deserialize the payload which is useful
-    in many cases:
+    automatically decode and deserialize the payload.
 
     .. code-block:: python
 
@@ -99,7 +98,7 @@ and the worker currently defines two blueprints: **Worker**, and **Consumer**
 **Figure A:** Bootsteps in the Worker and Consumer blueprints. Starting
               from the bottom up the first step in the worker blueprint
               is the Timer, and the last step is to start the Consumer blueprint,
-              which then establishes the broker connection and starts
+              that then establishes the broker connection and starts
               consuming messages.
 
 .. figure:: ../images/worker_graph_full.png
@@ -115,8 +114,8 @@ The Worker is the first blueprint to start, and with it starts major components
 the event loop, processing pool, and the timer used for ETA tasks and other
 timed events.
 
-When the worker is fully started it'll continue to the Consumer blueprint,
-which sets up how tasks are to be executed, connects to the broker and starts
+When the worker is fully started it continues with the Consumer blueprint,
+setting up how tasks are executed, connecting to the broker, and starting
 the message consumers.
 
 The :class:`~celery.worker.WorkController` is the core worker implementation,
@@ -624,8 +623,8 @@ the worker has been initialized, so the "is starting" lines are time-stamped.
 You may notice that this does no longer happen at shutdown, this is because
 the ``stop`` and ``shutdown`` methods are called inside a *signal handler*,
 and it's not safe to use logging inside such a handler.
-Logging with the Python logging module isn't :term:`reentrant`,
-which means that you cannot interrupt the function and
+Logging with the Python logging module isn't :term:`reentrant`:
+meaning you cannot interrupt the function and then
 call it again later. It's important that the ``stop`` and ``shutdown`` methods
 you write is also :term:`reentrant`.
 
@@ -740,7 +739,7 @@ Preload options
 ~~~~~~~~~~~~~~~
 
 The :program:`celery` umbrella command supports the concept of 'preload
-options', which are special options passed to all sub-commands and parsed
+options'.  These are special options passed to all sub-commands and parsed
 outside of the main parsing step.
 
 The list of default preload options can be found in the API reference:

+ 1 - 1
docs/userguide/monitoring.rst

@@ -46,7 +46,7 @@ Commands
 
 * **shell**: Drop into a Python shell.
 
-  The locals will include the ``celery`` variable, which is the current app.
+  The locals will include the ``celery`` variable: this is the current app.
   Also all known tasks will be automatically added to locals (unless the
   :option:`--without-tasks <celery shell --without-tasks>` flag is set).
 

+ 4 - 4
docs/userguide/optimizing.rst

@@ -109,8 +109,8 @@ or by using :setting:`task_routes`:
 
 
 The ``delivery_mode`` changes how the messages to this queue are delivered.
-A value of 1 means that the message won't be written to disk, and a value
-of 2 (default) means that the message can be written to disk.
+A value of one means that the message won't be written to disk, and a value
+of two (default) means that the message can be written to disk.
 
 To direct a task to your new transient queue you can specify the queue
 argument (or use the :setting:`task_routes` setting):
@@ -145,7 +145,7 @@ The workers' default prefetch count is the
 of concurrency slots[*]_ (processes/threads/green-threads).
 
 If you have many tasks with a long duration you want
-the multiplier value to be 1, which means it'll only reserve one
+the multiplier value to be *one*: meaning it'll only reserve one
 task per worker process at a time.
 
 However -- If you have many short-running tasks, and throughput/round trip
@@ -167,7 +167,7 @@ The task message is only deleted from the queue after the task is
 it can be redelivered to another worker (or the same after recovery).
 
 When using the default of early acknowledgment, having a prefetch multiplier setting
-of 1, means the worker will reserve at most one extra task for every
+of *one* means the worker will reserve at most one extra task for every
 worker process: or in other words, if the worker is started with
 :option:`-c 10 <celery worker -c>`, the worker may reserve at most 20
 tasks (10 unacknowledged tasks executing, and 10 unacknowledged reserved

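A minimal configuration sketch combining the two settings discussed in this file's hunks:

.. code-block:: python

    worker_prefetch_multiplier = 1  # reserve only one task per process
    task_acks_late = True           # acknowledge after execution, not before
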
+ 7 - 7
docs/userguide/periodic-tasks.rst

@@ -10,8 +10,8 @@
 Introduction
 ============
 
-:program:`celery beat` is a scheduler. It kicks off tasks at regular intervals,
-which are then executed by the worker nodes available in the cluster.
+:program:`celery beat` is a scheduler; it kicks off tasks at regular intervals,
+and they're then executed by available worker nodes in the cluster.
 
 By default the entries are taken from the :setting:`beat_schedule` setting,
 but custom stores can also be used, like storing the entries in a SQL database.
@@ -104,8 +104,8 @@ Setting these up from within the :data:`~@on_after_configure` handler means
 that we'll not evaluate the app at module level when using ``test.s()``.
 
 The :meth:`~@add_periodic_task` function will add the entry to the
-:setting:`beat_schedule` setting behind the scenes, which also
-can be used to set up periodic tasks manually:
+:setting:`beat_schedule` setting behind the scenes, and the same setting
+can also be used to set up periodic tasks manually:
 
 Example: Run the `tasks.add` task every 30 seconds.
 
@@ -415,9 +415,9 @@ Using custom scheduler classes
 Custom scheduler classes can be specified on the command-line (the
 :option:`-S <celery beat -S>` argument).
 
-The default scheduler is :class:`celery.beat.PersistentScheduler`,
-which is simply keeping track of the last run times in a local database file
-(a :mod:`shelve`).
+The default scheduler is the :class:`celery.beat.PersistentScheduler`:
+it simply keeps track of the last run times in a local :mod:`shelve`
+database file.
 
 :pypi:`django-celery` also ships with a scheduler that stores the schedule in
 the Django database:

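A sketch of a manual :setting:`beat_schedule` entry equivalent to the 30-second example referenced above (it assumes an existing ``app`` instance and a ``tasks.add`` task):

.. code-block:: python

    app.conf.beat_schedule = {
        'add-every-30-seconds': {
            'task': 'tasks.add',
            'schedule': 30.0,
            'args': (16, 16),
        },
    }
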
+ 7 - 4
docs/userguide/routing.rst

@@ -48,8 +48,10 @@ to match all tasks in the ``feed.tasks`` name-space:
 
     app.conf.task_routes = {'feed.tasks.*': {'queue': 'feeds'}}
 
-If the order in which the patterns are matched is important you should should
-specify a tuple as the task router instead::
+If the order of matching patterns is important you should
+specify the router in *items* format instead:
+
+.. code-block:: python
 
     task_routes = ([
         ('feed.tasks.*': {'queue': 'feeds'}),
@@ -469,8 +471,9 @@ using the ``basic.publish`` command:
     ok.
 
 Now that the message is sent you can retrieve it again. You can use the
-``basic.get``` command here, which polls for new messages on the queue
-(which is alright for maintenance tasks, for services you'd want to use
+``basic.get`` command here: it polls for new messages on the queue
+in a synchronous manner
+(this is OK for maintenance tasks, but for services you'd want to use
 ``basic.consume`` instead)
 
 Pop a message off the queue:

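For reference, a corrected sketch of the *items* format from the first hunk above: each entry is a ``(pattern, route)`` tuple rather than a dict item, and the outer one-element tuple preserves matching order:

.. code-block:: python

    import re

    task_routes = ([
        ('feed.tasks.*', {'queue': 'feeds'}),
        ('web.tasks.*', {'queue': 'web'}),
        (re.compile(r'(video|image)\.tasks\..*'), {'queue': 'media'}),
    ],)  # trailing comma: a one-element tuple, so order is significant
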
+ 4 - 4
docs/userguide/signals.rst

@@ -37,7 +37,7 @@ Example connecting to the :signal:`after_task_publish` signal:
         ))
 
 
-Some signals also have a sender which you can filter by. For example the
+Some signals also have a sender you can filter by. For example the
 :signal:`after_task_publish` signal uses the task name as a sender, so by
 providing the ``sender`` argument to
 :class:`~celery.utils.dispatch.signal.Signal.connect` you can
@@ -303,7 +303,7 @@ Provides arguments:
     This is a :class:`~celery.worker.request.Request` instance, and not
     ``task.request``. When using the prefork pool this signal
     is dispatched in the parent process, so ``task.request`` isn't available
-    and shouldn't be used. Use this object instead, which should have many
+    and shouldn't be used. Use this object instead, as they share many
     of the same fields.
 
 * ``terminated``
@@ -739,8 +739,8 @@ It can be used to add additional command-line arguments to the
             enable_monitoring()
 
 
-Sender is the :class:`~celery.bin.base.Command` instance, which depends
-on what program was called (e.g. for the umbrella command it'll be
+Sender is the :class:`~celery.bin.base.Command` instance, and the value depends
+on the program that was called (e.g. for the umbrella command it'll be
 a :class:`~celery.bin.celery.CeleryCommand`) object).
 
 Provides arguments:
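A hedged sketch of the sender filtering described above (the task name and header field are assumptions based on protocol 2):

.. code-block:: python

    from celery.signals import after_task_publish

    # Only fires for one task, by passing its name as the sender.
    @after_task_publish.connect(sender='proj.tasks.add')
    def task_sent_handler(sender=None, headers=None, body=None, **kwargs):
        print('published {0}: id={1}'.format(sender, headers.get('id')))
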

+ 29 - 29
docs/userguide/tasks.rst

@@ -18,7 +18,7 @@ until that message has been :term:`acknowledged` by a worker. A worker can reser
 many messages in advance and even if the worker is killed -- by power failure
 or some other reason -- the message will be redelivered to another worker.
 
-Ideally task functions should be :term:`idempotent`, which means
+Ideally task functions should be :term:`idempotent`: meaning
 the function won't cause unintended effects even if called
 multiple times with the same arguments.
 Since the worker cannot detect if your tasks are idempotent, the default
@@ -132,8 +132,8 @@ these can be specified as arguments to the decorator:
 
     When using multiple decorators in combination with the task
     decorator you must make sure that the `task`
-    decorator is applied last (which in Python oddly means that it must
-    be the first in the list):
+    decorator is applied last (oddly, in Python this means it must
+    be first in the list):
 
     .. code-block:: python
 
@@ -294,8 +294,8 @@ since the worker and the client imports the modules under different names:
     >>> mytask.name
     'myapp.tasks.mytask'
 
-So for this reason you must be consistent in how you
-import modules, which is also a Python best practice.
+For this reason you must be consistent in how you
+import modules, and that is also a Python best practice.
 
 Similarly, you shouldn't use old-style relative imports:
 
@@ -368,7 +368,7 @@ So each task will have a name like `moduleA.taskA`, `moduleA.taskB` and
 
 .. warning::
 
-    Make sure that your :meth:`@gen_task_name` is a pure function, which means
+    Make sure that your :meth:`@gen_task_name` is a pure function: meaning
     that for the same input it must always return the same output.
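A sketch of such a pure override (the class and app names are made up); the
result depends only on the ``name`` and ``module`` arguments:

.. code-block:: python

    from celery import Celery

    class MyCelery(Celery):
        def gen_task_name(self, name, module):
            # pure: same (name, module) in, same task name out
            if module.endswith('.tasks'):
                module = module[:-len('.tasks')]
            return super(MyCelery, self).gen_task_name(name, module)

    app = MyCelery('myapp')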
 
 .. _task-request-info:
@@ -497,8 +497,7 @@ for all of your tasks at the top of your module:
         return x + y
 
 Celery uses the standard Python logger library,
-for which documentation can be found in the :mod:`logging`
-module.
+and the documentation can be found :mod:`here <logging>`.
 
 You can also use :func:`print`, as anything written to standard
 out/-err will be redirected to the logging system (you can disable this,
@@ -786,8 +785,8 @@ General
 
 .. attribute:: Task.rate_limit
 
-    Set the rate limit for this task type which limits the number of tasks
-    that can be run in a given time frame. Tasks will still complete when
+    Set the rate limit for this task type (limits the number of tasks
+    that can be run in a given time frame). Tasks will still complete when
     a rate limit is in effect, but it may take some time before it's allowed to
     start.
 
@@ -801,8 +800,8 @@ General
     Example: `"100/m"` (hundred tasks a minute). This will enforce a minimum
     delay of 600ms between starting two tasks on the same worker instance.
 
-    Default is the :setting:`task_default_rate_limit` setting,
-    which if not specified means rate limiting for tasks is disabled by default.
+    Default is the :setting:`task_default_rate_limit` setting:
+    if not specified, rate limiting for tasks is disabled by default.
 
     Note that this is a *per worker instance* rate limit, and not a global
     rate limit. To enforce a global rate limit (e.g. for an API with a
@@ -853,18 +852,18 @@ General
 .. attribute:: Task.backend
 
     The result store backend to use for this task. An instance of one of the
-    backend classes in `celery.backends`. Defaults to `app.backend` which is
+    backend classes in `celery.backends`. Defaults to `app.backend`,
     defined by the :setting:`result_backend` setting.
 
 .. attribute:: Task.acks_late
 
     If set to :const:`True` messages for this task will be acknowledged
-    **after** the task has been executed, not *just before*, which is
-    the default behavior.
+    **after** the task has been executed, not *just before* (the default
+    behavior).
 
-    Note that this means the task may be executed twice if the worker
-    crashes in the middle of execution, which may be acceptable for some
-    applications.
+    Note: This means the task may be executed multiple times should the worker
+    crash in the middle of execution.  Make sure your tasks are
+    :term:`idempotent`.
 
     The global default can be overridden by the :setting:`task_acks_late`
     setting.
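A per-task sketch (the task and its body are placeholders); combine late
acknowledgment with an idempotent body, since the message can now be
redelivered after a crash:

.. code-block:: python

    @app.task(acks_late=True)
    def import_file(path):
        # the message is acknowledged only after this function returns
        with open(path) as fh:
            load_records(fh)   # hypothetical idempotent import helper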
@@ -878,7 +877,7 @@ General
     The default value is :const:`False` as the normal behavior is to not
     report that level of granularity. Tasks are either pending, finished,
     or waiting to be retried. Having a "started" status can be useful for
-    when there are long running tasks and there's a need to report which
+    when there are long running tasks and there's a need to report what
     task is currently running.
 
     The host name and process id of the worker executing the task
@@ -967,10 +966,11 @@ limitations.
 * Some databases use a default transaction isolation level that
   isn't suitable for polling tables for changes.
 
-  In MySQL the default transaction isolation level is `REPEATABLE-READ`, which
-  means the transaction won't see changes by other transactions until the
-  transaction is committed. Changing that to the `READ-COMMITTED` isolation
-  level is recommended.
+  In MySQL the default transaction isolation level is `REPEATABLE-READ`:
+  meaning the transaction won't see changes made by other transactions until
+  the current transaction is committed.
+
+  Changing that to the `READ-COMMITTED` isolation level is recommended.
 
 .. _task-builtin-states:
 
@@ -1047,8 +1047,8 @@ Custom states
 
 You can easily define your own states, all you need is a unique name.
 The name of the state is usually an uppercase string. As an example
-you could have a look at :mod:`abortable tasks <~celery.contrib.abortable>`
-which defines its own custom :state:`ABORTED` state.
+you could have a look at the :mod:`abortable tasks <~celery.contrib.abortable>`
+which defines a custom :state:`ABORTED` state.
 
 Use :meth:`~@Task.update_state` to update a task's state:
 
@@ -1062,7 +1062,7 @@ Use :meth:`~@Task.update_state` to update a task's state:.
                     meta={'current': i, 'total': len(filenames)})
 
 
-Here I created the state `"PROGRESS"`, which tells any application
+Here I created the state `"PROGRESS"`, telling any application
 aware of this state that the task is currently in progress, and also where
 it is in the process by having `current` and `total` counts as part of the
 state meta-data. This can then be used to create e.g. progress bars.
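A sketch of the consuming side, assuming the task above was invoked as
``result = upload_files.delay(filenames)`` and a result backend is configured:

.. code-block:: python

    import time

    result = upload_files.delay(filenames)
    while not result.ready():
        if result.state == 'PROGRESS':
            meta = result.info   # the dict passed as meta= to update_state()
            print('{0}/{1} files done'.format(meta['current'], meta['total']))
        time.sleep(1)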
@@ -1130,7 +1130,7 @@ you have to pass them as regular args:
 Semipredicates
 ==============
 
-The worker wraps the task in a tracing function which records the final
+The worker wraps the task in a tracing function that records the final
 state of the task. There are a number of exceptions that can be used to
 signal this function to change how it treats the return of the task.
 
@@ -1592,7 +1592,7 @@ system, like `memcached`_.
 State
 -----
 
-Since celery is a distributed system, you can't know in which process, or
+Since Celery is a distributed system, you can't know in what process, or
 on what machine the task will be executed. You can't even know if the task will
 run in a timely manner.
 
@@ -1672,7 +1672,7 @@ Let's have a look at another example:
 
 This is a Django view creating an article object in the database,
 then passing the primary key to a task. It uses the `commit_on_success`
-decorator, which will commit the transaction when the view returns, or
+decorator: it will commit the transaction when the view returns, or
 roll back if the view raises an exception.
 
 There's a race condition if the task starts executing
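One way to sidestep that race -- shown here as a sketch, not as the documented
example -- is to publish the task only after the transaction commits, e.g. with
Django's ``transaction.on_commit`` (available from Django 1.9; the model and
task names are assumed):

.. code-block:: python

    from django.db import transaction
    from django.http import HttpResponseRedirect

    def create_article(request):
        article = Article.objects.create(title=request.POST['title'])
        # the task is only sent once the row is actually visible to the worker
        transaction.on_commit(lambda: expand_abbreviations.delay(article.pk))
        return HttpResponseRedirect(article.get_absolute_url())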

+ 5 - 5
docs/userguide/workers.rst

@@ -560,7 +560,7 @@ Queues
 
 A worker instance can consume from any number of queues.
 By default it will consume from all queues defined in the
-:setting:`task_queues` setting (which if not specified defaults to the
+:setting:`task_queues` setting (if not specified, this defaults to the
 queue named ``celery``).
 
 You can specify what queues to consume from at start-up, by giving a comma
@@ -680,7 +680,7 @@ the :control:`active_queues` control command:
 
 Like all other remote control commands this also supports the
 :option:`--destination <celery inspect --destination>` argument used
-to specify which workers should reply to the request:
+to specify the workers that should reply to the request:
 
 .. code-block:: console
 
@@ -964,11 +964,11 @@ The output will include the following fields:
 
     * ``majflt``
 
-        Number of page faults which were serviced by doing I/O.
+        Number of page faults that were serviced by doing I/O.
 
     * ``minflt``
 
-        Number of page faults which were serviced without doing I/O.
+        Number of page faults that were serviced without doing I/O.
 
     * ``msgrcv``
 
@@ -1034,7 +1034,7 @@ a custom timeout:
      {'worker3.example.com': 'pong'}]
 
 :meth:`~@control.ping` also supports the `destination` argument,
-so you can specify which workers to ping:
+so you can specify the workers to ping:
 
 .. code-block:: pycon
 

+ 17 - 16
docs/whatsnew-4.0.rst

@@ -118,8 +118,8 @@ version: ``celery==4.0.0``, or a range: ``celery>=4.0,<5.0``.
 
 Dropping support for Python 2 will enable us to remove massive
 amounts of compatibility code, and going with Python 3.6 allows
-us to take advantage of typing, async/await, asyncio, ++, for which
-there're no convenient alternatives in older versions.
+us to take advantage of typing, async/await, asyncio, and similar
+concepts that have no convenient alternative in older versions.
 
 Celery 4.x will continue to work on Python 2.7, 3.4, 3.5; just as Celery 3.x
 still works on Python 2.6.
@@ -606,9 +606,9 @@ Prefork: Tasks now log from the child process
 ---------------------------------------------
 
 Logging of task success/failure now happens from the child process
-actually executing the task, which means that logging utilities
-like Sentry can get full information about tasks that fail, including
-variables in the traceback.
+executing the task.  As a result, logging utilities
+like Sentry can get full information about tasks, including
+variables in the traceback stack.
 
 Prefork: One log-file per child process
 ---------------------------------------
@@ -870,7 +870,7 @@ Brand new Cassandra result backend
 ----------------------------------
 
 A brand new Cassandra backend utilizing the new :pypi:`cassandra-driver`
-library is replacing the old result backend which was using the older
+library is replacing the old result backend that was using the older
 :pypi:`pycassa` library.
 
 See :ref:`conf-cassandra-result-backend` for more information.
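A configuration sketch for the new backend (host, keyspace, and table names are
placeholders):

.. code-block:: python

    app.conf.update(
        result_backend='cassandra://',
        cassandra_servers=['cassandra-1.example.com'],
        cassandra_keyspace='celery',
        cassandra_table='tasks',
    )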
@@ -894,23 +894,24 @@ Contributed by **Môshe van der Sterre**.
 Event Batching
 --------------
 
-Events are now buffered in the worker and sent as a list which reduces
+Events are now buffered in the worker and sent as a list, reducing
 the overhead required to send monitoring events.
 
 For authors of custom event monitors there will be no action
 required as long as you're using the Python Celery
 helpers (:class:`~@events.Receiver`) to implement your monitor.
-However, if you're manually receiving event messages you must now account
-for batched event messages which differ from normal event messages
+
+However, if you're parsing raw event messages, you must now account
+for batched event messages, as they differ from normal event messages
 in the following way:
 
-    - The routing key for a batch of event messages will be set to
-      ``<event-group>.multi`` where the only batched event group
-      is currently ``task`` (giving a routing key of ``task.multi``).
+- The routing key for a batch of event messages will be set to
+  ``<event-group>.multi`` where the only batched event group
+  is currently ``task`` (giving a routing key of ``task.multi``).
 
-    - The message body will be a serialized list-of-dictionaries instead
-      of a dictionary. Each item in the list can be regarded
-      as a normal event message body.
+- The message body will be a serialized list-of-dictionaries instead
+  of a dictionary. Each item in the list can be regarded
+  as a normal event message body.
 
 .. :sha:`03399b4d7c26fb593e61acf34f111b66b340ba4e`
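For monitors that do parse raw messages, normalizing the body first keeps one
code path for both shapes (``handle_event`` is a stand-in for your own
handler):

.. code-block:: python

    def on_event_message(body):
        # a batched body is a list of event dicts, a normal body a single dict
        events = body if isinstance(body, list) else [body]
        for event in events:
            handle_event(event)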
 
@@ -1180,7 +1181,7 @@ Programs
 - :program:`celery inspect registered`: now ignores built-in tasks.
 
 - :program:`celery purge` now takes ``-Q`` and ``-X`` options
-  used to specify which queues to include and exclude from the purge.
+  used to specify what queues to include and exclude from the purge.
 
 - New :program:`celery logtool`: Utility for filtering and parsing
   celery worker log-files