
Serial comma

Ask Solem 8 years ago
commit f2622c6de2

+ 15 - 15
CONTRIBUTING.rst

@@ -42,8 +42,8 @@ the `Pylons Code of Conduct`_.
 .. _`Ubuntu Code of Conduct`: http://www.ubuntu.com/community/conduct
 .. _`Pylons Code of Conduct`: http://docs.pylonshq.com/community/conduct.html
 
-Be considerate.
----------------
+Be considerate
+--------------
 
 Your work will be used by other people, and you in turn will depend on the
 work of others. Any decision you take will affect users and colleagues, and
@@ -53,8 +53,8 @@ the work of others. For example, changes to code, infrastructure, policy,
 documentation and translations during a release may negatively impact
 others work.
 
-Be respectful.
---------------
+Be respectful
+-------------
 
 The Celery community and its members treat one another with respect. Everyone
 can make a valuable contribution to Celery. We may not always agree, but
@@ -66,8 +66,8 @@ expect members of the Celery community to be respectful when dealing with
 other contributors as well as with people outside the Celery project and with
 users of Celery.
 
-Be collaborative.
------------------
+Be collaborative
+----------------
 
 Collaboration is central to Celery and to the larger free software community.
 We should always be open to collaboration. Your work should be done
@@ -78,11 +78,11 @@ projects informed of your ideas and progress. It many not be possible to
 get consensus from upstream, or even from your colleagues about the correct
 implementation for an idea, so don't feel obliged to have that agreement
 before you begin, but at least keep the outside world informed of your work,
-and publish your work in a way that allows outsiders to test, discuss and
+and publish your work in a way that allows outsiders to test, discuss, and
 contribute to your efforts.
 
-When you disagree, consult others.
-----------------------------------
+When you disagree, consult others
+---------------------------------
 
 Disagreements, both political and technical, happen all the time and
 the Celery community is no exception. It's important that we resolve
@@ -92,8 +92,8 @@ way, then we encourage you to make a derivative distribution or alternate
 set of packages that still build on the work we've done to utilize as common
 of a core as possible.
 
-When you're unsure, ask for help.
----------------------------------
+When you're unsure, ask for help
+--------------------------------
 
 Nobody knows everything, and nobody is expected to be perfect. Asking
 questions avoids many problems down the road, and so questions are
@@ -101,8 +101,8 @@ encouraged. Those who are asked questions should be responsive and helpful.
 However, when asking a question, care must be taken to do so in an appropriate
 forum.
 
-Step down considerately.
-------------------------
+Step down considerately
+-----------------------
 
 Developers on every project come and go and Celery is no different. When you
 leave or disengage from the project, in whole or in part, we ask that you do
@@ -187,7 +187,7 @@ the developers fix the bug.
 
 A bug could be fixed by some other improvements and fixes - it might not have an
 existing report in the bug tracker. Make sure you're using the latest releases of
-celery, billiard, kombu, amqp and vine.
+celery, billiard, kombu, amqp, and vine.
 
 5) **Collect information about the bug**.
 
@@ -211,7 +211,7 @@ spelling or other errors on the website/docs/code.
          ``pdb`` session.
       * Collect tracing data using `strace`_(Linux),
         ``dtruss`` (macOS), and ``ktrace`` (BSD),
-         `ltrace`_ and `lsof`_.
+         `ltrace`_, and `lsof`_.
 
    D) Include the output from the ``celery report`` command:
        ::

+ 7 - 7
README.rst

@@ -103,7 +103,7 @@ Celery is...
    Celery is easy to use and maintain, and does *not need configuration files*.
 
    It has an active, friendly community you can talk to for support,
-    including a `mailing-list`_ and and an IRC channel.
+    like at our `mailing-list`_, or the IRC channel.
 
    Here's one of the simplest applications you can make::
 
@@ -119,7 +119,7 @@ Celery is...
 
    Workers and clients will automatically retry in the event
    of connection loss or failure, and some brokers support
-    HA in way of *Master/Master* or *Master/Slave* replication.
+    HA in way of *Primary/Primary* or *Primary/Replica* replication.
 
 - **Fast**
 
@@ -131,7 +131,7 @@ Celery is...
 
    Almost every part of *Celery* can be extended or used on its own,
    Custom pool implementations, serializers, compression schemes, logging,
-    schedulers, consumers, producers, broker transports and much more.
+    schedulers, consumers, producers, broker transports, and much more.
 
 It supports...
 ============
@@ -206,8 +206,8 @@ database connections at ``fork``.
 Documentation
 =============
 
-The `latest documentation`_ with user guides, tutorials and API reference
-is hosted at Read The Docs.
+The `latest documentation`_ is hosted at Read The Docs, containing user guides,
+tutorials, and an API reference.
 
 .. _`latest documentation`: http://docs.celeryproject.org/en/latest/
 
@@ -341,7 +341,7 @@ With pip
 ~~~~~~~~
 
 The Celery development version also requires the development
-versions of ``kombu``, ``amqp``, ``billiard`` and ``vine``.
+versions of ``kombu``, ``amqp``, ``billiard``, and ``vine``.
 
 You can install the latest snapshot of these using the following
 pip commands:
@@ -388,7 +388,7 @@ network.
 Bug tracker
 ===========
 
-If you have any suggestions, bug reports or annoyances please report them
+If you have any suggestions, bug reports, or annoyances please report them
 to our issue tracker at https://github.com/celery/celery/issues/
 
 .. _wiki:
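
Aside, for readers skimming this diff: one of the README hunks above mentions "one of the simplest applications you can make". A minimal sketch of such an app (the broker URL and task body are assumptions, not part of this commit):

    from celery import Celery

    # assumes a RabbitMQ broker listening on localhost
    app = Celery('hello', broker='amqp://guest@localhost//')

    @app.task
    def hello():
        return 'hello world'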

+ 2 - 2
celery/states.py

@@ -48,8 +48,8 @@ ALL_STATES
 
 Set of all possible states.
 
-Misc.
------
+Misc
+----
 
 """
 from __future__ import absolute_import, unicode_literals

+ 1 - 1
celery/tests/utils/test_saferepr.py

@@ -159,7 +159,7 @@ class test_saferepr(Case):
         self.assertIn('Recursion on', res)
 
     def test_same_as_repr(self):
-        # Simple objects, small containers and classes that overwrite __repr__
+        # Simple objects, small containers, and classes that overwrite __repr__
         # For those the result should be the same as repr().
         # Ahem.  The docs don't say anything about that -- this appears to
         # be testing an implementation quirk.  Starting in Python 2.5, it's

+ 1 - 1
celery/utils/collections.py

@@ -1,5 +1,5 @@
 # -*- coding: utf-8 -*-
-"""Custom maps, sets, sequences and other data structures."""
+"""Custom maps, sets, sequences, and other data structures."""
 from __future__ import absolute_import, unicode_literals
 
 import sys

+ 1 - 1
celery/utils/timeutils.py

@@ -1,5 +1,5 @@
 # -*- coding: utf-8 -*-
-"""Utilities related to dates, times, intervals and timezones."""
+"""Utilities related to dates, times, intervals, and timezones."""
 from __future__ import absolute_import, print_function, unicode_literals
 
 import numbers

+ 1 - 1
docs/community.rst

@@ -4,7 +4,7 @@
 Community Resources
 =======================
 
-This is a list of external blog posts, tutorials and slides related
+This is a list of external blog posts, tutorials, and slides related
 to Celery. If you have a link that's missing from this list, please
 contact the mailing-list or submit a patch.
 

+ 15 - 15
docs/contributing.rst

@@ -42,8 +42,8 @@ the `Pylons Code of Conduct`_.
 .. _`Ubuntu Code of Conduct`: http://www.ubuntu.com/community/conduct
 .. _`Pylons Code of Conduct`: http://docs.pylonshq.com/community/conduct.html
 
-Be considerate.
----------------
+Be considerate
+--------------
 
 Your work will be used by other people, and you in turn will depend on the
 work of others. Any decision you take will affect users and colleagues, and
@@ -53,8 +53,8 @@ the work of others. For example, changes to code, infrastructure, policy,
 documentation and translations during a release may negatively impact
 others work.
 
-Be respectful.
---------------
+Be respectful
+-------------
 
 The Celery community and its members treat one another with respect. Everyone
 can make a valuable contribution to Celery. We may not always agree, but
@@ -66,8 +66,8 @@ expect members of the Celery community to be respectful when dealing with
 other contributors as well as with people outside the Celery project and with
 users of Celery.
 
-Be collaborative.
------------------
+Be collaborative
+----------------
 
 Collaboration is central to Celery and to the larger free software community.
 We should always be open to collaboration. Your work should be done
@@ -78,11 +78,11 @@ projects informed of your ideas and progress. It many not be possible to
 get consensus from upstream, or even from your colleagues about the correct
 implementation for an idea, so don't feel obliged to have that agreement
 before you begin, but at least keep the outside world informed of your work,
-and publish your work in a way that allows outsiders to test, discuss and
+and publish your work in a way that allows outsiders to test, discuss, and
 contribute to your efforts.
 
-When you disagree, consult others.
-----------------------------------
+When you disagree, consult others
+---------------------------------
 
 Disagreements, both political and technical, happen all the time and
 the Celery community is no exception. It's important that we resolve
@@ -92,8 +92,8 @@ way, then we encourage you to make a derivative distribution or alternate
 set of packages that still build on the work we've done to utilize as common
 of a core as possible.
 
-When you're unsure, ask for help.
---------------------------------- 
+When you're unsure, ask for help
+--------------------------------
 
 Nobody knows everything, and nobody is expected to be perfect. Asking
 questions avoids many problems down the road, and so questions are
@@ -101,8 +101,8 @@ encouraged. Those who are asked questions should be responsive and helpful.
 However, when asking a question, care must be taken to do so in an appropriate
 forum.
 
-Step down considerately.
-------------------------
+Step down considerately
+-----------------------
 
 Developers on every project come and go and Celery is no different. When you
 leave or disengage from the project, in whole or in part, we ask that you do
@@ -187,7 +187,7 @@ the developers fix the bug.
 
 A bug could be fixed by some other improvements and fixes - it might not have an
 existing report in the bug tracker. Make sure you're using the latest releases of
-celery, billiard, kombu, amqp and vine.
+celery, billiard, kombu, amqp, and vine.
 
 5) **Collect information about the bug**.
 
@@ -211,7 +211,7 @@ spelling or other errors on the website/docs/code.
          :mod:`pdb` session.
       * Collect tracing data using `strace`_(Linux),
         :command:`dtruss` (macOS), and :command:`ktrace` (BSD),
-         `ltrace`_ and `lsof`_.
+         `ltrace`_, and `lsof`_.
 
    D) Include the output from the :command:`celery report` command:
 

+ 2 - 2
docs/django/first-steps-with-django.rst

@@ -139,8 +139,8 @@ concrete app instance:
    You can find the full source code for the Django example project at:
    https://github.com/celery/celery/tree/3.1/examples/django/
 
-Using the Django ORM/Cache as a result backend.
------------------------------------------------
+Using the Django ORM/Cache as a result backend
+----------------------------------------------
 
 The [``django-celery``](https://github.com/celery/django-celery) library defines
 result backends that uses the Django ORM and Django Cache frameworks.

+ 1 - 1
docs/faq.rst

@@ -147,7 +147,7 @@ Is Celery dependent on pickle?
 **Answer:** No.
 
 Celery can support any serialization scheme and has built-in support for
-JSON, YAML, Pickle and msgpack. Also, as every task is associated with a
+JSON, YAML, Pickle, and msgpack. Also, as every task is associated with a
 content type, you can even send one task using pickle, and another using JSON.
 
 The default serialization format is pickle simply because it's

+ 1 - 1
docs/getting-started/brokers/sqs.rst

@@ -135,7 +135,7 @@ Caveats
 - SQS doesn't yet support worker remote control commands.
 
 - SQS doesn't yet support events, and so cannot be used with
-  :program:`celery events`, :program:`celerymon` or the Django Admin
+  :program:`celery events`, :program:`celerymon`, or the Django Admin
   monitor.
 
 .. _sqs-results-configuration:

+ 3 - 3
docs/getting-started/first-steps-with-celery.rst

@@ -278,7 +278,7 @@ Configuration
 Celery, like a consumer appliance, doesn't need much to be operated.
 It has an input and an output, where you must connect the input to a broker and maybe
 the output to a result backend if so wanted. But if you look closely at the back
-there's a lid revealing loads of sliders, dials and buttons: this is the configuration.
+there's a lid revealing loads of sliders, dials, and buttons: this is the configuration.
 
 The default configuration should be good enough for most uses, but there are
 many things to tweak so Celery works just the way you want it to.
@@ -424,8 +424,8 @@ Worker doesn't start: Permission Error
    make sure that they point to a file/directory that's writable and
    readable by the user starting the worker.
 
-Result backend doesn't work or tasks are always in ``PENDING`` state.
----------------------------------------------------------------------
+Result backend doesn't work or tasks are always in ``PENDING`` state
+--------------------------------------------------------------------
 
 All tasks are :state:`PENDING` by default, so the state would've been
 better named "unknown". Celery doesn't update any state when a task

+ 3 - 3
docs/getting-started/introduction.rst

@@ -117,7 +117,7 @@ Celery is…
 
        Almost every part of *Celery* can be extended or used on its own,
        Custom pool implementations, serializers, compression schemes, logging,
-        schedulers, consumers, producers, broker transports and much more.
+        schedulers, consumers, producers, broker transports, and much more.
 
 
 .. topic:: It supports
@@ -128,7 +128,7 @@ Celery is…
        - **Brokers**
 
            - :ref:`RabbitMQ <broker-rabbitmq>`, :ref:`Redis <broker-redis>`,
-            - :ref:`Amazon SQS <broker-sqs>` and more…
+            - :ref:`Amazon SQS <broker-sqs>`, and more…
 
        - **Concurrency**
 
@@ -169,7 +169,7 @@ Features
 
            Simple and complex work-flows can be composed using
           a set of powerful primitives we call the "canvas",
-            including grouping, chaining, chunking and more.
+            including grouping, chaining, chunking, and more.
 
            :ref:`Read more… <guide-canvas>`.
 

+ 2 - 2
docs/getting-started/next-steps.rst

@@ -265,7 +265,7 @@ This method is actually a star-argument shortcut to another method called
    >>> add.apply_async((2, 2))
 
 The latter enables you to specify execution options like the time to run
-(countdown), the queue it should be sent to and so on:
+(countdown), the queue it should be sent to, and so on:
 
 .. code-block:: pycon
 
@@ -652,7 +652,7 @@ power of AMQP routing, see the :ref:`Routing Guide <guide-routing>`.
 Remote Control
 ==============
 
-If you're using RabbitMQ (AMQP), Redis or Qpid as the broker then
+If you're using RabbitMQ (AMQP), Redis, or Qpid as the broker then
 you can control and inspect the worker at runtime.
 
 For example you can see what tasks the worker is currently working on:
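
Aside on the ``apply_async`` hunk above: a minimal sketch of the execution options it mentions (the ``add`` task, the queue name, and the values are illustrative assumptions):

    from proj.tasks import add  # hypothetical task module from the tutorial

    add.delay(2, 2)          # star-argument shortcut for apply_async()
    add.apply_async((2, 2))  # the same call, written out

    # apply_async() also takes execution options, for example a countdown
    # (seconds to wait before executing) and the queue to send the task to:
    add.apply_async((2, 2), countdown=10, queue='lopri')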

+ 1 - 1
docs/glossary.rst

@@ -90,7 +90,7 @@ Glossary
 
    reentrant
        describes a function that can be interrupted in the middle of
-        execution (e.g. by hardware interrupt or signal) and then safely
+        execution (e.g. by hardware interrupt or signal), and then safely
        called again later. Reentrancy isn't the same as
        :term:`idempotence <idempotent>` as the return value doesn't have to
        be the same given the same inputs, and a reentrant function may have

+ 1 - 1
docs/history/changelog-3.1.rst

@@ -1278,7 +1278,7 @@ Fixes
    that's set when a ``task-sent`` event is being received.
 
    Also, a clients logical clock isn't in sync with the cluster so
-    they live in a "time bubble". So for this reason monitors will no
+    they live in a "time bubble." So for this reason monitors will no
    longer attempt to merge with the clock of an event sent by a client,
    instead it will fake the value by using the current clock with
    a skew of -1.

+ 9 - 9
docs/history/whatsnew-3.0.rst

@@ -4,7 +4,7 @@
  What's new in Celery 3.0 (Chiastic Slide)
 ===========================================
 
-Celery is a simple, flexible and reliable distributed system to
+Celery is a simple, flexible, and reliable distributed system to
 process vast amounts of messages, while providing operations with
 the tools required to maintain such a system.
 
@@ -144,8 +144,8 @@ Commands include:
 The old programs are still available (``celeryd``, ``celerybeat``, etc),
 but you're discouraged from using them.
 
-Now depends on :pypi:`billiard`.
---------------------------------
+Now depends on :pypi:`billiard`
+-------------------------------
 
 Billiard is a fork of the multiprocessing containing
 the no-execv patch by ``sbt`` (http://bugs.python.org/issue8713),
@@ -338,8 +338,8 @@ Tasks can now have callbacks and errbacks, and dependencies are recorded
 
    Returns a flattened list of all dependencies (recursively)
 
-Redis: Priority support.
-------------------------
+Redis: Priority support
+-----------------------
 
 The message's ``priority`` field is now respected by the Redis
 transport by having multiple lists for each named queue.
@@ -375,8 +375,8 @@ should be used to prove this.
 
 Contributed by Germán M. Bravo.
 
-Redis: Now cycles queues so that consuming is fair.
----------------------------------------------------
+Redis: Now cycles queues so that consuming is fair
+--------------------------------------------------
 
 This ensures that a very busy queue won't block messages
 from other queues, and ensures that all queues have
@@ -635,8 +635,8 @@ by setting the ``shared`` argument to the ``@task`` decorator:
        return x + y
 
 
-Abstract tasks are now lazily bound.
-------------------------------------
+Abstract tasks are now lazily bound
+-----------------------------------
 
 The :class:`~celery.task.Task` class is no longer bound to an app
 by default, it will first be bound (and configured) when

+ 1 - 1
docs/includes/installation.txt

@@ -131,7 +131,7 @@ With pip
 ~~~~~~~~
 
 The Celery development version also requires the development
-versions of :pypi:`kombu`, :pypi:`amqp`, :pypi:`billiard` and :pypi:`vine`.
+versions of :pypi:`kombu`, :pypi:`amqp`, :pypi:`billiard`, and :pypi:`vine`.
 
 You can install the latest snapshot of these using the following
 pip commands:

+ 5 - 5
docs/includes/introduction.txt

@@ -95,7 +95,7 @@ Celery is…
    Celery is easy to use and maintain, and does *not need configuration files*.
 
    It has an active, friendly community you can talk to for support,
-    including a `mailing-list`_ and and an IRC channel.
+    like at our `mailing-list`_, or the IRC channel.
 
    Here's one of the simplest applications you can make::
 
@@ -111,7 +111,7 @@ Celery is…
 
    Workers and clients will automatically retry in the event
    of connection loss or failure, and some brokers support
-    HA in way of *Master/Master* or *Master/Slave* replication.
+    HA in way of *Primary/Primary* or *Primary/Replica* replication.
 
 - **Fast**
 
@@ -123,7 +123,7 @@ Celery is…
 
    Almost every part of *Celery* can be extended or used on its own,
    Custom pool implementations, serializers, compression schemes, logging,
-    schedulers, consumers, producers, broker transports and much more.
+    schedulers, consumers, producers, broker transports, and much more.
 
 It supports…
 ============
@@ -198,7 +198,7 @@ database connections at ``fork``.
 Documentation
 =============
 
-The `latest documentation`_ with user guides, tutorials and API reference
-is hosted at Read The Docs.
+The `latest documentation`_ is hosted at Read The Docs, containing user guides,
+tutorials, and an API reference.
 
 .. _`latest documentation`: http://docs.celeryproject.org/en/latest/

+ 1 - 1
docs/includes/resources.txt

@@ -28,7 +28,7 @@ network.
 Bug tracker
 ===========
 
-If you have any suggestions, bug reports or annoyances please report them
+If you have any suggestions, bug reports, or annoyances please report them
 to our issue tracker at https://github.com/celery/celery/issues/
 
 .. _wiki:

+ 1 - 1
docs/index.rst

@@ -2,7 +2,7 @@
  Celery - Distributed Task Queue
 =================================
 
-Celery is a simple, flexible and reliable distributed system to
+Celery is a simple, flexible, and reliable distributed system to
 process vast amounts of messages, while providing operations with
 the tools required to maintain such a system.
 

+ 2 - 2
docs/internals/guide.rst

@@ -232,7 +232,7 @@ Module Overview
 - celery.loaders
 
    Every app must have a loader. The loader decides how configuration
-    is read, what happens when the worker starts, when a task starts and ends,
+    is read; what happens when the worker starts; when a task starts and ends;
    and so on.
 
    The loaders included are:
@@ -246,7 +246,7 @@ Module Overview
            "single-mode" uses this loader by default.
 
    Extension loaders also exist, like :pypi:`django-celery`,
-    :pypi:`celery-pylons` and so on.
+    :pypi:`celery-pylons`, and so on.
 
 - celery.worker
 

+ 2 - 2
docs/sec/CELERYSA-0001.txt

@@ -43,8 +43,8 @@ affected.
 Systems affected
 ================
 
-Users of Celery versions 2.1, 2.2, 2.3, 2.4 except the recently
-released 2.2.8, 2.3.4 and 2.4.4, daemonizing the Celery programs
+Users of Celery versions 2.1, 2.2, 2.3, 2.4; except the recently
+released 2.2.8, 2.3.4, and 2.4.4, daemonizing the Celery programs
 as the root user, using either:
    1) the --uid or --gid arguments, or
    2) the provided generic init-scripts with the environment variables

+ 3 - 3
docs/userguide/application.rst

@@ -12,7 +12,7 @@ The Celery library must be instantiated before use, this instance
 is called an application (or *app* for short).
 
 The application is thread-safe so that multiple Celery applications
-with different configurations, components and tasks can co-exist in the
+with different configurations, components, and tasks can co-exist in the
 same process space.
 
 Let's create one now:
@@ -357,7 +357,7 @@ Finalizing the object will:
 
 .. _default-app:
 
-.. topic:: The "default app".
+.. topic:: The "default app"
 
    Celery didn't always have applications, it used to be that
    there was only a module-based API, and for backwards compatibility
@@ -517,7 +517,7 @@ class: :class:`celery.Task`.
 
 The neutral base class is special because it's not bound to any specific app
 yet. Once a task is bound to an app it'll read configuration to set default
-values and so on.
+values, and so on.
 
 To realize a base class you need to create a task using the :meth:`@task`
 decorator:

+ 5 - 5
docs/userguide/canvas.rst

@@ -126,7 +126,7 @@ Or you can call it directly in the current process:
    >>> add.s(2, 2)()
    4
 
-Specifying additional args, kwargs or options to ``apply_async``/``delay``
+Specifying additional args, kwargs, or options to ``apply_async``/``delay``
 creates partials:
 
 - Any arguments added will be prepended to the args in the signature:
@@ -169,7 +169,7 @@ Immutability
 
 .. versionadded:: 3.0
 
-Partials are meant to be used with callbacks, any tasks linked or chord
+Partials are meant to be used with callbacks, any tasks linked, or chord
 callbacks will be applied with the result of the parent task.
 Sometimes you want to specify a callback that doesn't take
 additional arguments, and in that case you can set the signature
@@ -764,7 +764,7 @@ Chords
 
    Tasks used within a chord must *not* ignore their results. If the result
    backend is disabled for *any* task (header or body) in your chord you
-    should read ":ref:`chord-important-notes`".
+    should read ":ref:`chord-important-notes`."
 
 
 A chord is a task that only executes after all of the tasks in a group have
@@ -1063,5 +1063,5 @@ of one:
 
    >>> group.skew(start=1, stop=10)()
 
-which means that the first task will have a countdown of 1, the second
-a countdown of 2 and so on.
+which means that the first task will have a countdown of one second, the second
+task a countdown of two seconds, and so on.
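
Aside on the canvas hunks above, a short sketch of partial signatures and ``skew()`` (task and module names are assumptions):

    from celery import group
    from proj.tasks import add  # hypothetical task

    # Arguments added when calling a partial are prepended to the
    # arguments already in the signature:
    partial = add.s(2)   # incomplete signature
    partial.delay(4)     # executes as add(4, 2)

    # skew() spreads countdowns over the tasks of a group: the first task
    # gets a countdown of one second, the second two seconds, and so on.
    g = group(add.s(i, i) for i in range(10))
    g.skew(start=1, stop=10)()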

+ 4 - 4
docs/userguide/configuration.rst

@@ -168,7 +168,7 @@ General settings
 ``accept_content``
 ~~~~~~~~~~~~~~~~~~
 
-Default: ``{'json'}``  (set, list or tuple).
+Default: ``{'json'}``  (set, list, or tuple).
 
 A white-list of content-types/serializers to allow.
 
@@ -312,7 +312,7 @@ Protocol 2 is supported by 3.1.24 and 4.x+.
 Default: ``"json"`` (since 4.0, earlier: pickle).
 
 A string identifying the default serialization method to use. Can be
-`json` (default), `pickle`, `yaml`, `msgpack` or any custom serialization
+`json` (default), `pickle`, `yaml`, `msgpack`, or any custom serialization
 methods that have been registered with :mod:`kombu.serialization.registry`.
 
 .. seealso::
@@ -1264,7 +1264,7 @@ the backend.
 
 If you're trying Celery on a single system you can simply use the backend
 without any further configuration. For larger clusters you could use NFS,
-`GlusterFS`_, CIFS, `HDFS`_ (using FUSE) or any other file-system.
+`GlusterFS`_, CIFS, `HDFS`_ (using FUSE), or any other file-system.
 
 .. _`GlusterFS`: http://www.gluster.org/
 .. _`HDFS`: http://hadoop.apache.org/
@@ -2110,7 +2110,7 @@ Default: :const:`WARNING`.
 
 The log level output to `stdout` and `stderr` is logged as.
 Can be one of :const:`DEBUG`, :const:`INFO`, :const:`WARNING`,
-:const:`ERROR` or :const:`CRITICAL`.
+:const:`ERROR`, or :const:`CRITICAL`.
 
 .. _conf-security:
 

+ 4 - 4
docs/userguide/extending.rst

@@ -581,7 +581,7 @@ It can be added both as a worker and consumer bootstep:
 
        def __init__(self, parent, **kwargs):
            # here we can prepare the Worker/Consumer object
-            # in any way we want, set attribute defaults and so on.
+            # in any way we want, set attribute defaults, and so on.
            print('{0!r} is in init'.format(parent))
 
        def start(self, parent):
@@ -695,7 +695,7 @@ Adding new command-line options
 Command-specific options
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-You can add additional command-line options to the ``worker``, ``beat`` and
+You can add additional command-line options to the ``worker``, ``beat``, and
 ``events`` commands by modifying the :attr:`~@user_options` attribute of the
 application instance.
 
@@ -837,8 +837,8 @@ Worker API
 ==========
 
 
-:class:`~kombu.async.Hub` - The workers async event loop.
----------------------------------------------------------
+:class:`~kombu.async.Hub` - The workers async event loop
+--------------------------------------------------------
 :supported transports: amqp, redis
 
 .. versionadded:: 3.0

+ 6 - 6
docs/userguide/periodic-tasks.rst

@@ -180,7 +180,7 @@ Available Fields
 * `relative`
 
    By default :class:`~datetime.timedelta` schedules are scheduled
-    "by the clock". This means the frequency is rounded to the nearest
+    "by the clock." This means the frequency is rounded to the nearest
    second, minute, hour or day depending on the period of the
    :class:`~datetime.timedelta`.
 
@@ -236,7 +236,7 @@ Some examples:
 |         ``day_of_week='sun')``          |                                            |
 +-----------------------------------------+--------------------------------------------+
 | ``crontab(minute='*/10',``              | Execute every ten minutes, but only        |
-|         ``hour='3,17,22',``             | between 3-4 am, 5-6 pm and 10-11 pm on     |
+|         ``hour='3,17,22',``             | between 3-4 am, 5-6 pm, and 10-11 pm on    |
 |         ``day_of_week='thu,fri')``      | Thursdays or Fridays.                      |
 +-----------------------------------------+--------------------------------------------+
 | ``crontab(minute=0, hour='*/2,*/3')``   | Execute every even hour, and every hour    |
@@ -365,9 +365,9 @@ when the sun doesn't rise. The one exception is ``solar_noon``, which is
 formally defined as the moment the sun transits the celestial meridian,
 and will occur every day even if the sun is below the horizon.
 
-Twilight is defined as the period between dawn and sunrise, and between
+Twilight is defined as the period between dawn and sunrise; and between
 sunset and dusk. You can schedule an event according to "twilight"
-depending on your definition of twilight (civil, nautical or astronomical),
+depending on your definition of twilight (civil, nautical, or astronomical),
 and whether you want the event to take place at the beginning or end
 of twilight, using the appropriate event from the list above.
 
@@ -426,5 +426,5 @@ the Django database:
 
    $ celery -A proj beat -S djcelery.schedulers.DatabaseScheduler
 
-Using :pypi:`django-celery`'s scheduler you can add, modify and remove periodic
-tasks from the Django Admin.
+Using :pypi:`django-celery`'s scheduler you can add, modify, and remove
+periodic tasks from the Django Admin.
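
Aside on the crontab table above: the row touched by this hunk corresponds to a beat schedule roughly like the following sketch (app, task, and schedule names are assumptions):

    from celery import Celery
    from celery.schedules import crontab

    app = Celery('proj', broker='amqp://')  # broker URL is a placeholder

    app.conf.beat_schedule = {
        'thu-fri-evening-job': {
            'task': 'proj.tasks.cleanup',  # hypothetical task
            # every ten minutes, but only 3-4 am, 5-6 pm, and 10-11 pm
            # on Thursdays or Fridays:
            'schedule': crontab(minute='*/10', hour='3,17,22',
                                day_of_week='thu,fri'),
        },
    }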

+ 8 - 8
docs/userguide/routing.rst

@@ -273,8 +273,8 @@ This is an example task message represented as a Python dictionary:
 
 .. _amqp-producers-consumers-brokers:
 
-Producers, consumers and brokers
---------------------------------
+Producers, consumers, and brokers
+---------------------------------
 
 The client sending messages is typically called a *publisher*, or
 a *producer*, while the entity receiving messages is called
@@ -287,7 +287,7 @@ You're likely to see these terms used a lot in AMQP related material.
 
 .. _amqp-exchanges-queues-keys:
 
-Exchanges, queues and routing keys.
+Exchanges, queues, and routing keys
 -----------------------------------
 
 1. Messages are sent to exchanges.
@@ -308,7 +308,7 @@ Celery automatically creates the entities necessary for the queues in
 setting is set to :const:`False`).
 
 Here's an example queue configuration with three queues;
-One for video, one for images and one default queue for everything else:
+One for video, one for images, and one default queue for everything else:
 
 .. code-block:: python
 
@@ -354,9 +354,9 @@ Topic exchanges matches routing keys using dot-separated words, and the
 wild-card characters: ``*`` (matches a single word), and ``#`` (matches
 zero or more words).
 
-With routing keys like ``usa.news``, ``usa.weather``, ``norway.news`` and
+With routing keys like ``usa.news``, ``usa.weather``, ``norway.news``, and
 ``norway.weather``, bindings could be ``*.news`` (all news), ``usa.#`` (all
-items in the USA) or ``usa.weather`` (all USA weather items).
+items in the USA), or ``usa.weather`` (all USA weather items).
 
 .. _amqp-api:
 
@@ -528,7 +528,7 @@ Defining queues
 In Celery available queues are defined by the :setting:`task_queues` setting.
 
 Here's an example queue configuration with three queues;
-One for video, one for images and one default queue for everything else:
+One for video, one for images, and one default queue for everything else:
 
 .. code-block:: python
 
@@ -547,7 +547,7 @@ One for video, one for images and one default queue for everything else:
 Here, the :setting:`task_default_queue` will be used to route tasks that
 doesn't have an explicit route.
 
-The default exchange, exchange type and routing key will be used as the
+The default exchange, exchange type, and routing key will be used as the
 default routing values for tasks, and as the default values for entries
 in :setting:`task_queues`.
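
Aside on the routing hunks above, which reference a three-queue example configuration: a sketch of what such a configuration can look like (queue, exchange, and routing-key names are assumptions):

    from celery import Celery
    from kombu import Exchange, Queue

    app = Celery('proj', broker='amqp://')  # broker URL is a placeholder

    app.conf.task_default_queue = 'default'
    app.conf.task_queues = (
        Queue('default', Exchange('default'), routing_key='default'),
        Queue('videos', Exchange('media'), routing_key='media.video'),
        Queue('images', Exchange('media'), routing_key='media.image'),
    )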
 
 

+ 4 - 4
docs/userguide/security.rst

@@ -65,7 +65,7 @@ Worker
 
 The default permissions of tasks running inside a worker are the same ones as
 the privileges of the worker itself. This applies to resources such as
-memory, file-systems and devices.
+memory, file-systems, and devices.
 
 An exception to this rule is when using the multiprocessing based task pool,
 which is currently the default. In this case, the task will have access to
@@ -77,7 +77,7 @@ Limiting access to memory contents can be done by launching every task
 in a subprocess (:func:`fork` + :func:`execve`).
 
 Limiting file-system and device access can be accomplished by using
-`chroot`_, `jail`_, `sandboxing`_, virtual machines or other
+`chroot`_, `jail`_, `sandboxing`_, virtual machines, or other
 mechanisms as enabled by the platform or additional software.
 
 Note also that any task executed in the worker will have the
@@ -153,7 +153,7 @@ setting to use the `auth` serializer.
 Also required is configuring the
 paths used to locate private keys and certificates on the file-system:
 the :setting:`security_key`,
-:setting:`security_certificate` and :setting:`security_cert_store`
+:setting:`security_certificate`, and :setting:`security_cert_store`
 settings respectively.
 With these configured it's also necessary to call the
 :func:`celery.setup_security` function. Note that this will also
@@ -220,7 +220,7 @@ open source implementations, used to keep
 cryptographic hashes of files in the file-system, so that administrators
 can be alerted when they change. This way when the damage is done and your
 system has been compromised you can tell exactly what files intruders
-have changed  (password files, logs, back-doors, root-kits and so on).
+have changed  (password files, logs, back-doors, root-kits, and so on).
 Often this is the only way you'll be able to detect an intrusion.
 
 Some open source implementations include:

+ 1 - 1
docs/userguide/signals.rst

@@ -110,7 +110,7 @@ Provides arguments:
 * ``declare``
 
    List of entities (:class:`~kombu.Exchange`,
-    :class:`~kombu.Queue` or :class:`~kombu.binding` to declare before
+    :class:`~kombu.Queue`, or :class:`~kombu.binding` to declare before
    publishing the message. Can be modified.
 
 * ``retry_policy``

+ 1 - 1
docs/userguide/tasks.rst

@@ -343,7 +343,7 @@ Consider you have many tasks within many different modules::
                   /tasks.py
 
 Using the default automatic naming, each task will have a generated name
-like `moduleA.tasks.taskA`, `moduleA.tasks.taskB`, `moduleB.tasks.test`
+like `moduleA.tasks.taskA`, `moduleA.tasks.taskB`, `moduleB.tasks.test`,
 and so on. You may want to get rid of having `tasks` in all task names.
 As pointed above, you can explicitly give names for all tasks, or you
 can change the automatic naming behavior by overriding

+ 3 - 3
docs/userguide/workers.rst

@@ -144,7 +144,7 @@ Variables in file paths
 =======================
 
 The file path arguments for :option:`--logfile <celery worker --logfile>`,
-:option:`--pidfile <celery worker --pidfile>` and
+:option:`--pidfile <celery worker --pidfile>`, and
 :option:`--statedb <celery worker --statedb>` can contain variables that the
 worker will expand:
 
@@ -262,13 +262,13 @@ to the number of destination hosts.
 
 .. _worker-broadcast-fun:
 
-The :meth:`~@control.broadcast` function.
+The :meth:`~@control.broadcast` function
 ----------------------------------------------------
 
 This is the client function used to send commands to the workers.
 Some remote control commands also have higher-level interfaces using
 :meth:`~@control.broadcast` in the background, like
-:meth:`~@control.rate_limit` and :meth:`~@control.ping`.
+:meth:`~@control.rate_limit`, and :meth:`~@control.ping`.
 
 Sending the :control:`rate_limit` command and keyword arguments:
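
The hunk above stops just before the example it introduces; a hedged sketch of sending the rate_limit command, both through the high-level helper and through broadcast() directly (task name and rate are assumptions):

    from proj.celery import app  # hypothetical app instance

    # high-level helper
    app.control.rate_limit('proj.tasks.add', '10/m')

    # the same command sent with broadcast() directly
    app.control.broadcast('rate_limit', arguments={
        'task_name': 'proj.tasks.add',
        'rate_limit': '10/m',
    })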
 
 

+ 2 - 2
docs/whatsnew-3.1.rst

@@ -12,7 +12,7 @@
    releases (0.0.x), while older series are archived under the :ref:`history`
    section.
 
-Celery is a simple, flexible and reliable distributed system to
+Celery is a simple, flexible, and reliable distributed system to
 process vast amounts of messages, while providing operations with
 the tools required to maintain such a system.
 
@@ -28,7 +28,7 @@ To read more about Celery you should go read the :ref:`introduction <intro>`.
 While this version is backward compatible with previous versions
 it's important that you read the following section.
 
-This version is officially supported on CPython 2.6, 2.7 and 3.3,
+This version is officially supported on CPython 2.6, 2.7, and 3.3,
 and also supported on PyPy.
 
 .. _`website`: http://celeryproject.org/

+ 37 - 37
docs/whatsnew-4.0.rst

@@ -12,7 +12,7 @@
     releases (0.0.x), while older series are archived under the :ref:`history`
     releases (0.0.x), while older series are archived under the :ref:`history`
     section.
     section.
 
 
-Celery is a simple, flexible and reliable distributed system to
+Celery is a simple, flexible, and reliable distributed system to
 process vast amounts of messages, while providing operations with
 process vast amounts of messages, while providing operations with
 the tools required to maintain such a system.
 the tools required to maintain such a system.
 
 
@@ -28,7 +28,7 @@ To read more about Celery you should go read the :ref:`introduction <intro>`.
 While this version is backward compatible with previous versions
 While this version is backward compatible with previous versions
 it's important that you read the following section.
 it's important that you read the following section.
 
 
-This version is officially supported on CPython 2.7, 3.4 and 3.5.
+This version is officially supported on CPython 2.7, 3.4, and 3.5.
 and also supported on PyPy.
 and also supported on PyPy.
 
 
 .. _`website`: http://celeryproject.org/
 .. _`website`: http://celeryproject.org/
@@ -457,8 +457,8 @@ The Django integration :ref:`example in the documentation
 This also ensures comaptibility with the new, ehm, ``appconfig`` stuff
 This also ensures comaptibility with the new, ehm, ``appconfig`` stuff
 introduced in recent Django versions.
 introduced in recent Django versions.
 
 
-Worker direct queues no longer use auto-delete.
------------------------------------------------
+Worker direct queues no longer use auto-delete
+----------------------------------------------
 
 
 Workers/clients running 4.0 will no longer be able to send
 Workers/clients running 4.0 will no longer be able to send
 worker direct messages to workers running older versions, and vice versa.
 worker direct messages to workers running older versions, and vice versa.
@@ -624,8 +624,8 @@ log file can cause corruption.
 You're encouraged to upgrade your init-scripts and
 You're encouraged to upgrade your init-scripts and
 :program:`celery multi` arguments to use this new option.
 :program:`celery multi` arguments to use this new option.
 
 
-Configure broker URL for read/write separately.
------------------------------------------------
+Configure broker URL for read/write separately
+----------------------------------------------
 
 
 New :setting:`broker_read_url` and :setting:`broker_write_url` settings
 New :setting:`broker_read_url` and :setting:`broker_write_url` settings
 have been added so that separate broker URLs can be provided
 have been added so that separate broker URLs can be provided
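
As a rough sketch of how the pair can be used (the app instance and broker hosts below are placeholders, not taken from these notes)::

    from celery import Celery

    app = Celery('tasks')

    # Consume task messages from one broker node and publish to another.
    app.conf.broker_read_url = 'amqp://user:pass@broker-reader.example.com:5672//'
    app.conf.broker_write_url = 'amqp://user:pass@broker-writer.example.com:5672//'
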
@@ -718,8 +718,8 @@ to fix some long outstanding issues.
 - Fixed issue where ``group | task`` wasn't upgrading correctly
 - Fixed issue where ``group | task`` wasn't upgrading correctly
   to chord (Issue #2922).
   to chord (Issue #2922).
 
 
-Amazon SQS transport now officially supported.
-----------------------------------------------
+Amazon SQS transport now officially supported
+---------------------------------------------
 
 
 The SQS broker transport has been rewritten to use async I/O and as such
 The SQS broker transport has been rewritten to use async I/O and as such
 joins RabbitMQ and Redis as officially supported transports.
 joins RabbitMQ and Redis as officially supported transports.
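
A hedged sketch of pointing the broker at SQS (the credentials and region below are placeholders, and secrets containing ``/`` need URL-encoding)::

    from celery import Celery

    app = Celery('tasks')

    # The sqs:// transport takes AWS credentials in the URL.
    app.conf.broker_url = 'sqs://AKIAEXAMPLEKEY:example-secret@'
    app.conf.broker_transport_options = {'region': 'us-east-1'}
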
@@ -729,13 +729,13 @@ and closes several issues related to using SQS as a broker.
 
 
 This work was sponsored by Nextdoor.
 This work was sponsored by Nextdoor.
 
 
-Apache QPid transport now officially supported.
------------------------------------------------
+Apache QPid transport now officially supported
+----------------------------------------------
 
 
 Contributed by **Brian Bouterse**.
 Contributed by **Brian Bouterse**.
 
 
-Schedule tasks based on sunrise, sunset, dawn and dusk.
--------------------------------------------------------
+Schedule tasks based on sunrise, sunset, dawn, and dusk
+-------------------------------------------------------
 
 
 See :ref:`beat-solar` for more information.
 See :ref:`beat-solar` for more information.
 
 
@@ -774,8 +774,8 @@ See :ref:`routing-options-rabbitmq-priorities` for more information.
 
 
 Contributed by **Gerald Manipon**.
 Contributed by **Gerald Manipon**.
 
 
-Prefork: Limit child process resident memory size.
---------------------------------------------------
+Prefork: Limit child process resident memory size
+-------------------------------------------------
 .. :sha:`5cae0e754128750a893524dcba4ae030c414de33`
 .. :sha:`5cae0e754128750a893524dcba4ae030c414de33`
 
 
 You can now limit the maximum amount of memory allocated per prefork
 You can now limit the maximum amount of memory allocated per prefork
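
As a hedged sketch of the corresponding setting (the value is an arbitrary example, expressed in kilobytes; the matching command-line flag appears to be ``--max-memory-per-child``)::

    from celery import Celery

    app = Celery('tasks')

    # Replace a prefork child after its resident memory exceeds ~12 MB;
    # the replacement happens once the currently executing task finishes.
    app.conf.worker_max_memory_per_child = 12000
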
@@ -795,8 +795,8 @@ Contributed by **Dave Smith**.
 Redis: Result backend optimizations
 Redis: Result backend optimizations
 -----------------------------------
 -----------------------------------
 
 
-RPC is now using pub/sub for streaming task results.
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+RPC is now using pub/sub for streaming task results
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 
 Calling ``result.get()`` when using the Redis result backend
 Calling ``result.get()`` when using the Redis result backend
 used to be extremely expensive as it was using polling to wait
 used to be extremely expensive as it was using polling to wait
@@ -810,8 +810,8 @@ task round-trip times.
 
 
 Contributed by **Yaroslav Zhavoronkov** and **Ask Solem**.
 Contributed by **Yaroslav Zhavoronkov** and **Ask Solem**.
 
 
-New optimized chord join implementation.
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+New optimized chord join implementation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 
 This was an experimental feature introduced in Celery 3.1,
 This was an experimental feature introduced in Celery 3.1,
 that could only be enabled by adding ``?new_join=1`` to the
 that could only be enabled by adding ``?new_join=1`` to the
@@ -823,22 +823,22 @@ to be considered stable and enabled by default.
 The new implementation greatly reduces the overhead of chords,
 The new implementation greatly reduces the overhead of chords,
 and especially with larger chords the performance benefit can be massive.
 and especially with larger chords the performance benefit can be massive.
 
 
-New Riak result backend Introduced.
------------------------------------
+New Riak result backend introduced
+----------------------------------
 
 
 See :ref:`conf-riak-result-backend` for more information.
 See :ref:`conf-riak-result-backend` for more information.
 
 
 Contributed by **Gilles Dartiguelongue**, **Alman One**, and **NoKriK**.
 Contributed by **Gilles Dartiguelongue**, **Alman One**, and **NoKriK**.
 
 
-New CouchDB result backend introduced.
---------------------------------------
+New CouchDB result backend introduced
+-------------------------------------
 
 
 See :ref:`conf-couchdb-result-backend` for more information.
 See :ref:`conf-couchdb-result-backend` for more information.
 
 
 Contributed by **Nathan Van Gheem**.
 Contributed by **Nathan Van Gheem**.
 
 
-New Consul result backend introduced.
--------------------------------------
+New Consul result backend introduced
+------------------------------------
 
 
 Add support for Consul as a backend using the Key/Value store of Consul.
 Add support for Consul as a backend using the Key/Value store of Consul.
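
A hedged sketch of selecting it (the host and port are placeholders following the ``consul://host:port/`` URL form)::

    from celery import Celery

    app = Celery('tasks')

    # Store task results in Consul's Key/Value store.
    app.conf.result_backend = 'consul://127.0.0.1:8500/'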
 
 
@@ -866,8 +866,8 @@ That installs the required package to talk to Consul's HTTP API from Python.
 
 
 Contributed by **Wido den Hollander**.
 Contributed by **Wido den Hollander**.
 
 
-Brand new Cassandra result backend.
------------------------------------
+Brand new Cassandra result backend
+----------------------------------
 
 
 A brand new Cassandra backend utilizing the new :pypi:`cassandra-driver`
 A brand new Cassandra backend utilizing the new :pypi:`cassandra-driver`
 library is replacing the old result backend which was using the older
 library is replacing the old result backend which was using the older
@@ -877,15 +877,15 @@ See :ref:`conf-cassandra-result-backend` for more information.
 
 
 .. # XXX What changed?
 .. # XXX What changed?
 
 
-New Elasticsearch result backend introduced.
---------------------------------------------
+New Elasticsearch result backend introduced
+-------------------------------------------
 
 
 See :ref:`conf-elasticsearch-result-backend` for more information.
 See :ref:`conf-elasticsearch-result-backend` for more information.
 
 
 Contributed by **Ahmet Demir**.
 Contributed by **Ahmet Demir**.
 
 
-New File-system result backend introduced.
-------------------------------------------
+New File-system result backend introduced
+-----------------------------------------
 
 
 See :ref:`conf-filesystem-result-backend` for more information.
 See :ref:`conf-filesystem-result-backend` for more information.
 
 
@@ -914,8 +914,8 @@ in the following way:
 
 
 .. :sha:`03399b4d7c26fb593e61acf34f111b66b340ba4e`
 .. :sha:`03399b4d7c26fb593e61acf34f111b66b340ba4e`
 
 
-Task.replace
-------------
+``Task.replace``
+----------------
 
 
 ``Task.replace`` has changed, and ``Task.replace_in_chord`` has been removed.
 ``Task.replace`` has changed, and ``Task.replace_in_chord`` has been removed.
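
A hedged sketch of the consolidated API (the app and task names are hypothetical, not taken from this changelog)::

    from celery import Celery, group

    app = Celery('tasks')

    @app.task
    def fetch(url):
        return len(url)

    @app.task(bind=True)
    def fetch_all(self, urls):
        # Substitute a group for the current task; with the 4.0 API this
        # also works when the task runs inside a chord, so a separate
        # replace_in_chord call is no longer needed.
        return self.replace(group(fetch.s(url) for url in urls))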
 
 
@@ -978,8 +978,8 @@ eventlet/gevent drainers, promises, BLA BLA
 
 
 Closed issue #2529.
 Closed issue #2529.
 
 
-RPC Result Backend matured.
----------------------------
+RPC Result Backend matured
+--------------------------
 
 
 Lots of bugs in the previously experimental RPC result backend have been fixed
 Lots of bugs in the previously experimental RPC result backend have been fixed
 and we now consider it production ready.
 and we now consider it production ready.
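
A minimal sketch of opting in to it (the broker URL is a placeholder; results are then delivered back over the broker instead of a separate result store)::

    from celery import Celery

    app = Celery('tasks',
                 broker='amqp://guest@localhost//',
                 backend='rpc://')
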
@@ -1673,9 +1673,9 @@ Logging Settings
 ``CELERYD_LOG_FILE``                   :option:`celery worker --logfile`
 ``CELERYD_LOG_FILE``                   :option:`celery worker --logfile`
 ``CELERYBEAT_LOG_LEVEL``               :option:`celery beat --loglevel`
 ``CELERYBEAT_LOG_LEVEL``               :option:`celery beat --loglevel`
 ``CELERYBEAT_LOG_FILE``                :option:`celery beat --logfile`
 ``CELERYBEAT_LOG_FILE``                :option:`celery beat --logfile`
-``CELERYMON_LOG_LEVEL``                celerymon is deprecated, use flower.
-``CELERYMON_LOG_FILE``                 celerymon is deprecated, use flower.
-``CELERYMON_LOG_FORMAT``               celerymon is deprecated, use flower.
+``CELERYMON_LOG_LEVEL``                celerymon is deprecated, use flower
+``CELERYMON_LOG_FILE``                 celerymon is deprecated, use flower
+``CELERYMON_LOG_FORMAT``               celerymon is deprecated, use flower
 =====================================  =====================================
 =====================================  =====================================
 
 
 Task Settings
 Task Settings

+ 3 - 3
extra/generic-init.d/celerybeat

@@ -56,7 +56,7 @@ _config_sanity() {
         echo "Error: Config script '$path' must be owned by root!"
         echo "Error: Config script '$path' must be owned by root!"
         echo
         echo
         echo "Resolution:"
         echo "Resolution:"
-        echo "Review the file carefully and make sure it hasn't been "
+        echo "Review the file carefully, and make sure it hasn't been "
         echo "modified with mailicious intent.  When sure the "
         echo "modified with mailicious intent.  When sure the "
         echo "script is safe to execute with superuser privileges "
         echo "script is safe to execute with superuser privileges "
         echo "you can change ownership of the script:"
         echo "you can change ownership of the script:"
@@ -68,7 +68,7 @@ _config_sanity() {
         echo "Error: Config script '$path' cannot be writable by others!"
         echo "Error: Config script '$path' cannot be writable by others!"
         echo
         echo
         echo "Resolution:"
         echo "Resolution:"
-        echo "Review the file carefully and make sure it hasn't been "
+        echo "Review the file carefully, and make sure it hasn't been "
         echo "modified with malicious intent.  When sure the "
         echo "modified with malicious intent.  When sure the "
         echo "script is safe to execute with superuser privileges "
         echo "script is safe to execute with superuser privileges "
         echo "you can change the scripts permissions:"
         echo "you can change the scripts permissions:"
@@ -79,7 +79,7 @@ _config_sanity() {
         echo "Error: Config script '$path' cannot be writable by group!"
         echo "Error: Config script '$path' cannot be writable by group!"
         echo
         echo
         echo "Resolution:"
         echo "Resolution:"
-        echo "Review the file carefully and make sure it hasn't been "
+        echo "Review the file carefully, and make sure it hasn't been "
         echo "modified with malicious intent.  When sure the "
         echo "modified with malicious intent.  When sure the "
         echo "script is safe to execute with superuser privileges "
         echo "script is safe to execute with superuser privileges "
         echo "you can change the scripts permissions:"
         echo "you can change the scripts permissions:"

+ 3 - 3
extra/generic-init.d/celeryd

@@ -77,7 +77,7 @@ _config_sanity() {
         echo "Error: Config script '$path' must be owned by root!"
         echo "Error: Config script '$path' must be owned by root!"
         echo
         echo
         echo "Resolution:"
         echo "Resolution:"
-        echo "Review the file carefully and make sure it hasn't been "
+        echo "Review the file carefully, and make sure it hasn't been "
         echo "modified with mailicious intent.  When sure the "
         echo "modified with mailicious intent.  When sure the "
         echo "script is safe to execute with superuser privileges "
         echo "script is safe to execute with superuser privileges "
         echo "you can change ownership of the script:"
         echo "you can change ownership of the script:"
@@ -89,7 +89,7 @@ _config_sanity() {
         echo "Error: Config script '$path' cannot be writable by others!"
         echo "Error: Config script '$path' cannot be writable by others!"
         echo
         echo
         echo "Resolution:"
         echo "Resolution:"
-        echo "Review the file carefully and make sure it hasn't been "
+        echo "Review the file carefully, and make sure it hasn't been "
         echo "modified with malicious intent.  When sure the "
         echo "modified with malicious intent.  When sure the "
         echo "script is safe to execute with superuser privileges "
         echo "script is safe to execute with superuser privileges "
         echo "you can change the scripts permissions:"
         echo "you can change the scripts permissions:"
@@ -100,7 +100,7 @@ _config_sanity() {
         echo "Error: Config script '$path' cannot be writable by group!"
         echo "Error: Config script '$path' cannot be writable by group!"
         echo
         echo
         echo "Resolution:"
         echo "Resolution:"
-        echo "Review the file carefully and make sure it hasn't been "
+        echo "Review the file carefully, and make sure it hasn't been "
         echo "modified with malicious intent.  When sure the "
         echo "modified with malicious intent.  When sure the "
         echo "script is safe to execute with superuser privileges "
         echo "script is safe to execute with superuser privileges "
         echo "you can change the scripts permissions:"
         echo "you can change the scripts permissions:"

+ 3 - 3
funtests/setup.py

@@ -24,9 +24,9 @@ class no_install(install):
     def run(self, *args, **kwargs):
     def run(self, *args, **kwargs):
         import sys
         import sys
         sys.stderr.write("""
         sys.stderr.write("""
-------------------------------------------------------
-The Celery functional test suite cannot be installed.
-------------------------------------------------------
+-----------------------------------------------------
+The Celery functional test suite cannot be installed
+-----------------------------------------------------
 
 
 
 
 But you can execute the tests by running the command:
 But you can execute the tests by running the command:

+ 1 - 1
setup.py

@@ -17,7 +17,7 @@ except (AttributeError, ImportError):
 
 
 E_UNSUPPORTED_PYTHON = """
 E_UNSUPPORTED_PYTHON = """
 ----------------------------------------
 ----------------------------------------
- Celery 4.0 requires %s %s or later!
+ Celery 4.0 requires %s %s or later
 ----------------------------------------
 ----------------------------------------
 
 
 - For CPython 2.6, PyPy 1.x, Jython 2.6, CPython 3.2->3.3; use Celery 3.1:
 - For CPython 2.6, PyPy 1.x, Jython 2.6, CPython 3.2->3.3; use Celery 3.1: