Browse source

One space after period for proportional fonts + contractions

Ask Solem 9 years ago
parent
commit
6027e445ec
100 changed files with 933 additions and 938 deletions
+ 50 - 50  CONTRIBUTING.rst
+ 12 - 12  README.rst
+ 1 - 1  celery/__init__.py
+ 1 - 1  celery/_state.py
+ 1 - 1  celery/app/__init__.py
+ 2 - 2  celery/app/amqp.py
+ 5 - 5  celery/app/base.py
+ 1 - 1  celery/app/log.py
+ 14 - 14  celery/app/task.py
+ 3 - 3  celery/app/trace.py
+ 1 - 1  celery/app/utils.py
+ 1 - 1  celery/backends/amqp.py
+ 1 - 1  celery/backends/base.py
+ 2 - 2  celery/backends/cassandra.py
+ 1 - 1  celery/backends/filesystem.py
+ 1 - 1  celery/backends/mongodb.py
+ 1 - 1  celery/backends/rpc.py
+ 4 - 4  celery/beat.py
+ 1 - 1  celery/bin/base.py
+ 1 - 1  celery/bin/beat.py
+ 3 - 3  celery/bin/celery.py
+ 1 - 1  celery/bin/events.py
+ 1 - 1  celery/bin/multi.py
+ 4 - 4  celery/bin/worker.py
+ 8 - 8  celery/canvas.py
+ 14 - 14  celery/concurrency/asynpool.py
+ 2 - 2  celery/contrib/abortable.py
+ 1 - 1  celery/contrib/rdb.py
+ 1 - 1  celery/contrib/sphinx.py
+ 2 - 2  celery/events/__init__.py
+ 2 - 2  celery/events/snapshot.py
+ 2 - 2  celery/events/state.py
+ 5 - 5  celery/exceptions.py
+ 1 - 1  celery/fixups/django.py
+ 1 - 1  celery/local.py
+ 8 - 8  celery/platforms.py
+ 7 - 7  celery/result.py
+ 5 - 5  celery/schedules.py
+ 1 - 1  celery/states.py
+ 1 - 1  celery/task/__init__.py
+ 3 - 3  celery/task/base.py
+ 1 - 1  celery/tests/backends/test_cassandra.py
+ 1 - 1  celery/tests/case.py
+ 1 - 1  celery/tests/fixups/test_django.py
+ 2 - 2  celery/tests/tasks/test_chord.py
+ 1 - 1  celery/utils/__init__.py
+ 3 - 3  celery/utils/collections.py
+ 2 - 2  celery/utils/deprecated.py
+ 2 - 2  celery/utils/dispatch/saferef.py
+ 1 - 1  celery/utils/dispatch/signal.py
+ 1 - 1  celery/utils/functional.py
+ 1 - 1  celery/utils/nodenames.py
+ 3 - 3  celery/utils/objects.py
+ 1 - 1  celery/utils/saferepr.py
+ 1 - 1  celery/utils/serialization.py
+ 1 - 1  celery/utils/threads.py
+ 3 - 3  celery/worker/__init__.py
+ 1 - 1  celery/worker/components.py
+ 1 - 1  celery/worker/consumer/consumer.py
+ 2 - 2  celery/worker/loops.py
+ 1 - 1  celery/worker/request.py
+ 50 - 50  docs/contributing.rst
+ 1 - 1  docs/copyright.rst
+ 16 - 23  docs/django/first-steps-with-django.rst
+ 64 - 64  docs/faq.rst
+ 3 - 3  docs/getting-started/brokers/index.rst
+ 2 - 2  docs/getting-started/brokers/rabbitmq.rst
+ 7 - 7  docs/getting-started/brokers/redis.rst
+ 13 - 13  docs/getting-started/brokers/sqs.rst
+ 34 - 33  docs/getting-started/first-steps-with-celery.rst
+ 9 - 9  docs/getting-started/introduction.rst
+ 40 - 38  docs/getting-started/next-steps.rst
+ 9 - 9  docs/glossary.rst
+ 44 - 44  docs/history/changelog-1.0.rst
+ 14 - 14  docs/history/changelog-2.0.rst
+ 18 - 18  docs/history/changelog-2.1.rst
+ 65 - 65  docs/history/changelog-2.2.rst
+ 16 - 16  docs/history/changelog-2.3.rst
+ 19 - 19  docs/history/changelog-2.4.rst
+ 4 - 4  docs/history/changelog-2.5.rst
+ 38 - 38  docs/history/changelog-3.0.rst
+ 50 - 50  docs/history/changelog-3.1.rst
+ 14 - 14  docs/history/whatsnew-2.5.rst
+ 33 - 33  docs/history/whatsnew-3.0.rst
+ 2 - 2  docs/includes/installation.txt
+ 9 - 9  docs/includes/introduction.txt
+ 1 - 1  docs/includes/resources.txt
+ 1 - 1  docs/index.rst
+ 1 - 1  docs/internals/app-overview.rst
+ 7 - 7  docs/internals/guide.rst
+ 4 - 4  docs/internals/protocol.rst
+ 4 - 5  docs/reference/celery.app.amqp.rst
+ 3 - 3  docs/sec/CELERYSA-0001.txt
+ 3 - 3  docs/sec/CELERYSA-0002.txt
+ 3 - 3  docs/tutorials/task-cookbook.rst
+ 18 - 18  docs/userguide/application.rst
+ 21 - 21  docs/userguide/calling.rst
+ 20 - 20  docs/userguide/canvas.rst
+ 7 - 7  docs/userguide/concurrency/eventlet.rst
+ 59 - 59  docs/userguide/configuration.rst

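The changes are mechanical throughout: two spaces after a sentence-ending period become one, and common phrases are collapsed into contractions ("do not" → "don't", "you are" → "you're", and so on). A minimal sketch of that kind of rewrite, assuming a small hand-picked contraction map (the real edits were curated by hand, and heading underlines were re-measured to match, as in ``What's a Task Queue?`` below)::

    import re

    # Illustrative map only; the commit's substitutions were chosen case by case.
    CONTRACTIONS = {
        'do not': "don't",
        'is not': "isn't",
        'are not': "aren't",
        'will not': "won't",
        'you are': "you're",
        'there is': "there's",
    }

    def one_space_after_period(text):
        # Collapse exactly two spaces following a period into one.
        return re.sub(r'\.  (?=\S)', '. ', text)

    def contract(text):
        for full, short in CONTRACTIONS.items():
            text = re.sub(r'\b%s\b' % full, short, text)
        return text

    text = 'This document is fairly extensive.  You are not really expected to study it.'
    print(contract(one_space_after_period(text)))
    # This document is fairly extensive. You aren't really expected to study it.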
+ 50 - 50
CONTRIBUTING.rst

@@ -6,7 +6,7 @@
 
 Welcome!
 
-This document is fairly extensive and you are not really expected
+This document is fairly extensive and you aren't really expected
 to study this in detail for small contributions;
 
     The most important rule is that contributing must be easy
@@ -17,7 +17,7 @@ If you're reporting a bug you should read the Reporting bugs section
 below to ensure that your bug report contains enough information
 to successfully diagnose the issue, and if you're contributing code
 you should try to mimic the conventions you see surrounding the code
-you are working on, but in the end all patches will be cleaned up by
+you're working on, but in the end all patches will be cleaned up by
 the person merging the changes so don't worry too much.
 
 .. contents::
@@ -28,8 +28,8 @@ the person merging the changes so don't worry too much.
 Community Code of Conduct
 =========================
 
-The goal is to maintain a diverse community that is pleasant for everyone.
-That is why we would greatly appreciate it if everyone contributing to and
+The goal is to maintain a diverse community that's pleasant for everyone.
+That's why we would greatly appreciate it if everyone contributing to and
 interacting with the community also followed this Code of Conduct.
 
 The Code of Conduct covers our behavior as members of the community,
@@ -46,22 +46,22 @@ Be considerate.
 ---------------
 
 Your work will be used by other people, and you in turn will depend on the
-work of others.  Any decision you take will affect users and colleagues, and
+work of others. Any decision you take will affect users and colleagues, and
 we expect you to take those consequences into account when making decisions.
 Even if it's not obvious at the time, our contributions to Celery will impact
-the work of others.  For example, changes to code, infrastructure, policy,
+the work of others. For example, changes to code, infrastructure, policy,
 documentation and translations during a release may negatively impact
 others work.
 
 Be respectful.
 --------------
 
-The Celery community and its members treat one another with respect.  Everyone
-can make a valuable contribution to Celery.  We may not always agree, but
-disagreement is no excuse for poor behavior and poor manners.  We might all
+The Celery community and its members treat one another with respect. Everyone
+can make a valuable contribution to Celery. We may not always agree, but
+disagreement is no excuse for poor behavior and poor manners. We might all
 experience some frustration now and then, but we cannot allow that frustration
-to turn into a personal attack.  It's important to remember that a community
-where people feel uncomfortable or threatened is not a productive one.  We
+to turn into a personal attack. It's important to remember that a community
+where people feel uncomfortable or threatened isn't a productive one. We
 expect members of the Celery community to be respectful when dealing with
 other contributors as well as with people outside the Celery project and with
 users of Celery.
@@ -70,11 +70,11 @@ Be collaborative.
 -----------------
 
 Collaboration is central to Celery and to the larger free software community.
-We should always be open to collaboration.  Your work should be done
+We should always be open to collaboration. Your work should be done
 transparently and patches from Celery should be given back to the community
-when they are made, not just when the distribution releases.  If you wish
+when they're made, not just when the distribution releases. If you wish
 to work on new code for existing upstream projects, at least keep those
-projects informed of your ideas and progress.  It many not be possible to
+projects informed of your ideas and progress. It many not be possible to
 get consensus from upstream, or even from your colleagues about the correct
 implementation for an idea, so don't feel obliged to have that agreement
 before you begin, but at least keep the outside world informed of your work,
@@ -85,29 +85,29 @@ When you disagree, consult others.
 ----------------------------------
 
 Disagreements, both political and technical, happen all the time and
-the Celery community is no exception.  It is important that we resolve
+the Celery community is no exception. It's important that we resolve
 disagreements and differing views constructively and with the help of the
-community and community process.  If you really want to go a different
+community and community process. If you really want to go a different
 way, then we encourage you to make a derivative distribution or alternate
 set of packages that still build on the work we've done to utilize as common
 of a core as possible.
 
-When you are unsure, ask for help.
-----------------------------------
+When you're unsure, ask for help.
+---------------------------------
 
-Nobody knows everything, and nobody is expected to be perfect.  Asking
+Nobody knows everything, and nobody is expected to be perfect. Asking
 questions avoids many problems down the road, and so questions are
-encouraged.  Those who are asked questions should be responsive and helpful.
+encouraged. Those who are asked questions should be responsive and helpful.
 However, when asking a question, care must be taken to do so in an appropriate
 forum.
 
 Step down considerately.
 ------------------------
 
-Developers on every project come and go and Celery is no different.  When you
+Developers on every project come and go and Celery is no different. When you
 leave or disengage from the project, in whole or in part, we ask that you do
-so in a way that minimizes disruption to the project.  This means you should
-tell people you are leaving and take the proper steps to ensure that others
+so in a way that minimizes disruption to the project. This means you should
+tell people you're leaving and take the proper steps to ensure that others
 can pick up where you leave off.
 
 .. _reporting-bugs:
@@ -174,12 +174,12 @@ and participate in the discussion.
 
 2) **Determine if your bug is really a bug.**
 
-You should not file a bug if you are requesting support.  For that you can use
+You shouldn't file a bug if you're requesting support. For that you can use
 the `mailing-list`_, or `irc-channel`_.
 
 3) **Make sure your bug hasn't already been reported.**
 
-Search through the appropriate Issue tracker.  If a bug like yours was found,
+Search through the appropriate Issue tracker. If a bug like yours was found,
 check if you have new information that could be reported to help
 the developers fix the bug.
 
@@ -192,7 +192,7 @@ celery, billiard, kombu, amqp and vine.
 5) **Collect information about the bug.**
 
 To have the best chance of having a bug fixed, we need to be able to easily
-reproduce the conditions that caused it.  Most of the time this information
+reproduce the conditions that caused it. Most of the time this information
 will be from a Python traceback message, though some bugs might be in design,
 spelling or other errors on the website/docs/code.
 
@@ -202,12 +202,12 @@ spelling or other errors on the website/docs/code.
       etc.), the version of your Python interpreter, and the version of Celery,
       and related packages that you were running when the bug occurred.
 
-    C) If you are reporting a race condition or a deadlock, tracebacks can be
+    C) If you're reporting a race condition or a deadlock, tracebacks can be
       hard to get or might not be that useful. Try to inspect the process to
      get more diagnostic data. Some ideas:
 
      * Enable celery's ``breakpoint_signal`` and use it
-         to inspect the process's state.  This will allow you to open a
+         to inspect the process's state. This will allow you to open a
        ``pdb`` session.
      * Collect tracing data using `strace`_(Linux),
        ``dtruss`` (macOS), and ``ktrace`` (BSD),
@@ -251,7 +251,7 @@ issue tracker.
 * ``librabbitmq``: https://github.com/celery/librabbitmq/issues
 * ``django-celery``: https://github.com/celery/django-celery/issues
 
-If you are unsure of the origin of the bug you can ask the
+If you're unsure of the origin of the bug you can ask the
 `mailing-list`_, or just use the Celery issue tracker.
 
 Contributors guide to the code base
@@ -326,7 +326,7 @@ Maintenance branches
 --------------------
 
 Maintenance branches are named after the version, e.g. the maintenance branch
-for the 2.2.x series is named ``2.2``.  Previously these were named
+for the 2.2.x series is named ``2.2``. Previously these were named
 ``releaseXX-maint``.
 
 The versions we currently maintain is:
@@ -344,7 +344,7 @@ Archived branches
 
 Archived branches are kept for preserving history only,
 and theoretically someone could provide patches for these if they depend
-on a series that is no longer officially supported.
+on a series that's no longer officially supported.
 
 An archived version is named ``X.Y-archived``.
 
@@ -366,17 +366,17 @@ Feature branches
 ----------------
 
 Major new features are worked on in dedicated branches.
-There is no strict naming requirement for these branches.
+There's no strict naming requirement for these branches.
 
-Feature branches are removed once they have been merged into a release branch.
+Feature branches are removed once they've been merged into a release branch.
 
 Tags
 ====
 
-Tags are used exclusively for tagging releases.  A release tag is
+Tags are used exclusively for tagging releases. A release tag is
 named with the format ``vX.Y.Z``, e.g. ``v2.3.1``.
 Experimental releases contain an additional identifier ``vX.Y.Z-id``, e.g.
-``v3.0.0-rc1``.  Experimental tags may be removed after the official release.
+``v3.0.0-rc1``. Experimental tags may be removed after the official release.
 
 .. _contributing-changes:
 
@@ -388,7 +388,7 @@ Working on Features & Patches
     Contributing to Celery should be as simple as possible,
    so none of these steps should be considered mandatory.
 
-    You can even send in patches by email if that is your preferred
+    You can even send in patches by email if that's your preferred
    work method. We won't like you any less, any contribution you make
    is always appreciated!
 
@@ -497,7 +497,7 @@ When your feature/bugfix is complete you may want to submit
 a pull requests so that it can be reviewed by the maintainers.
 
 Creating pull requests is easy, and also let you track the progress
-of your contribution.  Read the `Pull Requests`_ section in the GitHub
+of your contribution. Read the `Pull Requests`_ section in the GitHub
 Guide to learn how this is done.
 
 You can also attach pull requests to existing issues by following
@@ -537,7 +537,7 @@ The coverage XML output will then be located at ``coverage.xml``
 Running the tests on all supported Python versions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-There is a ``tox`` configuration file in the top directory of the
+There's a ``tox`` configuration file in the top directory of the
 distribution.
 
 To run the tests for all supported Python versions simply execute:
@@ -575,7 +575,7 @@ After building succeeds the documentation is available at ``_build/html``.
 Verifying your contribution
 ---------------------------
 
-To use these tools you need to install a few dependencies.  These dependencies
+To use these tools you need to install a few dependencies. These dependencies
 can be found in ``requirements/pkgutils.txt``.
 
 Installing the dependencies:
@@ -611,7 +611,7 @@ reference please execute:
 If files are missing you can add them by copying an existing reference file.
 
 If the module is internal it should be part of the internal reference
-located in ``docs/internals/reference/``.  If the module is public
+located in ``docs/internals/reference/``. If the module is public
 it should be located in ``docs/reference/``.
 
 For example if reference is missing for the module ``celery.worker.awesome``
@@ -697,7 +697,7 @@ is following the conventions.
 
 .. _`PEP-257`: http://www.python.org/dev/peps/pep-0257/
 
-* Lines should not exceed 78 columns.
+* Lines shouldn't exceed 78 columns.
 
   You can enforce this in ``vim`` by setting the ``textwidth`` option:
  ::
@@ -748,12 +748,12 @@ is following the conventions.
        from __future__ import absolute_import
 
    * If the module uses the ``with`` statement and must be compatible
-      with Python 2.5 (celery is not) then it must also enable that::
+      with Python 2.5 (celery isn't) then it must also enable that::
 
        from __future__ import with_statement
 
    * Every future import must be on its own line, as older Python 2.5
-      releases did not support importing multiple features on the
+      releases didn't support importing multiple features on the
      same future import line::
 
        # Good
@@ -763,12 +763,12 @@ is following the conventions.
        # Bad
        from __future__ import absolute_import, with_statement
 
-     (Note that this rule does not apply if the package does not include
+     (Note that this rule doesn't apply if the package doesn't include
     support for Python 2.5)
 
 
 * Note that we use "new-style` relative imports when the distribution
-  does not support Python versions below 2.5
+  doesn't support Python versions below 2.5
 
    This requires Python 2.5 or later:
    ::
@@ -796,7 +796,7 @@ that require third-party libraries must be added.
        pycassa
 
    These are pip requirement files so you can have version specifiers and
-    multiple packages are separated by newline.  A more complex example could
+    multiple packages are separated by newline. A more complex example could
    be:
    ::
 
@@ -829,7 +829,7 @@ that require third-party libraries must be added.
 
 That's all that needs to be done, but remember that if your feature
 adds additional configuration options then these needs to be documented
-in ``docs/configuration.rst``.  Also all settings need to be added to the
+in ``docs/configuration.rst``. Also all settings need to be added to the
 ``celery/app/defaults.py`` module.
 
 Result backends require a separate section in the ``docs/configuration.rst``
@@ -844,7 +844,7 @@ This is a list of people that can be contacted for questions
 regarding the official git repositories, PyPI packages
 Read the Docs pages.
 
-If the issue is not an emergency then it is better
+If the issue isn't an emergency then it's better
 to `report an issue`_.
 
 
@@ -957,7 +957,7 @@ Promise/deferred implementation.
 ------------
 
 Fork of multiprocessing containing improvements
-that will eventually be merged into the Python stdlib.
+that'll eventually be merged into the Python stdlib.
 
 :git: https://github.com/celery/billiard
 :CI: http://travis-ci.org/#!/celery/billiard/
@@ -1054,7 +1054,7 @@ The version number must be updated two places:
     * ``docs/include/introduction.txt``
 
 After you have changed these files you must render
-the ``README`` files.  There is a script to convert sphinx syntax
+the ``README`` files. There's a script to convert sphinx syntax
 to generic reStructured Text syntax, and the make target `readme`
 does this for you:
 ::

+ 12 - 12
README.rst

@@ -15,8 +15,8 @@
 
 --
 
-What is a Task Queue?
-=====================
+What's a Task Queue?
+====================
 
 Task queues are used as a mechanism to distribute work across threads or
 machines.
@@ -25,14 +25,14 @@ A task queue's input is a unit of work, called a task, dedicated worker
 processes then constantly monitor the queue for new work to perform.
 
 Celery communicates via messages, usually using a broker
-to mediate between clients and workers.  To initiate a task a client puts a
+to mediate between clients and workers. To initiate a task a client puts a
 message on the queue, the broker then delivers the message to a worker.
 
 A Celery system can consist of multiple workers and brokers, giving way
 to high availability and horizontal scaling.
 
 Celery is written in Python, but the protocol can be implemented in any
-language.  In addition to Python there's node-celery_ for Node.js,
+language. In addition to Python there's node-celery_ for Node.js,
 and a `PHP client`_.
 
 Language interoperability can also be achieved
@@ -55,7 +55,7 @@ Celery version 4.0 runs on,
 This is the last version to support Python 2.7,
 and from the next version (Celery 5.x) Python 3.6 or newer is required.
 
-If you are running an older version of Python, you need to be running
+If you're running an older version of Python, you need to be running
 an older version of Celery:
 
 - Python 2.6: Celery series 3.1 or earlier.
@@ -63,8 +63,8 @@ an older version of Celery:
 - Python 2.4 was Celery series 2.2 or earlier.
 
 Celery is a project with minimal funding,
-so we do not support Microsoft Windows.
-Please do not open any issues related to that platform.
+so we don't support Microsoft Windows.
+Please don't open any issues related to that platform.
 
 *Celery* is usually used with a message broker to send and receive messages.
 The RabbitMQ, Redis transports are feature complete,
@@ -77,7 +77,7 @@ across datacenters.
 Get Started
 ===========
 
-If this is the first time you're trying to use Celery, or you are
+If this is the first time you're trying to use Celery, or you're
 new to Celery 4.0 coming from previous versions then you should read our
 getting started tutorials:
 
@@ -184,7 +184,7 @@ integration packages:
     | `Tornado`_         | `tornado-celery`_      |
    +--------------------+------------------------+
 
-The integration packages are not strictly necessary, but they can make
+The integration packages aren't strictly necessary, but they can make
 development easier, and sometimes they add important hooks like closing
 database connections at ``fork``.
 
@@ -238,7 +238,7 @@ Celery also defines a group of bundles that can be used
 to install Celery and the dependencies for a given feature.
 
 You can specify these in your requirements or on the ``pip``
-command-line by using brackets.  Multiple bundles can be specified by
+command-line by using brackets. Multiple bundles can be specified by
 separating them by commas.
 ::
 
@@ -334,7 +334,7 @@ You can install it by doing the following,:
     # python setup.py install
 
 The last command must be executed as a privileged user if
-you are not currently using a virtualenv.
+you aren't currently using a virtualenv.
 
 .. _celery-installing-from-git:
 
@@ -409,7 +409,7 @@ Contributing
 
 Development of `celery` happens at GitHub: https://github.com/celery/celery
 
-You are highly encouraged to participate in the development
+You're highly encouraged to participate in the development
 of `celery`. If you don't like GitHub (for some reason) you're welcome
 to send regular patches.
 

+ 1 - 1
celery/__init__.py

@@ -113,7 +113,7 @@ def _patch_gevent():
     monkey.patch_all()
     if version_info[0] == 0:  # pragma: no cover
         # Signals aren't working in gevent versions <1.0,
-        # and are not monkey patched by patch_all()
+        # and aren't monkey patched by patch_all()
         _signal = __import__('signal')
         _signal.signal = gsignal
 

+ 1 - 1
celery/_state.py

@@ -25,7 +25,7 @@ __all__ = [
 #: Global default app used when no current app.
 default_app = None
 
-#: List of all app instances (weakrefs), must not be used directly.
+#: List of all app instances (weakrefs), mustn't be used directly.
 _apps = weakref.WeakSet()
 
 #: global set of functions to call whenever a new app is finalized

+ 1 - 1
celery/app/__init__.py

@@ -88,7 +88,7 @@ else:
 def shared_task(*args, **kwargs):
     """Create shared tasks (decorator).
 
-    This can be used by library authors to create tasks that will work
+    This can be used by library authors to create tasks that'll work
     for any app environment.
 
     Returns:

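``shared_task`` is what reusable libraries (for example, Django apps) use to declare tasks without committing to a concrete Celery instance; the returned proxy resolves against whichever app is current when the task is used. A brief usage sketch (the task body is illustrative)::

    from celery import shared_task

    @shared_task
    def add(x, y):
        # Bound to whichever Celery app is current at call time.
        return x + y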
+ 2 - 2
celery/app/amqp.py

@@ -163,7 +163,7 @@ class Queues(dict):
         return info[0] + '\n' + textindent('\n'.join(info[1:]), indent)
 
     def select_add(self, queue, **kwargs):
-        """Add new task queue that will be consumed from even when
+        """Add new task queue that'll be consumed from even when
         a subset has been selected using the
         :option:`celery worker -Q` option."""
         q = self.add(queue, **kwargs)
@@ -184,7 +184,7 @@ class Queues(dict):
             }
 
     def deselect(self, exclude):
-        """Deselect queues so that they will not be consumed from.
+        """Deselect queues so that they won't be consumed from.
 
         Arguments:
             exclude (Sequence[str], str): Names of queues to avoid

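For context, both methods act on the app's queue registry, so a selection made with :option:`celery worker -Q` can be adjusted programmatically. A sketch, assuming the 4.0 queue API and illustrative queue names::

    from celery import Celery

    app = Celery('proj')

    # Consume from a subset, as `celery worker -Q default` would.
    app.amqp.queues.select(['default'])

    # This queue will be consumed from despite the selection above ...
    app.amqp.queues.select_add('priority.high')

    # ... and this one won't be consumed from at all.
    app.amqp.queues.deselect('images')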
+ 5 - 5
celery/app/base.py

@@ -336,7 +336,7 @@ class Celery(object):
             a proxy object, so that the act of creating the task is not
             performed until the task is used or the task registry is accessed.
 
-            If you are depending on binding to be deferred, then you must
+            If you're depending on binding to be deferred, then you must
             not access any attributes on the returned object until the
             application is fully set up (finalized).
         """
@@ -538,7 +538,7 @@ class Celery(object):
             digest (str): Digest algorithm used when signing messages.
                 Default is ``sha1``.
             serializer (str): Serializer used to encode messages after
-                they have been signed.  See :setting:`task_serializer` for
+                they've been signed.  See :setting:`task_serializer` for
                 the serializers supported.  Default is ``json``.
         """
         from celery.security import setup_security
@@ -578,7 +578,7 @@ class Celery(object):
                 to "tasks", which means it look for "module.tasks" for every
                 module in ``packages``.
             force (bool): By default this call is lazy so that the actual
-                auto-discovery will not happen until an application imports
+                auto-discovery won't happen until an application imports
                 the default modules.  Forcing will cause the auto-discovery
                 to happen immediately.
         """
@@ -916,7 +916,7 @@ class Celery(object):
             reverse (str): Reverse path to this object used for pickling
                 purposes.  E.g. for ``app.AsyncResult`` use ``"AsyncResult"``.
             keep_reduce (bool): If enabled a custom ``__reduce__``
-                implementation will not be provided.
+                implementation won't be provided.
         """
         Class = symbol_by_name(Class)
         reverse = reverse if reverse else Class.__name__
@@ -1054,7 +1054,7 @@ class Celery(object):
 
     @property
     def current_task(self):
-        """The instance of the task that is being executed, or
+        """The instance of the task that's being executed, or
         :const:`None`."""
        return _task_stack.top
 

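The deferred-binding caveat above matters mostly for tasks defined at import time, before the app is configured. A sketch of the safe ordering (the settings are illustrative)::

    from celery import Celery

    app = Celery()

    @app.task
    def add(x, y):
        return x + y

    # `add` is still an unbound proxy here; accessing attributes such as
    # `add.name` would force binding too early.  Configure first:
    app.config_from_object({'task_default_queue': 'proj'})
    app.finalize()
    assert add.name  # safe after finalization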
+ 1 - 1
celery/app/log.py

@@ -202,7 +202,7 @@ class Logging(object):
             # Windows does not support ANSI color codes.
             return False
         if colorize or colorize is None:
-            # Only use color if there is no active log file
+            # Only use color if there's no active log file
             # and stderr is an actual terminal.
             return logfile is None and isatty(sys.stderr)
         return colorize

+ 14 - 14
celery/app/task.py

@@ -175,7 +175,7 @@ class Task(object):
     #: a minute),`'100/h'` (hundred tasks an hour)
     rate_limit = None
 
-    #: If enabled the worker will not store task state and return values
+    #: If enabled the worker won't store task state and return values
     #: for this task.  Defaults to the :setting:`task_ignore_result`
     #: setting.
     ignore_result = None
@@ -213,7 +213,7 @@ class Task(object):
     #: finished, or waiting to be retried.
     #:
     #: Having a 'started' status can be useful for when there are long
-    #: running tasks and there is a need to report which task is currently
+    #: running tasks and there's a need to report which task is currently
     #: running.
     #:
     #: The application default can be overridden using the
@@ -247,9 +247,9 @@ class Task(object):
     #: Tuple of expected exceptions.
     #:
     #: These are errors that are expected in normal operation
-    #: and that should not be regarded as a real error by the worker.
+    #: and that shouldn't be regarded as a real error by the worker.
     #: Currently this means that the state will be updated to an error
-    #: state, but the worker will not log the event as an error.
+    #: state, but the worker won't log the event as an error.
     throws = ()
 
     #: Default task expiry time.
@@ -261,7 +261,7 @@ class Task(object):
     #: Task request stack, the current request will be the topmost.
     request_stack = None
 
-    #: Some may expect a request to exist even if the task has not been
+    #: Some may expect a request to exist even if the task hasn't been
     #: called.  This should probably be deprecated.
     _default_request = None
 
@@ -362,7 +362,7 @@ class Task(object):
         # - simply grabs it from the local registry.
         # - in later versions the module of the task is also included,
         # - and the receiving side tries to import that module so that
-        # - it will work even if the task has not been registered.
+        # - it will work even if the task hasn't been registered.
         mod = type(self).__module__
         mod = mod if mod and mod in sys.modules else None
         return (_unpickle_task_v2, (self.name, mod), None)
@@ -405,7 +405,7 @@ class Task(object):
 
             expires (float, ~datetime.datetime): Datetime or
                 seconds in the future for the task should expire.
-                The task will not be executed after the expiration time.
+                The task won't be executed after the expiration time.
 
             shadow (str): Override task name used in logs/monitoring.
                 Default is retrieved from :meth:`shadow_name`.
@@ -441,7 +441,7 @@ class Task(object):
 
             serializer (str): Serialization method to use.
                 Can be `pickle`, `json`, `yaml`, `msgpack` or any custom
-                serialization method that has been registered
+                serialization method that's been registered
                 with :mod:`kombu.serialization.registry`.
                 Defaults to the :attr:`serializer` attribute.
 
@@ -559,7 +559,7 @@ class Task(object):
         Note:
             Although the task will never return above as `retry` raises an
             exception to notify the worker, we use `raise` in front of the
-            retry to convey that the rest of the block will not be executed.
+            retry to convey that the rest of the block won't be executed.
 
         Arguments:
             args (Tuple): Positional arguments to retry with.
@@ -578,15 +578,15 @@ class Task(object):
             eta (~datetime.dateime): Explicit time and date to run the
                 retry at.
             max_retries (int): If set, overrides the default retry limit for
-                this execution. Changes to this parameter do not propagate to
+                this execution. Changes to this parameter don't propagate to
                 subsequent task retry attempts. A value of :const:`None`, means
-                "use the default", so if you want infinite retries you would
+                "use the default", so if you want infinite retries you'd
                 have to set the :attr:`max_retries` attribute of the task to
                 :const:`None` first.
             time_limit (int): If set, overrides the default time limit.
             soft_time_limit (int): If set, overrides the default soft
                 time limit.
-            throw (bool): If this is :const:`False`, do not raise the
+            throw (bool): If this is :const:`False`, don't raise the
                 :exc:`~@Retry` exception, that tells the worker to mark
                 the task as being retried.  Note that this means the task
                 will be marked as failed if the task raises an exception,
@@ -760,7 +760,7 @@ class Task(object):
         Raises:
             ~@Ignore: This is always raised, so the best practice
             is to always use ``raise self.replace(...)`` to convey
-            to the reader that the task will not continue after being replaced.
+            to the reader that the task won't continue after being replaced.
         """
         chord = self.request.chord
         if 'chord' in sig.options:
@@ -798,7 +798,7 @@ class Task(object):
 
         Arguments:
             sig (~@Signature): Signature to extend chord with.
-            lazy (bool): If enabled the new task will not actually be called,
+            lazy (bool): If enabled the new task won't actually be called,
                 and ``sig.delay()`` must be called manually.
         """
         if not self.request.chord:

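In practice the ``raise self.retry(...)`` idiom described in the docstring above looks like this (the task body and the ``download`` helper are hypothetical)::

    from celery import Celery

    app = Celery('proj', broker='amqp://')

    @app.task(bind=True, max_retries=3, default_retry_delay=10)
    def fetch(self, url):
        try:
            return download(url)  # hypothetical helper
        except IOError as exc:
            # retry() raises Retry; the explicit `raise` tells the reader
            # that nothing after this line will run.
            raise self.retry(exc=exc)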
+ 3 - 3
celery/app/trace.py

@@ -322,7 +322,7 @@ def build_tracer(name, task, loader=None, hostname=None, store_errors=True,
         # retval - is the always unmodified return value.
         # state  - is the resulting task state.
 
-        # This function is very long because we have unrolled all the calls
+        # This function is very long because we've unrolled all the calls
         # for performance reasons, and because the function is so long
         # we want the main variables (I, and R) to stand out visually from the
         # the rest of the variables, so breaking PEP8 is worth it ;)
@@ -539,7 +539,7 @@ def setup_worker_optimizations(app, hostname=None):
     hostname = hostname or gethostname()
 
     # make sure custom Task.__call__ methods that calls super
-    # will not mess up the request/task stack.
+    # won't mess up the request/task stack.
     _install_stack_protection()
 
     # all new threads start without a current app, so if an app is not
@@ -593,7 +593,7 @@ def _install_stack_protection():
     #   they work when tasks are called directly.
     #
     # The worker only optimizes away __call__ in the case
-    # where it has not been overridden, so the request/task stack
+    # where it hasn't been overridden, so the request/task stack
     # will blow if a custom task class defines __call__ and also
     # calls super().
     if not getattr(BaseTask, '_stackprotected', False):

+ 1 - 1
celery/app/utils.py

@@ -216,7 +216,7 @@ def detect_settings(conf, preconf={}, ignore_keys=set(), prefix=None,
         # always use new format if prefix is used.
         info, left = _settings_info, set()
 
-    # only raise error for keys that the user did not provide two keys
+    # only raise error for keys that the user didn't provide two keys
     # for (e.g. both ``result_expires`` and ``CELERY_TASK_RESULT_EXPIRES``).
     really_left = {key for key in left if info.convert[key] not in have}
     if really_left:

+ 1 - 1
celery/backends/amqp.py

@@ -14,7 +14,7 @@ __all__ = ['AMQPBackend']
 
 def repair_uuid(s):
     # Historically the dashes in UUIDS are removed from AMQ entity names,
-    # but there is no known reason to.  Hopefully we'll be able to fix
+    # but there's no known reason to.  Hopefully we'll be able to fix
     # this in v4.0.
     return '%s-%s-%s-%s-%s' % (s[:8], s[8:12], s[12:16], s[16:20], s[20:])
 

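``repair_uuid`` simply slices the 32-character hex string back into the standard 8-4-4-4-12 groups; for example (the UUID is illustrative)::

    def repair_uuid(s):
        return '%s-%s-%s-%s-%s' % (s[:8], s[8:12], s[12:16], s[16:20], s[20:])

    assert (repair_uuid('4150740acf9a4f5c8394b10b6e286ba1') ==
            '4150740a-cf9a-4f5c-8394-b10b6e286ba1')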
+ 1 - 1
celery/backends/base.py

@@ -83,7 +83,7 @@ class Backend(object):
     supports_native_join = False
 
     #: If true the backend must automatically expire results.
-    #: The daily backend_cleanup periodic task will not be triggered
+    #: The daily backend_cleanup periodic task won't be triggered
     #: in this case.
     supports_autoexpire = False
 

+ 2 - 2
celery/backends/cassandra.py

@@ -145,8 +145,8 @@ class CassandraBackend(BaseBackend):
                 auth_provider=self.auth_provider)
             self._session = self._connection.connect(self.keyspace)
 
-            # We are forced to do concatenation below, as formatting would
-            # blow up on superficial %s that will be processed by Cassandra
+            # We're forced to do concatenation below, as formatting would
+            # blow up on superficial %s that'll be processed by Cassandra
             self._write_stmt = cassandra.query.SimpleStatement(
                 Q_INSERT_RESULT.format(
                     table=self.table, expires=self.cqlexpires),

+ 1 - 1
celery/backends/filesystem.py

@@ -50,7 +50,7 @@ class FilesystemBackend(KeyValueStoreBackend):
         self.open = open
         self.unlink = unlink
 
-        # Lets verify that we have everything setup right
+        # Lets verify that we've everything setup right
         self._do_directory_test(b'.fs-backend-' + uuid().encode(encoding))
 
     def _find_path(self, url):

+ 1 - 1
celery/backends/mongodb.py

@@ -96,7 +96,7 @@ class MongoBackend(BaseBackend):
             if not isinstance(config, dict):
                 raise ImproperlyConfigured(
                     'MongoDB backend settings should be grouped in a dict')
-            config = dict(config)  # do not modify original
+            config = dict(config)  # don't modify original
 
             if 'host' in config or 'port' in config:
                 # these should take over uri conf

+ 1 - 1
celery/backends/rpc.py

@@ -171,7 +171,7 @@ class BaseRPCBackend(base.Backend, AsyncBackendMixin):
             tid = self._get_message_task_id(acc)
             prev, latest_by_id[tid] = latest_by_id.get(tid), acc
             if prev:
-                # backends are not expected to keep history,
+                # backends aren't expected to keep history,
                 # so we delete everything except the most recent state.
                 prev.ack()
                 prev = None

+ 4 - 4
celery/beat.py

@@ -150,7 +150,7 @@ class ScheduleEntry(object):
             # in the scheduler heap, the order is decided by the
             # in the scheduler heap, the order is decided by the
             # preceding members of the tuple ``(time, priority, entry)``.
             # preceding members of the tuple ``(time, priority, entry)``.
             #
             #
-            # If all that is left to order on is the entry then it can
+            # If all that's left to order on is the entry then it can
             # just as well be random.
             # just as well be random.
             return id(self) < id(other)
             return id(self) < id(other)
         return NotImplemented
         return NotImplemented
@@ -161,13 +161,13 @@ class Scheduler(object):
 
 
     The :program:`celery beat` program may instantiate this class
     The :program:`celery beat` program may instantiate this class
     multiple times for introspection purposes, but then with the
     multiple times for introspection purposes, but then with the
-    ``lazy`` argument set.  It is important for subclasses to
+    ``lazy`` argument set.  It's important for subclasses to
     be idempotent when this argument is set.
     be idempotent when this argument is set.
 
 
     Arguments:
     Arguments:
         schedule (~celery.schedules.schedule): see :attr:`schedule`.
         schedule (~celery.schedules.schedule): see :attr:`schedule`.
         max_interval (int): see :attr:`max_interval`.
         max_interval (int): see :attr:`max_interval`.
-        lazy (bool): Do not set up the schedule.
+        lazy (bool): Don't set up the schedule.
     """
     """
 
 
     Entry = ScheduleEntry
     Entry = ScheduleEntry
@@ -236,7 +236,7 @@ class Scheduler(object):
     def tick(self, event_t=event_t, min=min,
              heappop=heapq.heappop, heappush=heapq.heappush,
              heapify=heapq.heapify, mktime=time.mktime):
-        """Run a tick, that is one iteration of the scheduler.
+        """Run a tick - one iteration of the scheduler.

         Executes one due task per call.


+ 1 - 1
celery/bin/base.py

@@ -285,7 +285,7 @@ class Command(object):
         Matching is case insensitive.

         Arguments:
-            q (str): the question to ask (do not include questionark)
+            q (str): the question to ask (don't include question mark)
             choice (Tuple[str]): tuple of possible choices, must be lowercase.
             default (Any): Default value if any.
         """

+ 1 - 1
celery/bin/beat.py

@@ -39,7 +39,7 @@

     Optional file used to store the process pid.

-    The program will not start if this file already exists
+    The program won't start if this file already exists
     and the pid is still alive.

 .. cmdoption:: --uid

+ 3 - 3
celery/bin/celery.py

@@ -56,7 +56,7 @@ in any command that also has a `--detach` option.

     Optional file used to store the process pid.

-    The program will not start if this file already exists
+    The program won't start if this file already exists
     and the pid is still alive.

 .. cmdoption:: --uid
@@ -471,7 +471,7 @@ class purge(Command):

     option_list = Command.option_list + (
         Option('--force', '-f', action='store_true',
-               help='Do not prompt for verification'),
+               help="Don't prompt for verification"),
         Option('--queues', '-Q', default=[],
                help='Comma separated list of queue names to purge.'),
         Option('--exclude-queues', '-X', default=[],
@@ -1106,7 +1106,7 @@ class CeleryCommand(Command):
                 elif value.startswith('-'):
                     # we eat the next argument even though we don't know
                     # if this option takes an argument or not.
-                    # instead we will assume what is the command name in the
+                    # instead we'll assume what's the command name in the
                     # return statements below.
                     try:
                         nxt = argv[index + 1]

+ 1 - 1
celery/bin/events.py

@@ -40,7 +40,7 @@

     Optional file used to store the process pid.

-    The program will not start if this file already exists
+    The program won't start if this file already exists
     and the pid is still alive.

 .. cmdoption:: --uid

+ 1 - 1
celery/bin/multi.py

@@ -20,7 +20,7 @@ Examples


     $ # You need to add the same arguments when you restart,
-    $ # as these are not persisted anywhere.
+    $ # as these aren't persisted anywhere.
     $ celery multi restart Leslie -E --pidfile=/var/run/celery/%n.pid
                                      --logfile=/var/run/celery/%n%I.log


+ 4 - 4
celery/bin/worker.py

@@ -78,15 +78,15 @@ The :program:`celery worker` command (previously known as ``celeryd``)

 .. cmdoption:: --without-gossip

-    Do not subscribe to other workers events.
+    Don't subscribe to other workers' events.

 .. cmdoption:: --without-mingle

-    Do not synchronize with other workers at start-up.
+    Don't synchronize with other workers at start-up.

 .. cmdoption:: --without-heartbeat

-    Do not send event heartbeats.
+    Don't send event heartbeats.

 .. cmdoption:: --heartbeat-interval

@@ -136,7 +136,7 @@ The :program:`celery worker` command (previously known as ``celeryd``)

     Optional file used to store the process pid.

-    The program will not start if this file already exists
+    The program won't start if this file already exists
     and the pid is still alive.

 .. cmdoption:: --uid

+ 8 - 8
celery/canvas.py

@@ -294,8 +294,8 @@ class Signature(dict):
                root_id=None, parent_id=None):
         """Finalize the signature by adding a concrete task id.

-        The task will not be called and you should not call the signature
-        twice after freezing it as that will result in two task messages
+        The task won't be called and you shouldn't call the signature
+        twice after freezing it as that'll result in two task messages
         using the same task id.

         Returns:
@@ -542,7 +542,7 @@ class chain(Signature):
     Arguments:
         *tasks (Signature): List of task signatures to chain.
             If only one argument is passed and that argument is
-            an iterable, then that will be used as the list of signatures
+            an iterable, then that'll be used as the list of signatures
             to chain instead.  This means that you can use a generator
             expression.

@@ -853,7 +853,7 @@ class group(Signature):

     Note:
         If only one argument is passed, and that argument is an iterable
-        then that will be used as the list of tasks instead, which
+        then that'll be used as the list of tasks instead, which
         means you can use ``group`` with generator expressions.

     Example:
@@ -864,8 +864,8 @@ class group(Signature):

     Arguments:
         *tasks (Signature): A list of signatures that this group will call.
-            If there is only one argument, and that argument is an iterable,
-            then that will define the list of signatures instead.
+            If there's only one argument, and that argument is an iterable,
+            then that'll define the list of signatures instead.
         **options (Any): Execution options applied to all tasks
             in the group.

@@ -904,7 +904,7 @@ class group(Signature):
         for task in tasks:
             if isinstance(task, CallableSignature):
                 # local sigs are always of type Signature, and we
-                # clone them to make sure we do not modify the originals.
+                # clone them to make sure we don't modify the originals.
                 task = task.clone()
             else:
                 # serialized sigs must be converted to Signature.
@@ -969,7 +969,7 @@ class group(Signature):
         p.finalize()

         # - Special case of group(A.s() | group(B.s(), C.s()))
-        # That is, group with single item that is a chain but the
+        # That is, group with single item that's a chain but the
         # last task in that chain is a group.
         #
         # We cannot actually support arbitrary GroupResults in chains,
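The iterable shortcut described in the ``chain`` and ``group`` docstrings
above, in practice (a sketch; assumes a task named ``add`` is defined):

.. code-block:: python

    >>> from celery import group

    >>> # a single generator argument becomes the list of signatures
    >>> res = group(add.s(i, i) for i in range(10))()
    >>> res.get(timeout=10)
    [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]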

+ 14 - 14
celery/concurrency/asynpool.py

@@ -82,7 +82,7 @@ UNAVAIL = frozenset({errno.EAGAIN, errno.EINTR})
 #: Constant sent by child process when started (ready to accept work)
 WORKER_UP = 15

-#: A process must have started before this timeout (in secs.) expires.
+#: A process must've started before this timeout (in secs.) expires.
 PROC_ALIVE_TIMEOUT = 4.0

 SCHED_STRATEGY_PREFETCH = 1
@@ -163,7 +163,7 @@ def _select(readers=None, writers=None, err=None, timeout=0,
     Returns:
         Tuple[Set, Set, Set]: of ``(readable, writable, again)``, where
         ``readable`` is a set of fds that have data available for read,
-        ``writable`` is a set of fds that is ready to be written to
+        ``writable`` is a set of fds that's ready to be written to
         and ``again`` is a flag that if set means the caller must
         throw away the result and call us again.
     """
@@ -307,7 +307,7 @@ class ResultHandler(_pool.ResultHandler):
         on_state_change = self.on_state_change
         join_exited_workers = self.join_exited_workers

-        # flush the processes outqueues until they have all terminated.
+        # flush the processes outqueues until they've all terminated.
         outqueues = set(fileno_to_outq)
         while cache and outqueues and self._state != TERMINATE:
             if check_timeouts is not None:
@@ -386,7 +386,7 @@ class AsynPool(_pool.Pool):
         # synqueue fileno -> process mapping
         self._fileno_to_synq = {}

-        # We keep track of processes that have not yet
+        # We keep track of processes that haven't yet
         # sent a WORKER_UP message.  If a process fails to send
         # this message within proc_up_timeout we terminate it
         # and hope the next process will recover.
@@ -564,7 +564,7 @@ class AsynPool(_pool.Pool):

         def on_process_up(proc):
             """Called when a process has started."""
-            # If we got the same fd as a previous process then we will also
+            # If we got the same fd as a previous process then we'll also
             # receive jobs in the old buffer, so we need to reset the
             # job._write_to and job._scheduled_for attributes used to recover
             # message boundaries when processes exit.
@@ -603,7 +603,7 @@ class AsynPool(_pool.Pool):

             try:
                 if index[fd] is proc:
-                    # fd has not been reused so we can remove it from index.
+                    # fd hasn't been reused so we can remove it from index.
                     index.pop(fd, None)
             except KeyError:
                 pass
@@ -927,7 +927,7 @@ class AsynPool(_pool.Pool):
     def flush(self):
         if self._state == TERMINATE:
             return
-        # cancel all tasks that have not been accepted so that NACK is sent.
+        # cancel all tasks that haven't been accepted so that NACK is sent.
         for job in values(self._cache):
             if not job._accepted:
                 job._cancel()
@@ -957,7 +957,7 @@ class AsynPool(_pool.Pool):
                     for gen in writers:
                         if (gen.__name__ == '_write_job' and
                                 gen_not_started(gen)):
-                            # has not started writing the job so can
+                            # hasn't started writing the job so can
                             # discard the task, but we must also remove
                             # it from the Pool._cache.
                             try:
@@ -1006,7 +1006,7 @@ class AsynPool(_pool.Pool):
     def get_process_queues(self):
         """Get queues for a new process.

-        Here we will find an unused slot, as there should always
+        Here we'll find an unused slot, as there should always
         be one available when we start a new process.
         """
         return next(q for q, owner in items(self._queues)
@@ -1028,8 +1028,8 @@ class AsynPool(_pool.Pool):
         """Creates new in, out (and optionally syn) queues,
         """Creates new in, out (and optionally syn) queues,
         returned as a tuple."""
         returned as a tuple."""
         # NOTE: Pipes must be set O_NONBLOCK at creation time (the original
         # NOTE: Pipes must be set O_NONBLOCK at creation time (the original
-        # fd), otherwise it will not be possible to change the flags until
-        # there is an actual reader/writer on the other side.
+        # fd), otherwise it won't be possible to change the flags until
+        # there's an actual reader/writer on the other side.
         inq = _SimpleQueue(wnonblock=True)
         inq = _SimpleQueue(wnonblock=True)
         outq = _SimpleQueue(rnonblock=True)
         outq = _SimpleQueue(rnonblock=True)
         synq = None
         synq = None
@@ -1106,7 +1106,7 @@ class AsynPool(_pool.Pool):

     @staticmethod
     def _stop_task_handler(task_handler):
-        """Called at shutdown to tell processes that we are shutting down."""
+        """Called at shutdown to tell processes that we're shutting down."""
         for proc in task_handler.pool:
             try:
                 setblocking(proc.inq._writer, 1)
@@ -1145,14 +1145,14 @@ class AsynPool(_pool.Pool):
         # this is only used by the original pool which uses a shared
         # queue for all processes.

-        # these attributes makes no sense for us, but we will still
+        # these attributes make no sense for us, but we'll still
         # have to initialize them.
         self._inqueue = self._outqueue = \
             self._quick_put = self._quick_get = self._poll_result = None

     def process_flush_queues(self, proc):
         """Flushes all queues, including the outbound buffer, so that
-        all tasks that have not been started will be discarded.
+        all tasks that haven't been started will be discarded.

         In Celery this is called whenever the transport connection is lost
         (consumer restart), and when a process is terminated.

+ 2 - 2
celery/contrib/abortable.py

@@ -71,8 +71,8 @@ In the producer:
         time.sleep(10)
         result.abort()

-After the `result.abort()` call, the task execution is not
-aborted immediately. In fact, it is not guaranteed to abort at all. Keep
+After the `result.abort()` call, the task execution isn't
+aborted immediately. In fact, it's not guaranteed to abort at all. Keep
 checking `result.state` status, or call `result.get(timeout=)` to
 have it block until the task is finished.

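On the consumer side abortion is cooperative - the task has to poll for it.
A minimal sketch (``app`` and ``do_some_work()`` are placeholders):

.. code-block:: python

    from celery.contrib.abortable import AbortableTask

    @app.task(bind=True, base=AbortableTask)
    def long_running(self):
        for _ in range(100):
            if self.is_aborted():
                return  # respect the abort request and exit early
            do_some_work()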

+ 1 - 1
celery/contrib/rdb.py

@@ -172,7 +172,7 @@ class Rdb(Pdb):
     do_q = do_exit = do_quit

     def set_quit(self):
-        # this raises a BdbQuit exception that we are unable to catch.
+        # this raises a BdbQuit exception that we're unable to catch.
         sys.settrace(None)



+ 1 - 1
celery/contrib/sphinx.py

@@ -14,7 +14,7 @@ Add the extension to your :file:`docs/conf.py` configuration module:
     extensions = (...,
                   'celery.contrib.sphinx')

-If you would like to change the prefix for tasks in reference documentation
+If you'd like to change the prefix for tasks in reference documentation
 then you can change the ``celery_task_prefix`` configuration value:

 .. code-block:: python

+ 2 - 2
celery/events/__init__.py

@@ -180,7 +180,7 @@ class EventDispatcher(object):
             retry (bool): Retry in the event of connection failure.
             retry_policy (Mapping): Map of custom retry policy options.
                 See :meth:`~kombu.Connection.ensure`.
-            blind (bool): Don't set logical clock value (also do not forward
+            blind (bool): Don't set logical clock value (also don't forward
                 the internal logical clock).
             Event (Callable): Event type used to create event.
                 Defaults to :func:`Event`.
@@ -223,7 +223,7 @@ class EventDispatcher(object):
             retry (bool): Retry in the event of connection failure.
             retry_policy (Mapping): Map of custom retry policy options.
                 See :meth:`~kombu.Connection.ensure`.
-            blind (bool): Don't set logical clock value (also do not forward
+            blind (bool): Don't set logical clock value (also don't forward
                 the internal logical clock).
             Event (Callable): Event type used to create event,
                 defaults to :func:`Event`.

+ 2 - 2
celery/events/snapshot.py

@@ -1,9 +1,9 @@
 # -*- coding: utf-8 -*-
 """Periodically store events in a database.

-Consuming the events as a stream is not always suitable
+Consuming the events as a stream isn't always suitable
 so this module implements a system to take snapshots of the
-state of a cluster at regular intervals.  There is a full
+state of a cluster at regular intervals.  There's a full
 implementation of this writing the snapshots to a database
 in :mod:`djcelery.snapshots` in the `django-celery` distribution.
 """

+ 2 - 2
celery/events/state.py

@@ -110,7 +110,7 @@ def heartbeat_expires(timestamp, freq=60,
                       expire_window=HEARTBEAT_EXPIRE_WINDOW,
                       Decimal=Decimal, float=float, isinstance=isinstance):
     # some json implementations return decimal.Decimal objects,
-    # which are not compatible with float.
+    # which aren't compatible with float.
     freq = float(freq) if isinstance(freq, Decimal) else freq
     if isinstance(timestamp, Decimal):
         timestamp = float(timestamp)
@@ -261,7 +261,7 @@ class Task(object):

     #: How to merge out of order events.
     #: Disorder is detected by logical ordering (e.g. :event:`task-received`
-    #: must have happened before a :event:`task-failed` event).
+    #: must've happened before a :event:`task-failed` event).
     #:
     #: A merge rule consists of a state and a list of fields to keep from
     #: that state. ``(RECEIVED, ('name', 'args')``, means the name and args
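For reference, the ``heartbeat_expires`` helper touched above is simple
arithmetic - the expire window is a percentage of the heartbeat frequency.
A sketch, assuming the default window of 200 (i.e. two missed heartbeats):

.. code-block:: python

    from celery.events.state import heartbeat_expires

    # with freq=60 and the default 200% window, the heartbeat is
    # considered expired 120 seconds after its timestamp
    assert heartbeat_expires(1000.0, freq=60) == 1120.0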

+ 5 - 5
celery/exceptions.py

@@ -24,7 +24,7 @@ __all__ = [
 ]

 UNREGISTERED_FMT = """\
-Task of kind {0} is not registered, please make sure it's imported.\
+Task of kind {0} never registered, please make sure it's imported.\
 """


@@ -125,7 +125,7 @@ class ImproperlyConfigured(ImportError):

 @python_2_unicode_compatible
 class NotRegistered(KeyError, CeleryError):
-    """The task is not registered."""
+    """The task ain't registered."""

     def __repr__(self):
         return UNREGISTERED_FMT.format(self)
@@ -148,7 +148,7 @@ class TaskRevokedError(CeleryError):


 class NotConfigured(CeleryWarning):
-    """Celery has not been configured, as no config module has been found."""
+    """Celery hasn't been configured, as no config module has been found."""


 class AlwaysEagerIgnored(CeleryWarning):
@@ -156,11 +156,11 @@ class AlwaysEagerIgnored(CeleryWarning):


 class InvalidTaskError(CeleryError):
-    """The task has invalid data or is not properly constructed."""
+    """The task has invalid data or ain't properly constructed."""


 class IncompleteStream(CeleryError):
-    """Found the end of a stream of data, but the data is not yet complete."""
+    """Found the end of a stream of data, but the data isn't complete."""


 class ChordError(CeleryError):

+ 1 - 1
celery/fixups/django.py

@@ -25,7 +25,7 @@ __all__ = ['DjangoFixup', 'fixup']

 ERR_NOT_INSTALLED = """\
 Environment variable DJANGO_SETTINGS_MODULE is defined
-but Django is not installed.  Will not apply Django fix-ups!
+but Django isn't installed.  Won't apply Django fix-ups!
 """


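The fix-ups only kick in when that environment variable is exported - the
usual pattern from the Django integration guide (``proj`` is a placeholder
project name):

.. code-block:: python

    import os

    from celery import Celery

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

    app = Celery('proj')
    app.config_from_object('django.conf:settings')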

+ 1 - 1
celery/local.py

@@ -298,7 +298,7 @@ class Proxy(object):


 class PromiseProxy(Proxy):
-    """This is a proxy to an object that has not yet been evaulated.
+    """This is a proxy to an object that hasn't yet been evaluated.

     :class:`Proxy` will evaluate the object each time, while the
     promise will only evaluate it once.
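The once-versus-every-time distinction is easy to demonstrate with a
counter - a minimal sketch:

.. code-block:: python

    from celery.local import Proxy, PromiseProxy

    calls = []

    def expensive():
        calls.append(1)
        return 42

    p, pp = Proxy(expensive), PromiseProxy(expensive)
    str(p); str(p)    # Proxy evaluates on every access
    str(pp); str(pp)  # PromiseProxy caches after the first access
    assert len(calls) == 3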

+ 8 - 8
celery/platforms.py

@@ -79,7 +79,7 @@ User information: uid={uid} euid={euid} gid={gid} egid={egid}
 """
 """
 
 
 ROOT_DISCOURAGED = """\
 ROOT_DISCOURAGED = """\
-You are running the worker with superuser privileges, which is
+You're running the worker with superuser privileges, which is
 absolutely not recommended!
 absolutely not recommended!
 
 
 Please specify a different user using the -u option.
 Please specify a different user using the -u option.
@@ -177,7 +177,7 @@ class Pidfile(object):
             os.unlink(self.path)

     def remove_if_stale(self):
-        """Remove the lock if the process is not running.
+        """Remove the lock if the process isn't running.
         (does not respond to signals)."""
         try:
             pid = self.read_pid()
@@ -229,7 +229,7 @@ def create_pidlock(pidfile):
     """Create and verify pidfile.
     """Create and verify pidfile.
 
 
     If the pidfile already exists the program exits with an error message,
     If the pidfile already exists the program exits with an error message,
-    however if the process it refers to is not running anymore, the pidfile
+    however if the process it refers to isn't running anymore, the pidfile
     is deleted and the program continues.
     is deleted and the program continues.
 
 
     This function will automatically install an :mod:`atexit` handler
     This function will automatically install an :mod:`atexit` handler
@@ -363,14 +363,14 @@ def detached(logfile=None, pidfile=None, uid=None, gid=None, umask=0,
             The ability to write to this file
             will be verified before the process is detached.
         pidfile (str): Optional pid file.
-            The pidfile will not be created,
+            The pidfile won't be created,
             as this is the responsibility of the child.  But the process will
             exit if the pid lock exists and the pid written is still running.
         uid (int, str): Optional user id or user name to change
             effective privileges to.
         gid (int, str): Optional group id or group name to change
             effective privileges to.
-        umask (str, int): Optional umask that will be effective in
+        umask (str, int): Optional umask that'll be effective in
             the child process.
         workdir (str): Optional new working directory.
         fake (bool): Don't actually detach, intended for debugging purposes.
@@ -384,7 +384,7 @@ def detached(logfile=None, pidfile=None, uid=None, gid=None, umask=0,
         ...           uid='nobody'):
         ... # Now in detached child process with effective user set to nobody,
         ... # and we know that our logfile can be written to, and that
-        ... # the pidfile is not locked.
+        ... # the pidfile isn't locked.
         ... pidlock = create_pidlock('/var/run/app.pid')
         ...
         ... # Run the program
@@ -446,7 +446,7 @@ def parse_gid(gid):

 def _setgroups_hack(groups):
     """:fun:`setgroups` may have a platform-dependent limit,
-    and it is not always possible to know in advance what this limit
+    and it's not always possible to know in advance what this limit
     is, so we use this ugly hack stolen from glibc."""
     groups = groups[:]

@@ -559,7 +559,7 @@ def maybe_drop_privileges(uid=None, gid=None):
 class Signals(object):
     """Convenience interface to :mod:`signals`.

-    If the requested signal is not supported on the current platform,
+    If the requested signal isn't supported on the current platform,
     the operation will be ignored.

     Example:

+ 7 - 7
celery/result.py

@@ -149,7 +149,7 @@ class AsyncResult(ResultBase):
             propagate (bool): Re-raise exception if the task failed.
             interval (float): Time to wait (in seconds) before retrying to
                 retrieve the result.  Note that this does not have any effect
-                when using the RPC/redis result store backends, as they do not
+                when using the RPC/redis result store backends, as they don't
                 use polling.
             no_ack (bool): Enable amqp no ack (automatically acknowledge
                 message).  If this is :const:`False` then the message will
@@ -158,7 +158,7 @@ class AsyncResult(ResultBase):
                 parent tasks.

         Raises:
-            celery.exceptions.TimeoutError: if `timeout` is not
+            celery.exceptions.TimeoutError: if `timeout` isn't
                 :const:`None` and the result does not arrive within
                 `timeout` seconds.
             Exception: If the remote call raised an exception then that
@@ -474,7 +474,7 @@ class ResultSet(ResultBase):
         """Remove result from the set; it must be a member.
         """Remove result from the set; it must be a member.
 
 
         Raises:
         Raises:
-            KeyError: if the result is not a member.
+            KeyError: if the result isn't a member.
         """
         """
         if isinstance(result, string_t):
         if isinstance(result, string_t):
             result = self.app.AsyncResult(result)
             result = self.app.AsyncResult(result)
@@ -505,7 +505,7 @@ class ResultSet(ResultBase):

         Returns:
             bool: true if all of the tasks finished
-                successfully (i.e. did not raise an exception).
+                successfully (i.e. didn't raise an exception).
         """
         return all(result.successful() for result in self.results)

@@ -647,7 +647,7 @@ class ResultSet(ResultBase):
                 No results will be returned by this function if a callback
                 is specified.  The order of results is also arbitrary when a
                 callback is used.  To get access to the result object for
-                a particular id you will have to generate an index first:
+                a particular id you'll have to generate an index first:
                 ``index = {r.id: r for r in gres.results.values()}``
                 Or you can create new result objects on the fly:
                 ``result = app.AsyncResult(task_id)`` (both will
@@ -657,7 +657,7 @@ class ResultSet(ResultBase):
                 *will not be acknowledged*).

         Raises:
-            celery.exceptions.TimeoutError: if ``timeout`` is not
+            celery.exceptions.TimeoutError: if ``timeout`` isn't
                 :const:`None` and the operation takes longer than ``timeout``
                 seconds.
         """
@@ -953,7 +953,7 @@ def result_from_tuple(r, app=None):
             return app.GroupResult(
                 res, [result_from_tuple(child, app) for child in nodes],
             )
-        # previously did not include parent
+        # previously didn't include parent
         id, parent = res if isinstance(res, (list, tuple)) else (res, None)
         if parent:
             parent = result_from_tuple(parent, app)
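The ``get()`` arguments touched above, in context (a sketch; ``add`` is a
placeholder task):

.. code-block:: python

    result = add.delay(2, 2)

    # interval only matters for polling backends; raises
    # celery.exceptions.TimeoutError if 10 seconds pass first
    value = result.get(timeout=10, interval=0.5)

    # propagate=False returns the exception instead of re-raising it
    outcome = result.get(timeout=10, propagate=False)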

+ 5 - 5
celery/schedules.py

@@ -300,7 +300,7 @@ class crontab(schedule):
     periodic task entry to add :manpage:`crontab(5)`-like scheduling.

     Like a :manpage:`cron(5)`-job, you can specify units of time of when
-    you would like the task to execute. It is a reasonably complete
+    you'd like the task to execute. It's a reasonably complete
     implementation of :command:`cron`'s features, so it should provide a fair
     degree of scheduling needs.

@@ -361,7 +361,7 @@ class crontab(schedule):

         The Celery app instance.

-    It is important to realize that any day on which execution should
+    It's important to realize that any day on which execution should
     occur must be represented by entries in all three of the day and
     month attributes.  For example, if ``day_of_week`` is 0 and
     ``day_of_month`` is every seventh day, only months that begin
@@ -399,8 +399,8 @@ class crontab(schedule):

         And convert it to an (expanded) set representing all time unit
         values on which the Crontab triggers.  Only in case of the base
-        type being :class:`str`, parsing occurs.  (It is fast and
-        happens only once for each Crontab instance, so there is no
+        type being :class:`str`, parsing occurs.  (It's fast and
+        happens only once for each Crontab instance, so there's no
         significant performance overhead involved.)

         For the other base types, merely Python type conversions happen.
@@ -740,7 +740,7 @@ class solar(schedule):
                 start=last_run_at_utc, use_center=self.use_center,
             )
         except self.ephem.CircumpolarError:  # pragma: no cover
-            # Sun will not rise/set today. Check again tomorrow
+            # Sun won't rise/set today. Check again tomorrow
             # (specifically, after the next anti-transit).
             next_utc = (
                 self.cal.next_antitransit(self.ephem.Sun()) +
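A couple of crontab expressions to make the scheduling rules concrete (both
are standard examples from the crontab documentation):

.. code-block:: python

    from celery.schedules import crontab

    # execute every day at 7:30 a.m.
    crontab(hour=7, minute=30)

    # execute every hour divisible by three: midnight, 3 a.m., 6 a.m., ...
    crontab(minute=0, hour='*/3')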

+ 1 - 1
celery/states.py

@@ -25,7 +25,7 @@ Set of states meaning the task result is ready (has been executed).
 UNREADY_STATES
 ~~~~~~~~~~~~~~

-Set of states meaning the task result is not ready (has not been executed).
+Set of states meaning the task result is not ready (hasn't been executed).

 .. state:: EXCEPTION_STATES

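These state sets make membership checks one-liners - for example:

.. code-block:: python

    from celery import states

    assert states.PENDING in states.UNREADY_STATES
    assert states.SUCCESS in states.READY_STATES
    assert states.FAILURE in states.EXCEPTION_STATES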

+ 1 - 1
celery/task/__init__.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """Old deprecated task module.

-This is the old task module, it should not be used anymore,
+This is the old task module, it shouldn't be used anymore,
 import from the main 'celery' module instead.
 If you're looking for the decorator implementation then that's in
 ``celery.app.base.Celery.task``.

+ 3 - 3
celery/task/base.py

@@ -62,7 +62,7 @@ class TaskType(type):
         new = super(TaskType, cls).__new__
         task_module = attrs.get('__module__') or '__main__'

-        # - Abstract class: abstract attribute should not be inherited.
+        # - Abstract class: abstract attribute shouldn't be inherited.
         abstract = attrs.pop('abstract', None)
         if abstract or not attrs.get('autoregister', True):
             return new(cls, name, bases, attrs)
@@ -92,13 +92,13 @@ class TaskType(type):
             # an app is created multiple times due to modules
             # imported under multiple names.
             # Hairy stuff, here to be compatible with 2.x.
-            # People should not use non-abstract task classes anymore,
+            # People shouldn't use non-abstract task classes anymore,
             # use the task decorator.
             from celery._state import connect_on_app_finalize
             unique_name = '.'.join([task_module, name])
             if unique_name not in cls._creation_count:
                 # the creation count is used as a safety
-                # so that the same task is not added recursively
+                # so that the same task isn't added recursively
                 # to the set of constructors.
                 cls._creation_count[unique_name] = 1
                 connect_on_app_finalize(_CompatShared(

+ 1 - 1
celery/tests/backends/test_cassandra.py

@@ -126,7 +126,7 @@ class test_CassandraBackend(AppCase):
         self.assertIsNone(x._connection)
         self.assertIsNone(x._session)

-        x.process_cleanup()  # should not raise
+        x.process_cleanup()  # shouldn't raise

     def test_please_free_memory(self):
         # Ensure that Cluster object IS shut down.

+ 1 - 1
celery/tests/case.py

@@ -44,7 +44,7 @@ CASE_REDEFINES_TEARDOWN = """\
 should be: "teardown"\
 should be: "teardown"\
 """
 """
 CASE_LOG_REDIRECT_EFFECT = """\
 CASE_LOG_REDIRECT_EFFECT = """\
-Test {0} did not disable LoggingProxy for {1}\
+Test {0} didn't disable LoggingProxy for {1}\
 """
 """
 CASE_LOG_LEVEL_EFFECT = """\
 CASE_LOG_LEVEL_EFFECT = """\
 Test {0} Modified the level of the root logger\
 Test {0} Modified the level of the root logger\

+ 1 - 1
celery/tests/fixups/test_django.py

@@ -208,7 +208,7 @@ class test_DjangoWorkerFixup(FixupCase):
                     f.close_database.assert_called()
                     f.close_cache.assert_called()

-            # when a task is eager, do not close connections
+            # when a task is eager, don't close connections
             with patch.object(f, 'close_cache'):
                 task.request.is_eager = True
                 with patch.object(f, 'close_database'):

+ 2 - 2
celery/tests/tasks/test_chord.py

@@ -76,7 +76,7 @@ class test_unlock_chord_task(ChordCase):
             cb.type.apply_async.assert_called_with(
                 ([2, 4, 8, 6],), {}, task_id=cb.id,
             )
-            # did not retry
+            # didn't retry
             self.assertFalse(retry.call_count)

     def test_deps_ready_fails(self):
@@ -114,7 +114,7 @@ class test_unlock_chord_task(ChordCase):

         with self._chord_context(Failed) as (cb, retry, fail_current):
             cb.type.apply_async.assert_not_called()
-            # did not retry
+            # didn't retry
             self.assertFalse(retry.call_count)
             fail_current.assert_called()
             self.assertEqual(

+ 1 - 1
celery/utils/__init__.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """Utility functions.

-Do not import from here directly anymore, as these are only
+Don't import from here directly anymore, as these are only
 here for backwards compatibility.
 """
 from __future__ import absolute_import, print_function, unicode_literals

+ 3 - 3
celery/utils/collections.py

@@ -424,7 +424,7 @@ class LimitedSet(object):
     but the set should not grow unbounded.

     ``maxlen`` is enforced at all times, so if the limit is reached
-    we will also remove non-expired items.
+    we'll also remove non-expired items.

     You can also configure ``minlen``, which is the minimal residual size
     of the set.
@@ -495,7 +495,7 @@ class LimitedSet(object):
             raise ValueError('expires cannot be negative!')

     def _refresh_heap(self):
-        """Time consuming recreating of heap. Do not run this too often."""
+        """Time-consuming recreation of the heap. Don't run this too often."""
         self._heap[:] = [entry for entry in values(self._data)]
         heapify(self._heap)

@@ -568,7 +568,7 @@ class LimitedSet(object):
             while len(self._data) > self.minlen >= 0:
                 inserted_time, _ = self._heap[0]
                 if inserted_time + self.expires > now:
-                    break  # oldest item has not expired yet
+                    break  # oldest item hasn't expired yet
                 self.pop()

     def pop(self, default=None):
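A short sketch of the eviction behavior described in the docstring above:

.. code-block:: python

    from celery.utils.collections import LimitedSet

    s = LimitedSet(maxlen=2)
    for item in 'abc':
        s.add(item)

    # maxlen is enforced even though nothing has expired,
    # so the oldest entry was evicted to make room
    assert len(s) <= 2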

+ 2 - 2
celery/utils/deprecated.py

@@ -43,11 +43,11 @@ def Callable(deprecation=None, removal=None,

     Arguments:
         deprecation (str): Version that marks first deprecation, if this
-            argument is not set a ``PendingDeprecationWarning`` will be
+            argument isn't set, a ``PendingDeprecationWarning`` will be
             emitted instead.
         removal (str): Future version when this feature will be removed.
         alternative (str): Instructions for an alternative solution (if any).
-        description (str): Description of what is being deprecated.
+        description (str): Description of what's being deprecated.
     """
     def _inner(fun):

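Usage is a plain decorator factory - a sketch of how it's meant to be
applied (the version strings and the alternative text are illustrative):

.. code-block:: python

    from celery.utils import deprecated

    @deprecated.Callable(deprecation='4.0', removal='5.0',
                         alternative='use new_thing() instead')
    def old_thing():
        pass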

+ 2 - 2
celery/utils/dispatch/saferef.py

@@ -91,7 +91,7 @@ class BoundMethodWeakref(object):  # pragma: no cover
             Basically this method of construction allows us to
             short-circuit creation of references to already-
             referenced instance methods.  The key corresponding
-            to the target is calculated, and if there is already
+            to the target is calculated, and if there's already
             an existing reference, that is returned, with its
             deletionMethods attribute updated.  Otherwise the
             new instance is created and registered in the table
@@ -174,7 +174,7 @@ class BoundMethodWeakref(object):  # pragma: no cover
         return str(self)

     def __bool__(self):
-        """Whether we are still a valid reference"""
+        """Whether we're still a valid reference."""
         return self() is not None
     __nonzero__ = __bool__  # py2


+ 1 - 1
celery/utils/dispatch/signal.py

@@ -121,7 +121,7 @@ class Signal(object):  # pragma: no cover
                    dispatch_uid=None):
         """Disconnect receiver from sender for signal.

-        If weak references are used, disconnect need not be called. The
+        If weak references are used, disconnect needn't be called. The
         receiver will be removed from dispatch automatically.

         Arguments:

+ 1 - 1
celery/utils/functional.py

@@ -83,7 +83,7 @@ def first(predicate, it):
     """Return the first element in ``iterable`` that ``predicate`` gives a
     """Return the first element in ``iterable`` that ``predicate`` gives a
     :const:`True` value for.
     :const:`True` value for.
 
 
-    If ``predicate`` is None it will return the first item that is not
+    If ``predicate`` is None it will return the first item that's not
     :const:`None`.
     :const:`None`.
     """
     """
     return next(
     return next(
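For example:

.. code-block:: python

    from celery.utils.functional import first

    assert first(None, [None, 0, 3]) == 0             # first non-None item
    assert first(lambda x: x > 2, [1, 2, 3, 4]) == 3  # first match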

+ 1 - 1
celery/utils/nodenames.py

@@ -33,7 +33,7 @@ __all__ = [


 def worker_direct(hostname):
-    """Return :class:`kombu.Queue` that is a direct route to
+    """Return :class:`kombu.Queue` that's a direct route to
     a worker by hostname.

     Arguments:

+ 3 - 3
celery/utils/objects.py

@@ -21,7 +21,7 @@ def mro_lookup(cls, attr, stop=set(), monkey_patched=[]):
         stop (Set[Any]): A set of types that if reached will stop
             the search.
         monkey_patched (Sequence): Use one of the stop classes
-            if the attributes module origin is not in this list.
+            if the attribute's module origin isn't in this list.
             Used to detect monkey patched attributes.

     Returns:
@@ -53,11 +53,11 @@ class FallbackContext(object):
         @contextmanager
         def connection_or_default_connection(connection=None):
             if connection:
-                # user already has a connection, should not close
+                # user already has a connection, shouldn't close
                 # after use
                 yield connection
             else:
-                # must have new connection, and also close the connection
+                # must open a new connection, and also close the connection
                 # after the block returns
                 with create_new_connection() as connection:
                     yield connection

+ 1 - 1
celery/utils/saferepr.py

@@ -7,7 +7,7 @@ Differences from regular :func:`repr`:
 - Sets are represented the Python 3 way: ``{1, 2}`` vs ``set([1, 2])``.
 - Unicode strings don't have the ``u'`` prefix, even on Python 2.
 - Empty set formatted as ``set()`` (Python 3), not ``set([])`` (Python 2).
-- Longs do not have the ``L`` suffix.
+- Longs don't have the ``L`` suffix.

 Very slow with no limits, super quick with limits.
 """

+ 1 - 1
celery/utils/serialization.py

@@ -45,7 +45,7 @@ def subclass_exception(name, parent, module):  # noqa
 def find_pickleable_exception(exc, loads=pickle.loads,
                               dumps=pickle.dumps):
     """With an exception instance, iterate over its super classes (by MRO)
-    and find the first super exception that is pickleable.  It does
+    and find the first super exception that's pickleable.  It does
     not go below :exc:`Exception` (i.e. it skips :exc:`Exception`,
     :class:`BaseException` and :class:`object`).  If that happens
     you should use :exc:`UnpickleableException` instead.

+ 1 - 1
celery/utils/threads.py

@@ -311,6 +311,6 @@ if USE_FAST_LOCALS:  # pragma: no cover
 else:
     # - See #706
     # since each thread has its own greenlet we can just use those as
-    # identifiers for the context.  If greenlets are not available we
+    # identifiers for the context.  If greenlets aren't available we
     # fall back to the current thread ident.
     LocalStack = _LocalStack  # noqa

+ 3 - 3
celery/worker/__init__.py

@@ -49,7 +49,7 @@ __all__ = ['WorkController', 'default_nodename']
 SHUTDOWN_SOCKET_TIMEOUT = 5.0

 SELECT_UNKNOWN_QUEUE = """\
-Trying to select queue subset of {0!r}, but queue {1} is not
+Trying to select queue subset of {0!r}, but queue {1} isn't
 defined in the `task_queues` setting.

 If you want to automatically declare unknown queues you can
@@ -57,7 +57,7 @@ enable the `task_create_missing_queues` setting.
 """
 """
 
 
 DESELECT_UNKNOWN_QUEUE = """\
 DESELECT_UNKNOWN_QUEUE = """\
-Trying to deselect queue subset of {0!r}, but queue {1} is not
+Trying to deselect queue subset of {0!r}, but queue {1} isn't
 defined in the `task_queues` setting.
 defined in the `task_queues` setting.
 """
 """
 
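
Both messages refer to the ``task_queues`` and
``task_create_missing_queues`` settings; a hedged configuration sketch:

.. code-block:: python

    from celery import Celery
    from kombu import Queue

    app = Celery('proj', broker='amqp://')
    app.conf.task_queues = [Queue('default'), Queue('images')]

    # `celery worker -Q video` would now trigger SELECT_UNKNOWN_QUEUE,
    # since `video` isn't defined above.  To declare unknown queues
    # automatically instead:
    app.conf.task_create_missing_queues = True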
 
@@ -120,7 +120,7 @@ class WorkController(object):
         self.loglevel = mlevel(self.loglevel)
         self.ready_callback = ready_callback or self.on_consumer_ready

-        # this connection is not established, only used for params
+        # this connection isn't established, only used for params
         self._conninfo = self.app.connection_for_read()
         self.use_eventloop = (
             self.should_use_eventloop() if use_eventloop is None

+ 1 - 1
celery/worker/components.py

@@ -24,7 +24,7 @@ use standalone beat instead.\
 """
 """
 
 
 W_POOL_SETTING = """
 W_POOL_SETTING = """
-The worker_pool setting should not be used to select the eventlet/gevent
+The worker_pool setting shouldn't be used to select the eventlet/gevent
 pools, instead you *must use the -P* argument so that patches are applied
 pools, instead you *must use the -P* argument so that patches are applied
 as early as possible.
 as early as possible.
 """
 """

+ 1 - 1
celery/worker/consumer/consumer.py

@@ -76,7 +76,7 @@ Received unregistered task of type %s.
 The message has been ignored and discarded.

 Did you remember to import the module containing this task?
-Or maybe you are using relative imports?
+Or maybe you're using relative imports?
 Please see http://bit.ly/gLye1c for more information.

 The full contents of the message body was:

+ 2 - 2
celery/worker/loops.py

@@ -52,7 +52,7 @@ def asynloop(obj, connection, consumer, blueprint, hub, qos,
         raise WorkerLostError('Could not start worker processes')

     # consumer.consume() may have prefetched up to our
-    # limit - drain an event so we are in a clean state
+    # limit - drain an event so we're in a clean state
     # prior to starting our event loop.
     if connection.transport.driver_type == 'amqp':
         hub.call_soon(_quick_drain, connection)
@@ -74,7 +74,7 @@ def asynloop(obj, connection, consumer, blueprint, hub, qos,
             elif should_terminate is not None and should_stop is not False:
                 raise WorkerTerminate(should_terminate)

-            # We only update QoS when there is no more messages to read.
+            # We only update QoS when there are no more messages to read.
             # This groups together qos calls, and makes sure that remote
             # control commands will be prioritized over task messages.
             if qos.prev != qos.value:

+ 1 - 1
celery/worker/request.py

@@ -341,7 +341,7 @@ class Request(object):
         if isinstance(exc, Retry):
             return self.on_retry(exc_info)

-        # These are special cases where the process would not have had
+        # These are special cases where the process wouldn't have had
         # time to write the result.
         if isinstance(exc, Terminated):
             self._announce_revoked(

+ 50 - 50
docs/contributing.rst

@@ -6,7 +6,7 @@

 Welcome!

-This document is fairly extensive and you are not really expected
+This document is fairly extensive and you aren't really expected
 to study this in detail for small contributions;

     The most important rule is that contributing must be easy
@@ -17,7 +17,7 @@ If you're reporting a bug you should read the Reporting bugs section
 below to ensure that your bug report contains enough information
 to successfully diagnose the issue, and if you're contributing code
 you should try to mimic the conventions you see surrounding the code
-you are working on, but in the end all patches will be cleaned up by
+you're working on, but in the end all patches will be cleaned up by
 the person merging the changes so don't worry too much.

 .. contents::
@@ -28,8 +28,8 @@ the person merging the changes so don't worry too much.
 Community Code of Conduct
 =========================

-The goal is to maintain a diverse community that is pleasant for everyone.
-That is why we would greatly appreciate it if everyone contributing to and
+The goal is to maintain a diverse community that's pleasant for everyone.
+That's why we would greatly appreciate it if everyone contributing to and
 interacting with the community also followed this Code of Conduct.

 The Code of Conduct covers our behavior as members of the community,
@@ -46,22 +46,22 @@ Be considerate.
 ---------------

 Your work will be used by other people, and you in turn will depend on the
-work of others.  Any decision you take will affect users and colleagues, and
+work of others. Any decision you take will affect users and colleagues, and
 we expect you to take those consequences into account when making decisions.
 Even if it's not obvious at the time, our contributions to Celery will impact
-the work of others.  For example, changes to code, infrastructure, policy,
+the work of others. For example, changes to code, infrastructure, policy,
 documentation and translations during a release may negatively impact
 others' work.

 Be respectful.
 --------------

-The Celery community and its members treat one another with respect.  Everyone
-can make a valuable contribution to Celery.  We may not always agree, but
-disagreement is no excuse for poor behavior and poor manners.  We might all
+The Celery community and its members treat one another with respect. Everyone
+can make a valuable contribution to Celery. We may not always agree, but
+disagreement is no excuse for poor behavior and poor manners. We might all
 experience some frustration now and then, but we cannot allow that frustration
-to turn into a personal attack.  It's important to remember that a community
-where people feel uncomfortable or threatened is not a productive one.  We
+to turn into a personal attack. It's important to remember that a community
+where people feel uncomfortable or threatened isn't a productive one. We
 expect members of the Celery community to be respectful when dealing with
 other contributors as well as with people outside the Celery project and with
 users of Celery.
@@ -70,11 +70,11 @@ Be collaborative.
 -----------------

 Collaboration is central to Celery and to the larger free software community.
-We should always be open to collaboration.  Your work should be done
+We should always be open to collaboration. Your work should be done
 transparently and patches from Celery should be given back to the community
-when they are made, not just when the distribution releases.  If you wish
+when they're made, not just when the distribution releases. If you wish
 to work on new code for existing upstream projects, at least keep those
-projects informed of your ideas and progress.  It many not be possible to
+projects informed of your ideas and progress. It may not be possible to
 get consensus from upstream, or even from your colleagues about the correct
 implementation for an idea, so don't feel obliged to have that agreement
 before you begin, but at least keep the outside world informed of your work,
@@ -85,29 +85,29 @@ When you disagree, consult others.
 ----------------------------------

 Disagreements, both political and technical, happen all the time and
-the Celery community is no exception.  It is important that we resolve
+the Celery community is no exception. It's important that we resolve
 disagreements and differing views constructively and with the help of the
-community and community process.  If you really want to go a different
+community and community process. If you really want to go a different
 way, then we encourage you to make a derivative distribution or alternate
 set of packages that still build on the work we've done to utilize as common
 of a core as possible.

-When you are unsure, ask for help.
-----------------------------------
+When you're unsure, ask for help.
+---------------------------------

-Nobody knows everything, and nobody is expected to be perfect.  Asking
+Nobody knows everything, and nobody is expected to be perfect. Asking
 questions avoids many problems down the road, and so questions are
-encouraged.  Those who are asked questions should be responsive and helpful.
+encouraged. Those who are asked questions should be responsive and helpful.
 However, when asking a question, care must be taken to do so in an appropriate
 forum.

 Step down considerately.
 ------------------------

-Developers on every project come and go and Celery is no different.  When you
+Developers on every project come and go and Celery is no different. When you
 leave or disengage from the project, in whole or in part, we ask that you do
-so in a way that minimizes disruption to the project.  This means you should
-tell people you are leaving and take the proper steps to ensure that others
+so in a way that minimizes disruption to the project. This means you should
+tell people you're leaving and take the proper steps to ensure that others
 can pick up where you leave off.

 .. _reporting-bugs:
@@ -174,12 +174,12 @@ and participate in the discussion.

 2) **Determine if your bug is really a bug.**

-You should not file a bug if you are requesting support.  For that you can use
+You shouldn't file a bug if you're requesting support. For that you can use
 the :ref:`mailing-list`, or :ref:`irc-channel`.

 3) **Make sure your bug hasn't already been reported.**

-Search through the appropriate Issue tracker.  If a bug like yours was found,
+Search through the appropriate Issue tracker. If a bug like yours was found,
 check if you have new information that could be reported to help
 the developers fix the bug.

@@ -192,7 +192,7 @@ celery, billiard, kombu, amqp and vine.
 5) **Collect information about the bug.**

 To have the best chance of having a bug fixed, we need to be able to easily
-reproduce the conditions that caused it.  Most of the time this information
+reproduce the conditions that caused it. Most of the time this information
 will be from a Python traceback message, though some bugs might be in design,
 spelling or other errors on the website/docs/code.

@@ -202,12 +202,12 @@ spelling or other errors on the website/docs/code.
        etc.), the version of your Python interpreter, and the version of Celery,
        and related packages that you were running when the bug occurred.

-    C) If you are reporting a race condition or a deadlock, tracebacks can be
+    C) If you're reporting a race condition or a deadlock, tracebacks can be
        hard to get or might not be that useful. Try to inspect the process to
       get more diagnostic data. Some ideas:

       * Enable celery's :ref:`breakpoint signal <breakpoint_signal>` and use it
-         to inspect the process's state.  This will allow you to open a
+         to inspect the process's state. This will allow you to open a
          :mod:`pdb` session.
       * Collect tracing data using `strace`_ (Linux),
         :command:`dtruss` (macOS), and :command:`ktrace` (BSD),
@@ -252,7 +252,7 @@ issue tracker.
 * :pypi:`librabbitmq`: https://github.com/celery/librabbitmq/issues
 * :pypi:`django-celery`: https://github.com/celery/django-celery/issues

-If you are unsure of the origin of the bug you can ask the
+If you're unsure of the origin of the bug you can ask the
 :ref:`mailing-list`, or just use the Celery issue tracker.

 Contributors guide to the code base
@@ -328,7 +328,7 @@ Maintenance branches
 --------------------

 Maintenance branches are named after the version, e.g. the maintenance branch
-for the 2.2.x series is named ``2.2``.  Previously these were named
+for the 2.2.x series is named ``2.2``. Previously these were named
 ``releaseXX-maint``.

 The versions we currently maintain are:
@@ -346,7 +346,7 @@ Archived branches

 Archived branches are kept for preserving history only,
 and theoretically someone could provide patches for these if they depend
-on a series that is no longer officially supported.
+on a series that's no longer officially supported.

 An archived version is named ``X.Y-archived``.

@@ -368,17 +368,17 @@ Feature branches
 ----------------

 Major new features are worked on in dedicated branches.
-There is no strict naming requirement for these branches.
+There's no strict naming requirement for these branches.

-Feature branches are removed once they have been merged into a release branch.
+Feature branches are removed once they've been merged into a release branch.

 Tags
 ====

-Tags are used exclusively for tagging releases.  A release tag is
+Tags are used exclusively for tagging releases. A release tag is
 named with the format ``vX.Y.Z``, e.g. ``v2.3.1``.
 Experimental releases contain an additional identifier ``vX.Y.Z-id``, e.g.
-``v3.0.0-rc1``.  Experimental tags may be removed after the official release.
+``v3.0.0-rc1``. Experimental tags may be removed after the official release.

 .. _contributing-changes:

@@ -390,7 +390,7 @@ Working on Features & Patches
     Contributing to Celery should be as simple as possible,
     so none of these steps should be considered mandatory.

-    You can even send in patches by email if that is your preferred
+    You can even send in patches by email if that's your preferred
     work method. We won't like you any less, any contribution you make
     is always appreciated!

@@ -506,7 +506,7 @@ When your feature/bugfix is complete you may want to submit
 a pull request so that it can be reviewed by the maintainers.

 Creating pull requests is easy, and also lets you track the progress
-of your contribution.  Read the `Pull Requests`_ section in the GitHub
+of your contribution. Read the `Pull Requests`_ section in the GitHub
 Guide to learn how this is done.

 You can also attach pull requests to existing issues by following
@@ -549,7 +549,7 @@ The coverage XML output will then be located at :file:`coverage.xml`
 Running the tests on all supported Python versions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-There is a :pypi:`tox` configuration file in the top directory of the
+There's a :pypi:`tox` configuration file in the top directory of the
 distribution.

 To run the tests for all supported Python versions simply execute:
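
The command itself (elided by this hunk) is presumably just:

.. code-block:: console

    $ tox

A single environment can be selected with ``tox -e <env>``.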
@@ -591,7 +591,7 @@ After building succeeds the documentation is available at :file:`_build/html`.
 Verifying your contribution
 ---------------------------

-To use these tools you need to install a few dependencies.  These dependencies
+To use these tools you need to install a few dependencies. These dependencies
 can be found in :file:`requirements/pkgutils.txt`.

 Installing the dependencies:
@@ -631,7 +631,7 @@ reference please execute:
 If files are missing you can add them by copying an existing reference file.

 If the module is internal it should be part of the internal reference
-located in :file:`docs/internals/reference/`.  If the module is public
+located in :file:`docs/internals/reference/`. If the module is public
 it should be located in :file:`docs/reference/`.

 For example if reference is missing for the module ``celery.worker.awesome``
@@ -724,7 +724,7 @@ is following the conventions.

 .. _`PEP-257`: http://www.python.org/dev/peps/pep-0257/

-* Lines should not exceed 78 columns.
+* Lines shouldn't exceed 78 columns.

   You can enforce this in :command:`vim` by setting the ``textwidth`` option:
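
Presumably the usual setting:

.. code-block:: vim

    set textwidth=78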
@@ -777,12 +777,12 @@ is following the conventions.
         from __future__ import absolute_import

     * If the module uses the :keyword:`with` statement and must be compatible
-      with Python 2.5 (celery is not) then it must also enable that::
+      with Python 2.5 (celery isn't) then it must also enable that::

         from __future__ import with_statement

     * Every future import must be on its own line, as older Python 2.5
-      releases did not support importing multiple features on the
+      releases didn't support importing multiple features on the
       same future import line::

         # Good
@@ -792,12 +792,12 @@ is following the conventions.
         # Bad
         from __future__ import absolute_import, with_statement

-     (Note that this rule does not apply if the package does not include
+     (Note that this rule doesn't apply if the package doesn't include
      support for Python 2.5)


 * Note that we use "new-style" relative imports when the distribution
-  does not support Python versions below 2.5
+  doesn't support Python versions below 2.5

     This requires Python 2.5 or later:

@@ -827,7 +827,7 @@ that require third-party libraries must be added.
         pycassa

     These are pip requirement files so you can have version specifiers and
-    multiple packages are separated by newline.  A more complex example could
+    multiple packages are separated by newline. A more complex example could
     be:

     .. code-block:: text
@@ -862,7 +862,7 @@ that require third-party libraries must be added.

 That's all that needs to be done, but remember that if your feature
 adds additional configuration options then these need to be documented
-in :file:`docs/configuration.rst`.  Also all settings need to be added to the
+in :file:`docs/configuration.rst`. Also all settings need to be added to the
 :file:`celery/app/defaults.py` module.

 Result backends require a separate section in the :file:`docs/configuration.rst`
@@ -877,7 +877,7 @@ This is a list of people that can be contacted for questions
 regarding the official git repositories, PyPI packages
 and Read the Docs pages.

-If the issue is not an emergency then it is better
+If the issue isn't an emergency then it's better
 to :ref:`report an issue <reporting-bugs>`.


@@ -990,7 +990,7 @@ Promise/deferred implementation.
 ------------

 Fork of multiprocessing containing improvements
-that will eventually be merged into the Python stdlib.
+that'll eventually be merged into the Python stdlib.

 :git: https://github.com/celery/billiard
 :CI: http://travis-ci.org/#!/celery/billiard/
@@ -1087,7 +1087,7 @@ The version number must be updated two places:
     * :file:`docs/include/introduction.txt`

 After you have changed these files you must render
-the :file:`README` files.  There is a script to convert sphinx syntax
+the :file:`README` files. There's a script to convert sphinx syntax
 to generic reStructured Text syntax, and the make target `readme`
 does this for you:
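
That is, after bumping the version, presumably:

.. code-block:: console

    $ make readme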

+ 1 - 1
docs/copyright.rst

@@ -9,7 +9,7 @@ by Ask Solem

 Copyright |copy| 2009-2016, Ask Solem.

-All rights reserved.  This material may be copied or distributed only
+All rights reserved. This material may be copied or distributed only
 subject to the terms and conditions set forth in the `Creative Commons
 Attribution-ShareAlike 4.0 International
 <http://creativecommons.org/licenses/by-sa/4.0/legalcode>`_ license.

+ 16 - 23
docs/django/first-steps-with-django.rst

@@ -12,9 +12,9 @@ Using Celery with Django
     Previous versions of Celery required a separate library to work with Django,
     but since 3.1 this is no longer the case. Django is supported out of the
     box now so this document only contains a basic way to integrate Celery and
-    Django.  You will use the same API as non-Django users so it's recommended that
-    you read the :ref:`first-steps` tutorial
-    first and come back to this tutorial.  When you have a working example you can
+    Django. You'll use the same API as non-Django users, so it's recommended
+    that you read the :ref:`first-steps` tutorial
+    first and come back to this tutorial. When you have a working example you can
     continue to the :ref:`next-steps` guide.

 To use Celery with your Django project you must first define
@@ -36,7 +36,7 @@ that defines the Celery instance:
 .. literalinclude:: ../../examples/django/proj/celery.py

 Then you need to import this app in your :file:`proj/proj/__init__.py`
-module.  This ensures that the app is loaded when Django starts
+module. This ensures that the app is loaded when Django starts
 so that the ``@shared_task`` decorator (mentioned later) will use it:

 :file:`proj/proj/__init__.py`:
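
The included file isn't shown in this hunk, but it conventionally amounts
to something like:

.. code-block:: python

    from __future__ import absolute_import

    # This will make sure the app is always imported when
    # Django starts so that @shared_task will use this app.
    from .celery import app as celery_app  # noqa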
@@ -49,7 +49,7 @@ both the app and tasks, like in the :ref:`tut-celery` tutorial.

 Let's break down what happens in the first module,
 first we import absolute imports from the future, so that our
-``celery.py`` module will not clash with the library:
+``celery.py`` module won't clash with the library:

 .. code-block:: python

@@ -63,7 +63,7 @@ for the :program:`celery` command-line program:
     os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

 You don't need this line, but it saves you from always passing in the
-settings module to the celery program.  It must always come before
+settings module to the celery program. It must always come before
 creating the app instances, which is what we do next:

 .. code-block:: python
@@ -74,7 +74,7 @@ This is our instance of the library, you can have many instances
 but there's probably no reason for that when using Django.

 We also add the Django settings module as a configuration source
-for Celery.  This means that you don't have to use multiple
+for Celery. This means that you don't have to use multiple
 configuration files, and instead configure Celery directly
 from the Django settings; but you can also separate them if wanted.
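
In the example project this amounts to a one-liner (sketch):

.. code-block:: python

    app.config_from_object('django.conf:settings')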
@@ -110,13 +110,13 @@ of your installed apps, following the ``tasks.py`` convention::
         - models.py


-This way you do not have to manually add the individual modules
-to the :setting:`CELERY_IMPORTS <imports>` setting.  The ``lambda`` so that the
+This way you don't have to manually add the individual modules
+to the :setting:`CELERY_IMPORTS <imports>` setting. The ``lambda`` is there so that the
 auto-discovery can happen only when needed, and so that importing your
-module will not evaluate the Django settings object.
+module won't evaluate the Django settings object.
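
That is, something like:

.. code-block:: python

    from django.conf import settings

    app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)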
 Finally, the ``debug_task`` example is a task that dumps
-its own request information.  This is using the new ``bind=True`` task option
+its own request information. This is using the new ``bind=True`` task option
 introduced in Celery 3.1 to easily refer to the current task instance.

 Using the ``@shared_task`` decorator
@@ -159,23 +159,16 @@ To use this with your project you need to follow these four steps:

     This step will create the tables used to store results
     when using the database result backend and the tables used
-    by the database periodic task scheduler.  You can skip
+    by the database periodic task scheduler. You can skip
     this step if you don't use these.

-    If you are using Django 1.7+ or south_, you'll want to:
+    Create the tables by migrating your database:

     .. code-block:: console

         $ python manage.py migrate djcelery

-    For those who are on Django 1.6 or lower and not using south, a normal
-    ``syncdb`` will work:
-
-    .. code-block:: console
-
-        $ python manage.py syncdb
-
-4.  Configure celery to use the :pypi:`django-celery` backend.
+4. Configure celery to use the :pypi:`django-celery` backend.

     For the database backend you must use:

@@ -213,10 +206,10 @@ To use this with your project you need to follow these four steps:
 Starting the worker process
 ===========================

-In a production environment you will want to run the worker in the background
+In a production environment you'll want to run the worker in the background
 as a daemon - see :ref:`daemonizing` - but for testing and
 development it is useful to be able to start a worker instance by using the
-:program:`celery worker` manage command, much as you would use Django's
+:program:`celery worker` manage command, much as you'd use Django's
 :command:`manage.py runserver`:

 .. code-block:: console

+ 64 - 64
docs/faq.rst

@@ -18,7 +18,7 @@ What kinds of things should I use Celery for?
 ---------------------------------------------

 **Answer:** `Queue everything and delight everyone`_ is a good article
-describing why you would use a queue in a web context.
+describing why you'd use a queue in a web context.

 .. _`Queue everything and delight everyone`:
     http://decafbad.com/blog/2008/07/04/queue-everything-and-delight-everyone
@@ -62,8 +62,8 @@ The numbers as of this writing are:
     - tests: 14,209 lines.
     - backends, contrib, compat utilities: 9,032 lines.

-Lines of code is not a useful metric, so
-even if Celery did consist of 50k lines of code you would not
+Lines of code isn't a useful metric, so
+even if Celery did consist of 50k lines of code you wouldn't
 be able to draw any conclusions from such a number.

 Does Celery have many dependencies?
@@ -85,8 +85,8 @@ celery
 - `kombu`_

 Kombu is part of the Celery ecosystem and is the library used
-to send and receive messages.  It is also the library that enables
-us to support many different message brokers.  It is also used by the
+to send and receive messages. It's also the library that enables
+us to support many different message brokers. It's also used by the
 OpenStack project, and many others, validating the choice to separate
 it from the Celery code-base.

@@ -95,10 +95,10 @@ it from the Celery code-base.
 - `billiard`_

 Billiard is a fork of the Python multiprocessing module containing
-many performance and stability improvements.  It is an eventual goal
+many performance and stability improvements. It's an eventual goal
 that these improvements will be merged back into Python one day.

-It is also used for compatibility with older Python versions
+It's also used for compatibility with older Python versions
 that don't come with the multiprocessing module.

 .. _`billiard`: http://pypi.python.org/pypi/billiard
@@ -113,9 +113,9 @@ The pytz module provides timezone definitions and related tools.
 ~~~~~~~~~~~~~~~~~

 If you use :pypi:`django-celery` then you don't have to install Celery
-separately, as it will make sure that the required version is installed.
+separately, as it'll make sure that the required version is installed.

-:pypi:`django-celery` does not have any other dependencies.
+:pypi:`django-celery` doesn't have any other dependencies.

 kombu
 ~~~~~
@@ -124,7 +124,7 @@ Kombu depends on the following packages:

 - `amqp`_

-The underlying pure-Python amqp client implementation.  AMQP being the default
+The underlying pure-Python amqp client implementation. AMQP being the default
 broker this is a natural dependency.

 .. _`amqp`: http://pypi.python.org/pypi/amqp
@@ -144,7 +144,7 @@ Is Celery heavy-weight?
 Celery poses very little overhead both in memory footprint and
 performance.

-But please note that the default configuration is not optimized for time nor
+But please note that the default configuration isn't optimized for time or
 space, see the :ref:`guide-optimizing` guide for more information.

 .. _faq-serializion-is-a-choice:
@@ -158,11 +158,11 @@ Celery can support any serialization scheme and has built-in support for
 JSON, YAML, Pickle and msgpack. Also, as every task is associated with a
 content type, you can even send one task using pickle, and another using JSON.

-The default serialization format is pickle simply because it is
+The default serialization format is pickle simply because it's
 convenient (it supports sending complex Python objects as task arguments).

 If you need to communicate with other languages you should change
-to a serialization format that is suitable for that.
+to a serialization format that's suitable for that.

 You can set a global default serializer, the default serializer for a
 particular Task, or even what serializer to use when sending a single task
@@ -189,10 +189,10 @@ See :ref:`brokers` for more information.

 Redis as a broker won't perform as well as
 an AMQP broker, but the combination RabbitMQ as broker and Redis as a result
-store is commonly used.  If you have strict reliability requirements you are
+store is commonly used. If you have strict reliability requirements you're
 encouraged to use RabbitMQ or another AMQP broker. Some transports also use
-polling, so they are likely to consume more resources. However, if you for
-some reason are not able to use AMQP, feel free to use these alternatives.
+polling, so they're likely to consume more resources. However, if you for
+some reason aren't able to use AMQP, feel free to use these alternatives.
 They will probably work fine for most use cases, and note that the above
 points are not specific to Celery; if using Redis/database as a queue worked
 fine for you before, it probably will now. You can always upgrade later
@@ -207,10 +207,10 @@ Is Celery multilingual?

 :mod:`~celery.bin.worker` is an implementation of Celery in Python. If the
 language has an AMQP client, there shouldn't be much work to create a worker
-in your language.  A Celery worker is just a program connecting to the broker
+in your language. A Celery worker is just a program connecting to the broker
 to process messages.

-Also, there's another way to be language independent, and that is to use REST
+Also, there's another way to be language independent, and that's to use REST
 tasks, instead of your tasks being functions, they're URLs. With this
 information you can even create simple web servers that enable preloading of
 code. Simply expose an endpoint that performs an operation, and create a task
@@ -242,8 +242,8 @@ Transaction Model and Locking`_ in the MySQL user manual.

 .. _faq-worker-hanging:

-The worker is not doing anything, just hanging
-----------------------------------------------
+The worker isn't doing anything, just hanging
+---------------------------------------------

 **Answer:** See `MySQL is throwing deadlock errors, what can I do?`_.
             or `Why is Task.delay/apply\* just hanging?`.
@@ -261,10 +261,10 @@ using MySQL, see `MySQL is throwing deadlock errors, what can I do?`_.
 Why is Task.delay/apply\*/the worker just hanging?
 --------------------------------------------------

-**Answer:** There is a bug in some AMQP clients that will make it hang if
+**Answer:** There's a bug in some AMQP clients that'll make it hang if
 it's not able to authenticate the current user, the password doesn't match or
-the user does not have access to the virtual host specified. Be sure to check
-your broker logs (for RabbitMQ that is :file:`/var/log/rabbitmq/rabbit.log` on
+the user doesn't have access to the virtual host specified. Be sure to check
+your broker logs (for RabbitMQ that's :file:`/var/log/rabbitmq/rabbit.log` on
 most systems), it usually contains a message describing the reason.

 .. _faq-worker-on-freebsd:
@@ -317,7 +317,7 @@ worker process taking the messages hostage. This could happen if the worker
 wasn't properly shut down.

 When a message is received by a worker the broker waits for it to be
-acknowledged before marking the message as processed. The broker will not
+acknowledged before marking the message as processed. The broker won't
 re-send that message to another consumer until the consumer is shut down
 properly.

@@ -326,7 +326,7 @@ them::

     ps auxww | grep celeryd | awk '{print $2}' | xargs kill

-You might have to wait a while until all workers have finished the work they're
+You may have to wait a while until all workers have finished the work they're
 doing. If it's still hanging after a long time you can kill them by force
 with::

@@ -394,9 +394,9 @@ I've purged messages, but there are still messages left in the queue?
 ---------------------------------------------------------------------

 **Answer:** Tasks are acknowledged (removed from the queue) as soon
-as they are actually executed. After the worker has received a task, it will
-take some time until it is actually executed, especially if there are a lot
-of tasks already waiting for execution. Messages that are not acknowledged are
+as they're actually executed. After the worker has received a task, it will
+take some time until it's actually executed, especially if there are a lot
+of tasks already waiting for execution. Messages that aren't acknowledged are
 held on to by the worker until it closes the connection to the broker (AMQP
 server). When that connection is closed (e.g. because the worker was stopped)
 the tasks will be re-sent by the broker to the next available worker (or the
@@ -437,14 +437,14 @@ Security
 Isn't using `pickle` a security concern?
 ----------------------------------------

-**Answer**: Yes, indeed it is.
+**Answer**: Yes, indeed.

-You are right to have a security concern, as this can indeed be a real issue.
-It is essential that you protect against unauthorized
+You're right to have a security concern, as this can indeed be a real issue.
+It's essential that you protect against unauthorized
 access to your broker, databases and other services transmitting pickled
 data.

-Note that this is not just something you should be aware of with Celery, for
+Note that this isn't just something you should be aware of with Celery, for
 example also Django uses pickle for its cache client.

 For the task messages you can set the :setting:`task_serializer`
@@ -461,7 +461,7 @@ Can messages be encrypted?
 **Answer**: Some AMQP brokers support using SSL (including RabbitMQ).
 You can enable this using the :setting:`broker_use_ssl` setting.

-It is also possible to add additional encryption and security to messages,
+It's also possible to add additional encryption and security to messages,
 if you have a need for this then you should contact the :ref:`mailing-list`.

 Is it safe to run :program:`celery worker` as root?
@@ -489,7 +489,7 @@ http://www.rabbitmq.com/faq.html#node-runs-out-of-memory
 .. note::

     This is no longer the case, RabbitMQ versions 2.0 and above
-    includes a new persister, that is tolerant to out of memory
+    include a new persister that's tolerant to out of memory
     errors. RabbitMQ 2.1 or higher is recommended for Celery.

     If you're still running an older version of RabbitMQ and experience
@@ -497,8 +497,8 @@ http://www.rabbitmq.com/faq.html#node-runs-out-of-memory

 Misconfiguration of Celery can eventually lead to a crash
 on older versions of RabbitMQ. Even if it doesn't crash, this
-can still consume a lot of resources, so it is very
-important that you are aware of the common pitfalls.
+can still consume a lot of resources, so it's
+important that you're aware of the common pitfalls.

 * Events.

@@ -514,11 +514,11 @@ When running with the AMQP result backend, every task result will be sent
 as a message. If you don't collect these results, they will build up and
 RabbitMQ will eventually run out of memory.

-This result backend is now deprecated so you should not be using it.
+This result backend is now deprecated so you shouldn't be using it.
 Use either the RPC backend for rpc-style calls, or a persistent backend
 if you need multi-consumer access to results.

-Results expire after 1 day by default.  It may be a good idea
+Results expire after 1 day by default. It may be a good idea
 to lower this value by configuring the :setting:`result_expires`
 setting.

@@ -539,13 +539,13 @@ If you don't use the results for a task, make sure you set the
 Can I use Celery with ActiveMQ/STOMP?
 -------------------------------------

-**Answer**: No.  It used to be supported by Carrot,
-but is not currently supported in Kombu.
+**Answer**: No. It used to be supported by Carrot,
+but isn't currently supported in Kombu.

 .. _faq-non-amqp-missing-features:

-What features are not supported when not using an AMQP broker?
---------------------------------------------------------------
+What features aren't supported when not using an AMQP broker?
+-------------------------------------------------------------

 This is an incomplete list of features not available when
 using the virtual transports:
@@ -575,7 +575,7 @@ The connection pool is enabled by default since version 2.5.
 :command:`sudo` in a :mod:`subprocess` returns :const:`None`
 ------------------------------------------------------------

-There is a :command:`sudo` configuration option that makes it illegal
+There's a :command:`sudo` configuration option that makes it illegal
 for processes without a tty to run :command:`sudo`:

 .. code-block:: text
@@ -583,22 +583,22 @@ for process without a tty to run :command:`sudo`:
     Defaults requiretty

 If you have this configuration in your :file:`/etc/sudoers` file then
-tasks will not be able to call :command:`sudo` when the worker is
-running as a daemon.  If you want to enable that, then you need to remove
+tasks won't be able to call :command:`sudo` when the worker is
+running as a daemon. If you want to enable that, then you need to remove
 the line from :file:`/etc/sudoers`.

 See: http://timelordz.com/wiki/Apache_Sudo_Commands

 .. _faq-deletes-unknown-tasks:

-Why do workers delete tasks from the queue if they are unable to process them?
-------------------------------------------------------------------------------
+Why do workers delete tasks from the queue if they're unable to process them?
+-----------------------------------------------------------------------------
 **Answer**:

 The worker rejects unknown tasks, messages with encoding errors and messages
 that don't contain the proper fields (as per the task message protocol).

-If it did not reject them they could be redelivered again and again,
+If it didn't reject them they could be redelivered again and again,
 causing a loop.

 Recent versions of RabbitMQ have the ability to configure a dead-letter
@@ -611,7 +611,7 @@ Can I call a task by name?

 **Answer**: Yes. Use :meth:`@send_task`.
 You can also call a task by name from any language
-that has an AMQP client.
+with an AMQP client.

     >>> app.send_task('tasks.add', args=[2, 2], kwargs={})
     <AsyncResult: 373550e8-b9a0-4666-bc61-ace01fa4f91d>
@@ -634,7 +634,7 @@ For more information see :ref:`task-request-info`.
 Can I specify a custom task_id?
 -------------------------------

-**Answer**: Yes.  Use the `task_id` argument to :meth:`Task.apply_async`::
+**Answer**: Yes. Use the `task_id` argument to :meth:`Task.apply_async`::

     >>> task.apply_async(args, kwargs, task_id='…')

@@ -642,14 +642,14 @@ Can I specify a custom task_id?
 Can I use decorators with tasks?
 --------------------------------

-**Answer**: Yes.  But please see note in the sidebar at :ref:`task-basics`.
+**Answer**: Yes, but please see the note in the sidebar at :ref:`task-basics`.

 .. _faq-natural-task-ids:

 Can I use natural task ids?
 ---------------------------

-**Answer**: Yes, but make sure it is unique, as the behavior
+**Answer**: Yes, but make sure it's unique, as the behavior
 for two tasks existing with the same id is undefined.

 The world will probably not explode, but at the worst
@@ -733,7 +733,7 @@ See :doc:`userguide/routing` for more information.
 Can I disable prefetching of tasks?
 -----------------------------------

-**Answer**: The term prefetch must have confused you, as as in Celery it's only used
+**Answer**: The AMQP term "prefetch" is confusing, as it's only used
 to describe the task prefetching *limits*.

 Disabling the prefetch limits is possible, but that means the worker will
@@ -773,8 +773,8 @@ RabbitMQ supports priorities since version 3.5.0.
 Redis transport emulates support of priorities.

 You can also prioritize work by routing high priority tasks
-to different workers.  In the real world this may actually work better
-than per message priorities.  You can use this in combination with rate
+to different workers. In the real world this may actually work better
+than per message priorities. You can use this in combination with rate
 limiting to achieve a responsive system.

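
 For illustration, a minimal sketch of such routing (the queue name
 ``priority.high`` and the task name are made-up examples):

 .. code-block:: python

     # Send the hypothetical tasks.urgent task to its own queue.
     app.conf.task_routes = {
         'tasks.urgent': {'queue': 'priority.high'},
     }

 A dedicated worker can then consume only from that queue:

 .. code-block:: console

     $ celery -A proj worker -Q priority.high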
 .. _faq-acks_late-vs-retry:

@@ -786,16 +786,16 @@ Should I use retry or acks_late?
 to use both.

 `Task.retry` is used to retry tasks, notably for expected errors that
-is catch-able with the :keyword:`try` block. The AMQP transaction is not used
-for these errors: **if the task raises an exception it is still acknowledged!**
+are catch-able with the :keyword:`try` block. The AMQP transaction isn't used
+for these errors: **if the task raises an exception it's still acknowledged!**

 The `acks_late` setting would be used when you need the task to be
 executed again if the worker (for some reason) crashes mid-execution.
-It's important to note that the worker is not known to crash, and if
-it does it is usually an unrecoverable error that requires human
+It's important to note that the worker isn't known to crash, and if
+it does it's usually an unrecoverable error that requires human
 intervention (bug in the worker, or task code).

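
 As a hedged sketch of the two mechanisms together (the task body and its
 URL handling are invented for illustration):

 .. code-block:: python

     import requests

     @app.task(bind=True, acks_late=True)
     def fetch(self, url):
         try:
             return requests.get(url).text
         except requests.ConnectionError as exc:
             # An expected, catchable error: retry explicitly.
             raise self.retry(exc=exc, countdown=5)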
-In an ideal world you could safely retry any task that has failed, but
+In an ideal world you could safely retry any task that's failed, but
 this is rarely the case. Imagine the following task:

 .. code-block:: python
@@ -808,7 +808,7 @@ this is rarely the case. Imagine the following task:
         copy_file_to_destination(filename, tmpfile)

 If this crashed in the middle of copying the file to its destination
-the world would contain incomplete state. This is not a critical
+the world would contain incomplete state. This isn't a critical
 scenario of course, but you can probably imagine something far more
 sinister. So for ease of programming we have less reliability;
 it's a good default, users who require it and know what they
@@ -850,7 +850,7 @@ Also make sure you kill the main worker process, not its child processes.
 You can direct a kill signal to a specific child process if you know the
 process is currently executing a task that the worker shutdown depends on,
 but this also means that a ``WorkerLostError`` state will be set for the
-task so the task will not run again.
+task so the task won't run again.

 Identifying the type of process is easier if you have installed the
 :pypi:`setproctitle` module:
@@ -859,7 +859,7 @@ Identifying the type of process is easier if you have installed the

     $ pip install setproctitle

-With this library installed you will be able to see the type of process in
+With this library installed you'll be able to see the type of process in
 :command:`ps` listings, but the worker must be restarted for this to take effect.

 .. seealso::
@@ -903,7 +903,7 @@ Several database tables are created by default, these relate to
     backward compatibility).

     The results are stored in the ``TaskMeta`` and ``TaskSetMeta`` models.
-    *these tables are not created if another result backend is configured*.
+    *these tables aren't created if another result backend is configured*.

 .. _faq-windows:


+ 3 - 3
docs/getting-started/brokers/index.rst

@@ -42,12 +42,12 @@ individual transport (see :ref:`broker_toc`).
 | *Zookeeper*   | Experimental | No             | No                 |
 +---------------+--------------+----------------+--------------------+

-Experimental brokers may be functional but they do not have
+Experimental brokers may be functional but they don't have
 dedicated maintainers.

-Missing monitor support means that the transport does not
+Missing monitor support means that the transport doesn't
 implement events, and as such Flower, `celery events`, `celerymon`
-and other event-based monitoring tools will not work.
+and other event-based monitoring tools won't work.

 Remote control means the ability to inspect and manage workers
 at runtime using the `celery inspect` and `celery control` commands

+ 2 - 2
docs/getting-started/brokers/rabbitmq.rst

@@ -10,7 +10,7 @@
 Installation & Configuration
 ============================

-RabbitMQ is the default broker so it does not require any additional
+RabbitMQ is the default broker so it doesn't require any additional
 dependencies or initial configuration, other than the URL location of
 the broker instance you want to use:

@@ -107,7 +107,7 @@ shell (e.g. :file:`.bash_profile` or :file:`.profile`).
 Configuring the system host name
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-If you're using a DHCP server that is giving you a random host name, you need
+If you're using a DHCP server that's giving you a random host name, you need
 to permanently configure the host name. This is because RabbitMQ uses the host name
 to communicate with nodes.


+ 7 - 7
docs/getting-started/brokers/redis.rst

@@ -58,7 +58,7 @@ Visibility Timeout

 The visibility timeout defines the number of seconds to wait
 for the worker to acknowledge the task before the message is redelivered
-to another worker.  Be sure to see :ref:`redis-caveats` below.
+to another worker. Be sure to see :ref:`redis-caveats` below.

 This option is set via the :setting:`broker_transport_options` setting:

@@ -100,8 +100,8 @@ they will only be received by the active virtual host:

     app.conf.broker_transport_options = {'fanout_prefix': True}

-Note that you will not be able to communicate with workers running older
-versions or workers that does not have this setting enabled.
+Note that you won't be able to communicate with workers running older
+versions or workers that don't have this setting enabled.

 This setting will be the default in the future, so better to migrate
 sooner rather than later.
@@ -121,7 +121,7 @@ the workers may only subscribe to worker related events:
     app.conf.broker_transport_options = {'fanout_patterns': True}

 Note that this change is backward incompatible so all workers in the
-cluster must have this option enabled, or else they will not be able to
+cluster must have this option enabled, or else they won't be able to
 communicate.

 This option will be enabled by default in the future.
@@ -129,7 +129,7 @@ This option will be enabled by default in the future.
 Visibility timeout
 ------------------

-If a task is not acknowledged within the :ref:`redis-visibility_timeout`
+If a task isn't acknowledged within the :ref:`redis-visibility_timeout`
 the task will be redelivered to another worker and executed.

 This causes problems with ETA/countdown/retry tasks where the
@@ -137,14 +137,14 @@ time to execute exceeds the visibility timeout; in fact if that
 happens it will be executed again, and again in a loop.

 So you have to increase the visibility timeout to match
-the time of the longest ETA you are planning to use.
+the time of the longest ETA you're planning to use.

 Note that Celery will redeliver messages at worker shutdown,
 so having a long visibility timeout will only delay the redelivery
 of 'lost' tasks in the event of a power failure or forcefully terminated
 workers.

-Periodic tasks will not be affected by the visibility timeout,
+Periodic tasks won't be affected by the visibility timeout,
 as this is a concept separate from ETA/countdown.

 You can increase this timeout by configuring a transport option
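
 For illustration only (the one-hour value is an arbitrary assumption):

 .. code-block:: python

     app.conf.broker_transport_options = {'visibility_timeout': 3600}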

+ 13 - 13
docs/getting-started/brokers/sqs.rst

@@ -67,7 +67,7 @@ Visibility Timeout

 The visibility timeout defines the number of seconds to wait
 for the worker to acknowledge the task before the message is redelivered
-to another worker.  Also see caveats below.
+to another worker. Also see caveats below.

 This option is set via the :setting:`broker_transport_options` setting::

@@ -79,11 +79,11 @@ Polling Interval
 ----------------

 The polling interval decides the number of seconds to sleep between
-unsuccessful polls.  This value can be either an int or a float.
+unsuccessful polls. This value can be either an int or a float.
 By default the value is 1 second, which means that the worker will
 sleep for one second whenever there are no more messages to read.

-You should note that **more frequent polling is also more expensive, so increasing
+Note that **more frequent polling is also more expensive, so increasing
 the polling interval can save you money**.

 The polling interval can be set via the :setting:`broker_transport_options`
@@ -92,14 +92,14 @@ setting::
     broker_transport_options = {'polling_interval': 0.3}

 Very frequent polling intervals can cause *busy loops*, which results in the
-worker using a lot of CPU time.  If you need sub-millisecond precision you
+worker using a lot of CPU time. If you need sub-millisecond precision you
 should consider using another transport, like `RabbitMQ <broker-amqp>`,
 or `Redis <broker-redis>`.

 Queue Prefix
 ------------

-By default Celery will not assign any prefix to the queue names,
+By default Celery won't assign any prefix to the queue names.
 If you have other services using SQS you can configure it to do so
 using the :setting:`broker_transport_options` setting::

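
     # A sketch: the 'celery-' prefix is an arbitrary example value
     # for the queue_name_prefix transport option.
     broker_transport_options = {'queue_name_prefix': 'celery-'}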
@@ -111,7 +111,7 @@ using the :setting:`broker_transport_options` setting::
 Caveats
 =======

-- If a task is not acknowledged within the ``visibility_timeout``,
+- If a task isn't acknowledged within the ``visibility_timeout``,
   the task will be redelivered to another worker and executed.

     This causes problems with ETA/countdown/retry tasks where the
@@ -119,14 +119,14 @@ Caveats
     happens it will be executed again, and again in a loop.

     So you have to increase the visibility timeout to match
-    the time of the longest ETA you are planning to use.
+    the time of the longest ETA you're planning to use.

     Note that Celery will redeliver messages at worker shutdown,
     so having a long visibility timeout will only delay the redelivery
     of 'lost' tasks in the event of a power failure or forcefully terminated
     workers.

-    Periodic tasks will not be affected by the visibility timeout,
+    Periodic tasks won't be affected by the visibility timeout,
     as it is a concept separate from ETA/countdown.

     The maximum visibility timeout supported by AWS as of this writing
@@ -134,9 +134,9 @@ Caveats

         broker_transport_options = {'visibility_timeout': 43200}

-- SQS does not yet support worker remote control commands.
+- SQS doesn't yet support worker remote control commands.

-- SQS does not yet support events, and so cannot be used with
+- SQS doesn't yet support events, and so cannot be used with
   :program:`celery events`, :program:`celerymon` or the Django Admin
   monitor.

@@ -146,13 +146,13 @@ Results
 -------

 Multiple products in the Amazon Web Services family could be a good candidate
-to store or publish results with, but there is no such result backend included
+to store or publish results with, but there's no such result backend included
 at this point.

 .. warning::

-    Do not use the ``amqp`` result backend with SQS.
+    Don't use the ``amqp`` result backend with SQS.

     It will create one queue for every task, and the queues will
-    not be collected.  This could cost you money that would be better
+    not be collected. This could cost you money that would be better
     spent contributing an AWS result store backend back to Celery :)

+ 34 - 33
docs/getting-started/first-steps-with-celery.rst

@@ -6,14 +6,15 @@
 =========================

 Celery is a task queue with batteries included.
-It is easy to use so that you can get started without learning
-the full complexities of the problem it solves. It is designed
+It's easy to use so that you can get started without learning
+the full complexities of the problem it solves. It's designed
 around best practices so that your product can scale
 and integrate with other languages, and it comes with the
 tools and support you need to run such a system in production.

-In this tutorial you will learn the absolute basics of using Celery.
-You will learn about;
+In this tutorial you'll learn the absolute basics of using Celery.
+
+Learn about:

 - Choosing and installing a message transport (broker).
 - Installing Celery and creating your first task.
@@ -22,7 +23,7 @@ You will learn about;
   and inspecting return values.

 Celery may seem daunting at first - but don't worry - this tutorial
-will get you started in no time. It is deliberately kept simple, so
+will get you started in no time. It's deliberately kept simple, so as
 to not confuse you with advanced features.
 After you have finished this tutorial
 it's a good idea to browse the rest of the documentation,
@@ -53,7 +54,7 @@ Detailed information about using RabbitMQ with Celery:

 .. _`RabbitMQ`: http://www.rabbitmq.com/

-If you are using Ubuntu or Debian install RabbitMQ by executing this
+If you're using Ubuntu or Debian, install RabbitMQ by executing this
 command:

 .. code-block:: console
@@ -103,11 +104,11 @@ Application
 ===========

 The first thing you need is a Celery instance, which is called the celery
-application or just "app" for short.  Since this instance is used as
+application or just "app" for short. Since this instance is used as
 the entry-point for everything you want to do in Celery, like creating tasks and
 managing workers, it must be possible for other modules to import it.

-In this tutorial you will keep everything contained in a single module,
+In this tutorial we keep everything contained in a single module,
 but for larger projects you want to create
 a :ref:`dedicated module <project-layout>`.

@@ -127,7 +128,7 @@ The first argument to :class:`~celery.app.Celery` is the name of the current mod
 this is needed so that names can be automatically generated; the second
 argument is the broker keyword argument which specifies the URL of the
 message broker you want to use, using RabbitMQ here, which is already the
-default option.  See :ref:`celerytut-broker` above for more choices,
+default option. See :ref:`celerytut-broker` above for more choices,
 e.g. for RabbitMQ you can use ``amqp://localhost``, or for Redis you can
 use ``redis://localhost``.

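
 For instance, a sketch of what this looks like (module name ``tasks`` and
 a local RabbitMQ broker are this tutorial's assumptions):

 .. code-block:: python

     from celery import Celery

     app = Celery('tasks', broker='amqp://localhost')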
@@ -148,10 +149,10 @@ argument:
 .. note::

     See the :ref:`celerytut-troubleshooting` section if the worker
-    does not start.
+    doesn't start.

-In production you will want to run the worker in the
-background as a daemon.  To do this you need to use the tools provided
+In production you'll want to run the worker in the
+background as a daemon. To do this you need to use the tools provided
 by your platform, or something like `supervisord`_ (see :ref:`daemonizing`
 for more information).

@@ -198,7 +199,7 @@ Keeping Results
 ===============

 If you want to keep track of the tasks' states, Celery needs to store or send
-the states somewhere.  There are several
+the states somewhere. There are several
 built-in result backends to choose from: `SQLAlchemy`_/`Django`_ ORM,
 `Memcached`_, `Redis`_, :ref:`RPC <conf-rpc-result-backend>` (`RabbitMQ`_/AMQP),
 and -- or you can define your own.
@@ -208,8 +209,8 @@ and -- or you can define your own.
 .. _`SQLAlchemy`: http://www.sqlalchemy.org/
 .. _`Django`: http://djangoproject.com

-For this example you will use the `rpc` result backend, which sends states
-back as transient messages.  The backend is specified via the ``backend`` argument to
+For this example we use the `rpc` result backend, which sends states
+back as transient messages. The backend is specified via the ``backend`` argument to
 :class:`@Celery` (or via the :setting:`task_result_backend` setting if
 you choose to use a configuration module):

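
 A sketch with the same assumed module name as before:

 .. code-block:: python

     app = Celery('tasks', backend='rpc://', broker='amqp://localhost')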
@@ -276,7 +277,7 @@ Configuration

 Celery, like a consumer appliance, doesn't need much to be operated.
 It has an input and an output, where you must connect the input to a broker and maybe
-the output to a result backend if so wanted.  But if you look closely at the back
+the output to a result backend if so wanted. But if you look closely at the back
 there's a lid revealing loads of sliders, dials and buttons: this is the configuration.

 The default configuration should be good enough for most uses, but there are
@@ -294,7 +295,7 @@ task payloads by changing the :setting:`task_serializer` setting:

     app.conf.task_serializer = 'json'

-If you are configuring many settings at once you can use ``update``:
+If you're configuring many settings at once you can use ``update``:

 .. code-block:: python

@@ -307,8 +308,8 @@ If you are configuring many settings at once you can use ``update``:
     )

 For larger projects using a dedicated configuration module is useful;
-in fact you are discouraged from hard coding
-periodic task intervals and task routing options, as it is much
+in fact you're discouraged from hard coding
+periodic task intervals and task routing options, as it's much
 better to keep this in a centralized location, and especially for libraries
 it makes it possible for users to control how they want your tasks to behave,
 you can also imagine your SysAdmin making simple changes to the configuration
@@ -349,7 +350,7 @@ contain any syntax errors, you can try to import it:

 For a complete reference of configuration options, see :ref:`configuration`.

-To demonstrate the power of configuration files, this is how you would
+To demonstrate the power of configuration files, this is how you'd
 route a misbehaving task to a dedicated queue:

 :file:`celeryconfig.py`:
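
 A sketch of that file's contents (the queue name is an arbitrary example):

 .. code-block:: python

     task_routes = {
         'tasks.add': 'low-priority',
     }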
@@ -372,7 +373,7 @@ instead, so that only 10 tasks of this type can be processed in a minute
         'tasks.add': {'rate_limit': '10/m'}
     }

-If you are using RabbitMQ or Redis as the
+If you're using RabbitMQ or Redis as the
 broker then you can also direct the workers to set a new rate limit
 for the task at runtime:

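
 For illustration, a sketch of the runtime call (``rate_limit`` is a method
 on the app's control API):

 .. code-block:: pycon

     >>> app.control.rate_limit('tasks.add', '10/m')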
@@ -401,8 +402,8 @@ Troubleshooting

 There's also a troubleshooting section in the :ref:`faq`.

-Worker does not start: Permission Error
----------------------------------------
+Worker doesn't start: Permission Error
+--------------------------------------

 - If you're using Debian, Ubuntu or other Debian-based distributions:

@@ -420,30 +421,30 @@ Worker does not start: Permission Error
     If you provide any of the :option:`--pidfile <celery worker --pidfile>`,
     :option:`--logfile <celery worker --logfile>` or
     :option:`--statedb <celery worker --statedb>` arguments, then you must
-    make sure that they point to a file/directory that is writable and
+    make sure that they point to a file/directory that's writable and
     readable by the user starting the worker.

-Result backend does not work or tasks are always in ``PENDING`` state.
-----------------------------------------------------------------------
+Result backend doesn't work or tasks are always in ``PENDING`` state.
+---------------------------------------------------------------------

-All tasks are :state:`PENDING` by default, so the state would have been
-better named "unknown".  Celery does not update any state when a task
+All tasks are :state:`PENDING` by default, so the state would've been
+better named "unknown". Celery doesn't update any state when a task
 is sent, and any task with no history is assumed to be pending (you know
 the task id after all).

-1) Make sure that the task does not have ``ignore_result`` enabled.
+1) Make sure that the task doesn't have ``ignore_result`` enabled.

     Enabling this option will force the worker to skip updating
     states.

-2) Make sure the :setting:`task_ignore_result` setting is not enabled.
+2) Make sure the :setting:`task_ignore_result` setting isn't enabled.

-3) Make sure that you do not have any old workers still running.
+3) Make sure that you don't have any old workers still running.

     It's easy to start multiple workers by accident, so make sure
     that the previous worker is properly shut down before you start a new one.

-    An old worker that is not configured with the expected result backend
+    An old worker that isn't configured with the expected result backend
     may be running and is hijacking the tasks.

     The :option:`--pidfile <celery worker --pidfile>` argument can be set to
@@ -452,7 +453,7 @@ the task id after all).
 4) Make sure the client is configured with the right backend.

     If for some reason the client is configured to use a different backend
-    than the worker, you will not be able to receive the result,
+    than the worker, you won't be able to receive the result,
     so make sure the backend is correct by inspecting it:

     .. code-block:: pycon

+ 9 - 9
docs/getting-started/introduction.rst

@@ -8,8 +8,8 @@
     :local:
     :depth: 1

-What is a Task Queue?
-=====================
+What's a Task Queue?
+====================

 Task queues are used as a mechanism to distribute work across threads or
 machines.
@@ -18,14 +18,14 @@ A task queue's input is a unit of work called a task. Dedicated worker
 processes constantly monitor task queues for new work to perform.

 Celery communicates via messages, usually using a broker
-to mediate between clients and workers.  To initiate a task, a client adds a
+to mediate between clients and workers. To initiate a task, a client adds a
 message to the queue, which the broker then delivers to a worker.

 A Celery system can consist of multiple workers and brokers, giving way
 to high availability and horizontal scaling.

 Celery is written in Python, but the protocol can be implemented in any
-language.  In addition to Python there's node-celery_ for Node.js,
+language. In addition to Python there's node-celery_ for Node.js,
 and a `PHP client`_.

 Language interoperability can also be achieved
@@ -46,7 +46,7 @@ What do I need?
     This is the last version to support Python 2.7,
     and from the next version (Celery 5.x) Python 3.6 or newer is required.

-    If you are running an older version of Python, you need to be running
+    If you're running an older version of Python, you need to be running
     an older version of Celery:

     - Python 2.6: Celery series 3.1 or earlier.
@@ -54,8 +54,8 @@ What do I need?
     - Python 2.4 was Celery series 2.2 or earlier.

     Celery is a project with minimal funding,
-    so we do not support Microsoft Windows.
-    Please do not open any issues related to that platform.
+    so we don't support Microsoft Windows.
+    Please don't open any issues related to that platform.

 *Celery* requires a message transport to send and receive messages.
 The RabbitMQ and Redis broker transports are feature complete,
@@ -203,7 +203,7 @@ Features
         - **User Components**

             Each worker component can be customized, and additional components
-            can be defined by the user.  The worker is built up using "bootsteps" — a
+            can be defined by the user. The worker is built up using "bootsteps" — a
             dependency graph enabling fine grained control of the worker's
             internals.

@@ -230,7 +230,7 @@ integration packages:
     | `Tornado`_         | `tornado-celery`_      |
     +--------------------+------------------------+

-The integration packages are not strictly necessary, but they can make
+The integration packages aren't strictly necessary, but they can make
 development easier, and sometimes they add important hooks like closing
 database connections at :manpage:`fork(2)`.


+ 40 - 38
docs/getting-started/next-steps.rst

@@ -4,11 +4,11 @@
  Next Steps
 ============

-The :ref:`first-steps` guide is intentionally minimal.  In this guide
-I will demonstrate what Celery offers in more detail, including
+The :ref:`first-steps` guide is intentionally minimal. In this guide
+I'll demonstrate what Celery offers in more detail, including
 how to add Celery support for your application and library.

-This document does not document all of Celery's features and
+This document doesn't cover all of Celery's features and
 best practices, so it's recommended that you also read the
 :ref:`User Guide <guide>`

@@ -37,7 +37,7 @@ Project layout::
     :language: python

 In this module you created our :class:`@Celery` instance (sometimes
-referred to as the *app*).  To use Celery within your project
+referred to as the *app*). To use Celery within your project
 you simply import this instance.

 - The ``broker`` argument specifies the URL of the broker to use.
@@ -50,14 +50,14 @@ you simply import this instance.
     While results are disabled by default I use the RPC result backend here
     because I demonstrate how retrieving results works later; you may want to use
     a different backend for your application. They all have different
-    strengths and weaknesses.  If you don't need results it's better
-    to disable them.  Results can also be disabled for individual tasks
+    strengths and weaknesses. If you don't need results it's better
+    to disable them. Results can also be disabled for individual tasks
     by setting the ``@task(ignore_result=True)`` option.

     See :ref:`celerytut-keeping-results` for more information.

- The ``include`` argument is a list of modules to import when
-  the worker starts.  You need to add our tasks module here so
+  the worker starts. You need to add our tasks module here so
   that the worker is able to find our tasks, as sketched below.

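
 Putting those arguments together, a sketch of the app definition (the
 ``proj`` module layout is the one assumed by this guide):

 .. code-block:: python

     from celery import Celery

     app = Celery('proj',
                  broker='amqp://',
                  backend='rpc://',
                  include=['proj.tasks'])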
 :file:`proj/tasks.py`
@@ -104,7 +104,7 @@ it can be processed.
 The default concurrency number is the number of CPU's on that machine
 (including cores); you can specify a custom number using
 the :option:`celery worker -c` option.
-There is no recommended value, as the optimal number depends on a number of
+There's no recommended value, as the optimal number depends on a number of
 factors, but if your tasks are mostly I/O-bound then you can try to increase
 it; experimentation has shown that adding more than twice the number
 of CPU's is rarely effective, and likely to degrade performance
@@ -120,7 +120,7 @@ and Flower - the real-time Celery monitor, which you can read about in
 the :ref:`Monitoring and Management guide <guide-monitoring>`.

 -- *Queues* is the list of queues that the worker will consume
-tasks from.  The worker can be told to consume from several queues
+tasks from. The worker can be told to consume from several queues
 at once, and this is used to route messages to specific workers
 as a means for Quality of Service, separation of concerns,
 and prioritization, all described in the :ref:`Routing Guide
@@ -138,13 +138,13 @@ These options are described in more detailed in the :ref:`Workers Guide <guide-w
 Stopping the worker
 ~~~~~~~~~~~~~~~~~~~

-To stop the worker simply hit :kbd:`Control-c`.  A list of signals supported
+To stop the worker simply hit :kbd:`Control-c`. A list of signals supported
 by the worker is detailed in the :ref:`Workers Guide <guide-workers>`.

 In the background
 ~~~~~~~~~~~~~~~~~

-In production you will want to run the worker in the background, this is
+In production you'll want to run the worker in the background; this is
 described in detail in the :ref:`daemonization tutorial <daemonizing>`.

 The daemonization scripts use the :program:`celery multi` command to
@@ -178,8 +178,8 @@ or stop it:

     $ celery multi stop w1 -A proj -l info

-The ``stop`` command is asynchronous so it will not wait for the
-worker to shutdown.  You will probably want to use the ``stopwait`` command
+The ``stop`` command is asynchronous so it won't wait for the
+worker to shut down. You'll probably want to use the ``stopwait`` command
 instead, which will ensure that all currently executing tasks are completed:

 .. code-block:: console
@@ -190,12 +190,12 @@ instead which will ensure all currently executing tasks is completed:

     :program:`celery multi` doesn't store information about workers
     so you need to use the same command-line arguments when
-    restarting.  Only the same pidfile and logfile arguments must be
+    restarting. Only the same pidfile and logfile arguments must be
     used when stopping.

-By default it will create pid and log files in the current directory,
+By default it'll create pid and log files in the current directory;
 to protect against multiple workers launching on top of each other
-you are encouraged to put these in a dedicated directory:
+you're encouraged to put these in a dedicated directory:

 .. code-block:: console

@@ -204,7 +204,7 @@ you are encouraged to put these in a dedicated directory:
     $ celery multi start w1 -A proj -l info --pidfile=/var/run/celery/%n.pid \
                                             --logfile=/var/log/celery/%n%I.log

-With the multi command you can start multiple workers, and there is a powerful
+With the multi command you can start multiple workers, and there's a powerful
 command-line syntax to specify arguments for different workers too,
 e.g.:

@@ -297,11 +297,11 @@ instance, which can be used to keep track of the tasks execution state.
 But for this you need to enable a :ref:`result backend <task-result-backends>` so that
 the state can be stored somewhere.

-Results are disabled by default because of the fact that there is no result
+Results are disabled by default because there's no result
 backend that suits every application, so to choose one you need to consider
-the drawbacks of each individual backend.  For many tasks
+the drawbacks of each individual backend. For many tasks
 keeping the return value isn't even very useful, so it's a sensible default to
-have.  Also note that result backends are not used for monitoring tasks and workers,
+have. Also note that result backends aren't used for monitoring tasks and workers;
 for that Celery uses dedicated event messages (see :ref:`guide-monitoring`).

 If you have a result backend configured you can retrieve the return
@@ -346,9 +346,11 @@ by passing the ``propagate`` argument:
     >>> res.get(propagate=False)
     TypeError('add() takes exactly 2 arguments (1 given)',)

-In this case it will return the exception instance raised instead,
-and so to check whether the task succeeded or failed you will have to
-use the corresponding methods on the result instance::
+In this case it'll return the exception instance raised instead,
+and so to check whether the task succeeded or failed you'll have to
+use the corresponding methods on the result instance:
+
+.. code-block:: pycon

     >>> res.failed()
     True
@@ -369,12 +371,12 @@ states. The stages of a typical task can be::

     PENDING -> STARTED -> SUCCESS

-The started state is a special state that is only recorded if the
+The started state is a special state that's only recorded if the
 :setting:`task_track_started` setting is enabled, or if the
 ``@task(track_started=True)`` option is set for the task.

 The pending state is actually not a recorded state, but rather
-the default state for any task id that is unknown, which you can see
+the default state for any task id that's unknown, which you can see
 from this example:

 .. code-block:: pycon
@@ -386,7 +388,7 @@ from this example:
     'PENDING'

 If the task is retried the stages can become even more complex,
-e.g, for a task that is retried two times the stages would be::
+e.g., for a task that's retried two times the stages would be::

     PENDING -> STARTED -> RETRY -> STARTED -> RETRY -> STARTED -> SUCCESS

@@ -418,7 +420,7 @@ and a countdown of 10 seconds like this:
     >>> add.signature((2, 2), countdown=10)
     tasks.add(2, 2)

-There is also a shortcut using star arguments:
+There's also a shortcut using star arguments:

 .. code-block:: pycon

@@ -431,8 +433,8 @@ And there's that calling API again…
 Signature instances also support the calling API, which means that they
 have the ``delay`` and ``apply_async`` methods.

-But there is a difference in that the signature may already have
-an argument signature specified.  The ``add`` task takes two arguments,
+But there's a difference in that the signature may already have
+an argument signature specified. The ``add`` task takes two arguments,
 so a signature specifying two arguments would make a complete signature:

 .. code-block:: pycon
@@ -476,11 +478,11 @@ As stated signatures supports the calling API, which means that:
 - ``sig.apply_async(args=(), kwargs={}, **options)``

     Calls the signature with optional partial arguments and partial
-    keyword arguments.  Also supports partial execution options.
+    keyword arguments. Also supports partial execution options.

 - ``sig.delay(*args, **kwargs)``

-  Star argument version of ``apply_async``.  Any arguments will be prepended
+  Star argument version of ``apply_async``. Any arguments will be prepended
   to the arguments in the signature, and keyword arguments are merged with any
   existing keys.

@@ -670,19 +672,19 @@ which is a comma separated list of worker host names:

     $ celery -A proj inspect active --destination=celery@example.com

-If a destination is not provided then every worker will act and reply
+If a destination isn't provided then every worker will act and reply
 to the request.

 The :program:`celery inspect` command contains commands that
-does not change anything in the worker, it only replies information
-and statistics about what is going on inside the worker.
+don't change anything in the worker; they only reply with information
+and statistics about what's going on inside the worker.
 For a list of inspect commands you can execute:

 .. code-block:: console

     $ celery -A proj inspect --help

-Then there is the :program:`celery control` command, which contains
+Then there's the :program:`celery control` command, which contains
 commands that actually change things in the worker at runtime:

 .. code-block:: console
@@ -731,7 +733,7 @@ Timezone
 All times and dates, internally and in messages, use the UTC timezone.

 When the worker receives a message, for example with a countdown set, it
-converts that UTC time to local time.  If you wish to use
+converts that UTC time to local time. If you wish to use
 a different timezone than the system timezone then you must
 configure that using the :setting:`timezone` setting:

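
 For example (the zone name is an arbitrary choice):

 .. code-block:: python

     app.conf.timezone = 'Europe/London'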
@@ -742,7 +744,7 @@ configure that using the :setting:`timezone` setting:
 Optimization
 ============

-The default configuration is not optimized for throughput by default,
+The default configuration isn't optimized for throughput;
 it tries to walk the middle way between many short tasks and fewer long
 tasks, a compromise between throughput and fair scheduling.

@@ -763,4 +765,4 @@ What to do now?
 Now that you have read this document you should continue
 to the :ref:`User Guide <guide>`.

-There's also an :ref:`API reference <apiref>` if you are so inclined.
+There's also an :ref:`API reference <apiref>` if you're so inclined.

+ 9 - 9
docs/glossary.rst

@@ -8,9 +8,9 @@ Glossary

     acknowledged
         Workers acknowledge messages to signify that a message has been
-        handled.  Failing to acknowledge a message
-        will cause the message to be redelivered.   Exactly when a
-        transaction is considered a failure varies by transport.  In AMQP the
+        handled. Failing to acknowledge a message
+        will cause the message to be redelivered. Exactly when a
+        transaction is considered a failure varies by transport. In AMQP the
         transaction fails when the connection/channel is closed (or lost),
         but in Redis/SQS the transaction times out after a configurable amount
         of time (the ``visibility_timeout``).
@@ -20,7 +20,7 @@ Glossary

     early acknowledgment
         Task is :term:`acknowledged` just-in-time before being executed,
-        meaning the task will not be redelivered to another worker if the
+        meaning the task won't be redelivered to another worker if the
         machine loses power, or the worker instance is abruptly killed,
         mid-execution.

@@ -79,15 +79,15 @@ Glossary
        Further reading: https://en.wikipedia.org/wiki/Idempotent

     nullipotent
-        describes a function that will have the same effect, and give the same
+        describes a function that'll have the same effect, and give the same
         result, even if called zero or multiple times (side-effect free).
         A stronger version of :term:`idempotent`.

     reentrant
         describes a function that can be interrupted in the middle of
         execution (e.g. by hardware interrupt or signal) and then safely
-        called again later.  Reentrancy is not the same as
-        :term:`idempotence <idempotent>` as the return value does not have to
+        called again later. Reentrancy isn't the same as
+        :term:`idempotence <idempotent>` as the return value doesn't have to
         be the same given the same inputs, and a reentrant function may have
         side effects as long as it can be interrupted; an idempotent function
         is always reentrant, but the reverse may not be true.
@@ -103,8 +103,8 @@ Glossary

     `prefetch count`
         Maximum number of unacknowledged messages a consumer can hold and if
-        exceeded the transport should not deliver any more messages to that
-        consumer.  See :ref:`optimizing-prefetch-limit`.
+        exceeded the transport shouldn't deliver any more messages to that
+        consumer. See :ref:`optimizing-prefetch-limit`.

     pidbox
         A process mailbox, used to implement remote control commands.

+ 44 - 44
docs/history/changelog-1.0.rst

@@ -47,7 +47,7 @@ Critical
 
 
     Fixed by making the pool worker processes ignore :const:`SIGINT`.
     Fixed by making the pool worker processes ignore :const:`SIGINT`.
 
 
-* Should not close the consumers before the pool is terminated, just cancel
+* Shouldn't close the consumers before the pool is terminated, just cancel
   the consumers.
   the consumers.
 
 
     See issue #122.
     See issue #122.
@@ -117,7 +117,7 @@ Important notes
 
 
     This is the behavior we've wanted all along, but couldn't have because of
     This is the behavior we've wanted all along, but couldn't have because of
     limitations in the multiprocessing module.
     limitations in the multiprocessing module.
-    The previous behavior was not good, and the situation worsened with the
+    The previous behavior wasn't good, and the situation worsened with the
     release of 1.0.1, so this change will definitely improve
     release of 1.0.1, so this change will definitely improve
     reliability, performance and operations in general.
     reliability, performance and operations in general.
 
 
@@ -146,7 +146,7 @@ Important notes
 
 
         ALTER TABLE celery_taskmeta ALTER COLUMN result DROP NOT NULL
         ALTER TABLE celery_taskmeta ALTER COLUMN result DROP NOT NULL
 
 
-* Removed `Task.rate_limit_queue_type`, as it was not really useful
+* Removed `Task.rate_limit_queue_type`, as it wasn't really useful
   and made it harder to refactor some parts.
   and made it harder to refactor some parts.
 
 
 * Now depends on carrot >= 0.10.4
 * Now depends on carrot >= 0.10.4
@@ -175,7 +175,7 @@ News
 * Added Crontab-like scheduling to periodic tasks.

     Like a cronjob, you can specify units of time of when
-    you would like the task to execute. While not a full implementation
+    you'd like the task to execute. While not a full implementation
     of :command:`cron`'s features, it should provide a fair degree of common scheduling
     needs.
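    A minimal sketch of such a schedule (the import locations shifted across
    early versions, so treat them as assumptions):

    .. code-block:: python

        from celery.schedules import crontab
        from celery.task import periodic_task

        @periodic_task(run_every=crontab(hour=7, minute=30, day_of_week=1))
        def every_monday_morning():
            print('Execute every Monday at 7:30AM.')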
 
 
@@ -209,8 +209,8 @@ News
 
 
 * `TaskPool.apply_async`: Now supports the `accept_callback` argument.

-* `apply_async`: Now raises :exc:`ValueError` if task args is not a list,
-  or kwargs is not a tuple (Issue #95).
+* `apply_async`: Now raises :exc:`ValueError` if task args isn't a list,
+  or kwargs isn't a tuple (Issue #95).

 * `Task.max_retries` can now be `None`, which means it will retry forever.
 
 
@@ -228,7 +228,7 @@ News
     The default value is `False` as the normal behavior is to not
     report that level of granularity. Tasks are either pending, finished,
     or waiting to be retried. Having a "started" status can be useful for
-    when there are long running tasks and there is a need to report which
+    when there are long running tasks and there's a need to report which
     task is currently running.

     The global default can be overridden by the :setting:`CELERY_TRACK_STARTED`
@@ -358,7 +358,7 @@ Fixes
     the mediator thread could block shutdown (and potentially block other
     jobs from coming in).

-* Remote rate limits was not properly applied (Issue #98).
+* Remote rate limits weren't properly applied (Issue #98).
 
 
 * Now handles exceptions with Unicode messages correctly in
   `TaskRequest.on_failure`.
@@ -401,8 +401,8 @@ Fixes
   tasks to reuse the same database connection)

     The default is to use a new connection for every task.
-    We would very much like to reuse the connection, but a safe number of
-    reuses is not known, and we don't have any way to handle the errors
+    We'd very much like to reuse the connection, but a safe number of
+    reuses isn't known, and we don't have any way to handle the errors
     that might happen, which may even be database dependent.
 
 
     See: http://bit.ly/94fwdd
@@ -432,13 +432,13 @@ Fixes
 * Debian init-scripts: Now always preserves `$CELERYD_OPTS` from the
   `/etc/default/celeryd` and `/etc/default/celerybeat`.

-* celery.beat.Scheduler: Fixed a bug where the schedule was not properly
-  flushed to disk if the schedule had not been properly initialized.
+* celery.beat.Scheduler: Fixed a bug where the schedule wasn't properly
+  flushed to disk if the schedule hadn't been properly initialized.

 * ``celerybeat``: Now syncs the schedule to disk when receiving the :sig:`SIGTERM`
   and :sig:`SIGINT` signals.

-* Control commands: Make sure keywords arguments are not in Unicode.
+* Control commands: Make sure keyword arguments aren't in Unicode.

 * ETA scheduler: Was missing a logger object, so the scheduler crashed
   when trying to log that a task had been revoked.
@@ -446,11 +446,11 @@ Fixes
 * ``management.commands.camqadm``: Fixed typo `camqpadm` -> `camqadm`
   (Issue #83).

-* PeriodicTask.delta_resolution: Was not working for days and hours, now fixed
+* PeriodicTask.delta_resolution: Wasn't working for days and hours, now fixed
   by rounding to the nearest day/hour.

 * Fixed a potential infinite loop in `BaseAsyncResult.__eq__`, although
-  there is no evidence that it has ever been triggered.
+  there's no evidence that it has ever been triggered.

 * worker: Now handles messages with encoding problems by acking them and
   emitting an error message.
@@ -465,14 +465,14 @@ Fixes
 * Tasks are now acknowledged early instead of late.

     This is done because messages can only be acknowledged within the same
-    connection channel, so if the connection is lost we would have to
+    connection channel, so if the connection is lost we'd have to
     re-fetch the message again to acknowledge it.

     This might or might not affect you, but mostly those running tasks with a
-    really long execution time are affected, as all tasks that has made it
+    really long execution time are affected, as all tasks that have made it
     all the way into the pool needs to be executed before the worker can
     safely terminate (this is at most the number of pool workers, multiplied
-    by the :setting:`CELERYD_PREFETCH_MULTIPLIER` setting.)
+    by the :setting:`CELERYD_PREFETCH_MULTIPLIER` setting).

     We multiply the prefetch count by default to increase the performance at
     times with bursts of tasks with a short execution time. If this doesn't
@@ -525,7 +525,7 @@ Fixes
 
 
     Also :func:`celery.execute.send_task` has been
     introduced, which can apply tasks using just the task name (useful
-    if the client does not have the destination task in its task registry).
+    if the client doesn't have the destination task in its task registry).

     Example:
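    A minimal sketch (assuming a task registered under the name `tasks.add`;
    the exact keyword arguments are an assumption):

    .. code-block:: pycon

        >>> from celery.execute import send_task
        >>> result = send_task('tasks.add', args=[2, 2], kwargs={})
        >>> result.get()
        4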
 
 
@@ -574,7 +574,7 @@ Fixes
 
 
 * The ETA scheduler now deletes any revoked tasks it might encounter.

-    As revokes are not yet persistent, this is done to make sure the task
+    As revokes aren't yet persistent, this is done to make sure the task
     is revoked even though it's currently being hold because its eta is e.g.
     a week into the future.
 
 
@@ -588,7 +588,7 @@ Fixes
     Used by retry() to resend the task to its original destination using the same
     exchange/routing_key.

-* Events: Fields was not passed by `.send()` (fixes the UUID key errors
+* Events: Fields weren't passed by `.send()` (fixes the UUID key errors
   in celerymon)

 * Added `--schedule`/`-s` option to the worker, so it is possible to
@@ -611,14 +611,14 @@ Fixes
 * Added `Task.delivery_mode` and the :setting:`CELERY_DEFAULT_DELIVERY_MODE`
   setting.

-    These can be used to mark messages non-persistent (i.e. so they are
+    These can be used to mark messages non-persistent (i.e. so they're
     lost if the broker is restarted).
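    For illustration (in AMQP, delivery mode 1 is transient and 2 is
    persistent; the string alias below is an assumption):

    .. code-block:: python

        # celeryconfig.py -- illustrative
        CELERY_DEFAULT_DELIVERY_MODE = 'transient'  # broker won't write messages to disk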
 
 
 * Now have our own `ImproperlyConfigured` exception, instead of using the
   Django one.

 * Improvements to the Debian init-scripts: Shows an error if the program is
-  not executable.  Does not modify `CELERYD` when using django with
+  not executable. Doesn't modify `CELERYD` when using Django with
   virtualenv.
 
 
 .. _version-1.0.0:
@@ -633,7 +633,7 @@ Fixes
 Backward incompatible changes
 -----------------------------

-* Celery does not support detaching anymore, so you have to use the tools
+* Celery doesn't support detaching anymore, so you have to use the tools
   available on your platform, or something like :pypi:`supervisor` to make
   ``celeryd``/``celerybeat``/``celerymon`` into background processes.
 
 
@@ -761,7 +761,7 @@ Backward incompatible changes
     `"celery.loaders.default.Loader"`, using the previous syntax will result
     `"celery.loaders.default.Loader"`, using the previous syntax will result
     in a `DeprecationWarning`.
     in a `DeprecationWarning`.
 
 
-* Detecting the loader is now lazy, and so is not done when importing
+* Detecting the loader is now lazy, and so isn't done when importing
   `celery.loaders`.
   `celery.loaders`.
 
 
     To make this happen `celery.loaders.settings` has
     To make this happen `celery.loaders.settings` has
@@ -830,7 +830,7 @@ News
     task-[received/succeeded/failed/retried],
     :event:`worker-online`, :event:`worker-offline`.

-* You can now delete (revoke) tasks that has already been applied.
+* You can now delete (revoke) tasks that have already been applied.

 * You can now set the hostname the worker identifies as using the `--hostname`
   argument.
@@ -845,7 +845,7 @@ News
 * Periodic tasks are now scheduled on the clock.

     I.e. `timedelta(hours=1)` means every hour at :00 minutes, not every
-    hour from the server starts.  To revert to the previous behavior you
+    hour from when the server starts. To revert to the previous behavior you
     can set `PeriodicTask.relative = True`.

 * Now supports passing execute options to a TaskSets list of args, e.g.:
@@ -965,7 +965,7 @@ Documentation
 :release-by: Ask Solem

 * Now emits a warning if the --detach argument is used.
-  --detach should not be used anymore, as it has several not easily fixed
+  --detach shouldn't be used anymore, as it has several hard-to-fix
   bugs related to it. Instead, use something like start-stop-daemon,
   :pypi:`supervisor` or :command:`launchd` (macOS).
 
 
@@ -996,7 +996,7 @@ Documentation
 :release-date: 2009-11-20 03:40 P.M CEST
 :release-by: Ask Solem

-* QOS Prefetch count was not applied properly, as it was set for every message
+* QOS Prefetch count wasn't applied properly, as it was set for every message
   received (which apparently behaves like, "receive one more"), instead of only
   set when our wanted value changed.
 
 
@@ -1169,8 +1169,8 @@ Important changes
     http://bugs.python.org/issue4607

 * You can now customize what happens at worker start, at process init, etc.,
-    by creating your own loaders. (see :mod:`celery.loaders.default`,
-    :mod:`celery.loaders.djangoapp`, :mod:`celery.loaders`.)
+    by creating your own loaders (see :mod:`celery.loaders.default`,
+    :mod:`celery.loaders.djangoapp`, :mod:`celery.loaders`).

 * Support for multiple AMQP exchanges and queues.
 
 
@@ -1210,7 +1210,7 @@ News
     `task_postrun`, see :mod:`celery.signals` for more information.

 * `TaskSetResult.join` caused `TypeError` when `timeout=None`.
-    Thanks Jerzy Kozera.  Closes #31
+    Thanks Jerzy Kozera. Closes #31

 * `views.apply` should return `HttpResponse` instance.
     Thanks to Jerzy Kozera. Closes #32
@@ -1290,7 +1290,7 @@ News
     has been launched.

 * The periodic task table is now locked for reading while getting
-    periodic task status. (MySQL only so far, seeking patches for other
+    periodic task status (MySQL only so far, seeking patches for other
     engines)

 * A lot more debugging information is now available by turning on the
@@ -1309,7 +1309,7 @@ News
     includes the ETA for the task (if any).

 * Acknowledgment now happens in the pool callback. Can't do ack in the job
-    target, as it's not pickleable (can't share AMQP connection, etc.)).
+    target, as it's not pickleable (can't share AMQP connection, etc.).

 * Added note about .delay hanging in README
 
 
@@ -1391,7 +1391,7 @@ News
 
 
 * Should now work on Windows (although running in the background won't
   work, so using the `--detach` argument results in an exception
-  being raised.)
+  being raised).

 * Added support for statistics for profiling and monitoring.
   To start sending statistics start the worker with the
@@ -1414,7 +1414,7 @@ News
 
 
     .. warning::

-        Use with caution! Do not expose this URL to the public
+        Use with caution! Don't expose this URL to the public
         without first ensuring that your code is safe!

 * Refactored `celery.task`. It's now split into three modules:
@@ -1489,7 +1489,7 @@ News
   it's the term used in the documentation from now on.

 * Make sure the pool and periodic task worker thread is terminated
-  properly at exit. (So :kbd:`Control-c` works again).
+  properly at exit (so :kbd:`Control-c` works again).

 * Now depends on `python-daemon`.
 
 
@@ -1521,7 +1521,7 @@ News
 * worker: Added option `--discard`: Discard (delete!) all waiting
   messages in the queue.

-* Worker: The `--wakeup-after` option was not handled as a float.
+* Worker: The `--wakeup-after` option wasn't handled as a float.

 .. _version-0.3.1:
 
 
@@ -1569,7 +1569,7 @@ arguments, so be sure to flush your task queue before you upgrade.
   Thanks to Grégoire Cachet.

 * Added support for message priorities, topic exchanges, custom routing
-  keys for tasks. This means we have introduced
+  keys for tasks. This means we've introduced
   `celery.task.apply_async`, a new way of executing tasks.

   You can use `celery.task.delay` and `celery.Task.delay` like usual, but
@@ -1680,7 +1680,7 @@ arguments, so be sure to flush your task queue before you upgrade.
 :release-date: 2009-05-19 01:08 P.M CET
 :release-by: Ask Solem

-* Fixed a syntax error in the `TaskSet` class.  (No such variable
+* Fixed a syntax error in the `TaskSet` class (no such variable
   `TimeOutError`).
 
 
 .. _version-0.1.13:
@@ -1718,12 +1718,12 @@ arguments, so be sure to flush your task queue before you upgrade.
 :release-by: Ask Solem

 * `delay_task()` etc. now returns `celery.task.AsyncResult` object,
-  which lets you check the result and any failure that might have
-  happened.  It kind of works like the `multiprocessing.AsyncResult`
+  which lets you check the result and any failure that might've
+  happened. It kind of works like the `multiprocessing.AsyncResult`
   class returned by `multiprocessing.Pool.map_async`.

 * Added ``dmap()`` and ``dmap_async()``. This works like the
-  `multiprocessing.Pool` versions except they are tasks
+  `multiprocessing.Pool` versions except they're tasks
   distributed to the celery server. Example:

     .. code-block:: pycon
@@ -1766,7 +1766,7 @@ arguments, so be sure to flush your task queue before you upgrade.
 :release-by: Ask Solem

 * The logging system was leaking file descriptors, resulting in
-  servers stopping with the EMFILES (too many open files) error. (fixed)
+  servers stopping with the EMFILES (too many open files) error (fixed).
 
 
 .. _version-0.1.10:
 
 

+ 14 - 14
docs/history/changelog-2.0.rst

@@ -61,7 +61,7 @@ Fixes
          'routing_key': 'tasks.add',
          'serializer': 'json'}

-    This was not the case before: the values
+    This wasn't the case before: the values
     in :setting:`CELERY_QUEUES` would take precedence.
 
 
 * Worker crashed if the value of :setting:`CELERY_TASK_ERROR_WHITELIST` was
@@ -73,7 +73,7 @@ Fixes
 * `AsyncResult.traceback`: Now returns :const:`None`, instead of raising
   :exc:`KeyError` if traceback is missing.

-* :class:`~celery.task.control.inspect`: Replies did not work correctly
+* :class:`~celery.task.control.inspect`: Replies didn't work correctly
   if no destination was specified.

 * Can now store result/meta-data for custom states.
@@ -86,8 +86,8 @@ Fixes
 
 
     See issue #160.

-* Worker: On macOS it is not possible to run `os.exec*` in a process
-  that is threaded.
+* Worker: On macOS it isn't possible to run `os.exec*` in a process
+  that's threaded.

       This breaks the SIGHUP restart handler,
       and is now disabled on macOS, emitting a warning instead.
@@ -105,7 +105,7 @@ Fixes
     This is now fixed by using a workaround.
     See issue #143.

-* Debian init-scripts: Commands should not run in a sub shell
+* Debian init-scripts: Commands shouldn't run in a sub shell

     See issue #163.
 
 
@@ -185,7 +185,7 @@ Documentation
     this would make each child process start a new worker instance when
     the terminal window was closed :/

-* Worker: Do not install SIGHUP handler if running from a terminal.
+* Worker: Don't install SIGHUP handler if running from a terminal.

     This fixes the problem where the worker is launched in the background
     when closing the terminal.
@@ -280,7 +280,7 @@ Documentation
 * multiprocessing.pool: Now handles encoding errors, so that pickling errors
   doesn't crash the worker processes.

-* The remote control command replies was not working with RabbitMQ 1.8.0's
+* The remote control command replies weren't working with RabbitMQ 1.8.0's
   stricter equivalence checks.

     If you've already hit this problem you may have to delete the
@@ -316,7 +316,7 @@ Documentation
     is met, it will take at most 0.8 seconds for the task to be moved to the
     ready queue.

-* Pool: Supervisor did not release the semaphore.
+* Pool: Supervisor didn't release the semaphore.

     This would lead to a deadlock if all workers terminated prematurely.
 
 
@@ -351,7 +351,7 @@ Documentation
 
 
         CELERY_ROUTES = {'feed.tasks.import_feed': 'feeds'}

-* `CREATE_MISSING_QUEUES` was not honored by apply_async.
+* `CREATE_MISSING_QUEUES` wasn't honored by apply_async.

 * New remote control command: `stats`
 
 
@@ -374,7 +374,7 @@ Documentation
 
 
     Gives a list of tasks currently being executed by the worker.
     By default arguments are passed through repr in case there
-    are arguments that is not JSON encodable. If you know
+    are arguments that aren't JSON encodable. If you know
     the arguments are JSON safe, you can pass the argument `safe=True`.

     Example reply:
@@ -476,8 +476,8 @@ Django integration has been moved to a separate package: `django-celery`_.
     =====================================  =====================================

 Importing :mod:`djcelery` will automatically setup Celery to use Django loader.
-loader.  It does this by setting the :envvar:`CELERY_LOADER` environment variable to
-`"django"` (it won't change it if a loader is already set.)
+It does this by setting the :envvar:`CELERY_LOADER` environment variable to
+`"django"` (it won't change it if a loader is already set).

 When the Django loader is used, the "database" and "cache" result backend
 aliases will point to the :mod:`djcelery` backends instead of the built-in backends,
@@ -568,7 +568,7 @@ Backward incompatible changes
   instead of raising :exc:`ImportError`.

     The worker raises :exc:`~@ImproperlyConfigured` if the configuration
-    is not set up. This makes it possible to use `--help` etc., without having a
+    isn't set up. This makes it possible to use `--help` etc., without having a
     working configuration.

     Also this makes it possible to use the client side of celery without being
@@ -807,7 +807,7 @@ News
     * :setting:`CELERYD_TASK_SOFT_TIME_LIMIT`

         Soft time limit. The :exc:`~@SoftTimeLimitExceeded`
-        exception will be raised when this is exceeded.  The task can catch
+        exception will be raised when this is exceeded. The task can catch
         this to e.g. clean up before the hard time limit comes.
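        A minimal sketch of catching the soft limit (the task body is
        illustrative):

        .. code-block:: python

            from celery.exceptions import SoftTimeLimitExceeded
            from celery.task import task

            @task
            def process(fname):
                try:
                    return open(fname).read()  # stand-in for work that may run long
                except SoftTimeLimitExceeded:
                    pass  # clean up here before the hard time limit hits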
 
 
     New command-line arguments to ``celeryd`` added:

+ 18 - 18
docs/history/changelog-2.1.rst

@@ -20,17 +20,17 @@ Fixes
 -----

 * Execution options to `apply_async` now takes precedence over options
-  returned by active routers.  This was a regression introduced recently
+  returned by active routers. This was a regression introduced recently
   (Issue #244).

 * curses monitor: Long arguments are now truncated so curses
-  doesn't crash with out of bounds errors.  (Issue #235).
+  doesn't crash with out of bounds errors (Issue #235).

 * multi: Channel errors occurring while handling control commands no
   longer crash the worker but are instead logged with severity error.

 * SQLAlchemy database backend: Fixed a race condition occurring when
-  the client wrote the pending state.  Just like the Django database backend,
+  the client wrote the pending state. Just like the Django database backend,
   it does no longer save the pending state (Issue #261 + Issue #262).

 * Error email body now uses `repr(exception)` instead of `str(exception)`,
@@ -50,7 +50,7 @@ Fixes
   the message.

 * `TaskRequest.on_failure` now encodes traceback using the current file-system
-   encoding.  (Issue #286).
+   encoding (Issue #286).

 * `EagerResult` can now be pickled (Issue #288).
 
 
@@ -156,9 +156,9 @@ Fixes
 
 
     `got multiple values for keyword argument 'concurrency'`.

-    Additional command-line arguments are now ignored, and does not
-    produce this error.  However -- we do reserve the right to use
-    positional arguments in the future, so please do not depend on this
+    Additional command-line arguments are now ignored, and don't
+    produce this error. However -- we do reserve the right to use
+    positional arguments in the future, so please don't depend on this
     behavior.

 * ``celerybeat``: Now respects routers and task execution options again.
@@ -182,7 +182,7 @@ News
   :setting:`CELERYD_REDIRECT_STDOUTS_LEVEL` settings.

     :setting:`CELERY_REDIRECT_STDOUTS` is used by the worker and
-    beat.  All output to `stdout` and `stderr` will be
+    beat. All output to `stdout` and `stderr` will be
     redirected to the current logger if enabled.

     :setting:`CELERY_REDIRECT_STDOUTS_LEVEL` decides the log level used and is
@@ -267,8 +267,8 @@ Important Notes
 
 
 * Celery is now following the versioning semantics defined by `semver`_.

-    This means we are no longer allowed to use odd/even versioning semantics
-    By our previous versioning scheme this stable release should have
+    This means we're no longer allowed to use odd/even versioning semantics.
+    By our previous versioning scheme this stable release should've
     been version 2.2.

 .. _`semver`: http://semver.org
@@ -279,7 +279,7 @@ Important Notes
   if the database result backend is used.

 * :pypi:`django-celery` now comes with a monitor for the Django Admin
-  interface.  This can also be used if you're not a Django user.
+  interface. This can also be used if you're not a Django user.
   (Update: Django-Admin monitor has been replaced with Flower, see the
   Monitoring guide).
 
 
@@ -330,8 +330,8 @@ News
     If enabled, the worker sends messages about what the worker is doing.
     These messages are called "events".
     The events are used by real-time monitors to show what the
-    cluster is doing, but they are not very useful for monitoring
-    over a longer period of time.  Snapshots
+    cluster is doing, but they're not very useful for monitoring
+    over a longer period of time. Snapshots
     lets you take "pictures" of the clusters state at regular intervals.
     This can then be stored in a database to generate statistics
     with, or even monitoring over longer time periods.
@@ -423,7 +423,7 @@ News
         >>> task.apply_async(args, kwargs,
         ...                  expires=datetime.now() + timedelta(days=1))

-    When a worker receives a task that has been expired it will be
+    When a worker receives a task that's been expired it will be
     marked as revoked (:exc:`~@TaskRevokedError`).
 
 
 * Changed the way logging is configured.
@@ -515,13 +515,13 @@ News
 
 
     See issue #182.

-* worker: Now emits a warning if there is already a worker node using the same
+* worker: Now emits a warning if there's already a worker node using the same
   name running on the same virtual host.

 * AMQP result backend: Sending of results are now retried if the connection
   is down.

-* AMQP result backend: `result.get()`: Wait for next state if state is not
+* AMQP result backend: `result.get()`: Wait for next state if state isn't
     in :data:`~celery.states.READY_STATES`.

 * TaskSetResult now supports subscription.
@@ -569,7 +569,7 @@ News
     See issue #134.

 * Implemented `AsyncResult.forget` for SQLAlchemy/Memcached/Redis/Tokyo Tyrant
-  backends.  (Forget and remove task result).
+  backends (forget and remove task result).
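    A minimal sketch (assuming a task `add` and one of the backends above):

    .. code-block:: pycon

        >>> result = add.delay(2, 2)
        >>> result.get()
        4
        >>> result.forget()  # drop the stored result from the backend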
 
 
     See issue #184.
 
 
@@ -738,7 +738,7 @@ Experimental
 
 
 * worker: Added `--pidfile` argument.

-   The worker will write its pid when it starts.  The worker will
+   The worker will write its pid when it starts. The worker will
    not be started if this file exists and the pid contained is still alive.
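   For illustration (`celeryd` being the worker program of this era; the
   path is hypothetical):

   .. code-block:: console

       $ celeryd --pidfile=/var/run/celeryd.pid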
 
 
 * Added generic init.d script using `celeryd-multi`

+ 65 - 65
docs/history/changelog-2.2.rst

@@ -95,7 +95,7 @@ Fixes
 
 
 * Task: Don't use ``app.main`` if the task name is set explicitly.

-* Sending emails did not work on Python 2.5, due to a bug in
+* Sending emails didn't work on Python 2.5, due to a bug in
   the version detection code (Issue #378).

 * Beat: Adds method ``ScheduleEntry._default_now``
@@ -106,16 +106,16 @@ Fixes
 * An error occurring in process cleanup could mask task errors.

   We no longer propagate errors happening at process cleanup,
-  but log them instead.  This way they will not interfere with publishing
+  but log them instead. This way they won't interfere with publishing
   the task result (Issue #365).

-* Defining tasks did not work properly when using the Django
+* Defining tasks didn't work properly when using the Django
   ``shell_plus`` utility (Issue #366).

-* ``AsyncResult.get`` did not accept the ``interval`` and ``propagate``
+* ``AsyncResult.get`` didn't accept the ``interval`` and ``propagate``
    arguments.

-* worker: Fixed a bug where the worker would not shutdown if a
+* worker: Fixed a bug where the worker wouldn't shut down if a
    :exc:`socket.error` was raised.
 
 
 .. _version-2.2.5:
@@ -145,7 +145,7 @@ News
   (Issue #321)

     This is accomplished by using the ``WatchedFileHandler``, which re-opens
-    the file if it is renamed or deleted.
+    the file if it's renamed or deleted.

 .. _`logrotate.d`:
     http://www.ducea.com/2006/06/06/rotating-linux-log-files-part-2-logrotate/
@@ -180,7 +180,7 @@ News
 * The taskset_id (if any) is now available in the Task request context.

 * SQLAlchemy result backend: taskset_id and taskset_id columns now have a
-  unique constraint.  (Tables need to recreated for this to take affect).
+  unique constraint (tables need to be recreated for this to take effect).

 * Task user guide: Added section about choosing a result backend.
 
 
@@ -198,19 +198,19 @@ Fixes
     but we have no reliable way to detect that this is the case.

     So we have to wait for 10 seconds before marking the result with
-    WorkerLostError.  This gives the result handler a chance to retrieve the
+    WorkerLostError. This gives the result handler a chance to retrieve the
     result.

 * multiprocessing.Pool: Shutdown could hang if rate limits disabled.

     There was a race condition when the MainThread was waiting for the pool
-    semaphore to be released.  The ResultHandler now terminates after 5
+    semaphore to be released. The ResultHandler now terminates after 5
     seconds if there are unacked jobs, but no worker processes left to start
     them  (it needs to timeout because there could still be an ack+result
     that we haven't consumed from the result queue. It
-    is unlikely we will receive any after 5 seconds with no worker processes).
+    is unlikely we'll receive any after 5 seconds with no worker processes).

-* ``celerybeat``: Now creates pidfile even if the ``--detach`` option is not set.
+* ``celerybeat``: Now creates pidfile even if the ``--detach`` option isn't set.

 * eventlet/gevent: The broadcast command consumer is now running in a separate
   green-thread.
@@ -231,7 +231,7 @@ Fixes
 
 
 * AMQP Result Backend: Now resets cached channel if the connection is lost.

-* Polling results with the AMQP result backend was not working properly.
+* Polling results with the AMQP result backend wasn't working properly.

 * Rate limits: No longer sleeps if there are no tasks, but rather waits for
   the task received condition (Performance improvement).
@@ -250,7 +250,7 @@ Fixes
 * Autoscaler: The "all processes busy" log message is now severity debug
 * Autoscaler: The "all processes busy" log message is now severity debug
   instead of error.
   instead of error.
 
 
-* worker: If the message body can't be decoded, it is now passed through
+* worker: If the message body can't be decoded, it's now passed through
   ``safe_str`` when logging.
   ``safe_str`` when logging.
 
 
     This to ensure we don't get additional decoding errors when trying to log
     This to ensure we don't get additional decoding errors when trying to log
@@ -269,9 +269,9 @@ Fixes
   count value exceeded 65535 (Issue #359).

     The prefetch count is incremented for every received task with an
-    ETA/countdown defined.  The prefetch count is a short, so can only support
-    a maximum value of 65535.  If the value exceeds the maximum value we now
-    disable the prefetch count, it is re-enabled as soon as the value is below
+    ETA/countdown defined. The prefetch count is a short, so can only support
+    a maximum value of 65535. If the value exceeds the maximum value we now
+    disable the prefetch count; it's re-enabled as soon as the value is below
     the limit again.

 * ``cursesmon``: Fixed unbound local error (Issue #303).
@@ -288,7 +288,7 @@ Fixes
   version.

 * multiprocessing.Pool: No longer cares if the ``putlock`` semaphore is released
-  too many times. (this can happen if one or more worker processes are
+  too many times (this can happen if one or more worker processes are
   killed).

 * SQLAlchemy Result Backend: Now returns accidentally removed ``date_done`` again
@@ -316,7 +316,7 @@ Fixes
 
 
 * worker: 2.2.3 broke error logging, resulting in tracebacks not being logged.

-* AMQP result backend: Polling task states did not work properly if there were
+* AMQP result backend: Polling task states didn't work properly if there were
   more than one result message in the queue.

 * ``TaskSet.apply_async()`` and ``TaskSet.apply()`` now supports an optional
@@ -326,10 +326,10 @@ Fixes
   ``request.taskset`` (Issue #329).

 * SQLAlchemy result backend: `date_done` was no longer part of the results as it had
-  been accidentally removed.  It is now available again (Issue #325).
+  been accidentally removed. It's now available again (Issue #325).

 * SQLAlchemy result backend: Added unique constraint on `Task.id` and
-  `TaskSet.taskset_id`.  Tables needs to be recreated for this to take effect.
+  `TaskSet.taskset_id`. Tables need to be recreated for this to take effect.

 * Fixed exception raised when iterating on the result of ``TaskSet.apply()``.
 
 
@@ -353,30 +353,30 @@ Fixes
   default value.

 * `multiprocessing.cpu_count` may raise :exc:`NotImplementedError` on
-  platforms where this is not supported (Issue #320).
+  platforms where this isn't supported (Issue #320).

-* Coloring of log messages broke if the logged object was not a string.
+* Coloring of log messages broke if the logged object wasn't a string.

 * Fixed several typos in the init-script documentation.

 * A regression caused `Task.exchange` and `Task.routing_key` to no longer
-  have any effect.  This is now fixed.
+  have any effect. This is now fixed.

 * Routing user guide: Fixes typo, routers in :setting:`CELERY_ROUTES` must be
   instances, not classes.

-* :program:`celeryev` did not create pidfile even though the
+* :program:`celeryev` didn't create pidfile even though the
   :option:`--pidfile <celery events --pidfile>` argument was set.

-* Task logger format was no longer used. (Issue #317).
+* Task logger format was no longer used (Issue #317).

    The id and name of the task is now part of the log message again.

 * A safe version of ``repr()`` is now used in strategic places to ensure
-  objects with a broken ``__repr__`` does not crash the worker, or otherwise
+  objects with a broken ``__repr__`` don't crash the worker, or otherwise
   make errors hard to understand (Issue #298).

-* Remote control command :control:`active_queues`: did not account for queues added
+* Remote control command :control:`active_queues`: didn't account for queues added
   at runtime.

     In addition the dictionary replied by this command now has a different
@@ -405,8 +405,8 @@ Fixes
 Fixes
 -----

-* ``celerybeat`` could not read the schedule properly, so entries in
-  :setting:`CELERYBEAT_SCHEDULE` would not be scheduled.
+* ``celerybeat`` couldn't read the schedule properly, so entries in
+  :setting:`CELERYBEAT_SCHEDULE` wouldn't be scheduled.

 * Task error log message now includes `exc_info` again.
 
 
@@ -441,10 +441,10 @@ Fixes
 * Deprecated function ``celery.execute.delay_task`` was accidentally removed,
   now available again.

-* ``BasePool.on_terminate`` stub did not exist
+* ``BasePool.on_terminate`` stub didn't exist

-* ``celeryd_detach``: Adds readable error messages if user/group name does not
-   exist.
+* ``celeryd_detach``: Adds readable error messages if user/group name
+  doesn't exist.

 * Smarter handling of unicode decode errors when logging errors.
 
 
@@ -478,8 +478,8 @@ Important Notes
     This means that `ghettoq` is no longer needed as the
     functionality it provided is already available in Celery by default.
     The virtual transports are also more feature complete with support
-    for exchanges (direct and topic).  The Redis transport even supports
-    fanout exchanges so it is able to perform worker remote control
+    for exchanges (direct and topic). The Redis transport even supports
+    fanout exchanges so it's able to perform worker remote control
     commands.

 .. _`Kombu`: http://pypi.python.org/pypi/kombu
@@ -491,7 +491,7 @@ Important Notes
     collisions in keyword arguments for the unaware.

     It wasn't easy to find a way to deprecate the magic keyword arguments,
-    but we think this is a solution that makes sense and it will not
+    but we think this is a solution that makes sense and it won't
     have any adverse effects for existing code.

     The path to a magic keyword argument free world is:
@@ -515,7 +515,7 @@ Important Notes
                     print('In task %s' % kwargs['task_id'])
                     return x + y

-        And this will not use magic keyword arguments (new style):
+        And this won't use magic keyword arguments (new style):

             .. code-block:: python
 
 
@@ -542,10 +542,10 @@ Important Notes
 
 
 * The magic keyword arguments are now available as `task.request`

-    This is called *the context*.  Using thread-local storage the
-    context contains state that is related to the current request.
+    This is called *the context*. Using thread-local storage the
+    context contains state that's related to the current request.

-    It is mutable and you can add custom attributes that will only be seen
+    It's mutable and you can add custom attributes that'll only be seen
     by the current task request.

     The following context attributes are always available:
@@ -576,7 +576,7 @@ Important Notes
 
 
     To change pool implementations you use the :option:`celery worker --pool`
     argument, or globally using the
-    :setting:`CELERYD_POOL` setting.  This can be the full name of a class,
+    :setting:`CELERYD_POOL` setting. This can be the full name of a class,
     or one of the following aliases: `processes`, `eventlet`, `gevent`.

     For more information please see the :ref:`concurrency-eventlet` section
@@ -584,8 +584,8 @@ Important Notes
 
 
     .. admonition:: Why not gevent?

-        For our first alternative concurrency implementation we have focused
-        on `Eventlet`_, but there is also an experimental `gevent`_ pool
+        For our first alternative concurrency implementation we've focused
+        on `Eventlet`_, but there's also an experimental `gevent`_ pool
         available. This is missing some features, notably the ability to
         schedule ETA tasks.
 
 
@@ -600,8 +600,8 @@ Important Notes
     We're happy^H^H^H^H^Hsad to announce that this is the last version
     to support Python 2.4.

-    You are urged to make some noise if you're currently stuck with
-    Python 2.4.  Complain to your package maintainers, sysadmins and bosses:
+    You're urged to make some noise if you're currently stuck with
+    Python 2.4. Complain to your package maintainers, sysadmins and bosses:
     tell them it's time to move on!

     Apart from wanting to take advantage of :keyword:`with` statements,
@@ -611,7 +611,7 @@ Important Notes
 
 
     If it really isn't your choice, and you don't have the option to upgrade
     to a newer version of Python, you can just continue to use Celery 2.2.
-    Important fixes can be back ported for as long as there is interest.
+    Important fixes can be back ported for as long as there's interest.

 * worker: Now supports Autoscaling of child worker processes.
 
 
@@ -622,14 +622,14 @@ Important Notes
 
 
         --autoscale=AUTOSCALE
              Enable autoscaling by providing
-             max_concurrency,min_concurrency.  Example:
+             max_concurrency,min_concurrency. Example:
               --autoscale=10,3 (always keep 3 processes, but grow to
              10 if necessary).
 
 
 * Remote Debugging of Tasks

    ``celery.contrib.rdb`` is an extended version of :mod:`pdb` that
-   enables remote debugging of processes that does not have terminal
+   enables remote debugging of processes that don't have terminal
    access.

    Example usage:
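   A minimal sketch (`rdb.set_trace` is the module's documented entry point;
   the task itself is illustrative):

   .. code-block:: python

       from celery.contrib import rdb
       from celery.task import task

       @task
       def add(x, y):
           result = x + y
           rdb.set_trace()  # opens a debugger session reachable over telnet
           return result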
@@ -670,7 +670,7 @@ Important Notes
         [2011-01-18 14:25:44,119: WARNING/PoolWorker-1] Remote Debugger:6900:
             Waiting for client...

-    If you telnet the port specified you will be presented
+    If you telnet the port specified you'll be presented
     with a ``pdb`` shell:

     .. code-block:: console
@@ -694,15 +694,15 @@ Important Notes
     The `CELERYD_EVENT_EXCHANGE`, `CELERYD_EVENT_ROUTING_KEY`,
     `CELERYD_EVENT_EXCHANGE_TYPE` settings are no longer in use.

-    This means events will not be stored until there is a consumer, and the
-    events will be gone as soon as the consumer stops.  Also it means there
+    This means events won't be stored until there's a consumer, and the
+    events will be gone as soon as the consumer stops. Also it means there
     can be multiple monitors running at the same time.

     The routing key of an event is the type of event (e.g. `worker.started`,
-    `worker.heartbeat`, `task.succeeded`, etc.  This means a consumer can
+    `worker.heartbeat`, `task.succeeded`, etc.). This means a consumer can
     filter on specific types, to only be alerted of the events it cares about.

-    Each consumer will create a unique queue, meaning it is in effect a
+    Each consumer will create a unique queue, meaning it's in effect a
     broadcast exchange.

     This opens up a lot of possibilities, for example the workers could listen
@@ -713,9 +713,9 @@ Important Notes
     .. note::

         The event exchange has been renamed from ``"celeryevent"``
-        to ``"celeryev"`` so it does not collide with older versions.
+        to ``"celeryev"`` so it doesn't collide with older versions.

-        If you would like to remove the old exchange you can do so
+        If you'd like to remove the old exchange you can do so
         by executing the following command:

         .. code-block:: console
@@ -739,7 +739,7 @@ Important Notes
   will no longer have any effect.

     The default configuration is now available in the
-    :mod:`celery.app.defaults` module.  The available configuration options
+    :mod:`celery.app.defaults` module. The available configuration options
     and their types can now be introspected.

 * Remote control commands are now provided by `kombu.pidbox`, the generic
@@ -758,15 +758,15 @@ Important Notes
     Executing arbitrary code using pickle is a potential security issue if
     someone gains unrestricted access to the message broker.

-    If you really need this functionality, then you would have to add
+    If you really need this functionality, then you'd have to add
     this to your own project.

 * [Security: Low severity] The `stats` command no longer transmits the
   broker password.

-    One would have needed an authenticated broker connection to receive
+    One would've needed an authenticated broker connection to receive
     this password in the first place, but sniffing the password at the
-    wire level would have been possible if using unencrypted communication.
+    wire level would've been possible if using unencrypted communication.

 .. _v220-news:
 
 
@@ -836,7 +836,7 @@ News
 * Periodic Task classes (`@periodic_task`/`PeriodicTask`) will *not* be
   deprecated as previously indicated in the source code.

-    But you are encouraged to use the more flexible
+    But you're encouraged to use the more flexible
     :setting:`CELERYBEAT_SCHEDULE` setting.

 * Built-in daemonization support of the worker using `celery multi`
@@ -847,10 +847,10 @@ News
 
 
 * Added support for message compression using the
   :setting:`CELERY_MESSAGE_COMPRESSION` setting, or the `compression` argument
-  to `apply_async`.  This can also be set using routers.
+  to `apply_async`. This can also be set using routers.
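
    For example, a minimal sketch of both ways to enable it (the task and
    module names are hypothetical, not from this changelog):

    .. code-block:: python

        # celeryconfig.py: compress all task messages with zlib
        CELERY_MESSAGE_COMPRESSION = 'zlib'

        # or per call, using the `compression` argument:
        from myapp.tasks import add  # hypothetical task
        add.apply_async((2, 2), compression='zlib')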
 
 
 * worker: Now logs stack-trace of all threads when receiving the
-   `SIGUSR1` signal.  (Does not work on CPython 2.4, Windows or Jython).
+   `SIGUSR1` signal (doesn't work on CPython 2.4, Windows or Jython).

     Inspired by https://gist.github.com/737056
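
    For example, a sketch of triggering the dump from a shell (the PID is
    whatever your worker's main process reports):

    .. code-block:: console

        $ kill -USR1 <worker-pid>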
 
 
@@ -878,7 +878,7 @@ News
     multiple results at once, unlike `join()` which fetches the results
     one by one.

-    So far only supported by the AMQP result backend.  Support for Memcached
+    So far only supported by the AMQP result backend. Support for Memcached
     and Redis may be added later.

 * Improved implementations of `TaskSetResult.join` and `AsyncResult.wait`.
@@ -940,12 +940,12 @@ News
     * :signal:`celery.signals.beat_init`

         Dispatched when :program:`celerybeat` starts (either standalone or
-        embedded).  Sender is the :class:`celery.beat.Service` instance.
+        embedded). Sender is the :class:`celery.beat.Service` instance.

     * :signal:`celery.signals.beat_embedded_init`

         Dispatched in addition to the :signal:`beat_init` signal when
-        :program:`celerybeat` is started as an embedded process.  Sender
+        :program:`celerybeat` is started as an embedded process. Sender
         is the :class:`celery.beat.Service` instance.
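
        A sketch of connecting a handler (the handler name is hypothetical):

        .. code-block:: python

            from celery.signals import beat_init

            @beat_init.connect
            def on_beat_init(sender=None, **kwargs):
                # sender is the celery.beat.Service instance
                print('celerybeat started: %r' % (sender,))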
 
 
 * Redis result backend: Removed deprecated settings `REDIS_TIMEOUT` and
@@ -1010,7 +1010,7 @@ Experimental
     multiple instances (e.g. using :program:`multi`).

     Sadly an initial benchmark seems to show a 30% performance decrease on
-    ``pypy-1.4.1`` + JIT.  We would like to find out why this is, so stay tuned.
+    ``pypy-1.4.1`` + JIT. We would like to find out why this is, so stay tuned.

 * :class:`PublisherPool`: Experimental pool of task publishers and
   connections to be used with the `retry` argument to `apply_async`.

+ 16 - 16
docs/history/changelog-2.3.rst

@@ -37,7 +37,7 @@ Fixes
 
 
 * Backported fix for #455 from 2.4 to 2.3.

-* StateDB was not saved at shutdown.
+* StateDB wasn't saved at shutdown.

 * Fixes worker sometimes hanging when hard time limit exceeded.
 
 
@@ -50,10 +50,10 @@ Fixes
 :release-by: Mher Movsisyan

 * Monkey patching :attr:`sys.stdout` could result in the worker
-  crashing if the replacing object did not define :meth:`isatty`
+  crashing if the replacing object didn't define :meth:`isatty`
   (Issue #477).

-* ``CELERYD`` option in :file:`/etc/default/celeryd` should not
+* ``CELERYD`` option in :file:`/etc/default/celeryd` shouldn't
   be used with generic init-scripts.
 
 
 
 
@@ -74,7 +74,7 @@ News
     If you'd like to contribute to Celery you should read the
     :ref:`Contributing Guide <contributing>`.

-    We are looking for contributors at all skill levels, so don't
+    We're looking for contributors at all skill levels, so don't
     hesitate!

 * Now depends on Kombu 1.3.1
@@ -83,7 +83,7 @@ News
 
 
     Available as ``task.request.hostname``.

-* It is now easier for app subclasses to extend how they are pickled.
+* It's now easier for app subclasses to extend how they're pickled.
    (see :class:`celery.app.AppPickler`).
 
 
 .. _v232-fixes:
@@ -91,13 +91,13 @@ News
 Fixes
 -----

-* `purge/discard_all` was not working correctly (Issue #455).
+* `purge/discard_all` wasn't working correctly (Issue #455).

 * The coloring of log messages didn't handle non-ASCII data well
   (Issue #427).

 * [Windows] the multiprocessing pool tried to import ``os.kill``
-  even though this is not available there (Issue #450).
+  even though this isn't available there (Issue #450).

 * Fixes case where the worker could become unresponsive because of tasks
   exceeding the hard time limit.
@@ -106,7 +106,7 @@ Fixes
 
 
 * ``ResultSet.iterate`` now returns results as they finish (Issue #459).

-    This was not the case previously, even though the documentation
+    This wasn't the case previously, even though the documentation
     states this was the expected behavior.

 * Retries will no longer be performed when tasks are called directly
@@ -131,7 +131,7 @@ Fixes
 Fixes
 -----

-* The :setting:`CELERY_AMQP_TASK_RESULT_EXPIRES` setting did not work,
+* The :setting:`CELERY_AMQP_TASK_RESULT_EXPIRES` setting didn't work,
   resulting in an AMQP related error about not being able to serialize
   floats while trying to publish task states (Issue #446).
 
 
@@ -152,10 +152,10 @@ Important Notes
 
 
 * Results are now disabled by default.

-    The AMQP backend was not a good default because often the users were
+    The AMQP backend wasn't a good default because often the users were
     not consuming the results, resulting in thousands of queues.

-    While the queues can be configured to expire if left unused, it was not
+    While the queues can be configured to expire if left unused, it wasn't
     possible to enable this by default because this was only available in
     recent RabbitMQ versions (2.1.1+).
 
 
@@ -164,7 +164,7 @@ Important Notes
     of any common pitfalls with the particular backend.

     The default backend is now a dummy backend
-    (:class:`celery.backends.base.DisabledBackend`).  Saving state is simply an
+    (:class:`celery.backends.base.DisabledBackend`). Saving state is simply a
     no-op, and AsyncResult.wait(), .result, .state, etc. will raise
     a :exc:`NotImplementedError` telling the user to configure the result backend.
 
 
@@ -193,7 +193,7 @@ News
 
 
 * Automatic connection pool support.

-    The pool is used by everything that requires a broker connection.  For
+    The pool is used by everything that requires a broker connection, for
     example calling tasks, sending broadcast commands, retrieving results
     with the AMQP result backend, and so on.
 
 
@@ -215,7 +215,7 @@ News
 * Introducing Chords (taskset callbacks).

     A chord is a task that only executes after all of the tasks in a taskset
-    has finished executing.  It's a fancy term for "taskset callbacks"
+    have finished executing. It's a fancy term for "taskset callbacks"
     adopted from
     `Cω <http://research.microsoft.com/en-us/um/cambridge/projects/comega/>`_.
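
    A sketch of the idea, using the signature API (the task names are
    hypothetical):

    .. code-block:: python

        from celery import chord
        from myapp.tasks import add, tsum  # hypothetical tasks

        # run sixteen adds in parallel, then pass the list of results
        # to tsum as the chord callback
        result = chord(add.s(i, i) for i in range(16))(tsum.s())
        result.get()  # -> 240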
 
 
@@ -260,7 +260,7 @@ News
     .. note::

         Soft time limits will still not work on Windows or other platforms
-        that do not have the ``SIGUSR1`` signal.
+        that don't have the ``SIGUSR1`` signal.

 * Redis backend configuration directive names changed to include the
    ``CELERY_`` prefix.
@@ -315,7 +315,7 @@ News
 * ``events.default_dispatcher()``: Context manager to easily obtain
   an event dispatcher instance using the connection pool.

-* Import errors in the configuration module will not be silenced anymore.
+* Import errors in the configuration module won't be silenced anymore.

 * ResultSet.iterate: Now supports the ``timeout``, ``propagate`` and
   ``interval`` arguments.

+ 19 - 19
docs/history/changelog-2.4.rst

@@ -94,7 +94,7 @@ Fixes
 :release-date: 2011-11-14 12:00 P.M GMT
 :release-by: Ask Solem

-* Program module no longer uses relative imports so that it is
+* Program module no longer uses relative imports so that it's
   possible to do ``python -m celery.bin.name``.
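
    For example (a sketch; ``celeryd`` here stands for any of the
    ``celery.bin`` program modules):

    .. code-block:: console

        $ python -m celery.bin.celeryd --help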
 
 
 .. _version-2.4.1:
@@ -108,7 +108,7 @@ Fixes
 
 
 * processes pool: Decrease polling interval for less idle CPU usage.

-* processes pool: MaybeEncodingError was not wrapped in ExceptionInfo
+* processes pool: MaybeEncodingError wasn't wrapped in ExceptionInfo
   (Issue #524).

 * worker: would silence errors occurring after task consumer started.
@@ -133,11 +133,11 @@ Important Notes
 * Fixed deadlock in worker process handling (Issue #496).

     A deadlock could occur after spawning new child processes because
-    the logging library's mutex was not properly reset after fork.
+    the logging library's mutex wasn't properly reset after fork.

     The symptoms of this bug would be that the worker simply
     stops processing tasks, as none of the worker's child processes
-    are functioning.  There was a greater chance of this bug occurring
+    are functioning. There was a greater chance of this bug occurring
     with ``maxtasksperchild`` or a time-limit enabled.

     This is a workaround for http://bugs.python.org/issue6721#msg140215.
@@ -157,8 +157,8 @@ Important Notes
     deprecated and will be removed in version 4.0.

     Note that this means that the result backend requires RabbitMQ 2.1.0 or
-    higher, and that you have to disable expiration if you are running
-    with an older version.  You can do so by disabling the
+    higher, and that you have to disable expiration if you're running
+    with an older version. You can do so by disabling the
     :setting:`CELERY_TASK_RESULT_EXPIRES` setting::

         CELERY_TASK_RESULT_EXPIRES = None
@@ -188,7 +188,7 @@ Important Notes
     .. note::

         Note that the path component (virtual_host) always starts with a
-        forward-slash.  This is necessary to distinguish between the virtual
+        forward-slash. This is necessary to distinguish between the virtual
         host ``''`` (empty) and ``'/'``, which are both acceptable virtual
         host names.
 
 
@@ -207,8 +207,8 @@ Important Notes
         So the leading slash in the path component is **always required**.

     In addition the :setting:`BROKER_URL` setting has been added as an alias
-    to ``BROKER_HOST``.  Any broker setting specified in both the URL and in
-    the configuration will be ignored, if a setting is not provided in the URL
+    to ``BROKER_HOST``. Any broker setting specified in both the URL and in
+    the configuration will be ignored; if a setting isn't provided in the URL
     then the value from the configuration will be used as default.
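
    A sketch of the URL form (the credentials and host here are made up):

    .. code-block:: python

        # transport://userid:password@hostname:port/virtual_host
        BROKER_URL = 'amqp://guest:guest@localhost:5672//'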
 
 
     Also, programs now support the :option:`--broker <celery --broker>`
@@ -278,11 +278,11 @@ News
 
 
 * CELERY_IMPORTS can now be a scalar value (Issue #485).

-    It is too easy to forget to add the comma after the sole element of a
+    It's too easy to forget to add the comma after the sole element of a
     tuple, and this is something that often affects newcomers.

     The docs should probably use a list in examples, as using a tuple
-    for this doesn't even make sense.  Nonetheless, there are many
+    for this doesn't even make sense. Nonetheless, there are many
     tutorials out there using a tuple, and this change should be a help
     to new users.
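
    For example (a sketch; the module name is hypothetical), all of these
    now work:

    .. code-block:: python

        CELERY_IMPORTS = ('myapp.tasks',)  # tuple -- note the trailing comma
        CELERY_IMPORTS = ['myapp.tasks']   # list
        CELERY_IMPORTS = 'myapp.tasks'     # scalar, now also accepted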
 
 
@@ -292,7 +292,7 @@ News
 
 
     Contributed by Kornelijus Survila.

-* The ``statedb`` was not saved at exit.
+* The ``statedb`` wasn't saved at exit.

     This has now been fixed and it should again remember previously
     revoked tasks when a ``--statedb`` is enabled.
@@ -317,7 +317,7 @@ News
   the :setting:`CELERY_RESULT_SERIALIZER` setting (Issue #435).

     This means that only the database (Django/SQLAlchemy) backends
-    currently does not support using custom serializers.
+    currently don't support using custom serializers.

     Contributed by Steeve Morin
 
 
@@ -330,7 +330,7 @@ News
 * ``multi`` now supports a ``stop_verify`` command to wait for
   processes to shutdown.

-* Cache backend did not work if the cache key was unicode (Issue #504).
+* Cache backend didn't work if the cache key was unicode (Issue #504).

     Fix contributed by Neil Chintomby.
 
 
@@ -345,7 +345,7 @@ News
 
 
     Fix contributed by Remy Noel

-* multi did not work on Windows (Issue #472).
+* multi didn't work on Windows (Issue #472).

 * New-style ``CELERY_REDIS_*`` settings now take precedence over
   the old ``REDIS_*`` configuration keys (Issue #508).
@@ -356,12 +356,12 @@ News
 
 
     Fix contributed by Roger Hu.

-* Documented that Chords do not work well with :command:`redis-server` versions
+* Documented that Chords don't work well with :command:`redis-server` versions
   before 2.2.

     Contributed by Dan McGee.

-* The :setting:`CELERYBEAT_MAX_LOOP_INTERVAL` setting was not respected.
+* The :setting:`CELERYBEAT_MAX_LOOP_INTERVAL` setting wasn't respected.

 * ``inspect.registered_tasks`` renamed to ``inspect.registered`` for naming
   consistency.
@@ -395,10 +395,10 @@ News
 
 
     Contributed by Yury V. Zaytsev.

-* KeyValueStoreBackend.get_many did not respect the ``timeout`` argument
+* KeyValueStoreBackend.get_many didn't respect the ``timeout`` argument
   (Issue #512).

-* beat/events's ``--workdir`` option did not :manpage:`chdir(2)` before after
+* beat/events's ``--workdir`` option didn't :manpage:`chdir(2)` before
   configuration was attempted (Issue #506).

 * After deprecating 2.4 support we can now name modules correctly, since we

+ 4 - 4
docs/history/changelog-2.5.rst

@@ -34,15 +34,15 @@ This is a dummy release performed for the following goals:
 :release-by: Ask Solem

 * A bug caused messages to be sent with UTC time-stamps even though
-  :setting:`CELERY_ENABLE_UTC` was not enabled (Issue #636).
+  :setting:`CELERY_ENABLE_UTC` wasn't enabled (Issue #636).

 * ``celerybeat``: No longer crashes if an entry's args is set to None
   (Issue #657).

-* Auto-reload did not work if a module's ``__file__`` attribute
+* Auto-reload didn't work if a module's ``__file__`` attribute
   was set to the module's ``.pyc`` file (Issue #647).

-* Fixes early 2.5 compatibility where ``__package__`` does not exist
+* Fixes early 2.5 compatibility where ``__package__`` doesn't exist
   (Issue #638).
 
 
 .. _version-2.5.2:
@@ -181,7 +181,7 @@ Fixes
 -----

 * Eventlet/Gevent: A small typo caused the worker to hang when eventlet/gevent
-  was used, this was because the environment was not monkey patched
+  was used; this was because the environment wasn't monkey patched
   early enough.

 * Eventlet/Gevent: Another small typo caused the mediator to be started

+ 38 - 38
docs/history/changelog-3.0.rst

@@ -32,7 +32,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - The worker would no longer start if the `-P solo` pool was selected
   (Issue #1548).

-- Redis/Cache result backends would not complete chords
+- Redis/Cache result backends wouldn't complete chords
   if any of the tasks were retried (Issue #1401).

 - Task decorator is no longer lazy if app is finalized.
@@ -65,7 +65,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 
     This works with the `celery multi` command in general.

-- ``get_pickleable_etype`` did not always return a value (Issue #1556).
+- ``get_pickleable_etype`` didn't always return a value (Issue #1556).
 - Fixed bug where ``app.GroupResult.restore`` would fall back to the default
   app.
 
 
@@ -82,7 +82,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 
 - Now depends on :ref:`Kombu 2.5.14 <kombu:version-2.5.14>`.

-- ``send_task`` did not honor ``link`` and ``link_error`` arguments.
+- ``send_task`` didn't honor ``link`` and ``link_error`` arguments.

     This had the side effect of chains not calling unregistered tasks,
     silently discarding them.
@@ -93,14 +93,14 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 
     Contributed by Matt Robenolt.

-- POSIX: Daemonization did not redirect ``sys.stdin`` to ``/dev/null``.
+- POSIX: Daemonization didn't redirect ``sys.stdin`` to ``/dev/null``.

     Fix contributed by Alexander Smirnov.

 - Canvas: group bug caused fallback to default app when ``.apply_async`` used
   (Issue #1516).

-- Canvas: generator arguments was not always pickleable.
+- Canvas: generator arguments weren't always pickleable.
 
 
 .. _version-3.0.22:

@@ -226,11 +226,11 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - A Python 3 related fix managed to disable the deadlock fix
   announced in 3.0.18.

-    Tests have been added to make sure this does not happen again.
+    Tests have been added to make sure this doesn't happen again.

 - Task retry policy: Default max_retries is now 3.

-    This ensures clients will not be hanging while the broker is down.
+    This ensures clients won't be hanging while the broker is down.

     .. note::
 
 
@@ -304,7 +304,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - Worker: Fixed a deadlock that could occur while revoking tasks (Issue #1297).

 - Worker: The :sig:`HUP` handler now closes all open file descriptors
-  before restarting to ensure file descriptors does not leak (Issue #1270).
+  before restarting to ensure file descriptors don't leak (Issue #1270).

 - Worker: Optimized storing/loading the revoked tasks list (Issue #1289).
 
 
@@ -354,7 +354,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
   eta/expires fields (Issue #1232).

 - The ``pool_restart`` remote control command now reports
-  an error if the :setting:`CELERYD_POOL_RESTARTS` setting is not set.
+  an error if the :setting:`CELERYD_POOL_RESTARTS` setting isn't set.

 - :meth:`@add_defaults` can now be used with non-dict objects.
 
 
@@ -561,13 +561,13 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 
     execv was only enabled when transports other than AMQP/Redis were used,
     and it's there to prevent deadlocks caused by mutexes not being released
-    before the process forks.  Unfortunately it also changes the environment
-    introducing many corner case bugs that is hard to fix without adding
-    horrible hacks.  Deadlock issues are reported far less often than the
+    before the process forks. Unfortunately it also changes the environment,
+    introducing many corner case bugs that are hard to fix without adding
+    horrible hacks. Deadlock issues are reported far less often than the
     bugs that execv is causing, so we now disable it by default.

     Work is in motion to create non-blocking versions of these transports
-    so that execv is not necessary (which is the situation with the amqp
+    so that execv isn't necessary (which is the situation with the amqp
     and redis broker transports).
 
 
 - Chord exception behavior defined (Issue #1172).
@@ -579,11 +579,11 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
     and the actual behavior was very unsatisfactory, indeed
     it will just forward the exception value to the chord callback.

-    For backward compatibility reasons we do not change to the new
+    For backward compatibility reasons we don't change to the new
     behavior in a bugfix release, even if the current behavior was
-    never documented.  Instead you can enable the
+    never documented. Instead you can enable the
     :setting:`CELERY_CHORD_PROPAGATES` setting to get the new behavior
-    that will be default from Celery 3.1.
+    that'll be the default from Celery 3.1.

     See more at :ref:`chord-errors`.
 
 
@@ -735,7 +735,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
   task modules will always use the correct app instance (Issue #1072).

 - AMQP Backend: Now republishes result messages that have been polled
-  (using ``result.ready()`` and friends, ``result.get()`` will not do this
+  (using ``result.ready()`` and friends, ``result.get()`` won't do this
   in this version).

 - Crontab schedule values can now "wrap around"
@@ -794,7 +794,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 
     Contributed by Locker537.

-- The ``add_consumer`` control command did not properly persist
+- The ``add_consumer`` control command didn't properly persist
   the addition of new queues so that they survived connection failure
   (Issue #1079).
 
 
@@ -811,7 +811,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
     - [Redis] Number of messages that can be restored in one interval is no
               longer limited (but can be set using the
               ``unacked_restore_limit``
-              :setting:`transport option <BROKER_TRANSPORT_OPTIONS>`.)
+              :setting:`transport option <BROKER_TRANSPORT_OPTIONS>`).
     - Heartbeat value can be specified in broker URLs (Mher Movsisyan).
     - Fixed problem with msgpack on Python 3 (Jasper Bryant-Greene).
 
 
@@ -830,7 +830,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - New method ``Task.subtask_from_request`` returns a subtask using the current
   request.

-- Results get_many method did not respect timeout argument.
+- Results get_many method didn't respect timeout argument.

     Fix contributed by Remigiusz Modrzejewski
 
 
@@ -880,7 +880,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
         CELERYD_PID_FILE="/var/run/celery/%n.pid"

     But in the scripts themselves the default files were ``/var/log/celery%n.log``
-    and ``/var/run/celery%n.pid``, so if the user did not change the location
+    and ``/var/run/celery%n.pid``, so if the user didn't change the location
     by configuration, the directories ``/var/log`` and ``/var/run`` would be
     created, and worse, have their permissions and owners changed.
 
 
@@ -899,7 +899,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 
         $ sudo /etc/init.d/celeryd create-paths

-    .. admonition:: Upgrading Celery will not update init-scripts
+    .. admonition:: Upgrading Celery won't update init-scripts

         To update the init-scripts you have to re-download
         the files from source control and update them manually.
@@ -912,7 +912,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - Fixes request stack protection when app is initialized more than
   once (Issue #1003).

-- ETA tasks now properly works when system timezone is not the same
+- ETA tasks now work properly when the system timezone isn't the same
   as the configured timezone (Issue #1004).

 - Terminating a task now works if the task has been sent to the
@@ -963,7 +963,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 
     - Billiard now installs even if the C extension cannot be built.

-        It's still recommended to build the C extension if you are using
+        It's still recommended to build the C extension if you're using
         a transport other than RabbitMQ/Redis (or use forced execv for some
         other reason).
 
 
@@ -982,7 +982,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
         >>> c() <-- call again

     at the second time the ids for the tasks would be the same as in the
-    previous invocation.  This is now fixed, so that calling a subtask
+    previous invocation. This is now fixed, so that calling a subtask
     won't mutate any options.
 
 
 - Canvas: Chaining a chord to another task now works (Issue #965).
@@ -1009,7 +1009,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
             if redis.sismember('tasks.revoked', custom_revokes.request.id):
                 raise Ignore()

-- The worker now makes sure the request/task stacks are not modified
+- The worker now makes sure the request/task stacks aren't modified
   by the initial ``Task.__call__``.

     This would previously be a problem if a custom task class defined
@@ -1019,7 +1019,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
   and can only be enabled by setting the :envvar:`USE_FAST_LOCALS` attribute.

 - Worker: Now sets a default socket timeout of 5 seconds at shutdown
-  so that broken socket reads do not hinder proper shutdown (Issue #975).
+  so that broken socket reads don't hinder proper shutdown (Issue #975).

 - More fixes related to late eventlet/gevent patching.
 
 
@@ -1057,7 +1057,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 
     ``instance.app.queues`` -> ``instance.app.amqp.queues``.

-- Eventlet/gevent: The worker did not properly set the custom app
+- Eventlet/gevent: The worker didn't properly set the custom app
   for new greenlets.

 - Eventlet/gevent: Fixed a bug where the worker could not recover
@@ -1093,7 +1093,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - Note about the :setting:`CELERY_ENABLE_UTC` setting.

     If you previously disabled this just to force periodic tasks to work with
-    your timezone, then you are now *encouraged to re-enable it*.
+    your timezone, then you're now *encouraged to re-enable it*.

 - Now depends on Kombu 2.4.5 which fixes PyPy + Jython installation.
 
 
@@ -1219,13 +1219,13 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
   with the exception object instead of its string representation.

 - The worker daemon would try to create the pid file before daemonizing
-  to catch errors, but this file was not immediately released (Issue #923).
+  to catch errors, but this file wasn't immediately released (Issue #923).

 - Fixes Jython compatibility.

 - ``billiard.forking_enable`` was called by all pools not just the
   processes pool, which would result in a useless warning if the billiard
-  C extensions were not installed.
+  C extensions weren't installed.
 
 
 .. _version-3.0.6:

@@ -1267,7 +1267,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
     A regression long ago disabled magic kwargs for these, and since
     no one has complained about it we don't have any incentive to fix it now.

-- The ``inspect reserved`` control command did not work properly.
+- The ``inspect reserved`` control command didn't work properly.

 - Should now play better with tools for static analysis by explicitly
   specifying dynamically created attributes in the :mod:`celery` and
@@ -1385,7 +1385,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 
     Fixing this bug also means that the SQS transport is now working again.

-- The semaphore was not properly released when a task was revoked (Issue #877).
+- The semaphore wasn't properly released when a task was revoked (Issue #877).

     This could lead to tasks being swallowed and not released until a worker
     restart.
@@ -1426,8 +1426,8 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 
         app.add_defaults(config)

-    is the same as ``app.conf.update(config)`` except that data will not be
-    copied, and that it will not be pickled when the worker spawns child
+    is the same as ``app.conf.update(config)`` except that data won't be
+    copied, and that it won't be pickled when the worker spawns child
     processes.

     In addition the method accepts a callable::
@@ -1437,7 +1437,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 
         app.add_defaults(initialize_config)

-    which means the same as the above except that it will not happen
+    which means the same as the above except that it won't happen
     until the celery configuration is actually used.

     As an example, Celery can lazily use the configuration of a Flask app::
@@ -1446,7 +1446,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
         app = Celery()
         app.add_defaults(lambda: flask_app.config)

-- Revoked tasks were not marked as revoked in the result backend (Issue #871).
+- Revoked tasks weren't marked as revoked in the result backend (Issue #871).

     Fix contributed by Hynek Schlawack.
 
 
@@ -1565,7 +1565,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 
 - The :program:`celery worker` command now works with eventlet/gevent.

-    Previously it would not patch the environment early enough.
+    Previously it wouldn't patch the environment early enough.

 - The :program:`celery` command now supports extension commands
   using setuptools entry-points.

+ 50 - 50
docs/history/changelog-3.1.rst

@@ -47,7 +47,7 @@ new in Celery 3.1.
     Contributed by Sebastian Kalinowski.

 - **Utils**: The ``.discard(item)`` method of
-  :class:`~celery.utils.collections.LimitedSet` did not actually remove the item
+  :class:`~celery.utils.collections.LimitedSet` didn't actually remove the item
   (Issue #3087).

     Fix contributed by Dave Smith.
@@ -100,7 +100,7 @@ new in Celery 3.1.
 - **Results**: Database backend now properly supports JSON exceptions
   (Issue #2441).

-- **Results**: Redis ``new_join`` did not properly call task errbacks on chord
+- **Results**: Redis ``new_join`` didn't properly call task errbacks on chord
   error (Issue #2796).

 - **Results**: Restores Redis compatibility with Python :pypi:`redis` < 2.10.0
@@ -128,11 +128,11 @@ new in Celery 3.1.
 
 
     Contributed by Dennis Brakhane.

-    Python 3.5's ``OrderedDict`` does not allow mutation while it is being
+    Python 3.5's ``OrderedDict`` doesn't allow mutation while it is being
     iterated over. This breaks "update" if it is called with a dict
     larger than the maximum size.

-    This commit changes the code to a version that does not iterate over
+    This commit changes the code to a version that doesn't iterate over
     the dict, and should also be a little bit faster.

 - **Init-scripts**: The beat init-script now properly reports service as down
@@ -233,7 +233,7 @@ new in Celery 3.1.
 
 
     Fix contributed by Sukrit Khera.

-- **Results**: RPC/AMQP backends did not deserialize exceptions properly
+- **Results**: RPC/AMQP backends didn't deserialize exceptions properly
   (Issue #2691).

     Fix contributed by Sukrit Khera.
@@ -329,7 +329,7 @@ new in Celery 3.1.
 :release-date: 2014-11-19 03:30 P.M UTC
 :release-by: Ask Solem

-.. admonition:: Do not enable the `CELERYD_FORCE_EXECV` setting!
+.. admonition:: Don't enable the `CELERYD_FORCE_EXECV` setting!

     Please review your configuration and disable this option if you're using the
     RabbitMQ or Redis transport.
@@ -359,7 +359,7 @@ new in Celery 3.1.
 
 
     Fix contributed by Thomas French.

-- **Task**: Callbacks was not called properly if ``link`` was a list of
+- **Task**: Callbacks weren't called properly if ``link`` was a list of
   signatures (Issue #2350).

 - **Canvas**: chain and group now handle json serialized signatures
@@ -397,7 +397,7 @@ new in Celery 3.1.
 
 
     Fix contributed by Sukrit Khera.

-- **Task**: Exception info was not properly set for tasks raising
+- **Task**: Exception info wasn't properly set for tasks raising
   :exc:`~celery.exceptions.Reject` (Issue #2043).

 - **Worker**: Duplicates are now removed when loading the set of revoked tasks
@@ -436,7 +436,7 @@ new in Celery 3.1.
 - **Canvas**: ``celery.signature`` now properly forwards app argument
   in all cases.

-- **Task**: ``.retry()`` did not raise the exception correctly
+- **Task**: ``.retry()`` didn't raise the exception correctly
   when called without a current exception.

     Fix contributed by Andrea Rabbaglietti.
@@ -555,7 +555,7 @@ News
 - **Beat**: Accounts for standard 1ms drift by always waking up 0.010s
   earlier.

-    This will adjust the latency so that the periodic tasks will not move
+    This will adjust the latency so that the periodic tasks won't move
     1ms after every invocation.

 - Documentation fixes
@@ -578,7 +578,7 @@ News
 
 
     Now depends on :ref:`Kombu 3.0.19 <kombu:version-3.0.19>`.

-- **App**: Connections were not being closed after fork due to an error in the
+- **App**: Connections weren't being closed after fork due to an error in the
   after fork handler (Issue #2055).

     This could manifest itself by causing framing errors when using RabbitMQ.
@@ -590,7 +590,7 @@ News
 - **Django**: Fixed problems with event timezones when using Django
   (``Substantial drift``).

-    Celery did not take into account that Django modifies the
+    Celery didn't take into account that Django modifies the
     ``time.timezone`` attributes and friends.

 - **Canvas**: ``Signature.link`` now works when the link option is a scalar
@@ -619,7 +619,7 @@ News
 - **Programs**: The default working directory for :program:`celery worker
   --detach` is now the current working directory, not ``/``.

-- **Canvas**: ``signature(s, app=app)`` did not upgrade serialized signatures
+- **Canvas**: ``signature(s, app=app)`` didn't upgrade serialized signatures
   to their original class (``subtask_type``) when the ``app`` keyword argument
   was used.
 
 
@@ -644,7 +644,7 @@ News
 
 
     Fix contributed by Luke Pomfrey.

-- **Other**: The ``inspect conf`` command did not handle non-string keys well.
+- **Other**: The ``inspect conf`` command didn't handle non-string keys well.

     Fix contributed by Jay Farrimond.
 
 
@@ -653,13 +653,13 @@ News
 
 
     Fix contributed by Dmitry Malinovsky.

-- **Programs**: :program:`celery worker --detach` did not forward working
+- **Programs**: :program:`celery worker --detach` didn't forward working
   directory option (Issue #2003).

 - **Programs**: :program:`celery inspect registered` no longer includes
   the list of built-in tasks.

-- **Worker**: The ``requires`` attribute for boot steps were not being handled
+- **Worker**: The ``requires`` attribute for boot steps wasn't being handled
   correctly (Issue #2002).
 
 
 - **Eventlet**: The eventlet pool now supports the ``pool_grow`` and
@@ -684,7 +684,7 @@ News
 
 
     Fix contributed by Ian Dees.

-- **Init-scripts**: The CentOS init-scripts did not quote
+- **Init-scripts**: The CentOS init-scripts didn't quote
   :envvar:`CELERY_CHDIR`.

     Fix contributed by :github_user:`ffeast`.
@@ -783,15 +783,15 @@ News
 
 
 - **Redis:** Important note about events (Issue #1882).

-    There is a new transport option for Redis that enables monitors
-    to filter out unwanted events.  Enabling this option in the workers
+    There's a new transport option for Redis that enables monitors
+    to filter out unwanted events. Enabling this option in the workers
     will increase performance considerably:

     .. code-block:: python

         BROKER_TRANSPORT_OPTIONS = {'fanout_patterns': True}

-    Enabling this option means that your workers will not be able to see
+    Enabling this option means that your workers won't be able to see
     workers with the option disabled (or are running an older version of
     Celery), so if you do enable it then make sure you do so on all
     nodes.
@@ -805,7 +805,7 @@ News
 
 
     This means that the global result cache can finally be disabled,
     and you can do so by setting :setting:`CELERY_MAX_CACHED_RESULTS` to
-    :const:`-1`.  The lifetime of the cache will then be bound to the
+    :const:`-1`. The lifetime of the cache will then be bound to the
     lifetime of the result object, which will be the default behavior
     in Celery 3.2.
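
    For example (a sketch):

    .. code-block:: python

        CELERY_MAX_CACHED_RESULTS = -1  # disable the global result cache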
 
 
@@ -816,7 +816,7 @@ News
   prefork pool.

     This can be enabled by using the new ``%i`` and ``%I`` format specifiers
-    for the log file name.  See :ref:`worker-files-process-index`.
+    for the log file name. See :ref:`worker-files-process-index`.
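
    For example, a sketch (the log path is made up; ``%n`` expands to the
    node name, ``%I`` to the pool process index):

    .. code-block:: console

        $ celery worker -l info --logfile=/var/log/celery/%n%I.log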
 
 
 - **Redis**: New experimental chord join implementation.
@@ -901,7 +901,7 @@ News
 
@@ -901,7 +901,7 @@ News
     Contributed by Chris Clark.

 - **Commands**: :program:`celery inspect memdump` no longer crashes
-  if the :mod:`psutil` module is not installed (Issue #1914).
+  if the :mod:`psutil` module isn't installed (Issue #1914).

 - **Worker**: Remote control commands now always accept json serialized
   messages (Issue #1870).
@@ -949,7 +949,7 @@ News
 - **Generic init-scripts**: Fixed compatibility with the minimal
   :program:`dash` shell (Issue #1815).

-- **Commands**: The :program:`celery amqp basic.publish` command was not
+- **Commands**: The :program:`celery amqp basic.publish` command wasn't
   working properly.

     Fix contributed by Andrey Voronov.
@@ -960,7 +960,7 @@ News
 - **Commands**: Better error message for missing arguments to preload
   options (Issue #1860).

-- **Commands**: :program:`celery -h` did not work because of a bug in the
+- **Commands**: :program:`celery -h` didn't work because of a bug in the
   argument parser (Issue #1849).

 - **Worker**: Improved error message for message decoding errors.
@@ -1000,7 +1000,7 @@ News
 .. _`billiard 3.3.0.14`:
     https://github.com/celery/billiard/blob/master/CHANGES.txt

-- **Worker**: The event loop was not properly reinitialized at consumer restart
+- **Worker**: The event loop wasn't properly reinitialized at consumer restart
   which would force the worker to continue with a closed ``epoll`` instance on
   Linux, resulting in a crash.
 
 
@@ -1036,15 +1036,15 @@ News
 
 
 - **Generic init-scripts:** Now runs a check at start-up to verify
   that any configuration scripts are owned by root and that they
-  are not world/group writable.
+  aren't world/group writable.

     The init-script configuration is a shell script executed by root,
-    so this is a preventive measure to ensure that users do not
+    so this is a preventive measure to ensure that users don't
     leave this file vulnerable to changes by unprivileged users.

     .. note::

-        Note that upgrading celery will not update the init-scripts,
+        Note that upgrading celery won't update the init-scripts,
         instead you need to manually copy the improved versions from the
         source distribution:
         https://github.com/celery/celery/tree/3.1/extra/generic-init.d
@@ -1055,10 +1055,10 @@ News
     A new :option:`-f <celery purge -f>` was added that can be used to disable
     interactive mode.

-- **Task**: ``.retry()`` did not raise the value provided in the ``exc`` argument
+- **Task**: ``.retry()`` didn't raise the value provided in the ``exc`` argument
   when called outside of an error context (*Issue #1755*).

-- **Commands:** The :program:`celery multi` command did not forward command
+- **Commands:** The :program:`celery multi` command didn't forward command
   line configuration to the target workers.

     The change means that multi will forward the special ``--`` argument and
@@ -1134,10 +1134,10 @@ Init-script security improvements
 
 
 Where the generic init-scripts (for ``celeryd``, and ``celerybeat``) before
 Where the generic init-scripts (for ``celeryd``, and ``celerybeat``) before
 delegated the responsibility of dropping privileges to the target application,
 delegated the responsibility of dropping privileges to the target application,
-it will now use ``su`` instead, so that the Python program is not trusted
+it will now use ``su`` instead, so that the Python program isn't trusted
 with superuser privileges.
 with superuser privileges.
 
 
-This is not in reaction to any known exploit, but it will
+This isn't in reaction to any known exploit, but it will
 limit the possibility of a privilege escalation bug being abused in the
 limit the possibility of a privilege escalation bug being abused in the
 future.
 future.
 
 
@@ -1151,7 +1151,7 @@ The 3.1 release accidentally left the amqp backend configured to be
 non-persistent by default.
 non-persistent by default.
 
 
 Upgrading from 3.0 would give a "not equivalent" error when attempting to
 Upgrading from 3.0 would give a "not equivalent" error when attempting to
-set or retrieve results for a task.  That is unless you manually set the
+set or retrieve results for a task. That's unless you manually set the
 persistence setting::
 persistence setting::
 
 
     CELERY_RESULT_PERSISTENT = True
     CELERY_RESULT_PERSISTENT = True
@@ -1172,7 +1172,7 @@ It's not legal for tasks to block by waiting for subtasks
 as this is likely to lead to resource starvation and eventually
 as this is likely to lead to resource starvation and eventually
 deadlock when using the prefork pool (see also :ref:`task-synchronous-subtasks`).
 deadlock when using the prefork pool (see also :ref:`task-synchronous-subtasks`).
 
 
-If you really know what you are doing you can avoid the warning (and
+If you really know what you're doing you can avoid the warning (and
 the future exception being raised) by moving the operation in a
 the future exception being raised) by moving the operation in a
 white-list block:
 white-list block:
 
 
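The white-list block itself falls outside this hunk. A minimal sketch of the
pattern, assuming the ``allow_join_result`` context manager that
``celery.result`` ships in the 3.1 series (the task and names here are
illustrative only):

.. code-block:: python

    from celery.result import allow_join_result

    @app.task
    def summarize(result_id):
        # Waiting on a subtask normally triggers the warning; wrapping
        # the blocking .get() in allow_join_result() white-lists it.
        with allow_join_result():
            return app.AsyncResult(result_id).get()
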
@@ -1214,7 +1214,7 @@ Fixes
   (Issue #1740).

     This also removed a rarely used feature where you can symlink the script
-    to provide alternative configurations.  You instead copy the script
+    to provide alternative configurations. You instead copy the script
     and give it a new name, but perhaps a better solution is to provide
     arguments to ``CELERYD_OPTS`` to separate them:

@@ -1226,7 +1226,7 @@ Fixes
 - Fallback chord unlock task is now always called after the chord header
   (Issue #1700).

-    This means that the unlock task will not be started if there's
+    This means that the unlock task won't be started if there's
     an error sending the header.

 - Celery command: Fixed problem with arguments for some control commands.
@@ -1242,7 +1242,7 @@ Fixes

     Fix contributed by Ionel Cristian Mărieș.

-- Worker with :option:`-B <celery worker -B>` argument did not properly
+- Worker with :option:`-B <celery worker -B>` argument didn't properly
   shut down the beat instance.

 - Worker: The ``%n`` and ``%h`` formats are now also supported by the
@@ -1275,16 +1275,16 @@ Fixes
   (Issue #1714).

     For ``events.State`` the tasks now have a ``Task.client`` attribute
-    that is set when a ``task-sent`` event is being received.
+    that's set when a ``task-sent`` event is being received.

-    Also, a clients logical clock is not in sync with the cluster so
-    they live in a "time bubble".  So for this reason monitors will no
+    Also, a clients logical clock isn't in sync with the cluster so
+    they live in a "time bubble". So for this reason monitors will no
     longer attempt to merge with the clock of an event sent by a client,
     instead it will fake the value by using the current clock with
     a skew of -1.

 - Prefork pool: The method used to find terminated processes was flawed
-  in that it did not also take into account missing ``popen`` objects.
+  in that it didn't also take into account missing ``popen`` objects.

 - Canvas: ``group`` and ``chord`` now works with anon signatures as long
   as the group/chord object is associated with an app instance (Issue #1744).
@@ -1332,7 +1332,7 @@ Fixes

     Fix contributed by Jonathan Jordan.

-- Events: Fixed problem when task name is not defined (Issue #1710).
+- Events: Fixed problem when task name isn't defined (Issue #1710).

     Fix contributed by Mher Movsisyan.

@@ -1385,7 +1385,7 @@ Fixes

         app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

-    this ensures that the settings object is not prepared
+    this ensures that the settings object isn't prepared
     prematurely.

 - Fixed regression for :option:`--app <celery --app>` argument
@@ -1393,11 +1393,11 @@ Fixes

 - Worker: Now respects the :option:`--uid <celery worker --uid>` and
   :option:`--gid <celery worker --gid>` arguments even if
-  :option:`--detach <celery worker --detach>` is not enabled.
+  :option:`--detach <celery worker --detach>` isn't enabled.

 - Beat: Now respects the :option:`--uid <celery beat --uid>` and
   :option:`--gid <celery beat --gid>` arguments even if
-  :option:`--detach <celery beat --detach>` is not enabled.
+  :option:`--detach <celery beat --detach>` isn't enabled.

 - Python 3: Fixed unorderable error occurring with the worker
   :option:`-B <celery worker -B>` argument enabled.
@@ -1450,7 +1450,7 @@ Fixes
         tasks = app.tasks
         add.delay(2, 2)

-- The worker did not send monitoring events during shutdown.
+- The worker didn't send monitoring events during shutdown.

 - Worker: Mingle and gossip is now automatically disabled when
   used with an unsupported transport (Issue #1664).
@@ -1565,7 +1565,7 @@ Fixes
 - Python 3: Fixed compatibility issues.

 - Windows:  Accidentally showed warning that the billiard C extension
-  was not installed (Issue #1630).
+  wasn't installed (Issue #1630).

 - Django: Tutorial updated with a solution that sets a default
   :envvar:`DJANGO_SETTINGS_MODULE` so that it doesn't have to be typed

@@ -1578,7 +1578,7 @@ Fixes

 - Django: Fixed a problem when using the Django settings in Django 1.6.

-- Django: Fix-up should not be applied if the django loader is active.
+- Django: Fix-up shouldn't be applied if the django loader is active.

 - Worker:  Fixed attribute error for ``human_write_stats`` when using the
   compatibility prefork pool implementation.
@@ -1587,7 +1587,7 @@ Fixes

 - Inspect.conf: Now supports a ``with_defaults`` argument.

-- Group.restore: The backend argument was not respected.
+- Group.restore: The backend argument wasn't respected.

 .. _version-3.1.0:


+ 14 - 14
docs/history/whatsnew-2.5.rst

@@ -15,7 +15,7 @@ or :ref:`our mailing-list <mailing-list>`.
 To read more about Celery you should visit our `website`_.

 While this version is backward compatible with previous versions
-it is important that you read the following section.
+it's important that you read the following section.

 If you use Celery in combination with Django you must also
 read the `django-celery changelog <djcelery:version-2.5.0>` and upgrade to `django-celery 2.5`_.
@@ -74,7 +74,7 @@ race condition leading to an annoying warning.
         CELERY_RESULT_EXCHANGE = 'celeryresults2'

     But you have to make sure that all clients and workers
-    use this new setting, so they are updated to use the same
+    use this new setting, so they're updated to use the same
     exchange name.

 Solution for hanging workers (but must be manually enabled)
@@ -98,7 +98,7 @@ setting.
 Enabling this option will result in a slight performance penalty
 when new child worker processes are started, and it will also increase
 memory usage (but many platforms are optimized, so the impact may be
-minimal).  Considering that it ensures reliability when replacing
+minimal). Considering that it ensures reliability when replacing
 lost worker processes, it should be worth it.

 - It's already the default behavior on Windows.
@@ -113,7 +113,7 @@ Optimization

 - The code path used when the worker executes a task has been heavily
   optimized, meaning the worker is able to process a great deal
-  more tasks/second compared to previous versions.  As an example the solo
+  more tasks/second compared to previous versions. As an example the solo
   pool can now process up to 15000 tasks/second on a 4 core MacBook Pro
   when using the `pylibrabbitmq`_ transport, where it previously
   could only do 5000 tasks/second.
@@ -142,10 +142,10 @@ Removals
   scheduled for removal in 2.3).

 * The built-in ``ping`` task has been removed (originally scheduled
-  for removal in 2.3).  Please use the ping broadcast command
+  for removal in 2.3). Please use the ping broadcast command
   instead.

-* It is no longer possible to import ``subtask`` and ``TaskSet``
+* It's no longer possible to import ``subtask`` and ``TaskSet``
   from :mod:`celery.task.base`, please import them from :mod:`celery.task`
   instead (originally scheduled for removal in 2.4).

@@ -154,7 +154,7 @@ Deprecated modules

 * The :mod:`celery.decorators` module has changed status
   from pending deprecation to deprecated, and is scheduled for removal
-  in version 4.0.  The ``celery.task`` module must be used instead.
+  in version 4.0. The ``celery.task`` module must be used instead.

 .. _v250-news:

@@ -167,7 +167,7 @@ Timezone support
 Celery can now be configured to treat all incoming and outgoing dates
 as UTC, and the local timezone can be configured.

-This is not yet enabled by default, since enabling
+This isn't yet enabled by default, since enabling
 time zone support means workers running versions pre-2.5
 will be out of sync with upgraded workers.

@@ -180,7 +180,7 @@ converted to UTC, and then converted back to the local timezone
 when received by a worker.

 You can change the local timezone using the :setting:`CELERY_TIMEZONE`
-setting.  Installing the :pypi:`pytz` library is recommended when
+setting. Installing the :pypi:`pytz` library is recommended when
 using a custom timezone, to keep timezone definition up-to-date,
 but it will fallback to a system definition of the timezone if available.

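As a concrete illustration of the settings this section describes (a sketch
using the 2.5-era setting names; the timezone value is only an example):

.. code-block:: python

    # celeryconfig.py
    CELERY_ENABLE_UTC = True          # treat incoming/outgoing dates as UTC
    CELERY_TIMEZONE = 'Europe/Oslo'   # local timezone used for conversion
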
@@ -274,11 +274,11 @@ executing task.
             # retry in 10 seconds.
             current.retry(countdown=10, exc=exc)

-Previously you would have to type ``update_twitter_status.retry(…)``
+Previously you'd've to type ``update_twitter_status.retry(…)``
 here, which can be annoying for long task names.

 .. note::
-    This will not work if the task function is called directly, i.e:
+    This won't work if the task function is called directly, i.e:
     ``update_twitter_status(a, b)``. For that to work ``apply`` must
     be used: ``update_twitter_status.apply((a, b))``.

@@ -300,7 +300,7 @@ In Other News

 - Sending :sig:`QUIT` to ``celeryd`` will now cause it cold terminate.

-    That is, it will not finish executing the tasks it is currently
+    That is, it won't finish executing the tasks it's currently
     working on.

     Contributed by Alec Clowes.
@@ -330,7 +330,7 @@ In Other News

     $ celerybeat -l info -- celerybeat.max_loop_interval=10.0

-- Now limits the number of frames in a traceback so that ``celeryd`` does not
+- Now limits the number of frames in a traceback so that ``celeryd`` doesn't
   crash on maximum recursion limit exceeded exceptions (Issue #615).

     The limit is set to the current recursion limit divided by 8 (which
@@ -396,7 +396,7 @@ In Other News

 - Redis result backend: Adds support for a ``max_connections`` parameter.

-    It is now possible to configure the maximum number of
+    It's now possible to configure the maximum number of
     simultaneous connections in the Redis connection pool used for
     results.


+ 33 - 33
docs/history/whatsnew-3.0.rst

@@ -31,10 +31,10 @@ Highlights

 .. topic:: Overview

-    - A new and improved API, that is both simpler and more powerful.
+    - A new and improved API, that's both simpler and more powerful.

         Everyone must read the new :ref:`first-steps` tutorial,
-        and the new :ref:`next-steps` tutorial.  Oh, and
+        and the new :ref:`next-steps` tutorial. Oh, and
         why not reread the user guide while you're at it :)

         There are no current plans to deprecate the old API,
@@ -119,7 +119,7 @@ Hopefully this can be extended to include additional broker transports
 in the future.

 For increased reliability the :setting:`CELERY_FORCE_EXECV` setting is enabled
-by default if the event-loop is not used.
+by default if the event-loop isn't used.

 New ``celery`` umbrella command
 -------------------------------
@@ -142,7 +142,7 @@ Commands include:
 - ``celery amqp``    (previously ``camqadm``).

 The old programs are still available (``celeryd``, ``celerybeat``, etc),
-but you are discouraged from using them.
+but you're discouraged from using them.

 Now depends on :pypi:`billiard`.
 --------------------------------
@@ -167,7 +167,7 @@ The :mod:`celery.app.task` module is now a module instead of a package.

 The :file:`setup.py` install script will try to remove the old package,
 but if that doesn't work for some reason you have to remove
-it manually.  This command helps:
+it manually. This command helps:

 .. code-block:: console

@@ -186,14 +186,14 @@ With several other distributions taking the step to discontinue
 Python 2.5 support, we feel that it is time too.

 Python 2.6 should be widely available at this point, and we urge
-you to upgrade, but if that is not possible you still have the option
+you to upgrade, but if that's not possible you still have the option
 to continue using the Celery 3.0, and important bug fixes
 introduced in Celery 3.1 will be back-ported to Celery 3.0 upon request.

 UTC timezone is now used
 ------------------------

-This means that ETA/countdown in messages are not compatible with Celery
+This means that ETA/countdown in messages aren't compatible with Celery
 versions prior to 2.5.

 You can disable UTC and revert back to old local time by setting
@@ -205,7 +205,7 @@ Redis: Ack emulation improvements
     Reducing the possibility of data loss.

     Acks are now implemented by storing a copy of the message when the message
-    is consumed.  The copy is not removed until the consumer acknowledges
+    is consumed. The copy isn't removed until the consumer acknowledges
     or rejects it.

     This means that unacknowledged messages will be redelivered either
@@ -214,7 +214,7 @@ Redis: Ack emulation improvements
     - Visibility timeout

         This is a timeout for acks, so that if the consumer
-        does not ack the message within this time limit, the message
+        doesn't ack the message within this time limit, the message
         is redelivered to another consumer.

         The timeout is set to one hour by default, but
@@ -225,14 +225,14 @@ Redis: Ack emulation improvements

     .. note::

-        Messages that have not been acked will be redelivered
+        Messages that haven't been acked will be redelivered
         if the visibility timeout is exceeded, for Celery users
         this means that ETA/countdown tasks that are scheduled to execute
         with a time that exceeds the visibility timeout will be executed
-        twice (or more).  If you plan on using long ETA/countdowns you
+        twice (or more). If you plan on using long ETA/countdowns you
         should tweak the visibility timeout accordingly.

-    Setting a long timeout means that it will take a long time
+    Setting a long timeout means that it'll take a long time
     for messages to be redelivered in the event of a power failure,
     but if so happens you could temporarily set the visibility timeout lower
     to flush out messages when you start up the systems again.
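In configuration terms this knob lives in the broker transport options; a
minimal sketch (the 12-hour value is only an example):

.. code-block:: python

    # celeryconfig.py -- allow ETA/countdown tasks of up to ~12 hours
    # on Redis without risking duplicate execution.
    BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 43200}  # seconds
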
@@ -259,9 +259,9 @@ Tasks can now have callbacks and errbacks, and dependencies are recorded
     - ``errbacks``

         Applied if an error occurred while executing the task,
-        with the uuid of the task as an argument.  Since it may not be possible
+        with the uuid of the task as an argument. Since it may not be possible
         to serialize the exception instance, it passes the uuid of the task
-        instead.  The uuid can then be used to retrieve the exception and
+        instead. The uuid can then be used to retrieve the exception and
         traceback of the task from the result backend.

     - ``link`` and ``link_error`` keyword arguments has been added
@@ -289,12 +289,12 @@ Tasks can now have callbacks and errbacks, and dependencies are recorded
             yielding `(parent, node)` tuples.

             Raises IncompleteStream if any of the dependencies
-            has not returned yet.
+            hasn't returned yet.

        - AsyncResult.graph

             A :class:`~celery.utils.graph.DependencyGraph` of the tasks
-            dependencies.  With this you can also convert to dot format:
+            dependencies. With this you can also convert to dot format:

             .. code-block:: python

@@ -326,7 +326,7 @@ Tasks can now have callbacks and errbacks, and dependencies are recorded
 - Adds :meth:`AsyncResult.get_leaf`

     Waits and returns the result of the leaf subtask.
-    That is the last node found when traversing the graph,
+    That's the last node found when traversing the graph,
     but this means that the graph can be 1-dimensional only (in effect
     a list).

@@ -359,7 +359,7 @@ transport option, which must be a list of numbers in **sorted order**:
     ...     'priority_steps': [0, 2, 4, 6, 8, 9],
     ... }

-Priorities implemented in this way is not as reliable as
+Priorities implemented in this way isn't as reliable as
 priorities on the server side, which is why
 the feature is nicknamed "quasi-priorities";
 **Using routing is still the suggested way of ensuring
@@ -370,7 +370,7 @@ or the queues are congested.

 Still, it is possible that using priorities in combination
 with routing can be more beneficial than using routing
-or priorities alone.  Experimentation and monitoring
+or priorities alone. Experimentation and monitoring
 should be used to prove this.

 Contributed by Germán M. Bravo.
@@ -464,13 +464,13 @@ accidentally changed while switching to using blocking pop.
 New remote control commands
 ---------------------------

-These commands were previously experimental, but they have proven
+These commands were previously experimental, but they've proven
 stable and is now documented as part of the official API.

 - :control:`add_consumer`/:control:`cancel_consumer`

     Tells workers to consume from a new queue, or cancel consuming from a
-    queue.  This command has also been changed so that the worker remembers
+    queue. This command has also been changed so that the worker remembers
     the queues added, so that the change will persist even if
     the connection is re-connected.

@@ -547,13 +547,13 @@ Immutable subtasks
 ------------------

 ``subtask``'s can now be immutable, which means that the arguments
-will not be modified when calling callbacks:
+won't be modified when calling callbacks:

 .. code-block:: pycon

     >>> chain(add.s(2, 2), clear_static_electricity.si())

-means it will not receive the argument of the parent task,
+means it'll not receive the argument of the parent task,
 and ``.si()`` is a shortcut to:

 .. code-block:: pycon
@@ -570,15 +570,15 @@ Logging support now conforms better with best practices.
   level, and adds a NullHandler.

 - Loggers are no longer passed around, instead every module using logging
-  defines a module global logger that is used throughout.
+  defines a module global logger that's used throughout.

 - All loggers inherit from a common logger called "celery".

 - Before ``task.get_logger`` would setup a new logger for every task,
-  and even set the log level.  This is no longer the case.
+  and even set the log level. This is no longer the case.

     - Instead all task loggers now inherit from a common "celery.task" logger
-      that is set up when programs call `setup_logging_subsystem`.
+      that's set up when programs call `setup_logging_subsystem`.

     - Instead of using LoggerAdapter to augment the formatter with
       the task_id and task_name field, the task base logger now use
@@ -675,7 +675,7 @@ The ``@task`` decorator is now lazy when used with custom apps.

 That is, if ``accept_magic_kwargs`` is enabled (her by called "compat mode"), the task
 decorator executes inline like before, however for custom apps the @task
-decorator now returns a special PromiseProxy object that is only evaluated
+decorator now returns a special PromiseProxy object that's only evaluated
 on access.

 All promises will be evaluated when :meth:`@finalize` is called, or implicitly
@@ -709,7 +709,7 @@ In Other News

 - New :setting:`CELERYD_WORKER_LOST_WAIT` to control the timeout in
   seconds before :exc:`billiard.WorkerLostError` is raised
-  when a worker can not be signaled (Issue #595).
+  when a worker can't be signaled (Issue #595).

     Contributed by Brendon Crawford.

@@ -739,7 +739,7 @@ In Other News

 - Result backends can now be set using a URL

-    Currently only supported by redis.  Example use:
+    Currently only supported by redis. Example use:

     .. code-block:: python

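The example itself falls outside this hunk; a minimal sketch of a URL-style
result backend setting (host, port and database number are placeholders):

.. code-block:: python

    CELERY_RESULT_BACKEND = 'redis://localhost:6379/1'
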
@@ -876,7 +876,7 @@ In Other News

 - Now uses :func:`~kombu.common.maybe_declare` to cache queue declarations.

-- There is no longer a global default for the
+- There's no longer a global default for the
   :setting:`CELERYBEAT_MAX_LOOP_INTERVAL` setting, it is instead
   set by individual schedulers.

@@ -1012,7 +1012,7 @@ See the :ref:`deprecation-timeline`.
     - ``control.inspect.enable_events`` -> :meth:`@control.enable_events`.
     - ``control.inspect.disable_events`` -> :meth:`@control.disable_events`.

-    This way ``inspect()`` is only used for commands that do not
+    This way ``inspect()`` is only used for commands that don't
     modify anything, while idempotent control commands that make changes
     are on the control objects.

@@ -1022,11 +1022,11 @@ Fixes
 - Retry SQLAlchemy backend operations on DatabaseError/OperationalError
   (Issue #634)

-- Tasks that called ``retry`` was not acknowledged if acks late was enabled
+- Tasks that called ``retry`` wasn't acknowledged if acks late was enabled

     Fix contributed by David Markey.

-- The message priority argument was not properly propagated to Kombu
+- The message priority argument wasn't properly propagated to Kombu
   (Issue #708).

     Fix contributed by Eran Rundstein

+ 2 - 2
docs/includes/installation.txt

@@ -27,7 +27,7 @@ Celery also defines a group of bundles that can be used
 to install Celery and the dependencies for a given feature.

 You can specify these in your requirements or on the :command:`pip`
-command-line by using brackets.  Multiple bundles can be specified by
+command-line by using brackets. Multiple bundles can be specified by
 separating them by commas.

 .. code-block:: console

@@ -125,7 +125,7 @@ You can install it by doing the following,:
     # python setup.py install

 The last command must be executed as a privileged user if
-you are not currently using a virtualenv.
+you aren't currently using a virtualenv.

 .. _celery-installing-from-git:


+ 9 - 9
docs/includes/introduction.txt

@@ -7,8 +7,8 @@

 --

-What is a Task Queue?
-=====================
+What's a Task Queue?
+====================

 Task queues are used as a mechanism to distribute work across threads or
 machines.
@@ -17,14 +17,14 @@ A task queue's input is a unit of work, called a task, dedicated worker
 processes then constantly monitor the queue for new work to perform.

 Celery communicates via messages, usually using a broker
-to mediate between clients and workers.  To initiate a task a client puts a
+to mediate between clients and workers. To initiate a task a client puts a
 message on the queue, the broker then delivers the message to a worker.

 A Celery system can consist of multiple workers and brokers, giving way
 to high availability and horizontal scaling.

 Celery is written in Python, but the protocol can be implemented in any
-language.  In addition to Python there's node-celery_ for Node.js,
+language. In addition to Python there's node-celery_ for Node.js,
 and a `PHP client`_.

 Language interoperability can also be achieved
@@ -47,7 +47,7 @@ Celery version 4.0 runs on,
 This is the last version to support Python 2.7,
 and from the next version (Celery 5.x) Python 3.6 or newer is required.

-If you are running an older version of Python, you need to be running
+If you're running an older version of Python, you need to be running
 an older version of Celery:

 - Python 2.6: Celery series 3.1 or earlier.
@@ -55,8 +55,8 @@ an older version of Celery:
 - Python 2.4 was Celery series 2.2 or earlier.

 Celery is a project with minimal funding,
-so we do not support Microsoft Windows.
-Please do not open any issues related to that platform.
+so we don't support Microsoft Windows.
+Please don't open any issues related to that platform.

 *Celery* is usually used with a message broker to send and receive messages.
 The RabbitMQ, Redis transports are feature complete,
@@ -69,7 +69,7 @@ across datacenters.
 Get Started
 ===========

-If this is the first time you're trying to use Celery, or you are
+If this is the first time you're trying to use Celery, or you're
 new to Celery 4.0 coming from previous versions then you should read our
 getting started tutorials:

@@ -176,7 +176,7 @@ integration packages:
     | `Tornado`_         | `tornado-celery`_      |
     +--------------------+------------------------+

-The integration packages are not strictly necessary, but they can make
+The integration packages aren't strictly necessary, but they can make
 development easier, and sometimes they add important hooks like closing
 database connections at ``fork``.


+ 1 - 1
docs/includes/resources.txt

@@ -45,7 +45,7 @@ Contributing

 Development of `celery` happens at GitHub: https://github.com/celery/celery

-You are highly encouraged to participate in the development
+You're highly encouraged to participate in the development
 of `celery`. If you don't like GitHub (for some reason) you're welcome
 to send regular patches.


+ 1 - 1
docs/index.rst

@@ -18,7 +18,7 @@ Celery is Open Source and licensed under the `BSD License`_.
 Getting Started
 ===============

-- If you are new to Celery you can get started by following
+- If you're new to Celery you can get started by following
   the :ref:`first-steps` tutorial.

 - You can also check out the :ref:`FAQ <faq>`.

+ 1 - 1
docs/internals/app-overview.rst

@@ -181,7 +181,7 @@ is missing.
         def __init__(self, app=None):
             self.app = app_or_default(app)

-The problem with this approach is that there is a chance
+The problem with this approach is that there's a chance
 that the app instance is lost along the way, and everything
 seems to be working normally. Testing app instance leaks
 is hard. The environment variable :envvar:`CELERY_TRACE_APP`

+ 7 - 7
docs/internals/guide.rst

@@ -34,7 +34,7 @@ Naming
 - Follows :pep:`8`.

 - Class names must be `CamelCase`.
-- but not if they are verbs, verbs shall be `lower_case`:
+- but not if they're verbs, verbs shall be `lower_case`:

     .. code-block:: python

@@ -62,8 +62,8 @@ Naming
     .. note::

         Sometimes it makes sense to have a class mask as a function,
-        and there is precedence for this in the Python standard library (e.g.
-        :class:`~contextlib.contextmanager`).  Celery examples include
+        and there's precedence for this in the Python standard library (e.g.
+        :class:`~contextlib.contextmanager`). Celery examples include
         :class:`~celery.signature`, :class:`~celery.chord`,
         ``inspect``, :class:`~kombu.utils.functional.promise` and more..

@@ -148,7 +148,7 @@ Composites
 ~~~~~~~~~~

 Similarly to exceptions, composite classes should be override-able by
-inheritance and/or instantiation.  Common sense can be used when
+inheritance and/or instantiation. Common sense can be used when
 selecting what classes to include, but often it's better to add one
 too many: predicting what users need to override is hard (this has
 saved us from many a monkey patch).
@@ -174,11 +174,11 @@ In the beginning Celery was developed for Django, simply because
 this enabled us get the project started quickly, while also having
 a large potential user base.

-In Django there is a global settings object, so multiple Django projects
+In Django there's a global settings object, so multiple Django projects
 can't co-exist in the same process space, this later posed a problem
 for using Celery with frameworks that doesn't have this limitation.

-Therefore the app concept was introduced.  When using apps you use 'celery'
+Therefore the app concept was introduced. When using apps you use 'celery'
 objects instead of importing things from celery sub-modules, this
 (unfortunately) also means that Celery essentially has two API's.

@@ -231,7 +231,7 @@ Module Overview

 - celery.loaders

-    Every app must have a loader.  The loader decides how configuration
+    Every app must have a loader. The loader decides how configuration
     is read, what happens when the worker starts, when a task starts and ends,
     and so on.


+ 4 - 4
docs/internals/protocol.rst

@@ -220,7 +220,7 @@ Message body
     :`string` (ISO 8601):

     Estimated time of arrival. This is the date and time in ISO 8601
-    format. If not provided the message is not scheduled, but will be
+    format. If not provided the message isn't scheduled, but will be
     executed asap.

 * ``expires``
@@ -243,7 +243,7 @@ Message body

     .. versionadded:: 2.3

-    Signifies that this task is one of the header parts of a chord.  The value
+    Signifies that this task is one of the header parts of a chord. The value
     of this key is the body of the cord that should be executed when all of
     the tasks in the header has returned.

@@ -334,7 +334,7 @@ Standard body fields

 - *string* ``type``

-    The type of event.  This is a string containing the *category* and
+    The type of event. This is a string containing the *category* and
     *action* separated by a dash delimiter (e.g. ``task-succeeded``).

 - *string* ``hostname``
@@ -352,7 +352,7 @@ Standard body fields
 - *signed short* ``utcoffset``

     This field describes the timezone of the originating host, and is
-    specified as the number of hours ahead of/behind UTC.  E.g. ``-2`` or
+    specified as the number of hours ahead of/behind UTC. E.g. ``-2`` or
     ``+1``.

 - *unsigned long long* ``pid``

+ 4 - 5
docs/reference/celery.app.amqp.rst

@@ -12,20 +12,19 @@

         .. attribute:: Connection

-            Broker connection class used.  Default is
-            :class:`kombu.Connection`.
+            Broker connection class used. Default is :class:`kombu.Connection`.

         .. attribute:: Consumer

-            Base Consumer class used.  Default is :class:`kombu.Consumer`.
+            Base Consumer class used. Default is :class:`kombu.Consumer`.

         .. attribute:: Producer

-            Base Producer class used.  Default is :class:`kombu.Producer`.
+            Base Producer class used. Default is :class:`kombu.Producer`.

         .. attribute:: queues

-            All currently defined task queues. (A :class:`Queues` instance).
+            All currently defined task queues (a :class:`Queues` instance).

         .. automethod:: Queues
         .. automethod:: Router

+ 3 - 3
docs/sec/CELERYSA-0001.txt

@@ -21,7 +21,7 @@ Description

 The --uid and --gid arguments to the celeryd-multi,
 celeryd_detach, celerybeat and celeryev programs shipped
-with Celery versions 2.1 and later was not handled properly:
+with Celery versions 2.1 and later wasn't handled properly:
 only the effective user was changed, with the real id remaining
 unchanged.

@@ -34,7 +34,7 @@ default makes it possible to execute arbitrary code.
 We recommend that users takes steps to secure their systems so that
 malicious users cannot abuse the message broker to send messages,
 or disable the pickle serializer used in Celery so that arbitrary code
-execution is not possible.
+execution isn't possible.

 Patches are now available for all maintained versions (see below),
 and users are urged to upgrade, even if not directly
@@ -86,7 +86,7 @@ with updated packages.
 Please direct questions to the celery-users mailing-list:
 http://groups.google.com/group/celery-users/,

-or if you are planning to report a security issue we request that
+or if you're planning to report a security issue we request that
 you keep the information confidential by contacting
 security@celeryproject.org, so that a fix can be issued as quickly as possible.


+ 3 - 3
docs/sec/CELERYSA-0002.txt

@@ -26,7 +26,7 @@ end up having world-writable permissions.
 In practice this means that local users will be able to modify and possibly
 corrupt the files created by user tasks.

-This is not immediately exploitable but can be if those files are later
+This isn't immediately exploitable but can be if those files are later
 evaluated as a program, for example a task that creates Python program files
 that are later executed.

@@ -56,7 +56,7 @@ NOTE:
     then files may already have been created with insecure permissions.

     So after upgrading, or using the workaround, then please make sure
-    that files already created are not world writable.
+    that files already created aren't world writable.

 To work around the issue you can set a custom umask using the ``--umask``
 argument:
@@ -83,7 +83,7 @@ with updated packages.
 Please direct questions to the celery-users mailing-list:
 http://groups.google.com/group/celery-users/,

-or if you are planning to report a new security related issue we request that
+or if you're planning to report a new security related issue we request that
 you keep the information confidential by contacting
 security@celeryproject.org instead.


+ 3 - 3
docs/tutorials/task-cookbook.rst

@@ -14,7 +14,7 @@ Ensuring a task is only executed one at a time

 You can accomplish this by using a lock.

-In this example we'll be using the cache framework to set a lock that is
+In this example we'll be using the cache framework to set a lock that's
 accessible for all workers.

 It's part of an imaginary RSS feed importer called `djangofeeds`.
@@ -26,7 +26,7 @@ consisting of the MD5 check-sum of the feed URL.
 The cache key expires after some time in case something unexpected happens,
 and something always will...

-For this reason your tasks run-time should not exceed the timeout.
+For this reason your tasks run-time shouldn't exceed the timeout.


 .. note::
@@ -60,7 +60,7 @@ For this reason your tasks run-time should not exceed the timeout.
             # memcache delete is very slow, but we have to use it to take
             # advantage of using add() for atomic locking
             if monotonic() < timeout_at:
-                # do not release the lock if we exceeded the timeout
+                # don't release the lock if we exceeded the timeout
                 # to lessen the chance of releasing an expired lock
                 # owned by someone else.
                 cache.delete(lock_id)
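Pieced together from the fragments above, the locking pattern looks roughly
like the sketch below. It isn't the full recipe: names such as ``feed_url``
and the five-minute expiry are illustrative, and ``time.monotonic`` stands in
for the monotonic clock helper the cookbook imports.

.. code-block:: python

    import time
    from contextlib import contextmanager
    from hashlib import md5

    from django.core.cache import cache

    LOCK_EXPIRE = 60 * 5  # illustrative: lock expires after five minutes

    @contextmanager
    def feed_lock(feed_url):
        lock_id = 'lock-{0}'.format(md5(feed_url.encode('utf-8')).hexdigest())
        timeout_at = time.monotonic() + LOCK_EXPIRE - 3
        # cache.add() is atomic: it only succeeds if the key is absent,
        # which is what makes it usable as a lock.
        acquired = cache.add(lock_id, 'true', LOCK_EXPIRE)
        try:
            yield acquired
        finally:
            if acquired and time.monotonic() < timeout_at:
                # don't release the lock if we exceeded the timeout, to
                # lessen the chance of deleting a lock that has expired
                # and is now owned by another worker.
                cache.delete(lock_id)
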

+ 18 - 18
docs/userguide/application.rst

@@ -32,10 +32,10 @@ current main module (``__main__``), and the memory address of the object
 Main Name
 =========

-Only one of these is important, and that is the main module name.
+Only one of these is important, and that's the main module name.
 Let's look at why that is.

-When you send a task message in Celery, that message will not contain
+When you send a task message in Celery, that message won't contain
 any source code, but only the name of the task you want to execute.
 This works similarly to how host names work on the internet: every worker
 maintains a mapping of task names to their actual functions, called the *task
@@ -58,7 +58,7 @@ Whenever you define a task, that task will also be added to the local registry:
     >>> app.tasks['__main__.add']
     <@task: __main__.add>

-and there you see that ``__main__`` again; whenever Celery is not able
+and there you see that ``__main__`` again; whenever Celery isn't able
 to detect what module the function belongs to, it uses the main module
 name to generate the beginning of the task name.

@@ -113,8 +113,8 @@ You can specify another name for the main module:
 Configuration
 =============

-There are several options you can set that will change how
-Celery works.  These options can be set directly on the app instance,
+There are several options you can set that'll change how
+Celery works. These options can be set directly on the app instance,
 or you can use a dedicated configuration module.

 The configuration is available as :attr:`@conf`:
@@ -163,7 +163,7 @@ from a configuration object.
 This can be a configuration module, or any object with configuration attributes.

 Note that any configuration that was previously set will be reset when
-:meth:`~@config_from_object` is called.  If you want to set additional
+:meth:`~@config_from_object` is called. If you want to set additional
 configuration you should do so after.

 Example 1: Using the name of a module
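The example body sits outside this hunk; a minimal sketch of the module-name
form (``celeryconfig`` is a placeholder for your own settings module):

.. code-block:: python

    from celery import Celery

    app = Celery()
    # The module is imported lazily, by name, so it doesn't have to be
    # serialized when the prefork pool forks worker processes.
    app.config_from_object('celeryconfig')
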
@@ -197,12 +197,12 @@ Example 2: Passing an actual module object
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 You can also pass an already imported module object, but this
-is not always recommended.
+isn't always recommended.

 .. tip::

     Using the name of a module is recommended as this means the module does
-    not need to be serialized when the prefork pool is used.  If you're
+    not need to be serialized when the prefork pool is used. If you're
     experiencing configuration problems or pickle errors then please
     try using the name of a module instead.

@@ -275,7 +275,7 @@ one is :meth:`~celery.app.utils.Settings.humanize`:
 
     >>> app.conf.humanize(with_defaults=False, censored=True)
 
-This method returns the configuration as a tabulated string.  This will
+This method returns the configuration as a tabulated string. This will
 only contain changes to the configuration by default, but you can include the
 built-in default keys and values by enabling the ``with_defaults`` argument.
 
@@ -286,7 +286,7 @@ can use the :meth:`~celery.app.utils.Settings.table` method:
 
     >>> app.conf.table(with_defaults=False, censored=True)
 
-Please note that Celery will not be able to remove all sensitive information,
+Please note that Celery won't be able to remove all sensitive information,
 as it merely uses a regular expression to search for commonly named keys.
 If you add custom settings containing sensitive information you should name
 the keys using a name that Celery identifies as secret.
@@ -299,7 +299,7 @@ these sub-strings:
 Laziness
 ========
 
-The application instance is lazy, meaning it will not be evaluated
+The application instance is lazy, meaning it won't be evaluated
 until it's actually needed.
 
 Creating a :class:`@Celery` instance will only do the following:
@@ -310,12 +310,12 @@ Creating a :class:`@Celery` instance will only do the following:
        argument was disabled)
     #. Call the :meth:`@on_init` callback (does nothing by default).
 
-The :meth:`@task` decorators do not create the tasks at the point when
-the task is defined, instead it will defer the creation
+The :meth:`@task` decorators don't create the tasks at the point when
+the task is defined, instead it'll defer the creation
 of the task to happen either when the task is used, or after the
 application has been *finalized*.
 
-This example shows how the task is not created until
+This example shows how the task isn't created until
 you use the task, or access an attribute (in this case :meth:`repr`):
 
 .. code-block:: pycon
@@ -359,15 +359,15 @@ Finalizing the object will:
 
 .. topic:: The "default app".
 
-    Celery did not always have applications, it used to be that
+    Celery didn't always have applications, it used to be that
     there was only a module-based API, and for backwards compatibility
     the old API is still there until the release of Celery 5.0.
 
-    Celery always creates a special app that is the "default app",
+    Celery always creates a special app - the "default app",
    and this is used if no custom application has been instantiated.
 
    The :mod:`celery.task` module is there to accommodate the old API,
-    and should not be used if you use a custom app. You should
+    and shouldn't be used if you use a custom app. You should
    always use the methods on the app instance, not the module based API.
 
    For example, the old Task base class enables many compatibility
@@ -516,7 +516,7 @@ class: :class:`celery.Task`.
     default request used when a task is called directly.
 
 The neutral base class is special because it's not bound to any specific app
-yet.  Once a task is bound to an app it will read configuration to set default
+yet. Once a task is bound to an app it'll read configuration to set default
 values and so on.
 
 To realize a base class you need to create a task using the :meth:`@task`

+ 21 - 21
docs/userguide/calling.rst

@@ -25,14 +25,14 @@ The API defines a standard set of execution options, as well as three methods:
 
     - ``delay(*args, **kwargs)``
 
-        Shortcut to send a task message, but does not support execution
+        Shortcut to send a task message, but doesn't support execution
         options.
 
     - *calling* (``__call__``)
 
         Applying an object supporting the calling API (e.g. ``add(2, 2)``)
         means that the task will be executed in the current process, and
-        not by a worker (a message will not be sent).
+        not by a worker (a message won't be sent).
 
 .. _calling-cheat:
 
@@ -75,7 +75,7 @@ Using :meth:`~@Task.apply_async` instead you have to write:
 
 .. sidebar:: Tip
 
-    If the task is not registered in the current process
+    If the task isn't registered in the current process
     you can use :meth:`~@send_task` to call the task by name instead.
 

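A minimal sketch of that tip (the task name ``tasks.add`` is an assumption):

.. code-block:: python

    # call a task by name; returns an AsyncResult you can query
    result = app.send_task('tasks.add', args=(2, 2))
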
@@ -83,7 +83,7 @@ So `delay` is clearly convenient, but if you want to set additional execution
 options you have to use ``apply_async``.
 
 The rest of this document will go into the task execution
-options in detail.  All examples use a task
+options in detail. All examples use a task
 called `add`, returning the sum of two arguments:
 
 .. code-block:: python
@@ -95,7 +95,7 @@ called `add`, returning the sum of two arguments:
 
 .. topic:: There's another way…
 
-    You will learn more about this later while reading about the :ref:`Canvas
+    You'll learn more about this later while reading about the :ref:`Canvas
     <guide-canvas>`, but :class:`~celery.signature` objects are used to pass around
     the signature of a task invocation (for example, to send it over the
     network), and they also support the Calling API:
@@ -117,7 +117,7 @@ as a partial argument:
 
     add.apply_async((2, 2), link=add.s(16))
 
-.. sidebar:: What is ``s``?
+.. sidebar:: What's ``s``?
 
     The ``add.s`` call used here is called a signature; I talk
     more about signatures in the :ref:`canvas guide <guide-canvas>`,
@@ -125,8 +125,8 @@ as a partial argument:
     is a simpler way to chain tasks together.
 
     In practice the ``link`` execution option is considered an internal
-    primitive, and you will probably not use it directly, but
-    rather use chains instead.
+    primitive, and you'll probably not use it directly, but
+    use chains instead.
 
 Here the result of the first task (4) will be sent to a new
 task that adds 16 to the previous result, forming the expression
@@ -177,7 +177,7 @@ ETA and countdown
 =================
 
 The ETA (estimated time of arrival) lets you set a specific date and time that
-is the earliest time at which your task will be executed.  `countdown` is
+is the earliest time at which your task will be executed. `countdown` is
 a shortcut to set eta by seconds into the future.
 
 .. code-block:: pycon
@@ -189,10 +189,10 @@ a shortcut to set eta by seconds into the future.
 The task is guaranteed to be executed at some time *after* the
 specified date and time, but not necessarily at that exact time.
 Possible reasons for broken deadlines may include many items waiting
-in the queue, or heavy network latency.  To make sure your tasks
+in the queue, or heavy network latency. To make sure your tasks
 are executed in a timely manner you should monitor the queue for congestion. Use
 Munin, or similar tools, to receive alerts, so appropriate action can be
-taken to ease the workload.  See :ref:`monitoring-munin`.
+taken to ease the workload. See :ref:`monitoring-munin`.
 
 While `countdown` is an integer, `eta` must be a :class:`~datetime.datetime`
 object, specifying an exact date and time (including millisecond precision,
@@ -269,18 +269,18 @@ and can contain the following keys:
 - `interval_start`
 
     Defines the number of seconds (float or integer) to wait between
-    retries.  Default is 0, which means the first retry will be
+    retries. Default is 0, which means the first retry will be
     instantaneous.
 
 - `interval_step`
 
     On each consecutive retry this number will be added to the retry
-    delay (float or integer).  Default is 0.2.
+    delay (float or integer). Default is 0.2.
 
 - `interval_max`
 
     Maximum number of seconds (float or integer) to wait between
-    retries.  Default is 0.2.
+    retries. Default is 0.2.
 
 For example, the default policy correlates to:
 
@@ -293,7 +293,7 @@ For example, the default policy correlates to:
         'interval_max': 0.2,
     })
 
-the maximum time spent retrying will be 0.4 seconds.  It is set relatively
+the maximum time spent retrying will be 0.4 seconds. It's set relatively
 short by default because a connection failure could lead to a retry pile effect
 if the broker connection is down: e.g. many web server processes waiting
 to retry blocking other incoming requests.
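To make the arithmetic concrete, here's a sketch that pins the default
policy per call (the numbers mirror the defaults above):

.. code-block:: python

    add.apply_async((2, 2), retry=True, retry_policy={
        'max_retries': 3,
        'interval_start': 0,   # first retry is instantaneous
        'interval_step': 0.2,  # each further retry adds 0.2s
        'interval_max': 0.2,   # capped at 0.2s between retries
    })
    # waits of 0s, 0.2s and 0.2s: at most 0.4 seconds spent retrying
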
@@ -358,7 +358,7 @@ pickle -- If you have no desire to support any language other than
 
 yaml -- YAML has many of the same characteristics as json,
     except that it natively supports more data types (including dates,
-    recursive references, etc.)
+    recursive references, etc.).
 
     However, the Python libraries for YAML are a good bit slower than the
     libraries for JSON.
@@ -368,14 +368,14 @@ yaml -- YAML has many of the same characteristics as json,
 
     See http://yaml.org/ for more information.
 
-msgpack -- msgpack is a binary serialization format that is closer to JSON
-    in features.  It is very young however, and support should be considered
+msgpack -- msgpack is a binary serialization format that's closer to JSON
+    in features. It's very young however, and support should be considered
     experimental at this point.
 
     See http://msgpack.org/ for more information.
 
 The encoding used is available as a message header, so the worker knows how to
-deserialize any task.  If you use a custom serializer, this serializer must
+deserialize any task. If you use a custom serializer, this serializer must
 be available for the worker.
 
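As a brief sketch, the serializer can also be picked per call, using the
execution option covered in the order below:

.. code-block:: python

    add.apply_async((2, 2), serializer='json')
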
 The following order is used to decide which serializer
@@ -419,7 +419,7 @@ Connections
 
 .. sidebar:: Automatic Pool Support
 
-    Since version 2.3 there is support for automatic connection pools,
+    Since version 2.3 there's support for automatic connection pools,
     so you don't have to manually handle connections and publishers
     to reuse connections.
 
@@ -475,7 +475,7 @@ the workers :option:`-Q <celery worker -Q>` argument:
 
 .. seealso::
 
-    Hard-coding queue names in code is not recommended, the best practice
+    Hard-coding queue names in code isn't recommended, the best practice
     is to use configuration routers (:setting:`task_routes`).
 
     To find out more about routing, please see :ref:`guide-routing`.
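A minimal sketch of that practice (the task path and queue name are
assumptions):

.. code-block:: python

    app.conf.task_routes = {'proj.tasks.add': {'queue': 'hipri'}}
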

+ 20 - 20
docs/userguide/canvas.rst

@@ -44,7 +44,7 @@ or even serialized and sent across the wire.
         >>> add.signature((2, 2), countdown=10)
         tasks.add(2, 2)
 
-- There is also a shortcut using star arguments:
+- There's also a shortcut using star arguments:
 
     .. code-block:: pycon
 
@@ -171,7 +171,7 @@ Immutability
 
 Partials are meant to be used with callbacks; any tasks linked or chord
 callbacks will be applied with the result of the parent task.
-Sometimes you want to specify a callback that does not take
+Sometimes you want to specify a callback that doesn't take
 additional arguments, and in that case you can set the signature
 to be immutable:
 
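A short sketch of an immutable callback -- ``cleanup`` is a hypothetical
task, and ``.si()`` is the immutable-signature shortcut:

.. code-block:: python

    # cleanup.si() ignores the result of the parent task
    add.apply_async((2, 2), link=cleanup.si())
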
@@ -265,7 +265,7 @@ The Primitives
 
     - ``chord``
 
-        A chord is just like a group but with a callback.  A chord consists
+        A chord is just like a group but with a callback. A chord consists
        of a header group and a body, where the body is a task that should execute
        after all of the tasks in the header are complete.
 
@@ -392,7 +392,7 @@ Here's some examples:
     into a list and sent to the ``xsum`` task.
 
     The body of a chord can also be immutable, so that the return value
-    of the group is not passed on to the callback:
+    of the group isn't passed on to the callback:
 
     .. code-block:: pycon
 
@@ -515,8 +515,8 @@ the results:
      (<AsyncResult: 8c350acf-519d-4553-8a53-4ad3a5c5aeb4>, 64)]
 
 By default :meth:`~@AsyncResult.collect` will raise an
-:exc:`~@IncompleteStream` exception if the graph is not fully
-formed (one of the tasks has not completed yet),
+:exc:`~@IncompleteStream` exception if the graph isn't fully
+formed (one of the tasks hasn't completed yet),
 but you can get an intermediate representation of the graph
 too:
 
@@ -547,7 +547,7 @@ is applied:
 
     >>> add.apply_async((2, 2), link_error=log_error.s())
 
-The worker will not actually call the errback as a task, but will
+The worker won't actually call the errback as a task, but will
 instead call the errback function directly so that the raw request, exception
 and traceback objects can be passed to it.
 
@@ -567,7 +567,7 @@ Here's an example errback:
             print('--\n\n{0} {1} {2}'.format(
                 task_id, exc, traceback), file=fh)
 
-To make it even easier to link tasks together there is
+To make it even easier to link tasks together there's
 a special signature called :class:`~celery.chain` that lets
 you chain tasks together:
 
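For instance, a sketch chaining three ``add`` calls, computing
``(2 + 2) + 4 + 8``:

.. code-block:: pycon

    >>> from celery import chain
    >>> res = chain(add.s(2, 2), add.s(4), add.s(8))()
    >>> res.get()
    16
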
@@ -722,7 +722,7 @@ It supports the following operations:
 * :meth:`~celery.result.GroupResult.successful`
 
     Return :const:`True` if all of the subtasks finished
-    successfully (e.g. did not raise an exception).
+    successfully (e.g. didn't raise an exception).
 
 * :meth:`~celery.result.GroupResult.failed`
 
@@ -731,7 +731,7 @@ It supports the following operations:
 * :meth:`~celery.result.GroupResult.waiting`
 
     Return :const:`True` if any of the subtasks
-    is not ready yet.
+    isn't ready yet.
 
 * :meth:`~celery.result.GroupResult.ready`
 
@@ -822,9 +822,9 @@ Let's break the chord expression down:
     9900
 
 Remember, the callback can only be executed after all of the tasks in the
-header have returned.  Each step in the header is executed as a task, in
-parallel, possibly on different nodes.  The callback is then applied with
-the return value of each task in the header.  The task id returned by
+header have returned. Each step in the header is executed as a task, in
+parallel, possibly on different nodes. The callback is then applied with
+the return value of each task in the header. The task id returned by
 :meth:`chord` is the id of the callback, so you can wait for it to complete
 and get the final return value (but remember to :ref:`never have a task wait
 for other tasks <task-synchronous-subtasks>`).
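Putting the pieces together -- a sketch using the ``add`` and ``xsum`` tasks
referenced above:

.. code-block:: pycon

    >>> from celery import chord
    >>> res = chord(add.s(i, i) for i in range(100))(xsum.s())
    >>> res.get()
    9900
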
@@ -858,13 +858,13 @@ to the :exc:`~@ChordError` exception:
 
 While the traceback may be different depending on which result backend is
 being used, you can see the error description includes the id of the task that failed
-and a string representation of the original exception.  You can also
+and a string representation of the original exception. You can also
 find the original traceback in ``result.traceback``.
 
 Note that the rest of the tasks will still execute, so the third task
 (``add.s(8, 8)``) is still executed even though the middle task failed.
 Also the :exc:`~@ChordError` only shows the task that failed
-first (in time): it does not respect the ordering of the header group.
+first (in time): it doesn't respect the ordering of the header group.
 
 To perform an action when a chord fails you can therefore attach
 an errback to the chord callback:
@@ -927,8 +927,8 @@ Example implementation:
 
 This is used by all result backends except Redis and Memcached, which
 increment a counter after each task in the header, then apply the callback
-when the counter exceeds the number of tasks in the set. *Note:* chords do not
-properly work with Redis before version 2.2; you will need to upgrade to at
+when the counter exceeds the number of tasks in the set. *Note:* chords don't
+properly work with Redis before version 2.2; you'll need to upgrade to at
 least 2.2 to use them.
 
 The Redis and Memcached approach is a much better solution, but not easily
@@ -937,9 +937,9 @@ implemented in other backends (suggestions welcome!).
 
 .. note::
 
-    If you are using chords with the Redis result backend and also overriding
+    If you're using chords with the Redis result backend and also overriding
     the :meth:`Task.after_return` method, you need to make sure to call the
-    super method or else the chord callback will not be applied.
+    super method or else the chord callback won't be applied.
 
     .. code-block:: python
 
@@ -1012,7 +1012,7 @@ thousand objects each.
 
 Some may worry that chunking your tasks results in a degradation
 of parallelism, but this is rarely true for a busy cluster
-and in practice since you are avoiding the overhead  of messaging
+and in practice since you're avoiding the overhead of messaging
 it may considerably increase performance.
 
 To create a chunks signature you can use :meth:`@Task.chunks`:
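A sketch: 100 ``(i, i)`` pairs split into 10 chunks of 10 ``add`` calls each:

.. code-block:: pycon

    >>> res = add.chunks(zip(range(100), range(100)), 10)()
    >>> res.get()
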

+ 7 - 7
docs/userguide/concurrency/eventlet.rst

@@ -16,23 +16,23 @@ change how you run your code, not how you write it.
     * It uses `epoll(4)`_ or `libevent`_ for
       `highly scalable non-blocking I/O`_.
     * `Coroutines`_ ensure that the developer uses a blocking style of
-      programming that is similar to threading, but provide the benefits of
+      programming that's similar to threading, but provide the benefits of
       non-blocking I/O.
     * The event dispatch is implicit, which means you can easily use Eventlet
       from the Python interpreter, or as a small part of a larger application.
 
 Celery supports Eventlet as an alternative execution pool implementation.
-It is in some cases superior to prefork, but you need to ensure
-your tasks do not perform blocking calls, as this will halt all
+It's in some cases superior to prefork, but you need to ensure
+your tasks don't perform blocking calls, as this will halt all
 other operations in the worker until the blocking call returns.
 
 The prefork pool can make use of multiple processes, but how many is
-often limited to a few processes per CPU.  With Eventlet you can efficiently
-spawn hundreds, or thousands of green threads.  In an informal test with a
+often limited to a few processes per CPU. With Eventlet you can efficiently
+spawn hundreds, or thousands of green threads. In an informal test with a
 feed hub system the Eventlet pool could fetch and process hundreds of feeds
 every second, while the prefork pool spent 14 seconds processing 100
-feeds.  Note that this is one of the applications async I/O is especially good
-at (asynchronous HTTP requests).  You may want a mix of both Eventlet and
+feeds. Note that this is one of the applications async I/O is especially good
+at (asynchronous HTTP requests). You may want a mix of both Eventlet and
 prefork workers, and route tasks according to compatibility or
 what works best.
 
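To try it, a sketch of starting a worker with the Eventlet pool and a high
green-thread count (``proj`` is a placeholder app name):

.. code-block:: console

    $ celery -A proj worker -P eventlet -c 1000
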

+ 59 - 59
docs/userguide/configuration.rst

@@ -7,7 +7,7 @@
 This document describes the configuration options available.
 
 If you're using the default loader, you must create the :file:`celeryconfig.py`
-module and make sure it is available on the Python path.
+module and make sure it's available on the Python path.
 
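For orientation, a minimal sketch of such a module (every value shown is an
assumption):

.. code-block:: python

    # celeryconfig.py
    broker_url = 'amqp://guest@localhost//'
    result_backend = 'rpc://'
    task_serializer = 'json'
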
 .. contents::
     :local:
@@ -47,7 +47,7 @@ names, are the renaming of some prefixes, like ``celerybeat_`` to ``beat_``,
 ``celeryd_`` to ``worker_``, and most of the top level ``celery_`` settings
 have been moved into a new ``task_`` prefix.
 
-Celery will still be able to read old configuration files, so there is no
+Celery will still be able to read old configuration files, so there's no
 rush in moving to the new settings format.
 
 =====================================  ==============================================
@@ -170,11 +170,11 @@ General settings
 
 A white-list of content-types/serializers to allow.
 
-If a message is received that is not in this list then
+If a message is received that's not in this list then
 the message will be discarded with an error.
 
 By default any content type is enabled (including pickle and yaml)
-so make sure untrusted parties do not have access to your broker.
+so make sure untrusted parties don't have access to your broker.
 See :ref:`guide-security` for more.
 
 Example::
@@ -213,8 +213,8 @@ Configure Celery to use a custom time zone.
 The timezone value can be any time zone supported by the `pytz`_
 library.
 
-If not set the UTC timezone is used.  For backwards compatibility
-there is also a :setting:`enable_utc` setting, and this is set
+If not set the UTC timezone is used. For backwards compatibility
+there's also a :setting:`enable_utc` setting, and when this is set
 to false the system local timezone is used instead.
 
 .. _`pytz`: http://pypi.python.org/pypi/pytz/
@@ -230,7 +230,7 @@ Task settings
 ~~~~~~~~~~~~~~~~~~~~
 
 This setting can be used to rewrite any task attribute from the
-configuration.  The setting can be a dict, or a list of annotation
+configuration. The setting can be a dict, or a list of annotation
 objects that filter for tasks and return a map of attributes
 to change.
 
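A sketch of the dict form (the task name and rate limit are assumptions):

.. code-block:: python

    task_annotations = {'tasks.add': {'rate_limit': '10/s'}}
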
@@ -299,7 +299,7 @@ Default is 2 since 4.0.0.
 ``task_serializer``
 ~~~~~~~~~~~~~~~~~~~
 
-A string identifying the default serialization method to use.  Can be
+A string identifying the default serialization method to use. Can be
 `pickle` (default), `json`, `yaml`, `msgpack` or any custom serialization
 methods that have been registered with :mod:`kombu.serialization.registry`.
 
@@ -343,7 +343,7 @@ Task execution settings
 ~~~~~~~~~~~~~~~~~~~~~
 
 If this is :const:`True`, all tasks will be executed locally by blocking until
-the task returns.  ``apply_async()`` and ``Task.delay()`` will return
+the task returns. ``apply_async()`` and ``Task.delay()`` will return
 an :class:`~celery.result.EagerResult` instance, which emulates the API
 and behavior of :class:`~celery.result.AsyncResult`, except the result
 is already evaluated.
@@ -400,10 +400,10 @@ If set, the worker stores all task errors in the result store even if
 ~~~~~~~~~~~~~~~~~~~~~~
 
 If :const:`True` the task will report its status as 'started' when the
-task is executed by a worker.  The default value is :const:`False` as
-the normal behavior is to not report that level of granularity.  Tasks
-are either pending, finished, or waiting to be retried.  Having a 'started'
-state can be useful for when there are long running tasks and there is a
+task is executed by a worker. The default value is :const:`False` as
+the normal behavior is to not report that level of granularity. Tasks
+are either pending, finished, or waiting to be retried. Having a 'started'
+state can be useful for when there are long running tasks and there's a
 need to report which task is currently running.
 
 .. setting:: task_time_limit
@@ -411,7 +411,7 @@ need to report which task is currently running.
 ``task_time_limit``
 ~~~~~~~~~~~~~~~~~~~
 
-Task hard time limit in seconds.  The worker processing the task will
+Task hard time limit in seconds. The worker processing the task will
 be killed and replaced with a new one when this is exceeded.
 
 .. setting:: task_soft_time_limit
@@ -422,7 +422,7 @@ be killed and replaced with a new one when this is exceeded.
 Task soft time limit in seconds.
 
 The :exc:`~@SoftTimeLimitExceeded` exception will be
-raised when this is exceeded.  The task can catch this to
+raised when this is exceeded. The task can catch this to
 e.g. clean up before the hard time limit comes.
 
 Example:
@@ -475,7 +475,7 @@ worker.
 
 The global default rate limit for tasks.
 
-This value is used for tasks that does not have a custom rate limit
+This value is used for tasks that don't have a custom rate limit.
 The default is no rate limit.
 
 .. seealso::
@@ -544,7 +544,7 @@ Can be one of the following:
 .. warning::
 
     While the AMQP result backend is very efficient, you must make sure
-    you only receive the same result once.  See :doc:`userguide/calling`).
+    you only receive the same result once. See :doc:`userguide/calling`.
 
 .. _`SQLAlchemy`: http://sqlalchemy.org
 .. _`Memcached`: http://memcached.org
@@ -561,7 +561,7 @@ Can be one of the following:
 ``result_serializer``
 ~~~~~~~~~~~~~~~~~~~~~
 
-Result serialization format.  Default is ``pickle``. See
+Result serialization format. Default is ``pickle``. See
 :ref:`calling-serializers` for information about supported
 serialization formats.
 
@@ -585,7 +585,7 @@ stored task tombstones will be deleted.
 
 A built-in periodic task will delete the results after this time
 (``celery.backend_cleanup``), assuming that ``celery beat`` is
-enabled.  The task runs daily at 4am.
+enabled. The task runs daily at 4am.
 
 A value of :const:`None` or 0 means results will never expire (depending
 on backend specifications).
@@ -681,12 +681,12 @@ the :setting:`sqlalchmey_engine_options` setting::
 ``sqlalchemy_short_lived_sessions``
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Short lived sessions are disabled by default.  If enabled they can drastically reduce
-performance, especially on systems processing lots of tasks.  This option is useful
+Short lived sessions are disabled by default. If enabled they can drastically reduce
+performance, especially on systems processing lots of tasks. This option is useful
 on low-traffic workers that experience errors as a result of cached database connections
-going stale through inactivity.  For example, intermittent errors like
+going stale through inactivity. For example, intermittent errors like
 `(OperationalError) (2006, 'MySQL server has gone away')` can be fixed by enabling
-short lived sessions.  This option only affects the database backend.
+short lived sessions. This option only affects the database backend.
 
 .. setting:: sqlalchemy_table_names
 
@@ -694,7 +694,7 @@ short lived sessions.  This option only affects the database backend.
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 When SQLAlchemy is configured as the result backend, Celery automatically
-creates two tables to store result meta-data for tasks.  This setting allows
+creates two tables to store result meta-data for tasks. This setting allows
 you to customize the table names:
 
 .. code-block:: python
@@ -715,14 +715,14 @@ RPC backend settings
 ``result_exchange``
 ~~~~~~~~~~~~~~~~~~~
 
-Name of the exchange to publish results in.  Default is `celeryresults`.
+Name of the exchange to publish results in. Default is `celeryresults`.
 
 .. setting:: result_exchange_type
 
 ``result_exchange_type``
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-The exchange type of the result exchange.  Default is to use a `direct`
+The exchange type of the result exchange. Default is to use a `direct`
 exchange.
 
 .. setting:: result_persistent
@@ -730,8 +730,8 @@ exchange.
 ``result_persistent``
 ~~~~~~~~~~~~~~~~~~~~~
 
-If set to :const:`True`, result messages will be persistent.  This means the
-messages will not be lost after a broker restart.  The default is for the
+If set to :const:`True`, result messages will be persistent. This means the
+messages won't be lost after a broker restart. The default is for the
 results to be transient.
 
 Example configuration
@@ -750,7 +750,7 @@ Cache backend settings
 .. note::
 
     The cache backend supports the :pypi:`pylibmc` and `python-memcached`
-    libraries.  The latter is used only if :pypi:`pylibmc` is not installed.
+    libraries. The latter is used only if :pypi:`pylibmc` isn't installed.
 
 Using a single Memcached server:
 
@@ -947,7 +947,7 @@ after adding. Default (None) means they will never expire.
 ``cassandra_auth_provider``
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-AuthProvider class within ``cassandra.auth`` module to use.  Values can be
+AuthProvider class within ``cassandra.auth`` module to use. Values can be
 ``PlainTextAuthProvider`` or ``SaslAuthProvider``.
 
 .. setting:: cassandra_auth_kwargs
@@ -1058,7 +1058,7 @@ This is a dict supporting the following keys:
 
 * ``protocol``
 
-    The protocol to use to connect to the Riak server. This is not configurable
+    The protocol to use to connect to the Riak server. This isn't configurable
     via :setting:`result_backend`.
 
 .. _conf-ironcache-result-backend:
@@ -1194,7 +1194,7 @@ This backend can be configured using a file URL, for example::
 The configured directory needs to be shared and writable by all servers using
 the backend.
 
-If you are trying Celery on a single system you can simply use the backend
+If you're trying Celery on a single system you can simply use the backend
 without any further configuration. For larger clusters you could use NFS,
 `GlusterFS`_, CIFS, `HDFS`_ (using FUSE) or any other file-system.
 
@@ -1413,7 +1413,7 @@ as the routing key and the ``C.dq`` exchange::
 ``task_create_missing_queues``
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-If enabled (default), any queues specified that are not defined in
+If enabled (default), any queues specified that aren't defined in
 :setting:`task_queues` will be automatically created. See
 :ref:`routing-automatic`.
 
@@ -1426,7 +1426,7 @@ The name of the default queue used by `.apply_async` if the message has
 no route or no custom queue has been specified.
 
 This queue must be listed in :setting:`task_queues`.
-If :setting:`task_queues` is not specified then it is automatically
+If :setting:`task_queues` isn't specified then it's automatically
 created containing one queue entry, where this name is used as the name of
 that queue.
 
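A one-line sketch of overriding it (the queue name is an assumption):

.. code-block:: python

    app.conf.task_default_queue = 'default'
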
@@ -1470,7 +1470,7 @@ The default is: `celery`.
 ``task_default_delivery_mode``
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Can be `transient` or `persistent`.  The default is to send
+Can be `transient` or `persistent`. The default is to send
 persistent messages.
 
 .. _conf-broker-settings:
@@ -1483,7 +1483,7 @@ Broker Settings
 ``broker_url``
 ~~~~~~~~~~~~~~
 
-Default broker URL.  This must be a URL in the form of::
+Default broker URL. This must be a URL in the form of::
 
     transport://userid:password@hostname:port/virtual_host
 
@@ -1492,13 +1492,13 @@ is optional, and defaults to the specific transports default values.
 
 The transport part is the broker implementation to use, and the
 default is ``amqp``, which uses ``librabbitmq`` by default or falls back to
-``pyamqp`` if that is not installed.  Also there are many other choices including
+``pyamqp`` if that's not installed. Also there are many other choices including
 ``redis``, ``beanstalk``, ``sqlalchemy``, ``django``, ``mongodb``,
 ``couchdb``.
 It can also be a fully qualified path to your own transport implementation.
 
 More than one broker URL, of the same transport, can also be specified.
-The broker URLs can be passed in as a single string that is semicolon delimited::
+The broker URLs can be passed in as a single string that's semicolon delimited::
 
     broker_url = 'transport://userid:password@hostname:port//;transport://userid:password@hostname:port//'
 
@@ -1579,8 +1579,8 @@ double the rate of the heartbeat value
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 :transports supported: ``pyamqp``
 
-At intervals the worker will monitor that the broker has not missed
-too many heartbeats.  The rate at which this is checked is calculated
+At intervals the worker will monitor that the broker hasn't missed
+too many heartbeats. The rate at which this is checked is calculated
 by dividing the :setting:`broker_heartbeat` value with this value,
 so if the heartbeat is 10.0 and the rate is the default 2.0, the check
 will be performed every 5 seconds (twice the heartbeat sending rate).
@@ -1618,8 +1618,8 @@ certificate authority:
 
 .. warning::
 
-    Be careful using ``broker_use_ssl=True``. It is possible that your default
-    configuration will not validate the server cert at all. Please read Python
+    Be careful using ``broker_use_ssl=True``. It's possible that your default
+    configuration won't validate the server cert at all. Please read Python
     `ssl module security
     considerations <https://docs.python.org/3/library/ssl.html#ssl-security>`_.
 
@@ -1633,8 +1633,8 @@ certificate authority:
 The maximum number of connections that can be open in the connection pool.
 
 The pool is enabled by default since version 2.5, with a default limit of ten
-connections.  This number can be tweaked depending on the number of
-threads/green-threads (eventlet/gevent) using a connection.  For example
+connections. This number can be tweaked depending on the number of
+threads/green-threads (eventlet/gevent) using a connection. For example
 when running eventlet with 1000 greenlets that use a connection to the broker,
 contention can arise and you should consider increasing the limit.
 
@@ -1649,7 +1649,7 @@ Default (since 2.5) is to use a pool of 10 connections.
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The default timeout in seconds before we give up establishing a connection
-to the AMQP server.  Default is 4 seconds. This setting is disabled when using
+to the AMQP server. Default is 4 seconds. This setting is disabled when using
 gevent.
 
 .. setting:: broker_connection_retry
@@ -1673,7 +1673,7 @@ This behavior is on by default.
 Maximum number of retries before we give up re-establishing a connection
 to the AMQP broker.
 
-If this is set to :const:`0` or :const:`None`, we will retry forever.
+If this is set to :const:`0` or :const:`None`, we'll retry forever.
 
 Default is 100 retries.
 
@@ -1753,11 +1753,11 @@ Defaults to the number of available CPUs.
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 How many messages to prefetch at a time multiplied by the number of
-concurrent processes.  The default is 4 (four messages for each
-process).  The default setting is usually a good choice, however -- if you
+concurrent processes. The default is 4 (four messages for each
+process). The default setting is usually a good choice, however -- if you
 have very long running tasks waiting in the queue and you have to start the
 workers, note that the first worker to start will receive four times the
-number of messages initially.  Thus the tasks may not be fairly distributed
+number of messages initially. Thus the tasks may not be fairly distributed
 to the workers.
 
 To disable prefetching, set :setting:`worker_prefetch_multiplier` to 1.
@@ -1768,7 +1768,7 @@ For more on prefetching, read :ref:`optimizing-prefetch-limit`
 
 .. note::
 
-    Tasks with ETA/countdown are not affected by prefetch limits.
+    Tasks with ETA/countdown aren't affected by prefetch limits.
 
 .. setting:: worker_lost_wait
 
@@ -1788,7 +1788,7 @@ Default is 10.0
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Maximum number of tasks a pool worker process can execute before
-it's replaced with a new one.  Default is no limit.
+it's replaced with a new one. Default is no limit.
 
 .. setting:: worker_max_memory_per_child
 
@@ -1827,7 +1827,7 @@ Not enabled by default.
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Set the maximum time in seconds that the ETA scheduler can sleep between
-rechecking the schedule.  Default is 1 second.
+rechecking the schedule. Default is 1 second.
 
 Setting this value to 1 second means the scheduler's precision will
 be 1 second. If you need near millisecond precision you can set this to 0.1.
@@ -1852,7 +1852,7 @@ Events
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Send task-related events so that tasks can be monitored using tools like
-`flower`.  Sets the default value for the workers
+`flower`. Sets the default value for the workers
 :option:`-E <celery worker -E>` argument.
 
 .. setting:: task_send_sent_event
@@ -1863,7 +1863,7 @@ Send task-related events so that tasks can be monitored using tools like
 .. versionadded:: 2.2
 
 If enabled, a :event:`task-sent` event will be sent for every task so tasks can be
-tracked before they are consumed by a worker.
+tracked before they're consumed by a worker.
 
 Disabled by default.
 
@@ -2027,7 +2027,7 @@ used to sign messages when :ref:`message-signing` is used.
 .. versionadded:: 2.5
 
 The directory containing X.509 certificates used for
-:ref:`message-signing`.  Can be a glob with wild-cards
+:ref:`message-signing`. Can be a glob with wild-cards
 (for example :file:`/etc/certs/*.pem`).
 
 .. _conf-custom-components:
@@ -2047,7 +2047,7 @@ Name of the pool class used by the worker.
     Never use this option to select the eventlet or gevent pool.
     You must use the :option:`-P <celery worker -P>` option to
     :program:`celery worker` instead, to ensure the monkey patches
-    are not applied too late, causing things to break in strange ways.
+    aren't applied too late, causing things to break in strange ways.
 
 Default is ``celery.concurrency.prefork:TaskPool``.
 
@@ -2096,7 +2096,7 @@ See :ref:`beat-entries`.
 ``beat_scheduler``
 ~~~~~~~~~~~~~~~~~~
 
-The default scheduler class.  Default is ``celery.beat:PersistentScheduler``.
+The default scheduler class. Default is ``celery.beat:PersistentScheduler``.
 
 Can also be set via the :option:`celery beat -S` argument.
 
@@ -2106,7 +2106,7 @@ Can also be set via the :option:`celery beat -S` argument.
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Name of the file used by `PersistentScheduler` to store the last run times
-of periodic tasks.  Can be a relative or absolute path, but be aware that the
+of periodic tasks. Can be a relative or absolute path, but be aware that the
 suffix `.db` may be appended to the file name (depending on Python version).
 
 Can also be set via the :option:`celery beat --schedule` argument.
@@ -2132,7 +2132,7 @@ between checking the schedule.
 
 The default for this value is scheduler specific.
 For the default celery beat scheduler the value is 300 (5 minutes),
-but for e.g. the :pypi:`django-celery` database scheduler it is 5 seconds
+but for e.g. the :pypi:`django-celery` database scheduler it's 5 seconds
 because the schedule may be changed externally, and so it must take
 changes to the schedule into account.
 

Some files were not shown because too many files changed in this diff