
[docs] Spelling stuff completed

Ask Solem, 9 years ago
parent
commit
aee706ad5a
100 changed files with 954 additions and 838 deletions
  1. CONTRIBUTING.rst (+50 -53)
  2. README.rst (+3 -3)
  3. celery/app/amqp.py (+3 -3)
  4. celery/app/annotations.py (+3 -3)
  5. celery/app/base.py (+11 -11)
  6. celery/app/builtins.py (+4 -3)
  7. celery/app/log.py (+1 -1)
  8. celery/app/task.py (+6 -6)
  9. celery/apps/worker.py (+1 -1)
  10. celery/backends/amqp.py (+2 -2)
  11. celery/backends/async.py (+3 -3)
  12. celery/backends/cache.py (+4 -4)
  13. celery/backends/cassandra.py (+2 -2)
  14. celery/backends/couchbase.py (+2 -2)
  15. celery/backends/couchdb.py (+2 -2)
  16. celery/backends/elasticsearch.py (+2 -2)
  17. celery/backends/filesystem.py (+5 -4)
  18. celery/backends/mongodb.py (+2 -2)
  19. celery/backends/redis.py (+4 -4)
  20. celery/backends/riak.py (+2 -2)
  21. celery/backends/rpc.py (+2 -2)
  22. celery/beat.py (+2 -2)
  23. celery/bin/amqp.py (+11 -9)
  24. celery/bin/base.py (+14 -4)
  25. celery/bin/celery.py (+9 -7)
  26. celery/bin/worker.py (+5 -4)
  27. celery/bootsteps.py (+1 -1)
  28. celery/concurrency/asynpool.py (+4 -4)
  29. celery/contrib/rdb.py (+9 -3)
  30. celery/contrib/sphinx.py (+2 -2)
  31. celery/datastructures.py (+21 -15)
  32. celery/events/__init__.py (+1 -1)
  33. celery/events/cursesmon.py (+2 -2)
  34. celery/exceptions.py (+1 -1)
  35. celery/fixups/django.py (+1 -1)
  36. celery/platforms.py (+8 -7)
  37. celery/schedules.py (+65 -63)
  38. celery/task/http.py (+5 -3)
  39. celery/tests/app/test_app.py (+0 -1)
  40. celery/utils/__init__.py (+1 -1)
  41. celery/utils/debug.py (+1 -1)
  42. celery/utils/dispatch/saferef.py (+1 -1)
  43. celery/utils/dispatch/signal.py (+1 -1)
  44. celery/utils/functional.py (+9 -4)
  45. celery/utils/iso8601.py (+9 -7)
  46. celery/utils/objects.py (+3 -3)
  47. celery/utils/saferepr.py (+2 -2)
  48. celery/utils/serialization.py (+1 -1)
  49. celery/utils/threads.py (+4 -4)
  50. celery/utils/timeutils.py (+20 -15)
  51. celery/worker/autoreload.py (+3 -2)
  52. docs/configuration.rst (+29 -25)
  53. docs/contributing.rst (+30 -26)
  54. docs/django/first-steps-with-django.rst (+2 -2)
  55. docs/faq.rst (+16 -13)
  56. docs/getting-started/brokers/couchdb.rst (+7 -3)
  57. docs/getting-started/brokers/index.rst (+1 -1)
  58. docs/getting-started/brokers/ironmq.rst (+34 -12)
  59. docs/getting-started/brokers/mongodb.rst (+7 -3)
  60. docs/getting-started/brokers/rabbitmq.rst (+6 -6)
  61. docs/getting-started/brokers/redis.rst (+6 -6)
  62. docs/getting-started/brokers/sqlalchemy.rst (+3 -1)
  63. docs/getting-started/brokers/sqs.rst (+4 -2)
  64. docs/getting-started/introduction.rst (+5 -5)
  65. docs/getting-started/next-steps.rst (+1 -1)
  66. docs/history/changelog-1.0.rst (+66 -56)
  67. docs/history/changelog-2.0.rst (+12 -12)
  68. docs/history/changelog-2.1.rst (+25 -25)
  69. docs/history/changelog-2.2.rst (+45 -41)
  70. docs/history/changelog-2.3.rst (+14 -14)
  71. docs/history/changelog-2.4.rst (+31 -19)
  72. docs/history/changelog-2.5.rst (+10 -8)
  73. docs/history/changelog-3.0.rst (+41 -41)
  74. docs/history/changelog-3.1.rst (+58 -54)
  75. docs/history/whatsnew-2.5.rst (+5 -5)
  76. docs/history/whatsnew-3.0.rst (+1 -1)
  77. docs/includes/installation.txt (+32 -32)
  78. docs/includes/resources.txt (+2 -2)
  79. docs/internals/app-overview.rst (+84 -87)
  80. docs/internals/deprecation.rst (+4 -4)
  81. docs/internals/guide.rst (+5 -5)
  82. docs/internals/protocol.rst (+20 -21)
  83. docs/internals/reference/celery._state.rst (+1 -1)
  84. docs/internals/reference/celery.app.annotations.rst (+1 -1)
  85. docs/internals/reference/celery.app.routes.rst (+1 -1)
  86. docs/internals/reference/celery.app.trace.rst (+1 -1)
  87. docs/internals/reference/celery.backends.amqp.rst (+1 -1)
  88. docs/internals/reference/celery.backends.async.rst (+1 -1)
  89. docs/internals/reference/celery.backends.base.rst (+1 -1)
  90. docs/internals/reference/celery.backends.cache.rst (+1 -1)
  91. docs/internals/reference/celery.backends.cassandra.rst (+1 -1)
  92. docs/internals/reference/celery.backends.couchbase.rst (+1 -1)
  93. docs/internals/reference/celery.backends.couchdb.rst (+1 -1)
  94. docs/internals/reference/celery.backends.database.models.rst (+1 -1)
  95. docs/internals/reference/celery.backends.database.rst (+1 -1)
  96. docs/internals/reference/celery.backends.database.session.rst (+1 -1)
  97. docs/internals/reference/celery.backends.elasticsearch.rst (+1 -1)
  98. docs/internals/reference/celery.backends.filesystem.rst (+1 -1)
  99. docs/internals/reference/celery.backends.mongodb.rst (+1 -1)
  100. docs/internals/reference/celery.backends.redis.rst (+1 -1)

+ 50 - 53
CONTRIBUTING.rst

@@ -209,10 +209,10 @@ spelling or other errors on the website/docs/code.
        * Enable celery's ``breakpoint_signal`` and use it
          to inspect the process's state.  This will allow you to open a
          ``pdb`` session.
-       * Collect tracing data using strace_(Linux), dtruss (OSX) and ktrace(BSD),
-         ltrace_ and lsof_.
+       * Collect tracing data using `strace`_(Linux), ``dtruss`` (OSX),
+         and ``ktrace`` (BSD), `ltrace`_ and `lsof`_.
 
-    D) Include the output from the `celery report` command:
+    D) Include the output from the ``celery report`` command:
         ::
 
             $ celery -A proj report
@@ -243,21 +243,21 @@ Issue Trackers
 Bugs for a package in the Celery ecosystem should be reported to the relevant
 issue tracker.
 
-* Celery: https://github.com/celery/celery/issues/
-* Kombu: https://github.com/celery/kombu/issues
-* pyamqp: https://github.com/celery/py-amqp/issues
-* vine: https://github.com/celery/vine/issues
-* librabbitmq: https://github.com/celery/librabbitmq/issues
-* Django-Celery: https://github.com/celery/django-celery/issues
+* ``celery``: https://github.com/celery/celery/issues/
+* ``kombu``: https://github.com/celery/kombu/issues
+* ``amqp``: https://github.com/celery/py-amqp/issues
+* ``vine``: https://github.com/celery/vine/issues
+* ``librabbitmq``: https://github.com/celery/librabbitmq/issues
+* ``django-celery``: https://github.com/celery/django-celery/issues
 
 If you are unsure of the origin of the bug you can ask the
 `mailing-list`_, or just use the Celery issue tracker.
 
-Contributors guide to the codebase
-==================================
+Contributors guide to the code base
+===================================
 
 There's a separate section for internal details,
-including details about the codebase and a style guide.
+including details about the code base and a style guide.
 
 Read `internals-guide`_ for more!
 
@@ -268,7 +268,7 @@ Versions
 
 Version numbers consists of a major version, minor version and a release number.
 Since version 2.1.0 we use the versioning semantics described by
-semver: http://semver.org.
+SemVer: http://semver.org.
 
 Stable releases are published at PyPI
 while development releases are only available in the GitHub git repository as tags.
@@ -398,7 +398,7 @@ Forking and setting up the repository
 -------------------------------------
 
 First you need to fork the Celery repository, a good introduction to this
-is in the Github Guide: `Fork a Repo`_.
+is in the GitHub Guide: `Fork a Repo`_.
 
 After you have cloned the repository you should checkout your copy
 to a directory on your machine:
@@ -423,7 +423,7 @@ always use the ``--rebase`` option to ``git pull``:
 With this option you don't clutter the history with merging
 commit notes. See `Rebasing merge commits in git`_.
 If you want to learn more about rebasing see the `Rebase`_
-section in the Github guides.
+section in the GitHub guides.
 
 If you need to work on a different branch than ``master`` you can
 fetch and checkout a remote branch like this::
@@ -496,7 +496,7 @@ When your feature/bugfix is complete you may want to submit
 a pull requests so that it can be reviewed by the maintainers.
 
 Creating pull requests is easy, and also let you track the progress
-of your contribution.  Read the `Pull Requests`_ section in the Github
+of your contribution.  Read the `Pull Requests`_ section in the GitHub
 Guide to learn how this is done.
 
 You can also attach pull requests to existing issues by following
@@ -711,14 +711,14 @@ is following the conventions.
 
     * Python standard library (`import xxx`)
     * Python standard library ('from xxx import`)
-    * Third party packages.
+    * Third-party packages.
     * Other modules from the current package.
 
     or in case of code using Django:
 
     * Python standard library (`import xxx`)
     * Python standard library ('from xxx import`)
-    * Third party packages.
+    * Third-party packages.
     * Django packages.
     * Other modules from the current package.
 
@@ -784,7 +784,7 @@ Some features like a new result backend may require additional libraries
 that the user must install.
 
 We use setuptools `extra_requires` for this, and all new optional features
-that require 3rd party libraries must be added.
+that require third-party libraries must be added.
 
 1) Add a new requirements file in `requirements/extras`
 
@@ -908,8 +908,8 @@ Jan Henrik Helmers
 Packages
 ========
 
-celery
-------
+``celery``
+----------
 
 :git: https://github.com/celery/celery
 :CI: http://travis-ci.org/#!/celery/celery
@@ -917,8 +917,8 @@ celery
 :PyPI: http://pypi.python.org/pypi/celery
 :docs: http://docs.celeryproject.org
 
-kombu
------
+``kombu``
+---------
 
 Messaging library.
 
@@ -928,8 +928,8 @@ Messaging library.
 :PyPI: http://pypi.python.org/pypi/kombu
 :docs: http://kombu.readthedocs.org
 
-amqp
-----
+``amqp``
+--------
 
 Python AMQP 0.9.1 client.
 
@@ -939,8 +939,8 @@ Python AMQP 0.9.1 client.
 :PyPI: http://pypi.python.org/pypi/amqp
 :docs: http://amqp.readthedocs.org
 
-vine
-----
+``vine``
+--------
 
 Promise/deferred implementation.
 
@@ -950,8 +950,8 @@ Promise/deferred implementation.
 :PyPI: http://pypi.python.org/pypi/vine
 :docs: http://vine.readthedocs.org
 
-billiard
---------
+``billiard``
+------------
 
 Fork of multiprocessing containing improvements
 that will eventually be merged into the Python stdlib.
@@ -961,24 +961,16 @@ that will eventually be merged into the Python stdlib.
 :Windows-CI: https://ci.appveyor.com/project/ask/billiard
 :PyPI: http://pypi.python.org/pypi/billiard
 
-librabbitmq
------------
+``librabbitmq``
+---------------
 
 Very fast Python AMQP client written in C.
 
 :git: https://github.com/celery/librabbitmq
 :PyPI: http://pypi.python.org/pypi/librabbitmq
 
-celerymon
----------
-
-Celery monitor web-service.
-
-:git: https://github.com/celery/celerymon
-:PyPI: http://pypi.python.org/pypi/celerymon
-
-django-celery
--------------
+``django-celery``
+-----------------
 
 Django <-> Celery Integration.
 
@@ -986,16 +978,16 @@ Django <-> Celery Integration.
 :PyPI: http://pypi.python.org/pypi/django-celery
 :docs: http://docs.celeryproject.org/en/latest/django
 
-cl
---
+``cell``
+--------
 
 Actor library.
 
-:git: https://github.com/celery/cl
-:PyPI: http://pypi.python.org/pypi/cl
+:git: https://github.com/celery/cell
+:PyPI: http://pypi.python.org/pypi/cell
 
-cyme
-----
+``cyme``
+--------
 
 Distributed Celery Instance manager.
 
@@ -1007,32 +999,37 @@ Distributed Celery Instance manager.
 Deprecated
 ----------
 
-- Flask-Celery
+- ``Flask-Celery``
 
 :git: https://github.com/ask/Flask-Celery
 :PyPI: http://pypi.python.org/pypi/Flask-Celery
 
-- carrot
+- ``celerymon``
+
+:git: https://github.com/celery/celerymon
+:PyPI: http://pypi.python.org/pypi/celerymon
+
+- ``carrot``
 
 :git: https://github.com/ask/carrot
 :PyPI: http://pypi.python.org/pypi/carrot
 
-- ghettoq
+- ``ghettoq``
 
 :git: https://github.com/ask/ghettoq
 :PyPI: http://pypi.python.org/pypi/ghettoq
 
-- kombu-sqlalchemy
+- ``kombu-sqlalchemy``
 
 :git: https://github.com/ask/kombu-sqlalchemy
 :PyPI: http://pypi.python.org/pypi/kombu-sqlalchemy
 
-- django-kombu
+- ``django-kombu``
 
 :git: https://github.com/ask/django-kombu
 :PyPI: http://pypi.python.org/pypi/django-kombu
 
-- pylibrabbitmq
+- ``pylibrabbitmq``
 
 Old name for ``librabbitmq``.
 

+ 3 - 3
README.rst

@@ -250,7 +250,7 @@ Serializers
 ~~~~~~~~~~~
 
 :celery[auth]:
-    for using the auth serializer.
+    for using the ``auth`` security serializer.
 
 :celery[msgpack]:
     for using the msgpack serializer.
@@ -419,10 +419,10 @@ http://wiki.github.com/celery/celery/
 Contributing
 ============
 
-Development of `celery` happens at Github: https://github.com/celery/celery
+Development of `celery` happens at GitHub: https://github.com/celery/celery
 
 You are highly encouraged to participate in the development
-of `celery`. If you don't like Github (for some reason) you're welcome
+of `celery`. If you don't like GitHub (for some reason) you're welcome
 to send regular patches.
 
 Be sure to also read the `Contributing to Celery`_ section in the

+ 3 - 3
celery/app/amqp.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-    celery.app.amqp
-    ~~~~~~~~~~~~~~~
+    ``celery.app.amqp``
+    ~~~~~~~~~~~~~~~~~~~
 
     Sending and receiving messages using Kombu.
 
@@ -236,7 +236,7 @@ class AMQP(object):
 
     # Exchange class/function used when defining automatic queues.
     # E.g. you can use ``autoexchange = lambda n: None`` to use the
-    # amqp default exchange, which is a shortcut to bypass routing
+    # AMQP default exchange, which is a shortcut to bypass routing
     # and instead send directly to the queue named in the routing key.
     autoexchange = None
 

+ 3 - 3
celery/app/annotations.py

@@ -1,9 +1,9 @@
 # -*- coding: utf-8 -*-
 """
-    celery.app.annotations
-    ~~~~~~~~~~~~~~~~~~~~~~
+    ``celery.app.annotations``
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-    Annotations is a nice term for moneky patching
+    Annotations is a nice term for monkey-patching
     task classes in the configuration.
 
     This prepares and performs the annotations in the

+ 11 - 11
celery/app/base.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-    celery.app.base
-    ~~~~~~~~~~~~~~~
+    ``celery.app.base``
+    ~~~~~~~~~~~~~~~~~~~
 
     Actual App instance implementation.
 
@@ -115,7 +115,7 @@ class Celery(object):
     """Celery application.
 
     :param main: Name of the main module if running as `__main__`.
-        This is used as the prefix for autogenerated task names.
+        This is used as the prefix for auto-generated task names.
 
     :keyword broker: URL of the default broker used.
     :keyword loader: The loader class, or the name of the loader class to use.
@@ -130,7 +130,7 @@ class Celery(object):
     :keyword set_as_current:  Make this the global current app.
     :keyword tasks: A task registry or the name of a registry class.
     :keyword include: List of modules every worker should import.
-    :keyword fixups: List of fixup plug-ins (see e.g.
+    :keyword fixups: List of fix-up plug-ins (see e.g.
         :mod:`celery.fixups.django`).
     :keyword autofinalize: If set to False a :exc:`RuntimeError`
         will be raised if the task registry or tasks are used before
@@ -235,7 +235,7 @@ class Celery(object):
             prefix=self.namespace,
         )
 
-        # - Apply fixups.
+        # - Apply fix-ups.
         self.fixups = set(self.builtin_fixups) if fixups is None else fixups
         # ...store fixup instances in _fixups to keep weakrefs alive.
         self._fixups = [symbol_by_name(fixup)(self) for fixup in self.fixups]
@@ -530,8 +530,8 @@ class Celery(object):
         This will affect all application instances (a global operation).
 
         Disables untrusted serializers and if configured to use the ``auth``
-        serializer will register the auth serializer with the provided settings
-        into the Kombu serializer registry.
+        serializer will register the ``auth`` serializer with the provided
+        settings into the Kombu serializer registry.
 
         :keyword allowed_serializers: List of serializer names, or
             content_types that should be exempt from being disabled.
@@ -558,7 +558,7 @@ class Celery(object):
         """Try to auto-discover and import modules with a specific name (by
         default 'tasks').
 
-        If the name is empty, this will be delegated to fixups (e.g. Django).
+        If the name is empty, this will be delegated to fix-ups (e.g. Django).
 
         For example if you have an (imagined) directory tree like this:
 
@@ -715,8 +715,8 @@ class Celery(object):
         :keyword transport: defaults to the :setting:`broker_transport`
                  setting.
         :keyword transport_options: Dictionary of transport specific options.
-        :keyword heartbeat: AMQP Heartbeat in seconds (pyamqp only).
-        :keyword login_method: Custom login method to use (amqp only).
+        :keyword heartbeat: AMQP Heartbeat in seconds (``pyamqp`` only).
+        :keyword login_method: Custom login method to use (AMQP only).
         :keyword failover_strategy: Custom failover strategy.
         :keyword \*\*kwargs: Additional arguments to :class:`kombu.Connection`.
 
@@ -1013,7 +1013,7 @@ class Celery(object):
 
     @cached_property
     def Beat(self, **kwargs):
-        """Celerybeat scheduler application.
+        """:program:`celery beat` scheduler application.
 
         See :class:`~@Beat`.
 

+ 4 - 3
celery/app/builtins.py

@@ -1,10 +1,11 @@
 # -*- coding: utf-8 -*-
 """
-    celery.app.builtins
-    ~~~~~~~~~~~~~~~~~~~
+    ``celery.app.builtins``
+    ~~~~~~~~~~~~~~~~~~~~~~~
 
     Built-in tasks that are always available in all
-    app instances. E.g. chord, group and xmap.
+    app instances. E.g. :class:`@chord`, :class:`@group`
+    and :class:`@xmap`.
 
 """
 from __future__ import absolute_import, unicode_literals

+ 1 - 1
celery/app/log.py

@@ -6,7 +6,7 @@
     The Celery instances logging section: ``Celery.log``.
 
     Sets up logging for the worker and other programs,
-    redirects stdouts, colors log output, patches logging
+    redirects standard outs, colors log output, patches logging
     related compatibility fixes, and so on.
 
 """

+ 6 - 6
celery/app/task.py

@@ -147,9 +147,6 @@ class Task(object):
     #: Name of the task.
     name = None
 
-    #: If :const:`True` the task is an abstract base class.
-    abstract = True
-
     #: Maximum number of retries before giving up.  If set to :const:`None`,
     #: it will **never** stop retrying.
     max_retries = 3
@@ -257,6 +254,9 @@ class Task(object):
     #: called.  This should probably be deprecated.
     _default_request = None
 
+    #: Deprecated attribute ``abstract`` here for compatibility.
+    abstract = True
+
     _exec_options = None
 
     __bound__ = False
@@ -565,7 +565,7 @@ class Task(object):
 
             If this argument is set and retry is called while
             an exception was raised (``sys.exc_info()`` is set)
-            it will attempt to reraise the current exception.
+            it will attempt to re-raise the current exception.
 
             If no exception was raised it will raise the ``exc``
             argument provided.
@@ -582,7 +582,7 @@ class Task(object):
         :keyword soft_time_limit: If set, overrides the default soft
                                   time limit.
         :keyword \*\*options: Any extra options to pass on to
-                              meth:`apply_async`.
+                              :meth:`apply_async`.
         :keyword throw: If this is :const:`False`, do not raise the
                         :exc:`~@Retry` exception,
                         that tells the worker to mark the task as being
@@ -638,7 +638,7 @@ class Task(object):
 
         if max_retries is not None and retries > max_retries:
             if exc:
-                # first try to reraise the original exception
+                # first try to re-raise the original exception
                 maybe_reraise()
                 # or if not in an except block then raise the custom exc.
                 raise exc
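
The retry-limit check in the hunk above can be illustrated with a toy sketch. ``check_retry`` is a hypothetical stand-in for the relevant branch of ``Task.retry``, not Celery's actual method; the real code re-raises the active exception via ``maybe_reraise()`` before falling back to ``exc``:

```python
def check_retry(retries, max_retries, exc=None):
    # Mirrors the guard shown in the diff: a max_retries of None
    # means the task never stops retrying.
    if max_retries is not None and retries > max_retries:
        if exc:
            # The real code first tries to re-raise the original
            # exception (maybe_reraise()), then raises the custom exc.
            raise exc
        raise RuntimeError('max retries exceeded')
    return 'retrying'

print(check_retry(1, 3))      # under the limit
print(check_retry(5, None))   # None: retry forever
```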

+ 1 - 1
celery/apps/worker.py

@@ -322,7 +322,7 @@ def install_cry_handler(sig='SIGUSR1'):
         return
 
     def cry_handler(*args):
-        """Signal handler logging the stacktrace of all active threads."""
+        """Signal handler logging the stack-trace of all active threads."""
         with in_sighandler():
             safe_say(cry())
     platforms.signals[sig] = cry_handler

+ 2 - 2
celery/backends/amqp.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-    celery.backends.amqp
-    ~~~~~~~~~~~~~~~~~~~~
+    ``celery.backends.amqp``
+    ~~~~~~~~~~~~~~~~~~~~~~~~
 
     The AMQP result backend.
 

+ 3 - 3
celery/backends/async.py

@@ -1,8 +1,8 @@
 """
-    celery.backends.async
-    ~~~~~~~~~~~~~~~~~~~~~
+    ``celery.backends.async``
+    ~~~~~~~~~~~~~~~~~~~~~~~~~
 
-    Async backend support utilitites.
+    Async backend support utilities.
 
 """
 from __future__ import absolute_import, unicode_literals

+ 4 - 4
celery/backends/cache.py

@@ -1,9 +1,9 @@
 # -*- coding: utf-8 -*-
 """
-    celery.backends.cache
-    ~~~~~~~~~~~~~~~~~~~~~
+    ``celery.backends.cache``
+    ~~~~~~~~~~~~~~~~~~~~~~~~~
 
-    Memcache and in-memory cache result backend.
+    Memcached and in-memory cache result backend.
 
 """
 from __future__ import absolute_import, unicode_literals
@@ -25,7 +25,7 @@ _imp = [None]
 PY3 = sys.version_info[0] == 3
 
 REQUIRES_BACKEND = """\
-The memcached backend requires either pylibmc or python-memcached.\
+The Memcached backend requires either pylibmc or python-memcached.\
 """
 
 UNKNOWN_BACKEND = """\

+ 2 - 2
celery/backends/cassandra.py

@@ -1,7 +1,7 @@
 # -* coding: utf-8 -*-
 """
-    celery.backends.cassandra
-    ~~~~~~~~~~~~~~~~~~~~~~~~~
+    ``celery.backends.cassandra``
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
     Apache Cassandra result store backend using DataStax driver
 

+ 2 - 2
celery/backends/couchbase.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-    celery.backends.couchbase
-    ~~~~~~~~~~~~~~~~~~~~~~~~~
+    ``celery.backends.couchbase``
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
     Couchbase result store backend.
 

+ 2 - 2
celery/backends/couchdb.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-    celery.backends.couchdb
-    ~~~~~~~~~~~~~~~~~~~~~~~~~
+    ``celery.backends.couchdb``
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
     CouchDB result store backend.
 

+ 2 - 2
celery/backends/elasticsearch.py

@@ -1,7 +1,7 @@
 # -* coding: utf-8 -*-
 """
-    celery.backends.elasticsearch
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+    ``celery.backends.elasticsearch``
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
     Elasticsearch result store backend.
 

+ 5 - 4
celery/backends/filesystem.py

@@ -1,9 +1,10 @@
 # -*- coding: utf-8 -*-
 """
-    celery.backends.filesystem
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~
+    ``celery.backends.filesystem``
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
     File-system result store backend.
+
 """
 from __future__ import absolute_import, unicode_literals
 
@@ -39,7 +40,7 @@ class FilesystemBackend(KeyValueStoreBackend):
     :param url:  URL to the directory we should use
     :param open: open function to use when opening files
     :param unlink: unlink function to use when deleting files
-    :param sep: directory seperator (to join the directory with the key)
+    :param sep: directory separator (to join the directory with the key)
     :param encoding: encoding used on the file-system
 
     """
@@ -50,7 +51,7 @@ class FilesystemBackend(KeyValueStoreBackend):
         self.url = url
         path = self._find_path(url)
 
-        # We need the path and seperator as bytes objects
+        # We need the path and separator as bytes objects
         self.path = path.encode(encoding)
         self.sep = sep.encode(encoding)
 

+ 2 - 2
celery/backends/mongodb.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-    celery.backends.mongodb
-    ~~~~~~~~~~~~~~~~~~~~~~~
+    ``celery.backends.mongodb``
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
     MongoDB result store backend.
 

+ 4 - 4
celery/backends/redis.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-    celery.backends.redis
-    ~~~~~~~~~~~~~~~~~~~~~
+    ``celery.backends.redis``
+    ~~~~~~~~~~~~~~~~~~~~~~~~~
 
     Redis result store backend.
 
@@ -102,10 +102,10 @@ class RedisBackend(base.BaseKeyValueStoreBackend, async.AsyncBackendMixin):
 
     ResultConsumer = ResultConsumer
 
-    #: redis-py client module.
+    #: :pypi:`redis` client module.
     redis = redis
 
-    #: Maximium number of connections in the pool.
+    #: Maximum number of connections in the pool.
     max_connections = None
 
     supports_autoexpire = True

+ 2 - 2
celery/backends/riak.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-    celery.backends.riak
-    ~~~~~~~~~~~~~~~~~~~~
+    ``celery.backends.riak``
+    ~~~~~~~~~~~~~~~~~~~~~~~~
 
     Riak result store backend.
 

+ 2 - 2
celery/backends/rpc.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-    celery.backends.rpc
-    ~~~~~~~~~~~~~~~~~~~
+    ``celery.backends.rpc``
+    ~~~~~~~~~~~~~~~~~~~~~~~
 
     RPC-style result backend, using reply-to and one queue per client.
 

+ 2 - 2
celery/beat.py

@@ -52,7 +52,7 @@ DEFAULT_MAX_INTERVAL = 300  # 5 minutes
 
 
 class SchedulingError(Exception):
-    """An error occured while scheduling a task."""
+    """An error occurred while scheduling a task."""
 
 
 @total_ordering
@@ -74,7 +74,7 @@ class ScheduleEntry(object):
     #: The task name
     name = None
 
-    #: The schedule (run_every/crontab)
+    #: The schedule (:class:`~celery.schedules.schedule`)
     schedule = None
 
     #: Positional arguments to apply.

+ 11 - 9
celery/bin/amqp.py

@@ -45,7 +45,7 @@ class Spec(object):
     """AMQP Command specification.
 
     Used to convert arguments to Python values and display various help
-    and tooltips.
+    and tool-tips.
 
     :param args: see :attr:`args`.
     :keyword returns: see :attr:`returns`.
@@ -259,7 +259,7 @@ class AMQShell(cmd.Cmd):
     def dispatch(self, cmd, arglist):
         """Dispatch and execute the command.
 
-        Lookup order is: :attr:`builtins` -> :attr:`amqp`.
+        Look-up order is: :attr:`builtins` -> :attr:`amqp`.
 
         """
         if isinstance(arglist, string_t):
@@ -354,19 +354,21 @@ class AMQPAdmin(object):
 class amqp(Command):
     """AMQP Administration Shell.
 
-    Also works for non-amqp transports (but not ones that
+    Also works for non-AMQP transports (but not ones that
     store declarations in memory).
 
-    Examples::
+    Examples:
 
-        celery amqp
+    .. code-block:: console
+
+        $ celery amqp
             start shell mode
-        celery amqp help
+        $ celery amqp help
             show list of commands
 
-        celery amqp exchange.delete name
-        celery amqp queue.delete queue
-        celery amqp queue.delete queue yes yes
+        $ celery amqp exchange.delete name
+        $ celery amqp queue.delete queue
+        $ celery amqp queue.delete queue yes yes
 
     """
 

+ 14 - 4
celery/bin/base.py

@@ -49,6 +49,7 @@ Try --help?
 
 find_long_opt = re.compile(r'.+?(--.+?)(?:\s|,|$)')
 find_rst_ref = re.compile(r':\w+:`(.+?)`')
+find_rst_decl = re.compile(r'^\s*\.\. .+?::.+$')
 
 
 @python_2_unicode_compatible
@@ -163,7 +164,7 @@ class Command(object):
     #: Text to print in --help before option list.
     description = ''
 
-    #: Set to true if this command doesn't have subcommands
+    #: Set to true if this command doesn't have sub-commands
     leaf = True
 
     # used by :meth:`say_remote_command_reply`.
@@ -184,7 +185,7 @@ class Command(object):
         self._no_color = no_color
         self.quiet = quiet
         if not self.description:
-            self.description = self.__doc__
+            self.description = self._strip_restructeredtext(self.__doc__)
         if on_error:
             self.on_error = on_error
         if on_usage_error:
@@ -512,10 +513,19 @@ class Command(object):
                     in_option = m.groups()[0].strip()
                 assert in_option, 'missing long opt'
             elif in_option and line.startswith(' ' * 4):
-                options[in_option].append(
-                    find_rst_ref.sub(r'\1', line.strip()).replace('`', ''))
+                if not find_rst_decl.match(line):
+                    options[in_option].append(
+                        find_rst_ref.sub(
+                            r'\1', line.strip()).replace('`', ''))
         return options
 
+    def _strip_restructeredtext(self, s):
+        return '\n'.join(
+            find_rst_ref.sub(r'\1', line.replace('`', ''))
+            for line in (s or '').splitlines()
+            if not find_rst_decl.match(line)
+        )
+
     def with_pool_option(self, argv):
         """Return tuple of ``(short_opts, long_opts)`` if the command
         supports a pool argument, and used to monkey patch eventlet/gevent
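
The two patterns added above strip reStructuredText markup out of docstrings before they are shown as ``--help`` text. A small sketch of what each regex does, using the patterns exactly as they appear in the hunk (the sample docstring lines are hypothetical):

```python
import re

# Patterns from celery/bin/base.py in this commit.
find_rst_ref = re.compile(r':\w+:`(.+?)`')     # :role:`target` references
find_rst_decl = re.compile(r'^\s*\.\. .+?::.+$')  # .. directive:: lines

lines = [
    '.. cmdoption:: -A, --app',
    '    app instance to use (e.g. :class:`module.attr_name`)',
]

# find_rst_decl filters out directive lines entirely...
kept = [l for l in lines if not find_rst_decl.match(l)]

# ...and find_rst_ref unwraps roles to their bare target, after which
# any stray backticks are stripped (order as in the pre-existing path).
cleaned = [find_rst_ref.sub(r'\1', l.strip()).replace('`', '') for l in kept]
print(cleaned)
```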

+ 9 - 7
celery/bin/celery.py

@@ -15,11 +15,11 @@ and usually parsed before command-specific arguments.
 
 .. cmdoption:: -A, --app
 
-    app instance to use (e.g. module.attr_name)
+    app instance to use (e.g. ``module.attr_name``)
 
 .. cmdoption:: -b, --broker
 
-    url to broker.  default is 'amqp://guest@localhost//'
+    URL to broker.  default is ``amqp://guest@localhost//``
 
 .. cmdoption:: --loader
 
@@ -615,7 +615,7 @@ class _RemoteControl(Command):
 class inspect(_RemoteControl):
     """Inspect the worker at runtime.
 
-    Availability: RabbitMQ (amqp), Redis, and MongoDB transports.
+    Availability: RabbitMQ (AMQP), Redis, and MongoDB transports.
 
     Examples::
 
@@ -656,7 +656,7 @@ class inspect(_RemoteControl):
 class control(_RemoteControl):
     """Workers remote control.
 
-    Availability: RabbitMQ (amqp), Redis, and MongoDB transports.
+    Availability: RabbitMQ (AMQP), Redis, and MongoDB transports.
 
     Examples::
 
@@ -744,10 +744,12 @@ class status(Command):
 class migrate(Command):
     """Migrate tasks from one broker to another.
 
-    Examples::
+    Examples:
+
+    .. code-block:: console
 
-        celery migrate redis://localhost amqp://guest@localhost//
-        celery migrate django:// redis://localhost
+        $ celery migrate redis://localhost amqp://guest@localhost//
+        $ celery migrate django:// redis://localhost
 
     NOTE: This command is experimental, make sure you have
           a backup of the tasks before you continue.

+ 5 - 4
celery/bin/worker.py

@@ -63,7 +63,8 @@ The :program:`celery worker` command (previously known as ``celeryd``)
 
 .. cmdoption:: --scheduler
 
-    Scheduler class to use. Default is celery.beat.PersistentScheduler
+    Scheduler class to use. Default is
+    :class:`celery.beat.PersistentScheduler`
 
 .. cmdoption:: -S, --statedb
 
@@ -129,7 +130,7 @@ The :program:`celery worker` command (previously known as ``celeryd``)
 
 .. cmdoption:: --autoreload
 
-    Enable autoreloading.
+    Enable auto-reloading.
 
 .. cmdoption:: --no-execv
 
@@ -166,8 +167,8 @@ The :program:`celery worker` command (previously known as ``celeryd``)
 
 .. cmdoption:: --umask
 
-    Effective umask (in octal) of the process after detaching.  Inherits
-    the umask of the parent process by default.
+    Effective :manpage:`umask(1)` (in octal) of the process after detaching.
+    Inherits the :manpage:`umask(1)` of the parent process by default.
 
 .. cmdoption:: --workdir
 

+ 1 - 1
celery/bootsteps.py

@@ -308,7 +308,7 @@ class Step(object):
 
     """
 
-    #: Optional step name, will use qualname if not specified.
+    #: Optional step name, will use ``qualname`` if not specified.
     name = None
 
     #: Optional short name used for graph outputs and in logs.

+ 4 - 4
celery/concurrency/asynpool.py

@@ -735,9 +735,9 @@ class AsynPool(_pool.Pool):
 
         def schedule_writes(ready_fds, total_write_count=[0]):
             # Schedule write operation to ready file descriptor.
-            # The file descriptor is writeable, but that does not
+            # The file descriptor is writable, but that does not
             # mean the process is currently reading from the socket.
-            # The socket is buffered so writeable simply means that
+            # The socket is buffered so writable simply means that
             # the buffer can accept at least 1 byte of data.
 
             # This means we have to cycle between the ready fds.
@@ -766,7 +766,7 @@ class AsynPool(_pool.Pool):
                     job = pop_message()
                 except IndexError:
                     # no more messages, remove all inactive fds from the hub.
-                    # this is important since the fds are always writeable
+                    # this is important since the fds are always writable
                     # as long as there's 1 byte left in the buffer, and so
                     # this may create a spinloop where the event loop
                     # always wakes up.
@@ -877,7 +877,7 @@ class AsynPool(_pool.Pool):
 
         def send_ack(response, pid, job, fd, WRITE=WRITE, ERR=ERR):
             # Only used when synack is enabled.
-            # Schedule writing ack response for when the fd is writeable.
+            # Schedule writing ack response for when the fd is writable.
             msg = Ack(job, fd, precalc[response])
             callback = promise(write_generator_done)
             cor = _write_ack(fd, msg, callback=callback)

+ 9 - 3
celery/contrib/rdb.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-celery.contrib.rdb
-==================
+``celery.contrib.rdb``
+======================
 
 Remote debugger for Celery tasks running in multiprocessing pool workers.
 Inspired by http://snippets.dzone.com/posts/show/7248
@@ -24,11 +24,17 @@ Inspired by http://snippets.dzone.com/posts/show/7248
 
 .. envvar:: CELERY_RDB_HOST
 
+``CELERY_RDB_HOST``
+-------------------
+
     Hostname to bind to.  Default is '127.0.0.1', which means the socket
     will only be accessible from the local host.
 
 .. envvar:: CELERY_RDB_PORT
 
+``CELERY_RDB_PORT``
+-------------------
+
     Base port to bind to.  Default is 6899.
     The debugger will try to find an available port starting from the
     base port.  The selected port will be logged by the worker.
@@ -177,7 +183,7 @@ def debugger():
 
 
 def set_trace(frame=None):
-    """Set breakpoint at current location, or a specified frame"""
+    """Set break-point at current location, or a specified frame."""
     if frame is None:
         frame = _frame().f_back
     return debugger().set_trace(frame)
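The docstring above says the debugger "will try to find an available port starting from the base port". That search can be sketched with the standard :mod:`socket` module; ``get_avail_port`` is a hypothetical helper for illustration, not the function in this patch:

```python
import socket

def get_avail_port(host='127.0.0.1', base_port=6899, search_limit=100):
    # Try each candidate port in turn; bind() raises if the port is taken.
    for port in range(base_port, base_port + search_limit):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind((host, port))
        except OSError:
            sock.close()
            continue
        return sock, port
    raise RuntimeError('Could not find an available port')
```

The selected port can then be logged by the worker, as the docstring describes.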

+ 2 - 2
celery/contrib/sphinx.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-celery.contrib.sphinx
-=====================
+``celery.contrib.sphinx``
+=========================
 
 Sphinx documentation plugin
 

+ 21 - 15
celery/datastructures.py

@@ -1,9 +1,9 @@
 # -*- coding: utf-8 -*-
 """
-    celery.datastructures
-    ~~~~~~~~~~~~~~~~~~~~~
+    ``celery.datastructures``
+    ~~~~~~~~~~~~~~~~~~~~~~~~~
 
-    Custom types and data structures.
+    Custom types and data-structures.
 
 """
 from __future__ import absolute_import, print_function, unicode_literals
@@ -32,9 +32,11 @@ except ImportError:
         pass
     LazySettings = LazyObject  # noqa
 
-__all__ = ['GraphFormatter', 'CycleError', 'DependencyGraph',
-           'AttributeDictMixin', 'AttributeDict', 'DictAttribute',
-           'ConfigurationView', 'LimitedSet']
+__all__ = [
+    'GraphFormatter', 'CycleError', 'DependencyGraph',
+    'AttributeDictMixin', 'AttributeDict', 'DictAttribute',
+    'ConfigurationView', 'LimitedSet',
+]
 
 DOT_HEAD = """
 {IN}{type} {id} {{
@@ -447,15 +449,16 @@ MutableMapping.register(DictAttribute)
 
 @python_2_unicode_compatible
 class ConfigurationView(AttributeDictMixin):
-    """A view over an applications configuration dicts.
+    """A view over an applications configuration dictionaries.
 
     Custom (but older) version of :class:`collections.ChainMap`.
 
-    If the key does not exist in ``changes``, the ``defaults`` dicts
-    are consulted.
+    If the key does not exist in ``changes``, the ``defaults``
+    dictionaries are consulted.
 
     :param changes:  Dict containing changes to the configuration.
-    :param defaults: List of dicts containing the default configuration.
+    :param defaults: List of dictionaries containing the default
+                     configuration.
 
     """
     key_t = None
@@ -596,17 +599,17 @@ class LimitedSet(object):
     Good for when you need to test for membership (`a in set`),
     but the set should not grow unbounded.
 
-    Maxlen is enforced at all times, so if the limit is reached
+    ``maxlen`` is enforced at all times, so if the limit is reached
     we will also remove non-expired items.
 
-    You can also configure minlen, which is the minimal residual size
+    You can also configure ``minlen``, which is the minimal residual size
     of the set.
 
     All arguments are optional, and no limits are enabled by default.
 
     :keyword maxlen: Optional max number of items.
 
-        Adding more items than maxlen will result in immediate
+        Adding more items than ``maxlen`` will result in immediate
         removal of items sorted by oldest insertion time.
 
     :keyword expires: TTL for all items.
@@ -614,19 +617,22 @@ class LimitedSet(object):
         Expired items are purged as keys are inserted.
 
     :keyword minlen: Minimal residual size of this set.
+
         .. versionadded:: 4.0
 
         Value must be less than ``maxlen`` if both are configured.
 
         Older expired items will be deleted, only after the set
-        exceeds minlen number of items.
+        exceeds ``minlen`` number of items.
 
     :keyword data: Initial data to initialize set with.
         Can be an iterable of ``(key, value)`` pairs,
         a dict (``{key: insertion_time}``), or another instance
         of :class:`LimitedSet`.
 
-    Example::
+    Example:
+
+    .. code-block:: pycon
 
         >>> s = LimitedSet(maxlen=50000, expires=3600, minlen=4000)
         >>> for i in range(60000):
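The ``maxlen`` eviction described in the ``LimitedSet`` docstring above can be sketched in a few lines (``TinyLimitedSet`` is a hypothetical illustration; the real class also tracks ``expires`` and ``minlen``):

```python
import time
from collections import OrderedDict

class TinyLimitedSet(object):
    """Membership-testable set that evicts the oldest insertions."""

    def __init__(self, maxlen=None):
        self.maxlen = maxlen
        self._data = OrderedDict()  # key -> insertion time

    def add(self, key):
        self._data[key] = time.monotonic()
        # maxlen is enforced at all times: drop oldest entries first.
        while self.maxlen is not None and len(self._data) > self.maxlen:
            self._data.popitem(last=False)

    def __contains__(self, key):
        return key in self._data

    def __len__(self):
        return len(self._data)
```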

+ 1 - 1
celery/events/__init__.py

@@ -92,7 +92,7 @@ class EventDispatcher(object):
         include ``"task"`` and ``"worker"``.
 
     :keyword enabled: Set to :const:`False` to not actually publish any events,
-        making :meth:`send` a noop operation.
+        making :meth:`send` a no-op.
 
     :keyword channel: Can be used instead of `connection` to specify
         an exact channel to use when sending events.

+ 2 - 2
celery/events/cursesmon.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-    celery.events.cursesmon
-    ~~~~~~~~~~~~~~~~~~~~~~~
+    ``celery.events.cursesmon``
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
     Graphical monitor of Celery events using curses.
 

+ 1 - 1
celery/exceptions.py

@@ -101,7 +101,7 @@ class Ignore(TaskPredicate):
 
 @python_2_unicode_compatible
 class Reject(TaskPredicate):
-    """A task can raise this if it wants to reject/requeue the message."""
+    """A task can raise this if it wants to reject/re-queue the message."""
 
     def __init__(self, reason=None, requeue=False):
         self.reason = reason

+ 1 - 1
celery/fixups/django.py

@@ -23,7 +23,7 @@ __all__ = ['DjangoFixup', 'fixup']
 
 ERR_NOT_INSTALLED = """\
 Environment variable DJANGO_SETTINGS_MODULE is defined
-but Django is not installed.  Will not apply Django fixups!
+but Django is not installed.  Will not apply Django fix-ups!
 """
 
 

+ 8 - 7
celery/platforms.py

@@ -109,7 +109,7 @@ def pyimplementation():
 
 
 class LockFailed(Exception):
-    """Raised if a pidlock can't be acquired."""
+    """Raised if a PID lock can't be acquired."""
 
 
 class Pidfile(object):
@@ -251,12 +251,12 @@ def _create_pidlock(pidfile):
 
 
 def fd_by_path(paths):
-    """Return a list of fds.
+    """Return a list of file descriptors.
 
-    This method returns list of fds corresponding to
+    This method returns a list of file descriptors corresponding to
     file paths passed in paths variable.
 
-    :keyword paths: List of file paths go get fd for.
+    :keyword paths: List of file paths.
 
     :returns: :list:.
 
@@ -364,7 +364,7 @@ def detached(logfile=None, pidfile=None, uid=None, gid=None, umask=0,
       privileges to.
     :keyword umask: Optional umask that will be effective in the child process.
     :keyword workdir: Optional new working directory.
-    :keyword fake: Don't actually detach, intented for debugging purposes.
+    :keyword fake: Don't actually detach, intended for debugging purposes.
     :keyword \*\*opts: Ignored.
 
     **Example**:
@@ -682,7 +682,7 @@ def strargv(argv):
 
 
 def set_process_title(progname, info=None):
-    """Set the ps name for the currently running process.
+    """Set the :command:`ps` name for the currently running process.
 
     Only works if :pypi:`setproctitle` is installed.
 
@@ -701,7 +701,8 @@ if os.environ.get('NOSETPS'):  # pragma: no cover
 else:
 
     def set_mp_process_title(progname, info=None, hostname=None):  # noqa
-        """Set the ps name using the multiprocessing process name.
+        """Set the :command:`ps` name using the :mod:`multiprocessing`
+        process name.
 
         Only works if :pypi:`setproctitle` is installed.
 

+ 65 - 63
celery/schedules.py

@@ -66,7 +66,7 @@ def cronfield(s):
 
 
 class ParseException(Exception):
-    """Raised by crontab_parser when the input can't be parsed."""
+    """Raised by :class:`crontab_parser` when the input can't be parsed."""
 
 
 @python_2_unicode_compatible
@@ -99,15 +99,13 @@ class schedule(object):
         )
 
     def is_due(self, last_run_at):
-        """Returns tuple of two items `(is_due, next_time_to_check)`,
+        """Returns tuple of two items ``(is_due, next_time_to_check)``,
         where next time to check is in seconds.
 
-        e.g.
-
-        * `(True, 20)`, means the task should be run now, and the next
+        * ``(True, 20)`` means the task should be run now, and the next
             time to check is in 20 seconds.
 
-        * `(False, 12.3)`, means the task is not due, but that the scheduler
+        * ``(False, 12.3)`` means the task is not due, but that the scheduler
           should check again in 12.3 seconds.
 
         The next time to check is used to save energy/CPU cycles,
@@ -118,7 +116,7 @@ class schedule(object):
         sleep between re-checking the periodic task intervals.  So if you
         have a task that changes schedule at run-time then your next_run_at
         check will decide how long it will take before a change to the
-        schedule takes effect.  The max loop interval takes precendence
+        schedule takes effect.  The max loop interval takes precedence
         over the next check at value returned.
 
         .. admonition:: Scheduler max interval variance
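The ``(is_due, next_time_to_check)`` contract described above can be illustrated with a stripped-down interval check (``interval_is_due`` is a hypothetical helper, ignoring time-zones and the relative mode of the real ``schedule`` class):

```python
from datetime import datetime, timedelta

def interval_is_due(last_run_at, run_every, now=None):
    """Return (is_due, next_time_to_check) for a fixed interval in seconds."""
    now = now or datetime.utcnow()
    remaining = (
        last_run_at + timedelta(seconds=run_every) - now
    ).total_seconds()
    if remaining <= 0:
        # Due now; the next check is a full interval away.
        return True, run_every
    return False, remaining
```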
@@ -184,9 +182,9 @@ class schedule(object):
 
 
 class crontab_parser(object):
-    """Parser for crontab expressions. Any expression of the form 'groups'
+    """Parser for Crontab expressions. Any expression of the form 'groups'
     (see BNF grammar below) is accepted and expanded to a set of numbers.
-    These numbers represent the units of time that the crontab needs to
+    These numbers represent the units of time that the Crontab needs to
     run on:
 
     .. code-block:: bnf
@@ -201,7 +199,7 @@ class crontab_parser(object):
         groups  :: expr ( ',' expr ) *
 
     The parser is a general purpose one, useful for parsing hours, minutes and
-    day_of_week expressions.  Example usage:
+    day of week expressions.  Example usage:
 
     .. code-block:: pycon
 
@@ -212,8 +210,8 @@ class crontab_parser(object):
         >>> day_of_week = crontab_parser(7).parse('*')
         [0, 1, 2, 3, 4, 5, 6]
 
-    It can also parse day_of_month and month_of_year expressions if initialized
-    with an minimum of 1.  Example usage:
+    It can also parse day of month and month of year expressions if initialized
+    with a minimum of 1.  Example usage:
 
     .. code-block:: pycon
 
@@ -307,12 +305,12 @@ class crontab_parser(object):
 
 @python_2_unicode_compatible
 class crontab(schedule):
-    """A crontab can be used as the `run_every` value of a
-    :class:`PeriodicTask` to add cron-like scheduling.
+    """A Crontab can be used as the ``run_every`` value of a
+    periodic task entry to add :manpage:`crontab(5)`-like scheduling.
 
-    Like a :manpage:`cron` job, you can specify units of time of when
+    Like a :manpage:`cron(5)`-job, you can specify units of time of when
     you would like the task to execute. It is a reasonably complete
-    implementation of cron's features, so it should provide a fair
+    implementation of :command:`cron`'s features, so it should cover a fair
     degree of scheduling needs.
 
     You can specify a minute, an hour, a day of the week, a day of the
@@ -322,17 +320,17 @@ class crontab(schedule):
 
         - A (list of) integers from 0-59 that represent the minutes of
           an hour of when execution should occur; or
-        - A string representing a crontab pattern.  This may get pretty
-          advanced, like `minute='*/15'` (for every quarter) or
-          `minute='1,13,30-45,50-59/2'`.
+        - A string representing a Crontab pattern.  This may get pretty
+          advanced, like ``minute='*/15'`` (for every quarter) or
+          ``minute='1,13,30-45,50-59/2'``.
 
     .. attribute:: hour
 
         - A (list of) integers from 0-23 that represent the hours of
           a day of when execution should occur; or
-        - A string representing a crontab pattern.  This may get pretty
-          advanced, like `hour='*/3'` (for every three hours) or
-          `hour='0,8-17/2'` (at midnight, and every two hours during
+        - A string representing a Crontab pattern.  This may get pretty
+          advanced, like ``hour='*/3'`` (for every three hours) or
+          ``hour='0,8-17/2'`` (at midnight, and every two hours during
           office hours).
 
     .. attribute:: day_of_week
@@ -340,27 +338,27 @@ class crontab(schedule):
         - A (list of) integers from 0-6, where Sunday = 0 and Saturday =
           6, that represent the days of a week that execution should
           occur.
-        - A string representing a crontab pattern.  This may get pretty
-          advanced, like `day_of_week='mon-fri'` (for weekdays only).
-          (Beware that `day_of_week='*/2'` does not literally mean
+        - A string representing a Crontab pattern.  This may get pretty
+          advanced, like ``day_of_week='mon-fri'`` (for weekdays only).
+          (Beware that ``day_of_week='*/2'`` does not literally mean
           'every two days', but 'every day that is divisible by two'!)
 
     .. attribute:: day_of_month
 
         - A (list of) integers from 1-31 that represents the days of the
           month that execution should occur.
-        - A string representing a crontab pattern.  This may get pretty
-          advanced, such as `day_of_month='2-30/3'` (for every even
-          numbered day) or `day_of_month='1-7,15-21'` (for the first and
+        - A string representing a Crontab pattern.  This may get pretty
+          advanced, such as ``day_of_month='2-30/3'`` (for every even
+          numbered day) or ``day_of_month='1-7,15-21'`` (for the first and
           third weeks of the month).
 
     .. attribute:: month_of_year
 
         - A (list of) integers from 1-12 that represents the months of
           the year during which execution can occur.
-        - A string representing a crontab pattern.  This may get pretty
-          advanced, such as `month_of_year='*/3'` (for the first month
-          of every quarter) or `month_of_year='2-12/2'` (for every even
+        - A string representing a Crontab pattern.  This may get pretty
+          advanced, such as ``month_of_year='*/3'`` (for the first month
+          of every quarter) or ``month_of_year='2-12/2'`` (for every even
           numbered month).
 
     .. attribute:: nowfun
@@ -374,11 +372,12 @@ class crontab(schedule):
 
     It is important to realize that any day on which execution should
     occur must be represented by entries in all three of the day and
-    month attributes.  For example, if `day_of_week` is 0 and `day_of_month`
-    is every seventh day, only months that begin on Sunday and are also
-    in the `month_of_year` attribute will have execution events.  Or,
-    `day_of_week` is 1 and `day_of_month` is '1-7,15-21' means every
-    first and third Monday of every month present in `month_of_year`.
+    month attributes.  For example, if ``day_of_week`` is 0 and
+    ``day_of_month`` is every seventh day, only months that begin
+    on Sunday and are also in the ``month_of_year`` attribute will have
+    execution events.  Or, ``day_of_week`` is 1 and ``day_of_month``
+    is '1-7,15-21' means every first and third Monday of every month
+    present in ``month_of_year``.
 
     """
 
@@ -409,19 +408,18 @@ class crontab(schedule):
             list        (like [8-17])
 
         And convert it to an (expanded) set representing all time unit
-        values on which the crontab triggers.  Only in case of the base
-        type being 'str', parsing occurs.  (It is fast and
-        happens only once for each crontab instance, so there is no
+        values on which the Crontab triggers.  Only in case of the base
+        type being :class:`str`, parsing occurs.  (It is fast and
+        happens only once for each Crontab instance, so there is no
         significant performance overhead involved.)
 
         For the other base types, merely Python type conversions happen.
 
-        The argument `max_` is needed to determine the expansion of '*'
-        and ranges.
-        The argument `min_` is needed to determine the expansion of '*'
-        and ranges for 1-based cronspecs, such as day of month or month
-        of year. The default is sufficient for minute, hour, and day of
-        week.
+        The argument ``max_`` is needed to determine the expansion of
+        ``*`` and ranges.  The argument ``min_`` is needed to determine
+        the expansion of ``*`` and ranges for 1-based cronspecs, such as
+        day of month or month of year.  The default is sufficient for minute,
+        hour, and day of week.
 
         """
         if isinstance(cronspec, numbers.Integral):
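The expansion described in that docstring — ``*`` and ranges bounded by ``max_``/``min_`` — can be sketched for the simplest forms (``expand_cronspec`` is a toy stand-in for the method in the hunk above, handling only ``*``, ``a-b`` ranges, and ``/step`` suffixes):

```python
def expand_cronspec(cronspec, max_, min_=0):
    """Expand a cron-style field into the set of matching time units."""
    if isinstance(cronspec, int):
        return {cronspec}
    base, _, step = cronspec.partition('/')
    if base == '*':
        values = list(range(min_, max_ + min_))
    else:
        low, _, high = base.partition('-')
        values = list(range(int(low), int(high or low) + 1))
    # A '/step' suffix keeps every step-th value, counting from the start.
    return set(values[::int(step) if step else 1])
```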
@@ -443,11 +441,12 @@ class crontab(schedule):
         return result
 
     def _delta_to_next(self, last_run_at, next_hour, next_minute):
-        """Takes a datetime of last run, next minute and hour, and
-        returns a relativedelta for the next scheduled day and time.
+        """Takes a :class:`~datetime.datetime` of last run, next minute and hour,
+        and returns a :class:`~celery.utils.timeutils.ffwd` for the next
+        scheduled day and time.
 
-        Only called when day_of_month and/or month_of_year cronspec
-        is specified to further limit scheduled task execution.
+        Only called when ``day_of_month`` and/or ``month_of_year``
+        cronspec is specified to further limit scheduled task execution.
 
         """
         datedata = AttributeDict(year=last_run_at.year)
@@ -582,11 +581,12 @@ class crontab(schedule):
         return self.to_local(last_run_at), delta, self.to_local(now)
 
     def remaining_estimate(self, last_run_at, ffwd=ffwd):
-        """Returns when the periodic task should run next as a timedelta."""
+        """Returns when the periodic task should run next as a
+        :class:`~datetime.timedelta`."""
         return remaining(*self.remaining_delta(last_run_at, ffwd=ffwd))
 
     def is_due(self, last_run_at):
-        """Returns tuple of two items `(is_due, next_time_to_run)`,
+        """Returns tuple of two items ``(is_due, next_time_to_run)``,
         where next time to run is in seconds.
 
         See :meth:`celery.schedules.schedule.is_due` for more information.
@@ -631,21 +631,22 @@ def maybe_schedule(s, relative=False, app=None):
 
 @python_2_unicode_compatible
 class solar(schedule):
-    """A solar event can be used as the `run_every` value of a
-    :class:`PeriodicTask` to schedule based on certain solar events.
+    """A solar event can be used as the ``run_every`` value of a
+    periodic task entry to schedule based on certain solar events.
 
     :param event: Solar event that triggers this task. Available
-        values are: dawn_astronomical, dawn_nautical, dawn_civil,
-        sunrise, solar_noon, sunset, dusk_civil, dusk_nautical,
-        dusk_astronomical
+        values are: ``dawn_astronomical``, ``dawn_nautical``, ``dawn_civil``,
+        ``sunrise``, ``solar_noon``, ``sunset``, ``dusk_civil``,
+        ``dusk_nautical``, ``dusk_astronomical``.
     :param lat: The latitude of the observer.
     :param lon: The longitude of the observer.
     :param nowfun: Function returning the current date and time
         (class:`~datetime.datetime`).
     :param app: Celery app instance.
+
     """
 
-    _all_events = [
+    _all_events = {
         'dawn_astronomical',
         'dawn_nautical',
         'dawn_civil',
@@ -655,7 +656,7 @@ class solar(schedule):
         'dusk_civil',
         'dusk_nautical',
         'dusk_astronomical',
-    ]
+    }
     _horizons = {
         'dawn_astronomical': '-18',
         'dawn_nautical': '-12',
@@ -700,7 +701,7 @@ class solar(schedule):
 
         if event not in self._all_events:
             raise ValueError(SOLAR_INVALID_EVENT.format(
-                event=event, all_events=', '.join(self._all_events),
+                event=event, all_events=', '.join(sorted(self._all_events)),
             ))
         if lat < -90 or lat > 90:
             raise ValueError(SOLAR_INVALID_LATITUDE.format(lat=lat))
@@ -727,9 +728,10 @@ class solar(schedule):
         )
 
     def remaining_estimate(self, last_run_at):
-        """Returns when the periodic task should run next as a timedelta,
-        or if it shouldn't run today (e.g. the sun does not rise today),
-        returns the time when the next check should take place."""
+        """Returns when the periodic task should run next as a
+        :class:`~datetime.timedelta`, or if it shouldn't run today (e.g.
+        the sun does not rise today), returns the time when the next check
+        should take place."""
         last_run_at = self.maybe_make_aware(last_run_at)
         last_run_at_utc = localize(last_run_at, timezone.utc)
         self.cal.date = last_run_at_utc
@@ -751,7 +753,7 @@ class solar(schedule):
         return delta
 
     def is_due(self, last_run_at):
-        """Returns tuple of two items `(is_due, next_time_to_run)`,
+        """Returns tuple of two items ``(is_due, next_time_to_run)``,
         where next time to run is in seconds.
 
         See :meth:`celery.schedules.schedule.is_due` for more information.

+ 5 - 3
celery/task/http.py

@@ -89,9 +89,11 @@ class MutableURL(object):
 
     Supports editing the query parameter list.
     You can convert the object back to a string, the query will be
-    properly urlencoded.
+    properly URL-encoded.
 
-    Examples
+    Examples:
+
+    .. code-block:: pycon
 
         >>> url = URL('http://www.google.com:6580/foo/bar?x=3&y=4#foo')
         >>> url.query
@@ -177,7 +179,7 @@ def dispatch(self, url=None, method='GET', **kwargs):
     .. attribute:: url
 
         If this is set, this is used as the default URL for requests.
-        Default is to require the user of the task to supply the url as an
+        Default is to require the user of the task to supply the URL as an
         argument, as this attribute is intended for subclasses.
 
     .. attribute:: method

+ 0 - 1
celery/tests/app/test_app.py

@@ -426,7 +426,6 @@ class test_App(AppCase):
         from celery.app.task import Task
 
         class adX(Task):
-            abstract = True
 
             def run(self, y, z, x):
                 return y, z, x

+ 1 - 1
celery/utils/__init__.py

@@ -208,7 +208,7 @@ def isatty(fh):
 
 
 def cry(out=None, sepchr='=', seplen=49):  # pragma: no cover
-    """Return stacktrace of all active threads,
+    """Return stack-trace of all active threads,
     taken from https://gist.github.com/737056."""
     import threading
 

+ 1 - 1
celery/utils/debug.py

@@ -111,7 +111,7 @@ def memdump(samples=10, file=None):  # pragma: no cover
 def sample(x, n, k=0):
     """Given a list `x` a sample of length ``n`` of that list is returned.
 
-    E.g. if `n` is 10, and `x` has 100 items, a list of every 10th
+    E.g. if `n` is 10, and `x` has 100 items, a list of every tenth
     item is returned.
 
     ``k`` can be used as offset.

+ 1 - 1
celery/utils/dispatch/saferef.py

@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 """
-"Safe weakrefs", originally from pyDispatcher.
+"Safe weakrefs", originally from :pypi:`pyDispatcher`.
 
 Provides a way to safely weakref any function, including bound methods (which
 aren't handled by the core weakref module).

+ 1 - 1
celery/utils/dispatch/signal.py

@@ -59,7 +59,7 @@ class Signal(object):  # pragma: no cover
         :param receiver: A function or an instance method which is to
             receive signals. Receivers must be hashable objects.
 
-            if weak is :const:`True`, then receiver must be weak-referencable
+            if weak is :const:`True`, then receiver must be weak-referenceable
             (more precisely :func:`saferef.safe_ref()` must be able to create a
             reference to the receiver).
 

+ 9 - 4
celery/utils/functional.py

@@ -88,10 +88,11 @@ def evaluate_promises(it):
 
 
 def first(predicate, it):
-    """Return the first element in `iterable` that `predicate` Gives a
+    """Return the first element in ``iterable`` that ``predicate`` gives a
     :const:`True` value for.
 
-    If `predicate` is None it will return the first item that is not None.
+    If ``predicate`` is :const:`None`, it will return the first item that is not
+    :const:`None`.
 
     """
     return next(
@@ -127,7 +128,9 @@ def firstmethod(method, on_call=None):
 def chunks(it, n):
     """Split an iterator into chunks with `n` elements each.
 
-    Examples
+    Examples:
+
+    .. code-block:: pycon
 
         # n == 2
         >>> x = chunks(iter([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]), 2)
@@ -149,6 +152,8 @@ def padlist(container, size, default=None):
 
     Examples:
 
+    .. code-block:: pycon
+
         >>> first, last, city = padlist(['George', 'Costanza', 'NYC'], 3)
         ('George', 'Costanza', 'NYC')
         >>> first, last, city = padlist(['George', 'Costanza'], 3)
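The two helpers documented above are small enough to sketch in full (simplified re-implementations for illustration, not the exact library code):

```python
from itertools import islice

def chunks(it, n):
    """Yield successive lists of up to ``n`` items from an iterator."""
    it = iter(it)
    while True:
        chunk = list(islice(it, n))
        if not chunk:
            return
        yield chunk

def padlist(container, size, default=None):
    """Pad a list to ``size`` with ``default``, truncating if longer."""
    return list(container)[:size] + [default] * (size - len(container))
```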
@@ -175,7 +180,7 @@ def uniq(it):
 
 
 def regen(it):
-    """Regen takes any iterable, and if the object is an
+    """``Regen`` takes any iterable, and if the object is an
     generator it will cache the evaluated list on first access,
     so that the generator can be "consumed" multiple times."""
     if isinstance(it, (list, tuple)):

+ 9 - 7
celery/utils/iso8601.py

@@ -1,10 +1,11 @@
-"""Originally taken from pyiso8601 (http://code.google.com/p/pyiso8601/)
+"""Originally taken from :pypi:`pyiso8601`
+(http://code.google.com/p/pyiso8601/)
 
-Modified to match the behavior of dateutil.parser:
+Modified to match the behavior of ``dateutil.parser``:
 
-    - raise ValueError instead of ParseError
-    - return naive datetimes by default
-    - uses pytz.FixedOffset
+    - raise :exc:`ValueError` instead of ``ParseError``
+    - return naive :class:`~datetime.datetime` by default
+    - uses :class:`pytz.FixedOffset`
 
 This is the original License:
 
@@ -14,7 +15,7 @@ Permission is hereby granted, free of charge, to any person obtaining a
 copy of this software and associated documentation files (the
 "Software"), to deal in the Software without restriction, including
 without limitation the rights to use, copy, modify, merge, publish,
-distribute, sublicense, and/or sell copies of the Software, and to
+distribute, sub-license, and/or sell copies of the Software, and to
 permit persons to whom the Software is furnished to do so, subject to
 the following conditions:
 
@@ -52,7 +53,8 @@ TIMEZONE_REGEX = re.compile(
 
 
 def parse_iso8601(datestring):
-    """Parse and convert ISO 8601 string into a datetime object"""
+    """Parse and convert ISO-8601 string into a
+    :class:`~datetime.datetime` object"""
     m = ISO8601_REGEX.match(datestring)
     if not m:
         raise ValueError('unable to parse date string %r' % datestring)
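A reduced sketch of that regex-based approach, raising :exc:`ValueError` on bad input as described above (the hypothetical pattern here covers only the common ``YYYY-MM-DDTHH:MM:SS`` form, unlike the full grammar):

```python
import re
from datetime import datetime

SIMPLE_ISO8601 = re.compile(
    r'(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})'
    r'[T ](?P<hour>\d{2}):(?P<minute>\d{2}):(?P<second>\d{2})$'
)

def parse_simple_iso8601(datestring):
    """Parse a plain ISO-8601 date-time into a naive datetime."""
    m = SIMPLE_ISO8601.match(datestring)
    if not m:
        # ValueError instead of a custom ParseError, per the note above.
        raise ValueError('unable to parse date string %r' % datestring)
    return datetime(*(int(g) for g in m.groups()))
```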

+ 3 - 3
celery/utils/objects.py

@@ -22,9 +22,9 @@ def mro_lookup(cls, attr, stop=set(), monkey_patched=[]):
     """Return the first node by MRO order that defines an attribute.
 
     :keyword stop: A list of types that if reached will stop the search.
-    :keyword monkey_patched: Use one of the stop classes if the attr's
-        module origin is not in this list, this to detect monkey patched
-        attributes.
+    :keyword monkey_patched: Use one of the stop classes if the
+        attribute's module origin is not in this list, used to detect
+        monkey-patched attributes.
 
     :returns None: if the attribute was not found.
 

+ 2 - 2
celery/utils/saferepr.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-    celery.utils.saferepr
-    ~~~~~~~~~~~~~~~~~~~~~
+    ``celery.utils.saferepr``
+    ~~~~~~~~~~~~~~~~~~~~~~~~~
 
     Streaming, truncating, non-recursive version of :func:`repr`.
 

+ 1 - 1
celery/utils/serialization.py

@@ -41,7 +41,7 @@ def subclass_exception(name, parent, module):  # noqa
 
 def find_pickleable_exception(exc, loads=pickle.loads,
                               dumps=pickle.dumps):
-    """With an exception instance, iterate over its super classes (by mro)
+    """With an exception instance, iterate over its super classes (by MRO)
     and find the first super exception that is pickleable.  It does
     not go below :exc:`Exception` (i.e. it skips :exc:`Exception`,
     :class:`BaseException` and :class:`object`).  If that happens

+ 4 - 4
celery/utils/threads.py

@@ -261,11 +261,11 @@ class _LocalStack(object):
 class LocalManager(object):
     """Local objects cannot manage themselves. For that you need a local
     manager.  You can pass a local manager multiple locals or add them
-    later by appending them to `manager.locals`.  Everytime the manager
-    cleans up it, will clean up all the data left in the locals for this
+    later by appending them to ``manager.locals``.  Every time the manager
+    cleans up, it will clean up all the data left in the locals for this
     context.
 
-    The `ident_func` parameter can be added to override the default ident
+    The ``ident_func`` parameter can be added to override the default ident
     function for the wrapped locals.
 
     """
@@ -294,7 +294,7 @@ class LocalManager(object):
     def cleanup(self):
         """Manually clean up the data in the locals for this context.
 
-        Call this at the end of the request or use `make_middleware()`.
+        Call this at the end of the request or use ``make_middleware()``.
 
         """
         for local in self.locals:

+ 20 - 15
celery/utils/timeutils.py

@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """
-    celery.utils.timeutils
-    ~~~~~~~~~~~~~~~~~~~~~~
+    ``celery.utils.timeutils``
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
     This module contains various utilities related to dates and times.
 
@@ -158,19 +158,23 @@ timezone = _Zone()
 
 
 def maybe_timedelta(delta):
-    """Coerces integer to timedelta if `delta` is an integer."""
+    """Coerces integer to :class:`~datetime.timedelta` if argument
+    is an integer."""
     if isinstance(delta, numbers.Real):
         return timedelta(seconds=delta)
     return delta
 
 
 def delta_resolution(dt, delta):
-    """Round a datetime to the resolution of a timedelta.
+    """Round a :class:`~datetime.datetime` to the resolution of
+    a :class:`~datetime.timedelta`.
 
-    If the timedelta is in days, the datetime will be rounded
-    to the nearest days, if the timedelta is in hours the datetime
-    will be rounded to the nearest hour, and so on until seconds
-    which will just return the original datetime.
+    If the :class:`~datetime.timedelta` is in days, the
+    :class:`~datetime.datetime` will be rounded to the nearest days,
+    if the :class:`~datetime.timedelta` is in hours the
+    :class:`~datetime.datetime` will be rounded to the nearest hour,
+    and so on until seconds which will just return the original
+    :class:`~datetime.datetime`.
 
     """
     delta = max(delta.total_seconds(), 0)
@@ -187,7 +191,8 @@ def delta_resolution(dt, delta):
 
 
 def remaining(start, ends_in, now=None, relative=False):
-    """Calculate the remaining time for a start date and a timedelta.
+    """Calculate the remaining time for a start date and a
+    :class:`~datetime.timedelta`.
 
     e.g. "how many seconds left for 30 seconds after start?"
 
@@ -257,7 +262,7 @@ def humanize_seconds(secs, prefix='', sep='', now='now'):
 
 
 def maybe_iso8601(dt):
-    """`Either datetime | str -> datetime or None -> None`"""
+    """Either ``datetime | str -> datetime`` or ``None -> None``"""
     if not dt:
         return
     if isinstance(dt, datetime):
@@ -266,13 +271,13 @@ def maybe_iso8601(dt):
 
 
 def is_naive(dt):
-    """Return :const:`True` if the datetime is naive
+    """Return :const:`True` if the :class:`~datetime.datetime` is naive
     (does not have timezone information)."""
     return dt.tzinfo is None or dt.tzinfo.utcoffset(dt) is None
 
 
 def make_aware(dt, tz):
-    """Sets the timezone for a datetime object."""
+    """Sets the timezone for a :class:`~datetime.datetime` object."""
     try:
         _localize = tz.localize
     except AttributeError:
@@ -287,7 +292,7 @@ def make_aware(dt, tz):
 
 
 def localize(dt, tz):
-    """Convert aware datetime to another timezone."""
+    """Convert aware :class:`~datetime.datetime` to another timezone."""
     dt = dt.astimezone(tz)
     try:
         _normalize = tz.normalize
@@ -304,7 +309,7 @@ def localize(dt, tz):
 
 
 def to_utc(dt):
-    """Converts naive datetime to UTC"""
+    """Converts naive :class:`~datetime.datetime` to UTC"""
     return make_aware(dt, timezone.utc)
 
 
@@ -318,7 +323,7 @@ def maybe_make_aware(dt, tz=None):
 
 @python_2_unicode_compatible
 class ffwd(object):
-    """Version of relativedelta that only supports addition."""
+    """Version of ``dateutil.relativedelta`` that only supports addition."""
 
     def __init__(self, year=None, month=None, weeks=0, weekday=None, day=None,
                  hour=None, minute=None, second=None, microsecond=None,
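The rounding behavior documented for ``delta_resolution`` above can be illustrated with a standalone approximation (a sketch of the described semantics, not the actual ``celery.utils.timeutils`` code):

```python
from datetime import datetime, timedelta


def delta_resolution(dt, delta):
    # Round dt down to the resolution of delta: a delta in days rounds
    # to the nearest day, in hours to the nearest hour, in minutes to
    # the nearest minute; smaller deltas return dt unchanged.
    seconds = max(delta.total_seconds(), 0)
    resolutions = ((3, lambda x: x / 86400),   # keep year, month, day
                   (4, lambda x: x / 3600),    # ... and hour
                   (5, lambda x: x / 60))      # ... and minute
    args = dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second
    for res, predicate in resolutions:
        if predicate(seconds) >= 1.0:
            return datetime(*args[:res])
    return dt


dt = datetime(2016, 3, 15, 10, 42, 31)
print(delta_resolution(dt, timedelta(days=2)))     # 2016-03-15 00:00:00
print(delta_resolution(dt, timedelta(hours=2)))    # 2016-03-15 10:00:00
print(delta_resolution(dt, timedelta(seconds=2)))  # unchanged
```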

+ 3 - 2
celery/worker/autoreload.py

@@ -1,9 +1,10 @@
 # -*- coding: utf-8 -*-
 """
-    celery.worker.autoreload
-    ~~~~~~~~~~~~~~~~~~~~~~~~
+    ``celery.worker.autoreload``
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
     This module implements automatic module reloading
+
 """
 from __future__ import absolute_import, unicode_literals
 

+ 29 - 25
docs/configuration.rst

@@ -516,7 +516,7 @@ Can be one of the following:
     See :ref:`conf-redis-result-backend`.
 
 * ``cache``
-    Use `memcached`_ to store the results.
+    Use `Memcached`_ to store the results.
     See :ref:`conf-cache-result-backend`.
 
 * ``mongodb``
@@ -557,7 +557,7 @@ Can be one of the following:
     you only receive the same result once.  See :doc:`userguide/calling`).
 
 .. _`SQLAlchemy`: http://sqlalchemy.org
-.. _`memcached`: http://memcached.org
+.. _`Memcached`: http://memcached.org
 .. _`MongoDB`: http://mongodb.org
 .. _`Redis`: http://redis.io
 .. _`Cassandra`: http://cassandra.apache.org/
@@ -604,8 +604,8 @@ Default is to expire after 1 day.
 
 .. note::
 
-    For the moment this only works with the amqp, database, cache, redis and MongoDB
-    backends.
+    For the moment this only works with the AMQP, database, cache,
+    Redis and MongoDB backends.
 
     When using the database or MongoDB backends, `celery beat` must be
     running for the results to be expired.
@@ -747,13 +747,13 @@ Cache backend settings
     The cache backend supports the :pypi:`pylibmc` and `python-memcached`
     libraries.  The latter is used only if :pypi:`pylibmc` is not installed.
 
-Using a single memcached server:
+Using a single Memcached server:
 
 .. code-block:: python
 
     result_backend = 'cache+memcached://127.0.0.1:11211/'
 
-Using multiple memcached servers:
+Using multiple Memcached servers:
 
 .. code-block:: python
 
@@ -893,7 +893,7 @@ This is a dict supporting the following keys:
 
 * ``options``
 
-    Additional keyword arguments to pass to the mongodb connection
+    Additional keyword arguments to pass to the MongoDB connection
     constructor.  See the :pypi:`pymongo` docs to see a list of arguments
     supported.
 
@@ -1000,7 +1000,9 @@ AuthProvider class within ``cassandra.auth`` module to use.  Values can be
 ``cassandra_auth_kwargs``
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Named arguments to pass into the auth provider. e.g.::
+Named arguments to pass into the authentication provider, for example:
+
+.. code-block:: python
 
     cassandra_auth_kwargs = {
        'username': 'cassandra',
@@ -1044,7 +1046,7 @@ Riak backend settings
     The Riak backend requires the :pypi:`riak` library:
     http://pypi.python.org/pypi/riak/
 
-    To install the riak package use `pip` or `easy_install`:
+    To install the :pypi:`riak` package use :command:`pip` or :command:`easy_install`:
 
     .. code-block:: console
 
@@ -1076,9 +1078,9 @@ The fields of the URL are defined as follows:
 #. ``bucket``
 
     Bucket name to use. Default is `celery`.
-    The bucket needs to be a string with ascii characters only.
+    The bucket needs to be a string with ASCII characters only.
 
-Altenatively, this backend can be configured with the following configuration directives.
+Alternatively, this backend can be configured with the following configuration directives.
 
 .. setting:: riak_backend_settings
 
@@ -1089,7 +1091,7 @@ This is a dict supporting the following keys:
 
 * ``host``
 
-    The host name of the Riak server. Defaults to "localhost".
+    The host name of the Riak server. Defaults to ``"localhost"``.
 
 * ``port``
 
@@ -1140,14 +1142,16 @@ Couchbase backend settings
     The Couchbase backend requires the :pypi:`couchbase` library:
     https://pypi.python.org/pypi/couchbase
 
-    To install the couchbase package use `pip` or `easy_install`:
+    To install the :pypi:`couchbase` package use :command:`pip` or :command:`easy_install`:
 
     .. code-block:: console
 
         $ pip install couchbase
 
 This backend can be configured via the :setting:`result_backend`
-set to a couchbase URL::
+set to a Couchbase URL:
+
+.. code-block:: python
 
     result_backend = 'couchbase://username:password@host:port/bucket'
 
@@ -1189,14 +1193,14 @@ CouchDB backend settings
     The CouchDB backend requires the :pypi:`pycouchdb` library:
     https://pypi.python.org/pypi/pycouchdb
 
-    To install the couchbase package use `pip` or `easy_install`:
+    To install the :pypi:`pycouchdb` package use :command:`pip` or :command:`easy_install`:
 
     .. code-block:: console
 
         $ pip install pycouchdb
 
 This backend can be configured via the :setting:`result_backend`
-set to a couchdb URL::
+set to a CouchDB URL::
 
     result_backend = 'couchdb://username:password@host:port/container'
 
@@ -1282,7 +1286,7 @@ This backend can be configured using a file URL, for example::
 
     CELERY_RESULT_BACKEND = 'file:///var/celery/results'
 
-The configured directory needs to be shared and writeable by all servers using
+The configured directory needs to be shared and writable by all servers using
 the backend.
 
 If you are trying Celery on a single system you can simply use the backend
@@ -1311,7 +1315,7 @@ the :ref:`automatic routing facilities <routing-automatic>`.
 If you really want to configure advanced routing, this setting should
 be a list of :class:`kombu.Queue` objects the worker will consume from.
 
-Note that workers can be overriden this setting via the
+Note that workers can override this setting via the
 :option:`-Q <celery worker -Q>` option, or individual queues from this
 list (by name) can be excluded using the :option:`-X <celery worker -X>`
 option.
@@ -1713,7 +1717,7 @@ The maximum number of connections that can be open in the connection pool.
 
 The pool is enabled by default since version 2.5, with a default limit of ten
 connections.  This number can be tweaked depending on the number of
-threads/greenthreads (eventlet/gevent) using a connection.  For example
+threads/green-threads (eventlet/gevent) using a connection.  For example
 running eventlet with 1000 greenlets that use a connection to the broker,
 contention can arise and you should consider increasing the limit.
 
@@ -2009,7 +2013,7 @@ The default is 2 seconds.
 ~~~~~~~~~~~~~~~~~
 .. versionadded:: 4.0
 
-Charset for outgoing emails. Default is 'utf-8'.
+Character set for outgoing emails. Default is ``"utf-8"``.
 
 .. _conf-example-error-mail-config:
 
@@ -2089,7 +2093,7 @@ Disabled by default.
 Expiry time in seconds (int/float) after which a monitor client's
 event queue will be deleted (``x-expires``).
 
-Default is never, relying on the queue autodelete setting.
+Default is never, relying on the queue auto-delete setting.
 
 .. setting:: event_serializer
 
@@ -2219,7 +2223,7 @@ used to sign messages when :ref:`message-signing` is used.
 .. versionadded:: 2.5
 
 The directory containing X.509 certificates used for
-:ref:`message-signing`.  Can be a glob with wildcards,
+:ref:`message-signing`.  Can be a glob with wild-cards,
 (for example :file:`/etc/certs/*.pem`).
 
 .. _conf-custom-components:
@@ -2269,7 +2273,7 @@ Default is ``celery.worker.autoscale:Autoscaler``.
 ``worker_autoreloader``
 ~~~~~~~~~~~~~~~~~~~~~~~
 
-Name of the autoreloader class used by the worker to reload
+Name of the auto-reloader class used by the worker to reload
 Python modules and files that have changed.
 
 Default is: ``celery.worker.autoreload:Autoreloader``.
@@ -2288,8 +2292,8 @@ Default is :class:`celery.worker.consumer.Consumer`
 ~~~~~~~~~~~~~~~~
 
 Name of the ETA scheduler class used by the worker.
-Default is :class:`kombu.async.hub.timer.Timer`, or one overrided
-by the pool implementation.
+Default is :class:`kombu.async.hub.timer.Timer`, or set by the
+pool implementation.
 
 .. _conf-celerybeat:
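Several of the settings discussed above can be combined in one configuration module. The fragment below is illustrative only; the values are taken from the examples on this page (the ``event_queue_expires`` value is an arbitrary example), not recommendations:

```python
# Illustrative configuration fragment; values come from the examples
# on this page and are not recommendations.
result_backend = 'cache+memcached://127.0.0.1:11211/'
result_expires = 86400       # results expire after 1 day (the default)
broker_pool_limit = 10       # default pool limit of ten connections
event_queue_expires = 60.0   # ``x-expires`` for monitor event queues, in seconds
```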
 

+ 30 - 26
docs/contributing.rst

@@ -209,10 +209,10 @@ spelling or other errors on the website/docs/code.
        * Enable celery's :ref:`breakpoint signal <breakpoint_signal>` and use it
          to inspect the process's state.  This will allow you to open a
          :mod:`pdb` session.
-       * Collect tracing data using strace_(Linux), dtruss (OSX) and ktrace(BSD),
-         ltrace_ and lsof_.
+       * Collect tracing data using `strace`_ (Linux), :command:`dtruss` (OSX),
+         and :command:`ktrace` (BSD), `ltrace`_ and `lsof`_.
 
-    D) Include the output from the `celery report` command:
+    D) Include the output from the :command:`celery report` command:
 
         .. code-block:: console
 
@@ -254,11 +254,11 @@ issue tracker.
 If you are unsure of the origin of the bug you can ask the
 :ref:`mailing-list`, or just use the Celery issue tracker.
 
-Contributors guide to the codebase
-==================================
+Contributors guide to the code base
+===================================
 
 There's a separate section for internal details,
-including details about the codebase and a style guide.
+including details about the code base and a style guide.
 
 Read :ref:`internals-guide` for more!
 
@@ -269,7 +269,7 @@ Versions
 
 Version numbers consist of a major version, a minor version, and a release number.
 Since version 2.1.0 we use the versioning semantics described by
-semver: http://semver.org.
+SemVer: http://semver.org.
 
 Stable releases are published at PyPI
 while development releases are only available in the GitHub git repository as tags.
@@ -351,17 +351,17 @@ An archived version is named ``X.Y-archived``.
 
 Our currently archived branches are:
 
-* 2.5-archived
+* :github_branch:`2.5-archived`
 
-* 2.4-archived
+* :github_branch:`2.4-archived`
 
-* 2.3-archived
+* :github_branch:`2.3-archived`
 
-* 2.1-archived
+* :github_branch:`2.1-archived`
 
-* 2.0-archived
+* :github_branch:`2.0-archived`
 
-* 1.0-archived
+* :github_branch:`1.0-archived`
 
 Feature branches
 ----------------
@@ -400,7 +400,7 @@ Forking and setting up the repository
 -------------------------------------
 
 First you need to fork the Celery repository, a good introduction to this
-is in the Github Guide: `Fork a Repo`_.
+is in the GitHub Guide: `Fork a Repo`_.
 
 After you have cloned the repository you should checkout your copy
 to a directory on your machine:
@@ -428,7 +428,7 @@ always use the ``--rebase`` option to ``git pull``:
 With this option you don't clutter the history with merging
 commit notes. See `Rebasing merge commits in git`_.
 If you want to learn more about rebasing see the `Rebase`_
-section in the Github guides.
+section in the GitHub guides.
 
 If you need to work on a different branch than ``master`` you can
 fetch and checkout a remote branch like this::
@@ -505,7 +505,7 @@ When your feature/bugfix is complete you may want to submit
 a pull request so that it can be reviewed by the maintainers.
 
 Creating pull requests is easy, and also lets you track the progress
-of your contribution.  Read the `Pull Requests`_ section in the Github
+of your contribution.  Read the `Pull Requests`_ section in the GitHub
 Guide to learn how this is done.
 
 You can also attach pull requests to existing issues by following
@@ -739,14 +739,14 @@ is following the conventions.
 
     * Python standard library (`import xxx`)
     * Python standard library (`from xxx import`)
-    * Third party packages.
+    * Third-party packages.
     * Other modules from the current package.
 
     or in case of code using Django:
 
     * Python standard library (`import xxx`)
     * Python standard library (`from xxx import`)
-    * Third party packages.
+    * Third-party packages.
     * Django packages.
     * Other modules from the current package.
 
@@ -766,7 +766,7 @@ is following the conventions.
         from .five import zip_longest, items, range
         from .utils import timeutils
 
-* Wildcard imports must not be used (`from xxx import *`).
+* Wild-card imports must not be used (`from xxx import *`).
 
 * For distributions where Python 2.5 is the oldest supported version
   additional rules apply:
@@ -814,12 +814,14 @@ Some features like a new result backend may require additional libraries
 that the user must install.
 
 We use setuptools `extras_require` for this, and all new optional features
-that require 3rd party libraries must be added.
+that require third-party libraries must be added.
 
 1) Add a new requirements file in `requirements/extras`
 
     E.g. for the Cassandra backend this is
-    :file:`requirements/extras/cassandra.txt`, and the file looks like this::
+    :file:`requirements/extras/cassandra.txt`, and the file looks like this:
+
+    .. code-block:: text
 
         pycassa
 
@@ -827,6 +829,8 @@ that require 3rd party libraries must be added.
     multiple packages are separated by newline.  A more complex example could
     be:
 
+    .. code-block:: text
+
         # pycassa 2.0 breaks Foo
         pycassa>=1.0,<2.0
         thrift
@@ -834,14 +838,14 @@ that require 3rd party libraries must be added.
 2) Modify ``setup.py``
 
     After the requirements file is added you need to add it as an option
-    to ``setup.py`` in the ``extras_require`` section::
+    to :file:`setup.py` in the ``extras_require`` section::
 
         extra['extras_require'] = {
             # ...
             'cassandra': extras('cassandra.txt'),
         }
 
-3) Document the new feature in ``docs/includes/installation.txt``
+3) Document the new feature in :file:`docs/includes/installation.txt`
 
     You must add your feature to the list in the :ref:`bundles` section
     of :file:`docs/includes/installation.txt`.
@@ -857,10 +861,10 @@ that require 3rd party libraries must be added.
 
 That's all that needs to be done, but remember that if your feature
 adds additional configuration options then these need to be documented
-in ``docs/configuration.rst``.  Also all settings need to be added to the
-``celery/app/defaults.py`` module.
+in :file:`docs/configuration.rst`.  Also all settings need to be added to the
+:file:`celery/app/defaults.py` module.
 
-Result backends require a separate section in the ``docs/configuration.rst``
+Result backends require a separate section in the :file:`docs/configuration.rst`
 file.
 
 .. _contact_information:

+ 2 - 2
docs/django/first-steps-with-django.rst

@@ -217,13 +217,13 @@ Starting the worker process
 In a production environment you will want to run the worker in the background
 as a daemon - see :ref:`daemonizing` - but for testing and
 development it is useful to be able to start a worker instance by using the
-``celery worker`` manage command, much as you would use Django's runserver:
+:program:`celery worker` manage command, much as you would use Django's
+:command:`manage.py runserver`:
 
 .. code-block:: console
 
     $ celery -A proj worker -l info
 
-
 For a complete listing of the command-line options available,
 use the help command:
 

+ 16 - 13
docs/faq.rst

@@ -88,7 +88,7 @@ Kombu is part of the Celery ecosystem and is the library used
 to send and receive messages.  It is also the library that enables
 us to support many different message brokers.  It is also used by the
 OpenStack project, and many others, validating the choice to separate
-it from the Celery codebase.
+it from the Celery code-base.
 
 .. _`kombu`: http://pypi.python.org/pypi/kombu
 
@@ -296,7 +296,7 @@ I'm having `IntegrityError: Duplicate Key` errors. Why?
 ---------------------------------------------------------
 
 **Answer:** See `MySQL is throwing deadlock errors, what can I do?`_.
-Thanks to howsthedotcom.
+Thanks to :github_user:`@howsthedotcom`.
 
 .. _faq-worker-stops-processing:
 
@@ -370,7 +370,7 @@ all configured task queues:
 
     $ celery -A proj purge
 
-or programatically:
+or programmatically:
 
 .. code-block:: pycon
 
@@ -571,17 +571,20 @@ The connection pool is enabled by default since version 2.5.
 
 .. _faq-sudo-subprocess:
 
-Sudo in a :mod:`subprocess` returns :const:`None`
--------------------------------------------------
+:command:`sudo` in a :mod:`subprocess` returns :const:`None`
+------------------------------------------------------------
+
+There is a :command:`sudo` configuration option that makes it illegal
+for processes without a tty to run :command:`sudo`:
 
-There is a sudo configuration option that makes it illegal for process
-without a tty to run sudo::
+.. code-block:: text
 
     Defaults requiretty
 
 If you have this configuration in your :file:`/etc/sudoers` file then
-tasks will not be able to call sudo when the worker is running as a daemon.
-If you want to enable that, then you need to remove the line from sudoers.
+tasks will not be able to call :command:`sudo` when the worker is
+running as a daemon.  If you want to enable that, then you need to remove
+the line from :file:`/etc/sudoers`.
 
 See: http://timelordz.com/wiki/Apache_Sudo_Commands
 
@@ -782,7 +785,7 @@ Should I use retry or acks_late?
 to use both.
 
 `Task.retry` is used to retry tasks, notably for expected errors that
-is catchable with the :keyword:`try` block. The AMQP transaction is not used
+are catch-able with the :keyword:`try` block. The AMQP transaction is not used
 for these errors: **if the task raises an exception it is still acknowledged!**
 
 The `acks_late` setting would be used when you need the task to be
@@ -849,14 +852,14 @@ but this also means that a ``WorkerLostError`` state will be set for the
 task so the task will not run again.
 
 Identifying the type of process is easier if you have installed the
-``setproctitle`` module:
+:pypi:`setproctitle` module:
 
 .. code-block:: console
 
     $ pip install setproctitle
 
-With this library installed you will be able to see the type of process in ps
-listings, but the worker must be restarted for this to take effect.
+With this library installed you will be able to see the type of process in
+:command:`ps` listings, but the worker must be restarted for this to take effect.
 
 .. seealso::
 

+ 7 - 3
docs/getting-started/brokers/couchdb.rst

@@ -30,16 +30,20 @@ Configuration
 =============
 
 Configuration is easy, set the transport, and configure the location of
-your CouchDB database::
+your CouchDB database:
+
+.. code-block:: python
 
     broker_url = 'couchdb://localhost:5984/database_name'
 
-Where the URL is in the format of::
+Where the URL is in the format of:
+
+.. code-block:: text
 
     couchdb://userid:password@hostname:port/database_name
 
 The host name will default to ``localhost`` and the port to 5984,
-and so they are optional.  userid and password are also optional,
+and so they are optional.  ``userid`` and ``password`` are also optional,
 but needed if your CouchDB server requires authentication.
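The URL fields and their defaults described above can be checked with a quick standard-library parse (Kombu does its own URL handling; the host name below is a made-up example):

```python
from urllib.parse import urlsplit

url = 'couchdb://bob:secret@couch.example.com:5984/database_name'
parts = urlsplit(url)

host = parts.hostname or 'localhost'  # host name defaults to localhost
port = parts.port or 5984             # port defaults to 5984
database = parts.path.lstrip('/')

print(host, port, database)             # couch.example.com 5984 database_name
print(parts.username, parts.password)   # bob secret
```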
 
 .. _couchdb-results-configuration:

+ 1 - 1
docs/getting-started/brokers/index.rst

@@ -64,7 +64,7 @@ individual transport (see :ref:`broker_toc`).
 +---------------+--------------+----------------+--------------------+
 | *SQLAlchemy*  | Experimental | No             | No                 |
 +---------------+--------------+----------------+--------------------+
-| *Iron MQ*     | 3rd party    | No             | No                 |
+| *Iron MQ*     | third-party  | No             | No                 |
 +---------------+--------------+----------------+--------------------+
 
 Experimental brokers may be functional but they do not have

+ 34 - 12
docs/getting-started/brokers/ironmq.rst

@@ -9,62 +9,84 @@
 Installation
 ============
 
-For IronMQ support, you'll need the [iron_celery](https://github.com/iron-io/iron_celery) library:
+For IronMQ support, you'll need the :pypi:`iron_celery` library:
 
 .. code-block:: console
 
     $ pip install iron_celery
 
-As well as an [Iron.io account](http://www.iron.io). Sign up for free at [iron.io](http://www.iron.io).
+As well as an `Iron.io account <Iron.io_>`_. Sign up for free at `Iron.io`_.
+
+
+.. _Iron.io: http://www.iron.io/
 
 .. _broker-ironmq-configuration:
 
 Configuration
 =============
 
-First, you'll need to import the iron_celery library right after you import Celery, for example::
+First, you'll need to import the :pypi:`iron_celery` library right after you
+import Celery, for example:
+
+.. code-block:: python
 
     from celery import Celery
     import iron_celery
 
     app = Celery('mytasks', broker='ironmq://', backend='ironcache://')
 
-You have to specify IronMQ in the broker URL::
+You have to specify IronMQ in the broker URL:
+
+.. code-block:: python
 
     broker_url = 'ironmq://ABCDEFGHIJKLMNOPQRST:ZYXK7NiynGlTogH8Nj+P9nlE73sq3@'
 
-where the URL format is::
+where the URL format is:
+
+.. code-block:: text
 
     ironmq://project_id:token@
 
 you must *remember to include the "@" at the end*.
 
 The login credentials can also be set using the environment variables
-:envvar:`IRON_TOKEN` and :envvar:`IRON_PROJECT_ID`, which are set automatically if you use the IronMQ Heroku add-on.
-And in this case the broker url may only be::
+:envvar:`IRON_TOKEN` and :envvar:`IRON_PROJECT_ID`, which are set automatically
+if you use the IronMQ Heroku add-on.  And in this case the broker URL may only be:
+
+.. code-block:: text
 
     ironmq://
 
 Clouds
 ------
 
-The default cloud/region is ``AWS us-east-1``. You can choose the IronMQ Rackspace (ORD) cloud by changing the URL to::
+The default cloud/region is ``AWS us-east-1``. You can choose the IronMQ Rackspace (ORD)
+cloud by changing the URL to:
+
+.. code-block:: text
 
     ironmq://project_id:token@mq-rackspace-ord.iron.io
 
 Results
 =======
 
-You can store results in IronCache with the same Iron.io credentials, just set the results URL with the same syntax
-as the broker URL, but changing the start to ``ironcache``::
+You can store results in IronCache with the same Iron.io credentials,
+just set the results URL with the same syntax
+as the broker URL, but changing the start to ``ironcache``:
+
+.. code-block:: text
 
     ironcache://project_id:token@
 
-This will default to a cache named "Celery", if you want to change that::
+This will default to a cache named "Celery"; if you want to change that:
+
+.. code-block:: text
 
     ironcache://project_id:token@/awesomecache
 
 More Information
 ================
 
-You can find more information in the [iron_celery README](https://github.com/iron-io/iron_celery).
+You can find more information in the `iron_celery README`_.
+
+.. _iron_celery README: https://github.com/iron-io/iron_celery/
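Why the trailing ``@`` in the broker URL matters: without it, the project id and token are parsed as a host name instead of credentials. A quick standard-library illustration (how Kombu itself parses the URL may differ):

```python
from urllib.parse import urlsplit

with_at = urlsplit('ironmq://project_id:token@')
without_at = urlsplit('ironmq://project_id:token')

# With the trailing "@", the fields land in username/password as intended.
print(with_at.username, with_at.password)  # project_id token

# Without it, "project_id" is mistaken for the host name.
print(without_at.username)                 # None
print(without_at.hostname)                 # project_id
```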

+ 7 - 3
docs/getting-started/brokers/mongodb.rst

@@ -30,16 +30,20 @@ Configuration
 =============
 
 Configuration is easy, set the transport, and configure the location of
-your MongoDB database::
+your MongoDB database:
+
+.. code-block:: python
 
     broker_url = 'mongodb://localhost:27017/database_name'
 
-Where the URL is in the format of::
+Where the URL is in the format of:
+
+.. code-block:: text
 
     mongodb://userid:password@hostname:port/database_name
 
 The host name will default to ``localhost`` and the port to 27017,
-and so they are optional.  userid and password are also optional,
+and so they are optional.  ``userid`` and ``password`` are also optional,
 but needed if your MongoDB server requires authentication.
 
 .. _mongodb-results-configuration:

+ 6 - 6
docs/getting-started/brokers/rabbitmq.rst

@@ -78,14 +78,14 @@ Installing RabbitMQ on OS X
 The easiest way to install RabbitMQ on OS X is using `Homebrew`_, the new and
 shiny package management system for OS X.
 
-First, install homebrew using the one-line command provided by the `Homebrew
+First, install Homebrew using the one-line command provided by the `Homebrew
 documentation`_:
 
 .. code-block:: console
 
     ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"
 
-Finally, we can install rabbitmq using :command:`brew`:
+Finally, we can install RabbitMQ using :command:`brew`:
 
 .. code-block:: console
 
@@ -96,8 +96,8 @@ Finally, we can install rabbitmq using :command:`brew`:
 
 .. _rabbitmq-osx-system-hostname:
 
-After you've installed rabbitmq with :command:`brew` you need to add the following to
-your path to be able to start and stop the broker: add it to the startup file for your
+After you've installed RabbitMQ with :command:`brew` you need to add the following to
+your path to be able to start and stop the broker: add it to the start-up file for your
 shell (e.g. :file:`.bash_profile` or :file:`.profile`).
 
 .. code-block:: bash
@@ -122,8 +122,8 @@ back into an IP address::
 
     127.0.0.1       localhost myhost myhost.local
 
-If you start the rabbitmq server, your rabbit node should now be `rabbit@myhost`,
-as verified by :command:`rabbitmqctl`:
+If you start the :command:`rabbitmq-server`, your rabbit node should now
+be `rabbit@myhost`, as verified by :command:`rabbitmqctl`:
 
 .. code-block:: console
 

+ 6 - 6
docs/getting-started/brokers/redis.rst

@@ -35,16 +35,16 @@ Where the URL is in the format of:
 
     redis://:password@hostname:port/db_number
 
-all fields after the scheme are optional, and will default to localhost on port 6379,
-using database 0.
+all fields after the scheme are optional, and will default to ``localhost``
+on port 6379, using database 0.
 
-If a unix socket connection should be used, the URL needs to be in the format:
+If a Unix socket connection should be used, the URL needs to be in the format:
 
 .. code-block:: text
 
     redis+socket:///path/to/redis.sock
 
-Specifying a different database number when using a unix socket is possible
+Specifying a different database number when using a Unix socket is possible
 by adding the ``virtual_host`` parameter to the URL:
 
 .. code-block:: text
@@ -168,5 +168,5 @@ If you experience an error like:
     InconsistencyError: Probably the key ('_kombu.binding.celery') has been
     removed from the Redis database.
 
-then you may want to configure the redis-server to not evict keys by setting
-the ``timeout`` parameter to 0 in the redis configuration file.
+then you may want to configure the :command:`redis-server` to not evict keys
+by setting the ``timeout`` parameter to 0 in the Redis configuration file.
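The ``virtual_host`` parameter described above rides in the URL's query string, so it can be read back like any other query parameter. This is just a sketch with the standard library (Kombu does its own parsing, and the exact URL shape is the one shown in the docs above):

```python
from urllib.parse import urlsplit, parse_qs

url = 'redis+socket:///path/to/redis.sock?virtual_host=2'
parts = urlsplit(url)

print(parts.path)  # /path/to/redis.sock

# Database number defaults to 0 when no virtual_host parameter is given.
db = int(parse_qs(parts.query).get('virtual_host', ['0'])[0])
print(db)  # 2
```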

+ 3 - 1
docs/getting-started/brokers/sqlalchemy.rst

@@ -22,7 +22,9 @@ Configuration
 =============
 
 Celery needs to know the location of your database, which should be the usual
-SQLAlchemy connection string, but with 'sqla+' prepended to it::
+SQLAlchemy connection string, but with the ``sqla+`` prefix added:
+
+.. code-block:: python
 
     broker_url = 'sqla+sqlite:///celerydb.sqlite'
 

+ 4 - 2
docs/getting-started/brokers/sqs.rst

@@ -34,7 +34,9 @@ You have to specify SQS in the broker URL::
 
     broker_url = 'sqs://ABCDEFGHIJKLMNOPQRST:ZYXK7NiynGlTogH8Nj+P9nlE73sq3@'
 
-where the URL format is::
+where the URL format is:
+
+.. code-block:: text
 
     sqs://aws_access_key_id:aws_secret_access_key@
 
@@ -42,7 +44,7 @@ you must *remember to include the "@" at the end*.
 
 The login credentials can also be set using the environment variables
 :envvar:`AWS_ACCESS_KEY_ID` and :envvar:`AWS_SECRET_ACCESS_KEY`,
-in that case the broker url may only be ``sqs://``.
+in that case the broker URL may only be ``sqs://``.
 
 .. note::
 

+ 5 - 5
docs/getting-started/introduction.rst

@@ -119,7 +119,7 @@ Celery is…
         - **Brokers**
 
             - :ref:`RabbitMQ <broker-rabbitmq>`, :ref:`Redis <broker-redis>`,
-            - :ref:`MongoDB <broker-mongodb>` (exp), ZeroMQ (exp)
+            - :ref:`MongoDB <broker-mongodb>` (exp),
             - :ref:`CouchDB <broker-couchdb>` (exp), :ref:`SQLAlchemy <broker-sqlalchemy>` (exp)
            - :ref:`Django ORM <broker-django>` (exp), :ref:`Amazon SQS <broker-sqs>` (exp),
             - and more…
@@ -133,7 +133,7 @@ Celery is…
         - **Result Stores**
 
             - AMQP, Redis
-            - memcached, MongoDB
+            - Memcached, MongoDB
             - SQLAlchemy, Django ORM
             - Apache Cassandra, IronCache, Elasticsearch
 
@@ -180,7 +180,7 @@ Features
             You can specify the time to run a task in seconds or a
            :class:`~datetime.datetime`, or you can use
             periodic tasks for recurring events based on a
-            simple interval, or crontab expressions
+            simple interval, or Crontab expressions
             supporting minute, hour, day of week, day of month, and
             month of year.
 
@@ -257,8 +257,8 @@ database connections at :manpage:`fork(2)`.
 .. _`Tornado`: http://www.tornadoweb.org/
 .. _`tornado-celery`: https://github.com/mher/tornado-celery/
 
-Quickjump
-=========
+Quick Jump
+==========
 
 .. topic:: I want to ⟶
 

+ 1 - 1
docs/getting-started/next-steps.rst

@@ -238,7 +238,7 @@ If none of these are found it'll try a submodule named ``proj.celery``:
 
 4) an attribute named ``proj.celery.app``, or
 5) an attribute named ``proj.celery.celery``, or
-6) Any atribute in the module ``proj.celery`` where the value is a Celery
+6) Any attribute in the module ``proj.celery`` where the value is a Celery
    application.
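The fallback in step 6 — scanning a module's attributes for a Celery application — can be sketched like this (a standalone illustration with a stand-in ``Celery`` class and a made-up ``find_app`` helper, not the actual discovery code):

```python
import types


class Celery(object):
    """Stand-in for celery.Celery, just for this illustration."""


def find_app(module):
    """Return the first attribute whose value is a Celery application."""
    for name in dir(module):
        value = getattr(module, name)
        if isinstance(value, Celery):
            return value
    raise AttributeError('no Celery application found in %r' % module)


mod = types.ModuleType('proj.celery')
mod.some_app = Celery()
print(find_app(mod) is mod.some_app)  # True
```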
 
 This scheme mimics the practices used in the documentation,

+ 66 - 56
docs/history/changelog-1.0.rst

@@ -74,7 +74,7 @@ Changes
   if events are disabled
 
 * Added required RPM package names under `[bdist_rpm]` section, to support building RPMs
-  from the sources using setup.py
+  from the sources using :file:`setup.py`.
 
 * Running unit tests: :envvar:`NOSE_VERBOSE` environment var now enables verbose output from Nose.
 
@@ -129,15 +129,20 @@ Important notes
 
     See: http://bit.ly/d5OwMr
 
-    This means those who created their celery tables (via syncdb or
-    celeryinit) with picklefield versions >= 0.1.5 has to alter their tables to
+    This means those who created their celery tables (via ``syncdb`` or
+    ``celeryinit``) with :pypi:`django-picklefield`
+    versions >= 0.1.5 have to alter their tables to
     allow the result field to be `NULL` manually.
 
-    MySQL::
+    MySQL:
+
+    .. code-block:: sql
 
         ALTER TABLE celery_taskmeta MODIFY result TEXT NULL
 
-    PostgreSQL::
+    PostgreSQL:
+
+    .. code-block:: sql
 
         ALTER TABLE celery_taskmeta ALTER COLUMN result DROP NOT NULL
 
@@ -167,16 +172,16 @@ News
         crashes in mid-execution. Not acceptable for most
         applications, but desirable for others.
 
-* Added crontab-like scheduling to periodic tasks.
+* Added Crontab-like scheduling to periodic tasks.
 
-    Like a cron job, you can specify units of time of when
+    Like a cron job, you can specify units of time of when
     you would like the task to execute. While not a full implementation
-    of cron's features, it should provide a fair degree of common scheduling
+    of :command:`cron`'s features, it should provide a fair degree of common scheduling
     needs.
 
     You can specify a minute (0-59), an hour (0-23), and/or a day of the
-    week (0-6 where 0 is Sunday, or by names: sun, mon, tue, wed, thu, fri,
-    sat).
+    week (0-6 where 0 is Sunday, or by names:
+    ``sun, mon, tue, wed, thu, fri, sat``).
 
     Examples:
 
@@ -198,7 +203,7 @@ News
             print('Runs every hour on the clock. e.g. 1:30, 2:30, 3:30 etc.')
 
     .. note::
-        This a late addition. While we have unittests, due to the
+        This is a late addition. While we have unit tests, due to the
         nature of this feature we haven't been able to completely test this
         in practice, so consider this experimental.
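The minute/hour/day-of-week matching described above can be sketched as a small stand-alone function. This is illustrative only, not Celery's implementation; note that cron counts Sunday as day 0 while ``datetime.weekday()`` counts Monday as 0:

```python
from datetime import datetime

# Toy matcher for the Crontab-style fields described above: minute (0-59),
# hour (0-23) and day of week (0-6 where 0 is Sunday, or a name like 'mon').
DAY_NAMES = {'sun': 0, 'mon': 1, 'tue': 2, 'wed': 3,
             'thu': 4, 'fri': 5, 'sat': 6}

def _as_day(value):
    # Accept either a day name or a 0-6 integer (0 = Sunday).
    return DAY_NAMES[value] if isinstance(value, str) else value

def matches(when, minute=None, hour=None, day_of_week=None):
    """Return True if the datetime *when* matches the given fields.

    ``None`` means "any value", like an omitted Crontab field.
    """
    # datetime.weekday(): Monday=0 ... Sunday=6; cron wants Sunday=0.
    dow = (when.weekday() + 1) % 7
    return ((minute is None or when.minute == minute) and
            (hour is None or when.hour == hour) and
            (day_of_week is None or dow == _as_day(day_of_week)))

# Half past every hour, any day:
print(matches(datetime(2010, 2, 1, 13, 30), minute=30))   # True
# 07:30 on Mondays only (2010-02-01 was a Monday):
print(matches(datetime(2010, 2, 1, 7, 30),
              minute=30, hour=7, day_of_week='mon'))      # True
```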
 
@@ -209,7 +214,7 @@ News
 
 * `Task.max_retries` can now be `None`, which means it will retry forever.
 
-* Celerybeat: Now reuses the same connection when publishing large
+* ``celerybeat``: Now reuses the same connection when publishing large
   sets of tasks.
 
 * Modified the task locking example in the documentation to use
@@ -422,15 +427,15 @@ Fixes
     Consider the competition for the first pool plug-in started!
 
 
-* Debian init scripts: Use `-a` not `&&` (Issue #82).
+* Debian init-scripts: Use `-a` not `&&` (Issue #82).
 
-* Debian init scripts: Now always preserves `$CELERYD_OPTS` from the
+* Debian init-scripts: Now always preserves `$CELERYD_OPTS` from the
   `/etc/default/celeryd` and `/etc/default/celerybeat`.
 
 * celery.beat.Scheduler: Fixed a bug where the schedule was not properly
   flushed to disk if the schedule had not been properly initialized.
 
-* celerybeat: Now syncs the schedule to disk when receiving the :sig:`SIGTERM`
+* ``celerybeat``: Now syncs the schedule to disk when receiving the :sig:`SIGTERM`
   and :sig:`SIGINT` signals.
 
 * Control commands: Make sure keyword arguments are not in Unicode.
@@ -438,7 +443,7 @@ Fixes
 * ETA scheduler: Was missing a logger object, so the scheduler crashed
   when trying to log that a task had been revoked.
 
-* management.commands.camqadm: Fixed typo `camqpadm` -> `camqadm`
+* ``management.commands.camqadm``: Fixed typo `camqpadm` -> `camqadm`
   (Issue #83).
 
 * PeriodicTask.delta_resolution: Was not working for days and hours, now fixed
@@ -460,8 +465,8 @@ Fixes
 * Tasks are now acknowledged early instead of late.
 
     This is done because messages can only be acknowledged within the same
-    connection channel, so if the connection is lost we would have to refetch
-    the message again to acknowledge it.
+    connection channel, so if the connection is lost we would have to
+    re-fetch the message again to acknowledge it.
 
     This might or might not affect you, but mostly those running tasks with a
     really long execution time are affected, as all tasks that have made it
@@ -494,7 +499,7 @@ Fixes
 
     You can set the maximum number of results the cache
     can hold using the :setting:`CELERY_MAX_CACHED_RESULTS` setting (the
-    default is five thousand results). In addition, you can refetch already
+    default is five thousand results). In addition, you can re-fetch already
     retrieved results using `backend.reload_task_result` +
     `backend.reload_taskset_result` (that's for those who want to send
     results incrementally).
@@ -587,7 +592,7 @@ Fixes
   in celerymon)
 
 * Added `--schedule`/`-s` option to the worker, so it is possible to
-  specify a custom schedule filename when using an embedded celerybeat
+  specify a custom schedule filename when using an embedded ``celerybeat``
   server (the `-B`/`--beat`) option.
 
 * Better Python 2.4 compatibility. The test suite now passes.
@@ -612,7 +617,7 @@ Fixes
 * Now have our own `ImproperlyConfigured` exception, instead of using the
   Django one.
 
-* Improvements to the Debian init scripts: Shows an error if the program is
+* Improvements to the Debian init-scripts: Shows an error if the program is
   not executable.  Does not modify `CELERYD` when using django with
   virtualenv.
 
@@ -630,24 +635,24 @@ Backward incompatible changes
 
 * Celery does not support detaching anymore, so you have to use the tools
   available on your platform, or something like :pypi:`supervisor` to make
-  celeryd/celerybeat/celerymon into background processes.
+  ``celeryd``/``celerybeat``/``celerymon`` into background processes.
 
     We've had too many problems with the worker daemonizing itself, so it was
     decided it had to be removed. Example start-up scripts have been added to
     the `extra/` directory:
 
-    * Debian, Ubuntu, (start-stop-daemon)
+    * Debian, Ubuntu, (:command:`start-stop-daemon`)
 
         `extra/debian/init.d/celeryd`
         `extra/debian/init.d/celerybeat`
 
-    * Mac OS X launchd
+    * Mac OS X :command:`launchd`
 
         `extra/mac/org.celeryq.celeryd.plist`
         `extra/mac/org.celeryq.celerybeat.plist`
         `extra/mac/org.celeryq.celerymon.plist`
 
-    * Supervisord (http://supervisord.org)
+    * Supervisor (http://supervisord.org)
 
         `extra/supervisord/supervisord.conf`
 
@@ -709,7 +714,7 @@ Backward incompatible changes
     This means the worker no longer schedules periodic tasks by default,
     but a new daemon has been introduced: `celerybeat`.
 
-    To launch the periodic task scheduler you have to run celerybeat:
+    To launch the periodic task scheduler you have to run ``celerybeat``:
 
     .. code-block:: console
 
@@ -780,12 +785,12 @@ Deprecations
 * The following configuration variables have been renamed and will be
   deprecated in v2.0:
 
-    * CELERYD_DAEMON_LOG_FORMAT -> CELERYD_LOG_FORMAT
-    * CELERYD_DAEMON_LOG_LEVEL -> CELERYD_LOG_LEVEL
-    * CELERY_AMQP_CONNECTION_TIMEOUT -> CELERY_BROKER_CONNECTION_TIMEOUT
-    * CELERY_AMQP_CONNECTION_RETRY -> CELERY_BROKER_CONNECTION_RETRY
-    * CELERY_AMQP_CONNECTION_MAX_RETRIES -> CELERY_BROKER_CONNECTION_MAX_RETRIES
-    * SEND_CELERY_TASK_ERROR_EMAILS -> CELERY_SEND_TASK_ERROR_EMAILS
+    * ``CELERYD_DAEMON_LOG_FORMAT`` -> ``CELERYD_LOG_FORMAT``
+    * ``CELERYD_DAEMON_LOG_LEVEL`` -> ``CELERYD_LOG_LEVEL``
+    * ``CELERY_AMQP_CONNECTION_TIMEOUT`` -> ``CELERY_BROKER_CONNECTION_TIMEOUT``
+    * ``CELERY_AMQP_CONNECTION_RETRY`` -> ``CELERY_BROKER_CONNECTION_RETRY``
+    * ``CELERY_AMQP_CONNECTION_MAX_RETRIES`` -> ``CELERY_BROKER_CONNECTION_MAX_RETRIES``
+    * ``SEND_CELERY_TASK_ERROR_EMAILS`` -> ``CELERY_SEND_TASK_ERROR_EMAILS``
 
 * The public API names in celery.conf have also changed to a consistent naming
   scheme.
@@ -870,9 +875,10 @@ News
 Changes
 -------
 
-* Now depends on carrot >= 0.8.1
+* Now depends on :pypi:`carrot` >= 0.8.1
 
-* New dependencies: billiard, python-dateutil, django-picklefield
+* New dependencies: :pypi:`billiard`, :pypi:`python-dateutil`,
+  :pypi:`django-picklefield`.
 
 * No longer depends on python-daemon
 
@@ -961,7 +967,7 @@ Documentation
 * Now emits a warning if the --detach argument is used.
   --detach should not be used anymore, as it has several not easily fixed
   bugs related to it. Instead, use something like start-stop-daemon,
-  :pypi:`supervisor` or launchd (os x).
+  :pypi:`supervisor` or :command:`launchd` (OS X).
 
 
 * Make sure logger class is process aware, even if running Python >= 2.6.
@@ -979,8 +985,9 @@ Documentation
 * Fixed a possible race condition that could happen when storing/querying
   task results using the database backend.
 
-* Now has console script entry points in the setup.py file, so tools like
-  Buildout will correctly install the programs celeryd and celeryinit.
+* Now has console script entry points in the :file:`setup.py` file, so tools like
+  :pypi:`zc.buildout` will correctly install the programs ``celeryd`` and
+  ``celeryinit``.
 
 .. _version-0.8.2:
 
@@ -1061,12 +1068,13 @@ Changes
 
 * Added a Redis result store backend
 
-* Allow /etc/default/celeryd to define additional options for the celeryd init
-  script.
+* Allow :file:`/etc/default/celeryd` to define additional options
+  for the ``celeryd`` init-script.
 
 * MongoDB periodic tasks issue when using different time than UTC fixed.
 
-* Windows specific: Negate test for available os.fork (thanks miracle2k)
+* Windows specific: Negate test for available ``os.fork``
+  (thanks :github_user:`miracle2k`).
 
 * Now tries to handle broken PID files.
 
@@ -1074,9 +1082,9 @@ Changes
   `CELERY_ALWAYS_EAGER = True` for testing with the database backend.
 
 * Added a :setting:`CELERY_CACHE_BACKEND` setting for using something other
-  than the django-global cache backend.
+  than the Django-global cache backend.
 
-* Use custom implementation of functools.partial (curry) for Python 2.4 support
+* Use custom implementation of ``functools.partial`` for Python 2.4 support
   (Probably still problems with running on 2.4, but it will eventually be
   supported)
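A custom ``functools.partial`` replacement of the kind mentioned above can be written in a few lines. This sketch is illustrative, not Celery's actual Python 2.4 shim:

```python
# Minimal stand-in for ``functools.partial``: capture positional and
# keyword arguments now, merge in the rest at call time.
def partial(fun, *args, **kwargs):
    def inner(*more_args, **more_kwargs):
        merged = dict(kwargs)
        merged.update(more_kwargs)
        return fun(*(args + more_args), **merged)
    return inner

add2 = partial(lambda a, b: a + b, 2)
print(add2(3))  # 5
```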
 
@@ -1191,7 +1199,7 @@ News
     detaching.
 
 * Fixed a possible DjangoUnicodeDecodeError being raised when saving pickled
-    data to Django`s memcached cache backend.
+    data to Django's Memcached cache backend.
 
 * Better Windows compatibility.
 
@@ -1230,7 +1238,7 @@ News
 * Add a sensible __repr__ to ExceptionInfo for easier debugging
 
 * Fix documentation typo `.. import map` -> `.. import dmap`.
-    Thanks to mikedizon
+    Thanks to :github_user:`mikedizon`.
 
 .. _version-0.6.0:
 
@@ -1411,18 +1419,18 @@ News
 
 * Refactored `celery.task`. It's now split into three modules:
 
-    * celery.task
+    * ``celery.task``
 
         Contains `apply_async`, `delay_task`, `discard_all`, and task
         shortcuts, plus imports objects from `celery.task.base` and
         `celery.task.builtins`
 
-    * celery.task.base
+    * ``celery.task.base``
 
         Contains task base classes: `Task`, `PeriodicTask`,
         `TaskSet`, `AsynchronousMapTask`, `ExecuteRemoteTask`.
 
-    * celery.task.builtins
+    * ``celery.task.builtins``
 
         Built-in tasks: `PingTask`, `DeleteExpiredTaskMetaTask`.
 
@@ -1441,15 +1449,15 @@ News
   available on the system.
 
 * **IMPORTANT** `tasks.register`: Renamed `task_name` argument to
-  `name`, so
+  `name`, so::
 
         >>> tasks.register(func, task_name='mytask')
 
-  has to be replaced with:
+  has to be replaced with::
 
         >>> tasks.register(func, name='mytask')
 
-* The daemon now correctly runs if the pidlock is stale.
+* The daemon now correctly runs if the pidfile is stale.
 
 * Now compatible with carrot 0.4.5
 
@@ -1474,7 +1482,7 @@ News
 * No longer depends on `django`, so installing `celery` won't affect
   the preferred Django version installed.
 
-* Now works with PostgreSQL (psycopg2) again by registering the
+* Now works with PostgreSQL (:pypi:`psycopg2`) again by registering the
   `PickledObject` field.
 
 * Worker: Added `--detach` option as an alias to `--daemon`, and
@@ -1488,7 +1496,7 @@ News
 * Removed dependency to `simplejson`
 
 * Cache Backend: Re-establishes connection for every task process
-  if the Django cache backend is memcached/libmemcached.
+  if the Django cache backend is :pypi:`python-memcached`/:pypi:`libmemcached`.
 
 * Tyrant Backend: Now re-establishes the connection for every task
   executed.
@@ -1542,7 +1550,7 @@ News
 **VERY IMPORTANT:** Pickle is now the encoder used for serializing task
 arguments, so be sure to flush your task queue before you upgrade.
 
-* **IMPORTANT** TaskSet.run() now returns a celery.result.TaskSetResult
+* **IMPORTANT** TaskSet.run() now returns a ``celery.result.TaskSetResult``
   instance, which lets you inspect the status and return values of a
   taskset as if it were a single entity.
 
@@ -1581,7 +1589,7 @@ arguments, so be sure to flush your task queue before you upgrade.
   :ref:`FAQ <faq>` for more information.
 
 * Task errors are now logged using log level `ERROR` instead of `INFO`,
-  and stacktraces are dumped. Thanks to Grégoire Cachet.
+  and stack-traces are dumped. Thanks to Grégoire Cachet.
 
 * Make every new worker process re-establish its Django DB connection,
   thus solving the "MySQL connection died?" exceptions.
@@ -1714,10 +1722,12 @@ arguments, so be sure to flush your task queue before you upgrade.
   happened.  It kind of works like the `multiprocessing.AsyncResult`
   class returned by `multiprocessing.Pool.map_async`.
 
-* Added dmap() and dmap_async(). This works like the
+* Added ``dmap()`` and ``dmap_async()``. This works like the
   `multiprocessing.Pool` versions except they are tasks
   distributed to the celery server. Example:
 
+    .. code-block:: pycon
+
         >>> from celery.task import dmap
         >>> import operator
         >>> dmap(operator.add, [[2, 2], [4, 4], [8, 8]])
@@ -1834,7 +1844,7 @@ arguments, so be sure to flush your task queue before you upgrade.
 
         >>> url(r'^celery/$', include('celery.urls'))
 
-  then visiting the following url:
+  then visiting the following URL:
 
   .. code-block:: text
 

+ 12 - 12
docs/history/changelog-2.0.rst

@@ -81,7 +81,7 @@ Fixes
 * Worker: A warning is now emitted if the sending of task error
   emails fails.
 
-* celeryev: Curses monitor no longer crashes if the terminal window
+* ``celeryev``: Curses monitor no longer crashes if the terminal window
   is resized.
 
     See issue #160.
@@ -105,11 +105,11 @@ Fixes
     This is now fixed by using a workaround.
     See issue #143.
 
-* Debian init scripts: Commands should not run in a sub shell
+* Debian init-scripts: Commands should not run in a sub shell
 
     See issue #163.
 
-* Debian init scripts: Use the absolute path of celeryd program to allow stat
+* Debian init-scripts: Use the absolute path of ``celeryd`` program to allow stat
 
     See issue #162.
 
@@ -146,7 +146,7 @@ Documentation
 
     to `CELERYD_LOG_FILE` / `CELERYD_PID_FILE`
 
-    Also added troubleshooting section for the init scripts.
+    Also added troubleshooting section for the init-scripts.
 
 .. _version-2.0.2:
 
@@ -162,7 +162,7 @@ Documentation
 
 * Test suite now passing on Python 2.4
 
-* No longer have to type `PYTHONPATH=.` to use celeryconfig in the current
+* No longer have to type `PYTHONPATH=.` to use ``celeryconfig`` in the current
   directory.
 
     This is accomplished by the default loader ensuring that the current
@@ -181,7 +181,7 @@ Documentation
 
 * Worker: SIGHUP handler accidentally propagated to worker pool processes.
 
-    In combination with 7a7c44e39344789f11b5346e9cc8340f5fe4846c
+    In combination with :sha:`7a7c44e39344789f11b5346e9cc8340f5fe4846c`
     this would make each child process start a new worker instance when
     the terminal window was closed :/
 
@@ -199,7 +199,7 @@ Documentation
 
     See issue #154.
 
-* Debian worker init script: Stop now works correctly.
+* Debian worker init-script: Stop now works correctly.
 
 * Task logger: `warn` method added (synonym for `warning`)
 
@@ -326,7 +326,7 @@ Documentation
 
 * Task.__reduce__: Tasks created using the task decorator can now be pickled.
 
-* setup.py: nose added to `tests_require`.
+* :file:`setup.py`: :pypi:`nose` added to `tests_require`.
 
 * Pickle should now work with SQLAlchemy 0.5.x
 
@@ -721,7 +721,7 @@ News
 * Worker: :kbd:`Control-c` (SIGINT) once does warm shutdown,
   hitting :kbd:`Control-c` twice forces termination.
 
-* Added support for using complex crontab-expressions in periodic tasks. For
+* Added support for using complex Crontab-expressions in periodic tasks. For
   example, you can now use:
 
     .. code-block:: pycon
@@ -810,7 +810,7 @@ News
         exception will be raised when this is exceeded.  The task can catch
         this to e.g. clean up before the hard time limit comes.
 
-    New command-line arguments to celeryd added:
+    New command-line arguments to ``celeryd`` added:
     `--time-limit` and `--soft-time-limit`.
 
     What's left?
@@ -910,7 +910,7 @@ News
 
 * :class:`~celery.datastructures.ExceptionInfo` now passed to
    :meth:`~celery.task.base.Task.on_retry`/
-   :meth:`~celery.task.base.Task.on_failure` as einfo keyword argument.
+   :meth:`~celery.task.base.Task.on_failure` as ``einfo`` keyword argument.
 
 * Worker: Added :setting:`CELERYD_MAX_TASKS_PER_CHILD` /
   :option:`celery worker --maxtasksperchild`
@@ -1035,7 +1035,7 @@ News
             celeryd -n celeryd1.worker.example.com -c 3
             celeryd -n celeryd2.worker.example.com -c 3
 
-        Additional options are added to each celeryd',
+        Additional options are added to each ``celeryd``,
         but you can also modify the options for ranges of or single workers
 
     - 3 workers: Two with 3 processes, and one with 10 processes.

+ 25 - 25
docs/history/changelog-2.1.rst

@@ -82,8 +82,8 @@ Documentation
     This means :program:`celeryev` and friends find workers immediately
     at start-up.
 
-* celeryev cursesmon: Set screen_delay to 10ms, so the screen refreshes more
-  often.
+* ``celeryev`` curses monitor: Set ``screen_delay`` to 10ms, so the screen
+  refreshes more often.
 
 * Fixed pickling errors when pickling :class:`AsyncResult` on older Python
   versions.
@@ -108,7 +108,7 @@ Fixes
 * worker: Now honors ignore result for
   :exc:`~@WorkerLostError` and timeout errors.
 
-* celerybeat: Fixed :exc:`UnboundLocalError` in celerybeat logging
+* ``celerybeat``: Fixed :exc:`UnboundLocalError` in ``celerybeat`` logging
   when using logging setup signals.
 
 * worker: All log messages now includes `exc_info`.
@@ -127,7 +127,7 @@ Fixes
 
 * Now working on Windows again.
 
-   Removed dependency on the pwd/grp modules.
+   Removed dependency on the :mod:`pwd`/:mod:`grp` modules.
 
 * snapshots: Fixed race condition leading to loss of events.
 
@@ -161,12 +161,12 @@ Fixes
     positional arguments in the future, so please do not depend on this
     behavior.
 
-* celerybeat: Now respects routers and task execution options again.
+* ``celerybeat``: Now respects routers and task execution options again.
 
-* celerybeat: Now reuses the publisher instead of the connection.
+* ``celerybeat``: Now reuses the publisher instead of the connection.
 
 * Cache result backend: Using :class:`float` as the expires argument
-  to `cache.set` is deprecated by the memcached libraries,
+  to `cache.set` is deprecated by the Memcached libraries,
   so we now automatically cast to :class:`int`.
 
 * unit tests: No longer emits logging and warnings in test output.
@@ -204,7 +204,7 @@ News
 * New remote control commands: `add_consumer` and `cancel_consumer`.
 
     .. method:: add_consumer(queue, exchange, exchange_type, routing_key,
-                             **options)
+                             \*\*options)
         :module:
 
         Tells the worker to declare and consume from the specified
@@ -220,7 +220,7 @@ News
     :class:`~celery.task.control.inspect`.
 
 
-    Example using celeryctl to start consuming from queue "queue", in
+    Example using ``celeryctl`` to start consuming from queue "queue", in
     exchange "exchange", of type "direct" using binding key "key":
 
     .. code-block:: console
@@ -245,11 +245,11 @@ News
 
         >>> inspect.cancel_consumer('queue')
 
-* celerybeat: Now logs the traceback if a message can't be sent.
+* ``celerybeat``: Now logs the traceback if a message can't be sent.
 
-* celerybeat: Now enables a default socket timeout of 30 seconds.
+* ``celerybeat``: Now enables a default socket timeout of 30 seconds.
 
-* README/introduction/homepage: Added link to `Flask-Celery`_.
+* ``README``/introduction/homepage: Added link to `Flask-Celery`_.
 
 .. _`Flask-Celery`: https://github.com/ask/flask-celery
 
@@ -325,7 +325,7 @@ News
         CELERY_AMQP_TASK_RESULT_EXPIRES = 30 * 60  # 30 minutes.
         CELERY_AMQP_TASK_RESULT_EXPIRES = 0.80     # 800 ms.
 
-* celeryev: Event Snapshots
+* ``celeryev``: Event Snapshots
 
     If enabled, the worker sends messages about what the worker is doing.
     These messages are called "events".
@@ -364,7 +364,7 @@ News
     There's also a Debian init.d script for :mod:`~celery.bin.events` available,
     see :ref:`daemonizing` for more information.
 
-    New command-line arguments to celeryev:
+    New command-line arguments to ``celeryev``:
 
         * :option:`celery events --camera`: Snapshot camera class to use.
         * :option:`celery events --logfile`: Log file
@@ -394,7 +394,7 @@ News
 * :func:`~celery.task.control.broadcast`: Added callback argument, this can be
   used to process replies immediately as they arrive.
 
-* celeryctl: New command line utility to manage and inspect worker nodes,
+* ``celeryctl``: New command line utility to manage and inspect worker nodes,
   apply tasks and inspect the results of tasks.
 
     .. seealso::
@@ -436,13 +436,13 @@ News
     =====================================  =====================================
     **Application**                        **Logger Name**
     =====================================  =====================================
-    `celeryd`                              "celery"
-    `celerybeat`                           "celery.beat"
-    `celeryev`                             "celery.ev"
+    ``celeryd``                            ``"celery"``
+    ``celerybeat``                         ``"celery.beat"``
+    ``celeryev``                           ``"celery.ev"``
     =====================================  =====================================
 
     This means that the `loglevel` and `logfile` arguments will
-    affect all registered loggers (even those from 3rd party libraries).
+    affect all registered loggers (even those from third-party libraries).
     Unless you configure the loggers manually as shown below, that is.
 
     *Users can choose to configure logging by subscribing to the
@@ -465,7 +465,7 @@ News
 
     Remember that the worker also redirects stdout and stderr
     to the celery logger, so if you configure logging manually
-    you also need to redirect the stdouts manually:
+    you also need to redirect the standard outs manually:
 
     .. code-block:: python
 
@@ -537,7 +537,7 @@ News
 * Added `Task.store_errors_even_if_ignored`, so it can be changed per Task,
   not just by the global setting.
 
-* The crontab scheduler no longer wakes up every second, but implements
+* The Crontab scheduler no longer wakes up every second, but implements
   `remaining_estimate` (*Optimization*).
 
 * worker:  Store :state:`FAILURE` result if the
@@ -557,7 +557,7 @@ News
       backend cleanup task can be easily changed.
 
     * The task is now run every day at 4:00 AM, rather than every day since
-      the first time it was run (using crontab schedule instead of
+      the first time it was run (using Crontab schedule instead of
       `run_every`)
 
     * Renamed `celery.task.builtins.DeleteExpiredTaskMetaTask`
@@ -568,8 +568,8 @@ News
 
     See issue #134.
 
-* Implemented `AsyncResult.forget` for sqla/cache/redis/tyrant backends.
-  (Forget and remove task result).
+* Implemented `AsyncResult.forget` for SQLAlchemy/Memcached/Redis/Tokyo Tyrant
+  backends.  (Forget and remove task result).
 
     See issue #184.
 
@@ -612,7 +612,7 @@ News
 
     See issue #164.
 
-* timedelta_seconds: Use `timedelta.total_seconds` if running on Python 2.7
+* ``timedelta_seconds``: Use ``timedelta.total_seconds`` if running on Python 2.7
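A sketch of such a ``timedelta_seconds`` helper, preferring ``timedelta.total_seconds`` when present (Python 2.7+) and computing the value manually otherwise. The clamping of negative deltas to zero is an assumption for illustration, not taken from this commit:

```python
from datetime import timedelta

def timedelta_seconds(delta):
    """Number of seconds in *delta*; negative deltas clamp to 0."""
    if hasattr(delta, 'total_seconds'):
        # Python 2.7+ fast path.
        return max(delta.total_seconds(), 0)
    # Pre-2.7 fallback: sum the fields by hand.
    return max(delta.days * 86400 + delta.seconds +
               delta.microseconds / 1e6, 0)

print(timedelta_seconds(timedelta(minutes=1, seconds=30)))  # 90.0
```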
 
 * :class:`~celery.datastructures.TokenBucket`: Generic Token Bucket algorithm
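The generic token-bucket idea referenced above can be sketched as follows; this toy class is illustrative and is not the ``celery.datastructures.TokenBucket`` implementation:

```python
import time

class TokenBucket:
    """Toy token bucket: ``rate`` tokens accrue per second, up to
    ``capacity`` (the maximum burst size)."""

    def __init__(self, rate, capacity):
        self.rate = float(rate)
        self.capacity = float(capacity)
        self._tokens = float(capacity)   # start full
        self._last = time.monotonic()

    def consume(self, tokens=1):
        """Take *tokens* from the bucket; return False if unavailable."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self._tokens = min(self.capacity,
                           self._tokens + (now - self._last) * self.rate)
        self._last = now
        if tokens <= self._tokens:
            self._tokens -= tokens
            return True
        return False

bucket = TokenBucket(rate=10, capacity=2)
print(bucket.consume())  # True: bucket starts full
print(bucket.consume())  # True
print(bucket.consume())  # False: burst of 2 exhausted
```

Calling ``consume()`` faster than tokens accrue makes it return ``False``, which is how a rate limiter decides to delay work.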
 

+ 45 - 41
docs/history/changelog-2.2.rst

@@ -70,17 +70,21 @@ Important Notes
 
 * Now depends on Kombu 1.1.2.
 
-* Dependency lists now explicitly specifies that we don't want python-dateutil
-  2.x, as this version only supports py3k.
+* Dependency lists now explicitly specify that we don't want
+  :pypi:`python-dateutil` 2.x, as this version only supports Python 3.
 
     If you have installed dateutil 2.0 by accident you should downgrade
-    to the 1.5.0 version::
+    to the 1.5.0 version:
 
-        pip install -U python-dateutil==1.5.0
+    .. code-block:: console
+
+        $ pip install -U python-dateutil==1.5.0
 
-    or by easy_install::
+    or by ``easy_install``:
+
+    .. code-block:: console
 
-        easy_install -U python-dateutil==1.5.0
+        $ easy_install -U python-dateutil==1.5.0
 
 .. _v226-fixes:
 
@@ -146,7 +150,7 @@ News
 .. _`logrotate.d`:
     http://www.ducea.com/2006/06/06/rotating-linux-log-files-part-2-logrotate/
 
-* otherqueues tutorial now documents how to configure Redis/Database result
+* ``otherqueues`` tutorial now documents how to configure Redis/Database result
    backends.
 
 * gevent: Now supports ETA tasks.
@@ -178,7 +182,7 @@ News
 * SQLAlchemy result backend: taskset_id and taskset_id columns now have a
   unique constraint.  (Tables need to be recreated for this to take effect).
 
-* Task Userguide: Added section about choosing a result backend.
+* Task user guide: Added section about choosing a result backend.
 
 * Removed unused attribute ``AsyncResult.uuid``.
 
@@ -206,10 +210,10 @@ Fixes
     that we haven't consumed from the result queue. It
     is unlikely we will receive any after 5 seconds with no worker processes).
 
-* celerybeat: Now creates pidfile even if the ``--detach`` option is not set.
+* ``celerybeat``: Now creates pidfile even if the ``--detach`` option is not set.
 
 * eventlet/gevent: The broadcast command consumer is now running in a separate
-  greenthread.
+  green-thread.
 
     This ensures broadcast commands will take priority even if there are many
     active tasks.
@@ -234,7 +238,7 @@ Fixes
 
 * ConfigurationView: ``iter(dict)`` should return keys, not items (Issue #362).
 
-* celerybeat:  PersistentScheduler now automatically removes a corrupted
+* ``celerybeat``:  PersistentScheduler now automatically removes a corrupted
   schedule file (Issue #346).
 
 * Programs that don't support positional command-line arguments now provide
@@ -270,7 +274,7 @@ Fixes
     disable the prefetch count, it is re-enabled as soon as the value is below
     the limit again.
 
-* cursesmon: Fixed unbound local error (Issue #303).
+* ``cursesmon``: Fixed unbound local error (Issue #303).
 
 * eventlet/gevent is now imported on demand so autodoc can import the modules
   without having eventlet/gevent installed.
@@ -283,17 +287,17 @@ Fixes
 * Cassandra Result Backend: Should now work with the latest ``pycassa``
   version.
 
-* multiprocessing.Pool: No longer cares if the putlock semaphore is released
+* multiprocessing.Pool: No longer cares if the ``putlock`` semaphore is released
   too many times. (this can happen if one or more worker processes are
   killed).
 
 * SQLAlchemy Result Backend: Now returns accidentally removed ``date_done`` again
   (Issue #325).
 
-* Task.request contex is now always initialized to ensure calling the task
+* Task.request context is now always initialized to ensure calling the task
   function directly works even if it actively uses the request context.
 
-* Exception occuring when iterating over the result from ``TaskSet.apply``
+* Exception occurring when iterating over the result from ``TaskSet.apply``
   fixed.
 
 * eventlet: Now properly schedules tasks with an ETA in the past.
@@ -329,7 +333,7 @@ Fixes
 
 * Fixed exception raised when iterating on the result of ``TaskSet.apply()``.
 
-* Tasks Userguide: Added section on choosing a result backend.
+* Tasks user guide: Added section on choosing a result backend.
 
 .. _version-2.2.3:
 
@@ -353,12 +357,12 @@ Fixes
 
 * Coloring of log messages broke if the logged object was not a string.
 
-* Fixed several typos in the init script documentation.
+* Fixed several typos in the init-script documentation.
 
 * A regression caused `Task.exchange` and `Task.routing_key` to no longer
   have any effect.  This is now fixed.
 
-* Routing Userguide: Fixes typo, routers in :setting:`CELERY_ROUTES` must be
+* Routing user guide: Fixes typo, routers in :setting:`CELERY_ROUTES` must be
   instances, not classes.
 
 * :program:`celeryev` did not create pidfile even though the
@@ -387,7 +391,7 @@ Fixes
 
     This ensures all queues are available for routing purposes.
 
-* celeryctl: Now supports the `inspect active_queues` command.
+* ``celeryctl``: Now supports the `inspect active_queues` command.
 
 .. _version-2.2.2:
 
@@ -401,7 +405,7 @@ Fixes
 Fixes
 -----
 
-* Celerybeat could not read the schedule properly, so entries in
+* ``celerybeat`` could not read the schedule properly, so entries in
   :setting:`CELERYBEAT_SCHEDULE` would not be scheduled.
 
 * Task error log message now includes `exc_info` again.
@@ -410,7 +414,7 @@ Fixes
 
     Previously it was overwritten by the countdown argument.
 
-* celery multi/celeryd_detach: Now logs errors occuring when executing
+* ``celery multi``/``celeryd_detach``: Now logs errors occurring when executing
   the `celery worker` command.
 
 * daemonizing tutorial: Fixed typo ``--time-limit 300`` ->
@@ -439,10 +443,10 @@ Fixes
 
 * ``BasePool.on_terminate`` stub did not exist
 
-* celeryd_detach: Adds readable error messages if user/group name does not
+* ``celeryd_detach``: Adds readable error messages if user/group name does not
    exist.
 
-* Smarter handling of unicode decod errors when logging errors.
+* Smarter handling of unicode decode errors when logging errors.
 
 .. _version-2.2.0:
 
@@ -469,7 +473,7 @@ Important Notes
     * Consistent error handling with introspection,
     * The ability to ensure that an operation is performed by gracefully
       handling connection and channel errors,
-    * Message compression (zlib, bzip2, or custom compression schemes).
+    * Message compression (:mod:`zlib`, :mod:`bz2`, or custom compression schemes).
 
     This means that `ghettoq` is no longer needed as the
     functionality it provided is already available in Celery by default.
@@ -482,7 +486,7 @@ Important Notes
 
 * Magic keyword arguments pending deprecation.
 
-    The magic keyword arguments were responsibile for many problems
+    The magic keyword arguments were responsible for many problems
     and quirks: notably issues with tasks and decorators, and name
     collisions in keyword arguments for the unaware.
 
@@ -708,8 +712,8 @@ Important Notes
 
     .. note::
 
-        The event exchange has been renamed from "celeryevent" to "celeryev"
-        so it does not collide with older versions.
+        The event exchange has been renamed from ``"celeryevent"``
+        to ``"celeryev"`` so it does not collide with older versions.
 
         If you would like to remove the old exchange you can do so
         by executing the following command:
@@ -746,7 +750,7 @@ Important Notes
 
 * Previously deprecated modules `celery.models` and
   `celery.management.commands` have now been removed as per the deprecation
-  timeline.
+  time-line.
 
 * [Security: Low severity] Removed `celery.task.RemoteExecuteTask` and
     accompanying functions: `dmap`, `dmap_async`, and `execute_remote`.
@@ -845,8 +849,8 @@ News
   :setting:`CELERY_MESSAGE_COMPRESSION` setting, or the `compression` argument
   to `apply_async`.  This can also be set using routers.
 
-* worker: Now logs stacktrace of all threads when receiving the
-   `SIGUSR1` signal.  (Does not work on cPython 2.4, Windows or Jython).
+* worker: Now logs stack-trace of all threads when receiving the
+   `SIGUSR1` signal.  (Does not work on CPython 2.4, Windows or Jython).
 
     Inspired by https://gist.github.com/737056
 
@@ -874,7 +878,7 @@ News
     multiple results at once, unlike `join()` which fetches the results
     one by one.
 
-    So far only supported by the AMQP result backend.  Support for memcached
+    So far only supported by the AMQP result backend.  Support for Memcached
     and Redis may be added later.
 
 * Improved implementations of `TaskSetResult.join` and `AsyncResult.wait`.
@@ -896,7 +900,7 @@ News
 
 * The following fields have been added to all events in the worker class:
 
-    * `sw_ident`: Name of worker software (e.g. py-celery).
+    * `sw_ident`: Name of worker software (e.g. ``"py-celery"``).
     * `sw_ver`: Software version (e.g. 2.2.0).
     * `sw_sys`: Operating System (e.g. Linux, Windows, Darwin).
 
@@ -947,11 +951,11 @@ News
 * Redis result backend: Removed deprecated settings `REDIS_TIMEOUT` and
   `REDIS_CONNECT_RETRY`.
 
-* CentOS init script for :program:`celery worker` now available in `extra/centos`.
+* CentOS init-script for :program:`celery worker` now available in `extra/centos`.
 
-* Now depends on `pyparsing` version 1.5.0 or higher.
+* Now depends on :pypi:`pyparsing` version 1.5.0 or higher.
 
-    There have been reported issues using Celery with pyparsing 1.4.x,
+    There have been reported issues using Celery with :pypi:`pyparsing` 1.4.x,
     so please upgrade to the latest version.
 
 * Lots of new unit tests written, now with a total coverage of 95%.
@@ -977,19 +981,19 @@ Fixes
 
 * Windows: worker: Show error if running with `-B` option.
 
-    Running celerybeat embedded is known not to work on Windows, so
-    users are encouraged to run celerybeat as a separate service instead.
+    Running ``celerybeat`` embedded is known not to work on Windows, so
+    users are encouraged to run ``celerybeat`` as a separate service instead.
 
 * Windows: Utilities no longer output ANSI color codes on Windows
 
-* camqadm: Now properly handles :kbd:`Control-c` by simply exiting instead
+* ``camqadm``: Now properly handles :kbd:`Control-c` by simply exiting instead
   of showing confusing traceback.
 
 * Windows: All tests are now passing on Windows.
 
-* Remove bin/ directory, and `scripts` section from setup.py.
+* Remove bin/ directory, and `scripts` section from :file:`setup.py`.
 
-    This means we now rely completely on setuptools entrypoints.
+    This means we now rely completely on setuptools entry-points.
 
 .. _v220-experimental:
 
@@ -1006,7 +1010,7 @@ Experimental
     multiple instances (e.g. using :program:`multi`).
 
     Sadly an initial benchmark seems to show a 30% performance decrease on
-    pypy-1.4.1 + JIT.  We would like to find out why this is, so stay tuned.
+    ``pypy-1.4.1`` + JIT.  We would like to find out why this is, so stay tuned.
 
 * :class:`PublisherPool`: Experimental pool of task publishers and
   connections to be used with the `retry` argument to `apply_async`.

+ 14 - 14
docs/history/changelog-2.3.rst

@@ -37,7 +37,7 @@ Fixes
 
 * Backported fix for #455 from 2.4 to 2.3.
 
-* Statedb was not saved at shutdown.
+* StateDB was not saved at shutdown.
 
 * Fixes worker sometimes hanging when hard time limit exceeded.
 
@@ -54,7 +54,7 @@ Fixes
   (Issue #477).
 
 * ``CELERYD`` option in :file:`/etc/default/celeryd` should not
-  be used with generic init scripts.
+  be used with generic init-scripts.
 
 
 .. _version-2.3.2:
@@ -118,7 +118,7 @@ Fixes
 
     growing and shrinking eventlet pools is still not supported.
 
-* py24 target removed from :file:`tox.ini`.
+* ``py24`` target removed from :file:`tox.ini`.
 
 
 .. _version-2.3.1:
@@ -140,7 +140,7 @@ Fixes
 2.3.0
 =====
 :release-date: 2011-08-05 12:00 P.M BST
-:tested: cPython: 2.5, 2.6, 2.7; PyPy: 1.5; Jython: 2.5.2
+:tested: CPython: 2.5, 2.6, 2.7; PyPy: 1.5; Jython: 2.5.2
 :release-by: Ask Solem
 
 .. _v230-important:
@@ -165,7 +165,7 @@ Important Notes
 
     The default backend is now a dummy backend
     (:class:`celery.backends.base.DisabledBackend`).  Saving state is simply an
-    noop operation, and AsyncResult.wait(), .result, .state, etc. will raise
+    no-op, and ``AsyncResult.wait()``, ``.result``, ``.state``, etc. will raise
     a :exc:`NotImplementedError` telling the user to configure the result backend.
 
     For help choosing a backend please see :ref:`task-result-backends`.
@@ -180,11 +180,11 @@ Important Notes
         For :pypi:`django-celery` users the default backend is
         still ``database``, and results are not disabled by default.
 
-* The Debian init scripts have been deprecated in favor of the generic-init.d
-  init scripts.
+* The Debian init-scripts have been deprecated in favor of the generic-init.d
+  init-scripts.
 
-    In addition generic init scripts for celerybeat and celeryev has been
-    added.
+    In addition, generic init-scripts for ``celerybeat`` and ``celeryev``
+    have been added.
 
 .. _v230-news:
 
@@ -294,9 +294,9 @@ News
                                          broker.vhost=/               \
                                          celery.disable_rate_limits=yes
 
-* celerybeat: Now retries establishing the connection (Issue #419).
+* ``celerybeat``: Now retries establishing the connection (Issue #419).
 
-* celeryctl: New ``list bindings`` command.
+* ``celeryctl``: New ``list bindings`` command.
 
     Lists the current or all available bindings, depending on the
     broker transport used.
@@ -331,7 +331,7 @@ News
 * Added ``TaskSetResult.delete()``, which will delete a previously
   saved taskset result.
 
-* Celerybeat now syncs every 3 minutes instead of only at
+* ``celerybeat`` now syncs every 3 minutes instead of only at
   shutdown (Issue #382).
 
 * Monitors now properly handles unknown events, so user-defined events
@@ -353,7 +353,7 @@ News
 Fixes
 -----
 
-* celeryev was trying to create the pidfile twice.
+* ``celeryev`` was trying to create the pidfile twice.
 
 * celery.contrib.batches: Fixed problem where tasks failed
   silently (Issue #393).
@@ -364,7 +364,7 @@ Fixes
 * ``CELERY_TASK_ERROR_WHITE_LIST`` is now properly initialized
   in all loaders.
 
-* celeryd_detach now passes through command line configuration.
+* ``celeryd_detach`` now passes through command line configuration.
 
 * Remote control command ``add_consumer`` now does nothing if the
   queue is already being consumed from.

+ 31 - 19
docs/history/changelog-2.4.rst

@@ -71,7 +71,7 @@ Fixes
 
     Contributed by Juan Ignacio Catalano.
 
-* generic init scripts now automatically creates log and pid file
+* generic init-scripts now automatically create log and pid file
   directories (Issue #545).
 
     Contributed by Chris Streeter.
@@ -104,14 +104,14 @@ Fixes
 :release-date: 2011-11-07 06:00 P.M GMT
 :release-by: Ask Solem
 
-* celeryctl inspect commands was missing output.
+* ``celeryctl inspect`` commands were missing output.
 
 * processes pool: Decrease polling interval for less idle CPU usage.
 
 * processes pool: MaybeEncodingError was not wrapped in ExceptionInfo
   (Issue #524).
 
-* worker: would silence errors occuring after task consumer started.
+* worker: would silence errors occurring after task consumer started.
 
 * logging: Fixed a bug where unicode in stdout redirected log messages
   couldn't be written (Issue #522).
@@ -168,11 +168,15 @@ Important Notes
 * Broker transports can be now be specified using URLs
 
     The broker can now be specified as an URL instead.
-    This URL must have the format::
+    This URL must have the format:
+
+    .. code-block:: text
 
         transport://user:password@hostname:port/virtual_host
 
-    for example the default broker is written as::
+    for example the default broker is written as:
+
+    .. code-block:: text
 
         amqp://guest:guest@localhost:5672//
 
@@ -190,9 +194,13 @@ Important Notes
 
         A virtual host of ``'/'`` becomes:
 
+        .. code-block:: text
+
             amqp://guest:guest@localhost:5672//
 
-        and a virtual host of ``''`` (empty) becomes::
+        and a virtual host of ``''`` (empty) becomes:
+
+        .. code-block:: text
 
             amqp://guest:guest@localhost:5672/
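
    Rendered, the hunk above documents the broker URL format.  As a rough
    illustration only (not Celery code), the documented components map onto
    a standard URL and can be pulled apart with Python's standard library:

    .. code-block:: python

        from urllib.parse import urlparse

        # Illustrative only: how the documented broker URL decomposes.
        # transport://user:password@hostname:port/virtual_host
        url = urlparse("amqp://guest:guest@localhost:5672//")

        transport = url.scheme       # 'amqp'
        user = url.username          # 'guest'
        host = url.hostname          # 'localhost'
        port = url.port              # 5672
        virtual_host = url.path[1:]  # '/' -- everything after the first slash
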
 
@@ -278,13 +286,13 @@ News
     tutorials out there using a tuple, and this change should be a help
     to new users.
 
-    Suggested by jsaxon-cars.
+    Suggested by :github_user:`jsaxon-cars`.
 
 * Fixed a memory leak when using the thread pool (Issue #486).
 
     Contributed by Kornelijus Survila.
 
-* The statedb was not saved at exit.
+* The ``statedb`` was not saved at exit.
 
     This has now been fixed and it should again remember previously
     revoked tasks when a ``--statedb`` is enabled.
@@ -302,13 +310,13 @@ News
 
     Contributed by Chris Chamberlin.
 
-* Fixed race condition in celery.events.state (celerymon/celeryev)
+* Fixed race condition in :mod:`celery.events.state` (``celerymon``/``celeryev``)
   where task info would be removed while iterating over it (Issue #501).
 
 * The Cache, Cassandra, MongoDB, Redis and Tyrant backends now respects
   the :setting:`CELERY_RESULT_SERIALIZER` setting (Issue #435).
 
-    This means that only the database (django/sqlalchemy) backends
+    This means that only the database (Django/SQLAlchemy) backends
     currently does not support using custom serializers.
 
     Contributed by Steeve Morin
@@ -344,11 +352,11 @@ News
 
     Fix contributed by Joshua Ginsberg
 
-* Generic beat init script no longer sets `bash -e` (Issue #510).
+* Generic beat init-script no longer sets `bash -e` (Issue #510).
 
     Fix contributed by Roger Hu.
 
-* Documented that Chords do not work well with redis-server versions
+* Documented that Chords do not work well with :command:`redis-server` versions
   before 2.2.
 
     Contributed by Dan McGee.
@@ -365,15 +373,19 @@ News
 * Worker logged the string representation of args and kwargs
   without safe guards (Issue #480).
 
-* RHEL init script: Changed worker start-up priority.
+* RHEL init-script: Changed worker start-up priority.
 
-    The default start / stop priorities for MySQL on RHEL are
+    The default start / stop priorities for MySQL on RHEL are:
+
+    .. code-block:: console
 
         # chkconfig: - 64 36
 
     Therefore, if Celery is using a database as a broker / message store, it
     should be started after the database is up and running, otherwise errors
-    will ensue. This commit changes the priority in the init script to
+    will ensue. This commit changes the priority in the init-script to:
+
+    .. code-block:: console
 
         # chkconfig: - 85 15
 
@@ -386,7 +398,7 @@ News
 * KeyValueStoreBackend.get_many did not respect the ``timeout`` argument
   (Issue #512).
 
-* beat/events's --workdir option did not chdir before after
+* beat/events's ``--workdir`` option did not :manpage:`chdir(2)` before
   configuration was attempted (Issue #506).
 
 * After deprecating 2.4 support we can now name modules correctly, since we
@@ -394,10 +406,10 @@ News
 
     Therefore the following internal modules have been renamed:
 
-        celery.concurrency.evlet    -> celery.concurrency.eventlet
-        celery.concurrency.evg      -> celery.concurrency.gevent
+        ``celery.concurrency.evlet``    -> ``celery.concurrency.eventlet``
+        ``celery.concurrency.evg``      -> ``celery.concurrency.gevent``
 
-* AUTHORS file is now sorted alphabetically.
+* :file:`AUTHORS` file is now sorted alphabetically.
 
     Also, as you may have noticed the contributors of new features/fixes are
     now mentioned in the Changelog.

+ 10 - 8
docs/history/changelog-2.5.rst

@@ -36,13 +36,13 @@ This is a dummy release performed for the following goals:
 * A bug causes messages to be sent with UTC time-stamps even though
   :setting:`CELERY_ENABLE_UTC` was not enabled (Issue #636).
 
-* celerybeat: No longer crashes if an entry's args is set to None
+* ``celerybeat``: No longer crashes if an entry's ``args`` is set to ``None``
   (Issue #657).
 
-* Autoreload did not work if a module's ``__file__`` attribute
-  was set to the modules '.pyc' file.  (Issue #647).
+* Auto-reload did not work if a module's ``__file__`` attribute
+  was set to the module's ``.pyc`` file (Issue #647).
 
-* Fixes early 2.5 compatibility where __package__ does not exist
+* Fixes early 2.5 compatibility where ``__package__`` does not exist
   (Issue #638).
 
 .. _version-2.5.2:
@@ -121,7 +121,9 @@ Fixes
     a new line so that a partially written pidfile is detected as broken,
     as before doing:
 
-        echo -n "1" > celeryd.pid
+    .. code-block:: console
+
+        $ echo -n "1" > celeryd.pid
 
     would cause the worker to think that an existing instance was already
     running (init has pid 1 after all).
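
    The fix described above hinges on the trailing newline acting as a
    completeness marker.  A minimal sketch of that idea (hypothetical helper,
    not Celery's actual implementation):

    .. code-block:: python

        def read_pid(path):
            """Reject a pidfile missing its trailing newline as partially
            written, so a lone '1' no longer matches init's pid."""
            with open(path) as fh:
                contents = fh.read()
            if not contents.endswith("\n"):
                raise ValueError("partially written pidfile: %r" % path)
            return int(contents.strip())
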
@@ -149,7 +151,7 @@ Fixes
 
         $ celery inspect -- broker.pool_limit=30
 
-- Version dependency for python-dateutil fixed to be strict.
+- Version dependency for :pypi:`python-dateutil` fixed to be strict.
 
     Fix contributed by Thomas Meson.
 
@@ -158,7 +160,7 @@ Fixes
 
     This fixes a bug where a custom __call__  may mysteriously disappear.
 
-- Autoreload's inotify support has been improved.
+- Auto-reload's ``inotify`` support has been improved.
 
     Contributed by Mher Movsisyan.
 
@@ -185,7 +187,7 @@ Fixes
 * Eventlet/Gevent: Another small typo caused the mediator to be started
   with eventlet/gevent, which would make the worker sometimes hang at shutdown.
 
-* Mulitprocessing: Fixed an error occurring if the pool was stopped
+* :mod:`multiprocessing`: Fixed an error occurring if the pool was stopped
   before it was properly started.
 
 * Proxy objects now redirects ``__doc__`` and ``__name__`` so ``help(obj)``

+ 41 - 41
docs/history/changelog-3.0.rst

@@ -54,7 +54,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
   when publishing tasks (Issue #1540).
 
 - New :envvar:`C_FAKEFORK` environment variable can be used to
-  debug the init scripts.
+  debug the init-scripts.
 
     Setting this will skip the daemonization step so that errors
     printed to stderr after standard outs are closed can be seen:
@@ -93,7 +93,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
     Contributed by Matt Robenolt.
 
-- Posix: Daemonization did not redirect ``sys.stdin`` to ``/dev/null``.
+- POSIX: Daemonization did not redirect ``sys.stdin`` to ``/dev/null``.
 
     Fix contributed by Alexander Smirnov.
 
@@ -113,7 +113,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 - Now depends on :pypi:`billiard` 2.7.3.32
 
-- Fixed bug with monthly and yearly crontabs (Issue #1465).
+- Fixed bug with monthly and yearly Crontabs (Issue #1465).
 
     Fix contributed by Guillaume Gauvrit.
 
@@ -177,7 +177,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - [generic-init.d] Fixed compatibility with Ubuntu's minimal Dash
   shell (Issue #1387).
 
-    Fix contributed by monkut.
+    Fix contributed by :github_user:`monkut`.
 
 - ``Task.apply``/``ALWAYS_EAGER`` now also executes callbacks and errbacks
   (Issue #1336).
@@ -192,13 +192,13 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - [Python 3] Now handles ``io.UnsupportedOperation`` that may be raised
   by ``file.fileno()`` in Python 3.
 
-- [Python 3] Fixed problem with qualname.
+- [Python 3] Fixed problem with ``qualname``.
 
 - [events.State] Now ignores unknown event-groups.
 
 - [MongoDB backend] No longer uses deprecated ``safe`` parameter.
 
-    Fix contributed by rfkrocktk
+    Fix contributed by :github_user:`rfkrocktk`.
 
 - The eventlet pool now imports on Windows.
 
@@ -294,7 +294,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 - Connection URLs now ignore multiple '+' tokens.
 
-- Worker/statedb: Now uses pickle protocol 2 (Py2.5+)
+- Worker/``statedb``: Now uses pickle protocol 2 (Python 2.5+)
 
 - Fixed Python 3 compatibility issues.
 
@@ -397,7 +397,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - RabbitMQ/Redis: thread-less and lock-free rate-limit implementation.
 
     This means that rate limits pose minimal overhead when used with
-    RabbitMQ/Redis or future transports using the eventloop,
+    RabbitMQ/Redis or future transports using the event-loop,
     and that the rate-limit implementation is now thread-less and lock-free.
 
     The thread-based transports will still use the old implementation for
@@ -511,7 +511,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
     Contributed by Milen Pavlov.
 
-- Improved init scripts for CentOS.
+- Improved init-scripts for CentOS.
 
     - Updated to support celery 3.x conventions.
     - Now uses CentOS built-in ``status`` and ``killproc``
@@ -559,7 +559,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
     It was causing too many problems for users, you can still enable
     it using the :setting:`CELERYD_FORCE_EXECV` setting.
 
-    execv was only enabled when transports other than amqp/redis was used,
+    execv was only enabled when transports other than AMQP/Redis were used,
     and it's there to prevent deadlocks caused by mutexes not being released
     before the process forks.  Unfortunately it also changes the environment
     introducing many corner case bugs that is hard to fix without adding
@@ -670,7 +670,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 - Now depends on Kombu 2.5
 
-    - py-amqp has replaced amqplib as the default transport,
+    - :pypi:`amqp` has replaced :pypi:`amqplib` as the default transport,
       gaining support for AMQP 0.9, and the RabbitMQ extensions
       including Consumer Cancel Notifications and heartbeats.
 
@@ -685,7 +685,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 - The :option:`--loader <celery --loader>` option now works again (Issue #1066).
 
-- :program:`celery` umbrella command: All subcommands now supports
+- :program:`celery` umbrella command: All sub-commands now support
   the :option:`--workdir <celery --workdir>` option (Issue #1063).
 
 - Groups included in chains now give GroupResults (Issue #1057)
@@ -744,14 +744,14 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
     Contributed by Loren Abrams.
 
-- multi stopwait command now shows the pid of processes.
+- ``multi stopwait`` command now shows the pid of processes.
 
     Contributed by Loren Abrams.
 
 - Handling of ETA/countdown fixed when the :setting:`CELERY_ENABLE_UTC`
    setting is disabled (Issue #1065).
 
-- A number of uneeded properties were included in messages,
+- A number of unneeded properties were included in messages,
   caused by accidentally passing ``Queue.as_dict`` as message properties.
 
 - Rate limit values can now be float
@@ -778,7 +778,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
     Contributed by Thomas Grainger.
 
-- Mongodb backend: Connection ``max_pool_size`` can now be set in
+- MongoDB backend: Connection ``max_pool_size`` can now be set in
   :setting:`CELERY_MONGODB_BACKEND_SETTINGS`.
 
     Contributed by Craig Younkins.
@@ -857,7 +857,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 - :mod:`celery.contrib.batches` now works again.
 
-- Fixed missing whitespace in ``bdist_rpm`` requirements (Issue #1046).
+- Fixed missing white-space in ``bdist_rpm`` requirements (Issue #1046).
 
 - Event state's ``tasks_by_name`` applied limit before filtering by name.
 
@@ -893,17 +893,17 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
           changed if *no custom locations are set*.
 
     Users can force paths to be created by calling the ``create-paths``
-    subcommand:
+    sub-command:
 
     .. code-block:: console
 
         $ sudo /etc/init.d/celeryd create-paths
 
-    .. admonition:: Upgrading Celery will not update init scripts
+    .. admonition:: Upgrading Celery will not update init-scripts
 
-        To update the init scripts you have to re-download
+        To update the init-scripts you have to re-download
         the files from source control and update them manually.
-        You can find the init scripts for version 3.0.x at:
+        You can find the init-scripts for version 3.0.x at:
 
             https://github.com/celery/celery/tree/3.0/extra/generic-init.d
 
@@ -923,11 +923,11 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - Terminating a task now properly updates the state of the task to revoked,
   and sends a ``task-revoked`` event.
 
-- Generic worker init script now waits for workers to shutdown by default.
+- Generic worker init-script now waits for workers to shut down by default.
 
 - Multi: No longer parses --app option (Issue #1008).
 
-- Multi: stop_verify command renamed to stopwait.
+- Multi: ``stop_verify`` command renamed to ``stopwait``.
 
 - Daemonization: Now delays trying to create pidfile/logfile until after
   the working directory has been changed into.
@@ -956,7 +956,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - Now depends on billiard 2.7.3.14
 
     - Fixes crash at start-up when using Django and pre-1.4 projects
-      (setup_environ).
+      (``setup_environ``).
 
     - Hard time limits now sends the KILL signal shortly after TERM,
       to terminate processes that have signal handlers blocked by C extensions.
@@ -964,7 +964,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
     - Billiard now installs even if the C extension cannot be built.
 
         It's still recommended to build the C extension if you are using
-        a transport other than rabbitmq/redis (or use forced execv for some
+        a transport other than RabbitMQ/Redis (or use forced execv for some
         other reason).
 
     - Pool now sets a ``current_process().index`` attribute that can be used to create
@@ -1041,7 +1041,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 - Worker: Log messages when connection established and lost have been improved.
 
-- The repr of a crontab schedule value of '0' should be '*'  (Issue #972).
+- The repr of a Crontab schedule value of ``'0'`` should be ``'*'`` (Issue #972).
 
 - Revoked tasks are now removed from reserved/active state in the worker
   (Issue #969)
@@ -1050,7 +1050,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 - gevent: Now supports hard time limits using ``gevent.Timeout``.
 
-- Documentation: Links to init scripts now point to the 3.0 branch instead
+- Documentation: Links to init-scripts now point to the 3.0 branch instead
   of the development branch (master).
 
 - Documentation: Fixed typo in signals user guide (Issue #986).
@@ -1100,7 +1100,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - Fixed bug with timezones when :setting:`CELERY_ENABLE_UTC` is disabled
   (Issue #952).
 
-- Fixed a typo in the celerybeat upgrade mechanism (Issue #951).
+- Fixed a typo in the ``celerybeat`` upgrade mechanism (Issue #951).
 
 - Make sure the `exc_info` argument to logging is resolved (Issue #899).
 
@@ -1142,7 +1142,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 - Now depends on Kombu 2.4.4
 
-- Fixed problem with amqplib and receiving larger message payloads
+- Fixed problem with :pypi:`amqplib` and receiving larger message payloads
   (Issue #922).
 
     The problem would manifest itself as either the worker hanging,
@@ -1151,7 +1151,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
     Users of the new ``pyamqp://`` transport must upgrade to
     :pypi:`amqp` 0.9.3.
 
-- Beat: Fixed another timezone bug with interval and crontab schedules
+- Beat: Fixed another timezone bug with interval and Crontab schedules
   (Issue #943).
 
 - Beat: The schedule file is now automatically cleared if the timezone
@@ -1243,7 +1243,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 - Crontab schedules now properly respects :setting:`CELERY_TIMEZONE` setting.
 
-    It's important to note that crontab schedules uses UTC time by default
+    It's important to note that Crontab schedules use UTC time by default
     unless this setting is set.
 
     Issue #904 and :pypi:`django-celery` #150.
@@ -1284,8 +1284,8 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - The argument to :class:`~celery.exceptions.TaskRevokedError` is now one
   of the reasons ``revoked``, ``expired`` or ``terminated``.
 
-- Old Task class does no longer use classmethods for push_request and
-  pop_request  (Issue #912).
+- Old Task class no longer uses :class:`classmethod` for ``push_request``
+  and ``pop_request`` (Issue #912).
 
 - ``GroupResult`` now supports the ``children`` attribute (Issue #916).
 
@@ -1301,7 +1301,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 - Improved event and camera examples in the monitoring guide.
 
-- Disables celery command setuptools entrypoints if the command can't be
+- Disables celery command setuptools entry-points if the command can't be
   loaded.
 
 - Fixed broken ``dump_request`` example in the tasks guide.
@@ -1330,7 +1330,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - ``chain.apply`` now passes args to the first task (Issue #889).
 
 - Documented previously secret options to the :pypi:`django-celery` monitor
-  in the monitoring userguide (Issue #396).
+  in the monitoring user guide (Issue #396).
 
 - Old changelog are now organized in separate documents for each series,
   see :ref:`history`.
@@ -1352,7 +1352,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 - Now supports AMQP heartbeats if using the new ``pyamqp://`` transport.
 
-    - The py-amqp transport requires the :pypi:`amqp` library to be installed:
+    - The ``pyamqp://`` transport requires the :pypi:`amqp` library to be installed:
 
         .. code-block:: console
 
@@ -1366,7 +1366,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
         BROKER_HEARTBEAT = 5.0
 
     - If the broker heartbeat is set to 10 seconds, the heartbeats will be
-      monitored every 5 seconds (double the hertbeat rate).
+      monitored every 5 seconds (double the heartbeat rate).
 
     See the :ref:`Kombu 2.3 changelog <kombu:version-2.3.0>` for more information.
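
    The "double the heartbeat rate" arithmetic above can be sketched as a
    one-liner (hypothetical helper, not Kombu's actual API):

    .. code-block:: python

        def heartbeat_check_interval(heartbeat, rate=2.0):
            # Check connections ``rate`` times per heartbeat period, so a
            # 10 second heartbeat is monitored every 5 seconds.
            return heartbeat / rate
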
 
@@ -1418,7 +1418,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
             }
 
 - New :meth:`@add_defaults` method can add new default configuration
-  dicts to the applications configuration.
+  dictionaries to the application's configuration.
 
     For example::
 
@@ -1450,8 +1450,8 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
     Fix contributed by Hynek Schlawack.
 
-- Eventloop now properly handles the case when the epoll poller object
-  has been closed (Issue #882).
+- Event-loop now properly handles the case when the :manpage:`epoll` poller
+  object has been closed (Issue #882).
 
 - Fixed syntax error in ``funtests/test_leak.py``
 
@@ -1474,7 +1474,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 :release-date: 2012-07-20 09:17 P.M BST
 :release-by: Ask Solem
 
-- amqplib passes the channel object as part of the delivery_info
+- :pypi:`amqplib` passes the channel object as part of the ``delivery_info``
   and it's not pickleable, so we now remove it.
 
 .. _version-3.0.2:
@@ -1557,7 +1557,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - Beat: now works with timezone aware datetime's.
 
 - Task classes inheriting ``from celery import Task``
-  mistakingly enabled ``accept_magic_kwargs``.
+  mistakenly enabled ``accept_magic_kwargs``.
 
 - Fixed bug in ``inspect scheduled`` (Issue #829).
 

+ 58 - 54
docs/history/changelog-3.1.rst

@@ -21,7 +21,8 @@ new in Celery 3.1.
 
     - Now depends on :mod:`billiard` 3.3.0.23.
 
-- **Prefork pool**: Fixes 100% CPU loop on Linux epoll (Issue #1845).
+- **Prefork pool**: Fixes 100% CPU loop on Linux :manpage:`epoll`
+  (Issue #1845).
 
     Also potential fix for: Issue #2142, Issue #2606
 
@@ -38,7 +39,7 @@ new in Celery 3.1.
   than -2147483648 (Issue #3078).
 
 - **Programs**: :program:`celery shell --ipython` now compatible with newer
-  IPython versions.
+  :pypi:`IPython` versions.
 
 - **Programs**: The DuplicateNodeName warning emitted by inspect/control
   now includes a list of the node names returned.
@@ -54,7 +55,7 @@ new in Celery 3.1.
 - **Worker**: Node name formatting now emits less confusing error message
   for unmatched format keys (Issue #3016).
 
-- **Results**: amqp/rpc backends: Fixed deserialization of JSON exceptions
+- **Results**: RPC/AMQP backends: Fixed deserialization of JSON exceptions
   (Issue #2518).
 
     Fix contributed by Allard Hoeve.
@@ -81,7 +82,8 @@ new in Celery 3.1.
 
         Includes binary wheels for Microsoft Windows x86 and x86_64!
 
-- **Task**: Error emails now uses ``utf-8`` charset by default (Issue #2737).
+- **Task**: Error emails now use the ``utf-8`` character set by default
+  (Issue #2737).
 
 - **Task**: Retry now forwards original message headers (Issue #3017).
 
@@ -101,7 +103,7 @@ new in Celery 3.1.
 - **Results**: Redis ``new_join`` did not properly call task errbacks on chord
   error (Issue #2796).
 
-- **Results**: Restores Redis compatibility with redis-py < 2.10.0
+- **Results**: Restores Redis compatibility with Python :pypi:`redis` < 2.10.0
   (Issue #2903).
 
 - **Results**: Fixed rare issue with chord error handling (Issue #2409).
@@ -133,7 +135,7 @@ new in Celery 3.1.
     This commit changes the code to a version that does not iterate over
     the dict, and should also be a little bit faster.
 
-- **Init scripts**: The beat init script now properly reports service as down
+- **Init-scripts**: The beat init-script now properly reports service as down
   when no pid file can be found.
 
     Eric Zarowny
@@ -154,7 +156,7 @@ new in Celery 3.1.
 
 - **Documentation**: Includes improvements by:
 
-    Bryson
+    :github_user:`Bryson`
     Caleb Mingle
     Christopher Martin
     Dieter Adriaenssens
@@ -165,15 +167,15 @@ new in Celery 3.1.
     Kevin McCarthy
     Kirill Pavlov
     Marco Buttu
-    Mayflower
+    :github_user:`Mayflower`
     Mher Movsisyan
     Michael Floering
-    michael-k
+    :github_user:`michael-k`
     Nathaniel Varona
     Rudy Attias
     Ryan Luckie
     Steven Parker
-    squfrans
+    :github_user:`squfrans`
     Tadej Janež
     TakesxiSximada
     Tom S
@@ -231,7 +233,7 @@ new in Celery 3.1.
 
     Fix contributed by Sukrit Khera.
 
-- **Results**: RPC/amqp backends did not deserialize exceptions properly
+- **Results**: RPC/AMQP backends did not deserialize exceptions properly
   (Issue #2691).
 
     Fix contributed by Sukrit Khera.
@@ -248,11 +250,11 @@ new in Celery 3.1.
 
         Carlos Garcia-Dubus
         D. Yu
-        jerry
+        :github_user:`jerry`
         Jocelyn Delalande
         Josh Kupershmidt
         Juan Rossi
-        kanemra
+        :github_user:`kanemra`
         Paul Pearce
         Pavel Savchenko
         Sean Wang
@@ -294,8 +296,8 @@ new in Celery 3.1.
 
     Fix contributed by Gunnlaugur Thor Briem.
 
-- **init scripts**: The celerybeat generic init script now uses
-  ``/bin/sh`` instead of bash (Issue #2496).
+- **init-scripts**: The beat generic init-script now uses
+  :file:`/bin/sh` instead of :command:`bash` (Issue #2496).
 
     Fix contributed by Jelle Verstraaten.
 
@@ -358,7 +360,7 @@ new in Celery 3.1.
     Fix contributed by Thomas French.
 
 - **Task**: Callbacks were not called properly if ``link`` was a list of
-  signatures (Issuse #2350).
+  signatures (Issue #2350).
 
 - **Canvas**: chain and group now handle json serialized signatures
   (Issue #2076).
@@ -377,7 +379,7 @@ new in Celery 3.1.
 - **Task**: Fixed problem with app not being properly propagated to
   ``trace_task`` in all cases.
 
-    Fix contributed by kristaps.
+    Fix contributed by :github_user:`kristaps`.
 
 - **Worker**: Expires from task message now associated with a timezone.
 
@@ -391,7 +393,7 @@ new in Celery 3.1.
     Fix contributed by Gino Ledesma.
 
 - **Mongodb Result backend**: Pickling the backend instance will now include
-  the original url (Issue #2347).
+  the original URL (Issue #2347).
 
     Fix contributed by Sukrit Khera.
 
@@ -404,7 +406,7 @@ new in Celery 3.1.
 - **celery.contrib.rdb**: Fixed problems with ``rdb.set_trace`` calling stop
   from the wrong frame.
 
-    Fix contributed by llllllllll.
+    Fix contributed by :github_user:`llllllllll`.
 
 - **Canvas**: ``chain`` and ``chord`` can now be immutable.
 
@@ -414,7 +416,7 @@ new in Celery 3.1.
 - **Results**: Small refactoring so that results are decoded the same way in
   all result backends.
 
-- **Logging**: The ``processName`` format was introduced in Py2.6.2 so for
+- **Logging**: The ``processName`` format was introduced in Python 2.6.2 so for
   compatibility this format is now excluded when using earlier versions
   (Issue #1644).
 
@@ -478,10 +480,10 @@ new in Celery 3.1.
 
     - Now depends on :ref:`Kombu 3.0.22 <kombu:version-3.0.22>`.
 
-- **Init scripts**: The generic worker init scripts ``status`` command
+- **Init-scripts**: The generic worker init-scripts ``status`` command
   now gets an accurate pidfile list (Issue #1942).
 
-- **Init scripts**: The generic beat script now implements the ``status``
+- **Init-scripts**: The generic beat script now implements the ``status``
    command.
 
     Contributed by John Whitlock.
@@ -544,7 +546,7 @@ News
 - **Task**: ``signature_from_request`` now propagates ``reply_to`` so that
   the RPC backend works with retried tasks (Issue #2113).
 
-- **Task**: ``retry`` will no longer attempt to requeue the task if sending
+- **Task**: ``retry`` will no longer attempt to re-queue the task if sending
   the retry message fails.
 
     Unrelated exceptions being raised could cause a message loop, so it was
@@ -558,7 +560,7 @@ News
 
 - Documentation fixes
 
-    Contributed by Yuval Greenfield, Lucas Wiman, nicholsonjf
+    Contributed by Yuval Greenfield, Lucas Wiman, :github_user:`nicholsonjf`.
 
 - **Worker**: Removed an outdated assert statement that could lead to errors
   being masked (Issue #2086).
@@ -682,10 +684,10 @@ News
 
     Fix contributed by Ian Dees.
 
-- **Init scripts**: The CentOS init scripts did not quote
+- **Init-scripts**: The CentOS init-scripts did not quote
   :envvar:`CELERY_CHDIR`.
 
-    Fix contributed by ffeast.
+    Fix contributed by :github_user:`ffeast`.
 
 .. _version-3.1.11:
 
@@ -824,6 +826,8 @@ News
 
     The new option can be set in the result backend URL:
 
+    .. code-block:: python
+
         CELERY_RESULT_BACKEND = 'redis://localhost?new_join=1'
 
     This must be enabled manually as it's incompatible
@@ -882,7 +886,7 @@ News
 - **Task**: ``Task.apply`` now properly sets ``request.headers``
   (Issue #1874).
 
-- **Worker**: Fixed ``UnicodeEncodeError`` occuring when worker is started
+- **Worker**: Fixed :exc:`UnicodeEncodeError` occurring when worker is started
   by :pypi:`supervisor`.
 
     Fix contributed by Codeb Fan.
@@ -934,15 +938,15 @@ News
 - **Task**: Task.backend is now a property that forwards to ``app.backend``
   if no custom backend has been specified for the task (Issue #1821).
 
-- **Generic init scripts**: Fixed bug in stop command.
+- **Generic init-scripts**: Fixed bug in stop command.
 
     Fix contributed by Rinat Shigapov.
 
-- **Generic init scripts**: Fixed compatibility with GNU :manpage:`stat`.
+- **Generic init-scripts**: Fixed compatibility with GNU :manpage:`stat`.
 
     Fix contributed by Paul Kilgo.
 
-- **Generic init scripts**: Fixed compatibility with the minimal
+- **Generic init-scripts**: Fixed compatibility with the minimal
   :program:`dash` shell (Issue #1815).
 
 - **Commands**: The :program:`celery amqp basic.publish` command was not
@@ -1030,17 +1034,17 @@ News
 
     Fix contributed by Brodie Rao.
 
-- **Generic init scripts:** Now runs a check at start-up to verify
+- **Generic init-scripts:** Now runs a check at start-up to verify
   that any configuration scripts are owned by root and that they
-  are not world/group writeable.
+  are not world/group writable.
 
-    The init script configuration is a shell script executed by root,
+    The init-script configuration is a shell script executed by root,
     so this is a preventive measure to ensure that users do not
     leave this file vulnerable to changes by unprivileged users.
 
     .. note::
 
-        Note that upgrading celery will not update the init scripts,
+        Note that upgrading celery will not update the init-scripts,
         instead you need to manually copy the improved versions from the
         source distribution:
         https://github.com/celery/celery/tree/3.1/extra/generic-init.d
@@ -1092,7 +1096,7 @@ News
   (:program:`celery events -d`) can now be piped into other commands.
 
 - **Documentation:** The RabbitMQ installation instructions for OS X were
-  updated to use modern homebrew practices.
+  updated to use modern Homebrew practices.
 
     Contributed by Jon Chen.
 
@@ -1125,10 +1129,10 @@ News
 Important Notes
 ---------------
 
-Init script security improvements
+Init-script security improvements
 ---------------------------------
 
-Where the generic init scripts (for ``celeryd``, and ``celerybeat``) before
+Where the generic init-scripts (for ``celeryd`` and ``celerybeat``) previously
 delegated the responsibility of dropping privileges to the target application,
 they now use ``su`` instead, so that the Python program is not trusted
 with superuser privileges.
@@ -1137,7 +1141,7 @@ This is not in reaction to any known exploit, but it will
 limit the possibility of a privilege escalation bug being abused in the
 future.
 
-You have to upgrade the init scripts manually from this directory:
+You have to upgrade the init-scripts manually from this directory:
 https://github.com/celery/celery/tree/3.1/extra/generic-init.d
 
 AMQP result backend
@@ -1205,8 +1209,8 @@ Fixes
 - Worker: Now keeps count of the total number of tasks processed,
   not just by type (``all_active_count``).
 
-- Init scripts:  Fixed problem with reading configuration file
-  when the init script is symlinked to a runlevel (e.g. ``S02celeryd``).
+- Init-scripts:  Fixed problem with reading configuration file
+  when the init-script is symlinked to a runlevel (e.g. ``S02celeryd``).
   (Issue #1740).
 
     This also removed a rarely used feature where you can symlink the script
@@ -1255,7 +1259,7 @@ Fixes
 - Redis/Cache result backends: Will now timeout if keys evicted while trying
   to join a chord.
 
-- The fallbock unlock chord task now raises :exc:`Retry` so that the
+- The fallback unlock chord task now raises :exc:`Retry` so that the
   retry event is properly logged by the worker.
 
 - Multi: Will no longer apply Eventlet/gevent monkey patches (Issue #1717).
@@ -1280,7 +1284,7 @@ Fixes
     a skew of -1.
 
 - Prefork pool: The method used to find terminated processes was flawed
-  in that it did not also take into account missing popen objects.
+  in that it did not also take into account missing ``popen`` objects.
 
 - Canvas: ``group`` and ``chord`` now works with anon signatures as long
   as the group/chord object is associated with an app instance (Issue #1744).
@@ -1324,7 +1328,7 @@ Fixes
 
 - Cache result backend now compatible with Python 3 (Issue #1697).
 
-- CentOS init script: Now compatible with sys-v style init symlinks.
+- CentOS init-script: Now compatible with SysV style init symlinks.
 
     Fix contributed by Jonathan Jordan.
 
@@ -1395,7 +1399,7 @@ Fixes
   :option:`--gid <celery beat --gid>` arguments even if
   :option:`--detach <celery beat --detach>` is not enabled.
 
-- Python 3: Fixed unorderable error occuring with the worker
+- Python 3: Fixed unorderable error occurring with the worker
   :option:`-B <celery worker -B>` argument enabled.
 
 - ``celery.VERSION`` is now a named tuple.
@@ -1404,7 +1408,7 @@ Fixes
 
 - ``celery shell`` command: Fixed ``IPython.frontend`` deprecation warning.
 
-- The default app no longer includes the built-in fixups.
+- The default app no longer includes the built-in fix-ups.
 
     This fixes a bug where ``celery multi`` would attempt
     to load the Django settings module before entering
@@ -1455,7 +1459,7 @@ Fixes
   the rare ``--opt value`` format (Issue #1668).
 
 - ``celery`` command: Accidentally removed options
-  appearing before the subcommand, these are now moved to the end
+  appearing before the sub-command; these are now moved to the end
   instead.
 
 - Worker now properly responds to ``inspect stats`` commands
@@ -1466,11 +1470,11 @@ Fixes
 
 - Beat: Fixed syntax error in string formatting.
 
-    Contributed by nadad.
+    Contributed by :github_user:`nadad`.
 
 - Fixed typos in the documentation.
 
-    Fixes contributed by Loic Bistuer, sunfinite.
+    Fixes contributed by Loic Bistuer, :github_user:`sunfinite`.
 
 - Nested chains now works properly when constructed using the
   ``chain`` type instead of the ``|`` operator (Issue #1656).
@@ -1488,8 +1492,8 @@ Fixes
 
 - Worker accidentally set a default socket timeout of 5 seconds.
 
-- Django: Fixup now sets the default app so that threads will use
-  the same app instance (e.g. for manage.py runserver).
+- Django: Fix-up now sets the default app so that threads will use
+  the same app instance (e.g. for :command:`manage.py runserver`).
 
 - Worker: Fixed Unicode error crash at start-up experienced by some users.
 
@@ -1501,7 +1505,7 @@ Fixes
 - The :option:`--app <celery --app>` argument could end up using a module
   object instead of an app instance (with a resulting crash).
 
-- Fixed a syntax error problem in the celerybeat init script.
+- Fixed a syntax error problem in the beat init-script.
 
     Fix contributed by Vsevolod.
 
@@ -1540,7 +1544,7 @@ Fixes
 - Django: Fixed ``ImproperlyConfigured`` error raised
   when no database backend specified.
 
-    Fix contributed by j0hnsmith
+    Fix contributed by :github_user:`j0hnsmith`.
 
 - Prefork pool: Now using ``_multiprocessing.read`` with ``memoryview``
   if available.
@@ -1570,11 +1574,11 @@ Fixes
     Also fixed typos in the tutorial, and added the settings
     required to use the Django database backend.
 
-    Thanks to Chris Ward, orarbel.
+    Thanks to Chris Ward, :github_user:`orarbel`.
 
 - Django: Fixed a problem when using the Django settings in Django 1.6.
 
-- Django: Fixup should not be applied if the django loader is active.
+- Django: Fix-up should not be applied if the django loader is active.
 
 - Worker:  Fixed attribute error for ``human_write_stats`` when using the
   compatibility prefork pool implementation.

+ 5 - 5
docs/history/whatsnew-2.5.rst

@@ -370,7 +370,7 @@ In Other News
 
     Contributed by Steeve Morin.
 
-- The crontab parser now matches Vixie Cron behavior when parsing ranges
+- The Crontab parser now matches Vixie Cron behavior when parsing ranges
   with steps (e.g. 1-59/2).
 
     Contributed by Daniel Hepper.
@@ -399,7 +399,7 @@ In Other News
 
     Contributed by Sean O'Connor.
 
-- CentOS init script has been updated and should be more flexible.
+- CentOS init-script has been updated and should be more flexible.
 
     Contributed by Andrew McFague.
 
@@ -410,7 +410,7 @@ In Other News
 - ``task.retry()`` now re-raises the original exception keeping
   the original stack trace.
 
-    Suggested by ``@ojii``.
+    Suggested by :github_user:`ojii`.
 
 - The `--uid` argument to daemons now uses ``initgroups()`` to set
   groups to all the groups the user is a member of.
@@ -470,7 +470,7 @@ In Other News
 
 - There's a new :ref:`guide-security` guide in the documentation.
 
-- The init scripts has been updated, and many bugs fixed.
+- The init-scripts have been updated, and many bugs fixed.
 
     Contributed by Chris Streeter.
 
@@ -516,7 +516,7 @@ Fixes
 - Redis result backend: Now uses ``SETEX`` command to set result key,
   and expiry atomically.
 
-    Suggested by ``@yaniv-aknin``.
+    Suggested by :github_user:`yaniv-aknin`.
 
 - ``celeryd``: Fixed a problem where shutdown hanged when :kbd:`Control-c`
   was used to terminate.

+ 1 - 1
docs/history/whatsnew-3.0.rst

@@ -1019,7 +1019,7 @@ See the :ref:`deprecation-timeline`.
 Fixes
 =====
 
-- Retry sqlalchemy backend operations on DatabaseError/OperationalError
+- Retry SQLAlchemy backend operations on DatabaseError/OperationalError
   (Issue #634)
 
 - Tasks that called ``retry`` was not acknowledged if acks late was enabled

+ 32 - 32
docs/includes/installation.txt

@@ -26,9 +26,9 @@ Bundles
 Celery also defines a group of bundles that can be used
 to install Celery and the dependencies for a given feature.
 
-You can specify these in your requirements or on the ``pip`` comand-line
-by using brackets.  Multiple bundles can be specified by separating them by
-commas.
+You can specify these in your requirements or on the :command:`pip`
+command-line by using brackets.  Multiple bundles can be specified by
+separating them by commas.
 
 .. code-block:: console
 
@@ -41,84 +41,84 @@ The following bundles are available:
 Serializers
 ~~~~~~~~~~~
 
-:celery[auth]:
-    for using the auth serializer.
+:``celery[auth]``:
+    for using the ``auth`` security serializer.
 
-:celery[msgpack]:
+:``celery[msgpack]``:
     for using the msgpack serializer.
 
-:celery[yaml]:
+:``celery[yaml]``:
     for using the yaml serializer.
 
 Concurrency
 ~~~~~~~~~~~
 
-:celery[eventlet]:
-    for using the eventlet pool.
+:``celery[eventlet]``:
+    for using the :pypi:`eventlet` pool.
 
-:celery[gevent]:
-    for using the gevent pool.
+:``celery[gevent]``:
+    for using the :pypi:`gevent` pool.
 
-:celery[threads]:
+:``celery[threads]``:
     for using the thread pool.
 
 Transports and Backends
 ~~~~~~~~~~~~~~~~~~~~~~~
 
-:celery[librabbitmq]:
+:``celery[librabbitmq]``:
     for using the librabbitmq C library.
 
-:celery[redis]:
+:``celery[redis]``:
     for using Redis as a message transport or as a result backend.
 
-:celery[mongodb]:
+:``celery[mongodb]``:
     for using MongoDB as a message transport (*experimental*),
     or as a result backend (*supported*).
 
-:celery[sqs]:
+:``celery[sqs]``:
     for using Amazon SQS as a message transport (*experimental*).
 
-:celery[tblib]
+:``celery[tblib]``:
     for using the :setting:`task_remote_tracebacks` feature.
 
-:celery[memcache]:
-    for using memcached as a result backend (using pylibmc)
+:``celery[memcache]``:
+    for using Memcached as a result backend (using :pypi:`pylibmc`)
 
-:celery[pymemcache]:
-    for using memcached as a result backend (pure-python implementation).
+:``celery[pymemcache]``:
+    for using Memcached as a result backend (pure-Python implementation).
 
-:celery[cassandra]:
+:``celery[cassandra]``:
     for using Apache Cassandra as a result backend with DataStax driver.
 
-:celery[couchdb]:
+:``celery[couchdb]``:
     for using CouchDB as a message transport (*experimental*).
 
-:celery[couchbase]:
+:``celery[couchbase]``:
     for using Couchbase as a result backend.
 
-:celery[elasticsearch]
+:``celery[elasticsearch]``:
     for using Elasticsearch as a result backend.
 
-:celery[riak]:
+:``celery[riak]``:
     for using Riak as a result backend.
 
-:celery[beanstalk]:
+:``celery[beanstalk]``:
     for using Beanstalk as a message transport (*experimental*).
 
-:celery[zookeeper]:
+:``celery[zookeeper]``:
     for using Zookeeper as a message transport.
 
-:celery[zeromq]:
+:``celery[zeromq]``:
     for using ZeroMQ as a message transport (*experimental*).
 
-:celery[sqlalchemy]:
+:``celery[sqlalchemy]``:
     for using SQLAlchemy as a message transport (*experimental*),
     or as a result backend (*supported*).
 
-:celery[pyro]:
+:``celery[pyro]``:
     for using the Pyro4 message transport (*experimental*).
 
-:celery[slmq]:
+:``celery[slmq]``:
     for using the SoftLayer Message Queue transport (*experimental*).
 
 .. _celery-installing-from-source:

+ 2 - 2
docs/includes/resources.txt

@@ -43,10 +43,10 @@ http://wiki.github.com/celery/celery/
 Contributing
 ============
 
-Development of `celery` happens at Github: https://github.com/celery/celery
+Development of `celery` happens at GitHub: https://github.com/celery/celery
 
 You are highly encouraged to participate in the development
-of `celery`. If you don't like Github (for some reason) you're welcome
+of `celery`. If you don't like GitHub (for some reason) you're welcome
 to send regular patches.
 
 Be sure to also read the `Contributing to Celery`_ section in the

+ 84 - 87
docs/internals/app-overview.rst

@@ -37,7 +37,6 @@ Creating custom Task subclasses:
     Task = celery.create_task_cls()
 
     class DebugTask(Task):
-        abstract = True
 
         def on_failure(self, *args, **kwargs):
             import pdb
@@ -89,11 +88,11 @@ Other interesting attributes::
 As you can probably see, this really opens up another
 dimension of customization abilities.
 
-Deprecations
-============
+Deprecated
+==========
 
-* celery.task.ping
-  celery.task.PingTask
+* ``celery.task.ping``
+  ``celery.task.PingTask``
 
   Inferior to the ping remote control command.
   Will be removed in Celery 2.3.
@@ -101,41 +100,41 @@ Deprecations
 Aliases (Pending deprecation)
 =============================
 
-* celery.task.base
-    * .Task -> {app.Task / :class:`celery.app.task.Task`}
+* ``celery.task.base``
+    * ``.Task`` -> {``app.Task`` / :class:`celery.app.task.Task`}
 
-* celery.task.sets
-    * .TaskSet -> {app.TaskSet}
+* ``celery.task.sets``
+    * ``.TaskSet`` -> {``app.TaskSet``}
 
-* celery.decorators / celery.task
-    * .task -> {app.task}
+* ``celery.decorators`` / ``celery.task``
+    * ``.task`` -> {``app.task``}
 
-* celery.execute
-    * .apply_async -> {task.apply_async}
-    * .apply -> {task.apply}
-    * .send_task -> {app.send_task}
-    * .delay_task -> no alternative
+* ``celery.execute``
+    * ``.apply_async`` -> {``task.apply_async``}
+    * ``.apply`` -> {``task.apply``}
+    * ``.send_task`` -> {``app.send_task``}
+    * ``.delay_task`` -> *no alternative*
 
-* celery.log
-    * .get_default_logger -> {app.log.get_default_logger}
-    * .setup_logger -> {app.log.setup_logger}
-    * .get_task_logger -> {app.log.get_task_logger}
-    * .setup_task_logger -> {app.log.setup_task_logger}
-    * .setup_logging_subsystem -> {app.log.setup_logging_subsystem}
-    * .redirect_stdouts_to_logger -> {app.log.redirect_stdouts_to_logger}
+* ``celery.log``
+    * ``.get_default_logger`` -> {``app.log.get_default_logger``}
+    * ``.setup_logger`` -> {``app.log.setup_logger``}
+    * ``.get_task_logger`` -> {``app.log.get_task_logger``}
+    * ``.setup_task_logger`` -> {``app.log.setup_task_logger``}
+    * ``.setup_logging_subsystem`` -> {``app.log.setup_logging_subsystem``}
+    * ``.redirect_stdouts_to_logger`` -> {``app.log.redirect_stdouts_to_logger``}
 
-* celery.messaging
-    * .establish_connection -> {app.broker_connection}
-    * .with_connection -> {app.with_connection}
-    * .get_consumer_set -> {app.amqp.get_task_consumer}
-    * .TaskPublisher -> {app.amqp.TaskPublisher}
-    * .TaskConsumer -> {app.amqp.TaskConsumer}
-    * .ConsumerSet -> {app.amqp.ConsumerSet}
+* ``celery.messaging``
+    * ``.establish_connection`` -> {``app.broker_connection``}
+    * ``.with_connection`` -> {``app.with_connection``}
+    * ``.get_consumer_set`` -> {``app.amqp.get_task_consumer``}
+    * ``.TaskPublisher`` -> {``app.amqp.TaskPublisher``}
+    * ``.TaskConsumer`` -> {``app.amqp.TaskConsumer``}
+    * ``.ConsumerSet`` -> {``app.amqp.ConsumerSet``}
 
-* celery.conf.* -> {app.conf}
+* ``celery.conf.*`` -> {``app.conf``}
 
     **NOTE**: All configuration keys are now named the same
-    as in the configuration. So the key "task_always_eager"
+    as in the configuration. So the key ``task_always_eager``
     is accessed as::
 
         >>> app.conf.task_always_eager
@@ -145,22 +144,22 @@ Aliases (Pending deprecation)
         >>> from celery import conf
         >>> conf.always_eager
 
-    * .get_queues -> {app.amqp.get_queues}
+    * ``.get_queues`` -> {``app.amqp.get_queues``}
 
-* celery.task.control
-    * .broadcast -> {app.control.broadcast}
-    * .rate_limit -> {app.control.rate_limit}
-    * .ping -> {app.control.ping}
-    * .revoke -> {app.control.revoke}
-    * .discard_all -> {app.control.discard_all}
-    * .inspect -> {app.control.inspect}
+* ``celery.task.control``
+    * ``.broadcast`` -> {``app.control.broadcast``}
+    * ``.rate_limit`` -> {``app.control.rate_limit``}
+    * ``.ping`` -> {``app.control.ping``}
+    * ``.revoke`` -> {``app.control.revoke``}
+    * ``.discard_all`` -> {``app.control.discard_all``}
+    * ``.inspect`` -> {``app.control.inspect``}
 
-* celery.utils.info
-    * .humanize_seconds -> celery.utils.timeutils.humanize_seconds
-    * .textindent -> celery.utils.textindent
-    * .get_broker_info -> {app.amqp.get_broker_info}
-    * .format_broker_info -> {app.amqp.format_broker_info}
-    * .format_queues -> {app.amqp.format_queues}
+* ``celery.utils.info``
+    * ``.humanize_seconds`` -> ``celery.utils.timeutils.humanize_seconds``
+    * ``.textindent`` -> ``celery.utils.textindent``
+    * ``.get_broker_info`` -> {``app.amqp.get_broker_info``}
+    * ``.format_broker_info`` -> {``app.amqp.format_broker_info``}
+    * ``.format_queues`` -> {``app.amqp.format_queues``}
 
 Default App Usage
 =================
@@ -193,44 +192,42 @@ instance.
 App Dependency Tree
 -------------------
 
-* {app}
-    * celery.loaders.base.BaseLoader
-    * celery.backends.base.BaseBackend
-    * {app.TaskSet}
-        * celery.task.sets.TaskSet (app.TaskSet)
-    * [app.TaskSetResult]
-        * celery.result.TaskSetResult (app.TaskSetResult)
-
-* {app.AsyncResult}
-    * celery.result.BaseAsyncResult / celery.result.AsyncResult
-
-* celery.bin.worker.WorkerCommand
-    * celery.apps.worker.Worker
-        * celery.worker.WorkerController
-            * celery.worker.consumer.Consumer
-                * celery.worker.request.Request
-                * celery.events.EventDispatcher
-                * celery.worker.control.ControlDispatch
-                    * celery.woker.control.registry.Panel
-                    * celery.pidbox.BroadcastPublisher
-                * celery.pidbox.BroadcastConsumer
-            * celery.worker.controllers.Mediator
-            * celery.beat.EmbeddedService
-
-* celery.bin.events.EvCommand
-    * celery.events.snapshot.evcam
-        * celery.events.snapshot.Polaroid
-        * celery.events.EventReceiver
-    * celery.events.cursesmon.evtop
-        * celery.events.EventReceiver
-        * celery.events.cursesmon.CursesMonitor
-    * celery.events.dumper
-        * celery.events.EventReceiver
-
-* celery.bin.amqp.AMQPAdmin
-
-* celery.bin.beat.BeatCommand
-    * celery.apps.beat.Beat
-        * celery.beat.Service
-            * celery.beat.Scheduler
-
+* {``app``}
+    * ``celery.loaders.base.BaseLoader``
+    * ``celery.backends.base.BaseBackend``
+    * {``app.TaskSet``}
+        * ``celery.task.sets.TaskSet`` (``app.TaskSet``)
+    * [``app.TaskSetResult``]
+        * ``celery.result.TaskSetResult`` (``app.TaskSetResult``)
+
+* {``app.AsyncResult``}
+    * ``celery.result.BaseAsyncResult`` / ``celery.result.AsyncResult``
+
+* ``celery.bin.worker.WorkerCommand``
+    * ``celery.apps.worker.Worker``
+        * ``celery.worker.WorkerController``
+            * ``celery.worker.consumer.Consumer``
+                * ``celery.worker.request.Request``
+                * ``celery.events.EventDispatcher``
+                * ``celery.worker.control.ControlDispatch``
+                    * ``celery.worker.control.registry.Panel``
+                    * ``celery.pidbox.BroadcastPublisher``
+                * ``celery.pidbox.BroadcastConsumer``
+            * ``celery.beat.EmbeddedService``
+
+* ``celery.bin.events.EvCommand``
+    * ``celery.events.snapshot.evcam``
+        * ``celery.events.snapshot.Polaroid``
+        * ``celery.events.EventReceiver``
+    * ``celery.events.cursesmon.evtop``
+        * ``celery.events.EventReceiver``
+        * ``celery.events.cursesmon.CursesMonitor``
+    * ``celery.events.dumper``
+        * ``celery.events.EventReceiver``
+
+* ``celery.bin.amqp.AMQPAdmin``
+
+* ``celery.bin.beat.BeatCommand``
+    * ``celery.apps.beat.Beat``
+        * ``celery.beat.Service``
+            * ``celery.beat.Scheduler``

+ 4 - 4
docs/internals/deprecation.rst

@@ -1,8 +1,8 @@
 .. _deprecation-timeline:
 
-=============================
- Celery Deprecation Timeline
-=============================
+==============================
+ Celery Deprecation Time-line
+==============================
 
 .. contents::
     :local:
@@ -62,7 +62,7 @@ Compat Task Modules
 
 
 Note that the new :class:`~celery.Task` class no longer
-uses classmethods for these methods:
+uses :func:`classmethod` for these methods:
 
     - delay
     - apply_async

+ 5 - 5
docs/internals/guide.rst

@@ -16,7 +16,7 @@ The API>RCP Precedence Rule
 - The API is more important than Readability
 - Readability is more important than Convention
 - Convention is more important than Performance
-    - …unless the code is a proven hotspot.
+    - …unless the code is a proven hot-spot.
 
 More important than anything else is the end-user API.
 Conventions must step aside, and any suffering is always alleviated
@@ -62,7 +62,7 @@ Naming
     .. note::
 
         Sometimes it makes sense to have a class mask as a function,
-        and there is precedence for this in the stdlib (e.g.
+        and there is precedent for this in the Python standard library (e.g.
         :class:`~contextlib.contextmanager`).  Celery examples include
         :class:`~celery.signature`, :class:`~celery.chord`,
         ``inspect``, :class:`~kombu.utils.functional.promise` and more..
@@ -179,7 +179,7 @@ can't co-exist in the same process space, this later posed a problem
 for using Celery with frameworks that don't have this limitation.
 
 Therefore the app concept was introduced.  When using apps you use 'celery'
-objects instead of importing things from celery submodules, this
+objects instead of importing things from celery sub-modules, this
 (unfortunately) also means that Celery essentially has two APIs.
 
 Here's an example using Celery in single-mode:
@@ -264,7 +264,7 @@ Module Overview
 - celery.bin
 
     Command-line applications.
-    setup.py creates setuptools entrypoints for these.
+    :file:`setup.py` creates setuptools entry-points for these.
 
 - celery.concurrency
 
@@ -325,7 +325,7 @@ Worker overview
 * `app.Worker` -> `celery.apps.worker:Worker`
 
    Responsibilities:
-   * sets up logging and redirects stdouts
+   * sets up logging and redirects the standard output streams
    * installs signal handlers (`TERM`/`HUP`/`STOP`/`USR1` (cry)/`USR2` (rdb))
    * prints banner and warnings (e.g. pickle warning)
    * handles the :option:`celery worker --purge` argument

+ 20 - 21
docs/internals/protocol.rst

@@ -123,7 +123,7 @@ Changes from version 1
     - If a message uses raw encoding then the raw data
       will be passed as a single argument to the function.
 
-    - Java/C, etc. can use a thrift/protobuf document as the body
+    - Java/C, etc. can use a Thrift/protobuf document as the body
 
 - Dispatches to actor based on ``task``, ``meth`` headers
 
@@ -159,7 +159,6 @@ Changes from version 1
         from celery.utils.imports import qualname
 
         class PickleTask(Task):
-            abstract = True
 
             def unpack_args(self, fun, args=()):
                 return fun, args
@@ -188,41 +187,41 @@ to read the fields.
 Message body
 ~~~~~~~~~~~~
 
-* task
+* ``task``
     :`string`:
 
     Name of the task. **required**
 
-* id
+* ``id``
     :`string`:
 
     Unique id of the task (UUID). **required**
 
-* args
+* ``args``
     :`list`:
 
     List of arguments. Will be an empty list if not provided.
 
-* kwargs
+* ``kwargs``
     :`dictionary`:
 
     Dictionary of keyword arguments. Will be an empty dictionary if not
     provided.
 
-* retries
+* ``retries``
     :`int`:
 
     Current number of times this task has been retried.
     Defaults to `0` if not specified.
 
-* eta
+* ``eta``
     :`string` (ISO 8601):
 
     Estimated time of arrival. This is the date and time in ISO 8601
     format. If not provided the message is not scheduled, but will be
     executed asap.
 
-* expires
+* ``expires``
     :`string` (ISO 8601):
 
     .. versionadded:: 2.0.2
@@ -232,12 +231,12 @@ Message body
     will be expired when the message is received and the expiration date
     has been exceeded.
 
-* taskset
+* ``taskset``
     :`string`:
 
-    The taskset this task is part of (if any).
+    The group this task is part of (if any).
 
-* chord
+* ``chord``
     :`Signature`:
 
     .. versionadded:: 2.3
@@ -246,7 +245,7 @@ Message body
     of this key is the body of the chord that should be executed when all of
     the tasks in the header have returned.
 
-* utc
+* ``utc``
     :`bool`:
 
     .. versionadded:: 2.5
@@ -254,21 +253,21 @@ Message body
     If true, time uses the UTC timezone; if not, the current local timezone
     should be used.
 
-* callbacks
+* ``callbacks``
     :`<list>Signature`:
 
     .. versionadded:: 3.0
 
     A list of signatures to call if the task exited successfully.
 
-* errbacks
+* ``errbacks``
     :`<list>Signature`:
 
     .. versionadded:: 3.0
 
     A list of signatures to call if an error occurs while executing the task.
 
-* timelimit
+* ``timelimit``
     :`<tuple>(float, float)`:
 
     .. versionadded:: 3.1
@@ -277,7 +276,7 @@ Message body
     limit value (`int`/`float` or :const:`None` for no limit).
 
     Example value specifying a soft time limit of 3 seconds, and a hard time
-    limt of 10 seconds::
+    limit of 10 seconds::
 
         {'timelimit': (3.0, 10.0)}
 
@@ -285,7 +284,7 @@ Message body
 Example message
 ~~~~~~~~~~~~~~~
 
-This is an example invocation of a `celery.task.ping` task in JSON
+This is an example invocation of a `celery.task.ping` task in json
 format:
 
 .. code-block:: javascript
@@ -334,7 +333,7 @@ Standard body fields
 - *string* ``type``
 
     The type of event.  This is a string containing the *category* and
-    *action* separated by a dash delimeter (e.g. ``task-succeeded``).
+    *action* separated by a dash delimiter (e.g. ``task-succeeded``).
 
 - *string* ``hostname``
 
@@ -342,11 +341,11 @@ Standard body fields
 
 - *unsigned long long* ``clock``
 
-    The logical clock value for this event (Lamport timestamp).
+    The logical clock value for this event (Lamport time-stamp).
 
 - *float* ``timestamp``
 
-    The UNIX timestamp corresponding to the time of when the event occurred.
+    The UNIX time-stamp corresponding to the time of when the event occurred.
 
 - *signed short* ``utcoffset``
 
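The standard event body fields above could be assembled as in this hedged sketch (a hypothetical event dict; real events are emitted by the worker, and the ``utcoffset`` computation here is one possible assumption):

```python
import time

# Hypothetical event body using the standard fields described above.
event = {
    'type': 'task-succeeded',          # *category* and *action*, dash-delimited
    'hostname': 'worker1@example.com',
    'clock': 42,                       # logical (Lamport) clock value
    'timestamp': time.time(),          # UNIX time-stamp of when it occurred
    'utcoffset': -time.timezone // 3600,  # hours east of UTC (assumed scheme)
}

# The type string splits into category and action at the dash.
category, _, action = event['type'].partition('-')
```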

+ 1 - 1
docs/internals/reference/celery._state.rst

@@ -1,5 +1,5 @@
 ========================================
- celery._state
+ ``celery._state``
 ========================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.app.annotations.rst

@@ -1,5 +1,5 @@
 ==========================================
- celery.app.annotations
+ ``celery.app.annotations``
 ==========================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.app.routes.rst

@@ -1,5 +1,5 @@
 =================================
- celery.app.routes
+ ``celery.app.routes``
 =================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.app.trace.rst

@@ -1,5 +1,5 @@
 ==========================================
- celery.app.trace
+ ``celery.app.trace``
 ==========================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.backends.amqp.rst

@@ -1,5 +1,5 @@
 =======================================
- celery.backends.amqp
+ ``celery.backends.amqp``
 =======================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.backends.async.rst

@@ -1,5 +1,5 @@
 =====================================
- celery.backends.async
+ ``celery.backends.async``
 =====================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.backends.base.rst

@@ -1,5 +1,5 @@
 =====================================
- celery.backends.base
+ ``celery.backends.base``
 =====================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.backends.cache.rst

@@ -1,5 +1,5 @@
 ===========================================
- celery.backends.cache
+ ``celery.backends.cache``
 ===========================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.backends.cassandra.rst

@@ -1,5 +1,5 @@
 ================================================
- celery.backends.cassandra
+ ``celery.backends.cassandra``
 ================================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.backends.couchbase.rst

@@ -1,5 +1,5 @@
 ============================================
- celery.backends.couchbase
+ ``celery.backends.couchbase``
 ============================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.backends.couchdb.rst

@@ -1,5 +1,5 @@
 ===========================================
- celery.backends.couchdb
+ ``celery.backends.couchdb``
 ===========================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.backends.database.models.rst

@@ -1,5 +1,5 @@
 ======================================
- celery.backends.database.models
+ ``celery.backends.database.models``
 ======================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.backends.database.rst

@@ -1,5 +1,5 @@
 =========================================================
- celery.backends.database
+ ``celery.backends.database``
 =========================================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.backends.database.session.rst

@@ -1,5 +1,5 @@
 ========================================
- celery.backends.database.session
+ ``celery.backends.database.session``
 ========================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.backends.elasticsearch.rst

@@ -1,5 +1,5 @@
 ===========================================
- celery.backends.elasticsearch
+ ``celery.backends.elasticsearch``
 ===========================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.backends.filesystem.rst

@@ -1,5 +1,5 @@
 ==========================================
- celery.backends.filesystem
+ ``celery.backends.filesystem``
 ==========================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.backends.mongodb.rst

@@ -1,5 +1,5 @@
 ============================================
- celery.backends.mongodb
+ ``celery.backends.mongodb``
 ============================================
 
 .. contents::

+ 1 - 1
docs/internals/reference/celery.backends.redis.rst

@@ -1,5 +1,5 @@
 ==========================================
- celery.backends.redis
+ ``celery.backends.redis``
 ==========================================
 
 .. contents::

Some files were not shown because too many files were changed