
Documentation cleanup

Ask Solem 14 years ago
parent
commit
ddfce13e73

+ 1 - 1
Changelog

@@ -50,7 +50,7 @@ Important Notes
 
     It wasn't easy to find a way to deprecate the magic keyword arguments,
     but we think this is a solution that makes sense and it will not
-    have any adverse effects on existing code.
+    have any adverse effects for existing code.
 
     The path to a magic keyword argument free world is:
 

+ 4 - 4
docs/.templates/sidebarintro.html

@@ -1,4 +1,4 @@
-<h3>Celery</h3>
-<p>
-  Celery is a Distributed Task Queue for Python.
-</p>
+<p class="logo"><a href="{{ pathto(master_doc) }}">
+  <img class="logo" src="http://cloud.github.com/downloads/ask/celery/celery_favicon_128.png" alt="Logo"/>
+</a></p>
+

+ 1 - 0
docs/conf.py

@@ -21,6 +21,7 @@ import celery
 
 extensions = ['sphinx.ext.autodoc',
               'sphinx.ext.coverage',
+              'sphinx.ext.pngmath',
               'sphinxcontrib.issuetracker',
               'celerydocs']
 

+ 17 - 7
docs/configuration.rst

@@ -60,9 +60,10 @@ Concurrency settings
 CELERYD_CONCURRENCY
 ~~~~~~~~~~~~~~~~~~~
 
-The number of concurrent worker processes, executing tasks simultaneously.
+The number of concurrent worker processes/threads/green threads, executing
+tasks.
 
-Defaults to the number of CPUs/cores available.
+Defaults to the number of available CPUs.
 
 .. setting:: CELERYD_PREFETCH_MULTIPLIER
 
@@ -514,7 +515,11 @@ Broker Settings
 BROKER_BACKEND
 ~~~~~~~~~~~~~~
 
-The messaging backend to use. Default is `"amqplib"`.
+The Kombu transport to use.  Default is ``amqplib``.
+
+You can use a custom transport class name, or select one of the
+built-in transports: ``amqplib``, ``pika``, ``redis``, ``beanstalk``,
+``sqlalchemy``, ``django``, ``mongodb``, ``couchdb``.
 
 .. setting:: BROKER_HOST
 
@@ -620,7 +625,7 @@ CELERY_EAGER_PROPAGATES_EXCEPTIONS
 If this is :const:`True`, eagerly executed tasks (using `.apply`, or with
 :setting:`CELERY_ALWAYS_EAGER` on), will raise exceptions.
 
-It's the same as always running `apply` with `throw=True`.
+It's the same as always running `apply` with ``throw=True``.
 
 .. setting:: CELERY_IGNORE_RESULT
 
@@ -1125,8 +1130,12 @@ Custom Component Classes (advanced)
 CELERYD_POOL
 ~~~~~~~~~~~~
 
-Name of the task pool class used by the worker.
-Default is :class:`celery.concurrency.processes.TaskPool`.
+Name of the pool class used by the worker.
+
+You can use a custom pool class name, or select one of
+the built-in aliases: ``processes``, ``eventlet``, ``gevent``.
+
+Default is ``processes``.
 
 .. setting:: CELERYD_CONSUMER
 
@@ -1150,7 +1159,8 @@ CELERYD_ETA_SCHEDULER
 ~~~~~~~~~~~~~~~~~~~~~
 
 Name of the ETA scheduler class used by the worker.
-Default is :class:`celery.worker.controllers.ScheduleController`.
+Default is :class:`celery.utils.timer2.Timer`, or one overridden
+by the pool implementation.
 
 .. _conf-celerybeat:
 

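The settings touched above map directly onto a ``celeryconfig.py`` module. A minimal sketch, assuming configuration is loaded from such a module; the concrete values are illustrative only:

.. code-block:: python

    # celeryconfig.py -- illustrative values only
    BROKER_BACKEND = "amqplib"    # or "redis", "pika", ... (see the built-in transports above)
    BROKER_HOST = "localhost"

    CELERYD_CONCURRENCY = 8       # roughly one process per CPU with the processes pool
    CELERYD_POOL = "processes"    # or "eventlet" / "gevent"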
+ 0 - 3
docs/index.rst

@@ -1,6 +1,3 @@
-.. image:: images/celery_favicon_128.png
-   :class: celerylogo
-
 =================================
  Celery - Distributed Task Queue
 =================================

+ 19 - 17
docs/userguide/concurrency/eventlet.rst

@@ -9,11 +9,12 @@
 Introduction
 ============
 
-The `Eventlet`_ homepage describes eventlet as,
-a concurrent networking library for Python that allows you to
+The `Eventlet`_ homepage describes it as:
+A concurrent networking library for Python that allows you to
 change how you run your code, not how you write it.
 
-    * It uses epoll or libevent for `highly scalable non-blocking I/O`_.
+    * It uses `epoll(4)`_ or `libevent`_ for
+      `highly scalable non-blocking I/O`_.
     * `Coroutines`_ ensure that the developer uses a blocking style of
       programming that is similar to threading, but provide the benefits of
       non-blocking I/O.
@@ -22,22 +23,23 @@ change how you run your code, not how you write it.
 
 Celery supports Eventlet as an alternative execution pool implementation.
 It is in some cases superior to multiprocessing, but you need to ensure
-your tasks do not perform any blocking calls, as this will halt all
-other operations in the worker.
-
-The multiprocessing pool can take use of many processes, but it is often
-limited to a few processes per CPU.  With eventlet you can efficiently spawn
-hundreds of concurrent couroutines.  In an informal test with a feed hub
-system the Eventlet pool could fetch and process hundreds of feeds every
-second, while the multiprocessing pool spent 14 seconds processing 100 feeds.
-But this is exactly the kind of application evented I/O is good for.
-You may want a a mix of both eventlet and multiprocessing workers,
-depending on the needs of your tasks.
+your tasks do not perform blocking calls, as this will halt all
+other operations in the worker until the blocking call returns.
+
+The multiprocessing pool can make use of multiple processes, but how many is
+often limited to a few processes per CPU.  With Eventlet you can efficiently
+spawn hundreds, or thousands of green threads.  In an informal test with a
+feed hub system the Eventlet pool could fetch and process hundreds of feeds
+every second, while the multiprocessing pool spent 14 seconds processing 100
+feeds.  Note that this is one of the applications evented I/O is especially
+good at (asynchronous HTTP requests).  You may want a mix of both Eventlet
+and multiprocessing workers, and route tasks according to compatibility or
+what works best.
 
 Enabling Eventlet
 =================
 
-You can enable the Eventlet pool by using the `-P` option to
+You can enable the Eventlet pool by using the ``-P`` option to
 :program:`celeryd`::
 
     $ celeryd -P eventlet -c 1000
@@ -50,9 +52,9 @@ Examples
 See the `Eventlet examples`_ directory in the Celery distribution for
 some examples taking use of Eventlet support.
 
-
-
 .. _`Eventlet`: http://eventlet.net
+.. _`epoll(4)`: http://linux.die.net/man/4/epoll
+.. _`libevent`: http://monkey.org/~provos/libevent/
 .. _`highly scalable non-blocking I/O`:
     http://en.wikipedia.org/wiki/Asynchronous_I/O#Select.28.2Fpoll.29_loops
 .. _`Coroutines`: http://en.wikipedia.org/wiki/Coroutine

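A mix of Eventlet and multiprocessing workers, as suggested above, is usually wired up with queues: give each worker type its own queue and route tasks accordingly. A minimal sketch, assuming hypothetical task and queue names; the Eventlet worker would then be started with something like ``celeryd -P eventlet -c 1000 -Q io_bound`` and a processes worker with ``celeryd -Q cpu_bound``:

.. code-block:: python

    # celeryconfig.py -- hypothetical task and queue names
    CELERY_ROUTES = {
        "myapp.tasks.fetch_feed": {"queue": "io_bound"},     # network-bound: Eventlet worker
        "myapp.tasks.resize_image": {"queue": "cpu_bound"},  # CPU-bound: processes worker
    }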
+ 3 - 3
docs/userguide/executing.rst

@@ -13,7 +13,7 @@
 Basics
 ======
 
-Executing tasks is done with :meth:`~celery.task.Base.Task.apply_async`,
+Executing a task is done with :meth:`~celery.task.Base.Task.apply_async`,
 and the shortcut: :meth:`~celery.task.Base.Task.delay`.
 
 `delay` is simple and convenient, as it looks like calling a regular
@@ -36,8 +36,8 @@ available as attributes on the `Task` class (see :ref:`task-options`).
 In addition you can set countdown/eta, task expiry, provide a custom broker
 connection and more.
 
-Let's go over these in more detail.  All the examples use a simple task,
-called `add`, taking two positional arguments and returning the sum:
+Let's go over these in more detail.  All the examples use a simple task
+called `add`, returning the sum of two positional arguments:
 
 .. code-block:: python
 

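For context, the ``add`` task referred to above, and the two calling styles, look roughly like this (a sketch; the decorator import path is assumed for the Celery version documented here, and the ``countdown`` value is illustrative):

.. code-block:: python

    from celery.task import task  # import path assumed for this Celery version

    @task
    def add(x, y):
        return x + y

    # Shortcut form: positional arguments passed directly.
    add.delay(4, 4)

    # Full form: execution options such as a countdown (in seconds).
    add.apply_async(args=[4, 4], countdown=10)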
+ 25 - 24
docs/userguide/monitoring.rst

@@ -57,6 +57,7 @@ Commands
 
 * **purge**: Purge messages from all configured task queues.
     ::
+
         $ celeryctl purge
 
     .. warning::
@@ -115,9 +116,9 @@ Commands
 
 .. note::
 
-    All `inspect` commands supports a `--timeout` argument,
+    All ``inspect`` commands support a ``--timeout`` argument,
     This is the number of seconds to wait for responses.
-    You may have to increase this timeout if you're getting empty responses
+    You may have to increase this timeout if you're not getting a response
    due to latency.
 
 .. _celeryctl-inspect-destination:
@@ -188,21 +189,21 @@ Shutter frequency
 
 By default the camera takes a snapshot every second, if this is too frequent
 or you want to have higher precision, then you can change this using the
-`--frequency` argument.  This is a float describing how often, in seconds,
+``--frequency`` argument.  This is a float describing how often, in seconds,
 it should wake up to check if there are any new events::
 
     $ python manage.py celerycam --frequency=3.0
 
-The camera also supports rate limiting using the `--maxrate` argument.
+The camera also supports rate limiting using the ``--maxrate`` argument.
 While the frequency controls how often the camera thread wakes up,
 the rate limit controls how often it will actually take a snapshot.
 
 The rate limits can be specified in seconds, minutes or hours
 by appending `/s`, `/m` or `/h` to the value.
-Example: `--maxrate=100/m`, means "hundred writes a minute".
+Example: ``--maxrate=100/m`` means "hundred writes a minute".
 
 The rate limit is off by default, which means it will take a snapshot
-for every `--frequency` seconds.
+for every ``--frequency`` seconds.
 
 The events also expire after some time, so the database doesn't fill up.
 Successful tasks are deleted after 1 day, failed tasks after 3 days,
@@ -269,7 +270,7 @@ Now that the service is started you can visit the monitor
 at http://127.0.0.1:8000, and log in using the user you created.
 
 For a list of the command line options supported by :program:`djcelerymon`,
-please see `djcelerymon --help`.
+please see ``djcelerymon --help``.
 
 .. _monitoring-celeryev:
 
@@ -295,7 +296,7 @@ and it includes a tool to dump events to :file:`stdout`::
 
     $ celeryev --dump
 
-For a complete list of options use `--help`::
+For a complete list of options use ``--help``::
 
     $ celeryev --help
 
@@ -331,10 +332,10 @@ as manage users, virtual hosts and their permissions.
 
 .. note::
 
-    The default virtual host (`"/"`) is used in these
+    The default virtual host (``"/"``) is used in these
     examples, if you use a custom virtual host you have to add
-    the `-p` argument to the command, e.g:
-    `rabbitmqctl list_queues -p my_vhost ....`
+    the ``-p`` argument to the command, e.g:
+    ``rabbitmqctl list_queues -p my_vhost ....``
 
 .. _`rabbitmqctl(1)`: http://www.rabbitmq.com/man/rabbitmqctl.1.man.html
 
@@ -365,7 +366,7 @@ Finding the amount of memory allocated to a queue::
 
     $ rabbitmqctl list_queues name memory
 
-:Tip: Adding the `-q` option to `rabbitmqctl(1)`_ makes the output
+:Tip: Adding the ``-q`` option to `rabbitmqctl(1)`_ makes the output
       easier to parse.
 
@@ -421,7 +422,7 @@ write it to a database, send it by e-mail or something else entirely.
 
 :program:`celeryev` is then used to take snapshots with the camera,
 for example if you want to capture state every 2 seconds using the
-camera `myapp.Camera` you run :program:`celeryev` with the following
+camera ``myapp.Camera`` you run :program:`celeryev` with the following
 arguments::
 
     $ celeryev -c myapp.Camera --frequency=2.0
@@ -455,7 +456,7 @@ Here is an example camera, dumping the snapshot to screen:
 See the API reference for :mod:`celery.events.state` to read more
 about state objects.
 
-Now you can use this cam with `celeryev` by specifying
+Now you can use this cam with :program:`celeryev` by specifying
 it with the `-c` option::
 
     $ celeryev -c myapp.DumpCam --frequency=2.0
@@ -490,16 +491,16 @@ This list contains the events sent by the worker, and their arguments.
 Task Events
 ~~~~~~~~~~~
 
-* `task-received(uuid, name, args, kwargs, retries, eta, hostname,
-  timestamp)`
+* ``task-received(uuid, name, args, kwargs, retries, eta, hostname,
+  timestamp)``
 
     Sent when the worker receives a task.
 
-* `task-started(uuid, hostname, timestamp, pid)`
+* ``task-started(uuid, hostname, timestamp, pid)``
 
     Sent just before the worker executes the task.
 
-* `task-succeeded(uuid, result, runtime, hostname, timestamp)`
+* ``task-succeeded(uuid, result, runtime, hostname, timestamp)``
 
     Sent if the task executed successfully.
 
@@ -507,16 +508,16 @@ Task Events
     (Starting from the task is sent to the worker pool, and ending when the
     pool result handler callback is called).
 
-* `task-failed(uuid, exception, traceback, hostname, timestamp)`
+* ``task-failed(uuid, exception, traceback, hostname, timestamp)``
 
     Sent if the execution of the task failed.
 
-* `task-revoked(uuid)`
+* ``task-revoked(uuid)``
 
     Sent if the task has been revoked (Note that this is likely
     to be sent by more than one worker).
 
-* `task-retried(uuid, exception, traceback, hostname, timestamp)`
+* ``task-retried(uuid, exception, traceback, hostname, timestamp)``
 
     Sent if the task failed, but will be retried in the future.
 
@@ -525,7 +526,7 @@ Task Events
 Worker Events
 ~~~~~~~~~~~~~
 
-* `worker-online(hostname, timestamp, sw_ident, sw_ver, sw_sys)`
+* ``worker-online(hostname, timestamp, sw_ident, sw_ver, sw_sys)``
 
     The worker has connected to the broker and is online.
 
@@ -533,11 +534,11 @@ Worker Events
     * `sw_ver`: Software version (e.g. 2.2.0).
     * `sw_sys`: Operating System (e.g. Linux, Windows, Darwin).
 
-* `worker-heartbeat(hostname, timestamp, sw_ident, sw_ver, sw_sys)`
+* ``worker-heartbeat(hostname, timestamp, sw_ident, sw_ver, sw_sys)``
 
     Sent every minute, if the worker has not sent a heartbeat in 2 minutes,
     it is considered to be offline.
 
-* `worker-offline(hostname, timestamp, sw_ident, sw_ver, sw_sys)`
+* ``worker-offline(hostname, timestamp, sw_ident, sw_ver, sw_sys)``
 
     The worker has disconnected from the broker.

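The ``myapp.Camera`` and ``myapp.DumpCam`` classes used in the commands above are user code. A minimal sketch of such a camera, assuming a ``celery.events.snapshot.Polaroid`` base class with an ``on_shutter`` hook (an assumption; the class itself is not shown in this diff):

.. code-block:: python

    from pprint import pformat

    from celery.events.snapshot import Polaroid  # assumed base class

    class DumpCam(Polaroid):

        def on_shutter(self, state):
            # ``state`` holds the in-memory cluster state built from the
            # event stream (see :mod:`celery.events.state`).
            print("Workers: %s" % pformat(state.workers))
            print("Tasks: %s" % pformat(state.tasks))

It would then be run as shown above: ``celeryev -c myapp.DumpCam --frequency=2.0``.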
+ 39 - 39
docs/userguide/periodic-tasks.rst

@@ -117,45 +117,45 @@ the `crontab` schedule type:
 
 
 The syntax of these crontab expressions are very flexible.  Some examples:
 
 
-+-------------------------------------+--------------------------------------------+
-| **Example**                         | **Meaning**                                |
-+-------------------------------------+--------------------------------------------+
-| crontab()                           | Execute every minute.                      |
-+-------------------------------------+--------------------------------------------+
-| crontab(minute=0, hour=0)           | Execute daily at midnight.                 |
-+-------------------------------------+--------------------------------------------+
-| crontab(minute=0, hour="\*/3")      | Execute every three hours:                 |
-|                                     | 3am, 6am, 9am, noon, 3pm, 6pm, 9pm.        |
-+-------------------------------------+--------------------------------------------+
-| crontab(minute=0,                   | Same as previous.                          |
-|         hour=[0,3,6,9,12,15,18,21]) |                                            |
-+-------------------------------------+--------------------------------------------+
-| crontab(minute="\*/15")             | Execute every 15 minutes.                  |
-+-------------------------------------+--------------------------------------------+
-| crontab(day_of_week="sunday")       | Execute every minute (!) at Sundays.       |
-+-------------------------------------+--------------------------------------------+
-| crontab(minute="*",                 | Same as previous.                          |
-|         hour="*",                   |                                            |
-|         day_of_week="sun")          |                                            |
-+-------------------------------------+--------------------------------------------+
-| crontab(minute="\*/10",             | Execute every ten minutes, but only        |
-|         hour="3,17,22",             | between 3-4 am, 5-6 pm and 10-11 pm on     |
-|         day_of_week="thu,fri")      | Thursdays or Fridays.                      |
-+-------------------------------------+--------------------------------------------+
-| crontab(minute=0, hour="\*/2,\*/3") | Execute every even hour, and every hour    |
-|                                     | divisible by three. This means:            |
-|                                     | at every hour *except*: 1am,               |
-|                                     | 5am, 7am, 11am, 1pm, 5pm, 7pm,             |
-|                                     | 11pm                                       |
-+-------------------------------------+--------------------------------------------+
-| crontab(minute=0, hour="\*/5")      | Execute hour divisible by 5. This means    |
-|                                     | that it is triggered at 3pm, not 5pm       |
-|                                     | (since 3pm equals the 24-hour clock        |
-|                                     | value of "15", which is divisible by 5).   |
-+-------------------------------------+--------------------------------------------+
-| crontab(minute=0, hour="\*/3,8-17") | Execute every hour divisible by 3, and     |
-|                                     | every hour during office hours (8am-5pm).  |
-+-------------------------------------+--------------------------------------------+
++-----------------------------------------+--------------------------------------------+
+| **Example**                             | **Meaning**                                |
++-----------------------------------------+--------------------------------------------+
+| ``crontab()``                           | Execute every minute.                      |
++-----------------------------------------+--------------------------------------------+
+| ``crontab(minute=0, hour=0)``           | Execute daily at midnight.                 |
++-----------------------------------------+--------------------------------------------+
+| ``crontab(minute=0, hour="*/3")``       | Execute every three hours:                 |
+|                                         | 3am, 6am, 9am, noon, 3pm, 6pm, 9pm.        |
++-----------------------------------------+--------------------------------------------+
+| ``crontab(minute=0,``                   | Same as previous.                          |
+|         ``hour=[0,3,6,9,12,15,18,21])`` |                                            |
++-----------------------------------------+--------------------------------------------+
+| ``crontab(minute="*/15")``              | Execute every 15 minutes.                  |
++-----------------------------------------+--------------------------------------------+
+| ``crontab(day_of_week="sunday")``       | Execute every minute (!) on Sundays.       |
++-----------------------------------------+--------------------------------------------+
+| ``crontab(minute="*",``                 | Same as previous.                          |
+|         ``hour="*",``                   |                                            |
+|         ``day_of_week="sun")``          |                                            |
++-----------------------------------------+--------------------------------------------+
+| ``crontab(minute="*/10",``              | Execute every ten minutes, but only        |
+|         ``hour="3,17,22",``             | between 3-4 am, 5-6 pm and 10-11 pm on     |
+|         ``day_of_week="thu,fri")``      | Thursdays or Fridays.                      |
++-----------------------------------------+--------------------------------------------+
+| ``crontab(minute=0, hour="*/2,*/3")``   | Execute every even hour, and every hour    |
+|                                         | divisible by three. This means:            |
+|                                         | at every hour *except*: 1am,               |
+|                                         | 5am, 7am, 11am, 1pm, 5pm, 7pm,             |
+|                                         | 11pm                                       |
++-----------------------------------------+--------------------------------------------+
+| ``crontab(minute=0, hour="*/5")``       | Execute hour divisible by 5. This means    |
+|                                         | that it is triggered at 3pm, not 5pm       |
+|                                         | (since 3pm equals the 24-hour clock        |
+|                                         | value of "15", which is divisible by 5).   |
++-----------------------------------------+--------------------------------------------+
+| ``crontab(minute=0, hour="*/3,8-17")``  | Execute every hour divisible by 3, and     |
+|                                         | every hour during office hours (8am-5pm).  |
++-----------------------------------------+--------------------------------------------+
 
 
 .. _beat-starting:
 
 

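A crontab expression from the table is typically plugged into the ``CELERYBEAT_SCHEDULE`` setting. A minimal sketch, assuming the ``celery.schedules`` import path and a hypothetical ``myapp.tasks.cleanup`` task:

.. code-block:: python

    from celery.schedules import crontab  # import path assumed for this Celery version

    CELERYBEAT_SCHEDULE = {
        "nightly-cleanup": {
            "task": "myapp.tasks.cleanup",          # hypothetical task name
            "schedule": crontab(minute=0, hour=0),  # daily at midnight, as in the table
        },
    }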
+ 26 - 26
docs/userguide/routing.rst

@@ -138,7 +138,7 @@ You can also override this using the `routing_key` argument to
 
 
 To make server `z` consume from the feed queue exclusively you can
-start it with the `-Q` option::
+start it with the ``-Q`` option::
 
     (z)$ celeryd -Q feed_tasks --hostname=z.example.com
 
@@ -302,12 +302,12 @@ Topic exchanges
 ~~~~~~~~~~~~~~~
 
 Topic exchanges matches routing keys using dot-separated words, and the
-wildcard characters: `*` (matches a single word), and `#` (matches
+wildcard characters: ``*`` (matches a single word), and ``#`` (matches
 zero or more words).
 
-With routing keys like `usa.news`, `usa.weather`, `norway.news` and
-`norway.weather`, bindings could be `*.news` (all news), `usa.#` (all
-items in the USA) or `usa.weather` (all USA weather items).
+With routing keys like ``usa.news``, ``usa.weather``, ``norway.news`` and
+``norway.weather``, bindings could be ``*.news`` (all news), ``usa.#`` (all
+items in the USA) or ``usa.weather`` (all USA weather items).
 
 .. _amqp-api:
 
@@ -367,16 +367,16 @@ It's used for command-line access to the AMQP API, enabling access to
 administration tasks like creating/deleting queues and exchanges, purging
 queues or sending messages.
 
-You can write commands directly in the arguments to `camqadm`, or just start
-with no arguments to start it in shell-mode::
+You can write commands directly in the arguments to :program:`camqadm`,
+or just start with no arguments to start it in shell-mode::
 
     $ camqadm
     -> connecting to amqp://guest@localhost:5672/.
     -> connected.
     1>
 
-Here `1>` is the prompt.  The number 1, is the number of commands you
-have executed so far.  Type `help` for a list of commands available.
+Here ``1>`` is the prompt.  The number 1 is the number of commands you
+have executed so far.  Type ``help`` for a list of commands available.
 It also supports auto-completion, so you can start typing a command and then
 hit the `tab` key to show a list of possible matches.
 
@@ -389,21 +389,19 @@ Let's create a queue we can send messages to::
     3> queue.bind testqueue testexchange testkey
     ok.
 
-This created the direct exchange `testexchange`, and a queue
-named `testqueue`.  The queue is bound to the exchange using
-the routing key `testkey`.
+This created the direct exchange ``testexchange``, and a queue
+named ``testqueue``.  The queue is bound to the exchange using
+the routing key ``testkey``.
 
-From now on all messages sent to the exchange `testexchange` with routing
-key `testkey` will be moved to this queue.  We can send a message by
-using the `basic.publish` command::
+From now on all messages sent to the exchange ``testexchange`` with routing
+key ``testkey`` will be moved to this queue.  We can send a message by
+using the ``basic.publish`` command::
 
     4> basic.publish "This is a message!" testexchange testkey
     ok.
 
 Now that the message is sent we can retrieve it again.  We use the
-`basic.get` command here, which pops a single message off the queue,
-this command is not recommended for production as it implies polling, any
-real application would declare consumers instead.
+``basic.get`` command here, which polls for new messages on the queue.
 
 Pop a message off the queue::
 
@@ -422,13 +420,13 @@ and processed successfully.  If the message has not been acknowledged
 and consumer channel is closed, the message will be delivered to
 another consumer.
 
-Note the delivery tag listed in the structure above; Within a connection channel,
-every received message has a unique delivery tag,
+Note the delivery tag listed in the structure above; Within a connection
+channel, every received message has a unique delivery tag,
 This tag is used to acknowledge the message.  Also note that
 delivery tags are not unique across connections, so in another client
 the delivery tag `1` might point to a different message than in this channel.
 
-You can acknowledge the message we received using `basic.ack`::
+You can acknowledge the message we received using ``basic.ack``::
 
     6> basic.ack 1
     ok.
@@ -510,7 +508,7 @@ Routers
 A router is a class that decides the routing options for a task.
 
 All you need to define a new router is to create a class with a
-`route_for_task` method:
+``route_for_task`` method:
 
 .. code-block:: python
 
@@ -523,7 +521,7 @@ All you need to define a new router is to create a class with a
                         "routing_key": "video.compress"}
             return None
 
-If you return the `queue` key, it will expand with the defined settings of
+If you return the ``queue`` key, it will expand with the defined settings of
 that queue in :setting:`CELERY_QUEUES`::
 
     {"queue": "video", "routing_key": "video.compress"}
@@ -536,7 +534,8 @@ that queue in :setting:`CELERY_QUEUES`::
          "routing_key": "video.compress"}
 
 
-You install router classes by adding them to the :setting:`CELERY_ROUTES` setting::
+You install router classes by adding them to the :setting:`CELERY_ROUTES`
+setting::
 
     CELERY_ROUTES = (MyRouter, )
 
@@ -545,8 +544,9 @@ Router classes can also be added by name::
     CELERY_ROUTES = ("myapp.routers.MyRouter", )
 
 
-For simple task name -> route mappings like the router example above, you can simply
-drop a dict into :setting:`CELERY_ROUTES` to get the same behavior:
+For simple task name -> route mappings like the router example above,
+you can simply drop a dict into :setting:`CELERY_ROUTES` to get the
+same behavior:
 
 .. code-block:: python
 

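The dict form mentioned in the last hunk, applied to the router example, would look roughly like this (a sketch; the task name is hypothetical, the queue options come from the example above):

.. code-block:: python

    # Same effect as MyRouter for a single task, expressed as a plain mapping.
    CELERY_ROUTES = {"myapp.tasks.compress_video": {"queue": "video",
                                                    "routing_key": "video.compress"}}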
+ 2 - 2
docs/userguide/tasksets.rst

@@ -74,8 +74,8 @@ Now let's execute our new `add` task with a callback::
 
     >>> add.delay(2, 2, callback=add.subtask((8, )))
 
-As expected this will first launch one task calculating `2 + 2`, then
-another task calculating `4 + 8`.
+As expected this will first launch one task calculating :math:`2 + 2`, then
+another task calculating :math:`4 + 8`.
 
 .. _sets-taskset:
 

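For reference, an ``add`` task that accepts the optional callback used above could be sketched like this (the ``celery.task.sets.subtask`` import path is assumed for this Celery version):

.. code-block:: python

    from celery.task import task
    from celery.task.sets import subtask  # import path assumed for this Celery version

    @task
    def add(x, y, callback=None):
        result = x + y
        if callback is not None:
            # Apply the callback subtask with the result as an extra argument.
            subtask(callback).delay(result)
        return result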
+ 15 - 12
docs/userguide/workers.rst

@@ -27,7 +27,7 @@ For a full list of available command line options see
 
 You can also start multiple workers on the same machine. If you do so
 be sure to give a unique name to each individual worker by specifying a
-host name with the `--hostname|-n` argument::
+host name with the :option:`--hostname|-n` argument::
 
     $ celeryd --loglevel=INFO --concurrency=10 -n worker1.example.com
     $ celeryd --loglevel=INFO --concurrency=10 -n worker2.example.com
@@ -75,17 +75,20 @@ arguments as it was started with.
 Concurrency
 ===========
 
-Multiprocessing is used to perform concurrent execution of tasks.  The number
-of worker processes can be changed using the `--concurrency` argument and
-defaults to the number of CPUs available on the machine.
-
-More worker processes are usually better, but there's a cut-off point where
-adding more processes affects performance in negative ways.
-There is even some evidence to support that having multiple celeryd's running,
-may perform better than having a single worker.  For example 3 celeryd's with
-10 worker processes each.  You need to experiment to find the numbers that
-works best for you, as this varies based on application, work load, task
-run times and other factors.
+By default multiprocessing is used to perform concurrent execution of tasks,
+but you can also use :ref:`Eventlet <concurrency-eventlet>`.  The number
+of worker processes/threads can be changed using the :option:`--concurrency`
+argument and defaults to the number of CPUs available on the machine.
+
+.. admonition:: Number of processes (multiprocessing)
+
+    More worker processes are usually better, but there's a cut-off point where
+    adding more processes affects performance in negative ways.
+    There is even some evidence to support that having multiple celeryd's
+    running may perform better than having a single worker.  For example
+    3 celeryd's with 10 worker processes each.  You need to experiment to find
+    the numbers that work best for you, as this varies based on application,
+    work load, task run times and other factors.
 
 .. _worker-persistent-revokes: