
Docs: Use better Pygments highlighters

Ask Solem, 9 years ago
commit 1a953d6aa2
50 changed files with 791 additions and 524 deletions
  1. celery/app/task.py (+1 -1)
  2. celery/bin/multi.py (+27 -27)
  3. celery/platforms.py (+1 -1)
  4. celery/result.py (+1 -1)
  5. celery/tests/security/test_security.py (+1 -1)
  6. celery/utils/serialization.py (+1 -1)
  7. docs/configuration.rst (+7 -7)
  8. docs/contributing.rst (+27 -27)
  9. docs/django/first-steps-with-django.rst (+8 -8)
  10. docs/faq.rst (+8 -8)
  11. docs/getting-started/brokers/beanstalk.rst (+1 -1)
  12. docs/getting-started/brokers/couchdb.rst (+1 -1)
  13. docs/getting-started/brokers/django.rst (+1 -1)
  14. docs/getting-started/brokers/ironmq.rst (+1 -1)
  15. docs/getting-started/brokers/mongodb.rst (+1 -1)
  16. docs/getting-started/brokers/rabbitmq.rst (+16 -14)
  17. docs/getting-started/brokers/redis.rst (+1 -1)
  18. docs/getting-started/brokers/sqs.rst (+1 -1)
  19. docs/getting-started/first-steps-with-celery.rst (+8 -8)
  20. docs/getting-started/next-steps.rst (+88 -47)
  21. docs/history/changelog-1.0.rst (+9 -9)
  22. docs/history/changelog-2.0.rst (+12 -12)
  23. docs/history/changelog-2.1.rst (+13 -13)
  24. docs/history/changelog-2.2.rst (+4 -4)
  25. docs/history/changelog-2.3.rst (+1 -1)
  26. docs/history/changelog-2.4.rst (+1 -1)
  27. docs/history/changelog-2.5.rst (+2 -2)
  28. docs/history/changelog-3.0.rst (+8 -8)
  29. docs/history/changelog-3.1.rst (+5 -5)
  30. docs/includes/installation.txt (+1 -1)
  31. docs/internals/guide.rst (+1 -1)
  32. docs/internals/protocol.rst (+19 -15)
  33. docs/reference/celery.rst (+2 -2)
  34. docs/tutorials/daemonizing.rst (+4 -4)
  35. docs/tutorials/debugging.rst (+1 -1)
  36. docs/userguide/application.rst (+10 -10)
  37. docs/userguide/calling.rst (+9 -7)
  38. docs/userguide/canvas.rst (+200 -80)
  39. docs/userguide/concurrency/eventlet.rst (+1 -1)
  40. docs/userguide/extending.rst (+9 -3)
  41. docs/userguide/monitoring.rst (+31 -31)
  42. docs/userguide/optimizing.rst (+2 -2)
  43. docs/userguide/periodic-tasks.rst (+11 -11)
  44. docs/userguide/remote-tasks.rst (+19 -7)
  45. docs/userguide/routing.rst (+27 -15)
  46. docs/userguide/tasks.rst (+19 -9)
  47. docs/userguide/workers.rst (+72 -48)
  48. docs/whatsnew-2.5.rst (+5 -5)
  49. docs/whatsnew-3.0.rst (+69 -35)
  50. docs/whatsnew-3.1.rst (+23 -23)

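For context, the substitutions applied throughout are from the generic ``bash`` and ``python`` lexers to more specific Pygments lexers: ``console`` for shell sessions shown with a ``$`` prompt, ``pycon`` for interactive Python sessions shown with ``>>>`` prompts, and in a few places ``pytb`` for Python tracebacks and ``javascript`` for JSON-like data. A minimal before/after sketch, echoing the hunks below rather than quoting any single file verbatim:

    Before:

    .. code-block:: bash

        $ pip install celery

    .. code-block:: python

        >>> add.delay(2, 2)

    After:

    .. code-block:: console

        $ pip install celery

    .. code-block:: pycon

        >>> add.delay(2, 2)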
+ 1 - 1
celery/app/task.py

@@ -580,7 +580,7 @@ class Task(object):
 
         **Example**
 
-        .. code-block:: python
+        .. code-block:: pycon
 
             >>> from imaginary_twitter_lib import Twitter
             >>> from proj.celery import app

+ 27 - 27
celery/bin/multi.py

@@ -6,79 +6,79 @@
 Examples
 ========
 
-.. code-block:: bash
+.. code-block:: console
 
-    # Single worker with explicit name and events enabled.
+    $ # Single worker with explicit name and events enabled.
     $ celery multi start Leslie -E
 
-    # Pidfiles and logfiles are stored in the current directory
-    # by default.  Use --pidfile and --logfile argument to change
-    # this.  The abbreviation %n will be expanded to the current
-    # node name.
+    $ # Pidfiles and logfiles are stored in the current directory
+    $ # by default.  Use --pidfile and --logfile argument to change
+    $ # this.  The abbreviation %n will be expanded to the current
+    $ # node name.
     $ celery multi start Leslie -E --pidfile=/var/run/celery/%n.pid
                                    --logfile=/var/log/celery/%n%I.log
 
 
-    # You need to add the same arguments when you restart,
-    # as these are not persisted anywhere.
+    $ # You need to add the same arguments when you restart,
+    $ # as these are not persisted anywhere.
     $ celery multi restart Leslie -E --pidfile=/var/run/celery/%n.pid
                                      --logfile=/var/run/celery/%n%I.log
 
-    # To stop the node, you need to specify the same pidfile.
+    $ # To stop the node, you need to specify the same pidfile.
     $ celery multi stop Leslie --pidfile=/var/run/celery/%n.pid
 
-    # 3 workers, with 3 processes each
+    $ # 3 workers, with 3 processes each
     $ celery multi start 3 -c 3
     celery worker -n celery1@myhost -c 3
     celery worker -n celery2@myhost -c 3
     celery worker -n celery3@myhost -c 3
 
-    # start 3 named workers
+    $ # start 3 named workers
     $ celery multi start image video data -c 3
     celery worker -n image@myhost -c 3
     celery worker -n video@myhost -c 3
     celery worker -n data@myhost -c 3
 
-    # specify custom hostname
+    $ # specify custom hostname
     $ celery multi start 2 --hostname=worker.example.com -c 3
     celery worker -n celery1@worker.example.com -c 3
     celery worker -n celery2@worker.example.com -c 3
 
-    # specify fully qualified nodenames
+    $ # specify fully qualified nodenames
     $ celery multi start foo@worker.example.com bar@worker.example.com -c 3
 
-    # fully qualified nodenames but using the current hostname
+    $ # fully qualified nodenames but using the current hostname
     $ celery multi start foo@%h bar@%h
 
-    # Advanced example starting 10 workers in the background:
-    #   * Three of the workers processes the images and video queue
-    #   * Two of the workers processes the data queue with loglevel DEBUG
-    #   * the rest processes the default' queue.
+    $ # Advanced example starting 10 workers in the background:
+    $ #   * Three of the workers processes the images and video queue
+    $ #   * Two of the workers processes the data queue with loglevel DEBUG
+    $ #   * the rest processes the default' queue.
     $ celery multi start 10 -l INFO -Q:1-3 images,video -Q:4,5 data
         -Q default -L:4,5 DEBUG
 
-    # You can show the commands necessary to start the workers with
-    # the 'show' command:
+    $ # You can show the commands necessary to start the workers with
+    $ # the 'show' command:
     $ celery multi show 10 -l INFO -Q:1-3 images,video -Q:4,5 data
         -Q default -L:4,5 DEBUG
 
-    # Additional options are added to each celery worker' comamnd,
-    # but you can also modify the options for ranges of, or specific workers
+    $ # Additional options are added to each celery worker' comamnd,
+    $ # but you can also modify the options for ranges of, or specific workers
 
-    # 3 workers: Two with 3 processes, and one with 10 processes.
+    $ # 3 workers: Two with 3 processes, and one with 10 processes.
     $ celery multi start 3 -c 3 -c:1 10
     celery worker -n celery1@myhost -c 10
     celery worker -n celery2@myhost -c 3
     celery worker -n celery3@myhost -c 3
 
-    # can also specify options for named workers
+    $ # can also specify options for named workers
     $ celery multi start image video data -c 3 -c:image 10
     celery worker -n image@myhost -c 10
     celery worker -n video@myhost -c 3
     celery worker -n data@myhost -c 3
 
-    # ranges and lists of workers in options is also allowed:
-    # (-c:1-3 can also be written as -c:1,2,3)
+    $ # ranges and lists of workers in options is also allowed:
+    $ # (-c:1-3 can also be written as -c:1,2,3)
     $ celery multi start 5 -c 3  -c:1-3 10
     celery worker -n celery1@myhost -c 10
     celery worker -n celery2@myhost -c 10
@@ -86,7 +86,7 @@ Examples
     celery worker -n celery4@myhost -c 3
     celery worker -n celery5@myhost -c 3
 
-    # lists also works with named workers
+    $ # lists also works with named workers
     $ celery multi start foo bar baz xuzzy -c 3 -c:foo,bar,baz 10
     celery worker -n foo@myhost -c 10
     celery worker -n bar@myhost -c 10

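An editorial note on the pattern above, not part of the commit: the Pygments ``console`` lexer treats lines that begin with a ``$`` prompt as commands and prompt-less lines as program output, which is presumably why the plain ``# ...`` comment lines were rewritten as ``$ # ...``, keeping them rendered as shell comments typed at the prompt rather than as output. For example:

    .. code-block:: console

        $ # Single worker with explicit name and events enabled.
        $ celery multi start Leslie -E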
+ 1 - 1
celery/platforms.py

@@ -515,7 +515,7 @@ class Signals(object):
 
     **Examples**:
 
-    .. code-block:: python
+    .. code-block:: pycon
 
         >>> from celery.platforms import signals
 

+ 1 - 1
celery/result.py

@@ -219,7 +219,7 @@ class AsyncResult(ResultBase):
 
         Calling :meth:`collect` would return:
 
-        .. code-block:: python
+        .. code-block:: pycon
 
             >>> from celery.result import ResultBase
             >>> from proj.tasks import A

+ 1 - 1
celery/tests/security/test_security.py

@@ -3,7 +3,7 @@ Keys and certificates for tests (KEY1 is a private key of CERT1, etc.)
 
 Generated with:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ openssl genrsa -des3 -passout pass:test -out key1.key 1024
     $ openssl req -new -key key1.key -out key1.csr -passin pass:test

+ 1 - 1
celery/utils/serialization.py

@@ -86,7 +86,7 @@ class UnpickleableExceptionWrapper(Exception):
 
     **Example**
 
-    .. code-block:: python
+    .. code-block:: pycon
 
         >>> def pickle_it(raising_function):
         ...     try:

+ 7 - 7
docs/configuration.rst

@@ -434,7 +434,7 @@ Configuring the backend URL
 
     To install the redis package use `pip` or `easy_install`:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ pip install redis
 
@@ -540,7 +540,7 @@ Cassandra backend settings
 
     To install the pycassa package use `pip` or `easy_install`:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ pip install pycassa
 
@@ -636,7 +636,7 @@ Riak backend settings
 
     To install the riak package use `pip` or `easy_install`:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ pip install riak
 
@@ -702,7 +702,7 @@ IronCache backend settings
 
     To install the iron_celery package use `pip` or `easy_install`:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ pip install iron_celery
 
@@ -729,7 +729,7 @@ Couchbase backend settings
 
     To install the couchbase package use `pip` or `easy_install`:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ pip install couchbase
 
@@ -775,7 +775,7 @@ CouchDB backend settings
 
     To install the couchbase package use `pip` or `easy_install`:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ pip install pycouchdb
 
@@ -967,7 +967,7 @@ With the follow settings:
 
 The final routing options for ``tasks.add`` will become:
 
-.. code-block:: python
+.. code-block:: javascript
 
     {"exchange": "cpubound",
      "routing_key": "tasks.add",

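One substitution above is different in kind: the last hunk switches the routing-options example from ``python`` to ``javascript``. The block shows a static dictionary of options rather than an interactive session, and (an editorial guess, not stated in the commit) the JSON-oriented ``javascript`` lexer highlights such key/value data more readably than the Python lexer:

    .. code-block:: javascript

        {"exchange": "cpubound",
         "routing_key": "tasks.add",
         ...}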
+ 27 - 27
docs/contributing.rst

@@ -214,7 +214,7 @@ spelling or other errors on the website/docs/code.
 
     D) Include the output from the `celery report` command:
 
-        .. code-block:: bash
+        .. code-block:: console
 
             $ celery -A proj report
 
@@ -402,14 +402,14 @@ is in the Github Guide: `Fork a Repo`_.
 After you have cloned the repository you should checkout your copy
 to a directory on your machine:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ git clone git@github.com:username/celery.git
 
 When the repository is cloned enter the directory to set up easy access
 to upstream changes:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ cd celery
     $ git remote add upstream git://github.com/celery/celery.git
@@ -418,7 +418,7 @@ to upstream changes:
 If you need to pull in new changes from upstream you should
 always use the :option:`--rebase` option to ``git pull``:
 
-.. code-block:: bash
+.. code-block:: console
 
     git pull --rebase upstream master
 
@@ -448,14 +448,14 @@ A complete list of the dependencies needed are located in
 
 Installing the test requirements:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install -U -r requirements/test.txt
 
 When installation of dependencies is complete you can execute
 the test suite by calling ``nosetests``:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ nosetests
 
@@ -480,7 +480,7 @@ Some useful options to :program:`nosetests` are:
 If you want to run the tests for a single test file only
 you can do so like this:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ nosetests celery.tests.test_worker.test_worker_job
 
@@ -510,13 +510,13 @@ To calculate test coverage you must first install the :mod:`coverage` module.
 
 Installing the :mod:`coverage` module:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install -U coverage
 
 Code coverage in HTML:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ nosetests --with-coverage --cover-html
 
@@ -525,7 +525,7 @@ The coverage output will then be located at
 
 Code coverage in XML (Cobertura-style):
 
-.. code-block:: bash
+.. code-block:: console
 
     $ nosetests --with-coverage --cover-xml --cover-xml-file=coverage.xml
 
@@ -541,16 +541,16 @@ distribution.
 
 To run the tests for all supported Python versions simply execute:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ tox
 
 If you only want to test specific Python versions use the :option:`-e`
 option:
 
-.. code-block:: bash
+.. code-block:: console
 
-    $ tox -e py26
+    $ tox -e 2.7
 
 Building the documentation
 --------------------------
@@ -558,14 +558,14 @@ Building the documentation
 To build the documentation you need to install the dependencies
 listed in :file:`requirements/docs.txt`:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install -U -r requirements/docs.txt
 
 After these dependencies are installed you should be able to
 build the docs by running:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ cd docs
     $ rm -rf .build
@@ -584,7 +584,7 @@ can be found in :file:`requirements/pkgutils.txt`.
 
 Installing the dependencies:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install -U -r requirements/pkgutils.txt
 
@@ -594,14 +594,14 @@ pyflakes & PEP8
 To ensure that your changes conform to PEP8 and to run pyflakes
 execute:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ make flakecheck
 
 To not return a negative exit code when this command fails use
 the ``flakes`` target instead:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ make flakes
 
@@ -611,7 +611,7 @@ API reference
 To make sure that all modules have a corresponding section in the API
 reference please execute:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ make apicheck
     $ make indexcheck
@@ -628,14 +628,14 @@ and this module is considered part of the public API, use the following steps:
 
 Use an existing file as a template:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ cd docs/reference/
     $ cp celery.schedules.rst celery.worker.awesome.rst
 
 Edit the file using your favorite editor:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ vim celery.worker.awesome.rst
 
@@ -645,7 +645,7 @@ Edit the file using your favorite editor:
 
 Edit the index using your favorite editor:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ vim index.rst
 
@@ -654,7 +654,7 @@ Edit the index using your favorite editor:
 
 Commit your changes:
 
-.. code-block:: bash
+.. code-block:: console
 
     # Add the file to git
     $ git add celery.worker.awesome.rst
@@ -838,7 +838,7 @@ that require 3rd party libraries must be added.
     After you've made changes to this file you need to render
     the distro :file:`README` file:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ pip install -U requirements/pkgutils.txt
         $ make readme
@@ -1045,19 +1045,19 @@ the :file:`README` files.  There is a script to convert sphinx syntax
 to generic reStructured Text syntax, and the make target `readme`
 does this for you:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ make readme
 
 Now commit the changes:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ git commit -a -m "Bumps version to X.Y.Z"
 
 and make a new version tag:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ git tag vX.Y.Z
     $ git push --tags

+ 8 - 8
docs/django/first-steps-with-django.rst

@@ -55,7 +55,7 @@ first we import absolute imports from the future, so that our
 
     from __future__ import absolute_import
 
-Then we set the default :envvar:`DJANGO_SETTINGS_MODULE` 
+Then we set the default :envvar:`DJANGO_SETTINGS_MODULE` environment variable
 for the :program:`celery` command-line program:
 
 .. code-block:: python
@@ -137,14 +137,14 @@ concrete app instance:
 Using the Django ORM/Cache as a result backend.
 -----------------------------------------------
 
-The [``django-celery``](https://github.com/celery/django-celery) library defines result backends that
-uses the Django ORM and Django Cache frameworks.
+The [``django-celery``](https://github.com/celery/django-celery) library defines
+result backends that uses the Django ORM and Django Cache frameworks.
 
 To use this with your project you need to follow these four steps:
 
 1. Install the ``django-celery`` library:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ pip install django-celery
 
@@ -159,13 +159,13 @@ To use this with your project you need to follow these four steps:
 
     If you are using south_ for schema migrations, you'll want to:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ python manage.py migrate djcelery
 
     For those who are not using south, a normal ``syncdb`` will work:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ python manage.py syncdb
 
@@ -212,7 +212,7 @@ as a daemon - see :ref:`daemonizing` - but for testing and
 development it is useful to be able to start a worker instance by using the
 ``celery worker`` manage command, much as you would use Django's runserver:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj worker -l info
 
@@ -220,7 +220,7 @@ development it is useful to be able to start a worker instance by using the
 For a complete listing of the command-line options available,
 use the help command:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery help
 

+ 8 - 8
docs/faq.rst

@@ -306,7 +306,7 @@ Why aren't my tasks processed?
 **Answer:** With RabbitMQ you can see how many consumers are currently
 receiving tasks by running the following command:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ rabbitmqctl list_queues -p <myvhost> name messages consumers
     Listing queues ...
@@ -366,13 +366,13 @@ How do I purge all waiting tasks?
 **Answer:** You can use the ``celery purge`` command to purge
 all configured task queues:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj purge
 
 or programatically:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> from proj.celery import app
     >>> app.control.purge()
@@ -381,7 +381,7 @@ or programatically:
 If you only want to purge messages from a specific queue
 you have to use the AMQP API or the :program:`celery amqp` utility:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj amqp queue.purge <queue name>
 
@@ -523,7 +523,7 @@ setting.
 If you don't use the results for a task, make sure you set the
 `ignore_result` option:
 
-.. code-block python
+.. code-block:: python
 
     @app.task(ignore_result=True)
     def mytask():
@@ -705,7 +705,7 @@ control commands will be received in round-robin between them.
 To work around this you can explicitly set the nodename for every worker
 using the :option:`-n` argument to :mod:`~celery.bin.worker`:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj worker -n worker1@%h
     $ celery -A proj worker -n worker2@%h
@@ -842,9 +842,9 @@ task so the task will not run again.
 Identifying the type of process is easier if you have installed the
 ``setproctitle`` module:
 
-.. code-block:: bash
+.. code-block:: console
 
-    pip install setproctitle
+    $ pip install setproctitle
 
 With this library installed you will be able to see the type of process in ps
 listings, but the worker must be restarted for this to take effect.

+ 1 - 1
docs/getting-started/brokers/beanstalk.rst

@@ -22,7 +22,7 @@ For the Beanstalk support you have to install additional dependencies.
 You can install both Celery and these dependencies in one go using
 the ``celery[beanstalk]`` :ref:`bundle <bundles>`:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install -U celery[beanstalk]
 

+ 1 - 1
docs/getting-started/brokers/couchdb.rst

@@ -20,7 +20,7 @@ For the CouchDB support you have to install additional dependencies.
 You can install both Celery and these dependencies in one go using
 the ``celery[couchdb]`` :ref:`bundle <bundles>`:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install -U celery[couchdb]
 

+ 1 - 1
docs/getting-started/brokers/django.rst

@@ -34,7 +34,7 @@ configuration values.
 
 #. Sync your database schema:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ python manage.py syncdb
 

+ 1 - 1
docs/getting-started/brokers/ironmq.rst

@@ -11,7 +11,7 @@ Installation
 
 For IronMQ support, you'll need the [iron_celery](http://github.com/iron-io/iron_celery) library:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install iron_celery
 

+ 1 - 1
docs/getting-started/brokers/mongodb.rst

@@ -20,7 +20,7 @@ For the MongoDB support you have to install additional dependencies.
 You can install both Celery and these dependencies in one go using
 the ``celery[mongodb]`` :ref:`bundle <bundles>`:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install -U celery[mongodb]
 

+ 16 - 14
docs/getting-started/brokers/rabbitmq.rst

@@ -12,9 +12,11 @@ Installation & Configuration
 
 RabbitMQ is the default broker so it does not require any additional
 dependencies or initial configuration, other than the URL location of
-the broker instance you want to use::
+the broker instance you want to use:
 
-    >>> BROKER_URL = 'amqp://guest:guest@localhost:5672//'
+.. code-block:: python
+
+    BROKER_URL = 'amqp://guest:guest@localhost:5672//'
 
 For a description of broker URLs and a full list of the
 various broker configuration options available to Celery,
@@ -46,19 +48,19 @@ Setting up RabbitMQ
 To use celery we need to create a RabbitMQ user, a virtual host and
 allow that user access to that virtual host:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ sudo rabbitmqctl add_user myuser mypassword
 
-.. code-block:: bash
+.. code-block:: console
 
     $ sudo rabbitmqctl add_vhost myvhost
 
-.. code-block:: bash
+.. code-block:: console
 
     $ sudo rabbitmqctl set_user_tags myuser mytag
 
-.. code-block:: bash
+.. code-block:: console
 
     $ sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
 
@@ -79,13 +81,13 @@ shiny package management system for OS X.
 First, install homebrew using the one-line command provided by the `Homebrew
 documentation`_:
 
-.. code-block:: bash
+.. code-block:: console
 
     ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"
 
 Finally, we can install rabbitmq using :program:`brew`:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ brew install rabbitmq
 
@@ -96,7 +98,7 @@ Finally, we can install rabbitmq using :program:`brew`:
 
 After you have installed rabbitmq with brew you need to add the following to your path to be able to start and stop the broker. Add it to your .bash_profile or .profile
 
-.. code-block:: bash
+.. code-block:: console
 
     `PATH=$PATH:/usr/local/sbin`
 
@@ -109,7 +111,7 @@ to communicate with nodes.
 
 Use the :program:`scutil` command to permanently set your host name:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ sudo scutil --set HostName myhost.local
 
@@ -121,7 +123,7 @@ back into an IP address::
 If you start the rabbitmq server, your rabbit node should now be `rabbit@myhost`,
 as verified by :program:`rabbitmqctl`:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ sudo rabbitmqctl status
     Status of node rabbit@myhost ...
@@ -146,21 +148,21 @@ Starting/Stopping the RabbitMQ server
 
 To start the server:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ sudo rabbitmq-server
 
 you can also run it in the background by adding the :option:`-detached` option
 (note: only one dash):
 
-.. code-block:: bash
+.. code-block:: console
 
     $ sudo rabbitmq-server -detached
 
 Never use :program:`kill` to stop the RabbitMQ server, but rather use the
 :program:`rabbitmqctl` command:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ sudo rabbitmqctl stop
 

+ 1 - 1
docs/getting-started/brokers/redis.rst

@@ -13,7 +13,7 @@ For the Redis support you have to install additional dependencies.
 You can install both Celery and these dependencies in one go using
 the ``celery[redis]`` :ref:`bundle <bundles>`:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install -U celery[redis]
 

+ 1 - 1
docs/getting-started/brokers/sqs.rst

@@ -18,7 +18,7 @@ Installation
 
 For the Amazon SQS support you have to install the `boto`_ library:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install -U boto
 

+ 8 - 8
docs/getting-started/first-steps-with-celery.rst

@@ -56,7 +56,7 @@ Detailed information about using RabbitMQ with Celery:
 If you are using Ubuntu or Debian install RabbitMQ by executing this
 command:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ sudo apt-get install rabbitmq-server
 
@@ -111,7 +111,7 @@ Installing Celery
 Celery is on the Python Package Index (PyPI), so it can be installed
 with standard Python tools like ``pip`` or ``easy_install``:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install celery
 
@@ -157,7 +157,7 @@ Running the celery worker server
 You now run the worker by executing our program with the ``worker``
 argument:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A tasks worker --loglevel=info
 
@@ -173,13 +173,13 @@ for more information).
 
 For a complete listing of the command-line options available, do:
 
-.. code-block:: bash
+.. code-block:: console
 
     $  celery worker --help
 
 There are also several other commands available, and help is also available:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery help
 
@@ -344,7 +344,7 @@ current directory or on the Python path, it could look like this:
 To verify that your configuration file works properly, and doesn't
 contain any syntax errors, you can try to import it:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ python -m celeryconfig
 
@@ -377,7 +377,7 @@ If you are using RabbitMQ or Redis as the
 broker then you can also direct the workers to set a new rate limit
 for the task at runtime:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A tasks control rate_limit tasks.add 10/m
     worker@example.com: OK
@@ -411,7 +411,7 @@ Worker does not start: Permission Error
 
     A simple workaround is to create a symbolic link:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         # ln -s /run/shm /dev/shm
 

+ 88 - 47
docs/getting-started/next-steps.rst

@@ -72,7 +72,7 @@ Starting the worker
 
 The :program:`celery` program can be used to start the worker (you need to run the worker in the directory above proj):
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj worker -l info
 
@@ -128,7 +128,7 @@ and emulating priorities, all described in the :ref:`Routing Guide
 You can get a complete list of command-line arguments
 by passing in the `--help` flag:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery worker --help
 
@@ -149,7 +149,7 @@ described in detail in the :ref:`daemonization tutorial <daemonizing>`.
 The daemonization scripts uses the :program:`celery multi` command to
 start one or more workers in the background:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery multi start w1 -A proj -l info
     celery multi v3.1.1 (Cipater)
@@ -158,7 +158,7 @@ start one or more workers in the background:
 
 You can restart it too:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery  multi restart w1 -A proj -l info
     celery multi v3.1.1 (Cipater)
@@ -173,7 +173,7 @@ You can restart it too:
 
 or stop it:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery multi stop w1 -A proj -l info
 
@@ -181,7 +181,7 @@ The ``stop`` command is asynchronous so it will not wait for the
 worker to shutdown.  You will probably want to use the ``stopwait`` command
 instead which will ensure all currently executing tasks is completed:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery multi stopwait w1 -A proj -l info
 
@@ -196,7 +196,7 @@ By default it will create pid and log files in the current directory,
 to protect against multiple workers launching on top of each other
 you are encouraged to put these in a dedicated directory:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ mkdir -p /var/run/celery
     $ mkdir -p /var/log/celery
@@ -207,7 +207,7 @@ With the multi command you can start multiple workers, and there is a powerful
 command-line syntax to specify arguments for different workers too,
 e.g:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery multi start 10 -A proj -l info -Q:1-3 images,video -Q:4,5 data \
         -Q default -L:4,5 debug
@@ -250,17 +250,23 @@ for larger projects.
 Calling Tasks
 =============
 
-You can call a task using the :meth:`delay` method::
+You can call a task using the :meth:`delay` method:
+
+.. code-block:: pycon
 
     >>> add.delay(2, 2)
 
 This method is actually a star-argument shortcut to another method called
-:meth:`apply_async`::
+:meth:`apply_async`:
+
+.. code-block:: pycon
 
     >>> add.apply_async((2, 2))
 
 The latter enables you to specify execution options like the time to run
-(countdown), the queue it should be sent to and so on::
+(countdown), the queue it should be sent to and so on:
+
+.. code-block:: pycon
 
     >>> add.apply_async((2, 2), queue='lopri', countdown=10)
 
@@ -268,7 +274,9 @@ In the above example the task will be sent to a queue named ``lopri`` and the
 task will execute, at the earliest, 10 seconds after the message was sent.
 
 Applying the task directly will execute the task in the current process,
-so that no message is sent::
+so that no message is sent:
+
+.. code-block:: pycon
 
     >>> add(2, 2)
     4
@@ -296,22 +304,31 @@ have.  Also note that result backends are not used for monitoring tasks and work
 for that Celery uses dedicated event messages (see :ref:`guide-monitoring`).
 
 If you have a result backend configured you can retrieve the return
-value of a task::
+value of a task:
+
+.. code-block:: pycon
 
     >>> res = add.delay(2, 2)
     >>> res.get(timeout=1)
     4
 
-You can find the task's id by looking at the :attr:`id` attribute::
+You can find the task's id by looking at the :attr:`id` attribute:
+
+.. code-block:: pycon
 
     >>> res.id
     d6b3aea2-fb9b-4ebc-8da4-848818db9114
 
 You can also inspect the exception and traceback if the task raised an
-exception, in fact ``result.get()`` will propagate any errors by default::
+exception, in fact ``result.get()`` will propagate any errors by default:
+
+.. code-block:: pycon
 
     >>> res = add.delay(2)
     >>> res.get(timeout=1)
+
+.. code-block:: pytb
+
     Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
     File "/opt/devel/celery/celery/result.py", line 113, in get
@@ -321,7 +338,9 @@ exception, in fact ``result.get()`` will propagate any errors by default::
     TypeError: add() takes exactly 2 arguments (1 given)
 
 If you don't wish for the errors to propagate then you can disable that
-by passing the ``propagate`` argument::
+by passing the ``propagate`` argument:
+
+.. code-block:: pycon
 
     >>> res.get(propagate=False)
     TypeError('add() takes exactly 2 arguments (1 given)',)
@@ -337,7 +356,9 @@ use the corresponding methods on the result instance::
     False
 
 So how does it know if the task has failed or not?  It can find out by looking
-at the tasks *state*::
+at the tasks *state*:
+
+.. code-block:: pycon
 
     >>> res.state
     'FAILURE'
@@ -353,7 +374,9 @@ The started state is a special state that is only recorded if the
 
 The pending state is actually not a recorded state, but rather
 the default state for any task id that is unknown, which you can see
-from this example::
+from this example:
+
+.. code-block:: pycon
 
     >>> from proj.celery import app
 
@@ -387,12 +410,16 @@ invocation in a way such that it can be passed to functions or even serialized
 and sent across the wire.
 
 You can create a signature for the ``add`` task using the arguments ``(2, 2)``,
-and a countdown of 10 seconds like this::
+and a countdown of 10 seconds like this:
+
+.. code-block:: pycon
 
     >>> add.signature((2, 2), countdown=10)
     tasks.add(2, 2)
 
-There is also a shortcut using star arguments::
+There is also a shortcut using star arguments:
+
+.. code-block:: pycon
 
     >>> add.s(2, 2)
     tasks.add(2, 2)
@@ -405,7 +432,9 @@ have the ``delay`` and ``apply_async`` methods.
 
 But there is a difference in that the signature may already have
 an argument signature specified.  The ``add`` task takes two arguments,
-so a signature specifying two arguments would make a complete signature::
+so a signature specifying two arguments would make a complete signature:
+
+.. code-block:: pycon
 
     >>> s1 = add.s(2, 2)
     >>> res = s1.delay()
@@ -413,13 +442,17 @@ so a signature specifying two arguments would make a complete signature::
     4
 
 But, you can also make incomplete signatures to create what we call
-*partials*::
+*partials*:
+
+.. code-block:: pycon
 
     # incomplete partial: add(?, 2)
     >>> s2 = add.s(2)
 
 ``s2`` is now a partial signature that needs another argument to be complete,
-and this can be resolved when calling the signature::
+and this can be resolved when calling the signature:
+
+.. code-block:: pycon
 
     # resolves the partial: add(8, 2)
     >>> res = s2.delay(8)
@@ -430,7 +463,9 @@ Here you added the argument 8, which was prepended to the existing argument 2
 forming a complete signature of ``add(8, 2)``.
 
 Keyword arguments can also be added later, these are then merged with any
-existing keyword arguments, but with new arguments taking precedence::
+existing keyword arguments, but with new arguments taking precedence:
+
+.. code-block:: pycon
 
     >>> s3 = add.s(2, 2, debug=True)
     >>> s3.delay(debug=False)   # debug is now False.
@@ -484,7 +519,7 @@ A :class:`~celery.group` calls a list of tasks in parallel,
 and it returns a special result instance that lets you inspect the results
 as a group, and retrieve the return values in order.
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> from celery import group
     >>> from proj.tasks import add
@@ -494,7 +529,7 @@ as a group, and retrieve the return values in order.
 
 - Partial group
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> g = group(add.s(i) for i in xrange(10))
     >>> g(10).get()
@@ -506,7 +541,7 @@ Chains
 Tasks can be linked together so that after one task returns the other
 is called:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> from celery import chain
     >>> from proj.tasks import add, mul
@@ -518,9 +553,9 @@ is called:
 
 or a partial chain:
 
-.. code-block:: python
+.. code-block:: pycon
 
-    # (? + 4) * 8
+    >>> # (? + 4) * 8
     >>> g = chain(add.s(4) | mul.s(8))
     >>> g(4).get()
     64
@@ -528,7 +563,7 @@ or a partial chain:
 
 Chains can also be written like this:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> (add.s(4, 4) | mul.s(8))().get()
     64
@@ -538,7 +573,7 @@ Chords
 
 A chord is a group with a callback:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> from celery import chord
     >>> from proj.tasks import add, xsum
@@ -550,7 +585,7 @@ A chord is a group with a callback:
 A group chained to another task will be automatically converted
 to a chord:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> (group(add.s(i, i) for i in xrange(10)) | xsum.s())().get()
     90
@@ -571,7 +606,9 @@ Celery supports all of the routing facilities provided by AMQP,
 but it also supports simple routing where messages are sent to named queues.
 
 The :setting:`CELERY_ROUTES` setting enables you to route tasks by name
-and keep everything centralized in one location::
+and keep everything centralized in one location:
+
+.. code-block:: python
 
     app.conf.update(
         CELERY_ROUTES = {
@@ -580,7 +617,9 @@ and keep everything centralized in one location::
     )
 
 You can also specify the queue at runtime
-with the ``queue`` argument to ``apply_async``::
+with the ``queue`` argument to ``apply_async``:
+
+.. code-block:: pycon
 
     >>> from proj.tasks import add
     >>> add.apply_async((2, 2), queue='hipri')
@@ -588,7 +627,7 @@ with the ``queue`` argument to ``apply_async``::
 You can then make a worker consume from this queue by
 specifying the :option:`-Q` option:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj worker -Q hipri
 
@@ -597,7 +636,7 @@ for example you can make the worker consume from both the default
 queue, and the ``hipri`` queue, where
 the default queue is named ``celery`` for historical reasons:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj worker -Q hipri,celery
 
@@ -615,7 +654,7 @@ you can control and inspect the worker at runtime.
 
 For example you can see what tasks the worker is currently working on:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj inspect active
 
@@ -626,7 +665,7 @@ You can also specify one or more workers to act on the request
 using the :option:`--destination` option, which is a comma separated
 list of worker host names:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj inspect active --destination=celery@example.com
 
@@ -638,47 +677,47 @@ does not change anything in the worker, it only replies information
 and statistics about what is going on inside the worker.
 For a list of inspect commands you can execute:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj inspect --help
 
 Then there is the :program:`celery control` command, which contains
 commands that actually changes things in the worker at runtime:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj control --help
 
 For example you can force workers to enable event messages (used
 for monitoring tasks and workers):
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj control enable_events
 
 When events are enabled you can then start the event dumper
 to see what the workers are doing:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj events --dump
 
 or you can start the curses interface:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj events
 
 when you're finished monitoring you can disable events again:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj control disable_events
 
 The :program:`celery status` command also uses remote control commands
 and shows a list of online workers in the cluster:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj status
 
@@ -693,7 +732,9 @@ All times and dates, internally and in messages uses the UTC timezone.
 When the worker receives a message, for example with a countdown set it
 converts that UTC time to local time.  If you wish to use
 a different timezone than the system timezone then you must
-configure that using the :setting:`CELERY_TIMEZONE` setting::
+configure that using the :setting:`CELERY_TIMEZONE` setting:
+
+.. code-block:: python
 
     app.conf.CELERY_TIMEZONE = 'Europe/London'
 
@@ -711,7 +752,7 @@ for throughput then you should read the :ref:`Optimizing Guide
 If you're using RabbitMQ then you should install the :mod:`librabbitmq`
 module, which is an AMQP client implemented in C:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install librabbitmq
 

+ 9 - 9
docs/history/changelog-1.0.rst

@@ -20,13 +20,13 @@
   If you've already used the AMQP backend this means you have to
   delete the previous definitions:
 
-  .. code-block:: bash
+  .. code-block:: console
 
       $ camqadm exchange.delete celeryresults
 
   or:
 
-  .. code-block:: bash
+  .. code-block:: console
 
       $ python manage.py camqadm exchange.delete celeryresults
 
@@ -506,7 +506,7 @@ Fixes
         If you're using Celery with Django, you can't use `project.settings`
         as the settings module name, but the following should work:
 
-        .. code-block:: bash
+        .. code-block:: console
 
             $ python manage.py celeryd --settings=settings
 
@@ -534,7 +534,7 @@ Fixes
     Excellent for deleting queues/bindings/exchanges, experimentation and
     testing:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ camqadm
         1> help
@@ -543,7 +543,7 @@ Fixes
 
     When using Django, use the management command instead:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ python manage.py camqadm
         1> help
@@ -711,7 +711,7 @@ Backward incompatible changes
 
     To launch the periodic task scheduler you have to run celerybeat:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celerybeat
 
@@ -720,7 +720,7 @@ Backward incompatible changes
 
     If you only have one worker server you can embed it into the worker like this:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celeryd --beat # Embed celerybeat in celeryd.
 
@@ -1552,7 +1552,7 @@ arguments, so be sure to flush your task queue before you upgrade.
 
 * You can now run the celery daemon by using `manage.py`:
 
-  .. code-block:: bash
+  .. code-block:: console
 
         $ python manage.py celeryd
 
@@ -1693,7 +1693,7 @@ arguments, so be sure to flush your task queue before you upgrade.
 * Now using the Sphinx documentation system, you can build
   the html documentation by doing:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ cd docs
         $ make html

+ 12 - 12
docs/history/changelog-2.0.rst

@@ -278,13 +278,13 @@ Documentation
     If you've already hit this problem you may have to delete the
     declaration:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ camqadm exchange.delete celerycrq
 
     or:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ python manage.py camqadm exchange.delete celerycrq
 
@@ -387,7 +387,7 @@ Documentation
 
     Use the `-S|--statedb` argument to the worker to enable it:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celeryd --statedb=/var/run/celeryd
 
@@ -599,7 +599,7 @@ Backward incompatible changes
     If you've already used celery with this backend chances are you
     have to delete the previous declaration:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ camqadm exchange.delete celeryresults
 
@@ -638,7 +638,7 @@ News
     If you run `celeryev` with the `-d` switch it will act as an event
     dumper, simply dumping the events it receives to standard out:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celeryev -d
         -> celeryev: starting capture...
@@ -742,7 +742,7 @@ News
    This feature is added for easily setting up routing using the `-Q`
    option to the worker:
 
-   .. code-block:: bash
+   .. code-block:: console
 
        $ celeryd -Q video, image
 
@@ -887,7 +887,7 @@ News
     command would make the worker only consume from the `image` and `video`
     queues:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celeryd -Q image,video
 
@@ -916,25 +916,25 @@ News
 
     Before you run the tests you need to install the test requirements:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ pip install -r requirements/test.txt
 
     Running all tests:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ nosetests
 
     Specifying the tests to run:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ nosetests celery.tests.test_task
 
     Producing HTML coverage:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ nosetests --with-coverage3
 
@@ -947,7 +947,7 @@ News
 
     Some examples:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         # Advanced example with 10 workers:
         #   * Three of the workers processes the images and video queue

+ 13 - 13
docs/history/changelog-2.1.rst

@@ -223,7 +223,7 @@ News
     Example using celeryctl to start consuming from queue "queue", in
     exchange "exchange", of type "direct" using binding key "key":
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celeryctl inspect add_consumer queue exchange direct key
         $ celeryctl inspect cancel_consumer queue
@@ -234,7 +234,7 @@ News
 
     Another example using :class:`~celery.task.control.inspect`:
 
-    .. code-block:: python
+    .. code-block:: pycon
 
         >>> from celery.task.control import inspect
         >>> inspect.add_consumer(queue="queue", exchange="exchange",
@@ -296,7 +296,7 @@ Important Notes
     To do this use :program:`python` to find the location
     of this module:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ python
         >>> import celery.platform
@@ -306,7 +306,7 @@ Important Notes
     Here the compiled module is in :file:`/opt/devel/celery/celery/`,
     to remove the offending files do:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ rm -f /opt/devel/celery/celery/platform.py*
 
@@ -345,13 +345,13 @@ News
 
     1. Create the new database tables:
 
-        .. code-block:: bash
+        .. code-block:: console
 
             $ python manage.py syncdb
 
     2. Start the django-celery snapshot camera:
 
-        .. code-block:: bash
+        .. code-block:: console
 
             $ python manage.py celerycam
 
@@ -403,7 +403,7 @@ News
 
     Some examples:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celeryctl apply tasks.add -a '[2, 2]' --countdown=10
 
@@ -482,7 +482,7 @@ News
 
     Example:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celeryd -I app1.tasks,app2.tasks
 
@@ -692,7 +692,7 @@ Experimental
 
     multi can now be used to start, stop and restart worker nodes:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celeryd-multi start jerry elaine george kramer
 
@@ -701,7 +701,7 @@ Experimental
     use the `--pidfile` and `--logfile` arguments with the `%n`
     format:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celeryd-multi start jerry elaine george kramer \
                         --logfile=/var/log/celeryd@%n.log \
@@ -709,20 +709,20 @@ Experimental
 
     Stopping:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celeryd-multi stop jerry elaine george kramer
 
     Restarting. The nodes will be restarted one by one as the old ones
     are shutdown:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celeryd-multi restart jerry elaine george kramer
 
     Killing the nodes (**WARNING**: Will discard currently executing tasks):
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celeryd-multi kill jerry elaine george kramer
 

+ 4 - 4
docs/history/changelog-2.2.rst

@@ -666,7 +666,7 @@ Important Notes
     If you telnet the port specified you will be presented
     with a ``pdb`` shell:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ telnet localhost 6900
         Connected to localhost.
@@ -711,7 +711,7 @@ Important Notes
         If you would like to remove the old exchange you can do so
         by executing the following command:
 
-        .. code-block:: bash
+        .. code-block:: console
 
             $ camqadm exchange.delete celeryevent
 
@@ -721,7 +721,7 @@ Important Notes
   Configuration options must appear after the last argument, separated
   by two dashes:
 
-  .. code-block:: bash
+  .. code-block:: console
 
       $ celery worker -l info -I tasks -- broker.host=localhost broker.vhost=/app
 
@@ -924,7 +924,7 @@ News
 
     For example:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery worker --config=celeryconfig.py --loader=myloader.Loader
 

+ 1 - 1
docs/history/changelog-2.3.rst

@@ -287,7 +287,7 @@ News
 
     Example use:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery multi start 4  -c 2  -- broker.host=amqp.example.com \
                                          broker.vhost=/               \

+ 1 - 1
docs/history/changelog-2.4.rst

@@ -205,7 +205,7 @@ Important Notes
     Also, programs now support the :option:`-b|--broker` option to specify
     a broker URL on the command-line:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery worker -b redis://localhost
 

+ 2 - 2
docs/history/changelog-2.5.rst

@@ -94,7 +94,7 @@ News
 
     Example:
 
-    .. code-block:: python
+    .. code-block:: pycon
 
         >>> s = add.subtask((5,))
         >>> new = s.clone(args=(10,), countdown=5})
@@ -145,7 +145,7 @@ Fixes
     Like with the worker it is now possible to configure celery settings
     on the command-line for celery control|inspect
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery inspect -- broker.pool_limit=30
 

+ 8 - 8
docs/history/changelog-3.0.rst

@@ -596,7 +596,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
 - ``subtask.id`` added as an alias to ``subtask['options'].id``
 
-    .. code-block:: python
+    .. code-block:: pycon
 
         >>> s = add.s(2, 2)
         >>> s.id = 'my-id'
@@ -690,9 +690,9 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
     Previously it would incorrectly add a regular result instead of a group
     result, but now this works:
 
-    .. code-block:: python
+    .. code-block:: pycon
 
-        # [4 + 4, 4 + 8, 16 + 8]
+        >>> # [4 + 4, 4 + 8, 16 + 8]
         >>> res = (add.s(2, 2) | group(add.s(4), add.s(8), add.s(16)))()
         >>> res
         <GroupResult: a0acf905-c704-499e-b03a-8d445e6398f7 [
@@ -704,14 +704,14 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
     Example:
 
-    .. code-block:: python
+    .. code-block:: pycon
 
         >>> c1 = (add.s(2) | add.s(4))
         >>> c2 = (add.s(8) | add.s(16))
 
         >>> c3 = (c1 | c2)
 
-        # 8 + 2 + 4 + 8 + 16
+        >>> # 8 + 2 + 4 + 8 + 16
         >>> assert c3(8).get() == 38
 
 - Subtasks can now be used with unregistered tasks.
@@ -891,7 +891,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
     Users can force paths to be created by calling the ``create-paths``
     subcommand:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ sudo /etc/init.d/celeryd create-paths
 
@@ -971,7 +971,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
     Previously calling a chord/group/chain would modify the ids of subtasks
     so that:
 
-    .. code-block:: python
+    .. code-block:: pycon
 
         >>> c = chord([add.s(2, 2), add.s(4, 4)], xsum.s())
         >>> c()
@@ -1077,7 +1077,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 
     You can do this by executing the following command:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ python manage.py shell
         >>> from djcelery.models import PeriodicTask

+ 5 - 5
docs/history/changelog-3.1.rst

@@ -371,7 +371,7 @@ News
     and if you use the ``librabbitmq`` module you also have to upgrade
     to librabbitmq 1.5.0:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ pip install -U librabbitmq
 
@@ -507,9 +507,9 @@ News
     This means that referring to a number will work when specifying a list
     of node names and not just for a number range:
 
-    .. code-block:: bash
+    .. code-block:: console
 
-        celery multi start A B C D -c:1 4 -c:2-4 8
+        $ celery multi start A B C D -c:1 4 -c:2-4 8
 
     In this example ``1`` refers to node A (as it's the first node in the
     list).
@@ -735,7 +735,7 @@ News
     Example using command-line configuration to set a broker heartbeat
     from :program:`celery multi`:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery multi start 1 -c3 -- broker.heartbeat=30
 
@@ -915,7 +915,7 @@ Fixes
 
     Example:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery -A proj worker -n foo@%h --logfile=%n.log --statedb=%n.db
 

+ 1 - 1
docs/includes/installation.txt

@@ -26,7 +26,7 @@ You can specify these in your requirements or on the ``pip`` comand-line
 by using brackets.  Multiple bundles can be specified by separating them by
 commas.
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install "celery[librabbitmq]"
 

+ 1 - 1
docs/internals/guide.rst

@@ -108,7 +108,7 @@ A subclass can change the default value:
 
 and the value can be set at instantiation:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> producer = TaskProducer(serializer='msgpack')
 

+ 19 - 15
docs/internals/protocol.rst

@@ -125,7 +125,9 @@ Changes from version 1
 
     This is fixed in the new message protocol by specifying
     a list of signatures, each task will then pop a task off the list
-    when sending the next message::
+    when sending the next message:
+
+    .. code-block:: python
 
         execute_task(message)
         chain = embed['chain']
@@ -138,25 +140,27 @@ Changes from version 1
 - ``root_id`` and ``parent_id`` fields helps keep track of workflows.
 
 - ``shadow`` lets you specify a different name for logs, monitors
-  can be used for e.g. meta tasks that calls any function::
+  can be used for e.g. meta tasks that calls any function:
+
+    .. code-block:: python
 
-    from celery.utils.imports import qualname
+        from celery.utils.imports import qualname
 
-    class PickleTask(Task):
-        abstract = True
+        class PickleTask(Task):
+            abstract = True
 
-        def unpack_args(self, fun, args=()):
-            return fun, args
+            def unpack_args(self, fun, args=()):
+                return fun, args
 
-        def apply_async(self, args, kwargs, **options):
-            fun, real_args = self.unpack_args(*args)
-            return super(PickleTask, self).apply_async(
-                (fun, real_args, kwargs), shadow=qualname(fun), **options
-            )
+            def apply_async(self, args, kwargs, **options):
+                fun, real_args = self.unpack_args(*args)
+                return super(PickleTask, self).apply_async(
+                    (fun, real_args, kwargs), shadow=qualname(fun), **options
+                )
 
-    @app.task(base=PickleTask)
-    def call(fun, args, kwargs):
-        return fun(*args, **kwargs)
+        @app.task(base=PickleTask)
+        def call(fun, args, kwargs):
+            return fun(*args, **kwargs)
 
 
 .. _message-protocol-task-v1:

+ 2 - 2
docs/reference/celery.rst

@@ -154,7 +154,7 @@ and creating Celery applications.
         :keyword force:  Force reading configuration immediately.
             By default the configuration will be read only when required.
 
-        .. code-block:: python
+        .. code-block:: pycon
 
             >>> celery.config_from_object("myapp.celeryconfig")
 
@@ -169,7 +169,7 @@ and creating Celery applications.
         The value of the environment variable must be the name
         of a module to import.
 
-        .. code-block:: python
+        .. code-block:: pycon
 
             >>> os.environ["CELERY_CONFIG_MODULE"] = "myapp.celeryconfig"
             >>> celery.config_from_envvar("CELERY_CONFIG_MODULE")

+ 4 - 4
docs/tutorials/daemonizing.rst

@@ -52,7 +52,7 @@ must also export them (e.g. ``export DISPLAY=":0"``)
     instead they can use the :program:`celery multi` utility (or
     :program:`celery worker --detach`):
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery multi start worker1 \
             -A proj \
@@ -368,7 +368,7 @@ Troubleshooting
 If you can't get the init scripts to work, you should try running
 them in *verbose mode*:
 
-.. code-block:: bash
+.. code-block:: console
 
     # sh -x /etc/init.d/celeryd start
 
@@ -381,9 +381,9 @@ not be able to see them anywhere.  For this situation you can use
 the :envvar:`C_FAKEFORK` environment variable to skip the
 daemonization step:
 
-.. code-block:: bash
+.. code-block:: console
 
-    C_FAKEFORK=1 sh -x /etc/init.d/celeryd start
+    # C_FAKEFORK=1 sh -x /etc/init.d/celeryd start
 
 
 and now you should be able to see the errors.

+ 1 - 1
docs/tutorials/debugging.rst

@@ -52,7 +52,7 @@ information::
 If you telnet the port specified you will be presented
 with a `pdb` shell:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ telnet localhost 6900
     Connected to localhost.

+ 10 - 10
docs/userguide/application.rst

@@ -17,7 +17,7 @@ same process space.
 
 Let's create one now:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> from celery import Celery
     >>> app = Celery()
@@ -43,7 +43,7 @@ registry*.
 
 Whenever you define a task, that task will also be added to the local registry:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> @app.task
     ... def add(x, y):
@@ -93,7 +93,7 @@ the tasks will be named starting with "``tasks``" (the real name of the module):
 
 You can specify another name for the main module:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> app = Celery('tasks')
     >>> app.main
@@ -236,7 +236,7 @@ environment variable named :envvar:`CELERY_CONFIG_MODULE`:
 
 You can then specify the configuration module to use via the environment:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ CELERY_CONFIG_MODULE="celeryconfig.prod" celery worker -l info
 
@@ -252,7 +252,7 @@ passwords and API keys.
 Celery comes with several utilities used for presenting the configuration,
 one is :meth:`~celery.app.utils.Settings.humanize`:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> app.conf.humanize(with_defaults=False, censored=True)
 
@@ -263,7 +263,7 @@ default keys and values by changing the ``with_defaults`` argument.
 If you instead want to work with the configuration as a dictionary, then you
 can use the :meth:`~celery.app.utils.Settings.table` method:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> app.conf.table(with_defaults=False, censored=True)
 
@@ -299,7 +299,7 @@ application has been *finalized*,
 This example shows how the task is not created until
 you use the task, or access an attribute (in this case :meth:`repr`):
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> @app.task
     >>> def add(x, y):
@@ -410,7 +410,7 @@ In development you can set the :envvar:`CELERY_TRACE_APP`
 environment variable to raise an exception if the app
 chain breaks:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ CELERY_TRACE_APP=1 celery worker -l info
 
@@ -423,7 +423,7 @@ chain breaks:
     For example, in the beginning it was possible to use any callable as
     a task:
 
-    .. code-block:: python
+    .. code-block:: pycon
 
         def hello(to):
             return 'hello {0}'.format(to)
@@ -507,7 +507,7 @@ and so on.
 It's also possible to change the default base class for an application
 by changing its :meth:`@Task` attribute:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> from celery import Celery, Task
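
Continuing from that import, a sketch of how a replacement base class is
typically defined and installed; the class name and attributes here are
illustrative:

.. code-block:: pycon

    >>> app = Celery()

    >>> class MyBaseTask(Task):
    ...     abstract = True
    ...     send_error_emails = True

    >>> app.Task = MyBaseTask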
 

+ 9 - 7
docs/userguide/calling.rst

@@ -160,7 +160,9 @@ option:
 
 
 In addition, both the ``link`` and ``link_error`` options can be expressed
-as a list::
+as a list:
+
+.. code-block:: python
 
     add.apply_async((2, 2), link=[add.s(16), other_task.s()])
 
@@ -177,7 +179,7 @@ The ETA (estimated time of arrival) lets you set a specific date and time that
 is the earliest time at which your task will be executed.  `countdown` is
 a shortcut to set eta by seconds into the future.
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> result = add.apply_async((2, 2), countdown=3)
     >>> result.get()    # this takes at least 3 seconds to return
@@ -195,7 +197,7 @@ While `countdown` is an integer, `eta` must be a :class:`~datetime.datetime`
 object, specifying an exact date and time (including millisecond precision,
 and timezone information):
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> from datetime import datetime, timedelta
 
@@ -211,7 +213,7 @@ The `expires` argument defines an optional expiry time,
 either as seconds after task publish, or a specific date and time using
 :class:`~datetime.datetime`:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> # Task expires after one minute from now.
     >>> add.apply_async((10, 10), expires=60)
@@ -385,7 +387,7 @@ to use when sending a task:
 
 Example setting a custom serializer for a single task invocation:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> add.apply_async((10, 10), serializer='json')
 
@@ -442,7 +444,7 @@ publisher:
 
 Though this particular example is much better expressed as a group:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> from celery import group
 
@@ -466,7 +468,7 @@ Simple routing (name <-> name) is accomplished using the ``queue`` option::
 You can then assign workers to the ``priority.high`` queue by using
 the workers :option:`-Q` argument:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj worker -l info -Q celery,priority.high
 

+ 200 - 80
docs/userguide/canvas.rst

@@ -26,7 +26,9 @@ A :func:`~celery.signature` wraps the arguments, keyword arguments, and executio
 of a single task invocation in a way such that it can be passed to functions
 or even serialized and sent across the wire.
 
-- You can create a signature for the ``add`` task using its name like this::
+- You can create a signature for the ``add`` task using its name like this:
+
+    .. code-block:: pycon
 
         >>> from celery import signature
         >>> signature('tasks.add', args=(2, 2), countdown=10)
@@ -35,22 +37,30 @@ or even serialized and sent across the wire.
   This task has a signature of arity 2 (two arguments): ``(2, 2)``,
   and sets the countdown execution option to 10.
 
-- or you can create one using the task's ``signature`` method::
+- or you can create one using the task's ``signature`` method:
+
+    .. code-block:: pycon
 
         >>> add.signature((2, 2), countdown=10)
         tasks.add(2, 2)
 
-- There is also a shortcut using star arguments::
+- There is also a shortcut using star arguments:
+
+    .. code-block:: pycon
 
         >>> add.s(2, 2)
         tasks.add(2, 2)
 
-- Keyword arguments are also supported::
+- Keyword arguments are also supported:
+
+    .. code-block:: pycon
 
         >>> add.s(2, 2, debug=True)
         tasks.add(2, 2, debug=True)
 
-- From any signature instance you can inspect the different fields::
+- From any signature instance you can inspect the different fields:
+
+    .. code-block:: pycon
 
         >>> s = add.signature((2, 2), {'debug': True}, countdown=10)
         >>> s.args
@@ -63,20 +73,27 @@ or even serialized and sent across the wire.
 - It supports the "Calling API" which means it supports ``delay`` and
   ``apply_async`` or being called directly.
 
-    Calling the signature will execute the task inline in the current process::
+    Calling the signature will execute the task inline in the current process:
+
+    .. code-block:: pycon
 
         >>> add(2, 2)
         4
         >>> add.s(2, 2)()
         4
 
-  ``delay`` is our beloved shortcut to ``apply_async`` taking star-arguments::
+    ``delay`` is our beloved shortcut to ``apply_async`` taking star-arguments:
+
+    .. code-block:: pycon
 
         >>> result = add.delay(2, 2)
         >>> result.get()
         4
 
-  ``apply_async`` takes the same arguments as the :meth:`Task.apply_async <@Task.apply_async>` method::
+    ``apply_async`` takes the same arguments as the
+    :meth:`Task.apply_async <@Task.apply_async>` method:
+
+    .. code-block:: pycon
 
         >>> add.apply_async(args, kwargs, **options)
         >>> add.signature(args, kwargs, **options).apply_async()
@@ -85,20 +102,26 @@ or even serialized and sent across the wire.
         >>> add.signature((2, 2), countdown=1).apply_async()
 
 - You can't define options with :meth:`~@Task.s`, but a chaining
-  ``set`` call takes care of that::
+  ``set`` call takes care of that:
+
+    .. code-block:: pycon
 
-    >>> add.s(2, 2).set(countdown=1)
-    proj.tasks.add(2, 2)
+        >>> add.s(2, 2).set(countdown=1)
+        proj.tasks.add(2, 2)
 
 Partials
 --------
 
-With a signature, you can execute the task in a worker::
+With a signature, you can execute the task in a worker:
+
+.. code-block:: pycon
 
     >>> add.s(2, 2).delay()
     >>> add.s(2, 2).apply_async(countdown=1)
 
-Or you can call it directly in the current process::
+Or you can call it directly in the current process:
+
+.. code-block:: pycon
 
     >>> add.s(2, 2)()
     4
@@ -106,27 +129,35 @@ Or you can call it directly in the current process::
 Specifying additional args, kwargs or options to ``apply_async``/``delay``
 creates partials:
 
-- Any arguments added will be prepended to the args in the signature::
+- Any arguments added will be prepended to the args in the signature:
+
+    .. code-block:: pycon
 
-    >>> partial = add.s(2)          # incomplete signature
-    >>> partial.delay(4)            # 4 + 2
-    >>> partial.apply_async((4,))  # same
+        >>> partial = add.s(2)          # incomplete signature
+        >>> partial.delay(4)            # 4 + 2
+        >>> partial.apply_async((4,))  # same
 
 - Any keyword arguments added will be merged with the kwargs in the signature,
-  with the new keyword arguments taking precedence::
+  with the new keyword arguments taking precedence:
 
-    >>> s = add.s(2, 2)
-    >>> s.delay(debug=True)                    # -> add(2, 2, debug=True)
-    >>> s.apply_async(kwargs={'debug': True})  # same
+    .. code-block:: pycon
+
+        >>> s = add.s(2, 2)
+        >>> s.delay(debug=True)                    # -> add(2, 2, debug=True)
+        >>> s.apply_async(kwargs={'debug': True})  # same
 
 - Any options added will be merged with the options in the signature,
-  with the new options taking precedence::
+  with the new options taking precedence:
 
-    >>> s = add.signature((2, 2), countdown=10)
-    >>> s.apply_async(countdown=1)  # countdown is now 1
+    .. code-block:: pycon
+
+        >>> s = add.signature((2, 2), countdown=10)
+        >>> s.apply_async(countdown=1)  # countdown is now 1
 
 You can also clone signatures to create derivatives:
 
+.. code-block:: pycon
+
     >>> s = add.s(2)
     proj.tasks.add(2)
 
@@ -142,11 +173,15 @@ Partials are meant to be used with callbacks, any tasks linked or chord
 callbacks will be applied with the result of the parent task.
 Sometimes you want to specify a callback that does not take
 additional arguments, and in that case you can set the signature
-to be immutable::
+to be immutable:
+
+.. code-block:: pycon
 
     >>> add.apply_async((2, 2), link=reset_buffers.signature(immutable=True))
 
-The ``.si()`` shortcut can also be used to create immutable signatures::
+The ``.si()`` shortcut can also be used to create immutable signatures:
+
+.. code-block:: pycon
 
     >>> add.apply_async((2, 2), link=reset_buffers.si())
 
@@ -157,7 +192,9 @@ so it's not possible to call the signature with partial args/kwargs.
 
     In this tutorial I sometimes apply the prefix operator `~` to signatures.
     You probably shouldn't use it in your production code, but it's a handy shortcut
-    when experimenting in the Python shell::
+    when experimenting in the Python shell:
+
+    .. code-block:: pycon
 
         >>> ~sig
 
@@ -173,7 +210,9 @@ Callbacks
 .. versionadded:: 3.0
 
 Callbacks can be added to any task using the ``link`` argument
-to ``apply_async``::
+to ``apply_async``:
+
+.. code-block:: pycon
 
     add.apply_async((2, 2), link=other_task.s())
 
@@ -183,18 +222,24 @@ and it will be applied with the return value of the parent task as argument.
 As I mentioned earlier, any arguments you add to a signature
 will be prepended to the arguments specified by the signature itself!
 
-If you have the signature::
+If you have the signature:
+
+.. code-block:: pycon
 
     >>> sig = add.s(10)
 
-then `sig.delay(result)` becomes::
+then `sig.delay(result)` becomes:
+
+.. code-block:: pycon
 
     >>> add.apply_async(args=(result, 10))
 
 ...
 
 Now let's call our ``add`` task with a callback using partial
-arguments::
+arguments:
+
+.. code-block:: pycon
 
     >>> add.apply_async((2, 2), link=add.s(8))
 
@@ -230,7 +275,9 @@ The Primitives
         a temporary task where a list of arguments is applied to the task.
         E.g. ``task.map([1, 2])`` results in a single task
         being called, applying the arguments in order to the task function so
-        that the result is::
+        that the result is:
+
+        .. code-block:: python
 
             res = [task(1), task(2)]
 
@@ -238,13 +285,17 @@ The Primitives
 
         Works exactly like map except the arguments are applied as ``*args``.
         For example ``add.starmap([(2, 2), (4, 4)])`` results in a single
-        task calling::
+        task calling:
+
+        .. code-block:: python
 
             res = [add(2, 2), add(4, 4)]
 
     - ``chunks``
 
-        Chunking splits a long list of arguments into parts, e.g the operation::
+        Chunking splits a long list of arguments into parts, e.g. the operation:
+
+        .. code-block:: pycon
 
             >>> items = zip(xrange(1000), xrange(1000))  # 1000 items
             >>> add.chunks(items, 10)
@@ -263,16 +314,18 @@ Here's some examples:
     Here's a simple chain; the first task executes, passing its return value
     to the next task in the chain, and so on.
 
-    .. code-block:: python
+    .. code-block:: pycon
 
         >>> from celery import chain
 
-        # 2 + 2 + 4 + 8
+        >>> # 2 + 2 + 4 + 8
         >>> res = chain(add.s(2, 2), add.s(4), add.s(8))()
         >>> res.get()
         16
 
-    This can also be written using pipes::
+    This can also be written using pipes:
+
+    .. code-block:: pycon
 
         >>> (add.s(2, 2) | add.s(4) | add.s(8))().get()
         16
@@ -284,15 +337,21 @@ Here's some examples:
     for example if you don't want the result of the previous task in a chain.
 
     In that case you can mark the signature as immutable, so that the arguments
-    cannot be changed::
+    cannot be changed:
+
+    .. code-block:: pycon
 
         >>> add.signature((2, 2), immutable=True)
 
-    There's also an ``.si`` shortcut for this::
+    There's also an ``.si`` shortcut for this:
+
+    .. code-block:: pycon
 
         >>> add.si(2, 2)
 
-    Now you can create a chain of independent tasks instead::
+    Now you can create a chain of independent tasks instead:
+
+    .. code-block:: pycon
 
         >>> res = (add.si(2, 2) | add.si(4, 4) | add.s(8, 8))()
         >>> res.get()
@@ -306,7 +365,9 @@ Here's some examples:
 
 - Simple group
 
-    You can easily create a group of tasks to execute in parallel::
+    You can easily create a group of tasks to execute in parallel:
+
+    .. code-block:: pycon
 
         >>> from celery import group
         >>> res = group(add.s(i, i) for i in xrange(10))()
@@ -317,7 +378,9 @@ Here's some examples:
 
     The chord primitive enables us to add a callback to be called when
     all of the tasks in a group have finished executing, which is often
-    required for algorithms that aren't embarrassingly parallel::
+    required for algorithms that aren't embarrassingly parallel:
+
+    .. code-block:: pycon
 
         >>> from celery import chord
         >>> res = chord((add.s(i, i) for i in xrange(10)), xsum.s())()
@@ -329,7 +392,9 @@ Here's some examples:
     into a list and sent to the ``xsum`` task.
 
     The body of a chord can also be immutable, so that the return value
-    of the group is not passed on to the callback::
+    of the group is not passed on to the callback:
+
+    .. code-block:: pycon
 
         >>> chord((import_contact.s(c) for c in contacts),
         ...       notify_complete.si(import_id)).apply_async()
@@ -338,7 +403,9 @@ Here's some examples:
 
 - Blow your mind by combining
 
-    Chains can be partial too::
+    Chains can be partial too:
+
+    .. code-block:: pycon
 
         >>> c1 = (add.s(4) | mul.s(8))
 
@@ -347,7 +414,9 @@ Here's some examples:
         >>> res.get()
         160
 
-    Which means that you can combine chains::
+    Which means that you can combine chains:
+
+    .. code-block:: pycon
 
         # ((4 + 16) * 2 + 4) * 8
         >>> c2 = (add.s(4, 16) | mul.s(2) | (add.s(4) | mul.s(8)))
@@ -357,7 +426,9 @@ Here's some examples:
         352
 
     Chaining a group together with another task will automatically
-    upgrade it to be a chord::
+    upgrade it to be a chord:
+
+    .. code-block:: pycon
 
         >>> c3 = (group(add.s(i, i) for i in xrange(10)) | xsum.s())
         >>> res = c3()
@@ -365,7 +436,9 @@ Here's some examples:
         90
 
     Groups and chords accept partial arguments too, so in a chain
-    the return value of the previous task is forwarded to all tasks in the group::
+    the return value of the previous task is forwarded to all tasks in the group:
+
+    .. code-block:: pycon
 
 
         >>> new_user_workflow = (create_user.s() | group(
@@ -378,7 +451,9 @@ Here's some examples:
 
 
     If you don't want to forward arguments to the group then
-    you can make the signatures in the group immutable::
+    you can make the signatures in the group immutable:
+
+    .. code-block:: pycon
 
         >>> res = (add.s(4, 4) | group(add.si(i, i) for i in xrange(10)))()
         >>> res.get()
@@ -406,7 +481,9 @@ Chains
 .. versionadded:: 3.0
 
 Tasks can be linked together, which in practice means adding
-a callback task::
+a callback task:
+
+.. code-block:: pycon
 
     >>> res = add.apply_async((2, 2), link=mul.s(16))
     >>> res.get()
@@ -417,7 +494,9 @@ task as the first argument, which in the above case will result
 in ``mul(4, 16)`` since the result is 4.
 
 The results will keep track of any subtasks called by the original task,
-and this can be accessed from the result instance::
+and this can be accessed from the result instance:
+
+.. code-block:: pycon
 
     >>> res.children
     [<AsyncResult: 8c350acf-519d-4553-8a53-4ad3a5c5aeb4>]
@@ -427,7 +506,9 @@ and this can be accessed from the result instance::
 
 The result instance also has a :meth:`~@AsyncResult.collect` method
 that treats the result as a graph, enabling you to iterate over
-the results::
+the results:
+
+.. code-block:: pycon
 
     >>> list(res.collect())
     [(<AsyncResult: 7b720856-dc5f-4415-9134-5c89def5664e>, 4),
@@ -437,19 +518,25 @@ By default :meth:`~@AsyncResult.collect` will raise an
 :exc:`~@IncompleteStream` exception if the graph is not fully
 formed (one of the tasks has not completed yet),
 but you can get an intermediate representation of the graph
-too::
+too:
+
+.. code-block:: pycon
 
     >>> for result, value in res.collect(intermediate=True):
     ....
 
 You can link together as many tasks as you like,
-and signatures can be linked too::
+and signatures can be linked too:
+
+.. code-block:: pycon
 
     >>> s = add.s(2, 2)
     >>> s.link(mul.s(4))
     >>> s.link(log_result.s())
 
-You can also add *error callbacks* using the ``link_error`` argument::
+You can also add *error callbacks* using the ``link_error`` argument:
+
+.. code-block:: pycon
 
     >>> add.apply_async((2, 2), link_error=log_error.s())
 
@@ -476,25 +563,29 @@ To make it even easier to link tasks together there is
 a special signature called :class:`~celery.chain` that lets
 you chain tasks together:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> from celery import chain
     >>> from proj.tasks import add, mul
 
-    # (4 + 4) * 8 * 10
+    >>> # (4 + 4) * 8 * 10
     >>> res = chain(add.s(4, 4), mul.s(8), mul.s(10))
     proj.tasks.add(4, 4) | proj.tasks.mul(8) | proj.tasks.mul(10)
 
 
 Calling the chain will call the tasks in the current process
-and return the result of the last task in the chain::
+and return the result of the last task in the chain:
+
+.. code-block:: pycon
 
     >>> res = chain(add.s(4, 4), mul.s(8), mul.s(10))()
     >>> res.get()
     640
 
 It also sets ``parent`` attributes so that you can
-work your way up the chain to get intermediate results::
+work your way up the chain to get intermediate results:
+
+.. code-block:: pycon
 
     >>> res.parent.get()
     64
@@ -506,7 +597,9 @@ work your way up the chain to get intermediate results::
     <AsyncResult: eeaad925-6778-4ad1-88c8-b2a63d017933>
 
 
-Chains can also be made using the ``|`` (pipe) operator::
+Chains can also be made using the ``|`` (pipe) operator:
+
+.. code-block:: pycon
 
     >>> (add.s(2, 2) | mul.s(8) | mul.s(10)).apply_async()
 
@@ -516,7 +609,7 @@ Graphs
 In addition you can work with the result graph as a
 :class:`~celery.datastructures.DependencyGraph`:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> res = chain(add.s(4, 4), mul.s(8), mul.s(10))()
 
@@ -527,7 +620,9 @@ In addition you can work with the result graph as a
         285fa253-fcf8-42ef-8b95-0078897e83e6(1)
             463afec2-5ed4-4036-b22d-ba067ec64f52(0)
 
-You can even convert these graphs to *dot* format::
+You can even convert these graphs to *dot* format:
+
+.. code-block:: pycon
 
     >>> with open('graph.dot', 'w') as fh:
     ...     res.parent.parent.graph.to_dot(fh)
@@ -535,7 +630,7 @@ You can even convert these graphs to *dot* format::
 
 and create images:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ dot -Tpng graph.dot -o graph.png
 
@@ -550,7 +645,9 @@ Groups
 
 A group can be used to execute several tasks in parallel.
 
-The :class:`~celery.group` function takes a list of signatures::
+The :class:`~celery.group` function takes a list of signatures:
+
+.. code-block:: pycon
 
     >>> from celery import group
     >>> from proj.tasks import add
@@ -561,14 +658,18 @@ The :class:`~celery.group` function takes a list of signatures::
 If you **call** the group, the tasks will be applied
 one after another in the current process, and a :class:`~celery.result.GroupResult`
 instance is returned which can be used to keep track of the results,
-or tell how many tasks are ready and so on::
+or tell how many tasks are ready and so on:
+
+.. code-block:: pycon
 
     >>> g = group(add.s(2, 2), add.s(4, 4))
     >>> res = g()
     >>> res.get()
     [4, 8]
 
-Group also supports iterators::
+Group also supports iterators:
+
+.. code-block:: pycon
 
     >>> group(add.s(i, i) for i in xrange(100))()
 
@@ -580,7 +681,9 @@ Group Results
 
 The group task returns a special result too;
 this result works just like normal task results, except
-that it works on the group as a whole::
+that it works on the group as a whole:
+
+.. code-block:: pycon
 
     >>> from celery import group
     >>> from tasks import add
@@ -653,7 +756,7 @@ Chords
     Tasks used within a chord must *not* ignore their results. If the result
     backend is disabled for *any* task (header or body) in your chord you
     should read ":ref:`chord-important-notes`".
-    
+
 
 A chord is a task that only executes after all of the tasks in a group have
 finished executing.
@@ -677,7 +780,9 @@ already a standard function):
 
 
 Now you can use a chord to calculate each addition step in parallel, and then
-get the sum of the resulting numbers::
+get the sum of the resulting numbers:
+
+.. code-block:: pycon
 
     >>> from celery import chord
     >>> from tasks import add, tsum
@@ -688,9 +793,11 @@ get the sum of the resulting numbers::
 
 
 This is obviously a very contrived example; the overhead of messaging and
-synchronization makes this a lot slower than its Python counterpart::
+synchronization makes this a lot slower than its Python counterpart:
+
+.. code-block:: pycon
 
-    sum(i + i for i in xrange(100))
+    >>> sum(i + i for i in xrange(100))
 
 The synchronization step is costly, so you should avoid using chords as much
 as possible. Still, the chord is a powerful primitive to have in your toolbox
@@ -698,7 +805,7 @@ as synchronization is a required step for many parallel algorithms.
 
 Let's break the chord expression down:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> callback = tsum.s()
     >>> header = [add.s(i, i) for i in range(100)]
@@ -725,11 +832,14 @@ Errors will propagate to the callback, so the callback will not be executed
 instead the callback changes to failure state, and the error is set
 to the :exc:`~@ChordError` exception:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> c = chord([add.s(4, 4), raising_task.s(), add.s(8, 8)])
     >>> result = c()
     >>> result.get()
+
+.. code-block:: pytb
+
     Traceback (most recent call last):
       File "<stdin>", line 1, in <module>
       File "*/celery/result.py", line 120, in get
@@ -833,7 +943,7 @@ They differ from group in that
 
 For example using ``map``:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> from proj.tasks import add
 
@@ -848,7 +958,9 @@ is the same as having a task doing:
     def temp():
         return [xsum(range(10)), xsum(range(100))]
 
-and using ``starmap``::
+and using ``starmap``:
+
+.. code-block:: pycon
 
     >>> ~add.starmap(zip(range(10), range(10)))
     [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
@@ -863,7 +975,9 @@ is the same as having a task doing:
 
 Both ``map`` and ``starmap`` are signature objects, so they can be used as
 other signatures and combined in groups etc., for example
-to call the starmap after 10 seconds::
+to call the starmap after 10 seconds:
+
+.. code-block:: pycon
 
     >>> add.starmap(zip(range(10), range(10))).apply_async(countdown=10)
 
@@ -883,14 +997,14 @@ it may considerably increase performance.
 
 To create a chunks signature you can use :meth:`@Task.chunks`:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> add.chunks(zip(range(100), range(100)), 10)
 
 As with :class:`~celery.group` the act of sending the messages for
 the chunks will happen in the current process when called:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> from proj.tasks import add
 
@@ -909,16 +1023,22 @@ the chunks will happen in the current process when called:
 
 while calling ``.apply_async`` will create a dedicated
 task so that the individual tasks are applied in a worker
-instead::
+instead:
+
+.. code-block:: pycon
 
     >>> add.chunks(zip(range(100), range(100)), 10).apply_async()
 
-You can also convert chunks to a group::
+You can also convert chunks to a group:
+
+.. code-block:: pycon
 
     >>> group = add.chunks(zip(range(100), range(100)), 10).group()
 
 and with the group skew the countdown of each task by increments
-of one::
+of one:
+
+.. code-block:: pycon
 
     >>> group.skew(start=1, stop=10)()
 

+ 1 - 1
docs/userguide/concurrency/eventlet.rst

@@ -42,7 +42,7 @@ Enabling Eventlet
 You can enable the Eventlet pool by using the ``-P`` option to
 :program:`celery worker`:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj worker -P eventlet -c 1000
 

+ 9 - 3
docs/userguide/extending.rst

@@ -396,7 +396,9 @@ Attributes
     Every registered task type has an entry in this mapping,
     where the value is used to execute an incoming message of this task type
     (the task execution strategy).  This mapping is generated by the Tasks
-    bootstep when the consumer starts::
+    bootstep when the consumer starts:
+
+    .. code-block:: python
 
         for name, task in app.tasks.items():
             strategies[name] = task.start_strategy(app, consumer)
@@ -429,7 +431,9 @@ Attributes
 .. attribute:: qos
 
     The :class:`~kombu.common.QoS` object can be used to change the
-    task channels current prefetch_count value, e.g::
+    task channel's current prefetch_count value, e.g.:
+
+    .. code-block:: python
 
         # increment at next cycle
         consumer.qos.increment_eventually(1)
@@ -473,7 +477,9 @@ Installing Bootsteps
 ====================
 
 ``app.steps['worker']`` and ``app.steps['consumer']`` can be modified
-to add new bootsteps::
+to add new bootsteps:
+
+.. code-block:: pycon
 
     >>> app = Celery()
     >>> app.steps['worker'].add(MyWorkerStep)  # < add class, do not instantiate
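
For reference, a minimal sketch of what a ``MyWorkerStep`` class could look
like, assuming the :class:`celery.bootsteps.StartStopStep` base class; the
print statements are placeholders for real start-up/shutdown logic:

.. code-block:: python

    from celery import bootsteps

    class MyWorkerStep(bootsteps.StartStopStep):
        """Hypothetical bootstep hooked into worker start/stop."""

        def start(self, worker):
            print('worker is starting')

        def stop(self, worker):
            print('worker is shutting down')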

+ 31 - 31
docs/userguide/monitoring.rst

@@ -31,13 +31,13 @@ and manage worker nodes (and to some degree tasks).
 
 To list all the commands available do:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery help
 
 or to get help for a specific command do:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery <command> --help
 
@@ -56,13 +56,13 @@ Commands
 
 * **status**: List active nodes in this cluster
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery -A proj status
 
 * **result**: Show the result of a task
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery -A proj result -t tasks.add 4e196aa4-0141-4601-8138-7aa33db0f577
 
@@ -75,14 +75,14 @@ Commands
         There is no undo for this operation, and messages will
         be permanently deleted!
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery -A proj purge
 
 
 * **inspect active**: List active tasks
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery -A proj inspect active
 
@@ -90,7 +90,7 @@ Commands
 
 * **inspect scheduled**: List scheduled ETA tasks
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery -A proj inspect scheduled
 
@@ -99,7 +99,7 @@ Commands
 
 * **inspect reserved**: List reserved tasks
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery -A proj inspect reserved
 
@@ -109,37 +109,37 @@ Commands
 
 * **inspect revoked**: List history of revoked tasks
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery -A proj inspect revoked
 
 * **inspect registered**: List registered tasks
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery -A proj inspect registered
 
 * **inspect stats**: Show worker statistics (see :ref:`worker-statistics`)
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery -A proj inspect stats
 
 * **control enable_events**: Enable events
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery -A proj control enable_events
 
 * **control disable_events**: Disable events
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery -A proj control disable_events
 
 * **migrate**: Migrate tasks from one broker to another (**EXPERIMENTAL**).
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery -A proj migrate redis://localhost amqp://localhost
 
 By default the inspect and control commands operate on all workers.
 You can specify a single worker, or a list of workers, by using the
 `--destination` argument:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj inspect -d w1,w2 reserved
 
@@ -244,25 +244,25 @@ Usage
 
 You can use pip to install Flower:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install flower
 
 Running the flower command will start a web-server that you can visit:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj flower
 
 The default port is http://localhost:5555, but you can change this using the `--port` argument:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj flower --port=5555
 
 The broker URL can also be passed through the `--broker` argument:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery flower --broker=amqp://guest:guest@localhost:5672//
     or
@@ -270,7 +270,7 @@ Broker URL can also be passed through the `--broker` argument :
 
 Then, you can visit Flower in your web browser:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ open http://localhost:5555
 
@@ -296,7 +296,7 @@ probably want to use Flower instead.
 
 Starting:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj events
 
@@ -308,19 +308,19 @@ You should see a screen like:
 `celery events` is also used to start snapshot cameras (see
 :ref:`monitoring-snapshots`:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj events --camera=<camera-class> --frequency=1.0
 
 and it includes a tool to dump events to :file:`stdout`:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj events --dump
 
 For a complete list of options use ``--help``:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery events --help
 
@@ -355,7 +355,7 @@ Inspecting queues
 
 Finding the number of tasks in a queue:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ rabbitmqctl list_queues name messages messages_ready \
                               messages_unacknowledged
@@ -370,13 +370,13 @@ not acknowledged yet (meaning it is in progress, or has been reserved).
 
 Finding the number of workers currently consuming from a queue:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ rabbitmqctl list_queues name consumers
 
 Finding the amount of memory allocated to a queue:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ rabbitmqctl list_queues name memory
 
@@ -399,13 +399,13 @@ Inspecting queues
 
 Finding the number of tasks in a queue:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ redis-cli -h HOST -p PORT -n DATABASE_NUMBER llen QUEUE_NAME
 
 The default queue is named `celery`. To get all available queues, invoke:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ redis-cli -h HOST -p PORT -n DATABASE_NUMBER keys \*
 
@@ -480,7 +480,7 @@ for example if you want to capture state every 2 seconds using the
 camera ``myapp.Camera`` you run :program:`celery events` with the following
 arguments:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj events -c myapp.Camera --frequency=2.0
 
@@ -520,7 +520,7 @@ about state objects.
 Now you can use this cam with :program:`celery events` by specifying
 it with the :option:`-c` option:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj events -c myapp.DumpCam --frequency=2.0
 

+ 2 - 2
docs/userguide/optimizing.rst

@@ -60,7 +60,7 @@ librabbitmq
 If you're using RabbitMQ (AMQP) as the broker then you can install the
 :mod:`librabbitmq` module to use an optimized client written in C:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install librabbitmq
 
@@ -228,7 +228,7 @@ size is 1MB (can only be changed system wide).
 You can disable this prefetching behavior by enabling the :option:`-Ofair`
 worker option:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj worker -l info -Ofair
 

+ 11 - 11
docs/userguide/periodic-tasks.rst

@@ -63,7 +63,7 @@ schedule manually.
     The database scheduler will not reset when timezone related settings
     change, so you must do this manually:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ python manage.py shell
         >>> from djcelery.models import PeriodicTask
@@ -283,12 +283,12 @@ sunset, dawn or dusk, you can use the
     from celery.schedules import solar
 
     CELERYBEAT_SCHEDULE = {
-    	# Executes at sunset in Melbourne
-    	'add-at-melbourne-sunset': {
-    		'task': 'tasks.add',
-    		'schedule': solar('sunset', -37.81753, 144.96715),
-    		'args': (16, 16),
-    	},
+        # Executes at sunset in Melbourne
+        'add-at-melbourne-sunset': {
+            'task': 'tasks.add',
+            'schedule': solar('sunset', -37.81753, 144.96715),
+            'args': (16, 16),
+        },
     }
 
 The arguments are simply: ``solar(event, latitude, longitude)``
@@ -378,7 +378,7 @@ Starting the Scheduler
 
 To start the :program:`celery beat` service:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj beat
 
@@ -387,7 +387,7 @@ workers `-B` option, this is convenient if you will never run
 more than one worker node, but it's not commonly used and for that
 reason is not recommended for production use:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj worker -B
 
@@ -396,7 +396,7 @@ file (named `celerybeat-schedule` by default), so it needs access to
 write in the current directory, or alternatively you can specify a custom
 location for this file:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj beat -s /home/celery/var/run/celerybeat-schedule
 
@@ -418,7 +418,7 @@ which is simply keeping track of the last run times in a local database file
 `django-celery` also ships with a scheduler that stores the schedule in the
 Django database:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj beat -S djcelery.schedulers.DatabaseScheduler
 

+ 19 - 7
docs/userguide/remote-tasks.rst

@@ -18,13 +18,17 @@ If you need to call into another language, framework or similar, you can
 do so by using HTTP callback tasks.
 
 The HTTP callback task uses GET/POST data to pass arguments and returns
-result as a JSON response. The scheme to call a task is::
+result as a JSON response. The scheme to call a task is:
 
-    GET http://example.com/mytask/?arg1=a&arg2=b&arg3=c
+.. code-block:: http
 
-or using POST::
+    GET http://example.com/mytask/?arg1=a&arg2=b&arg3=c HTTP/1.1
 
-    POST http://example.com/mytask
+or using POST:
+
+.. code-block:: http
+
+    POST http://example.com/mytask HTTP/1.1
 
 .. note::
 
@@ -33,11 +37,15 @@ or using POST::
 Whether to use GET or POST is up to you and your requirements.
 
 The web page should then return a response in the following format
-if the execution was successful::
+if the execution was successful:
+
+.. code-block:: javascript
 
     {'status': 'success', 'retval': …}
 
-or if there was an error::
+or if there was an error:
+
+.. code-block:: javascript
 
     {'status': 'failure', 'reason': 'Invalid moon alignment.'}
 
@@ -97,13 +105,17 @@ Calling webhook tasks
 
 To call a task you can use the :class:`~celery.task.http.URL` class:
 
+.. code-block:: pycon
+
     >>> from celery.task.http import URL
     >>> res = URL('http://example.com/multiply').get_async(x=10, y=10)
 
 
 :class:`~celery.task.http.URL` is a shortcut to the :class:`HttpDispatchTask`.
 You can subclass this to extend the
-functionality.
+functionality:
+
+.. code-block:: pycon
 
     >>> from celery.task.http import HttpDispatchTask
     >>> res = HttpDispatchTask.delay(

+ 27 - 15
docs/userguide/routing.rst

@@ -43,14 +43,14 @@ With this route enabled import feed tasks will be routed to the
 
 Now you can start server `z` to only process the feeds queue like this:
 
-.. code-block:: bash
+.. code-block:: console
 
     user@z:/$ celery -A proj worker -Q feeds
 
 You can specify as many queues as you want, so you can make this server
 process the default queue as well:
 
-.. code-block:: bash
+.. code-block:: console
 
     user@z:/$ celery -A proj worker -Q feeds,celery
 
@@ -82,7 +82,7 @@ are declared.
 
 A queue named `"video"` will be created with the following settings:
 
-.. code-block:: python
+.. code-block:: javascript
 
     {'exchange': 'video',
      'exchange_type': 'direct',
@@ -145,13 +145,13 @@ You can also override this using the `routing_key` argument to
 To make server `z` consume from the feed queue exclusively you can
 start it with the ``-Q`` option:
 
-.. code-block:: bash
+.. code-block:: console
 
     user@z:/$ celery -A proj worker -Q feed_tasks --hostname=z@%h
 
 Servers `x` and `y` must be configured to consume from the default queue:
 
-.. code-block:: bash
+.. code-block:: console
 
     user@x:/$ celery -A proj worker -Q default --hostname=x@%h
     user@y:/$ celery -A proj worker -Q default --hostname=y@%h
@@ -159,7 +159,7 @@ Servers `x` and `y` must be configured to consume from the default queue:
 If you want, you can even have your feed processing worker handle regular
 tasks as well, maybe in times when there's a lot of work to do:
 
-.. code-block:: python
+.. code-block:: console
 
     user@z:/$ celery -A proj worker -Q feed_tasks,default --hostname=z@%h
 
@@ -209,7 +209,7 @@ metadata -- like the number of retries or an ETA.
 
 This is an example task message represented as a Python dictionary:
 
-.. code-block:: python
+.. code-block:: javascript
 
     {'task': 'myapp.tasks.add',
      'id': '54086c5e-6193-4575-8308-dbab76798756',
@@ -365,7 +365,7 @@ but different implementation may not implement all commands.
 You can write commands directly in the arguments to :program:`celery amqp`,
 or just start with no arguments to start it in shell-mode:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj amqp
     -> connecting to amqp://guest@localhost:5672/.
@@ -379,7 +379,7 @@ hit the `tab` key to show a list of possible matches.
 
 Let's create a queue you can send messages to:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj amqp
     1> exchange.declare testexchange direct
@@ -395,7 +395,9 @@ the routing key ``testkey``.
 
 From now on all messages sent to the exchange ``testexchange`` with routing
 key ``testkey`` will be moved to this queue.  You can send a message by
-using the ``basic.publish`` command::
+using the ``basic.publish`` command:
+
+.. code-block:: console
 
     4> basic.publish 'This is a message!' testexchange testkey
     ok.
@@ -405,7 +407,9 @@ Now that the message is sent you can retrieve it again.  You can use the
 (which is alright for maintenance tasks, for services you'd want to use
 ``basic.consume`` instead)
 
-Pop a message off the queue::
+Pop a message off the queue:
+
+.. code-block:: console
 
     5> basic.get testqueue
     {'body': 'This is a message!',
@@ -428,12 +432,16 @@ This tag is used to acknowledge the message.  Also note that
 delivery tags are not unique across connections, so in another client
 the delivery tag `1` might point to a different message than in this channel.
 
-You can acknowledge the message you received using ``basic.ack``::
+You can acknowledge the message you received using ``basic.ack``:
+
+.. code-block:: console
 
     6> basic.ack 1
     ok.
 
-To clean up after our test session you should delete the entities you created::
+To clean up after our test session you should delete the entities you created:
+
+.. code-block:: console
 
     7> queue.delete testqueue
     ok. 0 messages deleted.
@@ -533,11 +541,15 @@ becomes -->
 
 
 You install router classes by adding them to the :setting:`CELERY_ROUTES`
-setting::
+setting:
+
+.. code-block:: python
 
     CELERY_ROUTES = (MyRouter(),)
 
-Router classes can also be added by name::
+Router classes can also be added by name:
+
+.. code-block:: python
 
     CELERY_ROUTES = ('myapp.routers.MyRouter',)
 

+ 19 - 9
docs/userguide/tasks.rst

@@ -73,7 +73,9 @@ these can be specified as arguments to the decorator:
     if you don't know what that is then please read :ref:`first-steps`.
 
     If you're using Django or are still using the "old" module based celery API,
-    then you can import the task decorator like this::
+    then you can import the task decorator like this:
+
+    .. code-block:: python
 
         from celery import task
 
@@ -106,7 +108,7 @@ will be generated out of the function name if a custom name is not provided.
 
 For example:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> @app.task(name='sum-of-two-numbers')
     >>> def add(x, y):
@@ -119,13 +121,15 @@ A best practice is to use the module name as a namespace,
 this way names won't collide if there's already a task with that name
 defined in another module.
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> @app.task(name='tasks.add')
     >>> def add(x, y):
     ...     return x + y
 
-You can tell the name of the task by investigating its name attribute::
+You can tell the name of the task by investigating its name attribute:
+
+.. code-block:: pycon
 
     >>> add.name
     'tasks.add'
@@ -168,7 +172,7 @@ If you install the app under the name ``project.myapp`` then the
 tasks module will be imported as ``project.myapp.tasks``,
 so you must make sure you always import the tasks using the same name:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> from project.myapp.tasks import mytask   # << GOOD
 
@@ -177,7 +181,7 @@ so you must make sure you always import the tasks using the same name:
 The second example will cause the task to be named differently
 since the worker and the client import the modules under different names:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> from project.myapp.tasks import mytask
     >>> mytask.name
@@ -894,7 +898,9 @@ The name of the state is usually an uppercase string.  As an example
 you could have a look at :mod:`abortable tasks <~celery.contrib.abortable>`
 which defines its own custom :state:`ABORTED` state.
 
-Use :meth:`~@Task.update_state` to update a task's state::
+Use :meth:`~@Task.update_state` to update a task's state:
+
+.. code-block:: python
 
     @app.task(bind=True)
     def upload_files(self, filenames):
@@ -1268,7 +1274,7 @@ All defined tasks are listed in a registry.  The registry contains
 a list of task names and their task classes.  You can investigate this registry
 yourself:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> from proj.celery import app
     >>> app.tasks
@@ -1503,7 +1509,9 @@ that automatically expands some abbreviations in it:
         article.save()
 
 First, an author creates an article and saves it; then the author
-clicks on a button that initiates the abbreviation task::
+clicks on a button that initiates the abbreviation task:
+
+.. code-block:: pycon
 
     >>> article = Article.objects.get(id=102)
     >>> expand_abbreviations.delay(article)
@@ -1524,6 +1532,8 @@ re-fetch the article in the task body:
         article.body.replace('MyCorp', 'My Corporation')
         article.save()
 
+.. code-block:: pycon
+
     >>> expand_abbreviations(article_id)
 
 There might even be performance benefits to this approach, as sending large

+ 72 - 48
docs/userguide/workers.rst

@@ -21,14 +21,14 @@ Starting the worker
 
 You can start the worker in the foreground by executing the command:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj worker -l info
 
 For a full list of available command-line options see
 :mod:`~celery.bin.worker`, or simply do:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery worker --help
 
@@ -36,7 +36,7 @@ You can also start multiple workers on the same machine. If you do so
 be sure to give a unique name to each individual worker by specifying a
 host name with the :option:`--hostname|-n` argument:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker1.%h
     $ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker2.%h
@@ -81,7 +81,7 @@ Also as processes can't override the :sig:`KILL` signal, the worker will
 not be able to reap its children, so make sure to do so manually.  This
 command usually does the trick:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
 
@@ -94,10 +94,10 @@ To restart the worker you should send the `TERM` signal and start a new
 instance.  The easiest way to manage workers for development
 is by using `celery multi`:
 
-    .. code-block:: bash
+.. code-block:: console
 
-        $ celery multi start 1 -A proj -l info -c4 --pidfile=/var/run/celery/%n.pid
-        $ celery multi restart 1 --pidfile=/var/run/celery/%n.pid
+    $ celery multi start 1 -A proj -l info -c4 --pidfile=/var/run/celery/%n.pid
+    $ celery multi restart 1 --pidfile=/var/run/celery/%n.pid
 
 For production deployments you should be using init scripts or other process
 supervision systems (see :ref:`daemonizing`).
@@ -107,7 +107,7 @@ restart the worker using the :sig:`HUP` signal, but note that the worker
 will be responsible for restarting itself so this is prone to problems and
 is not recommended in production:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ kill -HUP $pid
 
@@ -265,14 +265,18 @@ Some remote control commands also have higher-level interfaces using
 :meth:`~@control.broadcast` in the background, like
 :meth:`~@control.rate_limit` and :meth:`~@control.ping`.
 
-Sending the :control:`rate_limit` command and keyword arguments::
+Sending the :control:`rate_limit` command and keyword arguments:
+
+.. code-block:: pycon
 
     >>> app.control.broadcast('rate_limit',
     ...                          arguments={'task_name': 'myapp.mytask',
     ...                                     'rate_limit': '200/m'})
 
 This will send the command asynchronously, without waiting for a reply.
-To request a reply you have to use the `reply` argument::
+To request a reply you have to use the `reply` argument:
+
+.. code-block:: pycon
 
     >>> app.control.broadcast('rate_limit', {
     ...     'task_name': 'myapp.mytask', 'rate_limit': '200/m'}, reply=True)
@@ -281,7 +285,9 @@ To request a reply you have to use the `reply` argument::
      {'worker3.example.com': 'New rate limit set successfully'}]
 
 Using the `destination` argument you can specify a list of workers
-to receive the command::
+to receive the command:
+
+.. code-block:: pycon
 
     >>> app.control.broadcast('rate_limit', {
     ...     'task_name': 'myapp.mytask',
@@ -331,7 +337,7 @@ Terminating a task also revokes it.
 
 **Example**
 
-::
+.. code-block:: pycon
 
     >>> result.revoke()
 
@@ -359,7 +365,7 @@ several tasks at once.
 
 **Example**
 
-::
+.. code-block:: pycon
 
     >>> app.control.revoke([
     ...    '7993b0aa-1f0b-4780-9af0-c47c0858b3f2',
@@ -385,15 +391,15 @@ of revoked ids will also vanish.  If you want to preserve this list between
 restarts you need to specify a file for these to be stored in by using the `--statedb`
 argument to :program:`celery worker`:
 
-.. code-block:: bash
+.. code-block:: console
 
-    celery -A proj worker -l info --statedb=/var/run/celery/worker.state
+    $ celery -A proj worker -l info --statedb=/var/run/celery/worker.state
 
 or if you use :program:`celery multi` you will want to create one file per
 worker instance so then you can use the `%n` format to expand the current node
 name:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery multi start 2 -l info --statedb=/var/run/celery/%n.state
 
@@ -463,7 +469,9 @@ and hard time limits for a task — named ``time_limit``.
 
 Example changing the time limit for the ``tasks.crawl_the_web`` task
 to have a soft time limit of one minute, and a hard time limit of
-two minutes::
+two minutes:
+
+.. code-block:: pycon
 
     >>> app.control.time_limit('tasks.crawl_the_web',
                                soft=60, hard=120, reply=True)
@@ -484,7 +492,7 @@ Changing rate-limits at runtime
 Example changing the rate limit for the `myapp.mytask` task to execute
 at most 200 tasks of that type every minute:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> app.control.rate_limit('myapp.mytask', '200/m')
 
@@ -492,7 +500,7 @@ The above does not specify a destination, so the change request will affect
 all worker instances in the cluster.  If you only want to affect a specific
 list of workers you can include the ``destination`` argument:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> app.control.rate_limit('myapp.mytask', '200/m',
     ...            destination=['celery@worker1.example.com'])
@@ -562,7 +570,7 @@ queue named ``celery``).
 You can specify what queues to consume from at startup,
 by giving a comma separated list of queues to the :option:`-Q` option:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj worker -l info -Q foo,bar,baz
 
@@ -586,7 +594,7 @@ to start consuming from a queue. This operation is idempotent.
 To tell all workers in the cluster to start consuming from a queue
 named "``foo``" you can use the :program:`celery control` program:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj control add_consumer foo
     -> worker1.local: OK
@@ -595,11 +603,13 @@ named "``foo``" you can use the :program:`celery control` program:
 If you want to specify a specific worker you can use the
 :option:`--destination` argument:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj control add_consumer foo -d worker1.local
 
-The same can be accomplished dynamically using the :meth:`@control.add_consumer` method::
+The same can be accomplished dynamically using the :meth:`@control.add_consumer` method:
+
+.. code-block:: pycon
 
     >>> app.control.add_consumer('foo', reply=True)
     [{u'worker1.local': {u'ok': u"already consuming from u'foo'"}}]
@@ -611,7 +621,9 @@ The same can be accomplished dynamically using the :meth:`@control.add_consumer`
 
 So far I have only shown examples using automatic queues.
 If you need more control you can also specify the exchange, routing_key and
-even other options::
+even other options:
+
+.. code-block:: pycon
 
     >>> app.control.add_consumer(
     ...     queue='baz',
@@ -637,14 +649,14 @@ control command.
 To force all workers in the cluster to cancel consuming from a queue
 you can use the :program:`celery control` program:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj control cancel_consumer foo
 
 The :option:`--destination` argument can be used to specify a worker, or a
 list of workers, to act on the command:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj control cancel_consumer foo -d worker1.local
 
@@ -652,7 +664,7 @@ list of workers, to act on the command:
 You can also cancel consumers programmatically using the
 :meth:`@control.cancel_consumer` method:
 
-.. code-block:: bash
+.. code-block:: pycon
 
     >>> app.control.cancel_consumer('foo', reply=True)
     [{u'worker1.local': {u'ok': u"no longer consuming from u'foo'"}}]
@@ -665,7 +677,7 @@ Queues: List of active queues
 You can get a list of queues that a worker consumes from by using
 the :control:`active_queues` control command:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj inspect active_queues
     [...]
@@ -674,14 +686,16 @@ Like all other remote control commands this also supports the
 :option:`--destination` argument used to specify which workers should
 reply to the request:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj inspect active_queues -d worker1.local
     [...]
 
 
 This can also be done programmatically by using the
-:meth:`@control.inspect.active_queues` method::
+:meth:`@control.inspect.active_queues` method:
+
+.. code-block:: pycon
 
     >>> app.control.inspect().active_queues()
     [...]
@@ -726,7 +740,7 @@ implementations:
     to install the :mod:`pyinotify` library you have to run the following
     command:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ pip install pyinotify
 
@@ -740,7 +754,7 @@ implementations:
 You can force an implementation by setting the :envvar:`CELERYD_FSNOTIFY`
 environment variable:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ env CELERYD_FSNOTIFY=stat celery worker -l info --autoreload
 
@@ -766,14 +780,14 @@ Example
 Running the following command will result in the `foo` and `bar` modules
 being imported by the worker processes:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> app.control.broadcast('pool_restart',
     ...                       arguments={'modules': ['foo', 'bar']})
 
 Use the ``reload`` argument to reload modules it has already imported:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> app.control.broadcast('pool_restart',
     ...                       arguments={'modules': ['foo'],
@@ -782,7 +796,7 @@ Use the ``reload`` argument to reload modules it has already imported:
 If you don't specify any modules then all known tasks modules will
 be imported/reloaded:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> app.control.broadcast('pool_restart', arguments={'reload': True})
 
@@ -816,16 +830,16 @@ uses remote control commands under the hood.
 You can also use the ``celery`` command to inspect workers,
 and it supports the same commands as the :class:`@control` interface.
 
-.. code-block:: python
+.. code-block:: pycon
 
-    # Inspect all nodes.
+    >>> # Inspect all nodes.
     >>> i = app.control.inspect()
 
-    # Specify multiple nodes to inspect.
+    >>> # Specify multiple nodes to inspect.
     >>> i = app.control.inspect(['worker1.example.com',
                                 'worker2.example.com'])
 
-    # Specify a single node to inspect.
+    >>> # Specify a single node to inspect.
     >>> i = app.control.inspect('worker1.example.com')
 
 .. _worker-inspect-registered-tasks:
@@ -834,7 +848,9 @@ Dump of registered tasks
 ------------------------
 
 You can get a list of tasks registered in the worker using the
-:meth:`~@control.inspect.registered`::
+:meth:`~@control.inspect.registered`:
+
+.. code-block:: pycon
 
     >>> i.registered()
     [{'worker1.example.com': ['tasks.add',
@@ -846,7 +862,9 @@ Dump of currently executing tasks
 ---------------------------------
 
 You can get a list of active tasks using
-:meth:`~@control.inspect.active`::
+:meth:`~@control.inspect.active`:
+
+.. code-block:: pycon
 
     >>> i.active()
     [{'worker1.example.com':
@@ -861,7 +879,9 @@ Dump of scheduled (ETA) tasks
 -----------------------------
 
 You can get a list of tasks waiting to be scheduled by using
-:meth:`~@control.inspect.scheduled`::
+:meth:`~@control.inspect.scheduled`:
+
+.. code-block:: pycon
 
     >>> i.scheduled()
     [{'worker1.example.com':
@@ -891,7 +911,9 @@ Reserved tasks are tasks that have been received, but are still waiting to be
 executed.
 
 You can get a list of these using
-:meth:`~@control.inspect.reserved`::
+:meth:`~@control.inspect.reserved`:
+
+.. code-block:: pycon
 
     >>> i.reserved()
     [{'worker1.example.com':
@@ -910,7 +932,7 @@ The remote control command ``inspect stats`` (or
 :meth:`~@control.inspect.stats`) will give you a long list of useful (or not
 so useful) statistics about the worker:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery -A proj inspect stats
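+
+The same statistics can be fetched programmatically through the inspect API
+(a minimal sketch, assuming ``app`` is your Celery application instance):
+
+.. code-block:: pycon
+
+    >>> app.control.inspect().stats()
+    [...]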
 
@@ -1108,7 +1130,7 @@ Remote shutdown
 
 This command will gracefully shut down the worker remotely:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> app.control.broadcast('shutdown') # shutdown all workers
     >>> app.control.broadcast('shutdown', destination='worker1@example.com')
@@ -1123,7 +1145,7 @@ The workers reply with the string 'pong', and that's just about it.
 It will use the default one second timeout for replies unless you specify
 a custom timeout:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> app.control.ping(timeout=0.5)
     [{'worker1.example.com': 'pong'},
@@ -1131,7 +1153,9 @@ a custom timeout:
      {'worker3.example.com': 'pong'}]
 
 :meth:`~@control.ping` also supports the `destination` argument,
-so you can specify which workers to ping::
+so you can specify which workers to ping:
+
+.. code-block:: pycon
 
     >>> ping(['worker2.example.com', 'worker3.example.com'])
     [{'worker2.example.com': 'pong'},
@@ -1149,7 +1173,7 @@ You can enable/disable events by using the `enable_events`,
 `disable_events` commands.  This is useful to temporarily monitor
 a worker using :program:`celery events`/:program:`celerymon`.
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> app.control.enable_events()
     >>> app.control.disable_events()
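+
+While events are enabled you can then watch the stream with the curses
+monitor from a shell (a sketch; add the ``-A`` argument for your own app):
+
+.. code-block:: console
+
+    $ celery -A proj events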

+ 5 - 5
docs/whatsnew-2.5.rst

@@ -64,7 +64,7 @@ race condition leading to an annoying warning.
     The :program:`camqadm` command can be used to delete the
     previous exchange:
 
-    .. code-block:: bash
+    .. code-block:: console
 
             $ camqadm exchange.delete celeryresults
 
@@ -240,7 +240,7 @@ implementations:
     to install the :mod:`pyinotify` library you have to run the following
     command:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ pip install pyinotify
 
@@ -254,7 +254,7 @@ implementations:
 You can force an implementation by setting the :envvar:`CELERYD_FSNOTIFY`
 environment variable:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ env CELERYD_FSNOTIFY=stat celeryd -l info --autoreload
 
@@ -378,7 +378,7 @@ In Other News
   Additional configuration must be added at the end of the argument list
   followed by ``--``, for example:
 
-  .. code-block:: bash
+  .. code-block:: console
 
     $ celerybeat -l info -- celerybeat.max_loop_interval=10.0
 
@@ -428,7 +428,7 @@ In Other News
 
     **Examples**:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celeryctl migrate redis://localhost amqp://localhost
         $ celeryctl migrate amqp://localhost//v1 amqp://localhost//v2

+ 69 - 35
docs/whatsnew-3.0.rst

@@ -96,7 +96,7 @@ has been removed, and that makes it incompatible with earlier versions.
 You can manually delete the old exchanges if you want,
 using the :program:`celery amqp` command (previously called ``camqadm``):
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery amqp exchange.delete celeryd.pidbox
     $ celery amqp exchange.delete reply.celeryd.pidbox
@@ -128,7 +128,7 @@ All Celery's command-line programs are now available from a single
 
 You can see a list of subcommands and options by running:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery help
 
@@ -168,7 +168,7 @@ The setup.py install script will try to remove the old package,
 but if that doesn't work for some reason you have to remove
 it manually.  This command helps:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ rm -r $(dirname $(python -c '
         import celery;print(celery.__file__)'))/app/task/
@@ -303,13 +303,13 @@ Tasks can now have callbacks and errbacks, and dependencies are recorded
 
             which can then be used to produce an image:
 
-            .. code-block:: bash
+            .. code-block:: console
 
                 $ dot -Tpng graph.dot -o graph.png
 
 - A new special subtask called ``chain`` is also included:
 
-    .. code-block:: python
+    .. code-block:: pycon
 
         >>> from celery import chain
 
@@ -351,7 +351,9 @@ The priority field is a number in the range of 0 - 9, where
 The priority range is collapsed into four steps by default, since it is
 unlikely that nine steps will yield more benefit than using four steps.
 The number of steps can be configured by setting the ``priority_steps``
-transport option, which must be a list of numbers in **sorted order**::
+transport option, which must be a list of numbers in **sorted order**:
+
+.. code-block:: pycon
 
     >>> BROKER_TRANSPORT_OPTIONS = {
     ...     'priority_steps': [0, 2, 4, 6, 8, 9],
@@ -393,28 +395,34 @@ accidentally changed while switching to using blocking pop.
 
 - A new shortcut has been added to tasks:
 
-    ::
+    .. code-block:: pycon
 
         >>> task.s(arg1, arg2, kw=1)
 
-    as a shortcut to::
+    as a shortcut to:
+
+    .. code-block:: pycon
 
         >>> task.subtask((arg1, arg2), {'kw': 1})
 
-- Tasks can be chained by using the ``|`` operator::
+- Tasks can be chained by using the ``|`` operator:
+
+    .. code-block:: pycon
 
         >>> (add.s(2, 2) | pow.s(2)).apply_async()
 
 - Subtasks can be "evaluated" using the ``~`` operator:
 
-    ::
+    .. code-block:: pycon
 
         >>> ~add.s(2, 2)
         4
 
         >>> ~(add.s(2, 2) | pow.s(2))
 
-    is the same as::
+    is the same as:
+
+    .. code-block:: pycon
 
         >>> chain(add.s(2, 2), pow.s(2)).apply_async().get()
 
@@ -434,7 +442,9 @@ accidentally changed while switching to using blocking pop.
     It's now a pure dict subclass with properties for attribute
     access to the relevant keys.
 
-- The repr's now outputs how the sequence would like imperatively::
+- The repr now outputs how the sequence would look imperatively:
+
+    .. code-block:: pycon
 
         >>> from celery import chord
 
@@ -467,7 +477,7 @@ stable and is now documented as part of the offical API.
     These commands are available programmatically as
     :meth:`@control.add_consumer` / :meth:`@control.cancel_consumer`:
 
-    .. code-block:: python
+    .. code-block:: pycon
 
         >>> celery.control.add_consumer(queue_name,
         ...     destination=['w1.example.com'])
@@ -476,7 +486,7 @@ stable and is now documented as part of the offical API.
 
     or using the :program:`celery control` command:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery control -d w1.example.com add_consumer queue
         $ celery control -d w1.example.com cancel_consumer queue
@@ -493,14 +503,14 @@ stable and is now documented as part of the offical API.
 
     This command is available programmatically as :meth:`@control.autoscale`:
 
-    .. code-block:: python
+    .. code-block:: pycon
 
         >>> celery.control.autoscale(max=10, min=5,
         ...     destination=['w1.example.com'])
 
     or using the :program:`celery control` command:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery control -d w1.example.com autoscale 10 5
 
@@ -511,14 +521,14 @@ stable and is now documented as part of the offical API.
     These commands are available programmatically as
     :meth:`@control.pool_grow` / :meth:`@control.pool_shrink`:
 
-    .. code-block:: python
+    .. code-block:: pycon
 
         >>> celery.control.pool_grow(2, destination=['w1.example.com'])
         >>> celery.control.pool_shrink(2, destination=['w1.example.com'])
 
     or using the :program:`celery control` command:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery control -d w1.example.com pool_grow 2
         $ celery control -d w1.example.com pool_shrink 2
@@ -537,12 +547,16 @@ Immutable subtasks
 ------------------
 
 ``subtask``'s can now be immutable, which means that the arguments
-will not be modified when calling callbacks::
+will not be modified when calling callbacks:
+
+.. code-block:: pycon
 
     >>> chain(add.s(2, 2), clear_static_electricity.si())
 
 means it will not receive the argument of the parent task,
-and ``.si()`` is a shortcut to::
+and ``.si()`` is a shortcut to:
+
+.. code-block:: pycon
 
     >>> clear_static_electricity.subtask(immutable=True)
 
@@ -602,7 +616,9 @@ Task registry no longer global
 
 Every Celery instance now has its own task registry.
 
-You can make apps share registries by specifying it::
+You can make apps share registries by specifying it:
+
+.. code-block:: pycon
 
     >>> app1 = Celery()
     >>> app2 = Celery(tasks=app1.tasks)
@@ -610,7 +626,9 @@ You can make apps share registries by specifying it::
 Note that tasks are shared between registries by default, so that
 tasks will be added to every subsequently created task registry.
 As an alternative tasks can be private to specific task registries
-by setting the ``shared`` argument to the ``@task`` decorator::
+by setting the ``shared`` argument to the ``@task`` decorator:
+
+.. code-block:: python
 
     @celery.task(shared=False)
     def add(x, y):
@@ -625,7 +643,9 @@ by default, it will first be bound (and configured) when
 a concrete subclass is created.
 
 This means that you can safely import and make task base classes,
-without also initializing the app environment::
+without also initializing the app environment:
+
+.. code-block:: python
 
     from celery.task import Task
 
@@ -636,6 +656,8 @@ without also initializing the app environment::
             print('CALLING %r' % (self,))
             return self.run(*args, **kwargs)
 
+.. code-block:: pycon
+
     >>> DebugTask
     <unbound DebugTask>
 
@@ -676,7 +698,7 @@ E.g. if you have a project named 'proj' where the
 celery app is located in 'from proj.celery import app',
 then the following will be equivalent:
 
-.. code-block:: bash
+.. code-block:: console
 
         $ celery worker --app=proj
         $ celery worker --app=proj.celery:
@@ -697,7 +719,9 @@ In Other News
   descriptors that creates a new subclass on access.
 
     This means that e.g. ``app.Worker`` is an actual class
-    and will work as expected when::
+    and will work as expected when:
+
+    .. code-block:: python
 
         class Worker(app.Worker):
             ...
@@ -715,7 +739,9 @@ In Other News
 
 - Result backends can now be set using a URL
 
-    Currently only supported by redis.  Example use::
+    Currently only supported by redis.  Example use:
+
+    .. code-block:: python
 
         CELERY_RESULT_BACKEND = 'redis://localhost/1'
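+
+    The URL may also carry a password, host, port and database number, as in
+    this sketch (all values are placeholders):
+
+    .. code-block:: python
+
+        CELERY_RESULT_BACKEND = 'redis://:password@localhost:6379/1'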
 
@@ -754,20 +780,22 @@ In Other News
 
 - Bugreport now available as a command and broadcast command
 
-    - Get it from a Python repl::
+    - Get it from a Python repl:
+
+        .. code-block:: pycon
 
-        >>> import celery
-        >>> print(celery.bugreport())
+            >>> import celery
+            >>> print(celery.bugreport())
 
     - Using the ``celery`` command line program:
 
-        .. code-block:: bash
+        .. code-block:: console
 
             $ celery report
 
     - Get it from remote workers:
 
-        .. code-block:: bash
+        .. code-block:: console
 
             $ celery inspect report
 
@@ -788,7 +816,9 @@ In Other News
     Returns a list of the results applying the task function to every item
     in the sequence.
 
-    Example::
+    Example:
+
+    .. code-block:: pycon
 
         >>> from celery import xstarmap
 
@@ -799,12 +829,16 @@ In Other News
 
 - ``group.skew(start=, stop=, step=)``
 
-  Skew will skew the countdown for the individual tasks in a group,
-  e.g. with a group::
+    Skew will skew the countdown for the individual tasks in a group,
+    e.g. with a group:
+
+    .. code-block:: pycon
 
         >>> g = group(add.s(i, i) for i in xrange(10))
 
-  Skewing the tasks from 0 seconds to 10 seconds::
+    Skewing the tasks from 0 seconds to 10 seconds:
+
+    .. code-block:: pycon
 
         >>> g.skew(stop=10)
 

+ 23 - 23
docs/whatsnew-3.1.rst

@@ -159,7 +159,7 @@ in init scripts.  The rest will be removed in 3.2.
 If this is not a new installation then you may want to remove the old
 commands:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip uninstall celery
     $ # repeat until it fails
@@ -250,7 +250,7 @@ Caveats
     You can disable this prefetching behavior by enabling the :option:`-Ofair`
     worker option:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery -A proj worker -l info -Ofair
 
@@ -325,9 +325,9 @@ but if you would like to experiment with it you should know that:
 
     Instead you use the :program:`celery` command directly:
 
-    .. code-block:: bash
+    .. code-block:: console
 
-        celery -A proj worker -l info
+        $ celery -A proj worker -l info
 
     For this to work your app module must store the  :envvar:`DJANGO_SETTINGS_MODULE`
     environment variable, see the example in the :ref:`Django
@@ -410,14 +410,14 @@ If a custom name is not specified then the
 worker will use the name 'celery' by default, resulting in a
 fully qualified node name of 'celery@hostname':
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery worker -n example.com
     celery@example.com
 
 To also set the name you must include the @:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery worker -n worker1@example.com
     worker1@example.com
@@ -431,7 +431,7 @@ Remember that the ``-n`` argument also supports simple variable
 substitutions, so if the current hostname is *george.example.com*
 then the ``%h`` macro will expand into that:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ celery worker -n worker1@%h
     worker1@george.example.com
@@ -556,7 +556,7 @@ Time limits can now be set by the client
 Two new options have been added to the Calling API: ``time_limit`` and
 ``soft_time_limit``:
 
-.. code-block:: python
+.. code-block:: pycon
 
     >>> res = add.apply_async((2, 2), time_limit=10, soft_time_limit=8)
 
@@ -605,7 +605,7 @@ setuptools extras.
 
 You install extras by specifying them inside brackets:
 
-.. code-block:: bash
+.. code-block:: console
 
     $ pip install celery[redis,mongodb]
 
@@ -659,9 +659,9 @@ This means that:
 
 now does the same as calling the task directly:
 
-.. code-block:: python
+.. code-block:: pycon
 
-    add(2, 2)
+    >>> add(2, 2)
 
 In Other News
 -------------
@@ -685,7 +685,7 @@ In Other News
 
     Regular signature:
 
-    .. code-block:: python
+    .. code-block:: pycon
 
         >>> s = add.s(2, 2)
         >>> result = s.freeze()
@@ -696,7 +696,7 @@ In Other News
 
     Group:
 
-    .. code-block:: python
+    .. code-block:: pycon
 
         >>> g = group(add.s(2, 2), add.s(4, 4))
         >>> result = g.freeze()
@@ -767,9 +767,9 @@ In Other News
 
     A dispatcher instantiated as follows:
 
-    .. code-block:: python
+    .. code-block:: pycon
 
-        app.events.Dispatcher(connection, groups=['worker'])
+        >>> app.events.Dispatcher(connection, groups=['worker'])
 
     will only send worker related events and silently drop any attempts
     to send events related to any other group.
@@ -814,7 +814,7 @@ In Other News
 
     Example:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ celery inspect conf
 
@@ -923,7 +923,7 @@ In Other News
 
     You can create graphs from the currently installed bootsteps:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         # Create graph of currently installed bootsteps in both the worker
         # and consumer namespaces.
@@ -937,7 +937,7 @@ In Other News
 
     Or graphs of workers in a cluster:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         # Create graph from the current cluster
         $ celery graph workers | dot -T png -o workers.png
@@ -986,11 +986,11 @@ In Other News
     The :envvar:`C_IMPDEBUG` can be set to trace imports as they
     occur:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ C_IMPDEBUG=1 celery worker -l info
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ C_IMPDEBUG=1 celery shell
 
@@ -1089,7 +1089,7 @@ In Other News
     The :option:`-X` argument is the inverse of the :option:`-Q` argument
     and accepts a list of queues to exclude (not consume from):
 
-    .. code-block:: bash
+    .. code-block:: console
 
         # Consume from all queues in CELERY_QUEUES, but not the 'foo' queue.
         $ celery worker -A proj -l info -X foo
@@ -1098,13 +1098,13 @@ In Other News
 
     This means that you can now do:
 
-    .. code-block:: bash
+    .. code-block:: console
 
             $ C_FAKEFORK=1 celery multi start 10
 
     or:
 
-    .. code-block:: bash
+    .. code-block:: console
 
         $ C_FAKEFORK=1 /etc/init.d/celeryd start