
Docs: Wording

Ask Solem, 8 years ago
commit b65c699155

+ 2 - 2
CONTRIBUTING.rst

@@ -10,7 +10,7 @@ This document is fairly extensive and you aren't really expected
 to study this in detail for small contributions;
 
     The most important rule is that contributing must be easy
-    and that the community is friendly and not nitpicking on details
+    and that the community is friendly and not nitpicking on details,
     such as coding style.
 
 If you're reporting a bug you should read the Reporting bugs section
@@ -700,7 +700,7 @@ is following the conventions.
         set textwidth=78
 
   If adhering to this limit makes the code less readable, you have one more
-  character to go on, which means 78 is a soft limit, and 79 is the hard
+  character to go on. This means 78 is a soft limit, and 79 is the hard
   limit :)
 
 * Import order

+ 2 - 2
celery/utils/dispatch/saferef.py

@@ -212,7 +212,7 @@ class BoundNonDescriptorMethodWeakref(BoundMethodWeakref):  # pragma: no cover
         the same, instead of assuming that the function is a descriptor.
         This approach is equally fast, but not 100% reliable because
         functions can be stored on an attribute named differently than the
-        function's name such as in::
+        function's name, such as in::
 
             >>> class A(object):
             ...     pass
@@ -222,7 +222,7 @@ class BoundNonDescriptorMethodWeakref(BoundMethodWeakref):  # pragma: no cover
             >>> A.bar = foo
 
         This shouldn't be a common use case.  So, on platforms where methods
-        aren't descriptors (such as Jython) this implementation has the
+        aren't descriptors (e.g. Jython) this implementation has the
         advantage of working in most cases.
     """
     def __init__(self, target, on_delete=None):

+ 1 - 1
celery/utils/objects.py

@@ -1,5 +1,5 @@
 # -*- coding: utf-8 -*-
-"""Object related utilities including introspection, etc."""
+"""Object related utilities, including introspection, etc."""
 from __future__ import absolute_import, unicode_literals
 
 __all__ = ['Bunch', 'FallbackContext', 'mro_lookup']

+ 1 - 1
celery/worker/control.py

@@ -347,7 +347,7 @@ def _iter_schedule_requests(timer, Request=Request):
 
 
 @inspect_command(alias='dump_reserved')
 def reserved(state, **kwargs):
-    """List of currently reserved tasks (not including scheduled/active)."""
+    """List of currently reserved tasks, not including scheduled/active."""
     reserved_tasks = (
         state.tset(worker_state.reserved_requests) -
         state.tset(worker_state.active_requests)

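This ``reserved`` inspect command is what backs ``celery inspect reserved`` and the inspect API; a quick sketch of calling it from a client (assuming a configured ``app``; the worker names in the reply depend on your cluster):

.. code-block:: pycon

    >>> app.control.inspect().reserved()
    {'worker1@example.com': []}
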
+ 1 - 1
docs/contributing.rst

@@ -10,7 +10,7 @@ This document is fairly extensive and you aren't really expected
 to study this in detail for small contributions;
 
     The most important rule is that contributing must be easy
-    and that the community is friendly and not nitpicking on details
+    and that the community is friendly and not nitpicking on details,
     such as coding style.
 
 If you're reporting a bug you should read the Reporting bugs section

+ 126 - 68
docs/faq.rst

@@ -144,17 +144,20 @@ space, see the :ref:`guide-optimizing` guide for more information.
 Is Celery dependent on pickle?
 ------------------------------
 
-**Answer:** No.
+**Answer:** No, Celery can support any serialization scheme.
 
-Celery can support any serialization scheme and has built-in support for
-JSON, YAML, Pickle, and msgpack. Also, as every task is associated with a
-content type, you can even send one task using pickle, and another using JSON.
+We have built-in support for JSON, YAML, Pickle, and msgpack.
+Every task is associated with a content type, so you can even send one task
+using pickle, and another using JSON.
 
-The default serialization format is pickle simply because it's
-convenient (it supports sending complex Python objects as task arguments).
+The default serialization format used to be pickle, but since 4.0 the default
+is now JSON.  If you require sending complex Python objects as task arguments,
+you can use pickle as the serialization format, but see the notes in
+:ref:`security-serializers`.
 
-If you need to communicate with other languages you should change
-to a serialization format that's suitable for that.
+If you need to communicate with other languages you should use
+a serialization format suited to that task, which pretty much means any
+serializer that's not pickle.
 
 You can set a global default serializer, the default serializer for a
 particular Task, or even what serializer to use when sending a single task
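As a sketch, the global default and a per-call override look like this (``add`` stands in for any task):

.. code-block:: python

    app.conf.task_serializer = 'json'           # global default
    add.apply_async((2, 2), serializer='json')  # override for a single call
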
@@ -165,18 +168,16 @@ instance.
 Is Celery for Django only?
 --------------------------
 
-**Answer:** No.
-
-You can use Celery with any framework, web or otherwise.
+**Answer:** No, you can use Celery with any framework, web or otherwise.
 
 .. _faq-is-celery-for-rabbitmq-only:
 
 Do I have to use AMQP/RabbitMQ?
 -------------------------------
 
-**Answer**: No.
+**Answer**: No, although using RabbitMQ is recommended, you can also
+use Redis, SQS, or Qpid.
 
-Although using RabbitMQ is recommended you can also use Redis, SQS or Qpid.
 See :ref:`brokers` for more information.
 
 Redis as a broker won't perform as well as
@@ -264,7 +265,7 @@ most systems), it usually contains a message describing the reason.
 Does it work on FreeBSD?
 ------------------------
 
-**Answer:** Depends
+**Answer:** Depends;
 
 When using the RabbitMQ (AMQP) and Redis transports it should work
 out of the box.
@@ -314,15 +315,25 @@ re-send that message to another consumer until the consumer is shut down
 properly.
 
 If you hit this problem you have to kill all workers manually and restart
-them::
+them:
+
+.. code-block:: console
+
+    $ pkill 'celery worker'
+
+    $ # - If you don't have pkill use:
+    $ # ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill
 
-    ps auxww | grep celeryd | awk '{print $2}' | xargs kill
+You may have to wait a while until all workers have finished executing
+tasks. If it's still hanging after a long time you can kill them by force
+with:
 
-You may have to wait a while until all workers have finished the work they're
-doing. If it's still hanging after a long time you can kill them by force
-with::
+.. code-block:: console
+
+    $ pkill -9 'celery worker'
 
-    ps auxww | grep celeryd | awk '{print $2}' | xargs kill -9
+    $ # - If you don't have pkill use:
+    $ # ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
 
 .. _faq-task-does-not-run:
 
 
@@ -334,6 +345,8 @@ Why won't my Task run?
 You can find out if Celery is able to run the task by executing the
 task manually:
 
+.. code-block:: pycon
+
     >>> from myapp.tasks import MyPeriodicTask
     >>> MyPeriodicTask.delay()
 
@@ -406,7 +419,9 @@ Results
 How do I get the result of a task if I have the ID that points there?
 ----------------------------------------------------------------------
 
-**Answer**: Use `task.AsyncResult`::
+**Answer**: Use `task.AsyncResult`:
+
+.. code-block:: pycon
 
     >>> result = my_task.AsyncResult(task_id)
     >>> result.get()
@@ -418,6 +433,8 @@ If you need to specify a custom result backend, or you want to use
 the current application's default backend you can use
 :class:`@AsyncResult`:
 
+.. code-block:: pycon
+
     >>> result = app.AsyncResult(task_id)
     >>> result.get()
 
@@ -429,9 +446,9 @@ Security
 Isn't using `pickle` a security concern?
 ----------------------------------------
 
-**Answer**: Yes, indeed it's.
+**Answer**: Indeed, since Celery 4.0 the default serializer is JSON,
+to make sure people choose serializers consciously, aware of this concern.
 
-You're right to have a security concern, as this can indeed be a real issue.
 It's essential that you protect against unauthorized
 access to your broker, databases and other services transmitting pickled
 data.
@@ -531,8 +548,8 @@ If you don't use the results for a task, make sure you set the
 Can I use Celery with ActiveMQ/STOMP?
 -------------------------------------
 
-**Answer**: No. It used to be supported by Carrot,
-but isn't currently supported in Kombu.
+**Answer**: No. It used to be supported by :pypi:`Carrot` (our old messaging library),
+but isn't currently supported in :pypi:`Kombu` (our new messaging library).
 
 .. _faq-non-amqp-missing-features:
 
@@ -601,19 +618,22 @@ queue for exchange, so that rejected messages is moved there.
 Can I call a task by name?
 -----------------------------
 
-**Answer**: Yes. Use :meth:`@send_task`.
-You can also call a task by name from any language
-with an AMQP client.
+**Answer**: Yes, use :meth:`@send_task`.
+
+You can also call a task by name, from any language,
+using an AMQP client:
+
+.. code-block:: pycon
 
     >>> app.send_task('tasks.add', args=[2, 2], kwargs={})
     <AsyncResult: 373550e8-b9a0-4666-bc61-ace01fa4f91d>
 
 .. _faq-get-current-task-id:
 
-How can I get the task id of the current task?
+Can I get the task id of the current task?
 ----------------------------------------------
 
-**Answer**: The current id and more is available in the task request::
+**Answer**: Yes, the current id, and more, is available in the task request::
 
     @app.task(bind=True)
     def mytask(self):
@@ -621,12 +641,36 @@ How can I get the task id of the current task?
 
 
 For more information see :ref:`task-request-info`.
 For more information see :ref:`task-request-info`.
 
 
+If you don't have a reference to the task instance you can use
+:attr:`app.current_task <@current_task>`:
+
+.. code-block:: python
+
+    >>> app.current_task.request.id
+
+But note that this will be any task, be it one executed by the worker, or a
+task called directly by that task, or a task called eagerly.
+
+To get the current task being worked on specifically, use
+:attr:`app.current_worker_task <@current_worker_task>`:
+
+.. code-block:: python
+
+    >>> app.current_worker_task.request.id
+
+.. note::
+
+    Both :attr:`~@current_task` and :attr:`~@current_worker_task` can be
+    :const:`None`.
+
 .. _faq-custom-task-ids:
 
 Can I specify a custom task_id?
 -------------------------------
 
-**Answer**: Yes. Use the `task_id` argument to :meth:`Task.apply_async`::
+**Answer**: Yes, use the `task_id` argument to :meth:`Task.apply_async`:
+
+.. code-block:: pycon
 
 
     >>> task.apply_async(args, kwargs, task_id='…')
 
 
@@ -644,16 +688,17 @@ Can I use natural task ids?
 **Answer**: Yes, but make sure it's unique, as the behavior
 for two tasks existing with the same id is undefined.
 
-The world will probably not explode, but at the worst
-they can overwrite each others results.
+The world will probably not explode, but they can
+definitely overwrite each other's results.
 
 .. _faq-task-callbacks:
 
-How can I run a task once another task has finished?
-----------------------------------------------------
+Can I run a task once another task has finished?
+------------------------------------------------
+
+**Answer**: Yes, you can safely launch a task inside a task.
 
 
-**Answer**: You can safely launch a task inside a task.
-Also, a common pattern is to add callbacks to tasks:
+A common pattern is to add callbacks to tasks:
 
 .. code-block:: python
 
@@ -669,7 +714,9 @@ Also, a common pattern is to add callbacks to tasks:
     def log_result(result):
         logger.info("log_result got: %r", result)
 
-Invocation::
+Invocation:
+
+.. code-block:: pycon
 
     >>> (add.s(2, 2) | log_result.s()).delay()
 
@@ -679,24 +726,32 @@ See :doc:`userguide/canvas` for more information.
 
 
 Can I cancel the execution of a task?
 -------------------------------------
-**Answer**: Yes. Use `result.revoke`::
+**Answer**: Yes, use :meth:`result.revoke() <celery.result.AsyncResult.revoke>`:
+
+.. code-block:: pycon
 
     >>> result = add.apply_async(args=[2, 2], countdown=120)
     >>> result.revoke()
 
-or if you only have the task id::
+or if you only have the task id:
+
+.. code-block:: pycon
 
     >>> from proj.celery import app
     >>> app.control.revoke(task_id)
 
+
+The latter also supports passing a list of task ids as an argument.
+
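A quick sketch of that list form (the ids are placeholders):

.. code-block:: pycon

    >>> app.control.revoke([
    ...     '7993b0aa-1f77-4c2c-a5b3-0d3b6b6b6b6b',
    ...     'f565793e-b041-4b2b-9ca4-dca22762a55d',
    ... ])
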
 .. _faq-node-not-receiving-broadcast-commands:
 
 Why aren't my remote control commands received by all workers?
 --------------------------------------------------------------
 
 **Answer**: To receive broadcast remote control commands, every worker node
-uses its host name to create a unique queue name to listen to,
-so if you have more than one worker with the same host name, the
+creates a unique queue name, based on the nodename of the worker.
+
+If you have more than one worker with the same host name, the
 control commands will be received in round-robin between them.
 
 
 To work around this you can explicitly set the nodename for every worker
@@ -708,15 +763,16 @@ using the :option:`-n <celery worker -n>` argument to
    $ celery -A proj worker -n worker1@%h
    $ celery -A proj worker -n worker2@%h
 
-where ``%h`` is automatically expanded into the current hostname.
+where ``%h`` expands into the current hostname.
 
 .. _faq-task-routing:
 
 
 Can I send some tasks to only some servers?
 --------------------------------------------
 
-**Answer:** Yes. You can route tasks to an arbitrary server using AMQP,
-and a worker can bind to as many queues as it wants.
+**Answer:** Yes, you can route tasks to one or more workers,
+using different message routing topologies, and a worker instance
+can bind to multiple queues.
 
 See :doc:`userguide/routing` for more information.
 
 
@@ -725,8 +781,8 @@ See :doc:`userguide/routing` for more information.
 Can I disable prefetching of tasks?
 -----------------------------------
 
-**Answer**: The AMQP term "prefetch" is confusing, as it's only used
-to describe the task prefetching *limits*.
+**Answer**: Maybe! The AMQP term "prefetch" is confusing, as it's only used
+to describe the task prefetching *limit*.  There's no actual prefetching involved.
 
 Disabling the prefetch limits is possible, but that means the worker will
 consume as many tasks as it can, as fast as possible.
@@ -740,7 +796,7 @@ that only reserves one task at a time is found here:
 Can I change the interval of a periodic task at runtime?
 --------------------------------------------------------
 
-**Answer**: Yes. You can use the Django database scheduler, or you can
+**Answer**: Yes, you can use the Django database scheduler, or you can
 create a new schedule subclass and override
 :meth:`~celery.schedules.schedule.is_due`:
 
@@ -748,7 +804,6 @@ create a new schedule subclass and override
 
 
     from celery.schedules import schedule
 
-
     class my_schedule(schedule):
 
         def is_due(self, last_run_at):
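A fuller sketch of such an ``is_due`` override (the interval numbers are arbitrary; ``is_due`` returns a tuple of *is due now* and *seconds until the next check*):

.. code-block:: python

    from datetime import datetime, timezone

    from celery.schedules import schedule

    class my_schedule(schedule):
        """Run every 10 seconds, re-checking every 5 (both figures arbitrary)."""

        def is_due(self, last_run_at):
            # assumes UTC-aware timestamps (see the enable_utc setting)
            now = datetime.now(timezone.utc)
            if (now - last_run_at).total_seconds() >= 10:
                return True, 5   # due now; check again in 5 seconds
            return False, 5      # not due; check again in 5 seconds
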
@@ -759,15 +814,13 @@ create a new schedule subclass and override
 Does Celery support task priorities?
 ------------------------------------
 
-**Answer**: Yes.
-
-RabbitMQ supports priorities since version 3.5.0.
-Redis transport emulates support of priorities.
+**Answer**: Yes, RabbitMQ supports priorities since version 3.5.0,
+and the Redis transport emulates priority support.
 
 You can also prioritize work by routing high priority tasks
-to different workers. In the real world this may actually work better
+to different workers. In the real world this usually works better
 than per message priorities. You can use this in combination with rate
-limiting to achieve a responsive system.
+limiting and per-message priorities to achieve a responsive system.
 
 .. _faq-acks_late-vs-retry:
 
@@ -819,7 +872,7 @@ is required.
 Can I schedule tasks to execute at a specific time?
 ---------------------------------------------------
 
-.. module:: celery.task.base
+.. module:: celery.app.task
 
 **Answer**: Yes. You can use the `eta` argument of :meth:`Task.apply_async`.
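A short sketch (``add`` is a stand-in task; the one-hour offset is arbitrary):

.. code-block:: pycon

    >>> from datetime import datetime, timedelta, timezone
    >>> add.apply_async((2, 2), eta=datetime.now(timezone.utc) + timedelta(hours=1))
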
 
 
@@ -828,21 +881,24 @@ See also :ref:`guide-beat`.
 
 
 .. _faq-safe-worker-shutdown:
 
-How can I safely shut down the worker?
---------------------------------------
+Can I safely shut down the worker?
+----------------------------------
+
+**Answer**: Yes, use the :sig:`TERM` signal.
 
 
-**Answer**: Use the :sig:`TERM` signal, and the worker will finish all currently
-executing jobs and shut down as soon as possible. No tasks should be lost.
+This will tell the worker to finish all currently
+executing jobs and shut down as soon as possible. No tasks should be lost
+even with experimental transports as long as the shutdown completes.
 
 
 You should never stop :mod:`~celery.bin.worker` with the :sig:`KILL` signal
 (``kill -9``), unless you've tried :sig:`TERM` a few times and waited a few
 minutes to let it get a chance to shut down.
 
 
-Also make sure you kill the main worker process, not its child processes.
-You can direct a kill signal to a specific child process if you know the
-process is currently executing a task the worker shutdown is depending on,
-but this also means that a ``WorkerLostError`` state will be set for the
-task so the task won't run again.
+Also make sure you kill the main worker process only, not any of its child
+processes.  You can direct a kill signal to a specific child process if
+you know the process is currently executing a task the worker shutdown
+is depending on, but this also means that a ``WorkerLostError`` state will
+be set for the task so the task won't run again.
 
 
 Identifying the type of process is easier if you have installed the
 :pypi:`setproctitle` module:
@@ -860,9 +916,9 @@ With this library installed you'll be able to see the type of process in
 
 
 .. _faq-daemonizing:
 
 
-How do I run the worker in the background on [platform]?
---------------------------------------------------------
-**Answer**: Please see :ref:`daemonizing`.
+Can I run the worker in the background on [platform]?
+-----------------------------------------------------
+**Answer**: Yes, please see :ref:`daemonizing`.
 
 
 .. _faq-django:
 
 
@@ -909,3 +965,5 @@ Does Celery support Windows?
 **Answer**: No.
 
 Since Celery 4.x, Windows is no longer supported due to lack of resources.
+
+But it may still work and we are happy to accept patches.

+ 1 - 1
docs/history/changelog-3.0.rst

@@ -671,7 +671,7 @@ If you're looking for versions prior to 3.0.x you should go to :ref:`history`.
 - Now depends on Kombu 2.5
 
     - :pypi:`amqp` has replaced :pypi:`amqplib` as the default transport,
-      gaining support for AMQP 0.9, and the RabbitMQ extensions
+      gaining support for AMQP 0.9, and the RabbitMQ extensions,
       including Consumer Cancel Notifications and heartbeats.
 
     - support for multiple connection URLs for failover.

+ 1 - 1
docs/history/whatsnew-3.0.rst

@@ -904,7 +904,7 @@ In Other News
 
 
 - New signal: :signal:`task_revoked`
 
-- :mod:`celery.contrib.migrate`: Many improvements including
+- :mod:`celery.contrib.migrate`: Many improvements, including
   filtering, queue migration, and support for acking messages on the broker
   migrating from.
 
 

+ 3 - 3
docs/userguide/calling.rst

@@ -31,8 +31,8 @@ The API defines a standard set of execution options, as well as three methods:
     - *calling* (``__call__``)
 
         Applying an object supporting the calling API (e.g. ``add(2, 2)``)
-        means that the task will be executed in the current process, and
-        not by a worker (a message won't be sent).
+        means that the task will not be executed by a worker, but in the current
+        process instead (a message won't be sent).
 
 .. _calling-cheat:
 
 
@@ -380,7 +380,7 @@ Each option has its advantages and disadvantages.
 
 
 json -- JSON is supported in many programming languages, is now
     a standard part of Python (since 2.6), and is fairly fast to decode
-    using the modern Python libraries such as :pypi:`simplejson`.
+    using modern Python libraries, such as :pypi:`simplejson`.
 
     The primary disadvantage to JSON is that it limits you to the following
     data types: strings, Unicode, floats, Boolean, dictionaries, and lists.

+ 2 - 2
docs/userguide/configuration.rst

@@ -175,7 +175,7 @@ A white-list of content-types/serializers to allow.
 If a message is received that's not in this list then
 the message will be discarded with an error.
 
-By default any content type is enabled (including pickle and yaml)
+By default any content type is enabled, including pickle and yaml,
 so make sure untrusted parties don't have access to your broker.
 See :ref:`guide-security` for more.
 
 
@@ -1579,7 +1579,7 @@ is optional, and defaults to the specific transports default values.
 
 
 The transport part is the broker implementation to use, and the
 default is ``amqp`` (uses ``librabbitmq`` if installed or falls back to
-``pyamqp``). There are also many other choices including:
+``pyamqp``). There are also many other choices, including:
 ``redis``, ``beanstalk``, ``sqlalchemy``, ``django``, ``mongodb``,
 and ``couchdb``.
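A complete URL then looks like this (credentials, hostnames, and ports are placeholders):

.. code-block:: python

    broker_url = 'amqp://myuser:mypassword@localhost:5672/myvhost'
    # or, with the Redis transport:
    broker_url = 'redis://localhost:6379/0'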
 
 

+ 1 - 1
docs/userguide/daemonizing.rst

@@ -474,7 +474,7 @@ This is an example configuration for those using :pypi:`django-celery`:
    CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
    CELERYD_PID_FILE="/var/run/celery/%n.pid"
 
-To add an environment variable such as :envvar:`DJANGO_SETTINGS_MODULE`
+To add an environment variable, such as :envvar:`DJANGO_SETTINGS_MODULE`,
 use the ``Environment`` directive in :file:`celery.service`.
 
 Running the worker with superuser privileges (root)

+ 15 - 15
docs/userguide/extending.rst

@@ -159,7 +159,7 @@ Attributes
     .. code-block:: python
 
         class WorkerStep(bootsteps.StartStopStep):
-            requires = ('celery.worker.components:Hub',)
+            requires = {'celery.worker.components:Hub'}
 
 .. _extending-worker-pool:
 
 
@@ -173,7 +173,7 @@ Attributes
     .. code-block:: python
 
         class WorkerStep(bootsteps.StartStopStep):
-            requires = ('celery.worker.components:Pool',)
+            requires = {'celery.worker.components:Pool'}
 
 .. _extending-worker-timer:
 
 
@@ -186,7 +186,7 @@ Attributes
     .. code-block:: python
 
         class WorkerStep(bootsteps.StartStopStep):
-            requires = ('celery.worker.components:Timer',)
+            requires = {'celery.worker.components:Timer'}
 
 .. _extending-worker-statedb:
 
 
@@ -202,7 +202,7 @@ Attributes
     .. code-block:: python
 
         class WorkerStep(bootsteps.StartStopStep):
-            requires = ('celery.worker.components:Statedb',)
+            requires = {'celery.worker.components:Statedb'}
 
 Example worker bootstep
 -----------------------
@@ -214,7 +214,7 @@ An example Worker bootstep could be:
     from celery import bootsteps
 
     class ExampleWorkerStep(bootsteps.StartStopStep):
-        requires = ('Pool',)
+        requires = {'celery.worker.components:Pool'}
 
         def __init__(self, worker, **kwargs):
             print('Called when the WorkController instance is constructed')
@@ -246,7 +246,7 @@ Another example could use the timer to wake up at regular intervals:
 
 
 
 
     class DeadlockDetection(bootsteps.StartStopStep):
-        requires = ('Timer',)
+        requires = {'celery.worker.components:Timer'}
 
         def __init__(self, worker, deadlock_timeout=3600):
             self.timeout = deadlock_timeout
@@ -266,7 +266,7 @@ Another example could use the timer to wake up at regular intervals:
 
 
         def detect(self, worker):
             # update active requests
-            for req in self.worker.active_requests:
+            for req in worker.active_requests:
                 if req.time_start and time() - req.time_start > self.timeout:
                     raise SystemExit()
 
@@ -329,7 +329,7 @@ Attributes
     .. code-block:: python
 
         class WorkerStep(bootsteps.StartStopStep):
-            requires = ('celery.worker:Hub',)
+            requires = {'celery.worker.components:Hub'}
 
 .. _extending-consumer-connection:
 
 
@@ -343,7 +343,7 @@ Attributes
     .. code-block:: python
 
         class Step(bootsteps.StartStopStep):
-            requires = ('celery.worker.consumer:Connection',)
+            requires = {'celery.worker.consumer.connection:Connection'}
 
 .. _extending-consumer-event_dispatcher:
 
 
@@ -356,14 +356,14 @@ Attributes
     .. code-block:: python
 
         class Step(bootsteps.StartStopStep):
-            requires = ('celery.worker.consumer:Events',)
+            requires = {'celery.worker.consumer.events:Events'}
 
 .. _extending-consumer-gossip:
 
 
 .. attribute:: gossip
 
     Worker to worker broadcast communication
-    (:class:`~celery.worker.consumer.Gossip`).
+    (:class:`~celery.worker.consumer.gossip.Gossip`).
 
     A consumer bootstep must require the `Gossip` bootstep to use this.
 
 
@@ -372,7 +372,7 @@ Attributes
         class RatelimitStep(bootsteps.StartStopStep):
             """Rate limit tasks based on the number of workers in the
             cluster."""
-            requires = ('celery.worker.consumer:Gossip',)
+            requires = {'celery.worker.consumer.gossip:Gossip'}
 
             def start(self, c):
                 self.c = c
@@ -444,7 +444,7 @@ Attributes
     .. code-block:: python
 
         class Step(bootsteps.StartStopStep):
-            requires = ('celery.worker.consumer:Heart',)
+            requires = {'celery.worker.consumer.heart:Heart'}
 
 .. _extending-consumer-task_consumer:
 
 
@@ -457,7 +457,7 @@ Attributes
     .. code-block:: python
 
         class Step(bootsteps.StartStopStep):
-            requires = ('celery.worker.consumer:Tasks',)
+            requires = {'celery.worker.consumer.tasks:Tasks'}
 
 .. _extending-consumer-strategies:
 
 
@@ -481,7 +481,7 @@ Attributes
     .. code-block:: python
 
         class Step(bootsteps.StartStopStep):
-            requires = ('celery.worker.consumer:Tasks',)
+            requires = {'celery.worker.consumer.tasks:Tasks'}
 
 .. _extending-consumer-task_buckets:
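All of these ``requires`` sets reference bootsteps by module path; a custom step itself is registered on the app. A minimal sketch (the step body is illustrative):

.. code-block:: python

    from celery import Celery, bootsteps

    class ExampleWorkerStep(bootsteps.StartStopStep):
        requires = {'celery.worker.components:Pool'}

        def start(self, worker):
            print('worker is starting')

    app = Celery('proj')
    app.steps['worker'].add(ExampleWorkerStep)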
 
 

+ 1 - 1
docs/userguide/periodic-tasks.rst

@@ -129,7 +129,7 @@ Example: Run the `tasks.add` task every 30 seconds.
     a separate module for configuration.
 
     If you want to use a single item tuple for `args`, don't forget
-    that the constructor is a comma and not a pair of parentheses.
+    that the constructor is a comma, and not a pair of parentheses.
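
    For example (plain Python, shown for reference):

    .. code-block:: python

        args=(30,)   # a single-item tuple: the comma makes it a tuple
        args=(30)    # just the number 30 in parentheses, not a tuple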
 
 
 Using a :class:`~datetime.timedelta` for the schedule means the task will
 be sent in 30 second intervals (the first task will be sent 30 seconds

+ 1 - 1
docs/userguide/routing.rst

@@ -6,7 +6,7 @@
 
 
 .. note::
 
-    Alternate routing concepts like topic and fanout may not be
+    Alternate routing concepts like topic and fanout are not
     available for all transports, please consult the
     :ref:`transport comparison table <kombu:transport-comparison>`.
 
 

+ 9 - 5
docs/userguide/security.rst

@@ -64,7 +64,7 @@ Worker
 ------
 
 The default permissions of tasks running inside a worker are the same ones as
-the privileges of the worker itself. This applies to resources such as
+the privileges of the worker itself. This applies to resources, such as
 memory, file-systems, and devices.
 
 An exception to this rule is when using the multiprocessing based task pool,
@@ -90,14 +90,18 @@ outbound traffic.
 .. _`sandboxing`:
     https://en.wikipedia.org/wiki/Sandbox_(computer_security)
 
+.. _security-serializers:
+
 Serializers
 ===========
 
-The default `pickle` serializer is convenient because it supports
-arbitrary Python objects, whereas other serializers only
-work with a restricted set of types.
+The default serializer is JSON since version 4.0, but since it only
+supports a restricted set of types you may want to consider
+using pickle for serialization instead.
 
-But for the same reasons the `pickle` serializer is inherently insecure [*]_,
+The `pickle` serializer is convenient as it can serialize
+almost any Python object, even functions with some work,
+but for the same reasons `pickle` is inherently insecure [*]_,
 and should be avoided whenever clients are untrusted or
 unauthenticated.
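You can also restrict the formats a worker will accept with the :setting:`accept_content` setting; a minimal sketch:

.. code-block:: python

    accept_content = ['json']  # reject any message not serialized as JSON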
 
 

+ 8 - 26
docs/userguide/tasks.rst

@@ -400,7 +400,7 @@ The request defines the following attributes:
           An integer starting at `0`.
 
 :is_eager: Set to :const:`True` if the task is executed locally in
-           the client, and not by a worker.
+           the client, not by a worker.
 
 :eta: The original ETA of the task (if any).
       This is in UTC time (depending on the :setting:`enable_utc`
@@ -659,11 +659,6 @@ Automatic retry for known exceptions
 Sometimes you just want to retry a task whenever a particular exception
 is raised.
 
-As this is such a common pattern we've built-in support for it
-with the
-This may not be acceptable all the time, since you may have a lot of such
-tasks.
-
 Fortunately, you can tell Celery to automatically retry a task using
 the `autoretry_for` argument in the `~@Celery.task` decorator:
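A minimal sketch of that argument in use (``OSError`` and the URL fetch are illustrative; the broker URL is a placeholder):

.. code-block:: python

    from urllib.request import urlopen

    from celery import Celery

    app = Celery('proj', broker='amqp://')  # placeholder broker URL

    @app.task(autoretry_for=(OSError,), retry_kwargs={'max_retries': 3})
    def fetch_status(url):
        # any OSError raised here triggers an automatic retry
        return urlopen(url).status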
 
 
@@ -811,12 +806,12 @@ General
 .. attribute:: Task.time_limit
 
     The hard time limit, in seconds, for this task.
-    If not set then the workers default will be used.
+    When not set, the worker's default is used.
 
 .. attribute:: Task.soft_time_limit
 
     The soft time limit for this task.
-    If not set then the workers default will be used.
+    When not set, the worker's default is used.
 
 .. attribute:: Task.ignore_result
 
 
@@ -961,7 +956,7 @@ web applications with a database already in place, but it also comes with
 limitations.
 
 * Polling the database for new states is expensive, and so you should
-  increase the polling intervals of operations such as `result.get()`.
+  increase the polling intervals of operations, such as `result.get()`.
 
 * Some databases use a default transaction isolation level that
   isn't suitable for polling tables for changes.
@@ -1451,21 +1446,8 @@ wastes time and resources.
 Results can even be disabled globally using the :setting:`task_ignore_result`
 setting.
 
-.. _task-disable-rate-limits:
-
-Disable rate limits if they're not used
----------------------------------------
-
-Disabling rate limits altogether is recommended if you don't have
-any tasks using them. This is because the rate limit subsystem introduces
-quite a lot of complexity.
-
-Set the :setting:`worker_disable_rate_limits` setting to globally disable
-rate limits:
-
-.. code-block:: python
-
-    worker_disable_rate_limits = True
+More optimization tips
+----------------------
 
 You can find additional optimization tips in the
 :ref:`Optimizing Guide <guide-optimizing>`.
@@ -1548,8 +1530,8 @@ With smaller tasks you can process more tasks in parallel and the tasks
 won't run long enough to block the worker from processing other waiting tasks.
 
 However, executing a task does have overhead. A message needs to be sent, data
-may not be local, etc. So if the tasks are too fine-grained the additional
-overhead may not be worth it in the end.
+may not be local, etc. So if the tasks are too fine-grained the
+overhead added probably removes any benefit.
 
 .. seealso::
 
 

+ 43 - 31
docs/userguide/workers.rst

@@ -16,8 +16,8 @@ Starting the worker
 .. sidebar:: Daemonizing
 
     You probably want to use a daemonization tool to start
-    in the background. See :ref:`daemonizing` for help
-    detaching the worker using popular daemonization tools.
+    the worker in the background. See :ref:`daemonizing` for help
+    starting the worker as a daemon using popular service managers.
 
 You can start the worker in the foreground by executing the command:
 
 
@@ -32,30 +32,35 @@ For a full list of available command-line options see
 
 
    $ celery worker --help
 
-You can also start multiple workers on the same machine. If you do so
-be sure to give a unique name to each individual worker by specifying a
+You can start multiple workers on the same machine, but
+be sure to name each individual worker by specifying a
 node name with the :option:`--hostname <celery worker --hostname>` argument:
 
 
 .. code-block:: console
 
-    $ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker1.%h
-    $ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker2.%h
-    $ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker3.%h
+    $ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker1@%h
+    $ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker2@%h
+    $ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker3@%h
 
 The ``hostname`` argument can expand the following variables:
 
-    - ``%h``:  Hostname including domain name.
+    - ``%h``:  Hostname, including domain name.
     - ``%n``:  Hostname only.
    - ``%d``:  Domain name only.
 
 
-E.g. if the current hostname is ``george.example.com`` then
-these will expand to:
+If the current hostname is *george.example.com*, these will expand to:
 
-    - ``worker1.%h`` -> ``worker1.george.example.com``
-    - ``worker1.%n`` -> ``worker1.george``
-    - ``worker1.%d`` -> ``worker1.example.com``
++----------+----------------+------------------------------+
+| Variable | Template       | Result                       |
++==========+================+==============================+
+| ``%h``   | ``worker1@%h`` | *worker1@george.example.com* |
++----------+----------------+------------------------------+
+| ``%n``   | ``worker1@%n`` | *worker1@george*             |
++----------+----------------+------------------------------+
+| ``%d``   | ``worker1@%d`` | *worker1@example.com*        |
++----------+----------------+------------------------------+
 
 
-.. admonition:: Note for :pypi:`supervisor` users.
+.. admonition:: Note for :pypi:`supervisor` users
 
    The ``%`` sign must be escaped by adding a second one: `%%h`.
 
 
@@ -67,20 +72,27 @@ Stopping the worker
 Shutdown should be accomplished using the :sig:`TERM` signal.
 
 When shutdown is initiated the worker will finish all currently executing
-tasks before it actually terminates, so if these tasks are important you should
-wait for it to finish before doing anything drastic (like sending the :sig:`KILL`
-signal).
-
-If the worker won't shutdown after considerate time, for example because
-of tasks stuck in an infinite-loop, you can use the :sig:`KILL` signal to
-force terminate the worker, but be aware that currently executing tasks will
-be lost (unless the tasks have the :attr:`~@Task.acks_late`
+tasks before it actually terminates. If these tasks are important, you should
+wait for it to finish before doing anything drastic, like sending the :sig:`KILL`
+signal.
+
+If the worker won't shut down after a considerable time, for example because
+it's stuck in an infinite loop, you can use the :sig:`KILL` signal to
+force terminate the worker, but be aware that currently executing tasks will
+be lost (that is, unless the tasks have the :attr:`~@Task.acks_late`
 option set).
 
 Also as processes can't override the :sig:`KILL` signal, the worker will
-not be able to reap its children, so make sure to do so manually. This
+not be able to reap its children; make sure to do so manually. This
 command usually does the trick:
 
+.. code-block:: console
+
+    $ pkill -9 -f 'celery worker'
+
+If you don't have the :command:`pkill` command on your system, you can use the slightly
+longer version:
+
 .. code-block:: console
 
     $ ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
@@ -99,11 +111,11 @@ is by using `celery multi`:
    $ celery multi start 1 -A proj -l info -c4 --pidfile=/var/run/celery/%n.pid
    $ celery multi restart 1 --pidfile=/var/run/celery/%n.pid
 
-For production deployments you should be using init-scripts or other process
-supervision systems (see :ref:`daemonizing`).
+For production deployments you should be using init-scripts or a process
+supervision system (see :ref:`daemonizing`).
 
-Other than stopping then starting the worker to restart, you can also
-restart the worker using the :sig:`HUP` signal, but note that the worker
+Other than stopping, then starting the worker to restart, you can also
+restart the worker using the :sig:`HUP` signal. Note that the worker
 will be responsible for restarting itself so this is prone to problems and
 isn't recommended in production:
 
 
@@ -152,7 +164,7 @@ Node name replacements
 ----------------------
 
 - ``%p``:  Full node name.
-- ``%h``:  Hostname including domain name.
+- ``%h``:  Hostname, including domain name.
 - ``%n``:  Hostname only.
 - ``%d``:  Domain name only.
 - ``%i``:  Prefork pool process index or 0 if MainProcess.
@@ -178,7 +190,7 @@ This can be used to specify one log file per child process.
 
 
 Note that the numbers will stay within the process limit even if processes
 exit or if ``maxtasksperchild``/time limits are used. I.e. the number
-is the *process index* not the process count or pid.
+is the *process index*, not the process count or pid.
 
 * ``%i`` - Pool process index or 0 if MainProcess.
 
 
@@ -560,8 +572,8 @@ Queues
 
 
 A worker instance can consume from any number of queues.
 By default it will consume from all queues defined in the
-:setting:`task_queues` setting (if not specified defaults to the
-queue named ``celery``).
+:setting:`task_queues` setting (which, if not specified, falls back to the
+default queue named ``celery``).
 
 You can specify what queues to consume from at start-up, by giving a comma
 separated list of queues to the :option:`-Q <celery worker -Q>` option:
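For example (the queue names are illustrative):

.. code-block:: console

    $ celery -A proj worker -Q feeds,celery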
 separated list of queues to the :option:`-Q <celery worker -Q>` option: