@@ -33,8 +33,7 @@ This is a high level overview of the architecture.

The broker pushes tasks to the worker servers.
A worker server is a networked machine running ``celeryd``. This can be one or
-more machines, depending on the workload. See `A look inside the worker`_ to
-see how the worker server works.
+more machines, depending on the workload.

The result of the task can be stored for later retrieval (called its
"tombstone").
@@ -42,18 +41,16 @@ The result of the task can be stored for later retrieval (called its
Features
========

- * Uses AMQP messaging (RabbitMQ, ZeroMQ, Qpid) to route tasks to the
+ * Uses messaging (AMQP: RabbitMQ, ZeroMQ, Qpid) to route tasks to the
worker servers. Experimental support for STOMP (ActiveMQ) is also
- available.
-
- * For simple setups it's also possible to use Redis or an SQL database
- as the message queue.
+ available. For simple setups it's also possible to use Redis or an
+ SQL database as the message queue.

* You can run as many worker servers as you want, and still
be *guaranteed that the task is only executed once.*

* Tasks are executed *concurrently* using the Python 2.6
- ```multiprocessing`` module (also available as a back-port
+ ``multiprocessing`` module (also available as a back-port
to older python versions)

* Supports *periodic tasks*, which makes it a (better) replacement
@@ -178,9 +175,6 @@ allow that user access to that virtual host::

$ rabbitmqctl add_vhost myvhost

-From RabbitMQ version 1.6.0 and onward you have to use the new ACL features
-to allow access::
-
$ rabbitmqctl set_permissions -p myvhost myuser "" ".*" ".*"

See the RabbitMQ `Admin Guide`_ for more information about `access control`_.
@@ -190,11 +184,6 @@ See the RabbitMQ `Admin Guide`_ for more information about `access control`_.
.. _`access control`: http://www.rabbitmq.com/admin-guide.html#access-control


-If you are still using version 1.5.0 or below, please use ``map_user_vhost``::
-
- $ rabbitmqctl map_user_vhost myuser myvhost
-
-
Configuring your Django project to use Celery
---------------------------------------------
@@ -209,11 +198,11 @@ You only need three simple steps to use celery with your Django project.
3. Configure celery to use the AMQP user and virtual host we created
before, by adding the following to your ``settings.py``::

- AMQP_SERVER = "localhost"
- AMQP_PORT = 5672
- AMQP_USER = "myuser"
- AMQP_PASSWORD = "mypassword"
- AMQP_VHOST = "myvhost"
+ BROKER_HOST = "localhost"
+ BROKER_PORT = 5672
+ BROKER_USER = "myuser"
+ BROKER_PASSWORD = "mypassword"
+ BROKER_VHOST = "myvhost"


That's it.
@@ -270,45 +259,6 @@ This is a task that adds two numbers:
def add(x, y):
    return x + y

-You can also use the workers logger to add some diagnostic output to
-the worker log:
-::
-
- from celery.decorators import task
- @task()
- def add(x, y, **kwargs):
-     logger = add.get_logger(**kwargs)
-     logger.info("Adding %s + %s" % (x, y))
-     return x + y
-
-As you can see the worker is sending some keyword arguments to this task,
-this is the default keyword arguments. A task can choose not to take these,
-or only list the ones it want (the worker will do the right thing).
-The current default keyword arguments are:
-
- * logfile
-
- The currently used log file, can be passed on to ``self.get_logger``
- to gain access to the workers log file via a ``logger.Logging``
- instance.
-
- * loglevel
-
- The current loglevel used.
-
- * task_id
-
- The unique id of the executing task.
-
- * task_name
-
- Name of the executing task.
-
- * task_retries
-
- How many times the current task has been retried.
- (an integer starting a ``0``).
-
Now if we want to execute this task, we can use the
``delay`` method of the task class.
This is a handy shortcut to the ``apply_async`` method which gives
@@ -326,9 +276,9 @@ picked it up.
that RabbitMQ is running, and that the user/password has access to the virtual
host you configured earlier.

-Right now we have to check the celery worker logfiles to know what happened with
-the task. This is because we didn't keep the ``AsyncResult`` object returned
-by ``delay``.
+Right now we have to check the celery worker logfiles to know what happened
+with the task. This is because we didn't keep the ``AsyncResult`` object
+returned by ``delay``.

The ``AsyncResult`` lets us find the state of the task, wait for the task to
finish and get its return value (or exception if the task failed).
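
For illustration, here is a minimal sketch of how that looks from a Python
shell, assuming the ``add`` task defined above and a worker that has picked
the task up (the return values shown are only indicative)::

    >>> result = add.delay(4, 4)
    >>> result.ready()       # True once the task has finished
    False
    >>> result.get()         # waits for the task and returns its value
    8
    >>> result.successful()  # True if the task ran without raising an exception
    True

Note that ``get()`` blocks until the task is done, so it is mostly useful when
experimenting from the shell.
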
|
|
@@ -378,6 +328,30 @@ Here's an example of a periodic task:
|
|
|
logger.info("Running periodic task!")
|
|
|
>>> tasks.register(MyPeriodicTask)
|
|
|
|
|
|
+
|
|
|
+If you want to use periodic tasks you need to start the ``celerybeat``
|
|
|
+service. You have to make sure only one instance of this server is running at
|
|
|
+any time, or else you will end up with multiple executions of the same task.
|
|
|
+
|
|
|
+To start the ``celerybeat`` service::
|
|
|
+
|
|
|
+ $ celerybeat --detach
|
|
|
+
|
|
|
+or if using Django::
|
|
|
+
|
|
|
+ $ python manage.py celerybeat
|
|
|
+
|
|
|
+
|
|
|
+You can also start ``celerybeat`` with ``celeryd`` by using the ``-B`` option,
|
|
|
+this is convenient if you only have one server::
|
|
|
+
|
|
|
+ $ celeryd --detach -B
|
|
|
+
|
|
|
+or if using Django::
|
|
|
+
|
|
|
+ $ python manage.py celeryd --detach -B
|
|
|
+
|
|
|
+
|
|
|
A look inside the components
|
|
|
============================
|
|
|
|