.. image:: http://cloud.github.com/downloads/ask/celery/celery_favicon_128.png

:Version: 1.0.0-pre3
:Keywords: task queue, job queue, asynchronous, rabbitmq, amqp, redis,
  django, python, webhooks, queue, distributed

--

Celery is a task queue/job queue based on distributed message passing.
It is focused on real-time operation, but has support for scheduling as well.

The execution units, called tasks, are executed concurrently on one or more
worker servers, asynchronously (in the background) or synchronously
(wait until ready).

Celery is already used in production to process millions of tasks a day.

It was first created for Django, but is now usable from Python as well.
It can also `operate with other languages via HTTP+JSON`_.

.. _`operate with other languages via HTTP+JSON`:
    http://ask.github.com/celery/userguide/remote-tasks.html

Overview
========

This is a high level overview of the architecture.

.. image:: http://cloud.github.com/downloads/ask/celery/Celery-Overview-v4.jpg

The broker pushes tasks to the worker servers.
A worker server is a networked machine running ``celeryd``. This can be one or
more machines, depending on the workload.

The result of the task can be stored for later retrieval (called its
"tombstone").

Example
=======

You probably want to see some code by now, so I'll give you an example task
adding two numbers:

.. code-block:: python

    from celery.decorators import task

    @task
    def add(x, y):
        return x + y

You can execute the task in the background, or wait for it to finish::

    >>> result = add.delay(4, 4)
    >>> result.wait()  # wait for and return the result
    8

Simple!
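
Since the result is stored as a tombstone, you don't have to hold on to the
``result`` object either; as a rough sketch (assuming a result store backend
is configured and that ``celery.result.AsyncResult`` takes the task's UUID),
you can look the result up again later::

    >>> from celery.result import AsyncResult

    >>> task_id = result.task_id      # the UUID assigned to this invocation
    >>> AsyncResult(task_id).wait()   # fetch the stored result ("tombstone")
    8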

Features
========

+-----------------+----------------------------------------------------+
| Messaging       | Supported brokers include `RabbitMQ`_, `Stomp`_,   |
|                 | `Redis`_, and the most common SQL databases.       |
+-----------------+----------------------------------------------------+
| Robust          | Using `RabbitMQ`_, celery survives most error      |
|                 | scenarios, and your tasks will never be lost.      |
+-----------------+----------------------------------------------------+
| Distributed     | Runs on one or more machines. Supports             |
|                 | `clustering`_ when used in combination with        |
|                 | `RabbitMQ`_. You can set up new workers without    |
|                 | central configuration (e.g. use your dad's laptop  |
|                 | while the queue is temporarily overloaded).        |
+-----------------+----------------------------------------------------+
| Concurrency     | Tasks are executed in parallel using the           |
|                 | ``multiprocessing`` module.                        |
+-----------------+----------------------------------------------------+
| Scheduling      | Supports recurring tasks like cron, or specifying  |
|                 | an exact date or a countdown after which the task  |
|                 | should be executed.                                |
+-----------------+----------------------------------------------------+
| Performance     | Able to execute tasks while the user waits.        |
+-----------------+----------------------------------------------------+
| Return Values   | Task return values can be saved to the selected    |
|                 | result store backend. You can wait for the result, |
|                 | retrieve it later, or ignore it.                   |
+-----------------+----------------------------------------------------+
| Result Stores   | Database, `MongoDB`_, `Redis`_, `Tokyo Tyrant`_,   |
|                 | `AMQP`_ (high performance).                        |
+-----------------+----------------------------------------------------+
| Webhooks        | Your tasks can also be HTTP callbacks, enabling    |
|                 | cross-language communication.                      |
+-----------------+----------------------------------------------------+
| Rate limiting   | Supports rate limiting by using the token bucket   |
|                 | algorithm, which accounts for bursts of traffic.   |
|                 | Rate limits can be set for each task type, or      |
|                 | globally for all.                                  |
+-----------------+----------------------------------------------------+
| Routing         | Using AMQP you can route tasks arbitrarily to      |
|                 | different workers.                                 |
+-----------------+----------------------------------------------------+
| Remote-control  | You can rate limit and delete (revoke) tasks       |
|                 | remotely.                                          |
+-----------------+----------------------------------------------------+
| Monitoring      | You can capture everything happening with the      |
|                 | workers in real-time by subscribing to events.     |
|                 | A real-time web monitor is in development.         |
+-----------------+----------------------------------------------------+
| Serialization   | Supports Pickle, JSON, YAML, or easily defined     |
|                 | custom schemes. One task invocation can have a     |
|                 | different scheme than another.                     |
+-----------------+----------------------------------------------------+
| Tracebacks      | Errors and tracebacks are stored and can be        |
|                 | investigated after the fact.                       |
+-----------------+----------------------------------------------------+
| UUID            | Every task has a UUID (Universally Unique          |
|                 | Identifier), which is the task id used to query    |
|                 | task status and return value.                      |
+-----------------+----------------------------------------------------+
| Retries         | Tasks can be retried if they fail, with a          |
|                 | configurable maximum number of retries, and delays |
|                 | between each retry.                                |
+-----------------+----------------------------------------------------+
| Task Sets       | A Task set is a task consisting of several         |
|                 | sub-tasks. You can find out how many, or if all    |
|                 | of the sub-tasks have been executed, and even      |
|                 | retrieve the results in order. Progress bars,      |
|                 | anyone?                                            |
+-----------------+----------------------------------------------------+
| Made for Web    | You can query status and results via URLs,         |
|                 | enabling the ability to poll task status using     |
|                 | Ajax.                                              |
+-----------------+----------------------------------------------------+
| Error e-mails   | Can be configured to send e-mails to the           |
|                 | administrators when tasks fail.                    |
+-----------------+----------------------------------------------------+
| Supervised      | Pool workers are supervised and automatically      |
|                 | replaced if they crash.                            |
+-----------------+----------------------------------------------------+

.. _`RabbitMQ`: http://www.rabbitmq.com/
.. _`clustering`: http://www.rabbitmq.com/clustering.html
.. _`AMQP`: http://www.amqp.org/
.. _`Stomp`: http://stomp.codehaus.org/
.. _`MongoDB`: http://www.mongodb.org/
.. _`Redis`: http://code.google.com/p/redis/
.. _`Tokyo Tyrant`: http://tokyocabinet.sourceforge.net/
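
As an illustration of the scheduling and serialization features, here is a
rough sketch using the ``add`` task from the example above. It assumes the
``countdown``, ``eta`` and ``serializer`` options to ``apply_async``, so the
exact keyword names may differ between versions::

    >>> # run the task 10 seconds from now, sending the arguments as JSON
    >>> result = add.apply_async(args=[4, 4], countdown=10, serializer="json")

    >>> # or give an exact date instead of a countdown
    >>> from datetime import datetime, timedelta
    >>> result = add.apply_async(args=[4, 4],
    ...                          eta=datetime.now() + timedelta(days=1))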

Documentation
=============

The `latest documentation`_ with user guides, tutorials and API reference
is hosted at Github.

.. _`latest documentation`: http://ask.github.com/celery/

Installation
=============

You can install ``celery`` either via the Python Package Index (PyPI)
or from source.

To install using ``pip``::

    $ pip install celery

To install using ``easy_install``::

    $ easy_install celery

Downloading and installing from source
--------------------------------------

Download the latest version of ``celery`` from
http://pypi.python.org/pypi/celery/

You can install it by doing the following::

    $ tar xvfz celery-0.0.0.tar.gz
    $ cd celery-0.0.0
    $ python setup.py build
    # python setup.py install # as root

Using the development version
------------------------------

You can clone the repository by doing the following::

    $ git clone git://github.com/ask/celery.git