.. _intro:

========================
 Introduction to Celery
========================

.. contents::
    :local:
    :depth: 1

What is a Task Queue?
=====================

Task queues are used as a mechanism to distribute work across threads or
machines.

A task queue's input is a unit of work called a task. Dedicated worker
processes then constantly monitor the queue for new work to perform.

Celery communicates via messages, usually using a broker
to mediate between clients and workers. To initiate a task, a client puts a
message on the queue, and the broker then delivers the message to a worker.

A Celery system can consist of multiple workers and brokers, giving way
to high availability and horizontal scaling.

Celery is written in Python, but the protocol can be implemented in any
language. So far there's RCelery_ for the Ruby programming language,
node-celery_ for Node.js, and a `PHP client`_, but language interoperability
can also be achieved by :ref:`using webhooks <guide-webhooks>`.

.. _RCelery: http://leapfrogdevelopment.github.com/rcelery/
.. _`PHP client`: https://github.com/gjedeer/celery-php
.. _node-celery: https://github.com/mher/node-celery

What do I need?
===============

.. sidebar:: Version Requirements
    :subtitle: Celery version 3.0 runs on

    - Python ❨2.5, 2.6, 2.7, 3.2, 3.3❩
    - PyPy ❨1.8, 1.9❩
    - Jython ❨2.5, 2.7❩.

    This is the last version to support Python 2.5,
    and from the next version Python 2.6 or newer is required.
    The last version to support Python 2.4 was Celery series 2.2.

*Celery* requires a message transport to send and receive messages.
The RabbitMQ and Redis broker transports are feature complete,
but there's also support for a myriad of other experimental solutions,
including using SQLite for local development.

*Celery* can run on a single machine, on multiple machines, or even
across data centers.
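
As a minimal sketch of what choosing a transport looks like in practice, the
broker is given as a URL when the app is created; the project name and URLs
below are illustrative only:

.. code-block:: python

    from celery import Celery

    # RabbitMQ (AMQP), a feature-complete transport:
    app = Celery('proj', broker='amqp://guest@localhost//')

    # Redis is equally feature complete:
    #   app = Celery('proj', broker='redis://localhost:6379/0')

    # SQLite via the experimental SQLAlchemy transport, handy for local
    # development:
    #   app = Celery('proj', broker='sqla+sqlite:///celery.db')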

Get Started
===========

If this is the first time you're trying to use Celery, or if you're
new to Celery 3.0 coming from previous versions, then you should read our
getting started tutorials:

- :ref:`first-steps`
- :ref:`next-steps`

Celery is…
==========

.. _`mailing-list`: http://groups.google.com/group/celery-users

.. topic:: \

    - **Simple**

        Celery is easy to use and maintain, and it *doesn't need configuration files*.

        It has an active, friendly community you can talk to for support,
        including a `mailing-list`_ and an :ref:`IRC channel <irc-channel>`.

        Here's one of the simplest applications you can make
        (a short sketch of calling it follows this list):

        .. code-block:: python

            from celery import Celery

            app = Celery('hello', broker='amqp://guest@localhost//')

            @app.task
            def hello():
                return 'hello world'

    - **Highly Available**

        Workers and clients will automatically retry in the event
        of connection loss or failure, and some brokers support
        HA in the form of *Master/Master* or *Master/Slave* replication.

    - **Fast**

        A single Celery process can process millions of tasks a minute,
        with sub-millisecond round-trip latency (using RabbitMQ,
        py-librabbitmq, and optimized settings).

    - **Flexible**

        Almost every part of *Celery* can be extended or used on its own:
        custom pool implementations, serializers, compression schemes, logging,
        schedulers, consumers, producers, autoscalers, broker transports, and much more.
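
Defining the task is only half of the story; a client (any process that can
import the app) sends it to the queue by calling it. A minimal sketch,
assuming the example above is saved as ``hello.py`` and a worker is running
against the same broker; note that retrieving the return value also requires
a result backend, which the tiny app above doesn't configure:

.. code-block:: python

    from hello import hello   # assumes the example above lives in hello.py

    # Send a task message to the broker; a running worker picks it up.
    result = hello.delay()

    # Reading the return value needs a result backend (not configured in the
    # minimal example), so this line is illustrative only.
    print(result.get(timeout=10))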

.. topic:: It supports

    .. hlist::
        :columns: 2

        - **Brokers**

            - :ref:`RabbitMQ <broker-rabbitmq>`, :ref:`Redis <broker-redis>`,
            - :ref:`MongoDB <broker-mongodb>` (exp), ZeroMQ (exp)
            - :ref:`CouchDB <broker-couchdb>` (exp), :ref:`SQLAlchemy <broker-sqlalchemy>` (exp)
            - :ref:`Django ORM <broker-django>` (exp), :ref:`Amazon SQS <broker-sqs>` (exp)
            - and more…

        - **Concurrency**

            - prefork (multiprocessing),
            - Eventlet_, gevent_
            - threads/single threaded

        - **Result Stores**

            - AMQP, Redis
            - memcached, MongoDB
            - SQLAlchemy, Django ORM
            - Apache Cassandra

        - **Serialization**

            - *pickle*, *json*, *yaml*, *msgpack*.
            - *zlib*, *bzip2* compression.
            - Cryptographic message signing.

Features
========

.. topic:: \

    .. hlist::
        :columns: 2

        - **Monitoring**

            A stream of monitoring events is emitted by workers and
            is used by built-in and external tools to tell you what
            your cluster is doing -- in real-time.

            :ref:`Read more… <guide-monitoring>`.

        - **Workflows**

            Simple and complex workflows can be composed using
            a set of powerful primitives we call the "canvas",
            including grouping, chaining, chunking, and more
            (a brief sketch follows this list).

            :ref:`Read more… <guide-canvas>`.

        - **Time & Rate Limits**

            You can control how many tasks can be executed per second/minute/hour,
            or how long a task can be allowed to run, and this can be set as
            a default, for a specific worker, or individually for each task type.

            :ref:`Read more… <worker-time-limits>`.

        - **Scheduling**

            You can specify the time to run a task in seconds or a
            :class:`~datetime.datetime`, or you can use
            periodic tasks for recurring events based on a
            simple interval, or crontab expressions
            supporting minute, hour, day of week, day of month, and
            month of year (also sketched after this list).

            :ref:`Read more… <guide-beat>`.

        - **Autoreloading**

            In development workers can be configured to automatically reload source
            code as it changes, including :manpage:`inotify(7)` support on Linux.

            :ref:`Read more… <worker-autoreloading>`.

        - **Autoscaling**

            Dynamically resizing the worker pool depending on load,
            or custom metrics specified by the user, used to limit
            memory usage in shared hosting/cloud environments or to
            enforce a given quality of service.

            :ref:`Read more… <worker-autoscaling>`.

        - **Resource Leak Protection**

            The :option:`--maxtasksperchild` option is used for user tasks
            leaking resources, like memory or file descriptors, that
            are simply out of your control.

            :ref:`Read more… <worker-maxtasksperchild>`.

        - **User Components**

            Each worker component can be customized, and additional components
            can be defined by the user. The worker is built up using "bootsteps" — a
            dependency graph enabling fine grained control of the worker's
            internals.
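
As a taste of the canvas primitives mentioned under **Workflows**, here is a
minimal sketch; the ``add`` and ``tsum`` tasks are made up for this example:

.. code-block:: python

    from celery import Celery, chain, group, chord

    app = Celery('tasks', broker='amqp://guest@localhost//')

    @app.task
    def add(x, y):
        return x + y

    @app.task
    def tsum(numbers):
        return sum(numbers)

    # Chain: run add(2, 2), then pass its result as the first argument to add(?, 4).
    workflow = chain(add.s(2, 2), add.s(4))

    # Group: run many add() tasks in parallel.
    parallel = group(add.s(i, i) for i in range(10))

    # Chord: a group whose collected results are passed to a callback task.
    summed = chord(group(add.s(i, i) for i in range(10)), tsum.s())

    # Sending any of these to the broker works like calling a single task:
    result = workflow.apply_async()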

.. _`Eventlet`: http://eventlet.net/
.. _`gevent`: http://gevent.org/
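
And a minimal sketch of the periodic scheduling mentioned under
**Scheduling**, using a crontab expression; the task and schedule entry are
made up for this example:

.. code-block:: python

    from celery import Celery
    from celery.schedules import crontab

    app = Celery('tasks', broker='amqp://guest@localhost//')

    @app.task
    def send_report():
        print('generating the weekly report')   # placeholder body

    # Run send_report every Monday at 7:30 a.m.; the celery beat scheduler
    # reads this setting and dispatches the task at the right time.
    app.conf.update(CELERYBEAT_SCHEDULE={
        'weekly-report': {
            'task': 'tasks.send_report',
            'schedule': crontab(hour=7, minute=30, day_of_week=1),
        },
    })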

Framework Integration
=====================

Celery is easy to integrate with web frameworks, some of which even have
integration packages:

    +--------------------+------------------------+
    | `Django`_          | `django-celery`_       |
    +--------------------+------------------------+
    | `Pyramid`_         | `pyramid_celery`_      |
    +--------------------+------------------------+
    | `Pylons`_          | `celery-pylons`_       |
    +--------------------+------------------------+
    | `Flask`_           | not needed             |
    +--------------------+------------------------+
    | `web2py`_          | `web2py-celery`_       |
    +--------------------+------------------------+
    | `Tornado`_         | `tornado-celery`_      |
    +--------------------+------------------------+

The integration packages are not strictly necessary, but they can make
development easier, and sometimes they add important hooks like closing
database connections at :manpage:`fork(2)`.

.. _`Django`: http://djangoproject.com/
.. _`Pylons`: http://pylonshq.com/
.. _`Flask`: http://flask.pocoo.org/
.. _`web2py`: http://web2py.com/
.. _`Bottle`: http://bottlepy.org/
.. _`Pyramid`: http://docs.pylonsproject.org/en/latest/docs/pyramid.html

.. _`pyramid_celery`: http://pypi.python.org/pypi/pyramid_celery/
.. _`django-celery`: http://pypi.python.org/pypi/django-celery
.. _`celery-pylons`: http://pypi.python.org/pypi/celery-pylons
.. _`web2py-celery`: http://code.google.com/p/web2py-celery/
.. _`Tornado`: http://www.tornadoweb.org/
.. _`tornado-celery`: http://github.com/mher/tornado-celery/

Quickjump
=========

.. topic:: I want to ⟶

    .. hlist::
        :columns: 2

        - :ref:`get the return value of a task <task-states>`
        - :ref:`use logging from my task <task-logging>`
        - :ref:`learn about best practices <task-best-practices>`
        - :ref:`create a custom task base class <task-custom-classes>`
        - :ref:`add a callback to a group of tasks <canvas-chord>`
        - :ref:`split a task into several chunks <canvas-chunks>`
        - :ref:`optimize the worker <guide-optimizing>`
        - :ref:`see a list of built-in task states <task-builtin-states>`
        - :ref:`create custom task states <custom-states>`
        - :ref:`set a custom task name <task-names>`
        - :ref:`track when a task starts <task-track-started>`
        - :ref:`retry a task when it fails <task-retry>`
        - :ref:`get the id of the current task <task-request-info>`
        - :ref:`know what queue a task was delivered to <task-request-info>`
        - :ref:`see a list of running workers <monitoring-control>`
        - :ref:`purge all messages <monitoring-control>`
        - :ref:`inspect what the workers are doing <monitoring-control>`
        - :ref:`see what tasks a worker has registered <monitoring-control>`
        - :ref:`migrate tasks to a new broker <monitoring-control>`
        - :ref:`see a list of event message types <event-reference>`
        - :ref:`contribute to Celery <contributing>`
        - :ref:`learn about available configuration settings <configuration>`
        - :ref:`receive email when a task fails <conf-error-mails>`
        - :ref:`get a list of people and companies using Celery <res-using-celery>`
        - :ref:`write my own remote control command <worker-custom-control-commands>`
        - :ref:`change worker queues at runtime <worker-queues>`

.. topic:: Jump to ⟶

    .. hlist::
        :columns: 4

        - :ref:`Brokers <brokers>`
        - :ref:`Applications <guide-app>`
        - :ref:`Tasks <guide-tasks>`
        - :ref:`Calling <guide-calling>`
        - :ref:`Workers <guide-workers>`
        - :ref:`Daemonizing <daemonizing>`
        - :ref:`Monitoring <guide-monitoring>`
        - :ref:`Optimizing <guide-optimizing>`
        - :ref:`Security <guide-security>`
        - :ref:`Routing <guide-routing>`
        - :ref:`Configuration <configuration>`
        - :ref:`Django <django>`
        - :ref:`Contributing <contributing>`
        - :ref:`Signals <signals>`
        - :ref:`FAQ <faq>`
        - :ref:`API Reference <apiref>`

.. include:: ../includes/installation.txt