============================
Configuration and defaults
============================

This document describes the configuration options available.

If you're using celery in a Django project these settings should be defined
in your project's ``settings.py`` file.

In a regular Python environment using the default loader you must create
the ``celeryconfig.py`` module and make sure it is available on the
Python path.

Example configuration file
==========================

This is an example configuration file to get you started;
it should contain all you need to run a basic celery set-up.

.. code-block:: python

    CELERY_BACKEND = "database"

    DATABASE_ENGINE = "sqlite3"
    DATABASE_NAME = "mydatabase.db"

    AMQP_SERVER = "localhost"
    AMQP_PORT = 5672
    AMQP_VHOST = "/"
    AMQP_USER = "guest"
    AMQP_PASSWORD = "guest"

    ## If you're doing mostly I/O you can have higher concurrency,
    ## if mostly spending time in the CPU, try to keep it close to the
    ## number of CPUs on your machine.
    # CELERYD_CONCURRENCY = 8

    CELERYD_LOG_FILE = "celeryd.log"
    CELERYD_PID_FILE = "celeryd.pid"
    CELERYD_DAEMON_LOG_LEVEL = "INFO"

Concurrency settings
====================

* CELERYD_CONCURRENCY

    The number of concurrent worker processes, executing tasks simultaneously.

    Defaults to the number of CPUs in the system.

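For example, a set-up that is mostly CPU-bound on a four-core machine might
pin the pool size explicitly; the value here is purely illustrative:

.. code-block:: python

    # Roughly one worker process per CPU for CPU-bound tasks;
    # mostly I/O-bound workloads can usually go higher.
    CELERYD_CONCURRENCY = 4
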
Task result backend settings
============================

* CELERY_BACKEND

    The backend used to store task results (tombstones).
    Can be one of the following:

    * database (default)

        Use a relational database supported by the Django ORM.

    * cache

        Use `memcached`_ to store the results.

    * mongodb

        Use `MongoDB`_ to store the results.

    * pyredis

        Use `Redis`_ to store the results.

    * tyrant

        Use `Tokyo Tyrant`_ to store the results.

    * amqp

        Send results back as AMQP messages
        (**WARNING** While very fast, you must make sure you only
        try to receive the result once).

.. _`memcached`: http://memcached.org
.. _`MongoDB`: http://mongodb.org
.. _`Redis`: http://code.google.com/p/redis/
.. _`Tokyo Tyrant`: http://1978th.net/tokyotyrant/

* CELERY_PERIODIC_STATUS_BACKEND

    The backend used to store the status of periodic tasks.
    Can be one of the following:

    * database (default)

        Use a relational database supported by the Django ORM.

    * mongodb

        Use MongoDB.

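Both settings take the backend name as a plain string; a minimal sketch,
assuming you want the Django ORM database for both (any of the names listed
above work the same way):

.. code-block:: python

    CELERY_BACKEND = "database"                  # task results (tombstones)
    CELERY_PERIODIC_STATUS_BACKEND = "database"  # periodic task status
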
Database backend settings
=========================

This applies to both the result store backend and the periodic status
backend.

Please see the Django ORM database settings documentation:
http://docs.djangoproject.com/en/dev/ref/settings/#database-engine

If you use this backend make sure to initialize the database tables
after configuration. When using celery with a Django project this
means executing::

    $ python manage.py syncdb

When using celery in a regular Python environment you have to execute::

    $ celeryinit

Example configuration
---------------------

.. code-block:: python

    CELERY_BACKEND = "database"
    DATABASE_ENGINE = "mysql"
    DATABASE_USER = "myusername"
    DATABASE_PASSWORD = "mypassword"
    DATABASE_NAME = "mydatabase"
    DATABASE_HOST = "localhost"

Cache backend settings
======================

Please see the documentation for the Django cache framework settings:
http://docs.djangoproject.com/en/dev/topics/cache/#memcached

To use a custom cache backend for Celery, while using another for Django,
you should use the ``CELERY_CACHE_BACKEND`` setting instead of the regular
Django ``CACHE_BACKEND`` setting.

Example configuration
---------------------

Using a single memcached server:

.. code-block:: python

    CACHE_BACKEND = 'memcached://127.0.0.1:11211/'

Using multiple memcached servers:

.. code-block:: python

    CELERY_BACKEND = "cache"
    CACHE_BACKEND = 'memcached://172.19.26.240:11211;172.19.26.242:11211/'

Tokyo Tyrant backend settings
=============================

**NOTE** The Tokyo Tyrant backend requires the :mod:`pytyrant` library:
http://pypi.python.org/pypi/pytyrant/

This backend requires the following configuration directives to be set:

* TT_HOST

    Hostname of the Tokyo Tyrant server.

* TT_PORT

    The port the Tokyo Tyrant server is listening to.

Example configuration
---------------------

.. code-block:: python

    CELERY_BACKEND = "tyrant"
    TT_HOST = "localhost"
    TT_PORT = 1978

Redis backend settings
======================

**NOTE** The Redis backend requires the :mod:`redis` library:
http://pypi.python.org/pypi/redis/0.5.5

To install the redis package use ``pip`` or ``easy_install``::

    $ pip install redis

This backend requires the following configuration directives to be set:

* REDIS_HOST

    Hostname of the Redis database server, e.g. ``"localhost"``.

* REDIS_PORT

    Port to the Redis database server, e.g. ``6379``.

Also, the following optional configuration directives are available:

* REDIS_DB

    Name of the database to use. Default is ``celery_results``.

* REDIS_TIMEOUT

    Timeout in seconds before we give up establishing a connection
    to the Redis server.

* REDIS_CONNECT_RETRY

    Retry connecting if a connection could not be established. Default is
    false.

Example configuration
---------------------

.. code-block:: python

    CELERY_BACKEND = "pyredis"
    REDIS_HOST = "localhost"
    REDIS_PORT = 6379
    REDIS_DB = "celery_results"
    REDIS_CONNECT_RETRY = True

MongoDB backend settings
========================

**NOTE** The MongoDB backend requires the :mod:`pymongo` library:
http://github.com/mongodb/mongo-python-driver/tree/master

* CELERY_MONGODB_BACKEND_SETTINGS

    This is a dict supporting the following keys:

    * host

        Hostname of the MongoDB server. Defaults to "localhost".

    * port

        The port the MongoDB server is listening to. Defaults to 27017.

    * user

        Username to authenticate to the MongoDB server as (optional).

    * password

        Password to authenticate to the MongoDB server (optional).

    * database

        The database name to connect to. Defaults to "celery".

    * taskmeta_collection

        The collection name to store task metadata.
        Defaults to "celery_taskmeta".

    * periodictaskmeta_collection

        The collection name to store periodic task metadata.
        Defaults to "celery_periodictaskmeta".

Example configuration
---------------------

.. code-block:: python

    CELERY_BACKEND = "mongodb"
    CELERY_MONGODB_BACKEND_SETTINGS = {
        "host": "192.168.1.100",
        "port": 30000,
        "database": "mydb",
        "taskmeta_collection": "my_taskmeta_collection",
    }

Broker settings
===============

* CELERY_AMQP_EXCHANGE

    Name of the AMQP exchange.

* CELERY_AMQP_EXCHANGE_TYPE

    The type of exchange. If the exchange type is ``direct``, all servers
    receive all tasks. However, if the exchange type is ``topic``, you can
    route e.g. some tasks to one server, and others to the rest.
    See `Exchange types and the effect of bindings`_.

    .. _`Exchange types and the effect of bindings`:
        http://bit.ly/wpamqpexchanges

* CELERY_AMQP_PUBLISHER_ROUTING_KEY

    The default AMQP routing key used when publishing tasks.

* CELERY_AMQP_CONSUMER_ROUTING_KEY

    The AMQP routing key used when consuming tasks.

* CELERY_AMQP_CONSUMER_QUEUE

    The name of the AMQP queue.

* CELERY_AMQP_CONSUMER_QUEUES

    Dictionary defining multiple AMQP queues.

* CELERY_AMQP_CONNECTION_TIMEOUT

    The timeout in seconds before we give up establishing a connection
    to the AMQP server. Default is 4 seconds.

* CELERY_AMQP_CONNECTION_RETRY

    Automatically try to re-establish the connection to the AMQP broker if
    it's lost.

    The time between retries is increased for each retry, and is
    not exhausted before ``CELERY_AMQP_CONNECTION_MAX_RETRIES`` is exceeded.

    This behaviour is on by default.

* CELERY_AMQP_CONNECTION_MAX_RETRIES

    Maximum number of retries before we give up re-establishing a connection
    to the AMQP broker.

    If this is set to ``0`` or ``None``, we will retry forever.

    Default is 100 retries.

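As a rough sketch of how these fit together, the snippet below routes
everything through a single direct exchange and queue. The name ``"celery"``
is only a placeholder, and the timeout/retry values simply repeat the
documented defaults:

.. code-block:: python

    CELERY_AMQP_EXCHANGE = "celery"               # placeholder exchange name
    CELERY_AMQP_EXCHANGE_TYPE = "direct"
    CELERY_AMQP_PUBLISHER_ROUTING_KEY = "celery"
    CELERY_AMQP_CONSUMER_ROUTING_KEY = "celery"
    CELERY_AMQP_CONSUMER_QUEUE = "celery"

    CELERY_AMQP_CONNECTION_TIMEOUT = 4            # seconds (the default)
    CELERY_AMQP_CONNECTION_RETRY = True           # on by default
    CELERY_AMQP_CONNECTION_MAX_RETRIES = 100      # the default
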
Task execution settings
=======================

* SEND_CELERY_TASK_ERROR_EMAILS

    If set to ``True``, errors in tasks will be sent to admins by e-mail.
    If unset, it will send the e-mails if ``settings.DEBUG`` is ``False``.

* CELERY_ALWAYS_EAGER

    If this is ``True``, all tasks will be executed locally by blocking
    until the task has finished. ``apply_async`` and ``delay_task`` will
    return a :class:`celery.result.EagerResult` which emulates the behaviour
    of an :class:`celery.result.AsyncResult`.

    Tasks will never be sent to the queue, but executed locally
    instead.

* CELERY_TASK_RESULT_EXPIRES

    Time (in seconds, or a :class:`datetime.timedelta` object) after which
    stored task tombstones are deleted.

    **NOTE**: For the moment this only works for the database and MongoDB
    backends.

* CELERY_TASK_SERIALIZER

    A string identifying the default serialization
    method to use. Can be ``pickle`` (default),
    ``json``, ``yaml``, or any custom serialization methods that have
    been registered with :mod:`carrot.serialization.registry`.

    Default is ``pickle``.

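For instance, a test or development settings module might run everything
eagerly, switch to JSON serialization and expire tombstones after a day;
the exact values are only illustrative:

.. code-block:: python

    from datetime import timedelta

    CELERY_ALWAYS_EAGER = True                      # execute tasks locally, never hit the queue
    CELERY_TASK_SERIALIZER = "json"                 # instead of the default "pickle"
    CELERY_TASK_RESULT_EXPIRES = timedelta(days=1)  # or a plain number of seconds, e.g. 86400
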
Logging settings
================

* CELERYD_LOG_FILE

    The default filename the worker daemon logs messages to; can be
    overridden using the ``--logfile`` option to ``celeryd``.

    The default is to log to ``stderr`` if running in the foreground;
    when running in the background, detached as a daemon, the default
    logfile is ``celeryd.log``.

* CELERYD_DAEMON_LOG_LEVEL

    Worker log level, can be any of ``DEBUG``, ``INFO``, ``WARNING``,
    ``ERROR``, ``CRITICAL``, or ``FATAL``. Can be overridden using the
    ``--loglevel`` option to ``celeryd``.

    See the :mod:`logging` module for more information.

* CELERYD_DAEMON_LOG_FORMAT

    The format to use for log messages.

    Default is ``[%(asctime)s: %(levelname)s/%(processName)s] %(message)s``

    See the Python :mod:`logging` module for more information about log
    formats.

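Putting these together, a worker logging verbosely to a dedicated file could
be configured as below; the path is only an example:

.. code-block:: python

    CELERYD_LOG_FILE = "/var/log/celery/celeryd.log"  # example path; default is celeryd.log
    CELERYD_DAEMON_LOG_LEVEL = "DEBUG"                # log everything, including debug messages
    CELERYD_DAEMON_LOG_FORMAT = "[%(asctime)s: %(levelname)s/%(processName)s] %(message)s"
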
Process settings
================

* CELERYD_PID_FILE

    Full path to the daemon pid file. Default is ``celeryd.pid``.

    Can be overridden using the ``--pidfile`` option to ``celeryd``.

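For example, to keep the pid file in a system-wide location rather than the
current directory (the path is only illustrative):

.. code-block:: python

    CELERYD_PID_FILE = "/var/run/celeryd.pid"  # instead of the default "celeryd.pid"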