============================
 Configuration and defaults
============================

This document describes the configuration options available.

If you're using the default loader, you must create the ``celeryconfig.py``
module and make sure it is available on the Python path.

Example configuration file
==========================

This is an example configuration file to get you started.
It should contain all you need to run a basic celery set-up.

.. code-block:: python

    CELERY_RESULT_BACKEND = "database"
    CELERY_RESULT_DBURI = "sqlite:///mydatabase.db"

    BROKER_HOST = "localhost"
    BROKER_PORT = 5672
    BROKER_VHOST = "/"
    BROKER_USER = "guest"
    BROKER_PASSWORD = "guest"

    ## If you're doing mostly I/O you can have more processes,
    ## but if you're mostly doing CPU work, try to keep it close to the
    ## number of CPUs on your machine. If not set, the number of CPUs/cores
    ## available will be used.
    # CELERYD_CONCURRENCY = 8

    # CELERYD_LOG_FILE = "celeryd.log"
    # CELERYD_LOG_LEVEL = "INFO"

Concurrency settings
====================

* CELERYD_CONCURRENCY

    The number of concurrent worker processes executing tasks simultaneously.

    Defaults to the number of CPUs/cores available.

* CELERYD_PREFETCH_MULTIPLIER

    How many messages to prefetch at a time, multiplied by the number of
    concurrent processes. The default is 4 (four messages for each
    process), which works well for most set-ups. However, if you have
    very long running tasks waiting in the queue and you have to start the
    workers, note that the first worker to start will receive four times the
    number of messages initially. Thus the tasks may not be evenly distributed
    among the workers. See the sketch after this list.

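As an illustration, the sketch below combines the two settings above. The
values are purely illustrative and should be tuned for your own workload:

.. code-block:: python

    # Four worker processes; a reasonable starting point for a 4-core,
    # CPU-bound workload.
    CELERYD_CONCURRENCY = 4

    # Each process reserves only one message at a time, so long running
    # tasks are spread more evenly across the workers.
    CELERYD_PREFETCH_MULTIPLIER = 1
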
Task result backend settings
============================

* CELERY_RESULT_BACKEND

    The backend used to store task results (tombstones).
    Can be one of the following:

    * database (default)
        Use a relational database supported by `SQLAlchemy`_.

    * cache
        Use `memcached`_ to store the results.

    * mongodb
        Use `MongoDB`_ to store the results.

    * redis
        Use `Redis`_ to store the results.

    * tyrant
        Use `Tokyo Tyrant`_ to store the results.

    * amqp
        Send results back as AMQP messages
        (**WARNING**: While very fast, you must make sure you only
        receive the result once. See :doc:`userguide/executing`).

.. _`SQLAlchemy`: http://sqlalchemy.org
.. _`memcached`: http://memcached.org
.. _`MongoDB`: http://mongodb.org
.. _`Redis`: http://code.google.com/p/redis/
.. _`Tokyo Tyrant`: http://1978th.net/tokyotyrant/

Database backend settings
=========================

Please see `Supported Databases`_ for a table of supported databases.

To use this backend you need to configure it with an
`SQLAlchemy Connection String`_. Some examples include:

.. code-block:: python

    # sqlite (filename)
    CELERY_RESULT_DBURI = "sqlite:///celerydb.sqlite"

    # mysql
    CELERY_RESULT_DBURI = "mysql://scott:tiger@localhost/foo"

    # postgresql
    CELERY_RESULT_DBURI = "postgresql://scott:tiger@localhost/mydatabase"

    # oracle
    CELERY_RESULT_DBURI = "oracle://scott:tiger@127.0.0.1:1521/sidname"

See `SQLAlchemy Connection Strings`_ for more information about connection
strings.

To specify additional SQLAlchemy database engine options you can use
the ``CELERY_RESULT_ENGINE_OPTIONS`` setting::

    # echo enables verbose logging from SQLAlchemy.
    CELERY_RESULT_ENGINE_OPTIONS = {"echo": True}
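
Assuming the options dict is forwarded to SQLAlchemy's engine set-up, other
``create_engine()`` keyword arguments can be supplied the same way. A sketch
(the value shown is only an example, not a recommendation)::

    # Recycle pooled connections after one hour, a common MySQL-friendly value.
    CELERY_RESULT_ENGINE_OPTIONS = {"echo": False,
                                    "pool_recycle": 3600}
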
.. _`Supported Databases`:
    http://www.sqlalchemy.org/docs/dbengine.html#supported-databases

.. _`SQLAlchemy Connection String`:
    http://www.sqlalchemy.org/docs/dbengine.html#create-engine-url-arguments

.. _`SQLAlchemy Connection Strings`:
    http://www.sqlalchemy.org/docs/dbengine.html#create-engine-url-arguments

Please see the Django ORM database settings documentation:
http://docs.djangoproject.com/en/dev/ref/settings/#database-engine

If you use this backend, make sure to initialize the database tables
after configuration. Use the ``celeryinit`` command to do so::

    $ celeryinit

Example configuration
---------------------

.. code-block:: python

    CELERY_RESULT_BACKEND = "database"
    CELERY_RESULT_DBURI = "mysql://user:password@host/dbname"

AMQP backend settings
=====================

* CELERY_RESULT_EXCHANGE

    Name of the exchange to publish results in. Default is ``"celeryresults"``.

* CELERY_RESULT_EXCHANGE_TYPE

    The exchange type of the result exchange. Default is to use a ``direct``
    exchange.

* CELERY_RESULT_SERIALIZER

    Result message serialization format. Default is ``"pickle"``.

* CELERY_RESULTS_PERSISTENT

    If set to ``True``, result messages will be persistent. This means the
    messages will not be lost after a broker restart. The default is for the
    results to be transient.

Example configuration
---------------------

.. code-block:: python

    CELERY_RESULT_BACKEND = "amqp"
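
If you need to tune the backend, the other settings above can be combined with
it. A sketch (the values are illustrative only):

.. code-block:: python

    CELERY_RESULT_BACKEND = "amqp"

    # Keep results across broker restarts and send them as JSON.
    CELERY_RESULTS_PERSISTENT = True
    CELERY_RESULT_SERIALIZER = "json"
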
Cache backend settings
======================

Please see the documentation for the Django cache framework settings:
http://docs.djangoproject.com/en/dev/topics/cache/#memcached

To use a custom cache backend for Celery, while using another for Django,
you should use the ``CELERY_CACHE_BACKEND`` setting instead of the regular
Django ``CACHE_BACKEND`` setting.

Example configuration
---------------------

Using a single memcached server:

.. code-block:: python

    CELERY_RESULT_BACKEND = "cache"
    CACHE_BACKEND = 'memcached://127.0.0.1:11211/'

Using multiple memcached servers:

.. code-block:: python

    CELERY_RESULT_BACKEND = "cache"
    CACHE_BACKEND = 'memcached://172.19.26.240:11211;172.19.26.242:11211/'

Tokyo Tyrant backend settings
=============================

**NOTE**: The Tokyo Tyrant backend requires the :mod:`pytyrant` library:
http://pypi.python.org/pypi/pytyrant/

This backend requires the following configuration directives to be set:

* TT_HOST

    Hostname of the Tokyo Tyrant server.

* TT_PORT

    The port the Tokyo Tyrant server is listening to.

Example configuration
---------------------

.. code-block:: python

    CELERY_RESULT_BACKEND = "tyrant"
    TT_HOST = "localhost"
    TT_PORT = 1978

Redis backend settings
======================

**NOTE**: The Redis backend requires the :mod:`redis` library:
http://pypi.python.org/pypi/redis/0.5.5

To install the redis package use ``pip`` or ``easy_install``::

    $ pip install redis

This backend requires the following configuration directives to be set:

* REDIS_HOST

    Hostname of the Redis database server, e.g. ``"localhost"``.

* REDIS_PORT

    Port of the Redis database server, e.g. ``6379``.

Also, the following optional configuration directives are available:

* REDIS_DB

    Name of the database to use. Default is ``celery_results``.

* REDIS_PASSWORD

    Password used to connect to the database.

Example configuration
---------------------

.. code-block:: python

    CELERY_RESULT_BACKEND = "redis"
    REDIS_HOST = "localhost"
    REDIS_PORT = 6379
    REDIS_DB = "celery_results"
    REDIS_CONNECT_RETRY = True

MongoDB backend settings
========================

**NOTE**: The MongoDB backend requires the :mod:`pymongo` library:
http://github.com/mongodb/mongo-python-driver/tree/master

* CELERY_MONGODB_BACKEND_SETTINGS

    This is a dict supporting the following keys:

    * host
        Hostname of the MongoDB server. Defaults to ``"localhost"``.

    * port
        The port the MongoDB server is listening to. Defaults to ``27017``.

    * user
        User name to authenticate to the MongoDB server as (optional).

    * password
        Password to authenticate to the MongoDB server (optional).

    * database
        The database name to connect to. Defaults to ``"celery"``.

    * taskmeta_collection
        The collection name to store task meta data in.
        Defaults to ``"celery_taskmeta"``.

Example configuration
---------------------

.. code-block:: python

    CELERY_RESULT_BACKEND = "mongodb"
    CELERY_MONGODB_BACKEND_SETTINGS = {
        "host": "192.168.1.100",
        "port": 30000,
        "database": "mydb",
        "taskmeta_collection": "my_taskmeta_collection",
    }

Messaging settings
==================

Routing
-------

* CELERY_QUEUES

    The mapping of queues the worker consumes from. This is a dictionary
    of queue name/options. See :doc:`userguide/routing` for more information.

    The default is a queue/exchange/binding key of ``"celery"``, with
    exchange type ``direct``.

    You don't have to care about this unless you want custom routing
    facilities; a sketch of a custom mapping is shown after this list.

* CELERY_DEFAULT_QUEUE

    The queue used by default, if no custom queue is specified.
    This queue must be listed in ``CELERY_QUEUES``.
    The default is ``celery``.

* CELERY_DEFAULT_EXCHANGE

    Name of the default exchange to use when no custom exchange
    is specified.
    The default is ``celery``.

* CELERY_DEFAULT_EXCHANGE_TYPE

    Default exchange type used when no custom exchange is specified.
    The default is ``direct``.

* CELERY_DEFAULT_ROUTING_KEY

    The default routing key used when sending tasks.
    The default is ``celery``.

* CELERY_DEFAULT_DELIVERY_MODE

    Can be ``transient`` or ``persistent``. The default is to send
    persistent messages.

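For example, a set-up with the default queue plus a separate queue for e-mail
tasks might look like the sketch below. The ``email`` queue name is
hypothetical, and the exact queue option keys are described in
:doc:`userguide/routing`:

.. code-block:: python

    CELERY_DEFAULT_QUEUE = "celery"

    CELERY_QUEUES = {
        # The default queue/exchange/binding key ("celery").
        "celery": {
            "exchange": "celery",
            "exchange_type": "direct",
            "binding_key": "celery",
        },
        # A hypothetical extra queue for e-mail tasks.
        "email": {
            "exchange": "celery",
            "exchange_type": "direct",
            "binding_key": "email",
        },
    }
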
Connection
----------

* CELERY_BROKER_CONNECTION_TIMEOUT

    The timeout in seconds before we give up establishing a connection
    to the AMQP server. Default is 4 seconds.

* CELERY_BROKER_CONNECTION_RETRY

    Automatically try to re-establish the connection to the AMQP broker if
    it's lost.

    The time between retries is increased for each retry, and retrying is
    not given up until ``CELERY_BROKER_CONNECTION_MAX_RETRIES`` is exceeded.

    This behavior is on by default.

* CELERY_BROKER_CONNECTION_MAX_RETRIES

    Maximum number of retries before we give up re-establishing a connection
    to the AMQP broker.

    If this is set to ``0`` or ``None``, we will retry forever.

    Default is 100 retries.

    See the example after this list.

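A sketch tying the connection settings together; the values are illustrative,
not recommendations:

.. code-block:: python

    # Fail fast when the broker can't be reached at all.
    CELERY_BROKER_CONNECTION_TIMEOUT = 2

    # Keep retrying a lost connection, but give up after 10 attempts.
    CELERY_BROKER_CONNECTION_RETRY = True
    CELERY_BROKER_CONNECTION_MAX_RETRIES = 10
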
Task execution settings
=======================

* CELERY_ALWAYS_EAGER

    If this is ``True``, all tasks will be executed locally by blocking
    until the task returns. ``apply_async`` and ``Task.delay`` will return
    a :class:`celery.result.EagerResult` which emulates the behavior of
    :class:`celery.result.AsyncResult`, except the result has already
    been evaluated.

    Tasks will never be sent to the queue, but executed locally
    instead (see the sketch after this list).

* CELERY_EAGER_PROPAGATES_EXCEPTIONS

    If this is ``True``, eagerly executed tasks (using ``.apply``, or with
    ``CELERY_ALWAYS_EAGER`` on) will raise exceptions.

    It's the same as always running ``apply`` with ``throw=True``.

* CELERY_IGNORE_RESULT

    Whether to store the task return values or not (tombstones).
    If you still want to store errors, just not successful return values,
    you can set ``CELERY_STORE_ERRORS_EVEN_IF_IGNORED``.

* CELERY_TASK_RESULT_EXPIRES

    Time (in seconds, or a :class:`datetime.timedelta` object) after which
    stored task tombstones will be deleted.

    A built-in periodic task will delete the results after this time
    (:class:`celery.task.builtins.DeleteExpiredTaskMetaTask`).

    **NOTE**: For the moment this only works with the database, cache and
    MongoDB backends.

    **NOTE**: ``celerybeat`` must be running for the results to be expired.

* CELERY_MAX_CACHED_RESULTS

    Total number of results to store before results are evicted from the
    result cache. The default is ``5000``.

* CELERY_TRACK_STARTED

    If ``True`` the task will report its status as "started"
    when the task is executed by a worker.

    The default value is ``False`` as the normal behaviour is to not
    report that level of granularity. Tasks are either pending, finished,
    or waiting to be retried. Having a "started" status can be useful
    when there are long running tasks and there is a need to report which
    task is currently running.

* CELERY_TASK_SERIALIZER

    A string identifying the default serialization
    method to use. Can be ``pickle`` (default),
    ``json``, ``yaml``, or any custom serialization methods that have
    been registered with :mod:`carrot.serialization.registry`.

* CELERY_DEFAULT_RATE_LIMIT

    The global default rate limit for tasks.
    This value is used for tasks that do not have a custom rate limit set.
    The default is no rate limit.

* CELERY_DISABLE_RATE_LIMITS

    Disable all rate limits, even if tasks have explicit rate limits set.

* CELERY_ACKS_LATE

    Late ack means the task messages will be acknowledged **after** the task
    has been executed, not *just before*, which is the default behavior.

    See http://ask.github.com/celery/faq.html#should-i-use-retry-or-acks-late

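The sketch below, which might live in a test-only ``celeryconfig.py``,
combines a few of the settings above. The values are illustrative, and
``timedelta`` is only one of the accepted forms for
``CELERY_TASK_RESULT_EXPIRES``:

.. code-block:: python

    from datetime import timedelta

    # Run tasks inline and re-raise their exceptions; handy in tests.
    CELERY_ALWAYS_EAGER = True
    CELERY_EAGER_PROPAGATES_EXCEPTIONS = True

    # Keep task tombstones for one day only.
    CELERY_TASK_RESULT_EXPIRES = timedelta(days=1)

    # Serialize task messages as JSON instead of pickle.
    CELERY_TASK_SERIALIZER = "json"
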
Worker: celeryd
===============

* CELERY_IMPORTS

    A sequence of modules to import when the celery daemon starts. This is
    useful to add tasks if you are not using Django or cannot use task
    auto-discovery (see the sketch after this list).

* CELERYD_MAX_TASKS_PER_CHILD

    Maximum number of tasks a pool worker process can execute before
    it's replaced with a new one. Default is no limit.

* CELERYD_TASK_TIME_LIMIT

    Task hard time limit in seconds. The worker processing the task will
    be killed and replaced with a new one when this is exceeded.

* CELERYD_SOFT_TASK_TIME_LIMIT

    Task soft time limit in seconds.

    The :exc:`celery.exceptions.SoftTimeLimitExceeded` exception will be
    raised when this is exceeded. The task can catch this to
    e.g. clean up before the hard time limit comes:

    .. code-block:: python

        from celery.decorators import task
        from celery.exceptions import SoftTimeLimitExceeded

        @task()
        def mytask():
            try:
                return do_work()
            except SoftTimeLimitExceeded:
                cleanup_in_a_hurry()

* CELERY_SEND_TASK_ERROR_EMAILS

    If set to ``True``, errors in tasks will be sent to admins by e-mail.
    If unset, it will send the e-mails if ``settings.DEBUG`` is ``False``.

* CELERY_STORE_ERRORS_EVEN_IF_IGNORED

    If set, the worker stores all task errors in the result store even if
    ``Task.ignore_result`` is on.

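A sketch of a worker-related configuration; the module names in
``CELERY_IMPORTS`` are hypothetical, and the limits are illustrative values
only:

.. code-block:: python

    # Hypothetical modules containing task definitions.
    CELERY_IMPORTS = ("myapp.tasks", "myapp.periodic")

    # Recycle each pool process after 100 tasks to keep memory usage bounded.
    CELERYD_MAX_TASKS_PER_CHILD = 100

    # Give tasks a 50 second grace period before the 60 second hard kill.
    CELERYD_SOFT_TASK_TIME_LIMIT = 50
    CELERYD_TASK_TIME_LIMIT = 60
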
Events
------

* CELERY_SEND_EVENTS

    Send events so the worker can be monitored by tools like ``celerymon``.
    See the example after this list.

* CELERY_EVENT_EXCHANGE

    Name of the exchange to send event messages to. Default is
    ``"celeryevent"``.

* CELERY_EVENT_EXCHANGE_TYPE

    The exchange type of the event exchange. Default is to use a ``direct``
    exchange.

* CELERY_EVENT_ROUTING_KEY

    Routing key used when sending event messages. Default is
    ``"celeryevent"``.

* CELERY_EVENT_SERIALIZER

    Message serialization format used when sending event messages. Default is
    ``"json"``.

Broadcast Commands
------------------

* CELERY_BROADCAST_QUEUE

    Name prefix for the queue used when listening for
    broadcast messages. The worker's hostname will be appended
    to the prefix to create the final queue name.

    Default is ``"celeryctl"``.

* CELERY_BROADCAST_EXCHANGE

    Name of the exchange used for broadcast messages.

    Default is ``"celeryctl"``.

* CELERY_BROADCAST_EXCHANGE_TYPE

    Exchange type used for broadcast messages. Default is ``"fanout"``.

Logging
-------

* CELERYD_LOG_FILE

    The default file name the worker daemon logs messages to. Can be
    overridden using the ``--logfile`` option to ``celeryd``.

    The default is ``None`` (log to ``stderr``).

* CELERYD_LOG_LEVEL

    Worker log level. Can be any of ``DEBUG``, ``INFO``, ``WARNING``,
    ``ERROR``, or ``CRITICAL``.

    Can also be set via the ``--loglevel`` argument to ``celeryd``.

    See the :mod:`logging` module for more information.

* CELERYD_LOG_FORMAT

    The format to use for log messages.

    Default is ``[%(asctime)s: %(levelname)s/%(processName)s] %(message)s``

    See the Python :mod:`logging` module for more information about log
    formats.

* CELERYD_TASK_LOG_FORMAT

    The format to use for log messages logged in tasks.

    Default is::

        [%(asctime)s: %(levelname)s/%(processName)s]
            [%(task_name)s(%(task_id)s)] %(message)s

    See the Python :mod:`logging` module for more information about log
    formats. A configuration sketch follows after this list.

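A sketch of the logging settings in a configuration module; the log file path
is hypothetical, and the format string simply spells out the default shown
above:

.. code-block:: python

    # Hypothetical log file path; the default (None) logs to stderr.
    CELERYD_LOG_FILE = "/var/log/celery/celeryd.log"
    CELERYD_LOG_LEVEL = "INFO"

    # Include the task name and id in task log messages (same as the default).
    CELERYD_TASK_LOG_FORMAT = ("[%(asctime)s: %(levelname)s/%(processName)s]"
                               " [%(task_name)s(%(task_id)s)] %(message)s")
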
Custom Component Classes (advanced)
-----------------------------------

* CELERYD_POOL

    Name of the task pool class used by the worker.
    Default is ``"celery.worker.pool.TaskPool"``.

* CELERYD_LISTENER

    Name of the listener class used by the worker.
    Default is ``"celery.worker.listener.CarrotListener"``.

* CELERYD_MEDIATOR

    Name of the mediator class used by the worker.
    Default is ``"celery.worker.controllers.Mediator"``.

* CELERYD_ETA_SCHEDULER

    Name of the ETA scheduler class used by the worker.
    Default is ``"celery.worker.controllers.ScheduleController"``.

Periodic Task Server: celerybeat
================================

* CELERYBEAT_SCHEDULE_FILENAME

    Name of the file celerybeat stores the current schedule in.
    Can be a relative or absolute path, but be aware that the suffix ``.db``
    will be appended to the file name (see the sketch after this list).

    Can also be set via the ``--schedule`` argument to ``celerybeat``.

* CELERYBEAT_MAX_LOOP_INTERVAL

    The maximum number of seconds celerybeat can sleep between checking
    the schedule. Default is 300 seconds (5 minutes).

* CELERYBEAT_LOG_FILE

    The default file name to log messages to. Can be
    overridden using the ``--logfile`` option.

    The default is ``None`` (log to ``stderr``).

* CELERYBEAT_LOG_LEVEL

    Logging level. Can be any of ``DEBUG``, ``INFO``, ``WARNING``,
    ``ERROR``, or ``CRITICAL``.

    Can also be set via the ``--loglevel`` argument.

    See the :mod:`logging` module for more information.

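A sketch of the celerybeat settings; the paths are hypothetical, and remember
that ``.db`` is appended to the schedule file name automatically:

.. code-block:: python

    # Stored as /var/run/celery/celerybeat-schedule.db on disk.
    CELERYBEAT_SCHEDULE_FILENAME = "/var/run/celery/celerybeat-schedule"

    # Wake up at least once a minute to check the schedule.
    CELERYBEAT_MAX_LOOP_INTERVAL = 60

    CELERYBEAT_LOG_FILE = "/var/log/celery/celerybeat.log"
    CELERYBEAT_LOG_LEVEL = "INFO"
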
Monitor Server: celerymon
=========================

* CELERYMON_LOG_FILE

    The default file name to log messages to. Can be
    overridden using the ``--logfile`` option.

    The default is ``None`` (log to ``stderr``).

* CELERYMON_LOG_LEVEL

    Logging level. Can be any of ``DEBUG``, ``INFO``, ``WARNING``,
    ``ERROR``, or ``CRITICAL``.

    See the :mod:`logging` module for more information.