
.. _configuration:

============================
Configuration and defaults
============================

This document describes the configuration options available.

If you're using the default loader, you must create the ``celeryconfig.py``
module and make sure it is available on the Python path.

.. contents::
    :local:

.. _conf-example:

Example configuration file
==========================

This is an example configuration file to get you started.
It should contain all you need to run a basic Celery set-up.

.. code-block:: python

    # List of modules to import when celery starts.
    CELERY_IMPORTS = ("myapp.tasks", )

    ## Result store settings.
    CELERY_RESULT_BACKEND = "database"
    CELERY_RESULT_DBURI = "sqlite:///mydatabase.db"

    ## Broker settings.
    BROKER_HOST = "localhost"
    BROKER_PORT = 5672
    BROKER_VHOST = "/"
    BROKER_USER = "guest"
    BROKER_PASSWORD = "guest"

    ## Worker settings
    ## If you're doing mostly I/O you can have more processes,
    ## but if mostly CPU-bound, try to keep it close to the
    ## number of CPUs on your machine. If not set, the number of CPUs/cores
    ## available will be used.
    CELERYD_CONCURRENCY = 10
    # CELERYD_LOG_FILE = "celeryd.log"
    # CELERYD_LOG_LEVEL = "INFO"

.. _conf-concurrency:

Concurrency settings
====================

* CELERYD_CONCURRENCY

    The number of concurrent worker processes executing tasks simultaneously.

    Defaults to the number of CPUs/cores available.

* CELERYD_PREFETCH_MULTIPLIER

    How many messages to prefetch at a time, multiplied by the number of
    concurrent processes. The default is 4 (four messages for each
    process), which is usually a good choice. However, if you have very
    long running tasks waiting in the queue and you have to start the
    workers, note that the first worker to start will initially receive
    four times the number of messages, so the tasks may not be fairly
    balanced among the workers. A short sketch combining both settings
    follows this list.
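
A minimal sketch of these two settings in ``celeryconfig.py``; the values are
illustrative, not recommendations:

.. code-block:: python

    # Run eight worker processes; a reasonable starting point for a
    # machine with eight cores (illustrative value only).
    CELERYD_CONCURRENCY = 8

    # Each process prefetches 4 messages, so up to 8 * 4 = 32 messages
    # may be reserved by this worker at once.
    CELERYD_PREFETCH_MULTIPLIER = 4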

.. _conf-result-backend:

Task result backend settings
============================

* CELERY_RESULT_BACKEND

    The backend used to store task results (tombstones).
    Can be one of the following:

    * database (default)

        Use a relational database supported by `SQLAlchemy`_.

    * cache

        Use `memcached`_ to store the results.

    * mongodb

        Use `MongoDB`_ to store the results.

    * redis

        Use `Redis`_ to store the results.

    * tyrant

        Use `Tokyo Tyrant`_ to store the results.

    * amqp

        Send results back as AMQP messages
        (**WARNING** While very fast, you must make sure you only
        receive the result once. See :doc:`userguide/executing`).

.. _`SQLAlchemy`: http://sqlalchemy.org
.. _`memcached`: http://memcached.org
.. _`MongoDB`: http://mongodb.org
.. _`Redis`: http://code.google.com/p/redis/
.. _`Tokyo Tyrant`: http://1978th.net/tokyotyrant/

.. _conf-database-result-backend:

Database backend settings
=========================

Please see `Supported Databases`_ for a table of supported databases.
To use this backend you need to configure it with a
`Connection String`_. Some examples include:

.. code-block:: python

    # sqlite (filename)
    CELERY_RESULT_DBURI = "sqlite:///celerydb.sqlite"

    # mysql
    CELERY_RESULT_DBURI = "mysql://scott:tiger@localhost/foo"

    # postgresql
    CELERY_RESULT_DBURI = "postgresql://scott:tiger@localhost/mydatabase"

    # oracle
    CELERY_RESULT_DBURI = "oracle://scott:tiger@127.0.0.1:1521/sidname"

See `Connection String`_ for more information about connection
strings.

To specify additional SQLAlchemy database engine options you can use
the ``CELERY_RESULT_ENGINE_OPTIONS`` setting::

    # echo enables verbose logging from SQLAlchemy.
    CELERY_RESULT_ENGINE_OPTIONS = {"echo": True}

.. _`Supported Databases`:
    http://www.sqlalchemy.org/docs/dbengine.html#supported-databases

.. _`Connection String`:
    http://www.sqlalchemy.org/docs/dbengine.html#create-engine-url-arguments

Example configuration
---------------------

.. code-block:: python

    CELERY_RESULT_BACKEND = "database"
    CELERY_RESULT_DBURI = "mysql://user:password@host/dbname"

.. _conf-amqp-result-backend:

AMQP backend settings
=====================

* CELERY_RESULT_EXCHANGE

    Name of the exchange to publish results in. Default is ``"celeryresults"``.

* CELERY_RESULT_EXCHANGE_TYPE

    The exchange type of the result exchange. Default is to use a ``direct``
    exchange.

* CELERY_RESULT_SERIALIZER

    Result message serialization format. Default is ``"pickle"``.

* CELERY_RESULTS_PERSISTENT

    If set to ``True``, result messages will be persistent. This means the
    messages will not be lost after a broker restart. The default is for the
    results to be transient.

Example configuration
---------------------

.. code-block:: python

    CELERY_RESULT_BACKEND = "amqp"
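
If the defaults listed above need tuning, the settings can be combined like
this; the persistent and JSON choices are illustrative, not recommendations:

.. code-block:: python

    CELERY_RESULT_BACKEND = "amqp"

    # Keep result messages across broker restarts
    # (results are transient by default).
    CELERY_RESULTS_PERSISTENT = True

    # Serialize result messages as JSON instead of the default pickle.
    CELERY_RESULT_SERIALIZER = "json"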

.. _conf-cache-result-backend:

Cache backend settings
======================

The cache backend supports the `pylibmc`_ and `python-memcached` libraries.
The latter is used only if `pylibmc`_ is not installed.

Example configuration
---------------------

Using a single memcached server:

.. code-block:: python

    CELERY_CACHE_BACKEND = 'memcached://127.0.0.1:11211/'

Using multiple memcached servers:

.. code-block:: python

    CELERY_RESULT_BACKEND = "cache"
    CELERY_CACHE_BACKEND = 'memcached://172.19.26.240:11211;172.19.26.242:11211/'

You can set pylibmc options using the ``CELERY_CACHE_BACKEND_OPTIONS``
setting:

.. code-block:: python

    CELERY_CACHE_BACKEND_OPTIONS = {"binary": True,
                                    "behaviors": {"tcp_nodelay": True}}

.. _`pylibmc`: http://sendapatch.se/projects/pylibmc/

.. _conf-tyrant-result-backend:

Tokyo Tyrant backend settings
=============================

**NOTE** The Tokyo Tyrant backend requires the :mod:`pytyrant` library:
http://pypi.python.org/pypi/pytyrant/

This backend requires the following configuration directives to be set:

* TT_HOST

    Hostname of the Tokyo Tyrant server.

* TT_PORT

    The port the Tokyo Tyrant server is listening to.

Example configuration
---------------------

.. code-block:: python

    CELERY_RESULT_BACKEND = "tyrant"
    TT_HOST = "localhost"
    TT_PORT = 1978

.. _conf-redis-result-backend:

Redis backend settings
======================

**NOTE** The Redis backend requires the :mod:`redis` library:
http://pypi.python.org/pypi/redis/0.5.5

To install the redis package use ``pip`` or ``easy_install``::

    $ pip install redis

This backend requires the following configuration directives to be set:

* REDIS_HOST

    Hostname of the Redis database server, e.g. ``"localhost"``.

* REDIS_PORT

    Port of the Redis database server, e.g. ``6379``.

Also, the following optional configuration directives are available:

* REDIS_DB

    Database number to use. Default is 0.

* REDIS_PASSWORD

    Password used to connect to the database.

Example configuration
---------------------

.. code-block:: python

    CELERY_RESULT_BACKEND = "redis"
    REDIS_HOST = "localhost"
    REDIS_PORT = 6379
    REDIS_DB = 0
    REDIS_CONNECT_RETRY = True

.. _conf-mongodb-result-backend:

MongoDB backend settings
========================

**NOTE** The MongoDB backend requires the :mod:`pymongo` library:
http://github.com/mongodb/mongo-python-driver/tree/master

* CELERY_MONGODB_BACKEND_SETTINGS

    This is a dict supporting the following keys:

    * host

        Hostname of the MongoDB server. Defaults to "localhost".

    * port

        The port the MongoDB server is listening to. Defaults to 27017.

    * user

        User name to authenticate to the MongoDB server as (optional).

    * password

        Password to authenticate to the MongoDB server (optional).

    * database

        The database name to connect to. Defaults to "celery".

    * taskmeta_collection

        The collection name to store task meta data.
        Defaults to "celery_taskmeta".

Example configuration
---------------------

.. code-block:: python

    CELERY_RESULT_BACKEND = "mongodb"
    CELERY_MONGODB_BACKEND_SETTINGS = {
        "host": "192.168.1.100",
        "port": 30000,
        "database": "mydb",
        "taskmeta_collection": "my_taskmeta_collection",
    }

.. _conf-messaging:

Messaging settings
==================

.. _conf-messaging-routing:

Routing
-------

* CELERY_QUEUES

    The mapping of queues the worker consumes from. This is a dictionary
    of queue name/options. See :doc:`userguide/routing` for more information.

    The default is a queue/exchange/binding key of ``"celery"``, with
    exchange type ``direct``.

    You don't have to care about this unless you want custom routing facilities.

* CELERY_DEFAULT_QUEUE

    The queue used by default, if no custom queue is specified.
    This queue must be listed in ``CELERY_QUEUES``.
    The default is: ``celery``.

* CELERY_DEFAULT_EXCHANGE

    Name of the default exchange to use when no custom exchange
    is specified.
    The default is: ``celery``.

* CELERY_DEFAULT_EXCHANGE_TYPE

    Default exchange type used when no custom exchange is specified.
    The default is: ``direct``.

* CELERY_DEFAULT_ROUTING_KEY

    The default routing key used when sending tasks.
    The default is: ``celery``.

* CELERY_DEFAULT_DELIVERY_MODE

    Can be ``transient`` or ``persistent``. The default is to send
    persistent messages.
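
The routing defaults above can also be written out explicitly. The following
is a minimal sketch; the exact keys accepted inside the ``CELERY_QUEUES``
options dictionary are described in :doc:`userguide/routing`, so treat the
``exchange``/``exchange_type``/``binding_key`` names here as assumptions
based on that guide:

.. code-block:: python

    # Restates the documented defaults: a single "celery" queue bound to a
    # direct exchange named "celery" with binding key "celery".
    CELERY_QUEUES = {"celery": {"exchange": "celery",
                                "exchange_type": "direct",
                                "binding_key": "celery"}}

    CELERY_DEFAULT_QUEUE = "celery"
    CELERY_DEFAULT_EXCHANGE = "celery"
    CELERY_DEFAULT_EXCHANGE_TYPE = "direct"
    CELERY_DEFAULT_ROUTING_KEY = "celery"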

.. _conf-broker-connection:

Connection
----------

* CELERY_BROKER_CONNECTION_TIMEOUT

    The timeout in seconds before we give up establishing a connection
    to the AMQP server. Default is 4 seconds.

* CELERY_BROKER_CONNECTION_RETRY

    Automatically try to re-establish the connection to the AMQP broker if
    it's lost.

    The time between retries is increased for each retry, and is
    not exhausted before ``CELERY_BROKER_CONNECTION_MAX_RETRIES`` is exceeded.

    This behavior is on by default.

* CELERY_BROKER_CONNECTION_MAX_RETRIES

    Maximum number of retries before we give up re-establishing a connection
    to the AMQP broker.

    If this is set to ``0`` or ``None``, we will retry forever.

    Default is 100 retries.
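
These connection settings rarely need to be changed; the sketch below simply
restates the documented defaults using the setting names listed above:

.. code-block:: python

    # Give up establishing a connection after 4 seconds (the default).
    CELERY_BROKER_CONNECTION_TIMEOUT = 4

    # Keep retrying a lost broker connection (on by default), but stop
    # after 100 attempts instead of retrying forever.
    CELERY_BROKER_CONNECTION_RETRY = True
    CELERY_BROKER_CONNECTION_MAX_RETRIES = 100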

.. _conf-task-execution:

Task execution settings
=======================

* CELERY_ALWAYS_EAGER

    If this is ``True``, all tasks will be executed locally by blocking
    until the task is finished. ``apply_async`` and ``Task.delay`` will return
    a :class:`celery.result.EagerResult` which emulates the behavior of
    :class:`celery.result.AsyncResult`, except the result has already
    been evaluated.

    Tasks will never be sent to the queue, but executed locally
    instead.

* CELERY_EAGER_PROPAGATES_EXCEPTIONS

    If this is ``True``, eagerly executed tasks (using ``.apply``, or with
    ``CELERY_ALWAYS_EAGER`` on) will raise exceptions.

    It's the same as always running ``apply`` with ``throw=True``.

* CELERY_IGNORE_RESULT

    Whether to store the task return values or not (tombstones).
    If you still want to store errors, just not successful return values,
    you can set ``CELERY_STORE_ERRORS_EVEN_IF_IGNORED``.

* CELERY_TASK_RESULT_EXPIRES

    Time (in seconds, or a :class:`datetime.timedelta` object) after which
    stored task tombstones will be deleted.

    A built-in periodic task will delete the results after this time
    (:class:`celery.task.builtins.DeleteExpiredTaskMetaTask`).

    **NOTE**: For the moment this only works with the database, cache and
    MongoDB backends.

    **NOTE**: ``celerybeat`` must be running for the results to be expired.

* CELERY_MAX_CACHED_RESULTS

    Total number of results to store before results are evicted from the
    result cache. The default is ``5000``.

* CELERY_TRACK_STARTED

    If ``True`` the task will report its status as "started"
    when the task is executed by a worker.

    The default value is ``False`` as the normal behaviour is to not
    report that level of granularity. Tasks are either pending, finished,
    or waiting to be retried. Having a "started" status can be useful
    when there are long running tasks and there is a need to report which
    task is currently running.

* CELERY_TASK_SERIALIZER

    A string identifying the default serialization
    method to use. Can be ``pickle`` (default),
    ``json``, ``yaml``, or any custom serialization methods that have
    been registered with :mod:`carrot.serialization.registry`.

* CELERY_DEFAULT_RATE_LIMIT

    The global default rate limit for tasks.
    This value is used for tasks that do not have a custom rate limit set.
    The default is no rate limit.

* CELERY_DISABLE_RATE_LIMITS

    Disable all rate limits, even if tasks have explicit rate limits set.

* CELERY_ACKS_LATE

    Late ack means the task messages will be acknowledged **after** the task
    has been executed, not *just before*, which is the default behavior.

    See http://ask.github.com/celery/faq.html#should-i-use-retry-or-acks-late
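
Several of the settings above are commonly combined in ``celeryconfig.py``.
A hedged sketch; ignoring results and the one-day expiry are illustrative
choices, not recommendations:

.. code-block:: python

    from datetime import timedelta

    # Don't store successful return values, but keep error tombstones.
    CELERY_IGNORE_RESULT = True
    CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True

    # Delete stored tombstones after one day (requires celerybeat and a
    # database, cache or MongoDB result backend).
    CELERY_TASK_RESULT_EXPIRES = timedelta(days=1)

    # Acknowledge task messages after execution instead of just before.
    CELERY_ACKS_LATE = True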

.. _conf-celeryd:

Worker: celeryd
===============

* CELERY_IMPORTS

    A sequence of modules to import when the celery daemon starts.

    This is used to specify the task modules to import, but also
    to import signal handlers and additional remote control commands, etc.

* CELERYD_MAX_TASKS_PER_CHILD

    Maximum number of tasks a pool worker process can execute before
    it's replaced with a new one. Default is no limit.

* CELERYD_TASK_TIME_LIMIT

    Task hard time limit in seconds. The worker processing the task will
    be killed and replaced with a new one when this is exceeded.

* CELERYD_SOFT_TASK_TIME_LIMIT

    Task soft time limit in seconds.

    The :exc:`celery.exceptions.SoftTimeLimitExceeded` exception will be
    raised when this is exceeded. The task can catch this to
    e.g. clean up before the hard time limit comes.

    .. code-block:: python

        from celery.decorators import task
        from celery.exceptions import SoftTimeLimitExceeded

        @task()
        def mytask():
            try:
                return do_work()
            except SoftTimeLimitExceeded:
                cleanup_in_a_hurry()

* CELERY_STORE_ERRORS_EVEN_IF_IGNORED

    If set, the worker stores all task errors in the result store even if
    ``Task.ignore_result`` is on.
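
A short sketch showing how the worker limits above fit together; the concrete
numbers and the ``myapp.tasks`` module name are illustrative assumptions:

.. code-block:: python

    # Task modules (and e.g. signal handlers) to import at start-up.
    CELERY_IMPORTS = ("myapp.tasks", )

    # Recycle each pool process after 100 tasks (default is no limit).
    CELERYD_MAX_TASKS_PER_CHILD = 100

    # Raise SoftTimeLimitExceeded after 50 seconds, kill the worker
    # process after 60 seconds.
    CELERYD_SOFT_TASK_TIME_LIMIT = 50
    CELERYD_TASK_TIME_LIMIT = 60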

.. _conf-error-mails:

Error E-Mails
-------------

* CELERY_SEND_TASK_ERROR_EMAILS

    If set to ``True``, errors in tasks will be sent to admins by e-mail.

* ADMINS

    List of ``(name, email_address)`` tuples for the admins that should
    receive error e-mails.

* SERVER_EMAIL

    The e-mail address this worker sends e-mails from.
    Default is ``"celery@localhost"``.

* MAIL_HOST

    The mail server to use. Default is ``"localhost"``.

* MAIL_HOST_USER

    Username (if required) to log on to the mail server with.

* MAIL_HOST_PASSWORD

    Password (if required) to log on to the mail server with.

* MAIL_PORT

    The port the mail server is listening on. Default is ``25``.

.. _conf-example-error-mail-config:

Example E-Mail configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This configuration enables the sending of error e-mails to
``george@vandelay.com`` and ``kramer@vandelay.com``:

.. code-block:: python

    # Enables error e-mails.
    CELERY_SEND_TASK_ERROR_EMAILS = True

    # Name and e-mail addresses of recipients
    ADMINS = (
        ("George Costanza", "george@vandelay.com"),
        ("Cosmo Kramer", "kramer@vandelay.com"),
    )

    # E-mail address used as sender (From field).
    SERVER_EMAIL = "no-reply@vandelay.com"

    # Mailserver configuration
    EMAIL_HOST = "mail.vandelay.com"
    EMAIL_PORT = 25
    # EMAIL_HOST_USER = "servers"
    # EMAIL_HOST_PASSWORD = "s3cr3t"

.. _conf-events:

Events
------

* CELERY_SEND_EVENTS

    Send events so the worker can be monitored by tools like ``celerymon``.

* CELERY_EVENT_EXCHANGE

    Name of the exchange to send event messages to. Default is
    ``"celeryevent"``.

* CELERY_EVENT_EXCHANGE_TYPE

    The exchange type of the event exchange. Default is to use a ``direct``
    exchange.

* CELERY_EVENT_ROUTING_KEY

    Routing key used when sending event messages. Default is
    ``"celeryevent"``.

* CELERY_EVENT_SERIALIZER

    Message serialization format used when sending event messages. Default is
    ``"json"``.

.. _conf-broadcast:

Broadcast Commands
------------------

* CELERY_BROADCAST_QUEUE

    Name prefix for the queue used when listening for
    broadcast messages. The worker's hostname will be appended
    to the prefix to create the final queue name.

    Default is ``"celeryctl"``.

* CELERY_BROADCAST_EXCHANGE

    Name of the exchange used for broadcast messages.

    Default is ``"celeryctl"``.

* CELERY_BROADCAST_EXCHANGE_TYPE

    Exchange type used for broadcast messages. Default is ``"fanout"``.

.. _conf-logging:

Logging
-------

* CELERYD_LOG_FILE

    The default file name the worker daemon logs messages to. Can be
    overridden using the ``--logfile`` option to ``celeryd``.

    The default is ``None`` (``stderr``).

* CELERYD_LOG_LEVEL

    Worker log level, can be any of ``DEBUG``, ``INFO``, ``WARNING``,
    ``ERROR``, or ``CRITICAL``.

    Can also be set via the ``--loglevel`` argument.

    See the :mod:`logging` module for more information.

* CELERYD_LOG_FORMAT

    The format to use for log messages.

    Default is ``[%(asctime)s: %(levelname)s/%(processName)s] %(message)s``

    See the Python :mod:`logging` module for more information about log
    formats.

* CELERYD_TASK_LOG_FORMAT

    The format to use for log messages logged in tasks.

    Default is::

        [%(asctime)s: %(levelname)s/%(processName)s]
            [%(task_name)s(%(task_id)s)] %(message)s

    See the Python :mod:`logging` module for more information about log
    formats.
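
As a sketch, the logging settings could be adjusted like this; the file name
and the shortened task log format are illustrative, and most set-ups keep the
defaults:

.. code-block:: python

    CELERYD_LOG_FILE = "celeryd.log"
    CELERYD_LOG_LEVEL = "INFO"

    # A slightly shorter task log format (illustrative only).
    CELERYD_TASK_LOG_FORMAT = "[%(asctime)s: %(levelname)s] %(task_name)s: %(message)s"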

.. _conf-custom-components:

Custom Component Classes (advanced)
-----------------------------------

* CELERYD_POOL

    Name of the task pool class used by the worker.
    Default is ``"celery.concurrency.processes.TaskPool"``.

* CELERYD_LISTENER

    Name of the listener class used by the worker.
    Default is ``"celery.worker.listener.CarrotListener"``.

* CELERYD_MEDIATOR

    Name of the mediator class used by the worker.
    Default is ``"celery.worker.controllers.Mediator"``.

* CELERYD_ETA_SCHEDULER

    Name of the ETA scheduler class used by the worker.
    Default is ``"celery.worker.controllers.ScheduleController"``.

.. _conf-celerybeat:

Periodic Task Server: celerybeat
================================

* CELERYBEAT_SCHEDULE_FILENAME

    Name of the file celerybeat stores the current schedule in.
    Can be a relative or absolute path, but be aware that the suffix ``.db``
    will be appended to the file name.

    Can also be set via the ``--schedule`` argument.

* CELERYBEAT_MAX_LOOP_INTERVAL

    The maximum number of seconds celerybeat can sleep between checking
    the schedule. Default is 300 seconds (5 minutes).

* CELERYBEAT_LOG_FILE

    The default file name to log messages to. Can be
    overridden using the ``--logfile`` option.

    The default is ``None`` (``stderr``).

* CELERYBEAT_LOG_LEVEL

    Logging level. Can be any of ``DEBUG``, ``INFO``, ``WARNING``,
    ``ERROR``, or ``CRITICAL``.

    Can also be set via the ``--loglevel`` argument.

    See the :mod:`logging` module for more information.
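
A small sketch of a celerybeat configuration; the schedule file name and the
shorter loop interval are illustrative choices:

.. code-block:: python

    # ".db" is appended, so the schedule ends up in "celerybeat-schedule.db".
    CELERYBEAT_SCHEDULE_FILENAME = "celerybeat-schedule"

    # Check the schedule at least every 30 seconds instead of the
    # default maximum of 300 seconds.
    CELERYBEAT_MAX_LOOP_INTERVAL = 30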

.. _conf-celerymon:

Monitor Server: celerymon
=========================

* CELERYMON_LOG_FILE

    The default file name to log messages to. Can be
    overridden using the ``--logfile`` option.

    The default is ``None`` (``stderr``).

* CELERYMON_LOG_LEVEL

    Logging level. Can be any of ``DEBUG``, ``INFO``, ``WARNING``,
    ``ERROR``, or ``CRITICAL``.

    See the :mod:`logging` module for more information.