  1. .. _configuration:
  2. ============================
  3. Configuration and defaults
  4. ============================
  5. This document describes the configuration options available.
  6. If you're using the default loader, you must create the :file:`celeryconfig.py`
  7. module and make sure it is available on the Python path.
.. contents::
    :local:
    :depth: 2
  11. .. _conf-example:
  12. Example configuration file
  13. ==========================
  14. This is an example configuration file to get you started.
  15. It should contain all you need to run a basic Celery set-up.
  16. .. code-block:: python
  17. ## Broker settings.
  18. broker_url = 'amqp://guest:guest@localhost:5672//'
  19. # List of modules to import when celery starts.
  20. imports = ('myapp.tasks',)
  21. ## Using the database to store task state and results.
  22. result_backend = 'db+sqlite:///results.db'
  23. task_annotations = {'tasks.add': {'rate_limit': '10/s'}}
  24. Configuration Directives
  25. ========================
  26. .. _conf-datetime:
  27. General settings
  28. ----------------
  29. .. setting:: accept_content
  30. accept_content
  31. ~~~~~~~~~~~~~~
  32. A whitelist of content-types/serializers to allow.
  33. If a message is received that is not in this list then
  34. the message will be discarded with an error.
  35. By default any content type is enabled (including pickle and yaml)
  36. so make sure untrusted parties do not have access to your broker.
  37. See :ref:`guide-security` for more.
  38. Example::
  39. # using serializer name
  40. accept_content = ['json']
  41. # or the actual content-type (MIME)
  42. accept_content = ['application/json']
  43. Time and date settings
  44. ----------------------
  45. .. setting:: enable_utc
  46. enable_utc
  47. ~~~~~~~~~~
  48. .. versionadded:: 2.5
  49. If enabled dates and times in messages will be converted to use
  50. the UTC timezone.
  51. Note that workers running Celery versions below 2.5 will assume a local
  52. timezone for all messages, so only enable if all workers have been
  53. upgraded.
  54. Enabled by default since version 3.0.
  55. .. setting:: timezone
  56. timezone
  57. ~~~~~~~~
  58. Configure Celery to use a custom time zone.
  59. The timezone value can be any time zone supported by the `pytz`_
  60. library.
If not set, the UTC timezone is used. For backwards compatibility
there is also the :setting:`enable_utc` setting; when this is set to false
the system local timezone is used instead.
  64. .. _`pytz`: http://pypi.python.org/pypi/pytz/
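For example, to keep UTC enabled while scheduling and displaying times in a
named zone (the zone name here is only an illustration)::

    enable_utc = True
    timezone = 'Europe/London'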
  65. .. _conf-tasks:
  66. Task settings
  67. -------------
  68. .. setting:: task_annotations
  69. task_annotations
  70. ~~~~~~~~~~~~~~~~
  71. This setting can be used to rewrite any task attribute from the
  72. configuration. The setting can be a dict, or a list of annotation
  73. objects that filter for tasks and return a map of attributes
  74. to change.
  75. This will change the ``rate_limit`` attribute for the ``tasks.add``
  76. task:
  77. .. code-block:: python
  78. task_annotations = {'tasks.add': {'rate_limit': '10/s'}}
  79. or change the same for all tasks:
  80. .. code-block:: python
  81. task_annotations = {'*': {'rate_limit': '10/s'}}
  82. You can change methods too, for example the ``on_failure`` handler:
  83. .. code-block:: python
def my_on_failure(self, exc, task_id, args, kwargs, einfo):
    print('Oh no! Task failed: {0!r}'.format(exc))

task_annotations = {'*': {'on_failure': my_on_failure}}
  87. If you need more flexibility then you can use objects
  88. instead of a dict to choose which tasks to annotate:
  89. .. code-block:: python
class MyAnnotate(object):

    def annotate(self, task):
        if task.name.startswith('tasks.'):
            return {'rate_limit': '10/s'}

task_annotations = (MyAnnotate(), {…})
  95. .. setting:: task_compression
  96. task_compression
  97. ~~~~~~~~~~~~~~~~
  98. Default compression used for task messages.
  99. Can be ``gzip``, ``bzip2`` (if available), or any custom
  100. compression schemes registered in the Kombu compression registry.
  101. The default is to send uncompressed messages.
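For example, to compress all task messages with gzip::

    task_compression = 'gzip'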
  102. .. setting:: task_protocol
  103. task_protocol
  104. ~~~~~~~~~~~~~
  105. Default task message protocol version.
  106. Supports protocols: 1 and 2 (default is 1 for backwards compatibility).
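For example, to opt in to protocol version 2::

    task_protocol = 2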
  107. .. setting:: task_serializer
  108. task_serializer
  109. ~~~~~~~~~~~~~~~
  110. A string identifying the default serialization method to use. Can be
  111. `pickle` (default), `json`, `yaml`, `msgpack` or any custom serialization
  112. methods that have been registered with :mod:`kombu.serialization.registry`.
  113. .. seealso::
  114. :ref:`calling-serializers`.
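For example, to switch from the default pickle serializer to JSON::

    task_serializer = 'json'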
  115. .. setting:: task_publish_retry
  116. task_publish_retry
  117. ~~~~~~~~~~~~~~~~~~
  118. .. versionadded:: 2.2
  119. Decides if publishing task messages will be retried in the case
  120. of connection loss or other connection errors.
  121. See also :setting:`task_publish_retry_policy`.
  122. Enabled by default.
  123. .. setting:: task_publish_retry_policy
  124. task_publish_retry_policy
  125. ~~~~~~~~~~~~~~~~~~~~~~~~~
  126. .. versionadded:: 2.2
  127. Defines the default policy when retrying publishing a task message in
  128. the case of connection loss or other connection errors.
  129. See :ref:`calling-retry` for more information.
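As a sketch, a policy could look like the following; the keys shown are the
retry options described in :ref:`calling-retry`, and the values are only
illustrative::

    task_publish_retry_policy = {
        'max_retries': 3,
        'interval_start': 0,
        'interval_step': 0.2,
        'interval_max': 0.2,
    }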
  130. .. _conf-task-execution:
  131. Task execution settings
  132. -----------------------
  133. .. setting:: task_always_eager
  134. task_always_eager
  135. ~~~~~~~~~~~~~~~~~
  136. If this is :const:`True`, all tasks will be executed locally by blocking until
  137. the task returns. ``apply_async()`` and ``Task.delay()`` will return
  138. an :class:`~celery.result.EagerResult` instance, which emulates the API
  139. and behavior of :class:`~celery.result.AsyncResult`, except the result
  140. is already evaluated.
  141. That is, tasks will be executed locally instead of being sent to
  142. the queue.
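A common use is running tasks inline during tests, for example::

    task_always_eager = True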
  143. .. setting:: task_eager_propagates_exceptions
  144. task_eager_propagates_exceptions
  145. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  146. If this is :const:`True`, eagerly executed tasks (applied by `task.apply()`,
  147. or when the :setting:`task_always_eager` setting is enabled), will
  148. propagate exceptions.
  149. It's the same as always running ``apply()`` with ``throw=True``.
  150. .. setting:: task_ignore_result
  151. task_ignore_result
  152. ~~~~~~~~~~~~~~~~~~
  153. Whether to store the task return values or not (tombstones).
  154. If you still want to store errors, just not successful return values,
  155. you can set :setting:`task_store_errors_even_if_ignored`.
  156. .. setting:: task_store_errors_even_if_ignored
  157. task_store_errors_even_if_ignored
  158. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  159. If set, the worker stores all task errors in the result store even if
  160. :attr:`Task.ignore_result <celery.task.base.Task.ignore_result>` is on.
  161. .. setting:: task_track_started
  162. task_track_started
  163. ~~~~~~~~~~~~~~~~~~
  164. If :const:`True` the task will report its status as "started" when the
  165. task is executed by a worker. The default value is :const:`False` as
  166. the normal behaviour is to not report that level of granularity. Tasks
  167. are either pending, finished, or waiting to be retried. Having a "started"
  168. state can be useful for when there are long running tasks and there is a
  169. need to report which task is currently running.
  170. .. setting:: task_time_limit
  171. task_time_limit
  172. ~~~~~~~~~~~~~~~
  173. Task hard time limit in seconds. The worker processing the task will
  174. be killed and replaced with a new one when this is exceeded.
  175. .. setting:: task_soft_time_limit
  176. task_soft_time_limit
  177. ~~~~~~~~~~~~~~~~~~~~
  178. Task soft time limit in seconds.
  179. The :exc:`~@SoftTimeLimitExceeded` exception will be
  180. raised when this is exceeded. The task can catch this to
  181. e.g. clean up before the hard time limit comes.
  182. Example:
  183. .. code-block:: python
from celery.exceptions import SoftTimeLimitExceeded

@app.task
def mytask():
    try:
        return do_work()
    except SoftTimeLimitExceeded:
        cleanup_in_a_hurry()
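The limits themselves are plain numbers of seconds, for example (the values
are only illustrative)::

    task_soft_time_limit = 60   # raise SoftTimeLimitExceeded after one minute
    task_time_limit = 120       # hard-kill the worker process after two minutes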
  191. .. setting:: task_acks_late
  192. task_acks_late
  193. ~~~~~~~~~~~~~~
  194. Late ack means the task messages will be acknowledged **after** the task
  195. has been executed, not *just before*, which is the default behavior.
  196. .. seealso::
  197. FAQ: :ref:`faq-acks_late-vs-retry`.
  198. .. setting:: task_reject_on_worker_lost
  199. task_reject_on_worker_lost
  200. ~~~~~~~~~~~~~~~~~~~~~~~~~~
Even if :setting:`task_acks_late` is enabled, the worker will
acknowledge tasks when the worker process executing them abruptly
exits or is signalled (e.g. :sig:`KILL`/:sig:`INT`, etc.).
Setting this to true allows the message to be requeued instead,
so that the task will be executed again by the same worker, or another
worker.
  207. .. warning::
  208. Enabling this can cause message loops; make sure you know
  209. what you're doing.
  210. .. setting:: task_default_rate_limit
  211. task_default_rate_limit
  212. ~~~~~~~~~~~~~~~~~~~~~~~
  213. The global default rate limit for tasks.
This value is used for tasks that don't have a custom rate limit.
The default is no rate limit.

.. seealso::

    The :setting:`worker_disable_rate_limits` setting can
    disable all rate limits.
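For example, to limit every task without an explicit rate limit to ten per
second::

    task_default_rate_limit = '10/s'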
  219. .. _conf-result-backend:
  220. Task result backend settings
  221. ----------------------------
  222. .. setting:: result_backend
  223. result_backend
  224. ~~~~~~~~~~~~~~
  225. The backend used to store task results (tombstones).
  226. Disabled by default.
  227. Can be one of the following:
  228. * rpc
  229. Send results back as AMQP messages
  230. See :ref:`conf-rpc-result-backend`.
  231. * database
  232. Use a relational database supported by `SQLAlchemy`_.
  233. See :ref:`conf-database-result-backend`.
  234. * redis
  235. Use `Redis`_ to store the results.
  236. See :ref:`conf-redis-result-backend`.
  237. * cache
  238. Use `memcached`_ to store the results.
  239. See :ref:`conf-cache-result-backend`.
  240. * mongodb
  241. Use `MongoDB`_ to store the results.
  242. See :ref:`conf-mongodb-result-backend`.
  243. * new_cassandra
Use `Cassandra`_ to store the results, using a newer database driver than the old ``cassandra`` backend.
  245. See :ref:`conf-new_cassandra-result-backend`.
  246. * ironcache
  247. Use `IronCache`_ to store the results.
  248. See :ref:`conf-ironcache-result-backend`.
  249. * couchbase
  250. Use `Couchbase`_ to store the results.
  251. See :ref:`conf-couchbase-result-backend`.
  252. * couchdb
  253. Use `CouchDB`_ to store the results.
  254. See :ref:`conf-couchdb-result-backend`.
  255. * amqp
  256. Older AMQP backend (badly) emulating a database-based backend.
  257. See :ref:`conf-amqp-result-backend`.
.. warning::

    While the AMQP result backend is very efficient, you must make sure
    you only receive the same result once. See :doc:`userguide/calling`.
  261. .. _`SQLAlchemy`: http://sqlalchemy.org
  262. .. _`memcached`: http://memcached.org
  263. .. _`MongoDB`: http://mongodb.org
  264. .. _`Redis`: http://redis.io
  265. .. _`Cassandra`: http://cassandra.apache.org/
  266. .. _`IronCache`: http://www.iron.io/cache
  267. .. _`CouchDB`: http://www.couchdb.com/
  268. .. _`Couchbase`: http://www.couchbase.com/
  269. .. setting:: result_serializer
  270. result_serializer
  271. ~~~~~~~~~~~~~~~~~
  272. Result serialization format. Default is ``pickle``. See
  273. :ref:`calling-serializers` for information about supported
  274. serialization formats.
  275. .. setting:: result_compression
  276. result_compression
  277. ~~~~~~~~~~~~~~~~~~
  278. Optional compression method used for task results.
  279. Supports the same options as the :setting:`task_serializer` setting.
  280. Default is no compression.
  281. .. setting:: result_expires
  282. result_expires
  283. ~~~~~~~~~~~~~~
Time (in seconds, or a :class:`~datetime.timedelta` object) after which
stored task tombstones will be deleted.
  286. A built-in periodic task will delete the results after this time
  287. (``celery.backend_cleanup``), assuming that ``celery beat`` is
  288. enabled. The task runs daily at 4am.
  289. A value of :const:`None` or 0 means results will never expire (depending
  290. on backend specifications).
  291. Default is to expire after 1 day.
  292. .. note::
  293. For the moment this only works with the amqp, database, cache, redis and MongoDB
  294. backends.
  295. When using the database or MongoDB backends, `celery beat` must be
  296. running for the results to be expired.
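For example, to keep results for one hour, using either a plain number of
seconds or a :class:`~datetime.timedelta`::

    from datetime import timedelta

    result_expires = timedelta(hours=1)  # or simply 3600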
  297. .. setting:: result_cache_max
  298. result_cache_max
  299. ~~~~~~~~~~~~~~~~
The result backend caches ready results used by the client.
  301. This is the total number of results to cache before older results are evicted.
  302. The default is 5000. 0 or None means no limit, and a value of :const:`-1`
  303. will disable the cache.
  304. .. _conf-database-result-backend:
  305. Database backend settings
  306. -------------------------
  307. Database URL Examples
  308. ~~~~~~~~~~~~~~~~~~~~~
  309. To use the database backend you have to configure the
  310. :setting:`result_backend` setting with a connection URL and the ``db+``
  311. prefix:
  312. .. code-block:: python
  313. result_backend = 'db+scheme://user:password@host:port/dbname'
  314. Examples::
  315. # sqlite (filename)
  316. result_backend = 'db+sqlite:///results.sqlite'
  317. # mysql
  318. result_backend = 'db+mysql://scott:tiger@localhost/foo'
  319. # postgresql
  320. result_backend = 'db+postgresql://scott:tiger@localhost/mydatabase'
  321. # oracle
  322. result_backend = 'db+oracle://scott:tiger@127.0.0.1:1521/sidname'
  324. Please see `Supported Databases`_ for a table of supported databases,
  325. and `Connection String`_ for more information about connection
  326. strings (which is the part of the URI that comes after the ``db+`` prefix).
  327. .. _`Supported Databases`:
  328. http://www.sqlalchemy.org/docs/core/engines.html#supported-databases
  329. .. _`Connection String`:
  330. http://www.sqlalchemy.org/docs/core/engines.html#database-urls
  331. .. setting:: sqlalchemy_dburi
  332. sqlalchemy_dburi
  333. ~~~~~~~~~~~~~~~~
  334. This setting is no longer used as it's now possible to specify
  335. the database URL directly in the :setting:`result_backend` setting.
  336. .. setting:: sqlalchemy_engine_options
  337. sqlalchemy_engine_options
  338. ~~~~~~~~~~~~~~~~~~~~~~~~~
  339. To specify additional SQLAlchemy database engine options you can use
the :setting:`sqlalchemy_engine_options` setting::
  341. # echo enables verbose logging from SQLAlchemy.
  342. sqlalchemy_engine_options = {'echo': True}
  343. .. setting:: sqlalchemy_short_lived_sessions
  344. sqlalchemy_short_lived_sessions
  345. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Example enabling short lived sessions::

    sqlalchemy_short_lived_sessions = True
  347. Short lived sessions are disabled by default. If enabled they can drastically reduce
  348. performance, especially on systems processing lots of tasks. This option is useful
  349. on low-traffic workers that experience errors as a result of cached database connections
  350. going stale through inactivity. For example, intermittent errors like
  351. `(OperationalError) (2006, 'MySQL server has gone away')` can be fixed by enabling
  352. short lived sessions. This option only affects the database backend.
  353. .. setting:: sqlalchemy_table_names
  354. sqlalchemy_table_names
  355. ~~~~~~~~~~~~~~~~~~~~~~
  356. When SQLAlchemy is configured as the result backend, Celery automatically
  357. creates two tables to store result metadata for tasks. This setting allows
  358. you to customize the table names:
  359. .. code-block:: python
# use custom table names for the database result backend.
sqlalchemy_table_names = {
    'task': 'myapp_taskmeta',
    'group': 'myapp_groupmeta',
}
  365. .. _conf-rpc-result-backend:
  366. RPC backend settings
  367. --------------------
  368. .. setting:: result_persistent
  369. result_persistent
  370. ~~~~~~~~~~~~~~~~~
  371. If set to :const:`True`, result messages will be persistent. This means the
  372. messages will not be lost after a broker restart. The default is for the
  373. results to be transient.
  374. Example configuration
  375. ~~~~~~~~~~~~~~~~~~~~~
  376. .. code-block:: python
  377. result_backend = 'rpc://'
  378. result_persistent = False
  379. .. _conf-cache-result-backend:
  380. Cache backend settings
  381. ----------------------
  382. .. note::
  383. The cache backend supports the `pylibmc`_ and `python-memcached`
  384. libraries. The latter is used only if `pylibmc`_ is not installed.
  385. Using a single memcached server:
  386. .. code-block:: python
  387. result_backend = 'cache+memcached://127.0.0.1:11211/'
  388. Using multiple memcached servers:
  389. .. code-block:: python
  390. result_backend = """
  391. cache+memcached://172.19.26.240:11211;172.19.26.242:11211/
  392. """.strip()
  393. The "memory" backend stores the cache in memory only:
  394. .. code-block:: python
  395. result_backend = 'cache'
  396. cache_backend = 'memory'
  397. .. setting:: cache_backend_options
  398. cache_backend_options
  399. ~~~~~~~~~~~~~~~~~~~~~
  400. You can set pylibmc options using the :setting:`cache_backend_options`
  401. setting:
  402. .. code-block:: python
cache_backend_options = {
    'binary': True,
    'behaviors': {'tcp_nodelay': True},
}
  407. .. _`pylibmc`: http://sendapatch.se/projects/pylibmc/
  408. .. setting:: cache_backend
  409. cache_backend
  410. ~~~~~~~~~~~~~
  411. This setting is no longer used as it's now possible to specify
  412. the cache backend directly in the :setting:`result_backend` setting.
  413. .. _conf-redis-result-backend:
  414. Redis backend settings
  415. ----------------------
  416. Configuring the backend URL
  417. ~~~~~~~~~~~~~~~~~~~~~~~~~~~
  418. .. note::
  419. The Redis backend requires the :mod:`redis` library:
  420. http://pypi.python.org/pypi/redis/
  421. To install the redis package use `pip` or `easy_install`:
  422. .. code-block:: console
  423. $ pip install redis
  424. This backend requires the :setting:`result_backend`
  425. setting to be set to a Redis URL::
  426. result_backend = 'redis://:password@host:port/db'
  427. For example::
  428. result_backend = 'redis://localhost/0'
  429. which is the same as::
  430. result_backend = 'redis://'
  431. The fields of the URL are defined as follows:
  432. - *host*
  433. Host name or IP address of the Redis server. e.g. `localhost`.
  434. - *port*
  435. Port to the Redis server. Default is 6379.
  436. - *db*
  437. Database number to use. Default is 0.
  438. The db can include an optional leading slash.
  439. - *password*
  440. Password used to connect to the database.
  441. .. setting:: redis_max_connections
  442. redis_max_connections
  443. ~~~~~~~~~~~~~~~~~~~~~
  444. Maximum number of connections available in the Redis connection
  445. pool used for sending and retrieving results.
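For example (the pool size shown is only illustrative)::

    result_backend = 'redis://localhost/0'
    redis_max_connections = 20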
  446. .. _conf-mongodb-result-backend:
  447. MongoDB backend settings
  448. ------------------------
  449. .. note::
  450. The MongoDB backend requires the :mod:`pymongo` library:
  451. http://github.com/mongodb/mongo-python-driver/tree/master
  452. .. setting:: mongodb_backend_settings
  453. mongodb_backend_settings
  454. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  455. This is a dict supporting the following keys:
  456. * database
  457. The database name to connect to. Defaults to ``celery``.
  458. * taskmeta_collection
  459. The collection name to store task meta data.
  460. Defaults to ``celery_taskmeta``.
  461. * max_pool_size
  462. Passed as max_pool_size to PyMongo's Connection or MongoClient
  463. constructor. It is the maximum number of TCP connections to keep
  464. open to MongoDB at a given time. If there are more open connections
  465. than max_pool_size, sockets will be closed when they are released.
  466. Defaults to 10.
  467. * options
  468. Additional keyword arguments to pass to the mongodb connection
  469. constructor. See the :mod:`pymongo` docs to see a list of arguments
  470. supported.
  471. .. _example-mongodb-result-config:
  472. Example configuration
  473. ~~~~~~~~~~~~~~~~~~~~~
  474. .. code-block:: python
  475. result_backend = 'mongodb://192.168.1.100:30000/'
mongodb_backend_settings = {
    'database': 'mydb',
    'taskmeta_collection': 'my_taskmeta_collection',
}
  480. .. _conf-new_cassandra-result-backend:
  481. new_cassandra backend settings
  482. ------------------------------
  483. .. note::
  484. This Cassandra backend driver requires :mod:`cassandra-driver`.
  485. https://pypi.python.org/pypi/cassandra-driver
  486. To install, use `pip` or `easy_install`:
.. code-block:: console
  488. $ pip install cassandra-driver
  489. This backend requires the following configuration directives to be set.
  490. .. setting:: cassandra_servers
  491. cassandra_servers
  492. ~~~~~~~~~~~~~~~~~
  493. List of ``host`` Cassandra servers. e.g.::
  494. cassandra_servers = ['localhost']
  495. .. setting:: cassandra_port
  496. cassandra_port
  497. ~~~~~~~~~~~~~~
  498. Port to contact the Cassandra servers on. Default is 9042.
  499. .. setting:: cassandra_keyspace
  500. cassandra_keyspace
  501. ~~~~~~~~~~~~~~~~~~
  502. The keyspace in which to store the results. e.g.::
  503. cassandra_keyspace = 'tasks_keyspace'
  504. .. setting:: cassandra_column_family
  505. cassandra_column_family
  506. ~~~~~~~~~~~~~~~~~~~~~~~
  507. The table (column family) in which to store the results. e.g.::
  508. cassandra_column_family = 'tasks'
  509. .. setting:: cassandra_read_consistency
  510. cassandra_read_consistency
  511. ~~~~~~~~~~~~~~~~~~~~~~~~~~
  512. The read consistency used. Values can be ``ONE``, ``TWO``, ``THREE``, ``QUORUM``, ``ALL``,
  513. ``LOCAL_QUORUM``, ``EACH_QUORUM``, ``LOCAL_ONE``.
  514. .. setting:: cassandra_write_consistency
  515. cassandra_write_consistency
  516. ~~~~~~~~~~~~~~~~~~~~~~~~~~~
  517. The write consistency used. Values can be ``ONE``, ``TWO``, ``THREE``, ``QUORUM``, ``ALL``,
  518. ``LOCAL_QUORUM``, ``EACH_QUORUM``, ``LOCAL_ONE``.
  519. .. setting:: cassandra_entry_ttl
  520. cassandra_entry_ttl
  521. ~~~~~~~~~~~~~~~~~~~
Time-to-live for status entries. They will expire and be removed that many
seconds after being added. The default (:const:`None`) means they will never expire.
  524. Example configuration
  525. ~~~~~~~~~~~~~~~~~~~~~
  526. .. code-block:: python
  527. cassandra_servers = ['localhost']
  528. cassandra_keyspace = 'celery'
  529. cassandra_column_family = 'task_results'
  530. cassandra_read_consistency = 'ONE'
  531. cassandra_write_consistency = 'ONE'
  532. cassandra_entry_ttl = 86400
  533. .. _conf-riak-result-backend:
  534. Riak backend settings
  535. ---------------------
  536. .. note::
  537. The Riak backend requires the :mod:`riak` library:
  538. http://pypi.python.org/pypi/riak/
  539. To install the riak package use `pip` or `easy_install`:
  540. .. code-block:: console
  541. $ pip install riak
  542. This backend requires the :setting:`result_backend`
  543. setting to be set to a Riak URL::
  544. result_backend = "riak://host:port/bucket"
  545. For example::
result_backend = "riak://localhost/celery"
  547. which is the same as::
  548. result_backend = "riak://"
  549. The fields of the URL are defined as follows:
  550. - *host*
  551. Host name or IP address of the Riak server. e.g. `"localhost"`.
  552. - *port*
  553. Port to the Riak server using the protobuf protocol. Default is 8087.
  554. - *bucket*
  555. Bucket name to use. Default is `celery`.
  556. The bucket needs to be a string with ascii characters only.
Alternatively, this backend can be configured with the following configuration directives.
  558. .. setting:: riak_backend_settings
  559. riak_backend_settings
  560. ~~~~~~~~~~~~~~~~~~~~~
  561. This is a dict supporting the following keys:
  562. * host
  563. The host name of the Riak server. Defaults to "localhost".
  564. * port
  565. The port the Riak server is listening to. Defaults to 8087.
  566. * bucket
  567. The bucket name to connect to. Defaults to "celery".
  568. * protocol
The protocol to use to connect to the Riak server. This is not configurable
via :setting:`result_backend`.
  571. .. _conf-ironcache-result-backend:
  572. IronCache backend settings
  573. --------------------------
  574. .. note::
  575. The IronCache backend requires the :mod:`iron_celery` library:
  576. http://pypi.python.org/pypi/iron_celery
  577. To install the iron_celery package use `pip` or `easy_install`:
  578. .. code-block:: console
  579. $ pip install iron_celery
  580. IronCache is configured via the URL provided in :setting:`result_backend`, for example::
  581. result_backend = 'ironcache://project_id:token@'
  582. Or to change the cache name::
ironcache://project_id:token@/awesomecache
  584. For more information, see: https://github.com/iron-io/iron_celery
  585. .. _conf-couchbase-result-backend:
  586. Couchbase backend settings
  587. --------------------------
  588. .. note::
  589. The Couchbase backend requires the :mod:`couchbase` library:
  590. https://pypi.python.org/pypi/couchbase
  591. To install the couchbase package use `pip` or `easy_install`:
  592. .. code-block:: console
  593. $ pip install couchbase
This backend can be configured by setting :setting:`result_backend`
to a Couchbase URL::
  596. result_backend = 'couchbase://username:password@host:port/bucket'
  597. .. setting:: couchbase_backend_settings
  598. couchbase_backend_settings
  599. ~~~~~~~~~~~~~~~~~~~~~~~~~~
  600. This is a dict supporting the following keys:
  601. * host
  602. Host name of the Couchbase server. Defaults to ``localhost``.
  603. * port
  604. The port the Couchbase server is listening to. Defaults to ``8091``.
  605. * bucket
  606. The default bucket the Couchbase server is writing to.
  607. Defaults to ``default``.
  608. * username
  609. User name to authenticate to the Couchbase server as (optional).
  610. * password
  611. Password to authenticate to the Couchbase server (optional).
  612. .. _conf-couchdb-result-backend:
  613. CouchDB backend settings
  614. ------------------------
  615. .. note::
  616. The CouchDB backend requires the :mod:`pycouchdb` library:
  617. https://pypi.python.org/pypi/pycouchdb
To install the pycouchdb package use `pip` or `easy_install`:
  619. .. code-block:: console
  620. $ pip install pycouchdb
This backend can be configured by setting :setting:`result_backend`
to a CouchDB URL::
  623. result_backend = 'couchdb://username:password@host:port/container'
  624. The URL is formed out of the following parts:
  625. * username
  626. User name to authenticate to the CouchDB server as (optional).
  627. * password
  628. Password to authenticate to the CouchDB server (optional).
  629. * host
  630. Host name of the CouchDB server. Defaults to ``localhost``.
  631. * port
  632. The port the CouchDB server is listening to. Defaults to ``8091``.
  633. * container
  634. The default container the CouchDB server is writing to.
  635. Defaults to ``default``.
  636. .. _conf-amqp-result-backend:
  637. AMQP backend settings
  638. ---------------------
  639. .. admonition:: Do not use in production.
  640. This is the old AMQP result backend that creates one queue per task,
if you want to send results back as messages please consider using the
  642. RPC backend instead, or if you need the results to be persistent
  643. use a result backend designed for that purpose (e.g. Redis, or a database).
  644. .. note::
  645. The AMQP backend requires RabbitMQ 1.1.0 or higher to automatically
  646. expire results. If you are running an older version of RabbitMQ
you should disable result expiration like this::

    result_expires = None
  649. .. setting:: result_exchange
  650. result_exchange
  651. ~~~~~~~~~~~~~~~
  652. Name of the exchange to publish results in. Default is `celeryresults`.
  653. .. setting:: result_exchange_type
  654. result_exchange_type
  655. ~~~~~~~~~~~~~~~~~~~~
  656. The exchange type of the result exchange. Default is to use a `direct`
  657. exchange.
  658. result_persistent
  659. ~~~~~~~~~~~~~~~~~
  660. If set to :const:`True`, result messages will be persistent. This means the
  661. messages will not be lost after a broker restart. The default is for the
  662. results to be transient.
  663. Example configuration
  664. ~~~~~~~~~~~~~~~~~~~~~
  665. .. code-block:: python
  666. result_backend = 'amqp'
  667. result_expires = 18000 # 5 hours.
  668. .. _conf-messaging:
  669. Message Routing
  670. ---------------
  671. .. _conf-messaging-routing:
  672. .. setting:: task_queues
  673. task_queues
  674. ~~~~~~~~~~~
  675. Most users will not want to specify this setting and should rather use
  676. the :ref:`automatic routing facilities <routing-automatic>`.
  677. If you really want to configure advanced routing, this setting should
  678. be a list of :class:`kombu.Queue` objects the worker will consume from.
Note that this setting can be overridden per worker via the `-Q` option,
  680. or individual queues from this list (by name) can be excluded using
  681. the `-X` option.
  682. Also see :ref:`routing-basics` for more information.
  683. The default is a queue/exchange/binding key of ``celery``, with
  684. exchange type ``direct``.
  685. See also :setting:`task_routes`
  686. .. setting:: task_routes
  687. task_routes
  688. ~~~~~~~~~~~~~
  689. A list of routers, or a single router used to route tasks to queues.
  690. When deciding the final destination of a task the routers are consulted
  691. in order.
  692. A router can be specified as either:
* A router class instance
* A string which provides the path to a router class
* A dict containing a router specification: it will be converted to a :class:`celery.routes.MapRoute` instance.
  696. Examples:
  697. .. code-block:: python
task_routes = {
    "celery.ping": "default",
    "mytasks.add": "cpu-bound",
    "video.encode": {
        "queue": "video",
        "exchange": "media",
        "routing_key": "media.video.encode",
    },
}

task_routes = ("myapp.tasks.Router", {"celery.ping": "default"})
  708. Where ``myapp.tasks.Router`` could be:
  709. .. code-block:: python
class Router(object):

    def route_for_task(self, task, args=None, kwargs=None):
        if task == "celery.ping":
            return "default"
  714. ``route_for_task`` may return a string or a dict. A string then means
  715. it's a queue name in :setting:`task_queues`, a dict means it's a custom route.
  716. When sending tasks, the routers are consulted in order. The first
router that doesn't return ``None`` is the route to use. The message options
are then merged with the found route settings, where the router's settings
have priority.
  720. Example if :func:`~celery.execute.apply_async` has these arguments:
  721. .. code-block:: python
  722. Task.apply_async(immediate=False, exchange="video",
  723. routing_key="video.compress")
  724. and a router returns:
  725. .. code-block:: python
  726. {"immediate": True, "exchange": "urgent"}
  727. the final message options will be:
  728. .. code-block:: python
  729. immediate=True, exchange="urgent", routing_key="video.compress"
  730. (and any default message options defined in the
  731. :class:`~celery.task.base.Task` class)
  732. Values defined in :setting:`task_routes` have precedence over values defined in
  733. :setting:`task_queues` when merging the two.
With the following settings:
  735. .. code-block:: python
task_queues = {
    "cpubound": {
        "exchange": "cpubound",
        "routing_key": "cpubound",
    },
}

task_routes = {
    "tasks.add": {
        "queue": "cpubound",
        "routing_key": "tasks.add",
        "serializer": "json",
    },
}
  749. The final routing options for ``tasks.add`` will become:
  750. .. code-block:: javascript
  751. {"exchange": "cpubound",
  752. "routing_key": "tasks.add",
  753. "serializer": "json"}
  754. See :ref:`routers` for more examples.
  755. .. setting:: task_queue_ha_policy
  756. task_queue_ha_policy
  757. ~~~~~~~~~~~~~~~~~~~~
  758. :brokers: RabbitMQ
  759. This will set the default HA policy for a queue, and the value
  760. can either be a string (usually ``all``):
  761. .. code-block:: python
  762. task_queue_ha_policy = 'all'
Using 'all' will replicate the queue to all current nodes.
Or you can give it a list of nodes to replicate to:
  765. .. code-block:: python
  766. task_queue_ha_policy = ['rabbit@host1', 'rabbit@host2']
  767. Using a list will implicitly set ``x-ha-policy`` to 'nodes' and
  768. ``x-ha-policy-params`` to the given list of nodes.
  769. See http://www.rabbitmq.com/ha.html for more information.
  770. .. setting:: worker_direct
  771. worker_direct
  772. ~~~~~~~~~~~~~
This option enables a dedicated queue for every worker,
so that tasks can be routed to specific workers.
  775. The queue name for each worker is automatically generated based on
  776. the worker hostname and a ``.dq`` suffix, using the ``C.dq`` exchange.
  777. For example the queue name for the worker with node name ``w1@example.com``
  778. becomes::
  779. w1@example.com.dq
Then you can route a task to that specific worker by specifying the hostname
as the routing key and the ``C.dq`` exchange::
task_routes = {
    'tasks.add': {'exchange': 'C.dq', 'routing_key': 'w1@example.com'}
}
  785. .. setting:: task_create_missing_queues
  786. task_create_missing_queues
  787. ~~~~~~~~~~~~~~~~~~~~~~~~~~
  788. If enabled (default), any queues specified that are not defined in
  789. :setting:`task_queues` will be automatically created. See
  790. :ref:`routing-automatic`.
  791. .. setting:: task_default_queue
  792. task_default_queue
  793. ~~~~~~~~~~~~~~~~~~
  794. The name of the default queue used by `.apply_async` if the message has
  795. no route or no custom queue has been specified.
  796. This queue must be listed in :setting:`task_queues`.
  797. If :setting:`task_queues` is not specified then it is automatically
  798. created containing one queue entry, where this name is used as the name of
  799. that queue.
  800. The default is: `celery`.
  801. .. seealso::
  802. :ref:`routing-changing-default-queue`
  803. .. setting:: task_default_exchange
  804. task_default_exchange
  805. ~~~~~~~~~~~~~~~~~~~~~
  806. Name of the default exchange to use when no custom exchange is
  807. specified for a key in the :setting:`task_queues` setting.
  808. The default is: `celery`.
  809. .. setting:: task_default_exchange_type
  810. task_default_exchange_type
  811. ~~~~~~~~~~~~~~~~~~~~~~~~~~
  812. Default exchange type used when no custom exchange type is specified
  813. for a key in the :setting:`task_queues` setting.
  814. The default is: `direct`.
  815. .. setting:: task_default_routing_key
  816. task_default_routing_key
  817. ~~~~~~~~~~~~~~~~~~~~~~~~
  818. The default routing key used when no custom routing key
  819. is specified for a key in the :setting:`task_queues` setting.
  820. The default is: `celery`.
  821. .. setting:: task_default_delivery_mode
  822. task_default_delivery_mode
  823. ~~~~~~~~~~~~~~~~~~~~~~~~~~
  824. Can be `transient` or `persistent`. The default is to send
  825. persistent messages.
  826. .. _conf-broker-settings:
  827. Broker Settings
  828. ---------------
  829. .. setting:: broker_url
  830. broker_url
  831. ~~~~~~~~~~
Default broker URL. This must be a URL of the form::
  833. transport://userid:password@hostname:port/virtual_host
Only the scheme part (``transport://``) is required, the rest
is optional, and defaults to the specific transport's default values.
  836. The transport part is the broker implementation to use, and the
  837. default is ``amqp``, which uses ``librabbitmq`` by default or falls back to
  838. ``pyamqp`` if that is not installed. Also there are many other choices including
  839. ``redis``, ``beanstalk``, ``sqlalchemy``, ``django``, ``mongodb``,
  840. ``couchdb``.
  841. It can also be a fully qualified path to your own transport implementation.
More than one broker URL, of the same transport, can also be specified.
  843. The broker URLs can be passed in as a single string that is semicolon delimited::
  844. broker_url = 'transport://userid:password@hostname:port//;transport://userid:password@hostname:port//'
  845. Or as a list::
broker_url = [
    'transport://userid:password@localhost:port//',
    'transport://userid:password@hostname:port//'
]
  850. The brokers will then be used in the :setting:`broker_failover_strategy`.
  851. See :ref:`kombu:connection-urls` in the Kombu documentation for more
  852. information.
  853. .. setting:: broker_failover_strategy
  854. broker_failover_strategy
  855. ~~~~~~~~~~~~~~~~~~~~~~~~
  856. Default failover strategy for the broker Connection object. If supplied,
  857. may map to a key in 'kombu.connection.failover_strategies', or be a reference
  858. to any method that yields a single item from a supplied list.
  859. Example::
# Random failover strategy
import random
from itertools import repeat

def random_failover_strategy(servers):
    it = list(servers)  # don't modify callers list
    shuffle = random.shuffle
    for _ in repeat(None):
        shuffle(it)
        yield it[0]

broker_failover_strategy = random_failover_strategy
  868. .. setting:: broker_heartbeat
  869. broker_heartbeat
  870. ~~~~~~~~~~~~~~~~
  871. :transports supported: ``pyamqp``
  872. It's not always possible to detect connection loss in a timely
manner using TCP/IP alone, so AMQP defines something called heartbeats
that's used by both the client and the broker to detect if
a connection was closed.
  876. Heartbeats are disabled by default.
  877. If the heartbeat value is 10 seconds, then
  878. the heartbeat will be monitored at the interval specified
  879. by the :setting:`broker_heartbeat_checkrate` setting, which by default is
  880. double the rate of the heartbeat value
  881. (so for the default 10 seconds, the heartbeat is checked every 5 seconds).
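For example, to request a heartbeat every 10 seconds, checked at the default
rate of twice per heartbeat interval::

    broker_heartbeat = 10
    broker_heartbeat_checkrate = 2.0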
  882. .. setting:: broker_heartbeat_checkrate
  883. broker_heartbeat_checkrate
  884. ~~~~~~~~~~~~~~~~~~~~~~~~~~
  885. :transports supported: ``pyamqp``
  886. At intervals the worker will monitor that the broker has not missed
  887. too many heartbeats. The rate at which this is checked is calculated
  888. by dividing the :setting:`broker_heartbeat` value with this value,
  889. so if the heartbeat is 10.0 and the rate is the default 2.0, the check
  890. will be performed every 5 seconds (twice the heartbeat sending rate).
  891. .. setting:: broker_use_ssl
  892. broker_use_ssl
  893. ~~~~~~~~~~~~~~
  894. :transports supported: ``pyamqp``, ``redis``
  895. Toggles SSL usage on broker connection and SSL settings.
  896. If ``True`` the connection will use SSL with default SSL settings.
  897. If set to a dict, will configure SSL connection according to the specified
  898. policy. The format used is python `ssl.wrap_socket()
  899. options <https://docs.python.org/3/library/ssl.html#ssl.wrap_socket>`_.
  900. Default is ``False`` (no SSL).
  901. Note that SSL socket is generally served on a separate port by the broker.
  902. Example providing a client cert and validating the server cert against a custom
  903. certificate authority:
  904. .. code-block:: python
import ssl

broker_use_ssl = {
    'keyfile': '/var/ssl/private/worker-key.pem',
    'certfile': '/var/ssl/amqp-server-cert.pem',
    'ca_certs': '/var/ssl/myca.pem',
    'cert_reqs': ssl.CERT_REQUIRED
}
  912. .. warning::
Be careful using ``broker_use_ssl=True``. It is possible that your default
configuration won't validate the server cert at all. Please read the Python
  915. `ssl module security
  916. considerations <https://docs.python.org/3/library/ssl.html#ssl-security>`_.
  917. .. setting:: broker_pool_limit
  918. broker_pool_limit
  919. ~~~~~~~~~~~~~~~~~
  920. .. versionadded:: 2.3
  921. The maximum number of connections that can be open in the connection pool.
  922. The pool is enabled by default since version 2.5, with a default limit of ten
  923. connections. This number can be tweaked depending on the number of
  924. threads/greenthreads (eventlet/gevent) using a connection. For example
  925. running eventlet with 1000 greenlets that use a connection to the broker,
  926. contention can arise and you should consider increasing the limit.
  927. If set to :const:`None` or 0 the connection pool will be disabled and
  928. connections will be established and closed for every use.
  929. Default (since 2.5) is to use a pool of 10 connections.
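For example, when running many eventlet/gevent green threads you might raise
the limit (the number shown is only illustrative)::

    broker_pool_limit = 100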
  930. .. setting:: broker_connection_timeout
  931. broker_connection_timeout
  932. ~~~~~~~~~~~~~~~~~~~~~~~~~
  933. The default timeout in seconds before we give up establishing a connection
  934. to the AMQP server. Default is 4 seconds.
  935. .. setting:: broker_connection_retry
  936. broker_connection_retry
  937. ~~~~~~~~~~~~~~~~~~~~~~~
  938. Automatically try to re-establish the connection to the AMQP broker if lost.
  939. The time between retries is increased for each retry, and is
  940. not exhausted before :setting:`broker_connection_max_retries` is
  941. exceeded.
  942. This behavior is on by default.
  943. .. setting:: broker_connection_max_retries
  944. broker_connection_max_retries
  945. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  946. Maximum number of retries before we give up re-establishing a connection
  947. to the AMQP broker.
  948. If this is set to :const:`0` or :const:`None`, we will retry forever.
  949. Default is 100 retries.
  950. .. setting:: broker_login_method
  951. broker_login_method
  952. ~~~~~~~~~~~~~~~~~~~
  953. Set custom amqp login method, default is ``AMQPLAIN``.
  954. .. setting:: broker_transport_options
  955. broker_transport_options
  956. ~~~~~~~~~~~~~~~~~~~~~~~~
  957. .. versionadded:: 2.2
  958. A dict of additional options passed to the underlying transport.
  959. See your transport user manual for supported options (if any).
  960. Example setting the visibility timeout (supported by Redis and SQS
  961. transports):
  962. .. code-block:: python
  963. broker_transport_options = {'visibility_timeout': 18000} # 5 hours
  964. .. _conf-worker:
  965. Worker
  966. ------
  967. .. setting:: imports
  968. imports
  969. ~~~~~~~
  970. A sequence of modules to import when the worker starts.
  971. This is used to specify the task modules to import, but also
  972. to import signal handlers and additional remote control commands, etc.
  973. The modules will be imported in the original order.
  974. .. setting:: include
  975. include
  976. ~~~~~~~
  977. Exact same semantics as :setting:`imports`, but can be used as a means
  978. to have different import categories.
  979. The modules in this setting are imported after the modules in
  980. :setting:`imports`.
  981. .. _conf-concurrency:
  982. .. setting:: worker_concurrency
  983. worker_concurrency
  984. ~~~~~~~~~~~~~~~~~~
  985. The number of concurrent worker processes/threads/green threads executing
  986. tasks.
  987. If you're doing mostly I/O you can have more processes,
  988. but if mostly CPU-bound, try to keep it close to the
  989. number of CPUs on your machine. If not set, the number of CPUs/cores
  990. on the host will be used.
  991. Defaults to the number of available CPUs.
  992. .. setting:: worker_prefetch_multiplier
  993. worker_prefetch_multiplier
  994. ~~~~~~~~~~~~~~~~~~~~~~~~~~
  995. How many messages to prefetch at a time multiplied by the number of
  996. concurrent processes. The default is 4 (four messages for each
  997. process). The default setting is usually a good choice, however -- if you
  998. have very long running tasks waiting in the queue and you have to start the
  999. workers, note that the first worker to start will receive four times the
  1000. number of messages initially. Thus the tasks may not be fairly distributed
  1001. to the workers.
  1002. To disable prefetching, set :setting:`worker_prefetch_multiplier` to 1.
  1003. Changing that setting to 0 will allow the worker to keep consuming
  1004. as many messages as it wants.
  1005. For more on prefetching, read :ref:`optimizing-prefetch-limit`
  1006. .. note::
  1007. Tasks with ETA/countdown are not affected by prefetch limits.
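For example, to reserve only one message at a time per worker process::

    worker_prefetch_multiplier = 1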
  1008. .. setting:: worker_lost_wait
  1009. worker_lost_wait
  1010. ~~~~~~~~~~~~~~~~
  1011. In some cases a worker may be killed without proper cleanup,
  1012. and the worker may have published a result before terminating.
  1013. This value specifies how long we wait for any missing results before
  1014. raising a :exc:`@WorkerLostError` exception.
  1015. Default is 10.0
  1016. .. setting:: worker_max_tasks_per_child
  1017. worker_max_tasks_per_child
  1018. ~~~~~~~~~~~~~~~~~~~~~~~~~~~
  1019. Maximum number of tasks a pool worker process can execute before
  1020. it's replaced with a new one. Default is no limit.
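For example, to recycle every pool process after 100 tasks (the number is
only illustrative)::

    worker_max_tasks_per_child = 100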
  1021. .. setting:: worker_max_memory_per_child
  1022. worker_max_memory_per_child
  1023. ~~~~~~~~~~~~~~~~~~~~~~~~~~~
  1024. Maximum amount of resident memory that may be consumed by a
  1025. worker before it will be replaced by a new worker. If a single
  1026. task causes a worker to exceed this limit, the task will be
  1027. completed, and the worker will be replaced afterwards. Default:
  1028. no limit.
  1029. .. setting:: worker_disable_rate_limits
  1030. worker_disable_rate_limits
  1031. ~~~~~~~~~~~~~~~~~~~~~~~~~~
Disable all rate limits, even if tasks have explicit rate limits set.
  1033. .. setting:: worker_state_db
  1034. worker_state_db
  1035. ~~~~~~~~~~~~~~~
Name of the file used to store persistent worker state (like revoked tasks).
  1037. Can be a relative or absolute path, but be aware that the suffix `.db`
  1038. may be appended to the file name (depending on Python version).
  1039. Can also be set via the :option:`--statedb` argument to
  1040. :mod:`~celery.bin.worker`.
  1041. Not enabled by default.
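For example (the path shown is just an illustration)::

    worker_state_db = '/var/run/celery/worker-state'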
  1042. .. setting:: worker_timer_precision
  1043. worker_timer_precision
  1044. ~~~~~~~~~~~~~~~~~~~~~~
  1045. Set the maximum time in seconds that the ETA scheduler can sleep between
  1046. rechecking the schedule. Default is 1 second.
Setting this value to 1 second means the scheduler's precision will
be 1 second. If you need near millisecond precision you can set this to 0.1.
  1049. .. setting:: worker_enable_remote_control
  1050. worker_enable_remote_control
  1051. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  1052. Specify if remote control of the workers is enabled.
  1053. Default is :const:`True`.
  1054. .. _conf-error-mails:
  1055. Error E-Mails
  1056. -------------
  1057. .. setting:: task_send_error_emails
  1058. task_send_error_emails
  1059. ~~~~~~~~~~~~~~~~~~~~~~
  1060. The default value for the `Task.send_error_emails` attribute, which if
  1061. set to :const:`True` means errors occurring during task execution will be
  1062. sent to :setting:`admins` by email.
  1063. Disabled by default.
  1064. .. setting:: admins
  1065. admins
  1066. ~~~~~~
  1067. List of `(name, email_address)` tuples for the administrators that should
  1068. receive error emails.
  1069. .. setting:: server_email
  1070. server_email
  1071. ~~~~~~~~~~~~
  1072. The email address this worker sends emails from.
  1073. Default is celery@localhost.
  1074. .. setting:: email_host
  1075. email_host
  1076. ~~~~~~~~~~
  1077. The mail server to use. Default is ``localhost``.
  1078. .. setting:: email_host_user
  1079. email_host_user
  1080. ~~~~~~~~~~~~~~~
  1081. User name (if required) to log on to the mail server with.
  1082. .. setting:: email_host_password
  1083. email_host_password
  1084. ~~~~~~~~~~~~~~~~~~~
  1085. Password (if required) to log on to the mail server with.
  1086. .. setting:: email_port
  1087. email_port
  1088. ~~~~~~~~~~
  1089. The port the mail server is listening on. Default is `25`.
  1090. .. setting:: email_use_ssl
  1091. email_use_ssl
  1092. ~~~~~~~~~~~~~
  1093. Use SSL when connecting to the SMTP server. Disabled by default.
  1094. .. setting:: email_use_tls
  1095. email_use_tls
  1096. ~~~~~~~~~~~~~
  1097. Use TLS when connecting to the SMTP server. Disabled by default.

.. setting:: email_timeout

email_timeout
~~~~~~~~~~~~~

Timeout in seconds before giving up on connecting to the SMTP server
when sending emails.

The default is 2 seconds.

.. setting:: email_charset

email_charset
~~~~~~~~~~~~~

.. versionadded:: 4.0

Charset for outgoing emails. Default is ``"us-ascii"``.

.. _conf-example-error-mail-config:

Example E-Mail configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This configuration enables the sending of error emails to
george@vandelay.com and kramer@vandelay.com:

.. code-block:: python

    # Enables error emails.
    task_send_error_emails = True

    # Name and email addresses of recipients
    admins = (
        ('George Costanza', 'george@vandelay.com'),
        ('Cosmo Kramer', 'kramer@vandelay.com'),
    )

    # Email address used as sender (From field).
    server_email = 'no-reply@vandelay.com'

    # Mailserver configuration
    email_host = 'mail.vandelay.com'
    email_port = 25
    # email_host_user = 'servers'
    # email_host_password = 's3cr3t'

.. _conf-events:

Events
------

.. setting:: worker_send_events

worker_send_events
~~~~~~~~~~~~~~~~~~

Send task-related events so that tasks can be monitored using tools like
`flower`. Sets the default value for the worker's :option:`-E` argument.

.. setting:: task_send_sent_event

task_send_sent_event
~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.2

If enabled, a :event:`task-sent` event will be sent for every task so tasks can be
tracked before they are consumed by a worker.

Disabled by default.
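
A minimal sketch enabling both worker events and the :event:`task-sent` event,
so that a monitor such as `flower` can follow tasks from the moment they're
published:

.. code-block:: python

    worker_send_events = True
    task_send_sent_event = True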

.. setting:: event_queue_ttl

event_queue_ttl
~~~~~~~~~~~~~~~

:transports supported: ``amqp``

Message expiry time in seconds (int/float) for when messages sent to a
monitor client's event queue are deleted (``x-message-ttl``).

For example, if this value is set to 10 then a message delivered to this queue
will be deleted after 10 seconds.

Disabled by default.

.. setting:: event_queue_expires

event_queue_expires
~~~~~~~~~~~~~~~~~~~

:transports supported: ``amqp``

Expiry time in seconds (int/float) for when a monitor client's
event queue will be deleted (``x-expires``).

Default is never, relying instead on the queue auto-delete setting.
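
For example, to keep monitoring overhead bounded you could expire individual
event messages after 10 seconds and let an abandoned event queue be deleted
after an hour (both values are illustrative):

.. code-block:: python

    event_queue_ttl = 10.0        # seconds per message (x-message-ttl)
    event_queue_expires = 3600.0  # seconds of queue inactivity (x-expires)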

.. setting:: event_serializer

event_serializer
~~~~~~~~~~~~~~~~

Message serialization format used when sending event messages.

Default is ``json``. See :ref:`calling-serializers`.

.. _conf-logging:

Logging
-------

.. setting:: worker_hijack_root_logger

worker_hijack_root_logger
~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.2

By default any previously configured handlers on the root logger will be
removed. If you want to customize your own logging handlers, then you
can disable this behavior by setting
`worker_hijack_root_logger = False`.

.. note::

    Logging can also be customized by connecting to the
    :signal:`celery.signals.setup_logging` signal.
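
For example, to take over logging configuration entirely via the signal
mentioned in the note above (a minimal sketch; the ``dictConfig`` contents are
placeholders for your own configuration):

.. code-block:: python

    import logging.config

    from celery.signals import setup_logging

    @setup_logging.connect
    def configure_logging(**kwargs):
        # When this signal has a receiver, Celery won't configure the
        # loggers itself.
        logging.config.dictConfig({
            'version': 1,
            'handlers': {'console': {'class': 'logging.StreamHandler'}},
            'root': {'handlers': ['console'], 'level': 'INFO'},
        })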

.. setting:: worker_log_color

worker_log_color
~~~~~~~~~~~~~~~~

Enables/disables colors in logging output by the Celery apps.

By default colors are enabled if:

1) the app is logging to a real terminal, and not a file.
2) the app is not running on Windows.

.. setting:: worker_log_format

worker_log_format
~~~~~~~~~~~~~~~~~

The format to use for log messages.

Default is::

    [%(asctime)s: %(levelname)s/%(processName)s] %(message)s

See the Python :mod:`logging` module for more information about log
formats.

.. setting:: worker_task_log_format

worker_task_log_format
~~~~~~~~~~~~~~~~~~~~~~

The format to use for log messages logged in tasks.

Default is::

    [%(asctime)s: %(levelname)s/%(processName)s]
    [%(task_name)s(%(task_id)s)] %(message)s

See the Python :mod:`logging` module for more information about log
formats.
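
For reference, setting both formats explicitly to their documented defaults
looks like this:

.. code-block:: python

    worker_log_format = (
        '[%(asctime)s: %(levelname)s/%(processName)s] %(message)s'
    )
    worker_task_log_format = (
        '[%(asctime)s: %(levelname)s/%(processName)s]'
        '[%(task_name)s(%(task_id)s)] %(message)s'
    )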

.. setting:: worker_redirect_stdouts

worker_redirect_stdouts
~~~~~~~~~~~~~~~~~~~~~~~

If enabled `stdout` and `stderr` will be redirected
to the current logger.

Enabled by default.
Used by :program:`celery worker` and :program:`celery beat`.

.. setting:: worker_redirect_stdouts_level

worker_redirect_stdouts_level
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The log level that output written to `stdout` and `stderr` is logged as.
Can be one of :const:`DEBUG`, :const:`INFO`, :const:`WARNING`,
:const:`ERROR` or :const:`CRITICAL`.

Default is :const:`WARNING`.

.. _conf-security:

Security
--------

.. setting:: security_key

security_key
~~~~~~~~~~~~

.. versionadded:: 2.5

The relative or absolute path to a file containing the private key
used to sign messages when :ref:`message-signing` is used.

.. setting:: security_certificate

security_certificate
~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.5

The relative or absolute path to an X.509 certificate file
used to sign messages when :ref:`message-signing` is used.

.. setting:: security_cert_store

security_cert_store
~~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.5

The directory containing X.509 certificates used for
:ref:`message-signing`. Can be a glob with wild-cards
(for example :file:`/etc/certs/*.pem`).
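
A sketch combining the three settings above; the paths are placeholders, and
``app`` is assumed to be your :class:`~celery.Celery` instance (see
:ref:`message-signing` for the full procedure, including enabling the ``auth``
serializer):

.. code-block:: python

    app.conf.update(
        security_key='/etc/ssl/private/worker.key',
        security_certificate='/etc/ssl/certs/worker.pem',
        security_cert_store='/etc/ssl/certs/*.pem',
    )
    # Registers the security serializer so that messages are signed.
    app.setup_security()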

.. _conf-custom-components:

Custom Component Classes (advanced)
-----------------------------------

.. setting:: worker_pool

worker_pool
~~~~~~~~~~~

Name of the pool class used by the worker.

.. admonition:: Eventlet/Gevent

    Never use this option to select the eventlet or gevent pool.
    You must use the `-P` option instead, otherwise the monkey patching
    will happen too late and things will break in strange and silent ways.

Default is ``celery.concurrency.prefork:TaskPool``.

.. setting:: worker_pool_restarts

worker_pool_restarts
~~~~~~~~~~~~~~~~~~~~

If enabled the worker pool can be restarted using the
:control:`pool_restart` remote control command.

Disabled by default.
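
With the setting enabled, the pool can then be restarted from a client; a
sketch, assuming ``app`` is a configured :class:`~celery.Celery` instance:

.. code-block:: python

    # In the worker's configuration:
    worker_pool_restarts = True

    # Later, from a client or shell, ask all workers to restart their pools:
    app.control.broadcast('pool_restart', arguments={'reload': True})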

.. setting:: worker_autoscaler

worker_autoscaler
~~~~~~~~~~~~~~~~~

.. versionadded:: 2.2

Name of the autoscaler class to use.

Default is ``celery.worker.autoscale:Autoscaler``.

.. setting:: worker_autoreloader

worker_autoreloader
~~~~~~~~~~~~~~~~~~~

Name of the autoreloader class used by the worker to reload
Python modules and files that have changed.

Default is ``celery.worker.autoreload:Autoreloader``.

.. setting:: worker_consumer

worker_consumer
~~~~~~~~~~~~~~~

Name of the consumer class used by the worker.

Default is :class:`celery.worker.consumer.Consumer`.

.. setting:: worker_timer

worker_timer
~~~~~~~~~~~~

Name of the ETA scheduler class used by the worker.

Default is :class:`kombu.async.hub.timer.Timer`, or one overridden
by the pool implementation.

.. _conf-celerybeat:

Beat Settings (:program:`celery beat`)
--------------------------------------

.. setting:: beat_schedule

beat_schedule
~~~~~~~~~~~~~

The periodic task schedule used by :mod:`~celery.bin.beat`.
See :ref:`beat-entries`.
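
A short sketch of a schedule; ``tasks.add`` is a hypothetical task name used
only for illustration:

.. code-block:: python

    from celery.schedules import crontab

    beat_schedule = {
        # Run every 30 seconds with arguments (16, 16).
        'add-every-30-seconds': {
            'task': 'tasks.add',
            'schedule': 30.0,
            'args': (16, 16),
        },
        # Run every Monday morning at 7:30.
        'add-every-monday-morning': {
            'task': 'tasks.add',
            'schedule': crontab(hour=7, minute=30, day_of_week=1),
            'args': (16, 16),
        },
    }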

.. setting:: beat_scheduler

beat_scheduler
~~~~~~~~~~~~~~

The default scheduler class. Default is ``celery.beat:PersistentScheduler``.

Can also be set via the :option:`-S` argument to
:mod:`~celery.bin.beat`.

.. setting:: beat_schedule_filename

beat_schedule_filename
~~~~~~~~~~~~~~~~~~~~~~

Name of the file used by `PersistentScheduler` to store the last run times
of periodic tasks. Can be a relative or absolute path, but be aware that the
suffix `.db` may be appended to the file name (depending on Python version).

Can also be set via the :option:`--schedule` argument to
:mod:`~celery.bin.beat`.

.. setting:: beat_sync_every

beat_sync_every
~~~~~~~~~~~~~~~

The number of periodic tasks that can be called before another database sync
is issued.

Defaults to 0 (sync based on timing, with a default of 3 minutes as determined
by ``scheduler.sync_every``). If set to 1, beat will call sync after every
task message sent.
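
For example, to persist the last-run times after every dispatched task message
rather than on a timer:

.. code-block:: python

    beat_sync_every = 1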

.. setting:: beat_max_loop_interval

beat_max_loop_interval
~~~~~~~~~~~~~~~~~~~~~~

The maximum number of seconds :mod:`~celery.bin.beat` can sleep
between checking the schedule.

The default for this value is scheduler specific.
For the default Celery beat scheduler the value is 300 (5 minutes),
but for the django-celery database scheduler, for example, it's 5 seconds
because the schedule may be changed externally, and so it must take
changes to the schedule into account.

Also when running celery beat embedded (:option:`-B`) on Jython as a thread
the max interval is overridden and set to 1 so that it's possible
to shut down in a timely manner.