============================
 Configuration and defaults
============================

This document describes the configuration options available.

If you're using celery in a Django project these settings should be defined
in your project's ``settings.py`` file.

In a regular Python environment using the default loader you must create
the ``celeryconfig.py`` module and make sure it is available on the
Python path.

Example configuration file
==========================

This is an example configuration file to get you started;
it should contain all you need to run a basic celery set-up.

.. code-block:: python

    CELERY_BACKEND = "database"

    DATABASE_ENGINE = "sqlite3"
    DATABASE_NAME = "mydatabase.db"

    AMQP_HOST = "localhost"
    AMQP_PORT = 5672
    AMQP_VHOST = "/"
    AMQP_USER = "guest"
    AMQP_PASSWORD = "guest"

    ## If you're doing mostly I/O you can have higher concurrency,
    ## if you're mostly CPU-bound, try to keep it close to the
    ## number of CPUs on your machine.
    # CELERYD_CONCURRENCY = 8

    CELERYD_LOG_FILE = "celeryd.log"
    CELERYD_PID_FILE = "celeryd.pid"
    CELERYD_DAEMON_LOG_LEVEL = "INFO"

Concurrency settings
====================

* CELERYD_CONCURRENCY

    The number of concurrent worker processes, executing tasks simultaneously.

    Defaults to the number of CPUs in the system.
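
For example, to run ten worker processes regardless of the CPU count, you
could set the value explicitly (the number here is purely illustrative):

.. code-block:: python

    CELERYD_CONCURRENCY = 10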

Task result backend settings
============================

* CELERY_BACKEND

    The backend used to store task results (tombstones).
    Can be one of the following:

    * database (default)

        Use a relational database supported by the Django ORM.

    * cache

        Use memcached to store the results.

    * mongodb

        Use MongoDB to store the results.

    * tyrant

        Use Tokyo Tyrant to store the results.

* CELERY_PERIODIC_STATUS_BACKEND

    The backend used to store the status of periodic tasks.
    Can be one of the following:

    * database (default)

        Use a relational database supported by the Django ORM.

    * mongodb

        Use MongoDB.
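
Example configuration
---------------------

A minimal sketch of choosing backends; both settings shown here are set to
their default of ``"database"``, so you only need lines like these when
switching to one of the other backends listed above:

.. code-block:: python

    CELERY_BACKEND = "database"
    CELERY_PERIODIC_STATUS_BACKEND = "database"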

Database backend settings
=========================

This applies to both the result store backend and the periodic status
backend.

Please see the Django ORM database settings documentation:
http://docs.djangoproject.com/en/dev/ref/settings/#database-engine

If you use this backend make sure to initialize the database tables
after configuration. When using celery with a Django project this
means executing::

    $ python manage.py syncdb

When using celery in a regular Python environment you have to execute::

    $ celeryinit

Example configuration
---------------------

.. code-block:: python

    DATABASE_ENGINE = "mysql"
    DATABASE_USER = "myusername"
    DATABASE_PASSWORD = "mypassword"
    DATABASE_NAME = "mydatabase"
    DATABASE_HOST = "localhost"

Cache backend settings
======================

Please see the documentation for the Django cache framework settings:
http://docs.djangoproject.com/en/dev/topics/cache/#memcached

Example configuration
---------------------

Using a single memcached server:

.. code-block:: python

    CACHE_BACKEND = 'memcached://127.0.0.1:11211/'

Using multiple memcached servers:

.. code-block:: python

    CACHE_BACKEND = 'memcached://172.19.26.240:11211;172.19.26.242:11211/'

Tokyo Tyrant backend settings
=============================

**NOTE**: The Tokyo Tyrant backend requires the :mod:`pytyrant` library:
http://pypi.python.org/pypi/pytyrant/

This backend requires the following configuration variables to be set:

* TT_HOST

    Hostname of the Tokyo Tyrant server.

* TT_PORT

    The port the Tokyo Tyrant server is listening to.

Example configuration
---------------------

.. code-block:: python

    TT_HOST = "localhost"
    TT_PORT = 1978

MongoDB backend settings
========================

* CELERY_MONGODB_BACKEND_SETTINGS

    This is a dict supporting the following keys:

    * host

        Hostname of the MongoDB server.

    * port

        The port the MongoDB server is listening to.

    * user

        Username to authenticate to the MongoDB server as.

    * password

        Password to authenticate to the MongoDB server with.

    * database

        The database name to connect to.

    * taskmeta_collection

        The collection name to store task meta data in.

    * periodictaskmeta_collection

        The collection name to store periodic task meta data in.
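
Example configuration
---------------------

A sketch of what the settings dict could look like; the host, port,
credentials and collection names below are placeholders rather than
defaults, so adjust them to match your own MongoDB set-up:

.. code-block:: python

    CELERY_MONGODB_BACKEND_SETTINGS = {
        "host": "localhost",
        "port": 27017,
        "user": "celeryuser",
        "password": "secret",
        "database": "celery",
        "taskmeta_collection": "taskmeta",
        "periodictaskmeta_collection": "periodictaskmeta",
    }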

Broker settings
===============

* CELERY_AMQP_EXCHANGE

    Name of the AMQP exchange.

* CELERY_AMQP_EXCHANGE_TYPE

    The type of exchange. If the exchange type is ``direct``, all
    consumers receive all tasks. However, if the exchange type is ``topic``,
    you can route e.g. some tasks to one server, and others to the rest.

    See `Exchange types and the effect of bindings`_.

    .. _`Exchange types and the effect of bindings`:
        http://bit.ly/wpamqpexchanges

* CELERY_AMQP_PUBLISHER_ROUTING_KEY

    The default AMQP routing key used when publishing tasks.

* CELERY_AMQP_CONSUMER_ROUTING_KEY

    The AMQP routing key used when consuming tasks.

* CELERY_AMQP_CONSUMER_QUEUE

    The name of the AMQP queue.

* CELERY_AMQP_CONSUMER_QUEUES

    Dictionary defining multiple AMQP queues.

* CELERY_AMQP_CONNECTION_TIMEOUT

    The timeout in seconds before we give up establishing a connection
    to the AMQP server. Default is 4 seconds.

* CELERY_AMQP_CONNECTION_RETRY

    Automatically try to re-establish the connection to the AMQP broker if
    it's lost.

    The time between retries is increased for each retry, and is
    not exhausted before ``CELERY_AMQP_CONNECTION_MAX_RETRIES`` is exceeded.

    This behaviour is on by default.

* CELERY_AMQP_CONNECTION_MAX_RETRIES

    Maximum number of retries before we give up re-establishing a connection
    to the AMQP broker.

    If this is set to ``0`` or ``None``, we will retry forever.

    Default is 100 retries.
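
Example configuration
---------------------

A sketch of a simple single-queue set-up; the exchange, queue and routing
key names are made up for illustration, and the timeout/retry values shown
are just the documented defaults:

.. code-block:: python

    CELERY_AMQP_EXCHANGE = "celery"
    CELERY_AMQP_EXCHANGE_TYPE = "direct"
    CELERY_AMQP_PUBLISHER_ROUTING_KEY = "celery"
    CELERY_AMQP_CONSUMER_ROUTING_KEY = "celery"
    CELERY_AMQP_CONSUMER_QUEUE = "celery"

    CELERY_AMQP_CONNECTION_TIMEOUT = 4
    CELERY_AMQP_CONNECTION_RETRY = True
    CELERY_AMQP_CONNECTION_MAX_RETRIES = 100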

Task execution settings
=======================

* SEND_CELERY_TASK_ERROR_EMAILS

    If set to ``True``, errors in tasks will be sent to admins by e-mail.
    If not set, e-mails will only be sent if ``settings.DEBUG`` is ``False``.

* CELERY_ALWAYS_EAGER

    If this is ``True``, all tasks will be executed locally by blocking
    until the task is finished. ``apply_async`` and ``delay_task`` will return
    a :class:`celery.result.EagerResult` which emulates the behaviour of
    an :class:`celery.result.AsyncResult`.

    Tasks will never be sent to the queue, but executed locally
    instead.

* CELERY_TASK_RESULT_EXPIRES

    Time (in seconds, or a :class:`datetime.timedelta` object) after which
    stored task tombstones are deleted.

    **NOTE**: For the moment this only works for the database and MongoDB
    backends.

* CELERY_TASK_SERIALIZER

    A string identifying the default serialization method to use.
    Can be ``pickle`` (default), ``json``, ``yaml``, or any custom
    serialization methods that have been registered with
    :mod:`carrot.serialization.registry`.

    Default is ``pickle``.
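
Example configuration
---------------------

For instance, to execute tasks eagerly while developing and to expire task
tombstones after one day (the values are illustrative, not recommendations):

.. code-block:: python

    from datetime import timedelta

    SEND_CELERY_TASK_ERROR_EMAILS = True
    CELERY_ALWAYS_EAGER = True
    CELERY_TASK_RESULT_EXPIRES = timedelta(days=1)
    CELERY_TASK_SERIALIZER = "pickle"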

Logging settings
================

* CELERYD_LOG_FILE

    The default filename the worker daemon logs messages to.
    Can be overridden using the ``--logfile`` option to ``celeryd``.

    By default the worker logs to ``stderr`` when running in the foreground;
    when running in the background, detached as a daemon, the default
    logfile is ``celeryd.log``.

* CELERYD_DAEMON_LOG_LEVEL

    Worker log level, can be any of ``DEBUG``, ``INFO``, ``WARNING``,
    ``ERROR``, ``CRITICAL``, or ``FATAL``. Can be overridden using the
    ``--loglevel`` option to ``celeryd``.

    See the :mod:`logging` module for more information.

* CELERYD_DAEMON_LOG_FORMAT

    The format to use for log messages.

    Default is ``[%(asctime)s: %(levelname)s/%(processName)s] %(message)s``.

    See the Python :mod:`logging` module for more information about log
    formats.
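
For example, to run the worker at ``DEBUG`` level with a slightly simplified
log format (the format string is just a variation on the default shown above):

.. code-block:: python

    CELERYD_DAEMON_LOG_LEVEL = "DEBUG"
    CELERYD_DAEMON_LOG_FORMAT = "[%(asctime)s: %(levelname)s] %(message)s"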

Process settings
================

* CELERYD_PID_FILE

    Full path to the daemon pid file. Default is ``celeryd.pid``.
    Can be overridden using the ``--pidfile`` option to ``celeryd``.