.. _configuration:

============================
 Configuration and defaults
============================

This document describes the configuration options available.

If you're using the default loader, you must create the :file:`celeryconfig.py`
module and make sure it is available on the Python path.

.. contents::
    :local:
    :depth: 2

.. _conf-example:

Example configuration file
==========================

This is an example configuration file to get you started.
It should contain all you need to run a basic Celery set-up.

.. code-block:: python

    # List of modules to import when celery starts.
    CELERY_IMPORTS = ("myapp.tasks", )

    ## Result store settings.
    CELERY_RESULT_BACKEND = "database"
    CELERY_RESULT_DBURI = "sqlite:///mydatabase.db"

    ## Broker settings.
    BROKER_HOST = "localhost"
    BROKER_PORT = 5672
    BROKER_VHOST = "/"
    BROKER_USER = "guest"
    BROKER_PASSWORD = "guest"

    ## Worker settings
    ## If you're doing mostly I/O you can have more processes,
    ## but if mostly spending CPU, try to keep it close to the
    ## number of CPUs on your machine. If not set, the number of CPUs/cores
    ## available will be used.
    CELERYD_CONCURRENCY = 10
    # CELERYD_LOG_FILE = "celeryd.log"
    # CELERYD_LOG_LEVEL = "INFO"

Configuration Directives
========================

.. _conf-concurrency:

Concurrency settings
--------------------

.. setting:: CELERYD_CONCURRENCY

CELERYD_CONCURRENCY
~~~~~~~~~~~~~~~~~~~

The number of concurrent worker processes/threads/green threads executing
tasks.

Defaults to the number of available CPUs.

.. setting:: CELERYD_PREFETCH_MULTIPLIER

CELERYD_PREFETCH_MULTIPLIER
~~~~~~~~~~~~~~~~~~~~~~~~~~~

How many messages to prefetch at a time, multiplied by the number of
concurrent processes.  The default is 4 (four messages for each process).
The default setting is usually a good choice.  However, if you have very
long running tasks waiting in the queue when the workers are started, the
first worker to start will receive four times the number of messages
initially, so the tasks may not be fairly distributed among the workers.

.. _conf-result-backend:

Task result backend settings
----------------------------

.. setting:: CELERY_RESULT_BACKEND

CELERY_RESULT_BACKEND
~~~~~~~~~~~~~~~~~~~~~

The backend used to store task results (tombstones).
Can be one of the following:

* database (default)
    Use a relational database supported by `SQLAlchemy`_.
    See :ref:`conf-database-result-backend`.

* cache
    Use `memcached`_ to store the results.
    See :ref:`conf-cache-result-backend`.

* mongodb
    Use `MongoDB`_ to store the results.
    See :ref:`conf-mongodb-result-backend`.

* redis
    Use `Redis`_ to store the results.
    See :ref:`conf-redis-result-backend`.

* tyrant
    Use `Tokyo Tyrant`_ to store the results.
    See :ref:`conf-tyrant-result-backend`.

* amqp
    Send results back as AMQP messages.
    See :ref:`conf-amqp-result-backend`.

.. warning::

    While the AMQP result backend is very efficient, you must make sure
    you only receive the same result once.  See :doc:`userguide/executing`.

.. _`SQLAlchemy`: http://sqlalchemy.org
.. _`memcached`: http://memcached.org
.. _`MongoDB`: http://mongodb.org
.. _`Redis`: http://code.google.com/p/redis/
.. _`Tokyo Tyrant`: http://1978th.net/tokyotyrant/
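Whichever backend you choose, results are read back through the same client
API.  A minimal sketch (the ``myapp.tasks.add`` task is an assumption used
only for illustration):

.. code-block:: python

    # Send a task and read its result from the configured result backend.
    from myapp.tasks import add

    result = add.delay(2, 2)       # publish the task message to the broker
    print(result.get(timeout=10))  # poll the result backend for the tombstone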
.. _conf-database-result-backend:

Database backend settings
-------------------------

.. setting:: CELERY_RESULT_DBURI

CELERY_RESULT_DBURI
~~~~~~~~~~~~~~~~~~~

Please see `Supported Databases`_ for a table of supported databases.

To use this backend you need to configure it with a
`Connection String`_.  Some examples:

.. code-block:: python

    # sqlite (filename)
    CELERY_RESULT_DBURI = "sqlite:///celerydb.sqlite"

    # mysql
    CELERY_RESULT_DBURI = "mysql://scott:tiger@localhost/foo"

    # postgresql
    CELERY_RESULT_DBURI = "postgresql://scott:tiger@localhost/mydatabase"

    # oracle
    CELERY_RESULT_DBURI = "oracle://scott:tiger@127.0.0.1:1521/sidname"

See `Connection String`_ for more information about connection strings.

.. setting:: CELERY_RESULT_ENGINE_OPTIONS

CELERY_RESULT_ENGINE_OPTIONS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To specify additional SQLAlchemy database engine options you can use
the :setting:`CELERY_RESULT_ENGINE_OPTIONS` setting::

    # echo enables verbose logging from SQLAlchemy.
    CELERY_RESULT_ENGINE_OPTIONS = {"echo": True}

.. _`Supported Databases`:
    http://www.sqlalchemy.org/docs/core/engines.html#supported-databases

.. _`Connection String`:
    http://www.sqlalchemy.org/docs/core/engines.html#database-urls

Example configuration
~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

    CELERY_RESULT_BACKEND = "database"
    CELERY_RESULT_DBURI = "mysql://user:password@host/dbname"

.. _conf-amqp-result-backend:

AMQP backend settings
---------------------

.. setting:: CELERY_AMQP_TASK_RESULT_EXPIRES

CELERY_AMQP_TASK_RESULT_EXPIRES
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The time in seconds after which task result queues expire.

.. note::

    AMQP result expiration requires RabbitMQ version 2.1.0 or higher.

.. setting:: CELERY_AMQP_TASK_RESULT_CONNECTION_MAX

CELERY_AMQP_TASK_RESULT_CONNECTION_MAX
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Maximum number of connections used by the AMQP result backend simultaneously.

Default is 1 (a single connection per process).

.. setting:: CELERY_RESULT_EXCHANGE

CELERY_RESULT_EXCHANGE
~~~~~~~~~~~~~~~~~~~~~~

Name of the exchange to publish results in.  Default is `"celeryresults"`.

.. setting:: CELERY_RESULT_EXCHANGE_TYPE

CELERY_RESULT_EXCHANGE_TYPE
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The exchange type of the result exchange.  Default is to use a `direct`
exchange.

.. setting:: CELERY_RESULT_SERIALIZER

CELERY_RESULT_SERIALIZER
~~~~~~~~~~~~~~~~~~~~~~~~

Result message serialization format.  Default is `"pickle"`.  See
:ref:`executing-serializers`.

.. setting:: CELERY_RESULT_PERSISTENT

CELERY_RESULT_PERSISTENT
~~~~~~~~~~~~~~~~~~~~~~~~

If set to :const:`True`, result messages will be persistent.  This means the
messages will not be lost after a broker restart.  The default is for the
results to be transient.

Example configuration
~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

    CELERY_RESULT_BACKEND = "amqp"
    CELERY_AMQP_TASK_RESULT_EXPIRES = 18000  # 5 hours.
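If results must survive a broker restart, the settings above can be combined
like this (a sketch; the values are illustrative, not recommended defaults):

.. code-block:: python

    CELERY_RESULT_BACKEND = "amqp"
    CELERY_RESULT_PERSISTENT = True            # keep result messages across broker restarts
    CELERY_RESULT_SERIALIZER = "json"          # default is "pickle"
    CELERY_AMQP_TASK_RESULT_EXPIRES = 60 * 60  # expire result queues after one hour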
.. _conf-cache-result-backend:

Cache backend settings
----------------------

.. note::

    The cache backend supports the `pylibmc`_ and `python-memcached`
    libraries.  The latter is used only if `pylibmc`_ is not installed.

.. setting:: CELERY_CACHE_BACKEND

CELERY_CACHE_BACKEND
~~~~~~~~~~~~~~~~~~~~

Using a single memcached server:

.. code-block:: python

    CELERY_CACHE_BACKEND = 'memcached://127.0.0.1:11211/'

Using multiple memcached servers:

.. code-block:: python

    CELERY_RESULT_BACKEND = "cache"
    CELERY_CACHE_BACKEND = 'memcached://172.19.26.240:11211;172.19.26.242:11211/'

The "dummy" backend stores the cache in memory only::

    CELERY_CACHE_BACKEND = "dummy"

.. setting:: CELERY_CACHE_BACKEND_OPTIONS

CELERY_CACHE_BACKEND_OPTIONS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can set pylibmc options using the :setting:`CELERY_CACHE_BACKEND_OPTIONS`
setting:

.. code-block:: python

    CELERY_CACHE_BACKEND_OPTIONS = {"binary": True,
                                    "behaviors": {"tcp_nodelay": True}}

.. _`pylibmc`: http://sendapatch.se/projects/pylibmc/

.. _conf-tyrant-result-backend:

Tokyo Tyrant backend settings
-----------------------------

.. note::

    The Tokyo Tyrant backend requires the :mod:`pytyrant` library:
    http://pypi.python.org/pypi/pytyrant/

This backend requires the following configuration directives to be set:

.. setting:: TT_HOST

TT_HOST
~~~~~~~

Host name of the Tokyo Tyrant server.

.. setting:: TT_PORT

TT_PORT
~~~~~~~

The port the Tokyo Tyrant server is listening to.

Example configuration
~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

    CELERY_RESULT_BACKEND = "tyrant"
    TT_HOST = "localhost"
    TT_PORT = 1978

.. _conf-redis-result-backend:

Redis backend settings
----------------------

.. note::

    The Redis backend requires the :mod:`redis` library:
    http://pypi.python.org/pypi/redis/0.5.5

    To install the redis package use `pip` or `easy_install`::

        $ pip install redis

This backend requires the following configuration directives to be set.

.. setting:: REDIS_HOST

REDIS_HOST
~~~~~~~~~~

Host name of the Redis database server, e.g. `"localhost"`.

.. setting:: REDIS_PORT

REDIS_PORT
~~~~~~~~~~

Port of the Redis database server, e.g. `6379`.

.. setting:: REDIS_DB

REDIS_DB
~~~~~~~~

Database number to use.  Default is 0.

.. setting:: REDIS_PASSWORD

REDIS_PASSWORD
~~~~~~~~~~~~~~

Password used to connect to the database.

Example configuration
~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

    CELERY_RESULT_BACKEND = "redis"
    REDIS_HOST = "localhost"
    REDIS_PORT = 6379
    REDIS_DB = 0
    REDIS_CONNECT_RETRY = True

.. _conf-mongodb-result-backend:

MongoDB backend settings
------------------------

.. note::

    The MongoDB backend requires the :mod:`pymongo` library:
    http://github.com/mongodb/mongo-python-driver/tree/master

.. setting:: CELERY_MONGODB_BACKEND_SETTINGS

CELERY_MONGODB_BACKEND_SETTINGS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is a dict supporting the following keys:

* host
    Host name of the MongoDB server.  Defaults to "localhost".

* port
    The port the MongoDB server is listening to.  Defaults to 27017.

* user
    User name to authenticate to the MongoDB server as (optional).

* password
    Password to authenticate to the MongoDB server (optional).

* database
    The database name to connect to.  Defaults to "celery".

* taskmeta_collection
    The collection name to store task meta data.
    Defaults to "celery_taskmeta".

.. _example-mongodb-result-config:

Example configuration
~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

    CELERY_RESULT_BACKEND = "mongodb"
    CELERY_MONGODB_BACKEND_SETTINGS = {
        "host": "192.168.1.100",
        "port": 30000,
        "database": "mydb",
        "taskmeta_collection": "my_taskmeta_collection",
    }
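If the MongoDB server requires authentication, the optional ``user`` and
``password`` keys described above can be added as well (a sketch; the
credentials are placeholders):

.. code-block:: python

    CELERY_RESULT_BACKEND = "mongodb"
    CELERY_MONGODB_BACKEND_SETTINGS = {
        "host": "192.168.1.100",
        "port": 30000,
        "user": "celeryuser",      # optional
        "password": "s3cr3t",      # optional
        "database": "mydb",
        "taskmeta_collection": "my_taskmeta_collection",
    }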
.. _conf-messaging:

Message Routing
---------------

.. _conf-messaging-routing:

.. setting:: CELERY_QUEUES

CELERY_QUEUES
~~~~~~~~~~~~~

The mapping of queues the worker consumes from.  This is a dictionary
of queue name/options.  See :ref:`guide-routing` for more information.

The default is a queue/exchange/binding key of `"celery"`, with
exchange type `direct`.

You don't have to care about this unless you want custom routing facilities.

.. setting:: CELERY_ROUTES

CELERY_ROUTES
~~~~~~~~~~~~~

A list of routers, or a single router used to route tasks to queues.
When deciding the final destination of a task the routers are consulted
in order.  See :ref:`routers` for more information.

.. setting:: CELERY_CREATE_MISSING_QUEUES

CELERY_CREATE_MISSING_QUEUES
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If enabled (default), any queues specified that are not defined in
:setting:`CELERY_QUEUES` will be automatically created.  See
:ref:`routing-automatic`.

.. setting:: CELERY_DEFAULT_QUEUE

CELERY_DEFAULT_QUEUE
~~~~~~~~~~~~~~~~~~~~

The queue used by default, if no custom queue is specified.  This queue must
be listed in :setting:`CELERY_QUEUES`.  The default is: `celery`.

.. seealso::

    :ref:`routing-changing-default-queue`

.. setting:: CELERY_DEFAULT_EXCHANGE

CELERY_DEFAULT_EXCHANGE
~~~~~~~~~~~~~~~~~~~~~~~

Name of the default exchange to use when no custom exchange is
specified.  The default is: `celery`.

.. setting:: CELERY_DEFAULT_EXCHANGE_TYPE

CELERY_DEFAULT_EXCHANGE_TYPE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Default exchange type used when no custom exchange is specified.
The default is: `direct`.

.. setting:: CELERY_DEFAULT_ROUTING_KEY

CELERY_DEFAULT_ROUTING_KEY
~~~~~~~~~~~~~~~~~~~~~~~~~~

The default routing key used when sending tasks.
The default is: `celery`.

.. setting:: CELERY_DEFAULT_DELIVERY_MODE

CELERY_DEFAULT_DELIVERY_MODE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Can be `transient` or `persistent`.  The default is to send
persistent messages.
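Putting these settings together, a minimal sketch (the queue names and the
``myapp.tasks.generate_thumbnail`` task are assumptions used only for
illustration; see :ref:`guide-routing` for the full syntax):

.. code-block:: python

    CELERY_QUEUES = {
        "default": {"exchange": "default", "binding_key": "default"},
        "images": {"exchange": "images", "binding_key": "images"},
    }
    CELERY_DEFAULT_QUEUE = "default"
    CELERY_DEFAULT_EXCHANGE_TYPE = "direct"
    CELERY_DEFAULT_ROUTING_KEY = "default"

    # Route one task to the "images" queue; everything else uses the default.
    CELERY_ROUTES = {"myapp.tasks.generate_thumbnail": {"queue": "images"}}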
.. _conf-broker-connection:

Broker Settings
---------------

.. setting:: BROKER_BACKEND

BROKER_BACKEND
~~~~~~~~~~~~~~

The Kombu transport to use.  Default is ``amqplib``.

You can use a custom transport class name, or select one of the
built-in transports: ``amqplib``, ``pika``, ``redis``, ``beanstalk``,
``sqlalchemy``, ``django``, ``mongodb``, ``couchdb``.

.. setting:: BROKER_HOST

BROKER_HOST
~~~~~~~~~~~

Hostname of the broker.

.. setting:: BROKER_PORT

BROKER_PORT
~~~~~~~~~~~

Custom port of the broker.  Default is to use the default port for the
selected backend.

.. setting:: BROKER_USER

BROKER_USER
~~~~~~~~~~~

Username to connect as.

.. setting:: BROKER_PASSWORD

BROKER_PASSWORD
~~~~~~~~~~~~~~~

Password to connect with.

.. setting:: BROKER_VHOST

BROKER_VHOST
~~~~~~~~~~~~

Virtual host.  Default is `"/"`.

.. setting:: BROKER_USE_SSL

BROKER_USE_SSL
~~~~~~~~~~~~~~

Use SSL to connect to the broker.  Off by default.  This may not be supported
by all transports.

.. setting:: BROKER_CONNECTION_TIMEOUT

BROKER_CONNECTION_TIMEOUT
~~~~~~~~~~~~~~~~~~~~~~~~~

The default timeout in seconds before we give up establishing a connection
to the AMQP server.  Default is 4 seconds.

.. setting:: BROKER_CONNECTION_RETRY

BROKER_CONNECTION_RETRY
~~~~~~~~~~~~~~~~~~~~~~~

Automatically try to re-establish the connection to the AMQP broker if lost.

The time between retries is increased for each retry, and retrying is only
given up when :setting:`BROKER_CONNECTION_MAX_RETRIES` is exceeded.

This behavior is on by default.

.. setting:: BROKER_CONNECTION_MAX_RETRIES

BROKER_CONNECTION_MAX_RETRIES
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Maximum number of retries before we give up re-establishing a connection
to the AMQP broker.

If this is set to :const:`0` or :const:`None`, we will retry forever.

Default is 100 retries.

.. _conf-task-execution:

Task execution settings
-----------------------

.. setting:: CELERY_ALWAYS_EAGER

CELERY_ALWAYS_EAGER
~~~~~~~~~~~~~~~~~~~

If this is :const:`True`, all tasks will be executed locally by blocking until
the task returns.  ``apply_async()`` and ``Task.delay()`` will return
an :class:`~celery.result.EagerResult` instance, which emulates the API
and behavior of :class:`~celery.result.AsyncResult`, except the result
is already evaluated.

That is, tasks will be executed locally instead of being sent to
the queue.

.. setting:: CELERY_EAGER_PROPAGATES_EXCEPTIONS

CELERY_EAGER_PROPAGATES_EXCEPTIONS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If this is :const:`True`, eagerly executed tasks (applied by `task.apply()`,
or when the :setting:`CELERY_ALWAYS_EAGER` setting is enabled) will
propagate exceptions.

It's the same as always running ``apply()`` with ``throw=True``.

.. setting:: CELERY_IGNORE_RESULT

CELERY_IGNORE_RESULT
~~~~~~~~~~~~~~~~~~~~

Whether to store the task return values or not (tombstones).
If you still want to store errors, just not successful return values,
you can set :setting:`CELERY_STORE_ERRORS_EVEN_IF_IGNORED`.

.. setting:: CELERY_MESSAGE_COMPRESSION

CELERY_MESSAGE_COMPRESSION
~~~~~~~~~~~~~~~~~~~~~~~~~~

Default compression used for task messages.
Can be ``"gzip"``, ``"bzip2"`` (if available), or any custom
compression scheme registered in the Kombu compression registry.

The default is to send uncompressed messages.

.. setting:: CELERY_TASK_RESULT_EXPIRES

CELERY_TASK_RESULT_EXPIRES
~~~~~~~~~~~~~~~~~~~~~~~~~~

Time (in seconds, or a :class:`~datetime.timedelta` object) after which
stored task tombstones will be deleted.

A built-in periodic task will delete the results after this time
(:class:`celery.task.backend_cleanup`).

.. note::

    For the moment this only works with the database, cache, redis and MongoDB
    backends.  For the AMQP backend see
    :setting:`CELERY_AMQP_TASK_RESULT_EXPIRES`.

    When using the database or MongoDB backends, `celerybeat` must be
    running for the results to be expired.
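For example, both of the following expire stored results after one hour
(a sketch):

.. code-block:: python

    from datetime import timedelta

    CELERY_TASK_RESULT_EXPIRES = 3600                # seconds
    CELERY_TASK_RESULT_EXPIRES = timedelta(hours=1)  # equivalent timedelta form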
.. setting:: CELERY_MAX_CACHED_RESULTS

CELERY_MAX_CACHED_RESULTS
~~~~~~~~~~~~~~~~~~~~~~~~~

Result backends cache ready results used by the client.

This is the total number of results to cache before older results are evicted.
The default is 5000.

.. setting:: CELERY_TRACK_STARTED

CELERY_TRACK_STARTED
~~~~~~~~~~~~~~~~~~~~

If :const:`True` the task will report its status as "started" when the
task is executed by a worker.  The default value is :const:`False` as
the normal behaviour is to not report that level of granularity.  Tasks
are either pending, finished, or waiting to be retried.  Having a "started"
state can be useful for when there are long running tasks and there is a
need to report which task is currently running.

.. setting:: CELERY_TASK_SERIALIZER

CELERY_TASK_SERIALIZER
~~~~~~~~~~~~~~~~~~~~~~

A string identifying the default serialization method to use.  Can be
`pickle` (default), `json`, `yaml`, `msgpack` or any custom serialization
method that has been registered with :mod:`kombu.serialization.registry`.

.. seealso::

    :ref:`executing-serializers`.

.. setting:: CELERY_TASK_PUBLISH_RETRY

CELERY_TASK_PUBLISH_RETRY
~~~~~~~~~~~~~~~~~~~~~~~~~

Decides if publishing task messages will be retried in the case
of connection loss or other connection errors.
See also :setting:`CELERY_TASK_PUBLISH_RETRY_POLICY`.

Disabled by default.

.. setting:: CELERY_TASK_PUBLISH_RETRY_POLICY

CELERY_TASK_PUBLISH_RETRY_POLICY
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Defines the default policy when retrying publishing a task message in
the case of connection loss or other connection errors.

This is a mapping that must contain the following keys:

    * `max_retries`

        Maximum number of retries before giving up, in this case the
        exception that caused the retry to fail will be raised.

        A value of 0 or :const:`None` means it will retry forever.

        The default is to retry 3 times.

    * `interval_start`

        Defines the number of seconds (float or integer) to wait between
        retries.  Default is 0, which means the first retry will be
        instantaneous.

    * `interval_step`

        On each consecutive retry this number will be added to the retry
        delay (float or integer).  Default is 0.2.

    * `interval_max`

        Maximum number of seconds (float or integer) to wait between
        retries.  Default is 0.2.

With the default policy of::

    {"max_retries": 3,
     "interval_start": 0,
     "interval_step": 0.2,
     "interval_max": 0.2}

the maximum time spent retrying will be 0.4 seconds.  It is set relatively
short by default because a connection failure could lead to a retry pile effect
if the broker connection is down: e.g. many web server processes waiting
to retry, blocking other incoming requests.

.. setting:: CELERY_DEFAULT_RATE_LIMIT

CELERY_DEFAULT_RATE_LIMIT
~~~~~~~~~~~~~~~~~~~~~~~~~

The global default rate limit for tasks.

This value is used for tasks that do not have a custom rate limit.

The default is no rate limit.

.. setting:: CELERY_DISABLE_RATE_LIMITS

CELERY_DISABLE_RATE_LIMITS
~~~~~~~~~~~~~~~~~~~~~~~~~~

Disable all rate limits, even if tasks have explicit rate limits set.

.. setting:: CELERY_ACKS_LATE

CELERY_ACKS_LATE
~~~~~~~~~~~~~~~~

Late ack means the task messages will be acknowledged **after** the task
has been executed, not *just before*, which is the default behavior.

.. seealso::

    FAQ: :ref:`faq-acks_late-vs-retry`.
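When running a test suite it is common to combine two of the settings above
so tasks run inline and failures surface immediately.  A minimal sketch
(whether this suits your tests is an assumption):

.. code-block:: python

    CELERY_ALWAYS_EAGER = True                  # execute tasks locally, blocking
    CELERY_EAGER_PROPAGATES_EXCEPTIONS = True   # re-raise task errors in the caller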
.. _conf-celeryd:

Worker: celeryd
---------------

.. setting:: CELERY_IMPORTS

CELERY_IMPORTS
~~~~~~~~~~~~~~

A sequence of modules to import when the celery daemon starts.

This is used to specify the task modules to import, but also
to import signal handlers and additional remote control commands, etc.

.. setting:: CELERYD_MAX_TASKS_PER_CHILD

CELERYD_MAX_TASKS_PER_CHILD
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Maximum number of tasks a pool worker process can execute before
it's replaced with a new one.  Default is no limit.

.. setting:: CELERYD_TASK_TIME_LIMIT

CELERYD_TASK_TIME_LIMIT
~~~~~~~~~~~~~~~~~~~~~~~

Task hard time limit in seconds.  The worker processing the task will
be killed and replaced with a new one when this is exceeded.

.. setting:: CELERYD_TASK_SOFT_TIME_LIMIT

CELERYD_TASK_SOFT_TIME_LIMIT
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Task soft time limit in seconds.

The :exc:`~celery.exceptions.SoftTimeLimitExceeded` exception will be
raised when this is exceeded.  The task can catch this to
e.g. clean up before the hard time limit comes.

Example:

.. code-block:: python

    from celery.task import task
    from celery.exceptions import SoftTimeLimitExceeded

    @task()
    def mytask():
        try:
            return do_work()
        except SoftTimeLimitExceeded:
            cleanup_in_a_hurry()
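A sketch combining the worker limits above (the values are illustrative,
not recommendations):

.. code-block:: python

    CELERYD_MAX_TASKS_PER_CHILD = 100   # recycle each pool process after 100 tasks
    CELERYD_TASK_TIME_LIMIT = 120       # hard limit: kill the process after 2 minutes
    CELERYD_TASK_SOFT_TIME_LIMIT = 60   # raise SoftTimeLimitExceeded after 1 minute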
.. setting:: CELERY_STORE_ERRORS_EVEN_IF_IGNORED

CELERY_STORE_ERRORS_EVEN_IF_IGNORED
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If set, the worker stores all task errors in the result store even if
:attr:`Task.ignore_result <celery.task.base.Task.ignore_result>` is on.

.. setting:: CELERYD_STATE_DB

CELERYD_STATE_DB
~~~~~~~~~~~~~~~~

Name of the file used to store persistent worker state (like revoked tasks).
Can be a relative or absolute path, but be aware that the suffix `.db`
may be appended to the file name (depending on Python version).

Can also be set via the :option:`--statedb` argument to
:mod:`~celery.bin.celeryd`.

Not enabled by default.

.. setting:: CELERYD_ETA_SCHEDULER_PRECISION

CELERYD_ETA_SCHEDULER_PRECISION
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set the maximum time in seconds that the ETA scheduler can sleep between
rechecking the schedule.  Default is 1 second.

Setting this value to 1 second means the scheduler's precision will
be 1 second.  If you need near millisecond precision you can set this to 0.1.

.. _conf-error-mails:

Error E-Mails
-------------

.. setting:: CELERY_SEND_TASK_ERROR_EMAILS

CELERY_SEND_TASK_ERROR_EMAILS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The default value for the `Task.send_error_emails` attribute, which if
set to :const:`True` means errors occurring during task execution will be
sent to :setting:`ADMINS` by e-mail.

.. setting:: CELERY_TASK_ERROR_WHITELIST

CELERY_TASK_ERROR_WHITELIST
~~~~~~~~~~~~~~~~~~~~~~~~~~~

A white list of exceptions to send error e-mails for.

.. setting:: ADMINS

ADMINS
~~~~~~

List of `(name, email_address)` tuples for the administrators that should
receive error e-mails.

.. setting:: SERVER_EMAIL

SERVER_EMAIL
~~~~~~~~~~~~

The e-mail address this worker sends e-mails from.
Default is `"celery@localhost"`.

.. setting:: MAIL_HOST

MAIL_HOST
~~~~~~~~~

The mail server to use.  Default is `"localhost"`.

.. setting:: MAIL_HOST_USER

MAIL_HOST_USER
~~~~~~~~~~~~~~

User name (if required) to log on to the mail server with.

.. setting:: MAIL_HOST_PASSWORD

MAIL_HOST_PASSWORD
~~~~~~~~~~~~~~~~~~

Password (if required) to log on to the mail server with.

.. setting:: MAIL_PORT

MAIL_PORT
~~~~~~~~~

The port the mail server is listening on.  Default is `25`.

.. _conf-example-error-mail-config:

Example E-Mail configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This configuration enables the sending of error e-mails to
george@vandelay.com and kosmo@vandelay.com:

.. code-block:: python

    # Enables error e-mails.
    CELERY_SEND_TASK_ERROR_EMAILS = True

    # Name and e-mail addresses of recipients
    ADMINS = (
        ("George Costanza", "george@vandelay.com"),
        ("Cosmo Kramer", "kosmo@vandelay.com"),
    )

    # E-mail address used as sender (From field).
    SERVER_EMAIL = "no-reply@vandelay.com"

    # Mailserver configuration
    EMAIL_HOST = "mail.vandelay.com"
    EMAIL_PORT = 25
    # EMAIL_HOST_USER = "servers"
    # EMAIL_HOST_PASSWORD = "s3cr3t"

.. _conf-events:

Events
------

.. setting:: CELERY_SEND_EVENTS

CELERY_SEND_EVENTS
~~~~~~~~~~~~~~~~~~

Send events so the worker can be monitored by tools like `celerymon`.

.. setting:: CELERY_SEND_TASK_SENT_EVENT

CELERY_SEND_TASK_SENT_EVENT
~~~~~~~~~~~~~~~~~~~~~~~~~~~

If enabled, a `task-sent` event will be sent for every task so tasks can be
tracked before they are consumed by a worker.

Disabled by default.

.. setting:: CELERY_EVENT_QUEUE

CELERY_EVENT_QUEUE
~~~~~~~~~~~~~~~~~~

Name of the queue to consume event messages from.  Default is
`"celeryevent"`.

.. setting:: CELERY_EVENT_EXCHANGE

CELERY_EVENT_EXCHANGE
~~~~~~~~~~~~~~~~~~~~~

Name of the exchange to send event messages to.  Default is `"celeryevent"`.

.. setting:: CELERY_EVENT_EXCHANGE_TYPE

CELERY_EVENT_EXCHANGE_TYPE
~~~~~~~~~~~~~~~~~~~~~~~~~~

The exchange type of the event exchange.  Default is to use a `"direct"`
exchange.

.. setting:: CELERY_EVENT_ROUTING_KEY

CELERY_EVENT_ROUTING_KEY
~~~~~~~~~~~~~~~~~~~~~~~~

Routing key used when sending event messages.  Default is `"celeryevent"`.

.. setting:: CELERY_EVENT_SERIALIZER

CELERY_EVENT_SERIALIZER
~~~~~~~~~~~~~~~~~~~~~~~

Message serialization format used when sending event messages.
Default is `"json"`.  See :ref:`executing-serializers`.

.. _conf-broadcast:

Broadcast Commands
------------------

.. setting:: CELERY_BROADCAST_QUEUE

CELERY_BROADCAST_QUEUE
~~~~~~~~~~~~~~~~~~~~~~

Name prefix for the queue used when listening for broadcast messages.
The worker's host name will be appended to the prefix to create the final
queue name.

Default is `"celeryctl"`.

.. setting:: CELERY_BROADCAST_EXCHANGE

CELERY_BROADCAST_EXCHANGE
~~~~~~~~~~~~~~~~~~~~~~~~~

Name of the exchange used for broadcast messages.

Default is `"celeryctl"`.

.. setting:: CELERY_BROADCAST_EXCHANGE_TYPE

CELERY_BROADCAST_EXCHANGE_TYPE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Exchange type used for broadcast messages.  Default is `"fanout"`.

.. _conf-logging:

Logging
-------

.. setting:: CELERYD_HIJACK_ROOT_LOGGER

CELERYD_HIJACK_ROOT_LOGGER
~~~~~~~~~~~~~~~~~~~~~~~~~~

By default any previously configured logging options will be reset,
because the Celery app "hijacks" the root logger.

If you want to customize your own logging configuration, you can disable
this behavior.

.. note::

    Logging can also be customized by connecting to the
    :signal:`celery.signals.setup_logging` signal.
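A minimal sketch of that approach (the :file:`logging.conf` file name is an
assumption):

.. code-block:: python

    import logging.config

    from celery.signals import setup_logging

    def configure_logging(**kwargs):
        # Configure logging yourself instead of letting Celery set it up.
        logging.config.fileConfig("logging.conf")

    setup_logging.connect(configure_logging)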
.. setting:: CELERYD_LOG_FILE

CELERYD_LOG_FILE
~~~~~~~~~~~~~~~~

The default file name the worker daemon logs messages to.  Can be overridden
using the :option:`--logfile` option to :mod:`~celery.bin.celeryd`.

The default is :const:`None` (`stderr`).

.. setting:: CELERYD_LOG_LEVEL

CELERYD_LOG_LEVEL
~~~~~~~~~~~~~~~~~

Worker log level, can be one of :const:`DEBUG`, :const:`INFO`,
:const:`WARNING`, :const:`ERROR` or :const:`CRITICAL`.

Can also be set via the :option:`--loglevel` argument to
:mod:`~celery.bin.celeryd`.

See the :mod:`logging` module for more information.

.. setting:: CELERYD_LOG_COLOR

CELERYD_LOG_COLOR
~~~~~~~~~~~~~~~~~

Enables/disables colors in logging output by the Celery app.

By default colors are enabled if

    1) the app is logging to a real terminal, and not a file.
    2) the app is not running on Windows.

.. setting:: CELERYD_LOG_FORMAT

CELERYD_LOG_FORMAT
~~~~~~~~~~~~~~~~~~

The format to use for log messages.

Default is `[%(asctime)s: %(levelname)s/%(processName)s] %(message)s`.

See the Python :mod:`logging` module for more information about log
formats.

.. setting:: CELERYD_TASK_LOG_FORMAT

CELERYD_TASK_LOG_FORMAT
~~~~~~~~~~~~~~~~~~~~~~~

The format to use for log messages logged in tasks.

Default is::

    [%(asctime)s: %(levelname)s/%(processName)s]
        [%(task_name)s(%(task_id)s)] %(message)s

See the Python :mod:`logging` module for more information about log
formats.

.. setting:: CELERY_REDIRECT_STDOUTS

CELERY_REDIRECT_STDOUTS
~~~~~~~~~~~~~~~~~~~~~~~

If enabled, `stdout` and `stderr` will be redirected
to the current logger.

Enabled by default.
Used by :program:`celeryd` and :program:`celerybeat`.

.. setting:: CELERY_REDIRECT_STDOUTS_LEVEL

CELERY_REDIRECT_STDOUTS_LEVEL
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The log level that output to `stdout` and `stderr` is logged as.
Can be one of :const:`DEBUG`, :const:`INFO`, :const:`WARNING`,
:const:`ERROR` or :const:`CRITICAL`.

Default is :const:`WARNING`.

.. _conf-custom-components:

Custom Component Classes (advanced)
-----------------------------------

.. setting:: CELERYD_POOL

CELERYD_POOL
~~~~~~~~~~~~

Name of the pool class used by the worker.

You can use a custom pool class name, or select one of
the built-in aliases: ``processes``, ``eventlet``, ``gevent``.

Default is ``processes``.

.. setting:: CELERYD_AUTOSCALER

CELERYD_AUTOSCALER
~~~~~~~~~~~~~~~~~~

Name of the autoscaler class to use.

Default is ``"celery.worker.autoscale.Autoscaler"``.

.. setting:: CELERYD_CONSUMER

CELERYD_CONSUMER
~~~~~~~~~~~~~~~~

Name of the consumer class used by the worker.
Default is :class:`celery.worker.consumer.Consumer`.

.. setting:: CELERYD_MEDIATOR

CELERYD_MEDIATOR
~~~~~~~~~~~~~~~~

Name of the mediator class used by the worker.
Default is :class:`celery.worker.controllers.Mediator`.

.. setting:: CELERYD_ETA_SCHEDULER

CELERYD_ETA_SCHEDULER
~~~~~~~~~~~~~~~~~~~~~

Name of the ETA scheduler class used by the worker.
Default is :class:`celery.utils.timer2.Timer`, or one overridden
by the pool implementation.

.. _conf-celerybeat:

Periodic Task Server: celerybeat
--------------------------------

.. setting:: CELERYBEAT_SCHEDULE

CELERYBEAT_SCHEDULE
~~~~~~~~~~~~~~~~~~~

The periodic task schedule used by :mod:`~celery.bin.celerybeat`.
See :ref:`beat-entries`.
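For example, a single schedule entry might look like this (a sketch; the
task name is hypothetical, see :ref:`beat-entries` for the full syntax):

.. code-block:: python

    from datetime import timedelta

    CELERYBEAT_SCHEDULE = {
        "add-every-30-seconds": {
            "task": "myapp.tasks.add",
            "schedule": timedelta(seconds=30),
            "args": (16, 16),
        },
    }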
.. setting:: CELERYBEAT_SCHEDULER

CELERYBEAT_SCHEDULER
~~~~~~~~~~~~~~~~~~~~

The default scheduler class.  Default is
`"celery.beat.PersistentScheduler"`.

Can also be set via the :option:`-S` argument to
:mod:`~celery.bin.celerybeat`.

.. setting:: CELERYBEAT_SCHEDULE_FILENAME

CELERYBEAT_SCHEDULE_FILENAME
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Name of the file used by `PersistentScheduler` to store the last run times
of periodic tasks.  Can be a relative or absolute path, but be aware that the
suffix `.db` may be appended to the file name (depending on Python version).

Can also be set via the :option:`--schedule` argument to
:mod:`~celery.bin.celerybeat`.

.. setting:: CELERYBEAT_MAX_LOOP_INTERVAL

CELERYBEAT_MAX_LOOP_INTERVAL
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The maximum number of seconds :mod:`~celery.bin.celerybeat` can sleep
between checking the schedule.  Default is 300 seconds (5 minutes).

.. setting:: CELERYBEAT_LOG_FILE

CELERYBEAT_LOG_FILE
~~~~~~~~~~~~~~~~~~~

The default file name to log messages to.  Can be overridden using
the :option:`--logfile` option to :mod:`~celery.bin.celerybeat`.

The default is :const:`None` (`stderr`).

.. setting:: CELERYBEAT_LOG_LEVEL

CELERYBEAT_LOG_LEVEL
~~~~~~~~~~~~~~~~~~~~

Logging level.  Can be any of :const:`DEBUG`, :const:`INFO`,
:const:`WARNING`, :const:`ERROR`, or :const:`CRITICAL`.

Can also be set via the :option:`--loglevel` argument to
:mod:`~celery.bin.celerybeat`.

See the :mod:`logging` module for more information.

.. _conf-celerymon:

Monitor Server: celerymon
-------------------------

.. setting:: CELERYMON_LOG_FILE

CELERYMON_LOG_FILE
~~~~~~~~~~~~~~~~~~

The default file name to log messages to.  Can be overridden using
the :option:`--logfile` argument to `celerymon`.

The default is :const:`None` (`stderr`).

.. setting:: CELERYMON_LOG_LEVEL

CELERYMON_LOG_LEVEL
~~~~~~~~~~~~~~~~~~~

Logging level.  Can be any of :const:`DEBUG`, :const:`INFO`,
:const:`WARNING`, :const:`ERROR`, or :const:`CRITICAL`.

See the :mod:`logging` module for more information.