.. _configuration:

============================
 Configuration and defaults
============================

This document describes the configuration options available.

If you're using the default loader, you must create the :file:`celeryconfig.py`
module and make sure it is available on the Python path.

.. contents::
    :local:
    :depth: 2

.. _conf-example:

Example configuration file
==========================

This is an example configuration file to get you started.
It should contain all you need to run a basic Celery set-up.

.. code-block:: python

    ## Broker settings.
    broker_url = 'amqp://guest:guest@localhost:5672//'

    # List of modules to import when celery starts.
    imports = ('myapp.tasks',)

    ## Using the database to store task state and results.
    result_backend = 'db+sqlite:///results.db'

    task_annotations = {'tasks.add': {'rate_limit': '10/s'}}

Configuration Directives
========================

.. _conf-datetime:

General settings
----------------

.. setting:: accept_content

accept_content
~~~~~~~~~~~~~~

A whitelist of content-types/serializers to allow.

If a message is received that is not in this list then
the message will be discarded with an error.

By default any content type is enabled (including pickle and yaml),
so make sure untrusted parties do not have access to your broker.
See :ref:`guide-security` for more.

Example::

    # using serializer name
    accept_content = ['json']

    # or the actual content-type (MIME)
    accept_content = ['application/json']

Time and date settings
----------------------

.. setting:: enable_utc

enable_utc
~~~~~~~~~~

.. versionadded:: 2.5

If enabled, dates and times in messages will be converted to use
the UTC timezone.

Note that workers running Celery versions below 2.5 will assume a local
timezone for all messages, so only enable if all workers have been
upgraded.

Enabled by default since version 3.0.

.. setting:: timezone

timezone
~~~~~~~~

Configure Celery to use a custom time zone.
The timezone value can be any time zone supported by the `pytz`_
library.

If not set, the UTC timezone is used.  For backwards compatibility
there is also the :setting:`enable_utc` setting; when that is set
to false the system local timezone is used instead.

.. _`pytz`: http://pypi.python.org/pypi/pytz/

.. _conf-tasks:

Task settings
-------------

.. setting:: task_annotations

task_annotations
~~~~~~~~~~~~~~~~

This setting can be used to rewrite any task attribute from the
configuration.  The setting can be a dict, or a list of annotation
objects that filter for tasks and return a map of attributes
to change.

This will change the ``rate_limit`` attribute for the ``tasks.add``
task:

.. code-block:: python

    task_annotations = {'tasks.add': {'rate_limit': '10/s'}}

or change the same for all tasks:

.. code-block:: python

    task_annotations = {'*': {'rate_limit': '10/s'}}

You can change methods too, for example the ``on_failure`` handler:

.. code-block:: python

    def my_on_failure(self, exc, task_id, args, kwargs, einfo):
        print('Oh no! Task failed: {0!r}'.format(exc))

    task_annotations = {'*': {'on_failure': my_on_failure}}

If you need more flexibility then you can use objects
instead of a dict to choose which tasks to annotate:

.. code-block:: python

    class MyAnnotate(object):

        def annotate(self, task):
            if task.name.startswith('tasks.'):
                return {'rate_limit': '10/s'}

    task_annotations = (MyAnnotate(), {…})
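
Annotation objects and plain dicts can be combined in the same tuple, as the
``{…}`` placeholder above suggests.  A minimal sketch of that combination,
with illustrative task names and values:

.. code-block:: python

    class MyAnnotate(object):

        def annotate(self, task):
            # Rate-limit only tasks in the ``tasks.`` namespace.
            if task.name.startswith('tasks.'):
                return {'rate_limit': '10/s'}

    # The dict entry applies to every task; the value is illustrative.
    task_annotations = (MyAnnotate(), {'*': {'time_limit': 120}})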
.. setting:: task_compression

task_compression
~~~~~~~~~~~~~~~~

Default compression used for task messages.
Can be ``gzip``, ``bzip2`` (if available), or any custom
compression schemes registered in the Kombu compression registry.

The default is to send uncompressed messages.

.. setting:: task_protocol

task_protocol
~~~~~~~~~~~~~

Default task message protocol version.
Supports protocols: 1 and 2 (default is 1 for backwards compatibility).

.. setting:: task_serializer

task_serializer
~~~~~~~~~~~~~~~

A string identifying the default serialization method to use.  Can be
`pickle` (default), `json`, `yaml`, `msgpack` or any custom serialization
methods that have been registered with :mod:`kombu.serialization.registry`.

.. seealso::

    :ref:`calling-serializers`.

.. setting:: task_publish_retry

task_publish_retry
~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.2

Decides if publishing task messages will be retried in the case
of connection loss or other connection errors.
See also :setting:`task_publish_retry_policy`.

Enabled by default.

.. setting:: task_publish_retry_policy

task_publish_retry_policy
~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.2

Defines the default policy when retrying publishing a task message in
the case of connection loss or other connection errors.

See :ref:`calling-retry` for more information.
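
A sketch of the policy format, using the retry options documented in
:ref:`calling-retry` (the values shown are illustrative):

.. code-block:: python

    task_publish_retry_policy = {
        'max_retries': 3,       # give up after three attempts
        'interval_start': 0,    # first retry is immediate
        'interval_step': 0.2,   # add 0.2 s for each subsequent retry
        'interval_max': 0.2,    # cap the delay between retries at 0.2 s
    }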
.. _conf-task-execution:

Task execution settings
-----------------------

.. setting:: task_always_eager

task_always_eager
~~~~~~~~~~~~~~~~~

If this is :const:`True`, all tasks will be executed locally by blocking until
the task returns.  ``apply_async()`` and ``Task.delay()`` will return
an :class:`~celery.result.EagerResult` instance, which emulates the API
and behavior of :class:`~celery.result.AsyncResult`, except the result
is already evaluated.

That is, tasks will be executed locally instead of being sent to
the queue.

.. setting:: task_eager_propagates_exceptions

task_eager_propagates_exceptions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If this is :const:`True`, eagerly executed tasks (applied by `task.apply()`,
or when the :setting:`task_always_eager` setting is enabled), will
propagate exceptions.

It's the same as always running ``apply()`` with ``throw=True``.

.. setting:: task_ignore_result

task_ignore_result
~~~~~~~~~~~~~~~~~~

Whether to store the task return values or not (tombstones).
If you still want to store errors, just not successful return values,
you can set :setting:`task_store_errors_even_if_ignored`.

.. setting:: task_store_errors_even_if_ignored

task_store_errors_even_if_ignored
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If set, the worker stores all task errors in the result store even if
:attr:`Task.ignore_result <celery.task.base.Task.ignore_result>` is on.

.. setting:: task_track_started

task_track_started
~~~~~~~~~~~~~~~~~~

If :const:`True` the task will report its status as "started" when the
task is executed by a worker.  The default value is :const:`False` as
the normal behaviour is to not report that level of granularity.  Tasks
are either pending, finished, or waiting to be retried.  Having a "started"
state can be useful for when there are long running tasks and there is a
need to report which task is currently running.

.. setting:: task_time_limit

task_time_limit
~~~~~~~~~~~~~~~

Task hard time limit in seconds.  The worker processing the task will
be killed and replaced with a new one when this is exceeded.

.. setting:: task_soft_time_limit

task_soft_time_limit
~~~~~~~~~~~~~~~~~~~~

Task soft time limit in seconds.

The :exc:`~@SoftTimeLimitExceeded` exception will be
raised when this is exceeded.  The task can catch this to
e.g. clean up before the hard time limit comes.

Example:

.. code-block:: python

    from celery.exceptions import SoftTimeLimitExceeded

    @app.task
    def mytask():
        try:
            return do_work()
        except SoftTimeLimitExceeded:
            cleanup_in_a_hurry()

.. setting:: task_acks_late

task_acks_late
~~~~~~~~~~~~~~

Late ack means the task messages will be acknowledged **after** the task
has been executed, not *just before*, which is the default behavior.

.. seealso::

    FAQ: :ref:`faq-acks_late-vs-retry`.

.. setting:: task_reject_on_worker_lost

task_reject_on_worker_lost
~~~~~~~~~~~~~~~~~~~~~~~~~~

Even if :setting:`task_acks_late` is enabled, the worker will
acknowledge tasks when the worker process executing them abruptly
exits or is signalled (e.g. :sig:`KILL`/:sig:`INT`, etc).

Setting this to true allows the message to be requeued instead,
so that the task will execute again by the same worker, or another
worker.

.. warning::

    Enabling this can cause message loops; make sure you know
    what you're doing.

.. setting:: task_default_rate_limit

task_default_rate_limit
~~~~~~~~~~~~~~~~~~~~~~~

The global default rate limit for tasks.

This value is used for tasks that don't have a custom rate limit.
The default is no rate limit.

.. seealso::

    The :setting:`worker_disable_rate_limits` setting can
    disable all rate limits.

.. _conf-result-backend:

Task result backend settings
----------------------------

.. setting:: result_backend

result_backend
~~~~~~~~~~~~~~

The backend used to store task results (tombstones).
Disabled by default.
Can be one of the following:

* rpc
    Send results back as AMQP messages.
    See :ref:`conf-rpc-result-backend`.

* database
    Use a relational database supported by `SQLAlchemy`_.
    See :ref:`conf-database-result-backend`.

* redis
    Use `Redis`_ to store the results.
    See :ref:`conf-redis-result-backend`.

* cache
    Use `memcached`_ to store the results.
    See :ref:`conf-cache-result-backend`.

* mongodb
    Use `MongoDB`_ to store the results.
    See :ref:`conf-mongodb-result-backend`.

* new_cassandra
    Use `Cassandra`_ to store the results, using a newer database driver
    than the old ``cassandra`` backend.
    See :ref:`conf-new_cassandra-result-backend`.

* ironcache
    Use `IronCache`_ to store the results.
    See :ref:`conf-ironcache-result-backend`.

* couchbase
    Use `Couchbase`_ to store the results.
    See :ref:`conf-couchbase-result-backend`.

* couchdb
    Use `CouchDB`_ to store the results.
    See :ref:`conf-couchdb-result-backend`.

* amqp
    Older AMQP backend (badly) emulating a database-based backend.
    See :ref:`conf-amqp-result-backend`.

.. warning::

    While the AMQP result backend is very efficient, you must make sure
    you only receive the same result once.  See :doc:`userguide/calling`.

.. _`SQLAlchemy`: http://sqlalchemy.org
.. _`memcached`: http://memcached.org
.. _`MongoDB`: http://mongodb.org
.. _`Redis`: http://redis.io
.. _`Cassandra`: http://cassandra.apache.org/
.. _`IronCache`: http://www.iron.io/cache
.. _`CouchDB`: http://www.couchdb.com/
.. _`Couchbase`: http://www.couchbase.com/

.. setting:: result_serializer

result_serializer
~~~~~~~~~~~~~~~~~

Result serialization format.  Default is ``pickle``.  See
:ref:`calling-serializers` for information about supported
serialization formats.

.. setting:: result_compression

result_compression
~~~~~~~~~~~~~~~~~~

Optional compression method used for task results.
Supports the same options as the :setting:`task_compression` setting.

Default is no compression.
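
For example, to store results as gzip-compressed JSON rather than pickled
data (a sketch; the choice of formats is illustrative):

.. code-block:: python

    result_serializer = 'json'    # store results as JSON...
    result_compression = 'gzip'   # ...compressed with gzip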
.. setting:: result_expires

result_expires
~~~~~~~~~~~~~~

Time (in seconds, or a :class:`~datetime.timedelta` object) for when after
stored task tombstones will be deleted.

A built-in periodic task will delete the results after this time
(``celery.backend_cleanup``), assuming that ``celery beat`` is
enabled.  The task runs daily at 4am.

A value of :const:`None` or 0 means results will never expire (depending
on backend specifications).

Default is to expire after 1 day.

.. note::

    For the moment this only works with the amqp, database, cache, redis and MongoDB
    backends.

    When using the database or MongoDB backends, `celery beat` must be
    running for the results to be expired.

.. setting:: result_cache_max

result_cache_max
~~~~~~~~~~~~~~~~

Result backends cache ready results used by the client.

This is the total number of results to cache before older results are evicted.
The default is 5000.  0 or None means no limit, and a value of :const:`-1`
will disable the cache.

.. _conf-database-result-backend:

Database backend settings
-------------------------

Database URL Examples
~~~~~~~~~~~~~~~~~~~~~

To use the database backend you have to configure the
:setting:`result_backend` setting with a connection URL and the ``db+``
prefix:

.. code-block:: python

    result_backend = 'db+scheme://user:password@host:port/dbname'

Examples::

    # sqlite (filename)
    result_backend = 'db+sqlite:///results.sqlite'

    # mysql
    result_backend = 'db+mysql://scott:tiger@localhost/foo'

    # postgresql
    result_backend = 'db+postgresql://scott:tiger@localhost/mydatabase'

    # oracle
    result_backend = 'db+oracle://scott:tiger@127.0.0.1:1521/sidname'

Please see `Supported Databases`_ for a table of supported databases,
and `Connection String`_ for more information about connection
strings (which is the part of the URI that comes after the ``db+`` prefix).

.. _`Supported Databases`:
    http://www.sqlalchemy.org/docs/core/engines.html#supported-databases

.. _`Connection String`:
    http://www.sqlalchemy.org/docs/core/engines.html#database-urls

.. setting:: sqlalchemy_dburi

sqlalchemy_dburi
~~~~~~~~~~~~~~~~

This setting is no longer used as it's now possible to specify
the database URL directly in the :setting:`result_backend` setting.

.. setting:: sqlalchemy_engine_options

sqlalchemy_engine_options
~~~~~~~~~~~~~~~~~~~~~~~~~

To specify additional SQLAlchemy database engine options you can use
the :setting:`sqlalchemy_engine_options` setting::

    # echo enables verbose logging from SQLAlchemy.
    sqlalchemy_engine_options = {'echo': True}

.. setting:: sqlalchemy_short_lived_sessions

sqlalchemy_short_lived_sessions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

    sqlalchemy_short_lived_sessions = True

Short lived sessions are disabled by default.  If enabled they can drastically reduce
performance, especially on systems processing lots of tasks.  This option is useful
on low-traffic workers that experience errors as a result of cached database connections
going stale through inactivity.  For example, intermittent errors like
`(OperationalError) (2006, 'MySQL server has gone away')` can be fixed by enabling
short lived sessions.  This option only affects the database backend.
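
Stale-connection errors can often also be addressed at the SQLAlchemy level
instead.  A hedged sketch using the standard SQLAlchemy ``pool_recycle``
engine option (the one-hour value is illustrative):

.. code-block:: python

    # Recycle pooled connections older than one hour so they are not
    # reused after the database server has closed them.
    sqlalchemy_engine_options = {'pool_recycle': 3600}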
.. setting:: sqlalchemy_table_names

sqlalchemy_table_names
~~~~~~~~~~~~~~~~~~~~~~

When SQLAlchemy is configured as the result backend, Celery automatically
creates two tables to store result metadata for tasks.  This setting allows
you to customize the table names:

.. code-block:: python

    # use custom table names for the database result backend.
    sqlalchemy_table_names = {
        'task': 'myapp_taskmeta',
        'group': 'myapp_groupmeta',
    }

.. _conf-rpc-result-backend:

RPC backend settings
--------------------

.. setting:: result_persistent

result_persistent
~~~~~~~~~~~~~~~~~

If set to :const:`True`, result messages will be persistent.  This means the
messages will not be lost after a broker restart.  The default is for the
results to be transient.

Example configuration
~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

    result_backend = 'rpc://'
    result_persistent = False

.. _conf-cache-result-backend:

Cache backend settings
----------------------

.. note::

    The cache backend supports the `pylibmc`_ and `python-memcached`
    libraries.  The latter is used only if `pylibmc`_ is not installed.

Using a single memcached server:

.. code-block:: python

    result_backend = 'cache+memcached://127.0.0.1:11211/'

Using multiple memcached servers:

.. code-block:: python

    result_backend = """
        cache+memcached://172.19.26.240:11211;172.19.26.242:11211/
    """.strip()

The "memory" backend stores the cache in memory only:

.. code-block:: python

    result_backend = 'cache'
    cache_backend = 'memory'

.. setting:: cache_backend_options

cache_backend_options
~~~~~~~~~~~~~~~~~~~~~

You can set pylibmc options using the :setting:`cache_backend_options`
setting:

.. code-block:: python

    cache_backend_options = {
        'binary': True,
        'behaviors': {'tcp_nodelay': True},
    }

.. _`pylibmc`: http://sendapatch.se/projects/pylibmc/

.. setting:: cache_backend

cache_backend
~~~~~~~~~~~~~

This setting is no longer used as it's now possible to specify
the cache backend directly in the :setting:`result_backend` setting.

.. _conf-redis-result-backend:

Redis backend settings
----------------------

Configuring the backend URL
~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note::

    The Redis backend requires the :mod:`redis` library:
    http://pypi.python.org/pypi/redis/

    To install the redis package use `pip` or `easy_install`:

    .. code-block:: console

        $ pip install redis

This backend requires the :setting:`result_backend`
setting to be set to a Redis URL::

    result_backend = 'redis://:password@host:port/db'

For example::

    result_backend = 'redis://localhost/0'

which is the same as::

    result_backend = 'redis://'

The fields of the URL are defined as follows:

- *host*

Host name or IP address of the Redis server. e.g. `localhost`.

- *port*

Port to the Redis server. Default is 6379.

- *db*

Database number to use. Default is 0.
The db can include an optional leading slash.

- *password*

Password used to connect to the database.

.. setting:: redis_max_connections

redis_max_connections
~~~~~~~~~~~~~~~~~~~~~

Maximum number of connections available in the Redis connection
pool used for sending and retrieving results.
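
An example configuration, in the style of the other backend sections
(host and pool size are illustrative):

.. code-block:: python

    result_backend = 'redis://localhost:6379/0'
    redis_max_connections = 20  # cap the result-store connection pool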
.. _conf-mongodb-result-backend:

MongoDB backend settings
------------------------

.. note::

    The MongoDB backend requires the :mod:`pymongo` library:
    http://github.com/mongodb/mongo-python-driver/tree/master

.. setting:: mongodb_backend_settings

mongodb_backend_settings
~~~~~~~~~~~~~~~~~~~~~~~~

This is a dict supporting the following keys:

* database
    The database name to connect to. Defaults to ``celery``.

* taskmeta_collection
    The collection name to store task meta data.
    Defaults to ``celery_taskmeta``.

* max_pool_size
    Passed as max_pool_size to PyMongo's Connection or MongoClient
    constructor. It is the maximum number of TCP connections to keep
    open to MongoDB at a given time. If there are more open connections
    than max_pool_size, sockets will be closed when they are released.
    Defaults to 10.

* options
    Additional keyword arguments to pass to the mongodb connection
    constructor.  See the :mod:`pymongo` docs to see a list of arguments
    supported.

.. _example-mongodb-result-config:

Example configuration
~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

    result_backend = 'mongodb://192.168.1.100:30000/'
    mongodb_backend_settings = {
        'database': 'mydb',
        'taskmeta_collection': 'my_taskmeta_collection',
    }

.. _conf-new_cassandra-result-backend:

new_cassandra backend settings
------------------------------

.. note::

    This Cassandra backend driver requires :mod:`cassandra-driver`:
    https://pypi.python.org/pypi/cassandra-driver

    To install, use `pip` or `easy_install`:

    .. code-block:: console

        $ pip install cassandra-driver

This backend requires the following configuration directives to be set.

.. setting:: cassandra_servers

cassandra_servers
~~~~~~~~~~~~~~~~~

List of ``host`` Cassandra servers. e.g.::

    cassandra_servers = ['localhost']

.. setting:: cassandra_port

cassandra_port
~~~~~~~~~~~~~~

Port to contact the Cassandra servers on. Default is 9042.

.. setting:: cassandra_keyspace

cassandra_keyspace
~~~~~~~~~~~~~~~~~~

The keyspace in which to store the results. e.g.::

    cassandra_keyspace = 'tasks_keyspace'

.. setting:: cassandra_column_family

cassandra_column_family
~~~~~~~~~~~~~~~~~~~~~~~

The table (column family) in which to store the results. e.g.::

    cassandra_column_family = 'tasks'

.. setting:: cassandra_read_consistency

cassandra_read_consistency
~~~~~~~~~~~~~~~~~~~~~~~~~~

The read consistency used. Values can be ``ONE``, ``TWO``, ``THREE``, ``QUORUM``, ``ALL``,
``LOCAL_QUORUM``, ``EACH_QUORUM``, ``LOCAL_ONE``.

.. setting:: cassandra_write_consistency

cassandra_write_consistency
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The write consistency used. Values can be ``ONE``, ``TWO``, ``THREE``, ``QUORUM``, ``ALL``,
``LOCAL_QUORUM``, ``EACH_QUORUM``, ``LOCAL_ONE``.

.. setting:: cassandra_entry_ttl

cassandra_entry_ttl
~~~~~~~~~~~~~~~~~~~

Time-to-live for status entries.  Entries will expire and be removed
that many seconds after they are added.  The default (:const:`None`)
means they will never expire.

Example configuration
~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

    cassandra_servers = ['localhost']
    cassandra_keyspace = 'celery'
    cassandra_column_family = 'task_results'
    cassandra_read_consistency = 'ONE'
    cassandra_write_consistency = 'ONE'
    cassandra_entry_ttl = 86400
.. _conf-riak-result-backend:

Riak backend settings
---------------------

.. note::

    The Riak backend requires the :mod:`riak` library:
    http://pypi.python.org/pypi/riak/

    To install the riak package use `pip` or `easy_install`:

    .. code-block:: console

        $ pip install riak

This backend requires the :setting:`result_backend`
setting to be set to a Riak URL::

    result_backend = "riak://host:port/bucket"

For example::

    result_backend = "riak://localhost/celery"

which is the same as::

    result_backend = "riak://"

The fields of the URL are defined as follows:

- *host*

Host name or IP address of the Riak server. e.g. `"localhost"`.

- *port*

Port to the Riak server using the protobuf protocol. Default is 8087.

- *bucket*

Bucket name to use. Default is `celery`.
The bucket needs to be a string with ASCII characters only.

Alternatively, this backend can be configured with the following
configuration directives.

.. setting:: riak_backend_settings

riak_backend_settings
~~~~~~~~~~~~~~~~~~~~~

This is a dict supporting the following keys:

* host
    The host name of the Riak server. Defaults to "localhost".

* port
    The port the Riak server is listening to. Defaults to 8087.

* bucket
    The bucket name to connect to. Defaults to "celery".

* protocol
    The protocol to use to connect to the Riak server. This is not configurable
    via :setting:`result_backend`.

.. _conf-ironcache-result-backend:

IronCache backend settings
--------------------------

.. note::

    The IronCache backend requires the :mod:`iron_celery` library:
    http://pypi.python.org/pypi/iron_celery

    To install the iron_celery package use `pip` or `easy_install`:

    .. code-block:: console

        $ pip install iron_celery

IronCache is configured via the URL provided in :setting:`result_backend`, for example::

    result_backend = 'ironcache://project_id:token@'

Or to change the cache name::

    ironcache://project_id:token@/awesomecache

For more information, see: https://github.com/iron-io/iron_celery

.. _conf-couchbase-result-backend:

Couchbase backend settings
--------------------------

.. note::

    The Couchbase backend requires the :mod:`couchbase` library:
    https://pypi.python.org/pypi/couchbase

    To install the couchbase package use `pip` or `easy_install`:

    .. code-block:: console

        $ pip install couchbase

This backend can be configured via the :setting:`result_backend`
set to a Couchbase URL::

    result_backend = 'couchbase://username:password@host:port/bucket'

.. setting:: couchbase_backend_settings

couchbase_backend_settings
~~~~~~~~~~~~~~~~~~~~~~~~~~

This is a dict supporting the following keys:

* host
    Host name of the Couchbase server. Defaults to ``localhost``.

* port
    The port the Couchbase server is listening to. Defaults to ``8091``.

* bucket
    The default bucket the Couchbase server is writing to.
    Defaults to ``default``.

* username
    User name to authenticate to the Couchbase server as (optional).

* password
    Password to authenticate to the Couchbase server (optional).
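
A sketch of the dict form, mirroring the URL example above (the credentials
and bucket name are illustrative):

.. code-block:: python

    couchbase_backend_settings = {
        'host': 'localhost',
        'port': 8091,
        'bucket': 'celery',
        'username': 'user',      # optional
        'password': 'password',  # optional
    }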
.. _conf-couchdb-result-backend:

CouchDB backend settings
------------------------

.. note::

    The CouchDB backend requires the :mod:`pycouchdb` library:
    https://pypi.python.org/pypi/pycouchdb

    To install the pycouchdb package use `pip` or `easy_install`:

    .. code-block:: console

        $ pip install pycouchdb

This backend can be configured via the :setting:`result_backend`
set to a CouchDB URL::

    result_backend = 'couchdb://username:password@host:port/container'

The URL is formed out of the following parts:

* username
    User name to authenticate to the CouchDB server as (optional).

* password
    Password to authenticate to the CouchDB server (optional).

* host
    Host name of the CouchDB server. Defaults to ``localhost``.

* port
    The port the CouchDB server is listening to. Defaults to ``8091``.

* container
    The default container the CouchDB server is writing to.
    Defaults to ``default``.

.. _conf-amqp-result-backend:

AMQP backend settings
---------------------

.. admonition:: Do not use in production.

    This is the old AMQP result backend that creates one queue per task.
    If you want to send results back as messages please consider using the
    RPC backend instead, or if you need the results to be persistent
    use a result backend designed for that purpose (e.g. Redis, or a database).

.. note::

    The AMQP backend requires RabbitMQ 1.1.0 or higher to automatically
    expire results.  If you are running an older version of RabbitMQ
    you should disable result expiration like this:

        result_expires = None

.. setting:: result_exchange

result_exchange
~~~~~~~~~~~~~~~

Name of the exchange to publish results in.  Default is `celeryresults`.

.. setting:: result_exchange_type

result_exchange_type
~~~~~~~~~~~~~~~~~~~~

The exchange type of the result exchange.  Default is to use a `direct`
exchange.

result_persistent
~~~~~~~~~~~~~~~~~

If set to :const:`True`, result messages will be persistent.  This means the
messages will not be lost after a broker restart.  The default is for the
results to be transient.

Example configuration
~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

    result_backend = 'amqp'
    result_expires = 18000  # 5 hours.

.. _conf-messaging:

Message Routing
---------------

.. _conf-messaging-routing:

.. setting:: task_queues

task_queues
~~~~~~~~~~~

Most users will not want to specify this setting and should rather use
the :ref:`automatic routing facilities <routing-automatic>`.

If you really want to configure advanced routing, this setting should
be a list of :class:`kombu.Queue` objects the worker will consume from;
see the sketch below.

Note that this setting can be overridden per worker via the `-Q` option,
or individual queues from this list (by name) can be excluded using
the `-X` option.

Also see :ref:`routing-basics` for more information.

The default is a queue/exchange/binding key of ``celery``, with
exchange type ``direct``.

See also :setting:`task_routes`.
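
A minimal sketch of two statically configured queues (the names, exchanges
and routing keys are illustrative):

.. code-block:: python

    from kombu import Exchange, Queue

    task_queues = (
        Queue('default', Exchange('default'), routing_key='default'),
        Queue('cpu-bound', Exchange('cpu-bound'), routing_key='cpu-bound'),
    )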
.. setting:: task_routes

task_routes
~~~~~~~~~~~

A list of routers, or a single router used to route tasks to queues.
When deciding the final destination of a task the routers are consulted
in order.

A router can be specified as either:

*  A router class instance.
*  A string providing the path to a router class.
*  A dict containing a router specification.  It will be converted to
   a :class:`celery.routes.MapRoute` instance.

Examples:

.. code-block:: python

    task_routes = {
        "celery.ping": "default",
        "mytasks.add": "cpu-bound",
        "video.encode": {
            "queue": "video",
            "exchange": "media",
            "routing_key": "media.video.encode",
        },
    }

    task_routes = ("myapp.tasks.Router", {"celery.ping": "default"})

Where ``myapp.tasks.Router`` could be:

.. code-block:: python

    class Router(object):

        def route_for_task(self, task, args=None, kwargs=None):
            if task == "celery.ping":
                return "default"

``route_for_task`` may return a string or a dict.  A string means
it's a queue name in :setting:`task_queues`, a dict means it's a custom route.

When sending tasks, the routers are consulted in order.  The first
router that doesn't return ``None`` is the route to use.  The message options
are then merged with the found route settings, where the router's settings
have priority.

For example, if :func:`~celery.execute.apply_async` has these arguments:

.. code-block:: python

   Task.apply_async(immediate=False, exchange="video",
                    routing_key="video.compress")

and a router returns:

.. code-block:: python

    {"immediate": True, "exchange": "urgent"}

the final message options will be:

.. code-block:: python

    immediate=True, exchange="urgent", routing_key="video.compress"

(and any default message options defined in the
:class:`~celery.task.base.Task` class)

Values defined in :setting:`task_routes` have precedence over values defined in
:setting:`task_queues` when merging the two.

With the following settings:

.. code-block:: python

    task_queues = {
        "cpubound": {
            "exchange": "cpubound",
            "routing_key": "cpubound",
        },
    }

    task_routes = {
        "tasks.add": {
            "queue": "cpubound",
            "routing_key": "tasks.add",
            "serializer": "json",
        },
    }

The final routing options for ``tasks.add`` will become:

.. code-block:: javascript

    {"exchange": "cpubound",
     "routing_key": "tasks.add",
     "serializer": "json"}

See :ref:`routers` for more examples.

.. setting:: task_queue_ha_policy

task_queue_ha_policy
~~~~~~~~~~~~~~~~~~~~
:brokers: RabbitMQ

This will set the default HA policy for a queue, and the value
can either be a string (usually ``all``):

.. code-block:: python

    task_queue_ha_policy = 'all'

Using 'all' will replicate the queue to all current nodes,
or you can give it a list of nodes to replicate to:

.. code-block:: python

    task_queue_ha_policy = ['rabbit@host1', 'rabbit@host2']

Using a list will implicitly set ``x-ha-policy`` to 'nodes' and
``x-ha-policy-params`` to the given list of nodes.

See http://www.rabbitmq.com/ha.html for more information.

.. setting:: worker_direct

worker_direct
~~~~~~~~~~~~~

This option enables a dedicated queue for every worker,
so that tasks can be routed to specific workers.

The queue name for each worker is automatically generated based on
the worker hostname and a ``.dq`` suffix, using the ``C.dq`` exchange.

For example the queue name for the worker with node name ``w1@example.com``
becomes::

    w1@example.com.dq

Then you can route a task to that worker by specifying the hostname
as the routing key and using the ``C.dq`` exchange::

    task_routes = {
        'tasks.add': {'exchange': 'C.dq', 'routing_key': 'w1@example.com'}
    }

.. setting:: task_create_missing_queues

task_create_missing_queues
~~~~~~~~~~~~~~~~~~~~~~~~~~

If enabled (default), any queues specified that are not defined in
:setting:`task_queues` will be automatically created. See
:ref:`routing-automatic`.

.. setting:: task_default_queue

task_default_queue
~~~~~~~~~~~~~~~~~~

The name of the default queue used by `.apply_async` if the message has
no route or no custom queue has been specified.

This queue must be listed in :setting:`task_queues`.
If :setting:`task_queues` is not specified then it is automatically
created containing one queue entry, where this name is used as the name of
that queue.

The default is: `celery`.

.. seealso::

    :ref:`routing-changing-default-queue`

.. setting:: task_default_exchange

task_default_exchange
~~~~~~~~~~~~~~~~~~~~~

Name of the default exchange to use when no custom exchange is
specified for a key in the :setting:`task_queues` setting.

The default is: `celery`.

.. setting:: task_default_exchange_type

task_default_exchange_type
~~~~~~~~~~~~~~~~~~~~~~~~~~

Default exchange type used when no custom exchange type is specified
for a key in the :setting:`task_queues` setting.

The default is: `direct`.

.. setting:: task_default_routing_key

task_default_routing_key
~~~~~~~~~~~~~~~~~~~~~~~~

The default routing key used when no custom routing key
is specified for a key in the :setting:`task_queues` setting.

The default is: `celery`.

.. setting:: task_default_delivery_mode

task_default_delivery_mode
~~~~~~~~~~~~~~~~~~~~~~~~~~

Can be `transient` or `persistent`.  The default is to send
persistent messages.
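
A sketch of how these defaults combine (the ``default`` name is
illustrative; these values simply mirror the documented defaults above
with a different queue name):

.. code-block:: python

    task_default_queue = 'default'
    task_default_exchange = 'default'
    task_default_exchange_type = 'direct'
    task_default_routing_key = 'default'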
.. _conf-broker-settings:

Broker Settings
---------------

.. setting:: broker_url

broker_url
~~~~~~~~~~

Default broker URL.  This must be a URL in the form of::

    transport://userid:password@hostname:port/virtual_host

Only the scheme part (``transport://``) is required, the rest
is optional, and defaults to the specific transport's default values.

The transport part is the broker implementation to use, and the
default is ``amqp``, which uses ``librabbitmq`` by default or falls back to
``pyamqp`` if that is not installed.  There are also many other choices,
including ``redis``, ``beanstalk``, ``sqlalchemy``, ``django``, ``mongodb``,
and ``couchdb``.

It can also be a fully qualified path to your own transport implementation.

More than one broker URL, of the same transport, can also be specified.
The broker URLs can be passed in as a single string that is semicolon delimited::

    broker_url = 'transport://userid:password@hostname:port//;transport://userid:password@hostname:port//'

Or as a list::

    broker_url = [
        'transport://userid:password@localhost:port//',
        'transport://userid:password@hostname:port//'
    ]

The brokers will then be used in the :setting:`broker_failover_strategy`.

See :ref:`kombu:connection-urls` in the Kombu documentation for more
information.

.. setting:: broker_failover_strategy

broker_failover_strategy
~~~~~~~~~~~~~~~~~~~~~~~~

Default failover strategy for the broker Connection object. If supplied,
may map to a key in 'kombu.connection.failover_strategies', or be a reference
to any method that yields a single item from a supplied list.

Example::

    from itertools import repeat
    import random

    # Random failover strategy
    def random_failover_strategy(servers):
        it = list(servers)  # don't modify callers' list
        shuffle = random.shuffle
        for _ in repeat(None):
            shuffle(it)
            yield it[0]

    broker_failover_strategy = random_failover_strategy

.. setting:: broker_heartbeat

broker_heartbeat
~~~~~~~~~~~~~~~~
:transports supported: ``pyamqp``

It's not always possible to detect connection loss in a timely
manner using TCP/IP alone, so AMQP defines something called heartbeats
that's used by both the client and the broker to detect if
a connection was closed.

Heartbeats are disabled by default.

If the heartbeat value is 10 seconds, then
the heartbeat will be monitored at the interval specified
by the :setting:`broker_heartbeat_checkrate` setting, which by default is
double the rate of the heartbeat value
(so for a value of 10 seconds, the heartbeat is checked every 5 seconds).

.. setting:: broker_heartbeat_checkrate

broker_heartbeat_checkrate
~~~~~~~~~~~~~~~~~~~~~~~~~~
:transports supported: ``pyamqp``

At intervals the worker will monitor that the broker has not missed
too many heartbeats.  The rate at which this is checked is calculated
by dividing the :setting:`broker_heartbeat` value with this value,
so if the heartbeat is 10.0 and the rate is the default 2.0, the check
will be performed every 5 seconds (twice the heartbeat sending rate).
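
Putting the two settings together, a sketch with a 10-second heartbeat
checked twice per heartbeat interval (the values are illustrative):

.. code-block:: python

    broker_heartbeat = 10             # negotiate a 10 s heartbeat
    broker_heartbeat_checkrate = 2.0  # check for misses every 10/2 = 5 s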
.. setting:: broker_use_ssl

broker_use_ssl
~~~~~~~~~~~~~~
:transports supported: ``pyamqp``, ``redis``

Toggles SSL usage on broker connection and SSL settings.

If ``True`` the connection will use SSL with default SSL settings.
If set to a dict, the SSL connection will be configured according to the
specified policy.  The format used is Python's `ssl.wrap_socket()
options <https://docs.python.org/3/library/ssl.html#ssl.wrap_socket>`_.

Default is ``False`` (no SSL).

Note that the SSL socket is generally served on a separate port by the broker.

Example providing a client cert and validating the server cert against a custom
certificate authority:

.. code-block:: python

    import ssl

    broker_use_ssl = {
      'keyfile': '/var/ssl/private/worker-key.pem',
      'certfile': '/var/ssl/amqp-server-cert.pem',
      'ca_certs': '/var/ssl/myca.pem',
      'cert_reqs': ssl.CERT_REQUIRED
    }

.. warning::

    Be careful using ``broker_use_ssl=True``; it is possible that your default
    configuration does not validate the server cert at all.  Please read the Python
    `ssl module security
    considerations <https://docs.python.org/3/library/ssl.html#ssl-security>`_.

.. setting:: broker_pool_limit

broker_pool_limit
~~~~~~~~~~~~~~~~~

.. versionadded:: 2.3

The maximum number of connections that can be open in the connection pool.

The pool is enabled by default since version 2.5, with a default limit of ten
connections.  This number can be tweaked depending on the number of
threads/greenthreads (eventlet/gevent) using a connection.  For example,
when running eventlet with 1000 greenlets that use a connection to the broker,
contention can arise and you should consider increasing the limit.

If set to :const:`None` or 0 the connection pool will be disabled and
connections will be established and closed for every use.

Default (since 2.5) is to use a pool of 10 connections.

.. setting:: broker_connection_timeout

broker_connection_timeout
~~~~~~~~~~~~~~~~~~~~~~~~~

The default timeout in seconds before we give up establishing a connection
to the AMQP server.  Default is 4 seconds.

.. setting:: broker_connection_retry

broker_connection_retry
~~~~~~~~~~~~~~~~~~~~~~~

Automatically try to re-establish the connection to the AMQP broker if lost.

The time between retries is increased for each retry, and is
not exhausted before :setting:`broker_connection_max_retries` is
exceeded.

This behavior is on by default.

.. setting:: broker_connection_max_retries

broker_connection_max_retries
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Maximum number of retries before we give up re-establishing a connection
to the AMQP broker.

If this is set to :const:`0` or :const:`None`, we will retry forever.

Default is 100 retries.

.. setting:: broker_login_method

broker_login_method
~~~~~~~~~~~~~~~~~~~

Set custom AMQP login method.  Default is ``AMQPLAIN``.

.. setting:: broker_transport_options

broker_transport_options
~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.2

A dict of additional options passed to the underlying transport.

See your transport user manual for supported options (if any).

Example setting the visibility timeout (supported by Redis and SQS
transports):

.. code-block:: python

    broker_transport_options = {'visibility_timeout': 18000}  # 5 hours

.. _conf-worker:

Worker
------

.. setting:: imports

imports
~~~~~~~

A sequence of modules to import when the worker starts.

This is used to specify the task modules to import, but also
to import signal handlers and additional remote control commands, etc.

The modules will be imported in the original order.

.. setting:: include

include
~~~~~~~

Exact same semantics as :setting:`imports`, but can be used as a means
to have different import categories.

The modules in this setting are imported after the modules in
:setting:`imports`.
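
A sketch of the two settings used together (the module names are
illustrative):

.. code-block:: python

    imports = ('myapp.tasks', 'myapp.signals')  # imported first, in order
    include = ('myapp.extra_tasks',)            # imported after ``imports``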
.. _conf-concurrency:

.. setting:: worker_concurrency

worker_concurrency
~~~~~~~~~~~~~~~~~~

The number of concurrent worker processes/threads/green threads executing
tasks.

If you're doing mostly I/O you can have more processes,
but if mostly CPU-bound, try to keep it close to the
number of CPUs on your machine.  If not set, the number of CPUs/cores
on the host will be used.

Defaults to the number of available CPUs.

.. setting:: worker_prefetch_multiplier

worker_prefetch_multiplier
~~~~~~~~~~~~~~~~~~~~~~~~~~

How many messages to prefetch at a time multiplied by the number of
concurrent processes.  The default is 4 (four messages for each
process).  The default setting is usually a good choice, however -- if you
have very long running tasks waiting in the queue and you have to start the
workers, note that the first worker to start will receive four times the
number of messages initially.  Thus the tasks may not be fairly distributed
to the workers.

To disable prefetching, set :setting:`worker_prefetch_multiplier` to 1.
Changing that setting to 0 will allow the worker to keep consuming
as many messages as it wants.

For more on prefetching, read :ref:`optimizing-prefetch-limit`.

.. note::

    Tasks with ETA/countdown are not affected by prefetch limits.

.. setting:: worker_lost_wait

worker_lost_wait
~~~~~~~~~~~~~~~~

In some cases a worker may be killed without proper cleanup,
and the worker may have published a result before terminating.
This value specifies how long we wait for any missing results before
raising a :exc:`@WorkerLostError` exception.

Default is 10.0 seconds.

.. setting:: worker_max_tasks_per_child

worker_max_tasks_per_child
~~~~~~~~~~~~~~~~~~~~~~~~~~

Maximum number of tasks a pool worker process can execute before
it's replaced with a new one.  Default is no limit.

.. setting:: worker_max_memory_per_child

worker_max_memory_per_child
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Maximum amount of resident memory that may be consumed by a
worker before it will be replaced by a new worker.  If a single
task causes a worker to exceed this limit, the task will be
completed, and the worker will be replaced afterwards.  Default:
no limit.
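
For example, a sketch that recycles worker processes to contain gradual
memory leaks (values are illustrative, and it assumes, as in current Celery
releases, that the memory value is interpreted in kilobytes):

.. code-block:: python

    worker_max_tasks_per_child = 100     # replace a process after 100 tasks
    worker_max_memory_per_child = 12000  # ...or once it holds ~12 MB resident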
.. setting:: worker_disable_rate_limits

worker_disable_rate_limits
~~~~~~~~~~~~~~~~~~~~~~~~~~

Disable all rate limits, even if tasks have explicit rate limits set.

.. setting:: worker_state_db

worker_state_db
~~~~~~~~~~~~~~~

Name of the file used to store persistent worker state (like revoked tasks).
Can be a relative or absolute path, but be aware that the suffix `.db`
may be appended to the file name (depending on Python version).

Can also be set via the :option:`--statedb` argument to
:mod:`~celery.bin.worker`.

Not enabled by default.

.. setting:: worker_timer_precision

worker_timer_precision
~~~~~~~~~~~~~~~~~~~~~~

Set the maximum time in seconds that the ETA scheduler can sleep between
rechecking the schedule.  Default is 1 second.

Setting this value to 1 second means the scheduler's precision will
be 1 second.  If you need near millisecond precision you can set this to 0.1.

.. setting:: worker_enable_remote_control

worker_enable_remote_control
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Specify if remote control of the workers is enabled.

Default is :const:`True`.

.. _conf-error-mails:

Error E-Mails
-------------

.. setting:: task_send_error_emails

task_send_error_emails
~~~~~~~~~~~~~~~~~~~~~~

The default value for the `Task.send_error_emails` attribute, which if
set to :const:`True` means errors occurring during task execution will be
sent to :setting:`admins` by email.

Disabled by default.

.. setting:: admins

admins
~~~~~~

List of `(name, email_address)` tuples for the administrators that should
receive error emails.

.. setting:: server_email

server_email
~~~~~~~~~~~~

The email address this worker sends emails from.
Default is celery@localhost.

.. setting:: email_host

email_host
~~~~~~~~~~

The mail server to use.  Default is ``localhost``.

.. setting:: email_host_user

email_host_user
~~~~~~~~~~~~~~~

User name (if required) to log on to the mail server with.

.. setting:: email_host_password

email_host_password
~~~~~~~~~~~~~~~~~~~

Password (if required) to log on to the mail server with.

.. setting:: email_port

email_port
~~~~~~~~~~

The port the mail server is listening on.  Default is `25`.

.. setting:: email_use_ssl

email_use_ssl
~~~~~~~~~~~~~

Use SSL when connecting to the SMTP server.  Disabled by default.

.. setting:: email_use_tls

email_use_tls
~~~~~~~~~~~~~

Use TLS when connecting to the SMTP server.  Disabled by default.

.. setting:: email_timeout

email_timeout
~~~~~~~~~~~~~

Timeout in seconds for when we give up trying to connect
to the SMTP server when sending emails.

The default is 2 seconds.

.. setting:: email_charset

email_charset
~~~~~~~~~~~~~

.. versionadded:: 4.0

Charset for outgoing emails.  Default is "us-ascii".

.. _conf-example-error-mail-config:

Example E-Mail configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This configuration enables the sending of error emails to
george@vandelay.com and kosmo@vandelay.com:

.. code-block:: python

    # Enables error emails.
    task_send_error_emails = True

    # Name and email addresses of recipients
    admins = (
        ('George Costanza', 'george@vandelay.com'),
        ('Cosmo Kramer', 'kosmo@vandelay.com'),
    )

    # Email address used as sender (From field).
    server_email = 'no-reply@vandelay.com'

    # Mailserver configuration
    email_host = 'mail.vandelay.com'
    email_port = 25
    # email_host_user = 'servers'
    # email_host_password = 's3cr3t'

.. _conf-events:

Events
------

.. setting:: worker_send_events

worker_send_events
~~~~~~~~~~~~~~~~~~

Send task-related events so that tasks can be monitored using tools like
`flower`.  Sets the default value for the worker's :option:`-E` argument.

.. setting:: task_send_sent_event

task_send_sent_event
~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.2

If enabled, a :event:`task-sent` event will be sent for every task so tasks can be
tracked before they are consumed by a worker.

Disabled by default.

.. setting:: event_queue_ttl

event_queue_ttl
~~~~~~~~~~~~~~~
:transports supported: ``amqp``

Message expiry time in seconds (int/float) after which messages sent to a
monitor client's event queue are deleted (``x-message-ttl``).

For example, if this value is set to 10 then a message delivered to this queue
will be deleted after 10 seconds.

Disabled by default.

.. setting:: event_queue_expires

event_queue_expires
~~~~~~~~~~~~~~~~~~~
:transports supported: ``amqp``

Expiry time in seconds (int/float) after which an unused monitor client's
event queue will be deleted (``x-expires``).

Default is never, relying on the queue autodelete setting.

.. setting:: event_serializer

event_serializer
~~~~~~~~~~~~~~~~

Message serialization format used when sending event messages.

Default is ``json``. See :ref:`calling-serializers`.
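
A sketch enabling monitoring events while keeping the broker tidy (the
expiry value is illustrative):

.. code-block:: python

    worker_send_events = True    # same as always starting workers with -E
    task_send_sent_event = True  # also emit task-sent events
    event_queue_expires = 60.0   # drop abandoned monitor queues after 60 s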
.. _conf-logging:

Logging
-------

.. setting:: worker_hijack_root_logger

worker_hijack_root_logger
~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.2

By default any previously configured handlers on the root logger will be
removed.  If you want to customize your own logging handlers, then you
can disable this behavior by setting
`worker_hijack_root_logger = False`.

.. note::

    Logging can also be customized by connecting to the
    :signal:`celery.signals.setup_logging` signal.

.. setting:: worker_log_color

worker_log_color
~~~~~~~~~~~~~~~~

Enables/disables colors in logging output by the Celery apps.

By default colors are enabled if:

    1) the app is logging to a real terminal, and not a file.
    2) the app is not running on Windows.

.. setting:: worker_log_format

worker_log_format
~~~~~~~~~~~~~~~~~

The format to use for log messages.

Default is `[%(asctime)s: %(levelname)s/%(processName)s] %(message)s`

See the Python :mod:`logging` module for more information about log
formats.

.. setting:: worker_task_log_format

worker_task_log_format
~~~~~~~~~~~~~~~~~~~~~~

The format to use for log messages logged in tasks.  Can be overridden using
the :option:`--loglevel` option to :mod:`~celery.bin.worker`.

Default is::

    [%(asctime)s: %(levelname)s/%(processName)s]
        [%(task_name)s(%(task_id)s)] %(message)s

See the Python :mod:`logging` module for more information about log
formats.

.. setting:: worker_redirect_stdouts

worker_redirect_stdouts
~~~~~~~~~~~~~~~~~~~~~~~

If enabled `stdout` and `stderr` will be redirected
to the current logger.

Enabled by default.
Used by :program:`celery worker` and :program:`celery beat`.

.. setting:: worker_redirect_stdouts_level

worker_redirect_stdouts_level
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The log level that output to `stdout` and `stderr` is logged as.
Can be one of :const:`DEBUG`, :const:`INFO`, :const:`WARNING`,
:const:`ERROR` or :const:`CRITICAL`.

Default is :const:`WARNING`.

.. _conf-security:

Security
--------

.. setting:: security_key

security_key
~~~~~~~~~~~~

.. versionadded:: 2.5

The relative or absolute path to a file containing the private key
used to sign messages when :ref:`message-signing` is used.

.. setting:: security_certificate

security_certificate
~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.5

The relative or absolute path to an X.509 certificate file
used to sign messages when :ref:`message-signing` is used.

.. setting:: security_cert_store

security_cert_store
~~~~~~~~~~~~~~~~~~~

.. versionadded:: 2.5

The directory containing X.509 certificates used for
:ref:`message-signing`.  Can be a glob with wildcards
(for example :file:`/etc/certs/*.pem`).

.. _conf-custom-components:

Custom Component Classes (advanced)
-----------------------------------

.. setting:: worker_pool

worker_pool
~~~~~~~~~~~

Name of the pool class used by the worker.

.. admonition:: Eventlet/Gevent

    Never use this option to select the eventlet or gevent pool.
    You must use the `-P` option instead, otherwise the monkey patching
    will happen too late and things will break in strange and silent ways.

Default is ``celery.concurrency.prefork:TaskPool``.

.. setting:: worker_pool_restarts

worker_pool_restarts
~~~~~~~~~~~~~~~~~~~~

If enabled the worker pool can be restarted using the
:control:`pool_restart` remote control command.

Disabled by default.

.. setting:: worker_autoscaler

worker_autoscaler
~~~~~~~~~~~~~~~~~

.. versionadded:: 2.2

Name of the autoscaler class to use.

Default is ``celery.worker.autoscale:Autoscaler``.

.. setting:: worker_autoreloader

worker_autoreloader
~~~~~~~~~~~~~~~~~~~

Name of the autoreloader class used by the worker to reload
Python modules and files that have changed.

Default is: ``celery.worker.autoreload:Autoreloader``.

.. setting:: worker_consumer

worker_consumer
~~~~~~~~~~~~~~~

Name of the consumer class used by the worker.
Default is :class:`celery.worker.consumer.Consumer`.

.. setting:: worker_timer

worker_timer
~~~~~~~~~~~~

Name of the ETA scheduler class used by the worker.
Default is :class:`kombu.async.hub.timer.Timer`, or one overridden
by the pool implementation.

.. _conf-celerybeat:

Beat Settings (:program:`celery beat`)
--------------------------------------

.. setting:: beat_schedule

beat_schedule
~~~~~~~~~~~~~

The periodic task schedule used by :mod:`~celery.bin.beat`.
See :ref:`beat-entries`.
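
A minimal sketch of a schedule with one interval-based and one
crontab-based entry (the task names are illustrative):

.. code-block:: python

    from celery.schedules import crontab

    beat_schedule = {
        # Run myapp.tasks.add every 30 seconds with fixed arguments.
        'add-every-30-seconds': {
            'task': 'myapp.tasks.add',
            'schedule': 30.0,
            'args': (16, 16),
        },
        # Run a cleanup task every morning at 7:30.
        'daily-cleanup': {
            'task': 'myapp.tasks.cleanup',
            'schedule': crontab(hour=7, minute=30),
        },
    }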
.. setting:: beat_scheduler

beat_scheduler
~~~~~~~~~~~~~~

The default scheduler class.  Default is ``celery.beat:PersistentScheduler``.

Can also be set via the :option:`-S` argument to
:mod:`~celery.bin.beat`.

.. setting:: beat_schedule_filename

beat_schedule_filename
~~~~~~~~~~~~~~~~~~~~~~

Name of the file used by `PersistentScheduler` to store the last run times
of periodic tasks.  Can be a relative or absolute path, but be aware that the
suffix `.db` may be appended to the file name (depending on Python version).

Can also be set via the :option:`--schedule` argument to
:mod:`~celery.bin.beat`.

.. setting:: beat_sync_every

beat_sync_every
~~~~~~~~~~~~~~~

The number of periodic tasks that can be called before another database sync
is issued.
Defaults to 0 (sync based on timing - default of 3 minutes as determined by
scheduler.sync_every).  If set to 1, beat will call sync after every task
message sent.

.. setting:: beat_max_loop_interval

beat_max_loop_interval
~~~~~~~~~~~~~~~~~~~~~~

The maximum number of seconds :mod:`~celery.bin.beat` can sleep
between checking the schedule.

The default for this value is scheduler specific.
For the default celery beat scheduler the value is 300 (5 minutes),
but for e.g. the django-celery database scheduler it is 5 seconds
because the schedule may be changed externally, and so it must take
changes to the schedule into account.

Also when running celery beat embedded (:option:`-B`) on Jython as a thread
the max interval is overridden and set to 1 so that it's possible
to shut down in a timely manner.