.. _guide-optimizing:

============
 Optimizing
============

Introduction
============

The default configuration makes a lot of compromises.  It's not optimal for
any single case, but works well enough for most situations.

There are optimizations that can be applied based on specific use cases.

Optimizations can apply to different properties of the running environment,
be it the time tasks take to execute, the amount of memory used, or
responsiveness at times of high load.

Ensuring Operations
===================

In the book `Programming Pearls`_, Jon Bentley presents the concept of
back-of-the-envelope calculations by asking the question:

    ❝ How much water flows out of the Mississippi River in a day? ❞

The point of this exercise [*]_ is to show that there is a limit
to how much data a system can process in a timely manner.
Back-of-the-envelope calculations can be used as a means to plan for this
ahead of time.

In Celery, if a task takes 10 minutes to complete
and there are 10 new tasks coming in every minute, the queue will never
be empty.  This is why it's very important
that you monitor queue lengths!

A way to do this is by :ref:`using Munin <monitoring-munin>`.
You should set up alerts that will notify you as soon as any queue has
reached an unacceptable size.  This way you can take appropriate action,
like adding new worker nodes or revoking unnecessary tasks.

.. [*] The chapter is available to read for free here:
       `The back of the envelope`_.  The book is a classic text.  Highly
       recommended.

.. _`Programming Pearls`: http://www.cs.bell-labs.com/cm/cs/pearls/

.. _`The back of the envelope`:
   http://books.google.com/books?id=kse_7qbWbjsC&pg=PA67

.. _optimizing-worker-settings:

Worker Settings
===============

.. _optimizing-prefetch-limit:

Prefetch Limits
---------------

*Prefetch* is a term inherited from AMQP that is often misunderstood
by users.

The prefetch limit is a **limit** for the number of tasks (messages) a worker
can reserve for itself.  If it is zero, the worker will keep
consuming messages, not respecting that there may be other
available worker nodes that may be able to process them sooner [*]_,
or that the messages may not even fit in memory.

The workers' default prefetch count is the
:setting:`CELERYD_PREFETCH_MULTIPLIER` setting multiplied by the number
of child worker processes [*]_.

If you have many tasks with a long duration you want
the multiplier value to be 1, which means it will only reserve one
task per worker process at a time.

However -- if you have many short-running tasks, and throughput/round trip
latency is important to you, this number should be large.  The worker is
able to process more tasks per second if the messages have already been
prefetched and are available in memory.  You may have to experiment to find
the value that works best for you; values in the range of 50 to 150, say 64
or 128, might make sense in these circumstances.

If you have a combination of long- and short-running tasks, the best option
is to use two worker nodes that are configured separately, and route
the tasks according to their run-time (see :ref:`guide-routing`).
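As a rough sketch of what such a split can look like, the configuration below
routes tasks to separate queues using the :setting:`CELERY_ROUTES` setting.
The module, task, and queue names are hypothetical and only for illustration:

.. code-block:: python

    # Route long-running and short-running tasks to their own queues,
    # so each worker node can use a prefetch multiplier suited to its
    # workload.  Task and queue names here are hypothetical.
    CELERY_ROUTES = {
        "myapp.tasks.generate_report": {"queue": "long_tasks"},
        "myapp.tasks.send_notification": {"queue": "short_tasks"},
    }

Each node then consumes only its own queue (e.g. by starting
:program:`celeryd` with the :option:`-Q` option), with the node serving
``long_tasks`` configured with ``CELERYD_PREFETCH_MULTIPLIER = 1``, and the
node serving ``short_tasks`` using a higher multiplier.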
.. [*] RabbitMQ and other brokers deliver messages round-robin,
       so this doesn't apply to an active system.  If there is no prefetch
       limit and you restart the cluster, there will be timing delays between
       nodes starting.  If there are 3 offline nodes and one active node,
       all messages will be delivered to the active node.

.. [*] This is the concurrency setting; :setting:`CELERYD_CONCURRENCY` or the
       :option:`-c` option to :program:`celeryd`.

Reserve one task at a time
--------------------------

When using early acknowledgement (the default), a prefetch multiplier of 1
means the worker will reserve at most one extra task for every active
worker process.

When users ask if it's possible to disable "prefetching of tasks", often
what they really want is to have a worker only reserve as many tasks as there
are child processes.

But this is not possible without enabling late acknowledgements; a task
that has been started will be retried if the worker crashes mid-execution,
so the task must be `idempotent`_ (see also the notes at
:ref:`faq-acks_late-vs-retry`).

.. _`idempotent`: http://en.wikipedia.org/wiki/Idempotent

You can enable this behavior by using the following configuration options:

.. code-block:: python

    CELERY_ACKS_LATE = True
    CELERYD_PREFETCH_MULTIPLIER = 1
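With these settings a task may be executed more than once if a worker crashes
mid-execution, which is why it has to be idempotent.  One way to achieve that
is to have the task record an absolute fact rather than, say, increment a
counter.  A minimal sketch (the task name and storage are hypothetical, used
only to illustrate the idea):

.. code-block:: python

    import shelve

    from celery.task import task


    @task
    def mark_feed_imported(feed_url):
        """Hypothetical task: record that a feed has been imported.

        Setting the key to a fixed value is idempotent -- running the
        task a second time after a crash leaves the store in the same
        state, unlike e.g. incrementing a counter would.
        """
        store = shelve.open("feeds.db")
        try:
            store[feed_url] = True
        finally:
            store.close()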
.. _optimizing-rate-limits:

Rate Limits
-----------

The system responsible for enforcing rate limits introduces some overhead,
so if you're not using rate limits it may be a good idea to
disable them completely.  This disables one thread, and the worker won't
spend as many CPU cycles when the queue is inactive.

Set the :setting:`CELERY_DISABLE_RATE_LIMITS` setting to disable
the rate limit subsystem:

.. code-block:: python

    CELERY_DISABLE_RATE_LIMITS = True