.. _optimizing:

============
Optimizing
============

Introduction
============

The default configuration, like any good default, is full of compromises.
It is not tweaked to be optimal for any single use case, but tries to
find middle ground that works *well enough* for most situations.

There are key optimizations to make if your application mainly
processes lots of short tasks, and others to make if you have fewer but
very long tasks.

.. _optimizing-worker-settings:

Worker Settings
===============

.. _optimizing-prefetch-limit:

Prefetch limit
--------------

*Prefetch* is a term inherited from AMQP, and it is often misunderstood.

The prefetch limit is a limit for how many tasks a worker can reserve
in advance. If it is set to zero, the worker will keep consuming
messages *ad infinitum*, ignoring that there may be other
available worker nodes (that may be able to process them sooner),
or that the messages may not fit in memory.

The worker's initial prefetch count is set by multiplying
the :setting:`CELERYD_PREFETCH_MULTIPLIER` setting by the number
of child worker processes. The default is 4 messages per child process.

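As a quick sanity check, the resulting prefetch count can be computed
directly. This is only an illustration of the arithmetic; the variable
names below are not actual Celery internals:

.. code-block:: python

    CELERYD_PREFETCH_MULTIPLIER = 4  # the default
    child_processes = 8              # e.g. a worker started with --concurrency=8

    # Initial prefetch count: multiplier times number of child processes.
    initial_prefetch = CELERYD_PREFETCH_MULTIPLIER * child_processes
    print(initial_prefetch)  # 32 messages reserved in advance
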
If you have many expensive tasks with a long duration you want
the multiplier value to be 1, which means the worker will only reserve
one unacknowledged task per worker process at a time.

However, if you have lots of short tasks and throughput/round-trip latency
is important to you, then you want this number to be large. Say 64, or 128
for example, as the worker is able to process far more tasks per second if the
messages have already been prefetched into memory. You may have to experiment
to find the best value.

If you have a combination of both very long and short tasks, the best
option is to use two worker nodes that are configured individually, and route
the tasks accordingly (see :ref:`guide-routing`).

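Such a split can be sketched with task routing. The task name and queue
names below are examples only, not part of Celery's defaults:

.. code-block:: python

    # Route the long-running task to its own queue; every other task
    # falls through to the default queue.
    CELERY_ROUTES = {
        'tasks.generate_report': {'queue': 'long'},
    }

    # Each worker node then consumes only its own queue, e.g.:
    #
    #   $ celery worker -Q long -c 2
    #   $ celery worker -Q celery -c 8
    #
    # so the multiplier can be tuned separately for each node.
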
Scenario 1: Lots of short tasks
===============================

.. code-block:: python

    CELERYD_PREFETCH_MULTIPLIER = 128
    CELERY_DISABLE_RATE_LIMITS = True

Scenario 2: Expensive tasks
===========================

.. code-block:: python

    CELERYD_PREFETCH_MULTIPLIER = 1