.. _guide-workers:

===============
 Workers Guide
===============

.. contents::
    :local:
    :depth: 1

.. _worker-starting:

Starting the worker
===================
.. sidebar:: Daemonizing

    You probably want to use a daemonization tool to start
    the worker in the background. See :ref:`daemonizing` for help
    detaching the worker using popular daemonization tools.

You can start the worker in the foreground by executing the command:

.. code-block:: console

    $ celery -A proj worker -l info

For a full list of available command-line options see
:mod:`~celery.bin.worker`, or simply do:

.. code-block:: console

    $ celery worker --help

You can also start multiple workers on the same machine. If you do so
be sure to give a unique name to each individual worker by specifying a
node name with the :option:`--hostname|-n` argument:

.. code-block:: console

    $ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker1.%h
    $ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker2.%h
    $ celery -A proj worker --loglevel=INFO --concurrency=10 -n worker3.%h

The ``hostname`` argument can expand the following variables:

- ``%h``: Hostname including domain name.
- ``%n``: Hostname only.
- ``%d``: Domain name only.

E.g. if the current hostname is ``george.example.com`` then
these will expand to:

- ``worker1.%h`` -> ``worker1.george.example.com``
- ``worker1.%n`` -> ``worker1.george``
- ``worker1.%d`` -> ``worker1.example.com``

.. admonition:: Note for :program:`supervisord` users

    The ``%`` sign must be escaped by adding a second one: ``%%h``.
.. _worker-stopping:

Stopping the worker
===================

Shutdown should be accomplished using the :sig:`TERM` signal.

When shutdown is initiated the worker will finish all currently executing
tasks before it actually terminates, so if these tasks are important you should
wait for it to finish before doing anything drastic (like sending the :sig:`KILL`
signal).

If the worker won't shutdown after a considerate amount of time, for example
because of tasks stuck in an infinite loop, you can use the :sig:`KILL` signal
to force terminate the worker, but be aware that currently executing tasks will
be lost (unless the tasks have the :attr:`~@Task.acks_late`
option set).

Also, as processes can't override the :sig:`KILL` signal, the worker will
not be able to reap its children, so make sure to do so manually. This
command usually does the trick:

.. code-block:: console

    $ ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
.. _worker-restarting:

Restarting the worker
=====================

To restart the worker you should send the :sig:`TERM` signal and start a new
instance. The easiest way to manage workers for development
is by using :program:`celery multi`:

.. code-block:: console

    $ celery multi start 1 -A proj -l info -c4 --pidfile=/var/run/celery/%n.pid
    $ celery multi restart 1 --pidfile=/var/run/celery/%n.pid

For production deployments you should be using init scripts or other process
supervision systems (see :ref:`daemonizing`).

Other than stopping, then starting the worker to restart, you can also
restart the worker using the :sig:`HUP` signal, but note that the worker
will be responsible for restarting itself, so this is prone to problems and
is not recommended in production:

.. code-block:: console

    $ kill -HUP $pid

.. note::

    Restarting by :sig:`HUP` only works if the worker is running
    in the background as a daemon (it does not have a controlling
    terminal).

    :sig:`HUP` is disabled on OS X because of a limitation on
    that platform.
.. _worker-process-signals:

Process Signals
===============

The worker's main process overrides the following signals:

+--------------+-------------------------------------------------+
| :sig:`TERM`  | Warm shutdown, wait for tasks to complete.      |
+--------------+-------------------------------------------------+
| :sig:`QUIT`  | Cold shutdown, terminate ASAP.                  |
+--------------+-------------------------------------------------+
| :sig:`USR1`  | Dump traceback for all active threads.          |
+--------------+-------------------------------------------------+
| :sig:`USR2`  | Remote debug, see :mod:`celery.contrib.rdb`.    |
+--------------+-------------------------------------------------+
.. _worker-files:

Variables in file paths
=======================

The file path arguments for :option:`--logfile`, :option:`--pidfile`
and :option:`--statedb` can contain variables that the worker will expand:

Node name replacements
----------------------

- ``%p``: Full node name.
- ``%h``: Hostname including domain name.
- ``%n``: Hostname only.
- ``%d``: Domain name only.
- ``%i``: Prefork pool process index or 0 if MainProcess.
- ``%I``: Prefork pool process index with separator.

E.g. if the current hostname is ``george@foo.example.com`` then
these will expand to:

- ``--logfile=%p.log`` -> :file:`george@foo.example.com.log`
- ``--logfile=%h.log`` -> :file:`foo.example.com.log`
- ``--logfile=%n.log`` -> :file:`george.log`
- ``--logfile=%d.log`` -> :file:`example.com.log`

.. _worker-files-process-index:

Prefork pool process index
--------------------------

The prefork pool process index specifiers will expand into a different
filename depending on the process that will eventually need to open the file.

This can be used to specify one log file per child process.

Note that the numbers will stay within the process limit even if processes
exit or if autoscale/maxtasksperchild/time limits are used. That is, the
number is the *process index*, not the process count or pid.

* ``%i`` - Pool process index or 0 if MainProcess.

  For example, ``-n worker1@example.com -c2 -f %n-%i.log`` will result in
  three log files:

  - :file:`worker1-0.log` (main process)
  - :file:`worker1-1.log` (pool process 1)
  - :file:`worker1-2.log` (pool process 2)

* ``%I`` - Pool process index with separator.

  For example, ``-n worker1@example.com -c2 -f %n%I.log`` will result in
  three log files:

  - :file:`worker1.log` (main process)
  - :file:`worker1-1.log` (pool process 1)
  - :file:`worker1-2.log` (pool process 2)
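Putting it together, a worker started like this (the node name and log file
pattern are illustrative):

.. code-block:: console

    $ celery -A proj worker -n worker1@%h -c2 -f %n%I.log

would create :file:`worker1.log` for the main process and
:file:`worker1-1.log`/:file:`worker1-2.log` for the two pool processes,
as described above.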
.. _worker-concurrency:

Concurrency
===========

By default multiprocessing is used to perform concurrent execution of tasks,
but you can also use :ref:`Eventlet <concurrency-eventlet>`. The number
of worker processes/threads can be changed using the
:option:`--concurrency` argument and defaults to the number of CPUs available
on the machine.

.. admonition:: Number of processes (multiprocessing/prefork pool)

    More pool processes are usually better, but there's a cut-off point where
    adding more pool processes affects performance in negative ways.
    There is even some evidence to support that having multiple worker
    instances running may perform better than having a single worker.
    For example 3 workers with 10 pool processes each. You need to experiment
    to find the numbers that work best for you, as this varies based on
    application, work load, task run times and other factors.
.. _worker-remote-control:

Remote control
==============

.. versionadded:: 2.0

.. sidebar:: The ``celery`` command

    The :program:`celery` program is used to execute remote control
    commands from the command-line. It supports all of the commands
    listed below. See :ref:`monitoring-control` for more information.

pool support: *prefork, eventlet, gevent*, blocking: *threads/solo* (see note)

broker support: *amqp, redis*

Workers have the ability to be remote controlled using a high-priority
broadcast message queue. The commands can be directed to all, or a specific
list of workers.

Commands can also have replies. The client can then wait for and collect
those replies. Since there's no central authority to know how many
workers are available in the cluster, there is also no way to estimate
how many workers may send a reply, so the client has a configurable
timeout — the deadline in seconds for replies to arrive in. This timeout
defaults to one second. If the worker doesn't reply within the deadline
it doesn't necessarily mean the worker didn't reply, or worse is dead, but
may simply be caused by network latency or the worker being slow at processing
commands, so adjust the timeout accordingly.

In addition to timeouts, the client can specify the maximum number
of replies to wait for. If a destination is specified, this limit is set
to the number of destination hosts.
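For example, a minimal sketch that collects at most three replies to a
broadcast ``ping`` using the ``limit`` argument (the reply contents shown
are illustrative):

.. code-block:: pycon

    >>> app.control.broadcast('ping', reply=True, limit=3)
    [{'worker1.example.com': 'pong'},
     {'worker2.example.com': 'pong'},
     {'worker3.example.com': 'pong'}]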
.. note::

    The solo and threads pools support remote control commands,
    but any task executing will block any waiting control command,
    so they are of limited use if the worker is very busy. In that
    case you must increase the timeout waiting for replies in the client.

.. _worker-broadcast-fun:

The :meth:`~@control.broadcast` function
----------------------------------------

This is the client function used to send commands to the workers.
Some remote control commands also have higher-level interfaces using
:meth:`~@control.broadcast` in the background, like
:meth:`~@control.rate_limit` and :meth:`~@control.ping`.

Sending the :control:`rate_limit` command and keyword arguments:

.. code-block:: pycon

    >>> app.control.broadcast('rate_limit',
    ...                       arguments={'task_name': 'myapp.mytask',
    ...                                  'rate_limit': '200/m'})

This will send the command asynchronously, without waiting for a reply.
To request a reply you have to use the `reply` argument:

.. code-block:: pycon

    >>> app.control.broadcast('rate_limit', {
    ...     'task_name': 'myapp.mytask', 'rate_limit': '200/m'}, reply=True)
    [{'worker1.example.com': 'New rate limit set successfully'},
     {'worker2.example.com': 'New rate limit set successfully'},
     {'worker3.example.com': 'New rate limit set successfully'}]

Using the `destination` argument you can specify a list of workers
to receive the command:

.. code-block:: pycon

    >>> app.control.broadcast('rate_limit', {
    ...     'task_name': 'myapp.mytask',
    ...     'rate_limit': '200/m'}, reply=True,
    ...     destination=['worker1@example.com'])
    [{'worker1.example.com': 'New rate limit set successfully'}]

Of course, using the higher-level interface to set rate limits is much
more convenient, but there are commands that can only be requested
using :meth:`~@control.broadcast`.
Commands
========

.. control:: revoke

``revoke``: Revoking tasks
--------------------------

:pool support: all, terminate only supported by prefork
:broker support: *amqp, redis*
:command: :program:`celery -A proj control revoke <task_id>`

All worker nodes keep a memory of revoked task ids, either in-memory or
persistent on disk (see :ref:`worker-persistent-revokes`).

When a worker receives a revoke request it will skip executing
the task, but it won't terminate an already executing task unless
the `terminate` option is set.

.. note::

    The terminate option is a last resort for administrators when
    a task is stuck. It's not for terminating the task,
    it's for terminating the process that is executing the task, and that
    process may have already started processing another task at the point
    when the signal is sent, so for this reason you must never call this
    programmatically.

If `terminate` is set the worker child process processing the task
will be terminated. The default signal sent is `TERM`, but you can
specify this using the `signal` argument. Signal can be the uppercase name
of any signal defined in the :mod:`signal` module in the Python Standard
Library.

Terminating a task also revokes it.

**Example**

.. code-block:: pycon

    >>> result.revoke()

    >>> AsyncResult(id).revoke()

    >>> app.control.revoke('d9078da5-9915-40a0-bfa1-392c7bde42ed')

    >>> app.control.revoke('d9078da5-9915-40a0-bfa1-392c7bde42ed',
    ...                    terminate=True)

    >>> app.control.revoke('d9078da5-9915-40a0-bfa1-392c7bde42ed',
    ...                    terminate=True, signal='SIGKILL')

Revoking multiple tasks
-----------------------

.. versionadded:: 3.1

The revoke method also accepts a list argument, where it will revoke
several tasks at once.

**Example**

.. code-block:: pycon

    >>> app.control.revoke([
    ...     '7993b0aa-1f0b-4780-9af0-c47c0858b3f2',
    ...     'f565793e-b041-4b2b-9ca4-dca22762a55d',
    ...     'd9d35e03-2997-42d0-a13e-64a66b88a618',
    ... ])

The ``GroupResult.revoke`` method takes advantage of this since
version 3.1.
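For example, a minimal sketch (assuming a ``mytask`` task defined in your
app):

.. code-block:: pycon

    >>> from celery import group
    >>> result = group(mytask.s(i) for i in range(10)).apply_async()
    >>> result.revoke()  # revokes every task in the group in one request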
.. _worker-persistent-revokes:

Persistent revokes
------------------

Revoking tasks works by sending a broadcast message to all the workers,
the workers then keep a list of revoked tasks in memory. When a worker starts
up it will synchronize revoked tasks with other workers in the cluster.

The list of revoked tasks is in-memory, so if all workers restart the list
of revoked ids will also vanish. If you want to preserve this list between
restarts you need to specify a file for these to be stored in by using the
`--statedb` argument to :program:`celery worker`:

.. code-block:: console

    $ celery -A proj worker -l info --statedb=/var/run/celery/worker.state

or if you use :program:`celery multi` you will want to create one file per
worker instance, so use the `%n` format to expand the current node
name:

.. code-block:: console

    $ celery multi start 2 -l info --statedb=/var/run/celery/%n.state

See also :ref:`worker-files`.

Note that remote control commands must be working for revokes to work.
Remote control commands are only supported by the RabbitMQ (amqp) and Redis
transports at this point.
.. _worker-time-limits:

Time Limits
===========

.. versionadded:: 2.0

pool support: *prefork/gevent*

.. sidebar:: Soft, or hard?

    The time limit is set in two values, `soft` and `hard`.
    The soft time limit allows the task to catch an exception
    to clean up before it is killed: the hard timeout is not catchable
    and force terminates the task.

A single task can potentially run forever; if you have lots of tasks
waiting for some event that will never happen you will block the worker
from processing new tasks indefinitely. The best way to defend against
this scenario happening is enabling time limits.

The time limit (`--time-limit`) is the maximum number of seconds a task
may run before the process executing it is terminated and replaced by a
new process. You can also enable a soft time limit (`--soft-time-limit`),
this raises an exception the task can catch to clean up before the hard
time limit kills it:

.. code-block:: python

    from myapp import app
    from celery.exceptions import SoftTimeLimitExceeded

    @app.task
    def mytask():
        try:
            do_work()
        except SoftTimeLimitExceeded:
            clean_up_in_a_hurry()

Time limits can also be set using the :setting:`task_time_limit` /
:setting:`task_soft_time_limit` settings.
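For example, a minimal configuration sketch (the values are illustrative):

.. code-block:: python

    app.conf.update(
        task_time_limit=120,      # hard limit, in seconds
        task_soft_time_limit=60,  # raises SoftTimeLimitExceeded first
    )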
.. note::

    Time limits do not currently work on Windows and other
    platforms that do not support the ``SIGUSR1`` signal.
Changing time limits at runtime
-------------------------------

.. versionadded:: 2.3

broker support: *amqp, redis*

There is a remote control command that enables you to change both soft
and hard time limits for a task — named ``time_limit``.

Example changing the time limit for the ``tasks.crawl_the_web`` task
to have a soft time limit of one minute, and a hard time limit of
two minutes:

.. code-block:: pycon

    >>> app.control.time_limit('tasks.crawl_the_web',
    ...                        soft=60, hard=120, reply=True)
    [{'worker1.example.com': {'ok': 'time limits set successfully'}}]

Only tasks that start executing after the time limit change will be affected.
.. _worker-rate-limits:

Rate Limits
===========

.. control:: rate_limit

Changing rate-limits at runtime
-------------------------------

Example changing the rate limit for the `myapp.mytask` task to execute
at most 200 tasks of that type every minute:

.. code-block:: pycon

    >>> app.control.rate_limit('myapp.mytask', '200/m')

The above does not specify a destination, so the change request will affect
all worker instances in the cluster. If you only want to affect a specific
list of workers you can include the ``destination`` argument:

.. code-block:: pycon

    >>> app.control.rate_limit('myapp.mytask', '200/m',
    ...                        destination=['celery@worker1.example.com'])

.. warning::

    This won't affect workers with the
    :setting:`worker_disable_rate_limits` setting enabled.
.. _worker-maxtasksperchild:

Max tasks per child setting
===========================

.. versionadded:: 2.0

pool support: *prefork*

With this option you can configure the maximum number of tasks
a worker can execute before it's replaced by a new process.

This is useful if you have memory leaks you have no control over,
for example from closed source C extensions.

The option can be set using the worker's `--maxtasksperchild` argument
or using the :setting:`worker_max_tasks_per_child` setting.
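For example, a sketch that recycles each pool process after it has executed
100 tasks (the number is illustrative):

.. code-block:: console

    $ celery -A proj worker --maxtasksperchild=100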
Max memory per child setting
============================

.. versionadded:: TODO

pool support: *prefork*

With this option you can configure the maximum amount of resident
memory a worker can consume before it's replaced by a new process.

This is useful if you have memory leaks you have no control over,
for example from closed source C extensions.

The option can be set using the worker's `--maxmemperchild` argument
or using the :setting:`CELERYD_MAX_MEMORY_PER_CHILD` setting.
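For example, a sketch that replaces a pool process once its resident memory
exceeds the given amount (the value is illustrative, and the unit is assumed
to be kilobytes):

.. code-block:: console

    $ celery -A proj worker --maxmemperchild=12000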
.. _worker-autoscaling:

Autoscaling
===========

.. versionadded:: 2.2

pool support: *prefork*, *gevent*

The *autoscaler* component is used to dynamically resize the pool
based on load:

- The autoscaler adds more pool processes when there is work to do,
- and starts removing processes when the workload is low.

It's enabled by the :option:`--autoscale` option, which needs two
numbers: the maximum and minimum number of pool processes::

    --autoscale=AUTOSCALE
         Enable autoscaling by providing
         max_concurrency,min_concurrency. Example:
           --autoscale=10,3 (always keep 3 processes, but grow to
           10 if necessary).
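For example, a worker that keeps at least three pool processes and grows to
ten under load:

.. code-block:: console

    $ celery -A proj worker --autoscale=10,3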
You can also define your own rules for the autoscaler by subclassing
:class:`~celery.worker.autoscaler.Autoscaler`.
Some ideas for metrics include load average or the amount of memory available.
You can specify a custom autoscaler with the :setting:`worker_autoscaler` setting.
.. _worker-queues:

Queues
======

A worker instance can consume from any number of queues.
By default it will consume from all queues defined in the
:setting:`task_queues` setting (which if not specified defaults to the
queue named ``celery``).

You can specify what queues to consume from at startup,
by giving a comma separated list of queues to the :option:`-Q` option:

.. code-block:: console

    $ celery -A proj worker -l info -Q foo,bar,baz

If the queue name is defined in :setting:`task_queues` it will use that
configuration, but if it's not defined in the list of queues Celery will
automatically generate a new queue for you (depending on the
:setting:`task_create_missing_queues` option).
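As a sketch, the queues can be declared up-front in :setting:`task_queues`
using :class:`kombu.Queue` (the queue names are illustrative):

.. code-block:: python

    from kombu import Queue

    app.conf.task_queues = [
        Queue('foo'),
        Queue('bar'),
        Queue('baz'),
    ]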
You can also tell the worker to start and stop consuming from a queue at
runtime using the remote control commands :control:`add_consumer` and
:control:`cancel_consumer`.

.. control:: add_consumer

Queues: Adding consumers
------------------------

The :control:`add_consumer` control command will tell one or more workers
to start consuming from a queue. This operation is idempotent.

To tell all workers in the cluster to start consuming from a queue
named "``foo``" you can use the :program:`celery control` program:

.. code-block:: console

    $ celery -A proj control add_consumer foo
    -> worker1.local: OK
        started consuming from u'foo'

If you want to specify a specific worker you can use the
:option:`--destination` argument:

.. code-block:: console

    $ celery -A proj control add_consumer foo -d worker1.local

The same can be accomplished dynamically using the
:meth:`@control.add_consumer` method:

.. code-block:: pycon

    >>> app.control.add_consumer('foo', reply=True)
    [{u'worker1.local': {u'ok': u"already consuming from u'foo'"}}]

    >>> app.control.add_consumer('foo', reply=True,
    ...                          destination=['worker1@example.com'])
    [{u'worker1.local': {u'ok': u"already consuming from u'foo'"}}]
So far we've only shown examples using automatic queues.
If you need more control you can also specify the exchange, routing key and
even other options:

.. code-block:: pycon

    >>> app.control.add_consumer(
    ...     queue='baz',
    ...     exchange='ex',
    ...     exchange_type='topic',
    ...     routing_key='media.*',
    ...     options={
    ...         'queue_durable': False,
    ...         'exchange_durable': False,
    ...     },
    ...     reply=True,
    ...     destination=['w1@example.com', 'w2@example.com'])
.. control:: cancel_consumer

Queues: Canceling consumers
---------------------------

You can cancel a consumer by queue name using the :control:`cancel_consumer`
control command.

To force all workers in the cluster to cancel consuming from a queue
you can use the :program:`celery control` program:

.. code-block:: console

    $ celery -A proj control cancel_consumer foo

The :option:`--destination` argument can be used to specify a worker, or a
list of workers, to act on the command:

.. code-block:: console

    $ celery -A proj control cancel_consumer foo -d worker1.local

You can also cancel consumers programmatically using the
:meth:`@control.cancel_consumer` method:

.. code-block:: pycon

    >>> app.control.cancel_consumer('foo', reply=True)
    [{u'worker1.local': {u'ok': u"no longer consuming from u'foo'"}}]
.. control:: active_queues

Queues: List of active queues
-----------------------------

You can get a list of queues that a worker consumes from by using
the :control:`active_queues` control command:

.. code-block:: console

    $ celery -A proj inspect active_queues
    [...]

Like all other remote control commands this also supports the
:option:`--destination` argument used to specify which workers should
reply to the request:

.. code-block:: console

    $ celery -A proj inspect active_queues -d worker1.local
    [...]

This can also be done programmatically by using the
:meth:`@control.inspect.active_queues` method:

.. code-block:: pycon

    >>> app.control.inspect().active_queues()
    [...]

    >>> app.control.inspect(['worker1.local']).active_queues()
    [...]
.. _worker-autoreloading:

Autoreloading
=============

.. versionadded:: 2.5

pool support: *prefork, eventlet, gevent, threads, solo*

Starting :program:`celery worker` with the :option:`--autoreload` option will
enable the worker to watch for file system changes to all imported task
modules (and also any non-task modules added to the
:setting:`imports` setting or the :option:`-I|--include` option).

This is an experimental feature intended for use in development only;
using auto-reload in production is discouraged as the behavior of reloading
a module in Python is undefined, and may cause hard to diagnose bugs and
crashes. Celery uses the same approach as the auto-reloader found in e.g.
the Django ``runserver`` command.

When auto-reload is enabled the worker starts an additional thread
that watches for changes in the file system. New modules are imported,
and already imported modules are reloaded whenever a change is detected,
and if the prefork pool is used the child processes will finish the work
they are doing and exit, so that they can be replaced by fresh processes,
effectively reloading the code.
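For example, starting a development worker with auto-reload enabled:

.. code-block:: console

    $ celery -A proj worker -l info --autoreload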
File system notification backends are pluggable, and it comes with three
implementations:

* inotify (Linux)

  Used if the :mod:`pyinotify` library is installed.
  If you are running on Linux this is the recommended implementation,
  to install the :mod:`pyinotify` library you have to run the following
  command:

  .. code-block:: console

      $ pip install pyinotify

* kqueue (OS X/BSD)

* stat

  The fallback implementation simply polls the files using ``stat`` and is
  very expensive.

You can force an implementation by setting the :envvar:`CELERYD_FSNOTIFY`
environment variable:

.. code-block:: console

    $ env CELERYD_FSNOTIFY=stat celery worker -l info --autoreload
.. _worker-autoreload:

.. control:: pool_restart

Pool Restart Command
--------------------

.. versionadded:: 2.5

Requires the :setting:`worker_pool_restarts` setting to be enabled.

The remote control command :control:`pool_restart` sends restart requests to
the worker's child processes. It is particularly useful for forcing
the worker to import new modules, or for reloading already imported
modules. This command does not interrupt executing tasks.

Example
~~~~~~~

Running the following command will result in the `foo` and `bar` modules
being imported by the worker processes:

.. code-block:: pycon

    >>> app.control.broadcast('pool_restart',
    ...                       arguments={'modules': ['foo', 'bar']})

Use the ``reload`` argument to reload modules it has already imported:

.. code-block:: pycon

    >>> app.control.broadcast('pool_restart',
    ...                       arguments={'modules': ['foo'],
    ...                                  'reload': True})

If you don't specify any modules then all known task modules will
be imported/reloaded:

.. code-block:: pycon

    >>> app.control.broadcast('pool_restart', arguments={'reload': True})

The ``modules`` argument is a list of modules to modify. ``reload``
specifies whether to reload modules if they have previously been imported.
By default ``reload`` is disabled. The `pool_restart` command uses the
Python :func:`reload` function to reload modules, or you can provide
your own custom reloader by passing the ``reloader`` argument.

.. note::

    Module reloading comes with caveats that are documented in :func:`reload`.
    Please read this documentation and make sure your modules are suitable
    for reloading.

.. seealso::

    - http://pyunit.sourceforge.net/notes/reloading.html
    - http://www.indelible.org/ink/python-reloading/
    - http://docs.python.org/library/functions.html#reload
.. _worker-inspect:

Inspecting workers
==================

:class:`@control.inspect` lets you inspect running workers. It
uses remote control commands under the hood.

You can also use the ``celery`` command to inspect workers,
and it supports the same commands as the :class:`@control` interface.

.. code-block:: pycon

    >>> # Inspect all nodes.
    >>> i = app.control.inspect()

    >>> # Specify multiple nodes to inspect.
    >>> i = app.control.inspect(['worker1.example.com',
    ...                          'worker2.example.com'])

    >>> # Specify a single node to inspect.
    >>> i = app.control.inspect('worker1.example.com')
.. _worker-inspect-registered-tasks:

Dump of registered tasks
------------------------

You can get a list of tasks registered in the worker using the
:meth:`~@control.inspect.registered` method:

.. code-block:: pycon

    >>> i.registered()
    [{'worker1.example.com': ['tasks.add',
                              'tasks.sleeptask']}]

.. _worker-inspect-active-tasks:

Dump of currently executing tasks
---------------------------------

You can get a list of active tasks using
:meth:`~@control.inspect.active`:

.. code-block:: pycon

    >>> i.active()
    [{'worker1.example.com':
        [{'name': 'tasks.sleeptask',
          'id': '32666e9b-809c-41fa-8e93-5ae0c80afbbf',
          'args': '(8,)',
          'kwargs': '{}'}]}]

.. _worker-inspect-eta-schedule:

Dump of scheduled (ETA) tasks
-----------------------------

You can get a list of tasks waiting to be scheduled by using
:meth:`~@control.inspect.scheduled`:

.. code-block:: pycon

    >>> i.scheduled()
    [{'worker1.example.com':
        [{'eta': '2010-06-07 09:07:52', 'priority': 0,
          'request': {
            'name': 'tasks.sleeptask',
            'id': '1a7980ea-8b19-413e-91d2-0b74f3844c4d',
            'args': '[1]',
            'kwargs': '{}'}},
         {'eta': '2010-06-07 09:07:53', 'priority': 0,
          'request': {
            'name': 'tasks.sleeptask',
            'id': '49661b9a-aa22-4120-94b7-9ee8031d219d',
            'args': '[2]',
            'kwargs': '{}'}}]}]

.. note::

    These are tasks with an eta/countdown argument, not periodic tasks.

.. _worker-inspect-reserved:

Dump of reserved tasks
----------------------

Reserved tasks are tasks that have been received, but are still waiting to be
executed.

You can get a list of these using
:meth:`~@control.inspect.reserved`:

.. code-block:: pycon

    >>> i.reserved()
    [{'worker1.example.com':
        [{'name': 'tasks.sleeptask',
          'id': '32666e9b-809c-41fa-8e93-5ae0c80afbbf',
          'args': '(8,)',
          'kwargs': '{}'}]}]
.. _worker-statistics:

Statistics
----------

The remote control command ``inspect stats`` (or
:meth:`~@control.inspect.stats`) will give you a long list of useful (or not
so useful) statistics about the worker:

.. code-block:: console

    $ celery -A proj inspect stats

The output will include the following fields:

- ``broker``

    Section for broker information.

    * ``connect_timeout``

        Timeout in seconds (int/float) for establishing a new connection.

    * ``heartbeat``

        Current heartbeat value (set by client).

    * ``hostname``

        Node name of the remote broker.

    * ``insist``

        No longer used.

    * ``login_method``

        Login method used to connect to the broker.

    * ``port``

        Port of the remote broker.

    * ``ssl``

        SSL enabled/disabled.

    * ``transport``

        Name of transport used (e.g. ``amqp`` or ``redis``).

    * ``transport_options``

        Options passed to transport.

    * ``uri_prefix``

        Some transports expect the hostname to be a URL; this applies for
        example to SQLAlchemy, where the hostname part is the connection
        URI::

            redis+socket:///tmp/redis.sock

        In this example the URI prefix will be ``redis``.

    * ``userid``

        User id used to connect to the broker with.

    * ``virtual_host``

        Virtual host used.

- ``clock``

    Value of the worker's logical clock. This is a positive integer and
    should be increasing every time you receive statistics.

- ``pid``

    Process id of the worker instance (Main process).

- ``pool``

    Pool-specific section.

    * ``max-concurrency``

        Max number of processes/threads/green threads.

    * ``max-tasks-per-child``

        Max number of tasks a thread may execute before being recycled.

    * ``processes``

        List of pids (or thread-id's).

    * ``put-guarded-by-semaphore``

        Internal.

    * ``timeouts``

        Default values for time limits.

    * ``writes``

        Specific to the prefork pool, this shows the distribution of writes
        to each process in the pool when using async I/O.

- ``prefetch_count``

    Current prefetch count value for the task consumer.

- ``rusage``

    System usage statistics. The fields available may be different
    on your platform.

    From :manpage:`getrusage(2)`:

    * ``stime``

        Time spent in operating system code on behalf of this process.

    * ``utime``

        Time spent executing user instructions.

    * ``maxrss``

        The maximum resident size used by this process (in kilobytes).

    * ``idrss``

        Amount of unshared memory used for data (in kilobytes times ticks
        of execution).

    * ``isrss``

        Amount of unshared memory used for stack space (in kilobytes times
        ticks of execution).

    * ``ixrss``

        Amount of memory shared with other processes (in kilobytes times
        ticks of execution).

    * ``inblock``

        Number of times the file system had to read from the disk on behalf
        of this process.

    * ``oublock``

        Number of times the file system had to write to disk on behalf of
        this process.

    * ``majflt``

        Number of page faults that were serviced by doing I/O.

    * ``minflt``

        Number of page faults that were serviced without doing I/O.

    * ``msgrcv``

        Number of IPC messages received.

    * ``msgsnd``

        Number of IPC messages sent.

    * ``nvcsw``

        Number of times this process voluntarily invoked a context switch.

    * ``nivcsw``

        Number of times an involuntary context switch took place.

    * ``nsignals``

        Number of signals received.

    * ``nswap``

        The number of times this process was swapped entirely out of memory.

- ``total``

    Map of task names and the total number of tasks with that type
    the worker has accepted since startup.
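The same information is available programmatically via the inspect
interface; for example:

.. code-block:: pycon

    >>> app.control.inspect().stats()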
Additional Commands
===================

.. control:: shutdown

Remote shutdown
---------------

This command will gracefully shut down the worker remotely:

.. code-block:: pycon

    >>> app.control.broadcast('shutdown')  # shutdown all workers
    >>> app.control.broadcast('shutdown', destination='worker1@example.com')

.. control:: ping

Ping
----

This command requests a ping from alive workers.
The workers reply with the string 'pong', and that's just about it.
It will use the default one second timeout for replies unless you specify
a custom timeout:

.. code-block:: pycon

    >>> app.control.ping(timeout=0.5)
    [{'worker1.example.com': 'pong'},
     {'worker2.example.com': 'pong'},
     {'worker3.example.com': 'pong'}]

:meth:`~@control.ping` also supports the `destination` argument,
so you can specify which workers to ping:

.. code-block:: pycon

    >>> app.control.ping(['worker2.example.com', 'worker3.example.com'])
    [{'worker2.example.com': 'pong'},
     {'worker3.example.com': 'pong'}]
.. _worker-enable-events:

.. control:: enable_events
.. control:: disable_events

Enable/disable events
---------------------

You can enable/disable events by using the `enable_events`,
`disable_events` commands. This is useful to temporarily monitor
a worker using :program:`celery events`/:program:`celerymon`.

.. code-block:: pycon

    >>> app.control.enable_events()
    >>> app.control.disable_events()
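The same can be done from the command line with the
:program:`celery control` program; a quick sketch:

.. code-block:: console

    $ celery -A proj control enable_events
    $ celery -A proj control disable_events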
.. _worker-custom-control-commands:

Writing your own remote control commands
========================================

Remote control commands are registered in the control panel and
they take a single argument: the current
:class:`~celery.worker.control.ControlDispatch` instance.
From there you have access to the active
:class:`~celery.worker.consumer.Consumer` if needed.

Here's an example control command that increments the task prefetch count:

.. code-block:: python

    from celery.worker.control import Panel

    @Panel.register
    def increase_prefetch_count(state, n=1):
        state.consumer.qos.increment_eventually(n)
        return {'ok': 'prefetch count incremented'}
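Once the module defining the command has been imported by the worker, it can
be invoked like any other broadcast command; a minimal sketch (the reply
shape follows the example above):

.. code-block:: pycon

    >>> app.control.broadcast('increase_prefetch_count',
    ...                       arguments={'n': 2}, reply=True)
    [{'worker1.example.com': {'ok': 'prefetch count incremented'}}]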