.. _guide-workers:

===============
 Workers Guide
===============

.. contents::
    :local:
    :depth: 1

.. _worker-starting:

Starting the worker
===================

.. sidebar:: Daemonizing

    You probably want to use a daemonization tool to start the worker
    in the background.  See :ref:`daemonizing` for help
    detaching the worker using popular daemonization tools.

You can start the worker in the foreground by executing the command:

.. code-block:: bash

    $ celery --app=app worker -l info

For a full list of available command-line options see
:mod:`~celery.bin.worker`, or simply do:

.. code-block:: bash

    $ celery worker --help

You can also start multiple workers on the same machine.  If you do so
be sure to give a unique name to each individual worker by specifying a
host name with the :option:`--hostname|-n` argument:

.. code-block:: bash

    $ celery worker --loglevel=INFO --concurrency=10 -n worker1.%h
    $ celery worker --loglevel=INFO --concurrency=10 -n worker2.%h
    $ celery worker --loglevel=INFO --concurrency=10 -n worker3.%h

The hostname argument can expand the following variables:

    - ``%h``:  Hostname including domain name.
    - ``%n``:  Hostname only.
    - ``%d``:  Domain name only.

E.g. if the current hostname is ``george.example.com`` then
these will expand to:

    - ``worker1.%h`` -> ``worker1.george.example.com``
    - ``worker1.%n`` -> ``worker1.george``
    - ``worker1.%d`` -> ``worker1.example.com``

.. _worker-stopping:

Stopping the worker
===================

Shutdown should be accomplished using the :sig:`TERM` signal.

When shutdown is initiated the worker will finish all currently executing
tasks before it actually terminates, so if these tasks are important you
should wait for it to finish before doing anything drastic (like sending
the :sig:`KILL` signal).

If the worker won't shut down after a considerable amount of time, for
example because of tasks stuck in an infinite loop, you can use the
:sig:`KILL` signal to force terminate the worker, but be aware that
currently executing tasks will be lost (unless the tasks have the
:attr:`~@Task.acks_late` option set).

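For example, a task that should survive a forced shutdown can enable late
acknowledgment, so its message is only acknowledged after execution finishes
(a minimal sketch; the task name is illustrative):

.. code-block:: python

    @app.task(acks_late=True)
    def important_task():
        # With acks_late the message is acknowledged *after* execution,
        # so a worker killed mid-task leaves it on the queue for redelivery.
        ...
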
Also, since processes can't override the :sig:`KILL` signal, the worker will
not be able to reap its children, so make sure to do so manually.  This
command usually does the trick:

.. code-block:: bash

    $ ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9

.. _worker-restarting:

Restarting the worker
=====================

Other than stopping and then starting the worker to restart, you can also
restart the worker using the :sig:`HUP` signal:

.. code-block:: bash

    $ kill -HUP $pid

The worker will then replace itself with a new instance using the same
arguments as it was started with.

.. note::

    Restarting by :sig:`HUP` only works if the worker is running
    in the background as a daemon (it does not have a controlling
    terminal).

    :sig:`HUP` is disabled on OS X because of a limitation on
    that platform.

.. _worker-process-signals:

Process Signals
===============

The worker's main process overrides the following signals:

+--------------+-------------------------------------------------+
| :sig:`TERM`  | Warm shutdown, wait for tasks to complete.      |
+--------------+-------------------------------------------------+
| :sig:`QUIT`  | Cold shutdown, terminate ASAP.                  |
+--------------+-------------------------------------------------+
| :sig:`USR1`  | Dump traceback for all active threads.          |
+--------------+-------------------------------------------------+
| :sig:`USR2`  | Remote debug, see :mod:`celery.contrib.rdb`.    |
+--------------+-------------------------------------------------+

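As a concrete illustration, :sig:`USR1` can be used to get a live traceback
from a stuck worker.  A minimal sketch from Python, where ``worker_pid`` is
a placeholder for the main process pid:

.. code-block:: python

    import os
    import signal

    # Ask the worker's main process to log a traceback
    # for every active thread.
    os.kill(worker_pid, signal.SIGUSR1)
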
.. _worker-concurrency:

Concurrency
===========

By default multiprocessing is used to perform concurrent execution of tasks,
but you can also use :ref:`Eventlet <concurrency-eventlet>`.  The number
of worker processes/threads can be changed using the :option:`--concurrency`
argument and defaults to the number of CPUs available on the machine.

.. admonition:: Number of processes (multiprocessing/prefork pool)

    More pool processes are usually better, but there's a cut-off point where
    adding more pool processes affects performance in negative ways.
    There is even some evidence to suggest that having multiple worker
    instances running may perform better than having a single worker,
    for example 3 workers with 10 pool processes each.  You need to
    experiment to find the numbers that work best for you, as this varies
    based on application, work load, task run times and other factors.

.. _worker-remote-control:

Remote control
==============

.. versionadded:: 2.0

.. sidebar:: The ``celery`` command

    The :program:`celery` program is used to execute remote control
    commands from the command-line.  It supports all of the commands
    listed below.  See :ref:`monitoring-control` for more information.

pool support: *prefork, eventlet, gevent*, blocking: *threads/solo* (see note)
broker support: *amqp, redis, mongodb*

Workers have the ability to be remote controlled using a high-priority
broadcast message queue.  The commands can be directed to all, or a specific
list of workers.

Commands can also have replies.  The client can then wait for and collect
those replies.  Since there's no central authority to know how many
workers are available in the cluster, there is also no way to estimate
how many workers may send a reply, so the client has a configurable
timeout: the deadline in seconds for replies to arrive in.  This timeout
defaults to one second.  If the worker doesn't reply within the deadline
it doesn't necessarily mean the worker didn't reply, or worse, is dead, but
may simply be caused by network latency or the worker being slow at
processing commands, so adjust the timeout accordingly.

In addition to timeouts, the client can specify the maximum number
of replies to wait for.  If a destination is specified, this limit is set
to the number of destination hosts.

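For example, the following waits at most two seconds for replies and stops
collecting after three of them (a minimal sketch using the built-in
:control:`ping` command; the reply format follows the ping examples later
in this guide)::

    >>> app.control.broadcast('ping', reply=True, limit=3, timeout=2)
    [{'worker1.example.com': 'pong'},
     {'worker2.example.com': 'pong'},
     {'worker3.example.com': 'pong'}]
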
.. note::

    The solo and threads pools support remote control commands,
    but any executing task will block any waiting control command,
    so they are of limited use if the worker is very busy.  In that
    case you must increase the timeout waiting for replies in the client.

.. _worker-broadcast-fun:

The :meth:`~@control.broadcast` function.
-----------------------------------------

This is the client function used to send commands to the workers.
Some remote control commands also have higher-level interfaces using
:meth:`~@control.broadcast` in the background, like
:meth:`~@control.rate_limit` and :meth:`~@control.ping`.

Sending the :control:`rate_limit` command and keyword arguments::

    >>> app.control.broadcast('rate_limit',
    ...                       arguments={'task_name': 'myapp.mytask',
    ...                                  'rate_limit': '200/m'})

This will send the command asynchronously, without waiting for a reply.
To request a reply you have to use the `reply` argument::

    >>> app.control.broadcast('rate_limit', {
    ...     'task_name': 'myapp.mytask', 'rate_limit': '200/m'}, reply=True)
    [{'worker1.example.com': 'New rate limit set successfully'},
     {'worker2.example.com': 'New rate limit set successfully'},
     {'worker3.example.com': 'New rate limit set successfully'}]

Using the `destination` argument you can specify a list of workers
to receive the command::

    >>> app.control.broadcast('rate_limit', {
    ...     'task_name': 'myapp.mytask',
    ...     'rate_limit': '200/m'}, reply=True,
    ...     destination=['worker1@example.com'])
    [{'worker1.example.com': 'New rate limit set successfully'}]

Of course, using the higher-level interface to set rate limits is much
more convenient, but there are commands that can only be requested
using :meth:`~@control.broadcast`.

.. control:: revoke

Revoking tasks
==============

pool support: all
broker support: *amqp, redis, mongodb*

All worker nodes keep a memory of revoked task ids, either in-memory or
persistent on disk (see :ref:`worker-persistent-revokes`).

When a worker receives a revoke request it will skip executing
the task, but it won't terminate an already executing task unless
the `terminate` option is set.

.. note::

    The terminate option is a last resort for administrators when
    a task is stuck.  It's not for terminating the task,
    it's for terminating the process that is executing the task, and that
    process may have already started processing another task at the point
    when the signal is sent, so for this reason you must never call this
    programmatically.

If `terminate` is set the worker child process processing the task
will be terminated.  The default signal sent is `TERM`, but you can
specify this using the `signal` argument.  Signal can be the uppercase name
of any signal defined in the :mod:`signal` module in the Python Standard
Library.

Terminating a task also revokes it.

**Example**

::

    >>> result.revoke()

    >>> AsyncResult(id).revoke()

    >>> app.control.revoke('d9078da5-9915-40a0-bfa1-392c7bde42ed')

    >>> app.control.revoke('d9078da5-9915-40a0-bfa1-392c7bde42ed',
    ...                    terminate=True)

    >>> app.control.revoke('d9078da5-9915-40a0-bfa1-392c7bde42ed',
    ...                    terminate=True, signal='SIGKILL')

Revoking multiple tasks
-----------------------

.. versionadded:: 3.1

The revoke method also accepts a list argument, where it will revoke
several tasks at once.

**Example**

::

    >>> app.control.revoke([
    ...     '7993b0aa-1f0b-4780-9af0-c47c0858b3f2',
    ...     'f565793e-b041-4b2b-9ca4-dca22762a55d',
    ...     'd9d35e03-2997-42d0-a13e-64a66b88a618',
    ... ])

The ``GroupResult.revoke`` method takes advantage of this since
version 3.1.

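For example, revoking every task in a group with a single call (a minimal
sketch; ``add`` is an illustrative task)::

    >>> from celery import group
    >>> result = group(add.s(i, i) for i in range(10))()
    >>> result.revoke()  # revokes all tasks in the group at once
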
.. _worker-persistent-revokes:

Persistent revokes
------------------

Revoking tasks works by sending a broadcast message to all the workers,
and the workers then keep a list of revoked tasks in memory.  When a worker
starts up it will synchronize revoked tasks with other workers in the
cluster.

The list of revoked tasks is in-memory, so if all workers restart the list
of revoked ids will also vanish.  If you want to preserve this list between
restarts you need to specify a file for these to be stored in by using the
`--statedb` argument to :program:`celery worker`:

.. code-block:: bash

    $ celery -A proj worker -l info --statedb=/var/run/celery/worker.state

or if you use :program:`celery multi` you will want to create one file per
worker instance, so you can use the `%n` format to expand the current node
name:

.. code-block:: bash

    $ celery multi start 2 -l info --statedb=/var/run/celery/%n.state

Note that remote control commands must be working for revokes to work.
Remote control commands are only supported by the RabbitMQ (amqp), Redis
and MongoDB transports at this point.

.. _worker-time-limits:

Time Limits
===========

.. versionadded:: 2.0

pool support: *prefork/gevent*

.. sidebar:: Soft, or hard?

    The time limit is set in two values, `soft` and `hard`.
    The soft time limit allows the task to catch an exception
    to clean up before it is killed: the hard timeout is not
    catchable and force terminates the task.

A single task can potentially run forever; if you have lots of tasks
waiting for some event that will never happen you will block the worker
from processing new tasks indefinitely.  The best way to defend against
this scenario happening is enabling time limits.

The time limit (`--time-limit`) is the maximum number of seconds a task
may run before the process executing it is terminated and replaced by a
new process.  You can also enable a soft time limit (`--soft-time-limit`);
this raises an exception the task can catch to clean up before the hard
time limit kills it:

.. code-block:: python

    from myapp import app
    from celery.exceptions import SoftTimeLimitExceeded

    @app.task
    def mytask():
        try:
            do_work()
        except SoftTimeLimitExceeded:
            clean_up_in_a_hurry()

Time limits can also be set using the :setting:`CELERYD_TASK_TIME_LIMIT` /
:setting:`CELERYD_TASK_SOFT_TIME_LIMIT` settings.

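A minimal configuration sketch (the values are illustrative):

.. code-block:: python

    # celeryconfig.py
    CELERYD_TASK_TIME_LIMIT = 120       # hard limit, in seconds
    CELERYD_TASK_SOFT_TIME_LIMIT = 60   # soft limit, in seconds
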
.. note::

    Time limits do not currently work on Windows and other
    platforms that do not support the ``SIGUSR1`` signal.

Changing time limits at runtime
-------------------------------

.. versionadded:: 2.3

broker support: *amqp, redis, mongodb*

There is a remote control command that enables you to change both soft
and hard time limits for a task, named ``time_limit``.

Example changing the time limit for the ``tasks.crawl_the_web`` task
to have a soft time limit of one minute, and a hard time limit of
two minutes::

    >>> app.control.time_limit('tasks.crawl_the_web',
    ...                        soft=60, hard=120, reply=True)
    [{'worker1.example.com': {'ok': 'time limits set successfully'}}]

Only tasks that start executing after the time limit change will be affected.

.. _worker-rate-limits:

Rate Limits
===========

.. control:: rate_limit

Changing rate-limits at runtime
-------------------------------

Example changing the rate limit for the `myapp.mytask` task to execute
at most 200 tasks of that type every minute:

.. code-block:: python

    >>> app.control.rate_limit('myapp.mytask', '200/m')

The above does not specify a destination, so the change request will affect
all worker instances in the cluster.  If you only want to affect a specific
list of workers you can include the ``destination`` argument:

.. code-block:: python

    >>> app.control.rate_limit('myapp.mytask', '200/m',
    ...     destination=['celery@worker1.example.com'])

.. warning::

    This won't affect workers with the
    :setting:`CELERY_DISABLE_RATE_LIMITS` setting enabled.

.. _worker-maxtasksperchild:

Max tasks per child setting
===========================

.. versionadded:: 2.0

pool support: *prefork*

With this option you can configure the maximum number of tasks
a worker can execute before it's replaced by a new process.

This is useful if you have memory leaks you have no control over,
for example from closed source C extensions.

The option can be set using the worker's `--maxtasksperchild` argument
or using the :setting:`CELERYD_MAX_TASKS_PER_CHILD` setting.

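A minimal configuration sketch (the value is illustrative; tune it to how
quickly memory grows in your workload):

.. code-block:: python

    # celeryconfig.py: recycle each pool process after 100 tasks
    CELERYD_MAX_TASKS_PER_CHILD = 100
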
.. _worker-autoscaling:

Autoscaling
===========

.. versionadded:: 2.2

pool support: *prefork*, *gevent*

The *autoscaler* component is used to dynamically resize the pool
based on load:

- The autoscaler adds more pool processes when there is work to do,
- and starts removing processes when the workload is low.

It's enabled by the :option:`--autoscale` option, which needs two
numbers: the maximum and minimum number of pool processes::

    --autoscale=AUTOSCALE
        Enable autoscaling by providing
        max_concurrency,min_concurrency.  Example:
        --autoscale=10,3 (always keep 3 processes, but grow to
        10 if necessary).

You can also define your own rules for the autoscaler by subclassing
:class:`~celery.worker.autoscaler.Autoscaler`.
Some ideas for metrics include load average or the amount of memory
available.  You can specify a custom autoscaler with the
:setting:`CELERYD_AUTOSCALER` setting.

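As a sketch only, a load-average-based autoscaler might look like the
following.  The ``_maybe_scale`` hook and the ``qty``/``processes``
attributes are assumptions about the ``Autoscaler`` internals and may
differ between Celery versions, so check the source before relying on this:

.. code-block:: python

    import os

    from celery.worker.autoscaler import Autoscaler

    class LoadAwareAutoscaler(Autoscaler):
        """Only grow the pool while the 1-minute load average is low."""

        def _maybe_scale(self, req=None):
            load, _, _ = os.getloadavg()
            if self.qty > self.processes and load < 2.0:
                # more reserved tasks than processes: grow, but only
                # while the machine is not already under pressure.
                self.scale_up(self.qty - self.processes)
                return True
            if self.qty < self.processes:
                self.scale_down(self.processes - self.qty)
                return True

The worker is then pointed at the class with the setting, e.g.
``CELERYD_AUTOSCALER = 'myapp.autoscalers:LoadAwareAutoscaler'`` (the
module path is hypothetical).
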
.. _worker-queues:

Queues
======

A worker instance can consume from any number of queues.
By default it will consume from all queues defined in the
:setting:`CELERY_QUEUES` setting (which if not specified defaults to the
queue named ``celery``).

You can specify what queues to consume from at startup,
by giving a comma separated list of queues to the :option:`-Q` option:

.. code-block:: bash

    $ celery worker -l info -Q foo,bar,baz

If the queue name is defined in :setting:`CELERY_QUEUES` it will use that
configuration, but if it's not defined in the list of queues Celery will
automatically generate a new queue for you (depending on the
:setting:`CELERY_CREATE_MISSING_QUEUES` option).

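For reference, a minimal sketch of declaring queues up front in the
configuration (queue names are illustrative):

.. code-block:: python

    from kombu import Queue

    CELERY_QUEUES = (
        Queue('foo'),
        Queue('bar'),
        Queue('baz'),
    )
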
You can also tell the worker to start and stop consuming from a queue at
runtime using the remote control commands :control:`add_consumer` and
:control:`cancel_consumer`.

.. control:: add_consumer

Queues: Adding consumers
------------------------

The :control:`add_consumer` control command will tell one or more workers
to start consuming from a queue.  This operation is idempotent.

To tell all workers in the cluster to start consuming from a queue
named "``foo``" you can use the :program:`celery control` program:

.. code-block:: bash

    $ celery control add_consumer foo
    -> worker1.local: OK
        started consuming from u'foo'

If you want to specify a specific worker you can use the
:option:`--destination` argument:

.. code-block:: bash

    $ celery control add_consumer foo -d worker1.local

The same can be accomplished dynamically using the
:meth:`@control.add_consumer` method::

    >>> myapp.control.add_consumer('foo', reply=True)
    [{u'worker1.local': {u'ok': u"already consuming from u'foo'"}}]

    >>> myapp.control.add_consumer('foo', reply=True,
    ...                            destination=['worker1@example.com'])
    [{u'worker1.local': {u'ok': u"already consuming from u'foo'"}}]

So far I have only shown examples using automatic queues.
If you need more control you can also specify the exchange, routing_key and
even other options::

    >>> myapp.control.add_consumer(
    ...     queue='baz',
    ...     exchange='ex',
    ...     exchange_type='topic',
    ...     routing_key='media.*',
    ...     options={
    ...         'queue_durable': False,
    ...         'exchange_durable': False,
    ...     },
    ...     reply=True,
    ...     destination=['w1@example.com', 'w2@example.com'])

.. control:: cancel_consumer

Queues: Cancelling consumers
----------------------------

You can cancel a consumer by queue name using the :control:`cancel_consumer`
control command.

To force all workers in the cluster to cancel consuming from a queue
you can use the :program:`celery control` program:

.. code-block:: bash

    $ celery control cancel_consumer foo

The :option:`--destination` argument can be used to specify a worker, or a
list of workers, to act on the command:

.. code-block:: bash

    $ celery control cancel_consumer foo -d worker1.local

You can also cancel consumers programmatically using the
:meth:`@control.cancel_consumer` method:

.. code-block:: python

    >>> myapp.control.cancel_consumer('foo', reply=True)
    [{u'worker1.local': {u'ok': u"no longer consuming from u'foo'"}}]

.. control:: active_queues

Queues: List of active queues
-----------------------------

You can get a list of queues that a worker consumes from by using
the :control:`active_queues` control command:

.. code-block:: bash

    $ celery inspect active_queues
    [...]

Like all other remote control commands this also supports the
:option:`--destination` argument used to specify which workers should
reply to the request:

.. code-block:: bash

    $ celery inspect active_queues -d worker1.local
    [...]

This can also be done programmatically by using the
:meth:`@control.inspect.active_queues` method::

    >>> myapp.control.inspect().active_queues()
    [...]

    >>> myapp.control.inspect(['worker1.local']).active_queues()
    [...]

.. _worker-autoreloading:

Autoreloading
=============

.. versionadded:: 2.5

pool support: *prefork, eventlet, gevent, threads, solo*

Starting :program:`celery worker` with the :option:`--autoreload` option will
enable the worker to watch for file system changes to all imported task
modules (and also any non-task modules added to the
:setting:`CELERY_IMPORTS` setting or the :option:`-I|--include` option).

This is an experimental feature intended for use in development only;
using auto-reload in production is discouraged as the behavior of reloading
a module in Python is undefined, and may cause hard to diagnose bugs and
crashes.  Celery uses the same approach as the auto-reloader found in e.g.
the Django ``runserver`` command.

When auto-reload is enabled the worker starts an additional thread
that watches for changes in the file system.  New modules are imported,
and already imported modules are reloaded whenever a change is detected,
and if the prefork pool is used the child processes will finish the work
they are doing and exit, so that they can be replaced by fresh processes
effectively reloading the code.

File system notification backends are pluggable, and it comes with three
implementations:

* inotify (Linux)

    Used if the :mod:`pyinotify` library is installed.
    If you are running on Linux this is the recommended implementation,
    to install the :mod:`pyinotify` library you have to run the following
    command:

    .. code-block:: bash

        $ pip install pyinotify

* kqueue (OS X/BSD)

* stat

    The fallback implementation simply polls the files using ``stat`` and is
    very expensive.

You can force an implementation by setting the :envvar:`CELERYD_FSNOTIFY`
environment variable:

.. code-block:: bash

    $ env CELERYD_FSNOTIFY=stat celery worker -l info --autoreload

.. _worker-autoreload:

.. control:: pool_restart

Pool Restart Command
--------------------

.. versionadded:: 2.5

Requires the :setting:`CELERYD_POOL_RESTARTS` setting to be enabled.

The remote control command :control:`pool_restart` sends restart requests
to the worker's child processes.  It is particularly useful for forcing
the worker to import new modules, or for reloading already imported
modules.  This command does not interrupt executing tasks.

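A minimal sketch of enabling the required setting in the configuration
module:

.. code-block:: python

    # celeryconfig.py: allow the pool_restart remote control command
    CELERYD_POOL_RESTARTS = True
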
Example
~~~~~~~

Running the following command will result in the `foo` and `bar` modules
being imported by the worker processes:

.. code-block:: python

    >>> app.control.broadcast('pool_restart',
    ...                       arguments={'modules': ['foo', 'bar']})

Use the ``reload`` argument to reload modules it has already imported:

.. code-block:: python

    >>> app.control.broadcast('pool_restart',
    ...                       arguments={'modules': ['foo'],
    ...                                  'reload': True})

If you don't specify any modules then all known task modules will
be imported/reloaded:

.. code-block:: python

    >>> app.control.broadcast('pool_restart', arguments={'reload': True})

The ``modules`` argument is a list of modules to modify.  ``reload``
specifies whether to reload modules if they have previously been imported.
By default ``reload`` is disabled.  The `pool_restart` command uses the
Python :func:`reload` function to reload modules, or you can provide
your own custom reloader by passing the ``reloader`` argument.

.. note::

    Module reloading comes with caveats that are documented in
    :func:`reload`.  Please read this documentation and make sure your
    modules are suitable for reloading.

.. seealso::

    - http://pyunit.sourceforge.net/notes/reloading.html
    - http://www.indelible.org/ink/python-reloading/
    - http://docs.python.org/library/functions.html#reload

.. _worker-inspect:

Inspecting workers
==================

:class:`@control.inspect` lets you inspect running workers.  It
uses remote control commands under the hood.

You can also use the ``celery`` command to inspect workers,
and it supports the same commands as the :class:`@Celery.control` interface.

.. code-block:: python

    # Inspect all nodes.
    >>> i = app.control.inspect()

    # Specify multiple nodes to inspect.
    >>> i = app.control.inspect(['worker1.example.com',
                                 'worker2.example.com'])

    # Specify a single node to inspect.
    >>> i = app.control.inspect('worker1.example.com')

.. _worker-inspect-registered-tasks:

Dump of registered tasks
------------------------

You can get a list of tasks registered in the worker using the
:meth:`~@control.inspect.registered` method::

    >>> i.registered()
    [{'worker1.example.com': ['tasks.add',
                              'tasks.sleeptask']}]

.. _worker-inspect-active-tasks:

Dump of currently executing tasks
---------------------------------

You can get a list of active tasks using
:meth:`~@control.inspect.active`::

    >>> i.active()
    [{'worker1.example.com':
        [{'name': 'tasks.sleeptask',
          'id': '32666e9b-809c-41fa-8e93-5ae0c80afbbf',
          'args': '(8,)',
          'kwargs': '{}'}]}]

.. _worker-inspect-eta-schedule:

Dump of scheduled (ETA) tasks
-----------------------------

You can get a list of tasks waiting to be scheduled by using
:meth:`~@control.inspect.scheduled`::

    >>> i.scheduled()
    [{'worker1.example.com':
        [{'eta': '2010-06-07 09:07:52', 'priority': 0,
          'request': {
            'name': 'tasks.sleeptask',
            'id': '1a7980ea-8b19-413e-91d2-0b74f3844c4d',
            'args': '[1]',
            'kwargs': '{}'}},
         {'eta': '2010-06-07 09:07:53', 'priority': 0,
          'request': {
            'name': 'tasks.sleeptask',
            'id': '49661b9a-aa22-4120-94b7-9ee8031d219d',
            'args': '[2]',
            'kwargs': '{}'}}]}]

.. note::

    These are tasks with an eta/countdown argument, not periodic tasks.

.. _worker-inspect-reserved:

Dump of reserved tasks
----------------------

Reserved tasks are tasks that have been received, but are still waiting to
be executed.

You can get a list of these using
:meth:`~@control.inspect.reserved`::

    >>> i.reserved()
    [{'worker1.example.com':
        [{'name': 'tasks.sleeptask',
          'id': '32666e9b-809c-41fa-8e93-5ae0c80afbbf',
          'args': '(8,)',
          'kwargs': '{}'}]}]

.. _worker-statistics:

Statistics
----------

The remote control command ``inspect stats`` (or
:meth:`~@control.inspect.stats`) will give you a long list of useful (or not
so useful) statistics about the worker:

.. code-block:: bash

    $ celery -A proj inspect stats

The output will include the following fields:

- ``broker``

    Section for broker information.

    * ``connect_timeout``

        Timeout in seconds (int/float) for establishing a new connection.

    * ``heartbeat``

        Current heartbeat value (set by client).

    * ``hostname``

        Hostname of the remote broker.

    * ``insist``

        No longer used.

    * ``login_method``

        Login method used to connect to the broker.

    * ``port``

        Port of the remote broker.

    * ``ssl``

        SSL enabled/disabled.

    * ``transport``

        Name of transport used (e.g. ``amqp`` or ``mongodb``).

    * ``transport_options``

        Options passed to transport.

    * ``uri_prefix``

        Some transports expect the host name to be a URL.  This applies
        for example to SQLAlchemy, where the host name part is the
        connection URI::

            sqla+sqlite:///

        In this example the uri prefix will be ``sqla``.

    * ``userid``

        User id used to connect to the broker with.

    * ``virtual_host``

        Virtual host used.

- ``clock``

    Value of the worker's logical clock.  This is a positive integer and
    should be increasing every time you receive statistics.

- ``pid``

    Process id of the worker instance (Main process).

- ``pool``

    Pool-specific section.

    * ``max-concurrency``

        Max number of processes/threads/green threads.

    * ``max-tasks-per-child``

        Max number of tasks a thread may execute before being recycled.

    * ``processes``

        List of pids (or thread-id's).

    * ``put-guarded-by-semaphore``

        Internal.

    * ``timeouts``

        Default values for time limits.

    * ``writes``

        Specific to the prefork pool, this shows the distribution of writes
        to each process in the pool when using async I/O.

- ``prefetch_count``

    Current prefetch count value for the task consumer.

- ``rusage``

    System usage statistics.  The fields available may be different
    on your platform.

    From :manpage:`getrusage(2)`:

    * ``stime``

        Time spent in operating system code on behalf of this process.

    * ``utime``

        Time spent executing user instructions.

    * ``maxrss``

        The maximum resident size used by this process (in kilobytes).

    * ``idrss``

        Amount of unshared memory used for data (in kilobytes times ticks
        of execution).

    * ``isrss``

        Amount of unshared memory used for stack space (in kilobytes times
        ticks of execution).

    * ``ixrss``

        Amount of memory shared with other processes (in kilobytes times
        ticks of execution).

    * ``inblock``

        Number of times the file system had to read from the disk on
        behalf of this process.

    * ``oublock``

        Number of times the file system has to write to disk on behalf of
        this process.

    * ``majflt``

        Number of page faults that were serviced by doing I/O.

    * ``minflt``

        Number of page faults that were serviced without doing I/O.

    * ``msgrcv``

        Number of IPC messages received.

    * ``msgsnd``

        Number of IPC messages sent.

    * ``nvcsw``

        Number of times this process voluntarily invoked a context switch.

    * ``nivcsw``

        Number of times an involuntary context switch took place.

    * ``nsignals``

        Number of signals received.

    * ``nswap``

        The number of times this process was swapped entirely out of memory.

- ``total``

    List of task names and the total number of times each task has been
    executed since worker start.

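The same data is available from Python through the method mentioned above;
a minimal sketch::

    >>> app.control.inspect().stats()
    [...]
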
Additional Commands
===================

.. control:: shutdown

Remote shutdown
---------------

This command will gracefully shut down the worker remotely:

.. code-block:: python

    >>> app.control.broadcast('shutdown')  # shutdown all workers
    >>> app.control.broadcast('shutdown', destination='worker1@example.com')

.. control:: ping

Ping
----

This command requests a ping from alive workers.
The workers reply with the string 'pong', and that's just about it.
It will use the default one second timeout for replies unless you specify
a custom timeout:

.. code-block:: python

    >>> app.control.ping(timeout=0.5)
    [{'worker1.example.com': 'pong'},
     {'worker2.example.com': 'pong'},
     {'worker3.example.com': 'pong'}]

:meth:`~@control.ping` also supports the `destination` argument,
so you can specify which workers to ping::

    >>> app.control.ping(['worker2.example.com', 'worker3.example.com'])
    [{'worker2.example.com': 'pong'},
     {'worker3.example.com': 'pong'}]

.. _worker-enable-events:

.. control:: enable_events
.. control:: disable_events

Enable/disable events
---------------------

You can enable/disable events by using the `enable_events` and
`disable_events` commands.  This is useful to temporarily monitor
a worker using :program:`celery events`/:program:`celerymon`.

.. code-block:: python

    >>> app.control.enable_events()
    >>> app.control.disable_events()

.. _worker-custom-control-commands:

Writing your own remote control commands
========================================

Remote control commands are registered in the control panel and
they take a single argument: the current
:class:`~celery.worker.control.ControlDispatch` instance.
From there you have access to the active
:class:`~celery.worker.consumer.Consumer` if needed.

Here's an example control command that restarts the broker connection:

.. code-block:: python

    from celery.worker.control import Panel

    @Panel.register
    def reset_connection(state):
        state.consumer.reset_connection()
        return {'ok': 'connection reset'}

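Once registered, the command can be invoked by name like any other remote
control command, e.g. via :meth:`~@control.broadcast` (a minimal sketch;
the reply shown is illustrative)::

    >>> app.control.broadcast('reset_connection', reply=True)
    [{'worker1.example.com': {'ok': 'connection reset'}}]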