.. _guide-monitoring:

=================================
Monitoring and Management Guide
=================================

.. contents::
    :local:

Introduction
============

There are several tools available to monitor and inspect Celery clusters.

This document describes some of these, as well as
features related to monitoring, like events and broadcast commands.
.. _monitoring-workers:

Workers
=======

.. _monitoring-control:

Management Command-line Utilities (``inspect``/``control``)
-----------------------------------------------------------

:program:`celery` can also be used to inspect
and manage worker nodes (and to some degree tasks).

To list all the commands available do:

.. code-block:: console

    $ celery help

or to get help for a specific command do:

.. code-block:: console

    $ celery <command> --help
Commands
~~~~~~~~

* **shell**: Drop into a Python shell.

  The locals will include the ``celery`` variable: this is the current app.
  Also all known tasks will be automatically added to locals (unless the
  :option:`--without-tasks <celery shell --without-tasks>` flag is set).

  Uses :pypi:`Ipython`, :pypi:`bpython`, or regular :program:`python` in that
  order if installed. You can force an implementation using
  :option:`--ipython <celery shell --ipython>`,
  :option:`--bpython <celery shell --bpython>`, or
  :option:`--python <celery shell --python>`.

* **status**: List active nodes in this cluster

  .. code-block:: console

      $ celery -A proj status

* **result**: Show the result of a task

  .. code-block:: console

      $ celery -A proj result -t tasks.add 4e196aa4-0141-4601-8138-7aa33db0f577

  Note that you can omit the name of the task as long as the
  task doesn't use a custom result backend.

* **purge**: Purge messages from all configured task queues.

  This command will remove all messages from queues configured in
  the :setting:`CELERY_QUEUES` setting:

  .. warning::

      There's no undo for this operation, and messages will
      be permanently deleted!

  .. code-block:: console

      $ celery -A proj purge

  You can also specify the queues to purge using the `-Q` option:

  .. code-block:: console

      $ celery -A proj purge -Q celery,foo,bar

  and exclude queues from being purged using the `-X` option:

  .. code-block:: console

      $ celery -A proj purge -X celery

* **inspect active**: List active tasks

  .. code-block:: console

      $ celery -A proj inspect active

  These are all the tasks that are currently being executed.

* **inspect scheduled**: List scheduled ETA tasks

  .. code-block:: console

      $ celery -A proj inspect scheduled

  These are tasks reserved by the worker when they have an
  `eta` or `countdown` argument set.

* **inspect reserved**: List reserved tasks

  .. code-block:: console

      $ celery -A proj inspect reserved

  This will list all tasks that have been prefetched by the worker,
  and are currently waiting to be executed (doesn't include tasks
  with an ETA value set).

* **inspect revoked**: List history of revoked tasks

  .. code-block:: console

      $ celery -A proj inspect revoked

* **inspect registered**: List registered tasks

  .. code-block:: console

      $ celery -A proj inspect registered

* **inspect stats**: Show worker statistics (see :ref:`worker-statistics`)

  .. code-block:: console

      $ celery -A proj inspect stats

* **inspect query_task**: Show information about task(s) by id.

  Any worker having a task in this set of ids reserved/active will respond
  with status and information.

  .. code-block:: console

      $ celery -A proj inspect query_task e9f6c8f0-fec9-4ae8-a8c6-cf8c8451d4f8

  You can also query for information about multiple tasks:

  .. code-block:: console

      $ celery -A proj inspect query_task id1 id2 ... idN

* **control enable_events**: Enable events

  .. code-block:: console

      $ celery -A proj control enable_events

* **control disable_events**: Disable events

  .. code-block:: console

      $ celery -A proj control disable_events

* **migrate**: Migrate tasks from one broker to another (**EXPERIMENTAL**).

  .. code-block:: console

      $ celery -A proj migrate redis://localhost amqp://localhost

  This command will migrate all the tasks on one broker to another.
  As this command is new and experimental you should be sure to have
  a backup of the data before proceeding.
.. note::

    All ``inspect`` and ``control`` commands support a
    :option:`--timeout <celery inspect --timeout>` argument;
    this is the number of seconds to wait for responses.
    You may have to increase this timeout if you're not getting a response
    due to latency.
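
The same inspect and control operations are also available from Python
through the app's control interface. A minimal sketch (the app name and
broker URL are placeholders for illustration):

.. code-block:: python

    from celery import Celery

    app = Celery('proj', broker='amqp://guest@localhost//')

    # Inspect commands return a {nodename: reply} mapping
    # (or None if no workers replied in time).
    i = app.control.inspect()
    print(i.active())     # tasks currently being executed
    print(i.scheduled())  # tasks reserved with an eta/countdown
    print(i.stats())      # worker statistics

    # Control commands are broadcast to the workers.
    app.control.enable_events()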
.. _inspect-destination:

Specifying destination nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default the inspect and control commands operate on all workers.
You can specify a single worker, or a list of workers, by using the
:option:`--destination <celery inspect --destination>` argument:

.. code-block:: console

    $ celery -A proj inspect -d w1@e.com,w2@e.com reserved

    $ celery -A proj control -d w1@e.com,w2@e.com enable_events
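
The same argument is available programmatically; a minimal sketch (the
worker names and broker URL are placeholders):

.. code-block:: python

    from celery import Celery

    app = Celery('proj', broker='amqp://guest@localhost//')

    # Only ask the two named workers to reply.
    i = app.control.inspect(destination=['w1@e.com', 'w2@e.com'])
    print(i.reserved())

    # Control commands accept the same destination argument.
    app.control.enable_events(destination=['w1@e.com'])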
.. _monitoring-flower:

Flower: Real-time Celery web-monitor
------------------------------------

Flower is a real-time web based monitor and administration tool for Celery.
It's under active development, but is already an essential tool.
Being the recommended monitor for Celery, it obsoletes the Django-Admin
monitor, ``celerymon`` and the ``ncurses`` based monitor.

Flower is pronounced like "flow", but you can also use the botanical version
if you prefer.
Features
~~~~~~~~

- Real-time monitoring using Celery Events

    - Task progress and history
    - Ability to show task details (arguments, start time, run-time, and more)
    - Graphs and statistics

- Remote Control

    - View worker status and statistics
    - Shutdown and restart worker instances
    - Control worker pool size and autoscale settings
    - View and modify the queues a worker instance consumes from
    - View currently running tasks
    - View scheduled tasks (ETA/countdown)
    - View reserved and revoked tasks
    - Apply time and rate limits
    - Configuration viewer
    - Revoke or terminate tasks

- HTTP API

    - List workers
    - Shut down a worker
    - Restart worker’s pool
    - Grow worker’s pool
    - Shrink worker’s pool
    - Autoscale worker pool
    - Start consuming from a queue
    - Stop consuming from a queue
    - List tasks
    - List (seen) task types
    - Get a task info
    - Execute a task
    - Execute a task by name
    - Get a task result
    - Change soft and hard time limits for a task
    - Change rate limit for a task
    - Revoke a task

- OpenID authentication
**Screenshots**

.. figure:: ../images/dashboard.png
   :width: 700px

.. figure:: ../images/monitor.png
   :width: 700px

More screenshots_:

.. _screenshots: https://github.com/mher/flower/tree/master/docs/screenshots
Usage
~~~~~

You can use pip to install Flower:

.. code-block:: console

    $ pip install flower

Running the flower command will start a web-server that you can visit:

.. code-block:: console

    $ celery -A proj flower

The default port is http://localhost:5555, but you can change this using the
:option:`--port <flower --port>` argument:

.. code-block:: console

    $ celery -A proj flower --port=5555

Broker URL can also be passed through the
:option:`--broker <celery --broker>` argument:

.. code-block:: console

    $ celery flower --broker=amqp://guest:guest@localhost:5672//
    or
    $ celery flower --broker=redis://guest:guest@localhost:6379/0

Then, you can visit flower in your web browser:

.. code-block:: console

    $ open http://localhost:5555

Flower has many more features than are detailed here, including
authorization options. Check out the `official documentation`_ for more
information.

.. _official documentation: https://flower.readthedocs.io/en/latest/
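
For example, a minimal sketch of protecting the dashboard with HTTP Basic
authentication and querying the HTTP API (the ``--basic_auth`` flag and the
``/api/workers`` endpoint are taken from recent Flower releases; check the
documentation above for your version):

.. code-block:: console

    $ celery -A proj flower --basic_auth=user:password
    $ curl -u user:password http://localhost:5555/api/workers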
.. _monitoring-celeryev:

celery events: Curses Monitor
-----------------------------

.. versionadded:: 2.0

`celery events` is a simple curses monitor displaying
task and worker history. You can inspect the result and traceback of tasks,
and it also supports some management commands like rate limiting and shutting
down workers. This monitor was started as a proof of concept, and you
probably want to use Flower instead.

Starting:

.. code-block:: console

    $ celery -A proj events

You should see a screen like:

.. figure:: ../images/celeryevshotsm.jpg

`celery events` is also used to start snapshot cameras (see
:ref:`monitoring-snapshots`):

.. code-block:: console

    $ celery -A proj events --camera=<camera-class> --frequency=1.0

and it includes a tool to dump events to :file:`stdout`:

.. code-block:: console

    $ celery -A proj events --dump

For a complete list of options use :option:`--help <celery --help>`:

.. code-block:: console

    $ celery events --help

.. _`celerymon`: https://github.com/celery/celerymon/
.. _monitoring-rabbitmq:

RabbitMQ
========

To manage a Celery cluster it is important to know how
RabbitMQ can be monitored.

RabbitMQ ships with the `rabbitmqctl(1)`_ command;
with this you can list queues, exchanges, bindings,
queue lengths, the memory usage of each queue, as well
as manage users, virtual hosts and their permissions.

.. note::

    The default virtual host (``"/"``) is used in these
    examples. If you use a custom virtual host you have to add
    the ``-p`` argument to the command, for example:
    ``rabbitmqctl list_queues -p my_vhost …``

.. _`rabbitmqctl(1)`: http://www.rabbitmq.com/man/rabbitmqctl.1.man.html
.. _monitoring-rmq-queues:

Inspecting queues
-----------------

Finding the number of tasks in a queue:

.. code-block:: console

    $ rabbitmqctl list_queues name messages messages_ready \
                              messages_unacknowledged

Here `messages_ready` is the number of messages ready
for delivery (sent but not received), `messages_unacknowledged`
is the number of messages that have been received by a worker but
not acknowledged yet (meaning it is in progress, or has been reserved).
`messages` is the sum of ready and unacknowledged messages.

Finding the number of workers currently consuming from a queue:

.. code-block:: console

    $ rabbitmqctl list_queues name consumers

Finding the amount of memory allocated to a queue:

.. code-block:: console

    $ rabbitmqctl list_queues name memory

:Tip: Adding the ``-q`` option to `rabbitmqctl(1)`_ makes the output
      easier to parse.
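
For example, a quick sketch that combines ``-q`` with standard shell tools
to show only non-empty queues (the field positions assume the default
two-column output of the command above):

.. code-block:: console

    $ rabbitmqctl -q list_queues name messages | awk '$2 > 0 {print $1, $2}'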
.. _monitoring-redis:

Redis
=====

If you're using Redis as the broker, you can monitor the Celery cluster using
the `redis-cli(1)` command to list lengths of queues.

.. _monitoring-redis-queues:

Inspecting queues
-----------------

Finding the number of tasks in a queue:

.. code-block:: console

    $ redis-cli -h HOST -p PORT -n DATABASE_NUMBER llen QUEUE_NAME

The default queue is named `celery`. To get all available queues, invoke:

.. code-block:: console

    $ redis-cli -h HOST -p PORT -n DATABASE_NUMBER keys \*

.. note::

    Queue keys only exist when there are tasks in them, so if a key
    doesn't exist it simply means there are no messages in that queue.
    This is because in Redis a list with no elements in it is automatically
    removed, and hence it won't show up in the `keys` command output,
    and `llen` for that list returns 0.

    Also, if you're using Redis for other purposes, the
    output of the `keys` command will include unrelated values stored in
    the database. The recommended way around this is to use a
    dedicated `DATABASE_NUMBER` for Celery. You can also use
    database numbers to separate Celery applications from each other (virtual
    hosts), but this won't affect the monitoring events used by tools such as
    Flower, since Redis pub/sub commands are global rather than database based.
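
If you'd rather inspect queue lengths from Python, a minimal sketch using
the :pypi:`redis` client (host, port, database number, and queue name are
assumptions for illustration):

.. code-block:: python

    import redis

    # Connect to the same database Celery uses as a broker.
    client = redis.Redis(host='localhost', port=6379, db=0)

    # LLEN on the queue key gives the number of waiting messages;
    # a missing key simply returns 0 (an empty queue).
    print(client.llen('celery'))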
.. _monitoring-munin:

Munin
=====

This is a list of known Munin plug-ins that can be useful when
maintaining a Celery cluster.

* ``rabbitmq-munin``: Munin plug-ins for RabbitMQ.

  https://github.com/ask/rabbitmq-munin

* ``celery_tasks``: Monitors the number of times each task type has
  been executed (requires `celerymon`).

  http://exchange.munin-monitoring.org/plugins/celery_tasks-2/details

* ``celery_task_states``: Monitors the number of tasks in each state
  (requires `celerymon`).

  http://exchange.munin-monitoring.org/plugins/celery_tasks/details
.. _monitoring-events:

Events
======

The worker has the ability to send a message whenever some event
happens. These events are then captured by tools like Flower
and :program:`celery events` to monitor the cluster.

.. _monitoring-snapshots:

Snapshots
---------

.. versionadded:: 2.1

Even a single worker can produce a huge amount of events, so storing
the history of all events on disk may be very expensive.

A sequence of events describes the cluster state in that time period;
by taking periodic snapshots of this state you can keep all history, but
still only periodically write it to disk.

To take snapshots you need a Camera class; with this you can define
what should happen every time the state is captured. You can
write it to a database, send it by email, or something else entirely.

:program:`celery events` is then used to take snapshots with the camera.
For example, if you want to capture state every 2 seconds using the
camera ``myapp.Camera`` you run :program:`celery events` with the following
arguments:

.. code-block:: console

    $ celery -A proj events -c myapp.Camera --frequency=2.0
.. _monitoring-camera:

Custom Camera
~~~~~~~~~~~~~

Cameras can be useful if you need to capture events and do something
with those events at an interval. For real-time event processing
you should use :class:`@events.Receiver` directly, like in
:ref:`event-real-time-example`.

Here is an example camera, dumping the snapshot to screen:

.. code-block:: python

    from pprint import pformat

    from celery.events.snapshot import Polaroid

    class DumpCam(Polaroid):
        clear_after = True  # clear after flush (incl. state.event_count).

        def on_shutter(self, state):
            if not state.event_count:
                # No new events since last snapshot.
                return
            print('Workers: {0}'.format(pformat(state.workers, indent=4)))
            print('Tasks: {0}'.format(pformat(state.tasks, indent=4)))
            print('Total: {0.event_count} events, {0.task_count} tasks'.format(
                state))

See the API reference for :mod:`celery.events.state` to read more
about state objects.

Now you can use this cam with :program:`celery events` by specifying
it with the :option:`-c <celery events -c>` option:

.. code-block:: console

    $ celery -A proj events -c myapp.DumpCam --frequency=2.0
Or you can use it programmatically like this:

.. code-block:: python

    from celery import Celery
    from myapp import DumpCam

    def main(app, freq=1.0):
        state = app.events.State()
        with app.connection() as connection:
            recv = app.events.Receiver(connection, handlers={'*': state.event})
            with DumpCam(state, freq=freq):
                recv.capture(limit=None, timeout=None)

    if __name__ == '__main__':
        app = Celery(broker='amqp://guest@localhost//')
        main(app)
.. _event-real-time-example:

Real-time processing
--------------------

To process events in real-time you need the following:

- An event consumer (this is the ``Receiver``)

- A set of handlers called when events come in.

    You can have different handlers for each event type,
    or a catch-all handler can be used ('*').

- State (optional)

    :class:`@events.State` is a convenient in-memory representation
    of tasks and workers in the cluster that's updated as events come in.

    It encapsulates solutions for many common things, like checking if a
    worker is still alive (by verifying heartbeats), merging event fields
    together as events come in, making sure time-stamps are in sync, and so on.

Combining these you can easily process events in real-time:

.. code-block:: python

    from celery import Celery

    def my_monitor(app):
        state = app.events.State()

        def announce_failed_tasks(event):
            state.event(event)
            # task name is sent only with -received event, and state
            # will keep track of this for us.
            task = state.tasks.get(event['uuid'])

            print('TASK FAILED: %s[%s] %s' % (
                task.name, task.uuid, task.info(),))

        with app.connection() as connection:
            recv = app.events.Receiver(connection, handlers={
                    'task-failed': announce_failed_tasks,
                    '*': state.event,
            })
            recv.capture(limit=None, timeout=None, wakeup=True)

    if __name__ == '__main__':
        app = Celery(broker='amqp://guest@localhost//')
        my_monitor(app)
.. note::

    The ``wakeup`` argument to ``capture`` sends a signal to all workers
    to force them to send a heartbeat. This way you can immediately see
    workers when the monitor starts.

You can listen to specific events by specifying the handlers:

.. code-block:: python

    from celery import Celery

    def my_monitor(app):
        state = app.events.State()

        def announce_failed_tasks(event):
            state.event(event)
            # task name is sent only with -received event, and state
            # will keep track of this for us.
            task = state.tasks.get(event['uuid'])

            print('TASK FAILED: %s[%s] %s' % (
                task.name, task.uuid, task.info(),))

        with app.connection() as connection:
            recv = app.events.Receiver(connection, handlers={
                    'task-failed': announce_failed_tasks,
            })
            recv.capture(limit=None, timeout=None, wakeup=True)

    if __name__ == '__main__':
        app = Celery(broker='amqp://guest@localhost//')
        my_monitor(app)
.. _event-reference:

Event Reference
===============

This list contains the events sent by the worker, and their arguments.

.. _event-reference-task:

Task Events
-----------

.. event:: task-sent

task-sent
~~~~~~~~~

:signature: ``task-sent(uuid, name, args, kwargs, retries, eta, expires,
            queue, exchange, routing_key, root_id, parent_id)``

Sent when a task message is published and
the :setting:`task_send_sent_event` setting is enabled.
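
For example, to turn this on from your app configuration (a minimal
sketch; the app name and broker URL are placeholders):

.. code-block:: python

    from celery import Celery

    app = Celery('proj', broker='amqp://guest@localhost//')
    # Emit a task-sent event every time a task message is published.
    app.conf.task_send_sent_event = True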
.. event:: task-received

task-received
~~~~~~~~~~~~~

:signature: ``task-received(uuid, name, args, kwargs, retries, eta, hostname,
            timestamp, root_id, parent_id)``

Sent when the worker receives a task.

.. event:: task-started

task-started
~~~~~~~~~~~~

:signature: ``task-started(uuid, hostname, timestamp, pid)``

Sent just before the worker executes the task.

.. event:: task-succeeded

task-succeeded
~~~~~~~~~~~~~~

:signature: ``task-succeeded(uuid, result, runtime, hostname, timestamp)``

Sent if the task executed successfully.

Run-time is the time it took to execute the task using the pool.
(Starting from when the task is sent to the worker pool, and ending when the
pool result handler callback is called.)

.. event:: task-failed

task-failed
~~~~~~~~~~~

:signature: ``task-failed(uuid, exception, traceback, hostname, timestamp)``

Sent if the execution of the task failed.

.. event:: task-rejected

task-rejected
~~~~~~~~~~~~~

:signature: ``task-rejected(uuid, requeued)``

The task was rejected by the worker, possibly to be re-queued or moved to a
dead letter queue.

.. event:: task-revoked

task-revoked
~~~~~~~~~~~~

:signature: ``task-revoked(uuid, terminated, signum, expired)``

Sent if the task has been revoked (Note that this is likely
to be sent by more than one worker).

- ``terminated`` is set to true if the task process was terminated,
  and the ``signum`` field is set to the signal used.

- ``expired`` is set to true if the task expired.

.. event:: task-retried

task-retried
~~~~~~~~~~~~

:signature: ``task-retried(uuid, exception, traceback, hostname, timestamp)``

Sent if the task failed, but will be retried in the future.
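
As a worked example of consuming these task events, here's a small monitor
in the style of the real-time example above that reports run-times from
``task-succeeded`` events (the broker URL is a placeholder):

.. code-block:: python

    from celery import Celery

    def report_runtimes(app):
        def on_succeeded(event):
            # uuid and runtime are part of the task-succeeded
            # signature documented above.
            print('{0} succeeded in {1:.3f}s'.format(
                event['uuid'], event['runtime']))

        with app.connection() as connection:
            recv = app.events.Receiver(
                connection, handlers={'task-succeeded': on_succeeded})
            recv.capture(limit=None, timeout=None, wakeup=True)

    if __name__ == '__main__':
        report_runtimes(Celery(broker='amqp://guest@localhost//'))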
.. _event-reference-worker:

Worker Events
-------------

.. event:: worker-online

worker-online
~~~~~~~~~~~~~

:signature: ``worker-online(hostname, timestamp, freq, sw_ident, sw_ver, sw_sys)``

The worker has connected to the broker and is online.

- `hostname`: Nodename of the worker.
- `timestamp`: Event time-stamp.
- `freq`: Heartbeat frequency in seconds (float).
- `sw_ident`: Name of worker software (e.g., ``py-celery``).
- `sw_ver`: Software version (e.g., 2.2.0).
- `sw_sys`: Operating System (e.g., Linux/Darwin).

.. event:: worker-heartbeat

worker-heartbeat
~~~~~~~~~~~~~~~~

:signature: ``worker-heartbeat(hostname, timestamp, freq, sw_ident, sw_ver, sw_sys,
            active, processed)``

Sent every minute; if the worker hasn't sent a heartbeat in 2 minutes,
it is considered to be offline.

- `hostname`: Nodename of the worker.
- `timestamp`: Event time-stamp.
- `freq`: Heartbeat frequency in seconds (float).
- `sw_ident`: Name of worker software (e.g., ``py-celery``).
- `sw_ver`: Software version (e.g., 2.2.0).
- `sw_sys`: Operating System (e.g., Linux/Darwin).
- `active`: Number of currently executing tasks.
- `processed`: Total number of tasks processed by this worker.

.. event:: worker-offline

worker-offline
~~~~~~~~~~~~~~

:signature: ``worker-offline(hostname, timestamp, freq, sw_ident, sw_ver, sw_sys)``

The worker has disconnected from the broker.
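
As a sketch of putting these worker events to use, the monitor below feeds
them into :class:`@events.State` and prints liveness, where ``alive`` is
based on the heartbeat rule described above (the broker URL is a
placeholder):

.. code-block:: python

    from celery import Celery

    def watch_workers(app):
        state = app.events.State()

        def on_event(event):
            state.event(event)
            if event['type'].startswith('worker-'):
                for name, worker in state.workers.items():
                    # ``alive`` is derived from the most recent heartbeat.
                    status = 'online' if worker.alive else 'offline'
                    print('{0}: {1}'.format(name, status))

        with app.connection() as connection:
            recv = app.events.Receiver(connection, handlers={'*': on_event})
            recv.capture(limit=None, timeout=None, wakeup=True)

    if __name__ == '__main__':
        watch_workers(Celery(broker='amqp://guest@localhost//'))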