.. _guide-monitoring:

=================================
 Monitoring and Management Guide
=================================

.. contents::
    :local:

Introduction
============

There are several tools available to monitor and inspect Celery clusters.

This document describes some of these, as well as
features related to monitoring, like events and broadcast commands.

.. _monitoring-workers:
Workers
=======

.. _monitoring-control:

Management Command-line Utilities (``inspect``/``control``)
-----------------------------------------------------------

:program:`celery` can also be used to inspect
and manage worker nodes (and to some degree tasks).

To list all the commands available do:

.. code-block:: bash

    $ celery help

or to get help for a specific command do:

.. code-block:: bash

    $ celery <command> --help

Commands
~~~~~~~~
* **shell**: Drop into a Python shell.

  The locals will include the ``celery`` variable, which is the current app.
  Also all known tasks will be automatically added to locals (unless the
  ``--without-tasks`` flag is set).

  Uses IPython, bpython, or regular Python, in that order, if installed.
  You can force an implementation using ``--force-ipython|-I``,
  ``--force-bpython|-B``, or ``--force-python|-P``.
* **status**: List active nodes in this cluster.

  .. code-block:: bash

      $ celery -A proj status

* **result**: Show the result of a task.

  .. code-block:: bash

      $ celery -A proj result -t tasks.add 4e196aa4-0141-4601-8138-7aa33db0f577

  Note that you can omit the name of the task as long as the
  task doesn't use a custom result backend.

* **purge**: Purge messages from all configured task queues.

  .. warning::

      There is no undo for this operation, and messages will
      be permanently deleted!

  .. code-block:: bash

      $ celery -A proj purge
* **inspect active**: List active tasks.

  .. code-block:: bash

      $ celery -A proj inspect active

  These are all the tasks that are currently being executed.

* **inspect scheduled**: List scheduled ETA tasks.

  .. code-block:: bash

      $ celery -A proj inspect scheduled

  These are tasks reserved by the worker because they have the
  `eta` or `countdown` argument set.

* **inspect reserved**: List reserved tasks.

  .. code-block:: bash

      $ celery -A proj inspect reserved

  This will list all tasks that have been prefetched by the worker
  and are currently waiting to be executed (it does not include tasks
  with an ETA).

* **inspect revoked**: List history of revoked tasks.

  .. code-block:: bash

      $ celery -A proj inspect revoked

* **inspect registered**: List registered tasks.

  .. code-block:: bash

      $ celery -A proj inspect registered

* **inspect stats**: Show worker statistics (see :ref:`worker-statistics`).

  .. code-block:: bash

      $ celery -A proj inspect stats
* **control enable_events**: Enable events.

  .. code-block:: bash

      $ celery -A proj control enable_events

* **control disable_events**: Disable events.

  .. code-block:: bash

      $ celery -A proj control disable_events

* **migrate**: Migrate tasks from one broker to another (**EXPERIMENTAL**).

  .. code-block:: bash

      $ celery -A proj migrate redis://localhost amqp://localhost

  This command will migrate all the tasks on one broker to another.
  As this command is new and experimental you should be sure to have
  a backup of the data before proceeding.

.. note::

    All ``inspect`` and ``control`` commands support a ``--timeout`` argument:
    the number of seconds to wait for responses.
    You may have to increase this timeout if you're not getting a response
    due to latency.
.. _inspect-destination:

Specifying destination nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default the inspect and control commands operate on all workers.
You can specify a single worker, or a list of workers, using the
`--destination` argument:

.. code-block:: bash

    $ celery -A proj inspect -d w1,w2 reserved

    $ celery -A proj control -d w1,w2 enable_events
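The same destination and timeout options are also available programmatically
through the app's control interface. A minimal sketch, where the app name,
broker URL, and node names (``celery@w1``, ``celery@w2``) are placeholders to
substitute with your own:

.. code-block:: python

    from celery import Celery

    # Placeholder app and broker URL -- substitute your project's app.
    app = Celery('proj', broker='amqp://guest@localhost//')

    def reserved_on(nodes, timeout=5.0):
        """Reserved tasks from specific workers, like `inspect -d w1,w2 reserved`.

        `nodes` lists fully qualified node names, e.g. ['celery@w1'].
        Returns a dict mapping node name -> list of task dicts, or None
        if no worker in `nodes` replied within `timeout` seconds.
        """
        return app.control.inspect(destination=nodes, timeout=timeout).reserved()

    def enable_events_on(nodes):
        """Broadcast, like `control -d w1,w2 enable_events`."""
        app.control.enable_events(destination=nodes)

    # Usage (requires a running broker and workers):
    #
    #     print(reserved_on(['celery@w1', 'celery@w2']))
    #     enable_events_on(['celery@w1', 'celery@w2'])

Note that, as with the command line, only the workers named in
``destination`` will reply; a ``None`` result just means no worker
answered in time.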
.. _monitoring-flower:

Flower: Real-time Celery web-monitor
------------------------------------

Flower is a real-time web-based monitor and administration tool for Celery.
It is under active development, but is already an essential tool.
Being the recommended monitor for Celery, it obsoletes the Django-Admin
monitor, celerymon, and the ncurses-based monitor.

Flower is pronounced like "flow", but you can also use the botanical version
if you prefer.

Features
~~~~~~~~

- Real-time monitoring using Celery Events

    - Task progress and history
    - Ability to show task details (arguments, start time, runtime, and more)
    - Graphs and statistics

- Remote Control

    - View worker status and statistics
    - Shut down and restart worker instances
    - Control worker pool size and autoscale settings
    - View and modify the queues a worker instance consumes from
    - View currently running tasks
    - View scheduled tasks (ETA/countdown)
    - View reserved and revoked tasks
    - Apply time and rate limits
    - Configuration viewer
    - Revoke or terminate tasks

- HTTP API

    - List workers
    - Shut down a worker
    - Restart worker's pool
    - Grow worker's pool
    - Shrink worker's pool
    - Autoscale worker pool
    - Start consuming from a queue
    - Stop consuming from a queue
    - List tasks
    - List (seen) task types
    - Get a task info
    - Execute a task
    - Execute a task by name
    - Get a task result
    - Change soft and hard time limits for a task
    - Change rate limit for a task
    - Revoke a task

- OpenID authentication

**Screenshots**

.. figure:: ../images/dashboard.png
   :width: 700px

.. figure:: ../images/monitor.png
   :width: 700px

More screenshots_:

.. _screenshots: https://github.com/mher/flower/tree/master/docs/screenshots
Usage
~~~~~

You can use pip to install Flower:

.. code-block:: bash

    $ pip install flower

Running the ``flower`` command will start a web-server that you can visit:

.. code-block:: bash

    $ celery -A proj flower

The default port is http://localhost:5555, but you can change this using the
`--port` argument:

.. code-block:: bash

    $ celery -A proj flower --port=5555

The broker URL can also be passed through the `--broker` argument:

.. code-block:: bash

    $ celery flower --broker=amqp://guest:guest@localhost:5672//
    or
    $ celery flower --broker=redis://guest:guest@localhost:6379/0

Then, you can visit Flower in your web browser:

.. code-block:: bash

    $ open http://localhost:5555

Flower has many more features than are detailed here, including
authorization options. Check out the `official documentation`_ for more
information.

.. _official documentation: http://flower.readthedocs.org/en/latest/
.. _monitoring-celeryev:

celery events: Curses Monitor
-----------------------------

.. versionadded:: 2.0

`celery events` is a simple curses monitor displaying
task and worker history. You can inspect the result and traceback of tasks,
and it also supports some management commands like rate limiting and shutting
down workers. This monitor was started as a proof of concept, and you
probably want to use Flower instead.

Starting:

.. code-block:: bash

    $ celery -A proj events

You should see a screen like:

.. figure:: ../images/celeryevshotsm.jpg

`celery events` is also used to start snapshot cameras (see
:ref:`monitoring-snapshots`):

.. code-block:: bash

    $ celery -A proj events --camera=<camera-class> --frequency=1.0

and it includes a tool to dump events to :file:`stdout`:

.. code-block:: bash

    $ celery -A proj events --dump

For a complete list of options use ``--help``:

.. code-block:: bash

    $ celery events --help

.. _`celerymon`: http://github.com/celery/celerymon/
.. _monitoring-rabbitmq:

RabbitMQ
========

To manage a Celery cluster it is important to know how
RabbitMQ can be monitored.

RabbitMQ ships with the `rabbitmqctl(1)`_ command;
with this you can list queues, exchanges, bindings,
queue lengths, and the memory usage of each queue, as well
as manage users, virtual hosts, and their permissions.

.. note::

    The default virtual host (``"/"``) is used in these
    examples. If you use a custom virtual host you have to add
    the ``-p`` argument to the command, e.g.:
    ``rabbitmqctl list_queues -p my_vhost …``

.. _`rabbitmqctl(1)`: http://www.rabbitmq.com/man/rabbitmqctl.1.man.html
.. _monitoring-rmq-queues:

Inspecting queues
-----------------

Finding the number of tasks in a queue:

.. code-block:: bash

    $ rabbitmqctl list_queues name messages messages_ready \
                              messages_unacknowledged

Here `messages_ready` is the number of messages ready
for delivery (sent but not received), `messages_unacknowledged`
is the number of messages that have been received by a worker but
not acknowledged yet (meaning it is in progress, or has been reserved),
and `messages` is the sum of ready and unacknowledged messages.

Finding the number of workers currently consuming from a queue:

.. code-block:: bash

    $ rabbitmqctl list_queues name consumers

Finding the amount of memory allocated to a queue:

.. code-block:: bash

    $ rabbitmqctl list_queues name memory

:Tip: Adding the ``-q`` option to `rabbitmqctl(1)`_ makes the output
      easier to parse.
.. _monitoring-redis:

Redis
=====

If you're using Redis as the broker, you can monitor the Celery cluster using
the `redis-cli(1)` command to list lengths of queues.

.. _monitoring-redis-queues:

Inspecting queues
-----------------

Finding the number of tasks in a queue:

.. code-block:: bash

    $ redis-cli -h HOST -p PORT -n DATABASE_NUMBER llen QUEUE_NAME

The default queue is named `celery`. To get all available queues, invoke:

.. code-block:: bash

    $ redis-cli -h HOST -p PORT -n DATABASE_NUMBER keys \*

.. note::

    Queue keys only exist when there are tasks in them, so if a key
    does not exist it simply means there are no messages in that queue.
    This is because in Redis a list with no elements in it is automatically
    removed, and hence it won't show up in the `keys` command output,
    and `llen` for that list returns 0.

    Also, if you're using Redis for other purposes, the
    output of the `keys` command will include unrelated values stored in
    the database. The recommended way around this is to use a
    dedicated `DATABASE_NUMBER` for Celery. You can also use
    database numbers to separate Celery applications from each other (virtual
    hosts), but this will not affect the monitoring events used by e.g. Flower,
    as Redis pub/sub commands are global rather than database based.
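The same queue-length check can be scripted with the third-party `redis`
Python client. A minimal sketch, assuming the default `celery` queue name
(the helper accepts any object with an ``llen`` method, so it also works
with a test double):

.. code-block:: python

    def queue_length(client, queue='celery'):
        """Number of messages waiting in a Redis-backed Celery queue.

        `client` is any object with an `llen` method, e.g. `redis.Redis`.
        LLEN returns 0 for keys that don't exist, which matches the note
        above: an absent key simply means an empty queue.
        """
        return client.llen(queue)

    # Usage with a real connection (requires `pip install redis`; host,
    # port, and database number are placeholders -- adjust as needed):
    #
    #     import redis
    #     client = redis.Redis(host='localhost', port=6379, db=0)
    #     print(queue_length(client))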
.. _monitoring-munin:

Munin
=====

This is a list of known Munin plug-ins that can be useful when
maintaining a Celery cluster.

* rabbitmq-munin: Munin plug-ins for RabbitMQ.

    http://github.com/ask/rabbitmq-munin

* celery_tasks: Monitors the number of times each task type has
  been executed (requires `celerymon`).

    http://exchange.munin-monitoring.org/plugins/celery_tasks-2/details

* celery_task_states: Monitors the number of tasks in each state
  (requires `celerymon`).

    http://exchange.munin-monitoring.org/plugins/celery_tasks/details
.. _monitoring-events:

Events
======

The worker has the ability to send a message whenever some event
happens. These events are then captured by tools like Flower
and :program:`celery events` to monitor the cluster.

.. _monitoring-snapshots:

Snapshots
---------

.. versionadded:: 2.1

Even a single worker can produce a huge amount of events, so storing
the history of all events on disk may be very expensive.

A sequence of events describes the cluster state in that time period;
by taking periodic snapshots of this state you can keep all history, but
still only periodically write it to disk.

To take snapshots you need a Camera class, with which you can define
what should happen every time the state is captured; you can
write it to a database, send it by email, or something else entirely.

:program:`celery events` is then used to take snapshots with the camera.
For example, if you want to capture state every 2 seconds using the
camera ``myapp.Camera`` you run :program:`celery events` with the following
arguments:

.. code-block:: bash

    $ celery -A proj events -c myapp.Camera --frequency=2.0
.. _monitoring-camera:

Custom Camera
~~~~~~~~~~~~~

Cameras can be useful if you need to capture events and do something
with those events at an interval. For real-time event processing
you should use :class:`@events.Receiver` directly, like in
:ref:`event-real-time-example`.

Here is an example camera, dumping the snapshot to screen:

.. code-block:: python

    from pprint import pformat

    from celery.events.snapshot import Polaroid

    class DumpCam(Polaroid):

        def on_shutter(self, state):
            if not state.event_count:
                # No new events since last snapshot.
                return
            print('Workers: {0}'.format(pformat(state.workers, indent=4)))
            print('Tasks: {0}'.format(pformat(state.tasks, indent=4)))
            print('Total: {0.event_count} events, {0.task_count} tasks'.format(
                state))

See the API reference for :mod:`celery.events.state` to read more
about state objects.

Now you can use this cam with :program:`celery events` by specifying
it with the :option:`-c` option:

.. code-block:: bash

    $ celery -A proj events -c myapp.DumpCam --frequency=2.0
Or you can use it programmatically like this:

.. code-block:: python

    from celery import Celery
    from myapp import DumpCam

    def main(app, freq=1.0):
        state = app.events.State()
        with app.connection() as connection:
            recv = app.events.Receiver(connection, handlers={'*': state.event})
            with DumpCam(state, freq=freq):
                recv.capture(limit=None, timeout=None)

    if __name__ == '__main__':
        app = Celery(broker='amqp://guest@localhost//')
        main(app)
.. _event-real-time-example:

Real-time processing
--------------------

To process events in real-time you need the following:

- An event consumer (this is the ``Receiver``)

- A set of handlers called when events come in.

    You can have different handlers for each event type,
    or a catch-all handler can be used ('*').

- State (optional)

    :class:`@events.State` is a convenient in-memory representation
    of tasks and workers in the cluster that is updated as events come in.

    It encapsulates solutions for many common things, like checking if a
    worker is still alive (by verifying heartbeats), merging event fields
    together as events come in, making sure timestamps are in sync, and so on.

Combining these you can easily process events in real-time:
.. code-block:: python

    from celery import Celery

    def my_monitor(app):
        state = app.events.State()

        def announce_failed_tasks(event):
            state.event(event)
            # task name is sent only with -received event, and state
            # will keep track of this for us.
            task = state.tasks.get(event['uuid'])

            print('TASK FAILED: %s[%s] %s' % (
                task.name, task.uuid, task.info(),))

        with app.connection() as connection:
            recv = app.events.Receiver(connection, handlers={
                    'task-failed': announce_failed_tasks,
                    '*': state.event,
            })
            recv.capture(limit=None, timeout=None, wakeup=True)

    if __name__ == '__main__':
        app = Celery(broker='amqp://guest@localhost//')
        my_monitor(app)

.. note::

    The ``wakeup`` argument to ``capture`` sends a signal to all workers
    to force them to send a heartbeat. This way you can immediately see
    workers when the monitor starts.
You can listen to specific events by specifying the handlers:

.. code-block:: python

    from celery import Celery

    def my_monitor(app):
        state = app.events.State()

        def announce_failed_tasks(event):
            state.event(event)
            # task name is sent only with -received event, and state
            # will keep track of this for us.
            task = state.tasks.get(event['uuid'])

            print('TASK FAILED: %s[%s] %s' % (
                task.name, task.uuid, task.info(),))

        with app.connection() as connection:
            recv = app.events.Receiver(connection, handlers={
                    'task-failed': announce_failed_tasks,
            })
            recv.capture(limit=None, timeout=None, wakeup=True)

    if __name__ == '__main__':
        app = Celery(broker='amqp://guest@localhost//')
        my_monitor(app)
.. _event-reference:

Event Reference
===============

This list contains the events sent by the worker, and their arguments.

.. _event-reference-task:

Task Events
-----------

.. event:: task-sent

task-sent
~~~~~~~~~

:signature: ``task-sent(uuid, name, args, kwargs, retries, eta, expires,
            queue, exchange, routing_key)``

Sent when a task message is published and
the :setting:`CELERY_SEND_TASK_SENT_EVENT` setting is enabled.

.. event:: task-received

task-received
~~~~~~~~~~~~~

:signature: ``task-received(uuid, name, args, kwargs, retries, eta, hostname,
            timestamp)``

Sent when the worker receives a task.

.. event:: task-started

task-started
~~~~~~~~~~~~

:signature: ``task-started(uuid, hostname, timestamp, pid)``

Sent just before the worker executes the task.

.. event:: task-succeeded

task-succeeded
~~~~~~~~~~~~~~

:signature: ``task-succeeded(uuid, result, runtime, hostname, timestamp)``

Sent if the task executed successfully.

Runtime is the time it took to execute the task using the pool
(starting from the task being sent to the worker pool, and ending when the
pool result handler callback is called).

.. event:: task-failed

task-failed
~~~~~~~~~~~

:signature: ``task-failed(uuid, exception, traceback, hostname, timestamp)``

Sent if the execution of the task failed.

.. event:: task-revoked

task-revoked
~~~~~~~~~~~~

:signature: ``task-revoked(uuid, terminated, signum, expired)``

Sent if the task has been revoked (note that this is likely
to be sent by more than one worker).

- ``terminated`` is set to true if the task process was terminated,
  and the ``signum`` field set to the signal used.

- ``expired`` is set to true if the task expired.

.. event:: task-retried

task-retried
~~~~~~~~~~~~

:signature: ``task-retried(uuid, exception, traceback, hostname, timestamp)``

Sent if the task failed, but will be retried in the future.
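A monitor consuming the task events above typically reduces them to a
per-task state. A minimal sketch of that mapping -- the state names follow
Celery's result states, but the helper itself is hypothetical, not part of
the Celery API (Celery's own :mod:`celery.events.state` does this for you):

.. code-block:: python

    # Hypothetical mapping from the task event types listed above to the
    # result state a monitor would record for the task.
    EVENT_TO_STATE = {
        'task-sent': 'PENDING',
        'task-received': 'RECEIVED',
        'task-started': 'STARTED',
        'task-succeeded': 'SUCCESS',
        'task-failed': 'FAILURE',
        'task-revoked': 'REVOKED',
        'task-retried': 'RETRY',
    }

    def state_for(event):
        """Return the task state implied by an event dict, or None
        for non-task events (e.g. worker heartbeats)."""
        return EVENT_TO_STATE.get(event.get('type'))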
.. _event-reference-worker:

Worker Events
-------------

.. event:: worker-online

worker-online
~~~~~~~~~~~~~

:signature: ``worker-online(hostname, timestamp, freq, sw_ident, sw_ver, sw_sys)``

The worker has connected to the broker and is online.

- `hostname`: Hostname of the worker.
- `timestamp`: Event timestamp.
- `freq`: Heartbeat frequency in seconds (float).
- `sw_ident`: Name of worker software (e.g. ``py-celery``).
- `sw_ver`: Software version (e.g. 2.2.0).
- `sw_sys`: Operating System (e.g. Linux, Windows, Darwin).

.. event:: worker-heartbeat

worker-heartbeat
~~~~~~~~~~~~~~~~

:signature: ``worker-heartbeat(hostname, timestamp, freq, sw_ident, sw_ver, sw_sys,
            active, processed)``

Sent every minute. If the worker has not sent a heartbeat in 2 minutes,
it is considered to be offline.

- `hostname`: Hostname of the worker.
- `timestamp`: Event timestamp.
- `freq`: Heartbeat frequency in seconds (float).
- `sw_ident`: Name of worker software (e.g. ``py-celery``).
- `sw_ver`: Software version (e.g. 2.2.0).
- `sw_sys`: Operating System (e.g. Linux, Windows, Darwin).
- `active`: Number of currently executing tasks.
- `processed`: Total number of tasks processed by this worker.

.. event:: worker-offline

worker-offline
~~~~~~~~~~~~~~

:signature: ``worker-offline(hostname, timestamp, freq, sw_ident, sw_ver, sw_sys)``

The worker has disconnected from the broker.
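The 2-minute heartbeat rule above can be applied by hand when you consume
worker events without :class:`@events.State`. A minimal sketch, where the
tracker class and its names are hypothetical helpers, not part of Celery
(``events.State`` implements this bookkeeping for you):

.. code-block:: python

    import time

    class WorkerLiveness:
        """Tracks worker liveness from worker-online/heartbeat/offline
        events, applying the 2-minute silence rule described above."""

        TIMEOUT = 120.0  # seconds of silence before a worker is offline

        def __init__(self):
            self.last_seen = {}  # hostname -> timestamp of last event

        def event(self, event):
            hostname = event['hostname']
            if event['type'] == 'worker-offline':
                # Explicit goodbye: forget the worker immediately.
                self.last_seen.pop(hostname, None)
            else:  # worker-online / worker-heartbeat
                self.last_seen[hostname] = event['timestamp']

        def alive(self, hostname, now=None):
            now = time.time() if now is None else now
            seen = self.last_seen.get(hostname)
            return seen is not None and now - seen < self.TIMEOUT

An instance of this class could be passed as the catch-all ``'*'`` handler
to :class:`@events.Receiver` via its ``event`` method, just like
``state.event`` in the real-time examples above.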