
.. _guide-monitoring:

==================
Monitoring Guide
==================

.. contents::
    :local:

Introduction
============

There are several tools available to monitor and inspect Celery
clusters.  This document describes some of these, as well as
features related to monitoring, like events and broadcast commands.
.. _monitoring-workers:

Workers
=======

.. _monitoring-celeryctl:

celeryctl: Management Utility
-----------------------------

:mod:`~celery.bin.celeryctl` is a command line utility to inspect
and manage worker nodes (and to some degree tasks).

To list all the commands from the command line do::

    $ celeryctl help

or to get help for a specific command do::

    $ celeryctl <command> --help

Commands
~~~~~~~~
* **status**: List active nodes in this cluster::

    $ celeryctl status

* **result**: Show the result of a task::

    $ celeryctl result -t tasks.add 4e196aa4-0141-4601-8138-7aa33db0f577

  Note that you can omit the name of the task as long as the
  task doesn't use a custom result backend.

* **inspect active**: List active tasks::

    $ celeryctl inspect active

  These are all the tasks that are currently being executed.

* **inspect scheduled**: List scheduled ETA tasks::

    $ celeryctl inspect scheduled

  These are tasks reserved by the worker because they have the
  ``eta`` or ``countdown`` argument set.

* **inspect reserved**: List reserved tasks::

    $ celeryctl inspect reserved

  This will list all tasks that have been prefetched by the worker
  and are currently waiting to be executed (does not include tasks
  with an ETA).

* **inspect revoked**: List history of revoked tasks::

    $ celeryctl inspect revoked

* **inspect registered_tasks**: List registered tasks::

    $ celeryctl inspect registered_tasks

* **inspect stats**: Show worker statistics::

    $ celeryctl inspect stats

* **inspect diagnose**: Diagnose the pool processes::

    $ celeryctl inspect diagnose

  This will verify that the worker's pool processes are available
  to do work.  Note that this will not work if the worker is busy.

* **inspect enable_events**: Enable events::

    $ celeryctl inspect enable_events

* **inspect disable_events**: Disable events::

    $ celeryctl inspect disable_events

:Note: All ``inspect`` commands support the ``--timeout`` argument,
    which is the number of seconds to wait for responses.
    You may have to increase this timeout if you're getting empty responses
    due to latency.
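
The same information is also available from Python.  A minimal sketch,
assuming a configured Celery environment connected to the default broker
(the methods mirror the ``inspect`` subcommands above):

.. code-block:: python

    from celery.task.control import inspect

    # Inspect all nodes; pass a destination list to narrow it down,
    # e.g. inspect(["w1.example.com"]).
    i = inspect()

    print(i.active())            # tasks currently being executed
    print(i.scheduled())         # eta/countdown tasks reserved by the worker
    print(i.registered_tasks())  # tasks each worker has registered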
.. _celeryctl-inspect-destination:

Specifying destination nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default the inspect commands operate on all workers.
You can specify a single worker, or a list of workers, by using the
``--destination`` argument::

    $ celeryctl inspect -d w1,w2 reserved
.. _monitoring-django-admin:

Django Admin Monitor
--------------------

When you add `django-celery`_ to your Django project you will
automatically get a monitor section as part of the Django admin interface.

This can also be used if you're not using Celery with a Django project.

*Screenshot*

.. image:: http://celeryproject.org/beta/djangoceleryadmin2.jpg

.. _`django-celery`: http://pypi.python.org/pypi/django-celery

.. _monitoring-django-starting:

Starting the monitor
~~~~~~~~~~~~~~~~~~~~

The Celery section will already be present in your admin interface,
but you won't see any data appearing until you start the snapshot camera.

The camera takes snapshots of the events your workers send at regular
intervals, storing them in your database (see :ref:`monitoring-snapshots`).

To start the camera run::

    $ python manage.py celerycam

If you haven't already enabled the sending of events you need to do so::

    $ python manage.py celeryctl inspect enable_events

:Tip: You can enable events when the worker starts using the ``-E`` argument
    to :mod:`~celery.bin.celeryd`.

Now that the camera has been started, and events have been enabled,
you should be able to see your workers and the tasks in the admin interface
(it may take some time for workers to show up).
.. _monitoring-django-frequency:

Shutter frequency
~~~~~~~~~~~~~~~~~

By default the camera takes a snapshot every second.  If this is too
frequent, or you want higher precision, you can change it using the
``--frequency`` argument.  This is a float describing how often, in seconds,
the camera should wake up to check if there are any new events::

    $ python manage.py celerycam --frequency=3.0

The camera also supports rate limiting using the ``--maxrate`` argument.
While the frequency controls how often the camera thread wakes up,
the rate limit controls how often it will actually take a snapshot.

The rate limits can be specified in seconds, minutes or hours
by appending ``/s``, ``/m`` or ``/h`` to the value.
For example, ``--maxrate=100/m`` means "one hundred writes a minute".

The rate limit is off by default, which means the camera will take a snapshot
every ``--frequency`` seconds.
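
To make the rate format concrete, here is an illustrative parser sketch.
It is not part of Celery (the name ``parse_rate`` and the per-unit divisors
are made up for this example); it only shows how a value like ``100/m``
translates into a minimum interval between snapshots:

.. code-block:: python

    # Illustrative only: interpret a "100/m"-style rate string as the
    # minimum number of seconds between snapshots.
    UNIT_SECONDS = {"s": 1, "m": 60, "h": 60 * 60}

    def parse_rate(rate):
        """Return the minimum interval in seconds implied by e.g. '100/m'."""
        if not rate:
            return 0.0  # no rate limit
        ops, _, unit = rate.partition("/")
        return UNIT_SECONDS[unit or "s"] / float(ops)

    print(parse_rate("100/m"))  # 0.6 -> at most one snapshot every 0.6s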
The events also expire after some time, so the database doesn't fill up.
Successful tasks are deleted after 1 day, failed tasks after 3 days,
and tasks in other states after 5 days.

.. _monitoring-nodjango:

Using outside of Django
~~~~~~~~~~~~~~~~~~~~~~~

TODO
.. _monitoring-celeryev:

celeryev: Curses Monitor
------------------------

:mod:`~celery.bin.celeryev` is a simple curses monitor displaying
task and worker history.  You can inspect the result and traceback of tasks,
and it also supports some management commands like rate limiting and shutdown
of workers.

.. image:: http://celeryproject.org/img/celeryevshotsm.jpg

:mod:`~celery.bin.celeryev` is also used to start snapshot cameras (see
:ref:`monitoring-snapshots`)::

    $ celeryev --camera=<camera-class> --frequency=1.0

and it includes a tool to dump events to stdout::

    $ celeryev --dump

For a complete list of options use ``--help``::

    $ celeryev --help
.. _monitoring-celerymon:

celerymon: Web monitor
----------------------

`celerymon`_ is the ongoing work to create a web monitor.
It's far from complete yet, and currently only supports
a JSON API.  Help is desperately needed for this project, so if you,
or someone you know, would like to contribute templates, design, code
or help this project in any way, please get in touch!

:Tip: The Django admin monitor can be used even though you're not using
    Celery with a Django project.  See :ref:`monitoring-nodjango`.

.. _`celerymon`: http://github.com/ask/celerymon/
.. _monitoring-rabbitmq:

RabbitMQ
========

To manage a Celery cluster it is important to know how
RabbitMQ can be monitored.

RabbitMQ ships with the `rabbitmqctl(1)`_ command.
With this you can list queues, exchanges and bindings,
see queue lengths and the memory usage of each queue, as well
as manage users, virtual hosts and their permissions.

:Note: The default virtual host (``"/"``) is used in these
    examples.  If you use a custom virtual host you have to add
    the ``-p`` argument to the command, e.g.:
    ``rabbitmqctl list_queues -p my_vhost ....``

.. _`rabbitmqctl(1)`: http://www.rabbitmq.com/man/rabbitmqctl.1.man.html
.. _monitoring-rmq-queues:

Inspecting queues
-----------------

Finding the number of tasks in a queue::

    $ rabbitmqctl list_queues name messages messages_ready \
        messages_unacknowledged

Here ``messages_ready`` is the number of messages ready
for delivery (sent but not received), ``messages_unacknowledged``
is the number of messages that have been received by a worker but
not acknowledged yet (meaning the task is in progress, or has been
reserved), and ``messages`` is the sum of ready and unacknowledged
messages combined.

Finding the number of workers currently consuming from a queue::

    $ rabbitmqctl list_queues name consumers

Finding the amount of memory allocated to a queue::

    $ rabbitmqctl list_queues name memory

:Tip: Adding the ``-q`` option to `rabbitmqctl(1)`_ makes the output
    easier to parse.
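
For example, if you want to collect these numbers from a script, a sketch
like the following parses the ``-q`` output (it assumes ``rabbitmqctl`` is
on ``$PATH`` and that the script has permission to run it; the function
name ``queue_stats`` is made up for this example):

.. code-block:: python

    import subprocess

    def queue_stats():
        """Return {queue_name: (messages, ready, unacknowledged)}."""
        # -q suppresses the "Listing queues ..." banner, leaving
        # tab-separated rows of the requested columns.
        output = subprocess.Popen(
            ["rabbitmqctl", "-q", "list_queues",
             "name", "messages", "messages_ready",
             "messages_unacknowledged"],
            stdout=subprocess.PIPE).communicate()[0].decode("utf-8")
        stats = {}
        for line in output.splitlines():
            name, messages, ready, unacked = line.split("\t")
            stats[name] = (int(messages), int(ready), int(unacked))
        return stats

    if __name__ == "__main__":
        for name, (messages, ready, unacked) in queue_stats().items():
            print("%s: %s total (%s ready / %s unacked)" % (
                name, messages, ready, unacked))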
.. _monitoring-munin:

Munin
=====

This is a list of known Munin plugins that can be useful when
maintaining a Celery cluster.

* rabbitmq-munin: Munin plugins for RabbitMQ.

    http://github.com/ask/rabbitmq-munin

* celery_tasks: Monitors the number of times each task type has
  been executed (requires ``celerymon``).

    http://exchange.munin-monitoring.org/plugins/celery_tasks-2/details

* celery_task_states: Monitors the number of tasks in each state
  (requires ``celerymon``).

    http://exchange.munin-monitoring.org/plugins/celery_tasks/details
.. _monitoring-events:

Events
======

The worker has the ability to send a message whenever some event
happens.  These events are then captured by tools like ``celerymon`` and
``celeryev`` to monitor the cluster.
.. _monitoring-snapshots:

Snapshots
---------

Even a single worker can produce a huge amount of events, so storing
the history of events on disk may be very expensive.

A sequence of events describes the cluster state in that time period;
by taking periodic snapshots of this state we can keep all history, but
still only periodically write it to disk.

To take snapshots you need a Camera class, with which you can define
what should happen every time the state is captured.  You can
write it to a database, send it by e-mail, or something else entirely.

``celeryev`` is then used to take snapshots with the camera.
For example, if you want to capture state every 2 seconds using the
camera ``myapp.Camera`` you run ``celeryev`` with the following arguments::

    $ celeryev -c myapp.Camera --frequency=2.0
.. _monitoring-camera:

Custom Camera
~~~~~~~~~~~~~

Here is an example camera, dumping the snapshot to the screen:

.. code-block:: python

    from pprint import pformat

    from celery.events.snapshot import Polaroid

    class DumpCam(Polaroid):

        def shutter(self, state):
            if not state.event_count:
                # No new events since last snapshot.
                return
            print("Workers: %s" % (pformat(state.workers, indent=4), ))
            print("Tasks: %s" % (pformat(state.tasks, indent=4), ))
            print("Total: %s events, %s tasks" % (
                state.event_count, state.task_count))

Now you can use this cam with ``celeryev`` by specifying
it with the ``-c`` option::

    $ celeryev -c myapp.DumpCam --frequency=2.0
Or you can use it programmatically like this::

    from celery.events import EventReceiver
    from celery.messaging import establish_connection
    from celery.events.state import State

    from myapp import DumpCam

    def main():
        # In-memory representation of the cluster, updated by every event.
        state = State()
        with establish_connection() as connection:
            recv = EventReceiver(connection, handlers={"*": state.event})
            # Snapshot the state every second until interrupted.
            with DumpCam(state, freq=1.0):
                recv.capture(limit=None, timeout=None)

    if __name__ == "__main__":
        main()
.. _event-reference:

Event Reference
---------------

This list contains the events sent by the worker, and their arguments.

.. _event-reference-task:

Task Events
~~~~~~~~~~~

* ``task-received(uuid, name, args, kwargs, retries, eta, hostname,
  timestamp)``

    Sent when the worker receives a task.

* ``task-started(uuid, hostname, timestamp)``

    Sent just before the worker executes the task.

* ``task-succeeded(uuid, result, runtime, hostname, timestamp)``

    Sent if the task executed successfully.

    Runtime is the time it took to execute the task using the pool.
    (Time starting from when the task is sent to the pool, and ending
    when the pool result handler callback is called.)

* ``task-failed(uuid, exception, traceback, hostname, timestamp)``

    Sent if the execution of the task failed.

* ``task-revoked(uuid)``

    Sent if the task has been revoked (note that this is likely
    to be sent by more than one worker).

* ``task-retried(uuid, exception, traceback, hostname, delay, timestamp)``

    Sent if the task failed, but will be retried in the future.
    (**NOT IMPLEMENTED**)
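
Each event arrives as a dictionary with the fields listed above, and
handlers can be registered per event type.  As a rough sketch, reusing
the ``EventReceiver`` API from the snapshot example (the handler name
``on_task_failed`` is made up for this example), you could log failures
like this:

.. code-block:: python

    from celery.events import EventReceiver
    from celery.messaging import establish_connection

    def on_task_failed(event):
        # Every event is a dict; "uuid" identifies the task instance.
        print("TASK FAILED on %s: %s\n%s" % (
            event["hostname"], event["exception"], event["traceback"]))

    def main():
        with establish_connection() as connection:
            recv = EventReceiver(connection,
                                 handlers={"task-failed": on_task_failed})
            recv.capture(limit=None, timeout=None)

    if __name__ == "__main__":
        main()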
.. _event-reference-worker:

Worker Events
~~~~~~~~~~~~~

* ``worker-online(hostname, timestamp)``

    The worker has connected to the broker and is online.

* ``worker-heartbeat(hostname, timestamp)``

    Sent every minute.  If the worker has not sent a heartbeat in
    2 minutes, it is considered to be offline.

* ``worker-offline(hostname, timestamp)``

    The worker has disconnected from the broker.
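
The heartbeat rule above is easy to apply yourself.  As a minimal sketch
(the two-minute threshold comes from the rule above; the function names
are made up for this example), you can track the last heartbeat per node
and flag stale ones:

.. code-block:: python

    import time

    HEARTBEAT_EXPIRES = 2 * 60  # seconds, per the worker-heartbeat rule

    last_heartbeat = {}  # hostname -> timestamp of last heartbeat seen

    def on_worker_event(event):
        # Wire this up as the handler for "worker-online" and
        # "worker-heartbeat" in an EventReceiver, as shown earlier.
        last_heartbeat[event["hostname"]] = event["timestamp"]

    def offline_workers():
        """Hostnames whose last heartbeat is older than the threshold."""
        now = time.time()
        return [hostname for hostname, stamp in last_heartbeat.items()
                if now - stamp > HEARTBEAT_EXPIRES]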