
.. _guide-monitoring:

=================================
 Monitoring and Management Guide
=================================

.. contents::
    :local:

Introduction
============

There are several tools available to monitor and inspect Celery clusters.

This document describes some of these, as well as
features related to monitoring, like events and broadcast commands.

.. _monitoring-workers:

Workers
=======

.. _monitoring-celeryctl:

``celery``: Management Command-line Utility
-------------------------------------------

.. versionadded:: 2.1

:program:`celery` can also be used to inspect
and manage worker nodes (and to some degree tasks).

To list all the commands available do::

    $ celery help

or to get help for a specific command do::

    $ celery <command> --help

Commands
~~~~~~~~

* **shell**: Drop into a Python shell.

  The locals will include the ``celery`` variable, which is the current app.
  Also all known tasks will be automatically added to locals (unless the
  ``--without-tasks`` flag is set).

  Uses IPython, bpython, or regular python, in that order, if installed.
  You can force an implementation using ``--force-ipython|-I``,
  ``--force-bpython|-B``, or ``--force-python|-P``.

* **status**: List active nodes in this cluster::

    $ celery status

* **result**: Show the result of a task::

    $ celery result -t tasks.add 4e196aa4-0141-4601-8138-7aa33db0f577

  Note that you can omit the name of the task as long as the
  task doesn't use a custom result backend.

* **purge**: Purge messages from all configured task queues::

    $ celery purge

  .. warning::

      There is no undo for this operation, and messages will
      be permanently deleted!

* **inspect active**: List active tasks::

    $ celery inspect active

  These are all the tasks that are currently being executed.

* **inspect scheduled**: List scheduled ETA tasks::

    $ celery inspect scheduled

  These are tasks reserved by the worker because they have the
  `eta` or `countdown` argument set.

* **inspect reserved**: List reserved tasks::

    $ celery inspect reserved

  This will list all tasks that have been prefetched by the worker,
  and are currently waiting to be executed (doesn't include tasks
  with an ETA).

* **inspect revoked**: List history of revoked tasks::

    $ celery inspect revoked

* **inspect registered**: List registered tasks::

    $ celery inspect registered

* **inspect stats**: Show worker statistics::

    $ celery inspect stats

* **control enable_events**: Enable events::

    $ celery control enable_events

* **control disable_events**: Disable events::

    $ celery control disable_events

* **migrate**: Migrate tasks from one broker to another (**EXPERIMENTAL**)::

    $ celery migrate redis://localhost amqp://localhost

  This command will migrate all the tasks on one broker to another.
  As this command is new and experimental you should be sure to have
  a backup of the data before proceeding.

.. note::

    All ``inspect`` commands support a ``--timeout`` argument;
    this is the number of seconds to wait for responses.
    You may have to increase this timeout if you're not getting a response
    due to latency.

.. _celeryctl-inspect-destination:

Specifying destination nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default the inspect commands operate on all workers.
You can specify a single worker, or a list of workers, by using the
`--destination` argument::

    $ celery inspect -d w1,w2 reserved

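The same inspect operations are also available from Python through the
app's control interface. Below is a minimal sketch; the broker URL and
worker names are assumptions to adapt to your setup:

.. code-block:: python

    from celery import Celery

    # Hypothetical app; point the broker URL at your own broker.
    app = Celery(broker='amqp://localhost')

    # Restrict the request to specific workers, like ``-d w1,w2`` above.
    i = app.control.inspect(destination=['w1@localhost', 'w2@localhost'])

    print(i.reserved())   # tasks prefetched, waiting to execute
    print(i.active())     # tasks currently being executed
    print(i.scheduled())  # tasks with an `eta` or `countdown`
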
.. _monitoring-flower:

Celery Flower: Web interface
----------------------------

Celery Flower is a web based, real-time monitor and administration tool.

Features
~~~~~~~~

- Worker monitoring and management
- Configuration viewer
- Worker pool control
- Broker options viewer
- Queue management
- Task execution statistics
- Task viewer

**Screenshot**

.. figure:: ../images/dashboard.png

More screenshots_:

.. _screenshots: https://github.com/mher/flower/tree/master/docs/screenshots

Usage
~~~~~

Install Celery Flower::

    $ pip install flower

Launch Celery Flower and open http://localhost:8008 in your browser::

    $ celery flower --port=8008

.. _monitoring-django-admin:

Django Admin Monitor
--------------------

.. versionadded:: 2.1

When you add `django-celery`_ to your Django project you will
automatically get a monitor section as part of the Django admin interface.

This can also be used if you're not using Celery with a Django project
(see :ref:`monitoring-nodjango`).

*Screenshot*

.. figure:: ../images/djangoceleryadmin2.jpg

.. _`django-celery`: http://pypi.python.org/pypi/django-celery

.. _monitoring-django-starting:

Starting the monitor
~~~~~~~~~~~~~~~~~~~~

The Celery section will already be present in your admin interface,
but you won't see any data appearing until you start the snapshot camera.

The camera takes snapshots of the events your workers send at regular
intervals, storing them in your database (see :ref:`monitoring-snapshots`).

To start the camera run::

    $ python manage.py celerycam

If you haven't already enabled the sending of events you need to do so::

    $ python manage.py celery control enable_events

:Tip: You can enable events when the worker starts using the `-E` argument.

Now that the camera has been started, and events have been enabled,
you should be able to see your workers and the tasks in the admin interface
(it may take some time for workers to show up).

The admin interface shows tasks, worker nodes, and even
lets you perform some actions, like revoking and rate limiting tasks,
or shutting down worker nodes.

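These same actions are available from Python through the control API.
A minimal sketch, assuming a broker on localhost; the task id and task
name are placeholders:

.. code-block:: python

    from celery import Celery

    app = Celery(broker='amqp://localhost')  # assumed broker URL

    # Revoke a task by id (placeholder uuid).
    app.control.revoke('4e196aa4-0141-4601-8138-7aa33db0f577')

    # Limit a (hypothetical) task type to ten executions per minute.
    app.control.rate_limit('tasks.add', '10/m')

    # Gracefully shut down all workers.
    app.control.broadcast('shutdown')
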
.. _monitoring-django-frequency:

Shutter frequency
~~~~~~~~~~~~~~~~~

By default the camera takes a snapshot every second; if this is too frequent,
or you want higher precision, then you can change it using the
``--frequency`` argument. This is a float describing how often, in seconds,
the camera should wake up to check if there are any new events::

    $ python manage.py celerycam --frequency=3.0

The camera also supports rate limiting using the ``--maxrate`` argument.
While the frequency controls how often the camera thread wakes up,
the rate limit controls how often it will actually take a snapshot.

The rate limits can be specified in seconds, minutes or hours
by appending `/s`, `/m` or `/h` to the value.
For example, ``--maxrate=100/m`` means "a hundred writes a minute".

The rate limit is off by default, which means the camera will take a snapshot
every ``--frequency`` seconds.

Events also expire after some time, so the database doesn't fill up.
Successful tasks are deleted after 1 day, failed tasks after 3 days,
and tasks in other states after 5 days.

.. _monitoring-django-reset:

Resetting monitor data
~~~~~~~~~~~~~~~~~~~~~~

To reset the monitor data you need to clear out two models::

    >>> from djcelery.models import WorkerState, TaskState

    # delete worker history
    >>> WorkerState.objects.all().delete()

    # delete task history
    >>> TaskState.objects.all().update(hidden=True)
    >>> TaskState.objects.purge()

.. _monitoring-django-expiration:

Expiration
~~~~~~~~~~

By default, monitor data for successful tasks will expire in 1 day,
failed tasks in 3 days, and pending tasks in 5 days.

You can change the expiry times for each of these by
adding the following settings to your :file:`settings.py`::

    from datetime import timedelta

    CELERYCAM_EXPIRE_SUCCESS = timedelta(hours=1)
    CELERYCAM_EXPIRE_ERROR = timedelta(hours=2)
    CELERYCAM_EXPIRE_PENDING = timedelta(hours=2)

.. _monitoring-nodjango:

Using outside of Django
~~~~~~~~~~~~~~~~~~~~~~~

`django-celery` also installs the :program:`djcelerymon` program. This
can be used by non-Django users, and runs both a web server and a snapshot
camera in the same process.

**Installing**

Using :program:`pip`::

    $ pip install -U django-celery

or using :program:`easy_install`::

    $ easy_install -U django-celery

**Running**

:program:`djcelerymon` reads configuration from your Celery configuration
module, and sets up the Django environment using the same settings::

    $ djcelerymon

Database tables will be created the first time the monitor is run.
By default an `sqlite3` database file named
:file:`djcelerymon.db` is used, so make sure this file is writable by the
user running the monitor.

If you want to store the events in a different database, e.g. MySQL,
then you can configure the `DATABASE*` settings directly in your Celery
config module. See http://docs.djangoproject.com/en/dev/ref/settings/#databases
for more information about the database options available.

You will also be asked to create a superuser (and you need to create one
to be able to log into the admin later)::

    Creating table auth_permission
    Creating table auth_group_permissions
    [...]

    You just installed Django's auth system, which means you don't
    have any superusers defined. Would you like to create
    one now? (yes/no): yes
    Username (Leave blank to use 'username'): username
    Email address: me@example.com
    Password: ******
    Password (again): ******
    Superuser created successfully.

    [...]

    Django version 1.2.1, using settings 'celeryconfig'
    Development server is running at http://127.0.0.1:8000/
    Quit the server with CONTROL-C.

Now that the service is started you can visit the monitor
at http://127.0.0.1:8000, and log in using the user you created.

For a list of the command line options supported by :program:`djcelerymon`,
please see ``djcelerymon --help``.

.. _monitoring-celeryev:

celery events: Curses Monitor
-----------------------------

.. versionadded:: 2.0

`celery events` is a simple curses monitor displaying
task and worker history. You can inspect the result and traceback of tasks,
and it also supports some management commands like rate limiting and shutting
down workers.

Starting::

    $ celery events

You should see a screen like:

.. figure:: ../images/celeryevshotsm.jpg

`celery events` is also used to start snapshot cameras (see
:ref:`monitoring-snapshots`)::

    $ celery events --camera=<camera-class> --frequency=1.0

and it includes a tool to dump events to :file:`stdout`::

    $ celery events --dump

For a complete list of options use ``--help``::

    $ celery events --help

.. _monitoring-celerymon:

celerymon: Web monitor
----------------------

`celerymon`_ is the ongoing work to create a web monitor.
It's far from complete, and currently only supports
a JSON API. Help is desperately needed for this project, so if you,
or someone you know, would like to contribute templates, design, code
or help this project in any way, please get in touch!

:Tip: The Django admin monitor can be used even though you're not using
      Celery with a Django project. See :ref:`monitoring-nodjango`.

.. _`celerymon`: http://github.com/celery/celerymon/

.. _monitoring-rabbitmq:

RabbitMQ
========

To manage a Celery cluster it is important to know how
RabbitMQ can be monitored.

RabbitMQ ships with the `rabbitmqctl(1)`_ command;
with this you can list queues, exchanges, bindings,
queue lengths, and the memory usage of each queue, as well
as manage users, virtual hosts and their permissions.

.. note::

    The default virtual host (``"/"``) is used in these
    examples. If you use a custom virtual host you have to add
    the ``-p`` argument to the command, e.g.:
    ``rabbitmqctl list_queues -p my_vhost ....``

.. _`rabbitmqctl(1)`: http://www.rabbitmq.com/man/rabbitmqctl.1.man.html

.. _monitoring-rmq-queues:

Inspecting queues
-----------------

Finding the number of tasks in a queue::

    $ rabbitmqctl list_queues name messages messages_ready \
        messages_unacknowledged

Here `messages_ready` is the number of messages ready
for delivery (sent but not received), `messages_unacknowledged`
is the number of messages that have been received by a worker but
not acknowledged yet (meaning they are in progress, or have been reserved),
and `messages` is the sum of ready and unacknowledged messages.

Finding the number of workers currently consuming from a queue::

    $ rabbitmqctl list_queues name consumers

Finding the amount of memory allocated to a queue::

    $ rabbitmqctl list_queues name memory

:Tip: Adding the ``-q`` option to `rabbitmqctl(1)`_ makes the output
      easier to parse.

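If you want to feed these numbers into your own monitoring scripts,
the ``-q`` output is straightforward to parse. A minimal Python sketch;
the virtual host is an assumption to adapt:

.. code-block:: python

    import subprocess

    def queue_lengths(vhost='/'):
        """Map queue name -> message count via ``rabbitmqctl -q``."""
        out = subprocess.check_output(
            ['rabbitmqctl', '-q', '-p', vhost,
             'list_queues', 'name', 'messages'])
        lengths = {}
        for line in out.decode().splitlines():
            # ``-q`` output is tab-separated: "<name>\t<messages>"
            name, messages = line.split('\t')
            lengths[name] = int(messages)
        return lengths

    print(queue_lengths())
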
.. _monitoring-redis:

Redis
=====

If you're using Redis as the broker, you can monitor the Celery cluster using
the `redis-cli(1)` command to list lengths of queues.

.. _monitoring-redis-queues:

Inspecting queues
-----------------

Finding the number of tasks in a queue::

    $ redis-cli -h HOST -p PORT -n DATABASE_NUMBER llen QUEUE_NAME

The default queue is named `celery`. To get all available queues, invoke::

    $ redis-cli -h HOST -p PORT -n DATABASE_NUMBER keys \*

.. note::

    If a list has no elements in Redis, it doesn't exist. Hence it won't
    show up in the `keys` command output, and `llen` for that list returns 0.

    On the other hand, if you're also using Redis for other purposes, the
    output of the `keys` command will include unrelated values stored in the
    database. The recommended way around this is to use a dedicated
    `DATABASE_NUMBER` for Celery.

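The same checks can be done from Python with the `redis-py` client.
A minimal sketch, assuming the broker runs on localhost, database 0,
with the default `celery` queue:

.. code-block:: python

    import redis  # pip install redis

    # Assumed connection details; match them to your broker settings.
    r = redis.Redis(host='localhost', port=6379, db=0)

    # Number of tasks waiting in the default queue.
    print(r.llen('celery'))

    # All keys in this database (includes non-Celery keys if shared).
    print(r.keys('*'))
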
.. _monitoring-munin:

Munin
=====

This is a list of known Munin plug-ins that can be useful when
maintaining a Celery cluster.

* rabbitmq-munin: Munin plug-ins for RabbitMQ.

  http://github.com/ask/rabbitmq-munin

* celery_tasks: Monitors the number of times each task type has
  been executed (requires `celerymon`).

  http://exchange.munin-monitoring.org/plugins/celery_tasks-2/details

* celery_task_states: Monitors the number of tasks in each state
  (requires `celerymon`).

  http://exchange.munin-monitoring.org/plugins/celery_tasks/details

.. _monitoring-events:

Events
======

The worker has the ability to send a message whenever some event
happens. These events are then captured by tools like :program:`celerymon`
and :program:`celery events` to monitor the cluster.

.. _monitoring-snapshots:

Snapshots
---------

.. versionadded:: 2.1

Even a single worker can produce a huge amount of events, so storing
the history of all events on disk may be very expensive.

A sequence of events describes the cluster state in that time period;
by taking periodic snapshots of this state we can keep all history, but
still only periodically write it to disk.

To take snapshots you need a Camera class; with this you can define
what should happen every time the state is captured. You can
write it to a database, send it by email, or something else entirely.

:program:`celery events` is then used to take snapshots with the camera.
For example, if you want to capture state every 2 seconds using the
camera ``myapp.Camera`` you run :program:`celery events` with the following
arguments::

    $ celery events -c myapp.Camera --frequency=2.0

.. _monitoring-camera:

Custom Camera
~~~~~~~~~~~~~

Here is an example camera, dumping the snapshot to screen:

.. code-block:: python

    from pprint import pformat

    from celery.events.snapshot import Polaroid

    class DumpCam(Polaroid):

        def on_shutter(self, state):
            if not state.event_count:
                # No new events since last snapshot.
                return
            print('Workers: {0}'.format(pformat(state.workers, indent=4)))
            print('Tasks: {0}'.format(pformat(state.tasks, indent=4)))
            print('Total: {0.event_count} events, {0.task_count} tasks'.format(
                state))

See the API reference for :mod:`celery.events.state` to read more
about state objects.

Now you can use this cam with :program:`celery events` by specifying
it with the `-c` option::

    $ celery events -c myapp.DumpCam --frequency=2.0

Or you can use it programmatically like this::

    from celery.events import EventReceiver
    from celery.messaging import establish_connection
    from celery.events.state import State
    from myapp import DumpCam

    def main():
        state = State()
        with establish_connection() as connection:
            recv = EventReceiver(connection, handlers={'*': state.event})
            with DumpCam(state, freq=1.0):
                recv.capture(limit=None, timeout=None)

    if __name__ == '__main__':
        main()

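Since a camera can do anything with the captured state, persisting
snapshots is just another ``on_shutter`` implementation. Here is a sketch
of a hypothetical camera appending per-snapshot task counts to an SQLite
file; the database path and schema are illustrative, not part of Celery:

.. code-block:: python

    import sqlite3
    import time

    from celery.events.snapshot import Polaroid

    class SQLiteCam(Polaroid):
        """Persist event/task counts on every snapshot (illustrative)."""

        def on_shutter(self, state):
            if not state.event_count:
                # No new events since last snapshot.
                return
            conn = sqlite3.connect('snapshots.db')  # assumed path
            with conn:  # commits on success
                conn.execute(
                    'CREATE TABLE IF NOT EXISTS snapshots '
                    '(taken_at REAL, events INTEGER, tasks INTEGER)')
                conn.execute(
                    'INSERT INTO snapshots VALUES (?, ?, ?)',
                    (time.time(), state.event_count, state.task_count))
            conn.close()
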
.. _event-reference:

Event Reference
---------------

This list contains the events sent by the worker, and their arguments.

.. _event-reference-task:

Task Events
~~~~~~~~~~~

* ``task-sent(uuid, name, args, kwargs, retries, eta, expires,
  queue, exchange, routing_key)``

  Sent when a task message is published and
  the :setting:`CELERY_SEND_TASK_SENT_EVENT` setting is enabled.

* ``task-received(uuid, name, args, kwargs, retries, eta, hostname,
  timestamp)``

  Sent when the worker receives a task.

* ``task-started(uuid, hostname, timestamp, pid)``

  Sent just before the worker executes the task.

* ``task-succeeded(uuid, result, runtime, hostname, timestamp)``

  Sent if the task executed successfully.

  Runtime is the time it took to execute the task using the pool
  (starting when the task is sent to the worker pool, and ending when the
  pool result handler callback is called).

* ``task-failed(uuid, exception, traceback, hostname, timestamp)``

  Sent if the execution of the task failed.

* ``task-revoked(uuid, terminated, signum, expired)``

  Sent if the task has been revoked (note that this is likely
  to be sent by more than one worker).

  - ``terminated`` is set to true if the task process was terminated,
    and the ``signum`` field is set to the signal used.
  - ``expired`` is set to true if the task expired.

* ``task-retried(uuid, exception, traceback, hostname, timestamp)``

  Sent if the task failed, but will be retried in the future.

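Handlers for individual event types can be registered with the same
``EventReceiver`` shown earlier. A minimal sketch that prints every
failed task, using only the fields documented above:

.. code-block:: python

    from celery.events import EventReceiver
    from celery.messaging import establish_connection

    def on_task_failed(event):
        # Each event is a mapping carrying the documented fields.
        print('Task %s on %s failed: %s' % (
            event['uuid'], event['hostname'], event['exception']))

    def main():
        with establish_connection() as connection:
            recv = EventReceiver(connection, handlers={
                'task-failed': on_task_failed,
            })
            recv.capture(limit=None, timeout=None)

    if __name__ == '__main__':
        main()
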
.. _event-reference-worker:

Worker Events
~~~~~~~~~~~~~

* ``worker-online(hostname, timestamp, freq, sw_ident, sw_ver, sw_sys)``

  The worker has connected to the broker and is online.

  * `hostname`: Hostname of the worker.
  * `timestamp`: Event timestamp.
  * `freq`: Heartbeat frequency in seconds (float).
  * `sw_ident`: Name of worker software (e.g. ``py-celery``).
  * `sw_ver`: Software version (e.g. 2.2.0).
  * `sw_sys`: Operating System (e.g. Linux, Windows, Darwin).

* ``worker-heartbeat(hostname, timestamp, freq, sw_ident, sw_ver, sw_sys)``

  Sent every minute; if the worker has not sent a heartbeat in 2 minutes,
  it is considered to be offline.

* ``worker-offline(hostname, timestamp, freq, sw_ident, sw_ver, sw_sys)``

  The worker has disconnected from the broker.

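The heartbeat rule above (a worker is considered offline after 2 minutes
of silence) can be implemented on top of these events. A minimal sketch
reusing the ``EventReceiver`` pattern from earlier; the 120-second
threshold mirrors the rule stated above:

.. code-block:: python

    import time

    from celery.events import EventReceiver
    from celery.messaging import establish_connection

    #: hostname -> time of the last heartbeat/online event seen.
    last_seen = {}

    def on_worker_event(event):
        last_seen[event['hostname']] = time.time()
        # Flag any worker that has been silent for more than 2 minutes.
        for hostname, seen in last_seen.items():
            if time.time() - seen > 120:
                print('%s is considered offline' % hostname)

    def main():
        with establish_connection() as connection:
            recv = EventReceiver(connection, handlers={
                'worker-online': on_worker_event,
                'worker-heartbeat': on_worker_event,
            })
            recv.capture(limit=None, timeout=None)

    if __name__ == '__main__':
        main()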