.. _guide-monitoring:

=================================
 Monitoring and Management Guide
=================================

.. contents::
    :local:

Introduction
============

There are several tools available to monitor and inspect Celery clusters.

This document describes some of these, as well as
features related to monitoring, like events and broadcast commands.
.. _monitoring-workers:

Workers
=======

.. _monitoring-celeryctl:

``celery``: Management Command-line Utilities
---------------------------------------------

.. versionadded:: 2.1

:program:`celery` can also be used to inspect
and manage worker nodes (and to some degree tasks).

To list all the commands available do:

.. code-block:: bash

    $ celery help

or to get help for a specific command do:

.. code-block:: bash

    $ celery <command> --help

Commands
~~~~~~~~

* **shell**: Drop into a Python shell.

  The locals will include the ``celery`` variable, which is the current app.
  Also all known tasks will be automatically added to locals (unless the
  ``--without-tasks`` flag is set).

  Uses IPython, bpython, or regular Python, in that order, if installed.
  You can force an implementation using ``--force-ipython|-I``,
  ``--force-bpython|-B``, or ``--force-python|-P``.

* **status**: List active nodes in this cluster

  .. code-block:: bash

      $ celery status

* **result**: Show the result of a task

  .. code-block:: bash

      $ celery result -t tasks.add 4e196aa4-0141-4601-8138-7aa33db0f577

  Note that you can omit the name of the task as long as the
  task doesn't use a custom result backend.
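  For example, with the default result backend the task name can be
  left out entirely:

  .. code-block:: bash

      $ celery result 4e196aa4-0141-4601-8138-7aa33db0f577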
* **purge**: Purge messages from all configured task queues.

  .. code-block:: bash

      $ celery purge

  .. warning::

      There is no undo for this operation, and messages will
      be permanently deleted!

* **inspect active**: List active tasks

  .. code-block:: bash

      $ celery inspect active

  These are all the tasks that are currently being executed.

* **inspect scheduled**: List scheduled ETA tasks

  .. code-block:: bash

      $ celery inspect scheduled

  These are tasks reserved by the worker because they have the
  `eta` or `countdown` argument set.

* **inspect reserved**: List reserved tasks

  .. code-block:: bash

      $ celery inspect reserved

  This will list all tasks that have been prefetched by the worker,
  and are currently waiting to be executed (doesn't include tasks
  with an ETA).

* **inspect revoked**: List history of revoked tasks

  .. code-block:: bash

      $ celery inspect revoked

* **inspect registered**: List registered tasks

  .. code-block:: bash

      $ celery inspect registered

* **inspect stats**: Show worker statistics

  .. code-block:: bash

      $ celery inspect stats

* **control enable_events**: Enable events

  .. code-block:: bash

      $ celery control enable_events

* **control disable_events**: Disable events

  .. code-block:: bash

      $ celery control disable_events

* **migrate**: Migrate tasks from one broker to another (**EXPERIMENTAL**).

  .. code-block:: bash

      $ celery migrate redis://localhost amqp://localhost

  This command will migrate all the tasks on one broker to another.
  As this command is new and experimental you should be sure to have
  a backup of the data before proceeding.

.. note::

    All ``inspect`` commands support a ``--timeout`` argument;
    this is the number of seconds to wait for responses.
    You may have to increase this timeout if you're not getting a response
    due to latency.
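For example, to wait up to 10 seconds for replies (the value here is
just an illustration):

.. code-block:: bash

    $ celery inspect stats --timeout=10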
.. _celeryctl-inspect-destination:

Specifying destination nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default the inspect commands operate on all workers.
You can specify a single worker, or a list of workers, by using the
`--destination` argument:

.. code-block:: bash

    $ celery inspect -d w1,w2 reserved
.. _monitoring-flower:

Celery Flower: Web interface
----------------------------

Celery Flower is a web-based, real-time monitor and administration tool.

Features
~~~~~~~~

- Shutdown or restart workers
- View worker status (completed and running tasks, etc.)
- View worker pool options (timeouts, processes, etc.)
- Control worker pool size
- View message broker options
- View active queues, add or cancel queues
- View processed task stats by type
- View currently running tasks
- View scheduled tasks
- View reserved and revoked tasks
- Apply time and rate limits
- View all active configuration options
- View all tasks (by type, by worker, etc.)
- View all task options (arguments, start time, runtime, etc.)
- Revoke or terminate tasks
- View real-time execution graphs

**Screenshots**

.. figure:: ../images/dashboard.png
    :width: 700px

.. figure:: ../images/monitor.png
    :width: 700px

More screenshots_:

.. _screenshots: https://github.com/mher/flower/tree/master/docs/screenshots

Usage
~~~~~

Install Celery Flower:

.. code-block:: bash

    $ pip install flower

Launch Celery Flower and open http://localhost:5555 in your browser:

.. code-block:: bash

    $ celery flower
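If the default port is taken, Flower also accepts a ``--port`` option
(check ``celery flower --help`` for the options your version supports):

.. code-block:: bash

    $ celery flower --port=5555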
.. _monitoring-django-admin:

Django Admin Monitor
--------------------

.. versionadded:: 2.1

When you add `django-celery`_ to your Django project you will
automatically get a monitor section as part of the Django admin interface.

This can also be used if you're not using Celery with a Django project.

*Screenshot*

.. figure:: ../images/djangoceleryadmin2.jpg
    :width: 700px

.. _`django-celery`: http://pypi.python.org/pypi/django-celery

.. _monitoring-django-starting:

Starting the monitor
~~~~~~~~~~~~~~~~~~~~

The Celery section will already be present in your admin interface,
but you won't see any data appearing until you start the snapshot camera.

The camera takes snapshots of the events your workers send at regular
intervals, storing them in your database (see :ref:`monitoring-snapshots`).

To start the camera run:

.. code-block:: bash

    $ python manage.py celerycam

If you haven't already enabled the sending of events you need to do so:

.. code-block:: bash

    $ python manage.py celery control enable_events

:Tip: You can enable events when the worker starts using the `-E` argument.
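For example (a sketch, assuming you start workers with django-celery's
``manage.py celery`` command; adjust to how you normally start them):

.. code-block:: bash

    $ python manage.py celery worker --loglevel=info -E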
Now that the camera has been started, and events have been enabled,
you should be able to see your workers and the tasks in the admin interface
(it may take some time for workers to show up).

The admin interface shows tasks, worker nodes, and even
lets you perform some actions, like revoking and rate limiting tasks,
or shutting down worker nodes.

.. _monitoring-django-frequency:

Shutter frequency
~~~~~~~~~~~~~~~~~

By default the camera takes a snapshot every second. If this is too frequent,
or you want higher precision, you can change the frequency using the
``--frequency`` argument. This is a float describing how often, in seconds,
it should wake up to check if there are any new events:

.. code-block:: bash

    $ python manage.py celerycam --frequency=3.0

The camera also supports rate limiting using the ``--maxrate`` argument.
While the frequency controls how often the camera thread wakes up,
the rate limit controls how often it will actually take a snapshot.

The rate limits can be specified in seconds, minutes or hours
by appending `/s`, `/m` or `/h` to the value.
Example: ``--maxrate=100/m`` means "one hundred writes a minute".

The rate limit is off by default, which means the camera will take a snapshot
every ``--frequency`` seconds.
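For example, to wake up ten times a second but write at most one hundred
snapshots per minute (illustrative values):

.. code-block:: bash

    $ python manage.py celerycam --frequency=0.1 --maxrate=100/m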
The events also expire after some time, so the database doesn't fill up.
Successful tasks are deleted after 1 day, failed tasks after 3 days,
and tasks in other states after 5 days.

.. _monitoring-django-reset:

Resetting monitor data
~~~~~~~~~~~~~~~~~~~~~~

To reset the monitor data you need to clear out two models::

    >>> from djcelery.models import WorkerState, TaskState

    # delete worker history
    >>> WorkerState.objects.all().delete()

    # delete task history
    >>> TaskState.objects.all().update(hidden=True)
    >>> TaskState.objects.purge()

.. _monitoring-django-expiration:

Expiration
~~~~~~~~~~

By default monitor data for successful tasks will expire in 1 day,
failed tasks in 3 days, and pending tasks in 5 days.

You can change the expiry times for each of these by adding
the following settings to your :file:`settings.py`:

.. code-block:: python

    from datetime import timedelta

    CELERYCAM_EXPIRE_SUCCESS = timedelta(hours=1)
    CELERYCAM_EXPIRE_ERROR = timedelta(hours=2)
    CELERYCAM_EXPIRE_PENDING = timedelta(hours=2)
.. _monitoring-nodjango:

Using outside of Django
~~~~~~~~~~~~~~~~~~~~~~~

`django-celery` also installs the :program:`djcelerymon` program. This
can be used by non-Django users, and runs both a web server and a snapshot
camera in the same process.

**Installing**

Using :program:`pip`:

.. code-block:: bash

    $ pip install -U django-celery

or using :program:`easy_install`:

.. code-block:: bash

    $ easy_install -U django-celery

**Running**

:program:`djcelerymon` reads configuration from your Celery configuration
module, and sets up the Django environment using the same settings:

.. code-block:: bash

    $ djcelerymon

Database tables will be created the first time the monitor is run.
By default an `sqlite3` database file named
:file:`djcelerymon.db` is used, so make sure this file is writable by the
user running the monitor.

If you want to store the events in a different database, e.g. MySQL,
then you can configure the `DATABASE*` settings directly in your Celery
config module. See http://docs.djangoproject.com/en/dev/ref/settings/#databases
for more information about the database options available.
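A minimal sketch, assuming a Django version that uses the ``DATABASES``
dict (the engine, names and credentials below are placeholders):

.. code-block:: python

    # celeryconfig.py (hypothetical values)
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'monitor_events',
            'USER': 'monitor',
            'PASSWORD': 'secret',
            'HOST': 'localhost',
        },
    }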
You will also be asked to create a superuser (and you need to create one
to be able to log into the admin later)::

    Creating table auth_permission
    Creating table auth_group_permissions
    [...]

    You just installed Django's auth system, which means you don't
    have any superusers defined.  Would you like to create
    one now? (yes/no): yes
    Username (Leave blank to use 'username'): username
    Email address: me@example.com
    Password: ******
    Password (again): ******
    Superuser created successfully.

    [...]
    Django version 1.2.1, using settings 'celeryconfig'
    Development server is running at http://127.0.0.1:8000/
    Quit the server with CONTROL-C.

Now that the service is started you can visit the monitor
at http://127.0.0.1:8000, and log in using the user you created.

For a list of the command-line options supported by :program:`djcelerymon`,
please see ``djcelerymon --help``.
.. _monitoring-celeryev:

celery events: Curses Monitor
-----------------------------

.. versionadded:: 2.0

`celery events` is a simple curses monitor displaying
task and worker history. You can inspect the result and traceback of tasks,
and it also supports some management commands like rate limiting and shutting
down workers.

Starting:

.. code-block:: bash

    $ celery events

You should see a screen like:

.. figure:: ../images/celeryevshotsm.jpg

`celery events` is also used to start snapshot cameras (see
:ref:`monitoring-snapshots`):

.. code-block:: bash

    $ celery events --camera=<camera-class> --frequency=1.0

and it includes a tool to dump events to :file:`stdout`:

.. code-block:: bash

    $ celery events --dump

For a complete list of options use ``--help``:

.. code-block:: bash

    $ celery events --help
.. _monitoring-celerymon:

celerymon: Web monitor
----------------------

`celerymon`_ is the ongoing work to create a web monitor.
It's far from complete yet, and currently only supports
a JSON API. Help is desperately needed for this project, so if you,
or someone you know, would like to contribute templates, design, code
or help this project in any way, please get in touch!

:Tip: The Django admin monitor can be used even though you're not using
      Celery with a Django project. See :ref:`monitoring-nodjango`.

.. _`celerymon`: http://github.com/celery/celerymon/
.. _monitoring-rabbitmq:

RabbitMQ
========

To manage a Celery cluster it is important to know how
RabbitMQ can be monitored.

RabbitMQ ships with the `rabbitmqctl(1)`_ command;
with this you can list queues, exchanges, bindings,
queue lengths, and the memory usage of each queue, as well
as manage users, virtual hosts and their permissions.

.. note::

    The default virtual host (``"/"``) is used in these
    examples. If you use a custom virtual host you have to add
    the ``-p`` argument to the command, e.g.:
    ``rabbitmqctl list_queues -p my_vhost ...``

.. _`rabbitmqctl(1)`: http://www.rabbitmq.com/man/rabbitmqctl.1.man.html

.. _monitoring-rmq-queues:

Inspecting queues
-----------------

Finding the number of tasks in a queue:

.. code-block:: bash

    $ rabbitmqctl list_queues name messages messages_ready \
        messages_unacknowledged

Here `messages_ready` is the number of messages ready
for delivery (sent but not received), and `messages_unacknowledged`
is the number of messages that have been received by a worker but
not acknowledged yet (meaning they are in progress, or have been reserved).
`messages` is the sum of ready and unacknowledged messages.

Finding the number of workers currently consuming from a queue:

.. code-block:: bash

    $ rabbitmqctl list_queues name consumers

Finding the amount of memory allocated to a queue:

.. code-block:: bash

    $ rabbitmqctl list_queues name memory

:Tip: Adding the ``-q`` option to `rabbitmqctl(1)`_ makes the output
      easier to parse.
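For example, a machine-friendly listing of queue lengths:

.. code-block:: bash

    $ rabbitmqctl -q list_queues name messages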
.. _monitoring-redis:

Redis
=====

If you're using Redis as the broker, you can monitor the Celery cluster using
the `redis-cli(1)` command to list lengths of queues.

.. _monitoring-redis-queues:

Inspecting queues
-----------------

Finding the number of tasks in a queue:

.. code-block:: bash

    $ redis-cli -h HOST -p PORT -n DATABASE_NUMBER llen QUEUE_NAME

The default queue is named `celery`. To get all available queues, invoke:

.. code-block:: bash

    $ redis-cli -h HOST -p PORT -n DATABASE_NUMBER keys \*

.. note::

    If a list has no elements in Redis, it doesn't exist. Hence it won't
    show up in the `keys` command output, and `llen` for that list returns
    0 in that case.

    On the other hand, if you're also using Redis for other purposes, the
    output of the `keys` command will include unrelated values stored in
    the database. The recommended way around this is to use a dedicated
    `DATABASE_NUMBER` for Celery.
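For example, using database number 1 exclusively for Celery (the number is
arbitrary) to check the length of the default queue:

.. code-block:: bash

    $ redis-cli -n 1 llen celery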
.. _monitoring-munin:

Munin
=====

This is a list of known Munin plug-ins that can be useful when
maintaining a Celery cluster.

* rabbitmq-munin: Munin plug-ins for RabbitMQ.

    http://github.com/ask/rabbitmq-munin

* celery_tasks: Monitors the number of times each task type has
  been executed (requires `celerymon`).

    http://exchange.munin-monitoring.org/plugins/celery_tasks-2/details

* celery_task_states: Monitors the number of tasks in each state
  (requires `celerymon`).

    http://exchange.munin-monitoring.org/plugins/celery_tasks/details
.. _monitoring-events:

Events
======

The worker has the ability to send a message whenever some event
happens. These events are then captured by tools like :program:`celerymon`
and :program:`celery events` to monitor the cluster.

.. _monitoring-snapshots:

Snapshots
---------

.. versionadded:: 2.1

Even a single worker can produce a huge amount of events, so storing
the history of all events on disk may be very expensive.

A sequence of events describes the cluster state in that time period;
by taking periodic snapshots of this state you can keep all history, but
still only write it to disk periodically.

To take snapshots you need a Camera class, with which you can define
what should happen every time the state is captured. You can
write it to a database, send it by email, or something else entirely.

:program:`celery events` is then used to take snapshots with the camera.
For example, if you want to capture state every 2 seconds using the
camera ``myapp.Camera`` you run :program:`celery events` with the following
arguments:

.. code-block:: bash

    $ celery events -c myapp.Camera --frequency=2.0
.. _monitoring-camera:

Custom Camera
~~~~~~~~~~~~~

Cameras can be useful if you need to capture events and do something
with those events at an interval. For real-time event processing
you should use :class:`@events.Receiver` directly, like in
:ref:`event-real-time-example`.

Here is an example camera, dumping the snapshot to screen:

.. code-block:: python

    from pprint import pformat

    from celery.events.snapshot import Polaroid

    class DumpCam(Polaroid):

        def on_shutter(self, state):
            if not state.event_count:
                # No new events since last snapshot.
                return
            print('Workers: {0}'.format(pformat(state.workers, indent=4)))
            print('Tasks: {0}'.format(pformat(state.tasks, indent=4)))
            print('Total: {0.event_count} events, {0.task_count} tasks'.format(
                state))

See the API reference for :mod:`celery.events.state` to read more
about state objects.

Now you can use this cam with :program:`celery events` by specifying
it with the :option:`-c` option:

.. code-block:: bash

    $ celery events -c myapp.DumpCam --frequency=2.0
Or you can use it programmatically like this:

.. code-block:: python

    from celery import Celery
    from myapp import DumpCam

    def main(app, freq=1.0):
        state = app.events.State()
        with app.connection() as connection:
            recv = app.events.Receiver(connection, handlers={'*': state.event})
            with DumpCam(state, freq=freq):
                recv.capture(limit=None, timeout=None)

    if __name__ == '__main__':
        celery = Celery(broker='amqp://guest@localhost//')
        main(celery)
.. _event-real-time-example:

Real-time processing
--------------------

To process events in real-time you need the following:

- An event consumer (this is the ``Receiver``)

- A set of handlers called when events come in.

  You can have different handlers for each event type,
  or a catch-all handler can be used ('*').

- State (optional)

  :class:`@events.State` is a convenient in-memory representation
  of tasks and workers in the cluster that is updated as events come in.

  It encapsulates solutions for many common things, like checking if a
  worker is still alive (by verifying heartbeats), merging event fields
  together as events come in, making sure timestamps are in sync, and so on.

Combining these you can easily process events in real-time:

.. code-block:: python

    from celery import Celery

    def monitor_events(app):
        state = app.events.State()

        def on_event(event):
            state.event(event)  # <-- updates in-memory cluster state

            # state.workers maps hostname -> Worker, so iterate the values.
            print('Workers online: %r' % ', '.join(
                worker.hostname for worker in state.workers.values()
                if worker.alive
            ))

        with app.connection() as connection:
            recv = app.events.Receiver(connection, handlers={'*': on_event})
            recv.capture(limit=None, timeout=None, wakeup=True)

.. note::

    The ``wakeup`` argument to ``capture`` sends a signal to all workers
    to force them to send a heartbeat. This way you can immediately see
    workers when the monitor starts.
You can listen to specific events by specifying the handlers:

.. code-block:: python

    from celery import Celery

    def my_monitor(app):
        state = app.events.State()

        def announce_failed_tasks(event):
            state.event(event)
            # The task name is only sent with the task-received event
            # (see the event reference below), but State keeps track
            # of it for us.
            task = state.tasks.get(event['uuid'])
            print('TASK FAILED: %s[%s] %s' % (
                task.name, task.uuid, task.info(),))

        def announce_dead_workers(event):
            state.event(event)
            hostname = event['hostname']
            if not state.workers[hostname].alive:
                print('Worker %s missed heartbeats' % (hostname,))

        with app.connection() as connection:
            recv = app.events.Receiver(connection, handlers={
                'task-failed': announce_failed_tasks,
                'worker-heartbeat': announce_dead_workers,
            })
            recv.capture(limit=None, timeout=None, wakeup=True)

    if __name__ == '__main__':
        celery = Celery(broker='amqp://guest@localhost//')
        my_monitor(celery)
.. _event-reference:

Event Reference
===============

This list contains the events sent by the worker, and their arguments.

.. _event-reference-task:

Task Events
-----------

.. event:: task-sent

task-sent
~~~~~~~~~

:signature: ``task-sent(uuid, name, args, kwargs, retries, eta, expires,
              queue, exchange, routing_key)``

Sent when a task message is published and
the :setting:`CELERY_SEND_TASK_SENT_EVENT` setting is enabled.
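A minimal configuration snippet enabling this, in your Celery configuration
module:

.. code-block:: python

    CELERY_SEND_TASK_SENT_EVENT = True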
.. event:: task-received

task-received
~~~~~~~~~~~~~

:signature: ``task-received(uuid, name, args, kwargs, retries, eta, hostname,
              timestamp)``

Sent when the worker receives a task.

.. event:: task-started

task-started
~~~~~~~~~~~~

:signature: ``task-started(uuid, hostname, timestamp, pid)``

Sent just before the worker executes the task.

.. event:: task-succeeded

task-succeeded
~~~~~~~~~~~~~~

:signature: ``task-succeeded(uuid, result, runtime, hostname, timestamp)``

Sent if the task executed successfully.

Runtime is the time it took to execute the task using the pool
(starting from the task being sent to the worker pool, and ending when the
pool result handler callback is called).
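As a sketch, a handler for this event (wired up through a ``Receiver`` as in
:ref:`event-real-time-example`) could log runtimes:

.. code-block:: python

    def announce_succeeded_tasks(event):
        # 'uuid' and 'runtime' are fields of the task-succeeded event.
        print('Task %s finished in %.3f seconds' % (
            event['uuid'], event['runtime']))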
.. event:: task-failed

task-failed
~~~~~~~~~~~

:signature: ``task-failed(uuid, exception, traceback, hostname, timestamp)``

Sent if the execution of the task failed.

.. event:: task-revoked

task-revoked
~~~~~~~~~~~~

:signature: ``task-revoked(uuid, terminated, signum, expired)``

Sent if the task has been revoked (note that this is likely
to be sent by more than one worker).

- ``terminated`` is set to true if the task process was terminated,
  and the ``signum`` field is set to the signal used.
- ``expired`` is set to true if the task expired.

.. event:: task-retried

task-retried
~~~~~~~~~~~~

:signature: ``task-retried(uuid, exception, traceback, hostname, timestamp)``

Sent if the task failed, but will be retried in the future.
.. _event-reference-worker:

Worker Events
-------------

.. event:: worker-online

worker-online
~~~~~~~~~~~~~

:signature: ``worker-online(hostname, timestamp, freq, sw_ident, sw_ver, sw_sys)``

The worker has connected to the broker and is online.

- `hostname`: Hostname of the worker.
- `timestamp`: Event timestamp.
- `freq`: Heartbeat frequency in seconds (float).
- `sw_ident`: Name of worker software (e.g. ``py-celery``).
- `sw_ver`: Software version (e.g. 2.2.0).
- `sw_sys`: Operating System (e.g. Linux, Windows, Darwin).

.. event:: worker-heartbeat

worker-heartbeat
~~~~~~~~~~~~~~~~

:signature: ``worker-heartbeat(hostname, timestamp, freq, sw_ident, sw_ver, sw_sys,
              active, processed)``

Sent every minute. If the worker hasn't sent a heartbeat in 2 minutes,
it is considered to be offline.

- `hostname`: Hostname of the worker.
- `timestamp`: Event timestamp.
- `freq`: Heartbeat frequency in seconds (float).
- `sw_ident`: Name of worker software (e.g. ``py-celery``).
- `sw_ver`: Software version (e.g. 2.2.0).
- `sw_sys`: Operating System (e.g. Linux, Windows, Darwin).
- `active`: Number of currently executing tasks.
- `processed`: Total number of tasks processed by this worker.
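A minimal sketch of the liveness rule described above, assuming you track
the timestamp of the last heartbeat yourself (:class:`@events.State` already
does this for you via ``worker.alive``):

.. code-block:: python

    import time

    HEARTBEAT_EXPIRY = 120  # seconds; the 2-minute rule described above

    def is_alive(last_heartbeat):
        # 'last_heartbeat' is the 'timestamp' field of the most recent
        # worker-heartbeat (or worker-online) event for this worker.
        return time.time() - last_heartbeat < HEARTBEAT_EXPIRY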
.. event:: worker-offline

worker-offline
~~~~~~~~~~~~~~

:signature: ``worker-offline(hostname, timestamp, freq, sw_ident, sw_ver, sw_sys)``

The worker has disconnected from the broker.