
.. _guide-extending:

==========================
 Extensions and Bootsteps
==========================

.. contents::
    :local:
    :depth: 2

.. _extending-custom-consumers:

Custom Message Consumers
========================

You may want to embed custom Kombu consumers to manually process your messages.

For that purpose a special :class:`~celery.bootsteps.ConsumerStep` bootstep class
exists, where you only need to define the ``get_consumers`` method that must
return a list of :class:`kombu.Consumer` objects to start
whenever the connection is established:

.. code-block:: python

    from celery import Celery
    from celery import bootsteps
    from kombu import Consumer, Exchange, Queue

    my_queue = Queue('custom', Exchange('custom'), 'routing_key')

    app = Celery(broker='amqp://')


    class MyConsumerStep(bootsteps.ConsumerStep):

        def get_consumers(self, channel):
            return [Consumer(channel,
                             queues=[my_queue],
                             callbacks=[self.handle_message],
                             accept=['json'])]

        def handle_message(self, body, message):
            print('Received message: {0!r}'.format(body))
            message.ack()

    app.steps['consumer'].add(MyConsumerStep)

    def send_me_a_message(who='world!', producer=None):
        with app.producer_or_acquire(producer) as producer:
            producer.publish(
                {'hello': who},
                serializer='json',
                exchange=my_queue.exchange,
                routing_key='routing_key',
                declare=[my_queue],
                retry=True,
            )

    if __name__ == '__main__':
        send_me_a_message('celery')

.. note::

    Kombu Consumers can make use of two different message callback dispatching
    mechanisms. The first one is the ``callbacks`` argument that accepts
    a list of callbacks with a ``(body, message)`` signature; the second one
    is the ``on_message`` argument that takes a single callback with a
    ``(message,)`` signature. The latter won't automatically decode and
    deserialize the payload.

    .. code-block:: python

        def get_consumers(self, channel):
            return [Consumer(channel, queues=[my_queue],
                             on_message=self.on_message)]

        def on_message(self, message):
            payload = message.decode()
            print(
                'Received message: {0!r} {props!r} rawlen={s}'.format(
                payload, props=message.properties, s=len(message.body),
            ))
            message.ack()

.. _extending-blueprints:

Blueprints
==========

Bootsteps is a technique to add functionality to the workers.
A bootstep is a custom class that defines hooks to do custom actions
at different stages in the worker. Every bootstep belongs to a blueprint,
and the worker currently defines two blueprints: **Worker**, and **Consumer**.

----------------------------------------------------------

**Figure A:** Bootsteps in the Worker and Consumer blueprints. Starting
from the bottom up the first step in the worker blueprint
is the Timer, and the last step is to start the Consumer blueprint,
which then establishes the broker connection and starts
consuming messages.

.. figure:: ../images/worker_graph_full.png

----------------------------------------------------------

.. _extending-worker_blueprint:

Worker
======

The Worker is the first blueprint to start, and with it starts major components like
the event loop, processing pool, and the timer used for ETA tasks and other
timed events.

When the worker is fully started it continues with the Consumer blueprint,
which sets up how tasks are executed, connects to the broker, and starts
the message consumers.

The :class:`~celery.worker.WorkController` is the core worker implementation,
and contains several methods and attributes that you can use in your bootstep.

.. _extending-worker_blueprint-attributes:

Attributes
----------

.. _extending-worker-app:

.. attribute:: app

    The current app instance.

.. _extending-worker-hostname:

.. attribute:: hostname

    The worker's node name (e.g., `worker1@example.com`)

.. _extending-worker-blueprint:

.. attribute:: blueprint

    This is the worker :class:`~celery.bootsteps.Blueprint`.

.. _extending-worker-hub:

.. attribute:: hub

    Event loop object (:class:`~kombu.async.Hub`). You can use
    this to register callbacks in the event loop.

    This is only supported by async I/O enabled transports (amqp, redis),
    in which case the `worker.use_eventloop` attribute should be set.

    Your worker bootstep must require the Hub bootstep to use this:

    .. code-block:: python

        class WorkerStep(bootsteps.StartStopStep):
            requires = ('celery.worker.components:Hub',)

.. _extending-worker-pool:

.. attribute:: pool

    The current process/eventlet/gevent/thread pool.
    See :class:`celery.concurrency.base.BasePool`.

    Your worker bootstep must require the Pool bootstep to use this:

    .. code-block:: python

        class WorkerStep(bootsteps.StartStopStep):
            requires = ('celery.worker.components:Pool',)

.. _extending-worker-timer:

.. attribute:: timer

    :class:`~kombu.async.timer.Timer` used to schedule functions.

    Your worker bootstep must require the Timer bootstep to use this:

    .. code-block:: python

        class WorkerStep(bootsteps.StartStopStep):
            requires = ('celery.worker.components:Timer',)

.. _extending-worker-statedb:

.. attribute:: statedb

    :class:`Database <celery.worker.state.Persistent>` to persist state between
    worker restarts.

    This is only defined if the ``statedb`` argument is enabled.

    Your worker bootstep must require the ``Statedb`` bootstep to use this:

    .. code-block:: python

        class WorkerStep(bootsteps.StartStopStep):
            requires = ('celery.worker.components:Statedb',)

Example worker bootstep
-----------------------

An example Worker bootstep could be:

.. code-block:: python

    from celery import bootsteps


    class ExampleWorkerStep(bootsteps.StartStopStep):
        requires = ('Pool',)

        def __init__(self, worker, **kwargs):
            print('Called when the WorkController instance is constructed')
            print('Arguments to WorkController: {0!r}'.format(kwargs))

        def create(self, worker):
            # this method can be used to delegate the action methods
            # to another object that implements ``start`` and ``stop``.
            return self

        def start(self, worker):
            print('Called when the worker is started.')

        def stop(self, worker):
            print('Called when the worker shuts down.')

        def terminate(self, worker):
            print('Called when the worker terminates')

Every method is passed the current ``WorkController`` instance as the first
argument.

Another example could use the timer to wake up at regular intervals:

.. code-block:: python

    from time import time

    from celery import bootsteps


    class DeadlockDetection(bootsteps.StartStopStep):
        requires = ('Timer',)

        def __init__(self, worker, deadlock_timeout=3600):
            self.timeout = deadlock_timeout
            self.requests = []
            self.tref = None

        def start(self, worker):
            # run every 30 seconds.
            self.tref = worker.timer.call_repeatedly(
                30.0, self.detect, (worker,), priority=10,
            )

        def stop(self, worker):
            if self.tref:
                self.tref.cancel()
                self.tref = None

        def detect(self, worker):
            # update active requests
            for req in worker.active_requests:
                if req.time_start and time() - req.time_start > self.timeout:
                    raise SystemExit()

.. _extending-consumer_blueprint:

Consumer
========

The Consumer blueprint establishes a connection to the broker, and
is restarted every time this connection is lost. Consumer bootsteps
include the worker heartbeat, the remote control command consumer, and
importantly, the task consumer.

When you create consumer bootsteps you must take into account that it must
be possible to restart your blueprint. An additional ``shutdown`` method is
defined for consumer bootsteps; this method is called when the worker
shuts down.

.. _extending-consumer-attributes:

Attributes
----------

.. _extending-consumer-app:

.. attribute:: app

    The current app instance.

.. _extending-consumer-controller:

.. attribute:: controller

    The parent :class:`~@WorkController` object that created this consumer.

.. _extending-consumer-hostname:

.. attribute:: hostname

    The worker's node name (e.g., `worker1@example.com`)

.. _extending-consumer-blueprint:

.. attribute:: blueprint

    This is the worker :class:`~celery.bootsteps.Blueprint`.

.. _extending-consumer-hub:

.. attribute:: hub

    Event loop object (:class:`~kombu.async.Hub`). You can use
    this to register callbacks in the event loop.

    This is only supported by async I/O enabled transports (amqp, redis),
    in which case the `worker.use_eventloop` attribute should be set.

    Your worker bootstep must require the Hub bootstep to use this:

    .. code-block:: python

        class WorkerStep(bootsteps.StartStopStep):
            requires = ('celery.worker:Hub',)

.. _extending-consumer-connection:

.. attribute:: connection

    The current broker connection (:class:`kombu.Connection`).

    A consumer bootstep must require the 'Connection' bootstep
    to use this:

    .. code-block:: python

        class Step(bootsteps.StartStopStep):
            requires = ('celery.worker.consumer:Connection',)

.. _extending-consumer-event_dispatcher:

.. attribute:: event_dispatcher

    A :class:`@events.Dispatcher` object that can be used to send events.

    A consumer bootstep must require the `Events` bootstep to use this.

    .. code-block:: python

        class Step(bootsteps.StartStopStep):
            requires = ('celery.worker.consumer:Events',)
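
    For example, here's a minimal sketch (not from the Celery code base) of a
    step that announces itself with a custom event; the event type
    ``worker-custom-online`` and its ``reason`` field are made-up names:

    .. code-block:: python

        from celery import bootsteps


        class AnnounceStep(bootsteps.StartStopStep):
            requires = ('celery.worker.consumer:Events',)

            def start(self, c):
                # send(type, **fields) publishes an event to the event exchange
                # (assumed custom event type; your monitors must know about it).
                c.event_dispatcher.send('worker-custom-online',
                                        reason='AnnounceStep started')
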
.. _extending-consumer-gossip:

.. attribute:: gossip

    Worker to worker broadcast communication
    (:class:`~celery.worker.consumer.Gossip`).

    A consumer bootstep must require the `Gossip` bootstep to use this.

    .. code-block:: python

        class RatelimitStep(bootsteps.StartStopStep):
            """Rate limit tasks based on the number of workers in the
            cluster."""
            requires = ('celery.worker.consumer:Gossip',)

            def start(self, c):
                self.c = c
                self.c.gossip.on.node_join.add(self.on_cluster_size_change)
                self.c.gossip.on.node_leave.add(self.on_cluster_size_change)
                self.c.gossip.on.node_lost.add(self.on_node_lost)
                self.tasks = [
                    self.c.app.tasks['proj.tasks.add'],
                    self.c.app.tasks['proj.tasks.mul'],
                ]
                self.last_size = None

            def on_cluster_size_change(self, worker=None):
                cluster_size = len(list(self.c.gossip.state.alive_workers()))
                if cluster_size != self.last_size:
                    for task in self.tasks:
                        task.rate_limit = 1.0 / cluster_size
                    self.c.reset_rate_limits()
                    self.last_size = cluster_size

            def on_node_lost(self, worker):
                # may have processed heartbeat too late, so wake up soon
                # in order to see if the worker recovered.
                self.c.timer.call_after(10.0, self.on_cluster_size_change)

    **Callbacks**

    - ``<set> gossip.on.node_join``

        Called whenever a new node joins the cluster, providing a
        :class:`~celery.events.state.Worker` instance.

    - ``<set> gossip.on.node_leave``

        Called whenever a new node leaves the cluster (shuts down),
        providing a :class:`~celery.events.state.Worker` instance.

    - ``<set> gossip.on.node_lost``

        Called whenever heartbeat was missed for a worker instance in the
        cluster (heartbeat not received or processed in time),
        providing a :class:`~celery.events.state.Worker` instance.

        This doesn't necessarily mean the worker is actually offline, so use a
        timeout mechanism if the default heartbeat timeout isn't sufficient.

.. _extending-consumer-pool:

.. attribute:: pool

    The current process/eventlet/gevent/thread pool.
    See :class:`celery.concurrency.base.BasePool`.

.. _extending-consumer-timer:

.. attribute:: timer

    :class:`Timer <celery.utils.timer2.Schedule>` used to schedule functions.

.. _extending-consumer-heart:

.. attribute:: heart

    Responsible for sending worker event heartbeats
    (:class:`~celery.worker.heartbeat.Heart`).

    Your consumer bootstep must require the `Heart` bootstep to use this:

    .. code-block:: python

        class Step(bootsteps.StartStopStep):
            requires = ('celery.worker.consumer:Heart',)

.. _extending-consumer-task_consumer:

.. attribute:: task_consumer

    The :class:`kombu.Consumer` object used to consume task messages.

    Your consumer bootstep must require the `Tasks` bootstep to use this:

    .. code-block:: python

        class Step(bootsteps.StartStopStep):
            requires = ('celery.worker.consumer:Tasks',)

.. _extending-consumer-strategies:

.. attribute:: strategies

    Every registered task type has an entry in this mapping,
    where the value is used to execute an incoming message of this task type
    (the task execution strategy). This mapping is generated by the Tasks
    bootstep when the consumer starts:

    .. code-block:: python

        for name, task in app.tasks.items():
            strategies[name] = task.start_strategy(app, consumer)
            task.__trace__ = celery.app.trace.build_tracer(
                name, task, loader, hostname,
            )

    Your consumer bootstep must require the `Tasks` bootstep to use this:

    .. code-block:: python

        class Step(bootsteps.StartStopStep):
            requires = ('celery.worker.consumer:Tasks',)

.. _extending-consumer-task_buckets:

.. attribute:: task_buckets

    A :class:`~collections.defaultdict` used to look up the rate limit for
    a task by type.
    Entries in this dict may be None (for no limit) or a
    :class:`~kombu.utils.limits.TokenBucket` instance implementing
    ``consume(tokens)`` and ``expected_time(tokens)``.

    TokenBucket implements the `token bucket algorithm`_, but any algorithm
    may be used as long as it conforms to the same interface and defines the
    two methods above.

    .. _`token bucket algorithm`: https://en.wikipedia.org/wiki/Token_bucket
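
    For illustration, here's a hedged sketch of a custom bucket conforming to
    that interface, installed for a single (assumed) task name from a consumer
    bootstep:

    .. code-block:: python

        from celery import bootsteps


        class UnlimitedBucket(object):
            """Hypothetical bucket: always allows consumption, never waits."""

            def consume(self, tokens=1):
                return True

            def expected_time(self, tokens=1):
                return 0.0


        class NoLimitForAdd(bootsteps.StartStopStep):
            requires = ('celery.worker.consumer:Tasks',)

            def start(self, c):
                # 'proj.tasks.add' is an example task name, not part of Celery.
                c.task_buckets['proj.tasks.add'] = UnlimitedBucket()
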
.. _extending_consumer-qos:

.. attribute:: qos

    The :class:`~kombu.common.QoS` object can be used to change the
    task channel's current prefetch_count value, e.g.:

    .. code-block:: python

        # increment at next cycle
        consumer.qos.increment_eventually(1)
        # decrement at next cycle
        consumer.qos.decrement_eventually(1)
        consumer.qos.set(10)

Methods
-------

.. method:: consumer.reset_rate_limits()

    Updates the ``task_buckets`` mapping for all registered task types.

.. method:: consumer.bucket_for_task(type, Bucket=TokenBucket)

    Creates rate limit bucket for a task using its ``task.rate_limit``
    attribute.

.. method:: consumer.add_task_queue(name, exchange=None, exchange_type=None,
                                    routing_key=None, \*\*options):

    Adds new queue to consume from. This will persist on connection restart
    (a sketch follows below).

.. method:: consumer.cancel_task_queue(name)

    Stop consuming from queue by name. This will persist on connection
    restart.

.. method:: apply_eta_task(request)

    Schedule ETA task to execute based on the ``request.eta`` attribute.
    (:class:`~celery.worker.request.Request`)
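
As an example of the queue management methods above, here's a hedged sketch of
a consumer bootstep that subscribes the worker to an additional queue at
startup; the queue name ``extra_queue`` and its routing key are assumptions:

.. code-block:: python

    from celery import bootsteps


    class ExtraQueueStep(bootsteps.StartStopStep):
        requires = ('celery.worker.consumer:Tasks',)

        def start(self, c):
            # start consuming from an extra queue (illustrative name).
            c.add_task_queue('extra_queue', routing_key='extra')

        def stop(self, c):
            # stop consuming from it again when the consumer stops.
            c.cancel_task_queue('extra_queue')
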
.. _extending-bootsteps:

Installing Bootsteps
====================

``app.steps['worker']`` and ``app.steps['consumer']`` can be modified
to add new bootsteps:

.. code-block:: pycon

    >>> app = Celery()
    >>> app.steps['worker'].add(MyWorkerStep)  # < add class, don't instantiate
    >>> app.steps['consumer'].add(MyConsumerStep)

    >>> app.steps['consumer'].update([StepA, StepB])

    >>> app.steps['consumer']
    {step:proj.StepB{()}, step:proj.MyConsumerStep{()}, step:proj.StepA{()}}

The order of steps isn't important here as the order is decided by the
resulting dependency graph (``Step.requires``).
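
As a small sketch of how the dependency graph decides ordering (the step names
here are hypothetical), ``StepB`` below will always start after ``StepA`` no
matter which order they were added in:

.. code-block:: python

    from celery import bootsteps


    class StepA(bootsteps.StartStopStep):

        def start(self, worker):
            print('StepA starts first')


    class StepB(bootsteps.StartStopStep):
        # the dependency is given here as the class itself; name strings
        # (as used elsewhere in this guide) work as well.
        requires = (StepA,)

        def start(self, worker):
            print('StepB starts after StepA')
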
To illustrate how you can install bootsteps and how they work, this is an example step that
prints some useless debugging information.
It can be added both as a worker and consumer bootstep:

.. code-block:: python

    from celery import Celery
    from celery import bootsteps


    class InfoStep(bootsteps.Step):

        def __init__(self, parent, **kwargs):
            # here we can prepare the Worker/Consumer object
            # in any way we want, set attribute defaults, and so on.
            print('{0!r} is in init'.format(parent))

        def start(self, parent):
            # our step is started together with all other Worker/Consumer
            # bootsteps.
            print('{0!r} is starting'.format(parent))

        def stop(self, parent):
            # the Consumer calls stop every time the consumer is restarted
            # (i.e., connection is lost) and also at shutdown. The Worker
            # will call stop at shutdown only.
            print('{0!r} is stopping'.format(parent))

        def shutdown(self, parent):
            # shutdown is called by the Consumer at shutdown, it's not
            # called by Worker.
            print('{0!r} is shutting down'.format(parent))

    app = Celery(broker='amqp://')
    app.steps['worker'].add(InfoStep)
    app.steps['consumer'].add(InfoStep)

Starting the worker with this step installed will give us the following
logs:

.. code-block:: text

    <Worker: w@example.com (initializing)> is in init
    <Consumer: w@example.com (initializing)> is in init
    [2013-05-29 16:18:20,544: WARNING/MainProcess]
        <Worker: w@example.com (running)> is starting
    [2013-05-29 16:18:21,577: WARNING/MainProcess]
        <Consumer: w@example.com (running)> is starting
    <Consumer: w@example.com (closing)> is stopping
    <Worker: w@example.com (closing)> is stopping
    <Consumer: w@example.com (terminating)> is shutting down

The ``print`` statements will be redirected to the logging subsystem after
the worker has been initialized, so the "is starting" lines are time-stamped.
You may notice that this no longer happens at shutdown; this is because
the ``stop`` and ``shutdown`` methods are called inside a *signal handler*,
and it's not safe to use logging inside such a handler.
Logging with the Python logging module isn't :term:`reentrant`,
meaning you cannot interrupt the function and then
call it again later. It's important that the ``stop`` and ``shutdown`` methods
you write are also :term:`reentrant`.

Starting the worker with :option:`--loglevel=debug <celery worker --loglevel>`
will show us more information about the boot process:

.. code-block:: text

    [2013-05-29 16:18:20,509: DEBUG/MainProcess] | Worker: Preparing bootsteps.
    [2013-05-29 16:18:20,511: DEBUG/MainProcess] | Worker: Building graph...
    <celery.apps.worker.Worker object at 0x101ad8410> is in init
    [2013-05-29 16:18:20,511: DEBUG/MainProcess] | Worker: New boot order:
        {Hub, Pool, Timer, StateDB, InfoStep, Beat, Consumer}
    [2013-05-29 16:18:20,514: DEBUG/MainProcess] | Consumer: Preparing bootsteps.
    [2013-05-29 16:18:20,514: DEBUG/MainProcess] | Consumer: Building graph...
    <celery.worker.consumer.Consumer object at 0x101c2d8d0> is in init
    [2013-05-29 16:18:20,515: DEBUG/MainProcess] | Consumer: New boot order:
        {Connection, Mingle, Events, Gossip, InfoStep, Agent,
         Heart, Control, Tasks, event loop}
    [2013-05-29 16:18:20,522: DEBUG/MainProcess] | Worker: Starting Hub
    [2013-05-29 16:18:20,522: DEBUG/MainProcess] ^-- substep ok
    [2013-05-29 16:18:20,522: DEBUG/MainProcess] | Worker: Starting Pool
    [2013-05-29 16:18:20,542: DEBUG/MainProcess] ^-- substep ok
    [2013-05-29 16:18:20,543: DEBUG/MainProcess] | Worker: Starting InfoStep
    [2013-05-29 16:18:20,544: WARNING/MainProcess]
        <celery.apps.worker.Worker object at 0x101ad8410> is starting
    [2013-05-29 16:18:20,544: DEBUG/MainProcess] ^-- substep ok
    [2013-05-29 16:18:20,544: DEBUG/MainProcess] | Worker: Starting Consumer
    [2013-05-29 16:18:20,544: DEBUG/MainProcess] | Consumer: Starting Connection
    [2013-05-29 16:18:20,559: INFO/MainProcess] Connected to amqp://guest@127.0.0.1:5672//
    [2013-05-29 16:18:20,560: DEBUG/MainProcess] ^-- substep ok
    [2013-05-29 16:18:20,560: DEBUG/MainProcess] | Consumer: Starting Mingle
    [2013-05-29 16:18:20,560: INFO/MainProcess] mingle: searching for neighbors
    [2013-05-29 16:18:21,570: INFO/MainProcess] mingle: no one here
    [2013-05-29 16:18:21,570: DEBUG/MainProcess] ^-- substep ok
    [2013-05-29 16:18:21,571: DEBUG/MainProcess] | Consumer: Starting Events
    [2013-05-29 16:18:21,572: DEBUG/MainProcess] ^-- substep ok
    [2013-05-29 16:18:21,572: DEBUG/MainProcess] | Consumer: Starting Gossip
    [2013-05-29 16:18:21,577: DEBUG/MainProcess] ^-- substep ok
    [2013-05-29 16:18:21,577: DEBUG/MainProcess] | Consumer: Starting InfoStep
    [2013-05-29 16:18:21,577: WARNING/MainProcess]
        <celery.worker.consumer.Consumer object at 0x101c2d8d0> is starting
    [2013-05-29 16:18:21,578: DEBUG/MainProcess] ^-- substep ok
    [2013-05-29 16:18:21,578: DEBUG/MainProcess] | Consumer: Starting Heart
    [2013-05-29 16:18:21,579: DEBUG/MainProcess] ^-- substep ok
    [2013-05-29 16:18:21,579: DEBUG/MainProcess] | Consumer: Starting Control
    [2013-05-29 16:18:21,583: DEBUG/MainProcess] ^-- substep ok
    [2013-05-29 16:18:21,583: DEBUG/MainProcess] | Consumer: Starting Tasks
    [2013-05-29 16:18:21,606: DEBUG/MainProcess] basic.qos: prefetch_count->80
    [2013-05-29 16:18:21,606: DEBUG/MainProcess] ^-- substep ok
    [2013-05-29 16:18:21,606: DEBUG/MainProcess] | Consumer: Starting event loop
    [2013-05-29 16:18:21,608: WARNING/MainProcess] celery@example.com ready.

.. _extending-programs:

Command-line programs
=====================

.. _extending-commandoptions:

Adding new command-line options
-------------------------------

.. _extending-command-options:

Command-specific options
~~~~~~~~~~~~~~~~~~~~~~~~

You can add additional command-line options to the ``worker``, ``beat``, and
``events`` commands by modifying the :attr:`~@user_options` attribute of the
application instance.

Celery commands use the :mod:`optparse` module to parse command-line
arguments, so you have to use :mod:`optparse`-specific option instances
created using :func:`optparse.make_option`. Please see the :mod:`optparse`
documentation to read about the fields supported.

Example adding a custom option to the :program:`celery worker` command:

.. code-block:: python

    from celery import Celery
    from celery.bin import Option  # <-- alias to optparse.make_option

    app = Celery(broker='amqp://')

    app.user_options['worker'].add(
        Option('--enable-my-option', action='store_true', default=False,
               help='Enable custom option.'),
    )

All bootsteps will now receive this argument as a keyword argument to
``Bootstep.__init__``:

.. code-block:: python

    from celery import bootsteps


    class MyBootstep(bootsteps.Step):

        def __init__(self, worker, enable_my_option=False, **options):
            if enable_my_option:
                party()

    app.steps['worker'].add(MyBootstep)

.. _extending-preload_options:

Preload options
~~~~~~~~~~~~~~~

The :program:`celery` umbrella command supports the concept of 'preload
options'. These are special options passed to all sub-commands and parsed
outside of the main parsing step.

The list of default preload options can be found in the API reference:
:mod:`celery.bin.base`.

You can add new preload options too, e.g., to specify a configuration template:

.. code-block:: python

    from celery import Celery
    from celery import signals
    from celery.bin import Option

    app = Celery()

    app.user_options['preload'].add(
        Option('-Z', '--template', default='default',
               help='Configuration template to use.'),
    )

    @signals.user_preload_options.connect
    def on_preload_parsed(options, **kwargs):
        use_template(options['template'])

.. _extending-subcommands:

Adding new :program:`celery` sub-commands
-----------------------------------------

New commands can be added to the :program:`celery` umbrella command by using
`setuptools entry-points`_.

.. _`setuptools entry-points`:
    http://reinout.vanrees.org/weblog/2010/01/06/zest-releaser-entry-points.html

Entry-points are special metadata that can be added to your package's ``setup.py`` program,
and then, after installation, read from the system using the :mod:`pkg_resources` module.

Celery recognizes ``celery.commands`` entry-points to install additional
sub-commands, where the value of the entry-point must point to a valid subclass
of :class:`celery.bin.base.Command`. There's limited documentation,
unfortunately, but you can find inspiration from the various commands in the
:mod:`celery.bin` package.

This is how the :pypi:`Flower` monitoring extension adds the :program:`celery flower` command,
by adding an entry-point in :file:`setup.py`:

.. code-block:: python

    setup(
        name='flower',
        entry_points={
            'celery.commands': [
                'flower = flower.command:FlowerCommand',
            ],
        }
    )

The command definition is in two parts separated by the equal sign, where the
first part is the name of the sub-command (flower), and the second part is
the fully qualified symbol path to the class that implements the command:

.. code-block:: text

    flower.command:FlowerCommand

The module path and the name of the attribute should be separated by a colon
as above.

In the module :file:`flower/command.py`, the command class is defined
something like this:

.. code-block:: python

    from celery.bin.base import Command, Option


    class FlowerCommand(Command):

        def get_options(self):
            return (
                Option('--port', default=8888, type='int',
                       help='Webserver port',
                ),
                Option('--debug', action='store_true'),
            )

        def run(self, port=None, debug=False, **kwargs):
            print('Running our command')

Worker API
==========

:class:`~kombu.async.Hub` - The worker's async event loop
------------------------------------------------------------

:supported transports: amqp, redis

.. versionadded:: 3.0

The worker uses asynchronous I/O when the amqp or redis broker transports are
used. The eventual goal is for all transports to use the event loop, but that
will take some time, so other transports still use a threading-based solution.

.. method:: hub.add(fd, callback, flags)

.. method:: hub.add_reader(fd, callback, \*args)

    Add callback to be called when ``fd`` is readable.

    The callback will stay registered until explicitly removed using
    :meth:`hub.remove(fd) <hub.remove>`, or the file descriptor is
    automatically discarded because it's no longer valid.

    Note that only one callback can be registered for any given
    file descriptor at a time, so calling ``add`` a second time will remove
    any callback that was previously registered for that file descriptor.

    A file descriptor is any file-like object that supports the ``fileno``
    method, or it can be the file descriptor number (int).

.. method:: hub.add_writer(fd, callback, \*args)

    Add callback to be called when ``fd`` is writable.
    See also notes for :meth:`hub.add_reader` above.

.. method:: hub.remove(fd)

    Remove all callbacks for file descriptor ``fd`` from the loop.
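
Putting the reader API together, here's a hedged sketch (not part of Celery) of
a worker bootstep that watches a UDP socket of its own from the event loop; the
socket, port, and callback are illustrative only:

.. code-block:: python

    import socket

    from celery import bootsteps


    class SocketReaderStep(bootsteps.StartStopStep):
        requires = ('celery.worker.components:Hub',)

        def __init__(self, worker, **kwargs):
            self.sock = None

        def start(self, worker):
            # a non-blocking UDP socket bound to any free local port.
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.sock.bind(('127.0.0.1', 0))
            self.sock.setblocking(False)
            # call on_readable(sock) whenever the socket becomes readable.
            worker.hub.add_reader(self.sock, self.on_readable, self.sock)

        def on_readable(self, sock):
            print('received: {0!r}'.format(sock.recv(4096)))

        def stop(self, worker):
            if self.sock is not None:
                worker.hub.remove(self.sock)
                self.sock.close()
                self.sock = None
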
Timer - Scheduling events
-------------------------

.. method:: timer.call_after(secs, callback, args=(), kwargs=(),
                             priority=0)

.. method:: timer.call_repeatedly(secs, callback, args=(), kwargs=(),
                                  priority=0)

.. method:: timer.call_at(eta, callback, args=(), kwargs=(),
                          priority=0)
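
For instance, here's a minimal sketch of a worker bootstep using these timer
methods; the interval and the printed message are arbitrary:

.. code-block:: python

    from celery import bootsteps


    class PeriodicLogger(bootsteps.StartStopStep):
        requires = ('celery.worker.components:Timer',)

        def __init__(self, worker, **kwargs):
            self.tref = None

        def start(self, worker):
            # call self.tick(worker) every 60 seconds ...
            self.tref = worker.timer.call_repeatedly(
                60.0, self.tick, (worker,), priority=10,
            )
            # ... and once, roughly five seconds from now.
            worker.timer.call_after(5.0, self.tick, (worker,))

        def stop(self, worker):
            if self.tref:
                self.tref.cancel()
                self.tref = None

        def tick(self, worker):
            print('{0} is still running'.format(worker.hostname))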