
.. _changelog-2.1:

===============================
 Change history for Celery 2.1
===============================

.. contents::
    :local:

.. _version-2.1.4:

2.1.4
=====
:release-date: 2010-12-03 12:00 P.M CEST
:release-by: Ask Solem

.. _v214-fixes:

Fixes
-----

* Execution options to `apply_async` now take precedence over options
  returned by active routers. This was a regression introduced recently
  (Issue #244).

* curses monitor: Long arguments are now truncated so curses
  doesn't crash with out-of-bounds errors (Issue #235).

* multi: Channel errors occurring while handling control commands no
  longer crash the worker, but are instead logged with severity error.

* SQLAlchemy database backend: Fixed a race condition occurring when
  the client wrote the pending state. Just like the Django database backend,
  it no longer saves the pending state (Issue #261 + Issue #262).

* Error email body now uses `repr(exception)` instead of `str(exception)`,
  as the latter could result in Unicode decode errors (Issue #245).

* Error email timeout value is now configurable by using the
  :setting:`EMAIL_TIMEOUT` setting.
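
  A minimal sketch of setting it in your configuration module (the value
  here is only an illustration, not the documented default):

  .. code-block:: python

      # celeryconfig.py -- timeout (in seconds) used when sending error emails.
      EMAIL_TIMEOUT = 5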
* `celeryev`: Now works on Windows (but the curses monitor won't work without
  having curses).

* Unit test output no longer emits non-standard characters.

* worker: The broadcast consumer is now closed if the connection is reset.

* worker: Now properly handles errors occurring while trying to acknowledge
  the message.

* `TaskRequest.on_failure` now encodes the traceback using the current
  file-system encoding (Issue #286).

* `EagerResult` can now be pickled (Issue #288).

.. _v214-documentation:

Documentation
-------------

* Added :ref:`contributing`.

* Added :ref:`guide-optimizing`.

* Added :ref:`faq-security` section to the FAQ.
.. _version-2.1.3:

2.1.3
=====
:release-date: 2010-11-09 05:00 P.M CEST
:release-by: Ask Solem

.. _v213-fixes:

Fixes
-----

* Fixed deadlocks in `timer2` which could lead to `djcelerymon`/`celeryev -c`
  hanging.

* `EventReceiver`: now sends heartbeat request to find workers.

  This means :program:`celeryev` and friends find workers immediately
  at start-up.

* ``celeryev`` curses monitor: Set screen_delay to 10ms, so the screen
  refreshes more often.

* Fixed pickling errors when pickling :class:`AsyncResult` on older Python
  versions.

* worker: prefetch count was decremented by ETA tasks even if there
  were no active prefetch limits.
.. _version-2.1.2:

2.1.2
=====
:release-date: TBA

.. _v212-fixes:

Fixes
-----

* worker: Now sends the :event:`task-retried` event for retried tasks.

* worker: Now honors ignore result for
  :exc:`~@WorkerLostError` and timeout errors.

* ``celerybeat``: Fixed :exc:`UnboundLocalError` in ``celerybeat`` logging
  when using logging setup signals.

* worker: All log messages now include `exc_info`.
.. _version-2.1.1:

2.1.1
=====
:release-date: 2010-10-14 02:00 P.M CEST
:release-by: Ask Solem

.. _v211-fixes:

Fixes
-----

* Now working on Windows again.

  Removed dependency on the :mod:`pwd`/:mod:`grp` modules.

* snapshots: Fixed race condition leading to loss of events.

* worker: Reject tasks with an ETA that cannot be converted to a time stamp.

  See issue #209.

* concurrency.processes.pool: The semaphore was released twice for each task
  (both at ACK and result ready).

  This has been fixed, and it is now released only once per task.

* docs/configuration: Fixed typo `CELERYD_TASK_SOFT_TIME_LIMIT` ->
  :setting:`CELERYD_TASK_SOFT_TIME_LIMIT`.

  See issue #214.

* control command `dump_scheduled`: was using the old `.info` attribute.

* multi: Fixed `set changed size during iteration` bug
  occurring in the restart command.

* worker: Accidentally tried to use additional command-line arguments.

  This would lead to an error like:
  `got multiple values for keyword argument 'concurrency'`.

  Additional command-line arguments are now ignored, and no longer
  produce this error. However, we do reserve the right to use
  positional arguments in the future, so please don't depend on this
  behavior.

* ``celerybeat``: Now respects routers and task execution options again.

* ``celerybeat``: Now reuses the publisher instead of the connection.

* Cache result backend: Using :class:`float` as the expires argument
  to `cache.set` is deprecated by the Memcached libraries,
  so we now automatically cast to :class:`int`.

* unit tests: No longer emits logging and warnings in test output.

.. _v211-news:

News
----

* Now depends on carrot version 0.10.7.

* Added :setting:`CELERY_REDIRECT_STDOUTS`, and
  :setting:`CELERY_REDIRECT_STDOUTS_LEVEL` settings.

  :setting:`CELERY_REDIRECT_STDOUTS` is used by the worker and
  beat. All output to `stdout` and `stderr` will be
  redirected to the current logger if enabled.

  :setting:`CELERY_REDIRECT_STDOUTS_LEVEL` decides the log level used and is
  :const:`WARNING` by default.
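
  A minimal sketch of enabling the redirection in your configuration module
  (the level value is an assumption for illustration):

  .. code-block:: python

      # Redirect stdout/stderr from the worker and beat to the logger,
      # logging the redirected output at WARNING level.
      CELERY_REDIRECT_STDOUTS = True
      CELERY_REDIRECT_STDOUTS_LEVEL = 'WARNING'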
* Added :setting:`CELERYBEAT_SCHEDULER` setting.

  This setting is used to define the default for the -S option to
  :program:`celerybeat`.

  Example:

  .. code-block:: python

      CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'

* Added Task.expires: Used to set default expiry time for tasks.

* New remote control commands: `add_consumer` and `cancel_consumer`.

  .. method:: add_consumer(queue, exchange, exchange_type, routing_key,
              \*\*options)
      :module:

      Tells the worker to declare and consume from the specified queue.

  .. method:: cancel_consumer(queue_name)
      :module:

      Tells the worker to stop consuming from queue (by queue name).

  Commands also added to :program:`celeryctl` and
  :class:`~celery.task.control.inspect`.

  Example using ``celeryctl`` to start consuming from queue "queue", in
  exchange "exchange", of type "direct" using binding key "key":

  .. code-block:: console

      $ celeryctl inspect add_consumer queue exchange direct key
      $ celeryctl inspect cancel_consumer queue

  See :ref:`monitoring-control` for more information about the
  :program:`celeryctl` program.

  Another example using :class:`~celery.task.control.inspect`:

  .. code-block:: pycon

      >>> from celery.task.control import inspect
      >>> inspect.add_consumer(queue='queue', exchange='exchange',
      ...                      exchange_type='direct',
      ...                      routing_key='key',
      ...                      durable=False,
      ...                      auto_delete=True)

      >>> inspect.cancel_consumer('queue')

* ``celerybeat``: Now logs the traceback if a message can't be sent.

* ``celerybeat``: Now enables a default socket timeout of 30 seconds.

* ``README``/introduction/homepage: Added link to `Flask-Celery`_.

.. _`Flask-Celery`: https://github.com/ask/flask-celery
.. _version-2.1.0:

2.1.0
=====
:release-date: 2010-10-08 12:00 P.M CEST
:release-by: Ask Solem

.. _v210-important:

Important Notes
---------------

* Celery is now following the versioning semantics defined by `semver`_.

  This means we're no longer allowed to use odd/even versioning semantics.
  By our previous versioning scheme this stable release should have
  been version 2.2.

  .. _`semver`: http://semver.org

* Now depends on Carrot 0.10.7.

* No longer depends on SQLAlchemy; this needs to be installed separately
  if the database result backend is used.

* :pypi:`django-celery` now comes with a monitor for the Django Admin
  interface. This can also be used if you're not a Django user.

  (Update: The Django-Admin monitor has been replaced with Flower, see the
  Monitoring guide).

* If you get an error after upgrading saying:
  `AttributeError: 'module' object has no attribute 'system'`,

  then this is because the `celery.platform` module has been
  renamed to `celery.platforms` to not collide with the built-in
  :mod:`platform` module.

  You have to remove the old :file:`platform.py` (and maybe
  :file:`platform.pyc`) file from your previous Celery installation.

  To do this use :program:`python` to find the location
  of this module:

  .. code-block:: console

      $ python
      >>> import celery.platform
      >>> celery.platform
      <module 'celery.platform' from '/opt/devel/celery/celery/platform.pyc'>

  Here the compiled module is in :file:`/opt/devel/celery/celery/`.
  To remove the offending files, do:

  .. code-block:: console

      $ rm -f /opt/devel/celery/celery/platform.py*
.. _v210-news:

News
----

* Added support for expiration of AMQP results (requires RabbitMQ 2.1.0).

  The new configuration option :setting:`CELERY_AMQP_TASK_RESULT_EXPIRES`
  sets the expiry time in seconds (can be int or float):

  .. code-block:: python

      CELERY_AMQP_TASK_RESULT_EXPIRES = 30 * 60  # 30 minutes.
      CELERY_AMQP_TASK_RESULT_EXPIRES = 0.80     # 800 ms.

* ``celeryev``: Event Snapshots

  If enabled, the worker sends messages about what the worker is doing.
  These messages are called "events".

  The events are used by real-time monitors to show what the
  cluster is doing, but they aren't very useful for monitoring
  over a longer period of time. Snapshots
  let you take "pictures" of the cluster's state at regular intervals.
  These can then be stored in a database to generate statistics
  with, or even used for monitoring over longer time periods.

  :pypi:`django-celery` now comes with a Celery monitor for the Django
  Admin interface. To use this you need to run the :pypi:`django-celery`
  snapshot camera, which stores snapshots to the database at configurable
  intervals.

  To use the Django admin monitor you need to do the following:

  1. Create the new database tables:

     .. code-block:: console

         $ python manage.py syncdb

  2. Start the :pypi:`django-celery` snapshot camera:

     .. code-block:: console

         $ python manage.py celerycam

  3. Open up the Django admin to monitor your cluster.

  The admin interface shows tasks, worker nodes, and even
  lets you perform some actions, like revoking and rate limiting tasks,
  and shutting down worker nodes.

  There's also a Debian init.d script for :mod:`~celery.bin.events` available,
  see :ref:`daemonizing` for more information.

  New command-line arguments to ``celeryev``:

  * :option:`celery events --camera`: Snapshot camera class to use.
  * :option:`celery events --logfile`: Log file.
  * :option:`celery events --loglevel`: Log level.
  * :option:`celery events --maxrate`: Shutter rate limit.
  * :option:`celery events --freq`: Shutter frequency.

  The :option:`--camera <celery events --camera>` argument is the name
  of a class used to take snapshots with. It must support the interface
  defined by :class:`celery.events.snapshot.Polaroid`; see the sketch at
  the end of this item.

  Shutter frequency controls how often the camera thread wakes up,
  while the rate limit controls how often it will actually take
  a snapshot.

  The rate limit can be an integer (snapshots/s), or a rate limit string
  which has the same syntax as the task rate limit strings (`"200/m"`,
  `"10/s"`, `"1/h"`, etc).

  For the Django camera case, this rate limit can be used to control
  how often the snapshots are written to the database, and the frequency
  used to control how often the thread wakes up to check if there's
  anything new.

  The rate limit is off by default, which means it will take a snapshot
  for every :option:`--frequency <celery events --frequency>` seconds.
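
  A minimal sketch of a custom camera, assuming the `on_shutter` hook and
  the `state` argument it receives; the module and class names below are
  made up for the example:

  .. code-block:: python

      # myapp/camera.py  (module name made up for this example)
      from celery.events.snapshot import Polaroid

      class DumpCam(Polaroid):

          def on_shutter(self, state):
              # Called at most once per rate-limit interval with the
              # current cluster state.
              print('Workers: %r' % (state.workers, ))
              print('Tasks: %r' % (state.tasks, ))

  It could then be enabled with ``celeryev --camera=myapp.camera.DumpCam``.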
* :func:`~celery.task.control.broadcast`: Added callback argument, this can be
  used to process replies immediately as they arrive.
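
  A small sketch of how the callback could be used, assuming the ``ping``
  broadcast command and that replies are requested with ``reply=True``:

  .. code-block:: pycon

      >>> from celery.task.control import broadcast

      >>> def on_reply(reply):
      ...     # Called once for each reply, as soon as it arrives.
      ...     print('Got reply: %r' % (reply, ))

      >>> broadcast('ping', reply=True, timeout=2, callback=on_reply)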
* ``celeryctl``: New command-line utility to manage and inspect worker nodes,
  apply tasks and inspect the results of tasks.

  .. seealso::

      The :ref:`monitoring-control` section in the :ref:`guide`.

  Some examples:

  .. code-block:: console

      $ celeryctl apply tasks.add -a '[2, 2]' --countdown=10

      $ celeryctl inspect active
      $ celeryctl inspect registered_tasks
      $ celeryctl inspect scheduled
      $ celeryctl inspect --help
      $ celeryctl apply --help

* Added the ability to set an expiry date and time for tasks.

  Example::

      >>> # Task expires after one minute from now.
      >>> task.apply_async(args, kwargs, expires=60)
      >>> # Also supports datetime
      >>> task.apply_async(args, kwargs,
      ...                  expires=datetime.now() + timedelta(days=1))

  When a worker receives a task that has been expired it will be
  marked as revoked (:exc:`~@TaskRevokedError`).
* Changed the way logging is configured.

  We now configure the root logger instead of only configuring
  our custom logger. In addition we don't hijack
  the multiprocessing logger anymore, but instead use a custom logger name
  for different applications:

  ===================================== =====================================
  **Application**                       **Logger Name**
  ===================================== =====================================
  ``celeryd``                           ``"celery"``
  ``celerybeat``                        ``"celery.beat"``
  ``celeryev``                          ``"celery.ev"``
  ===================================== =====================================

  This means that the `loglevel` and `logfile` arguments will
  affect all registered loggers (even those from third-party libraries).
  Unless you configure the loggers manually as shown below, that is.

  *Users can choose to configure logging by subscribing to the
  :signal:`~celery.signals.setup_logging` signal:*

  .. code-block:: python

      from logging.config import fileConfig
      from celery import signals

      @signals.setup_logging.connect
      def setup_logging(**kwargs):
          fileConfig('logging.conf')

  If there are no receivers for this signal, the logging subsystem
  will be configured using the
  :option:`--loglevel <celery worker --loglevel>`/
  :option:`--logfile <celery worker --logfile>`
  arguments; this will be used for *all defined loggers*.

  Remember that the worker also redirects stdout and stderr
  to the Celery logger; if you configure logging manually
  you also need to redirect the standard outs manually:

  .. code-block:: python

      from logging.config import fileConfig
      from celery import log

      def setup_logging(**kwargs):
          import logging
          fileConfig('logging.conf')
          stdouts = logging.getLogger('mystdoutslogger')
          log.redirect_stdouts_to_logger(stdouts, loglevel=logging.WARNING)
* worker: Added command-line option
  :option:`--include <celery worker --include>`:

  A comma-separated list of (task) modules to be imported.

  Example:

  .. code-block:: console

      $ celeryd -I app1.tasks,app2.tasks

* worker: now emits a warning if running as the root user (euid is 0).

* :func:`celery.messaging.establish_connection`: Ability to override the
  defaults used, using the keyword argument "defaults".

* worker: Now uses `multiprocessing.freeze_support()` so that it should work
  with **py2exe**, **PyInstaller**, **cx_Freeze**, etc.

* worker: Now includes more meta-data for the :state:`STARTED` state: PID and
  host name of the worker that started the task.

  See issue #181.

* subtask: Merge additional keyword arguments to `subtask()` into task keyword
  arguments.

  e.g.:

      >>> s = subtask((1, 2), {'foo': 'bar'}, baz=1)
      >>> s.args
      (1, 2)
      >>> s.kwargs
      {'foo': 'bar', 'baz': 1}

  See issue #182.

* worker: Now emits a warning if there's already a worker node using the same
  name running on the same virtual host.

* AMQP result backend: Sending of results is now retried if the connection
  is down.

* AMQP result backend: `result.get()`: Wait for next state if state isn't
  in :data:`~celery.states.READY_STATES`.

* TaskSetResult now supports subscription.

  ::

      >>> res = TaskSet(tasks).apply_async()
      >>> res[0].get()

* Added `Task.send_error_emails` + `Task.error_whitelist`, so these can
  be configured per task instead of just by the global setting.

* Added `Task.store_errors_even_if_ignored`, so it can be changed per Task,
  not just by the global setting.
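
  A minimal sketch of setting these per-task attributes (together with
  `Task.send_error_emails` from the previous item); the attribute values
  below are assumptions for illustration only:

  .. code-block:: python

      from celery.task import Task

      class ImportFeed(Task):
          # Send error emails for this task only, limited to these errors.
          send_error_emails = True
          error_whitelist = ['socket.error']

          # Keep results of failures even though results are ignored.
          ignore_result = True
          store_errors_even_if_ignored = True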
* The Crontab scheduler no longer wakes up every second, but implements
  `remaining_estimate` (*Optimization*).

* worker: Store :state:`FAILURE` result if the
  :exc:`~@WorkerLostError` exception occurs (worker process
  disappeared).

* worker: Store :state:`FAILURE` result if one of the `*TimeLimitExceeded`
  exceptions occurs.

* Refactored the periodic task responsible for cleaning up results.

  * The backend cleanup task is now only added to the schedule if
    :setting:`CELERY_TASK_RESULT_EXPIRES` is set.

  * If the schedule already contains a periodic task named
    "celery.backend_cleanup" it won't change it, so the behavior of the
    backend cleanup task can be easily changed (see the sketch below).

  * The task is now run every day at 4:00 AM, rather than every day since
    the first time it was run (using Crontab schedule instead of
    `run_every`).

  * Renamed `celery.task.builtins.DeleteExpiredTaskMetaTask`
    -> :class:`celery.task.builtins.backend_cleanup`.

  * The task itself has been renamed from "celery.delete_expired_task_meta"
    to "celery.backend_cleanup".

  See issue #134.
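
  A minimal sketch of overriding the built-in entry in
  :setting:`CELERYBEAT_SCHEDULE` so the cleanup runs at 2:00 AM instead
  (the schedule value here is only an example):

  .. code-block:: python

      from celery.task.schedules import crontab

      CELERYBEAT_SCHEDULE = {
          # Reusing the name "celery.backend_cleanup" overrides the entry
          # that would otherwise be added automatically.
          'celery.backend_cleanup': {
              'task': 'celery.backend_cleanup',
              'schedule': crontab(hour=2, minute=0),
          },
      }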
* Implemented `AsyncResult.forget` for SQLAlchemy/Memcached/Redis/Tokyo Tyrant
  backends. (Forget and remove task result).

  See issue #184.
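
  A small usage sketch (``add`` is a stand-in for any task, and a result
  backend supporting ``forget`` is assumed):

  .. code-block:: pycon

      >>> result = add.delay(2, 2)
      >>> result.get()
      4
      >>> result.forget()   # discard the stored result from the backend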
* :meth:`TaskSetResult.join <celery.result.TaskSetResult.join>`:
  Added `propagate=True` argument.

  When set to :const:`False` exceptions occurring in subtasks will
  not be re-raised.
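
  A small sketch; with ``propagate=False`` the join completes even if some
  of the subtasks failed:

  .. code-block:: pycon

      >>> res = TaskSet(tasks).apply_async()
      >>> res.join(propagate=False)   # don't re-raise subtask exceptions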
* Added `Task.update_state(task_id, state, meta)`
  as a shortcut to `task.backend.store_result(task_id, meta, state)`.

  The backend interface is "private" and the terminology outdated,
  so better to move this to :class:`~celery.task.base.Task` so it can be
  used.
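
  A minimal sketch of reporting a custom progress state from a task body,
  assuming the magic ``task_id`` keyword argument passed to tasks that
  accept ``**kwargs``; the ``PROGRESS`` state name is made up for the
  example:

  .. code-block:: python

      from celery.decorators import task

      @task
      def process(items, **kwargs):
          for i, item in enumerate(items):
              # ... work on item ...
              process.update_state(kwargs.get('task_id'), 'PROGRESS',
                                   {'current': i + 1, 'total': len(items)})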
* timer2: Set `self.running=False` in
  :meth:`~celery.utils.timer2.Timer.stop` so it won't try to join again on
  subsequent calls to `stop()`.

* Log colors are now disabled by default on Windows.

* `celery.platform` renamed to :mod:`celery.platforms`, so it doesn't
  collide with the built-in :mod:`platform` module.

* Exceptions occurring in Mediator+Pool callbacks are now caught and logged
  instead of taking down the worker.

* Redis result backend: Now supports result expiration using the Redis
  `EXPIRE` command.

* unit tests: Don't leave threads running at tear down.

* worker: Task results shown in logs are now truncated to 46 chars.

* `Task.__name__` is now an alias to `self.__class__.__name__`.
  This way tasks introspect more like regular functions.

* `Task.retry`: Now raises :exc:`TypeError` if kwargs argument is empty.

  See issue #164.

* ``timedelta_seconds``: Use ``timedelta.total_seconds`` if running on
  Python 2.7.

* :class:`~kombu.utils.limits.TokenBucket`: Generic Token Bucket algorithm.

* :mod:`celery.events.state`: Recording of cluster state can now
  be paused and resumed, including support for buffering.

  .. method:: State.freeze(buffer=True)

      Pauses recording of the stream.

      If `buffer` is true, events received while being frozen will be
      buffered, and may be replayed later.

  .. method:: State.thaw(replay=True)

      Resumes recording of the stream.

      If `replay` is true, then the recorded buffer will be applied.

  .. method:: State.freeze_while(fun)

      With a function to apply, freezes the stream before,
      and replays the buffer after the function returns.
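
  A small sketch of reading a consistent snapshot of the state
  (``state`` is assumed to be a :class:`~celery.events.state.State`
  instance that events are being recorded into):

  .. code-block:: pycon

      >>> state.freeze(buffer=True)   # pause recording, buffer incoming events
      >>> workers = list(state.workers)
      >>> state.thaw(replay=True)     # resume and apply the buffered events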
* :meth:`EventReceiver.capture <celery.events.EventReceiver.capture>`
  Now supports a timeout keyword argument.
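
  A small sketch of capturing events with a time limit, assuming the
  ``handlers`` mapping form of the receiver constructor:

  .. code-block:: python

      from celery.events import EventReceiver
      from celery.messaging import establish_connection

      def on_event(event):
          print(event['type'])

      connection = establish_connection()
      try:
          receiver = EventReceiver(connection, handlers={'*': on_event})
          # Return after at most 10 seconds instead of capturing forever.
          receiver.capture(limit=None, timeout=10.0)
      finally:
          connection.close()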
* worker: The mediator thread is now disabled if
  :setting:`CELERY_RATE_LIMITS` is enabled, and tasks are directly sent to the
  pool without going through the ready queue (*Optimization*).

.. _v210-fixes:

Fixes
-----

* Pool: Process timed out by `TimeoutHandler` must be joined by the Supervisor,
  so don't remove it from the internal process list.

  See issue #192.

* `TaskPublisher.delay_task` now supports exchange argument, so exchange can be
  overridden when sending tasks in bulk using the same publisher.

  See issue #187.

* The worker no longer marks tasks as revoked if :setting:`CELERY_IGNORE_RESULT`
  is enabled.

  See issue #207.

* AMQP result backend: Fixed bug with `result.get()` if
  :setting:`CELERY_TRACK_STARTED` enabled.

  `result.get()` would stop consuming after receiving the
  :state:`STARTED` state.

* Fixed bug where new processes created by the pool supervisor become stuck
  while reading from the task Queue.

  See http://bugs.python.org/issue10037

* Fixed timing issue when declaring the remote control command reply queue.

  This issue could result in replies being lost, but has now been fixed.

* Backward compatible `LoggerAdapter` implementation: Now works for Python 2.4.

  Also added support for several new methods:
  `fatal`, `makeRecord`, `_log`, `log`, `isEnabledFor`,
  `addHandler`, `removeHandler`.
.. _v210-experimental:

Experimental
------------

* multi: Added daemonization support.

  multi can now be used to start, stop and restart worker nodes:

  .. code-block:: console

      $ celeryd-multi start jerry elaine george kramer

  This also creates PID files and log files (:file:`celeryd@jerry.pid`,
  ..., :file:`celeryd@jerry.log`). To specify a location for these files
  use the `--pidfile` and `--logfile` arguments with the `%n`
  format:

  .. code-block:: console

      $ celeryd-multi start jerry elaine george kramer \
                      --logfile=/var/log/celeryd@%n.log \
                      --pidfile=/var/run/celeryd@%n.pid

  Stopping:

  .. code-block:: console

      $ celeryd-multi stop jerry elaine george kramer

  Restarting. The nodes will be restarted one by one as the old ones
  are shut down:

  .. code-block:: console

      $ celeryd-multi restart jerry elaine george kramer

  Killing the nodes (**WARNING**: Will discard currently executing tasks):

  .. code-block:: console

      $ celeryd-multi kill jerry elaine george kramer

  See `celeryd-multi help` for help.

* multi: `start` command renamed to `show`.

  `celeryd-multi start` will now actually start and detach worker nodes.
  To just generate the commands you have to use `celeryd-multi show`.

* worker: Added `--pidfile` argument.

  The worker will write its pid when it starts. The worker will
  not be started if this file exists and the pid contained is still alive.
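
  For example (the path shown is arbitrary):

  .. code-block:: console

      $ celeryd --pidfile=/var/run/celeryd.pid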
* Added generic init.d script using `celeryd-multi`:

  https://github.com/celery/celery/tree/master/extra/generic-init.d/celeryd

.. _v210-documentation:

Documentation
-------------

* Added user guide section: Monitoring.

* Added user guide section: Periodic Tasks.

  Moved from `getting-started/periodic-tasks` and updated.

* tutorials/external moved to new section: "community".

* References have been added to all sections in the documentation.

  This makes it easier to link between documents.