.. _changelog-2.1:

===============================
 Change history for Celery 2.1
===============================

.. contents::
    :local:

.. _version-2.1.4:

2.1.4
=====
:release-date: 2010-12-03 12:00 P.M CEST

.. _v214-fixes:

Fixes
-----

* Execution options to `apply_async` now take precedence over options
  returned by active routers. This was a regression introduced recently
  (Issue #244).

* curses monitor: Long arguments are now truncated so curses
  doesn't crash with out of bounds errors (Issue #235).

* multi: Channel errors occurring while handling control commands no
  longer crash the worker but are instead logged with severity error.

* SQLAlchemy database backend: Fixed a race condition occurring when
  the client wrote the pending state. Just like the Django database backend,
  it no longer saves the pending state (Issue #261 + Issue #262).

* Error email body now uses `repr(exception)` instead of `str(exception)`,
  as the latter could result in Unicode decode errors (Issue #245).

* Error email timeout is now configurable by using the
  :setting:`EMAIL_TIMEOUT` setting.
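
As a sketch, the timeout could be set in the configuration module like this
(the 10-second value is an arbitrary example, not a recommended default):

```python
# Give up sending error emails after 10 seconds (example value).
EMAIL_TIMEOUT = 10
```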
* `celeryev`: Now works on Windows (but the curses monitor won't work without
  having curses).

* Unit test output no longer emits non-standard characters.

* worker: The broadcast consumer is now closed if the connection is reset.

* worker: Now properly handles errors occurring while trying to acknowledge
  the message.

* `TaskRequest.on_failure` now encodes the traceback using the current
  filesystem encoding (Issue #286).

* `EagerResult` can now be pickled (Issue #288).

.. _v214-documentation:

Documentation
-------------

* Added :ref:`contributing`.

* Added :ref:`guide-optimizing`.

* Added :ref:`faq-security` section to the FAQ.
.. _version-2.1.3:

2.1.3
=====
:release-date: 2010-11-09 05:00 P.M CEST

.. _v213-fixes:

Fixes
-----

* Fixed deadlocks in `timer2` that could lead to `djcelerymon`/`celeryev -c`
  hanging.

* `EventReceiver`: now sends a heartbeat request to find workers.

  This means :program:`celeryev` and friends find workers immediately
  at startup.

* celeryev cursesmon: Set screen_delay to 10ms, so the screen refreshes more
  often.

* Fixed pickling errors when pickling :class:`AsyncResult` on older Python
  versions.

* worker: The prefetch count was decremented by ETA tasks even if there
  were no active prefetch limits.
.. _version-2.1.2:

2.1.2
=====
:release-date: TBA

.. _v212-fixes:

Fixes
-----

* worker: Now sends the :event:`task-retried` event for retried tasks.

* worker: Now honors ignore result for
  :exc:`~@WorkerLostError` and timeout errors.

* celerybeat: Fixed :exc:`UnboundLocalError` in celerybeat logging
  when using logging setup signals.

* worker: All log messages now include `exc_info`.
.. _version-2.1.1:

2.1.1
=====
:release-date: 2010-10-14 02:00 P.M CEST

.. _v211-fixes:

Fixes
-----

* Now working on Windows again.

  Removed dependency on the pwd/grp modules.

* snapshots: Fixed race condition leading to loss of events.

* worker: Reject tasks with an ETA that cannot be converted to a time stamp.

  See issue #209.

* concurrency.processes.pool: The semaphore was released twice for each task
  (both at ACK and result ready).

  This has been fixed, and it is now released only once per task.

* docs/configuration: Fixed typo `CELERYD_TASK_SOFT_TIME_LIMIT` ->
  :setting:`CELERYD_TASK_SOFT_TIME_LIMIT`.

  See issue #214.

* control command `dump_scheduled`: was using the old `.info` attribute.

* multi: Fixed `set changed size during iteration` bug
  occurring in the restart command.

* worker: Accidentally tried to use additional command-line arguments.

  This would lead to an error like:
  `got multiple values for keyword argument 'concurrency'`.

  Additional command-line arguments are now ignored, and no longer
  produce this error. However -- we do reserve the right to use
  positional arguments in the future, so please don't depend on this
  behavior.

* celerybeat: Now respects routers and task execution options again.

* celerybeat: Now reuses the publisher instead of the connection.

* Cache result backend: Using :class:`float` as the expires argument
  to `cache.set` is deprecated by the memcached libraries,
  so we now automatically cast to :class:`int`.

* unit tests: No longer emit logging and warnings in test output.
.. _v211-news:

News
----

* Now depends on carrot version 0.10.7.

* Added :setting:`CELERY_REDIRECT_STDOUTS`, and
  :setting:`CELERY_REDIRECT_STDOUTS_LEVEL` settings.

  :setting:`CELERY_REDIRECT_STDOUTS` is used by the worker and
  beat. All output to `stdout` and `stderr` will be
  redirected to the current logger if enabled.

  :setting:`CELERY_REDIRECT_STDOUTS_LEVEL` decides the log level used, and is
  :const:`WARNING` by default.
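
A minimal configuration sketch, assuming the level is given by name
(the values shown are examples, not required settings):

```python
# Example: redirect stdout/stderr in the worker and beat, logging the
# captured output at WARNING level (which is also the default level).
CELERY_REDIRECT_STDOUTS = True
CELERY_REDIRECT_STDOUTS_LEVEL = "WARNING"
```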
* Added the :setting:`CELERYBEAT_SCHEDULER` setting.

  This setting is used to define the default for the -S option to
  :program:`celerybeat`.

  Example:

  .. code-block:: python

      CELERYBEAT_SCHEDULER = "djcelery.schedulers.DatabaseScheduler"

* Added `Task.expires`: Used to set the default expiry time for tasks.

* New remote control commands: `add_consumer` and `cancel_consumer`.

  .. method:: add_consumer(queue, exchange, exchange_type, routing_key,
              **options)
      :module:

      Tells the worker to declare and consume from the specified
      declaration.

  .. method:: cancel_consumer(queue_name)
      :module:

      Tells the worker to stop consuming from the queue (by queue name).

  Commands were also added to :program:`celeryctl` and
  :class:`~celery.task.control.inspect`.

  Example using celeryctl to start consuming from queue "queue", in
  exchange "exchange", of type "direct" using binding key "key":

  .. code-block:: bash

      $ celeryctl inspect add_consumer queue exchange direct key
      $ celeryctl inspect cancel_consumer queue

  See :ref:`monitoring-celeryctl` for more information about the
  :program:`celeryctl` program.

  Another example using :class:`~celery.task.control.inspect`:

  .. code-block:: python

      >>> from celery.task.control import inspect
      >>> inspect.add_consumer(queue="queue", exchange="exchange",
      ...                      exchange_type="direct",
      ...                      routing_key="key",
      ...                      durable=False,
      ...                      auto_delete=True)

      >>> inspect.cancel_consumer("queue")

* celerybeat: Now logs the traceback if a message can't be sent.

* celerybeat: Now enables a default socket timeout of 30 seconds.

* README/introduction/homepage: Added link to `Flask-Celery`_.

.. _`Flask-Celery`: http://github.com/ask/flask-celery
.. _version-2.1.0:

2.1.0
=====
:release-date: 2010-10-08 12:00 P.M CEST

.. _v210-important:

Important Notes
---------------

* Celery is now following the versioning semantics defined by `semver`_.

  This means we are no longer allowed to use odd/even versioning semantics.
  By our previous versioning scheme this stable release should have
  been version 2.2.

  .. _`semver`: http://semver.org

* Now depends on Carrot 0.10.7.

* No longer depends on SQLAlchemy; it needs to be installed separately
  if the database result backend is used.

* django-celery now comes with a monitor for the Django Admin interface.
  This can also be used if you're not a Django user.

  (Update: the Django-Admin monitor has been replaced with Flower, see the
  Monitoring guide).

* If you get an error after upgrading saying:
  `AttributeError: 'module' object has no attribute 'system'`,
  then this is because the `celery.platform` module has been
  renamed to `celery.platforms` to not collide with the built-in
  :mod:`platform` module.

  You have to remove the old :file:`platform.py` (and maybe
  :file:`platform.pyc`) file from your previous Celery installation.

  To do this, use :program:`python` to find the location
  of this module:

  .. code-block:: bash

      $ python
      >>> import celery.platform
      >>> celery.platform
      <module 'celery.platform' from '/opt/devel/celery/celery/platform.pyc'>

  Here the compiled module is in :file:`/opt/devel/celery/celery/`.
  To remove the offending files, do:

  .. code-block:: bash

      $ rm -f /opt/devel/celery/celery/platform.py*
.. _v210-news:

News
----

* Added support for expiration of AMQP results (requires RabbitMQ 2.1.0).

  The new configuration option :setting:`CELERY_AMQP_TASK_RESULT_EXPIRES`
  sets the expiry time in seconds (can be int or float):

  .. code-block:: python

      CELERY_AMQP_TASK_RESULT_EXPIRES = 30 * 60  # 30 minutes.
      CELERY_AMQP_TASK_RESULT_EXPIRES = 0.80     # 800 ms.
* celeryev: Event Snapshots

  If enabled, the worker sends messages about what it is doing.
  These messages are called "events".

  The events are used by real-time monitors to show what the
  cluster is doing, but they are not very useful for monitoring
  over a longer period of time. Snapshots
  let you take "pictures" of the cluster's state at regular intervals.
  This can then be stored in a database to generate statistics
  with, or even monitoring over longer time periods.

  django-celery now comes with a Celery monitor for the Django
  Admin interface. To use this you need to run the django-celery
  snapshot camera, which stores snapshots to the database at configurable
  intervals.

  To use the Django admin monitor you need to do the following:

  1. Create the new database tables:

     .. code-block:: bash

         $ python manage.py syncdb

  2. Start the django-celery snapshot camera:

     .. code-block:: bash

         $ python manage.py celerycam

  3. Open up the django admin to monitor your cluster.

  The admin interface shows tasks, worker nodes, and even
  lets you perform some actions, like revoking and rate limiting tasks,
  and shutting down worker nodes.

  There's also a Debian init.d script for :mod:`~celery.bin.events` available,
  see :ref:`daemonizing` for more information.

  New command-line arguments to celeryev:

  * :option:`-c|--camera`: Snapshot camera class to use.
  * :option:`--logfile|-f`: Log file.
  * :option:`--loglevel|-l`: Log level.
  * :option:`--maxrate|-r`: Shutter rate limit.
  * :option:`--freq|-F`: Shutter frequency.

  The :option:`--camera` argument is the name of a class used to take
  snapshots with. It must support the interface defined by
  :class:`celery.events.snapshot.Polaroid`.

  Shutter frequency controls how often the camera thread wakes up,
  while the rate limit controls how often it will actually take
  a snapshot.

  The rate limit can be an integer (snapshots/s), or a rate limit string
  which has the same syntax as the task rate limit strings (`"200/m"`,
  `"10/s"`, `"1/h"`, etc).

  For the Django camera case, this rate limit can be used to control
  how often the snapshots are written to the database, and the frequency
  used to control how often the thread wakes up to check if there's
  anything new.

  The rate limit is off by default, which means it will take a snapshot
  for every :option:`--frequency` seconds.
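
The rate limit string syntax above can be illustrated with a small
stand-alone parser. This is a sketch of the idea only, not Celery's actual
implementation; the function name and return convention are assumptions:

```python
# Sketch: parse rate limit strings such as "200/m", "10/s", "1/h" into
# operations per second. Plain integers are taken as operations/second.
RATE_MODIFIERS = {"s": 1, "m": 60, "h": 60 * 60}

def parse_rate(rate):
    if isinstance(rate, (int, float)):
        return float(rate)
    ops, _, modifier = rate.partition("/")
    return float(ops) / RATE_MODIFIERS[modifier or "s"]
```

For example, `parse_rate("200/m")` yields one snapshot every 0.3 seconds
on average, while `parse_rate("1/h")` yields roughly one every hour.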
* :func:`~celery.task.control.broadcast`: Added a callback argument, which can
  be used to process replies immediately as they arrive.

* celeryctl: New command-line utility to manage and inspect worker nodes,
  apply tasks and inspect the results of tasks.

  .. seealso::

      The :ref:`monitoring-celeryctl` section in the :ref:`guide`.

  Some examples:

  .. code-block:: bash

      $ celeryctl apply tasks.add -a '[2, 2]' --countdown=10

      $ celeryctl inspect active
      $ celeryctl inspect registered_tasks
      $ celeryctl inspect scheduled
      $ celeryctl inspect --help
      $ celeryctl apply --help
* Added the ability to set an expiry date and time for tasks.

  Example::

      >>> # Task expires after one minute from now.
      >>> task.apply_async(args, kwargs, expires=60)
      >>> # Also supports datetime
      >>> task.apply_async(args, kwargs,
      ...                  expires=datetime.now() + timedelta(days=1))

  When a worker receives an expired task it will be
  marked as revoked (:exc:`~@TaskRevokedError`).
* Changed the way logging is configured.

  We now configure the root logger instead of only configuring
  our custom logger. In addition we don't hijack
  the multiprocessing logger anymore, but instead use a custom logger name
  for different applications:

  ====================  ====================
  **Application**       **Logger Name**
  ====================  ====================
  `celeryd`             "celery"
  `celerybeat`          "celery.beat"
  `celeryev`            "celery.ev"
  ====================  ====================

  This means that the `loglevel` and `logfile` arguments will
  affect all registered loggers (even those from third-party libraries),
  unless you configure the loggers manually as shown below.

  *Users can choose to configure logging by subscribing to the
  :signal:`~celery.signals.setup_logging` signal:*

  .. code-block:: python

      from logging.config import fileConfig
      from celery import signals

      @signals.setup_logging.connect
      def setup_logging(**kwargs):
          fileConfig("logging.conf")

  If there are no receivers for this signal, the logging subsystem
  will be configured using the :option:`--loglevel`/:option:`--logfile`
  arguments; these will be used for *all defined loggers*.

  Remember that the worker also redirects stdout and stderr
  to the celery logger; if you configure logging manually
  you also need to redirect the stdouts manually:

  .. code-block:: python

      from logging.config import fileConfig
      from celery import log

      def setup_logging(**kwargs):
          import logging
          fileConfig("logging.conf")
          stdouts = logging.getLogger("mystdoutslogger")
          log.redirect_stdouts_to_logger(stdouts, loglevel=logging.WARNING)
* worker: Added command-line option :option:`-I`/:option:`--include`:

  A comma separated list of (task) modules to be imported.

  Example:

  .. code-block:: bash

      $ celeryd -I app1.tasks,app2.tasks

* worker: Now emits a warning if running as the root user (euid is 0).

* :func:`celery.messaging.establish_connection`: Ability to override the
  defaults used, using the keyword argument "defaults".

* worker: Now uses `multiprocessing.freeze_support()` so that it should work
  with **py2exe**, **PyInstaller**, **cx_Freeze**, etc.

* worker: Now includes more metadata for the :state:`STARTED` state: PID and
  host name of the worker that started the task.

  See issue #181.
* subtask: Merge additional keyword arguments to `subtask()` into task keyword
  arguments.

  e.g.::

      >>> s = subtask((1, 2), {"foo": "bar"}, baz=1)
      >>> s.args
      (1, 2)
      >>> s.kwargs
      {"foo": "bar", "baz": 1}

  See issue #182.

* worker: Now emits a warning if there's already a worker node using the same
  name running on the same virtual host.

* AMQP result backend: Sending of results is now retried if the connection
  is down.

* AMQP result backend: `result.get()`: Wait for next state if state isn't
  in :data:`~celery.states.READY_STATES`.

* TaskSetResult now supports subscription.

  ::

      >>> res = TaskSet(tasks).apply_async()
      >>> res[0].get()
* Added `Task.send_error_emails` + `Task.error_whitelist`, so these can
  be configured per task instead of just by the global setting.

* Added `Task.store_errors_even_if_ignored`, so it can be changed per Task,
  not just by the global setting.

* The crontab scheduler no longer wakes up every second, but implements
  `remaining_estimate` (*Optimization*).

* worker: Store :state:`FAILURE` result if the
  :exc:`~@WorkerLostError` exception occurs (worker process
  disappeared).

* worker: Store :state:`FAILURE` result if one of the `*TimeLimitExceeded`
  exceptions occurs.

* Refactored the periodic task responsible for cleaning up results.

  * The backend cleanup task is now only added to the schedule if
    :setting:`CELERY_TASK_RESULT_EXPIRES` is set.

  * If the schedule already contains a periodic task named
    "celery.backend_cleanup" it won't change it, so the behavior of the
    backend cleanup task can be easily changed.

  * The task is now run every day at 4:00 AM, rather than every day since
    the first time it was run (using a crontab schedule instead of
    `run_every`).

  * Renamed `celery.task.builtins.DeleteExpiredTaskMetaTask`
    -> :class:`celery.task.builtins.backend_cleanup`.

  * The task itself has been renamed from "celery.delete_expired_task_meta"
    to "celery.backend_cleanup".

  See issue #134.

* Implemented `AsyncResult.forget` for sqla/cache/redis/tyrant backends
  (forget and remove task result).

  See issue #184.

* :meth:`TaskSetResult.join <celery.result.TaskSetResult.join>`:
  Added a `propagate=True` argument.

  When set to :const:`False`, exceptions occurring in subtasks will
  not be re-raised.
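
The `propagate` behavior can be sketched with a plain-Python stand-in
(an illustration of the semantics only, not the real `TaskSetResult.join`):

```python
# Sketch of join(propagate=...): collect subtask outcomes, re-raising the
# first exception when propagate=True, otherwise keeping it as a value.
def join(results, propagate=True):
    out = []
    for result in results:
        if isinstance(result, Exception) and propagate:
            raise result
        out.append(result)
    return out
```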
* Added `Task.update_state(task_id, state, meta)`
  as a shortcut to `task.backend.store_result(task_id, meta, state)`.

  The backend interface is "private" and the terminology outdated,
  so it's better to move this to :class:`~celery.task.base.Task` so it can be
  used.
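
The shortcut simply forwards to the backend call. A hedged sketch of the
relationship, using a hypothetical stand-in backend (not Celery's real
backend interface):

```python
# Illustration only: Task.update_state as a thin wrapper over the backend.
class FakeBackend(object):
    """Hypothetical stand-in for a result backend."""
    def __init__(self):
        self.stored = {}

    def store_result(self, task_id, meta, state):
        self.stored[task_id] = (state, meta)

class Task(object):
    backend = FakeBackend()

    def update_state(self, task_id, state, meta=None):
        # Equivalent to calling the "private" backend method directly.
        self.backend.store_result(task_id, meta, state)
```

Usage: a long-running task could call `self.update_state(task_id,
"PROGRESS", {"done": 50})` to publish custom progress states.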
* timer2: Set `self.running=False` in
  :meth:`~celery.utils.timer2.Timer.stop` so it won't try to join again on
  subsequent calls to `stop()`.

* Log colors are now disabled by default on Windows.

* `celery.platform` renamed to :mod:`celery.platforms`, so it doesn't
  collide with the built-in :mod:`platform` module.

* Exceptions occurring in Mediator+Pool callbacks are now caught and logged
  instead of taking down the worker.

* Redis result backend: Now supports result expiration using the Redis
  `EXPIRE` command.

* unit tests: Don't leave threads running at tear down.

* worker: Task results shown in logs are now truncated to 46 chars.

* `Task.__name__` is now an alias to `self.__class__.__name__`.
  This way tasks introspect more like regular functions.

* `Task.retry`: Now raises :exc:`TypeError` if the kwargs argument is empty.

  See issue #164.

* timedelta_seconds: Use `timedelta.total_seconds` if running on Python 2.7.
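
A self-contained sketch of such a helper with the pre-2.7 fallback. This
mirrors the idea, not necessarily Celery's exact code; clamping negative
deltas to zero is an assumption here:

```python
from datetime import timedelta

def timedelta_seconds(delta):
    """Convert a timedelta to seconds, preferring total_seconds()."""
    if hasattr(delta, "total_seconds"):
        return max(delta.total_seconds(), 0)  # Python 2.7+
    # Manual fallback for older Python versions.
    if delta.days < 0:
        return 0
    return (delta.days * 24 * 60 * 60 +
            delta.seconds +
            delta.microseconds / 1e6)
```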
* :class:`~celery.datastructures.TokenBucket`: Generic Token Bucket algorithm.
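
A generic token bucket can be sketched as follows. This illustrates the
algorithm itself, not the `celery.datastructures.TokenBucket` API:

```python
import time

class TokenBucket(object):
    """Allow `fill_rate` operations per second, with bursts up to
    `capacity` tokens."""

    def __init__(self, fill_rate, capacity=1):
        self.capacity = float(capacity)
        self.fill_rate = float(fill_rate)
        self._tokens = self.capacity
        self._timestamp = time.time()

    def can_consume(self, tokens=1):
        # Refill first, then consume if enough tokens are available.
        if tokens <= self._refill():
            self._tokens -= tokens
            return True
        return False

    def _refill(self):
        now = time.time()
        if self._tokens < self.capacity:
            delta = self.fill_rate * (now - self._timestamp)
            self._tokens = min(self.capacity, self._tokens + delta)
        self._timestamp = now
        return self._tokens
```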
* :mod:`celery.events.state`: Recording of cluster state can now
  be paused and resumed, including support for buffering.

  .. method:: State.freeze(buffer=True)

      Pauses recording of the stream.

      If `buffer` is true, events received while being frozen will be
      buffered, and may be replayed later.

  .. method:: State.thaw(replay=True)

      Resumes recording of the stream.

      If `replay` is true, the recorded buffer will be applied.

  .. method:: State.freeze_while(fun)

      Applies a function while the stream is frozen: freezes the stream
      before calling it, and replays the buffer after the function returns.
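
The buffering semantics can be sketched with a minimal recorder. This is an
illustrative stand-in, not the real `celery.events.state.State` class:

```python
# Minimal sketch of freeze/thaw buffering semantics (illustration only).
class Recorder(object):
    def __init__(self):
        self.events = []
        self._buffer = []
        self.frozen = False
        self.buffering = False

    def receive(self, event):
        if not self.frozen:
            self.events.append(event)
        elif self.buffering:
            self._buffer.append(event)  # kept for later replay
        # else: event is dropped while frozen

    def freeze(self, buffer=True):
        self.frozen, self.buffering = True, buffer

    def thaw(self, replay=True):
        self.frozen = False
        if replay:
            while self._buffer:
                self.events.append(self._buffer.pop(0))
```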
* :meth:`EventReceiver.capture <celery.events.EventReceiver.capture>`:
  Now supports a timeout keyword argument.

* worker: The mediator thread is now disabled if
  :setting:`CELERY_RATE_LIMITS` is enabled, and tasks are directly sent to the
  pool without going through the ready queue (*Optimization*).
.. _v210-fixes:

Fixes
-----

* Pool: A process timed out by `TimeoutHandler` must be joined by the
  Supervisor, so don't remove it from the internal process list.

  See issue #192.

* `TaskPublisher.delay_task` now supports an exchange argument, so the
  exchange can be overridden when sending tasks in bulk using the same
  publisher.

  See issue #187.

* The worker no longer marks tasks as revoked if
  :setting:`CELERY_IGNORE_RESULT` is enabled.

  See issue #207.

* AMQP result backend: Fixed bug with `result.get()` if
  :setting:`CELERY_TRACK_STARTED` is enabled.

  `result.get()` would stop consuming after receiving the
  :state:`STARTED` state.

* Fixed bug where new processes created by the pool supervisor become stuck
  while reading from the task Queue.

  See http://bugs.python.org/issue10037

* Fixed timing issue when declaring the remote control command reply queue.

  This issue could result in replies being lost, but has now been fixed.

* Backward compatible `LoggerAdapter` implementation: Now works for
  Python 2.4.

  Also added support for several new methods:
  `fatal`, `makeRecord`, `_log`, `log`, `isEnabledFor`,
  `addHandler`, `removeHandler`.
.. _v210-experimental:

Experimental
------------

* multi: Added daemonization support.

  multi can now be used to start, stop and restart worker nodes:

  .. code-block:: bash

      $ celeryd-multi start jerry elaine george kramer

  This also creates PID files and log files (:file:`celeryd@jerry.pid`,
  ..., :file:`celeryd@jerry.log`). To specify a location for these files
  use the `--pidfile` and `--logfile` arguments with the `%n`
  format:

  .. code-block:: bash

      $ celeryd-multi start jerry elaine george kramer \
                      --logfile=/var/log/celeryd@%n.log \
                      --pidfile=/var/run/celeryd@%n.pid

  Stopping:

  .. code-block:: bash

      $ celeryd-multi stop jerry elaine george kramer

  Restarting. The nodes will be restarted one by one as the old ones
  are shut down:

  .. code-block:: bash

      $ celeryd-multi restart jerry elaine george kramer

  Killing the nodes (**WARNING**: This will discard currently executing
  tasks):

  .. code-block:: bash

      $ celeryd-multi kill jerry elaine george kramer

  See `celeryd-multi help` for help.

* multi: `start` command renamed to `show`.

  `celeryd-multi start` will now actually start and detach worker nodes.
  To just generate the commands you have to use `celeryd-multi show`.

* worker: Added `--pidfile` argument.

  The worker will write its pid when it starts. The worker will
  not be started if this file exists and the pid contained is still alive.

* Added generic init.d script using `celeryd-multi`:

  http://github.com/celery/celery/tree/master/extra/generic-init.d/celeryd
.. _v210-documentation:

Documentation
-------------

* Added user guide section: Monitoring.

* Added user guide section: Periodic Tasks.

  Moved from `getting-started/periodic-tasks` and updated.

* tutorials/external moved to new section: "community".

* References have been added to all sections in the documentation.

  This makes it easier to link between documents.