- ================
- Change history
- ================
- 1.2.0 [xxxx-xx-xx xx:xx x.x xxxx]
- =================================
- Upgrading for Django-users
- --------------------------
- Django integration has been moved to a separate package: `django-celery`_.
- * To upgrade you need to install the `django-celery`_ module and change::
- INSTALLED_APPS = "celery"
- to::
- INSTALLED_APPS = "djcelery"
- * The following modules have been moved to `django-celery`_:
- ===================================== =====================================
- **Module name** **Replace with**
- ===================================== =====================================
- ``celery.models`` ``djcelery.models``
- ``celery.managers`` ``djcelery.managers``
- ``celery.views`` ``djcelery.views``
- ``celery.urls`` ``djcelery.urls``
- ``celery.management`` ``djcelery.management``
- ``celery.loaders.djangoapp`` ``djcelery.loaders``
- ``celery.backends.database`` ``djcelery.backends.database``
- ``celery.backends.cache`` ``djcelery.backends.cache``
- ===================================== =====================================
- Importing :mod:`djcelery` will automatically set up celery to use the Django
- loader by setting the :envvar:`CELERY_LOADER` environment variable (it won't
- change it if it's already defined).
- When the Django loader is used, the "database" and "cache" backend aliases
- will point to the :mod:`djcelery` backends instead of the built-in backends.
- .. _`django-celery`: http://pypi.python.org/pypi/django-celery
- Upgrading for others
- --------------------
- The database backend is now using `SQLAlchemy`_ instead of the Django ORM,
- see `Supported Databases`_ for a table of supported databases.
- The ``DATABASE_*`` settings have been replaced by a single setting:
- ``CELERY_RESULT_DBURI``. The value here should be an
- `SQLAlchemy Connection String`_, some examples include:
- .. code-block:: python
- # sqlite (filename)
- CELERY_RESULT_DBURI = "sqlite:///celerydb.sqlite"
- # mysql
- CELERY_RESULT_DBURI = "mysql://scott:tiger@localhost/foo"
- # postgresql
- CELERY_RESULT_DBURI = "postgresql://scott:tiger@localhost/mydatabase"
- # oracle
- CELERY_RESULT_DBURI = "oracle://scott:tiger@127.0.0.1:1521/sidname"
- See `SQLAlchemy Connection Strings`_ for more information about connection
- strings.
- To specify additional SQLAlchemy database engine options you can use
- the ``CELERY_RESULT_ENGINE_OPTIONS`` setting::
- # echo enables verbose logging from SQLAlchemy.
- CELERY_RESULT_ENGINE_OPTIONS = {"echo": True}
- .. _`SQLAlchemy`:
- http://www.sqlalchemy.org
- .. _`Supported Databases`:
- http://www.sqlalchemy.org/docs/dbengine.html#supported-databases
- .. _`SQLAlchemy Connection String`:
- http://www.sqlalchemy.org/docs/dbengine.html#create-engine-url-arguments
- .. _`SQLAlchemy Connection Strings`:
- http://www.sqlalchemy.org/docs/dbengine.html#create-engine-url-arguments
- Backward incompatible changes
- -----------------------------
- * The following deprecated settings have been removed (as scheduled by
- the `deprecation timeline`_):
- ===================================== =====================================
- **Setting name** **Replace with**
- ===================================== =====================================
- ``CELERY_AMQP_CONSUMER_QUEUES`` ``CELERY_QUEUES``
- ``CELERY_AMQP_EXCHANGE`` ``CELERY_DEFAULT_EXCHANGE``
- ``CELERY_AMQP_EXCHANGE_TYPE`` ``CELERY_DEFAULT_AMQP_EXCHANGE_TYPE``
- ``CELERY_AMQP_CONSUMER_ROUTING_KEY`` ``CELERY_QUEUES``
- ``CELERY_AMQP_PUBLISHER_ROUTING_KEY`` ``CELERY_DEFAULT_ROUTING_KEY``
- ===================================== =====================================
- .. _`deprecation timeline`:
- http://ask.github.com/celery/internals/deprecation.html
- * The ``celery.task.rest`` module has been removed, use :mod:`celery.task.http`
- instead (as scheduled by the `deprecation timeline`_).
- * It's no longer allowed to skip the class name in loader names.
- (as scheduled by the `deprecation timeline`_):
- The implicit ``Loader`` class name is no longer supported, so
- where you previously used e.g.::
- CELERY_LOADER = "myapp.loaders"
- You need to include the loader class name, like this::
- CELERY_LOADER = "myapp.loaders.Loader"
- News
- ----
- * Now depends on billiard >= 0.4.0
- * Added support for task soft and hard timelimits.
- New settings added:
- * CELERYD_TASK_TIME_LIMIT
- Hard time limit. The worker processing the task will be killed and
- replaced with a new one when this is exceeded.
- * CELERYD_SOFT_TASK_TIME_LIMIT
- Soft time limit. The ``celery.exceptions.SoftTimeLimitExceeded`` exception
- will be raised when this is exceeded. The task can catch this to
- e.g. clean up before the hard time limit is reached (see the sketch below).
- New command line arguments to celeryd added:
- ``--time-limit`` and ``--soft-time-limit``.
- What's left?
- This won't work on platforms not supporting signals (and specifically
- the ``SIGUSR1`` signal) yet, so as an alternative, the ability to disable
- the feature altogether on nonconforming platforms must be implemented.
- Also when the hard time limit is exceeded, the task result should
- be a ``TimeLimitExceeded`` exception.
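- A minimal sketch of catching the soft limit, as mentioned above
- (``do_work`` and ``cleanup`` are hypothetical helpers, and the limit
- values shown are arbitrary):
- .. code-block:: python
-     from celery.decorators import task
-     from celery.exceptions import SoftTimeLimitExceeded
-     # in the configuration (values are arbitrary):
-     CELERYD_TASK_TIME_LIMIT = 120        # hard limit, in seconds
-     CELERYD_SOFT_TASK_TIME_LIMIT = 100   # soft limit, in seconds
-     @task
-     def process_upload(path):
-         try:
-             do_work(path)
-         except SoftTimeLimitExceeded:
-             # the hard limit hasn't been reached yet, so clean up
-             cleanup(path)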
- * celeryd now waits for available pool processes before applying new tasks to the pool.
- This means it doesn't have to wait for dozens of tasks to finish at shutdown
- because it applied n prefetched tasks at once.
- This adds some overhead for very short tasks, but then shutdown time
- probably doesn't matter either, so the feature can be disabled by the
- ``CELERYD_POOL_PUTLOCKS`` setting::
- CELERYD_POOL_PUTLOCKS = False
- See http://github.com/ask/celery/issues/#issue/122
- * Log output is now available in colors.
- ===================================== =====================================
- **Log level** **Color**
- ===================================== =====================================
- ``DEBUG`` Blue
- ``WARNING`` Yellow
- ``CRITICAL`` Magenta
- ``ERROR`` Red
- ===================================== =====================================
- This is only enabled when the log output is a tty.
- You can explicitly enable/disable this feature using the
- ``CELERYD_LOG_COLOR`` setting.
- * Added support for task router classes (like the Django multi-database routers).
- * New setting: CELERY_ROUTES
- This is a single router, or a list of routers to traverse when
- sending tasks. Dicts in this list convert to
- :class:`celery.routes.MapRoute` instances.
- Examples:
- >>> CELERY_ROUTES = {"celery.ping": "default",
-                      "mytasks.add": "cpu-bound",
-                      "video.encode": {
-                          "queue": "video",
-                          "exchange": "media",
-                          "routing_key": "media.video.encode"}}
- >>> CELERY_ROUTES = ("myapp.tasks.Router",
-                      {"celery.ping": "default"})
- Where ``myapp.tasks.Router`` could be:
- .. code-block:: python
-     class Router(object):
-         def route_for_task(self, task, task_id=None, args=None, kwargs=None):
-             if task == "celery.ping":
-                 return "default"
- ``route_for_task`` may return a string or a dict. A string then means
- it's a queue name in ``CELERY_QUEUES``, a dict means it's a custom route.
- When sending tasks, the routers are consulted in order. The first
- router that doesn't return ``None`` is the route to use. The message options
- are then merged with the found route settings, where the router's settings
- have priority.
- Example if :func:`~celery.execute.apply_async` has these arguments::
- >>> Task.apply_async(immediate=False, exchange="video",
- ... routing_key="video.compress")
- and a router returns::
- {"immediate": True,
- "exchange": "urgent"}
- the final message options will be::
- immediate=True, exchange="urgent", routing_key="video.compress"
- (and any default message options defined in the
- :class:`~celery.task.base.Task` class)
- * New Task handler called after the task returns:
- :meth:`~celery.task.base.Task.after_return`.
- * :class:`~celery.datastructures.ExceptionInfo` is now passed to
- :meth:`~celery.task.base.Task.on_retry`/
- :meth:`~celery.task.base.Task.on_failure` as the ``einfo`` keyword argument.
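- A hedged sketch of overriding these handlers; the signatures follow the
- handler descriptions above, and ``log_failure`` is a hypothetical helper:
- .. code-block:: python
-     from celery.task.base import Task
-     class MonitoredTask(Task):
-         def run(self, *args, **kwargs):
-             pass
-         def after_return(self, status, retval, task_id, args, kwargs, einfo=None):
-             # called after the task returns, regardless of outcome
-             pass
-         def on_failure(self, exc, task_id, args, kwargs, einfo=None):
-             # einfo wraps the exception and traceback (ExceptionInfo)
-             log_failure(task_id, exc, einfo)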
- * celeryd: Added ``CELERYD_MAX_TASKS_PER_CHILD`` /
- :option:`--maxtasksperchild`
- Defines the maximum number of tasks a pool worker can process before
- the process is terminated and replaced by a new one.
- * Revoked tasks are now marked with the state ``REVOKED``, and ``result.get()``
- will now raise :exc:`~celery.exceptions.TaskRevokedError`.
- * :func:`celery.task.control.ping` now works as expected.
- * ``apply(throw=True)`` / ``CELERY_EAGER_PROPAGATES_EXCEPTIONS``: Makes eager
- execution re-raise task errors.
- * New signal: :data:`~celery.signals.worker_process_init`: Sent inside the
- pool worker process at init.
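- A small sketch of connecting a handler (``setup_worker`` and its body
- are hypothetical):
- .. code-block:: python
-     from celery.signals import worker_process_init
-     def setup_worker(**kwargs):
-         # runs once in each pool worker process at init, e.g. to set
-         # up per-process resources such as database connections
-         pass
-     worker_process_init.connect(setup_worker)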
- * celeryd :option:`-Q` option: Ability to specify a list of queues to use,
- disabling other configured queues.
- For example, if ``CELERY_QUEUES`` defines four queues: ``image``, ``video``,
- ``data`` and ``default``, the following command would make celeryd only
- consume from the ``image`` and ``video`` queues::
- $ celeryd -Q image,video
- * :mod:`celeryd-multi <celery.bin.celeryd_multi>`: Tool for shell scripts
- to start multiple workers.
- Some examples::
- # Advanced example with 10 workers:
- #   * Three of the workers process the images and video queue
- #   * Two of the workers process the data queue with loglevel DEBUG
- #   * The rest process the default queue.
- $ celeryd-multi start 10 -l INFO -Q:1-3 images,video -Q:4,5 data
-     -Q default -L:4,5 DEBUG
- # get commands to start 3 workers, with 3 processes each
- $ celeryd-multi start 3 -c 3
- celeryd -n celeryd1.myhost -c 3
- celeryd -n celeryd2.myhost -c 3
- celeryd -n celeryd3.myhost -c 3
- # start 3 named workers
- $ celeryd-multi start image video data -c 3
- celeryd -n image.myhost -c 3
- celeryd -n video.myhost -c 3
- celeryd -n data.myhost -c 3
- # specify custom hostname
- $ celeryd-multi start 2 -n worker.example.com -c 3
- celeryd -n celeryd1.worker.example.com -c 3
- celeryd -n celeryd2.worker.example.com -c 3
- # Additional options are added to each celeryd,
- # but you can also modify the options for ranges of or single workers
- # 3 workers: Two with 3 processes, and one with 10 processes.
- $ celeryd-multi start 3 -c 3 -c:1 10
- celeryd -n celeryd1.myhost -c 10
- celeryd -n celeryd2.myhost -c 3
- celeryd -n celeryd3.myhost -c 3
- # can also specify options for named workers
- $ celeryd-multi start image video data -c 3 -c:image 10
- celeryd -n image.myhost -c 10
- celeryd -n video.myhost -c 3
- celeryd -n data.myhost -c 3
- # ranges and lists of workers in options are also allowed:
- # (-c:1-3 can also be written as -c:1,2,3)
- $ celeryd-multi start 5 -c 3 -c:1-3 10
- celeryd -n celeryd1.myhost -c 10
- celeryd -n celeryd2.myhost -c 10
- celeryd -n celeryd3.myhost -c 10
- celeryd -n celeryd4.myhost -c 3
- celeryd -n celeryd5.myhost -c 3
- # lists also work with named workers
- $ celeryd-multi start foo bar baz xuzzy -c 3 -c:foo,bar,baz 10
- celeryd -n foo.myhost -c 10
- celeryd -n bar.myhost -c 10
- celeryd -n baz.myhost -c 10
- celeryd -n xuzzy.myhost -c 3
- 1.0.4 [2010-05-31 09:54 A.M CEST]
- =================================
- Critical
- --------
- * SIGINT/Ctrl+C killed the pool, abruptly terminating the currently executing
- tasks.
- Fixed by making the pool worker processes ignore :const:`SIGINT`.
- * Should not close the consumers before the pool is terminated, just cancel the consumers.
- Issue #122. http://github.com/ask/celery/issues/issue/122
- * Now depends on :mod:`billiard` >= 0.3.1
- Changes
- -------
- * :mod:`celery.contrib.abortable`: Abortable tasks.
- Tasks that define steps of execution; the task can then
- be aborted after each step has completed.
- * Added required RPM package names under ``[bdist_rpm]`` section, to support building RPMs
- from the sources using setup.py
- * Running unittests: :envvar:`NOSE_VERBOSE` environment var now enables verbose output from Nose.
- * :func:`celery.execute.apply`: Pass logfile/loglevel arguments as task kwargs.
- Issue #110 http://github.com/ask/celery/issues/issue/110
- * celery.execute.apply: Should return exception, not :class:`~celery.datastructures.ExceptionInfo`
- on error.
- Issue #111 http://github.com/ask/celery/issues/issue/111
- * Added new entries to the :doc:`FAQs <faq>`:
- * Should I use retry or acks_late?
- * Can I execute a task by name?
- 1.0.3 [2010-05-15 03:00 P.M CEST]
- =================================
- Important notes
- ---------------
- * Messages are now acked *just before* the task function is executed.
- This is the behavior we've wanted all along, but couldn't have because of
- limitations in the multiprocessing module.
- The previous behavior was not good, and the situation worsened with the
- release of 1.0.1, so this change will definitely improve
- reliability, performance and operations in general.
- For more information please see http://bit.ly/9hom6T
- * Database result backend: result now explicitly sets ``null=True`` as
- ``django-picklefield`` version 0.1.5 changed the default behavior
- right under our noses :(
- See: http://bit.ly/d5OwMr
- This means those who created their celery tables (via syncdb or
- celeryinit) with picklefield versions >= 0.1.5 has to alter their tables to
- allow the result field to be ``NULL`` manually.
- MySQL::
- ALTER TABLE celery_taskmeta MODIFY result TEXT NULL
- * Removed ``Task.rate_limit_queue_type``, as it was not really useful
- and made it harder to refactor some parts.
- * Now depends on carrot >= 0.10.4
- * Now depends on billiard >= 0.3.0
- News
- ----
- * AMQP backend: Added timeout support for ``result.get()`` /
- ``result.wait()``.
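- A usage sketch (``mytask`` is a hypothetical task; the timeout is assumed
- to raise :exc:`celery.exceptions.TimeoutError` if no result arrives in time):
- .. code-block:: python
-     >>> result = mytask.delay()
-     >>> result.get(timeout=5)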
- * New task option: ``Task.acks_late`` (default: ``CELERY_ACKS_LATE``)
- Late ack means the task messages will be acknowledged **after** the task
- has been executed, not *just before*, which is the default behavior.
- Note that this means the tasks may be executed twice if the worker
- crashes in the middle of their execution. Not acceptable for most
- applications, but desirable for others.
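- A sketch of enabling late acks for a single task (``ProcessPayment`` is
- a hypothetical task):
- .. code-block:: python
-     from celery.task.base import Task
-     class ProcessPayment(Task):
-         # ack only after execution, so a worker crash mid-task
-         # makes the broker redeliver the message
-         acks_late = True
-         def run(self, order_id, **kwargs):
-             pass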
- * Added crontab-like scheduling to periodic tasks.
- Like a cron job, you can specify units of time of when
- you would like the task to execute. While not a full implementation
- of cron's features, it should provide a fair degree of common scheduling
- needs.
- You can specify a minute (0-59), an hour (0-23), and/or a day of the
- week (0-6 where 0 is Sunday, or by names: sun, mon, tue, wed, thu, fri,
- sat).
- Examples:
- .. code-block:: python
-     from celery.task.schedules import crontab
-     from celery.decorators import periodic_task
-     @periodic_task(run_every=crontab(hour=7, minute=30))
-     def every_morning():
-         print("Runs every morning at 7:30 a.m.")
-     @periodic_task(run_every=crontab(hour=7, minute=30, day_of_week="mon"))
-     def every_monday_morning():
-         print("Runs every Monday morning at 7:30 a.m.")
-     @periodic_task(run_every=crontab(minute=30))
-     def every_hour():
-         print("Runs every hour on the half-hour, e.g. 1:30, 2:30, 3:30 etc.")
- Note that this is a late addition. While we have unittests, due to the
- nature of this feature we haven't been able to completely test this
- in practice, so consider this experimental.
- * ``TaskPool.apply_async``: Now supports the ``accept_callback`` argument.
- * ``apply_async``: Now raises :exc:`ValueError` if task args is not a list,
- or kwargs is not a dict (http://github.com/ask/celery/issues/issue/95).
- * ``Task.max_retries`` can now be ``None``, which means it will retry forever.
- * Celerybeat: Now reuses the same connection when publishing large
- sets of tasks.
- * Modified the task locking example in the documentation to use
- ``cache.add`` for atomic locking.
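- A condensed sketch of the ``cache.add`` idiom (Django's cache API;
- ``lock_id`` would be derived from the task's arguments):
- .. code-block:: python
-     from django.core.cache import cache
-     LOCK_EXPIRE = 60 * 5  # lock expires after five minutes
-     def acquire_lock(lock_id):
-         # cache.add stores the key only if it doesn't already exist,
-         # and reports whether it did -- an atomic test-and-set
-         return cache.add(lock_id, "true", LOCK_EXPIRE)
-     def release_lock(lock_id):
-         cache.delete(lock_id)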
- * Added experimental support for a *started* status on tasks.
- If ``Task.track_started`` is enabled the task will report its status
- as "started" when the task is executed by a worker.
- The default value is ``False`` as the normal behaviour is to not
- report that level of granularity. Tasks are either pending, finished,
- or waiting to be retried. Having a "started" status can be useful for
- when there are long running tasks and there is a need to report which
- task is currently running.
- The global default can be overridden by the ``CELERY_TRACK_STARTED``
- setting.
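- A minimal sketch of enabling it for one task (``ImportFeed`` is a
- hypothetical task):
- .. code-block:: python
-     from celery.task.base import Task
-     class ImportFeed(Task):
-         # report the "started" state when a worker begins execution
-         track_started = True
-         def run(self, feed_url, **kwargs):
-             pass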
- * User Guide: New section ``Tips and Best Practices``.
- Contributions welcome!
- Remote control commands
- -----------------------
- * Remote control commands can now send replies back to the caller.
- Existing commands have been improved to send replies, and the client
- interface in ``celery.task.control`` has new keyword arguments: ``reply``,
- ``timeout`` and ``limit``, where ``reply`` means it will wait for replies,
- ``timeout`` is the time in seconds to stop waiting for replies, and
- ``limit`` is the maximum number of replies to get.
- By default, it will wait for as many replies as possible for one second.
- * rate_limit(task_name, destination=all, reply=False, timeout=1, limit=0)
- Worker returns ``{"ok": message}`` on success,
- or ``{"failure": message}`` on failure.
- >>> from celery.task.control import rate_limit
- >>> rate_limit("tasks.add", "10/s", reply=True)
- [{'worker1': {'ok': 'new rate limit set successfully'}},
- {'worker2': {'ok': 'new rate limit set successfully'}}]
- * ping(destination=all, reply=False, timeout=1, limit=0)
- Worker returns the simple message ``"pong"``.
- >>> from celery.task.control import ping
- >>> ping(reply=True)
- [{'worker1': 'pong'},
- {'worker2': 'pong'}]
- * revoke(destination=all, reply=False, timeout=1, limit=0)
- Worker simply returns ``True``.
- >>> from celery.task.control import revoke
- >>> revoke("419e46eb-cf6a-4271-86a8-442b7124132c", reply=True)
- [{'worker1': True},
- {'worker2': True}]
- * You can now add your own remote control commands!
- Remote control commands are functions registered in the command
- registry. Registering a command is done using
- :meth:`celery.worker.control.Panel.register`:
- .. code-block:: python
-     from celery.worker.control import Panel
-     @Panel.register
-     def reset_broker_connection(panel, **kwargs):
-         panel.listener.reset_connection()
-         return {"ok": "connection re-established"}
- With this module imported in the worker, you can launch the command
- using ``celery.task.control.broadcast``::
- >>> from celery.task.control import broadcast
- >>> broadcast("reset_broker_connection", reply=True)
- [{'worker1': {'ok': 'connection re-established'}},
- {'worker2': {'ok': 'connection re-established'}}]
- **TIP** You can choose the worker(s) to receive the command
- by using the ``destination`` argument::
- >>> broadcast("reset_broker_connection", destination=["worker1"])
- [{'worker1': {'ok': 'connection re-established'}}]
- * New remote control command: ``dump_reserved``
- Dumps tasks reserved by the worker, waiting to be executed::
- >>> from celery.task.control import broadcast
- >>> broadcast("dump_reserved", reply=True)
- [{'myworker1': [<TaskWrapper ....>]}]
- * New remote control command: ``dump_schedule``
- Dumps the worker's currently registered ETA schedule.
- These are tasks with an ``eta`` (or ``countdown``) argument
- waiting to be executed by the worker.
- >>> from celery.task.control import broadcast
- >>> broadcast("dump_schedule", reply=True)
- [{'w1': []},
- {'w3': []},
- {'w2': ['0. 2010-05-12 11:06:00 pri0 <TaskWrapper:
- {name:"opalfeeds.tasks.refresh_feed_slice",
- id:"95b45760-4e73-4ce8-8eac-f100aa80273a",
- args:"(<Feeds freq_max:3600 freq_min:60
- start:2184.0 stop:3276.0>,)",
- kwargs:"{'page': 2}"}>']},
- {'w4': ['0. 2010-05-12 11:00:00 pri0 <TaskWrapper:
- {name:"opalfeeds.tasks.refresh_feed_slice",
- id:"c053480b-58fb-422f-ae68-8d30a464edfe",
- args:"(<Feeds freq_max:3600 freq_min:60
- start:1092.0 stop:2184.0>,)",
- kwargs:"{\'page\': 1}"}>',
- '1. 2010-05-12 11:12:00 pri0 <TaskWrapper:
- {name:"opalfeeds.tasks.refresh_feed_slice",
- id:"ab8bc59e-6cf8-44b8-88d0-f1af57789758",
- args:"(<Feeds freq_max:3600 freq_min:60
- start:3276.0 stop:4365>,)",
- kwargs:"{\'page\': 3}"}>']}]
- Fixes
- -----
- * Mediator thread no longer blocks for more than 1 second.
- With rate limits enabled and when there was a lot of remaining time,
- the mediator thread could block shutdown (and potentially block other
- jobs from coming in).
- * Remote rate limits were not properly applied
- (http://github.com/ask/celery/issues/issue/98)
- * Now handles exceptions with unicode messages correctly in
- ``TaskWrapper.on_failure``.
- * Database backend: ``TaskMeta.result``: default value should be ``None``
- not empty string.
- 1.0.2 [2010-03-31 12:50 P.M CET]
- ================================
- * Deprecated: ``CELERY_BACKEND``, please use ``CELERY_RESULT_BACKEND``
- instead.
- * We now use a custom logger in tasks. This logger supports task magic
- keyword arguments in formats.
- The default format for tasks (``CELERYD_TASK_LOG_FORMAT``) now includes
- the id and the name of tasks so the origin of task log messages can
- easily be traced.
- Example output::
- [2010-03-25 13:11:20,317: INFO/PoolWorker-1]
- [tasks.add(a6e1c5ad-60d9-42a0-8b24-9e39363125a4)] Hello from add
- To revert to the previous behavior you can set::
-     CELERYD_TASK_LOG_FORMAT = """
-         [%(asctime)s: %(levelname)s/%(processName)s] %(message)s
-     """.strip()
- * Unittests: Don't disable the Django test database teardown;
- instead fixed the underlying issue, which was caused by modifications
- to the ``DATABASE_NAME`` setting (http://github.com/ask/celery/issues/82).
- * Django Loader: New config ``CELERY_DB_REUSE_MAX`` (max number of tasks
- to reuse the same database connection)
- The default is to use a new connection for every task.
- We would very much like to reuse the connection, but a safe number of
- reuses is not known, and we don't have any way to handle the errors
- that might happen, which may even be database dependent.
- See: http://bit.ly/94fwdd
- * celeryd: The worker components are now configurable: ``CELERYD_POOL``,
- ``CELERYD_LISTENER``, ``CELERYD_MEDIATOR``, and ``CELERYD_ETA_SCHEDULER``.
- The default configuration is as follows:
- .. code-block:: python
-     CELERYD_POOL = "celery.worker.pool.TaskPool"
-     CELERYD_MEDIATOR = "celery.worker.controllers.Mediator"
-     CELERYD_ETA_SCHEDULER = "celery.worker.controllers.ScheduleController"
-     CELERYD_LISTENER = "celery.worker.listener.CarrotListener"
- The ``CELERYD_POOL`` setting makes it easy to swap out the multiprocessing
- pool with a threaded pool, or how about a twisted/eventlet pool?
- Consider the competition for the first pool plug-in started!
- * Debian init scripts: Use ``-a`` not ``&&``
- (http://github.com/ask/celery/issues/82).
- * Debian init scripts: Now always preserve ``$CELERYD_OPTS`` from the
- ``/etc/default/celeryd`` and ``/etc/default/celerybeat`` files.
- * celery.beat.Scheduler: Fixed a bug where the schedule was not properly
- flushed to disk if the schedule had not been properly initialized.
- * celerybeat: Now syncs the schedule to disk when receiving the ``SIGTERM``
- and ``SIGINT`` signals.
- * Control commands: Make sure keyword arguments are not in unicode.
- * ETA scheduler: Was missing a logger object, so the scheduler crashed
- when trying to log that a task had been revoked.
- * management.commands.camqadm: Fixed typo ``camqpadm`` -> ``camqadm``
- (http://github.com/ask/celery/issues/83).
- * PeriodicTask.delta_resolution: Was not working for days and hours, now fixed
- by rounding to the nearest day/hour.
- * Fixed a potential infinite loop in ``BaseAsyncResult.__eq__``, although
- there is no evidence that it has ever been triggered.
- * celeryd: Now handles messages with encoding problems by acking them and
- emitting an error message.
- 1.0.1 [2010-02-24 07:05 P.M CET]
- ================================
- * Tasks are now acknowledged early instead of late.
- This is done because messages can only be acked within the same
- connection channel, so if the connection is lost we would have to refetch
- the message again to acknowledge it.
- This might or might not affect you, but mostly those running tasks with a
- really long execution time are affected, as all tasks that have made it
- all the way into the pool need to be executed before the worker can
- safely terminate (this is at most the number of pool workers, multiplied
- by the ``CELERYD_PREFETCH_MULTIPLIER`` setting.)
- We multiply the prefetch count by default to increase the performance at
- times with bursts of tasks with a short execution time. If this doesn't
- apply to your use case, you should be able to set the prefetch multiplier
- to zero, without sacrificing performance.
- Please note that a patch to :mod:`multiprocessing` is currently being
- worked on, this patch would enable us to use a better solution, and is
- scheduled for inclusion in the ``1.2.0`` release.
- * celeryd now shuts down cleanly when receiving the ``TERM`` signal.
- * celeryd now does a cold shutdown if the ``INT`` signal is received (Ctrl+C),
- this means it tries to terminate as soon as possible.
- * Caching of results has been moved to the base backend classes, so there's
- no need to implement this functionality in each backend class.
- * Caches are now also limited in size, so their memory usage doesn't grow
- out of control.
-
- You can set the maximum number of results the cache
- can hold using the ``CELERY_MAX_CACHED_RESULTS`` setting (the default
- is five thousand results). In addition, you can refetch already retrieved
- results using ``backend.reload_task_result`` +
- ``backend.reload_taskset_result`` (that's for those who want to send
- results incrementally).
- * ``celeryd`` now works on Windows again.
- Note that if running with Django,
- you can't use ``project.settings`` as the settings module name, but the
- following should work::
- $ python manage.py celeryd --settings=settings
- * Execution: ``celery.messaging.TaskPublisher.send_task`` now
- incorporates all the functionality ``apply_async`` previously did.
-
- Like converting countdowns to eta, so :func:`celery.execute.apply_async` is
- now simply a convenient front-end to
- :meth:`celery.messaging.TaskPublisher.send_task`, using
- the task classes default options.
- Also :func:`celery.execute.send_task` has been
- introduced, which can apply tasks using just the task name (useful
- if the client does not have the destination task in its task registry).
- Example:
- >>> from celery.execute import send_task
- >>> result = send_task("celery.ping", args=[], kwargs={})
- >>> result.get()
- 'pong'
- * ``camqadm``: This is a new utility for command line access to the AMQP API.
- Excellent for deleting queues/bindings/exchanges, experimentation and
- testing::
- $ camqadm
- 1> help
- Gives an interactive shell, type ``help`` for a list of commands.
- When using Django, use the management command instead::
- $ python manage.py camqadm
- 1> help
- * Redis result backend: To conform to recent Redis API changes, the following
- settings have been deprecated:
- * ``REDIS_TIMEOUT``
- * ``REDIS_CONNECT_RETRY``
- These will emit a ``DeprecationWarning`` if used.
- A ``REDIS_PASSWORD`` setting has been added, so you can use the new
- simple authentication mechanism in Redis.
- * The redis result backend no longer calls ``SAVE`` when disconnecting,
- as this is apparently better handled by Redis itself.
- * If ``settings.DEBUG`` is on, celeryd now warns about the possible
- memory leak it can result in.
- * The ETA scheduler now sleeps at most two seconds between iterations.
- * The ETA scheduler now deletes any revoked tasks it might encounter.
- As revokes are not yet persistent, this is done to make sure the task
- is revoked even though it's currently being held because its eta is e.g.
- a week into the future.
- * The ``task_id`` argument is now respected even if the task is executed
- eagerly (either using apply, or ``CELERY_ALWAYS_EAGER``).
- * The internal queues are now cleared if the connection is reset.
- * New magic keyword argument: ``delivery_info``.
- Used by retry() to resend the task to its original destination using the same
- exchange/routing_key.
- * Events: Fields were not passed by ``.send()`` (fixes the UUID key errors
- in celerymon).
- * Added ``--schedule``/``-s`` option to celeryd, so it is possible to
- specify a custom schedule filename when using an embedded celerybeat
- server (the ``-B``/``--beat`` option).
- * Better Python 2.4 compatibility. The test suite now passes.
- * task decorators: Now preserve the docstring as ``cls.__doc__`` (it was
- previously copied to ``cls.run.__doc__``).
- * The ``testproj`` directory has been renamed to ``tests`` and we're now using
- ``nose`` + ``django-nose`` for test discovery, and ``unittest2`` for test
- cases.
- * New pip requirements files available in ``contrib/requirements``.
- * TaskPublisher: Declarations are now done once (per process).
- * Added ``Task.delivery_mode`` and the ``CELERY_DEFAULT_DELIVERY_MODE``
- setting.
- These can be used to mark messages non-persistent (i.e. so they are
- lost if the broker is restarted).
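- A hedged sketch, assuming the AMQP-style numeric delivery modes
- (1 = non-persistent, 2 = persistent):
- .. code-block:: python
-     # global default: task messages are lost if the broker restarts
-     CELERY_DEFAULT_DELIVERY_MODE = 1
-     # or per task class:
-     from celery.task.base import Task
-     class FireAndForget(Task):
-         delivery_mode = 1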
- * Now have our own ``ImproperlyConfigured`` exception, instead of using the
- Django one.
- * Improvements to the Debian init scripts: Show an error if the program is
- not executable. Do not modify ``CELERYD`` when using Django with
- virtualenv.
- 1.0.0 [2010-02-10 04:00 P.M CET]
- ================================
- BACKWARD INCOMPATIBLE CHANGES
- -----------------------------
- * Celery does not support detaching anymore, so you have to use the tools
- available on your platform, or something like supervisord to make
- celeryd/celerybeat/celerymon into background processes.
- We've had too many problems with celeryd daemonizing itself, so it was
- decided it has to be removed. Example startup scripts have been added to
- ``contrib/``:
- * Debian, Ubuntu, (start-stop-daemon)
- ``contrib/debian/init.d/celeryd``
- ``contrib/debian/init.d/celerybeat``
- * Mac OS X launchd
- ``contrib/mac/org.celeryq.celeryd.plist``
- ``contrib/mac/org.celeryq.celerybeat.plist``
- ``contrib/mac/org.celeryq.celerymon.plist``
- * Supervisord (http://supervisord.org)
- ``contrib/supervisord/supervisord.conf``
- In addition to ``--detach``, the following program arguments have been
- removed: ``--uid``, ``--gid``, ``--workdir``, ``--chroot``, ``--pidfile``,
- ``--umask``. All good daemonization tools should support equivalent
- functionality, so don't worry.
- Also the following configuration keys have been removed:
- ``CELERYD_PID_FILE``, ``CELERYBEAT_PID_FILE``, ``CELERYMON_PID_FILE``.
- * Default celeryd loglevel is now ``WARN``, to enable the previous log level
- start celeryd with ``--loglevel=INFO``.
- * Tasks are automatically registered.
- This means you no longer have to register your tasks manually.
- You don't have to change your old code right away, as it doesn't matter if
- a task is registered twice.
- If you don't want your task to be automatically registered you can set
- the ``abstract`` attribute:
- .. code-block:: python
-     class MyTask(Task):
-         abstract = True
- By using ``abstract``, only tasks subclassing this task will be automatically
- registered (this works like the Django ORM).
- If you don't want subclasses to be registered either, you can set the
- ``autoregister`` attribute to ``False``.
- Incidentally, this change also fixes the problems with automatic name
- assignment and relative imports. So you also don't have to specify a task name
- anymore if you use relative imports.
- * You can no longer use regular functions as tasks.
- This change was added
- because it makes the internals a lot more clean and simple. However, you can
- now turn functions into tasks by using the ``@task`` decorator:
- .. code-block:: python
-     from celery.decorators import task
-     @task
-     def add(x, y):
-         return x + y
- See the User Guide: :doc:`userguide/tasks` for more information.
- * The periodic task system has been rewritten to a centralized solution.
- This means ``celeryd`` no longer schedules periodic tasks by default,
- but a new daemon has been introduced: ``celerybeat``.
- To launch the periodic task scheduler you have to run celerybeat::
- $ celerybeat
- Make sure this is running on one server only, if you run it twice, all
- periodic tasks will also be executed twice.
- If you only have one worker server you can embed it into celeryd like this::
- $ celeryd --beat # Embed celerybeat in celeryd.
- * The supervisor has been removed.
- This means the ``-S`` and ``--supervised`` options to ``celeryd`` are
- no longer supported. Please use something like http://supervisord.org
- instead.
- * ``TaskSet.join`` has been removed, use ``TaskSetResult.join`` instead.
- * The task status ``"DONE"`` has been renamed to `"SUCCESS"`.
- * ``AsyncResult.is_done`` has been removed, use ``AsyncResult.successful``
- instead.
- * The worker no longer stores errors if ``Task.ignore_result`` is set, to
- revert to the previous behaviour set
- ``CELERY_STORE_ERRORS_EVEN_IF_IGNORED`` to ``True``.
- * The statistics functionality has been removed in favor of events,
- so the ``-S`` and ``--statistics`` switches have been removed.
- * The module ``celery.task.strategy`` has been removed.
- * ``celery.discovery`` has been removed, and its ``autodiscover`` function is
- now in ``celery.loaders.djangoapp``. Reason: Internal API.
- * ``CELERY_LOADER`` now needs loader class name in addition to module name,
- e.g. where you previously had: ``"celery.loaders.default"``, you now need
- ``"celery.loaders.default.Loader"``, using the previous syntax will result
- in a DeprecationWarning.
- * Detecting the loader is now lazy, and so is not done when importing
- ``celery.loaders``.
- To make this happen ``celery.loaders.settings`` has
- been renamed to ``load_settings`` and is now a function returning the
- settings object. ``celery.loaders.current_loader`` is now also
- a function, returning the current loader.
- So::
-     loader = current_loader
- needs to be changed to::
-     loader = current_loader()
- DEPRECATIONS
- ------------
- * The following configuration variables have been renamed and will be
- deprecated in v1.2:
- * CELERYD_DAEMON_LOG_FORMAT -> CELERYD_LOG_FORMAT
- * CELERYD_DAEMON_LOG_LEVEL -> CELERYD_LOG_LEVEL
- * CELERY_AMQP_CONNECTION_TIMEOUT -> CELERY_BROKER_CONNECTION_TIMEOUT
- * CELERY_AMQP_CONNECTION_RETRY -> CELERY_BROKER_CONNECTION_RETRY
- * CELERY_AMQP_CONNECTION_MAX_RETRIES -> CELERY_BROKER_CONNECTION_MAX_RETRIES
- * SEND_CELERY_TASK_ERROR_EMAILS -> CELERY_SEND_TASK_ERROR_EMAILS
- * The public API names in ``celery.conf`` have also changed to a consistent
- naming scheme.
- * We now support consuming from an arbitrary number of queues.
- To do this we had to rename the configuration syntax. If you use any of
- the custom AMQP routing options (queue/exchange/routing_key, etc), you
- should read the new FAQ entry: http://bit.ly/aiWoH.
- The previous syntax is deprecated and scheduled for removal in v1.2.
- * ``TaskSet.run`` has been renamed to ``TaskSet.apply_async``.
- ``TaskSet.run`` has now been deprecated, and is scheduled for
- removal in v1.2.
- NEWS
- ----
- * Rate limiting support (per task type, or globally).
- * New periodic task system.
- * Automatic registration.
- * New cool task decorator syntax.
- * celeryd now sends events if enabled with the ``-E`` argument.
- Excellent for monitoring tools, one is already in the making
- (http://github.com/ask/celerymon).
- Current events include: worker-heartbeat,
- task-[received/succeeded/failed/retried],
- worker-online, worker-offline.
- * You can now delete (revoke) tasks that have already been applied.
- * You can now set the hostname celeryd identifies as using the ``--hostname``
- argument.
- * Cache backend now respects ``CELERY_TASK_RESULT_EXPIRES``.
- * Message format has been standardized and now uses ISO-8601 format
- for dates instead of datetime.
- * ``celeryd`` now responds to the ``HUP`` signal by restarting itself.
- * Periodic tasks are now scheduled on the clock.
- I.e. ``timedelta(hours=1)`` means every hour at :00 minutes, not every
- hour counted from when the server starts. To revert to the previous
- behaviour you can set ``PeriodicTask.relative = True``.
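- A sketch of opting back into the old behaviour for a single task
- (``SyncFeeds`` is a hypothetical periodic task):
- .. code-block:: python
-     from datetime import timedelta
-     from celery.task import PeriodicTask
-     class SyncFeeds(PeriodicTask):
-         run_every = timedelta(hours=1)
-         relative = True  # measure the interval from startup, as before
-         def run(self, **kwargs):
-             pass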
- * Now supports passing execute options to a TaskSet's list of args, e.g.:
- >>> ts = TaskSet(add, [([2, 2], {}, {"countdown": 1}),
- ... ([4, 4], {}, {"countdown": 2}),
- ... ([8, 8], {}, {"countdown": 3})])
- >>> ts.run()
- * Got a 3x performance gain by setting the prefetch count to four times the
- concurrency, (from an average task round-trip of 0.1s to 0.03s!).
- A new setting has been added: ``CELERYD_PREFETCH_MULTIPLIER``, which
- is set to ``4`` by default.
- * Improved support for webhook tasks.
- ``celery.task.rest`` is now deprecated, replaced with the new and shiny
- :mod:`celery.task.http`. It has more reflective names, a sensible interface,
- and it's possible to override the methods used to perform HTTP requests.
- * The results of tasksets are now cached by storing them in the result
- backend.
- CHANGES
- -------
- * Now depends on carrot >= 0.8.1
- * New dependencies: billiard, python-dateutil, django-picklefield
- * No longer depends on python-daemon
- * The ``uuid`` distribution is added as a dependency when running Python 2.4.
- * Now remembers the previously detected loader by keeping it in
- the ``CELERY_LOADER`` environment variable.
- This may help on Windows where fork emulation is used.
- * ETA no longer sends datetime objects, but uses ISO 8601 date format in a
- string for better compatibility with other platforms.
- * No longer sends error mails for retried tasks.
- * Task can now override the backend used to store results.
- * Refactored the ExecuteWrapper; ``apply`` and ``CELERY_ALWAYS_EAGER`` now
- also execute the task callbacks and signals.
- * Now using a proper scheduler for the tasks with an ETA.
- This means waiting eta tasks are sorted by time, so we don't have
- to poll the whole list all the time.
- * Now also imports modules listed in CELERY_IMPORTS when running
- with Django (as documented).
- * Loglevel for stdout/stderr changed from INFO to ERROR.
- * ImportErrors are now properly propagated when autodiscovering tasks.
- * You can now use ``celery.messaging.establish_connection`` to establish a
- connection to the broker.
- * When running as a separate service the periodic task scheduler does some
- smart moves to not poll too regularly.
- If you need faster poll times you can lower the value
- of ``CELERYBEAT_MAX_LOOP_INTERVAL``.
- * You can now change periodic task intervals at runtime, by making
- ``run_every`` a property, or subclassing ``PeriodicTask.is_due``.
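- A hedged sketch of a runtime-adjustable interval (``get_interval()`` is
- a hypothetical lookup, e.g. reading a setting or a database row):
- .. code-block:: python
-     from datetime import timedelta
-     from celery.task import PeriodicTask
-     class AdaptiveTask(PeriodicTask):
-         def run(self, **kwargs):
-             pass
-         @property
-         def run_every(self):
-             # re-evaluated on every scheduler check
-             return timedelta(seconds=get_interval())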
- * The worker now supports control commands enabled through the use of a
- broadcast queue, you can remotely revoke tasks or set the rate limit for
- a task type. See :mod:`celery.task.control`.
- * The services now set informative process names (as shown in ``ps``
- listings) if the :mod:`setproctitle` module is installed.
- * :exc:`celery.exceptions.NotRegistered` now inherits from :exc:`KeyError`,
- and ``TaskRegistry.__getitem__`` + ``pop`` raise ``NotRegistered`` instead.
- * You can set the loader via the ``CELERY_LOADER`` environment variable.
- * You can now set ``CELERY_IGNORE_RESULT`` to ignore task results by default
- (if enabled, tasks don't save results or errors to the backend used).
- * celeryd now correctly handles malformed messages by throwing away and
- acknowledging the message, instead of crashing.
- BUGS
- ----
- * Fixed a race condition that could happen while storing task results in the
- database.
- DOCUMENTATION
- -------------
- * Reference now split into two sections; API reference and internal module
- reference.
- 0.8.4 [2010-02-05 01:52 P.M CEST]
- ---------------------------------
- * Now emits a warning if the --detach argument is used.
- ``--detach`` should not be used anymore, as it has several bugs related
- to it that are not easily fixed. Instead, use something like
- start-stop-daemon, supervisord or launchd (OS X).
- * Make sure logger class is process aware, even if running Python >= 2.6.
- * Error e-mails are not sent anymore when the task is retried.
- 0.8.3 [2009-12-22 09:43 A.M CEST]
- ---------------------------------
- * Fixed a possible race condition that could happen when storing/querying
- task results using the database backend.
- * Now has console script entry points in the setup.py file, so tools like
- buildout will correctly install the programs celerybin and celeryinit.
- 0.8.2 [2009-11-20 03:40 P.M CEST]
- ---------------------------------
- * QOS Prefetch count was not applied properly, as it was set for every message
- received (which apparently behaves like "receive one more"), instead of only
- being set when our wanted value changed.
- 0.8.1 [2009-11-16 05:21 P.M CEST]
- =================================
- VERY IMPORTANT NOTE
- -------------------
- This release (with carrot 0.8.0) enables AMQP QoS (quality of service), which
- means the workers will only receive as many messages as they can handle at a
- time. As with any release, you should test this version upgrade on your
- development servers before rolling it out to production!
- IMPORTANT CHANGES
- -----------------
- * If you're using Python < 2.6 and you use the multiprocessing backport, then
- multiprocessing version 2.6.2.1 is required.
- * All AMQP_* settings have been renamed to BROKER_*, and in addition
- AMQP_SERVER has been renamed to BROKER_HOST, so before where you had::
- AMQP_SERVER = "localhost"
- AMQP_PORT = 5678
- AMQP_USER = "myuser"
- AMQP_PASSWORD = "mypassword"
- AMQP_VHOST = "celery"
- You need to change that to::
- BROKER_HOST = "localhost"
- BROKER_PORT = 5678
- BROKER_USER = "myuser"
- BROKER_PASSWORD = "mypassword"
- BROKER_VHOST = "celery"
- * Custom carrot backends now need to include the backend class name, so before
- where you had::
- CARROT_BACKEND = "mycustom.backend.module"
- you need to change it to::
- CARROT_BACKEND = "mycustom.backend.module.Backend"
- where ``Backend`` is the class name. This is probably ``"Backend"``, as
- that was the previously implied name.
- * New version requirement for carrot: 0.8.0
- CHANGES
- -------
- * Incorporated the multiprocessing backport patch that fixes the
- ``processName`` error.
- * Ignore the results of PeriodicTasks by default.
- * Added a Redis result store backend.
- * Allow /etc/default/celeryd to define additional options for the celeryd init
- script.
- * Fixed an issue with MongoDB periodic tasks when using a timezone other
- than UTC.
- * Windows specific: Negate test for available ``os.fork`` (thanks miracle2k).
- * Now tries to handle broken PID files.
- * Added a Django test runner to contrib that sets
- ``CELERY_ALWAYS_EAGER = True`` for testing with the database backend.
- * Added a ``CELERY_CACHE_BACKEND`` setting for using something other than
- the django-global cache backend.
- * Use custom implementation of ``functools.partial`` (curry) for Python 2.4
- support (there are probably still problems with running on 2.4, but it will
- eventually be supported).
- * Prepare exceptions for pickling when saving the RETRY status for all backends.
- * The SQLite "no concurrency" limit is now only in effect if the database
- backend is actually used.
- 0.8.0 [2009-09-22 03:06 P.M CEST]
- =================================
- BACKWARD INCOMPATIBLE CHANGES
- -----------------------------
- * Add traceback to result value on failure.
- **NOTE** If you use the database backend you have to re-create the
- database table ``celery_taskmeta``.
-
- Contact the mailing list or IRC channel listed in the README for help
- doing this.
- * Database tables are now only created if the database backend is used,
- so if you change back to the database backend at some point,
- be sure to initialize tables (django: ``syncdb``, python: ``celeryinit``).
- (Note: This is only the case when using Django 1.1 or higher)
- * Now depends on ``carrot`` version 0.6.0.
- * Now depends on python-daemon 1.4.8
- IMPORTANT CHANGES
- -----------------
- * Celery can now be used in pure Python (outside of a Django project).
- This means celery is no longer Django specific.
-
- For more information see the FAQ entry
- `Can I use celery without Django?`_.
- .. _`Can I use celery without Django?`:
- http://ask.github.com/celery/faq.html#can-i-use-celery-without-django
- * Celery now supports task retries.
- See `Cookbook: Retrying Tasks`_ for more information.
- .. _`Cookbook: Retrying Tasks`:
- http://ask.github.com/celery/cookbook/task-retries.html
- * We now have an AMQP result store backend.
- It uses messages to publish task return value and status. And it's
- incredibly fast!
- See http://github.com/ask/celery/issues/closed#issue/6 for more info!
- * AMQP QoS (prefetch count) implemented:
- This ensures we don't receive more messages than we can handle.
- * Now redirects stdout/stderr to the celeryd logfile when detached.
- * Now uses ``inspect.getargspec`` to only pass the default arguments
- the task supports.
- * Add Task.on_success, .on_retry, .on_failure handlers
- See :meth:`celery.task.base.Task.on_success`,
- :meth:`celery.task.base.Task.on_retry`,
- :meth:`celery.task.base.Task.on_failure`.
- * ``celery.utils.gen_unique_id``: Workaround for
- http://bugs.python.org/issue4607
- * You can now customize what happens at worker start, at process init, etc.,
- by creating your own loaders (see :mod:`celery.loaders.default`,
- :mod:`celery.loaders.djangoapp`, :mod:`celery.loaders`).
- * Support for multiple AMQP exchanges and queues.
- This feature is missing documentation and tests, so anyone interested
- is encouraged to improve this situation.
- * celeryd now survives a restart of the AMQP server!
- It automatically re-establishes the AMQP broker connection if it's lost.
- New settings:
- * AMQP_CONNECTION_RETRY
- Set to ``True`` to enable connection retries.
- * AMQP_CONNECTION_MAX_RETRIES
- Maximum number of retries before we give up. Default: ``100``.
- NEWS
- ----
- * Fix an incompatibility between python-daemon and multiprocessing,
- which resulted in the ``[Errno 10] No child processes`` problem when
- detaching.
- * Fixed a possible DjangoUnicodeDecodeError being raised when saving pickled
- data to Django's memcached cache backend.
- * Better Windows compatibility.
- * New version of the pickled field (taken from
- http://www.djangosnippets.org/snippets/513/)
- * New signals introduced: ``task_sent``, ``task_prerun`` and
- ``task_postrun``, see :mod:`celery.signals` for more information.
- * ``TaskSetResult.join`` caused ``TypeError`` when ``timeout=None``.
- Thanks Jerzy Kozera. Closes #31
- * ``views.apply`` should return ``HttpResponse`` instance.
- Thanks to Jerzy Kozera. Closes #32
- * ``PeriodicTask``: Save conversion of ``run_every`` from ``int``
- to ``timedelta`` to the class attribute instead of on the instance.
- * Exceptions have been moved to ``celery.exceptions``, but are still
- available in the previous module.
- * Try to rollback transaction and retry saving result if an error happens
- while setting task status with the database backend.
- * ``jail()`` refactored into :class:`celery.execute.ExecuteWrapper`.
- * ``views.apply`` now correctly sets mimetype to "application/json"
- * ``views.task_status`` now returns exception if status is RETRY
- * ``views.task_status`` now returns traceback if status is "FAILURE"
- or "RETRY"
- * Documented default task arguments.
- * Add a sensible __repr__ to ExceptionInfo for easier debugging
- * Fix documentation typo ``.. import map`` -> ``.. import dmap``.
- Thanks mikedizon
- 0.6.0 [2009-08-07 06:54 A.M CET]
- ================================
- IMPORTANT CHANGES
- -----------------
- * Fixed a bug where tasks raising unpickleable exceptions crashed pool
- workers. So if you've had pool workers mysteriously disappearing, or
- problems with celeryd stopping working, this has been fixed in this
- version.
- * Fixed a race condition with periodic tasks.
- * The task pool is now supervised, so if a pool worker crashes,
- goes away or stops responding, it is automatically replaced with
- a new one.
- * Task.name is now automatically generated out of class module+name, e.g.
- ``"djangotwitter.tasks.UpdateStatusesTask"``. Very convenient. No idea why
- we didn't do this before. Some documentation is updated to not manually
- specify a task name.
- NEWS
- ----
- * Tested with Django 1.1
- * New Tutorial: Creating a click counter using carrot and celery
- * Database entries for periodic tasks are now created at ``celeryd``
- startup instead of for each check (which has been a forgotten TODO/XXX
- in the code for a long time)
- * New settings variable: ``CELERY_TASK_RESULT_EXPIRES``
- Time (in seconds, or a `datetime.timedelta` object) after which stored
- task results are deleted. For the moment this only works for the
- database backend.
- * ``celeryd`` now emits a debug log message for which periodic tasks
- have been launched.
- * The periodic task table is now locked for reading while getting
- periodic task status. (MySQL only so far, seeking patches for other
- engines)
- * A lot more debugging information is now available by turning on the
- ``DEBUG`` loglevel (``--loglevel=DEBUG``).
- * Functions/methods with a timeout argument now work correctly.
- * New: ``celery.strategy.even_time_distribution``:
- With an iterator yielding task args, kwargs tuples, evenly distribute
- the processing of its tasks throughout the time window available.
- * Log message ``Unknown task ignored...`` now has loglevel ``ERROR``
- * Log message ``"Got task from broker"`` is now emitted for all tasks, even if
- the task has an ETA (estimated time of arrival). Also the message now
- includes the ETA for the task (if any).
- * Acknowledgement now happens in the pool callback. Can't do ack in the job
- target, as it's not pickleable (can't share the AMQP connection, etc.).
- * Added note about .delay hanging in README
- * Tests now passing in Django 1.1
- * Fixed discovery to make sure app is in INSTALLED_APPS
- * Previously overridden pool behaviour (process reap, wait until pool worker
- available, etc.) is now handled by ``multiprocessing.Pool`` itself.
- * Convert statistics data to unicode for use as kwargs. Thanks Lucy!
- 0.4.1 [2009-07-02 01:42 P.M CET]
- ================================
- * Fixed a bug with parsing the message options (``mandatory``,
- ``routing_key``, ``priority``, ``immediate``)
- 0.4.0 [2009-07-01 07:29 P.M CET]
- ================================
- * Adds eager execution. ``celery.execute.apply``/``Task.apply`` executes the
- function blocking until the task is done; for API compatibility it
- returns a ``celery.result.EagerResult`` instance. You can configure
- celery to always run tasks locally by setting the
- ``CELERY_ALWAYS_EAGER`` setting to ``True``.
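-
-   A one-line sketch of the setting::
-
-       # settings.py
-       CELERY_ALWAYS_EAGER = True  # run every task locally and block
-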
- * Now depends on ``anyjson``.
- * 99% coverage using python ``coverage`` 3.0.
- 0.3.20 [2009-06-25 08:42 P.M CET]
- =================================
- * New arguments to ``apply_async`` (the advanced version of
- ``delay_task``): ``countdown`` and ``eta``;
- >>> # Run 10 seconds into the future.
- >>> res = apply_async(MyTask, countdown=10)
-
- >>> # Run 1 day from now.
- >>> res = apply_async(MyTask,
- ...                   eta=datetime.now() + timedelta(days=1))
- * Now unlinks the pidfile if it's stale.
- * Lots more tests.
- * Now compatible with carrot >= 0.5.0.
- * **IMPORTANT** The ``subtask_ids`` attribute on the ``TaskSetResult``
- instance has been removed. To get this information instead use:
- >>> subtask_ids = [subtask.task_id for subtask in ts_res.subtasks]
- * ``TaskSet.run()`` now respects extra message options from the task class.
- * Task: Added attribute ``ignore_result``: Don't store the status and
- return value. This means you can't use the
- ``celery.result.AsyncResult`` to check if the task is
- done or get its return value. Only use this if you need the performance
- and are able to live without these features. Any exceptions raised will
- store the return value/status as usual.
- * Task: Added attribute ``disable_error_emails`` to disable sending error
- emails for that task.
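-
-   Both attributes are set on the task class; a sketch (the class name and
-   body are hypothetical)::
-
-       from celery.task.base import Task
-
-       class FireAndForgetTask(Task):
-           ignore_result = True          # don't store status/return value
-           disable_error_emails = True   # don't send error e-mails
-
-           def run(self, **kwargs):
-               pass
-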
- * Should now work on Windows (although running in the background won't
- work, so using the ``--detach`` argument results in an exception
- being raised.)
- * Added support for statistics for profiling and monitoring.
- To start sending statistics start ``celeryd`` with the
- ``--statistics`` option. Then after a while you can dump the results
- by running ``python manage.py celerystats``. See
- ``celery.monitoring`` for more information.
- * The celery daemon can now be supervised (i.e., it is automatically
- restarted if it crashes). To use this, start celeryd with the
- ``--supervised`` option (or alternatively ``-S``).
- * views.apply: View applying a task. Example::
- http://e.com/celery/apply/task_name/arg1/arg2//?kwarg1=a&kwarg2=b
- **NOTE** Use with caution; don't make this publicly
- accessible without ensuring your code is safe!
- * Refactored ``celery.task``. It's now split into three modules:
- * celery.task
- Contains ``apply_async``, ``delay_task``, ``discard_all``, and task
- shortcuts, plus imports objects from ``celery.task.base`` and
- ``celery.task.builtins``
- * celery.task.base
- Contains task base classes: ``Task``, ``PeriodicTask``,
- ``TaskSet``, ``AsynchronousMapTask``, ``ExecuteRemoteTask``.
- * celery.task.builtins
- Built-in tasks: ``PingTask``, ``DeleteExpiredTaskMetaTask``.
- 0.3.7 [2009-06-16 11:41 P.M CET]
- ================================
- * **IMPORTANT** Now uses AMQP's ``basic.consume`` instead of
- ``basic.get``. This means we're no longer polling the broker for
- new messages.
- * **IMPORTANT** Default concurrency limit is now set to the number of CPUs
- available on the system.
- * **IMPORTANT** ``tasks.register``: Renamed ``task_name`` argument to
- ``name``, so
- >>> tasks.register(func, task_name="mytask")
- has to be replaced with:
- >>> tasks.register(func, name="mytask")
- * The daemon now correctly runs if the pidlock is stale.
- * Now compatible with carrot 0.4.5
- * Default AMQP connection timeout is now 4 seconds.
- * Fixed ``AsyncResult.read()`` always returning ``True``.
- * Only use the README as ``long_description`` if the file exists, so
- ``easy_install`` doesn't break.
- * ``celery.views``: JSON responses now properly set their mime-type.
- * ``apply_async`` now has a ``connection`` keyword argument so you
- can re-use the same AMQP connection if you want to execute
- more than one task.
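-
-   A sketch of re-using one connection (assuming carrot's
-   ``DjangoAMQPConnection`` as the connection class, and hypothetical
-   ``MyTask``/``OtherTask`` task classes)::
-
-       from carrot.connection import DjangoAMQPConnection
-       from celery.task import apply_async
-
-       connection = DjangoAMQPConnection()
-       r1 = apply_async(MyTask, connection=connection)     # the same
-       r2 = apply_async(OtherTask, connection=connection)  # connection is re-used
-       connection.close()
-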
- * Handle failures in task_status view such that it won't throw 500s.
- * Fixed typo ``AMQP_SERVER`` in documentation to ``AMQP_HOST``.
- * Worker exception e-mails sent to admins now work properly.
- * No longer depends on ``django``, so installing ``celery`` won't affect
- the preferred Django version installed.
- * Now works with PostgreSQL (psycopg2) again by registering the
- ``PickledObject`` field.
- * ``celeryd``: Added ``--detach`` option as an alias to ``--daemon``, and
- it's the term used in the documentation from now on.
- * Make sure the pool and the periodic task worker thread are terminated
- properly at exit (so ``Ctrl-C`` works again).
- * Now depends on ``python-daemon``.
- * Removed dependency on ``simplejson``.
- * Cache Backend: Re-establishes connection for every task process
- if the Django cache backend is memcached/libmemcached.
- * Tyrant Backend: Now re-establishes the connection for every task
- executed.
- 0.3.3 [2009-06-08 01:07 P.M CET]
- ================================
- * The ``PeriodicWorkController`` now sleeps for 1 second between checking
- for periodic tasks to execute.
- 0.3.2 [2009-06-08 01:07 P.M CET]
- ================================
- * celeryd: Added option ``--discard``: Discard (delete!) all waiting
- messages in the queue.
- * celeryd: The ``--wakeup-after`` option was not handled as a float.
- 0.3.1 [2009-06-08 01:07 P.M CET]
- ================================
- * The ``PeriodicTask`` worker is now running in its own thread instead
- of blocking the ``TaskController`` loop.
- * Default ``QUEUE_WAKEUP_AFTER`` has been lowered to ``0.1`` (was ``0.3``)
- 0.3.0 [2009-06-08 12:41 P.M CET]
- ================================
- **NOTE** This is a development version, for the stable release, please
- see versions 0.2.x.
- **VERY IMPORTANT:** Pickle is now the encoder used for serializing task
- arguments, so be sure to flush your task queue before you upgrade.
- * **IMPORTANT** ``TaskSet.run()`` now returns a ``celery.result.TaskSetResult``
- instance, which lets you inspect the status and return values of a
- taskset as if it was a single entity.
- * **IMPORTANT** Celery now depends on carrot >= 0.4.1.
- * The celery daemon now sends task errors to the registered admin e-mails.
- To turn off this feature, set ``SEND_CELERY_TASK_ERROR_EMAILS`` to
- ``False`` in your ``settings.py``. Thanks to Grégoire Cachet.
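-
-   That is, a one-line sketch for ``settings.py``::
-
-       SEND_CELERY_TASK_ERROR_EMAILS = False  # disable task error e-mails
-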
- * You can now run the celery daemon by using ``manage.py``::
- $ python manage.py celeryd
- Thanks to Grégoire Cachet.
- * Added support for message priorities, topic exchanges, custom routing
- keys for tasks. This means we have introduced
- ``celery.task.apply_async``, a new way of executing tasks.
- You can use ``celery.task.delay`` and ``celery.Task.delay`` like usual, but
- if you want greater control over the message sent, you want
- ``celery.task.apply_async`` and ``celery.Task.apply_async``.
- This also means the AMQP configuration has changed. Some settings have
- been renamed, while others are new::
- CELERY_AMQP_EXCHANGE
- CELERY_AMQP_PUBLISHER_ROUTING_KEY
- CELERY_AMQP_CONSUMER_ROUTING_KEY
- CELERY_AMQP_CONSUMER_QUEUE
- CELERY_AMQP_EXCHANGE_TYPE
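-
-   For example, in ``settings.py`` (the values are illustrative only)::
-
-       CELERY_AMQP_EXCHANGE = "tasks"
-       CELERY_AMQP_EXCHANGE_TYPE = "topic"
-       CELERY_AMQP_PUBLISHER_ROUTING_KEY = "task.regular"
-       CELERY_AMQP_CONSUMER_ROUTING_KEY = "task.#"
-       CELERY_AMQP_CONSUMER_QUEUE = "mytasks"
-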
- See the entry `Can I send some tasks to only some servers?`_ in the
- `FAQ`_ for more information.
- .. _`Can I send some tasks to only some servers?`:
- http://bit.ly/celery_AMQP_routing
- .. _`FAQ`: http://ask.github.com/celery/faq.html
- * Task errors are now logged using loglevel ``ERROR`` instead of ``INFO``,
- and backtraces are dumped. Thanks to Grégoire Cachet.
- * Make every new worker process re-establish its Django DB connection,
- solving the "MySQL connection died?" exceptions.
- Thanks to Vitaly Babiy and Jirka Vejrazka.
- * **IMPORTANT** Now using pickle to encode task arguments. This means you
- can now pass complex Python objects to tasks as arguments.
- * Removed dependency on ``yadayada``.
- * Added a FAQ, see ``docs/faq.rst``.
- * Now converts any unicode keys in task ``kwargs`` to regular strings.
- Thanks Vitaly Babiy.
- * Renamed the ``TaskDaemon`` to ``WorkController``.
- * ``celery.datastructures.TaskProcessQueue`` is now renamed to
- ``celery.pool.TaskPool``.
- * The pool algorithm has been refactored for greater performance and
- stability.
- 0.2.0 [2009-05-20 05:14 P.M CET]
- ================================
- * Final release of 0.2.0
- * Compatible with carrot version 0.4.0.
- * Fixes some syntax errors related to fetching results
- from the database backend.
- 0.2.0-pre3 [2009-05-20 05:14 P.M CET]
- =====================================
- * *Internal release*. Improved handling of unpickled exceptions,
- ``get_result`` now tries to recreate something looking like the
- original exception.
- 0.2.0-pre2 [2009-05-20 01:56 P.M CET]
- =====================================
- * Now handles unpickleable exceptions (like the dynamically generated
- subclasses of ``django.core.exceptions.MultipleObjectsReturned``).
- 0.2.0-pre1 [2009-05-20 12:33 P.M CET]
- =====================================
- * It's getting quite stable, with a lot of new features, so bump
- version to 0.2. This is a pre-release.
- * ``celery.task.mark_as_read()`` and ``celery.task.mark_as_failure()`` have
- been removed. Use ``celery.backends.default_backend.mark_as_read()``,
- and ``celery.backends.default_backend.mark_as_failure()`` instead.
- 0.1.15 [2009-05-19 04:13 P.M CET]
- =================================
- * The celery daemon was leaking AMQP connections; this should be fixed.
- If you have any problems with too many open files (like ``EMFILE``
- errors in ``rabbit.log``), please contact us!
- 0.1.14 [2009-05-19 01:08 P.M CET]
- =================================
- * Fixed a syntax error in the ``TaskSet`` class. (No such variable
- ``TimeOutError``).
- 0.1.13 [2009-05-19 12:36 P.M CET]
- =================================
- * Forgot to add ``yadayada`` to install requirements.
- * Now deletes all expired task results, not just those marked as done.
- * Able to load the Tokyo Tyrant backend class without Django
- configuration; tyrant settings can be specified directly in the class
- constructor.
- * Improved API documentation
- * Now using the Sphinx documentation system, you can build
- the html documentation by doing ::
- $ cd docs
- $ make html
- and the result will be in ``docs/.build/html``.
- 0.1.12 [2009-05-18 04:38 P.M CET]
- =================================
- * ``delay_task()`` etc. now returns ``celery.task.AsyncResult`` object,
- which lets you check the result and any failure that might have
- happened. It kind of works like the ``multiprocessing.AsyncResult``
- class returned by ``multiprocessing.Pool.map_async``.
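-
-   A sketch (the task name and arguments are hypothetical; the API mirrors
-   ``multiprocessing.AsyncResult`` as noted above)::
-
-       >>> from celery.task import delay_task
-       >>> result = delay_task("refresh_feed", url="http://example.com/rss")
-       >>> result.ready()
-       False
-       >>> result.result  # the return value, once the worker has finished
-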
- * Added dmap() and dmap_async(). This works like the
- ``multiprocessing.Pool`` versions except they are tasks
- distributed to the celery server. Example:
- >>> from celery.task import dmap
- >>> import operator
- >>> dmap(operator.add, [[2, 2], [4, 4], [8, 8]])
- [4, 8, 16]
-
- >>> from celery.task import dmap_async
- >>> import operator
- >>> import time
- >>> result = dmap_async(operator.add, [[2, 2], [4, 4], [8, 8]])
- >>> result.ready()
- False
- >>> time.sleep(1)
- >>> result.ready()
- True
- >>> result.result
- [4, 8, 16]
- * Refactored the task metadata cache and database backends, and added
- a new backend for Tokyo Tyrant. You can set the backend in your Django
- settings file, e.g.::
-
-     CELERY_RESULT_BACKEND = "database"  # Uses the database
-     CELERY_RESULT_BACKEND = "cache"     # Uses the django cache framework
-     CELERY_RESULT_BACKEND = "tyrant"    # Uses Tokyo Tyrant
-
-     TT_HOST = "localhost"  # Hostname for the Tokyo Tyrant server.
-     TT_PORT = 6657         # Port of the Tokyo Tyrant server.
- 0.1.11 [2009-05-12 02:08 P.M CET]
- =================================
- * The logging system was leaking file descriptors, resulting in
- servers stopping with the ``EMFILE`` (too many open files) error. (fixed)
- 0.1.10 [2009-05-11 12:46 P.M CET]
- =================================
- * Tasks now support both positional arguments and keyword arguments.
- * Requires carrot 0.3.8.
- * The daemon now tries to reconnect if the connection is lost.
- 0.1.8 [2009-05-07 12:27 P.M CET]
- ================================
- * Better test coverage
- * More documentation
- * celeryd doesn't emit ``Queue is empty`` message if
- ``settings.CELERYD_EMPTY_MSG_EMIT_EVERY`` is 0.
- 0.1.7 [2009-04-30 1:50 P.M CET]
- ===============================
- * Added some unittests
- * Can now use the database for task metadata (like if the task has
- been executed or not). Set ``settings.CELERY_TASK_META``
- * Can now run ``python setup.py test`` to run the unittests from
- within the ``tests`` project.
- * Can set the AMQP exchange/routing key/queue using
- ``settings.CELERY_AMQP_EXCHANGE``, ``settings.CELERY_AMQP_ROUTING_KEY``,
- and ``settings.CELERY_AMQP_CONSUMER_QUEUE``.
- 0.1.6 [2009-04-28 2:13 P.M CET]
- ===============================
- * Introducing ``TaskSet``. A set of subtasks is executed and you can
- find out how many, or if all of them, are done (excellent for progress
- bars and such).
- * Now catches all exceptions when running ``Task.__call__``, so the
- daemon doesn't die. This doesn't happen for pure functions yet, only
- ``Task`` classes.
- * ``autodiscover()`` now works with zipped eggs.
- * celeryd: Now adds the current working directory to ``sys.path`` for
- convenience.
- * The ``run_every`` attribute of ``PeriodicTask`` classes can now be a
- ``datetime.timedelta()`` object.
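-
-   A sketch (the class name and interval are hypothetical; ``PeriodicTask``
-   is assumed importable from ``celery.task``, as elsewhere in this changelog)::
-
-       from datetime import timedelta
-       from celery.task import PeriodicTask
-
-       class RefreshCacheTask(PeriodicTask):
-           run_every = timedelta(minutes=30)
-
-           def run(self, **kwargs):
-               pass
-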
- * celeryd: You can now set the ``DJANGO_PROJECT_DIR`` variable
- for ``celeryd`` and it will add that to ``sys.path`` for easy launching.
- * Can now check if a task has been executed or not via HTTP.
- * You can do this by including the celery ``urls.py`` into your project,
- >>> url(r'^celery/', include("celery.urls"))
- then visiting the following URL::
- http://mysite/celery/$task_id/done/
- this will return a JSON dictionary, e.g.::
- {"task": {"id": $task_id, "executed": true}}
- * ``delay_task`` now returns string id, not ``uuid.UUID`` instance.
- * Now has ``PeriodicTasks``, for ``cron``-like functionality.
- * Project changed name from ``crunchy`` to ``celery``. The details of
- the name change request are in ``docs/name_change_request.txt``.
- 0.1.0 [2009-04-24 11:28 A.M CET]
- ================================
- * Initial release
|