==================================
Example using the Eventlet Pool
==================================

Introduction
============

This is a Celery application containing two example tasks.

First you need to install Eventlet. Installing the `dnspython` module is
also recommended, as it makes all name lookups asynchronous::

    $ pip install eventlet
    $ pip install dnspython

Before you run any of the example tasks you need to start celeryd::

    $ cd examples/eventlet
    $ celeryd -l info --concurrency=500 --pool=eventlet

As usual you need to have RabbitMQ running; see the Celery Getting Started
guide if you haven't installed it yet.
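
For orientation, a broker-URL based configuration might look roughly like
the one below. This is a hypothetical sketch, not a copy of the shipped
`celeryconfig.py`: the setting names follow the old Celery 2.x conventions,
and the URL assumes RabbitMQ's default `guest` account on localhost::

    # Point the worker at the local RabbitMQ broker (default credentials).
    BROKER_URL = "amqp://guest:guest@localhost:5672//"

    # Keep task results around so that .get() in the examples below works.
    CELERY_RESULT_BACKEND = "amqp"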

Tasks
=====

* `tasks.urlopen`

    This task makes an HTTP request to the given URL and returns the size
    of the response body; a sketch of a possible implementation follows
    this list::

        $ cd examples/eventlet
        $ python
        >>> from tasks import urlopen
        >>> urlopen.delay("http://www.google.com/").get()
        9980

    To open several URLs at once you can do::

        $ cd examples/eventlet
        $ python
        >>> from tasks import urlopen
        >>> from celery.task.sets import TaskSet
        >>> result = TaskSet(urlopen.subtask((url, ))
        ...                  for url in LIST_OF_URLS).apply_async()
        >>> for incoming_result in result.iter_native():
        ...     print(incoming_result)

* `webcrawler.crawl`

    This is a simple recursive web crawler that only follows URLs on the
    current host name. Please see the comments in the `webcrawler.py`
    file; a rough sketch follows below.
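
Neither task's source is reproduced here, but as a rough orientation the
`urlopen` task could be defined along the lines below. This is an
assumption for illustration, not a copy of `tasks.py`; it uses
`eventlet.green.urllib2` so that the blocking request cooperatively yields
to other green threads::

    from celery.task import task
    from eventlet.green import urllib2  # cooperative (non-blocking) sockets


    @task
    def urlopen(url):
        # Fetch the page and return the size of the response body,
        # matching the 9980-style result shown above.
        print("Opening: %r" % (url, ))
        body = urllib2.urlopen(url).read()
        return len(body)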
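
A recursive crawl task could follow the same pattern, spawning one subtask
per discovered link and skipping links that leave the current host. This
too is a hedged sketch, not the real, commented version in `webcrawler.py`;
note that a production crawler would also need to track already-seen URLs
in order to terminate::

    import re
    import urlparse

    from celery.task import task
    from eventlet.green import urllib2

    # Naive pattern for pulling href targets out of an HTML page.
    url_regex = re.compile(r'href=[\'"]?([^\'" >]+)')


    @task
    def crawl(url):
        data = urllib2.urlopen(url).read()
        for link in url_regex.findall(data):
            link = urlparse.urljoin(url, link)
            # Stay on the current host: one subtask per local link.
            if urlparse.urlsplit(link).netloc == urlparse.urlsplit(url).netloc:
                crawl.delay(link)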