==================================
Example using the Eventlet Pool
==================================

Introduction
============

This is a Celery application containing two example tasks.

First you need to install Eventlet. Installing the `dnspython` module is
also recommended, as it makes all name lookups asynchronous::

    $ pip install eventlet
    $ pip install dnspython

Before you run any of the example tasks you need to start
the worker::

    $ cd examples/eventlet
    $ celery worker -l info --concurrency=500 --pool=eventlet

As usual you need to have RabbitMQ running; see the Celery Getting Started
guide if you haven't installed it yet.
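
The worker reads its settings from the `celeryconfig.py` module in this
directory. As a rough sketch, assuming a local RabbitMQ broker with the
default guest credentials (the bundled file may differ), such a
configuration could look like::

    # celeryconfig.py -- loaded by the worker at startup.
    # Assumes a local RabbitMQ broker with the default guest credentials.
    BROKER_URL = 'amqp://guest:guest@localhost:5672//'

    # Keep task results so .get() and iter_native() below can fetch them.
    CELERY_RESULT_BACKEND = 'amqp'

    # Make sure the example task modules are imported by the worker.
    CELERY_IMPORTS = ('tasks', 'webcrawler')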

Tasks
=====

* `tasks.urlopen`

This task makes a request to open the URL and returns the size
of the response body::

    $ cd examples/eventlet
    $ python
    >>> from tasks import urlopen
    >>> urlopen.delay('http://www.google.com/').get()
    9980
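
For reference, a task like this only takes a few lines to write. The
sketch below is purely illustrative and may differ from the actual
`tasks.py`; it assumes the `requests` library is installed::

    # Illustrative sketch of a urlopen-style task; see tasks.py for
    # the real implementation.
    import requests

    from celery import task


    @task()
    def urlopen(url):
        print('Opening: {0}'.format(url))
        try:
            response = requests.get(url)
        except Exception as exc:
            print('URL {0} gave error: {1!r}'.format(url, exc))
            return None
        # Return the size of the response body, as described above.
        return len(response.text)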

To open several URLs at once you can do::

    $ cd examples/eventlet
    $ python
    >>> from tasks import urlopen
    >>> from celery import group
    >>> result = group(urlopen.s(url)
    ...                for url in LIST_OF_URLS).apply_async()
    >>> for incoming_result in result.iter_native():
    ...     print(incoming_result)
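
`LIST_OF_URLS` above is a placeholder for any iterable of URL strings.
If you don't need results as they arrive, a simpler variant (with URLs
chosen here purely for illustration) collects all return values in
submission order::

    >>> urls = ['http://www.google.com/', 'http://celeryproject.org/']
    >>> result = group(urlopen.s(url) for url in urls).apply_async()
    >>> sizes = result.join()  # blocks; returns body sizes in task order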

* `webcrawler.crawl`

This is a simple recursive web crawler. It will only crawl
URLs on the current host name. Please see the comments in the
`webcrawler.py` file.
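
To give an idea of the approach, here is a condensed sketch of such a
crawler task. It is deliberately naive (a plain `set` for deduplication,
a crude regex for link extraction, and it assumes the pickle task
serializer); `webcrawler.py` contains the authoritative version::

    # Condensed, illustrative sketch of a same-host recursive crawler.
    # See webcrawler.py for the real, fully commented implementation.
    import re

    import requests

    from celery import task

    try:
        from urllib.parse import urlsplit  # Python 3
    except ImportError:
        from urlparse import urlsplit      # Python 2

    # Crude link pattern, good enough for a sketch.
    url_regex = re.compile(r'href="(https?://[^"]+)"')


    @task(ignore_result=True)
    def crawl(url, seen=None):
        if seen is None:
            seen = set()
        print('crawling: {0}'.format(url))
        domain = urlsplit(url).netloc
        body = requests.get(url).text
        for found in url_regex.findall(body):
            # Only follow links on the same host, and each URL only once.
            if urlsplit(found).netloc == domain and found not in seen:
                seen.add(found)
                crawl.delay(found, seen)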