==================================
 Example using the Eventlet Pool
==================================

Introduction
============

This is a Celery application containing two example tasks.

First you need to install Eventlet; installing the `dnspython`
module is also recommended (when it is installed all name lookups
will be asynchronous)::

    $ pip install eventlet
    $ pip install dnspython

Before you run any of the example tasks you need to start
the worker::

    $ cd examples/eventlet
    $ celery worker -l info --concurrency=500 --pool=eventlet

As usual you need to have RabbitMQ running; see the Celery getting started
guide if you haven't installed it yet.

Tasks
=====

* `tasks.urlopen`

  This task simply makes a request opening the URL and returns the size
  of the response body::

      $ cd examples/eventlet
      $ python
      >>> from tasks import urlopen
      >>> urlopen.delay("http://www.google.com/").get()
      9980

  To open several URLs at once you can do::

      $ cd examples/eventlet
      $ python
      >>> from tasks import urlopen
      >>> from celery import group
      >>> result = group(urlopen.s(url)
      ...                for url in LIST_OF_URLS).apply_async()
      >>> for incoming_result in result.iter_native():
      ...     print(incoming_result)

* `webcrawler.crawl`

  This is a simple recursive web crawler. It will only crawl
  URLs for the current host name. Please see comments in the
  `webcrawler.py` file.
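The "current host name only" restriction mentioned above can be sketched with
the standard library's `urllib.parse`. This is a hypothetical illustration;
the `same_host` helper is an assumption for this sketch, not code taken from
`webcrawler.py`::

    # Hypothetical sketch of a same-host check like the one the crawler
    # needs; the real logic in webcrawler.py may differ.
    from urllib.parse import urlparse

    def same_host(base_url, url):
        """Return True if url points at the same host as base_url."""
        return urlparse(url).netloc == urlparse(base_url).netloc

    base = "http://example.com/index.html"
    print(same_host(base, "http://example.com/about"))  # True
    print(same_host(base, "http://other.org/page"))     # False

A crawler applying this filter simply skips any discovered link for which
`same_host` returns False before scheduling a recursive `crawl` task for it.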