==================================
 Example using the Eventlet Pool
==================================

Introduction
============

This is a Celery application containing two example tasks.

First you need to install Eventlet; installing the `dnspython` module is
also recommended (when it's installed, all name lookups will be
asynchronous)::

    $ pip install eventlet
    $ pip install dnspython
    $ pip install requests

Before you run any of the example tasks you need to start
the worker::

    $ cd examples/eventlet
    $ celery worker -l info --concurrency=500 --pool=eventlet

As usual you need to have RabbitMQ running; see the Celery getting started
guide if you haven't installed it yet.

Tasks
=====

* `tasks.urlopen`

  This task simply makes an HTTP request to the given URL and returns the
  size of the response body (a sketch of a possible implementation follows
  the examples below)::

      $ cd examples/eventlet
      $ python
      >>> from tasks import urlopen
      >>> urlopen.delay('http://www.google.com/').get()
      9980

  To open several URLs at once you can do::

      $ cd examples/eventlet
      $ python
      >>> from tasks import urlopen
      >>> from celery import group
      >>> result = group(urlopen.s(url)
      ...                for url in LIST_OF_URLS).apply_async()
      >>> for incoming_result in result.iter_native():
      ...     print(incoming_result)
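
  For reference, here is a minimal sketch of what a task like this could
  look like. It's illustrative only: the broker URL and the error handling
  are assumptions, not the actual contents of `tasks.py`::

      import requests

      from celery import Celery

      # Broker URL is an assumption; adjust for your RabbitMQ setup.
      app = Celery('tasks', broker='amqp://guest@localhost//')

      @app.task
      def urlopen(url):
          """Fetch `url` and return the size of the response body."""
          try:
              response = requests.get(url)
          except requests.RequestException:
              # Connection and name-lookup errors are simply ignored here.
              return None
          return len(response.text)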

* `webcrawler.crawl`

  This is a simple recursive web crawler. It will only crawl
  URLs for the current host name. Please see comments in the
  `webcrawler.py` file.
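
  For the curious, below is a minimal sketch of how such a same-host
  crawler could be structured. The names and details are illustrative
  assumptions, not the actual contents of `webcrawler.py`::

      import re
      from urllib.parse import urljoin, urlparse

      import requests

      from celery import Celery

      # Broker URL is an assumption; adjust for your RabbitMQ setup.
      app = Celery('webcrawler', broker='amqp://guest@localhost//')

      # Naive pattern for extracting href targets; a real crawler
      # would use a proper HTML parser.
      link_re = re.compile(r'href="([^"]+)"')

      @app.task(ignore_result=True)
      def crawl(url, seen=None):
          """Recursively crawl `url`, following only same-host links."""
          seen = set(seen or ())
          if url in seen:
              return
          seen.add(url)
          try:
              response = requests.get(url)
          except requests.RequestException:
              return
          host = urlparse(url).netloc
          for match in link_re.finditer(response.text):
              link = urljoin(url, match.group(1))
              # Only follow links pointing at the current host name.
              if urlparse(link).netloc == host and link not in seen:
                  # Note: `seen` is serialized with each subtask, so
                  # de-duplication is only best-effort across branches.
                  crawl.delay(link, list(seen))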