==================================
 Example using the Eventlet Pool
==================================

Introduction
============

This is a Celery application containing two example tasks.

First you need to install Eventlet; installing the `dnspython` module is
also recommended (when it is installed, all name lookups will be
asynchronous)::

    $ pip install eventlet
    $ pip install dnspython

Before you run any of the example tasks you need to start celeryd::

    $ cd examples/eventlet
    $ celeryd -l info --concurrency=500 --pool=eventlet

As usual you need to have RabbitMQ running; see the Celery getting started
guide if you haven't installed it yet.

Tasks
=====

* `tasks.urlopen`

  This task simply makes a request opening the URL and returns the size
  of the response body::

      $ cd examples/eventlet
      $ python
      >>> from tasks import urlopen
      >>> urlopen.delay("http://www.google.com/").get()
      9980

  To open several URLs at once you can do::

      $ cd examples/eventlet
      $ python
      >>> from tasks import urlopen
      >>> from celery import group
      >>> result = group(urlopen.s(url)
      ...                for url in LIST_OF_URLS).apply_async()
      >>> for incoming_result in result.iter_native():
      ...     print(incoming_result)

* `webcrawler.crawl`

  This is a simple recursive web crawler. It will only crawl
  URLs for the current host name. Please see comments in the
  `webcrawler.py` file.
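
The core of the `tasks.urlopen` example above is "fetch a URL, return the
size of the response body". A minimal standard-library sketch of that
operation (using `urllib.request`; the actual task in `tasks.py` wraps
this logic in a Celery task so the eventlet pool can run hundreds of such
requests concurrently, and may be implemented differently)::

    import urllib.request

    def response_size(url):
        """Fetch url and return the size of the response body in bytes."""
        with urllib.request.urlopen(url) as response:
            return len(response.read())

Because `urllib.request` does blocking socket I/O, it only scales to many
concurrent requests when eventlet monkey-patches the socket module, which
is what the `--pool=eventlet` worker option arranges.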