Fixed some outdated documentation

Ask Solem 15 years ago
parent commit 7a818601d4

+ 1 - 1
FAQ

@@ -369,7 +369,7 @@ routing capabilities of AMQP you need to set both the ``queue``, and
 carrot needs to maintain the same interface for both AMQP and STOMP (obviously
 the one with the most capabilities won).
 
-Use the following specific settings in your ``settings.py``:
+Use the following specific settings in your ``celeryconfig.py``/Django ``settings.py``:
 
 .. code-block:: python
 

+ 7 - 1
celery/backends/cache.py

@@ -10,15 +10,21 @@ from celery.datastructures import LocalCache
 
 
 def get_best_memcache(*args, **kwargs):
+    behaviors = kwargs.pop("behaviors", None)
+    is_pylibmc = False
     try:
         import pylibmc as memcache
+        is_pylibmc = True
     except ImportError:
         try:
             import memcache
         except ImportError:
             raise ImproperlyConfigured("Memcached backend requires either "
                                        "the 'memcache' or 'pylibmc' library")
-    return memcache.Client(*args, **kwargs)
+    client = memcache.Client(*args, **kwargs)
+    if is_pylibmc and behaviors is not None:
+        client.behaviors = behaviors
+    return client
 
 
 class DummyClient(object):
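
As a hedged illustration of the change above: with ``pylibmc`` installed the
``behaviors`` mapping is applied to the client, while with ``python-memcached``
it is popped off and ignored. The server address and option values here are
hypothetical:

.. code-block:: python

    from celery.backends.cache import get_best_memcache

    # binary and behaviors are pylibmc-specific in this sketch;
    # behaviors is popped before the client is constructed, then
    # assigned via client.behaviors when pylibmc is in use.
    client = get_best_memcache(["127.0.0.1:11211"],
                               binary=True,
                               behaviors={"tcp_nodelay": True})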

+ 25 - 21
docs/configuration.rst

@@ -18,21 +18,26 @@ It should contain all you need to run a basic celery set-up.
 
 .. code-block:: python
 
+    # List of modules to import when celery starts.
+    CELERY_IMPORTS = ("myapp.tasks", )
+
+    ## Result store settings.
     CELERY_RESULT_BACKEND = "database"
     CELERY_RESULT_DBURI = "sqlite:///mydatabase.db"
 
+    ## Broker settings.
     BROKER_HOST = "localhost"
     BROKER_PORT = 5672
     BROKER_VHOST = "/"
     BROKER_USER = "guest"
     BROKER_PASSWORD = "guest"
 
+    ## Worker settings.
     ## If you're doing mostly I/O you can have more processes,
     ## but if mostly spending CPU, try to keep it close to the
     ## number of CPUs on your machine. If not set, the number of CPUs/cores
     ## available will be used.
-    # CELERYD_CONCURRENCY = 8
-
+    CELERYD_CONCURRENCY = 10
     # CELERYD_LOG_FILE = "celeryd.log"
     # CELERYD_LOG_LEVEL = "INFO"
 
@@ -127,14 +132,6 @@ the ``CELERY_RESULT_ENGINE_OPTIONS`` setting::
 .. _`Connection String`:
     http://www.sqlalchemy.org/docs/dbengine.html#create-engine-url-arguments
 
-Please see the Django ORM database settings documentation:
-http://docs.djangoproject.com/en/dev/ref/settings/#database-engine
-
-If you use this backend, make sure to initialize the database tables
-after configuration. Use the ``celeryinit`` command to do so::
-
-    $ celeryinit
-
 Example configuration
 ---------------------
 
@@ -173,12 +170,8 @@ Example configuration
 Cache backend settings
 ======================
 
-Please see the documentation for the Django cache framework settings:
-http://docs.djangoproject.com/en/dev/topics/cache/#memcached
-
-To use a custom cache backend for Celery, while using another for Django,
-you should use the ``CELERY_CACHE_BACKEND`` setting instead of the regular
-django ``CACHE_BACKEND`` setting.
+The cache backend supports the `pylibmc`_ and ``python-memcached`` libraries.
+The latter is used only if `pylibmc`_ is not installed.
 
 Example configuration
 ---------------------
@@ -187,14 +180,24 @@ Using a single memcached server:
 
 .. code-block:: python
 
-    CACHE_BACKEND = 'memcached://127.0.0.1:11211/'
+    CELERY_CACHE_BACKEND = 'memcached://127.0.0.1:11211/'
 
 Using multiple memcached servers:
 
 .. code-block:: python
 
     CELERY_RESULT_BACKEND = "cache"
-    CACHE_BACKEND = 'memcached://172.19.26.240:11211;172.19.26.242:11211/'
+    CELERY_CACHE_BACKEND = 'memcached://172.19.26.240:11211;172.19.26.242:11211/'
+
+You can set pylibmc options using the ``CELERY_CACHE_BACKEND_OPTIONS``
+setting:
+
+.. code-block:: python
+
+    CELERY_CACHE_BACKEND_OPTIONS = {"binary": True,
+                                    "behaviors": {"tcp_nodelay": True}}
+
+.. _`pylibmc`: http://sendapatch.se/projects/pylibmc/
 
 
 Tokyo Tyrant backend settings
@@ -453,9 +456,10 @@ Worker: celeryd
 
 * CELERY_IMPORTS
 
-    A sequence of modules to import when the celery daemon starts.  This is
-    useful to add tasks if you are not using django or cannot use task
-    auto-discovery.
+    A sequence of modules to import when the celery daemon starts.
+
+    This is used to specify the task modules to import, but also
+    to import signal handlers and additional remote control commands, etc.
 
 * CELERYD_MAX_TASKS_PER_CHILD
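
A hedged sketch of the broadened role of ``CELERY_IMPORTS`` described above;
the module names are hypothetical:

.. code-block:: python

    # celeryconfig.py (or Django settings.py)
    CELERY_IMPORTS = ("myapp.tasks",           # task definitions
                      "myapp.signals",         # signal handlers
                      "myapp.remote_control")  # extra remote control commands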
 

+ 0 - 5
docs/cookbook/index.rst

@@ -9,8 +9,3 @@
     daemonizing
 
 This page contains common recipes and techniques.
-Whenever a setting is mentioned, you should use ``celeryconf.py`` if using
-regular Python, or ``settings.py`` if running under Django.
-
-
-

+ 3 - 3
docs/tutorials/clickcounter.rst

@@ -108,14 +108,14 @@ On to the code...
 
 .. code-block:: python
 
-    from carrot.connection import DjangoBrokerConnection
+    from celery.messaging import establish_connection
     from carrot.messaging import Publisher, Consumer
     from clickmuncher.models import Click
 
 
     def send_increment_clicks(for_url):
         """Send a message for incrementing the click count for an URL."""
-        connection = DjangoBrokerConnection()
+        connection = establish_connection()
         publisher = Publisher(connection=connection,
                               exchange="clicks",
                               routing_key="increment_click",
@@ -130,7 +130,7 @@ On to the code...
     def process_clicks():
         """Process all currently gathered clicks by saving them to the
         database."""
-        connection = DjangoBrokerConnection()
+        connection = establish_connection()
         consumer = Consumer(connection=connection,
                             queue="clicks",
                             exchange="clicks",

+ 0 - 5
docs/tutorials/otherqueues.rst

@@ -75,11 +75,6 @@ configuration values.
 
         $ python manage.py syncdb
 
-  Or if you're not using django, but the default loader instead run
-  ``celeryinit``::
-
-        $ celeryinit
-
 Important notes
 ---------------
 

+ 3 - 0
docs/userguide/executing.rst

@@ -235,6 +235,9 @@ by creating a new queue that binds to ``image.crop``.
 AMQP options
 ============
 
+**NOTE** The ``mandatory`` and ``immediate`` flags are not supported by
+``amqplib`` at this point.
+
 * mandatory
 
 This sets the delivery to be mandatory. An exception will be raised
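
For context, a hedged sketch of how these options are typically passed when
applying a task. Note the caveat added above: ``amqplib`` does not support
``mandatory``/``immediate`` at this point, and the task and arguments here
are illustrative:

.. code-block:: python

    # Sketch only -- assumes apply_async() forwards these messaging options.
    MyTask.apply_async(args=[1, 2], mandatory=True, immediate=False)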

+ 1 - 1
docs/userguide/routing.rst

@@ -26,7 +26,7 @@ simple routing tasks.
 
 Say you have two servers, ``x`` and ``y``, that handle regular tasks,
 and one server, ``z``, that only handles feed-related tasks, you can use this
-configuration:
+configuration::
 
     CELERY_ROUTES = {"feed.tasks.import_feed": "feeds"}
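
Here ``"feeds"`` names a queue; a hedged sketch of a matching
``CELERY_QUEUES`` entry (the exchange and binding key values are assumptions):

.. code-block:: python

    CELERY_QUEUES = {"feeds": {"exchange": "feeds",
                               "exchange_type": "direct",
                               "binding_key": "feeds"}}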
 

+ 22 - 20
docs/userguide/tasks.rst

@@ -43,15 +43,6 @@ The task decorator takes the same execution options as the
     def create_user(username, password):
         User.objects.create(username=username, password=password)
 
-An alternative way to use the decorator is to give the function as an argument
-instead, but if you do this be sure to set the resulting tasks :attr:`__name__`
-attribute, so pickle is able to find it in reverse:
-
-.. code-block:: python
-
-    create_user_task = task()(create_user)
-    create_user_task.__name__ = "create_user_task"
-
 
 Default keyword arguments
 =========================
@@ -110,8 +101,9 @@ the worker log:
 .. code-block:: python
 
     class AddTask(Task):
-        def run(self, x, y, \*\*kwargs):
-            logger = self.get_logger(\*\*kwargs)
+
+        def run(self, x, y, **kwargs):
+            logger = self.get_logger(**kwargs)
             logger.info("Adding %s + %s" % (x, y))
             return x + y
 
@@ -120,14 +112,17 @@ or using the decorator syntax:
 .. code-block:: python
 
     @task()
-    def add(x, y, \*\*kwargs):
-        logger = add.get_logger(\*\*kwargs)
+    def add(x, y, **kwargs):
+        logger = add.get_logger(**kwargs)
         logger.info("Adding %s + %s" % (x, y))
         return x + y
 
 There are several logging levels available, and the worker's ``loglevel``
 setting decides whether or not they will be written to the log file.
 
+Of course, you can also simply use ``print``, as anything written to standard
+out/err will be written to the logfile as well.
+
 
 Retrying a task if something fails
 ==================================
@@ -139,7 +134,7 @@ It will do the right thing, and respect the
 .. code-block:: python
 
     @task()
-    def send_twitter_status(oauth, tweet, \*\*kwargs):
+    def send_twitter_status(oauth, tweet, **kwargs):
         try:
             twitter = Twitter(oauth)
             twitter.update_status(tweet)
@@ -176,7 +171,7 @@ You can also provide the ``countdown`` argument to
     class MyTask(Task):
         default_retry_delay = 30 * 60 # retry in 30 minutes
 
-        def run(self, x, y, \*\*kwargs):
+        def run(self, x, y, **kwargs):
             try:
                 ...
             except Exception, exc:
@@ -254,12 +249,19 @@ Task options
 Message and routing options
 ---------------------------
 
-* routing_key
-    Override the global default ``routing_key`` for this task.
+* queue
+
+    Use the routing settings from a queue defined in ``CELERY_QUEUES``.
+    If defined, the ``exchange`` and ``routing_key`` options will be ignored.
 
 * exchange
+
     Override the global default ``exchange`` for this task.
 
+* routing_key
+
+    Override the global default ``routing_key`` for this task.
+
 * mandatory
     If set, the task message has mandatory routing. By default the task
     is silently dropped by the broker if it can't be routed to a queue.
@@ -278,7 +280,7 @@ Message and routing options
     highest. **Note:** RabbitMQ does not support priorities yet.
 
 See :doc:`executing` for more information about the messaging options
-available.
+available; see also :doc:`routing`.
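
To illustrate the precedence described above, a minimal hedged sketch
(the ``feeds`` queue is assumed to be defined in ``CELERY_QUEUES``):

.. code-block:: python

    @task(queue="feeds")  # exchange/routing_key would be ignored here
    def import_feed(feed_url):
        pass  # fetch and store the feed here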
 
 Example
 =======
@@ -383,8 +385,8 @@ blog/tasks.py
 
 
     @task
-    def spam_filter(comment_id, remote_addr=None, \*\*kwargs):
-            logger = spam_filter.get_logger(\*\*kwargs)
+    def spam_filter(comment_id, remote_addr=None, **kwargs):
+            logger = spam_filter.get_logger(**kwargs)
             logger.info("Running spam filter for comment %s" % comment_id)
 
             comment = Comment.objects.get(pk=comment_id)