Ask Solem, 13 years ago (parent commit e9dc65367b)

+ 2 - 1
FAQ

@@ -461,7 +461,8 @@ Tasks
 How can I reuse the same connection when applying tasks?
 --------------------------------------------------------

-**Answer**: See :ref:`executing-connections`.
+**Answer**: Yes! See the :setting:`BROKER_POOL_LIMIT` setting.
+This setting will be enabled by default in 3.0.

 .. _faq-execute-task-by-name:

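For readers following the FAQ answer above: a minimal ``celeryconfig.py`` sketch that opts in to the connection pool, assuming the 2.3-era setting names documented in this commit (the broker values are illustrative).

.. code-block:: python

    # celeryconfig.py -- minimal sketch, assuming 2.3-era setting names.
    BROKER_HOST = "localhost"          # illustrative broker location
    BROKER_TRANSPORT = "amqplib"

    # Cap the pool at 10 broker connections; apply_async() then reuses
    # pooled connections instead of opening a new one per call.
    # Per this commit, the pool is opt-in until version 3.0.
    BROKER_POOL_LIMIT = 10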
+ 3 - 3
celery/app/base.py

@@ -51,7 +51,7 @@ def pyimplementation():


 class LamportClock(object):
-    """Lamports logical clock.
+    """Lamport's logical clock.

     From Wikipedia:
@@ -80,7 +80,7 @@ class LamportClock(object):

     When sending a message use :meth:`forward` to increment the clock,
     when receiving a message use :meth:`adjust` to sync with
-    the timestamp of the incoming message.
+    the time stamp of the incoming message.

     """
     #: The clocks current value.
@@ -382,7 +382,7 @@ class BaseApp(object):

     @cached_property
     def backend(self):
-        """Storing/retreiving task state.  See
+        """Storing/retrieving task state.  See
         :class:`~celery.backend.base.BaseBackend`."""
         return self._get_backend()

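The diff above only touches the ``LamportClock`` docstring; as a sketch of the protocol it documents (forward the clock before sending, adjust it on receive), assuming an implementation that matches the docstring:

.. code-block:: python

    class LamportClock(object):
        """Sketch matching the documented protocol; the real class
        lives in celery/app/base.py."""

        def __init__(self, initial_value=0):
            self.value = initial_value

        def forward(self):
            # Tick the local clock before sending a message.
            self.value += 1
            return self.value

        def adjust(self, other):
            # On receive: sync with the incoming time stamp, then tick.
            self.value = max(self.value, other) + 1

    sender, receiver = LamportClock(), LamportClock(5)
    stamp = sender.forward()   # attach to the outgoing message
    receiver.adjust(stamp)     # receiver.value is now 6, causally ahead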
+ 3 - 3
celery/app/task/__init__.py

@@ -43,7 +43,7 @@ class Context(threading.local):


 class TaskType(type):
-    """Metaclass for tasks.
+    """Meta class for tasks.

     Automatically registers the task in the task registry, except
     if the `abstract` attribute is set.
@@ -216,7 +216,7 @@ class BaseTask(object):
     #: worker crashes mid execution (which may be acceptable for some
     #: applications).
     #:
-    #: The application default can be overriden with the
+    #: The application default can be overridden with the
     #: :setting:`CELERY_ACKS_LATE` setting.
     acks_late = False
@@ -374,7 +374,7 @@ class BaseTask(object):
         :keyword exchange: The named exchange to send the task to.
                            Defaults to the :attr:`exchange` attribute.

-        :keyword exchange_type: The exchange type to initalize the exchange
+        :keyword exchange_type: The exchange type to initialize the exchange
                                 if not already declared.  Defaults to the
                                 :attr:`exchange_type` attribute.

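The ``exchange_type`` keyword fixed above belongs to ``apply_async``; a hedged sketch of how those keywords combine (the task and exchange names are hypothetical; the keyword names are the ones documented in this hunk):

.. code-block:: python

    # Hypothetical task; `exchange` and `exchange_type` are the keywords
    # documented in the hunk above.
    refresh_feed.apply_async(
        args=["http://example.com/rss"],
        exchange="feeds",         # named exchange to send the task to
        exchange_type="topic",    # used to declare the exchange if needed
        routing_key="feed.import",
    )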
+ 9 - 9
celery/worker/consumer.py

@@ -18,7 +18,7 @@ up and running.
   consumer (+ QoS), and the broadcast remote control command consumer.

   Also if events are enabled it configures the event dispatcher and starts
-  up the hartbeat thread.
+  up the heartbeat thread.

 * Finally it can consume messages. :meth:`~Consumer.consume_messages`
   is simply an infinite loop waiting for events on the AMQP channels.
@@ -60,7 +60,7 @@ up and running.

 * Notice that when the connection is lost all internal queues are cleared
   because we can no longer ack the messages reserved in memory.
-  Hoever, this is not dangerous as the broker will resend them
+  However, this is not dangerous as the broker will resend them
   to another worker when the channel is closed.

 * **WARNING**: :meth:`~Consumer.stop` does not close the connection!
@@ -194,7 +194,7 @@ class QoS(object):

 class Consumer(object):
     """Listen for messages received from the broker and
-    move them the the ready queue for task processing.
+    move them to the ready queue for task processing.

     :param ready_queue: See :attr:`ready_queue`.
     :param eta_schedule: See :attr:`eta_schedule`.
@@ -226,7 +226,7 @@ class Consumer(object):

     #: The thread that sends event heartbeats at regular intervals.
     #: The heartbeats are used by monitors to detect that a worker
-    #: went offline/disappeared.
+    #: went off-line/disappeared.
     heart = None

     #: The logger instance to use.  Defaults to the default Celery logger.
@@ -289,7 +289,7 @@ class Consumer(object):
     def start(self):
         """Start the consumer.

-        Automatically surivives intermittent connection failure,
+        Automatically survives intermittent connection failure,
         and will retry establishing the connection and restart
         consuming messages.
@@ -348,7 +348,7 @@ class Consumer(object):
                 eta = timer2.to_timestamp(task.eta)
             except OverflowError, exc:
                 self.logger.error(
-                    "Couldn't convert eta %s to timestamp: %r. Task: %r" % (
+                    "Couldn't convert eta %s to time stamp: %r. Task: %r" % (
                         task.eta, exc, task.info(safe=True)),
                     exc_info=sys.exc_info())
                 task.acknowledge()
@@ -392,7 +392,7 @@ class Consumer(object):
         :param message: The kombu message object.

         """
-        # need to guard against errors occuring while acking the message.
+        # need to guard against errors occurring while acking the message.
        def ack():
            try:
                message.ack()
@@ -558,7 +558,7 @@ class Consumer(object):
                        self.initial_prefetch_count, self.logger)
         self.qos.update()

-        # receive_message handles incomsing messages.
+        # receive_message handles incoming messages.
         self.task_consumer.register_callback(self.receive_message)

         # Setup the process mailbox.
@@ -583,7 +583,7 @@ class Consumer(object):
         """Restart the heartbeat thread.
         """Restart the heartbeat thread.
 
 
         This thread sends heartbeat events at intervals so monitors
         This thread sends heartbeat events at intervals so monitors
-        can tell if the worker is offline/missing.
+        can tell if the worker is off-line/missing.
 
 
         """
         """
         self.heart = Heart(self.priority_timer, self.event_dispatcher)
         self.heart = Heart(self.priority_timer, self.event_dispatcher)

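``register_callback`` and the guarded ``message.ack()`` above are kombu's consumer API; a standalone sketch of that pattern, with broker credentials and queue details as illustrative assumptions:

.. code-block:: python

    from kombu import BrokerConnection, Exchange, Queue
    from kombu.messaging import Consumer

    connection = BrokerConnection(hostname="localhost", userid="guest",
                                  password="guest", virtual_host="/")
    channel = connection.channel()
    queue = Queue("celery", Exchange("celery"), routing_key="celery")

    def receive_message(body, message):
        print("Got message: %r" % (body, ))
        message.ack()   # acking can itself fail on a dead connection,
                        # hence the guard in the diff above

    consumer = Consumer(channel, [queue], callbacks=[receive_message])
    consumer.consume()
    connection.drain_events()   # blocks until a message arrives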
+ 17 - 3
docs/configuration.rst

@@ -392,7 +392,7 @@ Example configuration
 MongoDB backend settings
 ------------------------

-.. note:: 
+.. note::

     The MongoDB backend requires the :mod:`pymongo` library:
     http://github.com/mongodb/mongo-python-driver/tree/master
@@ -535,7 +535,7 @@ BROKER_TRANSPORT
 The Kombu transport to use.  Default is ``amqplib``.

 You can use a custom transport class name, or select one of the
-built-in transports: ``amqplib``, ``pika``, ``redis``, ``beanstalk``, 
+built-in transports: ``amqplib``, ``pika``, ``redis``, ``beanstalk``,
 ``sqlalchemy``, ``django``, ``mongodb``, ``couchdb``.

 .. setting:: BROKER_HOST
@@ -587,6 +587,8 @@ by all transports.
 BROKER_POOL_LIMIT
 ~~~~~~~~~~~~~~~~~

+.. versionadded:: 2.3
+
 The maximum number of connections that can be open in the connection pool.

 A good default value could be 10, or more if you're using eventlet/gevent
@@ -635,6 +637,8 @@ Default is 100 retries.
 BROKER_TRANSPORT_OPTIONS
 ~~~~~~~~~~~~~~~~~~~~~~~~

+.. versionadded:: 2.2
+
 A dict of additional options passed to the underlying transport.

 See your transport user manual for supported options (if any).
@@ -750,6 +754,8 @@ methods that have been registered with :mod:`kombu.serialization.registry`.
 CELERY_TASK_PUBLISH_RETRY
 ~~~~~~~~~~~~~~~~~~~~~~~~~

+.. versionadded:: 2.2
+
 Decides if publishing task messages will be retried in the case
 of connection loss or other connection errors.
 See also :setting:`CELERY_TASK_PUBLISH_RETRY_POLICY`.
@@ -761,6 +767,8 @@ Disabled by default.
 CELERY_TASK_PUBLISH_RETRY_POLICY
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+.. versionadded:: 2.2
+
 Defines the default policy when retrying publishing a task message in
 the case of connection loss or other connection errors.
@@ -1050,6 +1058,8 @@ Send events so the worker can be monitored by tools like `celerymon`.
 CELERY_SEND_TASK_SENT_EVENT
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

+.. versionadded:: 2.2
+
 If enabled, a `task-sent` event will be sent for every task so tasks can be
 tracked before they are consumed by a worker.
@@ -1105,8 +1115,10 @@ Logging
 CELERYD_HIJACK_ROOT_LOGGER
 ~~~~~~~~~~~~~~~~~~~~~~~~~~

+.. versionadded:: 2.2
+
 By default any previously configured logging options will be reset,
-because the Celery apps "hijacks" the root logger.
+because the Celery programs "hijacks" the root logger.

 If you want to customize your own logging then you can disable
 this behavior.
@@ -1223,6 +1235,8 @@ Default is ``processes``.
 CELERYD_AUTOSCALER
 ~~~~~~~~~~~~~~~~~~

+.. versionadded:: 2.2
+
 Name of the autoscaler class to use.

 Default is ``"celery.worker.autoscale.Autoscaler"``.

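Several of the settings gaining ``versionadded`` notes above are related; a sketch combining them in a ``celeryconfig.py``, where the values (and the kombu-style retry-policy keys) are illustrative assumptions:

.. code-block:: python

    BROKER_POOL_LIMIT = 10                # 2.3: connection pool size

    CELERY_TASK_PUBLISH_RETRY = True      # 2.2: retry publishing on
    CELERY_TASK_PUBLISH_RETRY_POLICY = {  # connection errors
        "max_retries": 3,                 # assumed kombu-style keys
        "interval_start": 0,
        "interval_step": 0.2,
        "interval_max": 0.2,
    }

    CELERY_SEND_TASK_SENT_EVENT = True    # 2.2: emit task-sent events
    CELERYD_HIJACK_ROOT_LOGGER = False    # 2.2: keep your own logging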
+ 4 - 1
docs/includes/introduction.txt

@@ -23,10 +23,11 @@ Celery is used in production systems to process millions of tasks a day.
 Celery is written in Python, but the protocol can be implemented in any
 language.  It can also `operate with other languages using webhooks`_.

-The recommended message broker is `RabbitMQ`_, but limited support for
+The recommended message broker is `RabbitMQ`_, but `limited support`_ for
 `Redis`_, `Beanstalk`_, `MongoDB`_, `CouchDB`_ and
 databases (using `SQLAlchemy`_ or the `Django ORM`_) is also available.

+
 Celery is easy to integrate with `Django`_, `Pylons`_ and `Flask`_, using
 the `django-celery`_, `celery-pylons`_ and `Flask-Celery`_ add-on packages.
@@ -47,6 +48,8 @@ the `django-celery`_, `celery-pylons`_ and `Flask-Celery`_ add-on packages.
 .. _`Flask-Celery`: http://github.com/ask/flask-celery/
 .. _`operate with other languages using webhooks`:
     http://ask.github.com/celery/userguide/remote-tasks.html
+.. _`limited support`:
+    http://kombu.readthedocs.org/en/latest/introduction.html#transport-comparison

 .. contents::
     :local:

+ 1 - 1
docs/userguide/concurrency/eventlet.rst

@@ -32,7 +32,7 @@ spawn hundreds, or thousands of green threads.  In an informal test with a
 feed hub system the Eventlet pool could fetch and process hundreds of feeds
 every second, while the multiprocessing pool spent 14 seconds processing 100
 feeds.  Note that is one of the applications evented I/O is especially good
-at (asynchronous HTTP requests).  You may want a a mix of both Eventlet and
+at (asynchronous HTTP requests).  You may want a mix of both Eventlet and
 multiprocessing workers, and route tasks according to compatibility or
 what works best.

+ 9 - 5
docs/userguide/executing.rst

@@ -197,12 +197,16 @@ to use when sending a task:
 Connections and connection timeouts.
 ====================================

-Currently there is no support for broker connection pools, so 
-`apply_async` establishes and closes a new connection every time
-it is called.  This is something you need to be aware of when sending
-more than one task at a time.
+.. admonition:: Automatic Pool Support

-You handle the connection manually by creating a
+    In version 2.3 there is now support for automatic connection pools,
+    so you don't have to manually handle connections and publishers
+    to reuse connections.
+
+    See the :setting:`BROKER_POOL_LIMIT` setting.
+    This setting will be enabled by default in version 3.0.
+
+You can handle the connection manually by creating a
 publisher:

 .. code-block:: python

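The hunk stops at the ``.. code-block:: python`` directive, so the example body is not part of the diff; the manual pattern that section describes looked roughly like this in the 2.x docs, assuming the era's ``Task.get_publisher()`` API and a hypothetical ``add`` task:

.. code-block:: python

    numbers = [(2, 2), (4, 4), (8, 8), (16, 16)]
    results = []
    publisher = add.get_publisher()   # one connection for every send
    try:
        for args in numbers:
            results.append(add.apply_async(args=args, publisher=publisher))
    finally:
        publisher.close()
        publisher.connection.close()
    print([res.get() for res in results])   # [4, 8, 16, 32]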
+ 12 - 2
docs/userguide/optimizing.rst

@@ -52,6 +52,16 @@ like adding new worker nodes, or revoking unnecessary tasks.
 Worker Settings
 ===============

+.. _optimizing-connection-pools:
+
+Broker Connection Pools
+-----------------------
+
+You should enable the :setting:`BROKER_POOL_LIMIT` setting,
+as this will drastically improve overall performance.
+
+This setting will be enabled by default in version 3.0.
+
 .. _optimizing-prefetch-limit:

 Prefetch Limits
@@ -74,7 +84,7 @@ If you have many tasks with a long duration you want
 the multiplier value to be 1, which means it will only reserve one
 the multiplier value to be 1, which means it will only reserve one
 task per worker process at a time.
 task per worker process at a time.
 
 
-However -- If you have many short-running tasks, and throughput/roundtrip
+However -- If you have many short-running tasks, and throughput/round trip
 latency[#] is important to you, this number should be large. The worker is
 latency[#] is important to you, this number should be large. The worker is
 able to process more tasks per second if the messages have already been
 able to process more tasks per second if the messages have already been
 prefetched, and is available in memory.  You may have to experiment to find
 prefetched, and is available in memory.  You may have to experiment to find
@@ -82,7 +92,7 @@ the best value that works for you.  Values like 50 or 150 might make sense in
 these circumstances. Say 64, or 128.

 If you have a combination of long- and short-running tasks, the best option
-is to use two worker nodes that are configured separatly, and route
+is to use two worker nodes that are configured separately, and route
 the tasks according to the run-time. (see :ref:`guide-routing`).

 .. [*] RabbitMQ and other brokers deliver messages round-robin,

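The multiplier discussed above is the :setting:`CELERYD_PREFETCH_MULTIPLIER` setting (not named in the hunk itself); the two profiles the text describes map to values like:

.. code-block:: python

    # Long-running tasks: reserve only one task per worker process.
    CELERYD_PREFETCH_MULTIPLIER = 1

    # Many short tasks where round-trip latency matters: prefetch
    # aggressively. The text above suggests experimenting with values
    # like 64 or 128.
    #CELERYD_PREFETCH_MULTIPLIER = 64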
+ 1 - 0
docs/userguide/tasksets.rst

@@ -164,6 +164,7 @@ It supports the following operations:
 Chords
 ======

+.. versionadded:: 2.3

 A chord is a task that only executes after all of the tasks in a taskset has
 finished executing.

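The ``versionadded`` note above marks chords as new in 2.3; the canonical example from the docs of that era looks roughly like this (treat the import path as an assumption):

.. code-block:: python

    from celery.task import chord, task

    @task
    def add(x, y):
        return x + y

    @task
    def tsum(numbers):
        return sum(numbers)

    # Apply 100 add() tasks; once all finish, tsum() receives the list
    # of their results.
    result = chord(add.subtask((i, i)) for i in xrange(100))(tsum.subtask())
    print(result.get())   # 9900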
+ 1 - 1
docs/userguide/workers.rst

@@ -260,7 +260,7 @@ Example changing the rate limit for the `myapp.mytask` task to accept
     >>> rate_limit("myapp.mytask", "200/m")

 Example changing the rate limit on a single host by specifying the
-destination hostname::
+destination host name::

     >>> rate_limit("myapp.mytask", "200/m",
     ...            destination=["worker1.example.com"])