Browse Source

Bumped version to 2.1.0 and updated the Changelog

Ask Solem 14 years ago
Parent
Commit
1aeb981d13
5 changed files with 79 additions and 68 deletions
  1. Changelog (+61 -58)
  2. README.rst (+1 -1)
  3. celery/__init__.py (+1 -1)
  4. docs/homepage/index.html (+15 -7)
  5. docs/includes/introduction.txt (+1 -1)

+ 61 - 58
Changelog

@@ -19,22 +19,19 @@ Important Notes
 * Celery is now following the versioning semantics defined by `semver`_.

     This means we are no longer allowed to use odd/even versioning semantics
-    (see http://github.com/mojombo/semver.org/issues#issue/8).
     By our previous versioning scheme this stable release should have
     been version 2.2.

-    The document describing our release cycle and versioning scheme
-    can be found at `Wiki: Release Cycle`_.
-
 .. _`semver`: http://semver.org
-.. _`Wiki: Release Cycle`: http://wiki.github.com/ask/celery/release-cycle

 * Now depends on Carrot 0.10.7.

 * No longer depends on SQLAlchemy, this needs to be installed separately
-  if the database backend is used (does not apply to users of
-  `django-celery`_).
+  if the database result backend is used.

+* django-celery now comes with a monitor for the Django Admin interface.
+  This can also be used if you're not a Django user.  See
+  :ref:`monitoring-django-admin` and :ref:`monitoring-nodjango` for more information.

 * If you get an error after upgrading saying:
   ``AttributeError: 'module' object has no attribute 'system'``,
@@ -76,19 +73,20 @@ News

 * celeryev: Event Snapshots

-    If enabled, celeryd can send messages every time something
-    happens in the worker. These messages are called "events".
+    If enabled, :program:`celeryd` sends messages about what the worker is doing.
+    These messages are called "events".
     The events are used by real-time monitors to show what the
     cluster is doing, but they are not very useful for monitoring
-    over time. That's where the snapshots comes in. Snapshots
+    over a longer period of time.  Snapshots
     lets you take "pictures" of the clusters state at regular intervals.
-    These can then be stored in the database to generate statistics
-    with, or even monitoring.
+    This can then be stored in a database to generate statistics
+    with, or even monitoring over longer time periods.

-    Django-celery now comes with a Celery monitor for the Django
+    django-celery now comes with a Celery monitor for the Django
     Admin interface. To use this you need to run the django-celery
     snapshot camera, which stores snapshots to the database at configurable
-    intervals.
+    intervals.  See :ref:`monitoring-nodjango` for information about using
+    this monitor if you're not using Django.

     To use the Django admin monitor you need to do the following:

@@ -136,15 +134,11 @@ News
     The rate limit is off by default, which means it will take a snapshot
     for every :option:`--frequency` seconds.

-    The django-celery camera also automatically deletes old events.
-    It deletes successful tasks after 1 day, failed tasks after 3 days,
-    and tasks in other states after 5 days.
-
 .. seealso::

     :ref:`monitoring-django-admin` and :ref:`monitoring-snapshots`.

-* :func:`celery.task.control.broadcast`: Added callback argument, this can be
+* :func:`~celery.task.control.broadcast`: Added callback argument, this can be
   used to process replies immediately as they arrive.
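
  A minimal sketch of how the new ``callback`` argument might be used (the
  ``ping`` command and the ``reply``/``timeout`` keyword arguments here are
  assumptions about the remote-control API, not part of this entry):

  .. code-block:: python

      from celery.task.control import broadcast

      def on_reply(reply):
          # Called once per reply, as soon as it arrives, instead of
          # waiting for all replies to be collected first.
          print("Got reply: %r" % (reply, ))

      # "ping" is assumed here as a simple built-in remote control command.
      broadcast("ping", reply=True, timeout=2, callback=on_reply)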

 * celeryctl: New command-line utility to manage and inspect worker nodes,
@@ -181,8 +175,7 @@ News
     We now configure the root logger instead of only configuring
     our custom logger. In addition we don't hijack
     the multiprocessing logger anymore, but instead use a custom logger name
-    (celeryd uses "celery", celerybeat uses "celery.beat", celeryev uses
-    "celery.ev").
+    for different apps:

     =====================================  =====================================
     **Application**                        **Logger Name**
@@ -194,10 +187,10 @@ News

     This means that the ``loglevel`` and ``logfile`` arguments will
     affect all registered loggers (even those from 3rd party libraries).
-    That is unless you configure the loggers manually as shown below.
+    Unless you configure the loggers manually as shown below, that is.

-    Users can choose to configure logging by subscribing to the
-    :data:`~celery.signals.setup_logging` signal:
+    *Users can choose to configure logging by subscribing to the
+    :data:`~celery.signals.setup_logging` signal:*

     .. code-block:: python

@@ -212,9 +205,9 @@ News
     will be configured using the :option:`--loglevel`/:option:`--logfile`
     argument, this will be used for *all defined loggers*.

-    Remember that celeryd also redirects stdout and stderr
-    to the celery logger, if you want to manually configure logging
-    ands redirect stdouts, you need to enable this manually:
+    Remember that celeryd also redirects stdout and stderr
+    to the celery logger; if you configure logging manually
+    you also need to redirect the stdouts manually:

     .. code-block:: python

@@ -228,21 +221,22 @@ News
             log.redirect_stdouts_to_logger(stdouts, loglevel=logging.WARNING)

 * celeryd: Added command-line option :option:`-I`/:option:`--include`:
-  Additional (task) modules to be imported

-* celeryd: now emits a warning if running as the root user (euid is 0).
+    A comma separated list of (task) modules to be imported.

-* Fixed timing issue when declaring the remote control command reply queue
+    Example::

-    This issue could result in replies being lost, but have now been fixed.
+        $ celeryd -I app1.tasks,app2.tasks
+
+* celeryd: now emits a warning if running as the root user (euid is 0).

 * :func:`celery.messaging.establish_connection`: Ability to override defaults
   used using kwarg "defaults".

-* celeryd: Now uses ``multiprocessing.freeze_support()`` so it should work
-  with py2exe and similar tools.
+* celeryd: Now uses ``multiprocessing.freeze_support()`` so that it should work
+  with **py2exe**, **PyInstaller**, **cx_Freeze**, etc.

-* celeryd: Now includes more metadata for the STARTED state: pid and
+* celeryd: Now includes more metadata for the :state:`STARTED` state: PID and
   hostname of the worker that started the task.

     See issue #181
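
  A sketch of what the extra metadata might look like when inspecting a task
  that is currently executing (the task, the result contents, and the
  assumption that it reports the ``STARTED`` state are illustrative only)::

      >>> result = add.delay(2, 2)
      >>> result.status
      'STARTED'
      >>> result.result
      {'pid': 5010, 'hostname': 'worker1.example.com'}
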
@@ -260,15 +254,15 @@ News
     See issue #182.

 * celeryd: Now emits a warning if there is already a worker node using the same
-  name running on the current virtual host.
+  name running on the same virtual host.

 * AMQP result backend: Sending of results are now retried if the connection
   is down.

 * AMQP result backend: ``result.get()``: Wait for next state if state is not
-  in :data:`~celery.states.READY_STATES`.
+    in :data:`~celery.states.READY_STATES`.

-* TaskSetResult now supports ``__getitem__``
+* TaskSetResult now supports subscription.

     ::

@@ -276,13 +270,13 @@ News
         >>> res[0].get()

 * Added ``Task.send_error_emails`` + ``Task.error_whitelist``, so these can
-  be configured per task instead of just globally
+  be configured per task instead of just by the global setting.

 * Added ``Task.store_errors_even_if_ignored``, so it can be changed per Task,
-  not just globally.
+  not just by the global setting.
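
  A minimal sketch of setting these per-task attributes (this entry and the
  one above); the task class and values are purely illustrative, and
  ``error_whitelist`` accepting exception types is an assumption:

  .. code-block:: python

      from celery.task import Task

      class ProcessUpload(Task):
          # Previously these could only be set globally in the configuration.
          send_error_emails = True
          error_whitelist = (IOError, )   # assumed to accept exception types
          store_errors_even_if_ignored = True

          def run(self, upload_id, **kwargs):
              pass  # do the actual work here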

-* The crontab schedule no longer wakes up every second, but implements
-  ``remaining_estimate``.
+* The crontab scheduler no longer wakes up every second, but implements
+  ``remaining_estimate`` (*Optimization*).

 * celeryd:  Store :state:`FAILURE` result if the
    :exc:`~celery.exceptions.WorkerLostError` exception occurs (worker process
@@ -300,8 +294,9 @@ News
       "celery.backend_cleanup" it won't change it, so the behavior of the
       backend cleanup task can be easily changed.

-    * The task is now run by every day at 4:00 AM, instead of every day since
-      fist run (using crontab schedule instead of run_every)
+    * The task is now run every day at 4:00 AM, rather than every day since
+      the first time it was run (using crontab schedule instead of
+      ``run_every``)

     * Renamed ``celery.task.builtins.DeleteExpiredTaskMetaTask``
         -> :class:`celery.task.builtins.backend_cleanup`
@@ -349,7 +344,7 @@ News
 * celeryd: Task results shown in logs are now truncated to 46 chars.

 * ``Task.__name__`` is now an alias to ``self.__class__.__name__``.
-   This way it introspects more like a regular function.
+   This way tasks introspect more like regular functions.

 * ``Task.retry``: Now raises :exc:`TypeError` if kwargs argument is empty.

@@ -359,30 +354,34 @@ News

 * :class:`~celery.datastructures.TokenBucket`: Generic Token Bucket algorithm
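
  A rough usage sketch of the new class (the constructor argument and the
  ``can_consume`` method shown are assumptions about a generic token-bucket
  API, not spelled out in this entry):

  .. code-block:: python

      from celery.datastructures import TokenBucket

      bucket = TokenBucket(fill_rate=10)   # assumed: roughly 10 tokens/second
      if bucket.can_consume(1):
          pass  # within the rate limit; perform the operation
      else:
          pass  # over the rate limit; back off and try again later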

-* :class:`celery.events.state.State`: Recording of cluster state can now
-  be paused.
+* :mod:`celery.events.state`: Recording of cluster state can now
+  be paused and resumed, including support for buffering.

-    * ``State.freeze(buffer=True)``

-    Pauses recording of the stream. If buffer is true, then events received
-    while being frozen will be kept, so it can be replayed later.
+    .. method:: State.freeze(buffer=True)

-    * ``State.thaw(replay=True)``
+        Pauses recording of the stream.

-    Resumes recording of the stream. If replay is true, then the buffer
-    will be applied.
+        If ``buffer`` is true, events received while being frozen will be
+        buffered, and may be replayed later.

-    * ``State.freeze_while(fun)``
+    .. method:: State.thaw(replay=True)

-    Apply function. Freezes the stream before the function,
-    and replays the buffer when the function returns.
+        Resumes recording of the stream.
+
+        If ``replay`` is true, then the recorded buffer will be applied.
+
+    .. method:: State.freeze_while(fun)
+
+        With a function to apply, freezes the stream before,
+        and replays the buffer after the function returns.
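
  A short sketch of how these methods might be combined
  (``take_snapshot_of`` is a hypothetical helper standing in for whatever
  should run while recording is paused):

  .. code-block:: python

      from celery.events.state import State

      state = State()

      state.freeze(buffer=True)    # pause recording, buffer incoming events
      take_snapshot_of(state)      # read a consistent snapshot of the state
      state.thaw(replay=True)      # resume and apply the buffered events

      # Equivalent, using the helper described above:
      state.freeze_while(lambda: take_snapshot_of(state))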

 * :meth:`EventReceiver.capture <celery.events.EventReceiver.capture>`
   Now supports a timeout keyword argument.
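
  A sketch of the new keyword in context (the handler and the connection
  setup around it are assumptions; only the ``timeout`` argument itself comes
  from this entry):

  .. code-block:: python

      from celery.events import EventReceiver
      from celery.messaging import establish_connection

      def on_event(event):
          print("Event: %s" % (event["type"], ))

      connection = establish_connection()
      try:
          receiver = EventReceiver(connection, handlers={"*": on_event})
          receiver.capture(timeout=10.0)   # return after at most ~10 seconds
      finally:
          connection.close()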

-* Optimization: The mediator thread is now disabled if
-  :setting:`CELERY_RATE_LIMTS` is enabled, and tasks sent directly to the
-  pool without going through the ready queue.
+* celeryd: The mediator thread is now disabled if
+  :setting:`CELERY_RATE_LIMTS` is enabled, and tasks are directly sent to the
+  pool without going through the ready queue (*Optimization*).

 .. _v210-fixes:

@@ -415,6 +414,10 @@ Fixes

     See http://bugs.python.org/issue10037

+* Fixed timing issue when declaring the remote control command reply queue
+
+    This issue could result in replies being lost, but has now been fixed.
+
 * Compat ``LoggerAdapter`` implementation: Now works for Python 2.4.

     Also added support for several new methods:

+ 1 - 1
README.rst

@@ -4,7 +4,7 @@

 .. image:: http://cloud.github.com/downloads/ask/celery/celery_favicon_128.png

-:Version: 2.1.0rc4
+:Version: 2.1.0
 :Web: http://celeryproject.org/
 :Download: http://pypi.python.org/pypi/celery/
 :Source: http://github.com/ask/celery/

+ 1 - 1
celery/__init__.py

@@ -1,6 +1,6 @@
 """Distributed Task Queue"""

-VERSION = (2, 1, 0, "rc4")
+VERSION = (2, 1, 0)

 __version__ = ".".join(map(str, VERSION[0:3])) + "".join(VERSION[3:])
 __author__ = "Ask Solem"
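
For reference, here is how the version-string expression evaluates before and
after this change (a plain interpreter session, not part of the commit)::

    >>> VERSION = (2, 1, 0, "rc4")
    >>> ".".join(map(str, VERSION[0:3])) + "".join(VERSION[3:])
    '2.1.0rc4'
    >>> VERSION = (2, 1, 0)
    >>> ".".join(map(str, VERSION[0:3])) + "".join(VERSION[3:])
    '2.1.0'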

+ 15 - 7
docs/homepage/index.html

@@ -55,11 +55,12 @@ pageTracker._trackPageview();

     <div class="column">
         <h2>Distributed Task Queue</h2>
-        <p> Celery is an asynchronous task queue/job queue based on distributed message passing.
-        It is focused on real-time operation, but supports scheduling as well.</p>
+        <p>Celery is an open source asynchronous task queue/job queue based on
+        distributed message passing.  It is focused on real-time operation,
+        but supports scheduling as well.</p>

-        <p>The execution units, called tasks, are executed concurrently on a single or
-        more worker servers. Tasks can execute asynchronously (in the background) or synchronously
+        <p>The execution units, called tasks, are executed concurrently on one or
+        more worker nodes. Tasks can execute asynchronously (in the background) or synchronously
         (wait until ready).</p>

         <p>Celery is already used in production to process millions of tasks a day.</p>
@@ -73,9 +74,9 @@ pageTracker._trackPageview();
         but support for <a href="http://redisdb.com/">Redis</a> and databases
         is also available.</p>

-        <p>You may also be pleased to know that full
-        <a href="http://pypi.python.org/pypi/django-celery">Django integration</a>exists,
-        delivered by the django-celery package.</p>
+        <p>Celery is easy to integrate with Django and Pylons, using
+        the <a href="http://pypi.python.org/pypi/django-celery">django-celery</a> and
+        <a href="http://bitbucket.org/ianschenck/celery-pylons">celery-pylons</a> add-on packages.</p>

         <h3>Example</h3>
         <p>This is a simple task adding two numbers:</p>
@@ -136,6 +137,13 @@ pageTracker._trackPageview();

     <div class="column side">

+      <span class="newsitem">
+      <h2>Celery 2.1 released!</h2>
+      <h4>By <a href="http://twitter.com/asksol">@asksol</a> on 2010-10-08.</h4>
+      <p>This new version is now available at PyPI.  Be sure to read the
+      <a href="http://celeryq.org/docs/changelog.html">Changelog</a> before upgrading.</p>
+      </span>
+
       <span class="newsitem">
       <h2>Celery 2.0 released!</h2>
       <h4>By <a href="http://twitter.com/asksol">@asksol</a> on 2010-07-02.</h4>

+ 1 - 1
docs/includes/introduction.txt

@@ -1,4 +1,4 @@
-:Version: 2.1.0rc4
+:Version: 2.1.0
 :Web: http://celeryproject.org/
 :Download: http://pypi.python.org/pypi/celery/
 :Source: http://github.com/ask/celery/