
Fixing typos after running spell check on the documentation

Ask Solem 15 years ago
parent
commit
7946ecc2d9

+ 10 - 10
docs/configuration.rst

@@ -235,7 +235,7 @@ MongoDB backend settings
         The port the MongoDB server is listening to. Defaults to 27017.
 
     * user
-        Username to authenticate to the MongoDB server as (optional).
+        User name to authenticate to the MongoDB server as (optional).
 
     * password
         Password to authenticate to the MongoDB server (optional).
@@ -244,7 +244,7 @@ MongoDB backend settings
         The database name to connect to. Defaults to "celery".
 
     * taskmeta_collection
-        The collection name to store task metadata.
+        The collection name to store task meta data.
         Defaults to "celery_taskmeta".
 
 
@@ -309,7 +309,7 @@ Connection
     The time between retries is increased for each retry, and is
     not exhausted before ``CELERY_BROKER_CONNECTION_MAX_RETRIES`` is exceeded.
 
-    This behaviour is on by default.
+    This behavior is on by default.
 
 * CELERY_BROKER_CONNECTION_MAX_RETRIES
     Maximum number of retries before we give up re-establishing a connection
@@ -325,7 +325,7 @@ Task execution settings
 * CELERY_ALWAYS_EAGER
     If this is ``True``, all tasks will be executed locally by blocking
     until it is finished. ``apply_async`` and ``Task.delay`` will return
-    a :class:`celery.result.EagerResult` which emulates the behaviour of
+    a :class:`celery.result.EagerResult` which emulates the behavior of
     :class:`celery.result.AsyncResult`, except the result has already
     been evaluated.
 
@@ -334,7 +334,7 @@ Task execution settings
 
 * CELERY_IGNORE_RESULT
 
-    Wheter to store the task return values or not (tombstones).
+    Whether to store the task return values or not (tombstones).
     If you still want to store errors, just not successful return values,
     you can set ``CELERY_STORE_ERRORS_EVEN_IF_IGNORED``.
 
@@ -360,7 +360,7 @@ Worker: celeryd
 * CELERY_IMPORTS
     A sequence of modules to import when the celery daemon starts.  This is
     useful to add tasks if you are not using django or cannot use task
-    autodiscovery.
+    auto-discovery.
 
 * CELERY_SEND_EVENTS
     Send events so the worker can be monitored by tools like ``celerymon``.
@@ -377,7 +377,7 @@ Logging
 -------
 
 * CELERYD_LOG_FILE
-    The default filename the worker daemon logs messages to, can be
+    The default file name the worker daemon logs messages to, can be
     overridden using the `--logfile`` option to ``celeryd``.
 
     The default is ``None`` (``stderr``)
@@ -407,7 +407,7 @@ Periodic Task Server: celerybeat
 
     Name of the file celerybeat stores the current schedule in.
     Can be a relative or absolute path, but be aware that the suffix ``.db``
-    will be appended to the filename.
+    will be appended to the file name.
 
     Can also be set via the ``--schedule`` argument.
 
@@ -417,7 +417,7 @@ Periodic Task Server: celerybeat
     the schedule. Default is 300 seconds (5 minutes).
 
 * CELERYBEAT_LOG_FILE
-    The default filename to log messages to, can be
+    The default file name to log messages to, can be
     overridden using the `--logfile`` option.
 
     The default is ``None`` (``stderr``).
@@ -435,7 +435,7 @@ Monitor Server: celerymon
 =========================
 
 * CELERYMON_LOG_FILE
-    The default filename to log messages to, can be
+    The default file name to log messages to, can be
     overridden using the `--logfile`` option.
 
     The default is ``None`` (``stderr``)
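
Taken together, the options touched in this file could be combined in a configuration module roughly like the sketch below. Only the option names shown above come from the documentation; the values are illustrative, and the ``CELERY_MONGODB_BACKEND_SETTINGS`` name for the MongoDB dictionary is an assumption.

.. code-block:: python

    # Hedged sketch of a settings module using the options discussed
    # in docs/configuration.rst. All values are examples only.

    # Execute tasks locally and synchronously (handy for development/tests).
    CELERY_ALWAYS_EAGER = False

    # Whether to store task return values (tombstones).
    CELERY_IGNORE_RESULT = False

    # Modules to import when the celery daemon starts.
    CELERY_IMPORTS = ("myapp.tasks",)

    # Default log file for the worker daemon (the documented default is
    # None, which means stderr).
    CELERYD_LOG_FILE = "/var/log/celeryd.log"

    # MongoDB result backend options; the setting name below is assumed
    # for illustration, the keys are the ones documented above.
    CELERY_MONGODB_BACKEND_SETTINGS = {
        "host": "localhost",
        "port": 27017,                         # default 27017
        "user": "celeryuser",                  # optional
        "password": "secret",                  # optional
        "database": "celery",                  # default "celery"
        "taskmeta_collection": "celery_taskmeta",
    }
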

+ 1 - 1
docs/cookbook/tasks.rst

@@ -11,7 +11,7 @@ You can accomplish this by using a lock.
 In this example we'll be using the cache framework to set a lock that is
 accessible for all workers.
 
-It's part of an imaginary RSS Feed application called ``djangofeeds``.
+It's part of an imaginary RSS feed importer called ``djangofeeds``.
 The task takes a feed URL as a single argument, and imports that feed into
 a Django model called ``Feed``. We ensure that it's not possible for two or
 more workers to import the same feed at the same time by setting a cache key
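
As a rough illustration of the locking pattern this recipe describes (not the recipe's actual code), a task can guard the import with Django's cache framework as sketched below; ``ImportFeedTask``, ``LOCK_EXPIRE`` and the ``Feed.objects.import_feed`` call are illustrative names.

.. code-block:: python

    from celery.task import Task
    from django.core.cache import cache

    LOCK_EXPIRE = 60 * 5  # lock expires after five minutes (illustrative)

    class ImportFeedTask(Task):
        """Import a feed, making sure only one worker handles a given URL."""

        def run(self, feed_url, **kwargs):
            lock_id = "import-feed-lock-%s" % feed_url
            # cache.add() only succeeds if the key does not already exist,
            # so it doubles as a simple lock shared by all workers.
            if not cache.add(lock_id, "true", LOCK_EXPIRE):
                return "Feed %s is already being imported" % feed_url
            try:
                from djangofeeds.models import Feed   # hypothetical model
                Feed.objects.import_feed(feed_url)    # hypothetical manager method
            finally:
                cache.delete(lock_id)
            return feed_url
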

+ 6 - 6
docs/cookbook/unit-testing.rst

@@ -6,7 +6,7 @@ Testing with Django
 -------------------
 
 The problem that you'll first run in to when trying to write a test that runs a
-task is that Django's testrunner doesn't use the same database that your celery
+task is that Django's test runner doesn't use the same database that your celery
 daemon is using. If you're using the database backend, this means that your
 tombstones won't show up in your test database and you won't be able to check
 on your tasks to get the return value or check the status.
@@ -15,19 +15,19 @@ There are two ways to get around this. You can either take advantage of
 ``CELERY_ALWAYS_EAGER = True`` to skip the daemon, or you can avoid testing
 anything that needs to check the status or result of a task.
 
-Using a custom testrunner to test with celery
----------------------------------------------
+Using a custom test runner to test with celery
+----------------------------------------------
 
 If you're going the ``CELERY_ALWAYS_EAGER`` route, which is probably better than
-just never testing some parts of your app, a custom Django testrunner does the
-trick. Celery provides a simple testrunner, but it's easy enough to roll your
+just never testing some parts of your app, a custom Django test runner does the
+trick. Celery provides a simple test runner, but it's easy enough to roll your
 own if you have other things that need to be done.
 http://docs.djangoproject.com/en/dev/topics/testing/#defining-a-test-runner
 
 For this example, we'll use the ``celery.contrib.test_runner`` to test the
 ``add`` task from the :doc:`User Guide: Tasks<../userguide/tasks>` examples.
 
-To enable the testrunner, set the following settings:
+To enable the test runner, set the following settings:
 
 .. code-block:: python
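
Once eager execution is enabled, a test against the ``add`` task from the user guide might look roughly like the sketch below; the module path and test class name are illustrative, not taken from the documentation.

.. code-block:: python

    from django.test import TestCase

    from myapp.tasks import add  # hypothetical location of the add task

    class AddTaskTestCase(TestCase):

        def test_add_runs_eagerly(self):
            # With CELERY_ALWAYS_EAGER = True, delay() runs the task locally
            # and returns an EagerResult we can inspect directly.
            result = add.delay(4, 4)
            self.assertTrue(result.successful())
            self.assertEqual(result.get(), 8)
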
 

+ 2 - 2
docs/getting-started/first-steps-with-django.rst

@@ -49,7 +49,7 @@ However, in production you probably want to run the worker in the
 background as a daemon. To do this you need to use to tools provided by your
 platform, or something like `supervisord`_.
 
-For example startup scripts see ``contrib/debian/init.d`` for using
+For example start-up scripts see ``contrib/debian/init.d`` for using
 ``start-stop-daemon`` on Debian/Ubuntu, or ``contrib/mac/org.celeryq.*`` for using
 ``launchd`` on Mac OS X.
 
@@ -101,7 +101,7 @@ picked it up.
 that RabbitMQ is running, and that the user/password has access to the virtual
 host you configured earlier.
 
-Right now we have to check the celery worker logfiles to know what happened
+Right now we have to check the celery worker log files to know what happened
 with the task. This is because we didn't keep the ``AsyncResult`` object
 returned by ``delay``.
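
Keeping the ``AsyncResult`` makes it possible to poll the task instead of reading the worker logs; a minimal interactive sketch (assuming a result backend is configured) could look like this:

.. code-block:: python

    >>> result = add.delay(4, 4)
    >>> result.ready()       # True once the task has finished
    True
    >>> result.result        # the return value of the task
    8
    >>> result.successful()  # True if the task raised no exception
    True
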
 

+ 2 - 2
docs/getting-started/first-steps-with-python.rst

@@ -82,7 +82,7 @@ However, in production you probably want to run the worker in the
 background as a daemon. To do this you need to use to tools provided by your
 platform, or something like `supervisord`_.
 
-For example startup scripts see ``contrib/debian/init.d`` for using
+For example start-up scripts see ``contrib/debian/init.d`` for using
 ``start-stop-daemon`` on Debian/Ubuntu, or ``contrib/mac/org.celeryq.*`` for using
 ``launchd`` on Mac OS X.
 
@@ -115,7 +115,7 @@ picked it up.
 that RabbitMQ is running, and that the user/password has access to the virtual
 host you configured earlier.
 
-Right now we have to check the celery worker logfiles to know what happened
+Right now we have to check the celery worker log files to know what happened
 with the task. This is because we didn't keep the ``AsyncResult`` object
 returned by ``delay``.
 

+ 10 - 13
docs/includes/introduction.txt

@@ -1,5 +1,6 @@
 :Version: 1.0.0-pre1
-:Keywords: task queue, job queue, asynchronous, rabbitmq, amqp, redis.
+:Keywords: task queue, job queue, asynchronous, rabbitmq, amqp, redis,
+  django, python, webhooks, queue, distributed
 
 --
 
@@ -56,13 +57,12 @@ Simple!
 Features
 ========
 
-    * Uses messaging (AMQP: RabbitMQ, ZeroMQ, Qpid) to route tasks to the
-      worker servers. Experimental support for STOMP (ActiveMQ) is also 
-      available. For simple setups it's also possible to use Redis or an
-      SQL database as the message queue.
+    * Supports using `RabbitMQ`_, `AMQP`_, `Stomp`_, `Redis`_ or a database
+      as the message queue. However, `RabbitMQ`_ is the recommended solution,
+      so most of the documentation refers to it.
 
-    * You can run as many worker servers as you want, and still
-      be *guaranteed that the task is only executed once.*
+    * Using RabbitMQ, celery is *very robust*. It should survive most
+      scenarios, and your tasks will never be lost.
 
     * Tasks are executed *concurrently* using the Python 2.6
       :mod:`multiprocessing` module (also available as a back-port
@@ -112,6 +112,9 @@ Features
     * Can be configured to send e-mails to the administrators when a task
       fails.
 
+.. _`RabbitMQ`: http://www.rabbitmq.com/
+.. _`AMQP`: http://www.amqp.org/
+.. _`Stomp`: http://stomp.codehaus.org/
 .. _`MongoDB`: http://www.mongodb.org/
 .. _`Redis`: http://code.google.com/p/redis/
 .. _`Tokyo Tyrant`: http://tokyocabinet.sourceforge.net/
@@ -157,9 +160,3 @@ Using the development version
 You can clone the repository by doing the following::
 
     $ git clone git://github.com/ask/celery.git
-
-A look inside the components
-============================
-
-.. image:: http://cloud.github.com/downloads/ask/celery/Celery1.0-inside-worker.jpg
-

+ 3 - 3
docs/tutorials/clickcounter.rst

@@ -89,13 +89,13 @@ functions:
   click it processes all of the messages first, calculates the new click count
   and issues one update per URL. A message that has been received will not be
   deleted from the broker until it has been acknowledged by the receiver, so
-  if the reciever dies in the middle of processing the message, it will be
+  if the receiver dies in the middle of processing the message, it will be
   re-sent at a later point in time. This guarantees delivery and we respect
   this feature here by not acknowledging the message until the clicks has
   actually been written to disk.
   
   **Note**: This could probably be optimized further with
-  some hand-written SQL, but it will do for now. Let's say it's an excersise
+  some hand-written SQL, but it will do for now. Let's say it's an exercise
   left for the picky reader, albeit a discouraged one if you can survive
   without doing it.
 
@@ -227,7 +227,7 @@ Finishing
 =========
 
 There are still ways to improve this application. The URLs could be cleaned
-so the url http://google.com and http://google.com/ is the same. Maybe it's
+so the URL http://google.com and http://google.com/ is the same. Maybe it's
 even possible to update the click count using a single UPDATE query?
 
 If you have any questions regarding this tutorial, please send a mail to the
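
For the URL clean-up mentioned above, a small normalization helper would probably be enough; this sketch (Python 2, as used by the tutorial) only strips the trailing slash and lower-cases the scheme and host, and is not part of the tutorial's code.

.. code-block:: python

    from urlparse import urlsplit, urlunsplit

    def normalize_url(url):
        """Make http://google.com and http://google.com/ compare equal."""
        scheme, netloc, path, query, fragment = urlsplit(url)
        if path == "/":
            path = ""
        return urlunsplit((scheme.lower(), netloc.lower(), path, query, fragment))
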

+ 2 - 2
docs/tutorials/otherqueues.rst

@@ -2,7 +2,7 @@
  Using Celery with Redis/Database as the messaging queue.
 ==========================================================
 
-There's a plugin for celery that enables the use of Redis or an SQL database
+There's a plug-in for celery that enables the use of Redis or an SQL database
 as the messaging queue. This is not part of celery itself, but exists as
 an extension to `carrot`_.
 
@@ -65,7 +65,7 @@ configuration values.
 
     $ python manage.py syncdb
 
-* Or if you're not using django, but the default loader instad run
+* Or if you're not using django, but the default loader instead run
   ``celeryinit``::
 
     $ celeryinit

+ 5 - 5
docs/userguide/executing.rst

@@ -52,7 +52,7 @@ specified date and time has passed, but not necessarily at that exact time.
 
 While ``countdown`` is an integer, ``eta`` must be a ``datetime`` object,
 specifying an exact date and time in the future. This is good if you already
-have a ``datatime`` object and need to modify it with a ``timedelta``, or when
+have a ``datetime`` object and need to modify it with a ``timedelta``, or when
 using time in seconds is not very readable.
 
 .. code-block:: python
@@ -79,7 +79,7 @@ serialization method is sent with the message so the worker knows how to
 deserialize any task (of course, if you use a custom serializer, this must also be
 registered in the worker.)
 
-When sending a task the serializition method is taken from the following
+When sending a task the serialization method is taken from the following
 places in order: The ``serializer`` argument to ``apply_async``, the
 Task's ``serializer`` attribute, and finally the global default ``CELERY_SERIALIZER``
 configuration directive.
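
A short sketch of the three levels described above, with the serializer values chosen only as examples:

.. code-block:: python

    from celery.task import Task

    class MyTask(Task):
        # Task-level default, used when apply_async() gets no serializer.
        serializer = "pickle"

        def run(self, x, y, **kwargs):
            return x + y

    # Per-call override: the serializer argument to apply_async() wins
    # over the class attribute.
    MyTask.apply_async(args=[10, 10], serializer="json")

    # When neither is set, the global CELERY_SERIALIZER configuration
    # directive is the final fallback.
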
@@ -135,7 +135,7 @@ In Python 2.5 and above, you can use the ``with`` statement:
     print([res.get() for res in results])
 
 
-*NOTE* TaskSets already re-uses the same connection, but not if you need to
+*NOTE* Task Sets already re-uses the same connection, but not if you need to
 execute more than one TaskSet.
 
 The connection timeout is the number of seconds to wait before we give up on
@@ -157,7 +157,7 @@ Routing options
 ---------------
 
 Celery uses the AMQP routing mechanisms to route tasks to different workers.
-You can route tasks using the following entitites: exchange, queue and routing key.
+You can route tasks using the following entities: exchange, queue and routing key.
 
 Messages (tasks) are sent to exchanges, a queue binds to an exchange with a
 routing key. Let's look at an example:
@@ -179,7 +179,7 @@ different ways, the exchange types are:
 * topic
 
     In the topic exchange the routing key is made up of words separated by dots (``.``).
-    Words can be matched by the wildcars ``*`` and ``#``, where ``*`` matches one
+    Words can be matched by the wild cards ``*`` and ``#``, where ``*`` matches one
     exact word, and ``#`` matches one or many.
 
     For example, ``*.stock.#`` matches the routing keys ``usd.stock`` and
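
As a hedged sketch of how these entities appear when sending a task, the snippet below publishes to an exchange with an explicit routing key; the task, exchange and routing key names are illustrative.

.. code-block:: python

    from celery.task import Task

    class StockUpdateTask(Task):  # hypothetical task, for illustration only
        def run(self, symbol, **kwargs):
            return symbol

    # Publish to the "stocks" exchange with a routing key that a queue
    # bound with e.g. "*.stock.#" on a topic exchange would match.
    StockUpdateTask.apply_async(
        args=["usd-goog"],
        exchange="stocks",
        routing_key="usd.stock.nasdaq",
    )
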

+ 8 - 8
docs/userguide/tasks.rst

@@ -161,12 +161,12 @@ Task options
 
     This is the name the task is registered as.
     You can set this name manually, or just use the default which is
-    atomatically generated using the module and class name.
+    automatically generated using the module and class name.
 
 * abstract
 
     Abstract classes are not registered, so they're
-    only used for making new task types by subclassing.
+    only used for making new task types by sub classing.
 
 * max_retries
 
@@ -230,7 +230,7 @@ Message and routing options
 * immediate
     Request immediate delivery. If the task cannot be routed to a
     task worker immediately, an exception will be raised. This is
-    instead of the default behaviour, where the broker will accept and
+    instead of the default behavior, where the broker will accept and
     queue the task, but with no guarantee that the task will ever
     be executed.
 
@@ -276,7 +276,7 @@ for the applications listed in ``INSTALLED_APPS``. If you want to do something
 special you can create your own loader to do what you want.
 
 The entity responsible for registering your task in the registry is a
-metaclass, :class:`TaskType`, this is the default metaclass for
+meta class, :class:`TaskType`, this is the default meta class for
 ``Task``. If you want to register your task manually you can set the
 ``abstract`` attribute:
 
@@ -285,7 +285,7 @@ metaclass, :class:`TaskType`, this is the default metaclass for
     class MyTask(Task):
         abstract = True
 
-This way the task won't be registered, but any task subclassing it will.
+This way the task won't be registered, but any task sub classing it will.
 
 So when we send a task, we don't send the function code, we just send the name
 of the task, so when the worker receives the message it can just look it up in
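
Putting the pieces above together, a hedged sketch of an abstract base class plus a concrete task registered under an explicit name (all names illustrative):

.. code-block:: python

    from celery.task import Task

    class AppBaseTask(Task):
        # Abstract classes are not registered; they only exist to be subclassed.
        abstract = True
        max_retries = 3

    class AddTask(AppBaseTask):
        # Registered under this explicit name instead of the one that would
        # be generated automatically from the module and class name.
        name = "myapp.tasks.add"

        def run(self, x, y, **kwargs):
            return x + y
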
@@ -310,7 +310,7 @@ won't run long enough to block the worker from processing other waiting tasks.
 
 But there's a limit, sending messages takes processing power and bandwidth. If
 your tasks are so short the overhead of passing them around is worse than
-just executing them inline, you should reconsider your strategy. There is no
+just executing them in-line, you should reconsider your strategy. There is no
 universal answer here.
 
 Data locality
@@ -341,7 +341,7 @@ on what machine the task will run, also you can't even know if the task will
 run in a timely manner, so please be wary of the state you pass on to tasks.
 
 One gotcha is Django model objects, they shouldn't be passed on as arguments
-to task classes, it's almost always better to refetch the object from the
+to task classes, it's almost always better to re-fetch the object from the
 database instead, as there are possible race conditions involved.
 
 Imagine the following scenario where you have an article, and a task
@@ -370,7 +370,7 @@ when the task is finally run, the body of the article is reverted to the old
 version, because the task had the old body in its argument.
 
 Fixing the race condition is easy, just use the article id instead, and
-refetch the article in the task body:
+re-fetch the article in the task body:
 
 .. code-block:: python