Use You and I instead of We

Ask Solem · 12 years ago
parent commit 613052a001

+ 2 - 2
docs/django/first-steps-with-django.rst

@@ -43,7 +43,7 @@ alternatives to choose from, see :ref:`celerytut-broker`.
 
 All settings mentioned in the Celery documentation should be added
 to your Django project's ``settings.py`` module. For example
-we can configure the :setting:`BROKER_URL` setting to specify
+you can configure the :setting:`BROKER_URL` setting to specify
 what broker to use::
 
     BROKER_URL = 'amqp://guest:guest@localhost:5672/'
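
For context: the point above is that every Celery setting lives in the same
Django ``settings.py`` module.  A minimal sketch of that module (the
result-backend line is an illustrative assumption, not part of this hunk):

.. code-block:: python

    # settings.py -- Celery settings sit alongside the Django ones.
    BROKER_URL = 'amqp://guest:guest@localhost:5672/'

    # Any other CELERY_* setting goes in this same module, e.g.:
    CELERY_RESULT_BACKEND = 'amqp'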
@@ -115,7 +115,7 @@ Calling our task
 ================
 
 Now that the worker is running, open up a new terminal to actually
-call the task we defined::
+call the task you defined::
 
     >>> from celerytest.tasks import add
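
A sketch of the call this snippet leads into (the ``delay`` line itself sits
outside the hunk)::

    >>> add.delay(4, 4)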
 

+ 10 - 13
docs/getting-started/first-steps-with-celery.rst

@@ -127,7 +127,7 @@ application or just app in short.  Since this instance is used as
 the entry-point for everything you want to do in Celery, like creating tasks and
 managing workers, it must be possible for other modules to import it.
 
-In this tutorial we will keep everything contained in a single module,
+In this tutorial you will keep everything contained in a single module,
 but for larger projects you want to create
 a :ref:`dedicated module <project-layout>`.
 
@@ -146,22 +146,19 @@ Let's create the file :file:`tasks.py`:
 The first argument to :class:`~celery.app.Celery` is the name of the current module,
 this is needed so that names can be automatically generated, the second
 argument is the broker keyword argument which specifies the URL of the
-message broker we want to use.
-
-The broker argument specifies the URL of the broker we want to use,
-we use RabbitMQ here, which is already the default option,
-but see :ref:`celerytut-broker` above if you want to use something different,
+message broker you want to use.  RabbitMQ is used here, as it is already the
+default option.  See :ref:`celerytut-broker` above for more choices,
 e.g. for Redis you can use ``redis://localhost``, or MongoDB:
 ``mongodb://localhost``.
 
-We defined a single task, called ``add``, which returns the sum of two numbers.
+You defined a single task, called ``add``, which returns the sum of two numbers.
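
Pieced together from the prose above, the module would read roughly like this
(a sketch, not the literal :file:`tasks.py` from the tutorial):

.. code-block:: python

    # tasks.py -- the first argument names the module; the broker URL
    # points at RabbitMQ, the default option.
    from celery import Celery

    celery = Celery('tasks', broker='amqp://guest:guest@localhost:5672//')

    @celery.task
    def add(x, y):
        return x + y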
 
 .. _celerytut-running-celeryd:
 
 Running the celery worker server
 ================================
 
-We now run the worker by executing our program with the ``worker``
+You now run the worker by executing your program with the ``worker``
 argument:
 
 .. code-block:: bash
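
    # Editor's sketch -- the verbatim command sits outside this hunk,
    # but it takes this general form:
    $ celery -A tasks worker --loglevel=info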
@@ -192,7 +189,7 @@ There are also several other commands available, and help is also available:
 Calling the task
 ================
 
-To call our task we can use the :meth:`~@Task.delay` method.
+To call the task you can use the :meth:`~@Task.delay` method.
 
 This is a handy shortcut to the :meth:`~@Task.apply_async`
 method which gives greater control of the task execution (see
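
A sketch of that contrast, ``countdown`` being one example of the extra
options :meth:`~@Task.apply_async` accepts::

    >>> add.delay(4, 4)
    >>> add.apply_async((4, 4), countdown=5)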
@@ -225,7 +222,7 @@ built-in result backends to choose from: `SQLAlchemy`_/`Django`_ ORM,
 .. _`SQLAlchemy`: http://www.sqlalchemy.org/
 .. _`Django`: http://djangoproject.com
 
-For this example we will use the `amqp` result backend, which sends states
+For this example you will use the `amqp` result backend, which sends states
 as messages.  The backend is specified via the ``backend`` argument to
 :class:`@Celery`, (or via the :setting:`CELERY_RESULT_BACKEND` setting if
 you choose to use a configuration module)::
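
    # Editor's sketch -- the original literal block sits outside this hunk:
    celery = Celery('tasks', backend='amqp', broker='amqp://')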
@@ -240,7 +237,7 @@ the message broker (a popular combination)::
 To read more about result backends please see :ref:`task-result-backends`.
 
 Now with the result backend configured, let's call the task again.
-This time we'll hold on to the :class:`~@AsyncResult` instance returned
+This time you'll hold on to the :class:`~@AsyncResult` instance returned
 when you call a task::
 
     >>> result = add.delay(4, 4)
@@ -251,7 +248,7 @@ has finished processing or not::
     >>> result.ready()
     False
 
-We can wait for the result to complete, but this is rarely used
+You can wait for the result to complete, but this is rarely used
 since it turns the asynchronous call into a synchronous one::
 
     >>> result.get(timeout=1)
@@ -264,7 +261,7 @@ the ``propagate`` argument::
     >>> result.get(propagate=True)
 
 
-If the task raised an exception we can also gain access to the
+If the task raised an exception you can also gain access to the
 original traceback::
 
     >>> result.traceback

+ 14 - 14
docs/getting-started/next-steps.rst

@@ -5,7 +5,7 @@
 ============
 
 The :ref:`first-steps` guide is intentionally minimal.  In this guide
-we will demonstrate what Celery offers in more detail, including
+I will demonstrate what Celery offers in more detail, including
 how to add Celery support for your application and library.
 
 This document does not document all of Celery's features and
@@ -36,7 +36,7 @@ Project layout::
 .. literalinclude:: ../../examples/next-steps/proj/celery.py
     :language: python
 
-In this module we created our :class:`@Celery` instance (sometimes
+In this module you created your :class:`@Celery` instance (sometimes
 referred to as the *app*).  To use Celery within your project
 you simply import this instance.
 
@@ -47,17 +47,17 @@ you simply import this instance.
 - The ``backend`` argument specifies the result backend to use,
 
     It's used to keep track of task state and results.
-    While results are disabled by default we use the amqp backend here
-    to demonstrate how retrieving the results work, you may want to use
-    a different backend for your application, as they all have different
-    strengths and weaknesses.  If you don't need results it's best
+    While results are disabled by default, I use the amqp backend here
+    because I demonstrate how retrieving results works later on.  You may
+    want to use a different backend for your application, as they all have
+    different strengths and weaknesses.  If you don't need results it's better
     to disable them.  Results can also be disabled for individual tasks
     by setting the ``@task(ignore_result=True)`` option.
 
     See :ref:`celerytut-keeping-results` for more information.
 
 - The ``include`` argument is a list of modules to import when
-  the worker starts.  We need to add our tasks module here so
+  the worker starts.  You need to add our tasks module here so
   that the worker is able to find our tasks.
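
A sketch of :file:`proj/celery.py` consistent with the arguments described
above (reconstructed for illustration, not the literalincluded file itself):

.. code-block:: python

    # proj/celery.py
    from __future__ import absolute_import

    from celery import Celery

    celery = Celery('proj',
                    broker='amqp://',
                    backend='amqp://',
                    include=['proj.tasks'])

    if __name__ == '__main__':
        celery.start()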
 
 :file:`proj/tasks.py`
@@ -275,9 +275,9 @@ backend that suits every application, so to choose one you need to consider
 the drawbacks of each individual backend.  For many tasks
 keeping the return value isn't even very useful, so it's a sensible default to
 have.  Also note that result backends are not used for monitoring tasks and workers,
-for that we use dedicated event messages (see :ref:`guide-monitoring`).
+for that Celery uses dedicated event messages (see :ref:`guide-monitoring`).
 
-If you have a result backend configured we can retrieve the return
+If you have a result backend configured you can retrieve the return
 value of a task::
 
     >>> res = add.delay(2, 2)
@@ -289,7 +289,7 @@ You can find the task's id by looking at the :attr:`id` attribute::
     >>> res.id
     d6b3aea2-fb9b-4ebc-8da4-848818db9114
 
-We can also inspect the exception and traceback if the task raised an
+You can also inspect the exception and traceback if the task raised an
 exception, in fact ``result.get()`` will propagate any errors by default::
 
     >>> res = add.delay(2)
@@ -359,7 +359,7 @@ Calling tasks is described in detail in the
 *Canvas*: Designing Workflows
 =============================
 
-We just learned how to call a task using the tasks ``delay`` method,
+You just learned how to call a task using the task's ``delay`` method,
 and this is often all you need, but sometimes you may want to pass the
 signature of a task invocation to another process or as an argument to another
 function, for this Celery uses something called *subtasks*.
@@ -408,7 +408,7 @@ and this can be resolved when calling the subtask::
     >>> res.get()
     10
 
-Here we added the argument 8, which was prepended to the existing argument 2
+Here you added the argument 8, which was prepended to the existing argument 2
 forming a complete signature of ``add(8, 2)``.
 
 Keyword arguments can also be added later, these are then merged with any
@@ -430,8 +430,8 @@ As stated subtasks supports the calling API, which means that:
   to the arguments in the signature, and keyword arguments is merged with any
   existing keys.
 
-So this all seems very useful, but what can we actually do with these?
-To get to that we must introduce the canvas primitives...
+So this all seems very useful, but what can you actually do with these?
+To get to that I must introduce the canvas primitives...
 
 The Primitives
 --------------

+ 3 - 3
docs/userguide/application.rst

@@ -58,7 +58,7 @@ Whenever you define a task, that task will also be added to the local registry:
     >>> celery.tasks['__main__.add']
     <@task: __main__.add>
 
-and there we see that ``__main__`` again; whenever Celery is not able
+and there you see that ``__main__`` again; whenever Celery is not able
 to detect what module the function belongs to, it uses the main module
 name to generate the beginning of the task name.
 
@@ -254,7 +254,7 @@ of the task to happen either when the task is used, or after the
 application has been *finalized*,
 
 This example shows how the task is not created until
-we use the task, or access an attribute (in this case :meth:`repr`):
+you use the task, or access an attribute (in this case :meth:`repr`):
 
 .. code-block:: python
 
@@ -329,7 +329,7 @@ While it's possible to depend on the current app
 being set, the best practice is to always pass the app instance
 around to anything that needs it.
 
-We call this the "app chain", since it creates a chain
+I call this the "app chain", since it creates a chain
 of instances depending on the app being passed.
 
 The following example is considered bad practice:
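
The example itself is cut off here; the pattern being discouraged, and its
fix, look roughly like this (a sketch assuming :data:`~celery.current_app`):

.. code-block:: python

    # Bad: depends on whichever app happens to be current.
    from celery import current_app

    class Scheduler(object):

        def run(self):
            app = current_app  # hidden dependency on global state

    # Good: the app is passed down the "app chain".
    class Scheduler(object):

        def __init__(self, app):
            self.app = app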

+ 2 - 2
docs/userguide/calling.rst

@@ -66,7 +66,7 @@ function:
 
     task.delay(arg1, arg2, kwarg1='x', kwarg2='y')
 
-Using :meth:`~@Task.apply_async` instead we have to write:
+Using :meth:`~@Task.apply_async` instead you have to write:
 
 .. code-block:: python
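
    # Editor's sketch of the equivalent call (the verbatim line sits
    # outside this hunk):
    task.apply_async(args=[arg1, arg2], kwargs={'kwarg1': 'x', 'kwarg2': 'y'})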
 
@@ -118,7 +118,7 @@ as a partial argument:
 
 .. sidebar:: What is ``s``?
 
-    The ``add.s`` call used here is called a subtask, we talk
+    The ``add.s`` call used here is called a subtask; I talk
     more about subtasks in the :ref:`canvas guide <guide-canvas>`,
     where you can also learn about :class:`~celery.chain`, which
     is a simpler way to chain tasks together.

+ 14 - 13
docs/userguide/canvas.rst

@@ -15,10 +15,11 @@ Subtasks
 
 .. versionadded:: 2.0
 
-We just learned how to call a task using the tasks ``delay`` method,
-and this is often all you need, but sometimes you may want to pass the
-signature of a task invocation to another process or as an argument to another
-function, for this Celery uses something called *subtasks*.
+You just learned how to call a task using the task's ``delay`` method
+in the :ref:`calling <calling>` guide, and this is often all you need,
+but sometimes you may want to pass the signature of a task invocation to
+another process or as an argument to another function.  For this Celery uses
+something called *subtasks*.
 
 A :func:`~celery.subtask` wraps the arguments, keyword arguments, and execution options
 of a single task invocation in a way such that it can be passed to functions
@@ -48,7 +49,7 @@ or even serialized and sent across the wire.
         >>> add.s(2, 2, debug=True)
         tasks.add(2, 2, debug=True)
 
-- From any subtask instance we can inspect the different fields::
+- From any subtask instance you can inspect the different fields::
 
         >>> s = add.subtask((2, 2), {'debug': True}, countdown=10)
         >>> s.args
@@ -133,7 +134,7 @@ so it's not possible to call the subtask with partial args/kwargs.
 
 .. note::
 
-    In this tutorial we sometimes use the prefix operator `~` to subtasks.
+    In this tutorial I sometimes apply the prefix operator `~` to subtasks.
     You probably shouldn't use it in your production code, but it's a handy shortcut
     when experimenting in the Python shell::
 
@@ -158,7 +159,7 @@ to ``apply_async``::
 The callback will only be applied if the task exited successfully,
 and it will be applied with the return value of the parent task as argument.
 
-As we mentioned earlier, any arguments you add to `subtask`,
+As I mentioned earlier, any arguments you add to `subtask`
 will be prepended to the arguments specified by the subtask itself!
 
 If you have the subtask::
@@ -255,7 +256,7 @@ Here's some examples:
 
 - Immutable subtasks
 
-    As we have learned signatures can be partial, so that arguments can be
+    Signatures can be partial, so arguments can be
     added to the existing arguments, but you may not always want that,
     for example if you don't want the result of the previous task in a chain.
 
@@ -268,7 +269,7 @@ Here's some examples:
 
         >>> add.si(2, 2)
 
-    Now we can create a chain of independent tasks instead::
+    Now you can create a chain of independent tasks instead::
 
         >>> res = (add.si(2, 2), add.si(4, 4), add.s(8, 8))()
         >>> res.get()
@@ -282,7 +283,7 @@ Here's some examples:
 
 - Simple group
 
-    We can easily create a group of tasks to execute in parallel::
+    You can easily create a group of tasks to execute in parallel::
 
         >>> from celery import group
         >>> res = group(add.s(i, i) for i in xrange(10))()
@@ -305,7 +306,7 @@ Here's some examples:
             >>> g = group(add.s(i, i) for i in xrange(10))
             >>> g.apply_async()
 
-        This is useful because we can e.g. specify a time for the
+        This is useful because you can e.g. specify a time for the
         messages in the group to be called::
 
             >>> g.apply_async(countdown=10)
@@ -671,7 +672,7 @@ finished executing.
 Let's calculate the sum of the expression
 :math:`1 + 1 + 2 + 2 + 3 + 3 ... n + n` up to a hundred digits.
 
-First we need two tasks, :func:`add` and :func:`tsum` (:func:`sum` is
+First you need two tasks, :func:`add` and :func:`tsum` (:func:`sum` is
 already a standard function):
 
 .. code-block:: python
@@ -685,7 +686,7 @@ already a standard function):
         return sum(numbers)
 
 
-Now we can use a chord to calculate each addition step in parallel, and then
+Now you can use a chord to calculate each addition step in parallel, and then
 get the sum of the resulting numbers::
 
     >>> from celery import chord
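    >>> # Editor's sketch of how the example continues (assumed, not verbatim):
    >>> chord(add.s(i, i) for i in xrange(100))(tsum.s()).get()
    9900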

+ 9 - 8
docs/userguide/monitoring.rst

@@ -316,7 +316,9 @@ By default monitor data for successful tasks will expire in 1 day,
 failed tasks in 3 days and pending tasks in 5 days.
 
 You can change the expiry times for each of these using
-adding the following settings to your :file:`settings.py`::
+the following settings in your :file:`settings.py`:
+
+.. code-block:: python
 
     from datetime import timedelta
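
    # Editor's sketch -- these django-celery camera settings are an
    # assumption inferred from the 1/3/5 day defaults described above:
    CELERYCAM_EXPIRE_SUCCESS = timedelta(days=1)
    CELERYCAM_EXPIRE_ERROR = timedelta(days=3)
    CELERYCAM_EXPIRE_PENDING = timedelta(days=5)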
 
@@ -588,7 +590,7 @@ Even a single worker can produce a huge amount of events, so storing
 the history of all events on disk may be very expensive.
 
 A sequence of events describes the cluster state in that time period,
-by taking periodic snapshots of this state we can keep all history, but
+by taking periodic snapshots of this state you can keep all history, but
 still only periodically write it to disk.
 
 To take snapshots you need a Camera class, with this you can define
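
A minimal camera along those lines, assuming the
:class:`celery.events.snapshot.Polaroid` base class this section builds on:

.. code-block:: python

    from pprint import pformat

    from celery.events.snapshot import Polaroid

    class DumpCam(Polaroid):

        def on_shutter(self, state):
            if not state.event_count:
                # No new events since the last snapshot.
                return
            print('Workers: %s' % (pformat(state.workers, indent=4),))
            print('Tasks: %s' % (pformat(state.tasks, indent=4),))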
@@ -638,7 +640,7 @@ See the API reference for :mod:`celery.events.state` to read more
 about state objects.
 
 Now you can use this cam with :program:`celery events` by specifying
-it with the `-c` option:
+it with the :option:`-c` option:
 
 .. code-block:: bash
 
@@ -667,7 +669,7 @@ Or you can use it programmatically like this:
 Real-time processing
 --------------------
 
-To process events in real-time we need the following
+To process events in real-time you need the following:
 
 - An event consumer (this is the ``Receiver``)
 
@@ -686,8 +688,7 @@ To process events in real-time we need the following
   together as events come in, making sure timestamps are in sync, and so on.
 
 
-
-Combining these we can easily process events in real-time:
+Combining these, you can easily process events in real-time:
 
 
 .. code-block:: python
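
    # Editor's sketch of the capture loop, shaped after the components
    # listed above (assumed, not the verbatim example):
    from celery import Celery

    def my_monitor(app):
        state = app.events.State()

        def on_event(event):
            state.event(event)             # keep the cluster state in sync
            print('EVENT: %s' % (event,))

        with app.connection() as connection:
            recv = app.events.Receiver(connection, handlers={'*': on_event})
            recv.capture(wakeup=True)

    if __name__ == '__main__':
        my_monitor(Celery(broker='amqp://guest@localhost//'))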
@@ -714,11 +715,11 @@ Combining these we can easily process events in real-time:
 .. note::
 
     The wakeup argument to ``capture`` sends a signal to all workers
-    to force them to send a heartbeat.  This way we can immediately see
+    to force them to send a heartbeat.  This way you can immediately see
     workers when the monitor starts.
 
 
-We can listen to specific events by specifying the handlers:
+You can listen to specific events by specifying the handlers:
 
 .. code-block:: python
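
    # Editor's sketch: a handler keyed to a single event type (assumed
    # shape; the verbatim example sits outside this hunk):
    def on_task_failed(event):
        print('TASK FAILED: %s' % (event['uuid'],))

    recv = app.events.Receiver(connection, handlers={
        'task-failed': on_task_failed,
    })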
 

+ 8 - 6
docs/userguide/routing.rst

@@ -380,7 +380,7 @@ have executed so far.  Type ``help`` for a list of commands available.
 It also supports auto-completion, so you can start typing a command and then
 hit the `tab` key to show a list of possible matches.
 
-Let's create a queue we can send messages to:
+Let's create a queue you can send messages to:
 
 .. code-block:: bash
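
    # Editor's sketch of the session (commands inferred from the
    # description that follows, not copied from the file):
    1> exchange.declare testexchange direct
    ok.
    2> queue.declare testqueue
    ok. queue:testqueue messages:0 consumers:0.
    3> queue.bind testqueue testexchange testkey
    ok.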
 
@@ -397,14 +397,16 @@ named ``testqueue``.  The queue is bound to the exchange using
 the routing key ``testkey``.
 
 From now on all messages sent to the exchange ``testexchange`` with routing
-key ``testkey`` will be moved to this queue.  We can send a message by
+key ``testkey`` will be moved to this queue.  You can send a message by
 using the ``basic.publish`` command::
 
     4> basic.publish 'This is a message!' testexchange testkey
     ok.
 
-Now that the message is sent we can retrieve it again.  We use the
-``basic.get``` command here, which polls for new messages on the queue.
+Now that the message is sent you can retrieve it again.  You can use the
+``basic.get`` command here, which polls for new messages on the queue
+(which is alright for maintenance tasks, but for services you'd want to use
+``basic.consume`` instead).
 
 Pop a message off the queue::
 
@@ -429,12 +431,12 @@ This tag is used to acknowledge the message.  Also note that
 delivery tags are not unique across connections, so in another client
 the delivery tag `1` might point to a different message than in this channel.
 
-You can acknowledge the message we received using ``basic.ack``::
+You can acknowledge the message you received using ``basic.ack``::
 
     6> basic.ack 1
     ok.
 
-To clean up after our test session we should delete the entities we created::
+To clean up after your test session you should delete the entities you created::
 
     7> queue.delete testqueue
     ok. 0 messages deleted.

+ 10 - 10
docs/userguide/tasks.rst

@@ -309,7 +309,7 @@ Here's an example using ``retry``:
         except (Twitter.FailWhaleError, Twitter.LoginError), exc:
             raise send_twitter_status.retry(exc=exc)
 
-Here we used the `exc` argument to pass the current exception to
+Here the `exc` argument was used to pass the current exception to
 :meth:`~@Task.retry`.  Both the exception and the traceback will
 be available in the task state (if a result backend is enabled).
 
@@ -692,7 +692,7 @@ Use :meth:`~@Task.update_state` to update a task's state::
                 meta={'current': i, 'total': len(filenames)})
 
 
-Here we created the state `"PROGRESS"`, which tells any application
+Here I created the state `"PROGRESS"`, which tells any application
 aware of this state that the task is currently in progress, and also where
 it is in the process by having `current` and `total` counts as part of the
 state metadata.  This can then be used to create e.g. progress bars.
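
Reassembled around the truncated call above, the whole task would read
roughly like this (a sketch; the app instance and task name are assumed):

.. code-block:: python

    from celery import Celery, current_task

    celery = Celery('tasks', broker='amqp://')  # hypothetical app

    @celery.task
    def upload_files(filenames):
        for i, file in enumerate(filenames):
            # Report progress so a monitor can render e.g. a progress bar.
            current_task.update_state(state='PROGRESS',
                meta={'current': i, 'total': len(filenames)})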
@@ -952,7 +952,7 @@ task as :attr:`~@Task.abstract`:
 This way the task won't be registered, but any task inheriting from
 it will be.
 
-When tasks are sent, we don't send any actual function code, just the name
+When a task is sent, no actual function code is sent with it, just the name
 of the task to execute.  When the worker then receives the message it can look
 up the name in its task registry to find the execution code.
 
@@ -1057,7 +1057,7 @@ Make your design asynchronous instead, for example by using *callbacks*.
         PageInfo.objects.create(url, info)
 
 
-Here we instead create a chain of tasks by linking together
+Here I instead created a chain of tasks by linking together
 different :func:`~celery.subtask`'s.
 You can read about chains and other powerful constructs
 at :ref:`designing-workflows`.
@@ -1229,8 +1229,8 @@ Let's take a real world example; A blog where comments posted need to be
 filtered for spam.  When the comment is created, the spam filter runs in the
 background, so the user doesn't have to wait for it to finish.
 
-We have a Django blog application allowing comments
-on blog posts.  We'll describe parts of the models/views and tasks for this
+I have a Django blog application allowing comments
+on blog posts.  I'll describe parts of the models/views and tasks for this
 application.
 
 blog/models.py
@@ -1260,8 +1260,8 @@ The comment model looks like this:
             verbose_name_plural = _('comments')
 
 
-In the view where the comment is posted, we first write the comment
-to the database, then we launch the spam filter task in the background.
+In the view where the comment is posted, I first write the comment
+to the database, then I launch the spam filter task in the background.
 
 .. _task-example-blog-views:
 
@@ -1304,12 +1304,12 @@ blog/views.py
         return render_to_response(template_name, context_instance=context)
 
 
-To filter spam in comments we use `Akismet`_, the service
+To filter spam in comments I use `Akismet`_, the service
 used to filter spam in comments posted to the free weblog platform
 `Wordpress`.  `Akismet`_ is free for personal use, but for commercial use you
 need to pay.  You have to sign up to their service to get an API key.
 
-To make API calls to `Akismet`_ we use the `akismet.py`_ library written by
+To make API calls to `Akismet`_ I use the `akismet.py`_ library written by
 `Michael Foord`_.
 
 .. _task-example-blog-tasks:

+ 2 - 2
docs/userguide/workers.rst

@@ -461,9 +461,9 @@ The same can be accomplished dynamically using the :meth:`@control.add_consumer`
     [{u'worker1.local': {u'ok': u"already consuming from u'foo'"}}]
 
 
-By now we have only used automatic queues, which is only using a queue name.
+By now I have only shown examples using automatic queues.
 If you need more control you can also specify the exchange, routing_key and
-other options::
+even other options::
 
     >>> myapp.control.add_consumer(
     ...     queue='baz',
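    ...     # Editor's sketch of the remaining options (assumed; the
    ...     # original call is truncated here):
    ...     exchange='ex',
    ...     exchange_type='topic',
    ...     routing_key='media.*',
    ...     reply=True,
    ...     destination=['worker1.local'])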