Browse Source

Fiddled with docs. Took out a bunch of filler words, tried to make more clear

runeh 15 years ago
parent
commit
98794d472d

+ 14 - 19
docs/getting-started/first-steps-with-django.rst

@@ -25,13 +25,15 @@ You only need three simple steps to use celery with your Django project.
 
 
 That's it.
 
 
-There are more options available, like how many processes you want to process
-work in parallel (the ``CELERY_CONCURRENCY`` setting), and the backend used
-for storing task statuses. But for now, this should do. For all of the options
-available, please see the :doc:`configuration directive
+There are more options available, like how many processes you want to
+work in parallel (the ``CELERY_CONCURRENCY`` setting). You can also
+configure the backend used for storing task statuses. For now though, 
+this should do. For all of the options available, please see the 
+:doc:`configuration directive
+
 reference<../configuration>`.
 
 
-**Note**: If you're using SQLite as the Django database back-end,
+**Note:** If you're using SQLite as the Django database back-end,
 ``celeryd`` will only be able to process one task at a time, this is
 because SQLite doesn't allow concurrent writes.
 
 
@@ -47,13 +49,7 @@ see what's going on without consulting the logfile::
 
 
 However, in production you probably want to run the worker in the
 background as a daemon. To do this you need to use to tools provided by your
-platform, or something like `supervisord`_.
-
-For example start-up scripts see ``contrib/debian/init.d`` for using
-``start-stop-daemon`` on Debian/Ubuntu, or ``contrib/mac/org.celeryq.*`` for using
-``launchd`` on Mac OS X.
-
-.. _`supervisord`: http://supervisord.org/
+platform, or something like `supervisord`_. FIXME: See solutions for deamonizing.
 
 
 For a complete listing of the command line arguments available, with a short
 description, you can use the help command::
@@ -64,12 +60,12 @@ description, you can use the help command::
 Defining and executing tasks
 ============================
 
 
-**Please note** All of these tasks has to be stored in a real module, they can't
+**Please note:** All the tasks have to be stored in a real module, they can't
 be defined in the python shell or ipython/bpython. This is because the celery
 worker server needs access to the task function to be able to run it.
-Put them in the ``tasks`` module of your
-Django application. The worker server will automatically load any ``tasks.py``
-file for all of the applications listed in ``settings.INSTALLED_APPS``.
+Put them in the ``tasks`` module of your Django application. The
+worker server  will automatically load any ``tasks.py`` file for all
+of the applications listed in ``settings.INSTALLED_APPS``.
 Executing tasks using ``delay`` and ``apply_async`` can be done from the
 python shell, but keep in mind that since arguments are pickled, you can't
 use custom classes defined in the shell session.
@@ -84,8 +80,7 @@ This is a task that adds two numbers:
     def add(x, y):
         return x + y
 
 
-Now if we want to execute this task, we can use the
-``delay`` method of the task class.
+To execute this task, we can use the ``delay`` method of the task class.
 This is a handy shortcut to the ``apply_async`` method which gives
 greater control of the task execution.
 See :doc:`Executing Tasks<../userguide/executing>` for more information.
@@ -97,7 +92,7 @@ At this point, the task has been sent to the message broker. The message
 broker will hold on to the task until a celery worker server has successfully
 picked it up.
 
 
-*Note* If everything is just hanging when you execute ``delay``, please check
+*Note:* If everything is just hanging when you execute ``delay``, please check
 that RabbitMQ is running, and that the user/password has access to the virtual
 host you configured earlier.
 
 
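The hunks above lean on the ``tasks`` module convention and the ``delay`` shortcut. As a hedged sketch (the decorator import reflects the celery API of this era and may differ in current releases; the plain function is shown undecorated so the sketch runs without a broker or the celery package installed):

```python
# Hypothetical tasks.py for a Django app, as described in the hunks above.
# With celery installed you would decorate the function, e.g.:
#
#     from celery.decorators import task
#
#     @task
#     def add(x, y):
#         return x + y
#
# and dispatch it with add.delay(4, 4) or add.apply_async(args=(4, 4)).
def add(x, y):
    return x + y

# What the worker would compute when it picks up add.delay(4, 4):
result = add(4, 4)
print(result)
```

The worker only discovers such a function if the ``tasks.py`` file lives in an application listed in ``settings.INSTALLED_APPS``, per the text above.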

+ 52 - 30
docs/getting-started/first-steps-with-python.rst

@@ -5,10 +5,12 @@
 Creating a simple task
 ======================
 
 
-We put tasks in a dedicated ``tasks.py`` module. Your tasks can be in
-any module, but it's a good convention.
+In this example we are creating a simple task that adds two
+numbers. Tasks are defined in a normal python module. The module can
+be named whatever you like, but the convention is to call it
+``tasks.py``.
 
 
-Our task is simple, just adding two numbers
+Our addition task looks like this:
 
 
 ``tasks.py``:
 
 
@@ -21,22 +23,34 @@ Our task is simple, just adding two numbers
         return x + y
 
 
 
 
-Tasks in celery are actually classes inheriting from the ``Task`` class.
-When you create a new task it is automatically registered in a registry, but
-for this to happen in the worker you need to give a list of modules the worker
-should import.
+All celery tasks are classes that inherit from the ``Task``
+class. In this case we're using a decorator that wraps the add
+function in an appropriate class for us automatically. The full
+documentation on how to create tasks and task classes are in
+FIXMELINKHERE.
+
+Celery workers maintain a registry of all the available tasks. For it
+to be able to do this you need to give it a list of which modules it
+should import. FIXME: move this below?
 
 
 Configuration
 =============
 
 
-Celery needs a configuration module, usually called ``celeryconfig.py``.
-This module must be importable and located in the Python path.
+Celery is configured by using a configuration module. By convention,
+this module is called ``celeryconfig.py``. This module must be in the
+Python path so it can be imported.
 
 
 You can set a custom name for the configuration module with the
-``CELERY_CONFIG_MODULE`` variable. In these examples we use the default name.
+``CELERY_CONFIG_MODULE`` variable. In these examples we use the
+default name.
+
 
 
 Let's create our ``celeryconfig.py``.
 
 
+FIXME: Is the invocation below something people are expected to do,
+appending cwd to sys.path? It seems like something that would usually
+be handled elsewhere?
+
 1. Start by making sure Python is able to import modules from the current
    directory::
 
 
@@ -44,7 +58,7 @@ Let's create our ``celeryconfig.py``.
         import sys
         sys.path.insert(0, os.getcwd())
 
 
-2. Configure the broker::
+2. Configure how we communicate with the broker::
 
 
         BROKER_HOST = "localhost"
         BROKER_PORT = 5672
@@ -52,35 +66,46 @@ Let's create our ``celeryconfig.py``.
         BROKER_PASSWORD = "mypassword"
         BROKER_VHOST = "myvhost"
 
 
-3. We don't want to store the results, so we'll just use the simplest
-   backend available; the AMQP backend::
+3. In this example we don't want to store the results of the tasks, so
+   we'll use the simplest backend available; the AMQP backend::
 
 
         CELERY_BACKEND = "amqp"
 
 
-4. Finally, we list the modules to import. We only have a single module; the
-   ``tasks.py`` module we added earlier::
+4. Finally, we list the modules to import. We only have a single task
+   module, ``tasks.py``, which we added earlier::
 
 
         CELERY_IMPORTS = ("tasks", )
 
 
 That's it.
 
 
-There are more options available, like how many processes you want to process
-work in parallel (the ``CELERY_CONCURRENCY`` setting), and we could use a
-persistent result store backend, but for now, this should do. For all of
-the options available, please see the :doc:`configuration directive
+There are more options available, like how many processes you want to
+process work in parallel (the ``CELERY_CONCURRENCY`` setting), and we
+could use a persistent result store backend, but for now, this should
+do. For all of the options available, see the 
+:doc:`configuration directive
+
 reference<../configuration>`.
 
 
 Running the celery worker server
 ================================
 
 
-To test this we'll be running the worker server in the foreground, so we can
-see what's going on without consulting the logfile::
+To test we will run the worker server in the foreground, so we can
+see what's going on in the terminal::
 
 
     $ celeryd --loglevel=INFO
 
 
 However, in production you probably want to run the worker in the
-background as a daemon. To do this you need to use to tools provided by your
-platform, or something like `supervisord`_.
+background as a daemon. To do this you need to use to tools provided
+by your platform, or something like `supervisord`_.
+
+For a complete listing of the command line options available, use the
+help command::
+
+    $  celeryd --help
+
+
+FIXME: Move this to a FAQ section or something and link it from the
+supervisord line above:
 
 
 For example start-up scripts see ``contrib/debian/init.d`` for using
 ``start-stop-daemon`` on Debian/Ubuntu, or ``contrib/mac/org.celeryq.*`` for using
@@ -88,17 +113,14 @@ For example start-up scripts see ``contrib/debian/init.d`` for using
 
 
 .. _`supervisord`: http://supervisord.org/
 
 
-For a complete listing of the command line arguments available, with a short
-description, you can use the help command::
-
-    $  celeryd --help
 
 
 
 
 Executing the task
 ==================
 
 
-Now if we want to execute our task, we can use the
-``delay`` method of the task class.
+Whenever we want to execute our task, we can use the ``delay`` method
+of the task class.
+
 This is a handy shortcut to the ``apply_async`` method which gives
 greater control of the task execution.
 See :doc:`Executing Tasks<../userguide/executing>` for more information.
@@ -111,7 +133,7 @@ At this point, the task has been sent to the message broker. The message
 broker will hold on to the task until a celery worker server has successfully
 picked it up.
 
 
-*Note* If everything is just hanging when you execute ``delay``, please check
+*Note:* If everything is just hanging when you execute ``delay``, please check
 that RabbitMQ is running, and that the user/password has access to the virtual
 host you configured earlier.
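The numbered configuration steps in the second file can be collected into one ``celeryconfig.py`` sketch. This only restates the settings visible in the diff (the broker user setting falls in an elided part of the hunk and is left out here); the credential values are the placeholders from the example, not anything you should ship:

```python
# Hypothetical celeryconfig.py assembled from steps 1-4 in the diff above.
import os
import sys

# Step 1: make the current directory importable so the worker can find tasks.py.
sys.path.insert(0, os.getcwd())

# Step 2: how we communicate with the broker (placeholder values from the example).
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_PASSWORD = "mypassword"
BROKER_VHOST = "myvhost"

# Step 3: simplest result backend; task results are not stored persistently.
CELERY_BACKEND = "amqp"

# Step 4: modules the worker imports so the tasks get registered.
CELERY_IMPORTS = ("tasks", )
```

With this module on the Python path (or named via ``CELERY_CONFIG_MODULE``), ``celeryd --loglevel=INFO`` picks it up as described above.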