Browse Source

Remove daemon related information from documentation.

Ask Solem, 15 years ago
Parent commit: 1efc54143a
6 changed files, 30 additions and 62 deletions
  1. Changelog (+1 −1)
  2. FAQ (+3 −3)
  3. README.rst (+9 −6)
  4. docs/configuration.rst (+4 −28)
  5. docs/introduction.rst (+9 −6)
  6. docs/reference/celery.conf.rst (+4 −18)

+ 1 - 1
Changelog

@@ -55,7 +55,7 @@ BACKWARD INCOMPATIBLE CHANGES
 
 
  To launch the periodic task scheduler you have to run celerybeat::

-		$ celerybeat --detach
+		$ celerybeat

  Make sure this is running on one server only, if you run it twice, all
  periodic tasks will also be executed twice.

+ 3 - 3
FAQ

@@ -158,14 +158,14 @@ Why won't my Task run?
 (or in some other module Django loads by default, like ``models.py``?).
 Also there might be syntax errors preventing the tasks module being imported.

-You can find out if the celery daemon is able to run the task by executing the
+You can find out if celery is able to run the task by executing the
 task manually:

     >>> from myapp.tasks import MyPeriodicTask
     >>> MyPeriodicTask.delay()

-Watch celery daemons logfile (or output if not running as a daemon), to see
-if it's able to find the task, or if some other error is happening.
+Watch celeryd's logfile to see if it's able to find the task, or if some
+other error is happening.

 Why won't my Periodic Task run?
 -------------------------------

+ 9 - 6
README.rst

@@ -223,12 +223,15 @@ see what's going on without consulting the logfile::
 
 
     $ python manage.py celeryd

-
 However, in production you probably want to run the worker in the
-background, as a daemon::
+background as a daemon. To do this you need to use the tools provided by your
+platform, or something like `supervisord`_.

-    $ python manage.py celeryd --detach
+For example startup scripts, see ``contrib/debian/init.d`` for using
+``start-stop-daemon`` on Debian/Ubuntu, or ``contrib/mac/org.celeryq.*`` for
+using ``launchd`` on Mac OS X.

+.. _`supervisord`: http://supervisord.org/

 For a complete listing of the command line arguments available, with a short
 description, you can use the help command::
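As a concrete illustration of the `supervisord`_ approach the new text recommends, a minimal program section might look like the following sketch. All paths, the project directory, and the user are hypothetical; adapt them to your deployment:

```ini
; Fragment for /etc/supervisord.conf (hypothetical paths and user).
[program:celeryd]
command=/usr/bin/python /opt/myproject/manage.py celeryd
directory=/opt/myproject
user=nobody
autostart=true
autorestart=true
; celeryd now logs to stderr by default, so let supervisord capture it.
stderr_logfile=/var/log/celeryd.log
```

With this in place, ``supervisorctl start celeryd`` replaces the removed ``--detach`` flag, and supervisord handles restarts and log capture.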
@@ -334,7 +337,7 @@ any time, or else you will end up with multiple executions of the same task.
 
 
 To start the ``celerybeat`` service::

-    $ celerybeat --detach
+    $ celerybeat

 or if using Django::

@@ -344,11 +347,11 @@ or if using Django::
 You can also start ``celerybeat`` with ``celeryd`` by using the ``-B`` option,
 this is convenient if you only have one server::

-    $ celeryd --detach -B
+    $ celeryd -B

 or if using Django::

-    $ python manage.py celeryd --detach -B
+    $ python manage.py celeryd -B

 A look inside the components

+ 4 - 28
docs/configuration.rst

@@ -38,7 +38,6 @@ it should contain all you need to run a basic celery set-up.
 
 
     # CELERYD_LOG_FILE = "celeryd.log"
     # CELERYD_LOG_LEVEL = "INFO"
-    # CELERYD_PID_FILE = "celeryd.pid"

 Concurrency settings
 ====================
@@ -339,7 +338,6 @@ Task execution settings
     If you still want to store errors, just not successful return values,
     you can set ``CELERY_STORE_ERRORS_EVEN_IF_IGNORED``.

-
 * CELERY_TASK_RESULT_EXPIRES
     Time (in seconds, or a :class:`datetime.timedelta` object) for when after
     stored task tombstones are deleted.
@@ -382,10 +380,7 @@ Logging
     The default filename the worker daemon logs messages to, can be
     overridden using the ``--logfile`` option to ``celeryd``.

-    The default is to log using ``stderr`` if running in the foreground,
-    when running in the background, detached as a daemon, the default
-    logfile is ``celeryd.log``.
-
+    The default is ``None`` (``stderr``).
     Can also be set via the ``--logfile`` argument.

 * CELERYD_LOG_LEVEL
@@ -405,13 +400,6 @@ Logging
     See the Python :mod:`logging` module for more information about log
     formats.

-Process
--------
-
-* CELERYD_PID_FILE
-    Full path to the pid file. Default is ``celeryd.pid``.
-    Can also be set via the ``--pidfile`` argument.
-
 Periodic Task Server: celerybeat
 ================================
 
 
@@ -432,10 +420,7 @@ Periodic Task Server: celerybeat
     The default filename to log messages to, can be
     overridden using the ``--logfile`` option.

-    The default is to log using ``stderr`` if running in the foreground,
-    when running in the background, detached as a daemon, the default
-    logfile is ``celerybeat.log``.
-
+    The default is ``None`` (``stderr``).
     Can also be set via the ``--logfile`` argument.

 * CELERYBEAT_LOG_LEVEL
@@ -446,10 +431,6 @@ Periodic Task Server: celerybeat
 
 
     See the :mod:`logging` module for more information.

-* CELERYBEAT_PID_FILE
-    Full path to celerybeat's pid file. Default is ``celerybat.pid``.
-    Can also be set via the ``--pidfile`` argument.
-
 Monitor Server: celerymon
 =========================
 
 
@@ -457,16 +438,11 @@ Monitor Server: celerymon
     The default filename to log messages to, can be
     overridden using the ``--logfile`` option.

-    The default is to log using ``stderr`` if running in the foreground,
-    when running in the background, detached as a daemon, the default
-    logfile is ``celerymon.log``.
+    The default is ``None`` (``stderr``).
+    Can also be set via the ``--logfile`` argument.

 * CELERYMON_LOG_LEVEL
     Logging level. Can be any of ``DEBUG``, ``INFO``, ``WARNING``,
     ``ERROR``, or ``CRITICAL``.

     See the :mod:`logging` module for more information.
-
-* CELERYMON_PID_FILE
-    Full path to celerymon's pid file. Default is ``celerymon.pid``.
-    Can be overridden using the ``--pidfile`` option to ``celerymon``.
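The ``None`` (``stderr``) default that this commit documents follows a common logging convention; a minimal sketch of the idea (a hypothetical helper, not celery's actual implementation):

```python
import sys


def open_logfile(logfile=None):
    """Return the stream to log to: the named file, or stderr when None."""
    if logfile is None:
        return sys.stderr
    return open(logfile, "a")


# With no *_LOG_FILE setting configured, messages go to stderr,
# which is what a supervisor process expects to capture.
stream = open_logfile(None)
assert stream is sys.stderr
```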

+ 9 - 6
docs/introduction.rst

@@ -223,12 +223,15 @@ see what's going on without consulting the logfile::
 
 
     $ python manage.py celeryd

-
 However, in production you probably want to run the worker in the
-background, as a daemon::
+background as a daemon. To do this you need to use the tools provided by your
+platform, or something like `supervisord`_.

-    $ python manage.py celeryd --detach
+For example startup scripts, see ``contrib/debian/init.d`` for using
+``start-stop-daemon`` on Debian/Ubuntu, or ``contrib/mac/org.celeryq.*`` for
+using ``launchd`` on Mac OS X.

+.. _`supervisord`: http://supervisord.org/

 For a complete listing of the command line arguments available, with a short
 description, you can use the help command::
@@ -336,7 +339,7 @@ any time, or else you will end up with multiple executions of the same task.
 
 
 To start the ``celerybeat`` service::

-    $ celerybeat --detach
+    $ celerybeat

 or if using Django::

@@ -346,11 +349,11 @@ or if using Django::
 You can also start ``celerybeat`` with ``celeryd`` by using the ``-B`` option,
 this is convenient if you only have one server::

-    $ celeryd --detach -B
+    $ celeryd -B

 or if using Django::

-    $ python manage.py celeryd --detach -B
+    $ python manage.py celeryd -B

 A look inside the components

+ 4 - 18
docs/reference/celery.conf.rst

@@ -36,7 +36,7 @@ Configuration - celery.conf
 
 
     Always execute tasks locally, don't send to the queue.

-.. data: TASK_RESULT_EXPIRES
+.. data:: TASK_RESULT_EXPIRES

     Task tombstone expire time in seconds.
 
 
@@ -88,11 +88,6 @@ Configuration - celery.conf
     If ``True`` all rate limits will be disabled and all tasks will be executed
     as soon as possible.

-.. data:: CELERYBEAT_PID_FILE
-
-    Name of celerybeats pid file.
-    Default is: ``celerybeat.pid``.
-
 .. data:: CELERYBEAT_LOG_LEVEL

     Default log level for celerybeat.
@@ -101,7 +96,7 @@ Configuration - celery.conf
 .. data:: CELERYBEAT_LOG_FILE

     Default log file for celerybeat.
-    Default is: ``celerybeat.log``.
+    Default is: ``None`` (``stderr``).

 .. data:: CELERYBEAT_SCHEDULE_FILENAME
 
 
@@ -118,11 +113,6 @@ Configuration - celery.conf
     faster (A value of 5 minutes, means the changes will take effect in 5 minutes
     at maximum).

-.. data:: CELERYMON_PID_FILE
-
-    Name of celerymons pid file.
-    Default is: ``celerymon.pid``.
-
 .. data:: CELERYMON_LOG_LEVEL

     Default log level for celerymon.
@@ -131,7 +121,7 @@ Configuration - celery.conf
 .. data:: CELERYMON_LOG_FILE

     Default log file for celerymon.
-    Default is: ``celerymon.log``.
+    Default is: ``None`` (``stderr``).

 .. data:: LOG_LEVELS
 
 
@@ -144,17 +134,13 @@ Configuration - celery.conf
 .. data:: CELERYD_LOG_FILE

     Filename of the daemon log file.
+    Default is: ``None`` (``stderr``).

 .. data:: CELERYD_LOG_LEVEL

     Default log level for daemons. (``WARN``)

-.. data:: CELERYD_PID_FILE
-
-    Full path to the daemon pidfile.
-
 .. data:: CELERYD_CONCURRENCY

     The number of concurrent worker processes.
     If set to ``0``, the total number of available CPUs/cores will be used.
-