
Documentation now uses the ``celery`` command

Ask Solem · 12 years ago · parent commit fd3808161b

+ 1 - 1
celery/concurrency/processes/__init__.py

@@ -38,7 +38,7 @@ def process_initializer(app, hostname):
     trace._tasks = app._tasks  # make sure this optimization is set.
     platforms.signals.reset(*WORKER_SIGRESET)
     platforms.signals.ignore(*WORKER_SIGIGNORE)
-    platforms.set_mp_process_title("celeryd", hostname=hostname)
+    platforms.set_mp_process_title("celery", hostname=hostname)
     # This is for Windows and other platforms not supporting
     # fork(). Note that init_worker makes sure it's only
     # run once per process.
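
The renamed title is what shows up in OS process listings; a quick way to
verify the change (illustrative, exact output varies by platform)::

    $ ps ax -o pid,command | grep '[c]elery'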

+ 1 - 1
celery/contrib/rdb.py

@@ -31,7 +31,7 @@ Inspired by http://snippets.dzone.com/posts/show/7248
 
     Base port to bind to.  Default is 6899.
     The debugger will try to find an available port starting from the
-    base port.  The selected port will be logged by celeryd.
+    base port.  The selected port will be logged by the worker.
 
 :copyright: (c) 2009 - 2012 by Ask Solem.
 :license: BSD, see LICENSE for more details.
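
For context, :mod:`celery.contrib.rdb` is a remote debugger reachable over
telnet.  A minimal usage sketch (task and argument names illustrative)::

    from celery.contrib import rdb
    from celery.task import task

    @task
    def add(x, y):
        # Binds to the first available port at or above the base port;
        # the selected port is logged by the worker.
        rdb.set_trace()
        return x + y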

+ 1 - 1
celery/loaders/base.py

@@ -78,7 +78,7 @@ class BaseLoader(object):
         pass
 
     def on_worker_init(self):
-        """This method is called when the worker (:program:`celeryd`)
+        """This method is called when the worker (:program:`celery worker`)
         starts."""
         pass
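
This hook is intended to be overridden by custom loaders; a hedged sketch
(class name illustrative)::

    from celery.loaders.base import BaseLoader

    class MyLoader(BaseLoader):

        def on_worker_init(self):
            # Runs once when `celery worker` starts, e.g. to eagerly
            # import the modules listed in CELERY_IMPORTS.
            self.import_default_modules()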
 

+ 1 - 1
celery/utils/mail.py

@@ -159,7 +159,7 @@ The contents of the full traceback was:
 
 %(EMAIL_SIGNATURE_SEP)s
 Just to let you know,
-celeryd at %%(hostname)s.
+py-celery at %%(hostname)s.
 """ % {"EMAIL_SIGNATURE_SEP": EMAIL_SIGNATURE_SEP}
 
     error_whitelist = None
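
Note the doubled ``%%`` here: it survives the first ``%``-substitution as a
single ``%``, deferring the hostname lookup until the message is sent::

    >>> "py-celery at %%(hostname)s." % {}
    'py-celery at %(hostname)s.'
    >>> _ % {"hostname": "worker1.example.com"}
    'py-celery at worker1.example.com.'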

+ 1 - 1
celery/worker/state.py

@@ -25,7 +25,7 @@ from celery.datastructures import LimitedSet
 from celery.utils import cached_property
 
 #: Worker software/platform information.
-SOFTWARE_INFO = {"sw_ident": "celeryd",
+SOFTWARE_INFO = {"sw_ident": "py-celery",
                  "sw_ver": __version__,
                  "sw_sys": platform.system()}
 

+ 1 - 1
docs/THANKS

@@ -3,4 +3,4 @@ Thanks to Anton Tsigularov <antont@opera.com> for the previous name (crunchy)
     which we had to abandon because of an existing project with that name.
 Thanks to Armin Ronacher for the Sphinx theme.
 Thanks to Brian K. Jones for bunny.py (http://github.com/bkjones/bunny), the
-    tool that inspired camqadm.
+    tool that inspired 'celery amqp'.

+ 0 - 13
docs/cookbook/index.rst

@@ -1,13 +0,0 @@
-.. _cookbook:
-
-===========
- Cookbook
-===========
-
-.. toctree::
-    :maxdepth: 2
-
-    tasks
-    daemonizing
-
-This page contains common recipes and techniques.

+ 1 - 1
docs/django/first-steps-with-django.rst

@@ -96,7 +96,7 @@ For a complete listing of the command line options available, use the help command
     $ python manage.py help celeryd
 
 .. _`Running Celery as a Daemon`:
-    http://docs.celeryq.org/en/latest/cookbook/daemonizing.html
+    http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html
 
 Executing our task
 ==================

+ 5 - 4
docs/faq.rst

@@ -372,9 +372,10 @@ Why won't my periodic task run?
 How do I purge all waiting tasks?
 ---------------------------------
 
-**Answer:** You can use celeryctl to purge all configured task queues::
+**Answer:** You can use the ``celery purge`` command to purge
+all configured task queues::
 
-        $ celeryctl purge
+        $ celery purge
 
or programmatically::
 
@@ -383,9 +384,9 @@ or programatically::
         1753
 
 If you only want to purge messages from a specific queue
-you have to use the AMQP API or the :program:`camqadm` utility::
+you have to use the AMQP API or the :program:`celery amqp` utility::
 
-    $ camqadm queue.purge <queue name>
+    $ celery amqp queue.purge <queue name>
 
 The number 1753 is the number of messages deleted.
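
The programmatic route above, in one era-appropriate spelling (a sketch;
these import paths were deprecated in later versions)::

    >>> from celery.task.control import discard_all
    >>> discard_all()
    1753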
 

+ 3 - 3
docs/cookbook/daemonizing.rst → docs/tutorials/daemonizing.rst

@@ -1,8 +1,8 @@
 .. _daemonizing:
 
-=============================
- Running celeryd as a daemon
-=============================
+================================
+ Running the worker as a daemon
+================================
 
 Celery does not daemonize itself; please use one of the following
 daemonization tools.

+ 2 - 2
docs/tutorials/debugging.rst

@@ -39,7 +39,7 @@ By default the debugger will only be available from the local host,
 to enable access from the outside you have to set the environment
 variable :envvar:`CELERY_RDB_HOST`.
 
-When `celeryd` encounters your breakpoint it will log the following
+When the worker encounters your breakpoint it will log the following
 information::
 
     [INFO/MainProcess] Got task from broker:
@@ -94,7 +94,7 @@ This is the case for both main and worker processes.
 
 For example starting the worker with::
 
-    CELERY_RDBSIG=1 celeryd -l info
+    CELERY_RDBSIG=1 celery worker -l info
 
 You can start an rdb session for any of the worker processes by executing::
 

+ 2 - 1
docs/tutorials/index.rst

@@ -8,6 +8,7 @@
 .. toctree::
     :maxdepth: 2
 
-    otherqueues
+    daemonizing
     debugging
     clickcounter
+    task-cookbook

+ 1 - 1
docs/cookbook/tasks.rst → docs/tutorials/task-cookbook.rst

@@ -1,7 +1,7 @@
 .. _cookbook-tasks:
 
 ================
- Creating Tasks
+ Task Cookbook
 ================
 
 .. contents::

+ 39 - 34
docs/userguide/monitoring.rst

@@ -23,21 +23,21 @@ Workers
 .. _monitoring-celeryctl:
 
 
-celeryctl: Management Utility
------------------------------
+``celery``: Management Command-line Utility
+-------------------------------------------
 
 .. versionadded:: 2.1
 
-:mod:`~celery.bin.celeryctl` is a command line utility to inspect
+:program:`celery` can also be used to inspect
 and manage worker nodes (and to some degree tasks).
 
 To list all the commands available do::
 
-    $ celeryctl help
+    $ celery help
 
 or to get help for a specific command do::
 
-    $ celeryctl <command> --help
+    $ celery <command> --help
 
 Commands
 ~~~~~~~~
@@ -55,12 +55,12 @@ Commands
 * **status**: List active nodes in this cluster
     ::
 
-    $ celeryctl status
+    $ celery status
 
 * **result**: Show the result of a task
     ::
 
-        $ celeryctl result -t tasks.add 4e196aa4-0141-4601-8138-7aa33db0f577
+        $ celery result -t tasks.add 4e196aa4-0141-4601-8138-7aa33db0f577
 
     Note that you can omit the name of the task as long as the
     task doesn't use a custom result backend.
@@ -68,7 +68,7 @@ Commands
 * **purge**: Purge messages from all configured task queues.
     ::
 
-        $ celeryctl purge
+        $ celery purge
 
     .. warning::
         There is no undo for this operation, and messages will
@@ -77,14 +77,14 @@ Commands
 * **inspect active**: List active tasks
     ::
 
-        $ celeryctl inspect active
+        $ celery inspect active
 
     These are all the tasks that are currently being executed.
 
 * **inspect scheduled**: List scheduled ETA tasks
     ::
 
-        $ celeryctl inspect scheduled
+        $ celery inspect scheduled
 
     These are tasks reserved by the worker because they have the
     `eta` or `countdown` argument set.
@@ -92,7 +92,7 @@ Commands
 * **inspect reserved**: List reserved tasks
     ::
 
-        $ celeryctl inspect reserved
+        $ celery inspect reserved
 
     This will list all tasks that have been prefetched by the worker,
     and are currently waiting to be executed (does not include tasks
@@ -101,32 +101,32 @@ Commands
 * **inspect revoked**: List history of revoked tasks
     ::
 
-        $ celeryctl inspect revoked
+        $ celery inspect revoked
 
 * **inspect registered**: List registered tasks
     ::
 
-        $ celeryctl inspect registered
+        $ celery inspect registered
 
 * **inspect stats**: Show worker statistics
     ::
 
-        $ celeryctl inspect stats
+        $ celery inspect stats
 
 * **inspect enable_events**: Enable events
     ::
 
-        $ celeryctl inspect enable_events
+        $ celery inspect enable_events
 
 * **inspect disable_events**: Disable events
     ::
 
-        $ celeryctl inspect disable_events
+        $ celery inspect disable_events
 
 * **migrate**: Migrate tasks from one broker to another (**EXPERIMENTAL**).
   ::
 
-        $ celeryctl migrate redis://localhost amqp://localhost
+        $ celery migrate redis://localhost amqp://localhost
 
   This command will migrate all the tasks on one broker to another.
   As this command is new and experimental you should be sure to have
@@ -148,7 +148,7 @@ By default the inspect commands operate on all workers.
 You can specify a single, or a list of workers by using the
 `--destination` argument::
 
-    $ celeryctl inspect -d w1,w2 reserved
+    $ celery inspect -d w1,w2 reserved
 
 
 .. _monitoring-django-admin:
@@ -187,10 +187,9 @@ To start the camera run::
 
 If you haven't already enabled the sending of events you need to do so::
 
-    $ python manage.py celeryctl inspect enable_events
+    $ python manage.py celery inspect enable_events
 
-:Tip: You can enable events when the worker starts using the `-E` argument
-      to :mod:`~celery.bin.celeryd`.
+:Tip: You can enable events when the worker starts using the `-E` argument.
 
 Now that the camera has been started and events have been enabled,
 you should be able to see your workers and the tasks in the admin interface
@@ -292,31 +291,37 @@ please see ``djcelerymon --help``.
 
 .. _monitoring-celeryev:
 
-celeryev: Curses Monitor
+celery events: Curses Monitor
-------------------------
+-----------------------------
 
 .. versionadded:: 2.0
 
-:mod:`~celery.bin.celeryev` is a simple curses monitor displaying
+`celery events` is a simple curses monitor displaying
 task and worker history.  You can inspect the result and traceback of tasks,
 and it also supports some management commands like rate limiting and shutting
 down workers.
 
+Starting::
+
+    $ celery events
+
+You should see a screen like:
+
 .. figure:: ../images/celeryevshotsm.jpg
 
 
-:mod:`~celery.bin.celeryev` is also used to start snapshot cameras (see
+`celery events` is also used to start snapshot cameras (see
 :ref:`monitoring-snapshots`)::
 
-    $ celeryev --camera=<camera-class> --frequency=1.0
+    $ celery events --camera=<camera-class> --frequency=1.0
 
 and it includes a tool to dump events to :file:`stdout`::
 
-    $ celeryev --dump
+    $ celery events --dump
 
 For a complete list of options use ``--help``::
 
-    $ celeryev --help
+    $ celery events --help
 
 
 .. _monitoring-celerymon:
@@ -449,7 +454,7 @@ Events
 
 The worker has the ability to send a message whenever some event
 happens.  These events are then captured by tools like :program:`celerymon`
-and :program:`celeryev` to monitor the cluster.
+and :program:`celery events` to monitor the cluster.
 
 .. _monitoring-snapshots:
 
@@ -469,12 +474,12 @@ To take snapshots you need a Camera class, with this you can define
 what should happen every time the state is captured; you can
 write it to a database, send it by email or something else entirely.
 
-:program:`celeryev` is then used to take snapshots with the camera,
+:program:`celery events` is then used to take snapshots with the camera,
 for example if you want to capture state every 2 seconds using the
-camera ``myapp.Camera`` you run :program:`celeryev` with the following
+camera ``myapp.Camera`` you run :program:`celery events` with the following
 arguments::
 
-    $ celeryev -c myapp.Camera --frequency=2.0
+    $ celery events -c myapp.Camera --frequency=2.0
 
 
 .. _monitoring-camera:
@@ -505,10 +510,10 @@ Here is an example camera, dumping the snapshot to screen:
 See the API reference for :mod:`celery.events.state` to read more
 about state objects.
 
-Now you can use this cam with :program:`celeryev` by specifying
+Now you can use this cam with :program:`celery events` by specifying
 it with the `-c` option::
 
-    $ celeryev -c myapp.DumpCam --frequency=2.0
+    $ celery events -c myapp.DumpCam --frequency=2.0
 
 Or you can use it programmatically like this::
 
@@ -587,7 +592,7 @@ Worker Events
     * `hostname`: Hostname of the worker.
     * `timestamp`: Event timestamp.
     * `freq`: Heartbeat frequency in seconds (float).
-    * `sw_ident`: Name of worker software (e.g. celeryd).
+    * `sw_ident`: Name of worker software (e.g. ``py-celery``).
     * `sw_ver`: Software version (e.g. 2.2.0).
     * `sw_sys`: Operating System (e.g. Linux, Windows, Darwin).
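
A minimal snapshot camera along the lines discussed above (class name and
output illustrative; assumes the :class:`~celery.events.snapshot.Polaroid`
base class)::

    from pprint import pformat

    from celery.events.snapshot import Polaroid

    class DumpCam(Polaroid):

        def on_shutter(self, state):
            if not state.event_count:
                # No new events since the last snapshot.
                return
            print("Workers: %s" % pformat(state.workers, indent=4))
            print("Tasks: %s" % pformat(state.tasks, indent=4))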
 

+ 15 - 14
docs/userguide/periodic-tasks.rst

@@ -10,7 +10,7 @@
 Introduction
 ============
 
-:program:`celerybeat` is a scheduler.  It kicks off tasks at regular intervals,
+:program:`celery beat` is a scheduler.  It kicks off tasks at regular intervals,
 which are then executed by the worker nodes available in the cluster.
 
 By default the entries are taken from the :setting:`CELERYBEAT_SCHEDULE` setting,
@@ -46,7 +46,7 @@ Example: Run the `tasks.add` task every 30 seconds.
 
 
 Using a :class:`~datetime.timedelta` for the schedule means the task will
-be executed 30 seconds after `celerybeat` starts, and then every 30 seconds
+be executed 30 seconds after `celery beat` starts, and then every 30 seconds
 after the last run.  A crontab like schedule also exists, see the section
 on `Crontab schedules`_.
 
@@ -91,7 +91,7 @@ Available Fields
     second, minute, hour or day depending on the period of the timedelta.
 
     If `relative` is true the frequency is not rounded and will be
-    relative to the time when :program:`celerybeat` was started.
+    relative to the time when :program:`celery beat` was started.
 
 .. _beat-crontab:
 
@@ -212,29 +212,30 @@ the :setting:`CELERY_TIMEZONE` setting:
 
 .. _beat-starting:
 
-Starting celerybeat
-===================
+Starting the Scheduler
+======================
 
-To start the :program:`celerybeat` service::
+To start the :program:`celery beat` service::
 
-    $ celerybeat
+    $ celery beat
 
-You can also start `celerybeat` with `celeryd` by using the `-B` option,
-this is convenient if you only intend to use one worker node::
+You can also embed `beat` inside the worker by enabling the
+worker's `-B` option; this is convenient if you only intend to
+use one worker node::
 
-    $ celeryd -B
+    $ celery worker -B
 
-Celerybeat needs to store the last run times of the tasks in a local database
+Beat needs to store the last run times of the tasks in a local database
 file (named `celerybeat-schedule` by default), so it needs access to
 write in the current directory, or alternatively you can specify a custom
 location for this file::
 
-    $ celerybeat -s /home/celery/var/run/celerybeat-schedule
+    $ celery beat -s /home/celery/var/run/celerybeat-schedule
 
 
 .. note::
 
-    To daemonize celerybeat see :ref:`daemonizing`.
+    To daemonize beat, see :ref:`daemonizing`.
 
 .. _beat-custom-schedulers:
 
@@ -249,7 +250,7 @@ which is simply keeping track of the last run times in a local database file
 `django-celery` also ships with a scheduler that stores the schedule in the
 Django database::
 
-    $ celerybeat -S djcelery.schedulers.DatabaseScheduler
+    $ celery beat -S djcelery.schedulers.DatabaseScheduler
 
 Using `django-celery`'s scheduler you can add, modify and remove periodic
 tasks from the Django Admin.
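
For reference, a schedule entry of the kind this page documents (the
canonical 30-second example; task name and arguments illustrative)::

    from datetime import timedelta

    CELERYBEAT_SCHEDULE = {
        "add-every-30-seconds": {
            "task": "tasks.add",
            "schedule": timedelta(seconds=30),
            "args": (16, 16),
        },
    }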

+ 2 - 2
docs/userguide/remote-tasks.rst

@@ -45,7 +45,7 @@ Enabling the HTTP task
 ----------------------
 
 To enable the HTTP dispatch task you have to add :mod:`celery.task.http`
-to :setting:`CELERY_IMPORTS`, or start ``celeryd`` with ``-I
+to :setting:`CELERY_IMPORTS`, or start the worker with ``-I
 celery.task.http``.
 
 
@@ -109,7 +109,7 @@ functionality.
     >>> res.get()
     100
 
-The output of :program:`celeryd` (or the log file if enabled) should show the
+The output of :program:`celery worker` (or the log file if enabled) should show the
 task being executed::
 
     [INFO/MainProcess] Task celery.task.http.HttpDispatchTask
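
The dispatch call that produces output like the above is, in sketch form
(URL illustrative)::

    >>> from celery.task.http import URL
    >>> res = URL("http://example.com/multiply").get_async(x=10, y=10)
    >>> res.get()
    100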

+ 12 - 11
docs/userguide/routing.rst

@@ -46,12 +46,12 @@ With this route enabled, import feed tasks will be routed to the
 
 Now you can start server `z` to only process the feeds queue like this::
 
-    (z)$ celeryd -Q feeds
+    (z)$ celery worker -Q feeds
 
 You can specify as many queues as you want, so you can make this server
 process the default queue as well::
 
-    (z)$ celeryd -Q feeds,celery
+    (z)$ celery worker -Q feeds,celery
 
 .. _routing-changing-default-queue:
 
@@ -144,17 +144,17 @@ You can also override this using the `routing_key` argument to
 To make server `z` consume from the feed queue exclusively you can
 start it with the ``-Q`` option::
 
-    (z)$ celeryd -Q feed_tasks --hostname=z.example.com
+    (z)$ celery worker -Q feed_tasks --hostname=z.example.com
 
 Servers `x` and `y` must be configured to consume from the default queue::
 
-    (x)$ celeryd -Q default --hostname=x.example.com
-    (y)$ celeryd -Q default --hostname=y.example.com
+    (x)$ celery worker -Q default --hostname=x.example.com
+    (y)$ celery worker -Q default --hostname=y.example.com
 
 If you want, you can even have your feed processing worker handle regular
 tasks as well, maybe in times when there's a lot of work to do::
 
-    (z)$ celeryd -Q feed_tasks,default --hostname=z.example.com
+    (z)$ celery worker -Q feed_tasks,default --hostname=z.example.com
 
 If you want to add another queue that uses a different exchange,
 just specify a custom exchange and exchange type:
@@ -349,15 +349,16 @@ Related API commands
 Hands-on with the API
 ---------------------
 
-Celery comes with a tool called :program:`camqadm` (short for Celery AMQ Admin).
-It's used for command-line access to the AMQP API, enabling access to
+Celery comes with a tool called :program:`celery amqp`
+that is used for command-line access to the AMQP API, enabling access to
 administration tasks like creating/deleting queues and exchanges, purging
-queues or sending messages.
+queues or sending messages.  It can also be used for non-AMQP brokers,
+but different implementations may not support all commands.
 
-You can write commands directly in the arguments to :program:`camqadm`,
+You can write commands directly in the arguments to :program:`celery amqp`,
 or just start with no arguments to start it in shell-mode::
 
-    $ camqadm
+    $ celery amqp
     -> connecting to amqp://guest@localhost:5672/.
     -> connected.
     1>
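
For reference, the feeds route mentioned at the top of this file's changes
is typically declared like this (task and queue names illustrative)::

    CELERY_ROUTES = {
        "feed.tasks.import_feed": {"queue": "feeds"},
    }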

+ 24 - 24
docs/userguide/workers.rst

@@ -12,26 +12,26 @@
 Starting the worker
 ===================
 
-You can start celeryd to run in the foreground by executing the command::
+You can start the worker in the foreground by executing the command::
 
-    $ celeryd --loglevel=INFO
+    $ celery worker --loglevel=INFO
 
 You probably want to use a daemonization tool to start
-`celeryd` in the background.  See :ref:`daemonizing` for help
-using `celeryd` with popular daemonization tools.
+the worker in the background.  See :ref:`daemonizing` for help
+detaching the worker using popular daemonization tools.
 
 For a full list of available command line options see
 :mod:`~celery.bin.celeryd`, or simply do::
 
-    $ celeryd --help
+    $ celery worker --help
 
 You can also start multiple workers on the same machine. If you do so
 be sure to give a unique name to each individual worker by specifying a
 host name with the :option:`--hostname|-n` argument::
 
-    $ celeryd --loglevel=INFO --concurrency=10 -n worker1.example.com
-    $ celeryd --loglevel=INFO --concurrency=10 -n worker2.example.com
-    $ celeryd --loglevel=INFO --concurrency=10 -n worker3.example.com
+    $ celery worker --loglevel=INFO --concurrency=10 -n worker1.example.com
+    $ celery worker --loglevel=INFO --concurrency=10 -n worker2.example.com
+    $ celery worker --loglevel=INFO --concurrency=10 -n worker3.example.com
 
 .. _worker-stopping:
 
@@ -55,7 +55,7 @@ Also as processes can't override the :sig:`KILL` signal, the worker will
 not be able to reap its children, so make sure to do so manually.  This
 command usually does the trick::
 
-    $ ps auxww | grep celeryd | awk '{print $2}' | xargs kill -9
+    $ ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
 
 .. _worker-restarting:
 
@@ -72,7 +72,7 @@ arguments as it was started with.
 
 .. note::
 
-    This will only work if ``celeryd`` is running in the background as
+    This will only work if the worker is running in the background as
     a daemon (it does not have a controlling terminal).
 
     Restarting by HUP is disabled on OS X because of a limitation on
@@ -84,7 +84,7 @@ arguments as it was started with.
 Process Signals
 ===============
 
-The celeryd main process overrides the following signals:
+The worker's main process overrides the following signals:
 
 +--------------+-------------------------------------------------+
 | :sig:`TERM`  | Warm shutdown, wait for tasks to complete.      |
@@ -108,13 +108,13 @@ argument and defaults to the number of CPUs available on the machine.
 
 .. admonition:: Number of processes (multiprocessing)
 
-    More worker processes are usually better, but there's a cut-off point where
-    adding more processes affects performance in negative ways.
-    There is even some evidence to support that having multiple celeryd's running,
-    may perform better than having a single worker.  For example 3 celeryd's with
-    10 worker processes each.  You need to experiment to find the numbers that
-    works best for you, as this varies based on application, work load, task
-    run times and other factors.
+    More pool processes are usually better, but there's a cut-off point where
+    adding more pool processes affects performance in negative ways.
+    There is even some evidence to support that having multiple worker
+    instances running may perform better than having a single worker.
+    For example 3 workers with 10 pool processes each.  You need to experiment
+    to find the numbers that work best for you, as this varies based on
+    application, work load, task run times and other factors.
 
 .. _worker-persistent-revokes:
 
@@ -206,8 +206,8 @@ a worker can execute before it's replaced by a new process.
 This is useful if you have memory leaks you have no control over
 for example from closed source C extensions.
 
-The option can be set using the `--maxtasksperchild` argument
-to `celeryd` or using the :setting:`CELERYD_MAX_TASKS_PER_CHILD` setting.
+The option can be set using the worker's `--maxtasksperchild` argument
+or using the :setting:`CELERYD_MAX_TASKS_PER_CHILD` setting.
 
 .. _worker-autoreload:
 
@@ -218,7 +218,7 @@ Autoreloading
 
 :supported pools: processes, eventlet, gevent, threads, solo
 
-Starting :program:`celeryd` with the :option:`--autoreload` option will
+Starting :program:`celery worker` with the :option:`--autoreload` option will
 enable the worker to watch for file system changes to all imported task
 modules (and also any non-task modules added to the
 :setting:`CELERY_IMPORTS` setting or the :option:`-I|--include` option).
@@ -258,7 +258,7 @@ implementations:
 You can force an implementation by setting the :envvar:`CELERYD_FSNOTIFY`
 environment variable::
 
-    $ env CELERYD_FSNOTIFY=stat celeryd -l info --autoreload
+    $ env CELERYD_FSNOTIFY=stat celery worker -l info --autoreload
 
 .. _worker-remote-control:
 
@@ -290,7 +290,7 @@ to the number of destination hosts.
 
 .. seealso::
 
-    The :program:`celeryctl` program is used to execute remote control
+    The :program:`celery` program is used to execute remote control
     commands from the command line.  It supports all of the commands
     listed below.  See :ref:`monitoring-celeryctl` for more information.
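
As a concrete sketch of issuing such a remote control command from Python
(assuming the era's `broadcast` helper; command name as listed below)::

    >>> from celery.task.control import broadcast
    >>> broadcast("enable_events")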
 
@@ -439,7 +439,7 @@ Enable/disable events
 
 You can enable/disable events by using the `enable_events`,
 `disable_events` commands.  This is useful to temporarily monitor
-a worker using :program:`celeryev`/:program:`celerymon`.
+a worker using :program:`celery events`/:program:`celerymon`.
 
 .. code-block:: python
 

+ 10 - 10
docs/whatsnew-2.6.rst

@@ -415,9 +415,9 @@ E.g. if you have a project named 'proj' where the
 celery app is located in 'from proj.celery import celery',
 then the following will be equivalent::
 
-        $ celeryd --app=proj
-        $ celeryd --app=proj.celery:
-        $ celeryd --app=proj.celery:celery
+        $ celery worker --app=proj
+        $ celery worker --app=proj.celery:
+        $ celery worker --app=proj.celery:celery
 
 In Other News
 -------------
@@ -496,13 +496,13 @@ In Other News
         >>> import celery
         >>> print(celery.bugreport())
 
-    - Use celeryctl::
+    - Using the ``celery`` command-line program::
 
-        $ celeryctl report
+        $ celery report
 
     - Get it from remote workers::
 
-        $ celeryctl inspect report
+        $ celery inspect report
 
 - Module ``celery.log`` moved to :mod:`celery.app.log`.
 - Module ``celery.task.control`` moved to :mod:`celery.app.control`.
@@ -566,7 +566,7 @@ In Other News
 * :setting:`CELERY_FORCE_EXECV` is now enabled by default.
 
     If the old behavior is wanted the setting can be set to False,
-    or the new :option:`--no-execv` to :program:`celeryd`.
+    or the new :option:`--no-execv` to :program:`celery worker`.
 
 * Deprecated module ``celery.conf`` has been removed.
 
@@ -581,12 +581,12 @@ In Other News
   :setting:`CELERYBEAT_MAX_LOOP_INTERVAL` setting, it is instead
   set by individual schedulers.
 
-* celeryd now truncates very long message bodies in error reports.
+* Worker: now truncates very long message bodies in error reports.
 
 * :envvar:`CELERY_BENCH` environment variable, will now also list
-  memory usage statistics at celeryd shutdown.
+  memory usage statistics at worker shutdown.
 
-* celeryd now only ever use a single timer for all timing needs,
+* Worker: now only ever uses a single timer for all timing needs,
   and instead sets different priorities.
 
 Internals