
Small documentation fixes

Noticed these while reading the docs on readthedocs; reviewed the fixes
by building the HTML documentation locally.

Note: optimizing.rst had more footnote markers than actual footnotes, so
I removed the marker (the one after "latency") that didn't have a
footnote.
Marius Gedminas, 12 years ago
commit 667a873d47

+ 3 - 3
docs/getting-started/next-steps.rst

@@ -390,14 +390,14 @@ an argument signature specified.  The ``add`` task takes two arguments,
 so a subtask specifying two arguments would make a complete signature::
 
     >>> s1 = add.s(2, 2)
-    >>> res = s2.delay()
+    >>> res = s1.delay()
     >>> res.get()
     4
 
 But, you can also make incomplete signatures to create what we call
 *partials*::
 
-    # incomplete partial:  add(?, 2)
+    # incomplete partial: add(?, 2)
     >>> s2 = add.s(2)
 
 ``s2`` is now a partial subtask that needs another argument to be complete,
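
For reference, the partial from the surrounding guide is resolved by
supplying the missing argument when the subtask is called:

    # resolves the partial: add(8, 2)
    >>> res = s2.delay(8)
    >>> res.get()
    10
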
@@ -490,7 +490,7 @@ is called:
 
 .. code-block:: python
 
-    >>> from celery imoport chain
+    >>> from celery import chain
     >>> from proj.tasks import add, mul
 
     # (4 + 4) * 8
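    >>> res = chain(add.s(4, 4) | mul.s(8))()
    >>> res.get()
    64

(The two lines above complete the chain example from the guide, shown
here for reference so the corrected import can be seen in use.)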

+ 1 - 1
docs/internals/app-overview.rst

@@ -39,7 +39,7 @@ Creating custom Task subclasses:
     class DebugTask(Task):
         abstract = True
 
-        def on_failure(self, \*args, \*\*kwargs):
+        def on_failure(self, *args, **kwargs):
             import pdb
             pdb.set_trace()
 

+ 1 - 1
docs/userguide/canvas.rst

@@ -464,7 +464,7 @@ the error callbacks take the id of the parent task as argument instead:
     def log_error(task_id):
         result = celery.AsyncResult(task_id)
         result.get(propagate=False)  # make sure result written.
-        with open('/var/errors/%s' % (task_id, )) as fh:
+        with open(os.path.join('/var/errors', task_id), 'a') as fh:
             fh.write('--\n\n%s %s %s' % (
                 task_id, result.result, result.traceback))
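
Note the fixed line also relies on ``os`` being imported in the
example, and opens the file in append mode instead of the default read
mode, under which the ``fh.write()`` call would fail. For reference,
the guide attaches the callback when applying the task:

    >>> add.apply_async((2, 2), link_error=log_error.s())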
 

+ 4 - 4
docs/userguide/optimizing.rst

@@ -23,7 +23,7 @@ back-of-the-envelope calculations by asking the question;
 
     ❝ How much water flows out of the Mississippi River in a day? ❞
 
-The point of this exercise[*] is to show that there is a limit
+The point of this exercise [*]_ is to show that there is a limit
 to how much data a system can process in a timely manner.
 Back of the envelope calculations can be used as a means to plan for this
 ahead of time.
@@ -95,19 +95,19 @@ by users.
 The prefetch limit is a **limit** for the number of tasks (messages) a worker
 can reserve for itself.  If it is zero, the worker will keep
 consuming messages, not respecting that there may be other
-available worker nodes that may be able to process them sooner[#],
+available worker nodes that may be able to process them sooner [*]_,
 or that the messages may not even fit in memory.
 
 The workers' default prefetch count is the
 :setting:`CELERYD_PREFETCH_MULTIPLIER` setting multiplied by the number
-of child worker processes[#].
+of child worker processes [*]_.
 
 If you have many tasks with a long duration you want
 the multiplier value to be 1, which means it will only reserve one
 task per worker process at a time.
 
 However -- If you have many short-running tasks, and throughput/round trip
-latency[#] is important to you, this number should be large. The worker is
+latency is important to you, this number should be large. The worker is
 able to process more tasks per second if the messages have already been
 prefetched, and is available in memory.  You may have to experiment to find
 the best value that works for you.  Values like 50 or 150 might make sense in
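
For the long-running case described above, a minimal configuration
sketch (the concurrency value is an arbitrary example):

    # each child process reserves only one task at a time,
    # so the worker prefetches at most 1 * 8 messages
    CELERYD_PREFETCH_MULTIPLIER = 1
    CELERYD_CONCURRENCY = 8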

+ 4 - 4
docs/userguide/security.rst

@@ -94,7 +94,7 @@ The default `pickle` serializer is convenient because it supports
 arbitrary Python objects, whereas other serializers only
 work with a restricted set of types.
 
-But for the same reasons the `pickle` serializer is inherently insecure[*]_,
+But for the same reasons the `pickle` serializer is inherently insecure [*]_,
 and should be avoided whenever clients are untrusted or
 unauthenticated.
 
@@ -137,13 +137,13 @@ disable all insucure serializers so that the worker won't accept
 messages with untrusted content types.
 
 This is an example configuration using the `auth` serializer,
-with the private key and certificate files located in :`/etc/ssl`.
+with the private key and certificate files located in `/etc/ssl`.
 
 .. code-block:: python
 
     CELERY_SECURITY_KEY = '/etc/ssl/private/worker.key'
     CELERY_SECURITY_CERTIFICATE = '/etc/ssl/certs/worker.pem'
-    CELERY_SECURITY_CERT_STORE = '/etc/ssl/certs/\*.pem'
+    CELERY_SECURITY_CERT_STORE = '/etc/ssl/certs/*.pem'
     from celery.security import setup_security
     setup_security()
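
Besides the ``auth`` serializer, a common way to avoid pickle's risks
is to restrict serialization to JSON; a sketch (note that
``CELERY_ACCEPT_CONTENT`` is an assumption here, as it is only
available in more recent Celery releases):

    CELERY_TASK_SERIALIZER = 'json'
    CELERY_RESULT_SERIALIZER = 'json'
    # reject messages with any other content type
    CELERY_ACCEPT_CONTENT = ['json']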
 
@@ -182,7 +182,7 @@ This should be fairly easy to setup using syslog (see also `syslog-ng`_ and
 support for using syslog.
 
 A tip for the paranoid is to send logs using UDP and cut the
-transmit part of the logging servers network cable :-)
+transmit part of the logging server's network cable :-)
 
 .. _`syslog-ng`: http://en.wikipedia.org/wiki/Syslog-ng
 .. _`rsyslog`: http://www.rsyslog.com/

+ 2 - 2
docs/userguide/signals.rst

@@ -29,7 +29,7 @@ Example connecting to the :signal:`task_sent` signal:
 
     @task_sent.connect
     def task_sent_handler(sender=None, task_id=None, task=None, args=None,
-                          kwargs=None, \*\*kwds):
+                          kwargs=None, **kwds):
         print('Got signal task_sent for task id %s' % (task_id, ))
 
 
@@ -43,7 +43,7 @@ has been sent by providing the `sender` argument to
 
     @task_sent.connect(sender='tasks.add')
     def task_sent_handler(sender=None, task_id=None, task=None, args=None,
-                          kwargs=None, \*\*kwds):
+                          kwargs=None, **kwds):
         print('Got signal task_sent for task id %s' % (task_id, ))
 
 .. _signal-ref:
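
Either handler fires as soon as a task message is published, printing
the new task's id; a usage sketch (assuming the ``tasks.add`` task
named in the ``sender`` filter above):

    >>> from tasks import add
    >>> add.delay(2, 2)  # publishing the message fires task_sent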