diff --git a/PKG-INFO b/PKG-INFO
index 36f6e5a..b891599 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: eliot
-Version: 1.6.0
+Version: 1.13.0
 Summary: Logging library that tells you why it happened
 Home-page: https://github.com/itamarst/eliot/
 Maintainer: Itamar Turner-Trauring
@@ -13,20 +13,39 @@ Description: Eliot: Logging that tells you *why* it happened
                    :target: http://travis-ci.org/itamarst/eliot
                    :alt: Build Status
         
-        Most logging systems tell you *what* happened in your application, whereas ``eliot`` also tells you *why* it happened.
+        Python's built-in ``logging`` and other similar systems output a stream of factoids: they're interesting, but you can't really tell what's going on.
         
+        * Why is your application slow?
+        * What caused this code path to be chosen?
+        * Why did this error happen?
+        
+        Standard logging can't answer these questions.
+        
+        But with a better model you could understand what happened in your application, and why.
+        You could pinpoint performance bottlenecks, and understand what happened when and who called what.
+        
+        That is what Eliot does.
         ``eliot`` is a Python logging system that outputs causal chains of **actions**: actions can spawn other actions, and eventually they either **succeed or fail**.
         The resulting logs tell you the story of what your software did: what happened, and what caused it.
         
-        Eliot works well within a single process, but can also be used across multiple processes to trace causality across a distributed system.
-        Eliot is only used to generate your logs; you will still need tools like Logstash and ElasticSearch to aggregate and store logs if you are using multiple processes.
+        Eliot supports a range of use cases and 3rd party libraries:
         
-        Eliot supports Python 2.7, 3.4, 3.5, 3.6, 3.7 and PyPy.
+        * Logging within a single process.
+        * Causal tracing across a distributed system.
+        * Scientific computing, with `built-in support for NumPy and Dask <https://eliot.readthedocs.io/en/stable/scientific-computing.html>`_.
+        * `Asyncio and Trio coroutines <https://eliot.readthedocs.io/en/stable/generating/asyncio.html>`_ and the `Twisted networking framework <https://eliot.readthedocs.io/en/stable/generating/twisted.html>`_.
+        
+        Eliot is only used to generate your logs; you might need tools like Logstash and ElasticSearch to aggregate and store logs if you are using multiple processes across multiple machines.
+        
+        Eliot supports Python 3.6, 3.7, 3.8, and 3.9, as well as PyPy3.
         It is maintained by Itamar Turner-Trauring, and released under the Apache 2.0 License.
         
+        Python 2.7 is in legacy support mode; the last release to support it was Eliot 1.7. See `here <https://eliot.readthedocs.io/en/stable/python2.html>`_ for details.
+        
         * `Read the documentation <https://eliot.readthedocs.io>`_.
-        * Download from `PyPI`_.
+        * Download from `PyPI`_ or `conda-forge <https://anaconda.org/conda-forge/eliot>`_.
         * Need help or have any questions? `File an issue <https://github.com/itamarst/eliot/issues/new>`_ on GitHub.
+        * **Commercial support** is available from `Python⇒Speed <https://pythonspeed.com/services/#eliot>`_.
         
         Testimonials
         ------------
@@ -44,16 +63,15 @@ Classifier: Intended Audience :: Developers
 Classifier: License :: OSI Approved :: Apache Software License
 Classifier: Operating System :: OS Independent
 Classifier: Programming Language :: Python
-Classifier: Programming Language :: Python :: 2
-Classifier: Programming Language :: Python :: 2.7
 Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.4
-Classifier: Programming Language :: Python :: 3.5
 Classifier: Programming Language :: Python :: 3.6
 Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
 Classifier: Programming Language :: Python :: Implementation :: CPython
 Classifier: Programming Language :: Python :: Implementation :: PyPy
 Classifier: Topic :: System :: Logging
-Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*
+Requires-Python: >=3.6.0
 Provides-Extra: journald
+Provides-Extra: test
 Provides-Extra: dev
diff --git a/README.rst b/README.rst
index c1f512a..6b80c0a 100644
--- a/README.rst
+++ b/README.rst
@@ -5,20 +5,39 @@ Eliot: Logging that tells you *why* it happened
            :target: http://travis-ci.org/itamarst/eliot
            :alt: Build Status
 
-Most logging systems tell you *what* happened in your application, whereas ``eliot`` also tells you *why* it happened.
+Python's built-in ``logging`` and other similar systems output a stream of factoids: they're interesting, but you can't really tell what's going on.
 
+* Why is your application slow?
+* What caused this code path to be chosen?
+* Why did this error happen?
+
+Standard logging can't answer these questions.
+
+But with a better model you could understand what happened in your application, and why.
+You could pinpoint performance bottlenecks, and understand what happened when and who called what.
+
+That is what Eliot does.
 ``eliot`` is a Python logging system that outputs causal chains of **actions**: actions can spawn other actions, and eventually they either **succeed or fail**.
 The resulting logs tell you the story of what your software did: what happened, and what caused it.
 
-Eliot works well within a single process, but can also be used across multiple processes to trace causality across a distributed system.
-Eliot is only used to generate your logs; you will still need tools like Logstash and ElasticSearch to aggregate and store logs if you are using multiple processes.
+Eliot supports a range of use cases and 3rd party libraries:
 
-Eliot supports Python 2.7, 3.4, 3.5, 3.6, 3.7 and PyPy.
+* Logging within a single process.
+* Causal tracing across a distributed system.
+* Scientific computing, with `built-in support for NumPy and Dask <https://eliot.readthedocs.io/en/stable/scientific-computing.html>`_.
+* `Asyncio and Trio coroutines <https://eliot.readthedocs.io/en/stable/generating/asyncio.html>`_ and the `Twisted networking framework <https://eliot.readthedocs.io/en/stable/generating/twisted.html>`_.
+
+Eliot is only used to generate your logs; you might need tools like Logstash and ElasticSearch to aggregate and store logs if you are using multiple processes across multiple machines.
+
+Eliot supports Python 3.6, 3.7, 3.8, and 3.9, as well as PyPy3.
 It is maintained by Itamar Turner-Trauring, and released under the Apache 2.0 License.
 
+Python 2.7 is in legacy support mode; the last release to support it was Eliot 1.7. See `here <https://eliot.readthedocs.io/en/stable/python2.html>`_ for details.
+
 * `Read the documentation <https://eliot.readthedocs.io>`_.
-* Download from `PyPI`_.
+* Download from `PyPI`_ or `conda-forge <https://anaconda.org/conda-forge/eliot>`_.
 * Need help or have any questions? `File an issue <https://github.com/itamarst/eliot/issues/new>`_ on GitHub.
+* **Commercial support** is available from `Python⇒Speed <https://pythonspeed.com/services/#eliot>`_.
 
 Testimonials
 ------------
diff --git a/benchmarks/logwriter.py b/benchmarks/logwriter.py
index 31199a1..b8f1061 100644
--- a/benchmarks/logwriter.py
+++ b/benchmarks/logwriter.py
@@ -16,7 +16,7 @@ MESSAGES = 100000
 
 
 def main(reactor):
-    print "Message size: %d bytes   Num messages: %d" % (LENGTH, MESSAGES)
+    print("Message size: %d bytes   Num messages: %d" % (LENGTH, MESSAGES))
     message = b"a" * LENGTH
     fp = FilePath(tempfile.mktemp())
     writer = ThreadedFileWriter(fp.open("ab"), reactor)
@@ -31,18 +31,19 @@ def main(reactor):
         elapsed = time.time() - start
         kbSec = (LENGTH * MESSAGES) / (elapsed * 1024)
         messagesSec = MESSAGES / elapsed
-        print "messages/sec: %s   KB/sec: %s" % (messagesSec, kbSec)
+        print("messages/sec: %s   KB/sec: %s" % (messagesSec, kbSec))
+
     d.addCallback(done)
 
     def cleanup(result):
         fp.restat()
-        print
-        print "File size: ", fp.getsize()
+        print()
+        print("File size: ", fp.getsize())
         fp.remove()
+
     d.addBoth(cleanup)
     return d
 
 
-if __name__ == '__main__':
+if __name__ == "__main__":
     react(main, [])
-
diff --git a/benchmarks/serialization.py b/benchmarks/serialization.py
index 7e5ea7b..bcc4d21 100644
--- a/benchmarks/serialization.py
+++ b/benchmarks/serialization.py
@@ -10,33 +10,33 @@ from __future__ import unicode_literals
 
 import time
 
-from eliot import Logger, MessageType, Field, ActionType
+from eliot import Message, start_action, to_file
 
-def _ascii(s):
-    return s.decode("ascii")
+# Ensure JSON serialization is part of benchmark:
+to_file(open("/dev/null", "w"))
 
+N = 10000
 
-F1 = Field.forTypes("integer", [int], "")
-F2 = Field("string", _ascii, "")
-F3 = Field("string2", _ascii, "")
-F4 = Field.forTypes("list", [list], "list of integers")
-
-M = MessageType("system:message", [F1, F2, F3, F4], "description")
-A = ActionType("action", [], [], [], "desc")
-
-log = Logger()
-
-N = 100000
 
 def run():
     start = time.time()
-    with A(log):
-        for i in xrange(N):
-            m = M(integer=3, string=b"abcdeft", string2="dgsjdlkgjdsl", list=[1, 2, 3, 4])
-            m.write(log)
+    for i in range(N):
+        with start_action(action_type="my_action"):
+            with start_action(action_type="my_action2") as ctx:
+                ctx.log(
+                    message_type="my_message",
+                    integer=3,
+                    string="abcdeft",
+                    string2="dgsjdlkgjdsl",
+                    list=[1, 2, 3, 4],
+                )
     end = time.time()
-    print "%.6f per message" % ((end - start) / N,)
-    print "%s messages/sec" % (int(N / (end-start)),)
 
-if __name__ == '__main__':
+    # Each iteration has 5 messages: start/end of my_action, start/end of
+    # my_action2, and my_message.
+    print("%.6f per message" % ((end - start) / (N * 5),))
+    print("%s messages/sec" % (int((N * 5) / (end - start)),))
+
+
+if __name__ == "__main__":
     run()
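The per-message accounting in the rewritten benchmark (divide elapsed time by ``N * 5`` messages) can be sketched with the standard library alone. This is a hypothetical stand-in, not the real Eliot pipeline: it just serializes a similar payload to the null device and reports cost the same way:

```python
import json
import os
import time

# Hypothetical payload resembling the benchmark's message fields:
MESSAGE = {"integer": 3, "string": "abcdeft",
           "string2": "dgsjdlkgjdsl", "list": [1, 2, 3, 4]}
N = 1000  # fewer iterations than the real benchmark
MESSAGES_PER_ITERATION = 5  # start/end of two actions, plus one message

with open(os.devnull, "w") as out:
    start = time.perf_counter()
    for _ in range(N):
        for _ in range(MESSAGES_PER_ITERATION):
            out.write(json.dumps(MESSAGE))
    elapsed = time.perf_counter() - start

# Per-message cost divides by the total message count, N * 5:
per_message = elapsed / (N * MESSAGES_PER_ITERATION)
messages_per_sec = int((N * MESSAGES_PER_ITERATION) / elapsed)
print("%.6f per message" % per_message)
print("%s messages/sec" % messages_per_sec)
```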
diff --git a/docs/source/development.rst b/docs/source/development.rst
index 198f956..b07ad4f 100644
--- a/docs/source/development.rst
+++ b/docs/source/development.rst
@@ -8,4 +8,4 @@ All modules should have the ``from __future__ import unicode_literals`` statemen
 Coding standard is PEP8, with the only exception being camel case methods for the Twisted-related modules.
 Some camel case methods remain for backwards compatibility reasons with the old coding standard.
 
-You should use ``yapf`` to format code.
+You should use ``black`` to format the code.
diff --git a/docs/source/generating/actions.rst b/docs/source/generating/actions.rst
index 45988e1..5d02c38 100644
--- a/docs/source/generating/actions.rst
+++ b/docs/source/generating/actions.rst
@@ -91,12 +91,12 @@ Consider the following code sample:
 
 .. code-block:: python
 
-     from eliot import start_action, start_task, Message
+     from eliot import start_action, start_task
 
-     with start_task(action_type="parent"):
-         Message.log(message_type="info", x=1)
-         with start_action(action_type="child"):
-             Message.log(message_type="info", x=2)
+     with start_task(action_type="parent") as action:
+         action.log(message_type="info", x=1)
+         with start_action(action_type="child") as action:
+             action.log(message_type="info", x=2)
          raise RuntimeError("ono")
 
 All these messages will share the same UUID in their ``task_uuid`` field, since they are all part of the same high-level task.
@@ -132,6 +132,31 @@ You can add fields to both the start message and the success message of an actio
 If you want to include some extra information in case of failures beyond the exception you can always log a regular message with that information.
 Since the message will be recorded inside the context of the action its information will be clearly tied to the result of the action by the person (or code!) reading the logs later on.
 
+Using Generators
+----------------
+
+Generators (functions with ``yield``) and context managers (``with X:``) don't mix well in Python.
+So if you're going to use ``with start_action()`` in a generator, just make sure it doesn't wrap a ``yield`` and you'll be fine.
+
+Here's what you SHOULD NOT DO:
+
+.. code-block:: python
+
+   def generator():
+       with start_action(action_type="x"):
+           # BAD! DO NOT yield inside a start_action() block:
+           yield make_result()
+
+Here's what you can do instead:
+
+.. code-block:: python
+
+   def generator():
+       with start_action(action_type="x"):
+           result = make_result()
+       # This is GOOD, no yield inside the start_action() block:
+       yield result
+
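The failure mode above can be demonstrated without Eliot at all. The sketch below uses ``contextvars`` and a hypothetical ``fake_action`` context manager (a stand-in for ``start_action()``, not Eliot's real implementation) to show how a ``yield`` inside a ``with`` block leaks the action's context to the caller:

```python
import contextvars
from contextlib import contextmanager

# Hypothetical stand-in for Eliot's action context: a context manager
# that marks an action as "current", roughly the way start_action() does.
current_action = contextvars.ContextVar("current_action", default=None)

@contextmanager
def fake_action(name):
    token = current_action.set(name)
    try:
        yield
    finally:
        current_action.reset(token)

def bad_generator():
    with fake_action("x"):
        # The generator suspends here while "x" is still current...
        yield "result"

gen = bad_generator()
next(gen)
# ...so code running between next() calls sees the generator's action:
leaked = current_action.get()
gen.close()  # runs the finally clause, resetting the context
restored = current_action.get()
print(leaked, restored)
```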
 
 Non-Finishing Contexts
 ----------------------
@@ -172,14 +197,14 @@ You shouldn't log within an action's context after it has been finished:
 
 .. code-block:: python
 
-     from eliot import start_action, Message
+     from eliot import start_action
 
      with start_action(action_type=u"message_late").context() as action:
-         Message.log(message_type=u"ok")
+         action.log(message_type=u"ok")
          # finish the action:
          action.finish()
          # Don't do this! This message is being added to a finished action!
-         Message.log(message_type=u"late")
+         action.log(message_type=u"late")
 
 As an alternative to ``with``, you can also explicitly run a function within the action context:
 
diff --git a/docs/source/generating/asyncio.rst b/docs/source/generating/asyncio.rst
index 537e31c..353bb30 100644
--- a/docs/source/generating/asyncio.rst
+++ b/docs/source/generating/asyncio.rst
@@ -1,32 +1,71 @@
 .. _asyncio_coroutine:
 
-Asyncio Coroutine Support
-=========================
+Asyncio/Trio Coroutine Support
+==============================
 
-If you're using ``asyncio`` coroutines in Python 3.5 or later (``async def yourcoro()`` and ``await yourcoro()``) together with Eliot, you need to run the following before doing any logging:
+As of Eliot 1.8, Eliot's logging context is automatically propagated across ``asyncio`` and ``trio`` coroutines.
 
-.. code-block:: python
+Asyncio
+--------
 
-   import eliot
-   eliot.use_asyncio_context()
+On Python 3.7 or later, no particular care is needed.
+For Python 3.5 and 3.6 you will need to import ``eliot`` (or the backport package ``aiocontextvars``) before you create your first event loop.
 
+Here's an example using ``aiohttp``:
 
-Why you need to do this
------------------------
-By default Eliot provides a different "context" for each thread.
-That is how ``with start_action(action_type='my_action'):`` works: it records the current action on this context.
+.. literalinclude:: ../../../examples/asyncio_linkcheck.py
 
-When using coroutines you end up with the same context being used with different coroutines, since they share the same thread.
-Calling ``eliot.use_asyncio_context()`` makes sure each coroutine gets its own context, so ``with start_action()`` in one coroutine doesn't interfere with another.
+And the resulting logs:
 
-However, Eliot will do the right thing for nested coroutines.
-Specifically, coroutines called via ``await a_coroutine()`` will inherit the logging context from the calling coroutine.
+.. code-block:: shell-session
 
+  $ eliot-tree linkcheck.log
+  0a9a5e1b-330c-4251-b7db-fd3161403443
+  └── check_links/1 ⇒ started 2019-04-06 19:49:16 ⧖ 0.535s
+      ├── urls: 
+      │   ├── 0: http://eliot.readthedocs.io
+      │   └── 1: http://nosuchurl
+      ├── download/2/1 ⇒ started 2019-04-06 19:49:16 ⧖ 0.527s
+      │   ├── url: http://eliot.readthedocs.io
+      │   └── download/2/2 ⇒ succeeded 2019-04-06 19:49:16
+      ├── download/3/1 ⇒ started 2019-04-06 19:49:16 ⧖ 0.007s
+      │   ├── url: http://nosuchurl
+      │   └── download/3/2 ⇒ failed 2019-04-06 19:49:16
+      │       ├── errno: -2
+      │       ├── exception: aiohttp.client_exceptions.ClientConnectorError
+      │       └── reason: Cannot connect to host nosuchurl:80 ssl:None [Name or service not known]                                                                                           
+      └── check_links/4 ⇒ failed 2019-04-06 19:49:16
+          ├── exception: builtins.ValueError
+          └── reason: Cannot connect to host nosuchurl:80 ssl:None [Name or service not known]
 
-Limitations
------------
 
-* I haven't tested the Python 3.4 ``yield from`` variation.
-* This doesn't support other event loops (Curio, Trio, Tornado, etc.).
-  If you want these supported please file an issue: https://github.com/itamarst/eliot/issues/new
-  There is talk of adding the concept of a coroutine context to Python 3.7 or perhaps 3.8, in which case it will be easier to automatically support all frameworks.
+Trio
+----
+
+Here's an example of using Trio. We put the action outside the nursery so that it finishes only when the nursery shuts down.
+
+.. literalinclude:: ../../../examples/trio_say.py
+
+And the resulting logs:
+
+.. code-block:: shell-session
+
+  $ eliot-tree trio.log
+  93a4de27-8c95-498b-a188-f0e91482ad10
+  └── main/1 ⇒ started 2019-04-10 21:07:20 ⧖ 2.003s                                            
+      ├── say/2/1 ⇒ started 2019-04-10 21:07:20 ⧖ 2.002s                                       
+      │   ├── message: world
+      │   └── say/2/2 ⇒ succeeded 2019-04-10 21:07:22                                          
+      ├── say/3/1 ⇒ started 2019-04-10 21:07:20 ⧖ 1.001s                                       
+      │   ├── message: hello
+      │   └── say/3/2 ⇒ succeeded 2019-04-10 21:07:21                                          
+      └── main/4 ⇒ succeeded 2019-04-10 21:07:22
+
+If you put the ``start_action`` *inside* the nursery context manager:
+
+1. The two ``say`` calls will be scheduled, but not started.
+2. The parent action will end.
+3. Only then will the child actions be created.
+
+The result is somewhat confusing output.
+Trying to improve this situation is covered in `issue #401 <https://github.com/itamarst/eliot/issues/401>`_.
diff --git a/docs/source/generating/index.rst b/docs/source/generating/index.rst
index ca69acd..a80cb21 100644
--- a/docs/source/generating/index.rst
+++ b/docs/source/generating/index.rst
@@ -2,14 +2,14 @@ Generating Logs
 ===============
 
 .. toctree::
-   messages
    actions
+   messages
    errors
    loglevels
    migrating
    threads
+   testing
    types
-   types-testing
    asyncio
    twisted
 
diff --git a/docs/source/generating/messages.rst b/docs/source/generating/messages.rst
index 874b79f..5b95633 100644
--- a/docs/source/generating/messages.rst
+++ b/docs/source/generating/messages.rst
@@ -1,49 +1,35 @@
+.. _messages:
+
 Messages
 ========
 
-Basic usage
------------
+Sometimes you don't want to generate actions; sometimes you just want an individual, isolated message, the way traditional logging systems work.
+Here's how to do that.
 
-At its base, Eliot outputs structured messages composed of named fields.
-Eliot messages are typically serialized to JSON objects.
-Fields therefore can have Unicode names, so either ``unicode`` or ``bytes`` containing UTF-8 encoded Unicode.
-Message values must be supported by JSON: ``int``, ``float``, ``None``, ``unicode``, UTF-8 encoded Unicode as ``bytes``, ``dict`` or ``list``.
-The latter two can only be composed of other supported types.
+When you have an action
+-----------------------
 
-You can log a message like this:
+If you already have an action object, you can log a message in that action's context:
 
 .. code-block:: python
 
-    from eliot import Message
+    from eliot import start_action
 
     class YourClass(object):
         def run(self):
-            # Log a message with two fields, "key" and "value":
-            Message.log(key=123, value=u"hello")
-
-You can also create message and then log it later like this:
+            with start_action(action_type="myaction") as ctx:
+                ctx.log(message_type="mymessage", key="abc", key2=4)
 
-.. code-block:: python
+If you don't have an action
+---------------------------
 
-    from eliot import Message
+If you don't have a reference to an action, or you're worried the function will sometimes be called outside the context of any action at all, you can use ``log_message``:
 
-    class YourClass(object):
-        def run(self):
-            # Create a message with two fields, "key" and "value":
-            msg = Message.new(key=123, value=u"hello")
-            # Write the message:
-            msg.write()
-
-
-Message binding
----------------
+.. code-block:: python
 
-You can also create a new ``Message`` from an existing one by binding new values.
-New values will override ones on the base ``Message``, but ``bind()`` does not mutate the original ``Message``.
+    from eliot import log_message
 
-.. code-block:: python
+    def run(x):
+        log_message(message_type="in_run", xfield=x)
 
-      # This message has fields key=123, value=u"hello"
-      msg = Message.new(key=123, value=u"hello")
-      # And this one has fields key=123, value=u"other", extra=456
-      msg2 = msg.bind(value=u"other", extra=456)
+The main downside to using this function is that it's a little slower, since it needs to handle the case where there is no action in context.
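To make the trade-off concrete, here is a toy model (not Eliot's real implementation) of a ``log_message``-style function: it attaches the message to the current action when one is in context, and otherwise falls back to a standalone message:

```python
import contextvars
import json

# Hypothetical stand-in for Eliot's current-action context variable:
current_action = contextvars.ContextVar("current_action", default=None)

def log_message(**fields):
    action = current_action.get()
    if action is not None:
        # Fast path: attach the message to the current action's task.
        fields["task_uuid"] = action["task_uuid"]
    else:
        # Slower fallback: the message gets its own standalone identity.
        fields["task_uuid"] = "standalone"
    print(json.dumps(fields))
    return fields

# Outside any action, the fallback path runs:
standalone = log_message(message_type="in_run", xfield=3)

# With a (fake) action in context, the message joins that action's task:
current_action.set({"task_uuid": "1234"})
in_action = log_message(message_type="in_run", xfield=4)
```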
diff --git a/docs/source/generating/types-testing.rst b/docs/source/generating/testing.rst
similarity index 63%
rename from docs/source/generating/types-testing.rst
rename to docs/source/generating/testing.rst
index 517c85d..86ce2a8 100644
--- a/docs/source/generating/types-testing.rst
+++ b/docs/source/generating/testing.rst
@@ -1,19 +1,39 @@
 Unit Testing Your Logging
 =========================
 
-Validate Logging in Tests
--------------------------
-
 Now that you've got some code emitting log messages (or even better, before you've written the code) you can write unit tests to verify it.
 Given good test coverage all code branches should already be covered by tests unrelated to logging.
 Logging can be considered just another aspect of testing those code branches.
+
+Rather than recreating all those tests as separate functions, Eliot provides a decorator that allows adding logging assertions to existing tests.
 
-Let's unit test some code that relies on the ``LOG_USER_REGISTRATION`` object we created earlier.
+
+Linting your logs
+-----------------
+
+Decorating a test function with ``eliot.testing.capture_logging`` will validate your logging, ensuring that:
+
+1. You haven't logged anything that isn't JSON serializable.
+2. There are no unexpected tracebacks, indicating a bug somewhere in your code.
 
 .. code-block:: python
 
-      from myapp.logtypes import LOG_USER_REGISTRATION
+   import unittest
+
+   from eliot.testing import capture_logging
+
+   class MyTest(unittest.TestCase):
+       @capture_logging(None)
+       def test_mytest(self, logger):
+           call_my_function()
+
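The serializability check can be approximated with the standard library. A sketch (not Eliot's actual validator) that tries to JSON-encode each captured message and reports the indices of any failures:

```python
import json

# A sketch of the serializability check: try to JSON-encode each
# captured message dictionary and collect the indices that fail.
def find_unserializable(messages):
    bad = []
    for i, message in enumerate(messages):
        try:
            json.dumps(message)
        except (TypeError, ValueError):
            bad.append(i)
    return bad

messages = [
    {"message_type": "ok", "x": 1},
    {"message_type": "broken", "x": object()},  # not JSON serializable
]
print(find_unserializable(messages))  # [1]
```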
+
+Making assertions about the logs
+--------------------------------
+
+You can also ensure the correct messages were logged.
+
+.. code-block:: python
+
+      from eliot import log_message
 
       class UserRegistration(object):
 
@@ -22,8 +42,9 @@ Let's unit test some code that relies on the ``LOG_USER_REGISTRATION`` object we
 
           def register(self, username, password, age):
               self.db[username] = (password, age)
-              LOG_USER_REGISTRATION(
-                   username=username, password=password, age=age).write()
+              log_message(message_type="user_registration",
+                          username=username, password=password,
+                          age=age)
 
 
 Here's how we'd test it:
@@ -35,7 +56,6 @@ Here's how we'd test it:
     from eliot.testing import assertContainsFields, capture_logging
 
     from myapp.registration import UserRegistration
-    from myapp.logtypes import LOG_USER_REGISTRATION
 
 
     class LoggingTests(TestCase):
@@ -60,13 +80,6 @@ Here's how we'd test it:
             self.assertEqual(registry.db[u"john"], (u"passsword", 12))
 
 
-Besides calling the given validation function the ``@capture_logging`` decorator will also validate the logged messages after the test is done.
-E.g. it will make sure they are JSON encodable.
-Messages were created using ``ActionType`` and ``MessageType`` will be validated using the applicable ``Field`` definitions.
-You can also call ``MemoryLogger.validate`` yourself to validate written messages.
-If you don't want any additional logging assertions you can decorate your test function using ``@capture_logging(None)``.
-
-
 Testing Tracebacks
 ------------------
 
@@ -96,14 +109,14 @@ Testing Message and Action Structure
 ------------------------------------
 
 Eliot provides utilities for making assertions about the structure of individual messages and actions.
-The simplest method is using the ``assertHasMessage`` utility function which asserts that a message of a given ``MessageType`` has the given fields:
+The simplest method is using the ``assertHasMessage`` utility function which asserts that a message of a given message type has the given fields:
 
 .. code-block:: python
 
     from eliot.testing import assertHasMessage, capture_logging
 
     class LoggingTests(TestCase):
-        @capture_logging(assertHasMessage, LOG_USER_REGISTRATION,
+        @capture_logging(assertHasMessage, "user_registration",
                          {u"username": u"john",
                           u"password": u"password",
                           u"age": 12})
@@ -119,7 +132,7 @@ The simplest method is using the ``assertHasMessage`` utility function which ass
 ``assertHasMessage`` returns the found message and can therefore be used within more complex assertions. ``assertHasAction`` provides similar functionality for actions (see example below).
 
 More generally, ``eliot.testing.LoggedAction`` and ``eliot.testing.LoggedMessage`` are utility classes to aid such testing.
-``LoggedMessage.of_type`` lets you find all messages of a specific ``MessageType``.
+``LoggedMessage.of_type`` lets you find all messages of a specific message type.
 A ``LoggedMessage`` has an attribute ``message`` which contains the logged message dictionary.
 For example, we could rewrite the registration logging test above like so:
 
@@ -132,7 +145,7 @@ For example, we could rewrite the registration logging test above like so:
             """
             Logging assertions for test_registration.
             """
-            logged = LoggedMessage.of_type(logger.messages, LOG_USER_REGISTRATION)[0]
+            logged = LoggedMessage.of_type(logger.messages, "user_registration")[0]
             assertContainsFields(self, logged.message,
                                  {u"username": u"john",
                                   u"password": u"password",
@@ -148,7 +161,7 @@ For example, we could rewrite the registration logging test above like so:
             self.assertEqual(registry.db[u"john"], (u"passsword", 12))
 
 
-Similarly, ``LoggedAction.of_type`` finds all logged actions of a specific ``ActionType``.
+Similarly, ``LoggedAction.of_type`` finds all logged actions of a specific action type.
 A ``LoggedAction`` instance has ``start_message`` and ``end_message`` containing the respective message dictionaries, and a ``children`` attribute containing a list of child ``LoggedAction`` and ``LoggedMessage``.
 That is, a ``LoggedAction`` knows about the messages logged within its context.
 ``LoggedAction`` also has a utility method ``descendants()`` that returns an iterable of all its descendants.
@@ -158,19 +171,18 @@ For example, let's say we have some code like this:
 
 .. code-block:: python
 
-    LOG_SEARCH = ActionType(...)
-    LOG_CHECK = MessageType(...)
+    from eliot import start_action, Message
 
     class Search:
         def search(self, servers, database, key):
-            with LOG_SEARCH(database=database, key=key):
+            with start_action(action_type="log_search", database=database, key=key):
             for server in servers:
-                LOG_CHECK(server=server).write()
+                Message.log(message_type="log_check", server=server)
                 if server.check(database, key):
                     return True
             return False
 
-We want to assert that the LOG_CHECK message was written in the context of the LOG_SEARCH action.
+We want to assert that the "log_check" message was written in the context of the "log_search" action.
 The test would look like this:
 
 .. code-block:: python
@@ -185,8 +197,8 @@ The test would look like this:
             servers = [buildServer(), buildServer()]
 
             searcher.search(servers, "users", "theuser")
-            action = LoggedAction.of_type(logger.messages, searcher.LOG_SEARCH)[0]
-            messages = LoggedMessage.of_type(logger.messages, searcher.LOG_CHECK)
+            action = LoggedAction.of_type(logger.messages, "log_search")[0]
+            messages = LoggedMessage.of_type(logger.messages, "log_check")
             # The action start message had the appropriate fields:
             assertContainsFields(self, action.start_message,
                                  {"database": "users", "key": "theuser"})
@@ -210,103 +222,67 @@ Or we can simplify further by using ``assertHasMessage`` and ``assertHasAction``
             servers = [buildServer(), buildServer()]
 
             searcher.search(servers, "users", "theuser")
-            action = assertHasAction(self, logger, searcher.LOG_SEARCH, succeeded=True,
+            action = assertHasAction(self, logger, "log_search", succeeded=True,
                                      startFields={"database": "users",
                                                   "key": "theuser"})
 
             # Messages were logged in the context of the action
-            messages = LoggedMessage.of_type(logger.messages, searcher.LOG_CHECK)
+            messages = LoggedMessage.of_type(logger.messages, "log_check")
             self.assertEqual(action.children, messages)
             # Each message had the respective server set.
             self.assertEqual(servers, [msg.message["server"] for msg in messages])
 
 
-Restricting Testing to Specific Messages
-----------------------------------------
-
-If you want to only look at certain messages when testing you can log to a specific ``eliot.Logger`` object.
-The messages will still be logged normally but you will be able to limit tests to only looking at those messages.
+Custom JSON encoding
+--------------------
 
-You can log messages to a specific ``Logger``:
+Just as a ``FileDestination`` can use a custom JSON encoder, so can your tests; that way you can validate messages which require your encoder:
 
 .. code-block:: python
 
-    from eliot import Message, Logger
+   from unittest import TestCase
+   from eliot.json import EliotJSONEncoder
+   from eliot.testing import capture_logging
 
-    class YourClass(object):
-        logger = Logger()
+   class MyClass:
+       def __init__(self, x):
+           self.x = x
 
-        def run(self):
-            # Create a message with two fields, "key" and "value":
-            msg = Message.new(key=123, value=u"hello")
-            # Write the message:
-            msg.write(self.logger)
+   class MyEncoder(EliotJSONEncoder):
+       def default(self, obj):
+           if isinstance(obj, MyClass):
+               return {"x": obj.x}
+           return EliotJSONEncoder.default(self, obj)
 
-As well as actions:
-
-.. code-block:: python
+   class LoggingTests(TestCase):
+       @capture_logging(None, encoder_=MyEncoder)
+       def test_logging(self, logger):
+           # Logged messages will be validated using MyEncoder....
+           ...
 
-     from eliot import start_action
+Notice that the trailing underscore in ``encoder_`` is deliberate: by default, keyword arguments are passed to the assertion function (the first argument to ``@capture_logging``), so the underscore suffix marks ``encoder_`` as part of Eliot's own API.
 
-     logger = Logger()
+Custom testing setup
+--------------------
 
-     with start_action(logger, action_type=u"store_data"):
-         x = get_data()
-         store_data(x)
-
-Or actions created from ``ActionType``:
+In some cases ``@capture_logging`` may not do what you want.
+You can achieve the same effect, but with more control, with some lower-level APIs:
 
 .. code-block:: python
 
-    from eliot import Logger
-
-      from myapp.logtypes import LOG_USER_REGISTRATION
-
-      class UserRegistration(object):
-
-          logger = Logger()
-
-          def __init__(self):
-              self.db = {}
-
-          def register(self, username, password, age):
-              self.db[username] = (password, age)
-              msg = LOG_USER_REGISTRATION(
-                   username=username, password=password, age=age)
-              # Notice use of specific logger:
-              msg.write(self.logger)
-
-The tests would then need to do two things:
-
-1. Decorate your test with ``validate_logging`` instead of ``capture_logging``.
-2. Override the logger used by the logging code to use the one passed in to the test.
-
-For example:
-
-.. code-block:: python
-
-    from eliot.testing import LoggedMessage, validate_logging
-
-    class LoggingTests(TestCase):
-        def assertRegistrationLogging(self, logger):
-            """
-            Logging assertions for test_registration.
-            """
-            logged = LoggedMessage.of_type(logger.messages, LOG_USER_REGISTRATION)[0]
-            assertContainsFields(self, logged.message,
-                                 {u"username": u"john",
-                                  u"password": u"password",
-                                  u"age": 12}))
-
-        # validate_logging only captures log messages logged to the MemoryLogger
-        # instance it passes to the test:
-        @validate_logging(assertRegistrationLogging)
-        def test_registration(self, logger):
-            """
-            Registration adds entries to the in-memory database.
-            """
-            registry = UserRegistration()
-            # Override logger with one used by test:
-            registry.logger = logger
-            registry.register(u"john", u"password", 12)
-            self.assertEqual(registry.db[u"john"], (u"password", 12))
+   from eliot import MemoryLogger
+   from eliot.testing import swap_logger, check_for_errors
+
+   def custom_capture_logging():
+       # Replace default logging setup with a testing logger:
+       test_logger = MemoryLogger()
+       original_logger = swap_logger(test_logger)
+
+       try:
+           run_some_code()
+       finally:
+           # Restore original logging setup:
+           swap_logger(original_logger)
+           # Validate log messages, check for tracebacks:
+           check_for_errors(test_logger)
+
diff --git a/docs/source/generating/twisted.rst b/docs/source/generating/twisted.rst
index e15ffa8..d8f83c0 100644
--- a/docs/source/generating/twisted.rst
+++ b/docs/source/generating/twisted.rst
@@ -63,6 +63,41 @@ Logging Failures
             d.addErrback(writeFailure)
 
 
+Actions and inlineCallbacks
+---------------------------
+
+Eliot provides a decorator that is compatible with Twisted's ``inlineCallbacks`` but which also behaves well with Eliot's actions.
+Simply substitute ``eliot.twisted.inline_callbacks`` for ``twisted.internet.defer.inlineCallbacks`` in your code.
+
+To understand why, consider the following example:
+
+.. code-block:: python
+
+     from eliot import start_action
+     from twisted.internet.defer import inlineCallbacks
+
+     @inlineCallbacks  # don't use this in real code, use eliot.twisted.inline_callbacks
+     def go():
+         with start_action(action_type=u"yourapp:subsystem:frob") as action:
+             d = some_deferred_api()
+             x = yield d
+             action.log(message_type=u"some-report", x=x)
+
+The action started by this generator remains active as ``yield d`` gives up control to the ``inlineCallbacks`` controller.
+The next bit of code to run will be considered to be a child of ``action``.
+Since that code may be any arbitrary code that happens to get scheduled,
+this is certainly wrong.
+
+Additionally,
+when the ``inlineCallbacks`` controller resumes the generator,
+it will most likely do so with no active action at all.
+This means that the log message following the yield will be recorded with no parent action,
+also certainly wrong.
+
+These problems are solved by using ``eliot.twisted.inline_callbacks`` instead of ``twisted.internet.defer.inlineCallbacks``.
+The behavior of the two decorators is identical except that Eliot's version will preserve the generator's action context and contain it within the generator.
+This extends the ``inlineCallbacks`` illusion of "synchronous" code to Eliot actions.
+
 Actions and Deferreds
 ---------------------
 
diff --git a/docs/source/generating/types.rst b/docs/source/generating/types.rst
index 988e59b..3f8a995 100644
--- a/docs/source/generating/types.rst
+++ b/docs/source/generating/types.rst
@@ -9,7 +9,8 @@ Why Typing?
 So far we've been creating messages and actions in an unstructured manner.
 This means it's harder to support Python objects that aren't built-in and to validate message structure.
 Moreover there's no documentation of what fields messages and action messages expect.
-To improve this we introduce the preferred API for creating actions and standalone messages: ``ActionType`` and ``MessageType``.
+
+To improve this we introduce an optional API for creating actions and standalone messages: ``ActionType`` and ``MessageType``.
 Here's an example demonstrating how we create a message type, bind some values and then log the message:
 
 .. code-block:: python
@@ -190,3 +191,11 @@ If a message fails to serialize then a ``eliot:traceback`` message will be logge
     {"timestamp": "2013-11-22T14:16:51.386827Z",
      "message": "{u\"u'message_type'\": u\"'test'\", u\"u'field'\": u\"'hello'\", u\"u'timestamp'\": u\"'2013-11-22T14:16:51.386634Z'\"}",
      "message_type": "eliot:serialization_failure"}
+
+Testing
+-------
+
+The ``eliot.testing.assertHasAction`` and ``assertHasMessage`` APIs accept ``ActionType`` and ``MessageType`` instances, not just the ``action_type`` and ``message_type`` strings.
+
+Any function decorated with ``@capture_logging`` will additionally validate messages that were created using ``ActionType`` and ``MessageType`` using the applicable ``Field`` definitions.
+This will ensure you've logged all the necessary fields, no additional fields, and used the correct types.
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 3474487..c701a0f 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -1,21 +1,43 @@
 Eliot: Logging that tells you *why* it happened
 ================================================
 
-Most logging systems tell you *what* happened in your application, whereas ``eliot`` also tells you *why* it happened.
+Python's built-in ``logging`` and other similar systems output a stream of factoids: they're interesting, but you can't really tell what's going on.
 
+* Why is your application slow?
+* What caused this code path to be chosen?
+* Why did this error happen?
+
+Standard logging can't answer these questions.
+
+But with a better model you could understand what and why things happened in your application.
+You could pinpoint performance bottlenecks, you could understand what happened when, who called what.
+
+That is what Eliot does.
 ``eliot`` is a Python logging system that outputs causal chains of **actions**: actions can spawn other actions, and eventually they either **succeed or fail**.
 The resulting logs tell you the story of what your software did: what happened, and what caused it.
 
-Eliot works well within a single process, but can also be used across multiple processes to trace causality across a distributed system.
-Eliot is only used to generate your logs; you will still need tools like Logstash and ElasticSearch to aggregate and store logs if you are using multiple processes.
+Eliot supports a range of use cases and 3rd party libraries:
+
+* Logging within a single process.
+* Causal tracing across a distributed system.
+* Scientific computing, with :doc:`built-in support for NumPy and Dask <scientific-computing>`.
+* :doc:`Asyncio and Trio coroutines <generating/asyncio>` and the :doc:`Twisted networking framework <generating/twisted>`.
+
+Eliot is only used to generate your logs; you might still need tools like Logstash and ElasticSearch to aggregate and store logs if you are using multiple processes across multiple machines.
 
 * **Start here:** :doc:`Quickstart documentation <quickstart>`
-* Need help or have any questions? `File an issue <https://github.com/itamarst/eliot/issues/new>`_ on GitHub.
+* Need help or have any questions? `File an issue <https://github.com/itamarst/eliot/issues/new>`_.
+* Eliot is licensed under the `Apache 2.0 license <https://github.com/itamarst/eliot/blob/master/LICENSE>`_, and the source code is `available on GitHub <https://github.com/itamarst/eliot>`_.
+* Eliot supports Python 3.9, 3.8, 3.7, 3.6, and PyPy3.
+  Python 2.7 is in legacy support mode (see :ref:`python2` for details).
+* **Commercial support** is available from `Python⇒Speed <https://pythonspeed.com/services/#eliot>`_.
 * Read on for the full documentation.
 
 Media
 -----
 
+`PyCon 2019 talk: Logging for Scientific Computing <https://pyvideo.org/pycon-us-2019/logging-for-scientific-computing-reproducibility-debugging-optimization.html>`_ (also available in a `prose version <https://pythonspeed.com/articles/logging-for-scientific-computing/>`_).
+
 `Podcast.__init__ episode 133 <https://www.podcastinit.com/eliot-logging-with-itamar-turner-trauring-episode-133/>`_ covers Eliot:
 
 .. raw:: html
@@ -42,15 +64,6 @@ Documentation
    generating/index
    outputting/index
    reading/index
-   usecases/index
+   scientific-computing
    python2
    development
-
-
-Project Information
--------------------
-
-Eliot is maintained by `Itamar Turner-Trauring <mailto:itamar@itamarst.org>`_, and released under the Apache 2.0 License.
-
-It supports Python 3.7, 3.6, 3.5, and 3.4.
-2.7 is currently supported but will be dropped from future releases; see :ref:`python2`.
diff --git a/docs/source/introduction.rst b/docs/source/introduction.rst
index db1fbc4..91b4e01 100644
--- a/docs/source/introduction.rst
+++ b/docs/source/introduction.rst
@@ -13,30 +13,12 @@ Why Eliot?
 
         — George Eliot, *Middlemarch*
 
-The log messages generated by a piece of software tell a story: what, where, when, even why and how if you’re lucky. The readers of this story are more often than not other programs: monitoring systems, performance tools, or just filtering the messages down to something a human can actually comprehend. Unfortunately the output of most logging systems is ill-suited to being read by programs. Even worse, most logging systems omit critical information that both humans and their programs need.
+The log messages generated by a piece of software ought to tell a story: what, where, when, even why and how if you’re lucky.
+But most logging systems omit the all-important *why*.
+You know that some things happened, but not how they relate to each other.
 
-Problem #1: Text is hard to search
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Let’s say you want to find all the log messages about a specific person. A first pass of log messages might look like this:
-
-    Sir James Chettam was going to dine at the Grange to-day with another gentleman whom the girls had never seen, and about whom Dorothea felt some venerating expectation.
-    …
-    If Miss Brooke ever attained perfect meekness, it would not be for lack of inward fire.
-
-You could do a text search for log messages containing the text “Dorothea”, but this is likely to fail for some types of searches. You might want to searching for actions involving dinner, but then you would need to search for “dine” and “dinner” and perhaps other words well. A library like `structlog`_ that can generate structured log messages will solve this first problem. You could define a “person” field in your messages and then you can search for all messages where ``person == "Dorothea"`` as well as other structured queries.
-
-.. _structlog: https://structlog.readthedocs.org/
-
-
-Problem #2: Referring to Entities
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Every time a log message is written out you need to decide how to refer to the objects being logged. In the messages we saw above “Dorothea” and “Miss Brooke” are in fact different identifiers for the same person. Having structured messages doesn’t help us find all messages about a specific entity if the object is referred to inconsistently. What you need is infrastructure for converting specific kinds of objects in your code to fields in your structured log messages. Then you can just say “log a message that refers to this Person” and that reusable code will make sure the correct identifier is generated.
-
-
-Problem #3: Actions
-^^^^^^^^^^^^^^^^^^^
+The problem: What caused this to happen?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Most log messages in your program are going to involve actions:
 
@@ -46,74 +28,33 @@ A marriage has a beginning and eventually an end. The end may be successful, pre
 
 Actions also generate other actions: a marriage leads to a trip to Rome, the trip to Rome might lead to a visit to the Vatican Museum, and so on. Other unrelated actions are occurring at the same time, resulting in a forest of actions, with root actions that grow a tree of child actions.
 
-You might want to trace an action from beginning to end, e.g. to measure how long it took to run. You might want to know what high-level action caused a particular unexpected low-level action. You might want to know what actions a specific entity was involved with. None of these are possible in most logging systems since they have no concept of actions to begin with.
-
-
-Problem #4: Cross-Process Actions
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+You might want to trace an action from beginning to end, e.g. to measure how long it took to run. You might want to know what high-level action caused a particular unexpected low-level action. You might want to know what actions a specific entity was involved with.
 
-A story may involve many characters in many places at many times.
-The novelist has the power to present the internal consciousness of not just one character but many: their ways of thinking, their different perceptions of reality.
+None of these are possible in most logging systems since they have no concept of actions.
 
-Similarly, actions in a distributed system may span multiple processes.
-An incoming request to one server may cause a ripple of effects reaching many other processes; the logs from a single process in isolation are insufficient to understand what happened and why.
 
-
-The Solution: Eliot
+The solution: Eliot
 ^^^^^^^^^^^^^^^^^^^
-Eliot is designed to solve all of these problems.
-For simplicity's sake this example focuses on problems 1 and 3; problem 2 is covered by the :ref:`type system <type system>` and problem 4 by :ref:`cross-process actions <cross process tasks>`.
-
-.. literalinclude:: ../../examples/rometrip_messages.py
-
-Here’s how the log messages generated by the code look, as summarized by the `eliot-tree <https://warehouse.python.org/project/eliot-tree/>`_ tool:
-
-.. code::
-
-   68c12428-5d60-49f5-a269-3fb681938f98
-   +-- honeymoon@1
-       |-- people: [u'Mrs. Casaubon', u'Mr. Casaubon']
-
-   361298ae-b6b7-439a-bc9b-ffde68b7860d
-   +-- visited@1
-       |-- people: [u'Mrs. Casaubon', u'Mr. Casaubon']
-       |-- place: Rome, Italy
 
-   7fe1615c-e442-4bca-b667-7bb435ac6cb8
-   +-- visited@1
-       |-- people: [u'Mrs. Casaubon', u'Mr. Casaubon']
-       |-- place: Vatican Museum
+Eliot is designed to solve these problems: the basic logging abstraction is the action.
 
-   c746230c-627e-4ff9-9173-135568df976c
-   +-- visited@1
-       |-- people: [u'Mrs. Casaubon', u'Mr. Casaubon']
-       |-- place: Statue #1
-
-   5482ec10-36c6-4194-964f-074e325b9329
-   +-- visited@1
-       |-- people: [u'Mrs. Casaubon', u'Mr. Casaubon']
-       |-- place: Statue #2
-
-We can see different messages are related insofar as they refer to the same person, or the same thing… but we can’t trace the relationship in terms of actions. Was looking at a statue the result of visiting Rome? There’s no way we can tell from the log messages. We could manually log start and finish messages but that won’t suffice when we have many interleaved actions involving the same objects. Which of twenty parallel HTTP request tried to insert a row into the database? Chronological messages simply cannot tell us that.
-
-The solution is to introduce two new concepts: actions and tasks. An “action” is something with a start and an end; the end can be successful or it can fail due to an exception. Log messages, as well as log actions, know the log action whose context they are running in. The result is a tree of actions. A “task” is a top-level action, a basic entry point into the program which drives other actions. The task is therefore the root of the tree of actions. For example, an HTTP request received by a web server might be a task.
-
-In our example we have one task (the honeymoon), an action (travel). We will leave looking as a normal log message because it always succeeds, and no other log message will ever need to run its context. Here’s how our code looks now:
+An “action” is something with a start and an end; the end can be successful or it can fail due to an exception. Log messages, as well as log actions, know the log action whose context they are running in. The result is a tree of actions.
 
+In the following example we have one top-level action (the honeymoon), which leads to another action (travel):
 
 .. literalinclude:: ../../examples/rometrip_actions.py
 
-Actions provide a Python context manager. When the action or task starts a start message is logged.
+Actions provide a Python context manager. When the action starts, a start message is logged.
 If the block finishes successfully a success message is logged for the action; if an exception is thrown a failure message is logged for the action with the exception type and contents.
-Not shown here but supported by the API is the ability to add fields to the success messages for an action. A similar API supports Twisted’s Deferreds.
 
+By default the messages are machine-parseable JSON, but for human consumption a visualization is better.
 Here’s how the log messages generated by the new code look, as summarized by the `eliot-tree <https://warehouse.python.org/project/eliot-tree/>`_ tool:
 
 .. code-block:: console
 
    f9dcc74f-ecda-4543-9e9a-1bb062d199f0
    +-- honeymoon@1/started
-       |-- people: [u'Mrs. Casaubon', u'Mr. Casaubon']
+       |-- people: ['Mrs. Casaubon', 'Mr. Casaubon']
        +-- visited@2,1/started
            |-- place: Rome, Italy
            +-- visited@2,2,1/started
@@ -128,4 +69,6 @@ Here’s how the log messages generated by the new code look, as summarized by t
            +-- visited@2,3/succeeded
        +-- honeymoon@3/succeeded
 
-No longer isolated fragments of meaning, our log messages are now a story. Log events have context, you can tell where they came from and what they led to without guesswork. Was looking at a statue the result of the honeymoon? It most definitely was.
+No longer isolated fragments of meaning, our log messages are now a story. Log events have context, you can tell where they came from and what they led to without guesswork.
+
+Was looking at a statue the result of the honeymoon? It most definitely was.
diff --git a/docs/source/news.rst b/docs/source/news.rst
index b0409df..eb9fb0e 100644
--- a/docs/source/news.rst
+++ b/docs/source/news.rst
@@ -1,6 +1,112 @@
 What's New
 ==========
 
+1.13.0
+^^^^^^
+
+Features:
+
+* ``@capture_logging`` and ``MemoryLogger`` now support specifying a custom JSON encoder. By default they now use Eliot's encoder. This means tests can now match the encoding used by a ``FileDestination``.
+* Added support for Python 3.9.
+
+Deprecation:
+
+* Python 3.5 is no longer supported.
+
+1.12.0
+^^^^^^
+
+Features:
+
+* Dask support now includes support for tracing logging of ``dask.persist()``, via wrapper API ``eliot.dask.persist_with_trace()``.
+
+Bug fixes:
+
+* Dask edge cases that previously weren't handled correctly should work better.
+
+1.11.0
+^^^^^^
+
+Features:
+
+* ``Message.log()`` has been replaced by top-level function ``log_message()``. Or if you're in the context of action ``ctx``, you can call ``ctx.log()``. See :ref:`messages` for details.
+* Python 3.8 is now supported.
+* The ``eliot-prettyprint`` command line tool now supports a more compact format by using the ``--compact`` argument.
+* The ``eliot-prettyprint`` command line tool now supports outputting in local timezones using the ``--local-timezone`` argument.
+
+1.10.0
+^^^^^^
+
+Bug fixes:
+
+* ``@eliot.testing.capture_logging`` now passes ``*args`` and ``**kwargs`` to the wrapped function, as one would expect. Fixes #420. Thanks to Jean-Paul Calderone for the bug report.
+* Eliot works with Dask 2.0. Thanks to Dan Myung for the bug report.
+
+1.9.0
+^^^^^
+
+Deprecation:
+
+* Python versions older than 3.5.3, e.g. the 3.5.2 on Ubuntu Xenial, don't work with Eliot, so added a more informative error message explaining that. Fixes #418. Thanks to Richard van der Hoff for the bug report.
+
+Features:
+
+* If you call ``to_file()/FileDestination()`` with a non-writable file, an
+  exception will be raised. This prevents logging from being silently swallowed
+  when the program runs. Fixes #403.
+* PyPy3 is now officially supported.
+
+Changes:
+
+* If you log a NumPy array whose size > 10000, only a subset will be logged. This is to ensure logging giant arrays by mistake doesn't impact your software's performance. If you want to customize logging of large arrays, see :ref:`large_numpy_arrays`. Fixes #410.
+
+1.8.0
+^^^^^
+
+Features:
+
+* Eliot now supports Trio coroutines, as well as other frameworks that utilize Python 3.7's ``contextvars`` (Python 3.5 and 3.6 are also supported, using backport packages).
+
+Deprecation:
+
+* ``eliot.use_asyncio_context()`` is no longer necessary.
+  On Python 3.5 and 3.6, however, you should make sure to import ``eliot`` (or ``aiocontextvars``) before you start your first event loop.
+
+Changes:
+
+* Python 2.7 is now in legacy support mode; the last major Eliot release supporting it is 1.7.0.
+  See :ref:`python2` for details.
+* Python 3.4 is no longer supported.
+
+1.7.0
+^^^^^
+
+Documentation:
+
+* Eliot has an API for testing that your logs were output correctly. Until now, however, the documentation was overly focused on requiring usage of types, which are optional, so it has been rewritten to be more generic: :doc:`read more about the testing API here<generating/testing>`.
+
+Features:
+
+* Generating messages is much faster.
+* Eliot now works with PyInstaller. Thanks to Jean-Paul Calderone for the bug report. Fixes issue #386.
+* The testing infrastructure now has slightly more informative error messages. Thanks to Jean-Paul Calderone for the bug report. Fixes issue #373.
+* Added lower-level testing infrastructure—``eliot.testing.swap_logger`` and ``eliot.testing.check_for_errors``—which is useful for cases when the ``@capture_logging`` decorator is insufficient. For example, test methods that are async, or return Twisted ``Deferred``. See the :doc:`testing documentation<generating/testing>` for details. Thanks to Jean-Paul Calderone for the feature request. Fixes #364.
+* ``eliot.ValidationError``, as raised by e.g. ``capture_logging``, is now part of the public API. Fixed issue #146.
+
+Twisted-related features:
+
+* New decorator, ``@eliot.twisted.inline_callbacks``, which is like Twisted's ``inlineCallbacks`` but which also manages the Eliot context. Thanks to Jean-Paul Calderone for the fix. Fixed issue #259.
+* ``eliot.twisted.DeferredContext.addCallbacks`` now supports omitting the errback, for compatibility with Twisted's ``Deferred``. Thanks to Jean-Paul Calderone for the fix. Fixed issue #366.
+
+Bug fixes:
+
+* Fixed bug in the ``asyncio`` coroutine support where only the thread where ``use_asyncio_context()`` was called supported coroutine-specific contexts. Fixes issue #388.
+* ``ILogger.write`` is now explicitly thread-safe. The ``MemoryLogger`` (as used
+  by tests) implementation of this method which was previously not thread-safe
+  is now thread-safe. Thanks to Jean-Paul Calderone for the patch. Fixes issue
+  #382.
+
+
 1.6.0
 ^^^^^
 
diff --git a/docs/source/outputting/output.rst b/docs/source/outputting/output.rst
index 28b5346..dd57107 100644
--- a/docs/source/outputting/output.rst
+++ b/docs/source/outputting/output.rst
@@ -21,7 +21,7 @@ This ensures that no messages will be lost if logging happens during configurati
 
 
 Outputting JSON to a file
---------------------
+-------------------------
 
 Since JSON is a common output format, Eliot provides a utility class that logs to a file, ``eliot.FileDestination(file=yourfile)``.
 Each Eliot message will be encoded in JSON and written on a new line.
diff --git a/docs/source/python2.rst b/docs/source/python2.rst
index a70f8bd..536c472 100644
--- a/docs/source/python2.rst
+++ b/docs/source/python2.rst
@@ -3,19 +3,12 @@
 Python 2.7 Support
 ==================
 
-Eliot supports Python 2.7 as of release 1.16.
-However, Eliot will drop support for Python 2 in an upcoming release (probably 1.17 or 1.18).
+The last version of Eliot to support Python 2.7 was release 1.7.
 
 If you are using Eliot with Python 2, keep the following in mind:
 
-* I will provide critical bug fixes for Python 2 for one year after the last release supporting Python 2.7.
-  I will accept patches for critical bug fixes after that (or you can pay me to do additional work).
+* I will provide critical bug fixes for Python 2 until March 2020.
+  I will accept patches for critical bug fixes after that (or you can `pay for my services <https://pythonspeed.com/services/#eliot>`_ to do additional work).
 * Make sure you use an up-to-date ``setuptools`` and ``pip``; in theory this should result in only downloading versions of the package that support Python 2.
-* For extra safety, you can pin Eliot in ``setup.py`` or ``requirements.txt`` by setting: ``eliot < 1.17``.
-
-For example, if it turns out 1.16 is the last version that supports Python 2:
-
-* 1.17 will only support Python 3.
-* Critical bug fixes for Python 2 will be released as 1.16.1, 1.16.2, etc..
-
-I will update this page once I know the final release where Python 2 is supported.
+* For extra safety, you can pin Eliot in ``setup.py`` or ``requirements.txt`` by setting: ``eliot < 1.8``.
+* Critical bug fixes for Python 2 will be released as 1.7.1, 1.7.2, etc.
diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst
index 9a1373e..6b81f2b 100644
--- a/docs/source/quickstart.rst
+++ b/docs/source/quickstart.rst
@@ -13,6 +13,12 @@ To install Eliot and the other tools we'll use in this example, run the followin
 
    $ pip install eliot eliot-tree requests
 
+You can also install it using Conda:
+
+.. code-block:: shell-session
+
+   $ conda install -c conda-forge eliot eliot-tree requests
+
 This will install:
 
 1. Eliot itself.
@@ -121,4 +127,5 @@ You can learn more by reading the rest of the documentation, including:
 * How to generate :doc:`actions <generating/actions>`, :doc:`standalone messages <generating/messages>`, and :doc:`handle errors <generating/errors>`.
 * How to integrate or migrate your :doc:`existing stdlib logging messages <generating/migrating>`.
 * How to output logs :doc:`to a file or elsewhere <outputting/output>`.
-* Using :doc:`asyncio coroutines <generating/asyncio>`, :doc:`threads and processes <generating/threads>`, or :doc:`Twisted <generating/twisted>`.
+* Using :doc:`asyncio or Trio coroutines <generating/asyncio>`, :doc:`threads and processes <generating/threads>`, or :doc:`Twisted <generating/twisted>`.
+* Using Eliot for :doc:`scientific computing <scientific-computing>`.
diff --git a/docs/source/reading/fields.rst b/docs/source/reading/fields.rst
index 2c43bde..5626b82 100644
--- a/docs/source/reading/fields.rst
+++ b/docs/source/reading/fields.rst
@@ -1,6 +1,14 @@
 Message Fields in Depth
 =======================
 
+Structure
+---------
+
+Eliot messages are typically serialized to JSON objects.
+Field names must therefore be ``str``.
+Message values must be types JSON can represent: ``int``, ``float``, ``bool``, ``None``, ``str``, ``dict``, or ``list``.
+The latter two can only be composed of other supported types.
+
 Built-in Fields
 ---------------
 
diff --git a/docs/source/reading/reading.rst b/docs/source/reading/reading.rst
index 89a0ab9..cba0b45 100644
--- a/docs/source/reading/reading.rst
+++ b/docs/source/reading/reading.rst
@@ -16,7 +16,9 @@ Eliot includes a command-line tool that makes it easier to read JSON-formatted E
      another: 2
      value: goodbye
 
-The third-party `eliot-tree`_ tool renders JSON-formatted Eliot messages into a tree visualizing the tasks' actions.
+Run ``eliot-prettyprint --help`` to see the various formatting options; you can for example use a more compact one-message-per-line format.
+
+Additionally, the **highly recommended** third-party `eliot-tree`_ tool renders JSON-formatted Eliot messages into a tree visualizing the tasks' actions.
 
 
 Filtering logs
diff --git a/docs/source/usecases/scientific-computing.rst b/docs/source/scientific-computing.rst
similarity index 79%
rename from docs/source/usecases/scientific-computing.rst
rename to docs/source/scientific-computing.rst
index bec31a6..9f7cec9 100644
--- a/docs/source/usecases/scientific-computing.rst
+++ b/docs/source/scientific-computing.rst
@@ -11,6 +11,25 @@ Eliot is an ideal logging library for these cases:
 * It supports scientific libraries: NumPy and Dask.
   By default, Eliot will automatically serialize NumPy integers, floats, arrays, and bools to JSON (see :ref:`custom_json` for details).
 
+At PyCon 2019 Itamar Turner-Trauring gave a talk about logging for scientific computing, in part using Eliot—you can `watch the video <https://pyvideo.org/pycon-us-2019/logging-for-scientific-computing-reproducibility-debugging-optimization.html>`_ or `read a prose version <https://pythonspeed.com/articles/logging-for-scientific-computing/>`_.
+
+.. _large_numpy_arrays:
+
+Logging large arrays
+--------------------
+
+Logging large arrays is a problem: it will take a lot of CPU, and it's no fun discovering that your batch process was slow because you mistakenly logged an array with 30 million integers every time you called a core function.
+
+So how do you deal with logging large arrays?
+
+1. **Log a summary (default behavior):** By default, if you log an array with size > 10,000, Eliot will only log the first 10,000 values, along with the shape.
+2. **Omit the array:** You can also just choose not to log the array at all.
+   With ``log_call`` you can use the ``include_args`` parameter to ensure the array isn't logged (see :ref:`log_call decorator`).
+   With ``start_action`` you can just not pass it in.
+3. **Manual transformation:** If you're using ``start_action`` you can also transform the array yourself before passing it in.
+   For example, you could write it to a temporary file and then log the path to that file.
+   Or you could summarize it in some other way than the default.
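Option 3 can be as simple as a helper that mirrors Eliot's default truncation. A sketch (the ``summarize`` helper is made up, not part of Eliot's API):

```python
def summarize(values, max_items=10_000):
    """Keep only a prefix of a large sequence, plus its length,
    mirroring Eliot's default handling of big NumPy arrays."""
    if len(values) > max_items:
        return {"truncated": True, "len": len(values),
                "prefix": list(values[:max_items])}
    return {"truncated": False, "len": len(values), "prefix": list(values)}


# Log the summary instead of the raw 30-million-element array:
# with start_action(action_type="process", data=summarize(huge_array)):
#     ...
```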
+
 
 .. _dask_usage:
 
@@ -25,11 +44,12 @@ In order to do this you will need to:
 * Ensure all worker processes write the Eliot logs to disk (if you're using the ``multiprocessing`` or ``distributed`` backends).
 * If you're using multiple worker machines, aggregate all log files into a single place, so you can more easily analyze them with e.g. `eliot-tree <https://github.com/jonathanj/eliottree>`_.
 * Replace ``dask.compute()`` with ``eliot.dask.compute_with_trace()``.
+* Replace ``dask.persist()`` with ``eliot.dask.persist_with_trace()``.
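Cross-process tracing works by forwarding a serialized task identifier into each worker. A stdlib sketch of the id format visible in ``eliot/_action.py`` (``serialize_task_id`` / ``continue_task``); the real functions deal in bytes and construct ``Action`` objects, so these helper names are illustrative only:

```python
def serialize_task_id(task_uuid, task_level):
    # e.g. ("8c668cde", [2, 1]) -> "8c668cde@/2/1"
    return "{}@/{}".format(task_uuid, "/".join(map(str, task_level)))


def parse_task_id(task_id):
    # The worker splits on "@" and resumes the action at that level.
    task_uuid, level = task_id.split("@")
    return task_uuid, [int(part) for part in level.strip("/").split("/")]
```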
 
-In the following example, you can see how this works for a Dask run using ``distributed``, the recommended Dask scheduler.
+In the following example, you can see how this works for a Dask run using ``distributed``, the recommended Dask scheduler for more sophisticated use cases.
 We'll be using multiple worker processes, but only use a single machine:
 
-.. literalinclude:: ../../../examples/dask_eliot.py
+.. literalinclude:: ../../examples/dask_eliot.py
 
 In the output you can see how the various Dask tasks depend on each other, and the full trace of the computation:
 
diff --git a/docs/source/usecases/index.rst b/docs/source/usecases/index.rst
deleted file mode 100644
index 7bee910..0000000
--- a/docs/source/usecases/index.rst
+++ /dev/null
@@ -1,6 +0,0 @@
-Use Cases
-=========
-
-.. toctree::
-   scientific-computing
-
diff --git a/eliot.egg-info/PKG-INFO b/eliot.egg-info/PKG-INFO
index 36f6e5a..b891599 100644
--- a/eliot.egg-info/PKG-INFO
+++ b/eliot.egg-info/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: eliot
-Version: 1.6.0
+Version: 1.13.0
 Summary: Logging library that tells you why it happened
 Home-page: https://github.com/itamarst/eliot/
 Maintainer: Itamar Turner-Trauring
@@ -13,20 +13,39 @@ Description: Eliot: Logging that tells you *why* it happened
                    :target: http://travis-ci.org/itamarst/eliot
                    :alt: Build Status
         
-        Most logging systems tell you *what* happened in your application, whereas ``eliot`` also tells you *why* it happened.
+        Python's built-in ``logging`` and other similar systems output a stream of factoids: they're interesting, but you can't really tell what's going on.
         
+        * Why is your application slow?
+        * What caused this code path to be chosen?
+        * Why did this error happen?
+        
+        Standard logging can't answer these questions.
+        
+        But with a better model you could understand what and why things happened in your application.
+        You could pinpoint performance bottlenecks, you could understand what happened when, who called what.
+        
+        That is what Eliot does.
         ``eliot`` is a Python logging system that outputs causal chains of **actions**: actions can spawn other actions, and eventually they either **succeed or fail**.
         The resulting logs tell you the story of what your software did: what happened, and what caused it.
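Concretely, each log message is a JSON-serializable dict; the start and end of an action look roughly like this (field names match Eliot's output, the ``http:request`` action type is hypothetical):

```python
import json
import time
import uuid

task_uuid = str(uuid.uuid4())
start_message = {
    "task_uuid": task_uuid,          # identifies the whole task tree
    "task_level": [1],               # position within the tree
    "action_type": "http:request",   # hypothetical action name
    "action_status": "started",
    "timestamp": time.time(),
}
# The matching end message shares the task_uuid, at the next level:
end_message = dict(start_message, task_level=[2],
                   action_status="succeeded", timestamp=time.time())
print(json.dumps(start_message))
print(json.dumps(end_message))
```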
         
-        Eliot works well within a single process, but can also be used across multiple processes to trace causality across a distributed system.
-        Eliot is only used to generate your logs; you will still need tools like Logstash and ElasticSearch to aggregate and store logs if you are using multiple processes.
+        Eliot supports a range of use cases and 3rd party libraries:
         
-        Eliot supports Python 2.7, 3.4, 3.5, 3.6, 3.7 and PyPy.
+        * Logging within a single process.
+        * Causal tracing across a distributed system.
+        * Scientific computing, with `built-in support for NumPy and Dask <https://eliot.readthedocs.io/en/stable/scientific-computing.html>`_.
+        * `Asyncio and Trio coroutines <https://eliot.readthedocs.io/en/stable/generating/asyncio.html>`_ and the `Twisted networking framework <https://eliot.readthedocs.io/en/stable/generating/twisted.html>`_.
+        
+        Eliot is only used to generate your logs; you might still need tools like Logstash and ElasticSearch to aggregate and store logs if you are using multiple processes across multiple machines.
+        
+        Eliot supports Python 3.6, 3.7, 3.8, and 3.9, as well as PyPy3.
         It is maintained by Itamar Turner-Trauring, and released under the Apache 2.0 License.
         
+        Python 2.7 is in legacy support mode, with the last release supported being 1.7; see `here <https://eliot.readthedocs.io/en/stable/python2.html>`_ for details.
+        
         * `Read the documentation <https://eliot.readthedocs.io>`_.
-        * Download from `PyPI`_.
+        * Download from `PyPI`_ or `conda-forge <https://anaconda.org/conda-forge/eliot>`_.
         * Need help or have any questions? `File an issue <https://github.com/itamarst/eliot/issues/new>`_ on GitHub.
+        * **Commercial support** is available from `Python⇒Speed <https://pythonspeed.com/services/#eliot>`_.
         
         Testimonials
         ------------
@@ -44,16 +63,15 @@ Classifier: Intended Audience :: Developers
 Classifier: License :: OSI Approved :: Apache Software License
 Classifier: Operating System :: OS Independent
 Classifier: Programming Language :: Python
-Classifier: Programming Language :: Python :: 2
-Classifier: Programming Language :: Python :: 2.7
 Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.4
-Classifier: Programming Language :: Python :: 3.5
 Classifier: Programming Language :: Python :: 3.6
 Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
 Classifier: Programming Language :: Python :: Implementation :: CPython
 Classifier: Programming Language :: Python :: Implementation :: PyPy
 Classifier: Topic :: System :: Logging
-Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*
+Requires-Python: >=3.6.0
 Provides-Extra: journald
+Provides-Extra: test
 Provides-Extra: dev
diff --git a/eliot.egg-info/SOURCES.txt b/eliot.egg-info/SOURCES.txt
index 7cf0d04..346ce52 100644
--- a/eliot.egg-info/SOURCES.txt
+++ b/eliot.egg-info/SOURCES.txt
@@ -15,6 +15,7 @@ docs/source/introduction.rst
 docs/source/news.rst
 docs/source/python2.rst
 docs/source/quickstart.rst
+docs/source/scientific-computing.rst
 docs/source/generating/actions.rst
 docs/source/generating/asyncio.rst
 docs/source/generating/errors.rst
@@ -22,9 +23,9 @@ docs/source/generating/index.rst
 docs/source/generating/loglevels.rst
 docs/source/generating/messages.rst
 docs/source/generating/migrating.rst
+docs/source/generating/testing.rst
 docs/source/generating/threads.rst
 docs/source/generating/twisted.rst
-docs/source/generating/types-testing.rst
 docs/source/generating/types.rst
 docs/source/outputting/elasticsearch.rst
 docs/source/outputting/index.rst
@@ -34,13 +35,11 @@ docs/source/outputting/output.rst
 docs/source/reading/fields.rst
 docs/source/reading/index.rst
 docs/source/reading/reading.rst
-docs/source/usecases/index.rst
-docs/source/usecases/scientific-computing.rst
 eliot/__init__.py
 eliot/_action.py
-eliot/_asyncio.py
 eliot/_bytesjson.py
 eliot/_errors.py
+eliot/_generators.py
 eliot/_message.py
 eliot/_output.py
 eliot/_traceback.py
@@ -67,13 +66,13 @@ eliot.egg-info/requires.txt
 eliot.egg-info/top_level.txt
 eliot/tests/__init__.py
 eliot/tests/common.py
-eliot/tests/corotests.py
 eliot/tests/strategies.py
 eliot/tests/test_action.py
 eliot/tests/test_api.py
 eliot/tests/test_coroutines.py
 eliot/tests/test_dask.py
 eliot/tests/test_filter.py
+eliot/tests/test_generators.py
 eliot/tests/test_journald.py
 eliot/tests/test_json.py
 eliot/tests/test_logwriter.py
@@ -81,6 +80,7 @@ eliot/tests/test_message.py
 eliot/tests/test_output.py
 eliot/tests/test_parse.py
 eliot/tests/test_prettyprint.py
+eliot/tests/test_pyinstaller.py
 eliot/tests/test_serializers.py
 eliot/tests/test_stdlib.py
 eliot/tests/test_tai64n.py
@@ -89,6 +89,7 @@ eliot/tests/test_traceback.py
 eliot/tests/test_twisted.py
 eliot/tests/test_util.py
 eliot/tests/test_validation.py
+examples/asyncio_linkcheck.py
 examples/cross_process_client.py
 examples/cross_process_server.py
 examples/cross_thread.py
@@ -97,6 +98,6 @@ examples/journald.py
 examples/linkcheck.py
 examples/logfile.py
 examples/rometrip_actions.py
-examples/rometrip_messages.py
 examples/stdlib.py
-examples/stdout.py
\ No newline at end of file
+examples/stdout.py
+examples/trio_say.py
\ No newline at end of file
diff --git a/eliot.egg-info/requires.txt b/eliot.egg-info/requires.txt
index 7e6db72..22999a7 100644
--- a/eliot.egg-info/requires.txt
+++ b/eliot.egg-info/requires.txt
@@ -1,18 +1,25 @@
 six
 zope.interface
 pyrsistent>=0.11.8
-boltons
+boltons>=19.0.1
+
+[:python_version < "3.7" and python_version > "2.7"]
+aiocontextvars
 
 [dev]
 setuptools>=40
 twine>=1.12.1
 coverage
-hypothesis>=1.14.0
-testtools
 sphinx
 sphinx_rtd_theme
 flake8
-yapf
+black
 
 [journald]
 cffi>=1.1.2
+
+[test]
+hypothesis>=1.14.0
+testtools
+pytest
+pytest-xdist
diff --git a/eliot/__init__.py b/eliot/__init__.py
index d21ecbb..2a7562e 100644
--- a/eliot/__init__.py
+++ b/eliot/__init__.py
@@ -2,37 +2,59 @@
 Eliot: Logging for Complex & Distributed Systems.
 """
 from warnings import warn
+from sys import version_info
+
+# Enable asyncio contextvars support in Python 3.5/3.6:
+if version_info < (3, 7):
+    # On Python 3.5.2 and earlier, some of the necessary attributes aren't exposed:
+    if version_info < (3, 5, 3):
+        raise RuntimeError(
+            "This version of Eliot doesn't work on Python 3.5.2 or earlier. "
+            "Either upgrade to Python 3.5.3 or later (on Ubuntu 16.04 "
+            "you can use https://launchpad.net/~deadsnakes/+archive/ubuntu/ppa "
+            "to get Python 3.6), or pin Eliot to version 1.7."
+        )
+    import aiocontextvars
+
+    dir(aiocontextvars)  # pacify pyflakes
+    del aiocontextvars
 
 # Expose the public API:
 from ._message import Message
 from ._action import (
-    start_action, startTask, Action, preserve_context, current_action,
-    use_asyncio_context, log_call
+    start_action,
+    startTask,
+    Action,
+    preserve_context,
+    current_action,
+    log_call,
+    log_message,
 )
-from ._output import (
-    ILogger,
-    Logger,
-    MemoryLogger,
-    to_file,
-    FileDestination,
-)
-from ._validation import Field, fields, MessageType, ActionType
+from ._output import ILogger, Logger, MemoryLogger, to_file, FileDestination
+from ._validation import Field, fields, MessageType, ActionType, ValidationError
 from ._traceback import write_traceback, writeFailure
 from ._errors import register_exception_extractor
 from ._version import get_versions
 
-
 # Backwards compatibility:
 def add_destination(destination):
     warn(
-        "add_destination is deprecated since 1.1.0. "
-        "Use add_destinations instead.",
+        "add_destination is deprecated since 1.1.0. " "Use add_destinations instead.",
         DeprecationWarning,
-        stacklevel=2
+        stacklevel=2,
     )
     Logger._destinations.add(destination)
 
 
+# Backwards compatibility:
+def use_asyncio_context():
+    warn(
+        "This function is no longer needed as of Eliot 1.8.0.",
+        DeprecationWarning,
+        stacklevel=2,
+    )
+
+
 # Backwards compatibilty:
 addDestination = add_destination
 removeDestination = Logger._destinations.remove
@@ -52,10 +74,14 @@ add_global_fields = addGlobalFields
 def _parse_compat():
     # Force eliot.parse to be imported in way that works with old Python:
     from .parse import Parser
+
     del Parser
     import sys
+
     sys.modules["eliot._parse"] = sys.modules["eliot.parse"]
     return sys.modules["eliot.parse"]
+
+
 _parse = _parse_compat()
 del _parse_compat
 
@@ -82,7 +108,7 @@ __all__ = [
     "register_exception_extractor",
     "current_action",
     "use_asyncio_context",
-
+    "ValidationError",
     # PEP 8 variants:
     "write_traceback",
     "write_failure",
@@ -94,12 +120,12 @@ __all__ = [
     "add_global_fields",
     "to_file",
     "log_call",
+    "log_message",
     "__version__",
-
     # Backwards compat for eliot-tree:
     "_parse",
 ]
 
 
-__version__ = get_versions()['version']
+__version__ = get_versions()["version"]
 del get_versions
diff --git a/eliot/_action.py b/eliot/_action.py
index 9549ca2..5b5b519 100644
--- a/eliot/_action.py
+++ b/eliot/_action.py
@@ -9,106 +9,47 @@ from __future__ import unicode_literals, absolute_import
 
 import threading
 from uuid import uuid4
-from itertools import count
 from contextlib import contextmanager
-from warnings import warn
 from functools import partial
 from inspect import getcallargs
+from contextvars import ContextVar
 
-from pyrsistent import (
-    field,
-    PClass,
-    optional,
-    pmap_field,
-    pvector_field,
-    pvector, )
+from pyrsistent import field, PClass, optional, pmap_field, pvector
 from boltons.funcutils import wraps
-from six import text_type as unicode, integer_types, PY3
+from six import text_type as unicode, PY3
 
 from ._message import (
-    Message,
     WrittenMessage,
     EXCEPTION_FIELD,
     REASON_FIELD,
-    TASK_UUID_FIELD, )
+    TASK_UUID_FIELD,
+    MESSAGE_TYPE_FIELD,
+)
 from ._util import safeunicode
 from ._errors import _error_extraction
 
-ACTION_STATUS_FIELD = 'action_status'
-ACTION_TYPE_FIELD = 'action_type'
+ACTION_STATUS_FIELD = "action_status"
+ACTION_TYPE_FIELD = "action_type"
 
-STARTED_STATUS = 'started'
-SUCCEEDED_STATUS = 'succeeded'
-FAILED_STATUS = 'failed'
+STARTED_STATUS = "started"
+SUCCEEDED_STATUS = "succeeded"
+FAILED_STATUS = "failed"
 
 VALID_STATUSES = (STARTED_STATUS, SUCCEEDED_STATUS, FAILED_STATUS)
 
+_ACTION_CONTEXT = ContextVar("eliot.action")
 
-class _ExecutionContext(threading.local):
-    """
-    Call stack-based context, storing the current L{Action}.
-
-    The context is thread-specific, but can be made e.g. coroutine-specific by
-    overriding C{get_sub_context}.
-    """
-
-    def __init__(self):
-        self._main_stack = []
-        self.get_sub_context = lambda: None
-
-    def _get_stack(self):
-        """
-        Get the stack for the current asyncio Task.
-        """
-        stack = self.get_sub_context()
-        if stack is None:
-            return self._main_stack
-        else:
-            return stack
-
-    def push(self, action):
-        """
-        Push the given L{Action} to the front of the stack.
-
-        @param action: L{Action} that will be used for log messages and as
-            parent of newly created L{Action} instances.
-        """
-        self._get_stack().append(action)
+from ._message import TIMESTAMP_FIELD, TASK_LEVEL_FIELD
 
-    def pop(self):
-        """
-        Pop the front L{Action} on the stack.
-        """
-        self._get_stack().pop(-1)
-
-    def current(self):
-        """
-        @return: The current front L{Action}, or C{None} if there is no
-            L{Action} set.
-        """
-        stack = self._get_stack()
-        if not stack:
-            return None
-        return stack[-1]
 
-
-_context = _ExecutionContext()
-current_action = _context.current
-
-
-def use_asyncio_context():
+def current_action():
     """
-    Use a logging context that is tied to the current asyncio coroutine.
-
-    Call this first thing, before doing any other logging.
-
-    Does not currently support event loops other than asyncio.
+    @return: The current C{Action} in context, or C{None} if none has been set.
     """
-    from ._asyncio import AsyncioContext
-    _context.get_sub_context = AsyncioContext().get_stack
+    return _ACTION_CONTEXT.get(None)
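The replacement of the thread-local stack with ``contextvars`` boils down to the following pattern. A minimal stdlib sketch (``DemoAction`` and the variable names are illustrative, not Eliot's API):

```python
from contextvars import ContextVar

_ACTION = ContextVar("demo.action", default=None)


def current_action():
    return _ACTION.get()


class DemoAction:
    """Context manager that installs itself as the current action."""

    def __init__(self, name):
        self.name = name

    def __enter__(self):
        # set() returns a token remembering the previous value...
        self._token = _ACTION.set(self)
        return self

    def __exit__(self, *exc_info):
        # ...and reset() restores it, so nesting behaves like a stack.
        _ACTION.reset(self._token)


with DemoAction("outer"):
    with DemoAction("inner"):
        assert current_action().name == "inner"
    assert current_action().name == "outer"
assert current_action() is None
```

Unlike a thread-local, a ``ContextVar`` is also isolated per asyncio task, which is why the explicit ``use_asyncio_context()`` call became unnecessary.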
 
 
-class TaskLevel(PClass):
+class TaskLevel(object):
     """
     The location of a message within the tree of actions of a task.
 
@@ -118,22 +59,45 @@ class TaskLevel(PClass):
         the second item in the task.
     """
 
-    level = pvector_field(integer_types)
+    def __init__(self, level):
+        self._level = level
 
-    # PClass really ought to provide this ordering facility for us:
-    # tobgu/pyrsistent#45.
+    def as_list(self):
+        """Return the current level.
+
+        @return: List of integers.
+        """
+        return self._level[:]
+
+    # Backwards compatibility:
+    @property
+    def level(self):
+        return pvector(self._level)
 
     def __lt__(self, other):
-        return self.level < other.level
+        return self._level < other._level
 
     def __le__(self, other):
-        return self.level <= other.level
+        return self._level <= other._level
 
     def __gt__(self, other):
-        return self.level > other.level
+        return self._level > other._level
 
     def __ge__(self, other):
-        return self.level >= other.level
+        return self._level >= other._level
+
+    def __eq__(self, other):
+        if other.__class__ != TaskLevel:
+            return False
+        return self._level == other._level
+
+    def __ne__(self, other):
+        if other.__class__ != TaskLevel:
+            return True
+        return self._level != other._level
+
+    def __hash__(self):
+        return hash(tuple(self._level))
 
     @classmethod
     def fromString(cls, string):
@@ -152,7 +116,7 @@ class TaskLevel(PClass):
 
         @return: L{unicode} representation of the L{TaskLevel}.
         """
-        return "/" + "/".join(map(unicode, self.level))
+        return "/" + "/".join(map(unicode, self._level))
 
     def next_sibling(self):
         """
@@ -161,7 +125,9 @@ class TaskLevel(PClass):
 
         @return: L{TaskLevel} which follows this one.
         """
-        return TaskLevel(level=self.level.set(-1, self.level[-1] + 1))
+        new_level = self._level[:]
+        new_level[-1] += 1
+        return TaskLevel(level=new_level)
 
     def child(self):
         """
@@ -169,7 +135,9 @@ class TaskLevel(PClass):
 
         @return: L{TaskLevel} which is the first child of this one.
         """
-        return TaskLevel(level=self.level.append(1))
+        new_level = self._level[:]
+        new_level.append(1)
+        return TaskLevel(level=new_level)
 
     def parent(self):
         """
@@ -178,9 +146,9 @@ class TaskLevel(PClass):
 
         @return: L{TaskLevel} which is the parent of this one.
         """
-        if not self.level:
+        if not self._level:
             return None
-        return TaskLevel(level=self.level[:-1])
+        return TaskLevel(level=self._level[:-1])
 
     def is_sibling_of(self, task_level):
         """
@@ -195,6 +163,8 @@ class TaskLevel(PClass):
 
 _TASK_ID_NOT_SUPPLIED = object()
 
+import time
+
 
 class Action(object):
     """
@@ -212,8 +182,7 @@ class Action(object):
     @ivar _finished: L{True} if the L{Action} has finished, otherwise L{False}.
     """
 
-    def __init__(
-        self, logger, task_uuid, task_level, action_type, serializers=None):
+    def __init__(self, logger, task_uuid, task_level, action_type, serializers=None):
         """
         Initialize the L{Action} and log the start message.
 
@@ -236,20 +205,14 @@ class Action(object):
             serialization will be done for messages generated by the
             L{Action}.
         """
-        self._numberOfMessages = iter(count())
         self._successFields = {}
-        self._logger = logger
-        if isinstance(task_level, unicode):
-            warn(
-                "Action should be initialized with a TaskLevel",
-                DeprecationWarning,
-                stacklevel=2)
-            task_level = TaskLevel.fromString(task_level)
+        self._logger = _output._DEFAULT_LOGGER if (logger is None) else logger
         self._task_level = task_level
         self._last_child = None
         self._identification = {
             TASK_UUID_FIELD: task_uuid,
-            ACTION_TYPE_FIELD: action_type, }
+            ACTION_TYPE_FIELD: action_type,
+        }
         self._serializers = serializers
         self._finished = False
 
@@ -269,8 +232,8 @@ class Action(object):
         @return: L{bytes} encoding the current location within the task.
         """
         return "{}@{}".format(
-            self._identification[TASK_UUID_FIELD],
-            self._nextTaskLevel().toString()).encode("ascii")
+            self._identification[TASK_UUID_FIELD], self._nextTaskLevel().toString()
+        ).encode("ascii")
 
     @classmethod
     def continue_task(cls, logger=None, task_id=_TASK_ID_NOT_SUPPLIED):
@@ -292,8 +255,8 @@ class Action(object):
             task_id = task_id.decode("ascii")
         uuid, task_level = task_id.split("@")
         action = cls(
-            logger, uuid, TaskLevel.fromString(task_level),
-            "eliot:remote_task")
+            logger, uuid, TaskLevel.fromString(task_level), "eliot:remote_task"
+        )
         action._start({})
         return action
 
@@ -326,12 +289,14 @@ class Action(object):
         block or L{Action.finish}.
         """
         fields[ACTION_STATUS_FIELD] = STARTED_STATUS
+        fields[TIMESTAMP_FIELD] = time.time()
         fields.update(self._identification)
+        fields[TASK_LEVEL_FIELD] = self._nextTaskLevel().as_list()
         if self._serializers is None:
             serializer = None
         else:
             serializer = self._serializers.start
-        Message(fields, serializer).write(self._logger, self)
+        self._logger.write(fields, serializer)
 
     def finish(self, exception=None):
         """
@@ -358,17 +323,20 @@ class Action(object):
             if self._serializers is not None:
                 serializer = self._serializers.success
         else:
-            fields = _error_extraction.get_fields_for_exception(
-                self._logger, exception)
+            fields = _error_extraction.get_fields_for_exception(self._logger, exception)
             fields[EXCEPTION_FIELD] = "%s.%s" % (
-                exception.__class__.__module__, exception.__class__.__name__)
+                exception.__class__.__module__,
+                exception.__class__.__name__,
+            )
             fields[REASON_FIELD] = safeunicode(exception)
             fields[ACTION_STATUS_FIELD] = FAILED_STATUS
             if self._serializers is not None:
                 serializer = self._serializers.failure
 
+        fields[TIMESTAMP_FIELD] = time.time()
         fields.update(self._identification)
-        Message(fields, serializer).write(self._logger, self)
+        fields[TASK_LEVEL_FIELD] = self._nextTaskLevel().as_list()
+        self._logger.write(fields, serializer)
 
     def child(self, logger, action_type, serializers=None):
         """
@@ -390,18 +358,22 @@ class Action(object):
         """
         newLevel = self._nextTaskLevel()
         return self.__class__(
-            logger, self._identification[TASK_UUID_FIELD], newLevel,
-            action_type, serializers)
+            logger,
+            self._identification[TASK_UUID_FIELD],
+            newLevel,
+            action_type,
+            serializers,
+        )
 
     def run(self, f, *args, **kwargs):
         """
         Run the given function with this L{Action} as its execution context.
         """
-        _context.push(self)
+        parent = _ACTION_CONTEXT.set(self)
         try:
             return f(*args, **kwargs)
         finally:
-            _context.pop()
+            _ACTION_CONTEXT.reset(parent)
 
     def addSuccessFields(self, **fields):
         """
@@ -422,27 +394,37 @@ class Action(object):
 
         The action does NOT finish when the context is exited.
         """
-        _context.push(self)
+        parent = _ACTION_CONTEXT.set(self)
         try:
             yield self
         finally:
-            _context.pop()
+            _ACTION_CONTEXT.reset(parent)
 
     # Python context manager implementation:
     def __enter__(self):
         """
         Push this action onto the execution context.
         """
-        _context.push(self)
+        self._parent_token = _ACTION_CONTEXT.set(self)
         return self
 
     def __exit__(self, type, exception, traceback):
         """
         Pop this action off the execution context, log finish message.
         """
-        _context.pop()
+        _ACTION_CONTEXT.reset(self._parent_token)
+        self._parent_token = None
         self.finish(exception)
 
+    ## Message logging
+    def log(self, message_type, **fields):
+        """Log individual message."""
+        fields[TIMESTAMP_FIELD] = time.time()
+        fields[TASK_UUID_FIELD] = self._identification[TASK_UUID_FIELD]
+        fields[TASK_LEVEL_FIELD] = self._nextTaskLevel().as_list()
+        fields[MESSAGE_TYPE_FIELD] = message_type
+        self._logger.write(fields, fields.pop("__eliot_serializer__", None))
+
 
 class WrongTask(Exception):
     """
@@ -453,8 +435,10 @@ class WrongTask(Exception):
     def __init__(self, action, message):
         Exception.__init__(
             self,
-            'Tried to add {} to {}. Expected task_uuid = {}, got {}'.format(
-                message, action, action.task_uuid, message.task_uuid))
+            "Tried to add {} to {}. Expected task_uuid = {}, got {}".format(
+                message, action, action.task_uuid, message.task_uuid
+            ),
+        )
 
 
 class WrongTaskLevel(Exception):
@@ -466,8 +450,10 @@ class WrongTaskLevel(Exception):
     def __init__(self, action, message):
         Exception.__init__(
             self,
-            'Tried to add {} to {}, but {} is not a sibling of {}'.format(
-                message, action, message.task_level, action.task_level))
+            "Tried to add {} to {}, but {} is not a sibling of {}".format(
+                message, action, message.task_level, action.task_level
+            ),
+        )
 
 
 class WrongActionType(Exception):
@@ -476,12 +462,16 @@ class WrongActionType(Exception):
     """
 
     def __init__(self, action, message):
-        error_msg = 'Tried to end {} with {}. Expected action_type = {}, got {}'
+        error_msg = "Tried to end {} with {}. Expected action_type = {}, got {}"
         Exception.__init__(
             self,
             error_msg.format(
-                action, message, action.action_type,
-                message.contents.get(ACTION_TYPE_FIELD, '<undefined>')))
+                action,
+                message,
+                action.action_type,
+                message.contents.get(ACTION_TYPE_FIELD, "<undefined>"),
+            ),
+        )
 
 
 class InvalidStatus(Exception):
@@ -490,12 +480,17 @@ class InvalidStatus(Exception):
     """
 
     def __init__(self, action, message):
-        error_msg = 'Tried to end {} with {}. Expected status {} or {}, got {}'
+        error_msg = "Tried to end {} with {}. Expected status {} or {}, got {}"
         Exception.__init__(
             self,
             error_msg.format(
-                action, message, SUCCEEDED_STATUS, FAILED_STATUS,
-                message.contents.get(ACTION_STATUS_FIELD, '<undefined>')))
+                action,
+                message,
+                SUCCEEDED_STATUS,
+                FAILED_STATUS,
+                message.contents.get(ACTION_STATUS_FIELD, "<undefined>"),
+            ),
+        )
 
 
 class DuplicateChild(Exception):
@@ -506,8 +501,11 @@ class DuplicateChild(Exception):
 
     def __init__(self, action, message):
         Exception.__init__(
-            self, 'Tried to add {} to {}, but already had child at {}'.format(
-                message, action, message.task_level))
+            self,
+            "Tried to add {} to {}, but already had child at {}".format(
+                message, action, message.task_level
+            ),
+        )
 
 
 class InvalidStartMessage(Exception):
@@ -516,8 +514,7 @@ class InvalidStartMessage(Exception):
     """
 
     def __init__(self, message, reason):
-        Exception.__init__(
-            self, 'Invalid start message {}: {}'.format(message, reason))
+        Exception.__init__(self, "Invalid start message {}: {}".format(message, reason))
 
     @classmethod
     def wrong_status(cls, message):
@@ -525,7 +522,7 @@ class InvalidStartMessage(Exception):
 
     @classmethod
     def wrong_task_level(cls, message):
-        return cls(message, 'first message must have task level ending in 1')
+        return cls(message, "first message must have task level ending in 1")
 
 
 class WrittenAction(PClass):
@@ -550,18 +547,15 @@ class WrittenAction(PClass):
         L{WrittenMessage} objects that make up this action.
     """
 
-    start_message = field(
-        type=optional(WrittenMessage), mandatory=True, initial=None)
-    end_message = field(
-        type=optional(WrittenMessage), mandatory=True, initial=None)
+    start_message = field(type=optional(WrittenMessage), mandatory=True, initial=None)
+    end_message = field(type=optional(WrittenMessage), mandatory=True, initial=None)
     task_level = field(type=TaskLevel, mandatory=True)
     task_uuid = field(type=unicode, mandatory=True, factory=unicode)
     # Pyrsistent doesn't support pmap_field with recursive types.
     _children = pmap_field(TaskLevel, object)
 
     @classmethod
-    def from_messages(
-        cls, start_message=None, children=pvector(), end_message=None):
+    def from_messages(cls, start_message=None, children=pvector(), end_message=None):
         """
         Create a C{WrittenAction} from C{WrittenMessage}s and other
         C{WrittenAction}s.
@@ -592,10 +586,12 @@ class WrittenAction(PClass):
         actual_message = [
             message
             for message in [start_message, end_message] + list(children)
-            if message][0]
+            if message
+        ][0]
         action = cls(
             task_level=actual_message.task_level.parent(),
-            task_uuid=actual_message.task_uuid, )
+            task_uuid=actual_message.task_uuid,
+        )
         if start_message:
             action = action._start(start_message)
         for child in children:
@@ -673,8 +669,7 @@ class WrittenAction(PClass):
         The list of child messages and actions sorted by task level, excluding the
         start and end messages.
         """
-        return pvector(
-            sorted(self._children.values(), key=lambda m: m.task_level))
+        return pvector(sorted(self._children.values(), key=lambda m: m.task_level))
 
     def _validate_message(self, message):
         """
@@ -709,7 +704,7 @@ class WrittenAction(PClass):
         """
         self._validate_message(message)
         level = message.task_level
-        return self.transform(('_children', level), message)
+        return self.transform(("_children", level), message)
 
     def _start(self, start_message):
         """
@@ -723,8 +718,7 @@ class WrittenAction(PClass):
             C{task_level} indicating that it is not the first message of an
             action.
         """
-        if start_message.contents.get(
-            ACTION_STATUS_FIELD, None) != STARTED_STATUS:
+        if start_message.contents.get(ACTION_STATUS_FIELD, None) != STARTED_STATUS:
             raise InvalidStartMessage.wrong_status(start_message)
         if start_message.task_level.level[-1] != 1:
             raise InvalidStartMessage.wrong_task_level(start_message)
@@ -806,7 +800,7 @@ def start_action(logger=None, action_type="", _serializers=None, **fields):
         return action
 
 
-def startTask(logger=None, action_type=u"", _serializers=None, **fields):
+def startTask(logger=None, action_type="", _serializers=None, **fields):
     """
     Like L{action}, but creates a new top-level L{Action} with no parent.
 
@@ -825,11 +819,8 @@ def startTask(logger=None, action_type=u"", _serializers=None, **fields):
     @return: A new L{Action}.
     """
     action = Action(
-        logger,
-        unicode(uuid4()),
-        TaskLevel(level=[]),
-        action_type,
-        _serializers)
+        logger, unicode(uuid4()), TaskLevel(level=[]), action_type, _serializers
+    )
     action._start(fields)
     return action
 
@@ -878,8 +869,7 @@ def preserve_context(f):
 
 
 def log_call(
-        wrapped_function=None, action_type=None, include_args=None,
-        include_result=True
+    wrapped_function=None, action_type=None, include_args=None, include_result=True
 ):
     """Decorator/decorator factory that logs inputs and the return result.
 
@@ -892,23 +882,30 @@ def log_call(
     @param include_result: True by default. If False, the return result isn't logged.
     """
     if wrapped_function is None:
-        return partial(log_call, action_type=action_type, include_args=include_args,
-                       include_result=include_result)
+        return partial(
+            log_call,
+            action_type=action_type,
+            include_args=include_args,
+            include_result=include_result,
+        )
 
     if action_type is None:
         if PY3:
-            action_type = "{}.{}".format(wrapped_function.__module__,
-                                         wrapped_function.__qualname__)
+            action_type = "{}.{}".format(
+                wrapped_function.__module__, wrapped_function.__qualname__
+            )
         else:
             action_type = wrapped_function.__name__
 
     if PY3 and include_args is not None:
         from inspect import signature
+
         sig = signature(wrapped_function)
         if set(include_args) - set(sig.parameters):
             raise ValueError(
-                ("include_args ({}) lists arguments not in the "
-                 "wrapped function").format(include_args)
+                (
+                    "include_args ({}) lists arguments not in the " "wrapped function"
+                ).format(include_args)
             )
 
     @wraps(wrapped_function)
@@ -930,3 +927,19 @@ def log_call(
             return result
 
     return logging_wrapper
+
+
+def log_message(message_type, **fields):
+    """Log a message in the context of the current action.
+
+    If there is no current action, a new UUID will be generated.
+    """
+    # Loggers will hopefully go away...
+    logger = fields.pop("__eliot_logger__", None)
+    action = current_action()
+    if action is None:
+        action = Action(logger, str(uuid4()), TaskLevel(level=[]), "")
+    action.log(message_type, **fields)
+
+
+from . import _output
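The new `log_message` helper added above is the simplest entry point into Eliot's action model: log into the current action if one exists, otherwise start a fresh task. A minimal stand-in sketch of that control flow (this does not import eliot; `_current_task_uuid` stands in for `current_action()` and all names are illustrative):

```python
from uuid import uuid4

# Stand-in sketch of the control flow of log_message() above: reuse the
# current action's task UUID if an action is active, otherwise start a
# fresh task with a new UUID.
_current_task_uuid = None  # stands in for eliot's current_action()

def log_message(message_type, **fields):
    if _current_task_uuid is not None:
        task_uuid = _current_task_uuid
    else:
        task_uuid = str(uuid4())
    # The message is the caller's fields plus Eliot-style metadata:
    return dict(fields, message_type=message_type, task_uuid=task_uuid)

m = log_message("myapp:login", username="alice")
```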
diff --git a/eliot/_asyncio.py b/eliot/_asyncio.py
deleted file mode 100644
index 86c5900..0000000
--- a/eliot/_asyncio.py
+++ /dev/null
@@ -1,33 +0,0 @@
-"""
-Support for asyncio coroutines.
-"""
-
-from asyncio import Task
-from weakref import WeakKeyDictionary
-
-
-class AsyncioContext:
-    """
-    Per-Task context, allowing different coroutines to have different logging
-    context.
-
-    This will be attached to threading.local object, so no need to worry about
-    thread-safety.
-    """
-    def __init__(self):
-        self._per_task = WeakKeyDictionary()
-
-    def get_stack(self):
-        """
-        Get the stack for the current Task, or None if there is no Task.
-        """
-        try:
-            task = Task.current_task()
-        except RuntimeError:
-            # No loop for this thread:
-            task = None
-        if task is None:
-            return None
-        if task not in self._per_task:
-            self._per_task[task] = []
-        return self._per_task[task]
diff --git a/eliot/_bytesjson.py b/eliot/_bytesjson.py
index da07ca0..fad2881 100644
--- a/eliot/_bytesjson.py
+++ b/eliot/_bytesjson.py
@@ -37,7 +37,8 @@ def _dumps(obj, cls=pyjson.JSONEncoder):
             if isinstance(o, bytes):
                 warnings.warn(
                     "Eliot will soon stop supporting encoding bytes in JSON"
-                    " on Python 3", DeprecationWarning
+                    " on Python 3",
+                    DeprecationWarning,
                 )
                 return o.decode("utf-8")
             return cls.default(self, o)
diff --git a/eliot/_errors.py b/eliot/_errors.py
index 11a4c73..7c03e06 100644
--- a/eliot/_errors.py
+++ b/eliot/_errors.py
@@ -46,6 +46,7 @@ class ErrorExtraction(object):
                     return extractor(exception)
                 except:
                     from ._traceback import write_traceback
+
                     write_traceback(logger)
                     return {}
         return {}
diff --git a/eliot/_generators.py b/eliot/_generators.py
new file mode 100644
index 0000000..1a7b0a1
--- /dev/null
+++ b/eliot/_generators.py
@@ -0,0 +1,138 @@
+"""
+Support for maintaining an action context across generator suspension.
+"""
+
+from __future__ import unicode_literals, absolute_import
+
+from sys import exc_info
+from functools import wraps
+from contextlib import contextmanager
+from contextvars import copy_context
+from weakref import WeakKeyDictionary
+
+from . import log_message
+
+
+class _GeneratorContext(object):
+    """Generator sub-context for C{_ExecutionContext}."""
+
+    def __init__(self, execution_context):
+        self._execution_context = execution_context
+        self._contexts = WeakKeyDictionary()
+        self._current_generator = None
+
+    def init_stack(self, generator):
+        """Create a new stack for the given generator."""
+        self._contexts[generator] = copy_context()
+
+    @contextmanager
+    def in_generator(self, generator):
+        """Context manager: set the given generator as the current generator."""
+        previous_generator = self._current_generator
+        try:
+            self._current_generator = generator
+            yield
+        finally:
+            self._current_generator = previous_generator
+
+
+class GeneratorSupportNotEnabled(Exception):
+    """
+    An attempt was made to use a decorated generator without first turning on
+    the generator context manager.
+    """
+
+
+def eliot_friendly_generator_function(original):
+    """
+    Decorate a generator function so that the Eliot action context is
+    preserved across ``yield`` expressions.
+    """
+
+    @wraps(original)
+    def wrapper(*a, **kw):
+        # Keep track of whether the next value to deliver to the generator is
+        # a non-exception or an exception.
+        ok = True
+
+        # Keep track of the next value to deliver to the generator.
+        value_in = None
+
+        # Create the generator with a call to the generator function.  This
+        # happens with whatever Eliot action context happens to be active,
+        # which is fine and correct and also irrelevant because no code in the
+        # generator function can run until we call send or throw on it.
+        gen = original(*a, **kw)
+
+        # Initialize the per-generator context to a copy of the current context.
+        context = copy_context()
+        while True:
+            try:
+                # Whichever way we invoke the generator, we do it inside
+                # the Context we copied for it, so the generator runs with
+                # its own Eliot action context while the caller's context
+                # is left untouched.
+                #
+                # Regarding the support of Twisted's inlineCallbacks-like
+                # functionality (see eliot.twisted.inline_callbacks):
+                #
+                # The invocation may raise the inlineCallbacks internal
+                # control flow exception _DefGen_Return.  It is not wrong to
+                # just let that propagate upwards here but inlineCallbacks
+                # does think it is wrong.  The behavior triggers a
+                # DeprecationWarning to try to get us to fix our code.  We
+                # could explicitly handle and re-raise the _DefGen_Return but
+                # only at the expense of depending on a private Twisted API.
+                # For now, I'm opting to try to encourage Twisted to fix the
+                # situation (or at least not worsen it):
+                # https://twistedmatrix.com/trac/ticket/9590
+                #
+                # Alternatively, _DefGen_Return is only required on Python 2.
+                # When Python 2 support is dropped, this concern can be
+                # eliminated by always using `return value` instead of
+                # `returnValue(value)` (and adding the necessary logic to the
+                # StopIteration handler below).
+                def go():
+                    if ok:
+                        value_out = gen.send(value_in)
+                    else:
+                        value_out = gen.throw(*value_in)
+                    # We have obtained a value from the generator.  In
+                    # giving it to us, it has given up control.  Note this
+                    # fact here.  Importantly, this is within the
+                    # generator's action context so that we get a good
+                    # indication of where the yield occurred.
+                    #
+                    # This is noisy, enable only for debugging:
+                    if wrapper.debug:
+                        log_message(message_type="yielded")
+                    return value_out
+
+                value_out = context.run(go)
+            except StopIteration:
+                # When the generator raises this, it is signaling
+                # completion.  Leave the loop.
+                break
+            else:
+                try:
+                    # Pass the generator's result along to whoever is
+                    # driving.  Capture the result as the next value to
+                    # send inward.
+                    value_in = yield value_out
+                except:
+                    # Or capture the exception if that's the flavor of the
+                    # next value.  This could possibly include GeneratorExit
+                    # which turns out to be just fine because throwing it into
+                    # the inner generator effectively propagates the close
+                    # (and with the right context!) just as you would want.
+                    # True, the GeneratorExit does get re-thrown out of
+                    # the gen.throw call on the next iteration, but from
+                    # there it simply propagates out of this wrapper,
+                    # which is exactly what close() expects.
+                    ok = False
+                    value_in = exc_info()
+                else:
+                    ok = True
+
+    wrapper.debug = False
+    return wrapper
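The decorator above leans on per-generator `contextvars` Contexts: each generator gets a copy of the context active when driving began, and every resume runs inside that copy. A self-contained sketch of just that mechanism (the variable name `action` is illustrative, not eliot's API):

```python
from contextvars import ContextVar, copy_context

# ContextVar values set before the context is copied stay visible inside
# the generator across yields, because every next() runs in the copy.
action = ContextVar("action", default=None)

def gen():
    yield action.get()
    yield action.get()

def drive(g):
    ctx = copy_context()  # per-generator context, as in init_stack() above
    results = []
    while True:
        try:
            results.append(ctx.run(lambda: next(g)))
        except StopIteration:
            break
    return results

action.set("task-1")
values = drive(gen())
```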
diff --git a/eliot/_message.py b/eliot/_message.py
index 60c61a5..d8eaef6 100644
--- a/eliot/_message.py
+++ b/eliot/_message.py
@@ -5,19 +5,18 @@ Log messages and related utilities.
 from __future__ import unicode_literals
 
 import time
-from uuid import uuid4
-
+from warnings import warn
 from six import text_type as unicode
 
-from pyrsistent import PClass, thaw, pmap_field, pmap
+from pyrsistent import PClass, pmap_field
 
-MESSAGE_TYPE_FIELD = 'message_type'
-TASK_UUID_FIELD = 'task_uuid'
-TASK_LEVEL_FIELD = 'task_level'
-TIMESTAMP_FIELD = 'timestamp'
+MESSAGE_TYPE_FIELD = "message_type"
+TASK_UUID_FIELD = "task_uuid"
+TASK_LEVEL_FIELD = "task_level"
+TIMESTAMP_FIELD = "timestamp"
 
-EXCEPTION_FIELD = 'exception'
-REASON_FIELD = 'reason'
+EXCEPTION_FIELD = "exception"
+REASON_FIELD = "reason"
 
 
 class Message(object):
@@ -29,6 +28,7 @@ class Message(object):
     (e.g. C{"_id"} is used by Elasticsearch for unique message identifiers and
     may be auto-populated by logstash).
     """
+
     # Overrideable for testing purposes:
     _time = time.time
 
@@ -46,6 +46,12 @@ class Message(object):
 
         @return: The new L{Message}
         """
+        warn(
+            "Message.new() is deprecated since 1.11.0, "
+            "use eliot.log_message() instead.",
+            DeprecationWarning,
+            stacklevel=2,
+        )
         return _class(fields, _serializer)
 
     @classmethod
@@ -55,7 +61,13 @@ class Message(object):
 
         The keyword arguments will become contents of the L{Message}.
         """
-        _class.new(**fields).write()
+        warn(
+            "Message.log() is deprecated since 1.11.0, "
+            "use Action.log() or eliot.log_message() instead.",
+            DeprecationWarning,
+            stacklevel=2,
+        )
+        _class(fields).write()
 
     def __init__(self, contents, serializer=None):
         """
@@ -70,7 +82,7 @@ class Message(object):
             L{eliot.Logger} may choose to serialize the message. If you're
             using L{eliot.MessageType} this will be populated for you.
         """
-        self._contents = pmap(contents)
+        self._contents = contents.copy()
         self._serializer = serializer
 
     def bind(self, **fields):
@@ -78,13 +90,15 @@ class Message(object):
         Return a new L{Message} with this message's contents plus the
         additional given bindings.
         """
-        return Message(self._contents.update(fields), self._serializer)
+        contents = self._contents.copy()
+        contents.update(fields)
+        return Message(contents, self._serializer)
 
     def contents(self):
         """
         Return a copy of L{Message} contents.
         """
-        return dict(self._contents)
+        return self._contents.copy()
 
     def _timestamp(self):
         """
@@ -92,37 +106,6 @@ class Message(object):
         """
         return self._time()
 
-    def _freeze(self, action=None):
-        """
-        Freeze this message for logging, registering it with C{action}.
-
-        @param action: The L{Action} which is the context for this message. If
-            C{None}, the L{Action} will be deduced from the current call
-            stack.
-
-        @return: A L{PMap} with added C{timestamp}, C{task_uuid}, and
-            C{task_level} entries.
-        """
-        if action is None:
-            action = current_action()
-        if action is None:
-            task_uuid = unicode(uuid4())
-            task_level = [1]
-        else:
-            task_uuid = action._identification[TASK_UUID_FIELD]
-            task_level = thaw(action._nextTaskLevel().level)
-        timestamp = self._timestamp()
-        new_values = {
-            TIMESTAMP_FIELD: timestamp,
-            TASK_UUID_FIELD: task_uuid,
-            TASK_LEVEL_FIELD: task_level
-        }
-        if "action_type" not in self._contents and (
-            "message_type" not in self._contents
-        ):
-            new_values["message_type"] = ""
-        return self._contents.update(new_values)
-
     def write(self, logger=None, action=None):
         """
         Write the message to the given logger.
@@ -138,10 +121,16 @@ class Message(object):
             C{None}, the L{Action} will be deduced from the current call
             stack.
         """
-        if logger is None:
-            logger = _output._DEFAULT_LOGGER
-        logged_dict = self._freeze(action=action)
-        logger.write(dict(logged_dict), self._serializer)
+        fields = dict(self._contents)
+        if "message_type" not in fields:
+            fields["message_type"] = ""
+        if self._serializer is not None:
+            fields["__eliot_serializer__"] = self._serializer
+        if action is None:
+            fields["__eliot_logger__"] = logger
+            log_message(**fields)
+        else:
+            action.log(**fields)
 
 
 class WrittenMessage(PClass):
@@ -150,6 +139,7 @@ class WrittenMessage(PClass):
 
     @ivar _logged_dict: The originally logged dictionary.
     """
+
     _logged_dict = pmap_field((str, unicode), object)
 
     @property
@@ -178,9 +168,11 @@ class WrittenMessage(PClass):
         """
         A C{PMap}, the message contents without Eliot metadata.
         """
-        return self._logged_dict.discard(TIMESTAMP_FIELD).discard(
-            TASK_UUID_FIELD
-        ).discard(TASK_LEVEL_FIELD)
+        return (
+            self._logged_dict.discard(TIMESTAMP_FIELD)
+            .discard(TASK_UUID_FIELD)
+            .discard(TASK_LEVEL_FIELD)
+        )
 
     @classmethod
     def from_dict(cls, logged_dictionary):
@@ -202,5 +194,4 @@ class WrittenMessage(PClass):
 
 
 # Import at end to deal with circular imports:
-from ._action import current_action, TaskLevel
-from . import _output
+from ._action import log_message, TaskLevel
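Both deprecations above use `warnings.warn(..., stacklevel=2)` so the warning is attributed to the caller of `Message.new()`/`Message.log()` rather than to `_message.py` itself. A minimal sketch of the pattern (the class here is a stand-in, not eliot's `Message`):

```python
import warnings

class Message:
    @classmethod
    def new(cls, **fields):
        # stacklevel=2 attributes the warning to the caller of new(),
        # not to this method's own frame.
        warnings.warn(
            "Message.new() is deprecated, use eliot.log_message() instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return fields

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    fields = Message.new(key=1)
```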
diff --git a/eliot/_output.py b/eliot/_output.py
index 087ce05..debff7c 100644
--- a/eliot/_output.py
+++ b/eliot/_output.py
@@ -2,12 +2,13 @@
 Implementation of hooks and APIs for outputting log messages.
 """
 
-from __future__ import unicode_literals, absolute_import
-
 import sys
+import traceback
+import inspect
 import json as pyjson
-
-from six import text_type as unicode, PY3
+from threading import Lock
+from functools import wraps
+from io import IOBase
 
 from pyrsistent import PClass, field
 
@@ -15,14 +16,10 @@ from . import _bytesjson as bytesjson
 from zope.interface import Interface, implementer
 
 from ._traceback import write_traceback, TRACEBACK_MESSAGE
-from ._message import (
-    Message,
-    EXCEPTION_FIELD,
-    MESSAGE_TYPE_FIELD,
-    REASON_FIELD,
-)
+from ._message import EXCEPTION_FIELD, MESSAGE_TYPE_FIELD, REASON_FIELD
 from ._util import saferepr, safeunicode
 from .json import EliotJSONEncoder
+from ._validation import ValidationError
 
 
 class _DestinationsSendError(Exception):
@@ -135,6 +132,8 @@ class ILogger(Interface):
         """
         Write a dictionary to the appropriate destination.
 
+        @note: This method is thread-safe.
+
         @param serializer: Either C{None}, or a
             L{eliot._validation._MessageSerializer} which can be used to
             validate this message.
@@ -154,7 +153,9 @@ class Logger(object):
     whose messages you want to unit test in isolation, e.g. a class. The tests
     can then replace a specific L{Logger} with a L{MemoryLogger}.
     """
+
     _destinations = Destinations()
+    _log_tracebacks = True
 
     def _safeUnicodeDictionary(self, dictionary):
         """
@@ -169,7 +170,7 @@ class Logger(object):
             faithfully as can be done without putting in too much effort.
         """
         try:
-            return unicode(
+            return str(
                 dict(
                     (saferepr(key), saferepr(value))
                     for (key, value) in dictionary.items()
@@ -188,52 +189,56 @@ class Logger(object):
                 serializer.serialize(dictionary)
         except:
             write_traceback(self)
-            msg = Message(
-                {
-                    MESSAGE_TYPE_FIELD: "eliot:serialization_failure",
-                    "message": self._safeUnicodeDictionary(dictionary)
-                }
+            from ._action import log_message
+
+            log_message(
+                "eliot:serialization_failure",
+                message=self._safeUnicodeDictionary(dictionary),
+                __eliot_logger__=self,
             )
-            msg.write(self)
             return
 
         try:
             self._destinations.send(dictionary)
         except _DestinationsSendError as e:
-            for (exc_type, exception, exc_traceback) in e.errors:
-                try:
-                    # Can't use same code path as serialization errors because
+            from ._action import log_message
+
+            if self._log_tracebacks:
+                for (exc_type, exception, exc_traceback) in e.errors:
+                    # Can't use same Logger as serialization errors because
                     # if destination continues to error out we will get
                     # infinite recursion. So instead we have to manually
-                    # construct a message.
-                    msg = Message(
-                        {
-                            MESSAGE_TYPE_FIELD:
-                            "eliot:destination_failure",
-                            REASON_FIELD:
-                            safeunicode(exception),
-                            EXCEPTION_FIELD:
-                            exc_type.__module__ + "." + exc_type.__name__,
-                            "message":
-                            self._safeUnicodeDictionary(dictionary)
-                        }
-                    )
-                    self._destinations.send(dict(msg._freeze()))
-                except:
-                    # Nothing we can do here, raising exception to caller will
-                    # break business logic, better to have that continue to
-                    # work even if logging isn't.
-                    pass
-
-
-class UnflushedTracebacks(Exception):
-    """
-    The L{MemoryLogger} had some tracebacks logged which were not flushed.
+                    # construct a Logger that won't retry.
+                    logger = Logger()
+                    logger._log_tracebacks = False
+                    logger._destinations = self._destinations
+                    msg = {
+                        MESSAGE_TYPE_FIELD: "eliot:destination_failure",
+                        REASON_FIELD: safeunicode(exception),
+                        EXCEPTION_FIELD: exc_type.__module__ + "." + exc_type.__name__,
+                        "message": self._safeUnicodeDictionary(dictionary),
+                        "__eliot_logger__": logger,
+                    }
+                    log_message(**msg)
+            else:
+                # Nothing we can do here, raising exception to caller will
+                # break business logic, better to have that continue to
+                # work even if logging isn't.
+                pass
+
 
-    This means either your code has a bug and logged an unexpected traceback.
-    If you expected the traceback then you will need to flush it using
-    L{MemoryLogger.flushTracebacks}.
+def exclusively(f):
     """
+    Decorate a function to make it thread-safe by serializing invocations
+    using a per-instance lock.
+    """
+
+    @wraps(f)
+    def exclusively_f(self, *a, **kw):
+        with self._lock:
+            return f(self, *a, **kw)
+
+    return exclusively_f
 
 
 @implementer(ILogger)
@@ -257,9 +262,15 @@ class MemoryLogger(object):
         not mutate this list.
     """
 
-    def __init__(self):
+    def __init__(self, encoder=EliotJSONEncoder):
+        """
+        @param encoder: A JSONEncoder subclass to use when encoding JSON.
+        """
+        self._lock = Lock()
+        self._encoder = encoder
         self.reset()
 
+    @exclusively
     def flushTracebacks(self, exceptionType):
         """
         Flush all logged tracebacks whose exception is of the given type.
@@ -284,15 +295,64 @@ class MemoryLogger(object):
     # PEP 8 variant:
     flush_tracebacks = flushTracebacks
 
+    @exclusively
     def write(self, dictionary, serializer=None):
         """
         Add the dictionary to list of messages.
         """
+        # Validate a copy of the dictionary, to ensure what we store
+        # isn't mutated.
+        try:
+            self._validate_message(dictionary.copy(), serializer)
+        except Exception as e:
+            # Skip irrelevant frames that don't help pinpoint the problem:
+            from . import _output, _message, _action
+
+            skip_filenames = [_output.__file__, _message.__file__, _action.__file__]
+            for frame in inspect.stack():
+                if frame[1] not in skip_filenames:
+                    break
+            self._failed_validations.append(
+                "{}: {}".format(e, "".join(traceback.format_stack(frame[0])))
+            )
         self.messages.append(dictionary)
         self.serializers.append(serializer)
         if serializer is TRACEBACK_MESSAGE._serializer:
             self.tracebackMessages.append(dictionary)
 
+    def _validate_message(self, dictionary, serializer):
+        """Validate an individual message.
+
+        As a side-effect, the message is replaced with its serialized contents.
+
+        @param dictionary: A message C{dict} to be validated.  Might be mutated
+            by the serializer!
+
+        @param serializer: C{None} or a serializer.
+
+        @raises TypeError: If a field name is not unicode, or the dictionary
+            fails to serialize to JSON.
+
+        @raises eliot.ValidationError: If serializer was given and validation
+            failed.
+        """
+        if serializer is not None:
+            serializer.validate(dictionary)
+        for key in dictionary:
+            if not isinstance(key, str):
+                if isinstance(key, bytes):
+                    key.decode("utf-8")
+                else:
+                    raise TypeError(dictionary, "%r is not unicode" % (key,))
+        if serializer is not None:
+            serializer.serialize(dictionary)
+
+        try:
+            pyjson.dumps(dictionary, cls=self._encoder)
+        except Exception as e:
+            raise TypeError("Message %s doesn't encode to JSON: %s" % (dictionary, e))
+
+    @exclusively
     def validate(self):
         """
         Validate all written messages.
@@ -300,6 +360,9 @@ class MemoryLogger(object):
         Does minimal validation of types, and for messages with corresponding
         serializers use those to do additional validation.
 
+        As a side-effect, the messages are replaced with their serialized
+        contents.
+
         @raises TypeError: If a field name is not unicode, or the dictionary
             fails to serialize to JSON.
 
@@ -307,26 +370,15 @@ class MemoryLogger(object):
             failed.
         """
         for dictionary, serializer in zip(self.messages, self.serializers):
-            if serializer is not None:
-                serializer.validate(dictionary)
-            for key in dictionary:
-                if not isinstance(key, unicode):
-                    if isinstance(key, bytes):
-                        key.decode("utf-8")
-                    else:
-                        raise TypeError(
-                            dictionary, "%r is not unicode" % (key, )
-                        )
-            if serializer is not None:
-                serializer.serialize(dictionary)
-
             try:
-                bytesjson.dumps(dictionary)
-                pyjson.dumps(dictionary)
-            except Exception as e:
-                raise TypeError("Message %s doesn't encode to JSON: %s" % (
-                    dictionary, e))
-
+                self._validate_message(dictionary, serializer)
+            except (TypeError, ValidationError) as e:
+                # We already figured out which messages failed validation
+                # earlier. This just lets us figure out which exception type to
+                # raise.
+                raise e.__class__("\n\n".join(self._failed_validations))
+
+    @exclusively
     def serialize(self):
         """
         Serialize all written messages.
@@ -342,6 +394,7 @@ class MemoryLogger(object):
             result.append(dictionary)
         return result
 
+    @exclusively
     def reset(self):
         """
         Clear all logged messages.
@@ -355,6 +408,7 @@ class MemoryLogger(object):
         self.messages = []
         self.serializers = []
         self.tracebackMessages = []
+        self._failed_validations = []
 
 
 class FileDestination(PClass):
@@ -371,41 +425,38 @@ class FileDestination(PClass):
 
     @ivar _linebreak: C{"\n"} as either bytes or unicode.
     """
+
     file = field(mandatory=True)
     encoder = field(mandatory=True)
     _dumps = field(mandatory=True)
     _linebreak = field(mandatory=True)
 
     def __new__(cls, file, encoder=EliotJSONEncoder):
+        if isinstance(file, IOBase) and not file.writable():
+            raise RuntimeError("Given file {} is not writable.".format(file))
+
         unicodeFile = False
-        if PY3:
-            try:
-                file.write(b"")
-            except TypeError:
-                unicodeFile = True
+        try:
+            file.write(b"")
+        except TypeError:
+            unicodeFile = True
 
         if unicodeFile:
             # On Python 3 native json module outputs unicode:
             _dumps = pyjson.dumps
-            _linebreak = u"\n"
+            _linebreak = "\n"
         else:
             _dumps = bytesjson.dumps
             _linebreak = b"\n"
         return PClass.__new__(
-            cls,
-            file=file,
-            _dumps=_dumps,
-            _linebreak=_linebreak,
-            encoder=encoder
+            cls, file=file, _dumps=_dumps, _linebreak=_linebreak, encoder=encoder
         )
 
     def __call__(self, message):
         """
         @param message: A message dictionary.
         """
-        self.file.write(
-            self._dumps(message, cls=self.encoder) + self._linebreak
-        )
+        self.file.write(self._dumps(message, cls=self.encoder) + self._linebreak)
         self.file.flush()
 
 
@@ -414,10 +465,10 @@ def to_file(output_file, encoder=EliotJSONEncoder):
     Add a destination that writes a JSON message per line to the given file.
 
     @param output_file: A file-like object.
+
+    @param encoder: A JSONEncoder subclass to use when encoding JSON.
     """
-    Logger._destinations.add(
-        FileDestination(file=output_file, encoder=encoder)
-    )
+    Logger._destinations.add(FileDestination(file=output_file, encoder=encoder))
 
 
 # The default Logger, used when none is specified:
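`MemoryLogger`'s methods are now serialized through the `exclusively` decorator introduced above. The pattern generalizes to any class that keeps a `Lock` in `self._lock`; a sketch with an illustrative counter class:

```python
from functools import wraps
from threading import Lock, Thread

def exclusively(f):
    """Serialize calls to f through the instance's self._lock."""
    @wraps(f)
    def exclusively_f(self, *a, **kw):
        with self._lock:
            return f(self, *a, **kw)
    return exclusively_f

class Counter:
    def __init__(self):
        self._lock = Lock()
        self.value = 0

    @exclusively
    def increment(self):
        self.value += 1

c = Counter()
threads = [
    Thread(target=lambda: [c.increment() for _ in range(1000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock held for every call, the four threads' increments never interleave mid-update.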
diff --git a/eliot/_traceback.py b/eliot/_traceback.py
index 3b2525c..08e90a5 100644
--- a/eliot/_traceback.py
+++ b/eliot/_traceback.py
@@ -14,14 +14,18 @@ from ._validation import MessageType, Field
 from ._errors import _error_extraction
 
 TRACEBACK_MESSAGE = MessageType(
-    "eliot:traceback", [
+    "eliot:traceback",
+    [
         Field(REASON_FIELD, safeunicode, "The exception's value."),
         Field("traceback", safeunicode, "The traceback."),
         Field(
             EXCEPTION_FIELD,
             lambda typ: "%s.%s" % (typ.__module__, typ.__name__),
-            "The exception type's FQPN.")],
-    "An unexpected exception indicating a bug.")
+            "The exception type's FQPN.",
+        ),
+    ],
+    "An unexpected exception indicating a bug.",
+)
 # The fields here are actually subset of what you might get in practice,
 # due to exception extraction, so we hackily modify the serializer:
 TRACEBACK_MESSAGE._serializer.allow_additional_fields = True
@@ -37,10 +41,8 @@ def _writeTracebackMessage(logger, typ, exception, traceback):
 
     @param traceback: The traceback, a C{str}.
     """
-    msg = TRACEBACK_MESSAGE(
-        reason=exception, traceback=traceback, exception=typ)
-    msg = msg.bind(
-        **_error_extraction.get_fields_for_exception(logger, exception))
+    msg = TRACEBACK_MESSAGE(reason=exception, traceback=traceback, exception=typ)
+    msg = msg.bind(**_error_extraction.get_fields_for_exception(logger, exception))
     msg.write(logger)
 
 
@@ -54,7 +56,11 @@ def _get_traceback_no_io():
     """
     Return a version of L{traceback} that doesn't do I/O.
     """
-    module = load_module(str("_traceback_no_io"), traceback)
+    try:
+        module = load_module(str("_traceback_no_io"), traceback)
+    except NotImplementedError:
+        # Can't fix the I/O problem, oh well:
+        return traceback
 
     class FakeLineCache(object):
         def checkcache(self, *args, **kwargs):
@@ -99,21 +105,24 @@ def writeFailure(failure, logger=None):
     Write a L{twisted.python.failure.Failure} to the log.
 
     This is for situations where you got an unexpected exception and want to
-    log a traceback. For example:
+    log a traceback. For example, if you have C{Deferred} that might error,
+    you'll want to wrap it with a L{eliot.twisted.DeferredContext} and then add
+    C{writeFailure} as the error handler to get the traceback logged:
 
-        d = dostuff()
+        d = DeferredContext(dostuff())
         d.addCallback(process)
         # Final error handler.
-        d.addErrback(writeFailure, logger, "myapp:subsystem")
+        d.addErrback(writeFailure)
 
     @param failure: L{Failure} to write to the log.
 
-    @type logger: L{eliot.ILogger}
+    @type logger: L{eliot.ILogger}. Will be deprecated at some point, so just
+        ignore it.
 
     @return: None
     """
     # Failure.getBriefTraceback does not include source code, so does not do
     # I/O.
     _writeTracebackMessage(
-        logger, failure.value.__class__, failure.value,
-        failure.getBriefTraceback())
+        logger, failure.value.__class__, failure.value, failure.getBriefTraceback()
+    )
diff --git a/eliot/_util.py b/eliot/_util.py
index 420bc4c..38768c4 100644
--- a/eliot/_util.py
+++ b/eliot/_util.py
@@ -4,9 +4,10 @@ Utilities that don't go anywhere else.
 
 from __future__ import unicode_literals
 
+import sys
 from types import ModuleType
 
-from six import exec_, text_type as unicode
+from six import exec_, text_type as unicode, PY3
 
 
 def safeunicode(o):
@@ -52,9 +53,18 @@ def load_module(name, original_module):
     @return: A new, distinct module.
     """
     module = ModuleType(name)
-    path = original_module.__file__
-    if path.endswith(".pyc") or path.endswith(".pyo"):
-        path = path[:-1]
-    with open(path) as f:
-        exec_(f.read(), module.__dict__, module.__dict__)
+    if PY3:
+        import importlib.util
+
+        spec = importlib.util.find_spec(original_module.__name__)
+        source = spec.loader.get_code(original_module.__name__)
+    else:
+        if getattr(sys, "frozen", False):
+            raise NotImplementedError("Can't load modules on Python 2 with PyInstaller")
+        path = original_module.__file__
+        if path.endswith(".pyc") or path.endswith(".pyo"):
+            path = path[:-1]
+        with open(path) as f:
+            source = f.read()
+    exec_(source, module.__dict__, module.__dict__)
     return module
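The Python 3 branch above avoids filesystem I/O by asking importlib for the module's code object rather than re-reading the source file. A self-contained sketch of that approach, applied to a stdlib module (`load_module_copy` is our illustrative name):

```python
import importlib.util
from types import ModuleType

def load_module_copy(name, original_module):
    # Same idea as load_module() on Python 3: fetch the module's code
    # object via importlib (no assumptions about .pyc/.pyo paths) and
    # execute it into a fresh, distinct module object.
    spec = importlib.util.find_spec(original_module.__name__)
    module = ModuleType(name)
    exec(spec.loader.get_code(original_module.__name__), module.__dict__)
    return module

import textwrap
copy = load_module_copy("_textwrap_copy", textwrap)
print(copy is textwrap)   # False: a new, distinct module
print(copy.dedent("  x"))  # "x": same behavior as the original
```

This is why eliot can monkeypatch the copy (e.g. replacing `linecache`) without affecting the original `traceback` module.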
diff --git a/eliot/_validation.py b/eliot/_validation.py
index d904323..c69dc43 100644
--- a/eliot/_validation.py
+++ b/eliot/_validation.py
@@ -7,7 +7,10 @@ although in theory it could be done then as well.
 
 from __future__ import unicode_literals
 
+from warnings import warn
+
 import six
+
 unicode = six.text_type
 
 from pyrsistent import PClass, field as pyrsistent_field
@@ -18,7 +21,8 @@ from ._message import (
     MESSAGE_TYPE_FIELD,
     TASK_LEVEL_FIELD,
     TASK_UUID_FIELD,
-    TIMESTAMP_FIELD, )
+    TIMESTAMP_FIELD,
+)
 from ._action import (
     start_action,
     startTask,
@@ -26,7 +30,9 @@ from ._action import (
     ACTION_TYPE_FIELD,
     STARTED_STATUS,
     SUCCEEDED_STATUS,
-    FAILED_STATUS, )
+    FAILED_STATUS,
+    log_message,
+)
 
 
 class ValidationError(Exception):
@@ -117,8 +123,7 @@ class Field(object):
 
         def validate(checked):
             if checked != value:
-                raise ValidationError(
-                    checked, "Field %r must be %r" % (key, value))
+                raise ValidationError(checked, "Field %r must be %r" % (key, value))
 
         return klass(key, lambda _: value, description, validate)
 
@@ -150,15 +155,15 @@ class Field(object):
             if k is None:
                 k = type(None)
             if k not in _JSON_TYPES:
-                raise TypeError("%s is not JSON-encodeable" % (k, ))
+                raise TypeError("%s is not JSON-encodeable" % (k,))
             fixedClasses.append(k)
         fixedClasses = tuple(fixedClasses)
 
         def validate(value):
             if not isinstance(value, fixedClasses):
                 raise ValidationError(
-                    value,
-                    "Field %r requires type to be one of %s" % (key, classes))
+                    value, "Field %r requires type to be one of %s" % (key, classes)
+                )
             if extraValidator is not None:
                 extraValidator(value)
 
@@ -180,14 +185,13 @@ def fields(*fields, **keys):
     @return: A L{list} of L{Field} instances.
     """
     return list(fields) + [
-        Field.forTypes(key, [value], "") for key, value in keys.items()]
+        Field.forTypes(key, [value], "") for key, value in keys.items()
+    ]
 
 
 REASON = Field.forTypes(REASON_FIELD, [unicode], "The reason for an event.")
-TRACEBACK = Field.forTypes(
-    "traceback", [unicode], "The traceback for an exception.")
-EXCEPTION = Field.forTypes(
-    "exception", [unicode], "The FQPN of an exception class.")
+TRACEBACK = Field.forTypes("traceback", [unicode], "The traceback for an exception.")
+EXCEPTION = Field.forTypes("exception", [unicode], "The FQPN of an exception class.")
 
 
 class _MessageSerializer(object):
@@ -204,26 +208,30 @@ class _MessageSerializer(object):
         keys = []
         for field in fields:
             if not isinstance(field, Field):
-                raise TypeError('Expected a Field instance but got', field)
+                raise TypeError("Expected a Field instance but got", field)
             keys.append(field.key)
         if len(set(keys)) != len(keys):
             raise ValueError(keys, "Duplicate field name")
         if ACTION_TYPE_FIELD in keys:
             if MESSAGE_TYPE_FIELD in keys:
                 raise ValueError(
-                    keys, "Messages must have either "
-                    "'action_type' or 'message_type', not both")
+                    keys,
+                    "Messages must have either "
+                    "'action_type' or 'message_type', not both",
+                )
         elif MESSAGE_TYPE_FIELD not in keys:
             raise ValueError(
-                keys, "Messages must have either 'action_type' ",
-                "or 'message_type'")
+                keys, "Messages must have either 'action_type' or 'message_type'"
+            )
         if any(key.startswith("_") for key in keys):
             raise ValueError(keys, "Field names must not start with '_'")
         for reserved in RESERVED_FIELDS:
             if reserved in keys:
                 raise ValueError(
-                    keys, "The field name %r is reserved for use "
-                    "by the logging framework" % (reserved, ))
+                    keys,
+                    "The field name %r is reserved for use "
+                    "by the logging framework" % (reserved,),
+                )
         self.fields = dict((field.key, field) for field in fields)
         self.allow_additional_fields = allow_additional_fields
 
@@ -253,7 +261,7 @@ class _MessageSerializer(object):
         """
         for key, field in self.fields.items():
             if key not in message:
-                raise ValidationError(message, "Field %r is missing" % (key, ))
+                raise ValidationError(message, "Field %r is missing" % (key,))
             field.validate(message[key])
 
         if self.allow_additional_fields:
@@ -262,7 +270,7 @@ class _MessageSerializer(object):
         fieldSet = set(self.fields) | set(RESERVED_FIELDS)
         for key in message:
             if key not in fieldSet:
-                raise ValidationError(message, "Unexpected field %r" % (key, ))
+                raise ValidationError(message, "Unexpected field %r" % (key,))
 
 
 class MessageType(object):
@@ -309,9 +317,9 @@ class MessageType(object):
         self.message_type = message_type
         self.description = description
         self._serializer = _MessageSerializer(
-            fields + [
-                Field.forValue(
-                    MESSAGE_TYPE_FIELD, message_type, "The message type.")])
+            fields
+            + [Field.forValue(MESSAGE_TYPE_FIELD, message_type, "The message type.")]
+        )
 
     def __call__(self, **fields):
         """
@@ -321,6 +329,12 @@ class MessageType(object):
 
         @rtype: L{eliot.Message}
         """
+        warn(
+            "MessageType.__call__() is deprecated since 1.11.0, "
+            "use MessageType.log() instead.",
+            DeprecationWarning,
+            stacklevel=2,
+        )
         fields[MESSAGE_TYPE_FIELD] = self.message_type
         return Message(fields, self._serializer)
 
@@ -330,13 +344,15 @@ class MessageType(object):
 
         The keyword arguments will become contents of the L{Message}.
         """
-        self(**fields).write()
+        fields["__eliot_serializer__"] = self._serializer
+        log_message(self.message_type, **fields)
 
 
 class _ActionSerializers(PClass):
     """
     Serializers for the three action messages: start, success and failure.
     """
+
     start = pyrsistent_field(mandatory=True)
     success = pyrsistent_field(mandatory=True)
     failure = pyrsistent_field(mandatory=True)
@@ -372,7 +388,7 @@ class ActionType(object):
         this action's start message.
 
     @ivar successFields: A C{list} of L{Field} instances which can appear in
-        this action's succesful finish message.
+        this action's successful finish message.
 
     @ivar failureFields: A C{list} of L{Field} instances which can appear in
         this action's failed finish message (in addition to the built-in
@@ -381,40 +397,45 @@ class ActionType(object):
     @ivar description: A description of what this action's messages mean.
     @type description: C{unicode}
     """
+
     # Overrideable hook for testing; need staticmethod() so functions don't
     # get turned into methods.
     _start_action = staticmethod(start_action)
     _startTask = staticmethod(startTask)
 
-    def __init__(
-        self, action_type, startFields, successFields, description=""):
+    def __init__(self, action_type, startFields, successFields, description=""):
         self.action_type = action_type
         self.description = description
 
         actionTypeField = Field.forValue(
-            ACTION_TYPE_FIELD, action_type, "The action type")
+            ACTION_TYPE_FIELD, action_type, "The action type"
+        )
 
         def makeActionStatusField(value):
-            return Field.forValue(
-                ACTION_STATUS_FIELD, value, "The action status")
+            return Field.forValue(ACTION_STATUS_FIELD, value, "The action status")
 
         startFields = startFields + [
             actionTypeField,
-            makeActionStatusField(STARTED_STATUS)]
+            makeActionStatusField(STARTED_STATUS),
+        ]
         successFields = successFields + [
             actionTypeField,
-            makeActionStatusField(SUCCEEDED_STATUS)]
+            makeActionStatusField(SUCCEEDED_STATUS),
+        ]
         failureFields = [
             actionTypeField,
-            makeActionStatusField(FAILED_STATUS), REASON, EXCEPTION]
+            makeActionStatusField(FAILED_STATUS),
+            REASON,
+            EXCEPTION,
+        ]
 
         self._serializers = _ActionSerializers(
             start=_MessageSerializer(startFields),
             success=_MessageSerializer(successFields),
             # Failed action messages can have extra fields from exception
             # extraction:
-            failure=_MessageSerializer(
-                failureFields, allow_additional_fields=True))
+            failure=_MessageSerializer(failureFields, allow_additional_fields=True),
+        )
 
     def __call__(self, logger=None, **fields):
         """
@@ -446,8 +467,7 @@ class ActionType(object):
 
         @rtype: L{eliot.Action}
         """
-        return self._start_action(
-            logger, self.action_type, self._serializers, **fields)
+        return self._start_action(logger, self.action_type, self._serializers, **fields)
 
     def as_task(self, logger=None, **fields):
         """
@@ -463,8 +483,7 @@ class ActionType(object):
 
         @rtype: L{eliot.Action}
         """
-        return self._startTask(
-            logger, self.action_type, self._serializers, **fields)
+        return self._startTask(logger, self.action_type, self._serializers, **fields)
 
     # Backwards compatible variant:
     asTask = as_task
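`_MessageSerializer.__init__` enforces several rules on field keys. A pure-Python sketch of those checks, separated from the `Field` machinery (`RESERVED_FIELDS` here is an illustrative subset, and `check_keys` is our name):

```python
RESERVED_FIELDS = ["task_uuid", "task_level", "timestamp"]  # illustrative subset

def check_keys(keys):
    # Sketch of the key rules _MessageSerializer enforces: no duplicates,
    # exactly one of action_type/message_type, no leading underscores,
    # and no reserved framework names.
    if len(set(keys)) != len(keys):
        raise ValueError(keys, "Duplicate field name")
    has_action = "action_type" in keys
    has_message = "message_type" in keys
    if has_action and has_message:
        raise ValueError(
            keys, "Messages must have either 'action_type' or 'message_type', not both"
        )
    if not (has_action or has_message):
        raise ValueError(keys, "Messages must have either 'action_type' or 'message_type'")
    if any(key.startswith("_") for key in keys):
        raise ValueError(keys, "Field names must not start with '_'")
    for reserved in RESERVED_FIELDS:
        if reserved in keys:
            raise ValueError(keys, "The field name %r is reserved" % (reserved,))

check_keys(["message_type", "key", "value"])  # passes silently
```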
diff --git a/eliot/_version.py b/eliot/_version.py
index 901e9cb..7ec23e7 100644
--- a/eliot/_version.py
+++ b/eliot/_version.py
@@ -8,11 +8,11 @@ import json
 
 version_json = '''
 {
- "date": "2019-01-02T13:14:02-0500",
+ "date": "2020-12-15T14:09:24-0500",
  "dirty": false,
  "error": null,
- "full-revisionid": "238cba188478da0a5a855512d201d49d74a0a5a3",
- "version": "1.6.0"
+ "full-revisionid": "e858c8ef7302e22ca05f37565d929db8e0fab153",
+ "version": "1.13.0"
 }
 '''  # END VERSION_JSON
 
diff --git a/eliot/dask.py b/eliot/dask.py
index f701ce4..2f8d07e 100644
--- a/eliot/dask.py
+++ b/eliot/dask.py
@@ -2,9 +2,18 @@
 
 from pyrsistent import PClass, field
 
-from dask import compute, optimize
-from dask.core import toposort, get_dependencies
-from . import start_action, current_action, Action, Message
+from dask import compute, optimize, persist
+
+try:
+    from dask.distributed import Future
+except ImportError:
+
+    class Future(object):
+        pass
+
+
+from dask.core import toposort, get_dependencies, ishashable
+from . import start_action, current_action, Action
 
 
 class _RunWithEliotContext(PClass):
@@ -16,6 +25,7 @@ class _RunWithEliotContext(PClass):
     @ivar key: The key in the Dask graph.
     @ivar dependencies: The keys in the Dask graph this depends on.
     """
+
     task_id = field(type=str)
     func = field()  # callable
     key = field(type=str)
@@ -34,11 +44,9 @@ class _RunWithEliotContext(PClass):
         return hash(self.func)
 
     def __call__(self, *args, **kwargs):
-        with Action.continue_task(task_id=self.task_id):
-            Message.log(
-                message_type="dask:task",
-                key=self.key,
-                dependencies=self.dependencies
+        with Action.continue_task(task_id=self.task_id) as action:
+            action.log(
+                message_type="dask:task", key=self.key, dependencies=self.dependencies
             )
             return self.func(*args, **kwargs)
 
@@ -76,6 +84,22 @@ def compute_with_trace(*args):
         return compute(*optimized, optimize_graph=False)
 
 
+def persist_with_trace(*args):
+    """Do Dask persist(), but with added Eliot tracing.
+
+    Known issues:
+
+        1. Retries will confuse Eliot.  Probably need a different
+           distributed-tree mechanism within Eliot to solve that.
+    """
+    # 1. Create top-level Eliot Action:
+    with start_action(action_type="dask:persist"):
+        # In order to reduce logging verbosity, add logging to the already
+        # optimized graph:
+        optimized = optimize(*args, optimizations=[_add_logging])
+        return persist(*optimized, optimize_graph=False)
+
+
 def _add_logging(dsk, ignore=None):
     """
     Add logging to a Dask graph.
@@ -102,37 +126,43 @@ def _add_logging(dsk, ignore=None):
     key_names = {}
     for key in keys:
         value = dsk[key]
-        if not callable(value) and value in keys:
+        if not callable(value) and ishashable(value) and value in keys:
             # It's an alias for another key:
             key_names[key] = key_names[value]
         else:
             key_names[key] = simplify(key)
 
-    # 2. Create Eliot child Actions for each key, in topological order:
-    key_to_action_id = {
-        key: str(ctx.serialize_task_id(), "utf-8")
-        for key in keys
-    }
+    # Values in the graph can be either:
+    #
+    # 1. A list of other values.
+    # 2. A tuple, where the first value might be a callable, aka a task.
+    # 3. A literal of some sort.
+    def maybe_wrap(key, value):
+        if isinstance(value, list):
+            return [maybe_wrap(key, v) for v in value]
+        elif isinstance(value, tuple):
+            func = value[0]
+            args = value[1:]
+            if not callable(func):
+                # Not a callable, so nothing to wrap.
+                return value
+            wrapped_func = _RunWithEliotContext(
+                task_id=str(ctx.serialize_task_id(), "utf-8"),
+                func=func,
+                key=key_names[key],
+                dependencies=[key_names[k] for k in get_dependencies(dsk, key)],
+            )
+            return (wrapped_func,) + args
+        else:
+            return value
 
-    # 3. Replace function with wrapper that logs appropriate Action:
+    # Replace function with wrapper that logs appropriate Action; iterate in
+    # topological order so action task levels are in reasonable order.
     for key in keys:
-        func = dsk[key][0]
-        args = dsk[key][1:]
-        if not callable(func):
-            # This key is just an alias for another key, no need to add
-            # logging:
-            result[key] = dsk[key]
-            continue
-        wrapped_func = _RunWithEliotContext(
-            task_id=key_to_action_id[key],
-            func=func,
-            key=key_names[key],
-            dependencies=[key_names[k] for k in get_dependencies(dsk, key)],
-        )
-        result[key] = (wrapped_func, ) + tuple(args)
-
-    assert result.keys() == dsk.keys()
+        result[key] = maybe_wrap(key, dsk[key])
+
+    assert set(result.keys()) == set(dsk.keys())
     return result
 
 
-__all__ = ["compute_with_trace"]
+__all__ = ["compute_with_trace", "persist_with_trace"]
diff --git a/eliot/filter.py b/eliot/filter.py
index 337e9ee..02c6d43 100644
--- a/eliot/filter.py
+++ b/eliot/filter.py
@@ -4,8 +4,9 @@ Command line program for filtering line-based Eliot logs.
 
 from __future__ import unicode_literals, absolute_import
 
-if __name__ == '__main__':
+if __name__ == "__main__":
     import eliot.filter
+
     eliot.filter.main()
 
 import sys
@@ -32,6 +33,7 @@ class EliotFilter(object):
 
     @ivar code: A Python code object, the compiled filter expression.
     """
+
     _SKIP = object()
 
     def __init__(self, expr, incoming, output):
@@ -71,11 +73,14 @@ class EliotFilter(object):
         """
         return eval(
             self.code,
-            globals(), {
+            globals(),
+            {
                 "J": message,
                 "timedelta": timedelta,
                 "datetime": datetime,
-                "SKIP": self._SKIP})
+                "SKIP": self._SKIP,
+            },
+        )
 
 
 USAGE = b"""\
diff --git a/eliot/journald.py b/eliot/journald.py
index accaab7..74a6053 100644
--- a/eliot/journald.py
+++ b/eliot/journald.py
@@ -12,9 +12,11 @@ from ._message import TASK_UUID_FIELD, MESSAGE_TYPE_FIELD
 from ._action import ACTION_TYPE_FIELD, ACTION_STATUS_FIELD, FAILED_STATUS
 
 _ffi = FFI()
-_ffi.cdef("""
+_ffi.cdef(
+    """
 int sd_journal_send(const char *format, ...);
-""")
+"""
+)
 try:
     try:
         _journald = _ffi.dlopen("libsystemd.so.0")
@@ -36,9 +38,9 @@ def sd_journal_send(**kwargs):
     # The function uses printf formatting, so we need to quote
     # percentages.
     fields = [
-        _ffi.new(
-            "char[]", key.encode("ascii") + b'=' + value.replace(b"%", b"%%"))
-        for key, value in kwargs.items()]
+        _ffi.new("char[]", key.encode("ascii") + b"=" + value.replace(b"%", b"%%"))
+        for key, value in kwargs.items()
+    ]
     fields.append(_ffi.NULL)
     result = _journald.sd_journal_send(*fields)
     if result != 0:
@@ -67,7 +69,7 @@ class JournaldDestination(object):
 
         @param message: Dictionary passed from a C{Logger}.
         """
-        eliot_type = u""
+        eliot_type = ""
         priority = b"6"
         if ACTION_TYPE_FIELD in message:
             eliot_type = message[ACTION_TYPE_FIELD]
@@ -75,11 +77,12 @@ class JournaldDestination(object):
                 priority = b"3"
         elif MESSAGE_TYPE_FIELD in message:
             eliot_type = message[MESSAGE_TYPE_FIELD]
-            if eliot_type == u"eliot:traceback":
+            if eliot_type == "eliot:traceback":
                 priority = b"2"
         sd_journal_send(
             MESSAGE=dumps(message),
             ELIOT_TASK=message[TASK_UUID_FIELD].encode("utf-8"),
             ELIOT_TYPE=eliot_type.encode("utf-8"),
             SYSLOG_IDENTIFIER=self._identifier,
-            PRIORITY=priority)
+            PRIORITY=priority,
+        )
diff --git a/eliot/json.py b/eliot/json.py
index 55c20cc..939d738 100644
--- a/eliot/json.py
+++ b/eliot/json.py
@@ -22,8 +22,15 @@ class EliotJSONEncoder(json.JSONEncoder):
             if isinstance(o, (numpy.bool, numpy.bool_)):
                 return bool(o)
             if isinstance(o, numpy.ndarray):
-                return o.tolist()
+                if o.size > 10000:
+                    # Too big to want to log as-is, log a summary:
+                    return {
+                        "array_start": o.flat[:10000].tolist(),
+                        "original_shape": o.shape,
+                    }
+                else:
+                    return o.tolist()
         return json.JSONEncoder.default(self, o)
 
-__all__ = ["EliotJSONEncoder"]
 
+__all__ = ["EliotJSONEncoder"]
diff --git a/eliot/logwriter.py b/eliot/logwriter.py
index e8137c0..4ce2b7a 100644
--- a/eliot/logwriter.py
+++ b/eliot/logwriter.py
@@ -12,6 +12,7 @@ from warnings import warn
 
 from twisted.application.service import Service
 from twisted.internet.threads import deferToThreadPool
+
 if getattr(select, "poll", None):
     from twisted.internet.pollreactor import PollReactor as Reactor
 else:
@@ -38,7 +39,8 @@ class ThreadedWriter(Service):
     @ivar _thread: C{None}, or a L{threading.Thread} running the private
         reactor.
     """
-    name = u"Eliot Log Writer"
+
+    name = "Eliot Log Writer"
 
     def __init__(self, destination, reactor):
         """
@@ -71,8 +73,8 @@ class ThreadedWriter(Service):
         removeDestination(self)
         self._reactor.callFromThread(self._reactor.stop)
         return deferToThreadPool(
-            self._mainReactor,
-            self._mainReactor.getThreadPool(), self._thread.join)
+            self._mainReactor, self._mainReactor.getThreadPool(), self._thread.join
+        )
 
     def __call__(self, data):
         """
@@ -114,7 +116,8 @@ class ThreadedFileWriter(ThreadedWriter):
             "ThreadedFileWriter is deprecated since 0.9.0. "
             "Use ThreadedWriter instead.",
             DeprecationWarning,
-            stacklevel=2)
+            stacklevel=2,
+        )
         self._logFile = logFile
         ThreadedWriter.__init__(self, FileDestination(file=logFile), reactor)
 
diff --git a/eliot/parse.py b/eliot/parse.py
index bca580c..6226eec 100644
--- a/eliot/parse.py
+++ b/eliot/parse.py
@@ -15,13 +15,15 @@ from ._action import (
     WrittenAction,
     ACTION_STATUS_FIELD,
     STARTED_STATUS,
-    ACTION_TYPE_FIELD, )
+    ACTION_TYPE_FIELD,
+)
 
 
 class Task(PClass):
     """
     A tree of actions with the same task UUID.
     """
+
     _nodes = pmap_field(TaskLevel, (WrittenAction, WrittenMessage))
     _completed = pset_field(TaskLevel)
     _root_level = TaskLevel(level=[])
@@ -51,20 +53,22 @@ class Task(PClass):
         """
         task = self
         if (
-            node.end_message and node.start_message and
-            (len(node.children) == node.end_message.task_level.level[-1] - 2)):
+            node.end_message
+            and node.start_message
+            and (len(node.children) == node.end_message.task_level.level[-1] - 2)
+        ):
             # Possibly this action is complete, make sure all sub-actions
             # are complete:
             completed = True
             for child in node.children:
                 if (
                     isinstance(child, WrittenAction)
-                    and child.task_level not in self._completed):
+                    and child.task_level not in self._completed
+                ):
                     completed = False
                     break
             if completed:
-                task = task.transform(["_completed"],
-                                      lambda s: s.add(node.task_level))
+                task = task.transform(["_completed"], lambda s: s.add(node.task_level))
         task = task.transform(["_nodes", node.task_level], node)
         return task._ensure_node_parents(node)
 
@@ -87,7 +91,8 @@ class Task(PClass):
         parent = self._nodes.get(task_level.parent())
         if parent is None:
             parent = WrittenAction(
-                task_level=task_level.parent(), task_uuid=child.task_uuid)
+                task_level=task_level.parent(), task_uuid=child.task_uuid
+            )
         parent = parent._add_child(child)
         return self._insert_action(parent)
 
@@ -107,8 +112,8 @@ class Task(PClass):
             action = self._nodes.get(action_level)
             if action is None:
                 action = WrittenAction(
-                    task_level=action_level,
-                    task_uuid=message_dict[TASK_UUID_FIELD])
+                    task_level=action_level, task_uuid=message_dict[TASK_UUID_FIELD]
+                )
             if message_dict[ACTION_STATUS_FIELD] == STARTED_STATUS:
                 # Either newly created MissingAction, or one created by
                 # previously added descendant of the action.
@@ -119,9 +124,12 @@ class Task(PClass):
         else:
             # Special case where there is no action:
             if written_message.task_level.level == [1]:
-                return self.transform([
-                    "_nodes", self._root_level], written_message, [
-                        "_completed"], lambda s: s.add(self._root_level))
+                return self.transform(
+                    ["_nodes", self._root_level],
+                    written_message,
+                    ["_completed"],
+                    lambda s: s.add(self._root_level),
+                )
             else:
                 return self._ensure_node_parents(written_message)
 
@@ -132,6 +140,7 @@ class Parser(PClass):
 
     @ivar _tasks: Map from UUID to corresponding L{Task}.
     """
+
     _tasks = pmap_field(unicode, Task)
 
     def add(self, message_dict):
diff --git a/eliot/prettyprint.py b/eliot/prettyprint.py
index 81284bc..4cd54a0 100644
--- a/eliot/prettyprint.py
+++ b/eliot/prettyprint.py
@@ -2,45 +2,61 @@
 API and command-line support for human-readable Eliot messages.
 """
 
-from __future__ import unicode_literals
-
 import pprint
+import argparse
 from datetime import datetime
-from sys import stdin, stdout, argv
+from sys import stdin, stdout
+from collections import OrderedDict
+from json import dumps
 
 from ._bytesjson import loads
 from ._message import (
     TIMESTAMP_FIELD,
     TASK_UUID_FIELD,
     TASK_LEVEL_FIELD,
-    MESSAGE_TYPE_FIELD, )
+    MESSAGE_TYPE_FIELD,
+)
 from ._action import ACTION_TYPE_FIELD, ACTION_STATUS_FIELD
-from ._util import load_module
-
-from six import text_type as unicode, PY2, PY3
-if PY3:
-    # Ensure binary stdin, since we expect specifically UTF-8 encoded
-    # messages, not platform-encoding messages.
-    stdin = stdin.buffer
 
-# On Python 2 pprint formats unicode with u'' prefix, which is inconsistent
-# with Python 3 and not very nice to read. So we modify a copy to omit the u''.
-if PY2:
 
-    def _nicer_unicode_repr(o, original_repr=repr):
-        if isinstance(o, unicode):
-            return original_repr(o.encode("utf-8"))
-        else:
-            return original_repr(o)
+# Ensure binary stdin, since we expect specifically UTF-8 encoded
+# messages, not platform-encoding messages.
+stdin = stdin.buffer
 
-    pprint = load_module(b"unicode_pprint", pprint)
-    pprint.repr = _nicer_unicode_repr
 
 # Fields that all Eliot messages are expected to have:
 REQUIRED_FIELDS = {TASK_LEVEL_FIELD, TASK_UUID_FIELD, TIMESTAMP_FIELD}
 
-
-def pretty_format(message):
+# Fields that get treated specially when formatting.
+_skip_fields = {
+    TIMESTAMP_FIELD,
+    TASK_UUID_FIELD,
+    TASK_LEVEL_FIELD,
+    MESSAGE_TYPE_FIELD,
+    ACTION_TYPE_FIELD,
+    ACTION_STATUS_FIELD,
+}
+
+# First fields to render:
+_first_fields = [ACTION_TYPE_FIELD, MESSAGE_TYPE_FIELD, ACTION_STATUS_FIELD]
+
+
+def _render_timestamp(message: dict, local_timezone: bool) -> str:
+    """Convert a message's timestamp to a string."""
+    # If we were returning or storing the datetime we'd want to use an
+    # explicit timezone instead of a naive datetime, but since we're
+    # just using it for formatting we needn't bother.
+    if local_timezone:
+        dt = datetime.fromtimestamp(message[TIMESTAMP_FIELD])
+    else:
+        dt = datetime.utcfromtimestamp(message[TIMESTAMP_FIELD])
+    result = dt.isoformat(sep="T")
+    if not local_timezone:
+        result += "Z"
+    return result
+
+
+def pretty_format(message: dict, local_timezone: bool = False) -> str:
     """
     Convert a message dictionary into a human-readable string.
 
@@ -48,13 +64,11 @@ def pretty_format(message):
 
     @return: Unicode string.
     """
-    skip = {
-        TIMESTAMP_FIELD, TASK_UUID_FIELD, TASK_LEVEL_FIELD, MESSAGE_TYPE_FIELD,
-        ACTION_TYPE_FIELD, ACTION_STATUS_FIELD}
 
     def add_field(previous, key, value):
-        value = unicode(pprint.pformat(value, width=40)).replace(
-            "\\n", "\n ").replace("\\t", "\t")
+        value = (
+            pprint.pformat(value, width=40).replace("\\n", "\n ").replace("\\t", "\t")
+        )
         # Reindent second line and later to match up with first line's
         # indentation:
         lines = value.split("\n")
@@ -64,28 +78,49 @@ def pretty_format(message):
         return "  %s: %s\n" % (key, value)
 
     remaining = ""
-    for field in [ACTION_TYPE_FIELD, MESSAGE_TYPE_FIELD, ACTION_STATUS_FIELD]:
+    for field in _first_fields:
         if field in message:
             remaining += add_field(remaining, field, message[field])
     for (key, value) in sorted(message.items()):
-        if key not in skip:
+        if key not in _skip_fields:
             remaining += add_field(remaining, key, value)
 
-    level = "/" + "/".join(map(unicode, message[TASK_LEVEL_FIELD]))
-    return "%s -> %s\n%sZ\n%s" % (
+    level = "/" + "/".join(map(str, message[TASK_LEVEL_FIELD]))
+    return "%s -> %s\n%s\n%s" % (
         message[TASK_UUID_FIELD],
         level,
-        # If we were returning or storing the datetime we'd want to use an
-        # explicit timezone instead of a naive datetime, but since we're
-        # just using it for formatting we needn't bother.
-        datetime.utcfromtimestamp(message[TIMESTAMP_FIELD]).isoformat(
-            sep=str(" ")),
-        remaining, )
+        _render_timestamp(message, local_timezone),
+        remaining,
+    )
 
 
-_CLI_HELP = """\
-Usage: cat messages | eliot-prettyprint
+def compact_format(message: dict, local_timezone: bool = False) -> str:
+    """Format an Eliot message into a single line.
 
+    The message is presumed to be JSON-serializable.
+    """
+    ordered_message = OrderedDict()
+    for field in _first_fields:
+        if field in message:
+            ordered_message[field] = message[field]
+    for (key, value) in sorted(message.items()):
+        if key not in _skip_fields:
+            ordered_message[key] = value
+    # drop { and } from JSON:
+    rendered = " ".join(
+        "{}={}".format(key, dumps(value, separators=(",", ":")))
+        for (key, value) in ordered_message.items()
+    )
+
+    return "%s%s %s %s" % (
+        message[TASK_UUID_FIELD],
+        "/" + "/".join(map(str, message[TASK_LEVEL_FIELD])),
+        _render_timestamp(message, local_timezone),
+        rendered,
+    )
+
+
+_CLI_HELP = """\
 Convert Eliot messages into more readable format.
 
 Reads JSON lines from stdin, writes out pretty-printed results on stdout.
@@ -97,9 +132,30 @@ def _main():
     Command-line program that reads in JSON from stdin and writes out
     pretty-printed messages to stdout.
     """
-    if argv[1:]:
-        stdout.write(_CLI_HELP)
-        raise SystemExit()
+    parser = argparse.ArgumentParser(
+        description=_CLI_HELP, usage="cat messages | %(prog)s [options]"
+    )
+    parser.add_argument(
+        "-c",
+        "--compact",
+        action="store_true",
+        dest="compact",
+        help="Compact format, one message per line.",
+    )
+    parser.add_argument(
+        "-l",
+        "--local-timezone",
+        action="store_true",
+        dest="local_timezone",
+        help="Use local timezone instead of UTC.",
+    )
+
+    args = parser.parse_args()
+    if args.compact:
+        formatter = compact_format
+    else:
+        formatter = pretty_format
+
     for line in stdin:
         try:
             message = loads(line)
@@ -107,13 +163,10 @@ def _main():
             stdout.write("Not JSON: {}\n\n".format(line.rstrip(b"\n")))
             continue
         if REQUIRED_FIELDS - set(message.keys()):
-            stdout.write(
-                "Not an Eliot message: {}\n\n".format(line.rstrip(b"\n")))
+            stdout.write("Not an Eliot message: {}\n\n".format(line.rstrip(b"\n")))
             continue
-        result = pretty_format(message) + "\n"
-        if PY2:
-            result = result.encode("utf-8")
+        result = formatter(message, args.local_timezone) + "\n"
         stdout.write(result)
 
 
-__all__ = ["pretty_format"]
+__all__ = ["pretty_format", "compact_format"]
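The new `compact_format` above renders one `key=json(value)` pair per field after the task UUID, level, and timestamp. A standalone sketch of just the field-rendering step (the field-name sets here are assumptions for illustration, not eliot's exact constants):

```python
from collections import OrderedDict
from json import dumps

# Assumed well-known field names, mirroring the diff's _first_fields/_skip_fields:
_first_fields = ["action_type", "message_type", "action_status"]
_skip_fields = {"timestamp", "task_uuid", "task_level",
                "action_type", "message_type", "action_status"}

def render_fields(message):
    # Well-known fields first, then the rest sorted by key; _first_fields are
    # also in _skip_fields so they aren't emitted twice:
    ordered = OrderedDict()
    for field in _first_fields:
        if field in message:
            ordered[field] = message[field]
    for key, value in sorted(message.items()):
        if key not in _skip_fields:
            ordered[key] = value
    # One key=<minified JSON> pair per field, space-separated:
    return " ".join(
        "{}={}".format(key, dumps(value, separators=(",", ":")))
        for key, value in ordered.items()
    )
```

For example, `render_fields({"action_type": "x", "b": 2, "a": 1})` yields `action_type="x" a=1 b=2`, which is the single-line shape `--compact` produces after the UUID/level/timestamp prefix.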
diff --git a/eliot/stdlib.py b/eliot/stdlib.py
index 507024e..35ed380 100644
--- a/eliot/stdlib.py
+++ b/eliot/stdlib.py
@@ -2,7 +2,7 @@
 
 from logging import Handler
 
-from ._message import Message
+from ._action import log_message
 from ._traceback import write_traceback
 
 
@@ -10,11 +10,11 @@ class EliotHandler(Handler):
     """A C{logging.Handler} that routes log messages to Eliot."""
 
     def emit(self, record):
-        Message.log(
+        log_message(
             message_type="eliot:stdlib",
             log_level=record.levelname,
             logger=record.name,
-            message=record.getMessage()
+            message=record.getMessage(),
         )
         if record.exc_info:
             write_traceback(exc_info=record.exc_info)
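The handler change above swaps the deprecated `Message.log` for the module-level `log_message` API, forwarding each stdlib record's level, logger name, and rendered message. A self-contained sketch of the same routing pattern, using a plain list as a stand-in sink instead of Eliot itself:

```python
import logging

class ListHandler(logging.Handler):
    """Routes stdlib log records to a list of Eliot-style dicts (stand-in sink)."""

    def __init__(self):
        logging.Handler.__init__(self)
        self.messages = []

    def emit(self, record):
        # Mirror the fields EliotHandler forwards to log_message():
        self.messages.append({
            "message_type": "eliot:stdlib",
            "log_level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
handler = ListHandler()
logger.addHandler(handler)
logger.info("hello %s", "world")
```

Note that `record.getMessage()` is called inside `emit`, so `%`-style interpolation happens before the message reaches the sink, exactly as in the diff.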
diff --git a/eliot/tai64n.py b/eliot/tai64n.py
index 7b94883..b79f86f 100644
--- a/eliot/tai64n.py
+++ b/eliot/tai64n.py
@@ -12,7 +12,7 @@ import struct
 from binascii import b2a_hex, a2b_hex
 
 _STRUCTURE = b">QI"
-_OFFSET = (2**62) + 10  # last 10 are leap seconds
+_OFFSET = (2 ** 62) + 10  # last 10 are leap seconds
 
 
 def encode(timestamp):
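The `_STRUCTURE` and `_OFFSET` constants above support TAI64N encoding: a big-endian 8-byte second count (offset by 2**62 plus 10 leap seconds) followed by a 4-byte nanosecond count. A sketch of an encoder built on those constants (the `@`-prefixed hex output follows the external TAI64N label convention; treat the exact return format as an assumption, not eliot's guaranteed API):

```python
import struct
from binascii import b2a_hex

_STRUCTURE = b">QI"          # big-endian: 8-byte seconds, 4-byte nanoseconds
_OFFSET = (2 ** 62) + 10     # last 10 are leap seconds

def encode(timestamp):
    """Convert a Unix timestamp (float seconds) to a TAI64N hex label."""
    seconds = int(timestamp)
    nanoseconds = int((timestamp - seconds) * 1e9)
    packed = struct.pack(_STRUCTURE, seconds + _OFFSET, nanoseconds)
    return "@" + b2a_hex(packed).decode("ascii")
```

For instance, `encode(0.0)` gives `@400000000000000a00000000`: `0x4000000000000000` is 2**62, the trailing `a` is the 10 leap seconds, and the final eight hex digits are the zero nanosecond field.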
diff --git a/eliot/testing.py b/eliot/testing.py
index 39b844b..7fdcba4 100644
--- a/eliot/testing.py
+++ b/eliot/testing.py
@@ -15,10 +15,12 @@ from ._action import (
     ACTION_TYPE_FIELD,
     STARTED_STATUS,
     FAILED_STATUS,
-    SUCCEEDED_STATUS, )
+    SUCCEEDED_STATUS,
+)
 from ._message import MESSAGE_TYPE_FIELD, TASK_LEVEL_FIELD, TASK_UUID_FIELD
 from ._output import MemoryLogger
 from . import _output
+from .json import EliotJSONEncoder
 
 COMPLETED_STATUSES = (FAILED_STATUS, SUCCEEDED_STATUS)
 
@@ -50,8 +52,9 @@ def assertContainsFields(test, message, fields):
 
     @raises AssertionError: If the message doesn't contain the fields.
     """
-    messageSubset = dict([(key, value) for key, value in message.items()
-                          if key in fields])
+    messageSubset = dict(
+        [(key, value) for key, value in message.items() if key in fields]
+    )
     test.assertEqual(messageSubset, fields)
 
 
@@ -68,16 +71,15 @@ class LoggedAction(PClass):
     @ivar children: A C{list} of direct child L{LoggedMessage} and
         L{LoggedAction} instances.
     """
+
     startMessage = field(mandatory=True)
     endMessage = field(mandatory=True)
     children = field(mandatory=True)
 
     def __new__(cls, startMessage, endMessage, children):
         return PClass.__new__(
-            cls,
-            startMessage=startMessage,
-            endMessage=endMessage,
-            children=children)
+            cls, startMessage=startMessage, endMessage=endMessage, children=children
+        )
 
     @property
     def start_message(self):
@@ -135,14 +137,19 @@ class LoggedAction(PClass):
             elif (
                 len(messageLevel) == len(levelPrefix) + 2
                 and messageLevel[:-2] == levelPrefix
-                and messageLevel[-1] == 1):
+                and messageLevel[-1] == 1
+            ):
                 # If start message level is [1], [1, 2, 1] implies first
                 # message of a direct child.
-                child = klass.fromMessages(
-                    uuid, message[TASK_LEVEL_FIELD], messages)
+                child = klass.fromMessages(uuid, message[TASK_LEVEL_FIELD], messages)
                 children.append(child)
-        if startMessage is None or endMessage is None:
-            raise ValueError(uuid, level)
+        if startMessage is None:
+            raise ValueError("Missing start message")
+        if endMessage is None:
+            raise ValueError(
+                "Missing end message of type "
+                + message.get(ACTION_TYPE_FIELD, "unknown")
+            )
         return klass(startMessage, endMessage, children)
 
     # PEP 8 variant:
@@ -166,11 +173,13 @@ class LoggedAction(PClass):
         for message in messages:
             if (
                 message.get(ACTION_TYPE_FIELD) == actionType
-                and message[ACTION_STATUS_FIELD] == STARTED_STATUS):
+                and message[ACTION_STATUS_FIELD] == STARTED_STATUS
+            ):
                 result.append(
                     klass.fromMessages(
-                        message[TASK_UUID_FIELD], message[TASK_LEVEL_FIELD],
-                        messages))
+                        message[TASK_UUID_FIELD], message[TASK_LEVEL_FIELD], messages
+                    )
+                )
         return result
 
     # Backwards compat:
@@ -222,6 +231,7 @@ class LoggedMessage(PClass):
 
     @ivar message: A C{dict}, the message contents.
     """
+
     message = field(mandatory=True)
 
     def __new__(cls, message):
@@ -261,7 +271,37 @@ class UnflushedTracebacks(Exception):
     """
 
 
-def validateLogging(assertion, *assertionArgs, **assertionKwargs):
+def check_for_errors(logger):
+    """
+    Raise exception if logger has unflushed tracebacks or validation errors.
+
+    @param logger: A L{MemoryLogger}.
+
+    @raise L{UnflushedTracebacks}: If any tracebacks were unflushed.
+    """
+    # Check for unexpected tracebacks first, since that indicates business
+    # logic errors:
+    if logger.tracebackMessages:
+        raise UnflushedTracebacks(logger.tracebackMessages)
+    # If those are fine, validate the logging:
+    logger.validate()
+
+
+def swap_logger(logger):
+    """Swap out the global logging sink.
+
+    @param logger: An C{ILogger}.
+
+    @return: The current C{ILogger}.
+    """
+    previous_logger = _output._DEFAULT_LOGGER
+    _output._DEFAULT_LOGGER = logger
+    return previous_logger
+
+
+def validateLogging(
+    assertion, *assertionArgs, encoder_=EliotJSONEncoder, **assertionKwargs
+):
     """
     Decorator factory for L{unittest.TestCase} methods to add logging
     validation.
@@ -293,6 +333,8 @@ def validateLogging(assertion, *assertionArgs, **assertionKwargs):
 
     @param assertionKwargs: Additional keyword arguments to pass to
         C{assertion}.
+
+    @param encoder_: C{json.JSONEncoder} subclass to use when validating JSON.
     """
 
     def decorator(function):
@@ -300,19 +342,15 @@ def validateLogging(assertion, *assertionArgs, **assertionKwargs):
         def wrapper(self, *args, **kwargs):
             skipped = False
 
-            kwargs["logger"] = logger = MemoryLogger()
-            self.addCleanup(logger.validate)
-
-            def checkForUnflushed():
-                if not skipped and logger.tracebackMessages:
-                    raise UnflushedTracebacks(logger.tracebackMessages)
-
-            self.addCleanup(checkForUnflushed)
+            kwargs["logger"] = logger = MemoryLogger(encoder=encoder_)
+            self.addCleanup(check_for_errors, logger)
             # TestCase runs cleanups in reverse order, and we want this to
             # run *before* tracebacks are checked:
             if assertion is not None:
-                self.addCleanup(lambda: skipped or assertion(
-                    self, logger, *assertionArgs, **assertionKwargs))
+                self.addCleanup(
+                    lambda: skipped
+                    or assertion(self, logger, *assertionArgs, **assertionKwargs)
+                )
             try:
                 return function(self, *args, **kwargs)
             except SkipTest:
@@ -328,7 +366,9 @@ def validateLogging(assertion, *assertionArgs, **assertionKwargs):
 validate_logging = validateLogging
 
 
-def capture_logging(assertion, *assertionArgs, **assertionKwargs):
+def capture_logging(
+    assertion, *assertionArgs, encoder_=EliotJSONEncoder, **assertionKwargs
+):
     """
     Capture and validate all logging that doesn't specify a L{Logger}.
 
@@ -336,18 +376,19 @@ def capture_logging(assertion, *assertionArgs, **assertionKwargs):
     """
 
     def decorator(function):
-        @validate_logging(assertion, *assertionArgs, **assertionKwargs)
+        @validate_logging(
+            assertion, *assertionArgs, encoder_=encoder_, **assertionKwargs
+        )
         @wraps(function)
         def wrapper(self, *args, **kwargs):
             logger = kwargs["logger"]
-            current_logger = _output._DEFAULT_LOGGER
-            _output._DEFAULT_LOGGER = logger
+            previous_logger = swap_logger(logger)
 
             def cleanup():
-                _output._DEFAULT_LOGGER = current_logger
+                swap_logger(previous_logger)
 
             self.addCleanup(cleanup)
-            return function(self, logger)
+            return function(self, *args, **kwargs)
 
         return wrapper
 
@@ -382,14 +423,15 @@ def assertHasMessage(testCase, logger, messageType, fields=None):
     if fields is None:
         fields = {}
     messages = LoggedMessage.ofType(logger.messages, messageType)
-    testCase.assertTrue(messages, "No messages of type %s" % (messageType, ))
+    testCase.assertTrue(messages, "No messages of type %s" % (messageType,))
     loggedMessage = messages[0]
     assertContainsFields(testCase, loggedMessage.message, fields)
     return loggedMessage
 
 
 def assertHasAction(
-    testCase, logger, actionType, succeeded, startFields=None, endFields=None):
+    testCase, logger, actionType, succeeded, startFields=None, endFields=None
+):
     """
     Assert that the given logger has an action of the given type, and the first
     action found of this type has the given fields and success status.
@@ -401,31 +443,31 @@ def assertHasAction(
 
     @param logger: L{eliot.MemoryLogger} whose messages will be checked.
 
-    @param actionType: L{eliot.ActionType} indicating which message we're
-        looking for.
+    @param actionType: L{eliot.ActionType} or C{str} indicating which message
+        we're looking for.
 
     @param succeeded: Expected success status of the action, a C{bool}.
 
     @param startFields: The first action of the given type found must have a
-        superset of the given C{dict} as its start fields. If C{None} then
+        superset of the given C{dict} as its start fields.  If C{None} then
         fields are not checked.
 
     @param endFields: The first action of the given type found must have a
-        superset of the given C{dict} as its end fields. If C{None} then
+        superset of the given C{dict} as its end fields.  If C{None} then
         fields are not checked.
 
     @return: The first found L{LoggedAction} of the given type, if field
         validation succeeded.
 
-    @raises AssertionError: No action was found, or the fields were not superset
-        of given fields.
+    @raises AssertionError: No action was found, or the fields were not
+        superset of given fields.
     """
     if startFields is None:
         startFields = {}
     if endFields is None:
         endFields = {}
     actions = LoggedAction.ofType(logger.messages, actionType)
-    testCase.assertTrue(actions, "No actions of type %s" % (actionType, ))
+    testCase.assertTrue(actions, "No actions of type %s" % (actionType,))
     action = actions[0]
     testCase.assertEqual(action.succeeded, succeeded)
     assertContainsFields(testCase, action.startMessage, startFields)
diff --git a/eliot/tests/__init__.py b/eliot/tests/__init__.py
index 0bd69e7..03f449e 100644
--- a/eliot/tests/__init__.py
+++ b/eliot/tests/__init__.py
@@ -1,3 +1,9 @@
 """
 Tests for the eliot package.
 """
+
+# Increase hypothesis deadline so we don't time out on PyPy:
+from hypothesis import settings
+
+settings.register_profile("eliot", deadline=1000)
+settings.load_profile("eliot")
diff --git a/eliot/tests/common.py b/eliot/tests/common.py
index 6e26b26..7aa042b 100644
--- a/eliot/tests/common.py
+++ b/eliot/tests/common.py
@@ -3,6 +3,20 @@ Common testing infrastructure.
 """
 
 from io import BytesIO
+from json import JSONEncoder
+
+
+class CustomObject(object):
+    """Gets encoded to JSON."""
+
+
+class CustomJSONEncoder(JSONEncoder):
+    """JSONEncoder that knows about L{CustomObject}."""
+
+    def default(self, o):
+        if isinstance(o, CustomObject):
+            return "CUSTOM!"
+        return JSONEncoder.default(self, o)
 
 
 class FakeSys(object):
diff --git a/eliot/tests/corotests.py b/eliot/tests/corotests.py
deleted file mode 100644
index 78e3501..0000000
--- a/eliot/tests/corotests.py
+++ /dev/null
@@ -1,204 +0,0 @@
-"""
-Tests for coroutines.
-
-Imported into test_coroutine.py when running tests under Python 3.5 or later;
-in earlier versions of Python this code is a syntax error.
-"""
-
-import asyncio
-from threading import Thread
-from unittest import TestCase
-
-from ..testing import capture_logging
-from ..parse import Parser
-from .. import start_action
-from .._action import _ExecutionContext, _context, use_asyncio_context
-from .._asyncio import AsyncioContext
-
-
-async def standalone_coro():
-    """
-    Log a message inside a new coroutine.
-    """
-    await asyncio.sleep(0.1)
-    with start_action(action_type="standalone"):
-        pass
-
-
-async def calling_coro():
-    """
-    Log an action inside a coroutine, and call another coroutine.
-    """
-    with start_action(action_type="calling"):
-        await standalone_coro()
-
-
-def run_coroutines(*async_functions):
-    """
-    Run a coroutine until it finishes.
-    """
-    loop = asyncio.get_event_loop()
-    futures = [asyncio.ensure_future(f()) for f in async_functions]
-
-    async def wait_for_futures():
-        for future in futures:
-            await future
-    loop.run_until_complete(wait_for_futures())
-
-
-class CoroutineTests(TestCase):
-    """
-    Tests for coroutines.
-    """
-    def setUp(self):
-        use_asyncio_context()
-
-        def cleanup():
-            _context.get_sub_context = lambda: None
-        self.addCleanup(cleanup)
-
-    @capture_logging(None)
-    def test_coroutine_vs_main_thread_context(self, logger):
-        """
-        A coroutine has a different Eliot logging context than the thread that
-        runs the event loop.
-        """
-        with start_action(action_type="eventloop"):
-            run_coroutines(standalone_coro)
-        trees = Parser.parse_stream(logger.messages)
-        self.assertEqual(
-            sorted([(t.root().action_type, t.root().children) for t in trees]),
-            [("eventloop", []), ("standalone", [])])
-
-    @capture_logging(None)
-    def test_multiple_coroutines_contexts(self, logger):
-        """
-        Each top-level coroutine has its own Eliot logging context.
-        """
-        async def waiting_coro():
-            with start_action(action_type="waiting"):
-                await asyncio.sleep(0.5)
-
-        run_coroutines(waiting_coro, standalone_coro)
-        trees = Parser.parse_stream(logger.messages)
-        self.assertEqual(
-            sorted([(t.root().action_type, t.root().children) for t in trees]),
-            [("standalone", []), ("waiting", [])])
-
-    @capture_logging(None)
-    def test_await_inherits_coroutine_contexts(self, logger):
-        """
-        awaited coroutines inherit the logging context.
-        """
-        run_coroutines(calling_coro)
-        [tree] = Parser.parse_stream(logger.messages)
-        root = tree.root()
-        [child] = root.children
-        self.assertEqual(
-            (root.action_type, child.action_type, child.children),
-            ("calling", "standalone", []))
-
-
-class ContextTests(TestCase):
-    """
-    Tests for coroutine support in ``eliot._action.ExecutionContext``.
-    """
-    def test_threadSafety(self):
-        """
-        Each thread gets its own execution context even when using asyncio
-        contexts.
-        """
-        ctx = _ExecutionContext()
-        ctx.get_sub_context = AsyncioContext().get_stack
-        first = object()
-        ctx.push(first)
-
-        second = object()
-        valuesInThread = []
-
-        def inthread():
-            ctx.push(second)
-            valuesInThread.append(ctx.current())
-
-        thread = Thread(target=inthread)
-        thread.start()
-        thread.join()
-        # Neither thread was affected by the other:
-        self.assertEqual(valuesInThread, [second])
-        self.assertIs(ctx.current(), first)
-
-    def test_coroutine_vs_main_thread_context(self):
-        """
-        A coroutine has a different Eliot context than the thread that runs the
-        event loop.
-        """
-        ctx = _ExecutionContext()
-        ctx.get_sub_context = AsyncioContext().get_stack
-        current_context = []
-
-        async def coro():
-            current_context.append(("coro", ctx.current()))
-            ctx.push("A")
-            current_context.append(("coro", ctx.current()))
-
-        ctx.push("B")
-        current_context.append(("main", ctx.current()))
-        run_coroutines(coro)
-        current_context.append(("main", ctx.current()))
-        self.assertEqual(
-            current_context,
-            [("main", "B"), ("coro", None), ("coro", "A"), ("main", "B")])
-
-    def test_multiple_coroutines_contexts(self):
-        """
-        Each top-level ("Task") coroutine has its own Eliot separate context.
-        """
-        ctx = _ExecutionContext()
-        ctx.get_sub_context = AsyncioContext().get_stack
-        current_context = []
-
-        async def coro2():
-            current_context.append(("coro2", ctx.current()))
-            ctx.push("B")
-            await asyncio.sleep(1)
-            current_context.append(("coro2", ctx.current()))
-
-        async def coro():
-            current_context.append(("coro", ctx.current()))
-            await asyncio.sleep(0.5)
-            current_context.append(("coro", ctx.current()))
-            ctx.push("A")
-            current_context.append(("coro", ctx.current()))
-
-        run_coroutines(coro, coro2)
-        self.assertEqual(
-            current_context,
-            [("coro", None), ("coro2", None), ("coro", None),
-             ("coro", "A"), ("coro2", "B")])
-
-    def test_await_inherits_coroutine_context(self):
-        """
-        A sub-coroutine (scheduled with await) inherits the parent coroutine's
-        context.
-        """
-        ctx = _ExecutionContext()
-        ctx.get_sub_context = AsyncioContext().get_stack
-        current_context = []
-
-        async def coro2():
-            current_context.append(("coro2", ctx.current()))
-            ctx.push("B")
-            current_context.append(("coro2", ctx.current()))
-
-        async def coro():
-            current_context.append(("coro", ctx.current()))
-            ctx.push("A")
-            current_context.append(("coro", ctx.current()))
-            await coro2()
-            current_context.append(("coro", ctx.current()))
-
-        run_coroutines(coro)
-        self.assertEqual(
-            current_context,
-            [("coro", None), ("coro", "A"), ("coro2", "A"),
-             ("coro2", "B"), ("coro", "B")])
diff --git a/eliot/tests/strategies.py b/eliot/tests/strategies.py
index b202e43..ec27226 100644
--- a/eliot/tests/strategies.py
+++ b/eliot/tests/strategies.py
@@ -19,16 +19,27 @@ from hypothesis.strategies import (
     one_of,
     recursive,
     text,
-    uuids, )
+    uuids,
+)
 
 from pyrsistent import pmap, pvector, ny, thaw
 
 from .._action import (
-    ACTION_STATUS_FIELD, ACTION_TYPE_FIELD, FAILED_STATUS, STARTED_STATUS,
-    SUCCEEDED_STATUS, TaskLevel, WrittenAction)
+    ACTION_STATUS_FIELD,
+    ACTION_TYPE_FIELD,
+    FAILED_STATUS,
+    STARTED_STATUS,
+    SUCCEEDED_STATUS,
+    TaskLevel,
+    WrittenAction,
+)
 from .._message import (
-    EXCEPTION_FIELD, REASON_FIELD, TASK_LEVEL_FIELD, TASK_UUID_FIELD,
-    WrittenMessage)
+    EXCEPTION_FIELD,
+    REASON_FIELD,
+    TASK_LEVEL_FIELD,
+    TASK_UUID_FIELD,
+    WrittenMessage,
+)
 
 task_level_indexes = integers(min_value=1, max_value=10)
 # Task levels can be arbitrarily deep, but in the wild rarely as much as 100.
@@ -48,7 +59,9 @@ message_core_dicts = fixed_dictionaries(
     dict(
         task_level=task_level_lists.map(pvector),
         task_uuid=uuids().map(unicode),
-        timestamp=timestamps)).map(pmap)
+        timestamp=timestamps,
+    )
+).map(pmap)
 
 # Text generation is slow. We can make it faster by not generating so
 # much. These are reasonable values.
@@ -57,7 +70,8 @@ message_data_dicts = dictionaries(
     values=labels,
     # People don't normally put much more than ten fields in their
     # messages, surely?
-    max_size=10, ).map(pmap)
+    max_size=10,
+).map(pmap)
 
 
 def written_from_pmap(d):
@@ -85,14 +99,12 @@ def union(*dicts):
 message_dicts = builds(union, message_data_dicts, message_core_dicts)
 written_messages = message_dicts.map(written_from_pmap)
 
-_start_action_fields = fixed_dictionaries({
-    ACTION_STATUS_FIELD:
-    just(STARTED_STATUS),
-    ACTION_TYPE_FIELD:
-    labels, })
-start_action_message_dicts = builds(
-    union, message_dicts, _start_action_fields).map(
-        lambda x: x.update({TASK_LEVEL_FIELD: x[TASK_LEVEL_FIELD].set(-1, 1)}))
+_start_action_fields = fixed_dictionaries(
+    {ACTION_STATUS_FIELD: just(STARTED_STATUS), ACTION_TYPE_FIELD: labels}
+)
+start_action_message_dicts = builds(union, message_dicts, _start_action_fields).map(
+    lambda x: x.update({TASK_LEVEL_FIELD: x[TASK_LEVEL_FIELD].set(-1, 1)})
+)
 start_action_messages = start_action_message_dicts.map(written_from_pmap)
 
 
@@ -101,14 +113,17 @@ def sibling_task_level(message, n):
 
 
 _end_action_fields = one_of(
-    just({
-        ACTION_STATUS_FIELD: SUCCEEDED_STATUS}),
-    fixed_dictionaries({
-        ACTION_STATUS_FIELD: just(FAILED_STATUS),
-        # Text generation is slow. We can make it faster by not generating so
-        # much. Thqese are reasonable values.
-        EXCEPTION_FIELD: labels,
-        REASON_FIELD: labels, }), )
+    just({ACTION_STATUS_FIELD: SUCCEEDED_STATUS}),
+    fixed_dictionaries(
+        {
+            ACTION_STATUS_FIELD: just(FAILED_STATUS),
+            # Text generation is slow. We can make it faster by not generating so
+            # much. These are reasonable values.
+            EXCEPTION_FIELD: labels,
+            REASON_FIELD: labels,
+        }
+    ),
+)
 
 
 def _make_written_action(start_message, child_messages, end_message_dict):
@@ -137,13 +152,16 @@ def _make_written_action(start_message, child_messages, end_message_dict):
     if end_message_dict:
         end_message = written_from_pmap(
             union(
-                end_message_dict, {
-                    ACTION_TYPE_FIELD:
-                    start_message.contents[ACTION_TYPE_FIELD],
-                    TASK_UUID_FIELD:
-                    task_uuid,
-                    TASK_LEVEL_FIELD:
-                    sibling_task_level(start_message, 2 + len(children)), }))
+                end_message_dict,
+                {
+                    ACTION_TYPE_FIELD: start_message.contents[ACTION_TYPE_FIELD],
+                    TASK_UUID_FIELD: task_uuid,
+                    TASK_LEVEL_FIELD: sibling_task_level(
+                        start_message, 2 + len(children)
+                    ),
+                },
+            )
+        )
     else:
         end_message = None
 
@@ -156,7 +174,8 @@ written_actions = recursive(
         _make_written_action,
         start_message=start_action_messages,
         child_messages=lists(children, max_size=5),
-        end_message_dict=builds(union, message_dicts, _end_action_fields) | none(), ),
+        end_message_dict=builds(union, message_dicts, _end_action_fields) | none(),
+    ),
 )
 
 
@@ -180,17 +199,15 @@ def _map_messages(f, written_action):
         return f(written_action)
 
     start_message = f(written_action.start_message)
-    children = written_action.children.transform([ny],
-                                                 partial(_map_messages, f))
+    children = written_action.children.transform([ny], partial(_map_messages, f))
     if written_action.end_message:
         end_message = f(written_action.end_message)
     else:
         end_message = None
 
     return WrittenAction.from_messages(
-        start_message=start_message,
-        children=pvector(children),
-        end_message=end_message, )
+        start_message=start_message, children=pvector(children), end_message=end_message
+    )
 
 
 def reparent_action(task_uuid, task_level, written_action):
@@ -209,10 +226,9 @@ def reparent_action(task_uuid, task_level, written_action):
     old_prefix_len = len(written_action.task_level.level)
 
     def fix_message(message):
-        return (
-            message.transform(
-                ['_logged_dict', TASK_LEVEL_FIELD],
-                lambda level: new_prefix + level[old_prefix_len:]).transform([
-                    '_logged_dict', TASK_UUID_FIELD], task_uuid))
+        return message.transform(
+            ["_logged_dict", TASK_LEVEL_FIELD],
+            lambda level: new_prefix + level[old_prefix_len:],
+        ).transform(["_logged_dict", TASK_UUID_FIELD], task_uuid)
 
     return _map_messages(fix_message, written_action)
diff --git a/eliot/tests/test_action.py b/eliot/tests/test_action.py
index dc1113b..1d6db7c 100644
--- a/eliot/tests/test_action.py
+++ b/eliot/tests/test_action.py
@@ -5,21 +5,18 @@ Tests for L{eliot._action}.
 from __future__ import unicode_literals
 
 import pickle
+import time
 from unittest import TestCase, skipIf
+from unittest.mock import patch
 from threading import Thread
-from warnings import catch_warnings, simplefilter
 
 import six
+
 if six.PY3:
     unicode = six.text_type
 
 from hypothesis import assume, given, settings, HealthCheck
-from hypothesis.strategies import (
-    integers,
-    lists,
-    just,
-    text,
-)
+from hypothesis.strategies import integers, lists, just, text
 
 from pyrsistent import pvector, v
 
@@ -27,11 +24,25 @@ import testtools
 from testtools.matchers import MatchesStructure
 
 from .._action import (
-    Action, _ExecutionContext, current_action, startTask, start_action,
-    ACTION_STATUS_FIELD, ACTION_TYPE_FIELD, FAILED_STATUS, STARTED_STATUS,
-    SUCCEEDED_STATUS, DuplicateChild, InvalidStartMessage, InvalidStatus,
-    TaskLevel, WrittenAction, WrongActionType, WrongTask, WrongTaskLevel,
-    TooManyCalls, log_call
+    Action,
+    current_action,
+    startTask,
+    start_action,
+    ACTION_STATUS_FIELD,
+    ACTION_TYPE_FIELD,
+    FAILED_STATUS,
+    STARTED_STATUS,
+    SUCCEEDED_STATUS,
+    DuplicateChild,
+    InvalidStartMessage,
+    InvalidStatus,
+    TaskLevel,
+    WrittenAction,
+    WrongActionType,
+    WrongTask,
+    WrongTaskLevel,
+    TooManyCalls,
+    log_call,
 )
 from .._message import (
     EXCEPTION_FIELD,
@@ -46,7 +57,6 @@ from .._validation import ActionType, Field, _ActionSerializers
 from ..testing import assertContainsFields, capture_logging
 from ..parse import Parser
 from .. import (
-    _action,
     add_destination,
     remove_destination,
     register_exception_extractor,
@@ -66,126 +76,6 @@ from .strategies import (
     union,
     written_from_pmap,
 )
-import eliot
-
-
-class ExecutionContextTests(TestCase):
-    """
-    Tests for L{_ExecutionContext}.
-    """
-
-    def test_nothingPushed(self):
-        """
-        If no action has been pushed, L{_ExecutionContext.current} returns
-        C{None}.
-        """
-        ctx = _ExecutionContext()
-        self.assertIs(ctx.current(), None)
-
-    def test_pushSingle(self):
-        """
-        L{_ExecutionContext.current} returns the action passed to
-        L{_ExecutionContext.push} (assuming no pops).
-        """
-        ctx = _ExecutionContext()
-        a = object()
-        ctx.push(a)
-        self.assertIs(ctx.current(), a)
-
-    def test_pushMultiple(self):
-        """
-        L{_ExecutionContext.current} returns the action passed to the last
-        call to L{_ExecutionContext.push} (assuming no pops).
-        """
-        ctx = _ExecutionContext()
-        a = object()
-        b = object()
-        ctx.push(a)
-        ctx.push(b)
-        self.assertIs(ctx.current(), b)
-
-    def test_multipleCurrent(self):
-        """
-        Multiple calls to L{_ExecutionContext.current} return the same result.
-        """
-        ctx = _ExecutionContext()
-        a = object()
-        ctx.push(a)
-        ctx.current()
-        self.assertIs(ctx.current(), a)
-
-    def test_popSingle(self):
-        """
-        L{_ExecutionContext.pop} cancels a L{_ExecutionContext.push}, leading
-        to an empty context.
-        """
-        ctx = _ExecutionContext()
-        a = object()
-        ctx.push(a)
-        ctx.pop()
-        self.assertIs(ctx.current(), None)
-
-    def test_popMultiple(self):
-        """
-        L{_ExecutionContext.pop} cancels the last L{_ExecutionContext.push},
-        resulting in current context being whatever was pushed before that.
-        """
-        ctx = _ExecutionContext()
-        a = object()
-        b = object()
-        ctx.push(a)
-        ctx.push(b)
-        ctx.pop()
-        self.assertIs(ctx.current(), a)
-
-    def test_threadStart(self):
-        """
-        Each thread starts with an empty execution context.
-        """
-        ctx = _ExecutionContext()
-        first = object()
-        ctx.push(first)
-
-        valuesInThread = []
-
-        def inthread():
-            valuesInThread.append(ctx.current())
-
-        thread = Thread(target=inthread)
-        thread.start()
-        thread.join()
-        self.assertEqual(valuesInThread, [None])
-
-    def test_threadSafety(self):
-        """
-        Each thread gets its own execution context.
-        """
-        ctx = _ExecutionContext()
-        first = object()
-        ctx.push(first)
-
-        second = object()
-        valuesInThread = []
-
-        def inthread():
-            ctx.push(second)
-            valuesInThread.append(ctx.current())
-
-        thread = Thread(target=inthread)
-        thread.start()
-        thread.join()
-        # Neither thread was affected by the other:
-        self.assertEqual(valuesInThread, [second])
-        self.assertIs(ctx.current(), first)
-
-    def test_globalInstance(self):
-        """
-        A global L{_ExecutionContext} is exposed in the L{eliot._action}
-        module.
-        """
-        self.assertIsInstance(_action._context, _ExecutionContext)
-        self.assertEqual(_action.current_action, _action._context.current)
-        self.assertEqual(eliot.current_action, _action._context.current)
 
 
 class ActionTests(TestCase):
@@ -201,13 +91,15 @@ class ActionTests(TestCase):
         action = Action(logger, "unique", TaskLevel(level=[]), "sys:thename")
         action._start({"key": "value"})
         assertContainsFields(
-            self, logger.messages[0], {
+            self,
+            logger.messages[0],
+            {
                 "task_uuid": "unique",
                 "task_level": [1],
                 "action_type": "sys:thename",
                 "action_status": "started",
-                "key": "value"
-            }
+                "key": "value",
+            },
         )
 
     def test_task_uuid(self):
@@ -248,13 +140,12 @@ class ActionTests(TestCase):
         logger2 = MemoryLogger()
         child = action.child(logger2, "newsystem:newname")
         self.assertEqual(
-            [child._logger, child._identification, child._task_level], [
-                logger2, {
-                    "task_uuid": "unique",
-                    "action_type": "newsystem:newname"
-                },
-                TaskLevel(level=[1])
-            ]
+            [child._logger, child._identification, child._task_level],
+            [
+                logger2,
+                {"task_uuid": "unique", "action_type": "newsystem:newname"},
+                TaskLevel(level=[1]),
+            ],
         )
 
     def test_childLevel(self):
@@ -308,6 +199,23 @@ class ActionTests(TestCase):
         action.run(lambda: result.append(current_action()))
         self.assertEqual(result, [action])
 
+    def test_per_thread_context(self):
+        """Different threads have different contexts."""
+        in_thread = []
+
+        def run_in_thread():
+            action = Action(None, "", TaskLevel(level=[]), "")
+            with action.context():
+                time.sleep(0.5)
+                in_thread.append(current_action())
+
+        thread = Thread(target=run_in_thread)
+        thread.start()
+        time.sleep(0.2)
+        self.assertEqual(current_action(), None)
+        thread.join()
+        self.assertIsInstance(in_thread[0], Action)
+
     def test_runContextUnsetOnReturn(self):
         """
         L{Action.run} unsets the action once the given function returns.
@@ -416,12 +324,14 @@ class ActionTests(TestCase):
         action = Action(logger, "unique", TaskLevel(level=[]), "sys:thename")
         action.finish()
         assertContainsFields(
-            self, logger.messages[0], {
+            self,
+            logger.messages[0],
+            {
                 "task_uuid": "unique",
                 "task_level": [1],
                 "action_type": "sys:thename",
-                "action_status": "succeeded"
-            }
+                "action_status": "succeeded",
+            },
         )
 
     def test_successfulFinishSerializer(self):
@@ -489,8 +399,7 @@ class ActionTests(TestCase):
 
         action.finish(BadException())
         self.assertEqual(
-            logger.messages[0]["reason"],
-            "eliot: unknown, unicode() raised exception"
+            logger.messages[0]["reason"], "eliot: unknown, unicode() raised exception"
         )
 
     def test_withLogsSuccessfulFinishMessage(self):
@@ -506,12 +415,14 @@ class ActionTests(TestCase):
         # functions, the intended public APIs.
         self.assertEqual(len(logger.messages), 1)
         assertContainsFields(
-            self, logger.messages[0], {
+            self,
+            logger.messages[0],
+            {
                 "task_uuid": "uuid",
                 "task_level": [1, 1],
                 "action_type": "sys:me",
-                "action_status": "succeeded"
-            }
+                "action_status": "succeeded",
+            },
         )
 
     def test_withLogsExceptionMessage(self):
@@ -533,14 +444,16 @@ class ActionTests(TestCase):
 
         self.assertEqual(len(logger.messages), 1)
         assertContainsFields(
-            self, logger.messages[0], {
+            self,
+            logger.messages[0],
+            {
                 "task_uuid": "uuid",
                 "task_level": [1, 1],
                 "action_type": "sys:me",
                 "action_status": "failed",
                 "reason": "because",
-                "exception": "%s.RuntimeError" % (RuntimeError.__module__, )
-            }
+                "exception": "%s.RuntimeError" % (RuntimeError.__module__,),
+            },
         )
 
     def test_withReturnValue(self):
@@ -562,13 +475,7 @@ class ActionTests(TestCase):
         with action as act:
             act.addSuccessFields(x=1, y=2)
             act.addSuccessFields(z=3)
-        assertContainsFields(
-            self, logger.messages[0], {
-                "x": 1,
-                "y": 2,
-                "z": 3
-            }
-        )
+        assertContainsFields(self, logger.messages[0], {"x": 1, "y": 2, "z": 3})
 
     def test_nextTaskLevel(self):
         """
@@ -576,10 +483,11 @@ class ActionTests(TestCase):
         """
         action = Action(MemoryLogger(), "uuid", TaskLevel(level=[1]), "sys:me")
         self.assertEqual(
-            [action._nextTaskLevel() for i in range(5)], [
+            [action._nextTaskLevel() for i in range(5)],
+            [
                 TaskLevel(level=level)
                 for level in ([1, 1], [1, 2], [1, 3], [1, 4], [1, 5])
-            ]
+            ],
         )
 
     def test_multipleFinishCalls(self):
@@ -596,25 +504,6 @@ class ActionTests(TestCase):
         # Only initial finish message is logged:
         self.assertEqual(len(logger.messages), 1)
 
-    def test_stringActionCompatibility(self):
-        """
-        L{Action} can be initialized with a string version of a L{TaskLevel},
-        for backwards compatibility.
-        """
-        logger = MemoryLogger()
-        action = Action(logger, "uuid", "/1/2/", "sys:me")
-        self.assertEqual(action._task_level, TaskLevel(level=[1, 2]))
-
-    def test_stringActionCompatibilityWarning(self):
-        """
-        Calling L{Action} with a string results in a L{DeprecationWarning}
-        """
-        logger = MemoryLogger()
-        with catch_warnings(record=True) as warnings:
-            simplefilter("always")  # Catch all warnings
-            Action(logger, "uuid", "/1/2/", "sys:me")
-            self.assertEqual(warnings[-1].category, DeprecationWarning)
-
 
 class StartActionAndTaskTests(TestCase):
     """
@@ -644,9 +533,7 @@ class StartActionAndTaskTests(TestCase):
         resulting L{Action}.
         """
         logger = MemoryLogger()
-        serializers = _ActionSerializers(
-            start=None, success=None, failure=None
-        )
+        serializers = _ActionSerializers(start=None, success=None, failure=None)
         action = startTask(logger, "sys:do", serializers)
         self.assertIs(action._serializers, serializers)
 
@@ -656,9 +543,7 @@ class StartActionAndTaskTests(TestCase):
         resulting L{Action}.
         """
         logger = MemoryLogger()
-        serializers = _ActionSerializers(
-            start=None, success=None, failure=None
-        )
+        serializers = _ActionSerializers(start=None, success=None, failure=None)
         action = start_action(logger, "sys:do", serializers)
         self.assertIs(action._serializers, serializers)
 
@@ -670,8 +555,7 @@ class StartActionAndTaskTests(TestCase):
         action = startTask(logger, "sys:do")
         action2 = startTask(logger, "sys:do")
         self.assertNotEqual(
-            action._identification["task_uuid"],
-            action2._identification["task_uuid"]
+            action._identification["task_uuid"], action2._identification["task_uuid"]
         )
 
     def test_startTaskLogsStart(self):
@@ -681,13 +565,15 @@ class StartActionAndTaskTests(TestCase):
         logger = MemoryLogger()
         action = startTask(logger, "sys:do", key="value")
         assertContainsFields(
-            self, logger.messages[0], {
+            self,
+            logger.messages[0],
+            {
                 "task_uuid": action._identification["task_uuid"],
                 "task_level": [1],
                 "action_type": "sys:do",
                 "action_status": "started",
-                "key": "value"
-            }
+                "key": "value",
+            },
         )
 
     def test_start_action_default_action_type(self):
@@ -716,13 +602,15 @@ class StartActionAndTaskTests(TestCase):
         logger = MemoryLogger()
         action = start_action(logger, "sys:do", key="value")
         assertContainsFields(
-            self, logger.messages[0], {
+            self,
+            logger.messages[0],
+            {
                 "task_uuid": action._identification["task_uuid"],
                 "task_level": [1],
                 "action_type": "sys:do",
                 "action_status": "started",
-                "key": "value"
-            }
+                "key": "value",
+            },
         )
 
     def test_startActionWithParent(self):
@@ -748,13 +636,15 @@ class StartActionAndTaskTests(TestCase):
         with parent:
             start_action(logger, "sys:do", key="value")
             assertContainsFields(
-                self, logger.messages[0], {
+                self,
+                logger.messages[0],
+                {
                     "task_uuid": "uuid",
                     "task_level": [1, 1],
                     "action_type": "sys:do",
                     "action_status": "started",
-                    "key": "value"
-                }
+                    "key": "value",
+                },
             )
 
     def test_startTaskNoLogger(self):
@@ -766,13 +656,15 @@ class StartActionAndTaskTests(TestCase):
         self.addCleanup(remove_destination, messages.append)
         action = startTask(action_type="sys:do", key="value")
         assertContainsFields(
-            self, messages[0], {
+            self,
+            messages[0],
+            {
                 "task_uuid": action._identification["task_uuid"],
                 "task_level": [1],
                 "action_type": "sys:do",
                 "action_status": "started",
-                "key": "value"
-            }
+                "key": "value",
+            },
         )
 
     def test_startActionNoLogger(self):
@@ -784,13 +676,15 @@ class StartActionAndTaskTests(TestCase):
         self.addCleanup(remove_destination, messages.append)
         action = start_action(action_type="sys:do", key="value")
         assertContainsFields(
-            self, messages[0], {
+            self,
+            messages[0],
+            {
                 "task_uuid": action._identification["task_uuid"],
                 "task_level": [1],
                 "action_type": "sys:do",
                 "action_status": "started",
-                "key": "value"
-            }
+                "key": "value",
+            },
         )
 
 
@@ -833,11 +727,9 @@ class SerializationTests(TestCase):
             [
                 action._nextTaskLevel(),
                 action.serialize_task_id(),
-                action._nextTaskLevel()
-            ], [
-                TaskLevel(level=[1, 2, 1]), b"uniq123@/1/2/2",
-                TaskLevel(level=[1, 2, 3])
-            ]
+                action._nextTaskLevel(),
+            ],
+            [TaskLevel(level=[1, 2, 1]), b"uniq123@/1/2/2", TaskLevel(level=[1, 2, 3])],
         )
 
     def test_continueTaskReturnsAction(self):
@@ -846,32 +738,24 @@ class SerializationTests(TestCase):
         C{task_uuid} are derived from those in the given serialized task
         identifier.
         """
-        originalAction = Action(
-            None, "uniq456", TaskLevel(level=[3, 4]), "mytype"
-        )
+        originalAction = Action(None, "uniq456", TaskLevel(level=[3, 4]), "mytype")
         taskId = originalAction.serializeTaskId()
 
         newAction = Action.continue_task(MemoryLogger(), taskId)
         self.assertEqual(
+            [newAction.__class__, newAction._identification, newAction._task_level],
             [
-                newAction.__class__, newAction._identification,
-                newAction._task_level
-            ], [
-                Action, {
-                    "task_uuid": "uniq456",
-                    "action_type": "eliot:remote_task"
-                },
-                TaskLevel(level=[3, 4, 1])
-            ]
+                Action,
+                {"task_uuid": "uniq456", "action_type": "eliot:remote_task"},
+                TaskLevel(level=[3, 4, 1]),
+            ],
         )
 
     def test_continueTaskUnicode(self):
         """
         L{Action.continue_task} can take a Unicode task identifier.
         """
-        original_action = Action(
-            None, "uniq790", TaskLevel(level=[3, 4]), "mytype"
-        )
+        original_action = Action(None, "uniq790", TaskLevel(level=[3, 4]), "mytype")
         task_id = unicode(original_action.serialize_task_id(), "utf-8")
 
         new_action = Action.continue_task(MemoryLogger(), task_id)
@@ -881,29 +765,27 @@ class SerializationTests(TestCase):
         """
         L{Action.continue_task} starts the L{Action} it creates.
         """
-        originalAction = Action(
-            None, "uniq456", TaskLevel(level=[3, 4]), "mytype"
-        )
+        originalAction = Action(None, "uniq456", TaskLevel(level=[3, 4]), "mytype")
         taskId = originalAction.serializeTaskId()
         logger = MemoryLogger()
 
         Action.continue_task(logger, taskId)
         assertContainsFields(
-            self, logger.messages[0], {
+            self,
+            logger.messages[0],
+            {
                 "task_uuid": "uniq456",
                 "task_level": [3, 4, 1, 1],
                 "action_type": "eliot:remote_task",
-                "action_status": "started"
-            }
+                "action_status": "started",
+            },
         )
 
     def test_continueTaskNoLogger(self):
         """
         L{Action.continue_task} can be called without a logger.
         """
-        originalAction = Action(
-            None, "uniq456", TaskLevel(level=[3, 4]), "mytype"
-        )
+        originalAction = Action(None, "uniq456", TaskLevel(level=[3, 4]), "mytype")
         taskId = originalAction.serializeTaskId()
 
         messages = []
@@ -911,12 +793,14 @@ class SerializationTests(TestCase):
         self.addCleanup(remove_destination, messages.append)
         Action.continue_task(task_id=taskId)
         assertContainsFields(
-            self, messages[-1], {
+            self,
+            messages[-1],
+            {
                 "task_uuid": "uniq456",
                 "task_level": [3, 4, 1, 1],
                 "action_type": "eliot:remote_task",
-                "action_status": "started"
-            }
+                "action_status": "started",
+            },
         )
 
     def test_continueTaskRequiredTaskId(self):
@@ -960,6 +844,36 @@ class TaskLevelTests(TestCase):
             )
         )
 
+    def test_equality(self):
+        """
+        L{TaskLevel} correctly implements equality and hashing.
+        """
+        a = TaskLevel(level=[1, 2])
+        a2 = TaskLevel(level=[1, 2])
+        b = TaskLevel(level=[2, 999])
+        self.assertTrue(
+            all(
+                [
+                    a == a2,
+                    a2 == a,
+                    a != b,
+                    b != a,
+                    not b == a,
+                    not a == b,
+                    not a == 1,
+                    a != 1,
+                    hash(a) == hash(a2),
+                    hash(b) != hash(a),
+                ]
+            )
+        )
+
+    def test_as_list(self):
+        """
+        L{TaskLevel.as_list} returns the level.
+        """
+        self.assertEqual(TaskLevel(level=[1, 2, 3]).as_list(), [1, 2, 3])
+
     @given(lists(task_level_indexes))
     def test_parent_of_child(self, base_task_level):
         """
@@ -1016,9 +930,8 @@ class TaskLevelTests(TestCase):
         L{TaskLevel.toString}.
         """
         self.assertEqual(
-            [TaskLevel.fromString("/"),
-             TaskLevel.fromString("/2/1")],
-            [TaskLevel(level=[]), TaskLevel(level=[2, 1])]
+            [TaskLevel.fromString("/"), TaskLevel.fromString("/2/1")],
+            [TaskLevel(level=[]), TaskLevel(level=[2, 1])],
         )
 
     def test_from_string(self):
@@ -1059,7 +972,7 @@ class WrittenActionTests(testtools.TestCase):
                 end_time=None,
                 reason=None,
                 exception=None,
-            )
+            ),
         )
 
     @given(start_action_messages, message_dicts, integers(min_value=2))
@@ -1072,13 +985,13 @@ class WrittenActionTests(testtools.TestCase):
         """
         end_message = written_from_pmap(
             union(
-                end_message_dict, {
+                end_message_dict,
+                {
                     ACTION_STATUS_FIELD: SUCCEEDED_STATUS,
-                    ACTION_TYPE_FIELD:
-                    start_message.contents[ACTION_TYPE_FIELD],
+                    ACTION_TYPE_FIELD: start_message.contents[ACTION_TYPE_FIELD],
                     TASK_UUID_FIELD: start_message.task_uuid,
                     TASK_LEVEL_FIELD: sibling_task_level(start_message, n),
-                }
+                },
             )
         )
         action = WrittenAction.from_messages(end_message=end_message)
@@ -1094,7 +1007,7 @@ class WrittenActionTests(testtools.TestCase):
                 end_time=end_message.timestamp,
                 reason=None,
                 exception=None,
-            )
+            ),
         )
 
     @given(message_dicts)
@@ -1118,7 +1031,7 @@ class WrittenActionTests(testtools.TestCase):
                 end_time=None,
                 reason=None,
                 exception=None,
-            )
+            ),
         )
 
     @given(start_action_messages, message_dicts, integers(min_value=2))
@@ -1128,21 +1041,22 @@ class WrittenActionTests(testtools.TestCase):
         within such a task. If we try to assemble actions from messages with
         differing task UUIDs, we raise an error.
         """
-        assume(start_message.task_uuid != end_message_dict['task_uuid'])
+        assume(start_message.task_uuid != end_message_dict["task_uuid"])
         action_type = start_message.as_dict()[ACTION_TYPE_FIELD]
         end_message = written_from_pmap(
             union(
-                end_message_dict.set(ACTION_TYPE_FIELD, action_type), {
+                end_message_dict.set(ACTION_TYPE_FIELD, action_type),
+                {
                     ACTION_STATUS_FIELD: SUCCEEDED_STATUS,
                     TASK_LEVEL_FIELD: sibling_task_level(start_message, n),
-                }
+                },
             )
         )
         self.assertRaises(
             WrongTask,
             WrittenAction.from_messages,
             start_message,
-            end_message=end_message
+            end_message=end_message,
         )
 
     @given(message_dicts)
@@ -1156,13 +1070,11 @@ class WrittenActionTests(testtools.TestCase):
         """
         assume(ACTION_STATUS_FIELD not in message_dict)
         message = written_from_pmap(message_dict)
-        self.assertRaises(
-            InvalidStartMessage, WrittenAction.from_messages, message
-        )
+        self.assertRaises(InvalidStartMessage, WrittenAction.from_messages, message)
 
     @given(
         message_dict=start_action_message_dicts,
-        status=(just(FAILED_STATUS) | just(SUCCEEDED_STATUS) | text())
+        status=(just(FAILED_STATUS) | just(SUCCEEDED_STATUS) | text()),
     )
     def test_invalid_start_message_wrong_status(self, message_dict, status):
         """
@@ -1173,14 +1085,8 @@ class WrittenActionTests(testtools.TestCase):
         This test handles the case where the status field is present, but is
         not C{STARTED_STATUS}.
         """
-        message = written_from_pmap(
-            message_dict.update({
-                ACTION_STATUS_FIELD: status
-            })
-        )
-        self.assertRaises(
-            InvalidStartMessage, WrittenAction.from_messages, message
-        )
+        message = written_from_pmap(message_dict.update({ACTION_STATUS_FIELD: status}))
+        self.assertRaises(InvalidStartMessage, WrittenAction.from_messages, message)
 
     @given(start_action_message_dicts, integers(min_value=2))
     def test_invalid_task_level_in_start_message(self, start_message_dict, i):
@@ -1195,14 +1101,10 @@ class WrittenActionTests(testtools.TestCase):
         new_level = start_message_dict[TASK_LEVEL_FIELD].append(i)
         message_dict = start_message_dict.set(TASK_LEVEL_FIELD, new_level)
         message = written_from_pmap(message_dict)
-        self.assertRaises(
-            InvalidStartMessage, WrittenAction.from_messages, message
-        )
+        self.assertRaises(InvalidStartMessage, WrittenAction.from_messages, message)
 
     @given(start_action_messages, message_dicts, text(), integers(min_value=1))
-    def test_action_type_mismatch(
-        self, start_message, end_message_dict, end_type, n
-    ):
+    def test_action_type_mismatch(self, start_message, end_message_dict, end_type, n):
         """
         The end message of an action must have the same C{ACTION_TYPE_FIELD} as
         the start message of an action. If we try to end an action with a
@@ -1211,19 +1113,20 @@ class WrittenActionTests(testtools.TestCase):
         assume(end_type != start_message.contents[ACTION_TYPE_FIELD])
         end_message = written_from_pmap(
             union(
-                end_message_dict, {
+                end_message_dict,
+                {
                     ACTION_STATUS_FIELD: SUCCEEDED_STATUS,
                     ACTION_TYPE_FIELD: end_type,
                     TASK_UUID_FIELD: start_message.task_uuid,
                     TASK_LEVEL_FIELD: sibling_task_level(start_message, n),
-                }
+                },
             )
         )
         self.assertRaises(
             WrongActionType,
             WrittenAction.from_messages,
             start_message,
-            end_message=end_message
+            end_message=end_message,
         )
 
     @given(start_action_messages, message_dicts, integers(min_value=2))
@@ -1238,18 +1141,16 @@ class WrittenActionTests(testtools.TestCase):
         """
         end_message = written_from_pmap(
             union(
-                end_message_dict, {
+                end_message_dict,
+                {
                     ACTION_STATUS_FIELD: SUCCEEDED_STATUS,
-                    ACTION_TYPE_FIELD:
-                    start_message.contents[ACTION_TYPE_FIELD],
+                    ACTION_TYPE_FIELD: start_message.contents[ACTION_TYPE_FIELD],
                     TASK_UUID_FIELD: start_message.task_uuid,
                     TASK_LEVEL_FIELD: sibling_task_level(start_message, n),
-                }
+                },
             )
         )
-        action = WrittenAction.from_messages(
-            start_message, end_message=end_message
-        )
+        action = WrittenAction.from_messages(start_message, end_message=end_message)
         self.assertThat(
             action,
             MatchesStructure.byEquality(
@@ -1262,19 +1163,11 @@ class WrittenActionTests(testtools.TestCase):
                 end_time=end_message.timestamp,
                 reason=None,
                 exception=None,
-            )
+            ),
         )
 
-    @given(
-        start_action_messages,
-        message_dicts,
-        text(),
-        text(),
-        integers(min_value=2)
-    )
-    def test_failed_end(
-        self, start_message, end_message_dict, exception, reason, n
-    ):
+    @given(start_action_messages, message_dicts, text(), text(), integers(min_value=2))
+    def test_failed_end(self, start_message, end_message_dict, exception, reason, n):
         """
         A L{WrittenAction} can be constructed with just a start message and an
         end message: in this case, an end message that indicates that the
@@ -1286,20 +1179,18 @@ class WrittenActionTests(testtools.TestCase):
         """
         end_message = written_from_pmap(
             union(
-                end_message_dict, {
+                end_message_dict,
+                {
                     ACTION_STATUS_FIELD: FAILED_STATUS,
-                    ACTION_TYPE_FIELD:
-                    start_message.contents[ACTION_TYPE_FIELD],
+                    ACTION_TYPE_FIELD: start_message.contents[ACTION_TYPE_FIELD],
                     TASK_UUID_FIELD: start_message.task_uuid,
                     TASK_LEVEL_FIELD: sibling_task_level(start_message, n),
                     EXCEPTION_FIELD: exception,
                     REASON_FIELD: reason,
-                }
+                },
             )
         )
-        action = WrittenAction.from_messages(
-            start_message, end_message=end_message
-        )
+        action = WrittenAction.from_messages(start_message, end_message=end_message)
         self.assertThat(
             action,
             MatchesStructure.byEquality(
@@ -1312,7 +1203,7 @@ class WrittenActionTests(testtools.TestCase):
                 end_time=end_message.timestamp,
                 reason=reason,
                 exception=exception,
-            )
+            ),
         )
 
     @given(start_action_messages, message_dicts, integers(min_value=2))
@@ -1325,19 +1216,19 @@ class WrittenActionTests(testtools.TestCase):
         assume(ACTION_STATUS_FIELD not in end_message_dict)
         end_message = written_from_pmap(
             union(
-                end_message_dict, {
-                    ACTION_TYPE_FIELD:
-                    start_message.contents[ACTION_TYPE_FIELD],
+                end_message_dict,
+                {
+                    ACTION_TYPE_FIELD: start_message.contents[ACTION_TYPE_FIELD],
                     TASK_UUID_FIELD: start_message.task_uuid,
                     TASK_LEVEL_FIELD: sibling_task_level(start_message, n),
-                }
+                },
             )
         )
         self.assertRaises(
             InvalidStatus,
             WrittenAction.from_messages,
             start_message,
-            end_message=end_message
+            end_message=end_message,
         )
 
     # This test is slow, and when run under coverage on pypy on Travis won't
@@ -1354,8 +1245,9 @@ class WrittenActionTests(testtools.TestCase):
             reparent_action(
                 start_message.task_uuid,
                 TaskLevel(level=sibling_task_level(start_message, i)),
-                message
-            ) for (i, message) in enumerate(child_messages, 2)
+                message,
+            )
+            for (i, message) in enumerate(child_messages, 2)
         ]
         action = WrittenAction.from_messages(start_message, messages)
 
@@ -1383,25 +1275,18 @@ class WrittenActionTests(testtools.TestCase):
         child of the action's task level.
         """
         assume(
-            not start_message.task_level.
-            is_sibling_of(TaskLevel(level=child_message[TASK_LEVEL_FIELD]))
+            not start_message.task_level.is_sibling_of(
+                TaskLevel(level=child_message[TASK_LEVEL_FIELD])
+            )
         )
         message = written_from_pmap(
-            child_message.update({
-                TASK_UUID_FIELD: start_message.task_uuid
-            })
+            child_message.update({TASK_UUID_FIELD: start_message.task_uuid})
         )
         self.assertRaises(
-            WrongTaskLevel, WrittenAction.from_messages, start_message,
-            v(message)
+            WrongTaskLevel, WrittenAction.from_messages, start_message, v(message)
         )
 
-    @given(
-        start_action_messages,
-        message_dicts,
-        message_dicts,
-        integers(min_value=2)
-    )
+    @given(start_action_messages, message_dicts, message_dicts, integers(min_value=2))
     def test_duplicate_task_level(self, start_message, child1, child2, index):
         """
         If we try to add a child to an action that has a task level that's the
@@ -1411,17 +1296,18 @@ class WrittenActionTests(testtools.TestCase):
         messages = [
             written_from_pmap(
                 union(
-                    child_message, {
+                    child_message,
+                    {
                         TASK_UUID_FIELD: start_message.task_uuid,
                         TASK_LEVEL_FIELD: parent_level.append(index),
-                    }
+                    },
                 )
-            ) for child_message in [child1, child2]
+            )
+            for child_message in [child1, child2]
         ]
         assume(messages[0] != messages[1])
         self.assertRaises(
-            DuplicateChild, WrittenAction.from_messages, start_message,
-            messages
+            DuplicateChild, WrittenAction.from_messages, start_message, messages
         )
 
 
@@ -1449,9 +1335,7 @@ def make_error_extraction_tests(get_messages):
             class MyException(Exception):
                 pass
 
-            register_exception_extractor(
-                MyException, lambda e: {"key": e.args[0]}
-            )
+            register_exception_extractor(MyException, lambda e: {"key": e.args[0]})
             exception = MyException("a value")
             [message] = get_messages(exception)
             assertContainsFields(self, message, {"key": "a value"})
@@ -1469,9 +1353,7 @@ def make_error_extraction_tests(get_messages):
             class SubException(MyException):
                 pass
 
-            register_exception_extractor(
-                MyException, lambda e: {"key": e.args[0]}
-            )
+            register_exception_extractor(MyException, lambda e: {"key": e.args[0]})
             [message] = get_messages(SubException("the value"))
             assertContainsFields(self, message, {"key": "the value"})
 
@@ -1490,12 +1372,8 @@ def make_error_extraction_tests(get_messages):
             class SubSubException(SubException):
                 pass
 
-            register_exception_extractor(
-                MyException, lambda e: {"parent": e.args[0]}
-            )
-            register_exception_extractor(
-                SubException, lambda e: {"child": e.args[0]}
-            )
+            register_exception_extractor(MyException, lambda e: {"parent": e.args[0]})
+            register_exception_extractor(SubException, lambda e: {"child": e.args[0]})
             [message] = get_messages(SubSubException("the value"))
             assertContainsFields(self, message, {"child": "the value"})
 
@@ -1515,14 +1393,9 @@ def make_error_extraction_tests(get_messages):
 
             messages = get_failed_action_messages(MyException())
             assertContainsFields(
-                self, messages[1], {
-                    "action_type": "sys:me",
-                    "action_status": "failed"
-                }
-            )
-            assertContainsFields(
-                self, messages[0], {"message_type": "eliot:traceback"}
+                self, messages[1], {"action_type": "sys:me", "action_status": "failed"}
             )
+            assertContainsFields(self, messages[0], {"message_type": "eliot:traceback"})
             self.assertIn("nosuchattribute", str(messages[0]["reason"]))
 
         def test_environmenterror(self):
@@ -1575,13 +1448,15 @@ class FailedActionExtractionTests(
         exception = MyException("because")
         messages = get_failed_action_messages(exception)
         assertContainsFields(
-            self, messages[0], {
+            self,
+            messages[0],
+            {
                 "task_level": [2],
                 "action_type": "sys:me",
                 "action_status": "failed",
                 "reason": "because",
-                "exception": "eliot.tests.test_action.MyException"
-            }
+                "exception": "eliot.tests.test_action.MyException",
+            },
         )
 
 
@@ -1630,9 +1505,11 @@ class PreserveContextTests(TestCase):
         root = tree.root()
         self.assertEqual(
             (
-                root.action_type, root.children[0].action_type,
-                root.children[0].children[0].contents[MESSAGE_TYPE_FIELD]
-            ), ("parent", "eliot:remote_task", "child")
+                root.action_type,
+                root.children[0].action_type,
+                root.children[0].children[0].contents[MESSAGE_TYPE_FIELD],
+            ),
+            ("parent", "eliot:remote_task", "child"),
         )
 
     def test_callable_only_once(self):
@@ -1674,20 +1551,21 @@ class LogCallTests(TestCase):
         C{@log_call} with no arguments logs return result, arguments, and has
         action type based on the action name.
         """
+
         @log_call
         def myfunc(x, y):
             return 4
 
         myfunc(2, 3)
-        self.assert_logged(logger, self.id() + ".<locals>.myfunc",
-                           {u"x": 2, u"y": 3}, 4)
+        self.assert_logged(logger, self.id() + ".<locals>.myfunc", {"x": 2, "y": 3}, 4)
 
     @capture_logging(None)
     def test_exception(self, logger):
         """C{@log_call} with an exception logs a failed action."""
+
         @log_call
         def myfunc(x, y):
-            1/0
+            1 / 0
 
         with self.assertRaises(ZeroDivisionError):
             myfunc(2, 4)
@@ -1700,62 +1578,68 @@ class LogCallTests(TestCase):
     @capture_logging(None)
     def test_action_type(self, logger):
         """C{@log_call} can take an action type."""
+
         @log_call(action_type="myaction")
         def myfunc(x, y):
             return 4
 
         myfunc(2, 3)
-        self.assert_logged(logger, u"myaction", {u"x": 2, u"y": 3}, 4)
+        self.assert_logged(logger, "myaction", {"x": 2, "y": 3}, 4)
 
     @capture_logging(None)
     def test_default_argument_given(self, logger):
         """C{@log_call} logs default arguments that were passed in."""
+
         @log_call
         def myfunc(x, y=1):
             return 4
 
         myfunc(2, y=5)
-        self.assert_logged(logger, self.id() + ".<locals>.myfunc",
-                           {u"x": 2, u"y": 5}, 4)
+        self.assert_logged(logger, self.id() + ".<locals>.myfunc", {"x": 2, "y": 5}, 4)
 
     @capture_logging(None)
     def test_default_argument_missing(self, logger):
         """C{@log_call} logs default arguments that weren't passed in."""
+
         @log_call
         def myfunc(x, y=1):
             return 6
 
         myfunc(2)
-        self.assert_logged(logger, self.id() + ".<locals>.myfunc",
-                           {u"x": 2, u"y": 1}, 6)
+        self.assert_logged(logger, self.id() + ".<locals>.myfunc", {"x": 2, "y": 1}, 6)
 
     @capture_logging(None)
     def test_star_args_kwargs(self, logger):
         """C{@log_call} logs star args and kwargs."""
+
         @log_call
         def myfunc(x, *y, **z):
             return 6
 
         myfunc(2, 3, 4, a=1, b=2)
-        self.assert_logged(logger, self.id() + ".<locals>.myfunc",
-                           {u"x": 2, u"y": (3, 4), u"z": {u"a": 1, u"b": 2}},
-                           6)
+        self.assert_logged(
+            logger,
+            self.id() + ".<locals>.myfunc",
+            {"x": 2, "y": (3, 4), "z": {"a": 1, "b": 2}},
+            6,
+        )
 
     @capture_logging(None)
     def test_whitelist_args(self, logger):
         """C{@log_call} only includes whitelisted arguments."""
+
         @log_call(include_args=("x", "z"))
         def myfunc(x, y, z):
             return 6
 
         myfunc(2, 3, 4)
-        self.assert_logged(logger, self.id() + ".<locals>.myfunc",
-                           {u"x": 2, u"z": 4}, 6)
+        self.assert_logged(logger, self.id() + ".<locals>.myfunc", {"x": 2, "z": 4}, 6)
 
     @skipIf(six.PY2, "Didn't bother implementing safety check on Python 2")
     def test_wrong_whitelist_args(self):
         """If C{include_args} doesn't match function, raise an exception."""
         with self.assertRaises(ValueError):
+
             @log_call(include_args=["a", "x", "y"])
             def f(x, y):
                 pass
@@ -1763,6 +1647,7 @@ class LogCallTests(TestCase):
     @capture_logging(None)
     def test_no_result(self, logger):
         """C{@log_call} can omit logging the result."""
+
         @log_call(include_result=False)
         def myfunc(x, y):
             return 6
@@ -1779,20 +1664,77 @@ class LogCallTests(TestCase):
 
         This is necessary for e.g. Dask usage.
         """
-        self.assertIs(
-            for_pickling,
-            pickle.loads(pickle.dumps(for_pickling))
-        )
+        self.assertIs(for_pickling, pickle.loads(pickle.dumps(for_pickling)))
 
     @capture_logging(None)
     def test_methods(self, logger):
         """self is not logged."""
+
         class C(object):
             @log_call
             def f(self, x):
                 pass
 
         C().f(2)
-        self.assert_logged(logger, self.id() + u".<locals>.C.f", {u"x": 2}, None)
+        self.assert_logged(logger, self.id() + ".<locals>.C.f", {"x": 2}, None)
+
 
+class IndividualMessageLogTests(TestCase):
+    """Action.log() tests."""
 
+    def test_log_creates_new_dictionary(self):
+        """
+        L{Action.log} creates a new dictionary on each call.
+
+        This is important because we might mutate the dictionary in
+        ``Logger.write``.
+        """
+        messages = []
+        add_destination(messages.append)
+        self.addCleanup(remove_destination, messages.append)
+
+        with start_action(action_type="x") as action:
+            action.log("mymessage", key=4)
+            action.log(message_type="mymessage2", key=5)
+        self.assertEqual(messages[1]["key"], 4)
+        self.assertEqual(messages[2]["key"], 5)
+        self.assertEqual(messages[1]["message_type"], "mymessage")
+        self.assertEqual(messages[2]["message_type"], "mymessage2")
+
+    @patch("time.time")
+    def test_log_adds_timestamp(self, time_func):
+        """
+        L{Action.log} adds a C{"timestamp"} field to the dictionary written
+        to the logger, with the current time in seconds since the epoch.
+        """
+        messages = []
+        add_destination(messages.append)
+        self.addCleanup(remove_destination, messages.append)
+
+        time_func.return_value = timestamp = 1387299889.153187625
+        with start_action(action_type="x") as action:
+            action.log("mymessage", key=4)
+        self.assertEqual(messages[1]["timestamp"], timestamp)
+
+    def test_part_of_action(self):
+        """
+        L{Action.log} adds the identification fields from the given
+        L{Action} to the dictionary written to the logger.
+        """
+        messages = []
+        add_destination(messages.append)
+        self.addCleanup(remove_destination, messages.append)
+
+        action = Action(None, "unique", TaskLevel(level=[37, 4]), "sys:thename")
+        action.log("me", key=2)
+        written = messages[0]
+        del written["timestamp"]
+        self.assertEqual(
+            written,
+            {
+                "task_uuid": "unique",
+                "task_level": [37, 4, 1],
+                "key": 2,
+                "message_type": "me",
+            },
+        )
diff --git a/eliot/tests/test_api.py b/eliot/tests/test_api.py
index 5d35bed..e99798b 100644
--- a/eliot/tests/test_api.py
+++ b/eliot/tests/test_api.py
@@ -37,8 +37,7 @@ class PublicAPITests(TestCase):
         L{eliot.addGlobalFields} calls the corresponding method on the
         L{Destinations} attached to L{Logger}.
         """
-        self.assertEqual(
-            eliot.addGlobalFields, Logger._destinations.addGlobalFields)
+        self.assertEqual(eliot.addGlobalFields, Logger._destinations.addGlobalFields)
 
 
 class PEP8Tests(TestCase):
diff --git a/eliot/tests/test_coroutines.py b/eliot/tests/test_coroutines.py
index 6ef7828..39c5496 100644
--- a/eliot/tests/test_coroutines.py
+++ b/eliot/tests/test_coroutines.py
@@ -1,10 +1,105 @@
 """
-Tests for coroutines, for Python versions that support them.
+Tests for coroutines.
+
+These tests require Python 3.5 or later; on earlier versions of Python the
+``async def`` syntax used here is a syntax error.
 """
 
-import sys
-if sys.version_info[:2] >= (3, 5):
-    from .corotests import CoroutineTests, ContextTests
+import asyncio
+from unittest import TestCase
+
+from ..testing import capture_logging
+from ..parse import Parser
+from .. import start_action
+
+
+async def standalone_coro():
+    """
+    Log a message inside a new coroutine.
+    """
+    await asyncio.sleep(0.1)
+    with start_action(action_type="standalone"):
+        pass
+
+
+async def calling_coro():
+    """
+    Log an action inside a coroutine, and call another coroutine.
+    """
+    with start_action(action_type="calling"):
+        await standalone_coro()
+
+
+def run_coroutines(*async_functions):
+    """
+    Run the given coroutines until they all finish.
+    """
+    loop = asyncio.get_event_loop()
+    futures = [asyncio.ensure_future(f()) for f in async_functions]
+
+    async def wait_for_futures():
+        for future in futures:
+            await future
+
+    loop.run_until_complete(wait_for_futures())
+
+
+class CoroutineTests(TestCase):
+    """
+    Tests for coroutines.
+    """
+
+    @capture_logging(None)
+    def test_multiple_coroutines_contexts(self, logger):
+        """
+        Each top-level coroutine has its own Eliot logging context.
+        """
+
+        async def waiting_coro():
+            with start_action(action_type="waiting"):
+                await asyncio.sleep(0.5)
+
+        run_coroutines(waiting_coro, standalone_coro)
+        trees = Parser.parse_stream(logger.messages)
+        self.assertEqual(
+            sorted([(t.root().action_type, t.root().children) for t in trees]),
+            [("standalone", []), ("waiting", [])],
+        )
+
+    @capture_logging(None)
+    def test_await_inherits_coroutine_contexts(self, logger):
+        """
+        Awaited coroutines inherit the caller's Eliot logging context.
+        """
+        run_coroutines(calling_coro)
+        [tree] = Parser.parse_stream(logger.messages)
+        root = tree.root()
+        [child] = root.children
+        self.assertEqual(
+            (root.action_type, child.action_type, child.children),
+            ("calling", "standalone", []),
+        )
+
+    @capture_logging(None)
+    def test_interleaved_coroutines(self, logger):
+        """
+        An action started in one coroutine doesn't affect the logging
+        context of a different coroutine.
+        """
+
+        async def coro_sleep(delay, action_type):
+            with start_action(action_type=action_type):
+                await asyncio.sleep(delay)
 
+        async def main():
+            with start_action(action_type="main"):
+                f1 = asyncio.ensure_future(coro_sleep(1, "a"))
+                f2 = asyncio.ensure_future(coro_sleep(0.5, "b"))
+                await f1
+                await f2
 
-__all__ = ["CoroutineTests", "ContextTests"]
+        run_coroutines(main)
+        [tree] = list(Parser.parse_stream(logger.messages))
+        root = tree.root()
+        self.assertEqual(root.action_type, "main")
+        self.assertEqual(sorted([c.action_type for c in root.children]), ["a", "b"])
diff --git a/eliot/tests/test_dask.py b/eliot/tests/test_dask.py
index 0ece7a5..f652604 100644
--- a/eliot/tests/test_dask.py
+++ b/eliot/tests/test_dask.py
@@ -3,14 +3,23 @@
 from unittest import TestCase, skipUnless
 
 from ..testing import capture_logging, LoggedAction, LoggedMessage
-from .. import start_action, Message
+from .. import start_action, log_message
+
 try:
     import dask
     from dask.bag import from_sequence
+    from dask.distributed import Client
+    import dask.dataframe as dd
+    import pandas as pd
 except ImportError:
     dask = None
 else:
-    from ..dask import compute_with_trace, _RunWithEliotContext, _add_logging
+    from ..dask import (
+        compute_with_trace,
+        _RunWithEliotContext,
+        _add_logging,
+        persist_with_trace,
+    )
 
 
 @skipUnless(dask, "Dask not available.")
@@ -27,49 +36,125 @@ class DaskTests(TestCase):
         bag = bag.fold(lambda x, y: x + y)
         self.assertEqual(dask.compute(bag), compute_with_trace(bag))
 
+    def test_future(self):
+        """compute_with_trace() can handle Futures."""
+        client = Client(processes=False)
+        self.addCleanup(client.shutdown)
+        [bag] = dask.persist(from_sequence([1, 2, 3]))
+        bag = bag.map(lambda x: x * 5)
+        result = dask.compute(bag)
+        self.assertEqual(result, ([5, 10, 15],))
+        self.assertEqual(result, compute_with_trace(bag))
+
+    def test_persist_result(self):
+        """persist_with_trace() runs the same logic as persist()."""
+        client = Client(processes=False)
+        self.addCleanup(client.shutdown)
+        bag = from_sequence([1, 2, 3])
+        bag = bag.map(lambda x: x * 7)
+        self.assertEqual(
+            [b.compute() for b in dask.persist(bag)],
+            [b.compute() for b in persist_with_trace(bag)],
+        )
+
+    def test_persist_pandas(self):
+        """persist_with_trace() with a Pandas dataframe.
+
+        This ensures we don't blow up, which used to happen.
+        """
+        df = pd.DataFrame()
+        df = dd.from_pandas(df, npartitions=1)
+        persist_with_trace(df)
+
     @capture_logging(None)
-    def test_logging(self, logger):
+    def test_persist_logging(self, logger):
+        """persist_with_trace() preserves Eliot context."""
+
+        def persister(bag):
+            [bag] = persist_with_trace(bag)
+            return dask.compute(bag)
+
+        self.assert_logging(logger, persister, "dask:persist")
+
+    @capture_logging(None)
+    def test_compute_logging(self, logger):
         """compute_with_trace() preserves Eliot context."""
+        self.assert_logging(logger, compute_with_trace, "dask:compute")
+
+    def assert_logging(self, logger, run_with_trace, top_action_name):
+        """Utility function for _with_trace() logging tests."""
+
         def mult(x):
-            Message.log(message_type="mult")
+            log_message(message_type="mult")
             return x * 4
 
         def summer(x, y):
-            Message.log(message_type="finally")
+            log_message(message_type="finally")
             return x + y
 
         bag = from_sequence([1, 2])
         bag = bag.map(mult).fold(summer)
         with start_action(action_type="act1"):
-            compute_with_trace(bag)
+            run_with_trace(bag)
 
         [logged_action] = LoggedAction.ofType(logger.messages, "act1")
         self.assertEqual(
             logged_action.type_tree(),
-            {'act1': [{'dask:compute':
-                       [{'eliot:remote_task': ['dask:task', 'mult']},
-                        {'eliot:remote_task': ['dask:task', 'mult']},
-                        {'eliot:remote_task': ['dask:task', 'finally']}]}]}
+            {
+                "act1": [
+                    {
+                        top_action_name: [
+                            {"eliot:remote_task": ["dask:task", "mult"]},
+                            {"eliot:remote_task": ["dask:task", "mult"]},
+                            {"eliot:remote_task": ["dask:task"]},
+                            {"eliot:remote_task": ["dask:task"]},
+                            {"eliot:remote_task": ["dask:task", "finally"]},
+                        ]
+                    }
+                ]
+            },
         )
 
         # Make sure dependencies are tracked:
-        mult1_msg, mult2_msg, final_msg = LoggedMessage.ofType(
-            logger.messages, "dask:task")
-        self.assertEqual(sorted(final_msg.message["dependencies"]),
-                         sorted([mult1_msg.message["key"],
-                                 mult2_msg.message["key"]]))
+        (
+            mult1_msg,
+            mult2_msg,
+            reduce1_msg,
+            reduce2_msg,
+            final_msg,
+        ) = LoggedMessage.ofType(logger.messages, "dask:task")
+        self.assertEqual(
+            reduce1_msg.message["dependencies"], [mult1_msg.message["key"]]
+        )
+        self.assertEqual(
+            reduce2_msg.message["dependencies"], [mult2_msg.message["key"]]
+        )
+        self.assertEqual(
+            sorted(final_msg.message["dependencies"]),
+            sorted([reduce1_msg.message["key"], reduce2_msg.message["key"]]),
+        )
 
         # Make sure dependencies are logically earlier in the logs:
         self.assertTrue(
-            mult1_msg.message["task_level"] < final_msg.message["task_level"])
+            mult1_msg.message["task_level"] < reduce1_msg.message["task_level"]
+        )
         self.assertTrue(
-            mult2_msg.message["task_level"] < final_msg.message["task_level"])
+            mult2_msg.message["task_level"] < reduce2_msg.message["task_level"]
+        )
+        self.assertTrue(
+            reduce1_msg.message["task_level"] < final_msg.message["task_level"]
+        )
+        self.assertTrue(
+            reduce2_msg.message["task_level"] < final_msg.message["task_level"]
+        )
 
 
 @skipUnless(dask, "Dask not available.")
 class AddLoggingTests(TestCase):
     """Tests for _add_logging()."""
 
+    maxDiff = None
+
     def test_add_logging_to_full_graph(self):
         """_add_logging() recreates Dask graph with wrappers."""
         bag = from_sequence([1, 2, 3])
@@ -91,3 +176,52 @@ class AddLoggingTests(TestCase):
             logging_removed[key] = value
 
         self.assertEqual(logging_removed, graph)
+
+    def test_add_logging_explicit(self):
+        """_add_logging() handles additional edge cases in the graph structure."""
+
+        def add(s):
+            return s + "s"
+
+        def add2(s):
+            return s + "s"
+
+        # b runs first, then d, then a and c.
+        graph = {
+            "a": "d",
+            "d": [1, 2, (add, "b")],
+            ("b", 0): 1,
+            "c": (add2, "d"),
+        }
+
+        with start_action(action_type="bleh") as action:
+            task_id = action.task_uuid
+            self.assertEqual(
+                _add_logging(graph),
+                {
+                    "d": [
+                        1,
+                        2,
+                        (
+                            _RunWithEliotContext(
+                                task_id=task_id + "@/2",
+                                func=add,
+                                key="d",
+                                dependencies=["b"],
+                            ),
+                            "b",
+                        ),
+                    ],
+                    "a": "d",
+                    ("b", 0): 1,
+                    "c": (
+                        _RunWithEliotContext(
+                            task_id=task_id + "@/3",
+                            func=add2,
+                            key="c",
+                            dependencies=["d"],
+                        ),
+                        "d",
+                    ),
+                },
+            )
diff --git a/eliot/tests/test_filter.py b/eliot/tests/test_filter.py
index 277b277..3874c9a 100644
--- a/eliot/tests/test_filter.py
+++ b/eliot/tests/test_filter.py
@@ -33,11 +33,11 @@ class EliotFilterTests(TestCase):
         efilter.run()
         self.assertEqual(
             f.getvalue(),
-            json.dumps({
-                "x": 4,
-                "orig": "abcd"}) + b"\n" + json.dumps({
-                    "x": 2,
-                    "orig": [1, 2]}) + b'\n')
+            json.dumps({"x": 4, "orig": "abcd"})
+            + b"\n"
+            + json.dumps({"x": 2, "orig": [1, 2]})
+            + b"\n",
+        )
 
     def evaluateExpression(self, expr, message):
         """
@@ -61,7 +61,8 @@ class EliotFilterTests(TestCase):
         built-ins.
         """
         result = self.evaluateExpression(
-            "isinstance(datetime.utcnow() - datetime.utcnow(), timedelta)", {})
+            "isinstance(datetime.utcnow() - datetime.utcnow(), timedelta)", {}
+        )
         self.assertEqual(result, True)
 
     def test_datetimeSerialization(self):
@@ -92,7 +93,7 @@ class MainTests(TestCase):
         """
         By default L{main} uses information from L{sys}.
         """
-        self.assertEqual(inspect.getargspec(main).defaults, (sys, ))
+        self.assertEqual(inspect.getargspec(main).defaults, (sys,))
 
     def test_stdinOut(self):
         """
diff --git a/eliot/tests/test_generators.py b/eliot/tests/test_generators.py
new file mode 100644
index 0000000..a92cff3
--- /dev/null
+++ b/eliot/tests/test_generators.py
@@ -0,0 +1,294 @@
+"""
+Tests for L{eliot._generators}.
+"""
+
+from __future__ import unicode_literals, absolute_import
+
+from pprint import pformat
+from unittest import TestCase
+
+from eliot import Message, start_action
+from ..testing import capture_logging, assertHasAction
+
+from .._generators import eliot_friendly_generator_function
+
+
+def assert_expected_action_tree(
+    testcase, logger, expected_action_type, expected_type_tree
+):
+    """
+    Assert that a logger has a certain logged action with certain children.
+
+    @see: L{assert_generator_logs_action_tree}
+    """
+    logged_action = assertHasAction(testcase, logger, expected_action_type, True)
+    type_tree = logged_action.type_tree()
+    testcase.assertEqual(
+        {expected_action_type: expected_type_tree},
+        type_tree,
+        "Logger had messages:\n{}".format(pformat(logger.messages, indent=4)),
+    )
+
+
+def assert_generator_logs_action_tree(
+    testcase, generator_function, logger, expected_action_type, expected_type_tree
+):
+    """
+    Assert that exhausting a generator from the given function logs an action
+    of the given type with children matching the given type tree.
+
+    @param testcase: A test case instance to use to make assertions.
+    @type testcase: L{unittest.TestCase}
+
+    @param generator_function: A no-argument callable that returns a generator
+        to be exhausted.
+
+    @param logger: A logger to inspect for logged messages.
+    @type logger: L{MemoryLogger}
+
+    @param expected_action_type: An action type which should be logged by the
+        generator.
+    @type expected_action_type: L{unicode}
+
+    @param expected_type_tree: The types of actions and messages which should
+        be logged beneath the expected action.  The structure of this value
+        matches the structure returned by L{LoggedAction.type_tree}.
+    @type expected_type_tree: L{list}
+    """
+    list(eliot_friendly_generator_function(generator_function)())
+    assert_expected_action_tree(
+        testcase, logger, expected_action_type, expected_type_tree
+    )
+
+
+class EliotFriendlyGeneratorFunctionTests(TestCase):
+    """
+    Tests for L{eliot_friendly_generator_function}.
+    """
+
+    # Get our custom assertion failure messages *and* the standard ones.
+    longMessage = True
+
+    @capture_logging(None)
+    def test_yield_none(self, logger):
+        @eliot_friendly_generator_function
+        def g():
+            Message.log(message_type="hello")
+            yield
+            Message.log(message_type="goodbye")
+
+        g.debug = True  # output yielded messages
+
+        with start_action(action_type="the-action"):
+            list(g())
+
+        assert_expected_action_tree(
+            self, logger, "the-action", ["hello", "yielded", "goodbye"]
+        )
+
+    @capture_logging(None)
+    def test_yield_value(self, logger):
+        expected = object()
+
+        @eliot_friendly_generator_function
+        def g():
+            Message.log(message_type="hello")
+            yield expected
+            Message.log(message_type="goodbye")
+
+        g.debug = True  # output yielded messages
+
+        with start_action(action_type="the-action"):
+            self.assertEqual([expected], list(g()))
+
+        assert_expected_action_tree(
+            self, logger, "the-action", ["hello", "yielded", "goodbye"]
+        )
+
+    @capture_logging(None)
+    def test_yield_inside_another_action(self, logger):
+        @eliot_friendly_generator_function
+        def g():
+            Message.log(message_type="a")
+            with start_action(action_type="confounding-factor"):
+                Message.log(message_type="b")
+                yield None
+                Message.log(message_type="c")
+            Message.log(message_type="d")
+
+        g.debug = True  # output yielded messages
+
+        with start_action(action_type="the-action"):
+            list(g())
+
+        assert_expected_action_tree(
+            self,
+            logger,
+            "the-action",
+            ["a", {"confounding-factor": ["b", "yielded", "c"]}, "d"],
+        )
+
+    @capture_logging(None)
+    def test_yield_inside_nested_actions(self, logger):
+        @eliot_friendly_generator_function
+        def g():
+            Message.log(message_type="a")
+            with start_action(action_type="confounding-factor"):
+                Message.log(message_type="b")
+                yield None
+                with start_action(action_type="double-confounding-factor"):
+                    yield None
+                    Message.log(message_type="c")
+                Message.log(message_type="d")
+            Message.log(message_type="e")
+
+        g.debug = True  # output yielded messages
+
+        with start_action(action_type="the-action"):
+            list(g())
+
+        assert_expected_action_tree(
+            self,
+            logger,
+            "the-action",
+            [
+                "a",
+                {
+                    "confounding-factor": [
+                        "b",
+                        "yielded",
+                        {"double-confounding-factor": ["yielded", "c"]},
+                        "d",
+                    ]
+                },
+                "e",
+            ],
+        )
+
+    @capture_logging(None)
+    def test_generator_and_non_generator(self, logger):
+        @eliot_friendly_generator_function
+        def g():
+            Message.log(message_type="a")
+            yield
+            with start_action(action_type="action-a"):
+                Message.log(message_type="b")
+                yield
+                Message.log(message_type="c")
+
+            Message.log(message_type="d")
+            yield
+
+        g.debug = True  # output yielded messages
+
+        with start_action(action_type="the-action"):
+            generator = g()
+            next(generator)
+            Message.log(message_type="0")
+            next(generator)
+            Message.log(message_type="1")
+            next(generator)
+            Message.log(message_type="2")
+            self.assertRaises(StopIteration, lambda: next(generator))
+
+        assert_expected_action_tree(
+            self,
+            logger,
+            "the-action",
+            [
+                "a",
+                "yielded",
+                "0",
+                {"action-a": ["b", "yielded", "c"]},
+                "1",
+                "d",
+                "yielded",
+                "2",
+            ],
+        )
+
+    @capture_logging(None)
+    def test_concurrent_generators(self, logger):
+        @eliot_friendly_generator_function
+        def g(which):
+            Message.log(message_type="{}-a".format(which))
+            with start_action(action_type=which):
+                Message.log(message_type="{}-b".format(which))
+                yield
+                Message.log(message_type="{}-c".format(which))
+            Message.log(message_type="{}-d".format(which))
+
+        g.debug = True  # output yielded messages
+
+        gens = [g("1"), g("2")]
+        with start_action(action_type="the-action"):
+            while gens:
+                for g in gens[:]:
+                    try:
+                        next(g)
+                    except StopIteration:
+                        gens.remove(g)
+
+        assert_expected_action_tree(
+            self,
+            logger,
+            "the-action",
+            [
+                "1-a",
+                {"1": ["1-b", "yielded", "1-c"]},
+                "2-a",
+                {"2": ["2-b", "yielded", "2-c"]},
+                "1-d",
+                "2-d",
+            ],
+        )
+
+    @capture_logging(None)
+    def test_close_generator(self, logger):
+        @eliot_friendly_generator_function
+        def g():
+            Message.log(message_type="a")
+            try:
+                yield
+                Message.log(message_type="b")
+            finally:
+                Message.log(message_type="c")
+
+        g.debug = True  # output yielded messages
+
+        with start_action(action_type="the-action"):
+            gen = g()
+            next(gen)
+            gen.close()
+
+        assert_expected_action_tree(self, logger, "the-action", ["a", "yielded", "c"])
+
+    @capture_logging(None)
+    def test_nested_generators(self, logger):
+        @eliot_friendly_generator_function
+        def g(recurse):
+            with start_action(action_type="a-recurse={}".format(recurse)):
+                Message.log(message_type="m-recurse={}".format(recurse))
+                if recurse:
+                    set(g(False))
+                else:
+                    yield
+
+        g.debug = True  # output yielded messages
+
+        with start_action(action_type="the-action"):
+            set(g(True))
+
+        assert_expected_action_tree(
+            self,
+            logger,
+            "the-action",
+            [
+                {
+                    "a-recurse=True": [
+                        "m-recurse=True",
+                        {"a-recurse=False": ["m-recurse=False", "yielded"]},
+                    ]
+                }
+            ],
+        )
diff --git a/eliot/tests/test_journald.py b/eliot/tests/test_journald.py
index 2ccc268..b474300 100644
--- a/eliot/tests/test_journald.py
+++ b/eliot/tests/test_journald.py
@@ -15,6 +15,7 @@ from .._bytesjson import loads
 from .._output import MemoryLogger
 from .._message import TASK_UUID_FIELD
 from .. import start_action, Message, write_traceback
+
 try:
     from ..journald import sd_journal_send, JournaldDestination
 except ImportError:
@@ -45,9 +46,16 @@ def last_journald_message():
     marker = unicode(uuid4())
     sd_journal_send(MESSAGE=marker.encode("ascii"))
     for i in range(500):
-        messages = check_output([
-            b"journalctl", b"-a", b"-o", b"json", b"-n2",
-            b"_PID=" + str(getpid()).encode("ascii")])
+        messages = check_output(
+            [
+                b"journalctl",
+                b"-a",
+                b"-o",
+                b"json",
+                b"-n2",
+                b"_PID=" + str(getpid()).encode("ascii"),
+            ]
+        )
         messages = [loads(m) for m in messages.splitlines()]
         if len(messages) == 2 and messages[1]["MESSAGE"] == marker:
             return messages[0]
@@ -61,8 +69,8 @@ class SdJournaldSendTests(TestCase):
     """
 
     @skipUnless(
-        _journald_available(),
-        "journald unavailable or inactive on this machine.")
+        _journald_available(), "journald unavailable or inactive on this machine."
+    )
     def setUp(self):
         pass
 
@@ -103,9 +111,10 @@ class SdJournaldSendTests(TestCase):
         """
         sd_journal_send(MESSAGE=b"hello", BONUS_FIELD=b"world")
         result = last_journald_message()
-        self.assertEqual((b"hello", b"world"), (
-            result["MESSAGE"].encode("ascii"),
-            result["BONUS_FIELD"].encode("ascii")))
+        self.assertEqual(
+            (b"hello", b"world"),
+            (result["MESSAGE"].encode("ascii"), result["BONUS_FIELD"].encode("ascii")),
+        )
 
     def test_error(self):
         """
@@ -124,8 +133,8 @@ class JournaldDestinationTests(TestCase):
     """
 
     @skipUnless(
-        _journald_available(),
-        "journald unavailable or inactive on this machine.")
+        _journald_available(), "journald unavailable or inactive on this machine."
+    )
     def setUp(self):
         self.destination = JournaldDestination()
         self.logger = MemoryLogger()
@@ -157,8 +166,7 @@ class JournaldDestinationTests(TestCase):
         """
         action_type = "test:type"
         start_action(self.logger, action_type=action_type)
-        self.assert_field_for(
-            self.logger.messages[0], "ELIOT_TYPE", action_type)
+        self.assert_field_for(self.logger.messages[0], "ELIOT_TYPE", action_type)
 
     def test_message_type(self):
         """
@@ -166,8 +174,7 @@ class JournaldDestinationTests(TestCase):
         """
         message_type = "test:type:message"
         Message.new(message_type=message_type).write(self.logger)
-        self.assert_field_for(
-            self.logger.messages[0], "ELIOT_TYPE", message_type)
+        self.assert_field_for(self.logger.messages[0], "ELIOT_TYPE", message_type)
 
     def test_no_type(self):
         """
@@ -182,8 +189,10 @@ class JournaldDestinationTests(TestCase):
         """
         start_action(self.logger, action_type="xxx")
         self.assert_field_for(
-            self.logger.messages[0], "ELIOT_TASK",
-            self.logger.messages[0][TASK_UUID_FIELD])
+            self.logger.messages[0],
+            "ELIOT_TASK",
+            self.logger.messages[0][TASK_UUID_FIELD],
+        )
 
     def test_info_priorities(self):
         """
@@ -197,7 +206,7 @@ class JournaldDestinationTests(TestCase):
         for message in self.logger.messages:
             self.destination(message)
             priorities.append(last_journald_message()["PRIORITY"])
-        self.assertEqual(priorities, [u"6", u"6", u"6", u"6"])
+        self.assertEqual(priorities, ["6", "6", "6", "6"])
 
     def test_error_priority(self):
         """
@@ -208,7 +217,7 @@ class JournaldDestinationTests(TestCase):
                 raise ZeroDivisionError()
         except ZeroDivisionError:
             pass
-        self.assert_field_for(self.logger.messages[-1], "PRIORITY", u"3")
+        self.assert_field_for(self.logger.messages[-1], "PRIORITY", "3")
 
     def test_critical_priority(self):
         """
@@ -218,7 +227,7 @@ class JournaldDestinationTests(TestCase):
             raise ZeroDivisionError()
         except ZeroDivisionError:
             write_traceback(logger=self.logger)
-        self.assert_field_for(self.logger.serialize()[-1], "PRIORITY", u"2")
+        self.assert_field_for(self.logger.serialize()[-1], "PRIORITY", "2")
 
     def test_identifier(self):
         """
@@ -232,6 +241,7 @@ class JournaldDestinationTests(TestCase):
             self.destination = JournaldDestination()
             Message.new(message_type="msg").write(self.logger)
             self.assert_field_for(
-                self.logger.messages[0], "SYSLOG_IDENTIFIER", u"testing123")
+                self.logger.messages[0], "SYSLOG_IDENTIFIER", "testing123"
+            )
         finally:
             argv[0] = original
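The three priority tests above pin down a mapping from message shape to journald `PRIORITY`. Here is a standalone sketch of that mapping, assuming tracebacks carry `message_type` `"eliot:traceback"` and failed action ends carry `action_status` `"failed"` (the field names are eliot conventions; the dispatch itself is inferred from the tests' assertions, not copied from `JournaldDestination`):

```python
def priority_for(message):
    """Hypothetical dispatch matching the PRIORITY values the tests assert:
    tracebacks are critical (2), failed action ends are errors (3), and
    everything else is informational (6). Not JournaldDestination's code."""
    if message.get("message_type") == "eliot:traceback":
        return "2"
    if message.get("action_status") == "failed":
        return "3"
    return "6"
```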
diff --git a/eliot/tests/test_json.py b/eliot/tests/test_json.py
index b436621..25e25ec 100644
--- a/eliot/tests/test_json.py
+++ b/eliot/tests/test_json.py
@@ -6,6 +6,7 @@ from __future__ import unicode_literals, absolute_import
 
 from unittest import TestCase, skipUnless, skipIf
 from json import loads, dumps
+from math import isnan
 
 try:
     import numpy as np
@@ -18,18 +19,44 @@ from eliot.json import EliotJSONEncoder
 class EliotJSONEncoderTests(TestCase):
     """Tests for L{EliotJSONEncoder}."""
 
+    def test_nan_inf(self):
+        """NaN, inf and -inf are round-tripped."""
+        l = [float("nan"), float("inf"), float("-inf")]
+        roundtripped = loads(dumps(l, cls=EliotJSONEncoder))
+        self.assertEqual(l[1:], roundtripped[1:])
+        self.assertTrue(isnan(roundtripped[0]))
+
     @skipUnless(np, "NumPy not installed.")
     def test_numpy(self):
         """NumPy objects get serialized to readable JSON."""
-        l = [np.float32(12.5), np.float64(2.0), np.float16(0.5),
-             np.bool(True), np.bool(False), np.bool_(True),
-             np.unicode_("hello"),
-             np.byte(12), np.short(12), np.intc(-13), np.int_(0),
-             np.longlong(100), np.intp(7),
-             np.ubyte(12), np.ushort(12), np.uintc(13),
-             np.ulonglong(100), np.uintp(7),
-             np.int8(1), np.int16(3), np.int32(4), np.int64(5),
-             np.uint8(1), np.uint16(3), np.uint32(4), np.uint64(5)]
+        l = [
+            np.float32(12.5),
+            np.float64(2.0),
+            np.float16(0.5),
+            np.bool(True),
+            np.bool(False),
+            np.bool_(True),
+            np.unicode_("hello"),
+            np.byte(12),
+            np.short(12),
+            np.intc(-13),
+            np.int_(0),
+            np.longlong(100),
+            np.intp(7),
+            np.ubyte(12),
+            np.ushort(12),
+            np.uintc(13),
+            np.ulonglong(100),
+            np.uintp(7),
+            np.int8(1),
+            np.int16(3),
+            np.int32(4),
+            np.int64(5),
+            np.uint8(1),
+            np.uint16(3),
+            np.uint32(4),
+            np.uint64(5),
+        ]
         l2 = [l, np.array([1, 2, 3])]
         roundtripped = loads(dumps(l2, cls=EliotJSONEncoder))
         self.assertEqual([l, [1, 2, 3]], roundtripped)
@@ -43,3 +70,20 @@ class EliotJSONEncoderTests(TestCase):
         with self.assertRaises(TypeError):
             dumps([object()], cls=EliotJSONEncoder)
         self.assertEqual(dumps(12, cls=EliotJSONEncoder), "12")
+
+    @skipUnless(np, "NumPy is not installed.")
+    def test_large_numpy_array(self):
+        """
+        Large NumPy arrays are not serialized completely, since doing so is
+        (A) a performance hit and (B) probably a mistake on the user's part.
+        """
+        a1000 = np.array([0] * 10000)
+        self.assertEqual(loads(dumps(a1000, cls=EliotJSONEncoder)), a1000.tolist())
+        a1002 = np.zeros((2, 5001))
+        a1002[0][0] = 12
+        a1002[0][1] = 13
+        a1002[1][1] = 500
+        self.assertEqual(
+            loads(dumps(a1002, cls=EliotJSONEncoder)),
+            {"array_start": a1002.flat[:10000].tolist(), "original_shape": [2, 5001]},
+        )
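The expected value in `test_large_numpy_array` implies the encoder's truncation rule. A standalone sketch of that rule follows (an assumption reconstructed from the test's expectations, not eliot's actual `EliotJSONEncoder`; requires NumPy): arrays over 10000 items keep only their first 10000 flattened entries, with the original shape recorded alongside.

```python
import json

import numpy as np


class TruncatingEncoder(json.JSONEncoder):
    """Sketch of the truncation behavior the test above expects: arrays
    larger than 10000 items are cut down to their first 10000 flattened
    entries, and the original shape is preserved for reference."""

    def default(self, o):
        if isinstance(o, np.ndarray):
            if o.size > 10000:
                return {
                    "array_start": o.flat[:10000].tolist(),
                    "original_shape": list(o.shape),
                }
            # Small arrays are serialized in full.
            return o.tolist()
        return json.JSONEncoder.default(self, o)


small = json.loads(json.dumps(np.array([1, 2, 3]), cls=TruncatingEncoder))
big = json.loads(json.dumps(np.zeros((2, 5001)), cls=TruncatingEncoder))
```

This mirrors the test's invariant: `small` round-trips to a plain list, while `big` (10002 items) comes back as a dict with an `"array_start"` of length 10000 and `"original_shape"` of `[2, 5001]`.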
diff --git a/eliot/tests/test_logwriter.py b/eliot/tests/test_logwriter.py
index 1958903..c57c47a 100644
--- a/eliot/tests/test_logwriter.py
+++ b/eliot/tests/test_logwriter.py
@@ -6,6 +6,7 @@ from __future__ import unicode_literals
 
 import time
 import threading
+
 # Make sure to use StringIO that only accepts unicode:
 from io import BytesIO, StringIO
 from unittest import skipIf
@@ -93,7 +94,7 @@ class ThreadedWriterTests(TestCase):
         """
         L{ThreadedWriter} has a name.
         """
-        self.assertEqual(ThreadedWriter.name, u"Eliot Log Writer")
+        self.assertEqual(ThreadedWriter.name, "Eliot Log Writer")
 
     def test_startServiceRunning(self):
         """
@@ -150,13 +151,15 @@ class ThreadedWriterTests(TestCase):
         writer.startService()
         start = time.time()
         while set(threading.enumerate()) == previousThreads and (
-            time.time() - start < 5):
+            time.time() - start < 5
+        ):
             time.sleep(0.0001)
         # If not true the next assertion might pass by mistake:
         self.assertNotEqual(set(threading.enumerate()), previousThreads)
         writer.stopService()
         while set(threading.enumerate()) != previousThreads and (
-            time.time() - start < 5):
+            time.time() - start < 5
+        ):
             time.sleep(0.0001)
         self.assertEqual(set(threading.enumerate()), previousThreads)
 
@@ -170,7 +173,7 @@ class ThreadedWriterTests(TestCase):
         f.block()
         writer.startService()
         for i in range(100):
-            writer({u"write": 123})
+            writer({"write": 123})
         threads = threading.enumerate()
         writer.stopService()
         # Make sure writes didn't happen before the stopService, thus making the
@@ -216,8 +219,11 @@ class ThreadedWriterTests(TestCase):
         # thread or the I/O thread was never set. Either may happen depending on
         # how and whether the reactor has been started by the unittesting
         # framework.
-        d.addCallback(lambda _: self.assertIn(
-            threadable.ioThread, (None, threading.currentThread().ident)))
+        d.addCallback(
+            lambda _: self.assertIn(
+                threadable.ioThread, (None, threading.currentThread().ident)
+            )
+        )
         return d
 
     def test_startServiceRegistersDestination(self):
@@ -260,8 +266,7 @@ class ThreadedWriterTests(TestCase):
         msg = {"key": 123}
         writer(msg)
         d = writer.stopService()
-        d.addCallback(
-            lambda _: self.assertEqual(result, [(msg, thread_ident)]))
+        d.addCallback(lambda _: self.assertEqual(result, [(msg, thread_ident)]))
         return d
 
 
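The `ThreadedWriterTests` above exercise a writer whose I/O happens on a single background thread. The pattern can be sketched as a FIFO queue drained by one worker, with a sentinel flushing pending messages on shutdown (a minimal sketch of the pattern only, not eliot's actual `ThreadedWriter`, which is a Twisted service):

```python
import queue
import threading


class ThreadedWriterSketch:
    """Sketch of the tested pattern: messages go on a queue and a single
    background thread hands them to the destination, so callers never
    block on destination I/O."""

    _STOP = object()  # sentinel telling the writer thread to exit

    def __init__(self, destination):
        self._destination = destination
        self._queue = queue.Queue()
        self._thread = threading.Thread(target=self._writer)

    def startService(self):
        # Start the single I/O thread.
        self._thread.start()

    def stopService(self):
        # The queue is FIFO, so all earlier messages are written
        # before the sentinel is seen and the thread exits.
        self._queue.put(self._STOP)
        self._thread.join()

    def __call__(self, message):
        # Safe to call from any thread; never blocks on the destination.
        self._queue.put(message)

    def _writer(self):
        while True:
            msg = self._queue.get()
            if msg is self._STOP:
                return
            self._destination(msg)


written = []
writer = ThreadedWriterSketch(written.append)
writer.startService()
writer({"write": 123})
writer.stopService()  # flushes the queued message before stopping
```

This is why `test_startServiceRunning` can assert a new thread appears and `stopService` returns only after all queued writes are done.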
diff --git a/eliot/tests/test_message.py b/eliot/tests/test_message.py
index 6456fc0..69bb2ea 100644
--- a/eliot/tests/test_message.py
+++ b/eliot/tests/test_message.py
@@ -15,13 +15,13 @@ try:
 except ImportError:
     Failure = None
 
-from .._message import WrittenMessage, Message
+from .._message import WrittenMessage, Message, log_message
 from .._output import MemoryLogger
 from .._action import Action, start_action, TaskLevel
-from .. import add_destination, remove_destination
+from .. import add_destinations, remove_destination
 
 
-class MessageTests(TestCase):
+class DeprecatedMessageTests(TestCase):
     """
     Test for L{Message}.
     """
@@ -50,13 +50,7 @@ class MessageTests(TestCase):
         msg = Message.new(key="value", another=2)
         another = msg.bind(another=3, more=4)
         self.assertIsInstance(another, Message)
-        self.assertEqual(
-            another.contents(), {
-                "key": "value",
-                "another": 3,
-                "more": 4
-            }
-        )
+        self.assertEqual(another.contents(), {"key": "value", "another": 3, "more": 4})
 
     def test_bindPreservesOriginal(self):
         """
@@ -82,10 +76,10 @@ class MessageTests(TestCase):
         L{Message.write} writes to the default logger if none is given.
         """
         messages = []
-        add_destination(messages.append)
+        add_destinations(messages.append)
         self.addCleanup(remove_destination, messages.append)
         Message.new(some_key=1234).write()
-        self.assertEqual(messages[0][u"some_key"], 1234)
+        self.assertEqual(messages[0]["some_key"], 1234)
 
     def test_writeCreatesNewDictionary(self):
         """
@@ -114,10 +108,10 @@ class MessageTests(TestCase):
         dictionary that is superset of the L{Message} contents.
         """
         messages = []
-        add_destination(messages.append)
+        add_destinations(messages.append)
         self.addCleanup(remove_destination, messages.append)
         Message.log(some_key=1234)
-        self.assertEqual(messages[0][u"some_key"], 1234)
+        self.assertEqual(messages[0]["some_key"], 1234)
 
     def test_defaultTime(self):
         """
@@ -133,10 +127,8 @@ class MessageTests(TestCase):
         """
         logger = MemoryLogger()
         msg = Message.new(key=4)
-        timestamp = 1387299889.153187625
-        msg._time = lambda: timestamp
         msg.write(logger)
-        self.assertEqual(logger.messages[0]["timestamp"], timestamp)
+        self.assertTrue(time.time() - logger.messages[0]["timestamp"] < 0.1)
 
     def test_write_preserves_message_type(self):
         """
@@ -148,16 +140,6 @@ class MessageTests(TestCase):
         self.assertEqual(logger.messages[0]["message_type"], "isetit")
         self.assertNotIn("action_type", logger.messages[0])
 
-    def test_write_preserves_action_type(self):
-        """
-        L{Message.write} doesn't add a C{message_type} if an action type is set.
-        """
-        logger = MemoryLogger()
-        msg = Message.new(key=4, action_type="isetit")
-        msg.write(logger)
-        self.assertEqual(logger.messages[0]["action_type"], "isetit")
-        self.assertNotIn("message_type", logger.messages[0])
-
     def test_explicitAction(self):
         """
         L{Message.write} adds the identification fields from the given
@@ -171,12 +153,8 @@ class MessageTests(TestCase):
         written = logger.messages[0]
         del written["timestamp"]
         self.assertEqual(
-            written, {
-                "task_uuid": "unique",
-                "task_level": [1],
-                "key": 2,
-                "message_type": "",
-            }
+            written,
+            {"task_uuid": "unique", "task_level": [1], "key": 2, "message_type": ""},
         )
 
     def test_implicitAction(self):
@@ -185,22 +163,16 @@ class MessageTests(TestCase):
         fields from the current execution context's L{Action} to the
         dictionary written to the logger.
         """
-        action = Action(
-            MemoryLogger(), "unique", TaskLevel(level=[]), "sys:thename"
-        )
         logger = MemoryLogger()
+        action = Action(logger, "unique", TaskLevel(level=[]), "sys:thename")
         msg = Message.new(key=2)
         with action:
             msg.write(logger)
         written = logger.messages[0]
         del written["timestamp"]
         self.assertEqual(
-            written, {
-                "task_uuid": "unique",
-                "task_level": [1],
-                "key": 2,
-                "message_type": "",
-            }
+            written,
+            {"task_uuid": "unique", "task_level": [1], "key": 2, "message_type": ""},
         )
 
     def test_missingAction(self):
@@ -219,8 +191,10 @@ class MessageTests(TestCase):
         self.assertEqual(
             (
                 UUID(message1["task_uuid"]) != UUID(message2["task_uuid"]),
-                message1["task_level"], message2["task_level"]
-            ), (True, [1], [1])
+                message1["task_level"],
+                message2["task_level"],
+            ),
+            (True, [1], [1]),
         )
 
     def test_actionCounter(self):
@@ -236,8 +210,7 @@ class MessageTests(TestCase):
         # We expect 6 messages: start action, 4 standalone messages, finish
         # action:
         self.assertEqual(
-            [m["task_level"] for m in logger.messages],
-            [[1], [2], [3], [4], [5], [6]]
+            [m["task_level"] for m in logger.messages], [[1], [2], [3], [4], [5], [6]]
         )
 
     def test_writePassesSerializer(self):
@@ -286,16 +259,9 @@ class WrittenMessageTests(TestCase):
         to the log.
         """
         log_entry = pmap(
-            {
-                'timestamp': 1,
-                'task_uuid': 'unique',
-                'task_level': [1],
-                'foo': 'bar',
-            }
-        )
-        self.assertEqual(
-            WrittenMessage.from_dict(log_entry).as_dict(), log_entry
+            {"timestamp": 1, "task_uuid": "unique", "task_level": [1], "foo": "bar"}
         )
+        self.assertEqual(WrittenMessage.from_dict(log_entry).as_dict(), log_entry)
 
     def test_from_dict(self):
         """
@@ -303,15 +269,68 @@ class WrittenMessageTests(TestCase):
         deserialized from a log into a L{WrittenMessage} object.
         """
         log_entry = pmap(
-            {
-                'timestamp': 1,
-                'task_uuid': 'unique',
-                'task_level': [1],
-                'foo': 'bar',
-            }
+            {"timestamp": 1, "task_uuid": "unique", "task_level": [1], "foo": "bar"}
         )
         parsed = WrittenMessage.from_dict(log_entry)
         self.assertEqual(parsed.timestamp, 1)
-        self.assertEqual(parsed.task_uuid, 'unique')
+        self.assertEqual(parsed.task_uuid, "unique")
         self.assertEqual(parsed.task_level, TaskLevel(level=[1]))
-        self.assertEqual(parsed.contents, pmap({'foo': 'bar'}))
+        self.assertEqual(parsed.contents, pmap({"foo": "bar"}))
+
+
+class LogMessageTests(TestCase):
+    """Tests for L{log_message}."""
+
+    def test_writes_message(self):
+        """
+        L{log_message} writes to the default logger.
+        """
+        messages = []
+        add_destinations(messages.append)
+        self.addCleanup(remove_destination, messages.append)
+        log_message(message_type="hello", some_key=1234)
+        self.assertEqual(messages[0]["some_key"], 1234)
+        self.assertEqual(messages[0]["message_type"], "hello")
+        self.assertTrue(time.time() - messages[0]["timestamp"] < 0.1)
+
+    def test_implicitAction(self):
+        """
+        If no L{Action} is specified, L{log_message} adds the identification
+        fields from the current execution context's L{Action} to the
+        dictionary written to the logger.
+        """
+        logger = MemoryLogger()
+        action = Action(logger, "unique", TaskLevel(level=[]), "sys:thename")
+        with action:
+            log_message(key=2, message_type="a")
+        written = logger.messages[0]
+        del written["timestamp"]
+        self.assertEqual(
+            written,
+            {"task_uuid": "unique", "task_level": [1], "key": 2, "message_type": "a"},
+        )
+
+    def test_missingAction(self):
+        """
+        If no L{Action} is specified, and the current execution context has no
+        L{Action}, a new task_uuid is generated.
+
+        This ensures all messages have a unique identity, as specified by
+        task_uuid/task_level.
+        """
+        messages = []
+        add_destinations(messages.append)
+        self.addCleanup(remove_destination, messages.append)
+
+        log_message(key=2, message_type="")
+        log_message(key=3, message_type="")
+
+        message1, message2 = messages
+        self.assertEqual(
+            (
+                UUID(message1["task_uuid"]) != UUID(message2["task_uuid"]),
+                message1["task_level"],
+                message2["task_level"],
+            ),
+            (True, [1], [1]),
+        )
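The new `LogMessageTests` pin down the contract for standalone messages: a timestamp, a fresh `task_uuid` per message, and `task_level` `[1]`. That contract can be sketched as follows (a behavioral sketch inferred from the tests, not eliot's real `log_message`, which also integrates with the action context):

```python
import time
import uuid


def log_message_sketch(destinations, **fields):
    """Sketch of the behavior LogMessageTests assert: outside any action,
    each standalone message gets a timestamp, a freshly generated
    task_uuid, and task_level [1], then is sent to every destination."""
    message = dict(fields)
    message["timestamp"] = time.time()
    message["task_uuid"] = str(uuid.uuid4())
    message["task_level"] = [1]
    for destination in destinations:
        destination(message)


captured = []
log_message_sketch([captured.append], message_type="hello", some_key=1234)
log_message_sketch([captured.append], message_type="hello", some_key=5678)
# The two messages get distinct task_uuids but the same task_level [1],
# matching the (True, [1], [1]) assertion in test_missingAction.
```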
diff --git a/eliot/tests/test_output.py b/eliot/tests/test_output.py
index 568204d..6b608c6 100644
--- a/eliot/tests/test_output.py
+++ b/eliot/tests/test_output.py
@@ -2,18 +2,17 @@
 Tests for L{eliot._output}.
 """
 
-from __future__ import unicode_literals
-
 from sys import stdout
-from unittest import TestCase, skipIf, skipUnless
+from unittest import TestCase, skipUnless
+
 # Make sure to use StringIO that only accepts unicode:
 from io import BytesIO, StringIO
 import json as pyjson
 from tempfile import mktemp
 from time import time
 from uuid import UUID
+from threading import Thread
 
-from six import PY3, PY2
 try:
     import numpy as np
 except ImportError:
@@ -21,11 +20,19 @@ except ImportError:
 from zope.interface.verify import verifyClass
 
 from .._output import (
-    MemoryLogger, ILogger, Destinations, Logger, bytesjson as json, to_file,
-    FileDestination, _DestinationsSendError)
+    MemoryLogger,
+    ILogger,
+    Destinations,
+    Logger,
+    bytesjson as json,
+    to_file,
+    FileDestination,
+    _DestinationsSendError,
+)
 from .._validation import ValidationError, Field, _MessageSerializer
 from .._traceback import write_traceback
 from ..testing import assertContainsFields
+from .common import CustomObject, CustomJSONEncoder
 
 
 class MemoryLoggerTests(TestCase):
@@ -44,9 +51,9 @@ class MemoryLoggerTests(TestCase):
         Dictionaries written with L{MemoryLogger.write} are stored on a list.
         """
         logger = MemoryLogger()
-        logger.write({'a': 'b'})
-        logger.write({'c': 1})
-        self.assertEqual(logger.messages, [{'a': 'b'}, {'c': 1}])
+        logger.write({"a": "b"})
+        logger.write({"c": 1})
+        self.assertEqual(logger.messages, [{"a": "b"}, {"c": 1}])
         logger.validate()
 
     def test_notStringFieldKeys(self):
@@ -55,31 +62,21 @@ class MemoryLoggerTests(TestCase):
         raises a C{TypeError}.
         """
         logger = MemoryLogger()
-        logger.write({123: 'b'})
+        logger.write({123: "b"})
         self.assertRaises(TypeError, logger.validate)
 
-    @skipIf(
-        PY3, "Python 3 json module makes it impossible to use bytes as keys")
-    def test_bytesFieldKeys(self):
-        """
-        Field keys can be bytes containing utf-8 encoded Unicode.
-        """
-        logger = MemoryLogger()
-        logger.write({u'\u1234'.encode("utf-8"): 'b'})
-        logger.validate()
-
     def test_bytesMustBeUTF8(self):
         """
         Field keys can be bytes, but only if they're UTF-8 encoded Unicode.
         """
         logger = MemoryLogger()
-        logger.write({'\u1234'.encode("utf-16"): 'b'})
+        logger.write({"\u1234".encode("utf-16"): "b"})
         self.assertRaises(UnicodeDecodeError, logger.validate)
 
     def test_serializer(self):
         """
         L{MemoryLogger.validate} calls the given serializer's C{validate()}
-        method with the message.
+        method with the message, as does L{MemoryLogger.write}.
         """
 
         class FakeValidator(list):
@@ -93,17 +90,18 @@ class MemoryLoggerTests(TestCase):
         logger = MemoryLogger()
         message = {"message_type": "mymessage", "X": 1}
         logger.write(message, validator)
-        self.assertEqual(validator, [])
-        logger.validate()
         self.assertEqual(validator, [message])
+        logger.validate()
+        self.assertEqual(validator, [message, message])
 
     def test_failedValidation(self):
         """
         L{MemoryLogger.validate} will allow exceptions raised by the serializer
         to pass through.
         """
-        serializer = _MessageSerializer([
-            Field.forValue("message_type", "mymessage", u"The type")])
+        serializer = _MessageSerializer(
+            [Field.forValue("message_type", "mymessage", "The type")]
+        )
         logger = MemoryLogger()
         logger.write({"message_type": "wrongtype"}, serializer)
         self.assertRaises(ValidationError, logger.validate)
@@ -113,45 +111,75 @@ class MemoryLoggerTests(TestCase):
         L{MemoryLogger.validate} will encode the output of serialization to
         JSON.
         """
-        serializer = _MessageSerializer([
-            Field.forValue("message_type", "type", u"The type"),
-            Field("foo", lambda value: object(), u"The type")])
+        serializer = _MessageSerializer(
+            [
+                Field.forValue("message_type", "type", "The type"),
+                Field("foo", lambda value: object(), "The type"),
+            ]
+        )
         logger = MemoryLogger()
-        logger.write({
-            "message_type": "type",
-            "foo": "will become object()"}, serializer)
+        logger.write(
+            {"message_type": "type", "foo": "will become object()"}, serializer
+        )
         self.assertRaises(TypeError, logger.validate)
 
+    @skipUnless(np, "NumPy is not installed.")
+    def test_EliotJSONEncoder(self):
+        """
+        L{MemoryLogger.validate} uses L{EliotJSONEncoder} by default when
+        checking that messages can be encoded as JSON.
+        """
+        logger = MemoryLogger()
+        logger.write({"message_type": "type", "foo": np.uint64(12)}, None)
+        logger.validate()
+
+    def test_JSON_custom_encoder(self):
+        """
+        L{MemoryLogger.validate} will use a custom JSON encoder if one was given.
+        """
+        logger = MemoryLogger(encoder=CustomJSONEncoder)
+        logger.write(
+            {"message_type": "type", "custom": CustomObject()},
+            None,
+        )
+        logger.validate()
+
     def test_serialize(self):
         """
         L{MemoryLogger.serialize} returns a list of serialized versions of the
         logged messages.
         """
-        serializer = _MessageSerializer([
-            Field.forValue("message_type", "mymessage", "The type"),
-            Field("length", len, "The length")])
-        messages = [{
-            "message_type": "mymessage",
-            "length": "abc"}, {
-                "message_type": "mymessage",
-                "length": "abcd"}]
+        serializer = _MessageSerializer(
+            [
+                Field.forValue("message_type", "mymessage", "The type"),
+                Field("length", len, "The length"),
+            ]
+        )
+        messages = [
+            {"message_type": "mymessage", "length": "abc"},
+            {"message_type": "mymessage", "length": "abcd"},
+        ]
         logger = MemoryLogger()
         for message in messages:
             logger.write(message, serializer)
         self.assertEqual(
-            logger.serialize(), [{
-                "message_type": "mymessage",
-                "length": 3}, {
-                    "message_type": "mymessage",
-                    "length": 4}])
+            logger.serialize(),
+            [
+                {"message_type": "mymessage", "length": 3},
+                {"message_type": "mymessage", "length": 4},
+            ],
+        )
 
     def test_serializeCopies(self):
         """
         L{MemoryLogger.serialize} does not mutate the original logged messages.
         """
-        serializer = _MessageSerializer([
-            Field.forValue("message_type", "mymessage", "The type"),
-            Field("length", len, "The length")])
+        serializer = _MessageSerializer(
+            [
+                Field.forValue("message_type", "mymessage", "The type"),
+                Field("length", len, "The length"),
+            ]
+        )
         message = {"message_type": "mymessage", "length": "abc"}
         logger = MemoryLogger()
         logger.write(message, serializer)
@@ -232,8 +260,74 @@ class MemoryLoggerTests(TestCase):
         logger.write({"key": "value"}, None)
         logger.reset()
         self.assertEqual(
-            (logger.messages, logger.serializers,
-             logger.tracebackMessages), ([], [], []))
+            (logger.messages, logger.serializers, logger.tracebackMessages),
+            ([], [], []),
+        )
+
+    def test_threadSafeWrite(self):
+        """
+        L{MemoryLogger.write} can be called from multiple threads concurrently.
+        """
+        # Number of threads that will each write messages concurrently.
+        thread_count = 10
+
+        # A lot of messages.  This will keep the threads running long enough
+        # to give them a chance to (try to) interfere with each other.
+        write_count = 10000
+
+        # They'll all use the same MemoryLogger instance.
+        logger = MemoryLogger()
+
+        # Each thread will have its own message and serializer that it writes
+        # to the log over and over again.
+        def write(msg, serializer):
+            for i in range(write_count):
+                logger.write(msg, serializer)
+
+        # Generate a single distinct message for each thread to log.
+        msgs = list({"i": i} for i in range(thread_count))
+
+        # Generate a single distinct serializer for each thread to log.
+        serializers = list(object() for i in range(thread_count))
+
+        # Pair them all up.  This gives us a simple invariant we can check
+        # later on.
+        write_args = zip(msgs, serializers)
+
+        # Create the threads.
+        threads = list(Thread(target=write, args=args) for args in write_args)
+
+        # Run them all.  Note threads early in this list will start writing to
+        # the log before later threads in the list even get a chance to start.
+        # That's part of why we have each thread write so many messages.
+        for t in threads:
+            t.start()
+        # Wait for them all to finish.
+        for t in threads:
+            t.join()
+
+        # Check that we got the correct number of messages in the log.
+        expected_count = thread_count * write_count
+        self.assertEqual(len(logger.messages), expected_count)
+        self.assertEqual(len(logger.serializers), expected_count)
+
+        # Check the simple invariant we created above.  Every logged message
+        # must be paired with the correct serializer, where "correct" is
+        # defined by ``write_args`` above.
+        for position, (msg, serializer) in enumerate(
+            zip(logger.messages, logger.serializers)
+        ):
+            # The indexes must match because the objects are paired using
+            # zip() above.
+            msg_index = msgs.index(msg)
+            serializer_index = serializers.index(serializer)
+            self.assertEqual(
+                msg_index,
+                serializer_index,
+                "Found message #{} with serializer #{} at position {}".format(
+                    msg_index, serializer_index, position
+                ),
+            )
 
 
 class MyException(Exception):
@@ -246,6 +340,7 @@ class BadDestination(list):
     """
     A destination that throws an exception the first time it is called.
     """
+
     called = 0
 
     def __call__(self, msg):
@@ -290,9 +385,8 @@ class DestinationsTests(TestCase):
         destinations.add(dest2)
         destinations.add(dest3.append)
 
-        message = {u"hello": 123}
-        self.assertRaises(
-            _DestinationsSendError, destinations.send, {u"hello": 123})
+        message = {"hello": 123}
+        self.assertRaises(_DestinationsSendError, destinations.send, {"hello": 123})
         self.assertEqual((dest, dest3), ([message], [message]))
 
     def test_destinationExceptionContinue(self):
@@ -304,10 +398,9 @@ class DestinationsTests(TestCase):
         dest = BadDestination()
         destinations.add(dest)
 
-        self.assertRaises(
-            _DestinationsSendError, destinations.send, {u"hello": 123})
-        destinations.send({u"hello": 200})
-        self.assertEqual(dest, [{u"hello": 200}])
+        self.assertRaises(_DestinationsSendError, destinations.send, {"hello": 123})
+        destinations.send({"hello": 200})
+        self.assertEqual(dest, [{"hello": 200}])
 
     def test_remove(self):
         """
@@ -315,7 +408,7 @@ class DestinationsTests(TestCase):
         receive messages from L{Destinations.add} calls.
         """
         destinations = Destinations()
-        message = {u"hello": 123}
+        message = {"hello": 123}
         dest = []
         destinations.add(dest.append)
         destinations.remove(dest.append)
@@ -353,12 +446,7 @@ class DestinationsTests(TestCase):
         destinations.addGlobalFields(x=123, y="hello")
         destinations.addGlobalFields(x=456, z=456)
         destinations.send({"msg": "X"})
-        self.assertEqual(
-            dest, [{
-                "x": 456,
-                "y": "hello",
-                "z": 456,
-                "msg": "X"}])
+        self.assertEqual(dest, [{"x": 456, "y": "hello", "z": 456, "msg": "X"}])
 
     def test_buffering(self):
         """
@@ -371,8 +459,7 @@ class DestinationsTests(TestCase):
             destinations.send(m)
         dest, dest2 = [], []
         destinations.add(dest.append, dest2.append)
-        self.assertEqual(
-            (dest, dest2), (messages[-1000:], messages[-1000:]))
+        self.assertEqual((dest, dest2), (messages[-1000:], messages[-1000:]))
 
     def test_buffering_second_batch(self):
         """
@@ -387,8 +474,7 @@ class DestinationsTests(TestCase):
         destinations.add(dest.append)
         destinations.add(dest2.append)
         destinations.send(message2)
-        self.assertEqual((dest, dest2),
-                         ([message, message2], [message2]))
+        self.assertEqual((dest, dest2), ([message, message2], [message2]))
 
     def test_global_fields_buffering(self):
         """
@@ -448,12 +534,13 @@ class LoggerTests(TestCase):
         """
         logger, written = makeLogger()
 
-        serializer = _MessageSerializer([
-            Field.forValue("message_type", "mymessage", u"The type"),
-            Field("length", len, "The length of a thing"), ])
-        logger.write({
-            "message_type": "mymessage",
-            "length": "thething"}, serializer)
+        serializer = _MessageSerializer(
+            [
+                Field.forValue("message_type", "mymessage", "The type"),
+                Field("length", len, "The length of a thing"),
+            ]
+        )
+        logger.write({"message_type": "mymessage", "length": "thething"}, serializer)
         self.assertEqual(written, [{"message_type": "mymessage", "length": 8}])
 
     def test_passedInDictionaryUnmodified(self):
@@ -462,9 +549,12 @@ class LoggerTests(TestCase):
         """
         logger, written = makeLogger()
 
-        serializer = _MessageSerializer([
-            Field.forValue("message_type", "mymessage", u"The type"),
-            Field("length", len, "The length of a thing"), ])
+        serializer = _MessageSerializer(
+            [
+                Field.forValue("message_type", "mymessage", "The type"),
+                Field("length", len, "The length of a thing"),
+            ]
+        )
         d = {"message_type": "mymessage", "length": "thething"}
         original = d.copy()
         logger.write(d, serializer)
@@ -483,9 +573,9 @@ class LoggerTests(TestCase):
         dictionary = {badobject(): 123, 123: badobject()}
         badMessage = "eliot: unknown, unicode() raised exception"
         self.assertEqual(
-            eval(Logger()._safeUnicodeDictionary(dictionary)), {
-                badMessage: "123",
-                "123": badMessage})
+            eval(Logger()._safeUnicodeDictionary(dictionary)),
+            {badMessage: "123", "123": badMessage},
+        )
 
     def test_safeUnicodeDictionaryFallback(self):
         """
@@ -505,7 +595,8 @@ class LoggerTests(TestCase):
 
         self.assertEqual(
             Logger()._safeUnicodeDictionary(badobject()),
-            "eliot: unknown, unicode() raised exception")
+            "eliot: unknown, unicode() raised exception",
+        )
 
     def test_serializationErrorTraceback(self):
         """
@@ -518,25 +609,34 @@ class LoggerTests(TestCase):
         def raiser(i):
             raise RuntimeError("oops")
 
-        serializer = _MessageSerializer([
-            Field.forValue("message_type", "mymessage", u"The type"),
-            Field("fail", raiser, "Serialization fail"), ])
+        serializer = _MessageSerializer(
+            [
+                Field.forValue("message_type", "mymessage", "The type"),
+                Field("fail", raiser, "Serialization fail"),
+            ]
+        )
         message = {"message_type": "mymessage", "fail": "will"}
         logger.write(message, serializer)
         self.assertEqual(len(written), 2)
         tracebackMessage = written[0]
         assertContainsFields(
-            self, tracebackMessage, {
-                'exception': '%s.RuntimeError' % (RuntimeError.__module__, ),
-                'message_type': 'eliot:traceback'})
-        self.assertIn("RuntimeError: oops", tracebackMessage['traceback'])
+            self,
+            tracebackMessage,
+            {
+                "exception": "%s.RuntimeError" % (RuntimeError.__module__,),
+                "message_type": "eliot:traceback",
+            },
+        )
+        self.assertIn("RuntimeError: oops", tracebackMessage["traceback"])
         # Calling _safeUnicodeDictionary multiple times leads to
         # inconsistent results due to hash ordering, so compare contents:
         assertContainsFields(
-            self, written[1], {"message_type": "eliot:serialization_failure"})
+            self, written[1], {"message_type": "eliot:serialization_failure"}
+        )
         self.assertEqual(
             eval(written[1]["message"]),
-            dict((repr(key), repr(value)) for (key, value) in message.items()))
+            dict((repr(key), repr(value)) for (key, value) in message.items()),
+        )
 
     def test_destinationExceptionCaught(self):
         """
@@ -551,11 +651,15 @@ class LoggerTests(TestCase):
         message = {"hello": 123}
         logger.write({"hello": 123})
         assertContainsFields(
-            self, dest[0], {
+            self,
+            dest[0],
+            {
                 "message_type": "eliot:destination_failure",
                 "message": logger._safeUnicodeDictionary(message),
                 "reason": "ono",
-                "exception": "eliot.tests.test_output.MyException"})
+                "exception": "eliot.tests.test_output.MyException",
+            },
+        )
 
     def test_destinationMultipleExceptionsCaught(self):
         """
@@ -582,24 +686,38 @@ class LoggerTests(TestCase):
             return [message.pop(key) for message in messages[1:]]
 
         # Make sure we have task_level & task_uuid in exception messages.
-        task_levels = remove(u"task_level")
-        task_uuids = remove(u"task_uuid")
-        timestamps = remove(u"timestamp")
-
-        self.assertEqual((
-            abs(timestamps[0] + timestamps[1] - 2 * time()) < 1,
-            task_levels == [[1], [1]],
-            len([UUID(uuid) for uuid in task_uuids]) == 2, messages), (
-                True, True, True, [
-                    message, {
+        task_levels = remove("task_level")
+        task_uuids = remove("task_uuid")
+        timestamps = remove("timestamp")
+
+        self.assertEqual(
+            (
+                abs(timestamps[0] + timestamps[1] - 2 * time()) < 1,
+                task_levels == [[1], [1]],
+                len([UUID(uuid) for uuid in task_uuids]) == 2,
+                messages,
+            ),
+            (
+                True,
+                True,
+                True,
+                [
+                    message,
+                    {
                         "message_type": "eliot:destination_failure",
                         "message": logger._safeUnicodeDictionary(message),
                         "reason": "ono",
-                        "exception": "eliot.tests.test_output.MyException"}, {
-                            "message_type": "eliot:destination_failure",
-                            "message": logger._safeUnicodeDictionary(message),
-                            "reason": zero_divide,
-                            "exception": zero_type}]))
+                        "exception": "eliot.tests.test_output.MyException",
+                    },
+                    {
+                        "message_type": "eliot:destination_failure",
+                        "message": logger._safeUnicodeDictionary(message),
+                        "reason": zero_divide,
+                        "exception": zero_type,
+                    },
+                ],
+            ),
+        )
 
     def test_destinationExceptionCaughtTwice(self):
         """
@@ -620,22 +738,6 @@ class LoggerTests(TestCase):
         logger.write({"hello": 123})
 
 
-class JSONTests(TestCase):
-    """
-    Tests for the L{json} object exposed by L{eliot._output}.
-    """
-
-    @skipIf(PY3, "Python 3 json does not support bytes as keys")
-    def test_bytes(self):
-        """
-        L{json.dumps} uses a JSON encoder that assumes any C{bytes} are
-        UTF-8 encoded Unicode.
-        """
-        d = {"hello \u1234".encode("utf-8"): "\u5678".encode("utf-8")}
-        result = json.dumps(d)
-        self.assertEqual(json.loads(result), {"hello \u1234": "\u5678"})
-
-
 class PEP8Tests(TestCase):
     """
     Tests for PEP 8 method compatibility.
@@ -646,14 +748,14 @@ class PEP8Tests(TestCase):
         L{MemoryLogger.flush_tracebacks} is the same as
         L{MemoryLogger.flushTracebacks}
         """
-        self.assertEqual(
-            MemoryLogger.flush_tracebacks, MemoryLogger.flushTracebacks)
+        self.assertEqual(MemoryLogger.flush_tracebacks, MemoryLogger.flushTracebacks)
 
 
 class ToFileTests(TestCase):
     """
     Tests for L{to_file}.
     """
+
     def test_to_file_adds_destination(self):
         """
         L{to_file} adds a L{FileDestination} destination with the given file.
@@ -685,9 +787,10 @@ class ToFileTests(TestCase):
         bytes_f = BytesIO()
         destination = FileDestination(file=bytes_f)
         destination(message)
-        self.assertEqual([
-            json.loads(line)
-            for line in bytes_f.getvalue().splitlines()], [{"x": "abc"}])
+        self.assertEqual(
+            [json.loads(line) for line in bytes_f.getvalue().splitlines()],
+            [{"x": "abc"}],
+        )
 
     @skipUnless(np, "NumPy is not installed.")
     def test_default_encoder_is_EliotJSONEncoder(self):
@@ -697,11 +800,7 @@ class ToFileTests(TestCase):
         destination = FileDestination(file=f)
         destination(message)
         self.assertEqual(
-            [
-                json.loads(line)
-                for line in f.getvalue().splitlines()
-            ],
-            [{"x": 3}]
+            [json.loads(line) for line in f.getvalue().splitlines()], [{"x": 3}]
         )
 
     def test_filedestination_writes_json_bytes(self):
@@ -715,9 +814,10 @@ class ToFileTests(TestCase):
         destination = FileDestination(file=bytes_f)
         destination(message1)
         destination(message2)
-        self.assertEqual([
-            json.loads(line)
-            for line in bytes_f.getvalue().splitlines()], [message1, message2])
+        self.assertEqual(
+            [json.loads(line) for line in bytes_f.getvalue().splitlines()],
+            [message1, message2],
+        )
 
     def test_filedestination_custom_encoder(self):
         """
@@ -737,8 +837,8 @@ class ToFileTests(TestCase):
         destination = FileDestination(file=f, encoder=CustomEncoder)
         destination(message)
         self.assertEqual(
-            json.loads(f.getvalue().splitlines()[0]),
-            {"x": 123, "z": "CUSTOM!"})
+            json.loads(f.getvalue().splitlines()[0]), {"x": 123, "z": "CUSTOM!"}
+        )
 
     def test_filedestination_flushes(self):
         """
@@ -755,11 +855,11 @@ class ToFileTests(TestCase):
         destination(message1)
 
         # Message got written even though buffer wasn't filled:
-        self.assertEqual([
-            json.loads(line)
-            for line in open(path, "rb").read().splitlines()], [message1])
+        self.assertEqual(
+            [json.loads(line) for line in open(path, "rb").read().splitlines()],
+            [message1],
+        )
 
-    @skipIf(PY2, "Python 2 files always accept bytes")
     def test_filedestination_writes_json_unicode(self):
         """
         L{FileDestination} writes JSON-encoded messages to file that only
@@ -770,3 +870,13 @@ class ToFileTests(TestCase):
         destination = FileDestination(file=unicode_f)
         destination(message)
         self.assertEqual(pyjson.loads(unicode_f.getvalue()), message)
+
+    def test_filedestination_unwriteable_file(self):
+        """
+        L{FileDestination} raises a runtime error if the given file isn't writeable.
+        """
+        path = mktemp()
+        open(path, "w").close()
+        f = open(path, "r")
+        with self.assertRaises(RuntimeError):
+            FileDestination(f)
diff --git a/eliot/tests/test_parse.py b/eliot/tests/test_parse.py
index 77410ee..fec5bc6 100644
--- a/eliot/tests/test_parse.py
+++ b/eliot/tests/test_parse.py
@@ -21,7 +21,8 @@ from .._message import (
     WrittenMessage,
     MESSAGE_TYPE_FIELD,
     TASK_LEVEL_FIELD,
-    TASK_UUID_FIELD, )
+    TASK_UUID_FIELD,
+)
 from .._action import FAILED_STATUS, ACTION_STATUS_FIELD, WrittenAction
 from .strategies import labels
 
@@ -33,6 +34,7 @@ class ActionStructure(PClass):
     Individual messages are encoded as a unicode string; actions are
     encoded as a L{ActionStructure} instance.
     """
+
     type = field(type=(unicode, None.__class__))
     children = pvector_field(object)  # XXX ("StubAction", unicode))
     failed = field(type=bool)
@@ -51,9 +53,10 @@ class ActionStructure(PClass):
             return cls(
                 type=written.action_type,
                 failed=(
-                    written.end_message.contents[ACTION_STATUS_FIELD] ==
-                    FAILED_STATUS),
-                children=[cls.from_written(o) for o in written.children])
+                    written.end_message.contents[ACTION_STATUS_FIELD] == FAILED_STATUS
+                ),
+                children=[cls.from_written(o) for o in written.children],
+            )
 
     @classmethod
     def to_eliot(cls, structure_or_message, logger):
@@ -89,7 +92,8 @@ def action_structures(draw):
             return ActionStructure(
                 type=draw(labels),
                 failed=draw(st.booleans()),
-                children=[to_structure(o) for o in tree_or_message])
+                children=[to_structure(o) for o in tree_or_message],
+            )
         else:
             return tree_or_message
 
@@ -98,8 +102,7 @@ def action_structures(draw):
 
 def _structure_and_messages(structure):
     messages = ActionStructure.to_eliot(structure, MemoryLogger())
-    return st.permutations(messages).map(
-        lambda permuted: (structure, permuted))
+    return st.permutations(messages).map(lambda permuted: (structure, permuted))
 
 
 # Hypothesis strategy that creates a tuple of ActionStructure/unicode and
@@ -198,9 +201,10 @@ class TaskTests(TestCase):
             ctx.add_success_fields(foo=[1, 2])
         messages = logger.messages
         expected = WrittenAction.from_messages(
-            WrittenMessage.from_dict(messages[0]), [
-                WrittenMessage.from_dict(messages[1])],
-            WrittenMessage.from_dict(messages[2]))
+            WrittenMessage.from_dict(messages[0]),
+            [WrittenMessage.from_dict(messages[1])],
+            WrittenMessage.from_dict(messages[2]),
+        )
 
         task = parse_to_task(messages)
         self.assertEqual(task.root(), expected)
@@ -214,10 +218,11 @@ class ParserTests(TestCase):
     @given(
         structure_and_messages1=STRUCTURES_WITH_MESSAGES,
         structure_and_messages2=STRUCTURES_WITH_MESSAGES,
-        structure_and_messages3=STRUCTURES_WITH_MESSAGES)
+        structure_and_messages3=STRUCTURES_WITH_MESSAGES,
+    )
     def test_parse_into_tasks(
-        self, structure_and_messages1, structure_and_messages2,
-        structure_and_messages3):
+        self, structure_and_messages1, structure_and_messages2, structure_and_messages3
+    ):
         """
         Adding messages to a L{Parser} parses them into a L{Task} instances.
         """
@@ -236,7 +241,8 @@ class ParserTests(TestCase):
                 all_tasks.extend(completed_tasks)
 
         assertCountEqual(
-            self, all_tasks, [parse_to_task(msgs) for msgs in all_messages])
+            self, all_tasks, [parse_to_task(msgs) for msgs in all_messages]
+        )
 
     @given(structure_and_messages=STRUCTURES_WITH_MESSAGES)
     def test_incomplete_tasks(self, structure_and_messages):
@@ -258,18 +264,19 @@ class ParserTests(TestCase):
         self.assertEqual(
             dict(
                 incomplete_matches=incomplete_matches,
-                final_incompleted=parser.incomplete_tasks()),
-            dict(
-                incomplete_matches=[True] * (len(messages) - 1),
-                final_incompleted=[]))
+                final_incompleted=parser.incomplete_tasks(),
+            ),
+            dict(incomplete_matches=[True] * (len(messages) - 1), final_incompleted=[]),
+        )
 
     @given(
         structure_and_messages1=STRUCTURES_WITH_MESSAGES,
         structure_and_messages2=STRUCTURES_WITH_MESSAGES,
-        structure_and_messages3=STRUCTURES_WITH_MESSAGES)
+        structure_and_messages3=STRUCTURES_WITH_MESSAGES,
+    )
     def test_parse_stream(
-        self, structure_and_messages1, structure_and_messages2,
-        structure_and_messages3):
+        self, structure_and_messages1, structure_and_messages2, structure_and_messages3
+    ):
         """
         L{Parser.parse_stream} returns an iterable of completed and then
         incompleted tasks.
@@ -281,20 +288,21 @@ class ParserTests(TestCase):
         assume(len(messages3) > 1)
         # Need unique UUIDs per task:
         assume(
-            len(
-                set(
-                    m[0][TASK_UUID_FIELD]
-                    for m in (messages1, messages2, messages3))) == 3)
+            len(set(m[0][TASK_UUID_FIELD] for m in (messages1, messages2, messages3)))
+            == 3
+        )
 
         # Two complete tasks, one incomplete task:
         all_messages = (messages1, messages2, messages3[:-1])
 
         all_tasks = list(
-            Parser.parse_stream([
-                m for m in chain(*zip_longest(*all_messages))
-                if m is not None]))
+            Parser.parse_stream(
+                [m for m in chain(*zip_longest(*all_messages)) if m is not None]
+            )
+        )
         assertCountEqual(
-            self, all_tasks, [parse_to_task(msgs) for msgs in all_messages])
+            self, all_tasks, [parse_to_task(msgs) for msgs in all_messages]
+        )
 
 
 class BackwardsCompatibility(TestCase):
@@ -305,5 +313,6 @@ class BackwardsCompatibility(TestCase):
         import eliot._parse
         from eliot import _parse
         import eliot.parse
+
         self.assertIs(eliot.parse, eliot._parse)
         self.assertIs(_parse, eliot.parse)
diff --git a/eliot/tests/test_prettyprint.py b/eliot/tests/test_prettyprint.py
index 592f217..d253a35 100644
--- a/eliot/tests/test_prettyprint.py
+++ b/eliot/tests/test_prettyprint.py
@@ -2,29 +2,31 @@
 Tests for C{eliot.prettyprint}.
 """
 
-from __future__ import unicode_literals
-
 from unittest import TestCase
 from subprocess import check_output, Popen, PIPE
+from collections import OrderedDict
+from datetime import datetime
 
 from pyrsistent import pmap
 
 from .._bytesjson import dumps
-from ..prettyprint import pretty_format, _CLI_HELP, REQUIRED_FIELDS
+from ..prettyprint import pretty_format, compact_format, REQUIRED_FIELDS
 
 SIMPLE_MESSAGE = {
     "timestamp": 1443193754,
     "task_uuid": "8c668cde-235b-4872-af4e-caea524bd1c0",
     "message_type": "messagey",
     "task_level": [1, 2],
-    "keys": [123, 456]}
+    "keys": [123, 456],
+}
 
 UNTYPED_MESSAGE = {
     "timestamp": 1443193754,
     "task_uuid": "8c668cde-235b-4872-af4e-caea524bd1c0",
     "task_level": [1],
     "key": 1234,
-    "abc": "def"}
+    "abc": "def",
+}
 
 
 class FormattingTests(TestCase):
@@ -37,24 +39,28 @@ class FormattingTests(TestCase):
         A typed message is printed as expected.
         """
         self.assertEqual(
-            pretty_format(SIMPLE_MESSAGE), """\
+            pretty_format(SIMPLE_MESSAGE),
+            """\
 8c668cde-235b-4872-af4e-caea524bd1c0 -> /1/2
-2015-09-25 15:09:14Z
+2015-09-25T15:09:14Z
   message_type: 'messagey'
   keys: [123, 456]
-""")
+""",
+        )
 
     def test_untyped_message(self):
         """
         A message with no type is printed as expected.
         """
         self.assertEqual(
-            pretty_format(UNTYPED_MESSAGE), """\
+            pretty_format(UNTYPED_MESSAGE),
+            """\
 8c668cde-235b-4872-af4e-caea524bd1c0 -> /1
-2015-09-25 15:09:14Z
+2015-09-25T15:09:14Z
   abc: 'def'
   key: 1234
-""")
+""",
+        )
 
     def test_action(self):
         """
@@ -66,15 +72,18 @@ class FormattingTests(TestCase):
             "task_level": [2, 2, 2, 1],
             "action_type": "visited",
             "timestamp": 1443193958.0,
-            "action_status": "started"}
+            "action_status": "started",
+        }
         self.assertEqual(
-            pretty_format(message), """\
+            pretty_format(message),
+            """\
 8bc6ded2-446c-4b6d-abbc-4f21f1c9a7d8 -> /2/2/2/1
-2015-09-25 15:12:38Z
+2015-09-25T15:12:38Z
   action_type: 'visited'
   action_status: 'started'
   place: 'Statue #1'
-""")
+""",
+        )
 
     def test_multi_line(self):
         """
@@ -85,17 +94,20 @@ class FormattingTests(TestCase):
             "task_uuid": "8c668cde-235b-4872-af4e-caea524bd1c0",
             "task_level": [1],
             "key": "hello\nthere\nmonkeys!\n",
-            "more": "stuff"}
+            "more": "stuff",
+        }
         self.assertEqual(
-            pretty_format(message), """\
+            pretty_format(message),
+            """\
 8c668cde-235b-4872-af4e-caea524bd1c0 -> /1
-2015-09-25 15:09:14Z
+2015-09-25T15:09:14Z
   key: 'hello
      |  there
      |  monkeys!
      |  '
   more: 'stuff'
-""")
+""",
+        )
 
     def test_tabs(self):
         """
@@ -105,13 +117,16 @@ class FormattingTests(TestCase):
             "timestamp": 1443193754,
             "task_uuid": "8c668cde-235b-4872-af4e-caea524bd1c0",
             "task_level": [1],
-            "key": "hello\tmonkeys!"}
+            "key": "hello\tmonkeys!",
+        }
         self.assertEqual(
-            pretty_format(message), """\
+            pretty_format(message),
+            """\
 8c668cde-235b-4872-af4e-caea524bd1c0 -> /1
-2015-09-25 15:09:14Z
+2015-09-25T15:09:14Z
   key: 'hello	monkeys!'
-""")
+""",
+        )
 
     def test_structured(self):
         """
@@ -122,17 +137,64 @@ class FormattingTests(TestCase):
             "timestamp": 1443193754,
             "task_uuid": "8c668cde-235b-4872-af4e-caea524bd1c0",
             "task_level": [1],
-            "key": {
-                "value": 123,
-                "another": [1, 2, {
-                    "more": "data"}]}}
+            "key": {"value": 123, "another": [1, 2, {"more": "data"}]},
+        }
         self.assertEqual(
-            pretty_format(message), """\
+            pretty_format(message),
+            """\
 8c668cde-235b-4872-af4e-caea524bd1c0 -> /1
-2015-09-25 15:09:14Z
+2015-09-25T15:09:14Z
   key: {'another': [1, 2, {'more': 'data'}],
      |  'value': 123}
-""")
+""",
+        )
+
+    def test_microsecond(self):
+        """
+        Microsecond timestamps are rendered in the output.
+        """
+        message = {
+            "timestamp": 1443193754.123455,
+            "task_uuid": "8c668cde-235b-4872-af4e-caea524bd1c0",
+            "task_level": [1],
+        }
+        self.assertEqual(
+            pretty_format(message),
+            """\
+8c668cde-235b-4872-af4e-caea524bd1c0 -> /1
+2015-09-25T15:09:14.123455Z
+""",
+        )
+
+    def test_compact(self):
+        """
+        The compact mode does everything on a single line, including
+        dictionaries and multi-line messages.
+        """
+        message = {
+            "timestamp": 1443193754,
+            "task_uuid": "8c668cde-235b-4872-af4e-caea524bd1c0",
+            "task_level": [1],
+            "key": OrderedDict([("value", 123), ("another", [1, 2, {"more": "data"}])]),
+            "multiline": "hello\n\tthere!\nabc",
+        }
+        self.assertEqual(
+            compact_format(message),
+            r'8c668cde-235b-4872-af4e-caea524bd1c0/1 2015-09-25T15:09:14Z key={"value":123,"another":[1,2,{"more":"data"}]} multiline="hello\n\tthere!\nabc"',
+        )
+
+    def test_local(self):
+        """
+        Timestamps can be generated in local timezone.
+        """
+        message = {
+            "timestamp": 1443193754,
+            "task_uuid": "8c668cde-235b-4872-af4e-caea524bd1c0",
+            "task_level": [1],
+        }
+        expected = datetime.fromtimestamp(1443193754).isoformat(sep="T")
+        self.assertIn(expected, pretty_format(message, True))
+        self.assertIn(expected, compact_format(message, True))
 
 
 class CommandLineTests(TestCase):
@@ -145,9 +207,9 @@ class CommandLineTests(TestCase):
         C{--help} prints out the help text and exits.
         """
         result = check_output(["eliot-prettyprint", "--help"])
-        self.assertEqual(result, _CLI_HELP.encode("utf-8"))
+        self.assertIn(b"Convert Eliot messages into more readable", result)
 
-    def write_and_read(self, lines):
+    def write_and_read(self, lines, extra_args=()):
         """
         Write the given lines to the command-line on stdin, return stdout.
 
@@ -155,7 +217,9 @@ class CommandLineTests(TestCase):
             new lines.
         @return: Unicode-decoded result of subprocess stdout.
         """
-        process = Popen([b"eliot-prettyprint"], stdin=PIPE, stdout=PIPE)
+        process = Popen(
+            [b"eliot-prettyprint"] + list(extra_args), stdin=PIPE, stdout=PIPE
+        )
         process.stdin.write(b"".join(line + b"\n" for line in lines))
         process.stdin.close()
         result = process.stdout.read().decode("utf-8")
@@ -170,8 +234,38 @@ class CommandLineTests(TestCase):
         messages = [SIMPLE_MESSAGE, UNTYPED_MESSAGE, SIMPLE_MESSAGE]
         stdout = self.write_and_read(map(dumps, messages))
         self.assertEqual(
-            stdout, "".join(
-                pretty_format(message) + "\n" for message in messages))
+            stdout, "".join(pretty_format(message) + "\n" for message in messages)
+        )
+
+    def test_compact_output(self):
+        """
+        In compact mode, the process reads JSON lines from stdin and writes out
+        a pretty-printed compact version.
+        """
+        messages = [SIMPLE_MESSAGE, UNTYPED_MESSAGE, SIMPLE_MESSAGE]
+        stdout = self.write_and_read(map(dumps, messages), [b"--compact"])
+        self.assertEqual(
+            stdout, "".join(compact_format(message) + "\n" for message in messages)
+        )
+
+    def test_local_timezone(self):
+        """
+        Local timezones are used if --local-timezone is given.
+        """
+        message = {
+            "timestamp": 1443193754,
+            "task_uuid": "8c668cde-235b-4872-af4e-caea524bd1c0",
+            "task_level": [1],
+        }
+        expected = datetime.fromtimestamp(1443193754).isoformat(sep="T")
+        stdout = self.write_and_read(
+            [dumps(message)], [b"--local-timezone"]
+        )
+        self.assertIn(expected, stdout)
+        stdout = self.write_and_read(
+            [dumps(message)], [b"--compact", b"--local-timezone"]
+        )
+        self.assertIn(expected, stdout)
 
     def test_not_json_message(self):
         """
@@ -181,21 +275,29 @@ class CommandLineTests(TestCase):
         lines = [dumps(SIMPLE_MESSAGE), not_json, dumps(UNTYPED_MESSAGE)]
         stdout = self.write_and_read(lines)
         self.assertEqual(
-            stdout, "{}\nNot JSON: {}\n\n{}\n".format(
+            stdout,
+            "{}\nNot JSON: {}\n\n{}\n".format(
                 pretty_format(SIMPLE_MESSAGE),
-                str(not_json), pretty_format(UNTYPED_MESSAGE)))
+                str(not_json),
+                pretty_format(UNTYPED_MESSAGE),
+            ),
+        )
 
     def test_missing_required_field(self):
         """
         Non-Eliot JSON messages are not formatted.
         """
         base = pmap(SIMPLE_MESSAGE)
-        messages = [
-            dumps(dict(base.remove(field)))
-            for field in REQUIRED_FIELDS] + [dumps(SIMPLE_MESSAGE)]
+        messages = [dumps(dict(base.remove(field))) for field in REQUIRED_FIELDS] + [
+            dumps(SIMPLE_MESSAGE)
+        ]
         stdout = self.write_and_read(messages)
         self.assertEqual(
-            stdout, "{}{}\n".format(
+            stdout,
+            "{}{}\n".format(
                 "".join(
-                    "Not an Eliot message: {}\n\n".format(msg)
-                    for msg in messages[:-1]), pretty_format(SIMPLE_MESSAGE)))
+                    "Not an Eliot message: {}\n\n".format(msg) for msg in messages[:-1]
+                ),
+                pretty_format(SIMPLE_MESSAGE),
+            ),
+        )
diff --git a/eliot/tests/test_pyinstaller.py b/eliot/tests/test_pyinstaller.py
new file mode 100644
index 0000000..1b07c92
--- /dev/null
+++ b/eliot/tests/test_pyinstaller.py
@@ -0,0 +1,42 @@
+"""Test for pyinstaller compatibility."""
+
+from __future__ import absolute_import
+
+from unittest import TestCase, SkipTest
+from tempfile import mkdtemp, NamedTemporaryFile
+from subprocess import check_call, CalledProcessError
+import os
+
+from six import PY2
+
+if PY2:
+    FileNotFoundError = OSError
+
+
+class PyInstallerTests(TestCase):
+    """Make sure PyInstaller doesn't break Eliot."""
+
+    def setUp(self):
+        try:
+            check_call(["pyinstaller", "--help"])
+        except (CalledProcessError, FileNotFoundError):
+            raise SkipTest("Can't find pyinstaller.")
+
+    def test_importable(self):
+        """The Eliot package can be imported inside a PyInstaller packaged binary."""
+        output_dir = mkdtemp()
+        with NamedTemporaryFile(mode="w") as f:
+            f.write("import eliot; import eliot.prettyprint\n")
+            f.flush()
+            check_call(
+                [
+                    "pyinstaller",
+                    "--distpath",
+                    output_dir,
+                    "-F",
+                    "-n",
+                    "importeliot",
+                    f.name,
+                ]
+            )
+        check_call([os.path.join(output_dir, "importeliot")])
diff --git a/eliot/tests/test_stdlib.py b/eliot/tests/test_stdlib.py
index 494c6ff..c76ca8e 100644
--- a/eliot/tests/test_stdlib.py
+++ b/eliot/tests/test_stdlib.py
@@ -23,21 +23,25 @@ class StdlibTests(TestCase):
         stdlib_logger.warning("ono")
         message = logger.messages[0]
         assertContainsFields(
-            self, message, {
+            self,
+            message,
+            {
                 "message_type": "eliot:stdlib",
                 "log_level": "INFO",
                 "message": "hello",
-                "logger": "eliot-test"
-            }
+                "logger": "eliot-test",
+            },
         )
         message = logger.messages[1]
         assertContainsFields(
-            self, message, {
+            self,
+            message,
+            {
                 "message_type": "eliot:stdlib",
                 "log_level": "WARNING",
                 "message": "ono",
-                "logger": "eliot-test"
-            }
+                "logger": "eliot-test",
+            },
         )
 
     @capture_logging(None)
@@ -55,12 +59,14 @@ class StdlibTests(TestCase):
             stdlib_logger.exception("ono")
         message = logger.messages[0]
         assertContainsFields(
-            self, message, {
+            self,
+            message,
+            {
                 "message_type": "eliot:stdlib",
                 "log_level": "ERROR",
                 "message": "ono",
-                "logger": "eliot-test2"
-            }
+                "logger": "eliot-test2",
+            },
         )
         assert_expected_traceback(
             self, logger, logger.messages[1], exception, expected_traceback
diff --git a/eliot/tests/test_tai64n.py b/eliot/tests/test_tai64n.py
index 5f02000..16529be 100644
--- a/eliot/tests/test_tai64n.py
+++ b/eliot/tests/test_tai64n.py
@@ -43,10 +43,12 @@ class FunctionalTests(TestCase):
         by L{encode}.
         """
         try:
-            process = subprocess.Popen(["tai64nlocal"],
-                                       bufsize=4096,
-                                       stdin=subprocess.PIPE,
-                                       stdout=subprocess.PIPE)
+            process = subprocess.Popen(
+                ["tai64nlocal"],
+                bufsize=4096,
+                stdin=subprocess.PIPE,
+                stdout=subprocess.PIPE,
+            )
         except OSError as e:
             if e.errno == errno.ENOENT:
                 raise SkipTest("This test requires the daemontools package")
@@ -59,7 +61,8 @@ class FunctionalTests(TestCase):
         process.stdin.close()
         decodedToLocalTime = process.stdout.read().strip()
         self.assertEqual(
-            time.strftime(
-                "%Y-%m-%d %H:%M:%S.12345",
-                time.localtime(timestamp)).encode("ascii"),
-            decodedToLocalTime[:25])
+            time.strftime("%Y-%m-%d %H:%M:%S.12345", time.localtime(timestamp)).encode(
+                "ascii"
+            ),
+            decodedToLocalTime[:25],
+        )
diff --git a/eliot/tests/test_testing.py b/eliot/tests/test_testing.py
index 70dda9f..8df9874 100644
--- a/eliot/tests/test_testing.py
+++ b/eliot/tests/test_testing.py
@@ -4,7 +4,12 @@ Tests for L{eliot.testing}.
 
 from __future__ import unicode_literals
 
-from unittest import SkipTest, TestResult, TestCase
+from unittest import SkipTest, TestResult, TestCase, skipUnless
+
+try:
+    import numpy as np
+except ImportError:
+    np = None
 
 from ..testing import (
     issuperset,
@@ -16,13 +21,17 @@ from ..testing import (
     assertHasMessage,
     assertHasAction,
     validate_logging,
-    capture_logging, )
+    capture_logging,
+    swap_logger,
+    check_for_errors,
+)
 from .._output import MemoryLogger
 from .._action import start_action
 from .._message import Message
 from .._validation import ActionType, MessageType, ValidationError, Field
 from .._traceback import write_traceback
-from .. import add_destination, remove_destination, _output
+from .. import add_destination, remove_destination, _output, log_message
+from .common import CustomObject, CustomJSONEncoder
 
 
 class IsSuperSetTests(TestCase):
@@ -64,8 +73,8 @@ class LoggedActionTests(TestCase):
         """
         The values given to the L{LoggedAction} constructor are stored on it.
         """
-        d1 = {'x': 1}
-        d2 = {'y': 2}
+        d1 = {"x": 1}
+        d2 = {"y": 2}
         root = LoggedAction(d1, d2, [])
         self.assertEqual((root.startMessage, root.endMessage), (d1, d2))
 
@@ -106,8 +115,10 @@ class LoggedActionTests(TestCase):
         # Now we should have x message, start action message, another x message
         # and finally finish message.
         logged = self.fromMessagesIndex(logger.messages, 1)
-        self.assertEqual((logged.startMessage, logged.endMessage),
-                         (logger.messages[1], logger.messages[3]))
+        self.assertEqual(
+            (logged.startMessage, logged.endMessage),
+            (logger.messages[1], logger.messages[3]),
+        )
 
     def test_fromMessagesStartAndErrorFinish(self):
         """
@@ -121,8 +132,10 @@ class LoggedActionTests(TestCase):
         except KeyError:
             pass
         logged = self.fromMessagesIndex(logger.messages, 0)
-        self.assertEqual((logged.startMessage, logged.endMessage),
-                         (logger.messages[0], logger.messages[1]))
+        self.assertEqual(
+            (logged.startMessage, logged.endMessage),
+            (logger.messages[0], logger.messages[1]),
+        )
 
     def test_fromMessagesStartNotFound(self):
         """
@@ -130,10 +143,9 @@ class LoggedActionTests(TestCase):
         is not found.
         """
         logger = MemoryLogger()
-        with start_action(logger, "test"):
+        with start_action(logger, action_type="test"):
             pass
-        self.assertRaises(
-            ValueError, self.fromMessagesIndex, logger.messages[1:], 0)
+        self.assertRaises(ValueError, self.fromMessagesIndex, logger.messages[1:], 0)
 
     def test_fromMessagesFinishNotFound(self):
         """
@@ -141,10 +153,11 @@ class LoggedActionTests(TestCase):
         is not found.
         """
         logger = MemoryLogger()
-        with start_action(logger, "test"):
+        with start_action(logger, action_type="test"):
             pass
-        self.assertRaises(
-            ValueError, self.fromMessagesIndex, logger.messages[:1], 0)
+        with self.assertRaises(ValueError) as cm:
+            self.fromMessagesIndex(logger.messages[:1], 0)
+        self.assertEqual(cm.exception.args[0], "Missing end message of type test")
 
     def test_fromMessagesAddsChildMessages(self):
         """
@@ -167,7 +180,8 @@ class LoggedActionTests(TestCase):
 
         expectedChildren = [
             LoggedMessage(logger.messages[2]),
-            LoggedMessage(logger.messages[3])]
+            LoggedMessage(logger.messages[3]),
+        ]
         self.assertEqual(logged.children, expectedChildren)
 
     def test_fromMessagesAddsChildActions(self):
@@ -190,10 +204,10 @@ class LoggedActionTests(TestCase):
         # index 6 - end action
         logged = self.fromMessagesIndex(logger.messages, 0)
 
+        self.assertEqual(logged.children[0], self.fromMessagesIndex(logger.messages, 1))
         self.assertEqual(
-            logged.children[0], self.fromMessagesIndex(logger.messages, 1))
-        self.assertEqual(logged.type_tree(),
-                         {"test": [{"test2": ["end"]}, {"test3": []}]})
+            logged.type_tree(), {"test": [{"test2": ["end"]}, {"test3": []}]}
+        )
 
     def test_ofType(self):
         """
@@ -216,9 +230,12 @@ class LoggedActionTests(TestCase):
         # index 6 - end action
         logged = LoggedAction.ofType(logger.messages, ACTION)
         self.assertEqual(
-            logged, [
+            logged,
+            [
                 self.fromMessagesIndex(logger.messages, 1),
-                self.fromMessagesIndex(logger.messages, 5)])
+                self.fromMessagesIndex(logger.messages, 5),
+            ],
+        )
 
         # String-variant of ofType:
         logged2 = LoggedAction.ofType(logger.messages, "myaction")
@@ -253,10 +270,13 @@ class LoggedActionTests(TestCase):
 
         loggedAction = LoggedAction.ofType(logger.messages, ACTION)[0]
         self.assertEqual(
-            list(loggedAction.descendants()), [
+            list(loggedAction.descendants()),
+            [
                 self.fromMessagesIndex(logger.messages, 1),
                 LoggedMessage(logger.messages[2]),
-                LoggedMessage(logger.messages[4])])
+                LoggedMessage(logger.messages[4]),
+            ],
+        )
 
     def test_succeeded(self):
         """
@@ -291,7 +311,7 @@ class LoggedMessageTest(TestCase):
         """
         The values given to the L{LoggedMessage} constructor are stored on it.
         """
-        message = {'x': 1}
+        message = {"x": 1}
         logged = LoggedMessage(message)
         self.assertEqual(logged.message, message)
 
@@ -310,12 +330,12 @@ class LoggedMessageTest(TestCase):
         MESSAGE().write(logger)
         logged = LoggedMessage.ofType(logger.messages, MESSAGE)
         self.assertEqual(
-            logged, [
-                LoggedMessage(logger.messages[0]),
-                LoggedMessage(logger.messages[2])])
+            logged,
+            [LoggedMessage(logger.messages[0]), LoggedMessage(logger.messages[2])],
+        )
 
         # Lookup by string type:
-        logged2 = LoggedMessage.ofType(logger.messages, 'mymessage')
+        logged2 = LoggedMessage.ofType(logger.messages, "mymessage")
         self.assertEqual(logged, logged2)
 
     def test_ofTypeNotFound(self):
@@ -392,13 +412,13 @@ class ValidateLoggingTestsMixin(object):
     """
     Tests for L{validateLogging} and L{capture_logging}.
     """
+
     validate = None
 
     def test_decoratedFunctionCalledWithMemoryLogger(self):
         """
         The underlying function decorated with L{validateLogging} is called with
-        a L{MemoryLogger} instance in addition to any other arguments if the
-        wrapper is called.
+        a L{MemoryLogger} instance.
         """
         result = []
 
@@ -411,6 +431,28 @@ class ValidateLoggingTestsMixin(object):
         theTest.run()
         self.assertEqual(result, [(theTest, MemoryLogger)])
 
+    def test_decorated_function_passthrough(self):
+        """
+        Additional arguments are passed to the underlying function.
+        """
+        result = []
+
+        def another_wrapper(f):
+            def g(this):
+                f(this, 1, 2, c=3)
+
+            return g
+
+        class MyTest(TestCase):
+            @another_wrapper
+            @self.validate(None)
+            def test_foo(this, a, b, logger, c=None):
+                result.append((a, b, c))
+
+        theTest = MyTest("test_foo")
+        theTest.debug()
+        self.assertEqual(result, [(1, 2, 3)])
+
     def test_newMemoryLogger(self):
         """
         The underlying function decorated with L{validateLogging} is called with
@@ -482,13 +524,15 @@ class ValidateLoggingTestsMixin(object):
             @self.validate(None)
             def runTest(self, logger):
                 self.logger = logger
-                logger.write({
-                    "message_type": "wrongmessage"}, MESSAGE._serializer)
+                logger.write({"message_type": "wrongmessage"}, MESSAGE._serializer)
 
         test = MyTest()
-        self.assertRaises(ValidationError, test.debug)
-        self.assertEqual(
-            list(test.logger.messages[0].keys()), ["message_type"])
+        with self.assertRaises(ValidationError) as context:
+            test.debug()
+        # Some reference to the reason:
+        self.assertIn("wrongmessage", str(context.exception))
+        # Some reference to which file caused the problem:
+        self.assertIn("test_testing.py", str(context.exception))
 
     def test_addCleanupTracebacks(self):
         """
@@ -596,6 +640,7 @@ class ValidateLoggingTests(ValidateLoggingTestsMixin, TestCase):
     """
     Tests for L{validate_logging}.
     """
+
     validate = staticmethod(validate_logging)
 
 
@@ -603,6 +648,7 @@ class CaptureLoggingTests(ValidateLoggingTestsMixin, TestCase):
     """
     Tests for L{capture_logging}.
     """
+
     validate = staticmethod(capture_logging)
 
     def setUp(self):
@@ -632,7 +678,7 @@ class CaptureLoggingTests(ValidateLoggingTestsMixin, TestCase):
 
         test = MyTest()
         test.run()
-        self.assertEqual(test.logger.messages[0][u"some_key"], 1234)
+        self.assertEqual(test.logger.messages[0]["some_key"], 1234)
 
     def test_global_cleanup(self):
         """
@@ -651,7 +697,7 @@ class CaptureLoggingTests(ValidateLoggingTestsMixin, TestCase):
         add_destination(messages.append)
         self.addCleanup(remove_destination, messages.append)
         Message.log(some_key=1234)
-        self.assertEqual(messages[0][u"some_key"], 1234)
+        self.assertEqual(messages[0]["some_key"], 1234)
 
     def test_global_cleanup_exception(self):
         """
@@ -670,7 +716,7 @@ class CaptureLoggingTests(ValidateLoggingTestsMixin, TestCase):
         add_destination(messages.append)
         self.addCleanup(remove_destination, messages.append)
         Message.log(some_key=1234)
-        self.assertEqual(messages[0][u"some_key"], 1234)
+        self.assertEqual(messages[0]["some_key"], 1234)
 
     def test_validationNotRunForSkip(self):
         """
@@ -696,36 +742,35 @@ class CaptureLoggingTests(ValidateLoggingTestsMixin, TestCase):
         # nevertheless marked as a skip with the correct reason.
         self.assertEqual(
             (test.recorded, result.skipped, result.errors, result.failures),
-            (False, [(test, "Do not run this test.")], [], []))
+            (False, [(test, "Do not run this test.")], [], []),
+        )
 
-    def test_unflushedTracebacksDontFailForSkip(self):
-        """
-        If the decorated test raises L{SkipTest} then the unflushed traceback
-        checking normally implied by L{validateLogging} is also skipped.
-        """
 
-        class MyTest(TestCase):
-            @validateLogging(lambda self, logger: None)
-            def runTest(self, logger):
-                try:
-                    1 / 0
-                except:
-                    write_traceback(logger)
-                raise SkipTest("Do not run this test.")
+class JSONEncodingTests(TestCase):
+    """Tests for L{capture_logging} JSON encoder support."""
 
-        test = MyTest()
-        result = TestResult()
-        test.run(result)
+    @skipUnless(np, "NumPy is not installed.")
+    @capture_logging(None)
+    def test_default_JSON_encoder(self, logger):
+        """
+        L{capture_logging} validates using L{EliotJSONEncoder} by default.
+        """
+        # Default JSON encoder can't handle NumPy:
+        log_message(message_type="hello", number=np.uint32(12))
 
-        # Verify that there was only a skip, no additional errors or failures
-        # reported.
-        self.assertEqual((1, [], []),
-                         (len(result.skipped), result.errors, result.failures))
+    @capture_logging(None, encoder_=CustomJSONEncoder)
+    def test_custom_JSON_encoder(self, logger):
+        """
+        L{capture_logging} can be called with a custom JSON encoder, which is then
+        used for validation.
+        """
+        # Default JSON encoder can't handle this custom object:
+        log_message(message_type="hello", object=CustomObject())
 
 
 MESSAGE1 = MessageType(
-    "message1", [Field.forTypes("x", [int], "A number")],
-    "A message for testing.")
+    "message1", [Field.forTypes("x", [int], "A number")], "A message for testing."
+)
 MESSAGE2 = MessageType("message2", [], "A message for testing.")
 
 
@@ -750,8 +795,7 @@ class AssertHasMessageTests(TestCase):
         test = self.UnitTest()
         logger = MemoryLogger()
         MESSAGE1(x=123).write(logger)
-        self.assertRaises(
-            AssertionError, assertHasMessage, test, logger, MESSAGE2)
+        self.assertRaises(AssertionError, assertHasMessage, test, logger, MESSAGE2)
 
     def test_returnsIfMessagesOfType(self):
         """
@@ -762,7 +806,8 @@ class AssertHasMessageTests(TestCase):
         MESSAGE1(x=123).write(logger)
         self.assertEqual(
             assertHasMessage(test, logger, MESSAGE1),
-            LoggedMessage.ofType(logger.messages, MESSAGE1)[0])
+            LoggedMessage.ofType(logger.messages, MESSAGE1)[0],
+        )
 
     def test_failIfNotSubset(self):
         """
@@ -773,8 +818,8 @@ class AssertHasMessageTests(TestCase):
         logger = MemoryLogger()
         MESSAGE1(x=123).write(logger)
         self.assertRaises(
-            AssertionError, assertHasMessage, test, logger, MESSAGE1, {
-                "x": 24})
+            AssertionError, assertHasMessage, test, logger, MESSAGE1, {"x": 24}
+        )
 
     def test_returnsIfSubset(self):
         """
@@ -786,12 +831,16 @@ class AssertHasMessageTests(TestCase):
         MESSAGE1(x=123).write(logger)
         self.assertEqual(
             assertHasMessage(test, logger, MESSAGE1, {"x": 123}),
-            LoggedMessage.ofType(logger.messages, MESSAGE1)[0])
+            LoggedMessage.ofType(logger.messages, MESSAGE1)[0],
+        )
 
 
 ACTION1 = ActionType(
-    "action1", [Field.forTypes("x", [int], "A number")], [
-        Field.forTypes("result", [int], "A number")], "A action for testing.")
+    "action1",
+    [Field.forTypes("x", [int], "A number")],
+    [Field.forTypes("result", [int], "A number")],
+    "A action for testing.",
+)
 ACTION2 = ActionType("action2", [], [], "A action for testing.")
 
 
@@ -817,8 +866,7 @@ class AssertHasActionTests(TestCase):
         logger = MemoryLogger()
         with ACTION1(logger, x=123):
             pass
-        self.assertRaises(
-            AssertionError, assertHasAction, test, logger, ACTION2, True)
+        self.assertRaises(AssertionError, assertHasAction, test, logger, ACTION2, True)
 
     def test_failIfWrongSuccessStatus(self):
         """
@@ -834,10 +882,8 @@ class AssertHasActionTests(TestCase):
                 1 / 0
         except ZeroDivisionError:
             pass
-        self.assertRaises(
-            AssertionError, assertHasAction, test, logger, ACTION1, False)
-        self.assertRaises(
-            AssertionError, assertHasAction, test, logger, ACTION2, True)
+        self.assertRaises(AssertionError, assertHasAction, test, logger, ACTION1, False)
+        self.assertRaises(AssertionError, assertHasAction, test, logger, ACTION2, True)
 
     def test_returnsIfMessagesOfType(self):
         """
@@ -850,7 +896,8 @@ class AssertHasActionTests(TestCase):
             pass
         self.assertEqual(
             assertHasAction(test, logger, ACTION1, True),
-            LoggedAction.ofType(logger.messages, ACTION1)[0])
+            LoggedAction.ofType(logger.messages, ACTION1)[0],
+        )
 
     def test_failIfNotStartSubset(self):
         """
@@ -862,8 +909,8 @@ class AssertHasActionTests(TestCase):
         with ACTION1(logger, x=123):
             pass
         self.assertRaises(
-            AssertionError, assertHasAction, test, logger, ACTION1, True, {
-                "x": 24})
+            AssertionError, assertHasAction, test, logger, ACTION1, True, {"x": 24}
+        )
 
     def test_failIfNotEndSubset(self):
         """
@@ -882,7 +929,8 @@ class AssertHasActionTests(TestCase):
             ACTION1,
             True,
             startFields={"x": 123},
-            endFields={"result": 24})
+            endFields={"result": 24},
+        )
 
     def test_returns(self):
         """
@@ -894,9 +942,9 @@ class AssertHasActionTests(TestCase):
         with ACTION1(logger, x=123) as act:
             act.addSuccessFields(result=5)
         self.assertEqual(
-            assertHasAction(
-                test, logger, ACTION1, True, {"x": 123}, {"result": 5}),
-            LoggedAction.ofType(logger.messages, ACTION1)[0])
+            assertHasAction(test, logger, ACTION1, True, {"x": 123}, {"result": 5}),
+            LoggedAction.ofType(logger.messages, ACTION1)[0],
+        )
 
 
 class PEP8Tests(TestCase):
@@ -945,3 +993,61 @@ class PEP8Tests(TestCase):
         L{validate_logging} is the same as L{validateLogging}.
         """
         self.assertEqual(validate_logging, validateLogging)
+
+
+class LowLevelTestingHooks(TestCase):
+    """Tests for lower-level APIs for setting up MemoryLogger."""
+
+    @capture_logging(None)
+    def test_swap_logger(self, logger):
+        """C{swap_logger} swaps out the current logger."""
+        new_logger = MemoryLogger()
+        old_logger = swap_logger(new_logger)
+        Message.log(message_type="hello")
+
+        # We swapped out old logger for new:
+        self.assertIs(old_logger, logger)
+        self.assertEqual(new_logger.messages[0]["message_type"], "hello")
+
+        # Now restore old logger:
+        intermediate_logger = swap_logger(old_logger)
+        Message.log(message_type="goodbye")
+        self.assertIs(intermediate_logger, new_logger)
+        self.assertEqual(logger.messages[0]["message_type"], "goodbye")
+
+    def test_check_for_errors_unflushed_tracebacks(self):
+        """C{check_for_errors} raises on unflushed tracebacks."""
+        logger = MemoryLogger()
+
+        # No errors initially:
+        check_for_errors(logger)
+
+        try:
+            1 / 0
+        except ZeroDivisionError:
+            write_traceback(logger)
+        logger.flush_tracebacks(ZeroDivisionError)
+
+        # Flushed tracebacks don't count:
+        check_for_errors(logger)
+
+        # But unflushed tracebacks do:
+        try:
+            raise RuntimeError
+        except RuntimeError:
+            write_traceback(logger)
+        with self.assertRaises(UnflushedTracebacks):
+            check_for_errors(logger)
+
+    def test_check_for_errors_validation(self):
+        """C{check_for_errors} raises on validation errors."""
+        logger = MemoryLogger()
+        logger.write({"x": 1, "message_type": "mem"})
+
+        # No errors:
+        check_for_errors(logger)
+
+        # Now log something unserializable to JSON:
+        logger.write({"message_type": object()})
+        with self.assertRaises(TypeError):
+            check_for_errors(logger)
diff --git a/eliot/tests/test_traceback.py b/eliot/tests/test_traceback.py
index 3f2d034..37a3f75 100644
--- a/eliot/tests/test_traceback.py
+++ b/eliot/tests/test_traceback.py
@@ -24,22 +24,20 @@ from .._errors import register_exception_extractor
 from .test_action import make_error_extraction_tests
 
 
-def assert_expected_traceback(
-    test, logger, message, exception, expected_traceback
-):
+def assert_expected_traceback(test, logger, message, exception, expected_traceback):
     """Assert we logged the given exception and the expected traceback."""
     lines = expected_traceback.split("\n")
     # Remove source code lines:
-    expected_traceback = "\n".join(
-        [l for l in lines if not l.startswith("    ")]
-    )
+    expected_traceback = "\n".join([l for l in lines if not l.startswith("    ")])
     assertContainsFields(
-        test, message, {
+        test,
+        message,
+        {
             "message_type": "eliot:traceback",
             "exception": RuntimeError,
             "reason": exception,
-            "traceback": expected_traceback
-        }
+            "traceback": expected_traceback,
+        },
     )
     logger.flushTracebacks(RuntimeError)
 
@@ -107,11 +105,7 @@ class TracebackLoggingTests(TestCase):
             write_traceback()
 
         message = logger.messages[0]
-        assertContainsFields(
-            self, message, {
-                "message_type": "eliot:traceback"
-            }
-        )
+        assertContainsFields(self, message, {"message_type": "eliot:traceback"})
         logger.flushTracebacks(RuntimeError)
 
     @validateLogging(None)
@@ -130,12 +124,14 @@ class TracebackLoggingTests(TestCase):
             writeFailure(failure, logger)
         message = logger.messages[0]
         assertContainsFields(
-            self, message, {
+            self,
+            message,
+            {
                 "message_type": "eliot:traceback",
                 "exception": RuntimeError,
                 "reason": failure.value,
-                "traceback": expectedTraceback
-            }
+                "traceback": expectedTraceback,
+            },
         )
         logger.flushTracebacks(RuntimeError)
 
@@ -154,11 +150,7 @@ class TracebackLoggingTests(TestCase):
             failure = Failure()
             writeFailure(failure)
         message = logger.messages[0]
-        assertContainsFields(
-            self, message, {
-                "message_type": "eliot:traceback"
-            }
-        )
+        assertContainsFields(self, message, {"message_type": "eliot:traceback"})
         logger.flushTracebacks(RuntimeError)
 
     @validateLogging(None)
@@ -189,10 +181,9 @@ class TracebackLoggingTests(TestCase):
         _writeTracebackMessage(logger, *exc_info)
         serialized = logger.serialize()[0]
         assertContainsFields(
-            self, serialized, {
-                "exception": "%s.KeyError" % (KeyError.__module__, ),
-                "reason": "123"
-            }
+            self,
+            serialized,
+            {"exception": "%s.KeyError" % (KeyError.__module__,), "reason": "123"},
         )
         logger.flushTracebacks(KeyError)
 
@@ -213,7 +204,7 @@ class TracebackLoggingTests(TestCase):
         _writeTracebackMessage(logger, *exc_info)
         self.assertEqual(
             logger.serialize()[0]["reason"],
-            "eliot: unknown, unicode() raised exception"
+            "eliot: unknown, unicode() raised exception",
         )
         logger.flushTracebacks(BadException)
 
@@ -234,9 +225,7 @@ def get_traceback_messages(exception):
     return messages
 
 
-class TracebackExtractionTests(
-    make_error_extraction_tests(get_traceback_messages)
-):
+class TracebackExtractionTests(make_error_extraction_tests(get_traceback_messages)):
     """
     Error extraction tests for tracebacks.
     """
@@ -254,9 +243,11 @@ class TracebackExtractionTests(
         exception = MyException("because")
         messages = get_traceback_messages(exception)
         assertContainsFields(
-            self, messages[0], {
+            self,
+            messages[0],
+            {
                 "message_type": "eliot:traceback",
                 "reason": exception,
-                "exception": MyException
-            }
+                "exception": MyException,
+            },
         )
diff --git a/eliot/tests/test_twisted.py b/eliot/tests/test_twisted.py
index 7065818..0476ca1 100644
--- a/eliot/tests/test_twisted.py
+++ b/eliot/tests/test_twisted.py
@@ -8,7 +8,7 @@ import sys
 from functools import wraps
 
 try:
-    from twisted.internet.defer import Deferred, succeed, fail
+    from twisted.internet.defer import Deferred, succeed, fail, returnValue
     from twisted.trial.unittest import TestCase
     from twisted.python.failure import Failure
     from twisted.logger import globalLogPublisher
@@ -19,14 +19,21 @@ else:
     # Make sure we always import this if Twisted is available, so broken
     # logwriter.py causes a failure:
     from ..twisted import (
-        DeferredContext, AlreadyFinished, _passthrough, redirectLogsForTrial,
-        _RedirectLogsForTrial, TwistedDestination
+        DeferredContext,
+        AlreadyFinished,
+        _passthrough,
+        redirectLogsForTrial,
+        _RedirectLogsForTrial,
+        TwistedDestination,
+        inline_callbacks,
     )
 
+from .test_generators import assert_expected_action_tree
+
 from .._action import start_action, current_action, Action, TaskLevel
 from .._output import MemoryLogger, Logger
 from .._message import Message
-from ..testing import assertContainsFields
+from ..testing import assertContainsFields, capture_logging
 from .. import removeDestination, addDestination
 from .._traceback import write_traceback
 from .common import FakeSys
@@ -98,7 +105,7 @@ class DeferredContextTests(TestCase):
 
         result = Deferred()
         context = DeferredContext(result)
-        context.addCallbacks(f, lambda x: None, (1, ), {"y": 2})
+        context.addCallbacks(f, lambda x: None, (1,), {"y": 2})
         result.callback(0)
         self.assertEqual(called, [(0, 1, 2)])
 
@@ -117,10 +124,51 @@ class DeferredContextTests(TestCase):
 
         result = Deferred()
         context = DeferredContext(result)
-        context.addCallbacks(lambda x: None, f, None, None, (1, ), {"y": 2})
+        context.addCallbacks(lambda x: None, f, None, None, (1,), {"y": 2})
         result.errback(RuntimeError())
         self.assertEqual(called, [(1, 2)])
 
+    @withActionContext
+    def test_addCallbacksWithOnlyCallback(self):
+        """
+        L{DeferredContext.addCallbacks} can be called with a single argument, a
+        callback function, and passes it to the wrapped L{Deferred}'s
+        C{addCallbacks}.
+        """
+        called = []
+
+        def f(value):
+            called.append(value)
+
+        result = Deferred()
+        context = DeferredContext(result)
+        context.addCallbacks(f)
+        result.callback(0)
+        self.assertEqual(called, [0])
+
+    @withActionContext
+    def test_addCallbacksWithOnlyCallbackErrorCase(self):
+        """
+        L{DeferredContext.addCallbacks} can be called with a single argument, a
+        callback function, and passes a pass-through errback to the wrapped
+        L{Deferred}'s C{addCallbacks}.
+        """
+        called = []
+
+        def f(value):
+            called.append(value)
+
+        class ExpectedException(Exception):
+            pass
+
+        result = Deferred()
+        context = DeferredContext(result)
+        context.addCallbacks(f)
+        result.errback(Failure(ExpectedException()))
+        self.assertEqual(called, [])
+        # The assertion is inside `failureResultOf`.
+        self.failureResultOf(result, ExpectedException)
+
     @withActionContext
     def test_addCallbacksReturnSelf(self):
         """
@@ -128,9 +176,7 @@ class DeferredContextTests(TestCase):
         """
         result = Deferred()
         context = DeferredContext(result)
-        self.assertIs(
-            context, context.addCallbacks(lambda x: None, lambda x: None)
-        )
+        self.assertIs(context, context.addCallbacks(lambda x: None, lambda x: None))
 
     def test_addCallbacksCallbackContext(self):
         """
@@ -145,9 +191,7 @@ class DeferredContextTests(TestCase):
         with action1.context():
             d = DeferredContext(d)
             with action2.context():
-                d.addCallbacks(
-                    lambda x: context.append(current_action()), lambda x: x
-                )
+                d.addCallbacks(lambda x: context.append(current_action()), lambda x: x)
         self.assertEqual(context, [action1])
 
     def test_addCallbacksErrbackContext(self):
@@ -163,9 +207,7 @@ class DeferredContextTests(TestCase):
         with action1.context():
             d = DeferredContext(d)
             with action2.context():
-                d.addCallbacks(
-                    lambda x: x, lambda x: context.append(current_action())
-                )
+                d.addCallbacks(lambda x: x, lambda x: context.append(current_action()))
         self.assertEqual(context, [action1])
 
     @withActionContext
@@ -177,7 +219,7 @@ class DeferredContextTests(TestCase):
         d = succeed(0)
         d = DeferredContext(d)
         d.addCallbacks(lambda x: [x, 1], lambda x: x)
-        self.assertEqual(self.successResultOf(d), [0, 1])
+        self.assertEqual(self.successResultOf(d.result), [0, 1])
 
     @withActionContext
     def test_addCallbacksErrbackResult(self):
@@ -189,7 +231,7 @@ class DeferredContextTests(TestCase):
         d = fail(exception)
         d = DeferredContext(d)
         d.addCallbacks(lambda x: x, lambda x: [x.value, 1])
-        self.assertEqual(self.successResultOf(d), [exception, 1])
+        self.assertEqual(self.successResultOf(d.result), [exception, 1])
 
     def test_addActionFinishNoImmediateLogging(self):
         """
@@ -215,12 +257,14 @@ class DeferredContextTests(TestCase):
             DeferredContext(d).addActionFinish()
         d.callback("result")
         assertContainsFields(
-            self, logger.messages[0], {
+            self,
+            logger.messages[0],
+            {
                 "task_uuid": "uuid",
                 "task_level": [1, 1],
                 "action_type": "sys:me",
-                "action_status": "succeeded"
-            }
+                "action_status": "succeeded",
+            },
         )
 
     def test_addActionFinishSuccessPassThrough(self):
@@ -251,14 +295,16 @@ class DeferredContextTests(TestCase):
         exception = RuntimeError("because")
         d.errback(exception)
         assertContainsFields(
-            self, logger.messages[0], {
+            self,
+            logger.messages[0],
+            {
                 "task_uuid": "uuid",
                 "task_level": [1, 1],
                 "action_type": "sys:me",
                 "action_status": "failed",
                 "reason": "because",
-                "exception": "%s.RuntimeError" % (RuntimeError.__module__, )
-            }
+                "exception": "%s.RuntimeError" % (RuntimeError.__module__,),
+            },
         )
         d.addErrback(lambda _: None)  # don't let Failure go to Twisted logs
 
@@ -298,9 +344,7 @@ class DeferredContextTests(TestCase):
         """
         d = DeferredContext(Deferred())
         d.addActionFinish()
-        self.assertRaises(
-            AlreadyFinished, d.addCallbacks, lambda x: x, lambda x: x
-        )
+        self.assertRaises(AlreadyFinished, d.addCallbacks, lambda x: x, lambda x: x)
 
     @withActionContext
     def test_addActionFinishResult(self):
@@ -330,12 +374,16 @@ class DeferredContextTests(TestCase):
             callbackArgs=None,
             callbackKeywords=None,
             errbackArgs=None,
-            errbackKeywords=None
+            errbackKeywords=None,
         ):
             called.append(
                 (
-                    callback, errback, callbackArgs, callbackKeywords,
-                    errbackArgs, errbackKeywords
+                    callback,
+                    errback,
+                    callbackArgs,
+                    callbackKeywords,
+                    errbackArgs,
+                    errbackKeywords,
                 )
             )
 
@@ -343,12 +391,9 @@ class DeferredContextTests(TestCase):
 
         def f(x, y, z):
             return None
+
         context.addCallback(f, 2, z=3)
-        self.assertEqual(
-            called, [(f, _passthrough, (2, ), {
-                "z": 3
-            }, None, None)]
-        )
+        self.assertEqual(called, [(f, _passthrough, (2,), {"z": 3}, None, None)])
 
     @withActionContext
     def test_addCallbackReturnsSelf(self):
@@ -375,12 +420,16 @@ class DeferredContextTests(TestCase):
             callbackArgs=None,
             callbackKeywords=None,
             errbackArgs=None,
-            errbackKeywords=None
+            errbackKeywords=None,
         ):
             called.append(
                 (
-                    callback, errback, callbackArgs, callbackKeywords,
-                    errbackArgs, errbackKeywords
+                    callback,
+                    errback,
+                    callbackArgs,
+                    callbackKeywords,
+                    errbackArgs,
+                    errbackKeywords,
                 )
             )
 
@@ -388,12 +437,9 @@ class DeferredContextTests(TestCase):
 
         def f(x, y, z):
             pass
+
         context.addErrback(f, 2, z=3)
-        self.assertEqual(
-            called, [(_passthrough, f, None, None, (2, ), {
-                "z": 3
-            })]
-        )
+        self.assertEqual(called, [(_passthrough, f, None, None, (2,), {"z": 3})])
 
     @withActionContext
     def test_addErrbackReturnsSelf(self):
@@ -420,12 +466,16 @@ class DeferredContextTests(TestCase):
             callbackArgs=None,
             callbackKeywords=None,
             errbackArgs=None,
-            errbackKeywords=None
+            errbackKeywords=None,
         ):
             called.append(
                 (
-                    callback, errback, callbackArgs, callbackKeywords,
-                    errbackArgs, errbackKeywords
+                    callback,
+                    errback,
+                    callbackArgs,
+                    callbackKeywords,
+                    errbackArgs,
+                    errbackKeywords,
                 )
             )
 
@@ -433,8 +483,9 @@ class DeferredContextTests(TestCase):
 
         def f(x, y, z):
             return None
+
         context.addBoth(f, 2, z=3)
-        self.assertEqual(called, [(f, f, (2, ), {"z": 3}, (2, ), {"z": 3})])
+        self.assertEqual(called, [(f, f, (2,), {"z": 3}, (2,), {"z": 3})])
 
     @withActionContext
     def test_addBothReturnsSelf(self):
@@ -496,9 +547,7 @@ class RedirectLogsForTrialTests(TestCase):
         """
         originalDestinations = Logger._destinations._destinations[:]
         _RedirectLogsForTrial(FakeSys(["myprogram.py"], b""))()
-        self.assertEqual(
-            Logger._destinations._destinations, originalDestinations
-        )
+        self.assertEqual(Logger._destinations._destinations, originalDestinations)
 
     def test_trialAsPathNoDestination(self):
         """
@@ -506,48 +555,33 @@ class RedirectLogsForTrialTests(TestCase):
         name no destination is added by L{redirectLogsForTrial}.
         """
         originalDestinations = Logger._destinations._destinations[:]
-        _RedirectLogsForTrial(
-            FakeSys(["./trial/myprogram.py"], b"")
-        )()
-        self.assertEqual(
-            Logger._destinations._destinations, originalDestinations
-        )
+        _RedirectLogsForTrial(FakeSys(["./trial/myprogram.py"], b""))()
+        self.assertEqual(Logger._destinations._destinations, originalDestinations)
 
     def test_withoutTrialResult(self):
         """
         When not running under I{trial} L{None} is returned.
         """
-        self.assertIs(
-            None,
-            _RedirectLogsForTrial(
-                FakeSys(["myprogram.py"], b"")
-            )()
-        )
+        self.assertIs(None, _RedirectLogsForTrial(FakeSys(["myprogram.py"], b""))())
 
     def test_noDuplicateAdds(self):
         """
         If a destination has already been added, calling
         L{redirectLogsForTrial} a second time does not add another destination.
         """
-        redirect = _RedirectLogsForTrial(
-            FakeSys(["trial"], b"")
-        )
+        redirect = _RedirectLogsForTrial(FakeSys(["trial"], b""))
         destination = redirect()
         self.addCleanup(removeDestination, destination)
         originalDestinations = Logger._destinations._destinations[:]
         redirect()
-        self.assertEqual(
-            Logger._destinations._destinations, originalDestinations
-        )
+        self.assertEqual(Logger._destinations._destinations, originalDestinations)
 
     def test_noDuplicateAddsResult(self):
         """
         If a destination has already been added, calling
         L{redirectLogsForTrial} a second time returns L{None}.
         """
-        redirect = _RedirectLogsForTrial(
-            FakeSys(["trial"], b"")
-        )
+        redirect = _RedirectLogsForTrial(FakeSys(["trial"], b""))
         destination = redirect()
         self.addCleanup(removeDestination, destination)
         result = redirect()
@@ -570,6 +604,7 @@ class TwistedDestinationTests(TestCase):
     """
     Tests for L{TwistedDestination}.
     """
+
     def redirect_to_twisted(self):
         """
         Redirect Eliot logs to Twisted.
@@ -581,8 +616,8 @@ class TwistedDestinationTests(TestCase):
 
         def got_event(event):
             if event.get("log_namespace") == "eliot":
-                written.append((event["log_level"].name,
-                                event["eliot"]))
+                written.append((event["log_level"].name, event["eliot"]))
+
         globalLogPublisher.addObserver(got_event)
         self.addCleanup(globalLogPublisher.removeObserver, got_event)
         destination = TwistedDestination()
@@ -611,9 +646,7 @@ class TwistedDestinationTests(TestCase):
         written = self.redirect_to_list()
         logger = Logger()
         Message.new(x=123, y=456).write(logger)
-        self.assertEqual(
-            writtenToTwisted, [("info", written[0])]
-        )
+        self.assertEqual(writtenToTwisted, [("info", written[0])])
 
     def test_tracebackMessages(self):
         """
@@ -631,6 +664,153 @@ class TwistedDestinationTests(TestCase):
             raiser()
         except Exception:
             write_traceback(logger)
-        self.assertEqual(
-            writtenToTwisted, [("critical", written[0])]
-        )
+        self.assertEqual(writtenToTwisted, [("critical", written[0])])
+
+
+class InlineCallbacksTests(TestCase):
+    """Tests for C{inline_callbacks}."""
+
+    # Get our custom assertion failure messages *and* the standard ones.
+    longMessage = True
+
+    def _a_b_test(self, logger, g):
+        """A yield was done in between messages a and b inside C{inline_callbacks}."""
+        with start_action(action_type="the-action"):
+            self.assertIs(None, self.successResultOf(g()))
+        assert_expected_action_tree(self, logger, "the-action", ["a", "yielded", "b"])
+
+    @capture_logging(None)
+    def test_yield_none(self, logger):
+        def g():
+            Message.log(message_type="a")
+            yield
+            Message.log(message_type="b")
+
+        g = inline_callbacks(g, debug=True)
+
+        self._a_b_test(logger, g)
+
+    @capture_logging(None)
+    def test_yield_fired_deferred(self, logger):
+        def g():
+            Message.log(message_type="a")
+            yield succeed(None)
+            Message.log(message_type="b")
+
+        g = inline_callbacks(g, debug=True)
+
+        self._a_b_test(logger, g)
+
+    @capture_logging(None)
+    def test_yield_unfired_deferred(self, logger):
+        waiting = Deferred()
+
+        def g():
+            Message.log(message_type="a")
+            yield waiting
+            Message.log(message_type="b")
+
+        g = inline_callbacks(g, debug=True)
+
+        with start_action(action_type="the-action"):
+            d = g()
+            self.assertNoResult(waiting)
+            waiting.callback(None)
+            self.assertIs(None, self.successResultOf(d))
+        assert_expected_action_tree(self, logger, "the-action", ["a", "yielded", "b"])
+
+    @capture_logging(None)
+    def test_returnValue(self, logger):
+        result = object()
+
+        @inline_callbacks
+        def g():
+            if False:
+                yield
+            returnValue(result)
+
+        with start_action(action_type="the-action"):
+            d = g()
+            self.assertIs(result, self.successResultOf(d))
+
+        assert_expected_action_tree(self, logger, "the-action", [])
+
+    @capture_logging(None)
+    def test_returnValue_in_action(self, logger):
+        result = object()
+
+        @inline_callbacks
+        def g():
+            if False:
+                yield
+            with start_action(action_type="g"):
+                returnValue(result)
+
+        with start_action(action_type="the-action"):
+            d = g()
+            self.assertIs(result, self.successResultOf(d))
+
+        assert_expected_action_tree(self, logger, "the-action", [{"g": []}])
+
+    @capture_logging(None)
+    def test_nested_returnValue(self, logger):
+        result = object()
+        another = object()
+
+        def g():
+            d = h()
+            # Run h through to the end but ignore its result.
+            yield d
+            # Give back _our_ result.
+            returnValue(result)
+
+        g = inline_callbacks(g, debug=True)
+
+        def h():
+            yield
+            returnValue(another)
+
+        h = inline_callbacks(h, debug=True)
+
+        with start_action(action_type="the-action"):
+            d = g()
+            self.assertIs(result, self.successResultOf(d))
+
+        assert_expected_action_tree(self, logger, "the-action", ["yielded", "yielded"])
+
+    @capture_logging(None)
+    def test_async_returnValue(self, logger):
+        result = object()
+        waiting = Deferred()
+
+        @inline_callbacks
+        def g():
+            yield waiting
+            returnValue(result)
+
+        with start_action(action_type="the-action"):
+            d = g()
+            waiting.callback(None)
+            self.assertIs(result, self.successResultOf(d))
+
+    @capture_logging(None)
+    def test_nested_async_returnValue(self, logger):
+        result = object()
+        another = object()
+
+        waiting = Deferred()
+
+        @inline_callbacks
+        def g():
+            yield h()
+            returnValue(result)
+
+        @inline_callbacks
+        def h():
+            yield waiting
+            returnValue(another)
+
+        with start_action(action_type="the-action"):
+            d = g()
+            waiting.callback(None)
+            self.assertIs(result, self.successResultOf(d))
diff --git a/eliot/tests/test_util.py b/eliot/tests/test_util.py
index 3f7570e..88d53e5 100644
--- a/eliot/tests/test_util.py
+++ b/eliot/tests/test_util.py
@@ -14,6 +14,7 @@ class LoadModuleTests(TestCase):
     """
     Tests for L{load_module}.
     """
+
     maxDiff = None
 
     def test_returns_module(self):
@@ -42,4 +43,5 @@ class LoadModuleTests(TestCase):
         # Demonstrate that override applies to copy but not original:
         self.assertEqual(
             dict(original=pprint.pformat(123), loaded=loaded.pformat(123)),
-            dict(original='123', loaded="OVERRIDE"))
+            dict(original="123", loaded="OVERRIDE"),
+        )
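
The `test_util.py` change above touches a test that pins down `load_module`'s contract: attribute overrides apply to the freshly loaded copy, never to the original module. A stdlib sketch of that contract (hypothetical helper name and signature; not Eliot's actual `load_module` API):

```python
import importlib.util
import pprint


def load_module_copy(name, original, overrides):
    """Load a fresh copy of `original` and apply `overrides` to the copy only.

    Sketch of the behavior LoadModuleTests verifies: the override is visible
    on the loaded copy, while the original module object is left untouched.
    """
    spec = importlib.util.find_spec(original.__name__)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # fresh, independent module object
    module.__name__ = name
    for key, value in overrides.items():
        setattr(module, key, value)
    return module


loaded = load_module_copy(
    "copy_of_pprint", pprint, {"pformat": lambda obj: "OVERRIDE"}
)
```

After this, `pprint.pformat(123)` still returns `"123"` while `loaded.pformat(123)` returns `"OVERRIDE"`, matching the `original`/`loaded` comparison in the test.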
diff --git a/eliot/tests/test_validation.py b/eliot/tests/test_validation.py
index 47e35be..753d922 100644
--- a/eliot/tests/test_validation.py
+++ b/eliot/tests/test_validation.py
@@ -14,7 +14,8 @@ from .._validation import (
     ActionType,
     ValidationError,
     fields,
-    _MessageSerializer, )
+    _MessageSerializer,
+)
 from .._action import start_action, startTask
 from .._output import MemoryLogger
 from ..serializers import identity
@@ -31,7 +32,7 @@ class TypedFieldTests(TestCase):
         L{Field.validate} will not raise an exception if the given value is in
         the list of supported classes.
         """
-        field = Field.forTypes("path", [unicode, int], u"A path!")
+        field = Field.forTypes("path", [unicode, int], "A path!")
         field.validate(123)
         field.validate("hello")
 
@@ -40,7 +41,7 @@ class TypedFieldTests(TestCase):
         When given a "class" of C{None}, L{Field.validate} will support
         validating C{None}.
         """
-        field = Field.forTypes("None", [None], u"Nothing!")
+        field = Field.forTypes("None", [None], "Nothing!")
         field.validate(None)
 
     def test_validateWrongType(self):
@@ -48,7 +49,7 @@ class TypedFieldTests(TestCase):
         L{Field.validate} will raise a L{ValidationError} exception if the
         given value's type is not in the list of supported classes.
         """
-        field = Field.forTypes("key", [int], u"An integer key")
+        field = Field.forTypes("key", [int], "An integer key")
         self.assertRaises(ValidationError, field.validate, "lala")
         self.assertRaises(ValidationError, field.validate, None)
         self.assertRaises(ValidationError, field.validate, object())
@@ -65,7 +66,7 @@ class TypedFieldTests(TestCase):
             else:
                 raise ValidationError("too small")
 
-        field = Field.forTypes("key", [int], u"An integer key", validate)
+        field = Field.forTypes("key", [int], "An integer key", validate)
         field.validate(11)
 
     def test_extraValidatorFails(self):
@@ -80,7 +81,7 @@ class TypedFieldTests(TestCase):
             else:
                 raise ValidationError("too small")
 
-        field = Field.forTypes("key", [int], u"An int", validate)
+        field = Field.forTypes("key", [int], "An int", validate)
         self.assertRaises(ValidationError, field.validate, 10)
 
     def test_onlyValidTypes(self):
@@ -111,8 +112,8 @@ class FieldTests(TestCase):
         """
         L{Field.description} stores the passed in description.
         """
-        field = Field("path", identity, u"A path!")
-        self.assertEqual(field.description, u"A path!")
+        field = Field("path", identity, "A path!")
+        self.assertEqual(field.description, "A path!")
 
     def test_optionalDescription(self):
         """
@@ -125,22 +126,22 @@ class FieldTests(TestCase):
         """
         L{Field.key} stores the passed in field key.
         """
-        field = Field("path", identity, u"A path!")
-        self.assertEqual(field.key, u"path")
+        field = Field("path", identity, "A path!")
+        self.assertEqual(field.key, "path")
 
     def test_serialize(self):
         """
         L{Field.serialize} calls the given serializer function.
         """
         result = []
-        Field("key", result.append, u"field").serialize(123)
+        Field("key", result.append, "field").serialize(123)
         self.assertEqual(result, [123])
 
     def test_serializeResult(self):
         """
         L{Field.serialize} returns the result of the given serializer function.
         """
-        result = Field("key", lambda obj: 456, u"field").serialize(None)
+        result = Field("key", lambda obj: 456, "field").serialize(None)
         self.assertEqual(result, 456)
 
     def test_serializeCallsValidate(self):
@@ -155,14 +156,14 @@ class FieldTests(TestCase):
         def serialize(obj):
             raise MyException()
 
-        field = Field("key", serialize, u"")
+        field = Field("key", serialize, "")
         self.assertRaises(MyException, field.validate, 123)
 
     def test_noExtraValidator(self):
         """
         L{Field.validate} doesn't break if there is no extra validator.
         """
-        field = Field("key", identity, u"")
+        field = Field("key", identity, "")
         field.validate(123)
 
     def test_extraValidatorPasses(self):
@@ -177,7 +178,7 @@ class FieldTests(TestCase):
             else:
                 raise ValidationError("too small")
 
-        field = Field("path", identity, u"A path!", validate)
+        field = Field("path", identity, "A path!", validate)
         field.validate(11)
 
     def test_extraValidatorFails(self):
@@ -192,7 +193,7 @@ class FieldTests(TestCase):
             else:
                 raise ValidationError("too small")
 
-        field = Field("path", identity, u"A path!", validate)
+        field = Field("path", identity, "A path!", validate)
         self.assertRaises(ValidationError, field.validate, 10)
 
 
@@ -248,27 +249,30 @@ class FieldsTests(TestCase):
         L{fields} accepts positional arguments of L{Field} instances and
         combines them with fields specified as keyword arguments.
         """
-        a_field = Field(u'akey', identity)
+        a_field = Field("akey", identity)
         l = fields(a_field, another=str)
         self.assertIn(a_field, l)
-        self.assertEqual({(type(field), field.key)
-                          for field in l}, {(Field, 'akey'),
-                                            (Field, 'another')})
+        self.assertEqual(
+            {(type(field), field.key) for field in l},
+            {(Field, "akey"), (Field, "another")},
+        )
 
     def test_keys(self):
         """
         L{fields} creates L{Field} instances with the given keys.
         """
         l = fields(key=int, status=str)
-        self.assertEqual({(type(field), field.key)
-                          for field in l}, {(Field, "key"), (Field, "status")})
+        self.assertEqual(
+            {(type(field), field.key) for field in l},
+            {(Field, "key"), (Field, "status")},
+        )
 
     def test_validTypes(self):
         """
         The L{Field} instances constructed by L{fields} validate the specified
         types.
         """
-        field, = fields(key=int)
+        (field,) = fields(key=int)
         self.assertRaises(ValidationError, field.validate, "abc")
 
     def test_noSerialization(self):
@@ -276,7 +280,7 @@ class FieldsTests(TestCase):
         The L{Field} instances constructed by L{fields} do no special
         serialization.
         """
-        field, = fields(key=int)
+        (field,) = fields(key=int)
         self.assertEqual(field.serialize("abc"), "abc")
 
 
@@ -291,10 +295,14 @@ class MessageSerializerTests(TestCase):
         constructed with more than one object per field name.
         """
         self.assertRaises(
-            ValueError, _MessageSerializer, [
+            ValueError,
+            _MessageSerializer,
+            [
                 Field("akey", identity, ""),
                 Field("akey", identity, ""),
-                Field("message_type", identity, "")])
+                Field("message_type", identity, ""),
+            ],
+        )
 
     def test_noBothTypeFields(self):
         """
@@ -302,9 +310,10 @@ class MessageSerializerTests(TestCase):
         constructed with both a C{"message_type"} and C{"action_type"} field.
         """
         self.assertRaises(
-            ValueError, _MessageSerializer, [
-                Field("message_type", identity, ""),
-                Field("action_type", identity, "")])
+            ValueError,
+            _MessageSerializer,
+            [Field("message_type", identity, ""), Field("action_type", identity, "")],
+        )
 
     def test_missingTypeField(self):
         """
@@ -319,9 +328,10 @@ class MessageSerializerTests(TestCase):
         a C{"task_level"} field included.
         """
         self.assertRaises(
-            ValueError, _MessageSerializer, [
-                Field("message_type", identity, ""),
-                Field("task_level", identity, "")])
+            ValueError,
+            _MessageSerializer,
+            [Field("message_type", identity, ""), Field("task_level", identity, "")],
+        )
 
     def test_noTaskUuid(self):
         """
@@ -329,9 +339,10 @@ class MessageSerializerTests(TestCase):
         a C{"task_uuid"} field included.
         """
         self.assertRaises(
-            ValueError, _MessageSerializer, [
-                Field("message_type", identity, ""),
-                Field("task_uuid", identity, "")])
+            ValueError,
+            _MessageSerializer,
+            [Field("message_type", identity, ""), Field("task_uuid", identity, "")],
+        )
 
     def test_noTimestamp(self):
         """
@@ -339,9 +350,10 @@ class MessageSerializerTests(TestCase):
         a C{"timestamp"} field included.
         """
         self.assertRaises(
-            ValueError, _MessageSerializer, [
-                Field("message_type", identity, ""),
-                Field("timestamp", identity, "")])
+            ValueError,
+            _MessageSerializer,
+            [Field("message_type", identity, ""), Field("timestamp", identity, "")],
+        )
 
     def test_noUnderscoreStart(self):
         """
@@ -349,18 +361,22 @@ class MessageSerializerTests(TestCase):
         a field included whose name starts with C{"_"}.
         """
         self.assertRaises(
-            ValueError, _MessageSerializer, [
-                Field("message_type", identity, ""),
-                Field("_key", identity, "")])
+            ValueError,
+            _MessageSerializer,
+            [Field("message_type", identity, ""), Field("_key", identity, "")],
+        )
 
     def test_serialize(self):
         """
         L{_MessageSerializer.serialize} will serialize all values in the given
         dictionary using the respective L{Field}.
         """
-        serializer = _MessageSerializer([
-            Field.forValue("message_type", "mymessage", u"The type"),
-            Field("length", len, "The length of a thing"), ])
+        serializer = _MessageSerializer(
+            [
+                Field.forValue("message_type", "mymessage", "The type"),
+                Field("length", len, "The length of a thing"),
+            ]
+        )
         message = {"message_type": "mymessage", "length": "thething"}
         serializer.serialize(message)
         self.assertEqual(message, {"message_type": "mymessage", "length": 8})
@@ -373,30 +389,28 @@ class MessageSerializerTests(TestCase):
         Logging attempts to capture everything, with minimal work; with any
         luck this value is JSON-encodable. Unit tests should catch such bugs, in any case.
         """
-        serializer = _MessageSerializer([
-            Field.forValue("message_type", "mymessage", u"The type"),
-            Field("length", len, "The length of a thing"), ])
-        message = {
-            "message_type": "mymessage",
-            "length": "thething",
-            "extra": 123}
+        serializer = _MessageSerializer(
+            [
+                Field.forValue("message_type", "mymessage", "The type"),
+                Field("length", len, "The length of a thing"),
+            ]
+        )
+        message = {"message_type": "mymessage", "length": "thething", "extra": 123}
         serializer.serialize(message)
         self.assertEqual(
-            message, {"message_type": "mymessage",
-                      "length": 8,
-                      "extra": 123})
+            message, {"message_type": "mymessage", "length": 8, "extra": 123}
+        )
 
     def test_fieldInstances(self):
         """
         Fields to L{_MessageSerializer.__init__} should be instances of
         L{Field}.
         """
-        a_field = Field('a_key', identity)
+        a_field = Field("a_key", identity)
         arg = object()
         with self.assertRaises(TypeError) as cm:
             _MessageSerializer([a_field, arg])
-        self.assertEqual((u'Expected a Field instance but got', arg),
-                         cm.exception.args)
+        self.assertEqual(("Expected a Field instance but got", arg), cm.exception.args)
 
 
 class MessageTypeTests(TestCase):
@@ -409,9 +423,10 @@ class MessageTypeTests(TestCase):
         Return a L{MessageType} suitable for unit tests.
         """
         return MessageType(
-            u"myapp:mysystem", [
-                Field.forTypes(u"key", [int], u""),
-                Field.forTypes(u"value", [int], u"")], u"A message type")
+            "myapp:mysystem",
+            [Field.forTypes("key", [int], ""), Field.forTypes("value", [int], "")],
+            "A message type",
+        )
 
     def test_validateMissingType(self):
         """
@@ -420,9 +435,8 @@ class MessageTypeTests(TestCase):
         """
         messageType = self.messageType()
         self.assertRaises(
-            ValidationError, messageType._serializer.validate, {
-                "key": 1,
-                "value": 2})
+            ValidationError, messageType._serializer.validate, {"key": 1, "value": 2}
+        )
 
     def test_validateWrongType(self):
         """
@@ -432,10 +446,10 @@ class MessageTypeTests(TestCase):
         """
         messageType = self.messageType()
         self.assertRaises(
-            ValidationError, messageType._serializer.validate, {
-                "key": 1,
-                "value": 2,
-                "message_type": "wrong"})
+            ValidationError,
+            messageType._serializer.validate,
+            {"key": 1, "value": 2, "message_type": "wrong"},
+        )
 
     def test_validateExtraField(self):
         """
@@ -444,11 +458,10 @@ class MessageTypeTests(TestCase):
         """
         messageType = self.messageType()
         self.assertRaises(
-            ValidationError, messageType._serializer.validate, {
-                "key": 1,
-                "value": 2,
-                "message_type": "myapp:mysystem",
-                "extra": "hello"})
+            ValidationError,
+            messageType._serializer.validate,
+            {"key": 1, "value": 2, "message_type": "myapp:mysystem", "extra": "hello"},
+        )
 
     def test_validateMissingField(self):
         """
@@ -457,9 +470,10 @@ class MessageTypeTests(TestCase):
         """
         messageType = self.messageType()
         self.assertRaises(
-            ValidationError, messageType._serializer.validate, {
-                "key": 1,
-                "message_type": "myapp:mysystem"})
+            ValidationError,
+            messageType._serializer.validate,
+            {"key": 1, "message_type": "myapp:mysystem"},
+        )
 
     def test_validateFieldValidation(self):
         """
@@ -469,10 +483,10 @@ class MessageTypeTests(TestCase):
         """
         messageType = self.messageType()
         self.assertRaises(
-            ValidationError, messageType._serializer.validate, {
-                "key": 1,
-                "value": None,
-                "message_type": "myapp:mysystem"})
+            ValidationError,
+            messageType._serializer.validate,
+            {"key": 1, "value": None, "message_type": "myapp:mysystem"},
+        )
 
     def test_validateStandardFields(self):
         """
@@ -480,13 +494,16 @@ class MessageTypeTests(TestCase):
         dictionary has the standard fields that are added to all messages.
         """
         messageType = self.messageType()
-        messageType._serializer.validate({
-            "key": 1,
-            "value": 2,
-            "message_type": "myapp:mysystem",
-            "task_level": "/",
-            "task_uuid": "123",
-            "timestamp": "xxx"})
+        messageType._serializer.validate(
+            {
+                "key": 1,
+                "value": 2,
+                "message_type": "myapp:mysystem",
+                "task_level": "/",
+                "task_uuid": "123",
+                "timestamp": "xxx",
+            }
+        )
 
     def test_call(self):
         """
@@ -495,8 +512,7 @@ class MessageTypeTests(TestCase):
         """
         messageType = self.messageType()
         message = messageType()
-        self.assertEqual(
-            message._contents, {"message_type": messageType.message_type})
+        self.assertEqual(message._contents, {"message_type": messageType.message_type})
 
     def test_callSerializer(self):
         """
@@ -515,10 +531,9 @@ class MessageTypeTests(TestCase):
         messageType = self.messageType()
         message = messageType(key=2, value=3)
         self.assertEqual(
-            message._contents, {
-                "message_type": messageType.message_type,
-                "key": 2,
-                "value": 3})
+            message._contents,
+            {"message_type": messageType.message_type, "key": 2, "value": 3},
+        )
 
     def test_logCallsDefaultLoggerWrite(self):
         """
@@ -530,17 +545,16 @@ class MessageTypeTests(TestCase):
         self.addCleanup(remove_destination, messages.append)
         message_type = self.messageType()
         message_type.log(key=1234, value=3)
-        self.assertEqual(messages[0][u"key"], 1234)
-        self.assertEqual(messages[0][u"value"], 3)
-        self.assertEqual(
-            messages[0][u"message_type"], message_type.message_type)
+        self.assertEqual(messages[0]["key"], 1234)
+        self.assertEqual(messages[0]["value"], 3)
+        self.assertEqual(messages[0]["message_type"], message_type.message_type)
 
     def test_description(self):
         """
         L{MessageType.description} stores the passed in description.
         """
         messageType = self.messageType()
-        self.assertEqual(messageType.description, u"A message type")
+        self.assertEqual(messageType.description, "A message type")
 
     def test_optionalDescription(self):
         """
@@ -577,7 +591,8 @@ class ActionTypeTestsMixin(object):
             "myapp:mysystem:myaction",
             [Field.forTypes("key", [int], "")],  # start fields
             [Field.forTypes("value", [int], "")],  # success fields
-            "A action type")
+            "A action type",
+        )
 
     def test_validateMissingType(self):
         """
@@ -588,7 +603,8 @@ class ActionTypeTestsMixin(object):
         message = self.getValidMessage()
         del message["action_type"]
         self.assertRaises(
-            ValidationError, self.getSerializer(actionType).validate, message)
+            ValidationError, self.getSerializer(actionType).validate, message
+        )
 
     def test_validateWrongType(self):
         """
@@ -599,7 +615,8 @@ class ActionTypeTestsMixin(object):
         message = self.getValidMessage()
         message["action_type"] = "xxx"
         self.assertRaises(
-            ValidationError, self.getSerializer(actionType).validate, message)
+            ValidationError, self.getSerializer(actionType).validate, message
+        )
 
     def test_validateExtraField(self):
         """
@@ -610,7 +627,8 @@ class ActionTypeTestsMixin(object):
         message = self.getValidMessage()
         message["extra"] = "ono"
         self.assertRaises(
-            ValidationError, self.getSerializer(actionType).validate, message)
+            ValidationError, self.getSerializer(actionType).validate, message
+        )
 
     def test_validateMissingField(self):
         """
@@ -624,7 +642,8 @@ class ActionTypeTestsMixin(object):
                 del message[key]
                 break
         self.assertRaises(
-            ValidationError, self.getSerializer(actionType).validate, message)
+            ValidationError, self.getSerializer(actionType).validate, message
+        )
 
     def test_validateFieldValidation(self):
         """
@@ -638,7 +657,8 @@ class ActionTypeTestsMixin(object):
                 message[key] = object()
                 break
         self.assertRaises(
-            ValidationError, self.getSerializer(actionType).validate, message)
+            ValidationError, self.getSerializer(actionType).validate, message
+        )
 
     def test_validateStandardFields(self):
         """
@@ -647,10 +667,7 @@ class ActionTypeTestsMixin(object):
         """
         actionType = self.actionType()
         message = self.getValidMessage()
-        message.update({
-            "task_level": "/",
-            "task_uuid": "123",
-            "timestamp": "xxx"})
+        message.update({"task_level": "/", "task_uuid": "123", "timestamp": "xxx"})
         self.getSerializer(actionType).validate(message)
 
 
@@ -666,7 +683,8 @@ class ActionTypeStartMessage(TestCase, ActionTypeTestsMixin):
         return {
             "action_type": "myapp:mysystem:myaction",
             "action_status": "started",
-            "key": 1}
+            "key": 1,
+        }
 
     def getSerializer(self, actionType):
         return actionType._serializers.start
@@ -684,7 +702,8 @@ class ActionTypeSuccessMessage(TestCase, ActionTypeTestsMixin):
         return {
             "action_type": "myapp:mysystem:myaction",
             "action_status": "succeeded",
-            "value": 2}
+            "value": 2,
+        }
 
     def getSerializer(self, actionType):
         return actionType._serializers.success
@@ -703,7 +722,8 @@ class ActionTypeFailureMessage(TestCase, ActionTypeTestsMixin):
             "action_type": "myapp:mysystem:myaction",
             "action_status": "failed",
             "exception": "exceptions.RuntimeError",
-            "reason": "because", }
+            "reason": "because",
+        }
 
     def getSerializer(self, actionType):
         return actionType._serializers.failure
@@ -715,10 +735,7 @@ class ActionTypeFailureMessage(TestCase, ActionTypeTestsMixin):
         """
         actionType = self.actionType()
         message = self.getValidMessage()
-        message.update({
-            "task_level": "/",
-            "task_uuid": "123",
-            "timestamp": "xxx"})
+        message.update({"task_level": "/", "task_uuid": "123", "timestamp": "xxx"})
         message.update({"extra_field": "hello"})
         self.getSerializer(actionType).validate(message)
 
@@ -772,14 +789,18 @@ class ActionTypeTests(TestCase):
         """
         called = []
         actionType = self.actionType()
-        actionType._start_action = lambda *args, **kwargs: called.append(
-            (args, kwargs))
+        actionType._start_action = lambda *args, **kwargs: called.append((args, kwargs))
         logger = object()
         actionType(logger, key=5)
         self.assertEqual(
             called,
-            [((logger, "myapp:mysystem:myaction", actionType._serializers), {
-                "key": 5})])
+            [
+                (
+                    (logger, "myapp:mysystem:myaction", actionType._serializers),
+                    {"key": 5},
+                )
+            ],
+        )
 
     def test_defaultStartAction(self):
         """
@@ -803,14 +824,18 @@ class ActionTypeTests(TestCase):
         """
         called = []
         actionType = self.actionType()
-        actionType._startTask = lambda *args, **kwargs: called.append(
-            (args, kwargs))
+        actionType._startTask = lambda *args, **kwargs: called.append((args, kwargs))
         logger = object()
         actionType.as_task(logger, key=5)
         self.assertEqual(
             called,
-            [((logger, "myapp:mysystem:myaction", actionType._serializers), {
-                "key": 5})])
+            [
+                (
+                    (logger, "myapp:mysystem:myaction", actionType._serializers),
+                    {"key": 5},
+                )
+            ],
+        )
 
     def test_defaultStartTask(self):
         """
@@ -845,13 +870,18 @@ class EndToEndValidationTests(TestCase):
     Test validation of messages created using L{MessageType} and
     L{ActionType}.
     """
+
     MESSAGE = MessageType(
-        "myapp:mymessage", [Field.forTypes("key", [int], "The key")],
-        "A message for testing.")
+        "myapp:mymessage",
+        [Field.forTypes("key", [int], "The key")],
+        "A message for testing.",
+    )
     ACTION = ActionType(
-        "myapp:myaction", [Field.forTypes("key", [int], "The key")], [
-            Field.forTypes("result", [unicode], "The result")],
-        "An action for testing.")
+        "myapp:myaction",
+        [Field.forTypes("key", [int], "The key")],
+        [Field.forTypes("result", [unicode], "The result")],
+        "An action for testing.",
+    )
 
     def test_correctFromMessageType(self):
         """
diff --git a/eliot/twisted.py b/eliot/twisted.py
index 09716ba..5f83094 100644
--- a/eliot/twisted.py
+++ b/eliot/twisted.py
@@ -9,11 +9,18 @@ import sys
 
 from twisted.logger import Logger as TwistedLogger
 from twisted.python.failure import Failure
+from twisted.internet.defer import inlineCallbacks
 
 from ._action import current_action
 from . import addDestination
+from ._generators import eliot_friendly_generator_function
 
-__all__ = ["AlreadyFinished", "DeferredContext", "redirectLogsForTrial"]
+__all__ = [
+    "AlreadyFinished",
+    "DeferredContext",
+    "redirectLogsForTrial",
+    "inline_callbacks",
+]
 
 
 def _passthrough(result):
@@ -55,16 +62,17 @@ class DeferredContext(object):
         if self._action is None:
             raise RuntimeError(
                 "DeferredContext() should only be created in the context of "
-                "an eliot.Action.")
+                "an eliot.Action."
+            )
 
     def addCallbacks(
         self,
         callback,
-        errback,
+        errback=None,
         callbackArgs=None,
         callbackKeywords=None,
         errbackArgs=None,
-        errbackKeywords=None
+        errbackKeywords=None,
     ):
         """
         Add a pair of callbacks that will be run in the context of an eliot
@@ -79,6 +87,9 @@ class DeferredContext(object):
         if self._finishAdded:
             raise AlreadyFinished()
 
+        if errback is None:
+            errback = _passthrough
+
         def callbackWithContext(*args, **kwargs):
             return self._action.run(callback, *args, **kwargs)
 
@@ -86,8 +97,13 @@ class DeferredContext(object):
             return self._action.run(errback, *args, **kwargs)
 
         self.result.addCallbacks(
-            callbackWithContext, errbackWithContext, callbackArgs,
-            callbackKeywords, errbackArgs, errbackKeywords)
+            callbackWithContext,
+            errbackWithContext,
+            callbackArgs,
+            callbackKeywords,
+            errbackArgs,
+            errbackKeywords,
+        )
         return self
 
     def addCallback(self, callback, *args, **kw):
@@ -102,7 +118,8 @@ class DeferredContext(object):
             called. This indicates a programmer error.
         """
         return self.addCallbacks(
-            callback, _passthrough, callbackArgs=args, callbackKeywords=kw)
+            callback, _passthrough, callbackArgs=args, callbackKeywords=kw
+        )
 
     def addErrback(self, errback, *args, **kw):
         """
@@ -116,7 +133,8 @@ class DeferredContext(object):
             called. This indicates a programmer error.
         """
         return self.addCallbacks(
-            _passthrough, errback, errbackArgs=args, errbackKeywords=kw)
+            _passthrough, errback, errbackArgs=args, errbackKeywords=kw
+        )
 
     def addBoth(self, callback, *args, **kw):
         """
@@ -226,10 +244,7 @@ class _RedirectLogsForTrial(object):
 
         @return: The destination added to Eliot if any, otherwise L{None}.
         """
-        if (
-            os.path.basename(self._sys.argv[0]) == 'trial'
-            and not self._redirected
-        ):
+        if os.path.basename(self._sys.argv[0]) == "trial" and not self._redirected:
             self._redirected = True
             destination = TwistedDestination()
             addDestination(destination)
@@ -237,3 +252,16 @@ class _RedirectLogsForTrial(object):
 
 
 redirectLogsForTrial = _RedirectLogsForTrial(sys)
+
+
+def inline_callbacks(original, debug=False):
+    """
+    Decorate a function like ``inlineCallbacks`` would, but in a more
+    Eliot-friendly way.  Use it just like ``inlineCallbacks``, but where you
+    want Eliot action contexts to Do The Right Thing inside the decorated
+    function.
+    """
+    f = eliot_friendly_generator_function(original)
+    if debug:
+        f.debug = True
+    return inlineCallbacks(f)
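The new `inline_callbacks` decorator wraps the generator so that the Eliot action context that was active at call time is re-established every time the generator is resumed. A stdlib-only sketch of that save/restore idea (the names `CONTEXT` and `with_saved_context` are illustrative stand-ins, not Eliot's real internals in `eliot._generators`):

```python
import functools

# Stand-in for Eliot's "current action" context; purely illustrative.
CONTEXT = {"action": None}


def with_saved_context(func):
    """Re-establish the call-time context on every generator resumption."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        saved = CONTEXT["action"]  # context active when the function was called
        gen = func(*args, **kwargs)

        def run():
            nonlocal saved
            while True:
                previous = CONTEXT["action"]
                CONTEXT["action"] = saved  # restore before each step
                try:
                    value = next(gen)
                except StopIteration:
                    return
                finally:
                    saved = CONTEXT["action"]  # keep changes the generator made
                    CONTEXT["action"] = previous  # put the caller's context back
                yield value

        return run()

    return wrapper
```

Eliot's real `eliot_friendly_generator_function` also has to handle `send`/`throw`, and the exact effect of the `debug` flag set above is not shown here; this sketch only demonstrates the context bookkeeping that makes actions "Do The Right Thing" across yields.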
diff --git a/examples/asyncio_linkcheck.py b/examples/asyncio_linkcheck.py
new file mode 100644
index 0000000..28799f6
--- /dev/null
+++ b/examples/asyncio_linkcheck.py
@@ -0,0 +1,24 @@
+import asyncio
+import aiohttp
+from eliot import start_action, to_file
+to_file(open("linkcheck.log", "w"))
+
+
+async def check_links(urls):
+    async with aiohttp.ClientSession() as session:
+        with start_action(action_type="check_links", urls=urls):
+            for url in urls:
+                try:
+                    with start_action(action_type="download", url=url):
+                        async with session.get(url) as response:
+                            response.raise_for_status()
+                except Exception as e:
+                    raise ValueError(str(e))
+
+try:
+    loop = asyncio.get_event_loop()
+    loop.run_until_complete(
+        check_links(["http://eliot.readthedocs.io", "http://nosuchurl"])
+    )
+except ValueError:
+    print("Not all links were valid.")
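The nested `start_action` calls in this example work across `await` because Eliot tracks the active action in task-local state (note the `aiocontextvars` backport added to `setup.py` below). The behavior matches stdlib `contextvars`: coroutines awaited within the same task see the caller's context. A minimal stdlib-only demonstration, with `current_action` as an illustrative stand-in for Eliot's tracking:

```python
import asyncio
import contextvars

# Illustrative stand-in for Eliot's current-action tracking.
current_action = contextvars.ContextVar("current_action", default=None)


async def download(url):
    # A coroutine awaited in the same task sees the caller's context.
    return (current_action.get(), url)


async def check(url):
    current_action.set("check_links")
    return await download(url)


result = asyncio.run(check("http://example.invalid"))
```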
diff --git a/examples/journald.py b/examples/journald.py
index 86b40a1..4baf40e 100644
--- a/examples/journald.py
+++ b/examples/journald.py
@@ -4,7 +4,7 @@ Write some logs to journald.
 
 from __future__ import print_function
 
-from eliot import Message, start_action, add_destinations
+from eliot import log_message, start_action, add_destinations
 from eliot.journald import JournaldDestination
 
 add_destinations(JournaldDestination())
@@ -15,5 +15,5 @@ def divide(a, b):
         return a / b
 
 print(divide(10, 2))
-Message.log(message_type="inbetween")
+log_message(message_type="inbetween")
 print(divide(10, 0))
diff --git a/examples/logfile.py b/examples/logfile.py
index 8a09f46..e4b58d5 100644
--- a/examples/logfile.py
+++ b/examples/logfile.py
@@ -6,7 +6,7 @@ from __future__ import unicode_literals, print_function
 from twisted.internet.task import react
 
 from eliot.logwriter import ThreadedWriter
-from eliot import Message, FileDestination
+from eliot import log_message, FileDestination
 
 
 def main(reactor):
@@ -20,7 +20,7 @@ def main(reactor):
     logWriter.startService()
 
     # Log a message:
-    Message.log(value="hello", another=1)
+    log_message(message_type="test", value="hello", another=1)
 
     # Manually stop the service.
     done = logWriter.stopService()
diff --git a/examples/rometrip_actions.py b/examples/rometrip_actions.py
index 082b793..8158b86 100644
--- a/examples/rometrip_actions.py
+++ b/examples/rometrip_actions.py
@@ -1,5 +1,5 @@
 from sys import stdout
-from eliot import start_action, start_task, to_file
+from eliot import start_action, to_file
 to_file(stdout)
 
 
@@ -16,7 +16,7 @@ class Place(object):
 
 
 def honeymoon(family, destination):
-    with start_task(action_type="honeymoon", people=family):
+    with start_action(action_type="honeymoon", people=family):
         destination.visited(family)
 
 
diff --git a/examples/rometrip_messages.py b/examples/rometrip_messages.py
deleted file mode 100644
index c89f19c..0000000
--- a/examples/rometrip_messages.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from sys import stdout
-from eliot import Message, to_file
-to_file(stdout)
-
-
-class Place(object):
-    def __init__(self, name, contained=()):
-        self.name = name
-        self.contained = contained
-
-    def visited(self, people):
-        Message.log(message_type="visited",
-                    people=people, place=self.name)
-        for thing in self.contained:
-            thing.visited(people)
-
-
-def honeymoon(family, destination):
-    Message.log(message_type="honeymoon", people=family)
-    destination.visited(family)
-
-
-honeymoon(["Mrs. Casaubon", "Mr. Casaubon"],
-          Place("Rome, Italy",
-                [Place("Vatican Museum",
-                       [Place("Statue #1"), Place("Statue #2")])]))
diff --git a/examples/stdout.py b/examples/stdout.py
index 82cbc64..8c28e03 100644
--- a/examples/stdout.py
+++ b/examples/stdout.py
@@ -6,14 +6,14 @@ from __future__ import unicode_literals
 import sys
 import time
 
-from eliot import Message, to_file
+from eliot import log_message, to_file
 to_file(sys.stdout)
 
 
 def main():
-    Message.log(value="hello", another=1)
+    log_message(message_type="test", value="hello", another=1)
     time.sleep(0.2)
-    Message.log(value="goodbye", another=2)
+    log_message(message_type="test", value="goodbye", another=2)
 
 
 if __name__ == '__main__':
diff --git a/examples/trio_say.py b/examples/trio_say.py
new file mode 100644
index 0000000..a7845b9
--- /dev/null
+++ b/examples/trio_say.py
@@ -0,0 +1,17 @@
+from eliot import start_action, to_file
+import trio
+
+to_file(open("trio.log", "w"))
+
+
+async def say(message, delay):
+    with start_action(action_type="say", message=message):
+        await trio.sleep(delay)
+
+async def main():
+    with start_action(action_type="main"):
+        async with trio.open_nursery() as nursery:
+            nursery.start_soon(say, "hello", 1)
+            nursery.start_soon(say, "world", 2)
+
+trio.run(main)
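Both new examples write one JSON object per line; each message carries the `task_uuid`, `task_level`, `action_type`, and `action_status` fields exercised by the validation tests above, so the action tree can be reconstructed by grouping on `task_uuid` and sorting on `task_level`. A sketch using hypothetical log lines (real output includes more fields, such as `timestamp`):

```python
import json

# Hypothetical log lines in the shape Eliot writes to trio.log / linkcheck.log.
lines = [
    '{"task_uuid": "u1", "task_level": [2, 1], "action_type": "say", "action_status": "started"}',
    '{"task_uuid": "u1", "task_level": [1], "action_type": "main", "action_status": "started"}',
    '{"task_uuid": "u1", "task_level": [2, 2], "action_type": "say", "action_status": "succeeded"}',
    '{"task_uuid": "u1", "task_level": [3], "action_type": "main", "action_status": "succeeded"}',
]
messages = [json.loads(line) for line in lines]
# Lexicographic order on task_level gives the causal order within one task.
messages.sort(key=lambda m: m["task_level"])
```

In practice the `eliot-prettyprint` console script declared in `setup.py`'s `entry_points` does this kind of rendering for you.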
diff --git a/setup.py b/setup.py
index b9b9bba..e7de861 100644
--- a/setup.py
+++ b/setup.py
@@ -13,26 +13,24 @@ def read(path):
 
 setup(
     classifiers=[
-        'Intended Audience :: Developers',
-        'License :: OSI Approved :: Apache Software License',
-        'Operating System :: OS Independent',
-        'Programming Language :: Python',
-        'Programming Language :: Python :: 2',
-        'Programming Language :: Python :: 2.7',
-        'Programming Language :: Python :: 3',
-        'Programming Language :: Python :: 3.4',
-        'Programming Language :: Python :: 3.5',
-        'Programming Language :: Python :: 3.6',
-        'Programming Language :: Python :: 3.7',
-        'Programming Language :: Python :: Implementation :: CPython',
-        'Programming Language :: Python :: Implementation :: PyPy',
-        'Topic :: System :: Logging',
+        "Intended Audience :: Developers",
+        "License :: OSI Approved :: Apache Software License",
+        "Operating System :: OS Independent",
+        "Programming Language :: Python",
+        "Programming Language :: Python :: 3",
+        "Programming Language :: Python :: 3.6",
+        "Programming Language :: Python :: 3.7",
+        "Programming Language :: Python :: 3.8",
+        "Programming Language :: Python :: 3.9",
+        "Programming Language :: Python :: Implementation :: CPython",
+        "Programming Language :: Python :: Implementation :: PyPy",
+        "Topic :: System :: Logging",
     ],
-    name='eliot',
+    name="eliot",
     version=versioneer.get_version(),
     cmdclass=versioneer.get_cmdclass(),
     description="Logging library that tells you why it happened",
-    python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
+    python_requires=">=3.6.0",
     install_requires=[
         # Python 3 compatibility:
         "six",
@@ -40,13 +38,23 @@ setup(
         "zope.interface",
         # Persistent objects for Python:
         "pyrsistent >= 0.11.8",  # version with multi-type pvector/pmap_field
-        # Easy decorators:
-        "boltons"
+        # Better decorators, with version that works better with type annotations:
+        "boltons >= 19.0.1",
+        # Backwards compatibility for Python 3.5 and 3.6:
+        'aiocontextvars;python_version<"3.7" and python_version>"2.7"',
     ],
     extras_require={
         "journald": [
             # We use cffi to talk to the journald API:
-            "cffi >= 1.1.2",  # significant API changes in older releases
+            "cffi >= 1.1.2"  # significant API changes in older releases
+        ],
+        "test": [
+            # Bug-seeking missile:
+            "hypothesis >= 1.14.0",
+            # Tasteful testing for Python:
+            "testtools",
+            "pytest",
+            "pytest-xdist",
         ],
         "dev": [
             # Ensure we can do python_requires correctly:
@@ -55,26 +63,18 @@ setup(
             "twine >= 1.12.1",
             # Allows us to measure code coverage:
             "coverage",
-            # Bug-seeking missile:
-            "hypothesis >= 1.14.0",
-            # Tasteful testing for Python:
-            "testtools",
             "sphinx",
             "sphinx_rtd_theme",
             "flake8",
-            "yapf"
-        ]
-    },
-    entry_points={
-        'console_scripts': [
-            'eliot-prettyprint = eliot.prettyprint:_main',
-        ]
+            "black",
+        ],
     },
+    entry_points={"console_scripts": ["eliot-prettyprint = eliot.prettyprint:_main"]},
     keywords="logging",
     license="Apache 2.0",
     packages=["eliot", "eliot.tests"],
     url="https://github.com/itamarst/eliot/",
-    maintainer='Itamar Turner-Trauring',
-    maintainer_email='itamar@itamarst.org',
-    long_description=read('README.rst'),
+    maintainer="Itamar Turner-Trauring",
+    maintainer_email="itamar@itamarst.org",
+    long_description=read("README.rst"),
 )