python-eliot / upstream/1.13.0+git20210519.1.8609d38
Import upstream version 1.13.0+git20210519.1.8609d38 Debian Janitor 2 years ago
19 changed file(s) with 343 addition(s) and 71 deletion(s).
00 Metadata-Version: 2.1
11 Name: eliot
2 Version: 1.11.0
2 Version: 0+unknown
33 Summary: Logging library that tells you why it happened
44 Home-page: https://github.com/itamarst/eliot/
55 Maintainer: Itamar Turner-Trauring
3636
3737 Eliot is only used to generate your logs; you might need tools like Logstash and ElasticSearch to aggregate and store logs if you are using multiple processes across multiple machines.
3838
39 Eliot supports Python 3.5, 3.6, 3.7, and 3.8, as well as PyPy3.
39 Eliot supports Python 3.6, 3.7, 3.8, and 3.9, as well as PyPy3.
4040 It is maintained by Itamar Turner-Trauring, and released under the Apache 2.0 License.
4141
4242 Python 2.7 is in legacy support mode, with the last release supported being 1.7; see `here <https://eliot.readthedocs.io/en/stable/python2.html>`_ for details.
6363 Classifier: Operating System :: OS Independent
6464 Classifier: Programming Language :: Python
6565 Classifier: Programming Language :: Python :: 3
66 Classifier: Programming Language :: Python :: 3.5
6766 Classifier: Programming Language :: Python :: 3.6
6867 Classifier: Programming Language :: Python :: 3.7
6968 Classifier: Programming Language :: Python :: 3.8
69 Classifier: Programming Language :: Python :: 3.9
7070 Classifier: Programming Language :: Python :: Implementation :: CPython
7171 Classifier: Programming Language :: Python :: Implementation :: PyPy
7272 Classifier: Topic :: System :: Logging
73 Requires-Python: >=3.5.3
73 Requires-Python: >=3.6.0
74 Provides-Extra: dev
75 Provides-Extra: journald
7476 Provides-Extra: test
75 Provides-Extra: journald
76 Provides-Extra: dev
2828
2929 Eliot is only used to generate your logs; you might need tools like Logstash and ElasticSearch to aggregate and store logs if you are using multiple processes across multiple machines.
3030
31 Eliot supports Python 3.5, 3.6, 3.7, and 3.8, as well as PyPy3.
31 Eliot supports Python 3.6, 3.7, 3.8, and 3.9, as well as PyPy3.
3232 It is maintained by Itamar Turner-Trauring, and released under the Apache 2.0 License.
3333
3434 Python 2.7 is in legacy support mode, with the last release supported being 1.7; see `here <https://eliot.readthedocs.io/en/stable/python2.html>`_ for details.
232232 self.assertEqual(servers, [msg.message["server"] for msg in messages])
233233
234234
235 Custom JSON encoding
236 --------------------
237
238 Just like a ``FileDestination`` can have a custom JSON encoder, your tests can too, letting you validate your messages with that encoder:
239
240 .. code-block:: python
241
242 from unittest import TestCase
243 from eliot.json import EliotJSONEncoder
244 from eliot.testing import capture_logging
245
246 class MyClass:
247 def __init__(self, x):
248 self.x = x
249
250 class MyEncoder(EliotJSONEncoder):
251 def default(self, obj):
252 if isinstance(obj, MyClass):
253 return {"x": obj.x}
254 return EliotJSONEncoder.default(self, obj)
255
256 class LoggingTests(TestCase):
257 @capture_logging(None, encoder_=MyEncoder)
258 def test_logging(self, logger):
259 # Logged messages will be validated using MyEncoder....
260 ...
261
262 Notice that the trailing underscore in ``encoder_`` is deliberate: by default, keyword arguments are passed to the assertion function (the first argument to ``@capture_logging``), so the underscore marks this argument as part of Eliot's API.
263
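The same ``default()`` pattern works outside of Eliot as well; a minimal standalone sketch, substituting the stdlib ``json.JSONEncoder`` for ``EliotJSONEncoder`` and reusing the illustrative ``MyClass``/``MyEncoder`` names from the example above:

```python
import json

class MyClass:
    def __init__(self, x):
        self.x = x

class MyEncoder(json.JSONEncoder):
    # default() is called for objects json doesn't know how to serialize;
    # return a JSON-compatible value, or defer to the base class to raise.
    def default(self, obj):
        if isinstance(obj, MyClass):
            return {"x": obj.x}
        return json.JSONEncoder.default(self, obj)

encoded = json.dumps({"value": MyClass(3)}, cls=MyEncoder)
print(encoded)  # prints {"value": {"x": 3}}
```

This is exactly the hook ``@capture_logging(None, encoder_=MyEncoder)`` relies on when validating logged messages.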
235264 Custom testing setup
236265 --------------------
237266
2727 * **Start here:** :doc:`Quickstart documentation <quickstart>`
2828 * Need help or have any questions? `File an issue <https://github.com/itamarst/eliot/issues/new>`_.
2929 * Eliot is licensed under the `Apache 2.0 license <https://github.com/itamarst/eliot/blob/master/LICENSE>`_, and the source code is `available on GitHub <https://github.com/itamarst/eliot>`_.
30 * Eliot supports Python 3.8, 3.7, 3.6, and 3.5, as well as PyPy3.
30 * Eliot supports Python 3.9, 3.8, 3.7, 3.6, and PyPy3.
3131 Python 2.7 is in legacy support mode (see :ref:`python2` for details).
3232 * **Commercial support** is available from `Python⇒Speed <https://pythonspeed.com/services/#eliot>`_.
3333 * Read on for the full documentation.
00 What's New
11 ==========
2
3 1.13.0
4 ^^^^^^
5
6 Features:
7
8 * ``@capture_logging`` and ``MemoryLogger`` now support specifying a custom JSON encoder. By default they now use Eliot's encoder. This means tests can now match the encoding used by a ``FileDestination``.
9 * Added support for Python 3.9.
10
11 Deprecation:
12
13 * Python 3.5 is no longer supported.
14
15 1.12.0
16 ^^^^^^
17
18 Features:
19
20 * Dask support now includes support for tracing logging of ``dask.persist()``, via wrapper API ``eliot.dask.persist_with_trace()``.
21
22 Bug fixes:
23
24 * Dask edge cases that previously weren't handled correctly should work better.
225
326 1.11.0
427 ^^^^^^
1717
1818 Run ``eliot-prettyprint --help`` to see the various formatting options; you can for example use a more compact one-message-per-line format.
1919
20 Additionally, the **highly recommended third-party `eliot-tree`_ tool** renders JSON-formatted Eliot messages into a tree visualizing the tasks' actions.
20 Additionally, the **highly recommended** third-party `eliot-tree`_ tool renders JSON-formatted Eliot messages into a tree visualizing the tasks' actions.
2121
2222
2323 Filtering logs
4343 * Ensure all worker processes write the Eliot logs to disk (if you're using the ``multiprocessing`` or ``distributed`` backends).
4444 * If you're using multiple worker machines, aggregate all log files into a single place, so you can more easily analyze them with e.g. `eliot-tree <https://github.com/jonathanj/eliottree>`_.
4545 * Replace ``dask.compute()`` with ``eliot.dask.compute_with_trace()``.
46 * Replace ``dask.persist()`` with ``eliot.dask.persist_with_trace()``.
4647
47 In the following example, you can see how this works for a Dask run using ``distributed``, the recommended Dask scheduler.
48 In the following example, you can see how this works for a Dask run using ``distributed``, the recommended Dask scheduler for more sophisticated use cases.
4849 We'll be using multiple worker processes, but only use a single machine:
4950
5051 .. literalinclude:: ../../examples/dask_eliot.py
261261 not mutate this list.
262262 """
263263
264 def __init__(self):
264 def __init__(self, encoder=EliotJSONEncoder):
265 """
266 @param encoder: A JSONEncoder subclass to use when encoding JSON.
267 """
265268 self._lock = Lock()
269 self._encoder = encoder
266270 self.reset()
267271
268272 @exclusively
343347 serializer.serialize(dictionary)
344348
345349 try:
346 bytesjson.dumps(dictionary)
347 pyjson.dumps(dictionary)
350 pyjson.dumps(dictionary, cls=self._encoder)
348351 except Exception as e:
349352 raise TypeError("Message %s doesn't encode to JSON: %s" % (dictionary, e))
350353
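The validation step above amounts to round-tripping each message through ``json.dumps`` with the configured encoder class, and converting any failure into a ``TypeError``. A standalone sketch of that idea (``validate_encoding`` and ``ComplexEncoder`` are hypothetical names, not Eliot's API):

```python
import json

class ComplexEncoder(json.JSONEncoder):
    # Teach JSON how to serialize complex numbers.
    def default(self, obj):
        if isinstance(obj, complex):
            return {"real": obj.real, "imag": obj.imag}
        return json.JSONEncoder.default(self, obj)

def validate_encoding(message, encoder=json.JSONEncoder):
    # Mirrors the validate() logic: any encoding failure becomes TypeError.
    try:
        json.dumps(message, cls=encoder)
    except Exception as e:
        raise TypeError("Message %s doesn't encode to JSON: %s" % (message, e))

# Passes with the default encoder:
validate_encoding({"message_type": "type", "n": 1})
# Would fail with the default encoder, passes with the custom one:
validate_encoding({"message_type": "type", "z": 3 + 4j}, encoder=ComplexEncoder)
```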
461464 Add a destination that writes a JSON message per line to the given file.
462465
463466 @param output_file: A file-like object.
467
468 @param encoder: A JSONEncoder subclass to use when encoding JSON.
464469 """
465470 Logger._destinations.add(FileDestination(file=output_file, encoder=encoder))
466471
387387 this action's start message.
388388
389389 @ivar successFields: A C{list} of L{Field} instances which can appear in
390 this action's succesful finish message.
390 this action's successful finish message.
391391
392392 @ivar failureFields: A C{list} of L{Field} instances which can appear in
393393 this action's failed finish message (in addition to the built-in
77
88 version_json = '''
99 {
10 "date": "2019-12-07T14:22:41-0500",
11 "dirty": false,
12 "error": null,
13 "full-revisionid": "4ca0fa7519321aceec860e982123a5c448a9debd",
14 "version": "1.11.0"
10 "date": null,
11 "dirty": null,
12 "error": "unable to compute version",
13 "full-revisionid": null,
14 "version": "0+unknown"
1515 }
1616 ''' # END VERSION_JSON
1717
11
22 from pyrsistent import PClass, field
33
4 from dask import compute, optimize
5 from dask.core import toposort, get_dependencies
4 from dask import compute, optimize, persist
5
6 try:
7 from dask.distributed import Future
8 except:
9
10 class Future(object):
11 pass
12
13
14 from dask.core import toposort, get_dependencies, ishashable
615 from . import start_action, current_action, Action
716
817
7483 return compute(*optimized, optimize_graph=False)
7584
7685
86 def persist_with_trace(*args):
87 """Do Dask persist(), but with added Eliot tracing.
88
89 Known issues:
90
91 1. Retries will confuse Eliot. Probably need different
92 distributed-tree mechanism within Eliot to solve that.
93 """
94 # 1. Create top-level Eliot Action:
95 with start_action(action_type="dask:persist"):
96 # In order to reduce logging verbosity, add logging to the already
97 # optimized graph:
98 optimized = optimize(*args, optimizations=[_add_logging])
99 return persist(*optimized, optimize_graph=False)
100
101
77102 def _add_logging(dsk, ignore=None):
78103 """
79104 Add logging to a Dask graph.
100125 key_names = {}
101126 for key in keys:
102127 value = dsk[key]
103 if not callable(value) and value in keys:
128 if not callable(value) and ishashable(value) and value in keys:
104129 # It's an alias for another key:
105130 key_names[key] = key_names[value]
106131 else:
107132 key_names[key] = simplify(key)
108133
109 # 2. Create Eliot child Actions for each key, in topological order:
110 key_to_action_id = {key: str(ctx.serialize_task_id(), "utf-8") for key in keys}
134 # Values in the graph can be either:
135 #
136 # 1. A list of other values.
137 # 2. A tuple, where first value might be a callable, aka a task.
138 # 3. A literal of some sort.
139 def maybe_wrap(key, value):
140 if isinstance(value, list):
141 return [maybe_wrap(key, v) for v in value]
142 elif isinstance(value, tuple):
143 func = value[0]
144 args = value[1:]
145 if not callable(func):
146 # Not a callable, so nothing to wrap.
147 return value
148 wrapped_func = _RunWithEliotContext(
149 task_id=str(ctx.serialize_task_id(), "utf-8"),
150 func=func,
151 key=key_names[key],
152 dependencies=[key_names[k] for k in get_dependencies(dsk, key)],
153 )
154 return (wrapped_func,) + args
155 else:
156 return value
111157
112 # 3. Replace function with wrapper that logs appropriate Action:
158 # Replace function with wrapper that logs appropriate Action; iterate in
159 # topological order so action task levels are in reasonable order.
113160 for key in keys:
114 func = dsk[key][0]
115 args = dsk[key][1:]
116 if not callable(func):
117 # This key is just an alias for another key, no need to add
118 # logging:
119 result[key] = dsk[key]
120 continue
121 wrapped_func = _RunWithEliotContext(
122 task_id=key_to_action_id[key],
123 func=func,
124 key=key_names[key],
125 dependencies=[key_names[k] for k in get_dependencies(dsk, key)],
126 )
127 result[key] = (wrapped_func,) + tuple(args)
161 result[key] = maybe_wrap(key, dsk[key])
128162
129163 assert set(result.keys()) == set(dsk.keys())
130164 return result
131165
132166
133 __all__ = ["compute_with_trace"]
167 __all__ = ["compute_with_trace", "persist_with_trace"]
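The rewrite performed by ``maybe_wrap`` above can be illustrated standalone: in a dask-style graph, a value is either a list of values, a task tuple whose first element is callable, or a literal, and only the callables get wrapped. A hypothetical sketch under those assumptions (``wrap_graph`` and ``traced`` are illustrative names, not Eliot's code):

```python
def wrap_graph(dsk, wrap):
    # Rewrite a dask-style graph: wrap the callable at the head of each
    # task tuple, recurse into lists, and pass literals through unchanged.
    def maybe_wrap(value):
        if isinstance(value, list):
            return [maybe_wrap(v) for v in value]
        if isinstance(value, tuple) and value and callable(value[0]):
            return (wrap(value[0]),) + value[1:]
        return value
    return {key: maybe_wrap(value) for key, value in dsk.items()}

def traced(func):
    # Stand-in for _RunWithEliotContext: record the call, then delegate.
    def inner(*args):
        calls.append(func.__name__)
        return func(*args)
    return inner

calls = []
graph = {
    "a": (abs, -10),             # task tuple: callable gets wrapped
    "b": [1, 2, (len, "xyz")],   # list containing a task: inner task wrapped
    "c": "a",                    # alias/literal: left as-is
}
wrapped = wrap_graph(graph, traced)
```

Running the wrapped tasks (e.g. ``wrapped["a"][0](-10)``) produces the same results as the originals while recording each call, which is the essence of what ``_RunWithEliotContext`` adds on top of Eliot's action context.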
1919 from ._message import MESSAGE_TYPE_FIELD, TASK_LEVEL_FIELD, TASK_UUID_FIELD
2020 from ._output import MemoryLogger
2121 from . import _output
22 from .json import EliotJSONEncoder
2223
2324 COMPLETED_STATUSES = (FAILED_STATUS, SUCCEEDED_STATUS)
2425
297298 return previous_logger
298299
299300
300 def validateLogging(assertion, *assertionArgs, **assertionKwargs):
301 def validateLogging(
302 assertion, *assertionArgs, encoder_=EliotJSONEncoder, **assertionKwargs
303 ):
301304 """
302305 Decorator factory for L{unittest.TestCase} methods to add logging
303306 validation.
329332
330333 @param assertionKwargs: Additional keyword arguments to pass to
331334 C{assertion}.
335
336 @param encoder_: C{json.JSONEncoder} subclass to use when validating JSON.
332337 """
333338
334339 def decorator(function):
336341 def wrapper(self, *args, **kwargs):
337342 skipped = False
338343
339 kwargs["logger"] = logger = MemoryLogger()
344 kwargs["logger"] = logger = MemoryLogger(encoder=encoder_)
340345 self.addCleanup(check_for_errors, logger)
341346 # TestCase runs cleanups in reverse order, and we want this to
342347 # run *before* tracebacks are checked:
360365 validate_logging = validateLogging
361366
362367
363 def capture_logging(assertion, *assertionArgs, **assertionKwargs):
368 def capture_logging(
369 assertion, *assertionArgs, encoder_=EliotJSONEncoder, **assertionKwargs
370 ):
364371 """
365372 Capture and validate all logging that doesn't specify a L{Logger}.
366373
368375 """
369376
370377 def decorator(function):
371 @validate_logging(assertion, *assertionArgs, **assertionKwargs)
378 @validate_logging(
379 assertion, *assertionArgs, encoder_=encoder_, **assertionKwargs
380 )
372381 @wraps(function)
373382 def wrapper(self, *args, **kwargs):
374383 logger = kwargs["logger"]
22 """
33
44 from io import BytesIO
5 from json import JSONEncoder
6
7
8 class CustomObject(object):
9 """Gets encoded to JSON."""
10
11
12 class CustomJSONEncoder(JSONEncoder):
13 """JSONEncoder that knows about L{CustomObject}."""
14
15 def default(self, o):
16 if isinstance(o, CustomObject):
17 return "CUSTOM!"
18 return JSONEncoder.default(self, o)
519
620
721 class FakeSys(object):
22 from unittest import TestCase, skipUnless
33
44 from ..testing import capture_logging, LoggedAction, LoggedMessage
5 from .. import start_action, Message
5 from .. import start_action, log_message
66
77 try:
88 import dask
99 from dask.bag import from_sequence
10 from dask.distributed import Client
11 import dask.dataframe as dd
12 import pandas as pd
1013 except ImportError:
1114 dask = None
1215 else:
13 from ..dask import compute_with_trace, _RunWithEliotContext, _add_logging
16 from ..dask import (
17 compute_with_trace,
18 _RunWithEliotContext,
19 _add_logging,
20 persist_with_trace,
21 )
1422
1523
1624 @skipUnless(dask, "Dask not available.")
2735 bag = bag.fold(lambda x, y: x + y)
2836 self.assertEqual(dask.compute(bag), compute_with_trace(bag))
2937
38 def test_future(self):
39 """compute_with_trace() can handle Futures."""
40 client = Client(processes=False)
41 self.addCleanup(client.shutdown)
42 [bag] = dask.persist(from_sequence([1, 2, 3]))
43 bag = bag.map(lambda x: x * 5)
44 result = dask.compute(bag)
45 self.assertEqual(result, ([5, 10, 15],))
46 self.assertEqual(result, compute_with_trace(bag))
47
48 def test_persist_result(self):
49 """persist_with_trace() runs the same logic as persist()."""
50 client = Client(processes=False)
51 self.addCleanup(client.shutdown)
52 bag = from_sequence([1, 2, 3])
53 bag = bag.map(lambda x: x * 7)
54 self.assertEqual(
55 [b.compute() for b in dask.persist(bag)],
56 [b.compute() for b in persist_with_trace(bag)],
57 )
58
59 def test_persist_pandas(self):
60 """persist_with_trace() with a Pandas dataframe.
61
62 This ensures we don't blow up, as used to happen.
63 """
64 df = pd.DataFrame()
65 df = dd.from_pandas(df, npartitions=1)
66 persist_with_trace(df)
67
3068 @capture_logging(None)
31 def test_logging(self, logger):
69 def test_persist_logging(self, logger):
70 """persist_with_trace() preserves Eliot context."""
71
72 def persister(bag):
73 [bag] = persist_with_trace(bag)
74 return dask.compute(bag)
75
76 self.assert_logging(logger, persister, "dask:persist")
77
78 @capture_logging(None)
79 def test_compute_logging(self, logger):
3280 """compute_with_trace() preserves Eliot context."""
81 self.assert_logging(logger, compute_with_trace, "dask:compute")
82
83 def assert_logging(self, logger, run_with_trace, top_action_name):
84 """Utility function for _with_trace() logging tests."""
3385
3486 def mult(x):
35 Message.log(message_type="mult")
87 log_message(message_type="mult")
3688 return x * 4
3789
3890 def summer(x, y):
39 Message.log(message_type="finally")
91 log_message(message_type="finally")
4092 return x + y
4193
4294 bag = from_sequence([1, 2])
4395 bag = bag.map(mult).fold(summer)
4496 with start_action(action_type="act1"):
45 compute_with_trace(bag)
97 run_with_trace(bag)
4698
4799 [logged_action] = LoggedAction.ofType(logger.messages, "act1")
48100 self.assertEqual(
50102 {
51103 "act1": [
52104 {
53 "dask:compute": [
105 top_action_name: [
54106 {"eliot:remote_task": ["dask:task", "mult"]},
55107 {"eliot:remote_task": ["dask:task", "mult"]},
56108 {"eliot:remote_task": ["dask:task", "finally"]},
61113 )
62114
63115 # Make sure dependencies are tracked:
64 mult1_msg, mult2_msg, final_msg = LoggedMessage.ofType(
65 logger.messages, "dask:task"
66 )
116 (
117 mult1_msg,
118 mult2_msg,
119 final_msg,
120 ) = LoggedMessage.ofType(logger.messages, "dask:task")
67121 self.assertEqual(
68122 sorted(final_msg.message["dependencies"]),
69123 sorted([mult1_msg.message["key"], mult2_msg.message["key"]]),
81135 @skipUnless(dask, "Dask not available.")
82136 class AddLoggingTests(TestCase):
83137 """Tests for _add_logging()."""
138
139 maxDiff = None
84140
85141 def test_add_logging_to_full_graph(self):
86142 """_add_logging() recreates Dask graph with wrappers."""
103159 logging_removed[key] = value
104160
105161 self.assertEqual(logging_removed, graph)
162
163 def test_add_logging_explicit(self):
164 """_add_logging() on more edge cases of the graph."""
165
166 def add(s):
167 return s + "s"
168
169 def add2(s):
170 return s + "s"
171
172 # b runs first, then d, then a and c.
173 graph = {
174 "a": "d",
175 "d": [1, 2, (add, "b")],
176 ("b", 0): 1,
177 "c": (add2, "d"),
178 }
179
180 with start_action(action_type="bleh") as action:
181 task_id = action.task_uuid
182 self.assertEqual(
183 _add_logging(graph),
184 {
185 "d": [
186 1,
187 2,
188 (
189 _RunWithEliotContext(
190 task_id=task_id + "@/2",
191 func=add,
192 key="d",
193 dependencies=["b"],
194 ),
195 "b",
196 ),
197 ],
198 "a": "d",
199 ("b", 0): 1,
200 "c": (
201 _RunWithEliotContext(
202 task_id=task_id + "@/3",
203 func=add2,
204 key="c",
205 dependencies=["d"],
206 ),
207 "d",
208 ),
209 },
210 )
3131 from .._validation import ValidationError, Field, _MessageSerializer
3232 from .._traceback import write_traceback
3333 from ..testing import assertContainsFields
34 from .common import CustomObject, CustomJSONEncoder
3435
3536
3637 class MemoryLoggerTests(TestCase):
120121 {"message_type": "type", "foo": "will become object()"}, serializer
121122 )
122123 self.assertRaises(TypeError, logger.validate)
124
125 @skipUnless(np, "NumPy is not installed.")
126 def test_EliotJSONEncoder(self):
127 """
128 L{MemoryLogger.validate} uses the EliotJSONEncoder by default to do
129 encoding testing.
130 """
131 logger = MemoryLogger()
132 logger.write({"message_type": "type", "foo": np.uint64(12)}, None)
133 logger.validate()
134
135 def test_JSON_custom_encoder(self):
136 """
137 L{MemoryLogger.validate} will use a custom JSON encoder if one was given.
138 """
139 logger = MemoryLogger(encoder=CustomJSONEncoder)
140 logger.write(
141 {"message_type": "type", "custom": CustomObject()},
142 None,
143 )
144 logger.validate()
123145
124146 def test_serialize(self):
125147 """
33
44 from __future__ import unicode_literals
55
6 from unittest import SkipTest, TestResult, TestCase
6 from unittest import SkipTest, TestResult, TestCase, skipUnless
7
8 try:
9 import numpy as np
10 except ImportError:
11 np = None
712
813 from ..testing import (
914 issuperset,
2429 from .._message import Message
2530 from .._validation import ActionType, MessageType, ValidationError, Field
2631 from .._traceback import write_traceback
27 from .. import add_destination, remove_destination, _output
32 from .. import add_destination, remove_destination, _output, log_message
33 from .common import CustomObject, CustomJSONEncoder
2834
2935
3036 class IsSuperSetTests(TestCase):
739745 )
740746
741747
748 class JSONEncodingTests(TestCase):
749 """Tests for L{capture_logging} JSON encoder support."""
750
751 @skipUnless(np, "NumPy is not installed.")
752 @capture_logging(None)
753 def test_default_JSON_encoder(self, logger):
754 """
755 L{capture_logging} validates using L{EliotJSONEncoder} by default.
756 """
757 # Default JSON encoder can't handle NumPy:
758 log_message(message_type="hello", number=np.uint32(12))
759
760 @capture_logging(None, encoder_=CustomJSONEncoder)
761 def test_custom_JSON_encoder(self, logger):
762 """
763 L{capture_logging} can be called with a custom JSON encoder, which is then
764 used for validation.
765 """
766 # Default JSON encoder can't handle this custom object:
767 log_message(message_type="hello", object=CustomObject())
768
769
742770 MESSAGE1 = MessageType(
743771 "message1", [Field.forTypes("x", [int], "A number")], "A message for testing."
744772 )
00 Metadata-Version: 2.1
11 Name: eliot
2 Version: 1.11.0
2 Version: 0+unknown
33 Summary: Logging library that tells you why it happened
44 Home-page: https://github.com/itamarst/eliot/
55 Maintainer: Itamar Turner-Trauring
3636
3737 Eliot is only used to generate your logs; you might need tools like Logstash and ElasticSearch to aggregate and store logs if you are using multiple processes across multiple machines.
3838
39 Eliot supports Python 3.5, 3.6, 3.7, and 3.8, as well as PyPy3.
39 Eliot supports Python 3.6, 3.7, 3.8, and 3.9, as well as PyPy3.
4040 It is maintained by Itamar Turner-Trauring, and released under the Apache 2.0 License.
4141
4242 Python 2.7 is in legacy support mode, with the last release supported being 1.7; see `here <https://eliot.readthedocs.io/en/stable/python2.html>`_ for details.
6363 Classifier: Operating System :: OS Independent
6464 Classifier: Programming Language :: Python
6565 Classifier: Programming Language :: Python :: 3
66 Classifier: Programming Language :: Python :: 3.5
6766 Classifier: Programming Language :: Python :: 3.6
6867 Classifier: Programming Language :: Python :: 3.7
6968 Classifier: Programming Language :: Python :: 3.8
69 Classifier: Programming Language :: Python :: 3.9
7070 Classifier: Programming Language :: Python :: Implementation :: CPython
7171 Classifier: Programming Language :: Python :: Implementation :: PyPy
7272 Classifier: Topic :: System :: Logging
73 Requires-Python: >=3.5.3
73 Requires-Python: >=3.6.0
74 Provides-Extra: dev
75 Provides-Extra: journald
7476 Provides-Extra: test
75 Provides-Extra: journald
76 Provides-Extra: dev
0 boltons>=19.0.1
1 pyrsistent>=0.11.8
02 six
13 zope.interface
2 pyrsistent>=0.11.8
3 boltons>=19.0.1
44
55 [:python_version < "3.7" and python_version > "2.7"]
66 aiocontextvars
77
88 [dev]
9 black
10 coverage
11 flake8
912 setuptools>=40
10 twine>=1.12.1
11 coverage
1213 sphinx
1314 sphinx_rtd_theme
14 flake8
15 black
15 twine>=1.12.1
1616
1717 [journald]
1818 cffi>=1.1.2
1919
2020 [test]
2121 hypothesis>=1.14.0
22 pytest
23 pytest-xdist
2224 testtools
23 pytest
1717 "Operating System :: OS Independent",
1818 "Programming Language :: Python",
1919 "Programming Language :: Python :: 3",
20 "Programming Language :: Python :: 3.5",
2120 "Programming Language :: Python :: 3.6",
2221 "Programming Language :: Python :: 3.7",
2322 "Programming Language :: Python :: 3.8",
23 "Programming Language :: Python :: 3.9",
2424 "Programming Language :: Python :: Implementation :: CPython",
2525 "Programming Language :: Python :: Implementation :: PyPy",
2626 "Topic :: System :: Logging",
2929 version=versioneer.get_version(),
3030 cmdclass=versioneer.get_cmdclass(),
3131 description="Logging library that tells you why it happened",
32 python_requires=">=3.5.3",
32 python_requires=">=3.6.0",
3333 install_requires=[
3434 # Python 3 compatibility:
3535 "six",
5353 # Tasteful testing for Python:
5454 "testtools",
5555 "pytest",
56 "pytest-xdist",
5657 ],
5758 "dev": [
5859 # Ensure we can do python_requires correctly: