Import upstream version 1.2.5
Debian Janitor
1 year, 4 months ago
0 | 1.0.0 (2022-06-09) | |
1 | ------------------- | |
2 | - Drop Python 2.7 support (minimum supported Python version is 3.5) | |
3 | - Recompile protobuf using version 3.19 | |
4 | ||
5 | 0.21.0 (2021-03-17) | |
6 | ------------------- | |
7 | - The default encoding is now V2 JSON. If you want to keep the old | |
8 | V1 thrift encoding you'll need to specify it. | |
9 | ||
10 | 0.20.2 (2021-03-11) | |
11 | ------------------- | |
12 | - Don't crash when annotating exceptions that cannot be str()'d | |
13 | ||
14 | 0.20.1 (2020-10-27) | |
15 | ------------------- | |
16 | - Support PRODUCER and CONSUMER spans | |
17 | ||
18 | 0.20.0 (2020-03-09) | |
19 | ------------------- | |
20 | - Add create_http_headers helper | |
21 | ||
22 | 0.19.0 (2020-02-28) | |
23 | ------------------- | |
24 | - Add zipkin_span.add_annotation() method | |
25 | - Add autoinstrumentation for python Threads | |
26 | - Allow creating a copy of Tracer | |
27 | - Add extract_zipkin_attrs_from_headers() helper | |
28 | ||
29 | 0.18.7 (2020-01-15) | |
30 | ------------------- | |
31 | - Expose encoding.create_endpoint helper | |
32 | ||
33 | 0.18.6 (2019-09-23) | |
34 | ------------------- | |
35 | - Ensure tags are strings when using V2_JSON encoding | |
36 | ||
37 | 0.18.5 (2019-08-08) | |
38 | ------------------- | |
39 | - Add testing.MockTransportHandler module | |
40 | ||
41 | 0.18.4 (2019-08-02) | |
42 | ------------------- | |
43 | - Fix thriftpy2 import to allow cython module | |
44 | ||
45 | 0.18.3 (2019-05-15) | |
46 | ------------------- | |
47 | - Fix unicode bug when decoding thrift tag strings | |
48 | ||
49 | 0.18.2 (2019-03-26) | |
50 | ------------------- | |
51 | - Handle exceptions while emitting traces and log the error | |
52 | - Ensure tracer is cleared regardless of span emit outcome | |
53 | ||
54 | 0.18.1 (2019-02-22) | |
55 | ------------------- | |
56 | - Fix ThreadLocalStack() bug introduced in 0.18.0 | |
57 | ||
58 | 0.18.0 (2019-02-13) | |
59 | ------------------- | |
60 | - Fix multithreading issues | |
61 | - Added Tracer module | |
62 | ||
63 | 0.17.1 (2019-02-05) | |
64 | ------------------- | |
65 | - Ignore transport_handler overrides in an inner span since that causes | |
66 | spans to be dropped. | |
67 | ||
68 | 0.17.0 (2019-01-25) | |
69 | ------------------- | |
70 | - Support python 3.7 | |
71 | - py-zipkin now depends on thriftpy2 rather than thriftpy. They | |
72 | can coexist in the same codebase, so it should be safe to upgrade. | |
73 | ||
74 | 0.16.1 (2018-11-16) | |
75 | ------------------- | |
76 | - Handle null timestamps when decoding thrift traces | |
77 | ||
78 | 0.16.0 (2018-11-13) | |
79 | ------------------- | |
80 | - py_zipkin is now able to convert V1 thrift spans to V2 JSON | |
81 | ||
82 | 0.15.1 (2018-10-31) | |
83 | ------------------- | |
84 | - Changed DeprecationWarnings to logging.warning | |
85 | ||
0 | 86 | 0.15.0 (2018-10-22) |
1 | 87 | ------------------- |
2 | 88 | - Added support for V2 JSON encoding. |
0 | 0 | Metadata-Version: 2.1 |
1 | 1 | Name: py_zipkin |
2 | Version: 0.15.0 | |
2 | Version: 1.2.5 | |
3 | 3 | Summary: Library for using Zipkin in Python. |
4 | 4 | Home-page: https://github.com/Yelp/py_zipkin |
5 | 5 | Author: Yelp, Inc. |
6 | 6 | Author-email: opensource+py-zipkin@yelp.com |
7 | License: Copyright Yelp 2018 | |
7 | License: Copyright Yelp 2019 | |
8 | 8 | Description: [![Build Status](https://travis-ci.org/Yelp/py_zipkin.svg?branch=master)](https://travis-ci.org/Yelp/py_zipkin) |
9 | 9 | [![Coverage Status](https://img.shields.io/coveralls/Yelp/py_zipkin.svg)](https://coveralls.io/r/Yelp/py_zipkin) |
10 | 10 | [![PyPi version](https://img.shields.io/pypi/v/py_zipkin.svg)](https://pypi.python.org/pypi/py_zipkin/) |
145 | 145 | pluggable, though. |
146 | 146 | |
147 | 147 | The recommended way to implement a new transport handler is to subclass |
148 | `py_zipkin.transport.BaseTransportHandler` and implement the `send` and | |
149 | 149 | `get_max_payload_bytes` methods. |
150 | 150 | |
151 | 151 | `send` receives an already encoded thrift list as argument. |
158 | 158 | |
159 | 159 | > NOTE: older versions of py_zipkin suggested implementing the transport handler |
160 | 160 | > as a function with a single argument. That's still supported and should work |
161 | > with the current py_zipkin version, but it's deprecated. | |
162 | 162 | |
163 | 163 | ```python |
164 | 164 | import requests |
202 | 202 | producer.send_messages('kafka_topic_name', message) |
203 | 203 | ``` |
204 | 204 | |
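As a concrete sketch of the interface described above, the two hooks can be exercised with a small in-memory handler. This is a hypothetical stand-in (similar in spirit to the `testing.MockTransportHandler` mentioned in the 0.18.5 changelog); the real base class is `py_zipkin.transport.BaseTransportHandler`:

```python
class BaseTransportHandler:
    # Stand-in for py_zipkin.transport.BaseTransportHandler, which
    # layers __call__ and batching on top of these two hooks.
    def get_max_payload_bytes(self):
        raise NotImplementedError()

    def send(self, payload):
        raise NotImplementedError()


class ListTransport(BaseTransportHandler):
    """Collects encoded span payloads in memory (useful in tests)."""

    def __init__(self):
        self.payloads = []

    def get_max_payload_bytes(self):
        return None  # None means "no payload size limit"

    def send(self, payload):
        # `payload` is an already encoded list of spans.
        self.payloads.append(payload)


handler = ListTransport()
handler.send(b"encoded-span-list")
```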
205 | Using in multithreading evironments | |
206 | ----------------------------------- | |
205 | Using in multithreading environments | |
206 | ------------------------------------ | |
207 | 207 | |
208 | 208 | If you want to use py_zipkin in a cooperative multithreading environment, |
209 | 209 | e.g. asyncio, you need to explicitly pass an instance of `py_zipkin.storage.Stack` |
259 | 259 | |
260 | 260 | Copyright (c) 2018, Yelp, Inc. All Rights reserved. Apache v2 |
261 | 261 | |
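The explicit-stack advice above can be illustrated with a simplified stand-in for `py_zipkin.storage.Stack` (a hypothetical sketch; the real class exposes a similar push/pop/get surface): each cooperative task owns its own stack, so concurrently running tasks never see each other's span context.

```python
class Stack:
    # Simplified stand-in for py_zipkin.storage.Stack: a plain
    # LIFO of span contexts owned by exactly one task.
    def __init__(self):
        self._storage = []

    def push(self, item):
        self._storage.append(item)

    def pop(self):
        return self._storage.pop() if self._storage else None

    def get(self):
        return self._storage[-1] if self._storage else None


# Two cooperative tasks, each given its own explicitly passed stack:
task_a, task_b = Stack(), Stack()
task_a.push("span-a")
task_b.push("span-b")
```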
414 | 500 | Classifier: Topic :: Software Development :: Libraries :: Python Modules |
415 | 501 | Classifier: License :: OSI Approved :: Apache Software License |
416 | 502 | Classifier: Operating System :: OS Independent |
417 | Classifier: Programming Language :: Python :: 2.7 | |
418 | Classifier: Programming Language :: Python :: 3.4 | |
419 | Classifier: Programming Language :: Python :: 3.5 | |
420 | 503 | Classifier: Programming Language :: Python :: 3.6 |
504 | Classifier: Programming Language :: Python :: 3.7 | |
505 | Classifier: Programming Language :: Python :: 3.8 | |
421 | 506 | Provides: py_zipkin |
507 | Requires-Python: >=3.6 | |
422 | 508 | Description-Content-Type: text/markdown |
509 | Provides-Extra: protobuf |
0 | # DeprecationWarnings are silent since Python 2.7. | |
1 | # The `default` filter only prints the first occurrence of matching warnings for | |
2 | # each location where the warning is issued, so that we don't spam our users' logs. | |
3 | import warnings | |
4 | warnings.simplefilter('default', DeprecationWarning) | |
5 | ||
6 | 0 | # Export useful functions and types from private modules. |
7 | 1 | from py_zipkin.encoding._types import Encoding # noqa |
8 | 2 | from py_zipkin.encoding._types import Kind # noqa |
3 | from py_zipkin.storage import get_default_tracer # noqa | |
4 | from py_zipkin.storage import Tracer # noqa |
0 | import json | |
1 | from typing import List | |
2 | from typing import Optional | |
3 | from typing import Union | |
4 | ||
5 | from py_zipkin.encoding._decoders import get_decoder | |
6 | from py_zipkin.encoding._encoders import get_encoder | |
7 | from py_zipkin.encoding._helpers import create_endpoint # noqa: F401 | |
8 | from py_zipkin.encoding._helpers import Endpoint # noqa: F401 | |
9 | from py_zipkin.encoding._helpers import Span # noqa: F401 | |
10 | from py_zipkin.encoding._types import Encoding | |
11 | from py_zipkin.exception import ZipkinError | |
12 | ||
13 | _V2_ATTRIBUTES = ["tags", "localEndpoint", "remoteEndpoint", "shared", "kind"] | |
14 | ||
15 | ||
16 | def detect_span_version_and_encoding(message: Union[bytes, str]) -> Encoding: | |
17 | """Returns the span type and encoding for the message provided. | |
18 | ||
19 | The logic in this function is a Python port of | |
20 | https://github.com/openzipkin/zipkin/blob/master/zipkin/src/main/java/zipkin/internal/DetectingSpanDecoder.java | |
21 | ||
22 | :param message: span to perform operations on. | |
23 | :type message: bytes or str | |
24 | :returns: span encoding. | |
25 | :rtype: Encoding | |
26 | """ | |
27 | # In case message is sent in as non-bytearray format, | |
28 | # safeguard convert to bytearray before handling | |
29 | if isinstance(message, str): | |
30 | message = message.encode("utf-8") # pragma: no cover | |
31 | ||
32 | if len(message) < 2: | |
33 | raise ZipkinError("Invalid span format. Message too short.") | |
34 | ||
35 | # Check for binary format | |
36 | if message[0] <= 16: | |
37 | if message[0] == 10 and message[1:2][0] != 0: | |
38 | return Encoding.V2_PROTO3 | |
39 | return Encoding.V1_THRIFT | |
40 | ||
41 | str_msg = message.decode("utf-8") | |
42 | ||
43 | # JSON case for list of spans | |
44 | if str_msg[0] == "[": | |
45 | span_list = json.loads(str_msg) | |
46 | if len(span_list) > 0: | |
47 | # Assumption: All spans in a list are the same version | |
48 | # Logic: Search for identifying fields in all spans, if any span can | |
49 | # be strictly identified to a version, return that version. | |
50 | # Otherwise, if no spans could be strictly identified, default to V2. | |
51 | for span in span_list: | |
52 | if any(word in span for word in _V2_ATTRIBUTES): | |
53 | return Encoding.V2_JSON | |
54 | elif "binaryAnnotations" in span or ( | |
55 | "annotations" in span and "endpoint" in span["annotations"] | |
56 | ): | |
57 | return Encoding.V1_JSON | |
58 | return Encoding.V2_JSON | |
59 | ||
60 | raise ZipkinError("Unknown or unsupported span encoding") | |
61 | ||
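The detection heuristic implemented above can be sketched stdlib-only (encoding names are plain strings here instead of `Encoding` members):

```python
import json

_V2_ATTRIBUTES = ["tags", "localEndpoint", "remoteEndpoint", "shared", "kind"]


def detect_encoding(message):
    if isinstance(message, str):
        message = message.encode("utf-8")
    if len(message) < 2:
        raise ValueError("Invalid span format. Message too short.")

    # Binary formats: proto3 payloads start with field tag 10 followed
    # by a non-zero length byte; other small first bytes mean V1 thrift.
    if message[0] <= 16:
        if message[0] == 10 and message[1] != 0:
            return "V2_PROTO3"
        return "V1_THRIFT"

    text = message.decode("utf-8")
    if text[0] != "[":
        raise ValueError("Unknown or unsupported span encoding")

    # JSON: look for fields that strictly identify a version,
    # defaulting to V2 when nothing matches.
    for span in json.loads(text):
        if any(word in span for word in _V2_ATTRIBUTES):
            return "V2_JSON"
        if "binaryAnnotations" in span:
            return "V1_JSON"
    return "V2_JSON"
```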
62 | ||
63 | def convert_spans( | |
64 | spans: bytes, output_encoding: Encoding, input_encoding: Optional[Encoding] = None | |
65 | ) -> Union[str, bytes]: | |
66 | """Converts encoded spans to a different encoding. | |
67 | ||
68 | :param spans: encoded input spans. | |
69 | :type spans: byte array | |
70 | :param output_encoding: desired output encoding. | |
71 | :type output_encoding: Encoding | |
72 | :param input_encoding: optional input encoding. If this is not specified, the | |
73 | encoding is detected automatically by inspecting the input spans. | |
74 | :type input_encoding: Encoding | |
75 | :returns: encoded spans. | |
76 | :rtype: str or bytes | |
77 | """ | |
78 | if not isinstance(input_encoding, Encoding): | |
79 | input_encoding = detect_span_version_and_encoding(message=spans) | |
80 | ||
81 | if input_encoding == output_encoding: | |
82 | return spans | |
83 | ||
84 | decoder = get_decoder(input_encoding) | |
85 | encoder = get_encoder(output_encoding) | |
86 | decoded_spans = decoder.decode_spans(spans) | |
87 | output_spans: List[Union[str, bytes]] = [] | |
88 | ||
89 | # Encode each individual span | |
90 | for span in decoded_spans: | |
91 | output_spans.append(encoder.encode_span(span)) | |
92 | ||
93 | # Outputs from encoder.encode_span() can be easily concatenated in a list | |
94 | return encoder.encode_queue(output_spans) |
0 | import logging | |
1 | import socket | |
2 | import struct | |
3 | from typing import Dict | |
4 | from typing import List | |
5 | from typing import Optional | |
6 | from typing import Tuple | |
7 | ||
8 | from thriftpy2.protocol.binary import read_list_begin | |
9 | from thriftpy2.protocol.binary import TBinaryProtocol | |
10 | from thriftpy2.thrift import TType | |
11 | from thriftpy2.transport import TMemoryBuffer | |
12 | ||
13 | from py_zipkin.encoding._helpers import Endpoint | |
14 | from py_zipkin.encoding._helpers import Span | |
15 | from py_zipkin.encoding._types import Encoding | |
16 | from py_zipkin.encoding._types import Kind | |
17 | from py_zipkin.exception import ZipkinError | |
18 | from py_zipkin.thrift import zipkinCore | |
19 | ||
20 | _HEX_DIGITS = "0123456789abcdef" | |
21 | _DROP_ANNOTATIONS = {"cs", "sr", "ss", "cr"} | |
22 | ||
23 | log = logging.getLogger("py_zipkin.encoding") | |
24 | ||
25 | ||
26 | def get_decoder(encoding: Encoding) -> "IDecoder": | |
27 | """Creates a decoder object for the given encoding. | |
28 | ||
29 | :param encoding: encoding of the spans to decode | |
30 | :type encoding: Encoding | |
31 | :return: corresponding IDecoder object | |
32 | :rtype: IDecoder | |
33 | """ | |
34 | if encoding == Encoding.V1_THRIFT: | |
35 | return _V1ThriftDecoder() | |
36 | if encoding == Encoding.V1_JSON: | |
37 | raise NotImplementedError(f"{encoding} decoding not yet implemented") | |
38 | if encoding == Encoding.V2_JSON: | |
39 | raise NotImplementedError(f"{encoding} decoding not yet implemented") | |
40 | raise ZipkinError(f"Unknown encoding: {encoding}") | |
41 | ||
42 | ||
43 | class IDecoder: | |
44 | """Decoder interface.""" | |
45 | ||
46 | def decode_spans(self, spans: bytes) -> List[Span]: | |
47 | """Decodes an encoded list of spans. | |
48 | ||
49 | :param spans: encoded list of spans | |
50 | :type spans: bytes | |
51 | :return: list of spans | |
52 | :rtype: list of Span | |
53 | """ | |
54 | raise NotImplementedError() | |
55 | ||
56 | ||
57 | class _V1ThriftDecoder(IDecoder): | |
58 | def decode_spans(self, spans: bytes) -> List[Span]: | |
59 | """Decodes an encoded list of spans. | |
60 | ||
61 | :param spans: encoded list of spans | |
62 | :type spans: bytes | |
63 | :return: list of spans | |
64 | :rtype: list of Span | |
65 | """ | |
66 | decoded_spans = [] | |
67 | transport = TMemoryBuffer(spans) | |
68 | ||
69 | if spans[0] == TType.STRUCT: | |
70 | _, size = read_list_begin(transport) | |
71 | else: | |
72 | size = 1 | |
73 | ||
74 | for _ in range(size): | |
75 | span = zipkinCore.Span() | |
76 | span.read(TBinaryProtocol(transport)) # type: ignore[attr-defined] | |
77 | decoded_spans.append(self._decode_thrift_span(span)) | |
78 | return decoded_spans | |
79 | ||
80 | def _convert_from_thrift_endpoint( | |
81 | self, thrift_endpoint: zipkinCore.Endpoint | |
82 | ) -> Endpoint: | |
83 | """Accepts a thrift decoded endpoint and converts it to an Endpoint. | |
84 | ||
85 | :param thrift_endpoint: thrift encoded endpoint | |
86 | :type thrift_endpoint: thrift endpoint | |
87 | :returns: decoded endpoint | |
88 | :rtype: Endpoint | |
89 | """ | |
90 | ipv4 = None | |
91 | ipv6 = None | |
92 | port = struct.unpack("H", struct.pack("h", thrift_endpoint.port))[0] | |
93 | ||
94 | if thrift_endpoint.ipv4 != 0: | |
95 | ipv4 = socket.inet_ntop( | |
96 | socket.AF_INET, | |
97 | struct.pack("!i", thrift_endpoint.ipv4), | |
98 | ) | |
99 | ||
100 | if thrift_endpoint.ipv6: | |
101 | # ignore is required due to https://github.com/unmade/thrift-pyi/issues/25 | |
102 | ipv6 = socket.inet_ntop( | |
103 | socket.AF_INET6, | |
104 | thrift_endpoint.ipv6, # type: ignore[arg-type] | |
105 | ) | |
106 | ||
107 | return Endpoint( | |
108 | service_name=thrift_endpoint.service_name, | |
109 | ipv4=ipv4, | |
110 | ipv6=ipv6, | |
111 | port=port, | |
112 | ) | |
113 | ||
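The bit-level conversions in `_convert_from_thrift_endpoint` can be shown in isolation: thrift stores the port as a signed i16 and the IPv4 address as a signed i32, so the decoder reinterprets the raw bytes to recover the unsigned values. A stdlib-only sketch:

```python
import socket
import struct


def port_from_thrift(signed_port):
    # Pack as signed 16-bit, then unpack the same bytes as unsigned.
    return struct.unpack("H", struct.pack("h", signed_port))[0]


def ipv4_from_thrift(signed_ipv4):
    # Big-endian signed i32 bytes are exactly the dotted-quad octets.
    return socket.inet_ntop(socket.AF_INET, struct.pack("!i", signed_ipv4))
```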
114 | def _decode_thrift_annotations( | |
115 | self, thrift_annotations: List[zipkinCore.Annotation] | |
116 | ) -> Tuple[ | |
117 | Dict[str, Optional[float]], | |
118 | Optional[Endpoint], | |
119 | Kind, | |
120 | Optional[int], | |
121 | Optional[int], | |
122 | ]: | |
123 | """Accepts a thrift annotation and converts it to a v1 annotation. | |
124 | ||
125 | :param thrift_annotations: list of thrift annotations. | |
126 | :type thrift_annotations: list of zipkinCore.Span.Annotation | |
127 | :returns: (annotations, local_endpoint, kind, timestamp, duration) | |
128 | """ | |
129 | local_endpoint = None | |
130 | kind = Kind.LOCAL | |
131 | all_annotations: Dict[str, Optional[int]] = {} | |
132 | timestamp: Optional[int] = None | |
133 | duration: Optional[int] = None | |
134 | ||
135 | for thrift_annotation in thrift_annotations: | |
136 | assert thrift_annotation.value is not None | |
137 | all_annotations[thrift_annotation.value] = thrift_annotation.timestamp | |
138 | if thrift_annotation.host: | |
139 | local_endpoint = self._convert_from_thrift_endpoint( | |
140 | thrift_annotation.host, | |
141 | ) | |
142 | ||
143 | if "cs" in all_annotations and "sr" not in all_annotations: | |
144 | kind = Kind.CLIENT | |
145 | timestamp = all_annotations["cs"] | |
146 | assert isinstance(timestamp, int) | |
147 | assert isinstance(all_annotations["cr"], int) | |
148 | duration = all_annotations["cr"] - timestamp | |
149 | elif "cs" not in all_annotations and "sr" in all_annotations: | |
150 | kind = Kind.SERVER | |
151 | timestamp = all_annotations["sr"] | |
152 | assert isinstance(timestamp, int) | |
153 | assert isinstance(all_annotations["ss"], int) | |
154 | duration = all_annotations["ss"] - timestamp | |
155 | ||
156 | annotations = { | |
157 | name: self.seconds(ts) | |
158 | for name, ts in all_annotations.items() | |
159 | if name not in _DROP_ANNOTATIONS | |
160 | } | |
161 | ||
162 | return annotations, local_endpoint, kind, timestamp, duration | |
163 | ||
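The core of the annotation logic above: a span carrying "cs" but no "sr" was emitted by a client, one carrying "sr" but no "cs" by a server, and in both cases the duration is the gap between the start and end core annotations. A stdlib-only sketch (timestamps in microseconds):

```python
def derive_kind(annotations):
    # Mirrors the cs/sr/ss/cr branching in _decode_thrift_annotations.
    if "cs" in annotations and "sr" not in annotations:
        ts = annotations["cs"]
        return "CLIENT", ts, annotations["cr"] - ts
    if "sr" in annotations and "cs" not in annotations:
        ts = annotations["sr"]
        return "SERVER", ts, annotations["ss"] - ts
    return "LOCAL", None, None
```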
164 | def _convert_from_thrift_binary_annotations( | |
165 | self, thrift_binary_annotations: List[zipkinCore.BinaryAnnotation] | |
166 | ) -> Tuple[Dict[str, Optional[str]], Optional[Endpoint], Optional[Endpoint]]: | |
167 | """Accepts a thrift decoded binary annotation and converts it | |
168 | to a v1 binary annotation. | |
169 | """ | |
170 | tags: Dict[str, Optional[str]] = {} | |
171 | local_endpoint = None | |
172 | remote_endpoint = None | |
173 | ||
174 | for binary_annotation in thrift_binary_annotations: | |
175 | if binary_annotation.key == "sa": | |
176 | assert binary_annotation.host is not None | |
177 | remote_endpoint = self._convert_from_thrift_endpoint( | |
178 | thrift_endpoint=binary_annotation.host, | |
179 | ) | |
180 | else: | |
181 | key = binary_annotation.key | |
182 | assert key is not None | |
183 | ||
184 | annotation_type = binary_annotation.annotation_type | |
185 | value = binary_annotation.value | |
186 | ||
187 | if annotation_type == zipkinCore.AnnotationType.BOOL: | |
188 | tags[key] = "true" if value == 1 else "false" | |
189 | elif annotation_type == zipkinCore.AnnotationType.STRING: | |
190 | tags[key] = value | |
191 | else: | |
192 | log.warning( | |
193 | "Only STRING and BOOL binary annotations are " | |
194 | "supported right now and can be properly decoded." | |
195 | ) | |
196 | ||
197 | if binary_annotation.host: | |
198 | local_endpoint = self._convert_from_thrift_endpoint( | |
199 | thrift_endpoint=binary_annotation.host, | |
200 | ) | |
201 | ||
202 | return tags, local_endpoint, remote_endpoint | |
203 | ||
204 | def seconds(self, us: Optional[int]) -> Optional[float]: | |
205 | if us is None: | |
206 | return None | |
207 | return round(float(us) / 1000 / 1000, 6) | |
208 | ||
209 | def _decode_thrift_span(self, thrift_span: zipkinCore.Span) -> Span: | |
210 | """Decodes a thrift span. | |
211 | ||
212 | :param thrift_span: thrift span | |
213 | :type thrift_span: thrift Span object | |
214 | :returns: span builder representing this span | |
215 | :rtype: Span | |
216 | """ | |
217 | parent_id = None | |
218 | local_endpoint = None | |
219 | annotations: Dict[str, Optional[float]] = {} | |
220 | tags: Dict[str, Optional[str]] = {} | |
221 | kind = Kind.LOCAL | |
222 | remote_endpoint = None | |
223 | timestamp = None | |
224 | duration = None | |
225 | ||
226 | if thrift_span.parent_id: | |
227 | parent_id = self._convert_unsigned_long_to_lower_hex(thrift_span.parent_id) | |
228 | ||
229 | if thrift_span.annotations: | |
230 | ( | |
231 | annotations, | |
232 | local_endpoint, | |
233 | kind, | |
234 | timestamp, | |
235 | duration, | |
236 | ) = self._decode_thrift_annotations(thrift_span.annotations) | |
237 | ||
238 | if thrift_span.binary_annotations: | |
239 | ( | |
240 | tags, | |
241 | local_endpoint, | |
242 | remote_endpoint, | |
243 | ) = self._convert_from_thrift_binary_annotations( | |
244 | thrift_span.binary_annotations, | |
245 | ) | |
246 | ||
247 | assert thrift_span.trace_id is not None | |
248 | trace_id = self._convert_trace_id_to_string( | |
249 | thrift_span.trace_id, | |
250 | thrift_span.trace_id_high, | |
251 | ) | |
252 | ||
253 | assert thrift_span.id is not None | |
254 | return Span( | |
255 | trace_id=trace_id, | |
256 | name=thrift_span.name, | |
257 | parent_id=parent_id, | |
258 | span_id=self._convert_unsigned_long_to_lower_hex(thrift_span.id), | |
259 | kind=kind, | |
260 | timestamp=self.seconds(timestamp or thrift_span.timestamp), | |
261 | duration=self.seconds(duration or thrift_span.duration), | |
262 | local_endpoint=local_endpoint, | |
263 | remote_endpoint=remote_endpoint, | |
264 | shared=(kind == Kind.SERVER and thrift_span.timestamp is None), | |
265 | annotations=annotations, | |
266 | tags=tags, | |
267 | ) | |
268 | ||
269 | def _convert_trace_id_to_string( | |
270 | self, trace_id: int, trace_id_high: Optional[int] = None | |
271 | ) -> str: | |
272 | """ | |
273 | Converts the provided traceId hex value with optional high bits | |
274 | to a string. | |
275 | ||
276 | :param trace_id: the value of the trace ID | |
277 | :type trace_id: int | |
278 | :param trace_id_high: the high bits of the trace ID | |
279 | :type trace_id_high: int | |
280 | :returns: trace_id_high + trace_id as a string | |
281 | """ | |
282 | if trace_id_high is not None: | |
283 | result = bytearray(32) | |
284 | self._write_hex_long(result, 0, trace_id_high) | |
285 | self._write_hex_long(result, 16, trace_id) | |
286 | return result.decode("utf8") | |
287 | ||
288 | result = bytearray(16) | |
289 | self._write_hex_long(result, 0, trace_id) | |
290 | return result.decode("utf8") | |
291 | ||
292 | def _convert_unsigned_long_to_lower_hex(self, value: int) -> str: | |
293 | """ | |
294 | Converts the provided unsigned long value to a hex string. | |
295 | ||
296 | :param value: the value to convert | |
297 | :type value: unsigned long | |
298 | :returns: value as a hex string | |
299 | """ | |
300 | result = bytearray(16) | |
301 | self._write_hex_long(result, 0, value) | |
302 | return result.decode("utf8") | |
303 | ||
304 | def _write_hex_long(self, data: bytearray, pos: int, value: int) -> None: | |
305 | """ | |
306 | Writes an unsigned long value across a byte array. | |
307 | ||
308 | :param data: the buffer to write the value to | |
309 | :type data: bytearray | |
310 | :param pos: the starting position | |
311 | :type pos: int | |
312 | :param value: the value to write | |
313 | :type value: unsigned long | |
314 | """ | |
315 | self._write_hex_byte(data, pos + 0, (value >> 56) & 0xFF) | |
316 | self._write_hex_byte(data, pos + 2, (value >> 48) & 0xFF) | |
317 | self._write_hex_byte(data, pos + 4, (value >> 40) & 0xFF) | |
318 | self._write_hex_byte(data, pos + 6, (value >> 32) & 0xFF) | |
319 | self._write_hex_byte(data, pos + 8, (value >> 24) & 0xFF) | |
320 | self._write_hex_byte(data, pos + 10, (value >> 16) & 0xFF) | |
321 | self._write_hex_byte(data, pos + 12, (value >> 8) & 0xFF) | |
322 | self._write_hex_byte(data, pos + 14, (value & 0xFF)) | |
323 | ||
324 | def _write_hex_byte(self, data: bytearray, pos: int, byte: int) -> None: | |
325 | data[pos + 0] = ord(_HEX_DIGITS[int((byte >> 4) & 0xF)]) | |
326 | data[pos + 1] = ord(_HEX_DIGITS[int(byte & 0xF)]) |
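The `_write_hex_long`/`_write_hex_byte` pair above emits a 64-bit value as 16 lowercase hex digits, two characters per byte. A compact standalone equivalent, with a loop replacing the eight unrolled calls:

```python
_HEX_DIGITS = "0123456789abcdef"


def unsigned_long_to_lower_hex(value):
    # Write each of the eight bytes, most significant first,
    # as two lowercase hex digits into a 16-byte buffer.
    out = bytearray(16)
    for i in range(8):
        byte = (value >> (56 - 8 * i)) & 0xFF
        out[2 * i] = ord(_HEX_DIGITS[(byte >> 4) & 0xF])
        out[2 * i + 1] = ord(_HEX_DIGITS[byte & 0xF])
    return out.decode("utf8")
```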
0 | # -*- coding: utf-8 -*- | |
1 | 0 | import json |
1 | from typing import Dict | |
2 | from typing import List | |
3 | from typing import Mapping | |
4 | from typing import Optional | |
5 | from typing import Union | |
6 | ||
7 | from typing_extensions import TypedDict | |
8 | from typing_extensions import TypeGuard | |
2 | 9 | |
3 | 10 | from py_zipkin import thrift |
11 | from py_zipkin.encoding import protobuf | |
12 | from py_zipkin.encoding._helpers import Endpoint | |
13 | from py_zipkin.encoding._helpers import Span | |
4 | 14 | from py_zipkin.encoding._types import Encoding |
15 | from py_zipkin.encoding._types import Kind | |
5 | 16 | from py_zipkin.exception import ZipkinError |
6 | 17 | |
7 | 18 | |
8 | def get_encoder(encoding): | |
19 | def get_encoder(encoding: Encoding) -> "IEncoder": | |
9 | 20 | """Creates encoder object for the given encoding. |
10 | 21 | |
11 | 22 | :param encoding: desired output encoding protocol. |
19 | 30 | return _V1JSONEncoder() |
20 | 31 | if encoding == Encoding.V2_JSON: |
21 | 32 | return _V2JSONEncoder() |
22 | raise ZipkinError('Unknown encoding: {}'.format(encoding)) | |
23 | ||
24 | ||
25 | class IEncoder(object): | |
33 | if encoding == Encoding.V2_PROTO3: | |
34 | return _V2ProtobufEncoder() | |
35 | raise ZipkinError(f"Unknown encoding: {encoding}") | |
36 | ||
37 | ||
38 | class IEncoder: | |
26 | 39 | """Encoder interface.""" |
27 | 40 | |
28 | def fits(self, current_count, current_size, max_size, new_span): | |
41 | def fits( | |
42 | self, | |
43 | current_count: int, | |
44 | current_size: int, | |
45 | max_size: int, | |
46 | new_span: Union[str, bytes], | |
47 | ) -> bool: | |
29 | 48 | """Returns whether the new span will fit in the list. |
30 | 49 | |
31 | 50 | :param current_count: number of spans already in the list. |
41 | 60 | """ |
42 | 61 | raise NotImplementedError() |
43 | 62 | |
44 | def encode_span(self, span_builder): | |
63 | def encode_span(self, span: Span) -> Union[str, bytes]: | |
45 | 64 | """Encodes a single span. |
46 | 65 | |
47 | :param span_builder: span_builder object representing the span. | |
48 | :type span_builder: SpanBuilder | |
66 | :param span: Span object representing the span. | |
67 | :type span: Span | |
49 | 68 | :return: encoded span. |
50 | 69 | :rtype: str or bytes |
51 | 70 | """ |
52 | 71 | raise NotImplementedError() |
53 | 72 | |
54 | def encode_queue(self, queue): | |
73 | def encode_queue(self, queue: List[Union[str, bytes]]) -> Union[str, bytes]: | |
55 | 74 | """Encodes a list of pre-encoded spans. |
56 | 75 | |
57 | 76 | :param queue: list of encoded spans. |
62 | 81 | raise NotImplementedError() |
63 | 82 | |
64 | 83 | |
84 | def _is_mapping_str_float( | |
85 | mapping: Mapping[str, Optional[float]] | |
86 | ) -> TypeGuard[Mapping[str, float]]: | |
87 | return all(isinstance(value, float) for key, value in mapping.items()) | |
88 | ||
89 | ||
90 | def _is_dict_str_str(mapping: Dict[str, Optional[str]]) -> TypeGuard[Dict[str, str]]: | |
91 | return all(isinstance(value, str) for key, value in mapping.items()) | |
92 | ||
93 | ||
65 | 94 | class _V1ThriftEncoder(IEncoder): |
66 | 95 | """Thrift encoder for V1 spans.""" |
67 | 96 | |
68 | def fits(self, current_count, current_size, max_size, new_span): | |
97 | def fits( | |
98 | self, | |
99 | current_count: int, | |
100 | current_size: int, | |
101 | max_size: int, | |
102 | new_span: Union[str, bytes], | |
103 | ) -> bool: | |
69 | 104 | """Checks if the new span fits in the max payload size. |
70 | 105 | |
71 | 106 | Thrift lists have a fixed-size header and no delimiters between elements |
73 | 108 | """ |
74 | 109 | return thrift.LIST_HEADER_SIZE + current_size + len(new_span) <= max_size |
75 | 110 | |
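The size check above can be sketched standalone. `LIST_HEADER_SIZE` is assumed here to be 5 bytes (1 element-type byte plus a 4-byte length for a thrift binary list header), which may differ from py_zipkin's actual constant:

```python
LIST_HEADER_SIZE = 5  # assumed: 1 element-type byte + 4 size bytes


def fits(current_size, max_size, new_span):
    # Thrift lists have a fixed-size header and no delimiter between
    # elements, so only the header plus raw span bytes count.
    return LIST_HEADER_SIZE + current_size + len(new_span) <= max_size
```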
76 | def encode_span(self, span_builder): | |
111 | def encode_remote_endpoint( | |
112 | self, | |
113 | remote_endpoint: Endpoint, | |
114 | kind: Kind, | |
115 | binary_annotations: List[thrift.zipkinCore.BinaryAnnotation], | |
116 | ) -> None: | |
117 | assert remote_endpoint.port is not None | |
118 | thrift_remote_endpoint = thrift.create_endpoint( | |
119 | remote_endpoint.port, | |
120 | remote_endpoint.service_name, | |
121 | remote_endpoint.ipv4, | |
122 | remote_endpoint.ipv6, | |
123 | ) | |
124 | # these attributes aren't yet supported by thrift-pyi | |
125 | if kind == Kind.CLIENT: | |
126 | key = thrift.zipkinCore.SERVER_ADDR # type: ignore[attr-defined] | |
127 | elif kind == Kind.SERVER: | |
128 | key = thrift.zipkinCore.CLIENT_ADDR # type: ignore[attr-defined] | |
129 | ||
130 | binary_annotations.append( | |
131 | thrift.create_binary_annotation( | |
132 | key=key, | |
133 | value=thrift.SERVER_ADDR_VAL, | |
134 | annotation_type=thrift.zipkinCore.AnnotationType.BOOL, | |
135 | host=thrift_remote_endpoint, | |
136 | ) | |
137 | ) | |
138 | ||
139 | def encode_span(self, v2_span: Span) -> bytes: | |
77 | 140 | """Encodes the current span to thrift.""" |
78 | span = span_builder.build_v1_span() | |
79 | ||
141 | span = v2_span.build_v1_span() | |
142 | assert span.endpoint is not None | |
143 | assert span.endpoint.port is not None | |
80 | 144 | thrift_endpoint = thrift.create_endpoint( |
81 | 145 | span.endpoint.port, |
82 | 146 | span.endpoint.service_name, |
84 | 148 | span.endpoint.ipv6, |
85 | 149 | ) |
86 | 150 | |
151 | assert _is_mapping_str_float(span.annotations) | |
87 | 152 | thrift_annotations = thrift.annotation_list_builder( |
88 | 153 | span.annotations, |
89 | 154 | thrift_endpoint, |
90 | 155 | ) |
91 | 156 | |
157 | assert _is_dict_str_str(span.binary_annotations) | |
92 | 158 | thrift_binary_annotations = thrift.binary_annotation_list_builder( |
93 | 159 | span.binary_annotations, |
94 | 160 | thrift_endpoint, |
95 | 161 | ) |
96 | 162 | |
97 | # Add sa binary annotation | |
98 | if span.sa_endpoint is not None: | |
99 | thrift_sa_endpoint = thrift.create_endpoint( | |
100 | span.sa_endpoint.port, | |
101 | span.sa_endpoint.service_name, | |
102 | span.sa_endpoint.ipv4, | |
103 | span.sa_endpoint.ipv6, | |
104 | ) | |
105 | thrift_binary_annotations.append(thrift.create_binary_annotation( | |
106 | key=thrift.zipkin_core.SERVER_ADDR, | |
107 | value=thrift.SERVER_ADDR_VAL, | |
108 | annotation_type=thrift.zipkin_core.AnnotationType.BOOL, | |
109 | host=thrift_sa_endpoint, | |
110 | )) | |
111 | ||
163 | # Add sa/ca binary annotations | |
164 | if v2_span.remote_endpoint: | |
165 | self.encode_remote_endpoint( | |
166 | v2_span.remote_endpoint, | |
167 | v2_span.kind, | |
168 | thrift_binary_annotations, | |
169 | ) | |
170 | ||
171 | assert span.id is not None | |
112 | 172 | thrift_span = thrift.create_span( |
113 | 173 | span.id, |
114 | 174 | span.parent_id, |
123 | 183 | encoded_span = thrift.span_to_bytes(thrift_span) |
124 | 184 | return encoded_span |
125 | 185 | |
126 | def encode_queue(self, queue): | |
186 | def encode_queue(self, queue: List[Union[str, bytes]]) -> bytes: | |
127 | 187 | """Converts the queue to a thrift list""" |
128 | 188 | return thrift.encode_bytes_list(queue) |
129 | 189 | |
130 | 190 | |
191 | class JSONEndpoint(TypedDict, total=False): | |
192 | serviceName: Optional[str] | |
193 | port: Optional[int] | |
194 | ipv4: Optional[str] | |
195 | ipv6: Optional[str] | |
196 | ||
197 | ||
198 | def _is_str_list(any_str_list: List[Union[str, bytes]]) -> TypeGuard[List[str]]: | |
199 | return all(isinstance(element, str) for element in any_str_list) | |
200 | ||
201 | ||
131 | 202 | class _BaseJSONEncoder(IEncoder): |
132 | """ V1 and V2 JSON encoders need many common helper functions """ | |
133 | ||
134 | def fits(self, current_count, current_size, max_size, new_span): | |
203 | """V1 and V2 JSON encoders need many common helper functions""" | |
204 | ||
205 | def fits( | |
206 | self, | |
207 | current_count: int, | |
208 | current_size: int, | |
209 | max_size: int, | |
210 | new_span: Union[str, bytes], | |
211 | ) -> bool: | |
135 | 212 | """Checks if the new span fits in the max payload size. |
136 | 213 | |
 137 | 214 |         Json lists only have a 2-byte overhead from '[]' plus 1 byte from
139 | 216 | """ |
140 | 217 | return 2 + current_count + current_size + len(new_span) <= max_size |
141 | 218 | |
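The 2-byte `[]` overhead plus one comma per already-queued element is exact, which makes the formula easy to verify against an actual concatenation. A sketch with hypothetical span payloads:

```python
def fits(current_count: int, current_size: int, max_size: int, new_span: str) -> bool:
    # 2 bytes for "[]" plus one comma per element already in the queue.
    return 2 + current_count + current_size + len(new_span) <= max_size


# Hypothetical already-encoded spans:
queue = ['{"id":"1"}', '{"id":"2"}']
new_span = '{"id":"3"}'
current_size = sum(len(s) for s in queue)

encoded = "[" + ",".join(queue + [new_span]) + "]"
# The overhead estimate is exact: brackets + commas + payload bytes.
assert 2 + len(queue) + current_size + len(new_span) == len(encoded)
```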
142 | def _create_json_endpoint(self, endpoint, is_v1): | |
219 | def _create_json_endpoint(self, endpoint: Endpoint, is_v1: bool) -> JSONEndpoint: | |
143 | 220 | """Converts an Endpoint to a JSON endpoint dict. |
144 | 221 | |
145 | 222 | :param endpoint: endpoint object to convert. |
151 | 228 | :return: dict representing a JSON endpoint. |
152 | 229 | :rtype: dict |
153 | 230 | """ |
154 | json_endpoint = {} | |
231 | json_endpoint: JSONEndpoint = {} | |
155 | 232 | |
156 | 233 | if endpoint.service_name: |
157 | json_endpoint['serviceName'] = endpoint.service_name | |
234 | json_endpoint["serviceName"] = endpoint.service_name | |
158 | 235 | elif is_v1: |
159 | 236 | # serviceName is mandatory in v1 |
160 | json_endpoint['serviceName'] = "" | |
237 | json_endpoint["serviceName"] = "" | |
161 | 238 | if endpoint.port and endpoint.port != 0: |
162 | json_endpoint['port'] = endpoint.port | |
239 | json_endpoint["port"] = endpoint.port | |
163 | 240 | if endpoint.ipv4 is not None: |
164 | json_endpoint['ipv4'] = endpoint.ipv4 | |
241 | json_endpoint["ipv4"] = endpoint.ipv4 | |
165 | 242 | if endpoint.ipv6 is not None: |
166 | json_endpoint['ipv6'] = endpoint.ipv6 | |
243 | json_endpoint["ipv6"] = endpoint.ipv6 | |
167 | 244 | |
168 | 245 | return json_endpoint |
169 | 246 | |
170 | def encode_queue(self, queue): | |
247 | def encode_queue(self, queue: List[Union[str, bytes]]) -> str: | |
171 | 248 | """Concatenates the list to a JSON list""" |
172 | return '[' + ','.join(queue) + ']' | |
249 | assert _is_str_list(queue) | |
250 | return "[" + ",".join(queue) + "]" | |
251 | ||
252 | ||
253 | class JSONv1BinaryAnnotation(TypedDict): | |
254 | key: str | |
255 | value: Union[str, bool, None] | |
256 | endpoint: JSONEndpoint | |
257 | ||
258 | ||
259 | class JSONv1Annotation(TypedDict): | |
260 | endpoint: JSONEndpoint | |
261 | timestamp: int | |
262 | value: str | |
263 | ||
264 | ||
265 | class JSONv1Span(TypedDict, total=False): | |
266 | traceId: str | |
267 | name: Optional[str] | |
268 | id: Optional[str] | |
269 | annotations: List[JSONv1Annotation] | |
270 | binaryAnnotations: List[JSONv1BinaryAnnotation] | |
271 | parentId: str | |
272 | timestamp: int | |
273 | duration: int | |
173 | 274 | |
174 | 275 | |
175 | 276 | class _V1JSONEncoder(_BaseJSONEncoder): |
176 | 277 | """JSON encoder for V1 spans.""" |
177 | 278 | |
178 | def encode_span(self, span_builder): | |
279 | def encode_remote_endpoint( | |
280 | self, | |
281 | remote_endpoint: Endpoint, | |
282 | kind: Kind, | |
283 | binary_annotations: List[JSONv1BinaryAnnotation], | |
284 | ) -> None: | |
285 | json_remote_endpoint = self._create_json_endpoint(remote_endpoint, True) | |
286 | if kind == Kind.CLIENT: | |
287 | key = "sa" | |
288 | elif kind == Kind.SERVER: | |
289 | key = "ca" | |
290 | ||
291 | binary_annotations.append( | |
292 | {"key": key, "value": True, "endpoint": json_remote_endpoint} | |
293 | ) | |
294 | ||
295 | def encode_span(self, v2_span: Span) -> str: | |
179 | 296 | """Encodes a single span to JSON.""" |
180 | span = span_builder.build_v1_span() | |
181 | ||
182 | json_span = { | |
183 | 'traceId': span.trace_id, | |
184 | 'name': span.name, | |
185 | 'id': span.id, | |
186 | 'annotations': [], | |
187 | 'binaryAnnotations': [], | |
297 | span = v2_span.build_v1_span() | |
298 | ||
299 | json_span: JSONv1Span = { | |
300 | "traceId": span.trace_id, | |
301 | "name": span.name, | |
302 | "id": span.id, | |
303 | "annotations": [], | |
304 | "binaryAnnotations": [], | |
188 | 305 | } |
189 | 306 | |
190 | 307 | if span.parent_id: |
191 | json_span['parentId'] = span.parent_id | |
308 | json_span["parentId"] = span.parent_id | |
192 | 309 | if span.timestamp: |
193 | json_span['timestamp'] = int(span.timestamp * 1000000) | |
310 | json_span["timestamp"] = int(span.timestamp * 1000000) | |
194 | 311 | if span.duration: |
195 | json_span['duration'] = int(span.duration * 1000000) | |
196 | ||
312 | json_span["duration"] = int(span.duration * 1000000) | |
313 | ||
314 | assert span.endpoint is not None | |
197 | 315 | v1_endpoint = self._create_json_endpoint(span.endpoint, True) |
198 | 316 | |
199 | 317 | for key, timestamp in span.annotations.items(): |
200 | json_span['annotations'].append({ | |
201 | 'endpoint': v1_endpoint, | |
202 | 'timestamp': int(timestamp * 1000000), | |
203 | 'value': key, | |
204 | }) | |
318 | assert timestamp is not None | |
319 | json_span["annotations"].append( | |
320 | { | |
321 | "endpoint": v1_endpoint, | |
322 | "timestamp": int(timestamp * 1000000), | |
323 | "value": key, | |
324 | } | |
325 | ) | |
205 | 326 | |
206 | 327 | for key, value in span.binary_annotations.items(): |
207 | json_span['binaryAnnotations'].append({ | |
208 | 'key': key, | |
209 | 'value': value, | |
210 | 'endpoint': v1_endpoint, | |
211 | }) | |
212 | ||
213 | # Add sa binary annotations | |
214 | if span.sa_endpoint is not None: | |
215 | json_sa_endpoint = self._create_json_endpoint(span.sa_endpoint, True) | |
216 | json_span['binaryAnnotations'].append({ | |
217 | 'key': 'sa', | |
218 | 'value': '1', | |
219 | 'endpoint': json_sa_endpoint, | |
220 | }) | |
328 | json_span["binaryAnnotations"].append( | |
329 | {"key": key, "value": value, "endpoint": v1_endpoint} | |
330 | ) | |
331 | ||
332 | # Add sa/ca binary annotations | |
333 | if v2_span.remote_endpoint: | |
334 | self.encode_remote_endpoint( | |
335 | v2_span.remote_endpoint, | |
336 | v2_span.kind, | |
337 | json_span["binaryAnnotations"], | |
338 | ) | |
221 | 339 | |
222 | 340 | encoded_span = json.dumps(json_span) |
223 | 341 | |
224 | 342 | return encoded_span |
343 | ||
344 | ||
345 | class JSONv2Annotation(TypedDict): | |
346 | timestamp: int | |
347 | value: str | |
348 | ||
349 | ||
350 | class JSONv2Span(TypedDict, total=False): | |
351 | traceId: str | |
352 | id: Optional[str] | |
353 | name: str | |
354 | parentId: str | |
355 | timestamp: int | |
356 | duration: int | |
357 | shared: bool | |
358 | kind: str | |
359 | localEndpoint: JSONEndpoint | |
360 | remoteEndpoint: JSONEndpoint | |
361 | tags: Dict[str, str] | |
362 | annotations: List[JSONv2Annotation] | |
363 | ||
364 | ||
365 | def _is_dict_str_float( | |
366 | mapping: Dict[str, Optional[float]] | |
367 | ) -> TypeGuard[Dict[str, float]]: | |
368 | return all(isinstance(value, float) for key, value in mapping.items()) | |
225 | 369 | |
226 | 370 | |
227 | 371 | class _V2JSONEncoder(_BaseJSONEncoder): |
228 | 372 | """JSON encoder for V2 spans.""" |
229 | 373 | |
230 | def encode_span(self, span_builder): | |
374 | def encode_span(self, span: Span) -> str: | |
231 | 375 | """Encodes a single span to JSON.""" |
232 | span = span_builder.build_v2_span() | |
233 | ||
234 | json_span = { | |
235 | 'traceId': span.trace_id, | |
236 | 'id': span.id, | |
376 | ||
377 | json_span: JSONv2Span = { | |
378 | "traceId": span.trace_id, | |
379 | "id": span.span_id, | |
237 | 380 | } |
238 | 381 | |
239 | 382 | if span.name: |
240 | json_span['name'] = span.name | |
383 | json_span["name"] = span.name | |
241 | 384 | if span.parent_id: |
242 | json_span['parentId'] = span.parent_id | |
385 | json_span["parentId"] = span.parent_id | |
243 | 386 | if span.timestamp: |
244 | json_span['timestamp'] = int(span.timestamp * 1000000) | |
387 | json_span["timestamp"] = int(span.timestamp * 1000000) | |
245 | 388 | if span.duration: |
246 | json_span['duration'] = int(span.duration * 1000000) | |
389 | json_span["duration"] = int(span.duration * 1000000) | |
247 | 390 | if span.shared is True: |
248 | json_span['shared'] = True | |
391 | json_span["shared"] = True | |
249 | 392 | if span.kind and span.kind.value is not None: |
250 | json_span['kind'] = span.kind.value | |
393 | json_span["kind"] = span.kind.value | |
251 | 394 | if span.local_endpoint: |
252 | json_span['localEndpoint'] = self._create_json_endpoint( | |
395 | json_span["localEndpoint"] = self._create_json_endpoint( | |
253 | 396 | span.local_endpoint, |
254 | 397 | False, |
255 | 398 | ) |
256 | 399 | if span.remote_endpoint: |
257 | json_span['remoteEndpoint'] = self._create_json_endpoint( | |
400 | json_span["remoteEndpoint"] = self._create_json_endpoint( | |
258 | 401 | span.remote_endpoint, |
259 | 402 | False, |
260 | 403 | ) |
261 | 404 | if span.tags and len(span.tags) > 0: |
262 | json_span['tags'] = span.tags | |
405 | # Ensure that tags are all strings | |
406 | json_span["tags"] = { | |
407 | str(key): str(value) for key, value in span.tags.items() | |
408 | } | |
263 | 409 | |
264 | 410 | if span.annotations: |
265 | json_span['annotations'] = [ | |
266 | { | |
267 | 'timestamp': int(timestamp * 1000000), | |
268 | 'value': key, | |
269 | } | |
411 | assert _is_dict_str_float(span.annotations) | |
412 | json_span["annotations"] = [ | |
413 | {"timestamp": int(timestamp * 1000000), "value": key} | |
270 | 414 | for key, timestamp in span.annotations.items() |
271 | 415 | ] |
272 | 416 | |
273 | 417 | encoded_span = json.dumps(json_span) |
274 | 418 | |
275 | 419 | return encoded_span |
420 | ||
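Every timestamp and duration above goes through the same conversion: py_zipkin stores seconds as floats, while Zipkin's JSON format wants integer epoch microseconds. A small sketch of that convention; the field values are made up:

```python
import json


def to_microseconds(seconds: float) -> int:
    # Zipkin JSON expects integer microseconds, not float seconds.
    return int(seconds * 1000000)


json_span = {
    "traceId": "17133d482ba4f605",  # hypothetical ids
    "id": "b26412d1ac16767d",
    "timestamp": to_microseconds(1643892395.0),
    "duration": to_microseconds(0.5),
}
assert json_span["duration"] == 500000
encoded = json.dumps(json_span)
```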
421 | ||
422 | def _is_bytes_list(any_str_list: List[Union[str, bytes]]) -> TypeGuard[List[bytes]]: | |
423 | return all(isinstance(element, bytes) for element in any_str_list) | |
424 | ||
425 | ||
426 | class _V2ProtobufEncoder(IEncoder): | |
427 | """Protobuf encoder for V2 spans.""" | |
428 | ||
429 | def fits( | |
430 | self, | |
431 | current_count: int, | |
432 | current_size: int, | |
433 | max_size: int, | |
434 | new_span: Union[str, bytes], | |
435 | ) -> bool: | |
436 | """Checks if the new span fits in the max payload size.""" | |
437 | return current_size + len(new_span) <= max_size | |
438 | ||
439 | def encode_span(self, span: Span) -> bytes: | |
440 | """Encodes a single span to protobuf.""" | |
441 | if not protobuf.installed(): | |
442 | raise ZipkinError( | |
443 | "protobuf encoding requires installing the protobuf's extra " | |
444 | "requirements. Use py-zipkin[protobuf] in your requirements.txt." | |
445 | ) | |
446 | ||
447 | pb_span = protobuf.create_protobuf_span(span) | |
448 | return protobuf.encode_pb_list([pb_span]) | |
449 | ||
450 | def encode_queue(self, queue: List[Union[str, bytes]]) -> bytes: | |
451 | """Concatenates the list to a protobuf list and encodes it to bytes""" | |
452 | assert _is_bytes_list(queue) | |
453 | return b"".join(queue) |
0 | # -*- coding: utf-8 -*- | |
1 | 0 | import socket |
2 | from collections import namedtuple | |
3 | 1 | from collections import OrderedDict |
2 | from typing import Dict | |
3 | from typing import MutableMapping | |
4 | from typing import NamedTuple | |
5 | from typing import Optional | |
4 | 6 | |
5 | 7 | from py_zipkin.encoding._types import Kind |
6 | 8 | from py_zipkin.exception import ZipkinError |
7 | 9 | |
8 | 10 | |
9 | Endpoint = namedtuple( | |
10 | 'Endpoint', | |
11 | ['service_name', 'ipv4', 'ipv6', 'port'], | |
12 | ) | |
13 | ||
14 | ||
15 | _V1Span = namedtuple( | |
16 | 'V1Span', | |
17 | ['trace_id', 'name', 'parent_id', 'id', 'timestamp', 'duration', 'endpoint', | |
18 | 'annotations', 'binary_annotations', 'sa_endpoint'], | |
19 | ) | |
20 | ||
21 | ||
22 | _V2Span = namedtuple( | |
23 | 'V2Span', | |
24 | ['trace_id', 'name', 'parent_id', 'id', 'kind', 'timestamp', | |
25 | 'duration', 'debug', 'shared', 'local_endpoint', 'remote_endpoint', | |
26 | 'annotations', 'tags'], | |
27 | ) | |
28 | ||
29 | ||
30 | _DROP_ANNOTATIONS_BY_KIND = { | |
31 | Kind.CLIENT: {'ss', 'sr'}, | |
32 | Kind.SERVER: {'cs', 'cr'}, | |
33 | } | |
34 | ||
35 | ||
36 | class SpanBuilder(object): | |
37 | """Internal Span representation. It can generate both v1 and v2 spans. | |
38 | ||
39 | It doesn't exactly map to either V1 or V2, since an intermediate format | |
40 | makes it easier to convert to either format. | |
41 | """ | |
11 | class Endpoint(NamedTuple): | |
12 | service_name: Optional[str] | |
13 | ipv4: Optional[str] | |
14 | ipv6: Optional[str] | |
15 | port: Optional[int] | |
16 | ||
17 | ||
18 | class _V1Span(NamedTuple): | |
19 | trace_id: str | |
20 | name: Optional[str] | |
21 | parent_id: Optional[str] | |
22 | id: Optional[str] | |
23 | timestamp: Optional[float] | |
24 | duration: Optional[float] | |
25 | endpoint: Optional[Endpoint] | |
26 | annotations: MutableMapping[str, Optional[float]] | |
27 | binary_annotations: Dict[str, Optional[str]] | |
28 | remote_endpoint: Optional[Endpoint] | |
29 | ||
30 | ||
31 | class Span: | |
32 | """Internal V2 Span representation.""" | |
42 | 33 | |
43 | 34 | def __init__( |
44 | 35 | self, |
45 | trace_id, | |
46 | name, | |
47 | parent_id, | |
48 | span_id, | |
49 | timestamp, | |
50 | duration, | |
51 | annotations, | |
52 | tags, | |
53 | kind, | |
54 | local_endpoint=None, | |
55 | service_name=None, | |
56 | sa_endpoint=None, | |
57 | report_timestamp=True, | |
36 | trace_id: str, | |
37 | name: Optional[str], | |
38 | parent_id: Optional[str], | |
39 | span_id: Optional[str], | |
40 | kind: Kind, | |
41 | timestamp: Optional[float], | |
42 | duration: Optional[float], | |
43 | local_endpoint: Optional[Endpoint] = None, | |
44 | remote_endpoint: Optional[Endpoint] = None, | |
45 | debug: bool = False, | |
46 | shared: bool = False, | |
47 | annotations: Optional[Dict[str, Optional[float]]] = None, | |
48 | tags: Optional[Dict[str, Optional[str]]] = None, | |
58 | 49 | ): |
59 | """Creates a new SpanBuilder. | |
50 | """Creates a new Span. | |
60 | 51 | |
61 | 52 | :param trace_id: Trace id. |
62 | 53 | :type trace_id: str |
66 | 57 | :type parent_id: str |
67 | 58 | :param span_id: Span id. |
68 | 59 | :type span_id: str |
60 | :param kind: Span type (client, server, local, etc...) | |
61 | :type kind: Kind | |
69 | 62 | :param timestamp: start timestamp in seconds. |
70 | 63 | :type timestamp: float |
71 | 64 | :param duration: span duration in seconds. |
72 | 65 | :type duration: float |
66 | :param local_endpoint: the host that recorded this span. | |
67 | :type local_endpoint: Endpoint | |
68 | :param remote_endpoint: the remote service. | |
69 | :type remote_endpoint: Endpoint | |
     70 |         :param debug: True if this is a request to store this span even | |
     71 |             if it overrides the sampling policy. | |
72 | :type debug: bool | |
73 | :param shared: True if we are contributing to a span started by | |
     74 |             another tracer (e.g. on a different host). | |
75 | :type shared: bool | |
73 | 76 | :param annotations: Optional dict of str -> timestamp annotations. |
74 | 77 | :type annotations: dict |
75 | 78 | :param tags: Optional dict of str -> str span tags. |
76 | 79 | :type tags: dict |
77 | :param kind: Span type (client, server, local, etc...) | |
78 | :type kind: Kind | |
79 | :param local_endpoint: The host that recorded this span. | |
80 | :type local_endpoint: Endpoint | |
81 | :param service_name: The name of the called service | |
82 | :type service_name: str | |
83 | :param sa_endpoint: Remote server in client spans. | |
84 | :type sa_endpoint: Endpoint | |
85 | :param report_timestamp: Whether the span should report | |
86 | timestamp and duration. | |
87 | :type report_timestamp: bool | |
88 | 80 | """ |
89 | 81 | self.trace_id = trace_id |
90 | 82 | self.name = name |
93 | 85 | self.kind = kind |
94 | 86 | self.timestamp = timestamp |
95 | 87 | self.duration = duration |
96 | self.annotations = annotations | |
97 | self.tags = tags | |
98 | 88 | self.local_endpoint = local_endpoint |
99 | self.service_name = service_name | |
100 | self.sa_endpoint = sa_endpoint | |
101 | self.report_timestamp = report_timestamp | |
89 | self.remote_endpoint = remote_endpoint | |
90 | self.debug = debug | |
91 | self.shared = shared | |
92 | self.annotations = annotations or {} | |
93 | self.tags = tags or {} | |
102 | 94 | |
103 | 95 | if not isinstance(kind, Kind): |
96 | raise ZipkinError(f"Invalid kind value {kind}. Must be of type Kind.") | |
97 | ||
98 | if local_endpoint and not isinstance(local_endpoint, Endpoint): | |
99 | raise ZipkinError("Invalid local_endpoint value. Must be of type Endpoint.") | |
100 | ||
101 | if remote_endpoint and not isinstance(remote_endpoint, Endpoint): | |
104 | 102 | raise ZipkinError( |
105 | 'Invalid kind value {}. Must be of type Kind.'.format(kind)) | |
106 | ||
107 | def build_v1_span(self): | |
103 | "Invalid remote_endpoint value. Must be of type Endpoint." | |
104 | ) | |
105 | ||
106 | def __eq__(self, other: object) -> bool: # pragma: no cover | |
107 | """Compare function to help assert span1 == span2 in py3""" | |
108 | return self.__dict__ == other.__dict__ | |
109 | ||
110 | def __cmp__(self, other: "Span") -> int: # pragma: no cover | |
111 | """Compare function to help assert span1 == span2 in py2""" | |
112 | return self.__dict__ == other.__dict__ | |
113 | ||
114 | def __str__(self) -> str: # pragma: no cover | |
115 | """Compare function to nicely print Span rather than just the pointer""" | |
116 | return str(self.__dict__) | |
117 | ||
118 | def build_v1_span(self) -> _V1Span: | |
108 | 119 | """Builds and returns a V1 Span. |
109 | 120 | |
110 | 121 | :return: newly generated _V1Span |
111 | 122 | :rtype: _V1Span |
112 | 123 | """ |
113 | # We are simulating a full two-part span locally, so set cs=sr and ss=cr | |
114 | full_annotations = OrderedDict([ | |
115 | ('cs', self.timestamp), | |
116 | ('sr', self.timestamp), | |
117 | ('ss', self.timestamp + self.duration), | |
118 | ('cr', self.timestamp + self.duration), | |
119 | ]) | |
120 | ||
121 | if self.kind != Kind.LOCAL: | |
122 | # If kind is not LOCAL, then we only want client or | |
123 | # server side annotations. | |
124 | for ann in _DROP_ANNOTATIONS_BY_KIND[self.kind]: | |
125 | del full_annotations[ann] | |
126 | ||
127 | # Add user-defined annotations. We write them in full_annotations | |
124 | annotations: MutableMapping[str, Optional[float]] = OrderedDict([]) | |
125 | assert self.timestamp is not None | |
126 | if self.kind == Kind.CLIENT: | |
127 | assert self.duration is not None | |
128 | annotations["cs"] = self.timestamp | |
129 | annotations["cr"] = self.timestamp + self.duration | |
130 | elif self.kind == Kind.SERVER: | |
131 | assert self.duration is not None | |
132 | annotations["sr"] = self.timestamp | |
133 | annotations["ss"] = self.timestamp + self.duration | |
134 | elif self.kind == Kind.PRODUCER: | |
135 | annotations["ms"] = self.timestamp | |
136 | elif self.kind == Kind.CONSUMER: | |
137 | annotations["mr"] = self.timestamp | |
138 | ||
139 | # Add user-defined annotations. We write them in annotations | |
128 | 140 | # instead of the opposite so that user annotations will override |
129 | 141 | # any automatically generated annotation. |
130 | full_annotations.update(self.annotations) | |
142 | annotations.update(self.annotations) | |
131 | 143 | |
132 | 144 | return _V1Span( |
133 | 145 | trace_id=self.trace_id, |
134 | 146 | name=self.name, |
135 | 147 | parent_id=self.parent_id, |
136 | 148 | id=self.span_id, |
137 | timestamp=self.timestamp if self.report_timestamp else None, | |
138 | duration=self.duration if self.report_timestamp else None, | |
149 | timestamp=self.timestamp if self.shared is False else None, | |
150 | duration=self.duration if self.shared is False else None, | |
139 | 151 | endpoint=self.local_endpoint, |
140 | annotations=full_annotations, | |
152 | annotations=annotations, | |
141 | 153 | binary_annotations=self.tags, |
142 | sa_endpoint=self.sa_endpoint, | |
154 | remote_endpoint=self.remote_endpoint, | |
143 | 155 | ) |
144 | 156 | |
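The kind-to-annotation mapping in `build_v1_span` is the core of the v1 downgrade: a v2 `kind` becomes a pair (or a single) of timestamp annotations. A standalone sketch of that mapping, with `Kind` redeclared here only to keep the example self-contained:

```python
from enum import Enum
from typing import Dict


class Kind(Enum):  # mirrors py_zipkin.encoding._types.Kind
    CLIENT = "CLIENT"
    SERVER = "SERVER"
    PRODUCER = "PRODUCER"
    CONSUMER = "CONSUMER"
    LOCAL = None


def v1_annotations(kind: Kind, ts: float, duration: float) -> Dict[str, float]:
    # V1 encodes the span kind as timestamp annotations: client send/receive,
    # server receive/send, message send, message receive.
    if kind == Kind.CLIENT:
        return {"cs": ts, "cr": ts + duration}
    if kind == Kind.SERVER:
        return {"sr": ts, "ss": ts + duration}
    if kind == Kind.PRODUCER:
        return {"ms": ts}
    if kind == Kind.CONSUMER:
        return {"mr": ts}
    return {}  # LOCAL spans get no kind annotations


assert v1_annotations(Kind.CLIENT, 10.0, 2.0) == {"cs": 10.0, "cr": 12.0}
assert v1_annotations(Kind.PRODUCER, 10.0, 2.0) == {"ms": 10.0}
```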
145 | def build_v2_span(self): | |
146 | """Builds and returns a V2 Span. | |
147 | ||
148 | :return: newly generated _V2Span | |
149 | :rtype: _V2Span | |
150 | """ | |
151 | remote_endpoint = None | |
152 | if self.sa_endpoint: | |
153 | remote_endpoint = self.sa_endpoint | |
154 | ||
155 | return _V2Span( | |
156 | trace_id=self.trace_id, | |
157 | name=self.name, | |
158 | parent_id=self.parent_id, | |
159 | id=self.span_id, | |
160 | kind=self.kind, | |
161 | timestamp=self.timestamp, | |
162 | duration=self.duration, | |
163 | debug=False, | |
164 | shared=self.report_timestamp is False, | |
165 | local_endpoint=self.local_endpoint, | |
166 | remote_endpoint=remote_endpoint, | |
167 | annotations=self.annotations, | |
168 | tags=self.tags, | |
169 | ) | |
170 | ||
171 | ||
172 | def create_endpoint(port=0, service_name='unknown', host=None): | |
157 | ||
158 | def create_endpoint( | |
159 | port: Optional[int] = None, | |
160 | service_name: Optional[str] = None, | |
161 | host: Optional[str] = None, | |
162 | use_defaults: bool = True, | |
163 | ) -> Endpoint: | |
173 | 164 | """Creates a new Endpoint object. |
174 | 165 | |
175 | 166 | :param port: TCP/UDP port. Defaults to 0. |
179 | 170 | :param host: ipv4 or ipv6 address of the host. Defaults to the |
180 | 171 | current host ip. |
181 | 172 | :type host: str |
173 | :param use_defaults: whether to use defaults. | |
174 | :type use_defaults: bool | |
182 | 175 | :returns: zipkin Endpoint object |
183 | 176 | """ |
184 | if host is None: | |
185 | try: | |
186 | host = socket.gethostbyname(socket.gethostname()) | |
187 | except socket.gaierror: | |
188 | host = '127.0.0.1' | |
177 | if use_defaults: | |
178 | if port is None: | |
179 | port = 0 | |
180 | if service_name is None: | |
181 | service_name = "unknown" | |
182 | if host is None: | |
183 | try: | |
184 | host = socket.gethostbyname(socket.gethostname()) | |
185 | except socket.gaierror: | |
186 | host = "127.0.0.1" | |
189 | 187 | |
190 | 188 | ipv4 = None |
191 | 189 | ipv6 = None |
192 | 190 | |
193 | # Check ipv4 or ipv6. | |
194 | try: | |
195 | socket.inet_pton(socket.AF_INET, host) | |
196 | ipv4 = host | |
197 | except socket.error: | |
198 | # If it's not an ipv4 address, maybe it's ipv6. | |
191 | if host: | |
192 | # Check ipv4 or ipv6. | |
199 | 193 | try: |
200 | socket.inet_pton(socket.AF_INET6, host) | |
201 | ipv6 = host | |
202 | except socket.error: | |
203 | # If it's neither ipv4 or ipv6, leave both ip addresses unset. | |
204 | pass | |
205 | ||
206 | return Endpoint( | |
207 | ipv4=ipv4, | |
208 | ipv6=ipv6, | |
209 | port=port, | |
210 | service_name=service_name, | |
211 | ) | |
212 | ||
213 | ||
214 | def copy_endpoint_with_new_service_name(endpoint, new_service_name): | |
194 | socket.inet_pton(socket.AF_INET, host) | |
195 | ipv4 = host | |
196 | except OSError: | |
197 | # If it's not an ipv4 address, maybe it's ipv6. | |
198 | try: | |
199 | socket.inet_pton(socket.AF_INET6, host) | |
200 | ipv6 = host | |
201 | except OSError: | |
    202 |                 # If it's neither ipv4 nor ipv6, leave both ip addresses unset. | |
203 | pass | |
204 | ||
205 | return Endpoint(ipv4=ipv4, ipv6=ipv6, port=port, service_name=service_name) | |
206 | ||
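`create_endpoint` decides between `ipv4` and `ipv6` purely by whether `socket.inet_pton` accepts the string in each address family. That probe can be isolated as a small helper; the function name here is illustrative:

```python
import socket
from typing import Optional, Tuple


def classify_host(host: str) -> Tuple[Optional[str], Optional[str]]:
    # inet_pton raises OSError for addresses that don't parse in the given
    # family, so try AF_INET first and fall back to AF_INET6.
    try:
        socket.inet_pton(socket.AF_INET, host)
        return host, None
    except OSError:
        try:
            socket.inet_pton(socket.AF_INET6, host)
            return None, host
        except OSError:
            return None, None  # hostname or garbage: leave both unset


assert classify_host("127.0.0.1") == ("127.0.0.1", None)
assert classify_host("::1") == (None, "::1")
assert classify_host("not-an-ip") == (None, None)
```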
207 | ||
208 | def copy_endpoint_with_new_service_name( | |
209 | endpoint: Endpoint, | |
210 | new_service_name: Optional[str], | |
211 | ) -> Endpoint: | |
215 | 212 | """Creates a copy of a given endpoint with a new service name. |
216 | 213 | |
217 | 214 | :param endpoint: existing Endpoint object |
0 | # -*- coding: utf-8 -*- | |
1 | 0 | from enum import Enum |
2 | 1 | |
3 | 2 | |
4 | 3 | class Encoding(Enum): |
5 | 4 | """Supported output encodings.""" |
6 | V1_THRIFT = 'V1_THRIFT' | |
7 | V1_JSON = 'V1_JSON' | |
8 | V2_JSON = 'V2_JSON' | |
9 | V2_PROTOBUF = 'V2_PROTOBUF' | |
5 | ||
6 | V1_THRIFT = "V1_THRIFT" | |
7 | V1_JSON = "V1_JSON" | |
8 | V2_JSON = "V2_JSON" | |
9 | V2_PROTO3 = "V2_PROTO3" | |
10 | 10 | |
11 | 11 | |
12 | 12 | class Kind(Enum): |
13 | 13 | """Type of Span.""" |
14 | CLIENT = 'CLIENT' | |
15 | SERVER = 'SERVER' | |
14 | ||
15 | CLIENT = "CLIENT" | |
16 | SERVER = "SERVER" | |
17 | PRODUCER = "PRODUCER" | |
18 | CONSUMER = "CONSUMER" | |
16 | 19 | LOCAL = None |
0 | import socket | |
1 | import struct | |
2 | from typing import Dict | |
3 | from typing import List | |
4 | from typing import Optional | |
5 | ||
6 | from typing_extensions import TypedDict | |
7 | from typing_extensions import TypeGuard | |
8 | ||
9 | from py_zipkin.encoding._helpers import Endpoint | |
10 | from py_zipkin.encoding._helpers import Span | |
11 | from py_zipkin.encoding._types import Kind | |
12 | from py_zipkin.util import unsigned_hex_to_signed_int | |
13 | ||
14 | try: | |
15 | from py_zipkin.encoding.protobuf import zipkin_pb2 | |
16 | except ImportError: # pragma: no cover | |
17 | pass | |
18 | ||
19 | ||
20 | def installed() -> bool: # pragma: no cover | |
21 | """Checks whether the protobud library is installed and can be used. | |
22 | ||
23 | :return: True if everything's fine, False otherwise | |
24 | :rtype: bool | |
25 | """ | |
26 | try: | |
27 | _ = zipkin_pb2 | |
28 | return True | |
29 | except NameError: | |
30 | return False | |
31 | ||
32 | ||
33 | def encode_pb_list(pb_spans: "List[zipkin_pb2.Span]") -> bytes: | |
34 | """Encode list of protobuf Spans to binary. | |
35 | ||
36 | :param pb_spans: list of protobuf Spans. | |
37 | :type pb_spans: list of zipkin_pb2.Span | |
38 | :return: encoded list. | |
39 | :rtype: bytes | |
40 | """ | |
41 | pb_list = zipkin_pb2.ListOfSpans() | |
42 | pb_list.spans.extend(pb_spans) | |
43 | return pb_list.SerializeToString() | |
44 | ||
45 | ||
46 | class ProtobufSpanArgsDict(TypedDict, total=False): | |
47 | trace_id: bytes | |
48 | parent_id: bytes | |
49 | id: bytes | |
50 | kind: "zipkin_pb2.Span._Kind.ValueType" | |
51 | name: str | |
52 | timestamp: int | |
53 | duration: int | |
54 | local_endpoint: "zipkin_pb2.Endpoint" | |
55 | remote_endpoint: "zipkin_pb2.Endpoint" | |
56 | annotations: "List[zipkin_pb2.Annotation]" | |
57 | tags: Dict[str, str] | |
58 | debug: bool | |
59 | shared: bool | |
60 | ||
61 | ||
62 | def _is_dict_str_str(mapping: Dict[str, Optional[str]]) -> TypeGuard[Dict[str, str]]: | |
63 | return all(isinstance(value, str) for _, value in mapping.items()) | |
64 | ||
65 | ||
66 | def create_protobuf_span(span: Span) -> "zipkin_pb2.Span": | |
67 | """Converts a py_zipkin Span in a protobuf Span. | |
68 | ||
69 | :param span: py_zipkin Span to convert. | |
70 | :type span: py_zipkin.encoding.Span | |
71 | :return: protobuf's Span | |
72 | :rtype: zipkin_pb2.Span | |
73 | """ | |
74 | ||
     75 |     # Protobuf's composite types (e.g. Span's local_endpoint) are immutable. | |
76 | # So we can't create a zipkin_pb2.Span here and then set the appropriate | |
77 | # fields since `pb_span.local_endpoint = zipkin_pb2.Endpoint` fails. | |
78 | # Instead we just create the kwargs and pass them in to the Span constructor. | |
79 | pb_kwargs: ProtobufSpanArgsDict = {} | |
80 | ||
81 | pb_kwargs["trace_id"] = _hex_to_bytes(span.trace_id) | |
82 | ||
83 | if span.parent_id: | |
84 | pb_kwargs["parent_id"] = _hex_to_bytes(span.parent_id) | |
85 | ||
86 | assert span.span_id is not None | |
87 | pb_kwargs["id"] = _hex_to_bytes(span.span_id) | |
88 | ||
89 | pb_kind = _get_protobuf_kind(span.kind) | |
90 | if pb_kind: | |
91 | pb_kwargs["kind"] = pb_kind | |
92 | ||
93 | if span.name: | |
94 | pb_kwargs["name"] = span.name | |
95 | if span.timestamp: | |
96 | pb_kwargs["timestamp"] = int(span.timestamp * 1000 * 1000) | |
97 | if span.duration: | |
98 | pb_kwargs["duration"] = int(span.duration * 1000 * 1000) | |
99 | ||
100 | if span.local_endpoint: | |
101 | pb_kwargs["local_endpoint"] = _convert_endpoint(span.local_endpoint) | |
102 | ||
103 | if span.remote_endpoint: | |
104 | pb_kwargs["remote_endpoint"] = _convert_endpoint(span.remote_endpoint) | |
105 | ||
106 | if len(span.annotations) > 0: | |
107 | pb_kwargs["annotations"] = _convert_annotations(span.annotations) | |
108 | ||
109 | if len(span.tags) > 0: | |
110 | assert _is_dict_str_str(span.tags) | |
111 | pb_kwargs["tags"] = span.tags | |
112 | ||
113 | if span.debug: | |
114 | pb_kwargs["debug"] = span.debug | |
115 | ||
116 | if span.shared: | |
117 | pb_kwargs["shared"] = span.shared | |
118 | ||
119 | return zipkin_pb2.Span(**pb_kwargs) | |
120 | ||
121 | ||
122 | def _hex_to_bytes(hex_id: str) -> bytes: | |
123 | """Encodes to hexadecimal ids to big-endian binary. | |
124 | ||
125 | :param hex_id: hexadecimal id to encode. | |
126 | :type hex_id: str | |
127 | :return: binary representation. | |
128 | :type: bytes | |
129 | """ | |
130 | if len(hex_id) <= 16: | |
131 | int_id = unsigned_hex_to_signed_int(hex_id) | |
132 | return struct.pack(">q", int_id) | |
133 | else: | |
    134 |         # There's no 16-byte encoding in Python's struct, so we convert the | |
    135 |         # id as two 64-bit ids and then concatenate the result. | |
136 | ||
137 | # NOTE: we count 16 chars from the right (:-16) rather than the left so | |
138 | # that ids with less than 32 chars will be correctly pre-padded with 0s. | |
139 | high_id = unsigned_hex_to_signed_int(hex_id[:-16]) | |
140 | high_bin = struct.pack(">q", high_id) | |
141 | ||
142 | low_id = unsigned_hex_to_signed_int(hex_id[-16:]) | |
143 | low_bin = struct.pack(">q", low_id) | |
144 | ||
145 | return high_bin + low_bin | |


def _get_protobuf_kind(kind: Kind) -> "Optional[zipkin_pb2.Span._Kind.ValueType]":
    """Converts py_zipkin's Kind to protobuf's Kind.

    :param kind: py_zipkin's Kind.
    :type kind: py_zipkin.Kind
    :return: corresponding protobuf kind value.
    :rtype: zipkin_pb2.Span._Kind.ValueType
    """
    if kind == Kind.CLIENT:
        return zipkin_pb2.Span.CLIENT
    elif kind == Kind.SERVER:
        return zipkin_pb2.Span.SERVER
    elif kind == Kind.PRODUCER:
        return zipkin_pb2.Span.PRODUCER
    elif kind == Kind.CONSUMER:
        return zipkin_pb2.Span.CONSUMER
    return None


def _convert_endpoint(endpoint: Endpoint) -> "zipkin_pb2.Endpoint":
    """Converts py_zipkin's Endpoint to protobuf's Endpoint.

    :param endpoint: py_zipkin's endpoint to convert.
    :type endpoint: py_zipkin.encoding.Endpoint
    :return: corresponding protobuf endpoint.
    :rtype: zipkin_pb2.Endpoint
    """
    pb_endpoint = zipkin_pb2.Endpoint()

    if endpoint.service_name:
        pb_endpoint.service_name = endpoint.service_name
    if endpoint.port and endpoint.port != 0:
        pb_endpoint.port = endpoint.port
    if endpoint.ipv4:
        pb_endpoint.ipv4 = socket.inet_pton(socket.AF_INET, endpoint.ipv4)
    if endpoint.ipv6:
        pb_endpoint.ipv6 = socket.inet_pton(socket.AF_INET6, endpoint.ipv6)

    return pb_endpoint


def _convert_annotations(
    annotations: Dict[str, Optional[float]]
) -> "List[zipkin_pb2.Annotation]":
    """Converts py_zipkin's annotations dict to protobuf.

    :param annotations: annotations dict.
    :type annotations: dict
    :return: corresponding protobuf list of annotations.
    :rtype: list
    """
    pb_annotations = []
    for value, ts in annotations.items():
        assert ts is not None
        pb_annotations.append(
            zipkin_pb2.Annotation(timestamp=int(ts * 1000 * 1000), value=value)
        )
    return pb_annotations
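The two-halves trick in `_hex_to_bytes` can be sketched in isolation. The helper below is illustrative only (the name `hex_to_big_endian_bytes` is not part of py_zipkin); it packs the raw unsigned value with `>Q`, which produces the same bytes as py_zipkin's signed `>q` after its unsigned-to-signed conversion:

```python
import struct


def hex_to_big_endian_bytes(hex_id: str) -> bytes:
    # struct has no 16-byte integer format, so 128-bit trace ids are packed
    # as two big-endian 64-bit halves and concatenated.
    if len(hex_id) <= 16:
        return struct.pack(">Q", int(hex_id, 16))
    # Slicing from the right ensures ids shorter than 32 chars end up
    # zero-padded at the front.
    high = struct.pack(">Q", int(hex_id[:-16], 16))
    low = struct.pack(">Q", int(hex_id[-16:], 16))
    return high + low
```

Short ids pad to a full 8 bytes, and anything longer than 16 hex chars yields exactly 16 bytes, matching Zipkin's expectation of 8- or 16-byte ids.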
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: py_zipkin/encoding/protobuf/zipkin.proto
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()




DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n(py_zipkin/encoding/protobuf/zipkin.proto\x12\rzipkin.proto3\"\xf5\x03\n\x04Span\x12\x10\n\x08trace_id\x18\x01 \x01(\x0c\x12\x11\n\tparent_id\x18\x02 \x01(\x0c\x12\n\n\x02id\x18\x03 \x01(\x0c\x12&\n\x04kind\x18\x04 \x01(\x0e\x32\x18.zipkin.proto3.Span.Kind\x12\x0c\n\x04name\x18\x05 \x01(\t\x12\x11\n\ttimestamp\x18\x06 \x01(\x06\x12\x10\n\x08\x64uration\x18\x07 \x01(\x04\x12/\n\x0elocal_endpoint\x18\x08 \x01(\x0b\x32\x17.zipkin.proto3.Endpoint\x12\x30\n\x0fremote_endpoint\x18\t \x01(\x0b\x32\x17.zipkin.proto3.Endpoint\x12.\n\x0b\x61nnotations\x18\n \x03(\x0b\x32\x19.zipkin.proto3.Annotation\x12+\n\x04tags\x18\x0b \x03(\x0b\x32\x1d.zipkin.proto3.Span.TagsEntry\x12\r\n\x05\x64\x65\x62ug\x18\x0c \x01(\x08\x12\x0e\n\x06shared\x18\r \x01(\x08\x1a+\n\tTagsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"U\n\x04Kind\x12\x19\n\x15SPAN_KIND_UNSPECIFIED\x10\x00\x12\n\n\x06\x43LIENT\x10\x01\x12\n\n\x06SERVER\x10\x02\x12\x0c\n\x08PRODUCER\x10\x03\x12\x0c\n\x08\x43ONSUMER\x10\x04\"J\n\x08\x45ndpoint\x12\x14\n\x0cservice_name\x18\x01 \x01(\t\x12\x0c\n\x04ipv4\x18\x02 \x01(\x0c\x12\x0c\n\x04ipv6\x18\x03 \x01(\x0c\x12\x0c\n\x04port\x18\x04 \x01(\x05\".\n\nAnnotation\x12\x11\n\ttimestamp\x18\x01 \x01(\x06\x12\r\n\x05value\x18\x02 \x01(\t\"1\n\x0bListOfSpans\x12\"\n\x05spans\x18\x01 \x03(\x0b\x32\x13.zipkin.proto3.Span\"\x10\n\x0eReportResponse2T\n\x0bSpanService\x12\x45\n\x06Report\x12\x1a.zipkin.proto3.ListOfSpans\x1a\x1d.zipkin.proto3.ReportResponse\"\x00\x42\x12\n\x0ezipkin2.proto3P\x01\x62\x06proto3')



_SPAN = DESCRIPTOR.message_types_by_name['Span']
_SPAN_TAGSENTRY = _SPAN.nested_types_by_name['TagsEntry']
_ENDPOINT = DESCRIPTOR.message_types_by_name['Endpoint']
_ANNOTATION = DESCRIPTOR.message_types_by_name['Annotation']
_LISTOFSPANS = DESCRIPTOR.message_types_by_name['ListOfSpans']
_REPORTRESPONSE = DESCRIPTOR.message_types_by_name['ReportResponse']
_SPAN_KIND = _SPAN.enum_types_by_name['Kind']
Span = _reflection.GeneratedProtocolMessageType('Span', (_message.Message,), {

  'TagsEntry' : _reflection.GeneratedProtocolMessageType('TagsEntry', (_message.Message,), {
    'DESCRIPTOR' : _SPAN_TAGSENTRY,
    '__module__' : 'py_zipkin.encoding.protobuf.zipkin_pb2'
    # @@protoc_insertion_point(class_scope:zipkin.proto3.Span.TagsEntry)
    })
  ,
  'DESCRIPTOR' : _SPAN,
  '__module__' : 'py_zipkin.encoding.protobuf.zipkin_pb2'
  # @@protoc_insertion_point(class_scope:zipkin.proto3.Span)
  })
_sym_db.RegisterMessage(Span)
_sym_db.RegisterMessage(Span.TagsEntry)

Endpoint = _reflection.GeneratedProtocolMessageType('Endpoint', (_message.Message,), {
  'DESCRIPTOR' : _ENDPOINT,
  '__module__' : 'py_zipkin.encoding.protobuf.zipkin_pb2'
  # @@protoc_insertion_point(class_scope:zipkin.proto3.Endpoint)
  })
_sym_db.RegisterMessage(Endpoint)

Annotation = _reflection.GeneratedProtocolMessageType('Annotation', (_message.Message,), {
  'DESCRIPTOR' : _ANNOTATION,
  '__module__' : 'py_zipkin.encoding.protobuf.zipkin_pb2'
  # @@protoc_insertion_point(class_scope:zipkin.proto3.Annotation)
  })
_sym_db.RegisterMessage(Annotation)

ListOfSpans = _reflection.GeneratedProtocolMessageType('ListOfSpans', (_message.Message,), {
  'DESCRIPTOR' : _LISTOFSPANS,
  '__module__' : 'py_zipkin.encoding.protobuf.zipkin_pb2'
  # @@protoc_insertion_point(class_scope:zipkin.proto3.ListOfSpans)
  })
_sym_db.RegisterMessage(ListOfSpans)

ReportResponse = _reflection.GeneratedProtocolMessageType('ReportResponse', (_message.Message,), {
  'DESCRIPTOR' : _REPORTRESPONSE,
  '__module__' : 'py_zipkin.encoding.protobuf.zipkin_pb2'
  # @@protoc_insertion_point(class_scope:zipkin.proto3.ReportResponse)
  })
_sym_db.RegisterMessage(ReportResponse)

_SPANSERVICE = DESCRIPTOR.services_by_name['SpanService']
if _descriptor._USE_C_DESCRIPTORS == False:

  DESCRIPTOR._options = None
  DESCRIPTOR._serialized_options = b'\n\016zipkin2.proto3P\001'
  _SPAN_TAGSENTRY._options = None
  _SPAN_TAGSENTRY._serialized_options = b'8\001'
  _SPAN._serialized_start=60
  _SPAN._serialized_end=561
  _SPAN_TAGSENTRY._serialized_start=431
  _SPAN_TAGSENTRY._serialized_end=474
  _SPAN_KIND._serialized_start=476
  _SPAN_KIND._serialized_end=561
  _ENDPOINT._serialized_start=563
  _ENDPOINT._serialized_end=637
  _ANNOTATION._serialized_start=639
  _ANNOTATION._serialized_end=685
  _LISTOFSPANS._serialized_start=687
  _LISTOFSPANS._serialized_end=736
  _REPORTRESPONSE._serialized_start=738
  _REPORTRESPONSE._serialized_end=754
  _SPANSERVICE._serialized_start=756
  _SPANSERVICE._serialized_end=840
# @@protoc_insertion_point(module_scope)
"""
@generated by mypy-protobuf. Do not edit manually!
isort:skip_file

Copyright 2018-2019 The OpenZipkin Authors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License
is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
or implied. See the License for the specific language governing permissions and limitations under
the License.
"""
import builtins
import collections.abc
import google.protobuf.descriptor
import google.protobuf.internal.containers
import google.protobuf.internal.enum_type_wrapper
import google.protobuf.message
import sys
import typing

if sys.version_info >= (3, 10):
    import typing as typing_extensions
else:
    import typing_extensions

DESCRIPTOR: google.protobuf.descriptor.FileDescriptor

class Span(google.protobuf.message.Message):
    """A span is a single-host view of an operation. A trace is a series of spans
    (often RPC calls) which nest to form a latency tree. Spans are in the same
    trace when they share the same trace ID. The parent_id field establishes the
    position of one span in the tree.

    The root span is where parent_id is absent and usually has the longest
    duration in the trace. However, nested asynchronous work can materialize as
    child spans whose duration exceeds the root span's.

    Spans usually represent remote activity such as RPC calls, or messaging
    producers and consumers. However, they can also represent in-process
    activity in any position of the trace. For example, a root span could
    represent a server receiving an initial client request. A root span could
    also represent a scheduled job that has no remote context.

    Encoding notes:

    Epoch timestamps are encoded as fixed64, as a varint would also be 8 bytes,
    and more expensive to encode and size. Duration is stored as uint64, as
    often the numbers are quite small.

    Default values are ok, as only natural numbers are used. For example, zero is
    an invalid timestamp and an invalid duration, false values for debug or shared
    are ignorable, and zero-length strings also coerce to null.

    The next id is 14.

    Note that fields up to 15 take 1 byte to encode. Take care when adding new
    fields: https://developers.google.com/protocol-buffers/docs/proto3#assigning-tags
    """

    DESCRIPTOR: google.protobuf.descriptor.Descriptor

    class _Kind:
        ValueType = typing.NewType("ValueType", builtins.int)
        V: typing_extensions.TypeAlias = ValueType

    class _KindEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[Span._Kind.ValueType], builtins.type):  # noqa: F821
        DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor
        SPAN_KIND_UNSPECIFIED: Span._Kind.ValueType  # 0
        """Default value interpreted as absent."""
        CLIENT: Span._Kind.ValueType  # 1
        """The span represents the client side of an RPC operation, implying the
        following:

        timestamp is the moment a request was sent to the server.
        duration is the delay until a response or an error was received.
        remote_endpoint is the server.
        """
        SERVER: Span._Kind.ValueType  # 2
        """The span represents the server side of an RPC operation, implying the
        following:

        timestamp is the moment a client request was received.
        duration is the delay until a response was sent or an error occurred.
        remote_endpoint is the client.
        """
        PRODUCER: Span._Kind.ValueType  # 3
        """The span represents production of a message to a remote broker, implying
        the following:

        timestamp is the moment a message was sent to a destination.
        duration is the delay sending the message, such as batching.
        remote_endpoint is the broker.
        """
        CONSUMER: Span._Kind.ValueType  # 4
        """The span represents consumption of a message from a remote broker, not
        time spent servicing it. For example, a message processor would be an
        in-process child span of a consumer. Consumer spans imply the following:

        timestamp is the moment a message was received from an origin.
        duration is the delay consuming the message, such as from backlog.
        remote_endpoint is the broker.
        """

    class Kind(_Kind, metaclass=_KindEnumTypeWrapper):
        """When present, kind clarifies timestamp, duration and remote_endpoint. When
        absent, the span is local or incomplete. Unlike client and server, there
        is no direct critical path latency relationship between producer and
        consumer spans.
        """

    SPAN_KIND_UNSPECIFIED: Span.Kind.ValueType  # 0
    """Default value interpreted as absent."""
    CLIENT: Span.Kind.ValueType  # 1
    """The span represents the client side of an RPC operation, implying the
    following:

    timestamp is the moment a request was sent to the server.
    duration is the delay until a response or an error was received.
    remote_endpoint is the server.
    """
    SERVER: Span.Kind.ValueType  # 2
    """The span represents the server side of an RPC operation, implying the
    following:

    timestamp is the moment a client request was received.
    duration is the delay until a response was sent or an error occurred.
    remote_endpoint is the client.
    """
    PRODUCER: Span.Kind.ValueType  # 3
    """The span represents production of a message to a remote broker, implying
    the following:

    timestamp is the moment a message was sent to a destination.
    duration is the delay sending the message, such as batching.
    remote_endpoint is the broker.
    """
    CONSUMER: Span.Kind.ValueType  # 4
    """The span represents consumption of a message from a remote broker, not
    time spent servicing it. For example, a message processor would be an
    in-process child span of a consumer. Consumer spans imply the following:

    timestamp is the moment a message was received from an origin.
    duration is the delay consuming the message, such as from backlog.
    remote_endpoint is the broker.
    """

    class TagsEntry(google.protobuf.message.Message):
        DESCRIPTOR: google.protobuf.descriptor.Descriptor

        KEY_FIELD_NUMBER: builtins.int
        VALUE_FIELD_NUMBER: builtins.int
        key: builtins.str
        value: builtins.str
        def __init__(
            self,
            *,
            key: builtins.str = ...,
            value: builtins.str = ...,
        ) -> None: ...
        def ClearField(self, field_name: typing_extensions.Literal["key", b"key", "value", b"value"]) -> None: ...

    TRACE_ID_FIELD_NUMBER: builtins.int
    PARENT_ID_FIELD_NUMBER: builtins.int
    ID_FIELD_NUMBER: builtins.int
    KIND_FIELD_NUMBER: builtins.int
    NAME_FIELD_NUMBER: builtins.int
    TIMESTAMP_FIELD_NUMBER: builtins.int
    DURATION_FIELD_NUMBER: builtins.int
    LOCAL_ENDPOINT_FIELD_NUMBER: builtins.int
    REMOTE_ENDPOINT_FIELD_NUMBER: builtins.int
    ANNOTATIONS_FIELD_NUMBER: builtins.int
    TAGS_FIELD_NUMBER: builtins.int
    DEBUG_FIELD_NUMBER: builtins.int
    SHARED_FIELD_NUMBER: builtins.int
    trace_id: builtins.bytes
    """Randomly generated, unique identifier for a trace, set on all spans within
    it.

    This field is required and encoded as 8 or 16 bytes, in big-endian byte
    order.
    """
    parent_id: builtins.bytes
    """The parent span ID, or absent if this is the root span in a trace."""
    id: builtins.bytes
    """Unique identifier for this operation within the trace.

    This field is required and encoded as 8 opaque bytes.
    """
    kind: global___Span.Kind.ValueType
    """When present, used to interpret remote_endpoint."""
    name: builtins.str
    """The logical operation this span represents, in lowercase (e.g. rpc method).
    Leave absent if unknown.

    As these are lookup labels, take care to ensure names are low cardinality.
    For example, do not embed variables into the name.
    """
    timestamp: builtins.int
    """Epoch microseconds of the start of this span, possibly absent if
    incomplete.

    For example, 1502787600000000 corresponds to 2017-08-15 09:00 UTC.

    This value should be set directly by instrumentation, using the most
    precise value possible. For example, gettimeofday or multiplying epoch
    millis by 1000.

    There are three known edge cases where this could be reported absent:
    - A span was allocated but never started (ex. not yet received a timestamp)
    - The span's start event was lost
    - Data about a completed span (ex. tags) were sent after the fact
    """
    duration: builtins.int
    """Duration in microseconds of the critical path, if known. Durations of less
    than one are rounded up. The duration of children can be longer than their
    parents' due to asynchronous operations.

    For example, 150 milliseconds is 150000 microseconds.
    """
    @property
    def local_endpoint(self) -> global___Endpoint:
        """The host that recorded this span, primarily for query by service name.

        Instrumentation should always record this. Usually, absent implies late
        data. The IP address corresponding to this is usually the site-local or
        advertised service address. When present, the port indicates the listen
        port.
        """
    @property
    def remote_endpoint(self) -> global___Endpoint:
        """When an RPC (or messaging) span, indicates the other side of the
        connection.

        By recording the remote endpoint, your trace will contain network context
        even if the peer is not tracing. For example, you can record the IP from
        the "X-Forwarded-For" header or the service name and socket of a remote
        peer.
        """
    @property
    def annotations(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___Annotation]:
        """Associates events that explain latency with the time they happened."""
    @property
    def tags(self) -> google.protobuf.internal.containers.ScalarMap[builtins.str, builtins.str]:
        """Tags give your span context for search, viewing and analysis.

        For example, a key "your_app.version" would let you look up traces by
        version. A tag "sql.query" isn't searchable, but it can help in debugging
        when viewing a trace.
        """
    debug: builtins.bool
    """True denotes a request to store this span even if it overrides sampling
    policy.

    This is true when the "X-B3-Flags" header has a value of 1.
    """
    shared: builtins.bool
    """True if we are contributing to a span started by another tracer (ex. on a
    different host).
    """
    def __init__(
        self,
        *,
        trace_id: builtins.bytes = ...,
        parent_id: builtins.bytes = ...,
        id: builtins.bytes = ...,
        kind: global___Span.Kind.ValueType = ...,
        name: builtins.str = ...,
        timestamp: builtins.int = ...,
        duration: builtins.int = ...,
        local_endpoint: global___Endpoint | None = ...,
        remote_endpoint: global___Endpoint | None = ...,
        annotations: collections.abc.Iterable[global___Annotation] | None = ...,
        tags: collections.abc.Mapping[builtins.str, builtins.str] | None = ...,
        debug: builtins.bool = ...,
        shared: builtins.bool = ...,
    ) -> None: ...
    def HasField(self, field_name: typing_extensions.Literal["local_endpoint", b"local_endpoint", "remote_endpoint", b"remote_endpoint"]) -> builtins.bool: ...
    def ClearField(self, field_name: typing_extensions.Literal["annotations", b"annotations", "debug", b"debug", "duration", b"duration", "id", b"id", "kind", b"kind", "local_endpoint", b"local_endpoint", "name", b"name", "parent_id", b"parent_id", "remote_endpoint", b"remote_endpoint", "shared", b"shared", "tags", b"tags", "timestamp", b"timestamp", "trace_id", b"trace_id"]) -> None: ...

global___Span = Span

class Endpoint(google.protobuf.message.Message):
    """The network context of a node in the service graph.

    The next id is 5.
    """

    DESCRIPTOR: google.protobuf.descriptor.Descriptor

    SERVICE_NAME_FIELD_NUMBER: builtins.int
    IPV4_FIELD_NUMBER: builtins.int
    IPV6_FIELD_NUMBER: builtins.int
    PORT_FIELD_NUMBER: builtins.int
    service_name: builtins.str
    """Lower-case label of this node in the service graph, such as "favstar".
    Leave absent if unknown.

    This is a primary label for trace lookup and aggregation, so it should be
    intuitive and consistent. Many use a name from service discovery.
    """
    ipv4: builtins.bytes
    """4 byte representation of the primary IPv4 address associated with this
    connection. Absent if unknown.
    """
    ipv6: builtins.bytes
    """16 byte representation of the primary IPv6 address associated with this
    connection. Absent if unknown.

    Prefer using the ipv4 field for mapped addresses.
    """
    port: builtins.int
    """Depending on context, this could be a listen port or the client side of a
    socket. Absent if unknown.
    """
    def __init__(
        self,
        *,
        service_name: builtins.str = ...,
        ipv4: builtins.bytes = ...,
        ipv6: builtins.bytes = ...,
        port: builtins.int = ...,
    ) -> None: ...
    def ClearField(self, field_name: typing_extensions.Literal["ipv4", b"ipv4", "ipv6", b"ipv6", "port", b"port", "service_name", b"service_name"]) -> None: ...

global___Endpoint = Endpoint

class Annotation(google.protobuf.message.Message):
    """Associates an event that explains latency with a timestamp.
    Unlike log statements, annotations are often codes. Ex. "ws" for WireSend.

    The next id is 3.
    """

    DESCRIPTOR: google.protobuf.descriptor.Descriptor

    TIMESTAMP_FIELD_NUMBER: builtins.int
    VALUE_FIELD_NUMBER: builtins.int
    timestamp: builtins.int
    """Epoch microseconds of this event.

    For example, 1502787600000000 corresponds to 2017-08-15 09:00 UTC.

    This value should be set directly by instrumentation, using the most
    precise value possible. For example, gettimeofday or multiplying epoch
    millis by 1000.
    """
    value: builtins.str
    """Usually a short tag indicating an event, like "error".

    While it is possible to add larger data, such as garbage collection details,
    low-cardinality event names both keep the size of spans down and are easy
    to search against.
    """
    def __init__(
        self,
        *,
        timestamp: builtins.int = ...,
        value: builtins.str = ...,
    ) -> None: ...
    def ClearField(self, field_name: typing_extensions.Literal["timestamp", b"timestamp", "value", b"value"]) -> None: ...

global___Annotation = Annotation

class ListOfSpans(google.protobuf.message.Message):
    """A list of spans with possibly different trace ids, in no particular order.

    This is used for all transports: POST, Kafka messages etc. No other fields
    are expected. This message facilitates the mechanics of encoding a list, as
    a field number is required. The name of this type is the same in the OpenApi
    aka Swagger specification. https://zipkin.io/zipkin-api/#/default/post_spans
    """

    DESCRIPTOR: google.protobuf.descriptor.Descriptor

    SPANS_FIELD_NUMBER: builtins.int
    @property
    def spans(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___Span]: ...
    def __init__(
        self,
        *,
        spans: collections.abc.Iterable[global___Span] | None = ...,
    ) -> None: ...
    def ClearField(self, field_name: typing_extensions.Literal["spans", b"spans"]) -> None: ...

global___ListOfSpans = ListOfSpans

class ReportResponse(google.protobuf.message.Message):
    """Response for SpanService/Report RPC. This response currently does not return
    any information beyond indicating that the request has finished. That said,
    it may be extended in the future.
    """

    DESCRIPTOR: google.protobuf.descriptor.Descriptor

    def __init__(
        self,
    ) -> None: ...

global___ReportResponse = ReportResponse
-# -*- coding: utf-8 -*-
-
-
 class ZipkinError(Exception):
     """Custom error to be raised on Zipkin exceptions."""
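A caller-side sketch of how a dedicated error class like this is typically used. The `require_transport` helper below is purely illustrative (not py_zipkin API); `ZipkinError` is redefined here so the snippet is self-contained:

```python
class ZipkinError(Exception):
    """Custom error to be raised on Zipkin exceptions."""


def require_transport(transport_handler):
    # Hypothetical validation in the spirit of py_zipkin's own checks:
    # refuse to trace without a way to ship spans out of the process.
    if transport_handler is None:
        raise ZipkinError("a transport handler is required to emit spans")
    return transport_handler
```

Because the class subclasses `Exception` directly, callers can catch configuration problems from the tracing layer with a single `except ZipkinError:` without swallowing unrelated errors.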
import threading
from functools import partial
from typing import Any
from typing import Callable

from py_zipkin import storage

_orig_Thread_start = threading.Thread.start
_orig_Thread_run = threading.Thread.run


def _Thread_pre_start(self: Any) -> None:
    self._orig_tracer = None
    if storage.has_default_tracer():
        self._orig_tracer = storage.get_default_tracer().copy()


def _Thread_wrap_run(self: Any, actual_run_fn: Callable[[], None]) -> None:
    # This executes in the new OS thread
    if self._orig_tracer:
        # Inject our copied Tracer into our thread-local storage
        storage.set_default_tracer(self._orig_tracer)
    try:
        actual_run_fn()
    finally:
        # Drop the reference for the same reasons the parent class
        # deletes __target, __args, and __kwargs
        if self._orig_tracer:
            del self._orig_tracer


def patch_threading() -> None:  # pragma: no cover
    """Monkey-patch the threading module to work better with tracing."""

    def _new_start(self: Any) -> None:
        _Thread_pre_start(self)
        _orig_Thread_start(self)

    def _new_run(self: Any) -> None:
        _Thread_wrap_run(self, partial(_orig_Thread_run, self))

    threading.Thread.start = _new_start  # type: ignore[assignment]
    threading.Thread.run = _new_run  # type: ignore[assignment]


def unpatch_threading() -> None:  # pragma: no cover
    threading.Thread.start = _orig_Thread_start  # type: ignore[assignment]
    threading.Thread.run = _orig_Thread_run  # type: ignore[assignment]
-# -*- coding: utf-8 -*-
 import os
 import time
+from types import TracebackType
+from typing import Callable
+from typing import Dict
+from typing import List
+from typing import Optional
+from typing import Type
+from typing import Union

 from py_zipkin import Kind
-from py_zipkin.encoding._helpers import SpanBuilder
+from py_zipkin.encoding._encoders import get_encoder
+from py_zipkin.encoding._encoders import IEncoder
 from py_zipkin.encoding._helpers import copy_endpoint_with_new_service_name
-from py_zipkin.encoding._encoders import get_encoder
+from py_zipkin.encoding._helpers import Endpoint
+from py_zipkin.encoding._helpers import Span
+from py_zipkin.encoding._types import Encoding
 from py_zipkin.exception import ZipkinError
+from py_zipkin.storage import Tracer
 from py_zipkin.transport import BaseTransportHandler
-
-
-LOGGING_END_KEY = 'py_zipkin.logging_end'
-
-
-class ZipkinLoggingContext(object):
+from py_zipkin.util import ZipkinAttrs
+
+
+LOGGING_END_KEY = "py_zipkin.logging_end"
+
+
+TransportHandler = Union[BaseTransportHandler, Callable[[Union[str, bytes]], None]]
+
+
+class ZipkinLoggingContext:
     """A logging context specific to a Zipkin trace. If the trace is sampled,
     the logging context sends serialized Zipkin spans to a transport_handler.
     The logging context sends root "server" or "client" span, as well as all
@@

     def __init__(
         self,
-        zipkin_attrs,
-        endpoint,
-        span_name,
-        transport_handler,
-        report_root_timestamp,
-        span_storage,
-        service_name,
-        binary_annotations=None,
-        add_logging_annotation=False,
-        client_context=False,
-        max_span_batch_size=None,
-        firehose_handler=None,
-        encoding=None,
+        zipkin_attrs: ZipkinAttrs,
+        endpoint: Endpoint,
+        span_name: str,
+        transport_handler: Optional[TransportHandler],
+        report_root_timestamp: float,
+        get_tracer: Callable[[], Tracer],
+        service_name: str,
+        binary_annotations: Optional[Dict[str, Optional[str]]] = None,
+        add_logging_annotation: bool = False,
+        client_context: bool = False,
+        max_span_batch_size: Optional[int] = None,
+        firehose_handler: Optional[TransportHandler] = None,
+        encoding: Optional[Encoding] = None,
+        annotations: Optional[Dict[str, Optional[float]]] = None,
     ):
         self.zipkin_attrs = zipkin_attrs
         self.endpoint = endpoint
         self.span_name = span_name
         self.transport_handler = transport_handler
         self.response_status_code = 0
-        self.span_storage = span_storage
+        self._get_tracer = get_tracer
         self.service_name = service_name
         self.report_root_timestamp = report_root_timestamp
-        self.binary_annotations_dict = binary_annotations or {}
+        self.tags = binary_annotations or {}
         self.add_logging_annotation = add_logging_annotation
         self.client_context = client_context
         self.max_span_batch_size = max_span_batch_size
         self.firehose_handler = firehose_handler
-
-        self.sa_endpoint = None
+        self.annotations = annotations or {}
+
+        self.remote_endpoint: Optional[Endpoint] = None
+        assert encoding is not None
         self.encoder = get_encoder(encoding)

-    def start(self):
+    def start(self) -> "ZipkinLoggingContext":
         """Actions to be taken before request is handled."""

         # Record the start timestamp.
         self.start_timestamp = time.time()
         return self

-    def stop(self):
-        """Actions to be taken post request handling.
-        """
+    def stop(self) -> None:
+        """Actions to be taken post request handling."""

         self.emit_spans()

-    def emit_spans(self):
+    def emit_spans(self) -> None:
         """Main function to log all the annotations stored during the entire
         request. This is done if the request is sampled and the response was
         a success. It also logs the service (`ss` and `sr`) or the client
@@
         if self.firehose_handler:
             # FIXME: We need to allow different batching settings per handler
             self._emit_spans_with_span_sender(
-                ZipkinBatchSender(self.firehose_handler,
-                                  self.max_span_batch_size,
-                                  self.encoder)
+                ZipkinBatchSender(
+                    self.firehose_handler, self.max_span_batch_size, self.encoder
+                )
             )

         if not self.zipkin_attrs.is_sampled:
-            self.span_storage.clear()
+            self._get_tracer().clear()
             return

-        span_sender = ZipkinBatchSender(self.transport_handler,
-                                        self.max_span_batch_size,
92 | self.encoder) | |
106 | span_sender = ZipkinBatchSender( | |
107 | self.transport_handler, self.max_span_batch_size, self.encoder | |
108 | ) | |
93 | 109 | |
94 | 110 | self._emit_spans_with_span_sender(span_sender) |
95 | self.span_storage.clear() | |
96 | ||
97 | def _emit_spans_with_span_sender(self, span_sender): | |
111 | self._get_tracer().clear() | |
112 | ||
113 | def _emit_spans_with_span_sender(self, span_sender: "ZipkinBatchSender") -> None: | |
98 | 114 | with span_sender: |
99 | 115 | end_timestamp = time.time() |
100 | 116 | |
101 | 117 | # Collect, annotate, and log client spans from the logging handler |
102 | for span in self.span_storage: | |
118 | for span in self._get_tracer()._span_storage: | |
119 | assert span.local_endpoint is not None | |
103 | 120 | span.local_endpoint = copy_endpoint_with_new_service_name( |
104 | 121 | self.endpoint, |
105 | span.service_name, | |
122 | span.local_endpoint.service_name, | |
106 | 123 | ) |
107 | 124 | |
108 | 125 | span_sender.add_span(span) |
109 | 126 | |
110 | annotations = {} | |
111 | ||
112 | 127 | if self.add_logging_annotation: |
113 | annotations[LOGGING_END_KEY] = time.time() | |
114 | ||
115 | span_sender.add_span(SpanBuilder( | |
116 | trace_id=self.zipkin_attrs.trace_id, | |
117 | name=self.span_name, | |
118 | parent_id=self.zipkin_attrs.parent_span_id, | |
119 | span_id=self.zipkin_attrs.span_id, | |
120 | timestamp=self.start_timestamp, | |
121 | duration=end_timestamp - self.start_timestamp, | |
122 | annotations=annotations, | |
123 | tags=self.binary_annotations_dict, | |
124 | kind=Kind.CLIENT if self.client_context else Kind.SERVER, | |
125 | local_endpoint=self.endpoint, | |
126 | service_name=self.service_name, | |
127 | sa_endpoint=self.sa_endpoint, | |
128 | report_timestamp=self.report_root_timestamp, | |
129 | )) | |
130 | ||
131 | ||
132 | class ZipkinBatchSender(object): | |
128 | self.annotations[LOGGING_END_KEY] = time.time() | |
129 | ||
130 | span_sender.add_span( | |
131 | Span( | |
132 | trace_id=self.zipkin_attrs.trace_id, | |
133 | name=self.span_name, | |
134 | parent_id=self.zipkin_attrs.parent_span_id, | |
135 | span_id=self.zipkin_attrs.span_id, | |
136 | kind=Kind.CLIENT if self.client_context else Kind.SERVER, | |
137 | timestamp=self.start_timestamp, | |
138 | duration=end_timestamp - self.start_timestamp, | |
139 | local_endpoint=self.endpoint, | |
140 | remote_endpoint=self.remote_endpoint, | |
141 | shared=not self.report_root_timestamp, | |
142 | annotations=self.annotations, | |
143 | tags=self.tags, | |
144 | ) | |
145 | ) | |
146 | ||
147 | ||
148 | class ZipkinBatchSender: | |
133 | 149 | |
134 | 150 | MAX_PORTION_SIZE = 100 |
135 | 151 | |
136 | def __init__(self, transport_handler, max_portion_size, encoder): | |
152 | def __init__( | |
153 | self, | |
154 | transport_handler: Optional[TransportHandler], | |
155 | max_portion_size: Optional[int], | |
156 | encoder: IEncoder, | |
157 | ) -> None: | |
137 | 158 | self.transport_handler = transport_handler |
138 | 159 | self.max_portion_size = max_portion_size or self.MAX_PORTION_SIZE |
139 | 160 | self.encoder = encoder |
143 | 164 | else: |
144 | 165 | self.max_payload_bytes = None |
145 | 166 | |
146 | def __enter__(self): | |
167 | def __enter__(self) -> "ZipkinBatchSender": | |
147 | 168 | self._reset_queue() |
148 | 169 | return self |
149 | 170 | |
150 | def __exit__(self, _exc_type, _exc_value, _exc_traceback): | |
171 | def __exit__( | |
172 | self, | |
173 | _exc_type: Optional[Type[BaseException]], | |
174 | _exc_value: Optional[BaseException], | |
175 | _exc_traceback: Optional[TracebackType], | |
176 | ) -> None: | |
151 | 177 | if any((_exc_type, _exc_value, _exc_traceback)): |
178 | assert _exc_type is not None | |
179 | assert _exc_value is not None | |
180 | assert _exc_traceback is not None | |
152 | 181 | filename = os.path.split(_exc_traceback.tb_frame.f_code.co_filename)[1] |
153 | error = '({0}:{1}) {2}: {3}'.format( | |
182 | error = "({}:{}) {}: {}".format( | |
154 | 183 | filename, |
155 | 184 | _exc_traceback.tb_lineno, |
156 | 185 | _exc_type.__name__, |
160 | 189 | else: |
161 | 190 | self.flush() |
162 | 191 | |
163 | def _reset_queue(self): | |
164 | self.queue = [] | |
192 | def _reset_queue(self) -> None: | |
193 | self.queue: List[Union[str, bytes]] = [] | |
165 | 194 | self.current_size = 0 |
166 | 195 | |
167 | def add_span(self, internal_span): | |
196 | def add_span(self, internal_span: Span) -> None: | |
168 | 197 | encoded_span = self.encoder.encode_span(internal_span) |
169 | 198 | |
170 | 199 | # If we've already reached the max batch size or the new span doesn't |
171 | 200 | # fit in max_payload_bytes, send what we've collected until now and |
172 | 201 | # start a new batch. |
173 | 202 | is_over_size_limit = ( |
174 | self.max_payload_bytes is not None and | |
175 | not self.encoder.fits( | |
203 | self.max_payload_bytes is not None | |
204 | and not self.encoder.fits( | |
176 | 205 | current_count=len(self.queue), |
177 | 206 | current_size=self.current_size, |
178 | 207 | max_size=self.max_payload_bytes, |
186 | 215 | self.queue.append(encoded_span) |
187 | 216 | self.current_size += len(encoded_span) |
188 | 217 | |
189 | def flush(self): | |
218 | def flush(self) -> None: | |
190 | 219 | if self.transport_handler and len(self.queue) > 0: |
191 | 220 | |
192 | 221 | message = self.encoder.encode_queue(self.queue) |
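The batching logic above — flush when the queue hits `MAX_PORTION_SIZE` or the next span would overflow `max_payload_bytes` — can be sketched as a standalone toy batcher (class and attribute names here are illustrative, not py_zipkin's API):

```python
class BatchSender:
    """Toy size-limited batcher mirroring the queueing logic above."""

    MAX_PORTION_SIZE = 100

    def __init__(self, max_portion_size=None, max_payload_bytes=None):
        self.max_portion_size = max_portion_size or self.MAX_PORTION_SIZE
        self.max_payload_bytes = max_payload_bytes
        self.flushed = []  # batches "sent" so far (stands in for a transport)
        self._reset_queue()

    def _reset_queue(self):
        self.queue = []
        self.current_size = 0

    def add_span(self, encoded_span):
        # Start a new batch if this span would overflow the current one.
        over_size = (
            self.max_payload_bytes is not None
            and self.current_size + len(encoded_span) > self.max_payload_bytes
        )
        if len(self.queue) >= self.max_portion_size or over_size:
            self.flush()
        self.queue.append(encoded_span)
        self.current_size += len(encoded_span)

    def flush(self):
        if self.queue:
            self.flushed.append(list(self.queue))
        self._reset_queue()


sender = BatchSender(max_portion_size=2)
for s in ["span1", "span2", "span3"]:
    sender.add_span(s)
sender.flush()
```

With a batch size of two, the third span lands in a second batch — the same behavior the real `ZipkinBatchSender` gets from `encoder.fits()`.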
0 | import logging | |
1 | from typing import Dict | |
2 | from typing import Optional | |
3 | ||
4 | from typing_extensions import TypedDict | |
5 | ||
6 | from py_zipkin.storage import get_default_tracer | |
7 | from py_zipkin.storage import Stack | |
8 | from py_zipkin.storage import Tracer | |
9 | from py_zipkin.util import _should_sample | |
10 | from py_zipkin.util import create_attrs_for_span | |
11 | from py_zipkin.util import generate_random_64bit_string | |
12 | from py_zipkin.util import ZipkinAttrs | |
13 | ||
14 | log = logging.getLogger(__name__) | |
15 | ||
16 | ||
17 | class B3JSON(TypedDict): | |
18 | trace_id: Optional[str] | |
19 | span_id: Optional[str] | |
20 | parent_span_id: Optional[str] | |
21 | sampled_str: Optional[str] | |
22 | ||
23 | ||
24 | def _parse_single_header(b3_header: str) -> B3JSON: | |
25 | """ | |
26 | Parse out and return the data necessary for generating ZipkinAttrs. | |
27 | ||
28 | Returns a dict with the following keys: | |
29 | 'trace_id': str or None | |
30 | 'span_id': str or None | |
31 | 'parent_span_id': str or None | |
32 | 'sampled_str': '0', '1', 'd', or None (defer) | |
33 | """ | |
34 | parsed: B3JSON = { | |
35 | "trace_id": None, | |
36 | "span_id": None, | |
37 | "parent_span_id": None, | |
38 | "sampled_str": None, | |
39 | } | |
40 | ||
41 | # b3={TraceId}-{SpanId}-{SamplingState}-{ParentSpanId} | |
42 | # (last 2 fields optional) | |
43 | # OR | |
44 | # b3={SamplingState} | |
45 | bits = b3_header.split("-") | |
46 | ||
47 | # Handle the lone-sampling-decision case: | |
48 | if len(bits) == 1: | |
49 | if bits[0] in ("0", "1", "d"): | |
50 | parsed["sampled_str"] = bits[0] | |
51 | return parsed | |
52 | raise ValueError("Invalid sample-only value: %r" % bits[0]) | |
53 | if len(bits) > 4: | |
54 | # Too many segments | |
55 | raise ValueError("Too many segments in b3 header: %r" % b3_header) | |
56 | parsed["trace_id"] = bits[0] | |
57 | if not parsed["trace_id"]: | |
58 | raise ValueError("Bad or missing TraceId") | |
59 | parsed["span_id"] = bits[1] | |
60 | if not parsed["span_id"]: | |
61 | raise ValueError("Bad or missing SpanId") | |
62 | if len(bits) > 3: | |
63 | parsed["parent_span_id"] = bits[3] | |
64 | if not parsed["parent_span_id"]: | |
65 | raise ValueError("Got empty ParentSpanId") | |
66 | if len(bits) > 2: | |
67 | # Empty-string means "missing" which means "Defer" | |
68 | if bits[2]: | |
69 | parsed["sampled_str"] = bits[2] | |
70 | if parsed["sampled_str"] not in ("0", "1", "d"): | |
71 | raise ValueError("Bad SampledState: %r" % parsed["sampled_str"]) | |
72 | return parsed | |
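The single-header grammar above (`b3={TraceId}-{SpanId}-{SamplingState}-{ParentSpanId}`, or a lone sampling decision) can be exercised with a minimal sketch; `parse_b3_single` is a hypothetical helper name, not part of py_zipkin's public API:

```python
def parse_b3_single(header):
    """Sketch of single-header b3 parsing, per the grammar above."""
    parsed = {"trace_id": None, "span_id": None,
              "parent_span_id": None, "sampled_str": None}
    bits = header.split("-")
    if len(bits) == 1:  # lone sampling decision: "0", "1", or "d"
        if bits[0] not in ("0", "1", "d"):
            raise ValueError("invalid sampling-only b3 value")
        parsed["sampled_str"] = bits[0]
        return parsed
    parsed["trace_id"], parsed["span_id"] = bits[0], bits[1]
    if len(bits) > 2 and bits[2]:  # empty string means "defer"
        parsed["sampled_str"] = bits[2]
    if len(bits) > 3:
        parsed["parent_span_id"] = bits[3]
    return parsed


parsed = parse_b3_single("80f198ee56343ba7-e457b5a2e4d86bd1-1-05e3ac9a4f6e3b90")
```

The last two fields are optional, so `"80f198ee56343ba7-e457b5a2e4d86bd1"` parses with a deferred sampling decision.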
73 | ||
74 | ||
75 | def _parse_multi_header(headers: Dict[str, str]) -> B3JSON: | |
76 | """ | |
77 | Parse out and return the data necessary for generating ZipkinAttrs. | |
78 | ||
79 | Returns a dict with the following keys: | |
80 | 'trace_id': str or None | |
81 | 'span_id': str or None | |
82 | 'parent_span_id': str or None | |
83 | 'sampled_str': '0', '1', 'd', or None (defer) | |
84 | """ | |
85 | parsed: B3JSON = { | |
86 | "trace_id": headers.get("X-B3-TraceId", None), | |
87 | "span_id": headers.get("X-B3-SpanId", None), | |
88 | "parent_span_id": headers.get("X-B3-ParentSpanId", None), | |
89 | "sampled_str": headers.get("X-B3-Sampled", None), | |
90 | } | |
91 | # Normalize X-B3-Flags and X-B3-Sampled to None, '0', '1', or 'd' | |
92 | if headers.get("X-B3-Flags") == "1": | |
93 | parsed["sampled_str"] = "d" | |
94 | if parsed["sampled_str"] == "true": | |
95 | parsed["sampled_str"] = "1" | |
96 | elif parsed["sampled_str"] == "false": | |
97 | parsed["sampled_str"] = "0" | |
98 | if parsed["sampled_str"] not in (None, "1", "0", "d"): | |
99 | raise ValueError("Got invalid X-B3-Sampled: %s" % parsed["sampled_str"]) | |
100 | for k in ("trace_id", "span_id", "parent_span_id"): | |
101 | if parsed[k] == "": # type: ignore[literal-required] | |
102 | raise ValueError("Got empty-string %r" % k) | |
103 | if parsed["trace_id"] and not parsed["span_id"]: | |
104 | raise ValueError("Got X-B3-TraceId but not X-B3-SpanId") | |
105 | elif parsed["span_id"] and not parsed["trace_id"]: | |
106 | raise ValueError("Got X-B3-SpanId but not X-B3-TraceId") | |
107 | ||
108 | # Handle the common case of no headers at all | |
109 | if not parsed["trace_id"] and not parsed["sampled_str"]: | |
110 | raise ValueError() # won't trigger a log message | |
111 | ||
112 | return parsed | |
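The normalization step above folds `X-B3-Flags` and the legacy `true`/`false` values of `X-B3-Sampled` into the canonical `'0'`/`'1'`/`'d'`/None states. A small sketch of just that step (the helper name is illustrative):

```python
def normalize_sampled(headers):
    """Sketch of the X-B3-Sampled / X-B3-Flags normalization above."""
    sampled = headers.get("X-B3-Sampled")
    if headers.get("X-B3-Flags") == "1":  # debug flag overrides sampled
        return "d"
    # Some older tracers sent true/false rather than 1/0.
    if sampled == "true":
        return "1"
    if sampled == "false":
        return "0"
    return sampled  # already "0", "1", or None (defer)
```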
113 | ||
114 | ||
115 | def extract_zipkin_attrs_from_headers( | |
116 | headers: Dict[str, str], | |
117 | sample_rate: float = 100.0, | |
118 | use_128bit_trace_id: bool = False, | |
119 | ) -> Optional[ZipkinAttrs]: | |
120 | """ | |
121 | Implements extraction of B3 headers per: | |
122 | https://github.com/openzipkin/b3-propagation | |
123 | ||
124 | The input headers can be any dict-like container that supports the | |
125 | "in" membership test and a .get() method that accepts a default value. | |
126 | ||
127 | Returns a ZipkinAttrs instance or None | |
128 | """ | |
129 | try: | |
130 | if "b3" in headers: | |
131 | parsed = _parse_single_header(headers["b3"]) | |
132 | else: | |
133 | parsed = _parse_multi_header(headers) | |
134 | except ValueError as e: | |
135 | if str(e): | |
136 | log.warning(e) | |
137 | return None | |
138 | ||
139 | # Handle the lone-sampling-decision case: | |
140 | if not parsed["trace_id"]: | |
141 | if parsed["sampled_str"] in ("1", "d"): | |
142 | sample_rate = 100.0 | |
143 | else: | |
144 | sample_rate = 0.0 | |
145 | attrs = create_attrs_for_span( | |
146 | sample_rate=sample_rate, | |
147 | use_128bit_trace_id=use_128bit_trace_id, | |
148 | flags="1" if parsed["sampled_str"] == "d" else "0", | |
149 | ) | |
150 | return attrs | |
151 | ||
152 | # Handle any sampling decision, including if it was deferred | |
153 | if parsed["sampled_str"]: | |
154 | # We have 1==Accept, 0==Deny, d==Debug | |
155 | if parsed["sampled_str"] in ("1", "d"): | |
156 | is_sampled = True | |
157 | else: | |
158 | is_sampled = False | |
159 | else: | |
160 | # sample flag missing; means "Defer" and we're responsible for | |
161 | # rolling fresh dice | |
162 | is_sampled = _should_sample(sample_rate) | |
163 | ||
164 | return ZipkinAttrs( | |
165 | parsed["trace_id"], | |
166 | parsed["span_id"], | |
167 | parsed["parent_span_id"], | |
168 | "1" if parsed["sampled_str"] == "d" else "0", | |
169 | is_sampled, | |
170 | ) | |
171 | ||
172 | ||
173 | def create_http_headers( | |
174 | context_stack: Optional[Stack] = None, | |
175 | tracer: Optional[Tracer] = None, | |
176 | new_span_id: bool = False, | |
177 | ) -> Dict[str, Optional[str]]: | |
178 | """ | |
179 | Generate the headers for a new zipkin span. | |
180 | ||
181 | .. note:: | |
182 | ||
183 | If the method is not called from within a zipkin_trace context, | |
184 | an empty dict will be returned. | |
185 | ||
186 | :returns: dict containing (X-B3-TraceId, X-B3-SpanId, X-B3-ParentSpanId, | |
187 | X-B3-Flags and X-B3-Sampled) keys OR an empty dict. | |
188 | """ | |
189 | if tracer: | |
190 | zipkin_attrs = tracer.get_zipkin_attrs() | |
191 | elif context_stack: | |
192 | zipkin_attrs = context_stack.get() | |
193 | else: | |
194 | zipkin_attrs = get_default_tracer().get_zipkin_attrs() | |
195 | ||
196 | # If zipkin_attrs is still not set then we're not in a trace context | |
197 | if not zipkin_attrs: | |
198 | return {} | |
199 | ||
200 | if new_span_id: | |
201 | span_id: Optional[str] = generate_random_64bit_string() | |
202 | parent_span_id = zipkin_attrs.span_id | |
203 | else: | |
204 | span_id = zipkin_attrs.span_id | |
205 | parent_span_id = zipkin_attrs.parent_span_id | |
206 | ||
207 | return { | |
208 | "X-B3-TraceId": zipkin_attrs.trace_id, | |
209 | "X-B3-SpanId": span_id, | |
210 | "X-B3-ParentSpanId": parent_span_id, | |
211 | "X-B3-Flags": "0", | |
212 | "X-B3-Sampled": "1" if zipkin_attrs.is_sampled else "0", | |
213 | } |
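The header dict returned above has a fixed shape; the `new_span_id` branch mints a fresh span id and demotes the current span to parent. A self-contained sketch of that shape (using a namedtuple stand-in for `ZipkinAttrs` and `secrets.token_hex` in place of `generate_random_64bit_string` — both substitutions are illustrative):

```python
import secrets
from collections import namedtuple

# Stand-in for py_zipkin.util.ZipkinAttrs (same field order).
Attrs = namedtuple("Attrs", "trace_id span_id parent_span_id flags is_sampled")


def build_b3_headers(attrs, new_span_id=False):
    """Sketch of create_http_headers' output shape."""
    if new_span_id:
        span_id = secrets.token_hex(8)  # fresh 64-bit id, 16 hex chars
        parent_span_id = attrs.span_id  # current span becomes the parent
    else:
        span_id = attrs.span_id
        parent_span_id = attrs.parent_span_id
    return {
        "X-B3-TraceId": attrs.trace_id,
        "X-B3-SpanId": span_id,
        "X-B3-ParentSpanId": parent_span_id,
        "X-B3-Flags": "0",
        "X-B3-Sampled": "1" if attrs.is_sampled else "0",
    }


headers = build_b3_headers(Attrs("abc", "def", None, "0", True))
```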
0 | from collections import deque | |
1 | ||
2 | from py_zipkin import thread_local | |
3 | ||
4 | ||
5 | class Stack(object): | |
0 | import logging | |
1 | import threading | |
2 | from typing import Any | |
3 | from typing import Deque | |
4 | from typing import List | |
5 | from typing import Optional | |
6 | from typing import TYPE_CHECKING | |
7 | ||
8 | from py_zipkin.encoding._helpers import Span | |
9 | from py_zipkin.util import ZipkinAttrs | |
10 | ||
11 | if TYPE_CHECKING: # pragma: no cover | |
12 | from py_zipkin import zipkin | |
13 | ||
14 | try: # pragma: no cover | |
15 | # Since Python 3.7 we prefer contextvars over thread-locals, | |
16 | # since contextvars also work with asyncio. | |
17 | import contextvars | |
18 | ||
19 | _contextvars_tracer: Optional[ | |
20 | contextvars.ContextVar["Tracer"] | |
21 | ] = contextvars.ContextVar("py_zipkin.Tracer object") | |
22 | except ImportError: # pragma: no cover | |
23 | # The contextvars module was added in python 3.7 | |
24 | _contextvars_tracer = None | |
25 | ||
26 | _thread_local_tracer = threading.local() | |
27 | ||
28 | ||
29 | log = logging.getLogger("py_zipkin.storage") | |
30 | ||
31 | ||
32 | def _get_thread_local_tracer() -> "Tracer": | |
33 | """Returns the current tracer from thread-local. | |
34 | ||
35 | If there's no current tracer it'll create a new one. | |
36 | :returns: current tracer. | |
37 | :rtype: Tracer | |
38 | """ | |
39 | if not hasattr(_thread_local_tracer, "tracer"): | |
40 | _thread_local_tracer.tracer = Tracer() | |
41 | return _thread_local_tracer.tracer | |
42 | ||
43 | ||
44 | def _set_thread_local_tracer(tracer: "Tracer") -> None: | |
45 | """Sets the current tracer in thread-local. | |
46 | ||
47 | :param tracer: current tracer. | |
48 | :type tracer: Tracer | |
49 | """ | |
50 | _thread_local_tracer.tracer = tracer | |
51 | ||
52 | ||
53 | def _get_contextvars_tracer() -> "Tracer": # pragma: no cover | |
54 | """Returns the current tracer from contextvars. | |
55 | ||
56 | If there's no current tracer it'll create a new one. | |
57 | :returns: current tracer. | |
58 | :rtype: Tracer | |
59 | """ | |
60 | assert _contextvars_tracer is not None | |
61 | try: | |
62 | return _contextvars_tracer.get() | |
63 | except LookupError: | |
64 | _contextvars_tracer.set(Tracer()) | |
65 | return _contextvars_tracer.get() | |
66 | ||
67 | ||
68 | def _set_contextvars_tracer(tracer: "Tracer") -> None: # pragma: no cover | |
69 | """Sets the current tracer in contextvars. | |
70 | ||
71 | :param tracer: current tracer. | |
72 | :type tracer: Tracer | |
73 | """ | |
74 | assert _contextvars_tracer is not None | |
75 | _contextvars_tracer.set(tracer) | |
76 | ||
77 | ||
78 | class Tracer: | |
79 | def __init__(self) -> None: | |
80 | self._is_transport_configured = False | |
81 | self._span_storage = SpanStorage() | |
82 | self._context_stack = Stack() | |
83 | ||
84 | def get_zipkin_attrs(self) -> Optional[ZipkinAttrs]: | |
85 | return self._context_stack.get() | |
86 | ||
87 | def push_zipkin_attrs(self, ctx: ZipkinAttrs) -> None: | |
88 | self._context_stack.push(ctx) | |
89 | ||
90 | def pop_zipkin_attrs(self) -> Optional[ZipkinAttrs]: | |
91 | return self._context_stack.pop() | |
92 | ||
93 | def add_span(self, span: Span) -> None: | |
94 | self._span_storage.append(span) | |
95 | ||
96 | def get_spans(self) -> "SpanStorage": | |
97 | return self._span_storage | |
98 | ||
99 | def clear(self) -> None: | |
100 | self._span_storage.clear() | |
101 | ||
102 | def set_transport_configured(self, configured: bool) -> None: | |
103 | self._is_transport_configured = configured | |
104 | ||
105 | def is_transport_configured(self) -> bool: | |
106 | return self._is_transport_configured | |
107 | ||
108 | def zipkin_span(self, *argv: Any, **kwargs: Any) -> "zipkin.zipkin_span": | |
109 | from py_zipkin import zipkin | |
110 | ||
111 | kwargs["_tracer"] = self | |
112 | return zipkin.zipkin_span(*argv, **kwargs) | |
113 | ||
114 | def copy(self) -> "Tracer": | |
115 | """Return a copy of this instance, but with a deep-copied | |
116 | _context_stack. The use-case is for passing a copy of a Tracer into | |
117 | a new thread context. | |
118 | """ | |
119 | the_copy = self.__class__() | |
120 | the_copy._is_transport_configured = self._is_transport_configured | |
121 | the_copy._span_storage = self._span_storage | |
122 | the_copy._context_stack = self._context_stack.copy() | |
123 | return the_copy | |
124 | ||
125 | ||
126 | class Stack: | |
6 | 127 | """ |
7 | 128 | Stack is a simple stack class. |
8 | 129 | |
9 | 130 | It offers the operations push, pop and get. |
10 | 131 | The latter two return None if the stack is empty. |
11 | """ | |
12 | ||
13 | def __init__(self, storage): | |
14 | self._storage = storage | |
15 | ||
16 | def push(self, item): | |
132 | ||
133 | .. deprecated:: | |
134 | Use the Tracer interface which offers better multi-threading support. | |
135 | Stack will be removed in version 1.0. | |
136 | """ | |
137 | ||
138 | def __init__(self, storage: Optional[List[ZipkinAttrs]] = None) -> None: | |
139 | if storage is not None: | |
140 | log.warning("Passing a storage object to Stack is deprecated.") | |
141 | self.__storage: List[ZipkinAttrs] = storage | |
142 | else: | |
143 | self.__storage = [] | |
144 | ||
145 | # this pattern is currently necessary due to | |
146 | # https://github.com/python/mypy/issues/4125 | |
147 | @property | |
148 | def _storage(self) -> List[ZipkinAttrs]: | |
149 | return self.__storage | |
150 | ||
151 | @_storage.setter | |
152 | def _storage(self, value: List[ZipkinAttrs]) -> None: # pragma: no cover | |
153 | self.__storage = value | |
154 | ||
155 | @_storage.deleter | |
156 | def _storage(self) -> None: # pragma: no cover | |
157 | del self.__storage | |
158 | ||
159 | def push(self, item: ZipkinAttrs) -> None: | |
17 | 160 | self._storage.append(item) |
18 | 161 | |
19 | def pop(self): | |
162 | def pop(self) -> Optional[ZipkinAttrs]: | |
20 | 163 | if self._storage: |
21 | 164 | return self._storage.pop() |
22 | ||
23 | def get(self): | |
165 | return None | |
166 | ||
167 | def get(self) -> Optional[ZipkinAttrs]: | |
24 | 168 | if self._storage: |
25 | 169 | return self._storage[-1] |
170 | return None | |
171 | ||
172 | def copy(self) -> "Stack": | |
173 | # Return a new Stack() instance backed by a copy (new list) of our contents | |
174 | the_copy = self.__class__() | |
175 | the_copy._storage = self._storage[:] | |
176 | return the_copy | |
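Because `copy()` above slices the backing list, pushes on the copy never leak into the original stack. A minimal sketch of the same push/pop/get/copy semantics:

```python
class Stack:
    """Minimal re-statement of the Stack semantics above."""

    def __init__(self):
        self._storage = []

    def push(self, item):
        self._storage.append(item)

    def pop(self):
        # Both pop and get return None when the stack is empty.
        return self._storage.pop() if self._storage else None

    def get(self):
        return self._storage[-1] if self._storage else None

    def copy(self):
        the_copy = Stack()
        the_copy._storage = self._storage[:]  # new list, same items
        return the_copy


s = Stack()
s.push("attrs-1")
c = s.copy()
c.push("attrs-2")  # only visible on the copy
```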
26 | 177 | |
27 | 178 | |
28 | 179 | class ThreadLocalStack(Stack): |
29 | """ | |
30 | ThreadLocalStack is variant of Stack that uses a thread local storage. | |
180 | """ThreadLocalStack is a variant of Stack that uses thread-local storage. | |
31 | 181 | |
32 | 182 | The thread local storage is accessed lazily in every method call, |
33 | 183 | so the thread that calls the method matters, not the thread that |
34 | 184 | instantiated the class. |
35 | 185 | Every instance shares the same thread local data. |
36 | """ | |
37 | ||
38 | def __init__(self): | |
186 | ||
187 | .. deprecated:: | |
188 | Use the Tracer interface which offers better multi-threading support. | |
189 | ThreadLocalStack will be removed in version 1.0. | |
190 | """ | |
191 | ||
192 | def __init__(self) -> None: | |
193 | log.warning( | |
194 | "ThreadLocalStack is deprecated. See DEPRECATIONS.rst for " | |
195 | "details on how to migrate to using Tracer." | |
196 | ) | |
197 | ||
198 | @property | |
199 | def _storage(self) -> List[ZipkinAttrs]: | |
200 | return get_default_tracer()._context_stack._storage | |
201 | ||
202 | @_storage.setter | |
203 | def _storage(self, value: List[ZipkinAttrs]) -> None: # pragma: no cover | |
204 | get_default_tracer()._context_stack._storage = value | |
205 | ||
206 | @_storage.deleter | |
207 | def _storage(self) -> None: # pragma: no cover | |
208 | del get_default_tracer()._context_stack._storage | |
209 | ||
210 | ||
211 | class SpanStorage(Deque[Span]): | |
212 | """Stores the list of completed spans ready to be sent. | |
213 | ||
214 | .. deprecated:: | |
215 | Use the Tracer interface which offers better multi-threading support. | |
216 | SpanStorage will be removed in version 1.0. | |
217 | """ | |
218 | ||
219 | pass | |
220 | ||
221 | ||
222 | def default_span_storage() -> SpanStorage: | |
223 | log.warning( | |
224 | "default_span_storage is deprecated. See DEPRECATIONS.rst for " | |
225 | "details on how to migrate to using Tracer." | |
226 | ) | |
227 | return get_default_tracer()._span_storage | |
228 | ||
229 | ||
230 | def has_default_tracer() -> bool: | |
231 | """Is there a default tracer created already? | |
232 | ||
233 | :returns: Is there a default tracer created already? | |
234 | :rtype: boolean | |
235 | """ | |
236 | try: | |
237 | if _contextvars_tracer and _contextvars_tracer.get(): | |
238 | return True | |
239 | except LookupError: | |
39 | 240 | pass |
40 | ||
41 | @property | |
42 | def _storage(self): | |
43 | return thread_local.get_thread_local_zipkin_attrs() | |
44 | ||
45 | ||
46 | class SpanStorage(deque): | |
47 | def __init__(self): | |
48 | super(SpanStorage, self).__init__() | |
49 | self._is_transport_configured = False | |
50 | ||
51 | def is_transport_configured(self): | |
52 | """Helper function to check whether a transport is configured. | |
53 | ||
54 | We need to propagate this info to the child zipkin_spans since | |
55 | if no transport is set-up they should not generate any Span to | |
56 | avoid memory leaks. | |
57 | ||
58 | :returns: whether transport is configured or not | |
59 | :rtype: bool | |
60 | """ | |
61 | return self._is_transport_configured | |
62 | ||
63 | def set_transport_configured(self, configured): | |
64 | """Set whether the transport is configured or not. | |
65 | ||
66 | :param configured: whether transport is configured or not | |
67 | :type configured: bool | |
68 | """ | |
69 | self._is_transport_configured = configured | |
70 | ||
71 | ||
72 | def default_span_storage(): | |
73 | return thread_local.get_thread_local_span_storage() | |
241 | return hasattr(_thread_local_tracer, "tracer") | |
242 | ||
243 | ||
244 | def get_default_tracer() -> Tracer: | |
245 | """Return the current default Tracer. | |
246 | ||
247 | For now it'll get it from thread-local before Python 3.7 and from | |
248 | contextvars since Python 3.7. | |
249 | ||
250 | :returns: current default tracer. | |
251 | :rtype: Tracer | |
252 | """ | |
253 | if _contextvars_tracer: | |
254 | return _get_contextvars_tracer() | |
255 | ||
256 | return _get_thread_local_tracer() | |
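The get-or-create lookup above is the general "contextvars first, thread-local fallback" pattern for a per-context singleton. A self-contained sketch, assuming `contextvars` may be missing on old interpreters (all names here are illustrative):

```python
import threading

try:
    import contextvars
    _ctx = contextvars.ContextVar("current_obj")
except ImportError:  # pre-3.7 interpreters
    _ctx = None

_tl = threading.local()


def get_current(factory=dict):
    """Sketch of the contextvars-first, thread-local-fallback lookup."""
    if _ctx is not None:
        try:
            return _ctx.get()
        except LookupError:
            # First access in this context: create and store.
            _ctx.set(factory())
            return _ctx.get()
    if not hasattr(_tl, "obj"):
        _tl.obj = factory()
    return _tl.obj
```

Repeated calls within the same context return the same object, which is exactly what `get_default_tracer` relies on.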
257 | ||
258 | ||
259 | def set_default_tracer(tracer: Tracer) -> None: | |
260 | """Sets the current default Tracer. | |
261 | ||
262 | For now it'll set it in thread-local before Python 3.7 and also in | |
263 | contextvars since Python 3.7. | |
264 | ||
265 | :param tracer: the new default tracer. | |
266 | :type tracer: Tracer | |
267 | """ | |
268 | if _contextvars_tracer: | |
269 | _set_contextvars_tracer(tracer) | |
270 | ||
271 | _set_thread_local_tracer(tracer) |
0 | from py_zipkin.testing.mock_transport import MockTransportHandler # noqa: F401 |
0 | from typing import List | |
1 | from typing import Optional | |
2 | from typing import Union | |
3 | ||
4 | from py_zipkin.transport import BaseTransportHandler | |
5 | ||
6 | ||
7 | class MockTransportHandler(BaseTransportHandler): | |
8 | """Mock transport for use in tests. | |
9 | ||
10 | It doesn't emit anything and just stores the generated spans in memory. | |
11 | To check what has been emitted you can use `get_payloads` and get back | |
12 | the list of encoded spans that were emitted. | |
13 | To use it: | |
14 | ||
15 | .. code-block:: python | |
16 | ||
17 | transport = MockTransportHandler() | |
18 | with zipkin.zipkin_span( | |
19 | service_name='test_service_name', | |
20 | span_name='test_span_name', | |
21 | transport_handler=transport, | |
22 | sample_rate=100.0, | |
23 | encoding=Encoding.V2_JSON, | |
24 | ): | |
25 | do_something() | |
26 | ||
27 | spans = transport.get_payloads() | |
28 | assert len(spans) == 1 | |
29 | decoded_spans = json.loads(spans[0]) | |
30 | assert decoded_spans == [{}] | |
31 | """ | |
32 | ||
33 | def __init__(self, max_payload_bytes: Optional[int] = None) -> None: | |
34 | """Creates a new MockTransportHandler. | |
35 | ||
36 | :param max_payload_bytes: max payload size in bytes. You often don't | |
37 | need to set this in tests unless you want to test what happens | |
38 | when your spans are bigger than the maximum payload size. | |
39 | :type max_payload_bytes: int | |
40 | """ | |
41 | self.max_payload_bytes = max_payload_bytes | |
42 | self.payloads: List[Union[bytes, str]] = [] | |
43 | ||
44 | def send(self, payload: Union[bytes, str]) -> None: | |
45 | """Overrides the real send method. Should not be called directly.""" | |
46 | self.payloads.append(payload) | |
47 | return payload # type: ignore[return-value] | |
48 | ||
49 | def get_max_payload_bytes(self) -> Optional[int]: | |
50 | """Overrides the real method. Should not be called directly.""" | |
51 | return self.max_payload_bytes | |
52 | ||
53 | def get_payloads(self) -> List[Union[bytes, str]]: | |
54 | """Returns the encoded spans that were sent. | |
55 | ||
56 | Spans are batched before being sent, so most of the time the returned | |
56 | list will contain only one element. Each element will be an encoded | |
58 | list of spans. | |
59 | """ | |
60 | return self.payloads |
0 | # -*- coding: utf-8 -*- | |
1 | import threading | |
2 | import warnings | |
0 | import logging | |
1 | from typing import Deque | |
2 | from typing import List | |
3 | from typing import Optional | |
3 | 4 | |
4 | _thread_local = threading.local() | |
5 | from py_zipkin.encoding._helpers import Span | |
6 | from py_zipkin.storage import get_default_tracer | |
7 | from py_zipkin.util import ZipkinAttrs | |
8 | ||
9 | log = logging.getLogger("py_zipkin.thread_local") | |
5 | 10 | |
6 | 11 | |
7 | def get_thread_local_zipkin_attrs(): | |
12 | def get_thread_local_zipkin_attrs() -> List[ZipkinAttrs]: | |
8 | 13 | """A wrapper to return _thread_local.zipkin_attrs |
9 | 14 | |
10 | 15 | Returns a list of ZipkinAttrs objects, used for intra-process context |
11 | 16 | propagation. |
12 | 17 | |
18 | .. deprecated:: | |
19 | Use the Tracer interface which offers better multi-threading support. | |
20 | get_thread_local_zipkin_attrs will be removed in version 1.0. | |
21 | ||
13 | 22 | :returns: list that may contain zipkin attribute tuples |
14 | 23 | :rtype: list |
15 | 24 | """ |
16 | if not hasattr(_thread_local, 'zipkin_attrs'): | |
17 | _thread_local.zipkin_attrs = [] | |
18 | return _thread_local.zipkin_attrs | |
25 | log.warning( | |
26 | "get_thread_local_zipkin_attrs is deprecated. See DEPRECATIONS.rst" | |
27 | " for details on how to migrate to using Tracer." | |
28 | ) | |
29 | return get_default_tracer()._context_stack._storage | |
19 | 30 | |
20 | 31 | |
21 | def get_thread_local_span_storage(): | |
32 | def get_thread_local_span_storage() -> Deque[Span]: | |
22 | 33 | """A wrapper to return _thread_local.span_storage |
23 | 34 | |
24 | 35 | Returns a SpanStorage object used to temporarily store all spans created in |
25 | 36 | the current process. The transport handlers will pull from this storage when |
26 | 37 | they emit the spans. |
27 | 38 | |
39 | .. deprecated:: | |
40 | Use the Tracer interface which offers better multi-threading support. | |
41 | get_thread_local_span_storage will be removed in version 1.0. | |
42 | ||
28 | 43 | :returns: SpanStore object containing all non-root spans. |
29 | 44 | :rtype: py_zipkin.storage.SpanStore |
30 | 45 | """ |
31 | if not hasattr(_thread_local, 'span_storage'): | |
32 | from py_zipkin.storage import SpanStorage | |
33 | _thread_local.span_storage = SpanStorage() | |
34 | return _thread_local.span_storage | |
46 | log.warning( | |
47 | "get_thread_local_span_storage is deprecated. See DEPRECATIONS.rst" | |
48 | " for details on how to migrate to using Tracer." | |
49 | ) | |
50 | return get_default_tracer()._span_storage | |
35 | 51 | |
36 | 52 | |
37 | def get_zipkin_attrs(): | |
53 | def get_zipkin_attrs() -> Optional[ZipkinAttrs]: | |
38 | 54 | """Get the topmost level zipkin attributes stored. |
55 | ||
56 | .. deprecated:: | |
57 | Use the Tracer interface which offers better multi-threading support. | |
58 | get_zipkin_attrs will be removed in version 1.0. | |
39 | 59 | |
40 | 60 | :returns: tuple containing zipkin attrs |
41 | 61 | :rtype: :class:`zipkin.ZipkinAttrs` |
42 | 62 | """ |
43 | 63 | from py_zipkin.storage import ThreadLocalStack |
44 | warnings.warn( | |
45 | 'Use py_zipkin.stack.ThreadLocalStack().get', | |
46 | DeprecationWarning, | |
64 | ||
65 | log.warning( | |
66 | "get_zipkin_attrs is deprecated. See DEPRECATIONS.rst for" | |
67 | " details on how to migrate to using Tracer." | |
47 | 68 | ) |
48 | 69 | return ThreadLocalStack().get() |
49 | 70 | |
50 | 71 | |
51 | def pop_zipkin_attrs(): | |
72 | def pop_zipkin_attrs() -> Optional[ZipkinAttrs]: | |
52 | 73 | """Pop the topmost level zipkin attributes, if present. |
74 | ||
75 | .. deprecated:: | |
76 | Use the Tracer interface which offers better multi-threading support. | |
77 | pop_zipkin_attrs will be removed in version 1.0. | |
53 | 78 | |
54 | 79 | :returns: tuple containing zipkin attrs |
55 | 80 | :rtype: :class:`zipkin.ZipkinAttrs` |
56 | 81 | """ |
57 | 82 | from py_zipkin.storage import ThreadLocalStack |
58 | warnings.warn( | |
59 | 'Use py_zipkin.stack.ThreadLocalStack().pop', | |
60 | DeprecationWarning, | |
83 | ||
84 | log.warning( | |
85 | "pop_zipkin_attrs is deprecated. See DEPRECATIONS.rst for" | |
86 | " details on how to migrate to using Tracer." | |
61 | 87 | ) |
62 | 88 | return ThreadLocalStack().pop() |
63 | 89 | |
64 | 90 | |
65 | def push_zipkin_attrs(zipkin_attr): | |
91 | def push_zipkin_attrs(zipkin_attr: ZipkinAttrs) -> None: | |
66 | 92 | """Stores the zipkin attributes to thread local. |
93 | ||
94 | .. deprecated:: | |
95 | Use the Tracer interface which offers better multi-threading support. | |
96 | push_zipkin_attrs will be removed in version 1.0. | |
67 | 97 | |
68 | 98 | :param zipkin_attr: tuple containing zipkin related attrs |
69 | 99 | :type zipkin_attr: :class:`zipkin.ZipkinAttrs` |
70 | 100 | """ |
71 | 101 | from py_zipkin.storage import ThreadLocalStack |
72 | warnings.warn( | |
73 | 'Use py_zipkin.stack.ThreadLocalStack().push', | |
74 | DeprecationWarning, | |
102 | ||
103 | log.warning( | |
104 | "push_zipkin_attrs is deprecated. See DEPRECATIONS.rst for" | |
105 | " details on how to migrate to using Tracer." | |
75 | 106 | ) |
76 | return ThreadLocalStack().push(zipkin_attr) | |
107 | ThreadLocalStack().push(zipkin_attr) |
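The deprecated helpers above all delegate to a thread-local stack of zipkin attributes. A minimal stdlib-only sketch of that pattern (the real `py_zipkin.storage.ThreadLocalStack` carries more responsibilities; this only illustrates the shared per-thread storage):

```python
import threading
from typing import Any, List, Optional


class ThreadLocalStack:
    """Sketch of a per-thread attribute stack, as the deprecated helpers use."""

    # Class-level storage: every instance in the same thread shares one stack.
    _local = threading.local()

    @property
    def _storage(self) -> List[Any]:
        # Lazily create the list the first time each thread touches the stack.
        if not hasattr(self._local, "stack"):
            self._local.stack = []
        return self._local.stack

    def push(self, item: Any) -> None:
        self._storage.append(item)

    def get(self) -> Optional[Any]:
        return self._storage[-1] if self._storage else None

    def pop(self) -> Optional[Any]:
        return self._storage.pop() if self._storage else None
```

Because storage lives on the class, `ThreadLocalStack().push(...)` followed by `ThreadLocalStack().get()` from a fresh instance still sees the same data, which is exactly why these free functions could wrap it.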
0 | # -*- coding: utf-8 -*- | |
1 | 0 | import os |
2 | 1 | import socket |
3 | 2 | import struct |
4 | ||
5 | import thriftpy | |
6 | from thriftpy.protocol.binary import TBinaryProtocol | |
7 | from thriftpy.protocol.binary import write_list_begin | |
8 | from thriftpy.thrift import TType | |
9 | from thriftpy.transport import TMemoryBuffer | |
3 | from typing import Dict | |
4 | from typing import List | |
5 | from typing import Mapping | |
6 | from typing import Optional | |
7 | from typing import TYPE_CHECKING | |
8 | ||
9 | import thriftpy2 | |
10 | from thriftpy2.protocol import TBinaryProtocol | |
11 | from thriftpy2.protocol.binary import write_list_begin | |
12 | from thriftpy2.thrift import TType | |
13 | from thriftpy2.transport import TMemoryBuffer | |
14 | from typing_extensions import TypedDict | |
10 | 15 | |
11 | 16 | from py_zipkin.util import unsigned_hex_to_signed_int |
12 | 17 | |
13 | 18 | |
14 | thrift_filepath = os.path.join(os.path.dirname(__file__), 'zipkinCore.thrift') | |
15 | zipkin_core = thriftpy.load(thrift_filepath, module_name="zipkinCore_thrift") | |
16 | ||
17 | SERVER_ADDR_VAL = '\x01' | |
19 | thrift_filepath = os.path.join(os.path.dirname(__file__), "zipkinCore.thrift") | |
20 | ||
21 | # this is an unusual pattern: zipkinCore isn't a "real" module, but the | |
22 | # .pyi file pretends it is, so we can't import it for real; we can only | |
23 | # import it during type checking. | |
24 | if TYPE_CHECKING: # pragma: no cover | |
25 | from . import zipkinCore | |
26 | else: | |
27 | # load this as `zipkinCore` so that thrift-pyi generation matches | |
28 | zipkinCore = thriftpy2.load(thrift_filepath, module_name="zipkinCore_thrift") | |
29 | ||
30 | # keep the old zipkin_core name as an alias in case it's still used | |
31 | zipkin_core = zipkinCore | |
32 | ||
33 | SERVER_ADDR_VAL = "\x01" | |
18 | 34 | LIST_HEADER_SIZE = 5 # size in bytes of the encoded list header |
19 | 35 | |
20 | dummy_endpoint = zipkin_core.Endpoint() | |
21 | ||
22 | ||
23 | def create_annotation(timestamp, value, host): | |
36 | dummy_endpoint = zipkinCore.Endpoint() | |
37 | ||
38 | ||
39 | def create_annotation( | |
40 | timestamp: int, value: str, host: "zipkinCore.Endpoint" | |
41 | ) -> "zipkinCore.Annotation": | |
24 | 42 | """ |
25 | 43 | Create a zipkin annotation object |
26 | 44 | |
30 | 48 | |
31 | 49 | :returns: zipkin annotation object |
32 | 50 | """ |
33 | return zipkin_core.Annotation(timestamp=timestamp, value=value, host=host) | |
34 | ||
35 | ||
36 | def create_binary_annotation(key, value, annotation_type, host): | |
51 | return zipkinCore.Annotation(timestamp=timestamp, value=value, host=host) | |
52 | ||
53 | ||
54 | def create_binary_annotation( | |
55 | key: str, | |
56 | value: str, | |
57 | annotation_type: "zipkinCore.AnnotationType", | |
58 | host: "zipkinCore.Endpoint", | |
59 | ) -> "zipkinCore.BinaryAnnotation": | |
37 | 60 | """ |
38 | 61 | Create a zipkin binary annotation object |
39 | 62 | |
44 | 67 | |
45 | 68 | :returns: zipkin binary annotation object |
46 | 69 | """ |
47 | return zipkin_core.BinaryAnnotation( | |
70 | return zipkinCore.BinaryAnnotation( | |
48 | 71 | key=key, |
49 | 72 | value=value, |
50 | 73 | annotation_type=annotation_type, |
52 | 75 | ) |
53 | 76 | |
54 | 77 | |
55 | def create_endpoint(port=0, service_name='unknown', ipv4=None, ipv6=None): | |
78 | def create_endpoint( | |
79 | port: int = 0, | |
80 | service_name: Optional[str] = "unknown", | |
81 | ipv4: Optional[str] = None, | |
82 | ipv6: Optional[str] = None, | |
83 | ) -> "zipkinCore.Endpoint": | |
56 | 84 | """Create a zipkin Endpoint object. |
57 | 85 | |
58 | 86 | An Endpoint object holds information about the network context of a span. |
68 | 96 | |
69 | 97 | # Convert ip address to network byte order |
70 | 98 | if ipv4: |
71 | ipv4_int = struct.unpack('!i', socket.inet_pton(socket.AF_INET, ipv4))[0] | |
99 | ipv4_int = struct.unpack("!i", socket.inet_pton(socket.AF_INET, ipv4))[0] | |
72 | 100 | |
73 | 101 | if ipv6: |
74 | 102 | ipv6_binary = socket.inet_pton(socket.AF_INET6, ipv6) |
75 | 103 | |
76 | 104 | # Zipkin passes unsigned values in signed types because Thrift has no |
77 | 105 | # unsigned types, so we have to convert the value. |
78 | port = struct.unpack('h', struct.pack('H', port))[0] | |
79 | return zipkin_core.Endpoint( | |
106 | port = struct.unpack("h", struct.pack("H", port))[0] | |
107 | # type ignore due to https://github.com/unmade/thrift-pyi/issues/25 | |
108 | return zipkinCore.Endpoint( | |
80 | 109 | ipv4=ipv4_int, |
81 | ipv6=ipv6_binary, | |
110 | ipv6=ipv6_binary, # type: ignore[arg-type] | |
82 | 111 | port=port, |
83 | 112 | service_name=service_name, |
84 | 113 | ) |
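The two `struct` conversions inside `create_endpoint` can be exercised in isolation. The address and port below are illustrative values, not taken from the library:

```python
import socket
import struct

# Convert a dotted-quad IPv4 address to a signed 32-bit int in network
# byte order ("!i"), as the Endpoint thrift field expects.
ipv4_int = struct.unpack("!i", socket.inet_pton(socket.AF_INET, "127.0.0.1"))[0]

# Reinterpret an unsigned 16-bit port as signed, since Thrift has no
# unsigned types; ports above 32767 come out negative.
signed_port = struct.unpack("h", struct.pack("H", 65535))[0]

print(ipv4_int, signed_port)  # 2130706433 -1
```

Zipkin readers perform the inverse reinterpretation, so no information is lost even though the signed value looks odd.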
85 | 114 | |
86 | 115 | |
87 | def copy_endpoint_with_new_service_name(endpoint, service_name): | |
116 | def copy_endpoint_with_new_service_name( | |
117 | endpoint: "zipkinCore.Endpoint", service_name: str | |
118 | ) -> "zipkinCore.Endpoint": | |
88 | 119 | """Creates a copy of a given endpoint with a new service name.
89 | 120 | This should be very fast, on the order of several microseconds. |
90 | 121 | |
91 | :param endpoint: existing zipkin_core.Endpoint object | |
122 | :param endpoint: existing zipkinCore.Endpoint object | |
92 | 123 | :param service_name: str of new service name |
93 | 124 | :returns: zipkin Endpoint object |
94 | 125 | """ |
95 | return zipkin_core.Endpoint( | |
126 | return zipkinCore.Endpoint( | |
96 | 127 | ipv4=endpoint.ipv4, |
97 | 128 | port=endpoint.port, |
98 | 129 | service_name=service_name, |
99 | 130 | ) |
100 | 131 | |
101 | 132 | |
102 | def annotation_list_builder(annotations, host): | |
103 | """ | |
104 | Reformat annotations dict to return list of corresponding zipkin_core objects. | |
133 | def annotation_list_builder( | |
134 | annotations: Mapping[str, float], host: "zipkinCore.Endpoint" | |
135 | ) -> List["zipkinCore.Annotation"]: | |
136 | """ | |
137 | Reformat annotations dict to return list of corresponding zipkinCore objects. | |
105 | 138 | |
106 | 139 | :param annotations: dict containing key as annotation name, |
107 | 140 | value being timestamp in seconds(float). |
108 | :type host: :class:`zipkin_core.Endpoint` | |
109 | :returns: a list of annotation zipkin_core objects | |
141 | :type host: :class:`zipkinCore.Endpoint` | |
142 | :returns: a list of annotation zipkinCore objects | |
110 | 143 | :rtype: list |
111 | 144 | """ |
112 | 145 | return [ |
115 | 148 | ] |
116 | 149 | |
117 | 150 | |
118 | def binary_annotation_list_builder(binary_annotations, host): | |
119 | """ | |
120 | Reformat binary annotations dict to return list of zipkin_core objects. The | |
151 | def binary_annotation_list_builder( | |
152 | binary_annotations: Dict[str, str], host: "zipkinCore.Endpoint" | |
153 | ) -> List["zipkinCore.BinaryAnnotation"]: | |
154 | """ | |
155 | Reformat binary annotations dict to return list of zipkinCore objects. The | |
121 | 156 | value of the binary annotations MUST be in string format. |
122 | 157 | |
123 | 158 | :param binary_annotations: dict with key, value being the name and value |
124 | 159 | of the binary annotation being logged. |
125 | :type host: :class:`zipkin_core.Endpoint` | |
126 | :returns: a list of binary annotation zipkin_core objects | |
160 | :type host: :class:`zipkinCore.Endpoint` | |
161 | :returns: a list of binary annotation zipkinCore objects | |
127 | 162 | :rtype: list |
128 | 163 | """ |
129 | 164 | # TODO: Remove the type hard-coding of STRING to take it as a param option. |
130 | ann_type = zipkin_core.AnnotationType.STRING | |
165 | ann_type = zipkinCore.AnnotationType.STRING | |
131 | 166 | return [ |
132 | 167 | create_binary_annotation(key, str(value), ann_type, host) |
133 | 168 | for key, value in binary_annotations.items() |
134 | 169 | ] |
135 | 170 | |
136 | 171 | |
172 | class SpanKwargs(TypedDict, total=False): | |
173 | trace_id: int | |
174 | name: Optional[str] | |
175 | id: int | |
176 | annotations: List["zipkinCore.Annotation"] | |
177 | binary_annotations: List["zipkinCore.BinaryAnnotation"] | |
178 | timestamp: Optional[int] | |
179 | duration: Optional[int] | |
180 | trace_id_high: Optional[int] | |
181 | parent_id: int | |
182 | ||
183 | ||
137 | 184 | def create_span( |
138 | span_id, | |
139 | parent_span_id, | |
140 | trace_id, | |
141 | span_name, | |
142 | annotations, | |
143 | binary_annotations, | |
144 | timestamp_s, | |
145 | duration_s, | |
146 | ): | |
147 | """Takes a bunch of span attributes and returns a thriftpy representation | |
185 | span_id: str, | |
186 | parent_span_id: Optional[str], | |
187 | trace_id: str, | |
188 | span_name: Optional[str], | |
189 | annotations: List["zipkinCore.Annotation"], | |
190 | binary_annotations: List["zipkinCore.BinaryAnnotation"], | |
191 | timestamp_s: Optional[float], | |
192 | duration_s: Optional[float], | |
193 | ) -> "zipkinCore.Span": | |
194 | """Takes a bunch of span attributes and returns a thriftpy2 representation | |
148 | 195 | of the span. Timestamps passed in are in seconds, they're converted to |
149 | 196 | microseconds before thrift encoding. |
150 | 197 | """ |
155 | 202 | assert trace_id_length == 32 |
156 | 203 | trace_id, trace_id_high = trace_id[16:], trace_id[:16] |
157 | 204 | |
205 | trace_id_high_int = None | |
158 | 206 | if trace_id_high: |
159 | trace_id_high = unsigned_hex_to_signed_int(trace_id_high) | |
160 | ||
161 | span_dict = { | |
162 | 'trace_id': unsigned_hex_to_signed_int(trace_id), | |
163 | 'name': span_name, | |
164 | 'id': unsigned_hex_to_signed_int(span_id), | |
165 | 'annotations': annotations, | |
166 | 'binary_annotations': binary_annotations, | |
167 | 'timestamp': int(timestamp_s * 1000000) if timestamp_s else None, | |
168 | 'duration': int(duration_s * 1000000) if duration_s else None, | |
169 | 'trace_id_high': trace_id_high, | |
207 | trace_id_high_int = unsigned_hex_to_signed_int(trace_id_high) | |
208 | ||
209 | span_dict: SpanKwargs = { | |
210 | "trace_id": unsigned_hex_to_signed_int(trace_id), | |
211 | "name": span_name, | |
212 | "id": unsigned_hex_to_signed_int(span_id), | |
213 | "annotations": annotations, | |
214 | "binary_annotations": binary_annotations, | |
215 | "timestamp": int(timestamp_s * 1000000) if timestamp_s else None, | |
216 | "duration": int(duration_s * 1000000) if duration_s else None, | |
217 | "trace_id_high": trace_id_high_int, | |
170 | 218 | } |
171 | 219 | if parent_span_id: |
172 | span_dict['parent_id'] = unsigned_hex_to_signed_int(parent_span_id) | |
173 | return zipkin_core.Span(**span_dict) | |
174 | ||
175 | ||
176 | def span_to_bytes(thrift_span): | |
220 | span_dict["parent_id"] = unsigned_hex_to_signed_int(parent_span_id) | |
221 | return zipkinCore.Span(**span_dict) | |
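The 128-bit branch in `create_span` splits a 32-character hex trace id into a low and a high 64-bit half; each half is then run through `unsigned_hex_to_signed_int` separately. A small illustration of the split, using a made-up id:

```python
trace_id = "0123456789abcdeffedcba9876543210"  # hypothetical 128-bit trace id

assert len(trace_id) == 32
# Low 64 bits go into trace_id, high 64 bits into trace_id_high.
trace_id_low, trace_id_high = trace_id[16:], trace_id[:16]
print(trace_id_low, trace_id_high)  # fedcba9876543210 0123456789abcdef
```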
222 | ||
223 | ||
224 | def span_to_bytes(thrift_span: "zipkinCore.Span") -> bytes: | |
177 | 225 | """ |
178 | 226 | Returns a TBinaryProtocol encoded Thrift span. |
179 | 227 | |
182 | 230 | """ |
183 | 231 | transport = TMemoryBuffer() |
184 | 232 | protocol = TBinaryProtocol(transport) |
185 | thrift_span.write(protocol) | |
233 | # this type ignore is necessary because thrift-pyi is not complete in its | |
234 | # type annotations | |
235 | thrift_span.write(protocol) # type: ignore[attr-defined] | |
186 | 236 | |
187 | 237 | return bytes(transport.getvalue()) |
188 | 238 | |
189 | 239 | |
190 | def encode_bytes_list(binary_thrift_obj_list): # pragma: no cover | |
240 | def encode_bytes_list( | |
241 | binary_thrift_obj_list: List[TBinaryProtocol], | |
242 | ) -> bytes: # pragma: no cover | |
191 | 243 | """ |
192 | 244 | Returns a TBinaryProtocol encoded list of Thrift objects. |
193 | 245 |
0 | from dataclasses import dataclass | |
1 | from enum import IntEnum | |
2 | from typing import * | |
3 | ||
4 | class AnnotationType(IntEnum): | |
5 | BOOL = 0 | |
6 | BYTES = 1 | |
7 | I16 = 2 | |
8 | I32 = 3 | |
9 | I64 = 4 | |
10 | DOUBLE = 5 | |
11 | STRING = 6 | |
12 | ||
13 | @dataclass | |
14 | class Endpoint: | |
15 | ipv4: Optional[int] = None | |
16 | port: Optional[int] = None | |
17 | service_name: Optional[str] = None | |
18 | ipv6: Optional[str] = None | |
19 | ||
20 | @dataclass | |
21 | class Annotation: | |
22 | timestamp: Optional[int] = None | |
23 | value: Optional[str] = None | |
24 | host: Optional[Endpoint] = None | |
25 | ||
26 | @dataclass | |
27 | class BinaryAnnotation: | |
28 | key: Optional[str] = None | |
29 | value: Optional[str] = None | |
30 | annotation_type: Optional[int] = None | |
31 | host: Optional[Endpoint] = None | |
32 | ||
33 | @dataclass | |
34 | class Span: | |
35 | trace_id: Optional[int] = None | |
36 | name: Optional[str] = None | |
37 | id: Optional[int] = None | |
38 | parent_id: Optional[int] = None | |
39 | annotations: Optional[List[Annotation]] = None | |
40 | binary_annotations: Optional[List[BinaryAnnotation]] = None | |
41 | debug: Optional[bool] = False | |
42 | timestamp: Optional[int] = None | |
43 | duration: Optional[int] = None | |
44 | trace_id_high: Optional[int] = None |
0 | # -*- coding: utf-8 -*- | |
0 | from typing import Optional | |
1 | from typing import Tuple | |
2 | from typing import Union | |
3 | from urllib.request import Request | |
4 | from urllib.request import urlopen | |
5 | ||
6 | from py_zipkin.encoding import detect_span_version_and_encoding | |
7 | from py_zipkin.encoding import Encoding | |
1 | 8 | |
2 | 9 | |
3 | class BaseTransportHandler(object): | |
4 | ||
5 | def get_max_payload_bytes(self): # pragma: no cover | |
10 | class BaseTransportHandler: | |
11 | def get_max_payload_bytes(self) -> Optional[int]: # pragma: no cover | |
6 | 12 | """Returns the maximum payload size for this transport. |
7 | 13 | |
8 | 14 | Most transports have a maximum packet size that can be sent. For example, |
15 | 21 | |
16 | 22 | :returns: max payload size in bytes or None. |
17 | 23 | """ |
18 | raise NotImplementedError('get_max_payload_bytes is not implemented') | |
24 | raise NotImplementedError("get_max_payload_bytes is not implemented") | |
19 | 25 | |
20 | def send(self, payload): # pragma: no cover | |
26 | def send(self, payload: Union[bytes, str]) -> None: # pragma: no cover | |
21 | 27 | """Sends the encoded payload over the transport. |
22 | 28 | |
23 | 29 | :argument payload: encoded list of spans. |
24 | 30 | """ |
25 | raise NotImplementedError('send is not implemented') | |
31 | raise NotImplementedError("send is not implemented") | |
26 | 32 | |
27 | def __call__(self, payload): | |
33 | def __call__(self, payload: Union[bytes, str]) -> None: | |
28 | 34 | """Internal wrapper around `send`. Do not override. |
29 | 35 | |
30 | 36 | Mostly used to keep backward compatibility with older transports |
34 | 40 | code every time. |
35 | 41 | """ |
36 | 42 | self.send(payload) |
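A custom transport only needs the interface above. A stdlib-only sketch that collects payloads in memory rather than sending them anywhere (the class name is hypothetical; the real base class lives in py_zipkin.transport):

```python
from typing import List, Optional, Union


class InMemoryTransport:
    """Implements the BaseTransportHandler contract without a network."""

    def __init__(self) -> None:
        self.payloads: List[Union[bytes, str]] = []

    def get_max_payload_bytes(self) -> Optional[int]:
        # No size limit for an in-memory list.
        return None

    def send(self, payload: Union[bytes, str]) -> None:
        self.payloads.append(payload)

    def __call__(self, payload: Union[bytes, str]) -> None:
        # Mirrors the base class: callers may invoke the transport directly.
        self.send(payload)
```

Something along these lines is useful in tests, where you want to assert on the encoded spans instead of emitting them.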
43 | ||
44 | ||
45 | class UnknownEncoding(Exception): | |
46 | """Exception class for when encountering an unknown Encoding""" | |
47 | ||
48 | ||
49 | class SimpleHTTPTransport(BaseTransportHandler): | |
50 | def __init__(self, address: str, port: int) -> None: | |
51 | """A simple HTTP transport for zipkin. | |
52 | ||
53 | This is not production ready (not async, no retries) but | |
54 | it's helpful for tests or people trying out py-zipkin. | |
55 | ||
56 | .. code-block:: python | |
57 | ||
58 | with zipkin_span( | |
59 | service_name='my_service', | |
60 | span_name='home', | |
61 | sample_rate=100, | |
62 | transport_handler=SimpleHTTPTransport('localhost', 9411), | |
63 | encoding=Encoding.V2_JSON, | |
64 | ): | |
65 | pass | |
66 | ||
67 | :param address: zipkin server address. | |
68 | :type address: str | |
69 | :param port: zipkin server port. | |
70 | :type port: int | |
71 | """ | |
72 | super().__init__() | |
73 | self.address = address | |
74 | self.port = port | |
75 | ||
76 | def get_max_payload_bytes(self) -> Optional[int]: | |
77 | return None | |
78 | ||
79 | def _get_path_content_type(self, payload: Union[str, bytes]) -> Tuple[str, str]: | |
80 | """Choose the right api path and content type depending on the encoding. | |
81 | ||
82 | This is not something you'd need to do generally when writing your own | |
83 | transport since in that case you'd know which encoding you're using. | |
84 | Since this is a generic transport, we need to make it compatible with | |
85 | any encoding instead. | |
86 | """ | |
87 | encoded_payload = ( | |
88 | payload.encode("utf-8") if isinstance(payload, str) else payload | |
89 | ) | |
90 | encoding = detect_span_version_and_encoding(encoded_payload) | |
91 | ||
92 | if encoding == Encoding.V1_JSON: | |
93 | return "/api/v1/spans", "application/json" | |
94 | elif encoding == Encoding.V1_THRIFT: | |
95 | return "/api/v1/spans", "application/x-thrift" | |
96 | elif encoding == Encoding.V2_JSON: | |
97 | return "/api/v2/spans", "application/json" | |
98 | elif encoding == Encoding.V2_PROTO3: | |
99 | return "/api/v2/spans", "application/x-protobuf" | |
100 | else:  # pragma: no cover | |
101 | raise UnknownEncoding(f"Unknown encoding: {encoding}") | |
102 | ||
103 | def send(self, payload: Union[str, bytes]) -> None: | |
104 | encoded_payload = ( | |
105 | payload.encode("utf-8") if isinstance(payload, str) else payload | |
106 | ) | |
107 | path, content_type = self._get_path_content_type(encoded_payload) | |
108 | url = f"http://{self.address}:{self.port}{path}" | |
109 | ||
110 | req = Request(url, encoded_payload, {"Content-Type": content_type}) | |
111 | response = urlopen(req) | |
112 | ||
113 | assert response.getcode() == 202 |
0 | # -*- coding: utf-8 -*- | |
1 | 0 | import random |
2 | 1 | import struct |
3 | 2 | import time |
3 | from typing import NamedTuple | |
4 | from typing import Optional | |
4 | 5 | |
5 | 6 | |
6 | def generate_random_64bit_string(): | |
7 | class ZipkinAttrs(NamedTuple): | |
8 | """ | |
9 | Holds the basic attributes needed to log a zipkin trace | |
10 | ||
11 | :param trace_id: Unique trace id | |
12 | :param span_id: Span Id of the current request span | |
13 | :param parent_span_id: Parent span Id of the current request span | |
14 | :param flags: stores flags header. Currently unused | |
15 | :param is_sampled: pre-computed bool whether the trace should be logged | |
16 | """ | |
17 | ||
18 | trace_id: str | |
19 | span_id: Optional[str] | |
20 | parent_span_id: Optional[str] | |
21 | flags: str | |
22 | is_sampled: bool | |
23 | ||
24 | ||
25 | def generate_random_64bit_string() -> str: | |
7 | 26 | """Returns a 64 bit UTF-8 encoded string. In the interests of simplicity, |
8 | 27 | this is always cast to a `str` instead of (in py2 land) a unicode string. |
9 | 28 | Certain clients (I'm looking at you, Twisted) don't enjoy unicode headers. |
10 | 29 | |
11 | 30 | :returns: random 16-character string |
12 | 31 | """ |
13 | return '{:016x}'.format(random.getrandbits(64)) | |
32 | return f"{random.getrandbits(64):016x}" | |
14 | 33 | |
15 | 34 | |
16 | def generate_random_128bit_string(): | |
35 | def generate_random_128bit_string() -> str: | |
17 | 36 | """Returns a 128 bit UTF-8 encoded string. Follows the same conventions |
18 | 37 | as generate_random_64bit_string(). |
19 | 38 | |
25 | 44 | """ |
26 | 45 | t = int(time.time()) |
27 | 46 | lower_96 = random.getrandbits(96) |
28 | return '{:032x}'.format((t << 96) | lower_96) | |
47 | return f"{(t << 96) | lower_96:032x}" | |
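Both id generators format random bits as fixed-width lowercase hex; the 128-bit variant additionally embeds the epoch seconds in the top 32 bits so ids sort roughly by time. A standalone check of the width guarantees:

```python
import random
import time

# 64-bit span id: always exactly 16 hex characters thanks to the 016x pad.
span_id = f"{random.getrandbits(64):016x}"

# 128-bit trace id: epoch seconds shifted into the top bits, padded to 32.
t = int(time.time())
trace_id = f"{(t << 96) | random.getrandbits(96):032x}"

print(len(span_id), len(trace_id))  # 16 32
```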
29 | 48 | |
30 | 49 | |
31 | def unsigned_hex_to_signed_int(hex_string): | |
50 | def unsigned_hex_to_signed_int(hex_string: str) -> int: | |
32 | 51 | """Converts a 64-bit hex string to a signed int value. |
33 | 52 | |
34 | 53 | This is due to the fact that Apache Thrift only has signed values. |
40 | 59 | :param hex_string: the string representation of a zipkin ID |
41 | 60 | :returns: signed int representation |
42 | 61 | """ |
43 | return struct.unpack('q', struct.pack('Q', int(hex_string, 16)))[0] | |
62 | return struct.unpack("q", struct.pack("Q", int(hex_string, 16)))[0] | |
44 | 63 | |
45 | 64 | |
46 | def signed_int_to_unsigned_hex(signed_int): | |
65 | def signed_int_to_unsigned_hex(signed_int: int) -> str: | |
47 | 66 | """Converts a signed int value to a 64-bit hex string. |
48 | 67 | |
49 | 68 | Examples: |
53 | 72 | :param signed_int: an int to convert |
54 | 73 | :returns: unsigned hex string |
55 | 74 | """ |
56 | hex_string = hex(struct.unpack('Q', struct.pack('q', signed_int))[0])[2:] | |
57 | if hex_string.endswith('L'): | |
75 | hex_string = hex(struct.unpack("Q", struct.pack("q", signed_int))[0])[2:] | |
76 | if hex_string.endswith("L"): | |
58 | 77 | return hex_string[:-1] |
59 | 78 | return hex_string |
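These two helpers are inverses of each other: Thrift stores the id as a signed 64-bit int, and the hex form restores the unsigned view. A stdlib-only roundtrip, reusing the document's own `struct` trick:

```python
import struct


def unsigned_hex_to_signed_int(hex_string: str) -> int:
    # Reinterpret the unsigned 64-bit value as signed two's complement.
    return struct.unpack("q", struct.pack("Q", int(hex_string, 16)))[0]


def signed_int_to_unsigned_hex(signed_int: int) -> str:
    # Reinterpret the signed value as unsigned, then drop the "0x" prefix.
    return hex(struct.unpack("Q", struct.pack("q", signed_int))[0])[2:]


print(unsigned_hex_to_signed_int("ffffffffffffffff"))  # -1
print(signed_int_to_unsigned_hex(-1))  # ffffffffffffffff
```

Note the hex form is not zero-padded here, matching the library's behavior for small values.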
79 | ||
80 | ||
81 | def _should_sample(sample_rate: float) -> bool: | |
82 | if sample_rate == 0.0: | |
83 | return False # save a die roll | |
84 | elif sample_rate == 100.0: | |
85 | return True # ditto | |
86 | return (random.random() * 100) < sample_rate | |
87 | ||
88 | ||
89 | def create_attrs_for_span( | |
90 | sample_rate: float = 100.0, | |
91 | trace_id: Optional[str] = None, | |
92 | span_id: Optional[str] = None, | |
93 | use_128bit_trace_id: bool = False, | |
94 | flags: Optional[str] = None, | |
95 | ) -> ZipkinAttrs: | |
96 | """Creates a set of zipkin attributes for a span. | |
97 | ||
98 | :param sample_rate: Float between 0.0 and 100.0 to determine sampling rate | |
99 | :type sample_rate: float | |
100 | :param trace_id: Optional 16-character hex string representing a trace_id. | |
101 | If this is None, a random trace_id will be generated. | |
102 | :type trace_id: str | |
103 | :param span_id: Optional 16-character hex string representing a span_id. | |
104 | If this is None, a random span_id will be generated. | |
105 | :type span_id: str | |
106 | :param use_128bit_trace_id: If true, generate 128-bit trace_ids | |
107 | :type use_128bit_trace_id: bool | |
108 | """ | |
109 | # Calculate if this trace is sampled based on the sample rate | |
110 | if trace_id is None: | |
111 | if use_128bit_trace_id: | |
112 | trace_id = generate_random_128bit_string() | |
113 | else: | |
114 | trace_id = generate_random_64bit_string() | |
115 | if span_id is None: | |
116 | span_id = generate_random_64bit_string() | |
117 | is_sampled = _should_sample(sample_rate) | |
118 | ||
119 | return ZipkinAttrs( | |
120 | trace_id=trace_id, | |
121 | span_id=span_id, | |
122 | parent_span_id=None, | |
123 | flags=flags or "0", | |
124 | is_sampled=is_sampled, | |
125 | ) |
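`create_attrs_for_span` rolls the sampling die once and fills in random ids as needed. The sampling decision alone can be sketched with the stdlib, mirroring `_should_sample` above:

```python
import random


def should_sample(sample_rate: float) -> bool:
    """Return True if this trace should be sampled, given a 0-100 rate."""
    if sample_rate == 0.0:
        return False  # never sample; skip the die roll entirely
    if sample_rate == 100.0:
        return True   # always sample
    return (random.random() * 100) < sample_rate


print(should_sample(0.0), should_sample(100.0))  # False True
```

Short-circuiting the 0 and 100 cases keeps the extremes deterministic, so tests with `sample_rate=100` never flake on a boundary comparison.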
0 | # -*- coding: utf-8 -*- | |
1 | 0 | import functools |
2 | import random | |
1 | import logging | |
3 | 2 | import time |
4 | import warnings | |
5 | from collections import namedtuple | |
3 | from types import TracebackType | |
4 | from typing import Any | |
5 | from typing import Callable | |
6 | from typing import cast | |
7 | from typing import Dict | |
8 | from typing import Optional | |
9 | from typing import Tuple | |
10 | from typing import Type | |
11 | from typing import TypeVar | |
6 | 12 | |
7 | 13 | from py_zipkin import Encoding |
8 | 14 | from py_zipkin import Kind |
9 | 15 | from py_zipkin import storage |
10 | 16 | from py_zipkin.encoding._helpers import create_endpoint |
11 | from py_zipkin.encoding._helpers import SpanBuilder | |
17 | from py_zipkin.encoding._helpers import Endpoint | |
18 | from py_zipkin.encoding._helpers import Span | |
12 | 19 | from py_zipkin.exception import ZipkinError |
20 | from py_zipkin.logging_helper import TransportHandler | |
13 | 21 | from py_zipkin.logging_helper import ZipkinLoggingContext |
14 | from py_zipkin.storage import ThreadLocalStack | |
15 | from py_zipkin.util import generate_random_128bit_string | |
22 | from py_zipkin.request_helpers import create_http_headers | |
23 | from py_zipkin.storage import get_default_tracer | |
24 | from py_zipkin.storage import SpanStorage | |
25 | from py_zipkin.storage import Stack | |
26 | from py_zipkin.storage import Tracer | |
27 | from py_zipkin.util import create_attrs_for_span | |
16 | 28 | from py_zipkin.util import generate_random_64bit_string |
17 | ||
18 | ||
19 | """ | |
20 | Holds the basic attributes needed to log a zipkin trace | |
21 | ||
22 | :param trace_id: Unique trace id | |
23 | :param span_id: Span Id of the current request span | |
24 | :param parent_span_id: Parent span Id of the current request span | |
25 | :param flags: stores flags header. Currently unused | |
26 | :param is_sampled: pre-computed boolean whether the trace should be logged | |
27 | """ | |
28 | ZipkinAttrs = namedtuple( | |
29 | 'ZipkinAttrs', | |
30 | ['trace_id', 'span_id', 'parent_span_id', 'flags', 'is_sampled'], | |
31 | ) | |
32 | ||
33 | ERROR_KEY = 'error' | |
34 | ||
35 | ||
36 | class zipkin_span(object): | |
29 | from py_zipkin.util import ZipkinAttrs | |
30 | ||
31 | log = logging.getLogger(__name__) | |
32 | ||
33 | ERROR_KEY = "error" | |
34 | ||
35 | ||
36 | F = TypeVar("F", bound=Callable[..., Any]) | |
37 | ||
38 | ||
39 | class zipkin_span: | |
37 | 40 | """Context manager/decorator for all of your zipkin tracing needs. |
38 | 41 | |
39 | 42 | Usage #1: Start a trace with a given sampling rate |
92 | 95 | |
93 | 96 | def __init__( |
94 | 97 | self, |
95 | service_name, | |
96 | span_name='span', | |
97 | zipkin_attrs=None, | |
98 | transport_handler=None, | |
99 | max_span_batch_size=None, | |
100 | annotations=None, | |
101 | binary_annotations=None, | |
102 | port=0, | |
103 | sample_rate=None, | |
104 | include=None, | |
105 | add_logging_annotation=False, | |
106 | report_root_timestamp=False, | |
107 | use_128bit_trace_id=False, | |
108 | host=None, | |
109 | context_stack=None, | |
110 | span_storage=None, | |
111 | firehose_handler=None, | |
112 | kind=None, | |
113 | timestamp=None, | |
114 | duration=None, | |
115 | encoding=Encoding.V1_THRIFT, | |
98 | service_name: str, | |
99 | span_name: str = "span", | |
100 | zipkin_attrs: Optional[ZipkinAttrs] = None, | |
101 | transport_handler: Optional[TransportHandler] = None, | |
102 | max_span_batch_size: Optional[int] = None, | |
103 | annotations: Optional[Dict[str, Optional[float]]] = None, | |
104 | binary_annotations: Optional[Dict[str, Optional[str]]] = None, | |
105 | port: int = 0, | |
106 | sample_rate: Optional[float] = None, | |
107 | include: Optional[str] = None, | |
108 | add_logging_annotation: bool = False, | |
109 | report_root_timestamp: bool = False, | |
110 | use_128bit_trace_id: bool = False, | |
111 | host: Optional[str] = None, | |
112 | context_stack: Optional[Stack] = None, | |
113 | span_storage: Optional[SpanStorage] = None, | |
114 | firehose_handler: Optional[TransportHandler] = None, | |
115 | kind: Optional[Kind] = None, | |
116 | timestamp: Optional[float] = None, | |
117 | duration: Optional[float] = None, | |
118 | encoding: Encoding = Encoding.V2_JSON, | |
119 | _tracer: Optional[Tracer] = None, | |
116 | 120 | ): |
117 | 121 | """Logs a zipkin span. If this is the root span, then a zipkin |
118 | 122 | trace is started as well. |
183 | 187 | :type duration: float |
184 | 188 | :param encoding: Output encoding format, defaults to V1 thrift spans. |
185 | 189 | :type encoding: Encoding |
190 | :param _tracer: Current tracer object. This argument is passed in | |
191 | automatically when you create a zipkin_span from a Tracer. | |
192 | :type _tracer: Tracer | |
186 | 193 | """ |
187 | 194 | self.service_name = service_name |
188 | 195 | self.span_name = span_name |
189 | self.zipkin_attrs = zipkin_attrs | |
196 | self.zipkin_attrs_override = zipkin_attrs | |
190 | 197 | self.transport_handler = transport_handler |
191 | 198 | self.max_span_batch_size = max_span_batch_size |
192 | 199 | self.annotations = annotations or {} |
197 | 204 | self.report_root_timestamp_override = report_root_timestamp |
198 | 205 | self.use_128bit_trace_id = use_128bit_trace_id |
199 | 206 | self.host = host |
200 | self._context_stack = context_stack or ThreadLocalStack() | |
201 | if span_storage is not None: | |
202 | self._span_storage = span_storage | |
203 | else: | |
204 | self._span_storage = storage.default_span_storage() | |
207 | self._context_stack = context_stack | |
208 | self._span_storage = span_storage | |
205 | 209 | self.firehose_handler = firehose_handler |
206 | 210 | self.kind = self._generate_kind(kind, include) |
207 | 211 | self.timestamp = timestamp |
208 | 212 | self.duration = duration |
209 | 213 | self.encoding = encoding |
214 | self._tracer = _tracer | |
210 | 215 | |
211 | 216 | self._is_local_root_span = False |
212 | self.logging_context = None | |
217 | self.logging_context: Optional[ZipkinLoggingContext] = None | |
213 | 218 | self.do_pop_attrs = False |
214 | 219 | # Spans that log a 'cs' timestamp can additionally record a |
215 | 220 | # 'sa' binary annotation that shows where the request is going. |
216 | self.sa_endpoint = None | |
221 | self.remote_endpoint: Optional[Endpoint] = None | |
222 | self.zipkin_attrs: Optional[ZipkinAttrs] = None | |
217 | 223 | |
218 | 224 | # It used to be possible to override timestamp and duration by passing |
219 | 225 | # in the cs/cr or sr/ss annotations. We want to keep backward compatibility |
221 | 227 | # same way. |
222 | 228 | # This doesn't fit well with v2 spans since those annotations are gone, so |
223 | 229 | # we also log a deprecation warning. |
224 | if 'sr' in self.annotations and 'ss' in self.annotations: | |
225 | self.duration = self.annotations['ss'] - self.annotations['sr'] | |
226 | self.timestamp = self.annotations['sr'] | |
227 | warnings.warn( | |
230 | if "sr" in self.annotations and "ss" in self.annotations: | |
231 | assert self.annotations["ss"] is not None | |
232 | assert self.annotations["sr"] is not None | |
233 | self.duration = self.annotations["ss"] - self.annotations["sr"] | |
234 | self.timestamp = self.annotations["sr"] | |
235 | log.warning( | |
228 | 236 | "Manually setting 'sr'/'ss' annotations is deprecated. Please " |
229 | "use the timestamp and duration parameters.", | |
230 | DeprecationWarning, | |
237 | "use the timestamp and duration parameters." | |
231 | 238 | ) |
232 | if 'cr' in self.annotations and 'cs' in self.annotations: | |
233 | self.duration = self.annotations['cr'] - self.annotations['cs'] | |
234 | self.timestamp = self.annotations['cs'] | |
235 | warnings.warn( | |
239 | if "cr" in self.annotations and "cs" in self.annotations: | |
240 | assert self.annotations["cr"] is not None | |
241 | assert self.annotations["cs"] is not None | |
242 | self.duration = self.annotations["cr"] - self.annotations["cs"] | |
243 | self.timestamp = self.annotations["cs"] | |
244 | log.warning( | |
236 | 245 | "Manually setting 'cr'/'cs' annotations is deprecated. Please " |
237 | "use the timestamp and duration parameters.", | |
238 | DeprecationWarning, | |
246 | "use the timestamp and duration parameters." | |
239 | 247 | ) |
240 | 248 | |
241 | # Root spans have transport_handler and at least one of zipkin_attrs | |
242 | # or sample_rate. | |
243 | if self.zipkin_attrs or self.sample_rate is not None: | |
249 | # Root spans have transport_handler and at least one of | |
250 | # zipkin_attrs_override or sample_rate. | |
251 | if self.zipkin_attrs_override or self.sample_rate is not None: | |
244 | 252 | # transport_handler is mandatory for root spans |
245 | 253 | if self.transport_handler is None: |
246 | raise ZipkinError( | |
247 | 'Root spans require a transport handler to be given') | |
254 | raise ZipkinError("Root spans require a transport handler to be given") | |
248 | 255 | |
249 | 256 | self._is_local_root_span = True |
250 | 257 | |
253 | 260 | self._is_local_root_span = True |
254 | 261 | |
255 | 262 | if self.sample_rate is not None and not (0.0 <= self.sample_rate <= 100.0): |
256 | raise ZipkinError('Sample rate must be between 0.0 and 100.0') | |
257 | ||
258 | if not isinstance(self._span_storage, storage.SpanStorage): | |
259 | raise ZipkinError('span_storage should be an instance ' | |
260 | 'of py_zipkin.storage.SpanStorage') | |
261 | ||
262 | def __call__(self, f): | |
263 | raise ZipkinError("Sample rate must be between 0.0 and 100.0") | |
264 | ||
265 | if self._span_storage is not None and not isinstance( | |
266 | self._span_storage, storage.SpanStorage | |
267 | ): | |
268 | raise ZipkinError( | |
269 | "span_storage should be an instance of py_zipkin.storage.SpanStorage" | |
270 | ) | |
271 | ||
272 | if self._span_storage is not None: | |
273 | log.warning("span_storage is deprecated. Set local_storage instead.") | |
274 | self.get_tracer()._span_storage = self._span_storage | |
275 | ||
276 | if self._context_stack is not None: | |
277 | log.warning("context_stack is deprecated. Set local_storage instead.") | |
278 | self.get_tracer()._context_stack = self._context_stack | |
279 | ||
280 | def __call__(self, f: F) -> F: | |
263 | 281 | @functools.wraps(f) |
264 | def decorated(*args, **kwargs): | |
282 | def decorated(*args: Any, **kwargs: Any) -> Any: | |
265 | 283 | with zipkin_span( |
266 | 284 | service_name=self.service_name, |
267 | 285 | span_name=self.span_name, |
268 | 286 | zipkin_attrs=self.zipkin_attrs, |
269 | 287 | transport_handler=self.transport_handler, |
288 | max_span_batch_size=self.max_span_batch_size, | |
270 | 289 | annotations=self.annotations, |
271 | 290 | binary_annotations=self.binary_annotations, |
272 | 291 | port=self.port, |
273 | 292 | sample_rate=self.sample_rate, |
293 | include=None, | |
294 | add_logging_annotation=self.add_logging_annotation, | |
295 | report_root_timestamp=self.report_root_timestamp_override, | |
296 | use_128bit_trace_id=self.use_128bit_trace_id, | |
274 | 297 | host=self.host, |
275 | 298 | context_stack=self._context_stack, |
276 | 299 | span_storage=self._span_storage, |
279 | 302 | timestamp=self.timestamp, |
280 | 303 | duration=self.duration, |
281 | 304 | encoding=self.encoding, |
305 | _tracer=self._tracer, | |
282 | 306 | ): |
283 | 307 | return f(*args, **kwargs) |
284 | return decorated | |
285 | ||
286 | def __enter__(self): | |
308 | ||
309 | return cast(F, decorated) | |
310 | ||
311 | def get_tracer(self) -> Tracer: | |
312 | if self._tracer is not None: | |
313 | return self._tracer | |
314 | else: | |
315 | return get_default_tracer() | |
316 | ||
317 | def __enter__(self) -> "zipkin_span": | |
287 | 318 | return self.start() |
288 | 319 | |
289 | def _generate_kind(self, kind, include): | |
320 | def _generate_kind(self, kind: Optional[Kind], include: Optional[str]) -> Kind: | |
290 | 321 | # If `kind` is not set, then we generate it from `include`. |
291 | 322 | # This code maintains backward compatibility with old versions of py_zipkin |
292 | 323 | # which used include rather than kind to identify client / server spans. |
298 | 329 | then it's a client or server span, respectively.
299 | 330 | # If neither or both are present, then it's a local span |
300 | 331 | # which is represented by kind = None. |
301 | warnings.warn( | |
302 | 'The include argument is deprecated. Please use kind.', | |
303 | DeprecationWarning, | |
304 | ) | |
305 | if 'client' in include and 'server' not in include: | |
332 | log.warning("The include argument is deprecated. Please use kind.") | |
333 | if "client" in include and "server" not in include: | |
306 | 334 | return Kind.CLIENT |
307 | elif 'client' not in include and 'server' in include: | |
335 | elif "client" not in include and "server" in include: | |
308 | 336 | return Kind.SERVER |
309 | 337 | else: |
310 | 338 | return Kind.LOCAL |
312 | 340 | # If both kind and include are unset, then it's a local span. |
313 | 341 | return Kind.LOCAL |
314 | 342 | |
315 | def start(self): | |
316 | """Enter the new span context. All annotations logged inside this | |
317 | context will be attributed to this span. All new spans generated | |
318 | inside this context will have this span as their parent. | |
319 | ||
320 | In the unsampled case, this context still generates new span IDs and | |
321 | pushes them onto the threadlocal stack, so downstream service calls | |
322 | made will pass the correct headers. However, the logging handler is | |
323 | never attached in the unsampled case, so the spans are never logged. | |
324 | """ | |
325 | self.do_pop_attrs = False | |
326 | report_root_timestamp = False | |
327 | ||
343 | def _get_current_context(self) -> Tuple[bool, Optional[ZipkinAttrs]]: | |
344 | """Returns the current ZipkinAttrs and generates new ones if needed. | |
345 | ||
346 | :returns: (report_root_timestamp, zipkin_attrs) | |
347 | :rtype: (bool, ZipkinAttrs) | |
348 | """ | |
328 | 349 | # This check is technically not necessary since only root spans will have |
329 | 350 | sample_rate, zipkin_attrs or a transport set. But it helps make the
330 | 351 | # code clearer by separating the logic for a root span from the one for a |
343 | 364 | if self.sample_rate is not None: |
344 | 365 | |
345 | 366 | # If this trace is not sampled, we re-roll the dice. |
346 | if self.zipkin_attrs and not self.zipkin_attrs.is_sampled: | |
367 | if ( | |
368 | self.zipkin_attrs_override | |
369 | and not self.zipkin_attrs_override.is_sampled | |
370 | ): | |
347 | 371 | # This will be the root span of the trace, so we should |
348 | 372 | # set timestamp and duration. |
349 | report_root_timestamp = True | |
350 | self.zipkin_attrs = create_attrs_for_span( | |
351 | sample_rate=self.sample_rate, | |
352 | trace_id=self.zipkin_attrs.trace_id, | |
353 | use_128bit_trace_id=self.use_128bit_trace_id, | |
373 | return ( | |
374 | True, | |
375 | create_attrs_for_span( | |
376 | sample_rate=self.sample_rate, | |
377 | trace_id=self.zipkin_attrs_override.trace_id, | |
378 | ), | |
354 | 379 | ) |
355 | 380 | |
356 | # If zipkin_attrs was not passed in, we simply generate new | |
357 | # zipkin_attrs to start a new trace. | |
358 | elif not self.zipkin_attrs: | |
359 | # This will be the root span of the trace, so we should | |
360 | # set timestamp and duration. | |
361 | report_root_timestamp = True | |
362 | self.zipkin_attrs = create_attrs_for_span( | |
363 | sample_rate=self.sample_rate, | |
364 | use_128bit_trace_id=self.use_128bit_trace_id, | |
381 | # If zipkin_attrs_override was not passed in, we simply generate | |
382 | # new zipkin_attrs to start a new trace. | |
383 | elif not self.zipkin_attrs_override: | |
384 | return ( | |
385 | True, | |
386 | create_attrs_for_span( | |
387 | sample_rate=self.sample_rate, | |
388 | use_128bit_trace_id=self.use_128bit_trace_id, | |
389 | ), | |
365 | 390 | ) |
366 | 391 | |
367 | if self.firehose_handler and not self.zipkin_attrs: | |
392 | if self.firehose_handler and not self.zipkin_attrs_override: | |
368 | 393 | # If it has gotten here, the only thing that is |
369 | 394 | # causing a trace is the firehose. So we force a trace |
370 | 395 | # with sample rate of 0 |
371 | report_root_timestamp = True | |
372 | self.zipkin_attrs = create_attrs_for_span( | |
373 | sample_rate=0.0, | |
374 | use_128bit_trace_id=self.use_128bit_trace_id, | |
396 | return ( | |
397 | True, | |
398 | create_attrs_for_span( | |
399 | sample_rate=0.0, | |
400 | use_128bit_trace_id=self.use_128bit_trace_id, | |
401 | ), | |
375 | 402 | ) |
403 | ||
404 | # If we arrive here it means the sample_rate was not set while | |
405 | # zipkin_attrs_override was, so let's simply return that. | |
406 | return False, self.zipkin_attrs_override | |
407 | ||
376 | 408 | else: |
377 | # If zipkin_attrs was not passed in, we check if there's already a | |
378 | # trace context in _context_stack. | |
379 | if not self.zipkin_attrs: | |
380 | existing_zipkin_attrs = self._context_stack.get() | |
381 | # If there's an existing context, let's create new zipkin_attrs | |
382 | # with that context as parent. | |
383 | if existing_zipkin_attrs: | |
384 | self.zipkin_attrs = ZipkinAttrs( | |
409 | # Check if there's already a trace context in _context_stack. | |
410 | existing_zipkin_attrs = self.get_tracer().get_zipkin_attrs() | |
411 | # If there's an existing context, let's create new zipkin_attrs | |
412 | # with that context as parent. | |
413 | if existing_zipkin_attrs: | |
414 | return ( | |
415 | False, | |
416 | ZipkinAttrs( | |
385 | 417 | trace_id=existing_zipkin_attrs.trace_id, |
386 | 418 | span_id=generate_random_64bit_string(), |
387 | 419 | parent_span_id=existing_zipkin_attrs.span_id, |
388 | 420 | flags=existing_zipkin_attrs.flags, |
389 | 421 | is_sampled=existing_zipkin_attrs.is_sampled, |
390 | ) | |
422 | ), | |
423 | ) | |
424 | ||
425 | return False, None | |
426 | ||
427 | def start(self) -> "zipkin_span": | |
428 | """Enter the new span context. All annotations logged inside this | |
429 | context will be attributed to this span. All new spans generated | |
430 | inside this context will have this span as their parent. | |
431 | ||
432 | In the unsampled case, this context still generates new span IDs and | |
433 | pushes them onto the threadlocal stack, so downstream service calls | |
434 | made will pass the correct headers. However, the logging handler is | |
435 | never attached in the unsampled case, so the spans are never logged. | |
436 | """ | |
437 | self.do_pop_attrs = False | |
438 | ||
439 | report_root_timestamp, self.zipkin_attrs = self._get_current_context() | |
391 | 440 | |
392 | 441 | # If zipkin_attrs are not set up by now, that means this span is not |
393 | 442 | # configured to perform logging itself, and it's not in an existing |
396 | 445 | if not self.zipkin_attrs: |
397 | 446 | return self |
398 | 447 | |
399 | self._context_stack.push(self.zipkin_attrs) | |
448 | self.get_tracer().push_zipkin_attrs(self.zipkin_attrs) | |
400 | 449 | self.do_pop_attrs = True |
401 | 450 | |
402 | 451 | self.start_timestamp = time.time() |
404 | 453 | if self._is_local_root_span: |
405 | 454 | # Don't set up any logging if we're not sampling |
406 | 455 | if not self.zipkin_attrs.is_sampled and not self.firehose_handler: |
456 | return self | |
457 | # If transport is already configured don't override it. Doing so would | |
458 | # cause all previously recorded spans to never be emitted as exiting | |
459 | # the inner logging context will reset transport_configured to False. | |
460 | if self.get_tracer().is_transport_configured(): | |
461 | log.info( | |
462 | "Transport was already configured, ignoring override " | |
463 | "from span {}".format(self.span_name) | |
464 | ) | |
407 | 465 | return self |
408 | 466 | endpoint = create_endpoint(self.port, self.service_name, self.host) |
409 | 467 | self.logging_context = ZipkinLoggingContext( |
412 | 470 | self.span_name, |
413 | 471 | self.transport_handler, |
414 | 472 | report_root_timestamp or self.report_root_timestamp_override, |
415 | self._span_storage, | |
473 | self.get_tracer, | |
416 | 474 | self.service_name, |
417 | 475 | binary_annotations=self.binary_annotations, |
418 | 476 | add_logging_annotation=self.add_logging_annotation, |
420 | 478 | max_span_batch_size=self.max_span_batch_size, |
421 | 479 | firehose_handler=self.firehose_handler, |
422 | 480 | encoding=self.encoding, |
481 | annotations=self.annotations, | |
423 | 482 | ) |
424 | 483 | self.logging_context.start() |
425 | self._span_storage.set_transport_configured(configured=True) | |
484 | self.get_tracer().set_transport_configured(configured=True) | |
426 | 485 | |
427 | 486 | return self |
428 | 487 | |
429 | def __exit__(self, _exc_type, _exc_value, _exc_traceback): | |
488 | def __exit__( | |
489 | self, | |
490 | _exc_type: Optional[Type[BaseException]], | |
491 | _exc_value: Optional[BaseException], | |
492 | _exc_traceback: TracebackType, | |
493 | ) -> None: | |
430 | 494 | self.stop(_exc_type, _exc_value, _exc_traceback) |
431 | 495 | |
432 | def stop(self, _exc_type=None, _exc_value=None, _exc_traceback=None): | |
496 | def stop( | |
497 | self, | |
498 | _exc_type: Optional[Type[BaseException]] = None, | |
499 | _exc_value: Optional[BaseException] = None, | |
500 | _exc_traceback: Optional[TracebackType] = None, | |
501 | ) -> None: | |
433 | 502 | """Exit the span context. Zipkin attrs are pushed onto the |
434 | 503 | threadlocal stack regardless of sampling, so they always need to be |
435 | 504 | popped off. The actual logging of spans depends on sampling and that |
437 | 506 | """ |
438 | 507 | |
439 | 508 | if self.do_pop_attrs: |
440 | self._context_stack.pop() | |
509 | self.get_tracer().pop_zipkin_attrs() | |
441 | 510 | |
442 | 511 | # If no transport is configured, there's no reason to create a new Span. |
443 | 512 | # This also helps avoiding memory leaks since without a transport nothing |
444 | # would pull spans out of _span_storage. | |
445 | if not self._span_storage.is_transport_configured(): | |
513 | # would pull spans out of get_tracer(). | |
514 | if not self.get_tracer().is_transport_configured(): | |
446 | 515 | return |
447 | 516 | |
448 | 517 | # Add the error annotation if an exception occurred |
449 | 518 | if any((_exc_type, _exc_value, _exc_traceback)): |
450 | error_msg = u'{0}: {1}'.format(_exc_type.__name__, _exc_value) | |
451 | self.update_binary_annotations({ | |
452 | ERROR_KEY: error_msg, | |
453 | }) | |
519 | assert _exc_type is not None | |
520 | try: | |
521 | error_msg = f"{_exc_type.__name__}: {_exc_value}" | |
522 | except TypeError: | |
523 | # This can happen when calling __str__ on the exception | |
524 | # itself raises an exception. | |
525 | error_msg = f"{_exc_type.__name__}: {_exc_value!r}" | |
526 | self.update_binary_annotations({ERROR_KEY: error_msg}) | |
454 | 527 | |
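The try/except hunk above exists because `str()` on some exception objects raises (see the 0.20.2 changelog entry: "Don't crash when annotating exceptions that cannot be str()'d"). The fallback to `repr()` can be demonstrated standalone; `format_error` and `Unprintable` are hypothetical names for this sketch, and it catches `Exception` broadly where the library code above catches `TypeError` specifically:

```python
def format_error(exc_type, exc_value):
    # Prefer str() for a readable message, but fall back to repr()
    # if the exception's own __str__ raises.
    try:
        return "{}: {}".format(exc_type.__name__, exc_value)
    except Exception:
        return "{}: {!r}".format(exc_type.__name__, exc_value)


class Unprintable(Exception):
    # Default __repr__ still works even though __str__ is broken.
    def __str__(self):
        raise TypeError("cannot stringify this exception")
```

Calling `format_error(Unprintable, Unprintable("x"))` takes the `repr()` path instead of crashing, which is exactly the behavior the annotated span relies on.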
455 | 528 | # Logging context is only initialized for "root" spans of the local |
456 | 529 | # process (i.e. this zipkin_span not inside of any other local |
457 | 530 | # zipkin_spans) |
458 | 531 | if self.logging_context: |
459 | self.logging_context.stop() | |
460 | self.logging_context = None | |
461 | self._span_storage.set_transport_configured(configured=False) | |
462 | return | |
532 | try: | |
533 | self.logging_context.stop() | |
534 | except Exception as ex: | |
535 | err_msg = f"Error emitting zipkin trace. {repr(ex)}" | |
536 | log.error(err_msg) | |
537 | finally: | |
538 | self.logging_context = None | |
539 | self.get_tracer().clear() | |
540 | self.get_tracer().set_transport_configured(configured=False) | |
541 | return | |
463 | 542 | |
464 | 543 | # If we've gotten here, that means that this span is a child span of |
465 | 544 | # this context's root span (i.e. it's a zipkin_span inside another |
471 | 550 | else: |
472 | 551 | duration = end_timestamp - self.start_timestamp |
473 | 552 | |
474 | self._span_storage.append(SpanBuilder( | |
475 | trace_id=self.zipkin_attrs.trace_id, | |
476 | name=self.span_name, | |
477 | parent_id=self.zipkin_attrs.parent_span_id, | |
478 | span_id=self.zipkin_attrs.span_id, | |
479 | timestamp=self.timestamp if self.timestamp else self.start_timestamp, | |
480 | duration=duration, | |
481 | annotations=self.annotations, | |
482 | tags=self.binary_annotations, | |
483 | kind=self.kind, | |
484 | service_name=self.service_name, | |
485 | sa_endpoint=self.sa_endpoint, | |
486 | )) | |
487 | ||
488 | def update_binary_annotations(self, extra_annotations): | |
489 | """Updates the binary annotations for the current span. | |
490 | ||
491 | If this trace is not being sampled then this is a no-op. | |
492 | """ | |
493 | if not self.zipkin_attrs: | |
494 | return | |
553 | endpoint = create_endpoint(self.port, self.service_name, self.host) | |
554 | assert self.zipkin_attrs is not None | |
555 | self.get_tracer().add_span( | |
556 | Span( | |
557 | trace_id=self.zipkin_attrs.trace_id, | |
558 | name=self.span_name, | |
559 | parent_id=self.zipkin_attrs.parent_span_id, | |
560 | span_id=self.zipkin_attrs.span_id, | |
561 | kind=self.kind, | |
562 | timestamp=self.timestamp if self.timestamp else self.start_timestamp, | |
563 | duration=duration, | |
564 | annotations=self.annotations, | |
565 | local_endpoint=endpoint, | |
566 | remote_endpoint=self.remote_endpoint, | |
567 | tags=self.binary_annotations, | |
568 | ) | |
569 | ) | |
570 | ||
571 | def update_binary_annotations( | |
572 | self, extra_annotations: Dict[str, Optional[str]] | |
573 | ) -> None: | |
574 | """Updates the binary annotations for the current span.""" | |
495 | 575 | if not self.logging_context: |
496 | 576 | # This is not the root span, so binary annotations will be added |
497 | 577 | # to the log handler when this span context exits. |
499 | 579 | else: |
500 | 580 | # Otherwise, we're in the context of the root span, so just update |
501 | 581 | # the binary annotations for the logging context directly. |
502 | self.logging_context.binary_annotations_dict.update(extra_annotations) | |
582 | self.logging_context.tags.update(extra_annotations) | |
583 | ||
584 | def add_annotation(self, value: str, timestamp: Optional[float] = None) -> None: | |
585 | """Add an annotation for the current span | |
586 | ||
587 | The timestamp defaults to "now", but may be specified. | |
588 | ||
589 | :param value: The annotation string | |
590 | :type value: str | |
591 | :param timestamp: Timestamp for the annotation | |
592 | :type timestamp: float | |
593 | """ | |
594 | timestamp = timestamp or time.time() | |
595 | if not self.logging_context: | |
596 | # This is not the root span, so annotations will be added | |
597 | # to the log handler when this span context exits. | |
598 | self.annotations[value] = timestamp | |
599 | else: | |
600 | # Otherwise, we're in the context of the root span, so just update | |
601 | # the annotations for the logging context directly. | |
602 | self.logging_context.annotations[value] = timestamp | |
503 | 603 | |
504 | 604 | def add_sa_binary_annotation( |
505 | 605 | self, |
506 | port=0, | |
507 | service_name='unknown', | |
508 | host='127.0.0.1', | |
509 | ): | |
606 | port: int = 0, | |
607 | service_name: str = "unknown", | |
608 | host: str = "127.0.0.1", | |
609 | ) -> None: | |
510 | 610 | """Adds a 'sa' binary annotation to the current span. |
511 | 611 | |
512 | 612 | 'sa' binary annotations are useful for situations where you need to log |
521 | 621 | :param host: Host address of the destination |
522 | 622 | :type host: str |
523 | 623 | """ |
524 | if not self.zipkin_attrs: | |
525 | return | |
526 | ||
527 | 624 | if self.kind != Kind.CLIENT: |
528 | 625 | # TODO: trying to set a sa binary annotation for a non-client span |
529 | 626 | # should result in a logged error |
530 | 627 | return |
531 | 628 | |
532 | sa_endpoint = create_endpoint( | |
629 | remote_endpoint = create_endpoint( | |
533 | 630 | port=port, |
534 | 631 | service_name=service_name, |
535 | 632 | host=host, |
536 | 633 | ) |
537 | 634 | if not self.logging_context: |
538 | if self.sa_endpoint is not None: | |
539 | raise ValueError('SA annotation already set.') | |
540 | self.sa_endpoint = sa_endpoint | |
635 | if self.remote_endpoint is not None: | |
636 | raise ValueError("SA annotation already set.") | |
637 | self.remote_endpoint = remote_endpoint | |
541 | 638 | else: |
542 | if self.logging_context.sa_endpoint is not None: | |
543 | raise ValueError('SA annotation already set.') | |
544 | self.logging_context.sa_endpoint = sa_endpoint | |
545 | ||
546 | def override_span_name(self, name): | |
639 | if self.logging_context.remote_endpoint is not None: | |
640 | raise ValueError("SA annotation already set.") | |
641 | self.logging_context.remote_endpoint = remote_endpoint | |
642 | ||
643 | def override_span_name(self, name: str) -> None: | |
547 | 644 | """Overrides the current span name. |
548 | 645 | |
549 | 646 | This is useful if you don't know the span name yet when you create the |
559 | 656 | self.logging_context.span_name = name |
560 | 657 | |
561 | 658 | |
562 | def _validate_args(kwargs): | |
563 | if 'kind' in kwargs: | |
659 | def _validate_args(kwargs: Dict[str, Any]) -> None: | |
660 | if "kind" in kwargs: | |
564 | 661 | raise ValueError( |
565 | 662 | '"kind" is not valid in this context. ' |
566 | 'You probably want to use zipkin_span()' | |
663 | "You probably want to use zipkin_span()" | |
567 | 664 | ) |
568 | 665 | |
569 | 666 | |
573 | 670 | Subclass of :class:`zipkin_span` using only annotations relevant to clients |
574 | 671 | """ |
575 | 672 | |
576 | def __init__(self, *args, **kwargs): | |
673 | def __init__(self, *args: Any, **kwargs: Any) -> None: | |
577 | 674 | """Logs a zipkin span with client annotations. |
578 | 675 | |
579 | 676 | See :class:`zipkin_span` for arguments |
580 | 677 | """ |
581 | 678 | _validate_args(kwargs) |
582 | 679 | |
583 | kwargs['kind'] = Kind.CLIENT | |
584 | super(zipkin_client_span, self).__init__(*args, **kwargs) | |
680 | kwargs["kind"] = Kind.CLIENT | |
681 | super().__init__(*args, **kwargs) | |
585 | 682 | |
586 | 683 | |
587 | 684 | class zipkin_server_span(zipkin_span): |
590 | 687 | Subclass of :class:`zipkin_span` using only annotations relevant to servers |
591 | 688 | """ |
592 | 689 | |
593 | def __init__(self, *args, **kwargs): | |
690 | def __init__(self, *args: Any, **kwargs: Any) -> None: | |
594 | 691 | """Logs a zipkin span with server annotations. |
595 | 692 | |
596 | 693 | See :class:`zipkin_span` for arguments |
597 | 694 | """ |
598 | 695 | _validate_args(kwargs) |
599 | 696 | |
600 | kwargs['kind'] = Kind.SERVER | |
601 | super(zipkin_server_span, self).__init__(*args, **kwargs) | |
602 | ||
603 | ||
604 | def create_attrs_for_span( | |
605 | sample_rate=100.0, | |
606 | trace_id=None, | |
607 | span_id=None, | |
608 | use_128bit_trace_id=False, | |
609 | ): | |
610 | """Creates a set of zipkin attributes for a span. | |
611 | ||
612 | :param sample_rate: Float between 0.0 and 100.0 to determine sampling rate | |
613 | :type sample_rate: float | |
614 | :param trace_id: Optional 16-character hex string representing a trace_id. | |
615 | If this is None, a random trace_id will be generated. | |
616 | :type trace_id: str | |
617 | :param span_id: Optional 16-character hex string representing a span_id. | |
618 | If this is None, a random span_id will be generated. | |
619 | :type span_id: str | |
620 | :param use_128bit_trace_id: If true, generate 128-bit trace_ids | |
621 | :type use_128bit_trace_id: boolean | |
622 | """ | |
623 | # Calculate if this trace is sampled based on the sample rate | |
624 | if trace_id is None: | |
625 | if use_128bit_trace_id: | |
626 | trace_id = generate_random_128bit_string() | |
627 | else: | |
628 | trace_id = generate_random_64bit_string() | |
629 | if span_id is None: | |
630 | span_id = generate_random_64bit_string() | |
631 | if sample_rate == 0.0: | |
632 | is_sampled = False | |
633 | else: | |
634 | is_sampled = (random.random() * 100) < sample_rate | |
635 | ||
636 | return ZipkinAttrs( | |
637 | trace_id=trace_id, | |
638 | span_id=span_id, | |
639 | parent_span_id=None, | |
640 | flags='0', | |
641 | is_sampled=is_sampled, | |
642 | ) | |
643 | ||
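The sampling roll at the end of `create_attrs_for_span` above can be illustrated standalone. The helper below is a hypothetical re-implementation using only the standard library; its name and dict return shape are invented for this sketch (py_zipkin actually returns a `ZipkinAttrs` namedtuple):

```python
import random


def make_trace_attrs(sample_rate=100.0, use_128bit_trace_id=False):
    # Random hex ids, matching the 64/128-bit trace-id choice above.
    bits = 128 if use_128bit_trace_id else 64
    trace_id = format(random.getrandbits(bits), "0{}x".format(bits // 4))
    span_id = format(random.getrandbits(64), "016x")
    # A rate of 0.0 never samples; otherwise roll uniformly in [0, 100).
    is_sampled = sample_rate != 0.0 and (random.random() * 100) < sample_rate
    return {"trace_id": trace_id, "span_id": span_id, "is_sampled": is_sampled}
```

The short-circuit on `0.0` mirrors the explicit `if sample_rate == 0.0` branch above: a zero rate must never sample, even though `random.random() * 100 < 0.0` would already be false.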
644 | ||
645 | def create_http_headers_for_new_span(context_stack=None): | |
697 | kwargs["kind"] = Kind.SERVER | |
698 | super().__init__(*args, **kwargs) | |
699 | ||
700 | ||
701 | def create_http_headers_for_new_span( | |
702 | context_stack: Optional[Stack] = None, tracer: Optional[Tracer] = None | |
703 | ) -> Dict[str, Optional[str]]: | |
646 | 704 | """ |
647 | 705 | Generate the headers for a new zipkin span. |
648 | 706 | |
654 | 712 | :returns: dict containing (X-B3-TraceId, X-B3-SpanId, X-B3-ParentSpanId, |
655 | 713 | X-B3-Flags and X-B3-Sampled) keys OR an empty dict. |
656 | 714 | """ |
657 | if context_stack is None: | |
658 | context_stack = ThreadLocalStack() | |
659 | zipkin_attrs = context_stack.get() | |
660 | if not zipkin_attrs: | |
661 | return {} | |
662 | ||
663 | return { | |
664 | 'X-B3-TraceId': zipkin_attrs.trace_id, | |
665 | 'X-B3-SpanId': generate_random_64bit_string(), | |
666 | 'X-B3-ParentSpanId': zipkin_attrs.span_id, | |
667 | 'X-B3-Flags': '0', | |
668 | 'X-B3-Sampled': '1' if zipkin_attrs.is_sampled else '0', | |
669 | } | |
715 | return create_http_headers(context_stack, tracer, True) |
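For illustration, the B3 header dict that `create_http_headers_for_new_span` builds can be sketched without py_zipkin. `b3_headers_for_child` is a hypothetical helper (not part of the library) that produces the same mapping: the caller's trace id is reused, a fresh span id is minted for the child, and the caller's span id becomes the parent:

```python
import random


def b3_headers_for_child(trace_id, parent_span_id, is_sampled):
    # Mirror the header dict shown in the function above.
    return {
        "X-B3-TraceId": trace_id,
        "X-B3-SpanId": format(random.getrandbits(64), "016x"),  # new child id
        "X-B3-ParentSpanId": parent_span_id,
        "X-B3-Flags": "0",
        "X-B3-Sampled": "1" if is_sampled else "0",
    }
```

These headers would then be attached to the outgoing HTTP request so the downstream service can continue the trace.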
0 | 0 | Metadata-Version: 2.1 |
1 | 1 | Name: py-zipkin |
2 | Version: 0.15.0 | |
2 | Version: 1.2.5 | |
3 | 3 | Summary: Library for using Zipkin in Python. |
4 | 4 | Home-page: https://github.com/Yelp/py_zipkin |
5 | 5 | Author: Yelp, Inc. |
6 | 6 | Author-email: opensource+py-zipkin@yelp.com |
7 | License: Copyright Yelp 2018 | |
7 | License: Copyright Yelp 2019 | |
8 | 8 | Description: [![Build Status](https://travis-ci.org/Yelp/py_zipkin.svg?branch=master)](https://travis-ci.org/Yelp/py_zipkin) |
9 | 9 | [![Coverage Status](https://img.shields.io/coveralls/Yelp/py_zipkin.svg)](https://coveralls.io/r/Yelp/py_zipkin) |
10 | 10 | [![PyPi version](https://img.shields.io/pypi/v/py_zipkin.svg)](https://pypi.python.org/pypi/py_zipkin/) |
145 | 145 | pluggable, though. |
146 | 146 | |
147 | 147 | The recommended way to implement a new transport handler is to subclass |
148 | `py_zipkin.transport.BaseTransportHandler` and implement the `send` and | |
148 | `py_zipkin.transport.BaseTransportHandler` and implement the `send` and | |
149 | 149 | `get_max_payload_bytes` methods. |
150 | 150 | |
151 | 151 | `send` receives an already encoded thrift list as argument. |
158 | 158 | |
159 | 159 | > NOTE: older versions of py_zipkin suggested implementing the transport handler |
160 | 160 | > as a function with a single argument. That's still supported and should work |
161 | > with the current py_zipkin version, but it's deprecated. | |
161 | > with the current py_zipkin version, but it's deprecated. | |
162 | 162 | |
163 | 163 | ```python |
164 | 164 | import requests |
202 | 202 | producer.send_messages('kafka_topic_name', message) |
203 | 203 | ``` |
204 | 204 | |
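A minimal transport handler following the subclass interface described above might look like the sketch below. This is a toy that collects encoded payloads in an in-memory list instead of sending them anywhere; in real code you would subclass `py_zipkin.transport.BaseTransportHandler`, which is assumed here but not imported so the sketch stays self-contained:

```python
class ListTransportHandler:
    """Toy transport: stores encoded span payloads in memory."""

    def __init__(self, max_payload_bytes=None):
        self.payloads = []
        self.max_payload_bytes = max_payload_bytes

    def get_max_payload_bytes(self):
        # None means "no limit"; Kafka-style transports return a cap so
        # py_zipkin can split span batches to fit under it.
        return self.max_payload_bytes

    def send(self, payload):
        # `payload` arrives already encoded (thrift or JSON bytes).
        self.payloads.append(payload)
```

A handler shaped like this is also handy in tests, which is roughly what `testing.MockTransportHandler` (added in 0.18.5) provides.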
205 | Using in multithreading evironments | |
206 | ----------------------------------- | |
205 | Using in multithreading environments | |
206 | ------------------------------------ | |
207 | 207 | |
208 | 208 | If you want to use py_zipkin in a cooperative multithreading environment, |
209 | 209 | e.g. asyncio, you need to explicitly pass an instance of `py_zipkin.storage.Stack` |
259 | 259 | |
260 | 260 | Copyright (c) 2018, Yelp, Inc. All Rights reserved. Apache v2 |
261 | 261 | |
262 | 1.0.0 (2022-06-09) | |
263 | ------------------- | |
264 | - Drop Python 2.7 support (minimum supported Python version is 3.5) | |
265 | - Recompile protobuf using version 3.19 | |
266 | ||
267 | 0.21.0 (2021-03-17) | |
268 | ------------------- | |
269 | - The default encoding is now V2 JSON. If you want to keep the old | |
270 | V1 thrift encoding you'll need to specify it. | |
271 | ||
272 | 0.20.2 (2021-03-11) | |
273 | ------------------- | |
274 | - Don't crash when annotating exceptions that cannot be str()'d | |
275 | ||
276 | 0.20.1 (2020-10-27) | |
277 | ------------------- | |
278 | - Support PRODUCER and CONSUMER spans | |
279 | ||
280 | 0.20.0 (2020-03-09) | |
281 | ------------------- | |
282 | - Add create_http_headers helper | |
283 | ||
284 | 0.19.0 (2020-02-28) | |
285 | ------------------- | |
286 | - Add zipkin_span.add_annotation() method | |
287 | - Add autoinstrumentation for python Threads | |
288 | - Allow creating a copy of Tracer | |
289 | - Add extract_zipkin_attrs_from_headers() helper | |
290 | ||
291 | 0.18.7 (2020-01-15) | |
292 | ------------------- | |
293 | - Expose encoding.create_endpoint helper | |
294 | ||
295 | 0.18.6 (2019-09-23) | |
296 | ------------------- | |
297 | - Ensure tags are strings when using V2_JSON encoding | |
298 | ||
299 | 0.18.5 (2019-08-08) | |
300 | ------------------- | |
301 | - Add testing.MockTransportHandler module | |
302 | ||
303 | 0.18.4 (2019-08-02) | |
304 | ------------------- | |
305 | - Fix thriftpy2 import to allow cython module | |
306 | ||
307 | 0.18.3 (2019-05-15) | |
308 | ------------------- | |
309 | - Fix unicode bug when decoding thrift tag strings | |
310 | ||
311 | 0.18.2 (2019-03-26) | |
312 | ------------------- | |
313 | - Handle exceptions while emitting a trace and log the error | |
314 | - Ensure tracer is cleared regardless of span emit outcome | |
315 | ||
316 | 0.18.1 (2019-02-22) | |
317 | ------------------- | |
318 | - Fix ThreadLocalStack() bug introduced in 0.18.0 | |
319 | ||
320 | 0.18.0 (2019-02-13) | |
321 | ------------------- | |
322 | - Fix multithreading issues | |
323 | - Added Tracer module | |
324 | ||
325 | 0.17.1 (2019-02-05) | |
326 | ------------------- | |
327 | - Ignore transport_handler overrides in an inner span since that causes | |
328 | spans to be dropped. | |
329 | ||
330 | 0.17.0 (2019-01-25) | |
331 | ------------------- | |
332 | - Support python 3.7 | |
333 | - py-zipkin now depends on thriftpy2 rather than thriftpy. They | |
334 | can coexist in the same codebase, so it should be safe to upgrade. | |
335 | ||
336 | 0.16.1 (2018-11-16) | |
337 | ------------------- | |
338 | - Handle null timestamps when decoding thrift traces | |
339 | ||
340 | 0.16.0 (2018-11-13) | |
341 | ------------------- | |
342 | - py_zipkin is now able to convert V1 thrift spans to V2 JSON | |
343 | ||
344 | 0.15.1 (2018-10-31) | |
345 | ------------------- | |
346 | - Changed DeprecationWarnings to logging.warning | |
347 | ||
262 | 348 | 0.15.0 (2018-10-22) |
263 | 349 | ------------------- |
264 | 350 | - Added support for V2 JSON encoding. |
 Classifier: Topic :: Software Development :: Libraries :: Python Modules
 Classifier: License :: OSI Approved :: Apache Software License
 Classifier: Operating System :: OS Independent
-Classifier: Programming Language :: Python :: 2.7
-Classifier: Programming Language :: Python :: 3.4
-Classifier: Programming Language :: Python :: 3.5
 Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
 Provides: py_zipkin
+Requires-Python: >=3.6
 Description-Content-Type: text/markdown
+Provides-Extra: protobuf
 CHANGELOG.rst
 MANIFEST.in
 README.md
-setup.cfg
 setup.py
 py_zipkin/__init__.py
 py_zipkin/exception.py
 py_zipkin/logging_helper.py
+py_zipkin/py.typed
+py_zipkin/request_helpers.py
 py_zipkin/storage.py
 py_zipkin/thread_local.py
 py_zipkin/transport.py
 py_zipkin.egg-info/requires.txt
 py_zipkin.egg-info/top_level.txt
 py_zipkin/encoding/__init__.py
+py_zipkin/encoding/_decoders.py
 py_zipkin/encoding/_encoders.py
 py_zipkin/encoding/_helpers.py
 py_zipkin/encoding/_types.py
+py_zipkin/encoding/protobuf/__init__.py
+py_zipkin/encoding/protobuf/zipkin_pb2.py
+py_zipkin/encoding/protobuf/zipkin_pb2.pyi
+py_zipkin/instrumentations/__init__.py
+py_zipkin/instrumentations/python_threads.py
+py_zipkin/testing/__init__.py
+py_zipkin/testing/mock_transport.py
 py_zipkin/thrift/__init__.py
+py_zipkin/thrift/zipkinCore.pyi
 py_zipkin/thrift/zipkinCore.thrift
-six
-thriftpy
+thriftpy2<0.4.14,>=0.4.0
+typing-extensions>=3.10.0.0
 
-[:python_version=="2.7"]
-enum34
+[protobuf]
+protobuf>=3.12.4
-[wheel]
-universal = True
-
-[pep8]
-ignore = E265,E309,E501
-
 [egg_info]
 tag_build =
 tag_date = 0
 #!/usr/bin/python
-# -*- coding: utf-8 -*-
 import os
 
 from setuptools import find_packages
 from setuptools import setup
 
-__version__ = '0.15.0'
+__version__ = '1.2.5'
 
 
 def read(f):
     provides=["py_zipkin"],
     author='Yelp, Inc.',
     author_email='opensource+py-zipkin@yelp.com',
-    license='Copyright Yelp 2018',
+    license='Copyright Yelp 2019',
     url="https://github.com/Yelp/py_zipkin",
     description='Library for using Zipkin in Python.',
     long_description='\n\n'.join((read('README.md'), read('CHANGELOG.rst'))),
     long_description_content_type="text/markdown",
     packages=find_packages(exclude=('tests*', 'testing*', 'tools*')),
-    package_data={'': ['*.thrift']},
+    package_data={
+        '': ['*.thrift'],
+        'py_zipkin': ['py.typed'],
+        'py_zipkin.thrift': ['*.pyi'],
+        'py_zipkin.encoding.protobuf': ['*.pyi'],
+    },
+    python_requires='>=3.6',
     install_requires=[
-        'six',
-        'thriftpy',
+        'thriftpy2>=0.4.0,<0.4.14',
+        'typing-extensions>=3.10.0.0',
     ],
-    extras_require={':python_version=="2.7"': ['enum34']},
+    extras_require={
+        'protobuf': 'protobuf >= 3.12.4',
+    },
     classifiers=[
         "Development Status :: 3 - Alpha",
         "Intended Audience :: Developers",
         "Topic :: Software Development :: Libraries :: Python Modules",
         "License :: OSI Approved :: Apache Software License",
         "Operating System :: OS Independent",
-        "Programming Language :: Python :: 2.7",
-        "Programming Language :: Python :: 3.4",
-        "Programming Language :: Python :: 3.5",
         "Programming Language :: Python :: 3.6",
+        "Programming Language :: Python :: 3.7",
+        "Programming Language :: Python :: 3.8",
     ],
 )
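The diff above replaces the old Python 2.7 `extras_require` entry with an optional `[protobuf]` extra. A common way for a library to consume such an extra is to guard the optional import at module load time. The sketch below shows that general idiom only; it is not py_zipkin's actual source, and the function name `encode_protobuf` is invented for illustration:

```python
# Illustrative sketch of the optional-dependency pattern that an
# extras_require entry like {'protobuf': 'protobuf >= 3.12.4'} supports.
# This is NOT py_zipkin's actual code; it only shows the guard idiom.
try:
    import google.protobuf  # present only after `pip install 'py_zipkin[protobuf]'`
    PROTOBUF_AVAILABLE = True
except ImportError:
    PROTOBUF_AVAILABLE = False


def encode_protobuf(spans):
    """Raise a clear error when the optional extra is not installed."""
    if not PROTOBUF_AVAILABLE:
        raise NotImplementedError(
            "protobuf encoding requires the [protobuf] extra: "
            "pip install 'py_zipkin[protobuf]'"
        )
    # ... real encoding would happen here ...
    return b""

print(PROTOBUF_AVAILABLE)
```

Declaring the dependency as an extra keeps the core install light while still pinning a minimum protobuf version for users who opt in.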