New upstream version 0.8.0
Robert Edmonds
5 years ago
3 | 3 | |
4 | 4 | Copyright 2014-2016 VeriSign, Inc. |
5 | 5 | |
6 | Copyright 2016-2017 Casey Deccio. | |
6 | Copyright 2016-2019 Casey Deccio. | |
7 | 7 | |
8 | 8 | DNSViz is free software; you can redistribute it and/or modify |
9 | 9 | it under the terms of the GNU General Public License as published by |
0 | 0 | include COPYRIGHT LICENSE |
1 | include requirements.txt | |
1 | 2 | include dnsviz/config.py.in |
3 | exclude dnsviz/config.py | |
2 | 4 | include doc/COPYRIGHT |
3 | 5 | include doc/Makefile |
4 | 6 | include doc/src/*dot |
0 | 0 | Metadata-Version: 1.1 |
1 | 1 | Name: dnsviz |
2 | Version: 0.6.6 | |
2 | Version: 0.8.0 | |
3 | 3 | Summary: DNS analysis and visualization tool suite |
4 | 4 | Home-page: https://github.com/dnsviz/dnsviz/ |
5 | 5 | Author: Casey Deccio |
18 | 18 | Classifier: Natural Language :: English |
19 | 19 | Classifier: Operating System :: MacOS :: MacOS X |
20 | 20 | Classifier: Operating System :: POSIX |
21 | Classifier: Programming Language :: Python :: 2.6 | |
22 | 21 | Classifier: Programming Language :: Python :: 2.7 |
23 | 22 | Classifier: Programming Language :: Python :: 3 |
24 | 23 | Classifier: Topic :: Internet :: Name Service (DNS) |
27 | 26 | Requires: pygraphviz (>=1.1) |
28 | 27 | Requires: m2crypto (>=0.24.0) |
29 | 28 | Requires: dnspython (>=1.11) |
29 | Requires: libnacl |
8 | 8 | |
9 | 9 | ## Installation |
10 | 10 | |
11 | DNSViz packages are available in repositories for popular operating systems, | |
12 | such as Debian, Ubuntu, and FreeBSD. DNSViz can also be installed on Mac OS X | |
13 | via Homebrew or MacPorts. | |
14 | ||
15 | The remainder of this section covers other methods of installation, including a | |
16 | list of [dependencies](#dependencies), installation to a | |
17 | [virtual environment](#installation-in-a-virtual-environment), and installation | |
18 | on [Fedora](#fedora-rpm-build-and-install) and | |
19 | [RHEL6 or RHEL7](#rhel6rhel7-rpm-build-and-install). | |
20 | ||
21 | Instructions for running in a Docker container are also available | |
22 | [later in this document](#docker-container). | |
23 | ||
11 | 24 | |
12 | 25 | ### Dependencies |
13 | 26 | |
14 | * python (2.6/2.7/3.4) - http://www.python.org/ | |
15 | ||
16 | python 2.6, 2.7, or 3.4 is required. For python 3.4 the other third-party | |
17 | dependencies must also support python 3.4. Note that for python 2.6 the | |
18 | importlib (https://pypi.python.org/pypi/importlib) and ordereddict | |
19 | (https://pypi.python.org/pypi/ordereddict) packages are also required. | |
20 | ||
21 | * dnspython (1.11.0 or later) - http://www.dnspython.org/ | |
22 | ||
23 | dnspython is required. Version 1.10.0 is sufficient if you're not issuing | |
24 | TLSA queries, but more generally version 1.11.0 or greater is required. | |
25 | ||
26 | * pygraphviz (1.1 or later) - http://pygraphviz.github.io/ | |
27 | ||
28 | pygraphviz is required for most functionality. `dnsviz probe` and `dnsviz grok` | |
29 | (without the -t option) can be used without pygraphviz installed. Version 1.1 | |
30 | or greater is required because of the support for unicode names and HTML-like | |
31 | labels, both of which are utilized in the visual output. | |
32 | ||
33 | * M2Crypto (0.24.0 or later) - https://gitlab.com/m2crypto/m2crypto | |
34 | ||
35 | M2Crypto is required if cryptographic validation of signatures and digests is | |
36 | desired (and thus is highly recommended). The current code will display | |
37 | warnings if the cryptographic elements cannot be verified. | |
38 | ||
39 | Note that M2Crypto version 0.21.1 or later can be used to validate some | |
40 | DNSSEC algorithms, but support for the following DNSSEC algorithms is not | |
41 | available in releases of M2Crypto prior to 0.24.0 without a patch: | |
42 | 3 (DSA-SHA1), 6 (DSA-NSEC3-SHA1), 12 (GOST R 34.10-2001), | |
43 | 13 (ECDSA Curve P-256 with SHA-256), 14 (ECDSA Curve P-384 with SHA-384). | |
44 | There are two patches included in the `contrib` directory that can be | |
45 | applied to pre-0.24.0 versions to get this functionality: | |
46 | `contrib/m2crypto-pre0.23.patch` or `contrib/m2crypto-0.23.patch`. For | |
47 | example: | |
48 | ||
49 | ``` | |
50 | $ patch -p1 < /path/to/dnsviz-source/contrib/m2crypto-pre0.23.patch | |
51 | ``` | |
52 | ||
53 | * (optional) ISC BIND - https://www.isc.org/downloads/bind/ | |
54 | ||
55 | When calling `dnsviz probe` if the `-N` option is used or if a zone file is | |
56 | used in conjunction with the `-x` option, `named(8)` is looked for in PATH | |
57 | and invoked to serve the zone file. ISC BIND is only needed in this specific | |
58 | case, and `named(8)` does not need to be running. | |
59 | ||
60 | ||
61 | ### Generic Build and Install | |
62 | ||
63 | A generic build and install is performed with the following commands: | |
64 | ||
65 | ``` | |
66 | $ python setup.py build | |
67 | $ sudo python setup.py install | |
68 | ``` | |
69 | ||
70 | To see all installation options, run the following: | |
71 | ||
72 | ``` | |
73 | $ python setup.py --help | |
74 | ``` | |
75 | ||
76 | ||
77 | ### RPM Build and Install (RHEL6 or RHEL7) | |
27 | * python (2.7/3.4/3.5/3.6) - http://www.python.org/ | |
28 | ||
29 | * dnspython (1.13.0 or later) - http://www.dnspython.org/ | |
30 | ||
31 | * pygraphviz (1.4 or later) - http://pygraphviz.github.io/ | |
32 | ||
33 | * M2Crypto (0.28.0 or later) - https://gitlab.com/m2crypto/m2crypto | |
34 | ||
35 | * libnacl - https://github.com/saltstack/libnacl | |
36 | ||
37 | Note that the software versions listed above are known to work with the current | |
38 | version of DNSViz. Other versions might also work well together, but might | |
39 | have some caveats. For example, while the current version of DNSViz works with | |
40 | python 2.6, the importlib (https://pypi.python.org/pypi/importlib) and | |
41 | ordereddict (https://pypi.python.org/pypi/ordereddict) packages are | |
42 | additionally required. Also for python 2.6, pygraphviz version 1.1 or 1.2 is | |
43 | required (pygraphviz version 1.3 dropped support for python 2.6). | |
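
The version floors listed above can be captured in a pip requirements file. The
pins below are only a sketch assembled from that list; the `requirements.txt`
shipped with DNSViz is authoritative:
```
dnspython>=1.13.0
pygraphviz>=1.4
m2crypto>=0.28.0
libnacl
```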
44 | ||
45 | ||
46 | ### Optional Software | |
47 | ||
48 | * OpenSSL GOST Engine - https://github.com/gost-engine/engine | |
49 | ||
50 | With OpenSSL version 1.1.0 and later, the OpenSSL GOST Engine is necessary to | |
51 | validate DNSSEC signatures with algorithm 12 (GOST R 34.10-2001) and create | |
52 | digests of type 3 (GOST R 34.11-94). | |
53 | ||
54 | * ISC BIND - https://www.isc.org/downloads/bind/ | |
55 | ||
56 | When using DNSViz for [pre-deployment testing](#pre-deployment-dns-testing) | |
57 | by specifying zone files and/or alternate delegation information on the | |
58 | command line (i.e., with `-N`, `-x`, or `-D`), `named(8)` is invoked to serve | |
59 | one or more zones. ISC BIND is only needed in this case, and `named(8)` does | |
60 | not need to be running (i.e., as a server). | |
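
A quick way to confirm that `named(8)` is locatable before running such an
analysis (a hypothetical helper, not part of DNSViz; POSIX shell assumed):
```
# Check whether named(8) is in PATH, as required when dnsviz probe
# must serve zone files itself (-N, -x, or -D).
have_named() { command -v named >/dev/null 2>&1; }
if have_named; then
    echo "named found: $(command -v named)"
else
    echo "named not in PATH; install ISC BIND for pre-deployment testing"
fi
```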
61 | ||
62 | Note that default AppArmor policies for Debian are known to cause issues when | |
63 | invoking `named(8)` from DNSViz for pre-deployment testing. Two solutions to | |
64 | this problem are to either: 1) create a local policy for AppArmor that allows | |
65 | `named(8)` to run with fewer restrictions; or 2) disable AppArmor completely. | |
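
For example, on a stock Debian layout the `named` profile alone can be
disabled, rather than AppArmor as a whole (the profile path is an assumption;
adjust for your release):
```
$ sudo ln -s /etc/apparmor.d/usr.sbin.named /etc/apparmor.d/disable/
$ sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.named
```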
66 | ||
67 | ||
68 | ### Installation in a Virtual Environment | |
69 | ||
70 | To install DNSViz to a virtual environment, first create and activate a virtual | |
71 | environment, and install the dependencies: | |
72 | ``` | |
73 | $ virtualenv ~/myenv | |
74 | $ source ~/myenv/bin/activate | |
75 | (myenv) $ pip install -r requirements.txt | |
76 | ``` | |
77 | (Note that this installs the dependencies that are python packages, but some of | |
78 | these packages have non-python dependencies, such as Graphviz (required for | |
79 | pygraphviz) and libsodium (required for libnacl), that are not installed | |
80 | automatically.) | |
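
On Debian or Ubuntu, for example, those non-python pieces can be installed
ahead of time (package names are assumptions and vary by release):
```
$ sudo apt-get install graphviz libgraphviz-dev libsodium-dev
```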
81 | ||
82 | Next download and install DNSViz from the Python Package Index (PyPI): | |
83 | ``` | |
84 | (myenv) $ pip install dnsviz | |
85 | ``` | |
86 | or locally, from a downloaded copy of DNSViz: | |
87 | ``` | |
88 | (myenv) $ pip install . | |
89 | ``` | |
90 | ||
91 | ||
92 | ### Fedora RPM Build and Install | |
93 | ||
94 | A Fedora RPM can be built for either python2 or python3. However, note that | |
95 | with Fedora releases after 29, python2 packages are being removed, so python3 | |
96 | is preferred. | |
97 | ||
98 | The value of ${PY_VERS} is either 2 or 3, corresponding to python2 or python3. | |
99 | ||
100 | Install the tools for building an RPM, and set up the rpmbuild tree. | |
101 | ``` | |
102 | $ sudo dnf install rpm-build rpmdevtools python${PY_VERS}-devel | |
103 | $ rpmdev-setuptree | |
104 | ``` | |
105 | ||
106 | From within the DNSViz source directory, create a source distribution tarball | |
107 | and copy it and the DNSViz spec file to the appropriate rpmbuild | |
108 | subdirectories. | |
109 | ``` | |
110 | $ python setup.py sdist | |
111 | $ cp dist/dnsviz-*.tar.gz ~/rpmbuild/SOURCES/ | |
112 | $ cp contrib/dnsviz-py${PY_VERS}.spec ~/rpmbuild/SPECS/dnsviz.spec | |
113 | ``` | |
114 | ||
115 | Install dnspython, pygraphviz, M2Crypto, and libnacl. | |
116 | ``` | |
117 | $ sudo dnf install python${PY_VERS}-dns python${PY_VERS}-pygraphviz python${PY_VERS}-libnacl | |
118 | ``` | |
119 | For python2: | |
120 | ``` | |
121 | $ sudo dnf install m2crypto | |
122 | ``` | |
123 | For python3: | |
124 | ``` | |
125 | $ sudo dnf install python3-m2crypto | |
126 | ``` | |
127 | ||
128 | Build and install the DNSViz RPM. | |
129 | ``` | |
130 | $ rpmbuild -ba rpmbuild/SPECS/dnsviz.spec | |
131 | $ sudo rpm -iv rpmbuild/RPMS/noarch/dnsviz-*-1.*.noarch.rpm | |
132 | ``` | |
133 | ||
134 | ||
135 | ### RHEL6/RHEL7 RPM Build and Install | |
78 | 136 | |
79 | 137 | Install pygraphviz and M2Crypto, after installing their build dependencies. |
80 | 138 | ``` |
81 | 139 | $ sudo yum install python-setuptools gcc python-devel graphviz-devel openssl-devel |
82 | 140 | $ sudo easy_install pbr |
83 | $ sudo easy_install m2crypto pygraphviz | |
141 | $ sudo easy_install m2crypto pygraphviz==1.2 | |
84 | 142 | ``` |
85 | 143 | |
86 | 144 | (RHEL6 only) Install the EPEL repository, and the necessary python libraries |
457 | 515 | -D example.com:dsset-example.com. \ |
458 | 516 | example.com |
459 | 517 | ``` |
518 | ||
519 | ||
520 | ## Docker Container | |
521 | ||
522 | A ready-to-use Docker container is available. | |
523 | ||
524 | ``` | |
525 | docker pull dnsviz/dnsviz | |
526 | ``` | |
527 | ||
528 | This section only covers Docker-related examples; for more information, see the | |
529 | [Usage](#usage) section. | |
530 | ||
531 | ||
532 | ### Simple Usage | |
533 | ||
534 | ``` | |
535 | $ docker run dnsviz/dnsviz help | |
536 | $ docker run dnsviz/dnsviz query example.com | |
537 | ``` | |
538 | ||
539 | ||
540 | ### Working with Files | |
541 | ||
542 | It might be useful to mount a local working directory into the container, | |
543 | especially when combining multiple commands or working with zone files. | |
544 | ||
545 | ``` | |
546 | $ docker run -v "$PWD:/data:rw" dnsviz/dnsviz probe dnsviz.net > probe.json | |
547 | $ docker run -v "$PWD:/data:rw" dnsviz/dnsviz graph -r probe.json -T png -O | |
548 | ``` | |
549 | ||
550 | ||
551 | ### Using a Host Network | |
552 | ||
553 | When running authoritative queries, a host network is recommended. | |
554 | ||
555 | ``` | |
556 | $ docker run --network host dnsviz/dnsviz probe -4 -A example.com > example.json | |
557 | ``` | |
558 | ||
559 | Otherwise, you're likely to encounter the following error: | |
560 | `dnsviz.query.SourceAddressBindError: Unable to bind to local address (EADDRNOTAVAIL)` | |
561 | ||
562 | ||
563 | ### Interactive Mode | |
564 | ||
565 | When performing complex analyses that combine multiple DNSViz commands, use | |
566 | bash redirection, etc., it might be useful to run the container | |
567 | interactively: | |
568 | ||
569 | ``` | |
570 | $ docker run --network host -v "$PWD:/data:rw" --entrypoint /bin/sh -ti dnsviz/dnsviz | |
571 | /data # dnsviz --help | |
572 | ``` |
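
For instance, from the interactive shell the commands can be piped together
directly; this sketch writes the resulting graph into the mounted `/data`
directory:
```
/data # dnsviz probe example.com | dnsviz graph -T png -O
```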
4 | 4 | # Created by Casey Deccio (casey@deccio.net) |
5 | 5 | # |
6 | 6 | # Copyright 2015-2016 VeriSign, Inc. |
7 | # | |
8 | # Copyright 2016-2019 Casey Deccio | |
7 | 9 | # |
8 | 10 | # DNSViz is free software; you can redistribute it and/or modify |
9 | 11 | # it under the terms of the GNU General Public License as published by |
21 | 23 | |
22 | 24 | from __future__ import unicode_literals |
23 | 25 | |
26 | import getopt | |
24 | 27 | import importlib |
25 | 28 | import sys |
26 | 29 | |
37 | 40 | err += '\n\n' |
38 | 41 | else: |
39 | 42 | err = '' |
40 | sys.stderr.write('''%sUsage: dnsviz <command> [args] | |
43 | sys.stderr.write('''%sUsage: dnsviz [options] <command> [args] | |
44 | Options: | |
45 | -p <path> - Add path to the python path. | |
41 | 46 | Commands: |
42 | probe - issue diagnostic DNS queries | |
43 | grok - assess diagnostic DNS queries | |
44 | graph - graph the assessment of diagnostic DNS queries | |
45 | print - process diagnostic DNS queries to textual output | |
46 | query - assess a DNS query | |
47 | probe - Issue diagnostic DNS queries. | |
48 | grok - Assess diagnostic DNS queries. | |
49 | graph - Graph the assessment of diagnostic DNS queries. | |
50 | print - Process diagnostic DNS queries to textual output. | |
51 | query - Assess a DNS query. | |
47 | 52 | help [<command>] |
48 | - show usage for a command | |
53 | - Show usage for a command. | |
49 | 54 | ''' % (err)) |
50 | 55 | |
51 | 56 | def main(): |
52 | 57 | check_deps() |
53 | 58 | |
54 | if len(sys.argv) < 2: | |
59 | try: | |
60 | opts, args = getopt.getopt(sys.argv[1:], 'p:') | |
61 | except getopt.GetoptError as e: | |
62 | sys.stderr.write('%s\n' % str(e)) | |
63 | sys.exit(1) | |
64 | ||
65 | opts = dict(opts) | |
66 | ||
67 | if len(args) < 1: | |
55 | 68 | usage() |
56 | 69 | sys.exit(0) |
57 | 70 | |
58 | if sys.argv[1] == 'help': | |
59 | if len(sys.argv) < 3: | |
71 | if args[0] == 'help': | |
72 | if len(args) < 2: | |
60 | 73 | usage() |
61 | 74 | sys.exit(0) |
62 | 75 | |
63 | command = sys.argv[2] | |
76 | command = args[1] | |
64 | 77 | else: |
65 | command = sys.argv[1] | |
78 | command = args[0] | |
79 | ||
80 | if '-p' in opts: | |
81 | sys.path.insert(0, opts['-p']) | |
66 | 82 | |
67 | 83 | # first try importing just the commands module to make sure |
68 | 84 | # dnsviz is properly reachable with the current path |
79 | 95 | if exc_frame.tb_next.tb_next is not None: |
80 | 96 | raise |
81 | 97 | |
82 | usage('Invalid command: %s' % command) | |
98 | sys.stderr.write('Invalid command: %s\n' % command) | |
83 | 99 | sys.exit(1) |
84 | 100 | |
85 | if sys.argv[1] == 'help': | |
101 | if args[0] == 'help': | |
86 | 102 | mod.usage() |
87 | 103 | else: |
88 | mod.main(sys.argv[1:]) | |
104 | mod.main(args) | |
89 | 105 | |
90 | 106 | if __name__ == "__main__": |
91 | 107 | main() |
5 | 5 | # |
6 | 6 | # Copyright 2014-2016 VeriSign, Inc. |
7 | 7 | # |
8 | # Copyright 2016-2017 Casey Deccio. | |
8 | # Copyright 2016-2019 Casey Deccio | |
9 | 9 | # |
10 | 10 | # DNSViz is free software; you can redistribute it and/or modify |
11 | 11 | # it under the terms of the GNU General Public License as published by |
27 | 27 | import errno |
28 | 28 | import socket |
29 | 29 | import sys |
30 | ||
31 | # python3/python2 dual compatibility | |
32 | try: | |
33 | import urllib.parse | |
34 | except ImportError: | |
35 | import urlparse | |
36 | else: | |
37 | urlparse = urllib.parse | |
30 | 38 | |
31 | 39 | import dns.flags, dns.exception, dns.name, dns.opcode, dns.rdataclass, dns.rdatatype |
32 | 40 | |
91 | 99 | self.trusted_keys = () |
92 | 100 | self.show_ttl = True |
93 | 101 | self.lg_url = None |
102 | self.lg_factory = None | |
94 | 103 | |
95 | 104 | def process_query_options(self, global_options): |
96 | 105 | for arg in global_options + self.query_options: |
192 | 201 | elif arg == '+nomultiline': |
193 | 202 | self.multiline = False |
194 | 203 | #TODO +ndots=D |
195 | #TODO +[no]nsid | |
204 | elif arg == '+nsid': | |
205 | if self.edns < 0: | |
206 | self.edns = 0 | |
207 | if not any(x.otype == dns.edns.NSID for x in self.edns_options): | |
208 | self.edns_options.append(dns.edns.GenericOption(dns.edns.NSID, b'')) | |
209 | elif arg == '+nonsid': | |
210 | self.edns_options = [x for x in self.edns_options if x.otype != dns.edns.NSID] | |
196 | 212 | #TODO +[no]nssearch |
197 | 213 | #TODO +[no]onesoa |
198 | 214 | #TODO +[no]qr |
302 | 318 | |
303 | 319 | self.nameservers = nameservers + processed_nameservers |
304 | 320 | |
321 | def process_looking_glass(self, looking_glass_cache): | |
322 | if self.lg_url is None: | |
323 | return | |
324 | ||
325 | if self.lg_url not in looking_glass_cache: | |
326 | # check that version is >= 2.7.9 if HTTPS is requested | |
327 | if self.lg_url.startswith('https'): | |
328 | vers0, vers1, vers2 = sys.version_info[:3] | |
329 | if (2, 7, 9) > (vers0, vers1, vers2): | |
330 | sys.stderr.write('python version >= 2.7.9 is required to use a DNS looking glass with HTTPS.\n') | |
331 | sys.exit(1) | |
332 | ||
333 | url = urlparse.urlparse(self.lg_url) | |
334 | if url.scheme in ('http', 'https'): | |
335 | fact = transport.DNSQueryTransportHandlerHTTPFactory(self.lg_url, insecure=options['insecure']) | |
336 | elif url.scheme == 'ws': | |
337 | if url.hostname is not None: | |
338 | usage('WebSocket URL must designate a local UNIX domain socket.') | |
339 | sys.exit(1) | |
340 | fact = transport.DNSQueryTransportHandlerWebSocketServerFactory(url.path) | |
341 | elif url.scheme == 'ssh': | |
342 | fact = transport.DNSQueryTransportHandlerRemoteCmdFactory(self.lg_url) | |
343 | else: | |
344 | usage('Unsupported URL scheme: "%s"' % self.lg_url) | |
345 | sys.exit(1) | |
346 | looking_glass_cache[self.lg_url] = fact | |
347 | self.lg_factory = looking_glass_cache[self.lg_url] | |
348 | ||
305 | 349 | def _get_resolver(self, options): |
306 | 350 | class CustomQuery(Q.DNSQueryFactory): |
307 | 351 | flags = self.flags |
312 | 356 | tcp = self.tcp |
313 | 357 | response_handlers = self.handlers |
314 | 358 | |
315 | if self.lg_url is not None: | |
316 | th_factories = (transport.DNSQueryTransportHandlerHTTPFactory(self.lg_url),) | |
359 | if self.lg_factory is not None: | |
360 | th_factories = (self.lg_factory,) | |
317 | 361 | else: |
318 | 362 | th_factories = None |
319 | 363 | |
390 | 434 | if response.message.edns >= 0: |
391 | 435 | s += ';; OPT PSEUDOSECTION:\n' |
392 | 436 | s += '; EDNS: version: %d, flags: %s; udp: %d\n' % (response.message.edns, dns.flags.edns_to_text(response.message.ednsflags).lower(), response.message.payload) |
437 | ||
438 | for opt in response.message.options: | |
439 | chars = [] | |
440 | if opt.otype == dns.edns.NSID: | |
441 | s += '; NSID:' | |
442 | for b in opt.data: | |
443 | s += ' %02x' % b | |
444 | chars.append(chr(b)) | |
445 | for c in chars: | |
446 | s += ' (%s)' % c | |
447 | s += '\n' | |
393 | 448 | |
394 | 449 | if response.message.question and self.show_question: |
395 | 450 | if self.show_comments: |
425 | 480 | return s |
426 | 481 | |
427 | 482 | elif response.error in (Q.RESPONSE_ERROR_TIMEOUT, Q.RESPONSE_ERROR_NETWORK_ERROR): |
428 | return ';; connection timed out; no servers could be reached' | |
429 | ||
430 | else: | |
431 | return ';; the response from %s was malformed' % server | |
483 | return ';; connection timed out; no servers could be reached\n' | |
484 | ||
485 | else: | |
486 | return ';; the response from %s was malformed\n' % server | |
432 | 487 | |
433 | 488 | def query_and_display(self, options, filehandle): |
434 | 489 | try: |
452 | 507 | 'use_ipv6': None, |
453 | 508 | 'client_ipv4': None, |
454 | 509 | 'client_ipv6': None, |
510 | 'insecure': None, | |
455 | 511 | 'port': 53, |
456 | 512 | } |
457 | 513 | |
467 | 523 | if not self.queries: |
468 | 524 | self.queries.append(DigCommandLineQuery('.', dns.rdatatype.NS, dns.rdataclass.IN)) |
469 | 525 | |
526 | looking_glass_cache = {} | |
470 | 527 | for q in self.queries: |
471 | 528 | q.process_nameservers(self.nameservers, self.options['use_ipv4'], self.options['use_ipv6']) |
472 | 529 | q.process_query_options(self.global_query_options) |
530 | q.process_looking_glass(looking_glass_cache) | |
473 | 531 | |
474 | 532 | if not q.nameservers: |
475 | 533 | raise SemanticException('No nameservers to query') |
625 | 683 | elif self.args[self.arg_index].startswith('-4'): |
626 | 684 | self._get_arg(False) |
627 | 685 | self.options['use_ipv4'] = True |
686 | elif self.args[self.arg_index].startswith('-k'): | |
687 | self._get_arg(False) | |
688 | self.options['insecure'] = True | |
628 | 689 | else: |
629 | 690 | raise CommandLineException('Option "%s" not recognized.' % self.args[self.arg_index][:2]) |
630 | 691 |
0 | Name: dnsviz | |
1 | Version: 0.8.0 | |
2 | Release: 1%{?dist} | |
3 | Summary: Tools for analyzing and visualizing DNS and DNSSEC behavior | |
4 | ||
5 | License: GPLv2+ | |
6 | URL: https://github.com/dnsviz/dnsviz | |
7 | Source0: https://github.com/dnsviz/dnsviz/releases/download/v%{version}/%{name}-%{version}.tar.gz | |
8 | ||
9 | BuildArch: noarch | |
10 | BuildRequires: python2-devel | |
11 | BuildRequires: graphviz | |
12 | BuildRequires: make | |
13 | # python2-pygraphviz should be >= 1.4 | |
14 | Requires: python2-pygraphviz >= 1.3 | |
15 | Requires: m2crypto >= 0.28.0 | |
16 | Requires: python2-dns >= 1.13 | |
17 | Requires: python2-libnacl | |
18 | ||
19 | %description | |
20 | DNSViz is a tool suite for analysis and visualization of Domain Name System | |
21 | (DNS) behavior, including its security extensions (DNSSEC). This tool suite | |
22 | powers the Web-based analysis available at http://dnsviz.net/ | |
23 | ||
24 | %prep | |
25 | %autosetup | |
26 | ||
27 | %build | |
28 | %py2_build | |
29 | ||
30 | %install | |
31 | #XXX Normally the py2_install macro would be used here, | |
32 | # but dnsviz/config.py is built with the install command, | |
33 | # so install MUST call the build subcommand, so config.py | |
34 | # will be properly placed. With py2_install, the | |
35 | # --skip-build argument is used. | |
36 | %{__python2} %{py_setup} %{?py_setup_args} install -O1 --root %{buildroot} %{?*} | |
37 | ||
38 | #XXX no checks yet | |
39 | #%check | |
40 | #%{__python2} setup.py test | |
41 | ||
42 | %clean | |
43 | rm -rf %{buildroot} | |
44 | ||
45 | %files | |
46 | %license LICENSE | |
47 | %doc README.md | |
48 | %{python2_sitelib}/%{name}/* | |
49 | %{python2_sitelib}/%{name}-%{version}-py2.7.egg-info/* | |
50 | %{_bindir}/%{name} | |
51 | %{_datadir}/%{name}/* | |
52 | %{_defaultdocdir}/%{name}/dnsviz-graph.html | |
53 | %{_defaultdocdir}/%{name}/images/*png | |
54 | %{_mandir}/man1/%{name}.1* | |
55 | %{_mandir}/man1/%{name}-probe.1* | |
56 | %{_mandir}/man1/%{name}-graph.1* | |
57 | %{_mandir}/man1/%{name}-grok.1* | |
58 | %{_mandir}/man1/%{name}-print.1* | |
59 | %{_mandir}/man1/%{name}-query.1* | |
60 | ||
61 | %changelog | |
62 | * Fri Jan 25 2019 Casey Deccio | |
63 | 0.8.0 release |
0 | Name: dnsviz | |
1 | Version: 0.8.0 | |
2 | Release: 1%{?dist} | |
3 | Summary: Tools for analyzing and visualizing DNS and DNSSEC behavior | |
4 | ||
5 | License: GPLv2+ | |
6 | URL: https://github.com/dnsviz/dnsviz | |
7 | Source0: https://github.com/dnsviz/dnsviz/releases/download/v%{version}/%{name}-%{version}.tar.gz | |
8 | ||
9 | BuildArch: noarch | |
10 | BuildRequires: python3-devel | |
11 | BuildRequires: graphviz | |
12 | BuildRequires: make | |
13 | # python3-pygraphviz should be >= 1.4 | |
14 | Requires: python3-pygraphviz >= 1.3 | |
15 | Requires: python3-m2crypto >= 0.28.0 | |
16 | Requires: python3-dns >= 1.13 | |
17 | Requires: python3-libnacl | |
18 | ||
19 | %description | |
20 | DNSViz is a tool suite for analysis and visualization of Domain Name System | |
21 | (DNS) behavior, including its security extensions (DNSSEC). This tool suite | |
22 | powers the Web-based analysis available at http://dnsviz.net/ | |
23 | ||
24 | %prep | |
25 | %autosetup | |
26 | ||
27 | %build | |
28 | %py3_build | |
29 | ||
30 | %install | |
31 | #XXX Normally the py3_install macro would be used here, | |
32 | # but dnsviz/config.py is built with the install command, | |
33 | # so install MUST call the build subcommand, so config.py | |
34 | # will be properly placed. With py3_install, the | |
35 | # --skip-build argument is used. | |
36 | %{__python3} %{py_setup} %{?py_setup_args} install -O1 --root %{buildroot} %{?*} | |
37 | ||
38 | #XXX no checks yet | |
39 | #%check | |
40 | #%{__python3} setup.py test | |
41 | ||
42 | %clean | |
43 | rm -rf %{buildroot} | |
44 | ||
45 | %files | |
46 | %license LICENSE | |
47 | %doc README.md | |
48 | %{python3_sitelib}/%{name}/* | |
49 | %{python3_sitelib}/%{name}-%{version}-py3.7.egg-info/* | |
50 | %{_bindir}/%{name} | |
51 | %{_datadir}/%{name}/* | |
52 | %{_defaultdocdir}/%{name}/dnsviz-graph.html | |
53 | %{_defaultdocdir}/%{name}/images/*png | |
54 | %{_mandir}/man1/%{name}.1* | |
55 | %{_mandir}/man1/%{name}-probe.1* | |
56 | %{_mandir}/man1/%{name}-graph.1* | |
57 | %{_mandir}/man1/%{name}-grok.1* | |
58 | %{_mandir}/man1/%{name}-print.1* | |
59 | %{_mandir}/man1/%{name}-query.1* | |
60 | ||
61 | %changelog | |
62 | * Fri Jan 25 2019 Casey Deccio | |
63 | 0.8.0 release |
0 | diff --git a/M2Crypto/DSA.py b/M2Crypto/DSA.py | |
1 | index 57d123b..325e418 100644 | |
2 | --- a/M2Crypto/DSA.py | |
3 | +++ b/M2Crypto/DSA.py | |
4 | @@ -396,6 +396,29 @@ def load_key_bio(bio, callback=util.passphrase_callback): | |
5 | raise DSAError('problem loading DSA key pair') | |
6 | return DSA(dsa, 1) | |
7 | ||
8 | +def pub_key_from_params(p, q, g, pub): | |
9 | + """ | |
10 | + Factory function that instantiates a DSA_pub object using | |
11 | + the parameters and public key specified. | |
12 | + | |
13 | + @type p: str | |
14 | + @param p: value of p, a "byte string" | |
15 | + @type q: str | |
16 | + @param q: value of q, a "byte string" | |
17 | + @type g: str | |
18 | + @param g: value of g, a "byte string" | |
19 | + @type pub: str | |
20 | + @param pub: value of the public key, a "byte string" | |
21 | + @rtype: DSA_pub | |
22 | + @return: instance of DSA_pub. | |
23 | + """ | |
24 | + dsa = m2.dsa_new() | |
25 | + m2.dsa_set_p(dsa, p) | |
26 | + m2.dsa_set_q(dsa, q) | |
27 | + m2.dsa_set_g(dsa, g) | |
28 | + m2.dsa_set_pub(dsa, pub) | |
29 | + return DSA_pub(dsa, 1) | |
30 | + | |
31 | ||
32 | def load_pub_key(file, callback=util.passphrase_callback): | |
33 | """ | |
34 | diff --git a/M2Crypto/EC.py b/M2Crypto/EC.py | |
35 | index a4a9faf..800a705 100644 | |
36 | --- a/M2Crypto/EC.py | |
37 | +++ b/M2Crypto/EC.py | |
38 | @@ -254,6 +254,13 @@ class EC_pub(EC): | |
39 | self.der = m2.ec_key_get_public_der(self.ec) | |
40 | return self.der | |
41 | ||
42 | + def get_key(self): | |
43 | + """ | |
44 | + Returns the public key as a byte string. | |
45 | + """ | |
46 | + assert self.check_key(), 'key is not initialised' | |
47 | + return m2.ec_key_get_public_key(self.ec) | |
48 | + | |
49 | save_key = EC.save_pub_key | |
50 | ||
51 | save_key_bio = EC.save_pub_key_bio | |
52 | @@ -333,3 +340,9 @@ def pub_key_from_der(der): | |
53 | Create EC_pub from DER. | |
54 | """ | |
55 | return EC_pub(m2.ec_key_from_pubkey_der(der), 1) | |
56 | + | |
57 | +def pub_key_from_params(curve, bytes): | |
58 | + """ | |
59 | + Create EC_pub from curve name and octet string. | |
60 | + """ | |
61 | + return EC_pub(m2.ec_key_from_pubkey_params(curve, bytes), 1) | |
62 | diff --git a/M2Crypto/EVP.py b/M2Crypto/EVP.py | |
63 | index 12618a2..28303bd 100644 | |
64 | --- a/M2Crypto/EVP.py | |
65 | +++ b/M2Crypto/EVP.py | |
66 | @@ -40,8 +40,13 @@ class MessageDigest: | |
67 | def __init__(self, algo): | |
68 | md = getattr(m2, algo, None) | |
69 | if md is None: | |
70 | - raise ValueError('unknown algorithm', algo) | |
71 | - self.md = md() | |
72 | + # if the digest algorithm isn't found as an attribute of the m2 | |
73 | + # module, try to look up the digest using get_digestbyname() | |
74 | + self.md = m2.get_digestbyname(algo) | |
75 | + if self.md is None: | |
76 | + raise ValueError('unknown algorithm', algo) | |
77 | + else: | |
78 | + self.md = md() | |
79 | self.ctx = m2.md_ctx_new() | |
80 | m2.digest_init(self.ctx, self.md) | |
81 | ||
82 | @@ -389,6 +394,25 @@ def load_key_bio(bio, callback=util.passphrase_callback): | |
83 | raise EVPError(Err.get_error()) | |
84 | return PKey(cptr, 1) | |
85 | ||
86 | +def load_key_bio_pubkey(bio, callback=util.passphrase_callback): | |
87 | + """ | |
88 | + Load an M2Crypto.EVP.PKey from a public key as a M2Crypto.BIO object. | |
89 | + | |
90 | + @type bio: M2Crypto.BIO | |
91 | + @param bio: M2Crypto.BIO object containing the key in PEM format. | |
92 | + | |
93 | + @type callback: Python callable | |
94 | + @param callback: A Python callable object that is invoked | |
95 | + to acquire a passphrase with which to protect the key. | |
96 | + | |
97 | + @rtype: M2Crypto.EVP.PKey | |
98 | + @return: M2Crypto.EVP.PKey object. | |
99 | + """ | |
100 | + cptr = m2.pkey_read_pem_pubkey(bio._ptr(), callback) | |
101 | + if cptr is None: | |
102 | + raise EVPError(Err.get_error()) | |
103 | + return PKey(cptr, 1) | |
104 | + | |
105 | def load_key_string(string, callback=util.passphrase_callback): | |
106 | """ | |
107 | Load an M2Crypto.EVP.PKey from a string. | |
108 | @@ -405,3 +429,20 @@ def load_key_string(string, callback=util.passphrase_callback): | |
109 | """ | |
110 | bio = BIO.MemoryBuffer(string) | |
111 | return load_key_bio(bio, callback) | |
112 | + | |
113 | +def load_key_string_pubkey(string, callback=util.passphrase_callback): | |
114 | + """ | |
115 | + Load an M2Crypto.EVP.PKey from a public key as a string. | |
116 | + | |
117 | + @type string: string | |
118 | + @param string: String containing the key in PEM format. | |
119 | + | |
120 | + @type callback: Python callable | |
121 | + @param callback: A Python callable object that is invoked | |
122 | + to acquire a passphrase with which to protect the key. | |
123 | + | |
124 | + @rtype: M2Crypto.EVP.PKey | |
125 | + @return: M2Crypto.EVP.PKey object. | |
126 | + """ | |
127 | + bio = BIO.MemoryBuffer(string) | |
128 | + return load_key_bio_pubkey(bio, callback) | |
129 | diff --git a/SWIG/_dsa.i b/SWIG/_dsa.i | |
130 | index a35dd88..a6da42d 100644 | |
131 | --- a/SWIG/_dsa.i | |
132 | +++ b/SWIG/_dsa.i | |
133 | @@ -153,6 +153,25 @@ PyObject *dsa_set_g(DSA *dsa, PyObject *value) { | |
134 | Py_INCREF(Py_None); | |
135 | return Py_None; | |
136 | } | |
137 | + | |
138 | +PyObject *dsa_set_pub(DSA *dsa, PyObject *value) { | |
139 | + BIGNUM *bn; | |
140 | + const void *vbuf; | |
141 | + int vlen; | |
142 | + | |
143 | + if (m2_PyObject_AsReadBufferInt(value, &vbuf, &vlen) == -1) | |
144 | + return NULL; | |
145 | + | |
146 | + if (!(bn = BN_mpi2bn((unsigned char *)vbuf, vlen, NULL))) { | |
147 | + PyErr_SetString(_dsa_err, ERR_reason_error_string(ERR_get_error())); | |
148 | + return NULL; | |
149 | + } | |
150 | + if (dsa->pub_key) | |
151 | + BN_free(dsa->pub_key); | |
152 | + dsa->pub_key = bn; | |
153 | + Py_INCREF(Py_None); | |
154 | + return Py_None; | |
155 | +} | |
156 | %} | |
157 | ||
158 | %inline %{ | |
159 | diff --git a/SWIG/_ec.i b/SWIG/_ec.i | |
160 | index f0e52bd..9065c10 100644 | |
161 | --- a/SWIG/_ec.i | |
162 | +++ b/SWIG/_ec.i | |
163 | @@ -189,6 +189,43 @@ PyObject *ec_key_get_public_der(EC_KEY *key) { | |
164 | ||
165 | return pyo; | |
166 | } | |
167 | + | |
168 | +PyObject *ec_key_get_public_key(EC_KEY *key) { | |
169 | + | |
170 | + unsigned char *src=NULL; | |
171 | + void *dst=NULL; | |
172 | + int src_len=0; | |
173 | + Py_ssize_t dst_len=0; | |
174 | + PyObject *pyo=NULL; | |
175 | + int ret=0; | |
176 | + | |
177 | + /* Convert to binary */ | |
178 | + src_len = i2o_ECPublicKey(key, &src); | |
179 | + if (src_len < 0) | |
180 | + { | |
181 | + PyErr_SetString(_ec_err, ERR_reason_error_string(ERR_get_error())); | |
182 | + return NULL; | |
183 | + } | |
184 | + | |
185 | + /* Create a PyBuffer containing a copy of the binary, | |
186 | + * to simplify memory deallocation | |
187 | + */ | |
188 | + pyo = PyBuffer_New( src_len ); | |
189 | + ret = PyObject_AsWriteBuffer( pyo, &dst, &dst_len ); | |
190 | + if (ret < 0) | |
191 | + { | |
192 | + Py_DECREF(pyo); | |
193 | + OPENSSL_free(src); | |
194 | + PyErr_SetString(_ec_err, "cannot get write buffer"); | |
195 | + return NULL; | |
196 | + } | |
197 | + assert( src_len == dst_len ); | |
198 | + memcpy( dst, src, src_len ); | |
199 | + OPENSSL_free(src); | |
200 | + | |
201 | + return pyo; | |
202 | +} | |
203 | + | |
204 | %} | |
205 | ||
206 | %threadallow ec_key_read_pubkey; | |
207 | @@ -404,6 +441,32 @@ EC_KEY* ec_key_from_pubkey_der(PyObject *pubkey) { | |
208 | return keypair; | |
209 | } | |
210 | ||
211 | +EC_KEY* ec_key_from_pubkey_params(int nid, PyObject *pubkey) { | |
212 | + const void *keypairbuf; | |
213 | + Py_ssize_t keypairbuflen; | |
214 | + const unsigned char *tempBuf; | |
215 | + EC_KEY *keypair; | |
216 | + | |
217 | + if (PyObject_AsReadBuffer(pubkey, &keypairbuf, &keypairbuflen) == -1) | |
218 | + { | |
219 | + return NULL; | |
220 | + } | |
221 | + | |
222 | + keypair = ec_key_new_by_curve_name(nid); | |
223 | + if (!keypair) { | |
224 | + PyErr_SetString(_ec_err, ERR_reason_error_string(ERR_get_error())); | |
225 | + return NULL; | |
226 | + } | |
227 | + | |
228 | + tempBuf = (const unsigned char *)keypairbuf; | |
229 | + if ((o2i_ECPublicKey( &keypair, &tempBuf, keypairbuflen)) == 0) | |
230 | + { | |
231 | + PyErr_SetString(_ec_err, ERR_reason_error_string(ERR_get_error())); | |
232 | + return NULL; | |
233 | + } | |
234 | + return keypair; | |
235 | +} | |
236 | + | |
237 | ||
238 | // According to [SEC2] the degree of the group is defined as EC key length | |
239 | int ec_key_keylen(EC_KEY *key) { | |
240 | diff --git a/SWIG/_evp.i b/SWIG/_evp.i | |
241 | index 85382db..033897b 100644 | |
242 | --- a/SWIG/_evp.i | |
243 | +++ b/SWIG/_evp.i | |
244 | @@ -49,6 +49,9 @@ extern const EVP_MD *EVP_sha512(void); | |
245 | %rename(digest_init) EVP_DigestInit; | |
246 | extern int EVP_DigestInit(EVP_MD_CTX *, const EVP_MD *); | |
247 | ||
248 | +%rename(get_digestbyname) EVP_get_digestbyname; | |
249 | +extern EVP_MD *EVP_get_digestbyname(const char * name); | |
250 | + | |
251 | %rename(des_ecb) EVP_des_ecb; | |
252 | extern const EVP_CIPHER *EVP_des_ecb(void); | |
253 | %rename(des_ede_ecb) EVP_des_ede; | |
254 | @@ -519,6 +522,17 @@ EVP_PKEY *pkey_read_pem(BIO *f, PyObject *pyfunc) { | |
255 | return pk; | |
256 | } | |
257 | ||
258 | +EVP_PKEY *pkey_read_pem_pubkey(BIO *f, PyObject *pyfunc) { | |
259 | + EVP_PKEY *pk; | |
260 | + | |
261 | + Py_INCREF(pyfunc); | |
262 | + Py_BEGIN_ALLOW_THREADS | |
263 | + pk = PEM_read_bio_PUBKEY(f, NULL, passphrase_callback, (void *)pyfunc); | |
264 | + Py_END_ALLOW_THREADS | |
265 | + Py_DECREF(pyfunc); | |
266 | + return pk; | |
267 | +} | |
268 | + | |
269 | int pkey_assign_rsa(EVP_PKEY *pkey, RSA *rsa) { | |
270 | return EVP_PKEY_assign_RSA(pkey, rsa); | |
271 | } | |
272 | diff --git a/tests/test_dsa.py b/tests/test_dsa.py | |
273 | index 27d1f61..c224a53 100644 | |
274 | --- a/tests/test_dsa.py | |
275 | +++ b/tests/test_dsa.py | |
276 | @@ -99,6 +99,19 @@ class DSATestCase(unittest.TestCase): | |
277 | r, s = dsa2.sign(self.data) | |
278 | assert dsa2.verify(self.data, r, s) | |
279 | ||
280 | + def test_pub_key_from_params(self): | |
281 | + dsa = DSA.gen_params(1024, self.callback) | |
282 | + dsa.gen_key() | |
283 | + assert len(dsa) == 1024 | |
284 | + p = dsa.p | |
285 | + q = dsa.q | |
286 | + g = dsa.g | |
287 | + pub = dsa.pub | |
288 | + dsa2 = DSA.pub_key_from_params(p, q, g, pub) | |
289 | + assert dsa2.check_key() | |
290 | + r, s = dsa.sign(self.data) | |
291 | + assert dsa2.verify(self.data, r, s) | |
292 | + | |
293 | def suite(): | |
294 | return unittest.makeSuite(DSATestCase) | |
295 | ||
296 | diff --git a/tests/test_ecdsa.py b/tests/test_ecdsa.py | |
297 | index d6c75d1..d28be96 100644 | |
298 | --- a/tests/test_ecdsa.py | |
299 | +++ b/tests/test_ecdsa.py | |
300 | @@ -70,6 +70,16 @@ class ECDSATestCase(unittest.TestCase): | |
301 | ec = EC.gen_params(EC.NID_sect233k1) | |
302 | self.assertEqual(len(ec), 233) | |
303 | ||
304 | + def test_pub_key_from_params(self): | |
305 | + curve = EC.NID_X9_62_prime256v1 | |
306 | + ec = EC.gen_params(curve) | |
307 | + ec.gen_key() | |
308 | + ec_pub = ec.pub() | |
309 | + k = ec_pub.get_key() | |
310 | + ec2 = EC.pub_key_from_params(curve, k) | |
311 | + assert ec2.check_key() | |
312 | + r, s = ec.sign_dsa(self.data) | |
313 | + assert ec2.verify_dsa(self.data, r, s) | |
314 | ||
315 | def suite(): | |
316 | return unittest.makeSuite(ECDSATestCase) | |
317 | diff --git a/tests/test_evp.py b/tests/test_evp.py | |
318 | index 8cf7d12..bddec84 100644 | |
319 | --- a/tests/test_evp.py | |
320 | +++ b/tests/test_evp.py | |
321 | @@ -58,6 +58,9 @@ class EVPTestCase(unittest.TestCase): | |
322 | # A quick but not thorough sanity check | |
323 | self.assertEqual(len(der_blob), 160) | |
324 | ||
325 | + def test_get_digestbyname(self): | |
326 | + self.assertEqual(m2.get_digestbyname('sha513'), None) | |
327 | + self.assertNotEqual(m2.get_digestbyname('sha1'), None) | |
328 | ||
329 | def test_MessageDigest(self): | |
330 | with self.assertRaises(ValueError): | |
331 | @@ -66,6 +69,19 @@ class EVPTestCase(unittest.TestCase): | |
332 | self.assertEqual(md.update('Hello'), 1) | |
333 | self.assertEqual(util.octx_to_num(md.final()), 1415821221623963719413415453263690387336440359920) | |
334 | ||
335 | + # temporarily remove sha1 from m2 | |
336 | + old_sha1 = m2.sha1 | |
337 | + del m2.sha1 | |
338 | + | |
339 | + # now run the same test again, relying on EVP.MessageDigest() to call | |
340 | + # get_digestbyname() under the hood | |
341 | + md = EVP.MessageDigest('sha1') | |
342 | + self.assertEqual(md.update('Hello'), 1) | |
343 | + self.assertEqual(util.octx_to_num(md.final()), 1415821221623963719413415453263690387336440359920) | |
344 | + | |
345 | + # put sha1 back in place | |
346 | + m2.sha1 = old_sha1 | |
347 | + | |
348 | def test_as_der_capture_key(self): | |
349 | """ | |
350 | Test DER encoding the PKey instance after assigning | |
351 | @@ -140,6 +156,26 @@ class EVPTestCase(unittest.TestCase): | |
352 | rsa3 = RSA.gen_key(1024, 3, callback=self._gen_callback) | |
353 | self.assertNotEqual(rsa.sign(digest), rsa3.sign(digest)) | |
354 | ||
355 | + def test_load_key_string_pubkey(self): | |
356 | + """ | |
357 | + Testing creating a PKey instance from PEM string. | |
358 | + """ | |
359 | + rsa = RSA.gen_key(1024, 3, callback=self._gen_callback) | |
360 | + self.assertIsInstance(rsa, RSA.RSA) | |
361 | + | |
362 | + rsa_pem = BIO.MemoryBuffer() | |
363 | + rsa.save_pub_key_bio(rsa_pem) | |
364 | + pkey = EVP.load_key_string_pubkey(rsa_pem.read()) | |
365 | + rsa2 = pkey.get_rsa() | |
366 | + self.assertIsInstance(rsa2, RSA.RSA_pub) | |
367 | + self.assertEqual(rsa.e, rsa2.e) | |
368 | + self.assertEqual(rsa.n, rsa2.n) | |
369 | + pem = rsa.as_pem(callback=self._pass_callback) | |
370 | + pem2 = rsa2.as_pem() | |
371 | + assert pem | |
372 | + assert pem2 | |
373 | + self.assertNotEqual(pem, pem2) | |
374 | + | |
375 | def test_get_rsa_fail(self): | |
376 | """ | |
377 | Testing trying to retrieve the RSA key from the PKey instance |
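The patch above lets callers build DSA and EC public-key objects directly from raw parameters rather than from PEM or DER. The `dsa_set_pub()` helper expects each value in OpenSSL's MPI encoding (the format `BN_bn2mpi()` produces and `BN_mpi2bn()` parses): a 4-byte big-endian length followed by the big-endian magnitude, with a leading zero byte when the top bit is set. A minimal sketch of that encoding (`bn_to_mpi` is a hypothetical helper for illustration, not part of the patch):

```python
import struct

def bn_to_mpi(n: int) -> bytes:
    """Encode a non-negative integer in OpenSSL's MPI format, the
    representation BN_mpi2bn() (used by dsa_set_pub above) expects:
    a 4-byte big-endian length, then the big-endian magnitude, with
    a leading zero byte when the high bit is set (keeps it non-negative)."""
    if n == 0:
        body = b""
    else:
        body = n.to_bytes((n.bit_length() + 7) // 8, "big")
        if body[0] & 0x80:
            body = b"\x00" + body  # pad so the sign bit stays clear
    return struct.pack(">I", len(body)) + body
```

Values prepared this way can be passed as the `p`, `q`, `g`, and `pub` "byte strings" that `DSA.pub_key_from_params()` documents.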
0 | diff -ur M2Crypto-0.22.3/M2Crypto/DSA.py M2Crypto-0.22.3.new/M2Crypto/DSA.py | |
1 | --- M2Crypto-0.22.3/M2Crypto/DSA.py 2014-01-22 14:37:01.000000000 -0500 | |
2 | +++ M2Crypto-0.22.3.new/M2Crypto/DSA.py 2016-01-12 19:25:07.000000000 -0500 | |
3 | @@ -394,6 +394,29 @@ | |
4 | raise DSAError('problem loading DSA key pair') | |
5 | return DSA(dsa, 1) | |
6 | ||
7 | +def pub_key_from_params(p, q, g, pub): | |
8 | + """ | |
9 | + Factory function that instantiates a DSA_pub object using | |
10 | + the parameters and public key specified. | |
11 | + | |
12 | + @type p: str | |
13 | + @param p: value of p, a "byte string" | |
14 | + @type q: str | |
15 | + @param q: value of q, a "byte string" | |
16 | + @type g: str | |
17 | + @param g: value of g, a "byte string" | |
18 | + @type pub: str | |
19 | + @param pub: value of the public key, a "byte string" | |
20 | + @rtype: DSA_pub | |
21 | + @return: instance of DSA_pub. | |
22 | + """ | |
23 | + dsa = m2.dsa_new() | |
24 | + m2.dsa_set_p(dsa, p) | |
25 | + m2.dsa_set_q(dsa, q) | |
26 | + m2.dsa_set_g(dsa, g) | |
27 | + m2.dsa_set_pub(dsa, pub) | |
28 | + return DSA_pub(dsa, 1) | |
29 | + | |
30 | ||
31 | def load_pub_key(file, callback=util.passphrase_callback): | |
32 | """ | |
33 | diff -ur M2Crypto-0.22.3/M2Crypto/EC.py M2Crypto-0.22.3.new/M2Crypto/EC.py | |
34 | --- M2Crypto-0.22.3/M2Crypto/EC.py 2014-01-22 14:37:01.000000000 -0500 | |
35 | +++ M2Crypto-0.22.3.new/M2Crypto/EC.py 2016-01-12 19:25:07.000000000 -0500 | |
36 | @@ -254,6 +254,13 @@ | |
37 | self.der = m2.ec_key_get_public_der(self.ec) | |
38 | return self.der | |
39 | ||
40 | + def get_key(self): | |
41 | + """ | |
42 | + Returns the public key as a byte string. | |
43 | + """ | |
44 | + assert self.check_key(), 'key is not initialised' | |
45 | + return m2.ec_key_get_public_key(self.ec) | |
46 | + | |
47 | save_key = EC.save_pub_key | |
48 | ||
49 | save_key_bio = EC.save_pub_key_bio | |
50 | @@ -333,3 +340,9 @@ | |
51 | Create EC_pub from DER. | |
52 | """ | |
53 | return EC_pub(m2.ec_key_from_pubkey_der(der), 1) | |
54 | + | |
55 | +def pub_key_from_params(curve, bytes): | |
56 | + """ | |
57 | + Create EC_pub from curve name and octet string. | |
58 | + """ | |
59 | + return EC_pub(m2.ec_key_from_pubkey_params(curve, bytes), 1) | |
60 | diff -ur M2Crypto-0.22.3/M2Crypto/EVP.py M2Crypto-0.22.3.new/M2Crypto/EVP.py | |
61 | --- M2Crypto-0.22.3/M2Crypto/EVP.py 2014-01-22 14:37:01.000000000 -0500 | |
62 | +++ M2Crypto-0.22.3.new/M2Crypto/EVP.py 2016-01-12 21:11:36.000000000 -0500 | |
63 | @@ -40,8 +40,13 @@ | |
64 | def __init__(self, algo): | |
65 | md = getattr(m2, algo, None) | |
66 | if md is None: | |
67 | - raise ValueError, ('unknown algorithm', algo) | |
68 | - self.md=md() | |
69 | + # if the digest algorithm isn't found as an attribute of the m2 | |
70 | + # module, try to look up the digest using get_digestbyname() | |
71 | + self.md = m2.get_digestbyname(algo) | |
72 | + if self.md is None: | |
73 | + raise ValueError('unknown algorithm', algo) | |
74 | + else: | |
75 | + self.md = md() | |
76 | self.ctx=m2.md_ctx_new() | |
77 | m2.digest_init(self.ctx, self.md) | |
78 | ||
79 | @@ -389,6 +394,25 @@ | |
80 | raise EVPError(Err.get_error()) | |
81 | return PKey(cptr, 1) | |
82 | ||
83 | +def load_key_bio_pubkey(bio, callback=util.passphrase_callback): | |
84 | + """ | |
85 | + Load an M2Crypto.EVP.PKey from a public key as a M2Crypto.BIO object. | |
86 | + | |
87 | + @type bio: M2Crypto.BIO | |
88 | + @param bio: M2Crypto.BIO object containing the key in PEM format. | |
89 | + | |
90 | + @type callback: Python callable | |
91 | + @param callback: A Python callable object that is invoked | |
92 | + to acquire a passphrase with which to decrypt the key, if it is encrypted. | |
93 | + | |
94 | + @rtype: M2Crypto.EVP.PKey | |
95 | + @return: M2Crypto.EVP.PKey object. | |
96 | + """ | |
97 | + cptr = m2.pkey_read_pem_pubkey(bio._ptr(), callback) | |
98 | + if cptr is None: | |
99 | + raise EVPError(Err.get_error()) | |
100 | + return PKey(cptr, 1) | |
101 | + | |
102 | def load_key_string(string, callback=util.passphrase_callback): | |
103 | """ | |
104 | Load an M2Crypto.EVP.PKey from a string. | |
105 | @@ -406,3 +430,19 @@ | |
106 | bio = BIO.MemoryBuffer(string) | |
107 | return load_key_bio( bio, callback) | |
108 | ||
109 | +def load_key_string_pubkey(string, callback=util.passphrase_callback): | |
110 | + """ | |
111 | + Load an M2Crypto.EVP.PKey from a public key as a string. | |
112 | + | |
113 | + @type string: string | |
114 | + @param string: String containing the key in PEM format. | |
115 | + | |
116 | + @type callback: Python callable | |
117 | + @param callback: A Python callable object that is invoked | |
118 | + to acquire a passphrase with which to decrypt the key, if it is encrypted. | |
119 | + | |
120 | + @rtype: M2Crypto.EVP.PKey | |
121 | + @return: M2Crypto.EVP.PKey object. | |
122 | + """ | |
123 | + bio = BIO.MemoryBuffer(string) | |
124 | + return load_key_bio_pubkey(bio, callback) | |
125 | diff -ur M2Crypto-0.22.3/SWIG/_dsa.i M2Crypto-0.22.3.new/SWIG/_dsa.i | |
126 | --- M2Crypto-0.22.3/SWIG/_dsa.i 2014-01-22 14:37:01.000000000 -0500 | |
127 | +++ M2Crypto-0.22.3.new/SWIG/_dsa.i 2016-01-12 19:25:07.000000000 -0500 | |
128 | @@ -153,6 +153,25 @@ | |
129 | Py_INCREF(Py_None); | |
130 | return Py_None; | |
131 | } | |
132 | + | |
133 | +PyObject *dsa_set_pub(DSA *dsa, PyObject *value) { | |
134 | + BIGNUM *bn; | |
135 | + const void *vbuf; | |
136 | + int vlen; | |
137 | + | |
138 | + if (m2_PyObject_AsReadBufferInt(value, &vbuf, &vlen) == -1) | |
139 | + return NULL; | |
140 | + | |
141 | + if (!(bn = BN_mpi2bn((unsigned char *)vbuf, vlen, NULL))) { | |
142 | + PyErr_SetString(_dsa_err, ERR_reason_error_string(ERR_get_error())); | |
143 | + return NULL; | |
144 | + } | |
145 | + if (dsa->pub_key) | |
146 | + BN_free(dsa->pub_key); | |
147 | + dsa->pub_key = bn; | |
148 | + Py_INCREF(Py_None); | |
149 | + return Py_None; | |
150 | +} | |
151 | %} | |
152 | ||
153 | %inline %{ | |
154 | diff -ur M2Crypto-0.22.3/SWIG/_ec.i M2Crypto-0.22.3.new/SWIG/_ec.i | |
155 | --- M2Crypto-0.22.3/SWIG/_ec.i 2014-01-22 14:37:01.000000000 -0500 | |
156 | +++ M2Crypto-0.22.3.new/SWIG/_ec.i 2016-01-12 19:25:07.000000000 -0500 | |
157 | @@ -189,6 +189,43 @@ | |
158 | ||
159 | return pyo; | |
160 | } | |
161 | + | |
162 | +PyObject *ec_key_get_public_key(EC_KEY *key) { | |
163 | + | |
164 | + unsigned char *src=NULL; | |
165 | + void *dst=NULL; | |
166 | + int src_len=0; | |
167 | + Py_ssize_t dst_len=0; | |
168 | + PyObject *pyo=NULL; | |
169 | + int ret=0; | |
170 | + | |
171 | + /* Convert to binary */ | |
172 | + src_len = i2o_ECPublicKey(key, &src); | |
173 | + if (src_len < 0) | |
174 | + { | |
175 | + PyErr_SetString(_ec_err, ERR_reason_error_string(ERR_get_error())); | |
176 | + return NULL; | |
177 | + } | |
178 | + | |
179 | + /* Create a PyBuffer containing a copy of the binary, | |
180 | + * to simplify memory deallocation | |
181 | + */ | |
182 | + pyo = PyBuffer_New( src_len ); | |
183 | + ret = PyObject_AsWriteBuffer( pyo, &dst, &dst_len ); | |
184 | + if (ret < 0) | |
185 | + { | |
186 | + Py_DECREF(pyo); | |
187 | + OPENSSL_free(src); | |
188 | + PyErr_SetString(_ec_err, "cannot get write buffer"); | |
189 | + return NULL; | |
190 | + } | |
191 | + assert( src_len == dst_len ); | |
192 | + memcpy( dst, src, src_len ); | |
193 | + OPENSSL_free(src); | |
194 | + | |
195 | + return pyo; | |
196 | +} | |
197 | + | |
198 | %} | |
199 | ||
200 | %threadallow ec_key_read_pubkey; | |
201 | @@ -404,6 +441,32 @@ | |
202 | return keypair; | |
203 | } | |
204 | ||
205 | +EC_KEY* ec_key_from_pubkey_params(int nid, PyObject *pubkey) { | |
206 | + const void *keypairbuf; | |
207 | + Py_ssize_t keypairbuflen; | |
208 | + const unsigned char *tempBuf; | |
209 | + EC_KEY *keypair; | |
210 | + | |
211 | + if (PyObject_AsReadBuffer(pubkey, &keypairbuf, &keypairbuflen) == -1) | |
212 | + { | |
213 | + return NULL; | |
214 | + } | |
215 | + | |
216 | + keypair = ec_key_new_by_curve_name(nid); | |
217 | + if (!keypair) { | |
218 | + PyErr_SetString(_ec_err, ERR_reason_error_string(ERR_get_error())); | |
219 | + return NULL; | |
220 | + } | |
221 | + | |
222 | + tempBuf = (const unsigned char *)keypairbuf; | |
223 | + if ((o2i_ECPublicKey( &keypair, &tempBuf, keypairbuflen)) == 0) | |
224 | + { | |
225 | + PyErr_SetString(_ec_err, ERR_reason_error_string(ERR_get_error())); | |
226 | + return NULL; | |
227 | + } | |
228 | + return keypair; | |
229 | +} | |
230 | + | |
231 | ||
232 | // According to [SEC2] the degree of the group is defined as EC key length | |
233 | int ec_key_keylen(EC_KEY *key) { | |
234 | diff -ur M2Crypto-0.22.3/SWIG/_evp.i M2Crypto-0.22.3.new/SWIG/_evp.i | |
235 | --- M2Crypto-0.22.3/SWIG/_evp.i 2014-01-22 14:37:01.000000000 -0500 | |
236 | +++ M2Crypto-0.22.3.new/SWIG/_evp.i 2016-01-12 19:25:07.000000000 -0500 | |
237 | @@ -49,6 +49,9 @@ | |
238 | %rename(digest_init) EVP_DigestInit; | |
239 | extern int EVP_DigestInit(EVP_MD_CTX *, const EVP_MD *); | |
240 | ||
241 | +%rename(get_digestbyname) EVP_get_digestbyname; | |
242 | +extern EVP_MD *EVP_get_digestbyname(const char * name); | |
243 | + | |
244 | %rename(des_ecb) EVP_des_ecb; | |
245 | extern const EVP_CIPHER *EVP_des_ecb(void); | |
246 | %rename(des_ede_ecb) EVP_des_ede; | |
247 | @@ -506,6 +509,17 @@ | |
248 | return pk; | |
249 | } | |
250 | ||
251 | +EVP_PKEY *pkey_read_pem_pubkey(BIO *f, PyObject *pyfunc) { | |
252 | + EVP_PKEY *pk; | |
253 | + | |
254 | + Py_INCREF(pyfunc); | |
255 | + Py_BEGIN_ALLOW_THREADS | |
256 | + pk = PEM_read_bio_PUBKEY(f, NULL, passphrase_callback, (void *)pyfunc); | |
257 | + Py_END_ALLOW_THREADS | |
258 | + Py_DECREF(pyfunc); | |
259 | + return pk; | |
260 | +} | |
261 | + | |
262 | int pkey_assign_rsa(EVP_PKEY *pkey, RSA *rsa) { | |
263 | return EVP_PKEY_assign_RSA(pkey, rsa); | |
264 | } | |
265 | diff -ur M2Crypto-0.22.3/tests/test_dsa.py M2Crypto-0.22.3.new/tests/test_dsa.py | |
266 | --- M2Crypto-0.22.3/tests/test_dsa.py 2014-01-22 14:37:01.000000000 -0500 | |
267 | +++ M2Crypto-0.22.3.new/tests/test_dsa.py 2016-01-12 19:25:07.000000000 -0500 | |
268 | @@ -87,6 +87,19 @@ | |
269 | r,s = dsa2.sign(self.data) | |
270 | assert dsa2.verify(self.data, r, s) | |
271 | ||
272 | + def test_pub_key_from_params(self): | |
273 | + dsa = DSA.gen_params(1024, self.callback) | |
274 | + dsa.gen_key() | |
275 | + assert len(dsa) == 1024 | |
276 | + p = dsa.p | |
277 | + q = dsa.q | |
278 | + g = dsa.g | |
279 | + pub = dsa.pub | |
280 | + dsa2 = DSA.pub_key_from_params(p,q,g,pub) | |
281 | + assert dsa2.check_key() | |
282 | + r,s = dsa.sign(self.data) | |
283 | + assert dsa2.verify(self.data, r, s) | |
284 | + | |
285 | def suite(): | |
286 | return unittest.makeSuite(DSATestCase) | |
287 | ||
288 | diff -ur M2Crypto-0.22.3/tests/test_ecdsa.py M2Crypto-0.22.3.new/tests/test_ecdsa.py | |
289 | --- M2Crypto-0.22.3/tests/test_ecdsa.py 2014-01-22 14:37:01.000000000 -0500 | |
290 | +++ M2Crypto-0.22.3.new/tests/test_ecdsa.py 2016-01-12 19:25:07.000000000 -0500 | |
291 | @@ -63,6 +63,16 @@ | |
292 | ec = EC.gen_params(EC.NID_sect233k1) | |
293 | assert len(ec) == 233 | |
294 | ||
295 | + def test_pub_key_from_params(self): | |
296 | + curve = EC.NID_X9_62_prime256v1 | |
297 | + ec = EC.gen_params(curve) | |
298 | + ec.gen_key() | |
299 | + ec_pub = ec.pub() | |
300 | + k = ec_pub.get_key() | |
301 | + ec2 = EC.pub_key_from_params(curve, k) | |
302 | + assert ec2.check_key() | |
303 | + r, s = ec.sign_dsa(self.data) | |
304 | + assert ec2.verify_dsa(self.data, r, s) | |
305 | ||
306 | def suite(): | |
307 | return unittest.makeSuite(ECDSATestCase) | |
308 | diff -ur M2Crypto-0.22.3/tests/test_evp.py M2Crypto-0.22.3.new/tests/test_evp.py | |
309 | --- M2Crypto-0.22.3/tests/test_evp.py 2014-01-22 14:37:01.000000000 -0500 | |
310 | +++ M2Crypto-0.22.3.new/tests/test_evp.py 2016-01-12 21:05:53.000000000 -0500 | |
311 | @@ -52,13 +52,25 @@ | |
312 | #A quick but not thorough sanity check | |
313 | assert len(der_blob) == 160 | |
314 | ||
315 | - | |
316 | def test_MessageDigest(self): | |
317 | self.assertRaises(ValueError, EVP.MessageDigest, 'sha513') | |
318 | md = EVP.MessageDigest('sha1') | |
319 | assert md.update('Hello') == 1 | |
320 | assert util.octx_to_num(md.final()) == 1415821221623963719413415453263690387336440359920 | |
321 | ||
322 | + # temporarily remove sha1 from m2 | |
323 | + old_sha1 = m2.sha1 | |
324 | + del m2.sha1 | |
325 | + | |
326 | + # now run the same test again, relying on EVP.MessageDigest() to call | |
327 | + # get_digestbyname() under the hood | |
328 | + md = EVP.MessageDigest('sha1') | |
329 | + self.assertEqual(md.update('Hello'), 1) | |
330 | + self.assertEqual(util.octx_to_num(md.final()), 1415821221623963719413415453263690387336440359920) | |
331 | + | |
332 | + # put sha1 back in place | |
333 | + m2.sha1 = old_sha1 | |
334 | + | |
335 | def test_as_der_capture_key(self): | |
336 | """ | |
337 | Test DER encoding the PKey instance after assigning | |
338 | @@ -92,6 +104,9 @@ | |
339 | ||
340 | self.assertRaises(ValueError, EVP.hmac, 'key', 'data', algo='sha513') | |
341 | ||
342 | + def test_get_digestbyname(self): | |
343 | + self.assertEqual(m2.get_digestbyname('sha513'), None) | |
344 | + self.assertNotEqual(m2.get_digestbyname('sha1'), None) | |
345 | ||
346 | def test_get_rsa(self): | |
347 | """ | |
348 | @@ -117,7 +132,27 @@ | |
349 | ||
350 | rsa3 = RSA.gen_key(1024, 3, callback=self._gen_callback) | |
351 | assert rsa.sign(digest) != rsa3.sign(digest) | |
352 | - | |
353 | + | |
354 | + def test_load_key_string_pubkey(self): | |
355 | + """ | |
356 | + Testing creating a PKey instance from PEM string. | |
357 | + """ | |
358 | + rsa = RSA.gen_key(1024, 3, callback=self._gen_callback) | |
359 | + self.assertIsInstance(rsa, RSA.RSA) | |
360 | + | |
361 | + rsa_pem = BIO.MemoryBuffer() | |
362 | + rsa.save_pub_key_bio(rsa_pem) | |
363 | + pkey = EVP.load_key_string_pubkey(rsa_pem.read()) | |
364 | + rsa2 = pkey.get_rsa() | |
365 | + self.assertIsInstance(rsa2, RSA.RSA_pub) | |
366 | + self.assertEqual(rsa.e, rsa2.e) | |
367 | + self.assertEqual(rsa.n, rsa2.n) | |
368 | + pem = rsa.as_pem(callback=self._pass_callback) | |
369 | + pem2 = rsa2.as_pem() | |
370 | + assert pem | |
371 | + assert pem2 | |
372 | + self.assertNotEqual(pem, pem2) | |
373 | + | |
374 | def test_get_rsa_fail(self): | |
375 | """ | |
376 | Testing trying to retrieve the RSA key from the PKey instance |
0 | from .online import WILDCARD_EXPLICIT_DELEGATION, Analyst, OnlineDomainNameAnalysis, PrivateAnalyst, RecursiveAnalyst, PrivateRecursiveAnalyst, NetworkConnectivityException, DNS_RAW_VERSION | |
0 | from .online import COOKIE_STANDIN, WILDCARD_EXPLICIT_DELEGATION, Analyst, OnlineDomainNameAnalysis, PrivateAnalyst, RecursiveAnalyst, PrivateRecursiveAnalyst, NetworkConnectivityException, DNS_RAW_VERSION | |
1 | 1 | from .offline import OfflineDomainNameAnalysis, TTLAgnosticOfflineDomainNameAnalysis, DNS_PROCESSED_VERSION |
4 | 4 | # |
5 | 5 | # Copyright 2015-2016 VeriSign, Inc. |
6 | 6 | # |
7 | # Copyright 2016-2017 Casey Deccio. | |
7 | # Copyright 2016-2019 Casey Deccio | |
8 | 8 | # |
9 | 9 | # DNSViz is free software; you can redistribute it and/or modify |
10 | 10 | # it under the terms of the GNU General Public License as published by |
22 | 22 | |
23 | 23 | from __future__ import unicode_literals |
24 | 24 | |
25 | import cgi | |
26 | 25 | import datetime |
27 | 26 | |
28 | 27 | # minimal support for python2.6 |
30 | 29 | from collections import OrderedDict |
31 | 30 | except ImportError: |
32 | 31 | from ordereddict import OrderedDict |
32 | ||
33 | # python3/python2 dual compatibility | |
34 | try: | |
35 | from html import escape | |
36 | except ImportError: | |
37 | from cgi import escape | |
33 | 38 | |
34 | 39 | import dns.dnssec |
35 | 40 | |
57 | 62 | except KeyError: |
58 | 63 | raise TypeError('The "%s" keyword argument is required for instantiation.' % param) |
59 | 64 | |
65 | def __hash__(self): | |
66 | return id(self) | |
67 | ||
60 | 68 | def __str__(self): |
61 | 69 | return self.code |
62 | 70 | |
82 | 90 | |
83 | 91 | @property |
84 | 92 | def html_description(self): |
85 | description_template_escaped = cgi.escape(self.description_template, True) | |
93 | description_template_escaped = escape(self.description_template, True) | |
86 | 94 | template_kwargs_escaped = {} |
87 | 95 | for n, v in self.template_kwargs.items(): |
88 | 96 | if isinstance(v, int): |
89 | 97 | template_kwargs_escaped[n] = v |
90 | 98 | else: |
91 | 99 | if isinstance(v, str): |
92 | template_kwargs_escaped[n] = cgi.escape(v) | |
100 | template_kwargs_escaped[n] = escape(v) | |
93 | 101 | else: |
94 | template_kwargs_escaped[n] = cgi.escape(str(v)) | |
102 | template_kwargs_escaped[n] = escape(str(v)) | |
95 | 103 | return description_template_escaped % template_kwargs_escaped |
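The property above escapes the template and the keyword values separately and only then interpolates, so the `%`-placeholders in the template are never themselves mangled by escaping. A condensed sketch of that flow (the `render` name is illustrative, not DNSViz API):

```python
from html import escape

def render(template: str, **kwargs) -> str:
    """Escape template and values independently, then interpolate,
    mirroring html_description: ints pass through untouched, everything
    else is stringified and HTML-escaped."""
    escaped = {n: v if isinstance(v, int) else escape(str(v))
               for n, v in kwargs.items()}
    return escape(template, True) % escaped
```

Escaping after interpolation would instead corrupt markup-like characters that the caller supplied in the values.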
96 | 104 | |
97 | 105 | def add_server_client(self, server, client, response): |
300 | 308 | ''' |
301 | 309 | >>> e = InceptionWithinClockSkew(inception=datetime.datetime(2015,1,10,0,0,0), reference_time=datetime.datetime(2015,1,10,0,0,1)) |
302 | 310 | >>> e.description |
303 | 'The value of the Signature Inception field of the RRSIG RR (2015-01-10 00:00:00) is within possible clock skew range of the current time (2015-01-10 00:00:01)'. | |
311 | 'The value of the Signature Inception field of the RRSIG RR (2015-01-10 00:00:00) is within possible clock skew range (1 second) of the current time (2015-01-10 00:00:01).' | |
304 | 312 | ''' |
305 | 313 | |
306 | 314 | _abstract = False |
318 | 326 | ''' |
319 | 327 | >>> e = ExpirationWithinClockSkew(expiration=datetime.datetime(2015,1,10,0,0,1), reference_time=datetime.datetime(2015,1,10,0,0,0)) |
320 | 328 | >>> e.description |
321 | 'The value of the Signature Expiration field of the RRSIG RR (2015-01-10 00:00:01) is within possible clock skew range of the current time (2015-01-10 00:00:00)'. | |
329 | 'The value of the Signature Expiration field of the RRSIG RR (2015-01-10 00:00:01) is within possible clock skew range (1 second) of the current time (2015-01-10 00:00:00).' | |
322 | 330 | ''' |
323 | 331 | |
324 | 332 | _abstract = False |
344 | 352 | description_template = "The cryptographic signature of the RRSIG RR does not properly validate." |
345 | 353 | references = ['RFC 4035, Sec. 5.3.3'] |
346 | 354 | required_params = [] |
355 | ||
356 | class RRSIGBadLength(RRSIGError): | |
357 | pass | |
358 | ||
359 | class RRSIGBadLengthGOST(RRSIGBadLength): | |
360 | ''' | |
361 | >>> e = RRSIGBadLengthGOST(length=500) | |
362 | >>> e.description | |
363 | 'The length of the signature is 500 bits, but a GOST signature (DNSSEC algorithm 12) must be 512 bits long.' | |
364 | ''' | |
365 | _abstract = False | |
366 | description_template = 'The length of the signature is %(length)d bits, but a GOST signature (DNSSEC algorithm 12) must be 512 bits long.' | |
367 | code = 'RRSIG_BAD_LENGTH_GOST' | |
368 | references = ['RFC 5933, Sec. 5.2'] | |
369 | required_params = ['length'] | |
370 | ||
371 | class RRSIGBadLengthECDSA(RRSIGBadLength): | |
372 | curve = None | |
373 | algorithm = None | |
374 | correct_length = None | |
375 | description_template = 'The length of the signature is %(length)d bits, but an ECDSA signature made with Curve %(curve)s (DNSSEC algorithm %(algorithm)d) must be %(correct_length)d bits long.' | |
376 | references = ['RFC 6605, Sec. 4'] | |
377 | required_params = ['length'] | |
378 | ||
379 | def __init__(self, **kwargs): | |
380 | super(RRSIGBadLengthECDSA, self).__init__(**kwargs) | |
381 | self.template_kwargs['curve'] = self.curve | |
382 | self.template_kwargs['algorithm'] = self.algorithm | |
383 | self.template_kwargs['correct_length'] = self.correct_length | |
384 | ||
385 | class RRSIGBadLengthECDSA256(RRSIGBadLengthECDSA): | |
386 | ''' | |
387 | >>> e = RRSIGBadLengthECDSA256(length=500) | |
388 | >>> e.description | |
389 | 'The length of the signature is 500 bits, but an ECDSA signature made with Curve P-256 (DNSSEC algorithm 13) must be 512 bits long.' | |
390 | ''' | |
391 | curve = 'P-256' | |
392 | algorithm = 13 | |
393 | correct_length = 512 | |
394 | _abstract = False | |
395 | code = 'RRSIG_BAD_LENGTH_ECDSA256' | |
396 | ||
397 | class RRSIGBadLengthECDSA384(RRSIGBadLengthECDSA): | |
398 | ''' | |
399 | >>> e = RRSIGBadLengthECDSA384(length=500) | |
400 | >>> e.description | |
401 | 'The length of the signature is 500 bits, but an ECDSA signature made with Curve P-384 (DNSSEC algorithm 14) must be 768 bits long.' | |
402 | ''' | |
403 | curve = 'P-384' | |
404 | algorithm = 14 | |
405 | correct_length = 768 | |
406 | _abstract = False | |
407 | code = 'RRSIG_BAD_LENGTH_ECDSA384' | |
408 | ||
409 | class RRSIGBadLengthEdDSA(RRSIGBadLength): | |
410 | curve = None | |
411 | algorithm = None | |
412 | correct_length = None | |
413 | description_template = 'The length of the signature is %(length)d bits, but an %(curve)s signature (DNSSEC algorithm %(algorithm)d) must be %(correct_length)d bits long.' | |
414 | references = ['RFC 8080, Sec. 4'] | |
415 | required_params = ['length'] | |
416 | ||
417 | def __init__(self, **kwargs): | |
418 | super(RRSIGBadLengthEdDSA, self).__init__(**kwargs) | |
419 | self.template_kwargs['curve'] = self.curve | |
420 | self.template_kwargs['algorithm'] = self.algorithm | |
421 | self.template_kwargs['correct_length'] = self.correct_length | |
422 | ||
423 | class RRSIGBadLengthEd25519(RRSIGBadLengthEdDSA): | |
424 | ''' | |
425 | >>> e = RRSIGBadLengthEd25519(length=500) | |
426 | >>> e.description | |
427 | 'The length of the signature is 500 bits, but an Ed25519 signature (DNSSEC algorithm 15) must be 512 bits long.' | |
428 | ''' | |
429 | curve = 'Ed25519' | |
430 | algorithm = 15 | |
431 | correct_length = 512 | |
432 | _abstract = False | |
433 | code = 'RRSIG_BAD_LENGTH_ED25519' | |
434 | ||
435 | class RRSIGBadLengthEd448(RRSIGBadLengthEdDSA): | |
436 | ''' | |
437 | >>> e = RRSIGBadLengthEd448(length=500) | |
438 | >>> e.description | |
439 | 'The length of the signature is 500 bits, but an Ed448 signature (DNSSEC algorithm 16) must be 912 bits long.' | |
440 | ''' | |
441 | curve = 'Ed448' | |
442 | algorithm = 16 | |
443 | correct_length = 912 | |
444 | _abstract = False | |
445 | code = 'RRSIG_BAD_LENGTH_ED448' | |
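The fixed-length checks above reduce to a small table: an ECDSA RRSIG carries two coordinate-sized integers (r, s), while GOST and EdDSA signatures have fixed encodings. A hedged sketch of the same arithmetic (the `EXPECTED_SIG_BITS` and `check_sig_length` names are illustrative, not DNSViz API):

```python
# Expected RRSIG signature lengths, in bits, for the fixed-length
# DNSSEC algorithms covered by the classes above.
EXPECTED_SIG_BITS = {
    12: 512,  # GOST R 34.10-2001 (RFC 5933)
    13: 512,  # ECDSA P-256: 2 * 256 (RFC 6605)
    14: 768,  # ECDSA P-384: 2 * 384 (RFC 6605)
    15: 512,  # Ed25519: 64-byte signature (RFC 8080)
    16: 912,  # Ed448: 114-byte signature (RFC 8080)
}

def check_sig_length(algorithm: int, sig: bytes):
    """Return None if the length is acceptable, or a tuple
    (actual_bits, expected_bits) on mismatch. Variable-length
    algorithms such as RSA are not checked."""
    expected = EXPECTED_SIG_BITS.get(algorithm)
    actual = len(sig) * 8
    if expected is not None and actual != expected:
        return (actual, expected)
    return None
```

A mismatch here would map onto the corresponding `RRSIGBadLength*` error class.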
347 | 446 | |
348 | 447 | class DSError(DomainNameAnalysisError): |
349 | 448 | pass |
1166 | 1265 | references = ['RFC 6891, Sec. 6.1.4'] |
1167 | 1266 | required_params = ['flags'] |
1168 | 1267 | |
1268 | class DNSCookieError(ResponseError): | |
1269 | pass | |
1270 | ||
1271 | class GratuitousCookie(DNSCookieError): | |
1272 | ''' | |
1273 | >>> e = GratuitousCookie() | |
1274 | >>> e.description | |
1275 | 'The server sent a COOKIE option when none was sent by the client.' | |
1276 | ''' | |
1277 | ||
1278 | _abstract = False | |
1279 | code = 'GRATUITOUS_COOKIE' | |
1280 | description_template = 'The server sent a COOKIE option when none was sent by the client.' | |
1281 | references = ['RFC 7873, Sec. 5.2.1'] | |
1282 | ||
1283 | class MalformedCookieWithoutFORMERR(DNSCookieError): | |
1284 | ''' | |
1285 | >>> e = MalformedCookieWithoutFORMERR() | |
1286 | >>> e.description | |
1287 | 'The server appears to support DNS cookies but did not return a FORMERR status when issued a malformed COOKIE option.' | |
1288 | ''' | |
1289 | ||
1290 | _abstract = False | |
1291 | code = 'MALFORMED_COOKIE_WITHOUT_FORMERR' | |
1292 | description_template = 'The server appears to support DNS cookies but did not return a FORMERR status when issued a malformed COOKIE option.' | |
1293 | references = ['RFC 7873, Sec. 5.2.2'] | |
1294 | ||
1295 | class NoCookieOption(DNSCookieError): | |
1296 | ''' | |
1297 | >>> e = NoCookieOption() | |
1298 | >>> e.description | |
1299 | 'The server appears to support DNS cookies but did not return a COOKIE option.' | |
1300 | ''' | |
1301 | ||
1302 | _abstract = False | |
1303 | code = 'NO_COOKIE_OPTION' | |
1304 | description_template = 'The server appears to support DNS cookies but did not return a COOKIE option.' | |
1305 | references = ['RFC 7873, Sec. 5.2.3'] | |
1306 | ||
1307 | class NoServerCookieWithoutBADCOOKIE(DNSCookieError): | |
1308 | ''' | |
1309 | >>> e = NoServerCookieWithoutBADCOOKIE() | |
1310 | >>> e.description | |
1311 | 'The server appears to support DNS cookies but did not return a BADCOOKIE status when no server cookie was sent.' | |
1312 | ''' | |
1313 | ||
1314 | _abstract = False | |
1315 | code = 'NO_SERVER_COOKIE_WITHOUT_BADCOOKIE' | |
1316 | description_template = 'The server appears to support DNS cookies but did not return a BADCOOKIE status when no server cookie was sent.' | |
1317 | references = ['RFC 7873, Sec. 5.2.3'] | |
1318 | ||
1319 | class InvalidServerCookieWithoutBADCOOKIE(DNSCookieError): | |
1320 | ''' | |
1321 | >>> e = InvalidServerCookieWithoutBADCOOKIE() | |
1322 | >>> e.description | |
1323 | 'The server appears to support DNS cookies but did not return a BADCOOKIE status when an invalid server cookie was sent.' | |
1324 | ''' | |
1325 | ||
1326 | _abstract = False | |
1327 | code = 'INVALID_SERVER_COOKIE_WITHOUT_BADCOOKIE' | |
1328 | description_template = 'The server appears to support DNS cookies but did not return a BADCOOKIE status when an invalid server cookie was sent.' | |
1329 | references = ['RFC 7873, Sec. 5.2.4'] | |
1330 | ||
1331 | class NoServerCookie(DNSCookieError): | |
1332 | ''' | |
1333 | >>> e = NoServerCookie() | |
1334 | >>> e.description | |
1335 | 'The server appears to support DNS cookies but did not return a server cookie with its COOKIE option.' | |
1336 | ''' | |
1337 | ||
1338 | _abstract = False | |
1339 | code = 'NO_SERVER_COOKIE' | |
1340 | description_template = 'The server appears to support DNS cookies but did not return a server cookie with its COOKIE option.' | |
1341 | references = ['RFC 7873, Sec. 5.2.3'] | |
1342 | ||
1343 | class ClientCookieMismatch(DNSCookieError): | |
1344 | ''' | |
1345 | >>> e = ClientCookieMismatch() | |
1346 | >>> e.description | |
1347 | 'The client cookie returned by the server did not match what was sent.' | |
1348 | ''' | |
1349 | ||
1350 | _abstract = False | |
1351 | code = 'CLIENT_COOKIE_MISMATCH' | |
1352 | description_template = 'The client cookie returned by the server did not match what was sent.' | |
1353 | references = ['RFC 7873, Sec. 5.3'] | |
1354 | ||
1355 | class CookieInvalidLength(DNSCookieError): | |
1356 | ''' | |
1357 | >>> e = CookieInvalidLength(length=61) | |
1358 | >>> e.description | |
1359 | 'The cookie returned by the server had an invalid length of 61 bytes.' | |
1360 | ''' | |
1361 | ||
1362 | _abstract = False | |
1363 | code = 'COOKIE_INVALID_LENGTH' | |
1364 | description_template = 'The cookie returned by the server had an invalid length of %(length)d bytes.' | |
1365 | references = ['RFC 7873, Sec. 5.3'] | |
1366 | required_params = ['length'] | |
1367 | ||
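The cookie error classes above all follow the same template pattern: a class-level `description_template` interpolated with the keyword arguments named in `required_params`. A minimal stand-alone sketch of that pattern (hypothetical `Sketch*` names; the real classes inherit from `DomainNameAnalysisError`):

```python
# Minimal sketch of the error-class pattern used by the cookie errors above.
# Not the real dnsviz base class; names here are illustrative only.
class SketchError:
    code = None
    description_template = ''
    required_params = []

    def __init__(self, **kwargs):
        # every name listed in required_params must be supplied
        for param in self.required_params:
            if param not in kwargs:
                raise TypeError('missing parameter: %s' % param)
        self.template_kwargs = kwargs

    @property
    def description(self):
        return self.description_template % self.template_kwargs

class SketchCookieInvalidLength(SketchError):
    code = 'COOKIE_INVALID_LENGTH'
    description_template = 'The cookie returned by the server had an invalid length of %(length)d bytes.'
    required_params = ['length']

e = SketchCookieInvalidLength(length=61)
```

This mirrors the doctests above: `e.description` renders the template with the supplied parameters.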
1169 | 1368 | class UnableToRetrieveDNSSECRecords(ResponseError): |
1170 | 1369 | ''' |
1171 | 1370 | >>> e = UnableToRetrieveDNSSECRecords() |
1402 | 1601 | self.template_kwargs['description'] = 'No response was received until the UDP payload size was decreased, indicating that the server might be attempting to send a payload that exceeds the path maximum transmission unit (PMTU) size.' |
1403 | 1602 | if self.template_kwargs['pmtu_lower_bound'] is not None and self.template_kwargs['pmtu_upper_bound'] is not None: |
1404 | 1603 | self.template_kwargs['description'] += ' The PMTU was bounded between %(pmtu_lower_bound)d and %(pmtu_upper_bound)d bytes.' % self.template_kwargs |
1604 | ||
1605 | class ForeignClassData(ResponseError): | |
1606 | section = None | |
1607 | description_template = 'Data of class %(cls)s was found in the %(section)s section of the response.' | |
1608 | references = ['RFC 1034', 'RFC 1035'] | |
1609 | required_params = ['cls'] | |
1610 | ||
1611 | def __init__(self, **kwargs): | |
1612 | super(ForeignClassData, self).__init__(**kwargs) | |
1613 | self.template_kwargs['section'] = self.section | |
1614 | ||
1615 | class ForeignClassDataAnswer(ForeignClassData): | |
1616 | ''' | |
1617 | >>> e = ForeignClassDataAnswer(cls='CH') | |
1618 | >>> e.description | |
1619 | 'Data of class CH was found in the Answer section of the response.' | |
1620 | ''' | |
1621 | section = 'Answer' | |
1622 | _abstract = False | |
1623 | code = 'FOREIGN_CLASS_DATA_ANSWER' | |
1624 | ||
1625 | class ForeignClassDataAuthority(ForeignClassData): | |
1626 | ''' | |
1627 | >>> e = ForeignClassDataAuthority(cls='CH') | |
1628 | >>> e.description | |
1629 | 'Data of class CH was found in the Authority section of the response.' | |
1630 | ''' | |
1631 | section = 'Authority' | |
1632 | _abstract = False | |
1633 | code = 'FOREIGN_CLASS_DATA_AUTHORITY' | |
1634 | ||
1635 | class ForeignClassDataAdditional(ForeignClassData): | |
1636 | ''' | |
1637 | >>> e = ForeignClassDataAdditional(cls='CH') | |
1638 | >>> e.description | |
1639 | 'Data of class CH was found in the Additional section of the response.' | |
1640 | ''' | |
1641 | section = 'Additional' | |
1642 | _abstract = False | |
1643 | code = 'FOREIGN_CLASS_DATA_ADDITIONAL' | |
1644 | ||
1645 | class CasePreservationError(ResponseError): | |
1646 | ''' | |
1647 | >>> e = CasePreservationError(qname='ExAmPlE.CoM') | |
1648 | >>> e.description | |
1649 | 'The case of the query name (ExAmPlE.CoM) was not preserved in the Question section of the response.' | |
1650 | ''' | |
1651 | ||
1652 | _abstract = False | |
1653 | code = 'CASE_NOT_PRESERVED' | |
1654 | description_template = '%(description)s' | |
1655 | description_template = 'The case of the query name (%(qname)s) was not preserved in the Question section of the response.' | |
1656 | required_params = ['qname'] | |
1405 | 1657 | |
1406 | 1658 | class DelegationError(DomainNameAnalysisError): |
1407 | 1659 | pass |
1755 | 2007 | code = 'DNSKEY_NOT_AT_ZONE_APEX' |
1756 | 2008 | required_params = ['zone', 'name'] |
1757 | 2009 | |
2010 | class DNSKEYBadLength(DNSKEYError): | |
2011 | pass | |
2012 | ||
2013 | class DNSKEYBadLengthGOST(DNSKEYBadLength): | |
2014 | ''' | |
2015 | >>> e = DNSKEYBadLengthGOST(length=500) | |
2016 | >>> e.description | |
2017 | 'The length of the key is 500 bits, but a GOST public key (DNSSEC algorithm 12) must be 512 bits long.' | |
2018 | ''' | |
2019 | _abstract = False | |
2020 | description_template = 'The length of the key is %(length)d bits, but a GOST public key (DNSSEC algorithm 12) must be 512 bits long.' | |
2021 | code = 'DNSKEY_BAD_LENGTH_GOST' | |
2022 | references = ['RFC 5933, Sec. 5.1'] | |
2023 | required_params = ['length'] | |
2024 | ||
2025 | class DNSKEYBadLengthECDSA(DNSKEYBadLength): | |
2026 | curve = None | |
2027 | algorithm = None | |
2028 | correct_length = None | |
2029 | description_template = 'The length of the key is %(length)d bits, but an ECDSA public key using Curve %(curve)s (DNSSEC algorithm %(algorithm)d) must be %(correct_length)d bits long.' | |
2030 | references = ['RFC 6605, Sec. 4'] | |
2031 | required_params = ['length'] | |
2032 | ||
2033 | def __init__(self, **kwargs): | |
2034 | super(DNSKEYBadLengthECDSA, self).__init__(**kwargs) | |
2035 | self.template_kwargs['curve'] = self.curve | |
2036 | self.template_kwargs['algorithm'] = self.algorithm | |
2037 | self.template_kwargs['correct_length'] = self.correct_length | |
2038 | ||
2039 | class DNSKEYBadLengthECDSA256(DNSKEYBadLengthECDSA): | |
2040 | ''' | |
2041 | >>> e = DNSKEYBadLengthECDSA256(length=500) | |
2042 | >>> e.description | |
2043 | 'The length of the key is 500 bits, but an ECDSA public key using Curve P-256 (DNSSEC algorithm 13) must be 512 bits long.' | |
2044 | ''' | |
2045 | curve = 'P-256' | |
2046 | algorithm = 13 | |
2047 | correct_length = 512 | |
2048 | _abstract = False | |
2049 | code = 'DNSKEY_BAD_LENGTH_ECDSA256' | |
2050 | ||
2051 | class DNSKEYBadLengthECDSA384(DNSKEYBadLengthECDSA): | |
2052 | ''' | |
2053 | >>> e = DNSKEYBadLengthECDSA384(length=500) | |
2054 | >>> e.description | |
2055 | 'The length of the key is 500 bits, but an ECDSA public key using Curve P-384 (DNSSEC algorithm 14) must be 768 bits long.' | |
2056 | ''' | |
2057 | curve = 'P-384' | |
2058 | algorithm = 14 | |
2059 | correct_length = 768 | |
2060 | _abstract = False | |
2061 | code = 'DNSKEY_BAD_LENGTH_ECDSA384' | |
2062 | ||
2063 | class DNSKEYBadLengthEdDSA(DNSKEYBadLength): | |
2064 | curve = None | |
2065 | algorithm = None | |
2066 | correct_length = None | |
2067 | description_template = 'The length of the key is %(length)d bits, but an %(curve)s public key (DNSSEC algorithm %(algorithm)d) must be %(correct_length)d bits long.' | |
2068 | references = ['RFC 8080, Sec. 3'] | |
2069 | required_params = ['length'] | |
2070 | ||
2071 | def __init__(self, **kwargs): | |
2072 | super(DNSKEYBadLengthEdDSA, self).__init__(**kwargs) | |
2073 | self.template_kwargs['curve'] = self.curve | |
2074 | self.template_kwargs['algorithm'] = self.algorithm | |
2075 | self.template_kwargs['correct_length'] = self.correct_length | |
2076 | ||
2077 | class DNSKEYBadLengthEd25519(DNSKEYBadLengthEdDSA): | |
2078 | ''' | |
2079 | >>> e = DNSKEYBadLengthEd25519(length=500) | |
2080 | >>> e.description | |
2081 | 'The length of the key is 500 bits, but an Ed25519 public key (DNSSEC algorithm 15) must be 256 bits long.' | |
2082 | ''' | |
2083 | curve = 'Ed25519' | |
2084 | algorithm = 15 | |
2085 | correct_length = 256 | |
2086 | _abstract = False | |
2087 | code = 'DNSKEY_BAD_LENGTH_ED25519' | |
2088 | ||
2089 | class DNSKEYBadLengthEd448(DNSKEYBadLengthEdDSA): | |
2090 | ''' | |
2091 | >>> e = DNSKEYBadLengthEd448(length=500) | |
2092 | >>> e.description | |
2093 | 'The length of the key is 500 bits, but an Ed448 public key (DNSSEC algorithm 16) must be 456 bits long.' | |
2094 | ''' | |
2095 | curve = 'Ed448' | |
2096 | algorithm = 16 | |
2097 | correct_length = 456 | |
2098 | _abstract = False | |
2099 | code = 'DNSKEY_BAD_LENGTH_ED448' | |
2100 | ||
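The `DNSKEYBadLengthECDSA` and `DNSKEYBadLengthEdDSA` bases above share one refinement worth isolating: `__init__` copies class attributes into the template arguments, so each concrete subclass only declares `curve`, `algorithm`, and `correct_length`. A stand-alone sketch (hypothetical `*Sketch` names, not the real classes):

```python
# Sketch of the class-attribute injection pattern used by the
# DNSKEYBadLength* classes above; names are illustrative only.
class BadLengthSketch:
    curve = None
    algorithm = None
    correct_length = None
    description_template = ('The length of the key is %(length)d bits, but an '
                            '%(curve)s public key (DNSSEC algorithm %(algorithm)d) '
                            'must be %(correct_length)d bits long.')

    def __init__(self, length):
        # class attributes become template parameters, so subclasses need
        # no __init__ of their own
        self.template_kwargs = {
            'length': length,
            'curve': self.curve,
            'algorithm': self.algorithm,
            'correct_length': self.correct_length,
        }

    @property
    def description(self):
        return self.description_template % self.template_kwargs

class BadLengthEd25519Sketch(BadLengthSketch):
    curve = 'Ed25519'
    algorithm = 15
    correct_length = 256
```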
1758 | 2101 | class TrustAnchorError(DomainNameAnalysisError): |
1759 | 2102 | pass |
1760 | 2103 |
0 | 0 | # |
1 | 1 | # This file is a part of DNSViz, a tool suite for DNS/DNSSEC monitoring, |
2 | # analysis, and visualization. This file (or some portion thereof) is a | |
3 | # derivative work authored by VeriSign, Inc., and created in 2014, based on | |
4 | # code originally developed at Sandia National Laboratories. | |
2 | # analysis, and visualization. | |
5 | 3 | # Created by Casey Deccio (casey@deccio.net) |
6 | 4 | # |
7 | 5 | # Copyright 2012-2014 Sandia Corporation. Under the terms of Contract |
10 | 8 | # |
11 | 9 | # Copyright 2014-2016 VeriSign, Inc. |
12 | 10 | # |
13 | # Copyright 2016-2017 Casey Deccio. | |
11 | # Copyright 2016-2019 Casey Deccio | |
14 | 12 | # |
15 | 13 | # DNSViz is free software; you can redistribute it and/or modify |
16 | 14 | # it under the terms of the GNU General Public License as published by |
37 | 35 | except ImportError: |
38 | 36 | from ordereddict import OrderedDict |
39 | 37 | |
40 | import dns.flags, dns.rdataclass, dns.rdatatype | |
38 | import dns.flags, dns.rcode, dns.rdataclass, dns.rdatatype | |
41 | 39 | |
42 | 40 | from dnsviz import crypto |
43 | 41 | import dnsviz.format as fmt |
56 | 54 | #XXX (this needs to be updated if new specification ever updates |
57 | 55 | # RFC 6891) |
58 | 56 | EDNS_DEFINED_FLAGS = dns.flags.DO |
57 | ||
58 | DNSSEC_KEY_LENGTHS_BY_ALGORITHM = { | |
59 | 12: 512, 13: 512, 14: 768, 15: 256, 16: 456, | |
60 | } | |
61 | DNSSEC_KEY_LENGTH_ERRORS = { | |
62 | 12: Errors.DNSKEYBadLengthGOST, 13: Errors.DNSKEYBadLengthECDSA256, | |
63 | 14: Errors.DNSKEYBadLengthECDSA384, 15: Errors.DNSKEYBadLengthEd25519, | |
64 | 16: Errors.DNSKEYBadLengthEd448, | |
65 | } | |
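The two tables above pair each fixed-length DNSSEC algorithm with its expected key length and the matching error class. A sketch of the intended lookup (assumed usage; the actual check happens at the DNSKEY validation site, and algorithms with variable-length keys such as RSA simply have no entry):

```python
# Sketch of the key-length lookup implied by the tables above.
DNSSEC_KEY_LENGTHS_BY_ALGORITHM = {
    12: 512,  # GOST R 34.10-2001
    13: 512,  # ECDSA P-256
    14: 768,  # ECDSA P-384
    15: 256,  # Ed25519
    16: 456,  # Ed448
}

def check_key_length(algorithm, length):
    # Returns the expected bit length if the key is the wrong size for a
    # fixed-length algorithm; returns None if the length is acceptable or
    # the algorithm has no fixed length (e.g. RSA).
    expected = DNSSEC_KEY_LENGTHS_BY_ALGORITHM.get(algorithm)
    if expected is not None and length != expected:
        return expected
    return None
```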
59 | 66 | |
60 | 67 | _logger = logging.getLogger(__name__) |
61 | 68 | |
86 | 93 | QUERY_CLASS = Q.TTLDistinguishingMultiQueryAggregateDNSResponse |
87 | 94 | |
88 | 95 | def __init__(self, *args, **kwargs): |
96 | ||
97 | self._strict_cookies = kwargs.pop('strict_cookies', False) | |
98 | ||
89 | 99 | super(OfflineDomainNameAnalysis, self).__init__(*args, **kwargs) |
90 | 100 | |
91 | 101 | if self.analysis_type != ANALYSIS_TYPE_AUTHORITATIVE: |
293 | 303 | if not hasattr(self, '_response_info') or self._response_info is None: |
294 | 304 | self._response_info = {} |
295 | 305 | if (name, rdtype) not in self._response_info: |
306 | self._response_info[(name, rdtype)] = None | |
296 | 307 | self._response_info[(name, rdtype)] = self._get_response_info(name, rdtype) |
297 | 308 | return self._response_info[(name, rdtype)] |
298 | 309 | |
559 | 570 | tup.append(name_tup) |
560 | 571 | |
561 | 572 | for response_info in response_info_list: |
562 | if (response_info.qname, response_info.rdtype) in processed: | |
563 | continue | |
564 | processed.add((response_info.qname, response_info.rdtype)) | |
565 | ||
566 | 573 | # if we've already done this one (above) then just move along. |
567 | 574 | # These were only done if the name is a zone. |
568 | 575 | if response_info.name_obj.is_zone() and \ |
720 | 727 | elif action == Q.RETRY_ACTION_ADD_EDNS_OPTION: |
721 | 728 | return self._server_responsive_with_condition(server, client, tcp, |
722 | 729 | lambda x: x.effective_edns >= 0 and \ |
723 | not [x for x in x.effective_edns_options if action_arg == x.otype] and \ | |
730 | not [y for y in x.effective_edns_options if action_arg == y.otype] and \ | |
724 | 731 | |
725 | 732 | ((x.effective_tcp and x.tcp_responsive) or \ |
726 | 733 | (not x.effective_tcp and x.udp_responsive)) and \ |
729 | 736 | elif action == Q.RETRY_ACTION_REMOVE_EDNS_OPTION: |
730 | 737 | return self._server_responsive_with_condition(server, client, tcp, |
731 | 738 | lambda x: x.effective_edns >= 0 and \ |
732 | [x for x in x.effective_edns_options if action_arg == x.otype] and \ | |
739 | [y for y in x.effective_edns_options if action_arg == y.otype] and \ | |
733 | 740 | |
734 | 741 | ((x.effective_tcp and x.tcp_responsive) or \ |
735 | 742 | (not x.effective_tcp and x.udp_responsive)) and \ |
878 | 885 | for server in query1.responses: |
879 | 886 | bailiwick = bailiwick_map.get(server, default_bailiwick) |
880 | 887 | for client in query1.responses[server]: |
881 | if query1.responses[server][client].is_referral(self.name, rdtype, bailiwick, proper=True): | |
888 | if query1.responses[server][client].is_referral(self.name, rdtype, query.rdclass, bailiwick, proper=True): | |
882 | 889 | self.yxdomain.add(self.name) |
883 | 890 | raise FoundYXDOMAIN |
884 | 891 | except FoundYXDOMAIN: |
908 | 915 | self.status = Status.NAME_STATUS_NXDOMAIN |
909 | 916 | break |
910 | 917 | |
911 | def _populate_response_errors(self, qname_obj, response, server, client, warnings, errors): | |
918 | def _populate_responsiveness_errors(self, qname_obj, response, server, client, warnings, errors): | |
912 | 919 | # if we had to make some change to elicit a response, find out why that |
913 | 920 | # was |
914 | 921 | change_err = None |
915 | edns_errs = [] | |
916 | 922 | if response.responsive_cause_index is not None: |
917 | 923 | retry = response.history[response.responsive_cause_index] |
918 | 924 | |
973 | 979 | retry.action_arg == dns.flags.CD: |
974 | 980 | pass |
975 | 981 | |
982 | # or if the RCODE was BADCOOKIE, and the COOKIE opt we sent | |
983 | # contained only a client cookie or an invalid server cookie, | |
984 | # then this was a reasonable response from a server that | |
985 | # supports cookies | |
986 | elif retry.cause_arg == 23 and \ | |
987 | response.server_cookie_status in (Q.DNS_COOKIE_CLIENT_COOKIE_ONLY, Q.DNS_COOKIE_SERVER_COOKIE_BAD) and \ | |
988 | retry.action == Q.RETRY_ACTION_UPDATE_DNS_COOKIE: | |
989 | pass | |
990 | ||
992 | # or if the RCODE was FORMERR, and the COOKIE opt we sent | |
993 | # contained a malformed cookie, then this was a reasonable | |
994 | # response from a server that supports cookies | |
995 | elif retry.cause_arg == dns.rcode.FORMERR and \ | |
996 | (retry.action == Q.RETRY_ACTION_DISABLE_EDNS or \ | |
997 | (retry.action == Q.RETRY_ACTION_REMOVE_EDNS_OPTION and retry.action_arg == 10)): | |
998 | pass | |
999 | ||
976 | 1000 | # otherwise, set the error class and instantiation kwargs |
977 | 1001 | # appropriately |
978 | 1002 | else: |
1087 | 1111 | group = warnings |
1088 | 1112 | Errors.DomainNameAnalysisError.insert_into_list(change_err, group, server, client, response) |
1089 | 1113 | |
1114 | def _populate_edns_errors(self, qname_obj, response, server, client, warnings, errors): | |
1115 | ||
1090 | 1116 | # if we actually got a message response (as opposed to timeout, network |
1091 | 1117 | # error, form error, etc.) |
1092 | if response.message is not None: | |
1093 | ||
1094 | # if the effective request used EDNS | |
1095 | if response.effective_edns >= 0: | |
1096 | # if the message response didn't use EDNS, then create an error | |
1097 | if response.message.edns < 0: | |
1098 | # if there were indicators that the server supported EDNS | |
1099 | # (e.g., by RRSIGs in the answer), then report it as such | |
1100 | if [x for x in response.message.answer if x.rdtype == dns.rdatatype.RRSIG]: | |
1101 | edns_errs.append(Errors.EDNSSupportNoOpt()) | |
1102 | # otherwise, simply report it as a server not responding | |
1103 | # properly to EDNS requests | |
1104 | else: | |
1105 | edns_errs.append(Errors.EDNSIgnored()) | |
1106 | ||
1107 | # the message response did use EDNS | |
1118 | if response.message is None: | |
1119 | return | |
1120 | ||
1121 | edns_errs = [] | |
1122 | ||
1123 | # if the effective request used EDNS | |
1124 | if response.effective_edns >= 0: | |
1125 | # if the message response didn't use EDNS, then create an error | |
1126 | if response.message.edns < 0: | |
1127 | # if there were indicators that the server supported EDNS | |
1128 | # (e.g., by RRSIGs in the answer), then report it as such | |
1129 | if [x for x in response.message.answer if x.rdtype == dns.rdatatype.RRSIG]: | |
1130 | edns_errs.append(Errors.EDNSSupportNoOpt()) | |
1131 | # otherwise, simply report it as a server not responding | |
1132 | # properly to EDNS requests | |
1108 | 1133 | else: |
1109 | if response.message.rcode() == dns.rcode.BADVERS: | |
1110 | # if the message response code was BADVERS, then the EDNS | |
1111 | # version in the response should have been less than | |
1112 | # that of the request | |
1113 | if response.message.edns >= response.effective_edns: | |
1114 | edns_errs.append(Errors.ImplementedEDNSVersionNotProvided(request_version=response.effective_edns, response_version=response.message.edns)) | |
1115 | ||
1116 | # if the message response used a version of EDNS other than | |
1117 | # that requested, then create an error (should have been | |
1118 | # answered with BADVERS) | |
1119 | elif response.message.edns != response.effective_edns: | |
1120 | edns_errs.append(Errors.EDNSVersionMismatch(request_version=response.effective_edns, response_version=response.message.edns)) | |
1121 | ||
1122 | # check that all EDNS flags are all zero, except for DO | |
1123 | undefined_edns_flags_set = (response.message.ednsflags & 0xffff) & ~EDNS_DEFINED_FLAGS | |
1124 | if undefined_edns_flags_set: | |
1125 | edns_errs.append(Errors.EDNSUndefinedFlagsSet(flags=undefined_edns_flags_set)) | |
1126 | ||
1134 | edns_errs.append(Errors.EDNSIgnored()) | |
1135 | ||
1136 | # the message response did use EDNS | |
1127 | 1137 | else: |
1128 | # if the effective request didn't use EDNS, and we got a | |
1129 | # message response with an OPT record | |
1130 | if response.message.edns >= 0: | |
1131 | edns_errs.append(Errors.GratuitousOPT()) | |
1138 | if response.message.rcode() == dns.rcode.BADVERS: | |
1139 | # if the message response code was BADVERS, then the EDNS | |
1140 | # version in the response should have been less than | |
1141 | # that of the request | |
1142 | if response.message.edns >= response.effective_edns: | |
1143 | edns_errs.append(Errors.ImplementedEDNSVersionNotProvided(request_version=response.effective_edns, response_version=response.message.edns)) | |
1144 | ||
1145 | # if the message response used a version of EDNS other than | |
1146 | # that requested, then create an error (should have been | |
1147 | # answered with BADVERS) | |
1148 | elif response.message.edns != response.effective_edns: | |
1149 | edns_errs.append(Errors.EDNSVersionMismatch(request_version=response.effective_edns, response_version=response.message.edns)) | |
1150 | ||
1151 | # check that all EDNS flags are all zero, except for DO | |
1152 | undefined_edns_flags_set = (response.message.ednsflags & 0xffff) & ~EDNS_DEFINED_FLAGS | |
1153 | if undefined_edns_flags_set: | |
1154 | edns_errs.append(Errors.EDNSUndefinedFlagsSet(flags=undefined_edns_flags_set)) | |
1155 | ||
1156 | else: | |
1157 | # if the effective request didn't use EDNS, and we got a | |
1158 | # message response with an OPT record | |
1159 | if response.message.edns >= 0: | |
1160 | edns_errs.append(Errors.GratuitousOPT()) | |
1132 | 1161 | |
1133 | 1162 | for edns_err in edns_errs: |
1134 | 1163 | Errors.DomainNameAnalysisError.insert_into_list(edns_err, warnings, server, client, response) |
1135 | 1164 | |
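The undefined-flags test in `_populate_edns_errors` masks the 16 EDNS flag bits and clears the DO bit, the only flag RFC 6891 defines. A self-contained sketch (`0x8000` is the value of dnspython's `dns.flags.DO`):

```python
# Sketch of the undefined-EDNS-flags check performed above: only DO (0x8000)
# is defined among the 16 EDNS flag bits, so any other set bit is reported.
EDNS_DEFINED_FLAGS = 0x8000  # dns.flags.DO

def undefined_edns_flags(ednsflags):
    # bits above the low 16 carry the EDNS version and extended RCODE,
    # so they are masked off before the comparison
    return (ednsflags & 0xffff) & ~EDNS_DEFINED_FLAGS
```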
1165 | def _populate_cookie_errors(self, qname_obj, response, server, client, warnings, errors): | |
1166 | ||
1167 | if response.message is None: | |
1168 | return | |
1169 | ||
1170 | cookie_errs = [] | |
1171 | ||
1172 | try: | |
1173 | cookie_opt = [o for o in response.effective_edns_options if o.otype == 10][0] | |
1174 | except IndexError: | |
1175 | cookie_opt = None | |
1176 | ||
1177 | try: | |
1178 | cookie_opt_from_server = [o for o in response.message.options if o.otype == 10][0] | |
1179 | except IndexError: | |
1180 | cookie_opt_from_server = None | |
1181 | ||
1182 | # supports_cookies is a boolean value that indicates whether the server | |
1183 | # supports DNS cookies. Note that we are not looking for the value of | |
1184 | # the server cookie itself, only whether the server supports cookies, | |
1185 | # so we don't need to use get_cookie_jar_mapping(). | |
1186 | supports_cookies = qname_obj is not None and server in qname_obj.cookie_jar | |
1187 | ||
1188 | # RFC 7873: 5.2.1. No OPT RR or No COOKIE Option | |
1189 | if response.query.edns < 0 or cookie_opt is None: # response.effective_server_cookie_status == Q.DNS_COOKIE_NO_COOKIE | |
1190 | if cookie_opt_from_server is not None: | |
1191 | cookie_errs.append(Errors.GratuitousCookie()) | |
1192 | ||
1193 | elif supports_cookies: | |
1194 | # The following are scenarios for DNS cookies. | |
1195 | ||
1196 | # RFC 7873: 5.2.2. Malformed COOKIE Option | |
1197 | if response.server_cookie_status == Q.DNS_COOKIE_IMPROPER_LENGTH: | |
1198 | ||
1199 | issued_formerr = False | |
1200 | if response.effective_server_cookie_status == Q.DNS_COOKIE_IMPROPER_LENGTH: | |
1201 | if response.message.rcode() == dns.rcode.FORMERR: | |
1202 | # The query resulting in the response we got was sent | |
1203 | # with a COOKIE option with improper length, and the | |
1204 | # return code for the response was FORMERR. | |
1205 | issued_formerr = True | |
1206 | elif response.responsive_cause_index is not None: | |
1207 | retry = response.history[response.responsive_cause_index] | |
1208 | if retry.cause == Q.RETRY_CAUSE_RCODE and \ | |
1209 | retry.cause_arg == dns.rcode.FORMERR and \ | |
1210 | (retry.action == Q.RETRY_ACTION_DISABLE_EDNS or \ | |
1211 | (retry.action == Q.RETRY_ACTION_REMOVE_EDNS_OPTION and retry.action_arg == 10)): | |
1212 | # We started with a COOKIE opt with improper length, | |
1213 | # and, in response to FORMERR, from the server, we | |
1214 | # changed EDNS behavior either by disabling EDNS or | |
1215 | # removing the DNS COOKIE OPT, which resulted in us | |
1216 | # getting a legitimate response. | |
1217 | issued_formerr = True | |
1218 | if not issued_formerr: | |
1219 | cookie_errs.append(Errors.MalformedCookieWithoutFORMERR()) | |
1220 | ||
1221 | # RFC 7873: 5.2.3. Only a Client Cookie | |
1222 | # RFC 7873: 5.2.4. A Client Cookie and an Invalid Server Cookie | |
1223 | if response.server_cookie_status in (Q.DNS_COOKIE_CLIENT_COOKIE_ONLY, Q.DNS_COOKIE_SERVER_COOKIE_BAD): | |
1224 | if response.server_cookie_status == Q.DNS_COOKIE_CLIENT_COOKIE_ONLY: | |
1225 | err_cls = Errors.NoServerCookieWithoutBADCOOKIE | |
1226 | else: | |
1227 | err_cls = Errors.InvalidServerCookieWithoutBADCOOKIE | |
1228 | ||
1229 | issued_badcookie = False | |
1230 | if response.effective_server_cookie_status in (Q.DNS_COOKIE_CLIENT_COOKIE_ONLY, Q.DNS_COOKIE_SERVER_COOKIE_BAD): | |
1231 | # The query resulting in the response we got was sent with | |
1232 | # a bad server cookie. | |
1233 | if cookie_opt_from_server is None: | |
1234 | cookie_errs.append(Errors.NoCookieOption()) | |
1235 | elif len(cookie_opt_from_server.data) == 8: | |
1236 | cookie_errs.append(Errors.NoServerCookie()) | |
1237 | ||
1238 | if response.message.rcode() == 23: | |
1239 | # The query resulting in the response we got was sent | |
1240 | # with an invalid server cookie, and the result was | |
1241 | # BADCOOKIE. | |
1242 | issued_badcookie = True | |
1243 | ||
1244 | elif response.responsive_cause_index is not None: | |
1245 | retry = response.history[response.responsive_cause_index] | |
1246 | if retry.cause == Q.RETRY_CAUSE_RCODE and \ | |
1247 | retry.cause_arg == 23 and \ | |
1248 | retry.action == Q.RETRY_ACTION_UPDATE_DNS_COOKIE: | |
1249 | # We started with a COOKIE opt with an invalid server | |
1250 | # cookie, and, in response to a BADCOOKIE response from | |
1251 | # the server, we updated to a fresh DNS server cookie, | |
1252 | # which resulted in us getting a legitimate response. | |
1253 | issued_badcookie = True | |
1254 | ||
1255 | if self._strict_cookies and not issued_badcookie: | |
1256 | cookie_errs.append(err_cls()) | |
1257 | ||
1258 | # RFC 7873: 5.2.5. A Client Cookie and a Valid Server Cookie | |
1259 | if response.effective_server_cookie_status == Q.DNS_COOKIE_SERVER_COOKIE_FRESH: | |
1260 | # The query resulting in the response we got was sent with a | |
1261 | # valid (fresh) server cookie. | |
1262 | if cookie_opt_from_server is None: | |
1263 | cookie_errs.append(Errors.NoCookieOption()) | |
1264 | elif len(cookie_opt_from_server.data) == 8: | |
1265 | cookie_errs.append(Errors.NoServerCookie()) | |
1266 | ||
1267 | if cookie_opt is not None and cookie_opt_from_server is not None: | |
1268 | # RFC 7873: 5.3. Client cookie does not match | |
1269 | if len(cookie_opt_from_server.data) >= 8 and \ | |
1270 | cookie_opt_from_server.data[:8] != cookie_opt.data[:8]: | |
1271 | cookie_errs.append(Errors.ClientCookieMismatch()) | |
1272 | ||
1273 | # RFC 7873: 5.3. Cookie returned by the server has an invalid length | |
1274 | if len(cookie_opt_from_server.data) < 8 or \ | |
1275 | len(cookie_opt_from_server.data) > 40: | |
1276 | cookie_errs.append(Errors.CookieInvalidLength(length=len(cookie_opt_from_server.data))) | |
1277 | ||
1278 | for cookie_err in cookie_errs: | |
1279 | Errors.DomainNameAnalysisError.insert_into_list(cookie_err, warnings, server, client, response) | |
1280 | ||
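The length checks in `_populate_cookie_errors` rest on the RFC 7873 COOKIE option layout: bytes 0-7 are the client cookie, and any remainder is the server cookie, so valid option data is exactly 8 bytes or 16-40 bytes. A sketch of that split (illustrative helper, not part of dnsviz; note the code above applies the looser 8-40 bound when warning):

```python
# RFC 7873 COOKIE option layout assumed by the checks above:
# bytes 0-7 = client cookie; bytes 8-39 (optional) = server cookie.
def split_cookie(data):
    if len(data) != 8 and not 16 <= len(data) <= 40:
        raise ValueError('invalid COOKIE option length: %d' % len(data))
    return data[:8], data[8:]
```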
1281 | def _populate_response_errors(self, qname_obj, response, server, client, warnings, errors): | |
1136 | 1282 | if qname_obj is not None: |
1137 | 1283 | # if the response was complete (not truncated), then mark any |
1138 | 1284 | # response flag issues as errors. Otherwise, mark them as |
1151 | 1297 | # check for NOERROR, inconsistent with NXDOMAIN in ancestor |
1152 | 1298 | if response.is_complete_response() and response.message.rcode() == dns.rcode.NOERROR and qname_obj.nxdomain_ancestor is not None: |
1153 | 1299 | Errors.DomainNameAnalysisError.insert_into_list(Errors.InconsistentNXDOMAINAncestry(qname=fmt.humanize_name(response.query.qname), ancestor_qname=fmt.humanize_name(qname_obj.nxdomain_ancestor.name)), errors, server, client, response) |
1300 | ||
1301 | def _populate_foreign_class_warnings(self, qname_obj, response, server, client, warnings, errors): | |
1302 | query = response.query | |
1303 | cls = query.rdclass | |
1304 | ||
1305 | if response.message is None: | |
1306 | return | |
1307 | ||
1308 | # if there was foreign class data, then warn about it | |
1309 | ans_cls = [r.rdclass for r in response.message.answer if r.rdclass != cls] | |
1310 | auth_cls = [r.rdclass for r in response.message.authority if r.rdclass != cls] | |
1311 | add_cls = [r.rdclass for r in response.message.additional if r.rdclass != cls] | |
1312 | if ans_cls: | |
1313 | Errors.DomainNameAnalysisError.insert_into_list(Errors.ForeignClassDataAnswer(cls=dns.rdataclass.to_text(ans_cls[0])), warnings, server, client, response) | |
1314 | if auth_cls: | |
1315 | Errors.DomainNameAnalysisError.insert_into_list(Errors.ForeignClassDataAuthority(cls=dns.rdataclass.to_text(auth_cls[0])), warnings, server, client, response) | |
1316 | if add_cls: | |
1317 | Errors.DomainNameAnalysisError.insert_into_list(Errors.ForeignClassDataAdditional(cls=dns.rdataclass.to_text(add_cls[0])), warnings, server, client, response) | |
1318 | ||
1319 | def _populate_case_preservation_warnings(self, qname_obj, response, server, client, warnings, errors): | |
1320 | query = response.query | |
1321 | msg = response.message | |
1322 | ||
1323 | # if there was a case mismatch, then warn about it | |
1324 | if msg.question and query.qname.to_text() != msg.question[0].name.to_text(): | |
1325 | Errors.DomainNameAnalysisError.insert_into_list(Errors.CasePreservationError(qname=fmt.humanize_name(query.qname, canonicalize=False)), warnings, server, client, response) | |
1154 | 1326 | |
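The case check above compares the textual query name against the echoed Question-section name. The point is that a server must echo the name byte-for-byte, so DNS 0x20 mixed-case randomization in a query must survive into the response. A trivial sketch of the comparison (hypothetical helper):

```python
# Sketch of the case-preservation comparison above: exact, case-sensitive
# equality of the sent query name and the echoed Question-section name.
def case_preserved(sent_qname, echoed_qname):
    return sent_qname == echoed_qname
```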
1155 | 1327 | def _populate_wildcard_status(self, query, rrset_info, qname_obj, supported_algs): |
1156 | 1328 | for wildcard_name in rrset_info.wildcard_info: |
1357 | 1529 | if populate_response_errors: |
1358 | 1530 | for server,client in rrset_info.servers_clients: |
1359 | 1531 | for response in rrset_info.servers_clients[(server,client)]: |
1532 | self._populate_responsiveness_errors(qname_obj, response, server, client, self.rrset_warnings[rrset_info], self.rrset_errors[rrset_info]) | |
1360 | 1533 | self._populate_response_errors(qname_obj, response, server, client, self.rrset_warnings[rrset_info], self.rrset_errors[rrset_info]) |
1534 | self._populate_edns_errors(qname_obj, response, server, client, self.rrset_warnings[rrset_info], self.rrset_errors[rrset_info]) | |
1535 | self._populate_cookie_errors(qname_obj, response, server, client, self.rrset_warnings[rrset_info], self.rrset_errors[rrset_info]) | |
1536 | self._populate_foreign_class_warnings(qname_obj, response, server, client, self.rrset_warnings[rrset_info], self.rrset_errors[rrset_info]) | |
1537 | self._populate_case_preservation_warnings(qname_obj, response, server, client, self.rrset_warnings[rrset_info], self.rrset_errors[rrset_info]) | |
1361 | 1538 | |
1362 | 1539 | def _populate_invalid_response_status(self, query): |
1363 | 1540 | self.response_errors[query] = [] |
1405 | 1582 | for truncated_info in query.truncated_info: |
1406 | 1583 | for server, client in truncated_info.servers_clients: |
1407 | 1584 | for response in truncated_info.servers_clients[(server, client)]: |
1585 | self._populate_responsiveness_errors(self, response, server, client, self.response_warnings[query], self.response_errors[query]) | |
1408 | 1586 | self._populate_response_errors(self, response, server, client, self.response_warnings[query], self.response_errors[query]) |
1587 | self._populate_edns_errors(self, response, server, client, self.response_warnings[query], self.response_errors[query]) | |
1588 | self._populate_cookie_errors(self, response, server, client, self.response_warnings[query], self.response_errors[query]) | |
1589 | self._populate_foreign_class_warnings(self, response, server, client, self.response_warnings[query], self.response_errors[query]) | |
1590 | self._populate_case_preservation_warnings(self, response, server, client, self.response_warnings[query], self.response_errors[query]) | |
1409 | 1591 | |
1410 | 1592 | def _populate_rrsig_status_all(self, supported_algs): |
1411 | 1593 | self.rrset_warnings = {} |
1678 | 1860 | else: |
1679 | 1861 | return |
1680 | 1862 | |
1863 | ds_rrset_exists = False | |
1681 | 1864 | secure_path = False |
1682 | 1865 | |
1683 | 1866 | bailiwick_map, default_bailiwick = self.get_bailiwick_mapping() |
1698 | 1881 | bailiwick = bailiwick_map.get(server, default_bailiwick) |
1699 | 1882 | for client in dnskey_query.responses[server]: |
1700 | 1883 | response = dnskey_query.responses[server][client] |
1701 | if response.is_valid_response() and response.is_complete_response() and not response.is_referral(self.name, dns.rdatatype.DNSKEY, bailiwick): | |
1884 | if response.is_valid_response() and response.is_complete_response() and not response.is_referral(self.name, dns.rdatatype.DNSKEY, dnskey_query.rdclass, bailiwick): | |
1702 | 1885 | dnskey_server_client_responses.add((server,client,response)) |
1703 | 1886 | |
1704 | 1887 | for ds_rrset_info in ds_rrset_answer_info: |
1705 | 1888 | # there are CNAMEs that show up here... |
1706 | 1889 | if not (ds_rrset_info.rrset.name == name and ds_rrset_info.rrset.rdtype == rdtype): |
1707 | 1890 | continue |
1891 | ds_rrset_exists = True | |
1708 | 1892 | |
1709 | 1893 | # for each set of DS records provided by one or more servers, |
1710 | 1894 | # identify the set of DNSSEC algorithms and the set of digest |
1820 | 2004 | |
1821 | 2005 | if self.delegation_status[rdtype] is None: |
1822 | 2006 | if ds_rrset_answer_info: |
1823 | if secure_path: | |
2007 | if ds_rrset_exists: | |
2008 | # DS RRs exist | |
2009 | if secure_path: | |
2010 | # If any DNSSEC algorithms are supported, then status | |
2011 | # is bogus because there should have been a matching KSK. | 
2012 | self.delegation_status[rdtype] = Status.DELEGATION_STATUS_BOGUS | |
2013 | else: | |
2014 | # If no algorithms are supported, then this is a | 
2015 | # provably insecure delegation. | |
2016 | self.delegation_status[rdtype] = Status.DELEGATION_STATUS_INSECURE | |
2017 | else: | |
2018 | # Only CNAME returned for DS query. With no DS records and | |
2019 | # no valid non-existence proof, the delegation is bogus. | |
1824 | 2020 | self.delegation_status[rdtype] = Status.DELEGATION_STATUS_BOGUS |
1825 | else: | |
1826 | self.delegation_status[rdtype] = Status.DELEGATION_STATUS_INSECURE | |
1827 | 2021 | elif self.parent.signed: |
1828 | 2022 | self.delegation_status[rdtype] = Status.DELEGATION_STATUS_BOGUS |
1829 | 2023 | for nsec_status_list in [self.nxdomain_status[n] for n in self.nxdomain_status if n.qname == name and n.rdtype == dns.rdatatype.DS] + \ |
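The hunk above refines how a DS answer maps to a delegation status. A minimal sketch of that decision tree, under the assumption that an unsigned parent yields an insecure delegation (the `else` branch is outside this hunk); the function and status strings are illustrative, not the DNSViz API:

```python
# Hedged re-creation of the delegation-status logic in the hunk above.
INSECURE, BOGUS = 'INSECURE', 'BOGUS'

def delegation_status(ds_answer_info, ds_rrset_exists, secure_path, parent_signed):
    if ds_answer_info:
        if ds_rrset_exists:
            # DS RRs exist: if any DNSSEC algorithm is supported, a matching
            # KSK should have been found, so its absence is bogus; with no
            # supported algorithms the delegation is provably insecure.
            return BOGUS if secure_path else INSECURE
        # Only a CNAME came back for the DS query: no DS records and no
        # valid non-existence proof, so the delegation is bogus.
        return BOGUS
    if parent_signed:
        # No DS answer at all under a signed parent is bogus.
        return BOGUS
    return INSECURE

print(delegation_status([], False, False, False))
```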
1962 | 2156 | servers_without_soa.add((server, client, response)) |
1963 | 2157 | servers_missing_nsec.add((server, client, response)) |
1964 | 2158 | |
2159 | self._populate_responsiveness_errors(qname_obj, response, server, client, warnings, errors) | |
1965 | 2160 | self._populate_response_errors(qname_obj, response, server, client, warnings, errors) |
2161 | self._populate_edns_errors(qname_obj, response, server, client, warnings, errors) | |
2162 | self._populate_cookie_errors(qname_obj, response, server, client, warnings, errors) | |
2163 | self._populate_foreign_class_warnings(qname_obj, response, server, client, warnings, errors) | |
2164 | self._populate_case_preservation_warnings(qname_obj, response, server, client, warnings, errors) | |
1966 | 2165 | |
1967 | 2166 | for soa_rrset_info in neg_response_info.soa_rrset_info: |
1968 | 2167 | soa_owner_name = soa_rrset_info.rrset.name |
2175 | 2374 | for (server,client,response) in servers_clients_without: |
2176 | 2375 | err.add_server_client(server, client, response) |
2177 | 2376 | |
2377 | if dnskey.rdata.algorithm in DNSSEC_KEY_LENGTHS_BY_ALGORITHM and \ | |
2378 | dnskey.key_len != DNSSEC_KEY_LENGTHS_BY_ALGORITHM[dnskey.rdata.algorithm]: | |
2379 | dnskey.errors.append(DNSSEC_KEY_LENGTH_ERRORS[dnskey.rdata.algorithm](length=dnskey.key_len)) | |
2380 | ||
2178 | 2381 | if trusted_keys_rdata and not trusted_keys_self_signing: |
2179 | 2382 | self.zone_errors.append(Errors.NoTrustAnchorSigning(zone=fmt.humanize_name(self.zone.name))) |
2180 | 2383 | |
2279 | 2482 | |
2280 | 2483 | self.response_component_status = response_component_status |
2281 | 2484 | |
2282 | def _serialize_rrset_info(self, rrset_info, consolidate_clients=False, show_servers=True, loglevel=logging.DEBUG, html_format=False): | |
2485 | def _serialize_rrset_info(self, rrset_info, consolidate_clients=False, show_servers=True, show_server_meta=True, loglevel=logging.DEBUG, html_format=False): | |
2283 | 2486 | d = OrderedDict() |
2284 | 2487 | |
2285 | 2488 | rrsig_list = [] |
2345 | 2548 | |
2346 | 2549 | if loglevel <= logging.INFO and show_servers: |
2347 | 2550 | servers = tuple_to_dict(rrset_info.servers_clients) |
2551 | server_list = list(servers) | |
2552 | server_list.sort() | |
2348 | 2553 | if consolidate_clients: |
2349 | servers = list(servers) | |
2350 | servers.sort() | |
2554 | servers = server_list | |
2351 | 2555 | d['servers'] = servers |
2352 | 2556 | |
2353 | tags = set() | |
2354 | for server,client in rrset_info.servers_clients: | |
2355 | for response in rrset_info.servers_clients[(server,client)]: | |
2356 | tags.add(response.effective_query_tag()) | |
2357 | d['query_options'] = list(tags) | |
2358 | d['query_options'].sort() | |
2557 | if show_server_meta: | |
2558 | tags = set() | |
2559 | cookie_tags = {} | |
2560 | for server,client in rrset_info.servers_clients: | |
2561 | for response in rrset_info.servers_clients[(server,client)]: | |
2562 | tags.add(response.effective_query_tag()) | |
2563 | cookie_tags[server] = OrderedDict(( | |
2564 | ('request', response.request_cookie_tag()), | |
2565 | ('response', response.response_cookie_tag()), | |
2566 | )) | |
2567 | d['query_options'] = list(tags) | |
2568 | d['query_options'].sort() | |
2569 | ||
2570 | cookie_tag_mapping = OrderedDict() | |
2571 | for server in server_list: | |
2572 | cookie_tag_mapping[server] = cookie_tags[server] | |
2573 | d['cookie_status'] = cookie_tag_mapping | |
2359 | 2574 | |
2360 | 2575 | if self.rrset_warnings[rrset_info] and loglevel <= logging.WARNING: |
2361 | 2576 | d['warnings'] = [w.serialize(consolidate_clients=consolidate_clients, html_format=html_format) for w in self.rrset_warnings[rrset_info]] |
2376 | 2591 | |
2377 | 2592 | soa_list = [] |
2378 | 2593 | for soa_rrset_info in neg_response_info.soa_rrset_info: |
2379 | rrset_serialized = self._serialize_rrset_info(soa_rrset_info, consolidate_clients=consolidate_clients, loglevel=loglevel, html_format=html_format) | |
2594 | rrset_serialized = self._serialize_rrset_info(soa_rrset_info, consolidate_clients=consolidate_clients, show_server_meta=False, loglevel=loglevel, html_format=html_format) | |
2380 | 2595 | if rrset_serialized: |
2381 | 2596 | soa_list.append(rrset_serialized) |
2382 | 2597 | |
2399 | 2614 | |
2400 | 2615 | if loglevel <= logging.INFO: |
2401 | 2616 | servers = tuple_to_dict(neg_response_info.servers_clients) |
2617 | server_list = list(servers) | |
2618 | server_list.sort() | |
2402 | 2619 | if consolidate_clients: |
2403 | servers = list(servers) | |
2404 | servers.sort() | |
2620 | servers = server_list | |
2405 | 2621 | d['servers'] = servers |
2406 | 2622 | |
2407 | 2623 | tags = set() |
2624 | cookie_tags = {} | |
2408 | 2625 | for server,client in neg_response_info.servers_clients: |
2409 | 2626 | for response in neg_response_info.servers_clients[(server,client)]: |
2410 | 2627 | tags.add(response.effective_query_tag()) |
2628 | cookie_tags[server] = OrderedDict(( | |
2629 | ('request', response.request_cookie_tag()), | |
2630 | ('response', response.response_cookie_tag()), | |
2631 | )) | |
2411 | 2632 | d['query_options'] = list(tags) |
2412 | 2633 | d['query_options'].sort() |
2634 | ||
2635 | cookie_tag_mapping = OrderedDict() | |
2636 | for server in server_list: | |
2637 | cookie_tag_mapping[server] = cookie_tags[server] | |
2638 | d['cookie_status'] = cookie_tag_mapping | |
2413 | 2639 | |
2414 | 2640 | if warnings[neg_response_info] and loglevel <= logging.WARNING: |
2415 | 2641 | d['warnings'] = [w.serialize(consolidate_clients=consolidate_clients, html_format=html_format) for w in warnings[neg_response_info]] |
0 | 0 | # |
1 | 1 | # This file is a part of DNSViz, a tool suite for DNS/DNSSEC monitoring, |
2 | # analysis, and visualization. This file (or some portion thereof) is a | |
3 | # derivative work authored by VeriSign, Inc., and created in 2014, based on | |
4 | # code originally developed at Sandia National Laboratories. | |
2 | # analysis, and visualization. | |
5 | 3 | # Created by Casey Deccio (casey@deccio.net) |
6 | 4 | # |
7 | 5 | # Copyright 2012-2014 Sandia Corporation. Under the terms of Contract |
9 | 7 | # certain rights in this software. |
10 | 8 | # |
11 | 9 | # Copyright 2014-2016 VeriSign, Inc. |
10 | # | |
11 | # Copyright 2016-2019 Casey Deccio | |
12 | 12 | # |
13 | 13 | # DNSViz is free software; you can redistribute it and/or modify |
14 | 14 | # it under the terms of the GNU General Public License as published by |
26 | 26 | |
27 | 27 | from __future__ import unicode_literals |
28 | 28 | |
29 | import binascii | |
29 | 30 | import datetime |
30 | 31 | import logging |
31 | 32 | import random |
54 | 55 | |
55 | 56 | _logger = logging.getLogger(__name__) |
56 | 57 | |
57 | DNS_RAW_VERSION = 1.1 | |
58 | DNS_RAW_VERSION = 1.2 | |
58 | 59 | |
59 | 60 | class NetworkConnectivityException(Exception): |
60 | 61 | pass |
78 | 79 | PROTO_LABEL_RE = re.compile(r'^_(tcp|udp|sctp)$') |
79 | 80 | |
80 | 81 | WILDCARD_EXPLICIT_DELEGATION = dns.name.from_text('*') |
82 | ||
83 | COOKIE_STANDIN = binascii.unhexlify('cccccccccccccccc') | |
84 | COOKIE_BAD = binascii.unhexlify('bbbbbbbbbbbbbbbb') | |
81 | 85 | |
82 | 86 | ANALYSIS_TYPE_AUTHORITATIVE = 0 |
83 | 87 | ANALYSIS_TYPE_RECURSIVE = 1 |
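The new `COOKIE_STANDIN`/`COOKIE_BAD` constants above are fixed 8-byte values substituted for the variable client half of a DNS COOKIE option. A hedged sketch of why a fixed stand-in helps; `substitute_cookie()` is a hypothetical helper, not part of DNSViz:

```python
import binascii

# Illustrative re-creation of the constants from the hunk above: a fixed
# 8-byte client-cookie placeholder (0xcc repeated) and a deliberately
# invalid cookie (0xbb repeated).
COOKIE_STANDIN = binascii.unhexlify('cccccccccccccccc')
COOKIE_BAD = binascii.unhexlify('bbbbbbbbbbbbbbbb')

def substitute_cookie(wire_cookie, standin=COOKIE_STANDIN):
    # A DNS COOKIE option is an 8-byte client cookie, optionally followed
    # by an 8-40 byte server cookie.  Swapping the per-run client half for
    # a fixed stand-in keeps serialized analyses deterministic; the server
    # half, if present, is preserved.
    return standin + wire_cookie[8:]

print(binascii.hexlify(substitute_cookie(b'\x01' * 8 + b'\x02' * 8)).decode())
```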
97 | 101 | class OnlineDomainNameAnalysis(object): |
98 | 102 | QUERY_CLASS = Q.MultiQuery |
99 | 103 | |
100 | def __init__(self, name, stub=False, analysis_type=ANALYSIS_TYPE_AUTHORITATIVE): | |
104 | def __init__(self, name, stub=False, analysis_type=ANALYSIS_TYPE_AUTHORITATIVE, cookie_standin=None, cookie_bad=None): | |
101 | 105 | |
102 | 106 | ################################################## |
103 | 107 | # General attributes |
108 | 112 | self.analysis_type = analysis_type |
109 | 113 | self.stub = stub |
110 | 114 | |
115 | # Attributes related to DNS cookies | 
116 | if cookie_standin is None: | |
117 | cookie_standin = COOKIE_STANDIN | |
118 | self.cookie_standin = cookie_standin | |
119 | if cookie_bad is None: | |
120 | cookie_bad = COOKIE_BAD | |
121 | self.cookie_bad = cookie_bad | |
122 | ||
111 | 123 | # a class for constructing the queries |
112 | 124 | self._query_cls = self.QUERY_CLASS |
113 | 125 | |
118 | 130 | self.analysis_start = None |
119 | 131 | self.analysis_end = None |
120 | 132 | |
121 | # The record type queried with the name when eliciting a referral. | |
133 | # The record types queried with the name when eliciting a referral, | |
134 | # eliciting authority section NS records, and eliciting DNS cookies | |
122 | 135 | # (serialized). |
123 | 136 | self.referral_rdtype = None |
137 | self.auth_rdtype = None | |
138 | self.cookie_rdtype = None | |
124 | 139 | |
125 | 140 | # Whether or not the delegation was specified explicitly or learned |
126 | 141 | # by delegation. This is for informational purposes more than |
181 | 196 | self._auth_servers_clients = set() |
182 | 197 | self._valid_servers_clients_udp = set() |
183 | 198 | self._valid_servers_clients_tcp = set() |
199 | ||
200 | # A mapping of server to server-provided DNS cookie | |
201 | self.cookie_jar = {} | |
184 | 202 | |
185 | 203 | def __repr__(self): |
186 | 204 | return '<%s %s>' % (self.__class__.__name__, self.__str__()) |
286 | 304 | self._bailiwick_mapping = dict([(s,self.parent_name()) for s in self.parent.get_auth_or_designated_servers()]), self.name |
287 | 305 | return self._bailiwick_mapping |
288 | 306 | |
307 | def get_cookie_jar_mapping(self): | |
308 | if not hasattr(self, '_cookie_jar_mapping') or self._cookie_jar_mapping is None: | |
309 | if self.parent is None: | |
310 | self._cookie_jar_mapping = {}, self.cookie_jar | |
311 | else: | |
312 | self._cookie_jar_mapping = dict([(s,self.parent.cookie_jar) for s in self.parent.get_auth_or_designated_servers()]), self.cookie_jar | |
313 | return self._cookie_jar_mapping | |
314 | ||
289 | 315 | def _add_glue_ip_mapping(self, response): |
290 | 316 | '''Extract a mapping of NS targets to IP addresses from A and AAAA |
291 | 317 | records in the additional section of a referral.''' |
334 | 360 | return |
335 | 361 | for ns in self.get_ns_names_in_child().difference(self.get_ns_names_in_parent()): |
336 | 362 | self.ns_dependencies[ns] = None |
363 | ||
364 | def _set_server_cookies(self, response, server): | |
365 | server_cookie = response.get_server_cookie() | |
366 | if server_cookie is not None and server not in self.cookie_jar: | |
367 | self.cookie_jar[server] = server_cookie | |
337 | 368 | |
338 | 369 | def _process_response_answer_rrset(self, rrset, query, response): |
339 | 370 | if query.qname in (self.name, self.dlv_name): |
363 | 394 | if rrset.rdtype == dns.rdatatype.CNAME: |
364 | 395 | self._handle_cname_response(rrset) |
365 | 396 | |
366 | def _process_response(self, response, server, client, query, bailiwick, detect_ns): | |
397 | def _process_response(self, response, server, client, query, bailiwick, detect_ns, detect_cookies): | |
367 | 398 | '''Process a DNS response from a query, setting and updating instance |
368 | 399 | variables appropriately, and calling helper methods as necessary.''' |
369 | 400 | |
370 | 401 | if response.message is None: |
371 | 402 | return |
403 | ||
404 | if detect_cookies: | |
405 | self._set_server_cookies(response, server) | |
372 | 406 | |
373 | 407 | is_authoritative = response.is_authoritative() |
374 | 408 | |
413 | 447 | |
414 | 448 | # look for SOA in authority section, in the case of negative responses |
415 | 449 | try: |
416 | soa_rrset = [x for x in response.message.authority if x.rdtype == dns.rdatatype.SOA][0] | |
450 | soa_rrset = [x for x in response.message.authority if x.rdtype == dns.rdatatype.SOA and x.rdclass == query.rdclass][0] | |
417 | 451 | if soa_rrset.name == self.name: |
418 | 452 | self.has_soa = True |
419 | 453 | except IndexError: |
422 | 456 | if query.qname == self.name and detect_ns: |
423 | 457 | # if this is a referral, also grab the referral information, if it |
424 | 458 | # pertains to this name (could alternatively be a parent) |
425 | if response.is_referral(query.qname, query.rdtype, bailiwick): | |
459 | if response.is_referral(query.qname, query.rdtype, query.rdclass, bailiwick): | |
426 | 460 | try: |
427 | rrset = response.message.find_rrset(response.message.authority, self.name, dns.rdataclass.IN, dns.rdatatype.NS) | |
461 | rrset = response.message.find_rrset(response.message.authority, self.name, query.rdclass, dns.rdatatype.NS) | |
428 | 462 | except KeyError: |
429 | 463 | pass |
430 | 464 | else: |
434 | 468 | # if it is an (authoritative) answer that has authority information, then add it |
435 | 469 | else: |
436 | 470 | try: |
437 | rrset = response.message.find_rrset(response.message.authority, query.qname, dns.rdataclass.IN, dns.rdatatype.NS) | |
471 | rrset = response.message.find_rrset(response.message.authority, query.qname, query.rdclass, dns.rdatatype.NS) | |
438 | 472 | self._handle_ns_response(rrset, is_authoritative and not self.explicit_delegation) |
439 | 473 | except KeyError: |
440 | 474 | pass |
450 | 484 | if ip is not None: |
451 | 485 | self._auth_ns_ip_mapping[name].add(ip) |
452 | 486 | |
453 | def add_query(self, query, detect_ns): | |
487 | def add_query(self, query, detect_ns, detect_cookies): | |
454 | 488 | '''Process a DNS query and its responses, setting and updating instance |
455 | 489 | variables appropriately, and calling helper methods as necessary.''' |
456 | 490 | |
486 | 520 | if response.tcp_responsive: |
487 | 521 | self._responsive_servers_clients_tcp.add((server, client)) |
488 | 522 | |
489 | self._process_response(query.responses[server][client], server, client, query, bailiwick, detect_ns) | |
523 | self._process_response(query.responses[server][client], server, client, query, bailiwick, detect_ns, detect_cookies) | |
490 | 524 | |
491 | 525 | def get_glue_ip_mapping(self): |
492 | 526 | '''Return a reference to the mapping of targets of delegation records |
766 | 800 | d[name_str] = OrderedDict() |
767 | 801 | d[name_str]['type'] = analysis_types[self.analysis_type] |
768 | 802 | d[name_str]['stub'] = self.stub |
803 | if self.cookie_standin is not None: | |
804 | d[name_str]['cookie_standin'] = lb2s(binascii.hexlify(self.cookie_standin)) | |
805 | if self.cookie_bad is not None: | |
806 | d[name_str]['cookie_bad'] = lb2s(binascii.hexlify(self.cookie_bad)) | |
769 | 807 | d[name_str]['analysis_start'] = fmt.datetime_to_str(self.analysis_start) |
770 | 808 | d[name_str]['analysis_end'] = fmt.datetime_to_str(self.analysis_end) |
771 | 809 | if not self.stub: |
780 | 818 | d[name_str]['nxdomain_ancestor'] = lb2s(self.nxdomain_ancestor_name().canonicalize().to_text()) |
781 | 819 | if self.referral_rdtype is not None: |
782 | 820 | d[name_str]['referral_rdtype'] = dns.rdatatype.to_text(self.referral_rdtype) |
821 | if self.auth_rdtype is not None: | |
822 | d[name_str]['auth_rdtype'] = dns.rdatatype.to_text(self.auth_rdtype) | |
823 | if self.cookie_rdtype is not None: | |
824 | d[name_str]['cookie_rdtype'] = dns.rdatatype.to_text(self.cookie_rdtype) | |
783 | 825 | d[name_str]['explicit_delegation'] = self.explicit_delegation |
784 | 826 | if self.nxdomain_name is not None: |
785 | 827 | d[name_str]['nxdomain_name'] = lb2s(self.nxdomain_name.to_text()) |
829 | 871 | mx_obj.serialize(d, meta_only, trace=trace + [self]) |
830 | 872 | |
831 | 873 | @classmethod |
832 | def deserialize(cls, name, d1, cache=None): | |
874 | def deserialize(cls, name, d1, cache=None, **kwargs): | |
833 | 875 | if cache is None: |
834 | 876 | cache = {} |
835 | 877 | |
844 | 886 | |
845 | 887 | if 'parent' in d: |
846 | 888 | parent_name = dns.name.from_text(d['parent']) |
847 | parent = cls.deserialize(parent_name, d1, cache=cache) | |
889 | parent = cls.deserialize(parent_name, d1, cache=cache, **kwargs) | |
848 | 890 | else: |
849 | 891 | parent = None |
850 | 892 | |
851 | 893 | if name != dns.name.root and 'dlv_parent' in d: |
852 | 894 | dlv_parent_name = dns.name.from_text(d['dlv_parent']) |
853 | dlv_parent = cls.deserialize(dlv_parent_name, d1, cache=cache) | |
895 | dlv_parent = cls.deserialize(dlv_parent_name, d1, cache=cache, **kwargs) | |
854 | 896 | else: |
855 | 897 | dlv_parent_name = None |
856 | 898 | dlv_parent = None |
857 | 899 | |
858 | 900 | if 'nxdomain_ancestor' in d: |
859 | 901 | nxdomain_ancestor_name = dns.name.from_text(d['nxdomain_ancestor']) |
860 | nxdomain_ancestor = cls.deserialize(nxdomain_ancestor_name, d1, cache=cache) | |
902 | nxdomain_ancestor = cls.deserialize(nxdomain_ancestor_name, d1, cache=cache, **kwargs) | |
861 | 903 | else: |
862 | 904 | nxdomain_ancestor_name = None |
863 | 905 | nxdomain_ancestor = None |
864 | 906 | |
907 | if 'cookie_standin' in d: | |
908 | cookie_standin = binascii.unhexlify(d['cookie_standin']) | |
909 | else: | |
910 | cookie_standin = None | |
911 | if 'cookie_bad' in d: | |
912 | cookie_bad = binascii.unhexlify(d['cookie_bad']) | |
913 | else: | |
914 | cookie_bad = None | |
915 | ||
865 | 916 | _logger.info('Loading %s' % fmt.humanize_name(name)) |
866 | 917 | |
867 | cache[name] = a = cls(name, stub=stub, analysis_type=analysis_type) | |
918 | cache[name] = a = cls(name, stub=stub, analysis_type=analysis_type, cookie_standin=cookie_standin, cookie_bad=cookie_bad, **kwargs) | |
868 | 919 | a.parent = parent |
869 | 920 | if dlv_parent is not None: |
870 | 921 | a.dlv_parent = dlv_parent |
876 | 927 | if not a.stub: |
877 | 928 | if 'referral_rdtype' in d: |
878 | 929 | a.referral_rdtype = dns.rdatatype.from_text(d['referral_rdtype']) |
930 | if 'auth_rdtype' in d: | |
931 | a.auth_rdtype = dns.rdatatype.from_text(d['auth_rdtype']) | |
932 | if 'cookie_rdtype' in d: | |
933 | a.cookie_rdtype = dns.rdatatype.from_text(d['cookie_rdtype']) | |
879 | 934 | a.explicit_delegation = d['explicit_delegation'] |
880 | 935 | if 'nxdomain_name' in d: |
881 | 936 | a.nxdomain_name = dns.name.from_text(d['nxdomain_name']) |
899 | 954 | return |
900 | 955 | |
901 | 956 | bailiwick_map, default_bailiwick = self.get_bailiwick_mapping() |
957 | cookie_jar_map, default_cookie_jar = self.get_cookie_jar_mapping() | |
958 | cookie_standin = self.cookie_standin | |
959 | cookie_bad = self.cookie_bad | |
902 | 960 | |
903 | 961 | query_map = {} |
904 | 962 | #XXX backwards compatibility with previous version |
905 | 963 | if isinstance(d['queries'], list): |
906 | 964 | for query in d['queries']: |
907 | key = (dns.name.from_text(query['qname']), dns.rdatatype.from_text(query['qtype']), dns.rdataclass.from_text(query['qclass'])) | |
965 | key = (dns.name.from_text(query['qname']), dns.rdatatype.from_text(query['qtype'])) | |
908 | 966 | if key not in query_map: |
909 | 967 | query_map[key] = [] |
910 | 968 | query_map[key].append(query) |
913 | 971 | vals = query_str.split('/') |
914 | 972 | qname = dns.name.from_text('/'.join(vals[:-2])) |
915 | 973 | rdtype = dns.rdatatype.from_text(vals[-1]) |
916 | rdclass = dns.rdataclass.from_text(vals[-2]) | |
917 | key = (qname, rdtype, rdclass) | |
918 | query_map[key] = [] | |
974 | key = (qname, rdtype) | |
975 | if key not in query_map: | |
976 | query_map[key] = [] | |
919 | 977 | for query in d['queries'][query_str]: |
920 | 978 | query_map[key].append(query) |
921 | 979 | |
922 | # import delegation NS queries first | |
923 | delegation_types = set([dns.rdatatype.NS]) | |
980 | # Import the following first, in this order: | |
981 | # - Queries used to detect delegation (NS and referral_rdtype) | |
982 | # - Queries used to detect NS records from authority section (auth_rdtype) | |
983 | # - Queries used to detect server cookies (cookie_rdtype) | |
984 | delegation_types = OrderedDict(((dns.rdatatype.NS, None),)) | |
924 | 985 | if self.referral_rdtype is not None: |
925 | delegation_types.add(self.referral_rdtype) | |
986 | delegation_types[self.referral_rdtype] = None | |
987 | if self.auth_rdtype is not None: | |
988 | delegation_types[self.auth_rdtype] = None | |
989 | if self.cookie_rdtype is not None: | |
990 | delegation_types[self.cookie_rdtype] = None | |
926 | 991 | for rdtype in delegation_types: |
927 | 992 | # if the query has already been imported, then |
928 | 993 | # don't re-import |
929 | 994 | if (self.name, rdtype) in self.queries: |
930 | 995 | continue |
931 | key = (self.name, rdtype, dns.rdataclass.IN) | |
996 | key = (self.name, rdtype) | |
932 | 997 | if key in query_map: |
933 | 998 | _logger.debug('Importing %s/%s...' % (fmt.humanize_name(self.name), dns.rdatatype.to_text(rdtype))) |
934 | 999 | for query in query_map[key]: |
935 | self.add_query(Q.DNSQuery.deserialize(query, bailiwick_map, default_bailiwick), True) | |
1000 | detect_ns = rdtype in (dns.rdatatype.NS, self.referral_rdtype, self.auth_rdtype) | |
1001 | detect_cookies = rdtype == self.cookie_rdtype | |
1002 | self.add_query(Q.DNSQuery.deserialize(query, bailiwick_map, default_bailiwick, cookie_jar_map, default_cookie_jar, cookie_standin, cookie_bad), detect_ns, detect_cookies) | |
1003 | ||
936 | 1004 | # set the NS dependencies for the name |
937 | 1005 | if self.is_zone(): |
938 | 1006 | self.set_ns_dependencies() |
939 | 1007 | |
940 | 1008 | for key in query_map: |
941 | qname, rdtype, rdclass = key | |
1009 | qname, rdtype = key | |
942 | 1010 | # if the query has already been imported, then |
943 | 1011 | # don't re-import |
944 | 1012 | if (qname, rdtype) in self.queries: |
945 | 1013 | continue |
946 | if rdtype in delegation_types: | |
1014 | if qname == self.name and rdtype in delegation_types: | |
947 | 1015 | continue |
948 | 1016 | if (qname, rdtype) == (self.nxdomain_name, self.nxdomain_rdtype): |
949 | 1017 | extra = ' (NXDOMAIN)' |
950 | 1018 | elif (qname, rdtype) == (self.nxrrset_name, self.nxrrset_rdtype): |
951 | extra = ' (No data)' | |
1019 | extra = ' (NODATA)' | |
952 | 1020 | else: |
953 | 1021 | extra = '' |
954 | 1022 | _logger.debug('Importing %s/%s%s...' % (fmt.humanize_name(qname), dns.rdatatype.to_text(rdtype), extra)) |
955 | 1023 | for query in query_map[key]: |
956 | self.add_query(Q.DNSQuery.deserialize(query, bailiwick_map, default_bailiwick), False) | |
1024 | self.add_query(Q.DNSQuery.deserialize(query, bailiwick_map, default_bailiwick, cookie_jar_map, default_cookie_jar, cookie_standin, cookie_bad), False, False) | |
957 | 1025 | |
958 | 1026 | def _deserialize_dependencies(self, d, cache): |
959 | 1027 | if self.stub: |
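The deserialization hunk above switches `delegation_types` from a `set` to an `OrderedDict` with `None` values. A short sketch of that idiom; the rdtype strings are illustrative stand-ins for `dns.rdatatype` constants:

```python
from collections import OrderedDict

# An OrderedDict with None values doubles as an insertion-ordered,
# de-duplicated set (Python 2.7, still supported here, has no ordered set
# type), so delegation-related query types are imported in priority order
# exactly once.
delegation_types = OrderedDict((('NS', None),))
for rdtype in ('A', 'TXT', 'NS', 'A'):  # re-added keys keep their first position
    delegation_types[rdtype] = None

print(list(delegation_types))
```

Iterating the dict then yields each query type once, in the order it was first registered, which is why the referral, auth, and cookie rdtypes are appended after `NS`.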
981 | 1049 | class Analyst(object): |
982 | 1050 | analysis_model = ActiveDomainNameAnalysis |
983 | 1051 | _simple_query = Q.SimpleDNSQuery |
1052 | _quick_query = Q.QuickDNSSECQuery | |
984 | 1053 | _diagnostic_query = Q.DiagnosticQuery |
985 | 1054 | _tcp_diagnostic_query = Q.TCPDiagnosticQuery |
986 | 1055 | _pmtu_diagnostic_query = Q.PMTUDiagnosticQuery |
994 | 1063 | qname_only = True |
995 | 1064 | analysis_type = ANALYSIS_TYPE_AUTHORITATIVE |
996 | 1065 | |
997 | clone_attrnames = ['dlv_domain', 'try_ipv4', 'try_ipv6', 'client_ipv4', 'client_ipv6', 'query_class_mixin', 'logger', 'ceiling', 'edns_diagnostics', 'follow_ns', 'explicit_delegations', 'stop_at_explicit', 'odd_ports', 'analysis_cache', 'cache_level', 'analysis_cache_lock', 'transport_manager', 'th_factories', 'resolver'] | |
998 | ||
999 | def __init__(self, name, dlv_domain=None, try_ipv4=True, try_ipv6=True, client_ipv4=None, client_ipv6=None, query_class_mixin=None, logger=_logger, ceiling=None, edns_diagnostics=False, | |
1066 | clone_attrnames = ['rdclass', 'dlv_domain', 'try_ipv4', 'try_ipv6', 'client_ipv4', 'client_ipv6', 'query_class_mixin', 'logger', 'ceiling', 'edns_diagnostics', 'follow_ns', 'explicit_delegations', 'stop_at_explicit', 'odd_ports', 'analysis_cache', 'cache_level', 'analysis_cache_lock', 'transport_manager', 'th_factories', 'resolver'] | |
1067 | ||
1068 | def __init__(self, name, rdclass=dns.rdataclass.IN, dlv_domain=None, try_ipv4=True, try_ipv6=True, client_ipv4=None, client_ipv6=None, query_class_mixin=None, logger=_logger, ceiling=None, edns_diagnostics=False, | |
1000 | 1069 | follow_ns=False, follow_mx=False, trace=None, explicit_delegations=None, stop_at_explicit=None, odd_ports=None, extra_rdtypes=None, explicit_only=False, |
1001 | 1070 | analysis_cache=None, cache_level=None, analysis_cache_lock=None, th_factories=None, transport_manager=None, resolver=None): |
1002 | 1071 | |
1072 | self.simple_query = self._simple_query | |
1073 | self.quick_query = self._quick_query.add_mixin(query_class_mixin).add_server_cookie(COOKIE_STANDIN) | |
1074 | self.diagnostic_query_no_server_cookie = self._diagnostic_query.add_mixin(query_class_mixin) | |
1075 | self.diagnostic_query_bad_server_cookie = self._diagnostic_query.add_mixin(query_class_mixin).add_server_cookie(COOKIE_BAD) | |
1076 | self.diagnostic_query = self._diagnostic_query.add_mixin(query_class_mixin).add_server_cookie(COOKIE_STANDIN) | |
1077 | self.tcp_diagnostic_query = self._tcp_diagnostic_query.add_mixin(query_class_mixin).remove_cookie_option() | |
1078 | self.pmtu_diagnostic_query = self._pmtu_diagnostic_query.add_mixin(query_class_mixin).add_server_cookie(COOKIE_STANDIN) | |
1079 | self.truncation_diagnostic_query = self._truncation_diagnostic_query.add_mixin(query_class_mixin).add_server_cookie(COOKIE_STANDIN) | |
1080 | self.edns_version_diagnostic_query = self._edns_version_diagnostic_query | |
1081 | self.edns_flag_diagnostic_query = self._edns_flag_diagnostic_query.add_mixin(query_class_mixin).add_server_cookie(COOKIE_STANDIN) | |
1082 | self.edns_opt_diagnostic_query = self._edns_opt_diagnostic_query.add_mixin(query_class_mixin).add_server_cookie(COOKIE_STANDIN) | |
1083 | ||
1003 | 1084 | self.query_class_mixin = query_class_mixin |
1004 | self.simple_query = self._get_query_class(self._simple_query, self.query_class_mixin) | |
1005 | self.diagnostic_query = self._get_query_class(self._diagnostic_query, self.query_class_mixin) | |
1006 | self.tcp_diagnostic_query = self._get_query_class(self._tcp_diagnostic_query, self.query_class_mixin) | |
1007 | self.pmtu_diagnostic_query = self._get_query_class(self._pmtu_diagnostic_query, self.query_class_mixin) | |
1008 | self.truncation_diagnostic_query = self._get_query_class(self._truncation_diagnostic_query, self.query_class_mixin) | |
1009 | self.edns_version_diagnostic_query = self._get_query_class(self._edns_version_diagnostic_query, self.query_class_mixin) | |
1010 | self.edns_flag_diagnostic_query = self._get_query_class(self._edns_flag_diagnostic_query, self.query_class_mixin) | |
1011 | self.edns_opt_diagnostic_query = self._get_query_class(self._edns_opt_diagnostic_query, self.query_class_mixin) | |
1012 | 1085 | |
1013 | 1086 | if transport_manager is None: |
1014 | 1087 | self.transport_manager = transport.DNSQueryTransportManager() |
1023 | 1096 | self.allow_private_query = not bool([x for x in self.th_factories if not x.cls.allow_private_query]) |
1024 | 1097 | |
1025 | 1098 | self.name = name |
1099 | self.rdclass = rdclass | |
1026 | 1100 | self.dlv_domain = dlv_domain |
1027 | 1101 | |
1028 | 1102 | if explicit_delegations is None: |
1079 | 1153 | |
1080 | 1154 | self.edns_diagnostics = edns_diagnostics |
1081 | 1155 | |
1156 | cookie_opt = self.diagnostic_query.get_cookie_opt() | |
1157 | self.dns_cookies = cookie_opt is not None | |
1158 | ||
1082 | 1159 | self.follow_ns = follow_ns |
1083 | 1160 | self.follow_mx = follow_mx |
1084 | 1161 | |
1106 | 1183 | hints = util.get_root_hints() |
1107 | 1184 | for key in self.explicit_delegations: |
1108 | 1185 | hints[key] = self.explicit_delegations[key] |
1109 | return Resolver.FullResolver(hints, odd_ports=self.odd_ports, transport_manager=self.transport_manager) | |
1110 | ||
1111 | def _get_query_class(self, cls, mixin): | |
1112 | if mixin is None: | |
1113 | return cls | |
1114 | class _foo(mixin, cls): | |
1115 | pass | |
1116 | return _foo | |
1186 | return Resolver.FullResolver(hints, query_cls=(self.quick_query, self.diagnostic_query), odd_ports=self.odd_ports, cookie_standin=COOKIE_STANDIN, transport_manager=self.transport_manager) | |
1117 | 1187 | |
1118 | 1188 | def _detect_cname_chain(self): |
1119 | 1189 | self._cname_chain = [] |
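For context on the removed `_get_query_class` helper (replaced above by `add_mixin()` chaining), it composed a query class with an optional mixin by minting a throwaway subclass. A minimal sketch of that pattern with illustrative class names, not the DNSViz API:

```python
# Compose a base query class with an optional mixin; the mixin is listed
# first so its attributes win in the method resolution order.
class DiagnosticQuery(object):
    edns = True

class LoggingMixin(object):
    verbose = True

def get_query_class(cls, mixin):
    if mixin is None:
        return cls
    class _composed(mixin, cls):
        pass
    return _composed

q = get_query_class(DiagnosticQuery, LoggingMixin)
print(q.verbose, q.edns)
```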
1130 | 1200 | rdtype = dns.rdatatype.A |
1131 | 1201 | |
1132 | 1202 | try: |
1133 | ans = self.resolver.query_for_answer(self.name, rdtype, dns.rdataclass.IN, allow_noanswer=True) | |
1203 | ans = self.resolver.query_for_answer(self.name, rdtype, self.rdclass, allow_noanswer=True) | |
1134 | 1204 | except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN, dns.exception.DNSException): |
1135 | 1205 | return |
1136 | 1206 | |
1137 | 1207 | cname = self.name |
1138 | 1208 | for i in range(Resolver.MAX_CNAME_REDIRECTION): |
1139 | 1209 | try: |
1140 | cname = ans.response.find_rrset(ans.response.answer, cname, dns.rdataclass.IN, dns.rdatatype.CNAME)[0].target | |
1210 | cname = ans.response.find_rrset(ans.response.answer, cname, self.rdclass, dns.rdatatype.CNAME)[0].target | |
1141 | 1211 | self._cname_chain.append(cname) |
1142 | 1212 | except KeyError: |
1143 | 1213 | return |
1159 | 1229 | ceiling = self.name |
1160 | 1230 | |
1161 | 1231 | try: |
1162 | ans = self.resolver.query_for_answer(ceiling, dns.rdatatype.NS, dns.rdataclass.IN) | |
1232 | ans = self.resolver.query_for_answer(ceiling, dns.rdatatype.NS, self.rdclass) | |
1163 | 1233 | try: |
1164 | ans.response.find_rrset(ans.response.answer, ceiling, dns.rdataclass.IN, dns.rdatatype.NS) | |
1234 | ans.response.find_rrset(ans.response.answer, ceiling, self.rdclass, dns.rdatatype.NS) | |
1165 | 1235 | return ceiling, False |
1166 | 1236 | except KeyError: |
1167 | 1237 | pass |
1342 | 1412 | return False |
1343 | 1413 | return True |
1344 | 1414 | |
1345 | def _add_query(self, name_obj, query, detect_ns=False, iterative=False): | |
1415 | def _add_query(self, name_obj, query, detect_ns, detect_cookies, iterative=False): | |
1346 | 1416 | # if this query is empty (i.e., nothing was actually asked, e.g., due |
1347 | 1417 | # to client-side connectivity failure), then raise a connectivity |
1348 | 1418 | # failure |
1349 | 1419 | if not query.responses and not iterative: |
1350 | 1420 | self._raise_connectivity_error_local(query.servers) |
1351 | 1421 | |
1352 | name_obj.add_query(query, detect_ns) | |
1422 | name_obj.add_query(query, detect_ns, detect_cookies) | |
1353 | 1423 | |
1354 | 1424 | def _filter_servers_network(self, servers): |
1355 | 1425 | if not self.try_ipv6: |
1377 | 1447 | name_obj = self.analysis_cache[name] |
1378 | 1448 | except KeyError: |
1379 | 1449 | if lock: |
1380 | name_obj = self.analysis_cache[name] = self.analysis_model(name, stub=stub, analysis_type=self.analysis_type) | |
1450 | name_obj = self.analysis_cache[name] = self.analysis_model(name, stub=stub, analysis_type=self.analysis_type, cookie_standin=COOKIE_STANDIN) | |
1381 | 1451 | return name_obj |
1382 | 1452 | # if not locking, then return None |
1383 | 1453 | else: |
1488 | 1558 | self._handle_explicit_delegations(name_obj) |
1489 | 1559 | if not name_obj.explicit_delegation: |
1490 | 1560 | try: |
1491 | ans = self.resolver.query_for_answer(name, dns.rdatatype.NS, dns.rdataclass.IN) | |
1561 | ans = self.resolver.query_for_answer(name, dns.rdatatype.NS, self.rdclass) | |
1492 | 1562 | |
1493 | 1563 | # resolve every name in the NS RRset |
1494 | 1564 | query_tuples = [] |
1495 | 1565 | for rr in ans.rrset: |
1496 | query_tuples.extend([(rr.target, dns.rdatatype.A, dns.rdataclass.IN), (rr.target, dns.rdatatype.AAAA, dns.rdataclass.IN)]) | |
1566 | query_tuples.extend([(rr.target, dns.rdatatype.A, self.rdclass), (rr.target, dns.rdatatype.AAAA, self.rdclass)]) | |
1497 | 1567 | answer_map = self.resolver.query_multiple_for_answer(*query_tuples) |
1498 | 1568 | for query_tuple in answer_map: |
1499 | 1569 | a = answer_map[query_tuple] |
1635 | 1705 | servers = name_obj.zone.get_responsive_auth_or_designated_servers() |
1636 | 1706 | |
1637 | 1707 | odd_ports = dict([(s, self.odd_ports[(n, s)]) for n, s in self.odd_ports if n == name_obj.zone.name]) |
1708 | cookie_jar = name_obj.zone.cookie_jar | |
1638 | 1709 | |
1639 | 1710 | servers = self._filter_servers(servers) |
1640 | 1711 | exclude_no_answer = set() |
1643 | 1714 | # if there are responsive servers to query... |
1644 | 1715 | if servers: |
1645 | 1716 | |
1717 | # If 1) this is a zone, 2) DNS cookies are supported, and | |
1718 | # 3) cookies have not yet been elicited, then issue queries now to | |
1719 | # elicit DNS cookies. | |
1720 | if name_obj.is_zone() and self.dns_cookies and name_obj.cookie_rdtype is None: | |
1721 | self.logger.debug('Querying for DNS server cookies %s/%s...' % (fmt.humanize_name(name_obj.name), dns.rdatatype.to_text(dns.rdatatype.SOA))) | |
1722 | query = self.diagnostic_query_no_server_cookie(name_obj.name, dns.rdatatype.SOA, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1723 | query.execute(tm=self.transport_manager, th_factories=self.th_factories) | |
1724 | self._add_query(name_obj, query, False, True) | |
1725 | ||
1726 | name_obj.cookie_rdtype = dns.rdatatype.SOA | |
1727 | ||
1646 | 1728 | # queries specific to zones for which non-delegation-related |
1647 | 1729 | # queries are being issued |
1648 | 1730 | if name_obj.is_zone() and self._ask_non_delegation_queries(name_obj.name) and not self.explicit_only: |
1650 | 1732 | # EDNS diagnostic queries |
1651 | 1733 | if self.edns_diagnostics: |
1652 | 1734 | self.logger.debug('Preparing EDNS diagnostic queries %s/%s...' % (fmt.humanize_name(name_obj.name), dns.rdatatype.to_text(dns.rdatatype.SOA))) |
1653 | queries[(name_obj.name, -(dns.rdatatype.SOA+100))] = self.edns_version_diagnostic_query(name_obj.name, dns.rdatatype.SOA, dns.rdataclass.IN, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1654 | queries[(name_obj.name, -(dns.rdatatype.SOA+101))] = self.edns_opt_diagnostic_query(name_obj.name, dns.rdatatype.SOA, dns.rdataclass.IN, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1655 | queries[(name_obj.name, -(dns.rdatatype.SOA+102))] = self.edns_flag_diagnostic_query(name_obj.name, dns.rdatatype.SOA, dns.rdataclass.IN, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1735 | queries[(name_obj.name, -(dns.rdatatype.SOA+100))] = self.edns_version_diagnostic_query(name_obj.name, dns.rdatatype.SOA, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1736 | queries[(name_obj.name, -(dns.rdatatype.SOA+101))] = self.edns_opt_diagnostic_query(name_obj.name, dns.rdatatype.SOA, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1737 | queries[(name_obj.name, -(dns.rdatatype.SOA+102))] = self.edns_flag_diagnostic_query(name_obj.name, dns.rdatatype.SOA, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1738 | ||
1739 | # Query with a mixed-case name for 0x20, if possible | |
1740 | mixed_case_name = self._mix_case(name_obj.name) | |
1741 | if mixed_case_name is not None: | |
1742 | self.logger.debug('Preparing 0x20 query %s/%s...' % (fmt.humanize_name(mixed_case_name, canonicalize=False), dns.rdatatype.to_text(dns.rdatatype.SOA))) | |
1743 | queries[(name_obj.name, -(dns.rdatatype.SOA+103))] = self.diagnostic_query(mixed_case_name, dns.rdatatype.SOA, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1744 | ||
1745 | # DNS cookies diagnostic queries | |
1746 | if self.dns_cookies: | |
1747 | self.logger.debug('Preparing DNS cookie diagnostic query %s/%s...' % (fmt.humanize_name(name_obj.name), dns.rdatatype.to_text(dns.rdatatype.SOA))) | |
1748 | queries[(name_obj.name, -(dns.rdatatype.SOA+104))] = self.diagnostic_query_bad_server_cookie(name_obj.name, dns.rdatatype.SOA, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_bad=COOKIE_BAD) | |
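The bad-server-cookie diagnostic above exercises the DNS COOKIE mechanism (EDNS option code 10, RFC 7873). As a rough illustration of the wire format involved — a standalone sketch, not DNSViz's own implementation — the option data is an 8-byte client cookie, optionally followed by an 8- to 32-byte server cookie learned from a prior response:

```python
import os
import struct

COOKIE_OPT_CODE = 10  # EDNS option code for COOKIE (RFC 7873)

def build_cookie_option(client_cookie, server_cookie=b''):
    """Build an EDNS option in wire form: option-code, option-length, data.

    The data is an 8-byte client cookie, optionally followed by an
    8- to 32-byte server cookie from a previous response.
    """
    if len(client_cookie) != 8:
        raise ValueError('client cookie must be exactly 8 bytes')
    if server_cookie and not 8 <= len(server_cookie) <= 32:
        raise ValueError('server cookie must be 8 to 32 bytes')
    data = client_cookie + server_cookie
    return struct.pack('!HH', COOKIE_OPT_CODE, len(data)) + data

# A client-only cookie, as sent before any server cookie is known:
opt = build_cookie_option(os.urandom(8))
```

A query carrying only the client half (as `diagnostic_query_no_server_cookie` does) is what elicits the server cookie that later queries echo back.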
1656 | 1749 | |
1657 | 1750 | # negative queries for all zones |
1658 | 1751 | self._set_negative_queries(name_obj) |
1659 | 1752 | if name_obj.nxdomain_name is not None: |
1660 | 1753 | self.logger.debug('Preparing query %s/%s (NXDOMAIN)...' % (fmt.humanize_name(name_obj.nxdomain_name), dns.rdatatype.to_text(name_obj.nxdomain_rdtype))) |
1661 | queries[(name_obj.nxdomain_name, name_obj.nxdomain_rdtype)] = self.diagnostic_query(name_obj.nxdomain_name, name_obj.nxdomain_rdtype, dns.rdataclass.IN, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1754 | queries[(name_obj.nxdomain_name, name_obj.nxdomain_rdtype)] = self.diagnostic_query(name_obj.nxdomain_name, name_obj.nxdomain_rdtype, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1662 | 1755 | if name_obj.nxrrset_name is not None: |
1663 | self.logger.debug('Preparing query %s/%s (No data)...' % (fmt.humanize_name(name_obj.nxrrset_name), dns.rdatatype.to_text(name_obj.nxrrset_rdtype))) | |
1664 | queries[(name_obj.nxrrset_name, name_obj.nxrrset_rdtype)] = self.diagnostic_query(name_obj.nxrrset_name, name_obj.nxrrset_rdtype, dns.rdataclass.IN, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1756 | self.logger.debug('Preparing query %s/%s (NODATA)...' % (fmt.humanize_name(name_obj.nxrrset_name), dns.rdatatype.to_text(name_obj.nxrrset_rdtype))) | |
1757 | queries[(name_obj.nxrrset_name, name_obj.nxrrset_rdtype)] = self.diagnostic_query(name_obj.nxrrset_name, name_obj.nxrrset_rdtype, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1665 | 1758 | |
1666 | 1759 | # if the name is SLD or lower, then ask MX and TXT |
1667 | 1760 | if self._is_sld_or_lower(name_obj.name): |
1668 | 1761 | self.logger.debug('Preparing query %s/MX...' % fmt.humanize_name(name_obj.name)) |
1669 | 1762 | # note that we use a PMTU diagnostic query here, to simultaneously test PMTU |
1670 | queries[(name_obj.name, dns.rdatatype.MX)] = self.pmtu_diagnostic_query(name_obj.name, dns.rdatatype.MX, dns.rdataclass.IN, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1763 | queries[(name_obj.name, dns.rdatatype.MX)] = self.pmtu_diagnostic_query(name_obj.name, dns.rdatatype.MX, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1671 | 1764 | # we also do a query with small UDP payload to elicit and test a truncated response |
1672 | queries[(name_obj.name, -dns.rdatatype.MX)] = self.truncation_diagnostic_query(name_obj.name, dns.rdatatype.MX, dns.rdataclass.IN, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1765 | queries[(name_obj.name, -dns.rdatatype.MX)] = self.truncation_diagnostic_query(name_obj.name, dns.rdatatype.MX, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1673 | 1766 | |
1674 | 1767 | self.logger.debug('Preparing query %s/TXT...' % fmt.humanize_name(name_obj.name)) |
1675 | queries[(name_obj.name, dns.rdatatype.TXT)] = self.diagnostic_query(name_obj.name, dns.rdatatype.TXT, dns.rdataclass.IN, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1768 | queries[(name_obj.name, dns.rdatatype.TXT)] = self.diagnostic_query(name_obj.name, dns.rdatatype.TXT, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1676 | 1769 | |
1677 | 1770 | # for zones and for (non-zone) names which have DNSKEYs referenced |
1678 | 1771 | if name_obj.is_zone() or self._force_dnskey_query(name_obj.name): |
1681 | 1774 | if servers: |
1682 | 1775 | if self._ask_non_delegation_queries(name_obj.name) and not self.explicit_only: |
1683 | 1776 | self.logger.debug('Preparing query %s/SOA...' % fmt.humanize_name(name_obj.name)) |
1684 | queries[(name_obj.name, dns.rdatatype.SOA)] = self.diagnostic_query(name_obj.name, dns.rdatatype.SOA, dns.rdataclass.IN, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1777 | queries[(name_obj.name, dns.rdatatype.SOA)] = self.diagnostic_query(name_obj.name, dns.rdatatype.SOA, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1685 | 1778 | |
1686 | 1779 | if name_obj.is_zone(): |
1687 | 1780 | # for zones we also use a TCP diagnostic query here, to simultaneously test TCP connectivity |
1688 | queries[(name_obj.name, -dns.rdatatype.SOA)] = self.tcp_diagnostic_query(name_obj.name, dns.rdatatype.SOA, dns.rdataclass.IN, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1781 | queries[(name_obj.name, -dns.rdatatype.SOA)] = self.tcp_diagnostic_query(name_obj.name, dns.rdatatype.SOA, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1689 | 1782 | else: |
1690 | 1783 | # for non-zones we don't need to keep the (UDP) SOA query, if there is no positive response
1691 | 1784 | exclude_no_answer.add((name_obj.name, dns.rdatatype.SOA)) |
1692 | 1785 | |
1693 | 1786 | self.logger.debug('Preparing query %s/DNSKEY...' % fmt.humanize_name(name_obj.name)) |
1694 | 1787 | # note that we use a PMTU diagnostic query here, to simultaneously test PMTU |
1695 | queries[(name_obj.name, dns.rdatatype.DNSKEY)] = self.pmtu_diagnostic_query(name_obj.name, dns.rdatatype.DNSKEY, dns.rdataclass.IN, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1788 | queries[(name_obj.name, dns.rdatatype.DNSKEY)] = self.pmtu_diagnostic_query(name_obj.name, dns.rdatatype.DNSKEY, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1696 | 1789 | |
1697 | 1790 | # we also do a query with small UDP payload to elicit and test a truncated response |
1698 | queries[(name_obj.name, -dns.rdatatype.DNSKEY)] = self.truncation_diagnostic_query(name_obj.name, dns.rdatatype.DNSKEY, dns.rdataclass.IN, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1791 | queries[(name_obj.name, -dns.rdatatype.DNSKEY)] = self.truncation_diagnostic_query(name_obj.name, dns.rdatatype.DNSKEY, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1699 | 1792 | |
1700 | 1793 | # query for DS/DLV |
1701 | 1794 | if name_obj.parent is not None: |
1714 | 1807 | parent_servers = self._filter_servers(parent_servers) |
1715 | 1808 | |
1716 | 1809 | parent_odd_ports = dict([(s, self.odd_ports[(n, s)]) for n, s in self.odd_ports if n == name_obj.zone.parent.name]) |
1810 | parent_cookie_jar = name_obj.zone.parent.cookie_jar | |
1717 | 1811 | |
1718 | 1812 | self.logger.debug('Preparing query %s/DS...' % fmt.humanize_name(name_obj.name)) |
1719 | queries[(name_obj.name, dns.rdatatype.DS)] = self.diagnostic_query(name_obj.name, dns.rdatatype.DS, dns.rdataclass.IN, parent_servers, name_obj.parent_name(), self.client_ipv4, self.client_ipv6, odd_ports=parent_odd_ports) | |
1813 | queries[(name_obj.name, dns.rdatatype.DS)] = self.diagnostic_query(name_obj.name, dns.rdatatype.DS, self.rdclass, parent_servers, name_obj.parent_name(), self.client_ipv4, self.client_ipv6, odd_ports=parent_odd_ports, cookie_jar=parent_cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1720 | 1814 | |
1721 | 1815 | if name_obj.dlv_parent is not None and self.dlv_domain != self.name: |
1722 | 1816 | dlv_servers = name_obj.dlv_parent.get_responsive_auth_or_designated_servers() |
1724 | 1818 | dlv_name = name_obj.dlv_name |
1725 | 1819 | if dlv_servers: |
1726 | 1820 | dlv_odd_ports = dict([(s, self.odd_ports[(n, s)]) for n, s in self.odd_ports if n == name_obj.dlv_parent.name]) |
1821 | dlv_cookie_jar = name_obj.dlv_parent.cookie_jar | |
1727 | 1822 | |
1728 | 1823 | self.logger.debug('Preparing query %s/DLV...' % fmt.humanize_name(dlv_name)) |
1729 | queries[(dlv_name, dns.rdatatype.DLV)] = self.diagnostic_query(dlv_name, dns.rdatatype.DLV, dns.rdataclass.IN, dlv_servers, name_obj.dlv_parent_name(), self.client_ipv4, self.client_ipv6, odd_ports=dlv_odd_ports) | |
1824 | queries[(dlv_name, dns.rdatatype.DLV)] = self.diagnostic_query(dlv_name, dns.rdatatype.DLV, self.rdclass, dlv_servers, name_obj.dlv_parent_name(), self.client_ipv4, self.client_ipv6, odd_ports=dlv_odd_ports, cookie_jar=dlv_cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1730 | 1825 | exclude_no_answer.add((dlv_name, dns.rdatatype.DLV)) |
1731 | 1826 | |
1732 | 1827 | # get rid of any queries already asked |
1739 | 1834 | for rdtype in self._rdtypes_to_query(name_obj.name): |
1740 | 1835 | if (name_obj.name, rdtype) not in all_queries: |
1741 | 1836 | self.logger.debug('Preparing query %s/%s...' % (fmt.humanize_name(name_obj.name), dns.rdatatype.to_text(rdtype))) |
1742 | queries[(name_obj.name, rdtype)] = self.diagnostic_query(name_obj.name, rdtype, dns.rdataclass.IN, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1837 | queries[(name_obj.name, rdtype)] = self.diagnostic_query(name_obj.name, rdtype, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1743 | 1838 | |
1744 | 1839 | # if no default queries were identified (e.g., empty non-terminal in |
1745 | 1840 | # in-addr.arpa space), then add a backup. |
1746 | 1841 | if not (queries or name_obj.queries): |
1747 | 1842 | rdtype = dns.rdatatype.A |
1748 | 1843 | self.logger.debug('Preparing query %s/%s...' % (fmt.humanize_name(name_obj.name), dns.rdatatype.to_text(rdtype))) |
1749 | queries[(name_obj.name, rdtype)] = self.diagnostic_query(name_obj.name, rdtype, dns.rdataclass.IN, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1844 | queries[(name_obj.name, rdtype)] = self.diagnostic_query(name_obj.name, rdtype, self.rdclass, servers, bailiwick, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1750 | 1845 | |
1751 | 1846 | # actually execute the queries, then store the results |
1752 | 1847 | self.logger.debug('Executing queries...') |
1753 | 1848 | Q.ExecutableDNSQuery.execute_queries(*list(queries.values()), tm=self.transport_manager, th_factories=self.th_factories) |
1754 | 1849 | for key, query in queries.items(): |
1755 | 1850 | if query.is_answer_any() or key not in exclude_no_answer: |
1756 | self._add_query(name_obj, query) | |
1851 | self._add_query(name_obj, query, False, False) | |
1757 | 1852 | |
1758 | 1853 | def _analyze_delegation(self, name_obj): |
1759 | 1854 | if name_obj.parent is None: |
1769 | 1864 | parent_auth_servers = set(self._filter_servers(parent_auth_servers)) |
1770 | 1865 | |
1771 | 1866 | odd_ports = dict([(s, self.odd_ports[(n, s)]) for n, s in self.odd_ports if n == name_obj.zone.name]) |
1867 | cookie_jar = name_obj.zone.cookie_jar | |
1772 | 1868 | |
1773 | 1869 | if not parent_auth_servers: |
1774 | 1870 | return False |
1794 | 1890 | name_obj.referral_rdtype = rdtype |
1795 | 1891 | |
1796 | 1892 | self.logger.debug('Querying %s/%s (referral)...' % (fmt.humanize_name(name_obj.name), dns.rdatatype.to_text(rdtype))) |
1797 | query = self.diagnostic_query(name_obj.name, rdtype, dns.rdataclass.IN, parent_auth_servers, name_obj.parent_name(), self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
1893 | query = self.diagnostic_query(name_obj.name, rdtype, self.rdclass, parent_auth_servers, name_obj.parent_name(), self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
1798 | 1894 | query.execute(tm=self.transport_manager, th_factories=self.th_factories) |
1799 | 1895 | referral_queries[rdtype] = query |
1800 | 1896 | |
1869 | 1965 | |
1870 | 1966 | # add remaining queries |
1871 | 1967 | for query in referral_queries.values(): |
1872 | self._add_query(name_obj, query, True) | |
1968 | self._add_query(name_obj, query, True, False) | |
1873 | 1969 | |
1874 | 1970 | # return a positive response only if not nxdomain |
1875 | 1971 | return not is_nxdomain |
1876 | 1972 | |
1877 | # add any queries made | |
1973 | if self.dns_cookies: | |
1974 | # An NS query to authoritative servers will always be used to | |
1975 | # elicit a server cookie. | 
1976 | name_obj.cookie_rdtype = dns.rdatatype.NS | |
1977 | cookie_jar = name_obj.cookie_jar | |
1978 | cookie_str = ', detecting cookies' | |
1979 | else: | |
1980 | name_obj.cookie_rdtype = None | |
1981 | cookie_jar = None | |
1982 | cookie_str = '' | |
1983 | ||
1984 | ||
1985 | # Add any queries made. At this point, at least one of the queries is | |
1986 | # for type NS. If there is a second, it is because the first resulted | |
1987 | # in NXDOMAIN, and the type for the second query is secondary_rdtype. | |
1878 | 1988 | for query in referral_queries.values(): |
1879 | self._add_query(name_obj, query, True) | |
1880 | ||
1881 | # now identify the authoritative NS RRset from all servers, resolve all | |
1882 | # names referred to in the NS RRset(s), and query each corresponding | |
1883 | # server, until all names have been queried | |
1989 | detect_cookies = query.rdtype == name_obj.cookie_rdtype | |
1990 | self._add_query(name_obj, query, True, detect_cookies) | |
1991 | ||
1992 | # Identify auth_rdtype, the rdtype used to query the authoritative | |
1993 | # servers to retrieve NS records in the authority section. | |
1994 | name_obj.auth_rdtype = secondary_rdtype | |
1995 | ||
1996 | # Now identify the authoritative NS RRset from all servers, both by | |
1997 | # querying the authoritative servers for NS and by querying them for | |
1998 | # another type and looking for NS in the authority section. Resolve | |
1999 | # all names referred to in the NS RRset(s), and query each | |
2000 | # corresponding server, until all names have been resolved and all | |
2001 | # corresponding addresses queried. | 
1884 | 2002 | names_resolved = set() |
1885 | names_not_resolved = name_obj.get_ns_names().difference(names_resolved) | |
2003 | names_not_resolved = name_obj.get_ns_names() | |
1886 | 2004 | while names_not_resolved: |
1887 | 2005 | # resolve every name in the NS RRset |
1888 | 2006 | query_tuples = [] |
1889 | 2007 | for name in names_not_resolved: |
1890 | query_tuples.extend([(name, dns.rdatatype.A, dns.rdataclass.IN), (name, dns.rdatatype.AAAA, dns.rdataclass.IN)]) | |
2008 | query_tuples.extend([(name, dns.rdatatype.A, self.rdclass), (name, dns.rdatatype.AAAA, self.rdclass)]) | |
1891 | 2009 | answer_map = self.resolver.query_multiple_for_answer(*query_tuples) |
1892 | 2010 | for query_tuple in answer_map: |
1893 | 2011 | name = query_tuple[0] |
1911 | 2029 | servers_queried[dns.rdatatype.NS].update(servers) |
1912 | 2030 | servers = self._filter_servers(servers, no_raise=True) |
1913 | 2031 | if servers: |
1914 | self.logger.debug('Querying %s/NS (auth)...' % fmt.humanize_name(name_obj.name)) | |
1915 | queries.append(self.diagnostic_query(name_obj.name, dns.rdatatype.NS, dns.rdataclass.IN, servers, name_obj.name, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports)) | |
2032 | self.logger.debug('Querying %s/NS (auth%s)...' % (fmt.humanize_name(name_obj.name), cookie_str)) | |
2033 | query = self.diagnostic_query_no_server_cookie(name_obj.name, dns.rdatatype.NS, self.rdclass, servers, name_obj.name, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
2034 | query.execute(tm=self.transport_manager, th_factories=self.th_factories) | |
2035 | self._add_query(name_obj, query, True, True, True) | |
1916 | 2036 | |
1917 | 2037 | # secondary query |
1918 | 2038 | if secondary_rdtype is not None and self._ask_non_delegation_queries(name_obj.name): |
1921 | 2041 | servers = self._filter_servers(servers, no_raise=True) |
1922 | 2042 | if servers: |
1923 | 2043 | self.logger.debug('Querying %s/%s...' % (fmt.humanize_name(name_obj.name), dns.rdatatype.to_text(secondary_rdtype))) |
1924 | queries.append(self.diagnostic_query(name_obj.name, secondary_rdtype, dns.rdataclass.IN, servers, name_obj.name, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports)) | |
1925 | ||
1926 | # actually execute the queries, then store the results | |
1927 | Q.ExecutableDNSQuery.execute_queries(*queries, tm=self.transport_manager, th_factories=self.th_factories) | |
1928 | for query in queries: | |
1929 | self._add_query(name_obj, query, True, True) | |
2044 | query = self.diagnostic_query(name_obj.name, secondary_rdtype, self.rdclass, servers, name_obj.name, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
2045 | query.execute(tm=self.transport_manager, th_factories=self.th_factories) | |
2046 | self._add_query(name_obj, query, True, False, True) | |
1930 | 2047 | |
1931 | 2048 | names_not_resolved = name_obj.get_ns_names().difference(names_resolved) |
1932 | 2049 | |
1981 | 2098 | for name, exc_info in errors[1:]: |
1982 | 2099 | self.logger.error('Error analyzing %s' % name, exc_info=exc_info) |
1983 | 2100 | raise errors[0][1][0].with_traceback(errors[0][1][2]) |
2101 | ||
2102 | def _mix_case(self, name): | |
2103 | name = name.to_text().lower() | |
2104 | name_len = len(name) | |
2105 | rnd = random.getrandbits((name_len + 8) - (name_len % 8)) | |
2106 | new_name = '' | |
2107 | changed = False | |
2108 | for i, c in enumerate(name): | |
2109 | # If the character is a lower case letter, mix it up randomly. | |
2110 | # Always make the first letter upper case, to ensure that it isn't | |
2111 | # completely lower case. | |
2112 | if ord('a') <= ord(c) <= ord('z') and ((rnd & (1 << i)) or not changed): | |
2113 | new_name += chr(ord(c) - 32) | |
2114 | changed = True | |
2115 | else: | |
2116 | new_name += c | |
2117 | if not changed: | |
2118 | return None | |
2119 | return dns.name.from_text(new_name) | |
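The `_mix_case` method above implements DNS 0x20-style case randomization: the query name's letter case is scrambled so that a response which fails to echo the exact case can be detected. A simplified standalone sketch of the same idea, operating on a plain string with an injectable bit source so the behavior is deterministic (the method above draws real random bits instead):

```python
import random

def mix_case(name, bits=None):
    """Return `name` with letter case randomized (DNS 0x20-style).

    `bits` is an optional sequence of 0/1 values used in place of random
    bits, making the result reproducible. As in the method above, the
    first letter encountered is always upper-cased so the result is never
    entirely lower case. Returns None if the name contains no letters.
    """
    if bits is None:
        bits = [random.getrandbits(1) for _ in name]
    out = []
    changed = False
    for c, b in zip(name.lower(), bits):
        if c.isalpha() and (b or not changed):
            out.append(c.upper())
            changed = True
        else:
            out.append(c)
    return ''.join(out) if changed else None
```

Lower-casing the result always recovers the original name, which is what makes the response comparison safe.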
1984 | 2120 | |
1985 | 2121 | def _set_negative_queries(self, name_obj): |
1986 | 2122 | random_label = ''.join(random.sample('abcdefghijklmnopqrstuvwxyz1234567890', 10)) |
2049 | 2185 | servers = list(self._root_servers(proto)) |
2050 | 2186 | checker = Resolver.Resolver(servers, self.simple_query, max_attempts=1, shuffle=True, transport_manager=self.transport_manager) |
2051 | 2187 | try: |
2052 | checker.query_for_answer(dns.name.root, dns.rdatatype.NS, dns.rdataclass.IN) | |
2188 | checker.query_for_answer(dns.name.root, dns.rdatatype.NS, self.rdclass) | |
2053 | 2189 | return True |
2054 | 2190 | except dns.resolver.NoNameservers: |
2055 | 2191 | return None |
2160 | 2296 | resolver = Resolver.Resolver(list(servers), Q.StandardRecursiveQueryCD, transport_manager=self.transport_manager) |
2161 | 2297 | |
2162 | 2298 | try: |
2163 | ans = resolver.query_for_answer(name, dns.rdatatype.NS, dns.rdataclass.IN) | |
2299 | ans = resolver.query_for_answer(name, dns.rdatatype.NS, self.rdclass) | |
2164 | 2300 | except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN): |
2165 | 2301 | name_obj.parent = self._analyze_stub(name.parent()).zone |
2166 | 2302 | except dns.exception.DNSException: |
2271 | 2407 | raise NoNameservers('No resolvers available to query!') |
2272 | 2408 | |
2273 | 2409 | odd_ports = dict([(s, self.odd_ports[(n, s)]) for n, s in self.odd_ports if n == name_obj.zone.name]) |
2410 | cookie_jar = name_obj.zone.cookie_jar | |
2274 | 2411 | |
2275 | 2412 | # make common query first to prime the cache |
2276 | 2413 | |
2289 | 2426 | rdtype = dns.rdatatype.A |
2290 | 2427 | |
2291 | 2428 | self.logger.debug('Querying %s/%s...' % (fmt.humanize_name(name_obj.name), dns.rdatatype.to_text(rdtype))) |
2292 | query = self.diagnostic_query(name_obj.name, rdtype, dns.rdataclass.IN, servers, None, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
2429 | query = self.diagnostic_query_no_server_cookie(name_obj.name, rdtype, self.rdclass, servers, None, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
2293 | 2430 | query.execute(tm=self.transport_manager, th_factories=self.th_factories) |
2294 | self._add_query(name_obj, query, True) | |
2431 | self._add_query(name_obj, query, True, True) | |
2432 | ||
2433 | if self.dns_cookies: | |
2434 | name_obj.cookie_rdtype = rdtype | |
2435 | else: | |
2436 | name_obj.cookie_rdtype = None | |
2295 | 2437 | |
2296 | 2438 | # if there were no valid responses, then exit out early |
2297 | 2439 | if not query.is_valid_complete_response_any() and not self.explicit_only: |
2311 | 2453 | # make DS queries (these won't be included in the above mix |
2312 | 2454 | # because there is no parent on the name_obj) |
2313 | 2455 | self.logger.debug('Querying %s/%s...' % (fmt.humanize_name(name_obj.name), dns.rdatatype.to_text(dns.rdatatype.DS))) |
2314 | query = self.diagnostic_query(name_obj.name, dns.rdatatype.DS, dns.rdataclass.IN, servers, None, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
2456 | query = self.diagnostic_query(name_obj.name, dns.rdatatype.DS, self.rdclass, servers, None, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
2315 | 2457 | query.execute(tm=self.transport_manager, th_factories=self.th_factories) |
2316 | self._add_query(name_obj, query) | |
2458 | self._add_query(name_obj, query, False, False) | |
2317 | 2459 | |
2318 | 2460 | # for non-TLDs make NS queries after all others |
2319 | 2461 | if len(name_obj.name) > 2: |
2320 | 2462 | # ensure these weren't already queried for (e.g., as part of extra_rdtypes) |
2321 | 2463 | if (name_obj.name, dns.rdatatype.NS) not in name_obj.queries: |
2322 | 2464 | self.logger.debug('Querying %s/%s...' % (fmt.humanize_name(name_obj.name), dns.rdatatype.to_text(dns.rdatatype.NS))) |
2323 | query = self.diagnostic_query(name_obj.name, dns.rdatatype.NS, dns.rdataclass.IN, servers, None, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports) | |
2465 | query = self.diagnostic_query(name_obj.name, dns.rdatatype.NS, self.rdclass, servers, None, self.client_ipv4, self.client_ipv6, odd_ports=odd_ports, cookie_jar=cookie_jar, cookie_standin=COOKIE_STANDIN) | |
2324 | 2466 | query.execute(tm=self.transport_manager, th_factories=self.th_factories) |
2325 | self._add_query(name_obj, query, True) | |
2467 | self._add_query(name_obj, query, True, False) | |
2326 | 2468 | |
2327 | 2469 | return name_obj |
2328 | 2470 |
0 | 0 | # |
1 | 1 | # This file is a part of DNSViz, a tool suite for DNS/DNSSEC monitoring, |
2 | # analysis, and visualization. This file (or some portion thereof) is a | |
3 | # derivative work authored by VeriSign, Inc., and created in 2014, based on | |
4 | # code originally developed at Sandia National Laboratories. | |
2 | # analysis, and visualization. | |
5 | 3 | # Created by Casey Deccio (casey@deccio.net) |
6 | 4 | # |
7 | 5 | # Copyright 2012-2014 Sandia Corporation. Under the terms of Contract |
9 | 7 | # certain rights in this software. |
10 | 8 | # |
11 | 9 | # Copyright 2014-2016 VeriSign, Inc. |
10 | # | |
11 | # Copyright 2016-2019 Casey Deccio | |
12 | 12 | # |
13 | 13 | # DNSViz is free software; you can redistribute it and/or modify |
14 | 14 | # it under the terms of the GNU General Public License as published by |
27 | 27 | from __future__ import unicode_literals |
28 | 28 | |
29 | 29 | import base64 |
30 | import cgi | |
31 | 30 | import datetime |
32 | 31 | import logging |
33 | 32 | |
36 | 35 | from collections import OrderedDict |
37 | 36 | except ImportError: |
38 | 37 | from ordereddict import OrderedDict |
38 | ||
39 | # python3/python2 dual compatibility | |
40 | try: | |
41 | from html import escape | |
42 | except ImportError: | |
43 | from cgi import escape | |
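The import shuffle above swaps the deprecated `cgi.escape` (removed in Python 3.8) for `html.escape`, available since Python 3.2. A minimal sketch of the fallback and the quoting behavior the code relies on when it calls `escape(x, True)`:

```python
# python3/python2 dual compatibility, mirroring the fallback above
try:
    from html import escape  # Python 3.2+
except ImportError:
    from cgi import escape   # Python 2; removed in Python 3.8

# With the second argument True, double quotes are escaped as well,
# so the value is safe inside an HTML attribute:
print(escape('<a href="x">', True))  # &lt;a href=&quot;x&quot;&gt;
```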
39 | 44 | |
40 | 45 | import dns.name, dns.rdatatype |
41 | 46 | |
152 | 157 | DNAME_STATUS_INVALID: 'INVALID', |
153 | 158 | } |
154 | 159 | |
160 | RRSIG_SIG_LENGTHS_BY_ALGORITHM = { | |
161 | 12: 512, 13: 512, 14: 768, 15: 512, 16: 912, | |
162 | } | |
163 | RRSIG_SIG_LENGTH_ERRORS = { | |
164 | 12: Errors.RRSIGBadLengthGOST, 13: Errors.RRSIGBadLengthECDSA256, | |
165 | 14: Errors.RRSIGBadLengthECDSA384, 15: Errors.RRSIGBadLengthEd25519, | |
166 | 16: Errors.RRSIGBadLengthEd448, | |
167 | } | |
168 | ||
155 | 169 | class RRSIGStatus(object): |
156 | 170 | def __init__(self, rrset, rrsig, dnskey, zone_name, reference_ts, supported_algs): |
157 | 171 | self.rrset = rrset |
205 | 219 | if self.validation_status == RRSIG_STATUS_VALID: |
206 | 220 | self.validation_status = RRSIG_STATUS_INVALID |
207 | 221 | |
222 | sig_len = len(self.rrsig.signature) << 3 | |
223 | if self.rrsig.algorithm in RRSIG_SIG_LENGTHS_BY_ALGORITHM and \ | |
224 | sig_len != RRSIG_SIG_LENGTHS_BY_ALGORITHM[self.rrsig.algorithm]: | |
225 | self.errors.append(RRSIG_SIG_LENGTH_ERRORS[self.rrsig.algorithm](length=sig_len)) | |
226 | ||
208 | 227 | if self.reference_ts < self.rrsig.inception: |
209 | 228 | if self.validation_status == RRSIG_STATUS_VALID: |
210 | 229 | self.validation_status = RRSIG_STATUS_PREMATURE |
242 | 261 | erroneous_status |
243 | 262 | |
244 | 263 | if html_format: |
245 | formatter = lambda x: cgi.escape(x, True) | |
264 | formatter = lambda x: escape(x, True) | |
246 | 265 | else: |
247 | 266 | formatter = lambda x: x |
248 | 267 | |
381 | 400 | erroneous_status |
382 | 401 | |
383 | 402 | if html_format: |
384 | formatter = lambda x: cgi.escape(x, True) | |
403 | formatter = lambda x: escape(x, True) | |
385 | 404 | else: |
386 | 405 | formatter = lambda x: x |
387 | 406 | |
534 | 553 | (erroneous_status or nsec_list) |
535 | 554 | |
536 | 555 | if html_format: |
537 | formatter = lambda x: cgi.escape(x, True) | |
556 | formatter = lambda x: escape(x, True) | |
538 | 557 | else: |
539 | 558 | formatter = lambda x: x |
540 | 559 | |
785 | 804 | (erroneous_status or nsec_list) |
786 | 805 | |
787 | 806 | if html_format: |
788 | formatter = lambda x: cgi.escape(x, True) | |
807 | formatter = lambda x: escape(x, True) | |
789 | 808 | else: |
790 | 809 | formatter = lambda x: x |
791 | 810 | |
998 | 1017 | (erroneous_status or nsec3_list) |
999 | 1018 | |
1000 | 1019 | if html_format: |
1001 | formatter = lambda x: cgi.escape(x, True) | |
1020 | formatter = lambda x: escape(x, True) | |
1002 | 1021 | else: |
1003 | 1022 | formatter = lambda x: x |
1004 | 1023 | |
1349 | 1368 | (erroneous_status or nsec3_list) |
1350 | 1369 | |
1351 | 1370 | if html_format: |
1352 | formatter = lambda x: cgi.escape(x, True) | |
1371 | formatter = lambda x: escape(x, True) | |
1353 | 1372 | else: |
1354 | 1373 | formatter = lambda x: x |
1355 | 1374 | |
1483 | 1502 | (erroneous_status or dname_serialized) |
1484 | 1503 | |
1485 | 1504 | if html_format: |
1486 | formatter = lambda x: cgi.escape(x, True) | |
1505 | formatter = lambda x: escape(x, True) | |
1487 | 1506 | else: |
1488 | 1507 | formatter = lambda x: x |
1489 | 1508 |
5 | 5 | # |
6 | 6 | # Copyright 2014-2016 VeriSign, Inc. |
7 | 7 | # |
8 | # Copyright 2016-2017 Casey Deccio. | |
8 | # Copyright 2016-2019 Casey Deccio | |
9 | 9 | # |
10 | 10 | # DNSViz is free software; you can redistribute it and/or modify |
11 | 11 | # it under the terms of the GNU General Public License as published by |
31 | 31 | import os |
32 | 32 | import re |
33 | 33 | import sys |
34 | ||
35 | # minimal support for python2.6 | |
36 | try: | |
37 | from collections import OrderedDict | |
38 | except ImportError: | |
39 | from ordereddict import OrderedDict | |
34 | 40 | |
35 | 41 | import dns.exception, dns.name |
36 | 42 | |
54 | 60 | LOCAL_MEDIA_URL = 'file://' + DNSVIZ_SHARE_PATH |
55 | 61 | DNSSEC_TEMPLATE_FILE = os.path.join(DNSVIZ_SHARE_PATH, 'html', 'dnssec-template.html') |
56 | 62 | |
57 | logger = logging.getLogger('dnsviz.analysis.offline') | |
63 | logging.basicConfig(level=logging.WARNING, format='%(message)s') | |
64 | logger = logging.getLogger() | |
58 | 65 | |
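The hunk above swaps per-subcommand handler wiring for a single root-logger `basicConfig` call. A sketch of the effect (the message strings are illustrative only):

```python
import logging

# One-time root-logger setup as in the hunk above: basicConfig attaches a
# StreamHandler with the given format, replacing the manual
# StreamHandler/addHandler/setLevel sequence each subcommand used to do.
logging.basicConfig(level=logging.WARNING, format='%(message)s')
logger = logging.getLogger()

logger.warning('this is emitted')   # at/above WARNING -> printed bare
logger.info('this is suppressed')   # below WARNING -> dropped
```

One consequence of this design: `basicConfig` is a no-op if the root logger already has handlers, so the first subcommand to run wins the configuration.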
59 | 66 | def usage(err=None): |
60 | 67 | if err is not None: |
61 | 68 | err += '\n\n' |
62 | 69 | else: |
63 | 70 | err = '' |
64 | sys.stderr.write('''%sUsage: dnsviz graph [options] [domain name...] | |
71 | sys.stderr.write('''%sUsage: %s %s [options] [domain_name...] | |
72 | ||
73 | Graph the assessment of diagnostic DNS queries. | |
74 | ||
65 | 75 | Options: |
66 | -f <filename> - read names from a file | |
67 | -r <filename> - read diagnostic queries from a file | |
68 | -t <filename> - specify file containing trusted keys | |
76 | -f <filename> - Read names from a file. | |
77 | -r <filename> - Read diagnostic queries from a file. | |
78 | -t <filename> - Use trusted keys from the designated file. | |
79 | -C - Enforce DNS cookies strictly. | |
69 | 80 | -R <type>[,<type>...] |
70 | - Process queries of only the specified type(s) | |
71 | -O - derive the filename(s) from the format and domain name(s) | |
72 | -o <filename> - save the output to the specified file | |
73 | -T <format> - specify the format of the output | |
74 | -h - display the usage and exit | |
75 | ''' % (err)) | |
76 | ||
77 | def finish_graph(G, name_objs, rdtypes, trusted_keys, fmt, filename): | |
81 | - Process queries of only the specified type(s). | |
82 | -e - Do not remove redundant RRSIG edges from the graph. | |
83 | -O - Derive the filename(s) from the format and domain name(s). | |
84 | -o <filename> - Save the output to the specified file. | |
85 | -T <format> - Use the specified output format. | |
86 | -h - Display the usage and exit. | |
87 | ''' % (err, sys.argv[0], __name__.split('.')[-1])) | |
88 | ||
89 | def finish_graph(G, name_objs, rdtypes, trusted_keys, fmt, filename, remove_edges): | |
78 | 90 | G.add_trust(trusted_keys) |
79 | G.remove_extra_edges() | |
91 | ||
92 | if remove_edges: | |
93 | G.remove_extra_edges() | |
80 | 94 | |
81 | 95 | if fmt == 'html': |
82 | 96 | try: |
86 | 100 | sys.exit(3) |
87 | 101 | |
88 | 102 | try: |
89 | template_str = io.open(DNSSEC_TEMPLATE_FILE, 'r', encoding='utf-8').read() | |
103 | with io.open(DNSSEC_TEMPLATE_FILE, 'r', encoding='utf-8') as fh: | |
104 | template_str = fh.read() | |
90 | 105 | except IOError as e: |
91 | 106 | logger.error('Error reading template file "%s": %s' % (DNSSEC_TEMPLATE_FILE, e.strerror)) |
92 | 107 | sys.exit(3) |
118 | 133 | logger.error(str(e)) |
119 | 134 | sys.exit(3) |
120 | 135 | |
121 | def test_m2crypto(): | |
122 | try: | |
123 | import M2Crypto | |
124 | except ImportError: | |
125 | sys.stderr.write('''Warning: M2Crypto is not installed; cryptographic validation of signatures and digests will not be available.\n''') | |
126 | ||
127 | 136 | def test_pygraphviz(): |
128 | 137 | try: |
129 | 138 | from pygraphviz import release |
132 | 141 | major = int(major) |
133 | 142 | minor = int(re.sub(r'(\d+)[^\d].*', r'\1', minor)) |
134 | 143 | if (major, minor) < (1,1): |
135 | sys.stderr.write('''pygraphviz version >= 1.1 is required, but version %s is installed.\n''' % release.version) | |
144 | logger.error('''pygraphviz version >= 1.1 is required, but version %s is installed.''' % release.version) | |
136 | 145 | sys.exit(2) |
137 | 146 | except ValueError: |
138 | sys.stderr.write('''pygraphviz version >= 1.1 is required, but version %s is installed.\n''' % release.version) | |
147 | logger.error('''pygraphviz version >= 1.1 is required, but version %s is installed.''' % release.version) | |
139 | 148 | sys.exit(2) |
140 | 149 | except ImportError: |
141 | sys.stderr.write('''pygraphviz is required, but not installed.\n''') | |
150 | logger.error('''pygraphviz is required, but not installed.''') | |
142 | 151 | sys.exit(2) |
143 | 152 | |
144 | 153 | def main(argv): |
145 | 154 | try: |
146 | test_m2crypto() | |
147 | 155 | test_pygraphviz() |
148 | 156 | |
149 | 157 | try: |
150 | opts, args = getopt.getopt(argv[1:], 'f:r:R:t:Oo:T:h') | |
158 | opts, args = getopt.getopt(argv[1:], 'f:r:R:et:COo:T:h') | |
151 | 159 | except getopt.GetoptError as e: |
152 | usage(str(e)) | |
160 | sys.stderr.write('%s\n' % str(e)) | |
153 | 161 | sys.exit(1) |
154 | 162 | |
155 | 163 | # collect trusted keys |
157 | 165 | for opt, arg in opts: |
158 | 166 | if opt == '-t': |
159 | 167 | try: |
160 | tk_str = io.open(arg, 'r', encoding='utf-8').read() | |
168 | with io.open(arg, 'r', encoding='utf-8') as fh: | |
169 | tk_str = fh.read() | |
161 | 170 | except IOError as e: |
162 | sys.stderr.write('%s: "%s"\n' % (e.strerror, arg)) | |
171 | logger.error('%s: "%s"' % (e.strerror, arg)) | |
163 | 172 | sys.exit(3) |
164 | 173 | try: |
165 | 174 | trusted_keys.extend(get_trusted_keys(tk_str)) |
166 | 175 | except dns.exception.DNSException: |
167 | sys.stderr.write('There was an error parsing the trusted keys file: "%s"\n' % arg) | |
176 | logger.error('There was an error parsing the trusted keys file: "%s"' % arg) | |
168 | 177 | sys.exit(3) |
169 | 178 | |
170 | 179 | opts = dict(opts) |
173 | 182 | sys.exit(0) |
174 | 183 | |
175 | 184 | if '-f' in opts and args: |
176 | usage('If -f is used, then domain names may not be supplied as command line arguments.') | |
185 | sys.stderr.write('If -f is used, then domain names may not be supplied as command line arguments.\n') | |
177 | 186 | sys.exit(1) |
178 | 187 | |
179 | 188 | if '-R' in opts: |
180 | 189 | try: |
181 | 190 | rdtypes = opts['-R'].split(',') |
182 | 191 | except ValueError: |
183 | usage('The list of types was invalid: "%s"' % opts['-R']) | |
192 | sys.stderr.write('The list of types was invalid: "%s"\n' % opts['-R']) | |
184 | 193 | sys.exit(1) |
185 | 194 | try: |
186 | 195 | rdtypes = [dns.rdatatype.from_text(x) for x in rdtypes] |
187 | 196 | except dns.rdatatype.UnknownRdatatype: |
188 | usage('The list of types was invalid: "%s"' % opts['-R']) | |
197 | sys.stderr.write('The list of types was invalid: "%s"\n' % opts['-R']) | |
189 | 198 | sys.exit(1) |
190 | 199 | else: |
191 | 200 | rdtypes = None |
201 | ||
202 | strict_cookies = '-C' in opts | |
203 | ||
204 | remove_edges = '-e' not in opts | |
192 | 205 | |
193 | 206 | if '-T' in opts: |
194 | 207 | fmt = opts['-T'] |
197 | 210 | else: |
198 | 211 | fmt = 'dot' |
199 | 212 | if fmt not in ('dot','png','jpg','svg','html'): |
200 | usage('Image format unrecognized: "%s"' % fmt) | |
213 | sys.stderr.write('Image format unrecognized: "%s"\n' % fmt) | |
201 | 214 | sys.exit(1) |
202 | 215 | |
203 | 216 | if '-o' in opts and '-O' in opts: |
204 | usage('The -o and -O options may not be used together.') | |
217 | sys.stderr.write('The -o and -O options may not be used together.\n') | |
205 | 218 | sys.exit(1) |
206 | ||
207 | handler = logging.StreamHandler() | |
208 | handler.setLevel(logging.WARNING) | |
209 | logger.addHandler(handler) | |
210 | logger.setLevel(logging.WARNING) | |
211 | 219 | |
212 | 220 | if '-r' not in opts or opts['-r'] == '-': |
213 | 221 | opt_r = sys.stdin.fileno() |
214 | 222 | else: |
215 | 223 | opt_r = opts['-r'] |
216 | 224 | try: |
217 | analysis_str = io.open(opt_r, 'r', encoding='utf-8').read() | |
225 | with io.open(opt_r, 'r', encoding='utf-8') as fh: | |
226 | analysis_str = fh.read() | |
218 | 227 | except IOError as e: |
219 | 228 | logger.error('%s: "%s"' % (e.strerror, opts.get('-r', '-'))) |
220 | 229 | sys.exit(3) |
230 | if not analysis_str: | |
231 | if opt_r != sys.stdin.fileno(): | |
232 | logger.error('No input.') | |
233 | sys.exit(3) | |
221 | 234 | try: |
222 | 235 | analysis_structured = json.loads(analysis_str) |
223 | 236 | except ValueError: |
224 | logger.error('There was an error parsing the json input: "%s"' % opts.get('-r', '-')) | |
237 | logger.error('There was an error parsing the JSON input: "%s"' % opts.get('-r', '-')) | |
225 | 238 | sys.exit(3) |
226 | 239 | |
227 | 240 | # check version |
240 | 253 | logger.error('Version %d.%d of JSON input is incompatible with this software.' % (major_vers, minor_vers)) |
241 | 254 | sys.exit(3) |
242 | 255 | |
243 | names = [] | |
256 | names = OrderedDict() | |
244 | 257 | if '-f' in opts: |
245 | 258 | if opts['-f'] == '-': |
246 | 259 | opts['-f'] = sys.stdin.fileno() |
258 | 271 | except dns.exception.DNSException: |
259 | 272 | logger.error('The domain name was invalid: "%s"' % name) |
260 | 273 | else: |
261 | names.append(name) | |
274 | if name not in names: | |
275 | names[name] = None | |
262 | 276 | f.close() |
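The change from `names = []` with `names.append(name)` to an `OrderedDict` guarded by `if name not in names` deduplicates the input names while preserving first-seen order; the values are unused, so the dict acts as an ordered set. A standalone sketch (the sample names are illustrative):

```python
from collections import OrderedDict

# Ordered-set idiom from the hunk above: membership test is O(1), and
# iteration order is insertion order, so duplicates are dropped without
# reshuffling the names the user supplied.
names = OrderedDict()
for name in ['example.com.', 'example.net.', 'example.com.']:
    if name not in names:
        names[name] = None  # value unused

print(list(names))  # ['example.com.', 'example.net.']
```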
263 | 277 | else: |
264 | 278 | if args: |
269 | 283 | try: |
270 | 284 | args = analysis_structured['_meta._dnsviz.']['names'] |
271 | 285 | except KeyError: |
272 | logger.error('No names found in json input!') | |
286 | logger.error('No names found in JSON input!') | |
273 | 287 | sys.exit(3) |
274 | 288 | for name in args: |
275 | 289 | try: |
279 | 293 | except dns.exception.DNSException: |
280 | 294 | logger.error('The domain name was invalid: "%s"' % name) |
281 | 295 | else: |
282 | names.append(name) | |
296 | if name not in names: | |
297 | names[name] = None | |
283 | 298 | |
284 | 299 | latest_analysis_date = None |
285 | 300 | name_objs = [] |
289 | 304 | if name_str not in analysis_structured or analysis_structured[name_str].get('stub', True): |
290 | 305 | logger.error('The analysis of "%s" was not found in the input.' % lb2s(name.to_text())) |
291 | 306 | continue |
292 | name_obj = OfflineDomainNameAnalysis.deserialize(name, analysis_structured, cache) | |
307 | name_obj = OfflineDomainNameAnalysis.deserialize(name, analysis_structured, cache, strict_cookies=strict_cookies) | |
293 | 308 | name_objs.append(name_obj) |
294 | 309 | |
295 | 310 | if latest_analysis_date is None or latest_analysis_date > name_obj.analysis_end: |
327 | 342 | name = 'root' |
328 | 343 | else: |
329 | 344 | name = lb2s(name_obj.name.canonicalize().to_text()).rstrip('.') |
330 | finish_graph(G, [name_obj], rdtypes, trusted_keys, fmt, '%s.%s' % (name, fmt)) | |
345 | finish_graph(G, [name_obj], rdtypes, trusted_keys, fmt, '%s.%s' % (name, fmt), remove_edges) | |
331 | 346 | G = DNSAuthGraph() |
332 | 347 | |
333 | 348 | if '-O' not in opts: |
334 | 349 | if '-o' not in opts or opts['-o'] == '-': |
335 | finish_graph(G, name_objs, rdtypes, trusted_keys, fmt, None) | |
350 | finish_graph(G, name_objs, rdtypes, trusted_keys, fmt, None, remove_edges) | |
336 | 351 | else: |
337 | finish_graph(G, name_objs, rdtypes, trusted_keys, fmt, opts['-o']) | |
352 | finish_graph(G, name_objs, rdtypes, trusted_keys, fmt, opts['-o'], remove_edges) | |
338 | 353 | |
339 | 354 | except KeyboardInterrupt: |
340 | 355 | logger.error('Interrupted.') |
5 | 5 | # |
6 | 6 | # Copyright 2014-2016 VeriSign, Inc. |
7 | 7 | # |
8 | # Copyright 2016-2017 Casey Deccio. | |
8 | # Copyright 2016-2019 Casey Deccio | |
9 | 9 | # |
10 | 10 | # DNSViz is free software; you can redistribute it and/or modify |
11 | 11 | # it under the terms of the GNU General Public License as published by |
56 | 56 | else: |
57 | 57 | raise |
58 | 58 | |
59 | logger = logging.getLogger('dnsviz.analysis.offline') | |
59 | logging.basicConfig(level=logging.WARNING, format='%(message)s') | |
60 | logger = logging.getLogger() | |
60 | 61 | |
61 | 62 | TERM_COLOR_MAP = { |
62 | 63 | 'BOLD': '\033[1m', |
97 | 98 | err += '\n\n' |
98 | 99 | else: |
99 | 100 | err = '' |
100 | sys.stderr.write('''%sUsage: dnsviz grok [options] [domain name...] | |
101 | sys.stderr.write('''%sUsage: %s %s [options] [domain_name...] | |
102 | ||
103 | Assess diagnostic DNS queries. | |
104 | ||
101 | 105 | Options: |
102 | -f <filename> - read names from a file | |
103 | -r <filename> - read diagnostic queries from a file | |
104 | -t <filename> - specify file containing trusted keys | |
105 | -o <filename> - save the output to the specified file | |
106 | -c - make json output minimal instead of pretty | |
107 | -l <loglevel> - set log level to one of: error, warning, info, debug | |
108 | -h - display the usage and exit | |
109 | ''' % (err)) | |
106 | -f <filename> - Read names from a file. | |
107 | -r <filename> - Read diagnostic queries from a file. | |
108 | -t <filename> - Use trusted keys from the designated file. | |
109 | -C - Enforce DNS cookies strictly. | |
110 | -o <filename> - Save the output to the specified file. | |
111 | -c - Format JSON output minimally, instead of "pretty". | |
112 | -l <loglevel> - Log at the specified level: error, warning, info, debug. | |
113 | -h - Display the usage and exit. | |
114 | ''' % (err, sys.argv[0], __name__.split('.')[-1])) | |
110 | 115 | |
111 | 116 | def color_json(s): |
112 | 117 | error = None |
144 | 149 | |
145 | 150 | return s1.rstrip() |
146 | 151 | |
147 | def test_m2crypto(): | |
148 | try: | |
149 | import M2Crypto | |
150 | except ImportError: | |
151 | sys.stderr.write('''Warning: M2Crypto is not installed; cryptographic validation of signatures and digests will not be available.\n''') | |
152 | ||
153 | 152 | def test_pygraphviz(): |
154 | 153 | try: |
155 | 154 | from pygraphviz import release |
158 | 157 | major = int(major) |
159 | 158 | minor = int(re.sub(r'(\d+)[^\d].*', r'\1', minor)) |
160 | 159 | if (major, minor) < (1,1): |
161 | sys.stderr.write('''pygraphviz version >= 1.1 is required, but version %s is installed.\n''' % release.version) | |
160 | logger.error('''pygraphviz version >= 1.1 is required, but version %s is installed.''' % release.version) | |
162 | 161 | sys.exit(2) |
163 | 162 | except ValueError: |
164 | sys.stderr.write('''pygraphviz version >= 1.1 is required, but version %s is installed.\n''' % release.version) | |
163 | logger.error('''pygraphviz version >= 1.1 is required, but version %s is installed.''' % release.version) | |
165 | 164 | sys.exit(2) |
166 | 165 | except ImportError: |
167 | sys.stderr.write('''pygraphviz is required, but not installed.\n''') | |
166 | logger.error('''pygraphviz is required, but not installed.''') | |
168 | 167 | sys.exit(2) |
169 | 168 | |
170 | 169 | def main(argv): |
171 | 170 | try: |
172 | test_m2crypto() | |
173 | ||
174 | #TODO remove -p option (it is now the default, and -c is used to change it) | |
175 | try: | |
176 | opts, args = getopt.getopt(argv[1:], 'f:r:t:o:cpl:h') | |
171 | try: | |
172 | opts, args = getopt.getopt(argv[1:], 'f:r:t:Co:cl:h') | |
177 | 173 | except getopt.GetoptError as e: |
178 | usage(str(e)) | |
174 | sys.stderr.write('%s\n' % str(e)) | |
179 | 175 | sys.exit(1) |
180 | 176 | |
181 | 177 | # collect trusted keys |
183 | 179 | for opt, arg in opts: |
184 | 180 | if opt == '-t': |
185 | 181 | try: |
186 | tk_str = io.open(arg, 'r', encoding='utf-8').read() | |
182 | with io.open(arg, 'r', encoding='utf-8') as fh: | |
183 | tk_str = fh.read() | |
187 | 184 | except IOError as e: |
188 | sys.stderr.write('%s: "%s"\n' % (e.strerror, arg)) | |
185 | logger.error('%s: "%s"' % (e.strerror, arg)) | |
189 | 186 | sys.exit(3) |
190 | 187 | try: |
191 | 188 | trusted_keys.extend(get_trusted_keys(tk_str)) |
192 | 189 | except dns.exception.DNSException: |
193 | sys.stderr.write('There was an error parsing the trusted keys file: "%s"\n' % arg) | |
190 | logger.error('There was an error parsing the trusted keys file: "%s"' % arg) | |
194 | 191 | sys.exit(3) |
195 | 192 | |
196 | 193 | opts = dict(opts) |
199 | 196 | sys.exit(0) |
200 | 197 | |
201 | 198 | if '-f' in opts and args: |
202 | usage('If -f is used, then domain names may not be supplied as command line arguments.') | |
199 | sys.stderr.write('If -f is used, then domain names may not be supplied as command line arguments.\n') | |
203 | 200 | sys.exit(1) |
204 | 201 | |
205 | 202 | if '-l' in opts: |
212 | 209 | elif opts['-l'] == 'debug': |
213 | 210 | loglevel = logging.DEBUG |
214 | 211 | else: |
215 | usage('Invalid log level: "%s"' % opts['-l']) | |
212 | sys.stderr.write('Invalid log level: "%s"\n' % opts['-l']) | |
216 | 213 | sys.exit(1) |
217 | 214 | else: |
218 | 215 | loglevel = logging.DEBUG |
219 | handler = logging.StreamHandler() | |
220 | handler.setLevel(logging.WARNING) | |
221 | logger.addHandler(handler) | |
222 | logger.setLevel(logging.WARNING) | |
216 | ||
217 | strict_cookies = '-C' in opts | |
223 | 218 | |
224 | 219 | if '-r' not in opts or opts['-r'] == '-': |
225 | 220 | opt_r = sys.stdin.fileno() |
226 | 221 | else: |
227 | 222 | opt_r = opts['-r'] |
228 | 223 | try: |
229 | analysis_str = io.open(opt_r, 'r', encoding='utf-8').read() | |
224 | with io.open(opt_r, 'r', encoding='utf-8') as fh: | |
225 | analysis_str = fh.read() | |
230 | 226 | except IOError as e: |
231 | 227 | logger.error('%s: "%s"' % (e.strerror, opts.get('-r', '-'))) |
232 | 228 | sys.exit(3) |
229 | if not analysis_str: | |
230 | if opt_r != sys.stdin.fileno(): | |
231 | logger.error('No input.') | |
232 | sys.exit(3) | |
233 | 233 | try: |
234 | 234 | analysis_structured = json.loads(analysis_str) |
235 | 235 | except ValueError: |
236 | logger.error('There was an error parsing the json input: "%s"' % opts.get('-r', '-')) | |
236 | logger.error('There was an error parsing the JSON input: "%s"' % opts.get('-r', '-')) | |
237 | 237 | sys.exit(3) |
238 | 238 | |
239 | 239 | # check version |
252 | 252 | logger.error('Version %d.%d of JSON input is incompatible with this software.' % (major_vers, minor_vers)) |
253 | 253 | sys.exit(3) |
254 | 254 | |
255 | names = [] | |
255 | names = OrderedDict() | |
256 | 256 | if '-f' in opts: |
257 | 257 | if opts['-f'] == '-': |
258 | 258 | opts['-f'] = sys.stdin.fileno() |
270 | 270 | except dns.exception.DNSException: |
271 | 271 | logger.error('The domain name was invalid: "%s"' % name) |
272 | 272 | else: |
273 | names.append(name) | |
273 | if name not in names: | |
274 | names[name] = None | |
274 | 275 | f.close() |
275 | 276 | else: |
276 | 277 | if args: |
281 | 282 | try: |
282 | 283 | args = analysis_structured['_meta._dnsviz.']['names'] |
283 | 284 | except KeyError: |
284 | logger.error('No names found in json input!') | |
285 | logger.error('No names found in JSON input!') | |
285 | 286 | sys.exit(3) |
286 | 287 | for name in args: |
287 | 288 | try: |
291 | 292 | except dns.exception.DNSException: |
292 | 293 | logger.error('The domain name was invalid: "%s"' % name) |
293 | 294 | else: |
294 | names.append(name) | |
295 | if name not in names: | |
296 | names[name] = None | |
295 | 297 | |
296 | 298 | if '-o' not in opts or opts['-o'] == '-': |
297 | 299 | opts['-o'] = sys.stdout.fileno() |
317 | 319 | if name_str not in analysis_structured or analysis_structured[name_str].get('stub', True): |
318 | 320 | logger.error('The analysis of "%s" was not found in the input.' % lb2s(name.to_text())) |
319 | 321 | continue |
320 | name_obj = OfflineDomainNameAnalysis.deserialize(name, analysis_structured, cache) | |
322 | name_obj = OfflineDomainNameAnalysis.deserialize(name, analysis_structured, cache, strict_cookies=strict_cookies) | |
321 | 323 | name_objs.append(name_obj) |
322 | 324 | |
323 | 325 | if not name_objs: |
0 | #!/usr/bin/env python | |
1 | # | |
2 | # This file is a part of DNSViz, a tool suite for DNS/DNSSEC monitoring, | |
3 | # analysis, and visualization. | |
4 | # Created by Casey Deccio (casey@deccio.net) | |
5 | # | |
6 | # Copyright 2016-2019 Casey Deccio | |
7 | # | |
8 | # DNSViz is free software; you can redistribute it and/or modify | |
9 | # it under the terms of the GNU General Public License as published by | |
10 | # the Free Software Foundation; either version 2 of the License, or | |
11 | # (at your option) any later version. | |
12 | # | |
13 | # DNSViz is distributed in the hope that it will be useful, | |
14 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | |
15 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | |
16 | # GNU General Public License for more details. | |
17 | # | |
18 | # You should have received a copy of the GNU General Public License along | |
19 | # with DNSViz. If not, see <http://www.gnu.org/licenses/>. | |
20 | # | |
21 | ||
22 | from __future__ import unicode_literals | |
23 | ||
24 | import codecs | |
25 | import io | |
26 | import json | |
27 | import threading | |
28 | import sys | |
29 | ||
30 | # python3/python2 dual compatibility | |
31 | try: | |
32 | import queue | |
33 | except ImportError: | |
34 | import Queue as queue | |
35 | ||
36 | from dnsviz import transport | |
37 | ||
38 | class RemoteQueryError(Exception): | |
39 | pass | |
40 | ||
41 | def main(argv): | |
42 | sock = transport.ReaderWriter(io.open(sys.stdin.fileno(), 'rb'), io.open(sys.stdout.fileno(), 'wb')) | |
43 | sock.lock = threading.Lock() | |
44 | qth_reader = transport.DNSQueryTransportHandlerWebSocketClientReader(sock) | |
45 | qth_writer = transport.DNSQueryTransportHandlerWebSocketClientWriter(sock) | |
46 | ||
47 | response_queue = queue.Queue() | |
48 | queries_in_waiting = set() | |
49 | th_factory = transport.DNSQueryTransportHandlerDNSFactory() | |
50 | tm = transport.DNSQueryTransportManager() | |
51 | try: | |
52 | while True: | |
53 | try: | |
54 | qth_writer.qtms = [] | |
55 | ||
56 | tm.handle_msg(qth_reader) | |
57 | qth_reader.finalize() | |
58 | ||
59 | if len(qth_reader.msg_recv) == 0: | |
60 | break | |
61 | ||
62 | # load the json content | |
63 | try: | |
64 | content = json.loads(codecs.decode(qth_reader.msg_recv, 'utf-8')) | |
65 | except ValueError: | |
66 | raise RemoteQueryError('JSON decoding of request failed: %s' % qth_reader.msg_recv) | |
67 | ||
68 | if 'version' not in content: | |
69 | raise RemoteQueryError('No version information in request.') | |
70 | try: | |
71 | major_vers, minor_vers = [int(x) for x in str(content['version']).split('.', 1)] | |
72 | except ValueError: | |
73 | raise RemoteQueryError('Version of JSON input in request is invalid: %s' % content['version']) | |
74 | ||
75 | # ensure major version is a match and minor version is no greater | |
76 | # than the current minor version | |
77 | curr_major_vers, curr_minor_vers = [int(x) for x in str(transport.DNS_TRANSPORT_VERSION).split('.', 1)] | |
78 | if major_vers != curr_major_vers or minor_vers > curr_minor_vers: | |
79 | raise RemoteQueryError('Version %d.%d of JSON input in request is incompatible with this software.' % (major_vers, minor_vers)) | |
80 | ||
81 | if 'requests' not in content: | |
82 | raise RemoteQueryError('No request information in request.') | |
83 | ||
84 | for i, qtm_serialized in enumerate(content['requests']): | |
85 | try: | |
86 | qtm = transport.DNSQueryTransportMeta.deserialize_request(qtm_serialized) | |
87 | except transport.TransportMetaDeserializationError as e: | |
88 | raise RemoteQueryError('Error deserializing request information: %s' % e) | |
89 | ||
90 | qth_writer.add_qtm(qtm) | |
91 | th = th_factory.build(processed_queue=response_queue) | |
92 | th.add_qtm(qtm) | |
93 | th.init_req() | |
94 | tm.handle_msg_nowait(th) | |
95 | queries_in_waiting.add(th) | |
96 | ||
97 | while queries_in_waiting: | |
98 | th = response_queue.get() | |
99 | th.finalize() | |
100 | queries_in_waiting.remove(th) | |
101 | ||
102 | qth_writer.init_req() | |
103 | ||
104 | except RemoteQueryError as e: | |
105 | qth_writer.init_err_send(str(e)) | |
106 | ||
107 | tm.handle_msg(qth_writer) | |
108 | ||
109 | except EOFError: | |
110 | pass | |
111 | finally: | |
112 | tm.close() | |
113 | ||
114 | if __name__ == '__main__': | |
115 | main(sys.argv)
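The version check in the listing above enforces two rules: the request's major version must equal the software's, and its minor version must not exceed the software's. Factored out as a sketch (`version_compatible` is a hypothetical helper, and it assumes two-part `major.minor` strings as in the listing):

```python
def version_compatible(request_version, current_version):
    """Major versions must match exactly; the request's minor version may
    not be newer than what this software supports."""
    req_major, req_minor = [int(x) for x in str(request_version).split('.', 1)]
    cur_major, cur_minor = [int(x) for x in str(current_version).split('.', 1)]
    return req_major == cur_major and req_minor <= cur_minor

print(version_compatible('1.0', '1.2'))  # same major, older minor -> True
print(version_compatible('1.3', '1.2'))  # minor too new -> False
print(version_compatible('2.0', '1.2'))  # major mismatch -> False
```

This is the usual asymmetric compatibility rule: a newer server can serve older-minor requests, but never requests from a future minor or a different major.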
5 | 5 | # |
6 | 6 | # Copyright 2014-2016 VeriSign, Inc. |
7 | 7 | # |
8 | # Copyright 2016-2017 Casey Deccio. | |
8 | # Copyright 2016-2019 Casey Deccio | |
9 | 9 | # |
10 | 10 | # DNSViz is free software; you can redistribute it and/or modify |
11 | 11 | # it under the terms of the GNU General Public License as published by |
31 | 31 | import os |
32 | 32 | import re |
33 | 33 | import sys |
34 | ||
35 | # minimal support for python2.6 | |
36 | try: | |
37 | from collections import OrderedDict | |
38 | except ImportError: | |
39 | from ordereddict import OrderedDict | |
34 | 40 | |
35 | 41 | import dns.exception, dns.name |
36 | 42 | |
50 | 56 | else: |
51 | 57 | raise |
52 | 58 | |
53 | logger = logging.getLogger('dnsviz.analysis.offline') | |
59 | logging.basicConfig(level=logging.WARNING, format='%(message)s') | |
60 | logger = logging.getLogger() | |
54 | 61 | |
55 | 62 | def usage(err=None): |
56 | 63 | if err is not None: |
57 | 64 | err += '\n\n' |
58 | 65 | else: |
59 | 66 | err = '' |
60 | sys.stderr.write('''%sUsage: dnsviz print [options] [domain name...] | |
67 | sys.stderr.write('''%sUsage: %s %s [options] [domain_name...] | |
68 | ||
69 | Print the assessment of diagnostic DNS queries. | |
70 | ||
61 | 71 | Options: |
62 | -f <filename> - read names from a file | |
63 | -r <filename> - read diagnostic queries from a file | |
64 | -t <filename> - specify file containing trusted keys | |
72 | -f <filename> - Read names from a file. | |
73 | -r <filename> - Read diagnostic queries from a file. | |
74 | -t <filename> - Use trusted keys from the designated file. | |
75 | -C - Enforce DNS cookies strictly. | |
65 | 76 | -R <type>[,<type>...] |
66 | - Process queries of only the specified type(s) | |
67 | -O - derive the filename(s) from domain name(s) | |
68 | -o <filename> - save the output to the specified file | |
69 | -h - display the usage and exit | |
70 | ''' % (err)) | |
77 | - Process queries of only the specified type(s). | |
78 | -O - Derive the filename(s) from domain name(s). | |
79 | -o <filename> - Save the output to the specified file. | |
80 | -h - Display the usage and exit. | |
81 | ''' % (err, sys.argv[0], __name__.split('.')[-1])) | |
71 | 82 | |
72 | 83 | def finish_graph(G, name_objs, rdtypes, trusted_keys, filename): |
73 | 84 | G.add_trust(trusted_keys) |
278 | 289 | |
279 | 290 | return s |
280 | 291 | |
281 | def test_m2crypto(): | |
282 | try: | |
283 | import M2Crypto | |
284 | except ImportError: | |
285 | sys.stderr.write('''Warning: M2Crypto is not installed; cryptographic validation of signatures and digests will not be available.\n''') | |
286 | ||
287 | 292 | def test_pygraphviz(): |
288 | 293 | try: |
289 | 294 | from pygraphviz import release |
292 | 297 | major = int(major) |
293 | 298 | minor = int(re.sub(r'(\d+)[^\d].*', r'\1', minor)) |
294 | 299 | if (major, minor) < (1,1): |
295 | sys.stderr.write('''pygraphviz version >= 1.1 is required, but version %s is installed.\n''' % release.version) | |
300 | logger.error('''pygraphviz version >= 1.1 is required, but version %s is installed.''' % release.version) | |
296 | 301 | sys.exit(2) |
297 | 302 | except ValueError: |
298 | sys.stderr.write('''pygraphviz version >= 1.1 is required, but version %s is installed.\n''' % release.version) | |
303 | logger.error('''pygraphviz version >= 1.1 is required, but version %s is installed.''' % release.version) | |
299 | 304 | sys.exit(2) |
300 | 305 | except ImportError: |
301 | sys.stderr.write('''pygraphviz is required, but not installed.\n''') | |
306 | logger.error('''pygraphviz is required, but not installed.''') | |
302 | 307 | sys.exit(2) |
303 | 308 | |
304 | 309 | def main(argv): |
305 | 310 | try: |
306 | test_m2crypto() | |
307 | 311 | test_pygraphviz() |
308 | 312 | |
309 | 313 | try: |
310 | opts, args = getopt.getopt(argv[1:], 'f:r:R:t:Oo:h') | |
314 | opts, args = getopt.getopt(argv[1:], 'f:r:R:t:COo:h') | |
311 | 315 | except getopt.GetoptError as e: |
312 | usage(str(e)) | |
316 | sys.stderr.write('%s\n' % str(e)) | |
313 | 317 | sys.exit(1) |
314 | 318 | |
315 | 319 | # collect trusted keys |
317 | 321 | for opt, arg in opts: |
318 | 322 | if opt == '-t': |
319 | 323 | try: |
320 | tk_str = io.open(arg, 'r', encoding='utf-8').read() | |
324 | with io.open(arg, 'r', encoding='utf-8') as fh: | |
325 | tk_str = fh.read() | |
321 | 326 | except IOError as e: |
322 | sys.stderr.write('%s: "%s"\n' % (e.strerror, arg)) | |
327 | logger.error('%s: "%s"' % (e.strerror, arg)) | |
323 | 328 | sys.exit(3) |
324 | 329 | try: |
325 | 330 | trusted_keys.extend(get_trusted_keys(tk_str)) |
326 | 331 | except dns.exception.DNSException: |
327 | sys.stderr.write('There was an error parsing the trusted keys file: "%s"\n' % arg) | |
332 | logger.error('There was an error parsing the trusted keys file: "%s"' % arg) | |
328 | 333 | sys.exit(3) |
329 | 334 | |
330 | 335 | opts = dict(opts) |
333 | 338 | sys.exit(0) |
334 | 339 | |
335 | 340 | if '-f' in opts and args: |
336 | usage('If -f is used, then domain names may not be supplied as command line arguments.') | |
341 | sys.stderr.write('If -f is used, then domain names may not be supplied as command line arguments.\n') | |
337 | 342 | sys.exit(1) |
338 | 343 | |
339 | 344 | if '-R' in opts: |
340 | 345 | try: |
341 | 346 | rdtypes = opts['-R'].split(',') |
342 | 347 | except ValueError: |
343 | usage('The list of types was invalid: "%s"' % opts['-R']) | |
348 | sys.stderr.write('The list of types was invalid: "%s"\n' % opts['-R']) | |
344 | 349 | sys.exit(1) |
345 | 350 | try: |
346 | 351 | rdtypes = [dns.rdatatype.from_text(x) for x in rdtypes] |
347 | 352 | except dns.rdatatype.UnknownRdatatype: |
348 | usage('The list of types was invalid: "%s"' % opts['-R']) | |
353 | sys.stderr.write('The list of types was invalid: "%s"\n' % opts['-R']) | |
349 | 354 | sys.exit(1) |
350 | 355 | else: |
351 | 356 | rdtypes = None |
352 | 357 | |
358 | strict_cookies = '-C' in opts | |
359 | ||
353 | 360 | if '-o' in opts and '-O' in opts: |
354 | usage('The -o and -O options may not be used together.') | |
361 | sys.stderr.write('The -o and -O options may not be used together.\n') | |
355 | 362 | sys.exit(1) |
356 | ||
357 | handler = logging.StreamHandler() | |
358 | handler.setLevel(logging.WARNING) | |
359 | logger.addHandler(handler) | |
360 | logger.setLevel(logging.WARNING) | |
361 | 363 | |
362 | 364 | if '-r' not in opts or opts['-r'] == '-': |
363 | 365 | opt_r = sys.stdin.fileno() |
364 | 366 | else: |
365 | 367 | opt_r = opts['-r'] |
366 | 368 | try: |
367 | analysis_str = io.open(opt_r, 'r', encoding='utf-8').read() | |
369 | with io.open(opt_r, 'r', encoding='utf-8') as fh: | |
370 | analysis_str = fh.read() | |
368 | 371 | except IOError as e: |
369 | 372 | logger.error('%s: "%s"' % (e.strerror, opts.get('-r', '-'))) |
373 | sys.exit(3) | |
374 | if not analysis_str: | |
375 | if opt_r != sys.stdin.fileno(): | |
376 | logger.error('No input.') | |
370 | 377 | sys.exit(3) |
371 | 378 | try: |
372 | 379 | analysis_structured = json.loads(analysis_str) |
373 | 380 | except ValueError: |
374 | logger.error('There was an error parsing the json input: "%s"' % opts.get('-r', '-')) | |
381 | logger.error('There was an error parsing the JSON input: "%s"' % opts.get('-r', '-')) | |
375 | 382 | sys.exit(3) |
376 | 383 | |
377 | 384 | # check version |
390 | 397 | logger.error('Version %d.%d of JSON input is incompatible with this software.' % (major_vers, minor_vers)) |
391 | 398 | sys.exit(3) |
392 | 399 | |
393 | names = [] | |
400 | names = OrderedDict() | |
394 | 401 | if '-f' in opts: |
395 | 402 | if opts['-f'] == '-': |
396 | 403 | opts['-f'] = sys.stdin.fileno() |
408 | 415 | except dns.exception.DNSException: |
409 | 416 | logger.error('The domain name was invalid: "%s"' % name) |
410 | 417 | else: |
411 | names.append(name) | |
418 | if name not in names: | |
419 | names[name] = None | |
412 | 420 | f.close() |
413 | 421 | else: |
414 | 422 | if args: |
419 | 427 | try: |
420 | 428 | args = analysis_structured['_meta._dnsviz.']['names'] |
421 | 429 | except KeyError: |
422 | logger.error('No names found in json input!') | |
430 | logger.error('No names found in JSON input!') | |
423 | 431 | sys.exit(3) |
424 | 432 | for name in args: |
425 | 433 | try: |
429 | 437 | except dns.exception.DNSException: |
430 | 438 | logger.error('The domain name was invalid: "%s"' % name) |
431 | 439 | else: |
432 | names.append(name) | |
440 | if name not in names: | |
441 | names[name] = None | |
433 | 442 | |
434 | 443 | latest_analysis_date = None |
435 | 444 | name_objs = [] |
439 | 448 | if name_str not in analysis_structured or analysis_structured[name_str].get('stub', True): |
440 | 449 | logger.error('The analysis of "%s" was not found in the input.' % lb2s(name.to_text())) |
441 | 450 | continue |
442 | name_obj = TTLAgnosticOfflineDomainNameAnalysis.deserialize(name, analysis_structured, cache) | |
451 | name_obj = TTLAgnosticOfflineDomainNameAnalysis.deserialize(name, analysis_structured, cache, strict_cookies=strict_cookies) | |
443 | 452 | name_objs.append(name_obj) |
444 | 453 | |
445 | 454 | if latest_analysis_date is None or latest_analysis_date > name_obj.analysis_end: |
5 | 5 | # |
6 | 6 | # Copyright 2014-2016 VeriSign, Inc. |
7 | 7 | # |
8 | # Copyright 2016-2017 Casey Deccio. | |
8 | # Copyright 2016-2019 Casey Deccio | |
9 | 9 | # |
10 | 10 | # DNSViz is free software; you can redistribute it and/or modify |
11 | 11 | # it under the terms of the GNU General Public License as published by |
24 | 24 | from __future__ import unicode_literals |
25 | 25 | |
26 | 26 | import atexit |
27 | import binascii | |
27 | 28 | import codecs |
28 | 29 | import errno |
29 | 30 | import getopt |
30 | 31 | import io |
31 | 32 | import json |
32 | 33 | import logging |
34 | import multiprocessing | |
35 | import multiprocessing.managers | |
33 | 36 | import os |
37 | import random | |
34 | 38 | import re |
39 | import shutil | |
35 | 40 | import signal |
36 | 41 | import socket |
37 | import sys | |
38 | import multiprocessing | |
39 | import multiprocessing.managers | |
40 | import signal | |
41 | import shutil | |
42 | 42 | import struct |
43 | 43 | import subprocess |
44 | import sys | |
44 | 45 | import tempfile |
45 | 46 | import threading |
46 | 47 | import time |
61 | 62 | |
62 | 63 | import dns.edns, dns.exception, dns.message, dns.name, dns.rdata, dns.rdataclass, dns.rdatatype, dns.rdtypes.ANY.NS, dns.rdtypes.IN.A, dns.rdtypes.IN.AAAA, dns.resolver, dns.rrset |
63 | 64 | |
64 | from dnsviz.analysis import WILDCARD_EXPLICIT_DELEGATION, PrivateAnalyst, PrivateRecursiveAnalyst, OnlineDomainNameAnalysis, NetworkConnectivityException, DNS_RAW_VERSION | |
65 | from dnsviz.analysis import COOKIE_STANDIN, WILDCARD_EXPLICIT_DELEGATION, PrivateAnalyst, PrivateRecursiveAnalyst, OnlineDomainNameAnalysis, NetworkConnectivityException, DNS_RAW_VERSION | |
65 | 66 | import dnsviz.format as fmt |
66 | 67 | from dnsviz.ipaddr import IPAddr |
67 | from dnsviz.query import StandardRecursiveQueryCD | |
68 | from dnsviz.query import DiagnosticQuery, QuickDNSSECQuery, StandardRecursiveQueryCD | |
68 | 69 | from dnsviz.resolver import DNSAnswer, Resolver, PrivateFullResolver |
69 | 70 | from dnsviz import transport |
70 | 71 | from dnsviz.util import get_client_address, get_root_hints |
71 | 72 | lb2s = fmt.latin1_binary_to_string |
72 | 73 | |
73 | logger = logging.getLogger('dnsviz.analysis.online') | |
74 | logging.basicConfig(level=logging.WARNING, format='%(message)s') | |
75 | logger = logging.getLogger() | |
74 | 76 | |
75 | 77 | # this needs to be global because of multiprocessing |
76 | 78 | tm = None |
79 | th_factories = None | |
77 | 80 | resolver = None |
78 | 81 | bootstrap_resolver = None |
79 | 82 | explicit_delegations = None |
117 | 120 | def _init_full_resolver(): |
118 | 121 | global resolver |
119 | 122 | |
123 | quick_query = QuickDNSSECQuery.add_mixin(CustomQueryMixin).add_server_cookie(COOKIE_STANDIN) | |
124 | diagnostic_query = DiagnosticQuery.add_mixin(CustomQueryMixin).add_server_cookie(COOKIE_STANDIN) | |
125 | ||
120 | 126 | # now that we have the hints, make resolver a full resolver instead of a stub |
121 | 127 | hints = get_root_hints() |
122 | 128 | for key in explicit_delegations: |
123 | 129 | hints[key] = explicit_delegations[key] |
124 | resolver = PrivateFullResolver(hints, odd_ports=odd_ports, transport_manager=tm) | |
130 | resolver = PrivateFullResolver(hints, query_cls=(quick_query, diagnostic_query), odd_ports=odd_ports, cookie_standin=COOKIE_STANDIN, transport_manager=tm) | |
125 | 131 | |
126 | 132 | def _init_interrupt_handler(): |
127 | 133 | signal.signal(signal.SIGINT, _raise_eof) |
135 | 141 | _init_interrupt_handler() |
136 | 142 | |
137 | 143 | def _analyze(args): |
138 | (cls, name, dlv_domain, try_ipv4, try_ipv6, client_ipv4, client_ipv6, query_class_mixin, ceiling, edns_diagnostics, \ | |
139 | stop_at_explicit, extra_rdtypes, explicit_only, cache, cache_level, cache_lock, th_factories) = args | |
144 | (cls, name, rdclass, dlv_domain, try_ipv4, try_ipv6, client_ipv4, client_ipv6, query_class_mixin, ceiling, edns_diagnostics, \ | |
145 | stop_at_explicit, extra_rdtypes, explicit_only, cache, cache_level, cache_lock) = args | |
140 | 146 | if ceiling is not None and name.is_subdomain(ceiling): |
141 | 147 | c = ceiling |
142 | 148 | else: |
143 | 149 | c = name |
144 | 150 | try: |
145 | a = cls(name, dlv_domain=dlv_domain, try_ipv4=try_ipv4, try_ipv6=try_ipv6, client_ipv4=client_ipv4, client_ipv6=client_ipv6, query_class_mixin=query_class_mixin, ceiling=c, edns_diagnostics=edns_diagnostics, explicit_delegations=explicit_delegations, stop_at_explicit=stop_at_explicit, odd_ports=odd_ports, extra_rdtypes=extra_rdtypes, explicit_only=explicit_only, analysis_cache=cache, cache_level=cache_level, analysis_cache_lock=cache_lock, transport_manager=tm, th_factories=th_factories, resolver=resolver) | |
151 | a = cls(name, rdclass=rdclass, dlv_domain=dlv_domain, try_ipv4=try_ipv4, try_ipv6=try_ipv6, client_ipv4=client_ipv4, client_ipv6=client_ipv6, query_class_mixin=query_class_mixin, ceiling=c, edns_diagnostics=edns_diagnostics, explicit_delegations=explicit_delegations, stop_at_explicit=stop_at_explicit, odd_ports=odd_ports, extra_rdtypes=extra_rdtypes, explicit_only=explicit_only, analysis_cache=cache, cache_level=cache_level, analysis_cache_lock=cache_lock, transport_manager=tm, th_factories=th_factories, resolver=resolver) | |
146 | 152 | return a.analyze() |
147 | 153 | # re-raise a KeyboardInterrupt, as this means we've been interrupted |
148 | 154 | except KeyboardInterrupt: |
158 | 164 | logger.exception('Error analyzing %s' % fmt.humanize_name(name)) |
159 | 165 | return None |
160 | 166 | |
161 | class CustomQueryMixin(object): | |
162 | pass | |
163 | ||
164 | 167 | class BulkAnalyst(object): |
165 | 168 | analyst_cls = PrivateAnalyst |
166 | 169 | use_full_resolver = True |
167 | 170 | |
168 | def __init__(self, try_ipv4, try_ipv6, client_ipv4, client_ipv6, query_class_mixin, ceiling, edns_diagnostics, stop_at_explicit, cache_level, extra_rdtypes, explicit_only, dlv_domain, th_factories): | |
171 | def __init__(self, rdclass, try_ipv4, try_ipv6, client_ipv4, client_ipv6, query_class_mixin, ceiling, edns_diagnostics, stop_at_explicit, cache_level, extra_rdtypes, explicit_only, dlv_domain): | |
172 | self.rdclass = rdclass | |
169 | 173 | self.try_ipv4 = try_ipv4 |
170 | 174 | self.try_ipv6 = try_ipv6 |
171 | 175 | self.client_ipv4 = client_ipv4 |
178 | 182 | self.extra_rdtypes = extra_rdtypes |
179 | 183 | self.explicit_only = explicit_only |
180 | 184 | self.dlv_domain = dlv_domain |
181 | self.th_factories = th_factories | |
182 | 185 | |
183 | 186 | self.cache = {} |
184 | 187 | self.cache_lock = threading.Lock() |
185 | 188 | |
186 | 189 | def _name_to_args_iter(self, names): |
187 | 190 | for name in names: |
188 | yield (self.analyst_cls, name, self.dlv_domain, self.try_ipv4, self.try_ipv6, self.client_ipv4, self.client_ipv6, self.query_class_mixin, self.ceiling, self.edns_diagnostics, self.stop_at_explicit, self.extra_rdtypes, self.explicit_only, self.cache, self.cache_level, self.cache_lock, self.th_factories) | |
191 | yield (self.analyst_cls, name, self.rdclass, self.dlv_domain, self.try_ipv4, self.try_ipv6, self.client_ipv4, self.client_ipv6, self.query_class_mixin, self.ceiling, self.edns_diagnostics, self.stop_at_explicit, self.extra_rdtypes, self.explicit_only, self.cache, self.cache_level, self.cache_lock) | |
189 | 192 | |
190 | 193 | def analyze(self, names, flush_func=None): |
191 | 194 | name_objs = [] |
275 | 278 | analyst_cls = MultiProcessAnalyst |
276 | 279 | use_full_resolver = None |
277 | 280 | |
278 | def __init__(self, try_ipv4, try_ipv6, client_ipv4, client_ipv6, query_class_mixin, ceiling, edns_diagnostics, stop_at_explicit, cache_level, extra_rdtypes, explicit_only, dlv_domain, th_factories, processes): | |
279 | super(ParallelAnalystMixin, self).__init__(try_ipv4, try_ipv6, client_ipv4, client_ipv6, query_class_mixin, ceiling, edns_diagnostics, stop_at_explicit, cache_level, extra_rdtypes, explicit_only, dlv_domain, th_factories) | |
281 | def __init__(self, rdclass, try_ipv4, try_ipv6, client_ipv4, client_ipv6, query_class_mixin, ceiling, edns_diagnostics, stop_at_explicit, cache_level, extra_rdtypes, explicit_only, dlv_domain, processes): | |
282 | super(ParallelAnalystMixin, self).__init__(rdclass, try_ipv4, try_ipv6, client_ipv4, client_ipv6, query_class_mixin, ceiling, edns_diagnostics, stop_at_explicit, cache_level, extra_rdtypes, explicit_only, dlv_domain) | |
280 | 283 | self.manager = multiprocessing.managers.SyncManager() |
281 | 284 | self.manager.start() |
282 | 285 | |
341 | 344 | if require_name: |
342 | 345 | mappings_from_file = [] |
343 | 346 | try: |
344 | s = io.open(mapping, 'r', encoding='utf-8').read() | |
347 | with io.open(mapping, 'r', encoding='utf-8') as fh: | |
348 | s = fh.read() | |
345 | 349 | except IOError as e: |
346 | 350 | usage('%s: "%s"' % (e.strerror, mapping)) |
347 | 351 | sys.exit(3) |
506 | 510 | # if the value is actually a path, then check it as a zone file |
507 | 511 | if os.path.isfile(ds): |
508 | 512 | try: |
509 | s = io.open(ds, 'r', encoding='utf-8').read() | |
513 | with io.open(ds, 'r', encoding='utf-8') as fh: | |
514 | s = fh.read() | |
510 | 515 | except IOError as e: |
511 | 516 | usage('%s: "%s"' % (e.strerror, ds)) |
512 | 517 | sys.exit(3) |
560 | 565 | |
561 | 566 | def _serve_zone(zone, zone_file, port): |
562 | 567 | tmpdir = tempfile.mkdtemp(prefix='dnsviz') |
568 | env = { 'PATH': '%s:/sbin:/usr/sbin:/usr/local/sbin' % (os.environ.get('PATH', '')) } | |
563 | 569 | pid = None |
564 | 570 | |
565 | 571 | io.open('%s/named.conf' % tmpdir, 'w', encoding='utf-8').write(''' |
566 | 572 | options { |
567 | directory "%s"; | |
573 | directory "%s"; | |
568 | 574 | pid-file "named.pid"; |
569 | 575 | listen-on port %s { localhost; }; |
570 | 576 | listen-on-v6 port %s { localhost; }; |
584 | 590 | ''' % (tmpdir, port, port, lb2s(zone.to_text()), os.path.abspath(zone_file), tmpdir)) |
585 | 591 | |
586 | 592 | try: |
587 | p = subprocess.Popen(['named-checkconf', '-z', '%s/named.conf' % tmpdir], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) | |
593 | p = subprocess.Popen(['named-checkconf', '-z', '%s/named.conf' % tmpdir], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, env=env) | |
588 | 594 | except OSError as e: |
589 | 595 | usage('This option requires executing named-checkconf. Please ensure that it is installed and in PATH (%s).' % e) |
590 | 596 | _cleanup_process(tmpdir, pid) |
597 | 603 | sys.exit(1) |
598 | 604 | |
599 | 605 | try: |
600 | p = subprocess.Popen(['named', '-L', '%s/named.log' % tmpdir, '-c', '%s/named.conf' % tmpdir], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) | |
606 | p = subprocess.Popen(['named', '-c', '%s/named.conf' % tmpdir], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, env=env) | |
601 | 607 | except OSError as e: |
602 | 608 | usage('This option requires executing named. Please ensure that it is installed and in PATH (%s).' % e) |
603 | 609 | _cleanup_process(tmpdir, pid) |
606 | 612 | (stdout, stderr) = p.communicate() |
607 | 613 | if p.returncode != 0: |
608 | 614 | try: |
609 | log = io.open('%s/named.log' % tmpdir, 'r', encoding='utf-8').read() | |
615 | with io.open('%s/named.log' % tmpdir, 'r', encoding='utf-8') as fh: | |
616 | log = fh.read() | |
610 | 617 | except IOError as e: |
611 | 618 | log = '' |
612 | 619 | if not log: |
616 | 623 | sys.exit(1) |
617 | 624 | |
618 | 625 | try: |
619 | pid = int(io.open('%s/named.pid' % tmpdir, 'r', encoding='utf-8').read()) | |
626 | with io.open('%s/named.pid' % tmpdir, 'r', encoding='utf-8') as fh: | |
627 | pid = int(fh.read()) | |
620 | 628 | except (IOError, ValueError) as e: |
621 | 629 | usage('There was an error detecting the process ID for named: %s' % e) |
622 | 630 | _cleanup_process(tmpdir, pid) |
626 | 634 | |
627 | 635 | def _get_ecs_option(s): |
628 | 636 | try: |
629 | addr, prefix = s.split('/', 1) | |
637 | addr, prefix_len = s.split('/', 1) | |
630 | 638 | except ValueError: |
631 | 639 | addr = s |
632 | prefix = None | |
640 | prefix_len = None | |
633 | 641 | |
634 | 642 | try: |
635 | 643 | addr = IPAddr(addr) |
644 | 652 | addrlen = 16 |
645 | 653 | family = 2 |
646 | 654 | |
647 | if prefix is None: | |
648 | prefix = addrlen << 3 | |
655 | if prefix_len is None: | |
656 | prefix_len = addrlen << 3 | |
649 | 657 | else: |
650 | 658 | try: |
651 | prefix = int(prefix) | |
659 | prefix_len = int(prefix_len) | |
652 | 660 | except ValueError: |
653 | usage('The mask length was invalid: "%s"' % prefix) | |
654 | sys.exit(1) | |
655 | ||
656 | if prefix < 0 or prefix > (addrlen << 3): | |
657 | usage('The mask length was invalid: "%d"' % prefix) | |
658 | sys.exit(1) | |
659 | ||
660 | bytes_masked, remainder = divmod(prefix, 8) | |
661 | usage('The mask length was invalid: "%s"' % prefix_len) | |
662 | sys.exit(1) | |
663 | ||
664 | if prefix_len < 0 or prefix_len > (addrlen << 3): | |
665 | usage('The mask length was invalid: "%d"' % prefix_len) | |
666 | sys.exit(1) | |
667 | ||
668 | bytes_masked, remainder = divmod(prefix_len, 8) | |
661 | 669 | if remainder: |
662 | 670 | bytes_masked += 1 |
663 | 671 | |
664 | wire = struct.pack('!H', family) | |
665 | wire += struct.pack('!B', prefix) | |
666 | wire += struct.pack('!B', 0) | |
672 | wire = struct.pack(b'!H', family) | |
673 | wire += struct.pack(b'!B', prefix_len) | |
674 | wire += struct.pack(b'!B', 0) | |
667 | 675 | wire += addr._ipaddr_bytes[:bytes_masked] |
668 | 676 | |
669 | 677 | return dns.edns.GenericOption(8, wire) |
671 | 679 | def _get_nsid_option(): |
672 | 680 | |
673 | 681 | return dns.edns.GenericOption(dns.edns.NSID, b'') |
682 | ||
683 | def _get_dns_cookie_option(cookie=None): | |
684 | if cookie is None: | |
685 | r = random.getrandbits(64) | |
686 | cookie = struct.pack(b'Q', r) | |
687 | else: | |
688 | try: | |
689 | cookie = binascii.unhexlify(cookie) | |
690 | except TypeError: | |
691 | usage('The DNS cookie provided was not valid hexadecimal: "%s"' % cookie) | |
692 | sys.exit(1) | |
693 | ||
694 | if len(cookie) != 8: | |
695 | usage('The DNS client cookie provided had a length of %d, but only a length of %d is valid.' % (len(cookie), 8)) | 
696 | sys.exit(1) | |
697 | ||
698 | return dns.edns.GenericOption(10, cookie) | |
699 | ||
700 | class CustomQueryMixin(object): | |
701 | edns_options = [] | |
674 | 702 | |
675 | 703 | def usage(err=None): |
676 | 704 | if err is not None: |
677 | 705 | err += '\n\n' |
678 | 706 | else: |
679 | 707 | err = '' |
680 | sys.stderr.write('''%sUsage: dnsviz probe [options] [domain_name...] | |
708 | sys.stderr.write('''%sUsage: %s %s [options] [domain_name...] | |
709 | ||
710 | Issue diagnostic DNS queries. | |
711 | ||
681 | 712 | Options: |
682 | -f <filename> - read names from a file | |
683 | -d <level> - set debug level | |
684 | -r <filename> - read diagnostic queries from a file | |
685 | -t <threads> - specify number of threads to use for parallel queries | |
686 | -4 - use IPv4 only | |
687 | -6 - use IPv6 only | |
688 | -b - specify a source IPv4 or IPv6 address for queries | |
689 | -u <url> - URL for DNS looking glass | |
690 | -k - Do not verify TLS cert for DNS looking glass using HTTPS | |
691 | -a <ancestor> - query the ancestry of each domain name through ancestor | |
713 | -f <filename> - Read names from a file. | |
714 | -d <level> - Set debug level. | |
715 | -r <filename> - Read diagnostic queries from a file. | |
716 | -t <threads> - Use the specified number of threads for parallel queries. | |
717 | -4 - Use IPv4 only. | |
718 | -6 - Use IPv6 only. | |
719 | -b <addr> - Use the specified source IPv4 or IPv6 address for queries. | |
720 | -u <url> - Issue queries through the DNS looking glass at the | |
721 | specified URL. | |
722 | -k - Do not verify the TLS certificate for a DNS looking glass | |
723 | using HTTPS. | |
724 | -a <ancestor> - Query the ancestry of each domain name through the | |
725 | specified ancestor. | |
692 | 726 | -R <type>[,<type>...] |
693 | - perform analysis using only the specified type(s) | |
727 | - Issue queries for only the specified type(s) during analysis. | |
694 | 728 | -s <server>[,<server>...] |
695 | - designate servers for recursive analysis | |
696 | -A - query analysis against authoritative servers | |
729 | - Query the specified recursive server(s). | |
730 | -A - Query authoritative servers, instead of recursive servers. | |
697 | 731 | -x <domain>[+]:<server>[,<server>...] |
698 | - designate authoritative servers explicitly for a domain | |
732 | - Query the specified authoritative servers for a domain. | |
699 | 733 | -N <domain>:<server>[,<server>...] |
700 | - specify delegation information for a domain | |
734 | - Use the specified delegation information for a domain. | |
701 | 735 | -D <domain>:"<ds>"[,"<ds>"...] |
702 | - specify DS records for a domain | |
703 | -n - use the NSID EDNS option | |
704 | -e <subnet>[:<prefix>] | |
705 | - use the EDNS client subnet option with subnet/prefix | |
706 | -E - include EDNS compatibility diagnostics | |
707 | -p - make json output pretty instead of minimal | |
708 | -o <filename> - write the analysis to the specified file | |
709 | -h - display the usage and exit | |
710 | ''' % (err)) | |
736 | - Use the specified DS records for a domain. | |
737 | -n - Use the NSID EDNS option in queries. | |
738 | -e <subnet>[:<prefix_len>] | |
739 | - Use the DNS client subnet option with the specified subnet | |
740 | and prefix length in queries. | |
741 | -c <cookie> - Use the specified DNS cookie value in queries. | |
742 | -E - Issue queries to check EDNS compatibility. | |
743 | -o <filename> - Write the analysis to the specified file. | |
744 | -p - Format JSON output with indentation and newlines. | |
745 | -h - Display the usage and exit. | |
746 | ''' % (err, sys.argv[0], __name__.split('.')[-1])) | |
711 | 747 | |
712 | 748 | def main(argv): |
713 | 749 | global tm |
750 | global th_factories | |
714 | 751 | global resolver |
715 | 752 | global bootstrap_resolver |
716 | 753 | global explicit_delegations |
719 | 756 | |
720 | 757 | try: |
721 | 758 | try: |
722 | opts, args = getopt.getopt(argv[1:], 'f:d:l:c:r:t:64b:u:kmpo:a:R:x:N:D:ne:EAs:Fh') | |
759 | opts, args = getopt.getopt(argv[1:], 'f:d:l:C:r:t:64b:u:kmpo:a:R:x:N:D:ne:c:EAs:Fh') | |
723 | 760 | except getopt.GetoptError as e: |
724 | 761 | usage(str(e)) |
725 | 762 | sys.exit(1) |
731 | 768 | explicit_delegations = {} |
732 | 769 | odd_ports = {} |
733 | 770 | stop_at_explicit = {} |
771 | rdclass = dns.rdataclass.IN | |
734 | 772 | client_ipv4 = None |
735 | 773 | client_ipv6 = None |
736 | 774 | delegation_info = {} |
958 | 996 | if opts['-u'].startswith('https'): |
959 | 997 | vers0, vers1, vers2 = sys.version_info[:3] |
960 | 998 | if (2, 7, 9) > (vers0, vers1, vers2): |
961 | sys.stderr.write('python version >= 2.7.9 is required to use a DNS looking glass with HTTPS.\n') | |
999 | logger.error('python version >= 2.7.9 is required to use a DNS looking glass with HTTPS.') | |
962 | 1000 | sys.exit(1) |
963 | 1001 | |
964 | 1002 | url = urlparse.urlparse(opts['-u']) |
968 | 1006 | if url.hostname is not None: |
969 | 1007 | usage('WebSocket URL must designate a local UNIX domain socket.') |
970 | 1008 | sys.exit(1) |
971 | th_factories = (transport.DNSQueryTransportHandlerWebSocketFactory(url.path),) | |
1009 | th_factories = (transport.DNSQueryTransportHandlerWebSocketServerFactory(url.path),) | |
1010 | elif url.scheme == 'ssh': | |
1011 | th_factories = (transport.DNSQueryTransportHandlerRemoteCmdFactory(opts['-u']),) | |
972 | 1012 | else: |
973 | 1013 | usage('Unsupported URL scheme: "%s"' % opts['-u']) |
974 | 1014 | sys.exit(1) |
987 | 1027 | # the following option is not documented in usage, as it doesn't |
988 | 1028 | # apply to most users |
989 | 1029 | try: |
990 | cache_level = int(opts['-c']) | |
1030 | cache_level = int(opts['-C']) | |
991 | 1031 | except (KeyError, ValueError): |
992 | 1032 | cache_level = None |
993 | 1033 | |
1017 | 1057 | debug_level = logging.WARNING |
1018 | 1058 | else: |
1019 | 1059 | debug_level = logging.ERROR |
1020 | handler = logging.StreamHandler() | |
1021 | handler.setLevel(debug_level) | |
1022 | logger.addHandler(handler) | |
1023 | 1060 | logger.setLevel(debug_level) |
1024 | 1061 | |
1025 | 1062 | if '-A' in opts: |
1034 | 1071 | else: |
1035 | 1072 | opt_r = opts['-r'] |
1036 | 1073 | try: |
1037 | analysis_str = io.open(opt_r, 'r', encoding='utf-8').read() | |
1074 | with io.open(opt_r, 'r', encoding='utf-8') as fh: | |
1075 | analysis_str = fh.read() | |
1038 | 1076 | except IOError as e: |
1039 | 1077 | logger.error('%s: "%s"' % (e.strerror, opts.get('-r', '-'))) |
1040 | 1078 | sys.exit(3) |
1079 | if not analysis_str: | |
1080 | if opt_r != sys.stdin.fileno(): | |
1081 | logger.error('No input.') | |
1082 | sys.exit(3) | |
1041 | 1083 | try: |
1042 | 1084 | analysis_structured = json.loads(analysis_str) |
1043 | 1085 | except ValueError: |
1044 | logger.error('There was an error parsing the json input: "%s"' % opts['-r']) | |
1086 | logger.error('There was an error parsing the JSON input: "%s"' % opts['-r']) | |
1045 | 1087 | sys.exit(3) |
1046 | 1088 | |
1047 | 1089 | # check version |
1060 | 1102 | logger.error('Version %d.%d of JSON input is incompatible with this software.' % (major_vers, minor_vers)) |
1061 | 1103 | sys.exit(3) |
1062 | 1104 | |
1063 | names = [] | |
1105 | names = OrderedDict() | |
1064 | 1106 | if '-f' in opts: |
1065 | 1107 | if opts['-f'] == '-': |
1066 | 1108 | opts['-f'] = sys.stdin.fileno() |
1078 | 1120 | except dns.exception.DNSException: |
1079 | 1121 | logger.error('The domain name was invalid: "%s"' % name) |
1080 | 1122 | else: |
1081 | names.append(name) | |
1123 | if name not in names: | |
1124 | names[name] = None | |
1082 | 1125 | f.close() |
1083 | 1126 | else: |
1084 | 1127 | if args: |
1089 | 1132 | try: |
1090 | 1133 | args = analysis_structured['_meta._dnsviz.']['names'] |
1091 | 1134 | except KeyError: |
1092 | logger.error('No names found in json input!') | |
1135 | logger.error('No names found in JSON input!') | |
1093 | 1136 | sys.exit(3) |
1094 | 1137 | for name in args: |
1095 | 1138 | try: |
1099 | 1142 | except dns.exception.DNSException: |
1100 | 1143 | logger.error('The domain name was invalid: "%s"' % name) |
1101 | 1144 | else: |
1102 | names.append(name) | |
1145 | if name not in names: | |
1146 | names[name] = None | |
1103 | 1147 | |
1104 | 1148 | if '-p' in opts: |
1105 | 1149 | kwargs = { 'indent': 4, 'separators': (',', ': ') } |
1128 | 1172 | |
1129 | 1173 | flush = '-F' in opts |
1130 | 1174 | |
1131 | if '-n' in opts or '-e' in opts: | |
1132 | CustomQueryMixin.edns_options = [] | |
1133 | if '-e' in opts: | |
1134 | CustomQueryMixin.edns_options.append(_get_ecs_option(opts['-e'])) | |
1135 | if '-n' in opts: | |
1136 | CustomQueryMixin.edns_options.append(_get_nsid_option()) | |
1137 | query_class_mixin = CustomQueryMixin | |
1138 | else: | |
1139 | query_class_mixin = None | |
1175 | query_class_mixin = CustomQueryMixin | |
1176 | if '-e' in opts: | |
1177 | CustomQueryMixin.edns_options.append(_get_ecs_option(opts['-e'])) | |
1178 | if '-n' in opts: | |
1179 | CustomQueryMixin.edns_options.append(_get_nsid_option()) | |
1180 | if '-c' in opts: | |
1181 | if opts['-c']: | |
1182 | CustomQueryMixin.edns_options.append(_get_dns_cookie_option(opts['-c'])) | |
1183 | else: | |
1184 | # No cookie option was specified, so generate one | |
1185 | CustomQueryMixin.edns_options.append(_get_dns_cookie_option()) | |
1140 | 1186 | |
1141 | 1187 | name_objs = [] |
1142 | 1188 | if '-r' in opts: |
1148 | 1194 | name_objs.append(OnlineDomainNameAnalysis.deserialize(name, analysis_structured, cache)) |
1149 | 1195 | else: |
1150 | 1196 | if '-t' in opts: |
1151 | a = cls(try_ipv4, try_ipv6, client_ipv4, client_ipv6, query_class_mixin, ceiling, edns_diagnostics, stop_at_explicit, cache_level, rdtypes, explicit_only, dlv_domain, th_factories, processes) | |
1197 | a = cls(rdclass, try_ipv4, try_ipv6, client_ipv4, client_ipv6, query_class_mixin, ceiling, edns_diagnostics, stop_at_explicit, cache_level, rdtypes, explicit_only, dlv_domain, processes) | |
1152 | 1198 | else: |
1153 | 1199 | if cls.use_full_resolver: |
1154 | 1200 | _init_full_resolver() |
1155 | 1201 | else: |
1156 | 1202 | _init_stub_resolver() |
1157 | a = cls(try_ipv4, try_ipv6, client_ipv4, client_ipv6, query_class_mixin, ceiling, edns_diagnostics, stop_at_explicit, cache_level, rdtypes, explicit_only, dlv_domain, th_factories) | |
1203 | a = cls(rdclass, try_ipv4, try_ipv6, client_ipv4, client_ipv6, query_class_mixin, ceiling, edns_diagnostics, stop_at_explicit, cache_level, rdtypes, explicit_only, dlv_domain) | |
1158 | 1204 | if flush: |
1159 | 1205 | fh.write('{') |
1160 | 1206 | a.analyze(names, _flush) |
5 | 5 | # |
6 | 6 | # Copyright 2015-2016 VeriSign, Inc. |
7 | 7 | # |
8 | # Copyright 2016-2017 Casey Deccio. | |
8 | # Copyright 2016-2019 Casey Deccio | |
9 | 9 | # |
10 | 10 | # DNSViz is free software; you can redistribute it and/or modify |
11 | 11 | # it under the terms of the GNU General Public License as published by |
49 | 49 | err += '\n\n' |
50 | 50 | else: |
51 | 51 | err = '' |
52 | sys.stderr.write('''%sUsage: dnsviz query [@global-server] [domain] [q-type] [q-class] {q-opt} | |
52 | sys.stderr.write('''%sUsage: %s %s [@global-server] [domain] [q-type] [q-class] {q-opt} | |
53 | 53 | {global-d-opt} host [@local-server] {local-d-opt} |
54 | 54 | [ host [@local-server] {local-d-opt} [...]] |
55 | 55 | Where: domain is in the Domain Name System |
69 | 69 | global d-opts and servers (before host name) affect all queries. |
70 | 70 | local d-opts and servers (after host name) affect only that lookup. |
71 | 71 | -h (print help and exit) |
72 | ''' % (err)) | |
72 | ''' % (err, sys.argv[0], __name__.split('.')[-1])) | |
73 | 73 | |
74 | 74 | class DVCommandLineQuery: |
75 | 75 | def __init__(self, qname, rdtype, rdclass): |
96 | 96 | if not arg: |
97 | 97 | raise ValueError() |
98 | 98 | except ValueError: |
99 | usage('+trusted-key requires a filename argument.') | |
99 | sys.stderr.write('+trusted-key requires a filename argument.\n') | |
100 | 100 | sys.exit(1) |
101 | 101 | else: |
102 | 102 | self.trusted_keys_file = arg |
103 | 103 | else: |
104 | usage('Option "%s" not recognized.' % arg) | |
104 | sys.stderr.write('Option "%s" not recognized.\n' % arg) | |
105 | 105 | sys.exit(1) |
106 | 106 | |
107 | 107 | def process_nameservers(self, nameservers, use_ipv4, use_ipv6): |
215 | 215 | try: |
216 | 216 | if len(self.args[self.arg_index]) > 2: |
217 | 217 | if not has_arg: |
218 | usage('"%s" option does not take arguments' % self.args[self.arg_index][:2]) | |
218 | sys.stderr.write('"%s" option does not take arguments\n' % self.args[self.arg_index][:2]) | |
219 | 219 | sys.exit(1) |
220 | 220 | return self.args[self.arg_index][2:] |
221 | 221 | else: |
224 | 224 | else: |
225 | 225 | self.arg_index += 1 |
226 | 226 | if self.arg_index >= len(self.args): |
227 | usage('"%s" option requires an argument' % self.args[self.arg_index - 1]) | |
227 | sys.stderr.write('"%s" option requires an argument\n' % self.args[self.arg_index - 1]) | |
228 | 228 | sys.exit(1) |
229 | 229 | return self.args[self.arg_index] |
230 | 230 | finally: |
351 | 351 | self._get_arg(False) |
352 | 352 | self.options['use_ipv4'] = True |
353 | 353 | else: |
354 | usage('Option "%s" not recognized.' % self.args[self.arg_index][:2]) | |
354 | sys.stderr.write('Option "%s" not recognized.\n' % self.args[self.arg_index][:2]) | |
355 | 355 | sys.exit(1) |
356 | 356 | |
357 | 357 | def _add_query_option(self, query): |
0 | 0 | # |
1 | 1 | # This file is a part of DNSViz, a tool suite for DNS/DNSSEC monitoring, |
2 | 2 | # analysis, and visualization. |
3 | # Author: Casey Deccio (casey@deccio.net) | |
3 | # Created by Casey Deccio (casey@deccio.net) | |
4 | 4 | # |
5 | 5 | # Copyright 2014-2016 Verisign, Inc. |
6 | # | |
7 | # Copyright 2016-2019 Casey Deccio | |
6 | 8 | # |
7 | 9 | # DNSViz is free software; you can redistribute it and/or modify |
8 | 10 | # it under the terms of the GNU General Public License as published by |
21 | 23 | from __future__ import unicode_literals |
22 | 24 | |
23 | 25 | import os |
24 | DNSVIZ_SHARE_PATH = os.path.join('__DNSVIZ_INSTALL_PREFIX__', 'share', 'dnsviz') | |
26 | import sys | |
27 | ||
28 | if hasattr(sys, 'real_prefix'): | |
29 | DNSVIZ_INSTALL_PREFIX = sys.prefix | |
30 | else: | |
31 | DNSVIZ_INSTALL_PREFIX = '__DNSVIZ_INSTALL_PREFIX__' | |
32 | DNSVIZ_SHARE_PATH = os.path.join(DNSVIZ_INSTALL_PREFIX, 'share', 'dnsviz') | |
25 | 33 | JQUERY_PATH = __JQUERY_PATH__ |
26 | 34 | JQUERY_UI_PATH = __JQUERY_UI_PATH__ |
27 | 35 | JQUERY_UI_CSS_PATH = __JQUERY_UI_CSS_PATH__ |
0 | 0 | # |
1 | 1 | # This file is a part of DNSViz, a tool suite for DNS/DNSSEC monitoring, |
2 | # analysis, and visualization. This file (or some portion thereof) is a | |
3 | # derivative work authored by VeriSign, Inc., and created in 2014, based on | |
4 | # code originally developed at Sandia National Laboratories. | |
2 | # analysis, and visualization. | |
5 | 3 | # Created by Casey Deccio (casey@deccio.net) |
6 | 4 | # |
7 | 5 | # Copyright 2012-2014 Sandia Corporation. Under the terms of Contract |
10 | 8 | # |
11 | 9 | # Copyright 2014-2016 VeriSign, Inc. |
12 | 10 | # |
13 | # Copyright 2016 Casey Deccio. | |
11 | # Copyright 2016-2019 Casey Deccio | |
14 | 12 | # |
15 | 13 | # DNSViz is free software; you can redistribute it and/or modify |
16 | 14 | # it under the terms of the GNU General Public License as published by |
30 | 28 | |
31 | 29 | import atexit |
32 | 30 | import base64 |
31 | import binascii | |
32 | import logging | |
33 | 33 | import struct |
34 | 34 | import hashlib |
35 | 35 | import os |
36 | 36 | import re |
37 | 37 | |
38 | from . import format as fmt | |
39 | lb2s = fmt.latin1_binary_to_string | |
40 | ||
41 | logger = logging.getLogger(__name__) | |
42 | ||
43 | ALG_TYPE_DNSSEC = 0 | |
44 | ALG_TYPE_DIGEST = 1 | |
45 | ALG_TYPE_NSEC3 = 2 | |
46 | ||
47 | ALG_TYPE_DNSSEC_TEXT = [ | |
48 | 'algorithm', | |
49 | 'digest algorithm', | |
50 | 'NSEC3 algorithm', | |
51 | ] | |
52 | ||
53 | _crypto_sources = { | |
54 | 'M2Crypto >= 0.21.1': (set([1,5,7,8,10]), set([1,2,4]), set([1])), | |
55 | 'M2Crypto >= 0.24.0': (set([3,6,13,14]), set(), set()), | |
56 | 'M2Crypto >= 0.24.0 and either openssl < 1.1.0 or openssl >= 1.1.0 plus the OpenSSL GOST Engine': (set([12]), set([3]), set()), | |
57 | 'libnacl': (set([15]), set(), set()), | |
58 | } | |
59 | _logged_modules = set() | |
60 | ||
61 | _supported_algs = set() | |
62 | _supported_digest_algs = set() | |
63 | _supported_nsec3_algs = set([1]) | |
38 | 64 | try: |
39 | 65 | from M2Crypto import EVP, RSA |
40 | 66 | from M2Crypto.m2 import hex_to_bn, bn_to_mpi |
41 | 67 | except: |
42 | _supported_algs = set() | |
43 | _supported_digest_algs = set() | |
68 | pass | |
44 | 69 | else: |
45 | _supported_algs = set([1,5,7,8,10]) | |
46 | _supported_digest_algs = set([1,2,4]) | |
47 | ||
48 | _supported_nsec3_algs = set([1]) | |
70 | _supported_algs.update(set([1,5,7,8,10])) | |
71 | _supported_digest_algs.update(set([1,2,4])) | |
72 | ||
73 | try: | |
74 | from libnacl.sign import Verifier as ed25519Verifier | |
75 | except ImportError: | |
76 | pass | |
77 | else: | |
78 | _supported_algs.add(15) | |
49 | 79 | |
50 | 80 | GOST_PREFIX = b'\x30\x63\x30\x1c\x06\x06\x2a\x85\x03\x02\x02\x13\x30\x12\x06\x07\x2a\x85\x03\x02\x02\x23\x01\x06\x07\x2a\x85\x03\x02\x02\x1e\x01\x03\x43\x00\x04\x40' |
81 | GOST_ENGINE_NAME = b'gost' | |
51 | 82 | GOST_DIGEST_NAME = b'GOST R 34.11-94' |
52 | 83 | |
84 | # python3/python2 dual compatibility | |
85 | if not isinstance(GOST_ENGINE_NAME, str): | |
86 | GOST_ENGINE_NAME = lb2s(GOST_ENGINE_NAME) | |
87 | GOST_DIGEST_NAME = lb2s(GOST_DIGEST_NAME) | |
88 | ||
53 | 89 | EC_NOCOMPRESSION = b'\x04' |
90 | ||
54 | 91 | |
55 | 92 | def _init_dynamic(): |
56 | 93 | try: |
95 | 132 | def nsec3_alg_is_supported(alg): |
96 | 133 | return alg in _supported_nsec3_algs |
97 | 134 | |
135 | def _log_unsupported_alg(alg, alg_type): | |
136 | for mod in _crypto_sources: | |
137 | if alg in _crypto_sources[mod][alg_type]: | |
138 | if mod not in _logged_modules: | |
139 | _logged_modules.add(mod) | |
140 | logger.warning('Warning: Without the installation of %s, cryptographic validation of DNSSEC %s %d (and possibly others) is not supported.' % (mod, ALG_TYPE_DNSSEC_TEXT[alg_type], alg)) | |
141 | return | |
142 | ||
98 | 143 | def _gost_init(): |
99 | 144 | try: |
100 | gost = Engine.Engine(b'gost') | |
145 | gost = Engine.Engine(GOST_ENGINE_NAME) | |
101 | 146 | gost.init() |
102 | 147 | gost.set_default() |
103 | 148 | except ValueError: |
106 | 151 | def _gost_cleanup(): |
107 | 152 | from M2Crypto import Engine |
108 | 153 | try: |
109 | gost = Engine.Engine(b'gost') | |
154 | gost = Engine.Engine(GOST_ENGINE_NAME) | |
110 | 155 | except ValueError: |
111 | 156 | pass |
112 | 157 | else: |
136 | 181 | |
137 | 182 | def validate_ds_digest(digest_alg, digest, dnskey_msg): |
138 | 183 | if not digest_alg_is_supported(digest_alg): |
184 | _log_unsupported_alg(digest_alg, ALG_TYPE_DIGEST) | |
139 | 185 | return None |
140 | 186 | |
141 | 187 | if digest_alg == 1: |
161 | 207 | |
162 | 208 | def _dnskey_to_dsa(key): |
163 | 209 | # get T |
164 | t, = struct.unpack(b'B',key[0]) | |
210 | t = key[0] | |
211 | # python3/python2 dual compatibility | |
212 | if not isinstance(t, int): | |
213 | t = ord(t) | |
165 | 214 | offset = 1 |
166 | 215 | |
167 | 216 | # get Q |
168 | 217 | new_offset = offset+20 |
169 | q = b'' | |
170 | for c in key[offset:new_offset]: | |
171 | q += b'%02x' % struct.unpack(b'B',c)[0] | |
172 | q = bn_to_mpi(hex_to_bn(q)) | |
218 | q = bn_to_mpi(hex_to_bn(binascii.hexlify(key[offset:new_offset]))) | |
173 | 219 | offset = new_offset |
174 | 220 | |
175 | 221 | # get P |
176 | 222 | new_offset = offset+64+(t<<3) |
177 | p = b'' | |
178 | for c in key[offset:new_offset]: | |
179 | p += b'%02x' % struct.unpack(b'B',c)[0] | |
180 | p = bn_to_mpi(hex_to_bn(p)) | |
223 | p = bn_to_mpi(hex_to_bn(binascii.hexlify(key[offset:new_offset]))) | |
181 | 224 | offset = new_offset |
182 | 225 | |
183 | 226 | # get G |
184 | 227 | new_offset = offset+64+(t<<3) |
185 | g = b'' | |
186 | for c in key[offset:new_offset]: | |
187 | g += b'%02x' % struct.unpack(b'B',c)[0] | |
188 | g = bn_to_mpi(hex_to_bn(g)) | |
228 | g = bn_to_mpi(hex_to_bn(binascii.hexlify(key[offset:new_offset]))) | |
189 | 229 | offset = new_offset |
190 | 230 | |
191 | 231 | # get Y |
192 | 232 | new_offset = offset+64+(t<<3) |
193 | y = b'' | |
194 | for c in key[offset:new_offset]: | |
195 | y += b'%02x' % struct.unpack(b'B',c)[0] | |
196 | y = bn_to_mpi(hex_to_bn(y)) | |
233 | y = bn_to_mpi(hex_to_bn(binascii.hexlify(key[offset:new_offset]))) | |
197 | 234 | offset = new_offset |
198 | 235 | |
199 | 236 | # create the DSA public key |
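The field extraction above follows the DSA DNSKEY wire layout from RFC 2536: a one-byte T parameter, then Q (20 bytes), then P, G, and Y, each 64 + T*8 bytes. A minimal stdlib-only sketch of that same layout (the function name `parse_dsa_dnskey` is illustrative, not part of DNSViz):

```python
def parse_dsa_dnskey(key):
    # RFC 2536: a 1-byte T parameter, then Q (20 bytes), then
    # P, G, and Y, each 64 + T*8 bytes long.
    t = key[0]
    size = 64 + (t << 3)
    offset = 1
    q = key[offset:offset+20]; offset += 20
    p = key[offset:offset+size]; offset += size
    g = key[offset:offset+size]; offset += size
    y = key[offset:offset+size]
    return t, q, p, g, y
```

Each extracted component is then converted to an MPI (as `bn_to_mpi(hex_to_bn(binascii.hexlify(...)))` does above) before being handed to the DSA verifier.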
202 | 239 | def _dnskey_to_rsa(key): |
203 | 240 | try: |
204 | 241 | # get the exponent length |
205 | e_len, = struct.unpack(b'B',key[0]) | |
242 | e_len = key[0] | |
206 | 243 | except IndexError: |
207 | 244 | return None |
245 | # python3/python2 dual compatibility | |
246 | if not isinstance(e_len, int): | |
247 | e_len = ord(e_len) | |
208 | 248 | |
209 | 249 | offset = 1 |
210 | 250 | if e_len == 0: |
212 | 252 | offset = 3 |
213 | 253 | |
214 | 254 | # get the exponent |
215 | e = b'' | |
216 | for c in key[offset:offset+e_len]: | |
217 | e += b'%02x' % struct.unpack(b'B',c)[0] | |
218 | e = bn_to_mpi(hex_to_bn(e)) | |
255 | e = bn_to_mpi(hex_to_bn(binascii.hexlify(key[offset:offset+e_len]))) | |
219 | 256 | offset += e_len |
220 | 257 | |
221 | 258 | # get the modulus |
222 | n = b'' | |
223 | for c in key[offset:]: | |
224 | n += b'%02x' % struct.unpack(b'B',c)[0] | |
225 | n = bn_to_mpi(hex_to_bn(n)) | |
259 | n = bn_to_mpi(hex_to_bn(binascii.hexlify(key[offset:]))) | |
226 | 260 | |
227 | 261 | # create the RSA public key |
228 | 262 | rsa = RSA.new_pub_key((e,n)) |
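The exponent-length handling above implements the RSA DNSKEY encoding from RFC 3110: a one-byte exponent length, or a zero byte followed by a two-byte length for exponents longer than 255 bytes. A self-contained sketch of the parse (the name `parse_rsa_dnskey` is illustrative only):

```python
import struct

def parse_rsa_dnskey(key):
    # RFC 3110: 1-byte exponent length (or 0 followed by a
    # 2-byte length), then the exponent, then the modulus.
    e_len = key[0]
    offset = 1
    if e_len == 0:
        e_len, = struct.unpack(b'!H', key[1:3])
        offset = 3
    e = key[offset:offset+e_len]
    n = key[offset+e_len:]
    return e, n
```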
233 | 267 | |
234 | 268 | def _dnskey_to_gost(key): |
235 | 269 | der = GOST_PREFIX + key |
236 | pem = bytes('-----BEGIN PUBLIC KEY-----\n'+base64.encodestring(der)+'-----END PUBLIC KEY-----') | |
270 | pem = b'-----BEGIN PUBLIC KEY-----\n'+base64.encodestring(der)+b'-----END PUBLIC KEY-----' | |
237 | 271 | |
238 | 272 | return EVP.load_key_string_pubkey(pem) |
239 | 273 | |
275 | 309 | pubkey = _dnskey_to_dsa(key) |
276 | 310 | |
277 | 311 | # get T |
278 | t, = struct.unpack(b'B',sig[0]) | |
312 | t = sig[0] | |
313 | # python3/python2 dual compatibility | |
314 | if not isinstance(t, int): | |
315 | t = ord(t) | |
279 | 316 | offset = 1 |
280 | 317 | |
281 | 318 | # get R |
282 | 319 | new_offset = offset+20 |
283 | r = b'' | |
284 | for c in sig[offset:new_offset]: | |
285 | r += b'%02x' % struct.unpack(b'B',c)[0] | |
286 | r = bn_to_mpi(hex_to_bn(r)) | |
320 | r = bn_to_mpi(hex_to_bn(binascii.hexlify(sig[offset:new_offset]))) | |
287 | 321 | offset = new_offset |
288 | 322 | |
289 | 323 | # get S |
290 | 324 | new_offset = offset+20 |
291 | s = b'' | |
292 | for c in sig[offset:new_offset]: | |
293 | s += b'%02x' % struct.unpack(b'B',c)[0] | |
294 | s = bn_to_mpi(hex_to_bn(s)) | |
325 | s = bn_to_mpi(hex_to_bn(binascii.hexlify(sig[offset:new_offset]))) | |
295 | 326 | offset = new_offset |
296 | 327 | |
297 | 328 | md = EVP.MessageDigest('sha1') |
338 | 369 | |
339 | 370 | # get R |
340 | 371 | new_offset = offset+sigsize//2 |
341 | r = b'' | |
342 | for c in sig[offset:new_offset]: | |
343 | r += b'%02x' % struct.unpack(b'B',c)[0] | |
344 | r = bn_to_mpi(hex_to_bn(r)) | |
372 | r = bn_to_mpi(hex_to_bn(binascii.hexlify(sig[offset:new_offset]))) | |
345 | 373 | offset = new_offset |
346 | 374 | |
347 | 375 | # get S |
348 | 376 | new_offset = offset+sigsize//2 |
349 | s = b'' | |
350 | for c in sig[offset:new_offset]: | |
351 | s += b'%02x' % struct.unpack(b'B',c)[0] | |
352 | s = bn_to_mpi(hex_to_bn(s)) | |
377 | s = bn_to_mpi(hex_to_bn(binascii.hexlify(sig[offset:new_offset]))) | |
353 | 378 | offset = new_offset |
354 | 379 | |
355 | 380 | md = EVP.MessageDigest(alg) |
358 | 383 | |
359 | 384 | return pubkey.verify_dsa(digest, r, s) == 1 |
360 | 385 | |
386 | def _validate_rrsig_ed25519(alg, sig, msg, key): | |
387 | try: | |
388 | verifier = ed25519Verifier(binascii.hexlify(key)) | |
389 | return verifier.verify(sig + msg) == msg | |
390 | except ValueError: | |
391 | return False | |
392 | ||
361 | 393 | def validate_rrsig(alg, sig, msg, key): |
362 | 394 | if not alg_is_supported(alg): |
395 | _log_unsupported_alg(alg, ALG_TYPE_DNSSEC) | |
363 | 396 | return None |
364 | 397 | |
365 | 398 | # create an RSA key object for RSA keys |
371 | 404 | return _validate_rrsig_gost(alg, sig, msg, key) |
372 | 405 | elif alg in (13,14): |
373 | 406 | return _validate_rrsig_ec(alg, sig, msg, key) |
407 | elif alg in (15,): | |
408 | return _validate_rrsig_ed25519(alg, sig, msg, key) | |
374 | 409 | |
375 | 410 | def get_digest_for_nsec3(val, salt, alg, iterations): |
376 | 411 | if not nsec3_alg_is_supported(alg): |
412 | _log_unsupported_alg(alg, ALG_TYPE_NSEC3) | |
377 | 413 | return None |
378 | 414 | |
379 | 415 | if alg == 1: |
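For NSEC3 algorithm 1 (the only value `_supported_nsec3_algs` admits), the digest is the iterated, salted SHA-1 defined in RFC 5155 section 5. A stdlib-only sketch of that iteration (the name `nsec3_digest` is illustrative, not DNSViz's API):

```python
import hashlib

def nsec3_digest(name_wire, salt, iterations):
    # RFC 5155: IH(0) = H(owner name || salt),
    # IH(k) = H(IH(k-1) || salt); H is SHA-1 for algorithm 1.
    digest = hashlib.sha1(name_wire + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return digest
```

The owner name is hashed in canonical wire format, and the resulting 20-byte digest is base32hex-encoded to form the NSEC3 owner label.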
0 | 0 | # |
1 | 1 | # This file is a part of DNSViz, a tool suite for DNS/DNSSEC monitoring, |
2 | # analysis, and visualization. This file (or some portion thereof) is a | |
3 | # derivative work authored by VeriSign, Inc., and created in 2014, based on | |
4 | # code originally developed at Sandia National Laboratories. | |
2 | # analysis, and visualization. | |
5 | 3 | # Created by Casey Deccio (casey@deccio.net) |
6 | 4 | # |
7 | 5 | # Copyright 2012-2014 Sandia Corporation. Under the terms of Contract |
10 | 8 | # |
11 | 9 | # Copyright 2014-2016 VeriSign, Inc. |
12 | 10 | # |
13 | # Copyright 2016 Casey Deccio. | |
11 | # Copyright 2016-2019 Casey Deccio | |
14 | 12 | # |
15 | 13 | # DNSViz is free software; you can redistribute it and/or modify |
16 | 14 | # it under the terms of the GNU General Public License as published by |
39 | 37 | DNSKEY_FLAGS = {'ZONE': 0x0100, 'SEP': 0x0001, 'revoke': 0x0080} |
40 | 38 | DNSKEY_PROTOCOLS = { 3: 'DNSSEC' } |
41 | 39 | DNSKEY_ALGORITHMS = { 1: 'RSA/MD5', 2: 'Diffie-Hellman', 3: 'DSA/SHA1', 5: 'RSA/SHA-1', 6: 'DSA-NSEC3-SHA1', 7: 'RSASHA1-NSEC3-SHA1', \ |
42 | 8: 'RSA/SHA-256', 10: 'RSA/SHA-512', 12: 'GOST R 34.10-2001', 13: 'ECDSA Curve P-256 with SHA-256', 14: 'ECDSA Curve P-384 with SHA-384' } | |
40 | 8: 'RSA/SHA-256', 10: 'RSA/SHA-512', 12: 'GOST R 34.10-2001', 13: 'ECDSA Curve P-256 with SHA-256', 14: 'ECDSA Curve P-384 with SHA-384', | |
41 | 15: 'Ed25519', 16: 'Ed448' } | |
43 | 42 | DS_DIGEST_TYPES = { 1: 'SHA-1', 2: 'SHA-256', 3: 'GOST 34.11-94', 4: 'SHA-384' } |
44 | 43 | |
45 | 44 | NSEC3_FLAGS = {'OPTOUT': 0x01} |
156 | 155 | def format_nsec3_rrset_text(nsec3_rrset_text): |
157 | 156 | return re.sub(r'^(\d+\s+\d+\s+\d+\s+\S+\s+)([0-9a-zA-Z]+)', lambda x: '%s%s' % (x.group(1), x.group(2).upper()), nsec3_rrset_text).rstrip('.') |
158 | 157 | |
159 | def humanize_name(name, idn=False): | |
158 | def humanize_name(name, idn=False, canonicalize=True): | |
159 | if canonicalize: | |
160 | name = name.canonicalize() | |
160 | 161 | if idn: |
161 | 162 | try: |
162 | name = name.canonicalize().to_unicode() | |
163 | name = name.to_unicode() | |
163 | 164 | except UnicodeError: |
164 | name = lb2s(name.canonicalize().to_text()) | |
165 | name = lb2s(name.to_text()) | |
165 | 166 | else: |
166 | name = lb2s(name.canonicalize().to_text()) | |
167 | name = lb2s(name.to_text()) | |
167 | 168 | if name == '.': |
168 | 169 | return name |
169 | 170 | return name.rstrip('.') |
0 | # | |
0 | 1 | # This file is a part of DNSViz, a tool suite for DNS/DNSSEC monitoring, |
1 | 2 | # analysis, and visualization. |
2 | 3 | # Created by Casey Deccio (casey@deccio.net) |
3 | 4 | # |
4 | 5 | # Copyright 2014-2016 VeriSign, Inc. |
5 | 6 | # |
6 | # Copyright 2016-2017 Casey Deccio. | |
7 | # Copyright 2016-2019 Casey Deccio | |
7 | 8 | # |
8 | 9 | # DNSViz is free software; you can redistribute it and/or modify |
9 | 10 | # it under the terms of the GNU General Public License as published by |
0 | 0 | # |
1 | 1 | # This file is a part of DNSViz, a tool suite for DNS/DNSSEC monitoring, |
2 | # analysis, and visualization. This file (or some portion thereof) is a | |
3 | # derivative work authored by VeriSign, Inc., and created in 2014, based on | |
4 | # code originally developed at Sandia National Laboratories. | |
2 | # analysis, and visualization. | |
5 | 3 | # Created by Casey Deccio (casey@deccio.net) |
6 | 4 | # |
7 | 5 | # Copyright 2012-2014 Sandia Corporation. Under the terms of Contract |
10 | 8 | # |
11 | 9 | # Copyright 2014-2016 VeriSign, Inc. |
12 | 10 | # |
13 | # Copyright 2016-2017 Casey Deccio. | |
11 | # Copyright 2016-2019 Casey Deccio | |
14 | 12 | # |
15 | 13 | # DNSViz is free software; you can redistribute it and/or modify |
16 | 14 | # it under the terms of the GNU General Public License as published by |
28 | 26 | |
29 | 27 | from __future__ import unicode_literals |
30 | 28 | |
31 | import base64 | |
29 | import binascii | |
32 | 30 | import bisect |
31 | import copy | |
33 | 32 | import errno |
34 | 33 | import io |
35 | 34 | import socket |
109 | 108 | RETRY_ACTION_REMOVE_EDNS_OPTION = 11 |
110 | 109 | RETRY_ACTION_CHANGE_SPORT = 12 |
111 | 110 | RETRY_ACTION_CHANGE_EDNS_VERSION = 13 |
111 | RETRY_ACTION_UPDATE_DNS_COOKIE = 14 | |
112 | 112 | retry_actions = { |
113 | 113 | RETRY_ACTION_NO_CHANGE: 'NO_CHANGE', |
114 | 114 | RETRY_ACTION_USE_TCP: 'USE_TCP', # implies CHANGE_SPORT |
123 | 123 | RETRY_ACTION_REMOVE_EDNS_OPTION: 'REMOVE_EDNS_OPTION', # implies CHANGE_SPORT |
124 | 124 | RETRY_ACTION_CHANGE_SPORT: 'CHANGE_SPORT', |
125 | 125 | RETRY_ACTION_CHANGE_EDNS_VERSION: 'CHANGE_EDNS_VERSION', # implies CHANGE_SPORT |
126 | RETRY_ACTION_UPDATE_DNS_COOKIE: 'UPDATE_DNS_COOKIE', # implies CHANGE_SPORT | |
126 | 127 | } |
127 | 128 | retry_action_codes = { |
128 | 129 | 'NO_CHANGE': RETRY_ACTION_NO_CHANGE, |
138 | 139 | 'REMOVE_EDNS_OPTION': RETRY_ACTION_REMOVE_EDNS_OPTION, |
139 | 140 | 'CHANGE_SPORT': RETRY_ACTION_CHANGE_SPORT, |
140 | 141 | 'CHANGE_EDNS_VERSION': RETRY_ACTION_CHANGE_EDNS_VERSION, |
142 | 'UPDATE_DNS_COOKIE': RETRY_ACTION_UPDATE_DNS_COOKIE, | |
141 | 143 | } |
144 | ||
145 | DNS_COOKIE_NO_COOKIE = 0 | |
146 | DNS_COOKIE_CLIENT_COOKIE_ONLY = 1 | |
147 | DNS_COOKIE_SERVER_COOKIE_FRESH = 2 | |
148 | DNS_COOKIE_SERVER_COOKIE_STATIC = 3 | |
149 | DNS_COOKIE_SERVER_COOKIE_BAD = 4 | |
150 | DNS_COOKIE_IMPROPER_LENGTH = 5 | |
142 | 151 | |
143 | 152 | MIN_QUERY_TIMEOUT = 0.1 |
144 | 153 | MAX_CNAME_REDIRECTION = 40 |
397 | 406 | class RemoveEDNSOptionOnTimeoutHandler(DNSResponseHandler): |
398 | 407 | '''Remove EDNS option after a given number of timeouts.''' |
399 | 408 | |
400 | def __init__(self, otype, timeouts): | |
401 | self._otype = otype | |
409 | def __init__(self, timeouts): | |
402 | 410 | self._timeouts = timeouts |
403 | 411 | |
404 | 412 | def handle(self, response_wire, response, response_time): |
405 | 413 | timeouts = self._get_num_timeouts(response) |
406 | filtered_options = [x for x in self._request.options if self._otype == x.otype] | |
407 | if not self._params['tcp'] and timeouts >= self._timeouts and filtered_options: | |
408 | self._request.options.remove(filtered_options[0]) | |
409 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_TIMEOUT, None, RETRY_ACTION_REMOVE_EDNS_OPTION, self._otype) | |
414 | try: | |
415 | opt = self._request.options[0] | |
416 | except IndexError: | |
417 | opt = None | |
418 | if not self._params['tcp'] and timeouts >= self._timeouts and opt is not None: | |
419 | self._request.options.remove(opt) | |
420 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_TIMEOUT, None, RETRY_ACTION_REMOVE_EDNS_OPTION, opt.otype) | |
410 | 421 | |
411 | 422 | class DisableEDNSOnTimeoutHandler(DNSResponseHandler): |
412 | 423 | '''Disable EDNS after a given number of timeouts. Some servers don't |
443 | 454 | if isinstance(response, dns.message.Message) and response.rcode() in (dns.rcode.NOTIMP, dns.rcode.FORMERR, dns.rcode.SERVFAIL) and self._request.edns >= 0: |
444 | 455 | self._request.use_edns(False) |
445 | 456 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_RCODE, response.rcode(), RETRY_ACTION_DISABLE_EDNS, None) |
457 | ||
458 | class AddServerCookieOnBADCOOKIE(DNSResponseHandler): | |
459 | '''Update the DNS Cookie EDNS option with the server cookie when a | |
460 | BADCOOKIE rcode is received.''' | |
461 | ||
462 | def _add_server_cookie(self, response): | |
463 | try: | |
464 | client_opt = [o for o in self._request.options if o.otype == 10][0] | |
465 | except IndexError: | |
466 | return False | |
467 | try: | |
468 | server_opt = [o for o in response.options if o.otype == 10][0] | |
469 | except IndexError: | |
470 | return False | |
471 | client_cookie = client_opt.data[:8] | |
472 | server_cookie1 = client_opt.data[8:] | |
473 | server_cookie2 = server_opt.data[8:] | |
474 | if server_cookie1 == server_cookie2: | |
475 | return False | |
476 | client_opt.data = client_cookie + server_cookie2 | |
477 | return True | |
478 | ||
479 | def handle(self, response_wire, response, response_time): | |
480 | if isinstance(response, dns.message.Message) and response.rcode() == 23: | |
481 | if self._add_server_cookie(response): | |
482 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_RCODE, response.rcode(), RETRY_ACTION_UPDATE_DNS_COOKIE, None) | |
446 | 483 | |
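The slicing in `_add_server_cookie` reflects the COOKIE option layout from RFC 7873: an 8-byte client cookie optionally followed by an 8- to 32-byte server cookie. A hedged sketch of that split, including the length validation behind the `DNS_COOKIE_IMPROPER_LENGTH` status above (`split_dns_cookie` is an illustrative helper, not part of DNSViz):

```python
def split_dns_cookie(data):
    # RFC 7873: the COOKIE option (code 10) carries an 8-byte
    # client cookie, optionally followed by an 8- to 32-byte
    # server cookie.
    if len(data) < 8 or len(data) > 40:
        raise ValueError('improper COOKIE option length')
    client, server = data[:8], data[8:]
    if server and not 8 <= len(server) <= 32:
        raise ValueError('improper server cookie length')
    return client, server
```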
447 | 484 | class UseUDPOnTimeoutHandler(DNSResponseHandler): |
448 | 485 | '''Revert to UDP if TCP connectivity fails.''' |
513 | 550 | TCP_FINAL = 7 |
514 | 551 | INVALID = 8 |
515 | 552 | |
516 | def __init__(self, reduced_payload, initial_timeouts, bounding_timeout, subhandlers): | |
553 | def __init__(self, reduced_payload, initial_timeouts, max_timeouts, bounding_timeout): | |
517 | 554 | self._reduced_payload = reduced_payload |
518 | 555 | self._initial_timeouts = initial_timeouts |
556 | self._max_timeouts = max_timeouts | |
519 | 557 | self._bounding_timeout = bounding_timeout |
520 | ||
521 | self._subhandlers = [h.build() for h in subhandlers] | |
522 | 558 | |
523 | 559 | self._lower_bound = None |
524 | 560 | self._upper_bound = None |
525 | 561 | self._water_mark = None |
526 | 562 | self._state = self.START |
527 | 563 | |
528 | def set_context(self, params, history, request): | |
529 | '''Set local parameters pertaining to DNS query.''' | |
530 | ||
531 | super(PMTUBoundingHandler, self).set_context(params, history, request) | |
532 | for handler in self._subhandlers: | |
533 | handler.set_context(params, history, request) | |
534 | ||
535 | def handle_sub(self, response_wire, response, response_time): | |
536 | for handler in self._subhandlers: | |
537 | handler.handle(response_wire, response, response_time) | |
538 | ||
539 | 564 | def handle(self, response_wire, response, response_time): |
540 | timeouts = self._get_num_timeouts(response) | |
541 | is_timeout = isinstance(response, dns.exception.Timeout) | |
542 | is_valid = isinstance(response, dns.message.Message) and response.rcode() in (dns.rcode.NOERROR, dns.rcode.NXDOMAIN) | |
565 | if self._state == self.INVALID: | |
566 | return | |
543 | 567 | |
544 | 568 | # python3/python2 dual compatibility |
545 | 569 | if isinstance(response_wire, str): |
547 | 571 | else: |
548 | 572 | map_func = lambda x: x |
549 | 573 | |
550 | if self._request.edns < 0 or not (self._request.ednsflags & dns.flags.DO): | |
574 | timeouts = self._get_num_timeouts(response) | |
575 | is_timeout = isinstance(response, dns.exception.Timeout) | |
576 | is_valid = isinstance(response, dns.message.Message) and response.rcode() in (dns.rcode.NOERROR, dns.rcode.NXDOMAIN) | |
577 | is_truncated = response_wire is not None and map_func(response_wire[2]) & 0x02 | |
578 | if response_wire is not None: | |
579 | response_len = len(response_wire) | |
580 | else: | |
581 | response_len = None | |
582 | ||
583 | if self._request.edns >= 0 and \ | |
584 | (is_timeout or is_valid or is_truncated): | |
585 | pass | |
586 | else: | |
551 | 587 | self._state = self.INVALID |
552 | ||
553 | if self._state == self.INVALID: | |
554 | self.handle_sub(response_wire, response, response_time) | |
555 | ||
556 | elif self._state == self.START: | |
557 | self.handle_sub(response_wire, response, response_time) | |
588 | return | |
589 | ||
590 | if self._state == self.START: | |
558 | 591 | if timeouts >= self._initial_timeouts: |
559 | 592 | self._lower_bound = self._reduced_payload |
560 | 593 | self._upper_bound = self._request.payload - 1 |
563 | 596 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_TIMEOUT, None, RETRY_ACTION_CHANGE_UDP_MAX_PAYLOAD, self._reduced_payload) |
564 | 597 | |
565 | 598 | elif self._state == self.REDUCED_PAYLOAD: |
566 | self.handle_sub(response_wire, response, response_time) | |
599 | if timeouts >= self._max_timeouts: | |
600 | self._state = self.INVALID | 
601 | return None | |
602 | ||
567 | 603 | if not is_timeout: |
568 | if (response_wire is not None and map_func(response_wire[2]) & 0x02) or is_valid: | |
569 | self._lower_bound = self._water_mark = len(response_wire) | |
604 | if is_truncated or is_valid: | |
605 | self._lower_bound = self._water_mark = response_len | |
570 | 606 | self._params['timeout'] = self._bounding_timeout |
571 | 607 | self._params['tcp'] = True |
572 | 608 | self._state = self.USE_TCP |
573 | if response_wire is not None and map_func(response_wire[2]) & 0x02: | |
574 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_TC_SET, len(response_wire), RETRY_ACTION_USE_TCP, None) | |
609 | if is_truncated: | |
610 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_TC_SET, response_len, RETRY_ACTION_USE_TCP, None) | |
575 | 611 | else: |
576 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_DIAGNOSTIC, len(response_wire), RETRY_ACTION_USE_TCP, None) | |
612 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_DIAGNOSTIC, response_len, RETRY_ACTION_USE_TCP, None) | |
577 | 613 | |
578 | 614 | elif self._state == self.USE_TCP: |
579 | 615 | if not is_timeout and is_valid: |
580 | 616 | #XXX this is cheating because we're not reporting the change to UDP |
581 | 617 | self._params['tcp'] = False |
582 | payload = len(response_wire) - 1 | |
618 | payload = response_len - 1 | |
583 | 619 | self._request.payload = payload |
584 | 620 | self._state = self.TCP_MINUS_ONE |
585 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_DIAGNOSTIC, len(response_wire), RETRY_ACTION_CHANGE_UDP_MAX_PAYLOAD, payload) | |
621 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_DIAGNOSTIC, response_len, RETRY_ACTION_CHANGE_UDP_MAX_PAYLOAD, payload) | |
586 | 622 | |
587 | 623 | elif self._state == self.TCP_MINUS_ONE: |
588 | 624 | if is_timeout: |
592 | 628 | self._state = self.PICKLE |
593 | 629 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_TIMEOUT, None, RETRY_ACTION_CHANGE_UDP_MAX_PAYLOAD, payload) |
594 | 630 | # if the size of the message is less than the watermark, then perhaps we were rate limited |
595 | elif response_wire is not None and len(response_wire) < self._water_mark: | |
631 | elif response_wire is not None and response_len < self._water_mark: | |
596 | 632 | # but if this isn't the first time, just quit. it could be that |
597 | 633 | # the server simply has some wonky way of determining how/where to truncate. |
598 | 634 | if self._history[-1].cause == RETRY_CAUSE_DIAGNOSTIC and self._history[-1].action == RETRY_ACTION_CHANGE_SPORT: |
604 | 640 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_DIAGNOSTIC, None, RETRY_ACTION_CHANGE_SPORT, None) |
605 | 641 | # if the response was truncated, then the size of the payload |
606 | 642 | # received via TCP is the largest we can receive |
607 | elif response_wire is not None and map_func(response_wire[2]) & 0x02: | |
643 | elif is_truncated: | |
608 | 644 | self._params['tcp'] = True |
609 | 645 | self._state = self.TCP_FINAL |
610 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_TC_SET, len(response_wire), RETRY_ACTION_USE_TCP, None) | |
646 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_TC_SET, response_len, RETRY_ACTION_USE_TCP, None) | |
611 | 647 | |
612 | 648 | elif self._state == self.PICKLE: |
613 | 649 | if self._upper_bound - self._lower_bound <= 1: |
614 | 650 | self._params['tcp'] = True |
615 | 651 | self._state = self.TCP_FINAL |
616 | if response_wire is not None and map_func(response_wire[2]) & 0x02: | |
617 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_TC_SET, len(response_wire), RETRY_ACTION_USE_TCP, None) | |
652 | if is_truncated: | |
653 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_TC_SET, response_len, RETRY_ACTION_USE_TCP, None) | |
618 | 654 | elif is_timeout: |
619 | 655 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_TIMEOUT, None, RETRY_ACTION_USE_TCP, None) |
620 | 656 | elif not is_valid: |
625 | 661 | self._request.payload = payload |
626 | 662 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_TIMEOUT, None, RETRY_ACTION_CHANGE_UDP_MAX_PAYLOAD, payload) |
627 | 663 | # if the size of the message is less than the watermark, then perhaps we were rate limited |
628 | elif len(response_wire) < self._water_mark: | |
664 | elif response_len < self._water_mark: | |
629 | 665 | # but if this isn't the first time, just quit. it could be that |
630 | 666 | # the server simply has some wonky way of determining how/where to truncate. |
631 | 667 | if self._history[-1].cause == RETRY_CAUSE_DIAGNOSTIC and self._history[-1].action == RETRY_ACTION_CHANGE_SPORT: |
639 | 675 | self._lower_bound = self._request.payload |
640 | 676 | payload = self._lower_bound + (self._upper_bound + 1 - self._lower_bound)//2 |
641 | 677 | self._request.payload = payload |
642 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_DIAGNOSTIC, len(response_wire), RETRY_ACTION_CHANGE_UDP_MAX_PAYLOAD, payload) | |
678 | return DNSQueryRetryAttempt(response_time, RETRY_CAUSE_DIAGNOSTIC, response_len, RETRY_ACTION_CHANGE_UDP_MAX_PAYLOAD, payload) | |
643 | 679 | |
644 | 680 | elif self._state == self.TCP_FINAL: |
681 | pass | |
682 | ||
683 | elif self._state == self.INVALID: | |
645 | 684 | pass |
646 | 685 | |
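The PICKLE state above narrows the usable UDP payload size by binary search between the last size that produced a response (`_lower_bound`) and the last size that timed out (`_upper_bound`), using the midpoint rule `lower + (upper + 1 - lower)//2`. A self-contained sketch of that convergence, where `probe` stands in for issuing a query at a given EDNS payload size (names and tightening rule here are illustrative, not DNSViz's exact state machine):

```python
def bound_payload(probe, lower, upper):
    # Binary-search the largest payload size for which probe()
    # still succeeds, using the same midpoint rule as above.
    while upper > lower:
        mid = lower + (upper + 1 - lower) // 2
        if probe(mid):
            lower = mid       # response received: raise the floor
        else:
            upper = mid - 1   # timeout: lower the ceiling
    return lower
```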
647 | 686 | class ChangeTimeoutOnTimeoutHandler(ActionIndependentDNSResponseHandler): |
702 | 741 | class DNSQueryHandler: |
703 | 742 | '''A handler associated with a DNS query to a server.''' |
704 | 743 | |
705 | def __init__(self, query, request, params, response_handlers, server, client): | |
744 | def __init__(self, query, request, server_cookie, server_cookie_status, params, response_handlers, server, client): | |
706 | 745 | self.query = query |
707 | 746 | self.request = request |
708 | 747 | self.params = params |
748 | self.server_cookie = server_cookie | |
749 | self.server_cookie_status = server_cookie_status | |
709 | 750 | self._response_handlers = response_handlers |
710 | 751 | self.history = [] |
711 | 752 | self._server = server |
800 | 841 | self.truncated_info = [] |
801 | 842 | self.error_info = [] |
802 | 843 | |
803 | def _aggregate_response(self, server, client, response, qname, rdtype, bailiwick): | |
844 | def _aggregate_response(self, server, client, response, qname, rdtype, rdclass, bailiwick): | |
804 | 845 | if response.is_valid_response(): |
805 | 846 | if response.is_complete_response(): |
806 | is_referral = response.is_referral(qname, rdtype, bailiwick) | |
807 | self._aggregate_answer(server, client, response, is_referral, qname, rdtype) | |
847 | is_referral = response.is_referral(qname, rdtype, rdclass, bailiwick) | |
848 | self._aggregate_answer(server, client, response, is_referral, qname, rdtype, rdclass) | |
808 | 849 | else: |
809 | 850 | truncated_info = TruncatedResponse(response.message.to_wire()) |
810 | 851 | DNSResponseComponent.insert_into_list(truncated_info, self.truncated_info, server, client, response) |
812 | 853 | else: |
813 | 854 | self._aggregate_error(server, client, response) |
814 | 855 | |
815 | def _aggregate_answer(self, server, client, response, referral, qname, rdtype): | |
856 | def _aggregate_answer(self, server, client, response, referral, qname, rdtype, rdclass): | |
816 | 857 | msg = response.message |
817 | 858 | |
818 | 859 | # sort with the most specific DNAME infos first |
819 | dname_rrsets = [x for x in msg.answer if x.rdtype == dns.rdatatype.DNAME] | |
860 | dname_rrsets = [x for x in msg.answer if x.rdtype == dns.rdatatype.DNAME and x.rdclass == rdclass] | |
820 | 861 | dname_rrsets.sort(reverse=True) |
821 | 862 | |
822 | 863 | qname_sought = qname |
832 | 873 | break |
833 | 874 | |
834 | 875 | try: |
835 | rrset_info = self._aggregate_answer_rrset(server, client, response, qname_sought, rdtype, referral) | |
876 | rrset_info = self._aggregate_answer_rrset(server, client, response, qname_sought, rdtype, rdclass, referral) | |
836 | 877 | |
837 | 878 | # if there was a synthesized CNAME, add it to the rrset_info |
838 | if rrset_info.rrset.rdtype == dns.rdatatype.CNAME and synthesized_cname_info is not None: | |
839 | synthesized_cname_info = rrset_info.create_or_update_cname_from_dname_info(synthesized_cname_info, server, client, response) | |
840 | synthesized_cname_info.update_rrsig_info(server, client, response, msg.answer, referral) | |
879 | if rrset_info.rrset.rdtype == dns.rdatatype.CNAME and rrset_info.rrset.rdclass == rdclass and synthesized_cname_info is not None: | |
880 | synthesized_cname_info = rrset_info.create_or_update_cname_from_dname_info(synthesized_cname_info, server, client, response, rdclass) | |
881 | synthesized_cname_info.update_rrsig_info(server, client, response, msg.answer, rdclass, referral) | |
841 | 882 | |
842 | 883 | except KeyError: |
843 | 884 | if synthesized_cname_info is None: |
844 | 885 | raise |
845 | 886 | synthesized_cname_info = DNSResponseComponent.insert_into_list(synthesized_cname_info, self.answer_info, server, client, response) |
846 | synthesized_cname_info.dname_info.update_rrsig_info(server, client, response, msg.answer, referral) | |
887 | synthesized_cname_info.dname_info.update_rrsig_info(server, client, response, msg.answer, rdclass, referral) | |
847 | 888 | rrset_info = synthesized_cname_info |
848 | 889 | |
849 | if rrset_info.rrset.rdtype == dns.rdatatype.CNAME: | |
890 | if rrset_info.rrset.rdtype == dns.rdatatype.CNAME and rrset_info.rrset.rdclass == rdclass: | |
850 | 891 | qname_sought = rrset_info.rrset[0].target |
851 | 892 | else: |
852 | 893 | break |
855 | 896 | if referral and rdtype != dns.rdatatype.DS: |
856 | 897 | # add referrals |
857 | 898 | try: |
858 | rrset = [x for x in msg.authority if qname.is_subdomain(x.name) and x.rdtype == dns.rdatatype.NS][0] | |
899 | rrset = [x for x in msg.authority if qname.is_subdomain(x.name) and x.rdtype == dns.rdatatype.NS and x.rdclass == rdclass][0] | |
859 | 900 | except IndexError: |
860 | 901 | pass |
861 | 902 | else: |
877 | 918 | |
878 | 919 | neg_response_info = NegativeResponseInfo(qname_sought, rdtype, self.ttl_cmp) |
879 | 920 | neg_response_info = DNSResponseComponent.insert_into_list(neg_response_info, neg_response_info_list, server, client, response) |
880 | neg_response_info.create_or_update_nsec_info(server, client, response, referral) | |
881 | neg_response_info.create_or_update_soa_info(server, client, response, referral) | |
882 | ||
883 | def _aggregate_answer_rrset(self, server, client, response, qname, rdtype, referral): | |
921 | neg_response_info.create_or_update_nsec_info(server, client, response, rdclass, referral) | |
922 | neg_response_info.create_or_update_soa_info(server, client, response, rdclass, referral) | |
923 | ||
924 | def _aggregate_answer_rrset(self, server, client, response, qname, rdtype, rdclass, referral): | |
884 | 925 | msg = response.message |
885 | 926 | |
886 | 927 | try: |
887 | rrset = msg.find_rrset(msg.answer, qname, dns.rdataclass.IN, rdtype) | |
928 | rrset = msg.find_rrset(msg.answer, qname, rdclass, rdtype) | |
888 | 929 | except KeyError: |
889 | rrset = msg.find_rrset(msg.answer, qname, dns.rdataclass.IN, dns.rdatatype.CNAME) | |
930 | rrset = msg.find_rrset(msg.answer, qname, rdclass, dns.rdatatype.CNAME) | |
890 | 931 | |
891 | 932 | rrset_info = RRsetInfo(rrset, self.ttl_cmp) |
892 | 933 | rrset_info = DNSResponseComponent.insert_into_list(rrset_info, self.answer_info, server, client, response) |
893 | 934 | |
894 | rrset_info.update_rrsig_info(server, client, response, msg.answer, referral) | |
935 | rrset_info.update_rrsig_info(server, client, response, msg.answer, rdclass, referral) | |
895 | 936 | |
896 | 937 | return rrset_info |
897 | 938 | |
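The `_aggregate_answer_rrset` hunk above tries the queried type first and falls back to a CNAME at the same name on `KeyError`. A stand-alone sketch of that fallback (plain dict in place of a dnspython answer section; names here are illustrative, not the DNSViz API):

```python
# rdatatype code for CNAME (same value dns.rdatatype.CNAME carries)
CNAME = 5

def find_answer(answer, qname, rdclass, rdtype):
    """Look up (name, class, type); fall back to a CNAME at the same name."""
    key = (qname, rdclass, rdtype)
    if key in answer:
        return answer[key]
    # Raises KeyError if neither the type nor a CNAME exists, mirroring
    # how the caller above lets KeyError propagate.
    return answer[(qname, rdclass, CNAME)]

answer = {('www.example.com.', 1, CNAME): 'example.com.'}
print(find_answer(answer, 'www.example.com.', 1, 1))  # example.com.
```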
944 | 985 | if not (isinstance(query, DNSQuery)): |
945 | 986 | raise ValueError('A DNSQuery instance can only be joined with another DNSQuery instance.') |
946 | 987 | |
947 | if not (self.qname == query.qname and self.rdtype == query.rdtype and \ | |
988 | if not (self.qname.to_text() == query.qname.to_text() and self.rdtype == query.rdtype and \ | |
948 | 989 | self.rdclass == query.rdclass and self.flags == query.flags and \ |
949 | 990 | self.edns == query.edns and self.edns_max_udp_payload == query.edns_max_udp_payload and \ |
950 | 991 | self.edns_flags == query.edns_flags and self.edns_options == query.edns_options and \ |
1058 | 1099 | for server in self.responses: |
1059 | 1100 | bailiwick = bailiwick_map.get(server, default_bailiwick) |
1060 | 1101 | for client, response in self.responses[server].items(): |
1061 | if response.is_valid_response() and response.is_complete_response() and not response.is_referral(self.qname, self.rdtype, bailiwick): | |
1102 | if response.is_valid_response() and response.is_complete_response() and not response.is_referral(self.qname, self.rdtype, self.rdclass, bailiwick): | |
1062 | 1103 | servers_clients.add((server, client)) |
1063 | 1104 | return servers_clients |
1064 | 1105 | |
1088 | 1129 | for o in self.edns_options: |
1089 | 1130 | s = io.BytesIO() |
1090 | 1131 | o.to_wire(s) |
1091 | d['options']['edns_options'].append((o.otype, lb2s(base64.b64encode(s.getvalue())))) | |
1132 | d['options']['edns_options'].append((o.otype, lb2s(binascii.hexlify(s.getvalue())))) | |
1092 | 1133 | d['options']['tcp'] = self.tcp |
1093 | 1134 | |
1094 | 1135 | d['responses'] = OrderedDict() |
1107 | 1148 | return d |
1108 | 1149 | |
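The serialization hunk above replaces base64 with hex (`binascii.hexlify`) for EDNS option payloads, and the matching `deserialize` uses `unhexlify`. A minimal round-trip showing the encoding is lossless:

```python
import binascii

# Example option payload (arbitrary bytes); hexlify yields a readable,
# fixed-alphabet encoding suitable for JSON serialization.
payload = b'\x00\x0a\xde\xad\xbe\xef'
encoded = binascii.hexlify(payload)        # b'000adeadbeef'
assert binascii.unhexlify(encoded) == payload
print(encoded.decode('ascii'))
```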
1109 | 1150 | @classmethod |
1110 | def deserialize(self, d, bailiwick_map, default_bailiwick): | |
1151 | def deserialize(self, d, bailiwick_map, default_bailiwick, cookie_jar_map, default_cookie_jar, cookie_standin, cookie_bad): | |
1111 | 1152 | qname = dns.name.from_text(d['qname']) |
1112 | 1153 | rdclass = dns.rdataclass.from_text(d['qclass']) |
1113 | 1154 | rdtype = dns.rdatatype.from_text(d['qtype']) |
1121 | 1162 | edns_flags = d1['edns_flags'] |
1122 | 1163 | edns_options = [] |
1123 | 1164 | for otype, data in d1['edns_options']: |
1124 | edns_options.append(dns.edns.GenericOption(otype, base64.b64decode(data))) | |
1165 | edns_options.append(dns.edns.GenericOption(otype, binascii.unhexlify(data))) | |
1125 | 1166 | else: |
1126 | 1167 | edns = None |
1127 | 1168 | edns_max_udp_payload = None |
1133 | 1174 | q = DNSQuery(qname, rdtype, rdclass, |
1134 | 1175 | flags, edns, edns_max_udp_payload, edns_flags, edns_options, tcp) |
1135 | 1176 | |
1177 | server_cookie = None | |
1178 | server_cookie_status = DNS_COOKIE_NO_COOKIE | |
1179 | if edns >= 0: | |
1180 | try: | |
1181 | cookie_opt = [o for o in edns_options if o.otype == 10][0] | |
1182 | except IndexError: | |
1183 | pass | |
1184 | else: | |
1185 | if len(cookie_opt.data) == 8: | |
1186 | server_cookie_status = DNS_COOKIE_CLIENT_COOKIE_ONLY | |
1187 | elif len(cookie_opt.data) >= 16 and len(cookie_opt.data) <= 40: | |
1188 | if cookie_opt.data[8:] == cookie_standin: | |
1189 | # initially assume that there is a cookie for the server; | |
1190 | # change the value later if there isn't | |
1191 | server_cookie_status = DNS_COOKIE_SERVER_COOKIE_FRESH | |
1192 | elif cookie_opt.data[8:] == cookie_bad: | |
1193 | server_cookie_status = DNS_COOKIE_SERVER_COOKIE_BAD | |
1194 | else: | |
1195 | server_cookie_status = DNS_COOKIE_SERVER_COOKIE_STATIC | |
1196 | else: | |
1197 | server_cookie_status = DNS_COOKIE_IMPROPER_LENGTH | |
1198 | ||
1136 | 1199 | for server in d['responses']: |
1137 | bailiwick = bailiwick_map.get(IPAddr(server), default_bailiwick) | |
1200 | server_ip = IPAddr(server) | |
1201 | bailiwick = bailiwick_map.get(server_ip, default_bailiwick) | |
1202 | cookie_jar = cookie_jar_map.get(server_ip, default_cookie_jar) | |
1203 | server_cookie = cookie_jar.get(server_ip, None) | |
1204 | status = server_cookie_status | |
1205 | if status == DNS_COOKIE_SERVER_COOKIE_FRESH and server_cookie is None: | |
1206 | status = DNS_COOKIE_CLIENT_COOKIE_ONLY | |
1138 | 1207 | for client in d['responses'][server]: |
1139 | q.add_response(IPAddr(server), IPAddr(client), DNSResponse.deserialize(d['responses'][server][client], q), bailiwick) | |
1208 | q.add_response(server_ip, IPAddr(client), DNSResponse.deserialize(d['responses'][server][client], q, server_cookie, status), bailiwick) | |
1140 | 1209 | return q |
1141 | 1210 | |
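The cookie-status logic added in `deserialize` above classifies the COOKIE option (EDNS option code 10) by length per RFC 7873: 8 bytes is a client cookie only, 16 to 40 bytes is client plus server cookie, anything else is malformed. A hypothetical `classify_cookie()` mirroring that branch structure (status strings here stand in for the `DNS_COOKIE_*` constants; this is not the DNSViz API itself):

```python
COOKIE_OPT = 10  # EDNS option code for COOKIE (RFC 7873)

def classify_cookie(data, standin=None, bad=None):
    if len(data) == 8:
        return 'CLIENT_COOKIE_ONLY'          # client cookie only
    if 16 <= len(data) <= 40:                # client (8) + server (8..32)
        server_part = data[8:]
        if standin is not None and server_part == standin:
            return 'SERVER_COOKIE_FRESH'     # placeholder to be filled in later
        if bad is not None and server_part == bad:
            return 'SERVER_COOKIE_BAD'
        return 'SERVER_COOKIE_STATIC'
    return 'IMPROPER_LENGTH'

print(classify_cookie(b'\x00' * 8))   # CLIENT_COOKIE_ONLY
print(classify_cookie(b'\x00' * 24))  # SERVER_COOKIE_STATIC
```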
1142 | 1211 | class DNSQueryAggregateDNSResponse(DNSQuery, AggregateDNSResponse): |
1148 | 1217 | |
1149 | 1218 | def add_response(self, server, client, response, bailiwick): |
1150 | 1219 | super(DNSQueryAggregateDNSResponse, self).add_response(server, client, response, bailiwick) |
1151 | self._aggregate_response(server, client, response, self.qname, self.rdtype, bailiwick) | |
1220 | self._aggregate_response(server, client, response, self.qname, self.rdtype, self.rdclass, bailiwick) | |
1152 | 1221 | |
1153 | 1222 | class MultiQuery(object): |
1154 | 1223 | '''A simple DNS Query and its responses.'''
1169 | 1238 | s = io.BytesIO() |
1170 | 1239 | o.to_wire(s) |
1171 | 1240 | edns_options_str += struct.pack(b'!H', o.otype) + s.getvalue() |
1172 | params = (query.flags, query.edns, query.edns_max_udp_payload, query.edns_flags, edns_options_str, query.tcp) | |
1241 | params = (query.qname.to_text(), query.flags, query.edns, query.edns_max_udp_payload, query.edns_flags, edns_options_str, query.tcp) | |
1173 | 1242 | if params in self.queries: |
1174 | 1243 | self.queries[params] = self.queries[params].join(query, bailiwick_map, default_bailiwick) |
1175 | 1244 | else: |
1204 | 1273 | for server in query.responses: |
1205 | 1274 | bailiwick = bailiwick_map.get(server, default_bailiwick) |
1206 | 1275 | for client, response in query.responses[server].items(): |
1207 | self._aggregate_response(server, client, response, self.qname, self.rdtype, bailiwick) | |
1276 | self._aggregate_response(server, client, response, self.qname, self.rdtype, self.rdclass, bailiwick) | |
1208 | 1277 | |
1209 | 1278 | class TTLDistinguishingMultiQueryAggregateDNSResponse(MultiQueryAggregateDNSResponse): |
1210 | 1279 | ttl_cmp = True |
1215 | 1284 | default_th_factory = transport.DNSQueryTransportHandlerDNSPrivateFactory() |
1216 | 1285 | |
1217 | 1286 | def __init__(self, qname, rdtype, rdclass, servers, bailiwick, |
1218 | client_ipv4, client_ipv6, port, odd_ports, | |
1287 | client_ipv4, client_ipv6, port, odd_ports, cookie_jar, cookie_standin, cookie_bad, | |
1219 | 1288 | flags, edns, edns_max_udp_payload, edns_flags, edns_options, tcp, |
1220 | 1289 | response_handlers, query_timeout, max_attempts, lifetime): |
1221 | 1290 | |
1238 | 1307 | if odd_ports is None: |
1239 | 1308 | odd_ports = {} |
1240 | 1309 | self.odd_ports = odd_ports |
1310 | if cookie_jar is None: | |
1311 | cookie_jar = {} | |
1312 | self.cookie_jar = cookie_jar | |
1313 | self.cookie_standin = cookie_standin | |
1314 | self.cookie_bad = cookie_bad | |
1241 | 1315 | self.response_handlers = response_handlers |
1242 | 1316 | |
1243 | 1317 | self.query_timeout = query_timeout |
1250 | 1324 | self._executed = False |
1251 | 1325 | |
1252 | 1326 | def get_query_handler(self, server): |
1327 | edns_options = copy.deepcopy(self.edns_options) | |
1328 | server_cookie = None | |
1329 | server_cookie_status = DNS_COOKIE_NO_COOKIE | |
1330 | ||
1331 | if self.edns >= 0: | |
1332 | try: | |
1333 | cookie_opt = [o for o in edns_options if o.otype == 10][0] | |
1334 | except IndexError: | |
1335 | pass | |
1336 | else: | |
1337 | if len(cookie_opt.data) == 8: | |
1338 | server_cookie_status = DNS_COOKIE_CLIENT_COOKIE_ONLY | |
1339 | elif len(cookie_opt.data) >= 16 and len(cookie_opt.data) <= 40: | |
1340 | if cookie_opt.data[8:] == self.cookie_standin: | |
1341 | if server in self.cookie_jar: | |
1342 | # if there is a cookie for this server, | |
1343 | # then add it | |
1344 | server_cookie = self.cookie_jar[server] | |
1345 | cookie_opt.data = cookie_opt.data[:8] + server_cookie | |
1346 | server_cookie_status = DNS_COOKIE_SERVER_COOKIE_FRESH | |
1347 | else: | |
1348 | # otherwise, send just the client cookie. | |
1349 | cookie_opt.data = cookie_opt.data[:8] | |
1350 | server_cookie_status = DNS_COOKIE_CLIENT_COOKIE_ONLY | |
1351 | elif cookie_opt.data[8:] == self.cookie_bad: | |
1352 | server_cookie_status = DNS_COOKIE_SERVER_COOKIE_BAD | |
1353 | else: | |
1354 | server_cookie_status = DNS_COOKIE_SERVER_COOKIE_STATIC | |
1355 | else: | |
1356 | server_cookie_status = DNS_COOKIE_IMPROPER_LENGTH | |
1357 | ||
1253 | 1358 | request = dns.message.Message() |
1254 | 1359 | request.flags = self.flags |
1255 | 1360 | request.find_rrset(request.question, self.qname, self.rdclass, self.rdtype, create=True, force_unique=True) |
1256 | request.use_edns(self.edns, self.edns_flags, self.edns_max_udp_payload, options=self.edns_options[:]) | |
1361 | request.use_edns(self.edns, self.edns_flags, self.edns_max_udp_payload, options=edns_options) | |
1257 | 1362 | |
1258 | 1363 | if server.version == 6: |
1259 | 1364 | client = self.client_ipv6 |
1270 | 1375 | if self.lifetime is not None: |
1271 | 1376 | response_handlers.append(LifetimeHandler(self.lifetime).build()) |
1272 | 1377 | |
1273 | return DNSQueryHandler(self, request, params, response_handlers, server, client) | |
1378 | return DNSQueryHandler(self, request, server_cookie, server_cookie_status, params, response_handlers, server, client) | |
1274 | 1379 | |
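In `get_query_handler` above, a stand-in server cookie in the option data is swapped for the real cookie from the per-server jar when one exists, or trimmed to the 8-byte client cookie otherwise. A simplified sketch of that substitution (function and status names are illustrative):

```python
def fill_server_cookie(opt_data, server, cookie_jar, standin):
    """Replace a stand-in server cookie with the real one, if known."""
    if opt_data[8:] != standin:
        return opt_data, 'UNCHANGED'
    if server in cookie_jar:
        # a cookie exists for this server, so substitute it
        return opt_data[:8] + cookie_jar[server], 'SERVER_COOKIE_FRESH'
    # otherwise, send just the client cookie
    return opt_data[:8], 'CLIENT_COOKIE_ONLY'

standin = b'\x00' * 8
jar = {'192.0.2.1': b'\xaa' * 8}
data, status = fill_server_cookie(b'\x11' * 8 + standin, '192.0.2.1', jar, standin)
```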
1275 | 1380 | @classmethod |
1276 | 1381 | def execute_queries(cls, *queries, **kwargs): |
1330 | 1435 | |
1331 | 1436 | while query_handlers: |
1332 | 1437 | while request_list and time.time() >= request_list[0][0]: |
1333 | tm.query_nowait(request_list.pop(0)[1]) | |
1438 | tm.handle_msg_nowait(request_list.pop(0)[1]) | |
1334 | 1439 | |
1335 | 1440 | t = time.time() |
1336 | 1441 | if request_list and t < request_list[0][0]: |
1406 | 1511 | errno1 = response.errno |
1407 | 1512 | else: |
1408 | 1513 | errno1 = None |
1409 | response_obj = DNSResponse(msg, msg_size, err, errno1, qh.history, response_time, query) | |
1514 | response_obj = DNSResponse(msg, msg_size, err, errno1, qh.history, response_time, query, qh.server_cookie, qh.server_cookie_status) | |
1410 | 1515 | |
1411 | 1516 | # if client IP is not specified, and there is a socket |
1412 | 1517 | # failure, then src might be None |
1501 | 1606 | response_handlers = [] |
1502 | 1607 | |
1503 | 1608 | def __new__(cls, qname, rdtype, rdclass, servers, bailiwick=None, |
1504 | client_ipv4=None, client_ipv6=None, port=53, odd_ports=None, | |
1609 | client_ipv4=None, client_ipv6=None, port=53, odd_ports=None, cookie_jar=None, cookie_standin=None, cookie_bad=None, | |
1505 | 1610 | query_timeout=None, max_attempts=None, lifetime=None, |
1506 | 1611 | executable=True): |
1507 | 1612 | |
1514 | 1619 | |
1515 | 1620 | if executable: |
1516 | 1621 | return ExecutableDNSQuery(qname, rdtype, rdclass, servers, bailiwick, |
1517 | client_ipv4, client_ipv6, port, odd_ports, | |
1622 | client_ipv4, client_ipv6, port, odd_ports, cookie_jar, cookie_standin, cookie_bad, | |
1518 | 1623 | cls.flags, cls.edns, cls.edns_max_udp_payload, cls.edns_flags, cls.edns_options, cls.tcp, |
1519 | 1624 | cls.response_handlers, query_timeout, max_attempts, lifetime) |
1520 | 1625 | |
1525 | 1630 | def __init__(self, *args, **kwargs): |
1526 | 1631 | raise NotImplementedError()
1527 | 1632 | |
1633 | @classmethod | |
1634 | def add_mixin(cls, mixin_cls): | |
1635 | class _foo(cls): | |
1636 | flags = cls.flags | getattr(mixin_cls, 'flags', 0) | |
1637 | edns_flags = cls.edns_flags | getattr(mixin_cls, 'edns_flags', 0) | |
1638 | edns_options = cls.edns_options + copy.deepcopy(getattr(mixin_cls, 'edns_options', [])) | |
1639 | return _foo | |
1640 | ||
1641 | @classmethod | |
1642 | def get_cookie_opt(cls): | |
1643 | try: | |
1644 | return [o for o in cls.edns_options if o.otype == 10][0] | |
1645 | except IndexError: | |
1646 | return None | |
1647 | ||
1648 | @classmethod | |
1649 | def add_server_cookie(cls, server_cookie): | |
1650 | cookie_opt = cls.get_cookie_opt() | |
1651 | if cookie_opt is not None: | |
1652 | if len(cookie_opt.data) != 8: | |
1653 | raise TypeError('COOKIE option must have length of 8.') | |
1654 | cookie_opt.data += server_cookie | |
1655 | return cls | |
1656 | ||
1657 | @classmethod | |
1658 | def remove_cookie_option(cls): | |
1659 | cookie_opt = cls.get_cookie_opt() | |
1660 | if cookie_opt is not None: | |
1661 | cls.edns_options.remove(cookie_opt) | |
1662 | return cls | |
1528 | 1663 | |
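The new `add_mixin()` classmethod above composes a query class whose flags and EDNS options are the union of the base class and a mixin. A self-contained illustration of the pattern with simplified stand-in classes (not the actual DNSViz query classes):

```python
class Base:
    flags = 0x0100
    edns_flags = 0x8000
    edns_options = []

class CookieMixin:
    # placeholder for something like dns.edns.GenericOption(10, ...)
    edns_options = ['COOKIE']

def add_mixin(cls, mixin_cls):
    # build a subclass whose class attributes merge base and mixin
    class _combined(cls):
        flags = cls.flags | getattr(mixin_cls, 'flags', 0)
        edns_flags = cls.edns_flags | getattr(mixin_cls, 'edns_flags', 0)
        edns_options = cls.edns_options + list(getattr(mixin_cls, 'edns_options', []))
    return _combined

Q = add_mixin(Base, CookieMixin)
print(Q.edns_options)  # ['COOKIE']
```

Because `getattr(..., 0)` defaults missing attributes, a mixin only needs to declare the attributes it actually contributes.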
1529 | 1664 | class SimpleDNSQuery(DNSQueryFactory): |
1530 | 1665 | '''A simple query, no frills.''' |
1539 | 1674 | class StandardQuery(SimpleDNSQuery): |
1540 | 1675 | '''A standard old-school DNS query that handles truncated packets.''' |
1541 | 1676 | |
1542 | response_handlers = SimpleDNSQuery.response_handlers + [UseTCPOnTCFlagHandler()] | |
1677 | response_handlers = \ | |
1678 | SimpleDNSQuery.response_handlers + \ | |
1679 | [UseTCPOnTCFlagHandler()] | |
1543 | 1680 | |
1544 | 1681 | class StandardRecursiveQuery(StandardQuery, RecursiveDNSQuery): |
1545 | 1682 | '''A standard old-school recursive DNS query that handles truncated packets.''' |
1550 | 1687 | '''A recursive DNS query that retries with checking disabled if the |
1551 | 1688 | response code is SERVFAIL.''' |
1552 | 1689 | |
1553 | response_handlers = StandardRecursiveQuery.response_handlers + [SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL)] | |
1690 | response_handlers = \ | |
1691 | StandardRecursiveQuery.response_handlers + \ | |
1692 | [SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL)] | |
1554 | 1693 | |
1555 | 1694 | class EDNS0Query(StandardQuery): |
1556 | 1695 | '''A standard query with EDNS0.''' |
1576 | 1715 | '''A standard DNSSEC query, designed for quick turnaround.''' |
1577 | 1716 | |
1578 | 1717 | response_handlers = DNSSECQuery.response_handlers + \ |
1579 | [DisableEDNSOnFormerrHandler(), DisableEDNSOnRcodeHandler()] | |
1718 | [ | |
1719 | AddServerCookieOnBADCOOKIE(), | |
1720 | DisableEDNSOnFormerrHandler(), | |
1721 | DisableEDNSOnRcodeHandler() | |
1722 | ] | |
1580 | 1723 | |
1581 | 1724 | query_timeout = 1.0 |
1582 | 1725 | max_attempts = 1 |
1583 | 1726 | lifetime = 3.0 |
1584 | 1727 | |
1585 | class RobustDNSSECQuery(DNSSECQuery): | |
1586 | '''A robust query with a number of handlers, designed to get a response, | |
1587 | in the midst of compatibility and connectivity issues.''' | |
1588 | ||
1589 | response_handlers = DNSSECQuery.response_handlers + \ | |
1590 | [DisableEDNSOnFormerrHandler(), DisableEDNSOnRcodeHandler(), | |
1591 | ReduceUDPMaxPayloadOnTimeoutHandler(512, 3), | |
1592 | DisableEDNSOnTimeoutHandler(4)] | |
1593 | ||
1594 | # For timeouts: | |
1595 | # 1 - no change | |
1596 | # 2 - no change | |
1597 | # 3 - reduce udp max payload to 512; change timeout to 1 second | |
1598 | # 4 - disable EDNS | |
1599 | # 5 - return | |
1600 | ||
1601 | query_timeout = 1.0 | |
1602 | max_attempts = 5 | |
1603 | lifetime = 7.0 | |
1604 | ||
1605 | 1728 | class DiagnosticQuery(DNSSECQuery): |
1606 | 1729 | '''A robust query with a number of handlers, designed to detect common DNS |
1607 | 1730 | compatibility and connectivity issues.''' |
1608 | 1731 | |
1609 | 1732 | response_handlers = DNSSECQuery.response_handlers + \ |
1610 | [DisableEDNSOnFormerrHandler(), DisableEDNSOnRcodeHandler(), | |
1611 | ReduceUDPMaxPayloadOnTimeoutHandler(512, 4), | |
1612 | ClearEDNSFlagOnTimeoutHandler(dns.flags.DO, 6), DisableEDNSOnTimeoutHandler(7), | |
1613 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1614 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
1615 | ChangeTimeoutOnTimeoutHandler(1.0, 4), | |
1616 | ChangeTimeoutOnTimeoutHandler(2.0, 5)] | |
1733 | [ | |
1734 | AddServerCookieOnBADCOOKIE(), | |
1735 | DisableEDNSOnFormerrHandler(), | |
1736 | DisableEDNSOnRcodeHandler(), | |
1737 | ReduceUDPMaxPayloadOnTimeoutHandler(512, 4), | |
1738 | RemoveEDNSOptionOnTimeoutHandler(6), | |
1739 | ClearEDNSFlagOnTimeoutHandler(dns.flags.DO, 10), | |
1740 | DisableEDNSOnTimeoutHandler(11), | |
1741 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1742 | ChangeTimeoutOnTimeoutHandler(1.0, 4), | |
1743 | ChangeTimeoutOnTimeoutHandler(2.0, 5), | |
1744 | ChangeTimeoutOnTimeoutHandler(1.0, 6), | |
1745 | ] | |
1617 | 1746 | # For timeouts: |
1618 | 1747 | # 1 - no change |
1619 | 1748 | # 2 - change timeout to 2 seconds |
1620 | # 3 - change timeout to 4 seconds | |
1749 | # 3 - no change | |
1621 | 1750 | # 4 - reduce udp max payload to 512; change timeout to 1 second |
1622 | 1751 | # 5 - change timeout to 2 seconds |
1623 | # 6 - clear DO flag | |
1624 | # 7 - disable EDNS | |
1625 | # 8 - return | |
1752 | # 6 - remove EDNS option (if any); change timeout to 1 second | |
1753 | # 7 - remove EDNS option (if any) | |
1754 | # 8 - remove EDNS option (if any) | |
1755 | # 9 - remove EDNS option (if any) | |
1756 | # 10 - clear DO flag | 
1757 | # 11 - disable EDNS | |
1758 | # 12 - return (give up) | |
1626 | 1759 | |
1627 | 1760 | query_timeout = 1.0 |
1628 | max_attempts = 8 | |
1629 | lifetime = 18.0 | |
1761 | max_attempts = 12 | |
1762 | lifetime = 16.0 | |
1630 | 1763 | |
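The timeout comments above encode a per-attempt schedule driven by `ChangeTimeoutOnTimeoutHandler`. A hypothetical helper (not part of DNSViz) that sums such a schedule, useful to sanity-check that `lifetime` covers all `max_attempts`, applied to the `DiagnosticQuery` values above:

```python
def total_retry_time(initial_timeout, changes, max_attempts):
    """Sum per-attempt timeouts; `changes` maps attempt number -> new timeout,
    applied from that attempt onward."""
    timeout = initial_timeout
    total = 0.0
    for attempt in range(1, max_attempts + 1):
        timeout = changes.get(attempt, timeout)
        total += timeout
    return total

# DiagnosticQuery: 12 attempts, 1s initial timeout, changes at attempts 2/4/5/6
print(total_retry_time(1.0, {2: 2.0, 4: 1.0, 5: 2.0, 6: 1.0}, 12))  # 15.0
```

The 15 seconds of retries fit within the class's `lifetime` of 16.0 seconds.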
1631 | 1764 | class RecursiveDiagnosticQuery(RecursiveDNSSECQuery): |
1632 | 1765 | '''A robust query to a cache with a number of handlers, designed to detect |
1633 | 1766 | common DNS compatibility and connectivity issues.''' |
1634 | 1767 | |
1635 | 1768 | response_handlers = DNSSECQuery.response_handlers + \ |
1636 | [DisableEDNSOnFormerrHandler(), SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL), DisableEDNSOnRcodeHandler(), | |
1637 | ReduceUDPMaxPayloadOnTimeoutHandler(512, 5), | |
1638 | ClearEDNSFlagOnTimeoutHandler(dns.flags.DO, 7), DisableEDNSOnTimeoutHandler(8), | |
1639 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1640 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
1641 | ChangeTimeoutOnTimeoutHandler(8.0, 4), | |
1642 | ChangeTimeoutOnTimeoutHandler(1.0, 5), | |
1643 | ChangeTimeoutOnTimeoutHandler(2.0, 6)] | |
1769 | [ | |
1770 | AddServerCookieOnBADCOOKIE(), | |
1771 | DisableEDNSOnFormerrHandler(), | |
1772 | SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL), | |
1773 | DisableEDNSOnRcodeHandler(), | |
1774 | ReduceUDPMaxPayloadOnTimeoutHandler(512, 5), | |
1775 | RemoveEDNSOptionOnTimeoutHandler(7), | |
1776 | ClearEDNSFlagOnTimeoutHandler(dns.flags.DO, 11), | |
1777 | DisableEDNSOnTimeoutHandler(12), | |
1778 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1779 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
1780 | ChangeTimeoutOnTimeoutHandler(8.0, 4), | |
1781 | ChangeTimeoutOnTimeoutHandler(1.0, 5), | |
1782 | ChangeTimeoutOnTimeoutHandler(2.0, 6), | |
1783 | ChangeTimeoutOnTimeoutHandler(1.0, 7), | |
1784 | ] | |
1644 | 1785 | # For timeouts: |
1645 | 1786 | # 1 - no change |
1646 | 1787 | # 2 - change timeout to 2 seconds |
1648 | 1789 | # 4 - change timeout to 8 seconds |
1649 | 1790 | # 5 - reduce udp max payload to 512; change timeout to 1 second |
1650 | 1791 | # 6 - change timeout to 2 seconds |
1651 | # 7 - clear DO flag | |
1652 | # 8 - disable EDNS | |
1653 | # 9 - return | |
1792 | # 7 - remove EDNS option (if any); change timeout to 1 second | |
1793 | # 8 - remove EDNS option (if any) | |
1794 | # 9 - remove EDNS option (if any) | |
1795 | # 10 - remove EDNS option (if any) | |
1796 | # 11 - clear DO flag | |
1797 | # 12 - disable EDNS | |
1798 | # 13 - return (give up) | |
1654 | 1799 | |
1655 | 1800 | query_timeout = 1.0 |
1656 | max_attempts = 9 | |
1657 | lifetime = 25.0 | |
1801 | max_attempts = 13 | |
1802 | lifetime = 26.0 | |
1658 | 1803 | |
1659 | 1804 | class TCPDiagnosticQuery(DNSSECQuery): |
1660 | 1805 | '''A robust query with a number of handlers, designed to detect common DNS |
1662 | 1807 | |
1663 | 1808 | tcp = True |
1664 | 1809 | |
1665 | response_handlers = [ | |
1666 | DisableEDNSOnFormerrHandler(), DisableEDNSOnRcodeHandler(), | |
1667 | ChangeTimeoutOnTimeoutHandler(4.0, 2)] | |
1810 | response_handlers = \ | |
1811 | [ | |
1812 | DisableEDNSOnFormerrHandler(), | |
1813 | DisableEDNSOnRcodeHandler(), | |
1814 | ChangeTimeoutOnTimeoutHandler(4.0, 2) | |
1815 | ] | |
1668 | 1816 | # For timeouts: |
1669 | 1817 | # 1 - no change |
1670 | 1818 | # 2 - change timeout to 4 seconds |
1680 | 1828 | |
1681 | 1829 | tcp = True |
1682 | 1830 | |
1683 | response_handlers = [ | |
1684 | DisableEDNSOnFormerrHandler(), SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL), DisableEDNSOnRcodeHandler(), | |
1685 | ChangeTimeoutOnTimeoutHandler(4.0, 2), | |
1686 | ChangeTimeoutOnTimeoutHandler(8.0, 3)] | |
1831 | response_handlers = \ | |
1832 | [ | |
1833 | DisableEDNSOnFormerrHandler(), | |
1834 | SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL), | |
1835 | DisableEDNSOnRcodeHandler(), | |
1836 | ChangeTimeoutOnTimeoutHandler(4.0, 2), | |
1837 | ChangeTimeoutOnTimeoutHandler(8.0, 3) | |
1838 | ] | |
1687 | 1839 | # For timeouts: |
1688 | 1840 | # 1 - no change |
1689 | 1841 | # 2 - change timeout to 4 seconds |
1696 | 1848 | |
1697 | 1849 | class PMTUDiagnosticQuery(DNSSECQuery): |
1698 | 1850 | |
1699 | response_handlers = [PMTUBoundingHandler(512, 4, 1.0, | |
1700 | (MaxTimeoutsHandler(8), | |
1701 | LifetimeHandler(18.0), | |
1702 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1703 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
1704 | ChangeTimeoutOnTimeoutHandler(1.0, 4), | |
1705 | ChangeTimeoutOnTimeoutHandler(2.0, 5))), | |
1706 | UseTCPOnTCFlagHandler(), | |
1707 | DisableEDNSOnFormerrHandler(), DisableEDNSOnRcodeHandler(), | |
1708 | ClearEDNSFlagOnTimeoutHandler(dns.flags.DO, 6), DisableEDNSOnTimeoutHandler(7)] | |
1709 | ||
1710 | query_timeout = 1.0 | |
1711 | max_attempts = 15 | |
1712 | lifetime = 18.0 | |
1713 | ||
1714 | class RecursivePMTUDiagnosticQuery(RecursiveDNSSECQuery): | |
1715 | ||
1716 | response_handlers = [PMTUBoundingHandler(512, 5, 1.0, | |
1717 | (MaxTimeoutsHandler(8), | |
1718 | LifetimeHandler(25.0), | |
1719 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1720 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
1721 | ChangeTimeoutOnTimeoutHandler(8.0, 4), | |
1722 | ChangeTimeoutOnTimeoutHandler(1.0, 5), | |
1723 | ChangeTimeoutOnTimeoutHandler(2.0, 6))), | |
1724 | UseTCPOnTCFlagHandler(), | |
1725 | DisableEDNSOnFormerrHandler(), SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL), DisableEDNSOnRcodeHandler(), | |
1726 | ClearEDNSFlagOnTimeoutHandler(dns.flags.DO, 7), DisableEDNSOnTimeoutHandler(8)] | |
1727 | ||
1728 | query_timeout = 1.0 | |
1729 | max_attempts = 15 | |
1730 | lifetime = 25.0 | |
1731 | ||
1732 | class TruncationDiagnosticQuery(DNSSECQuery): | |
1733 | '''A simple query to test the results of a query with capabilities of only | |
1734 | receiving back a small (512 byte) payload.''' | |
1735 | ||
1736 | response_handlers = [ChangeTimeoutOnTimeoutHandler(2.0, 2), ChangeTimeoutOnTimeoutHandler(4.0, 3)] | |
1851 | response_handlers = \ | |
1852 | [PMTUBoundingHandler(512, 4, 6, 1.0)] + \ | |
1853 | DNSSECQuery.response_handlers + \ | |
1854 | [ | |
1855 | AddServerCookieOnBADCOOKIE(), | |
1856 | DisableEDNSOnFormerrHandler(), | |
1857 | DisableEDNSOnRcodeHandler(), | |
1858 | RemoveEDNSOptionOnTimeoutHandler(6), | |
1859 | ClearEDNSFlagOnTimeoutHandler(dns.flags.DO, 10), | |
1860 | DisableEDNSOnTimeoutHandler(11), | |
1861 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1862 | ChangeTimeoutOnTimeoutHandler(1.0, 4), | |
1863 | ChangeTimeoutOnTimeoutHandler(2.0, 5), | |
1864 | ChangeTimeoutOnTimeoutHandler(1.0, 6), | |
1865 | ] | |
1737 | 1866 | # For timeouts: |
1738 | 1867 | # 1 - no change |
1739 | 1868 | # 2 - change timeout to 2 seconds |
1740 | # 3 - change timeout to 4 seconds | |
1741 | ||
1742 | edns_max_udp_payload = 512 | |
1869 | # 3 - no change | |
1870 | # 4 - reduce udp max payload to 512; change timeout to 1 second | |
1871 | # 5 - change timeout to 2 seconds | |
1872 | # 6 - remove EDNS option (if any); change timeout to 1 second | |
1873 | # 7 - remove EDNS option (if any) | |
1874 | # 8 - remove EDNS option (if any) | |
1875 | # 9 - remove EDNS option (if any) | |
1876 | # 10 - clear DO flag | 
1877 | # 11 - disable EDNS | |
1878 | # 12 - return (give up) | |
1743 | 1879 | |
1744 | 1880 | query_timeout = 1.0 |
1745 | max_attempts = 4 | |
1746 | lifetime = 8.0 | |
1747 | ||
1748 | class RecursiveTruncationDiagnosticQuery(DNSSECQuery, RecursiveDNSQuery): | |
1749 | '''A simple recursive query to test the results of a query with | |
1750 | capabilities of only receiving back a small (512 byte) payload.''' | |
1751 | ||
1752 | response_handlers = [SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL), | |
1753 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1754 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
1755 | ChangeTimeoutOnTimeoutHandler(8.0, 4)] | |
1881 | max_attempts = 12 | |
1882 | lifetime = 22.0 # set this a little longer due to pickle stage | |
1883 | ||
1884 | class RecursivePMTUDiagnosticQuery(RecursiveDNSSECQuery): | |
1885 | ||
1886 | response_handlers = \ | |
1887 | [PMTUBoundingHandler(512, 5, 7, 1.0)] + \ | |
1888 | DNSSECQuery.response_handlers + \ | |
1889 | [ | |
1890 | AddServerCookieOnBADCOOKIE(), | |
1891 | DisableEDNSOnFormerrHandler(), | |
1892 | SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL), | |
1893 | DisableEDNSOnRcodeHandler(), | |
1894 | RemoveEDNSOptionOnTimeoutHandler(7), | |
1895 | ClearEDNSFlagOnTimeoutHandler(dns.flags.DO, 11), | |
1896 | DisableEDNSOnTimeoutHandler(12), | |
1897 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1898 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
1899 | ChangeTimeoutOnTimeoutHandler(8.0, 4), | |
1900 | ChangeTimeoutOnTimeoutHandler(1.0, 5), | |
1901 | ChangeTimeoutOnTimeoutHandler(2.0, 6), | |
1902 | ChangeTimeoutOnTimeoutHandler(1.0, 7), | |
1903 | ] | |
1756 | 1904 | # For timeouts: |
1757 | 1905 | # 1 - no change |
1758 | 1906 | # 2 - change timeout to 2 seconds |
1759 | 1907 | # 3 - change timeout to 4 seconds |
1760 | 1908 | # 4 - change timeout to 8 seconds |
1761 | ||
1762 | edns_max_udp_payload = 512 | |
1909 | # 5 - reduce udp max payload to 512; change timeout to 1 second | |
1910 | # 6 - change timeout to 2 seconds | |
1911 | # 7 - remove EDNS option (if any); change timeout to 1 second | |
1912 | # 8 - remove EDNS option (if any) | |
1913 | # 9 - remove EDNS option (if any) | |
1914 | # 10 - remove EDNS option (if any) | |
1915 | # 11 - clear DO flag | |
1916 | # 12 - disable EDNS | |
1917 | # 13 - return (give up) | |
1763 | 1918 | |
1764 | 1919 | query_timeout = 1.0 |
1765 | max_attempts = 5 | |
1766 | lifetime = 18.0 | |
1767 | ||
1768 | class EDNSVersionDiagnosticQuery(SimpleDNSQuery): | |
1769 | '''A query designed to test unknown EDNS version compatibility.''' | |
1770 | ||
1771 | edns = 100 | |
1772 | edns_max_udp_payload = 512 | |
1773 | ||
1774 | response_handlers = SimpleDNSQuery.response_handlers + \ | |
1775 | [ChangeEDNSVersionOnTimeoutHandler(0, 4), | |
1776 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1777 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
1778 | ChangeTimeoutOnTimeoutHandler(2.0, 4)] | |
1920 | max_attempts = 13 | |
1921 | lifetime = 32.0 # set this a little longer due to pickle stage | |
1922 | ||
1923 | class TruncationDiagnosticQuery(DNSSECQuery): | |
1924 | '''A simple query to test the results of a query with capabilities of only | |
1925 | receiving back a small (512 byte) payload.''' | |
1926 | ||
1927 | response_handlers = \ | |
1928 | [ | |
1929 | AddServerCookieOnBADCOOKIE(), | |
1930 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1931 | ChangeTimeoutOnTimeoutHandler(4.0, 3) | |
1932 | ] | |
1779 | 1933 | # For timeouts: |
1780 | 1934 | # 1 - no change |
1781 | 1935 | # 2 - change timeout to 2 seconds |
1782 | 1936 | # 3 - change timeout to 4 seconds |
1783 | # 4 - change EDNS version to 0; change timeout to 2 seconds | |
1784 | # 5 - return | |
1937 | ||
1938 | edns_max_udp_payload = 512 | |
1785 | 1939 | |
1786 | 1940 | query_timeout = 1.0 |
1787 | max_attempts = 5 | |
1788 | lifetime = 15.0 | |
1789 | ||
1790 | class EDNSOptDiagnosticQuery(SimpleDNSQuery): | |
1791 | '''A query designed to test unknown EDNS option compatibility.''' | |
1792 | ||
1793 | edns = 0 | |
1794 | edns_max_udp_payload = 512 | |
1795 | edns_options = [dns.edns.GenericOption(100, b'')] | |
1796 | ||
1797 | response_handlers = SimpleDNSQuery.response_handlers + \ | |
1798 | [RemoveEDNSOptionOnTimeoutHandler(100, 4), | |
1799 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1800 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
1801 | ChangeTimeoutOnTimeoutHandler(2.0, 4)] | |
1802 | ||
1803 | # For timeouts: | |
1804 | # 1 - no change | |
1805 | # 2 - change timeout to 2 seconds | |
1806 | # 3 - change timeout to 4 seconds | |
1807 | # 4 - remove EDNS option; change timeout to 2 seconds | |
1808 | # 5 - return | |
1809 | ||
1810 | query_timeout = 1.0 | |
1811 | max_attempts = 5 | |
1812 | lifetime = 15.0 | |
1813 | ||
1814 | class EDNSFlagDiagnosticQuery(SimpleDNSQuery): | |
1815 | '''A query designed to test unknown EDNS flag compatibility.''' | |
1816 | ||
1817 | edns = 0 | |
1818 | edns_max_udp_payload = 512 | |
1819 | edns_flags = SimpleDNSQuery.edns_flags | 0x80 | |
1820 | ||
1821 | response_handlers = SimpleDNSQuery.response_handlers + \ | |
1822 | [ClearEDNSFlagOnTimeoutHandler(0x80, 4), | |
1823 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1824 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
1825 | ChangeTimeoutOnTimeoutHandler(2.0, 4)] | |
1826 | ||
1827 | # For timeouts: | |
1828 | # 1 - no change | |
1829 | # 2 - change timeout to 2 seconds | |
1830 | # 3 - change timeout to 4 seconds | |
1831 | # 4 - clear EDNS flag; change timeout to 2 seconds | |
1832 | # 5 - return | |
1833 | ||
1834 | query_timeout = 1.0 | |
1835 | max_attempts = 5 | |
1836 | lifetime = 15.0 | |
1837 | ||
1838 | class RecursiveEDNSVersionDiagnosticQuery(SimpleDNSQuery): | |
1839 | '''A query designed to test unknown EDNS version compatibility on recursive | |
1840 | servers.''' | |
1841 | ||
1842 | flags = dns.flags.RD | |
1843 | edns = 100 | |
1844 | edns_max_udp_payload = 512 | |
1845 | ||
1846 | response_handlers = SimpleDNSQuery.response_handlers + \ | |
1847 | [SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL), | |
1848 | ChangeEDNSVersionOnTimeoutHandler(0, 5), | |
1849 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1850 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
1851 | ChangeTimeoutOnTimeoutHandler(8.0, 4), | |
1852 | ChangeTimeoutOnTimeoutHandler(2.0, 5)] | |
1941 | max_attempts = 4 | |
1942 | lifetime = 8.0 | |
1943 | ||
1944 | class RecursiveTruncationDiagnosticQuery(DNSSECQuery, RecursiveDNSQuery): | |
1945 | '''A simple recursive query to test the results of a query when | |
1946 | the client can receive only a small (512-byte) payload.''' | |
1947 | ||
1948 | response_handlers = \ | |
1949 | [ | |
1950 | AddServerCookieOnBADCOOKIE(), | |
1951 | SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL), | |
1952 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1953 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
1954 | ChangeTimeoutOnTimeoutHandler(8.0, 4) | |
1955 | ] | |
1853 | 1956 | # For timeouts: |
1854 | 1957 | # 1 - no change |
1855 | 1958 | # 2 - change timeout to 2 seconds |
1856 | 1959 | # 3 - change timeout to 4 seconds |
1857 | 1960 | # 4 - change timeout to 8 seconds |
1858 | # 5 - change EDNS version to 0; change timeout to 2 seconds | |
1859 | # 6 - return | |
1961 | ||
1962 | edns_max_udp_payload = 512 | |
1860 | 1963 | |
1861 | 1964 | query_timeout = 1.0 |
1862 | max_attempts = 6 | |
1863 | lifetime = 25.0 | |
1864 | ||
1865 | class RecursiveEDNSOptDiagnosticQuery(SimpleDNSQuery): | |
1866 | '''A query designed to test unknown EDNS option compatibility on recursive | |
1867 | servers.''' | |
1868 | ||
1869 | flags = dns.flags.RD | |
1965 | max_attempts = 5 | |
1966 | lifetime = 18.0 | |
1967 | ||
1968 | class EDNSVersionDiagnosticQuery(SimpleDNSQuery): | |
1969 | '''A query designed to test unknown EDNS version compatibility.''' | |
1970 | ||
1971 | edns = 100 | |
1972 | edns_max_udp_payload = 512 | |
1973 | ||
1974 | response_handlers = \ | |
1975 | SimpleDNSQuery.response_handlers + \ | |
1976 | [ | |
1977 | ChangeEDNSVersionOnTimeoutHandler(0, 4), | |
1978 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1979 | ChangeTimeoutOnTimeoutHandler(1.0, 4) | |
1980 | ] | |
1981 | # For timeouts: | |
1982 | # 1 - no change | |
1983 | # 2 - change timeout to 2 seconds | |
1984 | # 3 - no change | |
1985 | # 4 - change EDNS version to 0; change timeout to 1 second | |
1986 | # 5 - return | |
1987 | ||
1988 | query_timeout = 1.0 | |
1989 | max_attempts = 5 | |
1990 | lifetime = 7.0 | |
1991 | ||
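The per-attempt timeout schedules spelled out in the `# For timeouts:` comments can be sanity-checked against each class's `lifetime` budget. A minimal standalone sketch (not part of dnsviz; `total_budget` is a hypothetical helper) that replays a schedule, where the mapping gives the new timeout installed after the Nth timeout:

```python
def total_budget(initial_timeout, changes, max_attempts):
    """Sum the time spent across attempts; `changes` maps a timeout
    ordinal to the timeout used for subsequent attempts."""
    timeout = initial_timeout
    total = 0.0
    for attempt in range(1, max_attempts + 1):
        total += timeout
        timeout = changes.get(attempt, timeout)
    return total

# EDNSVersionDiagnosticQuery above: 1s initially, 2s after the 2nd
# timeout, 1s after the 4th; 5 attempts fit its lifetime of 7.0
assert total_budget(1.0, {2: 2.0, 4: 1.0}, 5) == 7.0
```

The same check can be applied to the recursive variants, whose schedules back off to 8 seconds before dropping EDNS features.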
1992 | class EDNSOptDiagnosticQuery(SimpleDNSQuery): | |
1993 | '''A query designed to test unknown EDNS option compatibility.''' | |
1994 | ||
1870 | 1995 | edns = 0 |
1871 | 1996 | edns_max_udp_payload = 512 |
1872 | 1997 | edns_options = [dns.edns.GenericOption(100, b'')] |
1873 | 1998 | |
1874 | response_handlers = SimpleDNSQuery.response_handlers + \ | |
1875 | [SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL), | |
1876 | RemoveEDNSOptionOnTimeoutHandler(100, 5), | |
1877 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1878 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
1879 | ChangeTimeoutOnTimeoutHandler(8.0, 4), | |
1880 | ChangeTimeoutOnTimeoutHandler(2.0, 5)] | |
1881 | ||
1999 | response_handlers = \ | |
2000 | SimpleDNSQuery.response_handlers + \ | |
2001 | [ | |
2002 | AddServerCookieOnBADCOOKIE(), | |
2003 | RemoveEDNSOptionOnTimeoutHandler(4), | |
2004 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
2005 | ChangeTimeoutOnTimeoutHandler(1.0, 4) | |
2006 | ] | |
2007 | ||
2008 | # For timeouts: | |
2009 | # 1 - no change | |
2010 | # 2 - change timeout to 2 seconds | |
2011 | # 3 - no change | |
2012 | # 4 - remove EDNS option (if any); change timeout to 1 second | |
2013 | # 5 - remove EDNS option (if any) | |
2014 | # 6 - remove EDNS option (if any) | |
2015 | # 7 - remove EDNS option (if any) | |
2016 | # 8 - return | |
2017 | ||
2018 | query_timeout = 1.0 | |
2019 | max_attempts = 8 | |
2020 | lifetime = 11.0 | |
2021 | ||
2022 | class EDNSFlagDiagnosticQuery(SimpleDNSQuery): | |
2023 | '''A query designed to test unknown EDNS flag compatibility.''' | |
2024 | ||
2025 | edns = 0 | |
2026 | edns_max_udp_payload = 512 | |
2027 | edns_flags = SimpleDNSQuery.edns_flags | 0x80 | |
2028 | ||
2029 | response_handlers = \ | |
2030 | SimpleDNSQuery.response_handlers + \ | |
2031 | [ | |
2032 | AddServerCookieOnBADCOOKIE(), | |
2033 | RemoveEDNSOptionOnTimeoutHandler(4), | |
2034 | ClearEDNSFlagOnTimeoutHandler(0x80, 8), | |
2035 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
2036 | ChangeTimeoutOnTimeoutHandler(1.0, 4) | |
2037 | ] | |
2038 | ||
2039 | # For timeouts: | |
2040 | # 1 - no change | |
2041 | # 2 - change timeout to 2 seconds | |
2042 | # 3 - no change | |
2043 | # 4 - remove EDNS option (if any); change timeout to 1 second | |
2044 | # 5 - remove EDNS option (if any) | |
2045 | # 6 - remove EDNS option (if any) | |
2046 | # 7 - remove EDNS option (if any) | |
2047 | # 8 - clear EDNS flag | |
2048 | # 9 - return | |
2049 | ||
2050 | query_timeout = 1.0 | |
2051 | max_attempts = 9 | |
2052 | lifetime = 12.0 | |
2053 | ||
2054 | class RecursiveEDNSVersionDiagnosticQuery(SimpleDNSQuery): | |
2055 | '''A query designed to test unknown EDNS version compatibility on recursive | |
2056 | servers.''' | |
2057 | ||
2058 | flags = dns.flags.RD | |
2059 | edns = 100 | |
2060 | edns_max_udp_payload = 512 | |
2061 | ||
2062 | response_handlers = \ | |
2063 | SimpleDNSQuery.response_handlers + \ | |
2064 | [ | |
2065 | SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL), | |
2066 | ChangeEDNSVersionOnTimeoutHandler(0, 5), | |
2067 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
2068 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
2069 | ChangeTimeoutOnTimeoutHandler(8.0, 4), | |
2070 | ChangeTimeoutOnTimeoutHandler(1.0, 5) | |
2071 | ] | |
1882 | 2072 | # For timeouts: |
1883 | 2073 | # 1 - no change |
1884 | 2074 | # 2 - change timeout to 2 seconds |
1885 | 2075 | # 3 - change timeout to 4 seconds |
1886 | 2076 | # 4 - change timeout to 8 seconds |
1887 | # 5 - remove EDNS option; change timeout to 2 seconds | |
2077 | # 5 - change EDNS version to 0; change timeout to 1 second | |
1888 | 2078 | # 6 - return |
1889 | 2079 | |
1890 | 2080 | query_timeout = 1.0 |
1891 | 2081 | max_attempts = 6 |
1892 | lifetime = 25.0 | |
1893 | ||
1894 | class RecursiveEDNSFlagDiagnosticQuery(SimpleDNSQuery): | |
1895 | '''A query designed to test unknown EDNS flag compatibility on recursive | |
2082 | lifetime = 18.0 | |
2083 | ||
2084 | class RecursiveEDNSOptDiagnosticQuery(SimpleDNSQuery): | |
2085 | '''A query designed to test unknown EDNS option compatibility on recursive | |
1896 | 2086 | servers.''' |
1897 | 2087 | |
1898 | 2088 | flags = dns.flags.RD |
1899 | 2089 | edns = 0 |
1900 | 2090 | edns_max_udp_payload = 512 |
1901 | edns_flags = SimpleDNSQuery.edns_flags | 0x80 | |
1902 | ||
1903 | response_handlers = SimpleDNSQuery.response_handlers + \ | |
1904 | [SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL), | |
1905 | ClearEDNSFlagOnTimeoutHandler(0x80, 5), | |
1906 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
1907 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
1908 | ChangeTimeoutOnTimeoutHandler(8.0, 4), | |
1909 | ChangeTimeoutOnTimeoutHandler(2.0, 5)] | |
2091 | edns_options = [dns.edns.GenericOption(100, b'')] | |
2092 | ||
2093 | response_handlers = \ | |
2094 | SimpleDNSQuery.response_handlers + \ | |
2095 | [ | |
2096 | AddServerCookieOnBADCOOKIE(), | |
2097 | SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL), | |
2098 | RemoveEDNSOptionOnTimeoutHandler(5), | |
2099 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
2100 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
2101 | ChangeTimeoutOnTimeoutHandler(8.0, 4), | |
2102 | ChangeTimeoutOnTimeoutHandler(1.0, 5) | |
2103 | ] | |
1910 | 2104 | |
1911 | 2105 | # For timeouts: |
1912 | 2106 | # 1 - no change |
1913 | 2107 | # 2 - change timeout to 2 seconds |
1914 | 2108 | # 3 - change timeout to 4 seconds |
1915 | 2109 | # 4 - change timeout to 8 seconds |
1916 | # 5 - clear EDNS flag; change timeout to 2 seconds | |
1917 | # 6 - return | |
2110 | # 5 - remove EDNS option (if any); change timeout to 1 second | |
2111 | # 6 - remove EDNS option (if any) | |
2112 | # 7 - remove EDNS option (if any) | |
2113 | # 8 - remove EDNS option (if any) | |
2114 | # 9 - return | |
1918 | 2115 | |
1919 | 2116 | query_timeout = 1.0 |
1920 | max_attempts = 6 | |
1921 | lifetime = 25.0 | |
2117 | max_attempts = 9 | |
2118 | lifetime = 21.0 | |
2119 | ||
2120 | class RecursiveEDNSFlagDiagnosticQuery(SimpleDNSQuery): | |
2121 | '''A query designed to test unknown EDNS flag compatibility on recursive | |
2122 | servers.''' | |
2123 | ||
2124 | flags = dns.flags.RD | |
2125 | edns = 0 | |
2126 | edns_max_udp_payload = 512 | |
2127 | edns_flags = SimpleDNSQuery.edns_flags | 0x80 | |
2128 | ||
2129 | response_handlers = \ | |
2130 | SimpleDNSQuery.response_handlers + \ | |
2131 | [ | |
2132 | AddServerCookieOnBADCOOKIE(), | |
2133 | SetFlagOnRcodeHandler(dns.flags.CD, dns.rcode.SERVFAIL), | |
2134 | RemoveEDNSOptionOnTimeoutHandler(5), | |
2135 | ClearEDNSFlagOnTimeoutHandler(0x80, 9), | |
2136 | ChangeTimeoutOnTimeoutHandler(2.0, 2), | |
2137 | ChangeTimeoutOnTimeoutHandler(4.0, 3), | |
2138 | ChangeTimeoutOnTimeoutHandler(8.0, 4), | |
2139 | ChangeTimeoutOnTimeoutHandler(1.0, 5) | |
2140 | ] | |
2141 | ||
2142 | # For timeouts: | |
2143 | # 1 - no change | |
2144 | # 2 - change timeout to 2 seconds | |
2145 | # 3 - change timeout to 4 seconds | |
2146 | # 4 - change timeout to 8 seconds | |
2147 | # 5 - remove EDNS option (if any); change timeout to 1 second | |
2148 | # 6 - remove EDNS option (if any) | |
2149 | # 7 - remove EDNS option (if any) | |
2150 | # 8 - remove EDNS option (if any) | |
2151 | # 9 - clear EDNS flag | |
2152 | # 10 - return | |
2153 | ||
2154 | query_timeout = 1.0 | |
2155 | max_attempts = 10 | |
2156 | lifetime = 22.0 | |
1922 | 2157 | |
1923 | 2158 | def main(): |
1924 | 2159 | import json |
3 | 3 | # Created by Casey Deccio (casey@deccio.net) |
4 | 4 | # |
5 | 5 | # Copyright 2014-2016 VeriSign, Inc. |
6 | # | |
7 | # Copyright 2016-2019 Casey Deccio | |
6 | 8 | # |
7 | 9 | # DNSViz is free software; you can redistribute it and/or modify |
8 | 10 | # it under the terms of the GNU General Public License as published by |
293 | 295 | |
294 | 296 | default_th_factory = transport.DNSQueryTransportHandlerDNSFactory() |
295 | 297 | |
296 | def __init__(self, hints=util.get_root_hints(), query_cls=(query.QuickDNSSECQuery, query.RobustDNSSECQuery), client_ipv4=None, client_ipv6=None, odd_ports=None, transport_manager=None, th_factories=None, max_ttl=None): | |
298 | def __init__(self, hints=util.get_root_hints(), query_cls=(query.QuickDNSSECQuery, query.DiagnosticQuery), client_ipv4=None, client_ipv6=None, odd_ports=None, cookie_standin=None, transport_manager=None, th_factories=None, max_ttl=None): | |
297 | 299 | |
298 | 300 | self._hints = hints |
299 | 301 | self._query_cls = query_cls |
312 | 314 | |
313 | 315 | self._max_ttl = max_ttl |
314 | 316 | |
317 | self._cookie_standin = cookie_standin | |
318 | self._cookie_jar = {} | |
315 | 319 | self._cache = {} |
316 | 320 | self._expirations = [] |
317 | 321 | self._cache_lock = threading.Lock() |
444 | 448 | responses[query_tuple] = self.query(query_tuple[0], query_tuple[1], query_tuple[2]) |
445 | 449 | return responses |
446 | 450 | |
447 | def _query(self, qname, rdtype, rdclass, level, max_source, starting_domain=None): | |
448 | self.expire_cache() | |
449 | ||
450 | # check for max chain length | |
451 | if level > self.MAX_CHAIN: | |
452 | raise ServFail('SERVFAIL - resolution chain too long') | |
453 | ||
451 | def _get_answer(self, qname, rdtype, rdclass, max_source): | |
454 | 452 | # first check cache for answer |
455 | 453 | entry = self.cache_get(qname, rdtype) |
456 | 454 | if entry is not None and entry.source <= max_source: |
460 | 458 | if self.SRC_ADDITIONAL <= max_source and (qname, rdtype) in self._hints: |
461 | 459 | return [self._hints[(qname, rdtype)], dns.rcode.NOERROR] |
462 | 460 | |
461 | return None | |
462 | ||
463 | def _query(self, qname, rdtype, rdclass, level, max_source, starting_domain=None): | |
464 | self.expire_cache() | |
465 | ||
466 | # check for max chain length | |
467 | if level > self.MAX_CHAIN: | |
468 | raise ServFail('SERVFAIL - resolution chain too long') | |
469 | ||
470 | ans = self._get_answer(qname, rdtype, rdclass, max_source) | |
471 | if ans: | |
472 | return ans | |
473 | ||
463 | 474 | # next check cache for alias |
464 | entry = self.cache_get(qname, dns.rdatatype.CNAME) | |
465 | if entry is not None and entry.rrset is not None: | |
466 | return [entry.rrset] + self._query(entry.rrset[0].target, rdtype, rdclass, level + 1, max_source) | |
475 | ans = self._get_answer(qname, dns.rdatatype.CNAME, rdclass, max_source) | |
476 | if ans and ans[0] is not None: | |
477 | return [ans[0]] + self._query(ans[0][0].target, rdtype, rdclass, level + 1, max_source) | |
467 | 478 | |
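The alias-following step above recurses with `level + 1` so that `MAX_CHAIN` bounds how long a CNAME chain can get. A minimal standalone sketch of that guard (assumed dict-based cache, not the dnsviz API):

```python
def follow_aliases(cache, name, level=0, max_chain=10):
    # Follow CNAME entries recursively, bounding chain length the way
    # MAX_CHAIN bounds `level` in _query to avoid loops.
    if level > max_chain:
        raise RuntimeError('SERVFAIL - resolution chain too long')
    target = cache.get(('CNAME', name))
    if target is not None:
        return [name] + follow_aliases(cache, target, level + 1, max_chain)
    return [name]

chain = follow_aliases({('CNAME', 'www.example.com'): 'example.com'},
                       'www.example.com')
assert chain == ['www.example.com', 'example.com']
```

A self-referential CNAME in the cache would raise the chain-length error rather than recursing forever.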
468 | 479 | # now check for closest enclosing NS, DNAME, or hint |
469 | 480 | closest_zone = qname |
487 | 498 | return [entry.rrset, cname_rrset] + self._query(cname_rrset[0].target, rdtype, rdclass, level + 1, max_source) |
488 | 499 | |
489 | 500 | # look for NS records in cache |
490 | entry = self.cache_get(closest_zone, dns.rdatatype.NS) | |
491 | if entry is not None: | |
492 | if entry.rrset is not None: | |
493 | ns_rrset = entry.rrset | |
494 | for rdata in entry.rrset: | |
495 | ns_names[rdata.target] = None | |
496 | ||
497 | # look for NS records in hints | |
498 | else: | |
499 | try: | |
500 | ns_rrset = self._hints[(closest_zone, dns.rdatatype.NS)] | |
501 | except KeyError: | |
502 | pass | |
503 | else: | |
504 | for rdata in ns_rrset: | |
505 | ns_names[rdata.target] = None | |
501 | ans = self._get_answer(closest_zone, dns.rdatatype.NS, rdclass, self.SRC_ADDITIONAL) | |
502 | if ans and ans[0] is not None: | |
503 | ns_rrset = ans[0] | |
504 | for ns_rdata in ans[0]: | |
505 | addrs = set() | |
506 | for a_rdtype in dns.rdatatype.A, dns.rdatatype.AAAA: | |
507 | ans1 = self._get_answer(ns_rdata.target, a_rdtype, rdclass, self.SRC_ADDITIONAL) | |
508 | if ans1 and ans1[0]: | |
509 | for a_rdata in ans1[0]: | |
510 | addrs.add(IPAddr(a_rdata.address)) | |
511 | if addrs: | |
512 | ns_names[ns_rdata.target] = addrs | |
513 | else: | |
514 | ns_names[ns_rdata.target] = None | |
506 | 515 | |
507 | 516 | # if there were NS records associated with the names, then |
508 | 517 | # no need to continue |
530 | 539 | ns_names_without_addresses = list(set(ns_names).difference(ns_names_with_addresses)) |
531 | 540 | random.shuffle(ns_names_without_addresses) |
532 | 541 | all_ns_names = ns_names_with_addresses + ns_names_without_addresses |
542 | previous_valid_answer = set() | |
533 | 543 | |
534 | 544 | for query_cls in self._query_cls: |
535 | 545 | # query each server until we get a match |
554 | 564 | for rdata in a_rrset: |
555 | 565 | ns_names[ns_name].add(IPAddr(rdata.address)) |
556 | 566 | |
557 | for server in ns_names[ns_name]: | |
567 | for server in ns_names[ns_name].difference(previous_valid_answer): | |
558 | 568 | # server disallowed by policy |
559 | 569 | if not self._allow_server(server): |
560 | 570 | continue |
561 | 571 | |
562 | q = query_cls(qname, rdtype, rdclass, (server,), bailiwick, self._client_ipv4, self._client_ipv6, self._odd_ports.get((bailiwick, server), 53)) | |
572 | q = query_cls(qname, rdtype, rdclass, (server,), bailiwick, self._client_ipv4, self._client_ipv6, self._odd_ports.get((bailiwick, server), 53), cookie_jar=self._cookie_jar, cookie_standin=self._cookie_standin) | |
563 | 573 | q.execute(tm=self._transport_manager, th_factories=self._th_factories) |
564 | 574 | is_referral = False |
565 | 575 | |
570 | 580 | server1, client_response = list(q.responses.items())[0] |
571 | 581 | client, response = list(client_response.items())[0] |
572 | 582 | |
573 | if response.is_valid_response() and response.is_complete_response(): | |
574 | soa_rrset = None | |
575 | rcode = response.message.rcode() | |
576 | ||
577 | # response is acceptable | |
583 | server_cookie = response.get_server_cookie() | |
584 | if server_cookie is not None: | |
585 | self._cookie_jar[server1] = server_cookie | |
586 | ||
587 | if not (response.is_valid_response() and response.is_complete_response()): | |
588 | continue | |
589 | ||
590 | previous_valid_answer.add(server) | |
591 | ||
592 | soa_rrset = None | |
593 | rcode = response.message.rcode() | |
594 | ||
595 | # response is acceptable | |
596 | try: | |
597 | # first check for exact match | |
598 | ret = [[x for x in response.message.answer if x.name == qname and x.rdtype == rdtype and x.rdclass == rdclass][0]] | |
599 | except IndexError: | |
578 | 600 | try: |
579 | # first check for exact match | |
580 | ret = [[x for x in response.message.answer if x.name == qname and x.rdtype == rdtype and x.rdclass == rdclass][0]] | |
601 | # now look for DNAME | |
602 | dname_rrset = [x for x in response.message.answer if qname.is_subdomain(x.name) and qname != x.name and x.rdtype == dns.rdatatype.DNAME and x.rdclass == rdclass][0] | |
581 | 603 | except IndexError: |
582 | 604 | try: |
583 | # now look for DNAME | |
584 | dname_rrset = [x for x in response.message.answer if qname.is_subdomain(x.name) and qname != x.name and x.rdtype == dns.rdatatype.DNAME and x.rdclass == rdclass][0] | |
605 | # now look for CNAME | |
606 | cname_rrset = [x for x in response.message.answer if x.name == qname and x.rdtype == dns.rdatatype.CNAME and x.rdclass == rdclass][0] | |
585 | 607 | except IndexError: |
608 | ret = [None] | |
609 | # no answer | |
586 | 610 | try: |
587 | # now look for CNAME | |
588 | cname_rrset = [x for x in response.message.answer if x.name == qname and x.rdtype == dns.rdatatype.CNAME and x.rdclass == rdclass][0] | |
611 | soa_rrset = [x for x in response.message.authority if qname.is_subdomain(x.name) and x.rdtype == dns.rdatatype.SOA][0] | |
589 | 612 | except IndexError: |
590 | ret = [None] | |
591 | # no answer | |
613 | pass | |
614 | # cache the NS RRset | |
615 | else: | |
616 | cname_rrset = [x for x in response.message.answer if x.name == qname and x.rdtype == dns.rdatatype.CNAME and x.rdclass == rdclass][0] | |
617 | ret = [cname_rrset] | |
618 | else: | |
619 | # handle DNAME: return the DNAME, CNAME and (recursively) its chain | |
620 | cname_rrset = Response.cname_from_dname(qname, dname_rrset) | |
621 | ret = [dname_rrset, cname_rrset] | |
622 | ||
623 | if response.is_referral(qname, rdtype, rdclass, bailiwick): | |
624 | is_referral = True | |
625 | a_rrsets = {} | |
626 | min_ttl = None | |
627 | ret = None | |
628 | ||
629 | # if response is referral, then we follow it | |
630 | ns_rrset = [x for x in response.message.authority if qname.is_subdomain(x.name) and x.rdtype == dns.rdatatype.NS][0] | |
631 | ns_names = response.ns_ip_mapping_from_additional(ns_rrset.name, bailiwick) | |
632 | for ns_name in ns_names: | |
633 | if not ns_names[ns_name]: | |
634 | ns_names[ns_name] = None | |
635 | else: # name is in bailiwick | |
636 | for a_rdtype in (dns.rdatatype.A, dns.rdatatype.AAAA): | |
592 | 637 | try: |
593 | soa_rrset = [x for x in response.message.authority if qname.is_subdomain(x.name) and x.rdtype == dns.rdatatype.SOA][0] | |
594 | except IndexError: | |
638 | a_rrsets[a_rdtype] = response.message.find_rrset(response.message.additional, ns_name, a_rdtype, dns.rdataclass.IN) | |
639 | except KeyError: | |
595 | 640 | pass |
596 | # cache the NS RRset | |
597 | else: | |
598 | cname_rrset = [x for x in response.message.answer if x.name == qname and x.rdtype == dns.rdatatype.CNAME and x.rdclass == rdclass][0] | |
599 | ret = [cname_rrset] | |
600 | else: | |
601 | # handle DNAME: return the DNAME, CNAME and (recursively) its chain | |
602 | cname_rrset = Response.cname_from_dname(qname, dname_rrset) | |
603 | ret = [dname_rrset, cname_rrset] | |
604 | ||
605 | if response.is_referral(qname, rdtype, bailiwick): | |
606 | is_referral = True | |
607 | a_rrsets = {} | |
608 | min_ttl = None | |
609 | ret = None | |
610 | ||
611 | # if response is referral, then we follow it | |
612 | ns_rrset = [x for x in response.message.authority if qname.is_subdomain(x.name) and x.rdtype == dns.rdatatype.NS][0] | |
641 | else: | |
642 | if min_ttl is None or a_rrsets[a_rdtype].ttl < min_ttl: | |
643 | min_ttl = a_rrsets[a_rdtype].ttl | |
644 | ||
645 | for a_rdtype in (dns.rdatatype.A, dns.rdatatype.AAAA): | |
646 | if a_rdtype in a_rrsets: | |
647 | a_rrsets[a_rdtype].update_ttl(min_ttl) | |
648 | self.cache_put(ns_name, a_rdtype, a_rrsets[a_rdtype], self.SRC_ADDITIONAL, dns.rcode.NOERROR, None, None) | |
649 | else: | |
650 | self.cache_put(ns_name, a_rdtype, None, self.SRC_ADDITIONAL, dns.rcode.NOERROR, None, min_ttl) | |
651 | ||
652 | if min_ttl is not None: | |
653 | ns_rrset.update_ttl(min_ttl) | |
654 | ||
655 | # cache the NS RRset | |
656 | self.cache_put(ns_rrset.name, dns.rdatatype.NS, ns_rrset, self.SRC_NONAUTH_AUTH, rcode, None, None) | |
657 | break | |
658 | ||
659 | elif response.is_authoritative(): | |
660 | terminal = True | |
661 | a_rrsets = {} | |
662 | min_ttl = None | |
663 | ||
664 | # if response is authoritative (and not a referral), then we return it | |
665 | try: | |
666 | ns_rrset = [x for x in response.message.answer + response.message.authority if qname.is_subdomain(x.name) and x.rdtype == dns.rdatatype.NS][0] | |
667 | except IndexError: | |
668 | pass | |
669 | else: | |
670 | ||
613 | 671 | ns_names = response.ns_ip_mapping_from_additional(ns_rrset.name, bailiwick) |
614 | 672 | for ns_name in ns_names: |
615 | 673 | if not ns_names[ns_name]: |
634 | 692 | if min_ttl is not None: |
635 | 693 | ns_rrset.update_ttl(min_ttl) |
636 | 694 | |
637 | # cache the NS RRset | |
638 | self.cache_put(ns_rrset.name, dns.rdatatype.NS, ns_rrset, self.SRC_NONAUTH_AUTH, rcode, None, None) | |
639 | break | |
640 | ||
641 | elif response.is_authoritative(): | |
642 | terminal = True | |
643 | a_rrsets = {} | |
644 | min_ttl = None | |
645 | ||
646 | # if response is authoritative (and not a referral), then we return it | |
647 | try: | |
648 | ns_rrset = [x for x in response.message.answer + response.message.authority if qname.is_subdomain(x.name) and x.rdtype == dns.rdatatype.NS][0] | |
649 | except IndexError: | |
650 | pass | |
651 | else: | |
652 | ||
653 | ns_names = response.ns_ip_mapping_from_additional(ns_rrset.name, bailiwick) | |
654 | for ns_name in ns_names: | |
655 | if not ns_names[ns_name]: | |
656 | ns_names[ns_name] = None | |
657 | else: # name is in bailiwick | |
658 | for a_rdtype in (dns.rdatatype.A, dns.rdatatype.AAAA): | |
659 | try: | |
660 | a_rrsets[a_rdtype] = response.message.find_rrset(response.message.additional, ns_name, a_rdtype, dns.rdataclass.IN) | |
661 | except KeyError: | |
662 | pass | |
663 | else: | |
664 | if min_ttl is None or a_rrsets[a_rdtype].ttl < min_ttl: | |
665 | min_ttl = a_rrsets[a_rdtype].ttl | |
666 | ||
667 | for a_rdtype in (dns.rdatatype.A, dns.rdatatype.AAAA): | |
668 | if a_rdtype in a_rrsets: | |
669 | a_rrsets[a_rdtype].update_ttl(min_ttl) | |
670 | self.cache_put(ns_name, a_rdtype, a_rrsets[a_rdtype], self.SRC_ADDITIONAL, dns.rcode.NOERROR, None, None) | |
671 | else: | |
672 | self.cache_put(ns_name, a_rdtype, None, self.SRC_ADDITIONAL, dns.rcode.NOERROR, None, min_ttl) | |
673 | ||
674 | if min_ttl is not None: | |
675 | ns_rrset.update_ttl(min_ttl) | |
676 | ||
677 | self.cache_put(ns_rrset.name, dns.rdatatype.NS, ns_rrset, self.SRC_AUTH_AUTH, rcode, None, None) | |
678 | ||
679 | if ret[-1] == None: | |
680 | self.cache_put(qname, rdtype, None, self.SRC_AUTH_ANS, rcode, soa_rrset, None) | |
681 | ||
682 | else: | |
683 | for rrset in ret: | |
684 | self.cache_put(rrset.name, rrset.rdtype, rrset, self.SRC_AUTH_ANS, rcode, None, None) | |
685 | ||
686 | if ret[-1].rdtype == dns.rdatatype.CNAME: | |
687 | ret += self._query(ret[-1][0].target, rdtype, rdclass, level + 1, self.SRC_NONAUTH_ANS) | |
688 | terminal = False | |
689 | ||
690 | if terminal: | |
691 | ret.append(rcode) | |
692 | return ret | |
695 | self.cache_put(ns_rrset.name, dns.rdatatype.NS, ns_rrset, self.SRC_AUTH_AUTH, rcode, None, None) | |
696 | ||
697 | if ret[-1] is None: | |
698 | self.cache_put(qname, rdtype, None, self.SRC_AUTH_ANS, rcode, soa_rrset, None) | |
699 | ||
700 | else: | |
701 | for rrset in ret: | |
702 | self.cache_put(rrset.name, rrset.rdtype, rrset, self.SRC_AUTH_ANS, rcode, None, None) | |
703 | ||
704 | if ret[-1].rdtype == dns.rdatatype.CNAME: | |
705 | ret += self._query(ret[-1][0].target, rdtype, rdclass, level + 1, self.SRC_NONAUTH_ANS) | |
706 | terminal = False | |
707 | ||
708 | if terminal: | |
709 | ret.append(rcode) | |
710 | return ret | |
693 | 711 | |
694 | 712 | # if referral, then break |
695 | 713 | if is_referral: |
0 | 0 | # |
1 | 1 | # This file is a part of DNSViz, a tool suite for DNS/DNSSEC monitoring, |
2 | # analysis, and visualization. This file (or some portion thereof) is a | |
3 | # derivative work authored by VeriSign, Inc., and created in 2014, based on | |
4 | # code originally developed at Sandia National Laboratories. | |
2 | # analysis, and visualization. | |
5 | 3 | # Created by Casey Deccio (casey@deccio.net) |
6 | 4 | # |
7 | 5 | # Copyright 2012-2014 Sandia Corporation. Under the terms of Contract |
9 | 7 | # certain rights in this software. |
10 | 8 | # |
11 | 9 | # Copyright 2014-2016 VeriSign, Inc. |
10 | # | |
11 | # Copyright 2016-2019 Casey Deccio | |
12 | 12 | # |
13 | 13 | # DNSViz is free software; you can redistribute it and/or modify |
14 | 14 | # it under the terms of the GNU General Public License as published by |
27 | 27 | from __future__ import unicode_literals |
28 | 28 | |
29 | 29 | import base64 |
30 | import binascii | |
31 | import copy | |
30 | 32 | import errno |
31 | import cgi | |
32 | 33 | import codecs |
33 | 34 | import datetime |
34 | 35 | import hashlib |
44 | 45 | except ImportError: |
45 | 46 | from ordereddict import OrderedDict |
46 | 47 | |
48 | # python3/python2 dual compatibility | |
49 | try: | |
50 | from html import escape | |
51 | except ImportError: | |
52 | from cgi import escape | |
53 | ||
47 | 54 | import dns.flags, dns.message, dns.rcode, dns.rdataclass, dns.rdatatype, dns.rrset |
48 | 55 | |
49 | 56 | from . import base32 |
56 | 63 | class DNSResponse: |
57 | 64 | '''A DNS response, including meta information''' |
58 | 65 | |
59 | def __init__(self, message, msg_size, error, errno1, history, response_time, query, review_history=True): | |
66 | def __init__(self, message, msg_size, error, errno1, history, response_time, query, server_cookie, server_cookie_status, review_history=True): | |
60 | 67 | self.message = message |
61 | 68 | self.msg_size = msg_size |
62 | 69 | self.error = error |
65 | 72 | self.response_time = response_time |
66 | 73 | |
67 | 74 | self.query = query |
75 | self.server_cookie = server_cookie | |
76 | self.server_cookie_status = server_cookie_status | |
68 | 77 | |
69 | 78 | self.effective_flags = None |
70 | 79 | self.effective_edns = None |
72 | 81 | self.effective_edns_flags = None |
73 | 82 | self.effective_edns_options = None |
74 | 83 | self.effective_tcp = None |
84 | self.effective_server_cookie_status = None | |
75 | 85 | |
76 | 86 | self.udp_attempted = None |
77 | 87 | self.udp_responsive = None |
92 | 102 | def __repr__(self): |
93 | 103 | return '<%s: "%s">' % (self.__class__.__name__, str(self)) |
94 | 104 | |
105 | @classmethod | |
106 | def _query_tag_bind(cls, tcp, flags, edns, edns_flags, edns_max_udp_payload, edns_options, qname): | |
107 | s = [] | |
108 | if flags & dns.flags.RD: | |
109 | s.append('+') | |
110 | else: | |
111 | s.append('-') | |
112 | if edns >= 0: | |
113 | s.append('E(%d)' % (edns)) | |
114 | if tcp: | |
115 | s.append('T') | |
116 | if edns >= 0 and edns_flags & dns.flags.DO: | |
117 | s.append('D') | |
118 | if flags & dns.flags.CD: | |
119 | s.append('C') | |
120 | # Flags other than the ones commonly seen in queries | |
121 | if flags & dns.flags.AD: | |
122 | s.append('A') | |
123 | if flags & dns.flags.AA: | |
124 | s.append('a') | |
125 | if flags & dns.flags.TC: | |
126 | s.append('t') | |
127 | if flags & dns.flags.RA: | |
128 | s.append('r') | |
129 | if edns >= 0: | |
130 | # EDNS max UDP payload | |
131 | s.append('P(%d)' % edns_max_udp_payload) | |
132 | # EDNS flags other than DO | |
133 | if edns_flags & ~dns.flags.DO: | |
134 | s.append('F(0x%x)' % edns_flags) | |
135 | # other options | |
136 | for opt in edns_options: | |
137 | if opt.otype == 3: | |
138 | # NSID | |
139 | s.append('N') | |
140 | elif opt.otype == 8: | |
141 | # EDNS Client Subnet | |
142 | s.append('s') | |
143 | elif opt.otype == 10: | |
144 | # DNS cookies | |
145 | s.append('K') | |
146 | if qname.to_text() != qname.to_text().lower(): | |
147 | s.append('X') | |
148 | return s | |
149 | ||
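The mnemonics built by `_query_tag_bind` encode each query property as a short symbol. A standalone illustration of the core idea, using the standard DNS header bit values directly so the example runs without dnspython (which provides them as `dns.flags.RD` and `dns.flags.DO`):

```python
RD = 0x0100   # recursion desired (DNS header flag)
DO = 0x8000   # DNSSEC OK (EDNS flags field)

def tag_bits(tcp, flags, edns, edns_flags):
    # Mirror of the first few mnemonics: RD as +/-, EDNS version,
    # TCP transport, and the DO bit.
    s = []
    s.append('+' if flags & RD else '-')
    if edns >= 0:
        s.append('E(%d)' % edns)
    if tcp:
        s.append('T')
    if edns >= 0 and edns_flags & DO:
        s.append('D')
    return ''.join(s)

# a recursive EDNS0 UDP query with DO set tags as '+E(0)D'
assert tag_bits(False, RD, 0, DO) == '+E(0)D'
```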
150 | @classmethod | |
151 | def _query_tag_human(cls, tcp, flags, edns, edns_flags, edns_max_udp_payload, edns_options, qname): | |
152 | s = '' | |
153 | if tcp: | |
154 | s += 'TCP_' | |
155 | else: | |
156 | s += 'UDP_' | |
157 | ||
158 | if flags & dns.flags.RD: | |
159 | s += '+' | |
160 | else: | |
161 | s += '-' | |
162 | if flags & dns.flags.CD: | |
163 | s += 'C' | |
164 | # Flags other than the ones commonly seen in queries | |
165 | if flags & dns.flags.AD: | |
166 | s += 'A' | |
167 | if flags & dns.flags.AA: | |
168 | s += 'a' | |
169 | if flags & dns.flags.TC: | |
170 | s += 't' | |
171 | if flags & dns.flags.RA: | |
172 | s += 'r' | |
173 | s += '_' | |
174 | ||
175 | if edns < 0: | |
176 | s += 'NOEDNS_' | |
177 | else: | |
178 | s += 'EDNS%d_' % (edns) | |
179 | ||
180 | # EDNS max UDP payload | |
181 | s += '%d_' % edns_max_udp_payload | |
182 | ||
183 | if edns_flags & dns.flags.DO: | |
184 | s += 'D' | |
185 | # EDNS flags other than DO | |
186 | if edns_flags & ~dns.flags.DO: | |
187 | s += '%d' % edns_flags | |
188 | s += '_' | |
189 | ||
190 | # other options | |
191 | for opt in edns_options: | |
192 | if opt.otype == 3: | |
193 | # NSID | |
194 | s += 'N' | |
195 | elif opt.otype == 8: | |
196 | # EDNS Client Subnet | |
197 | s += 's' | |
198 | elif opt.otype == 10: | |
199 | # DNS cookies | |
200 | s += 'K' | |
201 | else: | |
202 | # unknown EDNS option | |
203 | s += 'O(%d)' % opt.otype | |
204 | ||
205 | if qname.to_text() != qname.to_text().lower(): | |
206 | s += '_0x20' | |
207 | return s | |
208 | ||
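The `_0x20` suffix (and the `X` mnemonic in the BIND-style tag) marks queries that use DNS 0x20 mixed-case encoding; the test is simply whether the qname differs from its all-lowercase form. Standalone illustration:

```python
def uses_0x20(qname):
    # A qname with any uppercase letters differs from its lowercased
    # form, signalling 0x20 mixed-case encoding.
    return qname != qname.lower()

assert uses_0x20('wWw.ExAmPlE.cOm')
assert not uses_0x20('www.example.com')
```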
209 | def request_cookie_tag(self): | |
210 | from . import query as Q | |
211 | ||
212 | if self.effective_server_cookie_status == Q.DNS_COOKIE_NO_COOKIE: | |
213 | return 'NO_COOKIE' | |
214 | elif self.effective_server_cookie_status == Q.DNS_COOKIE_IMPROPER_LENGTH: | |
215 | return 'MALFORMED_COOKIE' | |
216 | elif self.effective_server_cookie_status == Q.DNS_COOKIE_CLIENT_COOKIE_ONLY: | |
217 | return 'CLIENT_COOKIE_ONLY' | |
218 | elif self.effective_server_cookie_status == Q.DNS_COOKIE_SERVER_COOKIE_FRESH: | |
219 | return 'VALID_SERVER_COOKIE' | |
220 | elif self.effective_server_cookie_status == Q.DNS_COOKIE_SERVER_COOKIE_BAD: | |
221 | return 'INVALID_SERVER_COOKIE' | |
222 | else: | |
223 | raise Exception('Unknown cookie status!') | |
224 | ||
225 | def response_cookie_tag(self): | |
226 | ||
227 | if self.message is None: | |
228 | return 'ERROR' | |
229 | ||
230 | if self.message.edns < 0: | |
231 | return 'NO_EDNS' | |
232 | ||
233 | try: | |
234 | cookie_opt = [o for o in self.message.options if o.otype == 10][0] | |
235 | except IndexError: | |
236 | return 'NO_COOKIE_OPT' | |
237 | ||
238 | if len(cookie_opt.data) < 8 or len(cookie_opt.data) > 40: | |
239 | return 'MALFORMED_COOKIE' | |
240 | ||
241 | elif len(cookie_opt.data) == 8: | |
242 | return 'CLIENT_COOKIE_ONLY' | |
243 | ||
244 | else: | |
245 | return 'CLIENT_AND_SERVER_COOKIE' | |
246 | ||
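The length checks in `response_cookie_tag()` above follow the COOKIE option layout from RFC 7873: an 8-byte client cookie, optionally followed by an 8- to 32-byte server cookie, so valid option data is 8 bytes or 16 to 40 bytes. A minimal standalone sketch of the same classification (`classify_cookie` is an illustrative helper, not a DNSViz API):

```python
def classify_cookie(data: bytes) -> str:
    # RFC 7873: the client cookie is exactly 8 bytes; the optional
    # server cookie adds 8 to 32 more. Mirror the checks used above:
    # anything shorter than 8 or longer than 40 bytes is malformed.
    if len(data) < 8 or len(data) > 40:
        return 'MALFORMED_COOKIE'
    if len(data) == 8:
        return 'CLIENT_COOKIE_ONLY'
    return 'CLIENT_AND_SERVER_COOKIE'
```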
95 | 247 | def initial_query_tag(self): |
96 | s = '' | |
97 | if self.query.tcp: | |
98 | s += 'TCP_' | |
99 | else: | |
100 | s += 'UDP_' | |
101 | s += '%d_' % self.query.flags | |
102 | if self.query.edns < 0: | |
103 | s += 'NOEDNS' | |
104 | else: | |
105 | s += 'EDNS%d_%d_%d' % (self.query.edns, (self.query.edns_flags & 0xffff), self.query.edns_max_udp_payload) | |
106 | for opt in self.query.edns_options: | |
107 | s += '_%d' % opt.otype | |
108 | return s | |
248 | return ''.join(self._query_tag_human(self.query.tcp, self.query.flags, self.query.edns, self.query.edns_flags, self.query.edns_max_udp_payload, self.query.edns_options, self.query.qname)) | |
109 | 249 | |
110 | 250 | def effective_query_tag(self): |
111 | s = '' | |
112 | if self.effective_tcp: | |
113 | s += 'TCP_' | |
114 | else: | |
115 | s += 'UDP_' | |
116 | s += '%d_' % self.effective_flags | |
117 | if self.effective_edns < 0: | |
118 | s += 'NOEDNS' | |
119 | else: | |
120 | s += 'EDNS%d_%d_%d' % (self.effective_edns, (self.effective_edns_flags & 0xffff), self.effective_edns_max_udp_payload) | |
121 | for opt in self.effective_edns_options: | |
122 | s += '_%d' % opt.otype | |
123 | return s | |
251 | return ''.join(self._query_tag_human(self.effective_tcp, self.effective_flags, self.effective_edns, self.effective_edns_flags, self.query.edns_max_udp_payload, self.effective_edns_options, self.query.qname)) | |
124 | 252 | |
125 | 253 | def section_rr_count(self, section): |
126 | 254 | if self.message is None: |
151 | 279 | t += retry.response_time |
152 | 280 | return t |
153 | 281 | |
282 | def get_cookie_opt(self): | |
283 | if self.message is None: | |
284 | return None | |
285 | try: | |
286 | return [o for o in self.message.options if o.otype == 10][0] | |
287 | except IndexError: | |
288 | return None | |
289 | ||
290 | def get_server_cookie(self): | |
291 | cookie_opt = self.get_cookie_opt() | |
292 | if cookie_opt is not None and len(cookie_opt.data) > 8: | |
293 | return cookie_opt.data[8:] | |
294 | return None | |
295 | ||
154 | 296 | def copy(self): |
155 | clone = DNSResponse(self.message, self.msg_size, self.error, self.errno, self.history, self.response_time, self.query, review_history=False) | |
156 | clone.set_effective_request_options(self.effective_flags, self.effective_edns, self.effective_edns_max_udp_payload, self.effective_edns_flags, self.effective_edns_options, self.effective_tcp) | |
297 | clone = DNSResponse(self.message, self.msg_size, self.error, self.errno, self.history, self.response_time, self.query, self.server_cookie, self.server_cookie_status, review_history=False) | |
298 | clone.set_effective_request_options(self.effective_flags, self.effective_edns, self.effective_edns_max_udp_payload, self.effective_edns_flags, self.effective_edns_options, self.effective_tcp, self.effective_server_cookie_status) | |
157 | 299 | clone.set_responsiveness(self.udp_attempted, self.udp_responsive, self.tcp_attempted, self.tcp_responsive, self.responsive_cause_index, self.responsive_cause_index_tcp) |
158 | 300 | return clone |
159 | 301 | |
160 | def set_effective_request_options(self, flags, edns, edns_max_udp_payload, edns_flags, edns_options, effective_tcp): | |
302 | def set_effective_request_options(self, flags, edns, edns_max_udp_payload, edns_flags, edns_options, tcp, server_cookie_status): | |
161 | 303 | self.effective_flags = flags |
162 | 304 | self.effective_edns = edns |
163 | 305 | self.effective_edns_max_udp_payload = edns_max_udp_payload |
164 | 306 | self.effective_edns_flags = edns_flags |
165 | 307 | self.effective_edns_options = edns_options |
166 | self.effective_tcp = effective_tcp | |
308 | self.effective_tcp = tcp | |
309 | self.effective_server_cookie_status = server_cookie_status | |
167 | 310 | |
168 | 311 | def set_responsiveness(self, udp_attempted, udp_responsive, tcp_attempted, tcp_responsive, responsive_cause_index, responsive_cause_index_tcp): |
169 | 312 | self.udp_attempted = udp_attempted |
180 | 323 | edns = self.query.edns |
181 | 324 | edns_max_udp_payload = self.query.edns_max_udp_payload |
182 | 325 | edns_flags = self.query.edns_flags |
183 | edns_options = self.query.edns_options[:] | |
326 | edns_options = copy.deepcopy(self.query.edns_options) | |
327 | server_cookie_status = self.server_cookie_status | |
184 | 328 | |
185 | 329 | # mark whether TCP or UDP was attempted initially |
186 | 330 | tcp_attempted = tcp = self.query.tcp |
256 | 400 | filtered_options = [x for x in edns_options if retry.action_arg == x.otype] |
257 | 401 | if filtered_options: |
258 | 402 | edns_options.remove(filtered_options[0]) |
403 | # If COOKIE option was removed, then reset | |
404 | # server_cookie_status | |
405 | if filtered_options[0].otype == 10: | |
406 | server_cookie_status = Q.DNS_COOKIE_NO_COOKIE | |
259 | 407 | elif retry.action == Q.RETRY_ACTION_CHANGE_SPORT: |
260 | 408 | pass |
261 | 409 | elif retry.action == Q.RETRY_ACTION_CHANGE_EDNS_VERSION: |
262 | 410 | edns = retry.action_arg |
411 | elif retry.action == Q.RETRY_ACTION_UPDATE_DNS_COOKIE: | |
412 | server_cookie_status = Q.DNS_COOKIE_SERVER_COOKIE_FRESH | |
263 | 413 | |
264 | 414 | prev_index = i |
265 | 415 | |
285 | 435 | responsive_cause_index = prev_index |
286 | 436 | responsive_cause_index_tcp = tcp |
287 | 437 | |
288 | self.set_effective_request_options(flags, edns, edns_max_udp_payload, edns_flags, edns_options, tcp) | |
438 | # If EDNS was effectively disabled, reset EDNS options | |
439 | if edns < 0: | |
440 | edns_max_udp_payload = None | |
441 | edns_flags = 0 | |
442 | edns_options = [] | |
443 | server_cookie_status = Q.DNS_COOKIE_NO_COOKIE | |
444 | ||
445 | self.set_effective_request_options(flags, edns, edns_max_udp_payload, edns_flags, edns_options, tcp, server_cookie_status) | |
289 | 446 | self.set_responsiveness(udp_attempted, udp_responsive, tcp_attempted, tcp_responsive, responsive_cause_index, responsive_cause_index_tcp) |
290 | 447 | |
291 | 448 | def recursion_desired(self): |
333 | 490 | |
334 | 491 | return self.message is not None and bool(self.message.flags & dns.flags.AA) |
335 | 492 | |
336 | def is_referral(self, qname, rdtype, bailiwick, proper=False): | |
493 | def is_referral(self, qname, rdtype, rdclass, bailiwick, proper=False): | |
337 | 494 | '''Return True if this response yields a referral for the queried |
338 | 495 | name.''' |
339 | 496 | |
349 | 506 | return False |
350 | 507 | # if the name exists in the answer section with the requested rdtype or |
351 | 508 | # CNAME, then it can't be a referral |
352 | if [x for x in self.message.answer if x.name == qname and x.rdtype in (rdtype, dns.rdatatype.CNAME)]: | |
509 | if [x for x in self.message.answer if x.name == qname and x.rdtype in (rdtype, dns.rdatatype.CNAME) and x.rdclass == rdclass]: | |
353 | 510 | return False |
354 | 511 | # if an SOA record with the given qname exists, then the server |
355 | 512 | # is authoritative for the name, so it is a referral |
356 | 513 | try: |
357 | self.message.find_rrset(self.message.authority, qname, dns.rdataclass.IN, dns.rdatatype.SOA) | |
514 | self.message.find_rrset(self.message.authority, qname, rdclass, dns.rdatatype.SOA) | |
358 | 515 | return False |
359 | 516 | except KeyError: |
360 | 517 | pass |
361 | 518 | # if proper referral is requested and qname is equal to the owner name of an NS RRset |
362 | 519 | # in the authority, then it is a referral |
363 | 520 | if proper: |
364 | if [x for x in self.message.authority if qname == x.name and x.rdtype == dns.rdatatype.NS]: | |
521 | if [x for x in self.message.authority if qname == x.name and x.rdtype == dns.rdatatype.NS and x.rdclass == rdclass]: | |
365 | 522 | return True |
366 | 523 | # if proper referral is NOT requested, qname is a subdomain of |
367 | 524 | # (including equal to) the owner name of an NS RRset in the authority, and qname is not |
368 | 525 | # equal to bailiwick, then it is a referral |
369 | 526 | else: |
370 | if [x for x in self.message.authority if qname.is_subdomain(x.name) and bailiwick != x.name and x.rdtype == dns.rdatatype.NS]: | |
527 | if [x for x in self.message.authority if qname.is_subdomain(x.name) and bailiwick != x.name and x.rdtype == dns.rdatatype.NS and x.rdclass == rdclass]: | |
371 | 528 | return True |
372 | 529 | return False |
373 | 530 | |
509 | 666 | for o in self.effective_edns_options: |
510 | 667 | s = io.BytesIO() |
511 | 668 | o.to_wire(s) |
512 | d['effective_query_options']['edns_options'].append(base64.b64encode(s.getvalue())) | |
669 | d['effective_query_options']['edns_options'].append((o.otype, binascii.hexlify(s.getvalue()))) | |
513 | 670 | d['effective_query_options']['tcp'] = self.effective_tcp |
514 | 671 | |
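The serialization change above swaps base64 for hex when emitting each effective EDNS option as a (code, data) pair. A hedged sketch of that shape (`serialize_option` is an illustrative helper, not part of DNSViz):

```python
import binascii

def serialize_option(otype: int, wire: bytes):
    # Emit an EDNS option as (option code, hex-encoded option data),
    # matching the (otype, binascii.hexlify(...)) pair built above.
    # Hex is friendlier than base64 for eyeballing and grepping.
    return (otype, binascii.hexlify(wire).decode('ascii'))
```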
515 | 672 | if self.responsive_cause_index is not None: |
542 | 699 | return d |
543 | 700 | |
544 | 701 | @classmethod |
545 | def deserialize(cls, d, query): | |
702 | def deserialize(cls, d, query, server_cookie, server_cookie_status): | |
546 | 703 | from . import query as Q |
547 | 704 | |
548 | 705 | if 'msg_size' in d: |
589 | 746 | history = [] |
590 | 747 | for retry in d['history']: |
591 | 748 | history.append(Q.DNSQueryRetryAttempt.deserialize(retry)) |
592 | return DNSResponse(message, msg_size, error, errno1, history, response_time, query) | |
749 | return DNSResponse(message, msg_size, error, errno1, history, response_time, query, server_cookie, server_cookie_status) | |
593 | 750 | |
594 | 751 | class DNSResponseComponent(object): |
595 | 752 | def __init__(self): |
730 | 887 | elif rdata.algorithm in (13,14): |
731 | 888 | return len(key_str)<<3 |
732 | 889 | |
890 | # EDDSA keys | |
891 | elif rdata.algorithm in (15,16): | |
892 | return len(key_str)<<3 | |
893 | ||
733 | 894 | # other keys - just guess, based on the length of the raw key material |
734 | 895 | else: |
735 | 896 | return len(key_str)<<3 |
760 | 921 | d = OrderedDict() |
761 | 922 | |
762 | 923 | if html_format: |
763 | formatter = lambda x: cgi.escape(x, True) | |
924 | formatter = lambda x: escape(x, True) | |
764 | 925 | else: |
765 | 926 | formatter = lambda x: x |
766 | 927 | |
872 | 1033 | rdata_wire = rdata.to_digestable() |
873 | 1034 | rdata_len = len(rdata_wire) |
874 | 1035 | |
875 | stuff = struct.pack("!HHIH", rrset.rdtype, rrset.rdclass, | |
1036 | stuff = struct.pack(b'!HHIH', rrset.rdtype, rrset.rdclass, | |
876 | 1037 | ttl, rdata_len) |
877 | 1038 | s += name_wire + stuff + rdata_wire |
878 | 1039 | |
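The `struct.pack` call above builds the fixed 10-byte RR header that sits between the owner name and the RDATA in wire form. The same construction in isolation (`rr_wire` is a standalone sketch, not DNSViz code):

```python
import struct

def rr_wire(name_wire: bytes, rdtype: int, rdclass: int,
            ttl: int, rdata: bytes) -> bytes:
    # One RR in wire form: owner name, then TYPE, CLASS, TTL, and
    # RDLENGTH packed big-endian ('!HHIH' -> 2+2+4+2 bytes), then RDATA.
    header = struct.pack('!HHIH', rdtype, rdclass, ttl, len(rdata))
    return name_wire + header + rdata
```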
881 | 1042 | def get_rrsig_info(self, rrsig): |
882 | 1043 | return self.rrsig_info[rrsig] |
883 | 1044 | |
884 | def update_rrsig_info(self, server, client, response, section, is_referral): | |
1045 | def update_rrsig_info(self, server, client, response, section, rdclass, is_referral): | |
885 | 1046 | try: |
886 | rrsig_rrset = response.message.find_rrset(section, self.rrset.name, dns.rdataclass.IN, dns.rdatatype.RRSIG, self.rrset.rdtype) | |
1047 | rrsig_rrset = response.message.find_rrset(section, self.rrset.name, rdclass, dns.rdatatype.RRSIG, self.rrset.rdtype) | |
887 | 1048 | for rrsig in rrsig_rrset: |
888 | self.create_or_update_rrsig_info(rrsig, rrsig_rrset.ttl, server, client, response, is_referral) | |
1049 | self.create_or_update_rrsig_info(rrsig, rrsig_rrset.ttl, server, client, response, rdclass, is_referral) | |
889 | 1050 | except KeyError: |
890 | 1051 | pass |
891 | 1052 | |
892 | 1053 | if self.dname_info is not None: |
893 | self.dname_info.update_rrsig_info(server, client, response, section, is_referral) | |
894 | ||
895 | def create_or_update_rrsig_info(self, rrsig, ttl, server, client, response, is_referral): | |
1054 | self.dname_info.update_rrsig_info(server, client, response, section, rdclass, is_referral) | |
1055 | ||
1056 | def create_or_update_rrsig_info(self, rrsig, ttl, server, client, response, rdclass, is_referral): | |
896 | 1057 | try: |
897 | 1058 | rrsig_info = self.get_rrsig_info(rrsig) |
898 | 1059 | except KeyError: |
899 | 1060 | rrsig_info = self.rrsig_info[rrsig] = RDataMeta(self.rrset.name, ttl, dns.rdatatype.RRSIG, rrsig) |
900 | 1061 | rrsig_info.add_server_client(server, client, response) |
901 | self.set_wildcard_info(rrsig, server, client, response, is_referral) | |
902 | ||
903 | def create_or_update_cname_from_dname_info(self, synthesized_cname_info, server, client, response): | |
904 | return self.insert_into_list(synthesized_cname_info, self.cname_info_from_dname, server, client, response) | |
1062 | self.set_wildcard_info(rrsig, server, client, response, rdclass, is_referral) | |
1063 | ||
1064 | def create_or_update_cname_from_dname_info(self, synthesized_cname_info, server, client, response, rdclass): | |
1065 | return self.insert_into_list(synthesized_cname_info, self.cname_info_from_dname, server, client, response, rdclass) | |
905 | 1066 | |
906 | 1067 | def is_wildcard(self, rrsig): |
907 | 1068 | if self.rrset.name[0] == '*': |
913 | 1074 | return dns.name.Name(('*',)+self.rrset.name.labels[-(rrsig.labels+1):]) |
914 | 1075 | return self.rrset.name |
915 | 1076 | |
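`reduce_wildcard()` above rebuilds a wildcard owner name when the RRSIG labels field is smaller than the owner name's label count (RFC 4035, section 5.3.1). The same arithmetic on plain label tuples, without dnspython (a standalone sketch, not DNSViz code):

```python
def reduce_wildcard(owner_labels, rrsig_labels):
    # owner_labels excludes the root label, e.g. ('a', 'b', 'example').
    # If the RRSIG covers fewer labels than the owner name has, the
    # RRset was synthesized from a wildcard: reconstruct the wildcard
    # owner as '*' plus the rightmost rrsig_labels labels.
    if len(owner_labels) > rrsig_labels:
        return ('*',) + owner_labels[-rrsig_labels:]
    return owner_labels
```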
916 | def set_wildcard_info(self, rrsig, server, client, response, is_referral): | |
1077 | def set_wildcard_info(self, rrsig, server, client, response, rdclass, is_referral): | |
917 | 1078 | if self.is_wildcard(rrsig): |
918 | 1079 | wildcard_name = self.reduce_wildcard(rrsig) |
919 | 1080 | if wildcard_name not in self.wildcard_info: |
920 | 1081 | self.wildcard_info[wildcard_name] = NegativeResponseInfo(self.rrset.name, self.rrset.rdtype, self.ttl_cmp) |
921 | 1082 | self.wildcard_info[wildcard_name].add_server_client(server, client, response) |
922 | self.wildcard_info[wildcard_name].create_or_update_nsec_info(server, client, response, is_referral) | |
1083 | self.wildcard_info[wildcard_name].create_or_update_nsec_info(server, client, response, rdclass, is_referral) | |
923 | 1084 | |
924 | 1085 | def message_for_rrsig(self, rrsig): |
925 | 1086 | |
940 | 1101 | d = OrderedDict() |
941 | 1102 | |
942 | 1103 | if html_format: |
943 | formatter = lambda x: cgi.escape(x, True) | |
1104 | formatter = lambda x: escape(x, True) | |
944 | 1105 | else: |
945 | 1106 | formatter = lambda x: x |
946 | 1107 | |
1005 | 1166 | def __hash__(self): |
1006 | 1167 | return hash(id(self)) |
1007 | 1168 | |
1008 | def create_or_update_soa_info(self, server, client, response, is_referral): | |
1009 | soa_rrsets = [x for x in response.message.authority if x.rdtype == dns.rdatatype.SOA and self.qname.is_subdomain(x.name)] | |
1169 | def create_or_update_soa_info(self, server, client, response, rdclass, is_referral): | |
1170 | soa_rrsets = [x for x in response.message.authority if x.rdtype == dns.rdatatype.SOA and x.rdclass == rdclass and self.qname.is_subdomain(x.name)] | |
1010 | 1171 | if not soa_rrsets: |
1011 | soa_rrsets = [x for x in response.message.authority if x.rdtype == dns.rdatatype.SOA] | |
1172 | soa_rrsets = [x for x in response.message.authority if x.rdtype == dns.rdatatype.SOA and x.rdclass == rdclass] | |
1012 | 1173 | soa_rrsets.sort(reverse=True) |
1013 | 1174 | try: |
1014 | 1175 | soa_rrset = soa_rrsets[0] |
1020 | 1181 | |
1021 | 1182 | soa_rrset_info = RRsetInfo(soa_rrset, self.ttl_cmp) |
1022 | 1183 | soa_rrset_info = self.insert_into_list(soa_rrset_info, self.soa_rrset_info, server, client, response) |
1023 | soa_rrset_info.update_rrsig_info(server, client, response, response.message.authority, is_referral) | |
1184 | soa_rrset_info.update_rrsig_info(server, client, response, response.message.authority, rdclass, is_referral) | |
1024 | 1185 | |
1025 | 1186 | return soa_rrset_info |
1026 | 1187 | |
1027 | def create_or_update_nsec_info(self, server, client, response, is_referral): | |
1188 | def create_or_update_nsec_info(self, server, client, response, rdclass, is_referral): | |
1028 | 1189 | for rdtype in dns.rdatatype.NSEC, dns.rdatatype.NSEC3: |
1029 | nsec_rrsets = [x for x in response.message.authority if x.rdtype == rdtype] | |
1190 | nsec_rrsets = [x for x in response.message.authority if x.rdtype == rdtype and x.rdclass == rdclass] | |
1030 | 1191 | if not nsec_rrsets: |
1031 | 1192 | continue |
1032 | 1193 | |
1034 | 1195 | nsec_set_info = self.insert_into_list(nsec_set_info, self.nsec_set_info, server, client, response) |
1035 | 1196 | |
1036 | 1197 | for name in nsec_set_info.rrsets: |
1037 | nsec_set_info.rrsets[name].update_rrsig_info(server, client, response, response.message.authority, is_referral) | |
1198 | nsec_set_info.rrsets[name].update_rrsig_info(server, client, response, response.message.authority, rdclass, is_referral) | |
1038 | 1199 | |
1039 | 1200 | class NSECSet(DNSResponseComponent): |
1040 | 1201 | def __init__(self, rrsets, referral, ttl_cmp): |
1102 | 1263 | for name, rrset_info in self.rrsets.items(): |
1103 | 1264 | rrset_info.add_server_client(server, client, response) |
1104 | 1265 | |
1105 | def create_or_update_rrsig_info(self, name, rrsig, ttl, server, client, response, is_referral): | |
1106 | self.rrsets[name].create_or_update_rrsig_info(rrsig, ttl, server, client, response, is_referral) | |
1266 | def create_or_update_rrsig_info(self, name, rrsig, ttl, server, client, response, rdclass, is_referral): | |
1267 | self.rrsets[name].create_or_update_rrsig_info(rrsig, ttl, server, client, response, rdclass, is_referral) | |
1107 | 1268 | |
1108 | 1269 | def is_valid_nsec3_name(self, nsec_name, algorithm): |
1109 | 1270 | # python3/python2 dual compatibility |
4 | 4 | # |
5 | 5 | # Copyright 2014-2016 VeriSign, Inc. |
6 | 6 | # |
7 | # Copyright 2016 Casey Deccio. | |
7 | # Copyright 2016-2019 Casey Deccio | |
8 | 8 | # |
9 | 9 | # DNSViz is free software; you can redistribute it and/or modify |
10 | 10 | # it under the terms of the GNU General Public License as published by |
27 | 27 | import codecs |
28 | 28 | import errno |
29 | 29 | import fcntl |
30 | import io | |
30 | 31 | import json |
31 | 32 | import os |
32 | 33 | import random |
35 | 36 | import socket |
36 | 37 | import ssl |
37 | 38 | import struct |
39 | import subprocess | |
38 | 40 | import threading |
39 | 41 | import time |
40 | 42 | |
75 | 77 | CHUNK_SIZE_RE = re.compile(r'^(?P<length>[0-9a-fA-F]+)(;[^\r\n]+)?(\r\n|\r|\n)') |
76 | 78 | CRLF_START_RE = re.compile(r'^(\r\n|\n|\r)') |
77 | 79 | |
80 | class SocketWrapper(object): | |
81 | def __init__(self): | |
82 | raise NotImplementedError | |
83 | ||
84 | class Socket(SocketWrapper): | |
85 | def __init__(self, sock): | |
86 | self.sock = sock | |
87 | self.reader = sock | |
88 | self.writer = sock | |
89 | self.reader_fd = sock.fileno() | |
90 | self.writer_fd = sock.fileno() | |
91 | self.family = sock.family | |
92 | self.type = sock.type | |
93 | self.lock = None | |
94 | ||
95 | def recv(self, n): | |
96 | return self.sock.recv(n) | |
97 | ||
98 | def send(self, s): | |
99 | return self.sock.send(s) | |
100 | ||
101 | def setblocking(self, b): | |
102 | self.sock.setblocking(b) | |
103 | ||
104 | def bind(self, a): | |
105 | self.sock.bind(a) | |
106 | ||
107 | def connect(self, a): | |
108 | self.sock.connect(a) | |
109 | ||
110 | def getsockname(self): | |
111 | return self.sock.getsockname() | |
112 | ||
113 | def close(self): | |
114 | self.sock.close() | |
115 | ||
116 | class ReaderWriter(SocketWrapper): | |
117 | def __init__(self, reader, writer, proc=None): | |
118 | self.reader = reader | |
119 | self.writer = writer | |
120 | self.reader_fd = self.reader.fileno() | |
121 | self.writer_fd = self.writer.fileno() | |
122 | self.family = socket.AF_INET | |
123 | self.type = socket.SOCK_STREAM | |
124 | self.lock = None | |
125 | self.proc = proc | |
126 | ||
127 | def recv(self, n): | |
128 | return os.read(self.reader_fd, n) | |
129 | ||
130 | def send(self, s): | |
131 | return os.write(self.writer_fd, s) | |
132 | ||
133 | def setblocking(self, b): | |
134 | if not b: | |
135 | fcntl.fcntl(self.reader_fd, fcntl.F_SETFL, os.O_NONBLOCK) | |
136 | fcntl.fcntl(self.writer_fd, fcntl.F_SETFL, os.O_NONBLOCK) | |
137 | ||
138 | def bind(self, a): | |
139 | pass | |
140 | ||
141 | def connect(self, a): | |
142 | pass | |
143 | ||
144 | def getsockname(self): | |
145 | return ('localhost', 0) | |
146 | ||
147 | def close(self): | |
148 | pass | |
149 | ||
78 | 150 | class RemoteQueryTransportError(Exception): |
79 | 151 | pass |
80 | 152 | |
81 | 153 | class TransportMetaDeserializationError(Exception): |
154 | pass | |
155 | ||
156 | class SocketInUse(Exception): | |
82 | 157 | pass |
83 | 158 | |
84 | 159 | class DNSQueryTransportMeta(object): |
244 | 319 | self.end_time = time.time() |
245 | 320 | self.start_time = self.end_time - (elapsed/1000.0) |
246 | 321 | |
322 | QTH_MODE_WRITE_READ = 0 | |
323 | QTH_MODE_WRITE = 1 | |
324 | QTH_MODE_READ = 2 | |
325 | ||
247 | 326 | class DNSQueryTransportHandler(object): |
248 | 327 | singleton = False |
249 | 328 | allow_loopback_query = False |
250 | 329 | allow_private_query = False |
251 | 330 | timeout_baseline = 0.0 |
252 | ||
253 | def __init__(self, processed_queue=None, factory=None): | |
254 | self.req = None | |
255 | self.req_len = None | |
256 | self.req_index = None | |
257 | ||
258 | self.res = None | |
259 | self.res_len = None | |
260 | self.res_buf = None | |
261 | self.res_index = None | |
331 | mode = QTH_MODE_WRITE_READ | |
332 | ||
333 | def __init__(self, sock=None, recycle_sock=False, processed_queue=None, factory=None): | |
334 | self.msg_send = None | |
335 | self.msg_send_len = None | |
336 | self.msg_send_index = None | |
337 | ||
338 | self.msg_recv = None | |
339 | self.msg_recv_len = None | |
340 | self.msg_recv_buf = None | |
341 | self.msg_recv_index = None | |
262 | 342 | self.err = None |
263 | 343 | |
264 | 344 | self.dst = None |
272 | 352 | self._processed_queue = processed_queue |
273 | 353 | self.factory = factory |
274 | 354 | |
355 | self._sock = sock | |
356 | self.sock = None | |
357 | self.recycle_sock = recycle_sock | |
358 | ||
275 | 359 | self.expiration = None |
276 | self.sock = None | |
277 | self.sockfd = None | |
278 | 360 | self.start_time = None |
279 | 361 | self.end_time = None |
280 | 362 | |
295 | 377 | self.src = None |
296 | 378 | |
297 | 379 | def finalize(self): |
298 | assert self.res is not None or self.err is not None, 'Query must have been executed before finalize() can be called' | |
380 | assert self.mode in (QTH_MODE_WRITE_READ, QTH_MODE_READ), 'finalize() can only be called for modes QTH_MODE_READ and QTH_MODE_WRITE_READ' | |
381 | assert self.msg_recv is not None or self.err is not None, 'Query must have been executed before finalize() can be called' | |
299 | 382 | |
300 | 383 | self._check_source() |
301 | 384 | |
302 | 385 | # clear out any partial responses if there was an error |
303 | 386 | if self.err is not None: |
304 | self.res = None | |
305 | ||
387 | self.msg_recv = None | |
388 | ||
389 | if self.factory is not None: | |
390 | if self.recycle_sock: | |
391 | # if recycle_sock is requested, add the sock to the factory. | |
392 | # Then add the lock to the sock to prevent concurrent use of | |
393 | # the socket. | |
394 | if self.sock is not None and self._sock is None: | |
395 | self.factory.lock.acquire() | |
396 | try: | |
397 | if self.factory.sock is None: | |
398 | self.factory.sock = self.sock | |
399 | self.factory.sock.lock = self.factory.lock | |
400 | finally: | |
401 | self.factory.lock.release() | |
402 | ||
403 | elif self.sock is not None and self.sock is self.factory.sock: | |
404 | # if recycle_sock is not requested, and this sock is in the | |
405 | # factory, then remove it. | |
406 | self.factory.lock.acquire() | |
407 | try: | |
408 | if self.sock is self.factory.sock: | |
409 | self.factory.sock = None | |
410 | finally: | |
411 | self.factory.lock.release() | |
412 | ||
413 | #TODO change this and the overriding child methods to init_msg_send | |
306 | 414 | def init_req(self): |
307 | 415 | raise NotImplementedError |
308 | 416 | |
309 | def _init_res_buffer(self): | |
310 | self.res = b'' | |
311 | self.res_buf = b'' | |
312 | self.res_index = 0 | |
417 | def _init_msg_recv(self): | |
418 | self.msg_recv = b'' | |
419 | self.msg_recv_buf = b'' | |
420 | self.msg_recv_index = 0 | |
421 | self.msg_recv_len = None | |
313 | 422 | |
314 | 423 | def prepare(self): |
315 | assert self.req is not None, 'Request must be initialized with init_req() before be added before prepare() can be called' | |
424 | if self.mode in (QTH_MODE_WRITE_READ, QTH_MODE_WRITE): | |
425 | assert self.msg_send is not None, 'Request must be initialized with init_req() before prepare() can be called' | |
426 | ||
427 | if self.mode in (QTH_MODE_WRITE_READ, QTH_MODE_READ): | |
428 | self._init_msg_recv() | |
316 | 429 | |
317 | 430 | if self.timeout is None: |
318 | 431 | self.timeout = self.timeout_baseline |
319 | 432 | |
320 | self._init_res_buffer() | |
321 | try: | |
322 | self._create_socket() | |
323 | self._configure_socket() | |
324 | self._bind_socket() | |
325 | self._set_start_time() | |
326 | self._connect_socket() | |
327 | except socket.error as e: | |
328 | self.err = e | |
329 | self.cleanup() | |
433 | if self._sock is not None: | |
434 | # if a pre-existing socket is available for re-use, then use that | |
435 | # instead | |
436 | try: | |
437 | self._reuse_socket() | |
438 | self._set_start_time() | |
439 | except SocketInUse as e: | |
440 | self.err = e | |
441 | else: | |
442 | try: | |
443 | self._create_socket() | |
444 | self._configure_socket() | |
445 | self._bind_socket() | |
446 | self._set_start_time() | |
447 | self._connect_socket() | |
448 | except socket.error as e: | |
449 | self.err = e | |
450 | ||
451 | def _reuse_socket(self): | |
452 | # wait for the lock on the socket | |
453 | if not self._sock.lock.acquire(False): | |
454 | raise SocketInUse() | |
455 | self.sock = self._sock | |
330 | 456 | |
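`_reuse_socket()` above uses a non-blocking acquire so a recycled socket is never contended: if another handler already holds the lock, the caller falls back to opening a fresh socket instead of waiting. The pattern in isolation, assuming a `threading.Lock` guarding the shared socket (`try_claim` is a hypothetical helper):

```python
import threading

class SocketInUse(Exception):
    pass

def try_claim(lock: threading.Lock) -> None:
    # acquire(False) returns immediately; False means another handler
    # holds the socket, so signal the caller to create a new one.
    if not lock.acquire(False):
        raise SocketInUse()
```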
331 | 457 | def _get_af(self): |
332 | 458 | if self.dst.version == 6: |
336 | 462 | |
337 | 463 | def _create_socket(self): |
338 | 464 | af = self._get_af() |
339 | self.sock = socket.socket(af, self.transport_type) | |
340 | self.sockfd = self.sock.fileno() | |
465 | self.sock = Socket(socket.socket(af, self.transport_type)) | |
341 | 466 | |
342 | 467 | def _configure_socket(self): |
343 | 468 | self.sock.setblocking(0) |
393 | 518 | # set end (and start, if necessary) times, as appropriate |
394 | 519 | self._set_end_time() |
395 | 520 | |
396 | self._set_socket_info() | |
397 | ||
398 | 521 | # close socket |
399 | 522 | if self.sock is not None: |
400 | self.sock.close() | |
401 | ||
523 | self._set_socket_info() | |
524 | ||
525 | if not self.recycle_sock: | |
526 | self.sock.close() | |
527 | if self.sock.lock is not None: | |
528 | self.sock.lock.release() | |
402 | 529 | # place in processed queue, if specified |
403 | 530 | if self._processed_queue is not None: |
404 | 531 | self._processed_queue.put(self) |
405 | 532 | |
406 | 533 | def do_write(self): |
407 | 534 | try: |
408 | self.req_index += self.sock.send(self.req[self.req_index:]) | |
409 | if self.req_index >= self.req_len: | |
535 | self.msg_send_index += self.sock.send(self.msg_send[self.msg_send_index:]) | |
536 | if self.msg_send_index >= self.msg_send_len: | |
410 | 537 | return True |
411 | 538 | except socket.error as e: |
412 | 539 | self.err = e |
413 | self.cleanup() | |
414 | 540 | return True |
415 | 541 | |
416 | 542 | def do_read(self): |
426 | 552 | } |
427 | 553 | return d |
428 | 554 | |
555 | def serialize_responses(self): | |
556 | d = { | |
557 | 'version': DNS_TRANSPORT_VERSION, | |
558 | 'responses': [q.serialize_response() for q in self.qtms] | |
559 | } | |
560 | return d | |
561 | ||
429 | 562 | class DNSQueryTransportHandlerDNS(DNSQueryTransportHandler): |
430 | 563 | singleton = True |
431 | 564 | |
436 | 569 | qtm = self.qtms[0] |
437 | 570 | qtm.src = self.src |
438 | 571 | qtm.sport = self.sport |
439 | qtm.res = self.res | |
572 | qtm.res = self.msg_recv | |
440 | 573 | qtm.err = self.err |
441 | 574 | qtm.start_time = self.start_time |
442 | 575 | qtm.end_time = self.end_time |
451 | 584 | self.src = qtm.src |
452 | 585 | self.sport = qtm.sport |
453 | 586 | |
454 | self.req = qtm.req | |
455 | self.req_len = len(qtm.req) | |
456 | self.req_index = 0 | |
587 | self.msg_send = qtm.req | |
588 | self.msg_send_len = len(qtm.req) | |
589 | self.msg_send_index = 0 | |
457 | 590 | |
458 | 591 | # python3/python2 dual compatibility |
459 | if isinstance(self.req, str): | |
592 | if isinstance(self.msg_send, str): | |
460 | 593 | map_func = lambda x: ord(x) |
461 | 594 | else: |
462 | 595 | map_func = lambda x: x |
463 | 596 | |
464 | self._queryid_wire = self.req[:2] | |
597 | self._queryid_wire = self.msg_send[:2] | |
465 | 598 | index = 12 |
466 | while map_func(self.req[index]) != 0: | |
467 | index += map_func(self.req[index]) + 1 | |
599 | while map_func(self.msg_send[index]) != 0: | |
600 | index += map_func(self.msg_send[index]) + 1 | |
468 | 601 | index += 4 |
469 | self._question_wire = self.req[12:index] | |
602 | self._question_wire = self.msg_send[12:index] | |
470 | 603 | |
471 | 604 | if qtm.tcp: |
472 | 605 | self.transport_type = socket.SOCK_STREAM |
473 | self.req = struct.pack(b'!H', self.req_len) + self.req | |
474 | self.req_len += struct.calcsize(b'H') | |
606 | self.msg_send = struct.pack(b'!H', self.msg_send_len) + self.msg_send | |
607 | self.msg_send_len += struct.calcsize(b'H') | |
475 | 608 | else: |
476 | 609 | self.transport_type = socket.SOCK_DGRAM |
477 | 610 | |
478 | def _check_response_consistency(self): | |
479 | if self.require_queryid_match and self.res[:2] != self._queryid_wire: | |
611 | def _check_msg_recv_consistency(self): | |
612 | if self.require_queryid_match and self.msg_recv[:2] != self._queryid_wire: | |
480 | 613 | return False |
481 | 614 | return True |
482 | 615 | |
484 | 617 | # UDP |
485 | 618 | if self.sock.type == socket.SOCK_DGRAM: |
486 | 619 | try: |
487 | self.res = self.sock.recv(65536) | |
488 | if self._check_response_consistency(): | |
489 | self.cleanup() | |
620 | self.msg_recv = self.sock.recv(65536) | |
621 | if self._check_msg_recv_consistency(): | |
490 | 622 | return True |
491 | 623 | else: |
492 | self.res = b'' | |
624 | self.msg_recv = b'' | |
493 | 625 | except socket.error as e: |
494 | 626 | self.err = e |
495 | self.cleanup() | |
496 | 627 | return True |
497 | 628 | |
498 | 629 | # TCP |
499 | 630 | else: |
500 | 631 | try: |
501 | if self.res_len is None: | |
502 | if self.res_buf: | |
632 | if self.msg_recv_len is None: | |
633 | if self.msg_recv_buf: | |
503 | 634 | buf = self.sock.recv(1) |
504 | 635 | else: |
505 | 636 | buf = self.sock.recv(2) |
506 | 637 | if buf == b'': |
507 | 638 | raise EOFError() |
508 | 639 | |
509 | self.res_buf += buf | |
510 | if len(self.res_buf) == 2: | |
511 | self.res_len = struct.unpack(b'!H', self.res_buf)[0] | |
512 | ||
513 | if self.res_len is not None: | |
514 | buf = self.sock.recv(self.res_len - self.res_index) | |
640 | self.msg_recv_buf += buf | |
641 | if len(self.msg_recv_buf) == 2: | |
642 | self.msg_recv_len = struct.unpack(b'!H', self.msg_recv_buf)[0] | |
643 | ||
644 | if self.msg_recv_len is not None: | |
645 | buf = self.sock.recv(self.msg_recv_len - self.msg_recv_index) | |
515 | 646 | if buf == b'': |
516 | 647 | raise EOFError() |
517 | 648 | |
518 | self.res += buf | |
519 | self.res_index = len(self.res) | |
520 | ||
521 | if self.res_index >= self.res_len: | |
522 | self.cleanup() | |
649 | self.msg_recv += buf | |
650 | self.msg_recv_index = len(self.msg_recv) | |
651 | ||
652 | if self.msg_recv_index >= self.msg_recv_len: | |
523 | 653 | return True |
524 | 654 | |
525 | 655 | except (socket.error, EOFError) as e: |
527 | 657 | pass |
528 | 658 | else: |
529 | 659 | self.err = e |
530 | self.cleanup() | |
531 | 660 | return True |
532 | 661 | |
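The TCP branch above reassembles a DNS message from the stream by first reading a two-byte network-order length prefix, then reading that many payload bytes. As a minimal standalone sketch of that framing (an illustration of the wire format, not the handler's incremental buffering):

```python
import struct

def frame_dns_message(msg):
    # Prepend the 2-byte big-endian length prefix used for DNS over TCP.
    return struct.pack(b'!H', len(msg)) + msg

def unframe_dns_messages(stream):
    # Split an already-received byte stream into complete DNS messages;
    # an incomplete trailing message is left unparsed.
    msgs = []
    i = 0
    while i + 2 <= len(stream):
        (msg_len,) = struct.unpack(b'!H', stream[i:i + 2])
        if i + 2 + msg_len > len(stream):
            break  # incomplete trailing message
        msgs.append(stream[i + 2:i + 2 + msg_len])
        i += 2 + msg_len
    return msgs
```

The handler itself does the same thing incrementally, which is why it tracks `msg_recv_len` and `msg_recv_index` across `do_read()` calls.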
533 | 662 | def do_timeout(self): |
534 | 663 | self.err = dns.exception.Timeout() |
535 | self.cleanup() | |
536 | 664 | |
537 | 665 | class DNSQueryTransportHandlerDNSPrivate(DNSQueryTransportHandlerDNS): |
538 | 666 | allow_loopback_query = True |
544 | 672 | class DNSQueryTransportHandlerMulti(DNSQueryTransportHandler): |
545 | 673 | singleton = False |
546 | 674 | |
675 | def _set_timeout(self, qtm): | |
676 | if self.timeout is None: | |
677 | # allow 5 seconds for looking glass overhead, as a baseline | |
678 | self.timeout = self.timeout_baseline | |
679 | # account for the worst case, in which queries are performed serially | 
680 | # on the remote end | 
681 | self.timeout += qtm.timeout | |
682 | ||
547 | 683 | def finalize(self): |
548 | 684 | super(DNSQueryTransportHandlerMulti, self).finalize() |
549 | 685 | |
552 | 688 | raise self.err |
553 | 689 | |
554 | 690 | # if there is no content, raise an exception |
555 | if self.res is None: | |
691 | if self.msg_recv is None: | |
556 | 692 | raise RemoteQueryTransportError('No content in response') |
557 | 693 | |
558 | 694 | # load the json content |
559 | 695 | try: |
560 | content = json.loads(codecs.decode(self.res, 'utf-8')) | |
696 | content = json.loads(codecs.decode(self.msg_recv, 'utf-8')) | |
561 | 697 | except ValueError: |
562 | raise RemoteQueryTransportError('JSON decoding of response failed: %s' % self.res) | |
698 | raise RemoteQueryTransportError('JSON decoding of response failed: %s' % self.msg_recv) | |
563 | 699 | |
564 | 700 | if 'version' not in content: |
565 | 701 | raise RemoteQueryTransportError('No version information in response.') |
577 | 713 | if 'error' in content: |
578 | 714 | raise RemoteQueryTransportError('Remote query error: %s' % content['error']) |
579 | 715 | |
580 | if 'responses' not in content: | |
581 | raise RemoteQueryTransportError('No DNS response information in response.') | |
716 | if self.mode == QTH_MODE_WRITE_READ: | |
717 | if 'responses' not in content: | |
718 | raise RemoteQueryTransportError('No DNS response information in response.') | |
719 | else: # self.mode == QTH_MODE_READ: | |
720 | if 'requests' not in content: | |
721 | raise RemoteQueryTransportError('No DNS requests information in response.') | |
582 | 722 | |
583 | 723 | for i in range(len(self.qtms)): |
584 | 724 | try: |
585 | self.qtms[i].deserialize_response(content['responses'][i]) | |
725 | if self.mode == QTH_MODE_WRITE_READ: | |
726 | self.qtms[i].deserialize_response(content['responses'][i]) | |
727 | else: # self.mode == QTH_MODE_READ: | |
728 | self.qtms[i].deserialize_request(content['requests'][i]) | |
586 | 729 | except IndexError: |
587 | raise RemoteQueryTransportError('DNS response information missing from response') | |
730 | raise RemoteQueryTransportError('DNS response or request information missing from message') | |
588 | 731 | except TransportMetaDeserializationError as e: |
589 | 732 | raise RemoteQueryTransportError(str(e)) |
590 | 733 | |
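The `finalize()` above validates the looking-glass JSON envelope (`version`, `error`, and either `responses` or `requests`, depending on mode) before deserializing each entry. A minimal validator covering the same checks — the envelope shape is inferred from the checks in this hunk, not from a formal schema:

```python
import json

def validate_lg_content(raw, expect_key):
    # expect_key is 'responses' (QTH_MODE_WRITE_READ) or 'requests'
    # (QTH_MODE_READ), mirroring the mode switch in finalize().
    try:
        content = json.loads(raw.decode('utf-8'))
    except ValueError:
        raise ValueError('JSON decoding of response failed: %r' % raw)
    if 'version' not in content:
        raise ValueError('No version information in response.')
    if 'error' in content:
        raise ValueError('Remote query error: %s' % content['error'])
    if expect_key not in content:
        raise ValueError('No DNS %s information in response.' % expect_key)
    return content[expect_key]
```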
591 | 734 | class DNSQueryTransportHandlerHTTP(DNSQueryTransportHandlerMulti): |
592 | 735 | timeout_baseline = 5.0 |
593 | 736 | |
594 | def __init__(self, url, insecure=False, processed_queue=None, factory=None): | |
595 | super(DNSQueryTransportHandlerHTTP, self).__init__(processed_queue=processed_queue, factory=factory) | |
737 | def __init__(self, url, insecure=False, sock=None, recycle_sock=True, processed_queue=None, factory=None): | |
738 | super(DNSQueryTransportHandlerHTTP, self).__init__(sock=sock, recycle_sock=recycle_sock, processed_queue=processed_queue, factory=factory) | |
596 | 739 | |
597 | 740 | self.transport_type = socket.SOCK_STREAM |
598 | 741 | |
625 | 768 | |
626 | 769 | self.chunked_encoding = None |
627 | 770 | |
628 | def _set_timeout(self, qtm): | |
629 | if self.timeout is None: | |
630 | # allow 5 seconds for HTTP overhead, as a baseline | |
631 | self.timeout = self.timeout_baseline | |
632 | # account for worst case, in which case queries are performed serially | |
633 | # on the remote end | |
634 | self.timeout += qtm.timeout | |
635 | ||
636 | 771 | def _create_socket(self): |
637 | 772 | super(DNSQueryTransportHandlerHTTP, self)._create_socket() |
638 | 773 | |
642 | 777 | if self.insecure: |
643 | 778 | ctx.check_hostname = False |
644 | 779 | ctx.verify_mode = ssl.CERT_NONE |
645 | self.sock = ctx.wrap_socket(self.sock, server_hostname=self.host) | |
780 | new_sock = Socket(ctx.wrap_socket(self.sock.sock, server_hostname=self.host)) | |
781 | new_sock.lock = self.sock.lock | |
782 | self.sock = new_sock | |
646 | 783 | |
647 | 784 | def _post_data(self): |
648 | 785 | return 'content=' + urlquote.quote(json.dumps(self.serialize_requests())) |
659 | 796 | |
660 | 797 | def init_req(self): |
661 | 798 | data = self._post_data() |
662 | self.req = codecs.encode('POST %s HTTP/1.1\r\nHost: %s\r\nUser-Agent: DNSViz/0.6.6\r\nAccept: application/json\r\n%sContent-Length: %d\r\nContent-Type: application/x-www-form-urlencoded\r\n\r\n%s' % (self.path, self.host, self._authentication_header(), len(data), data), 'latin1') | |
663 | self.req_len = len(self.req) | |
664 | self.req_index = 0 | |
799 | self.msg_send = codecs.encode('POST %s HTTP/1.1\r\nHost: %s\r\nUser-Agent: DNSViz/0.8.0\r\nAccept: application/json\r\n%sContent-Length: %d\r\nContent-Type: application/x-www-form-urlencoded\r\n\r\n%s' % (self.path, self.host, self._authentication_header(), len(data), data), 'latin1') | |
800 | self.msg_send_len = len(self.msg_send) | |
801 | self.msg_send_index = 0 | |
665 | 802 | |
666 | 803 | def prepare(self): |
667 | 804 | super(DNSQueryTransportHandlerHTTP, self).prepare() |
668 | if self.err is not None: | |
805 | if self.err is not None and not isinstance(self.err, SocketInUse): | |
669 | 806 | self.err = RemoteQueryTransportError('Error making HTTP connection: %s' % self.err) |
670 | 807 | |
671 | 808 | def do_write(self): |
679 | 816 | buf = self.sock.recv(65536) |
680 | 817 | if buf == b'': |
681 | 818 | raise EOFError |
682 | self.res_buf += buf | |
819 | self.msg_recv_buf += buf | |
683 | 820 | |
684 | 821 | # still reading status and headers |
685 | if self.chunked_encoding is None and self.res_len is None: | |
686 | headers_end_match = HTTP_HEADER_END_RE.search(lb2s(self.res_buf)) | |
822 | if self.chunked_encoding is None and self.msg_recv_len is None: | |
823 | headers_end_match = HTTP_HEADER_END_RE.search(lb2s(self.msg_recv_buf)) | |
687 | 824 | if headers_end_match is not None: |
688 | headers = self.res_buf[:headers_end_match.start()] | |
689 | self.res_buf = self.res_buf[headers_end_match.end():] | |
825 | headers = self.msg_recv_buf[:headers_end_match.start()] | |
826 | self.msg_recv_buf = self.msg_recv_buf[headers_end_match.end():] | |
690 | 827 | |
691 | 828 | # check HTTP status |
692 | 829 | status_match = HTTP_STATUS_RE.search(lb2s(headers)) |
693 | 830 | if status_match is None: |
694 | 831 | self.err = RemoteQueryTransportError('Malformed HTTP status line') |
695 | self.cleanup() | |
696 | 832 | return True |
697 | 833 | status = int(status_match.group('status')) |
698 | 834 | if status != 200: |
699 | 835 | self.err = RemoteQueryTransportError('%d HTTP status' % status) |
700 | self.cleanup() | |
701 | 836 | return True |
702 | 837 | |
703 | 838 | # get content length or determine whether "chunked" |
705 | 840 | content_length_match = CONTENT_LENGTH_RE.search(lb2s(headers)) |
706 | 841 | if content_length_match is not None: |
707 | 842 | self.chunked_encoding = False |
708 | self.res_len = int(content_length_match.group('length')) | |
843 | self.msg_recv_len = int(content_length_match.group('length')) | |
709 | 844 | else: |
710 | 845 | self.chunked_encoding = CHUNKED_ENCODING_RE.search(lb2s(headers)) is not None |
711 | 846 | |
713 | 848 | if self.chunked_encoding: |
714 | 849 | # look through as many chunks as are readily available |
715 | 850 | # (without having to read from socket again) |
716 | while self.res_buf: | |
717 | if self.res_len is None: | |
851 | while self.msg_recv_buf: | |
852 | if self.msg_recv_len is None: | |
718 | 853 | # looking for chunk length |
719 | 854 | |
720 | 855 | # strip off beginning CRLF, if any |
721 | 856 | # (this is for chunks after the first one) |
722 | crlf_start_match = CRLF_START_RE.search(lb2s(self.res_buf)) | |
857 | crlf_start_match = CRLF_START_RE.search(lb2s(self.msg_recv_buf)) | |
723 | 858 | if crlf_start_match is not None: |
724 | self.res_buf = self.res_buf[crlf_start_match.end():] | |
859 | self.msg_recv_buf = self.msg_recv_buf[crlf_start_match.end():] | |
725 | 860 | |
726 | 861 | # find the chunk length |
727 | chunk_len_match = CHUNK_SIZE_RE.search(lb2s(self.res_buf)) | |
862 | chunk_len_match = CHUNK_SIZE_RE.search(lb2s(self.msg_recv_buf)) | |
728 | 863 | if chunk_len_match is not None: |
729 | self.res_len = int(chunk_len_match.group('length'), 16) | |
730 | self.res_buf = self.res_buf[chunk_len_match.end():] | |
731 | self.res_index = 0 | |
864 | self.msg_recv_len = int(chunk_len_match.group('length'), 16) | |
865 | self.msg_recv_buf = self.msg_recv_buf[chunk_len_match.end():] | |
866 | self.msg_recv_index = 0 | |
732 | 867 | else: |
733 | 868 | # if we don't currently know the length of the next |
734 | 869 | # chunk, and we don't have enough data to find the |
736 | 871 | # don't have any more data to go off of. |
737 | 872 | break |
738 | 873 | |
739 | if self.res_len is not None: | |
874 | if self.msg_recv_len is not None: | |
740 | 875 | # we know a length of the current chunk |
741 | 876 | |
742 | if self.res_len == 0: | |
877 | if self.msg_recv_len == 0: | |
743 | 878 | # no chunks left, so clean up and return |
744 | self.cleanup() | |
745 | 879 | return True |
746 | 880 | |
747 | 881 | # read remaining bytes |
748 | bytes_remaining = self.res_len - self.res_index | |
749 | if len(self.res_buf) > bytes_remaining: | |
750 | self.res += self.res_buf[:bytes_remaining] | |
751 | self.res_index = 0 | |
752 | self.res_buf = self.res_buf[bytes_remaining:] | |
753 | self.res_len = None | |
882 | bytes_remaining = self.msg_recv_len - self.msg_recv_index | |
883 | if len(self.msg_recv_buf) > bytes_remaining: | |
884 | self.msg_recv += self.msg_recv_buf[:bytes_remaining] | |
885 | self.msg_recv_index = 0 | |
886 | self.msg_recv_buf = self.msg_recv_buf[bytes_remaining:] | |
887 | self.msg_recv_len = None | |
754 | 888 | else: |
755 | self.res += self.res_buf | |
756 | self.res_index += len(self.res_buf) | |
757 | self.res_buf = b'' | |
889 | self.msg_recv += self.msg_recv_buf | |
890 | self.msg_recv_index += len(self.msg_recv_buf) | |
891 | self.msg_recv_buf = b'' | |
758 | 892 | |
759 | 893 | elif self.chunked_encoding == False: |
760 | 894 | # output is not chunked, so we're either reading until we've |
761 | 895 | # read all the bytes specified by the content-length header (if |
762 | 896 | # specified) or until the server closes the connection (or we |
763 | 897 | # time out) |
764 | if self.res_len is not None: | |
765 | bytes_remaining = self.res_len - self.res_index | |
766 | self.res += self.res_buf[:bytes_remaining] | |
767 | self.res_buf = self.res_buf[bytes_remaining:] | |
768 | self.res_index = len(self.res) | |
769 | ||
770 | if self.res_index >= self.res_len: | |
771 | self.cleanup() | |
898 | if self.msg_recv_len is not None: | |
899 | bytes_remaining = self.msg_recv_len - self.msg_recv_index | |
900 | self.msg_recv += self.msg_recv_buf[:bytes_remaining] | |
901 | self.msg_recv_buf = self.msg_recv_buf[bytes_remaining:] | |
902 | self.msg_recv_index = len(self.msg_recv) | |
903 | ||
904 | if self.msg_recv_index >= self.msg_recv_len: | |
772 | 905 | return True |
773 | 906 | else: |
774 | self.res += self.res_buf | |
775 | self.res_buf = b'' | |
907 | self.msg_recv += self.msg_recv_buf | |
908 | self.msg_recv_buf = b'' | |
776 | 909 | |
777 | 910 | except (socket.error, EOFError) as e: |
778 | 911 | if isinstance(e, socket.error) and e.errno == socket.errno.EAGAIN: |
782 | 915 | # using chunked encoding, then don't throw an error. If the |
783 | 916 | # content was bad, then it will be reflected in the decoding of |
784 | 917 | # the content |
785 | if self.chunked_encoding == False and self.res_len is None: | |
918 | if self.chunked_encoding == False and self.msg_recv_len is None: | |
786 | 919 | pass |
787 | 920 | else: |
788 | 921 | self.err = RemoteQueryTransportError('Error communicating with HTTP server: %s' % e) |
789 | self.cleanup() | |
790 | 922 | return True |
791 | 923 | |
792 | 924 | def do_timeout(self): |
793 | 925 | self.err = RemoteQueryTransportError('HTTP request timed out') |
794 | self.cleanup() | |
795 | 926 | |
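The chunked-encoding path in `do_read()` above strips hex chunk-size lines and CRLF separators incrementally as bytes arrive. For reference, the same decode over a complete body, sketched non-incrementally (illustration only; the handler cannot assume it has the whole body at once):

```python
def decode_chunked(body):
    # Decode an HTTP/1.1 chunked transfer-encoded body.
    data = b''
    while body:
        size_line, _, body = body.partition(b'\r\n')
        # the size line may carry ";ext=..." chunk extensions
        chunk_len = int(size_line.split(b';')[0], 16)
        if chunk_len == 0:
            break  # terminating zero-length chunk
        data += body[:chunk_len]
        body = body[chunk_len + 2:]  # skip chunk data plus trailing CRLF
    return data
```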
796 | 927 | class DNSQueryTransportHandlerHTTPPrivate(DNSQueryTransportHandlerHTTP): |
797 | 928 | allow_loopback_query = True |
798 | 929 | allow_private_query = True |
799 | 930 | |
800 | class DNSQueryTransportHandlerWebSocket(DNSQueryTransportHandlerMulti): | |
931 | class DNSQueryTransportHandlerWebSocketServer(DNSQueryTransportHandlerMulti): | |
801 | 932 | timeout_baseline = 5.0 |
802 | ||
803 | def __init__(self, path, processed_queue=None, factory=None): | |
804 | super(DNSQueryTransportHandlerWebSocket, self).__init__(processed_queue=processed_queue, factory=factory) | |
933 | unmask_on_recv = True | |
934 | ||
935 | def __init__(self, path, sock=None, recycle_sock=True, processed_queue=None, factory=None): | |
936 | super(DNSQueryTransportHandlerWebSocketServer, self).__init__(sock=sock, recycle_sock=recycle_sock, processed_queue=processed_queue, factory=factory) | |
805 | 937 | |
806 | 938 | self.dst = path |
807 | 939 | self.transport_type = socket.SOCK_STREAM |
809 | 941 | self.mask_mapping = [] |
810 | 942 | self.has_more = None |
811 | 943 | |
812 | def _set_timeout(self, qtm): | |
813 | if self.timeout is None: | |
814 | # allow 5 seconds for browser overhead, as a baseline | |
815 | self.timeout = self.timeout_baseline | |
816 | # account for worst case, in which case queries are performed serially | |
817 | # on the remote end | |
818 | self.timeout += qtm.timeout | |
819 | ||
820 | 944 | def _get_af(self): |
821 | 945 | return socket.AF_UNIX |
822 | 946 | |
830 | 954 | return self.dst |
831 | 955 | |
832 | 956 | def prepare(self): |
833 | super(DNSQueryTransportHandlerWebSocket, self).prepare() | |
834 | if self.err is not None: | |
957 | super(DNSQueryTransportHandlerWebSocketServer, self).prepare() | |
958 | if self.err is not None and not isinstance(self.err, SocketInUse): | |
835 | 959 | self.err = RemoteQueryTransportError('Error connecting to UNIX domain socket: %s' % self.err) |
836 | 960 | |
837 | 961 | def do_write(self): |
838 | val = super(DNSQueryTransportHandlerWebSocket, self).do_write() | |
962 | val = super(DNSQueryTransportHandlerWebSocketServer, self).do_write() | |
839 | 963 | if self.err is not None: |
840 | 964 | self.err = RemoteQueryTransportError('Error writing to UNIX domain socket: %s' % self.err) |
841 | 965 | return val |
842 | 966 | |
843 | 967 | def finalize(self): |
844 | # python3/python2 dual compatibility | |
845 | if isinstance(self.msg_recv, str): | |
846 | decode_func = lambda x: struct.unpack(b'!B', x)[0] | |
847 | else: | |
848 | decode_func = lambda x: x | |
849 | ||
850 | new_res = b'' | |
851 | for i, mask_index in enumerate(self.mask_mapping): | |
852 | mask_octets = struct.unpack(b'!BBBB', self.res[mask_index:mask_index + 4]) | |
853 | if i >= len(self.mask_mapping) - 1: | |
854 | buf = self.res[mask_index + 4:] | |
968 | if self.unmask_on_recv: | |
969 | ||
970 | # python3/python2 dual compatibility | |
971 | if isinstance(self.msg_recv, str): | |
972 | decode_func = lambda x: struct.unpack(b'!B', x)[0] | |
855 | 973 | else: |
856 | buf = self.res[mask_index + 4:self.mask_mapping[i + 1]] | |
857 | for j in range(len(buf)): | |
858 | b = decode_func(buf[j]) | |
859 | new_res += struct.pack(b'!B', b ^ mask_octets[j % 4]); | |
860 | self.res = new_res | |
861 | ||
862 | super(DNSQueryTransportHandlerWebSocket, self).finalize() | |
974 | decode_func = lambda x: x | |
975 | ||
976 | new_msg_recv = b'' | |
977 | for i, mask_index in enumerate(self.mask_mapping): | |
978 | mask_octets = struct.unpack(b'!BBBB', self.msg_recv[mask_index:mask_index + 4]) | |
979 | if i >= len(self.mask_mapping) - 1: | |
980 | buf = self.msg_recv[mask_index + 4:] | |
981 | else: | |
982 | buf = self.msg_recv[mask_index + 4:self.mask_mapping[i + 1]] | |
983 | for j in range(len(buf)): | |
984 | b = decode_func(buf[j]) | |
985 | new_msg_recv += struct.pack(b'!B', b ^ mask_octets[j % 4]) | |
986 | ||
987 | self.msg_recv = new_msg_recv | |
988 | ||
989 | super(DNSQueryTransportHandlerWebSocketServer, self).finalize() | |
863 | 990 | |
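The `finalize()` above walks `mask_mapping` and XORs each WebSocket frame's payload against its 4-byte masking key. The transform in isolation — note that masking and unmasking are the same XOR, which is why the client writer and server reader can share it (Python 3 only here, without the py2/py3 `decode_func` shim used in the hunk):

```python
import struct

def ws_mask(payload, mask_key):
    # Apply (or remove) the WebSocket 4-byte XOR mask; the operation
    # is its own inverse.
    mask_octets = struct.unpack(b'!BBBB', mask_key)
    return bytes(b ^ mask_octets[i % 4] for i, b in enumerate(payload))
```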
864 | 991 | def init_req(self): |
865 | 992 | data = codecs.encode(json.dumps(self.serialize_requests()), 'utf-8') |
872 | 999 | header += struct.pack(b'!BH', 126, l) |
873 | 1000 | else: # 0xffff < len <= 2^63 |
874 | 1001 | header += struct.pack(b'!BLL', 127, 0, l) |
875 | self.req = header + data | |
876 | self.req_len = len(self.req) | |
877 | self.req_index = 0 | |
878 | ||
879 | def init_empty_req(self): | |
880 | self.req = b'\x81\x00' | |
881 | self.req_len = len(self.req) | |
882 | self.req_index = 0 | |
1002 | self.msg_send = header + data | |
1003 | self.msg_send_len = len(self.msg_send) | |
1004 | self.msg_send_index = 0 | |
1005 | ||
1006 | def init_empty_msg_send(self): | |
1007 | self.msg_send = b'\x81\x00' | |
1008 | self.msg_send_len = len(self.msg_send) | |
1009 | self.msg_send_index = 0 | |
883 | 1010 | |
884 | 1011 | def do_read(self): |
885 | 1012 | try: |
886 | 1013 | buf = self.sock.recv(65536) |
887 | 1014 | if buf == b'': |
888 | 1015 | raise EOFError |
889 | self.res_buf += buf | |
1016 | self.msg_recv_buf += buf | |
890 | 1017 | |
891 | 1018 | # look through as many frames as are readily available |
892 | 1019 | # (without having to read from socket again) |
893 | while self.res_buf: | |
894 | if self.res_len is None: | |
1020 | while self.msg_recv_buf: | |
1021 | if self.msg_recv_len is None: | |
895 | 1022 | # looking for frame length |
896 | if len(self.res_buf) >= 2: | |
897 | byte0, byte1 = struct.unpack(b'!BB', self.res_buf[0:2]) | |
1023 | if len(self.msg_recv_buf) >= 2: | |
1024 | byte0, byte1 = struct.unpack(b'!BB', self.msg_recv_buf[0:2]) | |
898 | 1025 | byte1b = byte1 & 0x7f |
899 | 1026 | |
900 | 1027 | # mask must be set |
901 | 1028 | if not byte1 & 0x80: |
902 | 1029 | if self.err is not None: |
903 | 1030 | self.err = RemoteQueryTransportError('Mask bit not set in message from server') |
904 | self.cleanup() | |
905 | 1031 | return True |
906 | 1032 | |
907 | 1033 | # check for FIN flag |
915 | 1041 | else: # byte1b == 127: |
916 | 1042 | header_len = 10 |
917 | 1043 | |
918 | if len(self.res_buf) >= header_len: | |
1044 | if len(self.msg_recv_buf) >= header_len: | |
919 | 1045 | if byte1b <= 125: |
920 | self.res_len = byte1b | |
1046 | self.msg_recv_len = byte1b | |
921 | 1047 | elif byte1b == 126: |
922 | self.res_len = struct.unpack(b'!H', self.res_buf[2:4])[0] | |
1048 | self.msg_recv_len = struct.unpack(b'!H', self.msg_recv_buf[2:4])[0] | |
923 | 1049 | else: # byte1b == 127: |
924 | self.res_len = struct.unpack(b'!Q', self.res_buf[2:10])[0] | |
925 | ||
926 | # handle mask | |
927 | self.mask_mapping.append(len(self.res)) | |
928 | self.res_len += 4 | |
929 | ||
930 | self.res_buf = self.res_buf[header_len:] | |
1050 | self.msg_recv_len = struct.unpack(b'!Q', self.msg_recv_buf[2:10])[0] | |
1051 | ||
1052 | if self.unmask_on_recv: | |
1053 | # handle mask | |
1054 | self.mask_mapping.append(len(self.msg_recv)) | |
1055 | self.msg_recv_len += 4 | |
1056 | ||
1057 | self.msg_recv_buf = self.msg_recv_buf[header_len:] | |
931 | 1058 | |
932 | 1059 | else: |
933 | 1060 | # if we don't currently know the length of the next |
936 | 1063 | # don't have any more data to go off of. |
937 | 1064 | break |
938 | 1065 | |
939 | if self.res_len is not None: | |
1066 | if self.msg_recv_len is not None: | |
940 | 1067 | # we know a length of the current chunk |
941 | 1068 | |
942 | 1069 | # read remaining bytes |
943 | bytes_remaining = self.res_len - self.res_index | |
944 | if len(self.res_buf) > bytes_remaining: | |
945 | self.res += self.res_buf[:bytes_remaining] | |
946 | self.res_index = 0 | |
947 | self.res_buf = self.res_buf[bytes_remaining:] | |
948 | self.res_len = None | |
1070 | bytes_remaining = self.msg_recv_len - self.msg_recv_index | |
1071 | if len(self.msg_recv_buf) > bytes_remaining: | |
1072 | self.msg_recv += self.msg_recv_buf[:bytes_remaining] | |
1073 | self.msg_recv_index = 0 | |
1074 | self.msg_recv_buf = self.msg_recv_buf[bytes_remaining:] | |
1075 | self.msg_recv_len = None | |
949 | 1076 | else: |
950 | self.res += self.res_buf | |
951 | self.res_index += len(self.res_buf) | |
952 | self.res_buf = b'' | |
953 | ||
954 | if self.res_index >= self.res_len and not self.has_more: | |
955 | self.cleanup() | |
1077 | self.msg_recv += self.msg_recv_buf | |
1078 | self.msg_recv_index += len(self.msg_recv_buf) | |
1079 | self.msg_recv_buf = b'' | |
1080 | ||
1081 | if self.msg_recv_index >= self.msg_recv_len and not self.has_more: | |
956 | 1082 | return True |
957 | 1083 | |
958 | 1084 | except (socket.error, EOFError) as e: |
960 | 1086 | pass |
961 | 1087 | else: |
962 | 1088 | self.err = e |
963 | self.cleanup() | |
964 | 1089 | return True |
965 | 1090 | |
966 | 1091 | def do_timeout(self): |
967 | 1092 | self.err = RemoteQueryTransportError('Read of UNIX domain socket timed out') |
968 | self.cleanup() | |
969 | ||
970 | class DNSQueryTransportHandlerWebSocketPrivate(DNSQueryTransportHandlerWebSocket): | |
1093 | ||
1094 | class DNSQueryTransportHandlerWebSocketServerPrivate(DNSQueryTransportHandlerWebSocketServer): | |
971 | 1095 | allow_loopback_query = True |
972 | 1096 | allow_private_query = True |
1097 | ||
1098 | class DNSQueryTransportHandlerWebSocketClient(DNSQueryTransportHandlerWebSocketServer): | |
1099 | unmask_on_recv = False | |
1100 | ||
1101 | def __init__(self, sock, recycle_sock=True, processed_queue=None, factory=None): | |
1102 | super(DNSQueryTransportHandlerWebSocketClient, self).__init__(None, sock=sock, recycle_sock=recycle_sock, processed_queue=processed_queue, factory=factory) | |
1103 | ||
1104 | def _init_req(self, data): | |
1105 | header = b'\x81' | |
1106 | l = len(data) | |
1107 | if l <= 125: | |
1108 | header += struct.pack(b'!B', l | 0x80) | |
1109 | elif l <= 0xffff: | |
1110 | header += struct.pack(b'!BH', 126 | 0x80, l) | |
1111 | else: # 0xffff < len <= 2^63 | |
1112 | header += struct.pack(b'!BLL', 127 | 0x80, 0, l) | |
1113 | ||
1114 | mask_int = random.randint(0, 0xffffffff) | |
1115 | mask = [(mask_int >> 24) & 0xff, | |
1116 | (mask_int >> 16) & 0xff, | |
1117 | (mask_int >> 8) & 0xff, | |
1118 | mask_int & 0xff] | |
1119 | ||
1120 | header += struct.pack(b'!BBBB', *mask) | |
1121 | ||
1122 | # python3/python2 dual compatibility | |
1123 | if isinstance(data, str): | |
1124 | map_func = lambda x: ord(x) | |
1125 | else: | |
1126 | map_func = lambda x: x | |
1127 | ||
1128 | self.msg_send = header | |
1129 | for i, b in enumerate(data): | |
1130 | self.msg_send += struct.pack(b'!B', mask[i % 4] ^ map_func(b)) | |
1131 | self.msg_send_len = len(self.msg_send) | |
1132 | self.msg_send_index = 0 | |
1133 | ||
1134 | def init_req(self): | |
1135 | self._init_req(codecs.encode(json.dumps(self.serialize_responses()), 'utf-8')) | |
1136 | ||
1137 | def init_err_send(self, err): | |
1138 | self._init_req(codecs.encode(err, 'utf-8')) | |
1139 | ||
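`_init_req()` above chooses one of three payload-length encodings (7-bit, 16-bit, or 64-bit) and sets the MASK bit in the same byte. That length-field selection, sketched on its own — using `!BQ` for the 64-bit case where the handler packs two 32-bit halves with `!BLL`, which is equivalent for lengths below 2^32:

```python
import struct

def ws_length_header(payload_len, masked=False):
    # Encode a WebSocket payload length; the high bit of the length
    # byte is the MASK flag.
    flag = 0x80 if masked else 0
    if payload_len <= 125:
        return struct.pack(b'!B', payload_len | flag)
    elif payload_len <= 0xffff:
        return struct.pack(b'!BH', 126 | flag, payload_len)
    else:  # 0xffff < len <= 2^63
        return struct.pack(b'!BQ', 127 | flag, payload_len)
```

A masked client frame (as built by `_init_req()`) appends the 4-byte mask key after this header; an unmasked server frame does not.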
1140 | class DNSQueryTransportHandlerWebSocketClientReader(DNSQueryTransportHandlerWebSocketClient): | |
1141 | mode = QTH_MODE_READ | |
1142 | ||
1143 | class DNSQueryTransportHandlerWebSocketClientWriter(DNSQueryTransportHandlerWebSocketClient): | |
1144 | mode = QTH_MODE_WRITE | |
1145 | ||
1146 | class DNSQueryTransportHandlerCmd(DNSQueryTransportHandlerWebSocketServer): | |
1147 | allow_loopback_query = True | |
1148 | allow_private_query = True | |
1149 | ||
1150 | def __init__(self, args, sock=None, recycle_sock=True, processed_queue=None, factory=None): | |
1151 | super(DNSQueryTransportHandlerCmd, self).__init__(None, sock=sock, recycle_sock=recycle_sock, processed_queue=processed_queue, factory=factory) | |
1152 | ||
1153 | self.args = args | |
1154 | ||
1155 | def _get_af(self): | |
1156 | return None | |
1157 | ||
1158 | def _bind_socket(self): | |
1159 | pass | |
1160 | ||
1161 | def _set_socket_info(self): | |
1162 | pass | |
1163 | ||
1164 | def _get_connect_arg(self): | |
1165 | return None | |
1166 | ||
1167 | def _create_socket(self): | |
1168 | try: | |
1169 | p = subprocess.Popen(self.args, stdin=subprocess.PIPE, stdout=subprocess.PIPE) | |
1170 | except OSError as e: | |
1171 | raise socket.error(str(e)) | |
1172 | else: | |
1173 | self.sock = ReaderWriter(io.open(p.stdout.fileno(), 'rb'), io.open(p.stdin.fileno(), 'wb'), p) | |
1174 | ||
1175 | def _connect_socket(self): | |
1176 | pass | |
1177 | ||
1178 | def do_write(self): | |
1179 | if self.sock.proc.poll() is not None: | |
1180 | self.err = RemoteQueryTransportError('Subprocess has ended with status %d.' % (self.sock.proc.returncode)) | |
1181 | return True | |
1182 | return super(DNSQueryTransportHandlerCmd, self).do_write() | |
1183 | ||
1184 | def do_read(self): | |
1185 | if self.sock.proc.poll() is not None: | |
1186 | self.err = RemoteQueryTransportError('Subprocess has ended with status %d.' % (self.sock.proc.returncode)) | |
1187 | return True | |
1188 | return super(DNSQueryTransportHandlerCmd, self).do_read() | |
1189 | ||
1190 | def cleanup(self): | |
1191 | super(DNSQueryTransportHandlerCmd, self).cleanup() | |
1192 | if self.sock is not None and not self.recycle_sock and self.sock.proc is not None and self.sock.proc.poll() is None: | |
1193 | self.sock.proc.terminate() | |
1194 | self.sock.proc.wait() | |
1195 | return True | |
1196 | ||
1197 | class DNSQueryTransportHandlerRemoteCmd(DNSQueryTransportHandlerCmd): | |
1198 | timeout_baseline = 10.0 | |
1199 | ||
1200 | def __init__(self, url, sock=None, recycle_sock=True, processed_queue=None, factory=None): | |
1201 | ||
1202 | parse_result = urlparse.urlparse(url) | |
1203 | scheme = parse_result.scheme | |
1204 | if not scheme: | |
1205 | scheme = 'ssh' | |
1206 | elif scheme != 'ssh': | |
1207 | raise RemoteQueryTransportError('Invalid scheme: %s' % scheme) | |
1208 | ||
1209 | args = ['ssh', '-T'] | |
1210 | if parse_result.port is not None: | |
1211 | args.extend(['-p', str(parse_result.port)]) | |
1212 | if parse_result.username is not None: | |
1213 | args.append('%s@%s' % (parse_result.username, parse_result.hostname)) | |
1214 | else: | |
1215 | args.append('%s' % (parse_result.hostname)) | |
1216 | if parse_result.path and parse_result.path != '/': | |
1217 | args.append(parse_result.path) | |
1218 | else: | |
1219 | args.append('dnsviz lookingglass') | |
1220 | ||
1221 | super(DNSQueryTransportHandlerRemoteCmd, self).__init__(args, sock=sock, recycle_sock=recycle_sock, processed_queue=processed_queue, factory=factory) | |
1222 | ||
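The constructor above turns an `ssh://` URL into an argv list for `subprocess`. Its argument-building logic can be reproduced as a standalone sketch (the helper name is hypothetical; `'dnsviz lookingglass'` is the default remote command taken from this hunk):

```python
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse  # Python 2

def ssh_args_from_url(url):
    # Mirror of the argv construction in DNSQueryTransportHandlerRemoteCmd.
    parse_result = urlparse(url)
    if parse_result.scheme not in ('', 'ssh'):
        raise ValueError('Invalid scheme: %s' % parse_result.scheme)
    args = ['ssh', '-T']
    if parse_result.port is not None:
        args.extend(['-p', str(parse_result.port)])
    if parse_result.username is not None:
        args.append('%s@%s' % (parse_result.username, parse_result.hostname))
    else:
        args.append(parse_result.hostname)
    if parse_result.path and parse_result.path != '/':
        args.append(parse_result.path)
    else:
        args.append('dnsviz lookingglass')
    return args
```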
1223 | def _get_af(self): | |
1224 | return None | |
1225 | ||
1226 | def _bind_socket(self): | |
1227 | pass | |
1228 | ||
1229 | def _set_socket_info(self): | |
1230 | pass | |
1231 | ||
1232 | def _get_connect_arg(self): | |
1233 | return None | |
1234 | ||
1235 | def _create_socket(self): | |
1236 | try: | |
1237 | p = subprocess.Popen(self.args, stdin=subprocess.PIPE, stdout=subprocess.PIPE) | |
1238 | except OSError as e: | |
1239 | raise socket.error(str(e)) | |
1240 | else: | |
1241 | self.sock = ReaderWriter(io.open(p.stdout.fileno(), 'rb'), io.open(p.stdin.fileno(), 'wb'), p) | |
1242 | ||
1243 | def _connect_socket(self): | |
1244 | pass | |
1245 | ||
1246 | def do_write(self): | |
1247 | if self.sock.proc.poll() is not None: | |
1248 | self.err = RemoteQueryTransportError('Subprocess has ended with status %d.' % (self.sock.proc.returncode)) | |
1249 | return True | |
1250 | return super(DNSQueryTransportHandlerCmd, self).do_write() | |
1251 | ||
1252 | def do_read(self): | |
1253 | if self.sock.proc.poll() is not None: | |
1254 | self.err = RemoteQueryTransportError('Subprocess has ended with status %d.' % (self.sock.proc.returncode)) | |
1255 | return True | |
1256 | return super(DNSQueryTransportHandlerCmd, self).do_read() | |
1257 | ||
1258 | def cleanup(self): | |
1259 | super(DNSQueryTransportHandlerCmd, self).cleanup() | |
1260 | if self.sock is not None and not self.recycle_sock and self.sock.proc is not None and self.sock.proc.poll() is None: | |
1261 | self.sock.proc.terminate() | |
1262 | self.sock.proc.wait() | |
1263 | return True | |
973 | 1264 | |
974 | 1265 | class DNSQueryTransportHandlerFactory(object): |
975 | 1266 | cls = DNSQueryTransportHandler |
978 | 1269 | self.args = args |
979 | 1270 | self.kwargs = kwargs |
980 | 1271 | self.kwargs['factory'] = self |
1272 | self.lock = threading.Lock() | |
1273 | self.sock = None | |
1274 | ||
1275 | def __del__(self): | |
1276 | if self.sock is not None: | |
1277 | self.sock.close() | |
981 | 1278 | |
982 | 1279 | def build(self, **kwargs): |
1280 | if 'sock' not in kwargs and self.sock is not None: | |
1281 | kwargs['sock'] = self.sock | |
983 | 1282 | for name in self.kwargs: |
984 | 1283 | if name not in kwargs: |
985 | 1284 | kwargs[name] = self.kwargs[name] |
997 | 1296 | class DNSQueryTransportHandlerHTTPPrivateFactory(DNSQueryTransportHandlerFactory): |
998 | 1297 | cls = DNSQueryTransportHandlerHTTPPrivate |
999 | 1298 | |
1000 | class _DNSQueryTransportHandlerWebSocketFactory(DNSQueryTransportHandlerFactory): | |
1001 | cls = DNSQueryTransportHandlerWebSocket | |
1002 | ||
1003 | class DNSQueryTransportHandlerWebSocketFactory: | |
1299 | class _DNSQueryTransportHandlerWebSocketServerFactory(DNSQueryTransportHandlerFactory): | |
1300 | cls = DNSQueryTransportHandlerWebSocketServer | |
1301 | ||
1302 | class DNSQueryTransportHandlerWebSocketServerFactory: | |
1004 | 1303 | def __init__(self, *args, **kwargs): |
1005 | self._f = _DNSQueryTransportHandlerWebSocketFactory(*args, **kwargs) | |
1304 | self._f = _DNSQueryTransportHandlerWebSocketServerFactory(*args, **kwargs) | |
1006 | 1305 | |
1007 | 1306 | def __del__(self): |
1008 | 1307 | try: |
1009 | 1308 | qth = self._f.build() |
1010 | qth.init_empty_req() | |
1309 | qth.init_empty_msg_send() | |
1011 | 1310 | qth.prepare() |
1012 | 1311 | qth.do_write() |
1013 | 1312 | except: |
1020 | 1319 | def build(self, **kwargs): |
1021 | 1320 | return self._f.build(**kwargs) |
1022 | 1321 | |
1023 | class _DNSQueryTransportHandlerWebSocketPrivateFactory(DNSQueryTransportHandlerFactory): | |
1024 | cls = DNSQueryTransportHandlerWebSocketPrivate | |
1025 | ||
1026 | class DNSQueryTransportHandlerWebSocketPrivateFactory: | |
1322 | class _DNSQueryTransportHandlerWebSocketServerPrivateFactory(DNSQueryTransportHandlerFactory): | |
1323 | cls = DNSQueryTransportHandlerWebSocketServerPrivate | |
1324 | ||
1325 | class DNSQueryTransportHandlerWebSocketServerPrivateFactory: | |
1027 | 1326 | def __init__(self, *args, **kwargs): |
1028 | self._f = _DNSQueryTransportHandlerWebSocketPrivateFactory(*args, **kwargs) | |
1327 | self._f = _DNSQueryTransportHandlerWebSocketServerPrivateFactory(*args, **kwargs) | |
1029 | 1328 | |
1030 | 1329 | def __del__(self): |
1031 | 1330 | try: |
1032 | 1331 | qth = self._f.build() |
1033 | qth.init_empty_req() | |
1332 | qth.init_empty_msg_send() | |
1034 | 1333 | qth.prepare() |
1035 | 1334 | qth.do_write() |
1036 | 1335 | except: |
1043 | 1342 | def build(self, **kwargs): |
1044 | 1343 | return self._f.build(**kwargs) |
1045 | 1344 | |
1345 | class DNSQueryTransportHandlerCmdFactory(DNSQueryTransportHandlerFactory): | |
1346 | cls = DNSQueryTransportHandlerCmd | |
1347 | ||
1348 | class DNSQueryTransportHandlerRemoteCmdFactory(DNSQueryTransportHandlerFactory): | |
1349 | cls = DNSQueryTransportHandlerRemoteCmd | |
1350 | ||
1046 | 1351 | class DNSQueryTransportHandlerWrapper(object): |
1047 | 1352 | def __init__(self, qh): |
1048 | 1353 | self.qh = qh |
1060 | 1365 | def __init__(self): |
1061 | 1366 | self._notify_read_fd, self._notify_write_fd = os.pipe() |
1062 | 1367 | fcntl.fcntl(self._notify_read_fd, fcntl.F_SETFL, os.O_NONBLOCK) |
1063 | self._query_queue = queue.Queue() | |
1368 | self._msg_queue = queue.Queue() | |
1064 | 1369 | self._event_map = {} |
1065 | 1370 | |
1066 | 1371 | self._close = threading.Event() |
1071 | 1376 | self._close.set() |
1072 | 1377 | os.write(self._notify_write_fd, struct.pack(b'!B', 0)) |
1073 | 1378 | |
1074 | def query(self, qh): | |
1379 | def handle_msg(self, qh): | |
1075 | 1380 | self._event_map[qh] = threading.Event() |
1076 | self._query(qh) | |
1381 | self._handle_msg(qh, True) | |
1077 | 1382 | self._event_map[qh].wait() |
1078 | 1383 | del self._event_map[qh] |
1079 | 1384 | |
1080 | def query_nowait(self, qh): | |
1081 | self._query(qh) | |
1082 | ||
1083 | def _query(self, qh): | |
1084 | self._query_queue.put(qh) | |
1085 | os.write(self._notify_write_fd, struct.pack(b'!B', 0)) | |
1385 | def handle_msg_nowait(self, qh): | |
1386 | self._handle_msg(qh, True) | |
1387 | ||
1388 | def _handle_msg(self, qh, notify): | |
1389 | self._msg_queue.put(qh) | |
1390 | if notify: | |
1391 | os.write(self._notify_write_fd, struct.pack(b'!B', 0)) | |
1086 | 1392 | |
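The renamed `_handle_msg`/notify-fd pair in the hunk above is the classic "self-pipe" wakeup pattern: a producer puts a message on a queue and writes one byte to a pipe so the `select()` loop wakes up. A minimal standalone sketch of that pattern (names simplified; this is an illustrative toy under those assumptions, not the DNSViz implementation):

```python
import fcntl
import os
import queue
import select
import struct

# Self-pipe wakeup: the read end is non-blocking so the loop can drain
# however many wakeup bytes have accumulated in a single read().
notify_read_fd, notify_write_fd = os.pipe()
fcntl.fcntl(notify_read_fd, fcntl.F_SETFL, os.O_NONBLOCK)
msg_queue = queue.Queue()

def handle_msg(msg):
    msg_queue.put(msg)
    os.write(notify_write_fd, struct.pack(b'!B', 0))  # wake the select() loop

handle_msg('hello')
rlist, _, _ = select.select([notify_read_fd], [], [], 0)
assert notify_read_fd in rlist    # the wakeup byte made the fd readable
os.read(notify_read_fd, 65536)    # empty the pipe
msg = msg_queue.get_nowait()
print(msg)  # → hello
```

The non-blocking flag matters: without it, a spurious wakeup with an already-drained pipe would block the loop inside `os.read()`.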
1087 | 1393 | def _loop(self): |
1088 | 1394 | '''Return the data resulting from a UDP transaction.''' |
1115 | 1421 | qh = query_meta[fd] |
1116 | 1422 | |
1117 | 1423 | if qh.do_write(): |
1118 | if qh.err is not None: | |
1424 | if qh.err is not None or qh.mode == QTH_MODE_WRITE: | |
1425 | qh.cleanup() | |
1119 | 1426 | finished_fds.append(fd) |
1120 | else: | |
1427 | else: # qh.mode == QTH_MODE_WRITE_READ | |
1121 | 1428 | wlist_in.remove(fd) |
1122 | rlist_in.append(fd) | |
1429 | rlist_in.append(qh.sock.reader_fd) | |
1123 | 1430 | |
1124 | 1431 | # handle the responses |
1125 | 1432 | for fd in rlist_out: |
1128 | 1435 | |
1129 | 1436 | qh = query_meta[fd] |
1130 | 1437 | |
1131 | if qh.do_read(): | |
1132 | finished_fds.append(fd) | |
1438 | if qh.do_read(): # qh.mode in (QTH_MODE_WRITE_READ, QTH_MODE_READ) | |
1439 | qh.cleanup() | |
1440 | finished_fds.append(qh.sock.reader_fd) | |
1133 | 1441 | |
1134 | 1442 | # handle the expired queries |
1135 | 1443 | future_index = bisect.bisect_right(expirations, ((time.time(), DNSQueryTransportHandlerWrapper(None)))) |
1142 | 1450 | continue |
1143 | 1451 | |
1144 | 1452 | qh.do_timeout() |
1145 | finished_fds.append(qh.sockfd) | |
1453 | qh.cleanup() | |
1454 | finished_fds.append(qh.sock.reader_fd) | |
1146 | 1455 | expirations = expirations[future_index:] |
1147 | 1456 | |
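The expiration handling above keeps a sorted list of `(expiration, wrapper)` pairs and uses `bisect_right` to split off everything that has already timed out. A simplified sketch of that bookkeeping (plain strings stand in for the `DNSQueryTransportHandlerWrapper` objects):

```python
import bisect
import time

# Sorted list of (expiration_time, handler) pairs, maintained with insort
# so timeout checks are a single binary search per loop iteration.
expirations = []
now = time.time()
for name, timeout in (('a', 5.0), ('b', 1.0), ('c', 3.0)):
    bisect.insort(expirations, (now + timeout, name))

# Everything left of this index expires at or before now + 2.0 seconds.
future_index = bisect.bisect_right(expirations, (now + 2.0, ''))
expired = [name for _, name in expirations[:future_index]]
expirations = expirations[future_index:]
print(expired)  # → ['b']
```

Trimming with a slice (`expirations[future_index:]`) mirrors the loop above: expired entries are processed once and dropped in one pass.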
1148 | 1457 | # for any fds that need to be finished, do it now |
1149 | 1458 | for fd in finished_fds: |
1459 | qh = query_meta[fd] | |
1150 | 1460 | try: |
1151 | rlist_in.remove(fd) | |
1461 | rlist_in.remove(qh.sock.reader_fd) | |
1152 | 1462 | except ValueError: |
1153 | wlist_in.remove(fd) | |
1154 | if query_meta[fd] in self._event_map: | |
1155 | self._event_map[query_meta[fd]].set() | |
1463 | wlist_in.remove(qh.sock.writer_fd) | |
1464 | if qh in self._event_map: | |
1465 | self._event_map[qh].set() | |
1156 | 1466 | del query_meta[fd] |
1467 | ||
1468 | if finished_fds: | |
1469 | # if any sockets were finished, then notify, in case any | |
1470 | # queued messages are waiting to be handled. | |
1471 | os.write(self._notify_write_fd, struct.pack(b'!B', 0)) | |
1157 | 1472 | |
1158 | 1473 | # handle the new queries |
1159 | 1474 | if self._notify_read_fd in rlist_out: |
1160 | 1475 | # empty the pipe |
1161 | 1476 | os.read(self._notify_read_fd, 65536) |
1162 | 1477 | |
1478 | requeue = [] | |
1163 | 1479 | while True: |
1164 | 1480 | try: |
1165 | qh = self._query_queue.get_nowait() | |
1481 | qh = self._msg_queue.get_nowait() | |
1166 | 1482 | qh.prepare() |
1483 | ||
1167 | 1484 | if qh.err is not None: |
1168 | if qh in self._event_map: | |
1169 | self._event_map[qh].set() | |
1485 | if isinstance(qh.err, SocketInUse): | |
1486 | # if this was a SocketInUse, just requeue, and try again | |
1487 | qh.err = None | |
1488 | requeue.append(qh) | |
1489 | ||
1490 | else: | |
1491 | qh.cleanup() | |
1492 | if qh in self._event_map: | |
1493 | self._event_map[qh].set() | |
1170 | 1494 | else: |
1171 | 1495 | # if we successfully bound and connected the |
1172 | 1496 | # socket, then put this socket in the write fd list |
1173 | fd = qh.sock.fileno() | |
1174 | query_meta[fd] = qh | |
1497 | query_meta[qh.sock.reader_fd] = qh | |
1498 | query_meta[qh.sock.writer_fd] = qh | |
1175 | 1499 | bisect.insort(expirations, (qh.expiration, DNSQueryTransportHandlerWrapper(qh))) |
1176 | wlist_in.append(fd) | |
1500 | if qh.mode in (QTH_MODE_WRITE_READ, QTH_MODE_WRITE): | |
1501 | wlist_in.append(qh.sock.writer_fd) | |
1502 | elif qh.mode == QTH_MODE_READ: | |
1503 | rlist_in.append(qh.sock.reader_fd) | |
1504 | else: | |
1505 | raise Exception('Unexpected mode: %d' % qh.mode) | |
1177 | 1506 | except queue.Empty: |
1178 | 1507 | break |
1508 | ||
1509 | for qh in requeue: | |
1510 | self._handle_msg(qh, False) | |
1179 | 1511 | |
1180 | 1512 | class DNSQueryTransportHandlerHTTPPrivate(DNSQueryTransportHandlerHTTP): |
1181 | 1513 | allow_loopback_query = True |
1188 | 1520 | def __del__(self): |
1189 | 1521 | self.close() |
1190 | 1522 | |
1191 | def query(self, qh): | |
1192 | return self._th.query(qh) | |
1193 | ||
1194 | def query_nowait(self, qh): | |
1195 | return self._th.query_nowait(qh) | |
1523 | def handle_msg(self, qh): | |
1524 | return self._th.handle_msg(qh) | |
1525 | ||
1526 | def handle_msg_nowait(self, qh): | |
1527 | return self._th.handle_msg_nowait(qh) | |
1196 | 1528 | |
1197 | 1529 | def close(self): |
1198 | 1530 | return self._th.close() |
0 | 0 | # |
1 | 1 | # This file is a part of DNSViz, a tool suite for DNS/DNSSEC monitoring, |
2 | # analysis, and visualization. This file (or some portion thereof) is a | |
3 | # derivative work authored by VeriSign, Inc., and created in 2014, based on | |
4 | # code originally developed at Sandia National Laboratories. | |
2 | # analysis, and visualization. | |
5 | 3 | # Created by Casey Deccio (casey@deccio.net) |
6 | 4 | # |
7 | 5 | # Copyright 2012-2014 Sandia Corporation. Under the terms of Contract |
10 | 8 | # |
11 | 9 | # Copyright 2014-2016 VeriSign, Inc. |
12 | 10 | # |
13 | # Copyright 2016-2017 Casey Deccio. | |
11 | # Copyright 2016-2019 Casey Deccio | |
14 | 12 | # |
15 | 13 | # DNSViz is free software; you can redistribute it and/or modify |
16 | 14 | # it under the terms of the GNU General Public License as published by |
89 | 87 | ''' |
90 | 88 | |
91 | 89 | HISTORICAL_ROOT_IPS = ( |
92 | (dns.name.from_text('h.root-servers.net.'), IPAddr('2001:500:1::803f:235')), # December 1, 2015 | |
93 | (dns.name.from_text('l.root-servers.net.'), IPAddr('2001:500:3::42')), # March 24, 2016 | |
94 | (dns.name.from_text('b.root-servers.net.'), IPAddr('2001:500:84::b')), # June 1, 2017 | |
90 | (dns.name.from_text('h.root-servers.net.'), IPAddr('2001:500:1::803f:235')), # 2015-12-01 | |
91 | (dns.name.from_text('l.root-servers.net.'), IPAddr('2001:500:3::42')), # 2016-03-24 | |
92 | (dns.name.from_text('b.root-servers.net.'), IPAddr('2001:500:84::b')), # 2017-06-01 | |
95 | 93 | ) |
96 | 94 | |
97 | 95 | # The following list should include all current and historical trust anchors |
107 | 105 | TRUSTED_KEYS_ROOT = ( |
108 | 106 | ('. IN DNSKEY 257 3 8 AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq QxA+Uk1ihz0=', |
109 | 107 | datetime.datetime(2010, 7, 16, 0, 0, 0, 0, fmt.utc), |
110 | datetime.datetime(2017, 10, 15, 0, 0, 0, 0, fmt.utc)), | |
108 | datetime.datetime(2018, 10, 15, 16, 0, 0, 0, fmt.utc)), # 2018-10-11 16:00:00 UTC + 4 days (2*TTL) | |
111 | 109 | ('. IN DNSKEY 257 3 8 AwEAAaz/tAm8yTn4Mfeh5eyI96WSVexTBAvkMgJzkKTOiW1vkIbzxeF3 +/4RgWOq7HrxRixHlFlExOLAJr5emLvN7SWXgnLh4+B5xQlNVz8Og8kv ArMtNROxVQuCaSnIDdD5LKyWbRd2n9WGe2R8PzgCmr3EgVLrjyBxWezF 0jLHwVN8efS3rCj/EWgvIWgb9tarpVUDK/b58Da+sqqls3eNbuv7pr+e oZG+SrDK6nWeL3c6H5Apxz7LjVc1uTIdsIXxuOLYA4/ilBmSVIzuDWfd RUfhHdY6+cn8HFRm+2hM8AnXGXws9555KrUB5qihylGa8subX2Nn6UwN R1AkUTV74bU=', |
112 | datetime.datetime(2017, 7, 12, 0, 0, 0, 0, fmt.utc), | |
110 | datetime.datetime(2017, 8, 11, 0, 0, 0, 0, fmt.utc), # 2017-07-12 00:00:00 UTC + 30 days (RFC 5011 add hold-down time) | |
113 | 111 | None), |
114 | 112 | ) |
115 | 113 | |
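The validity-start comment in the hunk above can be checked arithmetically: the RFC 5011 "add hold-down" period is 30 days after the key's first publication on 2017-07-12 (both dates taken from the comments above):

```python
import datetime

# RFC 5011 add hold-down: a newly published trust anchor may only be
# trusted 30 days after it first appears.
published = datetime.datetime(2017, 7, 12, 0, 0, 0)
validity_start = published + datetime.timedelta(days=30)
print(validity_start.date())  # → 2017-08-11
```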
166 | 164 | |
167 | 165 | def get_root_hints(): |
168 | 166 | try: |
169 | return get_hints(io.open(ROOT_HINTS, 'r', encoding='utf-8').read()) | |
167 | with io.open(ROOT_HINTS, 'r', encoding='utf-8') as fh: | |
168 | return get_hints(fh.read()) | |
170 | 169 | except IOError: |
171 | 170 | return get_hints(ROOT_HINTS_STR_DEFAULT) |
172 | 171 |
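The `get_root_hints()` change above swaps a bare `open().read()` for a context manager, so the file handle is closed even if reading raises, while the `IOError` fallback to the built-in default is preserved. A sketch of that pattern (`DEFAULT_HINTS` is a stand-in for `ROOT_HINTS_STR_DEFAULT`):

```python
import io

DEFAULT_HINTS = '; built-in hints'

def read_hints(path):
    try:
        # The with-block guarantees fh.close() runs on any exit path.
        with io.open(path, 'r', encoding='utf-8') as fh:
            return fh.read()
    except IOError:
        # Missing or unreadable file: fall back to the built-in default.
        return DEFAULT_HINTS

hints = read_hints('/nonexistent/named.root')
print(hints)  # → ; built-in hints
```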
0 | 0 | # |
1 | 1 | # This file is a part of DNSViz, a tool suite for DNS/DNSSEC monitoring, |
2 | # analysis, and visualization. This file (or some portion thereof) is a | |
3 | # derivative work authored by VeriSign, Inc., and created in 2014, based on | |
4 | # code originally developed at Sandia National Laboratories. | |
2 | # analysis, and visualization. | |
5 | 3 | # Created by Casey Deccio (casey@deccio.net) |
6 | 4 | # |
7 | 5 | # Copyright 2012-2014 Sandia Corporation. Under the terms of Contract |
10 | 8 | # |
11 | 9 | # Copyright 2014-2016 VeriSign, Inc. |
12 | 10 | # |
13 | # Copyright 2016 Casey Deccio. | |
11 | # Copyright 2016-2019 Casey Deccio | |
14 | 12 | # |
15 | 13 | # DNSViz is free software; you can redistribute it and/or modify |
16 | 14 | # it under the terms of the GNU General Public License as published by |
29 | 27 | from __future__ import unicode_literals |
30 | 28 | |
31 | 29 | import codecs |
32 | import cgi | |
33 | 30 | import errno |
34 | 31 | import io |
35 | 32 | import json |
43 | 40 | from collections import OrderedDict |
44 | 41 | except ImportError: |
45 | 42 | from ordereddict import OrderedDict |
43 | ||
44 | # python3/python2 dual compatibility | |
45 | try: | |
46 | from html import escape | |
47 | except ImportError: | |
48 | from cgi import escape | |
46 | 49 | |
47 | 50 | import dns.name, dns.rdtypes, dns.rdatatype, dns.dnssec |
48 | 51 | |
110 | 113 | d = OrderedDict() |
111 | 114 | |
112 | 115 | if html_format: |
113 | formatter = lambda x: cgi.escape(x, True) | |
116 | formatter = lambda x: escape(x, True) | |
114 | 117 | else: |
115 | 118 | formatter = lambda x: x |
116 | 119 | |
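The dual import above replaces `cgi.escape` (deprecated and later removed) with `html.escape` where available. It can be exercised directly; with the second argument `True`, quote characters are escaped as well:

```python
# python3/python2 dual compatibility, as in the hunk above
try:
    from html import escape
except ImportError:
    from cgi import escape  # Python 2 fallback; removed in Python 3.8

html_format = True
if html_format:
    formatter = lambda x: escape(x, True)
else:
    formatter = lambda x: x

out = formatter('<a href="x">&')
print(out)  # → &lt;a href=&quot;x&quot;&gt;&amp;
```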
160 | 163 | self.dnskey_ids = {} |
161 | 164 | self.ds_ids = {} |
162 | 165 | self.nsec_ids = {} |
166 | self.rrset_ids = {} | |
163 | 167 | self.next_dnskey_id = 0 |
164 | 168 | self.next_ds_id = 0 |
165 | 169 | self.next_nsec_id = 0 |
170 | self.next_rrset_id = 10 | |
166 | 171 | |
167 | 172 | def _raphael_unit_mapping_expression(self, val, unit): |
168 | 173 | #XXX doesn't work properly |
670 | 675 | self.node_mapping[edge_id].add(rrsig_status) |
671 | 676 | self.node_reverse_mapping[rrsig_status] = edge_id |
672 | 677 | |
678 | def id_for_rrset(self, rrset_info): | |
679 | name, rdtype = rrset_info.rrset.name, rrset_info.rrset.rdtype | |
680 | try: | |
681 | rrset_info_list = self.rrset_ids[(name,rdtype)] | |
682 | except KeyError: | |
683 | self.rrset_ids[(name,rdtype)] = [] | |
684 | rrset_info_list = self.rrset_ids[(name,rdtype)] | |
685 | ||
686 | for rrset_info1, id in rrset_info_list: | |
687 | if rrset_info == rrset_info1: | |
688 | return id | |
689 | ||
690 | id = self.next_rrset_id | |
691 | self.rrset_ids[(name,rdtype)].append((rrset_info, id)) | |
692 | self.next_rrset_id += 1 | |
693 | return id | |
694 | ||
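The new `id_for_rrset()` above replaces the old caller-managed `id` counters with centralized bookkeeping: each distinct rrset seen under a `(name, rdtype)` key gets a stable integer id, starting at 10 to match `next_rrset_id`. A self-contained sketch of that allocator (string equality stands in for `rrset_info` comparison):

```python
class RRsetIds:
    def __init__(self):
        self.rrset_ids = {}      # (name, rdtype) -> [(obj, id), ...]
        self.next_rrset_id = 10  # matches next_rrset_id above

    def id_for_rrset(self, key, rrset_info):
        rrset_info_list = self.rrset_ids.setdefault(key, [])
        # Reuse the id if this rrset was already assigned one.
        for rrset_info1, id in rrset_info_list:
            if rrset_info == rrset_info1:
                return id
        # Otherwise allocate the next id and remember it.
        id = self.next_rrset_id
        rrset_info_list.append((rrset_info, id))
        self.next_rrset_id += 1
        return id

ids = RRsetIds()
a = ids.id_for_rrset(('example.com', 'A'), 'rrset-1')
b = ids.id_for_rrset(('example.com', 'A'), 'rrset-2')
c = ids.id_for_rrset(('example.com', 'A'), 'rrset-1')
print(a, b, c)  # → 10 11 10
```

Stable ids are what let `rrset_node_str()` produce the same node name for the same rrset no matter which code path reaches it, which is why the explicit `id` parameters could be dropped from `add_rrset()` and its callers.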
673 | 695 | def rrset_node_str(self, name, rdtype, id): |
674 | 696 | return 'RRset-%d|%s|%s' % (id, fmt.humanize_name(name), dns.rdatatype.to_text(rdtype)) |
675 | 697 | |
679 | 701 | def get_rrset(self, name, rdtype, id): |
680 | 702 | return self.G.get_node(self.rrset_node_str(name, rdtype, id)) |
681 | 703 | |
682 | def add_rrset(self, rrset_info, wildcard_name, name_obj, zone_obj, id): | |
704 | def add_rrset(self, rrset_info, wildcard_name, name_obj, zone_obj): | |
683 | 705 | name = wildcard_name or rrset_info.rrset.name |
684 | node_str = self.rrset_node_str(name, rrset_info.rrset.rdtype, id) | |
706 | node_str = self.rrset_node_str(name, rrset_info.rrset.rdtype, self.id_for_rrset(rrset_info)) | |
685 | 707 | node_id = node_str.replace('*', '_') |
686 | 708 | |
687 | 709 | if not self.G.has_node(node_str): |
845 | 867 | def add_warnings(self, name_obj, zone_obj, name, rdtype, warnings_list): |
846 | 868 | return self._add_errors(name_obj, zone_obj, name, rdtype, warnings_list, 3, WARNING_ICON, 'warnings', 'WARNING', 'Response warnings for') |
847 | 869 | |
848 | def add_dname(self, dname_status, name_obj, zone_obj, id): | |
870 | def add_dname(self, dname_status, name_obj, zone_obj): | |
849 | 871 | dname_rrset_info = dname_status.synthesized_cname.dname_info |
850 | dname_node = self.add_rrset(dname_rrset_info, None, name_obj, zone_obj, id) | |
872 | dname_node = self.add_rrset(dname_rrset_info, None, name_obj, zone_obj) | |
851 | 873 | |
852 | 874 | if dname_status.validation_status == Status.DNAME_STATUS_VALID: |
853 | 875 | line_color = COLORS['secure'] |
862 | 884 | if dname_status.included_cname is None: |
863 | 885 | cname_node = self.add_rrset_non_existent(name_obj, zone_obj, Response.NegativeResponseInfo(dname_status.synthesized_cname.rrset.name, dns.rdatatype.CNAME, False), False, False) |
864 | 886 | else: |
865 | cname_node = self.add_rrset(dname_status.included_cname, None, name_obj, zone_obj, id) | |
887 | cname_node = self.add_rrset(dname_status.included_cname, None, name_obj, zone_obj) | |
866 | 888 | |
867 | 889 | edge_id = 'dname-%s|%s|%s|%s' % (cname_node, dname_node, line_color.lstrip('#'), line_style) |
868 | 890 | edge_key = '%s-%s' % (line_color, line_style) |
1011 | 1033 | |
1012 | 1034 | return nsec_node |
1013 | 1035 | |
1014 | def add_wildcard(self, name_obj, zone_obj, rrset_info, nsec_status, wildcard_name, id): | |
1015 | wildcard_node = self.add_rrset(rrset_info, wildcard_name, name_obj, zone_obj, id) | |
1036 | def add_wildcard(self, name_obj, zone_obj, rrset_info, nsec_status, wildcard_name): | |
1037 | wildcard_node = self.add_rrset(rrset_info, wildcard_name, name_obj, zone_obj) | |
1016 | 1038 | self.add_rrsigs(name_obj, zone_obj, rrset_info, wildcard_node) |
1017 | 1039 | nxdomain_node = self.add_rrset_non_existent(name_obj, zone_obj, rrset_info.wildcard_info[wildcard_name], True, True) |
1018 | 1040 | |
1025 | 1047 | return wildcard_node |
1026 | 1048 | |
1027 | 1049 | #XXX consider adding this node (using, e.g., clustering) |
1028 | #rrset_node = self.add_rrset(rrset_info, None, zone_obj, zone_obj, id) | |
1050 | #rrset_node = self.add_rrset(rrset_info, None, zone_obj, zone_obj) | |
1029 | 1051 | #self.G.add_edge(rrset_node, nxdomain_node, color=COLORS['secure'], style='invis', dir='back') |
1030 | 1052 | #self.G.add_edge(rrset_node, wildcard_node, color=COLORS['secure'], style='invis', dir='back') |
1031 | 1053 | #return rrset_node |
1049 | 1071 | self.add_rrsig(rrsig_status, name_obj, signer_obj, signed_node, port=port) |
1050 | 1072 | |
1051 | 1073 | def graph_rrset_auth(self, name_obj, name, rdtype): |
1052 | if (name, rdtype) in self.processed_rrsets: | |
1053 | return self.processed_rrsets[(name, rdtype)] | |
1054 | self.processed_rrsets[(name, rdtype)] = [] | |
1074 | if (name, rdtype) not in self.processed_rrsets: | |
1075 | self.processed_rrsets[(name, rdtype)] = [] | |
1055 | 1076 | |
1056 | 1077 | #XXX there are reasons for this (e.g., NXDOMAIN, after which no further |
1057 | 1078 | # queries are made), but it would be good to have a sanity check, so |
1087 | 1108 | if cname_obj is not None: |
1088 | 1109 | cname_nodes.extend(self.graph_rrset_auth(cname_obj, target, rdtype)) |
1089 | 1110 | |
1090 | id = 10 | |
1091 | 1111 | query = name_obj.queries[(name, rdtype)] |
1092 | 1112 | node_to_cname_mapping = set() |
1093 | 1113 | for rrset_info in query.answer_info: |
1114 | 1134 | #XXX can we combine wildcard components into a cluster? |
1115 | 1135 | if rrset_info in name_obj.dname_status: |
1116 | 1136 | for dname_status in name_obj.dname_status[rrset_info]: |
1117 | my_nodes.append(self.add_dname(dname_status, name_obj, my_zone_obj, id)) | |
1118 | id += 1 | |
1137 | my_nodes.append(self.add_dname(dname_status, name_obj, my_zone_obj)) | |
1119 | 1138 | elif rrset_info.wildcard_info: |
1120 | 1139 | for wildcard_name in rrset_info.wildcard_info: |
1121 | 1140 | if name_obj.wildcard_status[rrset_info.wildcard_info[wildcard_name]]: |
1122 | 1141 | for nsec_status in name_obj.wildcard_status[rrset_info.wildcard_info[wildcard_name]]: |
1123 | my_nodes.append(self.add_wildcard(name_obj, my_zone_obj, rrset_info, nsec_status, wildcard_name, id)) | |
1124 | id += 1 | |
1142 | my_nodes.append(self.add_wildcard(name_obj, my_zone_obj, rrset_info, nsec_status, wildcard_name)) | |
1125 | 1143 | else: |
1126 | my_nodes.append(self.add_wildcard(name_obj, my_zone_obj, rrset_info, None, wildcard_name, id)) | |
1127 | id += 1 | |
1128 | else: | |
1129 | rrset_node = self.add_rrset(rrset_info, None, name_obj, my_zone_obj, id) | |
1144 | my_nodes.append(self.add_wildcard(name_obj, my_zone_obj, rrset_info, None, wildcard_name)) | |
1145 | else: | |
1146 | rrset_node = self.add_rrset(rrset_info, None, name_obj, my_zone_obj) | |
1130 | 1147 | self.add_rrsigs(name_obj, my_zone_obj, rrset_info, rrset_node) |
1131 | 1148 | my_nodes.append(rrset_node) |
1132 | id += 1 | |
1133 | 1149 | |
1134 | 1150 | # if this is a CNAME record, create a node-to-target mapping |
1135 | 1151 | if rrset_info.rrset.rdtype == dns.rdatatype.CNAME: |
1167 | 1183 | nsec_cell = lb2s(nsec_name.canonicalize().to_text()) |
1168 | 1184 | self.add_rrsigs(name_obj, my_zone_obj, rrset_info, nsec_node, port=nsec_cell) |
1169 | 1185 | |
1170 | id += 1 | |
1171 | 1186 | for soa_rrset_info in neg_response_info.soa_rrset_info: |
1172 | 1187 | # If no servers match the authoritative servers, then put this in the parent zone |
1173 | 1188 | if not set([s for (s,c) in soa_rrset_info.servers_clients]).intersection(my_zone_obj.get_auth_or_designated_servers()) and my_zone_obj.parent is not None: |
1174 | 1189 | z_obj = my_zone_obj.parent |
1175 | 1190 | else: |
1176 | 1191 | z_obj = my_zone_obj |
1177 | soa_rrset_node = self.add_rrset(soa_rrset_info, None, name_obj, z_obj, id) | |
1192 | soa_rrset_node = self.add_rrset(soa_rrset_info, None, name_obj, z_obj) | |
1178 | 1193 | self.add_rrsigs(name_obj, my_zone_obj, soa_rrset_info, soa_rrset_node) |
1179 | id += 1 | |
1180 | 1194 | |
1181 | 1195 | for neg_response_info in query.nodata_info: |
1182 | 1196 | # only do qname, unless analysis type is recursive |
1202 | 1216 | nsec_cell = lb2s(nsec_name.canonicalize().to_text()) |
1203 | 1217 | self.add_rrsigs(name_obj, my_zone_obj, rrset_info, nsec_node, port=nsec_cell) |
1204 | 1218 | |
1205 | id += 1 | |
1206 | 1219 | for soa_rrset_info in neg_response_info.soa_rrset_info: |
1207 | soa_rrset_node = self.add_rrset(soa_rrset_info, None, name_obj, my_zone_obj, id) | |
1220 | soa_rrset_node = self.add_rrset(soa_rrset_info, None, name_obj, my_zone_obj) | |
1208 | 1221 | self.add_rrsigs(name_obj, my_zone_obj, soa_rrset_info, soa_rrset_node) |
1209 | id += 1 | |
1210 | 1222 | |
1211 | 1223 | error_node = self.add_errors(name_obj, zone_obj, name, rdtype, name_obj.response_errors[query]) |
1212 | 1224 | if error_node is not None: |
1306 | 1318 | # Map CNAME responses to DNSKEY/DS queries to appropriate node |
1307 | 1319 | for rrset_info in name_obj.queries[(name_obj.name, rdtype)].answer_info: |
1308 | 1320 | if rrset_info.rrset.rdtype == dns.rdatatype.CNAME: |
1309 | rrset_node = self.add_rrset(rrset_info, None, name_obj, name_obj.zone, 0) | |
1321 | rrset_node = self.add_rrset(rrset_info, None, name_obj, name_obj.zone) | |
1310 | 1322 | if rrset_node not in self.node_mapping: |
1311 | 1323 | self.node_mapping[rrset_node] = [] |
1312 | 1324 | self.node_mapping[rrset_node].add(rrset_info) |
1388 | 1400 | |
1389 | 1401 | # add SOA |
1390 | 1402 | for soa_rrset_info in soa_rrsets: |
1391 | soa_rrset_node = self.add_rrset(soa_rrset_info, None, name_obj, parent_obj, 0) | |
1403 | soa_rrset_node = self.add_rrset(soa_rrset_info, None, name_obj, parent_obj) | |
1392 | 1404 | self.add_rrsigs(name_obj, parent_obj, soa_rrset_info, soa_rrset_node) |
1393 | 1405 | |
1394 | 1406 | # add mappings for negative responses |
0 | 0 | Metadata-Version: 1.1 |
1 | 1 | Name: dnsviz |
2 | Version: 0.6.6 | |
2 | Version: 0.8.0 | |
3 | 3 | Summary: DNS analysis and visualization tool suite |
4 | 4 | Home-page: https://github.com/dnsviz/dnsviz/ |
5 | 5 | Author: Casey Deccio |
18 | 18 | Classifier: Natural Language :: English |
19 | 19 | Classifier: Operating System :: MacOS :: MacOS X |
20 | 20 | Classifier: Operating System :: POSIX |
21 | Classifier: Programming Language :: Python :: 2.6 | |
22 | 21 | Classifier: Programming Language :: Python :: 2.7 |
23 | 22 | Classifier: Programming Language :: Python :: 3 |
24 | 23 | Classifier: Topic :: Internet :: Name Service (DNS) |
27 | 26 | Requires: pygraphviz (>=1.1) |
28 | 27 | Requires: m2crypto (>=0.24.0) |
29 | 28 | Requires: dnspython (>=1.11) |
29 | Requires: libnacl |
1 | 1 | LICENSE |
2 | 2 | MANIFEST.in |
3 | 3 | README.md |
4 | requirements.txt | |
5 | setup.cfg | |
4 | 6 | setup.py |
5 | 7 | bin/dnsviz |
6 | 8 | contrib/digviz |
7 | 9 | contrib/dnsviz-lg-ws.js |
8 | 10 | contrib/dnsviz-lg.cgi |
9 | contrib/m2crypto-0.23.patch | |
10 | contrib/m2crypto-pre0.23.patch | |
11 | contrib/dnsviz-py2.spec | |
12 | contrib/dnsviz-py3.spec | |
11 | 13 | contrib/rpm-install.sh |
12 | 14 | contrib/dnsviz-lg-java/net/dnsviz/applet/DNSLookingGlassApplet.java |
13 | 15 | contrib/dnsviz-lg-java/net/dnsviz/lookingglass/DNSLookingGlass.java |
45 | 47 | dnsviz/commands/__init__.py |
46 | 48 | dnsviz/commands/graph.py |
47 | 49 | dnsviz/commands/grok.py |
50 | dnsviz/commands/lookingglass.py | |
48 | 51 | dnsviz/commands/print.py |
49 | 52 | dnsviz/commands/probe.py |
50 | 53 | dnsviz/commands/query.py |
3 | 3 | .\" Created by Casey Deccio (casey@deccio.net) |
4 | 4 | .\" |
5 | 5 | .\" Copyright 2015-2016 VeriSign, Inc. |
6 | .\" | |
7 | .\" Copyright 2016-2019 Casey Deccio | |
6 | 8 | .\" |
7 | 9 | .\" DNSViz is free software; you can redistribute it and/or modify |
8 | 10 | .\" it under the terms of the GNU General Public License as published by |
17 | 19 | .\" You should have received a copy of the GNU General Public License along |
18 | 20 | .\" with DNSViz. If not, see <http://www.gnu.org/licenses/>. |
19 | 21 | .\" |
20 | .TH dnsviz-probe 1 "30 Jun 2017" "0.6.6" | |
22 | .TH dnsviz-probe 1 "25 Jan 2019" "0.8.0" | |
21 | 23 | .SH NAME |
22 | 24 | dnsviz-graph \- graph the assessment of diagnostic DNS queries |
23 | 25 | .SH SYNOPSIS |
66 | 68 | input. |
67 | 69 | .TP |
68 | 70 | .B -t \fIfilename\fR |
69 | Specify a file that contains trusted keys for processing diagnostic queries. | |
71 | Use trusted keys from the specified file when processing diagnostic queries. | |
70 | 72 | This overrides the default behavior of using the installed keys for the root |
71 | 73 | zone. |
72 | 74 | |
74 | 76 | records that correspond to one or more trusted keys for one or more DNS zones.
75 | 77 | |
76 | 78 | This option may be used multiple times on the command line. |
79 | .TP | |
80 | .B -C | |
81 | Enforce DNS cookies strictly. Require a server to return a "BADCOOKIE" response | |
82 | when a query contains a COOKIE option with no server cookie or with an invalid | |
83 | server cookie. | |
77 | 84 | .TP |
78 | 85 | .B -R \fItype\fR[,\fItype...\fI] |
79 | 86 | Process queries of only the specified type(s) (e.g., A, AAAA). The default is |
80 | 87 | to process all types queried as part of the diagnostic input. |
81 | 88 | .TP |
89 | .B -e | |
90 | Do not remove redundant RRSIG edges from the graph. | |
91 | ||
92 | As described in \fB"RRSIGs"\fR, some edges representing RRSIGs made by KSKs are | |
93 | removed from the graph to reduce visual complexity. If this option is used, | |
94 | those edges are preserved. | |
95 | .TP | |
82 | 96 | .B -O |
83 | 97 | Save the output to a file, whose name is derived from the format (i.e., |
84 | 98 | provided to \fB-T\fR) and the domain name. |
95 | 109 | contain the collective output for all domain names processed. |
96 | 110 | .TP |
97 | 111 | .B -T \fIformat\fR |
98 | Specify the format of the format from among the following: "dot", "png", "jpg", | |
99 | "svg", and "html". The default is "dot". | |
112 | Use the specified output format for the graph, selected from among the | |
113 | following: "dot", "png", "jpg", "svg", and "html". The default is "dot". | |
100 | 114 | .TP |
101 | 115 | .B -h |
102 | 116 | Display the usage and exit. |
464 | 478 | .IP 3 |
465 | 479 | There was an error processing the input or saving the output. |
466 | 480 | .IP 4 |
467 | Program execution was interrupted, or an unknown error ocurred. | |
481 | Program execution was interrupted, or an unknown error occurred. | |
468 | 482 | .SH SEE ALSO |
469 | 483 | .BR dnsviz(1), |
470 | 484 | .BR dnsviz-probe(1), |
3 | 3 | .\" Created by Casey Deccio (casey@deccio.net) |
4 | 4 | .\" |
5 | 5 | .\" Copyright 2015-2016 VeriSign, Inc. |
6 | .\" | |
7 | .\" Copyright 2016-2019 Casey Deccio | |
6 | 8 | .\" |
7 | 9 | .\" DNSViz is free software; you can redistribute it and/or modify |
8 | 10 | .\" it under the terms of the GNU General Public License as published by |
17 | 19 | .\" You should have received a copy of the GNU General Public License along |
18 | 20 | .\" with DNSViz. If not, see <http://www.gnu.org/licenses/>. |
19 | 21 | .\" |
20 | .TH dnsviz-grok 1 "30 Jun 2017" "0.6.6" | |
22 | .TH dnsviz-grok 1 "25 Jan 2019" "0.8.0" | |
21 | 23 | .SH NAME |
22 | 24 | dnsviz-grok \- assess diagnostic DNS queries |
23 | 25 | .SH SYNOPSIS |
64 | 66 | input. |
65 | 67 | .TP |
66 | 68 | .B -t \fIfilename\fR |
67 | Specify a file that contains trusted keys for processing diagnostic queries. | |
68 | The default is to not use any trusted keys. | |
69 | Use trusted keys from the specified file when processing diagnostic queries. | |
70 | This overrides the default behavior of using the installed keys for the root | |
71 | zone. | |
69 | 72 | |
70 | 73 | The format of this file is master zone file format and should contain DNSKEY |
71 | 74 | records that correspond to one more trusted keys for one or more DNS zones. |
72 | 75 | |
73 | 76 | This option may be used multiple times on the command line. |
74 | 77 | .TP |
78 | .B -C | |
79 | Enforce DNS cookies strictly. Require a server to return a "BADCOOKIE" response | |
80 | when a query contains a COOKIE option with no server cookie or with an invalid | |
81 | server cookie. | |
82 | .TP | |
75 | 83 | \fB-o\fR \fIfilename\fR |
76 | 84 | Write the output to the specified file instead of to standard output, which |
77 | 85 | is the default. |
78 | 86 | .TP |
79 | 87 | .B -c |
80 | Make JSON output minimal instead of "pretty" (i.e., remove indentation and | |
88 | Format JSON output minimally instead of "pretty" (i.e., with indentation and | |
81 | 89 | newlines). |
82 | 90 | .TP |
83 | 91 | .B -l \fIlevel\fR |
99 | 107 | .IP 3 |
100 | 108 | There was an error processing the input or saving the output. |
101 | 109 | .IP 4 |
102 | Program execution was interrupted, or an unknown error ocurred. | |
110 | Program execution was interrupted, or an unknown error occurred. | |
103 | 111 | .SH SEE ALSO |
104 | 112 | .BR dnsviz(1), |
105 | 113 | .BR dnsviz-probe(1), |
3 | 3 | .\" Created by Casey Deccio (casey@deccio.net) |
4 | 4 | .\" |
5 | 5 | .\" Copyright 2015-2016 VeriSign, Inc. |
6 | .\" | |
7 | .\" Copyright 2016-2019 Casey Deccio | |
6 | 8 | .\" |
7 | 9 | .\" DNSViz is free software; you can redistribute it and/or modify |
8 | 10 | .\" it under the terms of the GNU General Public License as published by |
17 | 19 | .\" You should have received a copy of the GNU General Public License along |
18 | 20 | .\" with DNSViz. If not, see <http://www.gnu.org/licenses/>. |
19 | 21 | .\" |
20 | .TH dnsviz-print 1 "30 Jun 2017" "0.6.6" | |
22 | .TH dnsviz-print 1 "25 Jan 2019" "0.8.0" | |
21 | 23 | .SH NAME |
22 | 24 | dnsviz-print \- print the assessment of diagnostic DNS queries |
23 | 25 | .SH SYNOPSIS |
66 | 68 | input. |
67 | 69 | .TP |
68 | 70 | .B -t \fIfilename\fR |
69 | Specify a file that contains trusted keys for processing diagnostic queries. | |
70 | This overrides the default behavior of using the built-in keys for the root | |
71 | Use trusted keys from the specified file when processing diagnostic queries. | |
72 | This overrides the default behavior of using the installed keys for the root | |
71 | 73 | zone. |
72 | 74 | |
73 | 75 | The format of this file is master zone file format and should contain DNSKEY |
74 | 76 | records that correspond to one or more trusted keys for one or more DNS zones.
75 | 77 | |
76 | 78 | This option may be used multiple times on the command line. |
79 | .TP | |
80 | .B -C | |
81 | Enforce DNS cookies strictly. Require a server to return a "BADCOOKIE" response | |
82 | when a query contains a COOKIE option with no server cookie or with an invalid | |
83 | server cookie. | |
77 | 84 | .TP |
78 | 85 | .B -R \fItype\fR[,\fItype...\fR] |
79 | 86 | Process queries of only the specified type(s) (e.g., A, AAAA). The default is |
290 | 297 | .IP 3 |
291 | 298 | There was an error processing the input or saving the output. |
292 | 299 | .IP 4 |
293 | Program execution was interrupted, or an unknown error ocurred. | |
300 | Program execution was interrupted, or an unknown error occurred. | |
294 | 301 | .SH SEE ALSO |
295 | 302 | .BR dnsviz(1), |
296 | 303 | .BR dnsviz-probe(1), |
3 | 3 | .\" Created by Casey Deccio (casey@deccio.net) |
4 | 4 | .\" |
5 | 5 | .\" Copyright 2015-2016 VeriSign, Inc. |
6 | .\" | |
7 | .\" Copyright 2016-2019 Casey Deccio | |
6 | 8 | .\" |
7 | 9 | .\" DNSViz is free software; you can redistribute it and/or modify |
8 | 10 | .\" it under the terms of the GNU General Public License as published by |
17 | 19 | .\" You should have received a copy of the GNU General Public License along |
18 | 20 | .\" with DNSViz. If not, see <http://www.gnu.org/licenses/>. |
19 | 21 | .\" |
20 | .TH dnsviz-probe 1 "30 Jun 2017" "0.6.6" | |
22 | .TH dnsviz-probe 1 "25 Jan 2019" "0.8.0" | |
21 | 23 | .SH NAME |
22 | 24 | dnsviz-probe \- issue diagnostic DNS queries |
23 | 25 | .SH SYNOPSIS |
61 | 63 | servers. Specify "-" to read from standard input. |
62 | 64 | .TP |
63 | 65 | .B -t \fIthreads\fR |
64 | Specify the number of threads to use for issuing diagnostic queries for | |
65 | different names in parallel. The default is to execute diagnostic queries of | |
66 | names serially. | |
66 | Issue diagnostic queries for different names in parallel using the specified | |
67 | number of threads. The default is to execute diagnostic queries of names | |
68 | serially. | |
67 | 69 | .TP |
68 | 70 | .B -4 |
69 | 71 | Use IPv4 only. |
72 | 74 | Use IPv6 only. |
73 | 75 | .TP |
74 | 76 | .B -b \fIaddress\fR |
75 | Specify a source IPv4 or IPv6 address for queries, rather than detecting it. | |
77 | Use the specified source IPv4 or IPv6 address for queries, rather than | |
78 | detecting it. | |
76 | 79 | |
77 | 80 | This option can be used more than once to supply both an IPv4 and an IPv6 |
78 | 81 | address. |
81 | 84 | and it is desirable to use the non-default interface for queries. |
82 | 85 | .TP |
83 | 86 | .B -u \fIurl\fR |
84 | Specify the URL (HTTP/HTTPS only) for a DNS looking glass that will send the | |
85 | diagnostic queries, rather than sending them locally. | |
87 | Issue queries through the DNS looking glass at the specified URL (HTTP(S) or | |
88 | SSH). The queries will appear to come from the looking glass rather than from | |
89 | the local machine. | |
86 | 90 | |
87 | 91 | .RS |
88 | 92 | .RS |
99 | 103 | Same, but use HTTP Basic authentication: |
100 | 104 | .P |
101 | 105 | http://username:password@www.example.com/cgi-bin/dnsviz-lg.cgi |
106 | .PD | |
107 | .P | |
108 | .PD 0 | |
109 | Issue DNS queries from host.example.com on which DNSViz is also installed. | |
110 | .P | |
111 | ssh://username@host.example.com | |
102 | 112 | .PD |
103 | 113 | .RE |
104 | 114 | .P |
109 | 119 | |
110 | 120 | .TP |
111 | 121 | .B -k |
112 | When \fB-u\fR is used to specify the URL of a DNS looking glass, don't verify | |
113 | the server-side TLS cert. | |
122 | Do not verify the server-side TLS certificate for an HTTPS-based DNS looking | |
123 | glass that was specified using \fB-u\fR. | |
114 | 124 | .TP |
115 | 125 | .B -a \fIancestor\fR |
116 | 126 | Issue diagnostic queries of each domain name through the specified ancestor. The |
126 | 136 | is a zone). |
127 | 137 | .TP |
128 | 138 | .B -s \fIserver\fR[,\fIserver...\fR] |
129 | Designate one or more servers for recursive queries, rather than using those | |
130 | specified in \fI/etc/resolv.conf\fR. | |
139 | Query the specified recursive resolver(s), rather than using those specified in | |
140 | \fI/etc/resolv.conf\fR. | |
131 | 141 | |
132 | 142 | Each server specified may either be an address (IPv4 or IPv6), a domain name |
133 | 143 | (which will be resolved to an address using the standard resolution process), |
169 | 179 | Query authoritative servers, rather than (the default) recursive servers. |
170 | 180 | .TP |
171 | 181 | .B -x \fIdomain\fR[\fB+\fR]\fB:\fR\fIserver\fR[,\fIserver...\fR] |
172 | Explicitly designate authoritative servers for a domain, rather than learning | |
173 | them by following delegations. This option dictates which servers will be | |
174 | queried for a domain, but the servers specified will not be used to check NS or | |
175 | glue record consistency with the child; for that behavior, see \fB-N\fR. | |
182 | Treat the specified servers as authoritative for a domain, rather than learning | |
183 | authoritative servers by following delegations. This option dictates which | |
184 | servers will be queried for a domain, but the servers specified will not be | |
185 | used to check NS or glue record consistency with the child; for that behavior, | |
186 | see \fB-N\fR. | |
176 | 187 | |
177 | 188 | The default behavior is to identify and query servers authoritative for |
178 | 189 | ancestors of the specified domain, if other options so dictate. However, if |
212 | 223 | .RE |
213 | 224 | .TP |
214 | 225 | .B -N \fIdomain\fR\fB:\fR\fIserver\fR[,\fIserver...\fR] |
215 | Specify delegation information for a domain, i.e., the NS and glue records for | |
216 | the domain, which would be served by the domain's parent. This is used for | |
217 | testing new delegations or testing a potential change to a delegation. | |
226 | Use the specified delegation information for a domain, i.e., the NS and glue | |
227 | records for the domain, which would be served by the domain's parent. This is | |
228 | used for testing new delegations or testing a potential change to a delegation. | |
218 | 229 | |
219 | 230 | This option has similar usage to that of the \fB-x\fR option. The major |
220 | 231 | difference is that the server names supplied comprise the NS record set, and |
227 | 238 | specified, which contains the delegation NS and glue records for the domain. |
228 | 239 | .TP |
229 | 240 | .B -D \fIdomain\fR\fB:\fR\fIds\fR[,\fIds...\fR] |
230 | Specify one or more delegation signer (DS) records for a domain. This is used | |
241 | Use the specified delegation signer (DS) records for a domain. This is used | |
231 | 242 | in conjunction with the \fB-N\fR option for testing the introduction or change |
232 | 243 | of DS records. |
233 | 244 | |
268 | 279 | .B -n |
269 | 280 | Use the NSID EDNS option with every DNS query issued. |
270 | 281 | .TP |
271 | .B -e \fIsubnet\fR[\fB:\fR\fIprefix\fR] | |
282 | .B -e \fIsubnet\fR[\fB:\fR\fIprefix_len\fR] | |
272 | 283 | Use the EDNS Client Subnet option with every DNS query issued, using the |
273 | specified \fIsubnet\fR and \fIprefix\fR as values. If \fIprefix\fR is not | |
284 | specified \fIsubnet\fR and \fIprefix_len\fR as values. If \fIprefix_len\fR is not | |
274 | 285 | specified, the prefix is the length of the entire address. |
275 | 286 | .TP |
287 | .B -c \fIcookie\fR | |
288 | Send the specified DNS client cookie with every DNS query issued. The value | |
289 | specified is for a client cookie only and thus should be exactly 64 bits long. | |
290 | The value for the cookie is specified using hexadecimal representation, e.g., | |
291 | deadbeef1580f00d. | |
292 | ||
293 | If the \fB-c\fR option is not used, a DNS client cookie is randomly generated | |
294 | and sent with every query. If an empty string is specified, then DNS cookies | |
295 | are disabled. | |
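As an aside on the cookie format described above: a client cookie is exactly 64 bits (8 bytes), written as 16 hexadecimal characters. A minimal Python sketch (not part of DNSViz; the function name is hypothetical) of generating one suitable for \fB-c\fR:

```python
import secrets

def make_client_cookie() -> str:
    """Return a random 64-bit DNS client cookie as 16 hex characters,
    e.g. "deadbeef1580f00d" -- the format the -c option expects."""
    return secrets.token_hex(8)  # 8 bytes == 64 bits

print(make_client_cookie())
```

Each run produces a different 16-character hex string, mirroring the default behavior of generating a random client cookie when \fB-c\fR is absent.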
296 | .TP | |
276 | 297 | .B -E |
277 | Include diagnostic DNS queries that can assess EDNS compatibility of servers. | |
298 | Issue queries to check EDNS compatibility of servers. | |
278 | 299 | |
279 | 300 | If this option is used, each server probed will be queried with "future" EDNS |
280 | 301 | settings, and the respective responses can later be assessed for proper behavior.
286 | 307 | is the default. |
287 | 308 | .TP |
288 | 309 | .B -p |
289 | Make JSON output "pretty" instead of minimal (i.e., using indentation and | |
290 | newlines). Note that this is the default when the output is a TTY. | |
310 | Output "pretty" instead of minimal JSON output, i.e., using indentation and | |
311 | newlines. Note that this is the default when the output is a TTY. | |
291 | 312 | .TP |
292 | 313 | .B -h |
293 | 314 | Display the usage and exit. |
303 | 324 | .IP 3 |
304 | 325 | There was an error processing the input or saving the output. |
305 | 326 | .IP 4 |
306 | Program execution was interrupted, or an unknown error ocurred. | |
327 | Program execution was interrupted, or an unknown error occurred. | |
307 | 328 | .SH SEE ALSO |
308 | 329 | .BR dnsviz(1), |
309 | 330 | .BR dnsviz-grok(1), |
3 | 3 | .\" Created by Casey Deccio (casey@deccio.net) |
4 | 4 | .\" |
5 | 5 | .\" Copyright 2015-2016 VeriSign, Inc. |
6 | .\" | |
7 | .\" Copyright 2016-2019 Casey Deccio | |
6 | 8 | .\" |
7 | 9 | .\" DNSViz is free software; you can redistribute it and/or modify |
8 | 10 | .\" it under the terms of the GNU General Public License as published by |
17 | 19 | .\" You should have received a copy of the GNU General Public License along |
18 | 20 | .\" with DNSViz. If not, see <http://www.gnu.org/licenses/>. |
19 | 21 | .\" |
20 | .TH dnsviz-query 1 "30 Jun 2017" "0.6.6" | |
22 | .TH dnsviz-query 1 "25 Jan 2019" "0.8.0" | |
21 | 23 | .SH NAME |
22 | 24 | dnsviz-query \- assess a DNS query |
23 | 25 | .SH SYNOPSIS |
107 | 109 | .IP 0 |
108 | 110 | Program terminated normally. |
109 | 111 | .IP 1 |
110 | Program execution was interrupted, or an unknown error ocurred. | |
112 | Program execution was interrupted, or an unknown error occurred. | |
111 | 113 | .SH SEE ALSO |
112 | 114 | .BR dnsviz(1), |
113 | 115 | .BR dnsviz-probe(1), |
3 | 3 | .\" Created by Casey Deccio (casey@deccio.net) |
4 | 4 | .\" |
5 | 5 | .\" Copyright 2015-2016 VeriSign, Inc. |
6 | .\" | |
7 | .\" Copyright 2016-2019 Casey Deccio | |
6 | 8 | .\" |
7 | 9 | .\" DNSViz is free software; you can redistribute it and/or modify |
8 | 10 | .\" it under the terms of the GNU General Public License as published by |
17 | 19 | .\" You should have received a copy of the GNU General Public License along |
18 | 20 | .\" with DNSViz. If not, see <http://www.gnu.org/licenses/>. |
19 | 21 | .\" |
20 | .TH dnsviz 1 "30 Jun 2017" "0.6.6" | |
22 | .TH dnsviz 1 "25 Jan 2019" "0.8.0" | |
21 | 23 | .SH NAME |
22 | 24 | dnsviz \- issue and assess diagnostic DNS queries |
23 | 25 | .SH SYNOPSIS |
24 | 26 | .P |
25 | 27 | .B dnsviz |
28 | [ \fIoptions \fR ] | |
26 | 29 | \fIcommand\fR |
27 | [ \fIoptions\fR ] | |
30 | [ \fIargs\fR ] | |
28 | 31 | .TP |
29 | 32 | .B dnsviz |
33 | [ \fIoptions \fR ] | |
30 | 34 | \fBhelp\fR [ \fIcommand\fR ] |
31 | 35 | .SH DESCRIPTION |
32 | 36 | .P |
43 | 47 | argument is "help", then the usage of the command specified by the second |
44 | 48 | argument is displayed. If no second argument is provided, then a general usage |
45 | 49 | message is given. |
50 | .SH OPTIONS | |
51 | .TP | |
52 | .B -p \fIpath\fR | |
53 | Add a path to the Python path. | |
46 | 54 | .SH COMMANDS |
47 | 55 | .TP |
48 | 56 | .B probe |
0 | [bdist_wheel] | |
1 | universal = 1 | |
2 | ||
3 | [metadata] | |
4 | license_files = | |
5 | LICENSE | |
6 | COPYRIGHT | |
7 | ||
0 | 8 | [egg_info] |
1 | 9 | tag_build = |
2 | 10 | tag_date = 0 |
32 | 32 | if os.path.exists(filename_out) and os.path.getctime(filename_out) > os.path.getctime(filename): |
33 | 33 | return |
34 | 34 | |
35 | in_fh = open(filename, 'r') | |
36 | out_fh = open(filename_out, 'w') | |
37 | s = in_fh.read() | |
35 | with open(filename, 'r') as in_fh: | |
36 | s = in_fh.read() | |
37 | ||
38 | 38 | s = s.replace('__DNSVIZ_INSTALL_PREFIX__', install_prefix) |
39 | 39 | s = s.replace('__JQUERY_PATH__', JQUERY_PATH) |
40 | 40 | s = s.replace('__JQUERY_UI_PATH__', JQUERY_UI_PATH) |
41 | 41 | s = s.replace('__JQUERY_UI_CSS_PATH__', JQUERY_UI_CSS_PATH) |
42 | 42 | s = s.replace('__RAPHAEL_PATH__', RAPHAEL_PATH) |
43 | out_fh.write(s) | |
44 | in_fh.close() | |
45 | out_fh.close() | |
43 | ||
44 | with open(filename_out, 'w') as out_fh: | |
45 | out_fh.write(s) | |
46 | 46 | |
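The substitution step in the hunk above can be sketched in isolation. The `__DNSVIZ_INSTALL_PREFIX__` placeholder name comes from the diff; the template text and single-placeholder signature here are simplified examples, not the real setup.py:

```python
# Minimal sketch of the config.py.in placeholder substitution shown above.
# Only __DNSVIZ_INSTALL_PREFIX__ is handled; the real setup.py replaces
# several placeholders (jQuery, Raphael paths, etc.) the same way.
def apply_substitutions(template_text, install_prefix):
    return template_text.replace('__DNSVIZ_INSTALL_PREFIX__', install_prefix)

template = "DNSVIZ_INSTALL_PREFIX = '__DNSVIZ_INSTALL_PREFIX__'\n"
print(apply_substitutions(template, '/usr/local'), end='')
```

The `with open(...)` rewrite in this hunk keeps the same read-replace-write flow as the old code while guaranteeing the file handles are closed even if a replacement step raises.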
47 | 47 | def make_documentation(): |
48 | 48 | os.chdir('doc') |
51 | 51 | sys.stderr.write('Warning: Some of the included documentation failed to build. Proceeding without it.\n') |
52 | 52 | finally: |
53 | 53 | os.chdir('..') |
54 | ||
55 | def create_config(prefix): | |
56 | # Create dnsviz/config.py, so version exists for packages that don't | |
57 | # require calling install. Even though the install prefix is the empty | |
58 | # string, the use case for this is virtual environments, which won't | |
59 | # use it. | |
60 | apply_substitutions(os.path.join('dnsviz','config.py.in'), prefix) | |
61 | # update the timestamp of config.py.in, so if/when the install command | |
62 | # is called, config.py will be rewritten, i.e., with the real install | |
63 | # prefix. | |
64 | os.utime(os.path.join('dnsviz', 'config.py.in'), None) | |
54 | 65 | |
55 | 66 | class MyBuildPy(build_py): |
56 | 67 | def run(self): |
65 | 76 | install_data = os.path.join(os.path.sep, os.path.relpath(self.install_data, self.root)) |
66 | 77 | else: |
67 | 78 | install_data = self.install_data |
68 | apply_substitutions(os.path.join('dnsviz','config.py.in'), install_data) | |
79 | create_config(install_data) | |
69 | 80 | install.run(self) |
70 | 81 | |
71 | 82 | DOC_FILES = [('share/doc/dnsviz', ['README.md'])] |
104 | 115 | else: |
105 | 116 | map_func = lambda x: codecs.decode(x, 'latin1') |
106 | 117 | |
118 | create_config('') | |
107 | 119 | setup(name='dnsviz', |
108 | version='0.6.6', | |
120 | version='0.8.0', | |
109 | 121 | author='Casey Deccio', |
110 | 122 | author_email='casey@deccio.net', |
111 | 123 | url='https://github.com/dnsviz/dnsviz/', |
122 | 134 | 'pygraphviz (>=1.1)', |
123 | 135 | 'm2crypto (>=0.24.0)', |
124 | 136 | 'dnspython (>=1.11)', |
137 | 'libnacl', | |
125 | 138 | ], |
126 | 139 | classifiers=[ |
127 | 140 | 'Development Status :: 5 - Production/Stable', |
133 | 146 | 'Natural Language :: English', |
134 | 147 | 'Operating System :: MacOS :: MacOS X', |
135 | 148 | 'Operating System :: POSIX', |
136 | 'Programming Language :: Python :: 2.6', | |
137 | 149 | 'Programming Language :: Python :: 2.7', |
138 | 150 | 'Programming Language :: Python :: 3', |
139 | 151 | 'Topic :: Internet :: Name Service (DNS)', |
0 | 0 | /* |
1 | 1 | # |
2 | 2 | # This file is a part of DNSViz, a tool suite for DNS/DNSSEC monitoring, |
3 | # analysis, and visualization. This file (or some portion thereof) is a | |
4 | # derivative work authored by VeriSign, Inc., and created in 2014, based on | |
5 | # code originally developed at Sandia National Laboratories. | |
3 | # analysis, and visualization. | |
6 | 4 | # Created by Casey Deccio (casey@deccio.net) |
7 | 5 | # |
8 | 6 | # Copyright 2012-2014 Sandia Corporation. Under the terms of Contract |
10 | 8 | # certain rights in this software. |
11 | 9 | # |
12 | 10 | # Copyright 2014-2016 VeriSign, Inc. |
11 | # | |
12 | # Copyright 2016-2019 Casey Deccio | |
13 | 13 | # |
14 | 14 | # DNSViz is free software; you can redistribute it and/or modify |
15 | 15 | # it under the terms of the GNU General Public License as published by |
54 | 54 | 12: 'GOST R 34.10-2001', |
55 | 55 | 13: 'ECDSA Curve P-256 with SHA-256', |
56 | 56 | 14: 'ECDSA Curve P-384 with SHA-384', |
57 | 15: 'Ed25519', | |
58 | 16: 'Ed448', | |
57 | 59 | } |
58 | 60 | this._digest_algorithms = { |
59 | 61 | 1: 'SHA-1', |