New Upstream Release - tldextract

Ready changes

Summary

Merged new upstream version: 3.4.4 (was: 3.4.0).

Resulting package

Built on 2023-07-10T02:26 (took 4m53s)

The resulting binary packages can be installed (if you have the apt repository enabled) by running one of:

apt install -t fresh-releases python3-tldextract
apt install -t fresh-releases tldextract
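
A quick sanity check after installing, assuming both binary packages are installed and the fresh-releases repository is enabled (the expected CLI output is taken from the upstream README below):

apt policy python3-tldextract
tldextract http://forums.bbc.co.uk
# forums bbc co.uk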

Lintian Result

Diff

diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml
new file mode 100644
index 0000000..6e648bd
--- /dev/null
+++ b/.github/FUNDING.yml
@@ -0,0 +1 @@
+github: [john-kurkowski]
diff --git a/.travis.yml b/.travis.yml
index 4527f47..8492f24 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,19 +1,23 @@
-sudo: false
+dist: focal
 language: python
 matrix:
   include:
-    - python: 3.6
-      env: TOXENV=py36
-    - python: 3.7
+    - python: "3.7"
       env: TOXENV=py37
-    - python: 3.8
+    - python: "3.8"
       env: TOXENV=py38
-    - python: 3.9
+    - python: "3.9"
       env: TOXENV=py39
-    - python: pypy3
+    - python: "3.10"
+      env: TOXENV=py310
+    - python: "3.11"
+      env: TOXENV=py311
+    - python: pypy3.7-7.3.5
+      dist: xenial
       env: TOXENV=pypy3
     - env: TOXENV=codestyle
     - env: TOXENV=lint
-python: 3.9
+    - env: TOXENV=typecheck
+python: "3.10"
 install: pip install tox
 script: tox
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 34f0fa3..4e8ff7e 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -3,6 +3,91 @@
 After upgrading, update your cache file by deleting it or via `tldextract
 --update`.
 
+## 3.4.4 (2023-05-19)
+
+* Bugfixes
+  * Honor private domains flag on `self`, not only when passed to `__call__` ([#289](https://github.com/john-kurkowski/tldextract/issues/289))
+
+## 3.4.3 (2023-05-18)
+
+* Bugfixes
+  * Speed up 10-15% over all inputs
+    * Refactor `suffix_index()` to use a trie ([#285](https://github.com/john-kurkowski/tldextract/issues/285))
+* Docs
+  * Adopt PEP257 doc style
+
+## 3.4.2 (2023-05-16)
+
+* Bugfixes
+  * Speed up 10-40% on "average" inputs, and even more on pathological inputs, like long subdomains
+    * Optimize `suffix_index()`: search from right to left ([#283](https://github.com/john-kurkowski/tldextract/issues/283))
+    * Optimize netloc extraction: switch from regex to if/else ([#284](https://github.com/john-kurkowski/tldextract/issues/284))
+
+## 3.4.1 (2023-04-26)
+
+* Bugfixes
+  * Fix Pyright not finding tldextract public interface ([#279](https://github.com/john-kurkowski/tldextract/issues/279))
+  * Fix various Pyright checks
+  * Use SPDX license identifier ([#280](https://github.com/john-kurkowski/tldextract/issues/280))
+  * Support Python 3.11
+* Docs
+  * Add FAQ about private domains
+* Misc.
+  * Update bundled snapshot
+  * Fix lint in newer pylint
+
+## 3.4.0 (2022-10-04)
+
+* Features
+  * Add method `extract_urllib` to extract from a `urllib.parse.{ParseResult,SplitResult}` ([#274](https://github.com/john-kurkowski/tldextract/issues/274))
+* Bugfixes
+  * Fix internal type-var error, in newer versions of mypy ([#275](https://github.com/john-kurkowski/tldextract/issues/275))
+
+## 3.3.1 (2022-07-08)
+
+* Bugfixes
+  * Fix documented types, in README and in exception message ([#265](https://github.com/john-kurkowski/tldextract/issues/265))
+* Misc.
+  * Format source code
+
+## 3.3.0 (2022-05-04)
+
+* Features
+  * Add CLI flag `--suffix_list_url` to set the suffix list URL(s) or source file(s) ([#197](https://github.com/john-kurkowski/tldextract/issues/197))
+  * Add CLI flag `--no_fallback_to_snapshot` to not fall back to the snapshot ([#260](https://github.com/john-kurkowski/tldextract/issues/260))
+  * Add alias `--include_psl_private_domains` for CLI flag `--private_domains`
+* Bugfixes
+  * Handle more internationalized domain name dots ([#253](https://github.com/john-kurkowski/tldextract/issues/253))
+* Misc.
+  * Update bundled snapshot
+  * Add basic CLI test coverage
+
+## 3.2.1 (2022-04-11)
+
+* Bugfixes
+  * Fix incorrect namespace used for caching function returns ([#258](https://github.com/john-kurkowski/tldextract/issues/258))
+  * Remove redundant encode ([`6e2c0e0`](https://github.com/john-kurkowski/tldextract/commit/6e2c0e0))
+  * Remove redundant lowercase ([`226bfc2`](https://github.com/john-kurkowski/tldextract/commit/226bfc2))
+  * Remove unused `try`/`except` path ([#255](https://github.com/john-kurkowski/tldextract/issues/255))
+  * Add types to the private API (disallow untyped calls and defs) ([#256](https://github.com/john-kurkowski/tldextract/issues/256))
+  * Rely on `python_requires` instead of runtime check ([#247](https://github.com/john-kurkowski/tldextract/issues/247))
+* Docs
+  * Fix docs with updated types
+  * Fix link in Travis CI badge ([#248](https://github.com/john-kurkowski/tldextract/issues/248))
+  * Rewrite documentation intro
+  * Remove unnecessary subheading
+  * Unify case
+
+## 3.2.0 (2022-02-20)
+
+* Features
+    * Add types to the public API ([#244](https://github.com/john-kurkowski/tldextract/issues/244))
+* Bugfixes
+    * Add support for Python 3.10 ([#246](https://github.com/john-kurkowski/tldextract/issues/246))
+    * Drop support for EOL Python 3.6 ([#246](https://github.com/john-kurkowski/tldextract/issues/246))
+    * Remove py2 tag from wheel ([#245](https://github.com/john-kurkowski/tldextract/issues/245))
+    * Remove extra backtick in README ([#240](https://github.com/john-kurkowski/tldextract/issues/240))
+
 ## 3.1.2 (2021-09-01)
 
 * Misc.
diff --git a/MANIFEST.in b/MANIFEST.in
index e29f070..f0350d5 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -1,3 +1,4 @@
 include tldextract/.tld_set_snapshot
+include tldextract/py.typed
 include LICENSE
 recursive-include tests *.py *.dat
diff --git a/README.md b/README.md
index 8cbc6dc..cfb9382 100644
--- a/README.md
+++ b/README.md
@@ -1,22 +1,22 @@
-# tldextract
+# tldextract [![PyPI version](https://badge.fury.io/py/tldextract.svg)](https://badge.fury.io/py/tldextract) [![Build Status](https://travis-ci.com/john-kurkowski/tldextract.svg?branch=master)](https://app.travis-ci.com/github/john-kurkowski/tldextract)
 
-## Python Module [![PyPI version](https://badge.fury.io/py/tldextract.svg)](https://badge.fury.io/py/tldextract) [![Build Status](https://travis-ci.com/john-kurkowski/tldextract.svg?branch=master)](https://travis-ci.com/john-kurkowski/tldextract)
+`tldextract` accurately separates a URL's subdomain, domain, and public suffix,
+using [the Public Suffix List (PSL)](https://publicsuffix.org).
 
-`tldextract` accurately separates the gTLD or ccTLD (generic or country code
-top-level domain) from the registered domain and subdomains of a URL. For
-example, say you want just the 'google' part of 'http://www.google.com'.
+Say you want just the "google" part of https://www.google.com. *Everybody gets
+this wrong.* Splitting on the "." and taking the 2nd-to-last element only works
+for simple domains, e.g. .com. Consider
+[http://forums.bbc.co.uk](http://forums.bbc.co.uk): the naive splitting method
+will give you "co" as the domain, instead of "bbc". Rather than juggle TLDs,
+gTLDs, or ccTLDs  yourself, `tldextract` extracts the currently living public
+suffixes according to [the Public Suffix List](https://publicsuffix.org).
 
-*Everybody gets this wrong.* Splitting on the '.' and taking the last 2
-elements goes a long way only if you're thinking of simple e.g. .com
-domains. Think parsing
-[http://forums.bbc.co.uk](http://forums.bbc.co.uk) for example: the naive
-splitting method above will give you 'co' as the domain and 'uk' as the TLD,
-instead of 'bbc' and 'co.uk' respectively.
+> A "public suffix" is one under which Internet users can directly register
+> names.
 
-`tldextract` on the other hand knows what all gTLDs and ccTLDs look like by
-looking up the currently living ones according to [the Public Suffix List
-(PSL)](http://www.publicsuffix.org). So, given a URL, it knows its subdomain
-from its domain, and its domain from its country code.
+A public suffix is also sometimes called an effective TLD (eTLD).
+
+## Usage
 
 ```python
 >>> import tldextract
@@ -75,13 +75,13 @@ or suffix were found:
 By default, this package supports the public ICANN TLDs and their exceptions.
 You can optionally support the Public Suffix List's private domains as well.
 
-This module started by implementing the chosen answer from [this StackOverflow question on
+This package started by implementing the chosen answer from [this StackOverflow question on
 getting the "domain name" from a URL](http://stackoverflow.com/questions/569137/how-to-get-domain-name-from-url/569219#569219).
 However, the proposed regex solution doesn't address many country codes like
 com.au, or the exceptions to country codes like the registered domain
-parliament.uk. The Public Suffix List does, and so does this module.
+parliament.uk. The Public Suffix List does, and so does this package.
 
-### Installation
+## Install
 
 Latest release on PyPI:
 
@@ -95,17 +95,17 @@ Or the latest dev version:
 pip install -e 'git://github.com/john-kurkowski/tldextract.git#egg=tldextract'
 ```
 
-Command-line usage, splits the url components by space:
+Command-line usage, splits the URL components by space:
 
 ```zsh
 tldextract http://forums.bbc.co.uk
 # forums bbc co.uk
 ```
 
-### Note About Caching
+## Note about caching
 
-Beware when first running the module, it updates its TLD list with a live HTTP
-request. This updated TLD set is usually cached indefinitely in ``$HOME/.cache/python-tldextract`.
+Beware when first calling `tldextract`, it updates its TLD list with a live HTTP
+request. This updated TLD set is usually cached indefinitely in `$HOME/.cache/python-tldextract`.
 To control the cache's location, set TLDEXTRACT_CACHE environment variable or set the
 cache_dir path in TLDExtract initialization.
 
@@ -116,7 +116,7 @@ when I haven't kept this code up to date.)
 
 ```python
 # extract callable that falls back to the included TLD snapshot, no live HTTP fetching
-no_fetch_extract = tldextract.TLDExtract(suffix_list_urls=None)
+no_fetch_extract = tldextract.TLDExtract(suffix_list_urls=())
 no_fetch_extract('http://www.google.com')
 
 # extract callable that reads/writes the updated TLD set to a different path
@@ -124,7 +124,7 @@ custom_cache_extract = tldextract.TLDExtract(cache_dir='/path/to/your/cache/')
 custom_cache_extract('http://www.google.com')
 
 # extract callable that doesn't use caching
-no_cache_extract = tldextract.TLDExtract(cache_dir=False)
+no_cache_extract = tldextract.TLDExtract(cache_dir=None)
 no_cache_extract('http://www.google.com')
 ```
 
@@ -143,9 +143,9 @@ env TLDEXTRACT_CACHE="~/tldextract.cache" tldextract --update
 
 It is also recommended to delete the file after upgrading this lib.
 
-### Advanced Usage
+## Advanced usage
 
-#### Public vs. Private Domains
+### Public vs. private domains
 
 The PSL [maintains a concept of "private"
 domains](https://publicsuffix.org/list/).
@@ -179,11 +179,11 @@ ExtractResult(subdomain='', domain='waiterrant', suffix='blogspot.com')
 ```
 
 The thinking behind the default is, it's the more common case when people
-mentally parse a URL. It doesn't assume familiarity with the PSL nor that the
-PSL makes such a distinction. Note this may run counter to the default parsing
-behavior of other, PSL-based libraries.
+mentally parse a domain name. It doesn't assume familiarity with the PSL nor
+that the PSL makes a public/private distinction. Note this default may run
+counter to the default parsing behavior of other, PSL-based libraries.
 
-#### Specifying your own URL or file for the Suffix List data
+### Specifying your own URL or file for Public Suffix List data
 
 You can specify your own input data in place of the default Mozilla Public Suffix List:
 
@@ -211,9 +211,18 @@ extract = tldextract.TLDExtract(
 Use an absolute path when specifying the `suffix_list_urls` keyword argument.
 `os.path` is your friend.
 
-### FAQ
+The command line update command can be used with a URL or local file you specify:
+
+```zsh
+tldextract --update --suffix_list_url "http://foo.bar.baz"
+```
+
+This could be useful in production when you don't want the delay associated with updating the suffix
+list on first use, or if you are behind a complex firewall that prevents a simple update from working.
+
+## FAQ
 
-#### Can you add suffix \_\_\_\_? Can you make an exception for domain \_\_\_\_?
+### Can you add suffix \_\_\_\_? Can you make an exception for domain \_\_\_\_?
 
 This project doesn't contain an actual list of public suffixes. That comes from
 [the Public Suffix List (PSL)](https://publicsuffix.org/). Submit amendments there.
@@ -222,16 +231,32 @@ This project doesn't contain an actual list of public suffixes. That comes from
 forking the PSL and using your fork in the `suffix_list_urls` param, or adding
 your suffix piecemeal with the `extra_suffixes` param.)
 
-#### If I pass an invalid URL, I still get a result, no error. What gives?
+### I see my suffix in [the Public Suffix List (PSL)](https://publicsuffix.org/), but this library doesn't extract it.
+
+Check if your suffix is in the private section of the list. See [this
+documentation](#public-vs-private-domains).
+
+### If I pass an invalid URL, I still get a result, no error. What gives?
 
 To keep `tldextract` light in LoC & overhead, and because there are plenty of
 URL validators out there, this library is very lenient with input. If valid
 URLs are important to you, validate them before calling `tldextract`.
 
-This lenient stance lowers the learning curve of using the library, at the cost
-of desensitizing users to the nuances of URLs. Who knows how much. But in the
-future, I would consider an overhaul. For example, users could opt into
-validation, either receiving exceptions or error metadata on results.
+To avoid parsing a string twice, you can pass `tldextract` the output of
+[`urllib.parse`](https://docs.python.org/3/library/urllib.parse.html) methods.
+For example:
+
+```py
+extractor = TLDExtract()
+split_url = urllib.parse.urlsplit("https://foo.bar.com:8080")
+split_suffix = extractor.extract_urllib(split_url)
+url_to_crawl = f"{split_url.scheme}://{split_suffix.registered_domain}:{split_url.port}"
+```
+
+`tldextract`'s lenient string parsing stance lowers the learning curve of using
+the library, at the cost of desensitizing users to the nuances of URLs. This
+could be overhauled. For example, users could opt into validation, either
+receiving exceptions or error metadata on results.
 
 ## Contribute
 
@@ -241,7 +266,7 @@ validation, either receiving exceptions or error metadata on results.
 2. Change into the new directory.
 3. `pip install tox`
 
-### Running the Test Suite
+### Running the test suite
 
 Run all tests against all supported Python versions:
 
@@ -255,3 +280,12 @@ Run all tests against a specific Python environment configuration:
 tox -l
 tox -e py37
 ```
+
+### Code Style
+
+Automatically format all code:
+
+```zsh
+pip install black
+black .
+```
diff --git a/conftest.py b/conftest.py
index 1ae092b..55df60e 100644
--- a/conftest.py
+++ b/conftest.py
@@ -1,3 +1,3 @@
-'''py.test standard config file.'''
+"""py.test standard config file."""
 
-collect_ignore = ('setup.py',)
+collect_ignore = ("setup.py",)
diff --git a/debian/changelog b/debian/changelog
index bb77f1b..dafc869 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,9 +1,11 @@
-tldextract (3.1.2-3) UNRELEASED; urgency=medium
+tldextract (3.4.4-1) UNRELEASED; urgency=medium
 
   * Set upstream metadata fields: Bug-Database, Bug-Submit, Repository-Browse.
   * Update standards version to 4.6.1, no changes needed.
+  * New upstream release.
+  * New upstream release.
 
- -- Debian Janitor <janitor@jelmer.uk>  Mon, 31 Oct 2022 19:52:35 -0000
+ -- Debian Janitor <janitor@jelmer.uk>  Mon, 10 Jul 2023 02:21:41 -0000
 
 tldextract (3.1.2-2) unstable; urgency=medium
 
diff --git a/debian/patches/pubsuffix.diff b/debian/patches/pubsuffix.diff
index 43210d4..4b64634 100644
--- a/debian/patches/pubsuffix.diff
+++ b/debian/patches/pubsuffix.diff
@@ -5,9 +5,11 @@ Description: Adds list of suffixes from package publicsuffix
 Author: Ana Custura <ana@netstat.org.uk>
 ---
 This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
---- a/tldextract/tldextract.py
-+++ b/tldextract/tldextract.py
-@@ -66,6 +66,7 @@
+Index: python-tldextract.git/tldextract/tldextract.py
+===================================================================
+--- python-tldextract.git.orig/tldextract/tldextract.py
++++ python-tldextract.git/tldextract/tldextract.py
+@@ -78,6 +78,7 @@ LOG = logging.getLogger("tldextract")
  CACHE_TIMEOUT = os.environ.get("TLDEXTRACT_CACHE_TIMEOUT")
  
  PUBLIC_SUFFIX_LIST_URLS = (
diff --git a/pylintrc b/pylintrc
deleted file mode 100644
index 5c4bac2..0000000
--- a/pylintrc
+++ /dev/null
@@ -1,588 +0,0 @@
-[MASTER]
-
-# A comma-separated list of package or module names from where C extensions may
-# be loaded. Extensions are loading into the active Python interpreter and may
-# run arbitrary code.
-extension-pkg-whitelist=
-
-# Specify a score threshold to be exceeded before program exits with error.
-fail-under=10.0
-
-# Add files or directories to the blacklist. They should be base names, not
-# paths.
-ignore=CVS
-
-# Add files or directories matching the regex patterns to the blacklist. The
-# regex matches against base names, not paths.
-#
-# See https://github.com/carsongee/pytest-pylint/issues/137
-# ignore-patterns=
-
-# Python code to execute, usually for sys.path manipulation such as
-# pygtk.require().
-#init-hook=
-
-# Use multiple processes to speed up Pylint. Specifying 0 will auto-detect the
-# number of processors available to use.
-jobs=1
-
-# Control the amount of potential inferred values when inferring a single
-# object. This can help the performance when dealing with large functions or
-# complex, nested conditions.
-limit-inference-results=100
-
-# List of plugins (as comma separated values of python module names) to load,
-# usually to register additional checkers.
-load-plugins=
-
-# Pickle collected data for later comparisons.
-persistent=yes
-
-# When enabled, pylint would attempt to guess common misconfiguration and emit
-# user-friendly hints instead of false-positive error messages.
-suggestion-mode=yes
-
-# Allow loading of arbitrary C extensions. Extensions are imported into the
-# active Python interpreter and may run arbitrary code.
-unsafe-load-any-extension=no
-
-
-[MESSAGES CONTROL]
-
-# Only show warnings with the listed confidence levels. Leave empty to show
-# all. Valid levels: HIGH, INFERENCE, INFERENCE_FAILURE, UNDEFINED.
-confidence=
-
-# Disable the message, report, category or checker with the given id(s). You
-# can either give multiple identifiers separated by comma (,) or put this
-# option multiple times (only on the command line, not in the configuration
-# file where it should appear only once). You can also use "--disable=all" to
-# disable everything first and then reenable specific checks. For example, if
-# you want to run only the similarities checker, you can use "--disable=all
-# --enable=similarities". If you want to run only the classes checker, but have
-# no Warning level messages displayed, use "--disable=all --enable=classes
-# --disable=W".
-disable=print-statement,
-        parameter-unpacking,
-        unpacking-in-except,
-        old-raise-syntax,
-        backtick,
-        long-suffix,
-        old-ne-operator,
-        old-octal-literal,
-        import-star-module-level,
-        non-ascii-bytes-literal,
-        raw-checker-failed,
-        bad-inline-option,
-        locally-disabled,
-        file-ignored,
-        suppressed-message,
-        useless-suppression,
-        deprecated-pragma,
-        use-symbolic-message-instead,
-        apply-builtin,
-        basestring-builtin,
-        buffer-builtin,
-        cmp-builtin,
-        coerce-builtin,
-        execfile-builtin,
-        file-builtin,
-        long-builtin,
-        raw_input-builtin,
-        reduce-builtin,
-        standarderror-builtin,
-        unicode-builtin,
-        xrange-builtin,
-        coerce-method,
-        delslice-method,
-        getslice-method,
-        setslice-method,
-        no-absolute-import,
-        old-division,
-        dict-iter-method,
-        dict-view-method,
-        next-method-called,
-        metaclass-assignment,
-        indexing-exception,
-        raising-string,
-        reload-builtin,
-        oct-method,
-        hex-method,
-        nonzero-method,
-        cmp-method,
-        input-builtin,
-        round-builtin,
-        intern-builtin,
-        unichr-builtin,
-        map-builtin-not-iterating,
-        zip-builtin-not-iterating,
-        range-builtin-not-iterating,
-        filter-builtin-not-iterating,
-        using-cmp-argument,
-        eq-without-hash,
-        div-method,
-        idiv-method,
-        rdiv-method,
-        exception-message-attribute,
-        invalid-str-codec,
-        sys-max-int,
-        bad-python3-import,
-        deprecated-string-function,
-        deprecated-str-translate-call,
-        deprecated-itertools-function,
-        deprecated-types-field,
-        next-method-defined,
-        dict-items-not-iterating,
-        dict-keys-not-iterating,
-        dict-values-not-iterating,
-        deprecated-operator-function,
-        deprecated-urllib-function,
-        xreadlines-attribute,
-        deprecated-sys-function,
-        exception-escape,
-        comprehension-escape
-
-# Enable the message, report, category or checker with the given id(s). You can
-# either give multiple identifier separated by comma (,) or put this option
-# multiple time (only on the command line, not in the configuration file where
-# it should appear only once). See also the "--disable" option for examples.
-enable=c-extension-no-member
-
-
-[REPORTS]
-
-# Python expression which should return a score less than or equal to 10. You
-# have access to the variables 'error', 'warning', 'refactor', and 'convention'
-# which contain the number of messages in each category, as well as 'statement'
-# which is the total number of statements analyzed. This score is used by the
-# global evaluation report (RP0004).
-evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
-
-# Template used to display messages. This is a python new-style format string
-# used to format the message information. See doc for all details.
-#msg-template=
-
-# Set the output format. Available formats are text, parseable, colorized, json
-# and msvs (visual studio). You can also give a reporter class, e.g.
-# mypackage.mymodule.MyReporterClass.
-output-format=text
-
-# Tells whether to display a full report or only the messages.
-reports=no
-
-# Activate the evaluation score.
-score=yes
-
-
-[REFACTORING]
-
-# Maximum number of nested blocks for function / method body
-max-nested-blocks=5
-
-# Complete name of functions that never returns. When checking for
-# inconsistent-return-statements if a never returning function is called then
-# it will be considered as an explicit return statement and no message will be
-# printed.
-never-returning-functions=sys.exit
-
-
-[LOGGING]
-
-# The type of string formatting that logging methods do. `old` means using %
-# formatting, `new` is for `{}` formatting.
-logging-format-style=old
-
-# Logging modules to check that the string format arguments are in logging
-# function parameter format.
-logging-modules=logging
-
-
-[SPELLING]
-
-# Limits count of emitted suggestions for spelling mistakes.
-max-spelling-suggestions=4
-
-# Spelling dictionary name. Available dictionaries: none. To make it work,
-# install the python-enchant package.
-spelling-dict=
-
-# List of comma separated words that should not be checked.
-spelling-ignore-words=
-
-# A path to a file that contains the private dictionary; one word per line.
-spelling-private-dict-file=
-
-# Tells whether to store unknown words to the private dictionary (see the
-# --spelling-private-dict-file option) instead of raising a message.
-spelling-store-unknown-words=no
-
-
-[MISCELLANEOUS]
-
-# List of note tags to take in consideration, separated by a comma.
-notes=
-
-# Regular expression of note tags to take in consideration.
-#notes-rgx=
-
-
-[TYPECHECK]
-
-# List of decorators that produce context managers, such as
-# contextlib.contextmanager. Add to this list to register other decorators that
-# produce valid context managers.
-contextmanager-decorators=contextlib.contextmanager
-
-# List of members which are set dynamically and missed by pylint inference
-# system, and so shouldn't trigger E1101 when accessed. Python regular
-# expressions are accepted.
-generated-members=
-
-# Tells whether missing members accessed in mixin class should be ignored. A
-# mixin class is detected if its name ends with "mixin" (case insensitive).
-ignore-mixin-members=yes
-
-# Tells whether to warn about missing members when the owner of the attribute
-# is inferred to be None.
-ignore-none=yes
-
-# This flag controls whether pylint should warn about no-member and similar
-# checks whenever an opaque object is returned when inferring. The inference
-# can return multiple potential results while evaluating a Python object, but
-# some branches might not be evaluated, which results in partial inference. In
-# that case, it might be useful to still emit no-member and other checks for
-# the rest of the inferred objects.
-ignore-on-opaque-inference=yes
-
-# List of class names for which member attributes should not be checked (useful
-# for classes with dynamically set attributes). This supports the use of
-# qualified names.
-ignored-classes=optparse.Values,thread._local,_thread._local
-
-# List of module names for which member attributes should not be checked
-# (useful for modules/projects where namespaces are manipulated during runtime
-# and thus existing member attributes cannot be deduced by static analysis). It
-# supports qualified module names, as well as Unix pattern matching.
-ignored-modules=
-
-# Show a hint with possible names when a member name was not found. The aspect
-# of finding the hint is based on edit distance.
-missing-member-hint=yes
-
-# The minimum edit distance a name should have in order to be considered a
-# similar match for a missing member name.
-missing-member-hint-distance=1
-
-# The total number of similar names that should be taken in consideration when
-# showing a hint for a missing member.
-missing-member-max-choices=1
-
-# List of decorators that change the signature of a decorated function.
-signature-mutators=
-
-
-[VARIABLES]
-
-# List of additional names supposed to be defined in builtins. Remember that
-# you should avoid defining new builtins when possible.
-additional-builtins=
-
-# Tells whether unused global variables should be treated as a violation.
-allow-global-unused-variables=yes
-
-# List of strings which can identify a callback function by name. A callback
-# name must start or end with one of those strings.
-callbacks=cb_,
-          _cb
-
-# A regular expression matching the name of dummy variables (i.e. expected to
-# not be used).
-dummy-variables-rgx=_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_
-
-# Argument names that match this expression will be ignored. Default to name
-# with leading underscore.
-ignored-argument-names=_.*|^ignored_|^unused_
-
-# Tells whether we should check for unused import in __init__ files.
-init-import=no
-
-# List of qualified module names which can have objects that can redefine
-# builtins.
-redefining-builtins-modules=six.moves,past.builtins,future.builtins,builtins,io
-
-
-[FORMAT]
-
-# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.
-expected-line-ending-format=
-
-# Regexp for a line that is allowed to be longer than the limit.
-ignore-long-lines=^\s*(# )?<?https?://\S+>?$
-
-# Number of spaces of indent required inside a hanging or continued line.
-indent-after-paren=4
-
-# String used as indentation unit. This is usually "    " (4 spaces) or "\t" (1
-# tab).
-indent-string='    '
-
-# Maximum number of characters on a single line.
-max-line-length=100
-
-# Maximum number of lines in a module.
-max-module-lines=1000
-
-# Allow the body of a class to be on the same line as the declaration if body
-# contains single statement.
-single-line-class-stmt=no
-
-# Allow the body of an if to be on the same line as the test if there is no
-# else.
-single-line-if-stmt=no
-
-
-[SIMILARITIES]
-
-# Ignore comments when computing similarities.
-ignore-comments=yes
-
-# Ignore docstrings when computing similarities.
-ignore-docstrings=yes
-
-# Ignore imports when computing similarities.
-ignore-imports=no
-
-# Minimum lines number of a similarity.
-min-similarity-lines=4
-
-
-[BASIC]
-
-# Naming style matching correct argument names.
-argument-naming-style=snake_case
-
-# Regular expression matching correct argument names. Overrides argument-
-# naming-style.
-#argument-rgx=
-
-# Naming style matching correct attribute names.
-attr-naming-style=snake_case
-
-# Regular expression matching correct attribute names. Overrides attr-naming-
-# style.
-#attr-rgx=
-
-# Bad variable names which should always be refused, separated by a comma.
-bad-names=foo,
-          bar,
-          baz,
-          toto,
-          tutu,
-          tata
-
-# Bad variable names regexes, separated by a comma. If names match any regex,
-# they will always be refused
-bad-names-rgxs=
-
-# Naming style matching correct class attribute names.
-class-attribute-naming-style=any
-
-# Regular expression matching correct class attribute names. Overrides class-
-# attribute-naming-style.
-#class-attribute-rgx=
-
-# Naming style matching correct class names.
-class-naming-style=PascalCase
-
-# Regular expression matching correct class names. Overrides class-naming-
-# style.
-#class-rgx=
-
-# Naming style matching correct constant names.
-const-naming-style=UPPER_CASE
-
-# Regular expression matching correct constant names. Overrides const-naming-
-# style.
-#const-rgx=
-
-# Minimum line length for functions/classes that require docstrings, shorter
-# ones are exempt.
-docstring-min-length=-1
-
-# Naming style matching correct function names.
-function-naming-style=snake_case
-
-# Regular expression matching correct function names. Overrides function-
-# naming-style.
-#function-rgx=
-
-# Good variable names which should always be accepted, separated by a comma.
-good-names=i,
-           j,
-           k,
-           ex,
-           Run,
-           _
-
-# Good variable names regexes, separated by a comma. If names match any regex,
-# they will always be accepted
-good-names-rgxs=
-
-# Include a hint for the correct naming format with invalid-name.
-include-naming-hint=no
-
-# Naming style matching correct inline iteration names.
-inlinevar-naming-style=any
-
-# Regular expression matching correct inline iteration names. Overrides
-# inlinevar-naming-style.
-#inlinevar-rgx=
-
-# Naming style matching correct method names.
-method-naming-style=snake_case
-
-# Regular expression matching correct method names. Overrides method-naming-
-# style.
-#method-rgx=
-
-# Naming style matching correct module names.
-module-naming-style=snake_case
-
-# Regular expression matching correct module names. Overrides module-naming-
-# style.
-#module-rgx=
-
-# Colon-delimited sets of names that determine each other's naming style when
-# the name regexes allow several styles.
-name-group=
-
-# Regular expression which should only match function or class names that do
-# not require a docstring.
-no-docstring-rgx=(^_|test_.*|.*_test)
-
-# List of decorators that produce properties, such as abc.abstractproperty. Add
-# to this list to register other decorators that produce valid properties.
-# These decorators are taken in consideration only for invalid-name.
-property-classes=abc.abstractproperty
-
-# Naming style matching correct variable names.
-variable-naming-style=snake_case
-
-# Regular expression matching correct variable names. Overrides variable-
-# naming-style.
-#variable-rgx=
-
-
-[STRING]
-
-# This flag controls whether inconsistent-quotes generates a warning when the
-# character used as a quote delimiter is used inconsistently within a module.
-check-quote-consistency=no
-
-# This flag controls whether the implicit-str-concat should generate a warning
-# on implicit string concatenation in sequences defined over several lines.
-check-str-concat-over-line-jumps=no
-
-
-[IMPORTS]
-
-# List of modules that can be imported at any level, not just the top level
-# one.
-allow-any-import-level=
-
-# Allow wildcard imports from modules that define __all__.
-allow-wildcard-with-all=no
-
-# Analyse import fallback blocks. This can be used to support both Python 2 and
-# 3 compatible code, which means that the block might have code that exists
-# only in one or another interpreter, leading to false positives when analysed.
-analyse-fallback-blocks=no
-
-# Deprecated modules which should not be used, separated by a comma.
-deprecated-modules=optparse,tkinter.tix
-
-# Create a graph of external dependencies in the given file (report RP0402 must
-# not be disabled).
-ext-import-graph=
-
-# Create a graph of every (i.e. internal and external) dependencies in the
-# given file (report RP0402 must not be disabled).
-import-graph=
-
-# Create a graph of internal dependencies in the given file (report RP0402 must
-# not be disabled).
-int-import-graph=
-
-# Force import order to recognize a module as part of the standard
-# compatibility libraries.
-known-standard-library=
-
-# Force import order to recognize a module as part of a third party library.
-known-third-party=enchant
-
-# Couples of modules and preferred modules, separated by a comma.
-preferred-modules=
-
-
-[CLASSES]
-
-# List of method names used to declare (i.e. assign) instance attributes.
-defining-attr-methods=__init__,
-                      __new__,
-                      setUp,
-                      __post_init__
-
-# List of member names, which should be excluded from the protected access
-# warning.
-exclude-protected=_asdict,
-                  _fields,
-                  _replace,
-                  _source,
-                  _make
-
-# List of valid names for the first argument in a class method.
-valid-classmethod-first-arg=cls
-
-# List of valid names for the first argument in a metaclass class method.
-valid-metaclass-classmethod-first-arg=cls
-
-
-[DESIGN]
-
-# Maximum number of arguments for function / method.
-max-args=5
-
-# Maximum number of attributes for a class (see R0902).
-max-attributes=8
-
-# Maximum number of boolean expressions in an if statement (see R0916).
-max-bool-expr=5
-
-# Maximum number of branch for function / method body.
-max-branches=12
-
-# Maximum number of locals for function / method body.
-max-locals=15
-
-# Maximum number of parents for a class (see R0901).
-max-parents=7
-
-# Maximum number of public methods for a class (see R0904).
-max-public-methods=20
-
-# Maximum number of return / yield for function / method body.
-max-returns=6
-
-# Maximum number of statements in function / method body.
-max-statements=50
-
-# Minimum number of public methods for a class (see R0903).
-min-public-methods=2
-
-
-[EXCEPTIONS]
-
-# Exceptions that will emit a warning when being caught. Defaults to
-# "BaseException, Exception".
-overgeneral-exceptions=BaseException,
-                       Exception
diff --git a/setup.cfg b/setup.cfg
index ed8a958..00d6347 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -1,5 +1,21 @@
-[bdist_wheel]
-universal = 1
-
 [metadata]
 license_file = LICENSE
+
+[mypy]
+check_untyped_defs = True
+disallow_incomplete_defs = True
+disallow_untyped_calls = True
+
+[mypy-tldextract.*]
+disallow_untyped_defs = True
+
+[pycodestyle]
+# E203 - whitespace before; disagrees with PEP8 https://github.com/psf/black/issues/354#issuecomment-397684838
+# E501 - line too long
+# W503 - line break before binary operator; disagrees with PEP8 https://github.com/psf/black/issues/52
+ignore = E203, E501, W503
+
+[pylint.master]
+disable =
+    fixme
+no-docstring-rgx = (^_|test_.*)
diff --git a/setup.py b/setup.py
index ce80601..646fd78 100644
--- a/setup.py
+++ b/setup.py
@@ -1,5 +1,6 @@
-"""`tldextract` accurately separates the gTLD or ccTLD (generic or country code
-top-level domain) from the registered domain and subdomains of a URL.
+"""`tldextract` accurately separates a URL's subdomain, domain, and public suffix.
+
+It does this via the Public Suffix List (PSL).
 
     >>> import tldextract
     >>> tldextract.extract('http://forums.news.cnn.com/')
@@ -25,17 +26,8 @@ By default, this package supports the public ICANN TLDs and their exceptions.
 You can optionally support the Public Suffix List's private domains as well.
 """
 
-import sys
-
 from setuptools import setup
 
-if sys.version_info < (3, 5):
-    raise RuntimeError(
-        "Python %s.%s is EOL and no longer supported. "
-        "Please upgrade your Python or use an older "
-        "version of tldextract." % (sys.version_info[0], sys.version_info[1])
-    )
-
 INSTALL_REQUIRES = ["idna", "requests>=2.1.0", "requests-file>=1.4", "filelock>=3.0.8"]
 
 setup(
@@ -43,18 +35,21 @@ setup(
     author="John Kurkowski",
     author_email="john.kurkowski@gmail.com",
     description=(
-        "Accurately separate the TLD from the registered domain and "
-        "subdomains of a URL, using the Public Suffix List. By "
+        "Accurately separates a URL's subdomain, domain, and public suffix, "
+        "using the Public Suffix List (PSL). By "
         "default, this includes the public ICANN TLDs and their "
         "exceptions. You can optionally support the Public Suffix "
         "List's private domains as well."
     ),
-    license="BSD License",
-    keywords="tld domain subdomain url parse extract urlparse urlsplit public suffix list",
+    license="BSD-3-Clause",
+    keywords=(
+        "tld domain subdomain url parse extract urlparse urlsplit public suffix list"
+        " publicsuffix publicsuffixlist"
+    ),
     url="https://github.com/john-kurkowski/tldextract",
     packages=["tldextract"],
     include_package_data=True,
-    python_requires=">=3.6",
+    python_requires=">=3.7",
     long_description=__doc__,
     long_description_content_type="text/markdown",
     classifiers=[
@@ -62,13 +57,20 @@ setup(
         "Topic :: Utilities",
         "License :: OSI Approved :: BSD License",
         "Programming Language :: Python :: 3",
-        "Programming Language :: Python :: 3.6",
         "Programming Language :: Python :: 3.7",
         "Programming Language :: Python :: 3.8",
         "Programming Language :: Python :: 3.9",
+        "Programming Language :: Python :: 3.10",
+        "Programming Language :: Python :: 3.11",
     ],
-    entry_points={"console_scripts": ["tldextract = tldextract.cli:main", ]},
+    entry_points={
+        "console_scripts": [
+            "tldextract = tldextract.cli:main",
+        ]
+    },
     setup_requires=["setuptools_scm"],
-    use_scm_version={"write_to": "tldextract/_version.py", },
+    use_scm_version={
+        "write_to": "tldextract/_version.py",
+    },
     install_requires=INSTALL_REQUIRES,
 )
diff --git a/tests/__init__.py b/tests/__init__.py
index e69de29..1ae4f0e 100644
--- a/tests/__init__.py
+++ b/tests/__init__.py
@@ -0,0 +1 @@
+"""Package tests."""
diff --git a/tests/cli_test.py b/tests/cli_test.py
new file mode 100644
index 0000000..9abcda9
--- /dev/null
+++ b/tests/cli_test.py
@@ -0,0 +1,35 @@
+"""tldextract integration tests."""
+
+import sys
+
+import pytest
+
+from tldextract.cli import main
+
+
+def test_cli_no_input(monkeypatch):
+    monkeypatch.setattr(sys, "argv", ["tldextract"])
+    with pytest.raises(SystemExit) as ex:
+        main()
+
+    assert ex.value.code == 1
+
+
+def test_cli_parses_args(monkeypatch):
+    monkeypatch.setattr(sys, "argv", ["tldextract", "--some", "nonsense"])
+    with pytest.raises(SystemExit) as ex:
+        main()
+
+    assert ex.value.code == 2
+
+
+def test_cli_posargs(capsys, monkeypatch):
+    monkeypatch.setattr(
+        sys, "argv", ["tldextract", "example.com", "bbc.co.uk", "forums.bbc.co.uk"]
+    )
+
+    main()
+
+    stdout, stderr = capsys.readouterr()
+    assert not stderr
+    assert stdout == " example com\n bbc co.uk\nforums bbc co.uk\n"
diff --git a/tests/conftest.py b/tests/conftest.py
index 7cb6886..df07c48 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -3,13 +3,16 @@
 import logging
 
 import pytest
+
 import tldextract.cache
 
 
 @pytest.fixture(autouse=True)
 def reset_log_level():
-    """Automatically reset log level verbosity between tests. Generally want
-    test output the Unix way: silence is golden."""
+    """Automatically reset log level verbosity between tests.
+
+    Generally want test output the Unix way: silence is golden.
+    """
     tldextract.cache._DID_LOG_UNABLE_TO_CACHE = (  # pylint: disable=protected-access
         False
     )
diff --git a/tests/custom_suffix_test.py b/tests/custom_suffix_test.py
index f062193..8258a53 100644
--- a/tests/custom_suffix_test.py
+++ b/tests/custom_suffix_test.py
@@ -1,4 +1,4 @@
-'''tldextract unit tests with a custom suffix list.'''
+"""tldextract unit tests with a custom suffix list."""
 
 import os
 import tempfile
@@ -6,45 +6,49 @@ import tempfile
 import tldextract
 
 FAKE_SUFFIX_LIST_URL = "file://" + os.path.join(
-    os.path.dirname(os.path.abspath(__file__)),
-    'fixtures/fake_suffix_list_fixture.dat'
+    os.path.dirname(os.path.abspath(__file__)), "fixtures/fake_suffix_list_fixture.dat"
 )
-EXTRA_SUFFIXES = ['foo1', 'bar1', 'baz1']
+EXTRA_SUFFIXES = ["foo1", "bar1", "baz1"]
 
 extract_using_fake_suffix_list = tldextract.TLDExtract(
-    cache_dir=tempfile.mkdtemp(),
-    suffix_list_urls=[FAKE_SUFFIX_LIST_URL]
+    cache_dir=tempfile.mkdtemp(), suffix_list_urls=[FAKE_SUFFIX_LIST_URL]
 )
 extract_using_fake_suffix_list_no_cache = tldextract.TLDExtract(
-    cache_dir=None,
-    suffix_list_urls=[FAKE_SUFFIX_LIST_URL]
+    cache_dir=None, suffix_list_urls=[FAKE_SUFFIX_LIST_URL]
 )
 extract_using_extra_suffixes = tldextract.TLDExtract(
     cache_dir=None,
     suffix_list_urls=[FAKE_SUFFIX_LIST_URL],
-    extra_suffixes=EXTRA_SUFFIXES
+    extra_suffixes=EXTRA_SUFFIXES,
 )
 
 
 def test_private_extraction():
-    tld = tldextract.TLDExtract(
-        cache_dir=tempfile.mkdtemp(),
-        suffix_list_urls=[]
-    )
+    tld = tldextract.TLDExtract(cache_dir=tempfile.mkdtemp(), suffix_list_urls=[])
 
-    assert tld("foo.blogspot.com") == ('foo', 'blogspot', 'com')
-    assert tld("foo.blogspot.com", include_psl_private_domains=True) == ('', 'foo', 'blogspot.com')
+    assert tld("foo.blogspot.com") == ("foo", "blogspot", "com")
+    assert tld("foo.blogspot.com", include_psl_private_domains=True) == (
+        "",
+        "foo",
+        "blogspot.com",
+    )
 
 
 def test_suffix_which_is_not_in_custom_list():
-    for fun in (extract_using_fake_suffix_list, extract_using_fake_suffix_list_no_cache):
+    for fun in (
+        extract_using_fake_suffix_list,
+        extract_using_fake_suffix_list_no_cache,
+    ):
         result = fun("www.google.com")
         assert result.suffix == ""
 
 
 def test_custom_suffixes():
-    for fun in (extract_using_fake_suffix_list, extract_using_fake_suffix_list_no_cache):
-        for custom_suffix in ('foo', 'bar', 'baz'):
+    for fun in (
+        extract_using_fake_suffix_list,
+        extract_using_fake_suffix_list_no_cache,
+    ):
+        for custom_suffix in ("foo", "bar", "baz"):
             result = fun("www.foo.bar.baz.quux" + "." + custom_suffix)
             assert result.suffix == custom_suffix
 
diff --git a/tests/integration_test.py b/tests/integration_test.py
index d017890..432d7b2 100644
--- a/tests/integration_test.py
+++ b/tests/integration_test.py
@@ -1,4 +1,4 @@
-'''tldextract integration tests.'''
+"""tldextract integration tests."""
 
 import pytest
 
@@ -8,5 +8,5 @@ import tldextract
 def test_bad_kwargs():
     with pytest.raises(ValueError):
         tldextract.TLDExtract(
-            cache_dir=False, suffix_list_urls=False, fallback_to_snapshot=False
+            cache_dir=None, suffix_list_urls=(), fallback_to_snapshot=False
         )
diff --git a/tests/main_test.py b/tests/main_test.py
index dc772f7..c96a7f5 100644
--- a/tests/main_test.py
+++ b/tests/main_test.py
@@ -1,43 +1,53 @@
-# -*- coding: utf-8 -*-
-'''Main tldextract unit tests.'''
+"""Main tldextract unit tests."""
 
 import logging
 import os
 import tempfile
+from typing import Sequence, Tuple
 
 import pytest
 import responses
+
 import tldextract
+import tldextract.suffix_list
 from tldextract.cache import DiskCache
 from tldextract.suffix_list import SuffixListNotFound
 from tldextract.tldextract import ExtractResult
 
-
 extract = tldextract.TLDExtract(cache_dir=tempfile.mkdtemp())
-extract_no_cache = tldextract.TLDExtract(cache_dir=False)
-extract_using_real_local_suffix_list = tldextract.TLDExtract(cache_dir=tempfile.mkdtemp())
-extract_using_real_local_suffix_list_no_cache = tldextract.TLDExtract(cache_dir=False)
+extract_no_cache = tldextract.TLDExtract(cache_dir=None)
+extract_using_real_local_suffix_list = tldextract.TLDExtract(
+    cache_dir=tempfile.mkdtemp()
+)
+extract_using_real_local_suffix_list_no_cache = tldextract.TLDExtract(cache_dir=None)
 extract_using_fallback_to_snapshot_no_cache = tldextract.TLDExtract(
-    cache_dir=None,
-    suffix_list_urls=None
+    cache_dir=None, suffix_list_urls=()
 )
 
 
-def assert_extract(  # pylint: disable=missing-docstring
-        url,
-        expected_domain_data,
-        expected_ip_data='',
-        funs=(
-            extract,
-            extract_no_cache,
-            extract_using_real_local_suffix_list,
-            extract_using_real_local_suffix_list_no_cache,
-            extract_using_fallback_to_snapshot_no_cache
-        )):
-    (expected_fqdn,
-     expected_subdomain,
-     expected_domain,
-     expected_tld) = expected_domain_data
+def assert_extract(
+    url: str,
+    expected_domain_data: Tuple[str, str, str, str],
+    expected_ip_data: str = "",
+    funs: Sequence[tldextract.TLDExtract] = (
+        extract,
+        extract_no_cache,
+        extract_using_real_local_suffix_list,
+        extract_using_real_local_suffix_list_no_cache,
+        extract_using_fallback_to_snapshot_no_cache,
+    ),
+) -> None:
+    """Test helper to compare all expected and actual attributes of an extraction.
+
+    Runs the same comparison across several permutations of tldextract instance
+    configurations.
+    """
+    (
+        expected_fqdn,
+        expected_subdomain,
+        expected_domain,
+        expected_tld,
+    ) = expected_domain_data
     for fun in funs:
         ext = fun(url)
         assert expected_fqdn == ext.fqdn
@@ -48,13 +58,14 @@ def assert_extract(  # pylint: disable=missing-docstring
 
 
 def test_american():
-    assert_extract('http://www.google.com',
-                   ('www.google.com', 'www', 'google', 'com'))
+    assert_extract("http://www.google.com", ("www.google.com", "www", "google", "com"))
 
 
 def test_british():
-    assert_extract("http://www.theregister.co.uk",
-                   ("www.theregister.co.uk", "www", "theregister", "co.uk"))
+    assert_extract(
+        "http://www.theregister.co.uk",
+        ("www.theregister.co.uk", "www", "theregister", "co.uk"),
+    )
 
 
 def test_no_subdomain():
@@ -62,183 +73,285 @@ def test_no_subdomain():
 
 
 def test_nested_subdomain():
-    assert_extract("http://media.forums.theregister.co.uk",
-                   ("media.forums.theregister.co.uk", "media.forums",
-                    "theregister", "co.uk"))
+    assert_extract(
+        "http://media.forums.theregister.co.uk",
+        ("media.forums.theregister.co.uk", "media.forums", "theregister", "co.uk"),
+    )
 
 
 def test_odd_but_possible():
-    assert_extract('http://www.www.com', ('www.www.com', 'www', 'www', 'com'))
-    assert_extract('http://www.com', ('www.com', '', 'www', 'com'))
+    assert_extract("http://www.www.com", ("www.www.com", "www", "www", "com"))
+    assert_extract("http://www.com", ("www.com", "", "www", "com"))
 
 
 def test_suffix():
-    assert_extract('com', ('', '', '', 'com'))
-    assert_extract('co.uk', ('', '', '', 'co.uk'))
+    assert_extract("com", ("", "", "", "com"))
+    assert_extract("co.uk", ("", "", "", "co.uk"))
+    assert_extract("example.ck", ("", "", "", "example.ck"))
+    assert_extract("www.example.ck", ("www.example.ck", "", "www", "example.ck"))
+    assert_extract(
+        "sub.www.example.ck", ("sub.www.example.ck", "sub", "www", "example.ck")
+    )
+    assert_extract("www.ck", ("www.ck", "", "www", "ck"))
+    assert_extract("nes.buskerud.no", ("", "", "", "nes.buskerud.no"))
+    assert_extract("buskerud.no", ("buskerud.no", "", "buskerud", "no"))
 
 
 def test_local_host():
-    assert_extract('http://internalunlikelyhostname/',
-                   ('', '', 'internalunlikelyhostname', ''))
-    assert_extract('http://internalunlikelyhostname.bizarre',
-                   ('', 'internalunlikelyhostname', 'bizarre', ''))
+    assert_extract(
+        "http://internalunlikelyhostname/", ("", "", "internalunlikelyhostname", "")
+    )
+    assert_extract(
+        "http://internalunlikelyhostname.bizarre",
+        ("", "internalunlikelyhostname", "bizarre", ""),
+    )
 
 
 def test_qualified_local_host():
-    assert_extract('http://internalunlikelyhostname.info/',
-                   ('internalunlikelyhostname.info',
-                    '', 'internalunlikelyhostname', 'info'))
-    assert_extract('http://internalunlikelyhostname.information/',
-                   ('',
-                    'internalunlikelyhostname', 'information', ''))
+    assert_extract(
+        "http://internalunlikelyhostname.info/",
+        ("internalunlikelyhostname.info", "", "internalunlikelyhostname", "info"),
+    )
+    assert_extract(
+        "http://internalunlikelyhostname.information/",
+        ("", "internalunlikelyhostname", "information", ""),
+    )
 
 
 def test_ip():
-    assert_extract('http://216.22.0.192/',
-                   ('', '', '216.22.0.192', ''),
-                   expected_ip_data='216.22.0.192',)
-    assert_extract('http://216.22.project.coop/',
-                   ('216.22.project.coop', '216.22', 'project', 'coop'))
+    assert_extract(
+        "http://216.22.0.192/",
+        ("", "", "216.22.0.192", ""),
+        expected_ip_data="216.22.0.192",
+    )
+    assert_extract(
+        "http://216.22.project.coop/",
+        ("216.22.project.coop", "216.22", "project", "coop"),
+    )
 
 
 def test_looks_like_ip():
-    assert_extract('1\xe9', ('', '', '1\xe9', ''))
+    assert_extract("1\xe9", ("", "", "1\xe9", ""))
 
 
 def test_punycode():
-    assert_extract('http://xn--h1alffa9f.xn--p1ai',
-                   ('xn--h1alffa9f.xn--p1ai', '', 'xn--h1alffa9f', 'xn--p1ai'))
-    assert_extract('http://xN--h1alffa9f.xn--p1ai',
-                   ('xN--h1alffa9f.xn--p1ai', '', 'xN--h1alffa9f', 'xn--p1ai'))
-    assert_extract('http://XN--h1alffa9f.xn--p1ai',
-                   ('XN--h1alffa9f.xn--p1ai', '', 'XN--h1alffa9f', 'xn--p1ai'))
+    assert_extract(
+        "http://xn--h1alffa9f.xn--p1ai",
+        ("xn--h1alffa9f.xn--p1ai", "", "xn--h1alffa9f", "xn--p1ai"),
+    )
+    assert_extract(
+        "http://xN--h1alffa9f.xn--p1ai",
+        ("xN--h1alffa9f.xn--p1ai", "", "xN--h1alffa9f", "xn--p1ai"),
+    )
+    assert_extract(
+        "http://XN--h1alffa9f.xn--p1ai",
+        ("XN--h1alffa9f.xn--p1ai", "", "XN--h1alffa9f", "xn--p1ai"),
+    )
     # Entries that might generate UnicodeError exception
     # This subdomain generates UnicodeError 'IDNA does not round-trip'
-    assert_extract('xn--tub-1m9d15sfkkhsifsbqygyujjrw602gk4li5qqk98aca0w.google.com',
-                   ('xn--tub-1m9d15sfkkhsifsbqygyujjrw602gk4li5qqk98aca0w.google.com',
-                    'xn--tub-1m9d15sfkkhsifsbqygyujjrw602gk4li5qqk98aca0w', 'google',
-                    'com'))
+    assert_extract(
+        "xn--tub-1m9d15sfkkhsifsbqygyujjrw602gk4li5qqk98aca0w.google.com",
+        (
+            "xn--tub-1m9d15sfkkhsifsbqygyujjrw602gk4li5qqk98aca0w.google.com",
+            "xn--tub-1m9d15sfkkhsifsbqygyujjrw602gk4li5qqk98aca0w",
+            "google",
+            "com",
+        ),
+    )
     # This subdomain generates UnicodeError 'incomplete punicode string'
-    assert_extract('xn--tub-1m9d15sfkkhsifsbqygyujjrw60.google.com',
-                   ('xn--tub-1m9d15sfkkhsifsbqygyujjrw60.google.com',
-                    'xn--tub-1m9d15sfkkhsifsbqygyujjrw60', 'google', 'com'))
+    assert_extract(
+        "xn--tub-1m9d15sfkkhsifsbqygyujjrw60.google.com",
+        (
+            "xn--tub-1m9d15sfkkhsifsbqygyujjrw60.google.com",
+            "xn--tub-1m9d15sfkkhsifsbqygyujjrw60",
+            "google",
+            "com",
+        ),
+    )
 
 
 def test_invalid_puny_with_puny():
-    assert_extract('http://xn--zckzap6140b352by.blog.so-net.xn--wcvs22d.hk',
-                   ('xn--zckzap6140b352by.blog.so-net.xn--wcvs22d.hk',
-                    'xn--zckzap6140b352by.blog', 'so-net', 'xn--wcvs22d.hk'))
-    assert_extract('http://xn--&.so-net.com',
-                   ('xn--&.so-net.com',
-                    'xn--&', 'so-net', 'com'))
+    assert_extract(
+        "http://xn--zckzap6140b352by.blog.so-net.xn--wcvs22d.hk",
+        (
+            "xn--zckzap6140b352by.blog.so-net.xn--wcvs22d.hk",
+            "xn--zckzap6140b352by.blog",
+            "so-net",
+            "xn--wcvs22d.hk",
+        ),
+    )
+    assert_extract(
+        "http://xn--&.so-net.com", ("xn--&.so-net.com", "xn--&", "so-net", "com")
+    )
 
 
 def test_puny_with_non_puny():
-    assert_extract('http://xn--zckzap6140b352by.blog.so-net.教育.hk',
-                   ('xn--zckzap6140b352by.blog.so-net.教育.hk',
-                    'xn--zckzap6140b352by.blog', 'so-net', '教育.hk'))
+    assert_extract(
+        "http://xn--zckzap6140b352by.blog.so-net.教育.hk",
+        (
+            "xn--zckzap6140b352by.blog.so-net.教育.hk",
+            "xn--zckzap6140b352by.blog",
+            "so-net",
+            "教育.hk",
+        ),
+    )
 
 
 def test_idna_2008():
-    """Python supports IDNA 2003.
-    The IDNA library adds 2008 support for characters like ß.
-    """
-    assert_extract('xn--gieen46ers-73a.de',
-                   ('xn--gieen46ers-73a.de', '', 'xn--gieen46ers-73a', 'de'))
+    """Python supports IDNA 2003. The IDNA library adds 2008 support for characters like ß."""
+    assert_extract(
+        "xn--gieen46ers-73a.de",
+        ("xn--gieen46ers-73a.de", "", "xn--gieen46ers-73a", "de"),
+    )
+    assert_extract(
+        "angelinablog。com.de",
+        ("angelinablog.com.de", "angelinablog", "com", "de"),
+    )
 
 
 def test_empty():
-    assert_extract('http://', ('', '', '', ''))
+    assert_extract("http://", ("", "", "", ""))
 
 
 def test_scheme():
-    assert_extract('https://mail.google.com/mail', ('mail.google.com', 'mail', 'google', 'com'))
-    assert_extract('ssh://mail.google.com/mail', ('mail.google.com', 'mail', 'google', 'com'))
-    assert_extract('//mail.google.com/mail', ('mail.google.com', 'mail', 'google', 'com'))
-    assert_extract('mail.google.com/mail',
-                   ('mail.google.com', 'mail', 'google', 'com'), funs=(extract,))
+    assert_extract("//", ("", "", "", ""))
+    assert_extract("://", ("", "", "", ""))
+    assert_extract("://example.com", ("", "", "", ""))
+    assert_extract("a+-.://example.com", ("example.com", "", "example", "com"))
+    assert_extract("a#//example.com", ("", "", "a", ""))
+    assert_extract("a@://example.com", ("", "", "", ""))
+    assert_extract("#//example.com", ("", "", "", ""))
+    assert_extract(
+        "https://mail.google.com/mail", ("mail.google.com", "mail", "google", "com")
+    )
+    assert_extract(
+        "ssh://mail.google.com/mail", ("mail.google.com", "mail", "google", "com")
+    )
+    assert_extract(
+        "//mail.google.com/mail", ("mail.google.com", "mail", "google", "com")
+    )
+    assert_extract(
+        "mail.google.com/mail",
+        ("mail.google.com", "mail", "google", "com"),
+        funs=(extract,),
+    )
 
 
 def test_port():
-    assert_extract('git+ssh://www.github.com:8443/', ('www.github.com', 'www', 'github', 'com'))
+    assert_extract(
+        "git+ssh://www.github.com:8443/", ("www.github.com", "www", "github", "com")
+    )
 
 
 def test_username():
-    assert_extract('ftp://johndoe:5cr1p7k1dd13@1337.warez.com:2501',
-                   ('1337.warez.com', '1337', 'warez', 'com'))
+    assert_extract(
+        "ftp://johndoe:5cr1p7k1dd13@1337.warez.com:2501",
+        ("1337.warez.com", "1337", "warez", "com"),
+    )
 
 
 def test_query_fragment():
-    assert_extract('http://google.com?q=cats', ('google.com', '', 'google', 'com'))
-    assert_extract('http://google.com#Welcome', ('google.com', '', 'google', 'com'))
-    assert_extract('http://google.com/#Welcome', ('google.com', '', 'google', 'com'))
-    assert_extract('http://google.com/s#Welcome', ('google.com', '', 'google', 'com'))
-    assert_extract('http://google.com/s?q=cats#Welcome', ('google.com', '', 'google', 'com'))
+    assert_extract("http://google.com?q=cats", ("google.com", "", "google", "com"))
+    assert_extract("http://google.com#Welcome", ("google.com", "", "google", "com"))
+    assert_extract("http://google.com/#Welcome", ("google.com", "", "google", "com"))
+    assert_extract("http://google.com/s#Welcome", ("google.com", "", "google", "com"))
+    assert_extract(
+        "http://google.com/s?q=cats#Welcome", ("google.com", "", "google", "com")
+    )
 
 
 def test_regex_order():
-    assert_extract('http://www.parliament.uk',
-                   ('www.parliament.uk', 'www', 'parliament', 'uk'))
-    assert_extract('http://www.parliament.co.uk',
-                   ('www.parliament.co.uk', 'www', 'parliament', 'co.uk'))
+    assert_extract(
+        "http://www.parliament.uk", ("www.parliament.uk", "www", "parliament", "uk")
+    )
+    assert_extract(
+        "http://www.parliament.co.uk",
+        ("www.parliament.co.uk", "www", "parliament", "co.uk"),
+    )
 
 
 def test_unhandled_by_iana():
-    assert_extract('http://www.cgs.act.edu.au/',
-                   ('www.cgs.act.edu.au', 'www', 'cgs', 'act.edu.au'))
-    assert_extract('http://www.google.com.au/',
-                   ('www.google.com.au', 'www', 'google', 'com.au'))
+    assert_extract(
+        "http://www.cgs.act.edu.au/", ("www.cgs.act.edu.au", "www", "cgs", "act.edu.au")
+    )
+    assert_extract(
+        "http://www.google.com.au/", ("www.google.com.au", "www", "google", "com.au")
+    )
 
 
 def test_tld_is_a_website_too():
-    assert_extract('http://www.metp.net.cn', ('www.metp.net.cn', 'www', 'metp', 'net.cn'))
+    assert_extract(
+        "http://www.metp.net.cn", ("www.metp.net.cn", "www", "metp", "net.cn")
+    )
     # This is unhandled by the PSL. Or is it?
     # assert_extract('http://www.net.cn',
     #                ('www.net.cn', 'www', 'net', 'cn'))
 
 
+def test_no_1st_level_tld():
+    assert_extract("za", ("", "", "za", ""))
+    assert_extract("example.za", ("", "example", "za", ""))
+    assert_extract("co.za", ("", "", "", "co.za"))
+    assert_extract("example.co.za", ("example.co.za", "", "example", "co.za"))
+    assert_extract(
+        "sub.example.co.za", ("sub.example.co.za", "sub", "example", "co.za")
+    )
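
The new test above pins down behavior for names without a known suffix: "za" alone is not on the Public Suffix List, so no suffix is reported, while anything ending in a listed suffix such as "co.za" is classified normally. A minimal sketch of the same checks outside the assert_extract helper (assuming the module-level tldextract.extract and an available suffix list, bundled or cached):

import tldextract

# "za" by itself is not a public suffix, so nothing is classified as a suffix.
assert tuple(tldextract.extract("za")) == ("", "za", "")
# "co.za" is a listed suffix; only then does a registrable domain appear.
assert tuple(tldextract.extract("co.za")) == ("", "", "co.za")
assert tuple(tldextract.extract("example.co.za")) == ("", "example", "co.za")
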
+
+
 def test_dns_root_label():
-    assert_extract('http://www.example.com./',
-                   ('www.example.com', 'www', 'example', 'com'))
+    assert_extract(
+        "http://www.example.com./", ("www.example.com", "www", "example", "com")
+    )
+    assert_extract(
+        "http://www.example.com\u3002/", ("www.example.com", "www", "example", "com")
+    )
+    assert_extract(
+        "http://www.example.com\uff0e/", ("www.example.com", "www", "example", "com")
+    )
+    assert_extract(
+        "http://www.example.com\uff61/", ("www.example.com", "www", "example", "com")
+    )
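
The added assertions here exercise non-ASCII label separators: U+3002, U+FF0E and U+FF61 are treated like an ASCII dot, and a trailing (DNS root) label is ignored. A minimal sketch of the same normalization, assuming the module-level extract function:

import tldextract

for dot in ("\u3002", "\uff0e", "\uff61", "."):
    result = tldextract.extract("http://www.example.com" + dot + "/")
    # All four separators collapse to the same parse, with the root label dropped.
    assert (result.subdomain, result.domain, result.suffix) == ("www", "example", "com")
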
 
 
 def test_private_domains():
-    assert_extract('http://waiterrant.blogspot.com',
-                   ('waiterrant.blogspot.com', 'waiterrant', 'blogspot', 'com'))
+    assert_extract(
+        "http://waiterrant.blogspot.com",
+        ("waiterrant.blogspot.com", "waiterrant", "blogspot", "com"),
+    )
 
 
 def test_ipv4():
-    assert_extract('http://127.0.0.1/foo/bar',
-                   ('', '', '127.0.0.1', ''),
-                   expected_ip_data='127.0.0.1')
+    assert_extract(
+        "http://127.0.0.1/foo/bar",
+        ("", "", "127.0.0.1", ""),
+        expected_ip_data="127.0.0.1",
+    )
 
 
 def test_ipv4_bad():
-    assert_extract('http://256.256.256.256/foo/bar',
-                   ('', '256.256.256', '256', ''),
-                   expected_ip_data='')
+    assert_extract(
+        "http://256.256.256.256/foo/bar",
+        ("", "256.256.256", "256", ""),
+        expected_ip_data="",
+    )
 
 
 def test_ipv4_lookalike():
-    assert_extract('http://127.0.0.1.9/foo/bar',
-                   ('', '127.0.0.1', '9', ''),
-                   expected_ip_data='')
+    assert_extract(
+        "http://127.0.0.1.9/foo/bar", ("", "127.0.0.1", "9", ""), expected_ip_data=""
+    )
 
 
 def test_result_as_dict():
     result = extract(
-        "http://admin:password1@www.google.com:666"
-        "/secret/admin/interface?param1=42"
+        "http://admin:password1@www.google.com:666/secret/admin/interface?param1=42"
     )
-    expected_dict = {'subdomain': 'www',
-                     'domain': 'google',
-                     'suffix': 'com'}
+    expected_dict = {"subdomain": "www", "domain": "google", "suffix": "com"}
     assert result._asdict() == expected_dict
 
 
 def test_cache_permission(mocker, monkeypatch, tmpdir):
     """Emit a warning once that this can't cache the latest PSL."""
-
     warning = mocker.patch.object(logging.getLogger("tldextract.cache"), "warning")
 
     def no_permission_makedirs(*args, **kwargs):
@@ -263,28 +376,31 @@ def test_cache_permission(mocker, monkeypatch, tmpdir):
 
 @responses.activate
 def test_cache_timeouts(tmpdir):
-    server = 'http://some-server.com'
-    responses.add(
-        responses.GET,
-        server,
-        status=408
-    )
+    server = "http://some-server.com"
+    responses.add(responses.GET, server, status=408)
     cache = DiskCache(tmpdir)
 
     with pytest.raises(SuffixListNotFound):
         tldextract.suffix_list.find_first_response(cache, [server], 5)
 
 
+def test_include_psl_private_domain_attr():
+    extract_private = tldextract.TLDExtract(include_psl_private_domains=True)
+    extract_public = tldextract.TLDExtract(include_psl_private_domains=False)
+    assert extract_private("foo.uk.com") == ExtractResult(
+        subdomain="", domain="foo", suffix="uk.com"
+    )
+    assert extract_public("foo.uk.com") == ExtractResult(
+        subdomain="foo", domain="uk", suffix="com"
+    )
+
+
 def test_tlds_property():
     extract_private = tldextract.TLDExtract(
-        cache_dir=None,
-        suffix_list_urls=None,
-        include_psl_private_domains=True
+        cache_dir=None, suffix_list_urls=(), include_psl_private_domains=True
     )
     extract_public = tldextract.TLDExtract(
-        cache_dir=None,
-        suffix_list_urls=None,
-        include_psl_private_domains=False
+        cache_dir=None, suffix_list_urls=(), include_psl_private_domains=False
     )
     assert len(extract_private.tlds) > len(extract_public.tlds)
 
@@ -293,5 +409,35 @@ def test_global_extract():
     assert tldextract.extract("foo.blogspot.com") == ExtractResult(
         subdomain="foo", domain="blogspot", suffix="com"
     )
-    assert tldextract.extract("foo.blogspot.com", include_psl_private_domains=True) == \
-           ExtractResult(subdomain='', domain='foo', suffix='blogspot.com')
+    assert tldextract.extract(
+        "foo.blogspot.com", include_psl_private_domains=True
+    ) == ExtractResult(subdomain="", domain="foo", suffix="blogspot.com")
+    assert tldextract.extract(
+        "s3.ap-south-1.amazonaws.com", include_psl_private_domains=True
+    ) == ExtractResult(subdomain="", domain="", suffix="s3.ap-south-1.amazonaws.com")
+    assert tldextract.extract(
+        "the-quick-brown-fox.ap-south-1.amazonaws.com", include_psl_private_domains=True
+    ) == ExtractResult(
+        subdomain="the-quick-brown-fox.ap-south-1", domain="amazonaws", suffix="com"
+    )
+    assert tldextract.extract(
+        "ap-south-1.amazonaws.com", include_psl_private_domains=True
+    ) == ExtractResult(subdomain="ap-south-1", domain="amazonaws", suffix="com")
+    assert tldextract.extract(
+        "amazonaws.com", include_psl_private_domains=True
+    ) == ExtractResult(subdomain="", domain="amazonaws", suffix="com")
+    assert tldextract.extract(
+        "s3.cn-north-1.amazonaws.com.cn", include_psl_private_domains=True
+    ) == ExtractResult(subdomain="", domain="", suffix="s3.cn-north-1.amazonaws.com.cn")
+    assert tldextract.extract(
+        "the-quick-brown-fox.cn-north-1.amazonaws.com.cn",
+        include_psl_private_domains=True,
+    ) == ExtractResult(
+        subdomain="the-quick-brown-fox.cn-north-1", domain="amazonaws", suffix="com.cn"
+    )
+    assert tldextract.extract(
+        "cn-north-1.amazonaws.com.cn", include_psl_private_domains=True
+    ) == ExtractResult(subdomain="cn-north-1", domain="amazonaws", suffix="com.cn")
+    assert tldextract.extract(
+        "amazonaws.com.cn", include_psl_private_domains=True
+    ) == ExtractResult(subdomain="", domain="amazonaws", suffix="com.cn")
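
The expanded test_global_extract covers the per-call include_psl_private_domains override on the module-level helper, alongside the constructor-level flag checked in test_include_psl_private_domain_attr (the flag the 3.4.4 release also honors on the instance itself, per the changelog). A minimal sketch of both spellings, using only inputs asserted above and assuming a bundled or cached suffix list:

import tldextract

# Constructor-level flag, as in test_include_psl_private_domain_attr:
private = tldextract.TLDExtract(include_psl_private_domains=True)
public = tldextract.TLDExtract(include_psl_private_domains=False)
assert private("foo.uk.com").suffix == "uk.com"   # PSL private suffix honored
assert public("foo.uk.com").suffix == "com"       # ICANN-only view

# Per-call override on the module-level function, as in test_global_extract:
result = tldextract.extract("s3.ap-south-1.amazonaws.com", include_psl_private_domains=True)
assert (result.subdomain, result.domain, result.suffix) == ("", "", "s3.ap-south-1.amazonaws.com")
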
diff --git a/tests/test_cache.py b/tests/test_cache.py
index b148ae1..a9d538c 100644
--- a/tests/test_cache.py
+++ b/tests/test_cache.py
@@ -1,12 +1,13 @@
-"""Test the caching functionality"""
+"""Test the caching functionality."""
 import os.path
 import sys
 import types
+from typing import Any, Dict, Hashable, cast
+from unittest.mock import Mock
 
 import pytest
-
 import tldextract.cache
-from tldextract.cache import DiskCache, get_pkg_unique_identifier, get_cache_dir
+from tldextract.cache import DiskCache, get_cache_dir, get_pkg_unique_identifier
 
 
 def test_disk_cache(tmpdir):
@@ -27,16 +28,21 @@ def test_get_pkg_unique_identifier(monkeypatch):
     monkeypatch.setattr(sys, "version_info", (3, 8, 1, "final", 0))
     monkeypatch.setattr(sys, "prefix", "/home/john/.pyenv/versions/myvirtualenv")
 
-    mock_version_module = types.ModuleType('tldextract._version', 'mocked module')
-    mock_version_module.version = "1.2.3"
+    mock_version_module = types.ModuleType("tldextract._version", "mocked module")
+    cast(Any, mock_version_module).version = "1.2.3"
     monkeypatch.setitem(sys.modules, "tldextract._version", mock_version_module)
 
-    assert get_pkg_unique_identifier() == "3.8.1.final__myvirtualenv__f01a7b__tldextract-1.2.3"
+    assert (
+        get_pkg_unique_identifier()
+        == "3.8.1.final__myvirtualenv__f01a7b__tldextract-1.2.3"
+    )
 
 
 def test_get_cache_dir(monkeypatch):
     pkg_identifier = "3.8.1.final__myvirtualenv__f01a7b__tldextract-1.2.3"
-    monkeypatch.setattr(tldextract.cache, "get_pkg_unique_identifier", lambda: pkg_identifier)
+    monkeypatch.setattr(
+        tldextract.cache, "get_pkg_unique_identifier", lambda: pkg_identifier
+    )
 
     # with no HOME set, fallback to attempting to use package directory itself
     monkeypatch.delenv("HOME", raising=False)
@@ -48,14 +54,18 @@ def test_get_cache_dir(monkeypatch):
     monkeypatch.setenv("HOME", "/home/john")
     monkeypatch.delenv("XDG_CACHE_HOME", raising=False)
     monkeypatch.delenv("TLDEXTRACT_CACHE", raising=False)
-    assert get_cache_dir() == os.path.join("/home/john", ".cache/python-tldextract", pkg_identifier)
+    assert get_cache_dir() == os.path.join(
+        "/home/john", ".cache/python-tldextract", pkg_identifier
+    )
 
     # if XDG_CACHE_HOME is set, use it
     monkeypatch.setenv("HOME", "/home/john")
     monkeypatch.setenv("XDG_CACHE_HOME", "/my/alt/cache")
     monkeypatch.delenv("TLDEXTRACT_CACHE", raising=False)
 
-    assert get_cache_dir() == os.path.join("/my/alt/cache/python-tldextract", pkg_identifier)
+    assert get_cache_dir() == os.path.join(
+        "/my/alt/cache/python-tldextract", pkg_identifier
+    )
 
     # if TLDEXTRACT_CACHE is set, use it
     monkeypatch.setenv("HOME", "/home/john")
@@ -63,3 +73,29 @@ def test_get_cache_dir(monkeypatch):
     monkeypatch.setenv("TLDEXTRACT_CACHE", "/alt-tld-cache")
 
     assert get_cache_dir() == "/alt-tld-cache"
+
+
+def test_run_and_cache(tmpdir):
+    cache = DiskCache(tmpdir)
+
+    return_value1 = "unique return value"
+    some_fn = Mock(return_value=return_value1)
+    kwargs1: Dict[str, Hashable] = {"value": 1}
+
+    assert some_fn.call_count == 0
+
+    call1 = cache.run_and_cache(some_fn, "test_namespace", kwargs1, kwargs1.keys())
+    assert call1 == return_value1
+    assert some_fn.call_count == 1
+
+    call2 = cache.run_and_cache(some_fn, "test_namespace", kwargs1, kwargs1.keys())
+    assert call2 == return_value1
+    assert some_fn.call_count == 1
+
+    kwargs2: Dict[str, Hashable] = {"value": 2}
+    return_value2 = "another return value"
+    some_fn.return_value = return_value2
+
+    call3 = cache.run_and_cache(some_fn, "test_namespace", kwargs2, kwargs2.keys())
+    assert call3 == return_value2
+    assert some_fn.call_count == 2
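
test_run_and_cache above exercises DiskCache.run_and_cache(func, namespace, kwargs, hashed_argnames), which only invokes func on a cache miss for the given namespace and hashed keyword arguments. A minimal usage sketch, assuming (as the Mock-based test suggests) that the kwargs dict is forwarded to the callable and that its return value is serializable; the namespace string is arbitrary:

import tempfile
from tldextract.cache import DiskCache

cache = DiskCache(tempfile.mkdtemp())

def double(value):
    print("cache miss for", value)   # printed only the first time per value
    return value * 2

first = cache.run_and_cache(double, "demo_namespace", {"value": 21}, ("value",))
second = cache.run_and_cache(double, "demo_namespace", {"value": 21}, ("value",))
assert first == second == 42         # second call is served from the on-disk cache
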
diff --git a/tests/test_parallel.py b/tests/test_parallel.py
index 680c226..46f2798 100644
--- a/tests/test_parallel.py
+++ b/tests/test_parallel.py
@@ -1,4 +1,4 @@
-"""Test ability to run in parallel with shared cache"""
+"""Test ability to run in parallel with shared cache."""
 import os
 import os.path
 from multiprocessing import Pool
@@ -10,7 +10,7 @@ from tldextract.tldextract import PUBLIC_SUFFIX_LIST_URLS
 
 
 def test_multiprocessing_makes_one_request(tmpdir):
-    """Ensure there aren't duplicate download requests"""
+    """Ensure there aren't duplicate download requests."""
     process_count = 3
     with Pool(processes=process_count) as pool:
         http_request_counts = pool.map(_run_extractor, [str(tmpdir)] * process_count)
@@ -19,13 +19,8 @@ def test_multiprocessing_makes_one_request(tmpdir):
 
 @responses.activate
 def _run_extractor(cache_dir):
-    """run the extractor"""
-    responses.add(
-        responses.GET,
-        PUBLIC_SUFFIX_LIST_URLS[0],
-        status=208,
-        body="uk.co"
-    )
+    """Run the extractor."""
+    responses.add(responses.GET, PUBLIC_SUFFIX_LIST_URLS[0], status=208, body="uk.co")
     extract = TLDExtract(cache_dir=cache_dir)
 
     extract("bar.uk.com", include_psl_private_domains=True)
@@ -34,13 +29,8 @@ def _run_extractor(cache_dir):
 
 @responses.activate
 def test_cache_cleared_by_other_process(tmpdir, monkeypatch):
-    """Simulate a file being deleted after we check for existence but before we try to delete it"""
-    responses.add(
-        responses.GET,
-        PUBLIC_SUFFIX_LIST_URLS[0],
-        status=208,
-        body="uk.com"
-    )
+    """Simulate a file being deleted after we check for existence but before we try to delete it."""
+    responses.add(responses.GET, PUBLIC_SUFFIX_LIST_URLS[0], status=208, body="uk.com")
 
     cache_dir = str(tmpdir)
     extract = TLDExtract(cache_dir=cache_dir)
@@ -48,7 +38,7 @@ def test_cache_cleared_by_other_process(tmpdir, monkeypatch):
     orig_unlink = os.unlink
 
     def evil_unlink(filename):
-        """Simulates someone delete the file right before we try to"""
+        """Simulate someone deletes the file right before we try to."""
         if filename.startswith(cache_dir):
             orig_unlink(filename)
         orig_unlink(filename)
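
The parallel tests stub the network with responses and confirm that worker processes sharing one cache_dir trigger at most one suffix-list download. The pattern they rely on, sketched minimally (the directory path here is arbitrary):

import tldextract

SHARED_CACHE_DIR = "/tmp/tldextract-shared-cache"   # arbitrary writable path

# Extractors built with the same cache_dir (in any number of processes)
# reuse one cached copy of the Public Suffix List.
extract = tldextract.TLDExtract(cache_dir=SHARED_CACHE_DIR)
print(extract("bar.uk.com", include_psl_private_domains=True))
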
diff --git a/tests/test_trie.py b/tests/test_trie.py
new file mode 100644
index 0000000..94c2a7f
--- /dev/null
+++ b/tests/test_trie.py
@@ -0,0 +1,53 @@
+"""Trie tests."""
+from itertools import permutations
+
+from tldextract.tldextract import Trie
+
+
+def test_nested_dict() -> None:
+    original_keys_sequence = [
+        ["a"],
+        ["a", "d"],
+        ["a", "b"],
+        ["a", "b", "c"],
+        ["c"],
+        ["c", "b"],
+        ["d", "f"],
+    ]
+    for keys_sequence in permutations(original_keys_sequence):
+        trie = Trie()
+        for keys in keys_sequence:
+            trie.add_suffix(keys)
+        # check each nested value
+        # Top level c
+        assert "c" in trie.matches
+        top_c = trie.matches["c"]
+        assert len(top_c.matches) == 1
+        assert "b" in top_c.matches
+        assert top_c.end
+        # Top level a
+        assert "a" in trie.matches
+        top_a = trie.matches["a"]
+        assert len(top_a.matches) == 2
+        #  a -> d
+        assert "d" in top_a.matches
+        a_to_d = top_a.matches["d"]
+        assert not a_to_d.matches
+        #  a -> b
+        assert "b" in top_a.matches
+        a_to_b = top_a.matches["b"]
+        assert a_to_b.end
+        assert len(a_to_b.matches) == 1
+        #  a -> b -> c
+        assert "c" in a_to_b.matches
+        a_to_b_to_c = a_to_b.matches["c"]
+        assert not a_to_b_to_c.matches
+        assert top_a.end
+        #  d -> f
+        assert "d" in trie.matches
+        top_d = trie.matches["d"]
+        assert not top_d.end
+        assert "f" in top_d.matches
+        d_to_f = top_d.matches["f"]
+        assert d_to_f.end
+        assert not d_to_f.matches
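
The new Trie test locks in the shape of the data structure behind the 3.4.3 suffix_index() refactor: add_suffix() builds nested matches dictionaries and marks the last node of each inserted label sequence with end=True. A minimal sketch of the same invariants, using only what the test asserts:

from tldextract.tldextract import Trie

trie = Trie()
trie.add_suffix(["a"])          # "a" on its own is a complete suffix
trie.add_suffix(["a", "b"])     # so is the longer sequence "a" -> "b"

node_a = trie.matches["a"]
assert node_a.end               # end flag set where a full suffix stops
node_b = node_a.matches["b"]
assert node_b.end
assert not node_b.matches       # leaf node: nothing was added below it
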
diff --git a/tldextract/.tld_set_snapshot b/tldextract/.tld_set_snapshot
index 50beed6..73b5793 100644
--- a/tldextract/.tld_set_snapshot
+++ b/tldextract/.tld_set_snapshot
@@ -9,7 +9,7 @@
 
 // ===BEGIN ICANN DOMAINS===
 
-// ac : https://en.wikipedia.org/wiki/.ac
+// ac : http://nic.ac/rules.htm
 ac
 com.ac
 edu.ac
@@ -22,8 +22,7 @@ org.ac
 ad
 nom.ad
 
-// ae : https://en.wikipedia.org/wiki/.ae
-// see also: "Domain Name Eligibility Policy" at http://www.aeda.ae/eng/aepolicy.php
+// ae : https://tdra.gov.ae/en/aeda/ae-policies
 ae
 co.ae
 net.ae
@@ -175,17 +174,21 @@ it.ao
 // aq : https://en.wikipedia.org/wiki/.aq
 aq
 
-// ar : https://nic.ar/nic-argentina/normativa-vigente
+// ar : https://nic.ar/es/nic-argentina/normativa
 ar
+bet.ar
 com.ar
+coop.ar
 edu.ar
 gob.ar
 gov.ar
 int.ar
 mil.ar
 musica.ar
+mutual.ar
 net.ar
 org.ar
+senasa.ar
 tur.ar
 
 // arpa : https://en.wikipedia.org/wiki/.arpa
@@ -377,11 +380,29 @@ org.bi
 // biz : https://en.wikipedia.org/wiki/.biz
 biz
 
-// bj : https://en.wikipedia.org/wiki/.bj
+// bj : https://nic.bj/bj-suffixes.txt
+// submitted by registry <contact@nic.bj>
 bj
-asso.bj
-barreau.bj
-gouv.bj
+africa.bj
+agro.bj
+architectes.bj
+assur.bj
+avocats.bj
+co.bj
+com.bj
+eco.bj
+econo.bj
+edu.bj
+info.bj
+loisirs.bj
+money.bj
+net.bj
+org.bj
+ote.bj
+resto.bj
+restaurant.bj
+tourism.bj
+univ.bj
 
 // bm : http://www.bermudanic.bm/dnr-text.txt
 bm
@@ -734,7 +755,6 @@ gouv.ci
 // cl : https://www.nic.cl
 // Confirmed by .CL registry <hsalgado@nic.cl>
 cl
-aprendemas.cl
 co.cl
 gob.cl
 gov.cl
@@ -839,7 +859,13 @@ gov.cu
 inf.cu
 
 // cv : https://en.wikipedia.org/wiki/.cv
+// cv : http://www.dns.cv/tldcv_portal/do?com=DS;5446457100;111;+PAGE(4000018)+K-CAT-CODIGO(RDOM)+RCNT(100); <- registration rules
 cv
+com.cv
+edu.cv
+int.cv
+nome.cv
+org.cv
 
 // cw : http://www.una.cw/cw_registry/
 // Confirmed by registry <registry@una.net> 2013-03-26
@@ -856,6 +882,7 @@ gov.cx
 
 // cy : http://www.nic.cy/
 // Submitted by registry Panayiotou Fotia <cydns@ucy.ac.cy>
+// namespace policies URL https://www.nic.cy/portal//sites/default/files/symfonia_gia_eggrafi.pdf
 cy
 ac.cy
 biz.cy
@@ -863,10 +890,9 @@ com.cy
 ekloges.cy
 gov.cy
 ltd.cy
-name.cy
+mil.cy
 net.cy
 org.cy
-parliament.cy
 press.cy
 pro.cy
 tm.cy
@@ -1025,8 +1051,7 @@ fm
 // fo : https://en.wikipedia.org/wiki/.fo
 fo
 
-// fr : http://www.afnic.fr/
-// domaines descriptifs : https://www.afnic.fr/medias/documents/Cadre_legal/Afnic_Naming_Policy_12122016_VEN.pdf
+// fr : https://www.afnic.fr/ https://www.afnic.fr/wp-media/uploads/2022/12/afnic-naming-policy-2023-01-01.pdf
 fr
 asso.fr
 com.fr
@@ -1034,7 +1059,7 @@ gouv.fr
 nom.fr
 prd.fr
 tm.fr
-// domaines sectoriels : https://www.afnic.fr/en/products-and-services/the-fr-tld/sector-based-fr-domains-4.html
+// Former "domaines sectoriels", still registration suffixes
 aeroport.fr
 avocat.fr
 avoues.fr
@@ -1152,7 +1177,7 @@ gov.gr
 // gs : https://en.wikipedia.org/wiki/.gs
 gs
 
-// gt : http://www.gt/politicas_de_registro.html
+// gt : https://www.gt/sitio/registration_policy.php?lang=en
 gt
 com.gt
 edu.gt
@@ -1176,6 +1201,7 @@ org.gu
 web.gu
 
 // gw : https://en.wikipedia.org/wiki/.gw
+// gw : https://nic.gw/regras/
 gw
 
 // gy : https://en.wikipedia.org/wiki/.gy
@@ -1306,7 +1332,9 @@ web.id
 ie
 gov.ie
 
-// il : http://www.isoc.org.il/domains/
+// il :         http://www.isoc.org.il/domains/
+// see also:    https://en.isoc.org.il/il-cctld/registration-rules
+// ISOC-IL      (operated by .il Registry)
 il
 ac.il
 co.il
@@ -1316,6 +1344,16 @@ k12.il
 muni.il
 net.il
 org.il
+// xn--4dbrk0ce ("Israel", Hebrew) : IL
+ישראל
+// xn--4dbgdty6c.xn--4dbrk0ce.
+אקדמיה.ישראל
+// xn--5dbhl8d.xn--4dbrk0ce.
+ישוב.ישראל
+// xn--8dbq2a.xn--4dbrk0ce.
+צהל.ישראל
+// xn--hebda8b.xn--4dbrk0ce.
+ממשל.ישראל
 
 // im : https://www.nic.im/
 // Submitted by registry <info@nic.im>
@@ -1331,22 +1369,51 @@ tt.im
 tv.im
 
 // in : https://en.wikipedia.org/wiki/.in
-// see also: https://registry.in/Policies
+// see also: https://registry.in/policies
 // Please note, that nic.in is not an official eTLD, but used by most
 // government institutions.
 in
+5g.in
+6g.in
+ac.in
+ai.in
+am.in
+bihar.in
+biz.in
+business.in
+ca.in
+cn.in
 co.in
+com.in
+coop.in
+cs.in
+delhi.in
+dr.in
+edu.in
+er.in
 firm.in
-net.in
-org.in
 gen.in
+gov.in
+gujarat.in
 ind.in
+info.in
+int.in
+internet.in
+io.in
+me.in
+mil.in
+net.in
 nic.in
-ac.in
-edu.in
+org.in
+pg.in
+post.in
+pro.in
 res.in
-gov.in
-mil.in
+travel.in
+tv.in
+uk.in
+up.in
+us.in
 
 // info : https://en.wikipedia.org/wiki/.info
 info
@@ -1356,7 +1423,7 @@ info
 int
 eu.int
 
-// io : http://www.nic.io/rules.html
+// io : http://www.nic.io/rules.htm
 // list of other 2nd level tlds ?
 io
 com.io
@@ -3755,11 +3822,10 @@ org.kw
 // ky : http://www.icta.ky/da_ky_reg_dom.php
 // Confirmed by registry <kysupport@perimeterusa.com> 2008-06-17
 ky
-edu.ky
-gov.ky
 com.ky
-org.ky
+edu.ky
 net.ky
+org.ky
 
 // kz : https://en.wikipedia.org/wiki/.kz
 // see also: http://www.nic.kz/rules/index.jsp
@@ -4003,555 +4069,8 @@ ac.mu
 co.mu
 or.mu
 
-// museum : http://about.museum/naming/
-// http://index.museum/
+// museum : https://welcome.museum/wp-content/uploads/2018/05/20180525-Registration-Policy-MUSEUM-EN_VF-2.pdf https://welcome.museum/buy-your-dot-museum-2/
 museum
-academy.museum
-agriculture.museum
-air.museum
-airguard.museum
-alabama.museum
-alaska.museum
-amber.museum
-ambulance.museum
-american.museum
-americana.museum
-americanantiques.museum
-americanart.museum
-amsterdam.museum
-and.museum
-annefrank.museum
-anthro.museum
-anthropology.museum
-antiques.museum
-aquarium.museum
-arboretum.museum
-archaeological.museum
-archaeology.museum
-architecture.museum
-art.museum
-artanddesign.museum
-artcenter.museum
-artdeco.museum
-arteducation.museum
-artgallery.museum
-arts.museum
-artsandcrafts.museum
-asmatart.museum
-assassination.museum
-assisi.museum
-association.museum
-astronomy.museum
-atlanta.museum
-austin.museum
-australia.museum
-automotive.museum
-aviation.museum
-axis.museum
-badajoz.museum
-baghdad.museum
-bahn.museum
-bale.museum
-baltimore.museum
-barcelona.museum
-baseball.museum
-basel.museum
-baths.museum
-bauern.museum
-beauxarts.museum
-beeldengeluid.museum
-bellevue.museum
-bergbau.museum
-berkeley.museum
-berlin.museum
-bern.museum
-bible.museum
-bilbao.museum
-bill.museum
-birdart.museum
-birthplace.museum
-bonn.museum
-boston.museum
-botanical.museum
-botanicalgarden.museum
-botanicgarden.museum
-botany.museum
-brandywinevalley.museum
-brasil.museum
-bristol.museum
-british.museum
-britishcolumbia.museum
-broadcast.museum
-brunel.museum
-brussel.museum
-brussels.museum
-bruxelles.museum
-building.museum
-burghof.museum
-bus.museum
-bushey.museum
-cadaques.museum
-california.museum
-cambridge.museum
-can.museum
-canada.museum
-capebreton.museum
-carrier.museum
-cartoonart.museum
-casadelamoneda.museum
-castle.museum
-castres.museum
-celtic.museum
-center.museum
-chattanooga.museum
-cheltenham.museum
-chesapeakebay.museum
-chicago.museum
-children.museum
-childrens.museum
-childrensgarden.museum
-chiropractic.museum
-chocolate.museum
-christiansburg.museum
-cincinnati.museum
-cinema.museum
-circus.museum
-civilisation.museum
-civilization.museum
-civilwar.museum
-clinton.museum
-clock.museum
-coal.museum
-coastaldefence.museum
-cody.museum
-coldwar.museum
-collection.museum
-colonialwilliamsburg.museum
-coloradoplateau.museum
-columbia.museum
-columbus.museum
-communication.museum
-communications.museum
-community.museum
-computer.museum
-computerhistory.museum
-comunicações.museum
-contemporary.museum
-contemporaryart.museum
-convent.museum
-copenhagen.museum
-corporation.museum
-correios-e-telecomunicações.museum
-corvette.museum
-costume.museum
-countryestate.museum
-county.museum
-crafts.museum
-cranbrook.museum
-creation.museum
-cultural.museum
-culturalcenter.museum
-culture.museum
-cyber.museum
-cymru.museum
-dali.museum
-dallas.museum
-database.museum
-ddr.museum
-decorativearts.museum
-delaware.museum
-delmenhorst.museum
-denmark.museum
-depot.museum
-design.museum
-detroit.museum
-dinosaur.museum
-discovery.museum
-dolls.museum
-donostia.museum
-durham.museum
-eastafrica.museum
-eastcoast.museum
-education.museum
-educational.museum
-egyptian.museum
-eisenbahn.museum
-elburg.museum
-elvendrell.museum
-embroidery.museum
-encyclopedic.museum
-england.museum
-entomology.museum
-environment.museum
-environmentalconservation.museum
-epilepsy.museum
-essex.museum
-estate.museum
-ethnology.museum
-exeter.museum
-exhibition.museum
-family.museum
-farm.museum
-farmequipment.museum
-farmers.museum
-farmstead.museum
-field.museum
-figueres.museum
-filatelia.museum
-film.museum
-fineart.museum
-finearts.museum
-finland.museum
-flanders.museum
-florida.museum
-force.museum
-fortmissoula.museum
-fortworth.museum
-foundation.museum
-francaise.museum
-frankfurt.museum
-franziskaner.museum
-freemasonry.museum
-freiburg.museum
-fribourg.museum
-frog.museum
-fundacio.museum
-furniture.museum
-gallery.museum
-garden.museum
-gateway.museum
-geelvinck.museum
-gemological.museum
-geology.museum
-georgia.museum
-giessen.museum
-glas.museum
-glass.museum
-gorge.museum
-grandrapids.museum
-graz.museum
-guernsey.museum
-halloffame.museum
-hamburg.museum
-handson.museum
-harvestcelebration.museum
-hawaii.museum
-health.museum
-heimatunduhren.museum
-hellas.museum
-helsinki.museum
-hembygdsforbund.museum
-heritage.museum
-histoire.museum
-historical.museum
-historicalsociety.museum
-historichouses.museum
-historisch.museum
-historisches.museum
-history.museum
-historyofscience.museum
-horology.museum
-house.museum
-humanities.museum
-illustration.museum
-imageandsound.museum
-indian.museum
-indiana.museum
-indianapolis.museum
-indianmarket.museum
-intelligence.museum
-interactive.museum
-iraq.museum
-iron.museum
-isleofman.museum
-jamison.museum
-jefferson.museum
-jerusalem.museum
-jewelry.museum
-jewish.museum
-jewishart.museum
-jfk.museum
-journalism.museum
-judaica.museum
-judygarland.museum
-juedisches.museum
-juif.museum
-karate.museum
-karikatur.museum
-kids.museum
-koebenhavn.museum
-koeln.museum
-kunst.museum
-kunstsammlung.museum
-kunstunddesign.museum
-labor.museum
-labour.museum
-lajolla.museum
-lancashire.museum
-landes.museum
-lans.museum
-läns.museum
-larsson.museum
-lewismiller.museum
-lincoln.museum
-linz.museum
-living.museum
-livinghistory.museum
-localhistory.museum
-london.museum
-losangeles.museum
-louvre.museum
-loyalist.museum
-lucerne.museum
-luxembourg.museum
-luzern.museum
-mad.museum
-madrid.museum
-mallorca.museum
-manchester.museum
-mansion.museum
-mansions.museum
-manx.museum
-marburg.museum
-maritime.museum
-maritimo.museum
-maryland.museum
-marylhurst.museum
-media.museum
-medical.museum
-medizinhistorisches.museum
-meeres.museum
-memorial.museum
-mesaverde.museum
-michigan.museum
-midatlantic.museum
-military.museum
-mill.museum
-miners.museum
-mining.museum
-minnesota.museum
-missile.museum
-missoula.museum
-modern.museum
-moma.museum
-money.museum
-monmouth.museum
-monticello.museum
-montreal.museum
-moscow.museum
-motorcycle.museum
-muenchen.museum
-muenster.museum
-mulhouse.museum
-muncie.museum
-museet.museum
-museumcenter.museum
-museumvereniging.museum
-music.museum
-national.museum
-nationalfirearms.museum
-nationalheritage.museum
-nativeamerican.museum
-naturalhistory.museum
-naturalhistorymuseum.museum
-naturalsciences.museum
-nature.museum
-naturhistorisches.museum
-natuurwetenschappen.museum
-naumburg.museum
-naval.museum
-nebraska.museum
-neues.museum
-newhampshire.museum
-newjersey.museum
-newmexico.museum
-newport.museum
-newspaper.museum
-newyork.museum
-niepce.museum
-norfolk.museum
-north.museum
-nrw.museum
-nyc.museum
-nyny.museum
-oceanographic.museum
-oceanographique.museum
-omaha.museum
-online.museum
-ontario.museum
-openair.museum
-oregon.museum
-oregontrail.museum
-otago.museum
-oxford.museum
-pacific.museum
-paderborn.museum
-palace.museum
-paleo.museum
-palmsprings.museum
-panama.museum
-paris.museum
-pasadena.museum
-pharmacy.museum
-philadelphia.museum
-philadelphiaarea.museum
-philately.museum
-phoenix.museum
-photography.museum
-pilots.museum
-pittsburgh.museum
-planetarium.museum
-plantation.museum
-plants.museum
-plaza.museum
-portal.museum
-portland.museum
-portlligat.museum
-posts-and-telecommunications.museum
-preservation.museum
-presidio.museum
-press.museum
-project.museum
-public.museum
-pubol.museum
-quebec.museum
-railroad.museum
-railway.museum
-research.museum
-resistance.museum
-riodejaneiro.museum
-rochester.museum
-rockart.museum
-roma.museum
-russia.museum
-saintlouis.museum
-salem.museum
-salvadordali.museum
-salzburg.museum
-sandiego.museum
-sanfrancisco.museum
-santabarbara.museum
-santacruz.museum
-santafe.museum
-saskatchewan.museum
-satx.museum
-savannahga.museum
-schlesisches.museum
-schoenbrunn.museum
-schokoladen.museum
-school.museum
-schweiz.museum
-science.museum
-scienceandhistory.museum
-scienceandindustry.museum
-sciencecenter.museum
-sciencecenters.museum
-science-fiction.museum
-sciencehistory.museum
-sciences.museum
-sciencesnaturelles.museum
-scotland.museum
-seaport.museum
-settlement.museum
-settlers.museum
-shell.museum
-sherbrooke.museum
-sibenik.museum
-silk.museum
-ski.museum
-skole.museum
-society.museum
-sologne.museum
-soundandvision.museum
-southcarolina.museum
-southwest.museum
-space.museum
-spy.museum
-square.museum
-stadt.museum
-stalbans.museum
-starnberg.museum
-state.museum
-stateofdelaware.museum
-station.museum
-steam.museum
-steiermark.museum
-stjohn.museum
-stockholm.museum
-stpetersburg.museum
-stuttgart.museum
-suisse.museum
-surgeonshall.museum
-surrey.museum
-svizzera.museum
-sweden.museum
-sydney.museum
-tank.museum
-tcm.museum
-technology.museum
-telekommunikation.museum
-television.museum
-texas.museum
-textile.museum
-theater.museum
-time.museum
-timekeeping.museum
-topology.museum
-torino.museum
-touch.museum
-town.museum
-transport.museum
-tree.museum
-trolley.museum
-trust.museum
-trustee.museum
-uhren.museum
-ulm.museum
-undersea.museum
-university.museum
-usa.museum
-usantiques.museum
-usarts.museum
-uscountryestate.museum
-usculture.museum
-usdecorativearts.museum
-usgarden.museum
-ushistory.museum
-ushuaia.museum
-uslivinghistory.museum
-utah.museum
-uvic.museum
-valley.museum
-vantaa.museum
-versailles.museum
-viking.museum
-village.museum
-virginia.museum
-virtual.museum
-virtuel.museum
-vlaanderen.museum
-volkenkunde.museum
-wales.museum
-wallonie.museum
-war.museum
-washingtondc.museum
-watchandclock.museum
-watch-and-clock.museum
-western.museum
-westfalen.museum
-whaling.museum
-wildlife.museum
-williamsburg.museum
-windmill.museum
-workshop.museum
-york.museum
-yorkshire.museum
-yosemite.museum
-youth.museum
-zoological.museum
-zoology.museum
-ירושלים.museum
-иком.museum
 
 // mv : https://en.wikipedia.org/wiki/.mv
 // "mv" included because, contra Wikipedia, google.mv exists.
@@ -4594,15 +4113,17 @@ gob.mx
 edu.mx
 net.mx
 
-// my : http://www.mynic.net.my/
+// my : http://www.mynic.my/
+// Available strings: https://mynic.my/resources/domains/buying-a-domain/
 my
+biz.my
 com.my
-net.my
-org.my
-gov.my
 edu.my
+gov.my
 mil.my
 name.my
+net.my
+org.my
 
 // mz : http://www.uem.mz/
 // Submitted by registry <antonio@uem.mz>
@@ -4703,6 +4224,7 @@ nl
 // Norid geographical second level domains : https://www.norid.no/en/om-domenenavn/regelverk-for-no/vedlegg-b/
 // Norid category second level domains : https://www.norid.no/en/om-domenenavn/regelverk-for-no/vedlegg-c/
 // Norid category second-level domains managed by parties other than Norid : https://www.norid.no/en/om-domenenavn/regelverk-for-no/vedlegg-d/
+// RSS feed: https://teknisk.norid.no/en/feed/
 no
 // Norid category second level domains : https://www.norid.no/en/om-domenenavn/regelverk-for-no/vedlegg-c/
 fhs.no
@@ -5791,7 +5313,7 @@ zarow.pl
 zgora.pl
 zgorzelec.pl
 
-// pm : http://www.afnic.fr/medias/documents/AFNIC-naming-policy2012.pdf
+// pm : https://www.afnic.fr/wp-media/uploads/2022/12/afnic-naming-policy-2023-01-01.pdf
 pm
 
 // pn : http://www.government.pn/PnRegistry/policies.htm
@@ -5847,7 +5369,7 @@ com.ps
 org.ps
 net.ps
 
-// pt : http://online.dns.pt/dns/start_dns
+// pt : https://www.dns.pt/en/domain/pt-terms-and-conditions-registration-rules/
 pt
 net.pt
 gov.pt
@@ -5889,7 +5411,7 @@ net.qa
 org.qa
 sch.qa
 
-// re : http://www.afnic.re/obtenir/chartes/nommage-re/annexe-descriptifs
+// re : https://www.afnic.fr/wp-media/uploads/2022/12/afnic-naming-policy-2023-01-01.pdf
 re
 asso.re
 com.re
@@ -6024,7 +5546,7 @@ gov.sg
 edu.sg
 per.sg
 
-// sh : http://www.nic.sh/registrar.html
+// sh : http://nic.sh/rules.htm
 sh
 com.sh
 net.sh
@@ -6084,8 +5606,10 @@ biz.ss
 com.ss
 edu.ss
 gov.ss
+me.ss
 net.ss
 org.ss
+sch.ss
 
 // st : http://www.nic.st/html/policyrules/
 st
@@ -6094,7 +5618,6 @@ com.st
 consulado.st
 edu.st
 embaixada.st
-gov.st
 mil.st
 net.st
 org.st
@@ -6145,7 +5668,7 @@ td
 // http://www.telnic.org/
 tel
 
-// tf : https://en.wikipedia.org/wiki/.tf
+// tf : https://www.afnic.fr/wp-media/uploads/2022/12/afnic-naming-policy-2023-01-01.pdf
 tf
 
 // tg : https://en.wikipedia.org/wiki/.tg
@@ -6199,29 +5722,22 @@ gov.tm
 mil.tm
 edu.tm
 
-// tn : https://en.wikipedia.org/wiki/.tn
-// http://whois.ati.tn/
+// tn : http://www.registre.tn/fr/
+// https://whois.ati.tn/
 tn
 com.tn
 ens.tn
 fin.tn
 gov.tn
 ind.tn
+info.tn
 intl.tn
+mincom.tn
 nat.tn
 net.tn
 org.tn
-info.tn
 perso.tn
 tourism.tn
-edunet.tn
-rnrt.tn
-rns.tn
-rnu.tn
-mincom.tn
-agrinet.tn
-defense.tn
-turen.tn
 
 // to : https://en.wikipedia.org/wiki/.to
 // Submitted by registry <egullich@colo.to>
@@ -6360,6 +5876,7 @@ kiev.ua
 kirovograd.ua
 km.ua
 kr.ua
+kropyvnytskyi.ua
 krym.ua
 ks.ua
 kv.ua
@@ -6711,9 +6228,10 @@ mil.vc
 edu.vc
 
 // ve : https://registro.nic.ve/
-// Submitted by registry
+// Submitted by registry nic@nic.ve and nicve@conatel.gob.ve
 ve
 arts.ve
+bib.ve
 co.ve
 com.ve
 e12.ve
@@ -6725,7 +6243,9 @@ info.ve
 int.ve
 mil.ve
 net.ve
+nom.ve
 org.ve
+rar.ve
 rec.ve
 store.ve
 tec.ve
@@ -6768,7 +6288,7 @@ edu.vu
 net.vu
 org.vu
 
-// wf : http://www.afnic.fr/medias/documents/AFNIC-naming-policy2012.pdf
+// wf : https://www.afnic.fr/wp-media/uploads/2022/12/afnic-naming-policy-2023-01-01.pdf
 wf
 
 // ws : https://en.wikipedia.org/wiki/.ws
@@ -6780,7 +6300,7 @@ org.ws
 gov.ws
 edu.ws
 
-// yt : http://www.afnic.fr/medias/documents/AFNIC-naming-policy2012.pdf
+// yt : https://www.afnic.fr/wp-media/uploads/2022/12/afnic-naming-policy-2023-01-01.pdf
 yt
 
 // IDN ccTLDs
@@ -6804,6 +6324,9 @@ yt
 // xn--90ae ("bg", Bulgarian) : BG
 бг
 
+// xn--mgbcpq6gpa1a ("albahrain", Arabic) : BH
+البحرين
+
 // xn--90ais ("bel", Belarusian/Russian Cyrillic) : BY
 // Operated by .by registry
 бел
@@ -6936,6 +6459,9 @@ yt
 // xn--80ao21a ("Kaz", Kazakh) : KZ
 қаз
 
+// xn--q7ce6a ("Lao", Lao) : LA
+ລາວ
+
 // xn--fzc2c9e2c ("Lanka", Sinhalese-Sinhala) : LK
 // https://nic.lk
 ලංකා
@@ -7061,7 +6587,13 @@ yt
 xxx
 
 // ye : http://www.y.net.ye/services/domain_name.htm
-*.ye
+ye
+com.ye
+edu.ye
+gov.ye
+net.ye
+mil.ye
+org.ye
 
 // za : https://www.zadna.org.za/content/page/domain-information/
 ac.za
@@ -7110,7 +6642,7 @@ org.zw
 
 // newGTLDs
 
-// List of new gTLDs imported from https://www.icann.org/resources/registries/gtlds/v2/gtlds.json on 2020-10-06T17:44:42Z
+// List of new gTLDs imported from https://www.icann.org/resources/registries/gtlds/v2/gtlds.json on 2023-04-14T15:13:16Z
 // This list is auto-generated, don't edit it manually.
 // aaa : 2015-02-26 American Automobile Association, Inc.
 aaa
@@ -7136,7 +6668,7 @@ abc
 // able : 2015-06-25 Able Inc.
 able
 
-// abogado : 2014-04-24 Minds + Machines Group Limited
+// abogado : 2014-04-24 Registry Services, LLC
 abogado
 
 // abudhabi : 2015-07-30 Abu Dhabi Systems and Information Centre
@@ -7160,9 +6692,6 @@ aco
 // actor : 2013-12-12 Dog Beach, LLC
 actor
 
-// adac : 2015-07-16 Allgemeiner Deutscher Automobil-Club e.V. (ADAC)
-adac
-
 // ads : 2014-12-04 Charleston Road Registry Inc.
 ads
 
@@ -7175,9 +6704,6 @@ aeg
 // aetna : 2015-05-21 Aetna Life Insurance Company
 aetna
 
-// afamilycompany : 2015-07-23 Johnson Shareholdings, Inc.
-afamilycompany
-
 // afl : 2014-10-02 Australian Football League
 afl
 
@@ -7283,7 +6809,7 @@ arab
 // aramco : 2014-11-20 Aramco Services Company
 aramco
 
-// archi : 2014-02-06 Afilias Limited
+// archi : 2014-02-06 Identity Digital Limited
 archi
 
 // army : 2014-03-06 Dog Beach, LLC
@@ -7316,7 +6842,7 @@ audi
 // audible : 2015-06-25 Amazon Registry Services, Inc.
 audible
 
-// audio : 2014-03-20 UNR Corp.
+// audio : 2014-03-20 XYZ.COM LLC
 audio
 
 // auspost : 2015-08-13 Australian Postal Corporation
@@ -7328,16 +6854,16 @@ author
 // auto : 2014-11-13 XYZ.COM LLC
 auto
 
-// autos : 2014-01-09 DERAutos, LLC
+// autos : 2014-01-09 XYZ.COM LLC
 autos
 
-// avianca : 2015-01-08 Avianca Holdings S.A.
+// avianca : 2015-01-08 Avianca Inc.
 avianca
 
-// aws : 2015-06-25 Amazon Registry Services, Inc.
+// aws : 2015-06-25 AWS Registry LLC
 aws
 
-// axa : 2013-12-19 AXA SA
+// axa : 2013-12-19 AXA Group Operations SAS
 axa
 
 // azure : 2014-12-18 Microsoft Corporation
@@ -7412,7 +6938,7 @@ beats
 // beauty : 2015-12-03 XYZ.COM LLC
 beauty
 
-// beer : 2014-01-09 Minds + Machines Group Limited
+// beer : 2014-01-09 Registry Services, LLC
 beer
 
 // bentley : 2014-12-18 Bentley Motors Limited
@@ -7427,7 +6953,7 @@ best
 // bestbuy : 2015-07-31 BBY Solutions, Inc.
 bestbuy
 
-// bet : 2015-05-07 Afilias Limited
+// bet : 2015-05-07 Identity Digital Limited
 bet
 
 // bharti : 2014-01-09 Bharti Enterprises (Holding) Private Limited
@@ -7448,13 +6974,13 @@ bing
 // bingo : 2014-12-04 Binky Moon, LLC
 bingo
 
-// bio : 2014-03-06 Afilias Limited
+// bio : 2014-03-06 Identity Digital Limited
 bio
 
-// black : 2014-01-16 Afilias Limited
+// black : 2014-01-16 Identity Digital Limited
 black
 
-// blackfriday : 2014-01-16 UNR Corp.
+// blackfriday : 2014-01-16 Registry Services, LLC
 blackfriday
 
 // blockbuster : 2015-07-30 Dish DBS Corporation
@@ -7466,7 +6992,7 @@ blog
 // bloomberg : 2014-07-17 Bloomberg IP Holdings LLC
 bloomberg
 
-// blue : 2013-11-07 Afilias Limited
+// blue : 2013-11-07 Identity Digital Limited
 blue
 
 // bms : 2014-10-30 Bristol-Myers Squibb Company
@@ -7478,7 +7004,7 @@ bmw
 // bnpparibas : 2014-05-29 BNP Paribas
 bnpparibas
 
-// boats : 2014-12-04 DERBoats, LLC
+// boats : 2014-12-04 XYZ.COM LLC
 boats
 
 // boehringer : 2015-07-09 Boehringer Ingelheim International GmbH
@@ -7508,7 +7034,7 @@ bosch
 // bostik : 2015-05-28 Bostik SA
 bostik
 
-// boston : 2015-12-10 Boston TLD Management, LLC
+// boston : 2015-12-10 Registry Services, LLC
 boston
 
 // bot : 2014-12-18 Amazon Registry Services, Inc.
@@ -7517,7 +7043,7 @@ bot
 // boutique : 2013-11-14 Binky Moon, LLC
 boutique
 
-// box : 2015-11-12 .BOX INC.
+// box : 2015-11-12 Intercap Registry Inc.
 box
 
 // bradesco : 2014-12-18 Banco Bradesco S.A.
@@ -7529,7 +7055,7 @@ bridgestone
 // broadway : 2014-12-22 Celebrate Broadway, Inc.
 broadway
 
-// broker : 2014-12-11 Dotbroker Registry Limited
+// broker : 2014-12-11 Dog Beach, LLC
 broker
 
 // brother : 2015-01-29 Brother Industries, Ltd.
@@ -7538,12 +7064,6 @@ brother
 // brussels : 2014-02-06 DNS.be vzw
 brussels
 
-// budapest : 2013-11-21 Minds + Machines Group Limited
-budapest
-
-// bugatti : 2015-07-23 Bugatti International SA
-bugatti
-
 // build : 2013-11-07 Plan Bee LLC
 build
 
@@ -7577,7 +7097,7 @@ call
 // calvinklein : 2015-07-30 PVH gTLD Holdings LLC
 calvinklein
 
-// cam : 2016-04-21 AC Webconnecting Holding B.V.
+// cam : 2016-04-21 Cam Connecting SARL
 cam
 
 // camera : 2013-08-27 Binky Moon, LLC
@@ -7586,9 +7106,6 @@ camera
 // camp : 2013-11-07 Binky Moon, LLC
 camp
 
-// cancerresearch : 2014-05-15 Australian Cancer Research Foundation
-cancerresearch
-
 // canon : 2014-09-12 Canon Inc.
 canon
 
@@ -7622,15 +7139,12 @@ careers
 // cars : 2014-11-13 XYZ.COM LLC
 cars
 
-// casa : 2013-11-21 Minds + Machines Group Limited
+// casa : 2013-11-21 Registry Services, LLC
 casa
 
-// case : 2015-09-03 CNH Industrial N.V.
+// case : 2015-09-03 Digity, LLC
 case
 
-// caseih : 2015-09-03 CNH Industrial N.V.
-caseih
-
 // cash : 2014-03-06 Binky Moon, LLC
 cash
 
@@ -7655,9 +7169,6 @@ cbre
 // cbs : 2015-08-06 CBS Domains Inc.
 cbs
 
-// ceb : 2015-04-09 The Corporate Executive Board Company
-ceb
-
 // center : 2013-11-07 Binky Moon, LLC
 center
 
@@ -7670,7 +7181,7 @@ cern
 // cfa : 2014-08-28 CFA Institute
 cfa
 
-// cfd : 2014-12-11 DotCFD Registry Limited
+// cfd : 2014-12-11 ShortDot SA
 cfd
 
 // chanel : 2015-04-09 Chanel International B.V.
@@ -7679,7 +7190,7 @@ chanel
 // channel : 2014-05-08 Charleston Road Registry Inc.
 channel
 
-// charity : 2018-04-11 Binky Moon, LLC
+// charity : 2018-04-11 Public Interest Registry
 charity
 
 // chase : 2015-04-30 JPMorgan Chase Bank, National Association
@@ -7694,7 +7205,7 @@ cheap
 // chintai : 2015-06-11 CHINTAI Corporation
 chintai
 
-// christmas : 2013-11-21 UNR Corp.
+// christmas : 2013-11-21 XYZ.COM LLC
 christmas
 
 // chrome : 2014-07-24 Charleston Road Registry Inc.
@@ -7733,7 +7244,7 @@ claims
 // cleaning : 2013-12-05 Binky Moon, LLC
 cleaning
 
-// click : 2014-06-05 UNR Corp.
+// click : 2014-06-05 Internet Naming Company LLC
 click
 
 // clinic : 2014-03-20 Binky Moon, LLC
@@ -7748,7 +7259,7 @@ clothing
 // cloud : 2015-04-16 Aruba PEC S.p.A.
 cloud
 
-// club : 2013-11-08 .CLUB DOMAINS, LLC
+// club : 2013-11-08 Registry Services, LLC
 club
 
 // clubmed : 2015-06-25 Club Méditerranée S.A.
@@ -7805,7 +7316,7 @@ contact
 // contractors : 2013-09-10 Binky Moon, LLC
 contractors
 
-// cooking : 2013-11-21 Minds + Machines Group Limited
+// cooking : 2013-11-21 Registry Services, LLC
 cooking
 
 // cookingchannel : 2015-07-02 Lifestyle Domain Holdings, Inc.
@@ -7817,7 +7328,7 @@ cool
 // corsica : 2014-09-25 Collectivité de Corse
 corsica
 
-// country : 2013-12-19 DotCountry LLC
+// country : 2013-12-19 Internet Naming Company LLC
 country
 
 // coupon : 2015-02-26 Amazon Registry Services, Inc.
@@ -7826,7 +7337,7 @@ coupon
 // coupons : 2015-03-26 Binky Moon, LLC
 coupons
 
-// courses : 2014-12-04 OPEN UNIVERSITIES AUSTRALIA PTY LTD
+// courses : 2014-12-04 Registry Services, LLC
 courses
 
 // cpa : 2019-06-10 American Institute of Certified Public Accountants
@@ -7838,7 +7349,7 @@ credit
 // creditcard : 2014-03-20 Binky Moon, LLC
 creditcard
 
-// creditunion : 2015-01-22 CUNA Performance Resources, LLC
+// creditunion : 2015-01-22 DotCooperation LLC
 creditunion
 
 // cricket : 2014-10-09 dot Cricket Limited
@@ -7856,9 +7367,6 @@ cruise
 // cruises : 2013-12-05 Binky Moon, LLC
 cruises
 
-// csc : 2014-09-25 Alliance-One Services, Inc.
-csc
-
 // cuisinella : 2014-04-03 SCHMIDT GROUPE S.A.S.
 cuisinella
 
@@ -7895,7 +7403,7 @@ day
 // dclk : 2014-11-20 Charleston Road Registry Inc.
 dclk
 
-// dds : 2015-05-07 Minds + Machines Group Limited
+// dds : 2015-05-07 Registry Services, LLC
 dds
 
 // deal : 2015-06-25 Amazon Registry Services, Inc.
@@ -7934,7 +7442,7 @@ dentist
 // desi : 2013-11-14 Desi Networks LLC
 desi
 
-// design : 2014-11-07 Top Level Design, LLC
+// design : 2014-11-07 Registry Services, LLC
 design
 
 // dev : 2014-10-16 Charleston Road Registry Inc.
@@ -7946,7 +7454,7 @@ dhl
 // diamonds : 2013-09-22 Binky Moon, LLC
 diamonds
 
-// diet : 2014-06-26 UNR Corp.
+// diet : 2014-06-26 XYZ.COM LLC
 diet
 
 // digital : 2014-03-06 Binky Moon, LLC
@@ -8000,13 +7508,10 @@ dtv
 // dubai : 2015-01-01 Dubai Smart Government Department
 dubai
 
-// duck : 2015-07-23 Johnson Shareholdings, Inc.
-duck
-
 // dunlop : 2015-07-02 The Goodyear Tire & Rubber Company
 dunlop
 
-// dupont : 2015-06-25 E. I. du Pont de Nemours and Company
+// dupont : 2015-06-25 DuPont Specialty Products USA, LLC
 dupont
 
 // durban : 2014-03-24 ZA Central Registry NPC trading as ZA Central Registry
@@ -8018,7 +7523,7 @@ dvag
 // dvr : 2016-05-26 DISH Technologies L.L.C.
 dvr
 
-// earth : 2014-12-04 Interlink Co., Ltd.
+// earth : 2014-12-04 Interlink Systems Innovation Institute K.K.
 earth
 
 // eat : 2014-01-23 Charleston Road Registry Inc.
@@ -8123,7 +7628,7 @@ farm
 // farmers : 2015-07-09 Farmers Insurance Exchange
 farmers
 
-// fashion : 2014-07-03 Minds + Machines Group Limited
+// fashion : 2014-07-03 Registry Services, LLC
 fashion
 
 // fast : 2014-12-18 Amazon Registry Services, Inc.
@@ -8174,10 +7679,10 @@ firmdale
 // fish : 2013-12-12 Binky Moon, LLC
 fish
 
-// fishing : 2013-11-21 Minds + Machines Group Limited
+// fishing : 2013-11-21 Registry Services, LLC
 fishing
 
-// fit : 2014-11-07 Minds + Machines Group Limited
+// fit : 2014-11-07 Registry Services, LLC
 fit
 
 // fitness : 2014-03-06 Binky Moon, LLC
@@ -8195,7 +7700,7 @@ flir
 // florist : 2013-11-07 Binky Moon, LLC
 florist
 
-// flowers : 2014-10-09 UNR Corp.
+// flowers : 2014-10-09 XYZ.COM LLC
 flowers
 
 // fly : 2014-05-08 Charleston Road Registry Inc.
@@ -8216,7 +7721,7 @@ football
 // ford : 2014-11-13 Ford Motor Company
 ford
 
-// forex : 2014-12-11 Dotforex Registry Limited
+// forex : 2014-12-11 Dog Beach, LLC
 forex
 
 // forsale : 2014-05-22 Dog Beach, LLC
@@ -8225,7 +7730,7 @@ forsale
 // forum : 2015-04-02 Fegistry, LLC
 forum
 
-// foundation : 2013-12-05 Binky Moon, LLC
+// foundation : 2013-12-05 Public Interest Registry
 foundation
 
 // fox : 2015-09-11 FOX Registry, LLC
@@ -8255,10 +7760,7 @@ ftr
 // fujitsu : 2015-07-30 Fujitsu Limited
 fujitsu
 
-// fujixerox : 2015-07-23 Xerox DNHC LLC
-fujixerox
-
-// fun : 2016-01-14 DotSpace Inc.
+// fun : 2016-01-14 Radix FZC
 fun
 
 // fund : 2014-03-20 Binky Moon, LLC
@@ -8285,7 +7787,7 @@ gallo
 // gallup : 2015-02-19 Gallup, Inc.
 gallup
 
-// game : 2015-05-28 UNR Corp.
+// game : 2015-05-28 XYZ.COM LLC
 game
 
 // games : 2015-05-28 Dog Beach, LLC
@@ -8294,7 +7796,7 @@ games
 // gap : 2015-07-31 The Gap, Inc.
 gap
 
-// garden : 2014-06-26 Minds + Machines Group Limited
+// garden : 2014-06-26 Registry Services, LLC
 garden
 
 // gay : 2019-05-23 Top Level Design, LLC
@@ -8309,7 +7811,7 @@ gdn
 // gea : 2014-12-04 GEA Group Aktiengesellschaft
 gea
 
-// gent : 2014-01-23 COMBELL NV
+// gent : 2014-01-23 Easyhost BV
 gent
 
 // genting : 2015-03-12 Resorts World Inc Pte. Ltd.
@@ -8327,22 +7829,19 @@ gift
 // gifts : 2014-07-03 Binky Moon, LLC
 gifts
 
-// gives : 2014-03-06 Dog Beach, LLC
+// gives : 2014-03-06 Public Interest Registry
 gives
 
-// giving : 2014-11-13 Giving Limited
+// giving : 2014-11-13 Public Interest Registry
 giving
 
-// glade : 2015-07-23 Johnson Shareholdings, Inc.
-glade
-
 // glass : 2013-11-07 Binky Moon, LLC
 glass
 
 // gle : 2014-07-24 Charleston Road Registry Inc.
 gle
 
-// global : 2014-04-17 Dot Global Domain Registry Limited
+// global : 2014-04-17 Identity Digital Limited
 global
 
 // globo : 2013-12-19 Globo Comunicação e Participações S.A
@@ -8399,7 +7898,7 @@ graphics
 // gratis : 2014-03-20 Binky Moon, LLC
 gratis
 
-// green : 2014-05-08 Afilias Limited
+// green : 2014-05-08 Identity Digital Limited
 green
 
 // gripe : 2014-03-06 Binky Moon, LLC
@@ -8423,7 +7922,7 @@ guge
 // guide : 2013-09-13 Binky Moon, LLC
 guide
 
-// guitars : 2013-11-14 UNR Corp.
+// guitars : 2013-11-14 XYZ.COM LLC
 guitars
 
 // guru : 2013-08-27 Binky Moon, LLC
@@ -8456,7 +7955,7 @@ health
 // healthcare : 2014-06-12 Binky Moon, LLC
 healthcare
 
-// help : 2014-06-26 UNR Corp.
+// help : 2014-06-26 Innovation service Limited
 help
 
 // helsinki : 2015-02-05 City of Helsinki
@@ -8471,7 +7970,7 @@ hermes
 // hgtv : 2015-07-02 Lifestyle Domain Holdings, Inc.
 hgtv
 
-// hiphop : 2014-03-06 UNR Corp.
+// hiphop : 2014-03-06 Dot Hip Hop, LLC
 hiphop
 
 // hisamitsu : 2015-07-16 Hisamitsu Pharmaceutical Co.,Inc.
@@ -8480,7 +7979,7 @@ hisamitsu
 // hitachi : 2014-10-31 Hitachi, Ltd.
 hitachi
 
-// hiv : 2014-03-13 UNR Corp.
+// hiv : 2014-03-13 Internet Naming Company LLC
 hiv
 
 // hkt : 2015-05-14 PCCW-HKT DataCom Services Limited
@@ -8501,7 +8000,7 @@ homedepot
 // homegoods : 2015-07-16 The TJX Companies, Inc.
 homegoods
 
-// homes : 2014-01-09 DERHomes, LLC
+// homes : 2014-01-09 XYZ.COM LLC
 homes
 
 // homesense : 2015-07-16 The TJX Companies, Inc.
@@ -8510,16 +8009,16 @@ homesense
 // honda : 2014-12-18 Honda Motor Co., Ltd.
 honda
 
-// horse : 2013-11-21 Minds + Machines Group Limited
+// horse : 2013-11-21 Registry Services, LLC
 horse
 
 // hospital : 2016-10-20 Binky Moon, LLC
 hospital
 
-// host : 2014-04-17 DotHost Inc.
+// host : 2014-04-17 Radix FZC
 host
 
-// hosting : 2014-05-29 UNR Corp.
+// hosting : 2014-05-29 XYZ.COM LLC
 hosting
 
 // hot : 2015-08-27 Amazon Registry Services, Inc.
@@ -8609,9 +8108,6 @@ insurance
 // insure : 2014-03-20 Binky Moon, LLC
 insure
 
-// intel : 2015-08-06 Intel Corporation
-intel
-
 // international : 2013-11-07 Binky Moon, LLC
 international
 
@@ -8642,9 +8138,6 @@ itau
 // itv : 2015-07-09 ITV Services Limited
 itv
 
-// iveco : 2015-09-03 CNH Industrial N.V.
-iveco
-
 // jaguar : 2014-11-13 Jaguar Land Rover Ltd
 jaguar
 
@@ -8654,9 +8147,6 @@ java
 // jcb : 2014-11-20 JCB Co., Ltd.
 jcb
 
-// jcp : 2015-04-23 JCP Media, Inc.
-jcp
-
 // jeep : 2015-07-30 FCA US LLC.
 jeep
 
@@ -8693,7 +8183,7 @@ jpmorgan
 // jprs : 2014-09-18 Japan Registry Services Co., Ltd.
 jprs
 
-// juegos : 2014-03-20 UNR Corp.
+// juegos : 2014-03-20 Internet Naming Company LLC
 juegos
 
 // juniper : 2015-07-30 JUNIPER NETWORKS, INC.
@@ -8720,7 +8210,10 @@ kfh
 // kia : 2015-07-09 KIA MOTORS CORPORATION
 kia
 
-// kim : 2013-09-23 Afilias Limited
+// kids : 2021-08-13 DotKids Foundation Limited
+kids
+
+// kim : 2013-09-23 Identity Digital Limited
 kim
 
 // kinder : 2014-11-07 Ferrero Trading Lux S.A.
@@ -8789,7 +8282,7 @@ lanxess
 // lasalle : 2015-04-02 Jones Lang LaSalle Incorporated
 lasalle
 
-// lat : 2014-10-16 ECOM-LAC Federaciòn de Latinoamèrica y el Caribe para Internet y el Comercio Electrònico
+// lat : 2014-10-16 XYZ.COM LLC
 lat
 
 // latino : 2015-07-30 Dish DBS Corporation
@@ -8798,7 +8291,7 @@ latino
 // latrobe : 2014-06-16 La Trobe University
 latrobe
 
-// law : 2015-01-22 LW TLD Limited
+// law : 2015-01-22 Registry Services, LLC
 law
 
 // lawyer : 2014-03-20 Dog Beach, LLC
@@ -8825,7 +8318,7 @@ lego
 // lexus : 2015-04-23 TOYOTA MOTOR CORPORATION
 lexus
 
-// lgbt : 2014-05-08 Afilias Limited
+// lgbt : 2014-05-08 Identity Digital Limited
 lgbt
 
 // lidl : 2014-09-18 Schwarz Domains und Services GmbH & Co. KG
@@ -8858,10 +8351,7 @@ limo
 // lincoln : 2014-11-13 Ford Motor Company
 lincoln
 
-// linde : 2014-12-04 Linde Aktiengesellschaft
-linde
-
-// link : 2013-11-14 UNR Corp.
+// link : 2013-11-14 Nova Registry Ltd
 link
 
 // lipsy : 2015-06-25 Lipsy Ltd
@@ -8873,13 +8363,10 @@ live
 // living : 2015-07-30 Lifestyle Domain Holdings, Inc.
 living
 
-// lixil : 2015-03-19 LIXIL Group Corporation
-lixil
-
-// llc : 2017-12-14 Afilias Limited
+// llc : 2017-12-14 Identity Digital Limited
 llc
 
-// llp : 2019-08-26 UNR Corp.
+// llp : 2019-08-26 Intercap Registry Inc.
 llp
 
 // loan : 2014-11-20 dot Loan Limited
@@ -8894,10 +8381,7 @@ locker
 // locus : 2015-06-25 Locus Analytics LLC
 locus
 
-// loft : 2015-07-30 Annco, Inc.
-loft
-
-// lol : 2015-01-30 UNR Corp.
+// lol : 2015-01-30 XYZ.COM LLC
 lol
 
 // london : 2013-11-14 Dot London Domains Limited
@@ -8906,7 +8390,7 @@ london
 // lotte : 2014-11-07 Lotte Holdings Co., Ltd.
 lotte
 
-// lotto : 2014-04-10 Afilias Limited
+// lotto : 2014-04-10 Identity Digital Limited
 lotto
 
 // love : 2014-12-22 Merchant Law Group LLP
@@ -8927,18 +8411,12 @@ ltda
 // lundbeck : 2015-08-06 H. Lundbeck A/S
 lundbeck
 
-// lupin : 2014-11-07 LUPIN LIMITED
-lupin
-
-// luxe : 2014-01-09 Minds + Machines Group Limited
+// luxe : 2014-01-09 Registry Services, LLC
 luxe
 
 // luxury : 2013-10-17 Luxury Partners, LLC
 luxury
 
-// macys : 2015-07-31 Macys, Inc.
-macys
-
 // madrid : 2014-05-01 Comunidad de Madrid
 madrid
 
@@ -8969,7 +8447,7 @@ market
 // marketing : 2013-11-07 Binky Moon, LLC
 marketing
 
-// markets : 2014-12-11 Dotmarkets Registry Limited
+// markets : 2014-12-11 Dog Beach, LLC
 markets
 
 // marriott : 2014-10-09 Marriott Worldwide Corporation
@@ -9017,7 +8495,7 @@ menu
 // merckmsd : 2016-07-14 MSD Registry Holdings, Inc.
 merckmsd
 
-// miami : 2013-12-19 Minds + Machines Group Limited
+// miami : 2013-12-19 Registry Services, LLC
 miami
 
 // microsoft : 2014-12-18 Microsoft Corporation
@@ -9050,13 +8528,13 @@ mobile
 // moda : 2013-11-07 Dog Beach, LLC
 moda
 
-// moe : 2013-11-13 Interlink Co., Ltd.
+// moe : 2013-11-13 Interlink Systems Innovation Institute K.K.
 moe
 
 // moi : 2014-12-18 Amazon Registry Services, Inc.
 moi
 
-// mom : 2015-04-16 UNR Corp.
+// mom : 2015-04-16 XYZ.COM LLC
 mom
 
 // monash : 2013-09-30 Monash University
@@ -9080,7 +8558,7 @@ moscow
 // moto : 2015-06-04 Motorola Trademark Holdings, LLC
 moto
 
-// motorcycles : 2014-01-09 DERMotorcycles, LLC
+// motorcycles : 2014-01-09 XYZ.COM LLC
 motorcycles
 
 // mov : 2014-01-30 Charleston Road Registry Inc.
@@ -9098,6 +8576,9 @@ mtn
 // mtr : 2015-03-12 MTR Corporation Limited
 mtr
 
+// music : 2021-05-04 DotMusic Limited
+music
+
 // mutual : 2015-04-02 Northwestern Mutual MU TLD Registry, LLC
 mutual
 
@@ -9107,9 +8588,6 @@ nab
 // nagoya : 2013-10-24 GMO Registry, Inc.
 nagoya
 
-// nationwide : 2015-07-23 Nationwide Mutual Insurance Company
-nationwide
-
 // natura : 2015-03-12 NATURA COSMÉTICOS S.A.
 natura
 
@@ -9137,9 +8615,6 @@ neustar
 // new : 2014-01-30 Charleston Road Registry Inc.
 new
 
-// newholland : 2015-09-03 CNH Industrial N.V.
-newholland
-
 // news : 2014-12-18 Dog Beach, LLC
 news
 
@@ -9212,12 +8687,9 @@ nyc
 // obi : 2014-09-25 OBI Group Holding SE & Co. KGaA
 obi
 
-// observer : 2015-04-30 Top Level Spectrum, Inc.
+// observer : 2015-04-30 Dog Beach, LLC
 observer
 
-// off : 2015-07-23 Johnson Shareholdings, Inc.
-off
-
 // office : 2015-03-12 Microsoft Corporation
 office
 
@@ -9245,15 +8717,12 @@ one
 // ong : 2014-03-06 Public Interest Registry
 ong
 
-// onl : 2013-09-16 I-Registry Ltd.
+// onl : 2013-09-16 iRegistry GmbH
 onl
 
-// online : 2015-01-15 DotOnline Inc.
+// online : 2015-01-15 Radix FZC
 online
 
-// onyourside : 2015-07-23 Nationwide Mutual Insurance Company
-onyourside
-
 // ooo : 2014-01-09 INFIBEAM AVENUES LIMITED
 ooo
 
@@ -9266,7 +8735,7 @@ oracle
 // orange : 2015-03-12 Orange Brand Services Limited
 orange
 
-// organic : 2014-03-27 Afilias Limited
+// organic : 2014-03-27 Identity Digital Limited
 organic
 
 // origins : 2015-10-01 The Estée Lauder Companies Inc.
@@ -9287,7 +8756,7 @@ ovh
 // page : 2014-12-04 Charleston Road Registry Inc.
 page
 
-// panasonic : 2015-07-30 Panasonic Corporation
+// panasonic : 2015-07-30 Panasonic Holdings Corporation
 panasonic
 
 // paris : 2014-01-30 City of Paris
@@ -9314,7 +8783,7 @@ pay
 // pccw : 2015-05-14 PCCW Enterprises Limited
 pccw
 
-// pet : 2015-05-07 Afilias Limited
+// pet : 2015-05-07 Identity Digital Limited
 pet
 
 // pfizer : 2015-09-11 Pfizer Inc.
@@ -9332,7 +8801,7 @@ philips
 // phone : 2016-06-02 Dish DBS Corporation
 phone
 
-// photo : 2013-11-14 UNR Corp.
+// photo : 2013-11-14 Registry Services, LLC
 photo
 
 // photography : 2013-09-20 Binky Moon, LLC
@@ -9344,7 +8813,7 @@ photos
 // physio : 2014-05-01 PhysBiz Pty Ltd
 physio
 
-// pics : 2013-11-14 UNR Corp.
+// pics : 2013-11-14 XYZ.COM LLC
 pics
 
 // pictet : 2014-06-26 Pictet Europe S.A.
@@ -9362,7 +8831,7 @@ pin
 // ping : 2015-06-11 Ping Registry Provider, Inc.
 ping
 
-// pink : 2013-10-01 Afilias Limited
+// pink : 2013-10-01 Identity Digital Limited
 pink
 
 // pioneer : 2015-07-16 Pioneer Corporation
@@ -9392,7 +8861,7 @@ pnc
 // pohl : 2014-06-23 Deutsche Vermögensberatung Aktiengesellschaft DVAG
 pohl
 
-// poker : 2014-07-03 Afilias Limited
+// poker : 2014-07-03 Identity Digital Limited
 poker
 
 // politie : 2015-08-20 Politie Nederland
@@ -9407,7 +8876,7 @@ pramerica
 // praxi : 2013-12-05 Praxi S.p.A.
 praxi
 
-// press : 2014-04-03 DotPress Inc.
+// press : 2014-04-03 Radix FZC
 press
 
 // prime : 2015-06-25 Amazon Registry Services, Inc.
@@ -9425,13 +8894,13 @@ prof
 // progressive : 2015-07-23 Progressive Casualty Insurance Company
 progressive
 
-// promo : 2014-12-18 Afilias Limited
+// promo : 2014-12-18 Identity Digital Limited
 promo
 
 // properties : 2013-12-05 Binky Moon, LLC
 properties
 
-// property : 2014-05-22 UNR Corp.
+// property : 2014-05-22 Internet Naming Company LLC
 property
 
 // protection : 2015-04-23 XYZ.COM LLC
@@ -9449,7 +8918,7 @@ pub
 // pwc : 2015-10-29 PricewaterhouseCoopers LLP
 pwc
 
-// qpon : 2013-11-14 dotCOOL, Inc.
+// qpon : 2013-11-14 dotQPON LLC
 qpon
 
 // quebec : 2013-12-19 PointQuébec Inc
@@ -9458,18 +8927,12 @@ quebec
 // quest : 2015-03-26 XYZ.COM LLC
 quest
 
-// qvc : 2015-07-30 QVC, Inc.
-qvc
-
 // racing : 2014-12-04 Premier Registry Limited
 racing
 
 // radio : 2016-07-21 European Broadcasting Union (EBU)
 radio
 
-// raid : 2015-07-23 Johnson Shareholdings, Inc.
-raid
-
 // read : 2014-12-18 Amazon Registry Services, Inc.
 read
 
@@ -9479,13 +8942,13 @@ realestate
 // realtor : 2014-05-29 Real Estate Domains LLC
 realtor
 
-// realty : 2015-03-19 Fegistry, LLC
+// realty : 2015-03-19 Dog Beach, LLC
 realty
 
 // recipes : 2013-10-17 Binky Moon, LLC
 recipes
 
-// red : 2013-11-07 Afilias Limited
+// red : 2013-11-07 Identity Digital Limited
 red
 
 // redstone : 2014-10-31 Redstone Haute Couture Co., Ltd.
@@ -9542,7 +9005,7 @@ reviews
 // rexroth : 2015-06-18 Robert Bosch GMBH
 rexroth
 
-// rich : 2013-11-21 I-Registry Ltd.
+// rich : 2013-11-21 iRegistry GmbH
 rich
 
 // richardli : 2015-05-14 Pacific Century Asset Management (HK) Limited
@@ -9560,16 +9023,13 @@ rio
 // rip : 2014-07-10 Dog Beach, LLC
 rip
 
-// rmit : 2015-11-19 Royal Melbourne Institute of Technology
-rmit
-
 // rocher : 2014-12-18 Ferrero Trading Lux S.A.
 rocher
 
 // rocks : 2013-11-14 Dog Beach, LLC
 rocks
 
-// rodeo : 2013-12-19 Minds + Machines Group Limited
+// rodeo : 2013-12-19 Registry Services, LLC
 rodeo
 
 // rogers : 2015-08-06 Rogers Communications Canada Inc.
@@ -9584,7 +9044,7 @@ rsvp
 // rugby : 2016-12-15 World Rugby Strategic Developments Limited
 rugby
 
-// ruhr : 2013-10-02 regiodot GmbH & Co. KG
+// ruhr : 2013-10-02 dotSaarland GmbH
 ruhr
 
 // run : 2015-03-19 Binky Moon, LLC
@@ -9647,7 +9107,7 @@ saxo
 // sbi : 2015-03-12 STATE BANK OF INDIA
 sbi
 
-// sbs : 2014-11-07 SPECIAL BROADCASTING SERVICE CORPORATION
+// sbs : 2014-11-07 ShortDot SA
 sbs
 
 // sca : 2014-03-13 SVENSKA CELLULOSA AKTIEBOLAGET SCA (publ)
@@ -9677,9 +9137,6 @@ schwarz
 // science : 2014-09-11 dot Science Limited
 science
 
-// scjohnson : 2015-07-23 Johnson Shareholdings, Inc.
-scjohnson
-
 // scot : 2014-01-23 Dot Scot Registry Limited
 scot
 
@@ -9707,9 +9164,6 @@ sener
 // services : 2014-02-27 Binky Moon, LLC
 services
 
-// ses : 2015-07-23 SES
-ses
-
 // seven : 2015-08-06 Seven West Media Ltd
 seven
 
@@ -9719,7 +9173,7 @@ sew
 // sex : 2014-11-13 ICM Registry SX LLC
 sex
 
-// sexy : 2013-09-11 UNR Corp.
+// sexy : 2013-09-11 Internet Naming Company LLC
 sexy
 
 // sfr : 2015-08-13 Societe Francaise du Radiotelephone - SFR
@@ -9740,7 +9194,7 @@ shell
 // shia : 2014-09-04 Asia Green IT System Bilgisayar San. ve Tic. Ltd. Sti.
 shia
 
-// shiksha : 2013-11-14 Afilias Limited
+// shiksha : 2013-11-14 Identity Digital Limited
 shiksha
 
 // shoes : 2013-10-02 Binky Moon, LLC
@@ -9761,9 +9215,6 @@ show
 // showtime : 2015-08-06 CBS Domains Inc.
 showtime
 
-// shriram : 2014-01-23 Shriram Capital Ltd.
-shriram
-
 // silk : 2015-06-25 Amazon Registry Services, Inc.
 silk
 
@@ -9773,10 +9224,10 @@ sina
 // singles : 2013-08-27 Binky Moon, LLC
 singles
 
-// site : 2015-01-15 DotSite Inc.
+// site : 2015-01-15 Radix FZC
 site
 
-// ski : 2015-04-09 Afilias Limited
+// ski : 2015-04-09 Identity Digital Limited
 ski
 
 // skin : 2015-01-15 XYZ.COM LLC
@@ -9797,7 +9248,7 @@ smart
 // smile : 2014-12-18 Amazon Registry Services, Inc.
 smile
 
-// sncf : 2015-02-19 Société Nationale des Chemins de fer Francais S N C F
+// sncf : 2015-02-19 Société Nationale SNCF
 sncf
 
 // soccer : 2015-03-26 Binky Moon, LLC
@@ -9833,7 +9284,7 @@ soy
 // spa : 2019-09-19 Asia Spa and Wellness Promotion Council Limited
 spa
 
-// space : 2014-04-03 DotSpace Inc.
+// space : 2014-04-03 Radix FZC
 space
 
 // sport : 2017-11-16 Global Association of International Sports Federations (GAISF)
@@ -9842,9 +9293,6 @@ sport
 // spot : 2015-02-26 Amazon Registry Services, Inc.
 spot
 
-// spreadbetting : 2014-12-11 Dotspreadbetting Registry Limited
-spreadbetting
-
 // srl : 2015-05-07 InterNetX, Corp
 srl
 
@@ -9875,7 +9323,7 @@ stockholm
 // storage : 2014-12-22 XYZ.COM LLC
 storage
 
-// store : 2015-04-09 DotStore Inc.
+// store : 2015-04-09 Radix FZC
 store
 
 // stream : 2016-01-08 dot Stream Limited
@@ -9884,7 +9332,7 @@ stream
 // studio : 2015-02-11 Dog Beach, LLC
 studio
 
-// study : 2014-12-11 OPEN UNIVERSITIES AUSTRALIA PTY LTD
+// study : 2014-12-11 Registry Services, LLC
 study
 
 // style : 2014-12-04 Binky Moon, LLC
@@ -9902,7 +9350,7 @@ supply
 // support : 2013-10-24 Binky Moon, LLC
 support
 
-// surf : 2014-01-09 Minds + Machines Group Limited
+// surf : 2014-01-09 Registry Services, LLC
 surf
 
 // surgery : 2014-03-20 Binky Moon, LLC
@@ -9914,9 +9362,6 @@ suzuki
 // swatch : 2015-01-08 The Swatch Group Ltd
 swatch
 
-// swiftcover : 2015-07-23 Swiftcover Insurance Services Limited
-swiftcover
-
 // swiss : 2014-10-16 Swiss Confederation
 swiss
 
@@ -9947,7 +9392,7 @@ tatamotors
 // tatar : 2014-04-24 Limited Liability Company "Coordination Center of Regional Domain of Tatarstan Republic"
 tatar
 
-// tattoo : 2013-08-30 UNR Corp.
+// tattoo : 2013-08-30 Top Level Design, LLC
 tattoo
 
 // tax : 2014-03-20 Binky Moon, LLC
@@ -9965,7 +9410,7 @@ tdk
 // team : 2015-03-05 Binky Moon, LLC
 team
 
-// tech : 2015-01-30 Personals TLD Inc.
+// tech : 2015-01-30 Radix FZC
 tech
 
 // technology : 2013-09-13 Binky Moon, LLC
@@ -9992,7 +9437,7 @@ theatre
 // tiaa : 2015-07-23 Teachers Insurance and Annuity Association of America
 tiaa
 
-// tickets : 2015-02-05 Accent Media Limited
+// tickets : 2015-02-05 XYZ.COM LLC
 tickets
 
 // tienda : 2013-11-14 Binky Moon, LLC
@@ -10040,7 +9485,7 @@ toray
 // toshiba : 2014-04-10 TOSHIBA Corporation
 toshiba
 
-// total : 2015-08-06 Total SA
+// total : 2015-08-06 TotalEnergies SE
 total
 
 // tours : 2015-01-22 Binky Moon, LLC
@@ -10058,7 +9503,7 @@ toys
 // trade : 2014-01-23 Elite Registry Limited
 trade
 
-// trading : 2014-12-11 Dottrading Registry Limited
+// trading : 2014-12-11 Dog Beach, LLC
 trading
 
 // training : 2013-11-07 Binky Moon, LLC
@@ -10076,7 +9521,7 @@ travelers
 // travelersinsurance : 2015-03-26 Travelers TLD, LLC
 travelersinsurance
 
-// trust : 2014-10-16 NCC Group Domain Services, Inc.
+// trust : 2014-10-16 Internet Naming Company LLC
 trust
 
 // trv : 2015-03-26 Travelers TLD, LLC
@@ -10109,7 +9554,7 @@ unicom
 // university : 2014-03-06 Binky Moon, LLC
 university
 
-// uno : 2013-09-11 DotSite Inc.
+// uno : 2013-09-11 Radix FZC
 uno
 
 // uol : 2014-05-01 UBN INTERNET LTDA.
@@ -10160,7 +9605,7 @@ villas
 // vin : 2015-06-18 Binky Moon, LLC
 vin
 
-// vip : 2015-01-22 Minds + Machines Group Limited
+// vip : 2015-01-22 Registry Services, LLC
 vip
 
 // virgin : 2014-09-25 Virgin Enterprises Limited
@@ -10181,7 +9626,7 @@ vivo
 // vlaanderen : 2014-02-06 DNS.be vzw
 vlaanderen
 
-// vodka : 2013-12-19 Minds + Machines Group Limited
+// vodka : 2013-12-19 Registry Services, LLC
 vodka
 
 // volkswagen : 2015-05-14 Volkswagen Group of America Inc.
@@ -10223,7 +9668,7 @@ wanggou
 // watch : 2013-11-14 Binky Moon, LLC
 watch
 
-// watches : 2014-12-22 Richemont DNS Inc.
+// watches : 2014-12-22 Identity Digital Limited
 watches
 
 // weather : 2015-01-08 International Business Machines Corporation
@@ -10238,10 +9683,10 @@ webcam
 // weber : 2015-06-04 Saint-Gobain Weber SA
 weber
 
-// website : 2014-04-03 DotWebsite Inc.
+// website : 2014-04-03 Radix FZC
 website
 
-// wedding : 2014-04-24 Minds + Machines Group Limited
+// wedding : 2014-04-24 Registry Services, LLC
 wedding
 
 // weibo : 2015-03-05 Sina Corporation
@@ -10283,7 +9728,7 @@ wolterskluwer
 // woodside : 2015-07-09 Woodside Petroleum Limited
 woodside
 
-// work : 2013-12-19 Minds + Machines Group Limited
+// work : 2013-12-19 Registry Services, LLC
 work
 
 // works : 2013-11-14 Binky Moon, LLC
@@ -10334,9 +9779,6 @@ xin
 // xn--3ds443g : 2013-09-08 TLD REGISTRY LIMITED OY
 在线
 
-// xn--3oq18vl8pn36a : 2015-07-02 Volkswagen (China) Investment Co., Ltd.
-大众汽车
-
 // xn--3pxu8k : 2015-01-15 VeriSign Sarl
 点看
 
@@ -10346,7 +9788,7 @@ xin
 // xn--45q11c : 2013-11-21 Zodiac Gemini Ltd
 八卦
 
-// xn--4gbrim : 2013-10-04 Fans TLD Limited
+// xn--4gbrim : 2013-10-04 Helium TLDs Ltd
 موقع
 
 // xn--55qw42g : 2013-11-08 China Organizational Name Administration Center
@@ -10361,7 +9803,7 @@ xin
 // xn--5tzm5g : 2014-12-22 Global Website TLD Asia Limited
 网站
 
-// xn--6frz82g : 2013-09-23 Afilias Limited
+// xn--6frz82g : 2013-09-23 Identity Digital Limited
 移动
 
 // xn--6qq986b3xl : 2013-09-13 Tycoon Treasure Limited
@@ -10451,7 +9893,7 @@ xin
 // xn--fzys8d69uvgm : 2015-05-14 PCCW Enterprises Limited
 電訊盈科
 
-// xn--g2xx48c : 2015-01-30 Minds + Machines Group Limited
+// xn--g2xx48c : 2015-01-30 Nawang Heli(Xiamen) Network Service Co., LTD.
 购物
 
 // xn--gckr3f0f : 2015-02-26 Amazon Registry Services, Inc.
@@ -10478,9 +9920,6 @@ xin
 // xn--jlq480n2rg : 2019-12-19 Amazon Registry Services, Inc.
 亚马逊
 
-// xn--jlq61u9w7b : 2015-01-08 Nokia Corporation
-诺基亚
-
 // xn--jvr189m : 2015-02-26 Amazon Registry Services, Inc.
 食品
 
@@ -10598,10 +10037,10 @@ vermögensberatung
 // xyz : 2013-12-05 XYZ.COM LLC
 xyz
 
-// yachts : 2014-01-09 DERYachts, LLC
+// yachts : 2014-01-09 XYZ.COM LLC
 yachts
 
-// yahoo : 2015-04-02 Yahoo! Domain Services Inc.
+// yahoo : 2015-04-02 Oath Inc.
 yahoo
 
 // yamaxun : 2014-12-18 Amazon Registry Services, Inc.
@@ -10613,7 +10052,7 @@ yandex
 // yodobashi : 2014-11-20 YODOBASHI CAMERA CO.,LTD.
 yodobashi
 
-// yoga : 2014-05-29 Minds + Machines Group Limited
+// yoga : 2014-05-29 Registry Services, LLC
 yoga
 
 // yokohama : 2013-12-12 GMO Registry, Inc.
@@ -10660,16 +10099,68 @@ ltd.ua
 // 611coin : https://611project.org/
 611.to
 
+// Aaron Marais' Gitlab pages: https://lab.aaronleem.co.za
+// Submitted by Aaron Marais <its_me@aaronleem.co.za>
+graphox.us
+
+// accesso Technology Group, plc. : https://accesso.com/
+// Submitted by accesso Team <accessoecommerce@accesso.com>
+*.devcdnaccesso.com
+
+// Acorn Labs : https://acorn.io
+// Submitted by Craig Jellick <domains@acorn.io>
+*.on-acorn.io
+
+// ActiveTrail: https://www.activetrail.biz/
+// Submitted by Ofer Kalaora <postmaster@activetrail.com>
+activetrail.biz
+
 // Adobe : https://www.adobe.com/
-// Submitted by Ian Boston <boston@adobe.com>
+// Submitted by Ian Boston <boston@adobe.com> and Lars Trieloff <trieloff@adobe.com>
 adobeaemcloud.com
-adobeaemcloud.net
 *.dev.adobeaemcloud.com
+hlx.live
+adobeaemcloud.net
+hlx.page
+hlx3.page
+
+// Adobe Developer Platform : https://developer.adobe.com
+// Submitted by Jesse MacFadyen<jessem@adobe.com>
+adobeio-static.net
+adobeioruntime.net
 
 // Agnat sp. z o.o. : https://domena.pl
 // Submitted by Przemyslaw Plewa <it-admin@domena.pl>
 beep.pl
 
+// Airkit : https://www.airkit.com/
+// Submitted by Grant Cooksey <security@airkit.com>
+airkitapps.com
+airkitapps-au.com
+airkitapps.eu
+
+// Aiven: https://aiven.io/
+// Submitted by Etienne Stalmans <security@aiven.io>
+aivencloud.com
+
+// Akamai : https://www.akamai.com/
+// Submitted by Akamai Team <publicsuffixlist@akamai.com>
+akadns.net
+akamai.net
+akamai-staging.net
+akamaiedge.net
+akamaiedge-staging.net
+akamaihd.net
+akamaihd-staging.net
+akamaiorigin.net
+akamaiorigin-staging.net
+akamaized.net
+akamaized-staging.net
+edgekey.net
+edgekey-staging.net
+edgesuite.net
+edgesuite-staging.net
+
 // alboto.ca : http://alboto.ca
 // Submitted by Anton Avramov <avramov@alboto.ca>
 barsy.ca
@@ -10683,12 +10174,6 @@ barsy.ca
 // Submitted by Werner Kaltofen <wk@all-inkl.com>
 kasserver.com
 
-// Algorithmia, Inc. : algorithmia.com
-// Submitted by Eli Perelman <eperelman@algorithmia.io>
-*.algorithmia.com
-!teams.algorithmia.com
-!test.algorithmia.com
-
 // Altervista: https://www.altervista.org
 // Submitted by Carlo Cannas <tech_staff@altervista.it>
 altervista.org
@@ -10697,19 +10182,134 @@ altervista.org
 // Submitted by Cyril <admin@alwaysdata.com>
 alwaysdata.net
 
-// Amazon CloudFront : https://aws.amazon.com/cloudfront/
+// Amaze Software : https://amaze.co
+// Submitted by Domain Admin <domainadmin@amaze.co>
+myamaze.net
+
+// Amazon : https://www.amazon.com/
+// Submitted by AWS Security <psl-maintainers@amazon.com>
+// Subsections of Amazon/subsidiaries will appear until "concludes" tag
+
+// Amazon CloudFront
 // Submitted by Donavan Miller <donavanm@amazon.com>
+// Reference: 54144616-fd49-4435-8535-19c6a601bdb3
 cloudfront.net
 
-// Amazon Elastic Compute Cloud : https://aws.amazon.com/ec2/
+// Amazon EC2
 // Submitted by Luke Wells <psl-maintainers@amazon.com>
+// Reference: 4c38fa71-58ac-4768-99e5-689c1767e537
 *.compute.amazonaws.com
 *.compute-1.amazonaws.com
 *.compute.amazonaws.com.cn
 us-east-1.amazonaws.com
 
-// Amazon Elastic Beanstalk : https://aws.amazon.com/elasticbeanstalk/
+// Amazon S3
+// Submitted by Luke Wells <psl-maintainers@amazon.com>
+// Reference: d068bd97-f0a9-4838-a6d8-954b622ef4ae
+s3.cn-north-1.amazonaws.com.cn
+s3.dualstack.ap-northeast-1.amazonaws.com
+s3.dualstack.ap-northeast-2.amazonaws.com
+s3.ap-northeast-2.amazonaws.com
+s3-website.ap-northeast-2.amazonaws.com
+s3.dualstack.ap-south-1.amazonaws.com
+s3.ap-south-1.amazonaws.com
+s3-website.ap-south-1.amazonaws.com
+s3.dualstack.ap-southeast-1.amazonaws.com
+s3.dualstack.ap-southeast-2.amazonaws.com
+s3.dualstack.ca-central-1.amazonaws.com
+s3.ca-central-1.amazonaws.com
+s3-website.ca-central-1.amazonaws.com
+s3.dualstack.eu-central-1.amazonaws.com
+s3.eu-central-1.amazonaws.com
+s3-website.eu-central-1.amazonaws.com
+s3.dualstack.eu-west-1.amazonaws.com
+s3.dualstack.eu-west-2.amazonaws.com
+s3.eu-west-2.amazonaws.com
+s3-website.eu-west-2.amazonaws.com
+s3.dualstack.eu-west-3.amazonaws.com
+s3.eu-west-3.amazonaws.com
+s3-website.eu-west-3.amazonaws.com
+s3.amazonaws.com
+s3-ap-northeast-1.amazonaws.com
+s3-ap-northeast-2.amazonaws.com
+s3-ap-south-1.amazonaws.com
+s3-ap-southeast-1.amazonaws.com
+s3-ap-southeast-2.amazonaws.com
+s3-ca-central-1.amazonaws.com
+s3-eu-central-1.amazonaws.com
+s3-eu-west-1.amazonaws.com
+s3-eu-west-2.amazonaws.com
+s3-eu-west-3.amazonaws.com
+s3-external-1.amazonaws.com
+s3-fips-us-gov-west-1.amazonaws.com
+s3-sa-east-1.amazonaws.com
+s3-us-east-2.amazonaws.com
+s3-us-gov-west-1.amazonaws.com
+s3-us-west-1.amazonaws.com
+s3-us-west-2.amazonaws.com
+s3-website-ap-northeast-1.amazonaws.com
+s3-website-ap-southeast-1.amazonaws.com
+s3-website-ap-southeast-2.amazonaws.com
+s3-website-eu-west-1.amazonaws.com
+s3-website-sa-east-1.amazonaws.com
+s3-website-us-east-1.amazonaws.com
+s3-website-us-west-1.amazonaws.com
+s3-website-us-west-2.amazonaws.com
+s3.dualstack.sa-east-1.amazonaws.com
+s3.dualstack.us-east-1.amazonaws.com
+s3.dualstack.us-east-2.amazonaws.com
+s3.us-east-2.amazonaws.com
+s3-website.us-east-2.amazonaws.com
+
+// AWS Cloud9
+// Submitted by: AWS Security <psl-maintainers@amazon.com>
+// Reference: 2b6dfa9a-3a7f-4367-b2e7-0321e77c0d59
+vfs.cloud9.af-south-1.amazonaws.com
+webview-assets.cloud9.af-south-1.amazonaws.com
+vfs.cloud9.ap-east-1.amazonaws.com
+webview-assets.cloud9.ap-east-1.amazonaws.com
+vfs.cloud9.ap-northeast-1.amazonaws.com
+webview-assets.cloud9.ap-northeast-1.amazonaws.com
+vfs.cloud9.ap-northeast-2.amazonaws.com
+webview-assets.cloud9.ap-northeast-2.amazonaws.com
+vfs.cloud9.ap-northeast-3.amazonaws.com
+webview-assets.cloud9.ap-northeast-3.amazonaws.com
+vfs.cloud9.ap-south-1.amazonaws.com
+webview-assets.cloud9.ap-south-1.amazonaws.com
+vfs.cloud9.ap-southeast-1.amazonaws.com
+webview-assets.cloud9.ap-southeast-1.amazonaws.com
+vfs.cloud9.ap-southeast-2.amazonaws.com
+webview-assets.cloud9.ap-southeast-2.amazonaws.com
+vfs.cloud9.ca-central-1.amazonaws.com
+webview-assets.cloud9.ca-central-1.amazonaws.com
+vfs.cloud9.eu-central-1.amazonaws.com
+webview-assets.cloud9.eu-central-1.amazonaws.com
+vfs.cloud9.eu-north-1.amazonaws.com
+webview-assets.cloud9.eu-north-1.amazonaws.com
+vfs.cloud9.eu-south-1.amazonaws.com
+webview-assets.cloud9.eu-south-1.amazonaws.com
+vfs.cloud9.eu-west-1.amazonaws.com
+webview-assets.cloud9.eu-west-1.amazonaws.com
+vfs.cloud9.eu-west-2.amazonaws.com
+webview-assets.cloud9.eu-west-2.amazonaws.com
+vfs.cloud9.eu-west-3.amazonaws.com
+webview-assets.cloud9.eu-west-3.amazonaws.com
+vfs.cloud9.me-south-1.amazonaws.com
+webview-assets.cloud9.me-south-1.amazonaws.com
+vfs.cloud9.sa-east-1.amazonaws.com
+webview-assets.cloud9.sa-east-1.amazonaws.com
+vfs.cloud9.us-east-1.amazonaws.com
+webview-assets.cloud9.us-east-1.amazonaws.com
+vfs.cloud9.us-east-2.amazonaws.com
+webview-assets.cloud9.us-east-2.amazonaws.com
+vfs.cloud9.us-west-1.amazonaws.com
+webview-assets.cloud9.us-west-1.amazonaws.com
+vfs.cloud9.us-west-2.amazonaws.com
+webview-assets.cloud9.us-west-2.amazonaws.com
+
+// AWS Elastic Beanstalk
 // Submitted by Luke Wells <psl-maintainers@amazon.com>
+// Reference: aa202394-43a0-4857-b245-8db04549137e
 cn-north-1.eb.amazonaws.com.cn
 cn-northwest-1.eb.amazonaws.com.cn
 elasticbeanstalk.com
@@ -10731,71 +10331,24 @@ us-gov-west-1.elasticbeanstalk.com
 us-west-1.elasticbeanstalk.com
 us-west-2.elasticbeanstalk.com
 
-// Amazon Elastic Load Balancing : https://aws.amazon.com/elasticloadbalancing/
+// (AWS) Elastic Load Balancing
 // Submitted by Luke Wells <psl-maintainers@amazon.com>
-*.elb.amazonaws.com
+// Reference: 12a3d528-1bac-4433-a359-a395867ffed2
 *.elb.amazonaws.com.cn
+*.elb.amazonaws.com
 
-// Amazon S3 : https://aws.amazon.com/s3/
-// Submitted by Luke Wells <psl-maintainers@amazon.com>
-s3.amazonaws.com
-s3-ap-northeast-1.amazonaws.com
-s3-ap-northeast-2.amazonaws.com
-s3-ap-south-1.amazonaws.com
-s3-ap-southeast-1.amazonaws.com
-s3-ap-southeast-2.amazonaws.com
-s3-ca-central-1.amazonaws.com
-s3-eu-central-1.amazonaws.com
-s3-eu-west-1.amazonaws.com
-s3-eu-west-2.amazonaws.com
-s3-eu-west-3.amazonaws.com
-s3-external-1.amazonaws.com
-s3-fips-us-gov-west-1.amazonaws.com
-s3-sa-east-1.amazonaws.com
-s3-us-gov-west-1.amazonaws.com
-s3-us-east-2.amazonaws.com
-s3-us-west-1.amazonaws.com
-s3-us-west-2.amazonaws.com
-s3.ap-northeast-2.amazonaws.com
-s3.ap-south-1.amazonaws.com
-s3.cn-north-1.amazonaws.com.cn
-s3.ca-central-1.amazonaws.com
-s3.eu-central-1.amazonaws.com
-s3.eu-west-2.amazonaws.com
-s3.eu-west-3.amazonaws.com
-s3.us-east-2.amazonaws.com
-s3.dualstack.ap-northeast-1.amazonaws.com
-s3.dualstack.ap-northeast-2.amazonaws.com
-s3.dualstack.ap-south-1.amazonaws.com
-s3.dualstack.ap-southeast-1.amazonaws.com
-s3.dualstack.ap-southeast-2.amazonaws.com
-s3.dualstack.ca-central-1.amazonaws.com
-s3.dualstack.eu-central-1.amazonaws.com
-s3.dualstack.eu-west-1.amazonaws.com
-s3.dualstack.eu-west-2.amazonaws.com
-s3.dualstack.eu-west-3.amazonaws.com
-s3.dualstack.sa-east-1.amazonaws.com
-s3.dualstack.us-east-1.amazonaws.com
-s3.dualstack.us-east-2.amazonaws.com
-s3-website-us-east-1.amazonaws.com
-s3-website-us-west-1.amazonaws.com
-s3-website-us-west-2.amazonaws.com
-s3-website-ap-northeast-1.amazonaws.com
-s3-website-ap-southeast-1.amazonaws.com
-s3-website-ap-southeast-2.amazonaws.com
-s3-website-eu-west-1.amazonaws.com
-s3-website-sa-east-1.amazonaws.com
-s3-website.ap-northeast-2.amazonaws.com
-s3-website.ap-south-1.amazonaws.com
-s3-website.ca-central-1.amazonaws.com
-s3-website.eu-central-1.amazonaws.com
-s3-website.eu-west-2.amazonaws.com
-s3-website.eu-west-3.amazonaws.com
-s3-website.us-east-2.amazonaws.com
+// AWS Global Accelerator
+// Submitted by Daniel Massaguer <psl-maintainers@amazon.com>
+// Reference: d916759d-a08b-4241-b536-4db887383a6a
+awsglobalaccelerator.com
 
-// Amsterdam Wireless: https://www.amsterdamwireless.nl/
-// Submitted by Imre Jonk <hostmaster@amsterdamwireless.nl>
-amsw.nl
+// eero
+// Submitted by Yue Kang <eero-dynamic-dns@amazon.com>
+// Reference: 264afe70-f62c-4c02-8ab9-b5281ed24461
+eero.online
+eero-stage.online
+
+// concludes Amazon
 
 // Amune : https://amune.org/
 // Submitted by Team Amune <cert@amune.org>
@@ -10806,6 +10359,19 @@ tele.amune.org
 // Submitted by Apigee Security Team <security@apigee.com>
 apigee.io
 
+// Apphud : https://apphud.com
+// Submitted by Alexander Selivanov <alex@apphud.com>
+siiites.com
+
+// Appspace : https://www.appspace.com
+// Submitted by Appspace Security Team <security@appspace.com>
+appspacehosted.com
+appspaceusercontent.com
+
+// Appudo UG (haftungsbeschränkt) : https://www.appudo.com
+// Submitted by Alexander Hochbaum <admin@appudo.com>
+appudo.net
+
 // Aptible : https://www.aptible.com/
 // Submitted by Thomas Orozco <thomas@aptible.com>
 on-aptible.com
@@ -10831,15 +10397,35 @@ sweetpepper.org
 // Submitted by Vincent Tseng <vincenttseng@asustor.com>
 myasustor.com
 
+// Atlassian : https://atlassian.com
+// Submitted by Sam Smyth <devloop@atlassian.com>
+cdn.prod.atlassian-dev.net
+
+// Authentick UG (haftungsbeschränkt) : https://authentick.net
+// Submitted by Lukas Reschke <lukas@authentick.net>
+translated.page
+
+// Autocode : https://autocode.com
+// Submitted by Jacob Lee <jacob@autocode.com>
+autocode.dev
+
 // AVM : https://avm.de
 // Submitted by Andreas Weise <a.weise@avm.de>
 myfritz.net
 
+// AVStack Pte. Ltd. : https://avstack.io
+// Submitted by Jasper Hugo <jasper@avstack.io>
+onavstack.net
+
 // AW AdvisorWebsites.com Software Inc : https://advisorwebsites.com
 // Submitted by James Kennedy <domains@advisorwebsites.com>
 *.awdev.ca
 *.advisor.ws
 
+// AZ.pl sp. z.o.o: https://az.pl
+// Submitted by Krzysztof Wolski <krzysztof.wolski@home.eu>
+ecommerce-shop.pl
+
 // b-data GmbH : https://www.b-data.io
 // Submitted by Olivier Benz <olivier.benz@b-data.ch>
 b-data.io
@@ -10852,12 +10438,37 @@ backplaneapp.io
 // Submitted by Petros Angelatos <petrosagg@balena.io>
 balena-devices.com
 
+// University of Banja Luka : https://unibl.org
+// Domains for Republic of Srpska administrative entity.
+// Submitted by Marko Ivanovic <kormang@hotmail.rs>
+rs.ba
+
 // Banzai Cloud
 // Submitted by Janos Matyas <info@banzaicloud.com>
 *.banzai.cloud
 app.banzaicloud.io
 *.backyards.banzaicloud.io
 
+// BASE, Inc. : https://binc.jp
+// Submitted by Yuya NAGASAWA <public-suffix-list@binc.jp>
+base.ec
+official.ec
+buyshop.jp
+fashionstore.jp
+handcrafted.jp
+kawaiishop.jp
+supersale.jp
+theshop.jp
+shopselect.net
+base.shop
+
+// BeagleBoard.org Foundation : https://beagleboard.org
+// Submitted by Jason Kridner <jkridner@beagleboard.org>
+beagleboard.io
+
+// Beget Ltd
+// Submitted by Lev Nekrasov <lnekrasov@beget.com>
+*.beget.app
 
 // BetaInABox
 // Submitted by Adrian <adrian@betainabox.com>
@@ -10867,14 +10478,30 @@ betainabox.com
 // Submitted by Nathan O'Sullivan <nathan@mammoth.com.au>
 bnr.la
 
+// Bitbucket : http://bitbucket.org
+// Submitted by Andy Ortlieb <aortlieb@atlassian.com>
+bitbucket.io
+
 // Blackbaud, Inc. : https://www.blackbaud.com
 // Submitted by Paul Crowder <paul.crowder@blackbaud.com>
 blackbaudcdn.net
 
+// Blatech : http://www.blatech.net
+// Submitted by Luke Bratch <luke@bratch.co.uk>
+of.je
+
+// Blue Bite, LLC : https://bluebite.com
+// Submitted by Joshua Weiss <admin.engineering@bluebite.com>
+bluebite.io
+
 // Boomla : https://boomla.com
 // Submitted by Tibor Halter <thalter@boomla.com>
 boomla.net
 
+// Boutir : https://www.boutir.com
+// Submitted by Eric Ng Ka Ka <ngkaka@boutir.com>
+boutir.com
+
 // Boxfuse : https://boxfuse.com
 // Submitted by Axel Fontaine <axel@boxfuse.com>
 boxfuse.io
@@ -10888,6 +10515,10 @@ square7.de
 bplaced.net
 square7.net
 
+// Brendly : https://brendly.rs
+// Submitted by Dusan Radovanovic <dusan.radovanovic@brendly.rs>
+shop.brendly.rs
+
 // BrowserSafetyMark
 // Submitted by Dave Tharp <browsersafetymark.io@quicinc.com>
 browsersafetymark.io
@@ -10898,15 +10529,26 @@ uk0.bigv.io
 dh.bytemark.co.uk
 vm.bytemark.co.uk
 
+// Caf.js Labs LLC : https://www.cafjs.com
+// Submitted by Antonio Lain <antlai@cafjs.com>
+cafjs.com
+
 // callidomus : https://www.callidomus.com/
 // Submitted by Marcus Popp <admin@callidomus.com>
 mycd.eu
 
+// Canva Pty Ltd : https://canva.com/
+// Submitted by Joel Aquilina <publicsuffixlist@canva.com>
+canva-apps.cn
+canva-apps.com
+
 // Carrd : https://carrd.co
 // Submitted by AJ <aj@carrd.co>
+drr.ac
+uwu.ai
 carrd.co
 crd.co
-uwu.ai
+ju.mp
 
 // CentralNic : http://www.centralnic.com/names/domains
 // Submitted by registry <gavin.brown@centralnic.com>
@@ -10934,7 +10576,6 @@ za.com
 // No longer operated by CentralNic, these entries should be adopted and/or removed by current operators
 // Submitted by Gavin Brown <gavin.brown@centralnic.com>
 ar.com
-gb.com
 hu.com
 kr.com
 no.com
@@ -10972,11 +10613,6 @@ nz.basketball
 radio.am
 radio.fm
 
-// Globe Hosting SRL : https://www.globehosting.com/
-// Submitted by Gavin Brown <gavin.brown@centralnic.com>
-co.ro
-shop.ro
-
 // c.la : http://www.c.la/
 c.la
 
@@ -10984,31 +10620,31 @@ c.la
 // Submitted by B. Blechschmidt <hostmaster@certmgr.org>
 certmgr.org
 
-// Citrix : https://citrix.com
-// Submitted by Alex Stoddard <alex.stoddard@citrix.com>
-xenapponazure.com
+// Cityhost LLC  : https://cityhost.ua
+// Submitted by Maksym Rivtin <support@cityhost.net.ua>
+cx.ua
 
 // Civilized Discourse Construction Kit, Inc. : https://www.discourse.org/
 // Submitted by Rishabh Nambiar & Michael Brown <team@discourse.org>
 discourse.group
 discourse.team
 
-// ClearVox : http://www.clearvox.nl/
-// Submitted by Leon Rowland <leon@clearvox.nl>
-virtueeldomein.nl
-
 // Clever Cloud : https://www.clever-cloud.com/
 // Submitted by Quentin Adam <noc@clever-cloud.com>
 cleverapps.io
 
 // Clerk : https://www.clerk.dev
-// Submitted by Colin Sidoti <colin@clerk.dev>
+// Submitted by Colin Sidoti <systems@clerk.dev>
+clerk.app
+clerkstage.app
 *.lcl.dev
+*.lclstage.dev
 *.stg.dev
+*.stgstage.dev
 
-// Clic2000 : https://clic2000.fr
-// Submitted by Mathilde Blanchemanche <mathilde@clic2000.fr>
-clic2000.net
+// ClickRising : https://clickrising.com/
+// Submitted by Umut Gumeli <infrastructure-publicsuffixlist@clickrising.com>
+clickrising.net
 
 // Cloud66 : https://www.cloud66.com/
 // Submitted by Khash Sajadi <khash@cloud66.com>
@@ -11030,13 +10666,16 @@ cloudcontrolled.com
 cloudcontrolapp.com
 
 // Cloudera, Inc. : https://www.cloudera.com/
-// Submitted by Philip Langdale <security@cloudera.com>
-cloudera.site
+// Submitted by Kedarnath Waikar <security@cloudera.com>
+*.cloudera.site
 
 // Cloudflare, Inc. : https://www.cloudflare.com/
 // Submitted by Cloudflare Team <publicsuffixlist@cloudflare.com>
-pages.dev
+cf-ipfs.com
+cloudflare-ipfs.com
 trycloudflare.com
+pages.dev
+r2.dev
 workers.dev
 
 // Clovyr : https://clovyr.io
@@ -11076,14 +10715,14 @@ cloudns.pro
 cloudns.pw
 cloudns.us
 
-// Cloudeity Inc : https://cloudeity.com
-// Submitted by Stefan Dimitrov <contact@cloudeity.com>
-cloudeity.net
-
 // CNPY : https://cnpy.gdn
 // Submitted by Angelo Gladding <angelo@lahacker.net>
 cnpy.gdn
 
+// Codeberg e. V. : https://codeberg.org
+// Submitted by Moritz Marquardt <git@momar.de>
+codeberg.page
+
 // CoDNS B.V.
 co.nl
 co.no
@@ -11184,18 +10823,47 @@ dyndns.dappnode.io
 // Submitted by Paul Biggar <ops@darklang.com>
 builtwithdark.com
 
+// DataDetect, LLC. : https://datadetect.com
+// Submitted by Andrew Banchich <abanchich@sceven.com>
+demo.datadetect.com
+instance.datadetect.com
+
 // Datawire, Inc : https://www.datawire.io
 // Submitted by Richard Li <secalert@datawire.io>
 edgestack.me
 
+// DDNS5 : https://ddns5.com
+// Submitted by Cameron Elliott <cameron@cameronelliott.com>
+ddns5.com
+
 // Debian : https://www.debian.org/
 // Submitted by Peter Palfrader / Debian Sysadmin Team <dsa-publicsuffixlist@debian.org>
 debian.net
 
+// Deno Land Inc : https://deno.com/
+// Submitted by Luca Casonato <hostmaster@deno.com>
+deno.dev
+deno-staging.dev
+
 // deSEC : https://desec.io/
 // Submitted by Peter Thomassen <peter@desec.io>
 dedyn.io
 
+// Deta: https://www.deta.sh/
+// Submitted by Aavash Shrestha <aavash@deta.sh>
+deta.app
+deta.dev
+
+// Diher Solutions : https://diher.solutions
+// Submitted by Didi Hermawan <mail@diher.solutions>
+*.rss.my.id
+*.diher.solutions
+
+// Discord Inc : https://discord.com
+// Submitted by Sahn Lam <slam@discordapp.com>
+discordsays.com
+discordsez.com
+
 // DNS Africa Ltd https://dns.business
 // Submitted by Calvin Browne <calvin@dns.business>
 jozi.biz
@@ -11213,6 +10881,10 @@ shop.th
 // Submitted by Paul Fang <mis@draytek.com>
 drayddns.com
 
+// DreamCommerce : https://shoper.pl/
+// Submitted by Konrad Kotarba <konrad.kotarba@dreamcommerce.com>
+shoparena.pl
+
 // DreamHost : http://www.dreamhost.com/
 // Submitted by Andrew Farmer <andrew.farmer@dreamhost.com>
 dreamhosters.com
@@ -11540,6 +11212,14 @@ ddnss.org
 definima.net
 definima.io
 
+// DigitalOcean App Platform : https://www.digitalocean.com/products/app-platform/
+// Submitted by Braxton Huggins <psl-maintainers@digitalocean.com>
+ondigitalocean.app
+
+// DigitalOcean Spaces : https://www.digitalocean.com/products/spaces/
+// Submitted by Robin H. Johnson <psl-maintainers@digitalocean.com>
+*.digitaloceanspaces.com
+
 // dnstrace.pro : https://dnstrace.pro/
 // Submitted by Chris Partridge <chris@partridge.tech>
 bci.dnstrace.pro
@@ -11572,6 +11252,16 @@ dynv6.net
 // Submitted by Vladimir Dudr <info@e4you.cz>
 e4.cz
 
+// Easypanel : https://easypanel.io
+// Submitted by Andrei Canta <andrei@easypanel.io>
+easypanel.app
+easypanel.host
+
+// Elementor : Elementor Ltd.
+// Submitted by Anton Barkan <antonb@elementor.com>
+elementor.cloud
+elementor.cool
+
 // En root‽ : https://en-root.org
 // Submitted by Emmanuel Raviart <emmanuel@raviart.com>
 en-root.fr
@@ -11579,16 +11269,21 @@ en-root.fr
 // Enalean SAS: https://www.enalean.com
 // Submitted by Thomas Cottier <thomas.cottier@enalean.com>
 mytuleap.com
+tuleap-partners.com
+
+// Encoretivity AB: https://encore.dev
+// Submitted by André Eriksson <andre@encore.dev>
+encr.app
+encoreapi.com
 
 // ECG Robotics, Inc: https://ecgrobotics.org
 // Submitted by <frc1533@ecgrobotics.org>
 onred.one
 staging.onred.one
 
-// Enonic : http://enonic.com/
-// Submitted by Erik Kaareng-Sunde <esu@enonic.com>
-enonic.io
-customer.enonic.io
+// encoway GmbH : https://www.encoway.de
+// Submitted by Marcel Daus <cloudops@encoway.de>
+eu.encoway.cloud
 
 // EU.org https://eu.org/
 // Submitted by Pierre Beyssac <hostmaster@eu.org>
@@ -11649,6 +11344,10 @@ tr.eu.org
 uk.eu.org
 us.eu.org
 
+// Eurobyte : https://eurobyte.ru
+// Submitted by Evgeniy Subbotin <e.subbotin@eurobyte.ru>
+eurodir.ru
+
 // Evennode : http://www.evennode.com/
 // Submitted by Michal Kralik <support@evennode.com>
 eu-1.evennode.com
@@ -11759,6 +11458,8 @@ u.channelsdvr.net
 
 // Fastly Inc. : http://www.fastly.com/
 // Submitted by Fastly Security <security@fastly.com>
+edgecompute.app
+fastly-edge.com
 fastly-terrarium.com
 fastlylb.net
 map.fastlylb.net
@@ -11770,6 +11471,10 @@ a.ssl.fastly.net
 b.ssl.fastly.net
 global.ssl.fastly.net
 
+// Fastmail : https://www.fastmail.com/
+// Submitted by Marc Bradshaw <marc@fastmailteam.com>
+*.user.fm
+
 // FASTVPS EESTI OU : https://fastvps.ru/
 // Submitted by Likhachev Vasiliy <lihachev@fastvps.ru>
 fastvps-server.com
@@ -11778,10 +11483,6 @@ myfast.host
 fastvps.site
 myfast.space
 
-// Featherhead : https://featherhead.xyz/
-// Submitted by Simon Menke <simon@featherhead.xyz>
-fhapp.xyz
-
 // Fedora : https://fedoraproject.org/
 // submitted by Patrick Uiterwijk <puiterwijk@fedoraproject.org>
 fedorainfracloud.org
@@ -11794,13 +11495,16 @@ app.os.stg.fedoraproject.org
 // submitted by Keith Fairley <domains@fearworksmedia.co.uk>
 conn.uk
 copro.uk
-couk.me
-ukco.me
+hosp.uk
 
 // Fermax : https://fermax.com/
 // submitted by Koen Van Isterdael <k.vanisterdael@fermax.be>
 mydobiss.com
 
+// FH Muenster : https://www.fh-muenster.de
+// Submitted by Robin Naundorf <r.naundorf@fh-muenster.de>
+fh-muenster.io
+
 // Filegear Inc. : https://www.filegear.com
 // Submitted by Jason Zhu <jason@owtware.com>
 filegear.me
@@ -11815,6 +11519,19 @@ filegear-sg.me
 // Submitted by Chris Raynor <chris@firebase.com>
 firebaseapp.com
 
+// Firewebkit : https://www.firewebkit.com
+// Submitted by Majid Qureshi <mqureshi@amrayn.com>
+fireweb.app
+
+// FLAP : https://www.flap.cloud
+// Submitted by Louis Chemineau <louis@chmn.me>
+flap.id
+
+// FlashDrive : https://flashdrive.io
+// Submitted by Eric Chan <support@flashdrive.io>
+onflashdrive.app
+fldrv.com
+
 // fly.io: https://fly.io
 // Submitted by Kurt Mackey <kurt@fly.io>
 fly.dev
@@ -11825,6 +11542,28 @@ shw.io
 // Submitted by Jonathan Rudenberg <jonathan@flynn.io>
 flynnhosting.net
 
+// Forgerock : https://www.forgerock.com
+// Submitted by Roderick Parr <roderick.parr@forgerock.com>
+forgeblocks.com
+id.forgerock.io
+
+// Framer : https://www.framer.com
+// Submitted by Koen Rouwhorst <koenrh@framer.com>
+framer.app
+framercanvas.com
+framer.media
+framer.photos
+framer.website
+framer.wiki
+
+// Frusky MEDIA&PR : https://www.frusky.de
+// Submitted by Victor Pupynin <hallo@frusky.de>
+*.frusky.de
+
+// RavPage : https://www.ravpage.co.il
+// Submitted by Roni Horowitz <roni@responder.co.il>
+ravpage.co.il
+
 // Frederik Braun https://frederik-braun.com
 // Submitted by Frederik Braun <fb@frederik-braun.com>
 0e.vc
@@ -11842,6 +11581,10 @@ freeboxos.fr
 // Submitted by Daniel Stone <daniel@fooishbar.org>
 freedesktop.org
 
+// freemyip.com : https://freemyip.com
+// Submitted by Cadence <contact@freemyip.com>
+freemyip.com
+
 // FunkFeuer - Verein zur Förderung freier Netze : https://www.funkfeuer.at
 // Submitted by Daniel A. Maierhofer <vorstand@funkfeuer.at>
 wien.funkfeuer.at
@@ -11857,10 +11600,22 @@ futuremailing.at
 *.kunden.ortsinfo.at
 *.statics.cloud
 
-// GDS : https://www.gov.uk/service-manual/operations/operating-servicegovuk-subdomains
-// Submitted by David Illsley <david.illsley@digital.cabinet-office.gov.uk>
+// GDS : https://www.gov.uk/service-manual/technology/managing-domain-names
+// Submitted by Stephen Ford <hostmaster@digital.cabinet-office.gov.uk>
+independent-commission.uk
+independent-inquest.uk
+independent-inquiry.uk
+independent-panel.uk
+independent-review.uk
+public-inquiry.uk
+royal-commission.uk
+campaign.gov.uk
 service.gov.uk
 
+// CDDO : https://www.gov.uk/guidance/get-an-api-domain-on-govuk
+// Submitted by Jamie Tanna <jamie.tanna@digital.cabinet-office.gov.uk>
+api.gov.uk
+
 // Gehirn Inc. : https://www.gehirn.co.jp/
 // Submitted by Kohei YOSHIDA <tech@gehirn.co.jp>
 gehirn.ne.jp
@@ -11871,11 +11626,21 @@ usercontent.jp
 gentapps.com
 gentlentapis.com
 lab.ms
+cdn-edges.net
+
+// Ghost Foundation : https://ghost.org
+// Submitted by Matt Hanley <security@ghost.org>
+ghost.io
+
+// GignoSystemJapan: http://gsj.bz
+// Submitted by GignoSystemJapan <kakutou-ec@gsj.bz>
+gsj.bz
 
 // GitHub, Inc.
 // Submitted by Patrick Toomey <security@github.com>
-github.io
 githubusercontent.com
+githubpreview.dev
+github.io
 
 // GitLab, Inc.
 // Submitted by Alex Hanselka <alex@gitlab.com>
@@ -11890,12 +11655,127 @@ gitpage.si
 // Submitted by Mads Hartmann <mads@glitch.com>
 glitch.me
 
+// Global NOG Alliance : https://nogalliance.org/
+// Submitted by Sander Steffann <sander@nogalliance.org>
+nog.community
+
+// Globe Hosting SRL : https://www.globehosting.com/
+// Submitted by Gavin Brown <gavin.brown@centralnic.com>
+co.ro
+shop.ro
+
 // GMO Pepabo, Inc. : https://pepabo.com/
-// Submitted by dojineko <admin@pepabo.com>
+// Submitted by Hosting Div <admin@pepabo.com>
 lolipop.io
+angry.jp
+babyblue.jp
+babymilk.jp
+backdrop.jp
+bambina.jp
+bitter.jp
+blush.jp
+boo.jp
+boy.jp
+boyfriend.jp
+but.jp
+candypop.jp
+capoo.jp
+catfood.jp
+cheap.jp
+chicappa.jp
+chillout.jp
+chips.jp
+chowder.jp
+chu.jp
+ciao.jp
+cocotte.jp
+coolblog.jp
+cranky.jp
+cutegirl.jp
+daa.jp
+deca.jp
+deci.jp
+digick.jp
+egoism.jp
+fakefur.jp
+fem.jp
+flier.jp
+floppy.jp
+fool.jp
+frenchkiss.jp
+girlfriend.jp
+girly.jp
+gloomy.jp
+gonna.jp
+greater.jp
+hacca.jp
+heavy.jp
+her.jp
+hiho.jp
+hippy.jp
+holy.jp
+hungry.jp
+icurus.jp
+itigo.jp
+jellybean.jp
+kikirara.jp
+kill.jp
+kilo.jp
+kuron.jp
+littlestar.jp
+lolipopmc.jp
+lolitapunk.jp
+lomo.jp
+lovepop.jp
+lovesick.jp
+main.jp
+mods.jp
+mond.jp
+mongolian.jp
+moo.jp
+namaste.jp
+nikita.jp
+nobushi.jp
+noor.jp
+oops.jp
+parallel.jp
+parasite.jp
+pecori.jp
+peewee.jp
+penne.jp
+pepper.jp
+perma.jp
+pigboat.jp
+pinoko.jp
+punyu.jp
+pupu.jp
+pussycat.jp
+pya.jp
+raindrop.jp
+readymade.jp
+sadist.jp
+schoolbus.jp
+secret.jp
+staba.jp
+stripper.jp
+sub.jp
+sunnyday.jp
+thick.jp
+tonkotsu.jp
+under.jp
+upper.jp
+velvet.jp
+verse.jp
+versus.jp
+vivian.jp
+watson.jp
+weblike.jp
+whitesnow.jp
+zombie.jp
+heteml.net
 
 // GOV.UK Platform as a Service : https://www.cloud.service.gov.uk/
-// Submitted by Tom Whitwell <tom.whitwell@digital.cabinet-office.gov.uk>
+// Submitted by Tom Whitwell <gov-uk-paas-support@digital.cabinet-office.gov.uk>
 cloudapps.digital
 london.cloudapps.digital
 
@@ -11923,6 +11803,18 @@ web.app
 *.0emm.com
 appspot.com
 *.r.appspot.com
+codespot.com
+googleapis.com
+googlecode.com
+pagespeedmobilizer.com
+publishproxy.com
+withgoogle.com
+withyoutube.com
+*.gateway.dev
+cloud.goog
+translate.goog
+*.usercontent.goog
+cloudfunctions.net
 blogspot.ae
 blogspot.al
 blogspot.am
@@ -11997,27 +11889,26 @@ blogspot.td
 blogspot.tw
 blogspot.ug
 blogspot.vn
-cloudfunctions.net
-cloud.goog
-codespot.com
-googleapis.com
-googlecode.com
-pagespeedmobilizer.com
-publishproxy.com
-translate.goog
-withgoogle.com
-withyoutube.com
 
-// Aaron Marais' Gitlab pages: https://lab.aaronleem.co.za
-// Submitted by Aaron Marais <its_me@aaronleem.co.za>
-graphox.us
+// Goupile : https://goupile.fr
+// Submitted by Niels Martignene <hello@goupile.fr>
+goupile.fr
+
+// Government of the Netherlands: https://www.government.nl
+// Submitted by <domeinnaam@minaz.nl>
+gov.nl
 
 // Group 53, LLC : https://www.group53.com
 // Submitted by Tyler Todd <noc@nova53.net>
 awsmppl.com
 
+// GünstigBestellen : https://günstigbestellen.de
+// Submitted by Furkan Akkoc <info@hendelzon.de>
+günstigbestellen.de
+günstigliefern.de
+
 // Hakaran group: http://hakaran.cz
-// Submited by Arseniy Sokolov <security@hakaran.cz>
+// Submitted by Arseniy Sokolov <security@hakaran.cz>
 fin.ci
 free.hr
 caa.li
@@ -12037,6 +11928,10 @@ hashbang.sh
 hasura.app
 hasura-app.io
 
+// Heilbronn University of Applied Sciences - Faculty Informatics (GitLab Pages): https://www.hs-heilbronn.de
+// Submitted by Richard Zowalla <mi-admin@hs-heilbronn.de>
+pages.it.hs-heilbronn.de
+
 // Hepforge : https://www.hepforge.org
 // Submitted by David Grellscheid <admin@hepforge.org>
 hepforge.org
@@ -12048,27 +11943,41 @@ herokussl.com
 
 // Hibernating Rhinos
 // Submitted by Oren Eini <oren@ravendb.net>
-myravendb.com
+ravendb.cloud
 ravendb.community
 ravendb.me
 development.run
 ravendb.run
 
+// home.pl S.A.: https://home.pl
+// Submitted by Krzysztof Wolski <krzysztof.wolski@home.eu>
+homesklep.pl
+
+// Hong Kong Productivity Council: https://www.hkpc.org/
+// Submitted by SECaaS Team <summchan@hkpc.org>
+secaas.hk
+
+// Hoplix : https://www.hoplix.com
+// Submitted by Danilo De Franco<info@hoplix.shop>
+hoplix.shop
+
+
 // HOSTBIP REGISTRY : https://www.hostbip.com/
 // Submitted by Atanunu Igbunuroghene <publicsuffixlist@hostbip.com>
-bpl.biz
 orx.biz
-ng.city
 biz.gl
-ng.ink
 col.ng
 firm.ng
 gen.ng
 ltd.ng
 ngo.ng
-ng.school
+edu.scot
 sch.so
 
+// HostFly : https://www.ie.ua
+// Submitted by Bohdan Dub <support@hostfly.com.ua>
+ie.ua
+
 // HostyHosting (hostyhosting.com)
 hostyhosting.io
 
@@ -12085,6 +11994,24 @@ moonscale.net
 // Submitted by Hannu Aronsson <haa@iki.fi>
 iki.fi
 
+// iliad italia: https://www.iliad.it
+// Submitted by Marios Makassikis <mmakassikis@freebox.fr>
+ibxos.it
+iliadboxos.it
+
+// Impertrix Solutions : <https://impertrixcdn.com>
+// Submitted by Zhixiang Zhao <csuite@impertrix.com>
+impertrixcdn.com
+impertrix.com
+
+// Incsub, LLC: https://incsub.com/
+// Submitted by Aaron Edwards <sysadmins@incsub.com>
+smushcdn.com
+wphostedmail.com
+wpmucdn.com
+tempurl.host
+wpmudev.host
+
 // Individual Network Berlin e.V. : https://www.in-berlin.de/
 // Submitted by Christian Seitz <chris@in-berlin.de>
 dyn-berlin.de
@@ -12141,16 +12068,22 @@ to.leg.br
 pixolino.com
 
 // Internet-Pro, LLP: https://netangels.ru/
-// Submited by Vasiliy Sheredeko <piphon@gmail.com>
+// Submitted by Vasiliy Sheredeko <piphon@gmail.com>
 na4u.ru
 
+// iopsys software solutions AB : https://iopsys.eu/
+// Submitted by Roman Azarenko <roman.azarenko@iopsys.eu>
+iopsys.se
+
 // IPiFony Systems, Inc. : https://www.ipifony.com/
 // Submitted by Matthew Hardeman <mhardeman@ipifony.com>
 ipifony.net
 
-// IServ GmbH : https://iserv.eu
-// Submitted by Kim-Alexander Brodowski <info@iserv.eu>
+// IServ GmbH : https://iserv.de
+// Submitted by Mario Hoberg <info@iserv.de>
+iservschule.de
 mein-iserv.de
+schulplattform.de
 schulserver.de
 test-iserv.de
 iserv.dev
@@ -12159,31 +12092,91 @@ iserv.dev
 // Submitted by Yuji Minagawa <domains-admin@iodata.jp>
 iobb.net
 
-//Jelastic, Inc. : https://jelastic.com/
-// Submited by Ihor Kolodyuk <ik@jelastic.com>
+// Jelastic, Inc. : https://jelastic.com/
+// Submitted by Ihor Kolodyuk <ik@jelastic.com>
+mel.cloudlets.com.au
+cloud.interhostsolutions.be
+users.scale.virtualcloud.com.br
+mycloud.by
+alp1.ae.flow.ch
 appengine.flow.ch
+es-1.axarnet.cloud
+diadem.cloud
 vip.jelastic.cloud
 jele.cloud
+it1.eur.aruba.jenv-aruba.cloud
+it1.jenv-aruba.cloud
+keliweb.cloud
+cs.keliweb.cloud
+oxa.cloud
+tn.oxa.cloud
+uk.oxa.cloud
+primetel.cloud
+uk.primetel.cloud
+ca.reclaim.cloud
+uk.reclaim.cloud
+us.reclaim.cloud
+ch.trendhosting.cloud
+de.trendhosting.cloud
 jele.club
+amscompute.com
+clicketcloud.com
 dopaas.com
 hidora.com
+paas.hosted-by-previder.com
+rag-cloud.hosteur.com
+rag-cloud-ch.hosteur.com
 jcloud.ik-server.com
+jcloud-ver-jpc.ik-server.com
 demo.jelastic.com
+kilatiron.com
 paas.massivegrid.com
+jed.wafaicloud.com
+lon.wafaicloud.com
+ryd.wafaicloud.com
 j.scaleforce.com.cy
 jelastic.dogado.eu
 fi.cloudplatform.fi
+demo.datacenter.fi
 paas.datacenter.fi
 jele.host
 mircloud.host
+paas.beebyte.io
+sekd1.beebyteapp.io
 jele.io
+cloud-fr1.unispace.io
+jc.neen.it
+cloud.jelastic.open.tim.it
+jcloud.kz
+upaas.kazteleport.kz
 cloudjiffy.net
+fra1-de.cloudjiffy.net
+west1-us.cloudjiffy.net
 jls-sto1.elastx.net
+jls-sto2.elastx.net
+jls-sto3.elastx.net
+faststacks.net
+fr-1.paas.massivegrid.net
+lon-1.paas.massivegrid.net
+lon-2.paas.massivegrid.net
+ny-1.paas.massivegrid.net
+ny-2.paas.massivegrid.net
+sg-1.paas.massivegrid.net
 jelastic.saveincloud.net
+nordeste-idc.saveincloud.net
+j.scaleforce.net
+jelastic.tsukaeru.net
+sdscloud.pl
+unicloud.pl
+mircloud.ru
 jelastic.regruhosting.ru
+enscaled.sg
 jele.site
 jelastic.team
+orangecloud.tn
 j.layershift.co.uk
+phx.enscaled.us
+mircloud.us
 
 // Jino : https://www.jino.ru
 // Submitted by Sergey Ulyashin <ulyashin@jino.ru>
@@ -12193,6 +12186,10 @@ myjino.ru
 *.spectrum.myjino.ru
 *.vps.myjino.ru
 
+// Jotelulu S.L. : https://jotelulu.com
+// Submitted by Daniel Fariña <ingenieria@jotelulu.com>
+jotelulu.cloud
+
 // Joyent : https://www.joyent.com/
 // Submitted by Brian Bennett <brian.bennett@joyent.com>
 *.triton.zone
@@ -12207,6 +12204,14 @@ js.org
 kaas.gg
 khplay.nl
 
+// Kakao : https://www.kakaocorp.com/
+// Submitted by JaeYoong Lee <cec@kakaocorp.com>
+ktistory.com
+
+// Kapsi : https://kapsi.fi
+// Submitted by Tomi Juntunen <erani@kapsi.fi>
+kapsi.fi
+
 // Keyweb AG : https://www.keyweb.de
 // Submitted by Martin Dannehl <postmaster@keymachine.de>
 keymachine.de
@@ -12220,14 +12225,28 @@ uni5.net
 // Submitted by Roy Keene <rkeene@knightpoint.com>
 knightpoint.systems
 
+// KoobinEvent, SL: https://www.koobin.com
+// Submitted by Iván Oliva <ivan.oliva@koobin.com>
+koobin.events
+
 // KUROKU LTD : https://kuroku.ltd/
 // Submitted by DisposaBoy <security@oya.to>
 oya.to
 
+// Katholieke Universiteit Leuven: https://www.kuleuven.be
+// Submitted by Abuse KU Leuven <abuse@kuleuven.be>
+kuleuven.cloud
+ezproxy.kuleuven.be
+
 // .KRD : http://nic.krd/data/krd/Registration%20Policy.pdf
 co.krd
 edu.krd
 
+// Krellian Ltd. : https://krellian.com
+// Submitted by Ben Francis <ben@krellian.com>
+krellian.net
+webthings.io
+
 // LCube - Professional hosting e.K. : https://www.lcube-webhosting.de
 // Submitted by Lars Laehn <info@lcube.de>
 git-repos.de
@@ -12258,10 +12277,6 @@ co.technology
 // Submitted by Greg Holland <greg.holland@lmpm.com>
 app.lmpm.com
 
-// Linki Tools UG : https://linki.tools
-// Submitted by Paulo Matos <pmatos@linki.tools>
-linkitools.space
-
 // linkyard ldt: https://www.linkyard.ch/
 // Submitted by Mario Siegenthaler <mario.siegenthaler@linkyard.ch>
 linkyard.cloud
@@ -12272,11 +12287,16 @@ linkyard-cloud.ch
 members.linode.com
 *.nodebalancer.linode.com
 *.linodeobjects.com
+ip.linodeusercontent.com
 
 // LiquidNet Ltd : http://www.liquidnetlimited.com/
 // Submitted by Victor Velchev <admin@liquidnetlimited.com>
 we.bs
 
+// Localcert : https://localcert.dev
+// Submitted by Lann Martin <security@localcert.dev>
+*.user.localcert.dev
+
 // localzone.xyz
 // Submitted by Kenny Niehage <hello@yahe.sh>
 localzone.xyz
@@ -12289,6 +12309,14 @@ loginline.io
 loginline.services
 loginline.site
 
+// Lokalized : https://lokalized.nl
+// Submitted by Noah Taheij <noah@lokalized.nl>
+servers.run
+
+// Lõhmus Family, The
+// Submitted by Heiki Lõhmus <hostmaster at lohmus dot me>
+lohmus.me
+
 // LubMAN UMCS Sp. z o.o : https://lubman.pl/
 // Submitted by Ireneusz Maliszewski <ireneusz.maliszewski@lubman.pl>
 krasnik.pl
@@ -12300,7 +12328,6 @@ swidnik.pl
 
 // Lug.org.uk : https://lug.org.uk
 // Submitted by Jon Spriggs <admin@lug.org.uk>
-uklugs.org
 glug.org.uk
 lug.org.uk
 lugs.org.uk
@@ -12326,6 +12353,7 @@ barsy.online
 barsy.org
 barsy.pro
 barsy.pub
+barsy.ro
 barsy.shop
 barsy.site
 barsy.support
@@ -12344,20 +12372,43 @@ mayfirst.org
 // Submitted by Ilya Zaretskiy <zaretskiy@corp.mail.ru>
 hb.cldmail.ru
 
+// Mail Transfer Platform : https://www.neupeer.com
+// Submitted by Li Hui <lihui@neupeer.com>
+cn.vu
+
+// Maze Play: https://www.mazeplay.com
+// Submitted by Adam Humpherys <adam@mws.dev>
+mazeplay.com
+
 // mcpe.me : https://mcpe.me
 // Submitted by Noa Heyl <hi@noa.dev>
 mcpe.me
 
 // McHost : https://mchost.ru
 // Submitted by Evgeniy Subbotin <e.subbotin@mchost.ru>
+mcdir.me
 mcdir.ru
+mcpre.ru
 vps.mcdir.ru
 
+// Mediatech : https://mediatech.by
+// Submitted by Evgeniy Kozhuhovskiy <ugenk@mediatech.by>
+mediatech.by
+mediatech.dev
+
+// Medicom Health : https://medicomhealth.com
+// Submitted by Michael Olson <molson@medicomhealth.com>
+hra.health
+
 // Memset hosting : https://www.memset.com
 // Submitted by Tom Whitwell <domains@memset.com>
 miniserver.com
 memset.net
 
+// Messerli Informatik AG : https://www.messerli.ch/
+// Submitted by Ruben Schmidmeister <psl-maintainers@messerli.ch>
+messerli.app
+
 // MetaCentrum, CESNET z.s.p.o. : https://www.metacentrum.cz/en/
 // Submitted by Zdeněk Šustr <zdenek.sustr@cesnet.cz>
 *.cloud.metacentrum.cz
@@ -12377,16 +12428,29 @@ eu.meteorapp.com
 co.pl
 
 // Microsoft Corporation : http://microsoft.com
-// Submitted by Mostafa Elzeiny <moelzein@microsoft.com>
+// Submitted by Public Suffix List Admin <msftpsladmin@microsoft.com>
 *.azurecontainer.io
 azurewebsites.net
 azure-mobile.net
 cloudapp.net
+azurestaticapps.net
+1.azurestaticapps.net
+2.azurestaticapps.net
+3.azurestaticapps.net
+centralus.azurestaticapps.net
+eastasia.azurestaticapps.net
+eastus2.azurestaticapps.net
+westeurope.azurestaticapps.net
+westus2.azurestaticapps.net
 
 // minion.systems : http://minion.systems
 // Submitted by Robert Böttinger <r@minion.systems>
 csx.cc
 
+// Mintere : https://mintere.com/
+// Submitted by Ben Aubin <security@mintere.com>
+mintere.site
+
 // MobileEducation, LLC : https://joinforte.com
 // Submitted by Grayson Martin <grayson.martin@mobileeducation.us>
 forte.id
@@ -12409,8 +12473,11 @@ pp.ru
 // Submitted by Paul Cammish <kelduum@mythic-beasts.com>
 hostedpi.com
 customer.mythic-beasts.com
+caracal.mythic-beasts.com
+fentiger.mythic-beasts.com
 lynx.mythic-beasts.com
 ocelot.mythic-beasts.com
+oncilla.mythic-beasts.com
 onza.mythic-beasts.com
 sphinx.mythic-beasts.com
 vs.mythic-beasts.com
@@ -12422,25 +12489,9 @@ cust.retrosnub.co.uk
 // Submitted by Paulus Schoutsen <infra@nabucasa.com>
 ui.nabu.casa
 
-// Names.of.London : https://names.of.london/
-// Submitted by James Stevens <registry@names.of.london> or <james@jrcs.net>
-pony.club
-of.fashion
-on.fashion
-of.football
-in.london
-of.london
-for.men
-and.mom
-for.mom
-for.one
-for.sale
-of.work
-to.work
-
-// NCTU.ME : https://nctu.me/
-// Submitted by Tocknicsu <admin@nctu.me>
-nctu.me
+// Net at Work Gmbh : https://www.netatwork.de
+// Submitted by Jan Jaeschke <jan.jaeschke@netatwork.de>
+cloud.nospamproxy.com
 
 // Netlify : https://www.netlify.com
 // Submitted by Jessica Parsons <jessica@netlify.com>
@@ -12452,7 +12503,19 @@ netlify.app
 
 // ngrok : https://ngrok.com/
 // Submitted by Alan Shreve <alan@ngrok.com>
+ngrok.app
+ngrok-free.app
+ngrok.dev
+ngrok-free.dev
 ngrok.io
+ap.ngrok.io
+au.ngrok.io
+eu.ngrok.io
+in.ngrok.io
+jp.ngrok.io
+sa.ngrok.io
+us.ngrok.io
+ngrok.pizza
 
 // Nimbus Hosting Ltd. : https://www.nimbushosting.co.uk/
 // Submitted by Nicholas Ford <nick@nimbushosting.co.uk>
@@ -12462,6 +12525,23 @@ nh-serv.co.uk
 // Submitted by Jeff Wheelhouse <support@nearlyfreespeech.net>
 nfshost.com
 
+// Noop : https://noop.app
+// Submitted by Nathaniel Schweinberg <noop@rearc.io>
+*.developer.app
+noop.app
+
+// Northflank Ltd. : https://northflank.com/
+// Submitted by Marco Suter <marco@northflank.com>
+*.northflank.app
+*.build.run
+*.code.run
+*.database.run
+*.migration.run
+
+// Noticeable : https://noticeable.io
+// Submitted by Laurent Pellegrino <security@noticeable.io>
+noticeable.news
+
 // Now-DNS : https://now-dns.com
 // Submitted by Steve Russell <steve@now-dns.com>
 dnsking.ch
@@ -12587,11 +12667,6 @@ zapto.org
 // Submitted by Konstantin Nosov <Nosov@nodeart.io>
 stage.nodeart.io
 
-// Nodum B.V. : https://nodum.io/
-// Submitted by Wietse Wind <hello+publicsuffixlist@nodum.io>
-nodum.co
-nodum.io
-
 // Nucleos Inc. : https://nucleos.com
 // Submitted by Piotr Zduniak <piotr@nucleos.com>
 pcloud.host
@@ -12600,60 +12675,6 @@ pcloud.host
 // Submitted by Matthew Brown <mattbrown@nyc.mn>
 nyc.mn
 
-// NymNom : https://nymnom.com/
-// Submitted by NymNom <psl@nymnom.com>
-nom.ae
-nom.af
-nom.ai
-nom.al
-nym.by
-nom.bz
-nym.bz
-nom.cl
-nym.ec
-nom.gd
-nom.ge
-nom.gl
-nym.gr
-nom.gt
-nym.gy
-nym.hk
-nom.hn
-nym.ie
-nom.im
-nom.ke
-nym.kz
-nym.la
-nym.lc
-nom.li
-nym.li
-nym.lt
-nym.lu
-nom.lv
-nym.me
-nom.mk
-nym.mn
-nym.mx
-nom.nu
-nym.nz
-nym.pe
-nym.pt
-nom.pw
-nom.qa
-nym.ro
-nom.rs
-nom.si
-nym.sk
-nom.st
-nym.su
-nym.sx
-nom.tj
-nym.tw
-nom.ug
-nom.uy
-nom.vc
-nom.vg
-
 // Observable, Inc. : https://observablehq.com
 // Submitted by Mike Bostock <dns@observablehq.com>
 static.observableusercontent.com
@@ -12674,6 +12695,29 @@ cloudycluster.net
 // Submitted by Vicary Archangel <vicary@omniwe.com>
 omniwe.site
 
+// One.com: https://www.one.com/
+// Submitted by Jacob Bunk Nielsen <jbn@one.com>
+123hjemmeside.dk
+123hjemmeside.no
+123homepage.it
+123kotisivu.fi
+123minsida.se
+123miweb.es
+123paginaweb.pt
+123sait.ru
+123siteweb.fr
+123webseite.at
+123webseite.de
+123website.be
+123website.ch
+123website.lu
+123website.nl
+service.one
+simplesite.com
+simplesite.com.br
+simplesite.gr
+simplesite.pl
+
 // One Fold Media : http://www.onefoldmedia.com/
 // Submitted by Eddie Jones <eddie@onefoldmedia.com>
 nid.io
@@ -12686,18 +12730,33 @@ opensocial.site
 // Submitted by Sven Marnach <sven@opencraft.com>
 opencraft.hosting
 
+// OpenResearch GmbH: https://openresearch.com/
+// Submitted by Philipp Schmid <ops@openresearch.com>
+orsites.com
+
 // Opera Software, A.S.A.
 // Submitted by Yngve Pettersen <yngve@opera.com>
 operaunite.com
 
-// Oursky Limited : https://skygear.io/
-// Submited by Skygear Developer <hello@skygear.io>
+// Orange : https://www.orange.com
+// Submitted by Alexandre Linte <alexandre.linte@orange.com>
+tech.orange
+
+// Oursky Limited : https://authgear.com/, https://skygear.io/
+// Submitted by Authgear Team <hello@authgear.com>, Skygear Developer <hello@skygear.io>
+authgear-staging.com
+authgearapps.com
 skygearapp.com
 
 // OutSystems
 // Submitted by Duarte Santos <domain-admin@outsystemscloud.com>
 outsystemscloud.com
 
+// OVHcloud: https://ovhcloud.com
+// Submitted by Vincent Cassé <vincent.casse@ovhcloud.com>
+*.webpaas.ovh.net
+*.hosting.ovh.net
+
 // OwnProvider GmbH: http://www.ownprovider.com
 // Submitted by Jan Moennich <jan.moennich@ownprovider.com>
 ownprovider.com
@@ -12727,6 +12786,10 @@ pagefrontapp.com
 // Submitted by Yann Guichard <yann@pagexl.com>
 pagexl.com
 
+// Paywhirl, Inc : https://paywhirl.com/
+// Submitted by Daniel Netzer <dan@paywhirl.com>
+*.paywhirl.com
+
 // pcarrier.ca Software Inc: https://pcarrier.ca/
 // Submitted by Pierre Carrier <pc@rrier.ca>
 bar0.net
@@ -12755,6 +12818,10 @@ mypep.link
 // Submitted by Kenneth Van Alstyne <kvanalstyne@perspecta.com>
 perspecta.cloud
 
+// PE Ulyanov Kirill Sergeevich : https://airy.host
+// Submitted by Kirill Ulyanov <k.ulyanov@airy.host>
+lk3.ru
+
 // Planet-Work : https://www.planet-work.com/
 // Submitted by Frédéric VANNIÈRE <f.vanniere@planet-work.com>
 on-web.fr
@@ -12766,6 +12833,7 @@ ent.platform.sh
 eu.platform.sh
 us.platform.sh
 *.platformsh.site
+*.tst.site
 
 // Platter: https://platter.dev
 // Submitted by Patrick Flor <patrick@platter.dev>
@@ -12783,10 +12851,25 @@ pleskns.com
 // Submitted by Maximilian Schieder <maxi@zeug.co>
 dyn53.io
 
+// Porter : https://porter.run/
+// Submitted by Rudraksh MK <rudi@porter.run>
+onporter.run
+
 // Positive Codes Technology Company : http://co.bn/faq.html
 // Submitted by Zulfais <pc@co.bn>
 co.bn
 
+// Postman, Inc : https://postman.com
+// Submitted by Rahul Dhawan <security@postman.com>
+postman-echo.com
+pstmn.io
+mock.pstmn.io
+httpbin.org
+
+//prequalifyme.today : https://prequalifyme.today
+//Submitted by DeepakTiwari deepak@ivylead.io
+prequalifyme.today
+
 // prgmr.com : https://prgmr.com/
 // Submitted by Sarah Newman <owner@prgmr.com>
 xen.prgmr.com
@@ -12816,14 +12899,35 @@ byen.site
 // Submitted by Kor Nielsen <kor@pubtls.org>
 pubtls.org
 
+// PythonAnywhere LLP: https://www.pythonanywhere.com
+// Submitted by Giles Thomas <giles@pythonanywhere.com>
+pythonanywhere.com
+eu.pythonanywhere.com
+
+// QOTO, Org.
+// Submitted by Jeffrey Phillips Freeman <jeffrey.freeman@qoto.org>
+qoto.io
+
 // Qualifio : https://qualifio.com/
 // Submitted by Xavier De Cock <xdecock@gmail.com>
 qualifioapp.com
 
+// Quality Unit: https://qualityunit.com
+// Submitted by Vasyl Tsalko <vtsalko@qualityunit.com>
+ladesk.com
+
 // QuickBackend: https://www.quickbackend.com
 // Submitted by Dani Biro <dani@pymet.com>
 qbuser.com
 
+// Rad Web Hosting: https://radwebhosting.com
+// Submitted by Scott Claeys <s.claeys@radwebhosting.com>
+cloudsite.builders
+
+// Redgate Software: https://red-gate.com
+// Submitted by Andrew Farries <andrew.farries@red-gate.com>
+instances.spawn.cc
+
 // Redstar Consultants : https://www.redstarconsultants.com/
 // Submitted by Jons Slemmer <jons@redstarconsultants.com>
 instantcloud.cn
@@ -12885,8 +12989,11 @@ app.render.com
 onrender.com
 
 // Repl.it : https://repl.it
-// Submitted by Mason Clayton <mason@repl.it>
+// Submitted by Lincoln Bergeson <lincoln@replit.com>
+firewalledreplit.co
+id.firewalledreplit.co
 repl.co
+id.repl.co
 repl.run
 
 // Resin.io : https://resin.io
@@ -12901,13 +13008,93 @@ hzc.io
 // Revitalised Limited : http://www.revitalised.co.uk
 // Submitted by Jack Price <jack@revitalised.co.uk>
 wellbeingzone.eu
-ptplus.fit
 wellbeingzone.co.uk
 
+// Rico Developments Limited : https://adimo.co
+// Submitted by Colin Brown <hello@adimo.co>
+adimo.co.uk
+
+// Riseup Networks : https://riseup.net
+// Submitted by Micah Anderson <micah@riseup.net>
+itcouldbewor.se
+
 // Rochester Institute of Technology : http://www.rit.edu/
 // Submitted by Jennifer Herting <jchits@rit.edu>
 git-pages.rit.edu
 
+// Rocky Enterprise Software Foundation : https://resf.org
+// Submitted by Neil Hanlon <neil@resf.org>
+rocky.page
+
+// Rusnames Limited: http://rusnames.ru/
+// Submitted by Sergey Zotov <admin@rusnames.ru>
+биз.рус
+ком.рус
+крым.рус
+мир.рус
+мск.рус
+орг.рус
+самара.рус
+сочи.рус
+спб.рус
+я.рус
+
+// SAKURA Internet Inc. : https://www.sakura.ad.jp/
+// Submitted by Internet Service Department <rs-vendor-ml@sakura.ad.jp>
+180r.com
+dojin.com
+sakuratan.com
+sakuraweb.com
+x0.com
+2-d.jp
+bona.jp
+crap.jp
+daynight.jp
+eek.jp
+flop.jp
+halfmoon.jp
+jeez.jp
+matrix.jp
+mimoza.jp
+ivory.ne.jp
+mail-box.ne.jp
+mints.ne.jp
+mokuren.ne.jp
+opal.ne.jp
+sakura.ne.jp
+sumomo.ne.jp
+topaz.ne.jp
+netgamers.jp
+nyanta.jp
+o0o0.jp
+rdy.jp
+rgr.jp
+rulez.jp
+s3.isk01.sakurastorage.jp
+s3.isk02.sakurastorage.jp
+saloon.jp
+sblo.jp
+skr.jp
+tank.jp
+uh-oh.jp
+undo.jp
+rs.webaccel.jp
+user.webaccel.jp
+websozai.jp
+xii.jp
+squares.net
+jpn.org
+kirara.st
+x0.to
+from.tv
+sakura.tv
+
+// Salesforce.com, Inc. https://salesforce.com/
+// Submitted by Michael Biven <mbiven@salesforce.com>
+*.builder.code.com
+*.dev-builder.code.com
+*.stg-builder.code.com
+
 // Sandstorm Development Group, Inc. : https://sandcats.io/
 // Submitted by Asheesh Laroia <asheesh@sandstorm.io>
 sandcats.io
@@ -12917,6 +13104,34 @@ sandcats.io
 logoip.de
 logoip.com
 
+// Scaleway : https://www.scaleway.com/
+// Submitted by Rémy Léone <rleone@scaleway.com>
+fr-par-1.baremetal.scw.cloud
+fr-par-2.baremetal.scw.cloud
+nl-ams-1.baremetal.scw.cloud
+fnc.fr-par.scw.cloud
+functions.fnc.fr-par.scw.cloud
+k8s.fr-par.scw.cloud
+nodes.k8s.fr-par.scw.cloud
+s3.fr-par.scw.cloud
+s3-website.fr-par.scw.cloud
+whm.fr-par.scw.cloud
+priv.instances.scw.cloud
+pub.instances.scw.cloud
+k8s.scw.cloud
+k8s.nl-ams.scw.cloud
+nodes.k8s.nl-ams.scw.cloud
+s3.nl-ams.scw.cloud
+s3-website.nl-ams.scw.cloud
+whm.nl-ams.scw.cloud
+k8s.pl-waw.scw.cloud
+nodes.k8s.pl-waw.scw.cloud
+s3.pl-waw.scw.cloud
+s3-website.pl-waw.scw.cloud
+scalebook.scw.cloud
+smartlabeling.scw.cloud
+dedibox.fr
+
 // schokokeks.org GbR : https://schokokeks.org/
 // Submitted by Hanno Böck <hanno@schokokeks.org>
 schokokeks.net
@@ -12924,6 +13139,7 @@ schokokeks.net
 // Scottish Government: https://www.gov.scot
 // Submitted by Martin Ellis <martin.ellis@gov.scot>
 gov.scot
+service.gov.scot
 
 // Scry Security : http://www.scrysec.com
 // Submitted by Shante Adam <shante@skyhat.io>
@@ -12946,16 +13162,33 @@ spdns.org
 // Submitted by Artem Kondratev <accounts@seidat.com>
 seidat.net
 
+// Sellfy : https://sellfy.com
+// Submitted by Yuriy Romadin <contact@sellfy.com>
+sellfy.store
+
 // Senseering GmbH : https://www.senseering.de
 // Submitted by Felix Mönckemeyer <f.moenckemeyer@senseering.de>
 senseering.net
 
+// Sendmsg: https://www.sendmsg.co.il
+// Submitted by Assaf Stern <domains@comstar.co.il>
+minisite.ms
+
+// Service Magnet : https://myservicemagnet.com
+// Submitted by Dave Sanders <dave@myservicemagnet.com>
+magnet.page
+
 // Service Online LLC : http://drs.ua/
 // Submitted by Serhii Bulakh <support@drs.ua>
 biz.ua
 co.ua
 pp.ua
 
+// Shift Crypto AG : https://shiftcrypto.ch
+// Submitted by alex <alex@shiftcrypto.ch>
+shiftcrypto.dev
+shiftcrypto.io
+
 // ShiftEdit : https://shiftedit.net/
 // Submitted by Adam Jimenez <adam@shiftcreate.com>
 shiftedit.io
@@ -12964,6 +13197,10 @@ shiftedit.io
 // Submitted by Alex Bowers <alex@shopblocks.com>
 myshopblocks.com
 
+// Shopify : https://www.shopify.com
+// Submitted by Alex Richter <alex.richter@shopify.com>
+myshopify.com
+
 // Shopit : https://www.shopitcommerce.com/
 // Submitted by Craig McMahon <craig@shopitcommerce.com>
 shopitsite.com
@@ -12998,16 +13235,52 @@ beta.bounty-full.com
 // Submitted by Aral Balkan <aral@small-tech.org>
 small-web.org
 
+// Smoove.io : https://www.smoove.io/
+// Submitted by Dan Kozak <dan@smoove.io>
+vp4.me
+
+// Snowflake Inc : https://www.snowflake.com/
+// Submitted by Faith Olapade <faith.olapade@snowflake.com>
+snowflake.app
+privatelink.snowflake.app
+streamlit.app
+streamlitapp.com
+
+// Snowplow Analytics : https://snowplowanalytics.com/
+// Submitted by Ian Streeter <ian@snowplowanalytics.com>
+try-snowplow.com
+
+// SourceHut : https://sourcehut.org
+// Submitted by Drew DeVault <sir@cmpwn.com>
+srht.site
+
 // Stackhero : https://www.stackhero.io
 // Submitted by Adrien Gillon <adrien+public-suffix-list@stackhero.io>
 stackhero-network.com
 
+// Staclar : https://staclar.com
+// Submitted by Q Misell <q@staclar.com>
+musician.io
+// Submitted by Matthias Merkel <matthias.merkel@staclar.com>
+novecore.site
+
 // staticland : https://static.land
 // Submitted by Seth Vincent <sethvincent@gmail.com>
 static.land
 dev.static.land
 sites.static.land
 
+// Storebase : https://www.storebase.io
+// Submitted by Tony Schirmer <tony@storebase.io>
+storebase.store
+
+// Strategic System Consulting (eApps Hosting): https://www.eapps.com/
+// Submitted by Alex Oancea <aoancea@cloudscale365.com>
+vps-host.net
+atl.jelastic.vps-host.net
+njs.jelastic.vps-host.net
+ric.jelastic.vps-host.net
+
 // Sony Interactive Entertainment LLC : https://sie.com/
 // Submitted by David Coles <david.coles@sony.com>
 playstation-cloud.com
@@ -13025,6 +13298,28 @@ spacekit.io
 // Submitted by Stefan Neufeind <info@speedpartner.de>
 customer.speedpartner.de
 
+// Spreadshop (sprd.net AG) : https://www.spreadshop.com/
+// Submitted by Martin Breest <security@spreadshop.com>
+myspreadshop.at
+myspreadshop.com.au
+myspreadshop.be
+myspreadshop.ca
+myspreadshop.ch
+myspreadshop.com
+myspreadshop.de
+myspreadshop.dk
+myspreadshop.es
+myspreadshop.fi
+myspreadshop.fr
+myspreadshop.ie
+myspreadshop.it
+myspreadshop.net
+myspreadshop.nl
+myspreadshop.no
+myspreadshop.pl
+myspreadshop.se
+myspreadshop.co.uk
+
 // Standard Library : https://stdlib.com
 // Submitted by Jacob Lee <jacob@stdlib.com>
 api.stdlib.com
@@ -13046,10 +13341,12 @@ user.srcf.net
 // Submitted by Dan Miller <dm@sub6.com>
 temp-dns.com
 
-// Swisscom Application Cloud: https://developer.swisscom.com
-// Submitted by Matthias.Winzeler <matthias.winzeler@swisscom.com>
-applicationcloud.io
-scapp.io
+// Supabase : https://supabase.io
+// Submitted by Inian Parameshwaran <security@supabase.io>
+supabase.co
+supabase.in
+supabase.net
+su.paba.se
 
 // Symfony, SAS : https://symfony.com/
 // Submitted by Fabien Potencier <fabien@symfony.com>
@@ -13062,26 +13359,38 @@ syncloud.it
 
 // Synology, Inc. : https://www.synology.com/
 // Submitted by Rony Weng <ronyweng@synology.com>
-diskstation.me
 dscloud.biz
-dscloud.me
-dscloud.mobi
+direct.quickconnect.cn
 dsmynas.com
-dsmynas.net
-dsmynas.org
 familyds.com
-familyds.net
-familyds.org
+diskstation.me
+dscloud.me
 i234.me
 myds.me
 synology.me
+dscloud.mobi
+dsmynas.net
+familyds.net
+dsmynas.org
+familyds.org
 vpnplus.to
 direct.quickconnect.to
 
+// Tabit Technologies Ltd. : https://tabit.cloud/
+// Submitted by Oren Agiv <oren@tabit.cloud>
+tabitorder.co.il
+mytabit.co.il
+mytabit.com
+
 // TAIFUN Software AG : http://taifun-software.de
 // Submitted by Bjoern Henke <dev-server@taifun-software.de>
 taifun-dns.de
 
+// Tailscale Inc. : https://www.tailscale.com
+// Submitted by David Anderson <danderson@tailscale.com>
+beta.tailscale.net
+ts.net
+
 // TASK geographical domains (www.task.gda.pl/uslugi/dns)
 gda.pl
 gdansk.pl
@@ -13089,9 +13398,14 @@ gdynia.pl
 med.pl
 sopot.pl
 
+// team.blue https://team.blue
+// Submitted by Cedric Dubois <cedric.dubois@team.blue>
+site.tb-hosting.com
+
 // Teckids e.V. : https://www.teckids.org
 // Submitted by Dominik George <dominik.george@teckids.org>
-edugit.org
+edugit.io
+s3.teckids.org
 
 // Telebit : https://telebit.cloud
 // Submitted by AJ ONeal <aj@telebit.cloud>
@@ -13099,24 +13413,34 @@ telebit.app
 telebit.io
 *.telebit.xyz
 
-// The Gwiddle Foundation : https://gwiddlefoundation.org.uk
-// Submitted by Joshua Bayfield <joshua.bayfield@gwiddlefoundation.org.uk>
-gwiddle.co.uk
-
 // Thingdust AG : https://thingdust.com/
 // Submitted by Adrian Imboden <adi@thingdust.com>
+*.firenet.ch
+*.svc.firenet.ch
+reservd.com
 thingdustdata.com
 cust.dev.thingdust.io
 cust.disrec.thingdust.io
 cust.prod.thingdust.io
 cust.testing.thingdust.io
-*.firenet.ch
-*.svc.firenet.ch
+reservd.dev.thingdust.io
+reservd.disrec.thingdust.io
+reservd.testing.thingdust.io
+
+// ticket i/O GmbH : https://ticket.io
+// Submitted by Christian Franke <it@ticket.io>
+tickets.io
 
 // Tlon.io : https://tlon.io
 // Submitted by Mark Staarink <mark@tlon.io>
 arvo.network
 azimuth.network
+tlon.network
+
+// Tor Project, Inc. : https://torproject.org
+// Submitted by Antoine Beaupré <anarcat@torproject.org
+torproject.net
+pages.torproject.net
 
 // TownNews.com : http://www.townnews.com
 // Submitted by Dustin Ward <dward@townnews.com>
@@ -13151,6 +13475,10 @@ lima.zone
 *.transurl.eu
 *.transurl.nl
 
+// TransIP: https://www.transip.nl
+// Submitted by Cedric Dubois <cedric.dubois@team.blue>
+site.transip.me
+
 // TuxFamily : http://tuxfamily.org
 // Submitted by TuxFamily administrators <adm@staff.tuxfamily.org>
 tuxfamily.org
@@ -13171,6 +13499,14 @@ syno-ds.de
 synology-diskstation.de
 synology-ds.de
 
+// Typedream : https://typedream.com
+// Submitted by Putri Karunia <putri@typedream.com>
+typedream.app
+
+// Typeform : https://www.typeform.com
+// Submitted by Sergi Ferriz <sergi.ferriz@typeform.com>
+pro.typeform.com
+
 // Uberspace : https://uberspace.de
 // Submitted by Moritz Werner <mwerner@jonaspasche.com>
 uber.space
@@ -13183,11 +13519,28 @@ hk.org
 ltd.hk
 inc.hk
 
+// UK Intis Telecom LTD : https://it.com
+// Submitted by ITComdomains <to@it.com>
+it.com
+
+// UNIVERSAL DOMAIN REGISTRY : https://www.udr.org.yt/
+// see also: whois -h whois.udr.org.yt help
+// Submitted by Atanunu Igbunuroghene <publicsuffixlist@udr.org.yt>
+name.pm
+sch.tf
+biz.wf
+sch.wf
+org.yt
+
 // United Gameserver GmbH : https://united-gameserver.de
 // Submitted by Stefan Schwarz <sysadm@united-gameserver.de>
 virtualuser.de
 virtual-user.de
 
+// Upli : https://upli.io
+// Submitted by Lenny Bakkalian <lenny.bakkalian@gmail.com>
+upli.io
+
 // urown.net : https://urown.net
 // Submitted by Hostmaster <hostmaster@urown.net>
 urown.cloud
@@ -13250,7 +13603,6 @@ at.md
 de.md
 jp.md
 to.md
-uwu.nu
 indie.porn
 vxl.sh
 ch.tc
@@ -13266,6 +13618,10 @@ me.vu
 // Submitted by Serhii Rostilo <sergey@rostilo.kiev.ua>
 v.ua
 
+// Vultr Objects : https://www.vultr.com/products/object-storage/
+// Submitted by Niels Maumenee <storage@vultr.com>
+*.vultrobjects.com
+
 // Waffle Computer Inc., Ltd. : https://docs.waffleinfo.com
 // Submitted by Masayuki Note <masa@blade.wafflecell.com>
 wafflecell.com
@@ -13274,6 +13630,13 @@ wafflecell.com
 // Submitted by Arnold Hendriks <info@webhare.com>
 *.webhare.dev
 
+// WebHotelier Technologies Ltd: https://www.webhotelier.net/
+// Submitted by Apostolos Tsakpinis <apostolos.tsakpinis@gmail.com>
+reserve-online.net
+reserve-online.com
+bookonline.app
+hotelwithflight.com
+
 // WeDeploy by Liferay, Inc. : https://www.wedeploy.com
 // Submitted by Henrique Vicente <security@wedeploy.com>
 wedeploy.io
@@ -13299,17 +13662,35 @@ wmcloud.org
 panel.gg
 daemon.panel.gg
 
+// Wizard Zines : https://wizardzines.com
+// Submitted by Julia Evans <julia@wizardzines.com>
+messwithdns.com
+
 // WoltLab GmbH : https://www.woltlab.com
 // Submitted by Tim Düsterhus <security@woltlab.cloud>
+woltlab-demo.com
 myforum.community
 community-pro.de
 diskussionsbereich.de
 community-pro.net
 meinforum.net
 
-// www.com.vc : http://www.com.vc
-// Submitted by Li Hui <lihui@sinopub.com>
-cn.vu
+// Woods Valldata : https://www.woodsvalldata.co.uk/
+// Submitted by Chris Whittle <chris.whittle@woodsvalldata.co.uk>
+affinitylottery.org.uk
+raffleentry.org.uk
+weeklylottery.org.uk
+
+// WP Engine : https://wpengine.com/
+// Submitted by Michael Smith <michael.smith@wpengine.com>
+// Submitted by Brandon DuRette <brandon.durette@wpengine.com>
+wpenginepowered.com
+js.wpenginepowered.com
+
+// Wix.com, Inc. : https://www.wix.com
+// Submitted by Shahar Talmi <shahar@wix.com>
+wixsite.com
+editorx.io
 
 // XenonCloud GbR: https://xenoncloud.net
 // Submitted by Julian Uphoff <publicsuffixlist@xenoncloud.net>
@@ -13353,6 +13734,7 @@ ybo.trade
 
 // Yunohost : https://yunohost.org
 // Submitted by Valentin Grimaud <security@yunohost.org>
+ynh.fr
 nohost.me
 noho.st
 
@@ -13371,26 +13753,4 @@ basicserver.io
 virtualserver.io
 enterprisecloud.nu
 
-// Mintere : https://mintere.com/
-// Submitted by Ben Aubin <security@mintere.com>
-mintere.site
-
-// Cityhost LLC  : https://cityhost.ua
-// Submitted by Maksym Rivtin <support@cityhost.net.ua>
-cx.ua
-
-// WP Engine : https://wpengine.com/
-// Submitted by Michael Smith <michael.smith@wpengine.com>
-// Submitted by Brandon DuRette <brandon.durette@wpengine.com>
-wpenginepowered.com
-js.wpenginepowered.com
-
-// Impertrix Solutions : <https://impertrixcdn.com>
-// Submitted by Zhixiang Zhao <csuite@impertrix.com>
-impertrixcdn.com
-impertrix.com
-
-// GignoSystemJapan: http://gsj.bz
-// Submitted by GignoSystemJapan <kakutou-ec@gsj.bz>
-gsj.bz
-// ===END PRIVATE DOMAINS===
\ No newline at end of file
+// ===END PRIVATE DOMAINS===
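
The bulk of the diff above is a refresh of the bundled Public Suffix List snapshot. New private-domain entries such as myshopify.com and wildcard rules such as *.vultrobjects.com only change extraction results when private domains are enabled. A rough illustration of that effect, assuming this refreshed snapshot is loaded (the hostnames below are made up, and exact results depend on the list version actually in use):

    import tldextract

    # Private-domain rules are opt-in; the default extractor ignores them.
    public_only = tldextract.TLDExtract(include_psl_private_domains=False)
    with_private = tldextract.TLDExtract(include_psl_private_domains=True)

    print(public_only("https://shop.example.myshopify.com"))
    # e.g. ExtractResult(subdomain='shop.example', domain='myshopify', suffix='com')
    print(with_private("https://shop.example.myshopify.com"))
    # e.g. ExtractResult(subdomain='shop', domain='example', suffix='myshopify.com')

    # A wildcard rule like "*.vultrobjects.com" swallows one extra label into the suffix.
    print(with_private("https://bucket.ewr1.vultrobjects.com"))
    # e.g. ExtractResult(subdomain='', domain='bucket', suffix='ewr1.vultrobjects.com')
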
diff --git a/tldextract/__init__.py b/tldextract/__init__.py
index 4c6fc6c..8d7bb37 100644
--- a/tldextract/__init__.py
+++ b/tldextract/__init__.py
@@ -1,5 +1,12 @@
 """Export tldextract's public interface."""
 
-from .tldextract import extract, TLDExtract
+from . import _version
+from .tldextract import TLDExtract, extract
 
-from ._version import version as __version__
+__version__: str = _version.version
+
+__all__ = [
+    "extract",
+    "TLDExtract",
+    "__version__",
+]
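
The rewritten __init__.py pins the public surface to extract, TLDExtract, and __version__ via an explicit __all__, with __version__ typed as a str. A minimal sketch of that interface (expected outputs taken from the doctests further down in this diff):

    import tldextract

    print(tldextract.__version__)  # version string re-exported from tldextract._version

    print(tldextract.extract("http://forums.news.cnn.com/"))
    # ExtractResult(subdomain='forums.news', domain='cnn', suffix='com')

    # The same callable interface is available on an explicit instance.
    extractor = tldextract.TLDExtract()
    print(extractor("http://forums.bbc.co.uk/").registered_domain)  # 'bbc.co.uk'
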
diff --git a/tldextract/__main__.py b/tldextract/__main__.py
index 35eb47f..92e4078 100644
--- a/tldextract/__main__.py
+++ b/tldextract/__main__.py
@@ -1,8 +1,7 @@
-'''tldextract __main__.'''
+"""tldextract __main__."""
 
 
 from .cli import main
 
-
-if __name__ == '__main__':
+if __name__ == "__main__":
     main()
diff --git a/tldextract/cache.py b/tldextract/cache.py
index 3fb659c..6939140 100644
--- a/tldextract/cache.py
+++ b/tldextract/cache.py
@@ -1,4 +1,4 @@
-"""Helpers """
+"""Helpers."""
 import errno
 import hashlib
 import json
@@ -7,17 +7,30 @@ import os
 import os.path
 import sys
 from hashlib import md5
+from typing import (
+    Callable,
+    Dict,
+    Hashable,
+    Iterable,
+    Optional,
+    TypeVar,
+    Union,
+    cast,
+)
 
 from filelock import FileLock
+import requests
 
 LOG = logging.getLogger(__name__)
 
 _DID_LOG_UNABLE_TO_CACHE = False
 
+T = TypeVar("T")  # pylint: disable=invalid-name
 
-def get_pkg_unique_identifier():
+
+def get_pkg_unique_identifier() -> str:
     """
-    Generate an identifier unique to the python version, tldextract version, and python instance
+    Generate an identifier unique to the python version, tldextract version, and python instance.
 
     This will prevent interference between virtualenvs and issues that might arise when installing
     a new version of tldextract
@@ -46,9 +59,9 @@ def get_pkg_unique_identifier():
     return pkg_identifier
 
 
-def get_cache_dir():
+def get_cache_dir() -> str:
     """
-    Get a cache dir that we have permission to write to
+    Get a cache dir that we have permission to write to.
 
     Try to follow the XDG standard, but if that doesn't work fallback to the package directory
     http://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html
@@ -73,9 +86,9 @@ def get_cache_dir():
 
 
 class DiskCache:
-    """Disk _cache that only works for jsonable values"""
+    """Disk _cache that only works for jsonable values."""
 
-    def __init__(self, cache_dir, lock_timeout=20):
+    def __init__(self, cache_dir: Optional[str], lock_timeout: int = 20):
         self.enabled = bool(cache_dir)
         self.cache_dir = os.path.expanduser(str(cache_dir) or "")
         self.lock_timeout = lock_timeout
@@ -83,7 +96,7 @@ class DiskCache:
         # combined with a call to `.clear()` wont wipe someones hard drive
         self.file_ext = ".tldextract.json"
 
-    def get(self, namespace, key):
+    def get(self, namespace: str, key: Union[str, Dict[str, Hashable]]) -> object:
         """Retrieve a value from the disk cache"""
         if not self.enabled:
             raise KeyError("Cache is disabled")
@@ -92,34 +105,35 @@ class DiskCache:
         if not os.path.isfile(cache_filepath):
             raise KeyError("namespace: " + namespace + " key: " + repr(key))
         try:
-            with open(cache_filepath) as cache_file:  # pylint: disable=unspecified-encoding
+            # pylint: disable-next=unspecified-encoding
+            with open(cache_filepath) as cache_file:
                 return json.load(cache_file)
         except (OSError, ValueError) as exc:
             LOG.error("error reading TLD cache file %s: %s", cache_filepath, exc)
-            raise KeyError(  # pylint: disable=raise-missing-from
-                "namespace: " + namespace + " key: " + repr(key)
-            )
+            raise KeyError("namespace: " + namespace + " key: " + repr(key)) from None
 
-    def set(self, namespace, key, value):
-        """Set a value in the disk cache"""
+    def set(
+        self, namespace: str, key: Union[str, Dict[str, Hashable]], value: object
+    ) -> None:
+        """Set a value in the disk cache."""
         if not self.enabled:
-            return False
+            return
+
         cache_filepath = self._key_to_cachefile_path(namespace, key)
 
         try:
             _make_dir(cache_filepath)
-            with open(cache_filepath, "w") as cache_file:  # pylint: disable=unspecified-encoding
+            # pylint: disable-next=unspecified-encoding
+            with open(cache_filepath, "w") as cache_file:
                 json.dump(value, cache_file)
         except OSError as ioe:
             global _DID_LOG_UNABLE_TO_CACHE  # pylint: disable=global-statement
             if not _DID_LOG_UNABLE_TO_CACHE:
                 LOG.warning(
-                    (
-                        "unable to cache %s.%s in %s. This could refresh the "
-                        "Public Suffix List over HTTP every app startup. "
-                        "Construct your `TLDExtract` with a writable `cache_dir` or "
-                        "set `cache_dir=False` to silence this warning. %s"
-                    ),
+                    "unable to cache %s.%s in %s. This could refresh the "
+                    "Public Suffix List over HTTP every app startup. "
+                    "Construct your `TLDExtract` with a writable `cache_dir` or "
+                    "set `cache_dir=None` to silence this warning. %s",
                     namespace,
                     key,
                     cache_filepath,
@@ -127,10 +141,8 @@ class DiskCache:
                 )
                 _DID_LOG_UNABLE_TO_CACHE = True
 
-        return None
-
-    def clear(self):
-        """Clear the disk cache"""
+    def clear(self) -> None:
+        """Clear the disk cache."""
         for root, _, files in os.walk(self.cache_dir):
             for filename in files:
                 if filename.endswith(self.file_ext) or filename.endswith(
@@ -146,7 +158,9 @@ class DiskCache:
                         if exc.errno != errno.ENOENT:
                             raise
 
-    def _key_to_cachefile_path(self, namespace, key):
+    def _key_to_cachefile_path(
+        self, namespace: str, key: Union[str, Dict[str, Hashable]]
+    ) -> str:
         namespace_path = os.path.join(self.cache_dir, namespace)
         hashed_key = _make_cache_key(key)
 
@@ -154,8 +168,14 @@ class DiskCache:
 
         return cache_path
 
-    def run_and_cache(self, func, namespace, kwargs, hashed_argnames):
-        """Get a url but cache the response"""
+    def run_and_cache(
+        self,
+        func: Callable[..., T],
+        namespace: str,
+        kwargs: Dict[str, Hashable],
+        hashed_argnames: Iterable[str],
+    ) -> T:
+        """Get a url but cache the response."""
         if not self.enabled:
             return func(**kwargs)
 
@@ -168,12 +188,10 @@ class DiskCache:
             global _DID_LOG_UNABLE_TO_CACHE  # pylint: disable=global-statement
             if not _DID_LOG_UNABLE_TO_CACHE:
                 LOG.warning(
-                    (
-                        "unable to cache %s.%s in %s. This could refresh the "
-                        "Public Suffix List over HTTP every app startup. "
-                        "Construct your `TLDExtract` with a writable `cache_dir` or "
-                        "set `cache_dir=False` to silence this warning. %s"
-                    ),
+                    "unable to cache %s.%s in %s. This could refresh the "
+                    "Public Suffix List over HTTP every app startup. "
+                    "Construct your `TLDExtract` with a writable `cache_dir` or "
+                    "set `cache_dir=None` to silence this warning. %s",
                     namespace,
                     key_args,
                     cache_filepath,
@@ -183,17 +201,21 @@ class DiskCache:
 
             return func(**kwargs)
 
+        # Disable lint of 3rd party (see also https://github.com/tox-dev/py-filelock/issues/102)
+        # pylint: disable-next=abstract-class-instantiated
         with FileLock(lock_path, timeout=self.lock_timeout):
             try:
-                result = self.get(namespace=namespace, key=key_args)
+                result = cast(T, self.get(namespace=namespace, key=key_args))
             except KeyError:
                 result = func(**kwargs)
-                self.set(namespace="urls", key=key_args, value=result)
+                self.set(namespace=namespace, key=key_args, value=result)
 
             return result
 
-    def cached_fetch_url(self, session, url, timeout):
-        """Get a url but cache the response"""
+    def cached_fetch_url(
+        self, session: requests.Session, url: str, timeout: Union[float, int, None]
+    ) -> str:
+        """Get a url but cache the response."""
         return self.run_and_cache(
             func=_fetch_url,
             namespace="urls",
@@ -202,8 +224,7 @@ class DiskCache:
         )
 
 
-def _fetch_url(session, url, timeout):
-
+def _fetch_url(session: requests.Session, url: str, timeout: Optional[int]) -> str:
     response = session.get(url, timeout=timeout)
     response.raise_for_status()
     text = response.text
@@ -214,17 +235,13 @@ def _fetch_url(session, url, timeout):
     return text
 
 
-def _make_cache_key(inputs):
+def _make_cache_key(inputs: Union[str, Dict[str, Hashable]]) -> str:
     key = repr(inputs)
-    try:
-        key = md5(key).hexdigest()
-    except TypeError:
-        key = md5(key.encode("utf8")).hexdigest()
-    return key
+    return md5(key.encode("utf8")).hexdigest()
 
 
-def _make_dir(filename):
-    """Make a directory if it doesn't already exist"""
+def _make_dir(filename: str) -> None:
+    """Make a directory if it doesn't already exist."""
     if not os.path.exists(os.path.dirname(filename)):
         try:
             os.makedirs(os.path.dirname(filename))
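
Besides the type annotations, note the fix in run_and_cache above: results are now stored under the namespace that was passed in, rather than always under "urls". A rough, hypothetical sketch of how this internal helper is used (DiskCache is not public API; the cache directory and the wrapped function here are made up):

    from tldextract.cache import DiskCache

    cache = DiskCache(cache_dir="/tmp/tldextract-cache-demo")  # hypothetical, writable path

    def expensive_lookup(name: str) -> str:
        # Stand-in for a slow operation whose result is JSON-serializable.
        return name.upper()

    # The first call runs the function and writes a *.tldextract.json file;
    # later calls with the same hashed arguments read the value back from disk.
    value = cache.run_and_cache(
        func=expensive_lookup,
        namespace="demo",
        kwargs={"name": "example"},
        hashed_argnames=["name"],
    )
    print(value)  # 'EXAMPLE'
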
diff --git a/tldextract/cli.py b/tldextract/cli.py
index 0d4f043..59d452a 100644
--- a/tldextract/cli.py
+++ b/tldextract/cli.py
@@ -1,16 +1,18 @@
-"""tldextract CLI"""
+"""tldextract CLI."""
 
 
 import argparse
 import logging
+import os.path
+import pathlib
 import sys
 
 from ._version import version as __version__
 from .tldextract import TLDExtract
 
 
-def main():
-    """tldextract CLI main command."""
+def main() -> None:
+    """Tldextract CLI main command."""
     logging.basicConfig()
 
     parser = argparse.ArgumentParser(
@@ -31,25 +33,52 @@ def main():
         action="store_true",
         help="force fetch the latest TLD definitions",
     )
+    parser.add_argument(
+        "--suffix_list_url",
+        action="append",
+        required=False,
+        help="use an alternate URL or local file for TLD definitions",
+    )
     parser.add_argument(
         "-c", "--cache_dir", help="use an alternate TLD definition caching folder"
     )
     parser.add_argument(
         "-p",
+        "--include_psl_private_domains",
         "--private_domains",
         default=False,
         action="store_true",
         help="Include private domains",
     )
+    parser.add_argument(
+        "--no_fallback_to_snapshot",
+        default=True,
+        action="store_false",
+        dest="fallback_to_snapshot",
+        help="Don't fall back to the package's snapshot of the suffix list",
+    )
 
     args = parser.parse_args()
 
     obj_kwargs = {
-        "include_psl_private_domains": args.private_domains,
+        "include_psl_private_domains": args.include_psl_private_domains,
+        "fallback_to_snapshot": args.fallback_to_snapshot,
     }
+
     if args.cache_dir:
         obj_kwargs["cache_dir"] = args.cache_dir
 
+    if args.suffix_list_url is not None:
+        suffix_list_urls = []
+        for source in args.suffix_list_url:
+            if os.path.isfile(source):
+                as_path_uri = pathlib.Path(os.path.abspath(source)).as_uri()
+                suffix_list_urls.append(as_path_uri)
+            else:
+                suffix_list_urls.append(source)
+
+        obj_kwargs["suffix_list_urls"] = suffix_list_urls
+
     tld_extract = TLDExtract(**obj_kwargs)
 
     if args.update:
@@ -57,7 +86,6 @@ def main():
     elif not args.input:
         parser.print_usage()
         sys.exit(1)
-        return
 
     for i in args.input:
         print(" ".join(tld_extract(i)))
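
The new --suffix_list_url flag can be given multiple times, and the code above turns any argument that names a local file into a file:// URI before constructing TLDExtract. A small sketch of the same conversion done directly in Python (the path is hypothetical):

    import os.path
    import pathlib

    from tldextract import TLDExtract

    # Hypothetical local copy of the Public Suffix List.
    source = "/tmp/public_suffix_list.dat"
    if os.path.isfile(source):
        # Same conversion the CLI performs: local file path -> file:// URI.
        source = pathlib.Path(os.path.abspath(source)).as_uri()

    extractor = TLDExtract(suffix_list_urls=[source], fallback_to_snapshot=False)
    print(extractor("http://forums.bbc.co.uk/"))
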
diff --git a/tldextract/py.typed b/tldextract/py.typed
new file mode 100644
index 0000000..e69de29
diff --git a/tldextract/remote.py b/tldextract/remote.py
index 08e8eb1..60b8629 100644
--- a/tldextract/remote.py
+++ b/tldextract/remote.py
@@ -1,19 +1,51 @@
-'tldextract helpers for testing and fetching remote resources.'
+"""tldextract helpers for testing and fetching remote resources."""
 
 import re
 import socket
-
 from urllib.parse import scheme_chars
 
-
 IP_RE = re.compile(
-    r'^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$')  # pylint: disable=line-too-long
-
-SCHEME_RE = re.compile(r'^([' + scheme_chars + ']+:)?//')
-
-
-def looks_like_ip(maybe_ip):
-    """Does the given str look like an IP address?"""
+    r"^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.)"
+    r"{3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$"
+)
+
+scheme_chars_set = set(scheme_chars)
+
+
+def lenient_netloc(url: str) -> str:
+    """Extract the netloc of a URL-like string.
+
+    Similar to the netloc attribute returned by
+    urllib.parse.{urlparse,urlsplit}, but extract more leniently, without
+    raising errors.
+    """
+    return (
+        _schemeless_url(url)
+        .partition("/")[0]
+        .partition("?")[0]
+        .partition("#")[0]
+        .rpartition("@")[-1]
+        .partition(":")[0]
+        .strip()
+        .rstrip(".\u3002\uff0e\uff61")
+    )
+
+
+def _schemeless_url(url: str) -> str:
+    double_slashes_start = url.find("//")
+    if double_slashes_start == 0:
+        return url[2:]
+    if (
+        double_slashes_start < 2
+        or not url[double_slashes_start - 1] == ":"
+        or set(url[: double_slashes_start - 1]) - scheme_chars_set
+    ):
+        return url
+    return url[double_slashes_start + 2 :]
+
+
+def looks_like_ip(maybe_ip: str) -> bool:
+    """Check whether the given str looks like an IP address."""
     if not maybe_ip[0].isdigit():
         return False
 
@@ -23,7 +55,7 @@ def looks_like_ip(maybe_ip):
     except (AttributeError, UnicodeError):
         if IP_RE.match(maybe_ip):
             return True
-    except socket.error:
+    except OSError:
         pass
 
     return False
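
remote.py drops the SCHEME_RE regex in favor of lenient_netloc() and _schemeless_url(), which peel off the scheme, path, query, fragment, userinfo, port, and trailing dots (including the fullwidth and ideographic dot variants) without ever raising. A quick sketch of the behavior implied by the code above (these are internal helpers; the URL is made up):

    from tldextract.remote import lenient_netloc, looks_like_ip

    print(lenient_netloc("https://user:pass@forums.news.cnn.com:8080/a/b?q=1#frag"))
    # 'forums.news.cnn.com'

    print(lenient_netloc("forums.news.cnn.com."))  # trailing dot stripped
    # 'forums.news.cnn.com'

    print(looks_like_ip("127.0.0.1"))  # True
    print(looks_like_ip("cnn.com"))    # False
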
diff --git a/tldextract/suffix_list.py b/tldextract/suffix_list.py
index a4371c8..de0fbb9 100644
--- a/tldextract/suffix_list.py
+++ b/tldextract/suffix_list.py
@@ -1,11 +1,14 @@
-"tldextract helpers for testing and fetching remote resources."
+"""tldextract helpers for testing and fetching remote resources."""
 
 import logging
 import pkgutil
 import re
+from typing import List, Sequence, Tuple, Union, cast
 
 import requests
-from requests_file import FileAdapter
+from requests_file import FileAdapter  # type: ignore[import]
+
+from .cache import DiskCache
 
 LOG = logging.getLogger("tldextract")
 
@@ -14,14 +17,19 @@ PUBLIC_PRIVATE_SUFFIX_SEPARATOR = "// ===BEGIN PRIVATE DOMAINS==="
 
 
 class SuffixListNotFound(LookupError):
-    """A recoverable error while looking up a suffix list. Recoverable because
-    you can specify backups, or use this library's bundled snapshot."""
-
+    """A recoverable error while looking up a suffix list.
 
-def find_first_response(cache, urls, cache_fetch_timeout=None):
-    """Decode the first successfully fetched URL, from UTF-8 encoding to
-    Python unicode.
+    Recoverable because you can specify backups, or use this library's bundled
+    snapshot.
     """
+
+
+def find_first_response(
+    cache: DiskCache,
+    urls: Sequence[str],
+    cache_fetch_timeout: Union[float, int, None] = None,
+) -> str:
+    """Decode the first successfully fetched URL, from UTF-8 encoding to Python unicode."""
     with requests.Session() as session:
         session.mount("file://", FileAdapter())
 
@@ -33,14 +41,13 @@ def find_first_response(cache, urls, cache_fetch_timeout=None):
             except requests.exceptions.RequestException:
                 LOG.exception("Exception reading Public Suffix List url %s", url)
     raise SuffixListNotFound(
-        "No Public Suffix List found. Consider using a mirror or constructing "
-        "your TLDExtract with `suffix_list_urls=None`."
+        "No remote Public Suffix List found. Consider using a mirror, or avoid this"
+        " fetch by constructing your TLDExtract with `suffix_list_urls=()`."
     )
 
 
-def extract_tlds_from_suffix_list(suffix_list_text):
-    """Parse the raw suffix list text for its different designations of
-    suffixes."""
+def extract_tlds_from_suffix_list(suffix_list_text: str) -> Tuple[List[str], List[str]]:
+    """Parse the raw suffix list text for its different designations of suffixes."""
     public_text, _, private_text = suffix_list_text.partition(
         PUBLIC_PRIVATE_SUFFIX_SEPARATOR
     )
@@ -50,7 +57,12 @@ def extract_tlds_from_suffix_list(suffix_list_text):
     return public_tlds, private_tlds
 
 
-def get_suffix_lists(cache, urls, cache_fetch_timeout, fallback_to_snapshot):
+def get_suffix_lists(
+    cache: DiskCache,
+    urls: Sequence[str],
+    cache_fetch_timeout: Union[float, int, None],
+    fallback_to_snapshot: bool,
+) -> Tuple[List[str], List[str]]:
     """Fetch, parse, and cache the suffix lists"""
     return cache.run_and_cache(
         func=_get_suffix_lists,
@@ -65,16 +77,22 @@ def get_suffix_lists(cache, urls, cache_fetch_timeout, fallback_to_snapshot):
     )
 
 
-def _get_suffix_lists(cache, urls, cache_fetch_timeout, fallback_to_snapshot):
+def _get_suffix_lists(
+    cache: DiskCache,
+    urls: Sequence[str],
+    cache_fetch_timeout: Union[float, int, None],
+    fallback_to_snapshot: bool,
+) -> Tuple[List[str], List[str]]:
     """Fetch, parse, and cache the suffix lists"""
 
     try:
         text = find_first_response(cache, urls, cache_fetch_timeout=cache_fetch_timeout)
     except SuffixListNotFound as exc:
         if fallback_to_snapshot:
-            text = pkgutil.get_data("tldextract", ".tld_set_snapshot")
-            if not isinstance(text, str):
-                text = str(text, "utf-8")
+            maybe_pkg_data = pkgutil.get_data("tldextract", ".tld_set_snapshot")
+            # package maintainers guarantee file is included
+            pkg_data = cast(bytes, maybe_pkg_data)
+            text = pkg_data.decode("utf-8")
         else:
             raise exc
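
The reworded SuffixListNotFound message above points at the supported way to avoid network fetches entirely: pass an empty suffix_list_urls, so nothing is requested over HTTP and the bundled .tld_set_snapshot is decoded and parsed instead (as long as fallback_to_snapshot stays at its default of True). A minimal sketch:

    from tldextract import TLDExtract

    # No URLs to try, so no HTTP request is made; the packaged snapshot is used.
    offline_extractor = TLDExtract(suffix_list_urls=(), fallback_to_snapshot=True)
    print(offline_extractor("http://forums.bbc.co.uk/"))
    # e.g. ExtractResult(subdomain='forums', domain='bbc', suffix='co.uk')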
 
diff --git a/tldextract/tldextract.py b/tldextract/tldextract.py
index 536f056..1303dd8 100644
--- a/tldextract/tldextract.py
+++ b/tldextract/tldextract.py
@@ -1,6 +1,6 @@
-# -*- coding: utf-8 -*-
-"""`tldextract` accurately separates the gTLD or ccTLD (generic or country code
-top-level domain) from the registered domain and subdomains of a URL.
+"""`tldextract` accurately separates a URL's subdomain, domain, and public suffix.
+
+It does this via the Public Suffix List (PSL).
 
     >>> import tldextract
 
@@ -49,15 +49,27 @@ or suffix were found:
     '127.0.0.1'
 """
 
-import collections
+from __future__ import annotations
+
 import logging
 import os
+import urllib.parse
 from functools import wraps
+from typing import (
+    Collection,
+    Dict,
+    FrozenSet,
+    List,
+    NamedTuple,
+    Optional,
+    Sequence,
+    Union,
+)
 
 import idna
 
 from .cache import DiskCache, get_cache_dir
-from .remote import IP_RE, SCHEME_RE, looks_like_ip
+from .remote import IP_RE, lenient_netloc, looks_like_ip
 from .suffix_list import get_suffix_lists
 
 LOG = logging.getLogger("tldextract")
@@ -71,14 +83,15 @@ PUBLIC_SUFFIX_LIST_URLS = (
 )
 
 
-class ExtractResult(collections.namedtuple("ExtractResult", "subdomain domain suffix")):
+class ExtractResult(NamedTuple):
     """namedtuple of a URL's subdomain, domain, and suffix."""
 
-    # Necessary for __dict__ member to get populated in Python 3+
-    __slots__ = ()
+    subdomain: str
+    domain: str
+    suffix: str
 
     @property
-    def registered_domain(self):
+    def registered_domain(self) -> str:
         """
         Joins the domain and suffix fields with a dot, if they're both set.
 
@@ -87,12 +100,12 @@ class ExtractResult(collections.namedtuple("ExtractResult", "subdomain domain su
         >>> extract('http://localhost:8080').registered_domain
         ''
         """
-        if self.domain and self.suffix:
-            return self.domain + "." + self.suffix
+        if self.suffix and self.domain:
+            return f"{self.domain}.{self.suffix}"
         return ""
 
     @property
-    def fqdn(self):
+    def fqdn(self) -> str:
         """
         Returns a Fully Qualified Domain Name, if there is a proper domain/suffix.
 
@@ -101,15 +114,16 @@ class ExtractResult(collections.namedtuple("ExtractResult", "subdomain domain su
         >>> extract('http://localhost:8080').fqdn
         ''
         """
-        if self.domain and self.suffix:
-            # self is the namedtuple (subdomain domain suffix)
+        if self.suffix and self.domain:
+            # Disable bogus lint error (https://github.com/PyCQA/pylint/issues/2568)
+            # pylint: disable-next=not-an-iterable
             return ".".join(i for i in self if i)
         return ""
 
     @property
-    def ipv4(self):
+    def ipv4(self) -> str:
         """
-        Returns the ipv4 if that is what the presented domain/url is
+        Returns the ipv4 if that is what the presented domain/url is.
 
         >>> extract('http://127.0.0.1/path/to/file').ipv4
         '127.0.0.1'
@@ -124,31 +138,27 @@ class ExtractResult(collections.namedtuple("ExtractResult", "subdomain domain su
 
 
 class TLDExtract:
-    """A callable for extracting, subdomain, domain, and suffix components from
-    a URL."""
+    """A callable for extracting, subdomain, domain, and suffix components from a URL."""
 
     # TODO: Agreed with Pylint: too-many-arguments
     def __init__(  # pylint: disable=too-many-arguments
         self,
-        cache_dir=get_cache_dir(),
-        suffix_list_urls=PUBLIC_SUFFIX_LIST_URLS,
-        fallback_to_snapshot=True,
-        include_psl_private_domains=False,
-        extra_suffixes=(),
-        cache_fetch_timeout=CACHE_TIMEOUT,
-    ):
-        """
-        Constructs a callable for extracting subdomain, domain, and suffix
-        components from a URL.
-
-        Upon calling it, it first checks for a JSON in `cache_dir`.
-        By default, the `cache_dir` will live in the tldextract directory.
-
-        You can disable the caching functionality of this module  by setting `cache_dir` to False.
+        cache_dir: Optional[str] = get_cache_dir(),
+        suffix_list_urls: Sequence[str] = PUBLIC_SUFFIX_LIST_URLS,
+        fallback_to_snapshot: bool = True,
+        include_psl_private_domains: bool = False,
+        extra_suffixes: Sequence[str] = (),
+        cache_fetch_timeout: Union[str, float, None] = CACHE_TIMEOUT,
+    ) -> None:
+        """Construct a callable for extracting subdomain, domain, and suffix components from a URL.
+
+        Upon calling it, it first checks for a JSON in `cache_dir`. By default,
+        the `cache_dir` will live in the tldextract directory. You can disable
+        the caching functionality of this module by setting `cache_dir` to `None`.
 
         If the cached version does not exist (such as on the first run), HTTP request the URLs in
         `suffix_list_urls` in order, until one returns public suffix list data. To disable HTTP
-        requests, set this to something falsy.
+        requests, set this to an empty sequence.
 
         The default list of URLs point to the latest version of the Mozilla Public Suffix List and
         its mirror, but any similar document could be specified. Local files can be specified by
@@ -193,52 +203,80 @@ class TLDExtract:
 
         self.include_psl_private_domains = include_psl_private_domains
         self.extra_suffixes = extra_suffixes
-        self._extractor = None
+        self._extractor: Optional[_PublicSuffixListTLDExtractor] = None
 
-        self.cache_fetch_timeout = cache_fetch_timeout
+        self.cache_fetch_timeout = (
+            float(cache_fetch_timeout)
+            if isinstance(cache_fetch_timeout, str)
+            else cache_fetch_timeout
+        )
         self._cache = DiskCache(cache_dir)
-        if isinstance(self.cache_fetch_timeout, str):
-            self.cache_fetch_timeout = float(self.cache_fetch_timeout)
 
-    def __call__(self, url, include_psl_private_domains=None):
-        """
-        Takes a string URL and splits it into its subdomain, domain, and
-        suffix (effective TLD, gTLD, ccTLD, etc.) component.
+    def __call__(
+        self, url: str, include_psl_private_domains: bool | None = None
+    ) -> ExtractResult:
+        """Alias for `extract_str`."""
+        return self.extract_str(url, include_psl_private_domains)
+
+    def extract_str(
+        self, url: str, include_psl_private_domains: bool | None = None
+    ) -> ExtractResult:
+        """Take a string URL and splits it into its subdomain, domain, and suffix components.
 
-        >>> extract = TLDExtract()
-        >>> extract('http://forums.news.cnn.com/')
+        I.e. its effective TLD, gTLD, ccTLD, etc. components.
+
+        >>> extractor = TLDExtract()
+        >>> extractor.extract_str('http://forums.news.cnn.com/')
         ExtractResult(subdomain='forums.news', domain='cnn', suffix='com')
-        >>> extract('http://forums.bbc.co.uk/')
+        >>> extractor.extract_str('http://forums.bbc.co.uk/')
         ExtractResult(subdomain='forums', domain='bbc', suffix='co.uk')
         """
+        return self._extract_netloc(lenient_netloc(url), include_psl_private_domains)
 
-        netloc = (
-            SCHEME_RE.sub("", url)
-            .partition("/")[0]
-            .partition("?")[0]
-            .partition("#")[0]
-            .split("@")[-1]
-            .partition(":")[0]
-            .strip()
-            .rstrip(".")
-        )
+    def extract_urllib(
+        self,
+        url: Union[urllib.parse.ParseResult, urllib.parse.SplitResult],
+        include_psl_private_domains: Optional[bool] = None,
+    ) -> ExtractResult:
+        """Take the output of urllib.parse URL parsing methods and further splits the parsed URL.
+
+        Splits the parsed URL into its subdomain, domain, and suffix
+        components, i.e. its effective TLD, gTLD, ccTLD, etc. components.
+
+        This method is like `extract_str` but faster, as the string's domain
+        name has already been parsed.
 
-        labels = netloc.split(".")
+        >>> extractor = TLDExtract()
+        >>> extractor.extract_urllib(urllib.parse.urlsplit('http://forums.news.cnn.com/'))
+        ExtractResult(subdomain='forums.news', domain='cnn', suffix='com')
+        >>> extractor.extract_urllib(urllib.parse.urlsplit('http://forums.bbc.co.uk/'))
+        ExtractResult(subdomain='forums', domain='bbc', suffix='co.uk')
+        """
+        return self._extract_netloc(url.netloc, include_psl_private_domains)
+
+    def _extract_netloc(
+        self, netloc: str, include_psl_private_domains: Optional[bool]
+    ) -> ExtractResult:
+        labels = (
+            netloc.replace("\u3002", "\u002e")
+            .replace("\uff0e", "\u002e")
+            .replace("\uff61", "\u002e")
+            .split(".")
+        )
 
-        translations = [_decode_punycode(label) for label in labels]
         suffix_index = self._get_tld_extractor().suffix_index(
-            translations, include_psl_private_domains=include_psl_private_domains
+            labels, include_psl_private_domains=include_psl_private_domains
         )
 
-        suffix = ".".join(labels[suffix_index:])
-        if not suffix and netloc and looks_like_ip(netloc):
+        if suffix_index == len(labels) and netloc and looks_like_ip(netloc):
             return ExtractResult("", netloc, "")
 
-        subdomain = ".".join(labels[: suffix_index - 1]) if suffix_index else ""
+        suffix = ".".join(labels[suffix_index:]) if suffix_index != len(labels) else ""
+        subdomain = ".".join(labels[: suffix_index - 1]) if suffix_index >= 2 else ""
         domain = labels[suffix_index - 1] if suffix_index else ""
         return ExtractResult(subdomain, domain, suffix)
 
-    def update(self, fetch_now=False):
+    def update(self, fetch_now: bool = False) -> None:
         """Force fetch the latest suffix list definitions."""
         self._extractor = None
         self._cache.clear()
@@ -246,24 +284,25 @@ class TLDExtract:
             self._get_tld_extractor()
 
     @property
-    def tlds(self):
+    def tlds(self) -> List[str]:
         """
-        Returns the list of tld's used by default
+        Returns the list of tld's used by default.
 
         This will vary based on `include_psl_private_domains` and `extra_suffixes`
         """
         return list(self._get_tld_extractor().tlds())
 
-    def _get_tld_extractor(self):
-        """Get or compute this object's TLDExtractor. Looks up the TLDExtractor
-        in roughly the following order, based on the settings passed to
-        __init__:
+    def _get_tld_extractor(self) -> _PublicSuffixListTLDExtractor:
+        """Get or compute this object's TLDExtractor.
+
+        Looks up the TLDExtractor in roughly the following order, based on the
+        settings passed to __init__:
 
         1. Memoized on `self`
         2. Local system _cache file
         3. Remote PSL, over HTTP
-        4. Bundled PSL snapshot file"""
-
+        4. Bundled PSL snapshot file
+        """
         if self._extractor:
             return self._extractor
 
@@ -289,25 +328,59 @@ class TLDExtract:
 TLD_EXTRACTOR = TLDExtract()
 
 
+class Trie:
+    """Trie for storing eTLDs with their labels in reverse-order."""
+
+    def __init__(self, matches: Optional[Dict] = None, end: bool = False) -> None:
+        self.matches = matches if matches else {}
+        self.end = end
+
+    @staticmethod
+    def create(suffixes: Collection[str]) -> Trie:
+        """Create a Trie from a list of suffixes and return its root node."""
+        root_node = Trie()
+
+        for suffix in suffixes:
+            suffix_labels = suffix.split(".")
+            suffix_labels.reverse()
+            root_node.add_suffix(suffix_labels)
+
+        return root_node
+
+    def add_suffix(self, labels: List[str]) -> None:
+        """Append a suffix's labels to this Trie node."""
+        node = self
+
+        for label in labels:
+            if label not in node.matches:
+                node.matches[label] = Trie()
+            node = node.matches[label]
+
+        node.end = True
+
+
 @wraps(TLD_EXTRACTOR.__call__)
-def extract(
-    url, include_psl_private_domains=False
-):  # pylint: disable=missing-function-docstring
+def extract(  # pylint: disable=missing-function-docstring
+    url: str, include_psl_private_domains: Optional[bool] = False
+) -> ExtractResult:
     return TLD_EXTRACTOR(url, include_psl_private_domains=include_psl_private_domains)
 
 
 @wraps(TLD_EXTRACTOR.update)
-def update(*args, **kwargs):  # pylint: disable=missing-function-docstring
+# pylint: disable-next=missing-function-docstring
+def update(*args, **kwargs):  # type: ignore[no-untyped-def]
     return TLD_EXTRACTOR.update(*args, **kwargs)
 
 
 class _PublicSuffixListTLDExtractor:
-    """Wrapper around this project's main algo for PSL
-    lookups.
-    """
+    """Wrapper around this project's main algo for PSL lookups."""
 
     def __init__(
-        self, public_tlds, private_tlds, extra_tlds, include_psl_private_domains=False
+        self,
+        public_tlds: List[str],
+        private_tlds: List[str],
+        extra_tlds: List[str],
+        include_psl_private_domains: bool = False,
     ):
         # set the default value
         self.include_psl_private_domains = include_psl_private_domains
@@ -315,8 +388,12 @@ class _PublicSuffixListTLDExtractor:
         self.private_tlds = private_tlds
         self.tlds_incl_private = frozenset(public_tlds + private_tlds + extra_tlds)
         self.tlds_excl_private = frozenset(public_tlds + extra_tlds)
+        self.tlds_incl_private_trie = Trie.create(self.tlds_incl_private)
+        self.tlds_excl_private_trie = Trie.create(self.tlds_excl_private)
 
-    def tlds(self, include_psl_private_domains=None):
+    def tlds(
+        self, include_psl_private_domains: Optional[bool] = None
+    ) -> FrozenSet[str]:
         """Get the currently filtered list of suffixes."""
         if include_psl_private_domains is None:
             include_psl_private_domains = self.include_psl_private_domains
@@ -327,34 +404,50 @@ class _PublicSuffixListTLDExtractor:
             else self.tlds_excl_private
         )
 
-    def suffix_index(self, lower_spl, include_psl_private_domains=None):
-        """Returns the index of the first suffix label.
-        Returns len(spl) if no suffix is found
-        """
-        tlds = self.tlds(include_psl_private_domains)
-        length = len(lower_spl)
-        for i in range(length):
-            maybe_tld = ".".join(lower_spl[i:])
-            exception_tld = "!" + maybe_tld
-            if exception_tld in tlds:
-                return i + 1
-
-            if maybe_tld in tlds:
-                return i
-
-            wildcard_tld = "*." + ".".join(lower_spl[i + 1 :])
-            if wildcard_tld in tlds:
-                return i
-
-        return length
+    def suffix_index(
+        self, spl: List[str], include_psl_private_domains: Optional[bool] = None
+    ) -> int:
+        """Return the index of the first suffix label.
 
+        Returns len(spl) if no suffix is found.
+        """
+        if include_psl_private_domains is None:
+            include_psl_private_domains = self.include_psl_private_domains
 
-def _decode_punycode(label):
+        node = (
+            self.tlds_incl_private_trie
+            if include_psl_private_domains
+            else self.tlds_excl_private_trie
+        )
+        i = len(spl)
+        j = i
+        for label in reversed(spl):
+            decoded_label = _decode_punycode(label)
+            if decoded_label in node.matches:
+                j -= 1
+                if node.matches[decoded_label].end:
+                    i = j
+                node = node.matches[decoded_label]
+                continue
+
+            is_wildcard = "*" in node.matches
+            if is_wildcard:
+                is_wildcard_exception = "!" + decoded_label in node.matches
+                if is_wildcard_exception:
+                    return j
+                return j - 1
+
+            break
+
+        return i
+
+
+def _decode_punycode(label: str) -> str:
     lowered = label.lower()
     looks_like_puny = lowered.startswith("xn--")
     if looks_like_puny:
         try:
-            return idna.decode(label.encode("ascii")).lower()
+            return idna.decode(lowered)
         except (UnicodeError, IndexError):
             pass
     return lowered
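
The most interesting change in tldextract.py is the new Trie: suffixes are stored with their labels reversed, and suffix_index() now walks a hostname's labels right to left through that trie, handling wildcard ("*") and exception ("!") rules as it goes, instead of repeatedly joining candidate strings. A standalone sketch of the same idea with a toy rule set (not the real PSL, and simplified: no punycode decoding and no public/private split):

    from tldextract.tldextract import Trie

    # Toy rules: a plain suffix, a two-label suffix, a wildcard, and a wildcard exception.
    trie = Trie.create(["com", "co.uk", "*.ck", "!www.ck"])

    def toy_suffix_index(labels):
        """Right-to-left walk, mirroring the shape of suffix_index() above."""
        node = trie
        i = j = len(labels)
        for label in reversed(labels):
            if label in node.matches:
                j -= 1
                if node.matches[label].end:
                    i = j
                node = node.matches[label]
                continue
            if "*" in node.matches:
                return j if "!" + label in node.matches else j - 1
            break
        return i

    print(toy_suffix_index("forums.bbc.co.uk".split(".")))  # 2 -> suffix 'co.uk', domain 'bbc'
    print(toy_suffix_index("anything.ck".split(".")))       # 0 -> wildcard makes the whole name a suffix
    print(toy_suffix_index("www.ck".split(".")))            # 1 -> the !www.ck exception keeps 'www' as the domain
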
diff --git a/tox.ini b/tox.ini
index 4701155..c4b144c 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,5 +1,5 @@
 [tox]
-envlist = py{36,37,38,39,py3},codestyle,lint
+envlist = py{37,38,39,310,311,py3},codestyle,lint,typecheck
 
 [testenv]
 deps =
@@ -13,8 +13,12 @@ deps =
 commands = pytest --pylint {posargs}
 
 [testenv:codestyle]
-deps = pycodestyle
-commands = pycodestyle tldextract tests {posargs}
+deps =
+    black
+    pycodestyle
+commands =
+    pycodestyle tldextract tests {posargs}
+    black --check {posargs:.}
 
 [testenv:lint]
 deps =
@@ -24,7 +28,13 @@ deps =
     responses
 commands = pytest --pylint -m pylint {posargs}
 
-[pycodestyle]
-# E203 - whitespace before; disagrees with PEP8 https://github.com/psf/black/issues/354#issuecomment-397684838
-# E501 - line too long
-ignore = E203,E501
+[testenv:typecheck]
+deps =
+    mypy
+    pytest
+    pytest-gitignore
+    pytest-pylint
+    responses
+    types-filelock
+    types-requests
+commands = mypy --show-error-codes tldextract tests
