New Upstream Release - python-dendropy

Ready changes

Summary

Merged new upstream version: 4.6.0 (was: 4.5.2).

Resulting package

Built on 2023-05-14T01:44 (took 10m26s)

The resulting binary packages can be installed (if you have the apt repository enabled) by running one of:

apt install -t fresh-releases python3-dendropy
apt install -t fresh-releases sumtrees
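
After installation, the upgrade can be spot-checked from Python — a quick check, assuming the python3-dendropy binary package provides the dendropy module for the default python3 interpreter:

python3 -c "import dendropy; print(dendropy.__version__)"  # expected output: 4.6.0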

Lintian Result

Diff

diff --git a/AUTHORS.rst b/AUTHORS.rst
new file mode 100644
index 00000000..48e8c737
--- /dev/null
+++ b/AUTHORS.rst
@@ -0,0 +1,2 @@
+Jeet Sukumaran <jeetsukumaran@gmail.com>
+Mark T. Holder <mtholder@ku.edu>
diff --git a/CHANGES.rst b/CHANGES.rst
index 9ad209bc..279c643c 100644
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -1,3 +1,12 @@
+Release 4.5.2
+-------------
+
+-   Support for user-specified random seed in RaXML wrapper (thanks @NoahAmsel)
+-   *MUCH* faster label lookup (thanks Sam Nicholls / @SamStudio8 !)
+-   Faster birth-death tree generation (thanks @NicolaDM !)
+-   Storage of supplemental NEXUS blocks
+-   Fix type: "PhylogeneticIndependentConstrasts" => "PhylogeneticIndependentContrasts"
+
 Release 4.4.0
 -------------
 
diff --git a/PKG-INFO b/PKG-INFO
index f0716c68..496aa06b 100644
--- a/PKG-INFO
+++ b/PKG-INFO
@@ -1,8 +1,8 @@
 Metadata-Version: 1.1
 Name: DendroPy
-Version: 4.5.2
+Version: 4.6.0
 Summary: A Python library for phylogenetics and phylogenetic computing: reading, writing, simulation, processing and manipulation of phylogenetic trees (phylogenies) and characters.
-Home-page: http://packages.python.org/DendroPy/
+Home-page: http://pypi.org/project/DendroPy//
 Author: Jeet Sukumaran and Mark T. Holder
 Author-email: jeetsukumaran@gmail.com, mtholder@ku.edu
 License: BSD
@@ -10,6 +10,16 @@ Description: .. image:: https://raw.githubusercontent.com/jeetsukumaran/DendroPy
            :align: right
            :alt: DendroPy
         
+        .. image:: https://github.com/jeetsukumaran/DendroPy/actions/workflows/ci.yaml/badge.svg
+           :target: https://github.com/jeetsukumaran/DendroPy/actions/workflows/ci.yaml
+        
+        .. image:: https://img.shields.io/pypi/v/DendroPy.svg
+                :target: https://pypi.org/project/DendroPy/
+        
+        .. image:: https://readthedocs.org/projects/DendroPy/badge/?version=main
+                :target: https://dendropy.readthedocs.io/en/main/?badge=main
+                :alt: Documentation Status
+        
         DendroPy is a Python library for phylogenetic computing.
         It provides classes and functions for the simulation, processing, and
         manipulation of phylogenetic trees and character matrices, and supports the
@@ -27,7 +37,7 @@ Description: .. image:: https://raw.githubusercontent.com/jeetsukumaran/DendroPy
         
         DendroPy is also hosted in the official Python repository:
         
-            http://packages.python.org/DendroPy/
+            http://pypi.org/project/DendroPy//
         
         Requirements and Installation
         =============================
@@ -65,7 +75,7 @@ Description: .. image:: https://raw.githubusercontent.com/jeetsukumaran/DendroPy
         Current Release
         ===============
         
-        The current release of DendroPy is version 4.5.2.
+        The current release of DendroPy is version 4.6.0.
         
         
 Keywords: phylogenetics phylogeny phylogenies phylogeography evolution evolutionary biology systematics coalescent population genetics phyloinformatics bioinformatics
@@ -77,13 +87,11 @@ Classifier: Natural Language :: English
 Classifier: Operating System :: OS Independent
 Classifier: Programming Language :: Python :: 2.7
 Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.1
-Classifier: Programming Language :: Python :: 3.2
-Classifier: Programming Language :: Python :: 3.3
-Classifier: Programming Language :: Python :: 3.4
-Classifier: Programming Language :: Python :: 3.5
 Classifier: Programming Language :: Python :: 3.6
 Classifier: Programming Language :: Python :: 3.7
 Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Classifier: Programming Language :: Python :: 3.11
 Classifier: Programming Language :: Python
 Classifier: Topic :: Scientific/Engineering :: Bio-Informatics
diff --git a/README.rst b/README.rst
index bd726676..b4b391cf 100644
--- a/README.rst
+++ b/README.rst
@@ -2,6 +2,16 @@
    :align: right
    :alt: DendroPy
 
+.. image:: https://github.com/jeetsukumaran/DendroPy/actions/workflows/ci.yaml/badge.svg
+   :target: https://github.com/jeetsukumaran/DendroPy/actions/workflows/ci.yaml
+
+.. image:: https://img.shields.io/pypi/v/DendroPy.svg
+        :target: https://pypi.org/project/DendroPy/
+
+.. image:: https://readthedocs.org/projects/DendroPy/badge/?version=main
+        :target: https://dendropy.readthedocs.io/en/main/?badge=main
+        :alt: Documentation Status
+
 DendroPy is a Python library for phylogenetic computing.
 It provides classes and functions for the simulation, processing, and
 manipulation of phylogenetic trees and character matrices, and supports the
@@ -19,7 +29,7 @@ The primary home page for DendroPy, with detailed tutorials and documentation, i
 
 DendroPy is also hosted in the official Python repository:
 
-    http://packages.python.org/DendroPy/
+    http://pypi.org/project/DendroPy//
 
 Requirements and Installation
 =============================
diff --git a/applications/dendropy-format/dendropy-format b/applications/dendropy-format/dendropy-format
index 0e656e31..729d62be 100755
--- a/applications/dendropy-format/dendropy-format
+++ b/applications/dendropy-format/dendropy-format
@@ -12,7 +12,7 @@ from dendropy.utility import error
 from dendropy.utility import messaging
 
 usage = """\
-cat SRC-FILE | dendropy-format --from [FORMAT] --to [FORMAT] [OPTIONS] > DEST-FILE
+dendropy-format --from [FORMAT] --to [FORMAT] [OPTIONS] <SOURCE-FILE> > DEST-FILE
 """
 
 def convert(args):
@@ -68,6 +68,28 @@ def convert(args):
     if args.output_format == "phylip-strict":
         args.output_format = "phylip"
         write_kwargs["strict"] = True
+    if args.output_format == "nexus" or args.output_format == "newick":
+        if args.unquoted_underscores:
+            write_kwargs["unquoted_underscores"] = True
+    if args.recode_uncertain is not None:
+        operational_state_alphabet = dendropy.DNA_STATE_ALPHABET
+        if args.recode_uncertain == "gap":
+            convert_to = operational_state_alphabet.gap
+        elif args.recode_uncertain == "missing":
+            convert_to = operational_state_alphabet.missing
+        else:
+            raise ValueError(args.recode_uncertain)
+        convert_from = set()
+        for s in operational_state_alphabet._polymorphic_states:
+            convert_from.add(s)
+        for s in operational_state_alphabet._ambiguous_states:
+            convert_from.add(s)
+        for char_matrix in ds.char_matrices:
+            for taxon in char_matrix:
+                seq = char_matrix[taxon]
+                for idx, c in enumerate(seq):
+                    if c in convert_from:
+                        seq[idx] = convert_to
     dest = sys.stdout
     with dest:
         ds.write(
@@ -155,6 +177,21 @@ def main():
                     "phylip-strict",
                     ],
             help="Format of data source.")
+    destination_options.add_argument(
+            "-u", "--unquoted-underscores",
+            action="store_true",
+            default=None,
+            help="[NEXUS/Newick:] Do not quote labels with undescores.",
+            )
+    destination_options.add_argument(
+            "--recode-uncertain",
+            choices=[
+                "missing",
+                "gap",
+                ],
+            default=None,
+            help="Recode ambiguous or uncertain characters as missing ('?') or gaps ('-')",
+            )
     args = parser.parse_args()
     convert(args)
 
diff --git a/applications/sumlabels/sumlabels.py b/applications/sumlabels/sumlabels.py
index c9b03b18..f26f0844 100755
--- a/applications/sumlabels/sumlabels.py
+++ b/applications/sumlabels/sumlabels.py
@@ -54,7 +54,7 @@ def main_cli():
 
     description =  "%s %s %s" % (_program_name, _program_version, _program_subtitle)
 
-    parser = argparse.ArgumentParser(version =_program_version, description=description)
+    parser = argparse.ArgumentParser(description=description)
 
     parser.add_argument(
             "sources",
diff --git a/applications/sumtrees/sumtrees.py b/applications/sumtrees/sumtrees.py
index 85100eab..9e523a76 100755
--- a/applications/sumtrees/sumtrees.py
+++ b/applications/sumtrees/sumtrees.py
@@ -1533,7 +1533,7 @@ def main():
                 preserve_underscores=args.preserve_underscores,
                 )
         if tree_array.split_distribution.is_mixed_rootings_counted():
-            raise TreeArray.IncompatibleRootingTreeArrayUpdate("Mixed rooting states detected in source trees")
+            raise dendropy.TreeArray.IncompatibleRootingTreeArrayUpdate("Mixed rooting states detected in source trees")
     except KeyboardInterrupt as e:
         raise e
     except Exception as exception_object:
diff --git a/debian/changelog b/debian/changelog
index d366314e..36f3a1a4 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+python-dendropy (4.6.0-1) UNRELEASED; urgency=low
+
+  * New upstream release.
+
+ -- Debian Janitor <janitor@jelmer.uk>  Sun, 14 May 2023 01:37:30 -0000
+
 python-dendropy (4.5.2-1) unstable; urgency=medium
 
   * Team Upload.
diff --git a/debian/patches/do_not_try_running_non-existing_testsuite.patch b/debian/patches/do_not_try_running_non-existing_testsuite.patch
index cd908068..899dff5b 100644
--- a/debian/patches/do_not_try_running_non-existing_testsuite.patch
+++ b/debian/patches/do_not_try_running_non-existing_testsuite.patch
@@ -3,8 +3,10 @@ Last-Update: Tue, 08 Oct 2019 09:01:44 +0200
 Description: There is no test suite to run - just data
  see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=937698#17
 
---- a/setup.py
-+++ b/setup.py
+Index: python-dendropy.git/setup.py
+===================================================================
+--- python-dendropy.git.orig/setup.py
++++ python-dendropy.git/setup.py
 @@ -95,7 +95,7 @@ else:
  EXTRA_KWARGS = dict(
      install_requires = ['setuptools'],
diff --git a/setup.py b/setup.py
index b14736ce..27c80f09 100644
--- a/setup.py
+++ b/setup.py
@@ -160,7 +160,7 @@ setup(name='DendroPy',
       version=__version__,
       author='Jeet Sukumaran and Mark T. Holder',
       author_email='jeetsukumaran@gmail.com, mtholder@ku.edu',
-      url='http://packages.python.org/DendroPy/',
+      url='http://pypi.org/project/DendroPy//',
       description="A Python library for phylogenetics and phylogenetic computing: reading, writing, simulation, processing and manipulation of phylogenetic trees (phylogenies) and characters.",
       license='BSD',
       packages=PACKAGES,
@@ -180,14 +180,12 @@ setup(name='DendroPy',
             "Operating System :: OS Independent",
             "Programming Language :: Python :: 2.7",
             "Programming Language :: Python :: 3",
-            "Programming Language :: Python :: 3.1",
-            "Programming Language :: Python :: 3.2",
-            "Programming Language :: Python :: 3.3",
-            "Programming Language :: Python :: 3.4",
-            "Programming Language :: Python :: 3.5",
             "Programming Language :: Python :: 3.6",
             "Programming Language :: Python :: 3.7",
             "Programming Language :: Python :: 3.8",
+            "Programming Language :: Python :: 3.9",
+            "Programming Language :: Python :: 3.10",
+            "Programming Language :: Python :: 3.11",
             "Programming Language :: Python",
             "Topic :: Scientific/Engineering :: Bio-Informatics",
             ],
diff --git a/src/DendroPy.egg-info/PKG-INFO b/src/DendroPy.egg-info/PKG-INFO
index f0716c68..496aa06b 100644
--- a/src/DendroPy.egg-info/PKG-INFO
+++ b/src/DendroPy.egg-info/PKG-INFO
@@ -1,8 +1,8 @@
 Metadata-Version: 1.1
 Name: DendroPy
-Version: 4.5.2
+Version: 4.6.0
 Summary: A Python library for phylogenetics and phylogenetic computing: reading, writing, simulation, processing and manipulation of phylogenetic trees (phylogenies) and characters.
-Home-page: http://packages.python.org/DendroPy/
+Home-page: http://pypi.org/project/DendroPy//
 Author: Jeet Sukumaran and Mark T. Holder
 Author-email: jeetsukumaran@gmail.com, mtholder@ku.edu
 License: BSD
@@ -10,6 +10,16 @@ Description: .. image:: https://raw.githubusercontent.com/jeetsukumaran/DendroPy
            :align: right
            :alt: DendroPy
         
+        .. image:: https://github.com/jeetsukumaran/DendroPy/actions/workflows/ci.yaml/badge.svg
+           :target: https://github.com/jeetsukumaran/DendroPy/actions/workflows/ci.yaml
+        
+        .. image:: https://img.shields.io/pypi/v/DendroPy.svg
+                :target: https://pypi.org/project/DendroPy/
+        
+        .. image:: https://readthedocs.org/projects/DendroPy/badge/?version=main
+                :target: https://dendropy.readthedocs.io/en/main/?badge=main
+                :alt: Documentation Status
+        
         DendroPy is a Python library for phylogenetic computing.
         It provides classes and functions for the simulation, processing, and
         manipulation of phylogenetic trees and character matrices, and supports the
@@ -27,7 +37,7 @@ Description: .. image:: https://raw.githubusercontent.com/jeetsukumaran/DendroPy
         
         DendroPy is also hosted in the official Python repository:
         
-            http://packages.python.org/DendroPy/
+            http://pypi.org/project/DendroPy//
         
         Requirements and Installation
         =============================
@@ -65,7 +75,7 @@ Description: .. image:: https://raw.githubusercontent.com/jeetsukumaran/DendroPy
         Current Release
         ===============
         
-        The current release of DendroPy is version 4.5.2.
+        The current release of DendroPy is version 4.6.0.
         
         
 Keywords: phylogenetics phylogeny phylogenies phylogeography evolution evolutionary biology systematics coalescent population genetics phyloinformatics bioinformatics
@@ -77,13 +87,11 @@ Classifier: Natural Language :: English
 Classifier: Operating System :: OS Independent
 Classifier: Programming Language :: Python :: 2.7
 Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.1
-Classifier: Programming Language :: Python :: 3.2
-Classifier: Programming Language :: Python :: 3.3
-Classifier: Programming Language :: Python :: 3.4
-Classifier: Programming Language :: Python :: 3.5
 Classifier: Programming Language :: Python :: 3.6
 Classifier: Programming Language :: Python :: 3.7
 Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Classifier: Programming Language :: Python :: 3.11
 Classifier: Programming Language :: Python
 Classifier: Topic :: Scientific/Engineering :: Bio-Informatics
diff --git a/src/DendroPy.egg-info/SOURCES.txt b/src/DendroPy.egg-info/SOURCES.txt
index abe3312a..9f633607 100644
--- a/src/DendroPy.egg-info/SOURCES.txt
+++ b/src/DendroPy.egg-info/SOURCES.txt
@@ -1,3 +1,4 @@
+AUTHORS.rst
 CHANGES.rst
 LICENSE.rst
 MANIFEST.in
@@ -54,7 +55,11 @@ src/dendropy/datamodel/charstatemodel.py
 src/dendropy/datamodel/datasetmodel.py
 src/dendropy/datamodel/taxonmodel.py
 src/dendropy/datamodel/treecollectionmodel.py
-src/dendropy/datamodel/treemodel.py
+src/dendropy/datamodel/treemodel/__init__.py
+src/dendropy/datamodel/treemodel/_bipartition.py
+src/dendropy/datamodel/treemodel/_edge.py
+src/dendropy/datamodel/treemodel/_node.py
+src/dendropy/datamodel/treemodel/_tree.py
 src/dendropy/interop/__init__.py
 src/dendropy/interop/ape.py
 src/dendropy/interop/biopython.py
diff --git a/src/dendropy/__init__.py b/src/dendropy/__init__.py
index d3277a8a..6921aacb 100644
--- a/src/dendropy/__init__.py
+++ b/src/dendropy/__init__.py
@@ -104,9 +104,9 @@ from dendropy.legacy import treesum
 ## PACKAGE METADATA
 import collections
 __project__ = "DendroPy"
-__version__ = "4.5.2"
+__version__ = "4.6.0"
 __author__ = "Jeet Sukumaran and Mark T. Holder"
-__copyright__ = "Copyright 2010-2015 Jeet Sukumaran and Mark T. Holder."
+__copyright__ = "Copyright 2010-2022 Jeet Sukumaran and Mark T. Holder."
 __citation__ = "Sukumaran, J and MT Holder. 2010. DendroPy: a Python library for phylogenetic computing. Bioinformatics 26: 1569-1571."
 PACKAGE_VERSION = __version__ # for backwards compatibility (with sate)
 
diff --git a/src/dendropy/calculate/mathfn.py b/src/dendropy/calculate/mathfn.py
index 6675762b..5d3af3be 100644
--- a/src/dendropy/calculate/mathfn.py
+++ b/src/dendropy/calculate/mathfn.py
@@ -21,6 +21,8 @@
 Some common mathematical functions.
 """
 
+import functools
+
 def gcd(a, b):
     """Return greatest common divisor using Euclid's Algorithm."""
     while b:
@@ -33,5 +35,4 @@ def lcm(a, b):
 
 def LCM(*args):
     """Return lcm of args."""
-    return reduce(lcm, args)
-
+    return functools.reduce(lcm, args)
diff --git a/src/dendropy/calculate/phylogeneticdistance.py b/src/dendropy/calculate/phylogeneticdistance.py
index 6de1076e..dc7c6273 100644
--- a/src/dendropy/calculate/phylogeneticdistance.py
+++ b/src/dendropy/calculate/phylogeneticdistance.py
@@ -91,7 +91,7 @@ class PhylogeneticDistanceMatrix(object):
             label_transform_fn=None,
             **csv_reader_kwargs
             ):
-        """
+        r"""
         Instantiates a new PhylogeneticDistanceMatrix instance with data
         from an external source.
 
@@ -502,7 +502,7 @@ class PhylogeneticDistanceMatrix(object):
             filter_fn=None,
             is_weighted_edge_distances=True,
             is_normalize_by_tree_size=False):
-        """
+        r"""
         Calculates the phylogenetic ecology statistic "MPD"[1,2] for the tree
         (only considering taxa for which ``filter_fn`` returns True when
         applied if ``filter_fn`` is specified).
@@ -510,9 +510,9 @@ class PhylogeneticDistanceMatrix(object):
         The mean pairwise distance (mpd) is given by:
 
             .. math::
-                mpd = \\frac{ \\sum_{i}^{n} \\sum_{j}^{n} \\delta_{i,j} }{n \\choose 2},
+                mpd = \frac{ \sum_{i}^{n} \sum_{j}^{n} \delta_{i,j} }{n \choose 2},
 
-        where :math:`i \\neq j`, :math:`\\delta_{i,j}` is the phylogenetic
+        where :math:`i \neq j`, :math:`\delta_{i,j}` is the phylogenetic
         distance between species :math:`i` and :math:`j`, and :math:`n` is the number
         of species in the sample.
 
@@ -585,7 +585,7 @@ class PhylogeneticDistanceMatrix(object):
             filter_fn=None,
             is_weighted_edge_distances=True,
             is_normalize_by_tree_size=False):
-        """
+        r"""
         Calculates the phylogenetic ecology statistic "MNTD"[1,2] for the tree
         (only considering taxa for which ``filter_fn`` returns True when
         applied if ``filter_fn`` is specified).
@@ -593,9 +593,9 @@ class PhylogeneticDistanceMatrix(object):
         The mean nearest taxon distance (mntd) is given by:
 
             .. math::
-                mntd = \\frac{ \\sum_{i}^{n} min(\\delta_{i,j}) }{n},
+                mntd = \frac{ \sum_{i}^{n} min(\delta_{i,j}) }{n},
 
-        where :math:`i \\neq j`, :math:`\\delta_{i,j}` is the phylogenetic
+        where :math:`i \neq j`, :math:`\delta_{i,j}` is the phylogenetic
         distance between species :math:`i` and :math:`j`, and :math:`n` is the number
         of species in the sample.
 
@@ -670,14 +670,14 @@ class PhylogeneticDistanceMatrix(object):
             is_skip_single_taxon_assemblages=False,
             null_model_type="taxa.label",
             rng=None):
-        """
+        r"""
         Returns the standardized effect size value for the MPD statistic under
         a null model under various community compositions.
 
         The S.E.S. is given by:
 
             .. math::
-                SES(statistic) = \\frac{observed - mean(model_{null})}{sd(model_{null})}
+                SES(statistic) = \frac{observed - mean(model_{null})}{sd(model_{null})}
 
         This removes any bias associated with the decrease in variance in the
         MPD statistic value as species richness increases to the point where
@@ -767,14 +767,14 @@ class PhylogeneticDistanceMatrix(object):
             is_skip_single_taxon_assemblages=False,
             null_model_type="taxa.label",
             rng=None):
-        """
+        r"""
         Returns the standardized effect size value for the MNTD statistic under
         a null model under various community compositions.
 
         The S.E.S. is given by:
 
             .. math::
-                SES(statistic) = \\frac{observed - mean(model_{null})}{sd(model_{null})}
+                SES(statistic) = \frac{observed - mean(model_{null})}{sd(model_{null})}
 
         This removes any bias associated with the decrease in variance in the
         MPD statistic value as species richness increases to the point where
diff --git a/src/dendropy/calculate/popgenstat.py b/src/dendropy/calculate/popgenstat.py
index bc5d0aac..5694a894 100644
--- a/src/dendropy/calculate/popgenstat.py
+++ b/src/dendropy/calculate/popgenstat.py
@@ -83,14 +83,14 @@ def _count_differences(char_sequences, state_alphabet, ignore_uncertain=True):
     return sum_diff, mean_diff / comps, sq_diff
 
 def _nucleotide_diversity(char_sequences, state_alphabet, ignore_uncertain=True):
-    """
+    r"""
     Returns $\pi$, the proportional nucleotide diversity, calculated for a
     list of character sequences.
     """
     return _count_differences(char_sequences, state_alphabet, ignore_uncertain)[1]
 
 def _average_number_of_pairwise_differences(char_sequences, state_alphabet, ignore_uncertain=True):
-    """
+    r"""
     Returns $k$ (Tajima 1983; Wakely 1996), calculated for a set of sequences:
 
     k = \frac{\right(\sum \sum \k_{ij}\left)}{n \choose 2}
@@ -178,7 +178,7 @@ def average_number_of_pairwise_differences(char_matrix, ignore_uncertain=True):
     return _average_number_of_pairwise_differences(char_matrix.sequences(), char_matrix.default_state_alphabet, ignore_uncertain)
 
 def nucleotide_diversity(char_matrix, ignore_uncertain=True):
-    """
+    r"""
     Returns $\pi$, calculated for a character block.
     """
     return _nucleotide_diversity(char_matrix.sequences(), char_matrix.default_state_alphabet, ignore_uncertain)
diff --git a/src/dendropy/calculate/probability.py b/src/dendropy/calculate/probability.py
index 5d701830..ca70c319 100644
--- a/src/dendropy/calculate/probability.py
+++ b/src/dendropy/calculate/probability.py
@@ -182,7 +182,7 @@ def chisq_pdf(chisq, df):
     if even:
         s = y
     else:
-        s = 2.0 * zprob(-math.sqrt(chisq))
+        s = 2.0 * z_pmf(-math.sqrt(chisq))
     if (df > 2):
         chisq = 0.5 * (df - 1.0)
         if even:
diff --git a/src/dendropy/calculate/statistics.py b/src/dendropy/calculate/statistics.py
index 1c05f2e0..db6f1f4f 100644
--- a/src/dendropy/calculate/statistics.py
+++ b/src/dendropy/calculate/statistics.py
@@ -69,7 +69,7 @@ def mode(values, bin_size=0.1):
             bins[idx] = 1
     sorted_bins = sorted(bins.items(), key=itemgetter(1), reverse=True)
     max_count = sorted_bins[0][1]
-    results = [(sorted_bins[i][0] * bin_size) for i in xrange(len(sorted_bins)) if sorted_bins[i][1] >= max_count]
+    results = [(sorted_bins[i][0] * bin_size) for i in range(len(sorted_bins)) if sorted_bins[i][1] >= max_count]
     return results
 
 def median(pool):
diff --git a/src/dendropy/calculate/treecompare.py b/src/dendropy/calculate/treecompare.py
index a1e42343..70028334 100644
--- a/src/dendropy/calculate/treecompare.py
+++ b/src/dendropy/calculate/treecompare.py
@@ -67,14 +67,14 @@ def symmetric_difference(tree1, tree2, is_bipartitions_updated=False):
 
     ::
 
-        import dendropy
+        from dendropy import TaxonNamespace, Tree
         from dendropy.calculate import treecompare
-        tns = dendropy.TaxonNamespace()
-        tree1 = tree.get_from_path(
+        tns = TaxonNamespace()
+        tree1 = Tree.get_from_path(
                 "t1.nex",
                 "nexus",
                 taxon_namespace=tns)
-        tree2 = tree.get_from_path(
+        tree2 = Tree.get_from_path(
                 "t2.nex",
                 "nexus",
                 taxon_namespace=tns)
diff --git a/src/dendropy/calculate/treesum.py b/src/dendropy/calculate/treesum.py
index 312f37ef..413228e7 100644
--- a/src/dendropy/calculate/treesum.py
+++ b/src/dendropy/calculate/treesum.py
@@ -304,31 +304,6 @@ class TreeSummarizer(object):
                 edge.length = 0.0
         return tree
 
-        ## here we add the support values and/or edge lengths for the terminal taxa ##
-        for node in leaves:
-            if not is_rooted:
-                split = node.edge.split_bitmask
-            else:
-                split = node.edge.leafset_bitmask
-            self.map_split_support_to_node(node, 1.0)
-            if include_edge_lengths:
-                elen = split_distribution.split_edge_lengths.get(split, [0.0])
-                if len(elen) > 0:
-                    mean, var = mean_and_sample_variance(elen)
-                    node.edge.length = mean
-                    if include_edge_length_var:
-                        node.edge.length_var = var
-                else:
-                    node.edge.length = None
-                    if include_edge_length_var:
-                        node.edge.length_var = None
-        #if include_edge_lengths:
-            #self.map_edge_lengths_to_tree(tree=con_tree,
-            #        split_distribution=split_distribution,
-            #        summarization_fn=summarization_fn,
-            #        include_edge_length_var=False)
-        return con_tree
-
     def count_splits_on_trees(self, tree_iterator, split_distribution=None, is_bipartitions_updated=False):
         """
         Given a list of trees file, a SplitsDistribution object (a new one, or,
diff --git a/src/dendropy/dataio/ioservice.py b/src/dendropy/dataio/ioservice.py
index f5dcd58b..5b38b935 100644
--- a/src/dendropy/dataio/ioservice.py
+++ b/src/dendropy/dataio/ioservice.py
@@ -24,8 +24,14 @@ from dendropy.datamodel import basemodel
 from dendropy.datamodel import taxonmodel
 from dendropy.utility import deprecate
 from dendropy.utility import textprocessing
-if not (sys.version_info.major >= 3 and sys.version_info.minor >= 4):
+if sys.version_info.major >= 3 and sys.version_info.minor >= 4:
+    import pathlib
+    def _is_pathlib_path(x):
+        return isinstance(x, pathlib.PurePath)
+else:
     from dendropy.utility.filesys import pre_py34_open as open
+    def _is_pathlib_path(x):
+        return False
 
 ###############################################################################
 ## IOService
@@ -48,11 +54,13 @@ class IOService(object):
     def _get_attached_taxon_set(self):
         IOService.attached_taxon_set_deprecation_warning()
         return self.attached_taxon_namespace
-    def _set_attached_taxon_set(IOService, v):
+    def _set_attached_taxon_set(self, v):
         IOService.attached_taxon_set_deprecation_warning()
         self.attached_taxon_namespace = v
-    def _del_attached_taxon_set(IOService):
+    def _del_attached_taxon_set(self):
         IOService.attached_taxon_set_deprecation_warning()
+        del self.attached_taxon_namespace
+
     attached_taxon_set = property(_get_attached_taxon_set, _set_attached_taxon_set, _del_attached_taxon_set)
 
     def check_for_unused_keyword_arguments(self, kwargs_dict):
@@ -560,6 +568,9 @@ class DataYielder(IOService):
         if textprocessing.is_str_type(current_file):
             self._current_file = open(current_file, "r")
             self._current_file_name = current_file
+        elif _is_pathlib_path(current_file):
+            self._current_file = current_file.open()
+            self._current_file_name = current_file
         else:
             self._current_file = current_file
             try:
diff --git a/src/dendropy/dataio/newickyielder.py b/src/dendropy/dataio/newickyielder.py
index bbdf60d5..ebdc9936 100644
--- a/src/dendropy/dataio/newickyielder.py
+++ b/src/dendropy/dataio/newickyielder.py
@@ -33,7 +33,7 @@ class NewickTreeDataYielder(ioservice.TreeDataYielder):
             taxon_namespace=None,
             tree_type=None,
             **kwargs):
-        """
+        r"""
 
         Parameters
         ----------
diff --git a/src/dendropy/dataio/nexmlreader.py b/src/dendropy/dataio/nexmlreader.py
index bd634c81..fd4ca356 100644
--- a/src/dendropy/dataio/nexmlreader.py
+++ b/src/dendropy/dataio/nexmlreader.py
@@ -28,6 +28,7 @@ from dendropy.utility import container
 from dendropy.utility import textprocessing
 from dendropy.utility import error
 from dendropy.dataio import xmlprocessing
+from dendropy.datamodel.charstatemodel import StateAlphabet, StateIdentity
 
 SUPPORTED_NEXML_NAMESPACES = ('http://www.nexml.org/1.0', 'http://www.nexml.org/2009')
 
@@ -76,7 +77,7 @@ class _AnnotationParser(object):
                     value = self._coerce_to_xml_schema_type(value, dt)
                 elif dt_namespace.startswith("http://www.nexml.org/1.0") or dt_namespace.startswith("http://www.nexml.org/2009"):
                     value = self._coerce_to_nexml_type(value, dt)
-                elif dt_namespace.startswith("http://dendropy.org") or dt_namespace.startswith("http://packages.python.org/DendroPy"):
+                elif dt_namespace.startswith("http://dendropy.org") or dt_namespace.startswith("http://packages.python.org/DendroPy") or dt_namespace.startswith("http://pypi.org/project/DendroPy/"):
                     value = self._coerce_to_dendropy_type(value, dt)
         a = annotated.annotations.add_new(
                 name=name,
@@ -487,7 +488,7 @@ class _NexmlTreeParser(object):
                     raise Exception("Tree already has an explictly defined root node, but node without parent found: {}".format(unparented_node))
             else:
                 tree_obj.seed_node = unparented_node
-        elif len(unparented_node_sets) > 1:
+        elif len(unparented_node_set) > 1:
             for node in unparented_node_set:
                 tree_obj.seed_node.add_child(node)
         else:
@@ -614,10 +615,10 @@ class _NexmlCharBlockParser(_AnnotationParser):
         # set up taxa
         otus_id = nxchars.get('otus', None)
         if otus_id is None:
-            raise Exception("Character Block %s (\"%s\"): Taxon namespace not specified" % (char_matrix_oid, char_matrix.label))
+            raise Exception("Character Block %s (\"%s\"): Taxon namespace not specified" % (char_matrix_oid, label))
         taxon_namespace = self._id_taxon_namespace_map.get(otus_id, None)
         if not taxon_namespace:
-            raise Exception("Character Block %s (\"%s\"): Specified taxon namespace not found" % (char_matrix_oid, char_matrix.label))
+            raise Exception("Character Block %s (\"%s\"): Specified taxon namespace not found" % (char_matrix_oid, label))
 
         # character matrix instantiation
         nxchartype = nxchars.parse_type()
@@ -636,7 +637,7 @@ class _NexmlCharBlockParser(_AnnotationParser):
         elif nxchartype.startswith('Continuous'):
             data_type = "continuous"
         else:
-            raise Exception("Character Block %s (\"%s\"): Character type '%s' not supported" % (char_matrix_oid, char_matrix.label, nxchartype))
+            raise Exception("Character Block %s (\"%s\"): Character type '%s' not supported" % (char_matrix_oid, label, nxchartype))
         char_matrix = self._char_matrix_factory(
                 data_type,
                 taxon_namespace=taxon_namespace,
@@ -899,10 +900,9 @@ class _NexmlCharBlockParser(_AnnotationParser):
         Defaults to '0' - '9' if not specified.
         """
         if symbol_list is None:
-            symbol_list = [str(i) for i in xrange(10)]
-        state_alphabet = dendropy.StateAlphabet()
+            symbol_list = [str(i) for i in range(10)]
+        state_alphabet = StateAlphabet()
         for s in symbol_list:
-            state_alphabet.append(dendropy.StateAlphabetElement(symbol=s))
+            state_alphabet.append(StateIdentity(symbol=s))
         char_matrix.state_alphabets.append(state_alphabet)
         char_matrix.default_state_alphabet = state_alphabet
-
diff --git a/src/dendropy/dataio/nexmlwriter.py b/src/dendropy/dataio/nexmlwriter.py
index d13b30b3..132296ae 100644
--- a/src/dendropy/dataio/nexmlwriter.py
+++ b/src/dendropy/dataio/nexmlwriter.py
@@ -33,11 +33,11 @@ from dendropy.utility.textprocessing import StringIO
 def _safe_unicode(obj, *args):
     """ return the unicode representation of obj """
     try:
-        return unicode(obj, *args)
+        return str(obj, *args)
     except UnicodeDecodeError:
         # obj is byte string
         ascii_text = str(obj).encode('string_escape')
-        return unicode(ascii_text)
+        return str(ascii_text)
 
 def _safe_str(obj):
     """ return the byte string representation of obj """
@@ -45,7 +45,7 @@ def _safe_str(obj):
         return str(obj)
     except UnicodeEncodeError:
         # obj is unicode
-        return unicode(obj).encode('unicode_escape')
+        return str(obj).encode('unicode_escape')
 
 def _protect_attr(x):
 #     return cgi.escape(x)
@@ -402,7 +402,7 @@ class NexmlWriter(ioservice.DataWriter):
             ["xml", "http://www.w3.org/XML/1998/namespace"],
             ["nex", "http://www.nexml.org/2009"],
             ["xsd", "http://www.w3.org/2001/XMLSchema#"],
-            # ["dendropy", "http://packages.python.org/DendroPy/"],
+            # ["dendropy", "http://pypi.org/project/DendroPy/"],
                 ]
         # parts.append('%sxmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"' \
         #              % (self.indent * (indent_level+1)))
@@ -559,13 +559,9 @@ class NexmlWriter(ioservice.DataWriter):
         elif len(char_matrix.state_alphabets) == 1:
             sa = char_matrix.state_alphabets[0]
         elif len(char_matrix.state_alphabets) > 1:
-            raise TypeError("Character cell %d for taxon '%s' does not have a state alphabet mapping given by the" % (col_idx, taxon.label)\
-                    + " 'character_type' property, and multiple state alphabets are defined for the containing" \
-                    + " character matrix with no default specified")
+            raise TypeError("Multiple state alphabets defined for this matrix with no default specified")
         elif len(char_matrix.state_alphabets) == 0:
-            raise TypeError("Character cell %d for taxon '%s' does not have a state alphabet mapping given by the" % (col_idx, taxon.label)\
-                    + " 'character_type' property, and no state alphabets are defined for the containing" \
-                    + " character matrix")
+            raise TypeError("No state alphabets defined for this matrix")
         return sa
 
     def _write_format_section(self, char_matrix, dest, indent_level):
diff --git a/src/dendropy/dataio/nexmlyielder.py b/src/dendropy/dataio/nexmlyielder.py
index 69c599ba..5cc6184b 100644
--- a/src/dendropy/dataio/nexmlyielder.py
+++ b/src/dendropy/dataio/nexmlyielder.py
@@ -37,7 +37,7 @@ class NexmlTreeDataYielder(
             taxon_namespace=None,
             tree_type=None,
             **kwargs):
-        """
+        r"""
 
         Parameters
         ----------
diff --git a/src/dendropy/dataio/nexusprocessing.py b/src/dendropy/dataio/nexusprocessing.py
index 637e84db..3d1fb8c2 100644
--- a/src/dendropy/dataio/nexusprocessing.py
+++ b/src/dendropy/dataio/nexusprocessing.py
@@ -41,7 +41,7 @@ class NexusTokenizer(Tokenizer):
         Tokenizer.__init__(self,
             src=src,
             uncaptured_delimiters=set(" \t\n\r"),
-            captured_delimiters=set("{}(),;:=\\\""),
+            captured_delimiters=set(r'{}(),;:=\"'),
             quote_chars=set("'"),
             escape_quote_by_doubling=True,
             escape_chars=set(""),
@@ -480,9 +480,9 @@ def escape_nexus_token(label, preserve_spaces=False, quote_underscores=True):
         return ""
     if not preserve_spaces \
             and "_" not in label \
-            and not re.search('[\(\)\[\]\{\}\\\/\,\;\:\=\*\'\"\`\+\-\<\>\0\t\n]', label):
+            and not re.search(r'''[\(\)\[\]\{\}\\/\\,\\;\\:\\=\\*'"\`\+\-\<\>\0\t\n]''', label):
         label = label.replace(' ', '_').replace('\t', '_')
-    elif re.search('[\(\)\[\]\{\}\\\/\,\;\:\=\*\'\"\`\+\-\<\>\0\t\n\r ]', label) \
+    elif re.search(r'''[\(\)\[\]\{\}\\/\,\;\:\=\*'"\`\+\-\<\>\0\t\n\r ]''', label) \
         or quote_underscores and "_" in label:
         s = label.split("'")
         if len(s) == 1:
diff --git a/src/dendropy/dataio/nexusreader.py b/src/dendropy/dataio/nexusreader.py
index 9a13c13e..46491c2c 100644
--- a/src/dendropy/dataio/nexusreader.py
+++ b/src/dendropy/dataio/nexusreader.py
@@ -713,7 +713,7 @@ class NexusReader(ioservice.DataReader):
         Assumes current token is 'TITLE'
         """
         if self._nexus_tokenizer.cast_current_token_to_ucase() != "TITLE":
-            raise self._nexus_error("Expecting 'TITLE' token, but instead found '{}'".format(token))
+            raise self._nexus_error("Expecting 'TITLE' token, but instead found '{}'".format(self._nexus_tokenizer.cast_current_token_to_ucase()))
         title = self._nexus_tokenizer.require_next_token()
         sc = self._nexus_tokenizer.require_next_token()
         if sc != ";":
@@ -1241,7 +1241,7 @@ class NexusReader(ioservice.DataReader):
                                             else:
                                                 raise self._nexus_error('Expecting digit but found "%s".' % (token))
                                         else:
-                                            raise self._nexus_error('Expecting other tokens after "\\", but no more found.')
+                                            raise self._nexus_error(r'Expecting other tokens after "\", but no more found.')
                                         token = self._nexus_tokenizer.next_token()
                                     else:
                                         step = 1
@@ -1462,9 +1462,8 @@ class NexusReader(ioservice.DataReader):
                     #         exc.__cause__ = None # Python 3.3, 3.4
                     #         raise exc
                 if len(character_data_vector) == self._file_specified_nchar:
-                    raise self._too_many_characters_error(c)
+                    raise self._too_many_characters_error(token)
                 character_data_vector.append(state)
         if self._interleave:
             self._nexus_tokenizer.set_capture_eol(False)
         return character_data_vector
-
diff --git a/src/dendropy/dataio/nexuswriter.py b/src/dendropy/dataio/nexuswriter.py
index a3ab80ec..84fd6962 100644
--- a/src/dendropy/dataio/nexuswriter.py
+++ b/src/dendropy/dataio/nexuswriter.py
@@ -269,7 +269,7 @@ class NexusWriter(ioservice.DataWriter):
 
         #  Write out taxon namespaces
         if not self.simple and not self.suppress_taxa_blocks:
-            if self.suppress_block_titles and len(taxon_namespace_to_write) > 1:
+            if self.suppress_block_titles and len(self.taxon_namespaces_to_write) > 1:
                 warnings.warn("Multiple taxon namespaces will be written, but block titles are suppressed: data file may not be interpretable")
             for tns in self.taxon_namespaces_to_write:
                 self._write_taxa_block(stream, tns)
diff --git a/src/dendropy/dataio/nexusyielder.py b/src/dendropy/dataio/nexusyielder.py
index bb6febe6..bb802736 100644
--- a/src/dendropy/dataio/nexusyielder.py
+++ b/src/dendropy/dataio/nexusyielder.py
@@ -38,7 +38,7 @@ class NexusTreeDataYielder(
             taxon_namespace=None,
             tree_type=None,
             **kwargs):
-        """
+        r"""
 
         Parameters
         ----------
diff --git a/src/dendropy/dataio/phylipreader.py b/src/dendropy/dataio/phylipreader.py
index 925c2533..5ebbfdfc 100644
--- a/src/dendropy/dataio/phylipreader.py
+++ b/src/dendropy/dataio/phylipreader.py
@@ -184,7 +184,7 @@ class PhylipReader(ioservice.DataReader):
             raise error.DataParseError("Expecting at least 2 lines in PHYLIP format data source", stream=self.stream)
         desc_line = lines[0]
         lines = lines[1:]
-        m = re.match('\s*(\d+)\s+(\d+)\s*$', desc_line)
+        m = re.match(r'\s*(\d+)\s+(\d+)\s*$', desc_line)
         if m is None:
             raise self._data_parse_error("Invalid data description line: '%s'" % desc_line)
         self.ntax = int(m.groups()[0])
diff --git a/src/dendropy/dataio/tokenizer.py b/src/dendropy/dataio/tokenizer.py
index 753367cd..096c432c 100644
--- a/src/dendropy/dataio/tokenizer.py
+++ b/src/dendropy/dataio/tokenizer.py
@@ -198,7 +198,7 @@ class Tokenizer(object):
                             quote_char=cur_quote_char,
                             line_num=self.current_line_num,
                             col_num=self.current_column_num,
-                            stream=src)
+                            stream=self.src)
                 if self._cur_char == cur_quote_char:
                     self._get_next_char()
                     if self.escape_quote_by_doubling:
diff --git a/src/dendropy/datamodel/basemodel.py b/src/dendropy/datamodel/basemodel.py
index ba3169b7..97540d99 100644
--- a/src/dendropy/datamodel/basemodel.py
+++ b/src/dendropy/datamodel/basemodel.py
@@ -213,7 +213,8 @@ class Deserializable(object):
             New instance of object, constructed and populated from data given
             in source.
         """
-        with open(src, "r", newline=None) as fsrc:
+        open_args = ["r"] if sys.version_info >= (3, 3) else ["rU"]
+        with open(src, *open_args) as fsrc:
             return cls._parse_and_create_from_stream(stream=fsrc,
                     schema=schema,
                     **kwargs)
@@ -277,8 +278,8 @@ class Deserializable(object):
                     stream=ssrc,
                     schema=schema,
                     **kwargs)
-        except error.DataParseError:
-            sys.stderr.write(text)
+        except error.DataParseError as exc:
+            exc.url_text = text
             raise
     get_from_url = classmethod(get_from_url)
 
@@ -410,7 +411,8 @@ class MultiReadable(object):
                 - |CharacterMatrix|: number of sequences
                 - |DataSet|: ``tuple`` (number of taxon namespaces, number of tree lists, number of matrices)
         """
-        with open(src, "r", newline=None) as fsrc:
+        open_args = ["r"] if sys.version_info >= (3, 3) else ["rU"]
+        with open(src, *open_args) as fsrc:
             return self._parse_and_add_from_stream(stream=fsrc, schema=schema, **kwargs)
 
     def read_from_string(self, src, schema, **kwargs):
@@ -941,7 +943,7 @@ class Annotation(Annotable):
 
     def _get_namespace(self):
         if self._namespace is None:
-            self._namespace = "http://packages.python.org/DendroPy/"
+            self._namespace = "http://pypi.org/project/DendroPy/"
         return self._namespace
     def _set_namespace(self, prefix):
         self._namespace = prefix
@@ -1065,7 +1067,7 @@ class AnnotationSet(container.OrderedSet):
         if not name_is_prefixed:
             if name_prefix is None and namespace is None:
                 name_prefix = "dendropy"
-                namespace = "http://packages.python.org/DendroPy/"
+                namespace = "http://pypi.org/project/DendroPy/"
             elif name_prefix is None:
                 raise TypeError("Cannot specify 'name_prefix' for unqualified name without specifying 'namespace'")
             elif namespace is None:
@@ -1143,7 +1145,7 @@ class AnnotationSet(container.OrderedSet):
         if not name_is_prefixed:
             if name_prefix is None and namespace is None:
                 name_prefix = "dendropy"
-                namespace = "http://packages.python.org/DendroPy/"
+                namespace = "http://pypi.org/project/DendroPy/"
             elif name_prefix is None:
                 raise TypeError("Cannot specify 'name_prefix' for unqualified name without specifying 'namespace'")
             elif namespace is None:
@@ -1292,7 +1294,7 @@ class AnnotationSet(container.OrderedSet):
         # elif store_as.lower().startswith("bibtex-record"):
         #     if name_prefix is None and namespace is None:
         #         name_prefix = "dendropy"
-        #         namespace = "http://packages.python.org/DendroPy/"
+        #         namespace = "http://pypi.org/project/DendroPy/"
         #     self.add_new(
         #             name="bibtex",
         #             value=bt.as_compact_bibtex(),
@@ -1358,8 +1360,8 @@ class AnnotationSet(container.OrderedSet):
         that match based on *all* criteria specified in keyword arguments::
 
             >>> notes = tree.annotations.findall(name="color")
-            >>> notes = tree.annotations.findall(namespace="http://packages.python.org/DendroPy/")
-            >>> notes = tree.annotations.findall(namespace="http://packages.python.org/DendroPy/",
+            >>> notes = tree.annotations.findall(namespace="http://pypi.org/project/DendroPy/")
+            >>> notes = tree.annotations.findall(namespace="http://pypi.org/project/DendroPy/",
                                           name="color")
             >>> notes = tree.annotations.findall(name_prefix="dc")
             >>> notes = tree.annotations.findall(prefixed_name="dc:color")
@@ -1474,14 +1476,14 @@ class AnnotationSet(container.OrderedSet):
             >>> tree.annotations.drop(name="color")
 
         Remove all annotation objects with ``namespace`` ==
-        "http://packages.python.org/DendroPy/"::
+        "http://pypi.org/project/DendroPy/"::
 
-            >>> tree.annotations.drop(namespace="http://packages.python.org/DendroPy/")
+            >>> tree.annotations.drop(namespace="http://pypi.org/project/DendroPy/")
 
         Remove all annotation objects with ``namespace`` ==
-        "http://packages.python.org/DendroPy/" *and* ``name`` == "color"::
+        "http://pypi.org/project/DendroPy/" *and* ``name`` == "color"::
 
-            >>> tree.annotations.drop(namespace="http://packages.python.org/DendroPy/",
+            >>> tree.annotations.drop(namespace="http://pypi.org/project/DendroPy/",
                     name="color")
 
         Remove all annotation objects with ``name_prefix`` == "dc"::
diff --git a/src/dendropy/datamodel/charmatrixmodel.py b/src/dendropy/datamodel/charmatrixmodel.py
index bb58b170..e8adfd2d 100644
--- a/src/dendropy/datamodel/charmatrixmodel.py
+++ b/src/dendropy/datamodel/charmatrixmodel.py
@@ -511,7 +511,7 @@ class CharacterMatrix(
             **kwargs):
         taxon_namespace = taxonmodel.process_kwargs_dict_for_taxon_namespace(kwargs, None)
         if taxon_namespace is None:
-            taxon_namespace = taxonmodel.TaxonNamespace()
+            taxon_namespace = taxonmodel.TaxonNamespace(is_case_sensitive=kwargs.get("case_sensitive_taxon_labels", False))
         def tns_factory(label):
             if label is not None and taxon_namespace.label is None:
                 taxon_namespace.label = label
@@ -540,7 +540,7 @@ class CharacterMatrix(
 
     @classmethod
     def get(cls, **kwargs):
-        """
+        r"""
         Instantiate and return a *new* character matrix object from a data source.
 
         **Mandatory Source-Specification Keyword Argument (Exactly One of the Following Required):**
@@ -599,7 +599,7 @@ class CharacterMatrix(
                     path="python_morph.nex",
                     schema="nexus")
             std2 = dendropy.StandardCharacterMatrix.get(
-                    data=">t1\\n01011\\n\\n>t2\\n11100",
+                    data=">t1\n01011\n\n>t2\n11100",
                     schema="fasta")
 
         """
@@ -657,7 +657,7 @@ class CharacterMatrix(
         """
         taxon_namespace = taxonmodel.process_kwargs_dict_for_taxon_namespace(kwargs, None)
         if taxon_namespace is None:
-            taxon_namespace = taxonmodel.TaxonNamespace()
+            taxon_namespace = taxonmodel.TaxonNamespace(is_case_sensitive=kwargs.get("case_sensitive_taxon_labels", False))
         kwargs["taxon_namespace"] = taxon_namespace
         char_matrices = []
         for stream in streams:
@@ -674,7 +674,10 @@ class CharacterMatrix(
         character matrix. Component parts will be recorded as character
         subsets.
         """
-        streams = [open(path, "rU") for path in paths]
+        try:
+            streams = [open(path, "rU") for path in paths]
+        except ValueError:
+            streams = [open(path, "r") for path in paths]
         return cls.concatenate_from_streams(streams, schema, **kwargs)
     concatenate_from_paths = classmethod(concatenate_from_paths)
 
@@ -683,7 +686,7 @@ class CharacterMatrix(
             char_matrix=None,
             case_sensitive_taxon_labels=False,
             **kwargs):
-        """
+        r"""
         Populates character matrix from dictionary (or similar mapping type),
         creating |Taxon| objects and sequences as needed.
 
@@ -851,7 +854,7 @@ class CharacterMatrix(
     #     return self.clone_from(m)
 
     def _format_and_write_to_stream(self, stream, schema, **kwargs):
-        """
+        r"""
         Writes out ``self`` in ``schema`` format to a destination given by
         file-like object ``stream``.
 
@@ -1870,7 +1873,7 @@ class DiscreteCharacterMatrix(CharacterMatrix):
         return taxon_to_state_indices
 
     def folded_site_frequency_spectrum(self, is_pad_vector_to_unfolded_length=False):
-        """
+        r"""
         Returns the folded or minor site/allele frequency spectrum.
 
         Given $N$ chromosomes, the site frequency spectrum is a vector $(f_0,
@@ -1879,13 +1882,13 @@ class DiscreteCharacterMatrix(CharacterMatrix):
         alleles, 1 allele, 2 alleles, etc.
 
         The *folded* site frequency spectrum is a vector $(f_0, f_1, f_2, ...,
-        f_m), m = \\ceil{\\frac{N}{2}}$, where the values are the number of minor
+        f_m), m = \ceil{\frac{N}{2}}$, where the values are the number of minor
         alleles in the site.
 
         Parameters
         ----------
         is_pad_vector_to_unfolded_length: bool
-            If False, then the vector length will be $\\ceil{\\frac{N}{2}}$,
+            If False, then the vector length will be $\ceil{\frac{N}{2}}$,
             where $N$ is the number of taxa. Otherwise, by default,
             True, length of vector will be number of taxa + 1, with the
             first element the number of monomorphic sites not contributing to
diff --git a/src/dendropy/datamodel/charstatemodel.py b/src/dendropy/datamodel/charstatemodel.py
index 157e0cac..fa448058 100644
--- a/src/dendropy/datamodel/charstatemodel.py
+++ b/src/dendropy/datamodel/charstatemodel.py
@@ -302,7 +302,7 @@ class StateAlphabet(
         return new_state
 
     def new_ambiguous_state(self, symbol, **kwargs):
-        """
+        r"""
         Adds a new ambiguous state to the collection
         of states in this alphabet.
 
@@ -339,7 +339,7 @@ class StateAlphabet(
     def new_polymorphic_state(self,
             symbol,
             **kwargs):
-        """
+        r"""
         Adds a new polymorphic state to the collection
         of states in this alphabet.
 
@@ -377,7 +377,7 @@ class StateAlphabet(
             symbol,
             state_denomination,
             **kwargs):
-        """
+        r"""
         Adds a new polymorphic or ambiguous state to the collection
         of states in this alphabet.
 
diff --git a/src/dendropy/datamodel/datasetmodel.py b/src/dendropy/datamodel/datasetmodel.py
index d428223b..0d34b09b 100644
--- a/src/dendropy/datamodel/datasetmodel.py
+++ b/src/dendropy/datamodel/datasetmodel.py
@@ -346,7 +346,7 @@ class DataSet(
             exclude_trees=False,
             exclude_chars=False,
             **kwargs):
-        """
+        r"""
         Writes out ``self`` in ``schema`` format to a destination given by
         file-like object ``stream``.
 
@@ -579,7 +579,7 @@ class DataSet(
         return tree_list
 
     def new_tree_list(self, *args, **kwargs):
-        """
+        r"""
         Creates a new |TreeList| instance, adds it to this DataSet.
 
         Parameters
diff --git a/src/dendropy/datamodel/taxonmodel.py b/src/dendropy/datamodel/taxonmodel.py
index 985dafe4..f7bfc8b3 100644
--- a/src/dendropy/datamodel/taxonmodel.py
+++ b/src/dendropy/datamodel/taxonmodel.py
@@ -304,7 +304,7 @@ class TaxonNamespaceAssociated(object):
 
         """
         if taxon_namespace is None:
-            taxon_namespace = taxon.TaxonNamespace()
+            taxon_namespace = TaxonNamespace()
         self._taxon_namespace = taxon_namespace
         self.reconstruct_taxon_namespace(
                 unify_taxa_by_label=unify_taxa_by_label,
@@ -442,7 +442,7 @@ class TaxonNamespace(
     ### Life-cycle
 
     def __init__(self, *args, **kwargs):
-        """
+        r"""
         Parameters
         ----------
 
@@ -617,7 +617,7 @@ class TaxonNamespace(
         return id(self)
 
     def __lt__(self, other):
-        return self._taxa < o._taxa
+        return self._taxa < other._taxa
 
     def __eq__(self, other):
         # enforce non-equivalence of non-identical namespaces
@@ -1399,7 +1399,7 @@ class TaxonNamespace(
         return self._taxon_accession_index_map[taxon]
 
     def taxa_bitmask(self, **kwargs):
-        """
+        r"""
         Retrieves the list of split hash bitmask values representing all taxa
         specified by keyword-specified list of taxon objects (``taxa=``) or
         labels (``labels=``).
@@ -1431,7 +1431,7 @@ class TaxonNamespace(
 
     def taxa_bipartition(self,
             **kwargs):
-        """
+        r"""
         Returns a bipartition that represents all taxa specified by
         keyword-specified list of taxon objects (``taxa=``) or labels
         (``labels=``).
@@ -1602,7 +1602,7 @@ class TaxonNamespace(
     ### I/O
 
     def _format_and_write_to_stream(self, stream, schema, **kwargs):
-        """
+        r"""
         Writes out ``self`` in ``schema`` format to a destination given by
         file-like object ``stream``.
 
diff --git a/src/dendropy/datamodel/treecollectionmodel.py b/src/dendropy/datamodel/treecollectionmodel.py
index cd5cf0b9..8e4f9a29 100644
--- a/src/dendropy/datamodel/treecollectionmodel.py
+++ b/src/dendropy/datamodel/treecollectionmodel.py
@@ -59,7 +59,7 @@ class TreeList(
             collection_offset=None,
             tree_offset=None,
             **kwargs):
-        """
+        r"""
         Constructs a new |TreeList| object and populates it with trees from
         file-like object ``stream``.
 
@@ -283,7 +283,7 @@ class TreeList(
     DEFAULT_TREE_TYPE = treemodel.Tree
 
     def tree_factory(cls, *args, **kwargs):
-        """
+        r"""
         Creates and returns a |Tree| of a type that this list understands how to
         manage.
 
@@ -516,7 +516,7 @@ class TreeList(
             collection_offset=None,
             tree_offset=None,
             **kwargs):
-        """
+        r"""
         Parses |Tree| objects from data source and adds to this collection.
 
         Notes
@@ -679,7 +679,7 @@ class TreeList(
         return basemodel.MultiReadable._read_from(self, **kwargs)
 
     def _format_and_write_to_stream(self, stream, schema, **kwargs):
-        """
+        r"""
         Writes out ``self`` in ``schema`` format to a destination given by
         file-like object ``stream``.
 
@@ -726,7 +726,7 @@ class TreeList(
             tree,
             taxon_import_strategy="migrate",
             **kwargs):
-        """
+        r"""
         Inserts a |Tree| object, ``tree``, into the collection before
         ``index``.
 
@@ -780,7 +780,7 @@ class TreeList(
             tree,
             taxon_import_strategy="migrate",
             **kwargs):
-        """
+        r"""
         Adds a |Tree| object, ``tree``, to the collection.
 
         The |TaxonNamespace| reference of ``tree`` will be set to that of
@@ -1714,7 +1714,7 @@ class SplitDistribution(taxonmodel.TaxonNamespaceAssociated):
             summarize_splits=True,
             **split_summarization_kwargs
             ):
-        """
+        r"""
         Returns a consensus tree from splits in ``self``.
 
         Parameters
@@ -1770,7 +1770,7 @@ class SplitDistribution(taxonmodel.TaxonNamespaceAssociated):
             is_bipartitions_updated=False,
             **split_summarization_kwargs
             ):
-        """
+        r"""
         Summarizes support of splits/edges/node on tree.
 
         Parameters
@@ -1804,17 +1804,17 @@ class SplitDistribution(taxonmodel.TaxonNamespaceAssociated):
 
     def _get_taxon_set(self):
         from dendropy import taxonmodel
-        taxon_model.taxon_set_deprecation_warning()
+        taxonmodel.taxon_set_deprecation_warning()
         return self.taxon_namespace
 
     def _set_taxon_set(self, v):
         from dendropy import taxonmodel
-        taxon_model.taxon_set_deprecation_warning()
+        taxonmodel.taxon_set_deprecation_warning()
         self.taxon_namespace = v
 
     def _del_taxon_set(self):
         from dendropy import taxonmodel
-        taxon_model.taxon_set_deprecation_warning()
+        taxonmodel.taxon_set_deprecation_warning()
 
     taxon_set = property(_get_taxon_set, _set_taxon_set, _del_taxon_set)
 
@@ -2368,7 +2368,7 @@ class TreeArray(
             files,
             schema,
             **kwargs):
-        """
+        r"""
         Adds multiple structures from one or more external file sources to the
         collection.
 
@@ -2485,7 +2485,7 @@ class TreeArray(
     ##############################################################################
     ## Container (List) Interface
 
-    def append(tree, is_bipartitions_updated=False):
+    def append(self, tree, is_bipartitions_updated=False):
         """
         Adds a |Tree| instance to the collection before position given
         by ``index``.
@@ -2505,7 +2505,7 @@ class TreeArray(
         return self.add_tree(tree=tree,
                 is_bipartitions_updated=is_bipartitions_updated)
 
-    def insert(index, tree, is_bipartitions_updated=False):
+    def insert(self, index, tree, is_bipartitions_updated=False):
         """
         Adds a |Tree| instance to the collection before position given
         by ``index``.
@@ -2553,7 +2553,7 @@ class TreeArray(
         assert self.use_tree_weights is tree_array.use_tree_weights
         self._tree_split_bitmasks.extend(tree_array._tree_split_bitmasks)
         self._tree_edge_lengths.extend(tree_array._tree_edge_lengths)
-        self._tree_weights.extend(other._tree_weights)
+        self._tree_weights.extend(tree_array._tree_weights)
         self._split_distribution.update(tree_array._split_distribution)
         return self
 
@@ -2846,7 +2846,7 @@ class TreeArray(
             summarize_splits=True,
             **split_summarization_kwargs
             ):
-        """
+        r"""
         Returns a consensus tree from splits in ``self``.
 
         Parameters
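
Beyond the raw-docstring (``r"""``) conversions, the hunks above fix the missing ``self`` parameters on ``TreeArray.append()``/``insert()`` and the mistaken ``other._tree_weights`` reference in ``TreeArray``'s extend/update logic. A hedged sketch of the ``TreeArray`` workflow these methods belong to (file names are hypothetical)::

    import dendropy

    tns = dendropy.TaxonNamespace()
    trees = dendropy.TreeArray(taxon_namespace=tns)
    trees.read_from_files(files=["run1.trees", "run2.trees"], schema="nexus")
    con_tree = trees.consensus_tree(min_freq=0.95)
    print(con_tree.as_string(schema="newick"))
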
diff --git a/src/dendropy/datamodel/treemodel/__init__.py b/src/dendropy/datamodel/treemodel/__init__.py
new file mode 100644
index 00000000..113dd473
--- /dev/null
+++ b/src/dendropy/datamodel/treemodel/__init__.py
@@ -0,0 +1,29 @@
+#! /usr/bin/env python
+# -*- coding: utf-8 -*-
+
+##############################################################################
+##  DendroPy Phylogenetic Computing Library.
+##
+##  Copyright 2010-2015 Jeet Sukumaran and Mark T. Holder.
+##  All rights reserved.
+##
+##  See "LICENSE.rst" for terms and conditions of usage.
+##
+##  If you use this work or any portion thereof in published work,
+##  please cite it as:
+##
+##     Sukumaran, J. and M. T. Holder. 2010. DendroPy: a Python library
+##     for phylogenetic computing. Bioinformatics 26: 1569-1571.
+##
+##############################################################################
+
+"""
+This subpackage handles the core definition of tree data structure class,
+as well as all the structural classes that make up a tree.
+"""
+
+from dendropy.datamodel.treemodel._bipartition import Bipartition
+from dendropy.datamodel.treemodel._edge import Edge
+from dendropy.datamodel.treemodel._node import Node
+from dendropy.datamodel.treemodel._tree import Tree
+from dendropy.datamodel.treemodel._tree import AsciiTreePlot
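
The new subpackage splits the former monolithic ``treemodel`` module into ``_bipartition``, ``_edge``, ``_node``, and ``_tree`` while re-exporting the public classes, so existing import paths should keep resolving. A quick check, assuming the usual top-level aliases remain in place::

    import dendropy
    from dendropy.datamodel.treemodel import Tree, Node, Edge, Bipartition

    # Assumed: the top-level names still point at the same classes.
    print(dendropy.Tree is Tree, dendropy.Node is Node, dendropy.Edge is Edge)
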
diff --git a/src/dendropy/datamodel/treemodel/_bipartition.py b/src/dendropy/datamodel/treemodel/_bipartition.py
new file mode 100644
index 00000000..ccc2aa1a
--- /dev/null
+++ b/src/dendropy/datamodel/treemodel/_bipartition.py
@@ -0,0 +1,704 @@
+#! /usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from dendropy.utility import bitprocessing
+
+class Bipartition(object):
+    """
+    A bipartition on a tree.
+
+    A bipartition of a tree is a division or sorting of the leaves/tips of a
+    tree into two mutually-exclusive and collectively-comprehensive subsets,
+    obtained by bisecting the tree at a particular edge. There is thus a
+    one-to-one correspondence with an edge of a tree and a bipartition. The
+    term "split" is often also used to refer to the same concept, though this
+    is typically applied to unrooted trees.
+
+    A bipartition is modeled using a bitmask. This is a bit array
+    representing the membership of taxa, with the least-significant bit
+    corresponding to the first taxon, the next least-significant bit
+    corresponding to the second taxon, and so on, till the last taxon
+    corresponding to the most-significant bit. Taxon membership in one of two
+    arbitrary groups, '0' or '1', is indicated by its corresponding bit being
+    unset or set, respectively.
+
+    To allow comparisons and correct identification of the same bipartition
+    across different rotational and orientational representations of unrooted
+    trees, we *normalize* the bipartition such that the first taxon is always
+    assigned to group '0' for bipartition representations of unrooted trees.
+
+    The normalization of the bitmask loses information about the actual
+    descendants of a particular edge. Thus, in addition to the
+    :attr:`Bipartition.bitmask` attribute, each |Bipartition| object
+    also maintains a :attr:`Bipartition.leafset_bitmask` attribute which is
+    *unnormalized*. This is a bit array representing the presence or absence of
+    taxa in the subtree descending from the child node of the edge of which
+    this bipartition is associated. The least-significant bit corresponds to
+    the first taxon, the next least-significant bit corresponds to the second
+    taxon, and so on, with the last taxon corresponding to the most-significant
+    bit. For rooted trees, the value of :attr:`Bipartition.bitmask` and
+    :attr:`Bipartition.leafset_bitmask` are identical. For unrooted trees, they
+    may or may not be equal.
+
+    In general, we use :attr:`Bipartition.bitmask` data to establish the *identity*
+    of a split or bipartition across *different* trees: for example, when
+    computing the Robinson-Foulds distances between trees, or in assessing the
+    support for different bipartitions given an MCMC or bootstrap sample of trees.
+    Here the normalization of the bitmask in unrooted trees allows for the
+    (arbitrarily-labeled) group '0' to be consistent across different
+    representations, rotations, and orientations of trees.
+
+    On the other hand, we use :attr:`Bipartition.leafset_bitmask` data to work
+    with various ancestor-descendent relationships *within* the *same* tree:
+    for example, to quickly assess if a taxon descends from a particular
+    node in a given tree, or if a particular node is a common ancestor of
+    two taxa in a given tree.
+
+    The |Bipartition| object might be used as a key in dictionaries and in
+    look-up tables implemented as sets to allow for, e.g., calculation of
+    support in terms of the number of times a particular bipartition is observed.
+    The :attr:`Bipartition.bitmask` is used as hash value for this purpose. As
+    such, it is crucial that this value does not change once a particular
+    |Bipartition| object is stored in a dictionary or set. To this end,
+    we impose the constraint that |Bipartition| objects are immutable
+    unless the ``is_mutable`` attribute is explicitly set to |True| as a sort
+    of waiver signed by the client code. Client code does this at its risk,
+    with the warning that anything up to and including the implosion of the
+    universe may occur if the |Bipartition| object is a member of a set
+    or dictionary at the time (or, at the very least, the modified
+    |Bipartition| object may not be accessible from dictionaries
+    and sets in which it is stored, or may occlude other
+    |Bipartition| objects in the container).
+
+    Note
+    ----
+
+    There are two possible ways of mapping taxa to bits in a bitarray or bitstring.
+
+    In the "Least-Signficiant-Bit" (LSB) scheme, the first taxon corresponds to the
+    least-significant, or left-most bit. So, given four taxa, indexed from 1 to 4,
+    taxon 1 would map to 0b0001, taxon 2 would map to 0b0010, taxon 3 would map
+    to 0b0100, and taxon 4 would map to 0b1000.
+
+    In the "Most-Significant-Bit" (MSB) scheme, on the other hand, the first taxon
+    corresponds to the most-significant, or left-most, bit. So, given four
+    taxa, indexed from 1 to 4, taxon 1 would map to 0b1000, taxon 2 would map
+    to 0b0100, taxon 3 would map to 0b0010, and taxon 4 would map to 0b0001.
+
+    We selected the Least Significant Bit (LSB) approach because the MSB scheme
+    requires the size of the taxon namespace to be fixed before the index can be
+    assigned to any taxa. For example, under the MSB scheme, if there are 4
+    taxa, the bitmask for taxon 1 is 0b1000 == 8, but if another taxon is
+    added, then the bitmask for taxon 1 will become 0b10000 == 16. On the other
+    hand, under the LSB scheme, the bitmask for taxon 1 will be 0b0001 == 1 if
+    there are 4 taxa, and 0b00001 == 1 if there are 5 taxa, and so on. This
+    stability of taxon indexes even as the taxon namespace grows is a strongly
+    desirable property, hence the adoption of the LSB scheme.
+
+    Constraining the first taxon to be in group 0 (LSB-0) rather than group 1
+    (LSB-1) is motivated by the fact that, in the former, we can combine
+    the bitmasks of child nodes using OR (logical addition) operations when
+    calculating the bitmask for a parent node, whereas, with the latter, we
+    would need to use AND operations. The former strikes us as more intuitive.
+
+    """
+
+    def normalize_bitmask(bitmask, fill_bitmask, lowest_relevant_bit=1):
+        if bitmask & lowest_relevant_bit:
+            return (~bitmask) & fill_bitmask  # force least-significant bit to 0
+        else:
+            return bitmask & fill_bitmask  # keep least-significant bit as 0
+
+    normalize_bitmask = staticmethod(normalize_bitmask)
+
+    def is_trivial_bitmask(bitmask, fill_bitmask):
+        """
+        Returns True if the bipartition given by ``bitmask`` is trivial on the
+        taxa selected by ``fill_bitmask`` -- that is, if there are fewer than
+        two 1's or fewer than two 0's in ``bitmask`` (among the bits that are
+        set in ``fill_bitmask``).
+        """
+        masked_split = bitmask & fill_bitmask
+        if bitmask == 0 or bitmask == fill_bitmask:
+            return True
+        if ((masked_split - 1) & masked_split) == 0:
+            return True
+        cm = (~bitmask) & fill_bitmask
+        if ((cm - 1) & cm) == 0:
+            return True
+        return False
+
+    is_trivial_bitmask = staticmethod(is_trivial_bitmask)
+
+    def is_trivial_leafset(leafset_bitmask):
+        return bitprocessing.num_set_bits(leafset_bitmask) == 1
+
+    is_trivial_leafset = staticmethod(is_trivial_leafset)
+
+    def is_compatible_bitmasks(m1, m2, fill_bitmask):
+        """
+        Returns |True| if ``m1`` is compatible with ``m2``
+
+        Parameters
+        ----------
+        m1 : int
+            A bitmask representing a split.
+        m2 : int
+            A bitmask representing a split.
+
+        Returns
+        -------
+        bool
+            |True| if ``m1`` is compatible with ``m2``. |False| otherwise.
+        """
+        if fill_bitmask != 0:
+            m1 = fill_bitmask & m1
+            m2 = fill_bitmask & m2
+        if 0 == (m1 & m2):
+            return True
+        c2 = m1 ^ m2
+        if 0 == (m1 & c2):
+            return True
+        c1 = fill_bitmask ^ m1
+        if 0 == (c1 & m2):
+            return True
+        if 0 == (c1 & c2):
+            return True
+        return False
+
+    is_compatible_bitmasks = staticmethod(is_compatible_bitmasks)
+
+    ## Life-cycle
+
+    def __init__(self, **kwargs):
+        """
+
+        Keyword Arguments
+        -----------------
+        bitmask : integer
+            A bit array representing the membership of taxa, with the
+            least-significant bit corresponding to the first taxon, the next
+            least-significant bit corresponding to the second taxon, and so on,
+            till the last taxon corresponding to the most-significant bit.
+            Taxon membership in one of two arbitrary groups, '0' or '1', is
+            indicated by its corresponding bit being unset or set,
+            respectively.
+        leafset_bitmask : integer
+            A bit array representing the presence or absence of taxa in the
+            subtree descending from the child node of the edge of which this
+            bipartition is associated. The least-significant bit corresponds to
+            the first taxon, the next least-significant bit corresponds to the
+            second taxon, and so on, with the last taxon corresponding to the
+            most-significant bit.
+        tree_leafset_bitmask : integer
+            The ``leafset_bitmask`` of the root edge of the tree with which this
+            bipartition is associated. In general, this will be $0b1111...n$,
+            where $n$ is the number of taxa, *except* in cases of trees with
+            incomplete leaf-sets, where the positions corresponding to the
+            missing taxa will have the bits unset.
+        is_rooted : bool
+            Specifies whether or not the tree with which this bipartition is
+            associated is rooted.
+        """
+        self._split_bitmask = kwargs.get("bitmask", 0)
+        self._leafset_bitmask = kwargs.get("leafset_bitmask", self._split_bitmask)
+        self._tree_leafset_bitmask = kwargs.get("tree_leafset_bitmask", None)
+        self._lowest_relevant_bit = None
+        self._is_rooted = kwargs.get("is_rooted", None)
+        # self.edge = kwargs.get("edge", None)
+        is_mutable = kwargs.get("is_mutable", None)
+        if kwargs.get("compile_bipartition", True):
+            self.is_mutable = True
+            self.compile_split_bitmask(
+                leafset_bitmask=self._leafset_bitmask,
+                tree_leafset_bitmask=self._tree_leafset_bitmask,
+            )
+            if is_mutable is None:
+                self.is_mutable = True
+            else:
+                self.is_mutable = is_mutable
+        elif is_mutable is not None:
+            self.is_mutable = is_mutable
+
+    ## Identity
+
+    def __hash__(self):
+        assert not self.is_mutable, "Bipartition is mutable: hash is unstable"
+        return self._split_bitmask or 0
+
+    def __eq__(self, other):
+        # return self._split_bitmask == other._split_bitmask
+        return (
+            self._split_bitmask is not None
+            and self._split_bitmask == other._split_bitmask
+        ) or (self._split_bitmask is other._split_bitmask)
+
+    ## All properties are publically read-only if not mutable
+
+    def _get_split_bitmask(self):
+        return self._split_bitmask
+
+    def _set_split_bitmask(self, value):
+        assert self.is_mutable, "Bipartition instance is not mutable"
+        self._split_bitmask = value
+
+    split_bitmask = property(_get_split_bitmask, _set_split_bitmask)
+
+    def _get_leafset_bitmask(self):
+        return self._leafset_bitmask
+
+    def _set_leafset_bitmask(self, value):
+        assert self.is_mutable, "Bipartition instance is not mutable"
+        self._leafset_bitmask = value
+
+    leafset_bitmask = property(_get_leafset_bitmask, _set_leafset_bitmask)
+
+    def _get_tree_leafset_bitmask(self):
+        return self._tree_leafset_bitmask
+
+    def _set_tree_leafset_bitmask(self, value):
+        assert self.is_mutable, "Bipartition instance is not mutable"
+        self.compile_tree_leafset_bitmask(value)
+
+    tree_leafset_bitmask = property(
+        _get_tree_leafset_bitmask, _set_tree_leafset_bitmask
+    )
+
+    def _get_is_rooted(self):
+        return self._is_rooted
+
+    def _set_is_rooted(self, value):
+        assert self.is_mutable, "Bipartition instance is not mutable"
+        self._is_rooted = value
+
+    is_rooted = property(_get_is_rooted, _set_is_rooted)
+
+    ## Representation
+
+    def __str__(self):
+        return bin(self._split_bitmask)[2:].rjust(
+            bitprocessing.bit_length(self._tree_leafset_bitmask), "0"
+        )
+
+    def __int__(self):
+        return self._split_bitmask
+
+    def split_as_int(self):
+        return self._split_bitmask
+
+    def leafset_as_int(self):
+        return self._leafset_bitmask
+
+    def split_as_bitstring(self, symbol0="0", symbol1="1", reverse=False):
+        """
+        Composes and returns a representation of the bipartition as a
+        bitstring.
+
+        Parameters
+        ----------
+        symbol0 : str
+            The symbol to represent group '0' in the bitmask.
+        symbol1 : str
+            The symbol to represent group '1' in the bitmask.
+        reverse : bool
+            If |True|, then the first taxon will correspond to the
+            most-significant bit, instead of the least-significant bit, as is
+            the default.
+
+        Returns
+        -------
+        str
+            The bitstring representing the bipartition.
+
+        Example
+        -------
+        To represent a bipartition in the same scheme used by, e.g. PAUP* or
+        Mr. Bayes::
+
+            print(bipartition.split_as_bitstring('.', '*', reverse=True))
+        """
+        return self.bitmask_as_bitstring(
+            mask=self._split_bitmask, symbol0=symbol0, symbol1=symbol1, reverse=reverse
+        )
+
+    def leafset_as_bitstring(self, symbol0="0", symbol1="1", reverse=False):
+        """
+        Composes and returns a representation of the bipartition leafset as a
+        bitstring.
+
+        Parameters
+        ----------
+        symbol0 : str
+            The symbol to represent group '0' in the bitmask.
+        symbol1 : str
+            The symbol to represent group '1' in the bitmask.
+        reverse : bool
+            If |True|, then the first taxon will correspond to the
+            most-significant bit, instead of the least-significant bit, as is
+            the default.
+
+        Returns
+        -------
+        str
+            The bitstring representing the bipartition.
+
+        Example
+        -------
+        To represent a bipartition in the same scheme used by, e.g. PAUP* or
+        Mr. Bayes::
+
+            print(bipartition.leafset_as_bitstring('.', '*', reverse=True))
+        """
+        return self.bitmask_as_bitstring(
+            mask=self._leafset_bitmask,
+            symbol0=symbol0,
+            symbol1=symbol1,
+            reverse=reverse,
+        )
+
+    def bitmask_as_bitstring(self, mask, symbol0=None, symbol1=None, reverse=False):
+        return bitprocessing.int_as_bitstring(
+            mask,
+            length=bitprocessing.bit_length(self._tree_leafset_bitmask),
+            symbol0=symbol0,
+            symbol1=symbol1,
+            reverse=reverse,
+        )
+
+    ## Calculation
+
+    def compile_tree_leafset_bitmask(
+        self, tree_leafset_bitmask, lowest_relevant_bit=None
+    ):
+        """
+        Avoids recalculation of ``lowest_relevant_bit`` if specified.
+        """
+        assert self.is_mutable, "Bipartition instance is not mutable"
+        self._tree_leafset_bitmask = tree_leafset_bitmask
+        if lowest_relevant_bit is not None:
+            self._lowest_relevant_bit = lowest_relevant_bit
+        elif self._tree_leafset_bitmask:
+            self._lowest_relevant_bit = bitprocessing.least_significant_set_bit(
+                self._tree_leafset_bitmask
+            )
+        else:
+            self._lowest_relevant_bit = None
+        return self._tree_leafset_bitmask
+
+    def compile_leafset_bitmask(self, leafset_bitmask=None, tree_leafset_bitmask=None):
+        assert self.is_mutable, "Bipartition instance is not mutable"
+        if tree_leafset_bitmask is not None:
+            self.compile_tree_leafset_bitmask(tree_leafset_bitmask)
+        if leafset_bitmask is None:
+            leafset_bitmask = self._leafset_bitmask
+        if self._tree_leafset_bitmask:
+            self._leafset_bitmask = leafset_bitmask & self._tree_leafset_bitmask
+        else:
+            self._leafset_bitmask = leafset_bitmask
+        return self._leafset_bitmask
+
+    def compile_split_bitmask(
+        self,
+        leafset_bitmask=None,
+        tree_leafset_bitmask=None,
+        is_rooted=None,
+        is_mutable=True,
+    ):
+        """
+        Updates the values of the various masks specified and calculates the
+        normalized bipartition bitmask.
+
+        If a rooted bipartition, then this is set to the value of the leafset
+        bitmask.
+        If an unrooted bipartition, then the leafset bitmask is normalized such that
+        the least-significant bit (i.e., the group to which the first taxon
+        belongs) is set to '0'.
+
+        Also makes this bipartition immutable (unless ``is_mutable`` is |False|),
+        which facilitates it being used in dictionaries and sets.
+
+        Parameters
+        ----------
+        leafset_bitmask : integer
+            A bit array representing the presence or absence of taxa in the
+            subtree descending from the child node of the edge of which this
+            bipartition is associated. The least-significant bit corresponds to
+            the first taxon, the next least-significant bit corresponds to the
+            second taxon, and so on, with the last taxon corresponding to the
+            most-significant bit. If not specified or |None|, the current value
+            of ``self.leafset_bitmask`` is used.
+        tree_leafset_bitmask : integer
+            The ``leafset_bitmask`` of the root edge of the tree with which this
+            bipartition is associated. In general, this will be $0b1111...n$,
+            where $n$ is the number of taxa, *except* in cases of trees with
+            incomplete leaf-sets, where the positions corresponding to the
+            missing taxa will have the bits unset. If not specified or |None|,
+            the current value of ``self.tree_leafset_bitmask`` is used.
+        is_rooted : bool
+            Specifies whether or not the tree with which this bipartition is
+            associated is rooted. If not specified or |None|, the current value
+            of ``self.is_rooted`` is used.
+
+        Returns
+        -------
+        integer
+            The bipartition bitmask.
+        """
+        assert self.is_mutable, "Bipartition instance is not mutable"
+        if is_rooted is not None:
+            self._is_rooted = is_rooted
+        if tree_leafset_bitmask:
+            self.compile_tree_leafset_bitmask(tree_leafset_bitmask=tree_leafset_bitmask)
+        if leafset_bitmask:
+            self.compile_leafset_bitmask(leafset_bitmask=leafset_bitmask)
+        if self._leafset_bitmask is None:
+            return
+        if self._tree_leafset_bitmask is None:
+            return
+        if self._is_rooted:
+            self._split_bitmask = self._leafset_bitmask
+        else:
+            self._split_bitmask = Bipartition.normalize_bitmask(
+                bitmask=self._leafset_bitmask,
+                fill_bitmask=self._tree_leafset_bitmask,
+                lowest_relevant_bit=self._lowest_relevant_bit,
+            )
+        if is_mutable is not None:
+            self.is_mutable = is_mutable
+        return self._split_bitmask
+
+    def compile_bipartition(self, is_mutable=None):
+        """
+        Updates the values of the various masks specified and calculates the
+        normalized bipartition bitmask.
+
+        If a rooted bipartition, then this is set to the value of the leafset
+        bitmask.
+        If an unrooted bipartition, then the leafset bitmask is normalized such that
+        the least-significant bit (i.e., the group to which the first taxon
+        belongs) is set to '0'.
+
+        Also makes this bipartition immutable (unless ``is_mutable`` is |False|),
+        which facilitates it being used in dictionaries and sets.
+
+        Note that this requires full population of the following fields:
+            - self._leafset_bitmask
+            - self._tree_leafset_bitmask
+        """
+        self.compile_split_bitmask(
+            leafset_bitmask=self._leafset_bitmask,
+            tree_leafset_bitmask=self._tree_leafset_bitmask,
+            is_rooted=self._is_rooted,
+            is_mutable=is_mutable,
+        )
+
+    ## Operations
+
+    def normalize(self, bitmask, convention="lsb0"):
+        """
+        Return ``bitmask`` normalized under ``convention``: "lsb0" (the
+        default) forces the bit corresponding to the first taxon to 0, while
+        "lsb1" forces it to 1.
+        """
+        if convention == "lsb0":
+            if self._lowest_relevant_bit & bitmask:
+                return (~bitmask) & self._tree_leafset_bitmask
+            else:
+                return bitmask & self._tree_leafset_bitmask
+        elif convention == "lsb1":
+            if self._lowest_relevant_bit & bitmask:
+                return bitmask & self._tree_leafset_bitmask
+            else:
+                return (~bitmask) & self._tree_leafset_bitmask
+        else:
+            raise ValueError("Unrecognized convention: {}".format(convention))
+
+    def is_compatible_with(self, other):
+        """
+        Returns |True| if ``other`` is compatible with self.
+
+        Parameters
+        ----------
+        other : |Bipartition|
+            The bipartition to check for compatibility.
+
+        Returns
+        -------
+        bool
+            |True| if ``other`` is compatible with ``self``; |False| otherwise.
+        """
+        m1 = self._split_bitmask
+        if isinstance(other, int):
+            m2 = other
+        else:
+            m2 = other._split_bitmask
+        return Bipartition.is_compatible_bitmasks(m1, m2, self._tree_leafset_bitmask)
+
+    def is_incompatible_with(self, other):
+        """
+        Returns |True| if ``other`` conflicts with self.
+
+        Parameters
+        ----------
+        other : |Bipartition|
+            The bipartition to check for conflicts.
+
+        Returns
+        -------
+        bool
+            |True| if ``other`` conflicts with ``self``; |False| otherwise.
+        """
+        return not self.is_compatible_with(other)
+
+    def is_nested_within(self, other, is_other_masked_for_tree_leafset=False):
+        """
+        Returns |True| if the current bipartition is contained
+        within other.
+
+        Parameters
+        ----------
+        other : |Bipartition|
+            The bipartition to check.
+
+        Returns
+        -------
+        bool
+            |True| if the bipartition is "contained" within ``other``
+        """
+        if self._is_rooted:
+            m1 = self._leafset_bitmask
+            m2 = other._leafset_bitmask
+        else:
+            m1 = self._split_bitmask
+            m2 = other._split_bitmask
+        if not is_other_masked_for_tree_leafset:
+            m2 = self._tree_leafset_bitmask & m2
+        return (m1 & m2) == m1
+
+    def is_leafset_nested_within(self, other):
+        """
+        Returns |True| if the leafset of ``self`` is a subset of the leafset of
+        ``other``.
+
+        Parameters
+        ----------
+        other : |Bipartition|
+            The bipartition to check for compatibility.
+
+        Returns
+        -------
+        bool
+            |True| if the leafset of ``self`` is contained in ``other``.
+        """
+        if isinstance(other, int):
+            m2 = other
+        else:
+            m2 = other._leafset_bitmask
+        m2 = self._tree_leafset_bitmask & m2
+        return (m2 & self._leafset_bitmask) == self._leafset_bitmask
+
+    def is_trivial(self):
+        """
+        Returns
+        -------
+        bool
+            |True| if this bipartition divides a leaf and the rest of the
+            tree.
+        """
+        return Bipartition.is_trivial_bitmask(
+            self._split_bitmask, self._tree_leafset_bitmask
+        )
+
+    def split_as_newick_string(
+        self, taxon_namespace, preserve_spaces=False, quote_underscores=True
+    ):
+        """
+        Represents this bipartition split as a newick string.
+
+        Parameters
+        ----------
+        taxon_namespace : |TaxonNamespace| instance
+            The operational taxonomic unit concept namespace to reference.
+        preserve_spaces : boolean, optional
+            If |False| (default), then spaces in taxon labels will be replaced
+            by underscores. If |True|, then taxon labels with spaces will be
+            wrapped in quotes.
+        quote_underscores : boolean, optional
+            If |True| (default), then taxon labels with underscores will be
+            wrapped in quotes. If |False|, then the labels will not be wrapped
+            in quotes.
+
+        Returns
+        -------
+        string
+            NEWICK representation of split specified by ``bitmask``.
+        """
+        return taxon_namespace.bitmask_as_newick_string(
+            bitmask=self._split_bitmask,
+            preserve_spaces=preserve_spaces,
+            quote_underscores=quote_underscores,
+        )
+
+    def leafset_as_newick_string(
+        self, taxon_namespace, preserve_spaces=False, quote_underscores=True
+    ):
+        """
+        Represents this bipartition leafset as a newick string.
+
+        Parameters
+        ----------
+        taxon_namespace : |TaxonNamespace| instance
+            The operational taxonomic unit concept namespace to reference.
+        preserve_spaces : boolean, optional
+            If |False| (default), then spaces in taxon labels will be replaced
+            by underscores. If |True|, then taxon labels with spaces will be
+            wrapped in quotes.
+        quote_underscores : boolean, optional
+            If |True| (default), then taxon labels with underscores will be
+            wrapped in quotes. If |False|, then the labels will not be wrapped
+            in quotes.
+
+        Returns
+        -------
+        string
+            NEWICK representation of split specified by ``bitmask``.
+        """
+        return taxon_namespace.bitmask_as_newick_string(
+            bitmask=self._leafset_bitmask,
+            preserve_spaces=preserve_spaces,
+            quote_underscores=quote_underscores,
+        )
+
+    def leafset_taxa(self, taxon_namespace, index=0):
+        """
+        Returns list of |Taxon| objects in the leafset of this
+        bipartition.
+
+        Parameters
+        ----------
+        taxon_namespace : |TaxonNamespace| instance
+            The operational taxonomic unit concept namespace to reference.
+        index : integer, optional
+            Start from this |Taxon| object instead of the first
+            |Taxon| object in the collection.
+
+        Returns
+        -------
+        :py:class:`list` [|Taxon|]
+            List of |Taxon| objects specified or spanned by
+            ``bitmask``.
+        """
+        return taxon_namespace.bitmask_taxa_list(
+            bitmask=self._leafset_bitmask, index=index
+        )
+
+    def _format_bipartition(self, length=None, **kwargs):
+        if length is None:
+            length = len(kwargs.get("taxon_namespace"))
+        return bitprocessing.int_as_bitstring(self._split_bitmask, length=length)
+
+    # def as_newick_string
+    # def is_trivial
+    # def is_non_singleton
+    # def leafset_hash
+    # def leafset_as_bitstring
+    # def is_compatible
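
A minimal sketch of the normalization behaviour described in the ``Bipartition`` docstring above: for an unrooted tree the group containing the first taxon is always reported as '0', so complementary leafsets collapse onto the same split (the bitmask values are illustrative)::

    from dendropy.datamodel.treemodel import Bipartition

    b1 = Bipartition(leafset_bitmask=0b0011, tree_leafset_bitmask=0b1111, is_rooted=False)
    b2 = Bipartition(leafset_bitmask=0b1100, tree_leafset_bitmask=0b1111, is_rooted=False)
    # Both normalize to the split with taxon 1 (lowest bit) in group '0'.
    print(b1.split_bitmask == b2.split_bitmask)   # True
    print(b1.split_as_bitstring())                # e.g. "1100"
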
diff --git a/src/dendropy/datamodel/treemodel/_edge.py b/src/dendropy/datamodel/treemodel/_edge.py
new file mode 100644
index 00000000..ee6b34e9
--- /dev/null
+++ b/src/dendropy/datamodel/treemodel/_edge.py
@@ -0,0 +1,258 @@
+#! /usr/bin/env python
+# -*- coding: utf-8 -*-
+
+import copy
+from dendropy.utility.textprocessing import StringIO
+from dendropy.datamodel import basemodel
+from dendropy.datamodel.treemodel import _bipartition
+
+class Edge(basemodel.DataObject, basemodel.Annotable):
+    """
+    An :term:`edge` on a :term:`tree`.
+    """
+
+    def __init__(self, **kwargs):
+        """
+        Keyword Arguments
+        -----------------
+        head_node : |Node|, optional
+            The node to which this edge links, i.e., the child node of this
+            edge's ``tail_node``.
+        length : numerical, optional
+            A value representing the weight of the edge.
+        rootedge : boolean, optional
+            Is the child node of this edge the root or seed node of the tree?
+        label : string, optional
+            Label for this edge.
+
+        """
+        basemodel.DataObject.__init__(self, label=kwargs.pop("label", None))
+        self._head_node = kwargs.pop("head_node", None)
+        if "tail_node" in kwargs:
+            raise TypeError(
+                "Setting the tail node directly is no longer supported: instead, set"
+                " the parent node of the head node"
+            )
+        self.rootedge = kwargs.pop("rootedge", None)
+        self.length = kwargs.pop("length", None)
+        if kwargs:
+            raise TypeError("Unsupported keyword arguments: {}".format(kwargs))
+
+        self._bipartition = None
+        self.comments = []
+
+    def __copy__(self, memo=None):
+        raise TypeError("Cannot directly copy Edge")
+
+    def taxon_namespace_scoped_copy(self, memo=None):
+        raise TypeError("Cannot directly copy Edge")
+
+    def __deepcopy__(self, memo=None):
+        # call Annotable.__deepcopy__()
+        return basemodel.Annotable.__deepcopy__(self, memo=memo)
+        # return super(Edge, self).__deepcopy__(memo=memo)
+
+    def __hash__(self):
+        return id(self)
+
+    def __eq__(self, other):
+        return self is other
+
+    def __lt__(self, other):
+        return id(self) < id(other)
+
+    def _get_tail_node(self):
+        if self._head_node is None:
+            return None
+        return self._head_node._parent_node
+
+    def _set_tail_node(self, node):
+        if self._head_node is None:
+            raise ValueError("'_head_node' is 'None': cannot assign 'tail_node'")
+        # Go through managed property instead of
+        # setting attribute to ensure book-keeping
+        self._head_node.parent_node = node
+    tail_node = property(_get_tail_node, _set_tail_node)
+
+    def _get_head_node(self):
+        return self._head_node
+
+    def _set_head_node(self, node):
+        # Go through managed property instead of setting attribute to ensure
+        # book-keeping; following should also set ``_head_node`` of ``self``
+        node.edge = self
+    head_node = property(_get_head_node, _set_head_node)
+
+    def is_leaf(self):
+        "Returns True if the head node has no children"
+        return self.head_node and self.head_node.is_leaf()
+
+    def is_terminal(self):
+        return self.is_leaf()
+
+    def is_internal(self):
+        "Returns True if the head node has children"
+        return self.head_node and not self.head_node.is_leaf()
+
+    def get_adjacent_edges(self):
+        """
+        Returns a list of all edges that "share" a node with ``self``.
+        """
+        he = [i for i in self.head_node.incident_edges() if i is not self]
+        te = [i for i in self.tail_node.incident_edges() if i is not self]
+        he.extend(te)
+        return he
+    adjacent_edges = property(get_adjacent_edges)
+
+    def collapse(self, adjust_collapsed_head_children_edge_lengths=False):
+        """
+        Inserts all children of the head_node of self as children of the
+        tail_node of self in the same place in the child_node list that
+        head_node had occupied. The edge length and head_node will no longer be
+        part of the tree unless ``adjust_collapsed_head_children_edge_lengths``
+        is True.
+        """
+        to_del = self.head_node
+        parent = self.tail_node
+        if not parent:
+            return
+        children = to_del.child_nodes()
+        if not children:
+            raise ValueError("collapse_self called with a terminal.")
+        pos = parent.child_nodes().index(to_del)
+        parent.remove_child(to_del)
+        for child in children:
+            parent.insert_child(pos, child)
+            pos += 1
+            if adjust_collapsed_head_children_edge_lengths and self.length is not None:
+                # print id(child), child.edge.length, self.length
+                if child.edge.length is None:
+                    child.edge.length = self.length
+                else:
+                    child.edge.length += self.length
+
+    def invert(self, update_bipartitions=False):
+        """
+        Changes polarity of edge.
+        """
+        # self.head_node, self.tail_node = self.tail_node, self.head_node
+
+        if not self.head_node:
+            raise ValueError("Cannot invert edge with 'None' for head node")
+        if not self.tail_node:
+            raise ValueError("Cannot invert edge with 'None' for tail node")
+
+        old_head_node = self.head_node
+        new_tail_node = old_head_node
+        old_tail_node = self.tail_node
+        new_head_node = old_tail_node
+        grandparent = old_tail_node._parent_node
+        if grandparent is not None:
+            for idx, ch in enumerate(grandparent._child_nodes):
+                if ch is old_tail_node:
+                    grandparent._child_nodes[idx] = old_head_node
+                    break
+            else:
+                # we did not break loop: force insertion of old_head_node if
+                # not already there
+                if old_head_node not in grandparent._child_nodes:
+                    grandparent._child_nodes.append(old_head_node)
+        assert old_head_node in old_tail_node._child_nodes
+        old_tail_node.remove_child(old_head_node)
+        assert old_head_node not in old_tail_node._child_nodes
+        old_head_node.add_child(old_tail_node)
+        old_tail_node.edge.length, old_head_node.edge.length = (
+            old_head_node.edge.length,
+            old_tail_node.edge_length,
+        )
+
+    def _get_bipartition(self):
+        if self._bipartition is None:
+            self._bipartition = _bipartition.Bipartition(
+                edge=self,
+                is_mutable=True,
+            )
+        return self._bipartition
+
+    def _set_bipartition(self, v=None):
+        self._bipartition = v
+    bipartition = property(_get_bipartition, _set_bipartition)
+
+    def _get_split_bitmask(self):
+        return self.bipartition._split_bitmask
+
+    def _set_split_bitmask(self, h):
+        self.bipartition._split_bitmask = h
+    split_bitmask = property(_get_split_bitmask, _set_split_bitmask)
+
+    def _get_leafset_bitmask(self):
+        return self.bipartition._leafset_bitmask
+
+    def _set_leafset_bitmask(self, h):
+        self.bipartition._leafset_bitmask = h
+    leafset_bitmask = property(_get_leafset_bitmask, _set_leafset_bitmask)
+
+    def _get_tree_leafset_bitmask(self):
+        return self.bipartition._tree_leafset_bitmask
+
+    def _set_tree_leafset_bitmask(self, h):
+        self.bipartition._tree_leafset_bitmask = h
+    tree_leafset_bitmask = property(
+        _get_tree_leafset_bitmask, _set_tree_leafset_bitmask
+    )
+
+    def split_as_bitstring(self):
+        return self.bipartition.split_as_bitstring()
+
+    def leafset_as_bitstring(self):
+        return self.bipartition.leafset_as_bitstring()
+
+    def description(
+        self, depth=1, indent=0, itemize="", output=None, taxon_namespace=None
+    ):
+        """
+        Returns description of object, up to level ``depth``.
+        """
+        if depth is None or depth < 0:
+            return
+        output_strio = StringIO()
+        if self.label is None:
+            label = " (%s, Length=%s)" % (id(self), str(self.length))
+        else:
+            label = " (%s: '%s', Length=%s)" % (id(self), self.label, str(self.length))
+        output_strio.write(
+            "%s%sEdge object at %s%s" % (indent * " ", itemize, hex(id(self)), label)
+        )
+        if depth >= 1:
+            leader1 = " " * (indent + 4)
+            leader2 = " " * (indent + 8)
+            output_strio.write("\n%s[Length]" % leader1)
+            if self.length is not None:
+                length = self.length
+            else:
+                length = "None"
+            output_strio.write("\n%s%s" % (leader2, length))
+            output_strio.write("\n%s[Tail Node]" % leader1)
+            if self.tail_node is not None:
+                tn = self.tail_node.description(0)
+            else:
+                tn = "None"
+            output_strio.write("\n%s%s" % (leader2, tn))
+            output_strio.write("\n%s[Head Node]" % leader1)
+            if self.head_node is not None:
+                hn = self.head_node.description(0)
+            else:
+                hn = "None"
+            output_strio.write("\n%s%s" % (leader2, hn))
+        s = output_strio.getvalue()
+        if output is not None:
+            output.write(s)
+        return s
+
+    def _format_edge(self, **kwargs):
+        ef = kwargs.get('edge_formatter', None)
+        if ef:
+            return ef(self)
+        return str(self)
+
+
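
In this refactor an ``Edge`` no longer stores its tail node directly: ``tail_node`` is derived from the head node's parent, and assigning it is routed through the head node's ``parent_node``. A small illustration using the public tree API (the Newick string is made up)::

    import dendropy

    tree = dendropy.Tree.get(data="((A:1,B:2):1,C:3);", schema="newick")
    for edge in tree.preorder_edge_iter():
        # The root/seed edge has no tail node; every other edge reports its
        # head node's parent.
        print(edge.length, edge.head_node.taxon, edge.tail_node is edge.head_node.parent_node)
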
diff --git a/src/dendropy/datamodel/treemodel/_node.py b/src/dendropy/datamodel/treemodel/_node.py
new file mode 100644
index 00000000..0b45e0f7
--- /dev/null
+++ b/src/dendropy/datamodel/treemodel/_node.py
@@ -0,0 +1,1674 @@
+#! /usr/bin/env python
+# -*- coding: utf-8 -*-
+
+from dendropy.utility.textprocessing import StringIO
+from dendropy.utility import deprecate
+from dendropy.utility import error
+from dendropy.datamodel import basemodel
+from dendropy.datamodel.treemodel import _edge
+
+class Node(basemodel.DataObject, basemodel.Annotable):
+    """
+    A :term:`node` on a :term:`tree`.
+    """
+
+    def edge_factory(cls, **kwargs):
+        """
+        Creates and returns an |Edge| object.
+
+        Derived classes can override this method to provide support for
+        specialized or different types of edges on the tree.
+
+        Parameters
+        ----------
+
+        kwargs : keyword arguments
+            Passed directly to constructor of |Edge|.
+
+        Returns
+        -------
+        |Edge|
+            A new |Edge| object.
+
+        """
+        return _edge.Edge(**kwargs)
+
+    edge_factory = classmethod(edge_factory)
+
+    def __init__(self, **kwargs):
+        """
+        Keyword Arguments
+        -----------------
+        taxon : |Taxon|, optional
+            The |Taxon| instance representing the operational taxonomic
+            unit concept associated with this Node.
+        label : string, optional
+            A label for this node.
+        edge_length : numeric, optional
+            Length or weight of the edge subtending this node.
+
+        """
+        basemodel.DataObject.__init__(self, label=kwargs.pop("label", None))
+        self.taxon = kwargs.pop("taxon", None)
+        self.age = None
+        self._edge = None
+        self._child_nodes = []
+        self._parent_node = None
+        self.edge = self.edge_factory(
+            head_node=self, length=kwargs.pop("edge_length", None)
+        )
+        if kwargs:
+            raise TypeError("Unsupported keyword arguments: {}".format(kwargs))
+        self.comments = []
+
+    def __copy__(self, memo=None):
+        raise TypeError("Cannot directly copy Edge")
+
+    def taxon_namespace_scoped_copy(self, memo=None):
+        raise TypeError("Cannot directly copy Node")
+
+    def __deepcopy__(self, memo=None):
+        return basemodel.Annotable.__deepcopy__(self, memo=memo)
+        # if memo is None:
+        #     memo = {}
+        # other = basemodel.Annotable.__deepcopy__(self, memo=memo)
+        # memo[id(self._child_nodes)] = other._child_nodes
+        # for ch in self._child_nodes:
+        #     try:
+        #         och = memo[id(ch)]
+        #         if och not in other._child_nodes:
+        #             other._child_nodes.append(och)
+        #     except KeyError:
+        #         och = copy.deepcopy(ch, memo)
+        #         memo[id(chd)] = och
+        #         if och not in other._child_nodes:
+        #             other._child_nodes.append(och)
+        # return other
+        # return super(Node, self).__deepcopy__(memo=memo)
+
+    def __hash__(self):
+        return id(self)
+
+    def __eq__(self, other):
+        # IMPORTANT LESSON LEARNED: if you define __hash__, you *must* define __eq__
+        return self is other
+
+    def __repr__(self):
+        return "<{} object at {}: '{}' ({})>".format(
+            self.__class__.__name__, hex(id(self)), self._label, repr(self.taxon)
+        )
+
+    def __iter__(self, *args, **kwargs):
+        return self.preorder_iter(*args, **kwargs)
+
+    def preorder_iter(self, filter_fn=None):
+        """
+        Pre-order iterator over nodes of subtree rooted at this node.
+
+        Visits self and all descendant nodes, with each node visited before its
+        children. Nodes can optionally be filtered by ``filter_fn``: only nodes
+        for which ``filter_fn`` returns |True| when called with the node as an
+        argument are yielded.
+
+        Parameters
+        ----------
+        filter_fn : function object, optional
+            A function object that takes a |Node| object as an argument
+            and returns |True| if the |Node| object is to be yielded by
+            the iterator, or |False| if not. If ``filter_fn`` is |None|
+            (default), then all nodes visited will be yielded.
+
+        Returns
+        -------
+        :py:class:`collections.Iterator` [|Node|]
+            An iterator yielding nodes of the subtree rooted at this node in
+            pre-order sequence.
+        """
+        stack = [self]
+        while stack:
+            node = stack.pop()
+            if filter_fn is None or filter_fn(node):
+                yield node
+            stack.extend(n for n in reversed(node._child_nodes))
+
+    def preorder_internal_node_iter(self, filter_fn=None, exclude_seed_node=False):
+        """
+        Pre-order iterator over internal nodes of subtree rooted at this node.
+
+        Visits self and all internal descendant nodes, with each node visited
+        before its children. In DendroPy, "internal nodes" are nodes that have
+        at least one child node, and thus the root or seed node is typically included
+        unless ``exclude_seed_node`` is |True|. Nodes can optionally be filtered
+        by ``filter_fn``: only nodes for which ``filter_fn`` returns |True| when
+        passed the node as an argument are yielded.
+
+        Parameters
+        ----------
+        filter_fn : function object, optional
+            A function object that takes a |Node| object as an argument
+            and returns |True| if the |Node| object is to be yielded by
+            the iterator, or |False| if not. If ``filter_fn`` is |None|
+            (default), then all nodes visited will be yielded.
+        exclude_seed_node : boolean, optional
+            If |False| (default), then the seed node or root is visited. If
+            |True|, then the seed node is skipped.
+
+        Returns
+        -------
+        :py:class:`collections.Iterator` [|Node|]
+            An iterator yielding the internal nodes of the subtree rooted at
+            this node in pre-order sequence.
+        """
+        if exclude_seed_node:
+            froot = lambda x: x._parent_node is not None
+        else:
+            froot = lambda x: True
+        if filter_fn:
+            f = lambda x: (froot(x) and x._child_nodes and filter_fn(x)) or None
+        else:
+            f = lambda x: (x and froot(x) and x._child_nodes) or None
+        return self.preorder_iter(filter_fn=f)
+
+    def postorder_iter(self, filter_fn=None):
+        """
+        Post-order iterator over nodes of subtree rooted at this node.
+
+        Visits self and all descendant nodes, with each node visited after its
+        children. Nodes can optionally be filtered by ``filter_fn``: only nodes
+        for which ``filter_fn`` returns |True| when called with the node as an
+        argument are yielded.
+
+        Parameters
+        ----------
+        filter_fn : function object, optional
+            A function object that takes a |Node| object as an argument
+            and returns |True| if the |Node| object is to be yielded by
+            the iterator, or |False| if not. If ``filter_fn`` is |None|
+            (default), then all nodes visited will be yielded.
+
+        Returns
+        -------
+        :py:class:`collections.Iterator` [|Node|]
+            An iterator yielding the nodes of the subtree rooted at
+            this node in post-order sequence.
+        """
+        # if self._child_nodes:
+        #     for nd in self._child_nodes:
+        #         for ch in nd.postorder_iter(filter_fn=filter_fn):
+        #             yield ch
+        # if filter_fn is None or filter_fn(self):
+        #     yield self
+        # return
+
+        # stack = [(self, False)]
+        # while stack:
+        #     node, state = stack.pop(0)
+        #     if state:
+        #         if filter_fn is None or filter_fn(node):
+        #             yield node
+        #     else:
+        #         stack.insert(0, (node, True))
+        #         child_nodes = [(n, False) for n in node._child_nodes]
+        #         child_nodes.extend(stack)
+        #         stack = child_nodes
+
+        ## Prefer `pop()` to `pop(0)`.
+        ## Thanks to Mark T. Holder
+        ## From peyotl commits: d1ffef2 + 19fdea1
+        stack = [(self, False)]
+        while stack:
+            node, state = stack.pop()
+            if state:
+                if filter_fn is None or filter_fn(node):
+                    yield node
+            else:
+                stack.append((node, True))
+                stack.extend([(n, False) for n in reversed(node._child_nodes)])
+
+    def postorder_internal_node_iter(self, filter_fn=None, exclude_seed_node=False):
+        """
+        Post-order iterator over internal nodes of subtree rooted at this node.
+
+        Visits self and all internal descendant nodes, with each node visited
+        after its children. In DendroPy, "internal nodes" are nodes that have
+        at least one child node, and thus the root or seed node is typically
+        included unless ``exclude_seed_node`` is |True|. Nodes can optionally be
+        filtered by ``filter_fn``: only nodes for which ``filter_fn`` returns
+        |True| when passed the node as an argument are yielded.
+
+        Parameters
+        ----------
+        filter_fn : function object, optional
+            A function object that takes a |Node| object as an argument
+            and returns |True| if the |Node| object is to be yielded by
+            the iterator, or |False| if not. If ``filter_fn`` is |None|
+            (default), then all nodes visited will be yielded.
+        exclude_seed_node : boolean, optional
+            If |False| (default), then the seed node or root is visited. If
+            |True|, then the seed node is skipped.
+
+        Returns
+        -------
+        :py:class:`collections.Iterator` [|Node|]
+            An iterator yielding the internal nodes of the subtree rooted at
+            this node in post-order sequence.
+        """
+        if exclude_seed_node:
+            froot = lambda x: x._parent_node is not None
+        else:
+            froot = lambda x: True
+        if filter_fn:
+            f = lambda x: (froot(x) and x._child_nodes and filter_fn(x)) or None
+        else:
+            f = lambda x: (x and froot(x) and x._child_nodes) or None
+        return self.postorder_iter(filter_fn=f)
+
+    def levelorder_iter(self, filter_fn=None):
+        """
+        Level-order iteration over nodes of subtree rooted at this node.
+
+        Visits self and all descendant nodes, with each node and other nodes at
+        the same level (distance from root) visited before their children.
+        Nodes can optionally be filtered by ``filter_fn``: only nodes for which
+        ``filter_fn`` returns |True| when called with the node as an argument are
+        visited.
+
+        Parameters
+        ----------
+        filter_fn : function object, optional
+            A function object that takes a |Node| object as an argument
+            and returns |True| if the |Node| object is to be yielded by
+            the iterator, or |False| if not. If ``filter_fn`` is |None|
+            (default), then all nodes visited will be yielded.
+
+        Returns
+        -------
+        :py:class:`collections.Iterator` [|Node|]
+            An iterator yielding nodes of the subtree rooted at this node in
+            level-order sequence.
+        """
+        if filter_fn is None or filter_fn(self):
+            yield self
+        remaining = self.child_nodes()
+        while len(remaining) > 0:
+            node = remaining.pop(0)
+            if filter_fn is None or filter_fn(node):
+                yield node
+            child_nodes = node.child_nodes()
+            remaining.extend(child_nodes)
+
+    def level_order_iter(self, filter_fn=None):
+        """
+        DEPRECATED: Use :meth:`Node.levelorder_iter()` instead.
+        """
+        deprecate.dendropy_deprecation_warning(
+            message=(
+                "Deprecated since DendroPy 4: 'level_order_iter()' will no longer be"
+                " supported in future releases; use 'levelorder_iter()' instead"
+            ),
+            stacklevel=3,
+        )
+        return self.levelorder_iter(filter_fn=filter_fn)
+
+    def inorder_iter(self, filter_fn=None):
+        """
+        In-order iteration over nodes of subtree rooted at this node.
+
+        Visits self and all descendant nodes, with each node visited in-between
+        its children. Only valid for strictly-bifurcating trees. Nodes can
+        optionally be filtered by ``filter_fn``: only nodes for which ``filter_fn``
+        returns |True| when called with the node as an argument are yielded.
+
+        Parameters
+        ----------
+        filter_fn : function object, optional
+            A function object that takes a |Node| object as an argument
+            and returns |True| if the |Node| object is to be yielded by
+            the iterator, or |False| if not. If ``filter_fn`` is |None|
+            (default), then all nodes visited will be yielded.
+
+        Returns
+        -------
+        :py:class:`collections.Iterator` [|Node|]
+            An iterator yielding nodes of the subtree rooted at this node in
+            infix or in-order sequence.
+        """
+        if len(self._child_nodes) == 0:
+            if filter_fn is None or filter_fn(self):
+                yield self
+        elif len(self._child_nodes) == 2:
+            for nd in self._child_nodes[0].inorder_iter(filter_fn=filter_fn):
+                yield nd
+            if filter_fn is None or filter_fn(self):
+                yield self
+            for nd in self._child_nodes[1].inorder_iter(filter_fn=filter_fn):
+                yield nd
+        else:
+            raise TypeError("In-order traversal only supported for binary trees")
+
+    def leaf_iter(self, filter_fn=None):
+        """
+        Iterate over all tips or leaves that ultimately descend from this node.
+
+        Visits all leaf or tip nodes descended from this node. Nodes can
+        optionally be filtered by ``filter_fn``: only nodes for which ``filter_fn``
+        returns |True| when called with the node as an argument are yielded.
+
+        Parameters
+        ----------
+        filter_fn : function object, optional
+            A function object that takes a |Node| object as an argument
+            and returns |True| if the |Node| object is to be yielded by
+            the iterator, or |False| if not. If ``filter_fn`` is |None|
+            (default), then all nodes visited will be yielded.
+
+        Returns
+        -------
+        :py:class:`collections.Iterator` [|Node|]
+            An iterator yielding leaf nodes of the subtree rooted at this node.
+        """
+        if filter_fn:
+            ff = lambda x: x.is_leaf() and filter_fn(x) or None
+        else:
+            ff = lambda x: x.is_leaf() and x or None
+        for node in self.postorder_iter(ff):
+            yield node
+
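+    # A minimal usage sketch (illustrative tree and labels):
+    #
+    #     import dendropy
+    #     tree = dendropy.Tree.get(data="((A,B),(C,D));", schema="newick")
+    #     tip_labels = sorted(lf.taxon.label for lf in tree.seed_node.leaf_iter())
+    #     # tip_labels == ["A", "B", "C", "D"]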
+    def child_node_iter(self, filter_fn=None):
+        """
+        Iterator over all nodes that are the (immediate) children of this node.
+
+        Parameters
+        ----------
+        filter_fn : function object, optional
+            A function object that takes a |Node| object as an argument
+            and returns |True| if the |Node| object is to be yielded by
+            the iterator, or |False| if not. If ``filter_fn`` is |None|
+            (default), then all nodes visited will be yielded.
+
+        Returns
+        -------
+        :py:class:`collections.Iterator` [|Node|]
+            An iterator yielding nodes that have this node as a parent.
+        """
+        for node in self._child_nodes:
+            if filter_fn is None or filter_fn(node):
+                yield node
+
+    def child_edge_iter(self, filter_fn=None):
+        """
+        Iterator over the edges subtending the (immediate) child nodes of this node.
+
+        Parameters
+        ----------
+        filter_fn : function object, optional
+            A function object that takes an |Edge| object as an argument
+            and returns |True| if the |Edge| object is to be yielded by
+            the iterator, or |False| if not. If ``filter_fn`` is |None|
+            (default), then all edges visited will be yielded.
+
+        Returns
+        -------
+        :py:class:`collections.Iterator` [|Edge|]
+            An iterator yielding the edges subtending the child nodes of this node.
+        """
+        for node in self._child_nodes:
+            if filter_fn is None or filter_fn(node.edge):
+                yield node.edge
+
+    def ancestor_iter(self, filter_fn=None, inclusive=False):
+        """
+        Iterator over all ancestors of this node.
+
+        Visits all nodes that are the ancestors of this node.  If ``inclusive``
+        is |True|, ``self`` is returned as the first item of the sequence;
+        otherwise ``self`` is skipped. Nodes can optionally be filtered by
+        ``filter_fn``: only nodes for which ``filter_fn`` returns |True| when
+        passed the node as an argument are yielded.
+
+        Parameters
+        ----------
+        filter_fn : function object, optional
+            A function object that takes a |Node| object as an argument
+            and returns |True| if the |Node| object is to be yielded by
+            the iterator, or |False| if not. If ``filter_fn`` is |None|
+            (default), then all nodes visited will be yielded.
+        inclusive : boolean, optional
+            If |True|, includes this node in the sequence. If |False|, this is
+            skipped.
+
+        Returns
+        -------
+        :py:class:`collections.Iterator` [|Node|]
+            Iterator over all predecessor/ancestor nodes of this node.
+        """
+        if inclusive and (filter_fn is None or filter_fn(self)):
+            yield self
+        node = self
+        while node is not None:
+            node = node._parent_node
+            if node is not None and (filter_fn is None or filter_fn(node)):
+                yield node
+
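+    # A minimal usage sketch (assuming ``Tree.find_node_with_taxon_label`` and the
+    # same illustrative tree; walks from a leaf back toward the seed node):
+    #
+    #     import dendropy
+    #     tree = dendropy.Tree.get(data="((A,B),(C,D));", schema="newick")
+    #     leaf_a = tree.find_node_with_taxon_label("A")
+    #     n_ancestors = len(list(leaf_a.ancestor_iter()))
+    #     # n_ancestors == 2: the parent of "A" and the seed node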
+    def ageorder_iter(self, filter_fn=None, include_leaves=True, descending=False):
+        """
+        Iterator over nodes of subtree rooted at this node in order of the age
+        of the node (i.e., the time since the present).
+
+        Iterates over nodes in order of age ('age' is as given by the ``age``
+        attribute, which is usually the sum of edge lengths from tips
+        to node, i.e., time since present).
+        If ``include_leaves`` is |True| (default), leaves are included in the
+        iteration; if ``include_leaves`` is |False|, leaves will be skipped.
+        If ``descending`` is |False| (default), younger nodes will be returned
+        before older ones; if |True|, older nodes will be returned before
+        younger ones.
+
+        Parameters
+        ----------
+        filter_fn : function object, optional
+            A function object that takes a |Node| object as an argument
+            and returns |True| if the |Node| object is to be yielded by
+            the iterator, or |False| if not. If ``filter_fn`` is |None|
+            (default), then all nodes visited will be yielded.
+        include_leaves : boolean, optional
+            If |True| (default), then leaf nodes are included in the iteration.
+            If |False|, then leaf nodes are skipped.
+        descending : boolean, optional
+            If |False| (default), then younger nodes are visited before older
+            ones. If |True|, then older nodes are visited before younger ones.
+
+        Returns
+        -------
+        :py:class:`collections.Iterator` [|Node|]
+            Iterator over age-ordered sequence of nodes in subtree rooted at
+            this node.
+        """
+        # if not descending:
+        #     leaves = [nd for nd in self.leaf_iter()]
+        #     queued_pairs = []
+        #     in_queue = set()
+        #     for leaf in leaves:
+        #         age_nd_tuple = (leaf.age, leaf)
+        #         queued_pairs.insert(bisect.bisect(queued_pairs, age_nd_tuple), age_nd_tuple)
+        #         in_queue.add(leaf)
+        #     while queued_pairs:
+        #         next_el = queued_pairs.pop(0)
+        #         age, nd = next_el
+        #         in_queue.remove(nd)
+        #         p = nd._parent_node
+        #         if p and p not in in_queue:
+        #             age_nd_tuple = (p.age, p)
+        #             queued_pairs.insert(bisect.bisect(queued_pairs, age_nd_tuple), age_nd_tuple)
+        #             in_queue.add(p)
+        #         if include_leaves or nd.is_internal():
+        #             yield nd
+        # else:
+        #     nds = [(nd.age, nd) for nd in self.preorder_iter()]
+        #     nds.sort(reverse=True)
+        #     for nd in nds:
+        #         if include_leaves or nd[1].is_internal():
+        #             yield nd[1]
+        nds = [nd for nd in self.preorder_iter()]
+        nds.sort(key=lambda x: x.age, reverse=bool(descending))
+        for nd in nds:
+            if (include_leaves or nd._child_nodes) and (
+                filter_fn is None or filter_fn(nd)
+            ):
+                yield nd
+
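+    # A minimal usage sketch (assumes an ultrametric tree with branch lengths and
+    # node ages populated beforehand, e.g. via ``Tree.calc_node_ages()``):
+    #
+    #     import dendropy
+    #     tree = dendropy.Tree.get(
+    #         data="((A:1,B:1):1,(C:1,D:1):1);", schema="newick")
+    #     tree.calc_node_ages()
+    #     internal_ages = [nd.age
+    #                      for nd in tree.seed_node.ageorder_iter(include_leaves=False)]
+    #     # internal_ages == [1.0, 1.0, 2.0]  (youngest first by default)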
+    def age_order_iter(self, include_leaves=True, filter_fn=None, descending=False):
+        """
+        Deprecated: use :meth:`Node.ageorder_iter()` instead.
+        """
+        deprecate.dendropy_deprecation_warning(
+            message=(
+                "Deprecated since DendroPy 4: 'age_order_iter()' will no longer be"
+                " supported in future releases; use 'ageorder_iter()' instead"
+            ),
+            stacklevel=3,
+        )
+        return self.ageorder_iter(
+            include_leaves=include_leaves, filter_fn=filter_fn, descending=descending
+        )
+
+    def apply(self, before_fn=None, after_fn=None, leaf_fn=None):
+        r"""
+        Applies function ``before_fn`` and ``after_fn`` to all internal nodes and
+        ``leaf_fn`` to all terminal nodes in subtree starting with ``self``, with
+        nodes visited in pre-order.
+
+        Given a tree with preorder sequence of nodes of
+        [a,b,i,e,j,k,c,g,l,m,f,n,h,o,p,]::
+
+                           a
+                          / \
+                         /   \
+                        /     \
+                       /       \
+                      /         \
+                     /           \
+                    /             c
+                   b             / \
+                  / \           /   \
+                 /   e         /     f
+                /   / \       /     / \
+               /   /   \     g     /   h
+              /   /     \   / \   /   / \
+             i   j       k l   m n   o   p
+
+
+        the following order of function calls results::
+
+            before_fn(a)
+            before_fn(b)
+            leaf_fn(i)
+            before_fn(e)
+            leaf_fn(j)
+            leaf_fn(k)
+            after_fn(e)
+            after_fn(b)
+            before_fn(c)
+            before_fn(g)
+            leaf_fn(l)
+            leaf_fn(m)
+            after_fn(g)
+            before_fn(f)
+            leaf_fn(n)
+            before_fn(h)
+            leaf_fn(o)
+            leaf_fn(p)
+            after_fn(h)
+            after_fn(f)
+            after_fn(c)
+            after_fn(a)
+
+        Parameters
+        ----------
+        before_fn : function object or |None|
+            A function object that takes a |Node| as its argument.
+        after_fn : function object or |None|
+            A function object that takes a |Node| as its argument.
+        leaf_fn : function object or |None|
+            A function object that takes a |Node| as its argument.
+
+        Notes
+        -----
+        Adapted from work by Mark T. Holder (the ``peyotl`` module of the Open
+        Tree of Life Project):
+
+            https://github.com/OpenTreeOfLife/peyotl.git
+
+        """
+        stack = [self]
+        while stack:
+            node = stack.pop()
+            if not node._child_nodes:
+                if leaf_fn:
+                    leaf_fn(node)
+                # (while node is the last child of parent ...)
+                while (node._parent_node is None) or (
+                    node._parent_node._child_nodes[-1] is node
+                ):
+                    node = node._parent_node
+                    if node is not None:
+                        if after_fn is not None:
+                            after_fn(node)
+                    else:
+                        break
+            else:
+                if before_fn is not None:
+                    before_fn(node)
+                stack.extend([i for i in reversed(node._child_nodes)])
+        return
+
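+    # A minimal usage sketch (counts internal vs. leaf visits on the illustrative
+    # tree used in the sketches above; the callback names are arbitrary):
+    #
+    #     import dendropy
+    #     tree = dendropy.Tree.get(data="((A,B),(C,D));", schema="newick")
+    #     internals, leaves = [], []
+    #     tree.seed_node.apply(
+    #         before_fn=internals.append,
+    #         leaf_fn=leaves.append,
+    #     )
+    #     # len(internals) == 3 and len(leaves) == 4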
+    def set_child_nodes(self, child_nodes):
+        """
+        Assigns the set of child nodes for this node.
+
+        Results in the ``parent_node`` attribute of each |Node| in ``child_nodes``
+        as well as the ``tail_node`` attribute of corresponding |Edge|
+        objects being assigned to ``self``.
+
+        Parameters
+        ----------
+        child_nodes : collections.Iterable[|Node|]
+            The (iterable) collection of child nodes to be assigned this node
+            as a parent.
+        """
+        self.clear_child_nodes()
+        # Go through add to ensure book-keeping
+        # (e.g. avoiding multiple adds) takes
+        # place.
+        for nd in child_nodes:
+            self.add_child(nd)
+
+    def set_children(self, child_nodes):
+        """Deprecated: use :meth:`Node.set_child_nodes()` instead."""
+        return self.set_child_nodes(child_nodes)
+
+    def add_child(self, node):
+        """
+        Adds a child node to this node if it is not already a child.
+
+        Results in the ``parent_node`` attribute of ``node`` as well as the
+        ``tail_node`` attribute of ``node.edge`` being assigned to ``self``.
+
+        Parameters
+        ----------
+        node : |Node|
+            The node to be added as a child of this node.
+
+        Returns
+        -------
+        |Node|
+            The node that was added.
+        """
+        assert node is not self, "Cannot add node as child of itself"
+        assert self._parent_node is not node, (
+            "Cannot add a node's parent as its child: remove the node from its parent's"
+            " child set first"
+        )
+        node._parent_node = self
+        if node not in self._child_nodes:
+            self._child_nodes.append(node)
+        return node
+
+    def insert_child(self, index, node):
+        """
+        Adds a child node to this node.
+
+        If the node is already a child of this node, then it is moved
+        to the specified position.
+        Results in the ``parent_node`` attribute of ``node`` as well as the
+        ``tail_node`` attribute of ``node.edge`` being assigned to ``self``.
+
+        Parameters
+        ----------
+        index : integer
+            The index before which to insert the new node.
+        node : |Node|
+            The node to be added as a child of this node.
+
+        Returns
+        -------
+        |Node|
+            The node that was added.
+        """
+        node._parent_node = self
+        try:
+            cur_index = self._child_nodes.index(node)
+        except ValueError:
+            pass
+        else:
+            if cur_index == index:
+                return node
+            self._child_nodes.remove(node)
+        self._child_nodes.insert(index, node)
+        return node
+
+    def new_child(self, **kwargs):
+        """
+        Create and add a new child to this node.
+
+        Parameters
+        ----------
+        kwargs : keyword arguments
+            Keyword arguments will be passed directly to the |Node|
+            constructor (:meth:`Node.__init__()`).
+
+        Returns
+        -------
+        |Node|
+            The new child node that was created and added.
+        """
+        node = self.__class__(**kwargs)
+        return self.add_child(node=node)
+
+    def insert_new_child(self, index, **kwargs):
+        """
+        Create and add a new child to this node at a particular position.
+
+        Results in the ``parent_node`` attribute of ``node`` as well as the
+        ``tail_node`` attribute of ``node.edge`` being assigned to ``self``.
+
+        Parameters
+        ----------
+        index : integer
+            The index before which to insert the new node.
+        kwargs : keyword arguments, optional
+            Keyword arguments will be passed directly to the |Node|
+            constructor (:meth:`Node.__init__()`).
+
+        Returns
+        -------
+        |Node|
+            The new child node that was created and added.
+        """
+        node = self.__class__(**kwargs)
+        return self.insert_child(index=index, node=node)
+
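+    # A minimal construction sketch (assumes the ``Node`` constructor accepts
+    # ``label`` and ``edge_length`` keyword arguments, as implied above):
+    #
+    #     import dendropy
+    #     root = dendropy.Node(label="root")
+    #     left = root.new_child(label="left", edge_length=1.0)
+    #     right = dendropy.Node(label="right", edge_length=2.0)
+    #     root.add_child(right)
+    #     # root.num_child_nodes() == 2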
+    def remove_child(self, node, suppress_unifurcations=False):
+        """
+        Removes a node from the child set of this node.
+
+        Results in the parent of the removed node being set to |None|. If
+        ``suppress_unifurcations`` is |True| and this node ends up having only one
+        child after removal of the specified node, then this node will itself be
+        removed from the tree, with its single child added to the child node
+        set of its parent and the edge length adjusted accordingly.
+        ``suppress_unifurcations`` should only be |True| for unrooted trees.
+
+        Parameters
+        ----------
+        node : |Node|
+            The node to be removed.
+        suppress_unifurcations : boolean, optional
+            If |False| (default), no action is taken. If |True|, then if the
+            node removal results in a node with degree of two (i.e., a single
+            parent and a single child), then it will be removed from
+            the tree and its (sole) child will be added as a child of its
+            parent (with edge lengths adjusted accordingly).
+
+        Returns
+        -------
+        |Node|
+            The node removed.
+        """
+        if not node:
+            raise ValueError("Tried to remove a non-existent or null node")
+        children = self._child_nodes
+        if node in children:
+            node._parent_node = None
+            node.edge.tail_node = None
+            index = children.index(node)
+            children.remove(node)
+            if suppress_unifurcations:
+                if self._parent_node:
+                    if len(children) == 1:
+                        child = children[0]
+                        pos = self._parent_node._child_nodes.index(self)
+                        self._parent_node.insert_child(pos, child)
+                        self._parent_node.remove_child(
+                            self, suppress_unifurcations=False
+                        )
+                        try:
+                            child.edge.length += self.edge.length
+                        except:
+                            pass
+                        self._child_nodes = []
+                else:
+                    to_remove = None
+                    if len(children) == 2:
+                        if children[0].is_internal():
+                            to_remove = children[0]
+                            other = children[1]
+                        elif children[1].is_internal():
+                            to_remove = children[1]
+                            other = children[0]
+                    if to_remove is not None:
+                        try:
+                            other.edge.length += to_remove.edge.length
+                        except:
+                            pass
+                        pos = self._child_nodes.index(to_remove)
+                        self.remove_child(to_remove, suppress_unifurcations=False)
+                        tr_children = to_remove._child_nodes
+                        tr_children.reverse()
+                        for c in tr_children:
+                            self.insert_child(pos, c)
+                        to_remove._child_nodes = []
+        else:
+            raise ValueError("Tried to remove a node that is not listed as a child")
+        return node
+
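+    # A minimal pruning sketch (detaches leaf "B" and suppresses the resulting
+    # unifurcation on an unrooted, illustrative tree):
+    #
+    #     import dendropy
+    #     tree = dendropy.Tree.get(data="((A,B),(C,D));", schema="newick")
+    #     leaf_b = tree.find_node_with_taxon_label("B")
+    #     leaf_b.parent_node.remove_child(leaf_b, suppress_unifurcations=True)
+    #     # sorted(lf.taxon.label for lf in tree.leaf_node_iter()) == ["A", "C", "D"]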
+    def clear_child_nodes(self):
+        """
+        Removes all child nodes.
+        """
+        del self._child_nodes[:]  # list.clear() is not in Python 2.7
+
+    def reversible_remove_child(self, node, suppress_unifurcations=False):
+        """
+        This function is a (less-efficient) version of remove_child that also
+        returns the data needed by reinsert_nodes to "undo" the removal.
+
+        Returns a list of tuples.  The first element of each tuple is the
+        node removed, the other elements are the information needed by
+        ``reinsert_nodes`` in order to restore the tree to the same topology as
+        it was before the call to ``remove_child``. If ``suppress_unifurcations``
+        is |False|, then the returned list will contain only one item.
+
+        ``suppress_unifurcations`` should only be |True| for unrooted trees.
+        """
+        if not node:
+            raise ValueError("Tried to remove a non-existent or null node")
+        children = self._child_nodes
+        try:
+            pos = children.index(node)
+        except:
+            raise ValueError("Tried to remove a node that is not listed as a child")
+        removed = [(node, self, pos, [], None)]
+        node._parent_node = None
+        node.edge.tail_node = None
+        children.remove(node)
+        if suppress_unifurcations:
+            p = self._parent_node
+            if p:
+                if len(children) == 1:
+                    child = children[0]
+                    pos = p._child_nodes.index(self)
+                    p.insert_child(pos, child)
+                    self._child_nodes = []
+                    p.remove_child(self, suppress_unifurcations=False)
+                    e = child.edge
+                    try:
+                        e.length += self.edge.length
+                    except:
+                        e = None
+                    t = (self, p, pos, [child], e)
+                    removed.append(t)
+            else:
+                to_remove = None
+                if len(children) == 2:
+                    if children[0].is_internal():
+                        to_remove = children[0]
+                        other = children[1]
+                    elif children[1].is_internal():
+                        to_remove = children[1]
+                        other = children[0]
+                if to_remove is not None:
+                    e = other.edge
+                    try:
+                        e.length += to_remove.edge.length
+                    except:
+                        e = None
+                    pos = self._child_nodes.index(to_remove)
+                    self.remove_child(to_remove, suppress_unifurcations=False)
+                    tr_children = to_remove._child_nodes
+                    to_remove._child_nodes = []
+                    for n, c in enumerate(tr_children):
+                        new_pos = pos + n
+                        self.insert_child(new_pos, c)
+                    t = (to_remove, self, pos, tr_children, e)
+                    removed.append(t)
+
+        return removed
+
+    def reinsert_nodes(self, nd_connection_list):
+        """
+        This function should be used to "undo" the effects of
+        :meth:`Node.reversible_remove_child()`. NOTE: the behavior is only
+        guaranteed if the tree has not been modified between the
+        ``reversible_remove_child`` and ``reinsert_nodes`` calls (or if the tree
+        has been restored such that the node/edge identities are identical to
+        the state before the ``reversible_remove_child`` call).
+
+        The order of info in each tuple is:
+
+            0 - node removed
+            1 - parent of node removed
+            2 - position (index) of the removed node in its parent's child list
+            3 - children of node removed that were "stolen"
+            4 - edge that was lengthened by "stealing" length from node's edge
+        """
+        # we unroll the stack of operations
+        for blob in nd_connection_list[-1::-1]:
+            # _LOG.debug(blob)
+            n, p, pos, children, e = blob
+            for c in children:
+                cp = c._parent_node
+                if cp:
+                    cp.remove_child(c)
+                n.add_child(c)
+            p.insert_child(pos, n)
+            if e is not None:
+                e.length -= n.edge.length
+
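+    # A minimal round-trip sketch (reversibly detach a leaf, then restore it; the
+    # tree must not be modified between the two calls):
+    #
+    #     import dendropy
+    #     tree = dendropy.Tree.get(data="((A,B),(C,D));", schema="newick")
+    #     leaf_b = tree.find_node_with_taxon_label("B")
+    #     parent = leaf_b.parent_node
+    #     undo_info = parent.reversible_remove_child(leaf_b)
+    #     parent.reinsert_nodes(undo_info)   # original topology restored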
+    def collapse_neighborhood(self, dist):
+        if dist < 1:
+            return
+        children = self.child_nodes()
+        for ch in children:
+            if not ch.is_leaf():
+                ch.edge.collapse()
+        if self._parent_node:
+            p = self._parent_node
+            self.edge.collapse()
+            p.collapse_neighborhood(dist - 1)
+        else:
+            self.collapse_neighborhood(dist - 1)
+
+    def collapse_clade(self):
+        """Collapses all internal edges that are descendants of self."""
+        if self.is_leaf():
+            return
+        leaves = [i for i in self.leaf_iter()]
+        self.set_child_nodes(leaves)
+
+    def collapse_conflicting(self, bipartition):
+        """
+        Collapses every edge in the subtree that conflicts with the given
+        bipartition. This can include the edge subtending ``self``.
+        """
+        to_collapse_head_nodes = []
+        for nd in self.postorder_iter():
+            if nd._child_nodes and nd.edge.bipartition.is_incompatible_with(
+                bipartition
+            ):
+                to_collapse_head_nodes.append(nd)
+        for nd in to_collapse_head_nodes:
+            e = nd.edge
+            e.collapse()
+
+    def _get_edge(self):
+        """
+        Returns the edge subtending this node.
+        """
+        return self._edge
+
+    def _set_edge(self, new_edge):
+        """
+        Sets the edge subtending this node, and sets head_node of
+        ``edge`` to point to self.
+        """
+        # if edge is None:
+        #     raise ValueError("A Node cannot have 'None' for an edge")
+        if new_edge is self._edge:
+            return
+        if self._parent_node is not None:
+            try:
+                self._parent_node._child_nodes.remove(self)
+            except ValueError:
+                pass
+
+        ## Minimal management
+        self._edge = new_edge
+        if self._edge:
+            self._edge._head_node = self
+
+    edge = property(_get_edge, _set_edge)
+
+    def _get_edge_length(self):
+        """
+        Returns the length of the edge subtending this node.
+        """
+        return self._edge.length
+
+    def _set_edge_length(self, v=None):
+        """
+        Sets the length of the edge subtending this node.
+        """
+        self._edge.length = v
+
+    edge_length = property(_get_edge_length, _set_edge_length)
+
+    def _get_bipartition(self):
+        """
+        Returns the bipartition for the edge subtending this node.
+        """
+        return self._edge.bipartition
+
+    def _set_bipartition(self, v=None):
+        """
+        Sets the bipartition for the edge subtending this node.
+        """
+        self._edge.bipartition = v
+
+    bipartition = property(_get_bipartition, _set_bipartition)
+
+    def _get_split_bitmask(self):
+        return self._edge.bipartition._split_bitmask
+
+    def _set_split_bitmask(self, h):
+        self._edge.bipartition._split_bitmask = h
+
+    split_bitmask = property(_get_split_bitmask, _set_split_bitmask)
+
+    def _get_leafset_bitmask(self):
+        return self._edge.bipartition._leafset_bitmask
+
+    def _set_leafset_bitmask(self, h):
+        self._edge.bipartition._leafset_bitmask = h
+
+    leafset_bitmask = property(_get_leafset_bitmask, _set_leafset_bitmask)
+
+    def _get_tree_leafset_bitmask(self):
+        return self._edge.bipartition._tree_leafset_bitmask
+
+    def _set_tree_leafset_bitmask(self, h):
+        self._edge.bipartition._tree_leafset_bitmask = h
+
+    tree_leafset_bitmask = property(
+        _get_tree_leafset_bitmask, _set_tree_leafset_bitmask
+    )
+
+    def split_as_bitstring(self):
+        return self._edge.bipartition.split_as_bitstring()
+
+    def leafset_as_bitstring(self):
+        return self._edge.bipartition.leafset_as_bitstring()
+
+    def _get_parent_node(self):
+        """Returns the parent node of this node."""
+        return self._parent_node
+
+    def _set_parent_node(self, parent):
+        """Sets the parent node of this node."""
+        if self._parent_node is not None:
+            try:
+                self._parent_node._child_nodes.remove(self)
+            except ValueError:
+                pass
+        self._parent_node = parent
+        if self._parent_node is not None:
+            if self not in self._parent_node._child_nodes:
+                self._parent_node._child_nodes.append(self)
+
+    parent_node = property(_get_parent_node, _set_parent_node)
+
+    def is_leaf(self):
+        """
+        Returns |True| if the node is a tip or a leaf node, i.e. has no child
+        nodes.
+
+        Returns
+        -------
+        boolean
+            |True| if the node is a leaf, i.e., has no child nodes. |False|
+            otherwise.
+        """
+        return bool(not self._child_nodes)
+
+    def is_internal(self):
+        """
+        Returns |True| if the node is *not* a tip or a leaf node.
+
+        Returns
+        -------
+        boolean
+            |True| if the node is not a leaf. |False| otherwise.
+        """
+        return bool(self._child_nodes)
+
+    def leaf_nodes(self):
+        """
+        Returns list of all leaf_nodes descended from this node (or just
+        list with ``self`` as the only member if ``self`` is a leaf).
+
+        Note
+        ----
+        Usage of :meth:`Node.leaf_iter()` is preferable for efficiency reasons
+        unless an actual list is required.
+
+        Returns
+        -------
+        :py:class:`list` [|Node|]
+           A ``list`` of |Node| objects descended from this node
+           (inclusive of ``self``) that are the leaves.
+        """
+        return [
+            node
+            for node in self.postorder_iter(lambda x: bool(len(x.child_nodes()) == 0))
+        ]
+
+    def num_child_nodes(self):
+        """
+        Returns number of child nodes.
+
+        Returns
+        -------
+        int
+            Number of children in ``self``.
+        """
+        return len(self._child_nodes)
+
+    def child_nodes(self):
+        """
+        Returns a shallow-copy list of all child nodes of this node.
+
+        Note
+        ----
+        Unless an actual ``list`` is needed, iterating over the child nodes using
+        :meth:`Node.child_node_iter()` is preferable to avoid the overhead of
+        list construction.
+
+        Returns
+        -------
+        :py:class:`list` [|Node|]
+           A ``list`` of |Node| objects that have ``self`` as a parent.
+        """
+        return list(self._child_nodes)
+
+    def child_edges(self):
+        """
+        Returns a shallow-copy list of all child edges of this node.
+
+        Note
+        ----
+        Unless an actual ``list`` is needed, iterating over the child edges using
+        :meth:`Node.child_edge_iter()` is preferable to avoid the overhead of
+        list construction.
+
+        Returns
+        -------
+        :py:class:`list` [|Edge|]
+           A ``list`` of |Edge| objects that have ``self`` as a tail node.
+        """
+        return list(ch.edge for ch in self._child_nodes)
+
+    def incident_edges(self):
+        """
+        Return parent and child edges.
+
+        Returns
+        -------
+        :py:class:`list` [|Edge|]
+            A list of edges linking to this node, with outgoing edges (edges
+            connecting to child nodes) followed by the edge connecting
+            this node to its parent.
+        """
+        e = [c.edge for c in self._child_nodes]
+        e.append(self.edge)
+        return e
+
+    def get_incident_edges(self):
+        """Legacy synonym for :meth:`Node.incident_edges()`."""
+        return self.incident_edges()
+
+    def adjacent_nodes(self):
+        """
+        Return parent and child nodes.
+
+        Returns
+        -------
+        :py:class:`list` [|Node|]
+            A list with all child nodes and parent node of this node.
+        """
+        n = [c for c in self._child_nodes]
+        if self._parent_node:
+            n.append(self._parent_node)
+        return n
+
+    def get_adjacent_nodes(self):
+        """Legacy synonym for :meth:`Node.adjacent_nodes()`"""
+        return self.adjacent_nodes()
+
+    def sibling_nodes(self):
+        """
+        Return all other children of parent, excluding self.
+
+        Returns
+        -------
+        :py:class:`list` [|Node|]
+            A list of all nodes descended from the same parent as ``self``,
+            excluding ``self``.
+        """
+        p = self._parent_node
+        if not p:
+            return []
+        sisters = [nd for nd in p.child_nodes() if nd is not self]
+        return sisters
+
+    def sister_nodes(self):
+        """Legacy synonym for :meth:`Node.sibling_nodes()`"""
+        return self.sibling_nodes()
+
+    def extract_subtree(
+        self,
+        extraction_source_reference_attr_name="extraction_source",
+        node_filter_fn=None,
+        suppress_unifurcations=True,
+        is_apply_filter_to_leaf_nodes=True,
+        is_apply_filter_to_internal_nodes=False,
+        node_factory=None,
+    ):
+        """
+        Returns a clone of the structure descending from this node.
+
+        Parameters
+        ----------
+        extraction_source_reference_attr_name : str
+            Name of attribute to set on cloned nodes that references
+            corresponding original node. If ``None``, then attribute (and
+            reference) will not be created.
+        node_filter_fn : None or function object
+            If ``None``, then entire tree structure is cloned.
+            If not ``None``, must be a function object that returns ``True``
+            if a particular |Node| instance on the original tree should
+            be included in the cloned tree, or ``False`` otherwise.
+        is_apply_filter_to_leaf_nodes : bool
+            If ``True`` then the above filter will be applied to leaf nodes. If
+            ``False`` then it will not (and all leaf nodes will be
+            automatically included, unless excluded by an ancestral node being
+            filtered out).
+        is_apply_filter_to_internal_nodes : bool
+            If ``True`` then the above filter will be applied to internal nodes. If
+            ``False`` then it will not (internal nodes without children will
+            still be filtered out).
+        node_factory : function
+            If not ``None``, must be a function that takes no arguments and
+            returns a new |Node| (or equivalent) instance.
+
+        Returns
+        -------
+        nd : |Node|
+            A node with descending subtree mirroring this one.
+
+        """
+        memo = {}
+        is_excluded_nodes = False
+        start_node = None
+        start_node_to_match = self
+        if node_factory is None:
+            node_factory = self.__class__
+        nd1 = None  # verbosity to mollify linter
+        for nd0 in self.postorder_iter():
+            if node_filter_fn is not None:
+                if nd0._child_nodes:
+                    if is_apply_filter_to_internal_nodes:
+                        is_apply_filter = True
+                    else:
+                        is_apply_filter = False
+                else:
+                    if is_apply_filter_to_leaf_nodes:
+                        is_apply_filter = True
+                    else:
+                        is_apply_filter = False
+                if is_apply_filter and not node_filter_fn(nd0):
+                    is_excluded_nodes = True
+                    continue
+            original_node_has_children = False
+            children_to_add = []
+            for ch_nd0 in nd0.child_node_iter():
+                original_node_has_children = True
+                ch_nd1 = memo.get(ch_nd0, None)
+                if ch_nd1 is not None:
+                    children_to_add.append(ch_nd1)
+            if not children_to_add and original_node_has_children:
+                # filter removes all descendents of internal node,
+                # so this internal node is not added
+                if nd0.parent_node is None:
+                    raise error.SeedNodeDeletionException(
+                        "Attempting to remove seed node or node without parent"
+                    )
+                if nd0 is self:
+                    start_node_to_match = nd0.parent_node
+                continue
+            elif len(children_to_add) == 1 and suppress_unifurcations:
+                if nd0.edge.length is not None:
+                    if children_to_add[0].edge.length is None:
+                        children_to_add[0].edge.length = nd0.edge.length
+                    else:
+                        children_to_add[0].edge.length += nd0.edge.length
+                else:
+                    nd1.edge.length = children_to_add[0].edge.length
+                if nd0.parent_node is None:
+                    start_node = children_to_add[0]
+                    break
+                if nd0 is self:
+                    start_node_to_match = nd0.parent_node
+                memo[nd0] = children_to_add[0]
+            else:
+                nd1 = node_factory()
+                nd1.label = nd0.label
+                nd1.taxon = nd0.taxon
+                nd1.edge.length = nd0.edge.length
+                nd1.edge.label = nd0.edge.label
+                for ch_nd1 in children_to_add:
+                    nd1.add_child(ch_nd1)
+                if nd0 is start_node_to_match:
+                    start_node = nd1
+                memo[nd0] = nd1
+                if extraction_source_reference_attr_name:
+                    setattr(nd1, extraction_source_reference_attr_name, nd0)
+        if start_node is not None:
+            return start_node
+        else:
+            ## TODO: find a replacement node
+            raise ValueError
+
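+    # A minimal extraction sketch (clones the subtree keeping only leaves whose
+    # taxon label falls in a chosen set; labels are illustrative):
+    #
+    #     import dendropy
+    #     tree = dendropy.Tree.get(data="((A,B),(C,D));", schema="newick")
+    #     keep = {"A", "C", "D"}
+    #     sub_root = tree.seed_node.extract_subtree(
+    #         node_filter_fn=lambda nd: nd.taxon is not None and nd.taxon.label in keep)
+    #     # sorted(lf.taxon.label for lf in sub_root.leaf_iter()) == ["A", "C", "D"]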
+    def level(self):
+        """
+        Returns the number of nodes between ``self`` and the seed node of the tree.
+
+        Returns
+        -------
+        integer
+            The number of nodes between ``self`` and the seed node of the tree,
+            or 0 if ``self`` has no parent.
+        """
+        if self._parent_node:
+            return self._parent_node.level() + 1
+        else:
+            return 0
+
+    def distance_from_root(self):
+        """
+        Weighted path length of ``self`` from root.
+
+        Returns
+        -------
+        numeric
+            Total weight of all edges connecting ``self`` with the root of the
+            tree.
+        """
+        if self._parent_node and self.edge.length != None:
+            if self._parent_node.distance_from_root == None:
+                return float(self.edge.length)
+            else:
+                distance_from_root = float(self.edge.length)
+                parent_node = self._parent_node
+                # The root is identified when a node with no
+                # parent is encountered. If we want to use some
+                # other criteria (e.g., where a is_root property
+                # is True), we modify it here.
+                while parent_node:
+                    if parent_node.edge.length != None:
+                        distance_from_root = distance_from_root + float(
+                            parent_node.edge.length
+                        )
+                    parent_node = parent_node._parent_node
+                return distance_from_root
+        elif not self._parent_node and self.edge.length != None:
+            return float(self.edge.length)
+        elif self._parent_node and self.edge.length == None:
+            # what do we do here: parent node exists, but my
+            # length does not?
+            return float(self._parent_node.edge.length)
+        elif not self._parent_node and self.edge.length == None:
+            # no parent node, and no edge length
+            return 0.0
+        else:
+            # WTF????
+            return 0.0
+
+    def distance_from_tip(self):
+        """
+        Maximum weighted length of path of ``self`` to tip.
+
+        If the tree is not ultrametric (i.e., descendant paths have different
+        lengths), then the maximum path length to a tip is returned. Note that
+        :meth:`Tree.calc_node_ages()` is a more efficient way of doing this
+        over the whole tree if this value is needed for many or all of the
+        nodes on the tree.
+
+        Returns
+        -------
+        numeric
+            Maximum weight of edges connecting ``self`` to tip.
+        """
+        if not self._child_nodes:
+            return 0.0
+        else:
+            distance_from_tips = []
+            for ch in self._child_nodes:
+                if ch.edge.length is not None:
+                    curr_edge_length = ch.edge_length
+                else:
+                    curr_edge_length = 0.0
+                if not hasattr(ch, "_distance_from_tip"):
+                    ch._distance_from_tip = ch.distance_from_tip()
+                distance_from_tips.append(ch._distance_from_tip + curr_edge_length)
+            self._distance_from_tip = float(max(distance_from_tips))
+            return self._distance_from_tip
+
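+    # A minimal sketch of the two weighted-distance queries (illustrative tree
+    # with branch lengths):
+    #
+    #     import dendropy
+    #     tree = dendropy.Tree.get(data="((A:1,B:2):3,C:4);", schema="newick")
+    #     a = tree.find_node_with_taxon_label("A")
+    #     a.distance_from_root()              # == 4.0  (1.0 + 3.0)
+    #     tree.seed_node.distance_from_tip()  # == 5.0  (longest root-to-tip path)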
+    def description(
+        self, depth=1, indent=0, itemize="", output=None, taxon_namespace=None
+    ):
+        """
+        Returns description of object, up to level ``depth``.
+        """
+        if depth is None or depth < 0:
+            return
+        output_strio = StringIO()
+        label = str(self)
+        output_strio.write(
+            "%s%sNode object at %s%s" % (indent * " ", itemize, hex(id(self)), label)
+        )
+        if depth >= 1:
+            leader1 = " " * (indent + 4)
+            leader2 = " " * (indent + 8)
+            output_strio.write("\n%s[Edge]" % leader1)
+            if self.edge is not None:
+                edge_desc = self.edge.description(0)
+            else:
+                edge_desc = "None"
+            output_strio.write("\n%s%s" % (leader2, edge_desc))
+
+            output_strio.write("\n%s[Taxon]" % leader1)
+            if self.taxon is not None:
+                taxon_desc = self.taxon.description(0)
+            else:
+                taxon_desc = "None"
+            output_strio.write("\n%s%s" % (leader2, taxon_desc))
+
+            output_strio.write("\n%s[Parent]" % leader1)
+            if self._parent_node is not None:
+                parent_node_desc = self._parent_node.description(0)
+            else:
+                parent_node_desc = "None"
+            output_strio.write("\n%s%s" % (leader2, parent_node_desc))
+            output_strio.write("\n%s[Children]" % leader1)
+            if len(self._child_nodes) == 0:
+                output_strio.write("\n%sNone" % leader2)
+            else:
+                for i, cnd in enumerate(self._child_nodes):
+                    output_strio.write("\n%s[%d] %s" % (leader2, i, cnd.description(0)))
+        s = output_strio.getvalue()
+        if output is not None:
+            output.write(s)
+        return s
+
+    ## For debugging we build-in a full-fledged NEWICK composition independent
+    ## of the nexus/newick family of modules. Client code should prefer to
+    ## use Newick/Nexus readers/writers, or Tree.write(), TreeList.write(),
+    ## DataSet.write() etc.
+
+    def _as_newick_string(self, **kwargs):
+        """
+        This returns the Node as a NEWICK statement according to the given
+        formatting rules. This should be used for debugging purposes only.
+        For production purposes, use the full-fledged 'as_string()'
+        method of the object.
+        """
+        out = StringIO()
+        self._write_newick(out, **kwargs)
+        return out.getvalue()
+
+    def _write_newick(self, out, **kwargs):
+        """
+        This returns the Node as a NEWICK statement according to the given
+        formatting rules. This should be used for debugging purposes only.  For
+        production purposes, use the full-fledged 'write_to_stream()'
+        method of the object.
+        """
+        edge_lengths = not kwargs.get("suppress_edge_lengths", False)
+        edge_lengths = kwargs.get("edge_lengths", edge_lengths)
+        child_nodes = self.child_nodes()
+        if child_nodes:
+            out.write("(")
+            f_child = child_nodes[0]
+            for child in child_nodes:
+                if child is not f_child:
+                    out.write(",")
+                child._write_newick(out, **kwargs)
+            out.write(")")
+
+        out.write(self._get_node_token(**kwargs))
+        if edge_lengths:
+            e = self.edge
+            if e:
+                sel = e.length
+                if sel is not None:
+                    fmt = kwargs.get("edge_length_formatter", None)
+                    if fmt:
+                        out.write(":%s" % fmt(sel))
+                    else:
+                        s = ""
+                        try:
+                            s = float(sel)
+                            s = str(s)
+                        except ValueError:
+                            s = str(sel)
+                        if s:
+                            out.write(":%s" % s)
+
+    def _get_node_token(self, **kwargs):
+        """returns a string that is an identifier for the node.  This is called
+        by the newick-writing functions, so the kwargs that affect how node
+        labels show up in a newick string are the same ones used here:
+        ``suppress_internal_labels`` is a Boolean, and defaults to False.
+        """
+        is_leaf = len(self._child_nodes) == 0
+        if not is_leaf:
+            if kwargs.get("suppress_internal_labels", False) or not kwargs.get(
+                "include_internal_labels", True
+            ):
+                return ""
+        if self.taxon is not None:
+            if self.taxon.label:
+                label = self.taxon.label
+            else:
+                # return "_" # taxon, but no label: anonymous
+                # "_" is not anonymous/unnamed, but a name of <blank>;
+                # so we return nothing instead
+                label = ""
+        else:
+            if self.label:
+                label = self.label
+            else:
+                label = ""
+        if not label or kwargs.get("raw_labels", False):
+            return label
+        elif " " in label and "_" in label:
+            if "'" in label:
+                label = label.replace("'", "''")
+            return "'{}'".format(label)
+        elif " " in label and not kwargs.get("preserve_spaces"):
+            return label.replace(" ", "_")
+        else:
+            return label
+
+    def _get_indented_form(self, **kwargs):
+        out = StringIO()
+        self._write_indented_form(out, **kwargs)
+        return out.getvalue()
+
+    def _write_indented_form(self, out, **kwargs):
+        indentation = kwargs.get("indentation", "    ")
+        level = kwargs.get("level", 0)
+        ancestors = []
+        siblings = []
+        n = self
+        while n is not None:
+            n._write_indented_form_line(out, level, **kwargs)
+            n, lev = n._preorder_list_manip(siblings, ancestors)
+            level += lev
+
+    def _get_indented_form_line(self, level, **kwargs):
+        out = StringIO()
+        self._write_indented_form_line(out, level, **kwargs)
+        return out.getvalue()
+
+    def _write_indented_form_line(self, out, level, **kwargs):
+        indentation = kwargs.get("indentation", "    ")
+        label = self._format_node(**kwargs)
+        if kwargs.get("bipartitions"):
+            cm = "%s " % self.edge.bipartition._format_bipartition(**kwargs)
+        else:
+            cm = ""
+        out.write("%s%s%s\n" % (cm, indentation * level, label))
+
+    def _format_node(self, **kwargs):
+        nf = kwargs.get('node_formatter', None)
+        if nf:
+            return nf(self)
+        if self.taxon is not None:
+            return str(self.taxon)
+        if self.label is not None:
+            return self.label
+        return ""
+
+    def _preorder_list_manip(self, siblings, ancestors):
+        """
+        Helper function for recursion-free preorder traversal that does
+        not rely on attributes of the node other than child_nodes() (thus it
+        is useful for debugging).
+
+        Returns the next node (or None) and the number of levels toward the
+        root the function "moved".
+        """
+        levels_moved = 0
+        c = self.child_nodes()
+        if c:
+            levels_moved += 1
+            ancestors.append(list(siblings))
+            del siblings[:]
+            siblings.extend(c[1:])
+            return c[0], levels_moved
+        while not siblings:
+            if ancestors:
+                levels_moved -= 1
+                del siblings[:]
+                siblings.extend(ancestors.pop())
+            else:
+                return None, levels_moved
+        return siblings.pop(0), levels_moved
+
+    def _convert_node_to_root_polytomy(self):
+        """If ``self`` has two children and at least one of them is an internal node,
+        then it will be converted to an out-degree three node (with the edge length
+        added as needed).
+
+        Returns a tuple of child nodes that were detached (or an empty tuple ``()``
+        if the tree was not modified). This can be useful for removing the deleted
+        node from the split_edge_map dictionary.
+        """
+        nd_children = self.child_nodes()
+        if len(nd_children) > 2:
+            return ()
+        try:
+            left_child = nd_children[0]
+        except:
+            return ()
+        if not left_child:
+            return ()
+        if len(nd_children) == 1:
+            right_child = None
+            dest_edge_head = self
+        else:
+            right_child = nd_children[1]
+            dest_edge_head = right_child
+        curr_add = None
+        if right_child and right_child.is_internal():
+            try:
+                left_child.edge.length += right_child.edge.length
+            except:
+                pass
+            self.remove_child(right_child)
+            grand_kids = right_child.child_nodes()
+            for gc in grand_kids:
+                self.add_child(gc)
+            curr_add = right_child
+        elif left_child.is_internal():
+            try:
+                dest_edge_head.edge.length += left_child.edge.length
+            except:
+                pass
+            self.remove_child(left_child)
+            grand_kids = left_child.child_nodes()
+            for gc in grand_kids:
+                self.add_child(gc)
+            curr_add = left_child
+        if curr_add:
+            ndl = [curr_add]
+            t = self._convert_node_to_root_polytomy()
+            ndl.extend(t)
+            return tuple(ndl)
+        return ()
diff --git a/src/dendropy/datamodel/treemodel.py b/src/dendropy/datamodel/treemodel/_tree.py
similarity index 51%
rename from src/dendropy/datamodel/treemodel.py
rename to src/dendropy/datamodel/treemodel/_tree.py
index 5fd6acab..01a96992 100644
--- a/src/dendropy/datamodel/treemodel.py
+++ b/src/dendropy/datamodel/treemodel/_tree.py
@@ -1,2555 +1,31 @@
 #! /usr/bin/env python
 # -*- coding: utf-8 -*-
 
-##############################################################################
-##  DendroPy Phylogenetic Computing Library.
-##
-##  Copyright 2010-2015 Jeet Sukumaran and Mark T. Holder.
-##  All rights reserved.
-##
-##  See "LICENSE.rst" for terms and conditions of usage.
-##
-##  If you use this work or any portion thereof in published work,
-##  please cite it as:
-##
-##     Sukumaran, J. and M. T. Holder. 2010. DendroPy: a Python library
-##     for phylogenetic computing. Bioinformatics 26: 1569-1571.
-##
-##############################################################################
-
-"""
-This module handles the core definition of tree data structure class,
-as well as all the structural classes that make up a tree.
-"""
-
-import collections
-import math
-from dendropy.utility.textprocessing import StringIO
-import copy
-import sys
-from dendropy.utility import GLOBAL_RNG
-from dendropy.utility import container
-from dendropy.utility import terminal
-from dendropy.utility import error
-from dendropy.utility import bitprocessing
-from dendropy.utility import deprecate
-from dendropy.utility import constants
-from dendropy.utility import textprocessing
-from dendropy.datamodel import basemodel
-from dendropy.datamodel import taxonmodel
-from dendropy import dataio
-
-##############################################################################
-### Bipartition
-
-class Bipartition(object):
-    """
-    A bipartition on a tree.
-
-    A bipartition of a tree is a division or sorting of the leaves/tips of a
-    tree into two mutually-exclusive and collectively-comprehensive subsets,
-    obtained by bisecting the tree at a particular edge. There is thus a
-    one-to-one correspondence with an edge of a tree and a bipartition. The
-    term "split" is often also used to refer to the same concept, though this
-    is typically applied to unrooted trees.
-
-    A bipartition is modeled using a bitmask. This is a a bit array
-    representing the membership of taxa, with the least-significant bit
-    corresponding to the first taxon, the next least-signficant bit
-    corresponding to the second taxon, and so on, till the last taxon
-    corresponding to the most-significant bit. Taxon membership in one of two
-    arbitrary groups, '0' or '1', is indicated by its corresponding bit being
-    unset or set, respectively.
-
-    To allow comparisons and correct identification of the same bipartition
-    across different rotational and orientiational representations of unrooted
-    trees, we *normalize* the bipartition such that the first taxon is always
-    assigned to group '0' for bipartition representations of unrooted trees.
-
-    The normalization of the bitmask loses information about the actual
-    descendents of a particular edge. Thus in addition to the
-    :attr:`Bipartition.bitmask` attribute, each |Bipartition| object
-    also maintains a :attr:`Bipartition.leafset_bitmask` attribute which is
-    *unnormalized*. This is a bit array representing the presence or absence of
-    taxa in the subtree descending from the child node of the edge of which
-    this bipartition is associated. The least-significant bit corresponds to
-    the first taxon, the next least-signficant bit corresponds to the second
-    taxon, and so on, with the last taxon corresponding to the most-significant
-    bit. For rooted trees, the value of :attr:`Bipartition.bitmask` and
-    :attr:`Bipartition.leafset_bitmask` are identical. For unrooted trees, they
-    may or may not be equal.
-
-    In general, we use :attr:`Bipartition.bitmask` data to establish the *identity*
-    of a split or bipartition across *different* trees: for example, when
-    computing the Robinson-Foulds distances between trees, or in assessing the
-    support for different bipartitions given an MCMC or bootstrap sample of trees.
-    Here the normalization of the bitmask in unrooted trees allows for the
-    (arbitrarily-labeled) group '0' to be consistent across different
-    representations, rotations, and orientations of trees.
-
-    On the other hand, we use :attr:`Bipartition.leafset_bitmask` data to work
-    with various ancestor-descendent relationships *within* the *same* tree:
-    for example, to quickly assess if a taxon descends from a particular
-    node in a given tree, or if a particular node is a common ancestor of
-    two taxa in a given tree.
-
-    The |Bipartition| object might be used in keys in dictionaries and
-    look-up tables implemented as sets to allow for, e.g., calculation of
-    support in terms of the number times a particular bipartition is observed.
-    The :attr:`Bipartition.bitmask` is used as hash value for this purpose. As
-    such, it is crucial that this value does not change once a particular
-    |Bipartition| object is stored in a dictionary or set. To this end,
-    we impose the constraint that |Bipartition| objects are immutable
-    unless the ``is_mutable`` attribute is explicitly set to |True| as a sort
-    of waiver signed by the client code. Client code does this at its risk,
-    with the warning that anything up to and including the implosion of the
-    universe may occur if the |Bipartition| object is a member of an set
-    of dictionary at the time (or, at the very least, the modified
-    |Bipartition| object may not be accessible from dictionaries
-    and sets in which it is stored, or may occlude other
-    |Bipartition| objects in the container).
-
-    Note
-    ----
-
-    There are two possible ways of mapping taxa to bits in a bitarray or bitstring.
-
-    In the "Least-Signficiant-Bit" (LSB) scheme, the first taxon corresponds to the
-    least-significant, or left-most bit. So, given four taxa, indexed from 1 to 4,
-    taxon 1 would map to 0b0001, taxon 2 would map to 0b0010, taxon 3 would map
-    to 0b0100, and taxon 4 would map to 0b1000.
-
-    In the "Most-Significant-Bit" (MSB) scheme, on the other hand, the first taxon
-    corresponds to the most-significant, or right-most bit. So, given four
-    taxa, indexed from 1 to 4, taxon 1 would map to 0b1000, taxon 2 would map
-    to 0b0100, taxon 3 would map to 0b0010, and taxon 4 would map to 0b0001.
-
-    We selected the Least Significant Bit (LSB) approach because the MSB scheme
-    requires the size of the taxon namespace to fixed before the index can be
-    assigned to any taxa. For example, under the MSB scheme, if there are 4
-    taxa, the bitmask for taxon 1 is 0b1000 == 8, but if another taxon is
-    added, then the bitmask for taxon 1 will become 0b10000 == 16. On the other
-    hand, under the LSB scheme, the bitmask for taxon 1 will be 0b0001 == 1 if
-    there are 4 taxa, and 0b00001 == 1 if there 5 taxa, and so on. This
-    stability of taxon indexes even as the taxon namespace grows is a strongly
-    desirable property, and this the adoption of the LSB scheme.
-
-    Constraining the first taxon to be in group 0 (LSB-0) rather than group 1
-    (LSB-1) is motivated by the fact that, in the former, we can combine
-    the bitmasks of child nodes using OR (logical addition) operations when
-    calculating the bitmask for a parent node, whereas, with the latter, we
-    would need to use AND operations. The former strikes us as more intuitive.
-
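-    Example
-    -------
-
-    An illustrative sketch (added for clarity; a hypothetical four-taxon
-    namespace is assumed, and the class is assumed to be importable from the
-    top-level ``dendropy`` namespace)::
-
-        from dendropy import Bipartition
-
-        # Unrooted split grouping taxa {1, 2} against {3, 4}: the raw
-        # leafset bitmask is 0b0011, but, because the first taxon's bit is
-        # set, the normalized (LSB-0) split bitmask is its complement.
-        b = Bipartition(
-                leafset_bitmask=0b0011,
-                tree_leafset_bitmask=0b1111,
-                is_rooted=False,
-                is_mutable=False)
-        assert b.leafset_bitmask == 0b0011
-        assert b.split_bitmask == 0b1100
-        # Once frozen (``is_mutable=False``), the bipartition is hashable and
-        # can be used as a key when tallying split support across trees.
-        split_support_counts = {b: 1}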
-    """
-
-    def normalize_bitmask(bitmask, fill_bitmask, lowest_relevant_bit=1):
-        if bitmask & lowest_relevant_bit:
-            return (~bitmask) & fill_bitmask             # force least-significant bit to 0
-        else:
-            return bitmask & fill_bitmask                # keep least-significant bit as 0
-    normalize_bitmask = staticmethod(normalize_bitmask)
-
-    def is_trivial_bitmask(bitmask, fill_bitmask):
-        """
-        Returns True if the bitmask is trivial, i.e., would occur on any tree
-        of the taxa in ``fill_bitmask`` -- that is, if there are fewer than
-        two 1's or fewer than two 0's in ``bitmask`` (considering only the
-        positions that are set in ``fill_bitmask``).
-        """
-        masked_split = bitmask & fill_bitmask
-        if bitmask == 0 or bitmask == fill_bitmask:
-            return True
-        if ((masked_split - 1) & masked_split) == 0:
-            return True
-        cm = (~bitmask) & fill_bitmask
-        if ((cm - 1) & cm) == 0:
-            return True
-        return False
-    is_trivial_bitmask = staticmethod(is_trivial_bitmask)
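-    # Illustrative examples for is_trivial_bitmask() (a four-taxon fill mask
-    # of 0b1111 is assumed):
-    #   is_trivial_bitmask(0b0001, 0b1111) -> True   (splits off a single taxon)
-    #   is_trivial_bitmask(0b0111, 0b1111) -> True   (complement is a single taxon)
-    #   is_trivial_bitmask(0b0011, 0b1111) -> False  (two taxa on each side)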
-
-    def is_trivial_leafset(leafset_bitmask):
-        return bitprocessing.num_set_bits(leafset_bitmask) == 1
-    is_trivial_leafset = staticmethod(is_trivial_leafset)
-
-    def is_compatible_bitmasks(m1, m2, fill_bitmask):
-        """
-        Returns |True| if ``m1`` is compatible with ``m2``
-
-        Parameters
-        ----------
-        m1 : int
-            A bitmask representing a split.
-        m2 : int
-            A bitmask representing a split.
-        fill_bitmask : int
-            A bitmask with the bits of all relevant taxon positions set; if
-            non-zero, ``m1`` and ``m2`` are masked by it before the
-            compatibility check.
-
-        Returns
-        -------
-        bool
-            |True| if ``m1`` is compatible with ``m2``. |False| otherwise.
-        """
-        if fill_bitmask != 0:
-            m1 = fill_bitmask & m1
-            m2 = fill_bitmask & m2
-        if 0 == (m1 & m2):
-            return True
-        c2 = m1 ^ m2
-        if 0 == (m1 & c2):
-            return True
-        c1 = fill_bitmask ^ m1
-        if 0 == (c1 & m2):
-            return True
-        if 0 == (c1 & c2):
-            return True
-        return False
-    is_compatible_bitmasks = staticmethod(is_compatible_bitmasks)
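-    # Illustrative examples for is_compatible_bitmasks() (a four-taxon fill
-    # mask of 0b1111 is assumed):
-    #   is_compatible_bitmasks(0b0011, 0b0111, 0b1111) -> True
-    #       ({1,2}|{3,4} and {1,2,3}|{4} can co-occur on a single tree)
-    #   is_compatible_bitmasks(0b0011, 0b0110, 0b1111) -> False
-    #       ({1,2}|{3,4} conflicts with {2,3}|{1,4})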
-
-    ##############################################################################
-    ## Life-cycle
-
-    def __init__(self, **kwargs):
-        """
-
-        Keyword Arguments
-        -----------------
-        bitmask : integer
-            A bit array representing the membership of taxa, with the
-            least-significant bit corresponding to the first taxon, the next
-            least-significant bit corresponding to the second taxon, and so on,
-            until the last taxon corresponding to the most-significant bit.
-            Taxon membership in one of two arbitrary groups, '0' or '1', is
-            indicated by its corresponding bit being unset or set,
-            respectively.
-        leafset_bitmask : integer
-            A bit array representing the presence or absence of taxa in the
-            subtree descending from the child node of the edge of which this
-            bipartition is associated. The least-significant bit corresponds to
-            the first taxon, the next least-significant bit corresponds to the
-            second taxon, and so on, with the last taxon corresponding to the
-            most-significant bit.
-        tree_leafset_bitmask : integer
-            The ``leafset_bitmask`` of the root edge of the tree with which this
-            bipartition is associated. In general, this will be $0b111...1$,
-            i.e., a bitmask with $n$ set bits, where $n$ is the number of
-            taxa, *except* in cases of trees with incomplete leaf-sets, where
-            the positions corresponding to the
-            missing taxa will have the bits unset.
-        is_rooted : bool
-            Specifies whether or not the tree with which this bipartition is
-            associated is rooted.
-        """
-        self._split_bitmask = kwargs.get("bitmask", 0)
-        self._leafset_bitmask = kwargs.get("leafset_bitmask", self._split_bitmask)
-        self._tree_leafset_bitmask = kwargs.get("tree_leafset_bitmask", None)
-        self._lowest_relevant_bit = None
-        self._is_rooted = kwargs.get("is_rooted", None)
-        # self.edge = kwargs.get("edge", None)
-        is_mutable = kwargs.get("is_mutable", None)
-        if kwargs.get("compile_bipartition", True):
-            self.is_mutable = True
-            self.compile_split_bitmask(
-                    leafset_bitmask=self._leafset_bitmask,
-                    tree_leafset_bitmask=self._tree_leafset_bitmask)
-            if is_mutable is None:
-                self.is_mutable = True
-            else:
-                self.is_mutable = is_mutable
-        elif is_mutable is not None:
-            self.is_mutable = is_mutable
-
-    ##############################################################################
-    ## Identity
-
-    def __hash__(self):
-        assert not self.is_mutable, "Bipartition is mutable: hash is unstable"
-        return self._split_bitmask or 0
-
-    def __eq__(self, other):
-        # return self._split_bitmask == other._split_bitmask
-        return (self._split_bitmask is not None and self._split_bitmask == other._split_bitmask) or (self._split_bitmask is other._split_bitmask)
-
-    ##############################################################################
-    ## All properties are publicly read-only if not mutable
-
-    def _get_split_bitmask(self):
-        return self._split_bitmask
-    def _set_split_bitmask(self, value):
-        assert self.is_mutable, "Bipartition instance is not mutable"
-        self._split_bitmask = value
-    split_bitmask = property(_get_split_bitmask, _set_split_bitmask)
-
-    def _get_leafset_bitmask(self):
-        return self._leafset_bitmask
-    def _set_leafset_bitmask(self, value):
-        assert self.is_mutable, "Bipartition instance is not mutable"
-        self._leafset_bitmask = value
-    leafset_bitmask = property(_get_leafset_bitmask, _set_leafset_bitmask)
-
-    def _get_tree_leafset_bitmask(self):
-        return self._tree_leafset_bitmask
-    def _set_tree_leafset_bitmask(self, value):
-        assert self.is_mutable, "Bipartition instance is not mutable"
-        self.compile_tree_leafset_bitmask(value)
-    tree_leafset_bitmask = property(_get_tree_leafset_bitmask, _set_tree_leafset_bitmask)
-
-    def _get_is_rooted(self):
-        return self._is_rooted
-    def _set_is_rooted(self, value):
-        assert self.is_mutable, "Bipartition instance is not mutable"
-        self._is_rooted = value
-    is_rooted = property(_get_is_rooted, _set_is_rooted)
-
-    ##############################################################################
-    ## Representation
-
-    def __str__(self):
-        return bin(self._split_bitmask)[2:].rjust(bitprocessing.bit_length(self._tree_leafset_bitmask), '0')
-
-    def __int__(self):
-        return self._split_bitmask
-
-    def split_as_int(self):
-        return self._split_bitmask
-
-    def leafset_as_int(self):
-        return self._leafset_bitmask
-
-    def split_as_bitstring(self, symbol0="0", symbol1="1", reverse=False):
-        """
-        Composes and returns a representation of the bipartition as a
-        bitstring.
-
-        Parameters
-        ----------
-        symbol0 : str
-            The symbol to represent group '0' in the bitmask.
-        symbol1 : str
-            The symbol to represent group '1' in the bitmask.
-        reverse : bool
-            If |True|, then the first taxon will correspond to the
-            most-significant bit, instead of the least-significant bit, as is
-            the default.
-
-        Returns
-        -------
-        str
-            The bitstring representing the bipartition.
-
-        Example
-        -------
-        To represent a bipartition in the same scheme used by, e.g., PAUP* or
-        MrBayes::
-
-            print(bipartition.split_as_bitstring('.', '*', reverse=True))
-        """
-        return self.bitmask_as_bitstring(
-                mask=self._split_bitmask,
-                symbol0=symbol0,
-                symbol1=symbol1,
-                reverse=reverse)
-
-    def leafset_as_bitstring(self, symbol0="0", symbol1="1", reverse=False):
-        """
-        Composes and returns a representation of the bipartition leafset as a
-        bitstring.
-
-        Parameters
-        ----------
-        symbol0 : str
-            The symbol to represent group '0' in the bitmask.
-        symbol1 : str
-            The symbol to represent group '1' in the bitmask.
-        reverse : bool
-            If |True|, then the first taxon will correspond to the
-            most-significant bit, instead of the least-significant bit, as is
-            the default.
-
-        Returns
-        -------
-        str
-            The bitstring representing the bipartition leafset.
-
-        Example
-        -------
-        To represent a bipartition in the same scheme used by, e.g., PAUP* or
-        MrBayes::
-
-            print(bipartition.leafset_as_bitstring('.', '*', reverse=True))
-        """
-        return self.bitmask_as_bitstring(
-                mask=self._leafset_bitmask,
-                symbol0=symbol0,
-                symbol1=symbol1,
-                reverse=reverse)
-
-    def bitmask_as_bitstring(self, mask, symbol0=None, symbol1=None, reverse=False):
-        return bitprocessing.int_as_bitstring(mask,
-                length=bitprocessing.bit_length(self._tree_leafset_bitmask),
-                symbol0=symbol0,
-                symbol1=symbol1,
-                reverse=reverse)
-
-    ##############################################################################
-    ## Calculation
-
-    def compile_tree_leafset_bitmask(self,
-            tree_leafset_bitmask,
-            lowest_relevant_bit=None):
-        """
-        Avoids recalculation of ``lowest_relevant_bit`` if specified.
-        """
-        assert self.is_mutable, "Bipartition instance is not mutable"
-        self._tree_leafset_bitmask = tree_leafset_bitmask
-        if lowest_relevant_bit is not None:
-            self._lowest_relevant_bit = lowest_relevant_bit
-        elif self._tree_leafset_bitmask:
-            self._lowest_relevant_bit = bitprocessing.least_significant_set_bit(self._tree_leafset_bitmask)
-        else:
-            self._lowest_relevant_bit = None
-        return self._tree_leafset_bitmask
-
-    def compile_leafset_bitmask(self,
-           leafset_bitmask=None,
-           tree_leafset_bitmask=None):
-        assert self.is_mutable, "Bipartition instance is not mutable"
-        if tree_leafset_bitmask is not None:
-            self.compile_tree_leafset_bitmask(tree_leafset_bitmask)
-        if leafset_bitmask is None:
-            leafset_bitmask = self._leafset_bitmask
-        if self._tree_leafset_bitmask:
-            self._leafset_bitmask = leafset_bitmask & self._tree_leafset_bitmask
-        else:
-            self._leafset_bitmask = leafset_bitmask
-        return self._leafset_bitmask
-
-    def compile_split_bitmask(self,
-           leafset_bitmask=None,
-           tree_leafset_bitmask=None,
-           is_rooted=None,
-           is_mutable=True):
-        """
-        Updates the values of the various masks specified and calculates the
-        normalized bipartition bitmask.
-
-        If this is a rooted bipartition, then the split bitmask is set to the
-        value of the leafset bitmask.
-        If this is an unrooted bipartition, then the leafset bitmask is
-        normalized such that the least-significant relevant bit (i.e., the bit
-        of the group to which the first taxon belongs) is set to '0'.
-
-        Also makes this bipartition immutable (unless ``is_mutable`` is |False|),
-        which facilitates it being used in dictionaries and sets.
-
-        Parameters
-        ----------
-        leafset_bitmask : integer
-            A bit array representing the presence or absence of taxa in the
-            subtree descending from the child node of the edge of which this
-            bipartition is associated. The least-significant bit corresponds to
-            the first taxon, the next least-significant bit corresponds to the
-            second taxon, and so on, with the last taxon corresponding to the
-            most-significant bit. If not specified or |None|, the current value
-            of ``self.leafset_bitmask`` is used.
-        tree_leafset_bitmask : integer
-            The ``leafset_bitmask`` of the root edge of the tree with which this
-            bipartition is associated. In general, this will be $0b111...1$,
-            i.e., a bitmask with $n$ set bits, where $n$ is the number of
-            taxa, *except* in cases of trees with incomplete leaf-sets, where
-            the positions corresponding to the
-            missing taxa will have the bits unset. If not specified or |None|,
-            the current value of ``self.tree_leafset_bitmask`` is used.
-        is_rooted : bool
-            Specifies whether or not the tree with which this bipartition is
-            associated is rooted. If not specified or |None|, the current value
-            of ``self.is_rooted`` is used.
-
-        Returns
-        -------
-        integer
-            The bipartition bitmask.
-        """
-        assert self.is_mutable, "Bipartition instance is not mutable"
-        if is_rooted is not None:
-            self._is_rooted = is_rooted
-        if tree_leafset_bitmask:
-            self.compile_tree_leafset_bitmask(tree_leafset_bitmask=tree_leafset_bitmask)
-        if leafset_bitmask:
-            self.compile_leafset_bitmask(leafset_bitmask=leafset_bitmask)
-        if self._leafset_bitmask is None:
-            return
-        if self._tree_leafset_bitmask is None:
-            return
-        if self._is_rooted:
-            self._split_bitmask = self._leafset_bitmask
-        else:
-            self._split_bitmask = Bipartition.normalize_bitmask(
-                    bitmask=self._leafset_bitmask,
-                    fill_bitmask=self._tree_leafset_bitmask,
-                    lowest_relevant_bit=self._lowest_relevant_bit)
-        if is_mutable is not None:
-            self.is_mutable = is_mutable
-        return self._split_bitmask
-
-    def compile_bipartition(self, is_mutable=None):
-        """
-        Updates the values of the various masks specified and calculates the
-        normalized bipartition bitmask.
-
-        If this is a rooted bipartition, then the split bitmask is set to the
-        value of the leafset bitmask.
-        If this is an unrooted bipartition, then the leafset bitmask is
-        normalized such that the least-significant relevant bit (i.e., the bit
-        of the group to which the first taxon belongs) is set to '0'.
-
-        Also makes this bipartition immutable (unless ``is_mutable`` is |False|),
-        which facilitates it being used in dictionaries and sets.
-
-        Note that this requires full population of the following fields:
-            - self._leafset_bitmask
-            - self._tree_leafset_bitmask
-        """
-        self.compile_split_bitmask(
-            leafset_bitmask=self._leafset_bitmask,
-            tree_leafset_bitmask=self._tree_leafset_bitmask,
-            is_rooted=self._is_rooted,
-            is_mutable=is_mutable)
-
-    ##############################################################################
-    ## Operations
-
-    def normalize(self, bitmask, convention="lsb0"):
-        """
-        Returns a normalized form of ``bitmask``: under the "lsb0" convention,
-        the bit corresponding to the first taxon is forced to 0; under the
-        "lsb1" convention, it is forced to 1.
-        """
-        if convention == "lsb0":
-            if self._lowest_relevant_bit & bitmask:
-                return (~bitmask) & self._tree_leafset_bitmask
-            else:
-                return bitmask & self._tree_leafset_bitmask
-        elif convention == "lsb1":
-            if self._lowest_relevant_bit & bitmask:
-                return bitmask & self._tree_leafset_bitmask
-            else:
-                return (~bitmask) & self._tree_leafset_bitmask
-        else:
-            raise ValueError("Unrecognized convention: {}".format(convention))
-
-    def is_compatible_with(self, other):
-        """
-        Returns |True| if ``other`` is compatible with self.
-
-        Parameters
-        ----------
-        other : |Bipartition|
-            The bipartition to check for compatibility.
-
-        Returns
-        -------
-        bool
-            |True| if ``other`` is compatible with ``self``; |False| otherwise.
-        """
-        m1 = self._split_bitmask
-        if isinstance(other, int):
-            m2 = other
-        else:
-            m2 = other._split_bitmask
-        return Bipartition.is_compatible_bitmasks(m1, m2, self._tree_leafset_bitmask)
-
-    def is_incompatible_with(self, other):
-        """
-        Returns |True| if ``other`` conflicts with self.
-
-        Parameters
-        ----------
-        other : |Bipartition|
-            The bipartition to check for conflicts.
-
-        Returns
-        -------
-        bool
-            |True| if ``other`` conflicts with ``self``; |False| otherwise.
-        """
-        return not self.is_compatible_with(other)
-
-    def is_nested_within(self, other, is_other_masked_for_tree_leafset=False):
-        """
-        Returns |True| if the current bipartition is contained
-        within other.
-
-        Parameters
-        ----------
-        other : |Bipartition|
-            The bipartition to check.
-
-        Returns
-        -------
-        bool
-            |True| if this bipartition is "contained" within ``other``.
-        """
-        if self._is_rooted:
-            m1 = self._leafset_bitmask
-            m2 = other._leafset_bitmask
-        else:
-            m1 = self._split_bitmask
-            m2 = other._split_bitmask
-        if not is_other_masked_for_tree_leafset:
-            m2 = self._tree_leafset_bitmask & m2
-        return ( (m1 & m2) == m1 )
-
-    def is_leafset_nested_within(self, other):
-        """
-        Returns |True| if the leafset of ``self`` is a subset of the leafset of
-        ``other``.
-
-        Parameters
-        ----------
-        other : |Bipartition|
-            The bipartition to check for compatibility.
-
-        Returns
-        -------
-        bool
-            |True| if the leafset of ``self`` is contained in ``other``.
-        """
-        if isinstance(other, int):
-            m2 = other
-        else:
-            m2 = other._leafset_bitmask
-        m2 = self._tree_leafset_bitmask & m2
-        return ( (m2 & self._leafset_bitmask) ==  self._leafset_bitmask )
-
-    def is_trivial(self):
-        """
-        Returns
-        -------
-        bool
-            |True| if this bipartition separates a single leaf from the rest
-            of the tree.
-        """
-        return Bipartition.is_trivial_bitmask(self._split_bitmask,
-                self._tree_leafset_bitmask)
-
-    def split_as_newick_string(self,
-            taxon_namespace,
-            preserve_spaces=False,
-            quote_underscores=True):
-        """
-        Represents this bipartition split as a newick string.
-
-        Parameters
-        ----------
-        taxon_namespace : |TaxonNamespace| instance
-            The operational taxonomic unit concept namespace to reference.
-        preserve_spaces : boolean, optional
-            If |False| (default), then spaces in taxon labels will be replaced
-            by underscores. If |True|, then taxon labels with spaces will be
-            wrapped in quotes.
-        quote_underscores : boolean, optional
-            If |True| (default), then taxon labels with underscores will be
-            wrapped in quotes. If |False|, then the labels will not be wrapped
-            in quotes.
-
-        Returns
-        -------
-        string
-            NEWICK representation of split specified by ``bitmask``.
-        """
-        return taxon_namespace.bitmask_as_newick_string(
-                bitmask=self._split_bitmask,
-                preserve_spaces=preserve_spaces,
-                quote_underscores=quote_underscores)
-
-    def leafset_as_newick_string(self,
-            taxon_namespace,
-            preserve_spaces=False,
-            quote_underscores=True):
-        """
-        Represents this bipartition leafset as a newick string.
-
-        Parameters
-        ----------
-        taxon_namespace : |TaxonNamespace| instance
-            The operational taxonomic unit concept namespace to reference.
-        preserve_spaces : boolean, optional
-            If |False| (default), then spaces in taxon labels will be replaced
-            by underscores. If |True|, then taxon labels with spaces will be
-            wrapped in quotes.
-        quote_underscores : boolean, optional
-            If |True| (default), then taxon labels with underscores will be
-            wrapped in quotes. If |False|, then the labels will not be wrapped
-            in quotes.
-
-        Returns
-        -------
-        string
-            NEWICK representation of split specified by ``bitmask``.
-        """
-        return taxon_namespace.bitmask_as_newick_string(
-                bitmask=self._leafset_bitmask,
-                preserve_spaces=preserve_spaces,
-                quote_underscores=quote_underscores)
-
-    def leafset_taxa(self, taxon_namespace, index=0):
-        """
-        Returns list of |Taxon| objects in the leafset of this
-        bipartition.
-
-        Parameters
-        ----------
-        taxon_namespace : |TaxonNamespace| instance
-            The operational taxonomic unit concept namespace to reference.
-        index : integer, optional
-            Start from this |Taxon| object instead of the first
-            |Taxon| object in the collection.
-
-        Returns
-        -------
-        :py:class:`list` [|Taxon|]
-            List of |Taxon| objects specified or spanned by
-            ``bitmask``.
-        """
-        return taxon_namespace.bitmask_taxa_list(
-                bitmask=self._leafset_bitmask,
-                index=index)
-
-    # def as_newick_string
-    # def is_trivial
-    # def is_non_singleton
-    # def leafset_hash
-    # def leafset_as_bitstring
-    # def is_compatible
-
-##############################################################################
-### Edge
-
-class Edge(
-        basemodel.DataObject,
-        basemodel.Annotable):
-    """
-    An :term:`edge` on a :term:`tree`.
-    """
-
-    ###########################################################################
-    ### Life-cycle and Identity
-
-    def __init__(self, **kwargs):
-        """
-        Keyword Arguments
-        -----------------
-        head_node : |Node|, optional
-            The node to which this edge links, i.e., the child node of this
-            edge's ``tail_node``.
-        length : numerical, optional
-            A value representing the weight of the edge.
-        rootedge : boolean, optional
-            Is the child node of this edge the root or seed node of the tree?
-        label : string, optional
-            Label for this edge.
-
-        """
-        basemodel.DataObject.__init__(self, label=kwargs.pop("label", None))
-        self._head_node = kwargs.pop("head_node", None)
-        if "tail_node" in kwargs:
-            raise TypeError("Setting the tail node directly is no longer supported: instead, set the parent node of the head node")
-        self.rootedge = kwargs.pop("rootedge", None)
-        self.length = kwargs.pop("length", None)
-        if kwargs:
-            raise TypeError("Unsupported keyword arguments: {}".format(kwargs))
-
-        self._bipartition = None
-        self.comments = []
-
-    def __copy__(self, memo=None):
-        raise TypeError("Cannot directly copy Edge")
-
-    def taxon_namespace_scoped_copy(self, memo=None):
-        raise TypeError("Cannot directly copy Edge")
-
-    def __deepcopy__(self, memo=None):
-        # call Annotable.__deepcopy__()
-        return basemodel.Annotable.__deepcopy__(self, memo=memo)
-        # return super(Edge, self).__deepcopy__(memo=memo)
-
-    def __hash__(self):
-        return id(self)
-
-    def __eq__(self, other):
-        return self is other
-
-    def __lt__(self, other):
-        return id(self) < id(other)
-
-    ###########################################################################
-    ### Basic Structure
-
-    def _get_tail_node(self):
-        if self._head_node is None:
-            return None
-        return self._head_node._parent_node
-    def _set_tail_node(self, node):
-        if self._head_node is None:
-            raise ValueError("'_head_node' is 'None': cannot assign 'tail_node'")
-        # Go through managed property instead of
-        # setting attribute to ensure book-keeping
-        self._head_node.parent_node = node
-    tail_node = property(_get_tail_node, _set_tail_node)
-
-    def _get_head_node(self):
-        return self._head_node
-    def _set_head_node(self, node):
-        # Go through managed property instead of setting attribute to ensure
-        # book-keeping; following should also set ``_head_node`` of ``self``
-        node.edge = self
-    head_node = property(_get_head_node, _set_head_node)
-
-    def is_leaf(self):
-        "Returns True if the head node has no children"
-        return self.head_node and self.head_node.is_leaf()
-
-    def is_terminal(self):
-        return self.is_leaf()
-
-    def is_internal(self):
-        "Returns True if the head node has children"
-        return self.head_node and not self.head_node.is_leaf()
-
-    def get_adjacent_edges(self):
-        """
-        Returns a list of all edges that "share" a node with ``self``.
-        """
-        he = [i for i in self.head_node.incident_edges() if i is not self]
-        te = [i for i in self.tail_node.incident_edges() if i is not self]
-        he.extend(te)
-        return he
-    adjacent_edges = property(get_adjacent_edges)
-
-    ###########################################################################
-    ### Structural Manipulation
-
-    def collapse(self, adjust_collapsed_head_children_edge_lengths=False):
-        """
-        Inserts all children of the head_node of self as children of the
-        tail_node of self in the same place in the child_node list that
-        head_node had occupied. The head_node is removed from the tree, and
-        its edge length is discarded unless
-        ``adjust_collapsed_head_children_edge_lengths`` is True, in which case
-        the length is added to the edge lengths of the children.
-        """
-        to_del = self.head_node
-        parent = self.tail_node
-        if not parent:
-            return
-        children = to_del.child_nodes()
-        if not children:
-            raise ValueError('collapse() called on a terminal edge.')
-        pos = parent.child_nodes().index(to_del)
-        parent.remove_child(to_del)
-        for child in children:
-            parent.insert_child(pos, child)
-            pos += 1
-            if adjust_collapsed_head_children_edge_lengths and self.length is not None:
-                # print id(child), child.edge.length, self.length
-                if child.edge.length is None:
-                    child.edge.length = self.length
-                else:
-                    child.edge.length += self.length
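-    # Usage sketch for collapse() (illustrative only; ``tree`` is an existing
-    # |Tree| object and ``low_support`` a hypothetical user-supplied
-    # predicate; edges are collected first so the tree is not modified while
-    # it is being traversed):
-    #   to_collapse = [e for e in tree.preorder_edge_iter()
-    #                  if e.is_internal() and e.tail_node is not None and low_support(e)]
-    #   for e in to_collapse:
-    #       e.collapse(adjust_collapsed_head_children_edge_lengths=True)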
-
-    def invert(self, update_bipartitions=False):
-        """
-        Changes polarity of edge.
-        """
-        # self.head_node, self.tail_node = self.tail_node, self.head_node
-
-        if not self.head_node:
-            raise ValueError("Cannot invert edge with 'None' for head node")
-        if not self.tail_node:
-            raise ValueError("Cannot invert edge with 'None' for tail node")
-
-        old_head_node = self.head_node
-        new_tail_node = old_head_node
-        old_tail_node = self.tail_node
-        new_head_node = old_tail_node
-        grandparent = old_tail_node._parent_node
-        if grandparent is not None:
-            for idx, ch in enumerate(grandparent._child_nodes):
-                if ch is old_tail_node:
-                    grandparent._child_nodes[idx] = old_head_node
-                    break
-            else:
-                # we did not break loop: force insertion of old_head_node if
-                # not already there
-                if old_head_node not in grandparent._child_nodes:
-                    grandparent._child_nodes.append(old_head_node)
-        assert old_head_node in old_tail_node._child_nodes
-        old_tail_node.remove_child(old_head_node)
-        assert old_head_node not in old_tail_node._child_nodes
-        old_head_node.add_child(old_tail_node)
-        old_tail_node.edge.length, old_head_node.edge.length = old_head_node.edge.length, old_tail_node.edge_length
-
-    ###########################################################################
-    ### Bipartition Management
-
-    def _get_bipartition(self):
-        if self._bipartition is None:
-            self._bipartition = Bipartition(
-                    edge=self,
-                    is_mutable=True,
-                    )
-        return self._bipartition
-    def _set_bipartition(self, v=None):
-        self._bipartition = v
-    bipartition = property(_get_bipartition, _set_bipartition)
-
-    def _get_split_bitmask(self):
-        return self.bipartition._split_bitmask
-    def _set_split_bitmask(self, h):
-        self.bipartition._split_bitmask = h
-    split_bitmask = property(_get_split_bitmask, _set_split_bitmask)
-
-    def _get_leafset_bitmask(self):
-        return self.bipartition._leafset_bitmask
-    def _set_leafset_bitmask(self, h):
-        self.bipartition._leafset_bitmask = h
-    leafset_bitmask = property(_get_leafset_bitmask, _set_leafset_bitmask)
-
-    def _get_tree_leafset_bitmask(self):
-        return self.bipartition._tree_leafset_bitmask
-    def _set_tree_leafset_bitmask(self, h):
-        self.bipartition._tree_leafset_bitmask = h
-    tree_leafset_bitmask = property(_get_tree_leafset_bitmask, _set_tree_leafset_bitmask)
-
-    def split_as_bitstring(self):
-        return self.bipartition.split_as_bitstring()
-
-    def leafset_as_bitstring(self):
-        return self.bipartition.leafset_as_bitstring()
-
-    ###########################################################################
-    ### Representation
-
-    def description(self,
-            depth=1,
-            indent=0,
-            itemize="",
-            output=None,
-            taxon_namespace=None):
-        """
-        Returns description of object, up to level ``depth``.
-        """
-        if depth is None or depth < 0:
-            return
-        output_strio = StringIO()
-        if self.label is None:
-            label = " (%s, Length=%s)" % (id(self), str(self.length))
-        else:
-            label = " (%s: '%s', Length=%s)" % (id(self), self.label, str(self.length))
-        output_strio.write('%s%sEdge object at %s%s'
-                % (indent*' ',
-                   itemize,
-                   hex(id(self)),
-                   label))
-        if depth >= 1:
-            leader1 = ' ' * (indent + 4)
-            leader2 = ' ' * (indent + 8)
-            output_strio.write('\n%s[Length]' % leader1)
-            if self.length is not None:
-                length = self.length
-            else:
-                length = "None"
-            output_strio.write('\n%s%s' % (leader2, length))
-            output_strio.write('\n%s[Tail Node]' % leader1)
-            if self.tail_node is not None:
-                tn = self.tail_node.description(0)
-            else:
-                tn = "None"
-            output_strio.write('\n%s%s' % (leader2, tn))
-            output_strio.write('\n%s[Head Node]' % leader1)
-            if self.head_node is not None:
-                hn = self.head_node.description(0)
-            else:
-                hn = "None"
-            output_strio.write('\n%s%s' % (leader2, hn))
-        s = output_strio.getvalue()
-        if output is not None:
-            output.write(s)
-        return s
-
-##############################################################################
-### Node
-
-class Node(
-        basemodel.DataObject,
-        basemodel.Annotable):
-    """
-    A :term:`node` on a :term:`tree`.
-    """
-
-    def edge_factory(cls, **kwargs):
-        """
-        Creates and returns an |Edge| object.
-
-        Derived classes can override this method to provide support for
-        specialized or different types of edges on the tree.
-
-        Parameters
-        ----------
-
-        \*\*kwargs : keyword arguments
-            Passed directly to constructor of |Edge|.
-
-        Returns
-        -------
-        |Edge|
-            A new |Edge| object.
-
-        """
-        return Edge(**kwargs)
-    edge_factory = classmethod(edge_factory)
-
-    ###########################################################################
-    ### Life-cycle
-
-    def __init__(self, **kwargs):
-        """
-        Keyword Arguments
-        -----------------
-        taxon : |Taxon|, optional
-            The |Taxon| instance representing the operational taxonomic
-            unit concept associated with this Node.
-        label : string, optional
-            A label for this node.
-        edge_length : numeric, optional
-            Length or weight of the edge subtending this node.
-
-        """
-        basemodel.DataObject.__init__(self, label=kwargs.pop("label", None))
-        self.taxon = kwargs.pop("taxon", None)
-        self.age = None
-        self._edge = None
-        self._child_nodes = []
-        self._parent_node = None
-        self.edge = self.edge_factory(head_node=self,
-                length=kwargs.pop("edge_length", None))
-        if kwargs:
-            raise TypeError("Unsupported keyword arguments: {}".format(kwargs))
-        self.comments = []
-
-    def __copy__(self, memo=None):
-        raise TypeError("Cannot directly copy Edge")
-
-    def taxon_namespace_scoped_copy(self, memo=None):
-        raise TypeError("Cannot directly copy Node")
-
-    def __deepcopy__(self, memo=None):
-        return basemodel.Annotable.__deepcopy__(self, memo=memo)
-        # if memo is None:
-        #     memo = {}
-        # other = basemodel.Annotable.__deepcopy__(self, memo=memo)
-        # memo[id(self._child_nodes)] = other._child_nodes
-        # for ch in self._child_nodes:
-        #     try:
-        #         och = memo[id(ch)]
-        #         if och not in other._child_nodes:
-        #             other._child_nodes.append(och)
-        #     except KeyError:
-        #         och = copy.deepcopy(ch, memo)
-        #         memo[id(chd)] = och
-        #         if och not in other._child_nodes:
-        #             other._child_nodes.append(och)
-        # return other
-        # return super(Node, self).__deepcopy__(memo=memo)
-
-    ###########################################################################
-    ### Identity
-
-    def __hash__(self):
-        return id(self)
-
-    def __eq__(self, other):
-        # IMPORTANT LESSON LEARNED: if you define __hash__, you *must* define __eq__
-        return self is other
-
-    def __repr__(self):
-        return "<{} object at {}: '{}' ({})>".format(self.__class__.__name__, hex(id(self)), self._label, repr(self.taxon))
-
-    ###########################################################################
-    ### Iterators
-
-    def preorder_iter(self, filter_fn=None):
-        """
-        Pre-order iterator over nodes of subtree rooted at this node.
-
-        Visits self and all descendant nodes, with each node visited before its
-        children. Nodes can optionally be filtered by ``filter_fn``: only nodes
-        for which ``filter_fn`` returns |True| when called with the node as an
-        argument are yielded.
-
-        Parameters
-        ----------
-        filter_fn : function object, optional
-            A function object that takes a |Node| object as an argument
-            and returns |True| if the |Node| object is to be yielded by
-            the iterator, or |False| if not. If ``filter_fn`` is |None|
-            (default), then all nodes visited will be yielded.
-
-        Returns
-        -------
-        :py:class:`collections.Iterator` [|Node|]
-            An iterator yielding nodes of the subtree rooted at this node in
-            pre-order sequence.
-        """
-        stack = [self]
-        while stack:
-            node = stack.pop()
-            if filter_fn is None or filter_fn(node):
-                yield node
-            stack.extend(n for n in reversed(node._child_nodes))
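-    # Usage sketch for preorder_iter() (illustrative): count the leaves in
-    # the subtree rooted at a given node with a filtered pre-order traversal:
-    #   n_leaves = sum(1 for nd in node.preorder_iter(
-    #           filter_fn=lambda x: x.is_leaf()))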
-
-    def preorder_internal_node_iter(self, filter_fn=None, exclude_seed_node=False):
-        """
-        Pre-order iterator over internal nodes of subtree rooted at this node.
-
-        Visits self and all internal descendant nodes, with each node visited
-        before its children. In DendroPy, "internal nodes" are nodes that have
-        at least one child node, and thus the root or seed node is typically included
-        unless ``exclude_seed_node`` is |True|. Nodes can optionally be filtered
-        by ``filter_fn``: only nodes for which ``filter_fn`` returns |True| when
-        passed the node as an argument are yielded.
-
-        Parameters
-        ----------
-        filter_fn : function object, optional
-            A function object that takes a |Node| object as an argument
-            and returns |True| if the |Node| object is to be yielded by
-            the iterator, or |False| if not. If ``filter_fn`` is |None|
-            (default), then all nodes visited will be yielded.
-        exclude_seed_node : boolean, optional
-            If |False| (default), then the seed node or root is visited. If
-            |True|, then the seed node is skipped.
-
-        Returns
-        -------
-        :py:class:`collections.Iterator` [|Node|]
-            An iterator yielding the internal nodes of the subtree rooted at
-            this node in pre-order sequence.
-        """
-        if exclude_seed_node:
-            froot = lambda x: x._parent_node is not None
-        else:
-            froot = lambda x: True
-        if filter_fn:
-            f = lambda x: (froot(x) and x._child_nodes and filter_fn(x)) or None
-        else:
-            f = lambda x: (x and froot(x) and x._child_nodes) or None
-        return self.preorder_iter(filter_fn=f)
-
-    def postorder_iter(self, filter_fn=None):
-        """
-        Post-order iterator over nodes of subtree rooted at this node.
-
-        Visits self and all descendant nodes, with each node visited after its
-        children. Nodes can optionally be filtered by ``filter_fn``: only nodes
-        for which ``filter_fn`` returns |True| when called with the node as an
-        argument are yielded.
-
-        Parameters
-        ----------
-        filter_fn : function object, optional
-            A function object that takes a |Node| object as an argument
-            and returns |True| if the |Node| object is to be yielded by
-            the iterator, or |False| if not. If ``filter_fn`` is |None|
-            (default), then all nodes visited will be yielded.
-
-        Returns
-        -------
-        :py:class:`collections.Iterator` [|Node|]
-            An iterator yielding the nodes of the subtree rooted at
-            this node in post-order sequence.
-        """
-        # if self._child_nodes:
-        #     for nd in self._child_nodes:
-        #         for ch in nd.postorder_iter(filter_fn=filter_fn):
-        #             yield ch
-        # if filter_fn is None or filter_fn(self):
-        #     yield self
-        # return
-
-        # stack = [(self, False)]
-        # while stack:
-        #     node, state = stack.pop(0)
-        #     if state:
-        #         if filter_fn is None or filter_fn(node):
-        #             yield node
-        #     else:
-        #         stack.insert(0, (node, True))
-        #         child_nodes = [(n, False) for n in node._child_nodes]
-        #         child_nodes.extend(stack)
-        #         stack = child_nodes
-
-        ## Prefer `pop()` to `pop(0)`.
-        ## Thanks to Mark T. Holder
-        ## From peyotl commits: d1ffef2 + 19fdea1
-        stack = [(self, False)]
-        while stack:
-            node, state = stack.pop()
-            if state:
-                if filter_fn is None or filter_fn(node):
-                    yield node
-            else:
-                stack.append((node, True))
-                stack.extend([(n, False) for n in reversed(node._child_nodes)])
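-    # Usage sketch for postorder_iter() (illustrative; ``num_leaves`` is an
-    # ad hoc attribute added just for this example): because children are
-    # visited before their parents, per-subtree quantities can be accumulated
-    # bottom-up:
-    #   for nd in node.postorder_iter():
-    #       nd.num_leaves = 1 if nd.is_leaf() else sum(
-    #               ch.num_leaves for ch in nd.child_nodes())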
-
-    def postorder_internal_node_iter(self, filter_fn=None, exclude_seed_node=False):
-        """
-        Post-order iterator over internal nodes of subtree rooted at this node.
-
-        Visits self and all internal descendant nodes, with each node visited
-        after its children. In DendroPy, "internal nodes" are nodes that have
-        at least one child node, and thus the root or seed node is typically
-        included unless ``exclude_seed_node`` is |True|. Nodes can optionally be
-        filtered by ``filter_fn``: only nodes for which ``filter_fn`` returns
-        |True| when passed the node as an argument are yielded.
-
-        Parameters
-        ----------
-        filter_fn : function object, optional
-            A function object that takes a |Node| object as an argument
-            and returns |True| if the |Node| object is to be yielded by
-            the iterator, or |False| if not. If ``filter_fn`` is |None|
-            (default), then all nodes visited will be yielded.
-        exclude_seed_node : boolean, optional
-            If |False| (default), then the seed node or root is visited. If
-            |True|, then the seed node is skipped.
-
-        Returns
-        -------
-        :py:class:`collections.Iterator` [|Node|]
-            An iterator yielding the internal nodes of the subtree rooted at
-            this node in post-order sequence.
-        """
-        if exclude_seed_node:
-            froot = lambda x: x._parent_node is not None
-        else:
-            froot = lambda x: True
-        if filter_fn:
-            f = lambda x: (froot(x) and x._child_nodes and filter_fn(x)) or None
-        else:
-            f = lambda x: (x and froot(x) and x._child_nodes) or None
-        return self.postorder_iter(filter_fn=f)
-
-    def levelorder_iter(self, filter_fn=None):
-        """
-        Level-order iteration over nodes of subtree rooted at this node.
-
-        Visits self and all descendant nodes, with each node and other nodes at
-        the same level (distance from root) visited before their children.
-        Nodes can optionally be filtered by ``filter_fn``: only nodes for which
-        ``filter_fn`` returns |True| when called with the node as an argument are
-        visited.
-
-        Parameters
-        ----------
-        filter_fn : function object, optional
-            A function object that takes a |Node| object as an argument
-            and returns |True| if the |Node| object is to be yielded by
-            the iterator, or |False| if not. If ``filter_fn`` is |None|
-            (default), then all nodes visited will be yielded.
-
-        Returns
-        -------
-        :py:class:`collections.Iterator` [|Node|]
-            An iterator yielding nodes of the subtree rooted at this node in
-            level-order sequence.
-        """
-        if filter_fn is None or filter_fn(self):
-            yield self
-        remaining = self.child_nodes()
-        while len(remaining) > 0:
-            node = remaining.pop(0)
-            if filter_fn is None or filter_fn(node):
-                yield node
-            child_nodes = node.child_nodes()
-            remaining.extend(child_nodes)
-
-    def level_order_iter(self, filter_fn=None):
-        """
-        DEPRECATED: Use :meth:`Node.levelorder_iter()` instead.
-        """
-        deprecate.dendropy_deprecation_warning(
-                message="Deprecated since DendroPy 4: 'level_order_iter()' will no longer be supported in future releases; use 'levelorder_iter()' instead",
-                stacklevel=3)
-        return self.levelorder_iter(filter_fn=filter_fn)
-
-    def inorder_iter(self, filter_fn=None):
-        """
-        In-order iteration over nodes of subtree rooted at this node.
-
-        Visits self and all descendant nodes, with each node visited in-between
-        its children. Only valid for strictly-bifurcating trees. Nodes can
-        optionally be filtered by ``filter_fn``: only nodes for which ``filter_fn``
-        returns |True| when called with the node as an argument are yielded.
-
-        Parameters
-        ----------
-        filter_fn : function object, optional
-            A function object that takes a |Node| object as an argument
-            and returns |True| if the |Node| object is to be yielded by
-            the iterator, or |False| if not. If ``filter_fn`` is |None|
-            (default), then all nodes visited will be yielded.
-
-        Returns
-        -------
-        :py:class:`collections.Iterator` [|Node|]
-            An iterator yielding nodes of the subtree rooted at this node in
-            infix or in-order sequence.
-        """
-        if len(self._child_nodes) == 0:
-            if filter_fn is None or filter_fn(self):
-                yield self
-        elif len(self._child_nodes) == 2:
-            for nd in self._child_nodes[0].inorder_iter(filter_fn=filter_fn):
-                yield nd
-            if filter_fn is None or filter_fn(self):
-                yield self
-            for nd in self._child_nodes[1].inorder_iter(filter_fn=filter_fn):
-                yield nd
-        else:
-            raise TypeError("In-order traversal only supported for binary trees")
-
-    def leaf_iter(self, filter_fn=None):
-        """
-        Iterate over all tips or leaves that ultimately descend from this node.
-
-        Visits all leaf or tip nodes descended from this node. Nodes can
-        optionally be filtered by ``filter_fn``: only nodes for which ``filter_fn``
-        returns |True| when called with the node as an argument are yielded.
-
-        Parameters
-        ----------
-        filter_fn : function object, optional
-            A function object that takes a |Node| object as an argument
-            and returns |True| if the |Node| object is to be yielded by
-            the iterator, or |False| if not. If ``filter_fn`` is |None|
-            (default), then all nodes visited will be yielded.
-
-        Returns
-        -------
-        :py:class:`collections.Iterator` [|Node|]
-            An iterator yielding leaf nodes of the subtree rooted at this node.
-        """
-        if filter_fn:
-            ff = lambda x: x.is_leaf() and filter_fn(x) or None
-        else:
-            ff = lambda x: x.is_leaf() and x or None
-        for node in self.postorder_iter(ff):
-            yield node
-
-    def child_node_iter(self, filter_fn=None):
-        """
-        Iterator over all nodes that are the (immediate) children of this node.
-
-        Parameters
-        ----------
-        filter_fn : function object, optional
-            A function object that takes a |Node| object as an argument
-            and returns |True| if the |Node| object is to be yielded by
-            the iterator, or |False| if not. If ``filter_fn`` is |None|
-            (default), then all nodes visited will be yielded.
-
-        Returns
-        -------
-        :py:class:`collections.Iterator` [|Node|]
-            An iterator yielding nodes that have this node as a parent.
-        """
-        for node in self._child_nodes:
-            if filter_fn is None or filter_fn(node):
-                yield node
-
-    def child_edge_iter(self, filter_fn=None):
-        """
-        Iterator over the edges of the (immediate) child nodes of this node.
-
-        Parameters
-        ----------
-        filter_fn : function object, optional
-            A function object that takes a |Edge| object as an argument
-            and returns |True| if the |Edge| object is to be yielded by
-            the iterator, or |False| if not. If ``filter_fn`` is |None|
-            (default), then all edges visited will be yielded.
-
-        Returns
-        -------
-        :py:class:`collections.Iterator` [|Edge|]
-            An iterator yielding the edges of the child nodes of this node.
-        """
-        for node in self._child_nodes:
-            if filter_fn is None or filter_fn(node.edge):
-                yield node.edge
-
-    def ancestor_iter(self, filter_fn=None, inclusive=False):
-        """
-        Iterator over all ancestors of this node.
-
-        Visits all nodes that are the ancestors of this node.  If ``inclusive``
-        is |True|, ``self`` is returned as the first item of the sequence;
-        otherwise ``self`` is skipped. Nodes can optionally be filtered by
-        ``filter_fn``: only nodes for which ``filter_fn`` returns |True| when
-        passed the node as an argument are yielded.
-
-        Parameters
-        ----------
-        filter_fn : function object, optional
-            A function object that takes a |Node| object as an argument
-            and returns |True| if the |Node| object is to be yielded by
-            the iterator, or |False| if not. If ``filter_fn`` is |None|
-            (default), then all nodes visited will be yielded.
-        inclusive : boolean, optional
-            If |True|, includes this node in the sequence. If |False|, this is
-            skipped.
-
-        Returns
-        -------
-        :py:class:`collections.Iterator` [|Node|]
-            Iterator over all predecessor/ancestor nodes of this node.
-        """
-        if inclusive and (filter_fn is None or filter_fn(self)):
-            yield self
-        node = self
-        while node is not None:
-            node = node._parent_node
-            if node is not None \
-                   and (filter_fn is None or filter_fn(node)):
-                yield node
-
-    def ageorder_iter(self, filter_fn=None, include_leaves=True, descending=False):
-        """
-        Iterator over nodes of subtree rooted at this node in order of the age
-        of the node (i.e., the time since the present).
-
-        Iterates over nodes in order of age ('age' is as given by the ``age``
-        attribute, which is usually the sum of edge lengths from tips
-        to node, i.e., time since present).
-        If ``include_leaves`` is |True| (default), leaves are included in the
-        iteration; if ``include_leaves`` is |False|, leaves will be skipped.
-        If ``descending`` is |False| (default), younger nodes will be returned
-        before older ones; if |True|, older nodes will be returned before
-        younger ones.
-
-        Parameters
-        ----------
-        filter_fn : function object, optional
-            A function object that takes a |Node| object as an argument
-            and returns |True| if the |Node| object is to be yielded by
-            the iterator, or |False| if not. If ``filter_fn`` is |None|
-            (default), then all nodes visited will be yielded.
-        include_leaves : boolean, optional
-            If |True| (default), then leaf nodes are included in the iteration.
-            If |False|, then leaf nodes are skipped.
-        descending : boolean, optional
-            If |False| (default), then younger nodes are visited before older
-            ones. If |True|, then older nodes are visited before younger ones.
-
-        Returns
-        -------
-        :py:class:`collections.Iterator` [|Node|]
-            Iterator over age-ordered sequence of nodes in subtree rooted at
-            this node.
-        """
-        # if not descending:
-        #     leaves = [nd for nd in self.leaf_iter()]
-        #     queued_pairs = []
-        #     in_queue = set()
-        #     for leaf in leaves:
-        #         age_nd_tuple = (leaf.age, leaf)
-        #         queued_pairs.insert(bisect.bisect(queued_pairs, age_nd_tuple), age_nd_tuple)
-        #         in_queue.add(leaf)
-        #     while queued_pairs:
-        #         next_el = queued_pairs.pop(0)
-        #         age, nd = next_el
-        #         in_queue.remove(nd)
-        #         p = nd._parent_node
-        #         if p and p not in in_queue:
-        #             age_nd_tuple = (p.age, p)
-        #             queued_pairs.insert(bisect.bisect(queued_pairs, age_nd_tuple), age_nd_tuple)
-        #             in_queue.add(p)
-        #         if include_leaves or nd.is_internal():
-        #             yield nd
-        # else:
-        #     nds = [(nd.age, nd) for nd in self.preorder_iter()]
-        #     nds.sort(reverse=True)
-        #     for nd in nds:
-        #         if include_leaves or nd[1].is_internal():
-        #             yield nd[1]
-        nds = [nd for nd in self.preorder_iter()]
-        if descending:
-            reverse = True
-        else:
-            reverse = False
-        nds.sort(key=lambda x: x.age, reverse=reverse)
-        for nd in nds:
-            if (include_leaves or nd._child_nodes) and (filter_fn is None or filter_fn(nd)):
-                yield nd
-
-    def age_order_iter(self, include_leaves=True, filter_fn=None, descending=False):
-        """
-        Deprecated: use :meth:`Node.ageorder_iter()` instead.
-        """
-        deprecate.dendropy_deprecation_warning(
-                message="Deprecated since DendroPy 4: 'age_order_iter()' will no longer be supported in future releases; use 'ageorder_iter()' instead",
-                stacklevel=3)
-        return self.ageorder_iter(include_leaves=include_leaves,
-                filter_fn=filter_fn,
-                descending=descending)
-
-    ###########################################################################
-    ### Node Processor
-
-    def apply(self, before_fn=None, after_fn=None, leaf_fn=None):
-        """
-        Applies function ``before_fn`` and ``after_fn`` to all internal nodes and
-        ``leaf_fn`` to all terminal nodes in subtree starting with ``self``, with
-        nodes visited in pre-order.
-
-        Given a tree with preorder sequence of nodes of
-        [a,b,i,e,j,k,c,g,l,m,f,n,h,o,p,]::
-
-                           a
-                          / \
-                         /   \
-                        /     \
-                       /       \
-                      /         \
-                     /           \
-                    /             c
-                   b             / \
-                  / \           /   \
-                 /   e         /     f
-                /   / \       /     / \
-               /   /   \     g     /   h
-              /   /     \   / \   /   / \
-             i   j       k l   m n   o   p
-
-
-        the following order of function calls results::
-
-            before_fn(a)
-            before_fn(b)
-            leaf_fn(i)
-            before_fn(e)
-            leaf_fn(j)
-            leaf_fn(k)
-            after_fn(e)
-            after_fn(b)
-            before_fn(c)
-            before_fn(g)
-            leaf_fn(l)
-            leaf_fn(m)
-            after_fn(g)
-            before_fn(f)
-            leaf_fn(n)
-            before_fn(h)
-            leaf_fn(o)
-            leaf_fn(p)
-            after_fn(h)
-            after_fn(f)
-            after_fn(c)
-            after_fn(a)
-
-        Parameters
-        ----------
-        before_fn : function object or |None|
-            A function object that takes a |Node| as its argument.
-        after_fn : function object or |None|
-            A function object that takes a |Node| as its argument.
-        leaf_fn : function object or |None|
-            A function object that takes a |Node| as its argument.
-
-        Notes
-        -----
-        Adapted from work by Mark T. Holder (the ``peyotl`` module of the Open
-        Tree of Life Project):
-
-            https://github.com/OpenTreeOfLife/peyotl.git
-
-        """
-        stack = [self]
-        while stack:
-            node = stack.pop()
-            if not node._child_nodes:
-                if leaf_fn:
-                    leaf_fn(node)
-                # (while node is the last child of parent ...)
-                while (
-                        (node._parent_node is None)
-                        or (node._parent_node._child_nodes[-1] is node)
-                      ):
-                    node = node._parent_node
-                    if node is not None:
-                        if after_fn is not None:
-                            after_fn(node)
-                    else:
-                        break
-            else:
-                if before_fn is not None:
-                    before_fn(node)
-                stack.extend([i for i in reversed(node._child_nodes)])
-        return
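# Editor's note: illustrative usage sketch, not part of the upstream diff.
# Minimal example of Node.apply() on the tree pictured in the docstring above,
# assuming the tree is parsed from Newick with the public Tree.get() factory.
import dendropy

tree = dendropy.Tree.get(
    data="((i,(j,k)e)b,((l,m)g,(n,(o,p)h)f)c)a;", schema="newick"
)
visits = []
name = lambda nd: nd.taxon.label if nd.taxon else nd.label
tree.seed_node.apply(
    before_fn=lambda nd: visits.append(("before", name(nd))),
    after_fn=lambda nd: visits.append(("after", name(nd))),
    leaf_fn=lambda nd: visits.append(("leaf", name(nd))),
)
print(visits[0], visits[-1])  # ('before', 'a') ... ('after', 'a')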
-
-    ###########################################################################
-    ### Child Node Access and Manipulation
-
-    def set_child_nodes(self, child_nodes):
-        """
-        Assigns the set of child nodes for this node.
-
-        Results in the ``parent_node`` attribute of each |Node| in ``child_nodes``
-        as well as the ``tail_node`` attribute of corresponding |Edge|
-        objects being assigned to ``self``.
-
-        Parameters
-        ----------
-        child_nodes : collections.Iterable[|Node|]
-            The (iterable) collection of child nodes to be assigned this node
-            as a parent.
-        """
-        self.clear_child_nodes()
-        # Go through add_child() so that the book-keeping
-        # (e.g., avoiding duplicate adds) takes place.
-        for nd in child_nodes:
-            self.add_child(nd)
-
-    def set_children(self, child_nodes):
-        """Deprecated: use :meth:`Node.set_child_nodes()` instead."""
-        return self.set_child_nodes(child_nodes)
-
-    def add_child(self, node):
-        """
-        Adds a child node to this node if it is not already a child.
-
-        Results in the ``parent_node`` attribute of ``node`` as well as the
-        ``tail_node`` attribute of ``node.edge`` being assigned to ``self``.
-
-        Parameters
-        ----------
-        node : |Node|
-            The node to be added as a child of this node.
-
-        Returns
-        -------
-        |Node|
-            The node that was added.
-        """
-        assert node is not self, "Cannot add node as child of itself"
-        assert self._parent_node is not node, "Cannot add a node's parent as its child: remove the node from its parent's child set first"
-        node._parent_node = self
-        if node not in self._child_nodes:
-            self._child_nodes.append(node)
-        return node
-
-    def insert_child(self, index, node):
-        """
-        Adds a child node to this node.
-
-        If the node is already a child of this node, then it is moved
-        to the specified position.
-        Results in the ``parent_node`` attribute of ``node`` as well as the
-        ``tail_node`` attribute of ``node.edge`` being assigned to ``self``.
-
-        Parameters
-        ----------
-        index : integer
-            The index before which to insert the new node.
-        node : |Node|
-            The node to be added as a child of this node.
-
-        Returns
-        -------
-        |Node|
-            The node that was added.
-        """
-        node._parent_node = self
-        try:
-            cur_index = self._child_nodes.index(node)
-        except ValueError:
-            pass
-        else:
-            if cur_index == index:
-                return node
-            self._child_nodes.remove(node)
-        self._child_nodes.insert(index, node)
-        return node
-
-    def new_child(self, **kwargs):
-        """
-        Create and add a new child to this node.
-
-        Parameters
-        ----------
-        \*\*kwargs : keyword arguments
-            Keyword arguments will be passed directly to the |Node|
-            constructor (:meth:`Node.__init()__`).
-
-        Returns
-        -------
-        |Node|
-            The new child node that was created and added.
-        """
-        node = self.__class__(**kwargs)
-        return self.add_child(node=node)
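# Editor's note: illustrative usage sketch, not part of the upstream diff.
# Building a two-leaf tree by hand with new_child(); edge_length and taxon
# assignment follow the Node constructor/attributes documented above.
import dendropy

tree = dendropy.Tree()
left = tree.seed_node.new_child(edge_length=1.0)
right = tree.seed_node.new_child(edge_length=2.0)
left.taxon = tree.taxon_namespace.require_taxon(label="A")
right.taxon = tree.taxon_namespace.require_taxon(label="B")
print(tree.seed_node.num_child_nodes())  # 2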
-
-    def insert_new_child(self, index, **kwargs):
-        """
-        Create and add a new child to this node at a particular position.
-
-        Results in the ``parent_node`` attribute of ``node`` as well as the
-        ``tail_node`` attribute of ``node.edge`` being assigned to ``self``.
-
-        Parameters
-        ----------
-        index : integer
-            The index before which to insert the new node.
-        \*\*kwargs : keyword arguments, optional
-            Keyword arguments will be passed directly to the |Node|
-            constructor (:meth:`Node.__init()__`).
-
-        Returns
-        -------
-        |Node|
-            The new child node that was created and added.
-        """
-        node = self.__class__(**kwargs)
-        return self.insert_child(index=index, node=node)
-
-    def remove_child(self, node, suppress_unifurcations=False):
-        """
-        Removes a node from the child set of this node.
-
-        Results in the ``parent_node`` attribute of the removed node being set
-        to |None|. If ``suppress_unifurcations`` is |True| and this node ends
-        up with only one child after removal of the specified node, then this
-        node will itself be removed from the tree, with its single child added
-        to the child node set of its parent and the edge length adjusted
-        accordingly.
-        ``suppress_unifurcations`` should only be |True| for unrooted trees.
-
-        Parameters
-        ----------
-        node : |Node|
-            The node to be removed.
-        suppress_unifurcations : boolean, optional
-            If |False| (default), no further action is taken. If |True|, then if the
-            node removal results in a node with degree of two (i.e., a single
-            parent and a single child), then it will be removed from
-            the tree and its (sole) child will be added as a child of its
-            parent (with edge lengths adjusted accordingly).
-
-        Returns
-        -------
-        |Node|
-            The node removed.
-        """
-        if not node:
-            raise ValueError("Tried to remove an non-existing or null node")
-        children = self._child_nodes
-        if node in children:
-            node._parent_node = None
-            node.edge.tail_node = None
-            index = children.index(node)
-            children.remove(node)
-            if suppress_unifurcations:
-                if self._parent_node:
-                    if len(children) == 1:
-                        child = children[0]
-                        pos = self._parent_node._child_nodes.index(self)
-                        self._parent_node.insert_child(pos, child)
-                        self._parent_node.remove_child(self, suppress_unifurcations=False)
-                        try:
-                            child.edge.length += self.edge.length
-                        except:
-                            pass
-                        self._child_nodes = []
-                else:
-                    to_remove = None
-                    if len(children) == 2:
-                        if children[0].is_internal():
-                            to_remove = children[0]
-                            other = children[1]
-                        elif children[1].is_internal():
-                            to_remove = children[1]
-                            other = children[0]
-                    if to_remove is not None:
-                        try:
-                            other.edge.length += to_remove.edge.length
-                        except:
-                            pass
-                        pos = self._child_nodes.index(to_remove)
-                        self.remove_child(to_remove, suppress_unifurcations=False)
-                        tr_children = to_remove._child_nodes
-                        tr_children.reverse()
-                        for c in tr_children:
-                            self.insert_child(pos, c)
-                        to_remove._child_nodes = []
-        else:
-            raise ValueError("Tried to remove a node that is not listed as a child")
-        return node
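# Editor's note: illustrative usage sketch, not part of the upstream diff.
# Pruning a leaf with remove_child(); with suppress_unifurcations=True the
# resulting degree-two node is removed and its edge length is folded into the
# surviving child's edge.
import dendropy

tree = dendropy.Tree.get(data="((A:1,B:1):1,C:2);", schema="newick")
leaf_b = tree.find_node_with_taxon_label("B")
leaf_b.parent_node.remove_child(leaf_b, suppress_unifurcations=True)
print(tree.as_string(schema="newick"))  # roughly (A:2.0,C:2.0);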
-
-    def clear_child_nodes(self):
-        """
-        Removes all child nodes.
-        """
-        del self._child_nodes[:] # list.clear() is not in Python 2.7
-
-    def reversible_remove_child(self, node, suppress_unifurcations=False):
-        """
-        This function is a (less-efficient) version of remove_child that also
-        returns the data needed by reinsert_nodes to "undo" the removal.
-
-        Returns a list of tuples.  The first element of each tuple is the
-        node removed, the other elements are the information needed by
-        ``reinsert_nodes`` in order to restore the tree to the same topology as
-        it was before the call to ``remove_child``. If ``suppress_unifurcations`` is False
-        then the returned list will contain only one item.
-
-        ``suppress_unifurcations`` should only be |True| for unrooted trees.
-        """
-        if not node:
-            raise ValueError("Tried to remove an non-existing or null node")
-        children = self._child_nodes
-        try:
-            pos = children.index(node)
-        except:
-            raise ValueError("Tried to remove a node that is not listed as a child")
-        removed = [(node, self, pos, [], None)]
-        node._parent_node = None
-        node.edge.tail_node = None
-        children.remove(node)
-        if suppress_unifurcations:
-            p = self._parent_node
-            if p:
-                if len(children) == 1:
-                    child = children[0]
-                    pos = p._child_nodes.index(self)
-                    p.insert_child(pos, child)
-                    self._child_nodes = []
-                    p.remove_child(self, suppress_unifurcations=False)
-                    e = child.edge
-                    try:
-                        e.length += self.edge.length
-                    except:
-                        e = None
-                    t = (self, p, pos, [child], e)
-                    removed.append(t)
-            else:
-                to_remove = None
-                if len(children) == 2:
-                    if children[0].is_internal():
-                        to_remove = children[0]
-                        other = children[1]
-                    elif children[1].is_internal():
-                        to_remove = children[1]
-                        other = children[0]
-                if to_remove is not None:
-                    e = other.edge
-                    try:
-                        e.length += to_remove.edge.length
-                    except:
-                        e = None
-                    pos = self._child_nodes.index(to_remove)
-                    self.remove_child(to_remove, suppress_unifurcations=False)
-                    tr_children = to_remove._child_nodes
-                    to_remove._child_nodes = []
-                    for n, c in enumerate(tr_children):
-                        new_pos = pos + n
-                        self.insert_child(new_pos, c)
-                    t = (to_remove, self, pos, tr_children, e)
-                    removed.append(t)
-
-        return removed
-
-    def reinsert_nodes(self, nd_connection_list):
-        """
-        This function should be used to "undo" the effects of
-        Node.reversible_remove_child NOTE: the behavior is only
-        guaranteed if the tree has not been modified between the
-        remove_child and reinsert_nodes calls! (or the tree has been
-        restored such that the node/edge identities are identical to the
-        state before the remove_child call.
-
-        The order of info in each tuple is:
-
-            0 - node removed
-            1 - parent of node removed
-            2 - position of the node in its parent's child node list
-            3 - children of node removed that were "stolen"
-            4 - edge that was lengthened by "stealing" length from node's edge
-        """
-        # we unroll the stack of operations
-        for blob in nd_connection_list[-1::-1]:
-            #_LOG.debug(blob)
-            n, p, pos, children, e = blob
-            for c in children:
-                cp = c._parent_node
-                if cp:
-                    cp.remove_child(c)
-                n.add_child(c)
-            p.insert_child(pos, n)
-            if e is not None:
-                e.length -= n.edge.length
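# Editor's note: illustrative usage sketch, not part of the upstream diff.
# Round-tripping a removal: reversible_remove_child() returns the bookkeeping
# tuples that reinsert_nodes() consumes to restore the original topology.
import dendropy

tree = dendropy.Tree.get(data="((A,B),(C,D));", schema="newick")
node_a = tree.find_node_with_taxon_label("A")
parent = node_a.parent_node
undo_info = parent.reversible_remove_child(node_a, suppress_unifurcations=False)
parent.reinsert_nodes(undo_info)
print(len(tree.leaf_nodes()))  # back to 4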
-
-    def collapse_neighborhood(self, dist):
-        if dist < 1:
-            return
-        children = self.child_nodes()
-        for ch in children:
-            if not ch.is_leaf():
-                ch.edge.collapse()
-        if self._parent_node:
-            p = self._parent_node
-            self.edge.collapse()
-            p.collapse_neighborhood(dist - 1)
-        else:
-            self.collapse_neighborhood(dist - 1)
-
-    def collapse_clade(self):
-        """Collapses all internal edges that are descendants of self."""
-        if self.is_leaf():
-            return
-        leaves = [i for i in self.leaf_iter()]
-        self.set_child_nodes(leaves)
-
-    def collapse_conflicting(self, bipartition):
-        """
-        Collapses every edge in the subtree that conflicts with the given
-        bipartition. This can include the edge subtending ``self``.
-        """
-        to_collapse_head_nodes = []
-        for nd in self.postorder_iter():
-            if nd._child_nodes and nd.edge.bipartition.is_incompatible_with(bipartition):
-                to_collapse_head_nodes.append(nd)
-        for nd in to_collapse_head_nodes:
-            e = nd.edge
-            e.collapse()
-
-    ###########################################################################
-    ### Edge Access and Manipulation
-
-    def _get_edge(self):
-        """
-        Returns the edge subtending this node.
-        """
-        return self._edge
-    def _set_edge(self, new_edge):
-        """
-        Sets the edge subtending this node, and sets head_node of
-        ``edge`` to point to self.
-        """
-        # if edge is None:
-        #     raise ValueError("A Node cannot have 'None' for an edge")
-        if new_edge is self._edge:
-            return
-        if self._parent_node is not None:
-            try:
-                self._parent_node._child_nodes.remove(self)
-            except ValueError:
-                pass
-
-        ## Minimal management
-        self._edge = new_edge
-        if self._edge:
-            self._edge._head_node = self
-
-    edge = property(_get_edge, _set_edge)
-
-    def _get_edge_length(self):
-        """
-        Returns the length of the edge subtending this node.
-        """
-        return self._edge.length
-    def _set_edge_length(self, v=None):
-        """
-        Sets the length of the edge subtending this node.
-        """
-        self._edge.length = v
-    edge_length = property(_get_edge_length, _set_edge_length)
-
-    def _get_bipartition(self):
-        """
-        Returns the bipartition for the edge subtending this node.
-        """
-        return self._edge.bipartition
-    def _set_bipartition(self, v=None):
-        """
-        Sets the bipartition for the edge subtending this node.
-        """
-        self._edge.bipartition = v
-    bipartition = property(_get_bipartition, _set_bipartition)
-
-    def _get_split_bitmask(self):
-        return self._edge.bipartition._split_bitmask
-    def _set_split_bitmask(self, h):
-        self._edge.bipartition._split_bitmask = h
-    split_bitmask = property(_get_split_bitmask, _set_split_bitmask)
-
-    def _get_leafset_bitmask(self):
-        return self._edge.bipartition._leafset_bitmask
-    def _set_leafset_bitmask(self, h):
-        self._edge.bipartition._leafset_bitmask = h
-    leafset_bitmask = property(_get_leafset_bitmask, _set_leafset_bitmask)
-
-    def _get_tree_leafset_bitmask(self):
-        return self._edge.bipartition._tree_leafset_bitmask
-    def _set_tree_leafset_bitmask(self, h):
-        self._edge.bipartition._tree_leafset_bitmask = h
-    tree_leafset_bitmask = property(_get_tree_leafset_bitmask, _set_tree_leafset_bitmask)
-
-    def split_as_bitstring(self):
-        return self._edge.bipartition.split_as_bitstring()
-
-    def leafset_as_bitstring(self):
-        return self._edge.bipartition.leafset_as_bitstring()
-
-    ###########################################################################
-    ### Parent Access and Manipulation
-
-    def _get_parent_node(self):
-        """Returns the parent node of this node."""
-        return self._parent_node
-    def _set_parent_node(self, parent):
-        """Sets the parent node of this node."""
-        if self._parent_node is not None:
-            try:
-                self._parent_node._child_nodes.remove(self)
-            except ValueError:
-                pass
-        self._parent_node = parent
-        if self._parent_node is not None:
-            if self not in self._parent_node._child_nodes:
-                self._parent_node._child_nodes.append(self)
-    parent_node = property(_get_parent_node, _set_parent_node)
-
-    ###########################################################################
-    ### General Structural Access and Information
-
-    def is_leaf(self):
-        """
-        Returns |True| if the node is a tip or a leaf node, i.e. has no child
-        nodes.
-
-        Returns
-        -------
-        boolean
-            |True| if the node is a leaf, i.e., has no child nodes. |False|
-            otherwise.
-        """
-        return bool(not self._child_nodes)
-
-    def is_internal(self):
-        """
-        Returns |True| if the node is *not* a tip or a leaf node.
-
-        Returns
-        -------
-        boolean
-            |True| if the node is not a leaf. |False| otherwise.
-        """
-        return bool(self._child_nodes)
-
-    def leaf_nodes(self):
-        """
-        Returns a list of all leaf nodes descended from this node (or just a
-        list with ``self`` as the only member if ``self`` is a leaf).
-
-        Note
-        ----
-        Usage of `leaf_iter()` is preferable for efficiency reasons unless the
-        actual list is required.
-
-        Returns
-        -------
-        :py:class:`list` [|Node|]
-           A ``list`` of |Node| objects descended from this node
-           (inclusive of ``self``) that are the leaves.
-        """
-        return [node for node in
-                self.postorder_iter(lambda x: bool(len(x.child_nodes())==0))]
-
-    def num_child_nodes(self):
-        """
-        Returns number of child nodes.
-
-        Returns
-        -------
-        int
-            Number of children in ``self``.
-        """
-        return len(self._child_nodes)
-
-    def child_nodes(self):
-        """
-        Returns a shallow-copy list of all child nodes of this node.
-
-        Note
-        ----
-        Unless an actual ``list`` is needed, iterating over the child nodes using
-        :meth:`Node.child_node_iter()` is preferable to avoid the overhead of
-        list construction.
-
-        Returns
-        -------
-        :py:class:`list` [|Node|]
-           A ``list`` of |Node| objects that have ``self`` as a parent.
-        """
-        return list(self._child_nodes)
-
-    def child_edges(self):
-        """
-        Returns a shallow-copy list of all child edges of this node.
-
-        Note
-        ----
-        Unless an actual ``list`` is needed, iterating over the child edges using
-        :meth:`Node.child_edge_iter()` is preferable to avoid the overhead of
-        list construction.
-
-        Returns
-        -------
-        :py:class:`list` [|Edge|]
-           A ``list`` of |Edge| objects that have ``self`` as a tail node.
-        """
-        return list(ch.edge for ch in self._child_nodes)
-
-    def incident_edges(self):
-        """
-        Return parent and child edges.
-
-        Returns
-        -------
-        :py:class:`list` [|Edge|]
-            A list of edges linking to this node, with outgoing edges (edges
-            connecting to child nodes) followed by the edge connecting
-            this node to its parent.
-        """
-        e = [c.edge for c in self._child_nodes]
-        e.append(self.edge)
-        return e
-
-    def get_incident_edges(self):
-        """Legacy synonym for :meth:`Node.incident_edges()`."""
-        return self.incident_edges()
-
-    def adjacent_nodes(self):
-        """
-        Return parent and child nodes.
-
-        Returns
-        -------
-        :py:class:`list` [|Node|]
-            A list with all child nodes and parent node of this node.
-        """
-        n = [c for c in self._child_nodes]
-        if self._parent_node:
-            n.append(self._parent_node)
-        return n
-
-    def get_adjacent_nodes(self):
-        """Legacy synonym for :meth:`Node.adjacent_edges()`"""
-        return self.adjacent_nodes()
-
-    def sibling_nodes(self):
-        """
-        Return all other children of parent, excluding self.
-
-        Returns
-        -------
-        :py:class:`list` [|Node|]
-            A list of all nodes descended from the same parent as ``self``,
-            excluding ``self``.
-        """
-        p = self._parent_node
-        if not p:
-            return []
-        sisters = [nd for nd in p.child_nodes() if nd is not self]
-        return sisters
-
-    def sister_nodes(self):
-        """Legacy synonym for :meth:`Node.sister_nodes()`"""
-        return self.sibling_nodes()
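# Editor's note: illustrative usage sketch, not part of the upstream diff.
# Structural queries around a node: children, siblings, and adjacent nodes.
import dendropy

tree = dendropy.Tree.get(data="((A,B,C),D);", schema="newick")
node_a = tree.find_node_with_taxon_label("A")
print([nd.taxon.label for nd in node_a.sibling_nodes()])  # ['B', 'C']
print(node_a.parent_node.num_child_nodes())               # 3
print(len(node_a.adjacent_nodes()))                       # 1 (a leaf: parent only)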
-
-    def extract_subtree(self,
-            extraction_source_reference_attr_name="extraction_source",
-            node_filter_fn=None,
-            suppress_unifurcations=True,
-            is_apply_filter_to_leaf_nodes=True,
-            is_apply_filter_to_internal_nodes=False,
-            node_factory=None,
-            ):
-        """
-        Returns a clone of the structure descending from this node.
-
-        Parameters
-        ----------
-        extraction_source_reference_attr_name : str
-            Name of attribute to set on cloned nodes that references
-            corresponding original node. If ``None``, then attribute (and
-            reference) will not be created.
-        node_filter_fn : None or function object
-            If ``None``, then entire tree structure is cloned.
-            If not ``None``, must be a function object that returns ``True``
-            if a particular |Node| instance on the original tree should
-            be included in the cloned tree, or ``False`` otherwise.
-        is_apply_filter_to_leaf_nodes : bool
-            If ``True`` then the above filter will be applied to leaf nodes. If
-            ``False`` then it will not (and all leaf nodes will be
-            automatically included, unless excluded by an ancestral node being
-            filtered out).
-        is_apply_filter_to_internal_nodes : bool
-            If ``True`` then the above filter will be applied to internal nodes. If
-            ``False`` then it will not (internal nodes without children will
-            still be filtered out).
-        node_factory : function
-            If not ``None``, must be a function that takes no arguments and
-            returns a new |Node| (or equivalent) instance.
-
-        Returns
-        -------
-        nd : |Node|
-            A node with descending subtree mirroring this one.
-
-        """
-        memo = {}
-        is_excluded_nodes = False
-        start_node = None
-        start_node_to_match = self
-        if node_factory is None:
-            node_factory = self.__class__
-        for nd0 in self.postorder_iter():
-            if node_filter_fn is not None:
-                if nd0._child_nodes:
-                    if is_apply_filter_to_internal_nodes:
-                        is_apply_filter = True
-                    else:
-                        is_apply_filter = False
-                else:
-                    if is_apply_filter_to_leaf_nodes:
-                        is_apply_filter = True
-                    else:
-                        is_apply_filter = False
-                if is_apply_filter and not node_filter_fn(nd0):
-                    is_excluded_nodes = True
-                    continue
-            original_node_has_children = False
-            children_to_add = []
-            for ch_nd0 in nd0.child_node_iter():
-                original_node_has_children = True
-                ch_nd1 = memo.get(ch_nd0, None)
-                if ch_nd1 is not None:
-                    children_to_add.append(ch_nd1)
-            if not children_to_add and original_node_has_children:
-                # filter removes all descendants of internal node,
-                # so this internal node is not added
-                if nd0.parent_node is None:
-                    raise error.SeedNodeDeletionException("Attempting to remove seed node or node without parent")
-                if nd0 is self:
-                    start_node_to_match = nd0.parent_node
-                continue
-            elif len(children_to_add) == 1 and suppress_unifurcations:
-                if nd0.edge.length is not None:
-                    if children_to_add[0].edge.length is None:
-                        children_to_add[0].edge.length = nd0.edge.length
-                    else:
-                        children_to_add[0].edge.length += nd0.edge.length
-                else:
-                    nd1.edge.length = children_to_add[0].edge.length
-                if nd0.parent_node is None:
-                    start_node = children_to_add[0]
-                    break
-                if nd0 is self:
-                    start_node_to_match = nd0.parent_node
-                memo[nd0] = children_to_add[0]
-            else:
-                nd1 = node_factory()
-                nd1.label = nd0.label
-                nd1.taxon = nd0.taxon
-                nd1.edge.length = nd0.edge.length
-                nd1.edge.label = nd0.edge.label
-                for ch_nd1 in children_to_add:
-                    nd1.add_child(ch_nd1)
-                if nd0 is start_node_to_match:
-                    start_node = nd1
-                memo[nd0] = nd1
-                if extraction_source_reference_attr_name:
-                    setattr(nd1, extraction_source_reference_attr_name, nd0)
-        if start_node is not None:
-            return start_node
-        else:
-            ## TODO: find a replacement node
-            raise ValueError
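# Editor's note: illustrative usage sketch, not part of the upstream diff.
# Cloning part of a tree with extract_subtree(); by default the filter is
# applied to leaf nodes only and unifurcations are suppressed.
import dendropy

tree = dendropy.Tree.get(data="((A,B),(C,D));", schema="newick")
keep = {"A", "B"}
sub_root = tree.seed_node.extract_subtree(
    node_filter_fn=lambda nd: nd.taxon is None or nd.taxon.label in keep,
)
print(len(sub_root.leaf_nodes()))  # 2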
-
-    ###########################################################################
-    ### Metrics
-
-    def level(self):
-        """
-        Returns the number of edges between ``self`` and the seed node of the
-        tree (i.e., the depth of ``self``).
-
-        Returns
-        -------
-        integer
-            The number of edges between ``self`` and the seed node of the
-            tree, or 0 if ``self`` has no parent.
-        """
-        if self._parent_node:
-            return self._parent_node.level() + 1
-        else:
-            return 0
-
-    def distance_from_root(self):
-        """
-        Weighted path length of ``self`` from root.
-
-        Returns
-        -------
-        numeric
-            Total weight of all edges connecting ``self`` with the root of the
-            tree.
-        """
-        if self._parent_node and self.edge.length != None:
-            if self._parent_node.distance_from_root == None:
-                return float(self.edge.length)
-            else:
-                distance_from_root = float(self.edge.length)
-                parent_node = self._parent_node
-                # The root is identified when a node with no
-                # parent is encountered. If we want to use some
-                # other criteria (e.g., where a is_root property
-                # is True), we modify it here.
-                while parent_node:
-                    if parent_node.edge.length != None:
-                        distance_from_root = distance_from_root + float(parent_node.edge.length)
-                    parent_node = parent_node._parent_node
-                return distance_from_root
-        elif not self._parent_node and self.edge.length != None:
-            return float(self.edge.length)
-        elif self._parent_node and self.edge.length == None:
-            # what do we do here: parent node exists, but my
-            # length does not?
-            return float(self._parent_node.edge.length)
-        elif not self._parent_node and self.edge.length == None:
-            # no parent node, and no edge length
-            return 0.0
-        else:
-            # all parent/edge-length combinations are handled above;
-            # this branch should be unreachable
-            return 0.0
-
-    def distance_from_tip(self):
-        """
-        Maximum weighted path length from ``self`` to a tip.
-
-        If the tree is not ultrametric (i.e., descendant paths have different
-        lengths), then the maximum path length is returned. Note that
-        :meth:`Tree.calc_node_ages()` is a more efficient way of doing this
-        over the whole tree if this value is needed for many or all the nodes
-        on the tree.
-
-        Returns
-        -------
-        numeric
-            Maximum weight of edges connecting ``self`` to tip.
-        """
-        if not self._child_nodes:
-            return 0.0
-        else:
-            distance_from_tips = []
-            for ch in self._child_nodes:
-                if ch.edge.length is not None:
-                    curr_edge_length = ch.edge_length
-                else:
-                    curr_edge_length = 0.0
-                if not hasattr(ch, "_distance_from_tip"):
-                    ch._distance_from_tip = ch.distance_from_tip()
-                distance_from_tips.append(ch._distance_from_tip + curr_edge_length)
-            self._distance_from_tip = float(max(distance_from_tips))
-            return self._distance_from_tip
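# Editor's note: illustrative usage sketch, not part of the upstream diff.
# Node depth/height metrics on a small ultrametric tree.
import dendropy

tree = dendropy.Tree.get(data="((A:1,B:1):1,C:2);", schema="newick")
node_a = tree.find_node_with_taxon_label("A")
print(node_a.level())                      # 2 (two edges from the seed node)
print(node_a.distance_from_root())         # 2.0
print(tree.seed_node.distance_from_tip())  # 2.0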
-
-    ###########################################################################
-    ### Representation
-
-    def description(self, depth=1, indent=0, itemize="", output=None, taxon_namespace=None):
-        """
-        Returns description of object, up to level ``depth``.
-        """
-        if depth is None or depth < 0:
-            return
-        output_strio = StringIO()
-        label = str(self)
-        output_strio.write('%s%sNode object at %s%s'
-                % (indent*' ',
-                   itemize,
-                   hex(id(self)),
-                   label))
-        if depth >= 1:
-            leader1 = ' ' * (indent + 4)
-            leader2 = ' ' * (indent + 8)
-            output_strio.write('\n%s[Edge]' % leader1)
-            if self.edge is not None:
-                edge_desc = self.edge.description(0)
-            else:
-                edge_desc = 'None'
-            output_strio.write('\n%s%s' % (leader2, edge_desc))
-
-            output_strio.write('\n%s[Taxon]' % leader1)
-            if self.taxon is not None:
-                taxon_desc = self.taxon.description(0)
-            else:
-                taxon_desc = 'None'
-            output_strio.write('\n%s%s' % (leader2, taxon_desc))
-
-            output_strio.write('\n%s[Parent]' % leader1)
-            if self._parent_node is not None:
-                parent_node_desc = self._parent_node.description(0)
-            else:
-                parent_node_desc = 'None'
-            output_strio.write('\n%s%s' % (leader2, parent_node_desc))
-            output_strio.write('\n%s[Children]' % leader1)
-            if len(self._child_nodes) == 0:
-                output_strio.write('\n%sNone' % leader2)
-            else:
-                for i, cnd in enumerate(self._child_nodes):
-                    output_strio.write('\n%s[%d] %s' % (leader2, i, cnd.description(0)))
-        s = output_strio.getvalue()
-        if output is not None:
-            output.write(s)
-        return s
-
-    ###########################################################################
-    ### Native NEWICK printer
-    ## For debugging we build-in a full-fledged NEWICK composition independent
-    ## of the nexus/newick family of modules. Client code should prefer to
-    ## use Newick/Nexus readers/writers, or Tree.write(), TreeList.write(),
-    ## DataSet.write() etc.
-
-    def _as_newick_string(self, **kwargs):
-        """
-        This returns the Node as a NEWICK statement according to the given
-        formatting rules. This should be used for debugging purposes only.
-        For production purposes, use the full-fledged 'as_string()'
-        method of the object.
-        """
-        out = StringIO()
-        self._write_newick(out, **kwargs)
-        return out.getvalue()
-
-    def _write_newick(self, out, **kwargs):
-        """
-        This writes the Node as a NEWICK statement to ``out`` according to the
-        given formatting rules. This should be used for debugging purposes
-        only. For production purposes, use the full-fledged
-        'write_to_stream()' method of the object.
-        """
-        edge_lengths = not kwargs.get('suppress_edge_lengths', False)
-        edge_lengths = kwargs.get('edge_lengths', edge_lengths)
-        child_nodes = self.child_nodes()
-        if child_nodes:
-            out.write('(')
-            f_child = child_nodes[0]
-            for child in child_nodes:
-                if child is not f_child:
-                    out.write(',')
-                child._write_newick(out, **kwargs)
-            out.write(')')
-
-        out.write(self._get_node_token(**kwargs))
-        if edge_lengths:
-            e = self.edge
-            if e:
-                sel = e.length
-                if sel is not None:
-                    fmt = kwargs.get('edge_length_formatter', None)
-                    if fmt:
-                        out.write(":%s" % fmt(sel))
-                    else:
-                        s = ""
-                        try:
-                            s = float(sel)
-                            s = str(s)
-                        except ValueError:
-                            s = str(sel)
-                        if s:
-                            out.write(":%s" % s)
-
-    def _get_node_token(self, **kwargs):
-        """returns a string that is an identifier for the node.  This is called
-        by the newick-writing functions, so the kwargs that affect how node
-        labels show up in a newick string are the same ones used here:
-        ``suppress_internal_labels`` is a Boolean, and defaults to False.
-        """
-        is_leaf = (len(self._child_nodes) == 0)
-        if not is_leaf:
-            if kwargs.get("suppress_internal_labels", False) \
-                    or not kwargs.get("include_internal_labels", True):
-                return ""
-        if self.taxon is not None:
-            if self.taxon.label:
-                label = self.taxon.label
-            else:
-                # return "_" # taxon, but no label: anonymous
-                label = "" # "_" is not anonoymous/unnamed, but a name of <blank>; so we return nothing instead
-        else:
-            if self.label:
-                label = self.label
-            else:
-                label = ""
-        if not label or kwargs.get("raw_labels", False):
-            return label
-        elif " " in label and "_" in label:
-            if "'" in label:
-                label.replace("'", "''")
-            return "'{}'".format(label)
-        elif " " in label and not kwargs.get("preserve_spaces"):
-            return label.replace(" ", "_")
-        else:
-            return label
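# Editor's note: illustrative usage sketch, not part of the upstream diff.
# _as_newick_string() is a private, debug-only writer; production code should
# prefer the public Tree.as_string()/write() methods. Shown only to illustrate
# how the label-related kwargs above behave.
import dendropy

tree = dendropy.Tree.get(data="((A,B)AB,C)root;", schema="newick")
print(tree.seed_node._as_newick_string())                               # internal labels kept
print(tree.seed_node._as_newick_string(suppress_internal_labels=True))  # leaf labels only
print(tree.as_string(schema="newick"))                                  # preferred public API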
-
-    ###########################################################################
-    ### alternate representation of tree structure for debugging
-
-    def _get_indented_form(self, **kwargs):
-        out = StringIO()
-        self._write_indented_form(out, **kwargs)
-        return out.getvalue()
-
-    def _write_indented_form(self, out, **kwargs):
-        indentation = kwargs.get("indentation", "    ")
-        level = kwargs.get("level", 0)
-        ancestors = []
-        siblings = []
-        n = self
-        while n is not None:
-            n._write_indented_form_line(out, level, **kwargs)
-            n, lev = _preorder_list_manip(n, siblings, ancestors)
-            level += lev
-
-    def _get_indented_form_line(self, level, **kwargs):
-        out = StringIO()
-        self._write_indented_form_line(out, level, **kwargs)
-        return out.getvalue()
-
-    def _write_indented_form_line(self, out, level, **kwargs):
-        indentation = kwargs.get("indentation", "    ")
-        label = _format_node(self, **kwargs)
-        if kwargs.get("bipartitions"):
-            cm = "%s " % _format_bipartition(self.edge.bipartition, **kwargs)
-        else:
-            cm = ""
-        out.write("%s%s%s\n" % ( cm, indentation*level, label))
+import copy
+from dendropy.utility.textprocessing import StringIO
+from dendropy.utility import terminal
+from dendropy.utility import error
+from dendropy.utility import bitprocessing
+from dendropy.utility import deprecate
+from dendropy.utility import constants
+from dendropy.utility import GLOBAL_RNG
+from dendropy.utility import messaging
+from dendropy.datamodel import basemodel
+from dendropy.datamodel import taxonmodel
+from dendropy.datamodel.treemodel import _bipartition
+from dendropy.datamodel.treemodel import _node
+from dendropy import dataio
 
-##############################################################################
-### Tree
+_LOG = messaging.get_logger(__name__)
 
 class Tree(
-        taxonmodel.TaxonNamespaceAssociated,
-        basemodel.Annotable,
-        basemodel.Deserializable,
-        basemodel.NonMultiReadable,
-        basemodel.Serializable,
-        basemodel.DataObject):
+    taxonmodel.TaxonNamespaceAssociated,
+    basemodel.Annotable,
+    basemodel.Deserializable,
+    basemodel.NonMultiReadable,
+    basemodel.Serializable,
+    basemodel.DataObject,
+):
     """
     An arborescence, i.e. a connected directed acyclic graph with all
     edges directed away from the root and toward the tips. The "root" of the
@@ -2558,13 +34,10 @@ class Tree(
     semantically equivalent to the root.
     """
 
-    def _parse_and_create_from_stream(cls,
-            stream,
-            schema,
-            collection_offset=None,
-            tree_offset=None,
-            **kwargs):
-        """
+    def _parse_and_create_from_stream(
+        cls, stream, schema, collection_offset=None, tree_offset=None, **kwargs
+    ):
+        r"""
         Constructs a new |Tree| object and populates it with data from
         file-like object ``stream``.
 
@@ -2638,16 +111,22 @@ class Tree(
         """
         from dendropy.datamodel.treecollectionmodel import TreeList
 
-        taxon_namespace = taxonmodel.process_kwargs_dict_for_taxon_namespace(kwargs, None)
+        taxon_namespace = taxonmodel.process_kwargs_dict_for_taxon_namespace(
+            kwargs, None
+        )
         if taxon_namespace is None:
-            taxon_namespace = taxonmodel.TaxonNamespace()
+            taxon_namespace = taxonmodel.TaxonNamespace(
+                is_case_sensitive=kwargs.get("case_sensitive_taxon_labels", False)
+            )
 
         def tns_factory(label):
             if label is not None and taxon_namespace.label is None:
                 taxon_namespace.label = label
             return taxon_namespace
 
-        tree_list_factory = lambda label, taxon_namespace: TreeList(label=label, taxon_namespace=taxon_namespace, tree_type=cls)
+        tree_list_factory = lambda label, taxon_namespace: TreeList(
+            label=label, taxon_namespace=taxon_namespace, tree_type=cls
+        )
         label = kwargs.pop("label", None)
         reader = dataio.get_reader(schema, **kwargs)
         # if collection_offset is None and tree_offset is not None:
@@ -2657,10 +136,11 @@ class Tree(
         if tree_offset is None:
             tree_offset = 0
         tree_lists = reader.read_tree_lists(
-                    stream=stream,
-                    taxon_namespace_factory=tns_factory,
-                    tree_list_factory=tree_list_factory,
-                    global_annotations_target=None)
+            stream=stream,
+            taxon_namespace_factory=tns_factory,
+            tree_list_factory=tree_list_factory,
+            global_annotations_target=None,
+        )
         if not tree_lists:
             raise ValueError("No trees in data source")
         tree_list = tree_lists[collection_offset]
@@ -2669,6 +149,7 @@ class Tree(
         tree = tree_list[tree_offset]
         tree.label = label
         return tree
+
     _parse_and_create_from_stream = classmethod(_parse_and_create_from_stream)
 
     @classmethod
@@ -2757,12 +238,8 @@ class Tree(
         """
         return cls._get_from(**kwargs)
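# Editor's note: illustrative usage sketch, not part of the upstream diff.
# Tree.get() is the public factory wrapping the stream-parsing machinery above;
# it accepts the usual file/path/url/data sources plus a schema name.
import dendropy

tree = dendropy.Tree.get(data="((A:1,B:1):1,C:2);", schema="newick")
print(len(tree.taxon_namespace))  # 3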
 
-    def yield_from_files(cls,
-            files,
-            schema,
-            taxon_namespace=None,
-            **kwargs):
-        """
+    def yield_from_files(cls, files, schema, taxon_namespace=None, **kwargs):
+        r"""
         Iterates over trees from files, returning them one-by-one instead of
         instantiating all of them in memory at once.
 
@@ -2825,29 +302,34 @@ class Tree(
 
         """
         if taxon_namespace is None:
-            taxon_namespace = taxonmodel.process_kwargs_dict_for_taxon_namespace(kwargs, None)
+            taxon_namespace = taxonmodel.process_kwargs_dict_for_taxon_namespace(
+                kwargs, None
+            )
             if taxon_namespace is None:
-                taxon_namespace = taxonmodel.TaxonNamespace()
+                taxon_namespace = taxonmodel.TaxonNamespace(
+                    is_case_sensitive=kwargs.get("case_sensitive_taxon_labels", False)
+                )
         else:
             assert "taxon_set" not in kwargs
         if "tree_offset" in kwargs:
-            raise TypeError("'tree_offset' is not supported: trees should be skipped/discarded on the client code side")
+            raise TypeError(
+                "'tree_offset' is not supported: trees should be skipped/discarded on"
+                " the client code side"
+            )
         tree_yielder = dataio.get_tree_yielder(
-                files,
-                schema,
-                taxon_namespace=taxon_namespace,
-                tree_type=cls,
-                **kwargs)
+            files, schema, taxon_namespace=taxon_namespace, tree_type=cls, **kwargs
+        )
         return tree_yielder
+
     yield_from_files = classmethod(yield_from_files)
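# Editor's note: illustrative usage sketch, not part of the upstream diff;
# "trees1.nex" and "trees2.nex" are hypothetical file names. yield_from_files()
# streams trees one at a time under a shared TaxonNamespace instead of loading
# every tree into memory at once.
import dendropy

tns = dendropy.TaxonNamespace()
for tree in dendropy.Tree.yield_from_files(
        files=["trees1.nex", "trees2.nex"],
        schema="nexus",
        taxon_namespace=tns):
    print(tree.label, len(tree.leaf_nodes()))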
 
     def from_bipartition_encoding(
-            cls,
-            bipartition_encoding,
-            taxon_namespace,
-            is_rooted=False,
-            edge_lengths=None,
-            ):
+        cls,
+        bipartition_encoding,
+        taxon_namespace,
+        is_rooted=False,
+        edge_lengths=None,
+    ):
         """
         Reconstructs a tree from a bipartition encoding.
 
@@ -2884,19 +366,21 @@ class Tree(
         #     split_edge_lengths = dict(zip(split_bitmasks,
         #         [b.edge.length for b in bipartition_encoding]))
         return cls.from_split_bitmasks(
-                split_bitmasks=split_bitmasks,
-                taxon_namespace=taxon_namespace,
-                split_edge_lengths=split_edge_lengths,
-                is_rooted=is_rooted)
+            split_bitmasks=split_bitmasks,
+            taxon_namespace=taxon_namespace,
+            split_edge_lengths=split_edge_lengths,
+            is_rooted=is_rooted,
+        )
+
     from_bipartition_encoding = classmethod(from_bipartition_encoding)
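# Editor's note: illustrative usage sketch, not part of the upstream diff.
# Round-tripping a topology through its bipartition encoding, assuming the
# public Tree.encode_bipartitions() method to produce the encoding.
import dendropy

tree = dendropy.Tree.get(data="((A,B),(C,D));", schema="newick")
encoding = tree.encode_bipartitions()
tree2 = dendropy.Tree.from_bipartition_encoding(
    encoding, taxon_namespace=tree.taxon_namespace
)
print(len(tree2.leaf_nodes()))  # 4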
 
     def from_split_bitmasks(
-            cls,
-            split_bitmasks,
-            taxon_namespace,
-            is_rooted=False,
-            split_edge_lengths=None,
-            ):
+        cls,
+        split_bitmasks,
+        taxon_namespace,
+        is_rooted=False,
+        split_edge_lengths=None,
+    ):
         """
         Reconstructs a tree from a collection of splits represented as bitmasks.
 
@@ -2944,18 +428,20 @@ class Tree(
         split_bitmasks_to_add = []
         for s in split_bitmasks:
             m = s & all_taxa_bitmask
-            if (m != all_taxa_bitmask) and ((m-1) & m): # if not root (i.e., all "1's") and not singleton (i.e., one "1")
+            if (m != all_taxa_bitmask) and (
+                (m - 1) & m
+            ):  # if not root (i.e., all "1's") and not singleton (i.e., one "1")
                 if is_rooted:
                     split_bitmasks_to_add.append(m)
                 else:
                     if 1 & m:
-                        split_bitmasks_to_add.append( (~m) & all_taxa_bitmask )
+                        split_bitmasks_to_add.append((~m) & all_taxa_bitmask)
                     else:
                         # "denormalize" split_bitmasks
                         split_bitmasks_to_add.append(m)
 
         # Now when we add split_bitmasks in order, we will do a greedy, extended majority-rule consensus tree
-        #for freq, split_to_add, split_in_dict in to_try_to_add:
+        # for freq, split_to_add, split_in_dict in to_try_to_add:
         _get_mask = lambda x: getattr(x.bipartition, "_leafset_bitmask")
         for split_to_add in split_bitmasks_to_add:
             if (split_to_add & root_edge.bipartition.leafset_bitmask) != split_to_add:
@@ -2965,14 +451,19 @@ class Tree(
                 lb = bitprocessing.least_significant_set_bit(split_to_add)
                 one_leaf = to_leaf_dict[lb]
                 parent_node = one_leaf
-                while (split_to_add & parent_node.edge.bipartition.leafset_bitmask) != split_to_add:
+                while (
+                    split_to_add & parent_node.edge.bipartition.leafset_bitmask
+                ) != split_to_add:
                     parent_node = parent_node.parent_node
             else:
                 parent_node = reconstructed_tree.mrca(split_bitmask=split_to_add)
-            if parent_node is None or parent_node.edge.bipartition.leafset_bitmask == split_to_add:
-                continue # split is not in tree, or already in tree.
+            if (
+                parent_node is None
+                or parent_node.edge.bipartition.leafset_bitmask == split_to_add
+            ):
+                continue  # split is not in tree, or already in tree.
             new_node = cls.node_factory()
-            #self.map_split_support_to_node(node=new_node, split_support=freq)
+            # self.map_split_support_to_node(node=new_node, split_support=freq)
             new_node_children = []
             new_edge = new_node.edge
             new_mask = 0
@@ -2980,33 +471,35 @@ class Tree(
                 # might need to modify the following if rooted split_bitmasks
                 # are used
                 cecm = child.edge.bipartition.leafset_bitmask
-                if (cecm & split_to_add):
+                if cecm & split_to_add:
                     assert cecm != split_to_add
                     new_mask |= cecm
                     new_node_children.append(child)
-                    new_edge.bipartition = Bipartition(
-                            leafset_bitmask=new_mask,
-                            tree_leafset_bitmask=all_taxa_bitmask,
-                            is_mutable=False,
-                            compile_bipartition=True)
+                    new_edge.bipartition = _bipartition.Bipartition(
+                        leafset_bitmask=new_mask,
+                        tree_leafset_bitmask=all_taxa_bitmask,
+                        is_mutable=False,
+                        compile_bipartition=True,
+                    )
                     reconstructed_tree.bipartition_encoding.append(new_edge.bipartition)
             # Check to see if we have accumulated all of the bits that we
             #   needed, but none that we don't need.
             if new_edge.bipartition.leafset_bitmask == split_to_add:
                 if split_edge_lengths:
                     new_edge.length = split_edge_lengths[split_to_add]
-                    #old_split = new_old_split_map[split_to_add]
-                    #new_edge.length = split_edge_lengths[old_split]
+                    # old_split = new_old_split_map[split_to_add]
+                    # new_edge.length = split_edge_lengths[old_split]
                 for child in new_node_children:
                     parent_node.remove_child(child)
                     new_node.add_child(child)
                 parent_node.add_child(new_node)
                 # reconstructed_tree.split_edge_map[split_to_add] = new_edge
         return reconstructed_tree
+
     from_split_bitmasks = classmethod(from_split_bitmasks)
 
     def node_factory(cls, **kwargs):
-        """
+        r"""
         Creates and returns a |Node| object.
 
         Derived classes can override this method to provide support for
@@ -3024,14 +517,12 @@ class Tree(
             A new |Node| object.
 
         """
-        return Node(**kwargs)
-    node_factory = classmethod(node_factory)
+        return _node.Node(**kwargs)
 
-    ###########################################################################
-    ### Special/Lifecycle methods
+    node_factory = classmethod(node_factory)
 
     def __init__(self, *args, **kwargs):
-        """
+        r"""
         The constructor can optionally construct a |Tree| object by
         cloning another |Tree| object passed as the first positional
         argument, or out of a data source if ``stream`` and ``schema`` keyword
@@ -3162,22 +653,39 @@ class Tree(
         """
         if len(args) > 1:
             # only allow 1 positional argument
-            raise error.TooManyArgumentsError(func_name=self.__class__.__name__, max_args=1, args=args)
+            raise error.TooManyArgumentsError(
+                func_name=self.__class__.__name__, max_args=1, args=args
+            )
         elif len(args) == 1:
             if "seed_node" in kwargs:
-                raise TypeError("Cannot specify 'seed_node' if passing in a Tree object to clone")
+                raise TypeError(
+                    "Cannot specify 'seed_node' if passing in a Tree object to clone"
+                )
             if "stream" in kwargs or "schema" in kwargs:
-                raise TypeError("Constructing from an external stream is no longer supported: use the factory method 'Tree.get(file=...)'")
-            if isinstance(args[0], Node):
-                raise TypeError("Constructing a tree around a Node passed as a position argument is no longer supported; a keyword argument is now required for this approach: use Tree(seed_node=node)")
+                raise TypeError(
+                    "Constructing from an external stream is no longer supported: use"
+                    " the factory method 'Tree.get(file=...)'"
+                )
+            if isinstance(args[0], _node.Node):
+                raise TypeError(
+                    "Constructing a tree around a Node passed as a position argument is"
+                    " no longer supported; a keyword argument is now required for this"
+                    " approach: use Tree(seed_node=node)"
+                )
             if isinstance(args[0], Tree):
                 self._clone_from(args[0], kwargs)
             else:
-                raise error.InvalidArgumentValueError(func_name=self.__class__.__name__, arg=args[0])
+                raise error.InvalidArgumentValueError(
+                    func_name=self.__class__.__name__, arg=args[0]
+                )
         else:
             basemodel.DataObject.__init__(self, label=kwargs.pop("label", None))
-            taxonmodel.TaxonNamespaceAssociated.__init__(self,
-                    taxon_namespace=taxonmodel.process_kwargs_dict_for_taxon_namespace(kwargs, None))
+            taxonmodel.TaxonNamespaceAssociated.__init__(
+                self,
+                taxon_namespace=taxonmodel.process_kwargs_dict_for_taxon_namespace(
+                    kwargs, None
+                ),
+            )
             self.comments = []
             self._is_rooted = kwargs.pop("is_rooted", None)
             self.weight = None
@@ -3196,22 +704,34 @@ class Tree(
         if kwargs:
             raise TypeError("Unrecognized or unsupported arguments: {}".format(kwargs))
 
-    ##############################################################################
     ## Bipartitions
 
     def _get_split_edges(self):
         deprecate.dendropy_deprecation_warning(
-                message="Deprecated since DendroPy 4: 'Tree.split_edges' will no longer be supported in future releases; use 'Tree.bipartition_encoding' for a list of bipartitions on the tree, or dereference the edge through the 'Tree.bipartition_edge_map' attribute.",
-                stacklevel=3)
+            message=(
+                "Deprecated since DendroPy 4: 'Tree.split_edges' will no longer be"
+                " supported in future releases; use 'Tree.bipartition_encoding' for a"
+                " list of bipartitions on the tree, or dereference the edge through the"
+                " 'Tree.bipartition_edge_map' attribute."
+            ),
+            stacklevel=3,
+        )
         return self.bipartition_encoding
+
     def _set_split_edges(self, m):
         deprecate.dendropy_deprecation_warning(
-                message="Deprecated since DendroPy 4: 'Tree.split_edges' will no longer be supported in future releases; use 'Tree.bipartition_encoding' for a list of bipartitions on the tree, or dereference the edge through the 'Tree.bipartition_edge_map' attribute.",
-                stacklevel=3)
+            message=(
+                "Deprecated since DendroPy 4: 'Tree.split_edges' will no longer be"
+                " supported in future releases; use 'Tree.bipartition_encoding' for a"
+                " list of bipartitions on the tree, or dereference the edge through the"
+                " 'Tree.bipartition_edge_map' attribute."
+            ),
+            stacklevel=3,
+        )
         self.bipartition_encoding = m
+
     split_edges = property(_get_split_edges, _set_split_edges)
 
-    ##############################################################################
     ## Identity
 
     def __hash__(self):
@@ -3220,14 +740,15 @@ class Tree(
     def __eq__(self, other):
         return self is other
 
-    ##############################################################################
     ## Copying/cloning
 
     def _clone_from(self, tree, kwargs_dict):
         # super(Tree, self).__init__()
         memo = {}
         # memo[id(tree)] = self
-        taxon_namespace = taxonmodel.process_kwargs_dict_for_taxon_namespace(kwargs_dict, tree.taxon_namespace)
+        taxon_namespace = taxonmodel.process_kwargs_dict_for_taxon_namespace(
+            kwargs_dict, tree.taxon_namespace
+        )
         memo[id(tree.taxon_namespace)] = taxon_namespace
         if taxon_namespace is not tree.taxon_namespace:
             for t1 in tree.taxon_namespace:
@@ -3289,18 +810,16 @@ class Tree(
         # # return
         # return other
 
-    ###########################################################################
-    ### Extracting Trees and Subtrees
-
-    def extract_tree(self,
-            extraction_source_reference_attr_name="extraction_source",
-            node_filter_fn=None,
-            suppress_unifurcations=True,
-            is_apply_filter_to_leaf_nodes=True,
-            is_apply_filter_to_internal_nodes=False,
-            tree_factory=None,
-            node_factory=None,
-            ):
+    def extract_tree(
+        self,
+        extraction_source_reference_attr_name="extraction_source",
+        node_filter_fn=None,
+        suppress_unifurcations=True,
+        is_apply_filter_to_leaf_nodes=True,
+        is_apply_filter_to_internal_nodes=False,
+        tree_factory=None,
+        node_factory=None,
+    ):
         """
         Returns a copy of this tree that only includes the basic structure
         (nodes, edges), and minimal attributes (edge lengths, node labels, and
@@ -3396,20 +915,21 @@ class Tree(
         other.length_type = self.length_type
         other.label = self.label
         other.seed_node = self.seed_node.extract_subtree(
-                extraction_source_reference_attr_name=extraction_source_reference_attr_name,
-                node_filter_fn=node_filter_fn,
-                suppress_unifurcations=suppress_unifurcations,
-                is_apply_filter_to_leaf_nodes=is_apply_filter_to_leaf_nodes,
-                is_apply_filter_to_internal_nodes=is_apply_filter_to_internal_nodes,
-                node_factory=node_factory,
-                )
+            extraction_source_reference_attr_name=extraction_source_reference_attr_name,
+            node_filter_fn=node_filter_fn,
+            suppress_unifurcations=suppress_unifurcations,
+            is_apply_filter_to_leaf_nodes=is_apply_filter_to_leaf_nodes,
+            is_apply_filter_to_internal_nodes=is_apply_filter_to_internal_nodes,
+            node_factory=node_factory,
+        )
         return other
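
# Usage sketch for extract_tree(): keep only leaves whose taxon label starts
# with "A" (the toy newick string and labels below are illustrative only).
import dendropy

tree = dendropy.Tree.get(data="((A1:1,A2:1):1,(B1:1,B2:1):1);", schema="newick")
subtree = tree.extract_tree(
    node_filter_fn=lambda nd: nd.taxon is None or nd.taxon.label.startswith("A"),
)
print(subtree.as_string(schema="newick"))  # retains only A1 and A2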
 
-    def extract_tree_with_taxa(self,
-            taxa,
-            extraction_source_reference_attr_name="extraction_source",
-            suppress_unifurcations=True,
-            ):
+    def extract_tree_with_taxa(
+        self,
+        taxa,
+        extraction_source_reference_attr_name="extraction_source",
+        suppress_unifurcations=True,
+    ):
         """
         Returns a copy of this tree that only includes leaf nodes if they
         are associated with the taxon objects listed in ``taxa``. Note that
@@ -3463,17 +983,18 @@ class Tree(
         """
         node_filter_fn = lambda nd: nd.taxon is None or nd.taxon in set(taxa)
         return self.extract_tree(
-                node_filter_fn=node_filter_fn,
-                extraction_source_reference_attr_name=extraction_source_reference_attr_name,
-                is_apply_filter_to_leaf_nodes=True,
-                is_apply_filter_to_internal_nodes=False,
-                )
+            node_filter_fn=node_filter_fn,
+            extraction_source_reference_attr_name=extraction_source_reference_attr_name,
+            is_apply_filter_to_leaf_nodes=True,
+            is_apply_filter_to_internal_nodes=False,
+        )
 
-    def extract_tree_with_taxa_labels(self,
-            labels,
-            extraction_source_reference_attr_name="extraction_source",
-            suppress_unifurcations=True,
-            ):
+    def extract_tree_with_taxa_labels(
+        self,
+        labels,
+        extraction_source_reference_attr_name="extraction_source",
+        suppress_unifurcations=True,
+    ):
         """
         Returns a copy of this tree that only includes leaf nodes if they are
         associated with taxon objects with labels matching those listed in
@@ -3527,17 +1048,18 @@ class Tree(
         """
         node_filter_fn = lambda nd: nd.taxon is None or nd.taxon.label in set(labels)
         return self.extract_tree(
-                node_filter_fn=node_filter_fn,
-                extraction_source_reference_attr_name=extraction_source_reference_attr_name,
-                is_apply_filter_to_leaf_nodes=True,
-                is_apply_filter_to_internal_nodes=False,
-                )
+            node_filter_fn=node_filter_fn,
+            extraction_source_reference_attr_name=extraction_source_reference_attr_name,
+            is_apply_filter_to_leaf_nodes=True,
+            is_apply_filter_to_internal_nodes=False,
+        )
 
-    def extract_tree_without_taxa(self,
-            taxa,
-            extraction_source_reference_attr_name="extraction_source",
-            suppress_unifurcations=True,
-            ):
+    def extract_tree_without_taxa(
+        self,
+        taxa,
+        extraction_source_reference_attr_name="extraction_source",
+        suppress_unifurcations=True,
+    ):
         """
         Returns a copy of this tree that only includes leaf nodes if they
         are NOT associated with the taxon objects listed in ``taxa``. Note that
@@ -3591,17 +1113,18 @@ class Tree(
         """
         node_filter_fn = lambda nd: nd.taxon is None or nd.taxon not in set(taxa)
         return self.extract_tree(
-                node_filter_fn=node_filter_fn,
-                extraction_source_reference_attr_name=extraction_source_reference_attr_name,
-                is_apply_filter_to_leaf_nodes=True,
-                is_apply_filter_to_internal_nodes=False,
-                )
+            node_filter_fn=node_filter_fn,
+            extraction_source_reference_attr_name=extraction_source_reference_attr_name,
+            is_apply_filter_to_leaf_nodes=True,
+            is_apply_filter_to_internal_nodes=False,
+        )
 
-    def extract_tree_without_taxa_labels(self,
-            labels,
-            extraction_source_reference_attr_name="extraction_source",
-            suppress_unifurcations=True,
-            ):
+    def extract_tree_without_taxa_labels(
+        self,
+        labels,
+        extraction_source_reference_attr_name="extraction_source",
+        suppress_unifurcations=True,
+    ):
         """
         Returns a copy of this tree that only includes leaf nodes if they
         are NOT associated with taxon objects whose labels are listed in ``labels``. Note that
@@ -3653,19 +1176,18 @@ class Tree(
             A new tree based on this one, with nodes filtered out if specified.
 
         """
-        node_filter_fn = lambda nd: nd.taxon is None or nd.taxon.label not in set(labels)
+        node_filter_fn = lambda nd: nd.taxon is None or nd.taxon.label not in set(
+            labels
+        )
         return self.extract_tree(
-                node_filter_fn=node_filter_fn,
-                extraction_source_reference_attr_name=extraction_source_reference_attr_name,
-                is_apply_filter_to_leaf_nodes=True,
-                is_apply_filter_to_internal_nodes=False,
-                )
-
-    ###########################################################################
-    ### I/O
+            node_filter_fn=node_filter_fn,
+            extraction_source_reference_attr_name=extraction_source_reference_attr_name,
+            is_apply_filter_to_leaf_nodes=True,
+            is_apply_filter_to_internal_nodes=False,
+        )
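
# Usage sketch: the *_taxa_labels variants above are thin wrappers around
# extract_tree() with a label-based filter (toy tree and labels are illustrative).
import dendropy

tree = dendropy.Tree.get(data="((A:1,B:1):1,(C:1,D:1):1);", schema="newick")
only_cd = tree.extract_tree_with_taxa_labels(labels=["C", "D"])
no_cd = tree.extract_tree_without_taxa_labels(labels=["C", "D"])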
 
     def _format_and_write_to_stream(self, stream, schema, **kwargs):
-        """
+        r"""
         Writes out ``self`` in ``schema`` format to a destination given by
         file-like object ``stream``.
 
@@ -3686,6 +1208,7 @@ class Tree(
 
         """
         from dendropy.datamodel.treecollectionmodel import TreeList
+
         tree_list = TreeList(taxon_namespace=self.taxon_namespace)
         tree_list.append(self, taxon_import_strategy="add")
         # Go through TreeList.write() to reduce testing targets (i.e., testing
@@ -3693,9 +1216,6 @@ class Tree(
         tree_list.write_to_stream(stream, schema, **kwargs)
         # writer.write_tree_list(tree_list, stream)
 
-    ###########################################################################
-    ### Node and Edge Collection Access
-
     def nodes(self, filter_fn=None):
         """
         Returns list of nodes on tree.
@@ -3745,7 +1265,12 @@ class Tree(
         :py:class:`list` [|Node|]
             List of internal |Node| objects in ``self``.
         """
-        return [nd for nd in self.preorder_internal_node_iter(exclude_seed_node=exclude_seed_node)]
+        return [
+            nd
+            for nd in self.preorder_internal_node_iter(
+                exclude_seed_node=exclude_seed_node
+            )
+        ]
 
     def edges(self, filter_fn=None):
         """
@@ -3793,10 +1318,12 @@ class Tree(
         :py:class:`list` [|Edge|]
             List of internal |Edge| objects in ``self``.
         """
-        return [nd.edge for nd in self.preorder_internal_node_iter(exclude_seed_node=exclude_seed_edge)]
-
-    ###########################################################################
-    ### Node Finders
+        return [
+            nd.edge
+            for nd in self.preorder_internal_node_iter(
+                exclude_seed_node=exclude_seed_edge
+            )
+        ]
 
     def find_node(self, filter_fn):
         """
@@ -3951,7 +1478,7 @@ class Tree(
         # return self.find_node_with_taxon(lambda x: x is taxon)
 
     def mrca(self, **kwargs):
-        """
+        r"""
         Returns most-recent common ancestor node of a set of taxa on the tree.
 
         Returns the shallowest node in the tree (the node nearest the tips)
@@ -4012,7 +1539,10 @@ class Tree(
                     if len(taxa) != len(kwargs["taxon_labels"]):
                         raise KeyError("Not all labels matched to taxa")
                 else:
-                    raise TypeError("Must specify one of: 'leafset_bitmask', 'taxa' or 'taxon_labels'")
+                    raise TypeError(
+                        "Must specify one of: 'leafset_bitmask', 'taxa' or"
+                        " 'taxon_labels'"
+                    )
             if taxa is None:
                 raise ValueError("No taxa matching criteria found")
             leafset_bitmask = self.taxon_namespace.taxa_bitmask(taxa=taxa)
@@ -4020,10 +1550,14 @@ class Tree(
         if leafset_bitmask is None or leafset_bitmask == 0:
             raise ValueError("Null leafset bitmask (0)")
 
-        if start_node.edge.bipartition.leafset_bitmask == 0 or not kwargs.get("is_bipartitions_updated", True):
+        if start_node.edge.bipartition.leafset_bitmask == 0 or not kwargs.get(
+            "is_bipartitions_updated", True
+        ):
             self.encode_bipartitions(suppress_unifurcations=False)
 
-        if (start_node.edge.bipartition.leafset_bitmask & leafset_bitmask) != leafset_bitmask:
+        if (
+            start_node.edge.bipartition.leafset_bitmask & leafset_bitmask
+        ) != leafset_bitmask:
             return None
 
         curr_node = start_node
@@ -4032,12 +1566,16 @@ class Tree(
         try:
             while True:
                 cm = curr_node.edge.bipartition.leafset_bitmask
-                cms = (cm & leafset_bitmask)
+                cms = cm & leafset_bitmask
                 if cms:
                     # for at least one taxon cm has 1 and bipartition has 1
                     if cms == leafset_bitmask:
                         # curr_node has all of the 1's that bipartition has
                         if cm == leafset_bitmask:
+                            # step down internal unifurcations until first
+                            # multifurcation
+                            while curr_node.num_child_nodes() == 1:
+                                curr_node, = curr_node.child_nodes()
                             return curr_node
                         last_match = curr_node
                         nd_source = iter(curr_node.child_nodes())
@@ -4053,9 +1591,6 @@ class Tree(
             #   leaves that have not been encoded with leafset_bitmasks.
             return last_match
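
# Usage sketch for mrca(): find the most-recent common ancestor of two taxa
# by label (toy tree is illustrative; bipartitions are encoded on demand).
import dendropy

tree = dendropy.Tree.get(data="((A:1,B:1):1,(C:1,D:1):1);", schema="newick")
node = tree.mrca(taxon_labels=["C", "D"])
print([leaf.taxon.label for leaf in node.leaf_iter()])  # ['C', 'D']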
 
-    ###########################################################################
-    ### Node iterators
-
     def __iter__(self):
         """
         Iterate over nodes on tree in pre-order.
@@ -4126,8 +1661,9 @@ class Tree(
         :py:class:`collections.Iterator` [|Node|]
             An iterator yielding the internal nodes of ``self``.
         """
-        return self.seed_node.preorder_internal_node_iter(filter_fn=filter_fn,
-                exclude_seed_node=exclude_seed_node)
+        return self.seed_node.preorder_internal_node_iter(
+            filter_fn=filter_fn, exclude_seed_node=exclude_seed_node
+        )
 
     def postorder_node_iter(self, filter_fn=None):
         """
@@ -4181,8 +1717,9 @@ class Tree(
             An iterator yielding the internal nodes of ``self`` in post-order
             sequence.
         """
-        return self.seed_node.postorder_internal_node_iter(filter_fn=filter_fn,
-                exclude_seed_node=exclude_seed_node)
+        return self.seed_node.postorder_internal_node_iter(
+            filter_fn=filter_fn, exclude_seed_node=exclude_seed_node
+        )
 
     def levelorder_node_iter(self, filter_fn=None):
         """
@@ -4213,8 +1750,12 @@ class Tree(
         Deprecated: use :meth:`Tree.levelorder_node_iter()` instead.
         """
         deprecate.dendropy_deprecation_warning(
-                message="Deprecated since DendroPy 4: 'level_order_node_iter()' will no longer be supported in future releases; use 'levelorder_node_iter()' instead",
-                stacklevel=3)
+            message=(
+                "Deprecated since DendroPy 4: 'level_order_node_iter()' will no longer"
+                " be supported in future releases; use 'levelorder_node_iter()' instead"
+            ),
+            stacklevel=3,
+        )
         return self.seed_node.levelorder_iter(filter_fn=filter_fn)
 
     def inorder_node_iter(self, filter_fn=None):
@@ -4269,8 +1810,12 @@ class Tree(
         Deprecated: use :meth:`Tree.leaf_node_iter()` instead.
         """
         deprecate.dendropy_deprecation_warning(
-                message="Deprecated since DendroPy 4: 'leaf_iter()' will no longer be supported in future releases; use 'leaf_node_iter()' instead",
-                stacklevel=3)
+            message=(
+                "Deprecated since DendroPy 4: 'leaf_iter()' will no longer be supported"
+                " in future releases; use 'leaf_node_iter()' instead"
+            ),
+            stacklevel=3,
+        )
         return self.seed_node.leaf_iter(filter_fn=filter_fn)
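
# Usage sketch for the node iterators: leaf_node_iter() and
# preorder_internal_node_iter() are typical entry points (toy tree is illustrative).
import dendropy

tree = dendropy.Tree.get(data="((A:1,B:1):1,(C:1,D:1):1);", schema="newick")
leaf_labels = [nd.taxon.label for nd in tree.leaf_node_iter()]
num_internal = sum(1 for nd in tree.preorder_internal_node_iter())
print(leaf_labels, num_internal)  # ['A', 'B', 'C', 'D'] 3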
 
     def ageorder_node_iter(self, include_leaves=True, filter_fn=None, descending=False):
@@ -4308,23 +1853,29 @@ class Tree(
         """
         if self.seed_node.age is None:
             self.calc_node_ages()
-        return self.seed_node.ageorder_iter(include_leaves=include_leaves,
-                filter_fn=filter_fn,
-                descending=descending)
+        return self.seed_node.ageorder_iter(
+            include_leaves=include_leaves, filter_fn=filter_fn, descending=descending
+        )
 
-    def age_order_node_iter(self, include_leaves=True, filter_fn=None, descending=False):
+    def age_order_node_iter(
+        self, include_leaves=True, filter_fn=None, descending=False
+    ):
         """
         Deprecated: use :meth:`Tree.ageorder_node_iter()` instead.
         """
         deprecate.dendropy_deprecation_warning(
-                message="Deprecated since DendroPy 4: 'age_order_node_iter()' will no longer be supported in future releases; use 'ageorder_node_iter()' instead",
-                stacklevel=3)
-        return self.ageorder_node_iter(include_leaves=include_leaves,
-                filter_fn=filter_fn,
-                descending=descending)
+            message=(
+                "Deprecated since DendroPy 4: 'age_order_node_iter()' will no longer be"
+                " supported in future releases; use 'ageorder_node_iter()' instead"
+            ),
+            stacklevel=3,
+        )
+        return self.ageorder_node_iter(
+            include_leaves=include_leaves, filter_fn=filter_fn, descending=descending
+        )
 
     def apply(self, before_fn=None, after_fn=None, leaf_fn=None):
-        """
+        r"""
         Applies functions ``before_fn`` and ``after_fn`` to all internal nodes and
         ``leaf_fn`` to all terminal nodes in the subtree starting at ``self``, with
         nodes visited in pre-order.
@@ -4393,9 +1944,6 @@ class Tree(
         """
         self.seed_node.apply(before_fn, after_fn, leaf_fn)
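
# Usage sketch for apply(): tally the number of leaves below every node; the
# "num_leaves" attribute name is arbitrary, not part of the DendroPy API, and
# the toy tree is illustrative.
import dendropy

tree = dendropy.Tree.get(data="((A:1,B:1):1,(C:1,D:1):1);", schema="newick")

def before_fn(nd):
    nd.num_leaves = 0          # initialize on the way down

def after_fn(nd):
    nd.num_leaves = sum(ch.num_leaves for ch in nd.child_nodes())

def leaf_fn(nd):
    nd.num_leaves = 1

tree.apply(before_fn=before_fn, after_fn=after_fn, leaf_fn=leaf_fn)
print(tree.seed_node.num_leaves)  # 4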
 
-    ###########################################################################
-    ### Edge iterators
-
     def preorder_edge_iter(self, filter_fn=None):
         """
         Pre-order iterator over edges in tree.
@@ -4461,12 +2009,14 @@ class Tree(
         else:
             froot = lambda e: True
         if filter_fn:
-            f = lambda x: (froot(x) and x._head_node._child_nodes and filter_fn(x)) or None
+            f = (
+                lambda x: (froot(x) and x._head_node._child_nodes and filter_fn(x))
+                or None
+            )
         else:
             f = lambda x: (x and froot(x) and x._head_node._child_nodes) or None
         return self.preorder_edge_iter(filter_fn=f)
 
-
     def postorder_edge_iter(self, filter_fn=None):
         """
         Post-order iterator over edges of tree.
@@ -4516,7 +2066,9 @@ class Tree(
                     yield edge
             else:
                 stack.append((edge, True))
-                stack.extend([(n._edge, False) for n in reversed(edge._head_node._child_nodes)])
+                stack.extend(
+                    [(n._edge, False) for n in reversed(edge._head_node._child_nodes)]
+                )
 
     def postorder_internal_edge_iter(self, filter_fn=None, exclude_seed_edge=False):
         """
@@ -4553,7 +2105,10 @@ class Tree(
         else:
             froot = lambda e: True
         if filter_fn:
-            f = lambda x: (froot(x) and x._head_node._child_nodes and filter_fn(x)) or None
+            f = (
+                lambda x: (froot(x) and x._head_node._child_nodes and filter_fn(x))
+                or None
+            )
         else:
             f = lambda x: (x and froot(x) and x._head_node._child_nodes) or None
         return self.postorder_edge_iter(filter_fn=f)
@@ -4581,7 +2136,7 @@ class Tree(
             An iterator yielding edges of ``self`` in level-order sequence.
         """
         if filter_fn is not None:
-            f = lambda x : filter_fn(x.edge)
+            f = lambda x: filter_fn(x.edge)
         else:
             f = None
         for nd in self.seed_node.levelorder_iter(filter_fn=f):
@@ -4592,8 +2147,12 @@ class Tree(
         Deprecated: use :meth:`Tree.levelorder_edge_iter()` instead.
         """
         deprecate.dendropy_deprecation_warning(
-                message="Deprecated since DendroPy 4: 'level_order_edge_iter()' will no longer be supported in future releases; use 'levelorder_edge_iter()' instead",
-                stacklevel=3)
+            message=(
+                "Deprecated since DendroPy 4: 'level_order_edge_iter()' will no longer"
+                " be supported in future releases; use 'levelorder_edge_iter()' instead"
+            ),
+            stacklevel=3,
+        )
         return self.levelorder_edge_iter(filter_fn=filter_fn)
 
     def inorder_edge_iter(self, filter_fn=None):
@@ -4619,7 +2178,7 @@ class Tree(
             An iterator yielding edges of ``self`` in infix or in-order sequence.
         """
         if filter_fn is not None:
-            f = lambda x : filter_fn(x.edge)
+            f = lambda x: filter_fn(x.edge)
         else:
             f = None
         for nd in self.seed_node.inorder_iter(filter_fn=f):
@@ -4647,23 +2206,21 @@ class Tree(
             An iterator yielding leaf edges in ``self``.
         """
         if filter_fn is not None:
-            f = lambda x : filter_fn(x.edge)
+            f = lambda x: filter_fn(x.edge)
         else:
             f = None
         for nd in self.seed_node.leaf_iter(filter_fn=f):
             yield nd.edge
 
-    ###########################################################################
-    ### Taxa Management
-
-    def reconstruct_taxon_namespace(self,
-            unify_taxa_by_label=True,
-            taxon_mapping_memo=None):
+    def reconstruct_taxon_namespace(
+        self, unify_taxa_by_label=True, taxon_mapping_memo=None
+    ):
         if taxon_mapping_memo is None:
             taxon_mapping_memo = {}
         for node in self:
-            if (node.taxon is not None
-                    and (unify_taxa_by_label or node.taxon not in self.taxon_namespace)):
+            if node.taxon is not None and (
+                unify_taxa_by_label or node.taxon not in self.taxon_namespace
+            ):
                 t = taxon_mapping_memo.get(node.taxon, None)
                 if t is None:
                     # taxon to use not given and
@@ -4719,8 +2276,12 @@ class Tree(
         with taxa from this tree.
         """
         deprecate.dendropy_deprecation_warning(
-                message="Deprecated since DendroPy 4: 'infer_taxa()' will no longer be supported in future releases; use 'update_taxon_namespace()' instead",
-                stacklevel=3)
+            message=(
+                "Deprecated since DendroPy 4: 'infer_taxa()' will no longer be"
+                " supported in future releases; use 'update_taxon_namespace()' instead"
+            ),
+            stacklevel=3,
+        )
         taxon_namespace = taxonmodel.TaxonNamespace()
         for node in self.postorder_node_iter():
             if node.taxon is not None:
@@ -4733,8 +2294,13 @@ class Tree(
         Remaps node taxon objects
         """
         deprecate.dendropy_deprecation_warning(
-                message="Deprecated since DendroPy 4: 'reindex_subcomponent_taxa()' will no longer be supported in future releases; use 'reconstruct_taxon_namespace()' instead",
-                stacklevel=3)
+            message=(
+                "Deprecated since DendroPy 4: 'reindex_subcomponent_taxa()' will no"
+                " longer be supported in future releases; use"
+                " 'reconstruct_taxon_namespace()' instead"
+            ),
+            stacklevel=3,
+        )
         for node in self.postorder_node_iter():
             t = node.taxon
             if t:
@@ -4770,41 +2336,46 @@ class Tree(
             rng = GLOBAL_RNG
         if len(self.taxon_namespace) == 0:
             for i, nd in enumerate(self.leaf_nodes()):
-                nd.taxon = self.taxon_namespace.require_taxon(label=("T%d" % (i+1)))
+                nd.taxon = self.taxon_namespace.require_taxon(label="T%d" % (i + 1))
         else:
             taxa = [t for t in self.taxon_namespace]
             for i, nd in enumerate(self.leaf_nodes()):
                 if len(taxa) > 0:
-                    nd.taxon = taxa.pop(rng.randint(0, len(taxa)-1))
+                    nd.taxon = taxa.pop(rng.randint(0, len(taxa) - 1))
                 else:
                     if not create_required_taxa:
-                        raise ValueError("TaxonNamespace has %d taxa, but tree has %d tips" % (len(self.taxon_namespace), len(self.leaf_nodes())))
-                    label = "T%d" % (i+1)
+                        raise ValueError(
+                            "TaxonNamespace has %d taxa, but tree has %d tips"
+                            % (len(self.taxon_namespace), len(self.leaf_nodes()))
+                        )
+                    label = "T%d" % (i + 1)
                     k = 0
                     while self.taxon_namespace.has_taxon(label=label):
-                        label = "T%d" % (i+1+k)
+                        label = "T%d" % (i + 1 + k)
                         k += 1
                     nd.taxon = self.taxon_namespace.require_taxon(label=label)
 
-    ###########################################################################
-    ### Structure
-
     def _get_is_rootedness_undefined(self):
         return self._is_rooted is None
+
     is_rootedness_undefined = property(_get_is_rootedness_undefined)
     # legacy:
     rooting_state_is_undefined = property(_get_is_rootedness_undefined)
 
     def _get_is_rooted(self):
         return None if self._is_rooted is None else self._is_rooted
+
     def _set_is_rooted(self, val):
         self._is_rooted = val
+
     is_rooted = property(_get_is_rooted, _set_is_rooted)
 
     def _get_is_unrooted(self):
         return None if self._is_rooted is None else (not self._is_rooted)
+
     def _set_is_unrooted(self, val):
         self._is_rooted = not val
+
     is_unrooted = property(_get_is_unrooted, _set_is_unrooted)
 
     def collapse_basal_bifurcation(self, set_as_unrooted_tree=True):
@@ -4835,20 +2406,24 @@ class Tree(
 
     def _get_seed_node(self):
         return self._seed_node
+
     def _set_seed_node(self, node):
         self._seed_node = node
         if self._seed_node is not None:
             self._seed_node.parent_node = None
+
     seed_node = property(_get_seed_node, _set_seed_node)
 
     def deroot(self):
         self.collapse_basal_bifurcation(set_as_unrooted_tree=True)
 
-    def reseed_at(self,
-            new_seed_node,
-            update_bipartitions=False,
-            collapse_unrooted_basal_bifurcation=True,
-            suppress_unifurcations=True):
+    def reseed_at(
+        self,
+        new_seed_node,
+        update_bipartitions=False,
+        collapse_unrooted_basal_bifurcation=True,
+        suppress_unifurcations=True,
+    ):
         """
         Reseeds the tree at a different (existing) node.
 
@@ -4929,19 +2504,24 @@ class Tree(
 
         if update_bipartitions:
             self.encode_bipartitions(
-                    suppress_unifurcations=suppress_unifurcations,
-                    collapse_unrooted_basal_bifurcation=collapse_unrooted_basal_bifurcation)
+                suppress_unifurcations=suppress_unifurcations,
+                collapse_unrooted_basal_bifurcation=collapse_unrooted_basal_bifurcation,
+            )
         else:
-            if (collapse_unrooted_basal_bifurcation
-                    and not self._is_rooted
-                    and len(self.seed_node._child_nodes) == 2):
+            if (
+                collapse_unrooted_basal_bifurcation
+                and not self._is_rooted
+                and len(self.seed_node._child_nodes) == 2
+            ):
                 self.collapse_basal_bifurcation()
             if suppress_unifurcations:
                 self.suppress_unifurcations()
 
         return self.seed_node
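
# Usage sketch for reseed_at(): re-seed the tree at an existing internal node
# and refresh the bipartition encoding (toy tree is illustrative).
import dendropy

tree = dendropy.Tree.get(data="((A:1,B:1):1,(C:1,D:1):1);", schema="newick")
new_seed = tree.mrca(taxon_labels=["C", "D"])
tree.reseed_at(new_seed, update_bipartitions=True)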
 
-    def to_outgroup_position(self, outgroup_node, update_bipartitions=False, suppress_unifurcations=True):
+    def to_outgroup_position(
+        self, outgroup_node, update_bipartitions=False, suppress_unifurcations=True
+    ):
         """Reroots the tree at the parent of ``outgroup_node`` and makes ``outgroup_node`` the first child
         of the new root.  This is just a convenience function to make it easy
         to place a clade as the first child under the root.
@@ -4955,14 +2535,24 @@ class Tree(
         """
         p = outgroup_node._parent_node
         assert p is not None
-        self.reseed_at(p, update_bipartitions=update_bipartitions, suppress_unifurcations=suppress_unifurcations)
+        self.reseed_at(
+            p,
+            update_bipartitions=update_bipartitions,
+            suppress_unifurcations=suppress_unifurcations,
+        )
         p.remove_child(outgroup_node)
         _ognlen = outgroup_node.edge.length
         p.insert_child(0, outgroup_node)
         assert outgroup_node.edge.length == _ognlen
         return self.seed_node
 
-    def reroot_at_node(self, new_root_node, update_bipartitions=False, suppress_unifurcations=True, collapse_unrooted_basal_bifurcation=True):
+    def reroot_at_node(
+        self,
+        new_root_node,
+        update_bipartitions=False,
+        suppress_unifurcations=True,
+        collapse_unrooted_basal_bifurcation=True,
+    ):
         """
         Takes an internal node, ``new_root_node``, that must already be in the tree, and
         roots the tree at that node.
@@ -4976,22 +2566,28 @@ class Tree(
         ``suppress_unifurcations`` is False, then it will be
         removed from the tree.
         """
-        self.reseed_at(new_seed_node=new_root_node,
-                update_bipartitions=False,
-                suppress_unifurcations=suppress_unifurcations,
-                collapse_unrooted_basal_bifurcation=False,
-                )
+        self.reseed_at(
+            new_seed_node=new_root_node,
+            update_bipartitions=False,
+            suppress_unifurcations=suppress_unifurcations,
+            collapse_unrooted_basal_bifurcation=False,
+        )
         self.is_rooted = True
         if update_bipartitions:
-            self.update_bipartitions(suppress_unifurcations=suppress_unifurcations, collapse_unrooted_basal_bifurcation=collapse_unrooted_basal_bifurcation)
+            self.update_bipartitions(
+                suppress_unifurcations=suppress_unifurcations,
+                collapse_unrooted_basal_bifurcation=collapse_unrooted_basal_bifurcation,
+            )
         return self.seed_node
 
-    def reroot_at_edge(self,
-            edge,
-            length1=None,
-            length2=None,
-            update_bipartitions=False,
-            suppress_unifurcations=True):
+    def reroot_at_edge(
+        self,
+        edge,
+        length1=None,
+        length2=None,
+        update_bipartitions=False,
+        suppress_unifurcations=True,
+    ):
         """
         Takes an internal edge, ``edge``, adds a new node to it, and then roots
         the tree on the new node.
@@ -5012,12 +2608,19 @@ class Tree(
         # new_seed_node.add_child(old_head, edge_length=length2)
         new_seed_node.add_child(old_head)
         old_head.edge.length = length2
-        self.reroot_at_node(new_seed_node,
-                update_bipartitions=update_bipartitions,
-                suppress_unifurcations=suppress_unifurcations)
+        self.reroot_at_node(
+            new_seed_node,
+            update_bipartitions=update_bipartitions,
+            suppress_unifurcations=suppress_unifurcations,
+        )
         return self.seed_node
 
-    def reroot_at_midpoint(self, update_bipartitions=False, suppress_unifurcations=True, collapse_unrooted_basal_bifurcation=True):
+    def reroot_at_midpoint(
+        self,
+        update_bipartitions=False,
+        suppress_unifurcations=True,
+        collapse_unrooted_basal_bifurcation=True,
+    ):
         """
         Reroots the tree at the mid-point of the longest distance between
         two taxa in the tree.
@@ -5030,6 +2633,7 @@ class Tree(
         removed from the tree.
         """
         from dendropy.calculate.phylogeneticdistance import PhylogeneticDistanceMatrix
+
         pdm = PhylogeneticDistanceMatrix.from_tree(self)
 
         ## ugly, ugly, ugly code to find two nodes that span the midpoint
@@ -5040,11 +2644,14 @@ class Tree(
             for tax in (maxtax1, maxtax2):
                 if nd.taxon is tax:
                     spanning_nodes[found] = nd
-                    found +=1
+                    found += 1
                     break
             if found == 2:
                 break
-        if spanning_nodes[0].distance_from_root() < spanning_nodes[1].distance_from_root():
+        if (
+            spanning_nodes[0].distance_from_root()
+            < spanning_nodes[1].distance_from_root()
+        ):
             n1 = spanning_nodes[1]
             n2 = spanning_nodes[0]
         else:
@@ -5053,11 +2660,11 @@ class Tree(
 
         plen = float(pdm.patristic_distance(maxtax1, maxtax2)) / 2
         mrca_node = pdm.mrca(n1.taxon, n2.taxon)
-        #assert mrca_node is self.mrca(taxa=[n1.taxon, n2.taxon])
-        #mrca_node = self.mrca(taxa=[n1.taxon, n2.taxon])
+        # assert mrca_node is self.mrca(taxa=[n1.taxon, n2.taxon])
+        # mrca_node = self.mrca(taxa=[n1.taxon, n2.taxon])
         cur_node = n1
 
-        break_on_node = None # populated *iff* midpoint is exactly at an existing node
+        break_on_node = None  # populated *iff* midpoint is exactly at an existing node
         target_edge = None
         head_node_edge_len = None
 
@@ -5065,7 +2672,7 @@ class Tree(
         while cur_node is not mrca_node:
             if cur_node.edge.length > plen:
                 target_edge = cur_node.edge
-                head_node_edge_len = plen #cur_node.edge.length - plen
+                head_node_edge_len = plen  # cur_node.edge.length - plen
                 plen = 0
                 break
             elif cur_node.edge.length < plen:
@@ -5078,14 +2685,18 @@ class Tree(
         assert break_on_node is not None or target_edge is not None
 
         if break_on_node:
-            self.reseed_at(break_on_node, update_bipartitions=False, suppress_unifurcations=suppress_unifurcations)
+            self.reseed_at(
+                break_on_node,
+                update_bipartitions=False,
+                suppress_unifurcations=suppress_unifurcations,
+            )
             new_seed_node = break_on_node
         else:
             tail_node_edge_len = target_edge.length - head_node_edge_len
             old_head_node = target_edge.head_node
             old_tail_node = target_edge.tail_node
             old_tail_node.remove_child(old_head_node)
-            new_seed_node = Node()
+            new_seed_node = _node.Node()
             # new_seed_node.add_child(old_head_node, edge_length=head_node_edge_len)
             new_seed_node.add_child(old_head_node)
             old_head_node.edge.length = head_node_edge_len
@@ -5093,16 +2704,17 @@ class Tree(
             old_tail_node.add_child(new_seed_node)
             new_seed_node.edge.length = tail_node_edge_len
             self.reseed_at(
-                    new_seed_node, update_bipartitions=False,
-                    suppress_unifurcations=suppress_unifurcations,
-                    collapse_unrooted_basal_bifurcation=False,
-                    )
+                new_seed_node,
+                update_bipartitions=False,
+                suppress_unifurcations=suppress_unifurcations,
+                collapse_unrooted_basal_bifurcation=False,
+            )
         self.is_rooted = True
         if update_bipartitions:
             self.update_bipartitions(
-                    suppress_unifurcations=False,
-                    collapse_unrooted_basal_bifurcation=collapse_unrooted_basal_bifurcation,
-                    )
+                suppress_unifurcations=False,
+                collapse_unrooted_basal_bifurcation=collapse_unrooted_basal_bifurcation,
+            )
         return self.seed_node
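
# Usage sketch for reroot_at_midpoint(): midpoint-root a tree with unequal
# tip-to-tip path lengths (toy tree is illustrative).
import dendropy

tree = dendropy.Tree.get(data="((A:4,B:1):1,(C:1,D:1):1);", schema="newick")
tree.reroot_at_midpoint(update_bipartitions=True)
print(tree.as_string(schema="newick"))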
 
     def suppress_unifurcations(self, update_bipartitions=False):
@@ -5147,32 +2759,33 @@ class Tree(
                     self.seed_node._parent_node = None
         if bipartitions_to_delete:
             old_encoding = self.bipartition_encoding
-            self.bipartition_encoding = [b for b in old_encoding if id(b) not in bipartitions_to_delete]
+            self.bipartition_encoding = [
+                b for b in old_encoding if id(b) not in bipartitions_to_delete
+            ]
         return remapped_nodes
 
     def delete_outdegree_one_nodes(self):
         deprecate.dendropy_deprecation_warning(
-                message="Deprecated since DendroPy 4: 'delete_outdegree_one_nodes()' has been replaced by 'suppress_unifurcations()'",
-                stacklevel=3)
+            message=(
+                "Deprecated since DendroPy 4: 'delete_outdegree_one_nodes()' has been"
+                " replaced by 'suppress_unifurcations()'"
+            ),
+            stacklevel=3,
+        )
         return self.suppress_unifurcations()
 
-    def collapse_unweighted_edges(self,
-            threshold=0.0000001,
-            update_bipartitions=False):
+    def collapse_unweighted_edges(self, threshold=0.0000001, update_bipartitions=False):
         """
         Collapse all *internal* edges with edge lengths less than or equal to
         ``threshold`` (or with |None| for edge length).
         """
         for e in self.postorder_edge_iter():
             if e.length is None or (e.length <= threshold) and e.is_internal():
-               e.collapse()
+                e.collapse()
         if update_bipartitions:
             self.update_bipartitions()
 
-    def resolve_polytomies(self,
-            limit=2,
-            update_bipartitions=False,
-            rng=None):
+    def resolve_polytomies(self, limit=2, update_bipartitions=False, rng=None):
         """
         Arbitrarily resolve polytomies using 0-length edges.
 
@@ -5197,7 +2810,9 @@ class Tree(
                 polytomies.append(node)
         for node in polytomies:
             if rng:
-                to_attach = rng.sample(node._child_nodes, len(node._child_nodes)-limit)
+                to_attach = rng.sample(
+                    node._child_nodes, len(node._child_nodes) - limit
+                )
                 for child in to_attach:
                     node.remove_child(child)
                 attachment_points = list(node._child_nodes)
@@ -5205,7 +2820,7 @@ class Tree(
                 while len(to_attach) > 0:
                     next_child = to_attach.pop()
                     next_sib = rng.choice(attachment_points)
-                    next_attachment = Node()
+                    next_attachment = _node.Node()
                     if next_sib is node:
                         cc = list(node._child_nodes)
                         node.add_child(next_attachment)
@@ -5224,7 +2839,7 @@ class Tree(
                     attachment_points.append(next_child)
             else:
                 while len(node._child_nodes) > limit:
-                    nn1 = Node()
+                    nn1 = _node.Node()
                     nn1.edge.length = 0.0
                     c1 = node._child_nodes[0]
                     c2 = node._child_nodes[1]
@@ -5236,17 +2851,16 @@ class Tree(
         if update_bipartitions:
             self.update_bipartitions()
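
# Usage sketch for resolve_polytomies(): break a polytomy into bifurcations
# joined by zero-length edges; pass an rng for a randomized resolution order
# (toy tree and seed are illustrative).
import random
import dendropy

tree = dendropy.Tree.get(data="(A:1,B:1,C:1,D:1);", schema="newick")
tree.resolve_polytomies(rng=random.Random(1))
print(max(len(nd.child_nodes()) for nd in tree))  # 2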
 
-    def prune_subtree(self,
-            node,
-            update_bipartitions=False,
-            suppress_unifurcations=True):
+    def prune_subtree(
+        self, node, update_bipartitions=False, suppress_unifurcations=True
+    ):
         """
         Removes the subtree starting at ``node`` from the tree.
         """
         if not node:
             raise ValueError("Tried to remove an non-existing or null node")
         if node._parent_node is None:
-            raise TypeError('Node has no parent and is implicit root: cannot be pruned')
+            raise TypeError("Node has no parent and is implicit root: cannot be pruned")
         node._parent_node.remove_child(node)
         if suppress_unifurcations:
             self.suppress_unifurcations()
@@ -5254,11 +2868,12 @@ class Tree(
             self.update_bipartitions()
 
     def filter_leaf_nodes(
-            self,
-            filter_fn,
-            recursive=True,
-            update_bipartitions=False,
-            suppress_unifurcations=True):
+        self,
+        filter_fn,
+        recursive=True,
+        update_bipartitions=False,
+        suppress_unifurcations=True,
+    ):
         """
         Removes all leaves for which ``filter_fn`` returns |False|. If recursive
         is |True|, then the process is repeated until all leaf nodes in the tree will
@@ -5294,7 +2909,9 @@ class Tree(
             nodes_to_remove = [nd for nd in self.leaf_node_iter() if not filter_fn(nd)]
             for nd in nodes_to_remove:
                 if nd.edge.tail_node is None:
-                    raise error.SeedNodeDeletionException("Attempting to remove seed node or node without parent")
+                    raise error.SeedNodeDeletionException(
+                        "Attempting to remove seed node or node without parent"
+                    )
                 nd.edge.tail_node.remove_child(nd)
             if nodes_to_remove:
                 nodes_removed += nodes_to_remove
@@ -5307,10 +2924,9 @@ class Tree(
             self.update_bipartitions()
         return nodes_removed
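
# Usage sketch for filter_leaf_nodes(): drop leaves failing a predicate and
# collect the removed nodes (toy tree and label "D" are illustrative; the
# predicate guards against taxon-less leaves created during recursion).
import dendropy

tree = dendropy.Tree.get(data="((A:1,B:1):1,(C:1,D:1):1);", schema="newick")
removed = tree.filter_leaf_nodes(
    filter_fn=lambda nd: nd.taxon is not None and nd.taxon.label != "D",
)
print(len(removed))  # 1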
 
-    def prune_leaves_without_taxa(self,
-            recursive=True,
-            update_bipartitions=False,
-            suppress_unifurcations=True):
+    def prune_leaves_without_taxa(
+        self, recursive=True, update_bipartitions=False, suppress_unifurcations=True
+    ):
         """
         Removes all terminal nodes that have their ``taxon`` attribute set to
         |None|.
@@ -5332,21 +2948,31 @@ class Tree(
             self.update_bipartitions()
         return nodes_removed
 
-    def prune_nodes(self, nodes, prune_leaves_without_taxa=False, update_bipartitions=False, suppress_unifurcations=True):
+    def prune_nodes(
+        self,
+        nodes,
+        prune_leaves_without_taxa=False,
+        update_bipartitions=False,
+        suppress_unifurcations=True,
+    ):
         for nd in nodes:
             if nd.edge.tail_node is None:
                 raise Exception("Attempting to remove root node or node without parent")
             nd.edge.tail_node.remove_child(nd)
         if prune_leaves_without_taxa:
-            self.prune_leaves_without_taxa(update_bipartitions=update_bipartitions,
-                    suppress_unifurcations=suppress_unifurcations)
+            self.prune_leaves_without_taxa(
+                update_bipartitions=update_bipartitions,
+                suppress_unifurcations=suppress_unifurcations,
+            )
 
-    def prune_taxa(self,
-            taxa,
-            update_bipartitions=False,
-            suppress_unifurcations=True,
-            is_apply_filter_to_leaf_nodes=True,
-            is_apply_filter_to_internal_nodes=False):
+    def prune_taxa(
+        self,
+        taxa,
+        update_bipartitions=False,
+        suppress_unifurcations=True,
+        is_apply_filter_to_leaf_nodes=True,
+        is_apply_filter_to_internal_nodes=False,
+    ):
         """
         Removes terminal nodes associated with Taxon objects given by the container
         ``taxa`` (which can be any iterable, including a TaxonNamespace object) from ``self``.
@@ -5355,57 +2981,62 @@ class Tree(
         nodes_to_remove = []
         for nd in self.postorder_node_iter():
             if (
-                ((is_apply_filter_to_internal_nodes and nd._child_nodes)
-                or (is_apply_filter_to_leaf_nodes and not nd._child_nodes))
-                and (nd.taxon and nd.taxon in taxa)
-                ):
-                    nd.edge.tail_node.remove_child(nd)
-        self.prune_leaves_without_taxa(update_bipartitions=update_bipartitions,
-                suppress_unifurcations=suppress_unifurcations)
-
-    def prune_taxa_with_labels(self,
-            labels,
-            update_bipartitions=False,
-            suppress_unifurcations=True,
-            is_apply_filter_to_leaf_nodes=True,
-            is_apply_filter_to_internal_nodes=False):
+                (is_apply_filter_to_internal_nodes and nd._child_nodes)
+                or (is_apply_filter_to_leaf_nodes and not nd._child_nodes)
+            ) and (nd.taxon and nd.taxon in taxa):
+                nd.edge.tail_node.remove_child(nd)
+        self.prune_leaves_without_taxa(
+            update_bipartitions=update_bipartitions,
+            suppress_unifurcations=suppress_unifurcations,
+        )
+
+    def prune_taxa_with_labels(
+        self,
+        labels,
+        update_bipartitions=False,
+        suppress_unifurcations=True,
+        is_apply_filter_to_leaf_nodes=True,
+        is_apply_filter_to_internal_nodes=False,
+    ):
         """
         Removes terminal nodes that are associated with Taxon objects with
         labels given by ``labels``.
         """
         taxa = self.taxon_namespace.get_taxa(labels=labels)
-        self.prune_taxa(taxa=taxa,
-                update_bipartitions=update_bipartitions,
-                suppress_unifurcations=suppress_unifurcations,
-                is_apply_filter_to_leaf_nodes=is_apply_filter_to_leaf_nodes,
-                is_apply_filter_to_internal_nodes=is_apply_filter_to_internal_nodes)
+        self.prune_taxa(
+            taxa=taxa,
+            update_bipartitions=update_bipartitions,
+            suppress_unifurcations=suppress_unifurcations,
+            is_apply_filter_to_leaf_nodes=is_apply_filter_to_leaf_nodes,
+            is_apply_filter_to_internal_nodes=is_apply_filter_to_internal_nodes,
+        )
 
-    def retain_taxa(self,
-            taxa,
-            update_bipartitions=False,
-            suppress_unifurcations=True):
+    def retain_taxa(self, taxa, update_bipartitions=False, suppress_unifurcations=True):
         """
         Removes terminal nodes that are not associated with any
         of the Taxon objects given by ``taxa`` (which can be any iterable, including a
         TaxonNamespace object) from ``self``.
         """
         to_prune = [t for t in self.taxon_namespace if t not in taxa]
-        self.prune_taxa(to_prune,
-                update_bipartitions=update_bipartitions,
-                suppress_unifurcations=suppress_unifurcations)
+        self.prune_taxa(
+            to_prune,
+            update_bipartitions=update_bipartitions,
+            suppress_unifurcations=suppress_unifurcations,
+        )
 
-    def retain_taxa_with_labels(self,
-            labels,
-            update_bipartitions=False,
-            suppress_unifurcations=True):
+    def retain_taxa_with_labels(
+        self, labels, update_bipartitions=False, suppress_unifurcations=True
+    ):
         """
         Removes terminal nodes that are not associated with Taxon objects with
         labels given by ``labels``.
         """
         taxa = self.taxon_namespace.get_taxa(labels=labels)
-        self.retain_taxa(taxa=taxa,
-                update_bipartitions=update_bipartitions,
-                suppress_unifurcations=suppress_unifurcations)
+        self.retain_taxa(
+            taxa=taxa,
+            update_bipartitions=update_bipartitions,
+            suppress_unifurcations=suppress_unifurcations,
+        )
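
# Usage sketch: prune_taxa_with_labels() and retain_taxa_with_labels() are
# complementary label-based pruning helpers (toy tree and labels are illustrative).
import dendropy

tree = dendropy.Tree.get(data="((A:1,B:1):1,(C:1,D:1):1);", schema="newick")
tree.prune_taxa_with_labels(["A"])            # drop the leaf for taxon "A"
tree.retain_taxa_with_labels(["C", "D"])      # then keep only "C" and "D"
print([leaf.taxon.label for leaf in tree.leaf_node_iter()])  # ['C', 'D']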
 
     def randomly_reorient(self, rng=None, update_bipartitions=False):
         """
@@ -5414,7 +3045,7 @@ class Tree(
         and ``bipartition_edge_map`` attributes kept valid.
         """
         if rng is None:
-            rng = GLOBAL_RNG # use the global rng by default
+            rng = GLOBAL_RNG  # use the global rng by default
         nd = rng.sample(self.nodes(), 1)[0]
         if nd.is_leaf():
             self.to_outgroup_position(nd, update_bipartitions=update_bipartitions)
@@ -5425,7 +3056,7 @@ class Tree(
     def randomly_rotate(self, rng=None):
         "Randomly rotates the branches around all internal nodes in ``self``"
         if rng is None:
-            rng = GLOBAL_RNG # use the global rng by default
+            rng = GLOBAL_RNG  # use the global rng by default
         internal_nodes = self.internal_nodes()
         for nd in internal_nodes:
             c = nd.child_nodes()
@@ -5441,25 +3072,31 @@ class Tree(
         Returns a dictionary mapping the old taxa to their new counterparts.
         """
         if rng is None:
-            rng = GLOBAL_RNG # use the global rng by default
+            rng = GLOBAL_RNG  # use the global rng by default
         if include_internal_nodes:
             nd_iterator = self.preorder_node_iter
         else:
             nd_iterator = self.leaf_node_iter
         current_node_taxon_map = {}
-        node_taxa = set()
+        node_taxa = []
         for nd in nd_iterator():
             if nd.taxon is not None:
+                assert nd.taxon not in current_node_taxon_map
                 current_node_taxon_map[nd] = nd.taxon
-                assert nd.taxon not in node_taxa
-                node_taxa.add(nd.taxon)
+                node_taxa.append(nd.taxon)
         assert len(current_node_taxon_map) == len(node_taxa)
         current_to_shuffled_taxon_map = {}
         for nd in current_node_taxon_map:
-            new_taxon = rng.sample(node_taxa, 1)[0]
+            # swap a random element to end of node_taxa...
+            random_index = rng.randrange(len(node_taxa))
+            node_taxa[-1], node_taxa[random_index] = (
+                node_taxa[random_index], node_taxa[-1]
+            )
+            # ... then pop it off the end and use it
+            new_taxon = node_taxa.pop()
             current_to_shuffled_taxon_map[nd.taxon] = new_taxon
             nd.taxon = new_taxon
-            node_taxa.remove(new_taxon)
+
         assert len(node_taxa) == 0, node_taxa
         assert len(current_to_shuffled_taxon_map) == len(current_node_taxon_map)
         return current_to_shuffled_taxon_map
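
# Usage sketch for the taxon-shuffling method above (assumed to be
# Tree.shuffle_taxa() as in current DendroPy): randomly permute which taxon
# is attached to which leaf, in place (toy tree and seed are illustrative).
import random
import dendropy

tree = dendropy.Tree.get(data="((A:1,B:1):1,(C:1,D:1):1);", schema="newick")
old_to_new = tree.shuffle_taxa(rng=random.Random(1))
print({t1.label: t2.label for t1, t2 in old_to_new.items()})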
@@ -5480,7 +3117,9 @@ class Tree(
                     total += node_desc_counts[child]
                 total += len(nd._child_nodes)
                 node_desc_counts[nd] = total
-                nd._child_nodes.sort(key=lambda n: node_desc_counts[n], reverse=not ascending)
+                nd._child_nodes.sort(
+                    key=lambda n: node_desc_counts[n], reverse=not ascending
+                )
 
     def truncate_from_root(self, distance_from_root):
         self.calc_node_root_distances()
@@ -5493,7 +3132,10 @@ class Tree(
             else:
                 if nd.root_distance == distance_from_root:
                     new_terminals.append(nd)
-                elif nd.root_distance > distance_from_root and nd._parent_node.root_distance < distance_from_root:
+                elif (
+                    nd.root_distance > distance_from_root
+                    and nd._parent_node.root_distance < distance_from_root
+                ):
                     # cut above current node
                     nd.edge.length = distance_from_root - nd._parent_node.root_distance
                     nd.root_distance = distance_from_root
@@ -5502,19 +3144,17 @@ class Tree(
             for ch in nd.child_nodes():
                 nd.remove_child(ch)
 
-    ###########################################################################
-    ### Ages, depths, branch lengths etc. (mutation)
-
     def scale_edges(self, edge_len_multiplier):
         """Multiplies every edge length in ``self`` by ``edge_len_multiplier``"""
         for e in self.postorder_edge_iter():
             if e.length is not None:
                 e.length *= edge_len_multiplier
 
-    def set_edge_lengths_from_node_ages(self,
-            minimum_edge_length=0.0,
-            error_on_negative_edge_lengths=False,
-            ):
+    def set_edge_lengths_from_node_ages(
+        self,
+        minimum_edge_length=0.0,
+        error_on_negative_edge_lengths=False,
+    ):
         """
         Sets the edge lengths of the tree so that the path lengths from the
         tips equal the value of the ``age`` attribute of the nodes.
@@ -5530,20 +3170,20 @@ class Tree(
         """
         for nd in self.preorder_node_iter():
             if nd._parent_node is not None:
-                #if nd._parent_node.age < nd.age:
+                # if nd._parent_node.age < nd.age:
                 #    nd.edge.length = 0.0
-                #else:
+                # else:
                 #    nd.edge.length = nd._parent_node.age - nd.age
                 edge_length = nd._parent_node.age - nd.age
-                if minimum_edge_length is not None and edge_length < minimum_edge_length:
+                if (
+                    minimum_edge_length is not None
+                    and edge_length < minimum_edge_length
+                ):
                     edge_length = minimum_edge_length
                 if error_on_negative_edge_lengths and edge_length < 0.0:
                     raise ValueError("Negative edge length: {}".format(edge_length))
                 nd.edge.length = edge_length
 
-    ###########################################################################
-    ### Ages, depths, branch lengths etc. (calculation)
-
     def phylogenetic_distance_matrix(self, *args, **kwargs):
         """
         Returns a |PhylogeneticDistanceMatrix| instance based
@@ -5556,22 +3196,89 @@ class Tree(
             tree in its current state.
         """
         from dendropy.calculate.phylogeneticdistance import PhylogeneticDistanceMatrix
+
         return PhylogeneticDistanceMatrix.from_tree(tree=self, *args, **kwargs)
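
# Usage sketch for phylogenetic_distance_matrix(): patristic distance between
# two leaf taxa (toy tree is illustrative; A-to-C distance is 4.0 here).
import dendropy

tree = dendropy.Tree.get(data="((A:1,B:1):1,(C:1,D:1):1);", schema="newick")
pdm = tree.phylogenetic_distance_matrix()
taxon_a = tree.taxon_namespace.get_taxon("A")
taxon_c = tree.taxon_namespace.get_taxon("C")
print(pdm.patristic_distance(taxon_a, taxon_c))  # 4.0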
 
     def node_distance_matrix(self):
         from dendropy.calculate.phylogeneticdistance import NodeDistanceMatrix
+
         return NodeDistanceMatrix.from_tree(tree=self)
 
-    def calc_node_ages(self,
-            ultrametricity_precision=constants.DEFAULT_ULTRAMETRICITY_PRECISION,
-            is_force_max_age=False,
-            is_force_min_age=False,
-            set_node_age_fn=None,
-            is_return_internal_node_ages_only=False):
+    def resolve_node_depths(
+        self,
+        node_callback_fn=None,
+        node_edge_length_fn=None,
+        attr_name="depth",
+    ):
+        """
+        Adds an attribute given by ``attr_name`` to each node, with the value equal to
+        the sum of edge lengths from the root to that node.
+        """
+        cache = {}
+        if node_edge_length_fn is None:
+            node_edge_length_fn = lambda nd: nd.edge.length
+        for node in self.preorder_node_iter():
+            if node._parent_node is None:
+                assert node is self.seed_node
+                v = 0.0
+            else:
+                v = node_edge_length_fn(node) + cache[node._parent_node]
+            cache[node] = v
+            if attr_name:
+                setattr(node, attr_name, v)
+            if node_callback_fn:
+                node_callback_fn(node)
+        return cache
+
+    def resolve_node_ages(
+        self,
+        node_callback_fn=None,
+        node_edge_length_fn=None,
+        attr_name="age",
+    ):
+        """
+        Adds an attribute called "age" to each node, with the value equal to
+        the time elapsed since the present (i.e., since the most distant tip).
+
+        This is calculated by:
+        (1) setting the age of the root node to the maximum sum of edge lengths
+            from the root to any tip, and
+        (2) setting the age of every other node to the root age minus the sum
+            of edge lengths from the root to that node.
+
+        Unlike the (legacy) `calc_node_ages()`, there is no ultrametricity
+        requirement or check.
+
+        """
+        depth_cache = self.resolve_node_depths(
+            node_edge_length_fn=node_edge_length_fn,
+            attr_name=None,
+        )
+        max_depth = max(depth_cache.values())
+        cache = {}
+        for node in self.preorder_node_iter():
+            v = max_depth - depth_cache[node]
+            cache[node] = v
+            if attr_name:
+                setattr(node, attr_name, v)
+            # if node is self.seed_node:
+            #     assert abs(getattr(node, attr_name) - max_root_distance[0]) <= 1e-8
+            if node_callback_fn:
+                node_callback_fn(node)
+        return cache
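
# Usage sketch for the new resolve_node_ages(): ages are measured back from
# the most distant tip and no ultrametricity is required (toy tree is
# illustrative; its deepest tip, B, sits at depth 3.0).
import dendropy

tree = dendropy.Tree.get(data="((A:1,B:2):1,(C:1,D:1):1);", schema="newick")
ages = tree.resolve_node_ages()     # dict mapping node -> age
print(ages[tree.seed_node])         # 3.0 (root age == depth of deepest tip)
print(tree.seed_node.age)           # same value, stored via attr_name="age"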
+
+    def calc_node_ages(
+        self,
+        ultrametricity_precision=constants.DEFAULT_ULTRAMETRICITY_PRECISION,
+        is_force_max_age=False,
+        is_force_min_age=False,
+        set_node_age_fn=None,
+        is_return_internal_node_ages_only=False,
+    ):
         """
         Adds an attribute called "age" to each node, with the value equal to
         the sum of edge lengths from the node to the tips.
 
+        NOTE: Consider using the newer and more flexible `resolve_node_ages()`
+        instead of this.
+
         Parameters
         ----------
         ultrametricity_precision : numeric or bool or None
@@ -5615,7 +3322,9 @@ class Tree(
         """
         ages = []
         if is_force_max_age and is_force_min_age:
-            raise ValueError("Cannot specify both 'is_force_max_age' and 'is_force_min_age'")
+            raise ValueError(
+                "Cannot specify both 'is_force_max_age' and 'is_force_min_age'"
+            )
         for node in self.postorder_node_iter():
             child_nodes = node.child_nodes()
             if set_node_age_fn is not None:
@@ -5624,17 +3333,24 @@ class Tree(
                 if node.age is not None:
                     continue
             if len(child_nodes) == 0:
-               node.age = 0.0
-               if not is_return_internal_node_ages_only:
-                   ages.append(node.age)
+                node.age = 0.0
+                if not is_return_internal_node_ages_only:
+                    ages.append(node.age)
             else:
                 if is_force_max_age:
-                    age_to_set = max([ (child.age + child.edge.length) for child in child_nodes ])
+                    age_to_set = max(
+                        [(child.age + child.edge.length) for child in child_nodes]
+                    )
                 elif is_force_min_age:
-                    age_to_set = min([ (child.age + child.edge.length) for child in child_nodes ])
+                    age_to_set = min(
+                        [(child.age + child.edge.length) for child in child_nodes]
+                    )
                 else:
                     first_child = child_nodes[0]
-                    if first_child.edge.length is not None and first_child.age is not None:
+                    if (
+                        first_child.edge.length is not None
+                        and first_child.age is not None
+                    ):
                         age_to_set = first_child.age + first_child.edge.length
                     elif first_child.edge.length is None:
                         first_child.edge.length = 0.0
@@ -5645,7 +3361,13 @@ class Tree(
                     else:
                         age_to_set = 0.0
                 node.age = age_to_set
-                if not (is_force_max_age or is_force_min_age or ultrametricity_precision is None or ultrametricity_precision is False or ultrametricity_precision < 0):
+                if not (
+                    is_force_max_age
+                    or is_force_min_age
+                    or ultrametricity_precision is None
+                    or ultrametricity_precision is False
+                    or ultrametricity_precision < 0
+                ):
                     for nnd in child_nodes[1:]:
                         try:
                             ocnd = nnd.age + nnd.edge.length
@@ -5653,7 +3375,7 @@ class Tree(
                             nnd.edge.length = 0.0
                             ocnd = nnd.age
                         d = abs(node.age - ocnd)
-                        if  d > ultrametricity_precision:
+                        if d > ultrametricity_precision:
                             # try:
                             #     self.encode_bipartitions()
                             #     node_id = nnd.bipartition.split_as_newick_string(taxon_namespace=self.taxon_namespace)
@@ -5663,26 +3385,31 @@ class Tree(
                             subtree = node._as_newick_string()
                             desc = []
                             for desc_nd in child_nodes:
-                                desc.append("-   {}: has age of {} and edge length of {}, resulting in parent node age of {}".format(
-                                    desc_nd,
-                                    desc_nd.age,
-                                    desc_nd.edge.length,
-                                    desc_nd.edge.length + desc_nd.age))
+                                desc.append(
+                                    "-   {}: has age of {} and edge length of {},"
+                                    " resulting in parent node age of {}".format(
+                                        desc_nd,
+                                        desc_nd.age,
+                                        desc_nd.edge.length,
+                                        desc_nd.edge.length + desc_nd.age,
+                                    )
+                                )
                             desc = "\n".join(desc)
                             raise error.UltrametricityError(
-                                    ("Tree is not ultrametric within threshold of {threshold}: {deviance}.\n"
-                                     "Encountered in subtree of node {node} (edge length of {length}):\n"
-                                     "\n    {subtree}\n\n"
-                                     "Age of children:\n"
-                                     "{desc}"
-                                     ).format(
-                                threshold=ultrametricity_precision,
-                                deviance=d,
-                                node=node_id,
-                                length=node.edge.length,
-                                desc=desc,
-                                subtree=subtree,
-                                ))
+                                (
+                                    "Tree is not ultrametric within threshold of"
+                                    " {threshold}: {deviance}.\nEncountered in subtree"
+                                    " of node {node} (edge length of {length}):\n\n   "
+                                    " {subtree}\n\nAge of children:\n{desc}"
+                                ).format(
+                                    threshold=ultrametricity_precision,
+                                    deviance=d,
+                                    node=node_id,
+                                    length=node.edge.length,
+                                    desc=desc,
+                                    subtree=subtree,
+                                )
+                            )
                 ages.append(node.age)
         return ages
 
@@ -5699,34 +3426,38 @@ class Tree(
                 node.root_distance = 0.0
             else:
                 node.root_distance = node.edge.length + node._parent_node.root_distance
-            if (not return_leaf_distances_only or node.is_leaf()):
+            if not return_leaf_distances_only or node.is_leaf():
                 dists.append(node.root_distance)
         return dists
 
-    def internal_node_ages(self,
-            ultrametricity_precision=constants.DEFAULT_ULTRAMETRICITY_PRECISION,
-            is_force_max_age=False,
-            is_force_min_age=False,
-            set_node_age_fn=None,
-            ):
+    def internal_node_ages(
+        self,
+        ultrametricity_precision=constants.DEFAULT_ULTRAMETRICITY_PRECISION,
+        is_force_max_age=False,
+        is_force_min_age=False,
+        set_node_age_fn=None,
+    ):
         """
         Returns list of ages of speciation events / coalescence times on tree.
         """
         ages = self.calc_node_ages(
-                ultrametricity_precision=ultrametricity_precision, is_return_internal_node_ages_only=True,
-                is_force_max_age=is_force_max_age,
-                is_force_min_age=is_force_min_age,
-                set_node_age_fn=set_node_age_fn,
-                )
+            ultrametricity_precision=ultrametricity_precision,
+            is_return_internal_node_ages_only=True,
+            is_force_max_age=is_force_max_age,
+            is_force_min_age=is_force_min_age,
+            set_node_age_fn=set_node_age_fn,
+        )
         ages.sort()
         return ages
 
-    def node_ages(self,
-            ultrametricity_precision=constants.DEFAULT_ULTRAMETRICITY_PRECISION,
-            is_force_max_age=False,
-            is_force_min_age=False,
-            set_node_age_fn=None,
-            internal_only=False):
+    def node_ages(
+        self,
+        ultrametricity_precision=constants.DEFAULT_ULTRAMETRICITY_PRECISION,
+        is_force_max_age=False,
+        is_force_min_age=False,
+        set_node_age_fn=None,
+        internal_only=False,
+    ):
         """
         Returns list of ages of all nodes on tree.
         NOTE: Changed from DendroPy3: this function now returns the ages of
@@ -5734,11 +3465,12 @@ class Tree(
         `Tree.internal_node_ages`.
         """
         ages = self.calc_node_ages(
-                ultrametricity_precision=ultrametricity_precision,
-                is_force_max_age=is_force_max_age,
-                is_force_min_age=is_force_min_age,
-                set_node_age_fn=set_node_age_fn,
-                is_return_internal_node_ages_only=internal_only)
+            ultrametricity_precision=ultrametricity_precision,
+            is_force_max_age=is_force_max_age,
+            is_force_min_age=is_force_min_age,
+            set_node_age_fn=set_node_age_fn,
+            is_return_internal_node_ages_only=internal_only,
+        )
         ages.sort()
         return ages
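A minimal usage sketch of the node-age methods above (illustrative only; it uses just the entry points that appear in this hunk — Tree.get(), calc_node_ages(), internal_node_ages(), leaf_node_iter() — on a small, exactly ultrametric tree):

    import dendropy

    # Every leaf sits exactly 2.0 units below the root.
    tree = dendropy.Tree.get(
        data="((A:1.0,B:1.0):1.0,(C:1.5,D:1.5):0.5);",
        schema="newick",
    )
    tree.calc_node_ages()  # adds an 'age' attribute to every node
    print(tree.internal_node_ages())                  # [1.0, 1.5, 2.0]
    print([nd.age for nd in tree.leaf_node_iter()])   # [0.0, 0.0, 0.0, 0.0]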
 
@@ -5796,30 +3528,34 @@ class Tree(
             else:
                 if nd.root_distance == distance_from_root:
                     num_lineages += 1
-                elif nd.root_distance >= distance_from_root and nd._parent_node.root_distance < distance_from_root:
+                elif (
+                    nd.root_distance >= distance_from_root
+                    and nd._parent_node.root_distance < distance_from_root
+                ):
                     num_lineages += 1
         return num_lineages
 
-    ###########################################################################
-    ### Bipartition Management
-
     def _compile_mutable_bipartition_for_edge(self, edge):
         edge.bipartition.compile_split_bitmask(
-                tree_leafset_bitmask=self.seed_node.edge.bipartition._leafset_bitmask,
-                is_mutable=True)
+            tree_leafset_bitmask=self.seed_node.edge.bipartition._leafset_bitmask,
+            is_mutable=True,
+        )
         return edge.bipartition
 
     def _compile_immutable_bipartition_for_edge(self, edge):
         edge.bipartition.compile_split_bitmask(
-                tree_leafset_bitmask=self.seed_node.edge.bipartition._leafset_bitmask,
-                is_mutable=False)
+            tree_leafset_bitmask=self.seed_node.edge.bipartition._leafset_bitmask,
+            is_mutable=False,
+        )
         return edge.bipartition
 
-    def encode_bipartitions(self,
-            suppress_unifurcations=True,
-            collapse_unrooted_basal_bifurcation=True,
-            suppress_storage=False,
-            is_bipartitions_mutable=False):
+    def encode_bipartitions(
+        self,
+        suppress_unifurcations=True,
+        collapse_unrooted_basal_bifurcation=True,
+        suppress_storage=False,
+        is_bipartitions_mutable=False,
+    ):
         """
         Calculates the bipartitions of this tree.
 
@@ -5856,9 +3592,11 @@ class Tree(
         seed_node = self.seed_node
         if not seed_node:
             return
-        if (collapse_unrooted_basal_bifurcation
-                and not self._is_rooted
-                and len(seed_node._child_nodes) == 2):
+        if (
+            collapse_unrooted_basal_bifurcation
+            and not self._is_rooted
+            and len(seed_node._child_nodes) == 2
+        ):
             # We do this because an unrooted tree
             # has no *true* degree-3 internal nodes:
             #
@@ -5903,7 +3641,9 @@ class Tree(
                     tree_edges.append(edge)
                     for child in child_nodes:
                         leafset_bitmask |= child.edge.bipartition._leafset_bitmask
-                edge.bipartition = Bipartition(compile_bipartition=False, is_mutable=True)
+                edge.bipartition = _bipartition.Bipartition(
+                    compile_bipartition=False, is_mutable=True
+                )
                 edge.bipartition._leafset_bitmask = leafset_bitmask
                 edge.bipartition._is_rooted = self._is_rooted
         # Create normalized bitmasks, where the full (self) bipartition mask is *not*
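A minimal sketch of how the bipartition encoding above is typically consumed (illustrative only; encode_bipartitions(), bipartition_encoding, bipartition_edge_map, and leafset_as_bitstring() all appear elsewhere in this file):

    import dendropy

    tree = dendropy.Tree.get(data="((A,B),(C,D));", schema="newick")
    tree.encode_bipartitions()
    for bipartition in tree.bipartition_encoding:
        edge = tree.bipartition_edge_map[bipartition]
        # Each bipartition is stored as a leafset bitmask over the taxon namespace.
        print(bipartition.leafset_as_bitstring(), edge.length)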
@@ -5934,8 +3674,13 @@ class Tree(
         Recalculates bipartition hashes for tree.
         """
         deprecate.dendropy_deprecation_warning(
-                message="Deprecated since DendroPy 4: 'Tree.encode_splits()' will no longer be supported in future releases; use 'Tree.encode_bipartitions()' instead",
-                stacklevel=3)
+            message=(
+                "Deprecated since DendroPy 4: 'Tree.encode_splits()' will no longer be"
+                " supported in future releases; use 'Tree.encode_bipartitions()'"
+                " instead"
+            ),
+            stacklevel=3,
+        )
         return self.encode_bipartitions(*args, **kwargs)
 
     def update_splits(self, *args, **kwargs):
@@ -5943,8 +3688,13 @@ class Tree(
         Recalculates bipartition hashes for tree.
         """
         deprecate.dendropy_deprecation_warning(
-                message="Deprecated since DendroPy 4: 'Tree.encode_splits()' will no longer be supported in future releases; use 'Tree.update_bipartitions()' instead",
-                stacklevel=3)
+            message=(
+                "Deprecated since DendroPy 4: 'Tree.encode_splits()' will no longer be"
+                " supported in future releases; use 'Tree.update_bipartitions()'"
+                " instead"
+            ),
+            stacklevel=3,
+        )
         return self.encode_bipartitions(*args, **kwargs)
 
     def _get_bipartition_edge_map(self):
@@ -5957,16 +3707,15 @@ class Tree(
                 self._bipartition_edge_map[edge.bipartition] = edge
                 self._split_bitmask_edge_map[edge.bipartition.split_bitmask] = edge
         return self._bipartition_edge_map
+
     bipartition_edge_map = property(_get_bipartition_edge_map)
 
     def _get_split_bitmask_edge_map(self):
         if not self._split_bitmask_edge_map:
             self._get_bipartition_edge_map()
         return self._split_bitmask_edge_map
-    split_bitmask_edge_map = property(_get_split_bitmask_edge_map)
 
-    ###########################################################################
-    ### Metrics -- Unary
+    split_bitmask_edge_map = property(_get_split_bitmask_edge_map)
 
     def __len__(self):
         """
@@ -5980,61 +3729,105 @@ class Tree(
     def B1(self):
         """DEPRECATED: Use :func:`dendropy.calculate.treemeasure.B1()`."""
         deprecate.dendropy_deprecation_warning(
-                preamble="Deprecated since DendroPy 4: Unary statistics on trees are now implemented in the 'dendropy.calculate.treemeasure' module.",
-                old_construct="tree.B1()",
-                new_construct="from dendropy.calculate import treemeasure\ntreemeasure.B1(tree)")
+            preamble=(
+                "Deprecated since DendroPy 4: Unary statistics on trees are now"
+                " implemented in the 'dendropy.calculate.treemeasure' module."
+            ),
+            old_construct="tree.B1()",
+            new_construct=(
+                "from dendropy.calculate import treemeasure\ntreemeasure.B1(tree)"
+            ),
+        )
         from dendropy.calculate import treemeasure
+
         return treemeasure.B1(self)
 
     def colless_tree_imbalance(self, normalize="max"):
         """DEPRECATED: Use 'dendropy.calculate.treemeasure.colless_tree_imbalance()'."""
         deprecate.dendropy_deprecation_warning(
-                preamble="Deprecated since DendroPy 4: Unary statistics on trees are now implemented in the 'dendropy.calculate.treemeasure' module.",
-                old_construct="tree.colless_tree_imbalance()",
-                new_construct="from dendropy.calculate import treemeasure\ntreemeasure.colless_tree_imbalance(tree)")
+            preamble=(
+                "Deprecated since DendroPy 4: Unary statistics on trees are now"
+                " implemented in the 'dendropy.calculate.treemeasure' module."
+            ),
+            old_construct="tree.colless_tree_imbalance()",
+            new_construct=(
+                "from dendropy.calculate import"
+                " treemeasure\ntreemeasure.colless_tree_imbalance(tree)"
+            ),
+        )
         from dendropy.calculate import treemeasure
+
         return treemeasure.colless_tree_imbalance(self, normalize)
 
     def pybus_harvey_gamma(self, prec=0.00001):
         """DEPRECATED: Use 'dendropy.calculate.treemeasure.pybus_harvey_gamma()'."""
         deprecate.dendropy_deprecation_warning(
-                preamble="Deprecated since DendroPy 4: Unary statistics on trees are now implemented in the 'dendropy.calculate.treemeasure' module.",
-                old_construct="tree.pybus_harvey_gamma()",
-                new_construct="from dendropy.calculate import treemeasure\ntreemeasure.pybus_harvey_gamma(tree)")
+            preamble=(
+                "Deprecated since DendroPy 4: Unary statistics on trees are now"
+                " implemented in the 'dendropy.calculate.treemeasure' module."
+            ),
+            old_construct="tree.pybus_harvey_gamma()",
+            new_construct=(
+                "from dendropy.calculate import"
+                " treemeasure\ntreemeasure.pybus_harvey_gamma(tree)"
+            ),
+        )
         from dendropy.calculate import treemeasure
+
         return treemeasure.pybus_harvey_gamma(self, prec)
 
     def N_bar(self):
         """DEPRECATED: Use 'dendropy.calculate.treemeasure.N_bar()'."""
         deprecate.dendropy_deprecation_warning(
-                preamble="Deprecated since DendroPy 4: Unary statistics on trees are now implemented in the 'dendropy.calculate.treemeasure' module.",
-                old_construct="tree.N_bar()",
-                new_construct="from dendropy.calculate import treemeasure\ntreemeasure.N_bar(tree)")
+            preamble=(
+                "Deprecated since DendroPy 4: Unary statistics on trees are now"
+                " implemented in the 'dendropy.calculate.treemeasure' module."
+            ),
+            old_construct="tree.N_bar()",
+            new_construct=(
+                "from dendropy.calculate import treemeasure\ntreemeasure.N_bar(tree)"
+            ),
+        )
         from dendropy.calculate import treemeasure
+
         return treemeasure.N_bar(self)
 
     def sackin_index(self, normalize=True):
         """DEPRECATED: Use 'dendropy.calculate.treemeasure.sackin_index()'."""
         deprecate.dendropy_deprecation_warning(
-                preamble="Deprecated since DendroPy 4: Unary statistics on trees are now implemented in the 'dendropy.calculate.treemeasure' module.",
-                old_construct="tree.sackin_index()",
-                new_construct="from dendropy.calculate import treemeasure\ntreemeasure.sackin_index(tree)")
+            preamble=(
+                "Deprecated since DendroPy 4: Unary statistics on trees are now"
+                " implemented in the 'dendropy.calculate.treemeasure' module."
+            ),
+            old_construct="tree.sackin_index()",
+            new_construct=(
+                "from dendropy.calculate import"
+                " treemeasure\ntreemeasure.sackin_index(tree)"
+            ),
+        )
         from dendropy.calculate import treemeasure
+
         return treemeasure.sackin_index(self, normalize)
 
     def treeness(self):
         """DEPRECATED: Use 'dendropy.calculate.treemeasure.treeness()'."""
         deprecate.dendropy_deprecation_warning(
-                preamble="Deprecated since DendroPy 4: Unary statistics on trees are now implemented in the 'dendropy.calculate.treemeasure' module.",
-                old_construct="tree.treeness()",
-                new_construct="from dendropy.calculate import treemeasure\ntreemeasure.treeness(tree)")
+            preamble=(
+                "Deprecated since DendroPy 4: Unary statistics on trees are now"
+                " implemented in the 'dendropy.calculate.treemeasure' module."
+            ),
+            old_construct="tree.treeness()",
+            new_construct=(
+                "from dendropy.calculate import treemeasure\ntreemeasure.treeness(tree)"
+            ),
+        )
         from dendropy.calculate import treemeasure
-        return treemeasure.treeness(self)
 
-    ###########################################################################
-    ### Comparisons with Another Tree
+        return treemeasure.treeness(self)
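The deprecation messages above all point at the same replacement pattern; collected into one hedged sketch (function names taken verbatim from the new_construct strings, values depend on the tree):

    import dendropy
    from dendropy.calculate import treemeasure

    tree = dendropy.Tree.get(
        data="((A:1.0,B:1.0):1.0,(C:1.5,D:1.5):0.5);",
        schema="newick",
    )
    print(treemeasure.treeness(tree))               # share of total length on internal edges
    print(treemeasure.B1(tree))
    print(treemeasure.colless_tree_imbalance(tree)) # 0.0 for this balanced topology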
 
-    def is_compatible_with_bipartition(self, bipartition, is_bipartitions_updated=False):
+    def is_compatible_with_bipartition(
+        self, bipartition, is_bipartitions_updated=False
+    ):
         """
         Returns true if the |Bipartition| ``bipartition`` is compatible
         with all bipartitions of this tree.
@@ -6055,50 +3848,92 @@ class Tree(
     def find_missing_splits(self, other_tree):
         """DEPRECATED: Use 'dendropy.treecompare.find_missing_bipartitions()'."""
         deprecate.dendropy_deprecation_warning(
-                preamble="Deprecated since DendroPy 4: Statistics comparing two trees are now implemented in the 'dendropy.calculate.treecompare' module.",
-                old_construct="tree1.find_missing_splits(tree2)",
-                new_construct="from dendropy.calculate import treecompare\ntreecompare.find_missing_bipartitions(tree1, tree2)")
+            preamble=(
+                "Deprecated since DendroPy 4: Statistics comparing two trees are now"
+                " implemented in the 'dendropy.calculate.treecompare' module."
+            ),
+            old_construct="tree1.find_missing_splits(tree2)",
+            new_construct=(
+                "from dendropy.calculate import"
+                " treecompare\ntreecompare.find_missing_bipartitions(tree1, tree2)"
+            ),
+        )
         from dendropy.calculate import treecompare
+
         return treecompare.find_missing_splits(self, other_tree)
 
     def symmetric_difference(self, other_tree):
         """DEPRECATED: Use 'dendropy.treecompare.symmetric_difference()'."""
         deprecate.dendropy_deprecation_warning(
-                preamble="Deprecated since DendroPy 4: Statistics comparing two trees are now implemented in the 'dendropy.calculate.treecompare' module.",
-                old_construct="tree1.symmetric_difference(tree2)",
-                new_construct="from dendropy.calculate import treecompare\ntreecompare.symmetric_difference(tree1, tree2)")
+            preamble=(
+                "Deprecated since DendroPy 4: Statistics comparing two trees are now"
+                " implemented in the 'dendropy.calculate.treecompare' module."
+            ),
+            old_construct="tree1.symmetric_difference(tree2)",
+            new_construct=(
+                "from dendropy.calculate import"
+                " treecompare\ntreecompare.symmetric_difference(tree1, tree2)"
+            ),
+        )
         from dendropy.calculate import treecompare
+
         return treecompare.symmetric_difference(self, other_tree)
 
     def false_positives_and_negatives(self, other_tree):
         """DEPRECATED: Use 'dendropy.treecompare.false_positives_and_negatives()'."""
         deprecate.dendropy_deprecation_warning(
-                preamble="Deprecated since DendroPy 4: Statistics comparing two trees are now implemented in the 'dendropy.calculate.treecompare' module.",
-                old_construct="tree1.false_positives_and_negatives(tree2)",
-                new_construct="from dendropy.calculate import treecompare\ntreecompare.false_positives_and_negatives(tree1, tree2)")
+            preamble=(
+                "Deprecated since DendroPy 4: Statistics comparing two trees are now"
+                " implemented in the 'dendropy.calculate.treecompare' module."
+            ),
+            old_construct="tree1.false_positives_and_negatives(tree2)",
+            new_construct=(
+                "from dendropy.calculate import"
+                " treecompare\ntreecompare.false_positives_and_negatives(tree1, tree2)"
+            ),
+        )
         from dendropy.calculate import treecompare
+
         return treecompare.false_positives_and_negatives(self, other_tree)
 
     def robinson_foulds_distance(self, other_tree):
         """DEPRECATED: Use 'dendropy.treecompare.weighted_robinson_foulds_distance()'."""
         deprecate.dendropy_deprecation_warning(
-                preamble="Deprecated since DendroPy 4: Statistics comparing two trees are now implemented in the 'dendropy.calculate.treecompare' module, and this method's functionality is available through the 'weighted_robinson_foulds_distance()' function. For the *unweighted* RF distance, see 'dendropy.calculate.treecompare.symmetric_difference()'.",
-                old_construct="tree1.robinson_foulds_distance(tree2)",
-                new_construct="from dendropy.calculate import treecompare\ntreecompare.weighted_robinson_foulds_distance(tree1, tree2)")
+            preamble=(
+                "Deprecated since DendroPy 4: Statistics comparing two trees are now"
+                " implemented in the 'dendropy.calculate.treecompare' module, and this"
+                " method's functionality is available through the"
+                " 'weighted_robinson_foulds_distance()' function. For the *unweighted*"
+                " RF distance, see"
+                " 'dendropy.calculate.treecompare.symmetric_difference()'."
+            ),
+            old_construct="tree1.robinson_foulds_distance(tree2)",
+            new_construct=(
+                "from dendropy.calculate import"
+                " treecompare\ntreecompare.weighted_robinson_foulds_distance(tree1,"
+                " tree2)"
+            ),
+        )
         from dendropy.calculate import treecompare
+
         return treecompare.weighted_robinson_foulds_distance(self, other_tree)
 
     def euclidean_distance(self, other_tree):
         """DEPRECATED: Use 'dendropy.treecompare.euclidean_distance()'."""
         deprecate.dendropy_deprecation_warning(
-                preamble="Deprecated since DendroPy 4: Statistics comparing two trees are now implemented in the 'dendropy.calculate.treecompare' module.",
-                old_construct="tree1.euclidean_distance(tree2)",
-                new_construct="from dendropy.calculate import treecompare\ntreecompare.euclidean_distance(tree1, tree2)")
+            preamble=(
+                "Deprecated since DendroPy 4: Statistics comparing two trees are now"
+                " implemented in the 'dendropy.calculate.treecompare' module."
+            ),
+            old_construct="tree1.euclidean_distance(tree2)",
+            new_construct=(
+                "from dendropy.calculate import"
+                " treecompare\ntreecompare.euclidean_distance(tree1, tree2)"
+            ),
+        )
         from dendropy.calculate import treecompare
-        return treecompare.euclidean_distance(self, other_tree)
 
-    ###########################################################################
-    ### Metadata
+        return treecompare.euclidean_distance(self, other_tree)
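Likewise for the two-tree comparisons: the functions named in the messages above live in dendropy.calculate.treecompare and require that both trees share a single TaxonNamespace. A minimal sketch:

    import dendropy
    from dendropy.calculate import treecompare

    tns = dendropy.TaxonNamespace()
    tree1 = dendropy.Tree.get(data="((A,B),(C,D));", schema="newick", taxon_namespace=tns)
    tree2 = dendropy.Tree.get(data="((A,C),(B,D));", schema="newick", taxon_namespace=tns)
    print(treecompare.symmetric_difference(tree1, tree2))            # unweighted RF distance
    print(treecompare.false_positives_and_negatives(tree1, tree2))   # (1, 1) for these topologies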
 
     def strip_comments(self):
         """
@@ -6109,9 +3944,6 @@ class Tree(
             nd.comments = []
             nd.edge.comments = []
 
-    ###########################################################################
-    ### Representation
-
     def __str__(self):
         "Dump Newick string."
         return "%s" % self._as_newick_string()
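str(tree) above produces a bare Newick string; the general-purpose serializer is Tree.as_string() (a small sketch, not taken from this diff):

    import dendropy

    tree = dendropy.Tree.get(data="((A,B),(C,D));", schema="newick")
    print(str(tree))                        # bare Newick via __str__
    print(tree.as_string(schema="newick"))  # same topology through the schema-based writer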
@@ -6130,35 +3962,50 @@ class Tree(
             label = " (%s)" % id(self)
         else:
             label = " (%s: '%s')" % (id(self), self.label)
-        output_strio.write('%s%sTree object at %s%s'
-                % (indent*' ',
-                   itemize,
-                   hex(id(self)),
-                   label))
+        output_strio.write(
+            "%s%sTree object at %s%s" % (indent * " ", itemize, hex(id(self)), label)
+        )
         if depth >= 1:
             newick_str = self._as_newick_string()
             if not newick_str:
                 newick_str = "()"
             if depth == 1:
-                output_strio.write(': %s' % newick_str)
+                output_strio.write(": %s" % newick_str)
             elif depth >= 2:
                 num_nodes = len([nd for nd in self.preorder_node_iter()])
                 num_edges = len([ed for ed in self.preorder_edge_iter()])
-                output_strio.write(': %d Nodes, %d Edges' % (num_nodes, num_edges))
+                output_strio.write(": %d Nodes, %d Edges" % (num_nodes, num_edges))
                 if self.taxon_namespace is not None:
-                    output_strio.write("\n%s[Taxon Set]\n" % (" " * (indent+4)))
-                    self.taxon_namespace.description(depth=depth-1, indent=indent+8, itemize="", output=output_strio)
-                output_strio.write('\n%s[Tree]' % (" " * (indent+4)))
-                output_strio.write('\n%s%s' % (" " * (indent+8), newick_str))
+                    output_strio.write("\n%s[Taxon Set]\n" % (" " * (indent + 4)))
+                    self.taxon_namespace.description(
+                        depth=depth - 1,
+                        indent=indent + 8,
+                        itemize="",
+                        output=output_strio,
+                    )
+                output_strio.write("\n%s[Tree]" % (" " * (indent + 4)))
+                output_strio.write("\n%s%s" % (" " * (indent + 8), newick_str))
                 if depth >= 3:
-                    output_strio.write("\n%s[Nodes]" % (" " * (indent+4)))
+                    output_strio.write("\n%s[Nodes]" % (" " * (indent + 4)))
                     for i, nd in enumerate(self.preorder_node_iter()):
-                        output_strio.write('\n')
-                        nd.description(depth=depth-3, indent=indent+8, itemize="[%d] " % i, output=output_strio, taxon_namespace=self.taxon_namespace)
-                    output_strio.write("\n%s[Edges]" % (" " * (indent+4)))
+                        output_strio.write("\n")
+                        nd.description(
+                            depth=depth - 3,
+                            indent=indent + 8,
+                            itemize="[%d] " % i,
+                            output=output_strio,
+                            taxon_namespace=self.taxon_namespace,
+                        )
+                    output_strio.write("\n%s[Edges]" % (" " * (indent + 4)))
                     for i, ed in enumerate(self.preorder_edge_iter()):
-                        output_strio.write('\n')
-                        ed.description(depth=depth-3, indent=indent+8, itemize="[%d] " % i, output=output_strio, taxon_namespace=self.taxon_namespace)
+                        output_strio.write("\n")
+                        ed.description(
+                            depth=depth - 3,
+                            indent=indent + 8,
+                            itemize="[%d] " % i,
+                            output=output_strio,
+                            taxon_namespace=self.taxon_namespace,
+                        )
 
         s = output_strio.getvalue()
         if output is not None:
@@ -6181,10 +4028,7 @@ class Tree(
             tree_args = ""
         else:
             tree_args = ", " + tree_args
-        p.append("%s = dendropy.Tree(label=%s%s)" \
-            % (tree_obj_name,
-               label,
-               tree_args))
+        p.append("%s = dendropy.Tree(label=%s%s)" % (tree_obj_name, label, tree_args))
 
         taxon_obj_namer = lambda x: "tax_%s" % id(x)
         for taxon in self.taxon_namespace:
@@ -6193,11 +4037,14 @@ class Tree(
                 label = "'" + taxon.label + "'"
             else:
                 label = "None"
-            p.append("%s = %s.taxon_namespace.require_taxon(label=%s)" \
-                % (tobj_name,
-                   tree_obj_name,
-                   label,
-                   ))
+            p.append(
+                "%s = %s.taxon_namespace.require_taxon(label=%s)"
+                % (
+                    tobj_name,
+                    tree_obj_name,
+                    label,
+                )
+            )
 
         node_obj_namer = lambda x: "nd_%s" % id(x)
         for node in self.preorder_node_iter():
@@ -6214,19 +4061,19 @@ class Tree(
                     ct = taxon_obj_namer(child.taxon)
                 else:
                     ct = "None"
-                p.append("%s = %s.new_child(label=%s, taxon=%s, edge_length=%s)" %
-                        (node_obj_namer(child),
-                         nn,
-                         label,
-                         ct,
-                         child.edge.length,
-                         ))
+                p.append(
+                    "%s = %s.new_child(label=%s, taxon=%s, edge_length=%s)"
+                    % (
+                        node_obj_namer(child),
+                        nn,
+                        label,
+                        ct,
+                        child.edge.length,
+                    )
+                )
 
         return "\n".join(p)
 
-    ###########################################################################
-    ### Representation
-
     def as_ascii_plot(self, **kwargs):
         """
         Returns a string representation of a graphic of this tree using ASCII
@@ -6246,6 +4093,7 @@ class Tree(
         Writes an ASCII text graphic of this tree to standard output.
         """
         import sys
+
         self.write_ascii_plot(sys.stdout, **kwargs)
         sys.stdout.write("\n")
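A hedged usage sketch for the ASCII plotting entry points above (the plot_metric and width keywords are the ones consumed by AsciiTreePlot later in this file):

    import dendropy

    tree = dendropy.Tree.get(
        data="((A:1.0,B:1.0):1.0,(C:1.5,D:1.5):0.5);",
        schema="newick",
    )
    tree.print_plot()                                          # quick look, default 'depth' metric
    print(tree.as_ascii_plot(plot_metric="length", width=70))  # scaled by edge lengths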
 
@@ -6256,10 +4104,9 @@ class Tree(
         if not kwargs.get("taxon_namespace"):
             kwargs["taxon_namespace"] = self.taxon_namespace
         out.write("digraph G {\n")
-
         nd_id_to_dot_nd = {}
         for n, nd in enumerate(self.preorder_node_iter()):
-            label = _format_node(nd, **kwargs)
+            label = nd._format_node(**kwargs)
             if nd is self.seed_node:
                 label = "root %s" % label
             dot_nd = "n%d" % n
@@ -6272,14 +4119,11 @@ class Tree(
             except:
                 pass
             else:
-                label = _format_edge(e, **kwargs)
+                label = e._format_edge(**kwargs)
                 s = ' %s -> %s [label="%s"];\n' % (par_dot_nd, dot_nd, label)
                 out.write(s)
         out.write("}\n")
 
-    ###########################################################################
-    ### Debugging/Testing
-
     def _assign_node_labels_from_taxon(self):
         for nd in self.postorder_node_iter():
             if nd.label is not None:
@@ -6300,13 +4144,18 @@ class Tree(
 
     def _debug_check_tree(self, logger_obj=None, **kwargs):
         import logging, inspect
+
         if logger_obj and logger_obj.isEnabledFor(logging.DEBUG):
             try:
                 assert self._debug_tree_is_valid(logger_obj=logger_obj, **kwargs)
             except:
                 calling_frame = inspect.currentframe().f_back
                 co = calling_frame.f_code
-                emsg = "\nCalled from file %s, line %d, in %s" % (co.co_filename, calling_frame.f_lineno, co.co_name)
+                emsg = "\nCalled from file %s, line %d, in %s" % (
+                    co.co_filename,
+                    calling_frame.f_lineno,
+                    co.co_name,
+                )
                 _LOG.debug("%s" % str(self))
                 _LOG.debug("%s" % self._get_indented_form(**kwargs))
         assert self._debug_tree_is_valid(logger_obj=logger_obj, **kwargs)
@@ -6317,9 +4166,11 @@ class Tree(
         kwargs:
             ``check_bipartitions`` if True specifies that the bipartition attributes are checked.
         """
-        check_bipartitions = kwargs.get('check_bipartitions', False)
-        unique_bipartition_edge_mapping = kwargs.get('unique_bipartition_edge_mapping', False)
-        taxon_namespace = kwargs.get('taxon_namespace')
+        check_bipartitions = kwargs.get("check_bipartitions", False)
+        unique_bipartition_edge_mapping = kwargs.get(
+            "unique_bipartition_edge_mapping", False
+        )
+        taxon_namespace = kwargs.get("taxon_namespace")
         if taxon_namespace is None:
             taxon_namespace = self.taxon_namespace
         if check_bipartitions:
@@ -6327,53 +4178,91 @@ class Tree(
         nodes = {}
         edges = {}
         curr_node = self.seed_node
-        assert curr_node._parent_node is None, \
-                "{} is seed node, but has non-'None' parent node: {}".format(curr_node, curr_node._parent_node)
-        assert curr_node.edge.tail_node is None, \
-                "{} is seed node, but edge has non-'None' tail node: {}".format(curr_node, curr_node.edge._parent_node)
+        assert (
+            curr_node._parent_node is None
+        ), "{} is seed node, but has non-'None' parent node: {}".format(
+            curr_node, curr_node._parent_node
+        )
+        assert (
+            curr_node.edge.tail_node is None
+        ), "{} is seed node, but edge has non-'None' tail node: {}".format(
+            curr_node, curr_node.edge._parent_node
+        )
         ancestors = []
         siblings = []
         while curr_node:
-            assert curr_node not in nodes, \
-                    "Node {} seen multiple times".format(curr_node)
+            assert curr_node not in nodes, "Node {} seen multiple times".format(
+                curr_node
+            )
             curr_edge = curr_node.edge
-            assert curr_edge not in edges, \
-                    "Edge of {}, {}, is also an edge of {}".format(curr_node, curr_node.edge, edges[curr_edge])
+            assert (
+                curr_edge not in edges
+            ), "Edge of {}, {}, is also an edge of {}".format(
+                curr_node, curr_node.edge, edges[curr_edge]
+            )
             edges[curr_edge] = curr_node
             nodes[curr_node] = curr_edge
-            assert curr_edge.head_node is curr_node, \
-                    "Head node of edge of {}, {}, is {}, not {}".format(curr_node, curr_edge, curr_edge.head_node, curr_node)
-            assert curr_edge.tail_node is curr_node._parent_node, \
-                    "Tail node of edge of {}, {}, is {}, but parent node is {}".format(curr_node, curr_edge, curr_edge.tail_node, curr_node._parent_node)
+            assert (
+                curr_edge.head_node is curr_node
+            ), "Head node of edge of {}, {}, is {}, not {}".format(
+                curr_node, curr_edge, curr_edge.head_node, curr_node
+            )
+            assert (
+                curr_edge.tail_node is curr_node._parent_node
+            ), "Tail node of edge of {}, {}, is {}, but parent node is {}".format(
+                curr_node, curr_edge, curr_edge.tail_node, curr_node._parent_node
+            )
             if check_bipartitions:
                 cm = 0
-                assert (curr_edge.bipartition._leafset_bitmask | taxa_mask) == taxa_mask, \
-                        "Bipartition mask error: {} | {} == {} (expecting: {})".format(
-                                curr_edge.bipartition.leafset_as_bitstring(),
-                                self.seed_node.edge.bipartition.leafset_as_bitstring(),
-                                self.seed_node.edge.bipartition.bitmask_as_bitstring(curr_edge.bipartition._leafset_bitmask | taxa_mask),
-                                self.seed_node.edge.bipartition.leafset_as_bitstring(), )
+                assert (
+                    curr_edge.bipartition._leafset_bitmask | taxa_mask
+                ) == taxa_mask, "Bipartition mask error: {} | {} == {} (expecting: {})".format(
+                    curr_edge.bipartition.leafset_as_bitstring(),
+                    self.seed_node.edge.bipartition.leafset_as_bitstring(),
+                    self.seed_node.edge.bipartition.bitmask_as_bitstring(
+                        curr_edge.bipartition._leafset_bitmask | taxa_mask
+                    ),
+                    self.seed_node.edge.bipartition.leafset_as_bitstring(),
+                )
             c = curr_node._child_nodes
             if c:
                 for child in c:
-                    assert child._parent_node is curr_node, \
-                            "Child of {}, {}, has {} as parent".format(curr_node, child, child._parent_node)
+                    assert (
+                        child._parent_node is curr_node
+                    ), "Child of {}, {}, has {} as parent".format(
+                        curr_node, child, child._parent_node
+                    )
                     if check_bipartitions:
                         cm |= child.edge.bipartition._leafset_bitmask
             elif check_bipartitions:
-                assert curr_node.taxon is not None, \
-                        "Cannot check bipartitions: {} is a leaf node, but its 'taxon' attribute is 'None'".format(curr_node)
+                assert curr_node.taxon is not None, (
+                    "Cannot check bipartitions: {} is a leaf node, but its 'taxon'"
+                    " attribute is 'None'".format(curr_node)
+                )
                 cm = taxon_namespace.taxon_bitmask(curr_node.taxon)
             if check_bipartitions:
-                assert (cm & taxa_mask) == curr_edge.bipartition._leafset_bitmask, \
-                        "Bipartition leafset bitmask error: {} (taxa: {}, leafset: {})".format(
-                                curr_edge.bipartition.bitmask_as_bitstring(cm),
-                                curr_edge.bipartition.bitmask_as_bitstring(taxa_mask),
-                                curr_edge.bipartition.leafset_as_bitstring())
+                assert (
+                    cm & taxa_mask
+                ) == curr_edge.bipartition._leafset_bitmask, (
+                    "Bipartition leafset bitmask error: {} (taxa: {}, leafset: {})"
+                    .format(
+                        curr_edge.bipartition.bitmask_as_bitstring(cm),
+                        curr_edge.bipartition.bitmask_as_bitstring(taxa_mask),
+                        curr_edge.bipartition.leafset_as_bitstring(),
+                    )
+                )
                 if unique_bipartition_edge_mapping:
-                    assert self.bipartition_edge_map[curr_edge.bipartition] is curr_edge, \
-                            "Expecting edge {} for bipartition {}, but instead found {}".format(curr_edge, curr_edge.bipartition, self.bipartition_edge_map[curr_edge.bipartition])
-            curr_node, level = _preorder_list_manip(curr_node, siblings, ancestors)
+                    assert (
+                        self.bipartition_edge_map[curr_edge.bipartition] is curr_edge
+                    ), (
+                        "Expecting edge {} for bipartition {}, but instead found {}"
+                        .format(
+                            curr_edge,
+                            curr_edge.bipartition,
+                            self.bipartition_edge_map[curr_edge.bipartition],
+                        )
+                    )
+            curr_node, level = curr_node._preorder_list_manip(siblings, ancestors)
         if check_bipartitions:
             for b in self.bipartition_encoding:
                 e = self.bipartition_edge_map[b]
@@ -6397,6 +4286,7 @@ class Tree(
         to the standard output stream.
         """
         import sys
+
         sys.stdout.write(self._as_newick_string(**kwargs))
         sys.stdout.write("\n")
 
@@ -6409,14 +4299,17 @@ class Tree(
         """
         self.seed_node._write_newick(out, **kwargs)
 
-    def _plot_bipartitions_on_tree(self,
-            show_splits=True,
-            show_leafsets=True,
-            show_taxon_labels=False,
-            is_bipartitions_updated=False,
-            width=120):
+    def _plot_bipartitions_on_tree(
+        self,
+        show_splits=True,
+        show_leafsets=True,
+        show_taxon_labels=False,
+        is_bipartitions_updated=False,
+        width=120,
+    ):
         if not is_bipartitions_updated:
             self.encode_bipartitions()
+
         def _print_node(nd):
             d = []
             if show_splits:
@@ -6427,17 +4320,15 @@ class Tree(
             if show_taxon_labels and nd.taxon is not None:
                 s = s + " ({})".format(nd.taxon.label)
             return s
+
         return self.as_ascii_plot(
-                show_internal_node_labels=True,
-                node_label_compose_fn=_print_node,
-                width=width,
-                )
+            show_internal_node_labels=True,
+            node_label_compose_fn=_print_node,
+            width=width,
+        )
 
-###############################################################################
-### AsciiTreePlot
 
 class AsciiTreePlot(object):
-
     class NullEdgeLengthError(ValueError):
         def __init__(self, *args, **kwargs):
             ValueError.__init__(self, *args, **kwargs)
@@ -6464,13 +4355,12 @@ class AsciiTreePlot(object):
             the string to be used to display it.
 
         """
-        self.plot_metric = kwargs.pop('plot_metric', 'depth')
-        self.show_external_node_labels = kwargs.pop('show_external_node_labels', True)
-        self.show_internal_node_labels = kwargs.pop('show_internal_node_labels', False)
-        self.leaf_spacing_factor = kwargs.pop('leaf_spacing_factor', 2)
-#        self.null_edge_length = kwargs.pop('null_edge_length', 0)
-        self.width = kwargs.pop('width', None)
-        self.display_width = kwargs.pop('display_width', self.width) # legacy
+        self.plot_metric = kwargs.pop("plot_metric", "depth")
+        self.show_external_node_labels = kwargs.pop("show_external_node_labels", True)
+        self.show_internal_node_labels = kwargs.pop("show_internal_node_labels", False)
+        self.leaf_spacing_factor = kwargs.pop("leaf_spacing_factor", 2)
+        self.width = kwargs.pop("width", None)
+        self.display_width = kwargs.pop("display_width", self.width)  # legacy
         self.compose_node = kwargs.pop("node_label_compose_fn", None)
         if self.compose_node is None:
             self.compose_node = self.default_compose_node
@@ -6494,45 +4384,56 @@ class AsciiTreePlot(object):
         self.node_label_map = {}
 
     def _calc_node_offsets(self, tree):
-        if self.plot_metric == 'age' or self.plot_metric == 'depth':
+        if self.plot_metric == "age" or self.plot_metric == "depth":
 
             for nd in tree.postorder_node_iter():
                 cnds = nd.child_nodes()
-                if self.plot_metric == 'depth': # 'number of branchings from tip'
+                if self.plot_metric == "depth":  # 'number of branchings from tip'
                     if len(cnds) == 0:
                         curr_node_offset = 0.0
                     else:
                         depths = [self.node_offset[v] for v in cnds]
                         curr_node_offset = max(depths) + 1
-                elif self.plot_metric == 'age': # 'sum of edge weights from tip'
+                elif self.plot_metric == "age":  # 'sum of edge weights from tip'
                     # note: no enforcement of ultrametricity!
                     if len(cnds) == 0:
                         curr_node_offset = 0.0
                     else:
                         if cnds[0].edge.length is not None:
-                            curr_node_offset = self.node_offset[cnds[0]] + cnds[0].edge.length
+                            curr_node_offset = (
+                                self.node_offset[cnds[0]] + cnds[0].edge.length
+                            )
                 else:
-                    raise ValueError("Unrecognized plot metric '%s' (must be one of: 'age', 'depth', 'level', or 'length')" % self.plot_metric)
+                    raise ValueError(
+                        "Unrecognized plot metric '%s' (must be one of: 'age', 'depth',"
+                        " 'level', or 'length')"
+                        % self.plot_metric
+                    )
                 self.node_offset[nd] = curr_node_offset
             flipped_origin = max(self.node_offset.values())
             for nd in self.node_offset:
                 self.node_offset[nd] = flipped_origin - self.node_offset[nd]
         else:
             for nd in tree.preorder_node_iter():
-                if self.plot_metric == 'level': # 'number of branchings from root'
+                if self.plot_metric == "level":  # 'number of branchings from root'
                     curr_edge_len = 1
-                elif self.plot_metric == 'length': # 'sum of edge weights from root'
+                elif self.plot_metric == "length":  # 'sum of edge weights from root'
                     if nd.edge.length is not None:
                         curr_edge_len = nd.edge.length
                     else:
                         curr_edge_len = 0
                 else:
-                    raise ValueError("Unrecognized plot metric '%s' (must be one of: 'age', 'depth', 'level', or 'length')" % self.plot_metric)
+                    raise ValueError(
+                        "Unrecognized plot metric '%s' (must be one of: 'age', 'depth',"
+                        " 'level', or 'length')"
+                        % self.plot_metric
+                    )
                 if nd._parent_node is None:
                     self.node_offset[nd] = curr_edge_len
                 else:
-                    self.node_offset[nd] =  curr_edge_len + self.node_offset[nd._parent_node]
-#        print "\n".join([str(k) for k in self.node_offset.values()])
+                    self.node_offset[nd] = (
+                        curr_edge_len + self.node_offset[nd._parent_node]
+                    )
 
     def draw(self, tree, dest):
         dest.write(self.compose(tree))
@@ -6556,23 +4457,31 @@ class AsciiTreePlot(object):
             display_width = terminal.terminal_width() - 1
         else:
             display_width = self.display_width
-        max_label_len = max([len(self.get_label_for_node(i)) for i in tree.leaf_node_iter()])
+        max_label_len = max(
+            [len(self.get_label_for_node(i)) for i in tree.leaf_node_iter()]
+        )
         if max_label_len <= 0:
             max_label_len = 0
-        #effective_display_width = display_width - max_label_len - len(tree.internal_nodes) - 1
+        # effective_display_width = display_width - max_label_len - len(tree.internal_nodes) - 1
         effective_display_width = display_width - max_label_len - 1
         self._calc_node_offsets(tree)
-        widths = [self.node_offset[i] for i in tree.leaf_node_iter() if self.node_offset[i] is not None]
+        widths = [
+            self.node_offset[i]
+            for i in tree.leaf_node_iter()
+            if self.node_offset[i] is not None
+        ]
         max_width = float(max(widths))
         if max_width == 0:
-            raise AsciiTreePlot.NullEdgeLengthError("Tree cannot be plotted under metric '%s' due to zero or null edge lengths: '%s'" % (self.plot_metric, tree._as_newick_string()))
+            raise AsciiTreePlot.NullEdgeLengthError(
+                "Tree cannot be plotted under metric '%s' due to zero or null edge"
+                " lengths: '%s'" % (self.plot_metric, tree._as_newick_string())
+            )
         edge_scale_factor = float(effective_display_width) / max_width
-        self.calc_plot(tree.seed_node,
-                       edge_scale_factor=edge_scale_factor)
-        for i in range(len(tree.leaf_nodes())*self.leaf_spacing_factor + 1):
-            self.grid.append([' ' for i in range(0, display_width)])
+        self.calc_plot(tree.seed_node, edge_scale_factor=edge_scale_factor)
+        for i in range(len(tree.leaf_nodes()) * self.leaf_spacing_factor + 1):
+            self.grid.append([" " for i in range(0, display_width)])
         self.draw_node(tree.seed_node)
-        display = '\n'.join([''.join(i) for i in self.grid])
+        display = "\n".join(["".join(i) for i in self.grid])
         return display
 
     def calc_plot(self, node, edge_scale_factor):
@@ -6585,7 +4494,7 @@ class AsciiTreePlot(object):
             for n in child_nodes:
                 self.calc_plot(n, edge_scale_factor)
             ys = [self.node_row[n] for n in child_nodes]
-            self.node_row[node] = int(float((max(ys)-min(ys)) / 2) + min(ys))
+            self.node_row[node] = int(float((max(ys) - min(ys)) / 2) + min(ys))
         else:
             self.node_row[node] = self.current_leaf_row
             self.current_leaf_row = self.current_leaf_row + self.leaf_spacing_factor
@@ -6599,7 +4508,7 @@ class AsciiTreePlot(object):
         if label:
             for i in range(len(label)):
                 if start_col + i < len(self.grid[row]):
-                    self.grid[row][start_col+i] = label[i]
+                    self.grid[row][start_col + i] = label[i]
 
     def draw_node(self, node):
         """
@@ -6611,29 +4520,29 @@ class AsciiTreePlot(object):
                 start_row = min([self.node_row[node], self.node_row[child_node]])
                 end_row = max([self.node_row[node], self.node_row[child_node]])
                 if i == 0:
-                    self.grid[self.node_row[child_node]][self.node_col[node]] = '/'
-                    start_row = start_row+1
+                    self.grid[self.node_row[child_node]][self.node_col[node]] = "/"
+                    start_row = start_row + 1
                     edge_row = self.node_row[child_node]
-                elif i == len(child_nodes)-1:
-                    self.grid[self.node_row[child_node]][self.node_col[node]] = '\\'
+                elif i == len(child_nodes) - 1:
+                    self.grid[self.node_row[child_node]][self.node_col[node]] = "\\"
                     edge_row = self.node_row[child_node]
                 else:
-                    self.grid[self.node_row[child_node]][self.node_col[node]] = '+'
+                    self.grid[self.node_row[child_node]][self.node_col[node]] = "+"
                     edge_row = self.node_row[child_node]
                 self.draw_node(child_node)
-                for x in range(self.node_col[node]+1, self.node_col[child_node]):
-                    self.grid[edge_row][x] = '-'
+                for x in range(self.node_col[node] + 1, self.node_col[child_node]):
+                    self.grid[edge_row][x] = "-"
                 for y in range(start_row, end_row):
-                    self.grid[y][self.node_col[node]] = '|'
+                    self.grid[y][self.node_col[node]] = "|"
             label = []
             if self.show_internal_node_labels:
                 label = self.get_label_for_node(node)
                 self.draw_internal_text(label, self.node_row[node], self.node_col[node])
             else:
-                self.grid[self.node_row[node]][self.node_col[node]]='+'
+                self.grid[self.node_row[node]][self.node_col[node]] = "+"
         else:
             label = self.get_label_for_node(node)
-            self.draw_label(label, self.node_row[node], self.node_col[node]+1)
+            self.draw_label(label, self.node_row[node], self.node_col[node] + 1)
 
     def draw_internal_text(self, label, r, c):
         row = self.grid[r]
@@ -6642,106 +4551,3 @@ class AsciiTreePlot(object):
                 row[c + n] = letter
         except:
             pass
-
-###############################################################################
-### Helper Functions
-
-def _preorder_list_manip(n, siblings, ancestors):
-    """
-    Helper function for recursion free preorder traversal, that does
-    not rely on attributes of the node other than child_nodes() (thus it
-    is useful for debuggging).
-
-    Returns the next node (or None) and the number of levels toward the
-    root the function "moved".
-    """
-    levels_moved = 0
-    c = n.child_nodes()
-    if c:
-        levels_moved += 1
-        ancestors.append(list(siblings))
-        del siblings[:]
-        siblings.extend(c[1:])
-        return c[0], levels_moved
-    while not siblings:
-        if ancestors:
-            levels_moved -= 1
-            del siblings[:]
-            siblings.extend(ancestors.pop())
-        else:
-            return None, levels_moved
-    return siblings.pop(0), levels_moved
-
-def _format_node(nd, **kwargs):
-    nf = kwargs.get('node_formatter', None)
-    if nf:
-        return nf(nd)
-    if nd.taxon is not None:
-        return str(nd.taxon)
-    if nd.label is not None:
-        return nd.label
-    return ""
-
-def _format_edge(e, **kwargs):
-    ef = kwargs.get('edge_formatter', None)
-    if ef:
-        return ef(e)
-    return str(e)
-
-def _format_split(split, length=None, **kwargs):
-    if length is None:
-        length = len(kwargs.get("taxon_namespace"))
-    return bitprocessing.int_as_bitstring(split, length=length)
-
-def _convert_node_to_root_polytomy(nd):
-    """If ``nd`` has two children and at least on of them is an internal node,
-    then it will be converted to an out-degree three node (with the edge length
-    added as needed).
-
-    Returns a tuple of child nodes that were detached (or() if the tree was not
-    modified). This can be useful for removing the deleted node from the split_edge_map
-    dictionary.
-    """
-    nd_children = nd.child_nodes()
-    if len(nd_children) > 2:
-        return ()
-    try:
-        left_child = nd_children[0]
-    except:
-        return ()
-    if not left_child:
-        return ()
-    if len(nd_children) == 1:
-        right_child = None
-        dest_edge_head = nd
-    else:
-        right_child = nd_children[1]
-        dest_edge_head = right_child
-    curr_add = None
-    if right_child and right_child.is_internal():
-        try:
-            left_child.edge.length += right_child.edge.length
-        except:
-            pass
-        nd.remove_child(right_child)
-        grand_kids = right_child.child_nodes()
-        for gc in grand_kids:
-            nd.add_child(gc)
-        curr_add = right_child
-    elif left_child.is_internal():
-        try:
-            dest_edge_head.edge.length += left_child.edge.length
-        except:
-            pass
-        nd.remove_child(left_child)
-        grand_kids = left_child.child_nodes()
-        for gc in grand_kids:
-            nd.add_child(gc)
-        curr_add = left_child
-    if curr_add:
-        ndl = [curr_add]
-        t = _convert_node_to_root_polytomy(nd)
-        ndl.extend(t)
-        return tuple(ndl)
-    return ()
-
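The module-level helpers removed above now live on Node (compare the earlier call curr_node._preorder_list_manip(siblings, ancestors)). For readers unfamiliar with the pattern, the recursion-free preorder walk they implement is equivalent to the following generic stack-based sketch (an illustration of the idea only, not DendroPy's implementation):

    def iter_preorder(root):
        """Yield nodes in preorder without recursion, using an explicit stack."""
        stack = [root]
        while stack:
            node = stack.pop()
            yield node
            # Push children in reverse so the leftmost child is visited next.
            stack.extend(reversed(node.child_nodes()))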
diff --git a/src/dendropy/interop/ape.py b/src/dendropy/interop/ape.py
index 3f6972d9..40d0b351 100644
--- a/src/dendropy/interop/ape.py
+++ b/src/dendropy/interop/ape.py
@@ -121,9 +121,15 @@ else:
         rfunc(*args, **kwargs)
         _R('sink(type="message")')
         _R('sink()')
-        i = open(stdoutf.name, "rU")
+        try:
+            i = open(stdoutf.name, "rU")
+        except ValueError:
+            i = open(stdoutf.name, "r")
         stdout = i.read()
-        i = open(stderrf.name, "rU")
+        try:
+            i = open(stderrf.name, "rU")
+        except ValueError:
+            i = open(stderrf.name, "r")
         stderr = i.read()
         return stdout, stderr
 
@@ -148,12 +154,12 @@ else:
     #            23, 291, 313, 196, 1027, 5712]
         stdout, stderr = exec_and_capture(_R['bd.ext'], as_ape_object(t), as_r_vector(taxon_num_species, int))
         patterns = {
-            'deviance' : '\s*Deviance: ([\d\-\.Ee\+]+).*',
-            'log-likelihood' : '\s*Log-likelihood: ([\d\-\.Ee\+]+)',
-            'd/b' : '\s*d / b = ([\d\-\.Ee\+]+)',
-            'd/b s.e.' : '\s*d / b = .* StdErr = ([\d\-\.Ee\+]+)',
-            'b-d' : '\s*b - d = ([\d\-\.Ee\+]+)',
-            'b-d s.e.' : '\s*b - d = .* StdErr = ([\d\-\.Ee\+]+)',
+            r'deviance' : r'\s*Deviance: ([\d\-\.Ee\+]+).*',
+            r'log-likelihood' : r'\s*Log-likelihood: ([\d\-\.Ee\+]+)',
+            r'd/b' : r'\s*d / b = ([\d\-\.Ee\+]+)',
+            r'd/b s.e.' : r'\s*d / b = .* StdErr = ([\d\-\.Ee\+]+)',
+            r'b-d' : r'\s*b - d = ([\d\-\.Ee\+]+)',
+            r'b-d s.e.' : r'\s*b - d = .* StdErr = ([\d\-\.Ee\+]+)',
         }
         results = {}
         for k, v in patterns.items():
diff --git a/src/dendropy/interop/genbank.py b/src/dendropy/interop/genbank.py
index 1f21904f..965136ac 100644
--- a/src/dendropy/interop/genbank.py
+++ b/src/dendropy/interop/genbank.py
@@ -71,7 +71,7 @@ class GenBankResourceStore(object):
         return gb_recs
     parse_xml = staticmethod(parse_xml)
 
-    def fetch_xml(db, ids, prefix=None, email=None, as_stream=False):
+    def fetch_xml(db, ids, prefix=None, email=None, as_stream=False, str_decoding="utf-8"):
         stream = entrez.efetch(db=db,
                 ids=ids,
                 rettype='gbc',
@@ -80,7 +80,7 @@ class GenBankResourceStore(object):
         if as_stream:
             return stream
         else:
-            return stream.read()
+            return stream.read().decode(str_decoding)
     fetch_xml = staticmethod(fetch_xml)
 
     def prepare_ids(ids, prefix=None):
@@ -968,7 +968,7 @@ class GenBankAccessionRecord(object):
         #         value=value,
         #         datatype_hint=None,
         #         name_prefix="dendropy",
-        #         namespace="http://packages.python.org/DendroPy/",
+        #         namespace="http://pypi.org/project/DendroPy/",
         #         name_is_prefixed=False,
         #         is_attribute=False,
         #         annotate_as_reference=False,
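
The ``fetch_xml`` change above decodes the Entrez byte stream before returning it, with the encoding exposed via the new ``str_decoding`` parameter. A minimal sketch of calling it directly (this assumes network access to NCBI, and the accession number is purely illustrative)::

    from dendropy.interop import genbank

    # Returns the GenBank XML as a decoded string rather than raw bytes.
    xml_text = genbank.GenBankResourceStore.fetch_xml(
        db="nucleotide",
        ids=["EU105474"],          # hypothetical accession
        str_decoding="utf-8",
    )
    print(xml_text[:200])
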
diff --git a/src/dendropy/interop/paup.py b/src/dendropy/interop/paup.py
index d2beff0b..15beb306 100644
--- a/src/dendropy/interop/paup.py
+++ b/src/dendropy/interop/paup.py
@@ -329,7 +329,7 @@ class PaupService(object):
         self.commands.append("execute {}".format(filepath))
         if clear_trees:
             self.commands.append("cleartrees")
-        return commands
+        return self.commands
 
     def stage_load_trees(self,
             tree_filepaths,
@@ -555,7 +555,7 @@ def symmetric_difference(tree1, tree2):
     trees.write_to_stream(tf, schema='nexus')
     tf.flush()
     assert tree1.is_rooted == tree2.is_rooted
-    sd = get_split_distribution(
+    sd = trees.get_split_distribution_from_files(
             tree_filepaths=[tf.name],
             taxa_filepath=tf.name,
             is_rooted=tree1.is_rooted,
diff --git a/src/dendropy/interop/raxml.py b/src/dendropy/interop/raxml.py
index 73cd3a19..8d710cd8 100644
--- a/src/dendropy/interop/raxml.py
+++ b/src/dendropy/interop/raxml.py
@@ -458,6 +458,8 @@ class RaxmlRunner(object):
 
     def estimate_tree(self,
             char_matrix,
+            substitution_model='GTRCAT',
+            random_seed=None,
             raxml_args=None):
 
         # set up taxa
@@ -486,14 +488,15 @@ class RaxmlRunner(object):
         self.files_to_clean.append(raxml_seqs_filepath + ".reduced")
 
         # run RAxML
+        if random_seed is None:
+            random_seed = random.randint(0, sys.maxsize)
         if raxml_args is None:
             raxml_args = []
         cmd = [self.raxml_path,
-                '-m',
-                'GTRCAT',
+                '-m', substitution_model,
                 '-s', raxml_seqs_filepath,
                 '-n', self.name,
-                '-p', str(random.randint(0, sys.maxsize))] + raxml_args
+                '-p', str(random_seed)] + raxml_args
         # self._send_info("Executing: {}".format(" ".join(cmd)))
         if self.verbosity >= 2:
             stdout_pipe = None
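
The ``estimate_tree`` hunk above exposes the substitution model and the ``-p`` random seed as keyword arguments instead of hard-coding them. A minimal usage sketch, assuming a RAxML binary is available to the wrapper and an input alignment at a hypothetical path::

    import dendropy
    from dendropy.interop import raxml

    dna = dendropy.DnaCharacterMatrix.get(
        path="alignment.fasta",    # hypothetical input alignment
        schema="fasta",
    )
    rx = raxml.RaxmlRunner()
    tree = rx.estimate_tree(
        char_matrix=dna,
        substitution_model="GTRGAMMA",  # default remains 'GTRCAT'
        random_seed=42,                 # reproducible '-p' seed
    )
    print(tree.as_string(schema="newick"))
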
diff --git a/src/dendropy/interop/rstats.py b/src/dendropy/interop/rstats.py
index d2f9641f..1badfa61 100644
--- a/src/dendropy/interop/rstats.py
+++ b/src/dendropy/interop/rstats.py
@@ -42,7 +42,7 @@ class RService(object):
             env=None,
             rscript_path=RSCRIPT_EXECUTABLE,
             ):
-        """
+        r"""
         Executes a sequence of commands in R and returns the results. All the
         noise is sunk into the stderr return variable, and just the output
         comes out cleanly in the stdout return variable.
@@ -127,7 +127,7 @@ class RService(object):
         use this::
 
             returncode, stdout, stderr = RService.call([
-                "cat('hello, world\\n')",
+                "cat('hello, world\n')",
             ])
 
         or::
diff --git a/src/dendropy/interop/seqgen.py b/src/dendropy/interop/seqgen.py
index 0409f193..5aee61cf 100644
--- a/src/dendropy/interop/seqgen.py
+++ b/src/dendropy/interop/seqgen.py
@@ -83,7 +83,12 @@ class SeqGen(object):
         return None
     get_model = staticmethod(get_model)
 
-    def __init__(self, strongly_unique_tempfiles=False):
+    def __init__(
+        self,
+        strongly_unique_tempfiles=False,
+        rng=None,
+        seq_len=None,
+    ):
         """
         Sets up all properties, which (generally) map directly to command
         parameters of Seq-Gen.
@@ -98,11 +103,11 @@ class SeqGen(object):
         # python object specific attributes
         self.seqgen_path = 'seq-gen'
         self.rng_seed = None
-        self._rng = None
+        self._rng = rng
 
         # following are passed to seq-gen in one form or another
         self.char_model = 'HKY'
-        self.seq_len = None
+        self.seq_len = seq_len
         self.num_partitions = None
         self.scale_branch_lens = None
         self.scale_tree_len = None
@@ -114,9 +119,9 @@ class SeqGen(object):
         self.ti_tv = 0.5 # = kappa of 1.0, i.e. JC
         self.general_rates = None
         self.ancestral_seq_idx = None
-        self.output_text_append = None
         self.write_ancestral_seqs = False
         self.write_site_rates = False
+        self.record_separator = "//"
 
     def _get_rng(self):
         if self._rng is None:
@@ -131,7 +136,11 @@ class SeqGen(object):
     def _set_kappa(self, kappa):
         self.ti_tv = kappa * 2
 
-    def _compose_arguments(self):
+    def _compose_arguments(
+        self,
+        output_format="nexus",
+        append_file=None,
+    ):
         """
         Composes and returns a list of strings that make up the arguments to a Seq-Gen
         call, based on the attribute values of the object.
@@ -170,20 +179,30 @@ class SeqGen(object):
                 args.append("-r%s" % (",".join([str(r) for r in self.general_rates])))
         if self.ancestral_seq_idx:
             args.append("-k%s" % self.ancestral_seq_idx)
-        if self.output_text_append:
-            args.append("-x'%s'" % self.output_text_append)
         if self.write_ancestral_seqs:
             args.append("-wa")
         if self.write_site_rates:
             args.append("-wr")
 
+        if output_format == "nexus":
+            args.append("-on")
+        elif output_format == "fasta":
+            args.append("-of")
+        elif output_format == "phylip":
+            args.append("-op")
+        elif output_format == "relaxed-phylip":
+            args.append("-or")
+        elif output_format:
+            raise ValueError(output_format)
+
+        if append_file:
+            args.append("-x%s" % append_file)
+
         # following are controlled directly by the wrapper
         # silent running
         args.append("-q")
         # we explicitly pass a random number seed on each call
         args.append("-z%s" % self.rng.randint(0, sys.maxsize))
-        # force nexus
-        args.append("-on")
         # force one dataset at a time
         args.append("-n1")
         return args
@@ -194,17 +213,90 @@ class SeqGen(object):
             dataset=None,
             taxon_namespace=None,
             input_sequences=None,
-            **kwargs):
-        args=self._compose_arguments()
+            tree_serialization_kwargs=None,
+            **kwargs
+    ):
+        stdout = self.generate_raw(
+            trees=trees,
+            taxon_namespace=taxon_namespace,
+            input_sequences=input_sequences,
+            tree_serialization_kwargs=tree_serialization_kwargs,
+            output_format="nexus",
+        )
+        if taxon_namespace is None:
+            taxon_namespace = trees.taxon_namespace
+        if dataset is None:
+            dataset = dendropy.DataSet(**kwargs)
+            if taxon_namespace is not None:
+                dataset.attach_taxon_namespace(taxon_namespace)
+        dataset.read(data=stdout, schema="nexus")
+        return dataset
+
+    def generate_dicts(
+            self,
+            **kwargs
+    ):
+        if "output_format" in kwargs:
+            raise TypeError("Cannot specify 'output_format' when requesting 'dict' result")
+        kwargs["output_format"] = "fasta"
+        if "append_file" in kwargs:
+            raise TypeError("Cannot specify 'append_file' when requesting 'dict' result")
+        # with self.get_tempfile() as textf:
+        with self.get_tempfile() as textf:
+            textf.write(self.record_separator)
+            textf.write("\n")
+            textf.flush()
+            kwargs["append_file"] = textf.name
+            raw_result = self.generate_raw(**kwargs)
+        data_ds = []
+        data_d = {}
+        label_data = None
+        sequence_data = []
+        for idx, line in enumerate(raw_result.split("\n")):
+            line = line.strip()
+            if not line:
+                continue
+            if line.startswith(">") or line == self.record_separator:
+                if label_data is not None:
+                    assert label_data not in data_d
+                    data_d[label_data] = "".join(sequence_data)
+                    sequence_data = []
+                else:
+                    assert not sequence_data
+                if line == self.record_separator:
+                    if data_d:
+                        data_ds.append(data_d)
+                        data_d = {}
+                        label_data = None
+                else:
+                    label_data = line[1:]
+            else:
+                sequence_data.append(line)
+        return data_ds
+
+    def generate_raw(
+            self,
+            trees,
+            taxon_namespace=None,
+            input_sequences=None,
+            tree_serialization_kwargs=None,
+            **kwargs
+    ):
+        args=self._compose_arguments(**kwargs)
         # with open("x.txt", "w") as inputf:
         with self.get_tempfile() as inputf:
             if input_sequences is not None:
                 input_sequences.write_to_stream(inputf, schema="phylip",)
                 inputf.write("{}\n".format(len(trees)))
-            trees.write_to_stream(inputf,
+            if tree_serialization_kwargs is None:
+                tree_serialization_kwargs = {}
+            trees.write_to_stream(
+                    inputf,
                     "newick",
                     suppress_rooting=True,
-                    suppress_internal_node_labels=True)
+                    suppress_internal_node_labels=True,
+                    **tree_serialization_kwargs
+            )
             inputf.flush()
             args.append(inputf.name)
             # print("seq-gen args: = %s" % " ".join(args))
@@ -212,14 +304,9 @@ class SeqGen(object):
             stdout, stderr = processio.communicate(run)
             if stderr or run.returncode != 0:
                 raise RuntimeError("Seq-gen error: %s" % stderr)
-            if taxon_namespace is None:
-                taxon_namespace = trees.taxon_namespace
-            if dataset is None:
-                dataset = dendropy.DataSet(**kwargs)
-                if taxon_namespace is not None:
-                    dataset.attach_taxon_namespace(taxon_namespace)
-            dataset.read(data=stdout, schema="nexus")
-            return dataset
+            return stdout
+
+
 
 
 
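
With the ``seqgen.py`` refactor above, ``generate()`` now delegates to a new ``generate_raw()`` method that returns Seq-Gen's raw output and accepts an ``output_format`` of ``nexus``, ``fasta``, ``phylip``, or ``relaxed-phylip``. A minimal sketch, assuming ``seq-gen`` is installed and on the PATH::

    import dendropy
    from dendropy.interop import seqgen

    trees = dendropy.TreeList.get(
        data="((A:1,B:1):1,C:2);", schema="newick")
    sg = seqgen.SeqGen(seq_len=100)
    # As before: a DataSet parsed from Seq-Gen's NEXUS output.
    ds = sg.generate(trees)
    # New in this refactor: raw output in an alternative format.
    fasta_text = sg.generate_raw(trees, output_format="fasta")
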
diff --git a/src/dendropy/legacy/continuous.py b/src/dendropy/legacy/continuous.py
index c76ce262..319b4336 100644
--- a/src/dendropy/legacy/continuous.py
+++ b/src/dendropy/legacy/continuous.py
@@ -32,7 +32,7 @@ def simulate_continuous(node, rng=None, **kwargs):
             epilog="Note that this function is also available through 'dendropy.simulate.charsim.evolve_continuous_char(...)'.")
     return continuous.evolve_continuous_char(node, rng, **kwargs)
 
-class PhylogeneticIndependentConstrasts(continuous.PhylogeneticIndependentConstrasts):
+class PhylogeneticIndependentContrasts(continuous.PhylogeneticIndependentContrasts):
 
     def __init__(self,
             tree,
@@ -40,10 +40,10 @@ class PhylogeneticIndependentConstrasts(continuous.PhylogeneticIndependentConstr
             polytomy_strategy=None):
         deprecate.dendropy_deprecation_warning(
                 preamble="The 'dendropy.continuous' module has moved to 'dendropy.model.continuous'.",
-                old_construct="from dendropy import continuous\ncontinuous.PhylogeneticIndependentConstrasts(...)",
-                new_construct="from dendropy.model import continuous\ncontinuous.PhylogeneticIndependentConstrasts(...)",
+                old_construct="from dendropy import continuous\ncontinuous.PhylogeneticIndependentContrasts(...)",
+                new_construct="from dendropy.model import continuous\ncontinuous.PhylogeneticIndependentContrasts(...)",
                 )
-        continuous.PhylogeneticIndependentConstrasts.__init__(self,
+        continuous.PhylogeneticIndependentContrasts.__init__(self,
                 tree=tree,
                 char_matrix=char_matrix,
                 polytomy_strategy=polytomy_strategy)
diff --git a/src/dendropy/legacy/treesplit.py b/src/dendropy/legacy/treesplit.py
index 171f8a55..8b7b2ecb 100644
--- a/src/dendropy/legacy/treesplit.py
+++ b/src/dendropy/legacy/treesplit.py
@@ -93,8 +93,8 @@ d = bitprocessing.int_as_bitstring(..., reverse=True)""")
     return bitprocessing.int_as_bitstring(
             mask=split_mask,
             length=width,
-            symbol0=symbol0,
-            symbol1=symbol1,
+            symbol0=symbol1,
+            symbol1=symbol2,
             reverse=True)
 
 def find_edge_from_split(root, split_to_find, mask=-1):
diff --git a/src/dendropy/model/birthdeath.py b/src/dendropy/model/birthdeath.py
index a47def0c..d09d7af9 100644
--- a/src/dendropy/model/birthdeath.py
+++ b/src/dendropy/model/birthdeath.py
@@ -245,7 +245,7 @@ def birth_death_tree(birth_rate, death_rate, birth_rate_sd=0.0, death_rate_sd=0.
     elif target_num_extant_tips is None:
         raise ValueError("If 'gsa_ntax' is specified, 'num_extant_tips' must be specified")
     elif target_num_extinct_tips is not None:
-        raise ValueError("If 'gsa_ntax' is specified, 'num_extinct_tups' cannot be specified")
+        raise ValueError("If 'gsa_ntax' is specified, 'num_extinct_tips' cannot be specified")
     elif target_num_total_tips is not None:
         raise ValueError("If 'gsa_ntax' is specified, 'num_total_tips' cannot be specified")
     elif gsa_ntax < target_num_extant_tips:
@@ -1159,7 +1159,7 @@ def uniform_pure_birth_tree(taxon_namespace, birth_rate=1.0, rng=None):
     return tree
 
 def fit_pure_birth_model(**kwargs):
-    """
+    r"""
     Calculates the maximum-likelihood estimate of the birth rate of a set of *internal* node ages under a Yule (pure-birth) model.
 
     Requires either a |Tree| object or an interable of *internal* node ages to be passed in via keyword arguments ``tree`` or ``internal_node_ages``, respectively. The former is more convenient when doing one-off calculations, while the latter is more efficient if the list of internal node ages needs to be used in other places and you already have it calculated and want to avoid re-calculating it here.
@@ -1310,7 +1310,7 @@ def fit_pure_birth_model_to_tree(tree, ultrametricity_precision=constants.DEFAUL
 
 
 def birth_death_likelihood(**kwargs):
-    """
+    r"""
     Calculates the log-likelihood of a tree (or a set of internal nodes) under
     a birth death model.
 
diff --git a/src/dendropy/model/coalescent.py b/src/dendropy/model/coalescent.py
index 157b1d1c..5c627bac 100644
--- a/src/dendropy/model/coalescent.py
+++ b/src/dendropy/model/coalescent.py
@@ -32,10 +32,8 @@ from dendropy.calculate import combinatorics
 ###############################################################################
 ## Calculations and statistics
 
-def discrete_time_to_coalescence(n_genes,
-                                 pop_size=None,
-                                 n_to_coalesce=2,
-                                 rng=None):
+
+def discrete_time_to_coalescence(n_genes, pop_size=None, n_to_coalesce=2, rng=None):
     """
     A random draw from the "Kingman distribution" (discrete time version): Time
     to go from ``n_genes`` genes to ``n_genes``-1 genes in a discrete-time
@@ -75,11 +73,9 @@ def discrete_time_to_coalescence(n_genes,
     tmrca = probability.geometric_rv(p)
     return tmrca * time_units
 
-def time_to_coalescence(n_genes,
-        pop_size=None,
-        n_to_coalesce=2,
-        rng=None):
-    """
+
+def time_to_coalescence(n_genes, pop_size=None, n_to_coalesce=2, rng=None):
+    r"""
     A random draw from the "Kingman distribution" (discrete time version): Time
     to go from ``n_genes`` genes to ``n_genes``-1 genes in a continuous-time
     Wright-Fisher population of ``pop_size`` genes; i.e. waiting time until
@@ -87,7 +83,7 @@ def time_to_coalescence(n_genes,
 
     Given the number of gene lineages in a sample, ``n_genes``, and a
     population size, ``pop_size``, this function returns a random number from
-    an exponential distribution with rate $\\choose(``pop_size``, 2)$.
+    an exponential distribution with rate $\choose(``pop_size``, 2)$.
     ``pop_size`` is the effective *haploid* population size; i.e., number of gene
     in the population: 2 * N in a diploid population of N individuals,
     or N in a haploid population of N individuals. If ``pop_size`` is 1 or 0 or
@@ -98,11 +94,11 @@ def time_to_coalescence(n_genes,
     The coalescence time, or the waiting time for the coalescence, of two
     gene lineages evolving in a population with haploid size $N$ is an
     exponentially-distributed random variable with rate of $N$ an
-    expectation of $\\frac{1}{N}$).
+    expectation of $\frac{1}{N}$).
     The waiting time for coalescence of *any* two gene lineages in a sample of
     $n$ gene lineages evolving in a population with haploid size $N$ is an
-    exponentially-distributed random variable with rate of $\\choose{N, 2}$ and
-    an expectation of $\\frac{1}{\choose{N, 2}}$.
+    exponentially-distributed random variable with rate of $\choose{N, 2}$ and
+    an expectation of $\frac{1}{\choose{N, 2}}$.
 
     Parameters
     ----------
@@ -135,6 +131,7 @@ def time_to_coalescence(n_genes,
     tmrca = rng.expovariate(rate)
     return tmrca * time_units
 
+
 def expected_tmrca(n_genes, pop_size=None, n_to_coalesce=2):
     """
     Expected (mean) value for the Time to the Most Recent Common Ancestor of
@@ -164,17 +161,16 @@ def expected_tmrca(n_genes, pop_size=None, n_to_coalesce=2):
 
     """
     nc2 = combinatorics.choose(n_genes, n_to_coalesce)
-    tmrca = (float(1)/nc2)
+    tmrca = float(1) / nc2
     if pop_size is not None:
         return tmrca * pop_size
     else:
         return tmrca
 
-def coalesce_nodes(nodes,
-             pop_size=None,
-             period=None,
-             rng=None,
-             use_expected_tmrca=False):
+
+def coalesce_nodes(
+    nodes, pop_size=None, period=None, rng=None, use_expected_tmrca=False
+):
     """
     Returns a list of nodes that have not yet coalesced once ``period`` is
     exhausted.
@@ -317,7 +313,10 @@ def coalesce_nodes(nodes,
     # return the list of nodes that have not coalesced
     return nodes
 
-def node_waiting_time_pairs(tree, ultrametricity_precision=constants.DEFAULT_ULTRAMETRICITY_PRECISION):
+
+def node_waiting_time_pairs(
+    tree, ultrametricity_precision=constants.DEFAULT_ULTRAMETRICITY_PRECISION
+):
     """
     Returns a list of tuples of (nodes, coalescent interval time) on the tree.
     That is, each element in the list is tuple pair consisting of where: the
@@ -352,10 +351,13 @@ def node_waiting_time_pairs(tree, ultrametricity_precision=constants.DEFAULT_ULT
     for i, d in enumerate(ages[1:]):
         nd = d[0]
         prev_nd = ages[i][0]
-        intervals.append( (nd, nd.age - prev_nd.age) )
+        intervals.append((nd, nd.age - prev_nd.age))
     return intervals
 
-def extract_coalescent_frames(tree, ultrametricity_precision=constants.DEFAULT_ULTRAMETRICITY_PRECISION):
+
+def extract_coalescent_frames(
+    tree, ultrametricity_precision=constants.DEFAULT_ULTRAMETRICITY_PRECISION
+):
     """
     Returns a list of tuples describing the coalescent frames on the tree. That
     is, each element in the list is tuple pair consisting of where: the first
@@ -382,8 +384,10 @@ def extract_coalescent_frames(tree, ultrametricity_precision=constants.DEFAULT_U
         Returns dictionary, with key = number of alleles, and values = waiting
         time for coalescent for the given tree
     """
-    nwti = node_waiting_time_pairs(tree, ultrametricity_precision=ultrametricity_precision)
-#     num_genes = len(tree.taxon_namespace)
+    nwti = node_waiting_time_pairs(
+        tree, ultrametricity_precision=ultrametricity_precision
+    )
+    #     num_genes = len(tree.taxon_namespace)
     num_genes = len(tree.leaf_nodes())
     num_genes_wt = {}
     for n in nwti:
@@ -392,17 +396,18 @@ def extract_coalescent_frames(tree, ultrametricity_precision=constants.DEFAULT_U
     # num_alleles_list = sorted(num_genes_wt.keys(), reverse=True)
     return num_genes_wt
 
+
 def log_probability_of_coalescent_frames(coalescent_frames, haploid_pop_size):
-    """
+    r"""
     Under the classical neutral coalescent \citep{Kingman1982,
     Kingman1982b}, the waiting times between coalescent events in a
     sample of $k$ alleles segregating in a  population of (haploid) size
     $N_e$ is distributed exponentially with a rate parameter of
-    :math`\\frac{{k \choose 2}}{N_e}`::
+    :math:`\frac{{k \choose 2}}{N_e}`::
 
         .. math::
 
-            \\Pr(T) =  \\frac{{k \\choose 2}}{N_e} \\e{-  \\frac{{k \\choose 2}}{N_e} T},
+            \Pr(T) =  \frac{{k \choose 2}}{N_e} e^{-\frac{{k \choose 2}}{N_e} T},
 
     where $T$ is the length of  (chronological) time in which there are
     $k$ alleles in the sample (i.e., for $k$ alleles to coalesce into
@@ -410,26 +415,36 @@ def log_probability_of_coalescent_frames(coalescent_frames, haploid_pop_size):
     """
     lp = 0.0
     for k, t in coalescent_frames.items():
-        k2N = (float(k * (k-1)) / 2) / haploid_pop_size
-#         k2N = float(combinatorics.choose(k, 2)) / haploid_pop_size
-        lp =  lp + math.log(k2N) - (k2N * t)
+        k2N = (float(k * (k - 1)) / 2) / haploid_pop_size
+        #         k2N = float(combinatorics.choose(k, 2)) / haploid_pop_size
+        lp = lp + math.log(k2N) - (k2N * t)
     return lp
 
-def log_probability_of_coalescent_tree(tree, haploid_pop_size, ultrametricity_precision=constants.DEFAULT_ULTRAMETRICITY_PRECISION):
+
+def log_probability_of_coalescent_tree(
+    tree,
+    haploid_pop_size,
+    ultrametricity_precision=constants.DEFAULT_ULTRAMETRICITY_PRECISION,
+):
     """
     Wraps up extraction of coalescent frames and reporting of probability.
     """
-    return log_probability_of_coalescent_frames(extract_coalescent_frames(tree),
-            haploid_pop_size)
+    return log_probability_of_coalescent_frames(
+        extract_coalescent_frames(tree), haploid_pop_size
+    )
+
 
 ###############################################################################
 ## Tree Simulations
 
-def contained_coalescent_tree(containing_tree,
-        gene_to_containing_taxon_map,
-        edge_pop_size_attr="pop_size",
-        default_pop_size=1,
-        rng=None):
+
+def contained_coalescent_tree(
+    containing_tree,
+    gene_to_containing_taxon_map,
+    edge_pop_size_attr="pop_size",
+    default_pop_size=1,
+    rng=None,
+):
     """
     Returns a gene tree simulated under the coalescent contained within a
     population or species tree.
@@ -474,7 +489,7 @@ def contained_coalescent_tree(containing_tree,
             tree nodes that are uncoalesced as values.
 
     Note that this function does very much the same thing as
-    ``constrained_kingman()``, but provides a very different API.
+    ``constrained_kingman_tree()``, but provides a very different API.
     """
 
     if rng is None:
@@ -483,7 +498,7 @@ def contained_coalescent_tree(containing_tree,
     gene_tree_taxon_namespace = gene_to_containing_taxon_map.domain_taxon_namespace
     if gene_tree_taxon_namespace is None:
         gene_tree_taxon_namespace = dendropy.TaxonNamespace()
-        for gene_taxa in pop_gene_taxa_map:
+        for gene_taxa in gene_to_containing_taxon_map:
             for taxon in gene_taxa:
                 gene_tree_taxon_namespace.add(taxon)
     gene_tree = dendropy.Tree(taxon_namespace=gene_tree_taxon_namespace)
@@ -499,8 +514,8 @@ def contained_coalescent_tree(containing_tree,
                 gene_node = dendropy.Node()
                 gene_node.taxon = gene_taxon
                 pop_node_genes[nd].append(gene_node)
-            #gene_nodes = [dendropy.Node() for i in range(len(gene_taxa))]
-            #for gidx, gene_node in enumerate(gene_nodes):
+            # gene_nodes = [dendropy.Node() for i in range(len(gene_taxa))]
+            # for gidx, gene_node in enumerate(gene_nodes):
             #    gene_node.taxon = gene_taxa[gidx]
             #    pop_node_genes[nd].append(gene_node)
 
@@ -512,18 +527,22 @@ def contained_coalescent_tree(containing_tree,
             pop_size = default_pop_size
         if edge.head_node.parent_node is None:
             if len(pop_node_genes[edge.head_node]) > 1:
-                final = coalesce_nodes(nodes=pop_node_genes[edge.head_node],
-                                            pop_size=pop_size,
-                                            period=None,
-                                            rng=rng)
+                final = coalesce_nodes(
+                    nodes=pop_node_genes[edge.head_node],
+                    pop_size=pop_size,
+                    period=None,
+                    rng=rng,
+                )
             else:
                 final = pop_node_genes[edge.head_node]
             gene_tree.seed_node = final[0]
         else:
-            uncoal = coalesce_nodes(nodes=pop_node_genes[edge.head_node],
-                                         pop_size=pop_size,
-                                         period=edge.length,
-                                         rng=rng)
+            uncoal = coalesce_nodes(
+                nodes=pop_node_genes[edge.head_node],
+                pop_size=pop_size,
+                period=edge.length,
+                rng=rng,
+            )
             if edge.tail_node not in pop_node_genes:
                 pop_node_genes[edge.tail_node] = []
             pop_node_genes[edge.tail_node].extend(uncoal)
@@ -531,6 +550,7 @@ def contained_coalescent_tree(containing_tree,
     gene_tree.pop_node_genes = pop_node_genes
     return gene_tree
 
+
 def pure_kingman_tree(taxon_namespace, pop_size=1, rng=None):
     """
     Generates a tree under the unconstrained Kingman's coalescent process.
@@ -552,16 +572,15 @@ def pure_kingman_tree(taxon_namespace, pop_size=1, rng=None):
 
     """
     if rng is None:
-        rng = GLOBAL_RNG # use the global rng by default
+        rng = GLOBAL_RNG  # use the global rng by default
     nodes = [dendropy.Node(taxon=t) for t in taxon_namespace]
-    seed_node = coalesce_nodes(nodes=nodes,
-                                    pop_size=pop_size,
-                                    period=None,
-                                    rng=rng,
-                                    use_expected_tmrca=False)[0]
+    seed_node = coalesce_nodes(
+        nodes=nodes, pop_size=pop_size, period=None, rng=rng, use_expected_tmrca=False
+    )[0]
     tree = dendropy.Tree(taxon_namespace=taxon_namespace, seed_node=seed_node)
     return tree
 
+
 def pure_kingman_tree_shape(num_leaves, pop_size=1, rng=None):
     """
     Like :func:`dendropy.model.pure_kingman_tree`, but does not assign taxa to tips.
@@ -581,39 +600,41 @@ def pure_kingman_tree_shape(num_leaves, pop_size=1, rng=None):
 
     """
     if rng is None:
-        rng = GLOBAL_RNG # use the global rng by default
+        rng = GLOBAL_RNG  # use the global rng by default
     nodes = [dendropy.Node() for t in range(num_leaves)]
-    seed_node = coalesce_nodes(nodes=nodes,
-                                    pop_size=pop_size,
-                                    period=None,
-                                    rng=rng,
-                                    use_expected_tmrca=False)[0]
+    seed_node = coalesce_nodes(
+        nodes=nodes, pop_size=pop_size, period=None, rng=rng, use_expected_tmrca=False
+    )[0]
     tree = dendropy.Tree(seed_node=seed_node)
     return tree
 
+
 def mean_kingman_tree(taxon_namespace, pop_size=1, rng=None):
     """
     Returns a tree with coalescent intervals given by the expected times under
     Kingman's neutral coalescent.
     """
     if rng is None:
-        rng = GLOBAL_RNG # use the global rng by default
+        rng = GLOBAL_RNG  # use the global rng by default
     nodes = [dendropy.Node(taxon=t) for t in taxon_namespace]
-    seed_node = coalesce_nodes(nodes=nodes,
-                                    pop_size=pop_size,
-                                    period=None,
-                                    rng=rng,
-                                    use_expected_tmrca=True)[0]
+    seed_node = coalesce_nodes(
+        nodes=nodes, pop_size=pop_size, period=None, rng=rng, use_expected_tmrca=True
+    )[0]
     tree = dendropy.Tree(taxon_namespace=taxon_namespace, seed_node=seed_node)
     return tree
 
-def constrained_kingman_tree(pop_tree,
-                        gene_tree_list=None,
-                        rng=None,
-                        gene_node_label_fn=None,
-                        num_genes_attr='num_genes',
-                        pop_size_attr='pop_size',
-                        decorate_original_tree=False):
+
+def constrained_kingman_tree(
+    pop_tree,
+    gene_tree_list=None,
+    rng=None,
+    gene_node_label_fn=None,
+    gene_sampling_strategy="random_uniform",
+    num_genes=None,
+    num_genes_attr="num_genes",
+    pop_size_attr="pop_size",
+    decorate_original_tree=False,
+):
     """
     Given a population tree, ``pop_tree`` this will return a *pair of
     trees*: a gene tree simulated on this population tree based on
@@ -622,16 +643,23 @@ def constrained_kingman_tree(pop_tree,
     uncoalesced nodes from the gene tree associated with the given
     node from the population tree.
 
-    ``pop_tree`` should be a DendroPy Tree object or an object
-    of a class derived from this with the following attribute
-    ``num_genes`` -- the number of gene samples from each population in the
-    present.  Each edge on the tree should also have the attribute
+    ``pop_tree``: a Tree object.
 
-    ``pop_size_attr`` is the attribute name of the edges of ``pop_tree`` that
-    specify the population size. By default it is ``pop_size``. The should
-    specify the effective *haploid* population size; i.e., number of gene
-    in the population: 2 * N in a diploid population of N individuals,
-    or N in a haploid population of N individuals.
+    ``gene_sampling_strategy``: string
+        - "node_attribute": Will expect each leaf of ``pop_tree`` to
+          have an attribute, ``num_genes``, that specifies the number
+          of genes to be sampled from that population.
+        - "fixed_per_population": Will assign ``num_genes`` to each population.
+        - "random_uniform": Will assign genes to leaves with
+          uniform probability until ``num_genes`` genes have been
+          assigned.
+
+    ``pop_size_attr``: string
+        The attribute name of the edges of ``pop_tree`` that
+        specify the population size. By default it is ``pop_size``. It should
+        specify the effective *haploid* population size; i.e., the number of
+        genes in the population: 2 * N in a diploid population of N
+        individuals, or N in a haploid population of N individuals.
 
     If ``pop_size`` is 1 or 0 or None, then the edge lengths of ``pop_tree`` is
     taken to be in haploid population units; i.e. where 1 unit equals 2N
@@ -652,13 +680,19 @@ def constrained_kingman_tree(pop_tree,
     each node of the population tree is added to the original (input) population
     tree instead of a copy.
 
-    Note that this function does very much the same thing as ``contained_coalescent()``,
+    If ``num_genes`` is None, then it will be set to 1 under the
+    "node_attribute" strategy (serving as a fallback default for nodes that do
+    not specify ``num_genes_attr``) or the leaf count of ``pop_tree`` under the
+    ``random_uniform`` strategy.
+
+    Note that this function does very much the same thing as
+    ``contained_coalescent_tree()``,
     but provides a very different API.
     """
 
     # get our random number generator
     if rng is None:
-        rng = GLOBAL_RNG # use the global rng by default
+        rng = GLOBAL_RNG  # use the global rng by default
 
     if gene_tree_list is not None:
         gtaxa = gene_tree_list.taxon_namespace
@@ -668,16 +702,51 @@ def constrained_kingman_tree(pop_tree,
     if gene_node_label_fn is None:
         gene_node_label_fn = lambda x, y: "%s_%02d" % (x, y)
 
+    # @MAM taking a stab at a reasonable default for num_genes,
+    # it may make sense to do something else entirely here
+    if num_genes is None:
+        if gene_sampling_strategy == "random_uniform":
+            num_genes = sum(1 for __ in pop_tree.leaf_node_iter())
+        elif gene_sampling_strategy == "node_attribute":
+            num_genes = 1
+        else:
+            num_genes = None
+
     # we create a set of gene nodes for each leaf node on the population
     # tree, and associate those gene nodes to the leaf by assignment
     # of 'taxon'.
-    for leaf_count, leaf in enumerate(pop_tree.leaf_node_iter()):
-        gene_nodes = []
-        for gene_count in range(getattr(leaf, num_genes_attr)):
+    if gene_sampling_strategy in ("node_attribute", "fixed_per_population"):
+        for leaf_count, leaf in enumerate(pop_tree.leaf_node_iter()):
+            gene_nodes = []
+            if gene_sampling_strategy == "node_attribute":
+                node_ngenes = getattr(leaf, num_genes_attr)
+            else:
+                node_ngenes = num_genes
+            for gene_count in range(node_ngenes):
+                gene_node = dendropy.Node()
+                gene_node.taxon = gtaxa.require_taxon(
+                    label=gene_node_label_fn(leaf.taxon.label, gene_count + 1)
+                )
+                gene_nodes.append(gene_node)
+            leaf.gene_nodes = gene_nodes
+    elif gene_sampling_strategy == "random_uniform":
+        gene_count = 0
+        leaves = list(pop_tree.leaf_node_iter())
+        while gene_count < num_genes:
+            gene_count += 1
+            leaf = rng.choice(leaves)
             gene_node = dendropy.Node()
-            gene_node.taxon = gtaxa.require_taxon(label=gene_node_label_fn(leaf.taxon.label, gene_count+1))
-            gene_nodes.append(gene_node)
-        leaf.gene_nodes = gene_nodes
+            gene_node.taxon = gtaxa.require_taxon(
+                label=gene_node_label_fn(leaf.taxon.label, gene_count)
+            )
+            try:
+                leaf.gene_nodes.append(gene_node)
+            except AttributeError:  # first gene assigned to this leaf
+                leaf.gene_nodes = [gene_node]
+    else:
+        raise ValueError("Unrecognized strategy '{}'".format(
+            gene_sampling_strategy
+        ))
 
     # We iterate through the edges of the population tree in post-order,
     # i.e., visiting child edges before we visit parent edges. For
@@ -698,30 +767,33 @@ def constrained_kingman_tree(pop_tree,
     gene_tree = dendropy.Tree()
     gene_tree.taxon_namespace = gtaxa
     for edge in working_poptree.postorder_edge_iter():
-
+        if not hasattr(edge.head_node, "gene_nodes"):
+            continue
         # if mrca root, run unconstrained coalescent
+        if hasattr(edge, pop_size_attr):
+            pop_size = getattr(edge, pop_size_attr)
+        else:
+            # this means all our time will be in population units
+            pop_size = 1.0
         if edge.head_node.parent_node is None:
             if len(edge.head_node.gene_nodes) > 1:
-                final = coalesce_nodes(nodes=edge.head_node.gene_nodes,
-                                            pop_size=pop_size,
-                                            period=None,
-                                            rng=rng)
+                final = coalesce_nodes(
+                    nodes=edge.head_node.gene_nodes,
+                    pop_size=pop_size,
+                    period=None,
+                    rng=rng,
+                )
             else:
                 final = edge.head_node.gene_nodes
             gene_tree.seed_node = final[0]
         else:
-
-            if hasattr(edge, pop_size_attr):
-                pop_size = getattr(edge, pop_size_attr)
-            else:
-                # this means all our time will be in population units
-                pop_size = 1
-
-            uncoal = coalesce_nodes(nodes=edge.head_node.gene_nodes,
-                                         pop_size=pop_size,
-                                         period=edge.length,
-                                         rng=rng)
-            if not hasattr(edge.tail_node, 'gene_nodes'):
+            uncoal = coalesce_nodes(
+                nodes=edge.head_node.gene_nodes,
+                pop_size=pop_size,
+                period=edge.length,
+                rng=rng,
+            )
+            if not hasattr(edge.tail_node, "gene_nodes"):
                 edge.tail_node.gene_nodes = []
             edge.tail_node.gene_nodes.extend(uncoal)
 
diff --git a/src/dendropy/model/continuous.py b/src/dendropy/model/continuous.py
index 9ad8e32f..6f43719e 100644
--- a/src/dendropy/model/continuous.py
+++ b/src/dendropy/model/continuous.py
@@ -27,7 +27,7 @@ import operator
 import dendropy
 from dendropy.utility import GLOBAL_RNG
 
-class PhylogeneticIndependentConstrasts(object):
+class PhylogeneticIndependentContrasts(object):
     """
     Phylogenetic Independent Contrasts.
 
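
The rename above fixes the "Constrasts" misspelling in both the legacy shim and ``dendropy.model.continuous``. A minimal construction sketch matching the constructor arguments shown in the legacy hunk earlier in this diff; the tree and character-matrix files are hypothetical::

    import dendropy
    from dendropy.model import continuous

    tree = dendropy.Tree.get(
        path="pythonidae.tre", schema="nexus")
    chars = dendropy.ContinuousCharacterMatrix.get(
        path="pythonidae_traits.nex", schema="nexus",
        taxon_namespace=tree.taxon_namespace)
    pic = continuous.PhylogeneticIndependentContrasts(
        tree=tree, char_matrix=chars)
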
diff --git a/src/dendropy/model/discrete.py b/src/dendropy/model/discrete.py
index 42a7f453..7574586f 100644
--- a/src/dendropy/model/discrete.py
+++ b/src/dendropy/model/discrete.py
@@ -117,6 +117,7 @@ class DiscreteCharacterEvolver(object):
             seq_model = getattr(tree, self.seq_model_attr, None)
 
         # loop through edges in preorder (root->tips)
+        n_prev_seq = None  # to mollify linter undefined variable warning
         for edge in tree.preorder_edge_iter():
             node = edge.head_node
             if not hasattr(node, self.seq_attr):
@@ -124,6 +125,7 @@ class DiscreteCharacterEvolver(object):
             seq_list = getattr(node, self.seq_attr)
             if edge.tail_node:
                 par = edge.tail_node
+                assert n_prev_seq is not None
                 if len(seq_list) != n_prev_seq:
                     raise ValueError("'%s' length varies among nodes" % self.seq_attr)
                 par_seq = getattr(par, self.seq_attr)[-1]
diff --git a/src/dendropy/model/parsimony.py b/src/dendropy/model/parsimony.py
index 2af7fe8e..0eac99c8 100644
--- a/src/dendropy/model/parsimony.py
+++ b/src/dendropy/model/parsimony.py
@@ -397,7 +397,7 @@ def parsimony_score(
 
     """
     if tree.taxon_namespace is not chars.taxon_namespace:
-        raise TaxonNamespaceIdentityError(tree, data)
+        raise TaxonNamespaceIdentityError(tree, chars)
     taxon_state_sets_map = chars.taxon_state_sets_map(gaps_as_missing=gaps_as_missing)
     nodes = tree.postorder_node_iter()
     pscore = fitch_down_pass(nodes,
diff --git a/src/dendropy/model/protractedspeciation.py b/src/dendropy/model/protractedspeciation.py
index c6898278..da2c2f12 100644
--- a/src/dendropy/model/protractedspeciation.py
+++ b/src/dendropy/model/protractedspeciation.py
@@ -38,7 +38,7 @@ from dendropy.calculate import probability
 def _D(speciation_initiation_rate,
        speciation_completion_rate,
        incipient_species_extinction_rate):
-    """
+    r"""
     Returns value of D, as given in eq. 5 in Etienne et al.
     (2014).
 
@@ -72,7 +72,7 @@ def _D(speciation_initiation_rate,
 def _phi(speciation_initiation_rate,
        speciation_completion_rate,
        incipient_species_extinction_rate):
-    """
+    r"""
     Returns value of $\varphi$, as given in eq. 6 in Etienne et al.
     (2014).
 
@@ -106,7 +106,7 @@ def expected_duration_of_speciation(
         incipient_species_extinction_rate,
         D=None,
         ):
-    """
+    r"""
     Returns mean duration of speciation, following Eqs. 4 in Etienne et al.
     (2014):
 
@@ -160,7 +160,7 @@ def probability_of_duration_of_speciation(
         D=None,
         phi=None,
         ):
-    """
+    r"""
     Returns probability of duration of speciation, tau, following Eqs. 6
     in Etienne et al.
 
@@ -215,7 +215,7 @@ def log_probability_of_duration_of_speciation(
         D=None,
         phi=None,
         ):
-    """
+    r"""
     Returns probability of duration of speciation, tau, following Eqs. 6
     in Etienne et al.
 
@@ -269,7 +269,7 @@ def maximum_probability_duration_of_speciation(
         D=None,
         phi=None,
         ):
-    """
+    r"""
     Returns duration of speciation that maximizes probability under given
     process parameters, following eq. 8 of Etienne et al (2014).
 
@@ -882,7 +882,7 @@ class ProtractedSpeciationProcess(object):
         elif self.species_lineage_sampling_scheme == "random":
             lt = self.rng.sample(lineage_collection, len(lineage_collection))
         else:
-            raise ValueError(sampling_scheme)
+            raise ValueError(self.species_lineage_sampling_scheme)
         seen_species_ids = set()
         to_restore_species_extinction_times = {}
         for lineage_entry in lt:
diff --git a/src/dendropy/simulate/popgensim.py b/src/dendropy/simulate/popgensim.py
index f5041078..7e7f0c92 100644
--- a/src/dendropy/simulate/popgensim.py
+++ b/src/dendropy/simulate/popgensim.py
@@ -100,7 +100,7 @@ class FragmentedPopulations(object):
     def generate_pop_tree(self, species_name, samples_per_pop=10):
         tree_data = { 'sp': species_name, 'divt': self.div_time_gens }
         desc_lineages = []
-        for i in xrange(self.num_desc_pops):
+        for i in range(self.num_desc_pops):
             tree_data['id'] = i+1
             desc_lineages.append("%(sp)s%(id)d:%(divt)d" % tree_data)
         tree_string = "(" + (",".join(desc_lineages)) + ("):%d" % 0) #% (self.num_desc_pops * self.desc_pop_size * 10))
diff --git a/src/dendropy/utility/bibtex.py b/src/dendropy/utility/bibtex.py
index 34691871..dba5e9b4 100644
--- a/src/dendropy/utility/bibtex.py
+++ b/src/dendropy/utility/bibtex.py
@@ -64,14 +64,14 @@ def _clean_parsed_text(text):
         text = text[1:-1]
     elif text.startswith('"') and text.endswith('"'):
         text = text[1:-1]
-    text = re.sub("[\s]+", " ", text).strip()
+    text = re.sub(r"[\s]+", " ", text).strip()
     return text
 
 def _format_bibtex_value(text, col_start=1, wrap=True, width=78):
     """
     Formats text of a BibTeX field.
     """
-    ftext = re.sub("[\s]+", " ", text).strip()
+    ftext = re.sub(r"[\s]+", " ", text).strip()
     col_indent = " " * col_start
     if not ftext[0].isdigit():
         if wrap:
diff --git a/src/dendropy/utility/bitprocessing.py b/src/dendropy/utility/bitprocessing.py
index 1840c478..aeb6f58c 100644
--- a/src/dendropy/utility/bitprocessing.py
+++ b/src/dendropy/utility/bitprocessing.py
@@ -33,7 +33,7 @@ if sys.hexversion >= 0x03010000:
         """
         try:
             return n.bit_length()
-        except AttributeError:
+        except AttributeError:  # if n is None
             return 0
 else:
     def bit_length(n):
@@ -43,9 +43,12 @@ else:
         index of the highest set bit, or the width of the bitstring
         representing the integer.
         """
-        s = bin(n)          # binary representation:  bin(-37) --> '-0b100101'
-        s = s.lstrip('-0b') # remove leading zeros and minus sign
-        return len(s)       # len('100101') --> 6
+        try:
+            s = bin(n)          # binary representation:  bin(-37) --> '-0b100101'
+            s = s.lstrip('-0b') # remove leading zeros and minus sign
+            return len(s)       # len('100101') --> 6
+        except TypeError:  # if n is None
+            return 0
 
 def int_as_bitstring(n, length=None, symbol0=None, symbol1=None, reverse=False):
     if length is None:
diff --git a/src/dendropy/utility/cli.py b/src/dendropy/utility/cli.py
index b6dbdad8..4123e72b 100644
--- a/src/dendropy/utility/cli.py
+++ b/src/dendropy/utility/cli.py
@@ -26,7 +26,8 @@ import argparse
 import os
 import sys
 if sys.hexversion < 0x03000000:
-    input_str = raw_input
+    import __builtin__ as builtins  # verbosity added to mollify linter
+    input_str = builtins.raw_input
 else:
     input_str = input
 import textwrap
diff --git a/src/dendropy/utility/debug.py b/src/dendropy/utility/debug.py
index 1b554dad..89a90536 100644
--- a/src/dendropy/utility/debug.py
+++ b/src/dendropy/utility/debug.py
@@ -21,8 +21,11 @@
 Various data structures.
 """
 
+import inspect
+import sys
+
 def get_calling_code_info(stack_level):
-    frame = inspect.stack()[stacklevel]
+    frame = inspect.stack()[stack_level]
     filename = inspect.getfile(frame[0])
     lineno = inspect.getlineno(frame[0])
     return filename, lineno
diff --git a/src/dendropy/utility/filesys.py b/src/dendropy/utility/filesys.py
index 18ee6bea..68217c8a 100644
--- a/src/dendropy/utility/filesys.py
+++ b/src/dendropy/utility/filesys.py
@@ -136,7 +136,10 @@ class LineReadingThread(Thread):
         Returns None if the stop event is triggered.
         """
         if self.wait_for_file_to_appear(filename):
-            return open(filename, "rU")
+            try:
+                return open(filename, "rU")
+            except ValueError:
+                return open(filename, "r")
         return None
 
 
diff --git a/src/dendropy/utility/terminal.py b/src/dendropy/utility/terminal.py
index 8fbbc48a..2d4fd967 100644
--- a/src/dendropy/utility/terminal.py
+++ b/src/dendropy/utility/terminal.py
@@ -17,6 +17,7 @@
 ##
 ##############################################################################
 
+import os
 import sys
 
 def ttysize():
@@ -27,8 +28,8 @@ def ttysize():
         if not ln1:
             raise ValueError('tty size not supported for input')
         vals = {'rows':None, 'columns':None}
-        for ph in string.split(ln1, ';'):
-            x = string.split(ph)
+        for ph in str.split(ln1, ';'):
+            x = str.split(ph)
             if len(x) == 2:
                 vals[x[0]] = x[1]
                 vals[x[1]] = x[0]
diff --git a/src/dendropy/utility/textprocessing.py b/src/dendropy/utility/textprocessing.py
index 610baaeb..2ee24445 100644
--- a/src/dendropy/utility/textprocessing.py
+++ b/src/dendropy/utility/textprocessing.py
@@ -39,9 +39,12 @@ except ImportError:
 ## Unicode/String Conversions
 
 try:
-    ENCODING = locale.getdefaultlocale()[1]
-except ValueError:
-    ENCODING = None # let default value be assigned below
+    ENCODING = locale.getencoding()
+except AttributeError:  # locale.getencoding() added in Python 3.11
+    try:
+        ENCODING = locale.getdefaultlocale()[1]
+    except ValueError:
+        ENCODING = None # let default value be assigned below
 
 if ENCODING == None:
     ENCODING = 'UTF-8'
@@ -71,8 +74,9 @@ def parse_curie_standard_qualified_name(prefixed_name, sep=":"):
     # https://github.com/mtholder/peyotl
     # https://github.com/mtholder/peyotl/blob/c3a544211edc669e664bae28095d52cecfa004f3/peyotl/utility/str_util.py#L5-L25
 if sys.version_info.major == 2:
+    import __builtin__ as builtins  # extra verbosity to mollify linter
     def is_str_type(x):
-        return isinstance(x, basestring)
+        return isinstance(x, builtins.basestring)
 else:
     def is_str_type(x):
         return isinstance(x, str)
diff --git a/tests/data/chars/crotaphytus_bicinctores.cytb.aligned.nexml b/tests/data/chars/crotaphytus_bicinctores.cytb.aligned.nexml
index c58252d4..534ab8c3 100644
--- a/tests/data/chars/crotaphytus_bicinctores.cytb.aligned.nexml
+++ b/tests/data/chars/crotaphytus_bicinctores.cytb.aligned.nexml
@@ -4,7 +4,7 @@
     xsi:schemaLocation="http://www.nexml.org/2009 ../xsd/nexml.xsd"
     xmlns:dwc="http://rs.tdwg.org/dwc/terms/"
     xmlns:to="http://rs.tdwg.org/ontology/voc/TaxonOccurrence#"
-    xmlns:dendropy="http://packages.python.org/DendroPy/"
+    xmlns:dendropy="http://pypi.org/project/DendroPy/"
     xmlns:dcterms="http://purl.org/dc/terms/"
     xmlns="http://www.nexml.org/2009"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
diff --git a/tests/data/chars/crotaphytus_bicinctores.nd2.aligned.nexml b/tests/data/chars/crotaphytus_bicinctores.nd2.aligned.nexml
index 06abe6bb..38d594cf 100644
--- a/tests/data/chars/crotaphytus_bicinctores.nd2.aligned.nexml
+++ b/tests/data/chars/crotaphytus_bicinctores.nd2.aligned.nexml
@@ -4,7 +4,7 @@
     xsi:schemaLocation="http://www.nexml.org/2009 ../xsd/nexml.xsd"
     xmlns:dwc="http://rs.tdwg.org/dwc/terms/"
     xmlns:to="http://rs.tdwg.org/ontology/voc/TaxonOccurrence#"
-    xmlns:dendropy="http://packages.python.org/DendroPy/"
+    xmlns:dendropy="http://pypi.org/project/DendroPy/"
     xmlns:dcterms="http://purl.org/dc/terms/"
     xmlns="http://www.nexml.org/2009"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
diff --git a/tests/data/trees/dendropy-test-trees-multifurcating-rooted-annotated.nexml b/tests/data/trees/dendropy-test-trees-multifurcating-rooted-annotated.nexml
index 636b9366..28dc60a2 100644
--- a/tests/data/trees/dendropy-test-trees-multifurcating-rooted-annotated.nexml
+++ b/tests/data/trees/dendropy-test-trees-multifurcating-rooted-annotated.nexml
@@ -2,7 +2,7 @@
 <nex:nexml
     version="0.9"
     xsi:schemaLocation="http://www.nexml.org/2009 ../xsd/nexml.xsd"
-    xmlns:dendropy="http://packages.python.org/DendroPy/"
+    xmlns:dendropy="http://pypi.org/project/DendroPy/"
     xmlns="http://www.nexml.org/2009"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xmlns:xml="http://www.w3.org/XML/1998/namespace"
diff --git a/tests/data/trees/dendropy-test-trees-n33-unrooted-annotated-x10a.nexml b/tests/data/trees/dendropy-test-trees-n33-unrooted-annotated-x10a.nexml
index 005b8aa8..0371e683 100644
--- a/tests/data/trees/dendropy-test-trees-n33-unrooted-annotated-x10a.nexml
+++ b/tests/data/trees/dendropy-test-trees-n33-unrooted-annotated-x10a.nexml
@@ -2,7 +2,7 @@
 <nex:nexml
     version="0.9"
     xsi:schemaLocation="http://www.nexml.org/2009 ../xsd/nexml.xsd"
-    xmlns:dendropy="http://packages.python.org/DendroPy/"
+    xmlns:dendropy="http://pypi.org/project/DendroPy/"
     xmlns="http://www.nexml.org/2009"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xmlns:xml="http://www.w3.org/XML/1998/namespace"
diff --git a/tests/data/trees/pythonidae.annotated.bad.nexml b/tests/data/trees/pythonidae.annotated.bad.nexml
index ef3433d4..0c2a98a4 100644
--- a/tests/data/trees/pythonidae.annotated.bad.nexml
+++ b/tests/data/trees/pythonidae.annotated.bad.nexml
@@ -3,7 +3,7 @@
     version="0.9"
     xsi:schemaLocation="http://www.nexml.org/2009 ../xsd/nexml.xsd"
     xmlns:skos="http://www.w3.org/2004/02/skos/core#"
-    xmlns:dendropy="http://packages.python.org/DendroPy/"
+    xmlns:dendropy="http://pypi.org/project/DendroPy/"
     xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     xmlns="http://www.nexml.org/2009"
diff --git a/tests/data/trees/pythonidae.annotated.nexml b/tests/data/trees/pythonidae.annotated.nexml
index 9a7f3067..ce175978 100644
--- a/tests/data/trees/pythonidae.annotated.nexml
+++ b/tests/data/trees/pythonidae.annotated.nexml
@@ -3,7 +3,7 @@
     version="0.9"
     xsi:schemaLocation="http://www.nexml.org/2009 ../xsd/nexml.xsd"
     xmlns:skos="http://www.w3.org/2004/02/skos/core#"
-    xmlns:dendropy="http://packages.python.org/DendroPy/"
+    xmlns:dendropy="http://pypi.org/project/DendroPy/"
     xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     xmlns="http://www.nexml.org/2009"

Debdiff

[The following lists of changes regard files as different if they have different names, permissions or owners.]

Files in second set of .debs but not in first

-rw-r--r--  root/root   /usr/lib/python3/dist-packages/DendroPy-4.6.0.egg-info/PKG-INFO
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/DendroPy-4.6.0.egg-info/dependency_links.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/DendroPy-4.6.0.egg-info/requires.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/DendroPy-4.6.0.egg-info/top_level.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/DendroPy-4.6.0.egg-info/zip-safe
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/dendropy/datamodel/treemodel/__init__.py
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/dendropy/datamodel/treemodel/_bipartition.py
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/dendropy/datamodel/treemodel/_edge.py
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/dendropy/datamodel/treemodel/_node.py
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/dendropy/datamodel/treemodel/_tree.py

Files in first set of .debs but not in second

-rw-r--r--  root/root   /usr/lib/python3/dist-packages/DendroPy-4.5.2.egg-info/PKG-INFO
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/DendroPy-4.5.2.egg-info/dependency_links.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/DendroPy-4.5.2.egg-info/entry_points.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/DendroPy-4.5.2.egg-info/requires.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/DendroPy-4.5.2.egg-info/top_level.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/DendroPy-4.5.2.egg-info/zip-safe
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/dendropy/datamodel/treemodel.py

No differences were encountered between the control files of package python3-dendropy

No differences were encountered between the control files of package sumtrees
