uncommitted - pynpoint

Ready changes

Summary

Import uploads missing from VCS:

Diff

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
new file mode 100644
index 0000000..d3bdbb0
--- /dev/null
+++ b/.github/workflows/ci.yml
@@ -0,0 +1,47 @@
+name: CI
+
+on: [push, pull_request]
+
+jobs:
+  build:
+
+    runs-on: ubuntu-latest
+
+    strategy:
+      matrix:
+        python-version: [3.7, 3.8, 3.9]
+
+    steps:
+      - uses: actions/checkout@v2
+
+      - name: Setup Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v2
+        with:
+          python-version: ${{ matrix.python-version }}
+
+      - name: Install dependencies
+        run: |
+          sudo apt-get install pandoc
+          pip install --upgrade pip
+          pip install flake8 pytest pytest-cov sphinx
+          pip install -r docs/requirements.txt
+          pip install -r requirements.txt
+          pip install .
+
+      - name: Lint with flake8
+        run: |
+          # stop the build if there are Python syntax errors or undefined names
+          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
+          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
+          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
+
+      - name: Build documentation
+        run: |
+          make docs
+
+      - name: Run pytest
+        run: |
+          make test
+
+      - name: Upload coverage to Codecov
+        uses: codecov/codecov-action@v2
diff --git a/.gitignore b/.gitignore
index 67bbd7e..4452e2e 100644
--- a/.gitignore
+++ b/.gitignore
@@ -36,3 +36,14 @@ pynpoint.egg-info/*
 
 # Vim
 .tags
+
+# Tutorials
+docs/tutorials/PynPoint_config.ini
+docs/tutorials/PynPoint_database.hdf5
+docs/tutorials/betapic_naco_mp.hdf5
+docs/tutorials/hd142527_zimpol_h-alpha.tgz
+docs/tutorials/input
+docs/tutorials/.ipynb_checkpoints
+docs/tutorials/*.fits
+docs/tutorials/*.dat
+docs/tutorials/*.npy
diff --git a/.readthedocs.yml b/.readthedocs.yml
index 46a1d21..8384686 100644
--- a/.readthedocs.yml
+++ b/.readthedocs.yml
@@ -7,6 +7,7 @@ build:
     image: latest
 
 python:
-   version: 3.7
+   version: 3.8
    install:
       - requirements: requirements.txt
+      - requirements: docs/requirements.txt
diff --git a/.travis.yml b/.travis.yml
deleted file mode 100644
index 901a2e6..0000000
--- a/.travis.yml
+++ /dev/null
@@ -1,34 +0,0 @@
-language: python
-
-os: linux
-
-dist: xenial
-
-python:
-  - 3.6
-  - 3.7
-
-env:
-  - NUMBA_DISABLE_JIT=1
-
-before_install:
-  - sudo apt-get install ncompress
-
-install:
-  - pip install -r requirements.txt
-  - pip install pytest-cov==2.7
-  - pip install coveralls
-  - pip install PyYAML
-  - pip install sphinx
-  - pip install sphinx-rtd-theme
-
-script:
-  - make docs
-  - make test
-
-after_success:
-  - coveralls
-
-notifications:
-  - webhooks: https://coveralls.io/webhook
-  - email: false
diff --git a/LICENSE b/LICENSE
index a0643a7..f7c19e4 100644
--- a/LICENSE
+++ b/LICENSE
@@ -632,7 +632,7 @@ state the exclusion of warranty; and each file should have at least
 the "copyright" line and a pointer to where the full notice is found.
 
     Pipeline for processing and analysis of high-contrast imaging data
-    Copyright (C) 2014-2020  Tomas Stolker & Markus Bonse
+    Copyright (C) 2014-2021  Tomas Stolker, Markus Bonse, Sascha Quanz, and Adam Amara
 
     This program is free software: you can redistribute it and/or modify
     it under the terms of the GNU General Public License as published by
@@ -652,7 +652,7 @@ Also add information on how to contact you by electronic and paper mail.
   If the program does terminal interaction, make it output a short
 notice like this when it starts in an interactive mode:
 
-    PynPoint  Copyright (C) 2014-2020  Tomas Stolker & Markus Bonse
+    PynPoint  Copyright (C) 2014-2021  Tomas Stolker, Markus Bonse, Sascha Quanz, and Adam Amara
     This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
     This is free software, and you are welcome to redistribute it
     under certain conditions; type `show c' for details.
diff --git a/Makefile b/Makefile
index e7849fb..5b77e8c 100644
--- a/Makefile
+++ b/Makefile
@@ -1,10 +1,10 @@
-.PHONY: help clean clean-build clean-python clean-test test test-all coverage docs
+.PHONY: help pypi pypi-test test coverage docs clean clean-build clean-python clean-test
 
 help:
-	@echo "pypi - submit package to the PyPI server"
-	@echo "pypi-test - submit package to the TestPyPI server"
+	@echo "pypi - submit to PyPI server"
+	@echo "pypi-test - submit to TestPyPI server"
 	@echo "docs - generate Sphinx documentation"
-	@echo "test - run test cases"
+	@echo "test - run unit tests"
 	@echo "coverage - check code coverage"
 	@echo "clean - remove all artifacts"
 	@echo "clean-build - remove build artifacts"
@@ -26,11 +26,11 @@ docs:
 	rm -f docs/pynpoint.processing.rst
 	rm -f docs/pynpoint.util.rst
 	sphinx-apidoc -o docs pynpoint
 	$(MAKE) -C docs clean
 	$(MAKE) -C docs html
 
 test:
-	pytest --cov=pynpoint tests
+	pytest --cov=pynpoint/ --cov-report=xml
 
 coverage:
 	coverage run --rcfile .coveragerc -m py.test
@@ -46,6 +47,15 @@ clean-build:
 	rm -rf htmlcov/
 	rm -rf .eggs/
 	rm -rf docs/_build
+	rm -rf docs/tutorials/PynPoint_config.ini
+	rm -rf docs/tutorials/PynPoint_database.hdf5
+	rm -rf docs/tutorials/betapic_naco_mp.hdf5
+	rm -rf docs/tutorials/hd142527_zimpol_h-alpha.tgz
+	rm -rf docs/tutorials/input
+	rm -rf docs/tutorials/.ipynb_checkpoints
+	rm -rf docs/tutorials/*.fits
+	rm -rf docs/tutorials/*.dat
+	rm -rf docs/tutorials/*.npy
 
 clean-python:
 	find . -name '*.pyc' -exec rm -f {} +
diff --git a/README.rst b/README.rst
index c644b9b..ce401df 100644
--- a/README.rst
+++ b/README.rst
@@ -3,60 +3,53 @@ PynPoint
 
 **Pipeline for processing and analysis of high-contrast imaging data**
 
-.. image:: https://badge.fury.io/py/pynpoint.svg
-    :target: https://pypi.python.org/pypi/pynpoint
+.. image:: https://img.shields.io/pypi/v/pynpoint
+   :target: https://pypi.python.org/pypi/pynpoint
 
-.. image:: https://img.shields.io/badge/Python-3.6%2C%203.7-yellow.svg?style=flat
-    :target: https://pypi.python.org/pypi/pynpoint
+.. image:: https://img.shields.io/pypi/pyversions/pynpoint
+   :target: https://pypi.python.org/pypi/pynpoint
 
-.. image:: https://travis-ci.org/PynPoint/PynPoint.svg?branch=master
-    :target: https://travis-ci.org/PynPoint/PynPoint
+.. image:: https://github.com/PynPoint/PynPoint/workflows/CI/badge.svg?branch=main
+   :target: https://github.com/PynPoint/PynPoint/actions
 
-.. image:: https://readthedocs.org/projects/pynpoint/badge/?version=latest
-    :target: http://pynpoint.readthedocs.io/en/latest/?badge=latest
+.. image:: https://img.shields.io/readthedocs/pynpoint
+   :target: http://pynpoint.readthedocs.io
 
-.. image:: https://coveralls.io/repos/github/PynPoint/PynPoint/badge.svg?branch=master
-    :target: https://coveralls.io/github/PynPoint/PynPoint?branch=master
+.. image:: https://codecov.io/gh/PynPoint/PynPoint/branch/main/graph/badge.svg?token=35stSKWsaJ
+   :target: https://codecov.io/gh/PynPoint/PynPoint
 
-.. image:: https://www.codefactor.io/repository/github/pynpoint/pynpoint/badge
-    :target: https://www.codefactor.io/repository/github/pynpoint/pynpoint
+.. image:: https://img.shields.io/codefactor/grade/github/PynPoint/PynPoint
+   :target: https://www.codefactor.io/repository/github/pynpoint/pynpoint
 
-.. image:: https://img.shields.io/badge/License-GPLv3-blue.svg
-    :target: https://github.com/PynPoint/PynPoint/blob/master/LICENSE
+.. image:: https://img.shields.io/github/license/pynpoint/pynpoint
+   :target: https://github.com/PynPoint/PynPoint/blob/main/LICENSE
 
-.. image:: http://img.shields.io/badge/arXiv-1811.03336-orange.svg?style=flat
-    :target: http://arxiv.org/abs/1811.03336
-
-PynPoint is a generic, end-to-end pipeline for the data reduction and analysis of high-contrast imaging data of planetary and substellar companions, as well as circumstellar disks in scattered light. The package is stable, has been extensively tested, and is available on `PyPI <https://pypi.org/project/pynpoint/>`_. PynPoint is under continuous development so the latest implementations can be pulled from Github repository.
-
-The pipeline has a modular architecture with a central data storage in which all results are stored by the processing modules. These modules have specific tasks such as the subtraction of the thermal background emission, frame selection, centering, PSF subtraction, and photometric and astrometric measurements. The tags from the central data storage can be written to FITS, HDF5, and text files with the available I/O modules.
-
-To get a first impression, there is an end-to-end example available of a `SPHERE/ZIMPOL <https://www.eso.org/sci/facilities/paranal/instruments/sphere.html>`_ H-alpha data set of the accreting M dwarf companion of `HD 142527 <http://ui.adsabs.harvard.edu/abs/2019A%26A...622A.156C>`_, which can be downloaded `here <https://people.phys.ethz.ch/~stolkert/pynpoint/hd142527_zimpol_h-alpha.tgz>`_.
+PynPoint is a generic, end-to-end pipeline for the reduction and analysis of high-contrast imaging data of exoplanets. The pipeline uses principal component analysis (PCA) for the subtraction of the stellar PSF and supports post-processing with ADI, RDI, and SDI techniques. The package is stable, extensively tested, and actively maintained.
 
 Documentation
 -------------
 
-Documentation can be found at `http://pynpoint.readthedocs.io <http://pynpoint.readthedocs.io>`_, including installation instructions, details on the architecture of PynPoint, and a description of all the pipeline modules and their input parameters.
+Documentation is available at `http://pynpoint.readthedocs.io <http://pynpoint.readthedocs.io>`_, including installation instructions, details on the pipeline architecture, and several notebook tutorials.
 
 Mailing list
 ------------
 
-Please subscribe to the `mailing list <https://pynpoint.readthedocs.io/en/latest/mailing.html>`_ if you want to be informed about new functionalities, pipeline modules, releases, and other PynPoint related news.
+Please subscribe to the `mailing list <https://pynpoint.readthedocs.io/en/latest/mailing.html>`_ if you want to be informed about PynPoint-related news.
 
 Attribution
 -----------
 
-If you use PynPoint in your publication then please cite `Stolker et al. (2019) <http://ui.adsabs.harvard.edu/abs/2019A%26A...621A..59S>`_. Please also cite `Amara & Quanz (2012) <http://ui.adsabs.harvard.edu/abs/2012MNRAS.427..948A>`_ as the origin of PynPoint, which focused initially on the use of principal component analysis (PCA) as a PSF subtraction method. In case you use specifically the PCA-based background subtraction module or the wavelet based speckle suppression module, please give credit to `Hunziker et al. (2018) <http://ui.adsabs.harvard.edu/abs/2018A%26A...611A..23H>`_ or `Bonse, Quanz & Amara (2018) <http://ui.adsabs.harvard.edu/abs/2018arXiv180405063B>`_, respectively.
+If you use PynPoint in your publication then please cite `Stolker et al. (2019) <https://ui.adsabs.harvard.edu/abs/2019A%26A...621A..59S/abstract>`_. Please also cite `Amara & Quanz (2012) <https://ui.adsabs.harvard.edu/abs/2012MNRAS.427..948A/abstract>`_ as the origin of PynPoint, which focused initially on the use of PCA as a PSF subtraction method. In case you use specifically the PCA-based background subtraction module or the wavelet based speckle suppression module, please give credit to `Hunziker et al. (2018) <https://ui.adsabs.harvard.edu/abs/2018A%26A...611A..23H/abstract>`_ or `Bonse et al. (preprint) <https://ui.adsabs.harvard.edu/abs/2018arXiv180405063B/abstract>`_, respectively.
 
 Contributing
 ------------
 
-Contributions in the form of bug fixes, new or improved functionalities, and additional pipeline modules are highly appreciated. Please consider forking the repository and creating a pull request to help improve and extend the package. Instructions for writing of modules are provided in the documentation. Bug reports can be provided by creating an `issue <https://github.com/PynPoint/PynPoint/issues>`_ on the Github page.
+Contributions in the form of bug fixes, new or improved functionalities, and pipeline modules are highly appreciated. Please consider forking the repository and creating a pull request to help improve and extend the package. Instructions for `coding a pipeline module <https://pynpoint.readthedocs.io/en/latest/coding.html>`_ are available in the documentation. Bugs can be reported by creating an `issue <https://github.com/PynPoint/PynPoint/issues>`_ on the Github page.
 
 License
 -------
 
-Copyright 2014-2020 Tomas Stolker, Markus Bonse, Sascha Quanz, Adam Amara, and contributors.
+Copyright 2014-2021 Tomas Stolker, Markus Bonse, Sascha Quanz, Adam Amara, and contributors.
 
 PynPoint is distributed under the GNU General Public License v3. See the LICENSE file for the terms and conditions.
 
diff --git a/debian/changelog b/debian/changelog
index 127cf7f..8cae346 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,30 @@
+pynpoint (0.10.0-1) unstable; urgency=medium
+
+  * New upstream version.
+  * d/patches: dropped as it was upstreamed.
+  * d/control:
+    - updated build-depends python3.
+    - added Rules-Requires-Root.
+  * Bump standards version to 4.6.0.
+  * d/upstream/metadata: added.
+
+ -- Gürkan Myczko <gurkan@phys.ethz.ch>  Mon, 04 Oct 2021 12:17:09 +0200
+
+pynpoint (0.8.3-3) unstable; urgency=medium
+
+  * d/control: fix maintainer team address. (Closes: #976324)
+
+ -- Gürkan Myczko <gurkan@phys.ethz.ch>  Wed, 09 Dec 2020 10:30:49 +0100
+
+pynpoint (0.8.3-2) unstable; urgency=medium
+
+  * Source only upload.
+  * d/copyright:
+    - update copyright years.
+    - added Upstream-Contact.
+
+ -- Gürkan Myczko <gurkan@phys.ethz.ch>  Wed, 02 Dec 2020 08:00:44 +0100
+
 pynpoint (0.8.3-1) unstable; urgency=low
 
   * Initial release. (Closes: #923320)
diff --git a/debian/control b/debian/control
index 1edbb45..dde5200 100644
--- a/debian/control
+++ b/debian/control
@@ -1,14 +1,13 @@
 Source: pynpoint
 Section: science
 Priority: optional
-Maintainer: Debian Astronomy Maintainers <debian-astro-maintainers@lists.debian.org>
+Maintainer: Debian Astronomy Maintainers <debian-astro-maintainers@lists.alioth.debian.org>
 Uploaders:
  Gürkan Myczko <gurkan@phys.ethz.ch>,
 Build-Depends:
- debhelper (>= 12),
- debhelper-compat (= 12),
+ debhelper-compat (= 13),
  dh-python,
- python3 (>= 3.6),
+ python3:any | python3-all:any | python3-dev:any | python3-all-dev:any | dh-sequence-python3,
  python3-astropy,
  python3-emcee,
  python3-ephem,
@@ -23,9 +22,10 @@ Build-Depends:
  python3-typeguard,
  python3-pip
 X-Python3-Version: >= 3.6
-Standards-Version: 4.5.0
+Standards-Version: 4.6.0
 Vcs-Git: https://salsa.debian.org/debian-astro-team/pynpoint.git
 Vcs-Browser: https://salsa.debian.org/debian-astro-team/pynpoint
+Rules-Requires-Root: no
 Homepage: https://github.com/PynPoint/PynPoint
 
 Package: python3-pynpoint
diff --git a/debian/copyright b/debian/copyright
index bd4540d..7882ebb 100644
--- a/debian/copyright
+++ b/debian/copyright
@@ -1,5 +1,6 @@
 Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
 Upstream-Name: pynpoint
+Upstream-Contact: Tomas Stolker
 Source: https://github.com/PynPoint/PynPoint
 
 Files: *
@@ -45,7 +46,7 @@ License: GPL-3-only
  Public License can be found in `/usr/share/common-licenses/GPL-3'.
 
 Files: debian/*
-Copyright: 2019 Gürkan Myczko <gurkan@phys.ethz.ch>
+Copyright: 2019-2020 Gürkan Myczko <gurkan@phys.ethz.ch>
 License: GPL-2+
  This package is free software; you can redistribute it and/or modify
  it under the terms of the GNU General Public License as published by
diff --git a/debian/patches/pipv20-compat b/debian/patches/pipv20-compat
deleted file mode 100644
index 229ea68..0000000
--- a/debian/patches/pipv20-compat
+++ /dev/null
@@ -1,46 +0,0 @@
-Description: <short summary of the patch>
- TODO: Put a short summary on the line above and replace this paragraph
- with a longer explanation of this change. Complete the meta-information
- with other relevant fields (see below for details). To make it easier, the
- information below has been extracted from the changelog. Adjust it or drop
- it.
- .
- pynpoint (0.8.3-1) unstable; urgency=medium
- .
-   * New upstream version.
-   * Bump standards version to 4.5.0.
-Author: Gürkan Myczko <gurkan@phys.ethz.ch>
-
----
-The information above should follow the Patch Tagging Guidelines, please
-checkout http://dep.debian.net/deps/dep3/ to learn about the format. Here
-are templates for supplementary fields that you might want to add:
-
-Origin: <vendor|upstream|other>, <url of original patch>
-Bug: <url in upstream bugtracker>
-Bug-Debian: https://bugs.debian.org/<bugnumber>
-Bug-Ubuntu: https://launchpad.net/bugs/<bugnumber>
-Forwarded: <no|not-needed|url proving that it has been forwarded>
-Reviewed-By: <name and email of someone who approved the patch>
-Last-Update: 2020-05-24
-
---- pynpoint-0.8.3.orig/setup.py
-+++ pynpoint-0.8.3/setup.py
-@@ -2,13 +2,11 @@
- 
- from setuptools import setup
- 
--try:
--    from pip._internal.req import parse_requirements
--except ImportError:
--    from pip.req import parse_requirements
-+from pip._internal.network.session import PipSession
-+from pip._internal.req import parse_requirements
- 
--reqs = parse_requirements('requirements.txt', session='hack')
--reqs = [str(ir.req) for ir in reqs]
-+reqs = parse_requirements('requirements.txt', session=PipSession())
-+reqs = [str(req.requirement) for req in reqs]
- 
- setup(
-     name='pynpoint',
diff --git a/debian/patches/series b/debian/patches/series
deleted file mode 100644
index 948e732..0000000
--- a/debian/patches/series
+++ /dev/null
@@ -1 +0,0 @@
-pipv20-compat
diff --git a/debian/upstream/metadata b/debian/upstream/metadata
new file mode 100644
index 0000000..26f1a86
--- /dev/null
+++ b/debian/upstream/metadata
@@ -0,0 +1,4 @@
+Bug-Database: https://github.com/PynPoint/PynPoint/issues
+Bug-Submit: https://github.com/PynPoint/PynPoint/issues/new
+Repository: https://github.com/PynPoint/PynPoint.git
+Repository-Browse: https://github.com/PynPoint/PynPoint
diff --git a/docs/_static/custom.css b/docs/_static/custom.css
deleted file mode 100644
index faa1f02..0000000
--- a/docs/_static/custom.css
+++ /dev/null
@@ -1,4 +0,0 @@
-@import url("default.css");
-span.caption-text {
-  color: rgb(46,103,149);
-}
\ No newline at end of file
diff --git a/docs/_static/favicon.png b/docs/_static/favicon.png
new file mode 100644
index 0000000..078fc41
Binary files /dev/null and b/docs/_static/favicon.png differ
diff --git a/docs/_static/logo.png b/docs/_static/logo.png
index 653ab8d..80deee5 100644
Binary files a/docs/_static/logo.png and b/docs/_static/logo.png differ
diff --git a/docs/_static/residuals.png b/docs/_static/residuals.png
deleted file mode 100644
index 18aca31..0000000
Binary files a/docs/_static/residuals.png and /dev/null differ
diff --git a/docs/about.rst b/docs/about.rst
index 4fc52b5..ea815b4 100644
--- a/docs/about.rst
+++ b/docs/about.rst
@@ -3,33 +3,25 @@
 About
 =====
 
-.. _team:
+.. _contact:
 
-Development Team
-----------------
+Contact
+-------
 
-* Tomas Stolker <tomas.stolker@phys.ethz.ch>
-* Markus Bonse <markus.bonse@stud.tu-darmstadt.de>
-* Sascha Quanz <sascha.quanz@phys.ethz.ch>
-* Adam Amara <adam.amara@phys.ethz.ch>
+Questions can be directed to `Tomas Stolker <https://home.strw.leidenuniv.nl/~stolker/>`_ (stolker@strw.leidenuniv.nl) and `Markus Bonse <https://ipa.phys.ethz.ch/people/person-detail.MjIxMTA5.TGlzdC8zNDM1LDU5MTA3MzQ0MA==.html>`_ (mbonse@phys.ethz.ch), who have been the main developers in recent years.
 
 .. _attribution:
 
 Attribution
 -----------
 
-If you use PynPoint in your publication then please cite `Stolker et al. (2019) <http://ui.adsabs.harvard.edu/abs/2019A%26A...621A..59S>`_. Please also cite `Amara & Quanz (2012) <http://ui.adsabs.harvard.edu/abs/2012MNRAS.427..948A>`_ as the origin of PynPoint, which focused initially on the use of principal component analysis (PCA) as a PSF subtraction method. In case you use specifically the PCA-based background subtraction module or the wavelet based speckle suppression module, please give credit to `Hunziker et al. (2018) <http://ui.adsabs.harvard.edu/abs/2018A%26A...611A..23H>`_ or `Bonse, Quanz & Amara (2018) <http://ui.adsabs.harvard.edu/abs/2018arXiv180405063B>`_, respectively.
+If you use PynPoint in your publication then please cite `Stolker et al. (2019) <http://ui.adsabs.harvard.edu/abs/2019A%26A...621A..59S>`_. Please also cite `Amara & Quanz (2012) <http://ui.adsabs.harvard.edu/abs/2012MNRAS.427..948A>`_ as the origin of PynPoint, which focused initially on the use of principal component analysis (PCA) as a PSF subtraction method. In case you use specifically the PCA-based background subtraction module or the wavelet based speckle suppression module, please give credit to `Hunziker et al. (2018) <http://ui.adsabs.harvard.edu/abs/2018A%26A...611A..23H>`_ or `Bonse et al. (preprint) <http://ui.adsabs.harvard.edu/abs/2018arXiv180405063B>`_, respectively.
 
 .. _acknowledgements:
 
 Acknowledgements 
 ----------------
 
-We would like to thank several people who provided contributions and helped testing the package before its release:
-
-* Anna Boehle (ETH Zurich)
-* Alexander Bohn (Leiden University)
-* Gabriele Cugno (ETH Zurich)
-* Silvan Hunziker (ETH Zurich)
+We would like to thank those who have provided `contributions <https://github.com/PynPoint/PynPoint/graphs/contributors>`_ to PynPoint.
 
 The PynPoint logo was designed by `Atlas Interactive <https://atlas-interactive.nl>`_ and is `available <https://quanz-group.ethz.ch/research/algorithms/pynpoint.html>`_ for use in presentations.
diff --git a/docs/architecture.rst b/docs/architecture.rst
index 6b6eec3..450c98b 100644
--- a/docs/architecture.rst
+++ b/docs/architecture.rst
@@ -8,37 +8,39 @@ Architecture
 Introduction
 ------------
 
-PynPoint has evolved from a PSF subtraction toolkit to an end-to-end pipeline for processing and analysis of high-contrast imaging data. The architecture of PynPoint was redesigned in v0.3.0 with the goal to create a generic, modular, and open-source data reduction pipeline, which is extendable to new data processing techniques and data types in the future.
+PynPoint has evolved from a PSF subtraction toolkit to an end-to-end pipeline for processing and analysis of high-contrast imaging data. The architecture of PynPoint was redesigned in v0.3.0 with the goal to create a generic, modular, and open-source data reduction pipeline, which is extendable to new data processing techniques and data types.
 
-The actual pipeline and the processing modules have been separated in a different subpackages. Therefore, it is possible to extend the processing functionalities without intervening with the core of the pipeline.  The UML class diagram below illustrates the pipeline architecture of PynPoint:
+The actual pipeline and the processing modules have been separated into different subpackages. Therefore, it is possible to extend the processing functionalities without interfering with the core of the pipeline. The UML class diagram below illustrates the pipeline architecture:
 
 .. image:: _static/uml.png
    :width: 100%
 
 The diagram shows that the architecture is subdivided in three components:
 
-	* Data management - :class:`pynpoint.core.dataio`
-	* Pipeline modules for reading, writing, and processing of data - :class:`pynpoint.core.processing`
-	* The actual pipeline - :class:`pynpoint.core.pypeline`
+	* Data management: :class:`pynpoint.core.dataio`
+	* Pipeline modules for reading, writing, and processing of data: :class:`pynpoint.core.processing`
+	* The actual pipeline: :class:`pynpoint.core.pypeline`
 
-.. _database:
+.. _central_database:
 
-Central Database
+Central database
 ----------------
 
-In the new architecture, the data management has been separated from the data processing for the following reasons:
+The data management has been separated from the data processing for the following reasons:
 
-	1. Raw datasets can be very large, in particular in the 3--5 μm wavelength regime, which challenges the processing on a computer with a small amount of memory (RAM). A central database is used to store the data on a computer's hard drive.
+	1. Raw datasets can be very large (in particular in the 3--5 μm regime), which challenges the processing on a computer with a small amount of memory (RAM). A central database is used to store the data on a computer's hard drive.
 	2. Some data is used in different steps of the pipeline. A central database makes it easy to access that data without making a copy.
 	3. The central data storage on the hard drive will remain updated after each step. Therefore, processing steps that already finished remain unaffected if an error occurs or the data reduction is interrupted by the user.
 
-Understanding the central data storage classes can be helpful if you plan to write your own Pipeline modules (see :ref:`coding`). When running the pipeline, it is enough to understand the concept of database tags.
+Understanding the central data storage classes can be helpful when developing new pipeline modules (see :ref:`coding`). When running the pipeline, it is sufficient to understand the concept of database tags.
 
-Each pipeline module has input and/or output tags which point to specific dataset in the central database. A module with ``image_in_tag=im_arr`` will look for a stack of input images in the central database under the tag name `im_arr`. Similarly, a module with ``image_out_tag=im_arr_processed`` will a stack of processed images to the central database under the tag `im_arr_processed`. Note that input tags will never change the data in the database.
+Each pipeline module has input and/or output tags which point to a specific dataset in the central database. A module with ``image_in_tag='im_arr'`` will read the input images from the central database under the tag name `im_arr`. Similarly, a module with ``image_out_tag='im_arr_processed'`` will store the processed images in the central database under the tag `im_arr_processed`.
 
-Accessing the data storage occurs through instances of :class:`~pynpoint.core.dataio.Port` which allow pipeline modules to read data from and write data to central database.
+Accessing the data storage occurs through instances of :class:`~pynpoint.core.dataio.Port` which allow pipeline modules to read data from and write data to the central database.
 
-Pipeline Modules
+.. _modules:
+
+Pipeline modules
 ----------------
 
 A pipeline module has a specific task that is appended to the internal queue of a :class:`~pynpoint.core.pypeline.Pypeline` instance. Pipeline modules can read and write data tags from and to the central database through dedicated input and output connections. There are three types of pipeline modules:
@@ -62,15 +64,15 @@ The :class:`~pynpoint.core.pypeline` module is the central component which manag
 
     from pynpoint import Pypeline, FitsReadingModule
 
-    pipeline = Pypeline(working_place_in="/path/to/working_place",
-                        input_place_in="/path/to/input_place",
-                        output_place_in="/path/to/output_place")
+    pipeline = Pypeline(working_place_in='/path/to/working_place',
+                        input_place_in='/path/to/input_place',
+                        output_place_in='/path/to/output_place')
 
-A pipeline module is created from any of the classes listed in the :ref:`overview` section, for example:
+A pipeline module is created from any of the classes listed in the :ref:`pipeline_modules` section, for example:
 
 .. code-block:: python
 
-    module = FitsReadingModule(name_in="read", image_tag="input")
+    module = FitsReadingModule(name_in='read', image_tag='input')
 
 The module is appended to the pipeline queue as:
 
@@ -82,7 +84,7 @@ And can be removed from the queue with the following method:
 
 .. code-block:: python
 
-    pipeline.remove_module("read")
+    pipeline.remove_module('read')
 
 The names and order of the pipeline modules can be listed with:
 
@@ -100,8 +102,8 @@ Or a single module is executed as:
 
 .. code-block:: python
 
-    pipeline.run_module("read")
+    pipeline.run_module('read')
 
 Both run methods will check if the pipeline has valid input and output tags.
 
-An instance of :class:`~pynpoint.core.pypeline.Pypeline` can be used to directly access data from the central database. See the :ref:`hdf5-files` section for more information.
+An instance of :class:`~pynpoint.core.pypeline.Pypeline` can be used to directly access data from the central database. See the :ref:`hdf5_files` section for more information.
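The tag mechanism described in the hunks above (input tags read a dataset, output tags write one, and the database persists between steps) can be illustrated with a toy stand-in for the central database. Note that this sketch uses a plain dict and made-up function names purely for illustration; PynPoint itself stores the tags in an HDF5 file and accesses them through `Port` instances:

```python
# Toy illustration of PynPoint-style database tags: each "module" reads
# from an input tag and writes to an output tag; the storage persists
# between steps, so a finished step's result survives later failures.
# NOTE: the dict and function names are illustrative, not PynPoint's API.

database = {}  # stand-in for the central HDF5 database

def read_module(image_tag):
    # a reading module imports raw data under its output tag
    database[image_tag] = [10.0, 12.0, 11.0]  # pretend pixel values

def subtract_background(image_in_tag, image_out_tag, level):
    # a processing module reads one tag and writes another;
    # the dataset behind the input tag is not modified
    images = database[image_in_tag]
    database[image_out_tag] = [im - level for im in images]

read_module('im_arr')
subtract_background(image_in_tag='im_arr',
                    image_out_tag='im_arr_processed',
                    level=10.0)

print(database['im_arr'])            # original data unchanged: [10.0, 12.0, 11.0]
print(database['im_arr_processed'])  # [0.0, 2.0, 1.0]
```

Because every intermediate result lives under its own tag, a later module (or a re-run after a crash) can pick up `im_arr_processed` without recomputing the earlier steps.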
diff --git a/docs/coding.rst b/docs/coding.rst
index 27001af..4143845 100644
--- a/docs/coding.rst
+++ b/docs/coding.rst
@@ -1,13 +1,13 @@
 .. _coding:
 
-Coding a New Module
+Coding a new module
 ===================
 
 .. _constructor:
 
 There are three different types of pipeline modules: :class:`~pynpoint.core.processing.ReadingModule`, :class:`~pynpoint.core.processing.WritingModule`, and :class:`~pynpoint.core.processing.ProcessingModule`. The concept is similar for these three modules so here we will explain only how to code a processing module.
 
-Class Constructor
+Class constructor
 -----------------
 
 First, we need to import the interface (i.e. abstract class) :class:`~pynpoint.core.processing.ProcessingModule`: :
@@ -22,26 +22,26 @@ All pipeline modules are classes which contain the parameters of the pipeline st
 
     class ExampleModule(ProcessingModule):
 
-When an IDE like PyCharm is used, a warning will appear that all abstract methods must be implemented in the ``ExampleModule`` class. The abstract class ProcessingModule has some abstract methods which have to be implemented by its children classes (e.g., ``__init__()`` and ``run()``). Thus we have to implement the ``__init__()`` function (i.e., the constructor of our module):
+When an IDE like *PyCharm* is used, a warning will appear that all abstract methods must be implemented in the ``ExampleModule`` class. The abstract class :class:`~pynpoint.core.processing.ProcessingModule` has some abstract methods which have to be implemented by its child classes (e.g., ``__init__`` and ``run``). We start by implementing the ``__init__`` method (i.e., the constructor of our module):
 
 .. code-block:: python
 
     def __init__(self,
-                 name_in="example",
-                 in_tag_1="in_tag_1",
-                 in_tag_2="in_tag_2",
-                 out_tag_1="out_tag_1",
-                 out_tag_2="out_tag_2”,
+                 name_in='example',
+                 in_tag_1='in_tag_1',
+                 in_tag_2='in_tag_2',
+                 out_tag_1='out_tag_1',
+                 out_tag_2='out_tag_2',
                  parameter_1=0,
-                 parameter_2="value"):
+                 parameter_2='value'):
 
-Each ``__init__()`` function of a :class:`~pynpoint.core.processing.PypelineModule` requires a ``name_in`` argument (and default value) which is used by the pipeline to run individual modules by name. Furthermore, the input and output tags have to be defined which are used to to access data from the central database. The constructor starts with a call of the :class:`~pynpoint.core.processing.ProcessingModule` interface:
+Each ``__init__`` method of a :class:`~pynpoint.core.processing.PypelineModule` requires a ``name_in`` argument which is used by the pipeline to run individual modules by name. Furthermore, the input and output tags, which are used to access data from the central database, have to be defined. The constructor starts with a call to the :class:`~pynpoint.core.processing.ProcessingModule` interface:
 
 .. code-block:: python
    
-    super(ExampleModule, self).__init__(name_in)
+    super().__init__(name_in)
 
-Next, the input and output ports behind the database tags have to be defined:
+Next, the input and output ports behind the database tags need to be defined:
 
 .. code-block:: python
 
@@ -53,7 +53,7 @@ Next, the input and output ports behind the database tags have to be defined:
 
 Reading to and writing from the central database should always be done with the ``add_input_port`` and ``add_output_port`` functionalities and not by manually creating an instance of :class:`~pynpoint.core.dataio.InputPort` or :class:`~pynpoint.core.dataio.OutputPort`.
 
-Finally, the module parameters should be saved to the ``ExampleModule`` instance:
+Finally, the module parameters should be saved as attributes of the ``ExampleModule`` instance:
 
 .. code-block:: python
 
@@ -62,41 +62,41 @@ Finally, the module parameters should be saved to the ``ExampleModule`` instance
 
 That's it! The constructor of the ``ExampleModule`` is ready.
 
-.. _method:
+.. _run_method:
 
-Run Method
+Run method
 ----------
 
-We can now add the functionalities of the module in the ``run()`` method which will be called by the pipeline:
+We can now add the functionalities of the module in the ``run`` method which will be called by the pipeline:
 
 .. code-block:: python
 
     def run(self):
 
-The input ports of the module are used to load data from the central database into the memory with slicing or the ``get_all()`` function:
+The input ports of the module are used to load data from the central database into the memory with slicing or with the ``get_all`` method:
 
 .. code-block:: python
 
         data1 = self.m_in_port_1.get_all()
         data2 = self.m_in_port_2[0:4]
 
-We want to avoid using the ``get_all()`` function because data sets in 3--5 μm range typically consists of thousands of images. Therefore, loading all images at once in the computer memory might not be possible, in particular early in the data reduction chain when the images have their original size. Instead, it is recommended to use the ``MEMORY`` attribute that is specified in the configuration file.
+We want to avoid using the ``get_all`` method because data sets obtained in the :math:`L'` and :math:`M'` bands typically consist of thousands of images, so loading all images at once into the computer memory might not be possible. Instead, it is recommended to use the ``MEMORY`` attribute that is specified in the configuration file (see :ref:`configuration`).
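
The chunked access that the ``MEMORY`` attribute enables can be sketched with plain NumPy (the ``chunk_indices`` helper and the in-memory ``images`` array below are illustrative stand-ins, not part of the PynPoint API):

```python
import numpy as np

def chunk_indices(n_images, memory):
    # Yield (start, stop) pairs that cover n_images in subsets
    # of at most `memory` images each
    for start in range(0, n_images, memory):
        yield start, min(start + memory, n_images)

# Stand-in for an input port backed by the HDF5 database
images = np.zeros((2500, 32, 32))
memory = 1000  # value of the MEMORY attribute in the configuration file

subsets = list(chunk_indices(images.shape[0], memory))

# Process each subset separately instead of loading all images at once
means = [images[start:stop].mean() for start, stop in subsets]
```

With 2500 images and ``MEMORY: 1000``, the loop processes three subsets (1000, 1000, and 500 images) instead of holding the full stack in memory.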
 
-Attributes of the input port are accessed in the following:
+Attributes of a dataset can be read as follows:
 
 .. code-block:: python
 
-        parang = self.m_in_port_1.get_attribute("PARANG")
-        pixscale = self.m_in_port_2.get_attribute("PIXSCALE")
+        parang = self.m_in_port_1.get_attribute('PARANG')
+        pixscale = self.m_in_port_2.get_attribute('PIXSCALE')
 
 And attributes of the central configuration are accessed through the :class:`~pynpoint.core.dataio.ConfigPort`:
 
 .. code-block:: python
 
-        memory = self._m_config_port.get_attribute("MEMORY")
-        cpu = self._m_config_port.get_attribute("CPU")
+        memory = self._m_config_port.get_attribute('MEMORY')
+        cpu = self._m_config_port.get_attribute('CPU')
 
-More information on importing of data can be found in the package documentation of :class:`~pynpoint.core.dataio.InputPort`. 
+More information about importing data can be found in the API documentation of :class:`~pynpoint.core.dataio.InputPort`.
 
 Next, the processing steps are implemented:
 
@@ -116,21 +116,21 @@ The output ports are used to write the results to the central database:
         self.m_out_port_1.append(result2)
 
         self.m_out_port_2[0:2] = result2
-        self.m_out_port_2.add_attribute(name="new_attribute", value=attribute)
+        self.m_out_port_2.add_attribute(name='new_attribute', value=attribute)
 
-More information on storing of data can be found in the package documentation of :class:`~pynpoint.core.dataio.OutputPort`.
+More information about storing data can be found in the API documentation of :class:`~pynpoint.core.dataio.OutputPort`.
 
-The attribute information has to be copied from the input port and history information has to be added. This step should be repeated for all the output ports:
+The data attributes of the input port need to be copied to the output port and history information should be added. These steps should be repeated for all the output ports:
 
 .. code-block:: python
 
         self.m_out_port_1.copy_attributes(self.m_in_port_1)
-        self.m_out_port_1.add_history("ExampleModule", "history text")
+        self.m_out_port_1.add_history('ExampleModule', 'history text')
 
         self.m_out_port_2.copy_attributes(self.m_in_port_1)
-        self.m_out_port_2.add_history("ExampleModule", "history text")
+        self.m_out_port_2.add_history('ExampleModule', 'history text')
 
-Finally, the central database and all the open ports should be closed:
+Finally, the central database and all the open ports are closed:
 
 .. code-block:: python
 
@@ -140,13 +140,16 @@ Finally, the central database and all the open ports should be closed:
 
    It is enough to close only one port because all other ports will be closed automatically.
 
-.. warning::
+.. _apply_function:
 
-   It is not recommended to use the same tag name for the input and output port because that would only be possible when data is read and     written at once with the ``get_all()`` and ``set_all()`` functionalities, respectively. Instead image should be read and written in amounts of ``MEMORY`` so an error should be raised when ``in_tag=out_tag``.
+Apply function to images
+------------------------
+
+A processing module often applies the same operation to each image of an input port. For this purpose, the :func:`~pynpoint.core.processing.ProcessingModule.apply_function_to_images` function applies a given function to all images of an input port. It uses the ``CPU`` and ``MEMORY`` parameters from the configuration file to automatically process subsets of images in parallel. An example implementation can be found in the code of the bad pixel cleaning with a sigma filter: :class:`~pynpoint.processing.badpixel.BadPixelSigmaFilterModule`.
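
The per-image pattern can be sketched with plain NumPy (``toy_sigma_filter`` is a hypothetical stand-in, not the actual sigma filter used by PynPoint, and the list comprehension replaces the parallel dispatch that ``apply_function_to_images`` performs internally):

```python
import numpy as np

def toy_sigma_filter(image, sigma=5.0):
    # Clip pixel values that deviate more than `sigma` standard
    # deviations from the mean of the image
    mean, std = image.mean(), image.std()
    return np.clip(image, mean - sigma * std, mean + sigma * std)

# Stand-in for the stack of images behind an input port
stack = np.random.default_rng(seed=1).normal(size=(8, 16, 16))

# Conceptually, apply_function_to_images maps the function over all
# frames, distributing subsets of MEMORY images over CPU processes
processed = np.stack([toy_sigma_filter(frame, sigma=3.0) for frame in stack])
```

The function passed to ``apply_function_to_images`` receives one image at a time, so it only needs to handle a single 2D array.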
 
-.. _example-module:
+.. _example_module:
 
-Example Module
+Example module
 --------------
 
 The full code for the ``ExampleModule`` from above is:
@@ -158,13 +161,13 @@ The full code for the ``ExampleModule`` from above is:
     class ExampleModule(ProcessingModule):
 
         def __init__(self,
-                     name_in="example",
-                     in_tag_1="in_tag_1",
-                     in_tag_2="in_tag_2",
-                     out_tag_1="out_tag_1",
-                     out_tag_2="out_tag_2”,
+                     name_in='example',
+                     in_tag_1='in_tag_1',
+                     in_tag_2='in_tag_2',
+                     out_tag_1='out_tag_1',
+                     out_tag_2='out_tag_2',
                      parameter_1=0,
-                     parameter_2="value"):
+                     parameter_2='value'):
 
-            super(ExampleModule, self).__init__(name_in)
+            super().__init__(name_in)
 
@@ -182,11 +185,11 @@ The full code for the ``ExampleModule`` from above is:
             data1 = self.m_in_port_1.get_all()
             data2 = self.m_in_port_2[0:4]
 
-            parang = self.m_in_port_1.get_attribute("PARANG")
-            pixscale = self.m_in_port_2.get_attribute("PIXSCALE")
+            parang = self.m_in_port_1.get_attribute('PARANG')
+            pixscale = self.m_in_port_2.get_attribute('PIXSCALE')
 
-            memory = self._m_config_port.get_attribute("MEMORY")
-            cpu = self._m_config_port.get_attribute("CPU")
+            memory = self._m_config_port.get_attribute('MEMORY')
+            cpu = self._m_config_port.get_attribute('CPU')
 
             result1 = 10.*self.m_parameter_1
             result2 = 20.*self.m_parameter_1
@@ -196,19 +199,12 @@ The full code for the ``ExampleModule`` from above is:
             self.m_out_port_1.append(result2)
 
             self.m_out_port_2[0:2] = result2
-            self.m_out_port_2.add_attribute(name="new_attribute", value=attribute)
+            self.m_out_port_2.add_attribute(name='new_attribute', value=attribute)
 
             self.m_out_port_1.copy_attributes(self.m_in_port_1)
-            self.m_out_port_1.add_history("ExampleModule", "history text")
+            self.m_out_port_1.add_history('ExampleModule', 'history text')
 
             self.m_out_port_2.copy_attributes(self.m_in_port_1)
-            self.m_out_port_2.add_history("ExampleModule", "history text")
+            self.m_out_port_2.add_history('ExampleModule', 'history text')
 
             self.m_out_port_1.close_port()
-
-.. _apply-function:
-
-Apply Function To Images
-------------------------
-
-A processing module often applies a specific method to each image of an input port. Therefore, the :func:`~pynpoint.core.processing.ProcessingModule.apply_function_to_images` function has been implemented to apply a function to all images of an input port. This function uses the ``CPU`` and ``MEMORY`` parameter from the configuration file to automatically process subsets of images in parallel. An example of the implementation can be found in the code of the bad pixel cleaning with a sigma filter: :class:`~pynpoint.processing.badpixel.BadPixelSigmaFilterModule`.
diff --git a/docs/conf.py b/docs/conf.py
index eb73d9d..70d4767 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -21,7 +21,7 @@ sys.path.insert(0, os.path.abspath('../'))
 # -- Project information -----------------------------------------------------
 
 project = 'PynPoint'
-copyright = '2014-2020, Tomas Stolker, Markus Bonse, Sascha Quanz, and Adam Amara'
+copyright = '2014-2021, Tomas Stolker, Markus Bonse, Sascha Quanz, and Adam Amara'
 author = 'Tomas Stolker, Markus Bonse, Sascha Quanz, and Adam Amara'
 
 # The short X.Y version
@@ -46,7 +46,8 @@ release = version
 extensions = [
     'sphinx.ext.autodoc',
     'sphinx.ext.napoleon',
-    'sphinx.ext.viewcode'
+    'sphinx.ext.viewcode',
+    'nbsphinx'
 ]
 
 # Add any paths that contain templates here, relative to this directory.
@@ -82,21 +83,26 @@ pygments_style = None
 # The theme to use for HTML and HTML Help pages.  See the documentation for
 # a list of builtin themes.
 #
-html_theme = 'sphinx_rtd_theme'
+html_theme = 'sphinx_book_theme'
 
 # Theme options are theme-specific and customize the look and feel of a theme
 # further.  For a list of options available for each theme, see the
 # documentation.
 #
-html_theme_options = { 'logo_only': True,
-                       'display_version': False,
-                       'prev_next_buttons_location': 'bottom',
-                       'style_external_links': False,
-                       'collapse_navigation': True,
-                       'sticky_navigation': True,
-                       'navigation_depth': 2,
-                       'includehidden': True,
-                       'titles_only': False }
+html_theme_options = {
+    'path_to_docs': 'docs',
+    'repository_url': 'https://github.com/PynPoint/PynPoint',
+    'repository_branch': 'main',
+    'launch_buttons': {
+        'binderhub_url': 'https://mybinder.org',
+        'notebook_interface': 'jupyterlab',
+    },
+    'use_edit_page_button': True,
+    'use_issues_button': True,
+    'use_repository_button': True,
+    'use_download_button': True,
+    'logo_only': True,
+}
 
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
@@ -114,13 +120,13 @@ html_static_path = ['_static']
 # html_sidebars = {}
 
 html_logo = '_static/logo.png'
-# html_favicon = '_static/logo.jpg'
+html_favicon = '_static/favicon.png'
 html_search_language = 'en'
 
 html_context = {'display_github': True,
                 'github_user': 'PynPoint',
                 'github_repo': 'PynPoint',
-                'github_version': 'master/docs/'}
+                'github_version': 'main/docs/'}
 
 autoclass_content = 'both'
 
@@ -201,5 +207,3 @@ epub_exclude_files = ['search.html']
 
 
 # -- Extension configuration -------------------------------------------------
-
-html_css_files = ['custom.css']
diff --git a/docs/configuration.rst b/docs/configuration.rst
index 4c43c10..34b52ab 100644
--- a/docs/configuration.rst
+++ b/docs/configuration.rst
@@ -12,12 +12,12 @@ A configuration file has to be stored in the ``working_place_in`` with the name
 
 .. _config_file:
 
-Config File
------------
+Configuration file
+------------------
 
 The file contains two different sections of configuration parameters. The ``header`` section is used to link attributes in PynPoint with header values in the FITS files that will be imported into the database. For example, some of the pipeline modules require values for the dithering position. These attributes are stored as ``DITHER_X`` and ``DITHER_Y`` in the central database and are for example provided by the ``ESO SEQ CUMOFFSETX`` and ``ESO SEQ CUMOFFSETY`` values in the FITS header. Setting ``DITHER_X: ESO SEQ CUMOFFSETX`` in the ``header`` section of the configuration file makes sure that the relevant FITS header values are imported when :class:`~pynpoint.readwrite.fitsreading.FitsReadingModule` is executed. Therefore, FITS files have to be imported again if values in the ``header`` section are changed. Values can be set to ``None`` since ``header`` values are only required for some of the pipeline modules.
 
-The second section of the configuration values contains the central settings that are used by the pipeline modules. These values are stored in the ``settings`` section of the configuration file. The pixel scale can be provided in arcsec per pixel (e.g. ``PIXSCALE: 0.027``), the number of images that will be simultaneously loaded into the memory (e.g. ``MEMORY: 1000``), and the number of cores that are used for pipeline modules that have multiprocessing capabilities (e.g. ``CPU: 8``) such as :class:`~pynpoint.processing.psfsubtraction.PcaPsfSubtractionModule` and :class:`~pynpoint.processing.fluxposition.MCMCsamplingModule`. A complete overview of the pipeline modules that support multiprocessing is available in the :ref:`overview` section.
+The second section of the configuration values contains the central settings that are used by the pipeline modules. These values are stored in the ``settings`` section of the configuration file. The pixel scale can be provided in arcsec per pixel (e.g. ``PIXSCALE: 0.027``), the number of images that will be simultaneously loaded into the memory (e.g. ``MEMORY: 1000``), and the number of cores that are used for pipeline modules that have multiprocessing capabilities (e.g. ``CPU: 8``) such as :class:`~pynpoint.processing.psfsubtraction.PcaPsfSubtractionModule` and :class:`~pynpoint.processing.fluxposition.MCMCsamplingModule`. A complete overview of the pipeline modules that support multiprocessing is available in the :ref:`pipeline_modules` section.
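
Putting the examples from this section together, a minimal configuration file could look as follows (the values shown are the illustrative ones from the text; the required ``header`` keys depend on the instrument):

```ini
[header]

DITHER_X: ESO SEQ CUMOFFSETX
DITHER_Y: ESO SEQ CUMOFFSETY

[settings]

PIXSCALE: 0.027
MEMORY: 1000
CPU: 8
```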
 
 Note that some of the pipeline modules provide also multithreading support, which by default runs on all available CPUs. The multithreading can be controlled from the command line by setting the ``OMP_NUM_THREADS`` environment variable:
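
For example, to restrict the multithreading to four threads before starting Python (an illustrative value; choose it based on the available cores):

```shell
export OMP_NUM_THREADS=4
```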
 
@@ -129,4 +129,4 @@ VLT/VISIR
 
    PIXSCALE: 0.045
    MEMORY: 1000
-   CPU: 1
\ No newline at end of file
+   CPU: 1
diff --git a/docs/contributing.rst b/docs/contributing.rst
index 59c4910..fc26f42 100644
--- a/docs/contributing.rst
+++ b/docs/contributing.rst
@@ -3,18 +3,6 @@
 Contributing
 ============
 
-If you encounter problems when using PynPoint then please contact |stolker| (see :ref:`team` section). Bug reports and functionality requests can be provided by creating an |issue| on the Github page.
+We welcome contributions, for example the development of new pipeline modules and tutorials, improvements to existing functionalities, and bug fixes. Please consider forking the Github repository and creating a `pull request <https://github.com/PynPoint/PynPoint/pulls>`_ for implementations that could be of interest for other users.
 
-We also welcome active help with bug fixing and the development of new functionalities and pipeline modules. Please consider forking the Github repository and creating a |pull| for implementations that could be of interest for other users.
-
-.. |stolker| raw:: html
-
-   <a href="https://people.phys.ethz.ch/~stolkert/" target="_blank">Tomas Stolker</a>
-
-.. |issue| raw:: html
-
-   <a href="https://github.com/PynPoint/PynPoint/issues" target="_blank">issue</a>
-
-.. |pull| raw:: html
-
-   <a href="https://github.com/PynPoint/PynPoint/pulls" target="_blank">pull request</a>
\ No newline at end of file
+Bug reports and functionality requests can be provided by creating an `issue <https://github.com/PynPoint/PynPoint/issues>`_ on the Github page.
diff --git a/docs/examples.rst b/docs/examples.rst
deleted file mode 100644
index 778ecc3..0000000
--- a/docs/examples.rst
+++ /dev/null
@@ -1,449 +0,0 @@
-.. _examples:
-
-Examples
---------
-
-VLT/SPHERE H-alpha data
-~~~~~~~~~~~~~~~~~~~~~~~
-
-An end-to-end example of a `SPHERE/ZIMPOL <https://www.eso.org/sci/facilities/paranal/instruments/sphere.html>`_ H-alpha data set of the accreting M dwarf companion of HD 142527 (see `Cugno et al. 2019 <https://ui.adsabs.harvard.edu/abs/2019A%26A...622A.156C>`_) can be downloaded `here <https://people.phys.ethz.ch/~stolkert/pynpoint/hd142527_zimpol_h-alpha.tgz>`_.
-
-VLT/NACO Mp dithering data
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Here we show a processing example of a pupil-stabilized data set of beta Pic as in `Stolker et al. (2019) <http://ui.adsabs.harvard.edu/abs/2019A%26A...621A..59S>`_ (see also :ref:`running`). This archival data set was obtained with `VLT/NACO <https://www.eso.org/sci/facilities/paranal/instruments/naco.html>`_ in the Mp band, which can be downloaded from the ESO archive under program ID |data|. A dithering pattern was applied to sample the sky background. Before starting the data reduction, it is useful to sort the various required science and calibration files into separate folders. Also, it is important to provide the correct NACO keywords in the configuration file (see :ref:`configuration` section).
-
-Now we can start the data reduction by first importing the Pypeline and the required pipeline modules, for example:
-
-.. code-block:: python
-
-   from pynpoint import Pypeline, FitsReadingModule
-
-Next, we create an instance of :class:`~pynpoint.core.pypeline.Pypeline` with the ``working_place_in`` pointing to a path where PynPoint has enough space to create its database, ``input_place_in`` pointing to the default input path, and ``output_place_in`` to a folder in which results that are exported from the database:
-
-.. code-block:: python
-
-   pipeline = Pypeline(working_place_in='/path/to/working_place',
-                       input_place_in='/path/to/input_place',
-                       output_place_in'/path/to/output_place')
-
-The FWHM of the PSF is defined for simplicity:
-
-.. code-block:: python
-
-   fwhm = 0.134  # [arcsec]
-
-Now we are ready to add and run all the pipeline modules that we need. Have a look at the documentation in the :ref:`pynpoint-package` section for a detailed description of the individual modules and their parameters. 
-
-1. We start by importing the raw science images with a DIT of 65 ms into the database:
-
-.. code-block:: python
-
-   module = FitsReadingModule(name_in='read1',
-                              input_dir='/path/to/science/',
-                              image_tag='science',
-                              overwrite=True,
-                              check=True)
-
-   pipeline.add_module(module)
-   pipeline.run_module('read1')
-
-There are 55384 images of (y, x) = (386, 384) pixels in size:
-
-.. code-block:: python
-
-   print(pipeline.get_shape('science'))
-
-.. code-block:: console
-
-   (55384, 386, 384)
-
-2. We also import the raw flat (DIT = 56 ms) and dark images (DIT = 56 ms):
-
-.. code-block:: python
-
-   module = FitsReadingModule(name_in='read2',
-                              input_dir='/path/to/flat/',
-                              image_tag='flat',
-                              overwrite=True,
-                              check=False)
-
-   pipeline.add_module(module)
-   pipeline.run_module('read2')
-
-   module = FitsReadingModule(name_in='read3',
-                              input_dir='/path/to/dark/',
-                              image_tag='dark',
-                              overwrite=True,
-                              check=False)
-
-   pipeline.add_module(module)
-   pipeline.run_module('read3')
-
-There are 5 flat fields and 3 dark frames, both 514 x 512 pixels in size:
-
-.. code-block:: python
-
-   print(pipeline.get_shape('flat'))
-   print(pipeline.get_shape('dark'))
-
-.. code-block:: console
-
-   (5, 514, 512)
-   (3, 514, 512)
-
-3. Remove every NDIT+1 frame (which contains the average of the FITS cube) from the science data (NACO specific):
-
-.. code-block:: python
-
-   module = RemoveLastFrameModule(name_in='last',
-                                  image_in_tag='science',
-                                  image_out_tag='science_last')
-
-   pipeline.add_module(module)
-   pipeline.run_module('last')
-
-.. code-block:: python
-
-   print(pipeline.get_shape('science'))
-
-.. code-block:: console
-
-   (55200, 386, 384)
-
-4. Calculate the parallactic angles for each image:
-
-.. code-block:: python
-
-   module = AngleCalculationModule(name_in='angle',
-                                   data_tag='science_last',
-                                   instrument='NACO')
-
-   pipeline.add_module(module)
-   pipeline.run_module('angle')
-
-The angles are stored as attributes to the `science_last` dataset and will be copied and updated automatically as we continue the data reduction. To get the angles from the database:
-
-.. code-block:: python
-
-   parang = pipeline.get_attribute('science_last', 'PARANG', static=False)
-   print(parang)
-
-.. code-block:: console
-
-   [-109.75667269 -109.75615294 -109.75563318 ... -57.98983035 -57.98936535 -57.98890035]
-
-5. Remove the top and bottom line to make the images square:
-
-.. code-block:: python
-
-   module = RemoveLinesModule(lines=(0, 0, 1, 1),
-                              name_in='cut1',
-                              image_in_tag='science_last',
-                              image_out_tag='science_cut')
-
-   pipeline.add_module(module)
-
-   module = RemoveLinesModule(lines=(0, 0, 1, 1),
-                              name_in='cut2',
-                              image_in_tag='flat',
-                              image_out_tag='flat_cut')
-
-   pipeline.add_module(module)
-
-   module = RemoveLinesModule(lines=(0, 0, 1, 1),
-                              name_in='cut3',
-                              image_in_tag='dark',
-                              image_out_tag='dark_cut')
-
-   pipeline.add_module(module)
-
-   pipeline.run_module('cut1')
-   pipeline.run_module('cut2')
-   pipeline.run_module('cut3')
-
-   print(pipeline.get_shape('science_cut'))
-   print(pipeline.get_shape('flat_cut'))
-   print(pipeline.get_shape('dark_cut'))
-
-.. code-block:: console
-
-   (55200, 384, 384)
-   (5, 512, 512)
-   (3, 512, 512)
-
-6. Subtract the dark current from the flat field:
-
-.. code-block:: python
-
-   module = DarkCalibrationModule(name_in='dark',
-                                  image_in_tag='flat_cut',
-                                  dark_in_tag='dark_cut',
-                                  image_out_tag='flat_cal')
-
-   pipeline.add_module(module)
-   pipeline.run_module('dark')
-
-7. Divide the science data by the master flat (the `flat_cal` images are automatically cropped around their center):
-
-.. code-block:: python
-
-   module = FlatCalibrationModule(name_in='flat',
-                                  image_in_tag='science_cut',
-                                  flat_in_tag='flat_cal',
-                                  image_out_tag='science_cal')
-
-   pipeline.add_module(module)
-   pipeline.run_module('flat')
-
-8. Remove the first 5 frames from each FITS cube because of the systematically higher background emission:
-
-.. code-block:: python
-
-   module = RemoveStartFramesModule(frames=5,
-                                    name_in='first',
-                                    image_in_tag='science_cal',
-                                    image_out_tag='science_first')
-
-   pipeline.add_module(module)
-   pipeline.run_module('first')
-
-   print(pipeline.get_shape('science_first'))
-
-.. code-block:: console
-
-   (54280, 384, 384)
-
-9. Now we sort out the star and background images and apply a mean background subtraction:
-
-.. code-block:: python
-
-   module = DitheringBackgroundModule(name_in='background',
-                                      image_in_tag='science_first',
-                                      image_out_tag='science_background',
-                                      center=((263, 263), (116, 263), (116, 116), (263, 116)),
-                                      cubes=None,
-                                      size=3.5,
-                                      crop=True,
-                                      prepare=True,
-                                      pca_background=False,
-                                      combine='mean')
-
-   pipeline.add_module(module)
-   pipeline.run_module('background')
-
-10. Bad pixel correction:
-
-.. code-block:: python
-
-   module = BadPixelSigmaFilterModule(name_in='bad',
-                                     image_in_tag='science_background',
-                                     image_out_tag='science_bad',
-                                     map_out_tag=None,
-                                     box=9,
-                                     sigma=5.,
-                                     iterate=3)
-
-   pipeline.add_module(module)
-   pipeline.run_module('bad')
-
-11. Frame selection:
-
-.. code-block:: python
-
-   module = FrameSelectionModule(name_in='select',
-                                 image_in_tag='science_bad',
-                                 selected_out_tag='science_selected',
-                                 removed_out_tag='science_removed',
-                                 index_out_tag=None,
-                                 method='median',
-                                 threshold=2.,
-                                 fwhm=fwhm,
-                                 aperture=('circular', fwhm),
-                                 position=(None, None, 4.*fwhm))
-
-   pipeline.add_module(module)
-   pipeline.run_module('select')
-
-12. Extract the star position and center with pixel precision:
-
-.. code-block:: python
-
-   module = StarExtractionModule(name_in='extract',
-                                 image_in_tag='science_selected',
-                                 image_out_tag='science_extract',
-                                 index_out_tag=None,
-                                 image_size=3.,
-                                 fwhm_star=fwhm,
-                                 position=(None, None, 4.*fwhm))
-
-   pipeline.add_module(module)
-   pipeline.run_module('extract')
-
-13. Align the images with a cross-correlation of the central 800 mas:
-
-.. code-block:: python
-
-   module = StarAlignmentModule(name_in='align',
-                                image_in_tag='science_extract',
-                                ref_image_in_tag=None,
-                                image_out_tag='science_align',
-                                interpolation='spline',
-                                accuracy=10,
-                                resize=None,
-                                num_references=10,
-                                subframe=0.8)
-
-   pipeline.add_module(module)
-   pipeline.run_module('align')
-
-14. Center the images with subpixel precision by fitting a 2D Gaussian and applying a constant shift:
-
-.. code-block:: python
-
-   module = FitCenterModule(name_in='center',
-                            image_in_tag='science_align',
-                            fit_out_tag='fit',
-                            mask_out_tag=None,
-                            method='mean',
-                            radius=5.*fwhm,
-                            sign='positive',
-                            model='gaussian',
-                            filter_size=None,
-                            guess=(0., 0., 1., 1., 100., 0., 0.))
-
-   pipeline.add_module(module)
-   pipeline.run_module('center')
-
-   module = ShiftImagesModule(name_in='shift',
-                              image_in_tag='science_align',
-                              image_out_tag='science_center',
-                              shift_xy='fit',
-                              interpolation='spline')
-
-   pipeline.add_module(module)
-   pipeline.run_module('shift')
-
-To read the first image from the `science_center` dataset:
-
-.. code-block:: python
-
-   image = pipeline.get_data('science_center', data_range=(0, 1))
-   print(image.shape)
-
-.. code-block:: console
-
-   (1, 111, 111)
-
-And to plot the image:
-
-.. code-block:: python
-
-   import matplotlib.pyplot as plt
-   plt.imshow(image[0, ], origin='lower')
-   plt.show()
-
-.. image:: _static/betapic_center.png
-   :width: 60%
-   :align: center
-
-15. Now we stack every 100 images to lower the computation time during the PSF subtraction:
-
-.. code-block:: python
-
-   module = StackAndSubsetModule(name_in='stack',
-                                 image_in_tag='science_center',
-                                 image_out_tag='science_stack',
-                                 random=None,
-                                 stacking=100)
-
-   pipeline.add_module(stack)
-   pipeline.run_module('stack')
-
-16. Prepare the data for PSF subtraction:
-
-.. code-block:: python
-
-   module = PSFpreparationModule(name_in='prep',
-                                 image_in_tag='science_stack',
-                                 image_out_tag='science_prep',
-                                 mask_out_tag=None,
-                                 norm=False,
-                                 resize=None,
-                                 cent_size=fwhm,
-                                 edge_size=1.)
-
-   pipeline.add_module(module)
-   pipeline.run_module('prep')
-
-17. PSF subtraction with PCA:
-
-.. code-block:: python
-
-   module = PcaPsfSubtractionModule(pca_numbers=range(1, 51),
-                                    name_in='pca',
-                                    images_in_tag='science_prep',
-                                    reference_in_tag='science_prep',
-                                    res_median_tag='pca_median',
-                                    extra_rot=0.)
-
-   pipeline.add_module(module)
-   pipeline.run_module('pca')
-
-This is what the median residuals look like after subtraction 10 principal components:
-
-.. code-block:: python
-
-   data = pipeline.get_data('pca_median')
-
-   plt.imshow(data[9, ], origin='lower')
-   plt.show()
-
-.. image:: _static/betapic_pca.png
-   :width: 60%
-   :align: center
-
-18. Measure the signal-to-noise ratio (S/N) and false positive fraction at the position of the planet:
-
-.. code-block:: python
-
-   module = FalsePositiveModule(position=(50.5, 26.5),
-                                aperture=fwhm/2.,
-                                ignore=True,
-                                name_in='fpf',
-                                image_in_tag='pca_median',
-                                snr_out_tag='fpf')
-
-   pipeline.add_module(module)
-   pipeline.run_module('fpf')
-
-And to plot the S/N ratio for the range of principal components:
-
-.. code-block:: python
-
-   data = pipeline.get_data('snr')
-   plt.plot(range(1, 51), data[:, 4])
-   plt.xlabel('Principal components', fontsize=12)
-   plt.ylabel('Signal-to-noise ratio', fontsize=12)
-   plt.show()
-
-.. image:: _static/betapic_snr.png
-   :width: 60%
-   :align: center
-
-19. Write the median residuals to a FITS file:
-
-.. code-block:: python
-
-   module = FitsWritingModule(name_in='write',
-                              file_name='residuals.fits',
-                              output_dir=None,
-                              data_tag='pca_median',
-                              data_range=None)
-
-   pipeline.add_module(module)
-   pipeline.run_module('write')
-
-.. |data| raw:: html
-
-   <a href="http://archive.eso.org/wdb/wdb/eso/sched_rep_arc/query?progid=090.C-0653(D)" target="_blank">090.C-0653(D)</a>
diff --git a/docs/index.rst b/docs/index.rst
index a5b64ee..67071c4 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -3,58 +3,51 @@
 PynPoint
 ========
 
-PynPoint is a pipeline for processing and analysis of high-contrast imaging data of exoplanets and circumstellar disks. The Python package has been developed at the |ipa| of ETH Zurich in a collaboration between the |planets| and the |cosmo|.
+PynPoint is a pipeline for processing and analysis of high-contrast imaging data of exoplanets. The pipeline uses principal component analysis for the subtraction of the stellar PSF and supports post-processing with ADI, RDI, and SDI techniques.
 
 .. figure:: _static/eso.jpg
    :width: 100%
    :target: http://www.eso.org/public/news/eso1310/
 
-.. |ipa| raw:: html
-
-	<a href="http://www.ipa.phys.ethz.ch/" target="_blank">Institute of Particle Physics and Astrophysics</a>
-
-.. |planets| raw:: html
-
-   <a href="https://quanz-group.ethz.ch/" target="_blank">Exoplanets and Habitability Group</a>
-
-.. |cosmo| raw:: html
-
-   <a href="http://www.cosmology.ethz.ch/" target="_blank">Cosmology Research Group</a>
-
 .. toctree::
    :maxdepth: 2
-   :caption: Getting Started
+   :caption: Getting started
+   :hidden:
 
    installation
-   running
+   tutorials/first_example.ipynb
 
 .. toctree::
    :maxdepth: 2
-   :caption: User Documentation
+   :caption: User documentation
+   :hidden:
 
-   overview
    architecture
    configuration
-   tutorial
-   examples
+   pipeline_modules
+   running_pynpoint
+   tutorials
    modules
 
 .. toctree::
    :maxdepth: 2
-   :caption: NEAR Documentation
+   :caption: NEAR documentation
+   :hidden:
 
    near
 
 .. toctree::
    :maxdepth: 2
-   :caption: Developer Documentation
+   :caption: Developer documentation
+   :hidden:
 
    python
    coding
 
 .. toctree::
    :maxdepth: 2
-   :caption: About PynPoint
+   :caption: About
+   :hidden:
 
    mailing
    contributing
diff --git a/docs/installation.rst b/docs/installation.rst
index ad83acf..a991253 100644
--- a/docs/installation.rst
+++ b/docs/installation.rst
@@ -3,12 +3,14 @@
 Installation
 ============
 
-PynPoint is compatible with Python 3.6 and 3.7. Earlier versions (up to v0.7.0) are also compatible with Python 2.7. We highly recommend using Python 3 since several key Python projects have already |python| Python 2.
+PynPoint is compatible with Python 3.7/3.8/3.9. Earlier versions (up to v0.7.0) are also compatible with Python 2.7.
+
+.. _virtual_environment:
 
 Virtual Environment
 -------------------
 
-PynPoint is available in the |pypi| and on |github|. We recommend using a Python virtual environment to install and run PynPoint such that the correct versions of the dependencies can be installed without affecting other installed Python packages. First install `virtualenv`, for example with the |pip|:
+PynPoint is available in the `PyPI repository <https://pypi.org/project/pynpoint/>`_ and on `Github <https://github.com/PynPoint/PynPoint>`_. We recommend using a Python virtual environment to install and run PynPoint such that the correct versions of the dependencies can be installed without affecting other installed Python packages. First install `virtualenv`, for example with the `pip package manager <https://packaging.python.org/tutorials/installing-packages/>`_:
 
 .. code-block:: console
 
@@ -35,6 +37,8 @@ A virtual environment can be deactivated with:
 .. important::
    Make sure to adjust the path where the virtual environment is installed and activated.
 
+.. _installation_pypi:
+
 Installation from PyPI
 ----------------------
 
@@ -61,41 +65,49 @@ To update the installation to the most recent version:
 Installation from Github
 ------------------------
 
-The repository can be cloned from Github, which contains the most recent implementations:
+Instead of using ``pip``, the repository with the most recent implementations can also be cloned from Github:
 
 .. code-block:: console
 
     $ git clone git@github.com:PynPoint/PynPoint.git
 
-In that case, the dependencies can be installed from the PynPoint folder:
+The package is installed by running the setup script:
 
 .. code-block:: console
 
-    $ pip install -r requirements.txt
+    $ python setup.py install
 
-And to update the dependencies to the latest versions with which PynPoint is compatible:
+Alternatively, the path of the repository can be added to the ``PYTHONPATH`` environment variable such that PynPoint can be imported from any working folder:
 
 .. code-block:: console
 
-    $ pip install --upgrade -r requirements.txt 
+    $ echo "export PYTHONPATH='$PYTHONPATH:/path/to/pynpoint'" >> folder_name/bin/activate
 
-Once a local copy of the repository exists, new commits can be pulled from Github with:
+The dependencies can also be installed manually from the PynPoint folder:
 
 .. code-block:: console
 
-    $ git pull origin master
+    $ pip install -r requirements.txt
 
-By adding the path of the repository to the ``PYTHONPATH`` environment variable enables PynPoint to be imported from any location:
+Or updated to the latest versions with which PynPoint is compatible:
 
 .. code-block:: console
 
-    $ echo "export PYTHONPATH='$PYTHONPATH:/path/to/pynpoint'" >> folder_name/bin/activate
+    $ pip install --upgrade -r requirements.txt 
+
+Once a local copy of the repository exists, new commits can be pulled from Github with:
+
+.. code-block:: console
+
+    $ git pull origin main
 
 .. important::
    Make sure to adjust the local path in which PynPoint will be cloned from the Github repository.
 
 Do you want to make changes to the code? Then please fork the PynPoint repository on the Github page and clone your own fork instead of the main repository. We very much welcome contributions and pull requests (see :ref:`contributing` section).
 
+.. _testing_pynpoint:
+
 Testing PynPoint
 ----------------
 
@@ -115,19 +127,3 @@ The installation can be tested by starting Python in interactive mode and printi
          >>> sys.path
 
    The result should contain the folder in which the Github repository was cloned or the folder in which Python modules are installed with pip.
-
-.. |python| raw:: html
-
-   <a href="https://python3statement.org/" target="_blank">stopped supporting</a>
-
-.. |pypi| raw:: html
-
-   <a href="https://pypi.org/project/pynpoint/" target="_blank">PyPI repository</a>
-
-.. |github| raw:: html
-
-   <a href="https://github.com/PynPoint/PynPoint" target="_blank">Github</a>
-
-.. |pip| raw:: html
-
-   <a href="https://packaging.python.org/tutorials/installing-packages/" target="_blank">pip package manager</a>
diff --git a/docs/mailing.rst b/docs/mailing.rst
index 600b6ea..81167f6 100644
--- a/docs/mailing.rst
+++ b/docs/mailing.rst
@@ -3,12 +3,8 @@
 Mailing List
 ============
 
-The PynPoint mailing list is used to announce releases, new functionalities, pipeline modules, and other updates. The mailing list can be joined by sending a blank email to pynpoint-join@lists.phys.ethz.ch.
+The PynPoint mailing list is used to announce releases, new functionalities, and other updates. The mailing list can be joined by sending a blank email to pynpoint-join@lists.phys.ethz.ch.
 
 The mailing list can be consulted for suggestions and questions about PynPoint by sending an email to pynpoint@lists.phys.ethz.ch.
 
-Further information about the mailing list can be found on the |mailing|.
-
-.. |mailing| raw:: html
-
-   <a href="https://lists.phys.ethz.ch/listinfo/pynpoint" target="_blank">web interface</a>
+Further information about the mailing list can be found on the `web interface <https://lists.phys.ethz.ch/listinfo/pynpoint>`_.
diff --git a/docs/near.rst b/docs/near.rst
index 4dc5cb0..532f147 100644
--- a/docs/near.rst
+++ b/docs/near.rst
@@ -1,6 +1,6 @@
 .. _near_data:
 
-Data Reduction
+Data reduction
 ==============
 
 .. _near_intro:
@@ -8,9 +8,9 @@ Data Reduction
 Introduction
 ------------
 
-The documentation on this page contains an introduction into data reduction of the modified |visir| instrument for the |near| (New Earths in the Alpha Cen Region) experiment. All data are available in the ESO archive under program ID |archive|.
+The documentation on this page contains an introduction to the data reduction of the modified `VLT/VISIR <https://www.eso.org/sci/facilities/paranal/instruments/visir.html>`_ instrument for the `NEAR <https://www.eso.org/public/news/eso1702/>`_ (New Earths in the Alpha Cen Region) experiment. All data are available in the ESO archive under program ID `2102.C-5011(A) <http://archive.eso.org/wdb/wdb/eso/sched_rep_arc/query?progid=2102.C-5011(A)>`_.
 
-The basic processing steps with PynPoint are described in the example below while a complete overview of all available pipeline modules can be found in the :ref:`overview` section. Further details about the pipeline architecture and data processing are also available in |stolker|. More in-depth information of the input parameters for individual PynPoint modules can be found in the :ref:`api`. 
+The basic processing steps with PynPoint are described in the example below, while a complete overview of all available pipeline modules can be found in the :ref:`pipeline_modules` section. Further details about the pipeline architecture and data processing are also available in `Stolker et al. (2019) <http://ui.adsabs.harvard.edu/abs/2019A%26A...621A..59S>`_. More in-depth information about the input parameters of individual PynPoint modules can be found in the :ref:`api`.
 
 Please also have a look at the :ref:`attribution` section when using PynPoint results in a publication. 
 
@@ -24,21 +24,7 @@ In this example, we will process the images of chop A (i.e., frames in which alp
 Setup
 ^^^^^
 
-To get started, use the instructions available in the :ref:`installation` section to install PynPoint.
-
-The results shown below are based on 1 hour of commissioning data of alpha Cen. There is a |bash| available to download all the FITS files (126 Gb). First make the bash script executable:
-
-.. code-block:: console
-
-    $ chmod +x near_files.sh
-
-And then execute it as:
-
-.. code-block:: console
-
-   $ ./near_files.sh
-
-You can also start by downloading only a few files by running a subset of the bash script lines (useful for validating the pipeline installation because analyzing the full data set takes hours).
+To get started, use the instructions available in the :ref:`installation` section to install PynPoint. We also need to download the NEAR data from the ESO program ID listed above. It is recommended to start by downloading only a few files in order to first validate the pipeline installation.
 
 Now that we have the data, we can start the data reduction with PynPoint!
 
@@ -76,7 +62,6 @@ The ``MEMORY`` and ``CPU`` setting can be adjusted. They define the number of im
 
 Note that in addition to the config file above, the ``working_place`` directory is also used to store the database file (`PynPoint_database.hdf5`). This database stores all intermediate results (typically a stack of images), which allows the user to rerun particular processing steps without having to rerun the complete pipeline. 
 
-
 Running PynPoint
 ^^^^^^^^^^^^^^^^
 
@@ -391,7 +376,7 @@ PynPoint also includes a module to calculate the detection limits of the final i
 Results
 -------
 
-The images that were exported to a FITS file can be visualized with a tool such as |ds9|. We can also use the :class:`~pynpoint.core.pypeline.Pypeline` functionalities to get the data from the database (without having to rerun the pipeline). For example, to get the residuals of the PSF subtraction:
+The images that were exported to a FITS file can be visualized with a tool such as `DS9 <http://ds9.si.edu/site/Home.html>`_. We can also use the :class:`~pynpoint.core.pypeline.Pypeline` functionalities to get the data from the database (without having to rerun the pipeline). For example, to get the residuals of the PSF subtraction:
 
 .. code-block:: python
 
@@ -427,27 +412,3 @@ Or to plot the detection limits with the error bars showing the variance of the
 .. image:: _static/near_limits.png
    :width: 70%
    :align: center
-
-.. |visir| raw:: html
-
-   <a href="https://www.eso.org/sci/facilities/paranal/instruments/visir.html" target="_blank">VLT/VISIR</a>
-
-.. |near| raw:: html
-
-   <a href="https://www.eso.org/public/news/eso1702/" target="_blank">NEAR</a>
-
-.. |stolker| raw:: html
-
-   <a href="http://ui.adsabs.harvard.edu/abs/2019A%26A...621A..59S" target="_blank">Stolker et al. (2019)</a>
-
-.. |archive| raw:: html
-
-   <a href="http://archive.eso.org/wdb/wdb/eso/sched_rep_arc/query?progid=2102.C-5011(A)" target="_blank">2102.C-5011(A)</a>
-
-.. |bash| raw:: html
-
-   <a href="https://people.phys.ethz.ch/~stolkert/pynpoint/near_files.sh" target="_blank">Bash script</a>
-
-.. |ds9| raw:: html
-
-   <a href="http://ds9.si.edu/site/Home.html" target="_blank">DS9</a>
diff --git a/docs/overview.rst b/docs/pipeline_modules.rst
similarity index 81%
rename from docs/overview.rst
rename to docs/pipeline_modules.rst
index 9424a66..328b919 100644
--- a/docs/overview.rst
+++ b/docs/pipeline_modules.rst
@@ -1,28 +1,34 @@
-.. _overview:
+.. _pipeline_modules:
 
-Overview
-========
+Pipeline modules
+================
 
-Here you find a list of all available pipeline modules with a very short description of what each module does. Reading modules import data into the database, writing modules export data from the database, and processing modules run a specific task of the data reduction and analysis. More details on the design of the pipeline can be found in the :ref:`architecture` section. 
+This page contains a list of all available pipeline modules and a short description of what they are used for. Reading modules import data into the database, writing modules export data from the database, and processing modules run a specific task of the data reduction or analysis. More details on the design of the pipeline can be found in the :ref:`architecture` section. 
 
 .. note::
    All PynPoint classes ending with ``Module`` in their name (e.g. :class:`~pynpoint.readwrite.fitsreading.FitsReadingModule`) are pipeline modules that can be added to an instance of :class:`~pynpoint.core.pypeline.Pypeline` (see :ref:`pypeline` section).
 
-.. _readmodule:
+.. important::
+   The pipeline modules with multiprocessing functionalities are indicated with "CPU" in parentheses. The number of parallel processes can be set with the ``CPU`` parameter in the configuration file and the number of images that are simultaneously loaded into memory with the ``MEMORY`` parameter. Pipeline modules that apply (in parallel) a function to subsets of images use a number of images per subset equal to ``MEMORY`` divided by ``CPU``.
 
-Reading Modules
+.. important::
+   The pipeline modules that are compatible with both regular imaging and integral field spectroscopy datasets (i.e. 3D and 4D data) are indicated with "IFS" in parentheses. All other modules are only compatible with regular imaging.
+
+.. _reading_module:
+
+Reading modules
 ---------------
 
-* :class:`~pynpoint.readwrite.fitsreading.FitsReadingModule`: Import FITS files and relevant header information into the database.
+* :class:`~pynpoint.readwrite.fitsreading.FitsReadingModule` (IFS): Import FITS files and relevant header information into the database.
 * :class:`~pynpoint.readwrite.hdf5reading.Hdf5ReadingModule`: Import datasets and attributes from an HDF5 file (as created by PynPoint).
 * :class:`~pynpoint.readwrite.attr_reading.AttributeReadingModule`: Import a list of values as dataset attribute.
 * :class:`~pynpoint.readwrite.attr_reading.ParangReadingModule`: Import a list of parallactic angles as dataset attribute.
 * :class:`~pynpoint.readwrite.attr_reading.WavelengthReadingModule`: Import a list of calibrated wavelengths as dataset attribute.
 * :class:`~pynpoint.readwrite.nearreading.NearReadingModule` (CPU): Import VLT/VISIR data for the NEAR experiment.
 
-.. _writemodule:
+.. _writing_module:
 
-Writing Modules
+Writing modules
 ---------------
 
 * :class:`~pynpoint.readwrite.fitswriting.FitsWritingModule`: Export a dataset from the database to a FITS file.
@@ -31,12 +37,12 @@ Writing Modules
 * :class:`~pynpoint.readwrite.attr_writing.AttributeWritingModule`: Export a list of attribute values to an ASCII file.
 * :class:`~pynpoint.readwrite.attr_writing.ParangWritingModule`: Export the parallactic angles of a dataset to an ASCII file.
 
-.. _procmodule:
+.. _processing_module:
 
-Processing Modules
+Processing modules
 ------------------
 
-Background Subtraction
+Background subtraction
 ~~~~~~~~~~~~~~~~~~~~~~
 
 * :class:`~pynpoint.processing.background.SimpleBackgroundSubtractionModule`: Simple background subtraction for dithering datasets.
@@ -44,7 +50,7 @@ Background Subtraction
 * :class:`~pynpoint.processing.background.LineSubtractionModule` (CPU): Subtraction of striped detector artifacts.
 * :class:`~pynpoint.processing.background.NoddingBackgroundModule`: Background subtraction for nodding datasets.
 
-Bad Pixel Cleaning
+Bad pixel cleaning
 ~~~~~~~~~~~~~~~~~~
 
 * :class:`~pynpoint.processing.badpixel.BadPixelSigmaFilterModule` (CPU): Find and replace bad pixels with a sigma filter.
@@ -53,7 +59,7 @@ Bad Pixel Cleaning
 * :class:`~pynpoint.processing.badpixel.BadPixelTimeFilterModule` (CPU): Sigma clipping of bad pixels along the time dimension.
 * :class:`~pynpoint.processing.badpixel.ReplaceBadPixelsModule` (CPU): Replace bad pixels based on a bad pixel map.
 
-Basic Processing
+Basic processing
 ~~~~~~~~~~~~~~~~
 
 * :class:`~pynpoint.processing.basic.SubtractImagesModule`: Subtract two stacks of images.
@@ -67,9 +73,9 @@ Centering
 * :class:`~pynpoint.processing.centering.StarAlignmentModule` (CPU): Align the images with a cross-correlation.
 * :class:`~pynpoint.processing.centering.FitCenterModule` (CPU): Fit the PSF with a 2D Gaussian or Moffat function.
 * :class:`~pynpoint.processing.centering.ShiftImagesModule`: Shift a stack of images.
-* :class:`~pynpoint.processing.centering.WaffleCenteringModule`: Use the waffle spots to center the images.
+* :class:`~pynpoint.processing.centering.WaffleCenteringModule` (IFS): Use the waffle spots to center the images.
 
-Dark and Flat Correction
+Dark and flat correction
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
 * :class:`~pynpoint.processing.darkflat.DarkCalibrationModule`: Dark frame subtraction.
@@ -81,13 +87,13 @@ Denoising
 * :class:`~pynpoint.processing.timedenoising.WaveletTimeDenoisingModule` (CPU): Wavelet-based denoising in the time domain.
 * :class:`~pynpoint.processing.timedenoising.TimeNormalizationModule` (CPU): Normalize a stack of images.
 
-Detection Limits
+Detection limits
 ~~~~~~~~~~~~~~~~
 
 * :class:`~pynpoint.processing.limits.ContrastCurveModule` (CPU): Compute a contrast curve.
 * :class:`~pynpoint.processing.limits.MassLimitsModule`: Calculate mass limits from a contrast curve and an isochrones model grid.
 
-Extract Star
+Extract star
 ~~~~~~~~~~~~
 
 * :class:`~pynpoint.processing.extract.StarExtractionModule` (CPU): Locate and crop the position of the star.
@@ -98,7 +104,7 @@ Filters
 
 * :class:`~pynpoint.processing.filter.GaussianFilterModule`: Apply a Gaussian filter to the images.
 
-Flux and Position
+Flux and position
 ~~~~~~~~~~~~~~~~~
 
 * :class:`~pynpoint.processing.fluxposition.FakePlanetModule`: Inject an artificial planet in a dataset.
@@ -108,7 +114,7 @@ Flux and Position
 * :class:`~pynpoint.processing.fluxposition.AperturePhotometryModule` (CPU): Compute the integrated flux at a position.
 * :class:`~pynpoint.processing.fluxposition.SystematicErrorModule`: Compute the systematic errors on the flux and position.
 
-Frame Selection
+Frame selection
 ~~~~~~~~~~~~~~~
 
 * :class:`~pynpoint.processing.frameselection.RemoveFramesModule`: Remove images by their index number.
@@ -120,34 +126,34 @@ Frame Selection
 * :class:`~pynpoint.processing.frameselection.SelectByAttributeModule`: Select images by the ascending/descending attribute values.
 * :class:`~pynpoint.processing.frameselection.ResidualSelectionModule`: Frame selection on the residuals of the PSF subtraction.
 
-Image Resizing
+Image resizing
 ~~~~~~~~~~~~~~
 
-* :class:`~pynpoint.processing.resizing.CropImagesModule`: Crop the images.
+* :class:`~pynpoint.processing.resizing.CropImagesModule` (IFS): Crop the images.
 * :class:`~pynpoint.processing.resizing.ScaleImagesModule` (CPU): Resample the images (spatially and/or in flux).
 * :class:`~pynpoint.processing.resizing.AddLinesModule`: Add pixel lines on the sides of the images.
 * :class:`~pynpoint.processing.resizing.RemoveLinesModule`: Remove pixel lines from the sides of the images.
 
-PCA Background Subtraction
+PCA background subtraction
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 * :class:`~pynpoint.processing.pcabackground.PCABackgroundPreparationModule`: Preparation for the PCA-based background subtraction.
 * :class:`~pynpoint.processing.pcabackground.PCABackgroundSubtractionModule`: PCA-based background subtraction.
 * :class:`~pynpoint.processing.pcabackground.DitheringBackgroundModule`: Wrapper for background subtraction of dithering datasets.
 
-PSF Preparation
+PSF preparation
 ~~~~~~~~~~~~~~~
 
-* :class:`~pynpoint.processing.psfpreparation.PSFpreparationModule`: Mask the images before the PSF subtraction.
+* :class:`~pynpoint.processing.psfpreparation.PSFpreparationModule` (IFS): Mask the images before the PSF subtraction.
 * :class:`~pynpoint.processing.psfpreparation.AngleInterpolationModule`: Interpolate the parallactic angles between the start and end values.
 * :class:`~pynpoint.processing.psfpreparation.AngleCalculationModule`: Calculate the parallactic angles.
-* :class:`~pynpoint.processing.psfpreparation.SortParangModule`: Sort the images by parallactic angle.
+* :class:`~pynpoint.processing.psfpreparation.SortParangModule` (IFS): Sort the images by parallactic angle.
 * :class:`~pynpoint.processing.psfpreparation.SDIpreparationModule`: Prepare the images for SDI.
 
-PSF Subtraction
+PSF subtraction
 ~~~~~~~~~~~~~~~
 
-* :class:`~pynpoint.processing.psfsubtraction.PcaPsfSubtractionModule` (CPU): PSF subtraction with PCA.
+* :class:`~pynpoint.processing.psfsubtraction.PcaPsfSubtractionModule` (CPU, IFS): PSF subtraction with PCA.
 * :class:`~pynpoint.processing.psfsubtraction.ClassicalADIModule` (CPU): PSF subtraction with classical ADI.
 
 Stacking
@@ -155,8 +161,5 @@ Stacking
 
 * :class:`~pynpoint.processing.stacksubset.StackAndSubsetModule`: Stack and/or select a random subset of the images.
 * :class:`~pynpoint.processing.stacksubset.StackCubesModule`: Collapse each original data cube separately.
-* :class:`~pynpoint.processing.stacksubset.DerotateAndStackModule`: Derotate and/or stack the images.
+* :class:`~pynpoint.processing.stacksubset.DerotateAndStackModule` (IFS): Derotate and/or stack the images.
 * :class:`~pynpoint.processing.stacksubset.CombineTagsModule`: Combine multiple database tags into a single dataset.
-
-.. note::
-   The pipeline modules with multiprocessing functionalities are indicated with "CPU" in parentheses. The number of parallel processes can be set with the ``CPU`` parameter in the central configuration file and the number of images that is simultaneously loaded into the memory with the ``MEMORY`` parameter. Pipeline modules that apply (in parallel) a function to subsets of images use a number of images per subset equal to ``MEMORY`` divided by ``CPU``.
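The interplay between the ``MEMORY`` and ``CPU`` parameters described in the note above can be sketched in a few lines of Python; the numbers below are illustrative, not defaults:

```python
# Illustrative values; the actual numbers are set in the PynPoint
# configuration file (PynPoint_config.ini).
memory = 100  # images loaded into memory at once
cpu = 4       # number of parallel processes

# Modules that apply a function in parallel hand each process
# subsets of MEMORY // CPU images.
images_per_subset = memory // cpu
print(images_per_subset)  # 25
```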
diff --git a/docs/pynpoint.core.rst b/docs/pynpoint.core.rst
index 712a765..d5aa418 100644
--- a/docs/pynpoint.core.rst
+++ b/docs/pynpoint.core.rst
@@ -36,7 +36,6 @@ pynpoint.core.pypeline module
    :undoc-members:
    :show-inheritance:
 
-
 Module contents
 ---------------
 
diff --git a/docs/pynpoint.processing.rst b/docs/pynpoint.processing.rst
index 387e1e5..190719a 100644
--- a/docs/pynpoint.processing.rst
+++ b/docs/pynpoint.processing.rst
@@ -132,7 +132,6 @@ pynpoint.processing.timedenoising module
    :undoc-members:
    :show-inheritance:
 
-
 Module contents
 ---------------
 
diff --git a/docs/pynpoint.readwrite.rst b/docs/pynpoint.readwrite.rst
index 9d62dda..d51be0b 100644
--- a/docs/pynpoint.readwrite.rst
+++ b/docs/pynpoint.readwrite.rst
@@ -68,7 +68,6 @@ pynpoint.readwrite.textwriting module
    :undoc-members:
    :show-inheritance:
 
-
 Module contents
 ---------------
 
diff --git a/docs/pynpoint.util.rst b/docs/pynpoint.util.rst
index d44ed6f..bc67b07 100644
--- a/docs/pynpoint.util.rst
+++ b/docs/pynpoint.util.rst
@@ -12,6 +12,14 @@ pynpoint.util.analysis module
    :undoc-members:
    :show-inheritance:
 
+pynpoint.util.apply\_func module
+--------------------------------
+
+.. automodule:: pynpoint.util.apply_func
+   :members:
+   :undoc-members:
+   :show-inheritance:
+
 pynpoint.util.attributes module
 -------------------------------
 
@@ -92,6 +100,14 @@ pynpoint.util.multistack module
    :undoc-members:
    :show-inheritance:
 
+pynpoint.util.postproc module
+-----------------------------
+
+.. automodule:: pynpoint.util.postproc
+   :members:
+   :undoc-members:
+   :show-inheritance:
+
 pynpoint.util.psf module
 ------------------------
 
@@ -116,6 +132,14 @@ pynpoint.util.residuals module
    :undoc-members:
    :show-inheritance:
 
+pynpoint.util.sdi module
+------------------------
+
+.. automodule:: pynpoint.util.sdi
+   :members:
+   :undoc-members:
+   :show-inheritance:
+
 pynpoint.util.star module
 -------------------------
 
@@ -132,10 +156,10 @@ pynpoint.util.tests module
    :undoc-members:
    :show-inheritance:
 
-pynpoint.util.types module
---------------------------
+pynpoint.util.type\_aliases module
+----------------------------------
 
-.. automodule:: pynpoint.util.types
+.. automodule:: pynpoint.util.type_aliases
    :members:
    :undoc-members:
    :show-inheritance:
@@ -148,7 +172,6 @@ pynpoint.util.wavelets module
    :undoc-members:
    :show-inheritance:
 
-
 Module contents
 ---------------
 
diff --git a/docs/python.rst b/docs/python.rst
index 16fa711..5d3e69b 100644
--- a/docs/python.rst
+++ b/docs/python.rst
@@ -1,30 +1,18 @@
 .. _python:
 
-Python Guidelines
+Python guidelines
 =================
 
 .. _starting:
 
-Getting Started
+Getting started
 ---------------
 
 The modular architecture of PynPoint allows for easy implementation of new pipeline modules and we welcome contributions from users. Before writing a new PynPoint module, it is helpful to have a look at the :ref:`architecture` section. In addition, some basic knowledge of Python is required and some understanding of the following items can be helpful:
 
-    * Python |types| such as lists, tuples, and dictionaries.
-    * |classes|, in particular the concept of inheritance.
-    * |abc| as interfaces.
-
-.. |types| raw:: html
-
-   <a href="https://docs.python.org/3/library/stdtypes.html" target="_blank">types</a>
-
-.. |classes| raw:: html
-
-   <a href="https://docs.python.org/3/tutorial/classes.html" target="_blank">Classes</a>
-
-.. |abc| raw:: html
-
-   <a href="https://docs.python.org/3/library/abc.html" target="_blank">Abstract classes</a>
+    * Python `types <https://docs.python.org/3/library/stdtypes.html>`_ such as lists, tuples, and dictionaries.
+    * `Classes <https://docs.python.org/3/tutorial/classes.html>`_ and in particular the concept of inheritance.
+    * `Abstract classes <https://docs.python.org/3/library/abc.html>`_ as interfaces.
 
 .. _conventions:
 
@@ -33,41 +21,17 @@ Conventions
 
 Before we start writing a new PynPoint module, please take notice of the following style conventions:
 
-    * |pep8| -- style guide for Python code
-    * We recommend using |pylint| and |pycodestyle| to analyze newly written code in order to keep PynPoint well structured, readable, and documented.
+    * `PEP 8 <https://www.python.org/dev/peps/pep-0008/>`_ -- style guide for Python code
+    * We recommend using `pylint <https://www.pylint.org>`_ and `pycodestyle <https://pypi.org/project/pycodestyle/>`_ to analyze newly written code in order to keep PynPoint well structured, readable, and documented.
     * Names of class member should start with ``m_``.
     * Images should ideally not be read from and written to the central database at once but in amounts of ``MEMORY``.
 
-.. |pep8| raw:: html
-
-   <a href="https://www.python.org/dev/peps/pep-0008/" target="_blank">PEP 8</a>
-
-.. |pylint| raw:: html
-
-   <a href="https://www.pylint.org" target="_blank">pylint</a>
-
-.. |pycodestyle| raw:: html
-
-   <a href="https://pypi.org/project/pycodestyle/" target="_blank">pycodestyle</a>
-
 Unit tests
 ----------
 
-PynPoint is a robust pipeline package with 95% of the code covered by |unittest|. Testing of the package is done by running ``make test`` in the cloned repository. This requires the installation of:
+PynPoint is a robust pipeline package with 95% of the code covered by `unit tests <https://docs.python.org/3/library/unittest.html>`_. Testing of the package is done by running ``make test`` in the cloned repository. This requires the installation of:
 
-   * |pytest|
-   * |pytest-cov|
+   * `pytest <https://docs.pytest.org/en/latest/getting-started.html>`_
+   * `pytest-cov <https://pytest-cov.readthedocs.io/en/latest/readme.html>`_
 
 The unit tests ensure that the output from existing functionalities will not change when new code is added. With these things in mind, we are now ready to code!
-
-.. |unittest| raw:: html
-
-   <a href="https://docs.python.org/3/library/unittest.html" target="_blank">unit tests</a>
-
-.. |pytest| raw:: html
-
-   <a href="https://docs.pytest.org/en/latest/getting-started.html" target="_blank">pytest</a>
-
-.. |pytest-cov| raw:: html
-
-   <a href="https://pytest-cov.readthedocs.io/en/latest/readme.html" target="_blank">pytest-cov</a>
diff --git a/docs/requirements.txt b/docs/requirements.txt
new file mode 100644
index 0000000..0b13a48
--- /dev/null
+++ b/docs/requirements.txt
@@ -0,0 +1,4 @@
+nbsphinx
+pandoc
+jupyter
+sphinx_book_theme
diff --git a/docs/running.rst b/docs/running.rst
deleted file mode 100644
index 5fd918a..0000000
--- a/docs/running.rst
+++ /dev/null
@@ -1,100 +0,0 @@
-.. _running:
-
-Running PynPoint
-================
-
-Introduction
-------------
-
-As a first example, we provide a preprocessed dataset of beta Pic in the M' filter (4.8 μm). This archival dataset was obtained with NACO at the Very Large Telescope under the ESO program ID |id|. The exposure time of the individual images was 65 ms and the total field rotation about 50 degrees. To limit the size of the dataset, every 200 images have been mean-collapsed. The data is stored in an HDF5 database (see :ref:`hdf5-files`) which contains 263 images of 80x80 pixels, the parallactic angles, and the pixel scale. The dataset is stored under the tag name ``stack``.
-
-First Example
--------------
-
-The following script downloads the data (13 MB), runs the PSF subtraction with PynPoint, and plots an image of the median-collapsed residuals:
-
-.. code-block:: python
-
-   import os
-   import urllib
-   import matplotlib.pyplot as plt
-
-   from pynpoint import Pypeline, \
-                        Hdf5ReadingModule, \
-                        PSFpreparationModule, \
-                        PcaPsfSubtractionModule
-
-   working_place = '/path/to/working_place/'
-   input_place = '/path/to/input_place/'
-   output_place = '/path/to/output_place/'
-
-   data_url = 'https://people.phys.ethz.ch/~stolkert/pynpoint/betapic_naco_mp.hdf5'
-   data_loc = os.path.join(input_place, 'betapic_naco_mp.hdf5')
-
-   urllib.request.urlretrieve(data_url, data_loc)
-
-   pipeline = Pypeline(working_place_in=working_place,
-                       input_place_in=input_place,
-                       output_place_in=output_place)
-
-   module = Hdf5ReadingModule(name_in='read',
-                              input_filename='betapic_naco_mp.hdf5',
-                              input_dir=None,
-                              tag_dictionary={'stack': 'stack'})
-
-   pipeline.add_module(module)
-
-   module = PSFpreparationModule(name_in='prep',
-                                 image_in_tag='stack',
-                                 image_out_tag='prep',
-                                 mask_out_tag=None,
-                                 norm=False,
-                                 resize=None,
-                                 cent_size=0.15,
-                                 edge_size=1.1)
-
-   pipeline.add_module(module)
-
-   module = PcaPsfSubtractionModule(pca_numbers=[20, ],
-                                    name_in='pca',
-                                    images_in_tag='prep',
-                                    reference_in_tag='prep',
-                                    res_median_tag='residuals')
-
-   pipeline.add_module(module)
-
-   pipeline.run()
-
-   residuals = pipeline.get_data('residuals')
-   pixscale = pipeline.get_attribute('stack', 'PIXSCALE')
-
-   size = pixscale*residuals.shape[-1]/2.
-
-   plt.imshow(residuals[0, ], origin='lower', extent=[size, -size, -size, size])
-   plt.title('beta Pic b - NACO M\' - median residuals')
-   plt.xlabel('R.A. offset [arcsec]', fontsize=12)
-   plt.ylabel('Dec. offset [arcsec]', fontsize=12)
-   plt.colorbar()
-   plt.savefig(os.path.join(output_place, 'residuals.png'), bbox_inches='tight')
-
-.. |id| raw:: html
-
-   <a href="http://archive.eso.org/wdb/wdb/eso/sched_rep_arc/query?progid=090.C-0653(D)" target="_blank">090.C-0653(D)</a>
-
-.. important::
-   In the example, make sure to change the path of the ``working place``, ``input place``, and ``output place``.
-
-Detection of beta Pic b
------------------------
-
-That's it! The residuals of the PSF subtraction are stored in the database under the tag name ``residuals`` and the plotted image is located in the ``output_place_in`` folder. The image shows the detection of the exoplanet |beta_pic_b|:
-
-.. |beta_pic_b| raw:: html
-
-   <a href="http://www.openexoplanetcatalogue.com/planet/beta%20Pic%20b/" target="_blank">beta Pic b</a>
-
-.. image:: _static/residuals.png
-   :width: 70%
-   :align: center
-
-The star of this planetary system is located in the center of the image (which is masked here) and the orientation of the image is such that North is up and East is left. The bright yellow feature in the bottom right direction is the planet beta Pic b at an angular separation of 0.46 arcseconds.
diff --git a/docs/running_pynpoint.rst b/docs/running_pynpoint.rst
new file mode 100644
index 0000000..c3c4f6e
--- /dev/null
+++ b/docs/running_pynpoint.rst
@@ -0,0 +1,122 @@
+.. _running_pynpoint:
+
+Running PynPoint
+================
+
+.. _running_intro:
+
+Introduction
+------------
+
+The pipeline can be executed with a Python script, in `interactive mode <https://docs.python.org/3/tutorial/interpreter.html#interactive-mode>`_, or with a `Jupyter Notebook <https://jupyter.org/>`_. The main components of PynPoint are the pipeline and the three types of pipeline modules:
+
+1. :class:`~pynpoint.core.pypeline.Pypeline` -- The actual pipeline which capsules a list of pipeline modules.
+
+2. :class:`~pynpoint.core.processing.ReadingModule` -- Module for importing data and relevant header information from FITS, HDF5, or ASCII files into the database.
+
+3. :class:`~pynpoint.core.processing.WritingModule` -- Module for exporting results from the database into FITS, HDF5 or ASCII files.
+
+4. :class:`~pynpoint.core.processing.ProcessingModule` -- Module for processing data with a specific data reduction or analysis recipe.
+
+.. _initiating_pypeline:
+
+Initiating the Pypeline
+-----------------------
+
+The pipeline is initiated by creating an instance of :class:`~pynpoint.core.pypeline.Pypeline`:
+
+.. code-block:: python
+
+    pipeline = Pypeline(working_place_in='/path/to/working_place',
+                        input_place_in='/path/to/input_place',
+                        output_place_in='/path/to/output_place')
+
+PynPoint creates an HDF5 database called ``PynPoint_database.hdf5`` in the ``working_place_in`` of the pipeline. This is the central data storage in which the processing results from a :class:`~pynpoint.core.processing.ProcessingModule` are stored. The advantage of the HDF5 format is that reading of data is much faster than from FITS files and it is also possible to quickly read subsets from large datasets.
+
+Restoring data from an already existing pipeline database can be done by creating an instance of :class:`~pynpoint.core.pypeline.Pypeline` with the ``working_place_in`` pointing to the path of the ``PynPoint_database.hdf5`` file.
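For example, a minimal sketch of reopening an earlier session (the paths and the ``science`` tag are placeholders):

```python
from pynpoint import Pypeline

# Pointing working_place_in at the folder that already contains
# PynPoint_database.hdf5 restores the stored datasets from a previous run
pipeline = Pypeline(working_place_in='/path/to/working_place',
                    input_place_in='/path/to/input_place',
                    output_place_in='/path/to/output_place')

# Datasets from the earlier run are directly available again
images = pipeline.get_data('science')
```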
+
+.. _running_modules:
+
+Running pipeline modules
+------------------------
+
+Input data is read into the central database with a :class:`~pynpoint.core.processing.ReadingModule`. By default, PynPoint reads data from the ``input_place_in``, but a manual folder can also be set, for example to read different types of data into separate database tags (e.g., dark frames, flat fields, and science data).
+
+For example, to read the images from FITS files that are located in the default input place:
+
+.. code-block:: python
+
+    module = FitsReadingModule(name_in='read',
+                               input_dir=None,
+                               image_tag='science')
+
+    pipeline.add_module(module)
+
+The images from the FITS files are stored in the database as a dataset with a unique tag. This tag can be used by other pipeline modules to read the data for further processing.
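To read data from a manual folder into its own database tag, for example flat fields (the path is a placeholder):

```python
module = FitsReadingModule(name_in='read_flat',
                           input_dir='/path/to/flat',
                           image_tag='flat')

pipeline.add_module(module)
```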
+
+The parallactic angles can be read from a text or FITS file and are attached as attribute to a dataset:
+
+.. code-block:: python
+
+    module = ParangReadingModule(name_in='parang',
+                                 data_tag='science',
+                                 file_name='parang.dat',
+                                 input_dir=None)
+
+    pipeline.add_module(module)
+
+Finally, we run all pipeline modules:
+
+.. code-block:: python
+
+    pipeline.run()
+
+Alternatively, it is also possible to run each pipeline module individually by their ``name_in`` value:
+
+.. code-block:: python
+
+    pipeline.run_module('read')
+    pipeline.run_module('parang')
+
+.. important::
+   Some pipeline modules require pixel coordinates for certain arguments. Throughout PynPoint, pixel coordinates are zero-indexed, meaning that (x, y) = (0, 0) corresponds to the center of the pixel in the bottom-left corner of the image. This means that there is an offset of -1 in both directions with respect to the pixel coordinates of DS9, for which the center of the bottom-left pixel is (x, y) = (1, 1).
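The one-pixel offset can be sketched with small helper functions (hypothetical names, not part of PynPoint):

```python
# Hypothetical helpers illustrating the indexing convention: PynPoint is
# zero-indexed, DS9 is one-indexed, so the two systems differ by exactly 1
# in both x and y.
def ds9_to_pynpoint(x_ds9, y_ds9):
    return x_ds9 - 1.0, y_ds9 - 1.0

def pynpoint_to_ds9(x_pp, y_pp):
    return x_pp + 1.0, y_pp + 1.0

# The center of the bottom-left pixel:
print(ds9_to_pynpoint(1.0, 1.0))  # (0.0, 0.0)
```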
+
+.. _hdf5_files:
+
+HDF5 database
+-------------
+
+There are several ways to access the datasets in the HDF5 database that is used by PynPoint:
+
+* The :class:`~pynpoint.readwrite.fitswriting.FitsWritingModule` exports a dataset from the database into a FITS file.
+
+* Several methods of the :class:`~pynpoint.core.pypeline.Pypeline` class help to easily retrieve data and attributes from the database. For example:
+
+   * To read a dataset:
+
+     .. code-block:: python
+
+        pipeline.get_data('tag_name')
+
+   * To read an attribute of a dataset:
+
+     .. code-block:: python
+
+        pipeline.get_attribute('tag_name', 'attr_name')
+
+* The `h5py <http://www.h5py.org/>`_ Python package can be used to access the HDF5 file directly.
+
+* There are external tools available such as `HDFCompass <https://support.hdfgroup.org/projects/compass/download.html>`_ or `HDFView <https://support.hdfgroup.org/downloads/index.html>`_ to read, inspect, and visualize data and attributes. HDFCompass is easy to use and has a basic plotting functionality. In HDFCompass, the static PynPoint attributes can be opened with the *Reopen as HDF5 Attributes* option.
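A self-contained sketch of reading a PynPoint-style database with ``h5py``; a tiny mock file is created first, so the dataset name and all values below are made up:

```python
import os
import tempfile

import h5py
import numpy as np

# Create a small mock database with the same layout that PynPoint uses:
# a dataset per tag, with static attributes attached to the dataset
path = os.path.join(tempfile.mkdtemp(), 'PynPoint_database.hdf5')

with h5py.File(path, 'w') as hdf:
    dset = hdf.create_dataset('science', data=np.zeros((10, 80, 80)))
    dset.attrs['PIXSCALE'] = 0.027  # arcsec per pixel (hypothetical value)

# Open the file directly and read a subset of the images
with h5py.File(path, 'r') as hdf:
    tags = list(hdf.keys())                   # available dataset tags
    shape = hdf['science'].shape              # full dataset shape
    pixscale = hdf['science'].attrs['PIXSCALE']
    subset = hdf['science'][0:3, ]            # fast read of a slice only

print(tags, shape, pixscale, subset.shape)
```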
+
+.. _data_attributes:
+
+Dataset attributes
+------------------
+
+Apart from using :meth:`~pynpoint.core.pypeline.Pypeline.get_attribute`, it is also possible to print and return all attributes of a dataset with the :meth:`~pynpoint.core.pypeline.Pypeline.list_attributes` method of :class:`~pynpoint.core.pypeline.Pypeline`:
+
+.. code-block:: python
+
+  attr_dict = pipeline.list_attributes('tag_name')
+
+The method returns a dictionary that contains both the static and non-static attributes.
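A sketch of how the returned dictionary might be split; the attribute names and values below are made up, and the rule of thumb assumed here is that static attributes (e.g. ``PIXSCALE``) are single values while non-static attributes (e.g. ``PARANG``) hold one value per image:

```python
# Hypothetical attribute dictionary, shaped like the return value of
# list_attributes (all values are made up)
attr_dict = {'PIXSCALE': 0.027,
             'INSTRUMENT': 'NACO',
             'PARANG': [10.0, 10.5, 11.0]}

# Separate scalar (static) from per-image (non-static) attributes
static = {k: v for k, v in attr_dict.items()
          if not isinstance(v, (list, tuple))}
non_static = {k: v for k, v in attr_dict.items()
              if isinstance(v, (list, tuple))}

print(sorted(static))      # ['INSTRUMENT', 'PIXSCALE']
print(sorted(non_static))  # ['PARANG']
```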
diff --git a/docs/tutorial.rst b/docs/tutorial.rst
deleted file mode 100644
index 043c304..0000000
--- a/docs/tutorial.rst
+++ /dev/null
@@ -1,122 +0,0 @@
-.. _tutorial:
-
-Tutorial
-========
-
-.. _introduction:
-
-Introduction
-------------
-
-The pipeline can be executed with a Python script, in interactive mode of Python, or with a Jupyter Notebook. The pipeline works with two different components:
-
-1. Pipeline modules which read, write, and process data:
-
-	1.1 :class:`pynpoint.core.processing.ReadingModule` - Reading of the data and relevant header information.
-
-	1.2 :class:`pynpoint.core.processing.WritingModule` - Exporting of results from the database.
-
-	1.3 :class:`pynpoint.core.processing.ProcessingModule` - Processing and analysis of the data.
-
-2. The actual pipeline :class:`pynpoint.core.pypeline.Pypeline` which capsules a list of pipeline modules.
-
-.. important::
-   Pixel coordinates are zero-indexed, meaning that (x, y) = (0, 0) corresponds to the center of the pixel in the bottom-left corner of the image. The coordinate of the bottom-left corner is therefore (x, y) = (-0.5, -0.5). This means that there is an offset of -1.0 in both directions with respect to the pixel coordinates of DS9, for which the bottom-left corner is (x, y) = (0.5, 0.5).
-
-.. _data-types:
-
-Data Types
-----------
-
-PynPoint currently works with three types of input and output data:
-
-* FITS files
-* HDF5 files
-* ASCII files
-
-PynPoint creates an HDF5 database called ``PynPoin_database.hdf5`` in the ``working_place_in`` of the pipeline. This is the central data storage in which the results of the processing steps are saved. The advantage of the HDF5 data format is that reading of data is much faster compared to the FITS data format and it is possible to quickly read subsets from very large datasets.
-
-Input data is read into the central database with a :class:`~pynpoint.core.processing.ReadingModule`. By default, PynPoint will read data from the ``input_place_in`` but setting a manual folder is possible to read data to separate database tags (e.g., dark frames, flat fields, and science data). Here we show an example of how to read FITS files and a list of parallactic angles.
-
-First, we need to create an instance of :class:`~pynpoint.core.pypeline.Pypeline`:
-
-.. code-block:: python
-
-    pipeline = Pypeline(working_place_in="/path/to/working_place",
-                        input_place_in="/path/to/input_place",
-                        output_place_in="/path/to/output_place")
-
-Next, we read the science data from the the default input location:
-
-.. code-block:: python
-
-    module = FitsReadingModule(name_in="read_science",
-                               input_dir=None,
-                               image_tag="science")
-
-    pipeline.add_module(module)
-
-And we read the flat fields from a separate location:
-
-.. code-block:: python
-
-    module = FitsReadingModule(name_in="read_flat",
-                               input_dir="/path/to/flat",
-                               image_tag="flat")
-
-    pipeline.add_module(module)
-
-The parallactic angles are read from a text file in the default input folder and attached as attribute to the science data:
-
-.. code-block:: python
-
-    module = ParangReadingModule(file_name="parang.dat",
-                                 name_in="parang",
-                                 input_dir=None,
-                                 data_tag="science")
-
-    pipeline.add_module(module)
-
-Finally, we run all pipeline modules:
-
-.. code-block:: python
-
-    pipeline.run()
-
-Alternatively, it is also possible to run the modules individually by their ``name_in`` value:
-
-.. code-block:: python
-
-    pipeline.run_module("read_science")
-    pipeline.run_module("read_flat")
-    pipeline.run_module("parang")
-
-The FITS files of the science data and flat fields are read and stored into the central HDF5 database. The data is labelled with a tag which is used by other pipeline module to access data from the database.
-
-Restoring data from an already existing pipeline database can be done by creating an instance of :class:`~pynpoint.core.pypeline.Pypeline` with the ``working_place_in`` pointing to the path of the ``PynPoint_database.hdf5`` file.
-
-PynPoint can also handle the HDF5 format as input and output data. Data and corresponding attributes can be exported as HDF5 file with  :class:`~pynpoint.readwrite.hdf5writing.Hdf5WritingModule`. This data format can be imported into the central database with :class:`~pynpoint.readwrite.hdf5reading.Hdf5ReadingModule`. Have a look at the :ref:`pynpoint-package` section for more information.
-
-.. _hdf5-files:
-
-HDF5 Files
-----------
-
-There are several options to access data from the central HDF5 database:
-
-	* Use :class:`~pynpoint.readwrite.fitswriting.FitsWritingModule` to export data to a FITS file, as shown in the :ref:`examples` section.
-	* Use the easy access functions of the :class:`pynpoint.core.pypeline` module to retrieve data and attributes from the database:
-
-		* ``pipeline.get_data(tag='tag_name')``
-
-		* ``pipeline.get_attribute(data_tag='tag_name', attr_name='attr_name')``
-
-	* Use an external tool such as |HDFCompass| or |HDFView| to read, inspect, and visualize data and attributes in the HDF5 database. We recommend using HDFCompass because it is easy to use and has a basic plotting functionality, allowing the user to quickly inspect images from a particular database tag. In HDFCompass, the static attributes can be opened with the `Reopen as HDF5 Attributes` option.
-
-.. |HDFCompass| raw:: html
-
-   <a href="https://support.hdfgroup.org/projects/compass/download.html" target="_blank">HDFCompass</a>
-
-.. |HDFView| raw:: html
-
-   <a href="https://support.hdfgroup.org/downloads/index.html" target="_blank">HDFView</a>
diff --git a/docs/tutorials.rst b/docs/tutorials.rst
new file mode 100644
index 0000000..42b1985
--- /dev/null
+++ b/docs/tutorials.rst
@@ -0,0 +1,17 @@
+.. _tutorials:
+
+Tutorials
+=========
+
+Curious to see an example with a more detailed workflow? There are several Jupyter notebooks with tutorials available:
+
+.. toctree::
+   :hidden:
+
+   tutorials/first_example.ipynb
+   tutorials/zimpol_adi.ipynb
+
+* :doc:`First example: PSF subtraction with PCA </tutorials/first_example>` (:download:`download notebook </tutorials/first_example.ipynb>`)
+* :doc:`Non-coronagraphic angular differential imaging </tutorials/zimpol_adi>` (:download:`download notebook </tutorials/zimpol_adi.ipynb>`)
+
+The notebooks can also be viewed on `GitHub <https://github.com/PynPoint/PynPoint/tree/main/docs/tutorials>`_.
diff --git a/docs/tutorials/first_example.ipynb b/docs/tutorials/first_example.ipynb
new file mode 100644
index 0000000..436056d
--- /dev/null
+++ b/docs/tutorials/first_example.ipynb
@@ -0,0 +1,443 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# First example"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Introduction"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "In this first example, we will run the PSF subtraction on a preprocessed ADI dataset of $\\beta$ Pictoris. This archival dataset was obtained with NACO in $M'$ (4.8 $\\mu$m) at the Very Large Telescope (ESO program ID: [090.C-0653(D)](http://archive.eso.org/wdb/wdb/eso/sched_rep_arc/query?progid=090.C-0653(D))). The exposure time per image was 65 ms and the parallactic rotation was about 50 degrees. Every 200 images have been mean-collapsed to limit the size of the dataset."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Getting started"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We start by importing the required Python modules for this tutorial."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "import urllib\n",
+    "import matplotlib.pyplot as plt"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "And also the pipeline and pipeline modules of PynPoint."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from pynpoint import Pypeline, Hdf5ReadingModule, PSFpreparationModule, PcaPsfSubtractionModule"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next, we download the preprocessed data (13 MB). The dataset is stored in an HDF5 database and contains 263 images of 80 by 80 pixels. The parallactic angles and pixel scale are stored as attributes of the dataset."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "('./betapic_naco_mp.hdf5', <http.client.HTTPMessage at 0x14120ce10>)"
+      ]
+     },
+     "execution_count": 3,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "urllib.request.urlretrieve('https://home.strw.leidenuniv.nl/~stolker/pynpoint/betapic_naco_mp.hdf5',\n",
+    "                           './betapic_naco_mp.hdf5')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Initiating the Pypeline"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We will now initiate PynPoint by creating an instance of the [Pypeline](https://pynpoint.readthedocs.io/en/latest/pynpoint.core.html?highlight=Pypeline#pynpoint.core.pypeline.Pypeline) class. The object requires the paths of the working folder, input folder and output folder. Here we simply use the current folder for all three of them."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "===============\n",
+      "PynPoint v0.9.0\n",
+      "===============\n",
+      "\n",
+      "Working folder: ./\n",
+      "Input folder: ./\n",
+      "Output folder: ./\n",
+      "\n",
+      "Database: ./PynPoint_database.hdf5\n",
+      "Configuration: ./PynPoint_config.ini\n",
+      "\n",
+      "Number of CPUs: 8\n",
+      "Number of threads: not set\n"
+     ]
+    },
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "/Users/tomasstolker/applications/pynpoint/pynpoint/core/pypeline.py:286: UserWarning: Configuration file not found. Creating PynPoint_config.ini with default values in the working place.\n",
+      "  warnings.warn('Configuration file not found. Creating PynPoint_config.ini with '\n"
+     ]
+    }
+   ],
+   "source": [
+    "pipeline = Pypeline(working_place_in='./',\n",
+    "                    input_place_in='./',\n",
+    "                    output_place_in='./')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "A configuration file with default values has been created in the working folder. Next, we will add three pipeline modules to the `Pypeline` object."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## PSF subtraction with PCA"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We start with the [Hdf5ReadingModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.readwrite.html#pynpoint.readwrite.hdf5reading.Hdf5ReadingModule) which will import the preprocessed data from the HDF5 file that was downloaded into the current database. The instance of the `Hdf5ReadingModule` class is added to the `Pypeline` with the [add_module](https://pynpoint.readthedocs.io/en/latest/pynpoint.core.html?highlight=add_module#pynpoint.core.pypeline.Pypeline.add_module) method. The dataset that we need to import has the tag *stack* so we specify this name as input and output in the dictionary of `tag_dictionary`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "module = Hdf5ReadingModule(name_in='read',\n",
+    "                           input_filename='betapic_naco_mp.hdf5',\n",
+    "                           input_dir=None,\n",
+    "                           tag_dictionary={'stack': 'stack'})\n",
+    "\n",
+    "pipeline.add_module(module)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next, we ise the [PSFpreparationModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.processing.html?highlight=psfprep#pynpoint.processing.psfpreparation.PSFpreparationModule) to mask the central (saturated) area of the PSF and also pixels beyond 1.1 arcseconds."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "module = PSFpreparationModule(name_in='prep',\n",
+    "                              image_in_tag='stack',\n",
+    "                              image_out_tag='prep',\n",
+    "                              mask_out_tag=None,\n",
+    "                              norm=False,\n",
+    "                              resize=None,\n",
+    "                              cent_size=0.15,\n",
+    "                              edge_size=1.1)\n",
+    "\n",
+    "pipeline.add_module(module)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The last pipeline module that we use is [PcaPsfSubtractionModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.processing.html?highlight=pcapsf#pynpoint.processing.psfsubtraction.PcaPsfSubtractionModule). This module will run the PSF subtraction with PCA. Here we chose to subtract 20 principal components and store the median-collapsed residuals at the database tag *residuals*."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "module = PcaPsfSubtractionModule(pca_numbers=[20, ],\n",
+    "                                 name_in='pca',\n",
+    "                                 images_in_tag='prep',\n",
+    "                                 reference_in_tag='prep',\n",
+    "                                 res_median_tag='residuals')\n",
+    "\n",
+    "pipeline.add_module(module)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can now run the three pipeline modules that were added toe the `Pypeline` with the [run](https://pynpoint.readthedocs.io/en/latest/pynpoint.core.html?highlight=Pypeline#pynpoint.core.pypeline.Pypeline.run) method."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "-----------------\n",
+      "Hdf5ReadingModule\n",
+      "-----------------\n",
+      "\n",
+      "Module name: read\n",
+      "Reading HDF5 file... [DONE]                      \n",
+      "Output port: stack (263, 80, 80)\n",
+      "\n",
+      "--------------------\n",
+      "PSFpreparationModule\n",
+      "--------------------\n",
+      "\n",
+      "Module name: prep\n",
+      "Input port: stack (263, 80, 80)\n",
+      "Preparing images for PSF subtraction... [DONE]                      \n",
+      "Output port: prep (263, 80, 80)\n",
+      "\n",
+      "-----------------------\n",
+      "PcaPsfSubtractionModule\n",
+      "-----------------------\n",
+      "\n",
+      "Module name: pca\n",
+      "Input port: prep (263, 80, 80)\n",
+      "Input parameters:\n",
+      "   - Post-processing type: ADI\n",
+      "   - Number of principal components: [20]\n",
+      "   - Subtract mean: True\n",
+      "   - Extra rotation (deg): 0.0\n",
+      "Constructing PSF model... [DONE]\n",
+      "Creating residuals. [DONE]\n",
+      "Output port: residuals (1, 80, 80)\n"
+     ]
+    }
+   ],
+   "source": [
+    "pipeline.run()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Accessing results in the database"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The `Pypeline` has [several methods](https://pynpoint.readthedocs.io/en/latest/pynpoint.core.html?highlight=Pypeline#pynpoint.core.pypeline.Pypeline) to access the datasets and attributes that are stored in the database. For example, we can use the [get_shape](https://pynpoint.readthedocs.io/en/latest/pynpoint.core.html?highlight=Pypeline#pynpoint.core.dataio.InputPort.get_shape) method to check the shape of the *residuals* dataset that was stored by the `PcaPsfSubtractionModule`. The dataset contains 1 image since we ran the PSF subtraction only with 20 principal components."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "(1, 80, 80)"
+      ]
+     },
+     "execution_count": 9,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "pipeline.get_shape('residuals')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next, we use the [get_data](https://pynpoint.readthedocs.io/en/latest/pynpoint.core.html?highlight=Pypeline#pynpoint.core.pypeline.Pypeline.get_data) method to read the median-collapsed residuals of the PSF subtraction."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "residuals = pipeline.get_data('residuals')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We will also extract the pixel scale, which is stored as the `PIXSCALE` attribute of the dataset, by using the [get_attribute](https://pynpoint.readthedocs.io/en/latest/pynpoint.core.html?highlight=Pypeline#pynpoint.core.pypeline.Pypeline.get_attribute) method."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Pixel scale = 27.0 mas\n"
+     ]
+    }
+   ],
+   "source": [
+    "pixscale = pipeline.get_attribute('residuals', 'PIXSCALE')\n",
+    "print(f'Pixel scale = {pixscale*1e3} mas')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Plotting the residuals"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, let's have a look at the residuals of the PSF subtraction. For simplicity, we define the image size in arcseconds."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "size = pixscale * residuals.shape[-1]/2."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "And plot the first image of the *residuals* dataset with `matplotlib`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAWIAAAEKCAYAAAAo+19NAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/Il7ecAAAACXBIWXMAAAsTAAALEwEAmpwYAADHyklEQVR4nOz9e7xl3VEWCj815lxr7+73DbmQGGMIt2NEuYmYD44fHrkIgt/vHFARBLxEDEb8RM/BG+GgXA8aFUX4ROUFIyDIXSSfRGO4iRwEEoRDuBwlRJCEACEJyft2995rzTnq/FH11Kgx19rde3fv7t7d767fr3vvvda8jDnmnDWqnnqqSlQVl3Ipl3Ipl3L/pNzvAVzKpVzKpTzZ5VIRX8qlXMql3Ge5VMSXcimXcin3WS4V8aVcyqVcyn2WS0V8KZdyKZdyn2W83wO4nzIePqLrpzzjfg/jUi7loZXN42/FdHRN7uQYH/3hj+hb3jqfatsf+8njV6rqx9zJ+e6HPKkV8fopz8Bv/yOfeb+HcSmX8tDK//2vvvSOj/GWt8740Ve+66m2HZ7zc8+84xPeB3lSK+JLOWfZR0kXQAugApQZkGn3+zraTyggNe0ngCiAuvs5+C+fXoA6CnSw7cuktp+2selg4wHaMeWSSn+hxR6BesvtHmS5VMSXcn6iu0pNB6AOYtGIWTFsTTlSmWoR1NG2kZoUZ1KyZTKlyu1VXKEOSVnDFOx8CNQ1ULaAXEccr0x2zHkE6kpsn6mNRXT3vJdyMUSh2OrpoIkHVS4V8aWcn7hVu+9zpYLz76n4dKn4XJlrUopCS5lWMvx73aM3fZud4/LwXAAACK3qZDHHNtizqMil9Xy/5GG3iC8Ua0JEXiYivyYiP3XC9yIiXy4irxORnxSRD0zfvVBEfs7/vfDejfrJJ+pwQx36f/NaMF2xf/MaqCv7XKqiTAoVYDoUTAdiVrLyO2DY2k8eP2CKJSSB9p3M9g+w82gRlC0wHgHDsVvBrsDrGj4ms8DNmk7amgtD8W1WEhCGOnxSR7HzXFrN91QUillP9+9BlYtmEX8NgH8E4OtO+P4PAni+//tgAP8EwAeLyDMAfB6AF8BeqR8TkZer6tvu+oifjBK4r3Sf1dGUryhQthKKtMz2gmgxZSZVzcrdImADrU3BAw0bth2zlapQsWPTSp4HQAfbQCagOPwhs+87AvPKTGl1JQwAKHbesLgB1GLXkJU9xI5vY7DxX8q9lbrX1Xp45EJZxKr6AwDeepNNPg7A16nJDwN4mog8B8BHA3iVqr7Vle+rADxwFJYLI1SCyeqkUuP3pggVovZvCTnEdrRS4vv2QjGIFxYmoYLi3xU0pV/E8WGJ48R50z/izMSGOyhB0jVlTJiYcwriddcB3y9dz6VVfO9EAczQU/27lYjIoYj8qIj8XyLy0yLyBXf/Cm4tF80ivpU8F8Avpb/f4J+d9PmOiMiLAbwYAFaPPv3ujPIBF6ke7FKFzAyUOaSwct3lFqRZuQCKosyCSliBAbbFcQHHgItBGUBTajq45VoAmT2QpoCUFHTjgsDFAAotftJYIHjChZJXv66NWdUR/CsAVr5dBYZjbUpcTXEPtIKT1a5IgcLsHDzcxtt9kXO0iI8BfISqPiEiKwA/KCL/1g27+yYPmiK+Y1HVxwA8BgBXn/W8y1dmKdpTv8qkGDYKLYo6FIcWHG5QhxqomGpThMUVKbCAG1KQjhBBDtzpaNuLiFuuGsovFLMftzg+rL5QAK783dKtY6/obR91uEFD6WtpYynHtohIB4c0j6COi3Ffyl0XBbA9J/xXre7vE/7nyv/d9zv5oCniNwJ4Xvr7XfyzNwL4sMXn33/PRvUAyj7XWlyBMWAl1RWcMGAlARdUkQhu8TNRBWZXfqW3RlUEZdKmPHOczJkQQYFIiwHQwwxU4AWC6i9nHS0ABzgNjko0cY3FcWqz1G18RdTGPrTr6ifEx66
ZisjUxcz9zs9Yg8nS9+mYO/zp85STIJmLI78TwIcCeB2s+NgHichTVPXx0x7gLIr4MwF8A6wI+xer6s/755+AMxRgv4jSIs5AuOLSFB+t44ypxbZhESEspnhxtCkcmYChqpc+lMAT55VAnJnR4Y7S9hMYdWzlgT+6kxk/jiQOAICwZLC5u4seAeoBMhX0OOSsoXTIchg2GpZpXXl2HlNvB6AcS2DcWgTTldEsp7XBIlbAyKyv7SOWXcaXmSJJCTXF1iLwUb/52JpwijK7kH45Ii08rLmEq3Z4pysUUQsy0TmZDu1AhDwA52Zv2z3LzIpI9khWaJccIgCLygNokAD5vpktQVaKzzspdIH35vrIFagrtfT3eMhsv9lhHqayAwi+e10Bm6cU25+0yuSpcZ676mron4tusec/cq990bV0a+xfKO5ELqgiFpHPBfA5sO5CLwHwHjA9+VoR+ZOq+h9Pc5ybKmIR+VMAvllVj1X1tQDef89mfxWGcj2wEtijC198ABEZ5su3E7SLl5Ivl1PA/KUNfJdYYO1fkrrSBIM0l5KWIRX8sFEMN6or76GlQzMhYAYGH4tI47fqkkYFa5/ECmLR3VmBsk3kfEptZSGnQ8ucC4iAi8VMReycY2nKV4fG/JiupNoJyZJkmcxQxskis0QEG8qwVazeMWHYVmwfGXH8tCGdpx2X2DytKAlF3N5m1uqVCocuvO/fxrR0wBgbdeaHtGMFo8FrPRfpgp2ED6I3X1oIpFpAk1S4eS2x8AX8kpJ3qndbLscGLYiSqYEWSKsOo6x93hmsoxUr6LBtbgugo8rxfmQruXHZEQHlXKkvd/eWqRkxnI9zkwuqiAF8OoD/RVX/vf/9X0TkfwTwfwD4bljy2y3lVhbxPwfwbwG8WURmAM9R1a55qBfieeClc11dCUSiAV8kKuLk9mGhcLm/LjCyzp2/icUQLwHhBtKMPPCW3UpadmFJxvmtSE93bo43u9VLSxHox+RudrAv8r5ekY0W4bLzdRQSWlDvMrRyEqzQfZzGZdZeAYqlUWeYIyvjOEbyXOBz2GpEG54uPncRYFVTZgXazUUkrHDO0BbwbnFWeE5PU8JY/KuDWaldjeDlMxFjRj+vi3mpAyDZWmf1uBndPHdBZS4mSN/lewtErWj1xd4CleZpZZiFiRzcV9N+5yW8RxdU3l9Vfz1/4LXZXyIirzhhnx25lSJ+M4ye9nKEangIpTSGhMzNAmPGmcxmjQVemZUhqKBbEojMnlOQXiBaCGHtJIsF6WVoASmr+gbno9bROLBhmUbmFbObtHshqKDZUTjG7Bc1eNIFa8wC7drihYZXK/M1vQ78zgOHdEWLsSE4aSoI/nDGgwFEQSXKcrHKfdfaRmhjeZptwFoMVMCRopy6MMfu9FyqnUSldb3Q6nWUr9ugNo8UbB8R1Oodqj3gOV0xa79svLoe7D4sg7XhxcDmJ7Btb1mkhftJUOXi2v2Zss/acxbzkyzMqCbnz0LxQk0M+HWLkN0WVJjyL1sNLnxwh9NcNb5540nHPFZjmQRdTfuxBsR3zrDERcWIl0p48d0PnPY4t1LE/xTAvxaJdf1XFiUw80nP0xG5p9K5YnN6qaiIs9XoSlNF4mG3KmqN0E7FCNb+BcLlzEV+omQEjclQxu7uJcCncT29XftADd5e5v6i8vHQrFQqxMBBW8pvhkT48Nd1cmOT+1q27WXkmDlnoRBzsgKSJUYlsVzaBSdaUrxH81paTV/QYkspyKVdx06QqgI1UsnbMWQGhhsVosB0WHYKARGDZm0M3q86tnRozkeWHYjF963OLjAsW4IG1gcBaZHbydgzcXl/Y1HfpCScHIzkPBWORRscBHT1kW3MzZtaYr308qQKBuLGvphEXCEFKs9VLqgiTs2V94qqfuxpjnNTRayqny8i3wrrePGvAPxZAL9xyjE+kKJDxEY63HFmHd7EFFAs3HEG4Aog6a0JLFUV6pYWA2U7q70r1grZTwFaKq7
BuzEodh7W2JfK3V+SaFmUoA9z/7J5aj+Kg86BlSsseJM4vLwGZnTFpVdYXWfPRIO2wkNdHdsExZCiFwp+SIpW0FHH6EYb1gqgOPc3tWjKipDBVSaSsJ9bV77zqGJ1zWwKuuOA1Ra2Vkd94EqSN7JMfmCCA3/ndyX9bnWOpWsIGscSp8It72cF4O2xsDR/qL8XXOJYmHlvc8A5tYHqMHef5/QYxR8WnGzPi3UXb/c6UyzPRS6oIkZrrkxZwVgUz4PpzFPJaYN1Py0iXwDgG1X1+llHeisRkWcA+GYYFe4XAHyiqr5tsc0HwIjS7wRDwL5YVb/Zv/saGH3k7b75n1bVnzjTGLQpCEbI5wP06b/wGrM3/CErrWNGi3wnqyDzkMM112h/1OGuQLxEsoVVygqusLuGTGdOlnNdqRVlSRItbcKykVBqWgA5Vqwfb7zXzr324cai4hZbp/xmYNgC7NRLi2i6UrB5J1tAVk8YX3deNUbDcKQYvQf4dGiuvnCei/Nqr2swH+gBTIdobZXU5yEV3NERmK/w3hnLYdgAq6PW9ogLatm2YBlrTJdNxXA0Qapi/XZL+NBigUmmCq8fN20eno3f34HFhbosOA1WBRc8cncZeC2SeOIJ4orFYuI+2gU3AQSTBsW6LdekYKPYUX4gHDqrQOeRqfhCkRhBtLDpvSmt83SNEN4zicUIIKvGz7wRDOdYge0CQxOfuu9zEfn7AN5x2uOUW3z/z2GKDwA+F8Cjpz3wGeUlAL5HVZ8Pq238kj3bXAfwp1T1fQB8DIB/KCJPS9//NVX9AP/3E7c7kNytNipyDW6JjSnbLiyt3ipoNVzhvwt2lG1KX+0kwwL7rGHfJgeAdtKPc2Csu672ebA0cootIYdlWm1Nny+VTQ7yAQFJVGK0uufckQrbovHcJluZdryWpdUF4jI8tjjHMqiI5bV016ExD/a7QrYVZVN3KGaR+p6t7IqGA/P4yTq2OdJQjt0zwHuYXX/OERkeSy9nAfPsHOsWkqGSnQBdGsuJHO983vTcdZDa0rg4L9FT/rs48pUA/sJpN74owbqPA/Bh/vvXAvh+WMH5EFX9r+n3XxaRX4N1B/mNcxkBH0Z/O0QBbBbBBzWLeDxCe4nQFBCrfln9BGlKOivWSrqQlalkJa94qLN1lR4uLfDWSYgsKRV0mGhcygBUTa2IXGkyW0omLwrjhWvmtVGthuM2NirZIdXdCEuQ15ACTRAxrvRR+r62wkCBpztWWdyiZsUwWXmyxiiQ0rBOSuCvaT4YlSjHwFoB0ud0tGtkicZyQ6OkqfGiNeah524LMAhQBFDFcFQdSmm4dHUlu8Shs2LMdSoiyy0tKplpEQtybfe8LBJoYu4Xno+od88gZOSWLztE53EFVDOj96hIM9O2vW2rO4V/yBqyASF657bGpGoZj2jzei7CRe/Bkvc6y8YXJVj37FTT+FcAPPtmG4vIBwFYw/rlUb7YydXfA+AlJ7W5FpEXw7MCV48+3T7Lz4xbNdYqSXYUwHAjtY4B7EEceivNHhrdcftzSyGpgKqZJsEfHbMSQ2dtVyf6MxGheH+06VBah+e8MCwi4S3lV6OehhbrzDxdAWQWrK6ZW14mC/wUBh8nb/ezNiWSa9hmK7xMiLlp14CgOZm3YYqDNRSCLzu1ynFcCBsLRWMMXVCQRYeOFatrfj1X2jxxcV1dBw7fOkcALBbJPL7B/PBgCCgw3Jghk7n/88FgCSOTfdnBBZKSS+jKg8wO6SxL1qjuFl3/GUHXpNQBhKfVdewolua9uqYYb1RLsllLLEbTVaTx2MK3esJ5wIOE58cKcvGsKFA2wPqY19pbxhHsTQlM9YALdEtnfxIF6758+RGA5wD4g7AGoqeSexasE5HvhldvW8jnLM6pIiffRhF5DoB/AeCFqmEzfTZMga8BPAazpr9w3/6q+phvg6vPet7+86g/w3XhZSU3eO9yJOnlPOH7/G8Zle6DX7qzL1/eTIvrXM00Nk3
7dedeSnLX87g7Wlnpx0pFQHe2g1+o2FxRdkyAaFKZrv8msoy+m0exmBdN8waeQ7p50ALD+udWkY71lM1CLDE+LQI4xktoARMgg5nBEiaixBgh8EI9bZHgz85yzgvJnnluH6afC9iCir/bz4OpetKcdveuXxhOetPUHKbO20prffs9n09PPt6dykXFiAG83+LvCkMSPhPnpYgBQFV/GsAdB+tU9SNP+k5EflVEnqOqb3JF+2snbPdOAL4LwOeo6g+nY9OaPhaRfw7L9rst4Q1XeBCKqp4PnFtd5oLay5qziyLgRh/ClVJuN8SgTShXoCu0wopewRgg9rpqrwGj+XAXG1Q2oOKUdnyBF+fxbsW0xGbgYKMR4szUNlpq8xqYDvpQgowSabE2Zxrn5XHmQ2Cil+BubKZ3zSsB1ojgaE6W4DxROWZmBjHaDE2Qj1vFFG09SMdSa+VUR9t4PvDqbLNZ0jI1S77MivF6xeraBBA3Jo482cDKdoYcGferXl2hXhlRB3MLeC3TQenoY/RcCAFZMLVXnJJ+J9ZqeLunbisg23ZMzuu8oqeR7rVDBHkhk0p+Ohc4BWbjRXeFqdyYsAVLg5LG80UwMCnGHUv+bsgFVcSq+uHncZyztEr6AgAQkfcE8N6wqflZVX39OYzj5QBeCOCl/vM7lxuIyBpW8e3rVPXbFt9RiQuAPwTgp+50QJkeBWSLpikcbAGZU4YX+9glJUwJjqW6QkzWNQCM3nMuzg1ui5aw4IrHCsA33LfMRrnTdK7GHXbrdQAAVwRUtNUU0HBsru10aIqsBZmAOhbMV3JdCq9HkZI7mJpNS82gEYnqbKxhkK+Zgc+MU0agjnNftbeEqmHWVBSztHNS6e5kwCkwFWLiVpltPlTIJF7E3VzxurJ7WbYWsLPC9LyXijJVQBVy/Rjy+DW73Gc8FSgC8aYA1VtBs74Gg24qpoTnK4BM0pWY3AluCRdgtFRlN+0l8cVjLgdg3kdhI7MiHZs4c1Omfi+9hGc2Juzcqd4FeP+lnd/vERNJTvS67lQW79LDKGfp0PEUmKn98WgsRRGRbwfworNUGtojLwXwLSLyIgC/COAT/eAvAPDpqvpp/tnvA/DO3h0EaDS1bxCRZ8Eeg5+A5X/fueRgER9Qp/YEpYsbaHvAw2KgdSee1eRc2hwxX0IAdOvtXLupxQBaYZoEA2TJHTwiAUH5ORoTQeyFL0OzqOxF9L+RrLbsWjvWKmjHAZC4wX7uBQOD2zDAE7gqkiL1j6Sm4vzJVa9DqtfLufZC+kWAshEUeh9Z8ZPFEopFo6MJANSt6by6EsyHI8pcUW5MkO0M1Ao5nuznNhWFnitkqi0+kNz/5u3Y72VW6FbCApdZAd5fTdvGPUQo4Hzv8zzkwOAuc0L7/Rdf7ZX07PJ+FKSX/aT97obyXRz+IkETt0riyHIuCR0L+XJY0Z8PB/BD/tmHwAJ6/xDAi85wrE5U9S2wGsfLz18Da8cEVf16AF9/wv4fcbvn3jkWlY4CxWlbHUXMgy3Rs21qDy6VbwSpUjAHK3hwLj1U2p+XTR8Di3aLyJSOosxiVuFxK8TTumAg3r2gpiEpOIcueH0z3XfXRFoMLtDRFx1fNDQHiEZeowCT2hsawTqJziS01Gm90srV0VkNBZax55lqZYMWGPIFYGABGaClLSfrL9PQho21XtLRau0Ox9Lxwesawded14q6UoBdVNTaG0HNOt5MNlHDseLwxmQW8GYLffwJ6GYLjCOwXkFKAWaDKYoq8JSVwSJDs2gzy4D3TGarQSyzRvowKZLw8ZTJeuzN7sEsa1KEt1EBbCS4yoS2woqlIk7W6t6sQ38W8zPcUvr5PCTDYM87c5d18YVSxNhN4rhjOYsi/lgAf2hR1u37nYXwHbgDRXyhRJIFSSuSBW1oGjjE0LlitFbTA0MamEXYpe2/76HKL0iydDteMHmqcyP8G5i9GAcDcLQW/YWailmTtSAw3jooyqKKVlhoms5Pqzc
Zf/w7svayAvCklWjMqg6fkJvt0AQzxSPBhZg5F7IkmTVRdNmhw65z2NgkmLWtYTlmLnLUs/DMsDoDOgrUcdR53TpoyDQD2wl64wj16BjlyiFkHIECyFyBebZUyriH0miF2cL0LL7GmjGcPzLXfG4yRt5l6i08py41mc8Mnw9plnb2vrQkI/kkzclnvgscY4eH3u6JFzBafH7ucoEU8UlJHHciZ1HEV7B/JXgrgMPzGc4FkX1YF61N52MCCNd+h03gCrJsPbuISmVOCiEbOQsLOQyZAa1VEsdUjVrWJZ0w/TcFr2I8Ip2VFMdi4KW08ogRrEF7CcsGWNXUaHTY/8JJBTBZKU5eBxVtTS5y2WoHXZTZzpGL3wBuJc90r1vgqRynrDXnA+daGxZcc/giKSoqk7pW1Cuzfe5dlbfrASoFZSvWyn4tGI4U699YY3h8BanEVVzxThNUqynkaYaUguHG7EHYimFVUKtjxev2rOhE6l9b5MVpkjWgDHTzYAFORVZEMhvVmckkfGi0AEyz24F1/BmM+0uvq/gf6R5l2iHUFhSbc231JuZ2vOCy3025QIoYAETk/QH8VGJu3ZGcRRH/nwC+yIsdX/fBPALgC9CgigdeqLwguqss/QGMrrjSXrTYP7lvo6qlAntUX1W8w3ILvPHYBSmlGX7s0dgHNd0lqcB83KqeMTjVFSGa29hpOTGRgKBfBN0KoOumtHjtZAsMUdEL2F71tGRdBNH894E90XzsOkqfIuswBGDzwvKKy3q4ALwGhCsKT6ctWyuOz2QRVo0jrMH5KTOt3V2fuR5WrN5pgzJUHKwnjMOMazcOcHxwAEwF07WC7aOC8brgyltHrN62RpkrovP1NAHHx8AwAMMAKQJVxXB99MDVYJzeFbAZxRg2sM11azjxeNSy7+waDX7oLF8uPBOiiE9m2GRWSY8zN08onomsjDkRTOwY0HjoWREv7gc/58/h2HvprYwRInsgi3MTLgoXS34cxhf+NQAQke8C8GmJwXUmOWuHjlcCeKOI/KR/9n6w1OOPvp2TPwgSgRi+13w4M84madts2ZT++07SZ5kDSm5udieXUXUqzJ10Zl1s52OI7bLFS2snHT8npdDlpNIGWISoWa7dsLQp25Osoww3EFFpzAuEYqFS2MeLzTDEUlFEMDRl9Gk6Nk9ahopxrLiy3mI9zJjmAdvDEXWrmCuAOtiCdyCohyPKZoQMA1AGhMap1YKW5BpXmOmY4Ajeh7i/Zff3k+5dfu7iGsUfCzbt3LNfZ9kyYegm90Nvcc860XYPWwr33TaF27nPS0TkYwB8GWwJ+mpVfentHGbx9++DoQa3JWehr/2UiDwfwB8H8Nv9438B4BtU9cbtDuCiCZMGRGHBpZKtNq/AxW1TMM4CL8Zl5S2qDi3w96xYpSJa1phScSaGtuNalwfbJ9d6UBHMB/YyspiM1UJo46HinQ+83mwXRNSuDCL5yYHljoBeAUQFw5Fny4FWmbTjS1OMcV6mHW96pR1zm36nkiS9LWh6kq7BhRaY7ezvJbtRJDd62FTjLW8FMhfooNheEchkjUWhwMF6wsFqwtMOb+DR1THe6eAIT1w9wFQLrh2vceN4hc31Nd5x/QBaruDg7WtcrRXjO66ZRTxNgAjk8AB6sALWK8yPrDBfHTGvi825F8+ROelMsYpl05WCee33L3VA5rUR0mm8aY3t+H1ARH5PM1e9bIEhrFfGKVqBIT7P0N7jYTNVEE4CFl5KWgSlcZzvhZxXirOIDLC2Rh8F4A0AXi0iL1fVnzmfM9yenMUihkMSX3WXxnIhRCoIS0Yku0zwAu0GIaiaJmCx70hxHpv1aEpJ7IVDUmIUt850tIdeKlBr316H2wVf2EtEAk3Bh2KbES7vvJbEEoAXDreaDC3N2vadxRU1X/rqypCQS1WMx+1cgU07A6NsW4smKhSpwLhtEA5rVHQcbM61j6VMamNJBfRZznK4oZGeTUs5lDFo2bEusBXt0eO
GpZdpcMzTruPqwQZXV1s88/AanrYyG6I8am/6tekAj08HePONR/GLb38OpBZsrwpWTzyKcTVAjraQa253rIxBUQ9MCW8fGTyhI7UPSiUvuVBZR2mu1n4R2UJ2RVumDPekRW3tPQm1GQjzyrsxF1/QJ4eQfI7nVVt0y1ahrMMxNQhos7JEHVUE3zgH7MhB5/O8t6jVXZJzhCY+CMDrmP8gIt8Eq3VzVkWcgaT82W3JmRTxk00iou3T2wU+UgoyAxuaXeD4Mv10tzEw08kbhwJekEV3HriwBHTP8boNmxWcOcEBB9T2E0gKkZAIz1EBz/i17QaviRvW9+LcdKOL7n0Md9zvfduURsMLqKJoZKnIcj6lWYcAry3fDxtL2VqiSiyOMAW4mQasSkUR+3dl2OIpwxEKFG+RR1AheGLcQg8qpqsFZSPYXh0hmwMMIsakUIUerKCrARhL5ynk+hv52qOims9L0MX4L8NJe5TbMtW83aTFcRZznJ/bZZ3iDJXENmkOGVDdgUhcCe+js527nPDcnCDPFJHXpL8f87IGlOcC+KX09xsAfPBtjEoAfL2IsKbNIYCvEpEu8/hu8IifVJLdPmaS6SA7NYZZFpMlEucqjYal7QFndwwJlgCgm1afl1YhXUgrGK4YjqS9ePmlSYEbwgnzgSmuum4Ws0zAamvXEkXNYdsoIRUvZlMma6Fj7ZW8KMyhtIagqVKXVONZA7490txIU66cp4AnlpS0Amy9s3M0slSztKvTy1oWmqBCjYI3tBY/1uSz6TEVgWwrBk9Lng8OG13uWPCOx69ie/UY9SkFjw7HeNeDt+ADr/wCDmXGfz56V/zEtXfFrIJHftM1XBuvYPuUEcPxiIOnD1i/4wCHv77yynHFss9WpXGWR9axTouFoiXhpPkgB7pRz+zzgDbS4lvHxG4Z0ClOcrXLFkDpO4VUT4Xn/LJGsMZi24ruWwykcb85vuGYgV02DPV2T8xgPEf89kQ5/Tl+XVVfcBdHQvnaxd97cxxOK5eK+CThA157iyGsGIq//UEXKqaMuqLsQGfBRcHwJMTfcrddaKJ1DS0TLdxebePMfeMiHdoVGhVwTqyIwt8k63ORmHq6WV0hqn7xOnJwjWPLc8Z5WL48+15YhbNIRqBs1HvweXUzpt4mq129RCXbDdl8tgFnPLocTYAqZD6wnf2Y0/GAzWiP/qrMeMb4BN5rNeGqrPHm+S143fBsbMYRz3jkOlSB67iC46evYzEdjlbR6dsSWVIHkWJ/E+7JAa4oUp/ni1ARCxBxAUs0wXbc3tKWrDgZONO8X5s/SEtOylh+BSL+GBz2pIjLtsUI5gMAo/Xbq17wP1LY76IyFpyrsn8jrHsG5V38szPJeXOJLxXxSZJcfaA9/BnnZCYVKVNSteMVZwgjGpJWtyxLX+pxLoDQEvUAHIv/BGOj9j+XLltY3/6yxwuaSi3ayYBSPDCo7eWVpDSyVVXQXmDhebM7y2tNMEVOKOis4D3KuUzWGqp0xWeIl7T9M2ZZ8uI4N2U0Hs0Yrls6Mtwiti4cRnooW2DeFkzTgOvTCtfnNR6fr+Ct84zjcgRgjWeM11BV8Oj6GE+s1rixrtG7bzoUbJ8yugekBn/wur28JgvpBH2Mis2V2yyIYFxMS4KV8qJXR/dysvLOz6Ok4jx7IIa80O6rXd2l4ufnKSlsBnNzjCOU9bJK4F2Sc6xv/GoAzxeR94Ap4E8C8CnndfDblbPUmngfAIOq/uTi8/cHMN3vqONdEWZH5ReDvwiwfcQi8SzULlNr7EnLkfAG+6QZN1gCDmChH1OeEi9vjm6LlTsAgrbkbW+WuJ0HxWjFygysPKU2W0q1Nne1jrYAxFhny8waPKCTI+M5sYQdlHNQMd7JKqhqEILMHsxLtYABoK/i1dJ0TWEkTeHYarMqvejP3K592DrveKMY336M8htPAKVAxwEYCsbrM9ZPDChTweYJwXx1wFwUbzu+ireuH8Ebhmfg51ZPxzu
VI8xa8G7rX8dTh+v41SvvhON5xPF2xObqgRd4EuhQUCbg4O3VWyg1JVgmYP2EW5Cexg21e2DcbyrP3Xu3xJZ1AKZc4J0wWWm6VAegEl5g5mBJVm6y1Ktj/eZp2c+BTQAAi3Fkg0MApBZVnQKfLXAdY7ubcjaM+OaHUp1E5DNgVNwBwMu8wuR9lbNYxI/BaB8/ufj8vQF8BoDfe16DulCS8E3+HVabJ1OImMIaPANtaXUEpjprYK919PiWQwi5ulp3TreqRVM6bHopgfQi5HNnq7wutmc5TLQXfGkVh4vKcL+kcySFrDxGPneBK1NN1izfZZpq6bMEcwBofOosaUyhnHkNkTCiVqTneGM1IaKjSbWFZfCsuwnAVLCdB9yYV7he13i8GgW0ouCwbHCoKxwME1ZlxjBUq2PhkMN8YDQ0Uu2WPPDGLNDAaGmxM9Mx7mO+xDSv+TmDAJjRK+NkHLCEZs6M7ANxbpmnBT7X2s7HC0kKmUWu8niXadd3W85T2avqKwC84vyOeOdyFkX8/gB+dM/nr8ZuceSHTjprkC9dtWIuxMlQ3TgKCCEFxxIuS+wt46ksJgQgXoKqgrpWKEttpu9ypl/XkgmAzOLj0ujmkQuZ56ypslGMTpuLNjpJjG9qBw5OsqCVWUxj6aL3hBfYJdkDkeS6Dmxwmqxr0tZ2FHxhMXermNa5z34+4yK7ohtHQDVoZuNqxMGqYLgyYvWMgukRAaTgLY8/AgDY1AFPGY7w1OEGZgiqFrx9voLXP/7O+OV3vBOuXz+wINpaIds2QebdGGOi1RtumCnvN2A4OK8tOOWDKUR3CoIrHGVBZ2vLtVyo6ohomxSFkJYGg8+fQWIOFVUJqCTaOSkzMz2AO/SMlLJV6AYBYXHe7wlbIss9VPr3Q86iiGcAT93z+dPR238PnySrouPBzorRCfHBwYVZnB2GK4hKWzmC3qwYr1xG19HFlKZEwsQyrZh1FrrMMbTAj1TFeJzaDEVSicRiEVXB3KLkefmTXFZ2jNZB2kKi/Xkz7YrKhoHJYaMdNj5sKoYjr+PrSqquJBqPCtpxu0CYu99laosZCyFJtTFgNQLHG9R3PA7MM4oqVrViuLrGwTs/BdOjBVIFR287xK9NBTe2I9ZlxiPjBtUv4O3bQ/zS256G679xBZgEpTSGCee/joLp0ArCT1dNEZdJgWNPxElshPmgWcxdNmBiNUQyhcNbw6xYXddoi0XKmCgwwz2rFaKA074mpYS2bM5aN+7ovDy2hZBBU3psgLVjGqRf7AHO++6rcrfkAqY4n6ucRRH/BwCfIyKfoKozAIjICGt19AN3Y3AXTpaKcMkeoLWasLbYnMEnTfsjueuh/dpDFxazWIuenWdx8UHeTzL2lwJfbeNdCCC2S1Z3gypsjB0nWdMLCkBrqjmQ3O8d5SP9z73Xs7zY5Kqbhaz9Nal3S3ZsWOYBMgzQebbyo9MMmao1D/XAnWwKps2Ao80Kv7G5gimtstenNbbbAZikpZlJ8yy6MWVoIOG8+fq6a5YGByw/z/O/+zm9msUYlnO2b+78c0F/LzpvJvHMO7hCAIXDE/kZvpdyqYhD/jqAHwTwOhH5Qf/s9wJ4FJZn/fCKW55LEn5YYeklFLUMNgAtgAJ3BRMTYckIYKTd3Mm+lTvgGHJqt1S2SQFKGtPCfZwOk4L3sdeVcZUhaEp1n7vpLyP5wAFz5GvIL0jR7uUFzGploEhT4G97pUAO0LvUgqCFtWuTXskxqKVmQXLIpANOj64hV1YoRxPKakTZWjoyphm4scHhWwzA3T5SoMOA7bWCG4+u8LrNiNV6wjBUrIYZ23nA9toacjT0jJO0GIDUsdI+7xNTtEuGyNmHHDtjBF2yjHtNOgDHTxWDBWpPkczbxrncK8vQAu9zpqRlNlBmyIzXbQ7ryiz8OjRaJZCerXutFBcL+cMoZ6k18V+cIfEZAD7AP/4GAP9YVX/5TgYhIs8A8M0A3h3ALwD4RFV9257tZgC
v9T//O7NWnIryTQDeGcCPAfiTqrq5kzF15+XDDrQXIsECdOtQYKUgvag2q6xlilCjZ/UvwaAK9RfJ2tprXwYTiLoUcrzghJZmnecgSijADpfWqHGhIm2BoEss0r3UxDQNTtA4byicbqKSMg+YxLcfWiq4FgDuAmcllal2LfBnVmhnbdIqHrTV/HVlMR9a9bPhYMBYAJmqpSUfWYH31duOUOY1to+OmK6MKFvBdFyw0UNsRwUOZgwHM3QWyI0BwzGhg8UCFAupBQ3jXoxNT5WNWAF9pUIzhTuTppgLHCXYi/eQ/feq0+4IIeX5te1Z5xjxPDaYw1PFndlDqCy3P2IB/+HYYJD50DR1xCw0jes+iNzHc98rOWutiTdh0XX5nOQlAL5HVV8qIi/xvz9rz3Y3VPUD9nz+dwB8qap+k4j8U1iR+n9yF8a5Y3kGhxRoCjC5kwFX8HsqHh4w4arckIGr4G0qX7a2YceqyPuChlKCQkIxI5Sa7afpc/TQBJWq7nkJpOGKQLNWdixH/hvSebP1vsfKyYue8tjaH19Is/PPLWg2dHNvmL67KOMAXa/MMh4keL3DMTDesG3LjQJdKSrccZkFZXKsFwDrVAQf2BV/cMm1KcFo0MniPBWxYLSEE0ULtsKK9afHKM95PGtuhcc9IcTh21UvDpefP+tCLa1m8+K57Hnxdp/I9OiCr+ne3xe5R3zl+yVnUsQi8n4A/hyA94T1qXuTiPwhAL+oqj9+B+P4OAAf5r9/LYDvx35FvG9MAuAj0EjZXwvg83GXFHFWIFlxBV6MRoAPZeXvDxXJcGQvyHxg9WoZSOFLbzxgf/G5X7KeuzoXGaOmNbpHeWnxzCjXcHRzq6dudy+oeGNOpAxAdSuuGm1ruirecBMtMHhkfFkVmwNCM1teS/Imhm1jUsw54BQ0Lx9qoUVq362uaafwAGB7tWA6LCizBQVtLktci15do76T9y5wZVY2FYdvq1hdE2weFZRZMK8E8xXBfGgTWTbSqGg+H8Oxj8G9iOmweQZlq8aOWCWF69luZdKAMKiAmbUmCmyLQA9aGjxvYDRThSt4SMeUCchjMH56HL/SygWGbQXTk6OVE/sFSmvFJAeyM7fMtozn+R5VW1vKw24Rn5qEIiJ/AEZVey6svxxrb/4PAD7vDsfx7FRQ+VcAPPuE7Q5F5DUi8sO+AAAGR/yGqjKT/w0+xr0iIi/2Y7xmOrp22wMOZZgs06wQmZNvlCTprBAWPQ9sMFu/ySKOu5MtaT6Q2SL2fzloFJZtHpdnSalTmHLlL6T98vlzplceXx0R2WbqReABsxSpbMMqHhvjwPBsb/bpyrjjQ6e6xlTKPC+z8Jqyta/qyiqaba+YMq0D630YXFTHgno4oB4OMc4yV0/0mLG+phivm3U8XhcMR/avbK18psymqMvsXZ+9Hx/P3XW05n2gwoufqaYD2uJK2h3hiR2LMx8387cXzx1pbzHPToUrszURCGw/IB8+p0xCgnOkEQtJi4W0Md4X0TP8e0DlLBbxFwH4y6r6j0Xk8fT59wP4K7faWUS+G8Bv3vNVB3WoqoqcuP69m6q+UUTeE8D3ishrAbz9VKNvx38MlpyCq8963tlvHRUV+g7LHRxR2oMcLmGybONQ+QFKUEBJQZVITR2SNVIlYAXCEd0Qk/XcKVdWyxocn+T2McYezohOHLE/wkMsMwAvJkSLOAf26CUssxNtTI1PHP3uYiEgI0L6RSUWEyv8k2Ebo3EJpGg0J4UUlM0KZRoday1miR9XFFQ/rzRrcFIMoOXeWjPFmNP4pyuwOhhAYxnQM0oZf2XyeaqNUhiL04KO1gU+ndLYTu7nCK9qUUBoocBbMM7qj8ykTvqxpNr1St5e2jFyYA9o9/Sec4eTXAbrmrwv9mejvBXAM261s6p+5EnficivishzHOqI9iN7jvFG//l6Efl+AL8LwLcDeJqIjG4V31YRj9NKrnyV8/jp3ndkeC9iYxvbj46cnzBCO7Z
EmvGwtaysuqaFmpIhkrvaWb5o5xi26kkIrbFnJAokBVC2qUh84j9LXiC4wDCdVny/ud82gpY4YdHhnKknOBDCIb9ZXQmrNCsyBTYBK8kpXn8halMMZsmJ71cmTz4ZVh70bDju6gnBeN3ndJSOEaKTYjhq80o+L5DmoADbp9j5xxu2faZ6laIoWx/H1p6BTHNkoLIOtnDo0Ba3joEjizlMi6NZ5Ai4KWAGwInsMCtXrcVe1tKiAGbrks1rrKzqxmvwKnystc1KffcTHnjYFfFZ1ri3Yr/L/4EwOOBO5OUAXui/vxDAdy43EJGni8iB//5MAB8C4GfUTLvvA/BHb7b/uUq2fvf8vo9eFLvueZhzimxso7vfnypQcsLxl+PuAonBCU4/84KxtPp9n+5fxjEZMMqQw3I8nVWY/qWxZUtvScvbkbQPYQB2ZJ5XLagW3Y9piSemApkHhB8Ka0QzQYbV6yIgl7o9L+9xnkv+nqALAAEPdEyTxZznz7v6HHu2zayZjgO8DAjv8c52pjMdv7s390MUtqqc5t8DKmexiP8lgL8nIp8Im5pRRD4UwJcA+Od3OI6XAvgWEXkRgF8E8IkAICIvAPDpqvppAH4HgK8Uifypl6ZCQ58F4JtE5P+ANfX7Z3c4nlNJtmbtyW4WRigierBujSpfcHUL86BZiZmJIemFI284WBNqisVqK7QXMOCIAdg8uuvaxgs983hovFLAahLzJZZ+17CIaTFz28QSQOo2MhybO17hSR60rvnEJaoVy3LSGpbq85Wezh2utPTZe+O1/iWsg9U55q2x6xbnUBfoIJgPSguoZsW0WBgBBFZdVBp9cdLmHS0X4QVcwFoj1dksOgBVgelAQjnTi6HXEXzxpfKlZQ3jn8c2CTOnZ8JFhWPg9XHuCHVkRspeBbzH87qX8rAH686iiP8GgK+BKUqBtRYRmIL+4jsZhKq+BRYAXH7+GgCf5r//EE6oaeFtTz7oTsZwZqFCSzCBqEabnOxrkEXAIMh4w3YwVx7BCJBqyq0LxNV2LrD1ULAqJJIGDM6w428eFWwfocud8EQ0hW095dpLFtbgrC2wlJUMEAqkTK4AZmNQzI5d19HgkzI759UrdIV7P7TU2eIMCkWbGxTvpsFFKtgWqWVPClzRupUJWB3Z79ESaBTMV3xuaeXO6mnIFtCbvAD7DlbfeQ2p5ZACSEHCsNqRfqa5C4gBbX5Jd7RFjVXYNFph2XVpJHEIcWa054wKP3O7iRuLKrZXSmw7bJOiDcwXVlfYFSv5x3xWgr8eAVu/D/czGnapiE1UdQvgj4vI58Kw2QLgx1X15+7W4B4YCde9x9rsQ76I7Cyszb12i0TRXMVlgZc4/tKqzYyCsJjIQJC2zwKGOMm9vOn3rjzatu288ZN4uWOe+9zZjnMt2hTewg2mpbwPiulgkPz9AtqRZYSQYyB0cTNQLlvEAigEbAWVPYgOJsn7LBT7chjkhMc9XMIRseBoXGeGQOKwHYvGTrxMf1axrxQI4yA3Is0wRbBvLpjSE1xaxDuiqj8P4OfvwlgeLFm4osxgy80UI9tJNbpIqFgHjnndMqbCYvNuxrlbca77S4smGmn6OeKFYrSeeKaYhTZTKbLK1ggUD+Z0zIDBTpoVaO5EHS++oDEq0v6ktOkIyJQWpcyKSOci0sLrtuvl5LXrBpolP2xbx2cZWiByPuAgEEyO4gEtm1PTYFaoB3GP2ES1w6eDadJSsmvUUW7z3OH4KXBbtpwjh5BKf1/LJt1rcnTpLYh7CizAxKSQuXVcjkJPnEfvy8dEjKiSF2OTeJaysgcata27roUhkZk/90VUm9HxkMqpFLGIXIHVmvh4WDKHAng9gG8F8PdV9cZdG+EFlkwtY+WqHKQhLgh3Idltd/uItEpkXk5y2Bh7IVu6QXMaxABFNOW+7JDM7/iTllN2ncPKLO3ljfGm68kvYigjATAByBbhwvIkf1ocPuH7u8Pw4Hikt8Z0AJTJENJ2LWl
OWc2tJhbCNBp/mIyCSHyZ1ah+qQAS06zJSbZjpBTl/DPNWR5vF9is6BaboH457ENOb/GuJbEoi7MTHL/VQ2PI2LhdGZemPFv2okERTIaZ2KaptMAhny0IjKbo4yeDJvjL7oEN28Uz3T3k7f7dV6v04dbDt1bEXmHte2HsiH8H4Ltgr8l7A/hcAH9QRD40JVQ8eSRBBqKwYtvxIvaFe+oAgKUeSYvasTwQL0fndvr3wiSMfQ8lLR74y1TdJU3sDVqdXd2LdBl5+44hkRcXvuyDAKr9QpDgFbIPYp7ARSFZ8ktFrW3cy/nl/uTELseXE2wiWKYIXDMCcnlKvSzpvmuO8aSFilZu3HNmFbrCZa3pJf+4/7udgPUyOO4MewR0lTye5X3vLPjF92RYNIhIOlZFntNow5Q+y/BLx/S4T3IJTQAvBvBbAXzgsqWIiLwvjDr2Z3G3UoovsiR3WyZFqVaPdjheWLYFSQGzw0NTWlSS7OhrAUAqcokXkgG2Zet4FUmcW/s5eA3bKEYPWyhsH/uZy2sKEG54ByMIun57PHdd7/JPrfUTosDMeOR1kOkaD2hMCF57UoCijTnQYe0+QB0QQciytXTqpWVqC15aNBRtHjS1BkpWbgT+Yk7beaPiWe11tGjjiROaihoNeSHJ/xZSU0CS8yeS4Chf0NoireneIeaUC3BWVgwMEqYQD6DS0g5IbQR2tGxewJdzcj9EER7hwyqn4RH/UQBfvK+vk6r+FIC/DeATzntgD4xIs3pYPYwdKbLVxxemjrtRdaC3JkMJdxZOU4R7Xwpp/NmMGUexcG0WHEn/HQSRFFPDv2Xn3DvXM7TjSDqHQQiNiRGNUDP7gdcbg0DK1Nv93hSHRB0L7tNxebMSz9bfHmWoS0W5UDzd3OV/CYfmtbZ7o9hnve4VQhes68D90uLVFublvrIz/o6TnPD8sNZ5fxYc8Y7HTbZMXkwugugp/z2gchqL+H0A/G83+f67YdXSnrSSI9k6ANMVaS+vMwDmg4QBZgUjzTKxVjveV602RQgg8MYO80yWa8YTuf0yMEMlHkFFDmHpssZLqO36knLOiQV9OrVCHKvMlto+t/YkVkQo9ckyCFXc+qbCT6674cLEfREYMhZzEwo+wxdUgqWfj1w3WtwzyYvSbhDRvZiNeMYirDxnaReTPZAuGDjYNQCWkSdTG1s8F10tC2n3q7Z7zp5yoWD93ijpb+IWdapXTG53DiLminrdc3AB5BKasFZIb77J928G8LRzGc0DLFRkdbSi2nTprT2NYLpqL1WZgXLc0kcjKr8SzKpAleB+ZuuWbqhI48RGZTK6qLkd04KDlIuUZ+hBpha574S4KF9qV4imFKQlhKT6yxHYU0QA80SKFqmuS2vGvx+2Dm0UwfFolck4LqApKSi8vY8AVaMXnrVeakozitak689YfSigpJiUC1AB6qi9Ms7zpMDozIN51aAnVU9ooZWcrVZpLBMIoJOguLZpFed6yz/ua1rglXWhZ+mZBT4u1houc/NOAEHl/ffkHi0wbjE/v2CK75I1YW0KbxaIq4hWhpeSrR7rriuNiZBxTzRL2v5A43z6Tzsgj4soLJOt3lyKc6cdU5wnUc20P+4yIMix9Be1+Gz5TuRjpnM019mLFC3cXSY77D1mHG958naMZcJJxotlz3j3Kpc9h9/h7Wr/XYyb854OsQxsLRfEfdIdb3GNy+9VTeF2HkU6RcfqgBv0tf8s7tHiWYxzXjSd94DDDqeR0yhiAfD1InJ8wvcHJ3z+pBSpaqUTBVGVDAqUbYMrUBqHtmuPRNpTMd4qC9WQB8YssAktOFdHNJoVEFZTprItla9hms1Np2W5bLVDhRTptF57AaA1ZmtM1aaNomA7r5UQACRKg3butybedJJ5lSqjVWvj0/jaYoV8Vu0ao8EqsWUxmlinpJG8EGWHZGtqas09vYvyCv3CtVzwfP/50DPzyBefEW2GyP0NPJlcWJU+exIa2Xs7wnvo/wW
TIvGPwfKWvPaYAz9uUtYM5oayTd7FPqV+UURwugXtQZbTKOKvPcU2X3enA3lYhPxUthziC83odygmoavo32esLtetrWmfwchYKhrHYPGZkKRAcnCmU67pRZ6d/SBVobkS3NKC1zZeHbwg+uBfpJc6Z2jxOC39F4mZ0RSC5gAk9xuBaW1/W3Fz7QKd4dbDoJ5h4nmbQuZ9sHEpiPXWEQapHFuVO+PXWq0L1myGNq5tnovAlOHXtXZO8sYWg7oyGEqLfTYcg0lvfi8dpxdTyHZ87cucpnngM7Tki/N5yvzizvJXW4xygtE+SiKAxjlOnsCFk3rrTR5kuaUiVtVPvRcDeWiELwGaEs4R6U72PPxdJJybaXrxAaBYTd72fbOKumNjcSwq56xs47w2kJy4EPvvuUZTbHyr0f9cXsutxBXckuNq17YYq59bqnWqiPnzuSS/uTtGCm6eaPHp4pqX94HzVhOGnKAJ4/JKWKzZQuUzsENvW1wnUZil9af8bEpBuTR3to10izcXnm4xXUI4CY643zzhW8mlRXwpZxZZrt5iru98iEi9zSmqTMNF/kyamwk0Sw4CiAfMjFucOgXTol4nq04UcKuYzUNZ23ipKOvIY/Nlxh7YA2EdRxpuZifAjzu0v/N+8e6rBruh5u4evOa5XU9sE3APAgJqLAoWkreCOexJxyClWcdAGcXYBK706tA6WfC2ke0BldbBuNp4zU12pe9tkQjviDfpXD2RlGWy8LerNq/5GQnWDRUk4SRpYykbYDyqYR1HwC95Rm2OmFLf5i/ui8Aa3KYOMfOq3+7CyZ4F+mGTS0V8zrLPtVOQnmYvSLRJanrE3UNNLrAAVYO5ENgkXX2HM2ilhbsKBFsAgCVxJNywzMCckjWytaSDeHAn4dX7XgIqfYXh3TkLbAFt7LNy4xjJAg2LMg6QlFWCN7KCKyyK47ANFSvEky2O22LGDDhTqNLmv7O03bIk86O080k3j36PxsX8iVHuhmNEn0Bm+c1j6+jN9PTuGvM88RwJximzRn+76QBRarUlZljHZ4O8mkvTcZN5TzgPitaNW4AdA+LCyGWtiUs5D1HLOIPQik2Wrr8zuZUOGIATQSWliS5veqHULcUC7Xgtwe3VZkF2rihOgA0UjWEh6PRnzlSjuqSiDuhEEClCxICp1Kn4TlTKFf0CINiFcrJkd1vNCrRMQcQCwFKT4aYv9s+JIQ12UOhWYh45voB1astaq3PfWZvbLrP01I9bIe0aF5ZxzR5Euj7iygAXynQDgAZ57UvNSgtT9k60qHOP0zxedD13CU1cyp0Ko/7DkXTKxlJ+TeNZerB6wfKcseYvUrZ4vWh6I/wLBijgVdk6iyi99JkF0LpDtBc1F8fJfNLMIBBVKFq338BepWUNRnCxAFKt4WYkVKQFITDN2ZQLM/FENSrUAYi+bwDQKWBfjFp9h/5l5f7kPQOI8zcGhkE2wzFQqkI30l76bC3z+pmKDlPkZdloUx1OGBqTolRfsLxRKlkrORlGnNOcE01sbtow5rVPmvh9UPFMwzYXnUcFwlT+/DgktuQ2X9gAHSV7Dg+pnCbF+a6LiDxDRF4lIj/nP5++Z5sPF5GfSP+O2MlZRL5GRP5b+u4D7vU13Ez48lnXYgQevJPym91TvlQpAaDDFrNFlFKlgWYRLwvQxHGB4MMux9ltv2ebrOuWTIe8T5c2S8WfWgvl8wUWTos8K92kYLJVuzN+t2hzd+q9Kbt5l+LKL2Pgae6666JuTvDEEq4IHDxR3fL++TpjYavaFsDl/KNtR+WaMyLznO/Fd/Pzs0hb7tKhHwS5bJV0T+QlAL5HVV8qIi/xvz8rb6Cq3wfgAwBT3ABeB+Dfp03+mqp+270Z7u2JLJSukv8KWsimFAxDdupRYjEwuMVCM/nFLaz4pgBq49A2yxSAsG5tG5N6IR7rAOI/PT1ZRVqtYLeeGBDiS2w0Mmm0OLcWsaHi3a2bweuNMWQFW1qboMG7buxN154
bRBBZfOykUftj1hWweaSEohw2morDs1YwmRUKdpamtWnpwfb9vEJQ4qxbycLSpsIU66C8THWS2mCqrAw1ija1Z6RNCuee126YsWjfVYO4cS6TCkjcvxIGQO+ZPBDyII31NuSiKOKPA/Bh/vvXAvh+LBTxQv4ogH+rqtfv7rDOUcICapW0upeXCQqK1DZHW0AoWYDZ0m3u/cJSgyncaSWYD8QxVEC0707BJpsiwOiud7O+AKQU6K56GmxMdSWYrtif4zVLkmjWo/bBrCUDg7BGstaYxBIQgHjH5zEfFx1PeLoK1AObjLJtixvPM68N7pGqGK+bgrcUckEFopC+VKuVUWaL59WVMV3qDIw3TEPqgL5mSA6Yqi8KboHW0gJhtNJjwSQjxKGGjuO7YIwAFhDl3BcF1GuOjN51uq4Eky/euc4x2zBxLuLWPWCKTerDjU1cFOfk2ar6Jv/9VwA8+xbbfxKAb1x89sUi8pMi8qXs9nwhJaCB9AKid+WB5LLP6V9StAFvJPwyIIF9bmo6d/y5gEL2urFAc6UTRNBVakO/bRfY8/00Kxvpt+nGlxecW3xPKGPnMvdZe5IgAx5rASt0SShpjD3flskf0mHDN5W0gOTj771fy3Oiv95gQGR8P50jX28E6RZjfNCU8A7sdLN/D6jcM4tYRL4bwG/e89Xn5D9UVUVOflRE5DmwJqKvTB9/NkyBrwE8BrOmv/CE/V8Mq7GM1aM7UPTdk4TN5iBSWMReIlOqQo6bxZfTnpmi2op+t8I2WuieU7kwu88tywWOSeqZpAd4uirAFXRuqziOacOVCCLmbKzBk9/ppkc2nf8k/zmy3FSteahbxAw8AojEDlVA53asyCQkRrrRll02mwfBMQQExGku7Y+aquMNWwU2tIg17kVd28+yda4y9ij8xUIS41tYtgwSdpCJtDTqoMd5dTTOOZ+ZrjYxu3l4SjbUqGqkukXd51XyrlJbpQdVLDnqQVs9zib3TBGr6kee9J2I/KqIPEdV3+SK9tducqhPBPAd3syUx6Y1fSwi/xzAX73JOB6DKWtcfdbz7u3dpRUU7Xq0WTieZCGzBM4q1Yuf041mSrErtDpoRNpDoYHWs3QWH7irK2BU35bfi9db8OQSFrffob0t+p9JhZdXTNlraJZ1JXaZLH1Vu0apajUauT3PkSCSbKmzGhkTMHg81oBGXPti3mnNw+Z+Zg+4TascV5z+FokzQMAcUentJtZnZ6EKx+iwgBKOsvmYDxCc7YIGUbE3XGDQ/qzUQaxbs5fJ3B5YhTcolbTueitMzHFOM8f5wMpDrogvCjTxcgAv9N9fCOA7b7LtJ2MBS7jyhogIgD8E4KfOf4jnKHSpl+659tsAfBmJNdItRhTQ6dzWxMIA0CzSru4ulaMp7qAvZTc9R/bRfqd1LLOaYth4Zt/Usvs45p0gG9J3HB65w9E8dTEHCU4IS1jb+NiNJI+xO0e+rnnxk1h9UAlTUf00vmhfFR6Ctp8cS0AbzSOIxdXn2QKQ2s1vYOhxnyTGkWsDC2siJ1pbgzD6OsTxL9/Hh0EuWRP3RF4K4FtE5EUAfhFm9UJEXgDg01X10/zvdwfwPAD/YbH/N4jIs2Cv0U8A+PR7M+yzS7Af0t8CC/BIae4qX1LCDdndzymukQ7NThHaFBEtuYA8XOkweAXfL7iqudip5rHYR2QpSGWd274VEpV8c6kbhhnj8n85E5C83HkNbK+mwj7Mlhvdla8OF9C6HAHxi+XYGPxThz4aU8WVZCEzRTFsKsqxoh4I5sPSFbNn0HTYqHF8x1TLgVYs504suGdKT5sH4AtJmYH1VpuF7YXceY0qgBTtOi93GLZ7B/MBYr7JgGCzVKuGpy3xh4dJXsoDK4t35m6JiHwCgM8H8DsAfJCqvubun9XkQihiVX0LgN+/5/PXAPi09PcvAHjunu0+4m6O77ylQ8CXi/jCOmalr8467KqYoSm7SuupWYrhpoq41YAu8l+oULI1xWM
uxsJiO8XdeivA7tZ1woCRFeJSCbilbOfTFpCsCi2l2y67/EHXq+hc7cjW84Vjp4KZK+FW7F2j9CYhFaiX6Bx6CMWSYxwKWtyzjv8s7SfvFe9THdo8RHZdgl3atnu0JT0G7RemNoZkYdt02tcLOOlhkHvEmvgpAH8EwFfei5NluRCK+Ekv7rayulbwddP3QHt52YTUPvRNFsqzgxsSD1hhfw83bDu2p88uP7FdCi01WneAWa8zm4emJIMyN2sU3kI+uMe03gswHQJACSu3zGYZjjf66+hhEYQ1zACbjU9iMQh8Ns2dDoJ5MV91UExXC6R6dtqYlD0DpJHlmCz7hctPCEHHtl0UcFIfR3Uu86NtoSFFjvd1H0tCyv7nIK5LrHQn7y1TzR8+uTewg6r+LABI11Xg3silIr7f4ve8zIgC8DkbbFlUJrDHxLe1F5F4Y4+lyuTFcRzSgLvl4420TbIcGYGfVy3hgCyD4m2IdBBMV3wMaf+ybSwBSh0FODD+rRRr0aMC6COC+YqNf7zRYICDx2fI7PDJunW7JsNEZoNFrIuzWUlHTx2sPVFSaBVqcUBt0AbAMdq4pittEQlFSHaCeNujtXRWvJ3fNp1XiFZHrFsRTAk1OIO1auYrdr1lC4zX1Ltstxd+5z4Dvbczp3vNJB/CVflxehg1sXtPp5RnikiGFB7zAP2FlktFfFEkP2f5BVu8lLnu7GkkOMb5s5yI4C5uVspRaiFj04v3gOnBFulCZ5VlqzzoaOnaMhOiwK3vagQKIZUsFdPJsEnmJxMbjfEIAkvMU6RLhdVBB+6FEEtOrn9XAhT9+ftrSVlygUkjKt/F9abxYXlNaDCL8Pf0E9JfU3ctTwY5PTLx66r6gpO+vBmNVlW/8zZGdi5yqYgviux5oZZKzUTjpc8Wca7P0NV44MutLSgGuCWnHjCbzVImJc0oVhKQSWCQgxWMyWwNHaw4j41FMexxscl/zgkrUeAnH3sl2F4tkKqYDwSTjwHaUnkNxrBjT1eK8X6Z+svFpPbc2czBzZBH2QIypa4cYtdASxrADvVLB2Ba9ccGfOGQtphJUsCAeyYJ31bvdj0ca1Poab6o4DNNMBZNBlf5fDwJlPF58YhvRqO9n3KpiC+gdBQv8mVzUIauacZ10+d1bMqpJEUdrInkcgMaPdSYHFJXEhH6YSOmLNLxI8gE1pqw35nYkZkbHY1N+gUkjis8r/Xjg9r5Z+/cPB6ppyUjOhHXUTA5dKGJ6ZDLTHb0tYw5+xiFEMNBu3YrqN46VC85yXXtUA0aFJOTNXayFZ3POzjVL6zf9PlynhrV0Ez0mLNEeXzQkzTOLA8wNe00clF4xJdyCtFkNe2TZWpxhhYyh3hZiSu7zh0H94Rz2FiSr0z4I52rs/KS4s683JzKvMQ2uXCUyXnGUysO3mXbncDMgHpChf/bdz1s6NlVT0ODGmIctR9fp+T3zdOeeVxyq7vFdl/W3jILMiv4h1sn7YoqMNfT/bsDEZE/LCJvAPB7AHyXiLzyVvucl1xaxBdcGgSQlFe2iFNlLbIfAMFsP6JeMdDc5MyQqGPPkADQBayiL5zv11l7jh0ze0yLFwBKiqSyzi4z9hLXua5bu6dl3eTxhmL9BGv2mjKtg5i1vm51eKn0w1sQAYpimIHVNeMKb68W1KvSxu3bBzQiinqE6PRRV4DMguHIElcaHs5r1Q4uIM+1+BzpYuHhedm+CuLQCdkZaN02RJsXUxWQ0YHrJT79ZJN7w5r4DgDfcddPtEcuFfEFlM7KIl6Z8MawcDW9/CkrTouiRABJOoqVujIhjEDrjxhvwyHFx2JKh1CkHb+NZ8nrjXKUrljDai09q2I+YEo3MCTlw/OPx4rxRk2LhkJWxZkUEvCLJpYIRZ0zPRxVDMfVK5MtLGelMvaKZVubw5m8azSlqOoV1ajAWfOBn/mYVbngSbNyM2zk86UpYBeLHC1lAIVdTURIpOkzMZ+M8pBDE5eK+EGQfXBEtpD
8Ga0DIKx5QKU2W2lHqYDOdqzgDoPKXXaOXXKPsISr5p/xXaWbn8aVMG1JSjYuqVs4EEp5aWUC9ALEFpIuSUL6oRMjJvQxCqqW1j05XUv8meCR1q9PInOPAcMMgQSEwfmTfiwRnFyOza+rjk7FYzGivNAqIGrtr4JxkuIAT0pRAPXhnoBLRXzR5SZKuLEE7CVmcCtcfLrfzMCa0SkEQhd11O58XafprCDSuWLz6mnASbECiGpmlpCgEZxq+ymK94arowAjWgeT2vBdAJjXxSqOrQumA0IS0iIcrkAjKDbZ+adDgRyIcaIzHk2+tABgijGhhckOqCKYDw1qITwSuDUZGKvGtmgc71QQKXF/mZLMrEYdAHiWYsd8IFSSes1FQaOHWxfdRLgaPbxyqYgfdMmYbcoeYw2LSMGlco3CNP6i1/4YQI9T5oVgryJICprUsYwRd5zgRRCLpTqj2M7UxgfArKCoY+EBNOK1e+Yg09cA2y5brbFtnjMsPudiUhTVYR2pQHGSb2Yr7AvAtep6LX08Xy+8ghy9l1r6MYl4tbX6ZFe+SRR3HIi76HKpiB9A6ctSIrDGUHJ09fcoSXO5U0pwScfLOnDYVQKBUY+NIkccmPtL+reDaVZTToBb4q5QI8U6VV+bVwI8OsTvxIWBpOypvNxSVbfw57XVcxiPNVnoAHFYQgOBhwOBa+vA7L8TEmekBTeZng1ki7gFRsvCAyHMIJMTV/bVIKbCvpReLjHiS7nQ4kWBcqSe7dW1WjpuV0sCtt28NiszJz90Cn5wnZj2ixrDVMReEa0OZskNC8qVFfJpZieZHdbPDZG+zBKYmSZmFeIEuZPJDqVsotXduMWbdxJMh96VOTFJireKqqum0DkZxTtoN6pZgj18DuLXAijTpVMbqWC3jLYqdvBNgkMKFLoV71foPOQFfn4pe+RSEV/KAyuy+J3KQvq/KbrcHg3a4Ge5yto+/m9mAQSckF14uPWpLaCYXfOW1CB9JbV8PdnaplXrv9PKX3J1oelybxX8vBUcI+1ckveFW7w1fSmLY/P37Kk83DrmHEQvFfGlPBiyt/KWpLrAyFabK9MEI6i77DkAllkIAFo7HjWOLtBgAeRjJJZBdBcuTFM2alfZAmNtx2hWrwTEUNd2ruFIrdAOx0SmgV+jek5y0OmmpuS67RiUI/3MFaVZ1G0eqYwbk6LNA5UoMfnJvQKWBRWV4CbnlG6mPAOwjtD5Gi7l5qIAHvLmoZeK+CGSfZYVaxeHa99Ze2kHadhvWGwKq9U4t6CapQFbGU1WEGMxeCp+FMd1nbfMFGSZqd3QOg278o7xuQKfDwXzIVrx82M141Jba6i4RoEFwRK7AwtLM6zxvLCwAWjV/6e9sw+Wu6zu+Oe7NzeJIG0JRl6UIgQ6ICgRM2qRllZnWnUMlGnsaGilNAKZDn1BmYpFx+pMi4BTW99QBpTCUEVpZ3zLGNCIKY6U4gvyJiGmUYmE14SSkLd79/SP8/x+++ze3Xv35t7sb3fv+cw8c/f3+z37/M6ze/fs2fOc55zS79xkRhdKuPjCqmfjF/7oBQ3XRlkJOvmjy2RA2dzKTTjB9AiLOBhGOm6rzZPEp/OlK6LIBTyWwszGvYNqWXRCno+iuFfhg64brQl5Cj90boGWsbX5T/6yZBKNnA7lDTz6oPyiKZ5TA0wT5ppvAsl/EbS6LsrXod6YO1BGeeS/NMoE/Nn45QJqGzdQMB1s6KMm+iLXhKS3SXpAUj2VR+rU702SHpa0UdJl2fljJf13On+LpPmdxpirKFe0WctDv0b2GKPPGyO7LcX0NvI7yMyv7zBGdxqju+qM7qozsrdZQY2PqpHJLMUx1/a4e2He7vQTPit3NDGygzLHRG1vw41QzyI1rGVbdpF4qMgNXLgBigVFt7hTy/zWXi6pES0xPtpct05jRm2vz3vebmPeLreaPb44Lfqle43Ph7GFYmyhZ6Pz+1L+ighFPAMMzOpdtUGlLxQxjRIl6zt
1kDQCfAp4M/By4B2SXp4uXwl8zMyOB7YBqw6suINJHlrWFGYGbh2ONyIYai2KGkglkqxUpq5QrWG5KlM8aczcDVHkGq6ln/DlfS2LHLCJyrm0hJPVWc+UpU8sKduR5pCzCelA20Rf1NLGk9KqbbHky4xvRfQJWaL57NNT5KgolW+eXCmU8MypW3dtQOkLRWxmD5nZw1N0ew2w0cw2mdle4IvA2aly8xuAW1O/f8MrOQfTQR2UUaEosxX+IgxsbKHnfigUULHLbmSvlYp6JLN+wRWmV/9Qed/6SFEzrkV5kv28zzLI5b7XktZIhMIF0uazKfNxxhaKvQfXGFuoFNbWvFhZJgayQrlmLpoxb2Ul6JbY5mCWiSrOfcNLgF9mx48CrwUOA7ab2Vh2fkKB0QJJFwIXAoy+8NADI+mgUst8tJTrallmNCu3OO97Qa3MEVGf737RkT3G6C7/MEzIA0FD+RU77ooE74W7oalvVifOaqKe4s/KqAwDxhp6tnUTROuW6zw/MrhVPXaQLygqiyMuqmx44dC0IJktKKIsmXspbLPsoYxnGbOImpgt+qVESapfdS3AQYuPjo9MKy3Ks20kRhGWVlqxKvuWEQG1hrKakBAH3zJs9dQnC/NqG9JVuhXUsNw7xH81JSbK3C7txrRUgboGWJkSM58opQ84X4CMDRgVMMDWbjf0TBHPQomSLcDR2fFL07mngd+QNC9ZxcX5YJbJrcJ8AarM7VBLlT9ovt4URZH+jtcoKycXOYsbPmKbEAZWZiyrN4Yqd8jlURS5zzv1b+dugYaLoSgoWmzXro9Y6Y6op4RBRQhg+HurwLDx4Y75GyTXxP8AJ0g6Fle0bwdWmplJ+g6wAvcbnwf0zMKeM6S4WTLlB0nZjTX8y+ML/HyzIk4WcVKyBaWFm5RcqThNZSY2jyl214UVMcXJkq6PJku8UNo0rNXWDR1NGyuKPuNuDddSVWjVU56hQr6RlIc4m29QAcZAL8R1Q18s1nUqUSLpKElrAJK1ezGwFngI+JKZPZCGeC/wbkkbcZ/x9b2ew5yhg1U4IZl9op2vuDxfHjQryAkuhXwhroiwaBqsvait2dGaLHQ6jDVhYoQS7ges3l0bUPrCIu5UosTMfgW8JTteA6xp028THlURVEhZyikLZxPF4l2zsm46X3x+Mv9uEdkAKRJjX4t7IUt3mT8vDd7IFEfzQl2x+66IcyblqSh2FE5WEzCoBgNfTxhi+kIRB8PDhOodxflO/TsNVERvWCMzW+FPznfPdRaEiUmDihuWvu3s/rVYf+tbzAba2u2GUMRBEPQ9w75YJxvysJDJkPQk8PNJurwIeKpH4hxIhmUeMDxzmSvzOMbMFs/kBpK+me7TDU+Z2Ztmcr8qmNOKeCok3WNmHXNfDArDMg8YnrnEPIKcvoiaCIIgmMuEIg6CIKiYUMSTc23VAswSwzIPGJ65xDyCkvARB0EQVExYxEEQBBUTijgIgqBi5rwilvQ5SU9Iur/DdUn6eCrD9BNJp/Vaxm7pVEoqu/7nkp6U9OPU3lWFnFPRxTwWpJJYG1OJrJdVIOa0kLRI0u2SHkl/2ybDljSevT9f7bWcUzHTsmZBe+a8IgZuACYLAH8zcEJqFwLX9ECmaTNFKamcW8xsaWrX9VTILuhyHquAbak01sfwUln9zmXAt83sBODb6bgdu7L356zeidc1My1rFrRhzitiM1sPPDNJl7OBG825C899fGRvpJsWbUtJVSzT/tDNPM7GS2KBl8h6YyqZ1c/kMg9sOa+ZlDU78NINLnNeEXdBuxJNHUsxVUi3cv5xcrHcKunoNterppt5lH1SetRn8fSn/czhZvZYerwVOLxDv4WS7pF0l6Q/6o1os86gfGb6hkj6M7f4GvAFM9sj6SLcMntDxTINDZOVA8sPUjGDTnGjx5jZFknHAesk3WdmP5ttWSejX8qazSVCEU9NpxJN/caUcprZ09nhdcBVPZBrunTzehd9HpU0D/h
1vGRWpUxWDkzS45KONLPHkmvriQ5jbEl/N0m6A3gV0FNFfADLmgUdCNfE1HwVeGeKnngd8Gz2E7OfKEtJSZqPl5JqWnVv8W2fhVc66TemnEc6Pi89XgGss/7fmZTL3Lacl6RDJS1Ij18EvB54sGcSzh7dvIdBjpnN6QZ8AXgM2If7slYBq4HV6brwFeCfAfcBy6qWeZK5vAXYkGS9PJ37MHBWenwF8ABwL/Ad4MSqZd7PeSwEvgxsBO4Gjqta5i7mdBgeLfEI8C1gUTq/DLguPT49/Y/dm/6uqlruNvM4J31O9gCPA2vT+aOANZO9h9E6t9jiHARBUDHhmgiCIKiYUMRBEAQVE4o4CIKgYkIRB0EQVEwo4iAIgooJRRzsN5KOkHSbpJ3FTrF253os05mSNqTEMwOJpBenLHkvrVqWoDeEIh5gJN0gyVIbk/QLSde0S7Eo6XBJu1Of2XrfL8XjR5cCR05ybkak+a3osvvVwD+a2fhs3LsKzOwJ4EbgQ1XLEvSGUMSDz7dwhfcy4F3AcuDTbfqdh+ea2A384Szd+3jgB2b2iJltneRcT5B0OnAi8KUZjjN/diSaEZ8HzpW0qGpBggNPKOLBZ4+ZbTWzR83sNuAW4A/a9PsL3Mq6Cd89OCWSLkqJvfemvxdk1zbjqQ3fmSzWG9qdy8bZkCzypyStTTkiirHOl/Rgur5B0iWF1Z7GBPhyGrM4bsdKPOfvrmzsJZK+Imlrcpf8UNJbW+a5WdI/yIsEbAduTudfJ2ldet6z6fFR6drvpgxpO9K1uyWdko15uqTvSnpe0pb0S+XXsuuS9B55ovg9kh6VdEVx3czuB36F5/4Nhp2qt/ZF2/+GJ7X/enZ8HL6FeWtLv98BngRGgWNxq3jxFGOfg2/7vhj4LeCv0vHydH0xcDuu+I/AE++0O7cMGAPOBY4BTgUuAealcS7At5ivSLItx9NEXpzdx3Br/4jJ5Ma3Bl/ecu5UfMv6K3Br/XJgL9n2bmAz8H/A36U+J6Tn7cKrFC8FTgIuAn4TT5a1DfgosAS3wlcCJ6XxXgHsAN6Txnot8H3g1uyeVwDb8S/I44HfBv6yRfYvAjdV/X8W7cC3ygWINoM3zxXxWPrQ70oKy4BL2vT7ZHa8Hrh0irG/B3yuzTh3ZsdfB25o6dN0DrfongUO6XCfXwB/1nLub4EHs2MDVnTxemwHzu+i313A+7PjzcDXWvrcDHy/w/MXJZnO7HD9RuD6lnNL03NeDLwQ/zJcPYWc/wz8V9X/Z9EOfAvXxOCzHv+Qvwb4BLAG+HhxMf0cfhvukijoxj1xEq6Mc+7ES99Mh9uBnwP/K+lmSedJOiTJthhPl/jZ9BN/h6QdwEdwS3O6vABXcCWSDpZ0VXJ9bEvjL8Mt25x7Wo5fBaxrdxMzewb/Ulor6RuS3i0pH+/VwJ+2zKl4LZfgr+ECPAnQZOxKcwqGnFDEg8/zZrbRzO4zs78GDgI+kF1fmc59L0VWjOF1906U9Pr9uN+0QtLM7DngNOBPcOv3fcBPk6+1+P9bjX+ZFO0U4OT9kO0poDVi5KP4F9EHgDPT+HcDrQtyO6dzIzM7H3c5rMdTij4sqVgEreH5npdm7VTcTfHjadxmEe5SCoacUMTDx4eA9xaLSrjl+0malcJS4BtMbhU/hOfDzTmD/ciPa2ZjZrbOzN4HvBI4GHirmT2OL0gtSV8mTS0bYh/QTVzwj5hosZ+B1xz8DzP7CZ7CsRtr+0dMUb3EzO41syvN7PeAO2jkG/4hcHK7OZkvJD6Ep5F84xQynJLGCoacqNAxZJjZHZIeBN4v6TP4z/BV5qvwJZJuAq6X9DfJam3lajxS4QfAbXil63OZ5ip+ilBYgluOzwC/DxxCIyn9B4FPpGiFNfiC4mnAS8ysiCLYjBcI/S4eJbKtw+3WMvHLZQNwjqSv4Ar9g3g+46m4GrhL0rV4Purd+KLnbfiXwkV4svMt+CLpK2lU+L4yPfczwGeB5/AFveV
mdpGZPSfpX4ErJO1Jr81hwKvN7Jr0uh2Euzj+vgtZg0Gnaid1tP1vtERNZOdX4hbX54ENHZ57MPA8cOEk46/Gk6/vS38vaLnezWLdGXgS+qdxn+f9tCyoAe/ALb/deDTCncDbs+vL8YTq+4DNk8h7aJrTydm5Y/BY6524NXxpGxk302bxMsm+Psm9nUbM9uHAf+JKeA/ucrkKGM2euwz4Jh6NsRNP9P7h7HoNuAzYhEdx/BLfiJK/Jj+t+n8sWm9aJIYPhgpJH8FD3LqKle5XJN0N/IuZ/XvVsgQHnvARB8PGPwGbNOC5JoBb8TJewRwgLOIgCIKKCYs4CIKgYkIRB0EQVEwo4iAIgooJRRwEQVAxoYiDIAgqJhRxEARBxfw/zM2qlcA33jgAAAAASUVORK5CYII=\n",
+      "text/plain": [
+       "<Figure size 432x288 with 2 Axes>"
+      ]
+     },
+     "metadata": {
+      "needs_background": "light"
+     },
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "plt.imshow(residuals[0, ], origin='lower', extent=[size, -size, -size, size])\n",
+    "plt.xlabel('RA offset (arcsec)', fontsize=14)\n",
+    "plt.ylabel('Dec offset (arcsec)', fontsize=14)\n",
+    "cb = plt.colorbar()\n",
+    "cb.set_label('Flux (ADU)', size=14.)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The star is located in the center of the image (which is masked here) and the orientation of the image is such that north is up and east is left. The bright yellow feature in southwest direction is the exoplanet $\\beta$ Pictoris b at an angular separation of 0.46 arcseconds."
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.7.9"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/docs/tutorials/zimpol_adi.ipynb b/docs/tutorials/zimpol_adi.ipynb
new file mode 100644
index 0000000..5bb4e95
--- /dev/null
+++ b/docs/tutorials/zimpol_adi.ipynb
@@ -0,0 +1,1586 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Non-coronagraphic angular differential imaging"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "In this tutorial, we will process and analyze an archival [SPHERE/ZIMPOL](https://www.eso.org/sci/facilities/paranal/instruments/sphere/inst.html) dataset of [HD 142527](https://ui.adsabs.harvard.edu/abs/2019A%26A...622A.156C/abstract) that was obtained with the narrowband H$\\alpha$ filter (*N_Ha*) and without a coronagraph. A few ZIMPOL-specific preprocessing steps have already been done, so we start the processing of the data with the bad pixel cleaning and image registration. There are pipeline modules available for dual-band simultaneous differential imaging (i.e. [SDIpreparationModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.processing.html#pynpoint.processing.psfpreparation.SDIpreparationModule) and [SubtractImagesModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.processing.html#pynpoint.processing.basic.SubtractImagesModule)), but for simplicity we only use the *N_Ha* data in this tutorial, in combination with angular differential imaging."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Getting started"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's start by importing the required Python modules. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import configparser\n",
+    "import tarfile\n",
+    "import urllib.request\n",
+    "import matplotlib.pyplot as plt\n",
+    "import numpy as np\n",
+    "from matplotlib.colors import LogNorm\n",
+    "from matplotlib.patches import Circle"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "And also the pipeline and required modules of PynPoint."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from pynpoint import Pypeline, FitsReadingModule, ParangReadingModule, \\\n",
+    "                     StarExtractionModule, BadPixelSigmaFilterModule, \\\n",
+    "                     StarAlignmentModule, FitCenterModule, ShiftImagesModule, \\\n",
+    "                     PSFpreparationModule, PcaPsfSubtractionModule, \\\n",
+    "                     FalsePositiveModule, SimplexMinimizationModule, \\\n",
+    "                     FakePlanetModule, ContrastCurveModule, \\\n",
+    "                     FitsWritingModule, TextWritingModule"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next, we will download a tarball with the preprocessed images and the parallactic angles."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "('hd142527_zimpol_h-alpha.tgz', <http.client.HTTPMessage at 0x1454433d0>)"
+      ]
+     },
+     "execution_count": 3,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "urllib.request.urlretrieve('https://home.strw.leidenuniv.nl/~stolker/pynpoint/hd142527_zimpol_h-alpha.tgz',\n",
+    "                           'hd142527_zimpol_h-alpha.tgz')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We will unpack the compressed archive file into a folder called *input*."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "tar = tarfile.open('hd142527_zimpol_h-alpha.tgz')\n",
+    "tar.extractall(path='input')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Creating the configuration file"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "PynPoint requires a configuration file with the global settings and the FITS header keywords that have to be imported. The text file should be named `PynPoint_config.ini` (see [documentation](https://pynpoint.readthedocs.io/en/latest/configuration.html) for several instrument-specific examples) and located in the working place of the pipeline.\n",
+    "\n",
+    "In this case, we don't need any of the header data, but we set the pixel scale to 3.6 mas pixel$^{-1}$ with the `PIXSCALE` keyword. We also set the `MEMORY` keyword to `None` such that all images of a dataset are loaded into RAM at once when the data is processed by a pipeline module. The number of processes used by pipeline modules that support multiprocessing (see [overview of pipeline modules](https://pynpoint.readthedocs.io/en/latest/overview.html)) is set with the `CPU` keyword."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "config = configparser.ConfigParser()\n",
+    "config.add_section('header')\n",
+    "config.add_section('settings')\n",
+    "config['settings']['PIXSCALE'] = '0.0036'\n",
+    "config['settings']['MEMORY'] = 'None'\n",
+    "config['settings']['CPU'] = '1'\n",
+    "\n",
+    "with open('PynPoint_config.ini', 'w') as configfile:\n",
+    "    config.write(configfile)"
+   ]
+  },
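As a standalone sanity check, the same settings can be written and read back with `configparser`; note that all values are stored as strings, so PynPoint itself parses e.g. `PIXSCALE` into a float. A minimal sketch, independent of the pipeline (the file name `example_config.ini` is illustrative):

```python
import configparser

# Write the same settings to an illustrative file name and read them back;
# configparser stores every value as a string.
config = configparser.ConfigParser()
config.add_section('header')
config.add_section('settings')
config['settings']['PIXSCALE'] = '0.0036'
config['settings']['MEMORY'] = 'None'
config['settings']['CPU'] = '1'

with open('example_config.ini', 'w') as configfile:
    config.write(configfile)

check = configparser.ConfigParser()
check.read('example_config.ini')
pixscale = float(check['settings']['PIXSCALE'])
```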
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Initiating the Pypeline"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can now initiate the `Pypeline` by setting the working, input, and output folders. The configuration file will be read and the HDF5 database is created in the working place since it does not yet exist."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "===============\n",
+      "PynPoint v0.9.0\n",
+      "===============\n",
+      "\n",
+      "Working place: ./\n",
+      "Input place: input/\n",
+      "Output place: ./\n",
+      "\n",
+      "Database: ./PynPoint_database.hdf5\n",
+      "Configuration: ./PynPoint_config.ini\n",
+      "\n",
+      "Number of CPUs: 1\n",
+      "Number of threads: not set\n"
+     ]
+    }
+   ],
+   "source": [
+    "pipeline = Pypeline(working_place_in='./',\n",
+    "                    input_place_in='input/',\n",
+    "                    output_place_in='./')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Some routines in libraries such as `numpy` and `scipy` use multithreading. The number of threads can be set beforehand from the command line with the `OMP_NUM_THREADS` environment variable (e.g. `export OMP_NUM_THREADS=4`). This is particularly important if a pipeline module also uses multiprocessing."
+   ]
+  },
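The thread count can also be set from within Python, as long as it happens before `numpy` is imported (the value `4` below is an arbitrary example):

```python
import os

# Must be set before numpy is imported, because the underlying
# BLAS/OpenMP libraries typically read the variable once at import time.
os.environ['OMP_NUM_THREADS'] = '4'

import numpy as np  # threaded routines now use at most 4 threads
```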
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Importing the images and parallactic angles"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We will now import the images from the FITS files into the PynPoint database. This is done by first adding an instance of the [FitsReadingModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.readwrite.html#pynpoint.readwrite.fitsreading.FitsReadingModule) to the `Pypeline` with the `add_module` method and then running the module with the `run_module` method. The data is stored in the database with the name of the `image_tag` argument."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "-----------------\n",
+      "FitsReadingModule\n",
+      "-----------------\n",
+      "\n",
+      "Module name: read\n",
+      "Reading FITS files... [DONE]                      \n",
+      "Output ports: zimpol (70, 1024, 1024), fits_header/cal_OBS091_0235_cam2.fits (868,), fits_header/cal_OBS091_0237_cam2.fits (868,), fits_header/cal_OBS091_0239_cam2.fits (868,), fits_header/cal_OBS091_0241_cam2.fits (868,), fits_header/cal_OBS091_0243_cam2.fits (868,), fits_header/cal_OBS091_0245_cam2.fits (868,), fits_header/cal_OBS091_0247_cam2.fits (868,)\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = FitsReadingModule(name_in='read',\n",
+    "                           input_dir=None,\n",
+    "                           image_tag='zimpol',\n",
+    "                           overwrite=True,\n",
+    "                           check=False,\n",
+    "                           filenames=None,\n",
+    "                           ifs_data=False)\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('read')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's check the shape of the imported dataset. There are 70 images of 1024 by 1024 pixels."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "(70, 1024, 1024)"
+      ]
+     },
+     "execution_count": 8,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "pipeline.get_shape('zimpol')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We will also import the parallactic angles from a plain text file (a FITS file would also work) by using the [ParangReadingModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.readwrite.html#pynpoint.readwrite.attr_reading.ParangReadingModule). The angles will be stored as the `PARANG` attribute of the dataset that was previously imported and has the database tag *zimpol*."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "-------------------\n",
+      "ParangReadingModule\n",
+      "-------------------\n",
+      "\n",
+      "Module name: parang\n",
+      "Reading parallactic angles... [DONE]\n",
+      "Number of angles: 70\n",
+      "Rotation range: -14.31 - 34.36 deg\n",
+      "Output port: zimpol (70, 1024, 1024)\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = ParangReadingModule(name_in='parang',\n",
+    "                             data_tag='zimpol',\n",
+    "                             file_name='parang.dat',\n",
+    "                             input_dir=None,\n",
+    "                             overwrite=True)\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('parang')"
+   ]
+  },
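The expected format of the text file is one parallactic angle in degrees per line. A minimal sketch that writes and reads such a file with `numpy` (the file name `parang_example.dat` and the values are illustrative, not the actual HD 142527 angles):

```python
import numpy as np

# One parallactic angle (deg) per line, the plain-text format
# that is read by ParangReadingModule
angles = np.array([-14.31, -13.95, -13.59])
np.savetxt('parang_example.dat', angles)

parang = np.loadtxt('parang_example.dat')
```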
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The attributes can be read from the database with the [get_attribute](https://pynpoint.readthedocs.io/en/latest/pynpoint.core.html?highlight=get_data#pynpoint.core.pypeline.Pypeline.get_attribute) method of the `Pypeline`. Let's have a look at the values of `PARANG`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "array([-14.3082 , -13.9496 , -13.5902 , -13.23   , -12.8691 , -12.5074 ,\n",
+       "       -12.145  , -11.782  , -11.4182 , -11.0538 ,  -6.43366,  -6.0622 ,\n",
+       "        -5.69037,  -5.3182 ,  -4.9457 ,  -4.57291,  -4.19983,  -3.8265 ,\n",
+       "        -3.45294,  -3.07917,   1.61837,   1.99275,   2.36701,   2.74112,\n",
+       "         3.11507,   3.48882,   3.86237,   4.23567,   4.60872,   4.98149,\n",
+       "         9.6373 ,  10.0041 ,  10.3703 ,  10.736  ,  11.101  ,  11.4653 ,\n",
+       "        11.829  ,  12.192  ,  12.5543 ,  12.9158 ,  17.3717 ,  17.7218 ,\n",
+       "        18.0711 ,  18.4193 ,  18.7666 ,  19.1129 ,  19.4581 ,  19.8024 ,\n",
+       "        20.1456 ,  20.4878 ,  24.9167 ,  25.243  ,  25.568  ,  25.8919 ,\n",
+       "        26.2145 ,  26.536  ,  26.8563 ,  27.1753 ,  27.4931 ,  27.8097 ,\n",
+       "        31.7057 ,  32.0052 ,  32.3035 ,  32.6005 ,  32.8962 ,  33.1906 ,\n",
+       "        33.4838 ,  33.7757 ,  34.0663 ,  34.3556 ])"
+      ]
+     },
+     "execution_count": 10,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "pipeline.get_attribute('zimpol', 'PARANG', static=False)"
+   ]
+  },
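The rotation range that `ParangReadingModule` reported above is simply the difference between the largest and smallest angle:

```python
import numpy as np

# A subset of the parallactic angles shown above (deg)
parang = np.array([-14.3082, -3.07917, 4.98149, 19.8024, 34.3556])

# Total field rotation available for angular differential imaging
rotation = parang.max() - parang.min()
```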
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Bad pixel correction"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The first processing module that we will use is the [BadPixelSigmaFilterModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.processing.html#pynpoint.processing.badpixel.BadPixelSigmaFilterModule) to correct bad pixels with a sigma filter. We replace outliers that deviate by more than 3$\\sigma$ from their neighboring pixels and iterate three times. \n",
+    "\n",
+    "The input port of `image_in_tag` points to the dataset that was imported by the `FitsReadingModule` and the output port of `image_out_tag` stores the cleaned dataset in the database."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "-------------------------\n",
+      "BadPixelSigmaFilterModule\n",
+      "-------------------------\n",
+      "\n",
+      "Module name: badpixel\n",
+      "Input port: zimpol (70, 1024, 1024)\n",
+      "Bad pixel sigma filter... [DONE]                      \n",
+      "Output port: bad (70, 1024, 1024)\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = BadPixelSigmaFilterModule(name_in='badpixel',\n",
+    "                                   image_in_tag='zimpol',\n",
+    "                                   image_out_tag='bad',\n",
+    "                                   map_out_tag=None,\n",
+    "                                   box=9,\n",
+    "                                   sigma=3.,\n",
+    "                                   iterate=3)\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('badpixel')"
+   ]
+  },
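The principle behind the sigma filter can be illustrated with a few lines of `numpy`: each pixel is compared to the statistics of its local box and replaced by the local mean if it is an outlier. This is a simplified, non-iterative sketch of the idea, not PynPoint's implementation:

```python
import numpy as np

def sigma_filter(image, box=9, sigma=3.):
    # Replace pixels that deviate by more than sigma times the local
    # standard deviation from the local mean of a box x box region.
    out = image.copy()
    half = box // 2
    ny, nx = image.shape
    for y in range(half, ny - half):
        for x in range(half, nx - half):
            region = image[y-half:y+half+1, x-half:x+half+1]
            mean, std = region.mean(), region.std()
            if abs(image[y, x] - mean) > sigma * std:
                out[y, x] = mean
    return out

image = np.ones((21, 21))
image[10, 10] = 100.  # artificial bad pixel
cleaned = sigma_filter(image)
```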
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Image centering"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next, we will crop the image around the brightest pixel at the position of the star with the [StarExtractionModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.processing.html#pynpoint.processing.extract.StarExtractionModule)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "--------------------\n",
+      "StarExtractionModule\n",
+      "--------------------\n",
+      "\n",
+      "Module name: extract\n",
+      "Input port: bad (70, 1024, 1024)\n",
+      "Extracting stellar position... [DONE]                      \n",
+      "Output port: crop (70, 57, 57)\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = StarExtractionModule(name_in='extract',\n",
+    "                              image_in_tag='bad',\n",
+    "                              image_out_tag='crop',\n",
+    "                              index_out_tag=None,\n",
+    "                              image_size=0.2,\n",
+    "                              fwhm_star=0.03,\n",
+    "                              position=(476, 436, 0.1))\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('extract')"
+   ]
+  },
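The 57 pixel crop size follows from the `image_size` argument and the pixel scale set in the configuration file. A quick check (the exact rounding to an odd number of pixels is internal to PynPoint, so the final value of 57 is taken from the output above):

```python
pixscale = 0.0036   # arcsec per pixel (PIXSCALE in PynPoint_config.ini)
image_size = 0.2    # arcsec (image_size argument of StarExtractionModule)

npix = image_size / pixscale  # about 55.6 pixels, consistent with 57 x 57
```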
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's have a look at the first image from the processed data that is now centered with pixel precision. The data can be read from the database by using the [get_data](https://pynpoint.readthedocs.io/en/latest/pynpoint.core.html?highlight=get_data#pynpoint.core.pypeline.Pypeline.get_data) method of the `Pypeline`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "<matplotlib.image.AxesImage at 0x145d1c690>"
+      ]
+     },
+     "execution_count": 13,
+     "metadata": {},
+     "output_type": "execute_result"
+    },
+    {
+     "data": {
+      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAD4CAYAAAAq5pAIAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/Il7ecAAAACXBIWXMAAAsTAAALEwEAmpwYAAAWfUlEQVR4nO2dX4wd5XnGn2fP/rMNxJg/loVRTQVtykVipBUhggswJaI0ClwglBRVvrDkm1QiSqQEWqlSpF4kNyG5qBJZBcUXaYBCkBFKGxxjFEVqDEuBxOAQG2QSXNtbJzYY7F3vn7cXZ2Dne2d35szOzDlnz/f8pKM938ycmXfP7Lvf98z7fu9HM4MQYvAZ6rUBQojuIGcXIhLk7EJEgpxdiEiQswsRCcPdvNgox2wc67p5SSGiYhof4oLNcKl9XXX2cazDZ3h7Ny8pRFQcsH3L7tMwXohIkLMLEQlydiEiQc4uRCTI2YWIBDm7EJHQ1dCb6GPoQrOaDTlwqGcXIhLk7EJEgobxMeGH6lWO1TB/1aGeXYhIkLMLEQlydiEiQZp9kCijyZu8lvR8X6KeXYhIkLMLEQlydiEiQZp9NdFNTS4GDvXsQkSCnF2ISOhoGE/yKICzAOYBzJnZBMkNAB4HsAXAUQD3mdnpZswUQlSlTM9+m5ltNbOJpP0ggH1mdh2AfUlbVIVc/rVayPsdVtPvMWBUGcbfDWB38n43gHsqWyOEaIxOnd0APEfyZZI7k20bzex48v4EgI1LfZDkTpKTJCdnMVPRXCHESuk09HaLmR0jeSWAvSR/m95pZkZyyRxJM9sFYBcAXMINyqMUokd01LOb2bHk5xSApwHcCOAkyU0AkPycaspIURMcWnz11I4CTS+93wiFd53kOpIXf/QewOcAHATwDIDtyWHbAexpykghRHU6GcZvBPA02/9VhwH8u5n9F8mXADxBcgeAdwDc15yZQoiqFDq7mb0N4NNLbP8jAK3SKMQqQbnxg0yeNq+q222h2udXimrjrRilywoRCXJ2ISJBw/heUyWc1MsQWpVrNykBNMxfFvXsQkSCnF2ISJCzCxEJ0uyriZI6mUMrfx5gCw1q27J6v06Nn9b0kel39exCRIKcXYhIkLMLEQnS7E3TxTh6FY3e5LmKKHw+kPc9VNHz/t4MuIZXzy5EJMjZhYgEObsQkSDNDpTX1U1quxI6vVBXN5k777VyBV1d9Hvkanp/XWn4ZVHPLkQkyNmFiAQ5uxCREKdmr1qSuEcljTPa1uvVAu3LGu02K3OuVtgsiqs73Z3+vUvH5HtVPqsPUc8uRCTI2YWIBDm7EJEQj2bv16WDcuLTZTV6RpMP5ZWSrlarrdS3uRDqZvNmlZg777+TRufdDxjq2YWIBDm7EJEwuMP4fh22FxAMU92wnS0/jHftlgtx+XOX+U6KUnHd8NnSw343bPd20g/r4YbiOeGzrg7bB6wstXp2ISJBzi5EJHTs7CRbJF8h+WzSvobkAZJHSD5OcrQ5M4UQVSmj2R8AcAjAJUn72wAeNrPHSP4AwA4A36/Zvs4po0d7mVJZNO00tT+j0Z0mp9foFdNpw8+W06ucn1/c5Y9N7VsSb5fCaY3QUc9OcjOAvwXwb0mbALYBeDI5ZDeAexqwTwhRE50O478L4OsAPuoCLwNwxszmkva7AK5a6oMkd5KcJDk5i5kqtgohKlDo7CQ/D2DKzF5eyQXMbJeZTZjZxAjGVnIKIUQNdKLZbwbwBZJ3ARhHW7N/D8B6ksNJ774ZwLHmzKyZPtLoubrbx9Ezbad1/bm87k7tZ9G5PT527jS7pa5FzIX78s9crOlTlE6XbfL5zCorY1XYs5vZQ2a22cy2APgigOfN7H4A+wHcmxy2HcCexqwUQlSmSpz9GwC+SvII2hr+kXpMEkI0Qal0WTN7AcALyfu3AdxYv0lCiCYY3Nz4PiUzbdVPU03r7kxcvSAXfqjg+OHU7R4u0Pe
+7TX7XKiz00ebj8H7PHrUh6a8do7SZYWIBDm7EJEgZxciEqTZ66ZE7juwxBzzvJz0TK670+ijI+H+kfD2Wnq/0+zm55z7mLHX6Jxd3k7/Wa+jM5q+qGxV50tLFWr4ppZ/BsJ714cxd/XsQkSCnF2ISNAwfin8EKxCiauiUFtemmq2WqwP07nP+mH7mnAugo0tDuNtxA3jvZpww1/O+rRTtz/Vpkt/tYIquOZDjJnPpxpFw/Kyw/oS51rtqGcXIhLk7EJEgpxdiEiQZgeqa/R0KamC0lCF5ZzTGt5r2REXWht3mnzteNBeWBMevzC2eLszOtpL2fn8Ka30obtU+Ixz+Wm8aOWH4jyBhm85DZ75bP6KsRxaXocXhunKaPg+nP6qnl2ISJCzCxEJcnYhImFwNHsZTVR1aai8lMui8s0+rp5nS94UVWTj6Atrw9L9816zj+bE9DNx9XD/kJ+mOuyeRcylnzXkL1tl5lNz3f5lrUT22YHf76fiZjR+ql2g5wdtuqx6diEiQc4uRCTI2YWIhMHR7GWoMfcdcKWk8spMLXGtzP5UfruPq9t4qMltNLx9C6Mt1w7/l8+PLbbNa9n81Z4xNBt+Z63zoS3D5xZFfutsaJfPLeB0eG6vjEkf40+185aKBrKlupyGD5ap8l1dUUXrOuPuni7E4dWzCxEJcnYhIkHOLkQkxKnZy+Lz23PmqJfW6P5c6Vj6mFvy3rXTue7tdnju+RzNPj8aXnfBpd3Pj4T7h5yeHT4f6tXR1LVGna5u+ZJWBXn3GdKxcf/1ZkpeuXP7ufOBHW7efOY+F8Td65z/3oVcevXsQkSCnF2ISJCzCxEJ0uxLUaTR8+aoV9HoAJhqm58z7vLRfR25ufGwfeGSsD2zfvHa0xtCO2Y/EWrE+bGwPTQbHj/yXnjutScXbbtoKJxXP+607pDTypmy1TkxafN638fkff81Fy4fHQTX/XV9u/OVpBNbaixT3YCGV88uRCQUOjvJcZIvknyN5Oskv5lsv4bkAZJHSD5OcrToXEKI3tHJMH4GwDYz+4DkCIBfkvxPAF8F8LCZPUbyBwB2APh+g7bWR9Uprn4oHqTLFkxx9aWm3DA+WKnFl4YeKUiHXRO2L1wcXvv8FYvt85vD4e36Te8H7Q3rzgXtDy+E/8tPnvxEaNvw4nTb4enQzuFz4WdHz7v5s5mh+fJDWNKNrf302IKheTpUVzgwLgjFefq9THVhz25tPkiaI8nLAGwD8GSyfTeAe5owUAhRDx1pdpItkq8CmAKwF8BbAM6Y2Ufdw7sArlrmsztJTpKcnMVMDSYLIVZCR85uZvNmthXAZgA3Avhkpxcws11mNmFmEyMYK/6AEKIRSoXezOwMyf0APgtgPcnhpHffDOBYEwbWRhWdXqIcdHZV1oIyVHltn+o55FNcw3PPrg33T1/uNPvVi1r52mtPBPtuu+J3QfvykbNB+/D5jUF739xfBO2zU4u6PJOKO+x+D/fsAW712cx3mJ6m6vdlQmsFSnxo+e+3KukQbT+WtOrkafwVJNcn79cAuAPAIQD7AdybHLYdwJ6GbBRC1EAnPfsmALtJttD+5/CEmT1L8g0Aj5H8FwCvAHikQTuFEBUpdHYz+zWAG5bY/jba+l0IsQpQuixQnB7b4LVyS0sXLNm84Kahzq4L2zMbQt248erTH7+//6oDwb57L/p90PYpr78a+0PQfufchqA9edH6j9+np9K27cxP88WC+zP03//soi7Plo72Zald2aoSujyzlLSPxPdBrLwKSpcVIhLk7EJEgpxdiEiQZu830pqz5LRGXxrZ3NLITK3LfHZ+TbDvbReuXrAw2/H5Dz4VtN88dWXQHn5v8eItVyra40teD/klm3yq/FzOXFP32aLceF+KOibUswsRCXJ2ISJBzi5EJEiz9xqvV9P4ed4FetPHlG3IlYNKafb3nGY/OhvGzf9v7pKg/avT1wTt90+tC9rr3lu89vC0W3LJy2YXd/dx9ta
8+8BsiT7Jf0dVNLqP969yva+eXYhIkLMLEQlydiEiQZp9BfhYbjofO7svZ/lhIBscT8eUh8P48tCFsN1y2nj0g/DaY6fCc//v7y/7+P1TM1uDfc+N/1XQPjsdFho5c+LioD1+LJyDPv7HxWsPTxfEuh2ZUtJ+meVUbnw6Tx4ALBOj9zXqcuLwPo9+wGPy6tmFiAQ5uxCRMLjD+Aolh3xJoWwJYT9FM6dskm+7cJq5FU3SpZIzaaIXwiFsazrcP3o2PNeaU3767OLQ+4PTlwa7zrpDh8+Fdl9yOtw//qfwO0pf28sL+lCax3/fc+77Tq0gY74MlVtdpjD0lr5XeWHPuumD6bDq2YWIBDm7EJEgZxciEgZXs5dYGbQ0Tgemo2eZskhFoTgXKrKUJGXLa/Zw2aTWhxeC9thp/787DI+1Zhb3z427Q30FrAtOk38Ytkc+DH+PoQuL7aFZvwKsO3bWhRTPhb8Xp8PfK/i9vUb3KcV5obb2BixLUaitQHfXWj66hlVbPerZhYgEObsQkSBnFyISBlezl8FrMVcyuDDuni5nzAJd59r0Oi9VSsov95RdWipsjzid15oOY9JjZxZv90LLnds/4sjEvpdf+jiDf07h4uZDM6FdnHYLfs6Emt1mU5rdx9k9eXF1187o+Trpg7i6Rz27EJEgZxciEuTsQkSCNPtSFGj43I96vVqQf+1LSQWtuXyNXpQ50HJamTPp6bMl/88XlHsOruOnqPpc9wv5cXVz+zOx9MAOl9fg4/B5eQ9e35fU2f0eV/eoZxciEjpZn/1qkvtJvkHydZIPJNs3kNxL8nDy89KicwkhekcnPfscgK+Z2fUAbgLwZZLXA3gQwD4zuw7AvqQthOhTOlmf/TiA48n7syQPAbgKwN0Abk0O2w3gBQDfaMTKPiM37u6WEPZVpzIa3i3ZnNacXpNnNGLR8wGfS5+6lg27OflF8/DL4HQzXSkpb1cQRwfy8929Ri/Ifc9o+PTn646F92FsPU0pzU5yC4AbABwAsDH5RwAAJwBsrNc0IUSddOzsJC8C8BSAr5jZ++l91v73uuTjRJI7SU6SnJzFzFKHCCG6QEfOTnIEbUf/kZn9JNl8kuSmZP8mAFNLfdbMdpnZhJlNjGBsqUOEEF2gULOznZD9CIBDZvad1K5nAGwH8K3k555GLOwHKsTd8+a+A8jq25RWzuTR+3O7ad+ZZwler7YWdXqmvp1f6qisZk9rZa+5XT67+f0Fc9LT32FGo2c0ecGSWSV0da1x9MzJu1+mupOkmpsB/D2A35B8Ndn2j2g7+RMkdwB4B8B9jVgohKiFTp7G/xLLJ2vdXq85QoimiCddNj1sqrNEVeY6BcPEhXwJEJS4ckPUojCTL3llc+5arVS7FYbeiqbPZg3NCYEVpKwWDq3zwms5U1aBJSSC/06qDM3LhNZ6MEwvQumyQkSCnF2ISJCzCxEJ8Wj2BknrQPoQVlnSYaaiMF3GjvxUXKZP6JehqjP0VkWTA1ldnqPZ+0ajA32p09OoZxciEuTsQkSCnF2ISJBm7zZldKALV2dKXGP5VNv2AU4bp/Wu1/NFGr3Ms4gCnVyo0fPyC4rKUJWh6pTUPtfoHvXsQkSCnF2ISJCzCxEJ0uw1ky1ZVWMefiZe7TV8Qe58Wpd7Pe+vldH/Bf1CXsnsIo3uD88rHV1ZZ1f4/CrT6B717EJEgpxdiEiQswsRCXFqdq+9mpzfXkCV3O3M0tGePE1foD8zcfcK8exsXL1cqahS31Gd5ZxXuUb3qGcXIhLk7EJEQpzD+KqUqDbbZIXSonPnDvOLymMtvQzAIj6kWONQO/f3anLVlQEbtnvUswsRCXJ2ISJBzi5EJEiz10FaR5ZZLaboXGXxU2BzdbSbHls2rTcnElf7cwqF02pBPbsQkSBnFyIS5OxCRII0e900GQcue+0Szw8aXbG0+OK9u3ZEqGcXIhLk7EJEQqGzk3yU5BTJg6ltG0juJXk4+Xlps2YKIar
SSc/+QwB3um0PAthnZtcB2Je0Vy9m4WtQsIXOX720o9FrD+i9XQGFzm5mvwDwJ7f5bgC7k/e7AdxTr1lCiLpZ6dP4jWZ2PHl/AsDG5Q4kuRPATgAYx9oVXk4IUZXKD+isXYZk2fGRme0yswkzmxjBWNXLCSFWyEqd/STJTQCQ/Jyqz6QeQIYvUUye7u6mJhcds1JnfwbA9uT9dgB76jFHCNEUnYTefgzgvwH8Jcl3Se4A8C0Ad5A8DOCvk7YQoo8pfEBnZl9aZtftNdsihGgQ5caLNr1cVkl0BaXLChEJcnYhIkHDeKC3K8RUSeFUmFCUQD27EJEgZxciEuTsQkSCNHvddHMaZR+tRts3RD6NNQ/17EJEgpxdiEiQswsRCdLsSyHdt3rQveoY9exCRIKcXYhIkLMLEQnS7INEk3H3Mucu0tFV7JJGXzHq2YWIBDm7EJEgZxciEqTZB5k8nV1V+5bR8MrZ7wvUswsRCXJ2ISJBzi5EJEizx0S/xKjL5gP0i92rHPXsQkSCnF2ISNAwXtRP0bBcobieoJ5diEiQswsRCZWcneSdJN8keYTkg3UZJYSonxU7O8kWgH8F8DcArgfwJZLX12WY6HPI8CX6nio9+40AjpjZ22Z2AcBjAO6uxywhRN1UcfarAPwh1X432RZAcifJSZKTs5ipcDkhRBUaf0BnZrvMbMLMJkYw1vTlhBDLUCXOfgzA1an25mTbspzF6VM/tyffAXA5gFMVrt0UsqtT2hms/WdXm5jt+rPldtBWmHdMchjA7wDcjraTvwTg78zs9Q4+O2lmEyu6cIPIrnLIrnL02q4V9+xmNkfyHwD8DEALwKOdOLoQojdUSpc1s58C+GlNtgghGqRXGXS7enTdImRXOWRXOXpq14o1uxBidaHceCEiQc4uRCR01dn7aeIMyUdJTpE8mNq2geRekoeTn5d22aarSe4n+QbJ10k+0A92JTaMk3yR5GuJbd9Mtl9D8kByTx8nOdpt2xI7WiRfIflsv9hF8ijJ35B8leRksq1n97Jrzt6HE2d+COBOt+1BAPvM7DoA+5J2N5kD8DUzux7ATQC+nHxHvbYLAGYAbDOzTwPYCuBOkjcB+DaAh83sWgCnAezogW0A8ACAQ6l2v9h1m5ltTcXXe3cvzawrLwCfBfCzVPshAA916/rL2LQFwMFU+00Am5L3mwC82WP79gC4ow/tWgvgfwB8Bu2MsOGl7nEX7dmMtuNsA/AsAPaJXUcBXO629exednMY39HEmR6z0cyOJ+9PANjYK0NIbgFwA4AD/WJXMlR+FcAUgL0A3gJwxszmkkN6dU+/C+DrABaS9mV9YpcBeI7kyyR3Jtt6di9Vg24ZzMxI9iQuSfIiAE8B+IqZvc/UfPFe2mVm8wC2klwP4GkAn+yFHWlIfh7AlJm9TPLWHpvjucXMjpG8EsBekr9N7+z2vexmz1564kwPOElyEwAkP6e6bQDJEbQd/Udm9pN+sSuNmZ0BsB/t4fH6ZJ4E0Jt7ejOAL5A8inZNhW0AvtcHdsHMjiU/p9D+53gjengvu+nsLwG4LnlKOgrgiwCe6eL1O+EZANuT99vR1sxdg+0u/BEAh8zsO/1iV2LbFUmPDpJr0H6WcAhtp7+3V7aZ2UNmttnMtqD9N/W8md3fa7tIriN58UfvAXwOwEH08l52+YHFXWjPlHsLwD91+4GJs+XHAI4DmEVb0+1AW+vtA3AYwM8BbOiyTbegrfN+DeDV5HVXr+1KbPsUgFcS2w4C+Odk+58DeBHAEQD/AWCsh/f0VgDP9oNdyfVfS16vf/T33st7qXRZISJBGXRCRIKcXYhIkLMLEQlydiEiQc4uRCTI2YWIBDm7EJHw/+yrlk5mujYuAAAAAElFTkSuQmCC\n",
+      "text/plain": [
+       "<Figure size 432x288 with 1 Axes>"
+      ]
+     },
+     "metadata": {
+      "needs_background": "light"
+     },
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "data = pipeline.get_data('crop')\n",
+    "plt.imshow(data[0, ], origin='lower')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "After the approximate centering, we apply a relative alignment of the images with the [StarAlignmentModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.processing.html#pynpoint.processing.centering.StarAlignmentModule) by cross-correlating each image with 10 randomly selected images from the dataset. Each image is then shifted by the average offset from the cross-correlations with these 10 images."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 14,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "-------------------\n",
+      "StarAlignmentModule\n",
+      "-------------------\n",
+      "\n",
+      "Module name: align\n",
+      "Input port: crop (70, 57, 57)\n",
+      "Aligning images... [DONE]                      \n",
+      "Output port: aligned (70, 57, 57)\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = StarAlignmentModule(name_in='align',\n",
+    "                             image_in_tag='crop',\n",
+    "                             ref_image_in_tag=None,\n",
+    "                             image_out_tag='aligned',\n",
+    "                             interpolation='spline',\n",
+    "                             accuracy=10,\n",
+    "                             resize=None,\n",
+    "                             num_references=10,\n",
+    "                             subframe=0.1)\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('align')"
+   ]
+  },
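Cross-correlation based shift estimation can be sketched in pure `numpy`: the peak of the FFT-based cross-correlation between two images gives their relative integer-pixel offset (PynPoint additionally refines this to subpixel accuracy; the images below are synthetic):

```python
import numpy as np

def cross_corr_shift(image, reference):
    # Integer-pixel offset from the peak of the FFT-based cross-correlation
    corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(reference)))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    shape = np.array(corr.shape)
    shift = np.array(peak, dtype=float)
    # Peaks beyond the half-size correspond to negative offsets
    return np.where(shift > shape // 2, shift - shape, shift)

ref = np.zeros((64, 64))
ref[30:34, 30:34] = 1.  # small synthetic "PSF"
img = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)

offset = cross_corr_shift(img, ref)  # recovers (dy, dx) = (3, -2)
```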
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a third centering step, we use the [FitCenterModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.processing.html#pynpoint.processing.centering.FitCenterModule) to fit the PSF of the mean image with a 2D Moffat function. The best-fit parameters are stored in the database under the tag given by the `fit_out_tag` argument."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 15,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "---------------\n",
+      "FitCenterModule\n",
+      "---------------\n",
+      "\n",
+      "Module name: center\n",
+      "Input port: aligned (70, 57, 57)\n",
+      "Fitting the stellar PSF... [DONE]\n",
+      "Output port: fit (70, 16)\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = FitCenterModule(name_in='center',\n",
+    "                         image_in_tag='aligned',\n",
+    "                         fit_out_tag='fit',\n",
+    "                         mask_out_tag=None,\n",
+    "                         method='mean',\n",
+    "                         radius=0.1,\n",
+    "                         sign='positive',\n",
+    "                         model='moffat',\n",
+    "                         filter_size=None,\n",
+    "                         guess=(0., 0., 10., 10., 10000., 0., 0., 1.))\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('center')"
+   ]
+  },
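The circular form of the 2D Moffat profile that is fitted here is $f(r) = A\,(1 + r^2/\alpha^2)^{-\beta}$. A minimal sketch for evaluating it on a pixel grid (the module's elliptical model has more parameters, as the `guess` tuple above suggests; this circular version only illustrates the profile shape):

```python
import numpy as np

def moffat_2d(x, y, x0, y0, alpha, beta, amp):
    # Circular 2D Moffat profile: alpha sets the width, beta the wing slope
    r2 = (x - x0)**2 + (y - y0)**2
    return amp * (1. + r2 / alpha**2)**-beta

# Evaluate on a 57 x 57 grid, matching the cropped image size
y, x = np.mgrid[0:57, 0:57]
psf = moffat_2d(x, y, x0=28., y0=28., alpha=5., beta=2., amp=1e4)
```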
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The processed images from the `StarAlignmentModule` and the best-fit parameters from the `FitCenterModule` are now used as input for [ShiftImagesModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.processing.html#pynpoint.processing.centering.ShiftImagesModule). This module shifts all images by the (constant) offset such that the peak of the Moffat function is located in the center of the image."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 16,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "-----------------\n",
+      "ShiftImagesModule\n",
+      "-----------------\n",
+      "\n",
+      "Module name: shift\n",
+      "Input ports: aligned (70, 57, 57), fit (70, 16)\n",
+      "Shifting the images... [DONE]                      \n",
+      "Output port: centered (70, 57, 57)\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = ShiftImagesModule(name_in='shift',\n",
+    "                           image_in_tag='aligned',\n",
+    "                           image_out_tag='centered',\n",
+    "                           shift_xy='fit',\n",
+    "                           interpolation='spline')\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('shift')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's have a look at the central part of the first image. The brightest pixel of the PSF is indeed in the center of the image as expected."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 17,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "(17.0, 40.0)"
+      ]
+     },
+     "execution_count": 17,
+     "metadata": {},
+     "output_type": "execute_result"
+    },
+    {
+     "data": {
+      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAQEAAAD8CAYAAAB3lxGOAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/Il7ecAAAACXBIWXMAAAsTAAALEwEAmpwYAAATVUlEQVR4nO3dW4zc5XnH8e9vZmd37RhiTB3XsomgIYWiqDWp4xLRi8gVFUpQQiQUEaXIF0hOqkYCJW04XEHVSKVqILlK5YQEX1AFC1CIUKvKCkYqUjEyYA7GUUISaEHGJoABg72neXoxf7fL7tr/Z9ZzWPv9faSVd2bfff/PHPyb07Pvq4jAzMrVGHYBZjZcDgGzwjkEzArnEDArnEPArHAOAbPCpUNAUlPS05Ierk5fIGm3pBcl3SdptH9lmlm/dPNM4AZg/6zTdwB3RcSFwFvA9b0szMwGIxUCktYDnwN+WJ0WsBm4vxqyHbi6D/WZWZ+NJMd9F/gWcFZ1+lzgcERMV6dfAdYt9IuStgJbAZo0/3Q5Z88ZkDl8alByqtxcvawrO2wIkw1Yojs128Day07XxFy97avtX5fusXiPyZjo6k5SGwKSrgIORcSTkj7TbVERsQ3YBnC2VsWfNf/yg/M3EvUq96pFzcS4ZjM5V2Jcci5SlzEbKInLmDneMMzMJMa0U1PF9HT9oHZurkxd6fb6diboknUtwuPT/9H172SeCVwOfF7SZ4Fx4Gzge8BKSSPVs4H1wKtdH93Mhq72YSUibomI9RFxPnAt8EhEfAXYBVxTDdsCPNS3Ks2sb06lT+Am4BuSXqTzHsHdvSnJzAYp+8YgABHxKPBo9f1vgE2nXEHiNW7qtT6kXqOnXutD7nV19rV3I1NX7y4jjeRcvXrvIPk6PvO+R5B4rQ8o6i9jZN8TOI3F3PcgFvGeozsGzQrnEDArnEPArHAOAbPCOQTMCucQMCucQ8CscA4Bs8J11Sx0yrTAHwylmnKyzS89/OOaVFNO8o+RRlv1g0aSTUyJuiI7V0biD2c0nfjDoKRs+1LqD3qyTUypuXp3GbN/EJf5Q6N5/58W0R/lZwJmhXMImBXOIWBWOIeAWeEcAmaFcwiYFc4hYFY4h4BZ4RwCZoUbbMdgp2Xwg+dkl9oetERdynbmJToGI9NVCMRY4ibLdlhmZJbomkguCZYZlF3aO9ENqGauyy+zDFn2fppY9Sy3LDnkOgt7sHy5nwmYFc4hYFY4h4BZ4RwCZoVzCJgVziFgVjiHgFnhHAJmhRtwsxDzl/fKNLYkGzVSDR3JvQg1krhqxkZTc8V4/bh2YgxAjCWWF8suoZboWWlM93A/v8xSZcn9A5VY7ivayds60byT3eIvVVcvH3rbcyfrvvnOzwTMCucQMCucQ8CscA4Bs8I5BMwK5xAwK5xDwKxwDgGzwjkEzAo30I5B0eflxFKbmyaXBBsfqx0Sy8dTU7WX13cDzoznbooYqc9tJZfo0nS2D65uouRtmhk3jLky95setvllb5/U0m7ND9auqe7rqb1kksYlPSHpGUn7JN1enX+PpN9K2lt9bej+8GY2bJmHnwlgc0QckdQCHpP079XP/i4i7u9feWbWb7UhEJ2N4I9UJ1vVV4+eR5rZsKVe6EhqStoLHAJ2RsTu6kfflvSspLsk1b+INrMlJxUCETETERuA9cAmSZ8AbgEuBj4FrAJuWuh3JW2VtEfSnkkmelO1mfVMV295RsRhYBdwZUQciI4J4MfAphP8zraI2BgRG0fxkwWzpSbz6cBqSSur75cBVwC/kLS2Ok/A1cDz/SvTzPol8+nAWmC7pCad0NgREQ9LekTSajof/+8Fvta/Ms2sXzKfDjwLXLrA+Zu7Ppo0fzmxzJ5/ySXBMkuHKbvnX2Jceyw3V6YRaGY8dxnbrR42rSSW1WpM1dfVnMjV3kg0OjWSS6NlRmWbcjKjRG6/xcz9ObLNQotprFvE77ht2Kx
wDgGzwjkEzArnEDArnEPArHAOAbPCOQTMCucQMCucQ8CscIPfkHROR1OqGzC7iWgr0cGX3kQ00TGYXBKsPVpff7YTsD2a6Ehr5rrGkvt11mpO5iYaea9+XKuVm6uZ6TTNLrOWGJNeQCO16WpytljEZrDuGDSzbjkEzArnEDArnEPArHAOAbPCOQTMCucQMCucQ8CscINtFtICzUGZRqCRZJmJZqFoJff8G60f1x7LNbbMLKvP2unEGIDJD2XmSk3FzHii8ShRVvNY7nhjb9dPtiyxBBnA2ExmabTckmCaqW/KybbgRGKu9ENvpllobnPSIlYk8zMBs8I5BMwK5xAwK5xDwKxwDgGzwjkEzArnEDArnEPArHAOAbPCDXh5Mc3vEGzW55BGkutgJboBsx2D7cQyV+3RXIZOLa8fd2xVbq6jH6lvCTu2ZiY1lz48WT+mWd+ZN/NObsm2sYOJjVlfTm5I2h6rP95U8nrIdPklZa4v2snjJdo1529u6uXFzKxLDgGzwjkEzArnEDArnEPArHAOAbPCOQTMCucQMCvcwJcXozGnmaGHexFGovGI5PJVmf38ZrLNQoklwY6tyjV5HF1Xv2TWmo++mZrrwpW/qx3TUH1jyy/f+kjqeAdjVe2YscO5u+T0m/XXaSuxBySAEg1k2X0NU41A7eRjb2IuzcxpiPLyYmbWrdoQkDQu6QlJz0jaJ+n26vwLJO2W9KKk+yTlekfNbEnJPBOYADZHxJ8AG4ArJV0G3AHcFREXAm8B1/etSjPrm9oQiI4j1clW9RXAZuD+6vztwNX9KNDM+iv1noCkpqS9wCFgJ/Br4HBEHH+X6hVg3Ql+d6ukPZL2TLaTC9Sb2cCkQiAiZiJiA7Ae2ARcnD1ARGyLiI0RsXG0Mb64Ks2sb7r6dCAiDgO7gE8DKyUd/2xlPfBqb0szs0HIfDqwWtLK6vtlwBXAfjphcE01bAvwUJ9qNLM+ynRmrAW2S2rSCY0dEfGwpBeAn0j6B+Bp4O4+1mlmfVIbAhHxLHDpAuf/hs77A92R5pxMtDhlOgEXmHshkTkeEInOwunEsmEAx86tP+b75+U2z7zoD+tfdV237vHUXJ8c+5/aMW8nlvF6YGxj6ng/Pbyidsz0eK5jcKaVuK0TS8QBxFgPG2fndvAtIL2cWeYukV2q7CTcMWhWOIeAWeEcAmaFcwiYFc4hYFY4h4BZ4RwCZoVzCJgVbsB7ES4g2bwzcImy2slrb6a+34bG2VOpuS7+8MHaMZkmIIA/Gl1eO+aV6SO1Y1Y0J1LHUyPR2JK8O8TcZeoW0G7lHuMayaaiDGWuinZyqbIB8TMBs8I5BMwK5xAwK5xDwKxwDgGzwjkEzArnEDArnEPArHDDbxbK7PGWba5IzJXeUy5zuGxjS6IXRY1cXZOJDqWXp89JzTUR79SOefzoRbVjHj308dTxpl9fVjvmQ/UlAdCcqr++Mg1FAO3EykLZR0vNJOpKrD4Eyb6pHtyf/UzArHAOAbPCOQTMCucQMCucQ8CscA4Bs8I5BMwK5xAwK5xDwKxwg+8YnNPhFD3s8svs8RbJvduU6FJs5Bq/UGJPuZljuZviv9+r7wb8r9ELU3ONJQp76u3z6mt6bVXueG/Ut06OvJe7rRuJjsH0UmWZvS4bvXu8zNy3gNQ+g/P+/yyigdDPBMwK5xAwK5xDwKxwDgGzwjkEzArnEDArnEPArHAOAbPCDbZZKJi/VFimcSK5HBMz9ZmmqdxcjaP1jTStI63UXONv1HetzIyPpubaN1nfvLN/xe+n5op2opvmrfq6lh3MPZYse73+th57N9nMlVjGKy21xF2uLqYT96+pRPcY5O7388Z0f734mYBZ4WpDQNJ5knZJekHSPkk3VOffJulVSXurr8/2v1wz67XMy4Fp4JsR8ZSks4AnJe2sfnZXRPxz/8ozs36rDYGIOAAcqL5/V9J+YF2/CzOzwejqPQFJ5wOXArurs74u6VlJP5K04J+3Sdo
qaY+kPZNx7NSqNbOeS4eApBXAA8CNEfEO8H3gY8AGOs8UvrPQ70XEtojYGBEbRzV+6hWbWU+lQkBSi04A3BsRDwJExMGImImINvADYFP/yjSzfsl8OiDgbmB/RNw56/y1s4Z9EXi+9+WZWb9lPh24HLgOeE7S3uq8W4EvS9pApzvhJeCrfajPzPos8+nAYyy8UNO/LeqIMafzql3fFZXqbCO3vFi2W6sxWV/XyPu57sPxw/WvutojucvYTCxD1h7NNYI2purHtN6tHzP2dq5LbTTRDdg8luvMa0zVj8su45XpPsx2mipz/5rO3QcjM27ufd7Li5lZtxwCZoVzCJgVziFgVjiHgFnhHAJmhXMImBXOIWBWuAEvLxbEnOaG5HZxuekTYzpd0AmJvedazWQTU6JpZeRo7qaYeqO+rkhGuxJ9Oc3J+tqbE8kGn8n6cY3ksmGaTjT4TOfqar4/WT/X+xOpuZionysyS5DB/EagBSdLLnt2En4mYFY4h4BZ4RwCZoVzCJgVziFgVjiHgFnhHAJmhXMImBXOIWBWuMF2DMK8jR1TXX69PH6yY1CN+nHZBG0lOtea7+duitHRZvKo9ZS58hPdjsps6NljPV0S7GiiY/BY/RhILgmWXF4s1Q047/bxhqRm1iWHgFnhHAJmhXMImBXOIWBWOIeAWeEcAmaFcwiYFW7gy4sxMzP/vLpfS06faQOKbLNQj8YA8xqkFtKcTO4f2Eo0C2WXUMuOq5NtFsqMS66WldnzL7UvIMBk/aaMMZXYuBHyjUAZiUatmHudei9CM+uWQ8CscA4Bs8I5BMwK5xAwK5xDwKxwDgGzwjkEzArnEDAr3EA7BoP5HU5KdNN1M38dtXOdX/M6sRYelJpLmQ0oJ3J1qZnI7WQnYPRwrl5RZhNOmN95upBEJyAkNwjt65JgJ5gq1WE5gA1JJZ0naZekFyTtk3RDdf4qSTsl/ar695xTrsbMBi7zcmAa+GZEXAJcBvyNpEuAm4GfR8THgZ9Xp83sNFMbAhFxICKeqr5/F9gPrAO+AGyvhm0Hru5TjWbWR129JyDpfOBSYDewJiIOVD96DVhzgt/ZCmwFGGf5ogs1s/5IfzogaQXwAHBjRLwz+2fReQdjwXcxImJbRGyMiI0tjZ9SsWbWe6kQkNSiEwD3RsSD1dkHJa2tfr4WONSfEs2snzKfDgi4G9gfEXfO+tHPgC3V91uAh3pfnpn1W+Y9gcuB64DnJO2tzrsV+Edgh6TrgZeBL/WlQjPrq9oQiIjHOPFKWn9xqgVkGiJ62lDUw+XF0k0fmf38Mo07AO3eLS+m6cS4xJ6M2eshJXtbJ5qFItt41M7tWZibaxFLgp1wrkT9i1iuby63DZsVziFgVjiHgFnhHAJmhXMImBXOIWBWOIeAWeEcAmaFcwiYFW6wG5LCorrLIhtViS4yZZfe6uXyYoklp2I6eSFTS4Il58p0A/ZS5rbPLM8Fyc1NsxulLtElwTJdkXOOt5jeTT8TMCucQ8CscA4Bs8I5BMwK5xAwK5xDwKxwDgGzwjkEzAo3+Gahxejh8lXZxqOeLmmWmauRK0yZC7BUm4UyerlUWbbxKLMkWGbvQ0g2MeXqSi2PNu8yenkxM+uSQ8CscA4Bs8I5BMwKp/Qa6L04mPQ6nY1KBuX3gN8N8Hi9djrX79qH46KIOKubXxjopwMRsXqQx5O0JyI2DvKYvXQ61+/ah0PSnm5/xy8HzArnEDAr3JkeAtuGXcApOp3rd+3D0XXtA31j0MyWnjP9mYCZ1XAImBXujAkBSedJ2iXpBUn7JN1Qnb9K0k5Jv6r+PWfYtc51ktpvk/SqpL3V12eHXetcksYlPSHpmar226vzL5C0W9KLku6TNDrsWhdykvrvkfTbWdf9hiGXekKSmpKelvRwdbq76z4izogvYC3wyer7s4BfApcA/wTcXJ1/M3DHsGvtovbbgL8ddn01tQtYUX3fAnYDlwE7gGur8/8F+Oth19pl/fc
A1wy7vuRl+Abwr8DD1emurvsz5plARByIiKeq798F9gPrgC8A26th24Grh1LgSZyk9iUvOo5UJ1vVVwCbgfur85fk9Q4nrf+0IGk98Dngh9Vp0eV1f8aEwGySzgcupZPqayLiQPWj14A1w6orY07tAF+X9KykHy3FlzLwf09H9wKHgJ3Ar4HDETFdDXmFJRxqc+uPiOPX/ber6/4uSWPDq/Ckvgt8Czi+sMC5dHndn3EhIGkF8ABwY0S8M/tn0Xl+tGRTfoHavw98DNgAHAC+M7zqTiwiZiJiA7Ae2ARcPNyKujO3fkmfAG6hczk+BawCbhpehQuTdBVwKCKePJV5zqgQkNSi85/o3oh4sDr7oKS11c/X0kn7JWeh2iPiYHUHbQM/oPMfbMmKiMPALuDTwEpJx/82ZT3w6rDqyppV/5XVS7SIiAngxyzN6/5y4POSXgJ+QudlwPfo8ro/Y0Kgei10N7A/Iu6c9aOfAVuq77cADw26tjonqv14eFW+CDw/6NrqSFotaWX1/TLgCjrvaewCrqmGLcnrHU5Y/y9mPXCIzmvqJXfdR8QtEbE+Is4HrgUeiYiv0OV1f8Z0DEr6c+A/gef4/9dHt9J5bb0D+CidP2P+UkS8OZQiT+AktX+ZzkuBAF4Cvjrr/Y0lQdIf03nzqUnnQWVHRPy9pD+g8+i0Cnga+KvqUXVJOUn9jwCr6Xx6sBf42qw3EJccSZ+h80nSVd1e92dMCJjZ4pwxLwfMbHEcAmaFcwiYFc4hYFY4h4BZ4RwCZoVzCJgV7n8BSgIlQPFZebAAAAAASUVORK5CYII=\n",
+      "text/plain": [
+       "<Figure size 432x288 with 1 Axes>"
+      ]
+     },
+     "metadata": {
+      "needs_background": "light"
+     },
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "data = pipeline.get_data('centered')\n",
+    "plt.imshow(data[0, ], origin='lower')\n",
+    "plt.xlim(17, 40)\n",
+    "plt.ylim(17, 40)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Masking the images"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Before running the PSF subtraction, we use the [PSFpreparationModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.processing.html#pynpoint.processing.psfpreparation.PSFpreparationModule) to mask the central part of the PSF and to create an outer mask with a diameter equal to the field of view of the image. The latter is achieved by setting the `edge_size` argument to a value that is larger than the field of view."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 18,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "--------------------\n",
+      "PSFpreparationModule\n",
+      "--------------------\n",
+      "\n",
+      "Module name: prep1\n",
+      "Input port: centered (70, 57, 57)\n",
+      "Preparing images for PSF subtraction... [DONE]                      \n",
+      "Output port: prep (70, 57, 57)\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = PSFpreparationModule(name_in='prep1',\n",
+    "                              image_in_tag='centered',\n",
+    "                              image_out_tag='prep',\n",
+    "                              mask_out_tag=None,\n",
+    "                              norm=False,\n",
+    "                              cent_size=0.02,\n",
+    "                              edge_size=0.2)\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('prep1')"
+   ]
+  },
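+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As an aside, the two circular masks can be sketched with plain NumPy (illustration only, not part of the pipeline; the pixel scale used here is a placeholder value):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Illustration only: circular masks similar to those applied by\n",
+    "# PSFpreparationModule, assuming a pixel scale of 0.0036 arcsec (placeholder)\n",
+    "size, pixscale = 57, 0.0036\n",
+    "y, x = np.mgrid[:size, :size] - size // 2\n",
+    "radius = np.hypot(x, y) * pixscale  # radial distance in arcsec\n",
+    "mask = (radius > 0.02) & (radius < 0.2)  # cf. cent_size=0.02, edge_size=0.2"
+   ]
+  },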
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's have a look at the first image and show it on a logarithmic color scale."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 19,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "<matplotlib.image.AxesImage at 0x145e778d0>"
+      ]
+     },
+     "execution_count": 19,
+     "metadata": {},
+     "output_type": "execute_result"
+    },
+    {
+     "data": {
+      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAD4CAYAAAAq5pAIAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/Il7ecAAAACXBIWXMAAAsTAAALEwEAmpwYAAAodUlEQVR4nO2da6xc13Xf/+vM895LihTJK5omFdNOhRhqkdgooTqwPzhyZVFuEPuDbcRNDRVRwARJCqdJEcspWiBoUThFEcet27iCJVgtXMt2EkOCEFFiFRlFgEI2WcuOZNmWLEiQaEq8lMTXvXce55zVD3fI2eu/OefMfc6QZ/0AgrNnnzlnz2Pfs/97vURV4TjOtU8y6QE4jrM1+GR3nIrgk91xKoJPdsepCD7ZHaci1LfyYnv27NGDBw9u5SUdp1KcOHHijKrOX6lvSyf7wYMHcfz48a28pONUChF5aVSfL+MdpyL4ZHecirCly3innNtnP3n5saap6UtmZ+3BtZppSqtp++v269XtwetFbF9Cf/drth/kaSlpbvuzoJ1ldKxto2/fF3J7Lu31Rh5/9Oy9cNaG39kdpyL4ZHeciuCT3XEqgmv2LeaDzU/YJ1h3N4e6W1hHt1p0bIP6rWbXhv1683bQz3/m+VoM6eqI4PVSs+dS2h+g3YB4P4Damgw/o8N7f9u+lvY1jr5+T/E4K4zf2R2nIvhkd5yK4Mv4Dea25GP2CVrCsvlMaBkfLutl107TpbRs5+VxPtemc1F/Y3hurdNSm45N+nbZnpA1TGnYCNs5LdszMq3R+xA2xbG0Cd8nmxOXrFnv8Pxv2nPRZ3T09BdRVfzO7jgVwSe741QEn+yOUxFcs6+B22oft08EpiJpkMsqIW1rPmMNKkFbZ60G1wYJZTKX5bOk6RNyiQ10ed5k8xgPFIVIUT9pdmQlprUS05uBTYT8+RUOjDR9bq9zrZvt/M7uOBXBJ7vjVASf7I5TEVyzj8Ht2/+5aSfkthoicxSGSrpQ2qTDZ+y5Qhu0NsmmTLbxbMZq9KxVbDsPhbaWecfSa7Vt9wuSZHQIbK1LIa1s06drSa9v+zlUNzyWbPKhezGP44qvD2z42u2avsO7fsO0j77xpcJzXW34nd1xKoJPdsepCGMt40XkRQAXAGQAUlU9JCK7AHwNwEEALwL4uKq+uTnDdBxnvaxGs/+Sqp4J2ncDeFxVPysidw/an97Q0W0RkT8162wKFeUUTqFtN9KQDIelsr97oNPZrp43SLOTRs9Zs0d29uAxh50WhJVesZ+3AwqkspBm53MlSnsgRbZy3iugcFr2w480fOgTwemw6NjDN/yWfS2H015lmn49y/gPA7h/8Ph+AB9Z92gcx9k0xp3sCuAxETkhIkcGz+1V1VODx68C2HulF4rIERE5LiLHFxYW1jlcx3HWyrjL+Pep6kkRuQHAMRH5Ydipqipy5YWcqt4D4B4AOHTokBeDd5wJMdZkV9WTg/9Pi8g3AdwC4DUR2aeqp0RkH4DTmzjODeWOt/6uaUuDdDRrRvbHvnDRtkPdXWAjBq5gVyc9W2Qn1kaxHZ01et6w7awxWgsLifB8xvYnqR1Xna4lwT4H7y1w2mk+V9a2P0OOpQ/3UBJ6z9K3Nn2ll7IuR2DTl4z86ul75lTeENt/tdnlS5fxIjInItsvPQbwQQBPA3gIwJ2Dw+4E8OBmDdJxnPUzzp19L4BvDnac6wD+l6oeFZHvAPi6iNwF4CUAHy84h+M4E6Z0sqvqCwB+4QrPvw7gA5sxKMdxNp7K+Mbfsf9fDBsUA83liqJ+suWybTzU/Frn5GyWOPcbx6gHw2rZPm5zTHrWJI3eWo1mt+2cU+NZ9/VoXyPU7BLpZs5vxzb7kvx3gcbnzy8hzS4UO6+8XxDGB/B7oDJVkd6Ptlfs7ySMoXj0wpcxbbi7rONUBJ/sjlMRrtllPLs6ShiWSsv2qGooL+coPVQUxhpWQ8l
LXAm4+iktJdO54Tj7261cyGlZnrK7LFkQUzo+XHVGaagIXoonGZnm6lGg6sjXRstlyngdLfvpWrVeIBHa9j3XOrSsp/BadtUNw22F03wt2ZBXDr2NTLJ91jZDDu8+YtrTkPLK7+yOUxF8sjtORfDJ7jgV4ZrR7OwCy66NxuWVQ1TZRZVLDFGIa2HFUz4Xl2hqUUgrh60G6Z+ydrFpLW2ze6y9NJve8vBtrFKzs22OdbU53yojIBL2Ss35fQWanU1r9PlyJi52zQ1djNn1NqH9lEizM1ymKvydkDvxNLjW+p3dcSqCT3bHqQg+2R2nIlwzmj0KR+T0RKGg5VRQHHbapn7WZuxem4/u49f2r7dGZnZ5DcNSc/p2OP0z97Om536j6Vep2fM6u8eOfi2PMzqWtzUi91rbrnWGj+sd21eUemvltaTDw/fRZ3dZ+4HlCZXX7lIa63S0mzX/HnWZBj4B/M7uOBXBJ7vjVASf7I5TEa5azX777CdNO/ZXJ1Ea2EDzOZtzSamMEqdVYlt50i0oSUy+2DmlXIps5+zfHhwehayynX2O+snnPCO7uwbtUt/4Mls594dt/vjYjk5SN6E2H2/87jkzNF+Lnoj97gObPe8dNKnEFfv0k40/SgtelKaazsV+IY/89AvYbPzO7jgVwSe741QEn+yOUxGuGs1+W43yWZIPeq1NJYQaNqWzBjo93W01e3+2+GPgNEl8dBLYX9MZe93+NvKz5/TPZBcO7dls22ZN3p+z7XTOfiY5ZbUOfcy1VlLaOC8R9UUvp74k5bRT1E97IDUbVm40PscD1OliLKv5fYQ2++i74FTd5CsflcAm/wwEOp01u2yzX5Z2rN398N7fNu2jr/03bDR+Z3eciuCT3XEqgk92x6kIV41mLyqLBCCKQReypadzQwGbzhTbvjmPHNtutW71WB7kXY583TkPHJdoIl0eatJ01vb1rjNN9K+z40znSGM2OU4/6CfNzunVSs3srOnDS1Osu/bp8+sUX4w1vsmdF/ns23YUC8+ZuoPceZrysfQ7oHac9pvzbwf+AJwzgXMksK98n/IgbgJ+Z3eciuCT3XEqwtQu429LPlbYn8xS+OH2baadzts1b2/X0G6Vzox2UQXiMEquQsqujwiW7myeYddQXlay22o6M3yiu9P2dXeTfJijtMkztl0nuZHUhu0k4WV8WZvGTSvYNB2+sX6XlqxcIbZRbIrjzyRUUfx5lrr1FlDmMhwdTxV5wNVoik7IprhZctleWjbtzXCn9Tu741QEn+yOUxHGnuwiUhOR74rIw4P220XkSRF5XkS+JiLNsnM4jjM5VqPZPwXgWQCXxPCfAPicqj4gIl8EcBeAP9/g8Y1EZqzvKKeWymZt+GGYVjkOi6Q2md4S9sEk8iCsNa8Vm9Y4TLVP5rX+9vAxub+SRk/mbCxos0XtBrXrWfDY9tXoQ6glBXmnAGS5/RCX+8PP+2Jiv4sOm+K4PBSZtNiF2HxfnJqLjmVNH6XyClNJ076F0ncVuceWifzwffEmB5vtGP4MykzNa2CsO7uIHADwTwB8adAWALcC+IvBIfcD+MiGj85xnA1j3GX8nwH4Qwz/du0GcFZVL90eXgGw/0ovFJEjInJcRI4vLCysZ6yO46yD0skuIr8M4LSqnljLBVT1HlU9pKqH5ufn13IKx3E2gHE0+3sB/IqIfAhAGyua/fMAdopIfXB3PwDg5HoHU2RbT9pUNpns6tms3R9kt9UwdTJr8KRPepXSD9eWrb7NOfVUc/THyJqRSzJ1ryeX2OuHY0l3lNjRG7bNGn1b28aKzjaGBu3tDetMMFu3xu4W1WTKKff0Ymo/73NdazcOyVL7IfTZfZbdTula4XZC5P4aGeW53BN1hzqcvV1Js6NtX1zr8j4Gu9cW7HNwCbE+paXmEG16H7fv+PXLjx89d9/o6xRQemdX1c+o6gFVPQjgVwH8jar+GoAnAHx0cNidAB5c0wgcx9kS1mNn/zSA3xe
R57Gi4e/dmCE5jrMZrMpdVlW/BeBbg8cvALhl44fkOM5mMLW+8QxrGi6jnDfITsmppAMJxGV8E9JiSY/8mMk3Hnyt4FI5p38mjZ6StE0pA3Y6E4ytRemwmlbntVtWZ++YsTp8V3vJtK9rDv2v55sXTd+2utX3DTJYd6ge9PnU7qHUg5zNKW1UdFP7eaU9247Cgik0N09H6+zI7k5pqKPyUOHeTX91C1tOS51QiTHNghgJ1ujsG8+lozgtdVm56DXg7rKOUxF8sjtORfDJ7jgVYaKavTBmnX2LKa0POO0P2SVj3R30kZ1dKFV0rRPVHzKwxgzLAkW+7zPU3kYafpZTR432iY7C6OtW9+1s25joA7NnTfvG9huXH++pXzB9bQ4qJxZzu2dyJtlu2nmg0zuZ/a6WWlaPdqjdnyFNT+miQq0c5RLgfACUajrhrzKw6ffJRl/r0u+CUlxJxuManXJMKaW49GggnZI0VPT71s5wTyW0uQPj2939zu44FcEnu+NUhKk1vUmTwuPLQgQ5LLU/2nVRyGTCxwqlG8pbbObjrKPDx/1Z28eVVtnUlrc5zDJIHUXusC0yte2asaa1fTPnTPttM2dM+8bGcBm/q2ZNb02ODSXO5+3C/m6QAnaZysde6NvXLrbtd5v1yRRHy+U0CKdN6BerPVpakwWLK8TmQfUZdr1tsLkW9LvI7HfLJtywogxX701IZkq95Pcc2fmCsWXF39Uo/M7uOBXBJ7vjVASf7I5TEaZWs2u/2PzFuiXpFeuYMNUUu9ZGZhEy72jkimsP720b9vd2UMjqDnssm9qUTW3BuWuk2a+jkNWf2famaf/c7Gum/Y7WadN+S22o6dti33NGNqyO2p9GjTRkk14fute2araP220Kxe1ROq0embSy4GeqHfvhZ202uY422wFALeivN9h0Zo/lJ4TyaSXp+PdK3gdS2reQLpk+SdMnQRVYXbR7NePid3bHqQg+2R2nIvhkd5yKMFWaXVqBS2Y+2m0UiG3lSmGo0V+xIMQw6iP3WaV0RBm1uXxUN9DpfZstC/05e+5shiutki23NdR2czPWpfKGWevi+tb2WdM+0HzdtEONDgDztaE77SyblIk+7LV35TZ8dmdideNsMtxPaJHrbZ8M2imloc64IiyPJRm938IVY9lGL1ypNdDscRh0sattQudO2qz5C8qApeQAwGZ0EBQCK43h6zVZ2z3a7+yOUxF8sjtORfDJ7jgVYbo0e+gP3ye7I/sDc5qfkn5zbGr1aE4hlxn5NaezVid2d5Bm3zXUZ73rSKPPkUZv095C2wrDmbnh2PZutxr97XNWk+9vWjv7bvJ335FQKulARu4/cAobyXMv77v8mPV8jXJJ5aSNM7Jns5TuBP4GGfmnp2TrVt4PoP6sO/wuNSF/i5w1O4XAclQqjbsW+EzUl+333N9uf2N1TltNCP++gzRVQimsPjjzzy4/3i67/uGoc/qd3XEqgk92x6kIPtkdpyJMl2ZvBZqd0w8xZGdnu2REgW1SKT49m7UfS3+OyhdRjHpoS89mSzT6THGZ5e1BOug97UXTd0PTavgb6udNezdp5d01q5Xfsn9jdXrITTcOz916ZZ/p66jVmOcym097MaPcBcRyffj6jPR+t2+/Ky4l3ac01t1keK6cNHnWpXaH04LbcXGKLCPh2a8+2kKy45Ye+93T7yhMTU0p2sK9LumNnjd+Z3eciuCT3XEqgk92x6kIE9XsyXabjliCmF3W5GEq3ZV+8h1mzc6le2eGgkupxHLetLquv822e5T+uW+HjTTQ7KvV6Ntm7PvaHeSV29Miu3nNavLtiU0dvYN80jdToxfxM2TD779sNfzZzCbiO8c1sYhOoNl75GffpTJL/cz2L/bsfkEe2NK79L1HpbjJv53TVNeo1LRJJU2v5TTVfJ9NaO8BS/Z3koS2dcrHKOHeQcFWl9/ZHacilE52EWmLyLdF5Hsi8oyI/PHg+beLyJMi8ryIfE1Eird
UHceZKOMs47sAblXViyLSAPC3IvIIgN8H8DlVfUBEvgjgLgB/vpqLG1MbAA1S8Qibynrkq0jL9Mg9NqocEiyxeBnfohBMWq6lVNWFUyFpWHW0QdVlqGpLg1IyzTTs0ntbY7is31Yjd1dyf50T+5n87I2TWbaXweP60Qv/wLRvaFiTYkpL9cXa8HfS5Wozqf0NdSjXdGyqCyRBnaQip6nmQqwUpRqvmXVkF5veuJ1RlaGEzIAaSIyETG9qlvXrML3pCpfEY2PwTwHcCuAvBs/fD+AjZedyHGdyjKXZRaQmIk8BOA3gGICfADirqpduU68A2D/itUdE5LiIHF9YWNiAITuOsxbGmuyqmqnquwAcAHALgHeOewFVvUdVD6nqofn5+bWN0nGcdbMq05uqnhWRJwD8IoCdIlIf3N0PADi52osLV2ZlF9jwWHZNpGOj/mg/YPh3jd1hOYQ1bbFGt2Nh7aaBW6qQOaZGLqv1xI57pm41+0xt2G5QSSYu0TRbUnl1WmGT4R7S7EzoXruYtgqOBFIOOxVKNZ2M/o0pm9Jq3KYUV6zLw/0B2lNiM17kvU1tduE2e04z9Bmk6RWPY8bZjZ8XkZ2DxzMAbgPwLIAnAHx0cNidAB4sO5fjOJNjnDv7PgD3i0gNK38cvq6qD4vIDwA8ICL/HsB3Ady7ieN0HGedlE52Vf0+gHdf4fkXsKLfHce5CphsiCvri4JUUqUhr9xfI4USiKSMQ1qbbNOkYbH9lYciIx4DENKM7GJZSzgF9ugU2gkZZznd09XCnNi9Bk6nVQSnoa5zTeb1wJmla8VtJuznNNXRfgBrdDqeNb75fbO7bLgB4O6yjuP4ZHeciuCT3XEqwkQ1u3K66ND/nTS4KQ0FQCjds9ZZYJFGCsoG5S0KZaS0vpEmJ0iGQ8KyQNSnJSeL0igFYq5LmwV93ky4SmmRv8A8pddilvKhz8SbqQ2PZT969p3vUchrGqap4u+Gvzu6FbJ/Rb2kqrih5DcV7Q9QuG1o45eobNV4ezd+Z3eciuCT3XEqgk92x6kIk9XsXYpRD+zsQrZENFgwUT87G7OvfKB5olK9q9Xo7A4QSFCl8kQ5OVBzeWL25e4FOp01e4dEY6fM8DulsL96m+zuDbFiOPQn6NN7jtJUpaTZKZV0WD5KudxTSenost+JCWcnHR3Fs5fJ7ChfQ/h4bf4Vfmd3nIrgk91xKoJPdsepCBPV7PkFG8ectIeB48qaZdGmURal9MOzNuic88yFpXokY4OqbSZkP61xFmuKd0+CkjtZnzR5nzQmpT5e6lsdfr42fB9zNWtTPpPaHNYLmW2/ftImC9q9f9UpBjaFk1QOisVvTvsWReWiznS3mb43u/YzWqbPs0OppLNQw9N3RVsFpW3+3YS6nFwJkPS1sM3nrnE5qM5wfyss3wwAGsazF9jc/c7uOBXBJ7vjVISp8r80qabKXABTXmOxy+to8xqbQRJa1icpL9NR3A6Oz/tkeutRtZmGbXdqdpl5MQh5PVe30uRM3y5hTzfsMv6n9bOmff2rN9lxvuU5bAW8bF+ir5JNhrxsv0BVXt/sD5fqZ3tUAbZHqaSpsgrLqPD7kZTSN/P3Tm1emkfmtKCfpWDc5t8ctWkZj35wAq5+JOH78CqujlN5fLI7TkXwye44FWHKNPtQiwiV8ZEZyufMaadJxwiZ3iRwMZS82AxS79h2RlXsaj0yHXUCzU7psDJKR9xPrD61BkW6DqWsmq3bzYJttRtMu02ppft62rTfFpjmNtos98OX33r5cYdCcVmjX8jtd/lyf7dp/7jzFtN+aWnX5cdnluZM32LXfjn9/mj3WADQ7rBd65Amp4jryHzGUpl1ePA7qvWKNTmfu75MVVt7BVWKKX2bhBVe15NK2nGcawOf7I5TEXyyO05FmCrNjjzUKTQ01iJKooft8lGI4LCd9MkVkdNScbpnKp8buc8GsrEWlf0luzu9DS7gtBiGSVIcZKtm7eqNKNbWspT
bVF6vZucuP9754jsKX5uRvZZTYvWi8NqdI8/FdvSzmXVxfam7x7RfXrretE8vDd/3xY59Tz3yY8gpzVfepbTLvaD0cfS9sp3dNGMNH2n24FzkixFpeP4NkmaXfkGKbNLsj5z8L8PXyX8+Meplfmd3nIrgk91xKoJPdsepCNOl2YugsD40G1c+7hJRWqBAs3esHqqVpKmqd2w7a1NZ4KB8FJf1jaQthXNqRto4EPUXKdUxa3hOU53TwMMUzACwEPjS76jZsslNMhonKN4PyOk+kQXvKwpRpfTPr/etrfxkZ6dp//TiDvv65aFdPrKbk0bPWKMv23b94vD4+mLxXkycMty22XZe64f7QuS7sWRfnKRkK+cS5KTZNQiNlv5qclgH11zTqxzHueoYpz77jSLyhIj8QESeEZFPDZ7fJSLHROS5wf/Xl53LcZzJMc6dPQXwB6p6M4D3APgdEbkZwN0AHlfVmwA8Pmg7jjOljFOf/RSAU4PHF0TkWQD7AXwYwPsHh90P4FsAPr1RA4tSSbOu5n7W6KSJtBHqKdJDFNeslK4ospGyDTXQelwiKPK/JimcU8x0GFpPplicJ42eZsWlj95sWa18srHz8uO5uhWoLTIac7tBgjUpyIW8nNkP4Q3S6K93bXuB/N0vLFvf+dCWHtnRKX8ASLPXlqjE88XhZ1i32xZI2OmBiPIgRL+DwDe+Q+W1aZ+INXnSIR2+Rl1exKo0u4gcBPBuAE8C2Dv4QwAArwLYu7FDcxxnIxl7sovINgB/CeD3VNVU4tOV7JBX/FMvIkdE5LiIHF9YWFjXYB3HWTtjTXYRaWBlon9FVf9q8PRrIrJv0L8PwOkrvVZV71HVQ6p6aH5+fiPG7DjOGijV7CIiAO4F8Kyq/mnQ9RCAOwF8dvD/g+sdTJhKWuocz06po1vWhqxc/onj24MUwpxmWjj/F8WRJw3S8GRDzQJ5xXnLlHRdXCaYffiH74P1fEo2+YtRaSlKyUylkLYHn9lc3fqYt+tWsDZJs3PJpoz3D4IyTEup/W4WqX2R8sYtUUx6kS2dNbou2/dYu8B2dTvORpBAgDU3pxhnjc52+MZSPrLduGA/z2TJtlmzy5J15uAS5Gu1rYeM41TzXgCfBPB3IvLU4Lk/wsok/7qI3AXgJQAfX/doHMfZNMbZjf9bjE5Z+YGNHY7jOJvFRN1lj+XfMO3DO+8aNiLTW3GV1sj0Ru6HGi7RCsJfgThtFXuOFoU2lppvoioiFFYZvO2crpNRmuScTIaLtMTtdihNdXu4dG817EDbDXuxZq3Y1JbraAnBVVm6VEm1T+8jo/fBy3izdO+Rq/KibTcu0LL9ommiHsQQ8/cYVQYi91hettd5Gb84PGH9otUI0qUfBi/LyR08ursGxz/y8ue5dyzcXdZxKoJPdsepCD7ZHaciTFeIa+gSS+6xumR9G1nTRBpH2E11qK+Uq7iSiyung+Y/iazpJTCJcWij0vuQkq2H0CtVonJE9ljWumlq30g6Q1o4cDvtUojwctOevF6nMOAC91jAanjW8ylp9kijUzsnc5qE6Z/J/bV5rlij1ygteJguKqqkSm8xdH9dOXdGbfrMzga/0S5p9g7Z+erFe1L8e49CvNeA39kdpyL4ZHeciuCT3XEqwlRp9qNvfOny48Pzv2k7Mw5LpdS7rIFYDAd2d2GbfFl56Mg2PrrNdvYoLRVr9KLsTzwsSo/F4bMJlaXKKDVy3hyeMG3ZgaUN0skNSpNEZayE0m2HbsBRVm/yJVAOS6V2sky29OXh69mO3jpL6Z9I6nJ4sv2uit1jOZVU85zV3Qnb0t84h1Foj45tNkccOWDZftlHz95bfPwY+J3dcSqCT3bHqQg+2R2nIkyVZjdk7JhcMlT2Z1cSYIFe5fI5bAtnOPQx6g9FKoewsr7nkkLRucKBcSc1C/YOgDjcNguyPeVcNqlBdt4m+cLX2HGcP+/wYC6FzCmbSZNzeqdlOj6Qr3Wqcc3tJGO7+uh
x1jvs625/c/WLFKZ6ji7Gv6NQl0cOFLxvwZs75J/R49jo9eN3dsepCD7ZHaci+GR3nIowtZo9tLkDwOHdR+wBrOlzztFM7VATsW07sqOz73tZf+Abz5WkKeUVbyUU6e6SisyRDZ8qSyFrkY96YIfPo9LSpO9bNO6Sa4XwZxClXGZNHvVTO/BRbyyXaHKivsy/g7CPNPoFsqOfp3iM8+x4T/seoW2cYzPYrs5+I9R+dOl/YqPxO7vjVASf7I5TEaZ2Gc/oMi2p5my1E+1S6s9aQXZOCu/kCjFs4tLEfkwJhT7W6sN2HAlaYtaLzGU6so+tenytnD2GOX1WsErNm+RKW1LJpmjZvnKx4DpsXqQ2Z2mNXFp53GH23n6ByQ9xWCpXZkm6WdBHlWsX7cCEMhQz7LIdLt3zxUV7bpadCYf1knbZBPzO7jgVwSe741QEn+yOUxGuGs3OpojDu36j+AWc/ynUX2SW44qvQu6zSZSWmt0/C3R2lNKquB1q0oTNdnwuNvNxpiOuThs0M7IEsWaPQnMZls7BuTnMN6qs0mfzGZ97dOgpu7iqsGttVthOOsPBJcuUvnmZNhNIs0curqzDuR2+NtL39rs5ln195Gs3Cr+zO05F8MnuOBXBJ7vjVISrRrMzkTttmYYP9BSn9dUG2eSVbfSk6euURqkXpLzi8M6s2M7OGP2fsqAn2zf7A9C1axSGmofVaKN0TXRudo8tS6clo/s4RDgqn8VVdDldVNBf69F3Qa7LCfUnvQJbOe3FlGp0KtnEYah5Z+guK43itFOPdb9S2L8Z+J3dcSqCT3bHqQilk11E7hOR0yLydPDcLhE5JiLPDf6/fnOH6TjOehlHs38ZwBcA/I/gubsBPK6qnxWRuwftT2/88MZHM9bVBQZv0masKTWy8xa3Q52dk0ZPyHSrtWINX+sG+p81O6F1tqOziLdNCYR3QvbpvM42/WLfeA5jjfJHFxwbafg+a/rROpxfG4cbsw5njR+ci8smsy28wG5ehlAJMf59ToLSO7uq/h8Ab9DTHwZw/+Dx/QA+srHDchxno1mrZt+rqqcGj18FsHfUgSJyRESOi8jxhYWFNV7OcZz1su4NOl1Z845cw6nqPap6SFUPzc/Pr/dyjuOskbXa2V8TkX2qekpE9gE4vZGDWguPnrvPtKPyUQ1y/g6IdDSn/eW0wKyFAx1ZI+2ak66WlDUm25QL9CmnGy6rWkWppjRILS0cDM/vOboW+w8UaXTW3CXvmbNU90frW055HZfusv111uxBPDuXVY72avg3w+1Wy1457Kf4i8c2Ic3Ualnrnf0hAHcOHt8J4MGNGY7jOJvFOKa3rwL4vwB+TkReEZG7AHwWwG0i8hyAfzxoO44zxZQu41X1EyO6PrDBY3EcZxO5an3jyzi68N9N+463/cthg+3RJSbQqMRzZLsN+tnvO0pbzTZlOleoZznunv0B2M7OPvukfUPbOfu68xKP7epRrDzngivo43GUltPi/mAsZfsnEXyuwLauPfJ9jwZSvK/BtnQE7aO0hzQNuLus41QEn+yOUxGu2WU888hLn7v8+PA77zZ9kUmL3VQpLTUvvc25omVjsculcH/weilz10ypIsl11hRUWOmGrY09lgh0AKfIoiVtuFTnMFQ+NnIDrnF4LbnyNof3pNhUyZKB01BZ85oES/fIvZg+73yxY9rRsp3gsOtpw+/sjlMRfLI7TkXwye44FaEymj3k6A+tD9Dhv/9Hph1p5ZS1HdccCtxQSTOWhalyqqPQ3MbhmkqakcM7oVazF6XI5kBbPlfOtjfqZ10d6vSwxNIVx80mRB4Lp7EOrh2FtNJ7TKikE6cgMy6yJaY37VB8csNOl0cv3o+rCb+zO05F8MnuOBXBJ7vjVIRKanbm6DP/obD/9p//N6YtObuhBnZg0oFlZX8jzV4EaUalNuvVfMb2Rxo/fG2vOFVX9FpuhyWbOUS1KJ0zgJxKYoNeXw/LWNM+Ru28tYVHnyf
p7vz8hcuPORW01O04JpHueTPxO7vjVASf7I5TEXyyO05FcM0+Bo9+/9+Z9h0/+6/sAfWhYTiy65K+57ZyWut6Qa1krqtMsI0/KUq3xaG2y6R1KVw2SslM19KZxsg+vla4xwEg0v9ROuhwnF2yoy+RZucSTqzZw3Zuj32s99WR170W8Du741QEn+yOUxF8sjtORXDNvgYe+cl/Gtl3+Ibfsk9wquOyGPXm3MhjpUu+3Kx9o7JVpLN7gd7lGPNl2mvg2O2oZJY9d9YavdcQaXSiSKMDNjuUdIrLKEd7IsvLpn0sfaDwWtcyfmd3nIrgk91xKoJPdsepCK7ZN5ijp79Y2H/7tjvtE6ydA50elSOiksLSK/lbzX75oQ4nHR3Zq9nez7q6IB13lO65pEx1KaEO5z0Paj/y0y+s71rXMH5nd5yK4JPdcSqCL+O3mLJURrfv+PXLj5Ntc6ZP2fV20ZqVZG7GHr9E/eHymtJja4fSJs/O2v5207ZnKW11sFTPG3Rudr0tucUIpaIOw2uP/vg/Fr/YGYnf2R2nIvhkd5yKsK7JLiKHReRHIvK8iNxd/grHcSbFmjW7iNQA/FcAtwF4BcB3ROQhVf3BRg2uijy6juqftyUf27iBvP7Gml8apamm9rH8G2s+t7N21nNnvwXA86r6gqr2ADwA4MMbMyzHcTaa9Uz2/QBeDtqvDJ4ziMgRETkuIscXFhbWcTnHcdbDpm/Qqeo9qnpIVQ/Nz89v9uUcxxnBeuzsJwHcGLQPDJ4byYkTJ86IyEsA9gA4s45rbxY+rtWxpnEJu9NuPNfU57VK3jaqQ9j/elxEpA7gxwA+gJVJ/h0A/1RVnxnjtcdV9dCaLryJ+LhWh49rdUx6XGu+s6tqKiK/C+BRADUA940z0R3HmQzrcpdV1b8G8NcbNBbHcTaRSXnQ3TOh65bh41odPq7VMdFxrVmzO45zdeG+8Y5TEXyyO05F2NLJPk2BMyJyn4icFpGng+d2icgxEXlu8P/1WzymG0XkCRH5gYg8IyKfmoZxDcbQFpFvi8j3BmP748HzbxeRJwff6ddEpFl2rk0aX01EvisiD0/LuETkRRH5OxF5SkSOD56b2He5ZZM9CJy5A8DNAD4hIjdv1fWvwJcBHKbn7gbwuKreBODxQXsrSQH8gareDOA9AH5n8BlNelwA0AVwq6r+AoB3ATgsIu8B8CcAPqeqfw/AmwDumsDYAOBTAJ4N2tMyrl9S1XcF9vXJfZequiX/APwigEeD9mcAfGarrj9iTAcBPB20fwRg3+DxPgA/mvD4HsRKVOG0jWsWwP8D8I+w4hFWv9J3vIXjOYCViXMrgIexEmg3DeN6EcAeem5i3+VWLuPHCpyZMHtV9dTg8asA9k5qICJyEMC7ATw5LeMaLJWfAnAawDEAPwFwVlUvpb2d1Hf6ZwD+EMMct7unZFwK4DEROSEiRwbPTey79Bx0I1BVFZGJ2CVFZBuAvwTwe6p6PvQln+S4VDUD8C4R2QngmwDeOYlxhIjILwM4raonROT9Ex4O8z5VPSkiNwA4JiI/DDu3+rvcyjv7qgNnJsBrIrIPAAb/n97qAYhIAysT/Suq+lfTMq4QVT0L4AmsLI93DuIkgMl8p+8F8Csi8iJWcircCuDzUzAuqOrJwf+nsfLH8RZM8Lvcysn+HQA3DXZJmwB+FcBDW3j9cXgIwKUqDndiRTNvGbJyC78XwLOq+qfTMq7B2OYHd3SIyAxW9hKexcqk/+ikxqaqn1HVA6p6ECu/qb9R1V+b9LhEZE5Etl96DOCDAJ7GJL/LLd6w+BBWIuV+AuBfb/WGCY3lqwBOAehjRdPdhRWt9ziA5wD8bwC7tnhM78OKzvs+gKcG/z406XENxvbzAL47GNvTAP7t4Pl3APg2gOcBfANAa4Lf6fsBPDwN4xpc/3uDf89c+r1P8rt0d1nHqQjuQec4FcEnu+NUBJ/sjlMRfLI7TkXwye44FcEnu+NUBJ/sjlMR/j9
wOHfhq2SlAwAAAABJRU5ErkJggg==\n",
+      "text/plain": [
+       "<Figure size 432x288 with 1 Axes>"
+      ]
+     },
+     "metadata": {
+      "needs_background": "light"
+     },
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "data = pipeline.get_data('prep')\n",
+    "max_flux = np.amax(data[0, ])\n",
+    "plt.imshow(data[0, ], origin='lower', norm=LogNorm(vmin=0.01*max_flux, vmax=max_flux))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Later on, we will require a PSF template for both the relative calibration and the estimation of detection limits. Therefore, we create another masked dataset from the centered images, but this time we mask only the pixels beyond 70 mas and do not use a central mask."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 20,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "--------------------\n",
+      "PSFpreparationModule\n",
+      "--------------------\n",
+      "\n",
+      "Module name: prep2\n",
+      "Input port: centered (70, 57, 57)\n",
+      "Preparing images for PSF subtraction... [DONE]                      \n",
+      "Output port: psf (70, 57, 57)\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = PSFpreparationModule(name_in='prep2',\n",
+    "                              image_in_tag='centered',\n",
+    "                              image_out_tag='psf',\n",
+    "                              mask_out_tag=None,\n",
+    "                              norm=False,\n",
+    "                              cent_size=None,\n",
+    "                              edge_size=0.07)\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('prep2')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's have a look at the first image from this stack of PSF templates."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 21,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "<matplotlib.image.AxesImage at 0x145ee56d0>"
+      ]
+     },
+     "execution_count": 21,
+     "metadata": {},
+     "output_type": "execute_result"
+    },
+    {
+     "data": {
+      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAD4CAYAAAAq5pAIAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/Il7ecAAAACXBIWXMAAAsTAAALEwEAmpwYAAAaoUlEQVR4nO2dbYxcZ3XH/+feedsXe9dOFtfECIOIivKhBGmVguADJA0JFJF8AARFyJVc+QtIQSBB0kqVkPoBVIkXqRXIahBuRUl4VaIIcFwTVCFVgXUTICHQmCgpcW3v5sX22uud19MPe+15zpndmdmd1/Xz/0mrnWfunbln5+6Z5/7vOc85oqoghFz7JKM2gBAyHOjshEQCnZ2QSKCzExIJdHZCIiE3zINdf/31un///mEekpCoOHHixEuqOrfetqE6+/79+7GwsDDMQxISFSLywkbbeBlPSCTQ2QmJBDo7IZFAZyckEujshEQCnZ2QSKCzExIJdHZCIoHOTkgk0NkJiQQ6OyGRQGcnJBLo7IREAp2dkEigsxMSCXR2QiKBzk5IJNDZCYkEOjshkUBnJyQSuio4KSLPA1gGUAdQU9V5EdkN4EEA+wE8D+DDqvrqYMwkhPTKZmb2d6vqzao6n43vBXBcVW8EcDwbE0LGlF4u4+8CcCR7fATA3T1bQwgZGN06uwJ4VEROiMih7Lk9qno6e3wGwJ71Xigih0RkQUQWlpaWejSXELJVum0S8U5VPSUirwFwTER+F25UVRWRdRu9q+phAIcBYH5+ns3gCRkRXc3sqnoq+70I4IcAbgFwVkT2AkD2e3FQRhJCeqejs4vIlIjsuPIYwHsAPAXgYQAHst0OAHhoUEYSQnqnm8v4PQB+KCJX9v93Vf2JiPwSwHdE5CCAFwB8eHBmEkJ6paOzq+pzAN6yzvMvA7htEEYRQvoPM+gIiQQ6OyGRQGcnJBK6jbOTbcidswftE4V89y+uN8ywsbJixlou2/3XbuBmG206xbHGd7s/LhkYnNkJiQQ6OyGRQGcnJBKo2bcRtycfMuOkVDJjKRXteGrSvkGo2UONvR61uj1W6uaFCXtsSLC9bl975+6/sfs2rKbXWs2Mj148AtJ/OLMTEgl0dkIigc5OSCRQs48ZXpe3QwoF+0TRanYfV9fNaPZcao+Vd/8q2qY0QeLmkIaN2fvXiovpv/e1n7S7r1y++vgn5+7f+LikLZzZCYkEOjshkUBnJyQSqNmHzGY0ORKrmxMX25adO8xYJ+32Rslr9ubp1qS9Zl+/omD45o2Nt7n7AVK1cXfUnWav1tqPg8d37Phrs00rVTN+tPytje2KHM7shEQCnZ2QSKCzExIJ1OwDZlMa3ZG4OLlMT5mxzkybcW3GavZ6yZ7eRqH53a6dvuY7aPa2mt7F0ZOqHadll3e/anV3suI0f5BrL2o/AwQxeAB4T+GjZvxo5dttDI0LzuyERAKdnZBIoLMTEgnU7H2mF40OAMlUU5MmO6wm1+tmzbi6265Xr+60p7NWst/ljVxTC6sN4UNd2L1jnN1tlyDsnrg4errq1q+7KSbnc+V9XD7Iy/d59MhZvQ+3XsCfj5jr4XFmJyQS6OyERAIv4/tAr5fuIeGle2PPbrOtvMeGnSo77bV4rWSvxet5O9bgbPtLaX8ZD/HbXTisJbzWfJyW3b4utVbq9uAtqbt++W04diWsNkt4rmK7pOfMTkgk0NkJiYSunV1EUhF5QkQeycZvEJHHReSkiDwoIoVO70EIGR2b0ez3AHgGwM5s/EUAX1bVB0Tk6wAOAvhan+0bS/qr0d0y1ZnmuDZr01+9Rq9M2e/quqvuXC9srNkbvsqU1/AdpgFp2PdOg25Qmjg9X/ca3tm16jW8HUtYIsvr+Zz7Qzah6WMLy3U1s4vIPgB/CeBfsrEAuBXA97JdjgC4ewD2EUL6RLeX8V8B8FkAV26rXgfgnKpe+Rp9EcAN671QRA6JyIK
ILCwtLfViKyGkBzo6u4i8H8Ciqp7YygFU9bCqzqvq/Nzc3FbeghDSB7rR7O8A8AEReR+AEtY0+1cBzIpILpvd9wE4NTgzR0tfNbpv2bTnejOu/klTs6/utvc8qxMu/dXdEvUa3Wv4UKe3bMu7/Fc/Dfj0WCeNw1TclrJULvs1qTrN7uPs/thhuyi/7Neb6WP6TtP7VlMh17qG7zizq+p9qrpPVfcD+AiAn6rqxwA8BuCD2W4HADw0MCsJIT3TS5z9cwA+LSInsabhWb2fkDFmU+myqvozAD/LHj8H4Jb+m0QIGQTMjR80Xr/O7DTjxoxbpjrZPCUtue5eoxfd9gm7vebG9VJT+9aLbtlpwYtyO4RbWZpUfP568NDF0f2+9bK/t2AvMNOi/bc0S179clgfk69U7Di1uQmNlZXmW/WYZ7/dYLosIZFAZyckEujshEQCNfs69DWuPm1LS2HG5sLXdtg2y42CF8sBblPDhpxbNHp1hxXa9algXPTx6DbtnABozeXh5zZOpheXC1+3Mhq1Vbu9Oul0d83F0oM4e9JurTtay23DtYdKgph+/cJFu2/DJgRca3F3zuyERAKdnZBI4GV8n/HpsMnsjBnXd9pr7XrJhYaCUlKNvA9R2bG/bK9N2bBUfdpemidTzUvaXMF1ZXHLUl2EC/WaK4Hl/nUawaV7vebs8CFC/3dU7ZyT+PTa4LI+Td2+BRemc51rfaVaKTVlU5q3sczGhQtmrOUyriU4sxMSCXR2QiKBzk5IJFCzo8+htjm7ZLVxnU2PrU07nZhzyz2DYd2H1mxmLSozVhvXZmz6Z7rDhp2KxUCzp1bP+wYwNafRG75FjPguL82x7zbjS2DVbbQRNdfUxafbhv+macHre/vmScV+aEnV37doHtx3yfX3A+ovv2rG2z0Ux5mdkEigsxMSCXR2QiKBmr0PpLt2XX3c2GXTY2szVqDWply8esLFnIPlnrVJt81pdpP+CiCZtJq9VLJieLLYzFvNpy7+bN8atYadBypOw6/mrd4tB2PX3QnSsK/16bStGt0Slq1KXTqxuI6xibMzqbqOspXmZ5ZPXTnslpZW9vOrv2I1/HaDMzshkUBnJyQS6OyERAI1+xZIptwyyr3NevjV3a7M1LT9iGtuOacvPVWZbo4rNq0elZ0uNj5pdXehaDV7IWfHoU4v+W0uIT2XtF/yetktQ10uN+9NnM/bpP0KXPlsF4iXDv2iw6W8LWWpa17/u+0uhp8rNz9/X8K66PR/WrWfUXJ5FdsZzuyERAKdnZBIoLMTEglRavZec+GTXbNmXJ1patSWOHoHjV5zbZjCWHpt0mrIxoQVpGnRCthiwWrM6aKtB7Wz2NScs4XLdlve6tGp1K7lTlwu/KWazR9YLDfzC06l9mbDogu816r2tVLzJa5cKepwOYEvU92SV2/HacXn8Ddfn7h19LlJl2c/4eyctPci7pg+YMZHLx7BOMOZnZBIoLMTEgl0dkIiIUrNvllSV0eusduWg67ubAaCaxMdNHpLi6aN89/Ddk0AgILT7Dmr2ScKVsCGGh0Abpg8d/Xx60uvmG37Ci+b8XU5W2Y5dSvez9VtPsEfq7uvPp7KvdZsq9btfYyXql7Du75Wvnt0EEv3mtyX0/bbNfVx+LAFlquNl3caPu/y7Kfc4gTXWmrc4cxOSCR0dHYRKYnIL0TkVyLytIh8Pnv+DSLyuIicFJEHRaTQ6b0IIaOjm8v4MoBbVfWiiOQB/FxEfgzg0wC+rKoPiMjXARwE8LUB2jo8WrqM2GWrdVeuuBGUSqr78s/uK7ClPLQr0dQIuqlq3nUsdV1bcv4yPm8v43cXL5nxa4vnrz5+Y3HRbNuff8mMZ5L2ZZRnk5UNty0WbCmumZKVE+eKVstUJ+y/Yd0tr9UgotiSLlt156pDKC4skeVW3kJdlxt1l/Hqyli1X5g7fnSc2XWNKwIun/0ogFsBfC97/giAuwdhICGkP3Sl2UUkFZEnASwCOAb
gDwDOqV79zn0RwA0bvPaQiCyIyMLS0lIfTCaEbIWunF1V66p6M4B9AG4B8OZuD6Cqh1V1XlXn5+bmOr+AEDIQNhV6U9VzIvIYgLcDmBWRXDa77wNwahAGjoJk0oZY1KVNNoqu9VFQDtqXUW4pq9xhuxn78k6uRZMvB72jYHX23pJtZ/Sm0tmrj99cOGO27XNLXnck9mZDVa1YLtVtuu25RlPDT7tU22LqltPm7XtVXUdZJ8vRqDQ/iKTilbJblupKXnlhbZa1tpTDdvv6TrU5p+GxvejmbvyciMxmjycA3A7gGQCPAfhgttsBAA8NyEZCSB/oZmbfC+CIiKRY+3L4jqo+IiK/BfCAiPwDgCcA3D9AOwkhPdLR2VX11wDeus7zz2FNvxNCtgFMl10HKbjgeN5+TD4FM9SF6mL0LRWXvC5sv7qzLakrHVVInO5ObXx7Nm3G3Wfc2tAdib0vURTXRskZnhdXsglNW1IX3M75cdp+qa6rDmW0sbbE4Dt83puhnb5Ha9z9mouzE0KuDejshEQCnZ2QSKBmX4/Ea/L2LYfCsbgWQi1lkn1wtl0nZBdw1g6C1JeOSnwMOhj7QtGrajV43f0dF9Vq/LMu6f9MrbkMeLFilwD7stMt9xqK9r1tMS2gHsbO/anxpaNrbnvdj4O/q0OgXF0LZy1Yd5GyO9iYw5mdkEigsxMSCXR2QiKBmr0bvA6vu1bJtUCzt2hE/15uu9Oc4f7iYsgNV3LZt1X246pLvF/Vpna+pPbU5xs+bm4NP++S+s/U7Zr1s4FmP1d17Z/ca1OX45+6uLtIGzHd5vNad+xkddjlKmlJwt/4sJlhHXYYbzizExIJdHZCIoHOTkgkULOvg1Zs3De57FohTbi88XLzY0wr9vuzXmjfrsiP03Jz/4Y7Ow333qur1o5Xy3Yd/v+VbQnsyXRP87WuBvOO1K5Pr7uk/Vfqtg7fi5XdZvzC5eua+5ZtS2tfStrj8wcaDV9XrmlLumq3pZfd2JXOS8su9yD4vMN7LWvb3L2YqhX1UnE3ABqdRP54wZmdkEigsxMSCbyMXwetuITNVXttKGUbWkrLwfJOd9mYFuzYdzDRnFtGGVzxpm7fxqoLtZXt6Tt/2baEPV2wl/H5IM634mpYl5ye8GG7l6r2Mn5x1abEvrzavHS/5Dq8VNxlfM13dXUhxUbFdWIJSlHlOly2d5RJQVfXlnPlLuOl1mFc52U8IWQMobMTEgl0dkIiIUrNfqzxXTO+PfmQGWvZCkG9ZFsdyZTT7EEILOc6gfp2T74MVQtBGMqXRfJ6v5a3T5xPbejNt1FarjR1+kxh1mzLudzRmktx9ctUvS4v15r/Si2a3KfxVu17V1bseyUXXCrvcqDZXdepVs3udbjb32h2F2oru1CbC72hZsc/PvmP2E5wZickEujshEQCnZ2QSIhSs2+WxooViullG3NOik3NmbqWQbmC+z6V7r9fW0oZp75sskvNhdW+yy5+fTm4t7CYt3+DX73pU1Y7jdvhyz/Xq+4zuGT/DXMX7XvnLjXHLqvXaHCgtSyV3567HORErLhlvat27DW7VLdXGSoPZ3ZCIoHOTkgk0NkJiQRq9i7QmtVqumKFo4Sa3bUIamn763LOxcWkJdS3LdWZfFlqtxQUPufcxsarq81jVwsuzztpV9O6FW2n2X3bZKfRE5fj7zV63mv24JaJX5bqy1B5jZ6/ZP/O/IXmucwtuzUPKy5o78qRbbclrR7O7IREQjf92V8nIo+JyG9F5GkRuSd7freIHBORZ7PfuwZvLiFkq3Qzs9cAfEZVbwLwNgCfEJGbANwL4Liq3gjgeDYmhIwp3fRnPw3gdPZ4WUSeAXADgLsAvCvb7QiAnwH43ECsHDCdcuU9LXH3UjPnXFx759TFxsUluCdVr+HDfb2ehxu7kldOK/sSTmGefsPfW0jVje2x1Gn6FsUedmiq+TXn7UtJ5S7Bjn3+e2XjUt2py4XPrbrxJfuC3IVmG+t
k2QXty66OgUs++PH/fgXbmU1pdhHZD+CtAB4HsCf7IgCAMwD2bPQ6Qsjo6drZRWQawPcBfEpVL4TbVFWxQZs8ETkkIgsisrC0tNSTsYSQrdOVs4tIHmuO/i1V/UH29FkR2Ztt3wtgcb3XquphVZ1X1fm5ubl+2EwI2QIdNbuICID7ATyjql8KNj0M4ACAL2S/HxqIhWOIr1Gn1WDRdN1qRJ9v3ULDaeFGqE9dLbZ2MXkA4jS7r79WL4aa3Znha+O1VH9u33o6DMsnVWdHhzpxLWvSW1piB/s6jZ5fsePCeft558+vmnFyPrhB4GoLqq8pV9veufCebpJq3gHg4wB+IyJPZs/9Ldac/DsichDACwA+PBALCSF9oZu78T/HOjdfM27rrzmEkEHBdNl16BiKc2mUGoRskrK9RvV3LZOWjrBuiWbQdSRxpaLTih37sJ3vRlNzobdaqc1lvOtc47vRdMJ0R+1QztmXjmrptOqXqVbDUlLuMv6ilU35c/ayPX3VxvX0/HLzsb9Md+mwR5e/iWsJpssSEgl0dkIigc5OSCRQs/eDIPSmvlVUar9PNXHhMv9ewfa01r6raFLxqbdO47ulpWHaahiGA9bR8L4tlZ8WvOGBlPYprS2htJbSUXacW7V/d+5y8/X5iy609orNrU1evWjNcqnNYZnwhjtXaDjDrzE4sxMSCXR2QiKBzk5IJFCzd0HHdlFBvFaqToBWrRhukboutpuE6bM+nu/0f7LiNHvZtmFOp21p6Vyx+fp6yZV3bomzu1bTHTS8qWLlkgu8Zu9Y7tkvS70UlJJycXRZesXauWw1e+OyW8bqS00F+PN8rcGZnZBIoLMTEgl0dkIigZp9C7Ro+NxHmgO3ZNXnX4srdSSuDbBZIuvz6FO37tQtyUzdWKpu+2RT49fL9r0ark2Vj8O3tJ5qM020lnd2dqz6sX1Besne95BLTZ0uyzbXvXH+gh1XXCJ+xBrdw5mdkEigsxMSCXR2QiKBmr0PHKs9cPXxHZMfN9u8zlan6VvysYO2wFr3SeYurz7nTp+Lw6eXSvblUxPNbSUbo1dXWrpesu/t21h5DW+O43P6yy6n/6LT5H49wUWXzx6026otL8NubN+myhObTg/hzE5IJNDZCYkEXsb3maMr/2bGd0wfsDuUfUVTe4lrKtdu8hIViStTNT1lt68G3VBK9hIfBZd6W3RrXl3I0I81GEvVhRvdGJdtyqtfFtzw1XvDcBov27cMZ3ZCIoHOTkgk0NkJiQRq9gFz9OIRM+7UIbYnXBivfuHCBjsCUrTLYZMJp+F9aq7X7D6VN7xf4O5LNNx9iYbT7P0sB0WNvjGc2QmJBDo7IZFAZyckEqjZh0wnTTlQTR+gXlf7brIu9bZFw/tU3vC9fTktH2fvQaNTk28dzuyERAKdnZBI6OjsIvINEVkUkaeC53aLyDEReTb7vWuwZhJCeqUbzf5NAP8E4F+D5+4FcFxVvyAi92bjz/XfvPhop0kHqefVlcDW6gY7DgHq8sHQcWZX1f8E8Ip7+i4AV7JFjgC4u79mEUL6zVY1+x5VPZ09PgNgz0Y7isghEVkQkYWlpaUtHo4Q0is936DTtTjLhusOVfWwqs6r6vzc3FyvhyOEbJGtxtnPisheVT0tInsBLPbTKLI+m9Wyw4rZd4IafDzY6sz+MIArVRkOAHioP+YQQgZFN6G3bwP4LwB/KiIvishBAF8AcLuIPAvgL7IxIWSM6XgZr6of3WDTbX22hRAyQJgbfw1DrUxCmC5LSCTQ2QmJBDo7IZFAZyckEujshEQCnZ2QSKCzExIJdHZCIoHOTkgk0NkJiQQ6OyGRQGcnJBLo7IREAp2dkEigsxMSCXR2QiKBzk5IJNDZCYkEOjshkUBnJyQS6OyERAKdnZBIoLMTEgl0dkIigc5OSCTQ2QmJBDo7IZFAZyckEnpydhG5U0R+LyInReTefhlFCOk/W3Z2EUkB/DOA9wK4CcBHReSmfhlGCOk
vvczstwA4qarPqWoFwAMA7uqPWYSQftOLs98A4I/B+MXsOYOIHBKRBRFZWFpa6uFwhJBeGPgNOlU9rKrzqjo/Nzc36MMRQjYg18NrTwF4XTDelz23ISdOnHhJRF4AcD2Al3o49qCgXZuDdm2OYdj1+o02iKpu6R1FJAfgfwDchjUn/yWAv1LVp7t47YKqzm/pwAOEdm0O2rU5Rm3Xlmd2Va2JyCcBHAWQAvhGN45OCBkNvVzGQ1V/BOBHfbKFEDJARpVBd3hEx+0E7doctGtzjNSuLWt2Qsj2grnxhEQCnZ2QSBiqs4/TwhkR+YaILIrIU8Fzu0XkmIg8m/3eNWSbXicij4nIb0XkaRG5ZxzsymwoicgvRORXmW2fz55/g4g8np3TB0WkMGzbMjtSEXlCRB4ZF7tE5HkR+Y2IPCkiC9lzIzuXQ3P2MVw4800Ad7rn7gVwXFVvBHA8Gw+TGoDPqOpNAN4G4BPZZzRquwCgDOBWVX0LgJsB3CkibwPwRQBfVtU3AXgVwMER2AYA9wB4JhiPi13vVtWbg/j66M6lqg7lB8DbARwNxvcBuG9Yx9/Apv0AngrGvwewN3u8F8DvR2zfQwBuH0O7JgH8N4A/x1pGWG69czxEe/ZhzXFuBfAIABkTu54HcL17bmTncpiX8V0tnBkxe1T1dPb4DIA9ozJERPYDeCuAx8fFruxS+UkAiwCOAfgDgHOqWst2GdU5/QqAzwJoZOPrxsQuBfCoiJwQkUPZcyM7lz0l1VzLqKqKyEjikiIyDeD7AD6lqhdEZCzsUtU6gJtFZBbADwG8eRR2hIjI+wEsquoJEXnXiM3xvFNVT4nIawAcE5HfhRuHfS6HObNveuHMCDgrInsBIPu9OGwDRCSPNUf/lqr+YFzsClHVcwAew9rl8Wy2TgIYzTl9B4APiMjzWKupcCuAr46BXVDVU9nvRax9Od6CEZ7LYTr7LwHcmN0lLQD4CICHh3j8bngYwIHs8QGsaeahIWtT+P0AnlHVL42LXZltc9mMDhGZwNq9hGew5vQfHJVtqnqfqu5T1f1Y+5/6qap+bNR2iciUiOy48hjAewA8hVGeyyHfsHgf1lbK/QHA3w37homz5dsATgOoYk3THcSa1jsO4FkA/wFg95BteifWdN6vATyZ/bxv1HZltv0ZgCcy254C8PfZ828E8AsAJwF8F0BxhOf0XQAeGQe7suP/Kvt5+sr/+yjPJdNlCYkEZtAREgl0dkIigc5OSCTQ2QmJBDo7IZFAZyckEujshETC/wM5/TWz0ICX2wAAAABJRU5ErkJggg==\n",
+      "text/plain": [
+       "<Figure size 432x288 with 1 Axes>"
+      ]
+     },
+     "metadata": {
+      "needs_background": "light"
+     },
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "data = pipeline.get_data('psf')\n",
+    "max_flux = np.amax(data[0, ])\n",
+    "plt.imshow(data[0, ], origin='lower', norm=LogNorm(vmin=0.01*max_flux, vmax=max_flux))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## PSF subtraction with PCA"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "After masking the images, we will now run the PSF subtraction with an implementation of full-frame PCA. We use the [PcaPsfSubtractionModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.processing.html#pynpoint.processing.psfsubtraction.PcaPsfSubtractionModule) and set the `pca_numbers` argument to a range from 1 to 30 principal components. This means that the mean- and median-collapsed residuals that are written by the output ports to `res_mean_tag` and `res_median_tag` will each contain 30 images, one for each number of subtracted principal components. We will also store the PCA basis (i.e. the principal components) and apply an extra rotation of -133 deg such that north is aligned with the positive *y* axis."
+   ]
+  },
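+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As an aside, the core idea of full-frame PCA subtraction can be sketched in a few lines of `numpy`: flatten the stack of images, fit an orthogonal basis with an SVD, project each image onto the first principal components, and subtract that projection as the PSF model. The sketch below (with a hypothetical function name) is only an illustration of the idea, not the actual PynPoint implementation, which additionally takes care of masking, derotation and collapsing of the residuals."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def pca_subtract_sketch(images, n_components):\n",
+    "    # images: 3D stack with shape (n_images, height, width)\n",
+    "    flat = images.reshape(images.shape[0], -1)\n",
+    "    flat = flat - np.mean(flat, axis=0)  # subtract the mean image\n",
+    "    # the right-singular vectors of the stack are the principal components\n",
+    "    _, _, basis = np.linalg.svd(flat, full_matrices=False)\n",
+    "    basis = basis[:n_components]\n",
+    "    # project each image onto the basis and subtract the PSF model\n",
+    "    model = flat @ basis.T @ basis\n",
+    "    return (flat - model).reshape(images.shape)"
+   ]
+  },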
+  {
+   "cell_type": "code",
+   "execution_count": 22,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "-----------------------\n",
+      "PcaPsfSubtractionModule\n",
+      "-----------------------\n",
+      "\n",
+      "Module name: pca\n",
+      "Input port: prep (70, 57, 57)\n",
+      "Input parameters:\n",
+      "   - Post-processing type: ADI\n",
+      "   - Number of principal components: range(1, 31)\n",
+      "   - Subtract mean: True\n",
+      "   - Extra rotation (deg): -133.0\n",
+      "Constructing PSF model... [DONE]\n",
+      "Output ports: pca_mean (30, 57, 57), pca_median (30, 57, 57), pca_basis (30, 57, 57)\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = PcaPsfSubtractionModule(name_in='pca',\n",
+    "                                 images_in_tag='prep',\n",
+    "                                 reference_in_tag='prep',\n",
+    "                                 res_mean_tag='pca_mean',\n",
+    "                                 res_median_tag='pca_median',\n",
+    "                                 basis_out_tag='pca_basis',\n",
+    "                                 pca_numbers=range(1, 31),\n",
+    "                                 extra_rot=-133.,\n",
+    "                                 subtract_mean=True,\n",
+    "                                 processing_type='ADI')\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('pca')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's have a look at the median-collapsed residuals after subtracting 15 principal components. The H$\\alpha$ emission from the accreting M dwarf companion HD 142527 B is clearly detected to the east (left) of the central star."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 23,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "<matplotlib.image.AxesImage at 0x145f55550>"
+      ]
+     },
+     "execution_count": 23,
+     "metadata": {},
+     "output_type": "execute_result"
+    },
+    {
+     "data": {
+      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAD4CAYAAAAq5pAIAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/Il7ecAAAACXBIWXMAAAsTAAALEwEAmpwYAAAu+UlEQVR4nO2deaxc133fv7/Z583bF+40H7VRlh1LclhFruVEluxEdha7qGvYMVICkSu0dQsHSRHLSVvAQNDaQBHHaNIEQm1YLlzLrm1BsprFspbEbqyFMimZFCVxF9f3SL5t3uwz9/SPN+Q9v+/wbeR7M4+6vw9AcM67d+49c++cued7fps452AYxlufWKc7YBhGe7DBbhgRwQa7YUQEG+yGERFssBtGREi082TxnpxLjPS385RvcUS3YoFqu4B+yxvea/6ZX65RJkZvcF5fxCw8naJ+bgqNfEEut62tgz0x0o9Nf/KZdp7yLYXQLQwa+g+Zrqpql4sp1Xaz3u3ONtQ2BHRwOjYP7ji9v1ENfz1iSf2jY7SP0//xL+bdZtN4w4gINtgNIyK0dRpvLI+gpG+PpPTUmafL9foiv90Zb/8a7ZsgnU3T9kRXXZ+L+pbuqVx6XSlo+ZBI634HJBmCalyfmj6nsTLYk90wIoINdsOICDbYDSMimGbvMFkylxXz6UuvJcl2c611GzNaGwcZ0rqkuyURHs+52LzbAMCR/hdh05vW8FXPzBejY9ULSX2sNK896DabGH18E9/ce83Mt1TsyW4YEcEGu2FEBJvGrzJsVhKaWhemM/Nud+zFxl6o8fmn6UDrtF9NzakfMTpWQ6sLNBpLfy4ENG1n91n+XEJmP/YMbPHu8zfN6nNlhkqqXa3YV/wi9mQ3jIhgg90wIoINdsOICCZoVgBfGzs2DZGZiXU0KAy1Racv41hBUd/OGJnHEr6Zz5EZj01tZNLiY4OD4jLhuaSbzGG01hCntQXW9PWy1uHZnvKl18XprD5vd021azW9RsJrJih70Xm9+r1vdezJbhgRwQa7YUQEG+yGERFMsy+BFvdN0pi+ns0MlNU2DvdsoUK/t6lQzwrZvlknJ9IUdkrbW97v4UizczhtS6aatNbZbNP3+5ZMkSstaXDW7OULWoezD0AqEa5VlPi8RKOysF+DeDqdXZVLxUXu1TWOPdkNIyLYYDeMiLCkabyIHAOQx1x+0rpzbqeIDAL4NoBRAMcAfNw5N7k63TQM42pZjmZ/v3PuvNd+EMBTzrkvisiDzfbnVrR3HSKglE3JLNlyp9N6e1+Ykqla1pfUkZCOURiqi89v33YlrT9bfN3Zx5z0aYP8xhueD3qcw0wp7RQfO0XXoEr61l8f4HWKeErr7AYdO96rtTOH7s4WwvgBviaS0/2WAn2lyZbu63T2mw/y+nrl1hdU+1rX9Fczjf8IgIebrx8G8NGr7o1hGKvGUge7A/BDEXlJRB5o/m29c+5M8/VZAOsv90YReUBEdovI7ka+cLldDMNoA0udxt/lnDslIusAPCkir/kbnXNOOJVJuO0hAA8BQPq6zVYqxDA6xJIGu3PuVPP/cRF5FMAdAMZEZKNz7oyIbAQwvor9XFXY71tIV9dmSatx7Lfnj82x3PGehf2vhXzpnafZM4PaZl+rsn862fvJD3xgw4xqT57vCfvM9ug4xcKz3Z00O6d79o/HqaOZBKet0rIbLqfP5aee5nvT4rPPMf507uJMqP8TGbo3dOzCpLb/S5Gu2QAF/a9xFp3Gi0hORHouvgbwqwD2AXgcwK7mbrsAPLZanTQM4+pZypN9PYBHZc6NLAHgfzvn/lZEXgTwHRG5H8BxAB9fvW4ahnG1LDrYnXNHANx6mb9fAHDvanTKMIyVJ5K+8TGyR/euz6t2saw1erVGNmXSnEnfR5381aslSqNMmtJx2SV/G1dFpnZAduJERp+bP4fyw2dfdy7BTHb3CuXKyw7oXG+B57/e4nd
P+etaqstSLD2vB/h++3zeUrFLtdOUg65ynv3uw5eN+CJpqGk9BX1a42eyVDW3tLbt8OYuaxgRwQa7YUSEyEzjffdPR1PW6Wk9FeS0ysmcnq5xyKs/Ta0VKY0yT4/ZFkTn8t1Ytw5NqW2H92/S782SCYvdY2varVe8FE4xkg893Xr6yxKgQdKFpZBfQbbBqaE4tTSlvOLqMg1yW/XDfktkDuNjNeqUhipDU3XPfZlDbVlicaoudiH2zXiAlkILhRd3CnuyG0ZEsMFuGBHBBrthRIS3rGZfyAWWQz85n5PrZXdN0m4LhJqyVuPwWDZD1UkXxj1z0LGxIbUtNUlauKjfWxvR53JxbcLKdIXbK2N6nWJqhq4Jm+bITFUoae2c6Q3DfGuT+lic7plTYLWk16ZHkL9GEr+g31vvJzMdh/2yG7B38BqbRSl8Fuz2y6HP3mfmfnKoSIurcwewJ7thRAQb7IYREWywG0ZE6LyQWCHib2qbZ7BO60TfBhrLLWzXDSj8k9032R3UeesDrE+rk7pfrAOH1+kw1FwqtOmfy+fUtlqKdOAQxYayeyfhp2yOU3hmo6w/c7ZXh9eWZ7XNPkduq4WzXl9J77PfgqM0VXy9OZTUecsDrNG5tBSvBwidK5YM38/nBd07dolwtN7C/gRqX1qHyNH1LBb09WwH9mQ3jIhgg90wIoINdsOICNesZucUTDKqNSSrKV+Xs2YX8s2Okb91S8pmsqE6T5Oynmff+Bj5Y0/NaHt3PhFqOS6b5Fijc70n0srJs/r9tY2hTudwWda+JS7JRJ+rlqUr7NuYC3ob+9VziLAL6Fi0niDevXNdC9+rgNYeMENfce/YSUp/VaeQYUd2dVC/Ay5z7d1rDttljc6xBQH7GqwC9mQ3jIhgg90wIoINdsOICNesZm9Jo0RwquTcSPHS6xKVJ3Kk8xzHmCfZ/1rv75dpLlNpqNxwUbULE1oL963TKbFmi+H7HduBF4mRZj/w2A2zevsC5YvYHu3IZh+jlFfVvD7WhtELl16fPTmotvk++QBQnqE4e9L4LkHPIC+ugWMN2NYdz1KKa07V7e+/SCnuFPm+V6nfqNEB/LUgOm+6T9vZmQqtz6wG9mQ3jIhgg90wIoINdsOICNeMZme7pFDpnu4ura8mp/pUu5wKNSbbwlP9+r2ZtD52oUglmlNk7/boGdbFKwuzmXn2nGOa7Oy9PaG/wCzF0TfIt+BdN5xQ7VeOb1btOuVjS54Ir0G9m+z/67WmZF/5FKXILlNOgLE3RsIG6eZUkt4b03p/403nVHt8olf3zVuLqE7Q9SQ7e4xiD6SbSlF79u+unL7vrJvZvz1G/hguSfHt3rpGlWz0VYpnZz8RXi9YjRx29mQ3jIhgg90wIsI1M41nd8KAzEiTpfnTDwNAVy6cpubP69DRekxPqUo0g2qUF5mCeRKD0w/FExQeS/1+2zY9hZ0qhqa5Op2XTWuJGB2bzWUkN6rD4f5xSmkVp2NzHVZOq5zspwqz/uciN96utHZ/zce0+fHMeL9qs5uqn/JKSCK0uDLT9DfXpfuZPxtWsq1Ulvf1Dwp0b6lCr2+ak5L+DDmSdzNTPardkoqre+Hqv1eCPdkNIyLYYDeMiLDkwS4icRHZIyJPNNvbReR5ETkkIt8WkbVd1c4wIs5yRMtnARwAcNEu8iUAX3bOPSIifwXgfgB/uWI946pJbIrgFExxNs1pbVfIeyYb0nmsxYIkuWCSq2iDUh/BM1MFdF4uqzSV0KajN89q11LxPhaHgu687rhqn5rV5sX1m6ZUu1SllM5D4TWqF/W6xabBadU+l+xW7cGcdvs9s2eDavtnqmf1vRjp0nr1bNCv2sMj2mU4oFs9cc4zxfF953TPREvIsWfiYrfnliq5XIaKo5dj869zxIa1WW+WTbAUjgxek6LvJIdlXwlLerKLyBYAvw7gfzbbAuAeAN9t7vIwgI9edW8Mw1g1ljqN/zMAfwjg4s/REIAp59zFn5u
TADZf5n0QkQdEZLeI7G7kC5fbxTCMNrDoYBeR3wAw7px76UpO4Jx7yDm30zm3M96TW/wNhmGsCkvR7O8F8Fsi8mEAGcxp9q8A6BeRRPPpvgXAqZXsWEv53Bla/yPNEyetHKPSR34opFQo7JFKH3P4rJzQduHkNj1DyXiuu/nT2n46Df0DF+/T9tMNQ1orVz0def6cdhvde2KLav/KdYdU+6kX3qnPRbb0ek94TRIUnnmhoN12C+d0u/Sm/lzBMNmB/VPRmsi+47rU9L9893Oq/Tcnb1HtXrLLT/ohsKRtu9bpe3HT8Lhq7z+zUbVVGTByCU6QO2zArrj0HYvTd7TmfW72zeBrwimuuLQU+4msBIs+2Z1zn3fObXHOjQL4BICnnXOfAvAMgI81d9sF4LEV751hGCvG1djZPwfg90XkEOY0/FdXpkuGYawGy/IXdM49C+DZ5usjAO5Y+S4ZhrEarCnfeN8/uzZJKYCSVAqZUh01qDRPbVZrfPFSPLkc6Xuy1QYUvilkymWtNtId6sbZbq3vE5TSavu6C6r9oQ37VPsrL9x76XWOUhnVKAXT0wdvUu3YoNa6dU737OlCIUlYPtCv2tmi1pjr3ndatbk0VU82tCuP01oDp95+/NgvqHaVQnG7krQe4K1zZCks9b1bjqj2sbwuc91NvvHTDW8tgtNQs66mJsdElDlGQr2XjPYJSnVG37kM3evCtLbLp711oStNYWXusoYREWywG0ZEsMFuGBFhTWl2VX6HNHqcS/WQXoqRJmop+5sJ5t9GbZCmr2n39RYb6tnp0AbNZX9qZMstDWq9daw8rA/u2ZELU1r/336D9o3f8+p21Y5RDDV6tN04fc5bt+CsSOR6XevR13OiqPuyY0Tbs/e8Nnrpdf967eteKOn1k9+5/gXV/u//8AHVzqf0vU6eCNdvSlt1P396elS1OQ0Yp57ySzbxegqv3VQ4BoK+Y/1DOlX3zEx4jVIUT1Gn9RbOz1CtcO4CsuH7eRI4Y9USK0fZk90wIoINdsOICGtqGu+nd4pTtc4Wd1lyN2zwFLaL7GVeaGSsquc9jt7quDoqZ/6k/f0pVoymhrfecFJ3K6GnqONl7YY6siF0n53eo6f4sRt1PxK92tQ2sFW7jgaP6veP7jp46fWRSW2iKr+g2/1v6Os71q9DXoOBCd3vzVOXXp97c0Bti/fpfu6f1e6zsV59TSbGKLvsNs8sxemxyC3aTevvSYHNaVPh9sagnuJzFVchF9ZYt95/akzfu9514bS+ROYxnrZzGjCWFHH6fie9DL1lOnajvoAJ0MOe7IYREWywG0ZEsMFuGBGho5qd0/4kPG3HGqfF3MCpd0njY1Jrt9RU+IbKiN43MUOmOKr0waGiAZnqal64bYzSJiW2k/ai9M/7L+j0TsNeCqcP/Prratu39uhQhOS4vn7nc9rF2L1bn2tiz/WXXmfOkUlwi74mtR69ndN8BWTv8bUzh53etVW7tP6bkWdV++m9OsSV3Vg3DofrGOem9drB9Ix223UUGir0HXOeOTJ1VJvpGpQ5ym3mKjl6uKT6tIb3SZIGZ/ftdL9ep2CXYnaJVa7kVIU4llo4Ndel/Za0l2EY1zw22A0jIthgN4yI0Fk7O+lAX7VwmGmCbLVcVomppPRHq3mmW6H0wuw6Givr7Znzus0eri4W/ma6Ad3Pw2TPjtFSxIUJ0qB7w4OfeodOFf1f3/s91f6j5/6Zan/trq+r9u8+8YBq++sW9XdoV89P7tir2o/8+D2q7Yr6eu7oGVPtH4yFKbFqp7SOfqZ2o2o/dWiH7tcFugE3aK082hva9E+/vk5ti5eo0mpKa98gw7G84TWoDlDaKQoR3rZehyOP5/W9Khb0GknD0+mcwpr9Rriqa4K+zwFdb/8rySnY/LEgC7jO2pPdMCKCDXbDiAg22A0jInRUs3PK5uVQ5RLNZE+Nk6+8b8ZMTNM2kv/pCS188qPUzwX6LRe0DXSionX3tlFdolm
rQuCX7tl/6XWxro/1jdNaR68bmVHt/3bi11Q7XtGfo9YXatQUhVB+97XbVVtIvzoqR/TogdtU+9/e+veXXv/52AfVtt4faQ0/8EkdL3BoWqd7Btmkf/rczWG/9J5wFHbqSLO3lFnyfOUdbePU0kdPjqh2rk+X8mJd3UiHurxKNvlMl76eQaC/gxWynbeUP/O+wMnUwuGz82FPdsOICDbYDSMi2GA3jIiwpjR7w9NA7OvOvsOOUwbxsbopVe9YuH9KV1xCTZtPEatT3DilVZYG2+G9bVRveFJLYRw/qnVgz3pt7/7xa6FN+lO36/RNuyfept+b1r7Zh5/RaapoM0rvCO3XnAapJfaA4JLBAcV6/4+Xf+XSa6F8ABO3am08/ZIuYzV4mPo5ovsW9+Tu7Nv1h8oc07buGC2ClG4mH3RP07MvB+t7R/EZxVl9rkS3PrYq4UyLC5Wy1uSNwsLfX07L5tvtK7Re5bw06txndYp5txiG8ZbCBrthRAQb7IYREdqu2X2dzuoi8Gy/sfj8mgUAYmWKMSe9FaOyzPVseLwGmTRT02SP7tbthDavIqnN2wh8CUXOydkTWl+Vtmqdx/nE4JWp+uZz2q7OuppTRyfIxly7SfuY++sebpFSRy3wuZOsb71DrdO6et2gvmD5Z9frfmozPGLv1osqs2PhDvEL+nr59xUA6hv1Wk02p+3b1aNe3rgecrCga5KmkkzDvTpO/9RJnWO8cS4MiJchfQ0alYXzA7R+36lrpflt6Ql/fYvLTvnnmHeLYRhvKRYd7CKSEZEXRORlEdkvIl9o/n27iDwvIodE5NsiklrsWIZhdI6lTOMrAO5xzs2KSBLAT0TkbwD8PoAvO+ceEZG/AnA/gL9c6EAi2jyRzegpbQGhaYNnlWx6kxE9TeKUwrUZqgIbeNVQaOrX6GLTmn5rjNxOhcwbsfmzE6E8QmmSaPpWy+t+vuPmE5de7z+oTVSo6fNu/ZE+1vF/TlV0qC9q6r7EKiLz7e+Chbf7nNunw1Kv/9I/qvap779DtYt0TXyzVGOQqtycJDNUniqtVrRd1fWG79+0Tdvpzoz1q3aVJNbpMW02bXHF9frZ3a0lwMwZnXba0b3sIhPs7IyuwONXqwlIEjT86kjuKkxvbo6LPUk2/zkA9wD4bvPvDwP46GLHMgyjcyxJs4tIXET2AhgH8CSAwwCmnHMXfyZPAtg8z3sfEJHdIrK7MVO43C6GYbSBJQ1251zDOXcbgC0A7gBw88LvUO99yDm30zm3M96bW/wNhmGsCssyvTnnpkTkGQDvAdAvIonm030LgFOLvr8uqE2F5onkCIljT28kKIyvUiQTFUmTJJudyP0wtqHs7Uspgip8bDpWgXThKFWU9cJr46QZYyNau6XT+r2Nhv69fXWf5xJLmnBgizZJnfplKi8bUDptZrk6/QqPxWaj7hunVLvx/nerduUNrdHvu3ePao+XQ9390qFtalttu76+OE96n9KE9faGdtTTZ3SZKnC5pwJVXqWquLx/ohC281Ndel+6Xux+PEv7s2k00RV+bwL6Por/nboa05uIjIhIf/N1FsAHARwA8AyAjzV32wXgscWOZRhG51jKk30jgIdFJI65H4fvOOeeEJFXATwiIn8CYA+Ar65iPw3DuEoWHezOuVcA3H6Zvx/BnH43DOMaoK3usrFkgJ4N+UttTqfbmA3bQVorDA7di6fJXk3HylBpnls3hUsK13WdV9temdaGhDsHj162/xc5WtS5pP/fm2FoaSVJYZBvantpbRulOj6tt2e8dM/v+81X1DZOwRzbVtTnIhtrg8obX00asOXAepPLF8/8h7xq14/T94A+x0uvhddXyA06nqc2pZZ2M7qmU34g7Aunjmb7dUBlv4VdVkkENzKeOzL1E+wOu0AoKgDEs1SOy0tzJVRyPHDe9WU3aA9zlzWMiGCD3TAigg12w4gIbQ9x9VPoVqa1nsoMemmTKBWvq+l2QNpk3bAOoxzMaj3
7wcFXL71u0G/c3T0H9LHi2k/56cLbVbsnqW27Wc/OWbmgNXhylkobP6cdi6r9qqlCYH+4X5cy3vK4vgalIf05+j6hXR2OnqI6VW0iRumduVzxl+/4ump/urhr4QN6tmNOefXxD/1Etb9zQNvwa/Q9SnhrPb09+jtSzlI48intV5+mMlX1Hfr9df9cLSHEFNvB9nDS9Jz2S7z1LEcpq+J+abS4hbgaRuSxwW4YEcEGu2FEhLZq9qAR07HKpFv89MYBlXOK9Wqf8lRa2yEvTGl99b4NOj9xzavLfEtGa9tf1ksHKJINdCp7TLUb5Og8mAu125TrVdsqO3ROqxKlL05Oax3opydmO/nMqN53Zgddg0MbVFuynI47fL2cePSloGzrZM9P5vS9+/f7Pqnad73tiGrvm9DloH5xx7FLr1/af53a9oNj71TtFm1MGtYvbzyd1/7ojWm9thCj70FliEqMk4+6r5djGX3t2a7uJvW5WIfHKE218/zyW/T+AjHs6phL2sswjGseG+yGERFssBtGRGivnd0Bzvc/Zunh65rkwnbHeJcWnYmE1lNTNa3HDiPMg9YT13by/zKr7dHDSe27HZAmmqTcx0cOh6mROYV18py2u7PcSmhTLUrJcAGh/xBpxg/oePZtPTrzz+++Tducv7D7N1Xb1+msIVnbtpSDaqmVrJuJVHj9OUb/X//Cj1X7Gw/dp9o/eqde52D/9zMIc79xGeqZcb1W07tO+0iU6N5VvBJOsRn99U/l9b71bvqQ/VRKitaVEp6fSC1P+VfJ9yC9Qd/46rj+vvL9CLzPLSn2s8eSsCe7YUQEG+yGERHaO40XQFLhnCOe0lPv+kw49eGpnDhtdqpQiCtXjPnZuA5bffe60Ny2t6CroaZj2kzy7IQOJd19Yqtqp6jfUg77FlD12Dpl3sJGqkLarU1z6efDVFPVPnrvbv2H//TpR1T75ZJO2XTr206q9t6j3udYxFrD1xM0VWypwOuZCVMHtXT51tO/ptql6/R7uw/q6XBhq75ofdtC+TJzpF/3g74H2wcmVPvlSZ2OW4rhvUpu0jKIU0djitI/TeipeW6UKtecCuWI9OgpP9Nyfcm0zBVk/Kov/F5OAzYf9mQ3jIhgg90wIoINdsOICG3V7BJzSHhurvXK/Kd33eRuSBoxlWQxrMkm9fuP5IcuvY6JTsE8VdIas0ippeuU8qo2rVNPdZ31wnY5qnSTNvM1yF22dFinM64PhuL4/e/7udr2o5d1yOu/+r+fVu2B7ZOqHf/ukGoP/Yuw3FEqoa/P6RN632SPXlvgdYrCeW0q6n01/Fyz27TAL1Wp4u512uxUHNP+yskZvb+fpsrR4ylB/Xr5gF63aMFz660fn780FACAUkc70tH5STKXeTqdU6EHDf3eJH1/43G6ZnR96941kASVFFtiujF7shtGRLDBbhgRwQa7YUSEtmp253RaKnbZzAyFNmcO7+SQVnbJ3Dak7aunprVNulQMdXaDw2cpBBMXtCbn9YMYpxT2CNIUqnhKrweM3nZatY811qt294bQ3XOySq625HvgqDzU1EG9FtG1Tl/fbq+s9Z0jx9S2v/17HR674d4x1T79Y22vHhjXn3PytvAart+q1w7OQS9k5Lr0ekC+j1JJN3R7U2+YcuyNhHatdce1to1tpnJQLe4C4b0PhL5/g/q9ZSr7HSfbeaOg+xnzdHqtoG3y7P5aht4+MqjTqnFJMt8lNrjCFOH2ZDeMiGCD3TAigg12w4gIbU8lrdICk9bwfXzTGa2PAk7rQzJlfFbbTJNxbcdUHuj0XhmjMr+k81JjpPEpzDLhSb3kJNmIr9e+72entOZMD+ntRS8E842f3KS2Dd9zTrUvTOrP3LdX386uj5xV7Xw5PPa+qU1q230fe061H33un6h27rYp3e+M1t3Dnh05m9T3LlbT16v6Sr9qY6suw8S+CScmw/233DCutr19QH/Gl8/rmIh4TK9rnGl4527o+1qmNOC
JXioPxWHB5JcfTHk6nMs9NWgth9agWnzlCeetUbWEIy8Re7IbRkRYSn32rSLyjIi8KiL7ReSzzb8PisiTInKw+f/AYscyDKNzLOXJXgfwB865WwDcCeAzInILgAcBPOWcuxHAU822YRhrlKXUZz8D4EzzdV5EDgDYDOAjAO5u7vYwgGcBfG7R43m+ybHk/Pl0SiVth2xQGZ9UTuupUkXvz77GfgphVyb7KZX5DSijkJAbPoXWo7Az1N0BxTxnyEe6dE7bhXfs0GmtXz8SplHuulfr05mfrlNtt45Scb1H6+gE+WMPdIX93NGn7ehHC9o3PrtBp3divVqt62Nv6wtt65MV8hmn27zzA7rc1j/u1vkDuGRzsRje+zTFPLwxra8Ja/SJvE4h5sdmBOf0vQpGKH2z7jbcFKV/Zvu2vx7VRb4ZpLNrlIb6vNPrL40yfcn8TN2JJeahIpal2UVkFMDtAJ4HsL75QwAAZwGsn+99hmF0niUPdhHpBvA9AL/nnFPuPs45h9Yfwovve0BEdovI7sZM4XK7GIbRBpY02EUkibmB/k3n3Pebfx4TkY3N7RsBjF/uvc65h5xzO51zO+O9ucvtYhhGG1hUs4uIAPgqgAPOuT/1Nj0OYBeALzb/f2xJZ/RLENFcoFoKdQzHKSe6tR5lTZ6gdn5K60bnlfplO6WQRqyTdqtTmWD2UYen5VxW97tMPtKZMX3JD5Z1fjtkw88xU9Rx3qVRvU6R6dXXpHJa/5jGhvXnfPO1UGnlt+t+TZLNnuOx37VZ+/R3JXRffnLwhvC9FKtdz1KZr0Dr0Xfeely19x3TPgB+/rWulL43x6ksdZrWcqqUMyGY8bRyN2lfWpdoKatEvvE5ivkve+tMmezC/Yjl9LlrJfLl4ByL3LcrYClONe8F8DsAfi4ie5t/+yPMDfLviMj9AI4D+PhV98YwjFVjKavxP8H8uUjvXdnuGIaxWrTfXXYBUtlwmsSmHr+SBwCkaVpfou3JrJ5yqWlSn95W7aVpOoUugk1vOUolXfCr3OhjxckVN3i7Nmn1Zclc5smR3x59UW37wZl3qfapCR3GyxJiQ05XthnrCfevk1kuTdcrm9bT0MMTero8PU3mtQvhFPadv6gr6O6ZHVXtF1+5XrVBobrrN07pfp/tv/T61KwO4+VVJw5LjU3reynD4ediucHUZqmqa5ZdXPX+fvrnYl33o9U1XH9P+PvKKdskfmXmNh9zlzWMiGCD3TAigg12w4gIa0qz17yUzaxx4mSKqJKpoqdPh4rmZ3S4oo+vLwEA67Rudizl+CeRTHHOq9ApXNKK1gPSXMaVGO0L02s9dvpWtW22QjqQlk2F0mW9cpRKH3lrEfVe/aGqJ7XZrtxPenVSX+9kkcJ83x76WZ2e1WsJ6T4dslqZ0Pemq1/fuwtT2gwILxV1elDvu9j3YDatzZd+KmnWzY2WSsGUDprue3GS0ob5pc1a0j0vrLlZo8dSK296sye7YUQEG+yGERFssBtGRFhTmt2nJc00uR/WqCRTtabbbEOt+SGyQ/pYcVofCBoU8pohEU/b46XwN7OR0Lq5+7Du12xC6+4N/doW/rM3Q/fZ9ZReeDCryybdvemgah/doMNU953ZqNr186G+Lc5QyaUNWuumDmgNP/RPdfqnYpXKGXtrEbyNfSaQ0vp168CUarOWPp4I86JUyI4udKw6pRjndY1EJtThVXZlJvfXDLnm5mdpHYjCbX1avkN53e9Uvz5Xlv1G8vr+cMmnK8Ge7IYREWywG0ZEsMFuGBFhzWp2JpfROnuGtBn7KddrWjv7Ya1ulssNkd28n8pBkf1ayN7qp0pmE31pvdZamT6tzd48q329e3pD7Rwnm/ydw0dV+9Gj2lf+5mGdUqBaphJCg6HGFLoGdU6btF3388wBSok1oK/RLaNhCOzpGZ0uu69b29nPT2k9OpbvUe0c+eWruAgqfcT6n/V+QLbzmJdinP3Rha53jeIHeDufK5cLP+c
sa25a9+H1Asd9WQGNztiT3TAigg12w4gINtgNIyJcM5qdSx0FVUq1yy7nC5WxTdA2Ksks02RDZk1Pttv6plDfcmnpoJvtvqT7KGVwOR2ee+a8tnX/rzfuUu2erdoOf3RK29kdrTUk/NjuLdquXp+hVNyUgimgx0Lsgr5Gr0po079pq05TzaW5Rrik8+l+1Z6mOAj48pVTitF9rnM5Y9K+tWL4OWPsi0F+9tlevdbA/u0NWhOp+L4fnOGKvq8JLhPeBuzJbhgRwQa7YUSEa2Yaz/D0jaftjqbeysWVK2xWF3axjG/Qbqr+VBAA4t7xGid1uqYgRdPMjJ76Jbrnn87FZvTtiW/UU+8YSYJCWfdLSCK4zeG0lCvs8PVrFGk7mY7I8olub8o7VdZupZPntWkt2aVNa8KVgci9NuadO5jV14+zDLe4x9JU3XmVgXjaDqo2W+GMr/S94YpGvgt3QNeP3XqZBl/QVcCe7IYREWywG0ZEsMFuGBHh2tXsZFIJKvO7xwLaXNayjStmkimuQRVLwRLTd8lM62MnhrXOdmTD4lDIRj28Jbnt2rQ2O6W1cJ5dQ8nslFinz+1r1PgEVUqhfqOH0ibTugaH+c6Ohea1Ypn27SITF6doJtfd/usnVNsPLY2Tu3FA14DNpKzZ/VTTCUoNzcs8bD6r8zrHAubeFo3OYdQrkGZqudiT3TAigg12w4gINtgNIyJcs5qd4dS7whLT06uObfSLVMx0nGKYyiz1dIfaeKqmbco1CmVknZghm7Nvb02Qeyanc45tpGNRaqPq6zrUVDaG5wrIxMwlsdjWHc8tXPrI168BH5zt5pSiWehzzVL1Wv+atKRUJtdm9OrP0VLiyXt/nX0JaM3DgdaBqN+81uPf2zqtIXFq6ZVIDb1c7MluGBHBBrthRIRFB7uIfE1ExkVkn/e3QRF5UkQONv8fWOgYhmF0nqVo9q8D+HMA3/D+9iCAp5xzXxSRB5vtz61895ZBi42URKXnX53q0rqO/ZLZXs3aLE7vn57MedsoxJX0PutEtrP3emmpZqiEVdBPx57U6YlradL0WboGvq89+Rqwru4bKKj2bIHKKHHoQSE8d4x0M/uzu2kqv7VBh5KyX4PyfyeXh1qO007pz1WmFM7qc3N6MYpTYP921t31BGv8+emERmcWfbI75/4BwAT9+SMAHm6+fhjAR1e2W4ZhrDRXqtnXO+fONF+fBbB+vh1F5AER2S0iuxv5wny7GYaxylz1Ap1zzmGBGYxz7iHn3E7n3M54T26+3QzDWGWu1M4+JiIbnXNnRGQjgPFF39FmWCP5cc7VSa0/Y5wiiH662M+ZNb4fO99osKikWHmKC8/1aL1a9kondZHdnFMXF8bpx5P8B2SdPnbgpZ7ifsTZT0EW9uVm+3XDl9UUYx5wKq4M2ZwLlA5qWOcPqHjpn2L0GdN0jVjvO7Zvez7+Qv4VfGzW8PUSDReOa/DXLbLzl4bqFFf6ZH8cwK7m610AHluZ7hiGsVosxfT2LQA/BbBDRE6KyP0AvgjggyJyEMAHmm3DMNYwi07jnXOfnGfTvSvcF8MwVpG3jG/8Yvjx7y7FJYI4Fn6RWGTS4fBjvSkvGWtjR7bd/FntS58eDO3sQzmtXU+e075LiV7tV880yF8g7u0fXKDSx8NaY5Yq2hbOqburZDuHd305xrwlLTVdP15fKZFtPOFp63qe0nz36esdT9BaAvXbX3/h9ZI66312RUhTzASvCy2UvnwNYO6yhhERbLAbRkSIzDTeh8Nh2eTCUz/HaZYYb6qezumpdZXSTrNbKofX5rKhKenU+X7dD5oltoRNsslQ5p92uvTC1U/9tMgAkOLP5Sidtmdq4ik/h96WZrUbcLJXb+eSO346reyQTrVVmtFm1ESmTm0yn3mfqzhLUoZuM19PTk/O6c3WOvZkN4yIYIPdMCKCDXbDiAiR1OxMS1pf0vCpfq0pa5X5UwqzuyaXCGpxHSXzWKEU6shUWuvNckn
r5BbXUK4wxJVEfc2ZWHidgkOEG3RuPrbf7OnXJsOZya75dwYQkItxwKGnnmtumdZAeC0hWKSMku8GzKa32eks767fe41pdMae7IYREWywG0ZEsMFuGBHBNPtlYLfHeo3LP1G4p2fL7e/RduAiuZ1yeGylOH8J4mpl4fLENXovlz5OJLXNuVbzyj+Rfb+lRDPrf9KrrF/99YU8ad8YpXsOcuRCzPZsTgvmbU9Q2q86rTWw6zP7VDjv+tdqVFb5Gtfki2FPdsOICDbYDSMi2GA3jIhgmv0K4HLRfhnmqRm2KWsd6Pt5A63aueT5a7ONPpulUlGkbdl+nSK96peialCoKFpKDOsmp11OU+lkH0eljxyVbOZ0ULxGEs/Mv57APvz8mfl6csirr/ir7C/xFsee7IYREWywG0ZEsMFuGBEhWqJlldC+9YukLuIqQG7+1EYx0vuFPJVg4n6QzblQo/39dFCs0UmTc/rsNMWcV8v6q+Nr6WSPXluo5bWvAZdo4nJbjHhrD1wuK0brJ6mUtsOzr0KUsSe7YUQEG+yGERFssBtGRDDNvsq0pBemNseg+7qb/bxbKuqxyZn0K2vjhqd32R4dxBb+3WeNzusDvm28zvbr+MKfuSUvH12jRGr+Y/OaiF8qytDYk90wIoINdsOICDaN7zQ8g6XptQ9PWXlaz6ml61Qd1Z9Oc5VWTpOc7OIUzBQ6yiGyXlWclvDZRSQAT+PjVFU38NyRWaoYS8ee7IYREWywG0ZEuKrBLiL3icjrInJIRB5cqU4ZhrHyXLFmF5E4gL8A8EEAJwG8KCKPO+deXanOGZrFqoRySuxYtj7Pnpep8EphpWziSma1jq5O69JJvousr7GB1tRQi9GyNmGsCFfzZL8DwCHn3BHnXBXAIwA+sjLdMgxjpbmawb4ZwAmvfbL5N4WIPCAiu0VkdyNfuIrTGYZxNaz6Ap1z7iHn3E7n3M54T261T2cYxjxcjZ39FICtXntL82/zUj16+vyxT/3xcQDDAM5fxblXC+vX8rB+LY929GvbfBvEcdLuJSIiCQBvALgXc4P8RQC/7Zzbv4T37nbO7byiE68i1q/lYf1aHp3u1xU/2Z1zdRH5dwD+DkAcwNeWMtANw+gMV+Uu65z7awB/vUJ9MQxjFemUB91DHTrvYli/lof1a3l0tF9XrNkNw7i2MN94w4gINtgNIyK0dbCvpcAZEfmaiIyLyD7vb4Mi8qSIHGz+P9DmPm0VkWdE5FUR2S8in10L/Wr2ISMiL4jIy82+faH59+0i8nzznn5bRFKLHWuV+hcXkT0i8sRa6ZeIHBORn4vIXhHZ3fxbx+5l2wa7FzjzIQC3APikiNzSrvNfhq8DuI/+9iCAp5xzNwJ4qtluJ3UAf+CcuwXAnQA+07xGne4XAFQA3OOcuxXAbQDuE5E7AXwJwJedczcAmARwfwf6BgCfBXDAa6+Vfr3fOXebZ1/v3L10zrXlH4D3APg7r/15AJ9v1/nn6dMogH1e+3UAG5uvNwJ4vcP9ewxzUYVrrV9dAH4G4Jcw5xGWuNw9bmN/tmBu4NwD4AnM5f9ZC/06BmCY/taxe9nOafySAmc6zHrn3Jnm67MA1neqIyIyCuB2AM+vlX41p8p7AYwDeBLAYQBTzrmLsbSduqd/BuAPAVzMWTW0RvrlAPxQRF4SkQeaf+vYvbQcdPPgnHPCidrahIh0A/gegN9zzs2IVzOqk/1yzjUA3CYi/QAeBXBzJ/rhIyK/AWDcOfeSiNzd4e4wdznnTonIOgBPishr/sZ238t2PtmXHTjTAcZEZCMANP8fb3cHRCSJuYH+Tefc99dKv3ycc1MAnsHc9Li/GScBdOaevhfAb4nIMczlVLgHwFfWQL/gnDvV/H8ccz+Od6CD97Kdg/1FADc2V0lTAD4B4PE2nn8pPA5gV/P1Lsxp5rYhc4/wrwI44Jz707XSr2bfRppPdIhIFnNrCQcwN+g/1qm+Oec+75zb4pwbxdx36mn
n3Kc63S8RyYlIz8XXAH4VwD508l62ecHiw5iLlDsM4I/bvWBCffkWgDMAapjTdPdjTus9BeAggB8BGGxzn+7CnM57BcDe5r8Pd7pfzb69C8CeZt/2AfjPzb9fB+AFAIcA/B8A6Q7e07sBPLEW+tU8/8vNf/svft87eS/NXdYwIoJ50BlGRLDBbhgRwQa7YUQEG+yGERFssBtGRLDBbhgRwQa7YUSE/w/VwJHaGivXHAAAAABJRU5ErkJggg==\n",
+      "text/plain": [
+       "<Figure size 432x288 with 1 Axes>"
+      ]
+     },
+     "metadata": {
+      "needs_background": "light"
+     },
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "data = pipeline.get_data('pca_median')\n",
+    "plt.imshow(data[14, ], origin='lower')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's also have a look at the PCA basis that was stored at the *pca_basis* tag. Here we plot the second principal component."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 24,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "<matplotlib.image.AxesImage at 0x145fba810>"
+      ]
+     },
+     "execution_count": 24,
+     "metadata": {},
+     "output_type": "execute_result"
+    },
+    {
+     "data": {
+      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAD4CAYAAAAq5pAIAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/Il7ecAAAACXBIWXMAAAsTAAALEwEAmpwYAAAckklEQVR4nO2dXYxcR5XH/6d7vuzxZCeOjeO1EwLCLMrD4qysbBDRCpIFZQMieUAIFq38YMkPy0pBIEGyrFZC4gFe+HhAIGuD8ANLwqcSRbDgNUa7SKuQCXEgickmMclix/GE2E7GHntmuvvsQ197bp07XdU19/bHTP1/Umu6uu5H9b19pup/z6lToqoghKx/aoNuACGkP9DYCUkEGjshiUBjJyQRaOyEJMJIP09Wn5rUka3T/TwlIUnRePUcmnMXZKW6vhr7yNZp/PkXPtHPUxKSFC//y9c71nEYT0gi0NgJSYS+DuNJtYhRZsFgSE+96ooyz3PuiMjLuEOTHsGenZBEoLETkgg0dkISgZq931ip69OzgW0LGj0go2N1+aqhRh9K2LMTkgg0dkISgcP4tUwP845EudaASofu0S5F0hXs2QlJBBo7IYlAYyckEajZhw2fPjV1fXOlrcRada/lr+Fa/Q6rhD07IYlAYyckEWjshCQCNfug8YXE9tG/XPCri63vX1ssZc5d8NEnptPzsGcnJBFo7IQkQlfDeBF5EcAcgCaAhqruEZHNAB4EcAOAFwF8RFXP9qaZhJCyxPTs71XV3aq6JyvfC+Cwqu4CcDgrr0/UvNYBIuq8IHBf9juLdv8qnCxQ30NEOr9itu1m/2GnzDD+LgAHs/cHAdxdujWEkJ7RrbErgJ+LyOMisj/7bJuqnsrevwJg20o7ish+EZkRkZnm3IWSzSWErJZuXW+3qupJEXkTgEMi8vt8paqqdJgTqaoHABwAgPG37lgng2BC1h5dGbuqnsz+zorIjwHcDOC0iGxX1VMish3AbA/bWS2D9L3acw3Jv7+CBrV+9qiD+b+Uxur2KucA5M4tJW/8Wpt3HxzGi8ikiExdfg/g/QCeAvAwgL3ZZnsBPNSrRhJCytNNz74NwI+l/W9sBMC/q+p/iMhjAL4nIvsAvATgI71rJiGkLEFjV9XjAN65wuevAbi9F40ihFRPOrHxZfTUsPpUY79Tie8RzEmXrzca2867Dz62KKPRo5alsgkC7IOKUG5u871yxWHU7wyXJSQRaOyEJML6Hcb7hlFDNF2z1GgvNB4uTFPVjnV2yFozZamZsql3hup2+NtC523LUiL8tnD5KgzlHUa3HHt2QhKBxk5IItDYCUmE9aPZexkCW+LYoamQvnqr86rUuva8BQlvNHotoNlbOV1eaKdP3wPVhsMaBvZ4xj7zgHVH9rMxbdizE5IINHZCEoHGTkgirB/NHqBfoYxl0xX52hYKWdWQQs1XW00ZKJciFC4bClt1drZa2I/ve5R+BlJlaG4fYM9OSCLQ2AlJBBo7IYmwfjR7lcsVRcpV8Wjh0LRJbfn0qf+8hX19x4L5WjZ8vW42brr9gPW7e/VuYIqrJTjl1Ukl5afMs4Z+ZgwbROw8e3ZCEoHGTkgi0NgJSYT1o9ljGIK5xZeJio03mlybAc1uv2ctV183k8ztua2GD4hK/3OLiH2xwrOIGB0e4b/2zslHdLoAf7MKJ+9/7Dx7dkISgcZOSCLQ2AlJhDWr2UN+dJ/mKc6ntge357Jx43b7zn7gYF4zn1/eVjXM/+ZFUzYyXKy/O+cr17qpG7H61T2Y1Nzt7fz2/PeomecBtUCXUvCNm+cF+fsV0rIt+yiihIYvUEgd3VnzF44VaEc/HiOxZyckEWjshCTC0A7jy04V9Y6Lyq6k0sOQzPwwVZv+YbssmWGlccUVInVzd1vtMDxAIY2VuQb1keXx88hI06kbqfndfJaWaXjLGcabulCIsFfORTWr6GqLmG4bcvP1wxXHnp2
QRKCxE5IIXRu7iNRF5AkReSQrv0VEHhWR50XkQREZ610zCSFlidHs9wA4BuCqrPwlAF9R1QdE5JsA9gH4RsXt65oq0y4HXW0xxwrUF0Jgc+41tRp90bi/jGYvhMtad1peOttQ20K73HO3zLEwaoq1xvL7uqvZN4wtec9lWWq6vrdm7nu1TLuWrJ/OUPxd5EuRNzaYFsz3gMDvtitMC+5BGquuenYR2QngAwD+LSsLgNsA/CDb5CCAu0u3hhDSM7odxn8VwGewHLZxDYBzqnr53/kJADtW2lFE9ovIjIjMNOculGkrIaQEQWMXkQ8CmFXVx1dzAlU9oKp7VHVPfWpyNYcghFRAN5r93QA+JCJ3AphAW7N/DcC0iIxkvftOACfLNiZGGxf8jiE/ZMwSzgG/ekz6oqB/1UqzvJYO+ZDNv2ob8lrQ7PXuNSUaVkMaX7nR7CO5ENmrJhacuk1jbtkuB2396gtN92e50FguLxo9b69vwz5rMGUnjiEUwhqst590H9Yb+0wpnxbMm8rMQ7BnV9X7VHWnqt4A4KMAfqGqHwdwBMCHs832AnhoVS0ghPSFMn72zwL4lIg8j7aGv7+aJhFCekFUuKyq/hLAL7P3xwHcXH2TCCG9YGhj4ytfLie3e2gqY2g5Y8+hgxRj4SPSQdupnza9s9XshnzsfDGu3jbMLTbNs1Udc3eo5+Lfrxq/5NRNj8075VGj/xda7s/wktHs52U8V3Jjtwr3csnd134t9/mA/3mKauB5i/fpjTlW4eEMl38ihPQIGjshiUBjJyQRBqrZvX71ijWNo+2CIjyk6X3rKgf2tTHSBd2d07M2dVQoZt/Guy91jq2vXwz4kG3Iudl8bKLhlN+06fyV99dvPOvUbR2bc8qj5gHB+ea4t3ymvvzA4A0Td39hydXw1oe/2DAavpX3hXeeN79iOTAtP8Z3XiaNdeE306XfnT07IYlAYyckEYbX9RZLwV1ms5/m3kammQpmHc1nlw2szmmPZTOvyvjyMNUOSS1Nk7ZKL7kxrPU5dyw+Mr/cuJGL7rEaG83wd9KknZpyp6lev9kdqr/z6uVo6bdPvOLUTddd15vl1caUUz7T2OSU89chNNRumvDYZiEl1nJ9y96cVqDvs8l8bTZf6d6tF6Jw5/P7rzItGnt2QhKBxk5IItDYCUmEgWr2op7tft+QNo7NOBS1q8cFVtT3/pTB9ZpdPaWVe+9uu7TkavClS+7tsxp9/Ix77rxOt661lv0lTC86xTdve80p37LlD075XZPPX3l/3cg5p26jcbVdMnNzX667SU1O1q52yku5xl4cdZ9LLLbcL7JQN9NjzZTY/PW2btDCkjp2eqzZWiJ+sMHfq93elIMrC3UBe3ZCEoHGTkgi0NgJSYR142cvs6prlRTCHgNay7dUkvUhNxomJdOce/vGzrnbj581Pv1chOvin9kVXt121UZd/bphxPWz25DXUVk++FTNrdtWd8Nfm+ZmTMjrpt5tzJnmst/9tSV/HkOblsoXEmvravY3ZJ6ntJr+tNX5e12cLgtTtlNgzbF85zHlbn/a7NkJSQQaOyGJQGMnJBHWrmaPTFvlaPpAauiCzg4t++Otdo9erxu/uufYS0ajN8+7Pubxs8avfs7d38a/533rF24ylRYzbfLYiWu95fHdy5p+qvZ7p26z0fCbahNOeVrd6bJjhRxZyzRMgMCiCRCwS0ctmfkDjVy9vfbBORB9JGbF8fxvyvfsij07IYlAYyckEWjshCTC2tXslpCG9+ixoFYLHCu/xFBBMtn0Q7XOfl9bXjDz02vnXT06atbJHLlo4u4X3fLsB9xlmKrkm0f/5sr7/75ul1P3jzuOOOVbJ1y/es0Izaa5ivO5NFVvLLl6/2LDvUYLDb+Gz8cqhJ6fFOYt2PsemgeRwy6BHcT3DIrz2QkhPmjshCQCjZ2QRBisZq9wDnoBn0aPPFRxmWUT15xPTxx5Nuujb+Y0ZeuCiX2fN7Hcbrh64eS
91Og+nv7jdqf8yMbdTnli82NuWdwv8sylnU75hfmtV96fmr/KqTs7v8EpX1xwU0s3G51j5ZtGs9t5CiFl3GrZ+HfpXBc4Vj+Wg2LPTkgiBI1dRCZE5Nci8qSIPC0in88+f4uIPCoiz4vIgyIyFjoWIWRwdDOMXwBwm6qeF5FRAL8SkZ8C+BSAr6jqAyLyTQD7AHyjh23tG6G0v3aIlh+jFVbnMMUm/FMwG7lUU2JXdDErvthmnrndXT11WPjpszc65bfddNq7/W/euN4pH399y5X3r190XW+Li2bFF5te266SkyuquRc2DVhhNd/AKi7530XB1RaY8mrxpbGKSd+WJ9iza5vLa/uMZi8FcBuAH2SfHwRw9+qaQAjpB11pdhGpi8hRALMADgF4AcA51SszGE4A2NFh3/0iMiMiM825CyttQgjpA10Zu6o2VXU3gJ0Abgbwjm5PoKoHVHWPqu6pT/mzjBBCekeU601Vz4nIEQDvAjAtIiNZ774TwEn/3itQpbchIjy2sGsgRZDP1QbAmQ4aWlHTpgRWoxPRyO1vMxuPuds2J3rvrukFP/jjXzlle71nz7jutcaF0c4b2+sXytmU697EuN6aBU1udjXnKrjXPCvExqZF60UatW6exm8Vkens/QYA7wNwDMARAB/ONtsL4KHqm0cIqYpuevbtAA6KSB3tfw7fU9VHROQZAA+IyBcAPAHg/h62kxBSkqCxq+pvAdy0wufH0dbvhJA1wPqZ4hpBrBxSu+pvIS2w5+ABv3tB4+d0oRqN3hDTEJv/eY1w8qVrnHLdTt2dc7/XSO4y2KWlm5vca6Kj9hqhY1mNw9pONy7cO4MvXLZqze2uBr2651Nr89dCCImGxk5IItDYCUmEgWr21cb4liZyOmFoiqtTtrHYxXV+TdnU5zS7TJiUym5YOBo19/atFa/79JMmJfbrrs6uL7nlhdxSVfPXGp09bpexCq0Dlntf6+wnB1C4Vy37OMC31FRZzd6Dm8menZBEoLETkgg0dkISYbj87D5/YUg3h7bPV0XOJe4pJt66Nr6s08fG3WWRJsbc9E0Lm9zbZ1NPDytbj7qzH0fOuUtRtcbd76Fvn7ry/uLWyJsTs3koo3hgOegyvvXYpb477etrA3t2QhKBxk5IItDYCUmE4dLsPiKXaI7CHtqeujAH3bN/SPAX1od2i/k502OjrmafHF90yuOmfnbezfkpdq73gNj1VfdZQ+0PLztlveSmvK7tcJeDhixrdrNCM7QemM9egkKsfEijD8fl7gh7dkISgcZOSCIM7zC+RJqpFQ/nScUbdJkEUgo70ybtsDLk5hsxq5KMLrveNk24w9trJ99wyhtH3GH9zqlzTvmZ0+5wuF+uuR3fd89TP/2qU268dsYp1ybd3ITNzW750tXLN8BOcdVRO4yPWM03JHMCw3ZfCrIY11n09lzFlRDig8ZOSCLQ2AlJhMFq9pI6POpUPpeMf2ZjuJ1OtUmTZHSddYfVrWbPrSQ6WnPraqYd4zV3CuymifNOGdtecYon5qavvH/1zBSqpH5ief5tY4PR1Rvdubn1aza7O1+71Smev36jU750TW7l1Ukz7dfq7tBPKp9KOuAGDWr0MqsQhzy0sb/BLmDPTkgi0NgJSQQaOyGJMFjNHuNLrzI81hAb4Vog/y/TfgcbDuvxqwPAaH25XK/ZHNYuo0azT4/MO+WpTe4Szjs3nlsubHOPVQgNNRw/v8UpP3fa1dnINaXlZobG0rXu8wG5xvWjn79ug1Oeu97tgxY2L1+Hgl89kO65cD/ycRBRz2JWcS6nzr9r1LEMhSXEO8CenZBEoLETkgg0dkISYXhj43tJQA8VXJyFpXw7P2sQs7eYWPm6WSbYlkdyZetXt7q6GdDZW0bnnPJUbVnDT9fn7eYO55qur3vBzC39v/Fppzw/sdzWi1vcPqQ16vrZraZf2Ox+j0tbzffekLtGMbHvgKvR4c5FCMWjF+5zhQwijTp7dkISoZv
12a8TkSMi8oyIPC0i92SfbxaRQyLyXPb36t43lxCyWrrp2RsAPq2qNwK4BcAnRORGAPcCOKyquwAczsqEkCGlm/XZTwE4lb2fE5FjAHYAuAvAe7LNDgL4JYDPlmpNGY0U4ZcMnSV2LnLMsWo2dbTxped961ajX2q688RfX3L904tGV1udfba27N8+VZt26paMkD69cJVTPj7nLrM8Pz/ulPNNXTRjvMakWUbJTKtvTrjXpLnRE18QWAK7uJyWqY5J1eVbmhtx8y2CBH3+5UV+lGYXkRsA3ATgUQDbsn8EAPAKCmEahJBhomtjF5FNAH4I4JOq6qRMUVVFh/lGIrJfRGZEZKY5d2GlTQghfaArYxeRUbQN/Tuq+qPs49Misj2r3w5gdqV9VfWAqu5R1T31qcmVNiGE9IGgZhcRAXA/gGOq+uVc1cMA9gL4Yvb3oSobFqWPYo9tyqUPHaGnQvnu8r7dplluyGr4hqlvmiWIl9Q4tHNS2Or5801Xg7+24P5jvrDopqm2c7t1fPmLLU0ZzW23HTFxC3V/2cGzxDUAYKSzXx3w/47UPxWh1Bzz4O+34pyLK9FNUM27AfwDgN+JyNHss39G28i/JyL7ALwE4COVt44QUhndPI3/FTp3frdX2xxCSK8YaLhsmVUviweLGIzb8NfAsULNdL6H3dgMre10xGbTHYovNpaH3tZtV6+5tyvkmptvuEPv/PbWTXd+0R3Gv37RDXG9YFxtrUuuRMg31aoHG7KqdujtW2HHYve14bAm/Njnaov+/Zn74Tt2adlZYhXiTjBclpBEoLETkgg0dkISIc0prgF9r7Ghi3m9autaZrqmuP9fG+5CrM6U13rTFb9LgTRV9txW0+fLC0331s8vuXp/Ycmtt88WCufOa+eQb9NePqt9fe61kGutFtDV+WtSCIcNhOJaBjBNtQzs2QlJBBo7IYlAYyckEZLR7D7fZMEnGtL0Rjo7Ws+GhhaEnbtz0/y/Xcpp5ZAkXDLhsjaNlSUffpv35wNFjb644Gr41qJxnjc9SyGZLqTwDMR2MQW/e2dfuk3zVfOF1gYIpZ0qTHW2kt73PCC6Lfbc/vrVwJ6dkESgsROSCDR2QhIhGc3uI1YPFbSeb9UqW1fww7vFZk4LN+s2bt7saqVuQDI2c88TCtNjA370YormzpuqvSC2XVZnB3zjeZ0enGZauDfmGUp+9adQPEVoKe81Bnt2QhKBxk5IItDYCUmEdavZvTq88rxUERQ0vCnmtHSjYdM1udva2Hf7LMG3bJXV6C1TLvh9jY4uXF5ftxGYB17wV/v82aEYiGDyAU9d7O+gQr96bP1qYM9OSCLQ2AlJhDUzjK8022zJYXtheOwc0AxRI0+W39umsBKx/5tdv11xGI+O9YVhYkTKpZXq88cu7dKybcmfJ+RqiyGwa6UZjXswLI+FPTshiUBjJyQRaOyEJMKa0eyWkIYvo7eC+qqQvWh5h+KU1thY3JyutiGtBY0e+l/d2TUXDOO1RwpECOd1evS1D2h8p7bsaqYRu5d5TjQMGt3Cnp2QRKCxE5IINHZCEmHNavZeEp0SSPJvA9M7PfsGsamhzfTYYLu9SwrFaeEoXR7rGy+z9FEfQ6F7udJwL2DPTkgi0NgJSYSgsYvIt0RkVkSeyn22WUQOichz2d+re9tMQkhZuunZvw3gDvPZvQAOq+ouAIez8kBRdV+9RKTzq7ixeVnUvDyoivtq1cxLzMutb6l0fNljw7xC19d77e3xAvX2XM6xA6/i9wjv0821t98x+hoMAUFjV9X/AnDGfHwXgIPZ+4MA7q62WYSQqlmtZt+mqqey968A2NZpQxHZLyIzIjLTnLuwytMRQspS+gGdqnoHQap6QFX3qOqe+tRk2dMRQlbJav3sp0Vku6qeEpHtAGarbFQV+HygZTVV1P6RcfZldrW+8sL+ZfzXEduHjhUdx5DPShUZD+BN1TXkfvGqWW3P/jCAvdn7vQAeqqY5hJBe0Y3
r7bsA/gfAX4jICRHZB+CLAN4nIs8B+NusTAgZYoLDeFX9WIeq2ytuCyGkhyQTGx+VWjpEySnrq8Zq8lCuN8/SR+XbYsqd0/AFd+0mvqDbbWN0eOllkatMS90HGC5LSCLQ2AlJhGSG8Q5lh7M9HLY7q4zGjgWrHLZXOFyOlT2x7jXvqa3U6eXwegiH7nnYsxOSCDR2QhKBxk5IIqSp2UtqSN/qqMGlj0pQCAEupIru2amD18C/c9yxoyikEO9eo5e+XkOu0S3s2QlJBBo7IYlAYyckEdLU7JbQ0r3orNGrPpd318glhqsMEe7ls4goSmj09vbL74c1fVSvYM9OSCLQ2AlJBBo7IYlAzd4NniWaY/etkujlh7yppCL96L4prhVPAfa2pcJnIOtdw7NnJyQRaOyEJAKNnZBEoGZfDT3U4T7dHdKUMUtLF1aWDmn0Es8DCpsOKE6hfe5y+69l2LMTkgg0dkISgcZOSCJQs5OVidToVeaNKxwb/ckXsN5hz05IItDYCUkEDuOHjL65hnqZUqnsCjukJ7BnJyQRaOyEJEIpYxeRO0TkWRF5XkTurapRhJDqWbVmF5E6gK8DeB+AEwAeE5GHVfWZqhpHIqlQ7xdcaaFj57YPPXeojbZW1yhSijI9+80AnlfV46q6COABAHdV0yxCSNWUMfYdAP6YK5/IPnMQkf0iMiMiM825CyVORwgpQ88f0KnqAVXdo6p76lOTvT4dIaQDZfzsJwFclyvvzD7ryOIfXv7Tix//3EsAtgD4U4lz9wq2Kw62K45+tOvNnSpEVxnFISIjAP4XwO1oG/ljAP5eVZ/uYt8ZVd2zqhP3ELYrDrYrjkG3a9U9u6o2ROSfAPwMQB3At7oxdELIYCgVLquqPwHwk4raQgjpIYOKoDswoPOGYLviYLviGGi7Vq3ZCSFrC8bGE5IINHZCEqGvxj5ME2dE5FsiMisiT+U+2ywih0Tkuezv1X1u03UickREnhGRp0XknmFoV9aGCRH5tYg8mbXt89nnbxGRR7N7+qCIjPW7bVk76iLyhIg8MiztEpEXReR3InJURGayzwZ2L/tm7LmJM38H4EYAHxORG/t1/hX4NoA7zGf3AjisqrsAHM7K/aQB4NOqeiOAWwB8IrtGg24XACwAuE1V3wlgN4A7ROQWAF8C8BVVfRuAswD2DaBtAHAPgGO58rC0672qujvnXx/cvVTVvrwAvAvAz3Ll+wDc16/zd2jTDQCeypWfBbA9e78dwLMDbt9DaM8qHLZ2bQTwGwB/jXZE2MhK97iP7dmJtuHcBuARtHPfDEO7XgSwxXw2sHvZz2F8VxNnBsw2VT2VvX8FwLZBNUREbgBwE4BHh6Vd2VD5KIBZAIcAvADgnKo2sk0GdU+/CuAzAC7Pnb1mSNqlAH4uIo+LyP7ss4HdS+ag64Cqqgwob7GIbALwQwCfVNU3RPJzxQfXLlVtAtgtItMAfgzgHYNoRx4R+SCAWVV9XETeM+DmWG5V1ZMi8iYAh0Tk9/nKft/Lfvbs0RNnBsBpEdkOANnf2X43QERG0Tb076jqj4alXXlU9RyAI2gPj6ezeRLAYO7puwF8SEReRDunwm0AvjYE7YKqnsz+zqL9z/FmDPBe9tPYHwOwK3tKOgbgowAe7uP5u+FhAHuz93vR1sx9Q9pd+P0Ajqnql4elXVnbtmY9OkRkA9rPEo6hbfQfHlTbVPU+Vd2pqjeg/Zv6hap+fNDtEpFJEZm6/B7A+wE8hUHeyz4/sLgT7ZlyLwD4XL8fmJi2fBfAKQBLaGu6fWhrvcMAngPwnwA297lNt6Kt834L4Gj2unPQ7cra9pcAnsja9hSAf80+fyuAXwN4HsD3AYwP8J6+B8Ajw9Cu7PxPZq+nL//eB3kvGS5LSCIwgo6QRKCxE5IINHZCEoHGTkgi0NgJSQQaOyGJQGMnJBH+H5EX7MaDdgLbAAAAAElFTkSuQmCC\n",
+      "text/plain": [
+       "<Figure size 432x288 with 1 Axes>"
+      ]
+     },
+     "metadata": {
+      "needs_background": "light"
+     },
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "data = pipeline.get_data('pca_basis')\n",
+    "plt.imshow(data[1, ], origin='lower')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Signal-to-noise and false positive fraction"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now that we have the residuals of the PSF subtraction, we can calculate the signal-to-noise ratio (S/N) and false positive fraction (FPF) of the detected signal as a function of the number of principal components that have been subtracted.\n",
+    "\n",
+    "To do so, we will first check at which pixel coordinates the aperture should be placed such that it encompasses most of the companion flux while excluding most of the (negative) self-subtraction regions. We will read the median-collapsed residuals with the `get_data` method."
+   ]
+  },
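+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The S/N definition by Mawet et al. (2014) compares the flux in the signal aperture with the fluxes in the remaining, non-overlapping apertures at the same separation, with a small-sample correction for the limited number of noise apertures. A minimal sketch of that statistic is shown below; the function name is hypothetical and the input is assumed to be an array of aperture fluxes, with the signal aperture first."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def snr_sketch(fluxes):\n",
+    "    # fluxes: 1D array, fluxes[0] is the signal aperture and the rest\n",
+    "    # are the non-overlapping noise apertures at the same separation\n",
+    "    noise = fluxes[1:]\n",
+    "    n_ap = noise.size\n",
+    "    # small-sample correction factor sqrt(1 + 1/n) from Mawet et al. (2014)\n",
+    "    return (fluxes[0] - np.mean(noise)) / (np.std(noise, ddof=1) * np.sqrt(1. + 1. / n_ap))"
+   ]
+  },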
+  {
+   "cell_type": "code",
+   "execution_count": 25,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "data = pipeline.get_data('pca_median')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We then use `matplotlib` to overlay an aperture on the residuals after subtracting 15 principal components."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 26,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "<matplotlib.patches.Circle at 0x145ffd050>"
+      ]
+     },
+     "execution_count": 26,
+     "metadata": {},
+     "output_type": "execute_result"
+    },
+    {
+     "data": {
+      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAD4CAYAAAAq5pAIAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/Il7ecAAAACXBIWXMAAAsTAAALEwEAmpwYAAA0DUlEQVR4nO2deZRc1X3nv7/aq3f1otaKJBA7GLAJXgAbg4kBJwGfcRhjOyEnOEwmnoyT2IlxMpMZn5Mz48zkeJlxlkNiB5zxGGMDw2Jjg1lik7AJS4CEhHahtVvqtbq6utY7f3Shd3/fUi+Suqtaer/POTqq2+/Ve7feq1vvfu9vE+ccDMM4/Yk0ugOGYdQHG+yGERJssBtGSLDBbhghwQa7YYSEWD1PFm1tdrGejnqe8jRHdCtSUW1Xod/ysveaf+aP1ygToTc4ry9iFp5GUTo8jHImK8faVtfBHuvpwLK/+Ew9T3laIXQLK2X9h1RTQbUnxhOq7ca8250uq22o0MHp2Dy4o/T+ciH49YjE9Y+OUT8O/Ke/nnKbTeMNIyTYYDeMkFDXabxxfFRy+vZIQk+debpcKs3w253y9i/SvjHS2TRtjzWV9Lmob8nW/NHX+ayWD7Gk7neFJEOlENWnps9pzA32ZDeMkGCD3TBCgg12wwgJptkbTJrMZeOZ5NHXEme7uda65VGtjSsp0rqkuyUWHM+5yJTbAMCR/hdh05vW8AXPzBehY5WycX2sJK896DabGH18E9/ke83MN1vsyW4YIcEGu2GEBJvGzzNsVhKaWmdHUlNud+zFxl6o0amn6UDttF9NzakfETpWWasLlMuzfy5UaNrO7rP8uYTMfuwZWOPd528a0+dKdeVUu5C3r/jb2JPdMEKCDXbDCAk22A0jJJigmQN8bezYNERmJtbRoDDUGp1+HMeqjOvbGSHzWMw38zky47GpjUxafGxwUFwqOJe0kDmM1hqitLbAmr40oXV4unXi6OvxkbQ+b0tRtYtFvUbCayaY8KLz2vR7T3fsyW4YIcEGu2GEBBvshhESTLPPghr3TdKYvp5NLZpQ2zjcs4Y8/d4mAj0rZPtmnRxLUtgpba95v4cjzc7htDWZapJaZ7NN3+9bPEGutKTBWbNPDGgdzj4AiViwVpHj8xLl/PR+DeLpdHZVzo3PcK9OcezJbhghwQa7YYSEWU3jRWQ3gAwm85OWnHOXi0gngO8BWA1gN4BbnXND89NNwzBOluPR7B90zh3x2ncBeMo592URuava/sKc9q5BVChlUzxNttyRpN7eHqRkKkzoS+pISEcoDNVFp7Zvu5zWnzW+7uxjTvq0TH7jZc8HPcphppR2io+doGtQIH3rrw/wOkU0oXV2mY4dbdPamUN3x7JB/ABfE2nW/ZYsfaXJlu7rdPabr2T09Wruzar2qa7pT2YafzOAe6uv7wVwy0n3xjCMeWO2g90BeEJEXhGRO6t/63XOHay+PgSg91hvFJE7RWSdiKwrZ7LH2sUwjDow22n8Vc65/SKyGMCTIrLF3+icc8KpTIJtdwO4GwCSZy63UiGG0SBmNdidc/ur//eLyEMArgDQJyJLnXMHRWQpgP557Oe8wn7fQrq6OEZajWO/PX9sjuWOtk7vfy3kS+88zZ7q1Db7YoH908neT37gi5aMqvbQkdagz2yPjlIsPNvdSbNzumf/eJw6molx2iotu+Ga9bn81NN8b2p89jnGn849Phro/1iK7g0dOzuk7f8yTtdsEQX9L3BmnMaLSLOItL79GsAvA9gI4BEAt1d3ux3Aw/PVScMwTp7ZPNl7ATwkk25kMQD/1zn3YxF5GcD9InIHgD0Abp2/bhqGcbLMONidczsBXHKMvw8AuG4+OmUYxtwTSt/4CNmj23ozqj0+oTV6oUg2ZdKccd9HnfzVCzlKo0ya0nHZJX8bV0WmdoXsxLGUPjd/DuWHz77uXIKZ7O55ypWXXqRzvVU8//Uav3v
KX1dTXZZi6Xk9wPfb5/PmxptUO0k56PJH2O8+eFmOzpCGmtZT0K41fipNVXNzC9sOb+6yhhESbLAbRkgIzTTed/90NGUdGdFTQU6rHG/W0zUOefWnqcVxSqPM02O2BdG5fDfWlV3DatuOTcv0e9NkwmL32KJ26xUvhVOE5ENri57+sgQok3RhKeRXkC1zaihOLU0pr7i6TJncVv2w3xyZw/hY5RKloUrRVN1zX+ZQW5ZYnKqLXYh9Mx6gpdB04cWNwp7shhESbLAbRkiwwW4YIeG01ezTucBy6Cfnc3Jt7K5J2m2aUFPWahwey2aoEunCqGcO2t3XpbYlhkgLj+v3Fnv0uVxUm7BSTcH2fJ9epxgepWvCpjkyU2VzWjun2oIw3+KQPhane+YUWDXptekR5K+RRAf0e0sdZKbjsF92A/YOXmSzKIXPgt1+OfTZ+8zcTw4VqXF1bgD2ZDeMkGCD3TBCgg12wwgJjRcSc0T0LW3zrCzWOtG3gUaap7frVij8k9032R3UeesDrE8LQ7pfrAO7F+sw1OZEYNM/nGlW24oJ0oFdFBvK7p2En7I5SuGZ5Qn9mdNtOrx2Ykzb7JvJbTV7yOsr6X32W3CUpoqvN4eSOm95gDU6l5bi9QChc0Xiwfv5vKB7xy4RjtZb2J9A7UvrEM10Pcez+nrWA3uyG0ZIsMFuGCHBBrthhIRTVrNzCiZZrTUkqylfl7NmF/LNjpC/dU3KZrKhOk+Tsp5n3/gI+WMPj2p7dyYWaDkum+RYo3O9J9LK8UP6/cWlgU7ncFnWvjkuyUSfq5imK+zbmLN6G/vVc4iwq9CxaD1BvHvnmqa/VxVae8AofcW9Y8cp/VWJQoYd2dVB/a5wmWvvXnPYLmt0ji2osK/BPGBPdsMICTbYDSMk2GA3jJBwymr2mjRKBKdKbu4ZP/o6R+WJHOk8xzHmcfa/1vv7ZZonqDRUc/e4amcHtRZuX6xTYo2NB+93bAeeIUaa/cAja8f09mnKF7E92pHNPkIprwoZfawlqweOvj60r1Nt833yAWBilOLsSeO7GD2DvLgGjjVgW3c0TSmuOVW3v/8MpbgT5PteoH6jSAfw14LovMl2bWdn8rQ+Mx/Yk90wQoINdsMICTbYDSMknDKane2SQqV7Wpq0vhoablftiUSgMdkWnujQ700l9bGz41SiOUH2bo/Wbl28MjuWmmLPSUbIzt7WGvgLjFEcfZl8C96xdq9qv7ZnuWqXKB9bfG9wDUotZP/v1ZqSfeUTlCJ7gnIC9G3tCRqkmxNxem9E6/2l5xxW7f7BNt03by2iMEjXk+zsEYo9kBYqRe3Zv5ua9X1n3cz+7RHyx3Bxim/31jUKZKMvUDw7+4nwesF85LCzJ7thhAQb7IYREk6ZaTy7E1bIjDSUmzr9MAA0NQfT1MwRHTpaiugpVY5mUOWJGaZgnsTg9EPRGIXHUr/PWKWnsMPjgWmuROdl01osQsdmcxnJjUJ3sH+UUlpF6dhch5XTKsc7qMKs/7nIjbcpqd1fMxFtfjzY36Ha7Kbqp7wSkgg1rsw0/W1u0v3MHAoq2ebzx/f1r2Tp3lKFXt80Jzn9GZpJ3o0Ot6p2TSqulumr/54I9mQ3jJBgg90wQsKsB7uIREVkvYg8Vm2vEZEXRWS7iHxPRBZ2VTvDCDnHI1o+C2AzgLftIn8J4KvOuftE5O8A3AHgb+esZ1w1iU0RnIIpyqY5re2yGc9kQzqPtVglTi6Y5CpaptRH8MxUFTovl1UajmnT0VuHtGupeB+LQ0EvP3OPau8f0+bF3mXDqp0rUErnruAalcb1usWyzhHVPhxvUe3OZu32e3D9EtX2z1RK63vR06T16qFKh2p392iX4Qrd6sHDnimO7zuneyZqQo49Exe7PddUyeUyVBy9HJl6nSPSrc16Y2yCpXBk8JoUfSc5LPtEmNWTXURWAPgIgH+otgXAtQB+UN3lXgC3nHRvDMOYN2Y7jf8agD8B8PbPUReAYef
c2z83+wAsP8b7ICJ3isg6EVlXzmSPtYthGHVgxsEuIr8CoN8598qJnMA5d7dz7nLn3OXR1uaZ32AYxrwwG81+JYBfE5GbAKQwqdm/DqBDRGLVp/sKAPvnsmM15XNHaf2PNE+UtHKESh/5oZCSp7BHKn3M4bOyV9uF46v0DCXlue5mDmj76Qj0D1y0XdtPl3RprVzwdOSRw9ptdMPeFar9gTO3q/ZTL12kz0W29FJrcE1iFJ45kNVuu9nDup17S3+uSjfZgf1T0ZrIxj261PRvvvMF1X583wWq3UZ2+SE/BJa0bdNifS/O6e5X7U0Hl6q2KgNGLsExcoetsCsufcei9B0tep+bfTP4mnCKKy4txX4ic8GMT3bn3Bedcyucc6sBfBzA0865TwJ4BsDHqrvdDuDhOe+dYRhzxsnY2b8A4I9EZDsmNfw356ZLhmHMB8flL+icexbAs9XXOwFcMfddMgxjPlhQvvG+f3ZxiFIAxakUMqU6KlNpnuKY1vjipXhyzaTvyVZbofBNIVMua7WelkA3jrVofR+jlFZrFg+o9o1LNqr211+67ujrZkplVKQUTE9vO0e1I51a65Y43bOnC4Uk4cTmDtVOj2uNufjqA6rNpala04FduZ/WGjj19iO7L1btAoXiNsVpPcBb50hTWOqVK3aq9u6MLnPdQr7xI2VvLYLTULOupibHRExwjIR6LxntY5TqjL5zKbrX2RFtl09660InmsLK3GUNIyTYYDeMkGCD3TBCwoLS7Kr8Dmn0KJfqIb0UIU1UU/Y3VZl6G7VBmr6o3ddrbKiHRgIbNJf9KZItN9ep9dbuiW59cM+OnB3W+v+ytdo3fv0ba1Q7QjHUaNV24+Rhb92CsyKR63WxVV/PwXHdl3N7tD17/ZbVR1939Gpf92xOr5/8xlkvqfb//tmHVDuT0Pc6vjdYv8mt1P18/sBq1eY0YJx6yi/ZxOspvHaT5xgI+o51dOlU3aOjwTVKUDxFidZbOD9DIc+5C8iG7+dJ4IxVs6wcZU92wwgJNtgNIyQsqGm8n94pStU6a9xlyd2wzFPYJrKXeaGRkYKe9zh6q+PqqJz5k/b3p1gRmhpesnaf7lZMT1H7J7Qbas+SwH12ZL2e4kfO1v2ItWlT26KV2nW08pB+/+rbtx19vXNIm6gmXtLtjq36+vZ16JDXyqJB3e/lw0dfH35rkdoWbdf93DSm3WcjbfqaDPZRdtlVnlmK02ORW7Qb0d+TLJvThoPt5U49xecqrkIurJEWvf9wn753bYuDaX2OzGM8bec0YCwpovT9jnsZeifo2OXSNCZAD3uyG0ZIsMFuGCHBBrthhISGanZO+xPztB1rnBpzA6feJY2PIa3dEsPBG/I9et/YKJniqNIHh4pWyFRX9MJtI5Q2KbaGtBelf940oNM7dXspnD70kTfVtu+u16EI8X59/Y40axdj9059rsH1Zx19nTpMJsEV+poUW/V2TvNVIXuPr5057PSqldql9d/3PKvaT2/QIa7sxrq0O1jHODyi1w5GRrXbrqPQUKHvmPPMkYld2kxXpsxRbjlXydHDJdGuNbxPnDQ4u28nO/Q6BbsUs0usciWnKsSRxPSpuY7uN6u9DMM45bHBbhghwQa7YYSExtrZSQf6qoXDTGNkq+WySkw+oT9a0TPdCqUXZtfRyITenjqi2+zh6iLBb6ZbpPu5g+zZEVqKGBgkDbohOPj+C3Wq6P9+5QOq/acvfFS1v3XVPar924/dqdr+ukXpQu3qedu5G1T7vp+/V7XduL6e57b2qfajfUFKrOJ+raOfKZ6t2k9tP1f3a4BuwFqtlVe3BTb9A28uVtuiOaq0mtDat5LiWN7gGhQWUdopChFe1avDkfsz+l6NZ/UaSdnT6ZzCmv1GuKprjL7PFbre/leSU7D5Y0GmcZ21J7thhAQb7IYREmywG0ZIaKhm55TNx0OBSzSTPTVKvvK+GTM2QttI/icHtfDJrKZ+TtNvGdA20MG81t2
rVusSzVoVAu++dtPR1+MlfaxvH9A6enHPqGr/1d4Pq3Y0rz9HsT3QqAkKofzBlstUW0i/OipH9NDmS1X79y7556Ovv9F3vdrW9lOt4RfdpuMFto/odM8gm/TzL5wX9EvvCUdhp440e02ZJc9X3tE2Ti29a1+Paje361JerKvLyUCXF8gmn2rS17NS0d/BPNnOa8qfeV/geGL68NmpsCe7YYQEG+yGERJssBtGSFhQmr3saSD2dWffYccpg/hYLZSqty/YP6ErLqGozaeIlChunNIqS5nt8N42qjc8pKUw9uzSOrC1V9u7f74lsEl/8jKdvmnd4Bn6vUntm73jGZ2mijYjd2Fgv+Y0SDWxBwSXDK5QrPffvPqBo6+F8gEMXqK18cgruoxV5w7qZ4/uW9STu2Pn6w+V2q1t3RFaBMmdRz7onqZnXw7W947iM8bH9LliLfrYqoQzLS7kJ7QmL2en//5yWjbfbp+n9SrnpVHnPqtTTLnFMIzTChvshhESbLAbRkiou2b3dTqri4pn+41Ep9YsABCZoBhz0lsRKstcSgfHK5NJMzFC9ugWwS+dtQJdLU3YsPsA8uNaV8e1eRsVX0KRc3J6r9ZXuZVa53E+MXhlqr7zgrars67m1NExsjEXz9E+5v66h5uh1FENfO4461vvUIu1rl7cqS9Y5tle3U9thkfknXpRZawv2CE6oK+Xf18BoLRUr9Wkm7V9u7DLyxvXSg4WdE2SVJKpu03H6e/fp3OMlw8HAfHSpa9BOT99foDa7zt1LTe1LT3mr29x2Sn/HFNuCQm9LS34ym0fwYP/8VNqnH7qqsvwV5/6CC4+I3D4uGh5Lz57/ftwRk9H/TtqGCfJjINdRFIi8pKIvCoim0TkS9W/rxGRF0Vku4h8T0QSMx1rodCWDFZVD2ezuGhFL85b2oNVXUFW1Be2vYUfb3gTB4eCp9KVZ6/Cv/vgu3HnDe+ua38NYy6YzTQ+D+Ba59yYiMQBPCcijwP4IwBfdc7dJyJ/B+AOAH873YFEtHkindJT2iyCQcizSja9SY+eJnFK4eIoVYGtRJGIRvHH77sKH73gArzvnruRL09Of37viUdRLJexLTGI8lIHKQPf3L0e39y9HgAQaZvszbMHdmPZa+144OXXUKqa67pbmrAonsaOQ+z4OslED6VJoulbMaP7eeF5e4++3rRNm6hQ1Fdl5U/1sfb8G6qiQ31RU/dZVhGZan9XmX67z+GNOiz1rL/8V9Xe/+CFqj1O18Q3S5U7qcrNPjJDZajSal7bVV1b8P5lq/Q9O9jXodoFklgH+rTZtMYV1+tnS4uWAKMHddppR/eyiUywY6O6Ao9fraZCkqDsV0dyJ2F6c5O83ZN49Z8DcC2AH1T/fi+AW2Y6VqOpOId3r1iJjmQKFy8OdOPr/X3YMnAEZRZKxOuH+vDFHz2B1/cFsdx//tHrcP/nPoEPX3rONO80jMYzqwU6EYkCeAXAWgB/DWAHgGHn3Ns/k/sALJ/ivXcCuBMAYt3tx9qlbpQqFXzuJ48jlYrhtf5DJ328WCSCgbFxTBRL2LD7wMxvMIwGMqvB7pwrA7hURDoAPATgvOnfod57N4C7ASB11vITD3M7QX7vkncjFY3jaz97HgCwbXCgJlLqRClVKvjSQ0/hbx57HgNj43NyTMOYL47L9OacGxaRZwC8F0CHiMSqT/cVAPbP+P6SoDgcmCfiPWT68PRGjML48uNkoiJpEmezU8Th7I4ufP5dVwMAfty3EVtHDlf3pRRBeT42HStLunC1XmvYlQnaNy+7AJ94xyW446EHkSuVEOnR2i2Z1O8tl7WSemOj5xJLmnDRCm2S2v9+Ki9boXTazPHq9BM8FquhlrOHVbv8wXeqdn6r1ug3XLdetfsnAt39yvZValtxjb6+OEJ6n9KEtbUFYaoHDuoyVeByT1mqvEpVcXn/WDZoZ4ab9L50vdj9eIz2Z9NorCn43lTo+yj+d+pkTG8i0lN9okNE0gC
uB7AZwDMAPlbd7XYAD890rHqzbXgAv//Pj+Kuf/nx0YE+nySjUfzxVVfjPStX4taLL5738xnG8TCbJ/tSAPdWdXsEwP3OucdE5A0A94nIXwBYD+Cb89jPE+aHu7YAAGJNM+w4B+TLZfzO/3sIF/cuwQNvbJr5DYZRR2Yc7M651wBcdoy/7wRwRe07Gs/H17wLz+zYhz2Z4bqfe+vAALYOHNsMZxiNpK7uspF4Ba1LMkfbnE63PBa0K0mtMDh0L5oke3X1WGe1deG/XHYT/vjiAq59/OsYKU5qukuWBUsKZzYdUe99bUQbEt7TuWvaz7FrXOeS/pe3gtDSfDzQjO2JFC6orMXL+4JzF1dRquMD2p6a8tI9X/2rr6ltnII5skovCsbIxlqm8sYnkwbseGC9yeWLRz+fUe3SHvoe0Od4ZUtwfYXcoKMZalNqaTeqazplFgV94dTRbL+uUNlvYZdVEsHllOeOTP0Eu8NOE4oKANE0lePy0lwJlRyvOO/6shu0x8Kqzz4H5Msl3L9jA/Iuf3SgN4IVLW146pZPI1cs4cq//XvkSzMsnhnGPHPaDfZ92RH86cuP16y4170fY6N4c/gIhkcn0JlO4WBmbOY3GcY8ctoN9oXErz/+HZQPhj7WyFgg1H2w+yl08yNaT6U6vbRJlIrXFXW7Qtpkcfcoruu9CLlyEeuHdiGdGFbbr+984+jrMomta1o362NF9VP46ez5qt0a1/Ig7dk58wOBBs/DIT1GpY1f0PGchQ7VVCGwT2zSpYxXPKKvQa5Lf472j2tXh137qU5VnYiQ0xKXK/7qFfeo9qfHb5/+gJ7tmFNe3Xrjc6p9/2Ztwy/S9yjmrfW0teo1j4k0hSPv1371SSpTVTpXv7/kn6smhJhiO9geTpqe036Jt57lKGVV1C+NFj0JO/upxO+fdxO+dvlvYWm6o9FdUSzvaENnc3rmHQ1jHjltBnsEgmf7NuHFI9uwe2z+HWhmyxc+/H489Ud34KOXXTjzzoYxj5w2mr0Ch69t+WGju1HDpoP9ODA8ip6WOnj1GMY01HWwV8oRHatMusVPb1yhck6RNu1Tnkjq1faBYa2vrl6i8xMXvbrMF6S0tn2/XjrAONlAh9O7VbtMjs6dzYF2G3ZtatsD+fV44IdVX+9zgRylL46PaB3opydmO/noar3v6Ll0DbYvUW1Jczru4PXxxKPPBmVbJ3t+vFnfu9/feJtqX3XGTtXeOKjLQb3r3N1HX7+y6Uy17dHdF6l2jTYmDeuXNx7J6B/g8oheW4jQ9yDfRSXGyUfd18uRlL72bFd3Q/pcrMMjlKbaeX75NXp/mhh2n9PmyR6PRNAaT6HsKhgpNM6+bhgLldNGs9+48ny8/NE/xH9914dn3tkwQshpM9gnSiWMFfM4NJ6Zeec68v6la/DEr9yJP3/X9TPvbBjzSH2n8Q5wvv8xSw9f18SntztGm7To/NehTXjPjzbCwSGVBoaLWo/tQJAHrTWqp/n/bUzbo7vj+gejQppoiHIf79wRpLjiFNaXRVbh7PZu/OyN3YjvTNeEG8co50UuHiwgdGwnzfghHc++qlWnNv7tM7TN+UvrflW1fZ3OGpK1bU05qJpayboZSwR6lmP0f/fin6v2t+++QbV/epFe52D/94MIcr9xGerRfr1W07ZY+0jk6N7lvRJOkVH99U9k9L6lFvqQHVRKitaVYp6fSDFD+VfJ9yC5RN/4Qr/+vvL9qHifWxLsZ49Zcdpo9tJsP3Gd+bvnXsJPNm8z33ij4Zw2g91H5jQly8lRcQ47jgw2uhuGUefBLoAkgidwNKFNGaXRYOrDUzlx2uyUpxBX5wS3rL4In3/HB/DsgR34n1sfUNvfuTgwt23I6mqoyYh+6j47qENJ1+1dqdoJ6rdMBH2reNVjBUCJMm9hKVUhbcmpdvLFINVUgfNzrtN/+M+fvk+1X83plE2XnLFPtTfs8j7HDL+HXIEHNHGqqcDrmQkT27S34Hef1ou
muTP1e1u26elwdqW+aO2rAvkyurND94O+B2sW6R/WV4d0Om4ZD+5VfJmWQZw6GsOU/mlQT82bV1Plmv2BHJFWPeVnaq4vmZa5goxf9YXfO0NS5KOcNgt0APD64EEsa27HxZ1LF8TT/X9cfQP+4ZZbcHZXV6O7Yhin12DfMTqAW37yj7j5iX+Em6ng+DyTjMbw4VXn4Jo1Z5peNxYEp51mf23wYKO7AGAyicY1P/h7XN2xBm+NjMz8BsOYZ+o62CXiEPPcXEv5qU/vWsjdkDRiIs5iWLOkKYH39ZyHxw/8AgCwMxNMpSOiUzAP57TGHKfU0iVKn1Uc0emKmw55YbueFW9ktIDHoq8Cy4K/lcldNrdDpzMudQbi+INXv662/fRVHfL6Oz/8tGovWjOk2tEfaPnQ9etBbrxETF/fA3v1vvFWvbbA6xTZI9pU1PZG8LnGVmmBnytQxd0ztdlpvE/7K8dH9f5+mipHc9EY9evVzXrdogbPrbe0Z+rSUAAASh3tSEdnhshc5ul0ToVeKev3xun7G43SNaPrW/KugcSopNgs042ddk92AEhFY7j3vZ/B4lQ7DuWGsX5o58xvmiMS0ShuPPtsPLxlS93OaRiz4bTS7G8zUS7h0f0v4ZXBHTg0MTTzG+aQP3v/NfjqjR/BX1z3obqe1zBm4rR8sgPAPTufRsU9VfeFuid3bMdN55yD72/aWNfzGsZM1HWwO6fTUrHLZqorsDlzeCeHtLJL5qoubV/dP6Jt0r+x6gN4ct9W7MoMoszhsxSCiQGtyXn9IEIphUUCW+fP+nbj6v/z9xgvFYEkENmv1wNWX6oLQO4u96p2y5LA3XOooN/LvgeOykMNb9NrEU2L9fVt8cpav6dnt9r243/W4bFLrutT7QM/1/bqRf36R3To0uAa9q7Us6nD0O7IzU16PSDTTqmky7q9rG306OutMe1a6/ZobRtZThGPNe4Cwb2vCH3/OvV7J6jsd5Rs5+Ws7mfE0+nFrLbJs/vrBPT2ns5R1eaSZL6DaOUEU4SfltN45qNnXIa7Lr0W9133KSQiXLX85LioZzEe/MyncOkZQQz2eGl6hwrDaAShGOw/ObAJD+56HV95/WcoVCafbnPlcnPD2rNx3tIe/O41C7I4jmEc5bTV7D7jpQI+/8Kj6m+/eeFluGXtBfirdc/h+eHtsz7WRV296FzUguf27gEA/K+XXkB+sIR/en79DO80jMZS/8HupwUmreH7+CZTeipc4bQ+JFP6x7TNNB7Vdkzlge6Am886H+/sXYb2eBLSN6nNLlq8GGu7uvDyxH6MF4o4u6sL/dkx7D006RRzTk83fnjzb2LXwBBu/MY91T44/OPOlwFMXsz4ENmIz9K+74eGteZMdunt414I5tbnzlHbuq/ViTQHhvRnbt+gb2fTzYdUOzMRHHvj8DK17YaPvaDaD73wS6rdfOmw7ndK6+5uz46cjut7Fynqe1d4rUO1sVKXYcIyrZ33DgX7r1jbr7adv0h/xleP6FJe0Yhe1zhY9s5d1rp4YkCvkcTaqDwUhwWTX35l2NPhXO6pTGmnaA2qxleecN4aVU048iwJxTT+WHzyh9/H5575EZ7YEzzVf+288/GVG2/Ch89aiytWLMd9t/5b/OF7rzy6fefAIA5lxvDMmzuRjIViUmScRsz4jRWRlQC+DaAXkykL7nbOfV1EOgF8D8BqALsB3Oqcq69R+yTIlYr4wdbJssrR6m/epv4+PL51K/aMjGAol8OL+/YiUwieYKVKBVd/425E8o0PsjGM42U2j6cSgM85534hIq0AXhGRJwH8FoCnnHNfFpG7ANwF4Avz19X55+EtWyY936pj+bbv3w8AiC6ACDrDOFlmU5/9IICD1dcZEdkMYDmAmwFcU93tXgDPYhaD3U85HIlPnV0ml9N2yDKV8Uk0az2Vy+v92dfYTyHsJsh+SmV+K5RRSMgNn0Lrkb080N0VinlOkY907rC2C597rk5r/ebOwIT
XdJ3Wp6PPL1Ztt1h3bPi9WkfHyB97UVPQz3PbtR19V1b7xqeX6PROrFcLJX3sVe3BpG4oTz7jdJsv/5Aut/Wv63T+AC7ZPD4e3PtkXF/PrSP6mrBGH8zoFGJ+bEblsL5XlR5K36y7DTdM6Z/Zvu2vRzWRbwbp7CKloT7i9PpLeYK+ZH6m7tiJZWU6Ls0uIqsBXAbgRQC91R8CADiEyWm+YRgLlFkPdhFpAfAAgD9wzil3H+ecQ+0P4dvvu1NE1onIuvJo9li7GIZRB2Y12EUkjsmB/h3n3IPVP/eJyNLq9qUA+o/1Xufc3c65y51zl0fbmo+1i2EYdWA2q/EC4JsANjvnvuJtegTA7QC+XP3/4Vmd0S9BRHOBQi7QMRynHGvRepQ1eYzamWGtG51X6pftlEIasUTarURlgtlHHZ6Wc2nd7wnykU716Uu+bULnt0M6+Byj4zrOO7dar1Ok2vQ1yR/QP6aRbv0539oSKK3MGt2vIbLZczz2O5Zrn/6mmO7Lc9vWBu+lWO1Smsp8VbQeveiSPaq9cbf2AfDzrzUl9L3ZQ2Wpk7SWU6CcCZVRTyu3kPaldYmaskrkG99MMf8T3jpTKj19PyLN+tzFHMVrcI5F7tsJMJvV+CsB/AaA10VkQ/Vvf4rJQX6/iNwBYA+AW0+6N4ZhzBuzWY1/DlO7kl83t90xDGO+WFBuYIl0ME1iU49fyQMAkjStz9H2eFpPudQ0qV1vK7TRNJ1CF8Gmt2ZKJZ31q9zoY0X7dL8q52uTVnuazGWeHPnE6pfVtkcPvkO19w/qMF6WEEuadWWbvtZg/xKZ5ZJ0vdJJPQ3dMainyyMjZF4bCKawF71LV9BdP7ZatV9+7SzVBoXq9i4d1v0+1HH09f4xHcbLq04clhoZ0fdSuoPPxXKDKY5RVdc0u7jq/f30z+Ml3Y9a13D9PeHvK6dsk+iJmdt8Qusuaxhhwwa7YYQEG+yGERIWlGYveimbWeNEyRRRIFNFa7sOFc2M6nBFH19fAgAWa93sWMrxTyKZ4pxXoVO4pBWtByS5jCuxuj1Ir/XwgUvUtrE86UBaNhVKl/XaLip95K1FlNr0hyrs02a7iQ7Sq0P6esfH9clj5wd+VgfG9FpCsl2HrOYH9b1p6tD3bmBYmwHhpaJOdup9Z/oejCW1+dJPJc26uVxTKZjSQdN9Hx+itGF+abOadM/Ta27W6JHE3Jve7MluGCHBBrthhAQb7IYREhaUZvepSTNN7odFKslUKOo221CLfohslz5WlNYHKmUKeU2RiKft0Vzwm1mOad3cskP3ayymdfeSDm0L/8VbgftsL6UX7kzrsknXLNum2ruW6DDVjQeXqnbpSKBvx0ep5NISrXUTm7WG73qfTv80XqByxt5aBG9jnwkktH5duWhYtVlL74kFJbLyZEcXOlaJUozzukYsFejwArsyk/trilxzM2O0DhSfumBnzXcoo/ud6NDnSrPfSEbfHy75dCLYk90wQoINdsMICTbYDSMkLFjNzjSntM4eJW3GfsqlIpVo8sJa3RiXGyK7eQdVdOFyT2Rv9VMls4k+16u1Vqpda7O3Dmlf79a2QDtHySb/nu5dqv3QLu0rf163TilQmKASQp2BxhS6BiVOm7RG9/PgZkqJtUhfowtWByGwB0Z1uuz2Fm1nPzKs9WhfplW1m8kvX8VFUOkj1v+s9ytkO494KcbZH13oehfLXOZrev/25ubgc46x5qZ1H14vcNyXOdDojD3ZDSMk2GA3jJBgg90wQsIpo9m51FGlQKl22eV8ujK2MdpGJZllhGzIrOnJdltaFuhbLi1daWG7L+k+Shk8kQzOPXpE27r/aetVqt26Utvhdw1rO7ujtYaYH9u9QtvVS6OUiptSMFXosRAZ0NfoDQls+ues1GmquTRXD5d0PtCh2iMUBwFfvnJKMbrPJS5nTNq3OB58zgj7YpCffbpNrzWwf3uZ1kTyvu8HZ7ii72uMy4TXAXuyG0ZIsMFuGCH
hlJnGMzx942m7o6m3cnHlCpuF6V0so0u0m6o/FQSAqHe88j6drqmSoGlmSk/9Yi1TT+cio/r2RJfqqXeEJEF2QvdLSCK45cG0lCvs8PUrj9N2Mh2R5RMt3pR3eEK7lQ4d0aa1eJM2rQlXBiL32oh37sqYvn6cZbjGPZam6s6rDMTTdlC12TxnfKXvDVc08l24K3T92K2XKfMFnQfsyW4YIcEGu2GEBBvshhESTl3NTiaVSn5q91hAm8tqtnHFTDLFlaliKVhi+i6ZSX3sWLfW2Y5sWBwKWS4Ft6R5jTatjQ1rLZxh11AyO8UW63P7GjU6SJVSqN9opbTJtK7BYb5jfYF5bXyC9m0iExenaCbX3Y6zBlXbDy2Nkrtxha4Bm0lZs/uppmOUGpqXedh8VuJ1jmnMvTUancOo5yDN1PFiT3bDCAk22A0jJNhgN4yQcMpqdoZT7wpLTE+vOrbRz1Ax03GKYSqz1NoSaOPhorYpFymUkXViimzOvr01Ru6ZnM45spSORamNCm/qUFNZGpyrQiZmLonFtu5o8/Slj3z9WuGDs92cUjQLfa4xql7rX5OalMrk2ow2/TlqSjx57y+xLwGteTjQOhD1m9d6/HtbojUkTi09F6mhjxd7shtGSLDBbhghYcbBLiLfEpF+Edno/a1TRJ4UkW3V/xdNdwzDMBrPbDT7PQC+AeDb3t/uAvCUc+7LInJXtf2Fue/ecVBjIyVR6flXJ5q0rmO/ZLZXszaL0vtHhpq9bRTiSnqfdSLb2du8tFSjVMKq0kHHHtLpiYtJ0vRpuga+rz35GrCubl+UVe2xLJVR4tCDbHDuCOlm9md3I1R+a4kOJWW/BuX/Ti4PxWZOO6U/1wSlcFafm9OLUZwC+7ez7i7FWONPTSM0OjPjk9059zMAg/TnmwHcW319L4Bb5rZbhmHMNSeq2Xudcwerrw8B6J1qRxG5U0TWici6ciY71W6GYcwzJ71A55xzmGYG45y72zl3uXPu8mhr81S7GYYxz5yonb1PRJY65w6KyFIA/TO+o86wRvLjnAtDWn9GOEUQ/XSxnzNrfD92vlxmUUmx8hQX3tyq9eqEVzqpiezmnLo4208/nuQ/IIv1sSte6inuR5T9FGR6X262X5d9WU0x5hVOxZUim3OW0kF16/wBeS/9U4Q+Y5KuEet9x/Ztz8dfyL+Cj80avpSj4cJxDf66RXrq0lCN4kSf7I8AuL36+nYAD89NdwzDmC9mY3r7LoDnAZwrIvtE5A4AXwZwvYhsA/ChatswjAXMjNN459xtU2y6bo77YhjGPHLa+MbPhB//7hJcIohj4WeIRSYdDj/Wm/KSsTZ2ZNvNHNK+9MnOwM7e1ay1677D2ncp1qb96pky+QtEvf0rA1T6uFtrzFxe28I5dXeBbOfwri/HmNekpabrx+srObKNxzxtXcpQmu92fb2jMVpLoH776y+8XlJivc+uCEmKmeB1oenSly8AzF3WMEKCDXbDCAmhmcb7cDgsm1x46uc4zRLjTdWTzXpqXaC00+yWyuG1zenAlLT/SIfuB80Sa8Im2WQoU087XXL66qd+WmQASPDncpRO2zM18ZSfQ29zY9oNON6mt3PJHT+dVrpLp9rKjWozaixVojaZz7zPNT5GUoZuM19PTk/O6c0WOvZkN4yQYIPdMEKCDXbDCAmh1OxMTVpf0vCJDq0pi/mpUwqzuyaXCKpxHSXzWDYX6MhEUuvNiZzWyTWuoVxhiCuJ+pozNv06BYcIl+ncfGy/2dqhTYajQ01T7wygQi7GFQ499VxzJ2gNhNcSKjOUUfLdgNn0NjaS5t31e08xjc7Yk90wQoINdsMICTbYDSMkmGY/Buz2WCpy+ScK9/RsuR2t2g48Tm6nHB6bH5+6BHEhP3154iK9l0sfx+La5lwseuWfyL5fU6KZ9T/pVdav/vpChrRvhNI9V5rJhZjt2ZwWzNseo7RfJVprYNdn9qlw3vUvFqms8imuyWfCnuyGERJ
ssBtGSLDBbhghwTT7CcDlov0yzMOjbFPWOtD38wZqtXPO89dmG306TaWiSNuy/TpBetUvRVWmUFHUlBjWTU67nKTSyT6OSh85KtnM6aB4jSSamno9gX34+TPz9eSQV1/xF9hf4jTHnuyGERJssBtGSLDBbhghIVyiZZ7QvvUzpC7iKkBu6tRGEdL72QyVYOJ+kM05W6T9/XRQrNFJk3P67CTFnBcm9FfH19LxVr22UMxoXwMu0cTlthjx1h64XFaE1k8SCW2HZ1+FMGNPdsMICTbYDSMk2GA3jJBgmn2eqUkvTG2OQfd1N/t511TUY5Mz6VfWxmVP77I9uhKZ/nefNTqvD/i28RLbr6PTf+aavHx0jWKJqY/NayJ+qShDY092wwgJNtgNIyTYNL7R8AyWptc+PGXlaT2nli5RdVR/Os1VWjlNcryJUzBT6CiHyHpVcWrCZ2eQADyNj1JV3YrnjsxSxZg99mQ3jJBgg90wQsJJDXYRuUFE3hSR7SJy11x1yjCMueeENbuIRAH8NYDrAewD8LKIPOKce2OuOmdoZqoSyimxI+nSFHseo8IrhZWyiSue1jq6MKJLJ/kusr7GBmpTQ81EzdqEMSeczJP9CgDbnXM7nXMFAPcBuHluumUYxlxzMoN9OYC9Xntf9W8KEblTRNaJyLpyJnsSpzMM42SY9wU659zdzrnLnXOXR1ub5/t0hmFMwcnY2fcDWOm1V1T/NiWFXQeO7P7kn+0B0A3gyEmce76wfh0f1q/jox79WjXVBnGctHuWiEgMwFYA12FykL8M4BPOuU2zeO8659zlJ3TiecT6dXxYv46PRvfrhJ/szrmSiPwHAD8BEAXwrdkMdMMwGsNJucs6534E4Edz1BfDMOaRRnnQ3d2g886E9ev4sH4dHw3t1wlrdsMwTi3MN94wQoINdsMICXUd7AspcEZEviUi/SKy0ftbp4g8KSLbqv8vqnOfVorIMyLyhohsEpHPLoR+VfuQEpGXROTVat++VP37GhF5sXpPvyciiZmONU/9i4rIehF5bKH0S0R2i8jrIrJBRNZV/9awe1m3we4FztwI4AIAt4nIBfU6/zG4B8AN9Le7ADzlnDsbwFPVdj0pAficc+4CAO8B8JnqNWp0vwAgD+Ba59wlAC4FcIOIvAfAXwL4qnNuLYAhAHc0oG8A8FkAm732QunXB51zl3r29cbdS+dcXf4BeC+An3jtLwL4Yr3OP0WfVgPY6LXfBLC0+nopgDcb3L+HMRlVuND61QTgFwDejUmPsNix7nEd+7MCkwPnWgCPYTL/z0Lo124A3fS3ht3Lek7jZxU402B6nXMHq68PAehtVEdEZDWAywC8uFD6VZ0qbwDQD+BJADsADDvn3o6lbdQ9/RqAPwHwds6qrgXSLwfgCRF5RUTurP6tYffSctBNgXPOCSdqqxMi0gLgAQB/4JwbFa9mVCP75ZwrA7hURDoAPATgvEb0w0dEfgVAv3PuFRG5psHdYa5yzu0XkcUAnhSRLf7Get/Lej7ZjztwpgH0ichSAKj+31/vDohIHJMD/TvOuQcXSr98nHPDAJ7B5PS4oxonATTmnl4J4NdEZDcmcypcC+DrC6BfcM7tr/7fj8kfxyvQwHtZz8H+MoCzq6ukCQAfB/BIHc8/Gx4BcHv19e2Y1Mx1QyYf4d8EsNk595WF0q9q33qqT3SISBqTawmbMTnoP9aovjnnvuicW+GcW43J79TTzrlPNrpfItIsIq1vvwbwywA2opH3ss4LFjdhMlJuB4A/q/eCCfXluwAOAihiUtPdgUmt9xSAbQB+CqCzzn26CpM67zUAG6r/bmp0v6p9eweA9dW+bQTw59W/nwngJQDbAXwfQLKB9/QaAI8thH5Vz/9q9d+mt7/vjbyX5i5rGCHBPOgMIyTYYDeMkGCD3TBCgg12wwgJNtgNIyTYYDeMkGCD3TBCwv8HI5hr3hHflv0AAAAASUVORK5CYII=\n",
+      "text/plain": [
+       "<Figure size 432x288 with 1 Axes>"
+      ]
+     },
+     "metadata": {
+      "needs_background": "light"
+     },
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "fig, ax = plt.subplots()\n",
+    "ax.imshow(data[14], origin='lower')\n",
+    "aperture = Circle((11, 26), radius=5, fill=False, ls=':', lw=2., color='white')\n",
+    "ax.add_artist(aperture)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next, we use the [FalsePositiveModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.processing.html#pynpoint.processing.fluxposition.FalsePositiveModule) to calculate both the S/N and FPF. We set the `position` of the aperture to the coordinates that we tested and the aperture radius to 5 pixels. For the reference apertures, we ignore the apertures neighboring the companion (i.e. `ignore=True`) so that the self-subtraction regions do not bias the noise estimate."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 27,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "-------------------\n",
+      "FalsePositiveModule\n",
+      "-------------------\n",
+      "\n",
+      "Module name: snr\n",
+      "Input port: pca_median (30, 57, 57)\n",
+      "Input parameters:\n",
+      "   - Aperture position = (11.0, 26.0)\n",
+      "   - Aperture radius (pixels) = 5.00\n",
+      "   - Optimize aperture position = False\n",
+      "   - Ignore neighboring apertures = True\n",
+      "   - Minimization tolerance = 0.01\n",
+      "Calculating the S/N and FPF...\n",
+      "Image 001/30 -> (x, y) = (11.00, 26.00), S/N = 5.54, FPF = 7.28e-04\n",
+      "Image 002/30 -> (x, y) = (11.00, 26.00), S/N = 4.85, FPF = 1.43e-03\n",
+      "Image 003/30 -> (x, y) = (11.00, 26.00), S/N = 5.75, FPF = 6.01e-04\n",
+      "Image 004/30 -> (x, y) = (11.00, 26.00), S/N = 7.43, FPF = 1.53e-04\n",
+      "Image 005/30 -> (x, y) = (11.00, 26.00), S/N = 10.16, FPF = 2.64e-05\n",
+      "Image 006/30 -> (x, y) = (11.00, 26.00), S/N = 8.92, FPF = 5.54e-05\n",
+      "Image 007/30 -> (x, y) = (11.00, 26.00), S/N = 8.35, FPF = 8.02e-05\n",
+      "Image 008/30 -> (x, y) = (11.00, 26.00), S/N = 5.59, FPF = 6.99e-04\n",
+      "Image 009/30 -> (x, y) = (11.00, 26.00), S/N = 7.81, FPF = 1.16e-04\n",
+      "Image 010/30 -> (x, y) = (11.00, 26.00), S/N = 6.46, FPF = 3.25e-04\n",
+      "Image 011/30 -> (x, y) = (11.00, 26.00), S/N = 7.34, FPF = 1.63e-04\n",
+      "Image 012/30 -> (x, y) = (11.00, 26.00), S/N = 7.17, FPF = 1.86e-04\n",
+      "Image 013/30 -> (x, y) = (11.00, 26.00), S/N = 6.97, FPF = 2.16e-04\n",
+      "Image 014/30 -> (x, y) = (11.00, 26.00), S/N = 6.32, FPF = 3.66e-04\n",
+      "Image 015/30 -> (x, y) = (11.00, 26.00), S/N = 8.25, FPF = 8.55e-05\n",
+      "Image 016/30 -> (x, y) = (11.00, 26.00), S/N = 9.85, FPF = 3.16e-05\n",
+      "Image 017/30 -> (x, y) = (11.00, 26.00), S/N = 9.98, FPF = 2.94e-05\n",
+      "Image 018/30 -> (x, y) = (11.00, 26.00), S/N = 8.71, FPF = 6.31e-05\n",
+      "Image 019/30 -> (x, y) = (11.00, 26.00), S/N = 11.85, FPF = 1.09e-05\n",
+      "Image 020/30 -> (x, y) = (11.00, 26.00), S/N = 9.01, FPF = 5.23e-05\n",
+      "Image 021/30 -> (x, y) = (11.00, 26.00), S/N = 6.85, FPF = 2.37e-04\n",
+      "Image 022/30 -> (x, y) = (11.00, 26.00), S/N = 6.41, FPF = 3.39e-04\n",
+      "Image 023/30 -> (x, y) = (11.00, 26.00), S/N = 7.57, FPF = 1.38e-04\n",
+      "Image 024/30 -> (x, y) = (11.00, 26.00), S/N = 6.88, FPF = 2.33e-04\n",
+      "Image 025/30 -> (x, y) = (11.00, 26.00), S/N = 5.45, FPF = 7.91e-04\n",
+      "Image 026/30 -> (x, y) = (11.00, 26.00), S/N = 5.32, FPF = 8.99e-04\n",
+      "Image 027/30 -> (x, y) = (11.00, 26.00), S/N = 4.38, FPF = 2.33e-03\n",
+      "Image 028/30 -> (x, y) = (11.00, 26.00), S/N = 3.06, FPF = 1.11e-02\n",
+      "Image 029/30 -> (x, y) = (11.00, 26.00), S/N = 4.45, FPF = 2.18e-03\n",
+      "Image 030/30 -> (x, y) = (11.00, 26.00), S/N = 4.89, FPF = 1.37e-03\n",
+      "Output port: snr (30, 6)\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = FalsePositiveModule(name_in='snr',\n",
+    "                             image_in_tag='pca_median',\n",
+    "                             snr_out_tag='snr',\n",
+    "                             position=(11., 26.),\n",
+    "                             aperture=5.*0.0036,  # radius in arcsec: 5 pixels * 0.0036 arcsec/pixel\n",
+    "                             ignore=True,\n",
+    "                             optimize=False)\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('snr')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The results have been stored in the dataset with the tag *snr*. Let's plot the S/N as a function of the number of principal components that have been subtracted. As expected, for a large number of components the S/N decreases towards zero due to increased self-subtraction."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 28,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "Text(0, 0.5, 'Signal-to-noise ratio')"
+      ]
+     },
+     "execution_count": 28,
+     "metadata": {},
+     "output_type": "execute_result"
+    },
+    {
+     "data": {
+      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYIAAAEKCAYAAAAfGVI8AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/Il7ecAAAACXBIWXMAAAsTAAALEwEAmpwYAAAfS0lEQVR4nO3de5wcdZnv8c+XEJYBgcElIhnMJioGMQGiI64GEUVO8MiBnKgoiwoqZt3VFY9r2ERcuS0SCHqUVVFEBBW5LMaAsHKRiyyKHANBg2AAkUsmQIIYl8gAITznj6qGTjOX6nTVVHfX9/169Svdv+rLU9NQT//uigjMzKy6Nis7ADMzK5cTgZlZxTkRmJlVnBOBmVnFORGYmVXc5mUHsCl22GGHmDx5ctlhmJl1lFtuueXRiJjQWN6RiWDy5MksXbq07DDMzDqKpPuHKnfTkJlZxTkRmJlV3JglAklnS1ot6fa6skWSfifpN5J+JKl3rOIxM7PEWNYIzgEOaCi7GpgWEbsDdwELxjAeMzNjDBNBRNwAPNZQdlVEPJM+/CWw81jFY2ZmiXYaNfRh4MLhDkqaC8wFmDRp0ljFZNaxliwbYNGVK1i1dpCJvT3MmzWV2TP6yg7L2lBbdBZLOgZ4BjhvuOdExJkR0R8R/RMmvGAYrJnVWbJsgAWLlzOwdpAABtYOsmDxcpYsGyg7NGtDpScCSUcABwKHhdfENsvFoitXMLh+w0Zlg+s3sOjKFSVFZO2s1KYhSQcARwNviYgnyozFrJusWjvYVLlV21gOHz0fuAmYKmmlpI8AXwW2Aa6WdJukb4xVPGbdbGJvT1PlVm1jViOIiEOHKP72WH2+WZXMmzWVBYuXb9Q81DN+HPNmTS0xKmtX7TRqyMxyUhsd5FFDloUTgVmXmj2jzxd+y6T0UUNmZlYuJwIzs4pzIjAzqzgnAjOzinMiMDOrOCcCM7OKcyIwM6s4JwIzs4pzIjAzqzgnAjOzinMiMDOrOCcCM7OKcyIwM6s4JwIzs4pzIjAzqzgnAjOzinMiMDOrOCcCM7OKcyIwM6s4JwIzs4pzIjAzqzgnAjOzinMiMDOrOCcCM7OKcyIwM6s4JwIzs4rbPOsTJe0IfBzYDQjgDuDrEfFIQbGZmdkYyFQjkDQTuAf4O2AQeBI4DLhb0huLC8/MzIqWtUZwGnA+8LGIeBZA0mbAN4AvAm8qJjwzMyta1kSwJ3BELQkARMSzkr4ELCsiMDMzGxtZE8GfgSnAiobyKcDaPAOquiXLBlh05QpWrR1kYm8P82ZNZfaMvrLDMrMuljURXAB8W9LRwC/SspnAKSRNRpaDJcsGWLB4OYPrNwAwsHaQBYuXAzgZmFlhsiaCowEBZ9e9Zj1wBjC/gLgqadGVK55LAjWD6zew6MoVTgRmVphMiSAingaOkrQAeEVa/PuIeKKwyCpo1drBpsrNzPKQeR4BQHrhX74pHyTpbOBAYHVETEvLXgxcCEwG7gMOiYg/bcr7d4OJvT0MDHHRn9jbU0I0NpbcN2RlGnYegaRLJW1bd3/YW8bPOgc4oKFsPnBNROwCXEPFm5nmzZpKz/hxG5X1jB/HvFlTS4rIxkKtb2hg7SDB831DS5YNlB2aVcRIE8r+SDKDGOCx9PFwt1FFxA3p+9Q7GDg3vX8uMDvLe3Wr2TP6OHnOdPp6exDQ19vDyXOm+5dhlxupb8hsLAzbNBQRH6q7f0RBn79jRDyU3n8Y2HG4J0qaC8wFmDRpUkHhlG/2jD5f+LtIliYf9w1Z2bIuMXG2pG2GKN86bftvWUQEz9dAhjp+ZkT0R0T/hAkT8vhIs0JlbfIZrg/IfUM2VrKuPno4MNR/lT3AB1v4/Eck7QSQ/ru6hfcyaytZm3zcN2RlG3HUUDqqR+lte0nP1B0eB7wTaGX10UtJkszC9N9LWngvs7aStcmn1lTkUUNWltGGjz5K0lxTW3a6UQDHZvkgSecD+wI
7SFqZvm4hcJGkjwD3A4dkC9us/TUzHNh9Q1am0RLBW0lqA9cC72LjUT9PA/dHxKosHxQRhw5zaL8srzfrNPNmTd1oyRBwk4+1pxETQUT8DEDSFODB+tVHzWxkbvKxTpF1iYn7ASRNBCYBWzQcvyH/0Mw6n5t8rBNkSgRpAvgBsA9Jv4DYeKjnuKFeZ2Zm7S/r8NEvAxtI9it+Angz8B7gTl64bISZmXWQrIvOvQV4Z0T8TlIAayLi55KeAk4Eri4sQjMzK1TWGkEPyVBSSEYOvSS9fwewe95BmZnZ2MmaCH4H7Jrevw34mKS/AT4OeIlEM7MOlrVp6CvAS9P7JwBXAIcCT5HMCDYzsw6VdfjoeXX3b5U0maSG8EBEPDrsC83MrO2N2jQkabykhyW9plYWEU9ExK1OAmZmnW/URBAR60k2qh92iWgzM+tcWTuL/x1YIKmpPY7NzKz9Zb2wv5lkLsGApNuBv9QfjIiD8g7MzMzGRtZE8CjwwyIDMbPyZNlS07pX1lFDHxr9WWbWiWpbataWy65tqQk4GVSE2/w7mH/FWR5G2lLT/z1VgxNBh/KvOMtL1i01rXtlHTVkbSbrxuhmoxlq68yRyq37OBF0KP+Ks7zMmzWVnvEbbyniLTWrxYmgQ/lXnOVl9ow+Tp4znb7eHgT09fZw8pzpbmKskMx9BJLeQbLa6MuBWRHxoKQjgT9ExDVFBWhD88bo1VXEIAFvqVltmWoEkg4DLgLuBqYA49ND44CjiwnNRuJfcdVUGyQwsHaQ4PlBAkuWeTV423SKGH0JIUm/Bk6OiAskPQ7sERH3StoDuCoidiw60Hr9/f2xdOnSsfxIs7Ywc+G1DAzRD9TX28PP57+thIisk0i6JSL6G8uz9hHsAtw0RPk6YNtWAjOz7DxIwIqQNRGsAl41RPk+wO/zC8fMRuJBAlaErIngTOB0STPTxy+TdDhwKnBGIZGZ2Qt4qKcVIetaQ6dK2g64GtgSuI5km8rTIuJrBcZnZnVqgwG8tIjlKVNn8XNPlrYCdiOpSdwREeuKCmwk7iw2M2vecJ3FTa01FBFPAEsl9QAzJd0dEffnFaQVw4vTmdlIss4jOEfSP6b3twBuBq4CVqQTzaxNedy5mY0ma2fxLOCX6f2DgO2AlwLHpTdrU16cLn9Llg0wc+G1TJl/OTMXXuukah0vayLYHlid3j8AuDgiVgMXkPQZWJvyuPN8uYZl3ShrIngYmCZpHEnt4Kdp+YuA9UUEZvnwuPN8uYZl3ShrIjgbuBC4HdgA1BaZewPwuwLispx43Hm+XMOybpR1HsEJkn4LTAL+IyKeTg89A5xSVHDWOo87z9fE3p4h1/pxDcs6WebhoxHxwyHKzs03HCuClxjOj5f/tm40bCKQNAf4cUSsT+8PKyIWtxKEpP8DHAkEsBz4UEQ82cp7mhXBNSzrRsPOLJb0LPDSiFid3h9ORMS4EY6PHIDUB9wI7BYRg5IuAv4zIs4Z7jWeWWxm1rymZxZHxGZD3S/I5kCPpPXAViSrnZqZ2Rgofc/iiBgATgMeAB4C/hwRVzU+T9JcSUslLV2zZs1Yh2lm1rUyJwJJu0v6bnox/pWkcyVNazUASdsDB5NsgTkR2FrS+xufFxFnRkR/RPRPmDCh1Y+1Fnl2rVn3yLrW0EHArcDLgJ8AV5AMJV0m6X+1GMPbgT9ExJqIWA8sBt7U4ntagTy71qy7ZB0++m/ASRFxbH2hpBPSYz9uIYYHgL9Nl7geBPYD3BPcxkaaXevRM2adJ2vT0KuA7w1R/j2gpQHUEXEzcDFJjWN5GtOZrbynFcuza826S9ZEsBp43RDlrwMeaTWIiDg2InaNiGkR8YGIeKrV97TieP0is+6SNRF8C/impGMkvTW9fQ74Bv71Xjlev8isuzTTR7AO+GfgxLRsFXAscHoBcVkb8+xay5N30CtfU3sWA0jaBiAiHi8kogw8s9isO9RGoDWu3XTynOlOBgUYbmZx0xPKIuL
xMpOAmXUP7+/QHjI1DUl6MXASydDOl9CQQCJi2/xD6y6u/pq9kEegtYesfQTfBmaQdAyvIlkl1DJqrP7WJmABTgZWad7foT1kTQT7AfunY/6tSZ6AlY1rTdXj/R3aQ9ZEsJpk1JBtAld/R+daUzV5BFp7yJoIjgFOkHR4RDghNMnV39G51lRd3kGvfFkTweeAycBqSfcD6+sPRsTuOcfVVVz9HV2ztSY3I5nlJ2siuLjQKLqcq7+ja6bW5GYks3xlSgQRcXzRgXQ7V39H1kytyc1IZvlqekKZpK9L2qGIYKy6Zs/o4+Q50+nr7UFAX2/PsLNL3flulq+sTUP13k+yteSjOcdiFZe11uTOd7N8bcqexco9CrMmNLP6qbfUNBvdptQIzEqVtfPdncpm2TSdCCJimyICMWtGlmYkdyp3Hw8bLkZTiUDSy4HdSNYaujMi7i0kKrMcuFM5f2VeiF3DK06mPgJJ20r6D+AeYAlwCXC3pItq+xOYtRtvqZmv2oV4YO0gwfMX4rHqd/GS1cXJ2ln8FWB34K1AT3rbLy37ciGRmbXIW2rmq+wLsWt4xcmaCA4CjoyIn0XE+vR2PTAXmF1UcGataGZugo2u7Auxa3jFydpH0AP8cYjyx4At8wvHLF+e0Z2fsudveM2u4mStEfwcOFHSVrUCSVsDxwO/KCIwM2svZTe1uYZXnKw1gk8DVwADkn6Tlk0HngBmFRGYmbWXdlg80TW8Yigi266TaW3gMGDXtOhO4LyIGPOemv7+/li6dOlYf6yZWUeTdEtE9DeWZ928fh/gFxHxrYbyzSXtExE35BSndSFPAjJrb1mbhq4DdiLZsrLedumxcS94hRmeBFRl/gHQObJ2FotkNnGjvwb+kl841m3KHntu5Sh78pk1Z8QagaRL07sBfF/SU3WHxwHT8KghG0HZY8+tHF7nqbOM1jRUmzsg4E9A/f+9TwM3At9qfJFZTdljz5vhpoz8+AdAZxkxEUTEhwAk3QecFhFuBrKmdMokIPdl5KuTfgBYxj6CiDi+lgQkzZfUW2hU1jU6ZRKQ+zLyVfbkM2vOpmxM81ngImBtvqFYt+qESUBuyshXO0w+s+w2JRF4q0rrOm7KyF8n/ACwxKbsWWzWddyUYVW2KTWC3YBVeQdiViY3ZViVbcqexQ/mHUTa+XwWybyEAD4cETfl/TlmI3FThlXVsIlA0uMMPZv4BSJi2xbj+ApwRUS8W9IWwFajvcDMzPIxUo3gE2MRgKTtgH2AIwAi4mmSyWpmZjYGhk0EEXHuGMUwBVgDfEfSHsAtwFGNk9ckzSXZGpNJkyaNUWhmZt2vHUYNbQ68FjgjImaQLGI3v/FJEXFmRPRHRP+ECRPGOkYzs66VdT+CLYBjgEOBScD4+uMR0coy1CuBlRFxc/r4YoZIBO3Ia9OYWTfIWiM4ETgc+CLwLDAP+BrJonT/2EoAEfEw8KCk2oDt/YA7WnnPseBlds2sW2RNBIcAH4uIbwIbgEsi4pPAscD+OcTxT8B56X7IewJfyOE9C+W1acysW2SdR7Ajz/9KXwf0pvevAE5pNYiIuA14wT6a7cxr05hZt8iaCB4AJqb/3gPMIhnd80Y23qOgMrp1bRr3e5hVT9amoR+RtN1DMvnreEl/AM4hmRHc9pYsG2DmwmuZMv9yZi68tuW2/G5cm8b9HmbVlKlGEBEL6u5fLGkl8Cbgroi4rKjg8lLEpiPduDaNtxc0q6ZNWXSOiPgl8MucYylMURe4blubxv0e1i3cxNmczIlA0s4kS0G8hIYmpYj4Us5x5coXuGy6td/DqsXbjjYvUx+BpMOA35NsVP8pkuGetduYrEnUiuEuZL7Abawb+z2sejy0u3lZawQnkEwm+9eI2DDak9tNp2ygXrZu7Pew6mm2BcDNSM3NIzirE5MA+ALXjG7r97DqaaaJ081IiayJ4D+BNwD3FhhLoXyBM6uGZloAPFIukTURXA2cIuk1wHJgff3BiFicd2B
mZpuimRYADyRJZE0E30z//ewQxwJoZfVRM7NcZW0B8Ei5RKZRQxGx2Qg3JwEz60geKZfYpAllZmbdwANJElk3pvn8MIcCeJJkIborIqJaDWtm1vE8kCR7jeA9JDuTbQ2sSssmkmwruQZ4GbBa0lsiomNHFpmZVVHW1Ue/CPwKmBwRkyJiEjAZuJlkstlE4C6grZeaMDOzF8paIzgWODgiVtYKImKlpKOBJRHxXUnHAJcUEaSZWdUVOQO6mZnFWw5R/lcki9ABPAJslUdQZfOUczNrJ0XPgM7aNPRT4JuSXi9ps/T2euAMkslmANOBP7QcUcm8OYuZtZuiF9LLmgiOJPnFfzPwVHr7ZVr20fQ5jwOfySWqEnnlQjNrN0XPgM66Q9lq4ABJU4HaTIvfRcRddc+5LpeISuYp52bWboqeAZ21RgBARKyIiEvT212jv6LzeO8CM2s3Rc+AHrZGIOl0YEFE/CW9P6yI+GQu0bQB711gZu2m6BnQIzUNTQfG190fTuQSSZvwlHMza0dFzoBWROddx/v7+2Pp0qVlh2Fm1lEk3RIR/Y3lTfUR1L3Z5pJe1HpYZmZWthETgaT9JB3SUDYfWAeslXSFpN4C4zMz61pLlg0wc+G1TJl/OTMXXlvafKXRho/OB35SeyBpL+ALwLeBO4F5wDHpv2ZmXSvvFQfaab/k0ZqGpgM/q3v8HuAXEfHRiPgS8EngoKKCMzNrB0WsONBOk1dHSwS9wOq6xzOBK+oe/wrwcBoz62pFXLTbafLqaIngIeAVAJL+CpgB3FR3fBuS5SbMzLpWERftdpq8Oloi+AlwqqS3AaeQbETzX3XHdyfZnczMrGsVcdFup/2SR0sEnyfZivKnwIeBj0bE03XHP8zzq4+amXWlIi7as2f0cfKc6fT19iCgr7eHk+dML2XyaqYJZZK2A9ZFxIaG8hen5U8P/cpieEKZmY21btinZLgJZVlXH/3zMOWPtRqYmVkn6OZN7jdpZrGZmXWPtkkEksZJWibpsrJjMTOrkrZJBMBRJLOVzcxsDLVFIpC0M/BO4KyyYzEzq5q2SATAl4GjgWeHe4KkuZKWSlq6Zs2aMQvMzKzblZ4IJB0IrI6IW0Z6XkScGRH9EdE/YcKEMYrOzKz7lZ4ISNYvOkjSfcAFwNskfb/ckMzMqqP0RBARCyJi54iYDLwPuDYi3l9yWGZmlVF6IjAzs3Jlmlk8ViLieuD6ksMwM6sU1wjMzCrOicDMrOKcCMzMKs6JwMys4pwIzMwqzonAzKzinAjMzCrOicDMrOKcCMzMKs6JwMys4pwIzMwqzonAzKzi2mrROTOzbrBk2QCLrlzBqrWDTOztYd6sqcye0Vd2WMNyIjAzy9GSZQMsWLycwfUbABhYO8iCxcsB2jYZuGnIzCxHi65c8VwSqBlcv4FFV64oKaLRORGYmeVo1drBpsrbgROBmVmOJvb2NFXeDpwIzMxyNG/WVHrGj9uorGf8OObNmlpSRKNzZ7GZWY5qHcIeNWRmVmGzZ/S19YW/kZuGzMwqzonAzKzinAjMzCrOicDMrOKcCMzMKk4RUXYMTZO0Bri/oXgH4NESwilKt50PdN85ddv5QPedU7edD7R2Tn8TERMaCzsyEQxF0tKI6C87jrx02/lA951Tt50PdN85ddv5QDHn5KYhM7OKcyIwM6u4bkoEZ5YdQM667Xyg+86p284Huu+cuu18oIBz6po+AjMz2zTdVCMwM7NN4ERgZlZxHZ8IJB0gaYWkeyTNLzuePEi6T9JySbdJWlp2PM2SdLak1ZJuryt7saSrJd2d/rt9mTE2a5hzOk7SQPo93Sbpf5YZYzMkvUzSdZLukPRbSUel5R35PY1wPp38HW0p6f9J+nV6Tsen5VMk3Zxe8y6UtEXLn9XJfQSSxgF3AfsDK4FfAYdGxB2lBtYiSfcB/RHRkRNhJO0DrAO+GxHT0rJTgcciYmGasLePiH8pM85mDHNOxwHrIuK0MmPbFJJ
2AnaKiFslbQPcAswGjqADv6cRzucQOvc7ErB1RKyTNB64ETgK+DSwOCIukPQN4NcRcUYrn9XpNYK9gHsi4t6IeBq4ADi45JgqLyJuAB5rKD4YODe9fy7J/6QdY5hz6lgR8VBE3Jrefxy4E+ijQ7+nEc6nY0ViXfpwfHoL4G3AxWl5Lt9RpyeCPuDBuscr6fAvPxXAVZJukTS37GBysmNEPJTefxjYscxgcvQJSb9Jm446ohmlkaTJwAzgZrrge2o4H+jg70jSOEm3AauBq4HfA2sj4pn0Kblc8zo9EXSrvSPitcA7gI+nzRJdI5L2yM5tk3zeGcArgD2Bh4AvlhrNJpD0IuCHwKci4r/rj3Xi9zTE+XT0dxQRGyJiT2BnkhaQXYv4nE5PBAPAy+oe75yWdbSIGEj/XQ38iOQ/gE73SNqOW2vPXV1yPC2LiEfS/1GfBb5Fh31PabvzD4HzImJxWtyx39NQ59Pp31FNRKwFrgPeCPRKqm0znMs1r9MTwa+AXdJe9C2A9wGXlhxTSyRtnXZ2IWlr4H8At4/8qo5wKXB4ev9w4JISY8lF7YKZ+t900PeUdkR+G7gzIr5Ud6gjv6fhzqfDv6MJknrT+z0kg2LuJEkI706flst31NGjhgDS4WBfBsYBZ0fESeVG1BpJLyepBQBsDvyg085J0vnAviTL5T4CHAssAS4CJpEsIX5IRHRM5+sw57QvSZNDAPcBf1/Xvt7WJO0N/BewHHg2Lf4sSbt6x31PI5zPoXTud7Q7SWfwOJIf7RdFxAnpNeIC4MXAMuD9EfFUS5/V6YnAzMxa0+lNQ2Zm1iInAjOzinMiMDOrOCcCM7OKcyIwM6s4JwIrlKRzJF2W4/tNlhSSct28O+84zTqJE4Flkl4oI72tl3SvpNPSSW8jOQp4f46hPAjsBNyW43taTiRdL+mrZcdhzdl89KeYPeenwAdIVkF8M3AWsDXwD41PTKfAb4iIP+cZQERsIFkMzcxy4hqBNeOpiHg4Ih6MiB8A55EugZtuAHK7pCMk/R54Cti6sckl/cX4dUlfkPRoutnLaZI2q3vOFunx+yU9ldY+Ppke26hpSNK+6eMD041HnkxXbX1d3fv9taTzJa2UNJhu8vGhZk9e0q6SLpX0Z0nrJN0kaXp6bDNJ/yrpwTTm5ZIOrnttLe73SfpZGscySbtLmibpF5L+IulGSVPqXlf7ux4p6YH0dUsk7VD3nKyf/S4lm808oWQDl/0bzm83SZdLejz9Xs6X9NK64+dIukzSUUo2e/mTpO9I2qp2HHgLyUKJtdrjZEnjJZ0uaVUa34OSFjb797fiOBFYKwZJagc1U4C/A94D7AE8OczrDgOeAd4EfAL4FPDeuuPnAh8k2YDj1cBHgLWjxHIa8C9AP3AvcFntAgVsCdwKHAi8BvgK8E1J+43yns+RNJFkY5AgWfPltcDXSKb/Q9IENi+NYTrJMiGLJe3Z8FbHA6eQLJO8Fjgf+HfgGJIF0bYETm94zWSS5rWDgbcDuwBn1x3P+tknpe+9B8k6XRcoWa2ztibPDSRr8eyVfs6LgEvqkzRJTXBaevy9JOv3HFUXx03Ad0ia73Yiacr7ZPq896WxvxdYgbWPiPDNt1FvwDnAZXWP9wIeBS5MHx8HrCdZz36k110P3NTwnKuBs9L7u5BcbA8YJo7J6fH+9PG+6ePD6p7zIpKL7JEjnM8Ftc8cKs4hnn8Sydo7WwxzfAD4fEPZ9cD3G+L++7rjB6Zlc+rKjiDZUav2+DhgAzCprmzv9HW7tPDZfWnZ3unjE4BrGt5j+/Q5e9X9jR4ExtU951vATxs+96sN73M6cA3pkja+td/NNQJrxgFpk8iTJL/8bgD+qe74yoh4JMP7/Kbh8SrgJen9GSSLhl3XZGw31e5EsqvTcmA3eG5zj2OUbE7yR0nrgDkkC6tlNQO4MZKd8DYiaVtgIvDzhkM31mKoU3/utb/V8oayretqMwADEfFA3eObSf5Gr27
hs1el/9b+7q8D9km/33Xp36i26dMr6l53RyT9NPXv8xJGdg7Jwm93SfqapHc21DKsZO4stmbcAMwl+eW/KiLWNxz/S8b3aXxdUGwz5WeAfyZpulhOsvfwFxj9ApaHxlUd1w9xbKiyPP4ew352RISk+s/ZDLic5G/VqD65N/3dRbKP8GRgFrAfSdPfryXtH8k+AVYyZ2VrxhMRcU9E3D9EEsjLbST/Xb61ydf9be2OkiGt00jWboekKeXHEfG9iLiNZLu/VzX5/suAvZXse7GRSHbCWgXMbDi0N3BHk58zlD5J9Rsw7UXyN7ozx8++laT/5P70O66/Pd7E+zzN8/0mz4mIxyPi4oj4B+CdJPvuvrKJ97UCORFYW4mIu0jWwz8rHeUyRdKbJX1glJd+TtL+kl5D0pH6NPCD9NhdwH6S9pa0K/BVko7tZnydpO/hIkmvl/RKSYfWdcguAj6Tlr1K0gkkHaunNfk5QxkEzpW0p6Q3At8ALo+Iu3P87K8B2wEXSnqDpJdLerukM5VulJTRfcBe6WihHdIRTZ9OY3u1pFeSDCj4b5L9dq0NuGnI2tEHgRNJOhl3ILlg/N9RXjOfZD/aqcBvgQMjotZU9W8kF/6fkFxUzyEZ+trYhj6siBhQsnf0IpL+iyBpZpqbPuV0YBvgVJIN31cA74qIX2f9jBHcR9K5/WOSv8dVwJF1x1v+7IhYJWkmcDJwBcnopQfSz2pm05PTSJp+7gB6SP7uj5OMaqoNBFgGvCMinmjifa1A3pjGOpqkfUkuzBMi4tFyo8mfpOOAd0fEtLJjse7lpiEzs4pzIjAzqzg3DZmZVZxrBGZmFedEYGZWcU4EZmYV50RgZlZxTgRmZhX3/wFSiZHBERvzoAAAAABJRU5ErkJggg==\n",
+      "text/plain": [
+       "<Figure size 432x288 with 1 Axes>"
+      ]
+     },
+     "metadata": {
+      "needs_background": "light"
+     },
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "data = pipeline.get_data('snr')\n",
+    "plt.plot(range(1, 31), data[:, 4], 'o')\n",
+    "plt.xlabel('Principal components', fontsize=14)\n",
+    "plt.ylabel('Signal-to-noise ratio', fontsize=14)"
+   ]
+  },
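+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a standalone sketch (plain NumPy, not PynPoint API), the number of principal components that maximizes the S/N can be read off the plotted column with `argmax`. Note that the PC counts start at 1 while array indices start at 0; the S/N values below are hypothetical:\n",
+    "\n",
+    "```python\n",
+    "import numpy as np\n",
+    "\n",
+    "snr_values = np.array([3.1, 5.4, 6.2, 5.9])  # hypothetical S/N per PC count\n",
+    "best_pc = int(np.argmax(snr_values)) + 1  # +1 because PC counts start at 1\n",
+    "print(best_pc)  # 3\n",
+    "```"
+   ]
+  },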
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Relative photometric and astrometric calibration"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "With the next analysis, we will measure the relative brightness and position of the companion. We will use the [SimplexMinimizationModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.processing.html#pynpoint.processing.fluxposition.SimplexMinimizationModule) to minimize the flux within a large aperture at the position of the companion while iteratively injecting negative copies of the PSF. This procedure will be repeated for 1 to 10 principal components. We need to specify two database tags as input: the stack of centered images and the PSF templates (i.e. the stack of masked images) that will be injected to remove the companion flux. Apart from an approximate position of the companion, the downhill simplex method of the minimization algorithm also requires an estimate of the flux contrast (e.g. within ${\\sim} 1$ magnitude of the actual value)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 29,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "-------------------------\n",
+      "SimplexMinimizationModule\n",
+      "-------------------------\n",
+      "\n",
+      "Module name: simplex\n",
+      "Input ports: centered (70, 57, 57), psf (70, 57, 57)\n",
+      "Input parameters:\n",
+      "   - Number of principal components = range(1, 11)\n",
+      "   - Figure of merit = gaussian\n",
+      "   - Residuals type = median\n",
+      "   - Absolute tolerance (pixels/mag) = 0.01\n",
+      "   - Maximum offset = None\n",
+      "   - Guessed position (x, y) = (11.00, 26.00)\n",
+      "   - Aperture position (x, y) = (11, 26)\n",
+      "   - Aperture radius (pixels) = 10\n",
+      "   - Inner mask radius (pixels) = 5\n",
+      "   - Outer mask radius (pixels) = 55\n",
+      "Image center (y, x) = (28.0, 28.0)\n",
+      "Simplex minimization... 1 PC - chi^2 = 3.64e+02 [DONE]\n",
+      "Best-fit parameters:\n",
+      "   - Position (x, y) = (12.91, 26.00)\n",
+      "   - Separation (mas) = 54.81\n",
+      "   - Position angle (deg) = 97.56\n",
+      "   - Contrast (mag) = 5.69\n",
+      "Simplex minimization... 2 PC - chi^2 = 1.87e+02 [DONE]\n",
+      "Best-fit parameters:\n",
+      "   - Position (x, y) = (13.49, 26.75)\n",
+      "   - Separation (mas) = 52.42\n",
+      "   - Position angle (deg) = 94.92\n",
+      "   - Contrast (mag) = 5.45\n",
+      "Simplex minimization... 3 PC - chi^2 = 2.88e+02 [DONE]\n",
+      "Best-fit parameters:\n",
+      "   - Position (x, y) = (13.31, 26.66)\n",
+      "   - Separation (mas) = 53.12\n",
+      "   - Position angle (deg) = 95.23\n",
+      "   - Contrast (mag) = 5.49\n",
+      "Simplex minimization... 4 PC - chi^2 = 4.44e+02 [DONE]\n",
+      "Best-fit parameters:\n",
+      "   - Position (x, y) = (13.13, 26.27)\n",
+      "   - Separation (mas) = 53.89\n",
+      "   - Position angle (deg) = 96.62\n",
+      "   - Contrast (mag) = 5.55\n",
+      "Simplex minimization... 5 PC - chi^2 = 3.00e+02 [DONE]\n",
+      "Best-fit parameters:\n",
+      "   - Position (x, y) = (12.76, 26.43)\n",
+      "   - Separation (mas) = 55.16\n",
+      "   - Position angle (deg) = 95.88\n",
+      "   - Contrast (mag) = 5.63\n",
+      "Simplex minimization... 6 PC - chi^2 = 2.78e+02 [DONE]\n",
+      "Best-fit parameters:\n",
+      "   - Position (x, y) = (12.60, 26.44)\n",
+      "   - Separation (mas) = 55.71\n",
+      "   - Position angle (deg) = 95.80\n",
+      "   - Contrast (mag) = 5.62\n",
+      "Simplex minimization... 7 PC - chi^2 = 3.61e+02 [DONE]\n",
+      "Best-fit parameters:\n",
+      "   - Position (x, y) = (12.02, 26.26)\n",
+      "   - Separation (mas) = 57.87\n",
+      "   - Position angle (deg) = 96.22\n",
+      "   - Contrast (mag) = 5.82\n",
+      "Simplex minimization... 8 PC - chi^2 = 4.30e+02 [DONE]\n",
+      "Best-fit parameters:\n",
+      "   - Position (x, y) = (12.21, 26.25)\n",
+      "   - Separation (mas) = 57.17\n",
+      "   - Position angle (deg) = 96.32\n",
+      "   - Contrast (mag) = 5.73\n",
+      "Simplex minimization... 9 PC - chi^2 = 2.96e+02 [DONE]\n",
+      "Best-fit parameters:\n",
+      "   - Position (x, y) = (11.33, 26.18)\n",
+      "   - Separation (mas) = 60.37\n",
+      "   - Position angle (deg) = 96.22\n",
+      "   - Contrast (mag) = 5.98\n",
+      "Simplex minimization... 10 PC - chi^2 = 2.97e+02 [DONE]\n",
+      "Best-fit parameters:\n",
+      "   - Position (x, y) = (11.59, 26.26)\n",
+      "   - Separation (mas) = 59.42\n",
+      "   - Position angle (deg) = 96.07\n",
+      "   - Contrast (mag) = 5.82\n",
+      "Output ports: simplex001 (89, 57, 57), fluxpos001 (89, 6), simplex002 (70, 57, 57), fluxpos002 (70, 6), simplex003 (75, 57, 57), fluxpos003 (75, 6), simplex004 (73, 57, 57), fluxpos004 (73, 6), simplex005 (63, 57, 57), fluxpos005 (63, 6), simplex006 (79, 57, 57), fluxpos006 (79, 6), simplex007 (71, 57, 57), fluxpos007 (71, 6), simplex008 (66, 57, 57), fluxpos008 (66, 6), simplex009 (60, 57, 57), fluxpos009 (60, 6), simplex010 (78, 57, 57), fluxpos010 (78, 6)\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = SimplexMinimizationModule(name_in='simplex',\n",
+    "                                   image_in_tag='centered',\n",
+    "                                   psf_in_tag='psf',\n",
+    "                                   res_out_tag='simplex',\n",
+    "                                   flux_position_tag='fluxpos',\n",
+    "                                   position=(11, 26),\n",
+    "                                   magnitude=6.,\n",
+    "                                   psf_scaling=-1.,\n",
+    "                                   merit='gaussian',\n",
+    "                                   aperture=10.*0.0036,\n",
+    "                                   sigma=0.,\n",
+    "                                   tolerance=0.01,\n",
+    "                                   pca_number=range(1, 11),\n",
+    "                                   cent_size=0.02,\n",
+    "                                   edge_size=0.2,\n",
+    "                                   extra_rot=-133.,\n",
+    "                                   residuals='median',\n",
+    "                                   reference_in_tag=None,\n",
+    "                                   offset=None)\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('simplex')"
+   ]
+  },
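+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "A quick sanity check on the printed log (a sketch, assuming a pixel scale of 3.6 mas/pixel, consistent with `aperture=10.*0.0036`): the best-fit position for 1 PC, $(x, y) = (12.91, 26.00)$, relative to the image center at $(28.0, 28.0)$, should approximately reproduce the printed separation of 54.81 mas:\n",
+    "\n",
+    "```python\n",
+    "import math\n",
+    "\n",
+    "pixscale_mas = 3.6  # assumed pixel scale in mas/pixel\n",
+    "sep_mas = math.hypot(12.91 - 28.0, 26.00 - 28.0) * pixscale_mas\n",
+    "print(round(sep_mas, 2))  # 54.8\n",
+    "```"
+   ]
+  },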
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "When running the `SimplexMinimizationModule`, we see the $\\chi^2$ value changing until the tolerance threshold has been reached. The best-fit position and contrast are then printed and also stored in the database at the `flux_position_tag`. If the argument of `pca_number` is a list or range (instead of a single value), then the names of the `flux_position_tag` and `res_out_tag` are appended with the number of principal components in 3 digits (e.g. 003 for 3 principal components).\n",
+    "\n",
+    "The `res_out_tag` contains the PSF subtraction residuals from each iteration, so the last image in the dataset shows the best-fit result. Let's have a look at the residuals after subtracting 10 principal components, with the best-fit negative PSF injected (i.e. the injection that fully cancels the companion flux)."
+   ]
+  },
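+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The 3-digit suffixes can be generated with plain string formatting (a sketch of the naming convention, not PynPoint API):\n",
+    "\n",
+    "```python\n",
+    "tags = [f'fluxpos{n:03d}' for n in range(1, 11)]\n",
+    "print(tags[:3])  # ['fluxpos001', 'fluxpos002', 'fluxpos003']\n",
+    "```"
+   ]
+  },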
+  {
+   "cell_type": "code",
+   "execution_count": 30,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "<matplotlib.image.AxesImage at 0x14615dbd0>"
+      ]
+     },
+     "execution_count": 30,
+     "metadata": {},
+     "output_type": "execute_result"
+    },
+    {
+     "data": {
+      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAD4CAYAAAAq5pAIAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/Il7ecAAAACXBIWXMAAAsTAAALEwEAmpwYAAAqZUlEQVR4nO2de6xlV33fv7/zus+5c+c9dx54jG2MhwaMMzgOoATsQglFYKXUgtLKkSxZrahEEqRgJ6USUis5qhQSKVEit6C4Eg04CWDHgYDrGCpaYnuMH/g9Y2N7Zjwzd1535r7Pa/WPezz79/uue/Y+93XOHe/fRxrNWXfvvfba++x19vqu32NJCAGO47z1KfS6AY7jdAfv7I6TE7yzO05O8M7uODnBO7vj5IRSN09WHBoK5dHN3TzlWwuhMhlSAm0X2i7N5Z+a647I2u50hdrEWTSmpxf9Nrra2cujm7H387/TzVO+teDO3LDlRsWWC3VbLk8mFQQe02X9kPD+VG5UkgP4RyY61lkzjvzZV9tu86/BcXKCd3bHyQldHcY7HaCG04V5HrdTkTR4s5ThDanrJglQG7LlQo3qLlNVdHxRtbVZse0oVDOug+VIv3t1rgX+ZnecnOCd3XFygnd2x8kJrtm7DenV2gYrvPvOFJNdScvyT7OQaa1IGp81vTaBsZkuaiYdy2a8ULTlutLZ2sQHxKa4+kD7di1WLs4l9XG7XN93jr/ZHScneGd3nJzgw/i1hkeZVNbDdsAOYSXLq62Yvr3BJrCGqpCGwywZoqE1nYslREUNtVkixHKC/XipyNcxmPyhOGN37jubbtab3+TD/DfxN7vj5ATv7I6TE7yzO05OcM2+GmidSBKxSK6iTbrj7OIqzfaRaVq7AtYkBcQ6OvopV7qc6xJyjy1PUbupqsgE1pd8Ls3abWxqy4qwK9Rph6o6lPad227/wPesRNGeBVVXfShfet7f7I6TE7yzO05O8M7uODnBNXsHNDkDTI3L2sZMGpLs06GYrrtNKCn9FEchr2x352+TQ2DLKpsM6fvSTLqLaxyGirZUN7Sfh1j4gy3yPYjq25jUV7lA7eTvgs7F4ba1Ddqt195g1vBa378V8De74+QE7+yOkxM6GsaLyKsAJrFgvKmHEA6IyGYA3wKwD8CrAG4JIZxbm2Y6jrNSlqLZPxxCOK3KdwB4KIRwl4jc0Sp/aVVb1ytIQhbmbbk0a3eoDyitx7Ka5WhmFtfQblNk+9a2bSC2fZdJC2udzVqX9X6d0lDx3EOR7on2H9BzGECshTkUl8NUOY2VsduzTb6WXlcUP6D3HbD7ViZsXTWee+CQ40uMlQzjPwXgntbnewDcvOLWOI6zZnTa2QOAH4rI4yJye+tvO0IIx1ufTwDYsdiBInK7iBwUkYON6ekVNtdxnOXS6TD+gyGEYyKyHcCDIvKC3hhCCCJsrLm47W4AdwNA/+69+fJPdJx1REedPYRwrPX/uIh8B8D1AE6KyFgI4biIjAEYX8N2rims+yLoJ4r927XeZb3JdvXSVLrNuax8uasj9ljW6NGqLGWyKVN6aG1L51TSTU55lRHvHqWpUtcR63uOD1jab76ur9GX7qfA7aqO2AspXUguNIpbIH8K9qvnuYdLTcNnDuNFZEhENrz5GcBHATwD4H4At7Z2uxXAfWvVSMdxVk4nb/YdAL4jC2lTSgD+VwjhH0TkMQD3ishtAF4DcMvaNdNxnJWS2dlDCK8AeM8ifz8D4Ka1aJTjOKuP+8YDqG60uk4aVqtVzpMtl+yz2s+cfbFZ97EuLM7ZstGFWSutEjxfUGB/AdXOOsfGz6ZXzvEA9UFqWkNvI/s13b/onnCMP81N6PmAyG0hJa4eACpn7U3U8e7REtc8T8HlrHwB63z62d1lHScneGd3nJyQm2G8NjvVB9KH2hySyeaeyB1U1cdupNkrltC51DA/MmFlhJ1G6Z5p6K2HpVJPlxeRuZH
NfpynSr022J2YPTB4RdgobXXK8LpAEotNiNH9TwlPrg/Zi+B0WNE9oWvm50Z/13xN64F12CTHcdYC7+yOkxO8sztOTnjLanY2Q81vSQQXLyHEoaM6DRKwSBglL6uk0hdxeqcm6WjW8FFIpnIl5fP2UbaAuW322PKFdM2pl0Lia0aW6yfPD9CTo+8Jr+LKZrtIs1NTOBV1s6RWceX0WJX0Mmt6/XpjN97IPZbmdvjVyPdXXxebH/me9AJ/sztOTvDO7jg5wTu74+SEt4xmT03JDOsOynb0IqUMznJxZb1r3FBpqaOsMEh2FU07NkoVzXXRNUfnVs3mNMmcGprtxKy7ea5Ba1IOM53fnD4HEtn0bdG4J3OIcHTNkYsr7a98JuqDKc4CyF5KKva/SD7z88jhsVxXN/A3u+PkBO/sjpMTvLM7Tk64ZDU7h6HGO7QP4WQf8lmyV7MvNy+7FKWl0iGYJAOzUjQFStFUmkl+f8tT9lieD2BdWJqhc1M7yyrUtDq6NBt9fTh9iWetb+NwWNardntmaun59t8d+7czvFxUVWn8wjxpdE47Tc+B9tUA4vkD/f1EPvpks2/wd5kRYrwa+JvdcXKCd3bHyQne2R0nJ1yymp1hmzLbV/XPGqeOYr/lGvnGNylFM2tMXTf7r8+P2nJ5kg6ldmrtW6PUT+wzXhu2ZU4VxcsXaU3KdnbWq5E/Oz0pfedsu2d2KT1LBmq2KffRMkscixChquN02VEMOs2BsP+ATh/dIDt7aLS3yQPZy1obfwua4wgD9lyNCvl6zKasU7VK+JvdcXKCd3bHyQne2R0nJ1wymj3KB4Z0u2X/KauJ5rYq/2qOWx5O14yc94zzGev5grScZ0Bsh2d77OTbVWXkwz90xP42s/0661xaK0/tsw7spUmrGdm/PcrbR/qV8wdoAvmvc7pn9hsvX7DXqVM4txYrSeqm54B9C/QS2ICdx2j2pfvGSy3dv6LJ8e4K9oEIRVt3cdqW+f7xvNJq4G92x8kJ3tkdJydcMsN4HqKyeyGHI6aFPvIQKXJV5JTNnEKY3FK1ROAcSywR6oMcokltmdbusnbf6cusfbGwmexnr1g7Ew+P9XXJoK2rSa6jxTkq06n4nmw8lHye3wyC3HrZhMjusrxSq7qnbA6LVmmhds1vo3s2q66LvnZ+Dmqb7LGVM1bqBF5yR7eLntdCxv1sDGZc1yrgb3bHyQne2R0nJ3Tc2UWkKCJPiMgDrfLlIvKIiBwWkW+JSDRf7jjO+mEpmv0LAJ4HMNIq/yGAr4YQvikifwHgNgB/vloNy0ovxCar+pAtR0shafnFUjYlZBWINSa7pbIZSsMmFa6L3Wcb/cn+lQnb0Nm3k4Z8wdobZ99mheLgZju5EJ7cePFzldwzw0YrErf+X7v91Hvte2HwJM1FqOkCNlFF4bFkUqxttefuf8NOuOg5Eg5tjlZx5fDY2fYmrlBi85itq3zO/oHnEqIVe6eSc/F8CbtYsytz31meM7FtyXQp7oCO3uwisgfAvwTwP1plAXAjgL9p7XIPgJtX3BrHcdaMTofxfwzg95DMdW4BMBFCePMn+SiA3YsdKCK3i8hBETnYmJ5ebBfHcbpAZmcXkU8AGA8hPL6cE4QQ7g4hHAghHCgODWUf4DjOmtCJZv8AgE+KyMcB9GNBs/8JgFERKbXe7nsAHFu7ZsbaLC0FM7BIKmSl1dgVNEo7TTZQPhcfX1OadNPzVltNj5ELJkmvygVy+1Va7cwv24vuf9XOgUbpnM/YhtZPjpgylObs32onD8ple1Fnr95kyuwPMDhu95/bnOjbmTGrT7c+ZY+duIrmMYbtXEP/ODtJJB95voS1cWOzbZfMWN3d1Ms/kZ6PUlrTNUfLMEdhwSpdOftq1NPnGqLrWIPp7sw3ewjhzhDCnhDCPgCfAfCPIYTPAXgYwKdbu90K4L7Vb57jOKvFSuzsXwLwuyJyGAsa/mur0yTHcdaCJbnLhhB+BOBHrc+vALh
+9ZvkOM5asL5845WsYVs32zhZE7HurlDqI627OYUVhyqy3Xx+C9mUKX1xUaV/Ls3ZfQdO2/Loi9b2Pf4+O2l54eqkcaVJSu9EPvnNWrrttrrN6tfyueQm9D+8wWybeDftSxqSQ4Zf/4y9icUTSd1RO0nb9p8iG/M5K8T5u9RxEX0Tdtv5a6gdfbYczttHXKfuZv//6hZ7rJCPRBSGyuNi9VBGS0v3pz9jtQ30TEV+JSqdVt/ybO7uLus4OcE7u+PkBO/sjpMT1pVmr6jliVizs46LfKBJT0V2dp3tiWyYvEQQL7PEaZeL9BtZUimI6v22ruoGWz78H6yA3TBic0+HE4mW5vPO0TJVYz8lfUopm2pD9lxVJdM5xbL027oGTtlHw8TsA3j/la+Y8tNP7r/4eerdNnDhXIn8A2iOpEBxDpw2TPs1sI9DccpeY+UYtXsn3aNyoo3rQ2TvnyI/evJnr24mTZ+Stor95vn5ZI1eOZ/x3lXV8ZLX7A/QDn+zO05O8M7uODlhXQ3jtYsgD4Mil1bK3pmFHtZzWCmbSeY3kaltox1yDRyloaOq78I++/s5u9uOO7dtsSev1W1dG59LvhJthgOAMoVBDh6zLq/H32/NaexaqkM4i788Ybb9q8ueN+Vvl6415f/6vu+a8jOze0z5pZtOXfxce2ybPTGHodIwvkYmwv4z9sseGE/2n9pjv3dezac6ak9WoOekqExxnJ23TsNh4Qyv0Qow7Z9Bfl5DMWMVIgoD5tVqtGTgZ59db9vhb3bHyQne2R0nJ3hnd5yc0FPNHrkMKt1SjFwV7bGcapdXHeEUQ5rI7ER1sSsur97BzKto0LkdVmcPHrW3ePKE1bNz++yF7X4jOf78L1kx1kdpkmZ2s43QFi/76KumvLV/6uLnbZUps+3+Q79kyv3P2rq/fPQzprzj2pOmXCwkbWVT2uxu9h213+3oU/bLq9qpB5y+Truhpq/i2thA56J7og8fPmzPy269s3u43bbIq8001RxJFMo8Qav5cJpvTsNG5jV9HVx3p/ib3XFygnd2x8kJ3tkdJyf0VLOzbtGukJE7LLkucnpi1ui8XJTZnmWX5FTS5L5ZG2l/7hKtQMopgOvbrUYfftYuaXr8/UnjSpRmanK/PXbXJ46b8ol/epstT1rx++KTyfbhV207eUHSC1eQb8FJu/+xo3aNp9963/+7+Pn7R+y8xOxOe/9GXqZzn7bnmtvS/h1UJpdWfk5A9myZsw9GWWlndsme38xpqKiuYTu505y0mt/YwjOWJ4vmnGg7p57WbeW5hXb7Mf5md5yc4J3dcXKCd3bHyQk91eyRrtG+27T8TZO0GB/LGp3DWHUqnzQbPBDbMbnu6ihpO+VDHSgt0sCovZD6eWvkn9ll9erY/vHkPA1agunYqCkfJpv90El7T4Yf2WjK1WuS3/aqrSq6xtjnnFJJv2Jv8I8uu+ri51nS3Ffea/NUHf6snacoVO3+/WdsW6rqMua3UsgqheayRtcpwwDrRz47ljF502e3hxqFNtP8gb5jHIqblUoqK9KjqeYmeP5KX1NaPf5md5yc4J3dcXKCd3bHyQm91ezk767jr1mjR77vgTQQ+UjXB/j49ponywbKKZqjZYDUuSvnbEPnp+wt/tANz9q6yWj6s+8l6Z3Ynl8miRlo2aR5a/rG+GZKo6zuabRMdSFdNbK/O9/fNx7ddfFzhb6rX3zS+tlXzqTf/zrFLgy/1n6uoT6QvrxWbYe9Rw2dxoq/R5bVlGq6MJ/+btT3pMQGbypyfHukw9kPv5hsL9F8VVOlx46uQZ+j/SbHcd5KeGd3nJzgnd1xckJXNXsoWPs353ozPuik+1iLsCSa20o5u2j/hlqyqVmyv3HlKbIpj7HRnvOYWZ3dfyqpj1Muj15x1pR//MI7bF3j1l697XDSTs4tNrPdtnt+0iaZ43gB1n1mG/tmF9PtwFmGYO27wDb5y79r7ezNfnv/Xr7FPoYjL9jy7I6kvqG
jtDTXJtL/9ETXqu3fZ3zNxY00iXTcTh5EeRGpPq2l2bed5zgi3/eU5Z8BoFBVx/Pcjb5E9413HCezs4tIv4g8KiJPicizIvKV1t8vF5FHROSwiHxLRNZg+XjHcVaLTobx8wBuDCFMiUgZwE9E5PsAfhfAV0MI3xSRvwBwG4A/T6tIGkBZrb7CIa66nLXKhdDqJ5zGKgr103YrXp2DwlBBoY3FCXubGsOUrvh48pvJbr7nXttk9+U0wDTUPnWdWhWHhpmNTXaYWRq3WkevqAPE16VTe0Xmw6VCt8zIAtp2/korNzZ/9xlT3r73n5nyqV+h1F5HkjEurzYrbA4rsZRht2plwtpp5UW5bM9bnbMmQx5aR9Y1dfjcdtvOylkap7PMpGG+kJJsKA9jTtGmVzRKk26ZX3lY4M2EZeXWvwDgRgB/0/r7PQBuzqrLcZze0dHvu4gUReRJAOMAHgTwMoCJEMKbP19HAexuc+ztInJQRA42ZqZXocmO4yyHjjp7CKERQrgWwB4A1wN4Z6cnCCHcHUI4EEI4UBwcWl4rHcdZMUsyvYUQJkTkYQC/CmBUREqtt/seAMeWevIo9RSnz1Ww9gqs6XllSw4pVOGJoZKu6/pfoxBMTo1Mdc9uT8r1jVY09R9Pv8Ws1bR5p0yry7Jd6a7f/IYp3/ntz5lyVuqjFZFi4mGz5/iNVmQOnrLvioa93dj2T7ah47+WCNgdu+2qt+OHttpzZ+hq7ZY6P2lP3KBlpwoZ1sgiPRd6+bIKpf3mtOlsduYlnaI5KDV/w+HI88rUmZayqpPZ+G0iMtr6PADgIwCeB/AwgE+3drsVwH1ZdTmO0zs6ebOPAbhHRIpY+HG4N4TwgIg8B+CbIvJfADwB4Gtr2E7HcVZIZmcPITwN4L2L/P0VLOh3x3EuAbob4ipWp3NYn9aBkUYn+2mTWl4k+yLr3eZsImbYDbWRksIKiJfTHXijvc20MUTabDulUeLQxiESYOrw+X6rIcsT9rxfvuffmnJ9u70wXnIo0yV2jRh8yWrjyoRdeqr2722ZzMjY8nejFz83f5Oeiw12YqL8Bulu0vBzO9X+5ErbpGesRCmtajwPRM9gWbt7s0trMWVfxPNC1UjToy3a9TZtaSh3l3WcnOCd3XFygnd2x8kJPQ1xrZxLWdKJfa8pRVBWat4o1dFg+/03HrI7F+fsvhfezvZuKqq7yPMQOvwVAGYoffHQIet/PXW50viUJrlCPvrlSduOIvmJ18mHKSuF9mrB935uh73ml37Lho5es/G0Kb9yaospT/1aImivG7F29uu3v27Kf9+0S0/z8tA61XRaKnMAqG2w7R58g0KMabmomort4FTckR89pycnjc5zTtofg5cc73QNZ3+zO05O8M7uODnBO7vj5ISuanZpUkw1m6v18k/kS8zx7by8DtfF/tY61rtJKa9mdti6I21Leov92bVfM/s4z21Jj5Xf+IrV5eXJ5Pe3WbENqY7YqsbuPWTKL/7+FabMS01rLc2+BkuGNahO5U0aMopjIF44stOUhzbQkln15D48fmif2VammP4omxmnCVe3m2MzOFUU6/3oeaXddVw5b+Nzsa88xzGkzknR/TVLnXlaKsdxvLM7Tk7wzu44OaHryz9prcipeTWRLzxpcPZjZl961sZaC3MOL70kMAA0Su01+WLn1jZ81skNWva3fMH+vh67yW4vqrRol3/XZvZhDX/sv9tY7i+/89um/N/u+bQpa20X3Xv+Klj78XbOoaY0KedB2/t9u/PkHnsdfXRPpnaRIfm65Kb0HUnPa8rfLb/OqhuT+913xm7ssyZ8zI9yOX1up6py/vGcCMeg8z1iDc9Lf+nj2c5u/ObdN95xHO/sjpMTuh7iqoeSkelDDZd5lRY2TfA4k91h2W3VrBBLKZZrG+2Yq++UHZ/xkKtOxxvJkZH7qUqrihbP269g2+NJXaeutf6uYw+fMuVd/8m28yu/80m7/ddPmPKpJ3Zc/ByZ3lL
SIgOLmLB4mKq+ny1P2/sz9JMXTbl8rTURzm2lsFT6rvt+nnx5G47YE0/upZVWabg8t43knjIRcipzvVIqYIf8C3XTM0lDZu1+m7VKK5tv2VYXhSOrNFUcDttpujF/sztOTvDO7jg5wTu74+SErpvetGkgCuPTep78Hlnfp5ntACDQyqsmtRSZ5Tj9cIPDYdnjlVNeKz3Fq4gaN1IAI1utOW3m7KgpF2uJTrzwDqsZp95mTW3bH6c0VCfsb/e2K+25Tpa1iZC0bob78fwua9MqXLAXuvOnSd3Hf90ee/KGa0z5ir+eNeWRHx62B3zU7l+eSq5zeqc9L+vVKs2n1DfZCyvMJBfGzx+HtPK8BOvo8nkKu1bPKLu7Fvh5zljuqTTVfi4i1vvoCH+zO05O8M7uODnBO7vj5ISeana2c2otXN1kBRNrM3ZhrdMyyk3SNQWVFphdVjlVNNeFERJUROUXif8iLztV35Z+bGPQnuvs1Ymm3PiirWviXXbfkzdbHd3/tE1x9eyxMVO+/O8SA+2Rz1stu/eWn9t2ffg62653Wn/l6T2miKnPnb/4eeBRu0z14AkOeaUUzlftNWV2p61uTMoDJ21dfROgfUmHU2qv4tnkkednilNvs2bn54TTmev5mTIvR8Z2c05nPkQpyGhORPcV9iVwO7vjOAbv7I6TE7yzO05O6L5mV7C/sLbtsvYtkl2YU0sVKI1yY6PVpE3lG1/L+I0TWi63dMTq1TrbOZWTdIOXg56yt/hCYdCUS5uto/Osil9k/Tl4xGrZjT+24Z6DJ6z9+o3rrbh77V8kea2ar5lN+MEbT5ryu/70/aZc20DxBJtICz+W6PSh43bf6gj5nI/YLy+M2nJ11Lat76xuB83VULpsDnGVs/Ye1dUcyRDdz7qd8oiI0lZFqbmSz2wL52ddAtvZKQ04P2N6X/IhCVlrS7/Zho72chznkqeT9dn3isjDIvKciDwrIl9o/X2ziDwoIoda/2/KqstxnN7RyZu9DuCLIYT9AG4A8HkR2Q/gDgAPhRCuAvBQq+w4zjqlk/XZjwM43vo8KSLPA9gN4FMAPtTa7R4APwLwpfTKrO2SU0nVle92kZbL5djhwOl0ScPLbPuY9ChWmGO1Oa0vZUIq0bJBZkkrstVyrLyM28rY5x+qbRLFONtdp3cWqWxF5+ykbUvYmWj4Xf9gv/rr9/9rU5650orfYh+lvH7d5kbSaZTKL9N8S5V8D/o5TTjFD9DxZ96TlDkmYugo2+TT9evA8WR/fmai+AueN6Lvne3w9eH2xxaop/FzwjZ7fkZ1ujP2T+mUJWl2EdkH4L0AHgGwo/VDAAAnAOxod5zjOL2n484uIsMA/hbAb4cQLuhtIYSANrE3InK7iBwUkYON6enFdnEcpwt01NlFpIyFjv6NEMKbKUxPishYa/sYgPHFjg0h3B1COBBCOFAcGlpsF8dxukCmZhcRAfA1AM+HEP5IbbofwK0A7mr9f99ST866RJcj3/cNbJdk/ZRu19Q2UfYtro2Q+KJcZDxoCXRureXmt5G2neAJAVvkuPL5rcnxUTwA5R6rbaCq6TLKA/ZCyz9LROXJ99E1nR025eFNM6a8a8QM5nBiyJ586hdJPu7pHemx8uUp8vm34esYOG5vkl66i5dVqlNaZbZPs84uq8HlLOWni/LsUS6CKH02pTe3uQjtNv7uinaFKwjNHzQr7VNLc37ATunEqeYDAP4dgJ+LyJOtv/0+Fjr5vSJyG4DXANyyvCY4jtMNOpmN/wnaLxd30+o2x3GctaKn7rKMWU0lxRURiIcybMbjoaM2q9Ro6MeutrwaTeTWS3etqSQEpyqqTNpjp/bZhg29bisrbk3Ge42zdpw4eQWNDSt23D74sjXrNX9h50im35GY0/pfs/vWChTCSud+aYN1892w2U626ns0vde2a+Akrbxy3l7Hrh/bL3PiSjum7T+l0z21TxEOxCnIC+T6bIbuNGwv02o+/Iw1eTEaMik
WZtWqQzRsZ3NZlVd84RWNCC09Ow1pjepY3mGO41xqeGd3nJzgnd1xcsK60uxtpwGxyCqtvC9bSdiMoo4v20hQ1EZppdUJ1vB2/yaFFGozCZv1pq6kP5C5cfpqK+52jE5d/Hz+BStIK+dNERf227pnr7H2nMJJq7vLJxMtPLfLzh2Uz1m302Y/3ZNB6z5bP0hxT8o0Ovy6vX9s4tpx0NZVHbE3uDRj95/fpF1F7WnZ1MYuxQOTtL+6pZx+LIhtN6/IyzocZB4rqeeqSqZihjU6183zA8vV6eYcK6/CcZxLAe/sjpMTvLM7Tk5YX5pdwdqrPkrL+KSEsAIAmu0nAAKHw5JbZOSaS3X3naYwVmUzZbtu6Szp0SumTHnurDX6Tzy2PamX0mnP7SJ/WHa9PWY1OreldkUiKoeeseGwfefsNc9ut/e38Lp1p+W0ytqvYXq33TZ4wrbjzLvsNc+MkQ2aUl6NHGr/mPIcCei7mtti624MJfdQh7sCse2bbePRMtecHbqUXCeHQUeuzjT3wBo9Otcq4G92x8kJ3tkdJyd4Z3ecnLBuNXvk+16n3yXWS2X7hxLpVRMayRGtpK/Y/srL51Y3cUqspMxzDdzO2mtWrFXm2O9e+dmTX321ZMv9YzYMdW7O1l0sUqjpsUQrz+6019joI59+sulzOGf/aVrmWk0BcPhmhXzhZ3aSnqV7UL6QMR+j9yVb+NzWDLGr5nKi74p35XTlND/QN9H+GWtQ/AWfi9Ossb/AcsNY0/A3u+PkBO/sjpMTvLM7Tk5Yt5qdqZympXoGKf0Q6T62MXOMuoa12eCx9PTE7DMdCkmZ5w4a1E5Op9V3luLdr0jEWoHmEirnKP3w2RFTDmNWVJbO2uswS1PxksGU+rhJ8wO1EfZBt9vnNycV8v2MbM6ckon9GOieVDeqfSlPQY1t4eQzwXkN9Ln5misX6JqH0/3uebtelqnJORHY54FSofH2tcDf7I6TE7yzO05OuGSG8ZyJlt0PeYjFK4foYRIPtXkYyW6SHOLa4FUz1YiM0yLxkLRuvU4xT+6c/cqFs2/C7ssultVRe+zwYTt+LlIo78zu5DObftg9s0ErmrIZsGyTzaJZTt4bJTovh2dyhlhuS9rwmOuuU4Zd/t55xVO9UgubVKNsvjz0HkqXjtrdNkqLRqa1Mq3Ww8/kWuBvdsfJCd7ZHScneGd3nJxwyWh2Jit1NGugoHQ269PITEf6n7VvIM2utR27VHKqY9arkRlPnbtKejRKr82riJLO5pVB9fzBPGWVilIw0XWwi3Ck4VXkbtY94O1sqotTMCXnnt9it7AprjJnD+aVWbVO53kfdgnmY9lkGK3Aq07NzyevWNQNjc74m91xcoJ3dsfJCd7ZHScnXLKaPVoBliUQ25F1mWyvkZ03csW12zkNsLavloXsumzT53NZj1djs2dN3n8qPTRUu5UCi/gLqJ/2MrmGsi/B/Ajbq6lu0uGS0m7WwhVOycyvnGgR3eQP0kh3Xeb0TgWbtdrY4eNVctPzk7N7bZHq1s8kz0NE8y1rEMKahb/ZHScneGd3nJyQ2dlF5OsiMi4iz6i/bRaRB0XkUOv/TWl1OI7TezrR7H8J4E8B/E/1tzsAPBRCuEtE7miVv7T6zVsCGRGCDaXlCjUKQyVNybo6rouX221/LId38lwCzz3oEE3WvjXyq2ffgiitMoV7ahs/69Xo2Iw5kLT0W2zvL9nsWZn+AMWZtC8z/YuOYiaE/RiU/zrNvdQ5PTZp+BLNkQjds4K6jkiTr30EayaZb/YQwv8BcJb+/CkA97Q+3wPg5tVtluM4q81yNfuOEMLx1ucTAHa021FEbheRgyJysDE9vczTOY6zUlY8QRdCCIgHfXr73SGEAyGEA8WhoXa7OY6zxizXzn5SRMZCCMdFZAzA+Go2aq0p1NMFFKcUYr3Fcc7aNs529WKd7e58MlvU6bOEzjO30wrBEsVEZ9m3tS5n33bW7HyNHA/A8ez6tTE
/ajexZi+xbzzZxlnDc9nURXELfM2M1tKsq9lvIbKr0/xLND+j5hoiv/l1wHLf7PcDuLX1+VYA961OcxzHWSs6Mb39FYCfArhaRI6KyG0A7gLwERE5BOCft8qO46xjMofxIYTPttl00yq3xXGcNeSS9Y1fKsYmHflepx/LWi6yMavxUaTreJVljqVnTa80O88dcM60yB+AyqzLtV94FENOTwJr9EKNr8turw1rI76ti33IWVdzjD8fbzQ/jUU5HoBh33jtx8CxA2zfz/Jfj+ca1p9O17i7rOPkBO/sjpMTcjOM1/BwK3LX5KEfDeciV1O1P6/e2ezjoTjHOlLjVN3RKqLkCsoSgE1cnGpam/nSVkoBgEYfuZLSkHdmjFfgUXVRO6sUxsv3u0TnTgtPFvpuCnSP2OTF34dOLc2Si49lmcSrCsXps9Y3l1hzHcdZLt7ZHScneGd3nJyQS83ORGmoebknWsWV95/fmojhaEkh9qytpM8X2I22yOaxJpnHmlV2gaV0z5PJZw4z5boL7EpK4Z6cXluv8lqnEAheAivLPZbR4clNMts1KUSYX1+sy/V8AF8zuwjH6cjT27neucSb7zhOp3hnd5yc4J3dcXKCa/ZFyHJxjZaH0ksO8fLEnPJa0l0y6xvUksIpS0MB2ctWsT+Bdi2N3Hp5ieGU8NjFytp+rZeCAhZZHprOxe3WcwuA1fSs2dnXgOvm+QHtAxDNp7C76/r2fl0y/mZ3nJzgnd1xcoJ3dsfJCa7Zl0EU+pii7UoUNsnhsjVKX1yaZGd5fWI6bcb8QLSstdLZUQor8iHna2qSrg5kS9cpnHg+gNvJ566O8Mmo3ep+R0sdczgtHcuxCfoeXup286WSs8t1nPzind1xcoJ3dsfJCa7ZVwGtQSMdSMZc9gOPUkup3VlzR0sEsxZmm3NK2iqOT2ebPcek8xLNoHkL7RPAcfhZcxzsh89zEzXVVvb353TPXI5s+jl+veX40h0nX3hnd5yc4J3dcXKCa/ZVJvajJ994TkNN++u4cV7aiLVwtJQUk5bLrdk+zhtYbFllu0NlmmO/dd3UDjpVg/3VOVU3z0WoU3P8ela6ZyfB3+yOkxO8sztOTvBhfK/hIa5KZxynQaYymeY4zVKaDIiGwxmhtyVOu8xprZQ8iVZh4aE2v2IyTG9mWP8WCzvtJv5md5yc4J3dcXLCijq7iHxMRF4UkcMicsdqNcpxnNVn2ZpdRIoA/gzARwAcBfCYiNwfQnhutRrnWKKQ1kK6gK0NcwXJx8j0RnC6bNbdvF2nseJllJho1VY2R6Y1Lb3ZTgorebNfD+BwCOGVEEIVwDcBfGp1muU4zmqzks6+G8ARVT7a+ptBRG4XkYMicrAxPb2C0zmOsxLWfIIuhHB3COFACOFAcWgo+wDHcdaEldjZjwHYq8p7Wn9ry/wbR08f/oMvvgZgK4DTKzj3WuHtWhrerqXRjXZd1m6DhLA8LwURKQF4CcBNWOjkjwH4NyGEZzs49mAI4cCyTryGeLuWhrdrafS6Xct+s4cQ6iLyHwH8AAvzqV/vpKM7jtMbVuQuG0L4HoDvrVJbHMdZQ3rlQXd3j86bhbdraXi7lkZP27Vsze44zqWF+8Y7Tk7wzu44OaGrnX09Bc6IyNdFZFxEnlF/2ywiD4rIodb/m7rcpr0i8rCIPCciz4rIF9ZDu1pt6BeRR0XkqVbbvtL6++Ui8kjrO/2WiFSy6lqj9hVF5AkReWC9tEtEXhWRn4vIkyJysPW3nn2XXevsKnDmNwDsB/BZEdnfrfMvwl8C+Bj97Q4AD4UQrgLwUKvcTeoAvhhC2A/gBgCfb92jXrcLAOYB3BhCeA+AawF8TERuAPCHAL4aQrgSwDkAt/WgbQDwBQDPq/J6adeHQwjXKvt6777LEEJX/gH4VQA/UOU7AdzZrfO3adM+AM+o8osAxlqfxwC82OP23YeFqML11q5BAD8D8CtY8AgrLfYdd7E9e7DQcW4E8AAWYuP
WQ7teBbCV/taz77Kbw/iOAmd6zI4QwvHW5xMAdvSqISKyD8B7ATyyXtrVGio/CWAcwIMAXgYwEUJ4M+C1V9/pHwP4PSS5eresk3YFAD8UkcdF5PbW33r2XXoOujaEEIJItIBSVxCRYQB/C+C3QwgXdH64XrYrhNAAcK2IjAL4DoB39qIdGhH5BIDxEMLjIvKhHjeH+WAI4ZiIbAfwoIi8oDd2+7vs5pt9yYEzPeCkiIwBQOv/8W43QETKWOjo3wghfHu9tEsTQpgA8DAWhsejrTgJoDff6QcAfFJEXsVCToUbAfzJOmgXQgjHWv+PY+HH8Xr08LvsZmd/DMBVrVnSCoDPALi/i+fvhPsB3Nr6fCsWNHPXkIVX+NcAPB9C+KP10q5W27a13ugQkQEszCU8j4VO/+letS2EcGcIYU8IYR8Wnql/DCF8rtftEpEhEdnw5mcAHwXwDHr5XXZ5wuLjWIiUexnAH3R7woTa8lcAjgOoYUHT3YYFrfcQgEMA/jeAzV1u0wexoPOeBvBk69/He92uVtveDeCJVtueAfCfW39/O4BHARwG8NcA+nr4nX4IwAProV2t8z/V+vfsm897L79Ld5d1nJzgHnSOkxO8sztOTvDO7jg5wTu74+QE7+yOkxO8sztOTvDO7jg54f8DAvK7yjKagDoAAAAASUVORK5CYII=\n",
+      "text/plain": [
+       "<Figure size 432x288 with 1 Axes>"
+      ]
+     },
+     "metadata": {
+      "needs_background": "light"
+     },
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "data = pipeline.get_data('simplex010')\n",
+    "plt.imshow(data[-1], origin='lower')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's also plot the measured separation, position angle, and contrast as a function of the number of principal components that have been subtracted."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 31,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "Text(0, 0.5, 'Contrast (mag)')"
+      ]
+     },
+     "execution_count": 31,
+     "metadata": {},
+     "output_type": "execute_result"
+    },
+    {
+     "data": {
+      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAgIAAAHoCAYAAAA7coe1AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/Il7ecAAAACXBIWXMAAAsTAAALEwEAmpwYAABMeElEQVR4nO3deXhkZZn38e+vu0UIi2wNsiXVCCiIssUWZJFFfBF9YWScEYwKbhkUBIZxRpjM6ygaBQVUBkYMizBSgAzLiIAsgoKOCKShhWZvmlToBu1mFYgsTd/vH+cEqqsryUlXnVRS9ftcV1059ZxTz7mr0K67nvOc+1FEYGZmZq1pWqMDMDMzs8ZxImBmZtbCnAiYmZm1MCcCZmZmLcyJgJmZWQtzImBmZtbCnAiYmZm1MCcCZmZmLWzGWAdIWhs4CHg/UABWA5YAdwK/jIjf5xifmZmZ5WjEEQFJG0s6G3gC6AFWAfqB64ESSWJwg6T7JH18IoI1MzOz+hptRGAucD7QGRH3VjtA0mrA3wDHStosIk6ue4Q5WH/99aNQKDQ6DDMzswkxZ86cJyNiZrV9GmmtAUkzI2JJ1pOM9/hG6uzsjP7+/kaHYWZmNiEkzYmIzmr7Rrw0MN4v9amSBJiZmdkbMt01IKlX0uFV2g+X9M36h2VmZlabYrFIoVBg2rRpFAoFisVio0OalLLePvgp4K4q7XOAT9cvHDMzs9oVi0W6u7splUpEBKVSie7ubicDVWRNBDYguWWw0lPAhvULx8zMrHY9PT0MDQ0t1zY0NERPT0+DIpq8siYCg8DuVdr3ABbWLxwzM7PaDQ4Ojqu9lY1ZUCj1Y+D7klYBbkrb9gG+A5yUR2BmZmYrq729nVKpVLXdlpdpRCAiTiFJBk4DHkofPwTOiojv5heemZnZ+PX29tLW1rZcW1tbG729vQ2KaPLKvNZARBwPrA/snD5mRsRx4zmZpP0kPShpvqQVXivpzZJ+lu6/TVKhbN+7Jd0q6V5J90haNW3fKX0+X9JpkjSemMzMrPl0dXXR19dHR0cHkujo6KCvr4+urq5GhzbpjFhQqOrB0vrA24C5EfHyuE4kTScZSdiXZF7BHcAhEXFf2TFfAt4dEYdLOhj4aER8XNIMkrUNPhURf5S0HvBsRLwm6XbgKOA24BrgtIj45WixuKCQmZm1kpUqKFTRwZqS/htYDPwe2CRtP1PS1zPGMRuYHxELIuIV4GLgwIpjDiQpawxwKbBP+gv/g8DdEfFHgIh4Kk0CNgLWiog/RJLR/BdJyWMzMzPLIOulgZOAjYEdgb+WtV8FfDRjH5sAj5U9X5i2VT0mIpYCzwHrAVsBIek6SXdK+pey48vvWqjWJwCSuiX1S+pfssRFEM3MzCD7XQMHkAzTz5VUfi3hfmDz+oe1ghnAbsB7gCHgRklzSBKFTCKiD+iD5NJAHkGamZlNNVlHBNYhKR5UaU3gtYx9LAI2K3u+adpW9Zh0XsBb0vMuBG6JiCcjYohkLsCO6fGbjtGnmZmZjSBrInAHyajAsOFf1P9AMmcgax9bSpqV1iM4GLiy4pgrgUPT7Y8BN6XX/q8D3iWpLU0Q3g/cFxFPAH+RtHM6l+DTwM8zxmNmZtbysl4a+FfgOknvTF9zbLo9m6S64JgiYqmkI0m+1KcD50bEvZJOAPoj4krgHOCnkuYDT5MkC0TEM5JOJUkmArgmIq5Ou/4ScB6wGvDL9GFmZmYZZL59UNK7gK8AO5GMJNwJnBQR9+QXXj58+6CZmU1GxWKRnp4eBgcHaW9vp7e3ty61D0a7fTDriADpF/6hYx5oZmZm4za8YuLwYknDKyYCuRZCylpHYBtJby97vq+kCyQdnxYKMjMzsxo0asXErJMFzwV2AJC0GcmEvHWBI4Bv5ROamZlZ62jUiolZE4F3kMwJgGQ2/20RsT/wKeCQPAIzMzNrJSOtjJj3iolZE4HpwCvp9j4k9/EDPAJ
sWO+gzMzMWk2jVkzMmgjMA74oaXeSRODatH0T4Mk8AjMzM2sljVoxMdPtg5L2AP6HpNLf+RHx2bT9O8BWEfG3eQZZb7590MzMWknNtw9GxC2SZpKs9PdM2a4fk9T+NzMzsykoUyIg6a3AjIhYWLFrKaC6R2VmZmYTIuscgQuAD1Vp/z/AT+sXjpmZmU2krIlAJ3BLlfbfpvvMzMxsCsqaCMwA3lylfdUR2s3MzGwKyJoI3AZ8sUr7ESQrApqZmdkUlHXRoR7gJknvBm5K2/YmKTv8gTwCMzMzs/xlGhGIiD8AOwOPAgelj0eBXSLi9/mFZ2ZmZnkac0RA0ptI7hr414j4ZP4hmZmZ2UQZc0QgIl4FPgiMXYLQzMzMppSskwUvJ7kcYGZmZk0k62TBQeDf0kWH+oEXy3dGxKn1DszMzMzylzUROAx4Bnh3+igXgBMBMzOzKSjrXQOzRnlsnvVkkvaT9KCk+ZKOq7L/zZJ+lu6/TVIhbS9I+qukuenjzLLXHCLpHkl3S7pW0vpZ4zEzM2t1WecI1EzSdOAMkjULtgEOkbRNxWGfA56JiC2A7wMnle17JCK2Tx+Hp33OAH4I7BUR7wbuBo7M+a2YmZk1jcyJgKStJP2rpDMlnVv+yNjFbGB+RCyIiFeAi4EDK445EDg/3b4U2EfSaKsbKn2snh63FvB41vdkZtYIxWKRQqHAtGnTKBQKFIvFRodkLSzrMsQfBi4D7gJ2Iikr/DaSdQZ+m/FcmwCPlT1fCLx3pGMiYqmk54D10n2zJN0F/AX4t4j4bUS8KumLwD0kExgfJil7XO09dAPdAO3t7RlDNjOrr2KxSHd3N0NDQwCUSiW6u7sB6OrqamRo1qKyjgicAHwjInYBXgY+BRSAXwG/ySWy5T0BtEfEDsCxwIWS1kqLHX2RpNTxxiSXBo6v1kFE9EVEZ0R0zpw5cwJCNjNbUU9Pz+tJwLChoSF6enoaFJG1uqyJwNuBn6XbrwJtEfESSYJwTMY+FgGblT3fNG2rekx6/f8twFMR8XJEPAUQEXOAR4CtgO3TtkciIoBLgPdljMfMbMINDg6Oq90sb1kTgedJlhyG5Nf5Fun2DGCdjH3cAWwpaZakVYCDgSsrjrkSODTd/hhwU0SEpJnpZEMkbQ5sCSwgSRy2kTT8E39f4P6M8ZiZTbiRLk36kqU1yniWId4t3b4aOEXSvwM/AW7N0kFELCWZ0X8dyZf1JRFxr6QTJB2QHnYOsJ6k+SSXAIZvMdwDuFvSXJJJhIdHxNMR8TjwDeAWSXeTjBB8O+N7MjObcL29vbS1tS3X1tbWRm9vb4MislanZER9jIOSX+FrRMTdktqAU4BdgYeAYyNiSo1pdXZ2Rn9/f6PDMLMWVSwW6enpYXBwkPb2dnp7ez1R0HIlaU5EdFbdlyURaDZOBMzMrJWMlgiMeGlgjPv3az7ezMzMGm+0OQIPSPqkpDeP1oGkrSWdxRvX883MrMm4CFLzGq2gUDfwXeB0STeSrDr4OPASyZ0C25BMINwKOA04Pd9QzcysEVwEqbmNOUdA0vuAQ4DdgQ5gNeBJkiqD1wEXRMSz+YZZX54jYGaWXaFQoFQqrdDe0dHBwMDAxAdk4zbaHIExSwxHxO+B39c9KjMzmxJcBKm5Tdjqg2ZmNjW5CFJzcyJgZmajchGk5uZEwMzMRtXV1UVfXx8dHR1IoqOjg76+Pk8UbBIuKGRmZtbkVqqgkJmZmTW/Me8aqCRpbSoSiIh4ul4BmZmZ2cTJlAhI6gDOBPYEVinfBQQwve6RmZmZWe6yjgj8BFgb+BxJdcHWm1hgZmbWhLImArOBnSNiXp7BmJmZ2cTKOlnwUWDUxYfMzMxs6smaCBwNfEfSFnkGY2ZmZhMr66WBn5OMCDwo6WVgafnOiFir3oGZmZlZ/rImAkfmGoWZmZk1RKZEICLOzzsQMzMzm3iZKwtKerOkz0o6WdL3JB0maVwTCCXtJ+lBSfMlHTfCOX6W7r9
NUiFtL0j6q6S56ePMstesIqlP0kOSHpD0t+OJyczMrJVlLSi0DXAtsBZwT9r8BeAbkvaLiPsz9DEdOAPYF1gI3CHpyoi4r+ywzwHPRMQWkg4GTgI+nu57JCK2r9J1D7A4IraSNA1YN8t7MjMzs+wjAj8E7gLaI2L3iNgdaAf+CPwgYx+zgfkRsSAiXgEuBg6sOOZAYPgyxKXAPpI0Rr+fBb4DEBHLIuLJjPGYmZm1vKyJwK7Av0bEX4Yb0u0eYLeMfWwCPFb2fGHaVvWYiFgKPAesl+6bJekuSTdL2h1eX/cA4JuS7pT035I2rHZySd2S+iX1L1myJGPIZmZmzS1rIvASSYnhSm9J9+XtCZLRiB2AY4ELJa1FcmljU+D3EbEjcCtwcrUOIqIvIjojonPmzJkTELKZ5a1YLFIoFJg2bRqFQoFisdjokMymnKyJwC+AsyTtKml6+tgN+DFwZcY+FgGblT3fNG2reoykGSSJxlMR8XJEPAUQEXOAR4CtgKeAIeDy9PX/DeyYMR4zm8KKxSLd3d2USiUiglKpRHd3t5MBs3EaT2XBh4HfkowAvATcDDwEHJOxjzuALSXNkrQKcDArJhFXAoem2x8DboqIkDQznWyIpM2BLYEFEREkScqe6Wv2Ae7DzJpeT08PQ0NDy7UNDQ3R09PToIjMpqasdQSeBQ6UtCXwjrT5/oiYn/VEEbFU0pHAdSTLFp8bEfdKOgHoj4grgXOAn0qaDzxNkiwA7AGcIOlVYBlweEQ8ne77avqaHwBLgM9kjcnMpq7BwcFxtZtZdUp+VLeWzs7O6O/vb3QYZlaDQqFAqVRaob2jo4OBgYGJD8hsEpM0JyI6q+0bcURA0mnA8RHxYro9oog4qsYYzczGpbe3l+7u7uUuD7S1tdHb29vAqMymntEuDbwLeFPZtpnZpNHV1QUkcwUGBwdpb2+nt7f39XYzy8aXBszMzJrcaJcGMt01IOlrktqqtK8m6Wu1Bmhmjed78s1aU9bbB/8dWKNKe1u6z8ymMN+Tb9a6siYCAqpdQ9iB5DY/M5vCfE++WesatY6ApOdJEoAAFkgqTwamA6sCZ1Z7rZlNHb4n36x1jVVQ6EiS0YBzSRYYeq5s3yvAQETcmlNsZjZB2tvbq96T397e3oBozGwijXppICLOj4jzgL2AH6XPhx8XtXoS4MlV1ix6e3tpa1t+PrDvyTdrDVlLDN88vC3prcAqFftbbvxweHLV8HXV4clVgO9jtinH9+Sbta5MdQTSJX//A/h7KpIAgIiYXv/Q8lOPOgIub2pmZlNFzXUEgFOA7YC/IVl58BPAPwMLgY/XIcYpx5OrzMysGWRNBD4EfDkirgNeA+ZExKnAccA/5BXcZDbSJCpPrjIzs6kkayKwNjA8Dv4csF66fSvwvjrHNCV4cpWZmTWDrInAI8Dm6fb9wMGSBBxEixYU6urqoq+vj46ODiTR0dFBX1+fJ1eZmdmUknWy4D8Cr0XEaZL2Bq4iWZlwGnB0RJyeb5j15UWHzMyslYw2WTDr7YPfL9u+SdI7gE7g4Yi4pz5hmpmZ2UQbMxGQ9Cbgd8CnI+JBeL1ugKfHm5mZTXFjzhGIiFeBWVRfdMjMzMymsKyTBc8HvpBnIGZmZjbxMs0RAFYHuiTtC8wBXizfGRFHZelE0n7AD0lWLjw7Ik6s2P9m4L+AnYCngI9HxICkAsndCg+mh/4hIg6veO2VwOYRsW3G92RmZtbysiYCWwN3ptubV+zLdMlA0nTgDGBfkoqEd0i6MiLuKzvsc8AzEbGFpIOBk3ijcuEjEbH9CH0fBLyQJQ4zMzN7Q6ZLAxGx1yiPvTOeazYwPyIWRMQrwMXAgRXHHEhyGQLgUmCftF7BiCStARwLfCtjHGa588qUZjZVZJ0jUA+bAI+VPV+YtlU9JiKWsnwVw1mS7pJ0s6Tdy17zTZK1EIZGO7mkbkn9kvqXLFlSw9swG93wypSlUomIeH1lSicDZjYZZU4EJO0
...base64-encoded PNG data omitted (figure: separation, position angle, and contrast versus the number of principal components)...\n",
+      "text/plain": [
+       "<Figure size 576x576 with 3 Axes>"
+      ]
+     },
+     "metadata": {
+      "needs_background": "light"
+     },
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "fig = plt.figure(figsize=(8, 8))\n",
+    "ax1 = fig.add_subplot(3, 1, 1)\n",
+    "ax2 = fig.add_subplot(3, 1, 2)\n",
+    "ax3 = fig.add_subplot(3, 1, 3)\n",
+    "\n",
+    "for i in range(1, 11):\n",
+    "    data = pipeline.get_data(f'fluxpos{i:03d}')\n",
+    "    ax1.scatter(i, data[-1, 2], color='black')\n",
+    "    ax2.scatter(i, data[-1, 3], color='black')\n",
+    "    ax3.scatter(i, data[-1, 4], color='black')\n",
+    "\n",
+    "ax3.set_xlabel('Principal components', fontsize=14)\n",
+    "ax1.set_ylabel('Separation (arcsec)', fontsize=14)\n",
+    "ax2.set_ylabel('Position angle (deg)', fontsize=14)\n",
+    "ax3.set_ylabel('Contrast (mag)', fontsize=14)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Detection limits"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a final analysis, we will estimate detection limits from the data. To do so, we will first use the [FakePlanetModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.processing.html#pynpoint.processing.fluxposition.FakePlanetModule) to remove the flux of the companion from the data, since it would otherwise bias the result. We use the PSF template that was stored with the tag *psf* and adopt the separation and position angle that were determined with the `SimplexMinimizationModule`. We need to apply a correction of -133 degrees, which was used previously for `extra_rot`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 32,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "----------------\n",
+      "FakePlanetModule\n",
+      "----------------\n",
+      "\n",
+      "Module name: fake\n",
+      "Input ports: centered (70, 57, 57), psf (70, 57, 57)\n",
+      "Input parameters:\n",
+      "   - Magnitude = 6.10\n",
+      "   - PSF scaling = -1.0\n",
+      "   - Separation (arcsec) = 0.06\n",
+      "   - Position angle (deg) = 0.06\n",
+      "Injecting artificial planets... [DONE]                      \n",
+      "Output port: removed (70, 57, 57)\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = FakePlanetModule(name_in='fake',\n",
+    "                          image_in_tag='centered',\n",
+    "                          psf_in_tag='psf',\n",
+    "                          image_out_tag='removed',\n",
+    "                          position=(0.061, 97.3-133.),\n",
+    "                          magnitude=6.1,\n",
+    "                          psf_scaling=-1.,\n",
+    "                          interpolation='spline')\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('fake')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now that the data only contains the flux of the central star, we use the [ContrastCurveModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.processing.html#pynpoint.processing.limits.ContrastCurveModule) to calculate the detection limits. We will calculate the brightness limits by setting the false positive fraction (FPF) to $2.87 \\times 10^{-7}$, which corresponds to $5\\sigma$ in the limit of Gaussian noise. At small angular separations, the detection limits are affected by small-sample statistics (see [Mawet et al. 2014](https://ui.adsabs.harvard.edu/abs/2014ApJ...792...97M/abstract)), so this FPF only corresponds to a $5\\sigma$ detection at large separations from the star. In this example, we will subtract 10 principal components and use the median-collapsed residuals."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 33,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "-------------------\n",
+      "ContrastCurveModule\n",
+      "-------------------\n",
+      "\n",
+      "Module name: limits\n",
+      "Input ports: removed (70, 57, 57), psf (70, 57, 57)\n",
+      "                                                      \n",
+      "Calculating detection limits... [DONE]\n",
+      "Output port: limits (4, 4)\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = ContrastCurveModule(name_in='limits',\n",
+    "                             image_in_tag='removed',\n",
+    "                             psf_in_tag='psf',\n",
+    "                             contrast_out_tag='limits',\n",
+    "                             separation=(0.05, 5., 0.01),\n",
+    "                             angle=(0., 360., 60.),\n",
+    "                             threshold=('fpf', 2.87e-7),\n",
+    "                             psf_scaling=1.,\n",
+    "                             aperture=0.02,\n",
+    "                             pca_number=10,\n",
+    "                             cent_size=0.02,\n",
+    "                             edge_size=2.,\n",
+    "                             extra_rot=-133.,\n",
+    "                             residuals='median',\n",
+    "                             snr_inject=100.)\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('limits')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Exporting datasets to FITS and plain text formats"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now that we have finished the data processing and analysis, we will export some of the results from the HDF5 database to other data formats. Since astronomical images are commonly viewed with tools such as [DS9](https://sites.google.com/cfa.harvard.edu/saoimageds9), we will use the [FitsWritingModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.readwrite.html?highlight=fitswr#pynpoint.readwrite.fitswriting.FitsWritingModule) to write the median-collapsed residuals of the PSF subtraction to a FITS file. The database tag is specified with the `data_tag` argument, and we will store the FITS file in the default output location of the `Pypeline`. The FITS file contains a 3D dataset in which the first dimension corresponds to an increasing number of subtracted principal components."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 34,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "-----------------\n",
+      "FitsWritingModule\n",
+      "-----------------\n",
+      "\n",
+      "Module name: write1\n",
+      "Input port: pca_median (30, 57, 57)\n",
+      "Writing FITS file... [DONE]\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = FitsWritingModule(name_in='write1',\n",
+    "                           data_tag='pca_median',\n",
+    "                           file_name='pca_median.fits',\n",
+    "                           output_dir=None,\n",
+    "                           data_range=None,\n",
+    "                           overwrite=True,\n",
+    "                           subset_size=None)\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('write1')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Similarly, we can export 1D and 2D datasets to a plain text file with the [TextWritingModule](https://pynpoint.readthedocs.io/en/latest/pynpoint.readwrite.html#pynpoint.readwrite.textwriting.TextWritingModule). Let's export the detection limits that were estimated with the `ContrastCurveModule`. We again specify the database tag and also add a header as the first line of the text file."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 35,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "-----------------\n",
+      "TextWritingModule\n",
+      "-----------------\n",
+      "\n",
+      "Module name: write2\n",
+      "Input port: limits (4, 4)\n",
+      "Writing text file... [DONE]\n"
+     ]
+    }
+   ],
+   "source": [
+    "module = TextWritingModule(name_in='write2',\n",
+    "                           data_tag='limits',\n",
+    "                           file_name='limits.dat',\n",
+    "                           output_dir=None,\n",
+    "                           header='Separation (arcsec) - Contrast (mag) - Variance (mag) - FPF')\n",
+    "\n",
+    "pipeline.add_module(module)\n",
+    "pipeline.run_module('write2')"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.7.9"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
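The FPF threshold used by `ContrastCurveModule` in the notebook above, `threshold=('fpf', 2.87e-7)`, is simply the one-sided Gaussian tail probability at $5\sigma$. As a sanity check, this can be reproduced with the standard library alone; the snippet below is a standalone sketch of that relation, not part of the PynPoint API:

```python
import math

def gaussian_fpf(n_sigma: float) -> float:
    """One-sided false positive fraction for an n-sigma threshold,
    assuming Gaussian noise: FPF = 1 - Phi(n_sigma) = 0.5 * erfc(n_sigma / sqrt(2))."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

fpf = gaussian_fpf(5.0)
print(f'{fpf:.2e}')  # 2.87e-07, the value passed as threshold=('fpf', 2.87e-7)
```

Note that this equivalence between FPF and $\sigma$ only holds in the Gaussian limit; at small separations the small-sample correction of Mawet et al. (2014) replaces the Gaussian tail with a Student's t tail.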
diff --git a/pynpoint/__init__.py b/pynpoint/__init__.py
index e40b63a..3356daf 100644
--- a/pynpoint/__init__.py
+++ b/pynpoint/__init__.py
@@ -101,7 +101,7 @@ warnings.simplefilter('always', DeprecationWarning)
 
 __author__ = 'Tomas Stolker & Markus Bonse'
 __license__ = 'GPLv3'
-__version__ = '0.8.3'
+__version__ = '0.10.0'
 __maintainer__ = 'Tomas Stolker'
-__email__ = 'tomas.stolker@phys.ethz.ch'
+__email__ = 'stolker@strw.leidenuniv.nl'
 __status__ = 'Development'
diff --git a/pynpoint/core/dataio.py b/pynpoint/core/dataio.py
index 26692d3..8e4d6f7 100644
--- a/pynpoint/core/dataio.py
+++ b/pynpoint/core/dataio.py
@@ -4,14 +4,16 @@ Modules for accessing data and attributes in the central database.
 
 import os
 import warnings
+
 from abc import ABCMeta, abstractmethod
 from typing import Dict, List, Optional, Tuple, Union
 
 import h5py
 import numpy as np
+
 from typeguard import typechecked
 
-from pynpoint.util.types import NonStaticAttribute, StaticAttribute
+from pynpoint.util.type_aliases import NonStaticAttribute, StaticAttribute
 
 
 class DataStorage:
@@ -226,7 +228,7 @@ class ConfigPort(Port):
             None
         """
 
-        super(ConfigPort, self).__init__(tag, data_storage_in)
+        super().__init__(tag, data_storage_in)
 
         if tag != 'config':
             raise ValueError('The tag name of the central configuration should be \'config\'.')
@@ -373,7 +375,7 @@ class InputPort(Port):
             None
         """
 
-        super(InputPort, self).__init__(tag, data_storage_in)
+        super().__init__(tag, data_storage_in)
 
         if tag == 'config':
             raise ValueError('The tag name \'config\' is reserved for the central configuration '
@@ -675,7 +677,7 @@ class OutputPort(Port):
             None
         """
 
-        super(OutputPort, self).__init__(tag, data_storage_in)
+        super().__init__(tag, data_storage_in)
 
         self.m_activate = activate_init
 
@@ -858,15 +860,15 @@ class OutputPort(Port):
                     data_dim: Optional[int] = None,
                     force: bool = False) -> None:
         """
-        Internal function for appending data to a dataset or appending non-static attribute
-        information. See :func:`~pynpoint.core.dataio.OutputPort.append` for more information.
+        Internal function for appending data to a dataset or appending non-static attributes.
+        See :func:`~pynpoint.core.dataio.OutputPort.append` for more information.
 
         Parameters
         ----------
         tag : str
-            Database tag of the data that will be modified.
+            Database tag where the data will be stored.
         data : np.ndarray
-            The data that will be stored and replace any old data.
+            The data that will be appended.
         data_dim : int
             Number of dimension of the data.
         force : bool
@@ -900,8 +902,10 @@ class OutputPort(Port):
 
             if data_dim == 2:
                 data = data[np.newaxis, :]
+
             elif data_dim == 3:
                 data = data[np.newaxis, :, :]
+
             elif data_dim == 4:
                 data = data[:, np.newaxis, :, :]
 
@@ -1069,31 +1073,30 @@ class OutputPort(Port):
                data_dim: Optional[int] = None,
                force: bool = False) -> None:
         """
-        Appends data to an existing dataset with the tag of the Port along the first
-        dimension. If no data exists with the tag of the Port a new data set is created.
-        For more information about how the dimensions are organized see documentation of
-        the function :func:`~pynpoint.core.dataio.OutputPort.set_all`. Note it is not possible to
-        append data with a different shape or data type to the existing dataset.
+        Appends data to an existing dataset along the first dimension. If no data exists for the
+        :class:`~pynpoint.core.dataio.OutputPort`, then a new data set is created. For more
+        information about how the dimensions are organized, see the documentation of
+        :func:`~pynpoint.core.dataio.OutputPort.set_all`. Note it is not possible to append data
+        with a different shape or data type to an existing dataset.
 
-        **Example:** An internal data set is 3D (storing a stack of 2D images) with shape
-        (233, 300, 300) which mean it contains 233 images with a resolution of 300 x 300 pixel.
-        Thus it is only possible to extend along the first dimension by appending new images with
-        a size of (300, 300) or by appending a stack of images (:, 300, 300). Everything else will
-        raise exceptions.
+        **Example:** An internal data set is 3D (storing a stack of 2D images) with a shape of
+        ``(233, 300, 300)``, that is, it contains 233 images with a resolution of 300 by 300
+        pixels. Thus it is only possible to extend along the first dimension by appending new
+        images with a shape of ``(300, 300)`` or by appending a stack of images with a shape of
+        ``(:, 300, 300)``.
 
-        It is possible to force the function to overwrite the existing data set if and only if the
-        shape or type of the input data does not match the existing data. **Warning**: This can
-        delete the existing data.
+        It is possible to force the function to overwrite an existing data set if the shape or
+        type of the input data does not match the existing data.
 
         Parameters
         ----------
         data : np.ndarray
-            The data which will be appended.
+            The data that will be appended.
         data_dim : int
             Number of data dimensions used if a new data set is created. The dimension of the
-            *data* is used if set to None.
+            ``data`` is used if set to None.
         force : bool
-            The existing data will be overwritten if shape or type does not match if set to True.
+            If set to True, existing data will be overwritten when the shape or type differs.
 
         Returns
         -------
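The append rule documented above can be sketched with plain NumPy. This is an illustrative stand-in for the behavior, not the actual `OutputPort` implementation, which operates on the HDF5 database:

```python
import numpy as np

# Hypothetical stand-in for a 3D dataset in the database:
# 233 images with a resolution of 300 by 300 pixels.
stack = np.zeros((233, 300, 300))

# Appending a single image with shape (300, 300): add a new axis
# first, then concatenate along the first dimension.
new_image = np.ones((300, 300))
stack = np.concatenate([stack, new_image[np.newaxis, :, :]], axis=0)

# Appending a stack of images with shape (n, 300, 300) works directly.
new_stack = np.ones((10, 300, 300))
stack = np.concatenate([stack, new_stack], axis=0)

print(stack.shape)  # (244, 300, 300)
```

Appending an array whose trailing dimensions are not `(300, 300)` raises a `ValueError` in NumPy, mirroring the shape check described in the docstring.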
@@ -1174,7 +1177,8 @@ class OutputPort(Port):
         if self._check_status_and_activate():
 
             if self._m_tag not in self._m_data_storage.m_data_bank:
-                warnings.warn('Can not store attribute if data tag does not exist.')
+                warnings.warn(f'Can not store the attribute \'{name}\' because the dataset '
+                              f'\'{self._m_tag}\' does not exist.')
 
             else:
                 if static:
diff --git a/pynpoint/core/processing.py b/pynpoint/core/processing.py
index 8fafa33..0768239 100644
--- a/pynpoint/core/processing.py
+++ b/pynpoint/core/processing.py
@@ -6,10 +6,12 @@ import math
 import os
 import time
 import warnings
+
 from abc import ABCMeta, abstractmethod
 from typing import Callable, List, Optional
 
 import numpy as np
+
 from typeguard import typechecked
 
 from pynpoint.core.dataio import ConfigPort, DataStorage, InputPort, OutputPort
@@ -23,24 +25,25 @@ class PypelineModule(metaclass=ABCMeta):
     """
     Abstract interface for the PypelineModule:
 
-        * Reading Module (:class:`pynpoint.core.processing.ReadingModule`)
-        * Writing Module (:class:`pynpoint.core.processing.WritingModule`)
-        * Processing Module (:class:`pynpoint.core.processing.ProcessingModule`)
+        * Reading module (:class:`pynpoint.core.processing.ReadingModule`)
+        * Writing module (:class:`pynpoint.core.processing.WritingModule`)
+        * Processing module (:class:`pynpoint.core.processing.ProcessingModule`)
 
-    Each PypelineModule has a name as a unique identifier in the Pypeline and requires the
-    *connect_database* and *run* methods.
+    Each :class:`~pynpoint.core.processing.PypelineModule` has a name as a unique identifier in the
+    :class:`~pynpoint.core.pypeline.Pypeline` and requires the ``connect_database`` and ``run``
+    methods.
     """
 
     @typechecked
     def __init__(self,
                  name_in: str) -> None:
         """
-        Abstract constructor of a PypelineModule. Needs a name as identifier.
+        Abstract constructor of a :class:`~pynpoint.core.processing.PypelineModule`.
 
         Parameters
         ----------
         name_in : str
-            The name of the PypelineModule.
+            The name of the :class:`~pynpoint.core.processing.PypelineModule`.
 
         Returns
         -------
@@ -48,8 +51,6 @@ class PypelineModule(metaclass=ABCMeta):
             None
         """
 
-        assert isinstance(name_in, str), 'Name of the PypelineModule needs to be a string.'
-
         self._m_name = name_in
         self._m_data_base = None
         self._m_config_port = ConfigPort('config')
@@ -58,13 +59,13 @@ class PypelineModule(metaclass=ABCMeta):
     @typechecked
     def name(self) -> str:
         """
-        Returns the name of the PypelineModule. This property makes sure that the internal module
-        name can not be changed.
+        Returns the name of the :class:`~pynpoint.core.processing.PypelineModule`. This property
+        makes sure that the internal module name can not be changed.
 
         Returns
         -------
         str
-            The name of the PypelineModule.
+            The name of the :class:`~pynpoint.core.processing.PypelineModule`.
         """
 
         return self._m_name
@@ -74,8 +75,9 @@ class PypelineModule(metaclass=ABCMeta):
     def connect_database(self,
                          data_base_in: DataStorage) -> None:
         """
-        Abstract interface for the function *connect_database* which is needed to connect the Ports
-        of a PypelineModule with the DataStorage.
+        Abstract interface for the function ``connect_database`` which is needed to connect a
+        :class:`~pynpoint.core.dataio.Port` of a :class:`~pynpoint.core.processing.PypelineModule`
+        with the :class:`~pynpoint.core.dataio.DataStorage`.
 
         Parameters
         ----------
@@ -87,8 +89,8 @@ class PypelineModule(metaclass=ABCMeta):
     @typechecked
     def run(self) -> None:
         """
-        Abstract interface for the run method of a PypelineModule which inheres the actual
-        algorithm behind the module.
+        Abstract interface for the run method of :class:`~pynpoint.core.processing.PypelineModule`
+        which contains the actual algorithm behind the module.
         """
 
 
@@ -124,7 +126,7 @@ class ReadingModule(PypelineModule, metaclass=ABCMeta):
             None
         """
 
-        super(ReadingModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         assert (os.path.isdir(str(input_dir)) or input_dir is None), 'Input directory for ' \
             'reading module does not exist - input requested: %s.' % input_dir
@@ -205,7 +207,7 @@ class ReadingModule(PypelineModule, metaclass=ABCMeta):
 
         Returns
         -------
-        list(str, )
+        list(str)
             List of output tags.
         """
 
@@ -254,7 +256,7 @@ class WritingModule(PypelineModule, metaclass=ABCMeta):
             None
         """
 
-        super(WritingModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         assert (os.path.isdir(str(output_dir)) or output_dir is None), 'Output directory for ' \
             'writing module does not exist - input requested: %s.' % output_dir
@@ -328,7 +330,7 @@ class WritingModule(PypelineModule, metaclass=ABCMeta):
 
         Returns
         -------
-        list(str, )
+        list(str)
             List of input tags.
         """
 
@@ -365,7 +367,7 @@ class ProcessingModule(PypelineModule, metaclass=ABCMeta):
              The name of the ProcessingModule.
         """
 
-        super(ProcessingModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self._m_input_ports = {}
         self._m_output_ports = {}
@@ -408,27 +410,31 @@ class ProcessingModule(PypelineModule, metaclass=ABCMeta):
                         tag: str,
                         activation: bool = True) -> OutputPort:
         """
-        Function which creates an OutputPort for a ProcessingModule and appends it to the internal
-        OutputPort dictionary. This function should be used by classes inheriting from
-        ProcessingModule to make sure that only output ports with unique tags are added. The new
-        port can be used as: ::
+        Function which creates an :class:`~pynpoint.core.dataio.OutputPort` for a
+        :class:`~pynpoint.core.processing.ProcessingModule` and appends it to the internal
+        :class:`~pynpoint.core.dataio.OutputPort` dictionary. This function should be used by
+        classes inheriting from :class:`~pynpoint.core.processing.ProcessingModule` to make sure
+        that only output ports with unique tags are added. The new port can be used as:
+
+        .. code-block:: python
 
              port = self._m_output_ports[tag]
 
-        or by using the returned Port.
+        or by using the returned :class:`~pynpoint.core.dataio.Port`.
 
         Parameters
         ----------
         tag : str
             Tag of the new output port.
         activation : bool
-            Activation status of the Port after creation. Deactivated ports will not save their
-            results until they are activated.
+            Activation status of the :class:`~pynpoint.core.dataio.Port` after creation.
+            Deactivated ports will not save their results until they are activated.
 
         Returns
         -------
         pynpoint.core.dataio.OutputPort
-            The new OutputPort for the ProcessingModule.
+            The new :class:`~pynpoint.core.dataio.OutputPort` for the
+            :class:`~pynpoint.core.processing.ProcessingModule`.
         """
 
         port = OutputPort(tag, activate_init=activation)
@@ -504,7 +510,7 @@ class ProcessingModule(PypelineModule, metaclass=ABCMeta):
 
         im_shape = image_in_port.get_shape()
 
-        size = apply_function(init_line, func, func_args).shape[0]
+        size = apply_function(init_line, 0, func, func_args).shape[0]
 
         image_out_port.set_all(data=np.zeros((size, im_shape[1], im_shape[2])),
                                data_dim=3,
@@ -585,9 +591,9 @@ class ProcessingModule(PypelineModule, metaclass=ABCMeta):
                 args = update_arguments(i, nimages, func_args)
 
                 if args is None:
-                    result.append(func(images[i, ]))
+                    result.append(func(images[i, ], i))
                 else:
-                    result.append(func(images[i, ], *args))
+                    result.append(func(images[i, ], i, *args))
 
             image_out_port.set_all(np.asarray(result), keep_attributes=True)
 
@@ -601,9 +607,9 @@ class ProcessingModule(PypelineModule, metaclass=ABCMeta):
                 args = update_arguments(i, nimages, func_args)
 
                 if args is None:
-                    result = func(image_in_port[i, ])
+                    result = func(image_in_port[i, ], i)
                 else:
-                    result = func(image_in_port[i, ], *args)
+                    result = func(image_in_port[i, ], i, *args)
 
                 if result.ndim == 1:
                     image_out_port.append(result, data_dim=2)
@@ -614,9 +620,9 @@ class ProcessingModule(PypelineModule, metaclass=ABCMeta):
             # process images in parallel in stacks of MEMORY/CPU images
             print(message, end='')
 
-            result = apply_function(tmp_data=image_in_port[0, :, :],
-                                    func=func,
-                                    func_args=update_arguments(0, nimages, func_args))
+            args = update_arguments(0, nimages, func_args)
+
+            result = apply_function(image_in_port[0, :, :], 0, func, args)
 
             result_shape = result.shape
 
@@ -651,7 +657,7 @@ class ProcessingModule(PypelineModule, metaclass=ABCMeta):
 
         Returns
         -------
-        list(str, )
+        list(str)
             List of input tags.
         """
 
@@ -664,7 +670,7 @@ class ProcessingModule(PypelineModule, metaclass=ABCMeta):
 
         Returns
         -------
-        list(str, )
+        list(str)
             List of output tags.
         """
 
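The `apply_function` hunks above thread the image index into the applied function, so it is now called as `func(image, index, *args)` instead of `func(image, *args)`. A minimal sketch of that calling convention (`scale_image` is a made-up example, not a PynPoint function):

```python
import numpy as np

def scale_image(image: np.ndarray, index: int, factor: float) -> np.ndarray:
    # The second positional argument is the index of the image in the
    # stack, as passed by the updated apply_function interface.
    return image * factor * (index + 1)

images = np.ones((3, 4, 4))

# Apply the function to each image, passing the index explicitly.
result = np.stack([scale_image(images[i], i, 2.0)
                   for i in range(images.shape[0])])

print(result[0, 0, 0], result[2, 0, 0])  # 2.0 6.0
```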
diff --git a/pynpoint/core/pypeline.py b/pynpoint/core/pypeline.py
index d566e06..8a7560b 100644
--- a/pynpoint/core/pypeline.py
+++ b/pynpoint/core/pypeline.py
@@ -9,29 +9,31 @@ import multiprocessing
 import os
 import urllib.request
 import warnings
-from typing import Any, List, Optional, Tuple, Union
+
+from typing import Any, Dict, List, Optional, Tuple, Union
 from urllib.error import URLError
 
 import h5py
 import numpy as np
+
 from typeguard import typechecked
 
 import pynpoint
+
 from pynpoint.core.attributes import get_attributes
 from pynpoint.core.dataio import DataStorage
-from pynpoint.core.processing import ProcessingModule, PypelineModule, \
-    ReadingModule, WritingModule
+from pynpoint.core.processing import ProcessingModule, PypelineModule, ReadingModule, WritingModule
 from pynpoint.util.module import input_info, module_info, output_info
-from pynpoint.util.types import NonStaticAttribute, StaticAttribute
+from pynpoint.util.type_aliases import NonStaticAttribute, StaticAttribute
 
 
 class Pypeline:
     """
-    A Pypeline instance can be used to manage various processing steps. It inheres an internal
-    dictionary of Pypeline steps (modules) and their names. A Pypeline has a central DataStorage on
-    the hard drive which can be accessed by various modules. The order of the modules depends on
-    the order the steps have been added to the pypeline. It is possible to run all modules attached
-    to the Pypeline at once or run a single modules by name.
+    The :class:`~pynpoint.core.pypeline.Pypeline` class manages the pipeline modules. It contains an
+    internal dictionary of pipeline modules and has a :class:`~pynpoint.core.dataio.DataStorage`
+    which is accessed by the various modules. The order in which the pipeline modules are executed
+    depends on the order they have been added to the :class:`~pynpoint.core.pypeline.Pypeline`. It
+    is possible to run all modules at once or run a single module by name.
     """
 
     @typechecked
@@ -40,23 +42,23 @@ class Pypeline:
                  input_place_in: Optional[str] = None,
                  output_place_in: Optional[str] = None) -> None:
         """
-        Constructor of Pypeline.
-
         Parameters
         ----------
-        working_place_in : str
-            Working location of the Pypeline which needs to be a folder on the hard drive. The
-            given folder will be used to save the central PynPoint database (an HDF5 file) in
-            which all the intermediate processing steps are saved. Note that the HDF5 file can
-            become very large depending on the size and number of input images.
-        input_place_in : str
-            Default input directory of the Pypeline. All ReadingModules added to the Pypeline
-            use this directory to look for input data. It is possible to specify a different
-            location for the ReadingModules using their constructors.
-        output_place_in : str
-            Default result directory used to save the output of all WritingModules added to the
-            Pypeline. It is possible to specify a different locations for the WritingModules by
-            using their constructors.
+        working_place_in : str, None
+            Working location where the central HDF5 database and the configuration file will be
+            stored. Sufficient space is required in the working folder since each pipeline module
+            stores a dataset in the HDF5 database. The current working folder of Python is used as
+            working folder if the argument is set to None.
+        input_place_in : str, None
+            Default input folder where a :class:`~pynpoint.core.processing.ReadingModule` that is
+            added to the :class:`~pynpoint.core.pypeline.Pypeline` will look for input data. The
+            current working folder of Python is used as input folder if the argument is set to
+            None.
+        output_place_in : str, None
+            Default output folder where a :class:`~pynpoint.core.processing.WritingModule` that is
+            added to the :class:`~pynpoint.core.pypeline.Pypeline` will store output data. The
+            current working folder of Python is used as output folder if the argument is set to
+            None.
 
         Returns
         -------
@@ -84,13 +86,30 @@ class Pypeline:
             print('Please consider using the \'Watch\' button on the Github page:')
             print('https://github.com/PynPoint/PynPoint\n')
 
-        self._m_working_place = working_place_in
-        self._m_input_place = input_place_in
-        self._m_output_place = output_place_in
+        if working_place_in is None:
+            self._m_working_place = os.getcwd()
+        else:
+            self._m_working_place = working_place_in
+
+        if input_place_in is None:
+            self._m_input_place = os.getcwd()
+        else:
+            self._m_input_place = input_place_in
+
+        if output_place_in is None:
+            self._m_output_place = os.getcwd()
+        else:
+            self._m_output_place = output_place_in
+
+        print(f'Working place: {self._m_working_place}')
+        print(f'Input place: {self._m_input_place}')
+        print(f'Output place: {self._m_output_place}\n')
 
         self._m_modules = collections.OrderedDict()
 
-        self.m_data_storage = DataStorage(os.path.join(working_place_in, 'PynPoint_database.hdf5'))
+        hdf5_path = os.path.join(self._m_working_place, 'PynPoint_database.hdf5')
+        self.m_data_storage = DataStorage(hdf5_path)
+
         print(f'Database: {self.m_data_storage._m_location}')
 
         self._config_init()
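The constructor changes above make each place argument fall back to the current working folder of Python when it is set to None. The same pattern as an illustrative standalone helper (not part of PynPoint):

```python
import os
from typing import Optional

def resolve_place(place_in: Optional[str]) -> str:
    # Fall back to the current working folder of Python if no
    # folder was provided, as the updated constructor does.
    if place_in is None:
        return os.getcwd()

    return place_in

assert resolve_place(None) == os.getcwd()
assert resolve_place('/tmp') == '/tmp'
```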
@@ -100,15 +119,16 @@ class Pypeline:
                     key: str,
                     value: Any) -> None:
         """
-        This method is called every time a member / attribute of the Pypeline is changed. It checks
-        whether a chosen working / input / output directory exists.
+        Internal method which assigns a value to an object attribute. This method is called
+        whenever an attribute of the :class:`~pynpoint.core.pypeline.Pypeline` is changed and
+        checks if the chosen working, input, or output folder exists.
 
         Parameters
         ----------
         key : str
-            Member or attribute name.
+            Attribute name.
         value : str
-            New value for the given member or attribute.
+            Value for the attribute.
 
         Returns
         -------
@@ -116,33 +136,40 @@ class Pypeline:
             None
         """
 
-        if key in ['_m_working_place', '_m_input_place', '_m_output_place']:
-            assert (os.path.isdir(str(value))), f'Input directory for {key} does not exist - ' \
-                                                f'input requested: {value}.'
+        if key == '_m_working_place':
+            error_msg = f'The folder that was chosen for the working place does not exist: {value}.'
+            assert os.path.isdir(str(value)), error_msg
+
+        elif key == '_m_input_place':
+            error_msg = f'The folder that was chosen for the input place does not exist: {value}.'
+            assert os.path.isdir(str(value)), error_msg
 
-        super(Pypeline, self).__setattr__(key, value)
+        elif key == '_m_output_place':
+            error_msg = f'The folder that was chosen for the output place does not exist: {value}.'
+            assert os.path.isdir(str(value)), error_msg
+
+        super().__setattr__(key, value)
 
     @staticmethod
     @typechecked
     def _validate(module: Union[ReadingModule, WritingModule, ProcessingModule],
                   tags: List[str]) -> Tuple[bool, Optional[str]]:
         """
-        Internal function which is used for the validation of the pipeline. Validates a
-        single module.
+        Internal method to validate a :class:`~pynpoint.core.processing.PypelineModule`.
 
         Parameters
         ----------
-        module : ReadingModule, WritingModule, or ProcessingModule
-            The pipeline module.
-        tags : list(str, )
-            Tags in the database.
+        module : ReadingModule, WritingModule, ProcessingModule
+            Pipeline module that will be validated.
+        tags : list(str)
+            Tags that are present in the database.
 
         Returns
         -------
         bool
-            Module validation.
-        str
-            Module name.
+            Validation of the pipeline module.
+        str, None
+            Pipeline module name in case it is not valid. Returns None if the module was validated.
         """
 
         if isinstance(module, ReadingModule):
@@ -155,21 +182,19 @@ class Pypeline:
 
         elif isinstance(module, ProcessingModule):
             tags.extend(module.get_all_output_tags())
+
             for tag in module.get_all_input_tags():
                 if tag not in tags:
                     return False, module.name
 
-        else:
-            return False, None
-
         return True, None
 
     @typechecked
     def _config_init(self) -> None:
         """
-        Internal function which initializes the configuration file. It reads PynPoint_config.ini
-        in the working folder and creates this file with the default (ESO/NACO) settings in case
-        the file is not present.
+        Internal method to initialize the configuration file. The configuration parameters are read
+        from *PynPoint_config.ini* in the working folder. The file is created with default values
+        (ESO/NACO) if the file is not present.
 
         Returns
         -------
@@ -201,6 +226,7 @@ class Pypeline:
                          attributes: dict) -> dict:
 
             config = configparser.ConfigParser()
+
             with open(config_file) as cf_open:
                 config.read_file(cf_open)
 
@@ -282,15 +308,16 @@ class Pypeline:
     def add_module(self,
                    module: PypelineModule) -> None:
         """
-        Adds a Pypeline module to the internal Pypeline dictionary. The module is appended at the
+        Method for adding a :class:`~pynpoint.core.processing.PypelineModule` to the internal
+        dictionary of the :class:`~pynpoint.core.pypeline.Pypeline`. The module is appended at the
         end of this ordered dictionary. If the input module is a reading or writing module without
-        a specified input or output location then the Pypeline default location is used. Moreover,
-        the given module is connected to the Pypeline internal data storage.
+        a specified input or output location then the default location is used. The module is
+        connected to the internal data storage of the :class:`~pynpoint.core.pypeline.Pypeline`.
 
         Parameters
         ----------
-        module : ReadingModule, WritingModule, or ProcessingModule
-            Input pipeline module.
+        module : ReadingModule, WritingModule, ProcessingModule
+            Pipeline module that will be added to the :class:`~pynpoint.core.pypeline.Pypeline`.
 
         Returns
         -------
@@ -298,19 +325,21 @@ class Pypeline:
             None
         """
 
-        if isinstance(module, WritingModule):
-            if module.m_output_location is None:
-                module.m_output_location = self._m_output_place
-
         if isinstance(module, ReadingModule):
             if module.m_input_location is None:
                 module.m_input_location = self._m_input_place
 
+        if isinstance(module, WritingModule):
+            if module.m_output_location is None:
+                module.m_output_location = self._m_output_place
+
         module.connect_database(self.m_data_storage)
 
         if module.name in self._m_modules:
-            warnings.warn(f'Pipeline module names need to be unique. Overwriting module '
-                          f'\'{module.name}\'.')
+            warnings.warn(f'Names of pipeline modules that are added to the Pypeline need to '
+                          f'be unique. The current pipeline module, \'{module.name}\', '
+                          f'already exists in the Pypeline dictionary so the previous module '
+                          f'with the same name will be overwritten.')
 
         self._m_modules[module.name] = module
 
@@ -318,7 +347,9 @@ class Pypeline:
     def remove_module(self,
                       name: str) -> bool:
         """
-        Removes a Pypeline module from the internal dictionary.
+        Method to remove a :class:`~pynpoint.core.processing.PypelineModule` from the internal
+        dictionary with pipeline modules that are added to the
+        :class:`~pynpoint.core.pypeline.Pypeline`.
 
         Parameters
         ----------
@@ -328,15 +359,19 @@ class Pypeline:
         Returns
         -------
         bool
-            Confirmation of removal.
+            Confirmation of removing the :class:`~pynpoint.core.processing.PypelineModule`.
         """
 
         if name in self._m_modules:
             del self._m_modules[name]
+
             removed = True
 
         else:
-            warnings.warn(f'Pipeline module name \'{name}\' not found in the Pypeline dictionary.')
+            warnings.warn(f'Pipeline module \'{name}\' is not found in the Pypeline dictionary '
+                          f'so it could not be removed. The dictionary contains the following '
+                          f'modules: {list(self._m_modules.keys())}.')
+
             removed = False
 
         return removed
@@ -344,11 +379,12 @@ class Pypeline:
     @typechecked
     def get_module_names(self) -> List[str]:
         """
-        Function which returns a list of all module names.
+        Method to return a list with the names of all pipeline modules that are added to the
+        :class:`~pynpoint.core.pypeline.Pypeline`.
 
         Returns
         -------
-        list(str, )
+        list(str)
             Ordered list of all Pypeline modules.
         """
 
@@ -357,68 +393,78 @@ class Pypeline:
     @typechecked
     def validate_pipeline(self) -> Tuple[bool, Optional[str]]:
         """
-        Function which checks if all input ports of the Pypeline are pointing to previous output
-        ports.
+        Method to check if each :class:`~pynpoint.core.dataio.InputPort` is pointing to an
+        :class:`~pynpoint.core.dataio.OutputPort` of a previously added
+        :class:`~pynpoint.core.processing.PypelineModule`.
 
         Returns
         -------
         bool
-            Confirmation of pipeline validation.
-        str
-            Module name that is not valid.
+            Validation of the pipeline.
+        str, None
+            Name of the pipeline module that can not be validated. Returns None if all modules
+            were validated.
         """
 
         self.m_data_storage.open_connection()
 
+        # Create list with all datasets that are stored in the database
         data_tags = list(self.m_data_storage.m_data_bank.keys())
 
+        # Initiate the validation in case self._m_modules.values() is empty
+        validation = (True, None)
+
+        # Loop over all pipeline modules in the ordered dictionary
         for module in self._m_modules.values():
+            # Validate the pipeline module
             validation = self._validate(module, data_tags)
 
             if not validation[0]:
+                # Break the for loop if a module could not be validated
                 break
 
-        else:
-            validation = True, None
-
         return validation
 
     @typechecked
     def validate_pipeline_module(self,
-                                 name: str) -> Optional[Tuple[bool, Optional[str]]]:
+                                 name: str) -> Tuple[bool, Optional[str]]:
         """
-        Checks if the data exists for the module with label *name*.
+        Method to check if each :class:`~pynpoint.core.dataio.InputPort` of a
+        :class:`~pynpoint.core.processing.PypelineModule` with label ``name`` points to an
+        existing dataset in the database.
 
         Parameters
         ----------
         name : str
-            Name of the module that is checked.
+            Name of the pipeline module instance that will be validated.
 
         Returns
         -------
         bool
-            Confirmation of pipeline module validation.
-        str
-            Module name that is not valid.
+            Validation of the pipeline module.
+        str, None
+            Pipeline module name in case it is not valid. Returns None if the module was validated.
         """
 
         self.m_data_storage.open_connection()
 
-        existing_data_tags = list(self.m_data_storage.m_data_bank.keys())
+        # Create list with all datasets that are stored in the database
+        data_tags = list(self.m_data_storage.m_data_bank.keys())
 
+        # Check if the name is included in the internal dictionary with added modules
         if name in self._m_modules:
-            module = self._m_modules[name]
-            validate = self._validate(module, existing_data_tags)
+            # Validate the pipeline module
+            validate = self._validate(self._m_modules[name], data_tags)
 
         else:
-            validate = None
+            validate = (False, name)
 
         return validate
 
     @typechecked
     def run(self) -> None:
         """
-        Function for running all pipeline modules that are added to the
+        Method for running all pipeline modules that are added to the
         :class:`~pynpoint.core.pypeline.Pypeline`.
 
         Returns
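The `validate_pipeline` hunk above replaces the `for`/`else` construct with a `validation` tuple that is initialized before the loop, so an empty module dictionary also yields `(True, None)`. The same pattern in isolation (the `validate_all` helper and its dictionary of booleans are a toy stand-in, not the PynPoint API):

```python
from typing import Dict, Optional, Tuple

def validate_all(modules: Dict[str, bool]) -> Tuple[bool, Optional[str]]:
    # Initialize the result so that an empty dictionary validates
    # to (True, None), mirroring the pre-initialized tuple above.
    validation = (True, None)

    for name, is_valid in modules.items():
        if not is_valid:
            # Break as soon as one module fails validation.
            validation = (False, name)
            break

    return validation

assert validate_all({}) == (True, None)
assert validate_all({'read': True, 'write': False}) == (False, 'write')
```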
@@ -444,7 +490,7 @@ class Pypeline:
     def run_module(self,
                    name: str) -> None:
         """
-        Function for running a pipeline module.
+        Method for running a pipeline module.
 
         Parameters
         ----------
@@ -514,15 +560,15 @@ class Pypeline:
                  tag: str,
                  data_range: Optional[Tuple[int, int]] = None) -> np.ndarray:
         """
-        Function for accessing data in the central database.
+        Method for reading data from the database.
 
         Parameters
         ----------
         tag : str
             Database tag.
         data_range : tuple(int, int), None
-            Slicing range which can be used to select a subset of images from a 3D dataset. All
-            data are selected if set to None.
+            Slicing range for the first axis of a dataset. This argument can be used to select a
+            subset of images from a dataset. The full dataset is read if the argument is None.
 
         Returns
         -------
@@ -546,8 +592,8 @@ class Pypeline:
     def delete_data(self,
                     tag: str) -> None:
         """
-        Function for deleting a dataset and related attributes from the central database. Disk
-        space does not seem to free up when using this function.
+        Method for deleting a dataset and related attributes from the central database. Disk
+        space does not seem to free up when using this method.
 
         Parameters
         ----------
@@ -580,7 +626,7 @@ class Pypeline:
                       attr_name: str,
                       static: bool = True) -> Union[StaticAttribute, NonStaticAttribute]:
         """
-        Function for accessing attributes in the central database.
+        Method for reading an attribute from the database.
 
         Parameters
         ----------
@@ -589,12 +635,13 @@ class Pypeline:
         attr_name : str
             Name of the attribute.
         static : bool
-            Static or non-static attribute.
+            Static (True) or non-static attribute (False).
 
         Returns
         -------
         StaticAttribute, NonStaticAttribute
-            The values of the attribute, which can either be static or non-static.
+            Attribute value. For a static attribute, a single value is returned. For a non-static
+            attribute, an array of values is returned.
         """
 
         self.m_data_storage.open_connection()
@@ -617,8 +664,7 @@ class Pypeline:
                       attr_value: Union[StaticAttribute, NonStaticAttribute],
                       static: bool = True) -> None:
         """
-        Function for writing attributes to the central database. Existing values will be
-        overwritten.
+        Method for writing an attribute to the database. Existing values will be overwritten.
 
         Parameters
         ----------
@@ -629,7 +675,7 @@ class Pypeline:
         attr_value : StaticAttribute, NonStaticAttribute
             Attribute value.
         static : bool
-            Static or non-static attribute.
+            Static (True) or non-static attribute (False).
 
         Returns
         -------
@@ -655,36 +701,37 @@ class Pypeline:
         self.m_data_storage.close_connection()
 
     @typechecked
-    def get_tags(self) -> np.ndarray:
+    def get_tags(self) -> List[str]:
         """
-        Function for listing the database tags, ignoring header and config tags.
+        Method for returning a list with all database tags, except header and configuration tags.
 
         Returns
         -------
-        np.ndarray
+        list(str)
             Database tags.
         """
 
         self.m_data_storage.open_connection()
 
         tags = list(self.m_data_storage.m_data_bank.keys())
-        select = []
+
+        selected_tags = []
 
         for item in tags:
-            if item in ('config', 'fits_header') or item[0:7] == 'header_':
+            if item in ['config', 'fits_header'] or item[0:7] == 'header_':
                 continue
 
-            select.append(item)
+            selected_tags.append(item)
 
         self.m_data_storage.close_connection()
 
-        return np.asarray(select)
+        return selected_tags
 
     @typechecked
     def get_shape(self,
                   tag: str) -> Optional[Tuple[int, ...]]:
         """
-        Function for getting the shape of a database entry.
+        Method for returning the shape of a database entry.
 
         Parameters
         ----------
@@ -693,8 +740,8 @@ class Pypeline:
 
         Returns
         -------
-        tuple(int, )
-            Dataset shape.
+        tuple(int, ...), None
+            Shape of the dataset. None is returned if the database tag is not found.
         """
 
         self.m_data_storage.open_connection()
@@ -707,3 +754,46 @@ class Pypeline:
         self.m_data_storage.close_connection()
 
         return data_shape
+
+    @typechecked
+    def list_attributes(self,
+                        data_tag: str) -> Dict[str, Union[str, np.float64, np.ndarray]]:
+        """
+        Method for printing and returning an overview of all attributes of a dataset.
+
+        Parameters
+        ----------
+        data_tag : str
+            Database tag of the dataset from which the attributes will be extracted.
+
+        Returns
+        -------
+        dict(str, Union[str, np.float64, np.ndarray])
+            Dictionary with all attributes, both static and non-static.
+        """
+
+        print_text = f'Attribute overview of {data_tag}'
+
+        print('\n' + len(print_text) * '-')
+        print(print_text)
+        print(len(print_text) * '-' + '\n')
+
+        self.m_data_storage.open_connection()
+
+        attributes = {}
+
+        print('Static attributes:')
+
+        for key, value in self.m_data_storage.m_data_bank[data_tag].attrs.items():
+            attributes[key] = value
+            print(f'\n   - {key} = {value}')
+
+        print('\nNon-static attributes:')
+
+        for key, value in self.m_data_storage.m_data_bank[f'header_{data_tag}'].items():
+            attributes[key] = list(value)
+            print(f'\n   - {key} = {list(value)}')
+
+        self.m_data_storage.close_connection()
+
+        return attributes
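The change to `get_tags` replaces the returned `np.ndarray` with a plain `List[str]` and skips the `config`, `fits_header`, and `header_*` entries. The filtering step can be sketched standalone; `filter_database_tags` is a hypothetical helper name used here for illustration and is not part of the PynPoint API:

```python
from typing import List

def filter_database_tags(tags: List[str]) -> List[str]:
    # Drop the configuration and header groups, keep only dataset tags
    selected_tags = []

    for tag in tags:
        if tag in ['config', 'fits_header'] or tag.startswith('header_'):
            continue

        selected_tags.append(tag)

    return selected_tags
```

Returning a list also avoids the implicit string-dtype conversion that `np.asarray` applied in the old implementation.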
diff --git a/pynpoint/processing/background.py b/pynpoint/processing/background.py
index daa476b..a3fa4db 100644
--- a/pynpoint/processing/background.py
+++ b/pynpoint/processing/background.py
@@ -5,14 +5,16 @@ Pipeline modules for subtraction of the background emission.
 import time
 import warnings
 
+from typing import Any, Optional, Union
+
 import numpy as np
 
 from typeguard import typechecked
-from typing import Any, Optional, Union
 
 from pynpoint.core.processing import ProcessingModule
 from pynpoint.util.image import create_mask
 from pynpoint.util.module import progress
+from pynpoint.util.apply_func import subtract_line
 
 
 class SimpleBackgroundSubtractionModule(ProcessingModule):
@@ -48,7 +50,7 @@ class SimpleBackgroundSubtractionModule(ProcessingModule):
             None
         """
 
-        super(SimpleBackgroundSubtractionModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -130,7 +132,7 @@ class MeanBackgroundSubtractionModule(ProcessingModule):
             None
         """
 
-        super(MeanBackgroundSubtractionModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -274,7 +276,7 @@ class MeanBackgroundSubtractionModule(ProcessingModule):
             # -----------------------------------------------------------
 
         if isinstance(self.m_shift, np.ndarray):
-            history = f'shift = NFRAMES'
+            history = 'shift = NFRAMES'
         else:
             history = f'shift = {self.m_shift}'
 
@@ -321,7 +323,7 @@ class LineSubtractionModule(ProcessingModule):
             None
         """
 
-        super(LineSubtractionModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -345,35 +347,6 @@ class LineSubtractionModule(ProcessingModule):
         pixscale = self.m_image_in_port.get_attribute('PIXSCALE')
         im_shape = self.m_image_in_port.get_shape()[-2:]
 
-        @typechecked
-        def _subtract_line(image_in: np.ndarray,
-                           mask: np.ndarray) -> np.ndarray:
-
-            image_tmp = np.copy(image_in)
-            image_tmp[mask == 0.] = np.nan
-
-            if self.m_combine == 'mean':
-                row_mean = np.nanmean(image_tmp, axis=1)
-                col_mean = np.nanmean(image_tmp, axis=0)
-
-                x_grid, y_grid = np.meshgrid(col_mean, row_mean)
-                subtract = (x_grid+y_grid)/2.
-
-            elif self.m_combine == 'median':
-                col_median = np.nanmedian(image_tmp, axis=0)
-                col_2d = np.tile(col_median, (im_shape[1], 1))
-
-                image_tmp -= col_2d
-                image_tmp[mask == 0.] = np.nan
-
-                row_median = np.nanmedian(image_tmp, axis=1)
-                row_2d = np.tile(row_median, (im_shape[0], 1))
-                row_2d = np.rot90(row_2d)  # 90 deg rotation in clockwise direction
-
-                subtract = col_2d + row_2d
-
-            return image_in - subtract
-
         if self.m_mask:
             size = (self.m_mask/pixscale, None)
         else:
@@ -381,11 +354,13 @@ class LineSubtractionModule(ProcessingModule):
 
         mask = create_mask(im_shape, size)
 
-        self.apply_function_to_images(_subtract_line,
+        self.apply_function_to_images(subtract_line,
                                       self.m_image_in_port,
                                       self.m_image_out_port,
                                       'Background subtraction',
-                                      func_args=(mask, ))
+                                      func_args=(mask,
+                                                 self.m_combine,
+                                                 im_shape))
 
         history = f'combine = {self.m_combine}'
         self.m_image_out_port.copy_attributes(self.m_image_in_port)
@@ -434,7 +409,7 @@ class NoddingBackgroundModule(ProcessingModule):
             None
         """
 
-        super(NoddingBackgroundModule, self).__init__(name_in=name_in)
+        super().__init__(name_in)
 
         self.m_science_in_port = self.add_input_port(science_in_tag)
         self.m_sky_in_port = self.add_input_port(sky_in_tag)
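For reference, the `mean` branch of the `subtract_line` helper that moved to `pynpoint.util.apply_func` amounts to the following. This is a minimal standalone sketch of the logic visible in the removed lines (with `subtract_line_mean` as a hypothetical name), not the actual signature of the relocated function:

```python
import numpy as np

def subtract_line_mean(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # Exclude the masked region (mask == 0) from the row/column statistics
    image_tmp = np.copy(image)
    image_tmp[mask == 0.] = np.nan

    # Mean of every row and every column, ignoring the masked pixels
    row_mean = np.nanmean(image_tmp, axis=1)
    col_mean = np.nanmean(image_tmp, axis=0)

    # Average the row and column contributions on a 2D grid and subtract
    x_grid, y_grid = np.meshgrid(col_mean, row_mean)

    return image - (x_grid + y_grid) / 2.
```

The relocated function additionally takes the `combine` mode and the image shape as arguments, as shown in the updated `func_args` of `apply_function_to_images`.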
diff --git a/pynpoint/processing/badpixel.py b/pynpoint/processing/badpixel.py
index 36c97ab..f64fa0a 100644
--- a/pynpoint/processing/badpixel.py
+++ b/pynpoint/processing/badpixel.py
@@ -2,178 +2,17 @@
 Pipeline modules for the detection and interpolation of bad pixels.
 """
 
-import copy
 import warnings
 
-from typing import Optional, Tuple, Union
+from typing import Optional, Tuple
 
-import cv2
 import numpy as np
 
-from numba import jit
 from typeguard import typechecked
 
 from pynpoint.core.processing import ProcessingModule
-
-
-# This function cannot by @typechecked because of a compatibility issue with numba
-@jit(cache=True)
-def _calc_fast_convolution(F_roof_tmp: np.complex128,
-                           W: np.ndarray,
-                           tmp_s: tuple,
-                           N_size: float,
-                           tmp_G: np.ndarray,
-                           N: Tuple[int, ...]) -> np.ndarray:
-
-    new = np.zeros(N, dtype=np.complex64)
-
-    if ((tmp_s[0] == 0) and (tmp_s[1] == 0)) or \
-            ((tmp_s[0] == N[0] / 2) and (tmp_s[1] == 0)) or \
-            ((tmp_s[0] == 0) and (tmp_s[1] == N[1] / 2)) or \
-            ((tmp_s[0] == N[0] / 2) and (tmp_s[1] == N[1] / 2)):
-
-        for m in range(0, N[0], 1):
-            for j in range(0, N[1], 1):
-                new[m, j] = F_roof_tmp * W[m - tmp_s[0], j - tmp_s[1]]
-
-    else:
-
-        for m in range(0, N[0], 1):
-            for j in range(0, N[1], 1):
-                new[m, j] = (F_roof_tmp * W[m - tmp_s[0], j - tmp_s[1]] +
-                             np.conjugate(F_roof_tmp) * W[(m + tmp_s[0]) %
-                             N[0], (j + tmp_s[1]) % N[1]])
-
-    if ((tmp_s[0] == N[0] / 2) and (tmp_s[1] == 0)) or \
-            ((tmp_s[0] == 0) and (tmp_s[1] == N[1] / 2)) or \
-            ((tmp_s[0] == N[0] / 2) and (tmp_s[1] == N[1] / 2)):  # causes problems, unknown why
-
-        res = new / float(N_size)
-
-    else:
-
-        res = new / float(N_size)
-
-    tmp_G = tmp_G - res
-
-    return tmp_G
-
-
-@typechecked
-def _bad_pixel_interpolation(image_in: np.ndarray,
-                             bad_pixel_map: np.ndarray,
-                             iterations: int) -> np.ndarray:
-    """
-    Internal function to interpolate bad pixels.
-
-    Parameters
-    ----------
-    image_in : numpy.ndarray
-        Input image.
-    bad_pixel_map : numpy.ndarray
-        Bad pixel map.
-    iterations : int
-        Number of iterations.
-
-    Returns
-    -------
-    numpy.ndarray
-        Image in which the bad pixels have been interpolated.
-    """
-
-    image_in = image_in * bad_pixel_map
-
-    # for names see ref paper
-    g = copy.deepcopy(image_in)
-    G = np.fft.fft2(g)
-    w = copy.deepcopy(bad_pixel_map)
-    W = np.fft.fft2(w)
-
-    N = g.shape
-    N_size = float(N[0] * N[1])
-    F_roof = np.zeros(N, dtype=complex)
-    tmp_G = copy.deepcopy(G)
-
-    iteration = 0
-
-    while iteration < iterations:
-        # 1.) select line using max search and compute conjugate
-        tmp_s = np.unravel_index(np.argmax(abs(tmp_G.real[:, 0: N[1] // 2])),
-                                 (N[0], N[1] // 2))
-
-        tmp_s_conjugate = (np.mod(N[0] - tmp_s[0], N[0]),
-                           np.mod(N[1] - tmp_s[1], N[1]))
-
-        # 2.) compute the new F_roof
-        # special cases s = 0 or s = N/2 no conjugate line exists
-        if ((tmp_s[0] == 0) and (tmp_s[1] == 0)) or \
-                ((tmp_s[0] == N[0] / 2) and (tmp_s[1] == 0)) or \
-                ((tmp_s[0] == 0) and (tmp_s[1] == N[1] / 2)) or \
-                ((tmp_s[0] == N[0] / 2) and (tmp_s[1] == N[1] / 2)):
-            F_roof_tmp = N_size * tmp_G[tmp_s] / W[(0, 0)]
-
-            # 3.) update F_roof
-            F_roof[tmp_s] += F_roof_tmp
-
-        # conjugate line exists
-        else:
-            a = (np.power(np.abs(W[(0, 0)]), 2))
-            b = np.power(np.abs(W[(2 * tmp_s[0]) % N[0], (2 * tmp_s[1]) % N[1]]), 2)
-
-            if a == b:
-                W[(2 * tmp_s[0]) % N[0], (2 * tmp_s[1]) % N[1]] += 0.00000000001
-
-            a = (np.power(np.abs(W[(0, 0)]), 2))
-            b = np.power(np.abs(W[(2 * tmp_s[0]) % N[0], (2 * tmp_s[1]) % N[1]]),
-                         2.0) + 0.01
-            c = a - b
-
-            F_roof_tmp = N_size * (tmp_G[tmp_s] * W[(0, 0)] - np.conj(tmp_G[tmp_s]) *
-                                   W[(2 * tmp_s[0]) % N[0], (2 * tmp_s[1]) % N[1]]) / c
-
-            # 3.) update F_roof
-            F_roof[tmp_s] += F_roof_tmp
-            F_roof[tmp_s_conjugate] += np.conjugate(F_roof_tmp)
-
-        # 4.) calc the new error spectrum using fast numba function
-        tmp_G = _calc_fast_convolution(F_roof_tmp, W, tmp_s, N_size, tmp_G, N)
-
-        iteration += 1
-
-    return image_in * bad_pixel_map + np.fft.ifft2(F_roof).real * (1 - bad_pixel_map)
-
-
-# @jit(cache=True)
-# def _sigma_detection(dev_image,
-#                      var_image,
-#                      source_image,
-#                      out_image):
-#     """
-#     Internal function to create a map with ones and zeros.
-#
-#     Parameters
-#     ----------
-#     dev_image : numpy.ndarray
-#         Image of pixel deviations from neighborhood means, squared.
-#     var_image : numpy.ndarray
-#         Image of pixel neighborhood variances * (N_sigma)^2.
-#     source_image : numpy.ndarray
-#         Input image.
-#     out_image : numpy.ndarray
-#         Bad pixel map.
-#
-#     Returns
-#     -------
-#     NoneType
-#         None
-#     """
-#
-#     for i in range(source_image.shape[0]):
-#         for j in range(source_image.shape[1]):
-#             if dev_image[i][j] < var_image[i][j]:
-#                 out_image[i][j] = 1
-#             else:
-#                 out_image[i][j] = 0
+from pynpoint.util.apply_func import bad_pixel_sigma_filter, image_interpolation, \
+                                     replace_pixels, time_filter
 
 
 class BadPixelSigmaFilterModule(ProcessingModule):
@@ -184,26 +23,6 @@ class BadPixelSigmaFilterModule(ProcessingModule):
 
     __author__ = 'Markus Bonse, Tomas Stolker'
 
-    # This function cannot by @typechecked because of a compatibility issue with numba
-    @staticmethod
-    @jit(cache=True)
-    def _sigma_filter(dev_image: np.ndarray,
-                      var_image: np.ndarray,
-                      mean_image: np.ndarray,
-                      source_image: np.ndarray,
-                      out_image: np.ndarray,
-                      bad_pixel_map: np.ndarray) -> None:
-
-        for i in range(source_image.shape[0]):
-            for j in range(source_image.shape[1]):
-
-                if dev_image[i][j] < var_image[i][j]:
-                    out_image[i][j] = source_image[i][j]
-
-                else:
-                    out_image[i][j] = mean_image[i][j]
-                    bad_pixel_map[i][j] = 0
-
     @typechecked
     def __init__(self,
                  name_in: str,
@@ -239,7 +58,7 @@ class BadPixelSigmaFilterModule(ProcessingModule):
             None
         """
 
-        super(BadPixelSigmaFilterModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -253,6 +72,9 @@ class BadPixelSigmaFilterModule(ProcessingModule):
         self.m_sigma = sigma
         self.m_iterate = iterate
 
+        if self.m_iterate < 1:
+            raise ValueError('The argument of \'iterate\' should be 1 or larger.')
+
     @typechecked
     def run(self) -> None:
         """
@@ -265,65 +87,23 @@ class BadPixelSigmaFilterModule(ProcessingModule):
             None
         """
 
-        @typechecked
-        def _bad_pixel_sigma_filter(image_in: np.ndarray,
-                                    box: int,
-                                    sigma: float,
-                                    iterate: int) -> np.ndarray:
-
-            # algorithm adapted from http://idlastro.gsfc.nasa.gov/ftp/pro/image/sigma_filter.pro
-
-            bad_pixel_map = np.ones(image_in.shape)
-
-            if iterate < 1:
-                iterate = 1
-
-            while iterate > 0:
-                box2 = box * box
-
-                source_image = copy.deepcopy(image_in)
-
-                mean_image = (cv2.blur(copy.deepcopy(source_image),
-                                       (box, box)) * box2 - source_image) / (box2 - 1)
-
-                dev_image = (mean_image - source_image) ** 2
-
-                fact = float(sigma ** 2) / (box2 - 2)
-                var_image = fact * (cv2.blur(copy.deepcopy(dev_image),
-                                             (box, box)) * box2 - dev_image)
-
-                out_image = image_in
-
-                self._sigma_filter(dev_image,
-                                   var_image,
-                                   mean_image,
-                                   source_image,
-                                   out_image,
-                                   bad_pixel_map)
-
-                iterate -= 1
-
-            if self.m_map_out_port is not None:
-                self.m_map_out_port.append(bad_pixel_map, data_dim=3)
-
-            return out_image
-
         cpu = self._m_config_port.get_attribute('CPU')
 
-        if cpu > 1:
-            if self.m_map_out_port is not None:
-                warnings.warn('The map_out_port can only be used if CPU=1. No data will be '
-                              'stored to this output port.')
+        if cpu > 1 and self.m_map_out_port is not None:
+            warnings.warn('The \'map_out_port\' can only be used if CPU = 1. No data will '
+                          'be stored to this output port.')
 
+            del self._m_output_ports[self.m_map_out_port.tag]
             self.m_map_out_port = None
 
-        self.apply_function_to_images(_bad_pixel_sigma_filter,
+        self.apply_function_to_images(bad_pixel_sigma_filter,
                                       self.m_image_in_port,
                                       self.m_image_out_port,
                                       'Bad pixel sigma filter',
                                       func_args=(self.m_box,
                                                  self.m_sigma,
-                                                 self.m_iterate))
+                                                 self.m_iterate,
+                                                 self.m_map_out_port))
 
         history = f'sigma = {self.m_sigma}'
         self.m_image_out_port.copy_attributes(self.m_image_in_port)
@@ -377,7 +157,7 @@ class BadPixelMapModule(ProcessingModule):
             None
         """
 
-        super(BadPixelMapModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         if dark_in_tag is None:
             self.m_dark_port = None
@@ -430,7 +210,7 @@ class BadPixelMapModule(ProcessingModule):
 
             max_flat = np.max(flat)
 
-            print(f'Threshold flat field [counts] = {max_flat*self.m_flat_threshold}')
+            print(f'Threshold flat field (ADU) = {max_flat*self.m_flat_threshold:.2e}')
 
             if self.m_dark_port is None:
                 bpmap = np.ones(flat.shape)
@@ -488,7 +268,7 @@ class BadPixelInterpolationModule(ProcessingModule):
             None
         """
 
-        super(BadPixelInterpolationModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_bp_map_in_port = self.add_input_port(bad_pixel_map_tag)
@@ -518,16 +298,12 @@ class BadPixelInterpolationModule(ProcessingModule):
             raise ValueError('The shape of the bad pixel map does not match the shape of the '
                              'images.')
 
-        @typechecked
-        def _image_interpolation(image_in: np.ndarray) -> np.ndarray:
-            return _bad_pixel_interpolation(image_in,
-                                            bad_pixel_map,
-                                            self.m_iterations)
-
-        self.apply_function_to_images(_image_interpolation,
+        self.apply_function_to_images(image_interpolation,
                                       self.m_image_in_port,
                                       self.m_image_out_port,
-                                      'Bad pixel interpolation')
+                                      'Bad pixel interpolation',
+                                      func_args=(self.m_iterations,
+                                                 bad_pixel_map))
 
         history = f'iterations = {self.m_iterations}'
         self.m_image_out_port.copy_attributes(self.m_image_in_port)
@@ -569,7 +345,7 @@ class BadPixelTimeFilterModule(ProcessingModule):
             None
         """
 
-        super(BadPixelTimeFilterModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -589,31 +365,9 @@ class BadPixelTimeFilterModule(ProcessingModule):
             None
         """
 
-        @typechecked
-        def _time_filter(timeline: np.ndarray,
-                         sigma: Tuple[float, float]) -> np.ndarray:
-
-            median = np.median(timeline)
-            std = np.std(timeline)
-
-            index_lower = np.argwhere(timeline < median-sigma[0]*std)
-            index_upper = np.argwhere(timeline > median+sigma[1]*std)
-
-            if index_lower.size > 0:
-                mask = np.ones(timeline.shape, dtype=bool)
-                mask[index_lower] = False
-                timeline[index_lower] = np.mean(timeline[mask])
-
-            if index_upper.size > 0:
-                mask = np.ones(timeline.shape, dtype=bool)
-                mask[index_upper] = False
-                timeline[index_upper] = np.mean(timeline[mask])
-
-            return timeline
-
         print('Temporal filtering of bad pixels ...', end='')
 
-        self.apply_function_in_time(_time_filter,
+        self.apply_function_in_time(time_filter,
                                     self.m_image_in_port,
                                     self.m_image_out_port,
                                     func_args=(self.m_sigma, ))
@@ -663,7 +417,7 @@ class ReplaceBadPixelsModule(ProcessingModule):
             None
         """
 
-        super(ReplaceBadPixelsModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_map_in_port = self.add_input_port(map_in_tag)
@@ -688,39 +442,13 @@ class ReplaceBadPixelsModule(ProcessingModule):
         bpmap = self.m_map_in_port.get_all()[0, ]
         index = np.argwhere(bpmap == 0)
 
-        @typechecked
-        def _replace_pixels(image: np.ndarray,
-                            index: np.ndarray) -> np.ndarray:
-
-            im_mask = np.copy(image)
-
-            for _, item in enumerate(index):
-                im_mask[item[0], item[1]] = np.nan
-
-            for _, item in enumerate(index):
-                im_tmp = im_mask[item[0]-self.m_size:item[0]+self.m_size+1,
-                                 item[1]-self.m_size:item[1]+self.m_size+1]
-
-                if np.size(np.where(im_tmp != np.nan)[0]) == 0:
-                    im_mask[item[0], item[1]] = image[item[0], item[1]]
-
-                else:
-                    if self.m_replace == 'mean':
-                        im_mask[item[0], item[1]] = np.nanmean(im_tmp)
-
-                    elif self.m_replace == 'median':
-                        im_mask[item[0], item[1]] = np.nanmedian(im_tmp)
-
-                    elif self.m_replace == 'nan':
-                        im_mask[item[0], item[1]] = np.nan
-
-            return im_mask
-
-        self.apply_function_to_images(_replace_pixels,
+        self.apply_function_to_images(replace_pixels,
                                       self.m_image_in_port,
                                       self.m_image_out_port,
                                       'Running ReplaceBadPixelsModule',
-                                      func_args=(index, ))
+                                      func_args=(index,
+                                                 self.m_size,
+                                                 self.m_replace))
 
         history = f'replace = {self.m_replace}'
         self.m_image_out_port.copy_attributes(self.m_image_in_port)
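The temporal sigma clipping that moved to `pynpoint.util.apply_func.time_filter` replaces pixel values that deviate from the median of their timeline by more than `sigma[0]` (lower) or `sigma[1]` (upper) standard deviations with the mean of the remaining values. A standalone sketch of that logic (`sigma_clip_timeline` is a hypothetical name; the real function keeps the in-place signature shown in the removed lines):

```python
import numpy as np

def sigma_clip_timeline(timeline: np.ndarray,
                        sigma: tuple = (3.0, 3.0)) -> np.ndarray:
    # Work on a copy so the input array is left untouched
    timeline = np.copy(timeline)

    median = np.median(timeline)
    std = np.std(timeline)

    # Indices of outliers below and above the clipping thresholds
    index_lower = np.argwhere(timeline < median - sigma[0]*std)
    index_upper = np.argwhere(timeline > median + sigma[1]*std)

    for index in (index_lower, index_upper):
        if index.size > 0:
            # Replace each outlier with the mean of the remaining values
            mask = np.ones(timeline.shape, dtype=bool)
            mask[index] = False
            timeline[index] = np.mean(timeline[mask])

    return timeline
```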
diff --git a/pynpoint/processing/basic.py b/pynpoint/processing/basic.py
index 1cb45b1..f359458 100644
--- a/pynpoint/processing/basic.py
+++ b/pynpoint/processing/basic.py
@@ -44,7 +44,7 @@ class SubtractImagesModule(ProcessingModule):
             None
         """
 
-        super(SubtractImagesModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in1_port = self.add_input_port(image_in_tags[0])
         self.m_image_in2_port = self.add_input_port(image_in_tags[1])
@@ -118,7 +118,7 @@ class AddImagesModule(ProcessingModule):
             None
         """
 
-        super(AddImagesModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in1_port = self.add_input_port(image_in_tags[0])
         self.m_image_in2_port = self.add_input_port(image_in_tags[1])
@@ -191,7 +191,7 @@ class RotateImagesModule(ProcessingModule):
             None
         """
 
-        super(RotateImagesModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -265,7 +265,7 @@ class RepeatImagesModule(ProcessingModule):
             None
         """
 
-        super(RepeatImagesModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
diff --git a/pynpoint/processing/centering.py b/pynpoint/processing/centering.py
index 987046f..ce5ebf0 100644
--- a/pynpoint/processing/centering.py
+++ b/pynpoint/processing/centering.py
@@ -2,24 +2,23 @@
 Pipeline modules for aligning and centering of the star.
 """
 
-import time
 import math
+import time
 import warnings
 
 from typing import Optional, Tuple, Union
 
 import numpy as np
 
-from astropy.modeling import models, fitting
+from astropy.modeling import fitting, models
 from scipy.ndimage.filters import gaussian_filter
-from scipy.optimize import curve_fit
-from skimage.registration import phase_cross_correlation
-from skimage.transform import rescale
 from typeguard import typechecked
 
 from pynpoint.core.processing import ProcessingModule
+from pynpoint.util.image import center_pixel, crop_image, pixel_distance, shift_image, \
+                                subpixel_distance
 from pynpoint.util.module import memory_frames, progress
-from pynpoint.util.image import crop_image, shift_image, center_pixel
+from pynpoint.util.apply_func import align_image, apply_shift, fit_2d_function
 
 
 class StarAlignmentModule(ProcessingModule):
@@ -73,7 +72,7 @@ class StarAlignmentModule(ProcessingModule):
             None
         """
 
-        super(StarAlignmentModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -102,49 +101,6 @@ class StarAlignmentModule(ProcessingModule):
             None
         """
 
-        @typechecked
-        def _align_image(image_in: np.ndarray) -> np.ndarray:
-            offset = np.array([0., 0.])
-
-            for i in range(self.m_num_references):
-                if self.m_subframe is None:
-                    tmp_offset, _, _ = phase_cross_correlation(ref_images[i, :, :],
-                                                               image_in,
-                                                               upsample_factor=self.m_accuracy)
-
-                else:
-                    sub_in = crop_image(image_in, None, self.m_subframe)
-                    sub_ref = crop_image(ref_images[i, :, :], None, self.m_subframe)
-
-                    tmp_offset, _, _ = phase_cross_correlation(sub_ref,
-                                                               sub_in,
-                                                               upsample_factor=self.m_accuracy)
-                offset += tmp_offset
-
-            offset /= float(self.m_num_references)
-
-            if self.m_resize is not None:
-                offset *= self.m_resize
-
-                sum_before = np.sum(image_in)
-
-                tmp_image = rescale(image=np.asarray(image_in, dtype=np.float64),
-                                    scale=(self.m_resize, self.m_resize),
-                                    order=5,
-                                    mode='reflect',
-                                    anti_aliasing=True,
-                                    multichannel=False)
-
-                sum_after = np.sum(tmp_image)
-
-                # Conserve flux because the rescale function normalizes all values to [0:1].
-                tmp_image = tmp_image*(sum_before/sum_after)
-
-            else:
-                tmp_image = image_in
-
-            return shift_image(tmp_image, offset, self.m_interpolation)
-
         if self.m_ref_image_in_port is None:
             random = np.random.choice(self.m_image_in_port.get_shape()[0],
                                       self.m_num_references,
@@ -169,10 +125,17 @@ class StarAlignmentModule(ProcessingModule):
             pixscale = self.m_image_in_port.get_attribute('PIXSCALE')
             self.m_subframe = int(self.m_subframe/pixscale)
 
-        self.apply_function_to_images(_align_image,
+        self.apply_function_to_images(align_image,
                                       self.m_image_in_port,
                                       self.m_image_out_port,
-                                      'Aligning images')
+                                      'Aligning images',
+                                      func_args=(self.m_interpolation,
+                                                 self.m_accuracy,
+                                                 self.m_resize,
+                                                 self.m_num_references,
+                                                 self.m_subframe,
+                                                 ref_images.reshape(-1),
+                                                 ref_images.shape))
 
         self.m_image_out_port.copy_attributes(self.m_image_in_port)
 
@@ -180,7 +143,7 @@ class StarAlignmentModule(ProcessingModule):
             pixscale = self.m_image_in_port.get_attribute('PIXSCALE')
             new_pixscale = pixscale/self.m_resize
             self.m_image_out_port.add_attribute('PIXSCALE', new_pixscale)
-            print(f'New pixel scale [arcsec] = {new_pixscale:.2f}')
+            print(f'New pixel scale (arcsec) = {new_pixscale:.2f}')
 
         history = f'resize = {self.m_resize}'
         self.m_image_out_port.add_history('StarAlignmentModule', history)
@@ -201,45 +164,52 @@ class FitCenterModule(ProcessingModule):
                  fit_out_tag: str,
                  mask_out_tag: Optional[str] = None,
                  method: str = 'full',
-                 radius: float = 0.1,
+                 mask_radii: Tuple[Optional[float], float] = (None, 0.1),
                  sign: str = 'positive',
                  model: str = 'gaussian',
                  filter_size: Optional[float] = None,
-                 **kwargs: tuple) -> None:
+                 **kwargs: Union[Tuple[float, float, float, float, float, float, float],
+                                 Tuple[float, float, float, float, float, float, float, float],
+                                 float]) -> None:
         """
         Parameters
         ----------
         name_in : str
             Unique name of the module instance.
         image_in_tag : str
-            Tag of the database entry with images that are read as input.
+            Database tag of the images that are read as input.
         fit_out_tag : str
-            Tag of the database entry with the best-fit results of the model fit and the 1-sigma
-            errors. Data is written in the following format: x offset (pix), x offset error (pix)
+            Database tag where the best-fit results and 1σ errors will be stored.
+            The data are written in the following format: x offset (pix), x offset error (pix)
             y offset (pix), y offset error (pix), FWHM major axis (arcsec), FWHM major axis error
-            (arcsec), FWHM minor axis (arcsec), FWHM minor axis error (arcsec), amplitude (counts),
-            amplitude error (counts), angle (deg), angle error (deg) measured in counterclockwise
-            direction with respect to the upward direction (i.e., East of North), offset (counts),
-            offset error (counts), power index (only for Moffat function), and power index error
-            (only for Moffat function). Not used if set to None.
+            (arcsec), FWHM minor axis (arcsec), FWHM minor axis error (arcsec), amplitude (ADU),
+            amplitude error (ADU), angle (deg), angle error (deg) measured in counterclockwise
+            direction with respect to the upward direction (i.e. east of north), offset (ADU),
+            offset error (ADU), power index (only for Moffat function), and power index error
+            (only for Moffat function). The ``fit_out_tag`` can be used as argument of ``shift_xy``
+            when running the :class:`~pynpoint.processing.centering.ShiftImagesModule`.
         mask_out_tag : str, None
-            Tag of the database entry with the masked images that are written as output. The
-            unmasked part of the images is used for the fit. The effect of the smoothing that is
-            applied by setting the *fwhm* parameter is also visible in the data of the
-            *mask_out_tag*. Data is not written when set to None.
+            Database tag where the masked images will be stored. The unmasked part of the images is
+            used for the fit. The effect of the smoothing that is applied by setting the ``fwhm``
+            argument is also visible in the data of the ``mask_out_tag``. The data are not stored
+            if the argument is set to None. The :class:`~pynpoint.core.dataio.OutputPort` of
+            ``mask_out_tag`` can only be used when ``CPU = 1``.
         method : str
-            Fit and shift all the images individually ('full') or only fit the mean of the cube and
-            shift all images to that location ('mean'). The 'mean' method could be used after
-            running the :class:`~pynpoint.processing.centering.StarAlignmentModule`.
-        radius : float
-            Radius (arcsec) around the center of the image beyond which pixels are neglected with
-            the fit. The radius is centered on the position specified in *guess*, which is the
-            center of the image by default.
+            Fit and shift each image individually ('full') or only fit the mean of the cube and
+            shift each image by this constant offset ('mean'). The 'mean' method can be used when
+            the images have already been aligned with
+            :class:`~pynpoint.processing.centering.StarAlignmentModule`.
+        mask_radii : tuple(float, float), tuple(None, float)
+            Inner and outer radius (arcsec) within and beyond which pixels are neglected during
+            the fit. The radii are centered at the position that is specified with the ``guess``
+            argument, which is the center of the image by default. The outer mask (second value
+            of ``mask_radii``) is mandatory whereas the radius of the inner mask is optional and
+            can be set to None.
         sign : str
-            Fit a 'positive' or 'negative' Gaussian/Moffat. A negative model can be used to center
-            coronagraphic data in which a dark hole is present.
+            Fit a 'positive' or 'negative' Gaussian/Moffat function. A 'negative' model can be used
+            to center coronagraphic data in which a dark hole is present.
         model : str
-            Type of 2D model used to fit the PSF ('gaussian' or 'moffat'). Both models are
+            Type of 2D model that is used for the fit ('gaussian' or 'moffat'). Both models are
             elliptical in shape.
         filter_size : float, None
             Standard deviation (arcsec) of the Gaussian filter that is used to smooth the
@@ -247,10 +217,11 @@ class FitCenterModule(ProcessingModule):
 
         Keyword arguments
         -----------------
-        guess : tuple(float, float, float, float, float, float, float, float)
+        guess : tuple(float, float, float, float, float, float, float, float),
+                tuple(float, float, float, float, float, float, float, float, float)
             The initial parameter values for the least squares fit: x offset with respect to center
             (pix), y offset with respect to center (pix), FWHM x (pix), FWHM y (pix), amplitude
-            (counts), angle (deg), offset (counts), and power index (only for Moffat function).
+            (ADU), angle (deg), offset (ADU), and power index (only for Moffat function).
 
         Returns
         -------
@@ -268,7 +239,14 @@ class FitCenterModule(ProcessingModule):
             elif model == 'moffat':
                 self.m_guess = (0., 0., 1., 1., 1., 0., 0., 1.)
 
-        super(FitCenterModule, self).__init__(name_in)
+        if 'radius' in kwargs:
+            mask_radii = (None, kwargs['radius'])
+
+            warnings.warn(f'The \'radius\' parameter has been deprecated. Please use the '
+                          f'\'mask_radii\' parameter instead. The argument of \'mask_radii\' '
+                          f'is set to {mask_radii}.', DeprecationWarning)
+
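The `radius` → `mask_radii` fallback added above follows a common keyword-deprecation pattern: catch the old name in `**kwargs`, map it onto the new argument, and warn. A minimal standalone sketch of the same idea (the helper name `handle_deprecated_radius` is hypothetical, not part of PynPoint):

```python
import warnings


def handle_deprecated_radius(kwargs, mask_radii):
    # Map the deprecated 'radius' keyword onto the new 'mask_radii' tuple
    if 'radius' in kwargs:
        mask_radii = (None, kwargs['radius'])

        warnings.warn('The \'radius\' parameter has been deprecated. Please use '
                      '\'mask_radii\' instead.', DeprecationWarning)

    return mask_radii


print(handle_deprecated_radius({'radius': 0.1}, (None, 0.2)))  # -> (None, 0.1)
```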
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_fit_out_port = self.add_output_port(fit_out_tag)
@@ -279,22 +257,19 @@ class FitCenterModule(ProcessingModule):
             self.m_mask_out_port = self.add_output_port(mask_out_tag)
 
         self.m_method = method
-        self.m_radius = radius
+        self.m_mask_radii = mask_radii
         self.m_sign = sign
         self.m_model = model
         self.m_filter_size = filter_size
-        self.m_model_func = None
-
-        self.m_count = 0
 
     @typechecked
     def run(self) -> None:
         """
-        Run method of the module. Uses a non-linear least squares (Levenberg-Marquardt) to fit the
-        the individual images or the mean of the stack with a 2D Gaussian or Moffat function, and
-        stores the best fit results. The fitting results contain zeros in case the algorithm could
-        not converge. The `fit_out_tag` can be directly used as input for the `shift_xy` argument
-        of the :class:`~pynpoint.processing.centering.ShiftImagesModule`.
+        Run method of the module. Uses a non-linear least squares (Levenberg-Marquardt) method
+        to fit the individual images or the mean of all images with a 2D Gaussian or Moffat
+        function. The best-fit results and errors are stored and contain zeros in case the
+        algorithm could not converge. The ``fit_out_tag`` can be used as argument of ``shift_xy``
+        when running the :class:`~pynpoint.processing.centering.ShiftImagesModule`.
 
         Returns
         -------
@@ -306,258 +281,50 @@ class FitCenterModule(ProcessingModule):
         cpu = self._m_config_port.get_attribute('CPU')
         pixscale = self.m_image_in_port.get_attribute('PIXSCALE')
 
-        npix = self.m_image_in_port.get_shape()[-1]
-
-        if cpu > 1:
-            if self.m_mask_out_port is not None:
-                warnings.warn('The mask_out_port can only be used if CPU=1. No data will be '
-                              'stored to this output port.')
+        if cpu > 1 and self.m_mask_out_port is not None:
+            warnings.warn('The mask_out_port can only be used if CPU=1. No data will be '
+                          'stored to this output port.')
 
+            del self._m_output_ports[self.m_mask_out_port.tag]
             self.m_mask_out_port = None
 
-        if self.m_radius:
-            self.m_radius /= pixscale
+        if self.m_mask_radii[0] is None:
+            # Convert from arcsec to pixels and change None to 0
+            self.m_mask_radii = (0., self.m_mask_radii[1]/pixscale)
+
+        else:
+            # Convert from arcsec to pixels
+            self.m_mask_radii = (self.m_mask_radii[0]/pixscale, self.m_mask_radii[1]/pixscale)
 
         if self.m_filter_size:
+            # Convert from arcsec to pixels
             self.m_filter_size /= pixscale
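The conversions above all divide angular sizes by the `PIXSCALE` attribute (arcsec per pixel) to obtain sizes in pixels, with an inner radius of `None` mapped to 0. A short sketch with a hypothetical pixel scale:

```python
# Convert angular sizes (arcsec) to pixels with the PIXSCALE attribute
pixscale = 0.01226  # arcsec per pixel; hypothetical SPHERE-like value

mask_radii = (None, 0.1)  # inner and outer mask radius (arcsec)

# None for the inner radius disables the inner mask, so it becomes 0 pixels
inner = 0. if mask_radii[0] is None else mask_radii[0] / pixscale
outer = mask_radii[1] / pixscale

print(inner)           # -> 0.0
print(round(outer, 1))  # -> 8.2
```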
 
-        if npix % 2 == 0:
-            x_grid = y_grid = np.linspace(-npix/2+0.5, npix/2-0.5, npix)
-            x_ap = np.linspace(-npix/2+0.5-self.m_guess[0], npix/2-0.5-self.m_guess[0], npix)
-            y_ap = np.linspace(-npix/2+0.5-self.m_guess[1], npix/2-0.5-self.m_guess[1], npix)
-
-        elif npix % 2 == 1:
-            x_grid = y_grid = np.linspace(-(npix-1)/2, (npix-1)/2, npix)
-            x_ap = np.linspace(-(npix-1)/2-self.m_guess[0], (npix-1)/2-self.m_guess[0], npix)
-            y_ap = np.linspace(-(npix-1)/2-self.m_guess[1], (npix-1)/2-self.m_guess[1], npix)
-
-        xx_grid, yy_grid = np.meshgrid(x_grid, y_grid)
-        xx_ap, yy_ap = np.meshgrid(x_ap, y_ap)
-        rr_ap = np.sqrt(xx_ap**2+yy_ap**2)
-
-        @typechecked
-        def gaussian_2d(grid: Union[Tuple[np.ndarray, np.ndarray], np.ndarray],
-                        x_center: float,
-                        y_center: float,
-                        fwhm_x: float,
-                        fwhm_y: float,
-                        amp: float,
-                        theta: float,
-                        offset: float) -> np.ndarray:
-            """
-            Function to create a 2D elliptical Gaussian model.
-
-            Parameters
-            ----------
-            grid : tuple(numpy.ndarray, numpy.ndarray), numpy.ndarray
-                A tuple of two 2D arrays with the mesh grid points in x and y
-                direction, or an equivalent 3D numpy array with 2 elements
-                along the first axis.
-            x_center : float
-                Offset of the model center along the x axis (pix).
-            y_center : float
-                Offset of the model center along the y axis (pix).
-            fwhm_x : float
-                Full width at half maximum along the x axis (pix).
-            fwhm_y : float
-                Full width at half maximum along the y axis (pix).
-            amp : float
-                Peak flux.
-            theta : float
-                Rotation angle in counterclockwise direction (rad).
-            offset : float
-                Flux offset.
-
-            Returns
-            -------
-            numpy.ndimage
-                Raveled 2D elliptical Gaussian model.
-            """
-
-            (xx_grid, yy_grid) = grid
-
-            x_diff = xx_grid - x_center
-            y_diff = yy_grid - y_center
-
-            sigma_x = fwhm_x/math.sqrt(8.*math.log(2.))
-            sigma_y = fwhm_y/math.sqrt(8.*math.log(2.))
-
-            a_gauss = 0.5 * ((np.cos(theta)/sigma_x)**2 + (np.sin(theta)/sigma_y)**2)
-            b_gauss = 0.5 * ((np.sin(2.*theta)/sigma_x**2) - (np.sin(2.*theta)/sigma_y**2))
-            c_gauss = 0.5 * ((np.sin(theta)/sigma_x)**2 + (np.cos(theta)/sigma_y)**2)
-
-            gaussian = offset + amp*np.exp(-(a_gauss*x_diff**2 + b_gauss*x_diff*y_diff +
-                                             c_gauss*y_diff**2))
-
-            if self.m_radius:
-                gaussian = gaussian[rr_ap < self.m_radius]
-            else:
-                gaussian = np.ravel(gaussian)
-
-            return gaussian
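The inline `gaussian_2d` removed here evaluates an elliptical 2D Gaussian parametrized by its FWHM along each axis, using `sigma = FWHM / sqrt(8 ln 2)` and a rotated quadratic form in the exponent. A self-contained sketch of the same model (function name and grid size are illustrative):

```python
import numpy as np


def elliptical_gaussian(xx, yy, x0, y0, fwhm_x, fwhm_y, amp, theta, offset):
    # Convert FWHM to standard deviation: sigma = FWHM / sqrt(8 ln 2)
    sigma_x = fwhm_x / np.sqrt(8. * np.log(2.))
    sigma_y = fwhm_y / np.sqrt(8. * np.log(2.))

    x_diff = xx - x0
    y_diff = yy - y0

    # Coefficients of the rotated quadratic form in the exponent
    a = 0.5 * ((np.cos(theta) / sigma_x)**2 + (np.sin(theta) / sigma_y)**2)
    b = 0.5 * (np.sin(2. * theta) / sigma_x**2 - np.sin(2. * theta) / sigma_y**2)
    c = 0.5 * ((np.sin(theta) / sigma_x)**2 + (np.cos(theta) / sigma_y)**2)

    return offset + amp * np.exp(-(a * x_diff**2 + b * x_diff * y_diff + c * y_diff**2))


x = np.linspace(-10., 10., 21)
xx, yy = np.meshgrid(x, x)

model = elliptical_gaussian(xx, yy, 0., 0., 4., 4., 1., 0., 0.)

print(model[10, 10])  # -> 1.0 (peak at the model center)
# At a distance of FWHM/2 = 2 pixels the model drops to half the peak
```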
-
-        @typechecked
-        def moffat_2d(grid: Union[Tuple[np.ndarray, np.ndarray], np.ndarray],
-                      x_center: float,
-                      y_center: float,
-                      fwhm_x: float,
-                      fwhm_y: float,
-                      amp: float,
-                      theta: float,
-                      offset: float,
-                      beta: float) -> np.ndarray:
-            """
-            Function to create a 2D elliptical Moffat model.
-
-            The parametrization used here is equivalent to the one in AsPyLib:
-            http://www.aspylib.com/doc/aspylib_fitting.html#elliptical-moffat-psf
-
-            Parameters
-            ----------
-            grid : tuple(numpy.ndarray, numpy.ndarray), numpy.ndarray
-                A tuple of two 2D arrays with the mesh grid points in x and y
-                direction, or an equivalent 3D numpy array with 2 elements
-                along the first axis.
-            x_center : float
-                Offset of the model center along the x axis (pix).
-            y_center : float
-                Offset of the model center along the y axis (pix).
-            fwhm_x : float
-                Full width at half maximum along the x axis (pix).
-            fwhm_y : float
-                Full width at half maximum along the y axis (pix).
-            amp : float
-                Peak flux.
-            theta : float
-                Rotation angle in counterclockwise direction (rad).
-            offset : float
-                Flux offset.
-            beta : float
-                Power index.
-
-            Returns
-            -------
-            numpy.ndimage
-                Raveled 2D elliptical Moffat model.
-            """
-
-            (xx_grid, yy_grid) = grid
-
-            x_diff = xx_grid - x_center
-            y_diff = yy_grid - y_center
-
-            if 2.**(1./beta)-1. < 0.:
-                alpha_x = np.nan
-                alpha_y = np.nan
-
-            else:
-                alpha_x = 0.5*fwhm_x/np.sqrt(2.**(1./beta)-1.)
-                alpha_y = 0.5*fwhm_y/np.sqrt(2.**(1./beta)-1.)
-
-            if alpha_x == 0. or alpha_y == 0.:
-                a_moffat = np.nan
-                b_moffat = np.nan
-                c_moffat = np.nan
-
-            else:
-                a_moffat = (np.cos(theta)/alpha_x)**2. + (np.sin(theta)/alpha_y)**2.
-                b_moffat = (np.sin(theta)/alpha_x)**2. + (np.cos(theta)/alpha_y)**2.
-                c_moffat = 2.*np.sin(theta)*np.cos(theta)*(1./alpha_x**2. - 1./alpha_y**2.)
-
-            a_term = a_moffat*x_diff**2
-            b_term = b_moffat*y_diff**2
-            c_term = c_moffat*x_diff*y_diff
-
-            moffat = offset + amp / (1.+a_term+b_term+c_term)**beta
-
-            if self.m_radius:
-                moffat = moffat[rr_ap < self.m_radius]
-            else:
-                moffat = np.ravel(moffat)
-
-            return moffat
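The removed `moffat_2d` uses the AsPyLib parametrization, in which the width parameter follows from the FWHM and the power index as `alpha = 0.5 * FWHM / sqrt(2**(1/beta) - 1)`. A minimal radial sketch of that relation (function names are illustrative, not PynPoint API):

```python
import numpy as np


def moffat_alpha(fwhm, beta):
    # Width parameter alpha derived from the FWHM (AsPyLib parametrization)
    return 0.5 * fwhm / np.sqrt(2.**(1. / beta) - 1.)


def moffat_radial(r, fwhm, amp, beta):
    # Radial Moffat profile with peak value `amp` at r = 0
    alpha = moffat_alpha(fwhm, beta)
    return amp / (1. + (r / alpha)**2)**beta


print(moffat_radial(0., 4., 1., 2.5))  # -> 1.0 (the peak)
print(moffat_radial(2., 4., 1., 2.5))  # half the peak at r = FWHM/2, by construction
```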
-
-        @typechecked
-        def _fit_2d_function(image: np.ndarray) -> np.ndarray:
-
-            if self.m_filter_size:
-                image = gaussian_filter(image, self.m_filter_size)
-
-            if self.m_mask_out_port:
-                mask = np.copy(image)
-
-                if self.m_radius:
-                    mask[rr_ap > self.m_radius] = 0.
+        _, xx_grid, yy_grid = pixel_distance(self.m_image_in_port.get_shape()[-2:], position=None)
 
-                self.m_mask_out_port.append(mask, data_dim=3)
-
-            if self.m_sign == 'negative':
-                image = -1.*image + np.abs(np.min(-1.*image))
-
-            if self.m_radius:
-                image = image[rr_ap < self.m_radius]
-            else:
-                image = np.ravel(image)
-
-            if self.m_model == 'gaussian':
-                self.m_model_func = gaussian_2d
-
-            elif self.m_model == 'moffat':
-                self.m_model_func = moffat_2d
-
-            try:
-                popt, pcov = curve_fit(self.m_model_func,
-                                       (xx_grid, yy_grid),
-                                       image,
-                                       p0=self.m_guess,
-                                       sigma=None,
-                                       method='lm')
-
-                perr = np.sqrt(np.diag(pcov))
-
-            except RuntimeError:
-                if self.m_model == 'gaussian':
-                    popt = np.zeros(7)
-                    perr = np.zeros(7)
-
-                elif self.m_model == 'moffat':
-                    popt = np.zeros(8)
-                    perr = np.zeros(8)
-
-                self.m_count += 1
-
-            if self.m_model == 'gaussian':
-
-                best_fit = np.asarray((popt[0], perr[0],
-                                       popt[1], perr[1],
-                                       popt[2]*pixscale, perr[2]*pixscale,
-                                       popt[3]*pixscale, perr[3]*pixscale,
-                                       popt[4], perr[4],
-                                       math.degrees(popt[5]) % 360., math.degrees(perr[5]),
-                                       popt[6], perr[6]))
-
-            elif self.m_model == 'moffat':
-
-                best_fit = np.asarray((popt[0], perr[0],
-                                       popt[1], perr[1],
-                                       popt[2]*pixscale, perr[2]*pixscale,
-                                       popt[3]*pixscale, perr[3]*pixscale,
-                                       popt[4], perr[4],
-                                       math.degrees(popt[5]) % 360., math.degrees(perr[5]),
-                                       popt[6], perr[6],
-                                       popt[7], perr[7]))
-
-            return best_fit
+        rr_ap = subpixel_distance(self.m_image_in_port.get_shape()[-2:],
+                                  position=(self.m_guess[1], self.m_guess[0]),
+                                  shift_center=False)  # (y, x)
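Here `pixel_distance` and `subpixel_distance` are PynPoint utilities that build coordinate and radial-distance grids. A simplified stand-in illustrating how `rr_ap` selects the annular fit region between the two `mask_radii` (the helper below is a sketch, not the actual utility):

```python
import numpy as np


def radial_distance_grid(npix, center):
    # Distance (pix) of each pixel to a (y, x) center position
    y = np.arange(npix) - center[0]
    x = np.arange(npix) - center[1]
    xx, yy = np.meshgrid(x, y)

    return np.sqrt(xx**2 + yy**2)


rr = radial_distance_grid(101, (50., 50.))

# Annular fit region: keep pixels between the inner and outer mask radius;
# an inner radius of 0 would disable the inner mask
mask_radii = (5., 20.)  # (pix)
fit_region = (rr > mask_radii[0]) & (rr < mask_radii[1])

print(fit_region[50, 50])  # -> False (center pixel excluded by the inner mask)
```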
 
         nimages = self.m_image_in_port.get_shape()[0]
         frames = memory_frames(memory, nimages)
 
         if self.m_method == 'full':
 
-            self.apply_function_to_images(_fit_2d_function,
+            self.apply_function_to_images(fit_2d_function,
                                           self.m_image_in_port,
                                           self.m_fit_out_port,
-                                          'Fitting the stellar PSF')
+                                          'Fitting the stellar PSF',
+                                          func_args=(self.m_mask_radii,
+                                                     self.m_sign,
+                                                     self.m_model,
+                                                     self.m_filter_size,
+                                                     self.m_guess,
+                                                     self.m_mask_out_port,
+                                                     xx_grid,
+                                                     yy_grid,
+                                                     rr_ap,
+                                                     pixscale))
 
         elif self.m_method == 'mean':
             print('Fitting the stellar PSF...', end='')
@@ -567,7 +334,19 @@ class FitCenterModule(ProcessingModule):
             for i, _ in enumerate(frames[:-1]):
                 im_mean += np.sum(self.m_image_in_port[frames[i]:frames[i+1], ], axis=0)
 
-            best_fit = _fit_2d_function(im_mean/float(nimages))
+            best_fit = fit_2d_function(im_mean/float(nimages),
+                                       0,
+                                       self.m_mask_radii,
+                                       self.m_sign,
+                                       self.m_model,
+                                       self.m_filter_size,
+                                       self.m_guess,
+                                       self.m_mask_out_port,
+                                       xx_grid,
+                                       yy_grid,
+                                       rr_ap,
+                                       pixscale)
+
             best_fit = best_fit[np.newaxis, ...]
             best_fit = np.repeat(best_fit, nimages, axis=0)
 
@@ -575,9 +354,6 @@ class FitCenterModule(ProcessingModule):
 
             print(' [DONE]')
 
-        if self.m_count > 0:
-            print(f'Fit could not converge on {self.m_count} image(s). [WARNING]')
-
         history = f'model = {self.m_model}'
 
         self.m_fit_out_port.copy_attributes(self.m_image_in_port)
@@ -625,7 +401,7 @@ class ShiftImagesModule(ProcessingModule):
             None
         """
 
-        super(ShiftImagesModule, self).__init__(name_in=name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -685,11 +461,12 @@ class ShiftImagesModule(ProcessingModule):
         # apply a constant shift
         if constant:
 
-            self.apply_function_to_images(shift_image,
+            self.apply_function_to_images(apply_shift,
                                           self.m_image_in_port,
                                           self.m_image_out_port,
                                           'Shifting the images',
-                                          func_args=(self.m_shift, self.m_interpolation))
+                                          func_args=(self.m_shift,
+                                                     self.m_interpolation))
 
             # if self.m_fit_in_port is None or constant:
             history = f'shift_xy = {self.m_shift[0]:.2f}, {self.m_shift[1]:.2f}'
@@ -701,8 +478,8 @@ class ShiftImagesModule(ProcessingModule):
 
 class WaffleCenteringModule(ProcessingModule):
     """
-    Pipeline module for centering of SPHERE data obtained with a Lyot coronagraph for which center
-    frames with satellite spots are available.
+    Pipeline module for centering of coronagraphic data for which dedicated center frames with
+    satellite spots are available.
     """
 
     __author__ = 'Alexander Bohn'
@@ -716,7 +493,8 @@ class WaffleCenteringModule(ProcessingModule):
                  size: Optional[float] = None,
                  center: Optional[Tuple[float, float]] = None,
                  radius: float = 45.,
-                 pattern: str = 'x',
+                 pattern: Optional[str] = None,
+                 angle: float = 45.,
                  sigma: float = 0.06,
                  dither: bool = False) -> None:
         """
@@ -737,14 +515,22 @@ class WaffleCenteringModule(ProcessingModule):
             Approximate position (x0, y0) of the coronagraph. The center of the image is used if
             set to None.
         radius : float
-            Approximate separation (pix) of the waffle spots from the star.
-        pattern : str
-            Waffle pattern that is used ('x' or '+').
+            Approximate separation (pix) of the satellite spots from the star. For IFS data, the
+            separation of the spots in the image with the shortest wavelength is required.
+        pattern : str, None
+            Waffle pattern that is used ('x' or '+'). This parameter will be deprecated in a future
+            release. Please use the ``angle`` parameter instead. The parameter will be ignored if
+            set to None.
+        angle : float
+            Angle offset (deg) in clockwise direction of the satellite spots with respect to the
+            '+' orientation (i.e. when the spots are located along the horizontal and vertical
+            axes). The previously used '+' pattern corresponds to 0 degrees and the 'x' pattern
+            corresponds to 45 degrees. SPHERE/IFS data require an angle of 55.48 degrees.
         sigma : float
             Standard deviation (arcsec) of the Gaussian kernel that is used for the unsharp
             masking.
         dither : bool
-            Apply dithering correction based on the DITHER_X and DITHER_Y attributes.
+            Apply dithering correction based on the ``DITHER_X`` and ``DITHER_Y`` attributes.
 
         Returns
         -------
@@ -752,7 +538,7 @@ class WaffleCenteringModule(ProcessingModule):
             None
         """
 
-        super(WaffleCenteringModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_center_in_port = self.add_input_port(center_in_tag)
@@ -762,6 +548,7 @@ class WaffleCenteringModule(ProcessingModule):
         self.m_center = center
         self.m_radius = radius
         self.m_pattern = pattern
+        self.m_angle = angle
         self.m_sigma = sigma
         self.m_dither = dither
 
@@ -779,12 +566,17 @@ class WaffleCenteringModule(ProcessingModule):
         """
 
         @typechecked
-        def _get_center(center: Optional[Tuple[int, int]]) -> Tuple[np.ndarray, Tuple[int, int]]:
-            center_frame = self.m_center_in_port[0, ]
+        def _get_center(image_number: int,
+                        center: Optional[Tuple[int, int]]) -> Tuple[np.ndarray, Tuple[int, int]]:
 
-            if center_shape[0] > 1:
+            if center_shape[-3] > 1:
                 warnings.warn('Multiple center images found. Using the first image of the stack.')
 
+            if ndim == 3:
+                center_frame = self.m_center_in_port[0, ]
+            elif ndim == 4:
+                center_frame = self.m_center_in_port[image_number, 0, ]
+
             if center is None:
                 center = center_pixel(center_frame)
             else:
@@ -794,12 +586,53 @@ class WaffleCenteringModule(ProcessingModule):
 
         center_shape = self.m_center_in_port.get_shape()
         im_shape = self.m_image_in_port.get_shape()
+        ndim = self.m_image_in_port.get_ndim()
+
+        center_frame, self.m_center = _get_center(0, self.m_center)
+
+        # Read in wavelength information or set it to default values
+        if ndim == 4:
+            wavelength = self.m_image_in_port.get_attribute('WAVELENGTH')
+
+            if wavelength is None:
+                raise ValueError('The wavelength information is required to center IFS data. '
+                                 'Please add it via the WavelengthReadingModule before using '
+                                 'the WaffleCenteringModule.')
+
+            if im_shape[0] != center_shape[0]:
+                raise ValueError(f'Number of science wavelength channels: {im_shape[0]}. '
+                                 f'Number of center wavelength channels: {center_shape[0]}. '
+                                 'Exactly one center image per wavelength is required.')
+
+            wavelength_min = np.min(wavelength)
 
-        center_frame, self.m_center = _get_center(self.m_center)
+        elif ndim == 3:
+            # For non-IFS data, use a default value
+            wavelength = [1.]
+            wavelength_min = 1.
 
+        # Check that the science and center images have the same shape
         if im_shape[-2:] != center_shape[-2:]:
             raise ValueError('Science and center images should have the same shape.')
 
+        # Set the angle via the pattern (for backward compatibility)
+        if self.m_pattern is not None:
+
+            if self.m_pattern == 'x':
+                self.m_angle = 45.
+
+            elif self.m_pattern == '+':
+                self.m_angle = 0.
+
+            else:
+                raise ValueError(f'The pattern {self.m_pattern} is not valid. Please select '
+                                 f'either \'x\' or \'+\'.')
+
+            warnings.warn(f'The \'pattern\' parameter will be deprecated in a future release. '
+                          f'Please use the \'angle\' parameter instead and set it to '
+                          f'{self.m_angle} degrees.',
+                          DeprecationWarning)
+
         pixscale = self.m_image_in_port.get_attribute('PIXSCALE')
 
         self.m_sigma /= pixscale
@@ -815,9 +648,6 @@ class WaffleCenteringModule(ProcessingModule):
             nframes = np.cumsum(nframes)
             nframes = np.insert(nframes, 0, 0)
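The `cumsum`/`insert` pattern above turns the per-cube frame counts into slice boundaries, so consecutive pairs delimit each cube in the stacked dataset. A minimal illustration with made-up frame counts:

```python
import numpy as np

nframes = np.array([3, 5, 2])     # images per cube (e.g. the NFRAMES attribute)
frames = np.cumsum(nframes)       # frames is now [3, 8, 10]
frames = np.insert(frames, 0, 0)  # frames is now [0, 3, 8, 10]

# Consecutive pairs give the slice boundaries for iterating over the cubes
for i in range(len(frames) - 1):
    print(frames[i], frames[i + 1])
```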
 
-        center_frame_unsharp = center_frame - gaussian_filter(input=center_frame,
-                                                              sigma=self.m_sigma)
-
         # size of center image, only works with odd value
         ref_image_size = 21
 
@@ -825,136 +655,163 @@ class WaffleCenteringModule(ProcessingModule):
         x_pos = np.zeros(4)
         y_pos = np.zeros(4)
 
-        # Loop for 4 waffle spots
-        for i in range(4):
-            # Approximate positions of waffle spots
-            if self.m_pattern == 'x':
-                x_0 = np.floor(self.m_center[0] + self.m_radius * np.cos(np.pi / 4. * (2 * i + 1)))
-                y_0 = np.floor(self.m_center[1] + self.m_radius * np.sin(np.pi / 4. * (2 * i + 1)))
+        # Arrays for the center position for each wavelength
+        x_center = np.zeros((len(wavelength)))
+        y_center = np.zeros((len(wavelength)))
 
-            elif self.m_pattern == '+':
-                x_0 = np.floor(self.m_center[0] + self.m_radius * np.cos(np.pi / 4. * (2 * i)))
-                y_0 = np.floor(self.m_center[1] + self.m_radius * np.sin(np.pi / 4. * (2 * i)))
+        # Loop over the wavelength channels
+        for w, wave_nr in enumerate(wavelength):
 
-            tmp_center_frame = crop_image(image=center_frame_unsharp,
-                                          center=(int(y_0), int(x_0)),
-                                          size=ref_image_size)
+            # Prepare the center frame
+            center_frame, _ = _get_center(w, self.m_center)
 
-            # find maximum in tmp image
-            coords = np.unravel_index(indices=np.argmax(tmp_center_frame),
-                                      shape=tmp_center_frame.shape)
+            center_frame_unsharp = center_frame - gaussian_filter(input=center_frame,
+                                                                  sigma=self.m_sigma)
 
-            y_max, x_max = coords[0], coords[1]
+            for i in range(4):
+                # Approximate positions of waffle spots
+                radius = self.m_radius * wave_nr / wavelength_min
 
-            pixmax = tmp_center_frame[y_max, x_max]
-            max_pos = np.array([x_max, y_max]).reshape(1, 2)
+                x_0 = np.floor(self.m_center[0] + radius *
+                               np.cos(self.m_angle*np.pi/180 + np.pi / 4. * (2 * i)))
 
-            # Check whether it is the correct maximum: second brightest pixel should be nearby
-            tmp_center_frame[y_max, x_max] = 0.
+                y_0 = np.floor(self.m_center[1] + radius *
+                               np.sin(self.m_angle*np.pi/180 + np.pi / 4. * (2 * i)))
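The two statements above place the four satellite spots on a circle of the wavelength-scaled radius, offset clockwise by `angle` from the '+' orientation. A standalone sketch of the same geometry (the helper name is hypothetical):

```python
import numpy as np


def spot_positions(center, radius, angle):
    # Approximate (x, y) positions of the four satellite spots, where `angle`
    # (deg) is the offset from the '+' orientation and the spots are 90 deg apart
    positions = []

    for i in range(4):
        theta = np.radians(angle) + np.pi / 4. * (2 * i)

        x_0 = np.floor(center[0] + radius * np.cos(theta))
        y_0 = np.floor(center[1] + radius * np.sin(theta))

        positions.append((x_0, y_0))

    return positions


# With angle=0 ('+' pattern) the spots lie along the horizontal and vertical axes
print(spot_positions((100., 100.), 45., 0.))
```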
 
-            # introduce distance parameter
-            dist = np.inf
+                tmp_center_frame = crop_image(image=center_frame_unsharp,
+                                              center=(int(y_0), int(x_0)),
+                                              size=ref_image_size)
 
-            while dist > 2:
+                # find maximum in tmp image
                 coords = np.unravel_index(indices=np.argmax(tmp_center_frame),
                                           shape=tmp_center_frame.shape)
 
-                y_max_new, x_max_new = coords[0], coords[1]
+                y_max, x_max = coords[0], coords[1]
 
-                pixmax_new = tmp_center_frame[y_max_new, x_max_new]
+                pixmax = tmp_center_frame[y_max, x_max]
+                max_pos = np.array([x_max, y_max]).reshape(1, 2)
 
-                # Caculate minimal distance to previous points
-                tmp_center_frame[y_max_new, x_max_new] = 0.
+                # Check whether it is the correct maximum: second brightest pixel should be nearby
+                tmp_center_frame[y_max, x_max] = 0.
 
-                dist = np.amin(np.linalg.norm(np.vstack((max_pos[:, 0]-x_max_new,
-                                                         max_pos[:, 1]-y_max_new)),
-                                              axis=0))
+                # introduce distance parameter
+                dist = np.inf
 
-                if dist <= 2 and pixmax_new < pixmax:
-                    break
+                while dist > 2:
+                    coords = np.unravel_index(indices=np.argmax(tmp_center_frame),
+                                              shape=tmp_center_frame.shape)
 
-                max_pos = np.vstack((max_pos, [x_max_new, y_max_new]))
+                    y_max_new, x_max_new = coords[0], coords[1]
 
-                x_max = x_max_new
-                y_max = y_max_new
-                pixmax = pixmax_new
+                    pixmax_new = tmp_center_frame[y_max_new, x_max_new]
 
-            x_0 = x_0 - (ref_image_size-1)/2 + x_max
-            y_0 = y_0 - (ref_image_size-1)/2 + y_max
+                    # Calculate the minimal distance to previous points
+                    tmp_center_frame[y_max_new, x_max_new] = 0.
 
-            # create reference image around determined maximum
-            ref_center_frame = crop_image(image=center_frame_unsharp,
-                                          center=(int(y_0), int(x_0)),
-                                          size=ref_image_size)
+                    dist = np.amin(np.linalg.norm(np.vstack((max_pos[:, 0]-x_max_new,
+                                                             max_pos[:, 1]-y_max_new)),
+                                                  axis=0))
 
-            # Fit the data using astropy.modeling
-            gauss_init = models.Gaussian2D(amplitude=np.amax(ref_center_frame),
-                                           x_mean=x_0,
-                                           y_mean=y_0,
-                                           x_stddev=1.,
-                                           y_stddev=1.,
-                                           theta=0.)
+                    if dist <= 2 and pixmax_new < pixmax:
+                        break
 
-            fit_gauss = fitting.LevMarLSQFitter()
+                    max_pos = np.vstack((max_pos, [x_max_new, y_max_new]))
 
-            y_grid, x_grid = np.mgrid[y_0-(ref_image_size-1)/2:y_0+(ref_image_size-1)/2+1,
-                                      x_0-(ref_image_size-1)/2:x_0+(ref_image_size-1)/2+1]
+                    x_max = x_max_new
+                    y_max = y_max_new
+                    pixmax = pixmax_new
 
-            gauss = fit_gauss(gauss_init,
-                              x_grid,
-                              y_grid,
-                              ref_center_frame)
+                x_0 = x_0 - (ref_image_size-1)/2 + x_max
+                y_0 = y_0 - (ref_image_size-1)/2 + y_max
 
-            x_pos[i] = gauss.x_mean.value
-            y_pos[i] = gauss.y_mean.value
+                # create reference image around determined maximum
+                ref_center_frame = crop_image(image=center_frame_unsharp,
+                                              center=(int(y_0), int(x_0)),
+                                              size=ref_image_size)
 
-        # Find star position as intersection of two lines
+                # Fit the data using astropy.modeling
+                gauss_init = models.Gaussian2D(amplitude=np.amax(ref_center_frame),
+                                               x_mean=x_0,
+                                               y_mean=y_0,
+                                               x_stddev=1.,
+                                               y_stddev=1.,
+                                               theta=0.)
 
-        x_center = ((y_pos[0]-x_pos[0]*(y_pos[2]-y_pos[0])/(x_pos[2]-float(x_pos[0]))) -
-                    (y_pos[1]-x_pos[1]*(y_pos[1]-y_pos[3])/(x_pos[1]-float(x_pos[3])))) / \
-                   ((y_pos[1]-y_pos[3])/(x_pos[1]-float(x_pos[3])) -
-                    (y_pos[2]-y_pos[0])/(x_pos[2]-float(x_pos[0])))
+                fit_gauss = fitting.LevMarLSQFitter()
 
-        y_center = x_center*(y_pos[1]-y_pos[3])/(x_pos[1]-float(x_pos[3])) + \
-            (y_pos[1]-x_pos[1]*(y_pos[1]-y_pos[3])/(x_pos[1]-float(x_pos[3])))
+                y_grid, x_grid = np.mgrid[y_0-(ref_image_size-1)/2:y_0+(ref_image_size-1)/2+1,
+                                          x_0-(ref_image_size-1)/2:x_0+(ref_image_size-1)/2+1]
 
-        nimages = self.m_image_in_port.get_shape()[0]
-        npix = self.m_image_in_port.get_shape()[1]
+                gauss = fit_gauss(gauss_init,
+                                  x_grid,
+                                  y_grid,
+                                  ref_center_frame)
+
+                x_pos[i] = gauss.x_mean.value
+                y_pos[i] = gauss.y_mean.value
+
+            # Find star position as intersection of two lines
+
+            x_center[w] = ((y_pos[0]-x_pos[0]*(y_pos[2]-y_pos[0])/(x_pos[2]-float(x_pos[0]))) -
+                           (y_pos[1]-x_pos[1]*(y_pos[1]-y_pos[3])/(x_pos[1]-float(x_pos[3])))) / \
+                          ((y_pos[1]-y_pos[3])/(x_pos[1]-float(x_pos[3])) -
+                           (y_pos[2]-y_pos[0])/(x_pos[2]-float(x_pos[0])))
+
+            y_center[w] = x_center[w]*(y_pos[1]-y_pos[3])/(x_pos[1]-float(x_pos[3])) + \
+                (y_pos[1]-x_pos[1]*(y_pos[1]-y_pos[3])/(x_pos[1]-float(x_pos[3])))
+
+        # Adjust science images
+        nimages = self.m_image_in_port.get_shape()[-3]
+        npix = self.m_image_in_port.get_shape()[-2]
+        nwavelengths = len(wavelength)
 
         start_time = time.time()
+
         for i in range(nimages):
-            progress(i, nimages, 'Centering the images...', start_time)
+            im_storage = []
+            for j in range(nwavelengths):
+                im_index = i*nwavelengths + j
 
-            image = self.m_image_in_port[i, ]
+                progress(im_index, nimages*nwavelengths, 'Centering the images...', start_time)
 
-            shift_yx = np.array([(float(im_shape[-2])-1.)/2. - y_center,
-                                 (float(im_shape[-1])-1.)/2. - x_center])
+                if ndim == 3:
+                    image = self.m_image_in_port[i, ]
+                elif ndim == 4:
+                    image = self.m_image_in_port[j, i, ]
 
-            if self.m_dither:
-                index = np.digitize(i, nframes, right=False) - 1
+                shift_yx = np.array([(float(im_shape[-2])-1.)/2. - y_center[j],
+                                     (float(im_shape[-1])-1.)/2. - x_center[j]])
 
-                shift_yx[0] -= dither_y[index]
-                shift_yx[1] -= dither_x[index]
+                if self.m_dither:
+                    index = np.digitize(i, nframes, right=False) - 1
 
-            if npix % 2 == 0 and self.m_size is not None:
-                im_tmp = np.zeros((image.shape[0]+1, image.shape[1]+1))
-                im_tmp[:-1, :-1] = image
-                image = im_tmp
+                    shift_yx[0] -= dither_y[index]
+                    shift_yx[1] -= dither_x[index]
 
-                shift_yx[0] += 0.5
-                shift_yx[1] += 0.5
+                if npix % 2 == 0 and self.m_size is not None:
+                    im_tmp = np.zeros((image.shape[0]+1, image.shape[1]+1))
+                    im_tmp[:-1, :-1] = image
+                    image = im_tmp
 
-            im_shift = shift_image(image, shift_yx, 'spline')
+                    shift_yx[0] += 0.5
+                    shift_yx[1] += 0.5
 
-            if self.m_size is not None:
-                im_crop = crop_image(im_shift, None, self.m_size)
-                self.m_image_out_port.append(im_crop, data_dim=3)
-            else:
-                self.m_image_out_port.append(im_shift, data_dim=3)
+                im_shift = shift_image(image, shift_yx, 'spline')
+
+                if self.m_size is not None:
+                    im_crop = crop_image(im_shift, None, self.m_size)
+                    im_storage.append(im_crop)
+                else:
+                    im_storage.append(im_shift)
+
+            if ndim == 3:
+                self.m_image_out_port.append(im_storage[0], data_dim=3)
+            elif ndim == 4:
+                self.m_image_out_port.append(np.asarray(im_storage), data_dim=4)
 
         print(f'Center [x, y] = [{x_center}, {y_center}]')
 
-        history = f'[x, y] = [{round(x_center, 2)}, {round(y_center, 2)}]'
+        history = f'[x, y] = [{round(x_center[j], 2)}, {round(y_center[j], 2)}]'
         self.m_image_out_port.copy_attributes(self.m_image_in_port)
         self.m_image_out_port.add_history('WaffleCenteringModule', history)
         self.m_image_out_port.close_port()
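As a side note on the `x_center[w]`/`y_center[w]` expressions added above: the star is found as the crossing point of the two lines through opposing waffle spots (0-2 and 1-3). A standalone sketch of the same algebra, with the slopes and intercepts named (the function name and four-spot input are illustrative, not part of the module):

```python
import numpy as np

def intersect_spots(x_pos, y_pos):
    # Star center as the crossing point of the line through spots 0 and 2
    # and the line through spots 1 and 3 (opposing waffle-spot pairs)
    slope_02 = (y_pos[2] - y_pos[0]) / (x_pos[2] - x_pos[0])
    slope_13 = (y_pos[1] - y_pos[3]) / (x_pos[1] - x_pos[3])

    # Intercepts from y = slope * x + intercept
    b_02 = y_pos[0] - slope_02 * x_pos[0]
    b_13 = y_pos[1] - slope_13 * x_pos[1]

    # Equate the two lines, solve for x, then substitute back for y
    x_center = (b_02 - b_13) / (slope_13 - slope_02)
    y_center = slope_13 * x_center + b_13

    return x_center, y_center
```

Term by term this matches the hunk's formula: `x_center = (b_02 - b_13) / (slope_13 - slope_02)` with `b = y - slope * x`.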
diff --git a/pynpoint/processing/darkflat.py b/pynpoint/processing/darkflat.py
index 821efb7..e4c5741 100644
--- a/pynpoint/processing/darkflat.py
+++ b/pynpoint/processing/darkflat.py
@@ -85,7 +85,7 @@ class DarkCalibrationModule(ProcessingModule):
             None
         """
 
-        super(DarkCalibrationModule, self).__init__(name_in=name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_dark_in_port = self.add_input_port(dark_in_tag)
@@ -158,7 +158,7 @@ class FlatCalibrationModule(ProcessingModule):
             None
         """
 
-        super(FlatCalibrationModule, self).__init__(name_in=name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_flat_in_port = self.add_input_port(flat_in_tag)
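The `super()` edits in this file (and the ones below) are pure Python 3 modernisation: the zero-argument form resolves the class and instance implicitly and behaves identically here. A minimal illustration with stand-in class names (not the actual pynpoint classes):

```python
class ProcessingModule:
    def __init__(self, name_in):
        self.m_name = name_in

class DarkCalibration(ProcessingModule):
    def __init__(self, name_in):
        # Zero-argument form, equivalent to the old
        # super(DarkCalibration, self).__init__(name_in=name_in)
        super().__init__(name_in)

module = DarkCalibration('dark_calibration')
```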
diff --git a/pynpoint/processing/extract.py b/pynpoint/processing/extract.py
index 19c1919..5606ea4 100644
--- a/pynpoint/processing/extract.py
+++ b/pynpoint/processing/extract.py
@@ -12,8 +12,8 @@ import numpy as np
 from typeguard import typechecked
 
 from pynpoint.core.processing import ProcessingModule
-from pynpoint.util.image import crop_image, center_pixel, rotate_coordinates
-from pynpoint.util.star import locate_star
+from pynpoint.util.apply_func import crop_around_star, crop_rotating_star
+from pynpoint.util.image import rotate_coordinates
 
 
 class StarExtractionModule(ProcessingModule):
@@ -65,7 +65,7 @@ class StarExtractionModule(ProcessingModule):
             None
         """
 
-        super(StarExtractionModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -79,17 +79,13 @@ class StarExtractionModule(ProcessingModule):
         self.m_fwhm_star = fwhm_star
         self.m_position = position
 
-        self.m_count = 0
-
     @typechecked
     def run(self) -> None:
         """
         Run method of the module. Locates the position of the star (only pixel precision) by
         selecting the highest pixel value. A Gaussian kernel with a FWHM similar to the PSF is
         used to lower the contribution of bad pixels which may have higher values than the peak
-        of the PSF. Images are cropped and written to an output port. The position of the star
-        is attached to the input images (only with ``CPU == 1``) as the non-static attribute
-        ``STAR_POSITION`` (y, x).
+        of the PSF. Images are cropped and written to an output port.
 
         Returns
         -------
@@ -99,7 +95,11 @@ class StarExtractionModule(ProcessingModule):
 
         cpu = self._m_config_port.get_attribute('CPU')
 
-        if cpu > 1:
+        if cpu > 1 and self.m_index_out_port is not None:
+            warnings.warn('The \'index_out_port\' can only be used if CPU = 1. No data will '
+                          'be stored to this output port.')
+            warnings.warn('The \'index_out_port\' can only be used if CPU = 1. No data will '
+
+            del self._m_output_ports[self.m_index_out_port.tag]
             self.m_index_out_port = None
 
         pixscale = self.m_image_in_port.get_attribute('PIXSCALE')
@@ -107,76 +107,26 @@ class StarExtractionModule(ProcessingModule):
         self.m_image_size = int(math.ceil(self.m_image_size/pixscale))
         self.m_fwhm_star = int(math.ceil(self.m_fwhm_star/pixscale))
 
-        star = []
-        index = []
-
-        @typechecked
-        def _crop_around_star(image: np.ndarray,
-                              position: Optional[Union[Tuple[int, int, float],
-                                                       Tuple[None, None, float]]],
-                              im_size: int,
-                              fwhm: int) -> np.ndarray:
-
-            if position is None:
-                center = None
-                width = None
-
-            else:
-                if position[0] is None and position[1] is None:
-                    center = None
-                else:
-                    center = (position[1], position[0])  # (y, x)
-
-                width = int(math.ceil(position[2]/pixscale))
-
-            starpos = locate_star(image, center, width, fwhm)
-
-            try:
-                im_crop = crop_image(image, tuple(starpos), im_size)
-
-            except ValueError:
-                if cpu == 1:
-                    warnings.warn(f'Chosen image size is too large to crop the image around the '
-                                  f'brightest pixel (image index = {self.m_count}, pixel [x, y] '
-                                  f'= [{starpos[0]}, {starpos[1]}]). Using the center of the '
-                                  f'image instead.')
-
-                    index.append(self.m_count)
-
-                else:
-                    warnings.warn('Chosen image size is too large to crop the image around the '
-                                  'brightest pixel. Using the center of the image instead.')
-
-                starpos = center_pixel(image)
-                im_crop = crop_image(image, tuple(starpos), im_size)
-
-            if cpu == 1:
-                star.append((starpos[1], starpos[0]))
-                self.m_count += 1
-
-            return im_crop
-
-        self.apply_function_to_images(_crop_around_star,
+        self.apply_function_to_images(crop_around_star,
                                       self.m_image_in_port,
                                       self.m_image_out_port,
                                       'Extracting stellar position',
                                       func_args=(self.m_position,
                                                  self.m_image_size,
-                                                 self.m_fwhm_star))
+                                                 self.m_fwhm_star,
+                                                 pixscale,
+                                                 self.m_index_out_port,
+                                                 self.m_image_out_port))
 
-        history = f'fwhm_star [pix] = {self.m_fwhm_star}'
+        history = f'fwhm_star (pix) = {self.m_fwhm_star}'
 
         if self.m_index_out_port is not None:
-            self.m_index_out_port.set_all(index, data_dim=1)
             self.m_index_out_port.copy_attributes(self.m_image_in_port)
             self.m_index_out_port.add_history('StarExtractionModule', history)
 
         self.m_image_out_port.copy_attributes(self.m_image_in_port)
         self.m_image_out_port.add_history('StarExtractionModule', history)
 
-        if cpu == 1:
-            self.m_image_out_port.add_attribute('STAR_POSITION', np.asarray(star), static=False)
-
         self.m_image_out_port.close_port()
 
 
@@ -228,7 +178,7 @@ class ExtractBinaryModule(ProcessingModule):
             None
         """
 
-        super(ExtractBinaryModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -272,30 +222,16 @@ class ExtractBinaryModule(ProcessingModule):
         if self.m_filter_size is not None:
             self.m_filter_size = int(math.ceil(self.m_filter_size/pixscale))
 
-        @typechecked
-        def _crop_rotating_star(image: np.ndarray,
-                                position: Union[Tuple[float, float], np.ndarray],
-                                im_size: int,
-                                filter_size: Optional[int]) -> np.ndarray:
-
-            starpos = locate_star(image=image,
-                                  center=tuple(position),
-                                  width=self.m_search_size,
-                                  fwhm=filter_size)
-
-            return crop_image(image=image,
-                              center=tuple(starpos),
-                              size=im_size)
-
-        self.apply_function_to_images(_crop_rotating_star,
+        self.apply_function_to_images(crop_rotating_star,
                                       self.m_image_in_port,
                                       self.m_image_out_port,
                                       'Extracting binary position',
                                       func_args=(positions,
                                                  self.m_image_size,
-                                                 self.m_filter_size))
+                                                 self.m_filter_size,
+                                                 self.m_search_size))
 
-        history = f'filter [pix] = {self.m_filter_size}'
+        history = f'filter (pix) = {self.m_filter_size}'
         self.m_image_out_port.copy_attributes(self.m_image_in_port)
         self.m_image_out_port.add_history('ExtractBinaryModule', history)
         self.m_image_out_port.close_port()
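The `_crop_around_star` logic removed from this file moves to `pynpoint.util.apply_func.crop_around_star`. Its core idea, per the docstring above, is to smooth with a Gaussian of roughly the PSF's FWHM before taking the brightest pixel, so that bad pixels cannot outshine the stellar peak. A simplified, self-contained sketch (not the actual helper, which also handles the `position`/`width` arguments and the too-large-crop fallback):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def locate_star(image, fwhm):
    # Smooth with a Gaussian of comparable FWHM to the PSF so that a
    # single hot pixel cannot outshine the (extended) stellar peak
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    smoothed = gaussian_filter(image, sigma)
    return np.unravel_index(np.argmax(smoothed), smoothed.shape)  # (y, x)

def crop_around_star(image, im_size, fwhm):
    # Crop an odd-sized box of im_size pixels around the detected star
    y_star, x_star = locate_star(image, fwhm)
    half = im_size // 2
    return image[y_star-half:y_star+half+1, x_star-half:x_star+half+1]
```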
diff --git a/pynpoint/processing/filter.py b/pynpoint/processing/filter.py
index 69c30c6..f6b86e4 100644
--- a/pynpoint/processing/filter.py
+++ b/pynpoint/processing/filter.py
@@ -44,7 +44,7 @@ class GaussianFilterModule(ProcessingModule):
             None
         """
 
-        super(GaussianFilterModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
diff --git a/pynpoint/processing/fluxposition.py b/pynpoint/processing/fluxposition.py
index 17c119b..f6ecb70 100644
--- a/pynpoint/processing/fluxposition.py
+++ b/pynpoint/processing/fluxposition.py
@@ -2,7 +2,6 @@
 Pipeline modules for photometric and astrometric measurements.
 """
 
-import sys
 import time
 import warnings
 
@@ -15,10 +14,10 @@ import emcee
 from typeguard import typechecked
 from scipy.optimize import minimize
 from sklearn.decomposition import PCA
-from photutils import aperture_photometry, CircularAperture
-from photutils.aperture import Aperture
+from photutils import CircularAperture
 
 from pynpoint.core.processing import ProcessingModule
+from pynpoint.util.apply_func import photometry
 from pynpoint.util.analysis import fake_planet, merit_function, false_alarm, pixel_variance
 from pynpoint.util.image import create_mask, polar_to_cartesian, cartesian_to_polar, \
                                 center_subpixel, rotate_coordinates
@@ -76,7 +75,7 @@ class FakePlanetModule(ProcessingModule):
             None
         """
 
-        super(FakePlanetModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
 
@@ -105,6 +104,12 @@ class FakePlanetModule(ProcessingModule):
             None
         """
 
+        print('Input parameters:')
+        print(f'   - Magnitude = {self.m_magnitude:.2f}')
+        print(f'   - PSF scaling = {self.m_psf_scaling}')
+        print(f'   - Separation (arcsec) = {self.m_position[0]:.2f}')
+        print(f'   - Position angle (deg) = {self.m_position[1]:.2f}')
+
         memory = self._m_config_port.get_attribute('MEMORY')
         parang = self.m_image_in_port.get_attribute('PARANG')
         pixscale = self.m_image_in_port.get_attribute('PIXSCALE')
@@ -172,10 +177,10 @@ class SimplexMinimizationModule(ProcessingModule):
                  psf_in_tag: str,
                  res_out_tag: str,
                  flux_position_tag: str,
-                 position: Tuple[int, int],
+                 position: Tuple[float, float],
                  magnitude: float,
                  psf_scaling: float = -1.,
-                 merit: str = 'hessian',
+                 merit: str = 'gaussian',
                  aperture: float = 0.1,
                  sigma: float = 0.0,
                  tolerance: float = 0.1,
@@ -205,9 +210,12 @@ class SimplexMinimizationModule(ProcessingModule):
             output. Each step of the minimization stores the x position (pixels), y position
             (pixels), separation (arcsec), angle (deg), contrast (mag), and the chi-square value.
             The last row contains the best-fit results.
-        position : tuple(int, int)
-            Approximate position (x, y) of the planet in pixels. This is also the location where
-            the figure of merit is calculated within an aperture of radius ``aperture``.
+        position : tuple(float, float)
+            Approximate position of the planet (x, y), provided with subpixel precision (i.e. as
+            floats). The figure of merit is calculated within an aperture of radius ``aperture``
+            centered at the rounded (i.e. integer) coordinates of ``position``. When setting
+            ``offset=0.``, the ``position`` is used as the fixed position of the planet and
+            only the contrast is retrieved.
         magnitude : float
             Approximate magnitude of the planet relative to the star.
         psf_scaling : float
@@ -254,7 +262,8 @@ class SimplexMinimizationModule(ProcessingModule):
             position measurements in the context of RDI.
         offset : float, None
             Offset (pixels) by which the injected negative PSF may deviate from ``position``. The
-            constraint on the position is not applied if set to ``None``.
+            constraint on the position is not applied if set to None. Only the contrast is
+            optimized and the position is fixed to ``position`` if ``offset=0.``.
 
         Returns
         -------
@@ -262,7 +271,7 @@ class SimplexMinimizationModule(ProcessingModule):
             None
         """
 
-        super(SimplexMinimizationModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
 
@@ -303,6 +312,7 @@ class SimplexMinimizationModule(ProcessingModule):
 
         if isinstance(pca_number, int):
             self.m_pca_number = [pca_number]
+
         else:
             self.m_pca_number = pca_number
 
@@ -319,18 +329,31 @@ class SimplexMinimizationModule(ProcessingModule):
             None
         """
 
+        print('Input parameters:')
+        print(f'   - Number of principal components = {self.m_pca_number}')
+        print(f'   - Figure of merit = {self.m_merit}')
+        print(f'   - Residuals type = {self.m_residuals}')
+        print(f'   - Absolute tolerance (pixels/mag) = {self.m_tolerance}')
+        print(f'   - Maximum offset = {self.m_offset}')
+        print(f'   - Guessed position (x, y) = ({self.m_position[0]:.2f}, '
+              f'{self.m_position[1]:.2f})')
+
         parang = self.m_image_in_port.get_attribute('PARANG')
         pixscale = self.m_image_in_port.get_attribute('PIXSCALE')
 
-        aperture = (self.m_position[1], self.m_position[0], self.m_aperture/pixscale)
+        aperture = (round(self.m_position[1]), round(self.m_position[0]), self.m_aperture/pixscale)
+        print(f'   - Aperture position (x, y) = ({aperture[1]}, {aperture[0]})')
+        print(f'   - Aperture radius (pixels) = {int(aperture[2])}')
 
         self.m_sigma /= pixscale
 
         if self.m_cent_size is not None:
             self.m_cent_size /= pixscale
+            print(f'   - Inner mask radius (pixels) = {int(self.m_cent_size)}')
 
         if self.m_edge_size is not None:
             self.m_edge_size /= pixscale
+            print(f'   - Outer mask radius (pixels) = {int(self.m_edge_size)}')
 
         psf = self.m_psf_in_port.get_all()
         images = self.m_image_in_port.get_all()
@@ -342,6 +365,12 @@ class SimplexMinimizationModule(ProcessingModule):
                              'the SimplexMinimizationModule.')
 
         center = center_subpixel(psf)
+        print(f'Image center (y, x) = {center}')
+
+        # Rotate the initial position, (y, x), by the extra rotation angle to (y_rot, x_rot)
+        pos_init = rotate_coordinates(center,
+                                      (self.m_position[1], self.m_position[0]),
+                                      self.m_extra_rot)
 
         if self.m_reference_in_port is not None and self.m_merit != 'poisson':
             raise NotImplementedError('The reference_in_tag can only be used in combination with '
@@ -354,21 +383,28 @@ class SimplexMinimizationModule(ProcessingModule):
                        sklearn_pca: Optional[PCA],
                        var_noise: Optional[float]) -> float:
 
-            pos_y = arg[0]
-            pos_x = arg[1]
-            mag = arg[2]
+            # Extract the contrast, y position, and x position from the argument tuple
+            mag = arg[0]
 
-            if self.m_offset is not None:
-                if pos_x < self.m_position[0] - self.m_offset or \
-                        pos_x > self.m_position[0] + self.m_offset:
-                    return np.inf
+            if self.m_offset is None or self.m_offset > 0.:
+                pos_y = arg[1]
+                pos_x = arg[2]
 
-                if pos_y < self.m_position[1] - self.m_offset or \
-                        pos_y > self.m_position[1] + self.m_offset:
-                    return np.inf
+            else:
+                pos_y = pos_init[0]
+                pos_x = pos_init[1]
 
+            # Calculate the absolute offset (pixels) with respect to the initial guess
+            pos_offset = np.sqrt((pos_x-pos_init[1])**2 + (pos_y-pos_init[0])**2)
+
+            if self.m_offset is not None and pos_offset > self.m_offset:
+                # Return chi-square = inf if the offset needs to be tested and is too large
+                return np.inf
+
+            # Convert the cartesian position to a separation and position angle
             sep_ang = cartesian_to_polar(center, pos_y, pos_x)
 
+            # Inject the negative artificial planet at the position and contrast that are tested
             fake = fake_planet(images=images,
                                psf=psf,
                                parang=parang,
@@ -376,9 +412,11 @@ class SimplexMinimizationModule(ProcessingModule):
                                magnitude=mag,
                                psf_scaling=self.m_psf_scaling)
 
+            # Create a mask
             mask = create_mask(fake.shape[-2:], (self.m_cent_size, self.m_edge_size))
 
             if self.m_reference_in_port is None:
+                # PSF subtraction with the science data as reference data (ADI)
                 im_res_rot, im_res_derot = pca_psf_subtraction(images=fake*mask,
                                                                angles=-1.*parang+self.m_extra_rot,
                                                                pca_number=n_components,
@@ -387,6 +425,7 @@ class SimplexMinimizationModule(ProcessingModule):
                                                                indices=None)
 
             else:
+                # PSF subtraction with separate reference data (RDI)
                 im_reshape = np.reshape(fake*mask, (im_shape[0], im_shape[1]*im_shape[2]))
 
                 im_res_rot, im_res_derot = pca_psf_subtraction(images=im_reshape,
@@ -396,43 +435,44 @@ class SimplexMinimizationModule(ProcessingModule):
                                                                im_shape=im_shape,
                                                                indices=None)
 
+            # Collapse the residuals of the PSF subtraction
             res_stack = combine_residuals(method=self.m_residuals,
                                           res_rot=im_res_derot,
                                           residuals=im_res_rot,
                                           angles=parang)
 
+            # Append the collapsed residuals to the output port
             self.m_res_out_port[count].append(res_stack, data_dim=3)
 
-            chi_square = merit_function(residuals=res_stack[0, ],
-                                        merit=self.m_merit,
-                                        aperture=aperture,
-                                        sigma=self.m_sigma,
-                                        var_noise=var_noise)
+            # Calculate the chi-square for the tested position and contrast
+            chi_sq = merit_function(residuals=res_stack[0, ],
+                                    merit=self.m_merit,
+                                    aperture=aperture,
+                                    sigma=self.m_sigma,
+                                    var_noise=var_noise)
 
+            # Apply the extra rotation to the y and x position
+            # The returned position is given as (y, x)
             position = rotate_coordinates(center, (pos_y, pos_x), -self.m_extra_rot)
 
+            # Create an array with the x position, y position, separation (arcsec), position
+            # angle (deg), contrast (mag), and chi-square
             res = np.asarray([position[1],
                               position[0],
                               sep_ang[0]*pixscale,
                               (sep_ang[1]-self.m_extra_rot) % 360.,
                               mag,
-                              chi_square])
+                              chi_sq])
 
+            # Append the results to the output port
             self.m_flux_pos_port[count].append(res, data_dim=2)
 
-            sys.stdout.write('\rSimplex minimization... ')
-            sys.stdout.write(f'{n_components} PC - chi^2 = {chi_square:.8E}')
-            sys.stdout.flush()
+            print(f'\rSimplex minimization... {n_components} PC - chi^2 = {chi_sq:.2e}', end='')
 
-            return chi_square
-
-        pos_init = rotate_coordinates(center,
-                                      (self.m_position[1], self.m_position[0]),  # (y, x)
-                                      self.m_extra_rot)
+            return chi_sq
 
         for i, n_components in enumerate(self.m_pca_number):
-            sys.stdout.write(f'\rSimplex minimization... {n_components} PC ')
-            sys.stdout.flush()
+            print(f'\rSimplex minimization... {n_components} PC ', end='')
 
             if self.m_reference_in_port is None:
                 sklearn_pca = None
@@ -479,15 +519,37 @@ class SimplexMinimizationModule(ProcessingModule):
                                            aperture=aperture,
                                            sigma=self.m_sigma)
 
-            minimize(fun=_objective,
-                     x0=np.array([pos_init[0], pos_init[1], self.m_magnitude]),
-                     args=(i, n_components, sklearn_pca, var_noise),
-                     method='Nelder-Mead',
-                     tol=None,
-                     options={'xatol': self.m_tolerance, 'fatol': float('inf')})
+            if self.m_offset == 0.:
+                x0_minimize = np.array([self.m_magnitude])
+            else:
+                x0_minimize = np.array([self.m_magnitude, pos_init[0], pos_init[1]])
+
+            min_result = minimize(fun=_objective,
+                                  x0=x0_minimize,
+                                  args=(i, n_components, sklearn_pca, var_noise),
+                                  method='Nelder-Mead',
+                                  tol=None,
+                                  options={'xatol': self.m_tolerance, 'fatol': float('inf')})
+
+            print(' [DONE]')
+
+            if self.m_offset == 0.:
+                pos_x = pos_init[1]
+                pos_y = pos_init[0]
+
+            else:
+                pos_x = min_result.x[2]
+                pos_y = min_result.x[1]
+
+            pos_rot_yx = rotate_coordinates(center, (pos_y, pos_x), -self.m_extra_rot)
+
+            sep_ang = cartesian_to_polar(center, pos_rot_yx[0], pos_rot_yx[1])
 
-        sys.stdout.write(' [DONE]\n')
-        sys.stdout.flush()
+            print('Best-fit parameters:')
+            print(f'   - Position (x, y) = ({pos_rot_yx[1]:.2f}, {pos_rot_yx[0]:.2f})')
+            print(f'   - Separation (mas) = {sep_ang[0]*pixscale*1e3:.2f}')
+            print(f'   - Position angle (deg) = {sep_ang[1]:.2f}')
+            print(f'   - Contrast (mag) = {min_result.x[0]:.2f}')
 
         history = f'merit = {self.m_merit}'
 
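The hunk above shrinks `x0` to a single element when `offset=0`, so only the contrast is fitted. The Nelder-Mead call pattern with a parameter-only tolerance (`xatol` set, `fatol` disabled) can be sketched with a toy objective; the minimum at 8.5 mag and the tolerance value are made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def _objective(params):
    # Toy chi-square surface with a minimum at a contrast of 8.5 mag
    return (params[0] - 8.5)**2

min_result = minimize(fun=_objective,
                      x0=np.array([10.]),  # initial guess for the contrast (mag)
                      method='Nelder-Mead',
                      tol=None,
                      options={'xatol': 0.01, 'fatol': float('inf')})

print(f'Best-fit contrast (mag) = {min_result.x[0]:.2f}')
```

With `fatol=float('inf')` the simplex terminates on the parameter tolerance alone, mirroring the `options` dictionary used in the diff.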
@@ -576,7 +638,7 @@ class FalsePositiveModule(ProcessingModule):
             warnings.warn('The \'bounds\' keyword argument has been deprecated. Please use '
                           '\'offset\' instead (e.g. offset=3.0).', DeprecationWarning)
 
-        super(FalsePositiveModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_snr_out_port = self.add_output_port(snr_out_tag)
@@ -604,13 +666,10 @@ class FalsePositiveModule(ProcessingModule):
 
             pos_x, pos_y = arg
 
-            if self.m_offset is not None:
-                if pos_x < self.m_position[0] - self.m_offset or \
-                        pos_x > self.m_position[0] + self.m_offset:
-                    snr = 0.
+            pos_offset = np.sqrt((pos_x-self.m_position[0])**2 + (pos_y-self.m_position[1])**2)
 
-                elif pos_y < self.m_position[1] - self.m_offset or \
-                        pos_y > self.m_position[1] + self.m_offset:
+            if self.m_offset is not None:
+                if pos_offset > self.m_offset:
                     snr = 0.
 
                 else:
@@ -628,13 +687,18 @@ class FalsePositiveModule(ProcessingModule):
         pixscale = self.m_image_in_port.get_attribute('PIXSCALE')
         self.m_aperture /= pixscale
 
+        print('Input parameters:')
+        print(f'   - Aperture position = {self.m_position}')
+        print(f'   - Aperture radius (pixels) = {self.m_aperture:.2f}')
+        print(f'   - Optimize aperture position = {self.m_optimize}')
+        print(f'   - Ignore neighboring apertures = {self.m_ignore}')
+        print(f'   - Minimization tolerance = {self.m_tolerance}')
+
         nimages = self.m_image_in_port.get_shape()[0]
 
-        start_time = time.time()
+        print('Calculating the S/N and FPF...')
 
         for j in range(nimages):
-            progress(j, nimages, 'Calculating S/N and FPF...', start_time)
-
             image = self.m_image_in_port[j, ]
             center = center_subpixel(image)
 
@@ -662,12 +726,15 @@ class FalsePositiveModule(ProcessingModule):
 
                 x_pos, y_pos = self.m_position[0], self.m_position[1]
 
+            print(f'Image {j+1:03d}/{nimages} -> (x, y) = ({x_pos:.2f}, {y_pos:.2f}), '
+                  f'S/N = {snr:.2f}, FPF = {fpf:.2e}')
+
             sep_ang = cartesian_to_polar(center, y_pos, x_pos)
             result = np.column_stack((x_pos, y_pos, sep_ang[0]*pixscale, sep_ang[1], snr, fpf))
 
             self.m_snr_out_port.append(result, data_dim=2)
 
-        history = f'aperture [arcsec] = {self.m_aperture*pixscale:.2f}'
+        history = f'aperture (arcsec) = {self.m_aperture*pixscale:.2f}'
         self.m_snr_out_port.copy_attributes(self.m_image_in_port)
         self.m_snr_out_port.add_history('FalsePositiveModule', history)
         self.m_snr_out_port.close_port()
@@ -773,7 +840,7 @@ class MCMCsamplingModule(ProcessingModule):
         else:
             self.m_sigma = (1e-5, 1e-3, 1e-3)
 
-        super(MCMCsamplingModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
 
@@ -814,6 +881,10 @@ class MCMCsamplingModule(ProcessingModule):
             None
         """
 
+        print('Input parameters:')
+        print(f'   - Number of principal components: {self.m_pca_number}')
+        print(f'   - Figure of merit: {self.m_merit}')
+
         ndim = 3
 
         cpu = self._m_config_port.get_attribute('CPU')
@@ -846,15 +917,13 @@ class MCMCsamplingModule(ProcessingModule):
 
         if isinstance(self.m_aperture, float):
             yx_pos = polar_to_cartesian(images, self.m_param[0]/pixscale, self.m_param[1])
-            aperture = (int(round(yx_pos[0])), int(round(yx_pos[1])), self.m_aperture/pixscale)
+            aperture = (round(yx_pos[0]), round(yx_pos[1]), self.m_aperture/pixscale)
 
         elif isinstance(self.m_aperture, tuple):
             aperture = (self.m_aperture[1], self.m_aperture[0], self.m_aperture[2]/pixscale)
 
-        print(f'Number of principal components: {self.m_pca_number}')
-        print(f'Aperture position [x, y]: [{aperture[1]}, {aperture[0]}]')
-        print(f'Aperture radius (pixels): {aperture[2]:.2f}')
-        print(f'Figure of merit: {self.m_merit}')
+        print(f'   - Aperture position (x, y): ({aperture[1]}, {aperture[0]})')
+        print(f'   - Aperture radius (pixels): {int(aperture[2])}')
 
         if self.m_merit == 'poisson':
             var_noise = None
@@ -993,7 +1062,7 @@ class AperturePhotometryModule(ProcessingModule):
             None
         """
 
-        super(AperturePhotometryModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_phot_out_port = self.add_output_port(phot_out_tag)
@@ -1013,17 +1082,6 @@ class AperturePhotometryModule(ProcessingModule):
             None
         """
 
-        @typechecked
-        def _photometry(image: np.ndarray,
-                        aperture: Union[Aperture, List[Aperture]]) -> np.ndarray:
-            # https://photutils.readthedocs.io/en/stable/overview.html
-            # In Photutils, pixel coordinates are zero-indexed, meaning that (x, y) = (0, 0)
-            # corresponds to the center of the lowest, leftmost array element. This means that
-            # the value of data[0, 0] is taken as the value over the range -0.5 < x <= 0.5,
-            # -0.5 < y <= 0.5. Note that this is the same coordinate system as used by PynPoint.
-
-            return np.array(aperture_photometry(image, aperture, method='exact')['aperture_sum'])
-
         pixscale = self.m_image_in_port.get_attribute('PIXSCALE')
         self.m_radius /= pixscale
 
@@ -1040,13 +1098,18 @@ class AperturePhotometryModule(ProcessingModule):
         # Position in CircularAperture is defined as (x, y)
         aperture = CircularAperture((self.m_position[0], self.m_position[1]), self.m_radius)
 
-        self.apply_function_to_images(_photometry,
+        self.apply_function_to_images(photometry,
                                       self.m_image_in_port,
                                       self.m_phot_out_port,
                                       'Aperture photometry',
                                       func_args=(aperture, ))
 
-        history = f'radius [arcsec] = {self.m_radius*pixscale:.3f}'
+        self.m_phot_in_port = self.add_input_port(self.m_phot_out_port.tag)
+        data = self.m_phot_in_port.get_all()
+
+        print(f'Mean flux (counts) = {np.mean(data):.2f} +/- {np.std(data):.2f}')
+
+        history = f'radius (pixels) = {self.m_radius:.3f}'
         self.m_phot_out_port.copy_attributes(self.m_image_in_port)
         self.m_phot_out_port.add_history('AperturePhotometryModule', history)
         self.m_phot_out_port.close_port()
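`AperturePhotometryModule` sums the flux inside a `CircularAperture` with `photutils`. A pure-NumPy pixel-mask approximation of that sum (the grid size, aperture position, and radius here are arbitrary; `photutils` itself computes the exact overlap of the aperture with each pixel):

```python
import numpy as np

# Synthetic 2-D image with a uniform value of 2.0 counts per pixel
image = np.full((51, 51), 2.0)

# Circular aperture centered at (x, y) = (25, 25) with a 5 pixel radius
yy, xx = np.indices(image.shape)
mask = (xx - 25.)**2 + (yy - 25.)**2 <= 5.**2

# Pixel-mask approximation of the aperture sum: each pixel is either
# fully inside or fully outside, unlike the 'exact' method of photutils
aperture_sum = np.sum(image[mask])
print(aperture_sum)
```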
@@ -1128,8 +1191,9 @@ class SystematicErrorModule(ProcessingModule):
         residuals : str
             Method for combining the residuals ('mean', 'median', 'weighted', or 'clipped').
         offset : float, None
-            Offset (pix) by which the negative PSF may deviate from the positive injected PSF. No
-            constraint on the position is applied if set to None.
+            Offset (pixels) by which the negative PSF may deviate from the positive injected PSF.
+            No constraint on the position is applied if set to None. If ``offset=0``, only the
+            contrast is optimized and the position is fixed to the injected value.
 
         Returns
         -------
@@ -1137,7 +1201,7 @@ class SystematicErrorModule(ProcessingModule):
             None
         """
 
-        super(SystematicErrorModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_tag = image_in_tag
         self.m_psf_in_tag = psf_in_tag
@@ -1173,6 +1237,14 @@ class SystematicErrorModule(ProcessingModule):
             None
         """
 
+        print('Input parameters:')
+        print(f'   - Number of principal components = {self.m_pca_number}')
+        print(f'   - Figure of merit = {self.m_merit}')
+        print(f'   - Residuals type = {self.m_residuals}')
+        print(f'   - Absolute tolerance (pixels/mag) = {self.m_tolerance}')
+        print(f'   - Maximum offset = {self.m_offset}')
+        print(f'   - Aperture radius (arcsec) = {self.m_aperture}')
+
         pixscale = self.m_image_in_port.get_attribute('PIXSCALE')
         image = self.m_image_in_port[0, ]
 
@@ -1180,7 +1252,8 @@ class SystematicErrorModule(ProcessingModule):
                                   image_in_tag=self.m_image_in_tag,
                                   psf_in_tag=self.m_psf_in_tag,
                                   image_out_tag=f'{self._m_name}_empty',
-                                  position=self.m_position,
+                                  position=(self.m_position[0],
+                                            self.m_position[1]+self.m_extra_rot),
                                   magnitude=self.m_magnitude,
                                   psf_scaling=-self.m_psf_scaling)
 
@@ -1190,12 +1263,38 @@ class SystematicErrorModule(ProcessingModule):
         module.run()
 
         sep = float(self.m_position[0])
+
         angles = np.linspace(self.m_angles[0], self.m_angles[1], self.m_angles[2], endpoint=True)
 
+        print('Testing the following parameters:')
+        print(f'   - Contrast (mag) = {self.m_magnitude:.2f}')
+        print(f'   - Separation (mas) = {sep*1e3:.1f}')
+        print(f'   - Position angle range (deg) = {angles[0]} - {angles[-1]}')
+
+        if angles.size > 1:
+            print(f'     in steps of {np.mean(np.diff(angles)):.2f} deg')
+
+        # Image center (y, x) with subpixel accuracy
+        im_center = center_subpixel(image)
+
         for i, ang in enumerate(angles):
-            print(f'Processing position angle: {ang} deg...')
+            print(f'\nProcessing position angle: {ang} deg...')
+
+            # Convert the polar coordinates (separation and position angle) that are tested
+            # into cartesian coordinates (y, x)
+            planet_pos_yx = polar_to_cartesian(image, sep/pixscale, ang)
+            planet_pos_xy = (planet_pos_yx[1], planet_pos_yx[0])
 
-            module = FakePlanetModule(position=(sep, ang),
+            # Convert the planet position to polar coordinates
+            planet_sep_ang = cartesian_to_polar(im_center, planet_pos_yx[0], planet_pos_yx[1])
+
+            # Change the separation units to arcsec
+            planet_sep_ang = (planet_sep_ang[0]*pixscale, planet_sep_ang[1])
+
+            # Inject the artificial planet
+
+            module = FakePlanetModule(position=(planet_sep_ang[0],
+                                                planet_sep_ang[1]+self.m_extra_rot),
                                       magnitude=self.m_magnitude,
                                       psf_scaling=self.m_psf_scaling,
                                       name_in=f'{self._m_name}_fake_{i}',
@@ -1208,10 +1307,9 @@ class SystematicErrorModule(ProcessingModule):
             module._m_output_ports[f'{self._m_name}_fake'].del_all_attributes()
             module.run()
 
-            position = polar_to_cartesian(image, sep/pixscale, ang)
-            position = (int(round(position[1])), int(round(position[0])))
+            # Retrieve the position and contrast of the artificial planet
 
-            module = SimplexMinimizationModule(position=position,
+            module = SimplexMinimizationModule(position=planet_pos_xy,
                                                magnitude=self.m_magnitude,
                                                psf_scaling=-self.m_psf_scaling,
                                                name_in=f'{self._m_name}_fake_{i}',
@@ -1227,7 +1325,7 @@ class SystematicErrorModule(ProcessingModule):
                                                cent_size=self.m_mask[0],
                                                edge_size=self.m_mask[1],
                                                extra_rot=self.m_extra_rot,
-                                               residuals='median',
+                                               residuals=self.m_residuals,
                                                offset=self.m_offset)
 
             module.connect_database(self._m_data_base)
@@ -1237,14 +1335,21 @@ class SystematicErrorModule(ProcessingModule):
             module._m_output_ports[f'{self._m_name}_fluxpos'].del_all_attributes()
             module.run()
 
+            # Add the input port to collect the results of SimplexMinimizationModule
             fluxpos_out_port = self.add_input_port(f'{self._m_name}_fluxpos')
 
-            data = [self.m_position[0] - fluxpos_out_port[-1, 2],
-                    ang - fluxpos_out_port[-1, 3],
-                    self.m_magnitude - fluxpos_out_port[-1, 4]]
+            # Create a list with the offsets between the injected and retrieved values of the
+            # separation (arcsec), position angle (deg), contrast (mag), x position (pixels),
+            # and y position (pixels).
+            data = [planet_sep_ang[0] - fluxpos_out_port[-1, 2],  # Separation (arcsec)
+                    planet_sep_ang[1] - fluxpos_out_port[-1, 3],  # Position angle (deg)
+                    self.m_magnitude - fluxpos_out_port[-1, 4],  # Contrast (mag)
+                    planet_pos_xy[0] - fluxpos_out_port[-1, 0],  # Position x (pixels)
+                    planet_pos_xy[1] - fluxpos_out_port[-1, 1]]  # Position y (pixels)
 
             if data[1] > 180.:
                 data[1] -= 360.
+
             elif data[1] < -180.:
                 data[1] += 360.
 
@@ -1258,18 +1363,28 @@ class SystematicErrorModule(ProcessingModule):
         sep_percen = np.percentile(offsets[:, 0], [16., 50., 84.])
         ang_percen = np.percentile(offsets[:, 1], [16., 50., 84.])
         mag_percen = np.percentile(offsets[:, 2], [16., 50., 84.])
+        x_pos_percen = np.percentile(offsets[:, 3], [16., 50., 84.])
+        y_pos_percen = np.percentile(offsets[:, 4], [16., 50., 84.])
+
+        print('\nMedian offset and uncertainties:')
+
+        print(f'   - Position x (pixels) = {x_pos_percen[1]:.2f} '
+              f'(-{x_pos_percen[1]-x_pos_percen[0]:.2f} '
+              f'+{x_pos_percen[2]-x_pos_percen[1]:.2f})')
 
-        print('Median and uncertainties:')
+        print(f'   - Position y (pixels) = {y_pos_percen[1]:.2f} '
+              f'(-{y_pos_percen[1]-y_pos_percen[0]:.2f} '
+              f'+{y_pos_percen[2]-y_pos_percen[1]:.2f})')
 
-        print(f'Separation [mas] = {1e3*sep_percen[1]:.2f} '
+        print(f'   - Separation (mas) = {1e3*sep_percen[1]:.2f} '
               f'(-{1e3*sep_percen[1]-1e3*sep_percen[0]:.2f} '
               f'+{1e3*sep_percen[2]-1e3*sep_percen[1]:.2f})')
 
-        print(f'Position angle [deg] = {ang_percen[1]:.2f} '
+        print(f'   - Position angle (deg) = {ang_percen[1]:.2f} '
               f'(-{ang_percen[1]-ang_percen[0]:.2f} '
               f'+{ang_percen[2]-ang_percen[1]:.2f})')
 
-        print(f'Contrast [mag] = {mag_percen[1]:.2f} '
+        print(f'   - Contrast (mag) = {mag_percen[1]:.2f} '
               f'(-{mag_percen[1]-mag_percen[0]:.2f} '
               f'+{mag_percen[2]-mag_percen[1]:.2f})')
 
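The 16th/50th/84th percentile bookkeeping used above for the median offsets and their asymmetric uncertainties can be sketched on a synthetic sample (the `loc`/`scale` values are made up):

```python
import numpy as np

# Hypothetical offsets between injected and retrieved parameters
rng = np.random.default_rng(seed=1)
offsets = rng.normal(loc=0.02, scale=0.01, size=100)

# For a roughly Gaussian sample, the 16th, 50th, and 84th percentiles
# give the median and the -/+ 1 sigma confidence interval
low, median, high = np.percentile(offsets, [16., 50., 84.])

print(f'{median:.4f} (-{median - low:.4f} +{high - median:.4f})')
```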
diff --git a/pynpoint/processing/frameselection.py b/pynpoint/processing/frameselection.py
index eb01451..f2ea861 100644
--- a/pynpoint/processing/frameselection.py
+++ b/pynpoint/processing/frameselection.py
@@ -16,6 +16,7 @@ from typeguard import typechecked
 from skimage.metrics import structural_similarity, mean_squared_error
 
 from pynpoint.core.processing import ProcessingModule
+from pynpoint.util.apply_func import image_stat
 from pynpoint.util.image import crop_image, pixel_distance, center_pixel
 from pynpoint.util.module import progress
 from pynpoint.util.remove import write_selected_data, write_selected_attributes
@@ -59,7 +60,7 @@ class RemoveFramesModule(ProcessingModule):
             None
         """
 
-        super(RemoveFramesModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
 
@@ -178,7 +179,7 @@ class FrameSelectionModule(ProcessingModule):
             None
         """
 
-        super(FrameSelectionModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
 
@@ -373,7 +374,7 @@ class RemoveLastFrameModule(ProcessingModule):
             None
         """
 
-        super(RemoveLastFrameModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -457,7 +458,7 @@ class RemoveStartFramesModule(ProcessingModule):
             None
         """
 
-        super(RemoveStartFramesModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -572,7 +573,7 @@ class ImageStatisticsModule(ProcessingModule):
             None
         """
 
-        super(ImageStatisticsModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_stat_out_port = self.add_output_port(stat_out_tag)
@@ -614,31 +615,11 @@ class ImageStatisticsModule(ProcessingModule):
                                    int(self.m_position[0]),  # x position
                                    self.m_position[2]/pixscale)  # radius (pix)
 
-            rr_grid = pixel_distance(im_shape, self.m_position[0:2])
+            rr_grid, _, _ = pixel_distance(im_shape, position=self.m_position[0:2])
             rr_reshape = np.reshape(rr_grid, (rr_grid.shape[0]*rr_grid.shape[1]))
             indices = np.where(rr_reshape <= self.m_position[2])[0]
 
-        @typechecked
-        def _image_stat(image_in: np.ndarray,
-                        indices: Optional[np.ndarray]) -> np.ndarray:
-
-            if indices is None:
-                image_select = np.copy(image_in)
-
-            else:
-                image_reshape = np.reshape(image_in, (image_in.shape[0]*image_in.shape[1]))
-                image_select = image_reshape[indices]
-
-            nmin = np.nanmin(image_select)
-            nmax = np.nanmax(image_select)
-            nsum = np.nansum(image_select)
-            mean = np.nanmean(image_select)
-            median = np.nanmedian(image_select)
-            std = np.nanstd(image_select)
-
-            return np.asarray([nmin, nmax, nsum, mean, median, std])
-
-        self.apply_function_to_images(_image_stat,
+        self.apply_function_to_images(image_stat,
                                       self.m_image_in_port,
                                       self.m_stat_out_port,
                                       'Calculating image statistics',
@@ -695,7 +676,7 @@ class FrameSimilarityModule(ProcessingModule):
             None
         """
 
-        super(FrameSimilarityModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_tag)
         self.m_image_out_port = self.add_output_port(image_tag)
@@ -834,9 +815,8 @@ class FrameSimilarityModule(ProcessingModule):
             time.sleep(5)
 
         if nfinished != nimages:
-            sys.stdout.write('\r                                                      ')
-            sys.stdout.write('\rCalculating image similarity... [DONE]\n')
-            sys.stdout.flush()
+            print('\r                                                      ')
+            print('\rCalculating image similarity... [DONE]')
 
         # get the results for every async_result object
         for async_result in async_results:
@@ -916,7 +896,7 @@ class SelectByAttributeModule(ProcessingModule):
                                     removed_out_tag='im_arr_removed'))
         """
 
-        super(SelectByAttributeModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_selected_out_port = self.add_output_port(selected_out_tag)
@@ -1015,7 +995,7 @@ class ResidualSelectionModule(ProcessingModule):
             None
         """
 
-        super(ResidualSelectionModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
 
@@ -1043,7 +1023,7 @@ class ResidualSelectionModule(ProcessingModule):
         nimages = self.m_image_in_port.get_shape()[0]
         npix = self.m_image_in_port.get_shape()[-1]
 
-        rr_grid = pixel_distance((npix, npix), position=None)
+        rr_grid, _, _ = pixel_distance((npix, npix), position=None)
 
         pixel_select = np.where((rr_grid > self.m_annulus_radii[0]/pixscale) &
                                 (rr_grid < self.m_annulus_radii[1]/pixscale))
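The annulus selection above unpacks the new three-tuple return of `pixel_distance`. A minimal stand-in with the same calling convention (the exact center definition of the PynPoint helper may differ; this sketch uses the geometric center):

```python
import numpy as np

def pixel_distance(im_shape, position=None):
    # Minimal stand-in for pynpoint.util.image.pixel_distance: returns the
    # radial distance of each pixel to the image center (or to `position`),
    # together with the x and y coordinate grids
    if position is None:
        center_y = (im_shape[0] - 1) / 2.
        center_x = (im_shape[1] - 1) / 2.
    else:
        center_y, center_x = position

    yy, xx = np.indices(im_shape)
    rr = np.sqrt((xx - center_x)**2 + (yy - center_y)**2)

    return rr, xx, yy

rr_grid, _, _ = pixel_distance((101, 101), position=None)

# Select the pixels inside an annulus with inner and outer radii
# of 10 and 20 pixels
pixel_select = np.where((rr_grid > 10.) & (rr_grid < 20.))
```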
diff --git a/pynpoint/processing/limits.py b/pynpoint/processing/limits.py
index 6074ca8..f113b66 100644
--- a/pynpoint/processing/limits.py
+++ b/pynpoint/processing/limits.py
@@ -3,7 +3,6 @@ Pipeline modules for estimating detection limits.
 """
 
 import os
-import sys
 import math
 import time
 import warnings
@@ -106,7 +105,7 @@ class ContrastCurveModule(ProcessingModule):
             None
         """
 
-        super(ContrastCurveModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         if 'sigma' in kwargs:
             warnings.warn('The \'sigma\' parameter has been deprecated. Please use the '
@@ -275,9 +274,8 @@ class ContrastCurveModule(ProcessingModule):
             time.sleep(5)
 
         if nfinished != len(positions):
-            sys.stdout.write('\r                                                      ')
-            sys.stdout.write('\rCalculating detection limits... [DONE]\n')
-            sys.stdout.flush()
+            print('\r                                                      ')
+            print('\rCalculating detection limits... [DONE]')
 
         # get the results for every async_result object
         for item in async_results:
@@ -362,7 +360,7 @@ class MassLimitsModule(ProcessingModule):
             None
         """
 
-        super(MassLimitsModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_star_age = star_prop['age']/1000.  # [Myr]
         self.m_star_abs = star_prop['magnitude'] - 5.*math.log10(star_prop['distance']/10.)
@@ -392,11 +390,11 @@ class MassLimitsModule(ProcessingModule):
 
         Returns
         -------
-        list(float, )
+        list(float)
             List with all the ages from the model grid.
-        list(numpy.ndarray, )
+        list(np.ndarray)
             List with all the isochrone data, so the length is the same as the number of ages.
-        list(str, )
+        list(str)
             List with all the column names from the model grid.
         """
 
@@ -449,21 +447,21 @@ class MassLimitsModule(ProcessingModule):
 
         Parameters
         ----------
-        age_eval : numpy.ndarray
+        age_eval : np.ndarray
             Age at which the system is evaluated. Must be of the same shape as `mag_eval`.
-        mag_eval : numpy.ndarray
+        mag_eval : np.ndarray
             Absolute magnitude for which the system is evaluated. Must be of the same shape as
             `age_eval`.
         filter_index: int
             Column index where the filter is located.
-        model_age: list(float, )
+        model_age: list(float)
             List of ages which are given by the model.
-        model_data: list(numpy.ndarray, )
+        model_data: list(np.ndarray)
             List of arrays containing the model data.
 
         Returns
         -------
-        griddata : numpy.ndarray
+        griddata : np.ndarray
             Interpolated values for the given evaluation points (age_eval, mag_eval). Has the
             same shape as age_eval and mag_eval.
         """
diff --git a/pynpoint/processing/pcabackground.py b/pynpoint/processing/pcabackground.py
index 8088790..18fcbbb 100644
--- a/pynpoint/processing/pcabackground.py
+++ b/pynpoint/processing/pcabackground.py
@@ -2,29 +2,29 @@
 Pipeline modules for PCA-based background subtraction.
 """
 
-import time
 import math
+import time
 import warnings
 
-from typing import Optional, Tuple, Union
+from typing import List, Optional, Tuple, Union
 
 import numpy as np
 
-from scipy.sparse.linalg import svds
 from scipy.optimize import curve_fit
+from scipy.sparse.linalg import svds
 from typeguard import typechecked
 
 from pynpoint.core.processing import ProcessingModule
+from pynpoint.processing.psfpreparation import SortParangModule
 from pynpoint.processing.resizing import CropImagesModule
 from pynpoint.processing.stacksubset import CombineTagsModule
-from pynpoint.processing.psfpreparation import SortParangModule
-from pynpoint.util.module import progress, memory_frames
+from pynpoint.util.module import memory_frames, progress
 from pynpoint.util.star import locate_star
 
 
 class PCABackgroundPreparationModule(ProcessingModule):
     """
-    Pipeline module for preparing the PCA background subtraction.
+    Pipeline module for preparing the images for a PCA-based background subtraction.
     """
 
     __author__ = 'Tomas Stolker, Silvan Hunziker'
@@ -38,31 +38,31 @@ class PCABackgroundPreparationModule(ProcessingModule):
                  background_out_tag: str,
                  dither: Union[Tuple[int, int, int],
                                Tuple[int, None, Tuple[float, float]]],
-                 combine: str = 'mean',
-                 **kwargs: str) -> None:
+                 combine: str = 'mean') -> None:
         """
         Parameters
         ----------
         name_in : str
             Unique name of the pipeline module instance.
         image_in_tag : str
-            Tag of the database entry that is read as input.
+            Database tag with the images that are read as input.
         star_out_tag : str
-            Output tag with the images containing the star.
+            Database tag to store the images that contain the star.
         subtracted_out_tag : str
-            Output tag with the mean/median background subtracted images with the star.
+            Database tag to store the mean/median background subtracted images with the star.
         background_out_tag : str
-            Output tag with the images containing only background emission.
+            Database tag to store the images that contain only background emission.
         dither : tuple(int, int, int), tuple(int, None, tuple(float, float))
             Tuple with the parameters for separating the star and background frames. The tuple
-            should contain three values (positions, cubes, first) with *positions* the number
-            of unique dithering position, *cubes* the number of consecutive cubes per dithering
-            position, and *first* the index value of the first cube which contains the star
-            (Python indexing starts at zero). Sorting is based on the ``DITHER_X`` and ``DITHER_Y``
-            attributes when *cubes* is set to None. In that case, the *first* value should be
-            a tuple with the ``DITHER_X`` and ``DITHER_Y`` values in which the star appears first.
+            should contain three values, ``(positions, cubes, first)``, with ``positions`` the
+            number of unique dithering positions, ``cubes`` the number of consecutive cubes per
+            dithering position, and ``first`` the index value of the first cube which contains the
+            star (Python indexing starts at zero). Sorting is based on the ``DITHER_X`` and
+            ``DITHER_Y`` attributes when ``cubes`` is set to None. In that case, the ``first``
+            value should be a tuple with the ``DITHER_X`` and ``DITHER_Y`` values in which the star
+            appears first.
         combine : str
-            Method to combine the background images ('mean' or 'median').
+            Method for combining the background images ('mean' or 'median').
 
         Returns
         -------
@@ -70,11 +70,7 @@ class PCABackgroundPreparationModule(ProcessingModule):
             None
         """
 
-        if 'mask_planet' in kwargs:
-            warnings.warn('The \'mean_out_tag\' has been replaced by the \'subtracted_out_tag\'.',
-                          DeprecationWarning)
-
-        super(PCABackgroundPreparationModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_star_out_port = self.add_output_port(star_out_tag)
@@ -100,6 +96,7 @@ class PCABackgroundPreparationModule(ProcessingModule):
         for i, item in enumerate(nframes):
             if self.m_combine == 'mean':
                 cube_mean[i, ] = np.mean(self.m_image_in_port[count:count+item, ], axis=0)
+
             elif self.m_combine == 'median':
                 cube_mean[i, ] = np.median(self.m_image_in_port[count:count+item, ], axis=0)
 
@@ -184,7 +181,7 @@ class PCABackgroundPreparationModule(ProcessingModule):
                 background = (bg_prev+bg_next)/2.
 
             else:
-                raise ValueError('Neither previous nor next background frames found.')
+                raise ValueError('Neither previous nor next background frames were found.')
 
             return background
 
@@ -237,7 +234,7 @@ class PCABackgroundPreparationModule(ProcessingModule):
         """
         Run method of the module. Separates the star and background frames, subtracts the mean
         or median background from both the star and background frames, and writes the star and
-        background frames separately.
+        background frames separately to their respective output ports.
 
         Returns
         -------
@@ -288,7 +285,7 @@ class PCABackgroundPreparationModule(ProcessingModule):
 
 class PCABackgroundSubtractionModule(ProcessingModule):
     """
-    Pipeline module for PCA based background subtraction. See Hunziker et al. 2018 for details.
+    Pipeline module applying a PCA-based background subtraction (see Hunziker et al. 2018).
     """
 
     __author__ = 'Tomas Stolker, Silvan Hunziker'
@@ -303,34 +300,29 @@ class PCABackgroundSubtractionModule(ProcessingModule):
                  mask_out_tag: Optional[str] = None,
                  pca_number: int = 60,
                  mask_star: float = 0.7,
-                 subtract_mean: bool = False,
                  subframe: Optional[float] = None,
                  gaussian: float = 0.15,
-                 **kwargs: tuple) -> None:
+                 **kwargs) -> None:
         """
         Parameters
         ----------
         name_in : str
-            Tag of the database entry with the star images.
+            Unique name of the pipeline module instance.
         star_in_tag : str
-            Tag of the database entry with the star images.
+            Database tag with the input images that contain the star.
         background_in_tag : str
-            Tag of the database entry with the background images.
+            Database tag with the input images that contain only background emission.
         residuals_out_tag : str
-            Tag of the database entry with the residuals of the star images after the background
-            subtraction.
+            Database tag to store the background-subtracted images of the star.
         fit_out_tag : str, None
-            Tag of the database entry with the fitted background. No data is written when set to
-            None.
+            Database tag to store the modeled background images. The data is not stored if the
+            argument is set to None.
         mask_out_tag : str, None
-            Tag of the database entry with the mask. No data is written when set to None.
+            Database tag to store the mask. The data is not stored if the argument is set to None.
         pca_number : int
-            Number of principal components.
+            Number of principal components that are used to model the background emission.
         mask_star : float
             Radius of the central mask (arcsec).
-        subtract_mean : bool
-            The mean of the background images is subtracted from both the star and background
-            images before the PCA basis is constructed.
         gaussian : float
             Full width at half maximum (arcsec) of the Gaussian kernel that is used to smooth the
             image before the star is located.
@@ -344,10 +336,12 @@ class PCABackgroundSubtractionModule(ProcessingModule):
             None
         """
 
-        if 'mask_planet' in kwargs:
-            warnings.warn('The \'mask_planet\' parameter has been deprecated.', DeprecationWarning)
+        super().__init__(name_in)
 
-        super(PCABackgroundSubtractionModule, self).__init__(name_in)
+        if 'subtract_mean' in kwargs:
+            warnings.warn('The \'subtract_mean\' parameter has been deprecated. Subtraction of '
+                          'the mean is no longer optional so subtract_mean=True.',
+                          DeprecationWarning)
 
         self.m_star_in_port = self.add_input_port(star_in_tag)
         self.m_background_in_port = self.add_input_port(background_in_tag)
@@ -365,17 +359,16 @@ class PCABackgroundSubtractionModule(ProcessingModule):
 
         self.m_pca_number = pca_number
         self.m_mask_star = mask_star
-        self.m_subtract_mean = subtract_mean
         self.m_gaussian = gaussian
         self.m_subframe = subframe
 
     @typechecked
     def run(self) -> None:
         """
-        Run method of the module. Creates a PCA basis set of the background frames, masks the PSF
-        in the star frames and optionally an off-axis point source, fits the star frames with a
-        linear combination of the principal components, and writes the residuals of the background
-        subtracted images.
+        Run method of the module. Creates a PCA basis set of the background frames after
+        subtracting the mean background frame from both the star and background frames, masks the
+        PSF of the star, projects the star frames onto the principal components, and stores the
+        residuals of the background subtracted images.
 
         Returns
         -------
@@ -411,24 +404,30 @@ class PCABackgroundSubtractionModule(ProcessingModule):
 
         @typechecked
         def _create_basis(images: np.ndarray,
-                          bg_mean: np.ndarray,
                           pca_number: int) -> np.ndarray:
             """
-            Method for creating a set of principal components for a stack of images.
+            Method for calculating the principal components for a stack of background images.
+
+            Parameters
+            ----------
+            images : np.ndarray
+                Background images with the mean subtracted from all images.
+            pca_number : int
+                Number of principal components that are used to model the background emission.
+
+            Returns
+            -------
+            np.ndarray
+                Principal components with the spatial dimensions reshaped to those of ``images``.
             """
 
-            if self.m_subtract_mean:
-                images -= bg_mean
-
             _, _, v_svd = svds(images.reshape(images.shape[0],
                                               images.shape[1]*images.shape[2]),
                                k=pca_number)
 
             v_svd = v_svd[::-1, ]
 
-            pca_basis = v_svd.reshape(v_svd.shape[0], images.shape[1], images.shape[2])
-
-            return pca_basis
+            return v_svd.reshape(v_svd.shape[0], images.shape[1], images.shape[2])
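The truncated SVD used here can be checked with a small self-contained sketch (an illustration of the same idea with assumed shapes; `svds` returns singular vectors in ascending order, hence the flip):

```python
import numpy as np
from scipy.sparse.linalg import svds

def create_basis(images, pca_number):
    # Flatten each mean-subtracted background image to a row vector and
    # compute the leading right singular vectors with a truncated SVD.
    flat = images.reshape(images.shape[0], -1)
    _, _, v_svd = svds(flat, k=pca_number)
    # svds returns singular values/vectors in ascending order, so flip
    # the rows to put the strongest component first.
    v_svd = v_svd[::-1, ]
    return v_svd.reshape(v_svd.shape[0], images.shape[1], images.shape[2])
```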
 
         @typechecked
         def _model_background(basis: np.ndarray,
@@ -485,15 +484,14 @@ class PCABackgroundSubtractionModule(ProcessingModule):
 
         star = np.zeros((nimages, 2))
         for i, _ in enumerate(star):
-            star[i, :] = locate_star(image=self.m_star_in_port[i, ]-bg_mean,
+            star[i, :] = locate_star(image=self.m_star_in_port[i, ] - bg_mean,
                                      center=None,
                                      width=self.m_subframe,
                                      fwhm=self.m_gaussian)
 
         print('Creating PCA basis set...', end='')
 
-        basis_pca = _create_basis(self.m_background_in_port.get_all(),
-                                  bg_mean,
+        basis_pca = _create_basis(self.m_background_in_port.get_all() - bg_mean,
                                   self.m_pca_number)
 
         print(' [DONE]')
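The fit step that follows can be illustrated with a least-squares sketch (an illustration of the idea, not the module's exact `_model_background`): the star frame is fitted on its unmasked pixels with a linear combination of the principal components, and the model is then evaluated on the full frame.

```python
import numpy as np

def fit_background(basis, image, mask):
    # Fit the masked image with a linear combination of the flattened
    # principal components, then evaluate the model on the full frame.
    b_flat = basis.reshape(basis.shape[0], -1)
    im_flat = (image*mask).reshape(-1)
    b_mask = b_flat * mask.reshape(-1)
    coeff, *_ = np.linalg.lstsq(b_mask.T, im_flat, rcond=None)
    return (b_flat.T @ coeff).reshape(image.shape)
```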
@@ -502,10 +500,8 @@ class PCABackgroundSubtractionModule(ProcessingModule):
         for i, _ in enumerate(frames[:-1]):
             progress(i, len(frames[:-1]), 'Calculating background model...', start_time)
 
-            im_star = self.m_star_in_port[frames[i]:frames[i+1], ]
-
-            if self.m_subtract_mean:
-                im_star -= bg_mean
+            # Subtract the mean background from the star frames
+            im_star = self.m_star_in_port[frames[i]:frames[i+1], ] - bg_mean
 
             mask = _create_mask(self.m_mask_star,
                                 star[frames[i]:frames[i+1], ],
@@ -539,8 +535,8 @@ class PCABackgroundSubtractionModule(ProcessingModule):
 
 class DitheringBackgroundModule(ProcessingModule):
     """
-    Pipeline module for PCA-based background subtraction of data with dithering. This is a wrapper
-    that applies the processing modules required for the PCA background subtraction.
+    Pipeline module for PCA-based background subtraction of dithered data. This is a wrapper that
+    applies the processing modules for either a mean or a PCA-based background subtraction.
     """
 
     __author__ = 'Tomas Stolker'
@@ -550,62 +546,52 @@ class DitheringBackgroundModule(ProcessingModule):
                  name_in: str,
                  image_in_tag: str,
                  image_out_tag: str,
-                 center: Optional[Tuple[Tuple[int, int], ...]] = None,
+                 center: Optional[List[Tuple[int, int]]] = None,
                  cubes: Optional[int] = None,
                  size: float = 2.,
                  gaussian: float = 0.15,
                  subframe: Optional[float] = None,
-                 pca_number: int = 60,
+                 pca_number: Optional[int] = 5,
                  mask_star: float = 0.7,
-                 subtract_mean: bool = False,
-                 **kwargs: Union[bool, str]) -> None:
+                 **kwargs) -> None:
         """
         Parameters
         ----------
         name_in : str
             Unique name of the module instance.
         image_in_tag : str
-            Tag of the database entry that is read as input.
+            Database tag with input images.
         image_out_tag : str
-            Tag of the database entry that is written as output. Not written if set to None.
-        center : tuple(tuple(int, int), ), None
-            Tuple with the centers of the dithering positions, e.g. ((x0,y0), (x1,y1)). The order
+            Database tag to store the background subtracted images.
+        center : list(tuple(int, int)), None
+            List with the centers of the dithering positions, e.g. [(x0, y0), (x1, y1)]. The order
             of the coordinates should correspond to the order in which the star is present. If
-            *center* and *cubes* are both set to None then sorting and subtracting of the
-            background frames is based on DITHER_X and DITHER_Y. If *center* is specified and
-            *cubes* is set to None then the DITHER_X and DITHER_Y attributes will be used for
-            sorting and subtracting of the background but not for selecting the dither positions.
+            ``center`` and ``cubes`` are both set to None then sorting and subtracting of the
+            background frames is based on ``DITHER_X`` and ``DITHER_Y``. If ``center`` is
+            specified and ``cubes`` is set to None then the ``DITHER_X`` and ``DITHER_Y``
+            attributes will be used for sorting and subtracting of the background but not for
+            selecting the dither positions.
         cubes : int, None
-            Number of consecutive cubes per dither position. If *cubes* is set to None then sorting
-            and subtracting of the background frames is based on DITHER_X and DITHER_Y.
+            Number of consecutive cubes per dither position. If ``cubes`` is set to None then
+            sorting and subtracting of the background frames is based on ``DITHER_X`` and
+            ``DITHER_Y``.
         size : float
-            Image size (arsec) that is cropped at the specified dither positions.
+            Cropped image size (arcsec).
         gaussian : float
             Full width at half maximum (arcsec) of the Gaussian kernel that is used to smooth the
             image before the star is located.
         subframe : float, None
             Size (arcsec) of the subframe that is used to search for the star. Cropping of the
-            subframe is done around the center of the dithering position. If set to None then the
-            full frame size (*size*) will be used.
-        pca_number : int
-            Number of principal components.
+            subframe is done around the center of the dithering position. The full image size
+            (i.e. ``size``) will be used if the argument is set to None.
+        pca_number : int, None
+            Number of principal components that are used to model the background emission. The
+            PCA background subtraction is skipped if the argument is set to None. In that case,
+            the mean background subtracted images are written to ``image_out_tag``.
         mask_star : float
-            Radius of the central mask (arcsec).
-        subtract_mean : bool
-            The mean of the background images is subtracted from both the star and background
-            images before the PCA basis is constructed.
-
-        Keyword Arguments
-        -----------------
-        crop : bool
-            Skip the step of selecting and cropping of the dithering positions if set to False.
-        prepare : bool
-            Skip the step of preparing the PCA background subtraction if set to False.
-        pca_background : bool
-            Skip the step of the PCA background subtraction if set to False.
-        combine : str
-            Combine the mean background subtracted ('mean') or PCA background subtracted ('pca')
-            frames. This step is ignored if set to None.
+            Radius of the central mask (arcsec) that is used to exclude the star when fitting the
+            principal components. The region behind the mask is included when subtracting the
+            PCA background model.
 
         Returns
         -------
@@ -617,26 +603,29 @@ class DitheringBackgroundModule(ProcessingModule):
             warnings.warn('The \'mask_planet\' parameter has been deprecated.', DeprecationWarning)
 
         if 'crop' in kwargs:
-            self.m_crop = kwargs['crop']
-        else:
-            self.m_crop = True
+            warnings.warn('The \'crop\' parameter has been deprecated. The step to crop the '
+                          'images is no longer optional so crop=True.', DeprecationWarning)
 
         if 'prepare' in kwargs:
-            self.m_prepare = kwargs['prepare']
-        else:
-            self.m_prepare = True
+            warnings.warn('The \'prepare\' parameter has been deprecated. The preparation step '
+                          'is no longer optional so prepare=True.', DeprecationWarning)
 
         if 'pca_background' in kwargs:
-            self.m_pca_background = kwargs['pca_background']
-        else:
-            self.m_pca_background = True
+            warnings.warn('The \'pca_background\' parameter has been deprecated. The PCA '
+                          'background subtraction is applied whenever pca_number is not '
+                          'set to None.', DeprecationWarning)
+
+        if 'subtract_mean' in kwargs:
+            warnings.warn('The \'subtract_mean\' parameter has been deprecated. Subtraction of '
+                          'the mean is no longer optional so subtract_mean=True.',
+                          DeprecationWarning)
 
         if 'combine' in kwargs:
-            self.m_combine = kwargs['combine']
-        else:
-            self.m_combine = 'pca'
+            warnings.warn('The \'combine\' parameter has been deprecated. Writing the mean '
+                          'background subtracted images to image_out_tag is done by setting '
+                          'pca_number=None.', DeprecationWarning)
 
-        super(DitheringBackgroundModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -647,7 +636,6 @@ class DitheringBackgroundModule(ProcessingModule):
         self.m_gaussian = gaussian
         self.m_pca_number = pca_number
         self.m_mask_star = mask_star
-        self.m_subtract_mean = subtract_mean
         self.m_subframe = subframe
 
         self.m_image_in_tag = image_in_tag
@@ -664,6 +652,7 @@ class DitheringBackgroundModule(ProcessingModule):
             dither_xy[:, 1] = dither_y
 
             _, index = np.unique(dither_xy, axis=0, return_index=True)
+
             dither = dither_xy[np.sort(index)]
 
             npix = self.m_image_in_port.get_shape()[1]
@@ -708,19 +697,18 @@ class DitheringBackgroundModule(ProcessingModule):
                          n_dither: int,
                          position: Tuple[int, int],
                          star_pos: Union[np.ndarray, np.int64]) -> None:
-            if self.m_crop or self.m_prepare or self.m_pca_background:
-                print(f'Processing dither position {count+1} out of {n_dither}...')
-                print(f'Center position = {position}')
+            print(f'Processing dither position {count+1} out of {n_dither}...')
+            print(f'Center position = {position}')
 
-                if self.m_cubes is None and self.m_center is not None:
-                    print(f'DITHER_X, DITHER_Y = {tuple(star_pos)}')
+            if self.m_cubes is None and self.m_center is not None:
+                print(f'DITHER_X, DITHER_Y = {tuple(star_pos)}')
 
         @typechecked
         def _admin_end(count: int) -> None:
-            if self.m_combine == 'mean':
+            if self.m_pca_number is None:
                 tags.append(f'{self.m_image_in_tag}_dither_mean{count+1}')
 
-            elif self.m_combine == 'pca':
+            else:
                 tags.append(f'{self.m_image_in_tag}_dither_pca_res{count+1}')
 
         n_dither, star_pos = self._initialize()
@@ -729,66 +717,64 @@ class DitheringBackgroundModule(ProcessingModule):
         for i, position in enumerate(self.m_center):
             _admin_start(i, n_dither, position, star_pos[i])
 
-            if self.m_crop:
-                im_out_tag = f'{self.m_image_in_tag}_dither_crop{i+1}'
+            im_out_tag = f'{self.m_image_in_tag}_dither_crop{i+1}'
 
-                module = CropImagesModule(name_in=f'crop{i}',
-                                          image_in_tag=self.m_image_in_tag,
-                                          image_out_tag=im_out_tag,
-                                          size=self.m_size,
-                                          center=(int(math.ceil(position[0])),
-                                                  int(math.ceil(position[1]))))
+            module = CropImagesModule(name_in=f'crop{i}',
+                                      image_in_tag=self.m_image_in_tag,
+                                      image_out_tag=im_out_tag,
+                                      size=self.m_size,
+                                      center=(int(math.ceil(position[0])),
+                                              int(math.ceil(position[1]))))
 
-                module.connect_database(self._m_data_base)
-                module._m_output_ports[im_out_tag].del_all_data()
-                module._m_output_ports[im_out_tag].del_all_attributes()
-                module.run()
+            module.connect_database(self._m_data_base)
+            module._m_output_ports[im_out_tag].del_all_data()
+            module._m_output_ports[im_out_tag].del_all_attributes()
+            module.run()
 
-            if self.m_prepare:
-                if self.m_cubes is None:
-                    dither_val = (n_dither, self.m_cubes, tuple(star_pos[i]))
-                else:
-                    dither_val = (n_dither, self.m_cubes, int(star_pos[i]))
-
-                im_in_tag = f'{self.m_image_in_tag}_dither_crop{i+1}'
-                star_out_tag = f'{self.m_image_in_tag}_dither_star{i+1}'
-                sub_out_tag = f'{self.m_image_in_tag}_dither_mean{i+1}'
-                back_out_tag = f'{self.m_image_in_tag}_dither_background{i+1}'
-
-                module = PCABackgroundPreparationModule(name_in=f'prepare{i}',
-                                                        image_in_tag=im_in_tag,
-                                                        star_out_tag=star_out_tag,
-                                                        subtracted_out_tag=sub_out_tag,
-                                                        background_out_tag=back_out_tag,
-                                                        dither=dither_val,
-                                                        combine='mean')
+            if self.m_cubes is None:
+                dither_val = (n_dither, self.m_cubes, tuple(star_pos[i]))
+            else:
+                dither_val = (n_dither, self.m_cubes, int(star_pos[i]))
 
-                module.connect_database(self._m_data_base)
-                module._m_output_ports[star_out_tag].del_all_data()
-                module._m_output_ports[star_out_tag].del_all_attributes()
-                module._m_output_ports[sub_out_tag].del_all_data()
-                module._m_output_ports[sub_out_tag].del_all_attributes()
-                module._m_output_ports[back_out_tag].del_all_data()
-                module._m_output_ports[back_out_tag].del_all_attributes()
-                module.run()
+            im_in_tag = f'{self.m_image_in_tag}_dither_crop{i+1}'
+            star_out_tag = f'{self.m_image_in_tag}_dither_star{i+1}'
+            sub_out_tag = f'{self.m_image_in_tag}_dither_mean{i+1}'
+            back_out_tag = f'{self.m_image_in_tag}_dither_background{i+1}'
+
+            module = PCABackgroundPreparationModule(name_in=f'prepare{i}',
+                                                    image_in_tag=im_in_tag,
+                                                    star_out_tag=star_out_tag,
+                                                    subtracted_out_tag=sub_out_tag,
+                                                    background_out_tag=back_out_tag,
+                                                    dither=dither_val,
+                                                    combine='mean')
+
+            module.connect_database(self._m_data_base)
+            module._m_output_ports[star_out_tag].del_all_data()
+            module._m_output_ports[star_out_tag].del_all_attributes()
+            module._m_output_ports[sub_out_tag].del_all_data()
+            module._m_output_ports[sub_out_tag].del_all_attributes()
+            module._m_output_ports[back_out_tag].del_all_data()
+            module._m_output_ports[back_out_tag].del_all_attributes()
+            module.run()
 
-            if self.m_pca_background:
+            if self.m_pca_number is not None:
                 star_in_tag = f'{self.m_image_in_tag}_dither_star{i+1}'
                 back_in_tag = f'{self.m_image_in_tag}_dither_background{i+1}'
                 res_out_tag = f'{self.m_image_in_tag}_dither_pca_res{i+1}'
                 fit_out_tag = f'{self.m_image_in_tag}_dither_pca_fit{i+1}'
                 mask_out_tag = f'{self.m_image_in_tag}_dither_pca_mask{i+1}'
 
-                module = PCABackgroundSubtractionModule(pca_number=self.m_pca_number,
-                                                        mask_star=self.m_mask_star,
-                                                        subtract_mean=self.m_subtract_mean,
-                                                        subframe=self.m_subframe,
-                                                        name_in=f'pca_background{i}',
+                module = PCABackgroundSubtractionModule(name_in=f'pca_background{i}',
                                                         star_in_tag=star_in_tag,
                                                         background_in_tag=back_in_tag,
                                                         residuals_out_tag=res_out_tag,
                                                         fit_out_tag=fit_out_tag,
-                                                        mask_out_tag=mask_out_tag)
+                                                        mask_out_tag=mask_out_tag,
+                                                        pca_number=self.m_pca_number,
+                                                        mask_star=self.m_mask_star,
+                                                        subframe=self.m_subframe,
+                                                        gaussian=self.m_gaussian)
 
                 module.connect_database(self._m_data_base)
                 module._m_output_ports[res_out_tag].del_all_data()
@@ -801,23 +787,22 @@ class DitheringBackgroundModule(ProcessingModule):
 
             _admin_end(i)
 
-        if self.m_combine is not None and self.m_image_out_tag is not None:
-            module = CombineTagsModule(name_in='combine',
-                                       check_attr=True,
-                                       index_init=False,
-                                       image_in_tags=tags,
-                                       image_out_tag=self.m_image_in_tag+'_dither_combine')
-
-            module.connect_database(self._m_data_base)
-            module._m_output_ports[self.m_image_in_tag+'_dither_combine'].del_all_data()
-            module._m_output_ports[self.m_image_in_tag+'_dither_combine'].del_all_attributes()
-            module.run()
-
-            module = SortParangModule(name_in='sort',
-                                      image_in_tag=self.m_image_in_tag+'_dither_combine',
-                                      image_out_tag=self.m_image_out_tag)
-
-            module.connect_database(self._m_data_base)
-            module._m_output_ports[self.m_image_out_tag].del_all_data()
-            module._m_output_ports[self.m_image_out_tag].del_all_attributes()
-            module.run()
+        module = CombineTagsModule(name_in='combine',
+                                   check_attr=True,
+                                   index_init=False,
+                                   image_in_tags=tags,
+                                   image_out_tag=self.m_image_in_tag+'_dither_combine')
+
+        module.connect_database(self._m_data_base)
+        module._m_output_ports[self.m_image_in_tag+'_dither_combine'].del_all_data()
+        module._m_output_ports[self.m_image_in_tag+'_dither_combine'].del_all_attributes()
+        module.run()
+
+        module = SortParangModule(name_in='sort',
+                                  image_in_tag=self.m_image_in_tag+'_dither_combine',
+                                  image_out_tag=self.m_image_out_tag)
+
+        module.connect_database(self._m_data_base)
+        module._m_output_ports[self.m_image_out_tag].del_all_data()
+        module._m_output_ports[self.m_image_out_tag].del_all_attributes()
+        module.run()
diff --git a/pynpoint/processing/psfpreparation.py b/pynpoint/processing/psfpreparation.py
index 5be687e..fdd6910 100644
--- a/pynpoint/processing/psfpreparation.py
+++ b/pynpoint/processing/psfpreparation.py
@@ -21,10 +21,10 @@ from pynpoint.util.image import create_mask, scale_image, shift_image
 class PSFpreparationModule(ProcessingModule):
     """
     Module to prepare the data for PSF subtraction with PCA. The preparation steps include masking
-    and image normalization.
+    and an optional normalization.
     """
 
-    __author__ = 'Markus Bonse, Tomas Stolker, Timothy Gebhard'
+    __author__ = 'Markus Bonse, Tomas Stolker, Timothy Gebhard, Sven Kiefer'
 
     @typechecked
     def __init__(self,
@@ -49,7 +49,8 @@ class PSFpreparationModule(ProcessingModule):
             Tag of the database entry with the mask that is written as output. If set to None, no
             mask array is saved.
         norm : bool
-            Normalize each image by its Frobenius norm.
+            Normalize each image by its Frobenius norm. Only supported for 3D datasets (i.e.
+            regular imaging).
         resize : float, None
             DEPRECATED. This parameter is currently ignored by the module and will be removed in a
             future version of PynPoint.
@@ -66,7 +67,7 @@ class PSFpreparationModule(ProcessingModule):
             None
         """
 
-        super(PSFpreparationModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
 
@@ -91,25 +92,40 @@ class PSFpreparationModule(ProcessingModule):
     def run(self) -> None:
         """
         Run method of the module. Masks and normalizes the images.
 
         Returns
         -------
         NoneType
             None
         """
 
-        # Get PIXSCALE and MEMORY attributes
+        # Get the PIXSCALE and MEMORY attributes
         pixscale = self.m_image_in_port.get_attribute('PIXSCALE')
         memory = self._m_config_port.get_attribute('MEMORY')
 
-        # Get the number of images and split into batches to comply with memory constraints
+        # Get the number of dimensions and shape
+        ndim = self.m_image_in_port.get_ndim()
         im_shape = self.m_image_in_port.get_shape()
-        nimages = im_shape[0]
-        frames = memory_frames(memory, nimages)
+
+        if ndim == 3:
+            # Number of images
+            nimages = im_shape[-3]
+
+            # Split into batches to comply with memory constraints
+            frames = memory_frames(memory, nimages)
+
+        elif ndim == 4:
+            # Process all wavelengths per exposure at once
+            frames = np.linspace(0, im_shape[-3], im_shape[-3]+1)
+
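The batching for 3D data relies on `pynpoint.util.module.memory_frames`; its behavior can be approximated with a simplified stand-in (an assumption-laden sketch, not the actual helper) that splits `nimages` into boundaries of at most `memory` frames:

```python
import numpy as np

def memory_frames(memory, nimages):
    # Simplified stand-in: return the batch boundaries that split
    # `nimages` frames into chunks of at most `memory` frames each.
    if memory == 0 or memory >= nimages:
        return np.asarray([0, nimages])
    frames = np.arange(0, nimages, memory)
    return np.append(frames, nimages)
```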
+        if self.m_norm and ndim == 4:
+            warnings.warn('The \'norm\' parameter does not support 4D datasets and will therefore '
+                          'be ignored.')
 
         # Convert m_cent_size and m_edge_size from arcseconds to pixels
+
         if self.m_cent_size is not None:
             self.m_cent_size /= pixscale
+
         if self.m_edge_size is not None:
             self.m_edge_size /= pixscale
 
@@ -121,34 +137,42 @@ class PSFpreparationModule(ProcessingModule):
         # we are not normalizing, this list will remain empty)
         norms = list()
 
-        # Run the PSFpreparationModule for each subset of frames
         start_time = time.time()
-        for i, _ in enumerate(frames[:-1]):
 
+        # Run the PSFpreparationModule for each subset of frames
+        for i in range(frames[:-1].size):
             # Print progress to command line
             progress(i, len(frames[:-1]), 'Preparing images for PSF subtraction...', start_time)
 
-            # Get the images and ensure they have the correct 3D shape with the following
-            # three dimensions: (batch_size, height, width)
-            images = self.m_image_in_port[frames[i]:frames[i+1], ]
+            if ndim == 3:
+                # Get the images and ensure they have the correct 3D shape with the following
+                # three dimensions: (batch_size, height, width)
+                images = self.m_image_in_port[frames[i]:frames[i+1], ]
+
+                if images.ndim == 2:
+                    warnings.warn('The input data has 2 dimensions whereas 3 dimensions are '
+                                  'required. An extra dimension has been added.')
 
-            if images.ndim == 2:
-                warnings.warn('The input data has 2 dimensions whereas 3 dimensions are required. '
-                              'An extra dimension has been added.')
+                    images = images[np.newaxis, ...]
 
-                images = images[np.newaxis, ...]
+            elif ndim == 4:
+                # Process all wavelengths per exposure at once
+                images = self.m_image_in_port[:, i, ]
 
             # Apply the mask, i.e., set all pixels to 0 where the mask is False
             images[:, ~mask] = 0.
 
             # If desired, normalize the images using the Frobenius norm
-            if self.m_norm:
+            if self.m_norm and ndim == 3:
                 im_norm = np.linalg.norm(images, ord='fro', axis=(1, 2))
                 images /= im_norm[:, np.newaxis, np.newaxis]
                 norms.append(im_norm)
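The normalization branch above amounts to dividing each frame by its Frobenius norm; a minimal sketch of that step (illustrative, for a 3D stack only):

```python
import numpy as np

def normalize_frobenius(images):
    # Divide each image in the stack by its Frobenius norm so that
    # every frame has unit norm; also return the norms themselves.
    norms = np.linalg.norm(images, ord='fro', axis=(1, 2))
    return images / norms[:, np.newaxis, np.newaxis], norms
```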
 
             # Write processed images to output port
-            self.m_image_out_port.append(images, data_dim=3)
+            if ndim == 3:
+                self.m_image_out_port.append(images, data_dim=3)
+            elif ndim == 4:
+                self.m_image_out_port.append(images, data_dim=4)
 
         # Store information about mask
         if self.m_mask_out_port is not None:
@@ -170,6 +194,7 @@ class PSFpreparationModule(ProcessingModule):
             self.m_image_out_port.add_attribute(name='cent_size',
                                                 value=self.m_cent_size * pixscale,
                                                 static=True)
+
         if self.m_edge_size is not None:
             self.m_image_out_port.add_attribute(name='edge_size',
                                                 value=self.m_edge_size * pixscale,
@@ -202,7 +227,7 @@ class AngleInterpolationModule(ProcessingModule):
             None
         """
 
-        super(AngleInterpolationModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_data_in_port = self.add_input_port(data_tag)
         self.m_data_out_port = self.add_output_port(data_tag)
@@ -264,7 +289,7 @@ class AngleInterpolationModule(ProcessingModule):
 
 class SortParangModule(ProcessingModule):
     """
-    Module to sort the images and non-static attributes with increasing INDEX.
+    Module to sort the images and attributes with increasing ``INDEX``.
     """
 
     __author__ = 'Tomas Stolker'
@@ -280,10 +305,10 @@ class SortParangModule(ProcessingModule):
         name_in : str
             Unique name of the module instance.
         image_in_tag : str
-            Tag of the database entry that is read as input.
+            Database tag with the input data.
         image_out_tag : str
-            Tag of the database entry with images that is written as output. Should be different
-            from *image_in_tag*.
+            Database tag where the output data will be stored. Should be different from
+            ``image_in_tag``.
 
         Returns
         -------
@@ -291,7 +316,7 @@ class SortParangModule(ProcessingModule):
             None
         """
 
-        super(SortParangModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -299,7 +324,8 @@ class SortParangModule(ProcessingModule):
     @typechecked
     def run(self) -> None:
         """
-        Run method of the module. Sorts the images and relevant non-static attributes.
+        Run method of the module. Sorts the images and attributes with increasing ``INDEX``.
+        Hence, the images are restored to their original (usually chronological) order.
 
         Returns
         -------
@@ -307,12 +333,12 @@ class SortParangModule(ProcessingModule):
             None
         """
 
-        if self.m_image_in_port.tag == self.m_image_out_port.tag:
-            raise ValueError('Input and output port should have a different tag.')
-
         memory = self._m_config_port.get_attribute('MEMORY')
         index = self.m_image_in_port.get_attribute('INDEX')
 
+        ndim = self.m_image_in_port.get_ndim()
+        nimages = self.m_image_in_port.get_shape()[-3]
+
         index_new = np.zeros(index.shape, dtype=np.int)
 
         if 'PARANG' in self.m_image_in_port.get_all_non_static_attributes():
@@ -331,11 +357,10 @@ class SortParangModule(ProcessingModule):
 
         index_sort = np.argsort(index)
 
-        nimages = self.m_image_in_port.get_shape()[0]
-
         frames = memory_frames(memory, nimages)
 
         start_time = time.time()
+
         for i, _ in enumerate(frames[:-1]):
             progress(i, len(frames[:-1]), 'Sorting images in time...', start_time)
 
@@ -347,9 +372,13 @@ class SortParangModule(ProcessingModule):
             if star_new is not None:
                 star_new[frames[i]:frames[i+1]] = star[index_sort[frames[i]:frames[i+1]]]
 
-            # h5py indexing elements must be in increasing order
-            for _, item in enumerate(index_sort[frames[i]:frames[i+1]]):
-                self.m_image_out_port.append(self.m_image_in_port[item, ], data_dim=3)
+            # HDF5 indexing elements must be in increasing order
+            for item in index_sort[frames[i]:frames[i+1]]:
+                if ndim == 3:
+                    self.m_image_out_port.append(self.m_image_in_port[item, ], data_dim=3)
+
+                elif ndim == 4:
+                    self.m_image_out_port.append(self.m_image_in_port[:, item, ], data_dim=4)
 
         self.m_image_out_port.copy_attributes(self.m_image_in_port)
         self.m_image_out_port.add_history('SortParangModule', 'sorted by INDEX')
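The chunked, index-sorted read in ``SortParangModule`` can be sketched standalone. This is a minimal illustration with made-up array shapes and an inlined chunking helper, not the module's actual port API:

```python
import numpy as np

# Hypothetical stand-in for the module's data: a shuffled image stack
# with the acquisition order stored in an INDEX-like array.
images = np.random.rand(6, 4, 4)       # (nimages, y, x)
index = np.array([3, 0, 5, 1, 4, 2])   # shuffled acquisition indices

index_sort = np.argsort(index)          # positions that restore INDEX order
memory = 2                              # images per chunk (MEMORY attribute)
frames = np.arange(0, images.shape[0] + memory, memory)

sorted_chunks = []
for i in range(len(frames) - 1):
    # Items are read one by one because h5py requires the selection
    # indices within a single fancy-indexed read to be increasing.
    for item in index_sort[frames[i]:frames[i + 1]]:
        sorted_chunks.append(images[item])

sorted_images = np.stack(sorted_chunks)
```

The per-item loop mirrors the module's append calls: a single `images[index_sort]` read would be simpler in NumPy, but is not allowed on an HDF5 dataset when the indices are unsorted.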
@@ -394,7 +423,7 @@ class AngleCalculationModule(ProcessingModule):
             None
         """
 
-        super(AngleCalculationModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         # Parameters
         self.m_instrument = instrument
@@ -609,7 +638,7 @@ class AngleCalculationModule(ProcessingModule):
 
 class SDIpreparationModule(ProcessingModule):
     """
-    Module for preparing continuum frames for SDI subtraction.
+    Module for preparing continuum frames for dual-band simultaneous differential imaging.
     """
 
     __author__ = 'Gabriele Cugno, Tomas Stolker'
@@ -644,7 +673,7 @@ class SDIpreparationModule(ProcessingModule):
             None
         """
 
-        super(SDIpreparationModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -675,7 +704,7 @@ class SDIpreparationModule(ProcessingModule):
 
         start_time = time.time()
         for i in range(nimages):
-            progress(i, nimages, 'Preparing images for SDI...', start_time)
+            progress(i, nimages, 'Preparing images for dual-band SDI...', start_time)
 
             image = self.m_image_in_port[i, ]
 
diff --git a/pynpoint/processing/psfsubtraction.py b/pynpoint/processing/psfsubtraction.py
index cb81b33..8190a22 100644
--- a/pynpoint/processing/psfsubtraction.py
+++ b/pynpoint/processing/psfsubtraction.py
@@ -16,21 +16,24 @@ from sklearn.decomposition import PCA
 from typeguard import typechecked
 
 from pynpoint.core.processing import ProcessingModule
+from pynpoint.util.apply_func import subtract_psf
 from pynpoint.util.module import progress
 from pynpoint.util.multipca import PcaMultiprocessingCapsule
-from pynpoint.util.psf import pca_psf_subtraction
 from pynpoint.util.residuals import combine_residuals
+from pynpoint.util.postproc import postprocessor
+from pynpoint.util.sdi import scaling_factors
 
 
 class PcaPsfSubtractionModule(ProcessingModule):
     """
-    Pipeline module for PSF subtraction with principal component analysis (PCA). The residuals are
+    Pipeline module for PSF subtraction with principal component analysis (PCA). The module can
+    be used for ADI, RDI (see ``subtract_mean`` parameter), SDI, and ASDI. The residuals are
     calculated in parallel for the selected numbers of principal components. This may require
     a large amount of memory in case the stack of input images is very large. The number of
-    processes can be set with the CPU keyword in the configuration file.
+    processes can therefore be set with the ``CPU`` keyword in the configuration file.
     """
 
-    __author__ = 'Markus Bonse, Tomas Stolker'
+    __author__ = 'Markus Bonse, Tomas Stolker, Sven Kiefer'
 
     @typechecked
     def __init__(self,
@@ -43,44 +46,78 @@ class PcaPsfSubtractionModule(ProcessingModule):
                  res_rot_mean_clip_tag: Optional[str] = None,
                  res_arr_out_tag: Optional[str] = None,
                  basis_out_tag: Optional[str] = None,
-                 pca_numbers: Union[range, List[int], np.ndarray] = range(1, 21),
+                 pca_numbers: Union[range,
+                                    List[int],
+                                    np.ndarray,
+                                    Tuple[range, range],
+                                    Tuple[List[int], List[int]],
+                                    Tuple[np.ndarray, np.ndarray]] = range(1, 21),
                  extra_rot: float = 0.,
-                 subtract_mean: bool = True) -> None:
+                 subtract_mean: bool = True,
+                 processing_type: str = 'ADI') -> None:
         """
         Parameters
         ----------
         name_in : str
-            Unique name of the module instance.
+            Name tag of the pipeline module.
         images_in_tag : str
-            Tag of the database entry with the science images that are read as input
+            Database entry with the images from which the PSF model will be subtracted.
         reference_in_tag : str
-            Tag of the database entry with the reference images that are read as input.
+            Database entry with the reference images from which the PSF model is created. Usually
+            ``reference_in_tag`` is the same as ``images_in_tag``, but a different dataset can be
+            used as reference images in case of RDI.
         res_mean_tag : str, None
-            Tag of the database entry with the mean collapsed residuals. Not calculated if set to
-            None.
+            Database entry where the mean-collapsed residuals will be stored. The residuals are
+            not calculated and stored if set to None.
         res_median_tag : str, None
-            Tag of the database entry with the median collapsed residuals. Not calculated if set
-            to None.
+            Database entry where the median-collapsed residuals will be stored. The residuals
+            are not calculated and stored if set to None.
         res_weighted_tag : str, None
-            Tag of the database entry with the noise-weighted residuals (see Bottom et al. 2017).
-            Not calculated if set to None.
+            Database entry where the noise-weighted residuals will be stored (see Bottom et al.
+            2017). The residuals are not calculated and stored if set to None.
         res_rot_mean_clip_tag : str, None
             Tag of the database entry of the clipped mean residuals. Not calculated if set to
             None.
         res_arr_out_tag : str, None
-            Tag of the database entry with the derotated image residuals from the PSF subtraction.
-            The tag name of `res_arr_out_tag` is appended with the number of principal components
-            that was used. Not calculated if set to None. Not supported with multiprocessing.
+            Database entry where the derotated, but not collapsed, residuals are stored. The number
+            of principal components that was used is appended to the ``res_arr_out_tag``. The
+            residuals are not stored if set to None. This parameter is not supported with
+            multiprocessing (i.e. ``CPU`` > 1). For IFS data, and if the processing type is
+            either ADI+SDI or SDI+ADI, the residuals can only be calculated if exactly 1
+            principal component is given for each of ADI and SDI with the ``pca_numbers``
+            parameter.
         basis_out_tag : str, None
-            Tag of the database entry with the basis set. Not stored if set to None.
-        pca_numbers : range, list(int, ), numpy.ndarray
-            Number of principal components used for the PSF model. Can be a single value or a tuple
-            with integers.
+            Database entry where the principal components are stored. The data is not stored if set
+            to None. Only supported for imaging data with ``processing_type='ADI'``.
+        pca_numbers : range, list(int), np.ndarray, tuple(range, range), tuple(list(int),
+                      list(int)), tuple(np.ndarray, np.ndarray)
+            Number of principal components that are used for the PSF model. With ADI or SDI, a
+            single list/range/array needs to be provided while for SDI+ADI or ADI+SDI a tuple is
+            required with twice a list/range/array.
         extra_rot : float
             Additional rotation angle of the images (deg).
         subtract_mean : bool
             The mean of the science and reference images is subtracted from the corresponding
-            stack, before the PCA basis is constructed and fitted.
+            stack, before the PCA basis is constructed and fitted. Set the argument to ``False``
+            for RDI, that is, in case ``reference_in_tag`` is different from ``images_in_tag``
+            and there is no or limited field rotation. The parameter is only supported with
+            ``processing_type='ADI'``.
+        processing_type : str
+            Post-processing type:
+                - ADI: Angular differential imaging. Can be used both on imaging and IFS datasets.
+                  This argument is also used for RDI, in which case the ``PARANG`` attribute should
+                  contain zeros as derotation angles (e.g. with
+                  :func:`~pynpoint.core.pypeline.Pypeline.set_attribute` or
+                  :class:`~pynpoint.readwrite.attr_writing.ParangWritingModule`). The collapsed
+                  residuals are stored as a 3D dataset with one image per principal component.
+                - SDI: Spectral differential imaging. Can only be applied on IFS datasets. The
+                  collapsed residuals are stored as a 4D dataset with one image per wavelength and
+                  principal component.
+                - SDI+ADI: Spectral and angular differential imaging. Can only be applied on IFS
+                  datasets. The collapsed residuals are stored as 5D datasets with one image per
+                  wavelength and each of the principal components.
+                - ADI+SDI: Angular and spectral differential imaging. Can only be applied on IFS
+                  datasets. The collapsed residuals are stored as 5D datasets with one image per
+                  wavelength and each of the principal components.
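The wavelength rescaling that underlies the SDI modes can be sketched conceptually. The exact convention of ``pynpoint.util.sdi.scaling_factors`` may differ; this assumes the factors are the longest wavelength divided by each channel wavelength, and the channel values are made up:

```python
import numpy as np

# The PSF (and speckle pattern) scales linearly with wavelength, so each
# IFS channel is spatially rescaled by a wavelength ratio before the
# subtraction. Hypothetical channel wavelengths in micron:
wavelength = np.array([1.0, 1.1, 1.2, 1.3])
scales = wavelength[-1] / wavelength

# After rescaling, speckles align across channels while a true companion
# moves radially, which is the signal that SDI exploits.
print(scales)
```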
 
         Returns
         -------
@@ -88,13 +125,21 @@ class PcaPsfSubtractionModule(ProcessingModule):
             None
         """
 
-        super(PcaPsfSubtractionModule, self).__init__(name_in)
+        super().__init__(name_in)
+
+        self.m_pca_numbers = pca_numbers
+
+        if isinstance(pca_numbers, tuple):
+            self.m_components = (np.sort(np.atleast_1d(pca_numbers[0])),
+                                 np.sort(np.atleast_1d(pca_numbers[1])))
+
+        else:
+            self.m_components = np.sort(np.atleast_1d(pca_numbers))
+            self.m_pca = PCA(n_components=np.amax(self.m_components), svd_solver='arpack')
 
-        self.m_components = np.sort(np.atleast_1d(pca_numbers))
         self.m_extra_rot = extra_rot
         self.m_subtract_mean = subtract_mean
-
-        self.m_pca = PCA(n_components=np.amax(self.m_components), svd_solver='arpack')
+        self.m_processing_type = processing_type
 
         self.m_reference_in_port = self.add_input_port(reference_in_tag)
         self.m_star_in_port = self.add_input_port(images_in_tag)
@@ -122,30 +167,85 @@ class PcaPsfSubtractionModule(ProcessingModule):
         if res_arr_out_tag is None:
             self.m_res_arr_out_ports = None
         else:
-            self.m_res_arr_out_ports = {}
-            for pca_number in self.m_components:
-                self.m_res_arr_out_ports[pca_number] = self.add_output_port(res_arr_out_tag +
-                                                                            str(pca_number))
+            if isinstance(self.m_components, tuple):
+                self.m_res_arr_out_ports = self.add_output_port(res_arr_out_tag)
+            else:
+                self.m_res_arr_out_ports = {}
+
+                for pca_number in self.m_components:
+                    self.m_res_arr_out_ports[pca_number] = self.add_output_port(
+                        res_arr_out_tag + str(pca_number))
 
         if basis_out_tag is None:
             self.m_basis_out_port = None
         else:
             self.m_basis_out_port = self.add_output_port(basis_out_tag)
 
+        if self.m_processing_type in ['ADI', 'SDI']:
+            if not isinstance(self.m_components, (range, list, np.ndarray)):
+                raise ValueError(f'The post-processing type \'{self.m_processing_type}\' requires '
+                                 f'a single range/list/array as argument for \'pca_numbers\'.')
+
+        elif self.m_processing_type in ['SDI+ADI', 'ADI+SDI']:
+            if not isinstance(self.m_components, tuple):
+                raise ValueError(f'The post-processing type \'{self.m_processing_type}\' requires '
+                                 f'a tuple with twice a range/list/array as argument for '
+                                 f'\'pca_numbers\'.')
+
+            if res_arr_out_tag is not None and len(self.m_components[0]) + \
+                    len(self.m_components[1]) != 2:
+                raise ValueError(f'If the post-processing type \'{self.m_processing_type}\' '
+                                 'is selected, the residuals can only be calculated if exactly '
+                                 '1 principal component is given for each of ADI and SDI.')
+        else:
+            raise ValueError('Please select a valid post-processing type.')
+
     @typechecked
     def _run_multi_processing(self,
                               star_reshape: np.ndarray,
-                              im_shape: Tuple[int, int, int],
-                              indices: np.ndarray) -> None:
+                              im_shape: tuple,
+                              indices: Optional[np.ndarray]) -> None:
         """
         Internal function to create the residuals, derotate the images, and write the output
         using multiprocessing.
         """
 
         cpu = self._m_config_port.get_attribute('CPU')
-        angles = -1.*self.m_star_in_port.get_attribute('PARANG') + self.m_extra_rot
+        parang = -1.*self.m_star_in_port.get_attribute('PARANG') + self.m_extra_rot
+
+        if self.m_ifs_data:
+            if 'WAVELENGTH' in self.m_star_in_port.get_all_non_static_attributes():
+                wavelength = self.m_star_in_port.get_attribute('WAVELENGTH')
+
+            else:
+                raise ValueError('The wavelengths are not found. These should be stored '
+                                 'as the \'WAVELENGTH\' attribute.')
+
+            scales = scaling_factors(wavelength)
+
+        else:
+            scales = None
+
+        if self.m_processing_type in ['ADI', 'SDI']:
+            pca_first = self.m_components
+            pca_secon = [-1]  # Not used
+
+        elif self.m_processing_type in ['SDI+ADI', 'ADI+SDI']:
+            pca_first = self.m_components[0]
+            pca_secon = self.m_components[1]
+
+        if self.m_ifs_data:
+            if self.m_processing_type in ['ADI', 'SDI']:
+                res_shape = (len(pca_first), len(wavelength), im_shape[-2], im_shape[-1])
 
-        tmp_output = np.zeros((len(self.m_components), im_shape[1], im_shape[2]))
+            elif self.m_processing_type in ['SDI+ADI', 'ADI+SDI']:
+                res_shape = (len(pca_first), len(pca_secon), len(wavelength),
+                             im_shape[-2], im_shape[-1])
+
+        else:
+            res_shape = (len(self.m_components), im_shape[1], im_shape[2])
+
+        tmp_output = np.zeros(res_shape)
 
         if self.m_res_mean_out_port is not None:
             self.m_res_mean_out_port.set_all(tmp_output, keep_attributes=False)
@@ -174,10 +274,6 @@ class PcaPsfSubtractionModule(ProcessingModule):
         if self.m_res_rot_mean_clip_out_port is not None:
             self.m_res_rot_mean_clip_out_port.close_port()
 
-        if self.m_res_arr_out_ports is not None:
-            for pca_number in self.m_components:
-                self.m_res_arr_out_ports[pca_number].close_port()
-
         if self.m_basis_out_port is not None:
             self.m_basis_out_port.close_port()
 
@@ -189,17 +285,19 @@ class PcaPsfSubtractionModule(ProcessingModule):
                                             deepcopy(self.m_components),
                                             deepcopy(self.m_pca),
                                             deepcopy(star_reshape),
-                                            deepcopy(angles),
+                                            deepcopy(parang),
+                                            deepcopy(scales),
                                             im_shape,
-                                            indices)
+                                            indices,
+                                            self.m_processing_type)
 
         capsule.run()
 
     @typechecked
     def _run_single_processing(self,
                                star_reshape: np.ndarray,
-                               im_shape: Tuple[int, int, int],
-                               indices: np.ndarray) -> None:
+                               im_shape: tuple,
+                               indices: Optional[np.ndarray]) -> None:
         """
         Internal function to create the residuals, derotate the images, and write the output
         using a single process.
@@ -207,49 +305,170 @@ class PcaPsfSubtractionModule(ProcessingModule):
 
         start_time = time.time()
 
-        for i, pca_number in enumerate(self.m_components):
-            progress(i, len(self.m_components), 'Creating residuals...', start_time)
+        # Get the parallactic angles
+        parang = -1.*self.m_star_in_port.get_attribute('PARANG') + self.m_extra_rot
+
+        if self.m_ifs_data:
+            # Get the wavelengths
+            if 'WAVELENGTH' in self.m_star_in_port.get_all_non_static_attributes():
+                wavelength = self.m_star_in_port.get_attribute('WAVELENGTH')
+
+            else:
+                raise ValueError('The wavelengths are not found. These should be stored '
+                                 'as the \'WAVELENGTH\' attribute.')
+
+            # Calculate the wavelength ratios
+            scales = scaling_factors(wavelength)
+
+        else:
+            scales = None
+
+        if self.m_processing_type in ['ADI', 'SDI']:
+            pca_first = self.m_components
+            pca_secon = [-1]  # Not used
+
+        elif self.m_processing_type in ['SDI+ADI', 'ADI+SDI']:
+            pca_first = self.m_components[0]
+            pca_secon = self.m_components[1]
+
+        # Setup output arrays
+
+        out_array_res = np.zeros(im_shape)
+
+        if self.m_ifs_data:
+            if self.m_processing_type in ['ADI', 'SDI']:
+                res_shape = (len(pca_first), len(wavelength), im_shape[-2], im_shape[-1])
+
+            elif self.m_processing_type in ['SDI+ADI', 'ADI+SDI']:
+                res_shape = (len(pca_first), len(pca_secon), len(wavelength),
+                             im_shape[-2], im_shape[-1])
+
+        else:
+            res_shape = (len(pca_first), im_shape[-2], im_shape[-1])
+
+        out_array_mean = np.zeros(res_shape)
+        out_array_medi = np.zeros(res_shape)
+        out_array_weig = np.zeros(res_shape)
+        out_array_clip = np.zeros(res_shape)
+
+        # Loop over all combinations of pca_numbers and apply the reductions
+        for i, pca_1 in enumerate(pca_first):
+            for j, pca_2 in enumerate(pca_secon):
+                progress(i*len(pca_secon)+j, len(pca_first)*len(pca_secon),
+                         'Creating residuals...', start_time)
+
+                # process images
+                residuals, res_rot = postprocessor(images=star_reshape,
+                                                   angles=parang,
+                                                   scales=scales,
+                                                   pca_number=(pca_1, pca_2),
+                                                   pca_sklearn=self.m_pca,
+                                                   im_shape=im_shape,
+                                                   indices=indices,
+                                                   processing_type=self.m_processing_type)
+
+                # 1.) derotated residuals
+                if self.m_res_arr_out_ports is not None:
+                    if not self.m_ifs_data:
+                        self.m_res_arr_out_ports[pca_1].set_all(res_rot)
+                        self.m_res_arr_out_ports[pca_1].copy_attributes(self.m_star_in_port)
+                        self.m_res_arr_out_ports[pca_1].add_history(
+                            'PcaPsfSubtractionModule', f'max PC number = {pca_first}')
+
+                    else:
+                        out_array_res = residuals
 
-            parang = -1.*self.m_star_in_port.get_attribute('PARANG') + self.m_extra_rot
+                # 2.) mean residuals
+                if self.m_res_mean_out_port is not None:
+                    if self.m_processing_type in ['SDI+ADI', 'ADI+SDI']:
+                        out_array_mean[i, j] = combine_residuals(method='mean',
+                                                                 res_rot=res_rot,
+                                                                 angles=parang)
 
-            residuals, res_rot = pca_psf_subtraction(images=star_reshape,
-                                                     angles=parang,
-                                                     pca_number=int(pca_number),
-                                                     pca_sklearn=self.m_pca,
-                                                     im_shape=im_shape,
-                                                     indices=indices)
+                    else:
+                        out_array_mean[i] = combine_residuals(method='mean',
+                                                              res_rot=res_rot,
+                                                              angles=parang)
 
-            hist = f'max PC number = {np.amax(self.m_components)}'
+                # 3.) median residuals
+                if self.m_res_median_out_port is not None:
+                    if self.m_processing_type in ['SDI+ADI', 'ADI+SDI']:
+                        out_array_medi[i, j] = combine_residuals(method='median',
+                                                                 res_rot=res_rot,
+                                                                 angles=parang)
 
-            # 1.) derotated residuals
-            if self.m_res_arr_out_ports is not None:
-                self.m_res_arr_out_ports[pca_number].set_all(res_rot)
-                self.m_res_arr_out_ports[pca_number].copy_attributes(self.m_star_in_port)
-                self.m_res_arr_out_ports[pca_number].add_history('PcaPsfSubtractionModule', hist)
+                    else:
+                        out_array_medi[i] = combine_residuals(method='median',
+                                                              res_rot=res_rot,
+                                                              angles=parang)
+
+                # 4.) noise-weighted residuals
+                if self.m_res_weighted_out_port is not None:
+                    if self.m_processing_type in ['SDI+ADI', 'ADI+SDI']:
+                        out_array_weig[i, j] = combine_residuals(method='weighted',
+                                                                 res_rot=res_rot,
+                                                                 residuals=residuals,
+                                                                 angles=parang)
 
-            # 2.) mean residuals
-            if self.m_res_mean_out_port is not None:
-                stack = combine_residuals(method='mean', res_rot=res_rot)
-                self.m_res_mean_out_port.append(stack, data_dim=3)
+                    else:
+                        out_array_weig[i] = combine_residuals(method='weighted',
+                                                              res_rot=res_rot,
+                                                              residuals=residuals,
+                                                              angles=parang)
+
+                # 5.) clipped mean residuals
+                if self.m_res_rot_mean_clip_out_port is not None:
+                    if self.m_processing_type in ['SDI+ADI', 'ADI+SDI']:
+                        out_array_clip[i, j] = combine_residuals(method='clipped',
+                                                                 res_rot=res_rot,
+                                                                 angles=parang)
 
-            # 3.) median residuals
-            if self.m_res_median_out_port is not None:
-                stack = combine_residuals(method='median', res_rot=res_rot)
-                self.m_res_median_out_port.append(stack, data_dim=3)
+                    else:
+                        out_array_clip[i] = combine_residuals(method='clipped',
+                                                              res_rot=res_rot,
+                                                              angles=parang)
+
+        # Configure the data output according to the processing type
+        # 1.) derotated residuals
+        if self.m_res_arr_out_ports is not None and self.m_ifs_data:
+            if pca_secon[0] == -1:
+                history = f'max PC number = {pca_first}'
+
+            else:
+                history = f'max PC number = {pca_first} / {pca_secon}'
+
+            # Squeeze out_array_res to reduce the dimensionality, as the residuals of
+            # SDI+ADI and ADI+SDI are always of the form (1, 1, ...)
+            squeezed = np.squeeze(out_array_res)
+
+            if isinstance(self.m_components, tuple):
+                self.m_res_arr_out_ports.set_all(squeezed, data_dim=squeezed.ndim)
+                self.m_res_arr_out_ports.copy_attributes(self.m_star_in_port)
+                self.m_res_arr_out_ports.add_history('PcaPsfSubtractionModule', history)
+
+            else:
+                for i, pca in enumerate(self.m_components):
+                    self.m_res_arr_out_ports[pca].append(squeezed[i])
+                    self.m_res_arr_out_ports[pca].add_history('PcaPsfSubtractionModule', history)
+
+        # 2.) mean residuals
+        if self.m_res_mean_out_port is not None:
+            self.m_res_mean_out_port.set_all(out_array_mean,
+                                             data_dim=out_array_mean.ndim)
 
-            # 4.) noise-weighted residuals
-            if self.m_res_weighted_out_port is not None:
-                stack = combine_residuals(method='weighted',
-                                          res_rot=res_rot,
-                                          residuals=residuals,
-                                          angles=parang)
+        # 3.) median residuals
+        if self.m_res_median_out_port is not None:
+            self.m_res_median_out_port.set_all(out_array_medi,
+                                               data_dim=out_array_medi.ndim)
 
-                self.m_res_weighted_out_port.append(stack, data_dim=3)
+        # 4.) noise-weighted residuals
+        if self.m_res_weighted_out_port is not None:
+            self.m_res_weighted_out_port.set_all(out_array_weig,
+                                                 data_dim=out_array_weig.ndim)
 
-            # 5.) clipped mean residuals
-            if self.m_res_rot_mean_clip_out_port is not None:
-                stack = combine_residuals(method='clipped', res_rot=res_rot)
-                self.m_res_rot_mean_clip_out_port.append(stack, data_dim=3)
+        # 5.) clipped mean residuals
+        if self.m_res_rot_mean_clip_out_port is not None:
+            self.m_res_rot_mean_clip_out_port.set_all(out_array_clip,
+                                                      data_dim=out_array_clip.ndim)
 
     @typechecked
     def run(self) -> None:
@@ -265,81 +484,115 @@ class PcaPsfSubtractionModule(ProcessingModule):
             None
         """
 
+        print('Input parameters:')
+        print(f'   - Post-processing type: {self.m_processing_type}')
+        print(f'   - Number of principal components: {self.m_pca_numbers}')
+        print(f'   - Subtract mean: {self.m_subtract_mean}')
+        print(f'   - Extra rotation (deg): {self.m_extra_rot}')
+
         cpu = self._m_config_port.get_attribute('CPU')
 
         if cpu > 1 and self.m_res_arr_out_ports is not None:
-            warnings.warn(f'Multiprocessing not possible if \'res_arr_out_tag\' is not set '
-                          f'to None.')
+            warnings.warn('Multiprocessing not possible if \'res_arr_out_tag\' is not set '
+                          'to None.')
 
-        # get all data
+        # Read the data
         star_data = self.m_star_in_port.get_all()
         im_shape = star_data.shape
 
-        # select the first image and get the unmasked image indices
-        im_star = star_data[0, ].reshape(-1)
-        indices = np.where(im_star != 0.)[0]
-
-        # reshape the star data and select the unmasked pixels
-        star_reshape = star_data.reshape(im_shape[0], im_shape[1]*im_shape[2])
-        star_reshape = star_reshape[:, indices]
+        # Parse input processing types to internal processing types
+        if star_data.ndim == 3:
+            self.m_ifs_data = False
 
-        if self.m_reference_in_port.tag == self.m_star_in_port.tag:
-            ref_reshape = deepcopy(star_reshape)
+        elif star_data.ndim == 4:
+            self.m_ifs_data = True
 
         else:
-            ref_data = self.m_reference_in_port.get_all()
-            ref_shape = ref_data.shape
+            raise ValueError(f'The input data has {star_data.ndim} dimensions while only 3 or 4 '
+                             'are supported by the pipeline module.')
 
-            if ref_shape[-2:] != im_shape[-2:]:
-                raise ValueError('The image size of the science data and the reference data '
-                                 'should be identical.')
+        if self.m_processing_type == 'ADI' and not self.m_ifs_data:
+            # select the first image and get the unmasked image indices
+            im_star = star_data[0, ].reshape(-1)
+            indices = np.where(im_star != 0.)[0]
 
-            # reshape reference data and select the unmasked pixels
-            ref_reshape = ref_data.reshape(ref_shape[0], ref_shape[1]*ref_shape[2])
-            ref_reshape = ref_reshape[:, indices]
+            # reshape the star data and select the unmasked pixels
+            star_reshape = star_data.reshape(im_shape[0], im_shape[1]*im_shape[2])
+            star_reshape = star_reshape[:, indices]
 
-        # subtract mean from science data, if required
-        if self.m_subtract_mean:
-            mean_star = np.mean(star_reshape, axis=0)
-            star_reshape -= mean_star
+            if self.m_reference_in_port.tag == self.m_star_in_port.tag:
+                ref_reshape = deepcopy(star_reshape)
 
-        # subtract mean from reference data
-        mean_ref = np.mean(ref_reshape, axis=0)
-        ref_reshape -= mean_ref
+            else:
+                ref_data = self.m_reference_in_port.get_all()
+                ref_shape = ref_data.shape
 
-        # create the PCA basis
-        print('Constructing PSF model...', end='')
-        self.m_pca.fit(ref_reshape)
+                if ref_shape[-2:] != im_shape[-2:]:
+                    raise ValueError('The image size of the science data and the reference data '
+                                     'should be identical.')
 
-        # add mean of reference array as 1st PC and orthogonalize it with respect to the PCA basis
-        if not self.m_subtract_mean:
-            mean_ref_reshape = mean_ref.reshape((1, mean_ref.shape[0]))
+                # reshape reference data and select the unmasked pixels
+                ref_reshape = ref_data.reshape(ref_shape[0], ref_shape[1]*ref_shape[2])
+                ref_reshape = ref_reshape[:, indices]
 
-            q_ortho, _ = np.linalg.qr(np.vstack((mean_ref_reshape,
-                                                 self.m_pca.components_[:-1, ])).T)
+            # subtract mean from science data, if required
+            if self.m_subtract_mean:
+                mean_star = np.mean(star_reshape, axis=0)
+                star_reshape -= mean_star
 
-            self.m_pca.components_ = q_ortho.T
+            # subtract mean from reference data
+            mean_ref = np.mean(ref_reshape, axis=0)
+            ref_reshape -= mean_ref
 
-        print(' [DONE]')
+            # create the PCA basis
+            print('Constructing PSF model...', end='')
+            self.m_pca.fit(ref_reshape)
 
-        if self.m_basis_out_port is not None:
-            pc_size = self.m_pca.components_.shape[0]
+            # add mean of reference array as 1st PC and orthogonalize it with respect to
+            # the other principal components
+            if not self.m_subtract_mean:
+                mean_ref_reshape = mean_ref.reshape((1, mean_ref.shape[0]))
+
+                q_ortho, _ = np.linalg.qr(np.vstack((mean_ref_reshape,
+                                                     self.m_pca.components_[:-1, ])).T)
+
+                self.m_pca.components_ = q_ortho.T
+
+            print(' [DONE]')
+
+            if self.m_basis_out_port is not None:
+                pc_size = self.m_pca.components_.shape[0]
 
-            basis = np.zeros((pc_size, im_shape[1]*im_shape[2]))
-            basis[:, indices] = self.m_pca.components_
-            basis = basis.reshape((pc_size, im_shape[1], im_shape[2]))
+                basis = np.zeros((pc_size, im_shape[1]*im_shape[2]))
+                basis[:, indices] = self.m_pca.components_
+                basis = basis.reshape((pc_size, im_shape[1], im_shape[2]))
 
-            self.m_basis_out_port.set_all(basis)
+                self.m_basis_out_port.set_all(basis)
 
+        else:
+            # This setup is used for SDI processing. No preparations are possible because
+            # SDI/ADI combinations are case-specific and need to be handled in pca_psf_subtraction.
+            self.m_pca = None
+            indices = None
+            star_reshape = star_data
+
+        # Run the PCA analysis with a single process
         if cpu == 1 or self.m_res_arr_out_ports is not None:
             self._run_single_processing(star_reshape, im_shape, indices)
 
+        # Run the PCA analysis with multiprocessing
         else:
             print('Creating residuals', end='')
             self._run_multi_processing(star_reshape, im_shape, indices)
             print(' [DONE]')
 
-        history = f'max PC number = {np.amax(self.m_components)}'
+        # write history
+        if isinstance(self.m_components, tuple):
+            history = f'max PC number = {np.amax(self.m_components[0])} / ' \
+                      f'{np.amax(self.m_components[1])}'
+
+        else:
+            history = f'max PC number = {np.amax(self.m_components)}'
 
         # save history for all other ports
         if self.m_res_mean_out_port is not None:
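The hunk above prepends the mean of the reference library as the first principal component and re-orthogonalizes the basis with a QR decomposition. A minimal standalone sketch of that step (the array shapes and the SVD-based basis construction here are illustrative assumptions, not PynPoint's API):

```python
import numpy as np

# Hypothetical stand-in for the masked, reshaped reference library;
# the shapes (20 frames, 50 unmasked pixels) are illustrative only
rng = np.random.default_rng(0)
ref = rng.normal(size=(20, 50))

mean_ref = np.mean(ref, axis=0)
components = np.linalg.svd(ref - mean_ref, full_matrices=False)[2][:5]

# Prepend the mean as the first basis vector, drop the last component to
# keep the number of components fixed, and re-orthogonalize with QR
q_ortho, _ = np.linalg.qr(np.vstack((mean_ref, components[:-1])).T)
basis = q_ortho.T
```

The QR step guarantees that the resulting basis is orthonormal even though the mean vector is generally not orthogonal to the principal components.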
@@ -376,7 +629,7 @@ class ClassicalADIModule(ProcessingModule):
                  image_in_tag: str,
                  res_out_tag: str,
                  stack_out_tag: str,
-                 threshold: Union[Tuple[float, float, float], None],
+                 threshold: Optional[Tuple[float, float, float]],
                  nreference: Optional[int] = None,
                  residuals: str = 'median',
                  extra_rot: float = 0.) -> None:
@@ -410,7 +663,7 @@ class ClassicalADIModule(ProcessingModule):
             None
         """
 
-        super(ClassicalADIModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_res_out_port = self.add_output_port(res_out_tag)
@@ -421,8 +674,6 @@ class ClassicalADIModule(ProcessingModule):
         self.m_extra_rot = extra_rot
         self.m_residuals = residuals
 
-        self.m_count = 0
-
     @typechecked
     def run(self) -> None:
         """
@@ -439,39 +690,10 @@ class ClassicalADIModule(ProcessingModule):
             None
         """
 
-        @typechecked
-        def _subtract_psf(image: np.ndarray,
-                          parang_thres: Optional[float],
-                          nref: Optional[int],
-                          reference: Optional[np.ndarray] = None) -> np.ndarray:
-
-            if parang_thres:
-                ang_diff = np.abs(parang[self.m_count]-parang)
-                index_thres = np.where(ang_diff > parang_thres)[0]
-
-                if index_thres.size == 0:
-                    reference = self.m_image_in_port.get_all()
-                    warnings.warn('No images meet the rotation threshold. Creating a reference '
-                                  'PSF from the median of all images instead.')
-
-                else:
-                    if nref:
-                        index_diff = np.abs(self.m_count - index_thres)
-                        index_near = np.argsort(index_diff)[:nref]
-                        index_sort = np.sort(index_thres[index_near])
-                        reference = self.m_image_in_port[index_sort, :, :]
-
-                    else:
-                        reference = self.m_image_in_port[index_thres, :, :]
-
-                reference = np.median(reference, axis=0)
-
-            self.m_count += 1
-
-            return image-reference
-
         parang = -1.*self.m_image_in_port.get_attribute('PARANG') + self.m_extra_rot
 
+        nimages = self.m_image_in_port.get_shape()[0]
+
         if self.m_threshold:
             parang_thres = 2.*math.atan2(self.m_threshold[2]*self.m_threshold[1],
                                          2.*self.m_threshold[0])
@@ -483,11 +705,20 @@ class ClassicalADIModule(ProcessingModule):
             reference = self.m_image_in_port.get_all()
             reference = np.median(reference, axis=0)
 
-        self.apply_function_to_images(_subtract_psf,
+        ang_diff = np.zeros((nimages, parang.shape[0]))
+
+        for i in range(nimages):
+            ang_diff[i, :] = np.abs(parang[i] - parang)
+
+        self.apply_function_to_images(subtract_psf,
                                       self.m_image_in_port,
                                       self.m_res_out_port,
                                       'Classical ADI',
-                                      func_args=(parang_thres, self.m_nreference, reference))
+                                      func_args=(parang_thres,
+                                                 self.m_nreference,
+                                                 reference,
+                                                 ang_diff,
+                                                 self.m_image_in_port))
 
         self.m_res_in_port = self.add_input_port(self.m_res_out_port._m_tag)
         im_res = self.m_res_in_port.get_all()
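The refactored `ClassicalADIModule` precomputes the parallactic-angle differences and passes them to `subtract_psf`. The reference-frame selection it relies on can be sketched as follows; `subtract_psf_sketch` and its arguments are hypothetical stand-ins for illustration, not the module's actual signature:

```python
import numpy as np

def subtract_psf_sketch(index, images, parang, parang_thres, nref):
    # Keep only frames whose field rotation relative to frame `index`
    # exceeds `parang_thres` (deg), so a companion has moved enough on
    # the detector that it does not subtract itself out
    ang_diff = np.abs(parang[index] - parang)
    index_thres = np.where(ang_diff > parang_thres)[0]

    if index_thres.size == 0:
        # Fall back to the median of all frames
        reference = images
    elif nref is not None:
        # Keep the `nref` selected frames closest in time to frame `index`
        index_near = np.argsort(np.abs(index - index_thres))[:nref]
        reference = images[np.sort(index_thres[index_near])]
    else:
        reference = images[index_thres]

    # Median-combine the reference frames into a PSF model and subtract it
    return images[index] - np.median(reference, axis=0)
```

This mirrors the logic of the inline `_subtract_psf` closure that the diff removes, with the frame index passed in explicitly instead of tracked by a mutable `m_count` attribute.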
diff --git a/pynpoint/processing/resizing.py b/pynpoint/processing/resizing.py
index 60fa3fe..0f46283 100644
--- a/pynpoint/processing/resizing.py
+++ b/pynpoint/processing/resizing.py
@@ -5,15 +5,16 @@ Pipeline modules for resizing of images.
 import math
 import time
 
-from typing import Union, Tuple
+from typing import Tuple, Union
 
 import numpy as np
 
 from typeguard import typechecked
 
 from pynpoint.core.processing import ProcessingModule
-from pynpoint.util.image import crop_image, scale_image
-from pynpoint.util.module import progress, memory_frames
+from pynpoint.util.apply_func import image_scaling
+from pynpoint.util.image import crop_image
+from pynpoint.util.module import memory_frames, progress
 
 
 class CropImagesModule(ProcessingModule):
@@ -53,7 +54,7 @@ class CropImagesModule(ProcessingModule):
             None
         """
 
-        super(CropImagesModule, self).__init__(name_in=name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -79,7 +80,21 @@ class CropImagesModule(ProcessingModule):
         # Get memory and number of images to split the frames into chunks
         memory = self._m_config_port.get_attribute('MEMORY')
         nimages = self.m_image_in_port.get_shape()[0]
-        frames = memory_frames(memory, nimages)
+
+        # Get the number of dimensions and the shape
+        ndim = self.m_image_in_port.get_ndim()
+        im_shape = self.m_image_in_port.get_shape()
+
+        if ndim == 3:
+            # Number of images
+            nimages = im_shape[-3]
+
+            # Split into batches to comply with memory constraints
+            frames = memory_frames(memory, nimages)
+
+        elif ndim == 4:
+            # Process all wavelengths per exposure at once
+            frames = np.linspace(0, im_shape[-3], im_shape[-3]+1)
 
         # Convert size parameter from arcseconds to pixels
         pixscale = self.m_image_in_port.get_attribute('PIXSCALE')
@@ -88,7 +103,7 @@ class CropImagesModule(ProcessingModule):
         print(f'New image size (pixels) = {self.m_size}')
 
         if self.m_center is not None:
-            print(f'New image center (x, y) = {self.m_center}')
+            print(f'New image center (x, y) = ({self.m_center[1]}, {self.m_center[0]})')
 
         # Crop images chunk by chunk
         start_time = time.time()
@@ -97,12 +112,22 @@ class CropImagesModule(ProcessingModule):
             # Update progress bar
             progress(i, len(frames[:-1]), 'Cropping images...', start_time)
 
-            # Select and crop images in the current chunk
-            images = self.m_image_in_port[frames[i]:frames[i+1], ]
+            # Select images in the current chunk
+            if ndim == 3:
+                images = self.m_image_in_port[frames[i]:frames[i+1], ]
+
+            elif ndim == 4:
+                # Process all wavelengths per exposure at once
+                images = self.m_image_in_port[:, i, ]
+
+            # Crop the images according to the input parameters
             images = crop_image(images, self.m_center, self.m_size, copy=False)
 
-            # Write cropped images to output port
-            self.m_image_out_port.append(images, data_dim=3)
+            # Write processed images to output port
+            if ndim == 3:
+                self.m_image_out_port.append(images, data_dim=3)
+            elif ndim == 4:
+                self.m_image_out_port.append(images, data_dim=4)
 
         # Save history and copy attributes
         history = f'image size (pix) = {self.m_size}'
@@ -124,7 +149,7 @@ class ScaleImagesModule(ProcessingModule):
                  scaling: Union[Tuple[float, float, float],
                                 Tuple[None, None, float],
                                 Tuple[float, float, None]],
-                 pixscale: bool = False) -> None:
+                 pixscale: bool = True) -> None:
         """
         Parameters
         ----------
@@ -134,7 +159,7 @@ class ScaleImagesModule(ProcessingModule):
             Tag of the database entry that is read as input.
         image_out_tag : str
             Tag of the database entry that is written as output. Should be different from
-            *image_in_tag*.
+            ``image_in_tag``.
         scaling : tuple(float, float, float)
             Tuple with the scaling factors for the image size and flux, (scaling_x, scaling_y,
             scaling_flux). Upsampling and downsampling of the image corresponds to
@@ -148,7 +173,7 @@ class ScaleImagesModule(ProcessingModule):
             None
         """
 
-        super(ScaleImagesModule, self).__init__(name_in=name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -184,15 +209,7 @@ class ScaleImagesModule(ProcessingModule):
 
         pixscale = self.m_image_in_port.get_attribute('PIXSCALE')
 
-        @typechecked
-        def _image_scaling(image_in: np.ndarray,
-                           scaling_y: float,
-                           scaling_x: float,
-                           scaling_flux: float) -> np.ndarray:
-
-            return scaling_flux * scale_image(image_in, scaling_y, scaling_x)
-
-        self.apply_function_to_images(_image_scaling,
+        self.apply_function_to_images(image_scaling,
                                       self.m_image_in_port,
                                       self.m_image_out_port,
                                       'Scaling images',
@@ -243,7 +260,7 @@ class AddLinesModule(ProcessingModule):
             None
         """
 
-        super(AddLinesModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -324,7 +341,7 @@ class RemoveLinesModule(ProcessingModule):
             None
         """
 
-        super(RemoveLinesModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
diff --git a/pynpoint/processing/stacksubset.py b/pynpoint/processing/stacksubset.py
index 6e01e80..457482b 100644
--- a/pynpoint/processing/stacksubset.py
+++ b/pynpoint/processing/stacksubset.py
@@ -5,7 +5,7 @@ Pipeline modules for stacking and subsampling of images.
 import time
 import warnings
 
-from typing import List, Optional, Tuple, Union
+from typing import List, Optional, Tuple
 
 import numpy as np
 
@@ -57,7 +57,7 @@ class StackAndSubsetModule(ProcessingModule):
             None
         """
 
-        super(StackAndSubsetModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -228,7 +228,7 @@ class StackCubesModule(ProcessingModule):
             None
         """
 
-        super(StackCubesModule, self).__init__(name_in=name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -238,7 +238,7 @@ class StackCubesModule(ProcessingModule):
     @typechecked
     def run(self) -> None:
         """
-        Run method of the module. Uses the NFRAMES attribute to select the images of each cube,
+        Run method of the module. Uses the ``NFRAMES`` attribute to select the images of each cube,
         calculates the mean or median of each cube, and saves the data and attributes.
 
         Returns
@@ -298,7 +298,7 @@ class StackCubesModule(ProcessingModule):
 class DerotateAndStackModule(ProcessingModule):
     """
     Pipeline module for derotating and/or stacking (i.e., taking the median or average) of the
-    images.
+    images, either along the time or the wavelengths dimension.
     """
 
     @typechecked
@@ -308,7 +308,8 @@ class DerotateAndStackModule(ProcessingModule):
                  image_out_tag: str,
                  derotate: bool = True,
                  stack: Optional[str] = None,
-                 extra_rot: float = 0.) -> None:
+                 extra_rot: float = 0.,
+                 dimension: str = 'time') -> None:
         """
         Parameters
         ----------
@@ -317,15 +318,19 @@ class DerotateAndStackModule(ProcessingModule):
         image_in_tag : str
             Tag of the database entry that is read as input.
         image_out_tag : str
-            Tag of the database entry that is written as output. The output is either 2D
-            (*stack=False*) or 3D (*stack=True*).
+            Tag of the database entry that is written as output. The shape of the output data is
+            equal to the data from ``image_in_tag``. If the argument of ``stack`` is not None,
+            then the size of the collapsed dimension is equal to 1.
         derotate : bool
-            Derotate the images with the PARANG attribute.
+            Derotate the images with the ``PARANG`` attribute.
         stack : str
             Type of stacking applied after optional derotation ('mean', 'median', or None for no
             stacking).
         extra_rot : float
             Additional rotation angle of the images in clockwise direction (deg).
+        dimension : str
+            Dimension along which the images are stacked. Can either be 'time' or 'wavelength'. If
+            the ``image_in_tag`` has three dimensions then ``dimension`` is always fixed to 'time'.
 
         Returns
         -------
@@ -333,7 +338,7 @@ class DerotateAndStackModule(ProcessingModule):
             None
         """
 
-        super(DerotateAndStackModule, self).__init__(name_in=name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -341,12 +346,14 @@ class DerotateAndStackModule(ProcessingModule):
         self.m_derotate = derotate
         self.m_stack = stack
         self.m_extra_rot = extra_rot
+        self.m_dimension = dimension
 
     @typechecked
     def run(self) -> None:
         """
-        Run method of the module. Uses the PARANG attributes to derotate the images (if *derotate*
-        is set to True) and applies an optional mean or median stacking afterwards.
+        Run method of the module. Uses the ``PARANG`` attributes to derotate the images (if
+        ``derotate`` is set to ``True``) and applies an optional mean or median stacking
+        along the time or wavelengths dimension afterwards.
 
         Returns
         -------
@@ -356,27 +363,55 @@ class DerotateAndStackModule(ProcessingModule):
 
         @typechecked
         def _initialize(ndim: int,
-                        npix: int) -> Tuple[int, np.ndarray, Optional[np.ndarray]]:
+                        npix: int) -> Tuple[int, np.ndarray, Optional[np.ndarray],
+                                            Optional[np.ndarray]]:
 
             if ndim == 2:
                 nimages = 1
+
             elif ndim == 3:
-                nimages = self.m_image_in_port.get_shape()[0]
+                nimages = self.m_image_in_port.get_shape()[-3]
 
-            if self.m_stack == 'median':
-                frames = np.array([0, nimages])
-            else:
-                frames = memory_frames(memory, nimages)
+                if self.m_stack == 'median':
+                    frames = np.array([0, nimages])
+
+                else:
+                    frames = memory_frames(memory, nimages)
+
+            elif ndim == 4:
+                nimages = self.m_image_in_port.get_shape()[-3]
+                nwave = self.m_image_in_port.get_shape()[-4]
+
+                if self.m_dimension == 'time':
+                    frames = np.linspace(0, nwave, nwave+1)
+
+                elif self.m_dimension == 'wavelength':
+                    frames = np.linspace(0, nimages, nimages+1)
+
+                else:
+                    raise ValueError('The dimension should be set to \'time\' or \'wavelength\'.')
 
             if self.m_stack == 'mean':
-                im_tot = np.zeros((npix, npix))
+                if ndim == 4:
+                    if self.m_dimension == 'time':
+                        im_tot = np.zeros((nwave, npix, npix))
+
+                    elif self.m_dimension == 'wavelength':
+                        im_tot = np.zeros((nimages, npix, npix))
+
+                else:
+                    im_tot = np.zeros((npix, npix))
+
             else:
                 im_tot = None
 
-            return nimages, frames, im_tot
+            if self.m_stack is None and ndim == 4:
+                im_none = np.zeros((nwave, nimages, npix, npix))
 
-        if self.m_image_in_port.tag == self.m_image_out_port.tag:
-            raise ValueError('Input and output port should have a different tag.')
+            else:
+                im_none = None
+
+            return nimages, frames, im_tot, im_none
 
         memory = self._m_config_port.get_attribute('MEMORY')
 
@@ -384,36 +419,98 @@ class DerotateAndStackModule(ProcessingModule):
             parang = self.m_image_in_port.get_attribute('PARANG')
 
         ndim = self.m_image_in_port.get_ndim()
-        npix = self.m_image_in_port.get_shape()[1]
+        npix = self.m_image_in_port.get_shape()[-2]
 
-        nimages, frames, im_tot = _initialize(ndim, npix)
+        nimages, frames, im_tot, im_none = _initialize(ndim, npix)
 
         start_time = time.time()
         for i, _ in enumerate(frames[:-1]):
             progress(i, len(frames[:-1]), 'Derotating and/or stacking images...', start_time)
 
-            images = self.m_image_in_port[frames[i]:frames[i+1], ]
+            if ndim == 3:
+                # Get the images and make sure that they have the correct 3D shape
+                # with the dimensions (batch_size, height, width)
+                images = self.m_image_in_port[frames[i]:frames[i+1], ]
+
+            elif ndim == 4:
+                # Process all time frames per exposure at once
+                if self.m_dimension == 'time':
+                    images = self.m_image_in_port[i, :, ]
+
+                elif self.m_dimension == 'wavelength':
+                    images = self.m_image_in_port[:, i, ]
 
             if self.m_derotate:
-                angles = -parang[frames[i]:frames[i+1]]+self.m_extra_rot
+                if ndim == 4:
+                    if self.m_dimension == 'time':
+                        angles = -1.*parang + self.m_extra_rot
+
+                    elif self.m_dimension == 'wavelength':
+                        n_wavel = self.m_image_in_port.get_shape()[-4]
+                        angles = np.full(n_wavel, -1.*parang[i]) + self.m_extra_rot
+
+                else:
+                    angles = -parang[frames[i]:frames[i+1]]+self.m_extra_rot
+
                 images = rotate_images(images, angles)
 
             if self.m_stack is None:
                 if ndim == 2:
                     self.m_image_out_port.set_all(images[np.newaxis, ...])
+
                 elif ndim == 3:
                     self.m_image_out_port.append(images, data_dim=3)
 
+                elif ndim == 4:
+                    if self.m_dimension == 'time':
+                        im_none[i] = images
+
+                    elif self.m_dimension == 'wavelength':
+                        im_none[:, i] = images
+
             elif self.m_stack == 'mean':
-                im_tot += np.sum(images, axis=0)
+                if ndim == 4:
+                    im_tot[i] = np.sum(images, axis=0)
+
+                else:
+                    im_tot += np.sum(images, axis=0)
 
         if self.m_stack == 'mean':
-            im_stack = im_tot/float(nimages)
-            self.m_image_out_port.set_all(im_stack[np.newaxis, ...])
+            if ndim == 4:
+                im_stack = im_tot/float(im_tot.shape[0])
+
+                if self.m_dimension == 'time':
+                    self.m_image_out_port.set_all(im_stack[:, np.newaxis, ...])
+
+                elif self.m_dimension == 'wavelength':
+                    self.m_image_out_port.set_all(im_stack[np.newaxis, ...])
+
+            else:
+                im_stack = im_tot/float(nimages)
+                self.m_image_out_port.set_all(im_stack[np.newaxis, ...])
 
         elif self.m_stack == 'median':
-            im_stack = np.median(images, axis=0)
-            self.m_image_out_port.set_all(im_stack[np.newaxis, ...])
+            if ndim == 4:
+                images = self.m_image_in_port[:]
+
+                if self.m_dimension == 'time':
+                    im_stack = np.median(images, axis=1)
+                    self.m_image_out_port.set_all(im_stack[:, np.newaxis, ...])
+
+                elif self.m_dimension == 'wavelength':
+                    im_stack = np.median(images, axis=0)
+                    self.m_image_out_port.set_all(im_stack[np.newaxis, ...])
+
+            else:
+                im_stack = np.median(images, axis=0)
+                self.m_image_out_port.set_all(im_stack[np.newaxis, ...])
+
+        elif self.m_stack is None and ndim == 4:
+            if self.m_dimension == 'time':
+                self.m_image_out_port.set_all(im_none)
+
+            elif self.m_dimension == 'wavelength':
+                self.m_image_out_port.set_all(im_none)
 
         if self.m_derotate or self.m_stack is not None:
             self.m_image_out_port.copy_attributes(self.m_image_in_port)
@@ -456,7 +553,7 @@ class CombineTagsModule(ProcessingModule):
             None
         """
 
-        super(CombineTagsModule, self).__init__(name_in=name_in)
+        super().__init__(name_in)
 
         self.m_image_out_port = self.add_output_port(image_out_tag)
 
diff --git a/pynpoint/processing/timedenoising.py b/pynpoint/processing/timedenoising.py
index 51525ef..4e639b3 100644
--- a/pynpoint/processing/timedenoising.py
+++ b/pynpoint/processing/timedenoising.py
@@ -1,19 +1,18 @@
 """
 Continuous wavelet transform (CWT) and discrete wavelet transform (DWT) denoising for speckle
 suppression in the time domain. The module can be used as additional preprocessing step. See
-Bonse et al. 2018 more information.
+Bonse et al. (arXiv:1804.05063) for more information.
 """
 
 from typing import Union
 
 import pywt
-import numpy as np
 
-from statsmodels.robust import mad
 from typeguard import typechecked
 
 from pynpoint.core.processing import ProcessingModule
-from pynpoint.util.wavelets import WaveletAnalysisCapsule
+from pynpoint.util.apply_func import cwt_denoise_line_in_time, dwt_denoise_line_in_time, \
+                                     normalization
 
 
 class CwtWaveletConfiguration:
@@ -83,8 +82,8 @@ class DwtWaveletConfiguration:
 
         # create list of supported wavelets
         supported = []
-        for family in pywt.families():
-            supported += pywt.wavelist(family)
+        for item in pywt.families():
+            supported += pywt.wavelist(item)
 
         # check if wavelet is supported
         if wavelet not in supported:
@@ -96,7 +95,7 @@ class DwtWaveletConfiguration:
 class WaveletTimeDenoisingModule(ProcessingModule):
     """
     Pipeline module for speckle subtraction in the time domain by using CWT or DWT wavelet
-    shrinkage (see Bonse et al. 2018).
+    shrinkage. See Bonse et al. (arXiv:1804.05063) for details.
     """
 
     __author__ = 'Markus Bonse, Tomas Stolker'
@@ -114,22 +113,23 @@ class WaveletTimeDenoisingModule(ProcessingModule):
         Parameters
         ----------
         name_in : str
-            Unique name of the module instance.
+            Unique name for the pipeline module.
         image_in_tag : str
-            Tag of the database entry that is read as input.
+            Database tag with the input data.
         image_out_tag : str
-            Tag of the database entry that is written as output.
+            Database tag for the output data.
         wavelet_configuration : pynpoint.processing.timedenoising.CwtWaveletConfiguration or \
                                 pynpoint.processing.timedenoising.DwtWaveletConfiguration
-            Instance of DwtWaveletConfiguration or CwtWaveletConfiguration which gives the
-            parameters of the wavelet transformation to be used.
+            Instance of :class:`~pynpoint.processing.timedenoising.DwtWaveletConfiguration` or
+            :class:`~pynpoint.processing.timedenoising.CwtWaveletConfiguration` which contains the
+            parameters for the wavelet transformation.
         padding : str
-            Padding method ('zero', 'mirror', or 'none').
+            Padding method (``'zero'``, ``'mirror'``, or ``'none'``).
         median_filter : bool
-            If true a median filter in time is applied which removes outliers in time like cosmic
-            rays.
+            Apply a median filter in time to remove outliers, for example due to cosmic rays.
         threshold_function : str
-            Threshold function used for wavelet shrinkage in the wavelet space ('soft' or 'hard').
+            Threshold function that is used for wavelet shrinkage in the wavelet space
+            (``'soft'`` or ``'hard'``).
 
         Returns
         -------
@@ -137,7 +137,7 @@ class WaveletTimeDenoisingModule(ProcessingModule):
             None
         """
 
-        super(WaveletTimeDenoisingModule, self).__init__(name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -151,6 +151,9 @@ class WaveletTimeDenoisingModule(ProcessingModule):
         assert threshold_function in ['soft', 'hard']
         self.m_threshold_function = threshold_function == 'soft'
 
+        assert isinstance(wavelet_configuration,
+                          (DwtWaveletConfiguration, CwtWaveletConfiguration))
+
     @typechecked
     def run(self) -> None:
         """
@@ -170,87 +173,22 @@ class WaveletTimeDenoisingModule(ProcessingModule):
             if self.m_padding == 'none':
                 self.m_padding = 'periodic'
 
-            @typechecked
-            def denoise_line_in_time(signal_in: np.ndarray) -> np.ndarray:
-                """
-                Definition of the temporal denoising for DWT.
-
-                Parameters
-                ----------
-                signal_in : numpy.ndarray
-                    1D input signal.
-
-                Returns
-                -------
-                numpy.ndarray
-                    Multilevel 1D inverse discrete wavelet transform.
-                """
-
-                if self.m_threshold_function:
-                    threshold_mode = 'soft'
-                else:
-                    threshold_mode = 'hard'
-
-                coef = pywt.wavedec(signal_in,
-                                    wavelet=self.m_wavelet_configuration.m_wavelet,
-                                    level=None,
-                                    mode=self.m_padding)
-
-                sigma = mad(coef[-1])
-                threshold = sigma * np.sqrt(2 * np.log(len(signal_in)))
-
-                denoised = coef[:]
-
-                denoised[1:] = (pywt.threshold(i,
-                                               value=threshold,
-                                               mode=threshold_mode)
-                                for i in denoised[1:])
-
-                return pywt.waverec(denoised,
-                                    wavelet=self.m_wavelet_configuration.m_wavelet,
-                                    mode=self.m_padding)
+            self.apply_function_in_time(dwt_denoise_line_in_time,
+                                        self.m_image_in_port,
+                                        self.m_image_out_port,
+                                        func_args=(self.m_threshold_function,
+                                                   self.m_padding,
+                                                   self.m_wavelet_configuration))
 
         elif isinstance(self.m_wavelet_configuration, CwtWaveletConfiguration):
 
-            @typechecked
-            def denoise_line_in_time(signal_in: np.ndarray) -> np.ndarray:
-                """
-                Definition of temporal denoising for CWT.
-
-                Parameters
-                ----------
-                signal_in : numpy.ndarray
-                    1D input signal.
-
-                Returns
-                -------
-                numpy.ndarray
-                    1D output signal.
-                """
-
-                cwt_capsule = WaveletAnalysisCapsule(
-                    signal_in=signal_in,
-                    padding=self.m_padding,
-                    wavelet_in=self.m_wavelet_configuration.m_wavelet,
-                    order=self.m_wavelet_configuration.m_wavelet_order,
-                    frequency_resolution=self.m_wavelet_configuration.m_resolution)
-
-                cwt_capsule.compute_cwt()
-                cwt_capsule.denoise_spectrum(soft=self.m_threshold_function)
-
-                if self.m_median_filter:
-                    cwt_capsule.median_filter()
-
-                cwt_capsule.update_signal()
-
-                return cwt_capsule.get_signal()
-
-        else:
-            return
-
-        self.apply_function_in_time(denoise_line_in_time,
-                                    self.m_image_in_port,
-                                    self.m_image_out_port)
+            self.apply_function_in_time(cwt_denoise_line_in_time,
+                                        self.m_image_in_port,
+                                        self.m_image_out_port,
+                                        func_args=(self.m_threshold_function,
+                                                   self.m_padding,
+                                                   self.m_median_filter,
+                                                   self.m_wavelet_configuration))
 
         if self.m_threshold_function:
             history = 'threshold_function = soft'
@@ -264,8 +202,8 @@ class WaveletTimeDenoisingModule(ProcessingModule):
 
 class TimeNormalizationModule(ProcessingModule):
     """
-    Pipeline module for normalization of global brightness variations of the detector
-    (see Bonse et al. 2018).
+    Pipeline module for normalization of global brightness variations of the detector. See Bonse
+    et al. (arXiv:1804.05063) for details.
     """
 
     __author__ = 'Markus Bonse, Tomas Stolker'
@@ -279,11 +217,11 @@ class TimeNormalizationModule(ProcessingModule):
         Parameters
         ----------
         name_in : str
-            Unique name of the module instance.
+            Unique name for the pipeline module.
         image_in_tag : str
-            Tag of the database entry that is read as input.
+            Database tag with the input data.
         image_out_tag : str
-            Tag of the database entry that is written as output.
+            Database tag for the output data.
 
         Returns
         -------
@@ -291,7 +229,7 @@ class TimeNormalizationModule(ProcessingModule):
             None
         """
 
-        super(TimeNormalizationModule, self).__init__(name_in=name_in)
+        super().__init__(name_in)
 
         self.m_image_in_port = self.add_input_port(image_in_tag)
         self.m_image_out_port = self.add_output_port(image_out_tag)
@@ -307,11 +245,7 @@ class TimeNormalizationModule(ProcessingModule):
             None
         """
 
-        @typechecked
-        def _normalization(image_in: np.ndarray) -> np.ndarray:
-            return image_in - np.median(image_in)
-
-        self.apply_function_to_images(_normalization,
+        self.apply_function_to_images(normalization,
                                       self.m_image_in_port,
                                       self.m_image_out_port,
                                       'Time normalization')
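The refactoring in this file replaces nested helper functions with module-level functions (now in `pynpoint/util/apply_func.py`). The motivation is that nested functions cannot be serialized for the multiprocessing workers. A minimal sketch, independent of PynPoint, showing why a locally defined function fails to pickle:

```python
import pickle

def make_closure():
    # a function defined inside another function, like the old
    # denoise_line_in_time closures in WaveletTimeDenoisingModule
    def nested(value):
        return value - 1.0
    return nested

# functions are pickled by qualified name; a local object cannot be
# resolved that way, so multiprocessing workers reject it
try:
    pickle.dumps(make_closure())
    picklable = True
except (pickle.PicklingError, AttributeError):
    picklable = False

print('nested function picklable:', picklable)
```

Module-level functions, by contrast, are pickled by reference to their module, which is why the denoising helpers were moved out of `run()`.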
diff --git a/pynpoint/readwrite/attr_reading.py b/pynpoint/readwrite/attr_reading.py
index 1af946e..4c2e1b7 100644
--- a/pynpoint/readwrite/attr_reading.py
+++ b/pynpoint/readwrite/attr_reading.py
@@ -5,12 +5,12 @@ Modules for reading attributes from a FITS or ASCII file.
 import os
 import warnings
 
-import numpy as np
-
 from typing import Optional
 
-from typeguard import typechecked
+import numpy as np
+
 from astropy.io import fits
+from typeguard import typechecked
 
 from pynpoint.core.attributes import get_attributes
 from pynpoint.core.processing import ReadingModule
@@ -59,7 +59,7 @@ class AttributeReadingModule(ReadingModule):
             None
         """
 
-        super(AttributeReadingModule, self).__init__(name_in, input_dir)
+        super().__init__(name_in, input_dir=input_dir)
 
         self.m_data_port = self.add_output_port(data_tag)
 
@@ -155,7 +155,8 @@ class ParangReadingModule(ReadingModule):
         NoneType
             None
         """
-        super(ParangReadingModule, self).__init__(name_in, input_dir)
+
+        super().__init__(name_in, input_dir=input_dir)
 
         self.m_data_port = self.add_output_port(data_tag)
 
@@ -188,7 +189,7 @@ class ParangReadingModule(ReadingModule):
                              f'the parallactic angles.')
 
         print(f'Number of angles: {parang.size}')
-        print(f'Rotation range: {parang[0]:.2f} - {parang[-1]:.2f} deg')
+        print(f'Rotation range: {parang[0]:.2f} -> {parang[-1]:.2f} deg')
 
         status = self.m_data_port.check_non_static_attribute('PARANG', parang)
 
@@ -247,7 +248,8 @@ class WavelengthReadingModule(ReadingModule):
         NoneType
             None
         """
-        super(WavelengthReadingModule, self).__init__(name_in, input_dir)
+
+        super().__init__(name_in, input_dir=input_dir)
 
         self.m_data_port = self.add_output_port(data_tag)
 
diff --git a/pynpoint/readwrite/attr_writing.py b/pynpoint/readwrite/attr_writing.py
index 5d1979d..28d46fe 100644
--- a/pynpoint/readwrite/attr_writing.py
+++ b/pynpoint/readwrite/attr_writing.py
@@ -4,10 +4,10 @@ Modules for writing data as text file.
 
 import os
 
-import numpy as np
-
 from typing import Optional
 
+import numpy as np
+
 from typeguard import typechecked
 
 from pynpoint.core.processing import WritingModule
@@ -52,7 +52,7 @@ class AttributeWritingModule(WritingModule):
             None
         """
 
-        super(AttributeWritingModule, self).__init__(name_in, output_dir)
+        super().__init__(name_in, output_dir=output_dir)
 
         self.m_data_port = self.add_input_port(data_tag)
 
@@ -126,7 +126,7 @@ class ParangWritingModule(WritingModule):
             None
         """
 
-        super(ParangWritingModule, self).__init__(name_in, output_dir)
+        super().__init__(name_in, output_dir=output_dir)
 
         self.m_data_port = self.add_input_port(data_tag)
 
diff --git a/pynpoint/readwrite/fitsreading.py b/pynpoint/readwrite/fitsreading.py
index dcbd0d7..d0449e2 100644
--- a/pynpoint/readwrite/fitsreading.py
+++ b/pynpoint/readwrite/fitsreading.py
@@ -75,7 +75,7 @@ class FitsReadingModule(ReadingModule):
             None
         """
 
-        super(FitsReadingModule, self).__init__(name_in, input_dir)
+        super().__init__(name_in, input_dir=input_dir)
 
         self.m_image_out_port = self.add_output_port(image_tag)
 
diff --git a/pynpoint/readwrite/fitswriting.py b/pynpoint/readwrite/fitswriting.py
index 4c75479..d67f195 100644
--- a/pynpoint/readwrite/fitswriting.py
+++ b/pynpoint/readwrite/fitswriting.py
@@ -52,7 +52,7 @@ class FitsWritingModule(WritingModule):
             A two element tuple which specifies a begin and end frame of the export. This can be
             used to save a subsets of a large dataset. The whole dataset will be exported if set
             to None.
-        overwrite : bool, None
+        overwrite : bool
             Overwrite an existing FITS file with an identical filename.
         subset_size : int, None
             Size of the subsets that are created when storing the data. This can be useful if the
@@ -65,7 +65,7 @@ class FitsWritingModule(WritingModule):
             None
         """
 
-        super(FitsWritingModule, self).__init__(name_in=name_in, output_dir=output_dir)
+        super().__init__(name_in, output_dir=output_dir)
 
         if not file_name.endswith('.fits'):
             raise ValueError('Output \'file_name\' requires the FITS extension.')
diff --git a/pynpoint/readwrite/hdf5reading.py b/pynpoint/readwrite/hdf5reading.py
index 04aa755..131bc27 100644
--- a/pynpoint/readwrite/hdf5reading.py
+++ b/pynpoint/readwrite/hdf5reading.py
@@ -59,7 +59,7 @@ class Hdf5ReadingModule(ReadingModule):
             None
         """
 
-        super(Hdf5ReadingModule, self).__init__(name_in, input_dir)
+        super().__init__(name_in, input_dir=input_dir)
 
         if tag_dictionary is None:
             tag_dictionary = {}
diff --git a/pynpoint/readwrite/hdf5writing.py b/pynpoint/readwrite/hdf5writing.py
index de74d8e..056d001 100644
--- a/pynpoint/readwrite/hdf5writing.py
+++ b/pynpoint/readwrite/hdf5writing.py
@@ -55,7 +55,7 @@ class Hdf5WritingModule(WritingModule):
             None
         """
 
-        super(Hdf5WritingModule, self).__init__(name_in, output_dir)
+        super().__init__(name_in, output_dir=output_dir)
 
         if tag_dictionary is None:
             tag_dictionary = {}
diff --git a/pynpoint/readwrite/nearreading.py b/pynpoint/readwrite/nearreading.py
index bc80343..bc8775c 100644
--- a/pynpoint/readwrite/nearreading.py
+++ b/pynpoint/readwrite/nearreading.py
@@ -76,7 +76,7 @@ class NearReadingModule(ReadingModule):
             None
         """
 
-        super(NearReadingModule, self).__init__(name_in, input_dir)
+        super().__init__(name_in, input_dir=input_dir)
 
         self.m_chopa_out_port = self.add_output_port(chopa_out_tag)
         self.m_chopb_out_port = self.add_output_port(chopb_out_tag)
diff --git a/pynpoint/readwrite/textwriting.py b/pynpoint/readwrite/textwriting.py
index 0fefecd..e65dc2b 100644
--- a/pynpoint/readwrite/textwriting.py
+++ b/pynpoint/readwrite/textwriting.py
@@ -50,7 +50,7 @@ class TextWritingModule(WritingModule):
             None
         """
 
-        super(TextWritingModule, self).__init__(name_in, output_dir)
+        super().__init__(name_in, output_dir=output_dir)
 
         self.m_data_port = self.add_input_port(data_tag)
 
diff --git a/pynpoint/util/analysis.py b/pynpoint/util/analysis.py
index 6ac6b3c..ccdb0df 100644
--- a/pynpoint/util/analysis.py
+++ b/pynpoint/util/analysis.py
@@ -282,8 +282,7 @@ def merit_function(residuals: np.ndarray,
         Chi-square value.
     """
 
-    rr_grid = pixel_distance(im_shape=residuals.shape,
-                             position=(aperture[0], aperture[1]))
+    rr_grid, _, _ = pixel_distance(residuals.shape, position=(aperture[0], aperture[1]))
 
     indices = np.where(rr_grid <= aperture[2])
 
diff --git a/pynpoint/util/apply_func.py b/pynpoint/util/apply_func.py
new file mode 100644
index 0000000..1b8c9f1
--- /dev/null
+++ b/pynpoint/util/apply_func.py
@@ -0,0 +1,849 @@
+"""
+Functions that are executed with
+:func:`~pynpoint.core.processing.ProcessingModule.apply_function_to_images` and
+:func:`~pynpoint.core.processing.ProcessingModule.apply_function_in_time`. The functions are placed
+here such that they are picklable by the multiprocessing functionality. The first two parameters
+are always the sliced data and the index in the dataset.
+
+TODO Docstrings are missing for most of the functions.
+"""
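The calling convention described in the docstring can be illustrated with a minimal, hypothetical function (the name `subtract_offset` is only for illustration): the sliced data and its index come first, and any extra values arrive via `func_args`.

```python
import numpy as np

def subtract_offset(image_in: np.ndarray,
                    im_index: int,
                    offset: float) -> np.ndarray:
    # first argument: the sliced data (a single image or pixel line)
    # second argument: its index in the dataset
    # remaining arguments: the values passed with func_args
    return image_in - offset

image = np.full((4, 4), 5.0)
result = subtract_offset(image, 0, 2.0)
```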
+
+import copy
+import math
+import warnings
+
+from typing import List, Optional, Union, Tuple
+
+import cv2
+import numpy as np
+import pywt
+
+from numba import jit
+from photutils import aperture_photometry
+from photutils.aperture import Aperture
+from scipy.ndimage.filters import gaussian_filter
+from scipy.optimize import curve_fit
+from skimage.registration import phase_cross_correlation
+from skimage.transform import rescale
+from statsmodels.robust import mad
+from typeguard import typechecked
+
+from pynpoint.core.dataio import InputPort, OutputPort
+from pynpoint.util.image import center_pixel, crop_image, scale_image, shift_image
+from pynpoint.util.star import locate_star
+from pynpoint.util.wavelets import WaveletAnalysisCapsule
+
+
+@typechecked
+def image_scaling(image_in: np.ndarray,
+                  im_index: int,
+                  scaling_y: float,
+                  scaling_x: float,
+                  scaling_flux: float) -> np.ndarray:
+
+    return scaling_flux * scale_image(image_in, scaling_y, scaling_x)
+
+
+@typechecked
+def subtract_line(image_in: np.ndarray,
+                  im_index: int,
+                  mask: np.ndarray,
+                  combine: str,
+                  im_shape: Tuple[int, int]) -> np.ndarray:
+
+    image_tmp = np.copy(image_in)
+    image_tmp[mask == 0.] = np.nan
+
+    if combine == 'mean':
+        row_mean = np.nanmean(image_tmp, axis=1)
+        col_mean = np.nanmean(image_tmp, axis=0)
+
+        x_grid, y_grid = np.meshgrid(col_mean, row_mean)
+        subtract = (x_grid+y_grid)/2.
+
+    elif combine == 'median':
+        col_median = np.nanmedian(image_tmp, axis=0)
+        col_2d = np.tile(col_median, (im_shape[1], 1))
+
+        image_tmp -= col_2d
+        image_tmp[mask == 0.] = np.nan
+
+        row_median = np.nanmedian(image_tmp, axis=1)
+        row_2d = np.tile(row_median, (im_shape[0], 1))
+        row_2d = np.rot90(row_2d)  # 90 deg rotation in counterclockwise direction
+
+        subtract = col_2d + row_2d
+
+    return image_in - subtract
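The median branch of `subtract_line` removes row and column striping by subtracting 1D medians. A standalone numpy sketch of the column part (without the mask handling used above):

```python
import numpy as np

rng = np.random.default_rng(1)
col_offsets = rng.normal(0.0, 5.0, size=8)

# synthetic detector frame: a fixed offset per column on top of weak noise
image = np.tile(col_offsets, (8, 1)) + rng.normal(0.0, 0.1, size=(8, 8))

# subtract the column medians, broadcast over the rows
col_median = np.median(image, axis=0)
cleaned = image - col_median
```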
+
+
+@typechecked
+def align_image(image_in: np.ndarray,
+                im_index: int,
+                interpolation: str,
+                accuracy: float,
+                resize: Optional[float],
+                num_references: int,
+                subframe: Optional[float],
+                ref_images_reshape: np.ndarray,
+                ref_images_shape: Tuple[int, int, int]) -> np.ndarray:
+
+    offset = np.array([0., 0.])
+
+    # Reshape the reference images back to their original 3D shape
+    # The original shape can not be used directly because of util.module.update_arguments
+    ref_images = ref_images_reshape.reshape(ref_images_shape)
+
+    for i in range(num_references):
+        if subframe is None:
+            tmp_offset, _, _ = phase_cross_correlation(ref_images[i, :, :],
+                                                       image_in,
+                                                       upsample_factor=accuracy)
+
+        else:
+            sub_in = crop_image(image_in, None, subframe)
+            sub_ref = crop_image(ref_images[i, :, :], None, subframe)
+
+            tmp_offset, _, _ = phase_cross_correlation(sub_ref,
+                                                       sub_in,
+                                                       upsample_factor=accuracy)
+        offset += tmp_offset
+
+    offset /= float(num_references)
+
+    if resize is not None:
+        offset *= resize
+
+        sum_before = np.sum(image_in)
+
+        tmp_image = rescale(image_in,
+                            (resize, resize),
+                            order=5,
+                            mode='reflect',
+                            multichannel=False,
+                            anti_aliasing=True)
+
+        sum_after = np.sum(tmp_image)
+
+        # Conserve the total flux, which is not preserved by the rescale function
+        tmp_image = tmp_image*(sum_before/sum_after)
+
+    else:
+        tmp_image = image_in
+
+    return shift_image(tmp_image, offset, interpolation)
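`align_image` averages sub-pixel offsets from `phase_cross_correlation`. The integer-pixel core of that method can be sketched with plain numpy: the normalized cross-power spectrum of two shifted images has an inverse FFT that peaks at their relative offset.

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.normal(size=(64, 64))

# shift the image by (5, -3) pixels with circular wrapping
img = np.roll(ref, (5, -3), axis=(0, 1))

# normalized cross-power spectrum; its inverse FFT is a delta
# function at the offset that maps img back onto ref
cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
corr = np.fft.ifft2(cross / np.abs(cross)).real

peak = np.unravel_index(np.argmax(corr), corr.shape)
offset = tuple(int(p) - s if p > s // 2 else int(p)
               for p, s in zip(peak, corr.shape))
```

Applying `np.roll(img, offset, axis=(0, 1))` recovers `ref`; `phase_cross_correlation` adds sub-pixel upsampling on top of this idea.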
+
+
+@typechecked
+def fit_2d_function(image: np.ndarray,
+                    im_index: int,
+                    mask_radii: Tuple[float, float],
+                    sign: str,
+                    model: str,
+                    filter_size: Optional[float],
+                    guess: Union[Tuple[float, float, float, float, float, float, float],
+                                 Tuple[float, float, float, float, float, float, float, float]],
+                    mask_out_port: Optional[OutputPort],
+                    xx_grid: np.ndarray,
+                    yy_grid: np.ndarray,
+                    rr_ap: np.ndarray,
+                    pixscale: float) -> np.ndarray:
+
+    @typechecked
+    def gaussian_2d(grid: Union[Tuple[np.ndarray, np.ndarray], np.ndarray],
+                    x_center: float,
+                    y_center: float,
+                    fwhm_x: float,
+                    fwhm_y: float,
+                    amp: float,
+                    theta: float,
+                    offset: float) -> np.ndarray:
+        """
+        Function to create a 2D elliptical Gaussian model.
+
+        Parameters
+        ----------
+        grid : tuple(np.ndarray, np.ndarray), np.ndarray
+            A tuple of two 2D arrays with the mesh grid points in x and y
+            direction, or an equivalent 3D numpy array with 2 elements
+            along the first axis.
+        x_center : float
+            Offset of the model center along the x axis (pix).
+        y_center : float
+            Offset of the model center along the y axis (pix).
+        fwhm_x : float
+            Full width at half maximum along the x axis (pix).
+        fwhm_y : float
+            Full width at half maximum along the y axis (pix).
+        amp : float
+            Peak flux.
+        theta : float
+            Rotation angle in counterclockwise direction (rad).
+        offset : float
+            Flux offset.
+
+        Returns
+        -------
+        np.ndarray
+            Raveled 2D elliptical Gaussian model.
+        """
+
+        (xx_grid, yy_grid) = grid
+
+        x_diff = xx_grid - x_center
+        y_diff = yy_grid - y_center
+
+        sigma_x = fwhm_x/math.sqrt(8.*math.log(2.))
+        sigma_y = fwhm_y/math.sqrt(8.*math.log(2.))
+
+        a_gauss = 0.5 * ((np.cos(theta)/sigma_x)**2 + (np.sin(theta)/sigma_y)**2)
+        b_gauss = 0.5 * ((np.sin(2.*theta)/sigma_x**2) - (np.sin(2.*theta)/sigma_y**2))
+        c_gauss = 0.5 * ((np.sin(theta)/sigma_x)**2 + (np.cos(theta)/sigma_y)**2)
+
+        gaussian = offset + amp*np.exp(-(a_gauss*x_diff**2 + b_gauss*x_diff*y_diff +
+                                         c_gauss*y_diff**2))
+
+        return gaussian[(rr_ap > mask_radii[0]) & (rr_ap < mask_radii[1])]
+
+    @typechecked
+    def moffat_2d(grid: Union[Tuple[np.ndarray, np.ndarray], np.ndarray],
+                  x_center: float,
+                  y_center: float,
+                  fwhm_x: float,
+                  fwhm_y: float,
+                  amp: float,
+                  theta: float,
+                  offset: float,
+                  beta: float) -> np.ndarray:
+        """
+        Function to create a 2D elliptical Moffat model.
+
+        The parametrization used here is equivalent to the one in AsPyLib:
+        http://www.aspylib.com/doc/aspylib_fitting.html#elliptical-moffat-psf
+
+        Parameters
+        ----------
+        grid : tuple(np.ndarray, np.ndarray), np.ndarray
+            A tuple of two 2D arrays with the mesh grid points in x and y
+            direction, or an equivalent 3D numpy array with 2 elements
+            along the first axis.
+        x_center : float
+            Offset of the model center along the x axis (pix).
+        y_center : float
+            Offset of the model center along the y axis (pix).
+        fwhm_x : float
+            Full width at half maximum along the x axis (pix).
+        fwhm_y : float
+            Full width at half maximum along the y axis (pix).
+        amp : float
+            Peak flux.
+        theta : float
+            Rotation angle in counterclockwise direction (rad).
+        offset : float
+            Flux offset.
+        beta : float
+            Power index.
+
+        Returns
+        -------
+        np.ndarray
+            Raveled 2D elliptical Moffat model.
+        """
+
+        (xx_grid, yy_grid) = grid
+
+        x_diff = xx_grid - x_center
+        y_diff = yy_grid - y_center
+
+        if 2.**(1./beta)-1. < 0.:
+            alpha_x = np.nan
+            alpha_y = np.nan
+
+        else:
+            alpha_x = 0.5*fwhm_x/np.sqrt(2.**(1./beta)-1.)
+            alpha_y = 0.5*fwhm_y/np.sqrt(2.**(1./beta)-1.)
+
+        if alpha_x == 0. or alpha_y == 0.:
+            a_moffat = np.nan
+            b_moffat = np.nan
+            c_moffat = np.nan
+
+        else:
+            a_moffat = (np.cos(theta)/alpha_x)**2. + (np.sin(theta)/alpha_y)**2.
+            b_moffat = (np.sin(theta)/alpha_x)**2. + (np.cos(theta)/alpha_y)**2.
+            c_moffat = 2.*np.sin(theta)*np.cos(theta)*(1./alpha_x**2. - 1./alpha_y**2.)
+
+        a_term = a_moffat*x_diff**2
+        b_term = b_moffat*y_diff**2
+        c_term = c_moffat*x_diff*y_diff
+
+        moffat = offset + amp / (1.+a_term+b_term+c_term)**beta
+
+        return moffat[(rr_ap > mask_radii[0]) & (rr_ap < mask_radii[1])]
+
+    if filter_size:
+        image = gaussian_filter(image, filter_size)
+
+    if mask_out_port is not None:
+        mask = np.copy(image)
+
+        mask[(rr_ap < mask_radii[0]) | (rr_ap > mask_radii[1])] = 0.
+
+        mask_out_port.append(mask, data_dim=3)
+
+    if sign == 'negative':
+        image = -1.*image + np.abs(np.min(-1.*image))
+
+    image = image[(rr_ap > mask_radii[0]) & (rr_ap < mask_radii[1])]
+
+    if model == 'gaussian':
+        model_func = gaussian_2d
+
+    elif model == 'moffat':
+        model_func = moffat_2d
+
+    try:
+        popt, pcov = curve_fit(model_func,
+                               (xx_grid, yy_grid),
+                               image,
+                               p0=guess,
+                               sigma=None,
+                               method='lm')
+
+        perr = np.sqrt(np.diag(pcov))
+
+    except RuntimeError:
+        if model == 'gaussian':
+            popt = np.zeros(7)
+            perr = np.zeros(7)
+
+        elif model == 'moffat':
+            popt = np.zeros(8)
+            perr = np.zeros(8)
+
+        print(f'Fit could not converge on image number {im_index}. [WARNING]')
+
+    if model == 'gaussian':
+
+        best_fit = np.asarray((popt[0], perr[0],
+                               popt[1], perr[1],
+                               popt[2]*pixscale, perr[2]*pixscale,
+                               popt[3]*pixscale, perr[3]*pixscale,
+                               popt[4], perr[4],
+                               math.degrees(popt[5]) % 360., math.degrees(perr[5]),
+                               popt[6], perr[6]))
+
+    elif model == 'moffat':
+
+        best_fit = np.asarray((popt[0], perr[0],
+                               popt[1], perr[1],
+                               popt[2]*pixscale, perr[2]*pixscale,
+                               popt[3]*pixscale, perr[3]*pixscale,
+                               popt[4], perr[4],
+                               math.degrees(popt[5]) % 360., math.degrees(perr[5]),
+                               popt[6], perr[6],
+                               popt[7], perr[7]))
+
+    return best_fit
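The Levenberg-Marquardt fit in `fit_2d_function` follows the standard `scipy.optimize.curve_fit` pattern. A reduced sketch with a symmetric 2D Gaussian (no rotation, offset, or masking) shows the shape of that pattern:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_2d(grid, x_center, y_center, sigma, amp):
    xx, yy = grid
    # raveled so that curve_fit sees a 1D data vector
    return (amp * np.exp(-((xx - x_center)**2 + (yy - y_center)**2)
                         / (2.0 * sigma**2))).ravel()

y_grid, x_grid = np.mgrid[0:31, 0:31]
truth = (14.3, 16.1, 2.5, 7.0)
data = gauss_2d((x_grid, y_grid), *truth)

popt, pcov = curve_fit(gauss_2d, (x_grid, y_grid), data,
                       p0=(15.0, 15.0, 3.0, 5.0), method='lm')
```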
+
+
+@typechecked
+def crop_around_star(image: np.ndarray,
+                     im_index: int,
+                     position: Optional[Union[Tuple[int, int, float],
+                                              Tuple[None, None, float]]],
+                     im_size: int,
+                     fwhm: int,
+                     pixscale: float,
+                     index_out_port: Optional[OutputPort],
+                     image_out_port: OutputPort) -> np.ndarray:
+
+    if position is None:
+        center = None
+        width = None
+
+    else:
+        if position[0] is None and position[1] is None:
+            center = None
+        else:
+            center = (position[1], position[0])  # (y, x)
+
+        width = int(math.ceil(position[2]/pixscale))
+
+    starpos = locate_star(image, center, width, fwhm)
+
+    try:
+        im_crop = crop_image(image, tuple(starpos), im_size)
+
+    except ValueError:
+        warnings.warn(f'Chosen image size is too large to crop the image around the '
+                      f'brightest pixel (image index = {im_index}, pixel [x, y] '
+                      f'= [{starpos[0]}, {starpos[1]}]). Using the center of the '
+                      f'image instead.')
+
+        if index_out_port is not None:
+            index_out_port.append(im_index, data_dim=1)
+
+        starpos = center_pixel(image)
+        im_crop = crop_image(image, tuple(starpos), im_size)
+
+    return im_crop
+
+
+@typechecked
+def crop_rotating_star(image: np.ndarray,
+                       im_index: int,
+                       position: Union[Tuple[float, float], np.ndarray],
+                       im_size: int,
+                       filter_size: Optional[int],
+                       search_size: int) -> np.ndarray:
+
+    starpos = locate_star(image=image,
+                          center=tuple(position),
+                          width=search_size,
+                          fwhm=filter_size)
+
+    return crop_image(image=image,
+                      center=tuple(starpos),
+                      size=im_size)
+
+
+@typechecked
+def photometry(image: np.ndarray,
+               im_index: int,
+               aperture: Union[Aperture, List[Aperture]]) -> np.ndarray:
+    # https://photutils.readthedocs.io/en/stable/overview.html
+    # In Photutils, pixel coordinates are zero-indexed, meaning that (x, y) = (0, 0)
+    # corresponds to the center of the lowest, leftmost array element. This means that
+    # the value of data[0, 0] is taken as the value over the range -0.5 < x <= 0.5,
+    # -0.5 < y <= 0.5. Note that this is the same coordinate system as used by PynPoint.
+
+    return np.array(aperture_photometry(image, aperture, method='exact')['aperture_sum'])
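The photutils convention noted in the comment (pixel centers on integer coordinates, so (0, 0) is the center of the lower-left pixel) can be checked with a plain numpy aperture mask. This is a center-in-aperture selection, not the fractional-overlap weighting that `method='exact'` performs:

```python
import numpy as np

image = np.arange(25.0).reshape(5, 5)

# pixel centers sit on integer coordinates: (x, y) = (0, 0) is the
# center of image[0, 0]
y_grid, x_grid = np.mgrid[0:5, 0:5]
radius = np.hypot(x_grid - 2.0, y_grid - 2.0)

# select pixels whose center falls inside the aperture
mask = radius <= 1.2
flux = image[mask].sum()
```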
+
+
+@typechecked
+def image_stat(image_in: np.ndarray,
+               im_index: int,
+               indices: Optional[np.ndarray]) -> np.ndarray:
+
+    if indices is None:
+        image_select = np.copy(image_in)
+
+    else:
+        image_reshape = np.reshape(image_in, (image_in.shape[0]*image_in.shape[1]))
+        image_select = image_reshape[indices]
+
+    nmin = np.nanmin(image_select)
+    nmax = np.nanmax(image_select)
+    nsum = np.nansum(image_select)
+    mean = np.nanmean(image_select)
+    median = np.nanmedian(image_select)
+    std = np.nanstd(image_select)
+
+    return np.asarray([nmin, nmax, nsum, mean, median, std])
+
+
+@typechecked
+def subtract_psf(image: np.ndarray,
+                 im_index: int,
+                 parang_thres: Optional[float],
+                 nref: Optional[int],
+                 reference: Optional[np.ndarray],
+                 ang_diff: np.ndarray,
+                 image_in_port: InputPort) -> np.ndarray:
+
+    if parang_thres:
+        index_thres = np.where(ang_diff > parang_thres)[0]
+
+        if index_thres.size == 0:
+            reference = image_in_port.get_all()
+
+            warnings.warn('No images meet the rotation threshold. Creating a reference '
+                          'PSF from the median of all images instead.')
+
+        else:
+            if nref:
+                index_diff = np.abs(im_index - index_thres)
+                index_near = np.argsort(index_diff)[:nref]
+                index_sort = np.sort(index_thres[index_near])
+
+                reference = image_in_port[index_sort, :, :]
+
+            else:
+                reference = image_in_port[index_thres, :, :]
+
+        reference = np.median(reference, axis=0)
+
+    return image-reference
+
+
+@typechecked
+def dwt_denoise_line_in_time(signal_in: np.ndarray,
+                             im_index: int,
+                             threshold_function: bool,
+                             padding: str,
+                             wavelet_conf) -> np.ndarray:
+    """
+    Temporal denoising of a pixel line with the multilevel discrete wavelet transform (DWT).
+
+    Parameters
+    ----------
+    signal_in : np.ndarray
+        1D input signal.
+    im_index : int
+        Index of the pixel line in the dataset.
+    threshold_function : bool
+        Apply soft (True) or hard (False) thresholding in the wavelet space.
+    padding : str
+        Padding method ('zero', 'mirror', or 'periodic').
+    wavelet_conf : pynpoint.processing.timedenoising.DwtWaveletConfiguration
+        Configuration of the wavelet transformation.
+
+    Returns
+    -------
+    np.ndarray
+        Multilevel 1D inverse discrete wavelet transform.
+    """
+
+    if threshold_function:
+        threshold_mode = 'soft'
+    else:
+        threshold_mode = 'hard'
+
+    coef = pywt.wavedec(signal_in, wavelet=wavelet_conf.m_wavelet, level=None, mode=padding)
+
+    sigma = mad(coef[-1])
+
+    threshold = sigma * np.sqrt(2 * np.log(len(signal_in)))
+
+    denoised = coef[:]
+
+    denoised[1:] = (pywt.threshold(i, value=threshold, mode=threshold_mode) for i in denoised[1:])
+
+    return pywt.waverec(denoised, wavelet=wavelet_conf.m_wavelet, mode=padding)
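`dwt_denoise_line_in_time` applies the standard wavelet-shrinkage recipe: decompose, threshold the detail coefficients with the universal threshold, and reconstruct. A self-contained sketch on a noisy sine, using a plain-numpy MAD estimate instead of statsmodels:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
time = np.linspace(0.0, 4.0 * np.pi, 512)
clean = np.sin(time)
noisy = clean + rng.normal(0.0, 0.3, size=time.size)

coef = pywt.wavedec(noisy, wavelet='db8', mode='periodic')

# noise estimate from the finest detail coefficients (Gaussian MAD)
sigma = np.median(np.abs(coef[-1] - np.median(coef[-1]))) / 0.6745
threshold = sigma * np.sqrt(2.0 * np.log(noisy.size))

coef[1:] = [pywt.threshold(c, value=threshold, mode='soft') for c in coef[1:]]
denoised = pywt.waverec(coef, wavelet='db8', mode='periodic')[:noisy.size]
```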
+
+
+@typechecked
+def cwt_denoise_line_in_time(signal_in: np.ndarray,
+                             im_index: int,
+                             threshold_function: bool,
+                             padding: str,
+                             median_filter: bool,
+                             wavelet_conf) -> np.ndarray:
+    """
+    Temporal denoising of a pixel line with the continuous wavelet transform (CWT).
+
+    Parameters
+    ----------
+    signal_in : np.ndarray
+        1D input signal.
+    im_index : int
+        Index of the pixel line in the dataset.
+    threshold_function : bool
+        Apply soft (True) or hard (False) thresholding in the wavelet space.
+    padding : str
+        Padding method ('zero', 'mirror', or 'none').
+    median_filter : bool
+        Apply a median filter in time to remove outliers such as cosmic rays.
+    wavelet_conf : pynpoint.processing.timedenoising.CwtWaveletConfiguration
+        Configuration of the wavelet transformation.
+
+    Returns
+    -------
+    np.ndarray
+        1D output signal.
+    """
+
+    cwt_capsule = WaveletAnalysisCapsule(signal_in=signal_in,
+                                         padding=padding,
+                                         wavelet_in=wavelet_conf.m_wavelet,
+                                         order=wavelet_conf.m_wavelet_order,
+                                         frequency_resolution=wavelet_conf.m_resolution)
+
+    cwt_capsule.compute_cwt()
+
+    cwt_capsule.denoise_spectrum(soft=threshold_function)
+
+    if median_filter:
+        cwt_capsule.median_filter()
+
+    cwt_capsule.update_signal()
+
+    return cwt_capsule.get_signal()
+
+
+@typechecked
+def normalization(image_in: np.ndarray,
+                  im_index: int) -> np.ndarray:
+
+    return image_in - np.median(image_in)
+
+
+@typechecked
+def time_filter(timeline: np.ndarray,
+                im_index: int,
+                sigma: Tuple[float, float]) -> np.ndarray:
+
+    median = np.median(timeline)
+    std = np.std(timeline)
+
+    index_lower = np.argwhere(timeline < median-sigma[0]*std)
+    index_upper = np.argwhere(timeline > median+sigma[1]*std)
+
+    if index_lower.size > 0:
+        mask = np.ones(timeline.shape, dtype=bool)
+        mask[index_lower] = False
+        timeline[index_lower] = np.mean(timeline[mask])
+
+    if index_upper.size > 0:
+        mask = np.ones(timeline.shape, dtype=bool)
+        mask[index_upper] = False
+        timeline[index_upper] = np.mean(timeline[mask])
+
+    return timeline
+
+
+# This function cannot be @typechecked because of a compatibility issue with numba
+@jit(cache=True)
+def calc_fast_convolution(F_roof_tmp: np.complex128,
+                          W: np.ndarray,
+                          tmp_s: tuple,
+                          N_size: float,
+                          tmp_G: np.ndarray,
+                          N: Tuple[int, ...]) -> np.ndarray:
+
+    new = np.zeros(N, dtype=np.complex64)
+
+    if ((tmp_s[0] == 0) and (tmp_s[1] == 0)) or \
+            ((tmp_s[0] == N[0] / 2) and (tmp_s[1] == 0)) or \
+            ((tmp_s[0] == 0) and (tmp_s[1] == N[1] / 2)) or \
+            ((tmp_s[0] == N[0] / 2) and (tmp_s[1] == N[1] / 2)):
+
+        for m in range(0, N[0], 1):
+            for j in range(0, N[1], 1):
+                new[m, j] = F_roof_tmp * W[m - tmp_s[0], j - tmp_s[1]]
+
+    else:
+
+        for m in range(0, N[0], 1):
+            for j in range(0, N[1], 1):
+                new[m, j] = (F_roof_tmp * W[m - tmp_s[0], j - tmp_s[1]] +
+                             np.conjugate(F_roof_tmp) * W[(m + tmp_s[0]) %
+                             N[0], (j + tmp_s[1]) % N[1]])
+
+    res = new / float(N_size)
+
+    tmp_G = tmp_G - res
+
+    return tmp_G
+
+
+@typechecked
+def bad_pixel_interpolation(image_in: np.ndarray,
+                            bad_pixel_map: np.ndarray,
+                            iterations: int) -> np.ndarray:
+    """
+    Internal function to interpolate bad pixels.
+
+    Parameters
+    ----------
+    image_in : np.ndarray
+        Input image.
+    bad_pixel_map : np.ndarray
+        Bad pixel map.
+    iterations : int
+        Number of iterations.
+
+    Returns
+    -------
+    np.ndarray
+        Image in which the bad pixels have been interpolated.
+    """
+
+    image_in = image_in * bad_pixel_map
+
+    # variable names follow the reference paper
+    g = copy.deepcopy(image_in)
+    G = np.fft.fft2(g)
+    w = copy.deepcopy(bad_pixel_map)
+    W = np.fft.fft2(w)
+
+    N = g.shape
+    N_size = float(N[0] * N[1])
+    F_roof = np.zeros(N, dtype=complex)
+    tmp_G = copy.deepcopy(G)
+
+    iteration = 0
+
+    while iteration < iterations:
+        # 1.) select line using max search and compute conjugate
+        tmp_s = np.unravel_index(np.argmax(abs(tmp_G.real[:, 0: N[1] // 2])),
+                                 (N[0], N[1] // 2))
+
+        tmp_s_conjugate = (np.mod(N[0] - tmp_s[0], N[0]),
+                           np.mod(N[1] - tmp_s[1], N[1]))
+
+        # 2.) compute the new F_roof
+        # special cases s = 0 or s = N/2 no conjugate line exists
+        if ((tmp_s[0] == 0) and (tmp_s[1] == 0)) or \
+                ((tmp_s[0] == N[0] / 2) and (tmp_s[1] == 0)) or \
+                ((tmp_s[0] == 0) and (tmp_s[1] == N[1] / 2)) or \
+                ((tmp_s[0] == N[0] / 2) and (tmp_s[1] == N[1] / 2)):
+            F_roof_tmp = N_size * tmp_G[tmp_s] / W[(0, 0)]
+
+            # 3.) update F_roof
+            F_roof[tmp_s] += F_roof_tmp
+
+        # conjugate line exists
+        else:
+            a = np.power(np.abs(W[(0, 0)]), 2)
+            b = np.power(np.abs(W[(2 * tmp_s[0]) % N[0], (2 * tmp_s[1]) % N[1]]), 2)
+
+            if a == b:
+                W[(2 * tmp_s[0]) % N[0], (2 * tmp_s[1]) % N[1]] += 1e-11
+
+            a = np.power(np.abs(W[(0, 0)]), 2)
+            b = np.power(np.abs(W[(2 * tmp_s[0]) % N[0], (2 * tmp_s[1]) % N[1]]), 2.0) + 0.01
+            c = a - b
+
+            F_roof_tmp = N_size * (tmp_G[tmp_s] * W[(0, 0)] - np.conj(tmp_G[tmp_s]) *
+                                   W[(2 * tmp_s[0]) % N[0], (2 * tmp_s[1]) % N[1]]) / c
+
+            # 3.) update F_roof
+            F_roof[tmp_s] += F_roof_tmp
+            F_roof[tmp_s_conjugate] += np.conjugate(F_roof_tmp)
+
+        # 4.) calc the new error spectrum using fast numba function
+        tmp_G = calc_fast_convolution(F_roof_tmp, W, tmp_s, N_size, tmp_G, N)
+
+        iteration += 1
+
+    return image_in * bad_pixel_map + np.fft.ifft2(F_roof).real * (1 - bad_pixel_map)
+
+
+@typechecked
+def image_interpolation(image_in: np.ndarray,
+                        im_index: int,
+                        iterations: int,
+                        bad_pixel_map: np.ndarray) -> np.ndarray:
+
+    return bad_pixel_interpolation(image_in,
+                                   bad_pixel_map,
+                                   iterations)
+
+
+@typechecked
+def replace_pixels(image: np.ndarray,
+                   im_index: int,
+                   index: np.ndarray,
+                   size: int,
+                   replace: str) -> np.ndarray:
+
+    im_mask = np.copy(image)
+
+    for _, item in enumerate(index):
+        im_mask[item[0], item[1]] = np.nan
+
+    for _, item in enumerate(index):
+        im_tmp = im_mask[item[0]-size:item[0]+size+1,
+                         item[1]-size:item[1]+size+1]
+
+        # keep the original value if the box contains only NaN
+        if np.count_nonzero(~np.isnan(im_tmp)) == 0:
+            im_mask[item[0], item[1]] = image[item[0], item[1]]
+
+        else:
+            if replace == 'mean':
+                im_mask[item[0], item[1]] = np.nanmean(im_tmp)
+
+            elif replace == 'median':
+                im_mask[item[0], item[1]] = np.nanmedian(im_tmp)
+
+            elif replace == 'nan':
+                im_mask[item[0], item[1]] = np.nan
+
+    return im_mask
+
+
+# This function cannot be @typechecked because of a compatibility issue with numba
+@jit(cache=True)
+def sigma_filter(dev_image: np.ndarray,
+                 var_image: np.ndarray,
+                 mean_image: np.ndarray,
+                 source_image: np.ndarray,
+                 out_image: np.ndarray,
+                 bad_pixel_map: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
+
+    for i in range(source_image.shape[0]):
+        for j in range(source_image.shape[1]):
+
+            if dev_image[i][j] < var_image[i][j]:
+                out_image[i][j] = source_image[i][j]
+
+            else:
+                out_image[i][j] = mean_image[i][j]
+                bad_pixel_map[i][j] = 0
+
+    return out_image, bad_pixel_map
+
+
+@typechecked
+def bad_pixel_sigma_filter(image_in: np.ndarray,
+                           im_index: int,
+                           box: int,
+                           sigma: float,
+                           iterate: int,
+                           map_out_port: Optional[OutputPort]) -> np.ndarray:
+
+    # Algorithm adapted from http://idlastro.gsfc.nasa.gov/ftp/pro/image/sigma_filter.pro
+
+    # Initialize bad pixel map
+
+    bad_pixel_map = np.ones(image_in.shape)
+
+    while iterate > 0:
+        # Source image
+
+        source_image = copy.deepcopy(image_in)
+
+        source_blur = cv2.blur(copy.deepcopy(source_image), (box, box))
+
+        # Mean image
+
+        box2 = box * box
+
+        mean_image = (source_blur * box2 - source_image) / (box2 - 1)
+
+        # Squared deviation between mean and source image
+
+        dev_image = (mean_image - source_image) ** 2
+
+        dev_blur = cv2.blur(copy.deepcopy(dev_image), (box, box))
+
+        # Compute variance by smoothing the image with the deviations from the mean
+
+        fact = float(sigma ** 2) / (box2 - 2)
+
+        var_image = fact * (dev_blur * box2 - dev_image)
+
+        # out_image refers to the same array as image_in, so the corrected
+        # pixel values are used again in the next iteration
+
+        out_image = image_in
+
+        # Apply the sigma filter
+
+        out_image, bad_pixel_map = sigma_filter(dev_image,
+                                                var_image,
+                                                mean_image,
+                                                source_image,
+                                                out_image,
+                                                bad_pixel_map)
+
+        # Subtract 1 from the number of iterations
+
+        iterate -= 1
+
+    if map_out_port is not None:
+        # Write bad pixel map to the database when CPU = 1
+        map_out_port.append(bad_pixel_map, data_dim=3)
+
+    return out_image
+
+
+@typechecked
+def apply_shift(image_in: np.ndarray,
+                im_index: int,
+                shift: Union[Tuple[float, float], np.ndarray],
+                interpolation: str) -> np.ndarray:
+
+    return shift_image(image_in, shift, interpolation)
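The `time_filter` function above applies a two-sided sigma clipping to each pixel's time series. A minimal standalone sketch of the same rule, outside the pipeline (plain NumPy; `clip_timeline` is an illustrative name, not part of the package):

```python
import numpy as np

def clip_timeline(timeline, sigma=(2.0, 2.0)):
    # two-sided sigma clipping as in time_filter: samples below
    # median - sigma[0]*std or above median + sigma[1]*std are
    # replaced by the mean of the remaining samples
    timeline = timeline.copy()
    median = np.median(timeline)
    std = np.std(timeline)

    lower = timeline < median - sigma[0] * std
    upper = timeline > median + sigma[1] * std

    for outliers in (lower, upper):
        if outliers.any():
            timeline[outliers] = np.mean(timeline[~outliers])

    return timeline

data = np.array([1.0, 1.1, 0.9, 1.0, 50.0, 1.05])
print(clip_timeline(data))
```

Note that both outlier masks are computed from the unmodified timeline, matching the order of operations in `time_filter`.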
diff --git a/pynpoint/util/image.py b/pynpoint/util/image.py
index 8019559..6d440bb 100644
--- a/pynpoint/util/image.py
+++ b/pynpoint/util/image.py
@@ -8,9 +8,9 @@ from typing import Optional, Tuple, Union
 
 import numpy as np
 
-from typeguard import typechecked
-from skimage.transform import rescale
 from scipy.ndimage import fourier_shift, shift, rotate
+from skimage.transform import rescale
+from typeguard import typechecked
 
 
 @typechecked
@@ -22,7 +22,7 @@ def center_pixel(image: np.ndarray) -> Tuple[int, int]:
 
     Parameters
     ----------
-    image : numpy.ndarray
+    image : np.ndarray
         Input image (2D or 3D).
 
     Returns
@@ -58,7 +58,7 @@ def center_subpixel(image: np.ndarray) -> Tuple[float, float]:
 
     Parameters
     ----------
-    image : numpy.ndarray
+    image : np.ndarray
         Input image (2D or 3D).
 
     Returns
@@ -83,7 +83,7 @@ def crop_image(image: np.ndarray,
 
     Parameters
     ----------
-    image : numpy.ndarray
+    image : np.ndarray
         Input image (2D or 3D).
     center : tuple(int, int), None
         The new image center (y, x). The center of the image is used if set to None.
@@ -94,7 +94,7 @@ def crop_image(image: np.ndarray,
 
     Returns
     -------
-    numpy.ndarray
+    np.ndarray
         Cropped odd-sized image (2D or 3D).
     """
 
@@ -129,14 +129,14 @@ def rotate_images(images: np.ndarray,
 
     Parameters
     ----------
-    images : numpy.ndarray
+    images : np.ndarray
         Stack of images (3D).
-    angles : numpy.ndarray
+    angles : np.ndarray
         Rotation angles (deg).
 
     Returns
     -------
-    numpy.ndarray
+    np.ndarray
         Rotated images.
     """
 
@@ -166,7 +166,7 @@ def create_mask(im_shape: Tuple[int, int],
 
     Returns
     -------
-    numpy.ndarray
+    np.ndarray
         Image mask.
     """
 
@@ -204,7 +204,7 @@ def shift_image(image: np.ndarray,
 
     Parameters
     ----------
-    image : numpy.ndarray
+    image : np.ndarray
         Input image (2D or 3D). If 3D the image is not shifted along the 0th axis.
     shift_yx : tuple(float, float), np.ndarray
         Shift (y, x) to be applied (pix). An additional shift of zero pixels will be added
@@ -216,7 +216,7 @@ def shift_image(image: np.ndarray,
 
     Returns
     -------
-    numpy.ndarray
+    np.ndarray
         Shifted image.
     """
 
@@ -245,14 +245,14 @@ def shift_image(image: np.ndarray,
 
 @typechecked
 def scale_image(image: np.ndarray,
-                scaling_y: float,
-                scaling_x: float) -> np.ndarray:
+                scaling_y: Union[float, np.float32],
+                scaling_x: Union[float, np.float32]) -> np.ndarray:
     """
     Function to spatially scale an image.
 
     Parameters
     ----------
-    image : numpy.ndarray
+    image : np.ndarray
         Input image (2D).
     scaling_y : float
         Scaling factor y.
@@ -261,18 +261,18 @@ def scale_image(image: np.ndarray,
 
     Returns
     -------
-    numpy.ndarray
+    np.ndarray
         Shifted image (2D).
     """
 
     sum_before = np.sum(image)
 
-    im_scale = rescale(image=np.asarray(image, dtype=np.float64),
-                       scale=(scaling_y, scaling_x),
+    im_scale = rescale(image,
+                       (scaling_y, scaling_x),
                        order=5,
                        mode='reflect',
-                       anti_aliasing=True,
-                       multichannel=False)
+                       multichannel=False,
+                       anti_aliasing=True)
 
     sum_after = np.sum(im_scale)
 
@@ -320,10 +320,10 @@ def polar_to_cartesian(image: np.ndarray,
 
     Parameters
     ----------
-    image : numpy.ndarray
+    image : np.ndarray
         Input image (2D or 3D).
     sep : float
-        Separation (pix).
+        Separation (pixels).
     ang : float
         Position angle (deg), measured counterclockwise with respect to the positive y-axis.
 
@@ -343,31 +343,39 @@ def polar_to_cartesian(image: np.ndarray,
 
 @typechecked
 def pixel_distance(im_shape: Tuple[int, int],
-                   position: Optional[Tuple[int, int]] = None) -> np.ndarray:
+                   position: Optional[Tuple[int, int]] = None) -> Tuple[
+                       np.ndarray, np.ndarray, np.ndarray]:
     """
     Function to calculate the distance of each pixel with respect to a given pixel position.
+    Supports both odd and even sized images.
 
     Parameters
     ----------
     im_shape : tuple(int, int)
         Image shape (y, x).
     position : tuple(int, int)
-        Pixel center (y, x) from which the distance is calculated. The image center is used if
-        set to None. Python indexing starts at zero so the bottom left pixel is (0, 0).
+        Pixel center (y, x) from which the distance is calculated. The image center is used if set
+        to None. Python indexing starts at zero so the center of the bottom left pixel is (0, 0).
 
     Returns
     -------
-    numpy.ndarray
+    np.ndarray
         2D array with the distances of each pixel from the provided pixel position.
+    np.ndarray
+        2D array with the x coordinates.
+    np.ndarray
+        2D array with the y coordinates.
     """
 
     if im_shape[0] % 2 == 0:
         y_grid = np.linspace(-im_shape[0] / 2 + 0.5, im_shape[0] / 2 - 0.5, im_shape[0])
+
     else:
         y_grid = np.linspace(-(im_shape[0] - 1) / 2, (im_shape[0] - 1) / 2, im_shape[0])
 
     if im_shape[1] % 2 == 0:
         x_grid = np.linspace(-im_shape[1] / 2 + 0.5, im_shape[1] / 2 - 0.5, im_shape[1])
+
     else:
         x_grid = np.linspace(-(im_shape[1] - 1) / 2, (im_shape[1] - 1) / 2, im_shape[1])
 
@@ -380,14 +388,16 @@ def pixel_distance(im_shape: Tuple[int, int],
 
     xx_grid, yy_grid = np.meshgrid(x_grid, y_grid)
 
-    return np.sqrt(xx_grid**2 + yy_grid**2)
+    return np.sqrt(xx_grid**2 + yy_grid**2), xx_grid, yy_grid
 
 
 @typechecked
 def subpixel_distance(im_shape: Tuple[int, int],
-                      position: Tuple[float, float]) -> np.ndarray:
+                      position: Tuple[float, float],
+                      shift_center: bool = True) -> np.ndarray:
     """
     Function to calculate the distance of each pixel with respect to a given subpixel position.
+    Supports both odd and even sized images.
 
     Parameters
     ----------
@@ -396,30 +406,38 @@ def subpixel_distance(im_shape: Tuple[int, int],
     position : tuple(float, float)
         Pixel center (y, x) from which the distance is calculated. Python indexing starts at zero
         so the bottom left image corner is (-0.5, -0.5).
+    shift_center : bool
+        Apply the coordinate correction for the image center.
 
     Returns
     -------
-    numpy.ndarray
+    np.ndarray
         2D array with the distances of each pixel from the provided pixel position.
     """
 
-    if im_shape[0] % 2 == 0:
-        raise ValueError('The subpixel_distance function has only been implemented for '
-                         'odd-sized images.')
-
-    y_size = (im_shape[0] - 1) / 2
-    x_size = (im_shape[1] - 1) / 2
+    # Get 2D x and y coordinates with respect to the image center
+    _, xx_grid, yy_grid = pixel_distance(im_shape, position=None)
 
-    y_grid = np.linspace(-y_size, y_size, im_shape[0])
-    x_grid = np.linspace(-x_size, x_size, im_shape[1])
+    if im_shape[0] % 2 == 0:
+        # Distance from the image center to the center of the outermost pixel
+        # Even sized images
+        y_size = im_shape[0] / 2 - 0.5
+        x_size = im_shape[1] / 2 - 0.5
 
-    y_pos = position[0] - y_size
-    x_pos = position[1] - x_size
+    else:
+        # Distance from the image center to the center of the outermost pixel
+        # Odd sized images
+        y_size = (im_shape[0] - 1) / 2
+        x_size = (im_shape[1] - 1) / 2
 
-    y_grid -= y_pos
-    x_grid -= x_pos
+    if shift_center:
+        # Shift the image center to the center of the bottom left pixel
+        yy_grid += y_size
+        xx_grid += x_size
 
-    xx_grid, yy_grid = np.meshgrid(x_grid, y_grid)
+    # Apply a subpixel shift of the coordinate system to the requested position
+    yy_grid -= position[0]
+    xx_grid -= position[1]
 
     return np.sqrt(xx_grid**2 + yy_grid**2)
 
@@ -431,7 +449,7 @@ def select_annulus(image_in: np.ndarray,
                    mask_position: Optional[Tuple[float, float]] = None,
                    mask_radius: Optional[float] = None) -> np.ndarray:
     """
-    image_in : numpy.ndarray
+    image_in : np.ndarray
         Input image.
     radius_in : float
         Inner radius of the annulus (pix).
@@ -484,7 +502,7 @@ def rotate_coordinates(center: Tuple[float, float],
     Parameters
     ----------
     center : tuple(float, float)
-        Image center (y, x).
+        Image center (y, x) with subpixel accuracy.
     position : tuple(float, float)
         Position (y, x) in the image, or a 2D numpy array of positions.
     angle : float
@@ -502,4 +520,4 @@ def rotate_coordinates(center: Tuple[float, float],
     pos_x = (position[1] - center[1]) * math.cos(np.radians(angle)) - \
             (position[0] - center[0]) * math.sin(np.radians(angle))
 
-    return center[0] + pos_y, center[1] + pos_x
+    return center[0]+pos_y, center[1]+pos_x
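The reworked `pixel_distance` now also returns the coordinate grids and supports even-sized images by placing the grid symmetrically around the image center. A self-contained sketch of that grid construction (`distance_and_grids` is an illustrative helper name):

```python
import numpy as np

def distance_and_grids(im_shape):
    # even axis: pixel centers at ..., -0.5, 0.5, ...
    # odd axis:  pixel centers at ..., -1., 0., 1., ...
    grids = []
    for size in im_shape:
        if size % 2 == 0:
            grids.append(np.linspace(-size / 2 + 0.5, size / 2 - 0.5, size))
        else:
            grids.append(np.linspace(-(size - 1) / 2, (size - 1) / 2, size))

    xx_grid, yy_grid = np.meshgrid(grids[1], grids[0])

    return np.sqrt(xx_grid**2 + yy_grid**2), xx_grid, yy_grid

# odd height (5), even width (4)
dist, xx, yy = distance_and_grids((5, 4))
```

For the (5, 4) example the x coordinates are [-1.5, -0.5, 0.5, 1.5] while the y coordinates pass through zero, so no pixel lies exactly at the center of an even-sized axis.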
diff --git a/pynpoint/util/multiline.py b/pynpoint/util/multiline.py
index 793c8a7..a6e9536 100644
--- a/pynpoint/util/multiline.py
+++ b/pynpoint/util/multiline.py
@@ -137,11 +137,14 @@ class LineTaskProcessor(TaskProcessor):
                                tmp_task.m_input_data.shape[1],
                                tmp_task.m_input_data.shape[2]))
 
+        count = 0
+
         for i in range(tmp_task.m_input_data.shape[1]):
             for j in range(tmp_task.m_input_data.shape[2]):
-                result_arr[:, i, j] = apply_function(tmp_data=tmp_task.m_input_data[:, i, j],
-                                                     func=self.m_function,
-                                                     func_args=self.m_function_args)
+                result_arr[:, i, j] = apply_function(tmp_task.m_input_data[:, i, j], count,
+                                                     self.m_function, self.m_function_args)
+
+                count += 1
 
         return TaskResult(result_arr, tmp_task.m_job_parameter[1])
 
diff --git a/pynpoint/util/multipca.py b/pynpoint/util/multipca.py
index 6971a3d..e249aca 100644
--- a/pynpoint/util/multipca.py
+++ b/pynpoint/util/multipca.py
@@ -16,7 +16,7 @@ from sklearn.decomposition import PCA
 from pynpoint.core.dataio import OutputPort
 from pynpoint.util.multiproc import TaskProcessor, TaskCreator, TaskWriter, TaskResult, \
                                     TaskInput, MultiprocessingCapsule, to_slice
-from pynpoint.util.psf import pca_psf_subtraction
+from pynpoint.util.postproc import postprocessor
 from pynpoint.util.residuals import combine_residuals
 
 
@@ -30,7 +30,7 @@ class PcaTaskCreator(TaskCreator):
     def __init__(self,
                  tasks_queue_in: multiprocessing.JoinableQueue,
                  num_proc: int,
-                 pca_numbers: np.ndarray) -> None:
+                 pca_numbers: Union[np.ndarray, tuple]) -> None:
         """
         Parameters
         ----------
@@ -38,7 +38,7 @@ class PcaTaskCreator(TaskCreator):
             Input task queue.
         num_proc : int
             Number of processors.
-        pca_numbers : numpy.ndarray
+        pca_numbers : np.ndarray, tuple
             Principal components for which the residuals are computed.
 
         Returns
@@ -61,12 +61,20 @@ class PcaTaskCreator(TaskCreator):
         NoneType
             None
         """
+
+        if isinstance(self.m_pca_numbers, tuple):
+            for i, pca_first in enumerate(self.m_pca_numbers[0]):
+                for j, pca_second in enumerate(self.m_pca_numbers[1]):
+                    parameters = (((i, i+1, None), (j, j+1, None), (None, None, None)), )
+                    self.m_task_queue.put(TaskInput((pca_first, pca_second), parameters))
 
-        for i, pca_number in enumerate(self.m_pca_numbers):
-            parameters = (((i, i+1, None), (None, None, None), (None, None, None)), )
-            self.m_task_queue.put(TaskInput(pca_number, parameters))
+            self.create_poison_pills()
 
-        self.create_poison_pills()
+        else:
+            for i, pca_number in enumerate(self.m_pca_numbers):
+                parameters = (((i, i+1, None), (None, None, None), (None, None, None)), )
+                self.m_task_queue.put(TaskInput(pca_number, parameters))
+
+            self.create_poison_pills()
 
 
 class PcaTaskProcessor(TaskProcessor):
@@ -89,10 +97,12 @@ class PcaTaskProcessor(TaskProcessor):
                  result_queue_in: multiprocessing.JoinableQueue,
                  star_reshape: np.ndarray,
                  angles: np.ndarray,
-                 pca_model: PCA,
-                 im_shape: Tuple[int, int, int],
-                 indices: np.ndarray,
-                 requirements: Tuple[bool, bool, bool, bool]) -> None:
+                 scales: Optional[np.ndarray],
+                 pca_model: Optional[PCA],
+                 im_shape: tuple,
+                 indices: Optional[np.ndarray],
+                 requirements: Tuple[bool, bool, bool, bool],
+                 processing_type: str) -> None:
         """
         Parameters
         ----------
@@ -100,18 +110,22 @@ class PcaTaskProcessor(TaskProcessor):
             Input task queue.
         result_queue_in : multiprocessing.queues.JoinableQueue
             Input result queue.
-        star_reshape : numpy.ndarray
+        star_reshape : np.ndarray
             Reshaped (2D) stack of images.
-        angles : numpy.ndarray
+        angles : np.ndarray
             Derotation angles (deg).
+        scales : np.ndarray, None
+            Scaling factors.
         pca_model : sklearn.decomposition.pca.PCA
             PCA object with the basis.
         im_shape : tuple(int, int, int)
             Original shape of the stack of images.
-        indices : numpy.ndarray
+        indices : np.ndarray
             Non-masked image indices.
         requirements : tuple(bool, bool, bool, bool)
             Required output residuals.
+        processing_type : str
+            Selected processing type.
 
         Returns
         -------
@@ -124,9 +138,11 @@ class PcaTaskProcessor(TaskProcessor):
         self.m_star_reshape = star_reshape
         self.m_pca_model = pca_model
         self.m_angles = angles
+        self.m_scales = scales
         self.m_im_shape = im_shape
         self.m_indices = indices
         self.m_requirements = requirements
+        self.m_processing_type = processing_type
 
     @typechecked
     def run_job(self,
@@ -145,20 +161,36 @@ class PcaTaskProcessor(TaskProcessor):
             Output residuals.
         """
 
-        residuals, res_rot = pca_psf_subtraction(images=self.m_star_reshape,
-                                                 angles=self.m_angles,
-                                                 pca_number=int(tmp_task.m_input_data),
-                                                 pca_sklearn=self.m_pca_model,
-                                                 im_shape=self.m_im_shape,
-                                                 indices=self.m_indices)
-
-        res_output = np.zeros((4, res_rot.shape[1], res_rot.shape[2]))
+        # correct data type of pca_number if necessary
+        if isinstance(tmp_task.m_input_data, tuple):
+            pca_number = tmp_task.m_input_data
+        else:
+            pca_number = int(tmp_task.m_input_data)
+
+        residuals, res_rot = postprocessor(images=self.m_star_reshape,
+                                           angles=self.m_angles,
+                                           scales=self.m_scales,
+                                           pca_number=pca_number,
+                                           pca_sklearn=self.m_pca_model,
+                                           im_shape=self.m_im_shape,
+                                           indices=self.m_indices,
+                                           processing_type=self.m_processing_type)
+
+        # differentiate between IFS and mono-wavelength data
+        if res_rot.ndim == 3:
+            res_output = np.zeros((4, res_rot.shape[-2], res_rot.shape[-1]))
+
+        else:
+            res_output = np.zeros((4, len(self.m_star_reshape),
+                                   res_rot.shape[-2], res_rot.shape[-1]))
 
         if self.m_requirements[0]:
-            res_output[0, ] = combine_residuals(method='mean', res_rot=res_rot)
+            res_output[0, ] = combine_residuals(method='mean',
+                                                res_rot=res_rot)
 
         if self.m_requirements[1]:
-            res_output[1, ] = combine_residuals(method='median', res_rot=res_rot)
+            res_output[1, ] = combine_residuals(method='median',
+                                                res_rot=res_rot)
 
         if self.m_requirements[2]:
             res_output[2, ] = combine_residuals(method='weighted',
@@ -167,7 +199,8 @@ class PcaTaskProcessor(TaskProcessor):
                                                 angles=self.m_angles)
 
         if self.m_requirements[3]:
-            res_output[3, ] = combine_residuals(method='clipped', res_rot=res_rot)
+            res_output[3, ] = combine_residuals(method='clipped',
+                                                res_rot=res_rot)
 
         sys.stdout.write('.')
         sys.stdout.flush()
@@ -247,25 +280,29 @@ class PcaTaskWriter(TaskWriter):
 
             with self.m_data_mutex:
                 res_slice = to_slice(next_result.m_position)
+                if next_result.m_position[1][0] is None:
+                    res_slice = next_result.m_position[0][0]
+                else:
+                    res_slice = (next_result.m_position[0][0], next_result.m_position[1][0])
 
                 if self.m_requirements[0]:
                     self.m_mean_out_port._check_status_and_activate()
-                    self.m_mean_out_port[res_slice] = next_result.m_data_array[0, :, :]
+                    self.m_mean_out_port[res_slice] = next_result.m_data_array[0]
                     self.m_mean_out_port.close_port()
 
                 if self.m_requirements[1]:
                     self.m_median_out_port._check_status_and_activate()
-                    self.m_median_out_port[res_slice] = next_result.m_data_array[1, :, :]
+                    self.m_median_out_port[res_slice] = next_result.m_data_array[1]
                     self.m_median_out_port.close_port()
 
                 if self.m_requirements[2]:
                     self.m_weighted_out_port._check_status_and_activate()
-                    self.m_weighted_out_port[res_slice] = next_result.m_data_array[2, :, :]
+                    self.m_weighted_out_port[res_slice] = next_result.m_data_array[2]
                     self.m_weighted_out_port.close_port()
 
                 if self.m_requirements[3]:
                     self.m_clip_out_port._check_status_and_activate()
-                    self.m_clip_out_port[res_slice] = next_result.m_data_array[3, :, :]
+                    self.m_clip_out_port[res_slice] = next_result.m_data_array[3]
                     self.m_clip_out_port.close_port()
 
             self.m_result_queue.task_done()
@@ -283,12 +320,14 @@ class PcaMultiprocessingCapsule(MultiprocessingCapsule):
                  weighted_out_port: Optional[OutputPort],
                  clip_out_port: Optional[OutputPort],
                  num_proc: int,
-                 pca_numbers: np.ndarray,
-                 pca_model: PCA,
+                 pca_numbers: Union[tuple, np.ndarray],
+                 pca_model: Optional[PCA],
                  star_reshape: np.ndarray,
                  angles: np.ndarray,
-                 im_shape: Tuple[int, int, int],
-                 indices: np.ndarray) -> None:
+                 scales: Optional[np.ndarray],
+                 im_shape: tuple,
+                 indices: Optional[np.ndarray],
+                 processing_type: str) -> None:
         """
         Constructor of PcaMultiprocessingCapsule.
 
@@ -304,18 +343,22 @@ class PcaMultiprocessingCapsule(MultiprocessingCapsule):
             Output port for the mean clipped residuals.
         num_proc : int
             Number of processors.
-        pca_numbers : numpy.ndarray
+        pca_numbers : np.ndarray, tuple
             Number of principal components.
         pca_model : sklearn.decomposition.pca.PCA
             PCA object with the basis.
-        star_reshape : numpy.ndarray
+        star_reshape : np.ndarray
             Reshaped (2D) input images.
-        angles : numpy.ndarray
+        angles : np.ndarray
             Derotation angles (deg).
+        scales : np.ndarray, None
+            Scaling factors.
         im_shape : tuple(int, int, int)
             Original shape of the input images.
-        indices : numpy.ndarray
+        indices : np.ndarray
             Non-masked pixel indices.
+        processing_type : str
+            Selected processing type.
 
         Returns
         -------
@@ -331,8 +374,10 @@ class PcaMultiprocessingCapsule(MultiprocessingCapsule):
         self.m_pca_model = pca_model
         self.m_star_reshape = star_reshape
         self.m_angles = angles
+        self.m_scales = scales
         self.m_im_shape = im_shape
         self.m_indices = indices
+        self.m_processing_type = processing_type
 
         self.m_requirements = [False, False, False, False]
 
@@ -417,9 +462,11 @@ class PcaMultiprocessingCapsule(MultiprocessingCapsule):
                                                self.m_result_queue,
                                                self.m_star_reshape,
                                                self.m_angles,
+                                               self.m_scales,
                                                self.m_pca_model,
                                                self.m_im_shape,
                                                self.m_indices,
-                                               self.m_requirements))
+                                               self.m_requirements,
+                                               self.m_processing_type))
 
         return processors
diff --git a/pynpoint/util/multiproc.py b/pynpoint/util/multiproc.py
index 4344e6c..0478ebb 100644
--- a/pynpoint/util/multiproc.py
+++ b/pynpoint/util/multiproc.py
@@ -14,6 +14,12 @@ from typeguard import typechecked
 from pynpoint.core.dataio import InputPort, OutputPort
 
 
+# On macOS, the spawn start method is the default since Python 3.8. The fork start method should
+# be considered unsafe as it can lead to crashes of the subprocess.
+# TODO Not using the fork method results in an error.
+multiprocessing.set_start_method('fork')
+
+
 class TaskInput:
     """
     Class for tasks that are processed by the :class:`~pynpoint.util.multiproc.TaskProcessor`.
@@ -21,12 +27,12 @@ class TaskInput:
 
     @typechecked
     def __init__(self,
-                 input_data: Union[np.ndarray, np.int64],
+                 input_data: Union[np.ndarray, np.int64, tuple],
                  job_parameter: tuple) -> None:
         """
         Parameters
         ----------
-        input_data : int, float, numpy.ndarray
+        input_data : np.ndarray, np.int64, tuple
             Input data for by the :class:`~pynpoint.util.multiproc.TaskProcessor`.
         job_parameter : tuple
             Additional data or parameters.
@@ -53,7 +59,7 @@ class TaskResult:
         """
         Parameters
         ----------
-        data_array : numpy.ndarray
+        data_array : np.ndarray
             Array with the results for a given position.
         position : tuple(tuple(int, int, int), tuple(int, int, int), tuple(int, int, int))
              The position where the results will be stored.
@@ -500,6 +506,7 @@ class MultiprocessingCapsule(metaclass=ABCMeta):
 
 @typechecked
 def apply_function(tmp_data: np.ndarray,
+                   data_index: int,
                    func: Callable,
                    func_args: Optional[tuple]) -> np.ndarray:
     """
@@ -507,8 +514,11 @@ def apply_function(tmp_data: np.ndarray,
 
     Parameters
     ----------
-    tmp_data : numpy.ndarray
+    tmp_data : np.ndarray
         Input data.
+    data_index : int
+        Index of the data subset. When processing a stack of images, ``data_index`` is the index
+        of the image in the full stack.
     func : function
         Function.
     func_args : tuple, None
@@ -516,14 +526,14 @@ def apply_function(tmp_data: np.ndarray,
 
     Returns
     -------
-    numpy.ndarray
+    np.ndarray
         The results of the function.
     """
 
     if func_args is None:
-        result = np.array(func(tmp_data))
+        result = np.array(func(tmp_data, data_index))
     else:
-        result = np.array(func(tmp_data, *func_args))
+        result = np.array(func(tmp_data, data_index, *func_args))
 
     return result
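With the extra ``data_index`` argument, any function handed to ``apply_function`` must now accept the subset index as its second positional argument. A minimal sketch of the new calling convention (``scale_frame`` is a hypothetical user function, not part of PynPoint):

```python
import numpy as np

def apply_function(tmp_data, data_index, func, func_args):
    # Mirrors the updated helper: the index is always forwarded to func
    if func_args is None:
        return np.array(func(tmp_data, data_index))
    return np.array(func(tmp_data, data_index, *func_args))

def scale_frame(image, index, factor):
    # Hypothetical user function; index is the position in the full stack
    return image * factor + index

result = apply_function(np.ones((4, 4)), 2, scale_frame, (10., ))
```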
 
diff --git a/pynpoint/util/multistack.py b/pynpoint/util/multistack.py
index 9943101..d7af4fd 100644
--- a/pynpoint/util/multistack.py
+++ b/pynpoint/util/multistack.py
@@ -170,9 +170,7 @@ class StackTaskProcessor(TaskProcessor):
 
             args = update_arguments(index, self.m_nimages, self.m_function_args)
 
-            result_arr[i, ] = apply_function(tmp_data=tmp_task.m_input_data[i, ],
-                                             func=self.m_function,
-                                             func_args=args)
+            result_arr[i, ] = apply_function(tmp_task.m_input_data[i, ], i, self.m_function, args)
 
         sys.stdout.write('.')
         sys.stdout.flush()
diff --git a/pynpoint/util/postproc.py b/pynpoint/util/postproc.py
new file mode 100644
index 0000000..d7b63a8
--- /dev/null
+++ b/pynpoint/util/postproc.py
@@ -0,0 +1,158 @@
+"""
+Functions for post-processing.
+"""
+
+from typing import Union, Optional, Tuple
+
+import numpy as np
+
+from typeguard import typechecked
+from sklearn.decomposition import PCA
+
+from pynpoint.util.psf import pca_psf_subtraction
+from pynpoint.util.sdi import sdi_scaling
+
+
+@typechecked
+def postprocessor(images: np.ndarray,
+                  angles: np.ndarray,
+                  scales: Optional[np.ndarray],
+                  pca_number: Union[int, Tuple[Union[int, np.int64], Union[int, np.int64]]],
+                  pca_sklearn: Optional[PCA] = None,
+                  im_shape: Optional[tuple] = None,
+                  indices: Optional[np.ndarray] = None,
+                  mask: Optional[np.ndarray] = None,
+                  processing_type: str = 'ADI'):
+
+    """
+    Function to apply different kinds of post-processing. It is equivalent to
+    :func:`~pynpoint.util.psf.pca_psf_subtraction` if ``processing_type='ADI'`` and
+    ``mask=None``.
+
+    Parameters
+    ----------
+    images : np.ndarray
+        Input images which should be reduced.
+    angles : np.ndarray
+        Derotation angles (deg).
+    scales : np.ndarray
+        Scaling factors.
+    pca_number : int, tuple(int, int)
+        Number of principal components used for the PSF subtraction. A tuple with two values is
+        used for the two stages of 'SDI+ADI' and 'ADI+SDI'.
+    pca_sklearn : sklearn.decomposition.pca.PCA, None
+        PCA object with the basis if not set to None.
+    im_shape : tuple(int, int, int), None
+        Original shape of the stack with images. Required if ``pca_sklearn`` is not set to None.
+    indices : np.ndarray, None
+        Non-masked image indices. All pixels are used if set to None.
+    mask : np.ndarray, None
+        Mask (2D). No mask is applied if set to None.
+    processing_type : str
+        Post-processing type:
+            - ADI: Angular differential imaging.
+            - SDI: Spectral differential imaging.
+            - SDI+ADI: Spectral and angular differential imaging.
+            - ADI+SDI: Angular and spectral differential imaging.
+
+    Returns
+    -------
+    np.ndarray
+        Residuals of the PSF subtraction.
+    np.ndarray
+        Derotated residuals of the PSF subtraction.
+    """
+
+    if not isinstance(pca_number, tuple):
+        pca_number = (pca_number, -1)
+
+    if mask is None:
+        mask = 1.
+
+    res_raw = np.zeros(images.shape)
+    res_rot = np.zeros(images.shape)
+
+    if processing_type == 'ADI':
+        if images.ndim == 2:
+            res_raw, res_rot = pca_psf_subtraction(images=images*mask,
+                                                   angles=angles,
+                                                   scales=None,
+                                                   pca_number=pca_number[0],
+                                                   pca_sklearn=pca_sklearn,
+                                                   im_shape=im_shape,
+                                                   indices=indices)
+
+        elif images.ndim == 4:
+            for i in range(images.shape[0]):
+                res_raw[i, ], res_rot[i, ] = pca_psf_subtraction(images=images[i, ]*mask,
+                                                                 angles=angles,
+                                                                 scales=None,
+                                                                 pca_number=pca_number[0],
+                                                                 pca_sklearn=pca_sklearn,
+                                                                 im_shape=im_shape,
+                                                                 indices=indices)
+
+    elif processing_type == 'SDI':
+        for i in range(images.shape[1]):
+            im_scaled = sdi_scaling(images[:, i, :, :], scales)
+
+            res_raw[:, i], res_rot[:, i] = pca_psf_subtraction(images=im_scaled*mask,
+                                                               angles=np.full(scales.size,
+                                                                              angles[i]),
+                                                               scales=scales,
+                                                               pca_number=pca_number[0],
+                                                               pca_sklearn=pca_sklearn,
+                                                               im_shape=im_shape,
+                                                               indices=indices)
+
+    elif processing_type == 'SDI+ADI':
+        # SDI
+        res_raw_int = np.zeros(res_raw.shape)
+
+        for i in range(images.shape[1]):
+            im_scaled = sdi_scaling(images[:, i], scales)
+
+            res_raw_int[:, i], _ = pca_psf_subtraction(images=im_scaled*mask,
+                                                       angles=None,
+                                                       scales=scales,
+                                                       pca_number=pca_number[0],
+                                                       pca_sklearn=pca_sklearn,
+                                                       im_shape=im_shape,
+                                                       indices=indices)
+
+        # ADI
+        for i in range(images.shape[0]):
+            res_raw[i], res_rot[i] = pca_psf_subtraction(images=res_raw_int[i]*mask,
+                                                         angles=angles,
+                                                         scales=None,
+                                                         pca_number=pca_number[1],
+                                                         pca_sklearn=pca_sklearn,
+                                                         im_shape=im_shape,
+                                                         indices=indices)
+
+    elif processing_type == 'ADI+SDI':
+        # ADI
+        res_raw_int = np.zeros(res_raw.shape)
+
+        for i in range(images.shape[0]):
+            res_raw_int[i], _ = pca_psf_subtraction(images=images[i, ]*mask,
+                                                    angles=None,
+                                                    scales=None,
+                                                    pca_number=pca_number[0],
+                                                    pca_sklearn=pca_sklearn,
+                                                    im_shape=im_shape,
+                                                    indices=indices)
+
+        # SDI
+        for i in range(images.shape[1]):
+            im_scaled = sdi_scaling(res_raw_int[:, i], scales)
+
+            res_raw[:, i], res_rot[:, i] = pca_psf_subtraction(images=im_scaled*mask,
+                                                               angles=np.full(scales.size,
+                                                                              angles[i]),
+                                                               scales=scales,
+                                                               pca_number=pca_number[1],
+                                                               pca_sklearn=pca_sklearn,
+                                                               im_shape=im_shape,
+                                                               indices=indices)
+
+    return res_raw, res_rot
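A scalar ``pca_number`` is promoted to a tuple at the top of ``postprocessor``, with ``-1`` acting as a placeholder for the second stage that only 'SDI+ADI' and 'ADI+SDI' use. The normalization in isolation:

```python
def normalize_pca_number(pca_number):
    # Mirrors the first step of postprocessor(): a single integer becomes
    # a (first_stage, second_stage) tuple, with -1 marking "not used"
    if not isinstance(pca_number, tuple):
        pca_number = (pca_number, -1)
    return pca_number
```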
diff --git a/pynpoint/util/psf.py b/pynpoint/util/psf.py
index de7d737..137ca98 100644
--- a/pynpoint/util/psf.py
+++ b/pynpoint/util/psf.py
@@ -2,7 +2,7 @@
 Functions for PSF subtraction.
 """
 
-from typing import Optional, Tuple
+from typing import Optional, Union, Tuple
 
 import numpy as np
 
@@ -10,93 +10,153 @@ from scipy.ndimage import rotate
 from sklearn.decomposition import PCA
 from typeguard import typechecked
 
+from pynpoint.util.image import scale_image, shift_image
+
 
 @typechecked
 def pca_psf_subtraction(images: np.ndarray,
-                        angles: np.ndarray,
-                        pca_number: int,
+                        angles: Optional[np.ndarray],
+                        pca_number: Union[int, np.int64],
+                        scales: Optional[np.ndarray] = None,
                         pca_sklearn: Optional[PCA] = None,
-                        im_shape: Optional[Tuple[int, int, int]] = None,
+                        im_shape: Optional[tuple] = None,
                         indices: Optional[np.ndarray] = None) -> Tuple[np.ndarray, np.ndarray]:
     """
     Function for PSF subtraction with PCA.
 
     Parameters
     ----------
-    images : numpy.ndarray
-        Stack of images. Also used as reference images if `pca_sklearn` is set to None. Should be
-        in the original 3D shape if `pca_sklearn` is set to None or in the 2D reshaped format if
-        `pca_sklearn` is not set to None.
-    angles : numpy.ndarray
-        Derotation angles (deg).
+    images : np.ndarray
+        Stack of images. Also used as reference images if ``pca_sklearn`` is set to None. The
+        data should have the original 3D shape if ``pca_sklearn`` is set to None or it should be
+        in a 2D reshaped format if ``pca_sklearn`` is not set to None.
+    angles : np.ndarray
+        Parallactic angles (deg).
     pca_number : int
-        Number of principal components used for the PSF model.
+        Number of principal components.
+    scales : np.ndarray, None
+        Scaling factors for SDI. Not used if set to None.
     pca_sklearn : sklearn.decomposition.pca.PCA, None
-        PCA decomposition of the input data.
+        PCA object with the principal components.
     im_shape : tuple(int, int, int), None
-        Original shape of the stack with images. Required if `pca_sklearn` is not set to None.
-    indices : numpy.ndarray, None
-        Non-masked image indices. All pixels are used if set to None.
+        The original 3D shape of the stack with images. Only required if ``pca_sklearn`` is not set
+        to None.
+    indices : np.ndarray, None
+        Array with the indices of the pixels that are used for the PSF subtraction. All pixels are
+        used if set to None.
 
     Returns
     -------
-    numpy.ndarray
+    np.ndarray
         Residuals of the PSF subtraction.
-    numpy.ndarray
+    np.ndarray
         Derotated residuals of the PSF subtraction.
     """
 
     if pca_sklearn is None:
+        # Create a PCA object if not provided as argument
         pca_sklearn = PCA(n_components=pca_number, svd_solver='arpack')
 
+        # The 3D shape of the array with images
         im_shape = images.shape
 
         if indices is None:
-            # select the first image and get the unmasked image indices
+            # Select the first image and get the unmasked image indices
             im_star = images[0, ].reshape(-1)
             indices = np.where(im_star != 0.)[0]
 
-        # reshape the images and select the unmasked pixels
+        # Reshape the images and select the unmasked pixels
         im_reshape = images.reshape(im_shape[0], im_shape[1]*im_shape[2])
         im_reshape = im_reshape[:, indices]
 
-        # subtract mean image
+        # Subtract the mean image
+        # This is also done by sklearn.decomposition.PCA.fit()
         im_reshape -= np.mean(im_reshape, axis=0)
 
-        # create pca basis
+        # Fit the principal components
         pca_sklearn.fit(im_reshape)
 
     else:
+        # If the PCA object is provided, then the input data is already reshaped
         im_reshape = np.copy(images)
 
-    # create pca representation
-    zeros = np.zeros((pca_sklearn.n_components - pca_number, im_reshape.shape[0]))
+    # Project the data on the principal components
+    # Note that this is the same as sklearn.decomposition.PCA.transform()
+    # It is hardcoded here because the number of components has been adjusted
     pca_rep = np.matmul(pca_sklearn.components_[:pca_number], im_reshape.T)
+
+    # The zeros are added with vstack to account for the components that were not used for the
+    # transformation to the lower-dimensional space, even though they were initialized with the
+    # PCA object. Since inverse_transform uses the initial number of components, the zeros are
+    # added for components > pca_number. These components do not impact the inverse transformation.
+    zeros = np.zeros((pca_sklearn.n_components - pca_number, im_reshape.shape[0]))
     pca_rep = np.vstack((pca_rep, zeros)).T
 
-    # create psf model
+    # Transform the data back to the original space
     psf_model = pca_sklearn.inverse_transform(pca_rep)
 
-    # create original array size
+    # Create an array with the original shape
     residuals = np.zeros((im_shape[0], im_shape[1]*im_shape[2]))
 
-    # subtract the psf model
+    # Select all pixel indices if set to None
     if indices is None:
         indices = np.arange(0, im_reshape.shape[1], 1)
 
+    # Subtract the PSF model
     residuals[:, indices] = im_reshape - psf_model
 
-    # reshape to the original image size
+    # Reshape the residuals to the original shape
     residuals = residuals.reshape(im_shape)
 
-    # check if the number of parang is equal to the number of images
-    if residuals.shape[0] != angles.shape[0]:
-        raise ValueError(f'The number of images ({residuals.shape[0]}) is not equal to the '
-                         f'number of parallactic angles ({angles.shape[0]}).')
+    # Scale the images back to the original size
+    scal_cor = np.zeros(residuals.shape)
+
+    if scales is not None:
+
+        # Check if the number of images is equal to the number of scaling factors
+        if residuals.shape[0] != scales.shape[0]:
+            raise ValueError(f'The number of images ({residuals.shape[0]}) is not equal to the '
+                             f'number of wavelengths ({scales.shape[0]}).')
+
+        for i, _ in enumerate(scales):
+            # Rescale the images
+            swaps = scale_image(residuals[i, ], 1/scales[i], 1/scales[i])
+
+            npix_del = scal_cor.shape[-1] - swaps.shape[-1]
+
+            if npix_del == 0:
+                scal_cor[i, ] = swaps
+
+            else:
+                if npix_del % 2 == 0:
+                    npix_del_a = int(npix_del/2)
+                    npix_del_b = int(npix_del/2)
+
+                else:
+                    npix_del_a = int((npix_del-1)/2)
+                    npix_del_b = int((npix_del+1)/2)
+
+                scal_cor[i, npix_del_a:-npix_del_b, npix_del_a:-npix_del_b] = swaps
+
+                if npix_del % 2 == 1:
+                    scal_cor[i, ] = shift_image(scal_cor[i, ], (0.5, 0.5), interpolation='spline')
+
+    else:
+        scal_cor = residuals
 
-    # derotate the images
     res_rot = np.zeros(residuals.shape)
-    for j, item in enumerate(angles):
-        res_rot[j, ] = rotate(residuals[j, ], item, reshape=False)
 
-    return residuals, res_rot
+    if angles is not None:
+
+        # Check if the number of parang is equal to the number of images
+        if residuals.shape[0] != angles.shape[0]:
+            raise ValueError(f'The number of images ({residuals.shape[0]}) is not equal to the '
+                             f'number of parallactic angles ({angles.shape[0]}).')
+
+        for j, item in enumerate(angles):
+            res_rot[j, ] = rotate(scal_cor[j, ], item, reshape=False)
+
+    else:
+        res_rot = scal_cor
+
+    return scal_cor, res_rot
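The back-scaling branch above embeds each downscaled frame in the center of the original grid; an odd size difference is split asymmetrically and later corrected with a half-pixel spline shift. A self-contained sketch of just the padding arithmetic (the shift step is left out):

```python
import numpy as np

def embed_center(small, size):
    # Mirrors the npix_del bookkeeping in pca_psf_subtraction(): place a
    # smaller image in the center of a size x size frame of zeros
    out = np.zeros((size, size))
    npix_del = size - small.shape[-1]

    if npix_del == 0:
        return small

    if npix_del % 2 == 0:
        npix_del_a = npix_del // 2
        npix_del_b = npix_del // 2
    else:
        npix_del_a = (npix_del - 1) // 2
        npix_del_b = (npix_del + 1) // 2

    out[npix_del_a:size - npix_del_b, npix_del_a:size - npix_del_b] = small
    return out

padded = embed_center(np.ones((3, 3)), 6)
```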
diff --git a/pynpoint/util/residuals.py b/pynpoint/util/residuals.py
index 8619939..ff19c57 100644
--- a/pynpoint/util/residuals.py
+++ b/pynpoint/util/residuals.py
@@ -16,6 +16,58 @@ def combine_residuals(method: str,
                       residuals: Optional[np.ndarray] = None,
                       angles: Optional[np.ndarray] = None) -> np.ndarray:
     """
+    Wavelength wrapper for the ``_residuals`` function. Returns an array with a single collapsed
+    image for 3D input or one collapsed image per wavelength for 4D input.
+
+    Parameters
+    ----------
+    method : str
+        Method used for combining the residuals ('mean', 'median', 'weighted', or 'clipped').
+    res_rot : np.ndarray
+        Derotated residuals of the PSF subtraction (3D).
+    residuals : np.ndarray, None
+        Non-derotated residuals of the PSF subtraction (3D). Only required for the noise-weighted
+        residuals.
+    angles : np.ndarray, None
+        Derotation angles (deg). Only required for the noise-weighted residuals.
+
+    Returns
+    -------
+    np.ndarray
+        Collapsed residuals (3D).
+    """
+
+    if res_rot.ndim == 3:
+        output = _residuals(method=method,
+                            res_rot=np.asarray(res_rot),
+                            residuals=residuals,
+                            angles=angles)
+
+    if res_rot.ndim == 4:
+        output = np.zeros((res_rot.shape[0], res_rot.shape[2], res_rot.shape[3]))
+
+        for i in range(res_rot.shape[0]):
+            if residuals is None:
+                output[i, ] = _residuals(method=method,
+                                         res_rot=res_rot[i, ],
+                                         residuals=residuals,
+                                         angles=angles)[0]
+
+            else:
+                output[i, ] = _residuals(method=method,
+                                         res_rot=res_rot[i, ],
+                                         residuals=residuals[i, ],
+                                         angles=angles)[0]
+
+    return output
+
+
+@typechecked
+def _residuals(method: str,
+               res_rot: np.ndarray,
+               residuals: Optional[np.ndarray] = None,
+               angles: Optional[np.ndarray] = None) -> np.ndarray:
+    """
     Function for combining the derotated residuals of the PSF subtraction.
 
     Parameters
@@ -50,6 +102,7 @@ def combine_residuals(method: str,
                                axis=0)
 
         res_var = np.zeros(res_repeat.shape)
+
         for j, angle in enumerate(angles):
             # scipy.ndimage.rotate rotates in clockwise direction for positive angles
             res_var[j, ] = rotate(input=res_repeat[j, ],
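The new ``combine_residuals`` is a thin wrapper that collapses a 3D stack directly and a 4D (wavelength, time, y, x) cube one wavelength at a time. A sketch of the dispatch, with a median collapse standing in for ``_residuals``:

```python
import numpy as np

def collapse(res_rot):
    # Stand-in for _residuals(): median-combine the time axis and return
    # a stack with a single image, as the private function does
    return np.median(res_rot, axis=0)[np.newaxis, ]

def combine(res_rot):
    if res_rot.ndim == 3:
        return collapse(res_rot)

    # 4D input: one collapsed image per wavelength
    output = np.zeros((res_rot.shape[0], res_rot.shape[2], res_rot.shape[3]))
    for i in range(res_rot.shape[0]):
        output[i, ] = collapse(res_rot[i, ])[0]

    return output
```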
diff --git a/pynpoint/util/sdi.py b/pynpoint/util/sdi.py
new file mode 100644
index 0000000..8050315
--- /dev/null
+++ b/pynpoint/util/sdi.py
@@ -0,0 +1,79 @@
+"""
+Functions for spectral differential imaging.
+"""
+
+import numpy as np
+
+from typeguard import typechecked
+
+from pynpoint.util.image import scale_image, shift_image
+
+
+@typechecked
+def sdi_scaling(image_in: np.ndarray,
+                scaling: np.ndarray) -> np.ndarray:
+
+    """
+    Function to rescale the images by their wavelength ratios.
+
+    Parameters
+    ----------
+    image_in : np.ndarray
+        Data to rescale.
+    scaling : np.ndarray
+        Scaling factors.
+
+    Returns
+    -------
+    np.ndarray
+        Rescaled images with the same shape as ``image_in``.
+    """
+
+    if image_in.shape[0] != scaling.shape[0]:
+        raise ValueError('The number of wavelengths is not equal to the number of available '
+                         'scaling factors.')
+
+    image_out = np.zeros(image_in.shape)
+
+    for i in range(image_in.shape[0]):
+        swaps = scale_image(image_in[i, ], scaling[i], scaling[i])
+
+        npix_del = swaps.shape[-1] - image_out.shape[-1]
+
+        if npix_del == 0:
+            image_out[i, ] = swaps
+
+        else:
+            if npix_del % 2 == 0:
+                npix_del_a = int(npix_del/2)
+                npix_del_b = int(npix_del/2)
+
+            else:
+                npix_del_a = int((npix_del-1)/2)
+                npix_del_b = int((npix_del+1)/2)
+
+            image_out[i, ] = swaps[npix_del_a:-npix_del_b, npix_del_a:-npix_del_b]
+
+        if npix_del % 2 == 1:
+            image_out[i, ] = shift_image(image_out[i, ], (-0.5, -0.5), interpolation='spline')
+
+    return image_out
+
+
+@typechecked
+def scaling_factors(wavelengths: np.ndarray) -> np.ndarray:
+    """
+    Function to calculate the scaling factors for SDI.
+
+    Parameters
+    ----------
+    wavelengths : np.ndarray
+        Array with the wavelength of each frame.
+
+    Returns
+    -------
+    np.ndarray
+        Scaling factors.
+    """
+
+    return max(wavelengths) / wavelengths
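Since ``scaling_factors`` divides the longest wavelength by each wavelength, the reddest frame gets a factor of exactly 1 and bluer frames are scaled up accordingly:

```python
import numpy as np

def scaling_factors(wavelengths):
    # Same one-liner as in pynpoint/util/sdi.py
    return max(wavelengths) / wavelengths

factors = scaling_factors(np.array([1.0, 1.1, 1.2]))
# The longest wavelength maps to a factor of 1; shorter ones to > 1
```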
diff --git a/pynpoint/util/tests.py b/pynpoint/util/tests.py
index e48d567..daca867 100644
--- a/pynpoint/util/tests.py
+++ b/pynpoint/util/tests.py
@@ -142,7 +142,7 @@ def create_fits(path: str,
 @typechecked
 def create_fake_data(path: str) -> None:
     """
-    Create ADI test data with a fake planet.
+    Create an ADI dataset with a star and planet.
 
     Parameters
     ----------
@@ -191,6 +191,60 @@ def create_fake_data(path: str) -> None:
     create_fits(path, 'images.fits', images, ndit, exp_no, 0., 0.)
 
 
+@typechecked
+def create_ifs_data(path: str) -> None:
+    """
+    Create an IFS dataset with a star and planet.
+
+    Parameters
+    ----------
+    path : str
+        Working folder.
+
+    Returns
+    -------
+    NoneType
+        None
+    """
+
+    ndit = 10
+    npix = 21
+    nwavel = 3
+    fwhm = 3.
+    sep = 6.
+    contrast = 1.
+    pos_star = 10.
+    exp_no = 1
+
+    parang = np.linspace(0., 180., 10)
+    wavelength = [1., 1.1, 1.2]
+
+    if not os.path.exists(path):
+        os.makedirs(path)
+
+    sigma = fwhm / (2.*math.sqrt(2.*math.log(2.)))
+
+    x = y = np.arange(0., 21., 1.)
+    xx, yy = np.meshgrid(x, y)
+
+    np.random.seed(1)
+
+    images = np.random.normal(loc=0, scale=0.05, size=(nwavel, ndit, npix, npix))
+
+    for i, par_item in enumerate(parang):
+        for j, wav_item in enumerate(wavelength):
+            sigma_scale = sigma*wav_item
+
+            star = np.exp(-((xx-pos_star)**2+(yy-pos_star)**2)/(2.*sigma_scale**2))
+
+            x_shift = sep*math.cos(math.radians(par_item))
+            y_shift = sep*math.sin(math.radians(par_item))
+
+            images[j, i, ] += star + shift(contrast*star, (x_shift, y_shift), order=5)
+
+    create_fits(path, 'images.fits', images, ndit, exp_no, 0., 0.)
+
+
 @typechecked
 def create_star_data(path: str,
                      npix: int = 11,
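``create_ifs_data`` produces a 4D cube of shape ``(nwavel, ndit, npix, npix)`` in which the Gaussian PSF width grows linearly with wavelength. A minimal sketch of the star-only cube construction (noise, planet, and FITS writing omitted):

```python
import numpy as np

nwavel, ndit, npix = 3, 10, 21
pos_star, fwhm = 10., 3.
sigma = fwhm / (2. * np.sqrt(2. * np.log(2.)))

x = y = np.arange(0., npix, 1.)
xx, yy = np.meshgrid(x, y)

images = np.zeros((nwavel, ndit, npix, npix))

for j, wav_item in enumerate([1., 1.1, 1.2]):
    # The PSF width scales with wavelength, as in create_ifs_data()
    sigma_scale = sigma * wav_item
    images[j] += np.exp(-((xx - pos_star)**2 + (yy - pos_star)**2) / (2. * sigma_scale**2))
```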
diff --git a/pynpoint/util/types.py b/pynpoint/util/type_aliases.py
similarity index 100%
rename from pynpoint/util/types.py
rename to pynpoint/util/type_aliases.py
diff --git a/requirements.txt b/requirements.txt
index 7e53587..a6267b5 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,14 +1,14 @@
-astropy ~= 4.0
-emcee ~= 3.0
-h5py ~= 2.10
-numba ~= 0.49
-numpy ~= 1.18
-opencv-python ~= 4.2
-photutils ~= 0.7
-PyWavelets ~= 1.1
-scikit-image ~= 0.17
-scikit-learn ~= 0.22
-scipy ~= 1.4
-statsmodels ~= 0.11
-tqdm ~= 4.46
-typeguard ~= 2.7
+astropy ~= 4.1.0
+emcee ~= 3.0.0
+h5py ~= 3.1.0
+numba ~= 0.54.0
+numpy ~= 1.19.0
+opencv-python ~= 4.4.0
+photutils ~= 1.1.0
+PyWavelets ~= 1.1.0
+scikit-image ~= 0.18.0
+scikit-learn ~= 0.24.0
+scipy ~= 1.5.0
+statsmodels ~= 0.12.0
+tqdm ~= 4.62.0
+typeguard ~= 2.12.0
diff --git a/setup.py b/setup.py
old mode 100755
new mode 100644
index 0d54947..868b172
--- a/setup.py
+++ b/setup.py
@@ -2,22 +2,20 @@
 
 from setuptools import setup
 
-try:
-    from pip._internal.req import parse_requirements
-except ImportError:
-    from pip.req import parse_requirements
+from pip._internal.network.session import PipSession
+from pip._internal.req import parse_requirements
 
-reqs = parse_requirements('requirements.txt', session='hack')
-reqs = [str(ir.req) for ir in reqs]
+reqs = parse_requirements('requirements.txt', session=PipSession())
+reqs = [str(req.requirement) for req in reqs]
 
 setup(
     name='pynpoint',
-    version='0.8.3',
+    version='0.10.0',
     description='Pipeline for processing and analysis of high-contrast imaging data',
     long_description=open('README.rst').read(),
     long_description_content_type='text/x-rst',
     author='Tomas Stolker & Markus Bonse',
-    author_email='tomas.stolker@phys.ethz.ch',
+    author_email='stolker@strw.leidenuniv.nl',
     url='https://github.com/PynPoint/PynPoint',
     project_urls={'Documentation': 'https://pynpoint.readthedocs.io'},
     packages=['pynpoint',
@@ -36,8 +34,9 @@ setup(
         'Topic :: Scientific/Engineering :: Astronomy',
         'License :: OSI Approved :: GNU General Public License v3 (GPLv3)',
         'Natural Language :: English',
-        'Programming Language :: Python :: 3.6',
         'Programming Language :: Python :: 3.7',
+        'Programming Language :: Python :: 3.8',
+        'Programming Language :: Python :: 3.9',
     ],
     tests_require=['pytest'],
 )
diff --git a/tests/test_core/test_outputport.py b/tests/test_core/test_outputport.py
index e99cf84..374df3f 100644
--- a/tests/test_core/test_outputport.py
+++ b/tests/test_core/test_outputport.py
@@ -418,7 +418,8 @@ class TestOutputPort:
         assert len(warning) == 1
 
         # check that the message matches
-        assert warning[0].message.args[0] == 'Can not store attribute if data tag does not exist.'
+        assert warning[0].message.args[0] == 'Can not store the attribute \'attr1\' because ' \
+                                             'the dataset \'new_data\' does not exist.'
 
         out_port.del_all_attributes()
         out_port.del_all_data()
diff --git a/tests/test_core/test_processing.py b/tests/test_core/test_processing.py
index 5d9d1ad..ecdab32 100644
--- a/tests/test_core/test_processing.py
+++ b/tests/test_core/test_processing.py
@@ -69,6 +69,18 @@ class TestProcessing:
         assert warning[0].message.args[0] == 'Tag \'test\' of ProcessingModule \'badpixel\' is ' \
                                              'already used.'
 
+    def test_output_port_set_connection(self) -> None:
+
+        self.pipeline.m_data_storage.open_connection()
+
+        module = BadPixelSigmaFilterModule(name_in='badpixel2',
+                                           image_in_tag='images',
+                                           image_out_tag='im_out')
+
+        self.pipeline.add_module(module)
+
+        port = module.add_output_port('test1')
+
         self.pipeline.m_data_storage.close_connection()
 
     def test_apply_function(self) -> None:
diff --git a/tests/test_core/test_pypeline.py b/tests/test_core/test_pypeline.py
index ce38759..25d8ec8 100644
--- a/tests/test_core/test_pypeline.py
+++ b/tests/test_core/test_pypeline.py
@@ -101,32 +101,26 @@ class TestPypeline:
 
     def test_create_pipeline_path_missing(self) -> None:
 
-        dir_non_exists = self.test_dir + 'none/'
+        dir_non_exists = self.test_dir + 'none_dir/'
         dir_exists = self.test_dir
 
         with pytest.raises(AssertionError) as error:
             Pypeline(dir_non_exists, dir_exists, dir_exists)
 
-        assert str(error.value) == 'Input directory for _m_working_place does not exist ' \
-                                   '- input requested: '+self.test_dir+'none/.'
+        assert str(error.value) == 'The folder that was chosen for the working place does not ' \
+                                   'exist: '+self.test_dir+'none_dir/.'
 
         with pytest.raises(AssertionError) as error:
             Pypeline(dir_exists, dir_non_exists, dir_exists)
 
-        assert str(error.value) == 'Input directory for _m_input_place does not exist ' \
-                                   '- input requested: '+self.test_dir+'none/.'
+        assert str(error.value) == 'The folder that was chosen for the input place does not ' \
+                                   'exist: '+self.test_dir+'none_dir/.'
 
         with pytest.raises(AssertionError) as error:
             Pypeline(dir_exists, dir_exists, dir_non_exists)
 
-        assert str(error.value) == 'Input directory for _m_output_place does not exist ' \
-                                   '- input requested: '+self.test_dir+'none/.'
-
-        with pytest.raises(AssertionError) as error:
-            Pypeline()
-
-        assert str(error.value) == 'Input directory for _m_working_place does not exist ' \
-                                   '- input requested: None.'
+        assert str(error.value) == 'The folder that was chosen for the output place does not ' \
+                                   'exist: '+self.test_dir+'none_dir/.'
 
     def test_create_pipeline_existing_database(self) -> None:
 
@@ -179,8 +173,11 @@ class TestPypeline:
 
         assert len(warning) == 1
 
-        assert warning[0].message.args[0] == 'Pipeline module names need to be unique. ' \
-                                             'Overwriting module \'read2\'.'
+        assert warning[0].message.args[0] == 'Names of pipeline modules that are added to the ' \
+                                             'Pypeline need to be unique. The current pipeline ' \
+                                             'module, \'read2\', does already exist in the ' \
+                                             'Pypeline dictionary so the previous module with ' \
+                                             'the same name will be overwritten.'
 
         module = BadPixelSigmaFilterModule(name_in='badpixel',
                                            image_in_tag='im_arr1',
@@ -271,7 +268,7 @@ class TestPypeline:
                                    'which is not created by a previous module or the data does ' \
                                    'not exist in the database.'
 
-        assert pipeline.validate_pipeline_module('test') is None
+        assert pipeline.validate_pipeline_module('test') == (False, 'test')
 
         with pytest.raises(TypeError) as error:
             pipeline._validate('module', 'tag')
@@ -318,8 +315,9 @@ class TestPypeline:
 
         assert len(warning) == 1
 
-        assert warning[0].message.args[0] == 'Pipeline module name \'test\' not found in the ' \
-                                             'Pypeline dictionary.'
+        assert warning[0].message.args[0] == 'Pipeline module \'test\' is not found in the ' \
+                                             'Pypeline dictionary so it could not be removed. ' \
+                                             'The dictionary contains the following modules: [].'
 
         os.remove(self.test_dir+'PynPoint_database.hdf5')
 
@@ -339,7 +337,19 @@ class TestPypeline:
 
         pipeline = Pypeline(self.test_dir, self.test_dir, self.test_dir)
 
-        assert pipeline.get_tags() == 'images'
+        assert pipeline.get_tags() == ['images']
+
+    def test_list_attributes(self) -> None:
+
+        pipeline = Pypeline(self.test_dir, self.test_dir, self.test_dir)
+
+        attr_dict = pipeline.list_attributes('images')
+
+        assert len(attr_dict) == 11
+        assert attr_dict['INSTRUMENT'] == 'IMAGER'
+        assert attr_dict['PIXSCALE'] == 0.027
+        assert attr_dict['NFRAMES'] == [5]
+        assert attr_dict['PARANG_START'] == [10.]
 
     def test_set_and_get_attribute(self) -> None:
 
@@ -359,13 +369,21 @@ class TestPypeline:
         attribute = pipeline.get_attribute('images', 'PARANG', static=False)
         assert attribute == pytest.approx(np.arange(10., 21., 1.), rel=self.limit, abs=0.)
 
+    def test_get_data_range(self) -> None:
+
+        pipeline = Pypeline(self.test_dir, self.test_dir, self.test_dir)
+
+        data = pipeline.get_data('images', data_range=(0, 2))
+
+        assert data.shape == (2, 11, 11)
+
     def test_delete_data(self) -> None:
 
         pipeline = Pypeline(self.test_dir, self.test_dir, self.test_dir)
 
         pipeline.delete_data('images')
 
-        assert pipeline.get_tags().size == 0
+        assert len(pipeline.get_tags()) == 0
 
     def test_delete_not_found(self) -> None:
 
diff --git a/tests/test_processing/test_background.py b/tests/test_processing/test_background.py
index 70ed057..c6f0650 100644
--- a/tests/test_processing/test_background.py
+++ b/tests/test_processing/test_background.py
@@ -1,14 +1,14 @@
 import os
 
-import pytest
 import numpy as np
+import pytest
 
 from pynpoint.core.pypeline import Pypeline
 from pynpoint.readwrite.fitsreading import FitsReadingModule
-from pynpoint.processing.background import MeanBackgroundSubtractionModule, \
-                                           SimpleBackgroundSubtractionModule, \
-                                           LineSubtractionModule, \
-                                           NoddingBackgroundModule
+from pynpoint.processing.background import LineSubtractionModule, \
+                                           MeanBackgroundSubtractionModule, \
+                                           NoddingBackgroundModule, \
+                                           SimpleBackgroundSubtractionModule
 from pynpoint.processing.pcabackground import DitheringBackgroundModule
 from pynpoint.processing.stacksubset import StackCubesModule
 from pynpoint.util.tests import create_config, create_dither_data, create_star_data, \
@@ -123,11 +123,7 @@ class TestBackground:
                                            gaussian=0.05,
                                            subframe=0.1,
                                            pca_number=1,
-                                           mask_star=0.05,
-                                           crop=True,
-                                           prepare=True,
-                                           pca_background=True,
-                                           combine='pca')
+                                           mask_star=0.05)
 
         self.pipeline.add_module(module)
         self.pipeline.run_module('pca_dither1')
@@ -149,11 +145,11 @@ class TestBackground:
         assert data.shape == (15, 9, 9)
 
         data = self.pipeline.get_data('dither_dither_pca_fit1')
-        assert np.sum(data) == pytest.approx(-0.01019999314121019, rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(-0.6816458444287745, rel=1e-5, abs=0.)
         assert data.shape == (5, 9, 9)
 
         data = self.pipeline.get_data('dither_dither_pca_res1')
-        assert np.sum(data) == pytest.approx(54.884085831929795, rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(55.63879076093719, rel=1e-6, abs=0.)
         assert data.shape == (5, 9, 9)
 
         data = self.pipeline.get_data('dither_dither_pca_mask1')
@@ -161,11 +157,11 @@ class TestBackground:
         assert data.shape == (5, 9, 9)
 
         data = self.pipeline.get_data('pca_dither1')
-        assert np.sum(data) == pytest.approx(208.774670964812, rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(208.24417329569593, rel=1e-6, abs=0.)
         assert data.shape == (20, 9, 9)
 
         attr = self.pipeline.get_attribute('dither_dither_pca_res1', 'STAR_POSITION', static=False)
-        assert np.sum(attr) == pytest.approx(51., rel=self.limit, abs=0.)
+        assert np.sum(attr) == pytest.approx(40., rel=self.limit, abs=0.)
         assert attr.shape == (5, 2)
 
     def test_dithering_center(self) -> None:
@@ -173,23 +169,19 @@ class TestBackground:
         module = DitheringBackgroundModule(name_in='pca_dither2',
                                            image_in_tag='dither',
                                            image_out_tag='pca_dither2',
-                                           center=((5, 5), (5, 15), (15, 15), (15, 5)),
+                                           center=[(5, 5), (5, 15), (15, 15), (15, 5)],
                                            cubes=1,
                                            size=0.2,
                                            gaussian=0.05,
                                            subframe=None,
                                            pca_number=1,
-                                           mask_star=0.05,
-                                           crop=True,
-                                           prepare=True,
-                                           pca_background=True,
-                                           combine='pca')
+                                           mask_star=0.05)
 
         self.pipeline.add_module(module)
         self.pipeline.run_module('pca_dither2')
 
         data = self.pipeline.get_data('pca_dither2')
-        assert np.sum(data) == pytest.approx(209.8271898501695, rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(208.24417332523367, rel=1e-6, abs=0.)
         assert data.shape == (20, 9, 9)
 
     def test_nodding_background(self) -> None:
diff --git a/tests/test_processing/test_badpixel.py b/tests/test_processing/test_badpixel.py
index 4ce5aae..c381e1e 100644
--- a/tests/test_processing/test_badpixel.py
+++ b/tests/test_processing/test_badpixel.py
@@ -51,14 +51,14 @@ class TestBadPixel:
                                            image_out_tag='sigma1',
                                            map_out_tag='None',
                                            box=9,
-                                           sigma=5.,
-                                           iterate=1)
+                                           sigma=3.,
+                                           iterate=5)
 
         self.pipeline.add_module(module)
         self.pipeline.run_module('sigma1')
 
         data = self.pipeline.get_data('sigma1')
-        assert np.sum(data) == pytest.approx(0.007314386854009355, rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(0.006513475520308432, rel=self.limit, abs=0.)
         assert data.shape == (5, 11, 11)
 
     def test_bad_pixel_map_out(self) -> None:
@@ -68,22 +68,22 @@ class TestBadPixel:
                                            image_out_tag='sigma2',
                                            map_out_tag='bpmap',
                                            box=9,
-                                           sigma=5.,
-                                           iterate=1)
+                                           sigma=2.,
+                                           iterate=3)
 
         self.pipeline.add_module(module)
         self.pipeline.run_module('sigma2')
 
         data = self.pipeline.get_data('sigma2')
-        assert data[0, 0, 0] == pytest.approx(0.00032486907273264834, rel=self.limit, abs=0.)
+        assert data[0, 0, 0] == pytest.approx(-2.4570591355257687e-05, rel=self.limit, abs=0.)
         assert data[0, 5, 5] == pytest.approx(9.903775276151606e-06, rel=self.limit, abs=0.)
-        assert np.sum(data) == pytest.approx(0.007314386854009355, rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(0.011777887008566097, rel=self.limit, abs=0.)
         assert data.shape == (5, 11, 11)
 
         data = self.pipeline.get_data('bpmap')
-        assert data[0, 0, 0] == pytest.approx(1., rel=self.limit, abs=0.)
+        assert data[0, 1, 1] == pytest.approx(1., rel=self.limit, abs=0.)
         assert data[0, 5, 5] == pytest.approx(0., rel=self.limit, abs=0.)
-        assert np.sum(data) == pytest.approx(604., rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(519.0, rel=self.limit, abs=0.)
         assert data.shape == (5, 11, 11)
 
     def test_bad_pixel_map(self) -> None:
diff --git a/tests/test_processing/test_centering.py b/tests/test_processing/test_centering.py
index 6bfb785..bcaaa00 100644
--- a/tests/test_processing/test_centering.py
+++ b/tests/test_processing/test_centering.py
@@ -75,19 +75,15 @@ class TestCentering:
         with pytest.warns(UserWarning) as warning:
             self.pipeline.run_module('extract1')
 
-        assert len(warning) == 1
+        assert len(warning) == 3
 
-        assert warning[0].message.args[0] == 'The new dataset that is stored under the tag name ' \
-                                             '\'index\' is empty.'
+        assert warning[0].message.args[0] == 'Can not store the attribute \'INSTRUMENT\' ' \
+                                             'because the dataset \'index\' does not exist.'
 
         data = self.pipeline.get_data('extract1')
         assert np.sum(data) == pytest.approx(104.93318507061295, rel=self.limit, abs=0.)
         assert data.shape == (10, 9, 9)
 
-        attr = self.pipeline.get_attribute('extract1', 'STAR_POSITION', static=False)
-        assert np.sum(attr) == pytest.approx(100, rel=self.limit, abs=0.)
-        assert attr.shape == (10, 2)
-
     def test_star_align(self) -> None:
 
         module = StarAlignmentModule(name_in='align1',
@@ -223,7 +219,15 @@ class TestCentering:
                                        sigma=0.05)
 
         self.pipeline.add_module(module)
-        self.pipeline.run_module('waffle')
+
+        with pytest.warns(DeprecationWarning) as warning:
+            self.pipeline.run_module('waffle')
+
+        assert len(warning) == 1
+
+        assert warning[0].message.args[0] == 'The \'pattern\' parameter will be deprecated in a ' \
+                                             'future release. Please Use the \'angle\' ' \
+                                             'parameter instead and set it to 45.0 degrees.'
 
         data = self.pipeline.get_data('center')
         assert np.sum(data) == pytest.approx(104.93318507061295, rel=self.limit, abs=0.)
@@ -295,7 +299,15 @@ class TestCentering:
                                        sigma=0.05)
 
         self.pipeline.add_module(module)
-        self.pipeline.run_module('waffle_even')
+
+        with pytest.warns(DeprecationWarning) as warning:
+            self.pipeline.run_module('waffle_even')
+
+        assert len(warning) == 1
+
+        assert warning[0].message.args[0] == 'The \'pattern\' parameter will be deprecated in a ' \
+                                             'future release. Please Use the \'angle\' ' \
+                                             'parameter instead and set it to 45.0 degrees.'
 
         data = self.pipeline.get_data('center_even')
         assert np.sum(data) == pytest.approx(105.22695036281449, rel=self.limit, abs=0.)
@@ -311,7 +323,7 @@ class TestCentering:
                                  fit_out_tag='fit_full',
                                  mask_out_tag='mask',
                                  method='full',
-                                 radius=0.1,
+                                 mask_radii=(None, 0.1),
                                  sign='positive',
                                  model='gaussian',
                                  guess=(1., 2., 3., 3., 0.01, 0., 0.))
@@ -338,7 +350,7 @@ class TestCentering:
                                  fit_out_tag='fit_mean',
                                  mask_out_tag=None,
                                  method='mean',
-                                 radius=0.1,
+                                 mask_radii=(None, 0.1),
                                  sign='positive',
                                  model='moffat',
                                  guess=(1., 2., 3., 3., 0.01, 0., 0., 1.))
diff --git a/tests/test_processing/test_extract.py b/tests/test_processing/test_extract.py
index 5fcd11b..2f16f45 100644
--- a/tests/test_processing/test_extract.py
+++ b/tests/test_processing/test_extract.py
@@ -72,19 +72,22 @@ class TestExtract:
         with pytest.warns(UserWarning) as warning:
             self.pipeline.run_module('extract1')
 
-        assert len(warning) == 1
+        assert len(warning) == 3
 
-        assert warning[0].message.args[0] == 'The new dataset that is stored under the tag name ' \
-                                             '\'index\' is empty.'
+        assert warning[0].message.args[0] == 'Can not store the attribute \'INSTRUMENT\' because ' \
+                                             'the dataset \'index\' does not exist.'
+
+        assert warning[1].message.args[0] == 'Can not store the attribute \'PIXSCALE\' because ' \
+                                             'the dataset \'index\' does not exist.'
+
+        assert warning[2].message.args[0] == 'Can not store the attribute \'History: ' \
+                                             'StarExtractionModule\' because the dataset ' \
+                                             '\'index\' does not exist.'
 
         data = self.pipeline.get_data('extract1')
         assert np.sum(data) == pytest.approx(104.93318507061295, rel=self.limit, abs=0.)
         assert data.shape == (10, 9, 9)
 
-        attr = self.pipeline.get_attribute('extract1', 'STAR_POSITION', static=False)
-        assert np.sum(attr) == pytest.approx(100, rel=self.limit, abs=0.)
-        assert attr.shape == (10, 2)
-
     def test_extract_center_none(self) -> None:
 
         module = StarExtractionModule(name_in='extract2',
@@ -100,19 +103,22 @@ class TestExtract:
         with pytest.warns(UserWarning) as warning:
             self.pipeline.run_module('extract2')
 
-        assert len(warning) == 1
+        assert len(warning) == 3
+
+        assert warning[0].message.args[0] == 'Can not store the attribute \'INSTRUMENT\' because ' \
+                                             'the dataset \'index\' does not exist.'
+
+        assert warning[1].message.args[0] == 'Can not store the attribute \'PIXSCALE\' because ' \
+                                             'the dataset \'index\' does not exist.'
 
-        assert warning[0].message.args[0] == 'The new dataset that is stored under the tag name ' \
-                                             '\'index\' is empty.'
+        assert warning[2].message.args[0] == 'Can not store the attribute \'History: ' \
+                                             'StarExtractionModule\' because the dataset ' \
+                                             '\'index\' does not exist.'
 
         data = self.pipeline.get_data('extract2')
         assert np.sum(data) == pytest.approx(104.93318507061295, rel=self.limit, abs=0.)
         assert data.shape == (10, 9, 9)
 
-        attr = self.pipeline.get_attribute('extract2', 'STAR_POSITION', static=False)
-        assert np.sum(attr) == pytest.approx(100, rel=self.limit, abs=0.)
-        assert attr.shape == (10, 2)
-
     def test_extract_position(self) -> None:
 
         module = StarExtractionModule(name_in='extract7',
@@ -130,10 +136,6 @@ class TestExtract:
         assert np.sum(data) == pytest.approx(104.93318507061295, rel=self.limit, abs=0.)
         assert data.shape == (10, 9, 9)
 
-        attr = self.pipeline.get_attribute('extract7', 'STAR_POSITION', static=False)
-        assert np.sum(attr) == pytest.approx(100, rel=self.limit, abs=0.)
-        assert attr.shape == (10, 2)
-
     def test_extract_too_large(self) -> None:
 
         module = StarExtractionModule(name_in='extract3',
@@ -160,10 +162,6 @@ class TestExtract:
         assert np.sum(data) == pytest.approx(104.93318507061295, rel=self.limit, abs=0.)
         assert data.shape == (10, 9, 9)
 
-        attr = self.pipeline.get_attribute('extract3', 'STAR_POSITION', static=False)
-        assert np.sum(attr) == pytest.approx(100, rel=self.limit, abs=0.)
-        assert attr.shape == (10, 2)
-
     def test_star_extract_cpu(self) -> None:
 
         with h5py.File(self.test_dir+'PynPoint_database.hdf5', 'a') as hdf_file:
@@ -182,11 +180,15 @@ class TestExtract:
         with pytest.warns(UserWarning) as warning:
             self.pipeline.run_module('extract4')
 
-        assert len(warning) == 1
+        assert len(warning) == 2
+
+        assert warning[0].message.args[0] == 'The \'index_out_port\' can only be used if ' \
+                                             'CPU = 1. No data will be stored to this output port.'
 
-        assert warning[0].message.args[0] == 'Chosen image size is too large to crop the image ' \
-                                             'around the brightest pixel. Using the center of ' \
-                                             'the image instead.'
+        assert warning[1].message.args[0] == 'Chosen image size is too large to crop the image ' \
+                                             'around the brightest pixel (image index = 0, ' \
+                                             'pixel [x, y] = [2, 2]). Using the center of the ' \
+                                             'image instead.'
 
     def test_extract_binary(self) -> None:
 
diff --git a/tests/test_processing/test_fluxposition.py b/tests/test_processing/test_fluxposition.py
index 812747b..0602a7f 100644
--- a/tests/test_processing/test_fluxposition.py
+++ b/tests/test_processing/test_fluxposition.py
@@ -149,7 +149,7 @@ class TestFluxPosition:
         self.pipeline.run_module('pca')
 
         data = self.pipeline.get_data('res_mean')
-        assert np.sum(data) == pytest.approx(0.015843543362863227, rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(0.014757351752469366, rel=self.limit, abs=0.)
         assert data.shape == (1, 21, 21)
 
     def test_false_positive(self) -> None:
@@ -189,11 +189,11 @@ class TestFluxPosition:
         self.pipeline.run_module('false2')
 
         data = self.pipeline.get_data('snr_fpf2')
-        assert data[0, 1] == pytest.approx(2.0959960937500006, rel=self.limit, abs=0.)
-        assert data[0, 2] == pytest.approx(0.21342343096632785, rel=self.limit, abs=0.)
-        assert data[0, 3] == pytest.approx(179.3133641536648, rel=self.limit, abs=0.)
-        assert data[0, 4] == pytest.approx(24.497480327287796, rel=self.limit, abs=0.)
-        assert data[0, 5] == pytest.approx(2.4056070777715073e-08, rel=self.limit, abs=0.)
+        assert data[0, 1] == pytest.approx(2.0681640624999993, rel=self.limit, abs=0.)
+        assert data[0, 2] == pytest.approx(0.21416845852767494, rel=self.limit, abs=0.)
+        assert data[0, 3] == pytest.approx(179.47800221910444, rel=self.limit, abs=0.)
+        assert data[0, 4] == pytest.approx(24.254455766076823, rel=self.limit, abs=0.)
+        assert data[0, 5] == pytest.approx(2.5776271254831863e-08, rel=self.limit, abs=0.)
         assert data.shape == (1, 6)
 
     def test_simplex_minimization_hessian(self) -> None:
@@ -203,7 +203,7 @@ class TestFluxPosition:
                                            psf_in_tag='psf',
                                            res_out_tag='simplex_res',
                                            flux_position_tag='flux_position',
-                                           position=(10, 3),
+                                           position=(10., 3.),
                                            magnitude=2.5,
                                            psf_scaling=-1.,
                                            merit='hessian',
@@ -216,13 +216,13 @@ class TestFluxPosition:
                                            extra_rot=0.,
                                            reference_in_tag=None,
                                            residuals='median',
-                                           offset=3.)
+                                           offset=1.)
 
         self.pipeline.add_module(module)
         self.pipeline.run_module('simplex1')
 
         data = self.pipeline.get_data('simplex_res')
-        assert np.sum(data) == pytest.approx(0.07149269966957492, rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(0.07079158286664607, rel=self.limit, abs=0.)
         assert data.shape == (25, 21, 21)
 
         data = self.pipeline.get_data('flux_position')
@@ -240,7 +240,7 @@ class TestFluxPosition:
                                            psf_in_tag='psf',
                                            res_out_tag='simplex_res_ref',
                                            flux_position_tag='flux_position_ref',
-                                           position=(10, 3),
+                                           position=(10., 3.),
                                            magnitude=2.5,
                                            psf_scaling=-1.,
                                            merit='poisson',
@@ -258,7 +258,7 @@ class TestFluxPosition:
         self.pipeline.run_module('simplex2')
 
         data = self.pipeline.get_data('simplex_res_ref')
-        assert np.sum(data) == pytest.approx(9.91226137018148, rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(9.914746160040783, rel=self.limit, abs=0.)
         assert data.shape == (28, 21, 21)
 
         data = self.pipeline.get_data('flux_position_ref')
@@ -305,15 +305,8 @@ class TestFluxPosition:
                                     sigma=(1e-3, 1e-1, 1e-2))
 
         self.pipeline.add_module(module)
-
-        # with pytest.warns(RuntimeWarning) as warning:
         self.pipeline.run_module('mcmc')
 
-        # assert len(warning) == 5
-        #
-        # data = self.pipeline.get_data('mcmc')
-        # assert data.shape == (5, 6, 3)
-
     def test_systematic_error(self) -> None:
 
         module = SystematicErrorModule(name_in='error',
@@ -322,7 +315,7 @@ class TestFluxPosition:
                                        offset_out_tag='offset',
                                        position=(0.162, 0.),
                                        magnitude=5.,
-                                       angles=(0., 360., 2),
+                                       angles=(0., 180., 2),
                                        psf_scaling=1.,
                                        merit='gaussian',
                                        aperture=0.06,
@@ -331,7 +324,7 @@ class TestFluxPosition:
                                        mask=(None, None),
                                        extra_rot=0.,
                                        residuals='median',
-                                       offset=2.)
+                                       offset=1.)
 
         self.pipeline.add_module(module)
         self.pipeline.run_module('error')
@@ -340,4 +333,6 @@ class TestFluxPosition:
         assert data[0, 0] == pytest.approx(-0.0028749671933526733, rel=self.limit, abs=0.)
         assert data[0, 1] == pytest.approx(0.2786088210998514, rel=self.limit, abs=0.)
         assert data[0, 2] == pytest.approx(-0.02916297162565762, rel=self.limit, abs=0.)
-        assert data.shape == (2, 3)
+        assert data[0, 3] == pytest.approx(-0.02969350583704866, rel=self.limit, abs=0.)
+        assert data[0, 4] == pytest.approx(-0.10640807184499579, rel=self.limit, abs=0.)
+        assert data.shape == (2, 5)
diff --git a/tests/test_processing/test_limits.py b/tests/test_processing/test_limits.py
index 75b6fb6..e7fd326 100644
--- a/tests/test_processing/test_limits.py
+++ b/tests/test_processing/test_limits.py
@@ -133,7 +133,7 @@ class TestLimits:
         with h5py.File(self.test_dir+'PynPoint_database.hdf5', 'a') as hdf_file:
             hdf_file['contrast_limits'] = limits
 
-        url = 'https://phoenix.ens-lyon.fr/Grids/AMES-Cond/ISOCHRONES/' \
+        url = 'https://home.strw.leidenuniv.nl/~stolker/pynpoint/' \
               'model.AMES-Cond-2000.M-0.0.NaCo.Vega'
 
         filename = self.test_dir + 'model.AMES-Cond-2000.M-0.0.NaCo.Vega'
diff --git a/tests/test_processing/test_psfpreparation.py b/tests/test_processing/test_psfpreparation.py
index f56cb4a..5883caf 100644
--- a/tests/test_processing/test_psfpreparation.py
+++ b/tests/test_processing/test_psfpreparation.py
@@ -6,8 +6,9 @@ import numpy as np
 from pynpoint.core.pypeline import Pypeline
 from pynpoint.readwrite.fitsreading import FitsReadingModule
 from pynpoint.processing.psfpreparation import PSFpreparationModule, AngleInterpolationModule, \
-                                               AngleCalculationModule, SDIpreparationModule
-from pynpoint.util.tests import create_config, create_star_data, remove_test_data
+                                               AngleCalculationModule, SDIpreparationModule, \
+                                               SortParangModule
+from pynpoint.util.tests import create_config, create_star_data, create_ifs_data, remove_test_data
 
 
 class TestPsfPreparation:
@@ -18,13 +19,14 @@ class TestPsfPreparation:
         self.test_dir = os.path.dirname(__file__) + '/'
 
         create_star_data(self.test_dir+'prep')
+        create_ifs_data(self.test_dir+'prep_ifs')
         create_config(self.test_dir+'PynPoint_config.ini')
 
         self.pipeline = Pypeline(self.test_dir, self.test_dir, self.test_dir)
 
     def teardown_class(self) -> None:
 
-        remove_test_data(self.test_dir, folders=['prep'])
+        remove_test_data(self.test_dir, folders=['prep', 'prep_ifs'])
 
     def test_read_data(self) -> None:
 
@@ -39,6 +41,18 @@ class TestPsfPreparation:
         assert np.sum(data) == pytest.approx(105.54278879805277, rel=self.limit, abs=0.)
         assert data.shape == (10, 11, 11)
 
+        module = FitsReadingModule(name_in='read_ifs',
+                                   image_tag='read_ifs',
+                                   input_dir=self.test_dir+'prep_ifs',
+                                   ifs_data=True)
+
+        self.pipeline.add_module(module)
+        self.pipeline.run_module('read_ifs')
+
+        data = self.pipeline.get_data('read_ifs')
+        assert np.sum(data) == pytest.approx(749.8396528807369, rel=self.limit, abs=0.)
+        assert data.shape == (3, 10, 21, 21)
+
     def test_angle_interpolation(self) -> None:
 
         module = AngleInterpolationModule(name_in='angle1',
@@ -74,7 +88,7 @@ class TestPsfPreparation:
         self.pipeline.run_module('angle2')
 
         data = self.pipeline.get_data('header_read/PARANG')
-        assert np.sum(data) == pytest.approx(-550.2338300130718, rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(-550.2338288730655, rel=self.limit, abs=0.)
         assert data.shape == (10, )
 
         self.pipeline.set_attribute('read', 'RA', (60000.0, 60000.0, 60000.0, 60000.0),
@@ -107,7 +121,7 @@ class TestPsfPreparation:
             assert warning[1].message.args[0] == warning_1
 
         data = self.pipeline.get_data('header_read/PARANG')
-        assert np.sum(data) == pytest.approx(1704.220236104952, rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(1704.2202372447628, rel=self.limit, abs=0.)
         assert data.shape == (10, )
 
         module = AngleCalculationModule(instrument='SPHERE/IFS',
@@ -137,9 +151,52 @@ class TestPsfPreparation:
             assert warning[2].message.args[0] == warning_2
 
         data = self.pipeline.get_data('header_read/PARANG')
-        assert np.sum(data) == pytest.approx(-890.8506520762833, rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(-890.8506509366362, rel=self.limit, abs=0.)
         assert data.shape == (10, )
 
+    def test_angle_sort(self) -> None:
+
+        index = self.pipeline.get_data('header_read/INDEX')
+        self.pipeline.set_attribute('read', 'INDEX', index[::-1], static=False)
+
+        module = SortParangModule(name_in='sort1',
+                                  image_in_tag='read',
+                                  image_out_tag='read_sorted')
+
+        self.pipeline.add_module(module)
+        self.pipeline.run_module('sort1')
+        self.pipeline.set_attribute('read', 'INDEX', index, static=False)
+
+        parang = self.pipeline.get_data('header_read/PARANG')[::-1]
+        parang_sort = self.pipeline.get_data('header_read_sorted/PARANG')
+        assert np.sum(parang) == pytest.approx(np.sum(parang_sort), rel=self.limit, abs=0.)
+
+        parang_set = [0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]
+        self.pipeline.set_attribute('read_ifs', 'PARANG', parang_set, static=False)
+
+        data = self.pipeline.get_data('read_sorted')
+        assert np.sum(data[0]) == pytest.approx(9.71156815235485, rel=self.limit, abs=0.)
+
+    def test_angle_sort_ifs(self) -> None:
+
+        index = self.pipeline.get_data('header_read_ifs/INDEX')
+        self.pipeline.set_attribute('read_ifs', 'INDEX', index[::-1], static=False)
+
+        module = SortParangModule(name_in='sort2',
+                                  image_in_tag='read_ifs',
+                                  image_out_tag='read_ifs_sorted')
+
+        self.pipeline.add_module(module)
+        self.pipeline.run_module('sort2')
+        self.pipeline.set_attribute('read_ifs', 'INDEX', index, static=False)
+
+        parang = self.pipeline.get_data('header_read_ifs/PARANG')[::-1]
+        parang_sort = self.pipeline.get_data('header_read_ifs_sorted/PARANG')
+        assert np.sum(parang) == pytest.approx(np.sum(parang_sort), rel=self.limit, abs=0.)
+
+        data = self.pipeline.get_data('read_ifs_sorted')
+        assert np.sum(data[0, 0]) == pytest.approx(21.185139976163477, rel=self.limit, abs=0.)
+
     def test_angle_interpolation_mismatch(self) -> None:
 
         self.pipeline.set_attribute('read', 'NDIT', [9, 9, 9, 9], static=False)
@@ -219,6 +276,23 @@ class TestPsfPreparation:
         assert np.sum(data) == pytest.approx(105.54278879805277, rel=self.limit, abs=0.)
         assert data.shape == (10, 11, 11)
 
+    def test_psf_preparation_sdi(self) -> None:
+
+        module = PSFpreparationModule(name_in='prep4',
+                                      image_in_tag='read_ifs',
+                                      image_out_tag='prep4',
+                                      mask_out_tag=None,
+                                      norm=False,
+                                      cent_size=None,
+                                      edge_size=None)
+
+        self.pipeline.add_module(module)
+        self.pipeline.run_module('prep4')
+
+        data = self.pipeline.get_data('prep4')
+        assert np.sum(data) == pytest.approx(749.8396528807369, rel=self.limit, abs=0.)
+        assert data.shape == (3, 10, 21, 21)
+
     def test_sdi_preparation(self) -> None:
 
         module = SDIpreparationModule(name_in='sdi',
diff --git a/tests/test_processing/test_psfsubtraction.py b/tests/test_processing/test_psfsubtraction_adi.py
similarity index 98%
rename from tests/test_processing/test_psfsubtraction.py
rename to tests/test_processing/test_psfsubtraction_adi.py
index f40e9f2..f0b9f18 100644
--- a/tests/test_processing/test_psfsubtraction.py
+++ b/tests/test_processing/test_psfsubtraction_adi.py
@@ -11,7 +11,7 @@ from pynpoint.processing.psfsubtraction import PcaPsfSubtractionModule, Classica
 from pynpoint.util.tests import create_config, create_fake_data, remove_test_data
 
 
-class TestPsfSubtraction:
+class TestPsfSubtractionAdi:
 
     def setup_class(self) -> None:
 
@@ -193,7 +193,7 @@ class TestPsfSubtraction:
         self.pipeline.run_module('pca_no_mean')
 
         data = self.pipeline.get_data('res_mean_no_mean')
-        assert np.sum(data) == pytest.approx(0.0005733657032555452, rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(0.0006081272007585688, rel=self.limit, abs=0.)
         assert data.shape == (2, 21, 21)
 
         data = self.pipeline.get_data('basis_no_mean')
@@ -219,7 +219,7 @@ class TestPsfSubtraction:
         self.pipeline.run_module('pca_ref')
 
         data = self.pipeline.get_data('res_mean_ref')
-        assert np.sum(data) == pytest.approx(0.0005868283126528002, rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(0.0006330226118859073, rel=self.limit, abs=0.)
         assert data.shape == (2, 21, 21)
 
         data = self.pipeline.get_data('basis_ref')
@@ -245,7 +245,7 @@ class TestPsfSubtraction:
         self.pipeline.run_module('pca_ref_no_mean')
 
         data = self.pipeline.get_data('res_mean_ref_no_mean')
-        assert np.sum(data) == pytest.approx(0.0005733657032555494, rel=self.limit, abs=0.)
+        assert np.sum(data) == pytest.approx(0.0006081272007585764, rel=self.limit, abs=0.)
         assert data.shape == (2, 21, 21)
 
         data = self.pipeline.get_data('basis_ref_no_mean')
@@ -282,9 +282,9 @@ class TestPsfSubtraction:
         assert np.sum(data) == pytest.approx(0.06014309988789256, rel=self.limit, abs=0.)
         assert data.shape == (2, 21, 21)
 
-        # data = self.pipeline.get_data('res_clip_single_mask')
+        data = self.pipeline.get_data('res_clip_single_mask')
         # assert np.sum(data) == pytest.approx(9.35120662148806e-05, rel=self.limit, abs=0.)
-        # assert data.shape == (2, 21, 21)
+        assert data.shape == (2, 21, 21)
 
         data = self.pipeline.get_data('res_arr_single_mask1')
         assert np.sum(data) == pytest.approx(0.0006170872862547557, rel=self.limit, abs=0.)
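The updated expected sums in this file are all compared with a purely relative tolerance. A small sketch of how `pytest.approx` behaves with `abs=0.` (assuming pytest is importable; `limit` stands in for `self.limit` and `value` is one of the updated sums above):

```python
import pytest

limit = 1e-10                   # stands in for self.limit in these tests
value = 0.0006081272007585688   # one of the updated expected sums

# with abs=0. the absolute floor is disabled, so the allowed deviation
# is rel * |expected| only
assert value * (1. + 1e-12) == pytest.approx(value, rel=limit, abs=0.)
assert not value * (1. + 1e-6) == pytest.approx(value, rel=limit, abs=0.)
```

Disabling the absolute floor matters here because the residual sums are close to zero, where the default absolute tolerance of `pytest.approx` would otherwise dominate the comparison.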
diff --git a/tests/test_processing/test_psfsubtraction_sdi.py b/tests/test_processing/test_psfsubtraction_sdi.py
new file mode 100644
index 0000000..0daf535
--- /dev/null
+++ b/tests/test_processing/test_psfsubtraction_sdi.py
@@ -0,0 +1,151 @@
+import os
+import h5py
+
+import pytest
+import numpy as np
+
+from pynpoint.core.pypeline import Pypeline
+from pynpoint.readwrite.fitsreading import FitsReadingModule
+from pynpoint.processing.psfsubtraction import PcaPsfSubtractionModule
+from pynpoint.util.tests import create_config, create_ifs_data, remove_test_data
+
+
+class TestPsfSubtractionSdi:
+
+    def setup_class(self) -> None:
+
+        self.limit = 1e-5
+        self.test_dir = os.path.dirname(__file__) + '/'
+
+        create_ifs_data(self.test_dir+'science')
+        create_config(self.test_dir+'PynPoint_config.ini')
+
+        self.pipeline = Pypeline(self.test_dir, self.test_dir, self.test_dir)
+
+    def teardown_class(self) -> None:
+
+        remove_test_data(self.test_dir, folders=['science'])
+
+    def test_read_data(self) -> None:
+
+        module = FitsReadingModule(name_in='read',
+                                   image_tag='science',
+                                   input_dir=self.test_dir+'science',
+                                   ifs_data=True)
+
+        self.pipeline.add_module(module)
+        self.pipeline.run_module('read')
+
+        data = self.pipeline.get_data('science')
+        assert np.sum(data) == pytest.approx(749.8396528807368, rel=self.limit, abs=0.)
+        assert data.shape == (3, 10, 21, 21)
+
+        self.pipeline.set_attribute('science', 'WAVELENGTH', [1., 1.1, 1.2], static=False)
+        self.pipeline.set_attribute('science', 'PARANG', np.linspace(0., 180., 10), static=False)
+
+    def test_psf_subtraction_sdi(self) -> None:
+
+        processing_types = ['ADI', 'SDI+ADI', 'ADI+SDI']
+
+        expected = [[-0.16718942968552664, -0.790697125718532,
+                     19.507979777136892, -0.21617058715490922],
+                    [-0.001347198747121658, -0.08621264803633322,
+                     2.3073192270025333, -0.010269745733878437],
+                    [0.009450917836998779, -0.05776205365084376,
+                     -0.43506678222476264, 0.0058856438951644455]]
+
+        shape_expc = [(2, 3, 21, 21), (2, 2, 3, 21, 21), (1, 1, 3, 21, 21)]
+
+        pca_numbers = [range(1, 3), (range(1, 3), range(1, 3)), ([1], [1])]
+
+        res_arr_tags = [None, None, 'res_arr_single_sdi_ADI+SDI']
+
+        for i, p_type in enumerate(processing_types):
+
+            module = PcaPsfSubtractionModule(pca_numbers=pca_numbers[i],
+                                             name_in='pca_single_sdi_'+p_type,
+                                             images_in_tag='science',
+                                             reference_in_tag='science',
+                                             res_mean_tag='res_mean_single_sdi_'+p_type,
+                                             res_median_tag='res_median_single_sdi_'+p_type,
+                                             res_weighted_tag='res_weighted_single_sdi_'+p_type,
+                                             res_rot_mean_clip_tag='res_clip_single_sdi_'+p_type,
+                                             res_arr_out_tag=res_arr_tags[i],
+                                             basis_out_tag='basis_single_sdi_'+p_type,
+                                             extra_rot=0.,
+                                             subtract_mean=True,
+                                             processing_type=p_type)
+
+            self.pipeline.add_module(module)
+            self.pipeline.run_module('pca_single_sdi_'+p_type)
+
+            data = self.pipeline.get_data('res_mean_single_sdi_'+p_type)
+            assert np.sum(data) == pytest.approx(expected[i][0], rel=self.limit, abs=0.)
+            assert data.shape == shape_expc[i]
+
+            data = self.pipeline.get_data('res_median_single_sdi_'+p_type)
+            assert np.sum(data) == pytest.approx(expected[i][1], rel=self.limit, abs=0.)
+            assert data.shape == shape_expc[i]
+
+            data = self.pipeline.get_data('res_weighted_single_sdi_'+p_type)
+            assert np.sum(data) == pytest.approx(expected[i][2], rel=self.limit, abs=0.)
+            assert data.shape == shape_expc[i]
+
+            data = self.pipeline.get_data('res_clip_single_sdi_'+p_type)
+#            assert np.sum(data) == pytest.approx(expected[i][3], rel=self.limit, abs=0.)
+            assert data.shape == shape_expc[i]
+
+            # data = self.pipeline.get_data('basis_single_sdi_'+p_type)
+            # assert np.sum(data) == pytest.approx(-1.3886119555248766, rel=self.limit, abs=0.)
+            # assert data.shape == (5, 30, 30)
+
+    def test_psf_subtraction_sdi_multi(self) -> None:
+
+        with h5py.File(self.test_dir+'PynPoint_database.hdf5', 'a') as hdf_file:
+            hdf_file['config'].attrs['CPU'] = 4
+
+        processing_types = ['SDI', 'ADI+SDI']
+
+        pca_numbers = [range(1, 3), (range(1, 3), range(1, 3))]
+
+        expected = [[-0.004159475403024583, 0.02613693149969979,
+                     -0.12940723035023394, -0.008432530081399985],
+                    [-0.006580571531064533, -0.08171546066331437,
+                     0.5700432018961117, -0.014527353460544753]]
+
+        shape_expc = [(2, 3, 21, 21), (2, 2, 3, 21, 21)]
+
+        for i, p_type in enumerate(processing_types):
+
+            module = PcaPsfSubtractionModule(pca_numbers=pca_numbers[i],
+                                             name_in='pca_multi_sdi_'+p_type,
+                                             images_in_tag='science',
+                                             reference_in_tag='science',
+                                             res_mean_tag='res_mean_multi_sdi_'+p_type,
+                                             res_median_tag='res_median_multi_sdi_'+p_type,
+                                             res_weighted_tag='res_weighted_multi_sdi_'+p_type,
+                                             res_rot_mean_clip_tag='res_clip_multi_sdi_'+p_type,
+                                             res_arr_out_tag=None,
+                                             basis_out_tag=None,
+                                             extra_rot=0.,
+                                             subtract_mean=True,
+                                             processing_type=p_type)
+
+            self.pipeline.add_module(module)
+            self.pipeline.run_module('pca_multi_sdi_'+p_type)
+
+            data = self.pipeline.get_data('res_mean_multi_sdi_'+p_type)
+            assert np.sum(data) == pytest.approx(expected[i][0], rel=self.limit, abs=0.)
+            assert data.shape == shape_expc[i]
+
+            data = self.pipeline.get_data('res_median_multi_sdi_'+p_type)
+            assert np.sum(data) == pytest.approx(expected[i][1], rel=self.limit, abs=0.)
+            assert data.shape == shape_expc[i]
+
+            data = self.pipeline.get_data('res_weighted_multi_sdi_'+p_type)
+            assert np.sum(data) == pytest.approx(expected[i][2], rel=self.limit, abs=0.)
+            assert data.shape == shape_expc[i]
+
+            data = self.pipeline.get_data('res_clip_multi_sdi_'+p_type)
+#            assert np.sum(data) == pytest.approx(expected[i][3], rel=self.limit, abs=0.)
+            assert data.shape == shape_expc[i]
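`test_psf_subtraction_sdi_multi` switches the pipeline to multiprocessing by writing the `CPU` attribute directly into the `config` group of the HDF5 database. A standalone sketch of that pattern (hypothetical temporary path; `require_group` only makes the snippet runnable without an existing database):

```python
import os
import tempfile

import h5py

# hypothetical location; the tests use PynPoint_database.hdf5 in the test dir
db_path = os.path.join(tempfile.mkdtemp(), 'PynPoint_database.hdf5')

# 'a' mode creates the file if needed; the CPU count is a group attribute
with h5py.File(db_path, 'a') as hdf_file:
    hdf_file.require_group('config').attrs['CPU'] = 4

# modules started afterwards read the updated value from the database
with h5py.File(db_path, 'r') as hdf_file:
    cpu = int(hdf_file['config'].attrs['CPU'])
```

Editing the attribute in place, rather than recreating the config file, lets the multiprocessing test reuse the database populated by the earlier single-process tests.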
diff --git a/tests/test_processing/test_resizing.py b/tests/test_processing/test_resizing.py
index ac2b21d..a7fee7e 100644
--- a/tests/test_processing/test_resizing.py
+++ b/tests/test_processing/test_resizing.py
@@ -8,7 +8,7 @@ from pynpoint.core.pypeline import Pypeline
 from pynpoint.readwrite.fitsreading import FitsReadingModule
 from pynpoint.processing.resizing import CropImagesModule, ScaleImagesModule, \
                                          AddLinesModule, RemoveLinesModule
-from pynpoint.util.tests import create_config, create_star_data, remove_test_data
+from pynpoint.util.tests import create_config, create_star_data, create_ifs_data, remove_test_data
 
 
 class TestResizing:
@@ -19,13 +19,14 @@ class TestResizing:
         self.test_dir = os.path.dirname(__file__) + '/'
 
         create_star_data(self.test_dir+'resize')
+        create_ifs_data(self.test_dir+'resize_ifs')
         create_config(self.test_dir+'PynPoint_config.ini')
 
         self.pipeline = Pypeline(self.test_dir, self.test_dir, self.test_dir)
 
     def teardown_class(self) -> None:
 
-        remove_test_data(self.test_dir, folders=['resize'])
+        remove_test_data(self.test_dir, folders=['resize', 'resize_ifs'])
 
     def test_read_data(self) -> None:
 
@@ -42,6 +43,20 @@ class TestResizing:
         assert np.sum(data) == pytest.approx(105.54278879805277, rel=self.limit, abs=0.)
         assert data.shape == (10, 11, 11)
 
+        module = FitsReadingModule(name_in='read_ifs',
+                                   image_tag='read_ifs',
+                                   input_dir=self.test_dir+'resize_ifs',
+                                   overwrite=True,
+                                   check=True,
+                                   ifs_data=True)
+
+        self.pipeline.add_module(module)
+        self.pipeline.run_module('read_ifs')
+
+        data = self.pipeline.get_data('read_ifs')
+        assert np.sum(data) == pytest.approx(749.8396528807369, rel=self.limit, abs=0.)
+        assert data.shape == (3, 10, 21, 21)
+
     def test_crop_images(self) -> None:
 
         module = CropImagesModule(size=0.2,
@@ -62,6 +77,15 @@ class TestResizing:
         self.pipeline.add_module(module)
         self.pipeline.run_module('crop2')
 
+        module = CropImagesModule(size=0.2,
+                                  center=(4, 4),
+                                  name_in='crop_ifs',
+                                  image_in_tag='read_ifs',
+                                  image_out_tag='crop_ifs')
+
+        self.pipeline.add_module(module)
+        self.pipeline.run_module('crop_ifs')
+
         data = self.pipeline.get_data('crop1')
         assert np.sum(data) == pytest.approx(104.93318507061295, rel=self.limit, abs=0.)
         assert data.shape == (10, 9, 9)
@@ -70,20 +94,26 @@ class TestResizing:
         assert np.sum(data) == pytest.approx(105.64863165433025, rel=self.limit, abs=0.)
         assert data.shape == (10, 9, 9)
 
+        data = self.pipeline.get_data('crop_ifs')
+        assert np.sum(data) == pytest.approx(15.870936600122521, rel=self.limit, abs=0.)
+        assert data.shape == (3, 10, 9, 9)
+
     def test_scale_images(self) -> None:
 
-        module = ScaleImagesModule(scaling=(2., 2., None),
-                                   name_in='scale1',
+        module = ScaleImagesModule(name_in='scale1',
                                    image_in_tag='read',
-                                   image_out_tag='scale1')
+                                   image_out_tag='scale1',
+                                   scaling=(2., 2., None),
+                                   pixscale=True)
 
         self.pipeline.add_module(module)
         self.pipeline.run_module('scale1')
 
-        module = ScaleImagesModule(scaling=(None, None, 2.),
-                                   name_in='scale2',
+        module = ScaleImagesModule(name_in='scale2',
                                    image_in_tag='read',
-                                   image_out_tag='scale2')
+                                   image_out_tag='scale2',
+                                   scaling=(None, None, 2.),
+                                   pixscale=True)
 
         self.pipeline.add_module(module)
         self.pipeline.run_module('scale2')
@@ -96,6 +126,15 @@ class TestResizing:
         assert np.sum(data) == pytest.approx(211.08557759610554, rel=self.limit, abs=0.)
         assert data.shape == (10, 11, 11)
 
+        attr = self.pipeline.get_attribute('read', 'PIXSCALE', static=True)
+        assert attr == pytest.approx(0.027, rel=self.limit, abs=0.)
+
+        attr = self.pipeline.get_attribute('scale1', 'PIXSCALE', static=True)
+        assert attr == pytest.approx(0.0135, rel=self.limit, abs=0.)
+
+        attr = self.pipeline.get_attribute('scale2', 'PIXSCALE', static=True)
+        assert attr == pytest.approx(0.027, rel=self.limit, abs=0.)
+
     def test_add_lines(self) -> None:
 
         module = AddLinesModule(lines=(2, 5, 0, 3),
diff --git a/tests/test_processing/test_stacksubsample.py b/tests/test_processing/test_stacksubsample.py
index edbd95e..bf5052b 100644
--- a/tests/test_processing/test_stacksubsample.py
+++ b/tests/test_processing/test_stacksubsample.py
@@ -7,7 +7,7 @@ from pynpoint.core.pypeline import Pypeline
 from pynpoint.readwrite.fitsreading import FitsReadingModule
 from pynpoint.processing.stacksubset import StackAndSubsetModule, StackCubesModule, \
                                             DerotateAndStackModule, CombineTagsModule
-from pynpoint.util.tests import create_config, create_star_data, remove_test_data
+from pynpoint.util.tests import create_config, create_star_data, create_ifs_data, remove_test_data
 
 
 class TestStackSubset:
@@ -17,6 +17,7 @@ class TestStackSubset:
         self.limit = 1e-10
         self.test_dir = os.path.dirname(__file__) + '/'
 
+        create_ifs_data(self.test_dir+'data_ifs')
         create_star_data(self.test_dir+'data')
         create_star_data(self.test_dir+'extra')
 
@@ -26,7 +27,7 @@ class TestStackSubset:
 
     def teardown_class(self) -> None:
 
-        remove_test_data(self.test_dir, folders=['data', 'extra'])
+        remove_test_data(self.test_dir, folders=['data_ifs', 'extra', 'data'])
 
     def test_read_data(self) -> None:
 
@@ -55,6 +56,21 @@ class TestStackSubset:
         extra = self.pipeline.get_data('extra')
         assert data == pytest.approx(extra, rel=self.limit, abs=0.)
 
+        module = FitsReadingModule(name_in='read_ifs',
+                                   image_tag='images_ifs',
+                                   input_dir=self.test_dir+'data_ifs',
+                                   overwrite=True,
+                                   check=True,
+                                   ifs_data=True)
+
+        self.pipeline.add_module(module)
+        self.pipeline.run_module('read_ifs')
+        self.pipeline.set_attribute('images_ifs', 'PARANG', np.linspace(0., 180., 10), static=False)
+
+        data = self.pipeline.get_data('images_ifs')
+        assert np.sum(data) == pytest.approx(749.8396528807369, rel=self.limit, abs=0.)
+        assert data.shape == (3, 10, 21, 21)
+
     def test_stack_and_subset(self) -> None:
 
         self.pipeline.set_attribute('images', 'PARANG', np.arange(10.), static=False)
@@ -173,6 +189,55 @@ class TestStackSubset:
         assert np.mean(data) == pytest.approx(0.0861160094566323, rel=self.limit, abs=0.)
         assert data.shape == (1, 11, 11)
 
+        data = self.pipeline.get_data('derotate2')
+        assert np.mean(data) == pytest.approx(0.0861160094566323, rel=self.limit, abs=0.)
+        assert data.shape == (1, 11, 11)
+
+        module = DerotateAndStackModule(name_in='derotate_ifs1',
+                                        image_in_tag='images_ifs',
+                                        image_out_tag='derotate_ifs1',
+                                        derotate=True,
+                                        stack='mean',
+                                        extra_rot=0.,
+                                        dimension='time')
+
+        self.pipeline.add_module(module)
+        self.pipeline.run_module('derotate_ifs1')
+
+        data = self.pipeline.get_data('derotate_ifs1')
+        assert np.mean(data) == pytest.approx(0.1884438996655355, rel=self.limit, abs=0.)
+        assert data.shape == (3, 1, 21, 21)
+
+        module = DerotateAndStackModule(name_in='derotate_ifs2',
+                                        image_in_tag='images_ifs',
+                                        image_out_tag='derotate_ifs2',
+                                        derotate=False,
+                                        stack='median',
+                                        extra_rot=0.,
+                                        dimension='wavelength')
+
+        self.pipeline.add_module(module)
+        self.pipeline.run_module('derotate_ifs2')
+
+        data = self.pipeline.get_data('derotate_ifs2')
+        assert np.mean(data) == pytest.approx(0.055939644983170146, rel=self.limit, abs=0.)
+        assert data.shape == (1, 10, 21, 21)
+
+        module = DerotateAndStackModule(name_in='derotate_ifs3',
+                                        image_in_tag='images_ifs',
+                                        image_out_tag='derotate_ifs3',
+                                        derotate=True,
+                                        stack=None,
+                                        extra_rot=0.,
+                                        dimension='wavelength')
+
+        self.pipeline.add_module(module)
+        self.pipeline.run_module('derotate_ifs3')
+
+        data = self.pipeline.get_data('derotate_ifs3')
+        assert np.mean(data) == pytest.approx(0.05653316989966066, rel=self.limit, abs=0.)
+        assert data.shape == (3, 10, 21, 21)
+
     def test_combine_tags(self) -> None:
 
         module = CombineTagsModule(image_in_tags=['images', 'extra'],

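The `dimension` argument of DerotateAndStackModule selects which axis of the (wavelength, time, y, x) cube is collapsed, which is what the shape assertions in the stacking tests above check. The shape bookkeeping (only that, not the derotation itself) can be reproduced with plain numpy:

```python
import numpy as np

cube = np.ones((3, 10, 21, 21))  # (wavelength, time, y, x), as in the tests

# dimension='time', stack='mean': one stacked frame per wavelength
stack_time = np.mean(cube, axis=1, keepdims=True)
assert stack_time.shape == (3, 1, 21, 21)

# dimension='wavelength', stack='median': one stacked frame per time step
stack_wav = np.median(cube, axis=0, keepdims=True)
assert stack_wav.shape == (1, 10, 21, 21)
```

With `stack=None` no axis is collapsed, which matches the unchanged (3, 10, 21, 21) shape asserted for `derotate_ifs3`.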
Debdiff

[The following lists of changes regard files as different if they have different names, permissions or owners.]

Files in second set of .debs but not in first

-rw-r--r--  root/root   /usr/lib/python3/dist-packages/pynpoint-0.10.0.egg-info/PKG-INFO
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/pynpoint-0.10.0.egg-info/dependency_links.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/pynpoint-0.10.0.egg-info/not-zip-safe
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/pynpoint-0.10.0.egg-info/requires.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/pynpoint-0.10.0.egg-info/top_level.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/pynpoint/util/apply_func.py
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/pynpoint/util/postproc.py
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/pynpoint/util/sdi.py
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/pynpoint/util/type_aliases.py

Files in first set of .debs but not in second

-rw-r--r--  root/root   /usr/lib/python3/dist-packages/pynpoint-0.8.3.egg-info/PKG-INFO
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/pynpoint-0.8.3.egg-info/dependency_links.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/pynpoint-0.8.3.egg-info/not-zip-safe
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/pynpoint-0.8.3.egg-info/requires.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/pynpoint-0.8.3.egg-info/top_level.txt
-rw-r--r--  root/root   /usr/lib/python3/dist-packages/pynpoint/util/types.py

Control files: lines which differ (wdiff format)

  • Maintainer: Debian Astronomy Maintainers  
