New upstream version 0.34.1.1
Andrej Shadura
12 | 12 | machine: true |
13 | 13 | steps: |
14 | 14 | - checkout |
15 | - run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:${CIRCLE_SHA1} . | |
16 | - run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:${CIRCLE_SHA1}-py3 --build-arg PYTHON_VERSION=3.6 . | |
15 | - run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest . | |
16 | - run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest-py3 --build-arg PYTHON_VERSION=3.6 . | |
17 | 17 | - run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD |
18 | 18 | - run: docker push matrixdotorg/synapse:latest |
19 | 19 | - run: docker push matrixdotorg/synapse:latest-py3 |
2 | 2 | <!-- Please read CONTRIBUTING.rst before submitting your pull request --> |
3 | 3 | |
4 | 4 | * [ ] Pull request is based on the develop branch |
5 | * [ ] Pull request includes a [changelog file](CONTRIBUTING.rst#changelog) | |
6 | * [ ] Pull request includes a [sign off](CONTRIBUTING.rst#sign-off) | |
5 | * [ ] Pull request includes a [changelog file](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.rst#changelog) | |
6 | * [ ] Pull request includes a [sign off](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.rst#sign-off) |
64 | 64 | * Docker packaging |
65 | 65 | |
66 | 66 | Serban Constantin <serban.constantin at gmail dot com> |
67 | * Small bug fix⏎ | |
67 | * Small bug fix | |
68 | ||
69 | Jason Robinson <jasonr at matrix.org> | |
70 | * Minor fixes |
0 | Synapse 0.34.1.1 (2019-01-11) | |
1 | ============================= | |
2 | ||
3 | This release fixes CVE-2019-5885 and is recommended for all users of Synapse 0.34.1. | |
4 | ||
5 | This release is compatible with Python 2.7 and 3.5+. Python 3.7 is fully supported. | |
6 | ||
7 | Bugfixes | |
8 | -------- | |
9 | ||
10 | - Fix spontaneous logout on upgrade | |
11 | ([\#4374](https://github.com/matrix-org/synapse/issues/4374)) | |
12 | ||
13 | ||
14 | Synapse 0.34.1 (2019-01-09) | |
15 | =========================== | |
16 | ||
17 | Internal Changes | |
18 | ---------------- | |
19 | ||
20 | - Add better logging for unexpected errors while sending transactions ([\#4361](https://github.com/matrix-org/synapse/issues/4361), [\#4362](https://github.com/matrix-org/synapse/issues/4362)) | |
21 | ||
22 | ||
23 | Synapse 0.34.1rc1 (2019-01-08) | |
24 | ============================== | |
25 | ||
26 | Features | |
27 | -------- | |
28 | ||
29 | - Special-case a support user for use in verifying behaviour of a given server. The support user does not appear in user directory or monthly active user counts. ([\#4141](https://github.com/matrix-org/synapse/issues/4141), [\#4344](https://github.com/matrix-org/synapse/issues/4344)) | |
30 | - Support for serving .well-known files ([\#4262](https://github.com/matrix-org/synapse/issues/4262)) | |
31 | - Rework SAML2 authentication ([\#4265](https://github.com/matrix-org/synapse/issues/4265), [\#4267](https://github.com/matrix-org/synapse/issues/4267)) | |
32 | - SAML2 authentication: Initialise user display name from SAML2 data ([\#4272](https://github.com/matrix-org/synapse/issues/4272)) | |
33 | - Synapse can now have its conditional/extra dependencies installed by pip. This functionality can be used via `pip install matrix-synapse[feature]`, where `feature` is a comma-separated list with the possible values `email.enable_notifs`, `matrix-synapse-ldap3`, `postgres`, `resources.consent`, `saml2`, `url_preview`, and `test`. If you want to install all optional dependencies, you can use "all" instead. ([\#4298](https://github.com/matrix-org/synapse/issues/4298), [\#4325](https://github.com/matrix-org/synapse/issues/4325), [\#4327](https://github.com/matrix-org/synapse/issues/4327)) | |
34 | - Add routes for reading account data. ([\#4303](https://github.com/matrix-org/synapse/issues/4303)) | |
35 | - Add opt-in support for v2 rooms ([\#4307](https://github.com/matrix-org/synapse/issues/4307)) | |
36 | - Add a script to generate a clean config file ([\#4315](https://github.com/matrix-org/synapse/issues/4315)) | |
37 | - Return server data in /login response ([\#4319](https://github.com/matrix-org/synapse/issues/4319)) | |
38 | ||
39 | ||
40 | Bugfixes | |
41 | -------- | |
42 | ||
43 | - Fix `contains_url` check to be consistent with other instances in the code-base, and check that the value is an instance of string. ([\#3405](https://github.com/matrix-org/synapse/issues/3405)) | |
44 | - Fix CAS login when username is not valid in an MXID ([\#4264](https://github.com/matrix-org/synapse/issues/4264)) | |
45 | - Send CORS headers for /media/config ([\#4279](https://github.com/matrix-org/synapse/issues/4279)) | |
46 | - Add 'sandbox' to CSP for media repository ([\#4284](https://github.com/matrix-org/synapse/issues/4284)) | |
47 | - Make the new landing page prettier. ([\#4294](https://github.com/matrix-org/synapse/issues/4294)) | |
48 | - Fix deleting E2E room keys when using old SQLite versions. ([\#4295](https://github.com/matrix-org/synapse/issues/4295)) | |
49 | - The metric synapse_admin_mau:current previously did not update when config.mau_stats_only was set to True ([\#4305](https://github.com/matrix-org/synapse/issues/4305)) | |
50 | - Fixed per-room account data filters ([\#4309](https://github.com/matrix-org/synapse/issues/4309)) | |
51 | - Fix indentation in default config ([\#4313](https://github.com/matrix-org/synapse/issues/4313)) | |
52 | - Fix synapse:latest docker upload ([\#4316](https://github.com/matrix-org/synapse/issues/4316)) | |
53 | - Fix test_metric.py compatibility with prometheus_client 0.5. Contributed by Maarten de Vries <maarten@de-vri.es>. ([\#4317](https://github.com/matrix-org/synapse/issues/4317)) | |
54 | - Avoid packaging _trial_temp directory in -py3 debian packages ([\#4326](https://github.com/matrix-org/synapse/issues/4326)) | |
55 | - Check jinja version for consent resource ([\#4327](https://github.com/matrix-org/synapse/issues/4327)) | |
56 | - Fix NPE in /messages by checking if all events were filtered out ([\#4330](https://github.com/matrix-org/synapse/issues/4330)) | |
57 | - Fix `python -m synapse.config` on Python 3. ([\#4356](https://github.com/matrix-org/synapse/issues/4356)) | |
58 | ||
59 | ||
60 | Deprecations and Removals | |
61 | ------------------------- | |
62 | ||
63 | - Remove the deprecated v1/register API on Python 2. It was never ported to Python 3. ([\#4334](https://github.com/matrix-org/synapse/issues/4334)) | |
64 | ||
65 | ||
66 | Internal Changes | |
67 | ---------------- | |
68 | ||
69 | - Getting URL previews of IP addresses no longer fails on Python 3. ([\#4215](https://github.com/matrix-org/synapse/issues/4215)) | |
70 | - Drop undocumented dependency on dateutil ([\#4266](https://github.com/matrix-org/synapse/issues/4266)) | |
71 | - Update the example systemd config to use a virtualenv ([\#4273](https://github.com/matrix-org/synapse/issues/4273)) | |
72 | - Update link to kernel DCO guide ([\#4274](https://github.com/matrix-org/synapse/issues/4274)) | |
73 | - Make isort tox check print diff when it fails ([\#4283](https://github.com/matrix-org/synapse/issues/4283)) | |
74 | - Log room_id in Unknown room errors ([\#4297](https://github.com/matrix-org/synapse/issues/4297)) | |
75 | - Documentation improvements for coturn setup. Contributed by Krithin Sitaram. ([\#4333](https://github.com/matrix-org/synapse/issues/4333)) | |
76 | - Update pull request template to use absolute links ([\#4341](https://github.com/matrix-org/synapse/issues/4341)) | |
77 | - Update README to not lie about required restart when updating TLS certificates ([\#4343](https://github.com/matrix-org/synapse/issues/4343)) | |
78 | - Update debian packaging for compatibility with transitional package ([\#4349](https://github.com/matrix-org/synapse/issues/4349)) | |
79 | - Fix command hint to generate a config file when trying to start without a config file ([\#4353](https://github.com/matrix-org/synapse/issues/4353)) | |
80 | - Add better logging for unexpected errors while sending transactions ([\#4358](https://github.com/matrix-org/synapse/issues/4358)) | |
81 | ||
82 | ||
0 | 83 | Synapse 0.34.0 (2018-12-20) |
1 | 84 | =========================== |
2 | 85 |
101 | 101 | In order to have a concrete record that your contribution is intentional |
102 | 102 | and you agree to license it under the same terms as the project's license, we've adopted the |
103 | 103 | same lightweight approach that the Linux Kernel |
104 | (https://www.kernel.org/doc/Documentation/SubmittingPatches), Docker | |
104 | `submitting patches process <https://www.kernel.org/doc/html/latest/process/submitting-patches.html#sign-your-work-the-developer-s-certificate-of-origin>`_, Docker | |
105 | 105 | (https://github.com/docker/docker/blob/master/CONTRIBUTING.md), and many other |
106 | 106 | projects use: the DCO (Developer Certificate of Origin: |
107 | 107 | http://developercertificate.org/). This is a simple declaration that you wrote |
85 | 85 | System requirements: |
86 | 86 | |
87 | 87 | - POSIX-compliant system (tested on Linux & OS X) |
88 | - Python 3.5, 3.6, or 2.7 | |
88 | - Python 3.5, 3.6, 3.7, or 2.7 | |
89 | 89 | - At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org |
90 | 90 | |
91 | 91 | Installing from source |
147 | 147 | source ~/synapse/env/bin/activate |
148 | 148 | pip install --upgrade pip |
149 | 149 | pip install --upgrade setuptools |
150 | pip install matrix-synapse | |
150 | pip install matrix-synapse[all] | |
151 | 151 | |
152 | 152 | This installs Synapse, along with the libraries it uses, into a virtual |
153 | 153 | environment under ``~/synapse/env``. Feel free to pick a different directory |
157 | 157 | update flag:: |
158 | 158 | |
159 | 159 | source ~/synapse/env/bin/activate |
160 | pip install -U matrix-synapse | |
160 | pip install -U matrix-synapse[all] | |
161 | 161 | |
162 | 162 | In case of problems, please see the `Troubleshooting`_ section below.
163 | 163 | |
724 | 724 | tell other servers how to find you. See `Setting up Federation`_. |
725 | 725 | |
726 | 726 | When updating the SSL certificate, just update the file pointed to by |
727 | ``tls_certificate_path``: there is no need to restart synapse. (You may like to | |
728 | use a symbolic link to help make this process atomic.) | |
727 | ``tls_certificate_path`` and then restart Synapse. (You may like to use a symbolic link | |
728 | to help make this process atomic.) | |
729 | 729 | |
730 | 730 | The most common mistake when setting up federation is not to tell Synapse about |
731 | 731 | your SSL certificate. To check it, you can visit |
825 | 825 | |
826 | 826 | virtualenv -p python2.7 env |
827 | 827 | source env/bin/activate |
828 | python -m synapse.python_dependencies | xargs pip install | |
829 | pip install lxml mock | |
828 | python -m pip install -e .[all] | |
830 | 829 | |
831 | 830 | This will download and install all the needed
832 | 831 | dependencies into a virtual env.
834 | 833 | Once this is done, you may wish to run Synapse's unit tests, to |
835 | 834 | check that everything is installed as it should be:: |
836 | 835 | |
837 | PYTHONPATH="." trial tests | |
836 | python -m twisted.trial tests | |
838 | 837 | |
839 | 838 | This should end with a 'PASSED' result:: |
840 | 839 |
36 | 36 | labels: |
37 | 37 | - traefik.enable=true |
38 | 38 | - traefik.frontend.rule=Host:my.matrix.Host |
39 | - traefik.port=8448 | |
39 | - traefik.port=8008 | |
40 | 40 | |
41 | 41 | db: |
42 | 42 | image: docker.io/postgres:10-alpine |
0 | # Example systemd configuration file for synapse. Copy into | |
1 | # /etc/systemd/system/, update the paths if necessary, then: | |
2 | # | |
3 | # systemctl enable matrix-synapse | |
4 | # systemctl start matrix-synapse | |
5 | # | |
6 | # This assumes that Synapse has been installed in a virtualenv in | |
7 | # /opt/synapse/env. | |
8 | # | |
9 | # **NOTE:** This is an example service file that may change in the future. If you | |
10 | # wish to use this please copy rather than symlink it. | |
11 | ||
12 | [Unit] | |
13 | Description=Synapse Matrix homeserver | |
14 | ||
15 | [Service] | |
16 | Type=simple | |
17 | Restart=on-abort | |
18 | ||
19 | User=synapse | |
20 | Group=nogroup | |
21 | ||
22 | WorkingDirectory=/opt/synapse | |
23 | ExecStart=/opt/synapse/env/bin/python -m synapse.app.homeserver --config-path=/opt/synapse/homeserver.yaml | |
24 | ||
25 | # adjust the cache factor if necessary | |
26 | # Environment=SYNAPSE_CACHE_FACTOR=2.0 | |
27 | ||
28 | [Install] | |
29 | WantedBy=multi-user.target | |
30 |
0 | # This assumes that Synapse has been installed as a system package | |
1 | # (e.g. https://www.archlinux.org/packages/community/any/matrix-synapse/ for ArchLinux) | |
2 | # rather than in a user home directory or similar under virtualenv. | |
3 | ||
4 | # **NOTE:** This is an example service file that may change in the future. If you | |
5 | # wish to use this please copy rather than symlink it. | |
6 | ||
7 | [Unit] | |
8 | Description=Synapse Matrix homeserver | |
9 | ||
10 | [Service] | |
11 | Type=simple | |
12 | User=synapse | |
13 | Group=synapse | |
14 | WorkingDirectory=/var/lib/synapse | |
15 | ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml | |
16 | ExecStop=/usr/bin/synctl stop /etc/synapse/homeserver.yaml | |
17 | # EnvironmentFile=-/etc/sysconfig/synapse # Can be used to e.g. set SYNAPSE_CACHE_FACTOR | |
18 | ||
19 | [Install] | |
20 | WantedBy=multi-user.target | |
21 |
0 | 0 | matrix-synapse-py3 (0.34.0) stable; urgency=medium |
1 | 1 | |
2 | 2 | matrix-synapse-py3 is intended as a drop-in replacement for the existing |
3 | matrix-synapse package. The replacement should be relatively seamless, | |
3 | matrix-synapse package. When the package is installed, matrix-synapse will be | |
4 | automatically uninstalled. The replacement should be relatively seamless, | |
4 | 5 | however, please note the following important differences to matrix-synapse: |
5 | 6 | |
6 | 7 | * Most importantly, the matrix-synapse service now runs under Python 3 rather |
8 | 9 | |
9 | 10 | * Synapse is installed into its own virtualenv (in /opt/venvs/matrix-synapse) |
10 | 11 | instead of using the system python libraries. (This may mean that you can |
11 | remove a number of old dependencies with `apt-get autoremove`). | |
12 | remove a number of old dependencies with `apt autoremove`). | |
13 | ||
14 | * If you have previously manually installed any custom python extensions | |
15 | (such as matrix-synapse-rest-auth) into the system python directories, you | |
16 | will need to reinstall them in the new virtualenv. Please consult the | |
17 | documentation of the relevant extensions for further details. | |
12 | 18 | |
13 | 19 | matrix-synapse-py3 will take over responsibility for the existing |
14 | 20 | configuration files, including the matrix-synapse systemd service. |
15 | 21 | |
16 | Beware, however, that `apt-get purge matrix-synapse` will *disable* the | |
22 | Beware, however, that `apt purge matrix-synapse` will *disable* the | |
17 | 23 | matrix-synapse service (so that it will not be started on reboot), even |
18 | 24 | though that service is no longer being provided by the matrix-synapse |
19 | 25 | package. It can be re-enabled with `systemctl enable matrix-synapse`. |
32 | 32 | --preinstall="lxml" \ |
33 | 33 | --preinstall="mock" \ |
34 | 34 | --extra-pip-arg="--no-cache-dir" \ |
35 | --extra-pip-arg="--compile" | |
35 | --extra-pip-arg="--compile" \ | |
36 | --extras="all" | |
36 | 37 | |
37 | 38 | # we copy the tests to a temporary directory so that we can put them on the |
38 | 39 | # PYTHONPATH without putting the uninstalled synapse on the pythonpath. |
40 | 41 | trap "rm -r $tmpdir" EXIT |
41 | 42 | |
42 | 43 | cp -r tests "$tmpdir" |
43 | cd debian/matrix-synapse-py3 | |
44 | 44 | |
45 | 45 | PYTHONPATH="$tmpdir" \ |
46 | ./opt/venvs/matrix-synapse/bin/python \ | |
46 | debian/matrix-synapse-py3/opt/venvs/matrix-synapse/bin/python \ | |
47 | 47 | -B -m twisted.trial --reporter=text -j2 tests |
0 | matrix-synapse-py3 (0.34.1+1) stable; urgency=medium | |
1 | ||
2 | * Remove 'Breaks: matrix-synapse-ldap3'. (matrix-synapse-py3 includes | |
3 | the matrix-synapse-ldap3 python files, which makes the | |
4 | matrix-synapse-ldap3 debian package redundant but not broken.) | |
5 | ||
6 | -- Synapse Packaging team <packages@matrix.org> Wed, 09 Jan 2019 15:30:00 +0000 | |
7 | ||
8 | matrix-synapse-py3 (0.34.1) stable; urgency=medium | |
9 | ||
10 | * New synapse release 0.34.1. | |
11 | * Update Conflicts specifications to allow installation alongside our | |
12 | matrix-synapse transitional package. | |
13 | ||
14 | -- Synapse Packaging team <packages@matrix.org> Wed, 09 Jan 2019 14:52:24 +0000 | |
15 | ||
0 | 16 | matrix-synapse-py3 (0.34.0) stable; urgency=medium |
1 | 17 | |
2 | 18 | * New synapse release 0.34.0. |
4 | 4 | Build-Depends: |
5 | 5 | debhelper (>= 9), |
6 | 6 | dh-systemd, |
7 | dh-virtualenv (>= 1.0), | |
7 | dh-virtualenv (>= 1.1), | |
8 | 8 | lsb-release, |
9 | 9 | python3-dev, |
10 | 10 | python3, |
12 | 12 | python3-pip, |
13 | 13 | python3-venv, |
14 | 14 | tar, |
15 | Standards-Version: 3.9.5 | |
15 | Standards-Version: 3.9.8 | |
16 | 16 | Homepage: https://github.com/matrix-org/synapse |
17 | 17 | |
18 | 18 | Package: matrix-synapse-py3 |
19 | 19 | Architecture: amd64 |
20 | Conflicts: matrix-synapse | |
20 | Provides: matrix-synapse | |
21 | Breaks: | |
22 | matrix-synapse (<< 0.34.0-0matrix2), | |
23 | matrix-synapse (>= 0.34.0-1), | |
21 | 24 | Pre-Depends: dpkg (>= 1.16.1) |
22 | 25 | Depends: |
23 | 26 | adduser, |
32 | 32 | |
33 | 33 | COPY . /synapse |
34 | 34 | RUN pip install --prefix="/install" --no-warn-script-location \ |
35 | lxml \ | |
36 | psycopg2 \ | |
37 | /synapse | |
35 | /synapse[all] | |
38 | 36 | |
39 | 37 | ### |
40 | 38 | ### Stage 1: runtime |
10 | 10 | |
11 | 11 | # Get the distro we want to pull from as a dynamic build variable |
12 | 12 | ARG distro="" |
13 | ||
14 | ### | |
15 | ### Stage 0: build a dh-virtualenv | |
16 | ### | |
17 | FROM ${distro} as builder | |
18 | ||
19 | RUN apt-get update -qq -o Acquire::Languages=none | |
20 | RUN env DEBIAN_FRONTEND=noninteractive apt-get install \ | |
21 | -yqq --no-install-recommends \ | |
22 | build-essential \ | |
23 | ca-certificates \ | |
24 | devscripts \ | |
25 | equivs \ | |
26 | wget | |
27 | ||
28 | # fetch and unpack the package | |
29 | RUN wget -q -O /dh-virtualenv-1.1.tar.gz https://github.com/spotify/dh-virtualenv/archive/1.1.tar.gz | |
30 | RUN tar xvf /dh-virtualenv-1.1.tar.gz | |
31 | ||
32 | # install its build deps | |
33 | RUN cd dh-virtualenv-1.1/ \ | |
34 | && env DEBIAN_FRONTEND=noninteractive mk-build-deps -ri -t "apt-get -yqq --no-install-recommends" | |
35 | ||
36 | # build it | |
37 | RUN cd dh-virtualenv-1.1 && dpkg-buildpackage -us -uc -b | |
38 | ||
39 | ### | |
40 | ### Stage 1 | |
41 | ### | |
13 | 42 | FROM ${distro} |
14 | 43 | |
15 | 44 | # Install the build dependencies |
20 | 49 | debhelper \ |
21 | 50 | devscripts \ |
22 | 51 | dh-systemd \ |
23 | dh-virtualenv \ | |
24 | equivs \ | |
25 | 52 | lsb-release \ |
26 | 53 | python3-dev \ |
27 | 54 | python3-pip \ |
28 | 55 | python3-setuptools \ |
29 | 56 | python3-venv \ |
30 | sqlite3 \ | |
31 | wget | |
57 | sqlite3 | |
58 | ||
59 | COPY --from=builder /dh-virtualenv_1.1-1_all.deb / | |
60 | RUN apt-get install -yq /dh-virtualenv_1.1-1_all.deb | |
32 | 61 | |
33 | 62 | WORKDIR /synapse/source |
34 | 63 | ENTRYPOINT ["bash","/synapse/source/docker/build_debian.sh"] |
4 | 4 | set -ex |
5 | 5 | |
6 | 6 | DIST=`lsb_release -c -s` |
7 | ||
8 | # We need to build a newer dh_virtualenv on older OSes like Xenial. | |
9 | if [ "$DIST" = 'xenial' ]; then | |
10 | mkdir -p /tmp/dhvenv | |
11 | cd /tmp/dhvenv | |
12 | wget https://github.com/spotify/dh-virtualenv/archive/1.1.tar.gz | |
13 | tar xvf 1.1.tar.gz | |
14 | cd dh-virtualenv-1.1/ | |
15 | env DEBIAN_FRONTEND=noninteractive mk-build-deps -ri -t "apt-get -yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io" | |
16 | dpkg-buildpackage -us -uc -b | |
17 | cd /tmp/dhvenv | |
18 | apt-get install -yqq ./dh-virtualenv_1.1-1_all.deb | |
19 | fi | |
20 | ||
21 | 7 | |
22 | 8 | # we get a read-only copy of the source: make a writeable copy |
23 | 9 | cp -aT /synapse/source /synapse/build |
13 | 13 | cd `dirname $0` |
14 | 14 | |
15 | 15 | if [ $# -lt 1 ]; then |
16 | DISTS=(debian:stretch debian:sid ubuntu:xenial ubuntu:bionic ubuntu:cosmic) | |
16 | DISTS=( | |
17 | debian:stretch | |
18 | debian:buster | |
19 | debian:sid | |
20 | ubuntu:xenial | |
21 | ubuntu:bionic | |
22 | ubuntu:cosmic | |
23 | ) | |
17 | 24 | else |
18 | 25 | DISTS=("$@") |
19 | 26 | fi |
38 | 38 | } |
39 | 39 | |
40 | 40 | The MAC is the hex digest output of the HMAC-SHA1 algorithm, with the key being |
41 | the shared secret and the content being the nonce, user, password, and either | |
42 | the string "admin" or "notadmin", each separated by NULs. For an example of | |
43 | generation in Python:: | |
41 | the shared secret and the content being the nonce, user, password, either the | |
42 | string "admin" or "notadmin", and optionally the user_type | |
43 | each separated by NULs. For an example of generation in Python:: | |
44 | 44 | |
45 | 45 | import hmac, hashlib |
46 | 46 | |
47 | def generate_mac(nonce, user, password, admin=False): | |
47 | def generate_mac(nonce, user, password, admin=False, user_type=None): | |
48 | 48 | |
49 | 49 | mac = hmac.new( |
50 | 50 | key=shared_secret, |
58 | 58 | mac.update(password.encode('utf8')) |
59 | 59 | mac.update(b"\x00") |
60 | 60 | mac.update(b"admin" if admin else b"notadmin") |
61 | if user_type: | |
62 | mac.update(b"\x00") | |
63 | mac.update(user_type.encode('utf8')) | |
61 | 64 | |
62 | 65 | return mac.hexdigest() |
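The NUL-separated HMAC layout documented above can be exercised as a self-contained sketch. This is an illustration only: the shared secret is passed as a parameter here (in the documented snippet it is a free variable), and the nonce, user, and password values are made up.

```python
import hashlib
import hmac


def generate_mac(shared_secret, nonce, user, password, admin=False, user_type=None):
    # The MAC content is nonce, user, password, "admin"/"notadmin", and
    # (optionally) the user_type, each separated by a NUL byte, fed through
    # HMAC-SHA1 keyed with the registration shared secret.
    mac = hmac.new(key=shared_secret, digestmod=hashlib.sha1)
    mac.update(nonce.encode("utf8"))
    mac.update(b"\x00")
    mac.update(user.encode("utf8"))
    mac.update(b"\x00")
    mac.update(password.encode("utf8"))
    mac.update(b"\x00")
    mac.update(b"admin" if admin else b"notadmin")
    if user_type:
        mac.update(b"\x00")
        mac.update(user_type.encode("utf8"))
    return mac.hexdigest()


# Hypothetical values, just to show the call shape with a user_type:
print(generate_mac(b"secret", "abcd", "alice", "hunter2", user_type="m.support"))
```

Note that the optional user_type segment changes the digest, so a server that expects it will reject a MAC computed without it.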
39 | 39 | 4. Create or edit the config file in ``/etc/turnserver.conf``. The relevant |
40 | 40 | lines, with example values, are:: |
41 | 41 | |
42 | lt-cred-mech | |
43 | 42 | use-auth-secret |
44 | 43 | static-auth-secret=[your secret key here] |
45 | 44 | realm=turn.myserver.org |
51 | 50 | |
52 | 51 | 5. Consider your security settings. TURN lets users request a relay |
53 | 52 | which will connect to arbitrary IP addresses and ports. At the least |
54 | we recommend: | |
53 | we recommend:: | |
55 | 54 | |
56 | 55 | # VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay. |
57 | 56 | no-tcp-relay |
105 | 104 | to refresh credentials. The TURN REST API specification recommends |
106 | 105 | one day (86400000). |
107 | 106 | |
108 | 4. "turn_allow_guests": Whether to allow guest users to use the TURN | |
107 | 4. "turn_allow_guests": Whether to allow guest users to use the TURN | |
109 | 108 | server. This is enabled by default, as otherwise VoIP will not |
110 | 109 | work reliably for guests. However, it does introduce a security risk |
111 | 110 | as it lets guests connect to arbitrary endpoints without having gone |
0 | #!/usr/bin/env python | |
1 | ||
2 | import argparse | |
3 | import sys | |
4 | ||
5 | from synapse.config.homeserver import HomeServerConfig | |
6 | ||
7 | if __name__ == "__main__": | |
8 | parser = argparse.ArgumentParser() | |
9 | parser.add_argument( | |
10 | "--config-dir", | |
11 | default="CONFDIR", | |
12 | ||
13 | help="The path where the config files are kept. Used to create filenames for " | |
14 | "things like the log config and the signing key. Default: %(default)s", | |
15 | ) | |
16 | ||
17 | parser.add_argument( | |
18 | "--data-dir", | |
19 | default="DATADIR", | |
20 | help="The path where the data files are kept. Used to create filenames for " | |
21 | "things like the database and media store. Default: %(default)s", | |
22 | ) | |
23 | ||
24 | parser.add_argument( | |
25 | "--server-name", | |
26 | default="SERVERNAME", | |
27 | help="The server name. Used to initialise the server_name config param, but also " | |
28 | "used in the names of some of the config files. Default: %(default)s", | |
29 | ) | |
30 | ||
31 | parser.add_argument( | |
32 | "--report-stats", | |
33 | action="store", | |
34 | help="Whether the generated config reports anonymized usage statistics", | |
35 | choices=["yes", "no"], | |
36 | ) | |
37 | ||
38 | parser.add_argument( | |
39 | "--generate-secrets", | |
40 | action="store_true", | |
41 | help="Enable generation of new secrets for things like the macaroon_secret_key. " | |
42 | "By default, these parameters will be left unset." | |
43 | ) | |
44 | ||
45 | parser.add_argument( | |
46 | "-o", "--output-file", | |
47 | type=argparse.FileType('w'), | |
48 | default=sys.stdout, | |
49 | help="File to write the configuration to. Default: stdout", | |
50 | ) | |
51 | ||
52 | args = parser.parse_args() | |
53 | ||
54 | report_stats = args.report_stats | |
55 | if report_stats is not None: | |
56 | report_stats = report_stats == "yes" | |
57 | ||
58 | conf = HomeServerConfig().generate_config( | |
59 | config_dir_path=args.config_dir, | |
60 | data_dir_path=args.data_dir, | |
61 | server_name=args.server_name, | |
62 | generate_secrets=args.generate_secrets, | |
63 | report_stats=report_stats, | |
64 | ) | |
65 | ||
66 | args.output_file.write(conf) |
83 | 83 | dependencies = exec_file(("synapse", "python_dependencies.py")) |
84 | 84 | long_description = read_file(("README.rst",)) |
85 | 85 | |
86 | REQUIREMENTS = dependencies['REQUIREMENTS'] | |
87 | CONDITIONAL_REQUIREMENTS = dependencies['CONDITIONAL_REQUIREMENTS'] | |
88 | ||
89 | # Make `pip install matrix-synapse[all]` install all the optional dependencies. | |
90 | ALL_OPTIONAL_REQUIREMENTS = set() | |
91 | ||
92 | for optional_deps in CONDITIONAL_REQUIREMENTS.values(): | |
93 | ALL_OPTIONAL_REQUIREMENTS = set(optional_deps) | ALL_OPTIONAL_REQUIREMENTS | |
94 | ||
95 | CONDITIONAL_REQUIREMENTS["all"] = list(ALL_OPTIONAL_REQUIREMENTS) | |
96 | ||
97 | ||
86 | 98 | setup( |
87 | 99 | name="matrix-synapse", |
88 | 100 | version=version, |
89 | 101 | packages=find_packages(exclude=["tests", "tests.*"]), |
90 | 102 | description="Reference homeserver for the Matrix decentralised comms protocol", |
91 | install_requires=dependencies['requirements'](include_conditional=True).keys(), | |
92 | dependency_links=dependencies["DEPENDENCY_LINKS"].values(), | |
103 | install_requires=REQUIREMENTS, | |
104 | extras_require=CONDITIONAL_REQUIREMENTS, | |
93 | 105 | include_package_data=True, |
94 | 106 | zip_safe=False, |
95 | 107 | long_description=long_description, |
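The new `all` extra in the setup.py hunk above is simply the union of every conditional-dependency list. The same computation can be sketched standalone; the requirement lists below are made up to mirror the shape of `CONDITIONAL_REQUIREMENTS`, not copied from Synapse.

```python
# Hypothetical conditional requirements, mirroring the shape of
# synapse.python_dependencies.CONDITIONAL_REQUIREMENTS.
CONDITIONAL_REQUIREMENTS = {
    "postgres": ["psycopg2>=2.6"],
    "saml2": ["pysaml2>=4.5.0"],
    "url_preview": ["lxml>=3.5.0"],
}

# Make `pip install matrix-synapse[all]` pull in every optional dependency:
# union all the per-feature lists into one "all" extra.
ALL_OPTIONAL_REQUIREMENTS = set()
for optional_deps in CONDITIONAL_REQUIREMENTS.values():
    ALL_OPTIONAL_REQUIREMENTS |= set(optional_deps)

CONDITIONAL_REQUIREMENTS["all"] = list(ALL_OPTIONAL_REQUIREMENTS)
print(sorted(CONDITIONAL_REQUIREMENTS["all"]))
```

Passing this dict as `extras_require=` to `setup()` is what makes the bracketed `[feature]` syntax work with pip.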
34 | 34 | server_location, |
35 | 35 | shared_secret, |
36 | 36 | admin=False, |
37 | user_type=None, | |
37 | 38 | requests=_requests, |
38 | 39 | _print=print, |
39 | 40 | exit=sys.exit, |
64 | 65 | mac.update(password.encode('utf8')) |
65 | 66 | mac.update(b"\x00") |
66 | 67 | mac.update(b"admin" if admin else b"notadmin") |
68 | if user_type: | |
69 | mac.update(b"\x00") | |
70 | mac.update(user_type.encode('utf8')) | |
67 | 71 | |
68 | 72 | mac = mac.hexdigest() |
69 | 73 | |
73 | 77 | "password": password, |
74 | 78 | "mac": mac, |
75 | 79 | "admin": admin, |
80 | "user_type": user_type, | |
76 | 81 | } |
77 | 82 | |
78 | 83 | _print("Sending registration request...") |
90 | 95 | _print("Success!") |
91 | 96 | |
92 | 97 | |
93 | def register_new_user(user, password, server_location, shared_secret, admin): | |
98 | def register_new_user(user, password, server_location, shared_secret, admin, user_type): | |
94 | 99 | if not user: |
95 | 100 | try: |
96 | 101 | default_user = getpass.getuser() |
128 | 133 | else: |
129 | 134 | admin = False |
130 | 135 | |
131 | request_registration(user, password, server_location, shared_secret, bool(admin)) | |
136 | request_registration(user, password, server_location, shared_secret, | |
137 | bool(admin), user_type) | |
132 | 138 | |
133 | 139 | |
134 | 140 | def main(): |
152 | 158 | "--password", |
153 | 159 | default=None, |
154 | 160 | help="New password for user. Will prompt if omitted.", |
161 | ) | |
162 | parser.add_argument( | |
163 | "-t", | |
164 | "--user_type", | |
165 | default=None, | |
166 | help="User type as specified in synapse.api.constants.UserTypes", | |
155 | 167 | ) |
156 | 168 | admin_group = parser.add_mutually_exclusive_group() |
157 | 169 | admin_group.add_argument( |
207 | 219 | if args.admin or args.no_admin: |
208 | 220 | admin = args.admin |
209 | 221 | |
210 | register_new_user(args.user, args.password, args.server_url, secret, admin) | |
222 | register_new_user(args.user, args.password, args.server_url, secret, | |
223 | admin, args.user_type) | |
211 | 224 | |
212 | 225 | |
213 | 226 | if __name__ == "__main__": |
299 | 299 | Raises: |
300 | 300 | AuthError if no user by that token exists or the token is invalid. |
301 | 301 | """ |
302 | ||
303 | if rights == "access": | |
304 | # first look in the database | |
305 | r = yield self._look_up_user_by_access_token(token) | |
306 | if r: | |
307 | defer.returnValue(r) | |
308 | ||
309 | # otherwise it needs to be a valid macaroon | |
302 | 310 | try: |
303 | 311 | user_id, guest = self._parse_and_validate_macaroon(token, rights) |
304 | except _InvalidMacaroonException: | |
305 | # doesn't look like a macaroon: treat it as an opaque token which | |
306 | # must be in the database. | |
307 | # TODO: it would be nice to get rid of this, but apparently some | |
308 | # people use access tokens which aren't macaroons | |
309 | r = yield self._look_up_user_by_access_token(token) | |
310 | defer.returnValue(r) | |
311 | ||
312 | try: | |
313 | 312 | user = UserID.from_string(user_id) |
314 | 313 | |
315 | if guest: | |
314 | if rights == "access": | |
315 | if not guest: | |
316 | # non-guest access tokens must be in the database | |
317 | logger.warning("Unrecognised access token - not in store.") | |
318 | raise AuthError( | |
319 | self.TOKEN_NOT_FOUND_HTTP_STATUS, | |
320 | "Unrecognised access token.", | |
321 | errcode=Codes.UNKNOWN_TOKEN, | |
322 | ) | |
323 | ||
316 | 324 | # Guest access tokens are not stored in the database (there can |
317 | 325 | # only be one access token per guest, anyway). |
318 | 326 | # |
353 | 361 | "device_id": None, |
354 | 362 | } |
355 | 363 | else: |
356 | # This codepath exists for several reasons: | |
357 | # * so that we can actually return a token ID, which is used | |
358 | # in some parts of the schema (where we probably ought to | |
359 | # use device IDs instead) | |
360 | # * the only way we currently have to invalidate an | |
361 | # access_token is by removing it from the database, so we | |
362 | # have to check here that it is still in the db | |
363 | # * some attributes (notably device_id) aren't stored in the | |
364 | # macaroon. They probably should be. | |
365 | # TODO: build the dictionary from the macaroon once the | |
366 | # above are fixed | |
367 | ret = yield self._look_up_user_by_access_token(token) | |
368 | if ret["user"] != user: | |
369 | logger.error( | |
370 | "Macaroon user (%s) != DB user (%s)", | |
371 | user, | |
372 | ret["user"] | |
373 | ) | |
374 | raise AuthError( | |
375 | self.TOKEN_NOT_FOUND_HTTP_STATUS, | |
376 | "User mismatch in macaroon", | |
377 | errcode=Codes.UNKNOWN_TOKEN | |
378 | ) | |
364 | raise RuntimeError("Unknown rights setting %s" % (rights,)) | |
379 | 365 | defer.returnValue(ret) |
380 | except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError): | |
366 | except ( | |
367 | _InvalidMacaroonException, | |
368 | pymacaroons.exceptions.MacaroonException, | |
369 | TypeError, | |
370 | ValueError, | |
371 | ) as e: | |
372 | logger.warning("Invalid macaroon in auth: %s %s", type(e), e) | |
381 | 373 | raise AuthError( |
382 | 374 | self.TOKEN_NOT_FOUND_HTTP_STATUS, "Invalid macaroon passed.", |
383 | 375 | errcode=Codes.UNKNOWN_TOKEN |
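The restructured token check above can be condensed into a small standalone sketch. `check_token` and the plain-dict `store` are hypothetical stand-ins for Synapse's real `Auth` methods and datastore, not actual Synapse code:

```python
# Hypothetical condensed sketch of the flow above: non-guest tokens
# must still exist in the store, and the macaroon's user must match
# the stored user; guests get a synthesised entry instead.
def check_token(user, guest, store):
    if not guest:
        ret = store.get(user)  # stand-in for _look_up_user_by_access_token
        if ret is None or ret["user"] != user:
            raise ValueError("Unrecognised access token")
        return ret
    # Guest access tokens are not stored in the database.
    return {"user": user, "device_id": None}
```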
507 | 499 | def _look_up_user_by_access_token(self, token): |
508 | 500 | ret = yield self.store.get_user_by_access_token(token) |
509 | 501 | if not ret: |
510 | logger.warn("Unrecognised access token - not in store.") | |
511 | raise AuthError( | |
512 | self.TOKEN_NOT_FOUND_HTTP_STATUS, "Unrecognised access token.", | |
513 | errcode=Codes.UNKNOWN_TOKEN | |
514 | ) | |
502 | defer.returnValue(None) | |
503 | ||
515 | 504 | # we use ret.get() below because *lots* of unit tests stub out |
516 | 505 | # get_user_by_access_token in a way where it only returns a couple of |
517 | 506 | # the fields. |
801 | 790 | threepid should never be set at the same time. |
802 | 791 | """ |
803 | 792 | |
804 | # Never fail an auth check for the server notices users | |
793 | # Never fail an auth check for the server notices user or support user | 
805 | 794 | # This can be a problem where event creation is prohibited due to blocking |
806 | if user_id == self.hs.config.server_notices_mxid: | |
795 | is_support = yield self.store.is_support_user(user_id) | |
796 | if user_id == self.hs.config.server_notices_mxid or is_support: | |
807 | 797 | return |
808 | 798 | |
809 | 799 | if self.hs.config.hs_disabled: |
101 | 101 | |
102 | 102 | class RoomVersions(object): |
103 | 103 | V1 = "1" |
104 | V2 = "2" | |
104 | 105 | VDH_TEST = "vdh-test-version" |
105 | 106 | STATE_V2_TEST = "state-v2-test" |
106 | 107 | |
112 | 113 | # until we have a working v2. |
113 | 114 | KNOWN_ROOM_VERSIONS = { |
114 | 115 | RoomVersions.V1, |
116 | RoomVersions.V2, | |
115 | 117 | RoomVersions.VDH_TEST, |
116 | 118 | RoomVersions.STATE_V2_TEST, |
117 | 119 | } |
118 | 120 | |
119 | 121 | ServerNoticeMsgType = "m.server_notice" |
120 | 122 | ServerNoticeLimitReached = "m.server_notice.usage_limit_reached" |
123 | ||
124 | ||
125 | class UserTypes(object): | |
126 | """Allows for user-type-specific behaviour. With the benefit of hindsight, | 
127 | 'admin' and 'guest' users should also be UserTypes. Normal users are type None | |
128 | """ | |
129 | SUPPORT = "support" | |
130 | ALL_USER_TYPES = (SUPPORT,)
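One subtlety in `ALL_USER_TYPES`: a one-element tuple needs a trailing comma, because a bare `("support")` is just a parenthesised string, and membership tests against a string match substrings. A minimal illustration:

```python
SUPPORT = "support"

# With the trailing comma this is a genuine one-element tuple; without
# it, ("support") is a plain string and `"upp" in ALL_USER_TYPES`
# would be True via substring matching.
ALL_USER_TYPES = (SUPPORT,)
```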
347 | 347 | ) |
348 | 348 | |
349 | 349 | |
350 | class RequestSendFailed(RuntimeError): | |
351 | """Sending an HTTP request over federation failed due to not being able to | 
352 | talk to the remote server for some reason. | |
353 | ||
354 | This exception is used to differentiate "expected" errors that arise due to | |
355 | networking (e.g. DNS failures, connection timeouts etc), versus unexpected | |
356 | errors (like programming errors). | |
357 | """ | |
358 | def __init__(self, inner_exception, can_retry): | |
359 | super(RequestSendFailed, self).__init__( | |
360 | "Failed to send request: %s: %s" % ( | |
361 | type(inner_exception).__name__, inner_exception, | |
362 | ) | |
363 | ) | |
364 | self.inner_exception = inner_exception | |
365 | self.can_retry = can_retry | |
366 | ||
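The new exception can be exercised on its own. The sketch below reproduces the class from the hunk above and shows the intended wrapping pattern; `send_with_wrapping` and the use of `ConnectionError` as the transport-level error are illustrative assumptions, not Synapse code:

```python
class RequestSendFailed(RuntimeError):
    """Wraps a low-level networking error so callers can distinguish
    retryable transport failures from programming errors."""
    def __init__(self, inner_exception, can_retry):
        super(RequestSendFailed, self).__init__(
            "Failed to send request: %s: %s"
            % (type(inner_exception).__name__, inner_exception)
        )
        self.inner_exception = inner_exception
        self.can_retry = can_retry


def send_with_wrapping(sender):
    # `sender` is any zero-argument callable; ConnectionError stands in
    # for a DNS failure or connection timeout.
    try:
        return sender()
    except ConnectionError as e:
        raise RequestSendFailed(e, can_retry=True)
```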
367 | ||
350 | 368 | def cs_error(msg, code=Codes.UNKNOWN, **kwargs): |
351 | 369 | """ Utility method for constructing an error response for client-server |
352 | 370 | interactions. |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | from six import text_type | |
15 | ||
14 | 16 | import jsonschema |
15 | 17 | from canonicaljson import json |
16 | 18 | from jsonschema import FormatChecker |
352 | 354 | sender = event.user_id |
353 | 355 | room_id = None |
354 | 356 | ev_type = "m.presence" |
355 | is_url = False | |
357 | contains_url = False | |
356 | 358 | else: |
357 | 359 | sender = event.get("sender", None) |
358 | 360 | if not sender: |
367 | 369 | |
368 | 370 | room_id = event.get("room_id", None) |
369 | 371 | ev_type = event.get("type", None) |
370 | is_url = "url" in event.get("content", {}) | |
372 | ||
373 | content = event.get("content", {}) | |
374 | # check if there is a string url field in the content for filtering purposes | |
375 | contains_url = isinstance(content.get("url"), text_type) | |
371 | 376 | |
372 | 377 | return self.check_fields( |
373 | 378 | room_id, |
374 | 379 | sender, |
375 | 380 | ev_type, |
376 | is_url, | |
381 | contains_url, | |
377 | 382 | ) |
378 | 383 | |
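The `is_url` → `contains_url` change above is behaviourally significant: only a string-valued `url` field now counts for filtering. A standalone sketch, using `str` where the Python 2/3 compatible code uses `six.text_type`:

```python
def event_contains_url(event):
    # Only a string "url" in content counts for filtering purposes;
    # non-string values (ints, dicts) are treated as absent.
    content = event.get("content", {})
    return isinstance(content.get("url"), str)
```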
379 | 384 | def check_fields(self, room_id, sender, event_type, contains_url): |
18 | 18 | |
19 | 19 | sys.dont_write_bytecode = True |
20 | 20 | |
21 | ||
22 | 21 | try: |
23 | 22 | python_dependencies.check_requirements() |
24 | except python_dependencies.MissingRequirementError as e: | |
25 | message = "\n".join([ | |
26 | "Missing Requirement: %s" % (str(e),), | |
27 | "To install run:", | |
28 | " pip install --upgrade --force \"%s\"" % (e.dependency,), | |
29 | "", | |
30 | ]) | |
31 | sys.stderr.writelines(message) | |
23 | except python_dependencies.DependencyException as e: | |
24 | sys.stderr.writelines(e.message) | |
32 | 25 | sys.exit(1) |
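The new `DependencyException.message` consolidates the message formatting that was previously inlined at each call site. A sketch of what such an exception might look like; the exact attribute layout is an assumption for illustration:

```python
class DependencyException(Exception):
    """Carries a ready-to-print message so startup code can just
    write it to stderr and exit."""
    @property
    def message(self):
        return "\n".join([
            "Missing Requirement: %s" % (self.args[0],),
            "To install run:",
            '    pip install --upgrade --force "%s"' % (self.args[0],),
            "",
        ])
```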
59 | 59 | from synapse.rest import ClientRestResource |
60 | 60 | from synapse.rest.key.v2 import KeyApiV2Resource |
61 | 61 | from synapse.rest.media.v0.content_repository import ContentRepoResource |
62 | from synapse.rest.well_known import WellKnownResource | |
62 | 63 | from synapse.server import HomeServer |
63 | 64 | from synapse.storage import DataStore, are_all_users_on_domain |
64 | 65 | from synapse.storage.engines import IncorrectDatabaseSetup, create_engine |
167 | 168 | "/_matrix/client/unstable": client_resource, |
168 | 169 | "/_matrix/client/v2_alpha": client_resource, |
169 | 170 | "/_matrix/client/versions": client_resource, |
171 | "/.well-known/matrix/client": WellKnownResource(self), | |
170 | 172 | }) |
173 | ||
174 | if self.get_config().saml2_enabled: | |
175 | from synapse.rest.saml2 import SAML2Resource | |
176 | resources["/_matrix/saml2"] = SAML2Resource(self) | |
171 | 177 | |
172 | 178 | if name == "consent": |
173 | 179 | from synapse.rest.consent.consent_resource import ConsentResource |
315 | 321 | |
316 | 322 | synapse.config.logger.setup_logging(config, use_worker_options=False) |
317 | 323 | |
318 | # check any extra requirements we have now we have a config | |
319 | check_requirements(config) | |
320 | ||
321 | 324 | events.USE_FROZEN_DICTS = config.use_frozen_dicts |
322 | 325 | |
323 | 326 | tls_server_context_factory = context_factory.ServerContextFactory(config) |
530 | 533 | ) |
531 | 534 | |
532 | 535 | start_generate_monthly_active_users() |
533 | if hs.config.limit_usage_by_mau: | |
536 | if hs.config.limit_usage_by_mau or hs.config.mau_stats_only: | |
534 | 537 | clock.looping_call(start_generate_monthly_active_users, 5 * 60 * 1000) |
535 | 538 | # End of monthly active user settings |
536 | 539 |
15 | 15 | |
16 | 16 | if __name__ == "__main__": |
17 | 17 | import sys |
18 | from homeserver import HomeServerConfig | |
18 | from synapse.config.homeserver import HomeServerConfig | |
19 | 19 | |
20 | 20 | action = sys.argv[1] |
21 | 21 |
134 | 134 | return file_stream.read() |
135 | 135 | |
136 | 136 | @staticmethod |
137 | def default_path(name): | |
138 | return os.path.abspath(os.path.join(os.path.curdir, name)) | |
139 | ||
140 | @staticmethod | |
141 | 137 | def read_config_file(file_path): |
142 | 138 | with open(file_path) as file_stream: |
143 | 139 | return yaml.load(file_stream) |
150 | 146 | return results |
151 | 147 | |
152 | 148 | def generate_config( |
153 | self, config_dir_path, server_name, is_generating_file, report_stats=None | |
149 | self, | |
150 | config_dir_path, | |
151 | data_dir_path, | |
152 | server_name, | |
153 | generate_secrets=False, | |
154 | report_stats=None, | |
154 | 155 | ): |
156 | """Build a default configuration file | |
157 | ||
158 | This is used both when the user explicitly asks us to generate a config file | |
159 | (eg with --generate_config), and before loading the config at runtime (to give | |
160 | a base which the config files override) | |
161 | ||
162 | Args: | |
163 | config_dir_path (str): The path where the config files are kept. Used to | |
164 | create filenames for things like the log config and the signing key. | |
165 | ||
166 | data_dir_path (str): The path where the data files are kept. Used to create | |
167 | filenames for things like the database and media store. | |
168 | ||
169 | server_name (str): The server name. Used to initialise the server_name | |
170 | config param, but also used in the names of some of the config files. | |
171 | ||
172 | generate_secrets (bool): True if we should generate new secrets for things | |
173 | like the macaroon_secret_key. If False, these parameters will be left | |
174 | unset. | |
175 | ||
176 | report_stats (bool|None): Initial setting for the report_stats setting. | |
177 | If None, report_stats will be left unset. | |
178 | ||
179 | Returns: | |
180 | str: the yaml config file | |
181 | """ | |
155 | 182 | default_config = "# vim:ft=yaml\n" |
156 | 183 | |
157 | 184 | default_config += "\n\n".join( |
159 | 186 | for conf in self.invoke_all( |
160 | 187 | "default_config", |
161 | 188 | config_dir_path=config_dir_path, |
189 | data_dir_path=data_dir_path, | |
162 | 190 | server_name=server_name, |
163 | is_generating_file=is_generating_file, | |
191 | generate_secrets=generate_secrets, | |
164 | 192 | report_stats=report_stats, |
165 | 193 | ) |
166 | 194 | ) |
167 | 195 | |
168 | config = yaml.load(default_config) | |
169 | ||
170 | return default_config, config | |
196 | return default_config | |
171 | 197 | |
172 | 198 | @classmethod |
173 | 199 | def load_config(cls, description, argv): |
273 | 299 | if not cls.path_exists(config_dir_path): |
274 | 300 | os.makedirs(config_dir_path) |
275 | 301 | with open(config_path, "w") as config_file: |
276 | config_str, config = obj.generate_config( | |
302 | config_str = obj.generate_config( | |
277 | 303 | config_dir_path=config_dir_path, |
304 | data_dir_path=os.getcwd(), | |
278 | 305 | server_name=server_name, |
279 | 306 | report_stats=(config_args.report_stats == "yes"), |
280 | is_generating_file=True, | |
307 | generate_secrets=True, | |
281 | 308 | ) |
309 | config = yaml.load(config_str) | |
282 | 310 | obj.invoke_all("generate_files", config) |
283 | 311 | config_file.write(config_str) |
284 | 312 | print( |
349 | 377 | raise ConfigError(MISSING_SERVER_NAME) |
350 | 378 | |
351 | 379 | server_name = specified_config["server_name"] |
352 | _, config = self.generate_config( | |
380 | config_string = self.generate_config( | |
353 | 381 | config_dir_path=config_dir_path, |
382 | data_dir_path=os.getcwd(), | |
354 | 383 | server_name=server_name, |
355 | is_generating_file=False, | |
356 | ) | |
384 | generate_secrets=False, | |
385 | ) | |
386 | config = yaml.load(config_string) | |
357 | 387 | config.pop("log_config") |
358 | 388 | config.update(specified_config) |
359 | 389 |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | import os | |
14 | 15 | |
15 | 16 | from ._base import Config |
16 | 17 | |
44 | 45 | |
45 | 46 | self.set_databasepath(config.get("database_path")) |
46 | 47 | |
47 | def default_config(self, **kwargs): | |
48 | database_path = self.abspath("homeserver.db") | |
48 | def default_config(self, data_dir_path, **kwargs): | |
49 | database_path = os.path.join(data_dir_path, "homeserver.db") | |
49 | 50 | return """\ |
50 | 51 | # Database configuration |
51 | 52 | database: |
31 | 31 | from .registration import RegistrationConfig |
32 | 32 | from .repository import ContentRepositoryConfig |
33 | 33 | from .room_directory import RoomDirectoryConfig |
34 | from .saml2 import SAML2Config | |
34 | from .saml2_config import SAML2Config | |
35 | 35 | from .server import ServerConfig |
36 | 36 | from .server_notices_config import ServerNoticesConfig |
37 | 37 | from .spam_checker import SpamCheckerConfig |
52 | 52 | ServerNoticesConfig, RoomDirectoryConfig, |
53 | 53 | ): |
54 | 54 | pass |
55 | ||
56 | ||
57 | if __name__ == '__main__': | |
58 | import sys | |
59 | sys.stdout.write( | |
60 | HomeServerConfig().generate_config(sys.argv[1], sys.argv[2], True)[0] | |
61 | ) |
56 | 56 | # Unfortunately, there are people out there that don't have this |
57 | 57 | # set. Let's just be "nice" and derive one from their secret key.
58 | 58 | logger.warn("Config is missing macaroon_secret_key")
59 | seed = self.signing_key[0].seed | |
60 | self.macaroon_secret_key = hashlib.sha256(seed) | |
59 | seed = bytes(self.signing_key[0]) | |
60 | self.macaroon_secret_key = hashlib.sha256(seed).digest() | |
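The fix above is subtle: `hashlib.sha256(seed)` returns a hash *object*, not key material, and the seed must be bytes. Calling `.digest()` yields the 32 raw key bytes. A minimal illustration with a toy seed standing in for the real signing-key bytes:

```python
import hashlib

# Toy stand-in for the signing-key bytes; the real code derives them
# with bytes(self.signing_key[0]).
seed = b"example-signing-key-seed"

# .digest() returns the 32-byte SHA-256 output suitable as a key.
macaroon_secret_key = hashlib.sha256(seed).digest()
```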
61 | 61 | |
62 | 62 | self.expire_access_token = config.get("expire_access_token", False) |
63 | 63 | |
65 | 65 | # falsification of values |
66 | 66 | self.form_secret = config.get("form_secret", None) |
67 | 67 | |
68 | def default_config(self, config_dir_path, server_name, is_generating_file=False, | |
68 | def default_config(self, config_dir_path, server_name, generate_secrets=False, | |
69 | 69 | **kwargs): |
70 | 70 | base_key_name = os.path.join(config_dir_path, server_name) |
71 | 71 | |
72 | if is_generating_file: | |
73 | macaroon_secret_key = random_string_with_symbols(50) | |
74 | form_secret = '"%s"' % random_string_with_symbols(50) | |
72 | if generate_secrets: | |
73 | macaroon_secret_key = 'macaroon_secret_key: "%s"' % ( | |
74 | random_string_with_symbols(50), | |
75 | ) | |
76 | form_secret = 'form_secret: "%s"' % random_string_with_symbols(50) | |
75 | 77 | else: |
76 | macaroon_secret_key = None | |
77 | form_secret = 'null' | |
78 | macaroon_secret_key = "# macaroon_secret_key: <PRIVATE STRING>" | |
79 | form_secret = "# form_secret: <PRIVATE STRING>" | |
78 | 80 | |
79 | 81 | return """\ |
80 | macaroon_secret_key: "%(macaroon_secret_key)s" | |
82 | # a secret which is used to sign access tokens. If none is specified, | |
83 | # the registration_shared_secret is used, if one is given; otherwise, | |
84 | # a secret key is derived from the signing key. | |
85 | # | |
86 | # Note that changing this will invalidate any active access tokens, so | |
87 | # all clients will have to log back in. | |
88 | %(macaroon_secret_key)s | |
81 | 89 | |
82 | 90 | # Used to enable access token expiration. |
83 | 91 | expire_access_token: False |
84 | 92 | |
85 | 93 | # a secret which is used to calculate HMACs for form values, to stop |
86 | # falsification of values | |
87 | form_secret: %(form_secret)s | |
94 | # falsification of values. Must be specified for the User Consent | |
95 | # forms to work. | |
96 | %(form_secret)s | |
88 | 97 | |
89 | 98 | ## Signing Keys ## |
90 | 99 |
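Both secret settings now follow the same pattern: emit a real value only when generating a fresh config, otherwise leave a commented placeholder so the option stays documented. This can be factored as a sketch; `render_secret` is a hypothetical helper, not Synapse code:

```python
import random
import string

def render_secret(name, generate_secrets):
    # Emit a real secret line when generating a fresh config; otherwise
    # leave a commented placeholder so the setting stays documented.
    if generate_secrets:
        value = "".join(
            random.choice(string.ascii_letters + string.digits)
            for _ in range(50)
        )
        return '%s: "%s"' % (name, value)
    return "# %s: <PRIVATE STRING>" % (name,)
```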
79 | 79 | self.log_file = self.abspath(config.get("log_file")) |
80 | 80 | |
81 | 81 | def default_config(self, config_dir_path, server_name, **kwargs): |
82 | log_config = self.abspath( | |
83 | os.path.join(config_dir_path, server_name + ".log.config") | |
84 | ) | |
82 | log_config = os.path.join(config_dir_path, server_name + ".log.config") | |
85 | 83 | return """ |
86 | 84 | # A yaml python logging config file |
87 | 85 | log_config: "%(log_config)s" |
23 | 23 | self.metrics_bind_host = config.get("metrics_bind_host", "127.0.0.1") |
24 | 24 | |
25 | 25 | def default_config(self, report_stats=None, **kwargs): |
26 | suffix = "" if report_stats is None else "report_stats: %(report_stats)s\n" | |
27 | return ("""\ | |
26 | res = """\ | |
28 | 27 | ## Metrics ##
29 | 28 | |
30 | 29 | # Enable collection and rendering of performance metrics |
31 | 30 | enable_metrics: False |
32 | """ + suffix) % locals() | |
31 | """ | |
32 | ||
33 | if report_stats is None: | |
34 | res += "# report_stats: true|false\n" | |
35 | else: | |
36 | res += "report_stats: %s\n" % ('true' if report_stats else 'false') | |
37 | ||
38 | return res |
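The rewritten metrics stanza can be checked in isolation. This sketch reproduces the three-way `report_stats` rendering (unset, enabled, disabled) as a hypothetical free function:

```python
def render_report_stats(report_stats):
    # When the admin has not chosen, leave a commented prompt;
    # otherwise emit the chosen boolean as YAML.
    res = "enable_metrics: False\n"
    if report_stats is None:
        res += "# report_stats: true|false\n"
    else:
        res += "report_stats: %s\n" % ("true" if report_stats else "false")
    return res
```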
36 | 36 | |
37 | 37 | self.bcrypt_rounds = config.get("bcrypt_rounds", 12) |
38 | 38 | self.trusted_third_party_id_servers = config["trusted_third_party_id_servers"] |
39 | self.default_identity_server = config.get("default_identity_server") | |
39 | 40 | self.allow_guest_access = config.get("allow_guest_access", False) |
40 | 41 | |
41 | 42 | self.invite_3pid_guest = ( |
48 | 49 | raise ConfigError('Invalid auto_join_rooms entry %s' % (room_alias,)) |
49 | 50 | self.autocreate_auto_join_rooms = config.get("autocreate_auto_join_rooms", True) |
50 | 51 | |
51 | def default_config(self, **kwargs): | |
52 | registration_shared_secret = random_string_with_symbols(50) | |
52 | def default_config(self, generate_secrets=False, **kwargs): | |
53 | if generate_secrets: | |
54 | registration_shared_secret = 'registration_shared_secret: "%s"' % ( | |
55 | random_string_with_symbols(50), | |
56 | ) | |
57 | else: | |
58 | registration_shared_secret = '# registration_shared_secret: <PRIVATE STRING>' | |
53 | 59 | |
54 | 60 | return """\ |
55 | 61 | ## Registration ## |
76 | 82 | |
77 | 83 | # If set, allows registration by anyone who also has the shared |
78 | 84 | # secret, even if registration is otherwise disabled. |
79 | registration_shared_secret: "%(registration_shared_secret)s" | |
85 | %(registration_shared_secret)s | |
80 | 86 | |
81 | 87 | # Set the number of bcrypt rounds used to generate password hash. |
82 | 88 | # Larger numbers increase the work factor needed to generate the hash. |
89 | 95 | # participate in rooms hosted on this server which have been made |
90 | 96 | # accessible to anonymous users. |
91 | 97 | allow_guest_access: False |
98 | ||
99 | # The identity server which we suggest that clients should use when users log | |
100 | # in on this server. | |
101 | # | |
102 | # (By default, no suggestion is made, so it is left up to the client. | |
103 | # This setting is ignored unless public_baseurl is also set.) | |
104 | # | |
105 | # default_identity_server: https://matrix.org | |
92 | 106 | |
93 | 107 | # The list of identity servers trusted to verify third party |
94 | 108 | # identifiers by this server. |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | ||
14 | import os | |
15 | 15 | from collections import namedtuple |
16 | 16 | |
17 | 17 | from synapse.util.module_loader import load_module |
174 | 174 | "url_preview_url_blacklist", () |
175 | 175 | ) |
176 | 176 | |
177 | def default_config(self, **kwargs): | |
178 | media_store = self.default_path("media_store") | |
179 | uploads_path = self.default_path("uploads") | |
177 | def default_config(self, data_dir_path, **kwargs): | |
178 | media_store = os.path.join(data_dir_path, "media_store") | |
179 | uploads_path = os.path.join(data_dir_path, "uploads") | |
180 | 180 | return r""" |
181 | 181 | # Directory where uploaded images and attachments are stored. |
182 | 182 | media_store_path: "%(media_store)s" |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2015 Ericsson | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | from ._base import Config | |
16 | ||
17 | ||
18 | class SAML2Config(Config): | |
19 | """SAML2 Configuration | |
20 | Synapse uses pysaml2 libraries for providing SAML2 support | |
21 | ||
22 | config_path: Path to the sp_conf.py configuration file | |
23 | idp_redirect_url: Identity provider URL which will redirect | |
24 | the user back to /login/saml2 with proper info. | |
25 | ||
26 | sp_conf.py file is something like: | |
27 | https://github.com/rohe/pysaml2/blob/master/example/sp-repoze/sp_conf.py.example | |
28 | ||
29 | More information: https://pythonhosted.org/pysaml2/howto/config.html | |
30 | """ | |
31 | ||
32 | def read_config(self, config): | |
33 | saml2_config = config.get("saml2_config", None) | |
34 | if saml2_config: | |
35 | self.saml2_enabled = saml2_config.get("enabled", True) | |
36 | self.saml2_config_path = saml2_config["config_path"] | |
37 | self.saml2_idp_redirect_url = saml2_config["idp_redirect_url"] | |
38 | else: | |
39 | self.saml2_enabled = False | |
40 | self.saml2_config_path = None | |
41 | self.saml2_idp_redirect_url = None | |
42 | ||
43 | def default_config(self, config_dir_path, server_name, **kwargs): | |
44 | return """ | |
45 | # Enable SAML2 for registration and login. Uses pysaml2 | |
46 | # config_path: Path to the sp_conf.py configuration file | |
47 | # idp_redirect_url: Identity provider URL which will redirect | |
48 | # the user back to /login/saml2 with proper info. | |
49 | # See pysaml2 docs for format of config. | |
50 | #saml2_config: | |
51 | # enabled: true | |
52 | # config_path: "%s/sp_conf.py" | |
53 | # idp_redirect_url: "http://%s/idp" | |
54 | """ % (config_dir_path, server_name) |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2018 New Vector Ltd. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | from ._base import Config, ConfigError | |
16 | ||
17 | ||
18 | class SAML2Config(Config): | |
19 | def read_config(self, config): | |
20 | self.saml2_enabled = False | |
21 | ||
22 | saml2_config = config.get("saml2_config") | |
23 | ||
24 | if not saml2_config or not saml2_config.get("enabled", True): | |
25 | return | |
26 | ||
27 | self.saml2_enabled = True | |
28 | ||
29 | import saml2.config | |
30 | self.saml2_sp_config = saml2.config.SPConfig() | |
31 | self.saml2_sp_config.load(self._default_saml_config_dict()) | |
32 | self.saml2_sp_config.load(saml2_config.get("sp_config", {})) | |
33 | ||
34 | config_path = saml2_config.get("config_path", None) | |
35 | if config_path is not None: | |
36 | self.saml2_sp_config.load_file(config_path) | |
37 | ||
38 | def _default_saml_config_dict(self): | |
39 | import saml2 | |
40 | ||
41 | public_baseurl = self.public_baseurl | |
42 | if public_baseurl is None: | |
43 | raise ConfigError( | |
44 | "saml2_config requires a public_baseurl to be set" | |
45 | ) | |
46 | ||
47 | metadata_url = public_baseurl + "_matrix/saml2/metadata.xml" | |
48 | response_url = public_baseurl + "_matrix/saml2/authn_response" | |
49 | return { | |
50 | "entityid": metadata_url, | |
51 | ||
52 | "service": { | |
53 | "sp": { | |
54 | "endpoints": { | |
55 | "assertion_consumer_service": [ | |
56 | (response_url, saml2.BINDING_HTTP_POST), | |
57 | ], | |
58 | }, | |
59 | "required_attributes": ["uid"], | |
60 | "optional_attributes": ["mail", "surname", "givenname"], | |
61 | }, | |
62 | } | |
63 | } | |
64 | ||
65 | def default_config(self, config_dir_path, server_name, **kwargs): | |
66 | return """ | |
67 | # Enable SAML2 for registration and login. Uses pysaml2. | |
68 | # | |
69 | # saml2_config: | |
70 | # | |
71 | # # The following is the configuration for the pysaml2 Service Provider. | |
72 | # # See pysaml2 docs for format of config. | |
73 | # # | |
74 | # # Default values will be used for the 'entityid' and 'service' settings, | |
75 | # # so it is not normally necessary to specify them unless you need to | |
76 | # # override them. | |
77 | # | |
78 | # sp_config: | |
79 | # # point this to the IdP's metadata. You can use either a local file or | |
80 | # # (preferably) a URL. | |
81 | # metadata: | |
82 | # # local: ["saml2/idp.xml"] | |
83 | # remote: | |
84 | # - url: https://our_idp/metadata.xml | |
85 | # | |
86 | # # The following is just used to generate our metadata xml, and you | |
87 | # # may well not need it, depending on your setup. Alternatively you | |
88 | # # may need a whole lot more detail - see the pysaml2 docs! | |
89 | # | |
90 | # description: ["My awesome SP", "en"] | |
91 | # name: ["Test SP", "en"] | |
92 | # | |
93 | # organization: | |
94 | # name: Example com | |
95 | # display_name: | |
96 | # - ["Example co", "en"] | |
97 | # url: "http://example.com" | |
98 | # | |
99 | # contact_person: | |
100 | # - given_name: Bob | |
101 | # sur_name: "the Sysadmin" | |
102 | # email_address: ["admin@example.com"] | 
103 | # contact_type: technical | 
104 | # | |
105 | # # Instead of putting the config inline as above, you can specify a | |
106 | # # separate pysaml2 configuration file: | |
107 | # # | |
108 | # # config_path: "%(config_dir_path)s/sp_conf.py" | |
109 | """ % {"config_dir_path": config_dir_path} |
0 | 0 | # -*- coding: utf-8 -*- |
1 | 1 | # Copyright 2014-2016 OpenMarket Ltd |
2 | # Copyright 2017 New Vector Ltd | |
2 | # Copyright 2017-2018 New Vector Ltd | |
3 | 3 | # |
4 | 4 | # Licensed under the Apache License, Version 2.0 (the "License"); |
5 | 5 | # you may not use this file except in compliance with the License. |
14 | 14 | # limitations under the License. |
15 | 15 | |
16 | 16 | import logging |
17 | import os.path | |
17 | 18 | |
18 | 19 | from synapse.http.endpoint import parse_and_validate_server_name |
20 | from synapse.python_dependencies import DependencyException, check_requirements | |
19 | 21 | |
20 | 22 | from ._base import Config, ConfigError |
21 | 23 | |
202 | 204 | ] |
203 | 205 | }) |
204 | 206 | |
205 | def default_config(self, server_name, **kwargs): | |
207 | _check_resource_config(self.listeners) | |
208 | ||
209 | def default_config(self, server_name, data_dir_path, **kwargs): | |
206 | 210 | _, bind_port = parse_and_validate_server_name(server_name) |
207 | 211 | if bind_port is not None: |
208 | 212 | unsecure_port = bind_port - 400 |
210 | 214 | bind_port = 8448 |
211 | 215 | unsecure_port = 8008 |
212 | 216 | |
213 | pid_file = self.abspath("homeserver.pid") | |
217 | pid_file = os.path.join(data_dir_path, "homeserver.pid") | |
214 | 218 | return """\ |
215 | 219 | ## Server ## |
216 | 220 | |
355 | 359 | # type: manhole |
356 | 360 | |
357 | 361 | |
358 | # Homeserver blocking | |
359 | # | |
360 | # How to reach the server admin, used in ResourceLimitError | |
361 | # admin_contact: 'mailto:admin@server.com' | |
362 | # | |
363 | # Global block config | |
364 | # | |
365 | # hs_disabled: False | |
366 | # hs_disabled_message: 'Human readable reason for why the HS is blocked' | |
367 | # hs_disabled_limit_type: 'error code(str), to help clients decode reason' | |
368 | # | |
369 | # Monthly Active User Blocking | |
370 | # | |
371 | # Enables monthly active user checking | |
372 | # limit_usage_by_mau: False | |
373 | # max_mau_value: 50 | |
374 | # mau_trial_days: 2 | |
375 | # | |
376 | # If enabled, the metrics for the number of monthly active users will | |
377 | # be populated, however no one will be limited. If limit_usage_by_mau | |
378 | # is true, this is implied to be true. | |
379 | # mau_stats_only: False | |
380 | # | |
381 | # Sometimes the server admin will want to ensure certain accounts are | |
382 | # never blocked by mau checking. These accounts are specified here. | |
383 | # | |
384 | # mau_limit_reserved_threepids: | |
385 | # - medium: 'email' | |
386 | # address: 'reserved_user@example.com' | |
387 | # | |
388 | # Room searching | |
389 | # | |
390 | # If disabled, new messages will not be indexed for searching and users | |
391 | # will receive errors when searching for messages. Defaults to enabled. | |
392 | # enable_search: true | |
362 | # Homeserver blocking | |
363 | # | |
364 | # How to reach the server admin, used in ResourceLimitError | |
365 | # admin_contact: 'mailto:admin@server.com' | |
366 | # | |
367 | # Global block config | |
368 | # | |
369 | # hs_disabled: False | |
370 | # hs_disabled_message: 'Human readable reason for why the HS is blocked' | |
371 | # hs_disabled_limit_type: 'error code(str), to help clients decode reason' | |
372 | # | |
373 | # Monthly Active User Blocking | |
374 | # | |
375 | # Enables monthly active user checking | |
376 | # limit_usage_by_mau: False | |
377 | # max_mau_value: 50 | |
378 | # mau_trial_days: 2 | |
379 | # | |
380 | # If enabled, the metrics for the number of monthly active users will | |
381 | # be populated, however no one will be limited. If limit_usage_by_mau | |
382 | # is true, this is implied to be true. | |
383 | # mau_stats_only: False | |
384 | # | |
385 | # Sometimes the server admin will want to ensure certain accounts are | |
386 | # never blocked by mau checking. These accounts are specified here. | |
387 | # | |
388 | # mau_limit_reserved_threepids: | |
389 | # - medium: 'email' | |
390 | # address: 'reserved_user@example.com' | |
391 | # | |
392 | # Room searching | |
393 | # | |
394 | # If disabled, new messages will not be indexed for searching and users | |
395 | # will receive errors when searching for messages. Defaults to enabled. | |
396 | # enable_search: true | |
393 | 397 | """ % locals() |
394 | 398 | |
395 | 399 | def read_arguments(self, args): |
463 | 467 | if name == 'webclient': |
464 | 468 | logger.warning(NO_MORE_WEB_CLIENT_WARNING) |
465 | 469 | return |
470 | ||
471 | ||
472 | KNOWN_RESOURCES = ( | |
473 | 'client', | |
474 | 'consent', | |
475 | 'federation', | |
476 | 'keys', | |
477 | 'media', | |
478 | 'metrics', | |
479 | 'replication', | |
480 | 'static', | |
481 | 'webclient', | |
482 | ) | |
483 | ||
484 | ||
485 | def _check_resource_config(listeners): | |
486 | resource_names = set( | |
487 | res_name | |
488 | for listener in listeners | |
489 | for res in listener.get("resources", []) | |
490 | for res_name in res.get("names", []) | |
491 | ) | |
492 | ||
493 | for resource in resource_names: | |
494 | if resource not in KNOWN_RESOURCES: | |
495 | raise ConfigError( | |
496 | "Unknown listener resource '%s'" % (resource, ) | |
497 | ) | |
498 | if resource == "consent": | |
499 | try: | |
500 | check_requirements('resources.consent') | |
501 | except DependencyException as e: | |
502 | raise ConfigError(e.message) |
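The validation added above gathers every resource name across all listeners with a nested comprehension before checking each against the allow-list. A self-contained sketch with a trimmed `KNOWN_RESOURCES` list and a local `ConfigError`:

```python
KNOWN_RESOURCES = ("client", "consent", "federation", "media", "metrics")

class ConfigError(Exception):
    pass

def check_resource_config(listeners):
    # Flatten listeners -> resources -> names into a set of names.
    resource_names = set(
        res_name
        for listener in listeners
        for res in listener.get("resources", [])
        for res_name in res.get("names", [])
    )
    for resource in resource_names:
        if resource not in KNOWN_RESOURCES:
            raise ConfigError("Unknown listener resource '%s'" % (resource,))
```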
21 | 21 | from twisted.internet import defer |
22 | 22 | |
23 | 23 | import synapse.metrics |
24 | from synapse.api.errors import FederationDeniedError, HttpResponseException | |
24 | from synapse.api.errors import ( | |
25 | FederationDeniedError, | |
26 | HttpResponseException, | |
27 | RequestSendFailed, | |
28 | ) | |
25 | 29 | from synapse.handlers.presence import format_user_presence_state, get_interested_remotes |
26 | 30 | from synapse.metrics import ( |
27 | 31 | LaterGauge, |
517 | 521 | ) |
518 | 522 | except FederationDeniedError as e: |
519 | 523 | logger.info(e) |
520 | except Exception as e: | |
521 | logger.warn( | |
522 | "TX [%s] Failed to send transaction: %s", | |
524 | except HttpResponseException as e: | |
525 | logger.warning( | |
526 | "TX [%s] Received %d response to transaction: %s", | |
527 | destination, e.code, e, | |
528 | ) | |
529 | except RequestSendFailed as e: | |
530 | logger.warning("TX [%s] Failed to send transaction: %s", destination, e) | |
531 | ||
532 | for p, _ in pending_pdus: | |
533 | logger.info("Failed to send event %s to %s", p.event_id, | |
534 | destination) | |
535 | except Exception: | |
536 | logger.exception( | |
537 | "TX [%s] Failed to send transaction", | |
523 | 538 | destination, |
524 | e, | |
525 | 539 | ) |
526 | 540 | for p, _ in pending_pdus: |
527 | 541 | logger.info("Failed to send event %s to %s", p.event_id, |
562 | 562 | insensitively, but return None if there are multiple inexact matches. |
563 | 563 | |
564 | 564 | Args: |
565 | (str) user_id: complete @user:id | |
566 | ||
567 | Returns: | |
568 | defer.Deferred: (str) canonical_user_id, or None if zero or | |
565 | (unicode|bytes) user_id: complete @user:id | |
566 | ||
567 | Returns: | |
568 | defer.Deferred: (unicode) canonical_user_id, or None if zero or | |
569 | 569 | multiple matches |
570 | 570 | """ |
571 | 571 | res = yield self._find_user_id_and_pwd_hash(user_id) |
953 | 953 | return macaroon.serialize() |
954 | 954 | |
955 | 955 | def generate_short_term_login_token(self, user_id, duration_in_ms=(2 * 60 * 1000)): |
956 | """ | |
957 | ||
958 | Args: | |
959 | user_id (unicode): | |
960 | duration_in_ms (int): | |
961 | ||
962 | Returns: | |
963 | unicode | |
964 | """ | |
956 | 965 | macaroon = self._generate_base_macaroon(user_id) |
957 | 966 | macaroon.add_first_party_caveat("type = login") |
958 | 967 | now = self.hs.get_clock().time_msec() |
234 | 234 | "room_key", next_key |
235 | 235 | ) |
236 | 236 | |
237 | if events: | |
238 | if event_filter: | |
239 | events = event_filter.filter(events) | |
240 | ||
241 | events = yield filter_events_for_client( | |
242 | self.store, | |
243 | user_id, | |
244 | events, | |
245 | is_peeking=(member_event_id is None), | |
246 | ) | |
247 | ||
237 | 248 | if not events: |
238 | 249 | defer.returnValue({ |
239 | 250 | "chunk": [], |
240 | 251 | "start": pagin_config.from_token.to_string(), |
241 | 252 | "end": next_token.to_string(), |
242 | 253 | }) |
243 | ||
244 | if event_filter: | |
245 | events = event_filter.filter(events) | |
246 | ||
247 | events = yield filter_events_for_client( | |
248 | self.store, | |
249 | user_id, | |
250 | events, | |
251 | is_peeking=(member_event_id is None), | |
252 | ) | |
253 | 254 | |
254 | 255 | state = None |
255 | 256 | if event_filter and event_filter.lazy_load_members(): |
125 | 125 | make_guest=False, |
126 | 126 | admin=False, |
127 | 127 | threepid=None, |
128 | user_type=None, | |
129 | default_display_name=None, | |
128 | 130 | ): |
129 | 131 | """Registers a new client on the server. |
130 | 132 | |
139 | 141 | since it offers no means of associating a device_id with the |
140 | 142 | access_token. Instead you should call auth_handler.issue_access_token |
141 | 143 | after registration. |
144 | user_type (str|None): type of user. One of the values from | |
145 | api.constants.UserTypes, or None for a normal user. | |
146 | default_display_name (unicode|None): if set, the new user's displayname | |
147 | will be set to this. If unset, it defaults to the user's localpart. | |
142 | 148 | Returns: |
143 | 149 | A tuple of (user_id, access_token). |
144 | 150 | Raises: |
168 | 174 | user = UserID(localpart, self.hs.hostname) |
169 | 175 | user_id = user.to_string() |
170 | 176 | |
177 | if was_guest: | |
178 | # If the user was a guest then they already have a profile | |
179 | default_display_name = None | |
180 | ||
181 | elif default_display_name is None: | |
182 | default_display_name = localpart | |
183 | ||
171 | 184 | token = None |
172 | 185 | if generate_token: |
173 | 186 | token = self.macaroon_gen.generate_access_token(user_id) |
177 | 190 | password_hash=password_hash, |
178 | 191 | was_guest=was_guest, |
179 | 192 | make_guest=make_guest, |
180 | create_profile_with_localpart=( | |
181 | # If the user was a guest then they already have a profile | |
182 | None if was_guest else user.localpart | |
183 | ), | |
193 | create_profile_with_displayname=default_display_name, | |
184 | 194 | admin=admin, |
195 | user_type=user_type, | |
185 | 196 | ) |
186 | 197 | |
187 | 198 | if self.hs.config.user_directory_search_all_users: |
202 | 213 | yield self.check_user_id_not_appservice_exclusive(user_id) |
203 | 214 | if generate_token: |
204 | 215 | token = self.macaroon_gen.generate_access_token(user_id) |
216 | if default_display_name is None: | |
217 | default_display_name = localpart | |
205 | 218 | try: |
206 | 219 | yield self.store.register( |
207 | 220 | user_id=user_id, |
208 | 221 | token=token, |
209 | 222 | password_hash=password_hash, |
210 | 223 | make_guest=make_guest, |
211 | create_profile_with_localpart=user.localpart, | |
224 | create_profile_with_displayname=default_display_name, | |
212 | 225 | ) |
213 | 226 | except SynapseError: |
214 | 227 | # if user id is taken, just generate another |
232 | 245 | # auto-join the user to any rooms we're supposed to dump them into |
233 | 246 | fake_requester = create_requester(user_id) |
234 | 247 | |
235 | # try to create the room if we're the first user on the server | |
248 | # try to create the room if we're the first real user on the server. Note | |
249 | # that an auto-generated support user is not a real user and will never be | |
250 | # the user to create the room | |
236 | 251 | should_auto_create_rooms = False |
237 | if self.hs.config.autocreate_auto_join_rooms: | |
252 | is_support = yield self.store.is_support_user(user_id) | |
253 | # There is an edge case where the first user is the support user, in | |
254 | # which case the room is never created. This seems unlikely, and is | |
255 | # recoverable by hand, given that a support user had to be involved | |
256 | # in the first place. | |
257 | if self.hs.config.autocreate_auto_join_rooms and not is_support: | |
238 | 258 | count = yield self.store.count_all_users() |
239 | 259 | should_auto_create_rooms = count == 1 |
240 | 260 | for r in self.hs.config.auto_join_rooms: |
299 | 319 | user_id=user_id, |
300 | 320 | password_hash="", |
301 | 321 | appservice_id=service_id, |
302 | create_profile_with_localpart=user.localpart, | |
322 | create_profile_with_displayname=user.localpart, | |
303 | 323 | ) |
304 | 324 | defer.returnValue(user_id) |
305 | 325 | |
325 | 345 | ) |
326 | 346 | else: |
327 | 347 | logger.info("Valid captcha entered from %s", ip) |
328 | ||
329 | @defer.inlineCallbacks | |
330 | def register_saml2(self, localpart): | |
331 | """ | |
332 | Registers email_id as SAML2 Based Auth. | |
333 | """ | |
334 | if types.contains_invalid_mxid_characters(localpart): | |
335 | raise SynapseError( | |
336 | 400, | |
337 | "User ID can only contain characters a-z, 0-9, or '=_-./'", | |
338 | ) | |
339 | yield self.auth.check_auth_blocking() | |
340 | user = UserID(localpart, self.hs.hostname) | |
341 | user_id = user.to_string() | |
342 | ||
343 | yield self.check_user_id_not_appservice_exclusive(user_id) | |
344 | token = self.macaroon_gen.generate_access_token(user_id) | |
345 | try: | |
346 | yield self.store.register( | |
347 | user_id=user_id, | |
348 | token=token, | |
349 | password_hash=None, | |
350 | create_profile_with_localpart=user.localpart, | |
351 | ) | |
352 | except Exception as e: | |
353 | yield self.store.add_access_token_to_user(user_id, token) | |
354 | # Ignore Registration errors | |
355 | logger.exception(e) | |
356 | defer.returnValue((user_id, token)) | |
357 | 348 | |
358 | 349 | @defer.inlineCallbacks |
359 | 350 | def register_email(self, threepidCreds): |
506 | 497 | user_id=user_id, |
507 | 498 | token=token, |
508 | 499 | password_hash=password_hash, |
509 | create_profile_with_localpart=user.localpart, | |
500 | create_profile_with_displayname=user.localpart, | |
510 | 501 | ) |
511 | 502 | else: |
512 | 503 | yield self._auth_handler.delete_access_tokens_for_user(user_id) |
432 | 432 | """ |
433 | 433 | user_id = requester.user.to_string() |
434 | 434 | |
435 | self.auth.check_auth_blocking(user_id) | |
435 | yield self.auth.check_auth_blocking(user_id) | |
436 | 436 | |
437 | 437 | if not self.spam_checker.user_may_create_room(user_id): |
438 | 438 | raise SynapseError(403, "You are not permitted to create rooms") |
1667 | 1667 | "content": content, |
1668 | 1668 | }) |
1669 | 1669 | |
1670 | account_data = sync_config.filter_collection.filter_room_account_data( | |
1670 | account_data_events = sync_config.filter_collection.filter_room_account_data( | |
1671 | 1671 | account_data_events |
1672 | 1672 | ) |
1673 | 1673 | |
1674 | 1674 | ephemeral = sync_config.filter_collection.filter_room_ephemeral(ephemeral) |
1675 | 1675 | |
1676 | if not (always_include or batch or account_data or ephemeral or full_state): | |
1676 | if not (always_include | |
1677 | or batch | |
1678 | or account_data_events | |
1679 | or ephemeral | |
1680 | or full_state): | |
1677 | 1681 | return |
1678 | 1682 | |
1679 | 1683 | state = yield self.compute_state_delta( |
1744 | 1748 | room_id=room_id, |
1745 | 1749 | timeline=batch, |
1746 | 1750 | state=state, |
1747 | account_data=account_data, | |
1751 | account_data=account_data_events, | |
1748 | 1752 | ) |
1749 | 1753 | if room_sync or always_include: |
1750 | 1754 | sync_result_builder.archived.append(room_sync) |
124 | 124 | """ |
125 | 125 | # FIXME(#3714): We should probably do this in the same worker as all |
126 | 126 | # the other changes. |
127 | yield self.store.update_profile_in_user_dir( | |
128 | user_id, profile.display_name, profile.avatar_url, None, | |
129 | ) | |
127 | is_support = yield self.store.is_support_user(user_id) | |
128 | # Support users are for diagnostics and should not appear in the user directory. | |
129 | if not is_support: | |
130 | yield self.store.update_profile_in_user_dir( | |
131 | user_id, profile.display_name, profile.avatar_url, None, | |
132 | ) | |
130 | 133 | |
131 | 134 | @defer.inlineCallbacks |
132 | 135 | def handle_user_deactivated(self, user_id): |
328 | 331 | public_value=Membership.JOIN, |
329 | 332 | ) |
330 | 333 | |
331 | if change is None: | |
332 | # Handle any profile changes | |
333 | yield self._handle_profile_change( | |
334 | state_key, room_id, prev_event_id, event_id, | |
335 | ) | |
336 | continue | |
337 | ||
338 | if not change: | |
334 | if change is False: | |
339 | 335 | # Need to check if the server left the room entirely, if so |
340 | 336 | # we might need to remove all the users in that room |
341 | 337 | is_in_room = yield self.store.is_host_joined( |
353 | 349 | else: |
354 | 350 | logger.debug("Server is still in room: %r", room_id) |
355 | 351 | |
356 | if change: # The user joined | |
357 | event = yield self.store.get_event(event_id, allow_none=True) | |
358 | profile = ProfileInfo( | |
359 | avatar_url=event.content.get("avatar_url"), | |
360 | display_name=event.content.get("displayname"), | |
361 | ) | |
362 | ||
363 | yield self._handle_new_user(room_id, state_key, profile) | |
364 | else: # The user left | |
365 | yield self._handle_remove_user(room_id, state_key) | |
352 | is_support = yield self.store.is_support_user(state_key) | |
353 | if not is_support: | |
354 | if change is None: | |
355 | # Handle any profile changes | |
356 | yield self._handle_profile_change( | |
357 | state_key, room_id, prev_event_id, event_id, | |
358 | ) | |
359 | continue | |
360 | ||
361 | if change: # The user joined | |
362 | event = yield self.store.get_event(event_id, allow_none=True) | |
363 | profile = ProfileInfo( | |
364 | avatar_url=event.content.get("avatar_url"), | |
365 | display_name=event.content.get("displayname"), | |
366 | ) | |
367 | ||
368 | yield self._handle_new_user(room_id, state_key, profile) | |
369 | else: # The user left | |
370 | yield self._handle_remove_user(room_id, state_key) | |
366 | 371 | else: |
367 | 372 | logger.debug("Ignoring irrelevant type: %r", typ) |
368 | 373 |
20 | 20 | |
21 | 21 | import treq |
22 | 22 | from canonicaljson import encode_canonical_json, json |
23 | from netaddr import IPAddress | |
23 | 24 | from prometheus_client import Counter |
25 | from zope.interface import implementer, provider | |
24 | 26 | |
25 | 27 | from OpenSSL import SSL |
26 | 28 | from OpenSSL.SSL import VERIFY_NONE |
27 | from twisted.internet import defer, protocol, reactor, ssl | |
28 | from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS | |
29 | from twisted.internet import defer, protocol, ssl | |
30 | from twisted.internet.interfaces import ( | |
31 | IReactorPluggableNameResolver, | |
32 | IResolutionReceiver, | |
33 | ) | |
34 | from twisted.python.failure import Failure | |
29 | 35 | from twisted.web._newclient import ResponseDone |
30 | from twisted.web.client import ( | |
31 | Agent, | |
32 | BrowserLikeRedirectAgent, | |
33 | ContentDecoderAgent, | |
34 | GzipDecoder, | |
35 | HTTPConnectionPool, | |
36 | PartialDownloadError, | |
37 | readBody, | |
38 | ) | |
36 | from twisted.web.client import Agent, HTTPConnectionPool, PartialDownloadError, readBody | |
39 | 37 | from twisted.web.http import PotentialDataLoss |
40 | 38 | from twisted.web.http_headers import Headers |
41 | 39 | |
42 | 40 | from synapse.api.errors import Codes, HttpResponseException, SynapseError |
43 | 41 | from synapse.http import cancelled_to_request_timed_out_error, redact_uri |
44 | from synapse.http.endpoint import SpiderEndpoint | |
45 | 42 | from synapse.util.async_helpers import timeout_deferred |
46 | 43 | from synapse.util.caches import CACHE_SIZE_FACTOR |
47 | 44 | from synapse.util.logcontext import make_deferred_yieldable |
49 | 46 | logger = logging.getLogger(__name__) |
50 | 47 | |
51 | 48 | outgoing_requests_counter = Counter("synapse_http_client_requests", "", ["method"]) |
52 | incoming_responses_counter = Counter("synapse_http_client_responses", "", | |
53 | ["method", "code"]) | |
49 | incoming_responses_counter = Counter( | |
50 | "synapse_http_client_responses", "", ["method", "code"] | |
51 | ) | |
52 | ||
53 | ||
54 | def check_against_blacklist(ip_address, ip_whitelist, ip_blacklist): | |
55 | """ | |
56 | Args: | |
57 | ip_address (netaddr.IPAddress) | |
58 | ip_whitelist (netaddr.IPSet) | |
59 | ip_blacklist (netaddr.IPSet) | |
60 | """ | |
61 | if ip_address in ip_blacklist: | |
62 | if ip_whitelist is None or ip_address not in ip_whitelist: | |
63 | return True | |
64 | return False | |
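The `check_against_blacklist` helper added above encodes a simple rule: an address is blocked if it falls inside the blacklist and is not rescued by the whitelist. A minimal sketch of the same logic using only the standard-library `ipaddress` module (the real helper takes netaddr `IPAddress`/`IPSet` objects instead):

```python
from ipaddress import ip_address, ip_network

def check_against_blacklist(addr, ip_whitelist, ip_blacklist):
    # Blocked iff the address is in the blacklist and the whitelist
    # (if any) does not also contain it -- mirrors the netaddr helper.
    addr = ip_address(addr)
    if any(addr in net for net in ip_blacklist):
        if ip_whitelist is None or not any(addr in net for net in ip_whitelist):
            return True
    return False

blacklist = [ip_network("10.0.0.0/8"), ip_network("127.0.0.0/8")]
whitelist = [ip_network("10.1.0.0/16")]

print(check_against_blacklist("10.2.3.4", whitelist, blacklist))  # True
print(check_against_blacklist("10.1.3.4", whitelist, blacklist))  # False (whitelisted)
print(check_against_blacklist("8.8.8.8", whitelist, blacklist))   # False
```

Note the whitelist only matters for addresses already caught by the blacklist; it never blocks anything on its own.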
65 | ||
66 | ||
67 | class IPBlacklistingResolver(object): | |
68 | """ | |
69 | A proxy for reactor.nameResolver which only produces non-blacklisted IP | |
70 | addresses, preventing DNS rebinding attacks on URL preview. | |
71 | """ | |
72 | ||
73 | def __init__(self, reactor, ip_whitelist, ip_blacklist): | |
74 | """ | |
75 | Args: | |
76 | reactor (twisted.internet.reactor) | |
77 | ip_whitelist (netaddr.IPSet) | |
78 | ip_blacklist (netaddr.IPSet) | |
79 | """ | |
80 | self._reactor = reactor | |
81 | self._ip_whitelist = ip_whitelist | |
82 | self._ip_blacklist = ip_blacklist | |
83 | ||
84 | def resolveHostName(self, recv, hostname, portNumber=0): | |
85 | ||
86 | r = recv() | |
87 | d = defer.Deferred() | |
88 | addresses = [] | |
89 | ||
90 | @provider(IResolutionReceiver) | |
91 | class EndpointReceiver(object): | |
92 | @staticmethod | |
93 | def resolutionBegan(resolutionInProgress): | |
94 | pass | |
95 | ||
96 | @staticmethod | |
97 | def addressResolved(address): | |
98 | ip_address = IPAddress(address.host) | |
99 | ||
100 | if check_against_blacklist( | |
101 | ip_address, self._ip_whitelist, self._ip_blacklist | |
102 | ): | |
103 | logger.info( | |
104 | "Dropped %s from DNS resolution to %s", ip_address, hostname | |
105 | ) | |
106 | raise SynapseError(403, "IP address blocked by IP blacklist entry") | |
107 | ||
108 | addresses.append(address) | |
109 | ||
110 | @staticmethod | |
111 | def resolutionComplete(): | |
112 | d.callback(addresses) | |
113 | ||
114 | self._reactor.nameResolver.resolveHostName( | |
115 | EndpointReceiver, hostname, portNumber=portNumber | |
116 | ) | |
117 | ||
118 | def _callback(addrs): | |
119 | r.resolutionBegan(None) | |
120 | for i in addrs: | |
121 | r.addressResolved(i) | |
122 | r.resolutionComplete() | |
123 | ||
124 | d.addCallback(_callback) | |
125 | ||
126 | return r | |
127 | ||
128 | ||
129 | class BlacklistingAgentWrapper(Agent): | |
130 | """ | |
131 | An Agent wrapper which blocks requests made directly to IP addresses in | |
132 | the URI (i.e. without a DNS lookup). | |
133 | """ | |
134 | ||
135 | def __init__(self, agent, reactor, ip_whitelist=None, ip_blacklist=None): | |
136 | """ | |
137 | Args: | |
138 | agent (twisted.web.client.Agent): The Agent to wrap. | |
139 | reactor (twisted.internet.reactor) | |
140 | ip_whitelist (netaddr.IPSet) | |
141 | ip_blacklist (netaddr.IPSet) | |
142 | """ | |
143 | self._agent = agent | |
144 | self._ip_whitelist = ip_whitelist | |
145 | self._ip_blacklist = ip_blacklist | |
146 | ||
147 | def request(self, method, uri, headers=None, bodyProducer=None): | |
148 | h = urllib.parse.urlparse(uri.decode('ascii')) | |
149 | ||
150 | try: | |
151 | ip_address = IPAddress(h.hostname) | |
152 | ||
153 | if check_against_blacklist( | |
154 | ip_address, self._ip_whitelist, self._ip_blacklist | |
155 | ): | |
156 | logger.info( | |
157 | "Blocking access to %s because of blacklist", ip_address | |
158 | ) | |
159 | e = SynapseError(403, "IP address blocked by IP blacklist entry") | |
160 | return defer.fail(Failure(e)) | |
161 | except Exception: | |
162 | # Not an IP | |
163 | pass | |
164 | ||
165 | return self._agent.request( | |
166 | method, uri, headers=headers, bodyProducer=bodyProducer | |
167 | ) | |
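`BlacklistingAgentWrapper.request` above only short-circuits when the URI carries a literal IP; hostnames fall through to the DNS-level resolver filter. A small stdlib sketch of that literal-IP detection (the function name here is illustrative, not part of Synapse):

```python
from ipaddress import ip_address
from urllib.parse import urlparse

def is_direct_ip_request(uri):
    # A literal IP in the URI never reaches DNS resolution, so it must be
    # checked at the Agent level; hostnames are left to the resolver filter.
    host = urlparse(uri).hostname
    try:
        ip_address(host)
        return True
    except ValueError:
        # Not a literal IP (or no hostname at all)
        return False

print(is_direct_ip_request("http://10.0.0.1:8080/preview"))  # True
print(is_direct_ip_request("http://example.com/preview"))    # False
```

This is why both layers are needed: the resolver alone would miss `http://10.0.0.1/…`, and the Agent check alone would miss a hostname that resolves to a blacklisted address (DNS rebinding).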
54 | 168 | |
55 | 169 | |
56 | 170 | class SimpleHttpClient(object): |
58 | 172 | A simple, no-frills HTTP client with methods that wrap up common ways of |
59 | 173 | using HTTP in Matrix |
60 | 174 | """ |
61 | def __init__(self, hs): | |
175 | ||
176 | def __init__(self, hs, treq_args={}, ip_whitelist=None, ip_blacklist=None): | |
177 | """ | |
178 | Args: | |
179 | hs (synapse.server.HomeServer) | |
180 | treq_args (dict): Extra keyword arguments to be given to treq.request. | |
181 | ip_blacklist (netaddr.IPSet): IP addresses that we may not | |
182 | request. | |
183 | ip_whitelist (netaddr.IPSet): IP addresses that we may request | |
184 | even if they would otherwise be caught by the blacklist. | |
185 | """ | |
62 | 186 | self.hs = hs |
63 | 187 | |
64 | pool = HTTPConnectionPool(reactor) | |
188 | self._ip_whitelist = ip_whitelist | |
189 | self._ip_blacklist = ip_blacklist | |
190 | self._extra_treq_args = treq_args | |
191 | ||
192 | self.user_agent = hs.version_string | |
193 | self.clock = hs.get_clock() | |
194 | if hs.config.user_agent_suffix: | |
195 | self.user_agent = "%s %s" % (self.user_agent, hs.config.user_agent_suffix) | |
196 | ||
197 | self.user_agent = self.user_agent.encode('ascii') | |
198 | ||
199 | if self._ip_blacklist: | |
200 | real_reactor = hs.get_reactor() | |
201 | # If we have an IP blacklist, we need to use a DNS resolver which | |
202 | # filters out blacklisted IP addresses, to prevent DNS rebinding. | |
203 | nameResolver = IPBlacklistingResolver( | |
204 | real_reactor, self._ip_whitelist, self._ip_blacklist | |
205 | ) | |
206 | ||
207 | @implementer(IReactorPluggableNameResolver) | |
208 | class Reactor(object): | |
209 | def __getattr__(_self, attr): | |
210 | if attr == "nameResolver": | |
211 | return nameResolver | |
212 | else: | |
213 | return getattr(real_reactor, attr) | |
214 | ||
215 | self.reactor = Reactor() | |
216 | else: | |
217 | self.reactor = hs.get_reactor() | |
65 | 218 | |
66 | 219 | # the pusher makes lots of concurrent SSL connections to sygnal, and |
67 | # tends to do so in batches, so we need to allow the pool to keep lots | |
68 | # of idle connections around. | |
220 | # tends to do so in batches, so we need to allow the pool to keep | |
221 | # lots of idle connections around. | |
222 | pool = HTTPConnectionPool(self.reactor) | |
69 | 223 | pool.maxPersistentPerHost = max((100 * CACHE_SIZE_FACTOR, 5)) |
70 | 224 | pool.cachedConnectionTimeout = 2 * 60 |
71 | 225 | |
73 | 227 | # BrowserLikePolicyForHTTPS which will do regular cert validation |
74 | 228 | # 'like a browser' |
75 | 229 | self.agent = Agent( |
76 | reactor, | |
230 | self.reactor, | |
77 | 231 | connectTimeout=15, |
78 | contextFactory=hs.get_http_client_context_factory(), | |
232 | contextFactory=self.hs.get_http_client_context_factory(), | |
79 | 233 | pool=pool, |
80 | 234 | ) |
81 | self.user_agent = hs.version_string | |
82 | self.clock = hs.get_clock() | |
83 | if hs.config.user_agent_suffix: | |
84 | self.user_agent = "%s %s" % (self.user_agent, hs.config.user_agent_suffix,) | |
85 | ||
86 | self.user_agent = self.user_agent.encode('ascii') | |
235 | ||
236 | if self._ip_blacklist: | |
237 | # If we have an IP blacklist, we also install the blacklisting Agent | |
238 | # wrapper, which blocks direct requests to IP addresses that would not | |
239 | # otherwise be caught at DNS resolution time. | |
240 | self.agent = BlacklistingAgentWrapper( | |
241 | self.agent, | |
242 | self.reactor, | |
243 | ip_whitelist=self._ip_whitelist, | |
244 | ip_blacklist=self._ip_blacklist, | |
245 | ) | |
87 | 246 | |
88 | 247 | @defer.inlineCallbacks |
89 | 248 | def request(self, method, uri, data=b'', headers=None): |
249 | """ | |
250 | Args: | |
251 | method (str): HTTP method to use. | |
252 | uri (str): URI to query. | |
253 | data (bytes): Data to send in the request body, if applicable. | |
254 | headers (t.w.http_headers.Headers): Request headers. | |
255 | ||
256 | Raises: | |
257 | SynapseError: If the IP is blacklisted. | |
258 | """ | |
90 | 259 | # A small wrapper around self.agent.request() so we can easily attach |
91 | 260 | # counters to it |
92 | 261 | outgoing_requests_counter.labels(method).inc() |
96 | 265 | |
97 | 266 | try: |
98 | 267 | request_deferred = treq.request( |
99 | method, uri, agent=self.agent, data=data, headers=headers | |
268 | method, | |
269 | uri, | |
270 | agent=self.agent, | |
271 | data=data, | |
272 | headers=headers, | |
273 | **self._extra_treq_args | |
100 | 274 | ) |
101 | 275 | request_deferred = timeout_deferred( |
102 | request_deferred, 60, self.hs.get_reactor(), | |
276 | request_deferred, | |
277 | 60, | |
278 | self.hs.get_reactor(), | |
103 | 279 | cancelled_to_request_timed_out_error, |
104 | 280 | ) |
105 | 281 | response = yield make_deferred_yieldable(request_deferred) |
106 | 282 | |
107 | 283 | incoming_responses_counter.labels(method, response.code).inc() |
108 | 284 | logger.info( |
109 | "Received response to %s %s: %s", | |
110 | method, redact_uri(uri), response.code | |
285 | "Received response to %s %s: %s", method, redact_uri(uri), response.code | |
111 | 286 | ) |
112 | 287 | defer.returnValue(response) |
113 | 288 | except Exception as e: |
114 | 289 | incoming_responses_counter.labels(method, "ERR").inc() |
115 | 290 | logger.info( |
116 | 291 | "Error sending request to %s %s: %s %s", |
117 | method, redact_uri(uri), type(e).__name__, e.args[0] | |
292 | method, | |
293 | redact_uri(uri), | |
294 | type(e).__name__, | |
295 | e.args[0], | |
118 | 296 | ) |
119 | 297 | raise |
120 | 298 | |
139 | 317 | # TODO: Do we ever want to log message contents? |
140 | 318 | logger.debug("post_urlencoded_get_json args: %s", args) |
141 | 319 | |
142 | query_bytes = urllib.parse.urlencode( | |
143 | encode_urlencode_args(args), True).encode("utf8") | |
320 | query_bytes = urllib.parse.urlencode(encode_urlencode_args(args), True).encode( | |
321 | "utf8" | |
322 | ) | |
144 | 323 | |
145 | 324 | actual_headers = { |
146 | 325 | b"Content-Type": [b"application/x-www-form-urlencoded"], |
150 | 329 | actual_headers.update(headers) |
151 | 330 | |
152 | 331 | response = yield self.request( |
153 | "POST", | |
154 | uri, | |
155 | headers=Headers(actual_headers), | |
156 | data=query_bytes | |
332 | "POST", uri, headers=Headers(actual_headers), data=query_bytes | |
157 | 333 | ) |
158 | 334 | |
159 | 335 | if 200 <= response.code < 300: |
192 | 368 | actual_headers.update(headers) |
193 | 369 | |
194 | 370 | response = yield self.request( |
195 | "POST", | |
196 | uri, | |
197 | headers=Headers(actual_headers), | |
198 | data=json_str | |
371 | "POST", uri, headers=Headers(actual_headers), data=json_str | |
199 | 372 | ) |
200 | 373 | |
201 | 374 | body = yield make_deferred_yieldable(readBody(response)) |
263 | 436 | actual_headers.update(headers) |
264 | 437 | |
265 | 438 | response = yield self.request( |
266 | "PUT", | |
267 | uri, | |
268 | headers=Headers(actual_headers), | |
269 | data=json_str | |
439 | "PUT", uri, headers=Headers(actual_headers), data=json_str | |
270 | 440 | ) |
271 | 441 | |
272 | 442 | body = yield make_deferred_yieldable(readBody(response)) |
298 | 468 | query_bytes = urllib.parse.urlencode(args, True) |
299 | 469 | uri = "%s?%s" % (uri, query_bytes) |
300 | 470 | |
301 | actual_headers = { | |
302 | b"User-Agent": [self.user_agent], | |
303 | } | |
471 | actual_headers = {b"User-Agent": [self.user_agent]} | |
304 | 472 | if headers: |
305 | 473 | actual_headers.update(headers) |
306 | 474 | |
307 | response = yield self.request( | |
308 | "GET", | |
309 | uri, | |
310 | headers=Headers(actual_headers), | |
311 | ) | |
475 | response = yield self.request("GET", uri, headers=Headers(actual_headers)) | |
312 | 476 | |
313 | 477 | body = yield make_deferred_yieldable(readBody(response)) |
314 | 478 | |
333 | 497 | headers, absolute URI of the response and HTTP response code. |
334 | 498 | """ |
335 | 499 | |
336 | actual_headers = { | |
337 | b"User-Agent": [self.user_agent], | |
338 | } | |
500 | actual_headers = {b"User-Agent": [self.user_agent]} | |
339 | 501 | if headers: |
340 | 502 | actual_headers.update(headers) |
341 | 503 | |
342 | response = yield self.request( | |
343 | "GET", | |
344 | url, | |
345 | headers=Headers(actual_headers), | |
346 | ) | |
504 | response = yield self.request("GET", url, headers=Headers(actual_headers)) | |
347 | 505 | |
348 | 506 | resp_headers = dict(response.headers.getAllRawHeaders()) |
349 | 507 | |
350 | if (b'Content-Length' in resp_headers and | |
351 | int(resp_headers[b'Content-Length']) > max_size): | |
508 | if ( | |
509 | b'Content-Length' in resp_headers | |
510 | and int(resp_headers[b'Content-Length'][0]) > max_size | |
511 | ): | |
352 | 512 | logger.warn("Requested URL is too large > %r bytes" % (self.max_size,)) |
353 | 513 | raise SynapseError( |
354 | 514 | 502, |
358 | 518 | |
359 | 519 | if response.code > 299: |
360 | 520 | logger.warn("Got %d when downloading %s" % (response.code, url)) |
361 | raise SynapseError( | |
362 | 502, | |
363 | "Got error %d" % (response.code,), | |
364 | Codes.UNKNOWN, | |
365 | ) | |
521 | raise SynapseError(502, "Got error %d" % (response.code,), Codes.UNKNOWN) | |
366 | 522 | |
367 | 523 | # TODO: if our Content-Type is HTML or something, just read the first |
368 | 524 | # N bytes into RAM rather than saving it all to disk only to read it |
369 | 525 | # straight back in again |
370 | 526 | |
371 | 527 | try: |
372 | length = yield make_deferred_yieldable(_readBodyToFile( | |
373 | response, output_stream, max_size, | |
374 | )) | |
528 | length = yield make_deferred_yieldable( | |
529 | _readBodyToFile(response, output_stream, max_size) | |
530 | ) | |
375 | 531 | except Exception as e: |
376 | 532 | logger.exception("Failed to download body") |
377 | 533 | raise SynapseError( |
378 | 502, | |
379 | ("Failed to download remote body: %s" % e), | |
380 | Codes.UNKNOWN, | |
534 | 502, ("Failed to download remote body: %s" % e), Codes.UNKNOWN | |
381 | 535 | ) |
382 | 536 | |
383 | 537 | defer.returnValue( |
386 | 540 | resp_headers, |
387 | 541 | response.request.absoluteURI.decode('ascii'), |
388 | 542 | response.code, |
389 | ), | |
543 | ) | |
390 | 544 | ) |
391 | 545 | |
392 | 546 | |
393 | 547 | # XXX: FIXME: This is horribly copy-pasted from matrixfederationclient. |
394 | 548 | # The two should be factored out. |
549 | ||
395 | 550 | |
396 | 551 | class _ReadBodyToFileProtocol(protocol.Protocol): |
397 | 552 | def __init__(self, stream, deferred, max_size): |
404 | 559 | self.stream.write(data) |
405 | 560 | self.length += len(data) |
406 | 561 | if self.max_size is not None and self.length >= self.max_size: |
407 | self.deferred.errback(SynapseError( | |
408 | 502, | |
409 | "Requested file is too large > %r bytes" % (self.max_size,), | |
410 | Codes.TOO_LARGE, | |
411 | )) | |
562 | self.deferred.errback( | |
563 | SynapseError( | |
564 | 502, | |
565 | "Requested file is too large > %r bytes" % (self.max_size,), | |
566 | Codes.TOO_LARGE, | |
567 | ) | |
568 | ) | |
412 | 569 | self.deferred = defer.Deferred() |
413 | 570 | self.transport.loseConnection() |
414 | 571 | |
426 | 583 | # XXX: FIXME: This is horribly copy-pasted from matrixfederationclient. |
427 | 584 | # The two should be factored out. |
428 | 585 | |
586 | ||
429 | 587 | def _readBodyToFile(response, stream, max_size): |
430 | 588 | d = defer.Deferred() |
431 | 589 | response.deliverBody(_ReadBodyToFileProtocol(stream, d, max_size)) |
448 | 606 | "POST", |
449 | 607 | url, |
450 | 608 | data=query_bytes, |
451 | headers=Headers({ | |
452 | b"Content-Type": [b"application/x-www-form-urlencoded"], | |
453 | b"User-Agent": [self.user_agent], | |
454 | }) | |
609 | headers=Headers( | |
610 | { | |
611 | b"Content-Type": [b"application/x-www-form-urlencoded"], | |
612 | b"User-Agent": [self.user_agent], | |
613 | } | |
614 | ), | |
455 | 615 | ) |
456 | 616 | |
457 | 617 | try: |
460 | 620 | except PartialDownloadError as e: |
461 | 621 | # twisted dislikes google's response, no content length. |
462 | 622 | defer.returnValue(e.response) |
463 | ||
464 | ||
465 | class SpiderEndpointFactory(object): | |
466 | def __init__(self, hs): | |
467 | self.blacklist = hs.config.url_preview_ip_range_blacklist | |
468 | self.whitelist = hs.config.url_preview_ip_range_whitelist | |
469 | self.policyForHTTPS = hs.get_http_client_context_factory() | |
470 | ||
471 | def endpointForURI(self, uri): | |
472 | logger.info("Getting endpoint for %s", uri.toBytes()) | |
473 | ||
474 | if uri.scheme == b"http": | |
475 | endpoint_factory = HostnameEndpoint | |
476 | elif uri.scheme == b"https": | |
477 | tlsCreator = self.policyForHTTPS.creatorForNetloc(uri.host, uri.port) | |
478 | ||
479 | def endpoint_factory(reactor, host, port, **kw): | |
480 | return wrapClientTLS( | |
481 | tlsCreator, | |
482 | HostnameEndpoint(reactor, host, port, **kw)) | |
483 | else: | |
484 | logger.warn("Can't get endpoint for unrecognised scheme %s", uri.scheme) | |
485 | return None | |
486 | return SpiderEndpoint( | |
487 | reactor, uri.host, uri.port, self.blacklist, self.whitelist, | |
488 | endpoint=endpoint_factory, endpoint_kw_args=dict(timeout=15), | |
489 | ) | |
490 | ||
491 | ||
492 | class SpiderHttpClient(SimpleHttpClient): | |
493 | """ | |
494 | Separate HTTP client for spidering arbitrary URLs. | |
495 | Special in that it follows retries and has a UA that looks | |
496 | like a browser. | |
497 | ||
498 | used by the preview_url endpoint in the content repo. | |
499 | """ | |
500 | def __init__(self, hs): | |
501 | SimpleHttpClient.__init__(self, hs) | |
502 | # clobber the base class's agent and UA: | |
503 | self.agent = ContentDecoderAgent( | |
504 | BrowserLikeRedirectAgent( | |
505 | Agent.usingEndpointFactory( | |
506 | reactor, | |
507 | SpiderEndpointFactory(hs) | |
508 | ) | |
509 | ), [(b'gzip', GzipDecoder)] | |
510 | ) | |
511 | # We could look like Chrome: | |
512 | # self.user_agent = ("Mozilla/5.0 (%s) (KHTML, like Gecko) | |
513 | # Chrome Safari" % hs.version_string) | |
514 | 623 | |
515 | 624 | |
516 | 625 | def encode_urlencode_args(args): |
215 | 215 | d.addCallback(update_request_time) |
216 | 216 | |
217 | 217 | return d |
218 | ||
219 | ||
220 | class SpiderEndpoint(object): | |
221 | """An endpoint which refuses to connect to blacklisted IP addresses | |
222 | Implements twisted.internet.interfaces.IStreamClientEndpoint. | |
223 | """ | |
224 | def __init__(self, reactor, host, port, blacklist, whitelist, | |
225 | endpoint=HostnameEndpoint, endpoint_kw_args={}): | |
226 | self.reactor = reactor | |
227 | self.host = host | |
228 | self.port = port | |
229 | self.blacklist = blacklist | |
230 | self.whitelist = whitelist | |
231 | self.endpoint = endpoint | |
232 | self.endpoint_kw_args = endpoint_kw_args | |
233 | ||
234 | @defer.inlineCallbacks | |
235 | def connect(self, protocolFactory): | |
236 | address = yield self.reactor.resolve(self.host) | |
237 | ||
238 | from netaddr import IPAddress | |
239 | ip_address = IPAddress(address) | |
240 | ||
241 | if ip_address in self.blacklist: | |
242 | if self.whitelist is None or ip_address not in self.whitelist: | |
243 | raise ConnectError( | |
244 | "Refusing to spider blacklisted IP address %s" % address | |
245 | ) | |
246 | ||
247 | logger.info("Connecting to %s:%s", address, self.port) | |
248 | endpoint = self.endpoint( | |
249 | self.reactor, address, self.port, **self.endpoint_kw_args | |
250 | ) | |
251 | connection = yield endpoint.connect(protocolFactory) | |
252 | defer.returnValue(connection) | |
253 | 218 | |
254 | 219 | |
255 | 220 | class SRVClientEndpoint(object): |
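The accept/refuse decision made in `SpiderEndpoint.connect` above can be sketched in isolation. This is a minimal sketch using the stdlib `ipaddress` module as a stand-in for `netaddr`'s `IPAddress`/`IPSet`; the whitelist punching a hole through the blacklist mirrors the `connect()` logic in the hunk.

```python
# Sketch of SpiderEndpoint's blacklist check, with stdlib ipaddress
# standing in for netaddr (an assumption; the real code uses netaddr).
import ipaddress

def is_blocked(address, blacklist, whitelist):
    """Refuse a resolved IP if it is blacklisted, unless the
    whitelist punches a hole through the blacklist."""
    ip = ipaddress.ip_address(address)

    def in_any(networks):
        return any(ip in net for net in networks)

    if in_any(blacklist):
        return whitelist is None or not in_any(whitelist)
    return False

BLACKLIST = [ipaddress.ip_network("10.0.0.0/8"),
             ipaddress.ip_network("127.0.0.0/8")]
WHITELIST = [ipaddress.ip_network("10.1.0.0/16")]
```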
18 | 18 | import sys |
19 | 19 | from io import BytesIO |
20 | 20 | |
21 | from six import PY3, string_types | |
21 | from six import PY3, raise_from, string_types | |
22 | 22 | from six.moves import urllib |
23 | 23 | |
24 | 24 | import attr |
40 | 40 | Codes, |
41 | 41 | FederationDeniedError, |
42 | 42 | HttpResponseException, |
43 | RequestSendFailed, | |
43 | 44 | SynapseError, |
44 | 45 | ) |
45 | 46 | from synapse.http.endpoint import matrix_federation_endpoint |
230 | 231 | Deferred: resolves with the http response object on success. |
231 | 232 | |
232 | 233 | Fails with ``HttpResponseException``: if we get an HTTP response |
233 | code >= 300. | |
234 | code >= 300 (except 429). | |
234 | 235 | |
235 | 236 | Fails with ``NotRetryingDestination`` if we are not yet ready |
236 | 237 | to retry this server. |
238 | 239 | Fails with ``FederationDeniedError`` if this destination |
239 | 240 | is not on our federation whitelist |
240 | 241 | |
241 | (May also fail with plenty of other Exceptions for things like DNS | |
242 | failures, connection failures, SSL failures.) | |
242 | Fails with ``RequestSendFailed`` if there were problems connecting to | |
243 | the remote, due to e.g. DNS failures, connection timeouts etc. | |
243 | 244 | """ |
244 | 245 | if timeout: |
245 | 246 | _sec_timeout = timeout / 1000 |
334 | 335 | reactor=self.hs.get_reactor(), |
335 | 336 | ) |
336 | 337 | |
337 | with Measure(self.clock, "outbound_request"): | |
338 | response = yield make_deferred_yieldable( | |
339 | request_deferred, | |
338 | try: | |
339 | with Measure(self.clock, "outbound_request"): | |
340 | response = yield make_deferred_yieldable( | |
341 | request_deferred, | |
342 | ) | |
343 | except DNSLookupError as e: | |
344 | raise_from(RequestSendFailed(e, can_retry=retry_on_dns_fail), e) | |
345 | except Exception as e: | |
346 | raise_from(RequestSendFailed(e, can_retry=True), e) | |
347 | ||
348 | logger.info( | |
349 | "{%s} [%s] Got response headers: %d %s", | |
350 | request.txn_id, | |
351 | request.destination, | |
352 | response.code, | |
353 | response.phrase.decode('ascii', errors='replace'), | |
354 | ) | |
355 | ||
356 | if 200 <= response.code < 300: | |
357 | pass | |
358 | else: | |
359 | # :'( | |
360 | # Update transactions table? | |
361 | d = treq.content(response) | |
362 | d = timeout_deferred( | |
363 | d, | |
364 | timeout=_sec_timeout, | |
365 | reactor=self.hs.get_reactor(), | |
340 | 366 | ) |
341 | 367 | |
368 | try: | |
369 | body = yield make_deferred_yieldable(d) | |
370 | except Exception as e: | |
371 | # Eh, we're already going to raise an exception so let's | 
372 | # ignore if this fails. | |
373 | logger.warn( | |
374 | "{%s} [%s] Failed to get error response: %s %s: %s", | |
375 | request.txn_id, | |
376 | request.destination, | |
377 | request.method, | |
378 | url_str, | |
379 | _flatten_response_never_received(e), | |
380 | ) | |
381 | body = None | |
382 | ||
383 | e = HttpResponseException( | |
384 | response.code, response.phrase, body | |
385 | ) | |
386 | ||
387 | # Retry if the error is a 429 (Too Many Requests), | |
388 | # otherwise just raise a standard HttpResponseException | |
389 | if response.code == 429: | |
390 | raise_from(RequestSendFailed(e, can_retry=True), e) | |
391 | else: | |
392 | raise e | |
393 | ||
342 | 394 | break |
395 | except RequestSendFailed as e: | |
396 | logger.warn( | |
397 | "{%s} [%s] Request failed: %s %s: %s", | |
398 | request.txn_id, | |
399 | request.destination, | |
400 | request.method, | |
401 | url_str, | |
402 | _flatten_response_never_received(e.inner_exception), | |
403 | ) | |
404 | ||
405 | if not e.can_retry: | |
406 | raise | |
407 | ||
408 | if retries_left and not timeout: | |
409 | if long_retries: | |
410 | delay = 4 ** (MAX_LONG_RETRIES + 1 - retries_left) | |
411 | delay = min(delay, 60) | |
412 | delay *= random.uniform(0.8, 1.4) | |
413 | else: | |
414 | delay = 0.5 * 2 ** (MAX_SHORT_RETRIES - retries_left) | |
415 | delay = min(delay, 2) | |
416 | delay *= random.uniform(0.8, 1.4) | |
417 | ||
418 | logger.debug( | |
419 | "{%s} [%s] Waiting %ss before re-sending...", | |
420 | request.txn_id, | |
421 | request.destination, | |
422 | delay, | |
423 | ) | |
424 | ||
425 | yield self.clock.sleep(delay) | |
426 | retries_left -= 1 | |
427 | else: | |
428 | raise | |
429 | ||
343 | 430 | except Exception as e: |
344 | 431 | logger.warn( |
345 | 432 | "{%s} [%s] Request failed: %s %s: %s", |
349 | 436 | url_str, |
350 | 437 | _flatten_response_never_received(e), |
351 | 438 | ) |
352 | ||
353 | if not retry_on_dns_fail and isinstance(e, DNSLookupError): | |
354 | raise | |
355 | ||
356 | if retries_left and not timeout: | |
357 | if long_retries: | |
358 | delay = 4 ** (MAX_LONG_RETRIES + 1 - retries_left) | |
359 | delay = min(delay, 60) | |
360 | delay *= random.uniform(0.8, 1.4) | |
361 | else: | |
362 | delay = 0.5 * 2 ** (MAX_SHORT_RETRIES - retries_left) | |
363 | delay = min(delay, 2) | |
364 | delay *= random.uniform(0.8, 1.4) | |
365 | ||
366 | logger.debug( | |
367 | "{%s} [%s] Waiting %ss before re-sending...", | |
368 | request.txn_id, | |
369 | request.destination, | |
370 | delay, | |
371 | ) | |
372 | ||
373 | yield self.clock.sleep(delay) | |
374 | retries_left -= 1 | |
375 | else: | |
376 | raise | |
377 | ||
378 | logger.info( | |
379 | "{%s} [%s] Got response headers: %d %s", | |
380 | request.txn_id, | |
381 | request.destination, | |
382 | response.code, | |
383 | response.phrase.decode('ascii', errors='replace'), | |
384 | ) | |
385 | ||
386 | if 200 <= response.code < 300: | |
387 | pass | |
388 | else: | |
389 | # :'( | |
390 | # Update transactions table? | |
391 | d = treq.content(response) | |
392 | d = timeout_deferred( | |
393 | d, | |
394 | timeout=_sec_timeout, | |
395 | reactor=self.hs.get_reactor(), | |
396 | ) | |
397 | body = yield make_deferred_yieldable(d) | |
398 | raise HttpResponseException( | |
399 | response.code, response.phrase, body | |
400 | ) | |
439 | raise | |
401 | 440 | |
402 | 441 | defer.returnValue(response) |
403 | 442 | |
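The retry loop above backs off exponentially with jitter, now triggered by `RequestSendFailed` rather than a bare `Exception`. A sketch of the delay computation, assuming the module's `MAX_LONG_RETRIES = 10` and `MAX_SHORT_RETRIES = 3` (values taken from context, not shown in this hunk):

```python
# Sketch of the backoff used between federation request retries.
import random

MAX_LONG_RETRIES = 10   # assumed module constants, not in this hunk
MAX_SHORT_RETRIES = 3

def retry_delay(retries_left, long_retries):
    """Seconds to sleep before the next attempt."""
    if long_retries:
        delay = min(4 ** (MAX_LONG_RETRIES + 1 - retries_left), 60)
    else:
        delay = min(0.5 * 2 ** (MAX_SHORT_RETRIES - retries_left), 2)
    # jitter of 0.8x-1.4x, as in the hunk
    return delay * random.uniform(0.8, 1.4)
```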
783 | 822 | headers (twisted.web.http_headers.Headers): headers to check |
784 | 823 | |
785 | 824 | Raises: |
786 | RuntimeError if the | |
825 | RequestSendFailed: if the Content-Type header is missing or isn't JSON | |
787 | 826 | |
788 | 827 | """ |
789 | 828 | c_type = headers.getRawHeaders(b"Content-Type") |
790 | 829 | if c_type is None: |
791 | raise RuntimeError( | |
830 | raise RequestSendFailed(RuntimeError( | |
792 | 831 | "No Content-Type header" |
793 | ) | |
832 | ), can_retry=False) | |
794 | 833 | |
795 | 834 | c_type = c_type[0].decode('ascii') # only the first header |
796 | 835 | val, options = cgi.parse_header(c_type) |
797 | 836 | if val != "application/json": |
798 | raise RuntimeError( | |
837 | raise RequestSendFailed(RuntimeError( | |
799 | 838 | "Content-Type not application/json: was '%s'" % c_type |
800 | ) | |
839 | ), can_retry=False) | |
801 | 840 | |
802 | 841 | |
803 | 842 | def encode_query_args(args): |
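The Content-Type validation above now raises a non-retryable `RequestSendFailed` instead of a bare `RuntimeError`. A sketch of the same check, using a plain string split instead of `cgi.parse_header` and `ValueError` as a stand-in for `RequestSendFailed` so it runs standalone:

```python
# Sketch of the Content-Type check; ValueError stands in for
# RequestSendFailed(..., can_retry=False).
def check_content_type_is_json(raw_headers):
    """raw_headers: the list from headers.getRawHeaders, or None."""
    if raw_headers is None:
        raise ValueError("No Content-Type header")
    # only the first header; drop any ";charset=..." parameters
    val = raw_headers[0].split(";", 1)[0].strip()
    if val != "application/json":
        raise ValueError(
            "Content-Type not application/json: was '%s'" % val
        )
```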
14 | 14 | # limitations under the License. |
15 | 15 | |
16 | 16 | import logging |
17 | from distutils.version import LooseVersion | |
17 | ||
18 | from pkg_resources import DistributionNotFound, VersionConflict, get_distribution | |
18 | 19 | |
19 | 20 | logger = logging.getLogger(__name__) |
20 | 21 | |
21 | # this dict maps from python package name to a list of modules we expect it to | |
22 | # provide. | |
22 | ||
23 | # REQUIREMENTS is a simple list of requirement specifiers[1], all of which | 
24 | # must be installed. It is passed to setup() as install_requires in setup.py. | 
23 | 25 | # |
24 | # the key is a "requirement specifier", as used as a parameter to `pip | |
25 | # install`[1], or an `install_requires` argument to `setuptools.setup` [2]. | |
26 | # | |
27 | # the value is a sequence of strings; each entry should be the name of the | |
28 | # python module, optionally followed by a version assertion which can be either | |
29 | # ">=<ver>" or "==<ver>". | |
26 | # CONDITIONAL_REQUIREMENTS is the optional dependencies, represented as a dict | |
27 | # of lists. The dict key is the optional dependency name and can be passed to | |
28 | # pip when installing. The list is a series of requirement specifiers[1] to be | |
29 | # installed when that optional dependency requirement is specified. It is passed | |
30 | # to setup() as extras_require in setup.py | |
30 | 31 | # |
31 | 32 | # [1] https://pip.pypa.io/en/stable/reference/pip_install/#requirement-specifiers. |
32 | # [2] https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-dependencies | |
33 | REQUIREMENTS = { | |
34 | "jsonschema>=2.5.1": ["jsonschema>=2.5.1"], | |
35 | "frozendict>=1": ["frozendict"], | |
36 | "unpaddedbase64>=1.1.0": ["unpaddedbase64>=1.1.0"], | |
37 | "canonicaljson>=1.1.3": ["canonicaljson>=1.1.3"], | |
38 | "signedjson>=1.0.0": ["signedjson>=1.0.0"], | |
39 | "pynacl>=1.2.1": ["nacl>=1.2.1", "nacl.bindings"], | |
40 | "service_identity>=16.0.0": ["service_identity>=16.0.0"], | |
41 | "Twisted>=17.1.0": ["twisted>=17.1.0"], | |
42 | "treq>=15.1": ["treq>=15.1"], | |
43 | 33 | |
34 | REQUIREMENTS = [ | |
35 | "jsonschema>=2.5.1", | |
36 | "frozendict>=1", | |
37 | "unpaddedbase64>=1.1.0", | |
38 | "canonicaljson>=1.1.3", | |
39 | "signedjson>=1.0.0", | |
40 | "pynacl>=1.2.1", | |
41 | "service_identity>=16.0.0", | |
42 | "Twisted>=17.1.0", | |
43 | "treq>=15.1", | |
44 | 44 | # Twisted has required pyopenssl 16.0 since about Twisted 16.6. |
45 | "pyopenssl>=16.0.0": ["OpenSSL>=16.0.0"], | |
46 | ||
47 | "pyyaml>=3.11": ["yaml"], | |
48 | "pyasn1>=0.1.9": ["pyasn1"], | |
49 | "pyasn1-modules>=0.0.7": ["pyasn1_modules"], | |
50 | "daemonize>=2.3.1": ["daemonize"], | |
51 | "bcrypt>=3.1.0": ["bcrypt>=3.1.0"], | |
52 | "pillow>=3.1.2": ["PIL"], | |
53 | "sortedcontainers>=1.4.4": ["sortedcontainers"], | |
54 | "psutil>=2.0.0": ["psutil>=2.0.0"], | |
55 | "pysaml2>=3.0.0": ["saml2"], | |
56 | "pymacaroons-pynacl>=0.9.3": ["pymacaroons"], | |
57 | "msgpack-python>=0.4.2": ["msgpack"], | |
58 | "phonenumbers>=8.2.0": ["phonenumbers"], | |
59 | "six>=1.10": ["six"], | |
60 | ||
45 | "pyopenssl>=16.0.0", | |
46 | "pyyaml>=3.11", | |
47 | "pyasn1>=0.1.9", | |
48 | "pyasn1-modules>=0.0.7", | |
49 | "daemonize>=2.3.1", | |
50 | "bcrypt>=3.1.0", | |
51 | "pillow>=3.1.2", | |
52 | "sortedcontainers>=1.4.4", | |
53 | "psutil>=2.0.0", | |
54 | "pymacaroons-pynacl>=0.9.3", | |
55 | "msgpack-python>=0.4.2", | |
56 | "phonenumbers>=8.2.0", | |
57 | "six>=1.10", | |
61 | 58 | # prometheus_client 0.4.0 changed the format of counter metrics |
62 | 59 | # (cf https://github.com/matrix-org/synapse/issues/4001) |
63 | "prometheus_client>=0.0.18,<0.4.0": ["prometheus_client"], | |
64 | ||
60 | "prometheus_client>=0.0.18,<0.4.0", | |
65 | 61 | # we use attr.s(slots), which arrived in 16.0.0 |
66 | "attrs>=16.0.0": ["attr>=16.0.0"], | |
67 | "netaddr>=0.7.18": ["netaddr"], | |
68 | } | |
62 | "attrs>=16.0.0", | |
63 | "netaddr>=0.7.18", | |
64 | ] | |
69 | 65 | |
70 | 66 | CONDITIONAL_REQUIREMENTS = { |
71 | "email.enable_notifs": { | |
72 | "Jinja2>=2.8": ["Jinja2>=2.8"], | |
73 | "bleach>=1.4.2": ["bleach>=1.4.2"], | |
74 | }, | |
75 | "matrix-synapse-ldap3": { | |
76 | "matrix-synapse-ldap3>=0.1": ["ldap_auth_provider"], | |
77 | }, | |
78 | "postgres": { | |
79 | "psycopg2>=2.6": ["psycopg2"] | |
80 | }, | |
67 | "email.enable_notifs": ["Jinja2>=2.9", "bleach>=1.4.2"], | |
68 | "matrix-synapse-ldap3": ["matrix-synapse-ldap3>=0.1"], | |
69 | "postgres": ["psycopg2>=2.6"], | |
70 | ||
71 | # ConsentResource uses select_autoescape, which arrived in jinja 2.9 | |
72 | "resources.consent": ["Jinja2>=2.9"], | |
73 | ||
74 | "saml2": ["pysaml2>=4.5.0"], | |
75 | "url_preview": ["lxml>=3.5.0"], | |
76 | "test": ["mock>=2.0"], | |
81 | 77 | } |
82 | 78 | |
83 | 79 | |
84 | def requirements(config=None, include_conditional=False): | |
85 | reqs = REQUIREMENTS.copy() | |
86 | if include_conditional: | |
87 | for _, req in CONDITIONAL_REQUIREMENTS.items(): | |
88 | reqs.update(req) | |
89 | return reqs | |
80 | def list_requirements(): | |
81 | deps = set(REQUIREMENTS) | |
82 | for opt in CONDITIONAL_REQUIREMENTS.values(): | |
83 | deps = set(opt) | deps | |
84 | ||
85 | return list(deps) | |
90 | 86 | |
91 | 87 | |
92 | def github_link(project, version, egg): | |
93 | return "https://github.com/%s/tarball/%s/#egg=%s" % (project, version, egg) | |
88 | class DependencyException(Exception): | |
89 | @property | |
90 | def message(self): | |
91 | return "\n".join([ | |
92 | "Missing Requirements: %s" % (", ".join(self.dependencies),), | |
93 | "To install run:", | |
94 | " pip install --upgrade --force %s" % (" ".join(self.dependencies),), | |
95 | "", | |
96 | ]) | |
97 | ||
98 | @property | |
99 | def dependencies(self): | |
100 | for i in self.args[0]: | |
101 | yield '"' + i + '"' | |
94 | 102 | |
95 | 103 | |
96 | DEPENDENCY_LINKS = { | |
97 | } | |
104 | def check_requirements(for_feature=None, _get_distribution=get_distribution): | |
105 | deps_needed = [] | |
106 | errors = [] | |
98 | 107 | |
108 | if for_feature: | |
109 | reqs = CONDITIONAL_REQUIREMENTS[for_feature] | |
110 | else: | |
111 | reqs = REQUIREMENTS | |
99 | 112 | |
100 | class MissingRequirementError(Exception): | |
101 | def __init__(self, message, module_name, dependency): | |
102 | super(MissingRequirementError, self).__init__(message) | |
103 | self.module_name = module_name | |
104 | self.dependency = dependency | |
113 | for dependency in reqs: | |
114 | try: | |
115 | _get_distribution(dependency) | |
116 | except VersionConflict as e: | |
117 | deps_needed.append(dependency) | |
118 | errors.append( | |
119 | "Needed %s, got %s==%s" | |
120 | % (dependency, e.dist.project_name, e.dist.version) | |
121 | ) | |
122 | except DistributionNotFound: | |
123 | deps_needed.append(dependency) | |
124 | errors.append("Needed %s but it was not installed" % (dependency,)) | |
105 | 125 | |
126 | if not for_feature: | |
127 | # Check the optional dependencies are up to date. We allow them to not be | |
128 | # installed. | |
129 | OPTS = sum(CONDITIONAL_REQUIREMENTS.values(), []) | |
106 | 130 | |
107 | def check_requirements(config=None): | |
108 | """Checks that all the modules needed by synapse have been correctly | |
109 | installed and are at the correct version""" | |
110 | for dependency, module_requirements in ( | |
111 | requirements(config, include_conditional=False).items()): | |
112 | for module_requirement in module_requirements: | |
113 | if ">=" in module_requirement: | |
114 | module_name, required_version = module_requirement.split(">=") | |
115 | version_test = ">=" | |
116 | elif "==" in module_requirement: | |
117 | module_name, required_version = module_requirement.split("==") | |
118 | version_test = "==" | |
119 | else: | |
120 | module_name = module_requirement | |
121 | version_test = None | |
131 | for dependency in OPTS: | |
132 | try: | |
133 | _get_distribution(dependency) | |
134 | except VersionConflict: | |
135 | deps_needed.append(dependency) | |
136 | errors.append("Needed %s but it was not installed" % (dependency,)) | |
137 | except DistributionNotFound: | |
138 | # If it's not found, we don't care | |
139 | pass | |
122 | 140 | |
123 | try: | |
124 | module = __import__(module_name) | |
125 | except ImportError: | |
126 | logging.exception( | |
127 | "Can't import %r which is part of %r", | |
128 | module_name, dependency | |
129 | ) | |
130 | raise MissingRequirementError( | |
131 | "Can't import %r which is part of %r" | |
132 | % (module_name, dependency), module_name, dependency | |
133 | ) | |
134 | version = getattr(module, "__version__", None) | |
135 | file_path = getattr(module, "__file__", None) | |
136 | logger.info( | |
137 | "Using %r version %r from %r to satisfy %r", | |
138 | module_name, version, file_path, dependency | |
139 | ) | |
141 | if deps_needed: | |
142 | for e in errors: | |
143 | logging.error(e) | |
140 | 144 | |
141 | if version_test == ">=": | |
142 | if version is None: | |
143 | raise MissingRequirementError( | |
144 | "Version of %r isn't set as __version__ of module %r" | |
145 | % (dependency, module_name), module_name, dependency | |
146 | ) | |
147 | if LooseVersion(version) < LooseVersion(required_version): | |
148 | raise MissingRequirementError( | |
149 | "Version of %r in %r is too old. %r < %r" | |
150 | % (dependency, file_path, version, required_version), | |
151 | module_name, dependency | |
152 | ) | |
153 | elif version_test == "==": | |
154 | if version is None: | |
155 | raise MissingRequirementError( | |
156 | "Version of %r isn't set as __version__ of module %r" | |
157 | % (dependency, module_name), module_name, dependency | |
158 | ) | |
159 | if LooseVersion(version) != LooseVersion(required_version): | |
160 | raise MissingRequirementError( | |
161 | "Unexpected version of %r in %r. %r != %r" | |
162 | % (dependency, file_path, version, required_version), | |
163 | module_name, dependency | |
164 | ) | |
165 | ||
166 | ||
167 | def list_requirements(): | |
168 | result = [] | |
169 | linked = [] | |
170 | for link in DEPENDENCY_LINKS.values(): | |
171 | egg = link.split("#egg=")[1] | |
172 | linked.append(egg.split('-')[0]) | |
173 | result.append(link) | |
174 | for requirement in requirements(include_conditional=True): | |
175 | is_linked = False | |
176 | for link in linked: | |
177 | if requirement.replace('-', '_').startswith(link): | |
178 | is_linked = True | |
179 | if not is_linked: | |
180 | result.append(requirement) | |
181 | return result | |
145 | raise DependencyException(deps_needed) | |
182 | 146 | |
183 | 147 | |
184 | 148 | if __name__ == "__main__": |
185 | 149 | import sys |
150 | ||
186 | 151 | sys.stdout.writelines(req + "\n" for req in list_requirements()) |
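The rewritten `check_requirements()` delegates version checking to `pkg_resources.get_distribution`, catching `DistributionNotFound` and `VersionConflict` rather than importing each module and comparing `__version__` strings. Its control flow can be sketched with the lookup injected (as the hunk's `_get_distribution` parameter allows), using local stand-ins for the `pkg_resources` exceptions:

```python
# Sketch of the check_requirements() flow with an injected lookup,
# so it can be exercised without touching the installed environment.
class DistributionNotFound(Exception):
    """Stand-in for pkg_resources.DistributionNotFound."""

class VersionConflict(Exception):
    """Stand-in for pkg_resources.VersionConflict."""

def find_missing(reqs, get_distribution):
    missing = []
    for dep in reqs:
        try:
            get_distribution(dep)
        except (DistributionNotFound, VersionConflict):
            missing.append(dep)
    return missing

def fake_get_distribution(req):
    # Pretend everything except treq is installed at a good version.
    if req.startswith("treq"):
        raise DistributionNotFound(req)
```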
12 | 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | ||
16 | from six import PY3 | |
17 | 15 | |
18 | 16 | from synapse.http.server import JsonResource |
19 | 17 | from synapse.rest.client import versions |
55 | 53 | user_directory, |
56 | 54 | ) |
57 | 55 | |
58 | if not PY3: | |
59 | from synapse.rest.client.v1_only import ( | |
60 | register as v1_register, | |
61 | ) | |
62 | ||
63 | 56 | |
64 | 57 | class ClientRestResource(JsonResource): |
65 | 58 | """A resource for version 1 of the matrix client API.""" |
71 | 64 | @staticmethod |
72 | 65 | def register_servlets(client_resource, hs): |
73 | 66 | versions.register_servlets(client_resource) |
74 | ||
75 | if not PY3: | |
76 | # "v1" (Python 2 only) | |
77 | v1_register.register_servlets(hs, client_resource) | |
78 | 67 | |
79 | 68 | # Deprecated in r0 |
80 | 69 | initial_sync.register_servlets(hs, client_resource) |
22 | 22 | |
23 | 23 | from twisted.internet import defer |
24 | 24 | |
25 | from synapse.api.constants import Membership | |
25 | from synapse.api.constants import Membership, UserTypes | |
26 | 26 | from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError |
27 | 27 | from synapse.http.servlet import ( |
28 | 28 | assert_params_in_dict, |
157 | 157 | raise SynapseError(400, "Invalid password") |
158 | 158 | |
159 | 159 | admin = body.get("admin", None) |
160 | user_type = body.get("user_type", None) | |
161 | ||
162 | if user_type is not None and user_type not in UserTypes.ALL_USER_TYPES: | |
163 | raise SynapseError(400, "Invalid user type") | |
164 | ||
160 | 165 | got_mac = body["mac"] |
161 | 166 | |
162 | 167 | want_mac = hmac.new( |
170 | 175 | want_mac.update(password) |
171 | 176 | want_mac.update(b"\x00") |
172 | 177 | want_mac.update(b"admin" if admin else b"notadmin") |
178 | if user_type: | |
179 | want_mac.update(b"\x00") | |
180 | want_mac.update(user_type.encode('utf8')) | |
173 | 181 | want_mac = want_mac.hexdigest() |
174 | 182 | |
175 | 183 | if not hmac.compare_digest( |
188 | 196 | password=body["password"], |
189 | 197 | admin=bool(admin), |
190 | 198 | generate_token=False, |
199 | user_type=user_type, | |
191 | 200 | ) |
192 | 201 | |
193 | 202 | result = yield register._create_registration_details(user_id, body) |
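A client computing the shared-secret registration MAC must append the new optional `user_type` field after the admin flag, exactly as the server does above. A sketch of the client side; the leading fields (nonce, username) are assumptions from context, since only the password/admin/user_type tail appears in this hunk:

```python
# How a client would build the registration MAC with the optional
# user_type field. Leading fields (nonce, username) are assumed.
import hashlib
import hmac

def build_register_mac(shared_secret, nonce, username, password,
                       admin=False, user_type=None):
    mac = hmac.new(shared_secret, digestmod=hashlib.sha1)
    for field in (nonce, username, password):
        mac.update(field)
        mac.update(b"\x00")
    mac.update(b"admin" if admin else b"notadmin")
    if user_type:
        # Only appended when present, so MACs computed by older
        # clients (which never send user_type) remain valid.
        mac.update(b"\x00")
        mac.update(user_type.encode("utf8"))
    return mac.hexdigest()
```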
17 | 17 | |
18 | 18 | from six.moves import urllib |
19 | 19 | |
20 | from canonicaljson import json | |
21 | from saml2 import BINDING_HTTP_POST, config | |
22 | from saml2.client import Saml2Client | |
23 | ||
24 | 20 | from twisted.internet import defer |
25 | 21 | from twisted.web.client import PartialDownloadError |
26 | 22 | |
27 | 23 | from synapse.api.errors import Codes, LoginError, SynapseError |
28 | 24 | from synapse.http.server import finish_request |
29 | from synapse.http.servlet import RestServlet, parse_json_object_from_request | |
30 | from synapse.types import UserID | |
25 | from synapse.http.servlet import ( | |
26 | RestServlet, | |
27 | parse_json_object_from_request, | |
28 | parse_string, | |
29 | ) | |
30 | from synapse.rest.well_known import WellKnownBuilder | |
31 | from synapse.types import UserID, map_username_to_mxid_localpart | |
31 | 32 | from synapse.util.msisdn import phone_number_to_msisdn |
32 | 33 | |
33 | 34 | from .base import ClientV1RestServlet, client_path_patterns |
80 | 81 | |
81 | 82 | class LoginRestServlet(ClientV1RestServlet): |
82 | 83 | PATTERNS = client_path_patterns("/login$") |
83 | SAML2_TYPE = "m.login.saml2" | |
84 | 84 | CAS_TYPE = "m.login.cas" |
85 | 85 | SSO_TYPE = "m.login.sso" |
86 | 86 | TOKEN_TYPE = "m.login.token" |
88 | 88 | |
89 | 89 | def __init__(self, hs): |
90 | 90 | super(LoginRestServlet, self).__init__(hs) |
91 | self.idp_redirect_url = hs.config.saml2_idp_redirect_url | |
92 | self.saml2_enabled = hs.config.saml2_enabled | |
93 | 91 | self.jwt_enabled = hs.config.jwt_enabled |
94 | 92 | self.jwt_secret = hs.config.jwt_secret |
95 | 93 | self.jwt_algorithm = hs.config.jwt_algorithm |
97 | 95 | self.auth_handler = self.hs.get_auth_handler() |
98 | 96 | self.device_handler = self.hs.get_device_handler() |
99 | 97 | self.handlers = hs.get_handlers() |
98 | self._well_known_builder = WellKnownBuilder(hs) | |
100 | 99 | |
101 | 100 | def on_GET(self, request): |
102 | 101 | flows = [] |
103 | 102 | if self.jwt_enabled: |
104 | 103 | flows.append({"type": LoginRestServlet.JWT_TYPE}) |
105 | if self.saml2_enabled: | |
106 | flows.append({"type": LoginRestServlet.SAML2_TYPE}) | |
107 | 104 | if self.cas_enabled: |
108 | 105 | flows.append({"type": LoginRestServlet.SSO_TYPE}) |
109 | 106 | |
133 | 130 | def on_POST(self, request): |
134 | 131 | login_submission = parse_json_object_from_request(request) |
135 | 132 | try: |
136 | if self.saml2_enabled and (login_submission["type"] == | |
137 | LoginRestServlet.SAML2_TYPE): | |
138 | relay_state = "" | |
139 | if "relay_state" in login_submission: | |
140 | relay_state = "&RelayState=" + urllib.parse.quote( | |
141 | login_submission["relay_state"]) | |
142 | result = { | |
143 | "uri": "%s%s" % (self.idp_redirect_url, relay_state) | |
144 | } | |
145 | defer.returnValue((200, result)) | |
146 | elif self.jwt_enabled and (login_submission["type"] == | |
147 | LoginRestServlet.JWT_TYPE): | |
133 | if self.jwt_enabled and (login_submission["type"] == | |
134 | LoginRestServlet.JWT_TYPE): | |
148 | 135 | result = yield self.do_jwt_login(login_submission) |
149 | defer.returnValue(result) | |
150 | 136 | elif login_submission["type"] == LoginRestServlet.TOKEN_TYPE: |
151 | 137 | result = yield self.do_token_login(login_submission) |
152 | defer.returnValue(result) | |
153 | 138 | else: |
154 | 139 | result = yield self._do_other_login(login_submission) |
155 | defer.returnValue(result) | |
156 | 140 | except KeyError: |
157 | 141 | raise SynapseError(400, "Missing JSON keys.") |
142 | ||
143 | well_known_data = self._well_known_builder.get_well_known() | |
144 | if well_known_data: | |
145 | result["well_known"] = well_known_data | |
146 | defer.returnValue((200, result)) | |
158 | 147 | |
159 | 148 | @defer.inlineCallbacks |
160 | 149 | def _do_other_login(self, login_submission): |
164 | 153 | login_submission: |
165 | 154 | |
166 | 155 | Returns: |
167 | (int, object): HTTP code/response | |
156 | dict: HTTP response | |
168 | 157 | """ |
169 | 158 | # Log the request we got, but only certain fields to minimise the chance of |
170 | 159 | # logging someone's password (even if they accidentally put it in the wrong |
247 | 236 | if callback is not None: |
248 | 237 | yield callback(result) |
249 | 238 | |
250 | defer.returnValue((200, result)) | |
239 | defer.returnValue(result) | |
251 | 240 | |
252 | 241 | @defer.inlineCallbacks |
253 | 242 | def do_token_login(self, login_submission): |
267 | 256 | "device_id": device_id, |
268 | 257 | } |
269 | 258 | |
270 | defer.returnValue((200, result)) | |
259 | defer.returnValue(result) | |
271 | 260 | |
272 | 261 | @defer.inlineCallbacks |
273 | 262 | def do_jwt_login(self, login_submission): |
321 | 310 | "home_server": self.hs.hostname, |
322 | 311 | } |
323 | 312 | |
324 | defer.returnValue((200, result)) | |
313 | defer.returnValue(result) | |
325 | 314 | |
326 | 315 | def _register_device(self, user_id, login_submission): |
327 | 316 | """Register a device for a user. |
342 | 331 | return self.device_handler.check_device_registered( |
343 | 332 | user_id, device_id, initial_display_name |
344 | 333 | ) |
345 | ||
346 | ||
347 | class SAML2RestServlet(ClientV1RestServlet): | |
348 | PATTERNS = client_path_patterns("/login/saml2", releases=()) | |
349 | ||
350 | def __init__(self, hs): | |
351 | super(SAML2RestServlet, self).__init__(hs) | |
352 | self.sp_config = hs.config.saml2_config_path | |
353 | self.handlers = hs.get_handlers() | |
354 | ||
355 | @defer.inlineCallbacks | |
356 | def on_POST(self, request): | |
357 | saml2_auth = None | |
358 | try: | |
359 | conf = config.SPConfig() | |
360 | conf.load_file(self.sp_config) | |
361 | SP = Saml2Client(conf) | |
362 | saml2_auth = SP.parse_authn_request_response( | |
363 | request.args['SAMLResponse'][0], BINDING_HTTP_POST) | |
364 | except Exception as e: # Not authenticated | |
365 | logger.exception(e) | |
366 | if saml2_auth and saml2_auth.status_ok() and not saml2_auth.not_signed: | |
367 | username = saml2_auth.name_id.text | |
368 | handler = self.handlers.registration_handler | |
369 | (user_id, token) = yield handler.register_saml2(username) | |
370 | # Forward to the RelayState callback along with ava | |
371 | if 'RelayState' in request.args: | |
372 | request.redirect(urllib.parse.unquote( | |
373 | request.args['RelayState'][0]) + | |
374 | '?status=authenticated&access_token=' + | |
375 | token + '&user_id=' + user_id + '&ava=' + | |
376 | urllib.quote(json.dumps(saml2_auth.ava))) | |
377 | finish_request(request) | |
378 | defer.returnValue(None) | |
379 | defer.returnValue((200, {"status": "authenticated", | |
380 | "user_id": user_id, "token": token, | |
381 | "ava": saml2_auth.ava})) | |
382 | elif 'RelayState' in request.args: | |
383 | request.redirect(urllib.parse.unquote( | |
384 | request.args['RelayState'][0]) + | |
385 | '?status=not_authenticated') | |
386 | finish_request(request) | |
387 | defer.returnValue(None) | |
388 | defer.returnValue((200, {"status": "not_authenticated"})) | |
389 | 334 | |
390 | 335 | |
391 | 336 | class CasRedirectServlet(RestServlet): |
420 | 365 | self.cas_server_url = hs.config.cas_server_url |
421 | 366 | self.cas_service_url = hs.config.cas_service_url |
422 | 367 | self.cas_required_attributes = hs.config.cas_required_attributes |
423 | self.auth_handler = hs.get_auth_handler() | |
424 | self.handlers = hs.get_handlers() | |
425 | self.macaroon_gen = hs.get_macaroon_generator() | |
368 | self._sso_auth_handler = SSOAuthHandler(hs) | |
426 | 369 | |
427 | 370 | @defer.inlineCallbacks |
428 | 371 | def on_GET(self, request): |
429 | client_redirect_url = request.args[b"redirectUrl"][0] | |
372 | client_redirect_url = parse_string(request, "redirectUrl", required=True) | |
430 | 373 | http_client = self.hs.get_simple_http_client() |
431 | 374 | uri = self.cas_server_url + "/proxyValidate" |
432 | 375 | args = { |
433 | "ticket": request.args[b"ticket"][0].decode('ascii'), | |
376 | "ticket": parse_string(request, "ticket", required=True), | |
434 | 377 | "service": self.cas_service_url |
435 | 378 | } |
436 | 379 | try: |
442 | 385 | result = yield self.handle_cas_response(request, body, client_redirect_url) |
443 | 386 | defer.returnValue(result) |
444 | 387 | |
445 | @defer.inlineCallbacks | |
446 | 388 | def handle_cas_response(self, request, cas_response_body, client_redirect_url): |
447 | 389 | user, attributes = self.parse_cas_response(cas_response_body) |
448 | 390 | |
458 | 400 | if required_value != actual_value: |
459 | 401 | raise LoginError(401, "Unauthorized", errcode=Codes.UNAUTHORIZED) |
460 | 402 | |
461 | user_id = UserID(user, self.hs.hostname).to_string() | |
462 | auth_handler = self.auth_handler | |
463 | registered_user_id = yield auth_handler.check_user_exists(user_id) | |
464 | if not registered_user_id: | |
465 | registered_user_id, _ = ( | |
466 | yield self.handlers.registration_handler.register(localpart=user) | |
467 | ) | |
468 | ||
469 | login_token = self.macaroon_gen.generate_short_term_login_token( | |
470 | registered_user_id | |
471 | ) | |
472 | redirect_url = self.add_login_token_to_redirect_url(client_redirect_url, | |
473 | login_token) | |
474 | request.redirect(redirect_url) | |
475 | finish_request(request) | |
476 | ||
477 | def add_login_token_to_redirect_url(self, url, token): | |
478 | url_parts = list(urllib.parse.urlparse(url)) | |
479 | query = dict(urllib.parse.parse_qsl(url_parts[4])) | |
480 | query.update({"loginToken": token}) | |
481 | url_parts[4] = urllib.parse.urlencode(query).encode('ascii') | |
482 | return urllib.parse.urlunparse(url_parts) | |
403 | return self._sso_auth_handler.on_successful_auth( | |
404 | user, request, client_redirect_url, | |
405 | ) | |
483 | 406 | |
484 | 407 | def parse_cas_response(self, cas_response_body): |
485 | 408 | user = None |
514 | 437 | return user, attributes |
515 | 438 | |
516 | 439 | |
440 | class SSOAuthHandler(object): | |
441 | """ | |
442 | Utility class for Resources and Servlets which handle the response from an SSO | 
443 | service | |
444 | ||
445 | Args: | |
446 | hs (synapse.server.HomeServer) | |
447 | """ | |
448 | def __init__(self, hs): | |
449 | self._hostname = hs.hostname | |
450 | self._auth_handler = hs.get_auth_handler() | |
451 | self._registration_handler = hs.get_handlers().registration_handler | |
452 | self._macaroon_gen = hs.get_macaroon_generator() | |
453 | ||
454 | @defer.inlineCallbacks | |
455 | def on_successful_auth( | |
456 | self, username, request, client_redirect_url, | |
457 | user_display_name=None, | |
458 | ): | |
459 | """Called once the user has successfully authenticated with the SSO. | |
460 | ||
461 | Registers the user if necessary, and then returns a redirect (with | |
462 | a login token) to the client. | |
463 | ||
464 | Args: | |
465 | username (unicode|bytes): the remote user id. We'll map this onto | 
466 | something sane for an MXID localpart. | 
467 | ||
468 | request (SynapseRequest): the incoming request from the browser. We'll | |
469 | respond to it with a redirect. | |
470 | ||
471 | client_redirect_url (unicode): the redirect_url the client gave us when | |
472 | it first started the process. | |
473 | ||
474 | user_display_name (unicode|None): if set, and we have to register a new user, | |
475 | we will set their displayname to this. | |
476 | ||
477 | Returns: | |
478 | Deferred[None]: Completes once we have handled the request. | 
479 | """ | |
480 | localpart = map_username_to_mxid_localpart(username) | |
481 | user_id = UserID(localpart, self._hostname).to_string() | |
482 | registered_user_id = yield self._auth_handler.check_user_exists(user_id) | |
483 | if not registered_user_id: | |
484 | registered_user_id, _ = ( | |
485 | yield self._registration_handler.register( | |
486 | localpart=localpart, | |
487 | generate_token=False, | |
488 | default_display_name=user_display_name, | |
489 | ) | |
490 | ) | |
491 | ||
492 | login_token = self._macaroon_gen.generate_short_term_login_token( | |
493 | registered_user_id | |
494 | ) | |
495 | redirect_url = self._add_login_token_to_redirect_url( | |
496 | client_redirect_url, login_token | |
497 | ) | |
498 | request.redirect(redirect_url) | |
499 | finish_request(request) | |
500 | ||
501 | @staticmethod | |
502 | def _add_login_token_to_redirect_url(url, token): | |
503 | url_parts = list(urllib.parse.urlparse(url)) | |
504 | query = dict(urllib.parse.parse_qsl(url_parts[4])) | |
505 | query.update({"loginToken": token}) | |
506 | url_parts[4] = urllib.parse.urlencode(query) | |
507 | return urllib.parse.urlunparse(url_parts) | |
508 | ||
509 | ||
517 | 510 | def register_servlets(hs, http_server): |
518 | 511 | LoginRestServlet(hs).register(http_server) |
519 | if hs.config.saml2_enabled: | |
520 | SAML2RestServlet(hs).register(http_server) | |
521 | 512 | if hs.config.cas_enabled: |
522 | 513 | CasRedirectServlet(hs).register(http_server) |
523 | 514 | CasTicketServlet(hs).register(http_server) |
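The refactoring above moves the `loginToken` redirect logic into the reusable `SSOAuthHandler._add_login_token_to_redirect_url` static method, so both CAS and SAML flows can share it. A standalone sketch of the same approach (the function name mirrors the diff; the example URL is made up):

```python
import urllib.parse

def add_login_token_to_redirect_url(url, token):
    # Split the URL, merge loginToken into whatever query parameters
    # the client's redirect URL already carries, and reassemble it.
    url_parts = list(urllib.parse.urlparse(url))
    query = dict(urllib.parse.parse_qsl(url_parts[4]))
    query.update({"loginToken": token})
    url_parts[4] = urllib.parse.urlencode(query)
    return urllib.parse.urlunparse(url_parts)
```

Note that the new code drops the `.encode('ascii')` from the old version: `urlunparse` on Python 3 expects all components to be `str`, so encoding the query to bytes would raise a `TypeError`.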
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2014-2016 OpenMarket Ltd | |
2 | # Copyright 2018 New Vector Ltd | |
3 | # | |
4 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
5 | # you may not use this file except in compliance with the License. | |
6 | # You may obtain a copy of the License at | |
7 | # | |
8 | # http://www.apache.org/licenses/LICENSE-2.0 | |
9 | # | |
10 | # Unless required by applicable law or agreed to in writing, software | |
11 | # distributed under the License is distributed on an "AS IS" BASIS, | |
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
13 | # See the License for the specific language governing permissions and | |
14 | # limitations under the License. | |
15 | ||
16 | """This module contains base REST classes for constructing client v1 servlets. | |
17 | """ | |
18 | ||
19 | import re | |
20 | ||
21 | from synapse.api.urls import CLIENT_PREFIX | |
22 | ||
23 | ||
24 | def v1_only_client_path_patterns(path_regex, include_in_unstable=True): | |
25 | """Creates a regex compiled client path with the correct client path | |
26 | prefix. | |
27 | ||
28 | Args: | |
29 | path_regex (str): The regex string to match. This should NOT have a ^ | |
30 | as this will be prefixed. | |
31 | Returns: | |
32 | list of SRE_Pattern | |
33 | """ | |
34 | patterns = [re.compile("^" + CLIENT_PREFIX + path_regex)] | |
35 | if include_in_unstable: | |
36 | unstable_prefix = CLIENT_PREFIX.replace("/api/v1", "/unstable") | |
37 | patterns.append(re.compile("^" + unstable_prefix + path_regex)) | |
38 | return patterns |
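The `v1_only_client_path_patterns` helper above produces one compiled pattern per URL prefix. A self-contained sketch of how it behaves (the `CLIENT_PREFIX` value here is an assumption based on Synapse's historical `/_matrix/client/api/v1` prefix):

```python
import re

CLIENT_PREFIX = "/_matrix/client/api/v1"  # assumed value for illustration

def v1_only_client_path_patterns(path_regex, include_in_unstable=True):
    # Anchor the caller's regex under the stable v1 prefix, and
    # optionally under the /unstable prefix as well.
    patterns = [re.compile("^" + CLIENT_PREFIX + path_regex)]
    if include_in_unstable:
        unstable_prefix = CLIENT_PREFIX.replace("/api/v1", "/unstable")
        patterns.append(re.compile("^" + unstable_prefix + path_regex))
    return patterns
```

With `v1_only_client_path_patterns("/register$")`, the first pattern matches `/_matrix/client/api/v1/register` and the second matches `/_matrix/client/unstable/register`.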
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2014-2016 OpenMarket Ltd | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | """This module contains REST servlets to do with registration: /register""" | |
16 | import hmac | |
17 | import logging | |
18 | from hashlib import sha1 | |
19 | ||
20 | from twisted.internet import defer | |
21 | ||
22 | import synapse.util.stringutils as stringutils | |
23 | from synapse.api.constants import LoginType | |
24 | from synapse.api.errors import Codes, SynapseError | |
25 | from synapse.config.server import is_threepid_reserved | |
26 | from synapse.http.servlet import assert_params_in_dict, parse_json_object_from_request | |
27 | from synapse.rest.client.v1.base import ClientV1RestServlet | |
28 | from synapse.types import create_requester | |
29 | ||
30 | from .base import v1_only_client_path_patterns | |
31 | ||
32 | logger = logging.getLogger(__name__) | |
33 | ||
34 | ||
35 | # We ought to be using hmac.compare_digest(), but it doesn't exist on | 
36 | # older Pythons. Plain string comparison is a really minor security | 
37 | # flaw here, because the timing attack is so obscured by all the other | 
38 | # code that it's unlikely to make much difference. | 
39 | if hasattr(hmac, "compare_digest"): | |
40 | compare_digest = hmac.compare_digest | |
41 | else: | |
42 | def compare_digest(a, b): | |
43 | return a == b | |
44 | ||
45 | ||
46 | class RegisterRestServlet(ClientV1RestServlet): | |
47 | """Handles registration with the home server. | |
48 | ||
49 | This servlet is in control of the registration flow; the registration | |
50 | handler doesn't have a concept of multi-stages or sessions. | |
51 | """ | |
52 | ||
53 | PATTERNS = v1_only_client_path_patterns("/register$", include_in_unstable=False) | |
54 | ||
55 | def __init__(self, hs): | |
56 | """ | |
57 | Args: | |
58 | hs (synapse.server.HomeServer): server | |
59 | """ | |
60 | super(RegisterRestServlet, self).__init__(hs) | |
61 | # sessions are stored as: | |
62 | # self.sessions = { | |
63 | # "session_id" : { __session_dict__ } | |
64 | # } | |
65 | # TODO: persistent storage | |
66 | self.sessions = {} | |
67 | self.enable_registration = hs.config.enable_registration | |
68 | self.auth = hs.get_auth() | |
69 | self.auth_handler = hs.get_auth_handler() | |
70 | self.handlers = hs.get_handlers() | |
71 | ||
72 | def on_GET(self, request): | |
73 | ||
74 | require_email = 'email' in self.hs.config.registrations_require_3pid | |
75 | require_msisdn = 'msisdn' in self.hs.config.registrations_require_3pid | |
76 | ||
77 | flows = [] | |
78 | if self.hs.config.enable_registration_captcha: | |
79 | # only support the email-only flow if we don't require MSISDN 3PIDs | |
80 | if not require_msisdn: | |
81 | flows.extend([ | |
82 | { | |
83 | "type": LoginType.RECAPTCHA, | |
84 | "stages": [ | |
85 | LoginType.RECAPTCHA, | |
86 | LoginType.EMAIL_IDENTITY, | |
87 | LoginType.PASSWORD | |
88 | ] | |
89 | }, | |
90 | ]) | |
91 | # only support 3PIDless registration if no 3PIDs are required | |
92 | if not require_email and not require_msisdn: | |
93 | flows.extend([ | |
94 | { | |
95 | "type": LoginType.RECAPTCHA, | |
96 | "stages": [LoginType.RECAPTCHA, LoginType.PASSWORD] | |
97 | } | |
98 | ]) | |
99 | else: | |
100 | # only support the email-only flow if we don't require MSISDN 3PIDs | |
101 | if require_email or not require_msisdn: | |
102 | flows.extend([ | |
103 | { | |
104 | "type": LoginType.EMAIL_IDENTITY, | |
105 | "stages": [ | |
106 | LoginType.EMAIL_IDENTITY, LoginType.PASSWORD | |
107 | ] | |
108 | } | |
109 | ]) | |
110 | # only support 3PIDless registration if no 3PIDs are required | |
111 | if not require_email and not require_msisdn: | |
112 | flows.extend([ | |
113 | { | |
114 | "type": LoginType.PASSWORD | |
115 | } | |
116 | ]) | |
117 | return (200, {"flows": flows}) | |
118 | ||
119 | @defer.inlineCallbacks | |
120 | def on_POST(self, request): | |
121 | register_json = parse_json_object_from_request(request) | |
122 | ||
123 | session = (register_json["session"] | |
124 | if "session" in register_json else None) | |
125 | login_type = None | |
126 | assert_params_in_dict(register_json, ["type"]) | |
127 | ||
128 | try: | |
129 | login_type = register_json["type"] | |
130 | ||
131 | is_application_server = login_type == LoginType.APPLICATION_SERVICE | |
132 | can_register = ( | |
133 | self.enable_registration | |
134 | or is_application_server | |
135 | ) | |
136 | if not can_register: | |
137 | raise SynapseError(403, "Registration has been disabled") | |
138 | ||
139 | stages = { | |
140 | LoginType.RECAPTCHA: self._do_recaptcha, | |
141 | LoginType.PASSWORD: self._do_password, | |
142 | LoginType.EMAIL_IDENTITY: self._do_email_identity, | |
143 | LoginType.APPLICATION_SERVICE: self._do_app_service, | |
144 | } | |
145 | ||
146 | session_info = self._get_session_info(request, session) | |
147 | logger.debug("%s : session info %s request info %s", | |
148 | login_type, session_info, register_json) | |
149 | response = yield stages[login_type]( | |
150 | request, | |
151 | register_json, | |
152 | session_info | |
153 | ) | |
154 | ||
155 | if "access_token" not in response: | |
156 | # isn't a final response | |
157 | response["session"] = session_info["id"] | |
158 | ||
159 | defer.returnValue((200, response)) | |
160 | except KeyError as e: | |
161 | logger.exception(e) | |
162 | raise SynapseError(400, "Missing JSON keys for login type %s." % ( | |
163 | login_type, | |
164 | )) | |
165 | ||
166 | def on_OPTIONS(self, request): | |
167 | return (200, {}) | |
168 | ||
169 | def _get_session_info(self, request, session_id): | |
170 | if not session_id: | |
171 | # create a new session | |
172 | while session_id is None or session_id in self.sessions: | |
173 | session_id = stringutils.random_string(24) | |
174 | self.sessions[session_id] = { | |
175 | "id": session_id, | |
176 | LoginType.EMAIL_IDENTITY: False, | |
177 | LoginType.RECAPTCHA: False | |
178 | } | |
179 | ||
180 | return self.sessions[session_id] | |
181 | ||
182 | def _save_session(self, session): | |
183 | # TODO: Persistent storage | |
184 | logger.debug("Saving session %s", session) | |
185 | self.sessions[session["id"]] = session | |
186 | ||
187 | def _remove_session(self, session): | |
188 | logger.debug("Removing session %s", session) | |
189 | self.sessions.pop(session["id"]) | |
190 | ||
191 | @defer.inlineCallbacks | |
192 | def _do_recaptcha(self, request, register_json, session): | |
193 | if not self.hs.config.enable_registration_captcha: | |
194 | raise SynapseError(400, "Captcha not required.") | |
195 | ||
196 | yield self._check_recaptcha(request, register_json, session) | |
197 | ||
198 | session[LoginType.RECAPTCHA] = True # mark captcha as done | |
199 | self._save_session(session) | |
200 | defer.returnValue({ | |
201 | "next": [LoginType.PASSWORD, LoginType.EMAIL_IDENTITY] | |
202 | }) | |
203 | ||
204 | @defer.inlineCallbacks | |
205 | def _check_recaptcha(self, request, register_json, session): | |
206 | if ("captcha_bypass_hmac" in register_json and | |
207 | self.hs.config.captcha_bypass_secret): | |
208 | if "user" not in register_json: | |
209 | raise SynapseError(400, "Captcha bypass needs 'user'") | |
210 | ||
211 | want = hmac.new( | |
212 | key=self.hs.config.captcha_bypass_secret, | |
213 | msg=register_json["user"], | |
214 | digestmod=sha1, | |
215 | ).hexdigest() | |
216 | ||
217 | # str() because otherwise hmac complains that 'unicode' does not | |
218 | # have the buffer interface | |
219 | got = str(register_json["captcha_bypass_hmac"]) | |
220 | ||
221 | if compare_digest(want, got): | |
222 | session["user"] = register_json["user"] | |
223 | defer.returnValue(None) | |
224 | else: | |
225 | raise SynapseError( | |
226 | 400, "Captcha bypass HMAC incorrect", | |
227 | errcode=Codes.CAPTCHA_NEEDED | |
228 | ) | |
229 | ||
230 | challenge = None | |
231 | user_response = None | |
232 | try: | |
233 | challenge = register_json["challenge"] | |
234 | user_response = register_json["response"] | |
235 | except KeyError: | |
236 | raise SynapseError(400, "Captcha response is required", | |
237 | errcode=Codes.CAPTCHA_NEEDED) | |
238 | ||
239 | ip_addr = self.hs.get_ip_from_request(request) | |
240 | ||
241 | handler = self.handlers.registration_handler | |
242 | yield handler.check_recaptcha( | |
243 | ip_addr, | |
244 | self.hs.config.recaptcha_private_key, | |
245 | challenge, | |
246 | user_response | |
247 | ) | |
248 | ||
249 | @defer.inlineCallbacks | |
250 | def _do_email_identity(self, request, register_json, session): | |
251 | if (self.hs.config.enable_registration_captcha and | |
252 | not session[LoginType.RECAPTCHA]): | |
253 | raise SynapseError(400, "Captcha is required.") | |
254 | ||
255 | threepidCreds = register_json['threepidCreds'] | |
256 | handler = self.handlers.registration_handler | |
257 | logger.debug("Registering email. threepidcreds: %s", threepidCreds) | 
258 | yield handler.register_email(threepidCreds) | |
259 | session["threepidCreds"] = threepidCreds # store creds for next stage | |
260 | session[LoginType.EMAIL_IDENTITY] = True # mark email as done | |
261 | self._save_session(session) | |
262 | defer.returnValue({ | |
263 | "next": LoginType.PASSWORD | |
264 | }) | |
265 | ||
266 | @defer.inlineCallbacks | |
267 | def _do_password(self, request, register_json, session): | |
268 | if (self.hs.config.enable_registration_captcha and | |
269 | not session[LoginType.RECAPTCHA]): | |
270 | # captcha should've been done by this stage! | |
271 | raise SynapseError(400, "Captcha is required.") | |
272 | ||
273 | if ("user" in session and "user" in register_json and | |
274 | session["user"] != register_json["user"]): | |
275 | raise SynapseError( | |
276 | 400, "Cannot change user ID during registration" | |
277 | ) | |
278 | ||
279 | password = register_json["password"].encode("utf-8") | |
280 | desired_user_id = ( | |
281 | register_json["user"].encode("utf-8") | |
282 | if "user" in register_json else None | |
283 | ) | |
284 | threepid = None | |
285 | if session.get(LoginType.EMAIL_IDENTITY): | |
286 | threepid = session["threepidCreds"] | |
287 | ||
288 | handler = self.handlers.registration_handler | |
289 | (user_id, token) = yield handler.register( | |
290 | localpart=desired_user_id, | |
291 | password=password, | |
292 | threepid=threepid, | |
293 | ) | |
294 | # Necessary due to auth checks prior to the threepid being | |
295 | # written to the db | |
296 | if is_threepid_reserved(self.hs.config, threepid): | |
297 | yield self.store.upsert_monthly_active_user(user_id) | |
298 | ||
299 | if session[LoginType.EMAIL_IDENTITY]: | |
300 | logger.debug("Binding emails %s to %s" % ( | |
301 | session["threepidCreds"], user_id) | |
302 | ) | |
303 | yield handler.bind_emails(user_id, session["threepidCreds"]) | |
304 | ||
305 | result = { | |
306 | "user_id": user_id, | |
307 | "access_token": token, | |
308 | "home_server": self.hs.hostname, | |
309 | } | |
310 | self._remove_session(session) | |
311 | defer.returnValue(result) | |
312 | ||
313 | @defer.inlineCallbacks | |
314 | def _do_app_service(self, request, register_json, session): | |
315 | as_token = self.auth.get_access_token_from_request(request) | |
316 | ||
317 | assert_params_in_dict(register_json, ["user"]) | |
318 | user_localpart = register_json["user"].encode("utf-8") | |
319 | ||
320 | handler = self.handlers.registration_handler | |
321 | user_id = yield handler.appservice_register( | |
322 | user_localpart, as_token | |
323 | ) | |
324 | token = yield self.auth_handler.issue_access_token(user_id) | |
325 | self._remove_session(session) | |
326 | defer.returnValue({ | |
327 | "user_id": user_id, | |
328 | "access_token": token, | |
329 | "home_server": self.hs.hostname, | |
330 | }) | |
331 | ||
332 | ||
333 | class CreateUserRestServlet(ClientV1RestServlet): | |
334 | """Handles user creation via a server-to-server interface | |
335 | """ | |
336 | ||
337 | PATTERNS = v1_only_client_path_patterns("/createUser$") | |
338 | ||
339 | def __init__(self, hs): | |
340 | super(CreateUserRestServlet, self).__init__(hs) | |
341 | self.store = hs.get_datastore() | |
342 | self.handlers = hs.get_handlers() | |
343 | ||
344 | @defer.inlineCallbacks | |
345 | def on_POST(self, request): | |
346 | user_json = parse_json_object_from_request(request) | |
347 | ||
348 | access_token = self.auth.get_access_token_from_request(request) | |
349 | app_service = self.store.get_app_service_by_token( | |
350 | access_token | |
351 | ) | |
352 | if not app_service: | |
353 | raise SynapseError(403, "Invalid application service token.") | |
354 | ||
355 | requester = create_requester(app_service.sender) | |
356 | ||
357 | logger.debug("creating user: %s", user_json) | |
358 | response = yield self._do_create(requester, user_json) | |
359 | ||
360 | defer.returnValue((200, response)) | |
361 | ||
362 | def on_OPTIONS(self, request): | |
363 | return 403, {} | |
364 | ||
365 | @defer.inlineCallbacks | |
366 | def _do_create(self, requester, user_json): | |
367 | assert_params_in_dict(user_json, ["localpart", "displayname"]) | |
368 | ||
369 | localpart = user_json["localpart"].encode("utf-8") | |
370 | displayname = user_json["displayname"].encode("utf-8") | |
371 | password_hash = user_json["password_hash"].encode("utf-8") \ | |
372 | if user_json.get("password_hash") else None | |
373 | ||
374 | handler = self.handlers.registration_handler | |
375 | user_id, token = yield handler.get_or_create_user( | |
376 | requester=requester, | |
377 | localpart=localpart, | |
378 | displayname=displayname, | |
379 | password_hash=password_hash | |
380 | ) | |
381 | ||
382 | defer.returnValue({ | |
383 | "user_id": user_id, | |
384 | "access_token": token, | |
385 | "home_server": self.hs.hostname, | |
386 | }) | |
387 | ||
388 | ||
389 | def register_servlets(hs, http_server): | |
390 | RegisterRestServlet(hs).register(http_server) | |
391 | CreateUserRestServlet(hs).register(http_server) |
16 | 16 | |
17 | 17 | from twisted.internet import defer |
18 | 18 | |
19 | from synapse.api.errors import AuthError, SynapseError | |
19 | from synapse.api.errors import AuthError, NotFoundError, SynapseError | |
20 | 20 | from synapse.http.servlet import RestServlet, parse_json_object_from_request |
21 | 21 | |
22 | 22 | from ._base import client_v2_patterns |
27 | 27 | class AccountDataServlet(RestServlet): |
28 | 28 | """ |
29 | 29 | PUT /user/{user_id}/account_data/{account_dataType} HTTP/1.1 |
30 | GET /user/{user_id}/account_data/{account_dataType} HTTP/1.1 | |
30 | 31 | """ |
31 | 32 | PATTERNS = client_v2_patterns( |
32 | 33 | "/user/(?P<user_id>[^/]*)/account_data/(?P<account_data_type>[^/]*)" |
56 | 57 | |
57 | 58 | defer.returnValue((200, {})) |
58 | 59 | |
60 | @defer.inlineCallbacks | |
61 | def on_GET(self, request, user_id, account_data_type): | |
62 | requester = yield self.auth.get_user_by_req(request) | |
63 | if user_id != requester.user.to_string(): | |
64 | raise AuthError(403, "Cannot get account data for other users.") | |
65 | ||
66 | event = yield self.store.get_global_account_data_by_type_for_user( | |
67 | account_data_type, user_id, | |
68 | ) | |
69 | ||
70 | if event is None: | |
71 | raise NotFoundError("Account data not found") | |
72 | ||
73 | defer.returnValue((200, event)) | |
74 | ||
59 | 75 | |
60 | 76 | class RoomAccountDataServlet(RestServlet): |
61 | 77 | """ |
62 | 78 | PUT /user/{user_id}/rooms/{room_id}/account_data/{account_dataType} HTTP/1.1 |
79 | GET /user/{user_id}/rooms/{room_id}/account_data/{account_dataType} HTTP/1.1 | |
63 | 80 | """ |
64 | 81 | PATTERNS = client_v2_patterns( |
65 | 82 | "/user/(?P<user_id>[^/]*)" |
98 | 115 | |
99 | 116 | defer.returnValue((200, {})) |
100 | 117 | |
118 | @defer.inlineCallbacks | |
119 | def on_GET(self, request, user_id, room_id, account_data_type): | |
120 | requester = yield self.auth.get_user_by_req(request) | |
121 | if user_id != requester.user.to_string(): | |
122 | raise AuthError(403, "Cannot get account data for other users.") | |
123 | ||
124 | event = yield self.store.get_account_data_for_room_and_type( | |
125 | user_id, room_id, account_data_type, | |
126 | ) | |
127 | ||
128 | if event is None: | |
129 | raise NotFoundError("Room account data not found") | |
130 | ||
131 | defer.returnValue((200, event)) | |
132 | ||
101 | 133 | |
102 | 134 | def register_servlets(hs, http_server): |
103 | 135 | AccountDataServlet(hs).register(http_server) |
40 | 40 | @defer.inlineCallbacks |
41 | 41 | def _async_render_GET(self, request): |
42 | 42 | yield self.auth.get_user_by_req(request) |
43 | respond_with_json(request, 200, self.limits_dict) | |
43 | respond_with_json(request, 200, self.limits_dict, send_cors=True) | |
44 | 44 | |
45 | 45 | def render_OPTIONS(self, request): |
46 | 46 | respond_with_json(request, 200, {}, send_cors=True) |
29 | 29 | FederationDeniedError, |
30 | 30 | HttpResponseException, |
31 | 31 | NotFoundError, |
32 | RequestSendFailed, | |
32 | 33 | SynapseError, |
33 | 34 | ) |
34 | 35 | from synapse.metrics.background_process_metrics import run_as_background_process |
371 | 372 | "allow_remote": "false", |
372 | 373 | } |
373 | 374 | ) |
374 | except twisted.internet.error.DNSLookupError as e: | |
375 | logger.warn("HTTP error fetching remote media %s/%s: %r", | |
375 | except RequestSendFailed as e: | |
376 | logger.warn("Request failed fetching remote media %s/%s: %r", | |
376 | 377 | server_name, media_id, e) |
377 | raise NotFoundError() | |
378 | raise SynapseError(502, "Failed to fetch remote media") | |
378 | 379 | |
379 | 380 | except HttpResponseException as e: |
380 | 381 | logger.warn("HTTP error fetching remote media %s/%s: %s", |
34 | 34 | from twisted.web.server import NOT_DONE_YET |
35 | 35 | |
36 | 36 | from synapse.api.errors import Codes, SynapseError |
37 | from synapse.http.client import SpiderHttpClient | |
37 | from synapse.http.client import SimpleHttpClient | |
38 | 38 | from synapse.http.server import ( |
39 | 39 | respond_with_json, |
40 | 40 | respond_with_json_bytes, |
68 | 68 | self.max_spider_size = hs.config.max_spider_size |
69 | 69 | self.server_name = hs.hostname |
70 | 70 | self.store = hs.get_datastore() |
71 | self.client = SpiderHttpClient(hs) | |
71 | self.client = SimpleHttpClient( | |
72 | hs, | |
73 | treq_args={"browser_like_redirects": True}, | |
74 | ip_whitelist=hs.config.url_preview_ip_range_whitelist, | |
75 | ip_blacklist=hs.config.url_preview_ip_range_blacklist, | |
76 | ) | |
72 | 77 | self.media_repo = media_repo |
73 | 78 | self.primary_base_path = media_repo.primary_base_path |
74 | 79 | self.media_storage = media_storage |
317 | 322 | length, headers, uri, code = yield self.client.get_file( |
318 | 323 | url, output_stream=f, max_size=self.max_spider_size, |
319 | 324 | ) |
325 | except SynapseError: | |
326 | # Pass SynapseErrors through directly, so that the servlet | |
327 | # handler will return a SynapseError to the client instead of | |
328 | # blank data or a 500. | |
329 | raise | |
320 | 330 | except Exception as e: |
321 | 331 | # FIXME: pass through 404s and other error messages nicely |
322 | 332 | logger.warn("Error downloading %s: %r", url, e) |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2018 New Vector Ltd | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | import logging | |
15 | ||
16 | from twisted.web.resource import Resource | |
17 | ||
18 | from synapse.rest.saml2.metadata_resource import SAML2MetadataResource | |
19 | from synapse.rest.saml2.response_resource import SAML2ResponseResource | |
20 | ||
21 | logger = logging.getLogger(__name__) | |
22 | ||
23 | ||
24 | class SAML2Resource(Resource): | |
25 | def __init__(self, hs): | |
26 | Resource.__init__(self) | |
27 | self.putChild(b"metadata.xml", SAML2MetadataResource(hs)) | |
28 | self.putChild(b"authn_response", SAML2ResponseResource(hs)) |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2018 New Vector Ltd | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | ||
16 | import saml2.metadata | |
17 | ||
18 | from twisted.web.resource import Resource | |
19 | ||
20 | ||
21 | class SAML2MetadataResource(Resource): | |
22 | """A Twisted web resource which renders the SAML metadata""" | |
23 | ||
24 | isLeaf = 1 | |
25 | ||
26 | def __init__(self, hs): | |
27 | Resource.__init__(self) | |
28 | self.sp_config = hs.config.saml2_sp_config | |
29 | ||
30 | def render_GET(self, request): | |
31 | metadata_xml = saml2.metadata.create_metadata_string( | |
32 | configfile=None, config=self.sp_config, | |
33 | ) | |
34 | request.setHeader(b"Content-Type", b"text/xml; charset=utf-8") | |
35 | return metadata_xml |
0 | # -*- coding: utf-8 -*- | |
1 | # | |
2 | # Copyright 2018 New Vector Ltd | |
3 | # | |
4 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
5 | # you may not use this file except in compliance with the License. | |
6 | # You may obtain a copy of the License at | |
7 | # | |
8 | # http://www.apache.org/licenses/LICENSE-2.0 | |
9 | # | |
10 | # Unless required by applicable law or agreed to in writing, software | |
11 | # distributed under the License is distributed on an "AS IS" BASIS, | |
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
13 | # See the License for the specific language governing permissions and | |
14 | # limitations under the License. | |
15 | import logging | |
16 | ||
17 | import saml2 | |
18 | from saml2.client import Saml2Client | |
19 | ||
20 | from twisted.web.resource import Resource | |
21 | from twisted.web.server import NOT_DONE_YET | |
22 | ||
23 | from synapse.api.errors import CodeMessageException | |
24 | from synapse.http.server import wrap_html_request_handler | |
25 | from synapse.http.servlet import parse_string | |
26 | from synapse.rest.client.v1.login import SSOAuthHandler | |
27 | ||
28 | logger = logging.getLogger(__name__) | |
29 | ||
30 | ||
31 | class SAML2ResponseResource(Resource): | |
32 | """A Twisted web resource which handles the SAML response""" | |
33 | ||
34 | isLeaf = 1 | |
35 | ||
36 | def __init__(self, hs): | |
37 | Resource.__init__(self) | |
38 | ||
39 | self._saml_client = Saml2Client(hs.config.saml2_sp_config) | |
40 | self._sso_auth_handler = SSOAuthHandler(hs) | |
41 | ||
42 | def render_POST(self, request): | |
43 | self._async_render_POST(request) | |
44 | return NOT_DONE_YET | |
45 | ||
46 | @wrap_html_request_handler | |
47 | def _async_render_POST(self, request): | |
48 | resp_bytes = parse_string(request, 'SAMLResponse', required=True) | |
49 | relay_state = parse_string(request, 'RelayState', required=True) | |
50 | ||
51 | try: | |
52 | saml2_auth = self._saml_client.parse_authn_request_response( | |
53 | resp_bytes, saml2.BINDING_HTTP_POST, | |
54 | ) | |
55 | except Exception as e: | |
56 | logger.warning("Exception parsing SAML2 response", exc_info=1) | |
57 | raise CodeMessageException( | |
58 | 400, "Unable to parse SAML2 response: %s" % (e,), | |
59 | ) | |
60 | ||
61 | if saml2_auth.not_signed: | |
62 | raise CodeMessageException(400, "SAML2 response was not signed") | |
63 | ||
64 | if "uid" not in saml2_auth.ava: | |
65 | raise CodeMessageException(400, "uid not in SAML2 response") | |
66 | ||
67 | username = saml2_auth.ava["uid"][0] | |
68 | ||
69 | displayName = saml2_auth.ava.get("displayName", [None])[0] | |
70 | return self._sso_auth_handler.on_successful_auth( | |
71 | username, request, relay_state, | |
72 | user_display_name=displayName, | |
73 | ) |
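`SAML2ResponseResource` above pulls `uid` and `displayName` out of `saml2_auth.ava`, which pysaml2 populates as a dict mapping attribute names to lists of values. A sketch of that extraction in isolation (the helper name is mine, not in the diff):

```python
def extract_saml_attrs(ava):
    # ava maps attribute names to lists of values, as pysaml2's
    # parse_authn_request_response returns them; uid is mandatory,
    # displayName optional.
    if "uid" not in ava:
        raise ValueError("uid not in SAML2 response")
    username = ava["uid"][0]
    display_name = ava.get("displayName", [None])[0]
    return username, display_name
```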
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2018 New Vector Ltd. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | import json | |
16 | import logging | |
17 | ||
18 | from twisted.web.resource import Resource | |
19 | ||
20 | logger = logging.getLogger(__name__) | |
21 | ||
22 | ||
23 | class WellKnownBuilder(object): | |
24 | """Utility to construct the well-known response | |
25 | ||
26 | Args: | |
27 | hs (synapse.server.HomeServer): | |
28 | """ | |
29 | def __init__(self, hs): | |
30 | self._config = hs.config | |
31 | ||
32 | def get_well_known(self): | |
33 | # if we don't have a public_baseurl, we can't help much here. | 
34 | if self._config.public_baseurl is None: | |
35 | return None | |
36 | ||
37 | result = { | |
38 | "m.homeserver": { | |
39 | "base_url": self._config.public_baseurl, | |
40 | }, | |
41 | } | |
42 | ||
43 | if self._config.default_identity_server: | |
44 | result["m.identity_server"] = { | |
45 | "base_url": self._config.default_identity_server, | |
46 | } | |
47 | ||
48 | return result | |
49 | ||
50 | ||
51 | class WellKnownResource(Resource): | |
52 | """A Twisted web resource which renders the .well-known file""" | |
53 | ||
54 | isLeaf = 1 | |
55 | ||
56 | def __init__(self, hs): | |
57 | Resource.__init__(self) | |
58 | self._well_known_builder = WellKnownBuilder(hs) | |
59 | ||
60 | def render_GET(self, request): | |
61 | r = self._well_known_builder.get_well_known() | |
62 | if not r: | |
63 | request.setResponseCode(404) | |
64 | request.setHeader(b"Content-Type", b"text/plain") | |
65 | return b'.well-known not available' | |
66 | ||
67 | logger.debug("returning: %s", r) | 
68 | request.setHeader(b"Content-Type", b"application/json") | |
69 | return json.dumps(r).encode("utf-8") |
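`WellKnownBuilder.get_well_known` above builds the client discovery document: no `public_baseurl` means no response, and the `m.identity_server` entry is only emitted when a default identity server is configured. The shape can be sketched without the HomeServer plumbing (function name and URLs here are illustrative):

```python
def build_well_known(public_baseurl, default_identity_server=None):
    # Mirrors WellKnownBuilder.get_well_known(): without a public base
    # URL there is nothing useful to serve.
    if public_baseurl is None:
        return None
    result = {"m.homeserver": {"base_url": public_baseurl}}
    if default_identity_server:
        result["m.identity_server"] = {"base_url": default_identity_server}
    return result
```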
606 | 606 | return v1.resolve_events_with_store( |
607 | 607 | state_sets, event_map, state_res_store.get_events, |
608 | 608 | ) |
609 | elif room_version in (RoomVersions.VDH_TEST, RoomVersions.STATE_V2_TEST): | |
609 | elif room_version in ( | |
610 | RoomVersions.VDH_TEST, RoomVersions.STATE_V2_TEST, RoomVersions.V2, | |
611 | ): | |
610 | 612 | return v2.resolve_events_with_store( |
611 | 613 | state_sets, event_map, state_res_store, |
612 | 614 | ) |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | 15 | |
16 | import datetime | |
16 | import calendar | |
17 | 17 | import logging |
18 | 18 | import time |
19 | ||
20 | from dateutil import tz | |
21 | 19 | |
22 | 20 | from synapse.api.constants import PresenceState |
23 | 21 | from synapse.storage.devices import DeviceStore |
356 | 354 | """ |
357 | 355 | Returns millisecond unixtime for start of UTC day. |
358 | 356 | """ |
359 | now = datetime.datetime.utcnow() | |
360 | today_start = datetime.datetime(now.year, now.month, | |
361 | now.day, tzinfo=tz.tzutc()) | |
362 | return int(time.mktime(today_start.timetuple())) * 1000 | |
357 | now = time.gmtime() | |
358 | today_start = calendar.timegm(( | |
359 | now.tm_year, now.tm_mon, now.tm_mday, 0, 0, 0, | |
360 | )) | |
361 | return today_start * 1000 | |
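The dateutil-free replacement above can be exercised in isolation. A minimal sketch of the same computation (stdlib only; the optional `now` parameter is added here purely to make it testable):

```python
import calendar
import time

def start_of_utc_day_ms(now=None):
    # Millisecond unixtime for the start of the current UTC day,
    # computed with calendar.timegm instead of dateutil/datetime.
    now = time.gmtime() if now is None else now
    return calendar.timegm((now.tm_year, now.tm_mon, now.tm_mday, 0, 0, 0)) * 1000
```

Since the Unix epoch is midnight UTC, the result is always a whole number of days in milliseconds.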
363 | 362 | |
364 | 363 | def generate_user_daily_visits(self): |
365 | 364 | """ |
54 | 54 | txn, |
55 | 55 | tp["medium"], tp["address"] |
56 | 56 | ) |
57 | ||
57 | 58 | if user_id: |
58 | self.upsert_monthly_active_user_txn(txn, user_id) | |
59 | reserved_user_list.append(user_id) | |
59 | is_support = self.is_support_user_txn(txn, user_id) | |
60 | if not is_support: | |
61 | self.upsert_monthly_active_user_txn(txn, user_id) | |
62 | reserved_user_list.append(user_id) | |
60 | 63 | else: |
61 | 64 | logger.warning( |
62 | 65 | "mau limit reserved threepid %s not found in db" % tp |
181 | 184 | Args: |
182 | 185 | user_id (str): user to add/update |
183 | 186 | """ |
187 | # Support users are never included in MAU stats. Note that we can't easily | |
188 | # call this from upsert_monthly_active_user_txn because we would need a _txn | |
189 | # form of is_support_user, which is awkward because we want to cache the | |
190 | # result. We therefore check here, and accept that | |
191 | # upsert_monthly_active_user_txn may be called directly from | |
192 | # _initialise_reserved_users, where it would be very strange to | |
193 | # include a support user anyway. | |
194 | ||
195 | is_support = yield self.is_support_user(user_id) | |
196 | if is_support: | |
197 | return | |
198 | ||
184 | 199 | is_insert = yield self.runInteraction( |
185 | 200 | "upsert_monthly_active_user", self.upsert_monthly_active_user_txn, |
186 | 201 | user_id |
199 | 214 | in a database thread rather than the main thread, and we can't call |
200 | 215 | txn.call_after because txn may not be a LoggingTransaction. |
201 | 216 | |
217 | We consciously do not call is_support_user_txn from this method because | |
218 | it is not possible to cache the response. is_support_user_txn will be | |
219 | false in almost all cases, so it seems reasonable to call it only for | |
220 | upsert_monthly_active_user and to call is_support_user_txn manually | |
221 | for cases where upsert_monthly_active_user_txn is called directly, | |
222 | like _initialise_reserved_users. | |
223 | ||
224 | In short, don't call this method with support users. (Support users | |
225 | should not appear in the MAU stats). | |
226 | ||
202 | 227 | Args: |
203 | 228 | txn (cursor): |
204 | 229 | user_id (str): user to add/update |
207 | 232 | bool: True if a new entry was created, False if an |
208 | 233 | existing one was updated. |
209 | 234 | """ |
235 | ||
210 | 236 | # Am consciously deciding to lock the table on the basis that it ought
211 | 237 | # never be a big table and alternative approaches (batching multiple |
212 | 238 | # upserts into a single txn) introduced a lot of extra complexity. |
18 | 18 | |
19 | 19 | from twisted.internet import defer |
20 | 20 | |
21 | from synapse.api.constants import UserTypes | |
21 | 22 | from synapse.api.errors import Codes, StoreError |
22 | 23 | from synapse.storage import background_updates |
23 | 24 | from synapse.storage._base import SQLBaseStore |
25 | from synapse.types import UserID | |
24 | 26 | from synapse.util.caches.descriptors import cached, cachedInlineCallbacks |
25 | 27 | |
26 | 28 | |
111 | 113 | |
112 | 114 | return None |
113 | 115 | |
116 | @cachedInlineCallbacks() | |
117 | def is_support_user(self, user_id): | |
118 | """Determines if the user is of type UserTypes.SUPPORT | |
119 | ||
120 | Args: | |
121 | user_id (str): user id to test | |
122 | ||
123 | Returns: | |
124 | Deferred[bool]: True if user is of type UserTypes.SUPPORT | |
125 | """ | |
126 | res = yield self.runInteraction( | |
127 | "is_support_user", self.is_support_user_txn, user_id | |
128 | ) | |
129 | defer.returnValue(res) | |
130 | ||
131 | def is_support_user_txn(self, txn, user_id): | |
132 | res = self._simple_select_one_onecol_txn( | |
133 | txn=txn, | |
134 | table="users", | |
135 | keyvalues={"name": user_id}, | |
136 | retcol="user_type", | |
137 | allow_none=True, | |
138 | ) | |
139 | return res == UserTypes.SUPPORT | |
140 | ||
114 | 141 | |
115 | 142 | class RegistrationStore(RegistrationWorkerStore, |
116 | 143 | background_updates.BackgroundUpdateStore): |
166 | 193 | |
167 | 194 | def register(self, user_id, token=None, password_hash=None, |
168 | 195 | was_guest=False, make_guest=False, appservice_id=None, |
169 | create_profile_with_localpart=None, admin=False): | |
196 | create_profile_with_displayname=None, admin=False, user_type=None): | |
170 | 197 | """Attempts to register an account. |
171 | 198 | |
172 | 199 | Args: |
180 | 207 | make_guest (boolean): True if the new user should be a guest,
181 | 208 | false to add a regular user account. |
182 | 209 | appservice_id (str): The ID of the appservice registering the user. |
183 | create_profile_with_localpart (str): Optionally create a profile for | |
184 | the given localpart. | |
210 | create_profile_with_displayname (unicode): Optionally create a profile for | |
211 | the user, setting their displayname to the given value | |
212 | admin (boolean): is an admin user? | |
213 | user_type (str|None): type of user. One of the values from | |
214 | api.constants.UserTypes, or None for a normal user. | |
215 | ||
185 | 216 | Raises: |
186 | 217 | StoreError if the user_id could not be registered. |
187 | 218 | """ |
194 | 225 | was_guest, |
195 | 226 | make_guest, |
196 | 227 | appservice_id, |
197 | create_profile_with_localpart, | |
198 | admin | |
228 | create_profile_with_displayname, | |
229 | admin, | |
230 | user_type | |
199 | 231 | ) |
200 | 232 | |
201 | 233 | def _register( |
207 | 239 | was_guest, |
208 | 240 | make_guest, |
209 | 241 | appservice_id, |
210 | create_profile_with_localpart, | |
242 | create_profile_with_displayname, | |
211 | 243 | admin, |
244 | user_type, | |
212 | 245 | ): |
246 | user_id_obj = UserID.from_string(user_id) | |
247 | ||
213 | 248 | now = int(self.clock.time()) |
214 | 249 | |
215 | 250 | next_id = self._access_tokens_id_gen.get_next() |
243 | 278 | "is_guest": 1 if make_guest else 0, |
244 | 279 | "appservice_id": appservice_id, |
245 | 280 | "admin": 1 if admin else 0, |
281 | "user_type": user_type, | |
246 | 282 | } |
247 | 283 | ) |
248 | 284 | else: |
256 | 292 | "is_guest": 1 if make_guest else 0, |
257 | 293 | "appservice_id": appservice_id, |
258 | 294 | "admin": 1 if admin else 0, |
295 | "user_type": user_type, | |
259 | 296 | } |
260 | 297 | ) |
261 | 298 | except self.database_engine.module.IntegrityError: |
272 | 309 | (next_id, user_id, token,) |
273 | 310 | ) |
274 | 311 | |
275 | if create_profile_with_localpart: | |
312 | if create_profile_with_displayname: | |
276 | 313 | # set a default displayname serverside to avoid ugly race |
277 | 314 | # between auto-joins and clients trying to set displaynames |
315 | # | |
316 | # *obviously* the 'profiles' table uses localpart for user_id | |
317 | # while everything else uses the full mxid. | |
278 | 318 | txn.execute( |
279 | 319 | "INSERT INTO profiles(user_id, displayname) VALUES (?,?)", |
280 | (create_profile_with_localpart, create_profile_with_localpart) | |
320 | (user_id_obj.localpart, create_profile_with_displayname) | |
281 | 321 | ) |
282 | 322 | |
283 | 323 | self._invalidate_cache_and_stream( |
0 | /* Copyright 2018 New Vector Ltd | |
1 | * | |
2 | * Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | * you may not use this file except in compliance with the License. | |
4 | * You may obtain a copy of the License at | |
5 | * | |
6 | * http://www.apache.org/licenses/LICENSE-2.0 | |
7 | * | |
8 | * Unless required by applicable law or agreed to in writing, software | |
9 | * distributed under the License is distributed on an "AS IS" BASIS, | |
10 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | * See the License for the specific language governing permissions and | |
12 | * limitations under the License. | |
13 | */ | |
14 | ||
15 | /* The type of the user: NULL for a regular user, or one of the constants in | |
16 | * synapse.api.constants.UserTypes | |
17 | */ | |
18 | ALTER TABLE users ADD COLUMN user_type TEXT DEFAULT NULL; |
431 | 431 | create_id = state_ids.get((EventTypes.Create, "")) |
432 | 432 | |
433 | 433 | if not create_id: |
434 | raise NotFoundError("Unknown room") | |
434 | raise NotFoundError("Unknown room %s" % (room_id,)) | |
435 | 435 | |
436 | 436 | create_event = yield self.get_event(create_id) |
437 | 437 | defer.returnValue(create_event.content.get("room_version", "1")) |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | import re | |
14 | 15 | import string |
15 | 16 | from collections import namedtuple |
16 | 17 | |
225 | 226 | bool: True if there are any naughty characters |
226 | 227 | """ |
227 | 228 | return any(c not in mxid_localpart_allowed_characters for c in localpart) |
229 | ||
230 | ||
231 | UPPER_CASE_PATTERN = re.compile(b"[A-Z_]") | |
232 | ||
233 | # the following is a pattern which matches '=', and bytes which are not allowed in a mxid | |
234 | # localpart. | |
235 | # | |
236 | # It works by: | |
237 | # * building a string containing the allowed characters (excluding '=') | |
238 | # * escaping every special character with a backslash (to stop '-' being interpreted as a | |
239 | # range operator) | |
240 | # * wrapping it in a '[^...]' regex | |
241 | # * converting the whole lot to a 'bytes' sequence, so that we can use it to match | |
242 | # bytes rather than strings | |
243 | # | |
244 | NON_MXID_CHARACTER_PATTERN = re.compile( | |
245 | ("[^%s]" % ( | |
246 | re.escape("".join(mxid_localpart_allowed_characters - {"="}),), | |
247 | )).encode("ascii"), | |
248 | ) | |
249 | ||
250 | ||
251 | def map_username_to_mxid_localpart(username, case_sensitive=False): | |
252 | """Map a username onto a string suitable for a MXID | |
253 | ||
254 | This follows the algorithm laid out at | |
255 | https://matrix.org/docs/spec/appendices.html#mapping-from-other-character-sets. | |
256 | ||
257 | Args: | |
258 | username (unicode|bytes): username to be mapped | |
259 | case_sensitive (bool): true if TEST and test should be mapped | |
260 | onto different mxids | |
261 | ||
262 | Returns: | |
263 | unicode: string suitable for a mxid localpart | |
264 | """ | |
265 | if not isinstance(username, bytes): | |
266 | username = username.encode('utf-8') | |
267 | ||
268 | # first we sort out upper-case characters | |
269 | if case_sensitive: | |
270 | def f1(m): | |
271 | return b"_" + m.group().lower() | |
272 | ||
273 | username = UPPER_CASE_PATTERN.sub(f1, username) | |
274 | else: | |
275 | username = username.lower() | |
276 | ||
277 | # then we sort out non-ascii characters | |
278 | def f2(m): | |
279 | g = m.group()[0] | |
280 | if isinstance(g, str): | |
281 | # on python 2, we need to do an ord(). On python 3, the | |
282 | # byte itself will do. | |
283 | g = ord(g) | |
284 | return b"=%02x" % (g,) | |
285 | ||
286 | username = NON_MXID_CHARACTER_PATTERN.sub(f2, username) | |
287 | ||
288 | # we also do the =-escaping to mxids starting with an underscore. | |
289 | username = re.sub(b'^_', b'=5f', username) | |
290 | ||
291 | # we should now only have ascii bytes left, so can decode back to a | |
292 | # unicode. | |
293 | return username.decode('ascii') | |
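For illustration (not part of the patch), the case-insensitive branch of the mapping algorithm above can be reproduced in a few self-contained lines, with the allowed-character set written out explicitly per the Matrix spec appendix:

```python
import re
import string

# Characters allowed in an mxid localpart, per the Matrix spec appendices.
ALLOWED = set(string.ascii_lowercase + string.digits + "._-/=")

def map_to_localpart(username):
    # Case-insensitive variant: lower-case, then '='-escape everything
    # outside the allowed set (and '=' itself) as lower-case hex bytes.
    if not isinstance(username, bytes):
        username = username.encode("utf-8")
    username = username.lower()
    out = []
    for b in username:
        b = b if isinstance(b, int) else ord(b)  # py2 gives str, py3 gives int
        c = chr(b)
        if c in ALLOWED and c != "=":
            out.append(c)
        else:
            out.append("=%02x" % b)
    result = "".join(out)
    # a leading underscore is also escaped
    return re.sub(r"^_", "=5f", result)
```

So `"Test User"` maps to `"test=20user"`, and a leading `_` becomes `=5f`, matching the behaviour of `map_username_to_mxid_localpart` above with `case_sensitive=False`.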
228 | 294 | |
229 | 295 | |
230 | 296 | class StreamToken( |
155 | 155 | write( |
156 | 156 | "No config file found\n" |
157 | 157 | "To generate a config file, run '%s -c %s --generate-config" |
158 | " --server-name=<server name>'\n" % (" ".join(SYNAPSE), options.configfile), | |
158 | " --server-name=<server name> --report-stats=<yes/no>'\n" % ( | |
159 | " ".join(SYNAPSE), options.configfile, | |
160 | ), | |
159 | 161 | stream=sys.stderr, |
160 | 162 | ) |
161 | 163 | sys.exit(1) |
49 | 49 | # this is overridden for the appservice tests |
50 | 50 | self.store.get_app_service_by_token = Mock(return_value=None) |
51 | 51 | |
52 | self.store.is_support_user = Mock(return_value=defer.succeed(False)) | |
53 | ||
52 | 54 | @defer.inlineCallbacks |
53 | 55 | def test_get_user_by_req_user_valid_token(self): |
54 | 56 | user_info = {"name": self.test_user, "token_id": "ditto", "device_id": "device"} |
191 | 193 | |
192 | 194 | @defer.inlineCallbacks |
193 | 195 | def test_get_user_from_macaroon(self): |
194 | # TODO(danielwh): Remove this mock when we remove the | |
195 | # get_user_by_access_token fallback. | |
196 | 196 | self.store.get_user_by_access_token = Mock( |
197 | 197 | return_value={"name": "@baldrick:matrix.org", "device_id": "device"} |
198 | 198 | ) |
217 | 217 | @defer.inlineCallbacks |
218 | 218 | def test_get_guest_user_from_macaroon(self): |
219 | 219 | self.store.get_user_by_id = Mock(return_value={"is_guest": True}) |
220 | self.store.get_user_by_access_token = Mock(return_value=None) | |
220 | 221 | |
221 | 222 | user_id = "@baldrick:matrix.org" |
222 | 223 | macaroon = pymacaroons.Macaroon( |
236 | 237 | self.assertEqual(UserID.from_string(user_id), user) |
237 | 238 | self.assertTrue(is_guest) |
238 | 239 | self.store.get_user_by_id.assert_called_with(user_id) |
239 | ||
240 | @defer.inlineCallbacks | |
241 | def test_get_user_from_macaroon_user_db_mismatch(self): | |
242 | self.store.get_user_by_access_token = Mock( | |
243 | return_value={"name": "@percy:matrix.org"} | |
244 | ) | |
245 | ||
246 | user = "@baldrick:matrix.org" | |
247 | macaroon = pymacaroons.Macaroon( | |
248 | location=self.hs.config.server_name, | |
249 | identifier="key", | |
250 | key=self.hs.config.macaroon_secret_key, | |
251 | ) | |
252 | macaroon.add_first_party_caveat("gen = 1") | |
253 | macaroon.add_first_party_caveat("type = access") | |
254 | macaroon.add_first_party_caveat("user_id = %s" % (user,)) | |
255 | with self.assertRaises(AuthError) as cm: | |
256 | yield self.auth.get_user_by_access_token(macaroon.serialize()) | |
257 | self.assertEqual(401, cm.exception.code) | |
258 | self.assertIn("User mismatch", cm.exception.msg) | |
259 | ||
260 | @defer.inlineCallbacks | |
261 | def test_get_user_from_macaroon_missing_caveat(self): | |
262 | # TODO(danielwh): Remove this mock when we remove the | |
263 | # get_user_by_access_token fallback. | |
264 | self.store.get_user_by_access_token = Mock( | |
265 | return_value={"name": "@baldrick:matrix.org"} | |
266 | ) | |
267 | ||
268 | macaroon = pymacaroons.Macaroon( | |
269 | location=self.hs.config.server_name, | |
270 | identifier="key", | |
271 | key=self.hs.config.macaroon_secret_key, | |
272 | ) | |
273 | macaroon.add_first_party_caveat("gen = 1") | |
274 | macaroon.add_first_party_caveat("type = access") | |
275 | ||
276 | with self.assertRaises(AuthError) as cm: | |
277 | yield self.auth.get_user_by_access_token(macaroon.serialize()) | |
278 | self.assertEqual(401, cm.exception.code) | |
279 | self.assertIn("No user caveat", cm.exception.msg) | |
280 | ||
281 | @defer.inlineCallbacks | |
282 | def test_get_user_from_macaroon_wrong_key(self): | |
283 | # TODO(danielwh): Remove this mock when we remove the | |
284 | # get_user_by_access_token fallback. | |
285 | self.store.get_user_by_access_token = Mock( | |
286 | return_value={"name": "@baldrick:matrix.org"} | |
287 | ) | |
288 | ||
289 | user = "@baldrick:matrix.org" | |
290 | macaroon = pymacaroons.Macaroon( | |
291 | location=self.hs.config.server_name, | |
292 | identifier="key", | |
293 | key=self.hs.config.macaroon_secret_key + "wrong", | |
294 | ) | |
295 | macaroon.add_first_party_caveat("gen = 1") | |
296 | macaroon.add_first_party_caveat("type = access") | |
297 | macaroon.add_first_party_caveat("user_id = %s" % (user,)) | |
298 | ||
299 | with self.assertRaises(AuthError) as cm: | |
300 | yield self.auth.get_user_by_access_token(macaroon.serialize()) | |
301 | self.assertEqual(401, cm.exception.code) | |
302 | self.assertIn("Invalid macaroon", cm.exception.msg) | |
303 | ||
304 | @defer.inlineCallbacks | |
305 | def test_get_user_from_macaroon_unknown_caveat(self): | |
306 | # TODO(danielwh): Remove this mock when we remove the | |
307 | # get_user_by_access_token fallback. | |
308 | self.store.get_user_by_access_token = Mock( | |
309 | return_value={"name": "@baldrick:matrix.org"} | |
310 | ) | |
311 | ||
312 | user = "@baldrick:matrix.org" | |
313 | macaroon = pymacaroons.Macaroon( | |
314 | location=self.hs.config.server_name, | |
315 | identifier="key", | |
316 | key=self.hs.config.macaroon_secret_key, | |
317 | ) | |
318 | macaroon.add_first_party_caveat("gen = 1") | |
319 | macaroon.add_first_party_caveat("type = access") | |
320 | macaroon.add_first_party_caveat("user_id = %s" % (user,)) | |
321 | macaroon.add_first_party_caveat("cunning > fox") | |
322 | ||
323 | with self.assertRaises(AuthError) as cm: | |
324 | yield self.auth.get_user_by_access_token(macaroon.serialize()) | |
325 | self.assertEqual(401, cm.exception.code) | |
326 | self.assertIn("Invalid macaroon", cm.exception.msg) | |
327 | ||
328 | @defer.inlineCallbacks | |
329 | def test_get_user_from_macaroon_expired(self): | |
330 | # TODO(danielwh): Remove this mock when we remove the | |
331 | # get_user_by_access_token fallback. | |
332 | self.store.get_user_by_access_token = Mock( | |
333 | return_value={"name": "@baldrick:matrix.org"} | |
334 | ) | |
335 | ||
336 | self.store.get_user_by_access_token = Mock( | |
337 | return_value={"name": "@baldrick:matrix.org"} | |
338 | ) | |
339 | ||
340 | user = "@baldrick:matrix.org" | |
341 | macaroon = pymacaroons.Macaroon( | |
342 | location=self.hs.config.server_name, | |
343 | identifier="key", | |
344 | key=self.hs.config.macaroon_secret_key, | |
345 | ) | |
346 | macaroon.add_first_party_caveat("gen = 1") | |
347 | macaroon.add_first_party_caveat("type = access") | |
348 | macaroon.add_first_party_caveat("user_id = %s" % (user,)) | |
349 | macaroon.add_first_party_caveat("time < -2000") # ms | |
350 | ||
351 | self.hs.clock.now = 5000 # seconds | |
352 | self.hs.config.expire_access_token = True | |
353 | # yield self.auth.get_user_by_access_token(macaroon.serialize()) | |
354 | # TODO(daniel): Turn on the check that we validate expiration, when we | |
355 | # validate expiration (and remove the above line, which will start | |
356 | # throwing). | |
357 | with self.assertRaises(AuthError) as cm: | |
358 | yield self.auth.get_user_by_access_token(macaroon.serialize()) | |
359 | self.assertEqual(401, cm.exception.code) | |
360 | self.assertIn("Invalid macaroon", cm.exception.msg) | |
361 | ||
362 | @defer.inlineCallbacks | |
363 | def test_get_user_from_macaroon_with_valid_duration(self): | |
364 | # TODO(danielwh): Remove this mock when we remove the | |
365 | # get_user_by_access_token fallback. | |
366 | self.store.get_user_by_access_token = Mock( | |
367 | return_value={"name": "@baldrick:matrix.org"} | |
368 | ) | |
369 | ||
370 | self.store.get_user_by_access_token = Mock( | |
371 | return_value={"name": "@baldrick:matrix.org"} | |
372 | ) | |
373 | ||
374 | user_id = "@baldrick:matrix.org" | |
375 | macaroon = pymacaroons.Macaroon( | |
376 | location=self.hs.config.server_name, | |
377 | identifier="key", | |
378 | key=self.hs.config.macaroon_secret_key, | |
379 | ) | |
380 | macaroon.add_first_party_caveat("gen = 1") | |
381 | macaroon.add_first_party_caveat("type = access") | |
382 | macaroon.add_first_party_caveat("user_id = %s" % (user_id,)) | |
383 | macaroon.add_first_party_caveat("time < 900000000") # ms | |
384 | ||
385 | self.hs.clock.now = 5000 # seconds | |
386 | self.hs.config.expire_access_token = True | |
387 | ||
388 | user_info = yield self.auth.get_user_by_access_token(macaroon.serialize()) | |
389 | user = user_info["user"] | |
390 | self.assertEqual(UserID.from_string(user_id), user) | |
391 | 240 | |
392 | 241 | @defer.inlineCallbacks |
393 | 242 | def test_cannot_use_regular_token_as_guest(self): |
16 | 16 | |
17 | 17 | from twisted.internet import defer |
18 | 18 | |
19 | from synapse.api.errors import ResourceLimitError | |
19 | from synapse.api.constants import UserTypes | |
20 | from synapse.api.errors import ResourceLimitError, SynapseError | |
20 | 21 | from synapse.handlers.register import RegistrationHandler |
21 | 22 | from synapse.types import RoomAlias, UserID, create_requester |
22 | 23 | |
63 | 64 | requester, frank.localpart, "Frankie" |
64 | 65 | ) |
65 | 66 | self.assertEquals(result_user_id, user_id) |
67 | self.assertIsNotNone(result_token) | |
66 | 68 | self.assertEquals(result_token, 'secret') |
67 | 69 | |
68 | 70 | @defer.inlineCallbacks |
81 | 83 | requester, local_part, None |
82 | 84 | ) |
83 | 85 | self.assertEquals(result_user_id, user_id) |
84 | self.assertEquals(result_token, 'secret') | |
86 | self.assertIsNotNone(result_token) | |
85 | 87 | |
86 | 88 | @defer.inlineCallbacks |
87 | 89 | def test_mau_limits_when_disabled(self): |
129 | 131 | yield self.handler.register(localpart="local_part") |
130 | 132 | |
131 | 133 | @defer.inlineCallbacks |
132 | def test_register_saml2_mau_blocked(self): | |
133 | self.hs.config.limit_usage_by_mau = True | |
134 | self.store.get_monthly_active_count = Mock( | |
135 | return_value=defer.succeed(self.lots_of_users) | |
136 | ) | |
137 | with self.assertRaises(ResourceLimitError): | |
138 | yield self.handler.register_saml2(localpart="local_part") | |
139 | ||
140 | self.store.get_monthly_active_count = Mock( | |
141 | return_value=defer.succeed(self.hs.config.max_mau_value) | |
142 | ) | |
143 | with self.assertRaises(ResourceLimitError): | |
144 | yield self.handler.register_saml2(localpart="local_part") | |
145 | ||
146 | @defer.inlineCallbacks | |
147 | 134 | def test_auto_create_auto_join_rooms(self): |
148 | 135 | room_alias_str = "#room:test" |
149 | 136 | self.hs.config.auto_join_rooms = [room_alias_str] |
184 | 171 | self.assertEqual(len(rooms), 0) |
185 | 172 | |
186 | 173 | @defer.inlineCallbacks |
174 | def test_auto_create_auto_join_rooms_when_support_user_exists(self): | |
175 | room_alias_str = "#room:test" | |
176 | self.hs.config.auto_join_rooms = [room_alias_str] | |
177 | ||
178 | self.store.is_support_user = Mock(return_value=True) | |
179 | res = yield self.handler.register(localpart='support') | |
180 | rooms = yield self.store.get_rooms_for_user(res[0]) | |
181 | self.assertEqual(len(rooms), 0) | |
182 | directory_handler = self.hs.get_handlers().directory_handler | |
183 | room_alias = RoomAlias.from_string(room_alias_str) | |
184 | with self.assertRaises(SynapseError): | |
185 | yield directory_handler.get_association(room_alias) | |
186 | ||
187 | @defer.inlineCallbacks | |
187 | 188 | def test_auto_create_auto_join_where_no_consent(self): |
188 | 189 | self.hs.config.user_consent_at_registration = True |
189 | 190 | self.hs.config.block_events_without_consent_error = "Error" |
193 | 194 | yield self.handler.post_consent_actions(res[0]) |
194 | 195 | rooms = yield self.store.get_rooms_for_user(res[0]) |
195 | 196 | self.assertEqual(len(rooms), 0) |
197 | ||
198 | @defer.inlineCallbacks | |
199 | def test_register_support_user(self): | |
200 | res = yield self.handler.register(localpart='user', user_type=UserTypes.SUPPORT) | |
201 | self.assertTrue((yield self.store.is_support_user(res[0]))) | |
202 | ||
203 | @defer.inlineCallbacks | |
204 | def test_register_not_support_user(self): | |
205 | res = yield self.handler.register(localpart='user') | |
206 | self.assertFalse((yield self.store.is_support_user(res[0]))) | |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2018 New Vector | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | from mock import Mock | |
15 | ||
16 | from twisted.internet import defer | |
17 | ||
18 | from synapse.api.constants import UserTypes | |
19 | from synapse.handlers.user_directory import UserDirectoryHandler | |
20 | from synapse.storage.roommember import ProfileInfo | |
21 | ||
22 | from tests import unittest | |
23 | from tests.utils import setup_test_homeserver | |
24 | ||
25 | ||
26 | class UserDirectoryHandlers(object): | |
27 | def __init__(self, hs): | |
28 | self.user_directory_handler = UserDirectoryHandler(hs) | |
29 | ||
30 | ||
31 | class UserDirectoryTestCase(unittest.TestCase): | |
32 | """ Tests the UserDirectoryHandler. """ | |
33 | ||
34 | @defer.inlineCallbacks | |
35 | def setUp(self): | |
36 | hs = yield setup_test_homeserver(self.addCleanup) | |
37 | self.store = hs.get_datastore() | |
38 | hs.handlers = UserDirectoryHandlers(hs) | |
39 | ||
40 | self.handler = hs.get_handlers().user_directory_handler | |
41 | ||
42 | @defer.inlineCallbacks | |
43 | def test_handle_local_profile_change_with_support_user(self): | |
44 | support_user_id = "@support:test" | |
45 | yield self.store.register( | |
46 | user_id=support_user_id, | |
47 | token="123", | |
48 | password_hash=None, | |
49 | user_type=UserTypes.SUPPORT | |
50 | ) | |
51 | ||
52 | yield self.handler.handle_local_profile_change(support_user_id, None) | |
53 | profile = yield self.store.get_user_in_directory(support_user_id) | |
54 | self.assertIsNone(profile) | |
55 | display_name = 'display_name' | |
56 | ||
57 | profile_info = ProfileInfo( | |
58 | avatar_url='avatar_url', | |
59 | display_name=display_name, | |
60 | ) | |
61 | regular_user_id = '@regular:test' | |
62 | yield self.handler.handle_local_profile_change(regular_user_id, profile_info) | |
63 | profile = yield self.store.get_user_in_directory(regular_user_id) | |
64 | self.assertEqual(profile['display_name'], display_name) | |
65 | ||
66 | @defer.inlineCallbacks | |
67 | def test_handle_user_deactivated_support_user(self): | |
68 | s_user_id = "@support:test" | |
69 | yield self.store.register( | |
70 | user_id=s_user_id, | |
71 | token="123", | |
72 | password_hash=None, | |
73 | user_type=UserTypes.SUPPORT | |
74 | ) | |
75 | ||
76 | self.store.remove_from_user_dir = Mock() | |
77 | self.store.remove_from_user_in_public_room = Mock() | |
78 | yield self.handler.handle_user_deactivated(s_user_id) | |
79 | self.store.remove_from_user_dir.assert_not_called() | |
80 | self.store.remove_from_user_in_public_room.assert_not_called() | |
81 | ||
82 | @defer.inlineCallbacks | |
83 | def test_handle_user_deactivated_regular_user(self): | |
84 | r_user_id = "@regular:test" | |
85 | yield self.store.register(user_id=r_user_id, token="123", password_hash=None) | |
86 | self.store.remove_from_user_dir = Mock() | |
87 | self.store.remove_from_user_in_public_room = Mock() | |
88 | yield self.handler.handle_user_deactivated(r_user_id) | |
89 | self.store.remove_from_user_dir.assert_called_once_with(r_user_id) | |
90 | self.store.remove_from_user_in_public_room.assert_called_once_with(r_user_id) |
19 | 19 | from twisted.web.client import ResponseNeverReceived |
20 | 20 | from twisted.web.http import HTTPChannel |
21 | 21 | |
22 | from synapse.api.errors import RequestSendFailed | |
22 | 23 | from synapse.http.matrixfederationclient import ( |
23 | 24 | MatrixFederationHttpClient, |
24 | 25 | MatrixFederationRequest, |
48 | 49 | self.pump() |
49 | 50 | |
50 | 51 | f = self.failureResultOf(d) |
51 | self.assertIsInstance(f.value, DNSLookupError) | |
52 | self.assertIsInstance(f.value, RequestSendFailed) | |
53 | self.assertIsInstance(f.value.inner_exception, DNSLookupError) | |
52 | 54 | |
53 | 55 | def test_client_never_connect(self): |
54 | 56 | """ |
75 | 77 | self.reactor.advance(10.5) |
76 | 78 | f = self.failureResultOf(d) |
77 | 79 | |
78 | self.assertIsInstance(f.value, (ConnectingCancelledError, TimeoutError)) | |
80 | self.assertIsInstance(f.value, RequestSendFailed) | |
81 | self.assertIsInstance( | |
82 | f.value.inner_exception, | |
83 | (ConnectingCancelledError, TimeoutError), | |
84 | ) | |
79 | 85 | |
80 | 86 | def test_client_connect_no_response(self): |
81 | 87 | """ |
106 | 112 | self.reactor.advance(10.5) |
107 | 113 | f = self.failureResultOf(d) |
108 | 114 | |
109 | self.assertIsInstance(f.value, ResponseNeverReceived) | |
115 | self.assertIsInstance(f.value, RequestSendFailed) | |
116 | self.assertIsInstance(f.value.inner_exception, ResponseNeverReceived) | |
110 | 117 | |
111 | 118 | def test_client_gets_headers(self): |
112 | 119 | """ |
18 | 18 | |
19 | 19 | from mock import Mock |
20 | 20 | |
21 | from synapse.api.constants import UserTypes | |
21 | 22 | from synapse.rest.client.v1.admin import register_servlets |
22 | 23 | |
23 | 24 | from tests import unittest |
146 | 147 | nonce = channel.json_body["nonce"] |
147 | 148 | |
148 | 149 | want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1) |
149 | want_mac.update(nonce.encode('ascii') + b"\x00bob\x00abc123\x00admin") | |
150 | want_mac.update( | |
151 | nonce.encode('ascii') + b"\x00bob\x00abc123\x00admin\x00support" | |
152 | ) | |
153 | want_mac = want_mac.hexdigest() | |
154 | ||
155 | body = json.dumps( | |
156 | { | |
157 | "nonce": nonce, | |
158 | "username": "bob", | |
159 | "password": "abc123", | |
160 | "admin": True, | |
161 | "user_type": UserTypes.SUPPORT, | |
162 | "mac": want_mac, | |
163 | } | |
164 | ) | |
165 | request, channel = self.make_request("POST", self.url, body.encode('utf8')) | |
166 | self.render(request) | |
167 | ||
168 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
169 | self.assertEqual("@bob:test", channel.json_body["user_id"]) | |
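The MAC computed in the test above follows the shared-secret registration scheme: an HMAC-SHA1 over the nul-separated nonce, username, password, admin flag, and (after this change) an optional user_type. A minimal sketch of that computation, with a hypothetical shared secret:

```python
import hashlib
import hmac

def register_mac(shared_secret, nonce, username, password,
                 admin=False, user_type=None):
    # HMAC-SHA1 over nul-separated fields, mirroring what the test
    # above constructs for the admin register endpoint.
    mac = hmac.new(key=shared_secret, digestmod=hashlib.sha1)
    mac.update(nonce.encode("ascii"))
    mac.update(b"\x00" + username.encode("utf-8"))
    mac.update(b"\x00" + password.encode("utf-8"))
    mac.update(b"\x00admin" if admin else b"\x00notadmin")
    if user_type:
        mac.update(b"\x00" + user_type.encode("utf-8"))
    return mac.hexdigest()
```

Because HMAC updates are incremental, this is byte-for-byte equivalent to hashing the single concatenation `nonce\x00bob\x00abc123\x00admin\x00support` used in the test.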
170 | ||
171 | def test_nonce_reuse(self): | |
172 | """ | |
173 | A valid unrecognised nonce. | |
174 | """ | |
175 | request, channel = self.make_request("GET", self.url) | |
176 | self.render(request) | |
177 | nonce = channel.json_body["nonce"] | |
178 | ||
179 | want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1) | |
180 | want_mac.update( | |
181 | nonce.encode('ascii') + b"\x00bob\x00abc123\x00admin" | |
182 | ) | |
150 | 183 | want_mac = want_mac.hexdigest() |
151 | 184 | |
152 | 185 | body = json.dumps( |
164 | 197 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) |
165 | 198 | self.assertEqual("@bob:test", channel.json_body["user_id"]) |
166 | 199 | |
167 | def test_nonce_reuse(self): | |
168 | """ | |
169 | A valid unrecognised nonce. | |
170 | """ | |
171 | request, channel = self.make_request("GET", self.url) | |
172 | self.render(request) | |
173 | nonce = channel.json_body["nonce"] | |
174 | ||
175 | want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1) | |
176 | want_mac.update(nonce.encode('ascii') + b"\x00bob\x00abc123\x00admin") | |
177 | want_mac = want_mac.hexdigest() | |
178 | ||
179 | body = json.dumps( | |
180 | { | |
181 | "nonce": nonce, | |
182 | "username": "bob", | |
183 | "password": "abc123", | |
184 | "admin": True, | |
185 | "mac": want_mac, | |
186 | } | |
187 | ) | |
188 | request, channel = self.make_request("POST", self.url, body.encode('utf8')) | |
189 | self.render(request) | |
190 | ||
191 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
192 | self.assertEqual("@bob:test", channel.json_body["user_id"]) | |
193 | ||
194 | 200 | # Now, try and reuse it |
195 | 201 | request, channel = self.make_request("POST", self.url, body.encode('utf8')) |
196 | 202 | self.render(request) |
201 | 207 | def test_missing_parts(self): |
202 | 208 | """ |
203 | 209 | Synapse will complain if you don't give nonce, username, password, and |
204 | mac. Admin is optional. Additional checks are done for length and | |
205 | type. | |
210 | mac. Admin and user_type are optional. Additional checks are done for length | |
211 | and type. | |
206 | 212 | """ |
207 | 213 | |
208 | 214 | def nonce(): |
259 | 265 | self.assertEqual('Invalid username', channel.json_body["error"]) |
260 | 266 | |
261 | 267 | # |
262 | # Username checks | |
268 | # Password checks | |
263 | 269 | # |
264 | 270 | |
265 | 271 | # Must be present |
295 | 301 | |
296 | 302 | self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"]) |
297 | 303 | self.assertEqual('Invalid password', channel.json_body["error"]) |
304 | ||
305 | # | |
306 | # user_type check | |
307 | # | |
308 | ||
309 | # Invalid user_type | |
310 | body = json.dumps({ | |
311 | "nonce": nonce(), | |
312 | "username": "a", | |
313 | "password": "1234", | |
314 | "user_type": "invalid"} | |
315 | ) | |
316 | request, channel = self.make_request("POST", self.url, body.encode('utf8')) | |
317 | self.render(request) | |
318 | ||
319 | self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"]) | |
320 | self.assertEqual('Invalid user type', channel.json_body["error"]) |
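The `want_mac` values built in the hunks above follow Synapse's shared-secret registration scheme: an HMAC-SHA1 keyed with the registration shared secret, taken over the NUL-joined nonce, username, password, admin flag, and (optionally) user type. A minimal sketch, assuming the `b"shared"` secret these tests use; the helper name is illustrative:

```python
import hashlib
import hmac


def registration_mac(secret, nonce, username, password, admin=False, user_type=None):
    """Sketch of the MAC the admin register endpoint expects.

    Fields are joined with NUL bytes; the admin flag is sent as the
    literal string "admin" (or "notadmin"); user_type is appended
    only when supplied, matching the test_register_support_user hunk.
    """
    mac = hmac.new(key=secret, digestmod=hashlib.sha1)
    parts = [
        nonce.encode("ascii"),
        username.encode("utf8"),
        password.encode("utf8"),
        b"admin" if admin else b"notadmin",
    ]
    if user_type is not None:
        parts.append(user_type.encode("utf8"))
    mac.update(b"\x00".join(parts))
    return mac.hexdigest()
```

With `registration_mac(b"shared", nonce, "bob", "abc123", admin=True, user_type="support")` this reproduces the `nonce\x00bob\x00abc123\x00admin\x00support` digest computed inline in the test.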
15 | 15 | """ Tests REST events for /events paths.""" |
16 | 16 | |
17 | 17 | from mock import Mock, NonCallableMock |
18 | from six import PY3 | |
19 | 18 | |
20 | from twisted.internet import defer | |
19 | from synapse.rest.client.v1 import admin, events, login, room | |
21 | 20 | |
22 | from ....utils import MockHttpResource, setup_test_homeserver | |
23 | from .utils import RestTestCase | |
24 | ||
25 | PATH_PREFIX = "/_matrix/client/api/v1" | |
21 | from tests import unittest | |
26 | 22 | |
27 | 23 | |
28 | class EventStreamPermissionsTestCase(RestTestCase): | |
24 | class EventStreamPermissionsTestCase(unittest.HomeserverTestCase): | |
29 | 25 | """ Tests event streaming (GET /events). """ |
30 | 26 | |
31 | if PY3: | |
32 | skip = "Skip on Py3 until ported to use not V1 only register." | |
27 | servlets = [ | |
28 | events.register_servlets, | |
29 | room.register_servlets, | |
30 | admin.register_servlets, | |
31 | login.register_servlets, | |
32 | ] | |
33 | 33 | |
34 | @defer.inlineCallbacks | |
35 | def setUp(self): | |
36 | import synapse.rest.client.v1.events | |
37 | import synapse.rest.client.v1_only.register | |
38 | import synapse.rest.client.v1.room | |
34 | def make_homeserver(self, reactor, clock): | |
39 | 35 | |
40 | self.mock_resource = MockHttpResource(prefix=PATH_PREFIX) | |
36 | config = self.default_config() | |
37 | config.enable_registration_captcha = False | |
38 | config.enable_registration = True | |
39 | config.auto_join_rooms = [] | |
41 | 40 | |
42 | hs = yield setup_test_homeserver( | |
43 | self.addCleanup, | |
44 | http_client=None, | |
45 | federation_client=Mock(), | |
46 | ratelimiter=NonCallableMock(spec_set=["send_message"]), | |
41 | hs = self.setup_test_homeserver( | |
42 | config=config, ratelimiter=NonCallableMock(spec_set=["send_message"]) | |
47 | 43 | ) |
48 | 44 | self.ratelimiter = hs.get_ratelimiter() |
49 | 45 | self.ratelimiter.send_message.return_value = (True, 0) |
50 | hs.config.enable_registration_captcha = False | |
51 | hs.config.enable_registration = True | |
52 | hs.config.auto_join_rooms = [] | |
53 | 46 | |
54 | 47 | hs.get_handlers().federation_handler = Mock() |
55 | 48 | |
56 | synapse.rest.client.v1_only.register.register_servlets(hs, self.mock_resource) | |
57 | synapse.rest.client.v1.events.register_servlets(hs, self.mock_resource) | |
58 | synapse.rest.client.v1.room.register_servlets(hs, self.mock_resource) | |
49 | return hs | |
50 | ||
51 | def prepare(self, hs, reactor, clock): | |
59 | 52 | |
60 | 53 | # register an account |
61 | self.user_id = "sid1" | |
62 | response = yield self.register(self.user_id) | |
63 | self.token = response["access_token"] | |
64 | self.user_id = response["user_id"] | |
54 | self.user_id = self.register_user("sid1", "pass") | |
55 | self.token = self.login(self.user_id, "pass") | |
65 | 56 | |
66 | 57 | # register a 2nd account |
67 | self.other_user = "other1" | |
68 | response = yield self.register(self.other_user) | |
69 | self.other_token = response["access_token"] | |
70 | self.other_user = response["user_id"] | |
58 | self.other_user = self.register_user("other2", "pass") | |
59 | self.other_token = self.login(self.other_user, "pass") | |
71 | 60 | |
72 | def tearDown(self): | |
73 | pass | |
74 | ||
75 | @defer.inlineCallbacks | |
76 | 61 | def test_stream_basic_permissions(self): |
77 | 62 | # invalid token, expect 401 |
78 | 63 | # note: this is in violation of the original v1 spec, which expected |
80 | 65 | # implementation is now part of the r0 implementation, the newer |
81 | 66 | # behaviour is used instead to be consistent with the r0 spec. |
82 | 67 | # see issue #2602 |
83 | (code, response) = yield self.mock_resource.trigger_get( | |
84 | "/events?access_token=%s" % ("invalid" + self.token,) | |
68 | request, channel = self.make_request( | |
69 | "GET", "/events?access_token=%s" % ("invalid" + self.token,) | |
85 | 70 | ) |
86 | self.assertEquals(401, code, msg=str(response)) | |
71 | self.render(request) | |
72 | self.assertEquals(channel.code, 401, msg=channel.result) | |
87 | 73 | |
88 | 74 | # valid token, expect content |
89 | (code, response) = yield self.mock_resource.trigger_get( | |
90 | "/events?access_token=%s&timeout=0" % (self.token,) | |
75 | request, channel = self.make_request( | |
76 | "GET", "/events?access_token=%s&timeout=0" % (self.token,) | |
91 | 77 | ) |
92 | self.assertEquals(200, code, msg=str(response)) | |
93 | self.assertTrue("chunk" in response) | |
94 | self.assertTrue("start" in response) | |
95 | self.assertTrue("end" in response) | |
78 | self.render(request) | |
79 | self.assertEquals(channel.code, 200, msg=channel.result) | |
80 | self.assertTrue("chunk" in channel.json_body) | |
81 | self.assertTrue("start" in channel.json_body) | |
82 | self.assertTrue("end" in channel.json_body) | |
96 | 83 | |
97 | @defer.inlineCallbacks | |
98 | 84 | def test_stream_room_permissions(self): |
99 | room_id = yield self.create_room_as(self.other_user, tok=self.other_token) | |
100 | yield self.send(room_id, tok=self.other_token) | |
85 | room_id = self.helper.create_room_as(self.other_user, tok=self.other_token) | |
86 | self.helper.send(room_id, tok=self.other_token) | |
101 | 87 | |
102 | 88 | # invited to room (expect no content for room) |
103 | yield self.invite( | |
89 | self.helper.invite( | |
104 | 90 | room_id, src=self.other_user, targ=self.user_id, tok=self.other_token |
105 | 91 | ) |
106 | 92 | |
107 | (code, response) = yield self.mock_resource.trigger_get( | |
108 | "/events?access_token=%s&timeout=0" % (self.token,) | |
93 | # valid token, expect content | |
94 | request, channel = self.make_request( | |
95 | "GET", "/events?access_token=%s&timeout=0" % (self.token,) | |
109 | 96 | ) |
110 | self.assertEquals(200, code, msg=str(response)) | |
97 | self.render(request) | |
98 | self.assertEquals(channel.code, 200, msg=channel.result) | |
111 | 99 | |
112 | 100 | # We may get a presence event for ourselves down |
113 | 101 | self.assertEquals( |
115 | 103 | len( |
116 | 104 | [ |
117 | 105 | c |
118 | for c in response["chunk"] | |
106 | for c in channel.json_body["chunk"] | |
119 | 107 | if not ( |
120 | 108 | c.get("type") == "m.presence" |
121 | 109 | and c["content"].get("user_id") == self.user_id |
125 | 113 | ) |
126 | 114 | |
127 | 115 | # joined room (expect all content for room) |
128 | yield self.join(room=room_id, user=self.user_id, tok=self.token) | |
116 | self.helper.join(room=room_id, user=self.user_id, tok=self.token) | |
129 | 117 | |
130 | 118 | # left room (expect no content for room)
131 | 119 |
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2015, 2016 OpenMarket Ltd | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | import json | |
16 | ||
17 | from mock import Mock | |
18 | from six import PY3 | |
19 | ||
20 | from twisted.test.proto_helpers import MemoryReactorClock | |
21 | ||
22 | from synapse.http.server import JsonResource | |
23 | from synapse.rest.client.v1_only.register import register_servlets | |
24 | from synapse.util import Clock | |
25 | ||
26 | from tests import unittest | |
27 | from tests.server import make_request, render, setup_test_homeserver | |
28 | ||
29 | ||
30 | class CreateUserServletTestCase(unittest.TestCase): | |
31 | """ | |
32 | Tests for CreateUserRestServlet. | |
33 | """ | |
34 | ||
35 | if PY3: | |
36 | skip = "Not ported to Python 3." | |
37 | ||
38 | def setUp(self): | |
39 | self.registration_handler = Mock() | |
40 | ||
41 | self.appservice = Mock(sender="@as:test") | |
42 | self.datastore = Mock( | |
43 | get_app_service_by_token=Mock(return_value=self.appservice) | |
44 | ) | |
45 | ||
46 | handlers = Mock(registration_handler=self.registration_handler) | |
47 | self.reactor = MemoryReactorClock() | |
48 | self.hs_clock = Clock(self.reactor) | |
49 | ||
50 | self.hs = setup_test_homeserver( | |
51 | self.addCleanup, http_client=None, clock=self.hs_clock, reactor=self.reactor | |
52 | ) | |
53 | self.hs.get_datastore = Mock(return_value=self.datastore) | |
54 | self.hs.get_handlers = Mock(return_value=handlers) | |
55 | ||
56 | def test_POST_createuser_with_valid_user(self): | |
57 | ||
58 | res = JsonResource(self.hs) | |
59 | register_servlets(self.hs, res) | |
60 | ||
61 | request_data = json.dumps( | |
62 | { | |
63 | "localpart": "someone", | |
64 | "displayname": "someone interesting", | |
65 | "duration_seconds": 200, | |
66 | } | |
67 | ) | |
68 | ||
69 | url = b'/_matrix/client/api/v1/createUser?access_token=i_am_an_app_service' | |
70 | ||
71 | user_id = "@someone:interesting" | |
72 | token = "my token" | |
73 | ||
74 | self.registration_handler.get_or_create_user = Mock( | |
75 | return_value=(user_id, token) | |
76 | ) | |
77 | ||
78 | request, channel = make_request(self.reactor, b"POST", url, request_data) | |
79 | render(request, res, self.reactor) | |
80 | ||
81 | self.assertEquals(channel.result["code"], b"200") | |
82 | ||
83 | det_data = { | |
84 | "user_id": user_id, | |
85 | "access_token": token, | |
86 | "home_server": self.hs.hostname, | |
87 | } | |
88 | self.assertDictContainsSubset(det_data, json.loads(channel.result["body"])) |
14 | 14 | |
15 | 15 | import os |
16 | 16 | |
17 | from mock import Mock | |
18 | ||
19 | from twisted.internet.defer import Deferred | |
17 | import attr | |
18 | from netaddr import IPSet | |
19 | ||
20 | from twisted.internet._resolver import HostResolution | |
21 | from twisted.internet.address import IPv4Address, IPv6Address | |
22 | from twisted.internet.error import DNSLookupError | |
23 | from twisted.python.failure import Failure | |
24 | from twisted.test.proto_helpers import AccumulatingProtocol | |
25 | from twisted.web._newclient import ResponseDone | |
20 | 26 | |
21 | 27 | from synapse.config.repository import MediaStorageProviderConfig |
22 | from synapse.util.logcontext import make_deferred_yieldable | |
23 | 28 | from synapse.util.module_loader import load_module |
24 | 29 | |
25 | 30 | from tests import unittest |
31 | from tests.server import FakeTransport | |
32 | ||
33 | ||
34 | @attr.s | |
35 | class FakeResponse(object): | |
36 | version = attr.ib() | |
37 | code = attr.ib() | |
38 | phrase = attr.ib() | |
39 | headers = attr.ib() | |
40 | body = attr.ib() | |
41 | absoluteURI = attr.ib() | |
42 | ||
43 | @property | |
44 | def request(self): | |
45 | @attr.s | |
46 | class FakeTransport(object): | |
47 | absoluteURI = self.absoluteURI | |
48 | ||
49 | return FakeTransport() | |
50 | ||
51 | def deliverBody(self, protocol): | |
52 | protocol.dataReceived(self.body) | |
53 | protocol.connectionLost(Failure(ResponseDone())) | |
26 | 54 | |
27 | 55 | |
28 | 56 | class URLPreviewTests(unittest.HomeserverTestCase): |
29 | 57 | |
30 | 58 | hijack_auth = True |
31 | 59 | user_id = "@test:user" |
60 | end_content = ( | |
61 | b'<html><head>' | |
62 | b'<meta property="og:title" content="~matrix~" />' | |
63 | b'<meta property="og:description" content="hi" />' | |
64 | b'</head></html>' | |
65 | ) | |
32 | 66 | |
33 | 67 | def make_homeserver(self, reactor, clock): |
34 | 68 | |
38 | 72 | config = self.default_config() |
39 | 73 | config.url_preview_enabled = True |
40 | 74 | config.max_spider_size = 9999999 |
75 | config.url_preview_ip_range_blacklist = IPSet( | |
76 | ( | |
77 | "192.168.1.1", | |
78 | "1.0.0.0/8", | |
79 | "3fff:ffff:ffff:ffff:ffff:ffff:ffff:ffff", | |
80 | "2001:800::/21", | |
81 | ) | |
82 | ) | |
83 | config.url_preview_ip_range_whitelist = IPSet(("1.1.1.1",)) | |
41 | 84 | config.url_preview_url_blacklist = [] |
42 | 85 | config.media_store_path = self.storage_path |
43 | 86 | |
61 | 104 | |
62 | 105 | def prepare(self, reactor, clock, hs): |
63 | 106 | |
64 | self.fetches = [] | |
65 | ||
66 | def get_file(url, output_stream, max_size): | |
67 | """ | |
68 | Returns tuple[int,dict,str,int] of file length, response headers, | |
69 | absolute URI, and response code. | |
70 | """ | |
71 | ||
72 | def write_to(r): | |
73 | data, response = r | |
74 | output_stream.write(data) | |
75 | return response | |
76 | ||
77 | d = Deferred() | |
78 | d.addCallback(write_to) | |
79 | self.fetches.append((d, url)) | |
80 | return make_deferred_yieldable(d) | |
81 | ||
82 | client = Mock() | |
83 | client.get_file = get_file | |
84 | ||
85 | 107 | self.media_repo = hs.get_media_repository_resource() |
86 | preview_url = self.media_repo.children[b'preview_url'] | |
87 | preview_url.client = client | |
88 | self.preview_url = preview_url | |
108 | self.preview_url = self.media_repo.children[b'preview_url'] | |
109 | ||
110 | self.lookups = {} | |
111 | ||
112 | class Resolver(object): | |
113 | def resolveHostName( | |
114 | _self, | |
115 | resolutionReceiver, | |
116 | hostName, | |
117 | portNumber=0, | |
118 | addressTypes=None, | |
119 | transportSemantics='TCP', | |
120 | ): | |
121 | ||
122 | resolution = HostResolution(hostName) | |
123 | resolutionReceiver.resolutionBegan(resolution) | |
124 | if hostName not in self.lookups: | |
125 | raise DNSLookupError("OH NO") | |
126 | ||
127 | for i in self.lookups[hostName]: | |
128 | resolutionReceiver.addressResolved(i[0]('TCP', i[1], portNumber)) | |
129 | resolutionReceiver.resolutionComplete() | |
130 | return resolutionReceiver | |
131 | ||
132 | self.reactor.nameResolver = Resolver() | |
89 | 133 | |
90 | 134 | def test_cache_returns_correct_type(self): |
91 | ||
92 | request, channel = self.make_request( | |
93 | "GET", "url_preview?url=matrix.org", shorthand=False | |
94 | ) | |
95 | request.render(self.preview_url) | |
96 | self.pump() | |
97 | ||
98 | # We've made one fetch | |
99 | self.assertEqual(len(self.fetches), 1) | |
100 | ||
101 | end_content = ( | |
102 | b'<html><head>' | |
103 | b'<meta property="og:title" content="~matrix~" />' | |
104 | b'<meta property="og:description" content="hi" />' | |
105 | b'</head></html>' | |
106 | ) | |
107 | ||
108 | self.fetches[0][0].callback( | |
109 | ( | |
110 | end_content, | |
111 | ( | |
112 | len(end_content), | |
113 | { | |
114 | b"Content-Length": [b"%d" % (len(end_content))], | |
115 | b"Content-Type": [b'text/html; charset="utf8"'], | |
116 | }, | |
117 | "https://example.com", | |
118 | 200, | |
119 | ), | |
120 | ) | |
135 | self.lookups["matrix.org"] = [(IPv4Address, "8.8.8.8")] | |
136 | ||
137 | request, channel = self.make_request( | |
138 | "GET", "url_preview?url=http://matrix.org", shorthand=False | |
139 | ) | |
140 | request.render(self.preview_url) | |
141 | self.pump() | |
142 | ||
143 | client = self.reactor.tcpClients[0][2].buildProtocol(None) | |
144 | server = AccumulatingProtocol() | |
145 | server.makeConnection(FakeTransport(client, self.reactor)) | |
146 | client.makeConnection(FakeTransport(server, self.reactor)) | |
147 | client.dataReceived( | |
148 | b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\nContent-Type: text/html\r\n\r\n" | |
149 | % (len(self.end_content),) | |
150 | + self.end_content | |
121 | 151 | ) |
122 | 152 | |
123 | 153 | self.pump() |
128 | 158 | |
129 | 159 | # Check the cache returns the correct response |
130 | 160 | request, channel = self.make_request( |
131 | "GET", "url_preview?url=matrix.org", shorthand=False | |
132 | ) | |
133 | request.render(self.preview_url) | |
134 | self.pump() | |
135 | ||
136 | # Only one fetch, still, since we'll lean on the cache | |
137 | self.assertEqual(len(self.fetches), 1) | |
161 | "GET", "url_preview?url=http://matrix.org", shorthand=False | |
162 | ) | |
163 | request.render(self.preview_url) | |
164 | self.pump() | |
138 | 165 | |
139 | 166 | # Check the cache response has the same content |
140 | 167 | self.assertEqual(channel.code, 200) |
143 | 170 | ) |
144 | 171 | |
145 | 172 | # Clear the in-memory cache |
146 | self.assertIn("matrix.org", self.preview_url._cache) | |
147 | self.preview_url._cache.pop("matrix.org") | |
148 | self.assertNotIn("matrix.org", self.preview_url._cache) | |
173 | self.assertIn("http://matrix.org", self.preview_url._cache) | |
174 | self.preview_url._cache.pop("http://matrix.org") | |
175 | self.assertNotIn("http://matrix.org", self.preview_url._cache) | |
149 | 176 | |
150 | 177 | # Check the database cache returns the correct response |
151 | 178 | request, channel = self.make_request( |
152 | "GET", "url_preview?url=matrix.org", shorthand=False | |
153 | ) | |
154 | request.render(self.preview_url) | |
155 | self.pump() | |
156 | ||
157 | # Only one fetch, still, since we'll lean on the cache | |
158 | self.assertEqual(len(self.fetches), 1) | |
179 | "GET", "url_preview?url=http://matrix.org", shorthand=False | |
180 | ) | |
181 | request.render(self.preview_url) | |
182 | self.pump() | |
159 | 183 | |
160 | 184 | # Check the cache response has the same content |
161 | 185 | self.assertEqual(channel.code, 200) |
164 | 188 | ) |
165 | 189 | |
166 | 190 | def test_non_ascii_preview_httpequiv(self): |
167 | ||
168 | request, channel = self.make_request( | |
169 | "GET", "url_preview?url=matrix.org", shorthand=False | |
170 | ) | |
171 | request.render(self.preview_url) | |
172 | self.pump() | |
173 | ||
174 | # We've made one fetch | |
175 | self.assertEqual(len(self.fetches), 1) | |
191 | self.lookups["matrix.org"] = [(IPv4Address, "8.8.8.8")] | |
176 | 192 | |
177 | 193 | end_content = ( |
178 | 194 | b'<html><head>' |
182 | 198 | b'</head></html>' |
183 | 199 | ) |
184 | 200 | |
185 | self.fetches[0][0].callback( | |
201 | request, channel = self.make_request( | |
202 | "GET", "url_preview?url=http://matrix.org", shorthand=False | |
203 | ) | |
204 | request.render(self.preview_url) | |
205 | self.pump() | |
206 | ||
207 | client = self.reactor.tcpClients[0][2].buildProtocol(None) | |
208 | server = AccumulatingProtocol() | |
209 | server.makeConnection(FakeTransport(client, self.reactor)) | |
210 | client.makeConnection(FakeTransport(server, self.reactor)) | |
211 | client.dataReceived( | |
186 | 212 | ( |
187 | end_content, | |
188 | ( | |
189 | len(end_content), | |
190 | { | |
191 | b"Content-Length": [b"%d" % (len(end_content))], | |
192 | # This charset=utf-8 should be ignored, because the | |
193 | # document has a meta tag overriding it. | |
194 | b"Content-Type": [b'text/html; charset="utf8"'], | |
195 | }, | |
196 | "https://example.com", | |
197 | 200, | |
198 | ), | |
213 | b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n" | |
214 | b"Content-Type: text/html; charset=\"utf8\"\r\n\r\n" | |
199 | 215 | ) |
216 | % (len(end_content),) | |
217 | + end_content | |
200 | 218 | ) |
201 | 219 | |
202 | 220 | self.pump() |
204 | 222 | self.assertEqual(channel.json_body["og:title"], u"\u0434\u043a\u0430") |
205 | 223 | |
206 | 224 | def test_non_ascii_preview_content_type(self): |
207 | ||
208 | request, channel = self.make_request( | |
209 | "GET", "url_preview?url=matrix.org", shorthand=False | |
210 | ) | |
211 | request.render(self.preview_url) | |
212 | self.pump() | |
213 | ||
214 | # We've made one fetch | |
215 | self.assertEqual(len(self.fetches), 1) | |
225 | self.lookups["matrix.org"] = [(IPv4Address, "8.8.8.8")] | |
216 | 226 | |
217 | 227 | end_content = ( |
218 | 228 | b'<html><head>' |
221 | 231 | b'</head></html>' |
222 | 232 | ) |
223 | 233 | |
224 | self.fetches[0][0].callback( | |
234 | request, channel = self.make_request( | |
235 | "GET", "url_preview?url=http://matrix.org", shorthand=False | |
236 | ) | |
237 | request.render(self.preview_url) | |
238 | self.pump() | |
239 | ||
240 | client = self.reactor.tcpClients[0][2].buildProtocol(None) | |
241 | server = AccumulatingProtocol() | |
242 | server.makeConnection(FakeTransport(client, self.reactor)) | |
243 | client.makeConnection(FakeTransport(server, self.reactor)) | |
244 | client.dataReceived( | |
225 | 245 | ( |
226 | end_content, | |
227 | ( | |
228 | len(end_content), | |
229 | { | |
230 | b"Content-Length": [b"%d" % (len(end_content))], | |
231 | b"Content-Type": [b'text/html; charset="windows-1251"'], | |
232 | }, | |
233 | "https://example.com", | |
234 | 200, | |
235 | ), | |
246 | b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n" | |
247 | b"Content-Type: text/html; charset=\"windows-1251\"\r\n\r\n" | |
236 | 248 | ) |
249 | % (len(end_content),) | |
250 | + end_content | |
237 | 251 | ) |
238 | 252 | |
239 | 253 | self.pump() |
240 | 254 | self.assertEqual(channel.code, 200) |
241 | 255 | self.assertEqual(channel.json_body["og:title"], u"\u0434\u043a\u0430") |
256 | ||
257 | def test_ipaddr(self): | |
258 | """ | |
259 | IP addresses can be previewed directly. | |
260 | """ | |
261 | self.lookups["example.com"] = [(IPv4Address, "8.8.8.8")] | |
262 | ||
263 | request, channel = self.make_request( | |
264 | "GET", "url_preview?url=http://example.com", shorthand=False | |
265 | ) | |
266 | request.render(self.preview_url) | |
267 | self.pump() | |
268 | ||
269 | client = self.reactor.tcpClients[0][2].buildProtocol(None) | |
270 | server = AccumulatingProtocol() | |
271 | server.makeConnection(FakeTransport(client, self.reactor)) | |
272 | client.makeConnection(FakeTransport(server, self.reactor)) | |
273 | client.dataReceived( | |
274 | b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\nContent-Type: text/html\r\n\r\n" | |
275 | % (len(self.end_content),) | |
276 | + self.end_content | |
277 | ) | |
278 | ||
279 | self.pump() | |
280 | self.assertEqual(channel.code, 200) | |
281 | self.assertEqual( | |
282 | channel.json_body, {"og:title": "~matrix~", "og:description": "hi"} | |
283 | ) | |
284 | ||
285 | def test_blacklisted_ip_specific(self): | |
286 | """ | |
287 | Blacklisted IP addresses, found via DNS, are not spidered. | |
288 | """ | |
289 | self.lookups["example.com"] = [(IPv4Address, "192.168.1.1")] | |
290 | ||
291 | request, channel = self.make_request( | |
292 | "GET", "url_preview?url=http://example.com", shorthand=False | |
293 | ) | |
294 | request.render(self.preview_url) | |
295 | self.pump() | |
296 | ||
297 | # No requests made. | |
298 | self.assertEqual(len(self.reactor.tcpClients), 0) | |
299 | self.assertEqual(channel.code, 403) | |
300 | self.assertEqual( | |
301 | channel.json_body, | |
302 | { | |
303 | 'errcode': 'M_UNKNOWN', | |
304 | 'error': 'IP address blocked by IP blacklist entry', | |
305 | }, | |
306 | ) | |
307 | ||
308 | def test_blacklisted_ip_range(self): | |
309 | """ | |
310 | Blacklisted IP ranges, with IPs found via DNS, are not spidered. | |
311 | """ | |
312 | self.lookups["example.com"] = [(IPv4Address, "1.1.1.2")] | |
313 | ||
314 | request, channel = self.make_request( | |
315 | "GET", "url_preview?url=http://example.com", shorthand=False | |
316 | ) | |
317 | request.render(self.preview_url) | |
318 | self.pump() | |
319 | ||
320 | self.assertEqual(channel.code, 403) | |
321 | self.assertEqual( | |
322 | channel.json_body, | |
323 | { | |
324 | 'errcode': 'M_UNKNOWN', | |
325 | 'error': 'IP address blocked by IP blacklist entry', | |
326 | }, | |
327 | ) | |
328 | ||
329 | def test_blacklisted_ip_specific_direct(self): | |
330 | """ | |
331 | Blacklisted IP addresses, accessed directly, are not spidered. | |
332 | """ | |
333 | request, channel = self.make_request( | |
334 | "GET", "url_preview?url=http://192.168.1.1", shorthand=False | |
335 | ) | |
336 | request.render(self.preview_url) | |
337 | self.pump() | |
338 | ||
339 | # No requests made. | |
340 | self.assertEqual(len(self.reactor.tcpClients), 0) | |
341 | self.assertEqual(channel.code, 403) | |
342 | self.assertEqual( | |
343 | channel.json_body, | |
344 | { | |
345 | 'errcode': 'M_UNKNOWN', | |
346 | 'error': 'IP address blocked by IP blacklist entry', | |
347 | }, | |
348 | ) | |
349 | ||
350 | def test_blacklisted_ip_range_direct(self): | |
351 | """ | |
352 | Blacklisted IP ranges, accessed directly, are not spidered. | |
353 | """ | |
354 | request, channel = self.make_request( | |
355 | "GET", "url_preview?url=http://1.1.1.2", shorthand=False | |
356 | ) | |
357 | request.render(self.preview_url) | |
358 | self.pump() | |
359 | ||
360 | self.assertEqual(channel.code, 403) | |
361 | self.assertEqual( | |
362 | channel.json_body, | |
363 | { | |
364 | 'errcode': 'M_UNKNOWN', | |
365 | 'error': 'IP address blocked by IP blacklist entry', | |
366 | }, | |
367 | ) | |
368 | ||
369 | def test_blacklisted_ip_range_whitelisted_ip(self): | |
370 | """ | |
371 | Blacklisted IP addresses that are subsequently whitelisted can | |
372 | be spidered. | |
373 | """ | |
374 | self.lookups["example.com"] = [(IPv4Address, "1.1.1.1")] | |
375 | ||
376 | request, channel = self.make_request( | |
377 | "GET", "url_preview?url=http://example.com", shorthand=False | |
378 | ) | |
379 | request.render(self.preview_url) | |
380 | self.pump() | |
381 | ||
382 | client = self.reactor.tcpClients[0][2].buildProtocol(None) | |
383 | ||
384 | server = AccumulatingProtocol() | |
385 | server.makeConnection(FakeTransport(client, self.reactor)) | |
386 | client.makeConnection(FakeTransport(server, self.reactor)) | |
387 | ||
388 | client.dataReceived( | |
389 | b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\nContent-Type: text/html\r\n\r\n" | |
390 | % (len(self.end_content),) | |
391 | + self.end_content | |
392 | ) | |
393 | ||
394 | self.pump() | |
395 | self.assertEqual(channel.code, 200) | |
396 | self.assertEqual( | |
397 | channel.json_body, {"og:title": "~matrix~", "og:description": "hi"} | |
398 | ) | |
399 | ||
400 | def test_blacklisted_ip_with_external_ip(self): | |
401 | """ | |
402 | If a hostname resolves to a blacklisted IP, even if there's a | |
403 | non-blacklisted one, it will be rejected. | |
404 | """ | |
405 | # Hardcode the URL resolving to the IP we want. | |
406 | self.lookups[u"example.com"] = [ | |
407 | (IPv4Address, "1.1.1.2"), | |
408 | (IPv4Address, "8.8.8.8"), | |
409 | ] | |
410 | ||
411 | request, channel = self.make_request( | |
412 | "GET", "url_preview?url=http://example.com", shorthand=False | |
413 | ) | |
414 | request.render(self.preview_url) | |
415 | self.pump() | |
416 | self.assertEqual(channel.code, 403) | |
417 | self.assertEqual( | |
418 | channel.json_body, | |
419 | { | |
420 | 'errcode': 'M_UNKNOWN', | |
421 | 'error': 'IP address blocked by IP blacklist entry', | |
422 | }, | |
423 | ) | |
424 | ||
425 | def test_blacklisted_ipv6_specific(self): | |
426 | """ | |
427 | Blacklisted IP addresses, found via DNS, are not spidered. | |
428 | """ | |
429 | self.lookups["example.com"] = [ | |
430 | (IPv6Address, "3fff:ffff:ffff:ffff:ffff:ffff:ffff:ffff") | |
431 | ] | |
432 | ||
433 | request, channel = self.make_request( | |
434 | "GET", "url_preview?url=http://example.com", shorthand=False | |
435 | ) | |
436 | request.render(self.preview_url) | |
437 | self.pump() | |
438 | ||
439 | # No requests made. | |
440 | self.assertEqual(len(self.reactor.tcpClients), 0) | |
441 | self.assertEqual(channel.code, 403) | |
442 | self.assertEqual( | |
443 | channel.json_body, | |
444 | { | |
445 | 'errcode': 'M_UNKNOWN', | |
446 | 'error': 'IP address blocked by IP blacklist entry', | |
447 | }, | |
448 | ) | |
449 | ||
450 | def test_blacklisted_ipv6_range(self): | |
451 | """ | |
452 | Blacklisted IP ranges, with IPs found via DNS, are not spidered. | |
453 | """ | |
454 | self.lookups["example.com"] = [(IPv6Address, "2001:800::1")] | |
455 | ||
456 | request, channel = self.make_request( | |
457 | "GET", "url_preview?url=http://example.com", shorthand=False | |
458 | ) | |
459 | request.render(self.preview_url) | |
460 | self.pump() | |
461 | ||
462 | self.assertEqual(channel.code, 403) | |
463 | self.assertEqual( | |
464 | channel.json_body, | |
465 | { | |
466 | 'errcode': 'M_UNKNOWN', | |
467 | 'error': 'IP address blocked by IP blacklist entry', | |
468 | }, | |
469 | ) |
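The blacklist/whitelist cases above all reduce to one decision: after resolving the host, reject the preview if any resolved address falls in `url_preview_ip_range_blacklist`, unless that address is also in `url_preview_ip_range_whitelist`. A rough sketch of that rule using the stdlib `ipaddress` module (Synapse itself uses `netaddr.IPSet`; the helper names are illustrative), with the same ranges these tests configure:

```python
import ipaddress

# The blacklist/whitelist configured in make_homeserver above.
BLACKLIST = [
    ipaddress.ip_network("192.168.1.1/32"),
    ipaddress.ip_network("1.0.0.0/8"),
    ipaddress.ip_network("3fff:ffff:ffff:ffff:ffff:ffff:ffff:ffff/128"),
    ipaddress.ip_network("2001:800::/21"),
]
WHITELIST = [ipaddress.ip_network("1.1.1.1/32")]


def address_allowed(ip_str, blacklist=BLACKLIST, whitelist=WHITELIST):
    """Return False if the address is blacklisted and not whitelisted."""
    ip = ipaddress.ip_address(ip_str)
    # Membership across IP versions is simply False, so mixed lists are fine.
    in_list = lambda nets: any(ip in net for net in nets)
    return not (in_list(blacklist) and not in_list(whitelist))


def host_allowed(resolved_addresses):
    """A host is rejected if *any* resolved address is blocked,
    mirroring test_blacklisted_ip_with_external_ip above."""
    return all(address_allowed(a) for a in resolved_addresses)
```

For example, `host_allowed(["1.1.1.2", "8.8.8.8"])` is rejected even though 8.8.8.8 is fine, while 1.1.1.1 is allowed because the whitelist overrides the 1.0.0.0/8 blacklist entry.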
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2018 New Vector | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | ||
16 | from synapse.rest.well_known import WellKnownResource | |
17 | ||
18 | from tests import unittest | |
19 | ||
20 | ||
21 | class WellKnownTests(unittest.HomeserverTestCase): | |
22 | def setUp(self): | |
23 | super(WellKnownTests, self).setUp() | |
24 | ||
25 | # replace the JsonResource with a WellKnownResource | |
26 | self.resource = WellKnownResource(self.hs) | |
27 | ||
28 | def test_well_known(self): | |
29 | self.hs.config.public_baseurl = "https://tesths" | |
30 | self.hs.config.default_identity_server = "https://testis" | |
31 | ||
32 | request, channel = self.make_request( | |
33 | "GET", | |
34 | "/.well-known/matrix/client", | |
35 | shorthand=False, | |
36 | ) | |
37 | self.render(request) | |
38 | ||
39 | self.assertEqual(request.code, 200) | |
40 | self.assertEqual( | |
41 | channel.json_body, { | |
42 | "m.homeserver": {"base_url": "https://tesths"}, | |
43 | "m.identity_server": {"base_url": "https://testis"}, | |
44 | } | |
45 | ) | |
46 | ||
47 | def test_well_known_no_public_baseurl(self): | |
48 | self.hs.config.public_baseurl = None | |
49 | ||
50 | request, channel = self.make_request( | |
51 | "GET", | |
52 | "/.well-known/matrix/client", | |
53 | shorthand=False, | |
54 | ) | |
55 | self.render(request) | |
56 | ||
57 | self.assertEqual(request.code, 404) |
382 | 382 | self.disconnecting = True |
383 | 383 | |
384 | 384 | def pauseProducing(self): |
385 | if not self.producer: | |
386 | return | |
387 | ||
385 | 388 | self.producer.pauseProducing() |
389 | ||
390 | def resumeProducing(self): | |
391 | if not self.producer: | |
392 | return | |
393 | self.producer.resumeProducing() | |
386 | 394 | |
387 | 395 | def unregisterProducer(self): |
388 | 396 | if not self.producer: |
15 | 15 | |
16 | 16 | from twisted.internet import defer |
17 | 17 | |
18 | from synapse.api.constants import UserTypes | |
19 | ||
18 | 20 | from tests.unittest import HomeserverTestCase |
19 | 21 | |
20 | 22 | FORTY_DAYS = 40 * 24 * 60 * 60 |
27 | 29 | self.store = hs.get_datastore() |
28 | 30 | hs.config.limit_usage_by_mau = True |
29 | 31 | hs.config.max_mau_value = 50 |
32 | ||
30 | 33 | # Advance the clock a bit |
31 | 34 | reactor.advance(FORTY_DAYS) |
32 | 35 | |
38 | 41 | user1_email = "user1@matrix.org" |
39 | 42 | user2 = "@user2:server" |
40 | 43 | user2_email = "user2@matrix.org" |
44 | user3 = "@user3:server" | |
45 | user3_email = "user3@matrix.org" | |
46 | ||
41 | 47 | threepids = [ |
42 | 48 | {'medium': 'email', 'address': user1_email}, |
43 | 49 | {'medium': 'email', 'address': user2_email}, |
50 | {'medium': 'email', 'address': user3_email}, | |
44 | 51 | ] |
45 | user_num = len(threepids) | |
52 | # -1 because user3 is a support user and does not count | |
53 | user_num = len(threepids) - 1 | |
46 | 54 | |
47 | 55 | self.store.register(user_id=user1, token="123", password_hash=None) |
48 | 56 | self.store.register(user_id=user2, token="456", password_hash=None) |
57 | self.store.register( | |
58 | user_id=user3, token="789", | |
59 | password_hash=None, user_type=UserTypes.SUPPORT | |
60 | ) | |
49 | 61 | self.pump() |
50 | 62 | |
51 | 63 | now = int(self.hs.get_clock().time_msec()) |
59 | 71 | |
60 | 72 | active_count = self.store.get_monthly_active_count() |
61 | 73 | |
62 | # Test total counts | |
74 | # Test total counts, ensure user3 (support user) is not counted | |
63 | 75 | self.assertEquals(self.get_success(active_count), user_num) |
64 | 76 | |
65 | 77 | # Test user is marked as active |
148 | 160 | |
149 | 161 | def test_populate_monthly_users_is_guest(self): |
150 | 162 | # Test that guest users are not added to mau list |
151 | user_id = "user_id" | |
163 | user_id = "@user_id:host" | |
152 | 164 | self.store.register( |
153 | 165 | user_id=user_id, token="123", password_hash=None, make_guest=True |
154 | 166 | ) |
220 | 232 | count = self.store.get_registered_reserved_users_count() |
221 | 233 | self.assertEquals(self.get_success(count), len(threepids)) |
222 | 234 | |
235 | def test_support_user_not_add_to_mau_limits(self): | |
236 | support_user_id = "@support:test" | |
237 | count = self.store.get_monthly_active_count() | |
238 | self.pump() | |
239 | self.assertEqual(self.get_success(count), 0) | |
240 | ||
241 | self.store.register( | |
242 | user_id=support_user_id, | |
243 | token="123", | |
244 | password_hash=None, | |
245 | user_type=UserTypes.SUPPORT | |
246 | ) | |
247 | ||
248 | self.store.upsert_monthly_active_user(support_user_id) | |
249 | count = self.store.get_monthly_active_count() | |
250 | self.pump() | |
251 | self.assertEqual(self.get_success(count), 0) | |
252 | ||
223 | 253 | def test_track_monthly_users_without_cap(self): |
224 | 254 | self.hs.config.limit_usage_by_mau = False |
225 | 255 | self.hs.config.mau_stats_only = True |
14 | 14 | |
15 | 15 | |
16 | 16 | from twisted.internet import defer |
17 | ||
18 | from synapse.api.constants import UserTypes | |
17 | 19 | |
18 | 20 | from tests import unittest |
19 | 21 | from tests.utils import setup_test_homeserver |
98 | 100 | user = yield self.store.get_user_by_access_token(self.tokens[0]) |
99 | 101 | self.assertIsNone(user, "access token was not deleted without device_id") |
100 | 102 | |
103 | @defer.inlineCallbacks | |
104 | def test_is_support_user(self): | |
105 | TEST_USER = "@test:test" | |
106 | SUPPORT_USER = "@support:test" | |
107 | ||
108 | res = yield self.store.is_support_user(None) | |
109 | self.assertFalse(res) | |
110 | yield self.store.register(user_id=TEST_USER, token="123", password_hash=None) | |
111 | res = yield self.store.is_support_user(TEST_USER) | |
112 | self.assertFalse(res) | |
113 | ||
114 | yield self.store.register( | |
115 | user_id=SUPPORT_USER, | |
116 | token="456", | |
117 | password_hash=None, | |
118 | user_type=UserTypes.SUPPORT | |
119 | ) | |
120 | res = yield self.store.is_support_user(SUPPORT_USER) | |
121 | self.assertTrue(res) | |
122 | ||
101 | 123 | |
102 | 124 | class TokenGenerator: |
103 | 125 | def __init__(self): |
16 | 16 | from synapse.metrics import InFlightGauge |
17 | 17 | |
18 | 18 | from tests import unittest |
19 | ||
20 | ||
21 | def get_sample_labels_value(sample): | |
22 | """ Extract the labels and values of a sample. | |
23 | ||
24 | prometheus_client 0.5 changed the sample type to a named tuple with more | |
25 | members than the plain tuple had in 0.4 and earlier. This function can | |
26 | extract the labels and value from the sample for both sample types. | |
27 | ||
28 | Args: | |
29 | sample: The sample to get the labels and value from. | |
30 | Returns: | |
31 | A tuple of (labels, value) from the sample. | |
32 | """ | |
33 | ||
34 | # If the sample has a labels and value attribute, use those. | |
35 | if hasattr(sample, "labels") and hasattr(sample, "value"): | |
36 | return sample.labels, sample.value | |
37 | # Otherwise fall back to treating it as a plain 3 tuple. | |
38 | else: | |
39 | _, labels, value = sample | |
40 | return labels, value | |
19 | 41 | |
20 | 42 | |
21 | 43 | class TestMauLimit(unittest.TestCase): |
74 | 96 | for r in gauge.collect(): |
75 | 97 | results[r.name] = { |
76 | 98 | tuple(labels[x] for x in gauge.labels): value |
77 | for _, labels, value in r.samples | |
99 | for labels, value in map(get_sample_labels_value, r.samples) | |
78 | 100 | } |
79 | 101 | |
80 | 102 | return results |
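The hunk above normalises metric samples across prometheus_client versions: 0.5 switched `Metric.samples` from a plain `(name, labels, value)` tuple to a named tuple with extra members. A minimal, self-contained sketch of how the helper handles both shapes (the `Sample` namedtuple below is a stand-in assumption for prometheus_client's type, not imported from the library):

```python
from collections import namedtuple

# Stand-in for the namedtuple prometheus_client >= 0.5 uses for samples;
# the exact field list here is an assumption for illustration.
Sample = namedtuple("Sample", ["name", "labels", "value"])


def get_sample_labels_value(sample):
    """Return (labels, value) for either sample representation."""
    # prometheus_client >= 0.5: named tuple with .labels and .value
    if hasattr(sample, "labels") and hasattr(sample, "value"):
        return sample.labels, sample.value
    # prometheus_client <= 0.4: plain (name, labels, value) tuple
    _, labels, value = sample
    return labels, value


new_style = Sample("synapse_foo", {"type": "bar"}, 1.0)
old_style = ("synapse_foo", {"type": "bar"}, 1.0)

# Both shapes normalise to the same (labels, value) pair.
assert get_sample_labels_value(new_style) == ({"type": "bar"}, 1.0)
assert get_sample_labels_value(old_style) == ({"type": "bar"}, 1.0)
```

This duck-typed check avoids pinning the test suite to a specific prometheus_client release.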
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | 15 | from synapse.api.errors import SynapseError |
16 | from synapse.types import GroupID, RoomAlias, UserID | |
16 | from synapse.types import GroupID, RoomAlias, UserID, map_username_to_mxid_localpart | |
17 | 17 | |
18 | 18 | from tests import unittest |
19 | 19 | from tests.utils import TestHomeServer |
78 | 78 | except SynapseError as exc: |
79 | 79 | self.assertEqual(400, exc.code) |
80 | 80 | self.assertEqual("M_UNKNOWN", exc.errcode) |
81 | ||
82 | ||
83 | class MapUsernameTestCase(unittest.TestCase): | |
84 | def testPassThrough(self): | |
85 | self.assertEqual(map_username_to_mxid_localpart("test1234"), "test1234") | |
86 | ||
87 | def testUpperCase(self): | |
88 | self.assertEqual(map_username_to_mxid_localpart("tEST_1234"), "test_1234") | |
89 | self.assertEqual( | |
90 | map_username_to_mxid_localpart("tEST_1234", case_sensitive=True), | |
91 | "t_e_s_t__1234", | |
92 | ) | |
93 | ||
94 | def testSymbols(self): | |
95 | self.assertEqual( | |
96 | map_username_to_mxid_localpart("test=$?_1234"), | |
97 | "test=3d=24=3f_1234", | |
98 | ) | |
99 | ||
100 | def testLeadingUnderscore(self): | |
101 | self.assertEqual(map_username_to_mxid_localpart("_test_1234"), "=5ftest_1234") | |
102 | ||
103 | def testNonAscii(self): | |
104 | # this should work with either a unicode or a bytes | |
105 | self.assertEqual(map_username_to_mxid_localpart(u'têst'), "t=c3=aast") | |
106 | self.assertEqual( | |
107 | map_username_to_mxid_localpart(u'têst'.encode('utf-8')), | |
108 | "t=c3=aast", | |
109 | ) |
372 | 372 | nonce_str += b"\x00admin" |
373 | 373 | else: |
374 | 374 | nonce_str += b"\x00notadmin" |
375 | ||
375 | 376 | want_mac.update(nonce.encode('ascii') + b"\x00" + nonce_str) |
376 | 377 | want_mac = want_mac.hexdigest() |
377 | 378 |
138 | 138 | config.admin_contact = None |
139 | 139 | config.rc_messages_per_second = 10000 |
140 | 140 | config.rc_message_burst_count = 10000 |
141 | config.saml2_enabled = False | |
142 | config.public_baseurl = None | |
143 | config.default_identity_server = None | |
141 | 144 | |
142 | 145 | config.use_frozen_dicts = False |
143 | 146 |
7 | 7 | python-subunit |
8 | 8 | junitxml |
9 | 9 | coverage |
10 | ||
11 | # needed by some of the tests | |
12 | lxml | |
13 | 10 | |
14 | 11 | # cyptography 2.2 requires setuptools >= 18.5 |
15 | 12 | # |
32 | 29 | [testenv] |
33 | 30 | deps = |
34 | 31 | {[base]deps} |
32 | extras = all | |
35 | 33 | |
36 | 34 | whitelist_externals = |
37 | 35 | sh |
134 | 132 | [testenv:check_isort] |
135 | 133 | skip_install = True |
136 | 134 | deps = isort |
137 | commands = /bin/sh -c "isort -c -sp setup.cfg -rc synapse tests" | |
135 | commands = /bin/sh -c "isort -c -df -sp setup.cfg -rc synapse tests" | |
138 | 136 | |
139 | 137 | [testenv:check-newsfragment] |
140 | 138 | skip_install = True |
150 | 148 | codecov |
151 | 149 | commands = |
152 | 150 | coverage combine |
153 | codecov -X gcov⏎ | |
151 | codecov -X gcov |