Update upstream source from tag 'upstream/1.31.0'
Update to upstream version '1.31.0'
with Debian dir 8dd90b94dcb6d9718ae30723546f2bcc1b556a77
Andrej Shadura
-#!/bin/bash
+#!/usr/bin/env bash
 
 # this script is run by buildkite in a plain `xenial` container; it installs the
 # minimal requirements for tox and hands over to the py35-old tox environment.
-#!/bin/bash
+#!/usr/bin/env bash
 #
 # Test script for 'synapse_port_db', which creates a virtualenv, installs Synapse along
 # with additional dependencies needed for the test (such as coverage or the PostgreSQL
Synapse 1.31.0 (2021-04-06)
===========================

**Note:** As announced in v1.25.0, and in line with the deprecation policy for platform dependencies, this is the last release to support Python 3.5 and PostgreSQL 9.5. Future versions of Synapse will require Python 3.6+ and PostgreSQL 9.6+, as per our [deprecation policy](docs/deprecation_policy.md).

This is also the last release that the Synapse team will be publishing packages for Debian Stretch and Ubuntu Xenial.


Improved Documentation
----------------------

- Add a document describing the deprecation policy for platform dependencies. ([\#9723](https://github.com/matrix-org/synapse/issues/9723))


Internal Changes
----------------

- Revert using `dmypy run` in lint script. ([\#9720](https://github.com/matrix-org/synapse/issues/9720))
- Pin flake8-bugbear's version. ([\#9734](https://github.com/matrix-org/synapse/issues/9734))


Synapse 1.31.0rc1 (2021-03-30)
==============================

Features
--------

- Add support to OpenID Connect login for requiring attributes on the `userinfo` response. Contributed by Hubbe King. ([\#9609](https://github.com/matrix-org/synapse/issues/9609))
- Add initial experimental support for a "space summary" API. ([\#9643](https://github.com/matrix-org/synapse/issues/9643), [\#9652](https://github.com/matrix-org/synapse/issues/9652), [\#9653](https://github.com/matrix-org/synapse/issues/9653))
- Add support for the busy presence state as described in [MSC3026](https://github.com/matrix-org/matrix-doc/pull/3026). ([\#9644](https://github.com/matrix-org/synapse/issues/9644))
- Add support for credentials for proxy authentication in the `HTTPS_PROXY` environment variable. ([\#9657](https://github.com/matrix-org/synapse/issues/9657))


Bugfixes
--------

- Fix a longstanding bug that could cause issues when editing a reply to a message. ([\#9585](https://github.com/matrix-org/synapse/issues/9585))
- Fix the `/capabilities` endpoint to return `m.change_password` as disabled if the local password database is not used for authentication. Contributed by @dklimpel. ([\#9588](https://github.com/matrix-org/synapse/issues/9588))
- Check if local passwords are enabled before setting them for the user. ([\#9636](https://github.com/matrix-org/synapse/issues/9636))
- Fix a bug where federation sending can stall due to `concurrent access` database exceptions when it falls behind. ([\#9639](https://github.com/matrix-org/synapse/issues/9639))
- Fix a bug introduced in Synapse 1.30.1 which meant the suggested `pip` incantation to install an updated `cryptography` was incorrect. ([\#9699](https://github.com/matrix-org/synapse/issues/9699))


Updates to the Docker image
---------------------------

- Speed up Docker builds and make it nicer to test against Complement while developing (install all dependencies before copying the project). ([\#9610](https://github.com/matrix-org/synapse/issues/9610))
- Include [opencontainers labels](https://github.com/opencontainers/image-spec/blob/master/annotations.md#pre-defined-annotation-keys) in the Docker image. ([\#9612](https://github.com/matrix-org/synapse/issues/9612))


Improved Documentation
----------------------

- Clarify that `register_new_matrix_user` is present also when installed via non-pip package. ([\#9074](https://github.com/matrix-org/synapse/issues/9074))
- Update source install documentation to mention platform prerequisites before the source install steps. ([\#9667](https://github.com/matrix-org/synapse/issues/9667))
- Improve worker documentation for fallback/web auth endpoints. ([\#9679](https://github.com/matrix-org/synapse/issues/9679))
- Update the sample configuration for OIDC authentication. ([\#9695](https://github.com/matrix-org/synapse/issues/9695))


Internal Changes
----------------

- Preparatory steps for removing redundant `outlier` data from `event_json.internal_metadata` column. ([\#9411](https://github.com/matrix-org/synapse/issues/9411))
- Add type hints to the caching module. ([\#9442](https://github.com/matrix-org/synapse/issues/9442))
- Introduce flake8-bugbear to the test suite and fix some of its lint violations. ([\#9499](https://github.com/matrix-org/synapse/issues/9499), [\#9659](https://github.com/matrix-org/synapse/issues/9659))
- Add additional type hints to the Homeserver object. ([\#9631](https://github.com/matrix-org/synapse/issues/9631), [\#9638](https://github.com/matrix-org/synapse/issues/9638), [\#9675](https://github.com/matrix-org/synapse/issues/9675), [\#9681](https://github.com/matrix-org/synapse/issues/9681))
- Only save remote cross-signing and device keys if they're different from the current ones. ([\#9634](https://github.com/matrix-org/synapse/issues/9634))
- Rename storage function to fix spelling and not conflict with another function's name. ([\#9637](https://github.com/matrix-org/synapse/issues/9637))
- Improve performance of federation catch up by sending the latest events in the room to the remote, rather than just the last event sent by the local server. ([\#9640](https://github.com/matrix-org/synapse/issues/9640), [\#9664](https://github.com/matrix-org/synapse/issues/9664))
- In the `federation_client` commandline client, stop automatically adding the URL prefix, so that servlets on other prefixes can be tested. ([\#9645](https://github.com/matrix-org/synapse/issues/9645))
- In the `federation_client` commandline client, handle inline `signing_key`s in `homeserver.yaml`. ([\#9647](https://github.com/matrix-org/synapse/issues/9647))
- Fixed some antipattern issues to improve code quality. ([\#9649](https://github.com/matrix-org/synapse/issues/9649))
- Add a storage method for pulling all current user presence state from the database. ([\#9650](https://github.com/matrix-org/synapse/issues/9650))
- Import `HomeServer` from the proper module. ([\#9665](https://github.com/matrix-org/synapse/issues/9665))
- Increase default join ratelimiting burst rate. ([\#9674](https://github.com/matrix-org/synapse/issues/9674))
- Add type hints to third party event rules and visibility modules. ([\#9676](https://github.com/matrix-org/synapse/issues/9676))
- Bump mypy-zope to 0.2.13 to fix "Cannot determine consistent method resolution order (MRO)" errors when running mypy a second time. ([\#9678](https://github.com/matrix-org/synapse/issues/9678))
- Use interpreter from `$PATH` via `/usr/bin/env` instead of absolute paths in various scripts. ([\#9689](https://github.com/matrix-org/synapse/issues/9689))
- Make it possible to use `dmypy`. ([\#9692](https://github.com/matrix-org/synapse/issues/9692))
- Suppress "CryptographyDeprecationWarning: int_from_bytes is deprecated". ([\#9698](https://github.com/matrix-org/synapse/issues/9698))
- Use `dmypy run` in lint script for improved performance in type-checking while developing. ([\#9701](https://github.com/matrix-org/synapse/issues/9701))
- Fix undetected mypy error when using Python 3.6. ([\#9703](https://github.com/matrix-org/synapse/issues/9703))
- Fix type-checking CI on develop. ([\#9709](https://github.com/matrix-org/synapse/issues/9709))


Synapse 1.30.1 (2021-03-26)
===========================

This release is identical to Synapse 1.30.0, with the exception of explicitly
setting a minimum version of Python's Cryptography library to ensure that users
of Synapse are protected from the recent [OpenSSL security advisories](https://mta.openssl.org/pipermail/openssl-announce/2021-March/000198.html),
especially CVE-2021-3449.

Note that Cryptography defaults to bundling its own statically linked copy of
OpenSSL, which means that you may not be protected by your operating system's
security updates.

It's also worth noting that Cryptography no longer supports Python 3.5, so
admins deploying to older environments may not be protected against this or
future vulnerabilities. Synapse will be dropping support for Python 3.5 at the
end of March.


Updates to the Docker image
---------------------------

- Ensure that the docker container has up to date versions of openssl. ([\#9697](https://github.com/matrix-org/synapse/issues/9697))


Internal Changes
----------------

- Enforce that `cryptography` dependency is up to date to ensure it has the most recent openssl patches. ([\#9697](https://github.com/matrix-org/synapse/issues/9697))


Synapse 1.30.0 (2021-03-22)
===========================

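Entry [\#9657] above adds support for credentials in the `HTTPS_PROXY` environment variable. A rough illustration of the `user:password@host` URL form that feature relies on; the helper name below is hypothetical and not Synapse's actual code:

```python
from urllib.parse import urlparse

def proxy_credentials(proxy_url):
    """Extract optional user:password credentials from a proxy URL, if present."""
    parsed = urlparse(proxy_url)
    if parsed.username is None:
        return None
    return parsed.username, parsed.password

# A proxy URL carrying credentials yields a (user, password) pair.
creds = proxy_credentials("http://user:secret@proxy.example.com:8888")
```

A URL without an embedded `user:password` part simply yields no credentials.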
 - [Choosing your server name](#choosing-your-server-name)
 - [Installing Synapse](#installing-synapse)
 - [Installing from source](#installing-from-source)
-- [Platform-Specific Instructions](#platform-specific-instructions)
+- [Platform-specific prerequisites](#platform-specific-prerequisites)
 - [Debian/Ubuntu/Raspbian](#debianubunturaspbian)
 - [ArchLinux](#archlinux)
 - [CentOS/Fedora](#centosfedora)
 - [URL previews](#url-previews)
 - [Troubleshooting Installation](#troubleshooting-installation)
 
+
 ## Choosing your server name
 
 It is important to choose the name for your server before you install Synapse,
 
 (Prebuilt packages are available for some platforms - see [Prebuilt packages](#prebuilt-packages).)
 
+When installing from source please make sure that the [Platform-specific prerequisites](#platform-specific-prerequisites) are already installed.
+
 System requirements:
 
 - POSIX-compliant system (tested on Linux & OS X)
 - Python 3.5.2 or later, up to Python 3.9.
 - At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
 
-Synapse is written in Python but some of the libraries it uses are written in
-C. So before we can install Synapse itself we need a working C compiler and the
-header files for Python C extensions. See [Platform-Specific
-Instructions](#platform-specific-instructions) for information on installing
-these on various platforms.
 
 To install the Synapse homeserver run:
 
     synctl start
 ```
 
-#### Platform-Specific Instructions
+#### Platform-specific prerequisites
+
+Synapse is written in Python but some of the libraries it uses are written in
+C. So before we can install Synapse itself we need a working C compiler and the
+header files for Python C extensions.
 
 ##### Debian/Ubuntu/Raspbian
 
 
 The easiest way to create a new user is to do so from a client like [Element](https://element.io/).
 
-Alternatively you can do so from the command line if you have installed via pip.
-
-This can be done as follows:
-
-```sh
-$ source ~/synapse/env/bin/activate
-$ synctl start # if not already running
-$ register_new_matrix_user -c homeserver.yaml http://localhost:8008
+Alternatively, you can do so from the command line. This can be done as follows:
+
+1. If synapse was installed via pip, activate the virtualenv as follows (if Synapse was
+   installed via a prebuilt package, `register_new_matrix_user` should already be
+   on the search path):
+   ```sh
+   cd ~/synapse
+   source env/bin/activate
+   synctl start # if not already running
+   ```
+2. Run the following command:
+   ```sh
+   register_new_matrix_user -c homeserver.yaml http://localhost:8008
+   ```
+
+This will prompt you to add details for the new user, and will then connect to
+the running Synapse to create the new user. For example:
+```
 New user localpart: erikj
 Password:
 Confirm password:
 Client-Server API are functioning correctly. See the `installation instructions
 <https://github.com/matrix-org/sytest#installing>`_ for details.
 
+
+Platform dependencies
+=====================
+
+Synapse uses a number of platform dependencies such as Python and PostgreSQL,
+and aims to follow supported upstream versions. See the
+`<docs/deprecation_policy.md>`_ document for more details.
+
+
 Troubleshooting
 ===============
 
 People can't accept room invitations from me
 --------------------------------------------
 
 The typical failure mode here is that you send an invitation to someone
 to join a room or direct chat, but when they go to accept it, they get an
 error (typically along the lines of "Invalid signature"). They might see
 something like the following in their logs::
 
 To avoid the warning, administrators using a reverse proxy should ensure that
 the reverse proxy sets `X-Forwarded-Proto` header to `https` or `http` to
-indicate the protocol used by the client. See the `reverse proxy documentation
-<docs/reverse_proxy.md>`_, where the example configurations have been updated to
-show how to set this header.
+indicate the protocol used by the client.
+
+Synapse also requires the `Host` header to be preserved.
+
+See the `reverse proxy documentation <docs/reverse_proxy.md>`_, where the
+example configurations have been updated to show how to set these headers.
 
 (Users of `Caddy <https://caddyserver.com/>`_ are unaffected, since we believe it
 sets `X-Forwarded-Proto` by default.)
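The two header requirements above can be sketched in a few lines. `client_base_url` below is a hypothetical helper, not Synapse's implementation; it shows why a proxied application needs both `X-Forwarded-Proto` and a preserved `Host` header to reconstruct the URL the client actually used:

```python
def client_base_url(headers, default_scheme="http"):
    """Reconstruct the base URL the client used, as seen behind a reverse proxy.

    Relies on the proxy setting X-Forwarded-Proto and preserving Host;
    without them the app only sees its own internal scheme and address.
    """
    scheme = headers.get("X-Forwarded-Proto", default_scheme)
    host = headers["Host"]
    return f"{scheme}://{host}"

# With both headers set correctly, the app can emit correct absolute URLs
# (e.g. in redirects), avoiding the warning described above.
url = client_base_url({"X-Forwarded-Proto": "https", "Host": "matrix.example.com"})
```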
-#!/bin/bash
+#!/usr/bin/env bash
 
 # this script will use the api:
 # https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst
-#!/bin/bash
+#!/usr/bin/env bash
 
 DOMAIN=yourserver.tld
 # add this user as admin in your home server:
 ###
 FROM docker.io/python:${PYTHON_VERSION}-slim as builder
 
+LABEL org.opencontainers.image.url='https://matrix.org/docs/projects/server/synapse'
+LABEL org.opencontainers.image.documentation='https://github.com/matrix-org/synapse/blob/master/docker/README.md'
+LABEL org.opencontainers.image.source='https://github.com/matrix-org/synapse.git'
+LABEL org.opencontainers.image.licenses='Apache-2.0'
+
 # install the OS build deps
 RUN apt-get update && apt-get install -y \
     build-essential \
     libwebp-dev \
     libxml++2.6-dev \
     libxslt1-dev \
+    openssl \
     rustc \
     zlib1g-dev \
  && rm -rf /var/lib/apt/lists/*
 
-# Build dependencies that are not available as wheels, to speed up rebuilds
-RUN pip install --prefix="/install" --no-warn-script-location \
-    cryptography \
-    frozendict \
-    jaeger-client \
-    opentracing \
-    # Match the version constraints of Synapse
-    "prometheus_client>=0.4.0" \
-    psycopg2 \
-    pycparser \
-    pyrsistent \
-    pyyaml \
-    simplejson \
-    threadloop \
-    thrift
-
-# now install synapse and all of the python deps to /install.
-COPY synapse /synapse/synapse/
+# Copy just what we need to pip install
 COPY scripts /synapse/scripts/
 COPY MANIFEST.in README.rst setup.py synctl /synapse/
+COPY synapse/__init__.py /synapse/synapse/__init__.py
+COPY synapse/python_dependencies.py /synapse/synapse/python_dependencies.py
 
+# To speed up rebuilds, install all of the dependencies before we copy over
+# the whole synapse project, so that this layer in the Docker cache can be
+# used while you develop on the source
+#
+# This is aiming at installing the `install_requires` and `extras_require` from `setup.py`
 RUN pip install --prefix="/install" --no-warn-script-location \
     /synapse[all]
+
+# Copy over the rest of the project
+COPY synapse /synapse/synapse/
+
+# Install the synapse package itself and all of its children packages.
+#
+# This is aiming at installing only the `packages=find_packages(...)` from `setup.py`
+RUN pip install --prefix="/install" --no-deps --no-warn-script-location /synapse
 
 ###
 ### Stage 1: runtime
     libwebp6 \
     xmlsec1 \
     libjemalloc2 \
+    libssl-dev \
+    openssl \
  && rm -rf /var/lib/apt/lists/*
 
 COPY --from=builder /install /usr/local
 COPY ./docker/start.py /start.py
 ENTRYPOINT ["/start.py"]
 
 HEALTHCHECK --interval=1m --timeout=5s \
   CMD curl -fSs http://localhost:8008/health || exit 1
-#!/bin/bash
+#!/usr/bin/env bash
 
 # The script to build the Debian package, as run inside the Docker image.
-#!/bin/bash
+#!/usr/bin/env bash
 
 # This script runs the PostgreSQL tests inside a Docker container. It expects
 # the relevant source files to be mounted into /src (done automatically by the
Deprecation Policy for Platform Dependencies
============================================

Synapse has a number of platform dependencies, including Python and PostgreSQL.
This document outlines the policy towards which versions we support, and when we
drop support for versions in the future.


Policy
------

Synapse follows the upstream support life cycles for Python and PostgreSQL,
i.e. when a version reaches End of Life Synapse will withdraw support for that
version in future releases.

Details on the upstream support life cycles for Python and PostgreSQL are
documented at https://endoflife.date/python and
https://endoflife.date/postgresql.


Context
-------

It is important for system admins to have a clear understanding of the platform
requirements of Synapse and its deprecation policies so that they can
effectively plan upgrading their infrastructure ahead of time. This is
especially important in contexts where upgrading the infrastructure requires
auditing and approval from a security team, or where otherwise upgrading is a
long process.

By following the upstream support life cycles Synapse can ensure that its
dependencies continue to get security patches, while not requiring system admins
to constantly update their platform dependencies to the latest versions.
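As a rough sketch of how such a policy might be enforced at startup (a hypothetical helper, not Synapse's actual check; the `(3, 6)` floor is taken from the 1.31.0 release note, which names this the last release to support Python 3.5):

```python
import sys

# Minimum supported Python version under the policy (assumption based on
# the 1.31.0 release note; future releases require Python 3.6+).
MIN_PYTHON = (3, 6)

def python_supported(version_info=sys.version_info):
    """Return True if the running interpreter meets the policy's minimum."""
    return tuple(version_info[:2]) >= MIN_PYTHON

supported = python_supported((3, 5, 2))  # an EOL version fails the check
```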
 ```
 <VirtualHost *:443>
     SSLEngine on
-    ServerName matrix.example.com;
+    ServerName matrix.example.com
 
     RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
     AllowEncodedSlashes NoDecode
+    ProxyPreserveHost on
     ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
     ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
     ProxyPass /_synapse/client http://127.0.0.1:8008/_synapse/client nocanon
 
 <VirtualHost *:8448>
     SSLEngine on
-    ServerName example.com;
+    ServerName example.com
 
     RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
     AllowEncodedSlashes NoDecode
     SecRuleEngine off
 </IfModule>
 ```
+
+**NOTE 3**: Missing `ProxyPreserveHost on` can lead to a redirect loop.
 
 ### HAProxy
 
 #rc_joins:
 #  local:
 #    per_second: 0.1
-#    burst_count: 3
+#    burst_count: 10
 #  remote:
 #    per_second: 0.01
-#    burst_count: 3
+#    burst_count: 10
 #
 #rc_3pid_validation:
 #  per_second: 0.003
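The `per_second`/`burst_count` pair raised above describes a token-bucket style limiter: the bucket holds at most `burst_count` tokens and refills at `per_second` tokens per second. A generic sketch of those semantics (not Synapse's actual ratelimiter implementation):

```python
import time

class TokenBucket:
    """Allow bursts of up to `burst_count` actions, refilling at `per_second`."""

    def __init__(self, per_second, burst_count):
        self.rate = per_second
        self.burst = burst_count
        self.tokens = float(burst_count)  # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens for the time elapsed since the last call, capped
        # at the burst size, then try to spend one token.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With `per_second: 0.1` and `burst_count: 10`, ten joins may happen back-to-back before the limiter throttles to one every ten seconds.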
 # Note that, if this is changed, users authenticating via that provider
 # will no longer be recognised as the same user!
 #
+# (Use "oidc" here if you are migrating from an old "oidc_config"
+# configuration.)
+#
 # idp_name: A user-facing name for this identity provider, which is used to
 # offer the user a choice of login mechanisms.
 #
 # When rendering, the Jinja2 templates are given a 'user' variable,
 # which is set to the claims returned by the UserInfo Endpoint and/or
 # in the ID Token.
+#
+# It is possible to configure Synapse to only allow logins if certain attributes
+# match particular values in the OIDC userinfo. The requirements can be listed under
+# `attribute_requirements` as shown below. All of the listed attributes must
+# match for the login to be permitted. Additional attributes can be added to
+# userinfo by expanding the `scopes` section of the OIDC config to retrieve
+# additional information from the OIDC provider.
+#
+# If the OIDC claim is a list, then the attribute must match any value in the list.
+# Otherwise, it must exactly match the value of the claim. Using the example
+# below, the `family_name` claim MUST be "Stephensson", but the `groups`
+# claim MUST contain "admin".
+#
+# attribute_requirements:
+#   - attribute: family_name
+#     value: "Stephensson"
+#   - attribute: groups
+#     value: "admin"
 #
 # See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
 # for information on how to configure these options.
 #    localpart_template: "{{ user.login }}"
 #    display_name_template: "{{ user.name }}"
 #    email_template: "{{ user.email }}"
-
-# For use with Keycloak
-#
-#- idp_id: keycloak
-#  idp_name: Keycloak
-#  issuer: "https://127.0.0.1:8443/auth/realms/my_realm_name"
-#  client_id: "synapse"
-#  client_secret: "copy secret generated in Keycloak UI"
-#  scopes: ["openid", "profile"]
-
-# For use with Github
-#
-#- idp_id: github
-#  idp_name: Github
-#  idp_brand: github
-#  discover: false
-#  issuer: "https://github.com/"
-#  client_id: "your-client-id" # TO BE FILLED
-#  client_secret: "your-client-secret" # TO BE FILLED
-#  authorization_endpoint: "https://github.com/login/oauth/authorize"
-#  token_endpoint: "https://github.com/login/oauth/access_token"
-#  userinfo_endpoint: "https://api.github.com/user"
-#  scopes: ["read:user"]
-#  user_mapping_provider:
-#    config:
-#      subject_claim: "id"
-#      localpart_template: "{{ user.login }}"
-#      display_name_template: "{{ user.name }}"
+#  attribute_requirements:
+#    - attribute: userGroup
+#      value: "synapseUsers"
 
 
 # Enable Central Authentication Service (CAS) for registration and login.
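The matching rule described in the comments above (list claims match by membership, scalar claims by exact equality, and every requirement must pass) can be sketched as follows; `check_attribute_requirements` is a hypothetical stand-in, not Synapse's implementation:

```python
def check_attribute_requirements(userinfo, requirements):
    """Check attribute_requirements against an OIDC userinfo dict.

    A list-valued claim matches if it contains the required value;
    any other claim must equal the value exactly. All requirements
    must match for the login to be permitted.
    """
    for req in requirements:
        claim = userinfo.get(req["attribute"])
        if isinstance(claim, list):
            if req["value"] not in claim:
                return False
        elif claim != req["value"]:
            return False
    return True
```

With the sample requirements, a userinfo of `{"family_name": "Stephensson", "groups": ["admin", "user"]}` passes: the scalar claim matches exactly and the list claim contains "admin".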
 # Registration/login requests
 ^/_matrix/client/(api/v1|r0|unstable)/login$
 ^/_matrix/client/(r0|unstable)/register$
-^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$
 
 # Event sending requests
 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact
 
 Ensure that all SSO logins go to a single process.
 For multiple workers not handling the SSO endpoints properly, see
 [#7530](https://github.com/matrix-org/synapse/issues/7530) and
 [#9427](https://github.com/matrix-org/synapse/issues/9427).
 
 Note that a HTTP listener with `client` and `federation` resources must be
 [mypy]
 namespace_packages = True
 plugins = mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
-follow_imports = silent
+follow_imports = normal
 check_untyped_defs = True
 show_error_codes = True
 show_traceback = True
 mypy_path = stubs
 warn_unreachable = True
+local_partial_types = True
 
 # To find all folders that pass mypy you run:
 #
   synapse/crypto,
   synapse/event_auth.py,
   synapse/events/builder.py,
+  synapse/events/spamcheck.py,
+  synapse/events/third_party_rules.py,
   synapse/events/validator.py,
-  synapse/events/spamcheck.py,
   synapse/federation,
   synapse/groups,
   synapse/handlers,
   synapse/push,
   synapse/replication,
   synapse/rest,
+  synapse/secrets.py,
   synapse/server.py,
   synapse/server_notices,
   synapse/spam_checker_api,
   synapse/util/metrics.py,
   synapse/util/macaroons.py,
   synapse/util/stringutils.py,
+  synapse/visibility.py,
   tests/replication,
   tests/test_utils,
   tests/handlers/test_password_providers.py,
 parts = line.split("|")
 if len(parts) != 2:
     print("Unable to parse input line %s" % line, file=sys.stderr)
-    exit(1)
+    sys.exit(1)
 
 move_media(parts[0], parts[1], src_paths, dest_paths)
 
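The `exit(1)` to `sys.exit(1)` change above matters because the bare `exit` builtin is injected by the `site` module for interactive use and is not guaranteed to exist (e.g. when running with `python -S`), whereas `sys.exit` is always available and raises a catchable `SystemExit`. A small illustration with a hypothetical `main`:

```python
import sys

def main(argv):
    if len(argv) != 2:
        # sys.exit raises SystemExit with the given code; unlike the
        # interactive-only `exit` builtin, it works in any context.
        sys.exit(1)
    return 0

# SystemExit can be caught like any other exception, which also makes
# the exit path easy to test.
try:
    main(["prog"])
except SystemExit as e:
    code = e.code
```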
-#!/bin/bash
+#!/usr/bin/env bash
 #
 # A script which checks that an appropriate news file has been added on this
 # branch.
-#!/bin/bash
+#!/usr/bin/env bash
 # Find linting errors in Synapse's default config file.
 # Exits with 0 if there are no problems, or another code otherwise.
 
21 | 21 | from typing import Any, Optional |
22 | 22 | from urllib import parse as urlparse |
23 | 23 | |
24 | import nacl.signing | |
25 | 24 | import requests |
25 | import signedjson.key | |
26 | 26 | import signedjson.types |
27 | 27 | import srvlookup |
28 | 28 | import yaml |
41 | 41 | output_bytes = base64.b64encode(input_bytes) |
42 | 42 | output_string = output_bytes[:output_len].decode("ascii") |
43 | 43 | return output_string |
44 | ||
45 | ||
46 | def decode_base64(input_string): | |
47 | """Decode a base64 string to bytes inferring padding from the length of the | |
48 | string.""" | |
49 | ||
50 | input_bytes = input_string.encode("ascii") | |
51 | input_len = len(input_bytes) | |
52 | padding = b"=" * (3 - ((input_len + 3) % 4)) | |
53 | output_len = 3 * ((input_len + 2) // 4) + (input_len + 2) % 4 - 2 | |
54 | output_bytes = base64.b64decode(input_bytes + padding) | |
55 | return output_bytes[:output_len] | |
56 | 44 | |
57 | 45 | |
58 | 46 | def encode_canonical_json(value): |
85 | 73 | json_object["unsigned"] = unsigned |
86 | 74 | |
87 | 75 | return json_object |
88 | ||
89 | ||
90 | NACL_ED25519 = "ed25519" | |
91 | ||
92 | ||
93 | def decode_signing_key_base64(algorithm, version, key_base64): | |
94 | """Decode a base64 encoded signing key | |
95 | Args: | |
96 | algorithm (str): The algorithm the key is for (currently "ed25519"). | |
97 | version (str): Identifies this key out of the keys for this entity. | |
98 | key_base64 (str): Base64 encoded bytes of the key. | |
99 | Returns: | |
100 | A SigningKey object. | |
101 | """ | |
102 | if algorithm == NACL_ED25519: | |
103 | key_bytes = decode_base64(key_base64) | |
104 | key = nacl.signing.SigningKey(key_bytes) | |
105 | key.version = version | |
106 | key.alg = NACL_ED25519 | |
107 | return key | |
108 | else: | |
109 | raise ValueError("Unsupported algorithm %s" % (algorithm,)) | |
110 | ||
111 | ||
112 | def read_signing_keys(stream): | |
113 | """Reads a list of keys from a stream | |
114 | Args: | |
115 | stream : A stream to iterate for keys. | |
116 | Returns: | |
117 | list of SigningKey objects. | |
118 | """ | |
119 | keys = [] | |
120 | for line in stream: | |
121 | algorithm, version, key_base64 = line.split() | |
122 | keys.append(decode_signing_key_base64(algorithm, version, key_base64)) | |
123 | return keys | |
124 | 76 | |
125 | 77 | |
126 | 78 | def request( |
222 | 174 | parser.add_argument("--body", help="Data to send as the body of the HTTP request") |
223 | 175 | |
224 | 176 | parser.add_argument( |
225 | "path", help="request path. We will add '/_matrix/federation/v1/' to this." | |
177 | "path", help="request path, including the '/_matrix/federation/...' prefix." | |
226 | 178 | ) |
227 | 179 | |
228 | 180 | args = parser.parse_args() |
229 | 181 | |
230 | if not args.server_name or not args.signing_key_path: | |
182 | args.signing_key = None | |
183 | if args.signing_key_path: | |
184 | with open(args.signing_key_path) as f: | |
185 | args.signing_key = f.readline() | |
186 | ||
187 | if not args.server_name or not args.signing_key: | |
231 | 188 | read_args_from_config(args) |
232 | 189 | |
233 | with open(args.signing_key_path) as f: | |
234 | key = read_signing_keys(f)[0] | |
190 | algorithm, version, key_base64 = args.signing_key.split() | |
191 | key = signedjson.key.decode_signing_key_base64(algorithm, version, key_base64) | |
235 | 192 | |
236 | 193 | result = request( |
237 | 194 | args.method, |
238 | 195 | args.server_name, |
239 | 196 | key, |
240 | 197 | args.destination, |
241 | "/_matrix/federation/v1/" + args.path, | |
198 | args.path, | |
242 | 199 | content=args.body, |
243 | 200 | ) |
244 | 201 | |
254 | 211 | def read_args_from_config(args): |
255 | 212 | with open(args.config, "r") as fh: |
256 | 213 | config = yaml.safe_load(fh) |
214 | ||
257 | 215 | if not args.server_name: |
258 | 216 | args.server_name = config["server_name"] |
259 | if not args.signing_key_path: | |
260 | args.signing_key_path = config["signing_key_path"] | |
217 | ||
218 | if not args.signing_key: | |
219 | if "signing_key" in config: | |
220 | args.signing_key = config["signing_key"] | |
221 | else: | |
222 | with open(config["signing_key_path"]) as f: | |
223 | args.signing_key = f.readline() | |
261 | 224 | |
262 | 225 | |
263 | 226 | class MatrixConnectionAdapter(HTTPAdapter): |
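The argument-handling changes above introduce a precedence order for finding the signing key: an explicit `--signing-key-path` on the command line wins, then an inline `signing_key` entry in the config file, then the file named by the config's `signing_key_path`. A sketch of that precedence with a hypothetical helper (file access is injectable here so it can be exercised without touching disk; the real script just uses `open()`/`readline()`):

```python
from io import StringIO

def resolve_signing_key(cli_key_line, config, open_file=lambda path: StringIO("")):
    """Resolve a signing key with the precedence the patch introduces.

    cli_key_line: first line of the file given via --signing-key-path, or None.
    config:       parsed homeserver config dict.
    """
    if cli_key_line:
        return cli_key_line
    if "signing_key" in config:
        return config["signing_key"]
    # Fall back to reading the file the config points at.
    return open_file(config["signing_key_path"]).readline()
```
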
0 | #!/bin/bash | |
0 | #!/usr/bin/env bash | |
1 | 1 | # |
2 | 2 | # Runs linting scripts over the local Synapse checkout |
3 | 3 | # isort - sorts import statements |
0 | #!/bin/bash | |
0 | #!/usr/bin/env bash | |
1 | 1 | # |
2 | 2 | # This script generates SQL files for creating a brand new Synapse DB with the latest |
3 | 3 | # schema, on both SQLite3 and Postgres. |
0 | #!/bin/bash | |
0 | #!/usr/bin/env bash | |
1 | 1 | |
2 | 2 | set -e |
3 | 3 | |
5 | 5 | # next PR number. |
6 | 6 | CURRENT_NUMBER=`curl -s "https://api.github.com/repos/matrix-org/synapse/issues?state=all&per_page=1" | jq -r ".[0].number"` |
7 | 7 | CURRENT_NUMBER=$((CURRENT_NUMBER+1)) |
8 | echo $CURRENT_NUMBER⏎ | |
8 | echo $CURRENT_NUMBER |
17 | 17 | # E203: whitespace before ':' (which is contrary to pep8?) |
18 | 18 | # E731: do not assign a lambda expression, use a def |
19 | 19 | # E501: Line too long (black enforces this for us) |
20 | ignore=W503,W504,E203,E731,E501 | |
20 | # B00*: Subsection of the bugbear suite (TODO: add in remaining fixes) | |
21 | ignore=W503,W504,E203,E731,E501,B006,B007,B008 | |
21 | 22 | |
22 | 23 | [isort] |
23 | 24 | line_length = 88 |
98 | 98 | "isort==5.7.0", |
99 | 99 | "black==20.8b1", |
100 | 100 | "flake8-comprehensions", |
101 | "flake8-bugbear==21.3.2", | |
101 | 102 | "flake8", |
102 | 103 | ] |
103 | 104 | |
104 | CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.812", "mypy-zope==0.2.11"] | |
105 | CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.812", "mypy-zope==0.2.13"] | |
105 | 106 | |
106 | 107 | # Dependencies which are exclusively required by unit test code. This is |
107 | 108 | # NOT a list of all modules that are necessary to run the unit tests. |
47 | 47 | except ImportError: |
48 | 48 | pass |
49 | 49 | |
50 | __version__ = "1.30.0" | |
50 | __version__ = "1.31.0" | |
51 | 51 | |
52 | 52 | if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)): |
53 | 53 | # We import here so that we don't have to install a bunch of deps when |
557 | 557 | Returns: |
558 | 558 | bool: False if no access_token was given, True otherwise. |
559 | 559 | """ |
560 | # This will always be set by the time Twisted calls us. | |
561 | assert request.args is not None | |
562 | ||
560 | 563 | query_params = request.args.get(b"access_token") |
561 | 564 | auth_headers = request.requestHeaders.getRawHeaders(b"Authorization") |
562 | 565 | return bool(query_params) or bool(auth_headers) |
573 | 576 | MissingClientTokenError: If there isn't a single access_token in the |
574 | 577 | request |
575 | 578 | """ |
579 | # This will always be set by the time Twisted calls us. | |
580 | assert request.args is not None | |
576 | 581 | |
577 | 582 | auth_headers = request.requestHeaders.getRawHeaders(b"Authorization") |
578 | 583 | query_params = request.args.get(b"access_token") |
50 | 50 | OFFLINE = "offline" |
51 | 51 | UNAVAILABLE = "unavailable" |
52 | 52 | ONLINE = "online" |
53 | BUSY = "org.matrix.msc3026.busy" | |
53 | 54 | |
54 | 55 | |
55 | 56 | class JoinRules: |
98 | 99 | Retention = "m.room.retention" |
99 | 100 | |
100 | 101 | Dummy = "org.matrix.dummy_event" |
102 | ||
103 | MSC1772_SPACE_CHILD = "org.matrix.msc1772.space.child" | |
104 | MSC1772_SPACE_PARENT = "org.matrix.msc1772.space.parent" | |
101 | 105 | |
102 | 106 | |
103 | 107 | class EduTypes: |
159 | 163 | # cf https://github.com/matrix-org/matrix-doc/pull/2228 |
160 | 164 | SELF_DESTRUCT_AFTER = "org.matrix.self_destruct_after" |
161 | 165 | |
166 | # cf https://github.com/matrix-org/matrix-doc/pull/1772 | |
167 | MSC1772_ROOM_TYPE = "org.matrix.msc1772.type" | |
168 | ||
162 | 169 | |
163 | 170 | class RoomEncryptionAlgorithms: |
164 | 171 | MEGOLM_V1_AES_SHA2 = "m.megolm.v1.aes-sha2" |
21 | 21 | try: |
22 | 22 | python_dependencies.check_requirements() |
23 | 23 | except python_dependencies.DependencyException as e: |
24 | sys.stderr.writelines(e.message) | |
24 | sys.stderr.writelines( | |
25 | e.message # noqa: B306, DependencyException.message is a property | |
26 | ) | |
25 | 27 | sys.exit(1) |
26 | 28 | |
27 | 29 |
20 | 20 | import socket |
21 | 21 | import sys |
22 | 22 | import traceback |
23 | import warnings | |
23 | 24 | from typing import Awaitable, Callable, Iterable |
24 | 25 | |
26 | from cryptography.utils import CryptographyDeprecationWarning | |
25 | 27 | from typing_extensions import NoReturn |
26 | 28 | |
27 | 29 | from twisted.internet import defer, error, reactor |
192 | 194 | for host in bind_addresses: |
193 | 195 | logger.info("Starting metrics listener on %s:%d", host, port) |
194 | 196 | start_http_server(port, addr=host, registry=RegistryProxy) |
197 | ||
198 | ||
199 | def listen_manhole(bind_addresses: Iterable[str], port: int, manhole_globals: dict): | |
200 | # twisted.conch.manhole 21.1.0 uses "int_from_bytes", which produces a confusing | |
201 | # warning. It's fixed by https://github.com/twisted/twisted/pull/1522), so | |
202 | # suppress the warning for now. | |
203 | warnings.filterwarnings( | |
204 | action="ignore", | |
205 | category=CryptographyDeprecationWarning, | |
206 | message="int_from_bytes is deprecated", | |
207 | ) | |
208 | ||
209 | from synapse.util.manhole import manhole | |
210 | ||
211 | listen_tcp( | |
212 | bind_addresses, | |
213 | port, | |
214 | manhole(username="matrix", password="rabbithole", globals=manhole_globals), | |
215 | ) | |
195 | 216 | |
196 | 217 | |
197 | 218 | def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50): |
146 | 146 | from synapse.types import ReadReceipt |
147 | 147 | from synapse.util.async_helpers import Linearizer |
148 | 148 | from synapse.util.httpresourcetree import create_resource_tree |
149 | from synapse.util.manhole import manhole | |
150 | 149 | from synapse.util.versionstring import get_version_string |
151 | 150 | |
152 | 151 | logger = logging.getLogger("synapse.app.generic_worker") |
301 | 300 | self.send_stop_syncing, UPDATE_SYNCING_USERS_MS |
302 | 301 | ) |
303 | 302 | |
303 | self._busy_presence_enabled = hs.config.experimental.msc3026_enabled | |
304 | ||
304 | 305 | hs.get_reactor().addSystemEventTrigger( |
305 | 306 | "before", |
306 | 307 | "shutdown", |
438 | 439 | PresenceState.ONLINE, |
439 | 440 | PresenceState.UNAVAILABLE, |
440 | 441 | PresenceState.OFFLINE, |
442 | PresenceState.BUSY, | |
441 | 443 | ) |
442 | if presence not in valid_presence: | |
444 | ||
445 | if presence not in valid_presence or ( | |
446 | presence == PresenceState.BUSY and not self._busy_presence_enabled | |
447 | ): | |
443 | 448 | raise SynapseError(400, "Invalid presence state") |
444 | 449 | |
445 | 450 | user_id = target_user.to_string() |
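The validation change above admits the new MSC3026 `busy` state only when the experimental flag is enabled. Distilled into a small standalone predicate (illustrative names, mirroring the gating logic in the hunk):

```python
# BUSY maps to "org.matrix.msc3026.busy" in the real constants; plain
# strings are used here to keep the sketch self-contained.
VALID_PRESENCE = ("online", "unavailable", "offline", "busy")

def is_valid_presence(presence: str, busy_enabled: bool) -> bool:
    """Accept a presence state, rejecting 'busy' unless MSC3026 is enabled."""
    if presence not in VALID_PRESENCE:
        return False
    if presence == "busy" and not busy_enabled:
        return False
    return True
```
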
633 | 638 | if listener.type == "http": |
634 | 639 | self._listen_http(listener) |
635 | 640 | elif listener.type == "manhole": |
636 | _base.listen_tcp( | |
637 | listener.bind_addresses, | |
638 | listener.port, | |
639 | manhole( | |
640 | username="matrix", password="rabbithole", globals={"hs": self} | |
641 | ), | |
641 | _base.listen_manhole( | |
642 | listener.bind_addresses, listener.port, manhole_globals={"hs": self} | |
642 | 643 | ) |
643 | 644 | elif listener.type == "metrics": |
644 | 645 | if not self.get_config().enable_metrics: |
785 | 786 | |
786 | 787 | self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer") |
787 | 788 | |
788 | def on_start(self): | |
789 | # There may be some events that are persisted but haven't been sent, | |
790 | # so send them now. | |
791 | self.federation_sender.notify_new_events( | |
792 | self.store.get_room_max_stream_ordering() | |
793 | ) | |
794 | ||
795 | 789 | def wake_destination(self, server: str): |
796 | 790 | self.federation_sender.wake_destination(server) |
797 | 791 |
66 | 66 | from synapse.storage.engines import IncorrectDatabaseSetup |
67 | 67 | from synapse.storage.prepare_database import UpgradeDatabaseException |
68 | 68 | from synapse.util.httpresourcetree import create_resource_tree |
69 | from synapse.util.manhole import manhole | |
70 | 69 | from synapse.util.module_loader import load_module |
71 | 70 | from synapse.util.versionstring import get_version_string |
72 | 71 | |
287 | 286 | if listener.type == "http": |
288 | 287 | self._listening_services.extend(self._listener_http(config, listener)) |
289 | 288 | elif listener.type == "manhole": |
290 | listen_tcp( | |
291 | listener.bind_addresses, | |
292 | listener.port, | |
293 | manhole( | |
294 | username="matrix", password="rabbithole", globals={"hs": self} | |
295 | ), | |
289 | _base.listen_manhole( | |
290 | listener.bind_addresses, listener.port, manhole_globals={"hs": self} | |
296 | 291 | ) |
297 | 292 | elif listener.type == "replication": |
298 | 293 | services = listen_tcp( |
23 | 23 | _CACHE_PREFIX = "SYNAPSE_CACHE_FACTOR" |
24 | 24 | |
25 | 25 | # Map from canonicalised cache name to cache. |
26 | _CACHES = {} | |
26 | _CACHES = {} # type: Dict[str, Callable[[float], None]] | |
27 | 27 | |
28 | 28 | # a lock on the contents of _CACHES |
29 | 29 | _CACHES_LOCK = threading.Lock() |
58 | 58 | return cache_name.lower() |
59 | 59 | |
60 | 60 | |
61 | def add_resizable_cache(cache_name: str, cache_resize_callback: Callable): | |
61 | def add_resizable_cache( | |
62 | cache_name: str, cache_resize_callback: Callable[[float], None] | |
63 | ): | |
62 | 64 | """Register a cache whose size can dynamically change
63 | 65 | |
64 | 66 | Args: |
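The annotation added above pins down the callback shape: each resizable cache registers a callable that takes the new cache factor (a float). A compact sketch of that registry pattern (standalone re-creation, not the real module):

```python
from typing import Callable, Dict

# Map from canonicalised cache name to its resize callback.
_CACHES: Dict[str, Callable[[float], None]] = {}

def add_resizable_cache(
    cache_name: str, cache_resize_callback: Callable[[float], None]
) -> None:
    """Register a cache so its size can be adjusted later."""
    _CACHES[cache_name.lower()] = cache_resize_callback

def resize_all(cache_factor: float) -> None:
    """Apply a new cache factor to every registered cache."""
    for callback in _CACHES.values():
        callback(cache_factor)
```
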
26 | 26 | |
27 | 27 | # MSC2858 (multiple SSO identity providers) |
28 | 28 | self.msc2858_enabled = experimental.get("msc2858_enabled", False) # type: bool |
29 | # Spaces (MSC1772, MSC2946, etc) | |
30 | self.spaces_enabled = experimental.get("spaces_enabled", False) # type: bool | |
31 | # MSC3026 (busy presence state) | |
32 | self.msc3026_enabled = experimental.get("msc3026_enabled", False) # type: bool |
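These flags are read from the experimental section of the homeserver config; a sketch of how an operator would opt in (assuming the `experimental_features` section name used by Synapse configs of this era):

```yaml
# homeserver.yaml (illustrative fragment)
experimental_features:
  # MSC3026: allow the "busy" presence state
  msc3026_enabled: true
  # MSC1772 and friends: spaces support
  spaces_enabled: true
```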
403 | 403 | try: |
404 | 404 | jsonschema.validate(key_servers, TRUSTED_KEY_SERVERS_SCHEMA) |
405 | 405 | except jsonschema.ValidationError as e: |
406 | raise ConfigError("Unable to parse 'trusted_key_servers': " + e.message) | |
406 | raise ConfigError( | |
407 | "Unable to parse 'trusted_key_servers': {}".format( | |
408 | e.message # noqa: B306, jsonschema.ValidationError.message is a valid attribute | |
409 | ) | |
410 | ) | |
407 | 411 | |
408 | 412 | for server in key_servers: |
409 | 413 | server_name = server["server_name"] |
55 | 55 | try: |
56 | 56 | check_requirements("sentry") |
57 | 57 | except DependencyException as e: |
58 | raise ConfigError(e.message) | |
58 | raise ConfigError( | |
59 | e.message # noqa: B306, DependencyException.message is a property | |
60 | ) | |
59 | 61 | |
60 | 62 | self.sentry_dsn = config["sentry"].get("dsn") |
61 | 63 | if not self.sentry_dsn: |
14 | 14 | # limitations under the License. |
15 | 15 | |
16 | 16 | from collections import Counter |
17 | from typing import Iterable, Mapping, Optional, Tuple, Type | |
17 | from typing import Iterable, List, Mapping, Optional, Tuple, Type | |
18 | 18 | |
19 | 19 | import attr |
20 | 20 | |
21 | 21 | from synapse.config._util import validate_config |
22 | from synapse.config.sso import SsoAttributeRequirement | |
22 | 23 | from synapse.python_dependencies import DependencyException, check_requirements |
23 | 24 | from synapse.types import Collection, JsonDict |
24 | 25 | from synapse.util.module_loader import load_module |
40 | 41 | try: |
41 | 42 | check_requirements("oidc") |
42 | 43 | except DependencyException as e: |
43 | raise ConfigError(e.message) from e | |
44 | raise ConfigError( | |
45 | e.message # noqa: B306, DependencyException.message is a property | |
46 | ) from e | |
44 | 47 | |
45 | 48 | # check we don't have any duplicate idp_ids now. (The SSO handler will also |
46 | 49 | # check for duplicates when the REST listeners get registered, but that happens |
75 | 78 | # Note that, if this is changed, users authenticating via that provider |
76 | 79 | # will no longer be recognised as the same user! |
77 | 80 | # |
81 | # (Use "oidc" here if you are migrating from an old "oidc_config" | |
82 | # configuration.) | |
83 | # | |
78 | 84 | # idp_name: A user-facing name for this identity provider, which is used to |
79 | 85 | # offer the user a choice of login mechanisms. |
80 | 86 | # |
189 | 195 | # When rendering, the Jinja2 templates are given a 'user' variable, |
190 | 196 | # which is set to the claims returned by the UserInfo Endpoint and/or |
191 | 197 | # in the ID Token. |
198 | # | |
199 | # It is possible to configure Synapse to only allow logins if certain attributes | |
200 | # match particular values in the OIDC userinfo. The requirements can be listed under | |
201 | # `attribute_requirements` as shown below. All of the listed attributes must | |
202 | # match for the login to be permitted. Additional attributes can be added to | |
203 | # userinfo by expanding the `scopes` section of the OIDC config to retrieve | |
204 | # additional information from the OIDC provider. | |
205 | # | |
206 | # If the OIDC claim is a list, then the attribute must match any value in the list. | |
207 | # Otherwise, it must exactly match the value of the claim. Using the example | |
208 | # below, the `family_name` claim MUST be "Stephensson", but the `groups` | |
209 | # claim MUST contain "admin". | |
210 | # | |
211 | # attribute_requirements: | |
212 | # - attribute: family_name | |
213 | # value: "Stephensson" | |
214 | # - attribute: groups | |
215 | # value: "admin" | |
192 | 216 | # |
193 | 217 | # See https://github.com/matrix-org/synapse/blob/master/docs/openid.md |
194 | 218 | # for information on how to configure these options. |
222 | 246 | # localpart_template: "{{{{ user.login }}}}" |
223 | 247 | # display_name_template: "{{{{ user.name }}}}" |
224 | 248 | # email_template: "{{{{ user.email }}}}" |
225 | ||
226 | # For use with Keycloak | |
227 | # | |
228 | #- idp_id: keycloak | |
229 | # idp_name: Keycloak | |
230 | # issuer: "https://127.0.0.1:8443/auth/realms/my_realm_name" | |
231 | # client_id: "synapse" | |
232 | # client_secret: "copy secret generated in Keycloak UI" | |
233 | # scopes: ["openid", "profile"] | |
234 | ||
235 | # For use with Github | |
236 | # | |
237 | #- idp_id: github | |
238 | # idp_name: Github | |
239 | # idp_brand: github | |
240 | # discover: false | |
241 | # issuer: "https://github.com/" | |
242 | # client_id: "your-client-id" # TO BE FILLED | |
243 | # client_secret: "your-client-secret" # TO BE FILLED | |
244 | # authorization_endpoint: "https://github.com/login/oauth/authorize" | |
245 | # token_endpoint: "https://github.com/login/oauth/access_token" | |
246 | # userinfo_endpoint: "https://api.github.com/user" | |
247 | # scopes: ["read:user"] | |
248 | # user_mapping_provider: | |
249 | # config: | |
250 | # subject_claim: "id" | |
251 | # localpart_template: "{{{{ user.login }}}}" | |
252 | # display_name_template: "{{{{ user.name }}}}" | |
249 | # attribute_requirements: | |
250 | # - attribute: userGroup | |
251 | # value: "synapseUsers" | |
253 | 252 | """.format( |
254 | 253 | mapping_provider=DEFAULT_USER_MAPPING_PROVIDER |
255 | 254 | ) |
328 | 327 | }, |
329 | 328 | "allow_existing_users": {"type": "boolean"}, |
330 | 329 | "user_mapping_provider": {"type": ["object", "null"]}, |
330 | "attribute_requirements": { | |
331 | "type": "array", | |
332 | "items": SsoAttributeRequirement.JSON_SCHEMA, | |
333 | }, | |
331 | 334 | }, |
332 | 335 | } |
333 | 336 | |
464 | 467 | jwt_header=client_secret_jwt_key_config["jwt_header"], |
465 | 468 | jwt_payload=client_secret_jwt_key_config.get("jwt_payload", {}), |
466 | 469 | ) |
470 | # parse attribute_requirements from config (list of dicts) into a list of SsoAttributeRequirement | |
471 | attribute_requirements = [ | |
472 | SsoAttributeRequirement(**x) | |
473 | for x in oidc_config.get("attribute_requirements", []) | |
474 | ] | |
467 | 475 | |
468 | 476 | return OidcProviderConfig( |
469 | 477 | idp_id=idp_id, |
487 | 495 | allow_existing_users=oidc_config.get("allow_existing_users", False), |
488 | 496 | user_mapping_provider_class=user_mapping_provider_class, |
489 | 497 | user_mapping_provider_config=user_mapping_provider_config, |
498 | attribute_requirements=attribute_requirements, | |
490 | 499 | ) |
491 | 500 | |
492 | 501 | |
576 | 585 | |
577 | 586 | # the config of the user mapping provider |
578 | 587 | user_mapping_provider_config = attr.ib() |
588 | ||
589 | # required attributes to require in userinfo to allow login/registration | |
590 | attribute_requirements = attr.ib(type=List[SsoAttributeRequirement]) |
94 | 94 | |
95 | 95 | self.rc_joins_local = RateLimitConfig( |
96 | 96 | config.get("rc_joins", {}).get("local", {}), |
97 | defaults={"per_second": 0.1, "burst_count": 3}, | |
97 | defaults={"per_second": 0.1, "burst_count": 10}, | |
98 | 98 | ) |
99 | 99 | self.rc_joins_remote = RateLimitConfig( |
100 | 100 | config.get("rc_joins", {}).get("remote", {}), |
101 | defaults={"per_second": 0.01, "burst_count": 3}, | |
101 | defaults={"per_second": 0.01, "burst_count": 10}, | |
102 | 102 | ) |
103 | 103 | |
104 | 104 | # Ratelimit cross-user key requests: |
186 | 186 | #rc_joins: |
187 | 187 | # local: |
188 | 188 | # per_second: 0.1 |
189 | # burst_count: 3 | |
189 | # burst_count: 10 | |
190 | 190 | # remote: |
191 | 191 | # per_second: 0.01 |
192 | # burst_count: 3 | |
192 | # burst_count: 10 | |
193 | 193 | # |
194 | 194 | #rc_3pid_validation: |
195 | 195 | # per_second: 0.003 |
175 | 175 | check_requirements("url_preview") |
176 | 176 | |
177 | 177 | except DependencyException as e: |
178 | raise ConfigError(e.message) | |
178 | raise ConfigError( | |
179 | e.message # noqa: B306, DependencyException.message is a property | |
180 | ) | |
179 | 181 | |
180 | 182 | if "url_preview_ip_range_blacklist" not in config: |
181 | 183 | raise ConfigError( |
75 | 75 | try: |
76 | 76 | check_requirements("saml2") |
77 | 77 | except DependencyException as e: |
78 | raise ConfigError(e.message) | |
78 | raise ConfigError( | |
79 | e.message # noqa: B306, DependencyException.message is a property | |
80 | ) | |
79 | 81 | |
80 | 82 | self.saml2_enabled = True |
81 | 83 |
38 | 38 | try: |
39 | 39 | check_requirements("opentracing") |
40 | 40 | except DependencyException as e: |
41 | raise ConfigError(e.message) | |
41 | raise ConfigError( | |
42 | e.message # noqa: B306, DependencyException.message is a property | |
43 | ) | |
42 | 44 | |
43 | 45 | # The tracer is enabled so sanitize the config |
44 | 46 |
190 | 190 | # ... we further assume that SSLClientConnectionCreator has set the |
191 | 191 | # '_synapse_tls_verifier' attribute to a ConnectionVerifier object. |
192 | 192 | tls_protocol._synapse_tls_verifier.verify_context_info_cb(ssl_connection, where) |
193 | except: # noqa: E722, taken from the twisted implementation | |
193 | except BaseException: # taken from the twisted implementation | |
194 | 194 | logger.exception("Error during info_callback") |
195 | 195 | f = Failure() |
196 | 196 | tls_protocol.failVerification(f) |
218 | 218 | # ... and we also gut-wrench a '_synapse_tls_verifier' attribute into the |
219 | 219 | # tls_protocol so that the SSL context's info callback has something to |
220 | 220 | # call to do the cert verification. |
221 | setattr(tls_protocol, "_synapse_tls_verifier", self._verifier) | |
221 | tls_protocol._synapse_tls_verifier = self._verifier | |
222 | 222 | return connection |
223 | 223 | |
224 | 224 |
56 | 56 | from synapse.util.retryutils import NotRetryingDestination |
57 | 57 | |
58 | 58 | if TYPE_CHECKING: |
59 | from synapse.app.homeserver import HomeServer | |
59 | from synapse.server import HomeServer | |
60 | 60 | |
61 | 61 | logger = logging.getLogger(__name__) |
62 | 62 |
97 | 97 | |
98 | 98 | |
99 | 99 | class _EventInternalMetadata: |
100 | __slots__ = ["_dict", "stream_ordering"] | |
100 | __slots__ = ["_dict", "stream_ordering", "outlier"] | |
101 | 101 | |
102 | 102 | def __init__(self, internal_metadata_dict: JsonDict): |
103 | 103 | # we have to copy the dict, because it turns out that the same dict is |
107 | 107 | # the stream ordering of this event. None, until it has been persisted. |
108 | 108 | self.stream_ordering = None # type: Optional[int] |
109 | 109 | |
110 | outlier = DictProperty("outlier") # type: bool | |
110 | # whether this event is an outlier (ie, whether we have the state at that point | |
111 | # in the DAG) | |
112 | self.outlier = False | |
113 | ||
111 | 114 | out_of_band_membership = DictProperty("out_of_band_membership") # type: bool |
112 | 115 | send_on_behalf_of = DictProperty("send_on_behalf_of") # type: str |
113 | 116 | recheck_redaction = DictProperty("recheck_redaction") # type: bool |
128 | 131 | return dict(self._dict) |
129 | 132 | |
130 | 133 | def is_outlier(self) -> bool: |
131 | return self._dict.get("outlier", False) | |
134 | return self.outlier | |
132 | 135 | |
133 | 136 | def is_out_of_band_membership(self) -> bool: |
134 | 137 | """Whether this is an out of band membership, like an invite or an invite |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | from typing import Callable, Union | |
15 | from typing import TYPE_CHECKING, Union | |
16 | 16 | |
17 | 17 | from synapse.events import EventBase |
18 | 18 | from synapse.events.snapshot import EventContext |
19 | 19 | from synapse.types import Requester, StateMap |
20 | ||
21 | if TYPE_CHECKING: | |
22 | from synapse.server import HomeServer | |
20 | 23 | |
21 | 24 | |
22 | 25 | class ThirdPartyEventRules: |
27 | 30 | behaviours. |
28 | 31 | """ |
29 | 32 | |
30 | def __init__(self, hs): | |
33 | def __init__(self, hs: "HomeServer"): | |
31 | 34 | self.third_party_rules = None |
32 | 35 | |
33 | 36 | self.store = hs.get_datastore() |
94 | 97 | if self.third_party_rules is None: |
95 | 98 | return True |
96 | 99 | |
97 | ret = await self.third_party_rules.on_create_room( | |
100 | return await self.third_party_rules.on_create_room( | |
98 | 101 | requester, config, is_requester_admin |
99 | 102 | ) |
100 | return ret | |
101 | 103 | |
102 | 104 | async def check_threepid_can_be_invited( |
103 | 105 | self, medium: str, address: str, room_id: str |
118 | 120 | |
119 | 121 | state_events = await self._get_state_map_for_room(room_id) |
120 | 122 | |
121 | ret = await self.third_party_rules.check_threepid_can_be_invited( | |
123 | return await self.third_party_rules.check_threepid_can_be_invited( | |
122 | 124 | medium, address, state_events |
123 | 125 | ) |
124 | return ret | |
125 | 126 | |
126 | 127 | async def check_visibility_can_be_modified( |
127 | 128 | self, room_id: str, new_visibility: str |
142 | 143 | check_func = getattr( |
143 | 144 | self.third_party_rules, "check_visibility_can_be_modified", None |
144 | 145 | ) |
145 | if not check_func or not isinstance(check_func, Callable): | |
146 | if not check_func or not callable(check_func): | |
146 | 147 | return True |
147 | 148 | |
148 | 149 | state_events = await self._get_state_map_for_room(room_id) |
21 | 21 | from synapse.api.errors import Codes, SynapseError |
22 | 22 | from synapse.api.room_versions import RoomVersion |
23 | 23 | from synapse.util.async_helpers import yieldable_gather_results |
24 | from synapse.util.frozenutils import unfreeze | |
24 | 25 | |
25 | 26 | from . import EventBase |
26 | 27 | |
52 | 53 | pruned_event.internal_metadata.stream_ordering = ( |
53 | 54 | event.internal_metadata.stream_ordering |
54 | 55 | ) |
56 | ||
57 | pruned_event.internal_metadata.outlier = event.internal_metadata.outlier | |
55 | 58 | |
56 | 59 | # Mark the event as redacted |
57 | 60 | pruned_event.internal_metadata.redacted = True |
399 | 402 | # If there is an edit replace the content, preserving existing |
400 | 403 | # relations. |
401 | 404 | |
405 | # Ensure we take copies of the edit content, otherwise we risk modifying | |
406 | # the original event. | |
407 | edit_content = edit.content.copy() | |
408 | ||
409 | # Unfreeze the event content if necessary, so that we may modify it below | |
410 | edit_content = unfreeze(edit_content) | |
411 | serialized_event["content"] = edit_content.get("m.new_content", {}) | |
412 | ||
413 | # Check for existing relations | |
402 | 414 | relations = event.content.get("m.relates_to") |
403 | serialized_event["content"] = edit.content.get("m.new_content", {}) | |
404 | 415 | if relations: |
405 | serialized_event["content"]["m.relates_to"] = relations | |
416 | # Keep the relations, ensuring we use a dict copy of the original | |
417 | serialized_event["content"]["m.relates_to"] = relations.copy() | |
406 | 418 | else: |
407 | 419 | serialized_event["content"].pop("m.relates_to", None) |
408 | 420 |
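The copies introduced above matter because the edit's content and the original event's relations are shared, cached objects: mutating them in place would leak the change to every later reader. A minimal sketch of the fixed behaviour with a hypothetical helper:

```python
def serialize_with_edit(original_content: dict, edit_content: dict) -> dict:
    """Replace an event's content with the edit's m.new_content while
    preserving m.relates_to, taking dict copies so the cached originals
    are never mutated. (Illustrative stand-in for the real serializer.)
    """
    new_content = dict(edit_content.get("m.new_content", {}))
    relations = original_content.get("m.relates_to")
    if relations:
        # Keep the relations, using a copy rather than the shared dict.
        new_content["m.relates_to"] = dict(relations)
    else:
        new_content.pop("m.relates_to", None)
    return new_content

original = {"m.relates_to": {"rel_type": "m.replace", "event_id": "$e"}}
edit = {"m.new_content": {"body": "fixed"}}
out = serialize_with_edit(original, edit)
out["m.relates_to"]["seen"] = True  # a later caller mutates the result...
assert "seen" not in original["m.relates_to"]  # ...and the cache is untouched
```
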
26 | 26 | List, |
27 | 27 | Mapping, |
28 | 28 | Optional, |
29 | Sequence, | |
29 | 30 | Tuple, |
30 | 31 | TypeVar, |
31 | 32 | Union, |
32 | 33 | ) |
33 | 34 | |
35 | import attr | |
34 | 36 | from prometheus_client import Counter |
35 | 37 | |
36 | 38 | from twisted.internet import defer |
61 | 63 | from synapse.util.retryutils import NotRetryingDestination |
62 | 64 | |
63 | 65 | if TYPE_CHECKING: |
64 | from synapse.app.homeserver import HomeServer | |
66 | from synapse.server import HomeServer | |
65 | 67 | |
66 | 68 | logger = logging.getLogger(__name__) |
67 | 69 | |
454 | 456 | description: str, |
455 | 457 | destinations: Iterable[str], |
456 | 458 | callback: Callable[[str], Awaitable[T]], |
459 | failover_on_unknown_endpoint: bool = False, | |
457 | 460 | ) -> T: |
458 | 461 | """Try an operation on a series of servers, until it succeeds |
459 | 462 | |
472 | 475 | Otherwise, if the callback raises an Exception the error is logged and the |
473 | 476 | next server tried. Normally the stacktrace is logged but this is |
474 | 477 | suppressed if the exception is an InvalidResponseError. |
478 | ||
479 | failover_on_unknown_endpoint: if True, we will try other servers if it looks | |
480 | like a server doesn't support the endpoint. This is typically useful | |
481 | if the endpoint in question is new or experimental. | |
475 | 482 | |
476 | 483 | Returns: |
477 | 484 | The result of callback, if it succeeds |
492 | 499 | except UnsupportedRoomVersionError: |
493 | 500 | raise |
494 | 501 | except HttpResponseException as e: |
495 | if not 500 <= e.code < 600: | |
496 | raise e.to_synapse_error() | |
497 | else: | |
498 | logger.warning( | |
499 | "Failed to %s via %s: %i %s", | |
500 | description, | |
501 | destination, | |
502 | e.code, | |
503 | e.args[0], | |
504 | ) | |
502 | synapse_error = e.to_synapse_error() | |
503 | failover = False | |
504 | ||
505 | if 500 <= e.code < 600: | |
506 | failover = True | |
507 | ||
508 | elif failover_on_unknown_endpoint: | |
509 | # there is no good way to detect an "unknown" endpoint. Dendrite | |
510 | # returns a 404 (with no body); synapse returns a 400 | |
511 | # with M_UNRECOGNISED. | |
512 | if e.code == 404 or ( | |
513 | e.code == 400 and synapse_error.errcode == Codes.UNRECOGNIZED | |
514 | ): | |
515 | failover = True | |
516 | ||
517 | if not failover: | |
518 | raise synapse_error from e | |
519 | ||
520 | logger.warning( | |
521 | "Failed to %s via %s: %i %s", | |
522 | description, | |
523 | destination, | |
524 | e.code, | |
525 | e.args[0], | |
526 | ) | |
505 | 527 | except Exception: |
506 | 528 | logger.warning( |
507 | 529 | "Failed to %s via %s", description, destination, exc_info=True |
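The reworked exception handling above boils down to a failover decision: 5xx responses always move on to the next server, and with `failover_on_unknown_endpoint` enabled, so do responses that look like "this server doesn't know the endpoint" (Dendrite's bare 404, or Synapse's 400 with `M_UNRECOGNIZED`). Extracted as a standalone predicate (illustrative name):

```python
def should_failover(
    status_code: int, errcode: str, failover_on_unknown_endpoint: bool
) -> bool:
    """Decide whether to try the next destination after an HTTP error."""
    if 500 <= status_code < 600:
        return True
    if failover_on_unknown_endpoint:
        # No reliable way to detect an unknown endpoint: Dendrite returns
        # a 404 with no body; Synapse returns a 400 with M_UNRECOGNISED.
        if status_code == 404:
            return True
        if status_code == 400 and errcode == "M_UNRECOGNIZED":
            return True
    return False
```
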
1041 | 1063 | # If we don't manage to find it, return None. It's not an error if a |
1042 | 1064 | # server doesn't give it to us. |
1043 | 1065 | return None |
1066 | ||
1067 | async def get_space_summary( | |
1068 | self, | |
1069 | destinations: Iterable[str], | |
1070 | room_id: str, | |
1071 | suggested_only: bool, | |
1072 | max_rooms_per_space: Optional[int], | |
1073 | exclude_rooms: List[str], | |
1074 | ) -> "FederationSpaceSummaryResult": | |
1075 | """ | |
1076 | Call other servers to get a summary of the given space | |
1077 | ||
1078 | ||
1079 | Args: | |
1080 | destinations: The remote servers. We will try them in turn, omitting any | |
1081 | that have been blacklisted. | |
1082 | ||
1083 | room_id: ID of the space to be queried | |
1084 | ||
1085 | suggested_only: If true, ask the remote server to only return children | |
1086 | with the "suggested" flag set | |
1087 | ||
1088 | max_rooms_per_space: A limit on the number of children to return for each | |
1089 | space | |
1090 | ||
1091 | exclude_rooms: A list of room IDs to tell the remote server to skip | |
1092 | ||
1093 | Returns: | |
1094 | a parsed FederationSpaceSummaryResult | |
1095 | ||
1096 | Raises: | |
1097 | SynapseError if we were unable to get a valid summary from any of the | |
1098 | remote servers | |
1099 | """ | |
1100 | ||
1101 | async def send_request(destination: str) -> FederationSpaceSummaryResult: | |
1102 | res = await self.transport_layer.get_space_summary( | |
1103 | destination=destination, | |
1104 | room_id=room_id, | |
1105 | suggested_only=suggested_only, | |
1106 | max_rooms_per_space=max_rooms_per_space, | |
1107 | exclude_rooms=exclude_rooms, | |
1108 | ) | |
1109 | ||
1110 | try: | |
1111 | return FederationSpaceSummaryResult.from_json_dict(res) | |
1112 | except ValueError as e: | |
1113 | raise InvalidResponseError(str(e)) | |
1114 | ||
1115 | return await self._try_destination_list( | |
1116 | "fetch space summary", | |
1117 | destinations, | |
1118 | send_request, | |
1119 | failover_on_unknown_endpoint=True, | |
1120 | ) | |
1121 | ||
1122 | ||
1123 | @attr.s(frozen=True, slots=True) | |
1124 | class FederationSpaceSummaryEventResult: | |
1125 | """Represents a single event in the result of a successful get_space_summary call. | |
1126 | ||
1127 | It's essentially just a serialised event object, but we do a bit of parsing and | |
1128 | validation in `from_json_dict` and store some of the validated properties in | |
1129 | object attributes. | |
1130 | """ | |
1131 | ||
1132 | event_type = attr.ib(type=str) | |
1133 | state_key = attr.ib(type=str) | |
1134 | via = attr.ib(type=Sequence[str]) | |
1135 | ||
1136 | # the raw data, including the above keys | |
1137 | data = attr.ib(type=JsonDict) | |
1138 | ||
1139 | @classmethod | |
1140 | def from_json_dict(cls, d: JsonDict) -> "FederationSpaceSummaryEventResult": | |
1141 | """Parse an event within the result of a /spaces/ request | |
1142 | ||
1143 | Args: | |
1144 | d: json object to be parsed | |
1145 | ||
1146 | Raises: | |
1147 | ValueError if d is not a valid event | |
1148 | """ | |
1149 | ||
1150 | event_type = d.get("type") | |
1151 | if not isinstance(event_type, str): | |
1152 | raise ValueError("Invalid event: 'event_type' must be a str") | |
1153 | ||
1154 | state_key = d.get("state_key") | |
1155 | if not isinstance(state_key, str): | |
1156 | raise ValueError("Invalid event: 'state_key' must be a str") | |
1157 | ||
1158 | content = d.get("content") | |
1159 | if not isinstance(content, dict): | |
1160 | raise ValueError("Invalid event: 'content' must be a dict") | |
1161 | ||
1162 | via = content.get("via") | |
1163 | if not isinstance(via, Sequence) or isinstance(via, str): | 
1164 | raise ValueError("Invalid event: 'via' must be a list") | |
1165 | if any(not isinstance(v, str) for v in via): | |
1166 | raise ValueError("Invalid event: 'via' must be a list of strings") | |
1167 | ||
1168 | return cls(event_type, state_key, via, d) | |
1169 | ||
1170 | ||
1171 | @attr.s(frozen=True, slots=True) | |
1172 | class FederationSpaceSummaryResult: | |
1173 | """Represents the data returned by a successful get_space_summary call.""" | |
1174 | ||
1175 | rooms = attr.ib(type=Sequence[JsonDict]) | |
1176 | events = attr.ib(type=Sequence[FederationSpaceSummaryEventResult]) | |
1177 | ||
1178 | @classmethod | |
1179 | def from_json_dict(cls, d: JsonDict) -> "FederationSpaceSummaryResult": | |
1180 | """Parse the result of a /spaces/ request | |
1181 | ||
1182 | Args: | |
1183 | d: json object to be parsed | |
1184 | ||
1185 | Raises: | |
1186 | ValueError if d is not a valid /spaces/ response | |
1187 | """ | |
1188 | rooms = d.get("rooms") | |
1189 | if not isinstance(rooms, Sequence): | |
1190 | raise ValueError("'rooms' must be a list") | |
1191 | if any(not isinstance(r, dict) for r in rooms): | |
1192 | raise ValueError("Invalid room in 'rooms' list") | |
1193 | ||
1194 | events = d.get("events") | |
1195 | if not isinstance(events, Sequence): | |
1196 | raise ValueError("'events' must be a list") | |
1197 | if any(not isinstance(e, dict) for e in events): | |
1198 | raise ValueError("Invalid event in 'events' list") | |
1199 | parsed_events = [ | |
1200 | FederationSpaceSummaryEventResult.from_json_dict(e) for e in events | |
1201 | ] | |
1202 | ||
1203 | return cls(rooms, parsed_events) |
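The validate-then-construct pattern used by `from_json_dict` above can be sketched standalone. This is an illustrative simplification, not Synapse's actual API — the function name and returned dict shape are invented for the example:

```python
from typing import Any, Dict

def parse_space_event(d: Dict[str, Any]) -> Dict[str, Any]:
    # Mirror the checks above: required string fields first, then a
    # list of string "via" entries nested inside "content".
    event_type = d.get("type")
    if not isinstance(event_type, str):
        raise ValueError("Invalid event: 'type' must be a str")
    content = d.get("content")
    if not isinstance(content, dict):
        raise ValueError("Invalid event: 'content' must be a dict")
    via = content.get("via")
    # Checking for `list` explicitly (rather than Sequence) also rejects
    # a bare string, whose elements would all pass the str check.
    if not isinstance(via, list) or any(not isinstance(v, str) for v in via):
        raise ValueError("Invalid event: 'via' must be a list of strings")
    # Keep the validated fields alongside the raw data, as the attrs
    # class above does.
    return {"type": event_type, "via": via, "data": d}

good = {"type": "m.space.child", "content": {"via": ["example.com"]}}
assert parse_space_event(good)["via"] == ["example.com"]

try:
    parse_space_event({"type": "m.space.child", "content": {"via": [1]}})
except ValueError:
    pass  # rejected: 'via' entries must all be strings
```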
34 | 34 | from twisted.internet.abstract import isIPAddress |
35 | 35 | from twisted.python import failure |
36 | 36 | |
37 | from synapse.api.constants import EduTypes, EventTypes, Membership | |
37 | from synapse.api.constants import EduTypes, EventTypes | |
38 | 38 | from synapse.api.errors import ( |
39 | 39 | AuthError, |
40 | 40 | Codes, |
62 | 62 | ReplicationFederationSendEduRestServlet, |
63 | 63 | ReplicationGetQueryRestServlet, |
64 | 64 | ) |
65 | from synapse.types import JsonDict, get_domain_from_id | |
65 | from synapse.types import JsonDict | |
66 | 66 | from synapse.util import glob_to_regex, json_decoder, unwrapFirstError |
67 | 67 | from synapse.util.async_helpers import Linearizer, concurrently_execute |
68 | 68 | from synapse.util.caches.response_cache import ResponseCache |
726 | 726 | if the event was unacceptable for any other reason (eg, too large, |
727 | 727 | too many prev_events, couldn't find the prev_events) |
728 | 728 | """ |
729 | # check that it's actually being sent from a valid destination to | |
730 | # workaround bug #1753 in 0.18.5 and 0.18.6 | |
731 | if origin != get_domain_from_id(pdu.sender): | |
732 | # We continue to accept join events from any server; this is | |
733 | # necessary for the federation join dance to work correctly. | |
734 | # (When we join over federation, the "helper" server is | |
735 | # responsible for sending out the join event, rather than the | |
736 | # origin. See bug #1893. This is also true for some third party | |
737 | # invites). | |
738 | if not ( | |
739 | pdu.type == "m.room.member" | |
740 | and pdu.content | |
741 | and pdu.content.get("membership", None) | |
742 | in (Membership.JOIN, Membership.INVITE) | |
743 | ): | |
744 | logger.info( | |
745 | "Discarding PDU %s from invalid origin %s", pdu.event_id, origin | |
746 | ) | |
747 | return | |
748 | else: | |
749 | logger.info("Accepting join PDU %s from %s", pdu.event_id, origin) | |
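The origin check being removed here compared the transport-level origin against the domain of the event sender, with a carve-out for the federation join dance. A minimal sketch of that logic (the helper is a simplified stand-in for `synapse.types.get_domain_from_id`; `should_accept` is an invented name):

```python
def get_domain(user_id: str) -> str:
    # Matrix user IDs look like @localpart:domain; take everything
    # after the first colon.
    return user_id.split(":", 1)[1]

def should_accept(origin: str, sender: str, membership: str) -> bool:
    # Accept when the sending server matches the sender's domain, or
    # when this is a join/invite membership event, which may legitimately
    # be sent out by a "helper" server during the join dance.
    return origin == get_domain(sender) or membership in ("join", "invite")

assert should_accept("example.com", "@alice:example.com", "leave")
assert should_accept("helper.org", "@alice:example.com", "join")
assert not should_accept("evil.org", "@alice:example.com", "leave")
```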
750 | 729 | |
751 | 730 | # We've already checked that we know the room version by this point |
752 | 731 | room_version = await self.store.get_room_version(pdu.room_id) |
30 | 30 | |
31 | 31 | import logging |
32 | 32 | from collections import namedtuple |
33 | from typing import Dict, List, Tuple, Type | |
33 | from typing import ( | |
34 | TYPE_CHECKING, | |
35 | Dict, | |
36 | Hashable, | |
37 | Iterable, | |
38 | List, | |
39 | Optional, | |
40 | Sized, | |
41 | Tuple, | |
42 | Type, | |
43 | ) | |
34 | 44 | |
35 | 45 | from sortedcontainers import SortedDict |
36 | 46 | |
37 | from twisted.internet import defer | |
38 | ||
39 | 47 | from synapse.api.presence import UserPresenceState |
48 | from synapse.federation.sender import AbstractFederationSender, FederationSender | |
40 | 49 | from synapse.metrics import LaterGauge |
50 | from synapse.replication.tcp.streams.federation import FederationStream | |
51 | from synapse.types import JsonDict, ReadReceipt, RoomStreamToken | |
41 | 52 | from synapse.util.metrics import Measure |
42 | 53 | |
43 | 54 | from .units import Edu |
44 | 55 | |
56 | if TYPE_CHECKING: | |
57 | from synapse.server import HomeServer | |
58 | ||
45 | 59 | logger = logging.getLogger(__name__) |
46 | 60 | |
47 | 61 | |
48 | class FederationRemoteSendQueue: | |
62 | class FederationRemoteSendQueue(AbstractFederationSender): | |
49 | 63 | """A drop in replacement for FederationSender""" |
50 | 64 | |
51 | def __init__(self, hs): | |
65 | def __init__(self, hs: "HomeServer"): | |
52 | 66 | self.server_name = hs.hostname |
53 | 67 | self.clock = hs.get_clock() |
54 | 68 | self.notifier = hs.get_notifier() |
57 | 71 | # We may have multiple federation sender instances, so we need to track |
58 | 72 | # their positions separately. |
59 | 73 | self._sender_instances = hs.config.worker.federation_shard_config.instances |
60 | self._sender_positions = {} | |
74 | self._sender_positions = {} # type: Dict[str, int] | |
61 | 75 | |
62 | 76 | # Pending presence map user_id -> UserPresenceState |
63 | 77 | self.presence_map = {} # type: Dict[str, UserPresenceState] |
70 | 84 | # Stream position -> (user_id, destinations) |
71 | 85 | self.presence_destinations = ( |
72 | 86 | SortedDict() |
73 | ) # type: SortedDict[int, Tuple[str, List[str]]] | |
87 | ) # type: SortedDict[int, Tuple[str, Iterable[str]]] | |
74 | 88 | |
75 | 89 | # (destination, key) -> EDU |
76 | 90 | self.keyed_edu = {} # type: Dict[Tuple[str, tuple], Edu] |
93 | 107 | # we need to make a new function so that the inner
94 | 108 | # lambda binds to the queue rather than to the name of the queue, which
95 | 109 | # changes. ARGH.
96 | def register(name, queue): | |
110 | def register(name: str, queue: Sized) -> None: | |
97 | 111 | LaterGauge( |
98 | 112 | "synapse_federation_send_queue_%s_size" % (queue_name,), |
99 | 113 | "", |
114 | 128 | |
115 | 129 | self.clock.looping_call(self._clear_queue, 30 * 1000) |
116 | 130 | |
117 | def _next_pos(self): | |
131 | def _next_pos(self) -> int: | |
118 | 132 | pos = self.pos |
119 | 133 | self.pos += 1 |
120 | 134 | self.pos_time[self.clock.time_msec()] = pos |
121 | 135 | return pos |
122 | 136 | |
123 | def _clear_queue(self): | |
137 | def _clear_queue(self) -> None: | |
124 | 138 | """Clear the queues for anything older than N minutes""" |
125 | 139 | |
126 | 140 | FIVE_MINUTES_AGO = 5 * 60 * 1000 |
137 | 151 | |
138 | 152 | self._clear_queue_before_pos(position_to_delete) |
139 | 153 | |
140 | def _clear_queue_before_pos(self, position_to_delete): | |
154 | def _clear_queue_before_pos(self, position_to_delete: int) -> None: | |
141 | 155 | """Clear all the queues from before a given position""" |
142 | 156 | with Measure(self.clock, "send_queue._clear"): |
143 | 157 | # Delete things out of presence maps |
187 | 201 | for key in keys[:i]: |
188 | 202 | del self.edus[key] |
189 | 203 | |
190 | def notify_new_events(self, max_token): | |
204 | def notify_new_events(self, max_token: RoomStreamToken) -> None: | |
191 | 205 | """As per FederationSender""" |
192 | # We don't need to replicate this as it gets sent down a different | |
193 | # stream. | |
194 | pass | |
195 | ||
196 | def build_and_send_edu(self, destination, edu_type, content, key=None): | |
206 | # This should never get called. | |
207 | raise NotImplementedError() | |
208 | ||
209 | def build_and_send_edu( | |
210 | self, | |
211 | destination: str, | |
212 | edu_type: str, | |
213 | content: JsonDict, | |
214 | key: Optional[Hashable] = None, | |
215 | ) -> None: | |
197 | 216 | """As per FederationSender""" |
198 | 217 | if destination == self.server_name: |
199 | 218 | logger.info("Not sending EDU to ourselves") |
217 | 236 | |
218 | 237 | self.notifier.on_new_replication_data() |
219 | 238 | |
220 | def send_read_receipt(self, receipt): | |
239 | async def send_read_receipt(self, receipt: ReadReceipt) -> None: | |
221 | 240 | """As per FederationSender |
222 | 241 | |
223 | 242 | Args: |
224 | receipt (synapse.types.ReadReceipt): | |
243 | receipt: | |
225 | 244 | """ |
226 | 245 | # nothing to do here: the replication listener will handle it. |
227 | return defer.succeed(None) | |
228 | ||
229 | def send_presence(self, states): | |
246 | ||
247 | def send_presence(self, states: List[UserPresenceState]) -> None: | |
230 | 248 | """As per FederationSender |
231 | 249 | |
232 | 250 | Args: |
233 | states (list(UserPresenceState)) | |
251 | states | |
234 | 252 | """ |
235 | 253 | pos = self._next_pos() |
236 | 254 | |
237 | 255 | # We only want to send presence for our own users, so lets always just |
238 | 256 | # filter here just in case. |
239 | local_states = list(filter(lambda s: self.is_mine_id(s.user_id), states)) | |
257 | local_states = [s for s in states if self.is_mine_id(s.user_id)] | |
240 | 258 | |
241 | 259 | self.presence_map.update({state.user_id: state for state in local_states}) |
242 | 260 | self.presence_changed[pos] = [state.user_id for state in local_states] |
243 | 261 | |
244 | 262 | self.notifier.on_new_replication_data() |
245 | 263 | |
246 | def send_presence_to_destinations(self, states, destinations): | |
264 | def send_presence_to_destinations( | |
265 | self, states: Iterable[UserPresenceState], destinations: Iterable[str] | |
266 | ) -> None: | |
247 | 267 | """As per FederationSender |
248 | 268 | |
249 | 269 | Args: |
250 | states (list[UserPresenceState]) | |
251 | destinations (list[str]) | |
270 | states | |
271 | destinations | |
252 | 272 | """ |
253 | 273 | for state in states: |
254 | 274 | pos = self._next_pos() |
257 | 277 | |
258 | 278 | self.notifier.on_new_replication_data() |
259 | 279 | |
260 | def send_device_messages(self, destination): | |
280 | def send_device_messages(self, destination: str) -> None: | |
261 | 281 | """As per FederationSender""" |
262 | 282 | # We don't need to replicate this as it gets sent down a different |
263 | 283 | # stream. |
264 | 284 | |
265 | def get_current_token(self): | |
285 | def wake_destination(self, server: str) -> None: | |
286 | pass | |
287 | ||
288 | def get_current_token(self) -> int: | |
266 | 289 | return self.pos - 1 |
267 | 290 | |
268 | def federation_ack(self, instance_name, token): | |
291 | def federation_ack(self, instance_name: str, token: int) -> None: | |
269 | 292 | if self._sender_instances: |
270 | 293 | # If we have configured multiple federation sender instances we need |
271 | 294 | # to track their positions separately, and only clear the queue up |
503 | 526 | ) |
504 | 527 | |
505 | 528 | |
506 | def process_rows_for_federation(transaction_queue, rows): | |
529 | def process_rows_for_federation( | |
530 | transaction_queue: FederationSender, | |
531 | rows: List[FederationStream.FederationStreamRow], | |
532 | ) -> None: | |
507 | 533 | """Parse a list of rows from the federation stream and put them in the |
508 | 534 | transaction queue ready for sending to the relevant homeservers. |
509 | 535 | |
510 | 536 | Args: |
511 | transaction_queue (FederationSender) | |
512 | rows (list(synapse.replication.tcp.streams.federation.FederationStream.FederationStreamRow)) | |
537 | transaction_queue | |
538 | rows | |
513 | 539 | """ |
514 | 540 | |
515 | 541 | # The federation stream contains a bunch of different types of |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | import abc | |
15 | 16 | import logging |
16 | from typing import Dict, Hashable, Iterable, List, Optional, Set, Tuple | |
17 | from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Set, Tuple | |
17 | 18 | |
18 | 19 | from prometheus_client import Counter |
19 | 20 | |
20 | 21 | from twisted.internet import defer |
21 | 22 | |
22 | import synapse | |
23 | 23 | import synapse.metrics |
24 | 24 | from synapse.api.presence import UserPresenceState |
25 | 25 | from synapse.events import EventBase |
39 | 39 | events_processed_counter, |
40 | 40 | ) |
41 | 41 | from synapse.metrics.background_process_metrics import run_as_background_process |
42 | from synapse.types import ReadReceipt, RoomStreamToken | |
42 | from synapse.types import JsonDict, ReadReceipt, RoomStreamToken | |
43 | 43 | from synapse.util.metrics import Measure, measure_func |
44 | ||
45 | if TYPE_CHECKING: | |
46 | from synapse.server import HomeServer | |
44 | 47 | |
45 | 48 | logger = logging.getLogger(__name__) |
46 | 49 | |
64 | 67 | CATCH_UP_STARTUP_INTERVAL_SEC = 5 |
65 | 68 | |
66 | 69 | |
67 | class FederationSender: | |
68 | def __init__(self, hs: "synapse.server.HomeServer"): | |
70 | class AbstractFederationSender(metaclass=abc.ABCMeta): | |
71 | @abc.abstractmethod | |
72 | def notify_new_events(self, max_token: RoomStreamToken) -> None: | |
73 | """This gets called when we have some new events we might want to | |
74 | send out to other servers. | |
75 | """ | |
76 | raise NotImplementedError() | |
77 | ||
78 | @abc.abstractmethod | |
79 | async def send_read_receipt(self, receipt: ReadReceipt) -> None: | |
80 | """Send a RR to any other servers in the room | |
81 | ||
82 | Args: | |
83 | receipt: receipt to be sent | |
84 | """ | |
85 | raise NotImplementedError() | |
86 | ||
87 | @abc.abstractmethod | |
88 | def send_presence(self, states: List[UserPresenceState]) -> None: | |
89 | """Send the new presence states to the appropriate destinations. | |
90 | ||
91 | This actually queues up the presence states ready for sending and | |
92 | triggers a background task to process them and send out the transactions. | |
93 | """ | |
94 | raise NotImplementedError() | |
95 | ||
96 | @abc.abstractmethod | |
97 | def send_presence_to_destinations( | |
98 | self, states: Iterable[UserPresenceState], destinations: Iterable[str] | |
99 | ) -> None: | |
100 | """Send the given presence states to the given destinations. | |
101 | ||
102 | Args: | |
103 | destinations: the servers to send the given states to | 
104 | """ | |
105 | raise NotImplementedError() | |
106 | ||
107 | @abc.abstractmethod | |
108 | def build_and_send_edu( | |
109 | self, | |
110 | destination: str, | |
111 | edu_type: str, | |
112 | content: JsonDict, | |
113 | key: Optional[Hashable] = None, | |
114 | ) -> None: | |
115 | """Construct an Edu object, and queue it for sending | |
116 | ||
117 | Args: | |
118 | destination: name of server to send to | |
119 | edu_type: type of EDU to send | |
120 | content: content of EDU | |
121 | key: clobbering key for this edu | |
122 | """ | |
123 | raise NotImplementedError() | |
124 | ||
125 | @abc.abstractmethod | |
126 | def send_device_messages(self, destination: str) -> None: | |
127 | raise NotImplementedError() | |
128 | ||
129 | @abc.abstractmethod | |
130 | def wake_destination(self, destination: str) -> None: | |
131 | """Called when we want to retry sending transactions to a remote. | |
132 | ||
133 | This is mainly useful if the remote server has been down and we think it | |
134 | might have come back. | |
135 | """ | |
136 | raise NotImplementedError() | |
137 | ||
138 | @abc.abstractmethod | |
139 | def get_current_token(self) -> int: | |
140 | raise NotImplementedError() | |
141 | ||
142 | @abc.abstractmethod | |
143 | def federation_ack(self, instance_name: str, token: int) -> None: | |
144 | raise NotImplementedError() | |
145 | ||
146 | @abc.abstractmethod | |
147 | async def get_replication_rows( | |
148 | self, instance_name: str, from_token: int, to_token: int, target_row_count: int | |
149 | ) -> Tuple[List[Tuple[int, Tuple]], int, bool]: | |
150 | raise NotImplementedError() | |
151 | ||
152 | ||
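The shape introduced here — one abstract interface with the real sender and the queue-based stand-in as drop-in implementations — in miniature (class names invented for the sketch):

```python
import abc

class AbstractSender(abc.ABC):
    """Interface shared by the real sender and a replication stand-in."""

    @abc.abstractmethod
    def get_current_token(self) -> int:
        raise NotImplementedError()

class QueueSender(AbstractSender):
    def __init__(self) -> None:
        self.pos = 1

    def get_current_token(self) -> int:
        # Same convention as FederationRemoteSendQueue: the last
        # allocated position is self.pos - 1.
        return self.pos - 1

# The ABC itself cannot be instantiated; concrete subclasses must
# implement every @abc.abstractmethod.
try:
    AbstractSender()  # type: ignore[abstract]
    raise AssertionError("should not be instantiable")
except TypeError:
    pass

assert QueueSender().get_current_token() == 0
```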
153 | class FederationSender(AbstractFederationSender): | |
154 | def __init__(self, hs: "HomeServer"): | |
69 | 155 | self.hs = hs |
70 | 156 | self.server_name = hs.hostname |
71 | 157 | |
431 | 517 | queue.flush_read_receipts_for_room(room_id) |
432 | 518 | |
433 | 519 | @preserve_fn # the caller should not yield on this |
434 | async def send_presence(self, states: List[UserPresenceState]): | |
520 | async def send_presence(self, states: List[UserPresenceState]) -> None: | |
435 | 521 | """Send the new presence states to the appropriate destinations. |
436 | 522 | |
437 | 523 | This actually queues up the presence states ready for sending and |
493 | 579 | self._get_per_destination_queue(destination).send_presence(states) |
494 | 580 | |
495 | 581 | @measure_func("txnqueue._process_presence") |
496 | async def _process_presence_inner(self, states: List[UserPresenceState]): | |
582 | async def _process_presence_inner(self, states: List[UserPresenceState]) -> None: | |
497 | 583 | """Given a list of states populate self.pending_presence_by_dest and |
498 | 584 | poke to send a new transaction to each destination |
499 | 585 | """ |
515 | 601 | self, |
516 | 602 | destination: str, |
517 | 603 | edu_type: str, |
518 | content: dict, | |
604 | content: JsonDict, | |
519 | 605 | key: Optional[Hashable] = None, |
520 | ): | |
606 | ) -> None: | |
521 | 607 | """Construct an Edu object, and queue it for sending |
522 | 608 | |
523 | 609 | Args: |
544 | 630 | |
545 | 631 | self.send_edu(edu, key) |
546 | 632 | |
547 | def send_edu(self, edu: Edu, key: Optional[Hashable]): | |
633 | def send_edu(self, edu: Edu, key: Optional[Hashable]) -> None: | |
548 | 634 | """Queue an EDU for sending |
549 | 635 | |
550 | 636 | Args: |
562 | 648 | else: |
563 | 649 | queue.send_edu(edu) |
564 | 650 | |
565 | def send_device_messages(self, destination: str): | |
651 | def send_device_messages(self, destination: str) -> None: | |
566 | 652 | if destination == self.server_name: |
567 | 653 | logger.warning("Not sending device update to ourselves") |
568 | 654 | return |
574 | 660 | |
575 | 661 | self._get_per_destination_queue(destination).attempt_new_transaction() |
576 | 662 | |
577 | def wake_destination(self, destination: str): | |
663 | def wake_destination(self, destination: str) -> None: | |
578 | 664 | """Called when we want to retry sending transactions to a remote. |
579 | 665 | |
580 | 666 | This is mainly useful if the remote server has been down and we think it |
597 | 683 | # Dummy implementation for the case where federation sender isn't offloaded
598 | 684 | # to a worker. |
599 | 685 | return 0 |
686 | ||
687 | def federation_ack(self, instance_name: str, token: int) -> None: | |
688 | # It is not expected that this gets called on FederationSender. | |
689 | raise NotImplementedError() | |
600 | 690 | |
601 | 691 | @staticmethod |
602 | 692 | async def get_replication_rows( |
606 | 696 | # to a worker. |
607 | 697 | return [], 0, False |
608 | 698 | |
609 | async def _wake_destinations_needing_catchup(self): | |
699 | async def _wake_destinations_needing_catchup(self) -> None: | |
610 | 700 | """ |
611 | 701 | Wakes up destinations that need catch-up and are not currently being |
612 | 702 | backed off from. |
14 | 14 | # limitations under the License. |
15 | 15 | import datetime |
16 | 16 | import logging |
17 | from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Tuple, cast | |
17 | from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Tuple | |
18 | 18 | |
19 | 19 | import attr |
20 | 20 | from prometheus_client import Counter |
76 | 76 | self._transaction_manager = transaction_manager |
77 | 77 | self._instance_name = hs.get_instance_name() |
78 | 78 | self._federation_shard_config = hs.config.worker.federation_shard_config |
79 | self._state = hs.get_state_handler() | |
79 | 80 | |
80 | 81 | self._should_send_on_this_instance = True |
81 | 82 | if not self._federation_shard_config.should_handle( |
414 | 415 | "This should not happen." % event_ids |
415 | 416 | ) |
416 | 417 | |
417 | if logger.isEnabledFor(logging.INFO): | |
418 | rooms = [p.room_id for p in catchup_pdus] | |
419 | logger.info("Catching up rooms to %s: %r", self._destination, rooms) | |
420 | ||
421 | await self._transaction_manager.send_new_transaction( | |
422 | self._destination, catchup_pdus, [] | |
423 | ) | |
424 | ||
425 | sent_transactions_counter.inc() | |
426 | final_pdu = catchup_pdus[-1] | |
427 | self._last_successful_stream_ordering = cast( | |
428 | int, final_pdu.internal_metadata.stream_ordering | |
429 | ) | |
430 | await self._store.set_destination_last_successful_stream_ordering( | |
431 | self._destination, self._last_successful_stream_ordering | |
432 | ) | |
418 | # We send transactions with events from one room only, as it's likely | 
419 | # that the remote will have to do additional processing, which may | |
420 | # take some time. It's better to give it small amounts of work | |
421 | # rather than risk the request timing out and repeatedly being | |
422 | # retried, and not making any progress. | |
423 | # | |
424 | # Note: `catchup_pdus` will have exactly one PDU per room. | |
425 | for pdu in catchup_pdus: | |
426 | # The PDU from the DB will be the last PDU in the room from | |
427 | # *this server* that wasn't sent to the remote. However, other | |
428 | # servers may have sent lots of events since then, and we want | |
429 | # to try and tell the remote only about the *latest* events in | |
430 | # the room. This is so that it doesn't get inundated by events | |
431 | # from various parts of the DAG, which all need to be processed. | |
432 | # | |
433 | # Note: this does mean that in large rooms a server coming back | |
434 | # online will get sent the same events from all the different | |
435 | # servers, but the remote will correctly deduplicate them and | |
436 | # handle it only once. | |
437 | ||
438 | # Step 1, fetch the current extremities | |
439 | extrems = await self._store.get_prev_events_for_room(pdu.room_id) | |
440 | ||
441 | if pdu.event_id in extrems: | |
442 | # If the event is in the extremities, then great! We can just | |
443 | # use that without having to do further checks. | |
444 | room_catchup_pdus = [pdu] | |
445 | else: | |
446 | # If not, fetch the extremities and figure out which we can | |
447 | # send. | |
448 | extrem_events = await self._store.get_events_as_list(extrems) | |
449 | ||
450 | new_pdus = [] | |
451 | for p in extrem_events: | |
452 | # We pulled this from the DB, so it'll be non-null | |
453 | assert p.internal_metadata.stream_ordering | |
454 | ||
455 | # Filter out events that happened before the remote went | |
456 | # offline | |
457 | if ( | |
458 | p.internal_metadata.stream_ordering | |
459 | < self._last_successful_stream_ordering | |
460 | ): | |
461 | continue | |
462 | ||
463 | # Filter out events where the server is not in the room, | |
464 | # e.g. it may have left/been kicked. *Ideally* we'd pull | |
465 | # out the kick and send that, but it's a rare edge case | |
466 | # so we don't bother for now (the server that sent the | |
467 | # kick should send it out if it's online). | 
468 | hosts = await self._state.get_hosts_in_room_at_events( | |
469 | p.room_id, [p.event_id] | |
470 | ) | |
471 | if self._destination not in hosts: | |
472 | continue | |
473 | ||
474 | new_pdus.append(p) | |
475 | ||
476 | # If we've filtered out all the extremities, fall back to | |
477 | # sending the original event. This should ensure that the | |
478 | # server gets at least some of the missed events (especially if | 
479 | # the other sending servers are up). | |
480 | if new_pdus: | |
481 | room_catchup_pdus = new_pdus | |
482 | else: | |
483 | room_catchup_pdus = [pdu] | |
484 | ||
485 | logger.info( | |
486 | "Catching up rooms to %s: %r", self._destination, pdu.room_id | |
487 | ) | |
488 | ||
489 | await self._transaction_manager.send_new_transaction( | |
490 | self._destination, room_catchup_pdus, [] | |
491 | ) | |
492 | ||
493 | sent_transactions_counter.inc() | |
494 | ||
495 | # We pulled this from the DB, so it'll be non-null | |
496 | assert pdu.internal_metadata.stream_ordering | |
497 | ||
498 | # Note that we mark the last successful stream ordering as that | |
499 | # from the *original* PDU, rather than the PDU(s) we actually | |
500 | # send. This is because we use it to mark our position in the | |
501 | # queue of missed PDUs to process. | |
502 | self._last_successful_stream_ordering = ( | |
503 | pdu.internal_metadata.stream_ordering | |
504 | ) | |
505 | ||
506 | await self._store.set_destination_last_successful_stream_ordering( | |
507 | self._destination, self._last_successful_stream_ordering | |
508 | ) | |
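The extremity-filtering step above reduces to a small pure function. This sketch uses invented names and plain tuples in place of Synapse's event objects and storage calls:

```python
from typing import List, Tuple

# Each extremity: (event_id, stream_ordering, destination_is_in_room)
Extrem = Tuple[str, int, bool]

def pick_catchup_pdus(
    original: str, extrems: List[Extrem], last_sent: int
) -> List[str]:
    # If the missed event is itself a current extremity, just send it.
    if any(eid == original for eid, _, _ in extrems):
        return [original]
    # Otherwise prefer extremities newer than the last successful send
    # that the destination can actually see...
    newer = [
        eid for eid, order, in_room in extrems if order >= last_sent and in_room
    ]
    # ...falling back to the original event if everything was filtered out.
    return newer if newer else [original]

assert pick_catchup_pdus("$old", [("$a", 10, True), ("$b", 3, True)], 5) == ["$a"]
assert pick_catchup_pdus("$old", [("$b", 3, True)], 5) == ["$old"]
```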
433 | 509 | |
434 | 510 | def _get_rr_edus(self, force_flush: bool) -> Iterable[Edu]: |
435 | 511 | if not self._pending_rrs: |
15 | 15 | |
16 | 16 | import logging |
17 | 17 | import urllib |
18 | from typing import Any, Dict, Optional | |
18 | from typing import Any, Dict, List, Optional | |
19 | 19 | |
20 | 20 | from synapse.api.constants import Membership |
21 | 21 | from synapse.api.errors import Codes, HttpResponseException, SynapseError |
25 | 25 | FEDERATION_V2_PREFIX, |
26 | 26 | ) |
27 | 27 | from synapse.logging.utils import log_function |
28 | from synapse.types import JsonDict | |
28 | 29 | |
29 | 30 | logger = logging.getLogger(__name__) |
30 | 31 | |
977 | 978 | |
978 | 979 | return self.client.get_json(destination=destination, path=path) |
979 | 980 | |
981 | async def get_space_summary( | |
982 | self, | |
983 | destination: str, | |
984 | room_id: str, | |
985 | suggested_only: bool, | |
986 | max_rooms_per_space: Optional[int], | |
987 | exclude_rooms: List[str], | |
988 | ) -> JsonDict: | |
989 | """ | |
990 | Args: | |
991 | destination: The remote server | |
992 | room_id: The room ID to ask about. | |
993 | suggested_only: if True, only suggested rooms will be returned | |
994 | max_rooms_per_space: an optional limit to the number of children to be | |
995 | returned per space | |
996 | exclude_rooms: a list of any rooms we can skip | |
997 | """ | |
998 | path = _create_path( | |
999 | FEDERATION_UNSTABLE_PREFIX, "/org.matrix.msc2946/spaces/%s", room_id | |
1000 | ) | |
1001 | ||
1002 | params = { | |
1003 | "suggested_only": suggested_only, | |
1004 | "exclude_rooms": exclude_rooms, | |
1005 | } | |
1006 | if max_rooms_per_space is not None: | |
1007 | params["max_rooms_per_space"] = max_rooms_per_space | |
1008 | ||
1009 | return await self.client.post_json( | |
1010 | destination=destination, path=path, data=params | |
1011 | ) | |
1012 | ||
980 | 1013 | |
981 | 1014 | def _create_path(federation_prefix, path, *args): |
982 | 1015 | """ |
17 | 17 | import functools |
18 | 18 | import logging |
19 | 19 | import re |
20 | from typing import Optional, Tuple, Type | |
20 | from typing import Container, Mapping, Optional, Sequence, Tuple, Type | |
21 | 21 | |
22 | 22 | import synapse |
23 | 23 | from synapse.api.constants import MAX_GROUP_CATEGORYID_LENGTH, MAX_GROUP_ROLEID_LENGTH |
28 | 28 | FEDERATION_V1_PREFIX, |
29 | 29 | FEDERATION_V2_PREFIX, |
30 | 30 | ) |
31 | from synapse.http.server import JsonResource | |
31 | from synapse.http.server import HttpServer, JsonResource | |
32 | 32 | from synapse.http.servlet import ( |
33 | 33 | parse_boolean_from_args, |
34 | 34 | parse_integer_from_args, |
43 | 43 | whitelisted_homeserver, |
44 | 44 | ) |
45 | 45 | from synapse.server import HomeServer |
46 | from synapse.types import ThirdPartyInstanceID, get_domain_from_id | |
46 | from synapse.types import JsonDict, ThirdPartyInstanceID, get_domain_from_id | |
47 | from synapse.util.ratelimitutils import FederationRateLimiter | |
47 | 48 | from synapse.util.stringutils import parse_and_validate_server_name |
48 | 49 | from synapse.util.versionstring import get_version_string |
49 | 50 | |
1373 | 1374 | ) |
1374 | 1375 | |
1375 | 1376 | return 200, new_content |
1377 | ||
1378 | ||
1379 | class FederationSpaceSummaryServlet(BaseFederationServlet): | |
1380 | PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc2946" | |
1381 | PATH = "/spaces/(?P<room_id>[^/]*)" | |
1382 | ||
1383 | async def on_POST( | |
1384 | self, | |
1385 | origin: str, | |
1386 | content: JsonDict, | |
1387 | query: Mapping[bytes, Sequence[bytes]], | |
1388 | room_id: str, | |
1389 | ) -> Tuple[int, JsonDict]: | |
1390 | suggested_only = content.get("suggested_only", False) | |
1391 | if not isinstance(suggested_only, bool): | |
1392 | raise SynapseError( | |
1393 | 400, "'suggested_only' must be a boolean", Codes.BAD_JSON | |
1394 | ) | |
1395 | ||
1396 | exclude_rooms = content.get("exclude_rooms", []) | |
1397 | if not isinstance(exclude_rooms, list) or any( | |
1398 | not isinstance(x, str) for x in exclude_rooms | |
1399 | ): | |
1400 | raise SynapseError(400, "bad value for 'exclude_rooms'", Codes.BAD_JSON) | |
1401 | ||
1402 | max_rooms_per_space = content.get("max_rooms_per_space") | |
1403 | if max_rooms_per_space is not None and not isinstance(max_rooms_per_space, int): | |
1404 | raise SynapseError( | |
1405 | 400, "bad value for 'max_rooms_per_space'", Codes.BAD_JSON | |
1406 | ) | |
1407 | ||
1408 | return 200, await self.handler.federation_space_summary( | |
1409 | room_id, suggested_only, max_rooms_per_space, exclude_rooms | |
1410 | ) | |
1376 | 1411 | |
1377 | 1412 | |
1378 | 1413 | class RoomComplexityServlet(BaseFederationServlet): |
1473 | 1508 | ) |
1474 | 1509 | |
1475 | 1510 | |
1476 | def register_servlets(hs, resource, authenticator, ratelimiter, servlet_groups=None): | |
1511 | def register_servlets( | |
1512 | hs: HomeServer, | |
1513 | resource: HttpServer, | |
1514 | authenticator: Authenticator, | |
1515 | ratelimiter: FederationRateLimiter, | |
1516 | servlet_groups: Optional[Container[str]] = None, | |
1517 | ): | |
1477 | 1518 | """Initialize and register servlet classes. |
1478 | 1519 | |
1479 | 1520 | Will by default register all servlets. For custom behaviour, pass in |
1480 | 1521 | a list of servlet_groups to register. |
1481 | 1522 | |
1482 | 1523 | Args: |
1483 | hs (synapse.server.HomeServer): homeserver | |
1484 | resource (JsonResource): resource class to register to | |
1485 | authenticator (Authenticator): authenticator to use | |
1486 | ratelimiter (util.ratelimitutils.FederationRateLimiter): ratelimiter to use | |
1487 | servlet_groups (list[str], optional): List of servlet groups to register. | |
1524 | hs: homeserver | |
1525 | resource: resource class to register to | |
1526 | authenticator: authenticator to use | |
1527 | ratelimiter: ratelimiter to use | |
1528 | servlet_groups: List of servlet groups to register. | |
1488 | 1529 | Defaults to ``DEFAULT_SERVLET_GROUPS``. |
1489 | 1530 | """ |
1490 | 1531 | if not servlet_groups: |
1494 | 1535 | for servletclass in FEDERATION_SERVLET_CLASSES: |
1495 | 1536 | servletclass( |
1496 | 1537 | handler=hs.get_federation_server(), |
1538 | authenticator=authenticator, | |
1539 | ratelimiter=ratelimiter, | |
1540 | server_name=hs.hostname, | |
1541 | ).register(resource) | |
1542 | ||
1543 | if hs.config.experimental.spaces_enabled: | |
1544 | FederationSpaceSummaryServlet( | |
1545 | handler=hs.get_space_summary_handler(), | |
1497 | 1546 | authenticator=authenticator, |
1498 | 1547 | ratelimiter=ratelimiter, |
1499 | 1548 | server_name=hs.hostname, |
45 | 45 | from synapse.types import JsonDict, get_domain_from_id |
46 | 46 | |
47 | 47 | if TYPE_CHECKING: |
48 | from synapse.app.homeserver import HomeServer | |
48 | from synapse.server import HomeServer | |
49 | 49 | |
50 | 50 | logger = logging.getLogger(__name__) |
51 | 51 |
24 | 24 | from synapse.util.async_helpers import concurrently_execute |
25 | 25 | |
26 | 26 | if TYPE_CHECKING: |
27 | from synapse.app.homeserver import HomeServer | |
27 | from synapse.server import HomeServer | |
28 | 28 | |
29 | 29 | logger = logging.getLogger(__name__) |
30 | 30 |
23 | 23 | from synapse.types import UserID |
24 | 24 | |
25 | 25 | if TYPE_CHECKING: |
26 | from synapse.app.homeserver import HomeServer | |
26 | from synapse.server import HomeServer | |
27 | 27 | |
28 | 28 | logger = logging.getLogger(__name__) |
29 | 29 |
24 | 24 | from synapse.types import JsonDict, UserID |
25 | 25 | |
26 | 26 | if TYPE_CHECKING: |
27 | from synapse.app.homeserver import HomeServer | |
27 | from synapse.server import HomeServer | |
28 | 28 | |
29 | 29 | |
30 | 30 | class AccountDataHandler: |
26 | 26 | from synapse.util import stringutils |
27 | 27 | |
28 | 28 | if TYPE_CHECKING: |
29 | from synapse.app.homeserver import HomeServer | |
29 | from synapse.server import HomeServer | |
30 | 30 | |
31 | 31 | logger = logging.getLogger(__name__) |
32 | 32 |
23 | 23 | from synapse.app import check_bind_error |
24 | 24 | |
25 | 25 | if TYPE_CHECKING: |
26 | from synapse.app.homeserver import HomeServer | |
26 | from synapse.server import HomeServer | |
27 | 27 | |
28 | 28 | logger = logging.getLogger(__name__) |
29 | 29 |
24 | 24 | from ._base import BaseHandler |
25 | 25 | |
26 | 26 | if TYPE_CHECKING: |
27 | from synapse.app.homeserver import HomeServer | |
27 | from synapse.server import HomeServer | |
28 | 28 | |
29 | 29 | logger = logging.getLogger(__name__) |
30 | 30 |
37 | 37 | from synapse.util.metrics import Measure |
38 | 38 | |
39 | 39 | if TYPE_CHECKING: |
40 | from synapse.app.homeserver import HomeServer | |
40 | from synapse.server import HomeServer | |
41 | 41 | |
42 | 42 | logger = logging.getLogger(__name__) |
43 | 43 |
69 | 69 | from synapse.util.threepids import canonicalise_email |
70 | 70 | |
71 | 71 | if TYPE_CHECKING: |
72 | from synapse.app.homeserver import HomeServer | |
72 | from synapse.server import HomeServer | |
73 | 73 | |
74 | 74 | logger = logging.getLogger(__name__) |
75 | 75 | |
885 | 885 | ) |
886 | 886 | return result |
887 | 887 | |
888 | def can_change_password(self) -> bool: | |
889 | """Get whether users on this server are allowed to change or set a password. | |
890 | ||
891 | Both `config.password_enabled` and `config.password_localdb_enabled` must be true. | |
892 | ||
893 | Note that any account (even an SSO account) is allowed to add a password if the | |
894 | above is true. | |
895 | ||
896 | Returns: | |
897 | Whether users on this server are allowed to change or set a password | |
898 | """ | |
899 | return self._password_enabled and self._password_localdb_enabled | |
900 | ||
888 | 901 | def get_supported_login_types(self) -> Iterable[str]: |
889 | 902 | """Get the login types supported for the /login API
890 | 903 |
26 | 26 | from synapse.types import UserID, map_username_to_mxid_localpart |
27 | 27 | |
28 | 28 | if TYPE_CHECKING: |
29 | from synapse.app.homeserver import HomeServer | |
29 | from synapse.server import HomeServer | |
30 | 30 | |
31 | 31 | logger = logging.getLogger(__name__) |
32 | 32 |
22 | 22 | from ._base import BaseHandler |
23 | 23 | |
24 | 24 | if TYPE_CHECKING: |
25 | from synapse.app.homeserver import HomeServer | |
25 | from synapse.server import HomeServer | |
26 | 26 | |
27 | 27 | logger = logging.getLogger(__name__) |
28 | 28 |
44 | 44 | from ._base import BaseHandler |
45 | 45 | |
46 | 46 | if TYPE_CHECKING: |
47 | from synapse.app.homeserver import HomeServer | |
47 | from synapse.server import HomeServer | |
48 | 48 | |
49 | 49 | logger = logging.getLogger(__name__) |
50 | 50 | |
165 | 165 | |
166 | 166 | # Fetch the current state at the time. |
167 | 167 | try: |
168 | event_ids = await self.store.get_forward_extremeties_for_room( | |
168 | event_ids = await self.store.get_forward_extremities_for_room_at_stream_ordering( | |
169 | 169 | room_id, stream_ordering=stream_ordering |
170 | 170 | ) |
171 | 171 | except errors.StoreError: |
906 | 906 | master_key = result.get("master_key") |
907 | 907 | self_signing_key = result.get("self_signing_key") |
908 | 908 | |
909 | ignore_devices = False | |
909 | 910 | # If the remote server has more than ~1000 devices for this user |
910 | 911 | # we assume that something is going horribly wrong (e.g. a bot |
911 | 912 | # that logs in and creates a new device every time it tries to |
924 | 925 | len(devices), |
925 | 926 | ) |
926 | 927 | devices = [] |
928 | ignore_devices = True | |
929 | else: | |
930 | cached_devices = await self.store.get_cached_devices_for_user(user_id) | |
931 | if cached_devices == {d["device_id"]: d for d in devices}: | |
932 | devices = [] | |
933 | ignore_devices = True | |
927 | 934 | |
928 | 935 | for device in devices: |
929 | 936 | logger.debug( |
933 | 940 | stream_id, |
934 | 941 | ) |
935 | 942 | |
936 | await self.store.update_remote_device_list_cache(user_id, devices, stream_id) | |
943 | if not ignore_devices: | |
944 | await self.store.update_remote_device_list_cache( | |
945 | user_id, devices, stream_id | |
946 | ) | |
937 | 947 | device_ids = [device["device_id"] for device in devices] |
938 | 948 | |
939 | 949 | # Handle cross-signing keys. |
944 | 954 | ) |
945 | 955 | device_ids = device_ids + cross_signing_device_ids |
946 | 956 | |
947 | await self.device_handler.notify_device_update(user_id, device_ids) | |
957 | if device_ids: | |
958 | await self.device_handler.notify_device_update(user_id, device_ids) | |
948 | 959 | |
949 | 960 | # We clobber the seen updates since we've re-synced from a given |
950 | 961 | # point. |
972 | 983 | """ |
973 | 984 | device_ids = [] |
974 | 985 | |
975 | if master_key: | |
986 | current_keys_map = await self.store.get_e2e_cross_signing_keys_bulk([user_id]) | |
987 | current_keys = current_keys_map.get(user_id) or {} | |
988 | ||
989 | if master_key and master_key != current_keys.get("master"): | |
976 | 990 | await self.store.set_e2e_cross_signing_key(user_id, "master", master_key) |
977 | 991 | _, verify_key = get_verify_key_from_cross_signing_key(master_key) |
978 | 992 | # verify_key is a VerifyKey from signedjson, which uses |
979 | 993 | # .version to denote the portion of the key ID after the |
980 | 994 | # algorithm and colon, which is the device ID |
981 | 995 | device_ids.append(verify_key.version) |
982 | if self_signing_key: | |
996 | if self_signing_key and self_signing_key != current_keys.get("self_signing"): | |
983 | 997 | await self.store.set_e2e_cross_signing_key( |
984 | 998 | user_id, "self_signing", self_signing_key |
985 | 999 | ) |
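The resync changes above skip the cache write (and the downstream device-update notification) when the fetched data matches what is already stored. A minimal sketch of that compare-before-write pattern, using a hypothetical in-memory store in place of the real database:

```python
class DeviceCache:
    """Hypothetical in-memory stand-in for the device-list store."""

    def __init__(self):
        self._cache = {}   # user_id -> {device_id: device dict}
        self.writes = 0    # counts actual cache updates, for illustration

    def update(self, user_id, devices):
        """Write the fetched device list, unless it matches the cache.

        Returns True if a write happened, False if it was skipped --
        mirroring the `ignore_devices` flag in the hunk above.
        """
        new = {d["device_id"]: d for d in devices}
        if self._cache.get(user_id) == new:
            # Nothing changed: skip the write, and (in the real
            # handler) the notify_device_update call as well.
            return False
        self._cache[user_id] = new
        self.writes += 1
        return True
```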
31 | 31 | from synapse.util.stringutils import random_string |
32 | 32 | |
33 | 33 | if TYPE_CHECKING: |
34 | from synapse.app.homeserver import HomeServer | |
34 | from synapse.server import HomeServer | |
35 | 35 | |
36 | 36 | |
37 | 37 | logger = logging.getLogger(__name__) |
41 | 41 | from synapse.util.retryutils import NotRetryingDestination |
42 | 42 | |
43 | 43 | if TYPE_CHECKING: |
44 | from synapse.app.homeserver import HomeServer | |
44 | from synapse.server import HomeServer | |
45 | 45 | |
46 | 46 | logger = logging.getLogger(__name__) |
47 | 47 |
28 | 28 | from synapse.util.async_helpers import Linearizer |
29 | 29 | |
30 | 30 | if TYPE_CHECKING: |
31 | from synapse.app.homeserver import HomeServer | |
31 | from synapse.server import HomeServer | |
32 | 32 | |
33 | 33 | logger = logging.getLogger(__name__) |
34 | 34 |
20 | 20 | from synapse.types import GroupID, JsonDict, get_domain_from_id |
21 | 21 | |
22 | 22 | if TYPE_CHECKING: |
23 | from synapse.app.homeserver import HomeServer | |
23 | from synapse.server import HomeServer | |
24 | 24 | |
25 | 25 | logger = logging.getLogger(__name__) |
26 | 26 |
148 | 148 | Args: |
149 | 149 | request: the incoming request from the browser. |
150 | 150 | """ |
151 | # This will always be set by the time Twisted calls us. | |
152 | assert request.args is not None | |
153 | ||
151 | 154 | # The provider might redirect with an error. |
152 | 155 | # In that case, just display it as-is. |
153 | 156 | if b"error" in request.args: |
279 | 282 | self._config = provider |
280 | 283 | self._callback_url = hs.config.oidc_callback_url # type: str |
281 | 284 | |
285 | self._oidc_attribute_requirements = provider.attribute_requirements | |
282 | 286 | self._scopes = provider.scopes |
283 | 287 | self._user_profile_method = provider.user_profile_method |
284 | 288 | |
858 | 862 | ) |
859 | 863 | |
860 | 864 | # otherwise, it's a login |
865 | logger.debug("Userinfo for OIDC login: %s", userinfo) | |
866 | ||
867 | # Ensure that the attributes of the logged in user meet the required | |
868 | # attributes by checking the userinfo against attribute_requirements | |
869 | # In order to deal with the fact that OIDC userinfo can contain many | |
870 | # types of data, we wrap non-list values in lists. | |
871 | if not self._sso_handler.check_required_attributes( | |
872 | request, | |
873 | {k: v if isinstance(v, list) else [v] for k, v in userinfo.items()}, | |
874 | self._oidc_attribute_requirements, | |
875 | ): | |
876 | return | |
861 | 877 | |
862 | 878 | # Call the mapper to register/login the user |
863 | 879 | try: |
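As the comment above notes, OIDC userinfo values can be scalars or lists, so non-list values are wrapped in single-element lists before the required-attribute check. A sketch of that normalisation, with a hypothetical requirement shape (pairs of attribute name and required value):

```python
def check_required_attributes(userinfo, requirements):
    """Return True iff every (attribute, value) requirement is met.

    OIDC userinfo values may be scalars or lists, so scalars are
    wrapped in single-element lists before the containment test.
    """
    normalised = {
        k: v if isinstance(v, list) else [v] for k, v in userinfo.items()
    }
    return all(
        value in normalised.get(attribute, [])
        for attribute, value in requirements
    )
```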
20 | 20 | from synapse.api.errors import Codes, PasswordRefusedError |
21 | 21 | |
22 | 22 | if TYPE_CHECKING: |
23 | from synapse.app.homeserver import HomeServer | |
23 | from synapse.server import HomeServer | |
24 | 24 | |
25 | 25 | logger = logging.getLogger(__name__) |
26 | 26 |
102 | 102 | def __init__(self, hs: "HomeServer"): |
103 | 103 | self.clock = hs.get_clock() |
104 | 104 | self.store = hs.get_datastore() |
105 | ||
106 | self._busy_presence_enabled = hs.config.experimental.msc3026_enabled | |
105 | 107 | |
106 | 108 | active_presence = self.store.take_presence_startup_info() |
107 | 109 | self.user_to_current_state = {state.user_id: state for state in active_presence} |
729 | 731 | PresenceState.ONLINE, |
730 | 732 | PresenceState.UNAVAILABLE, |
731 | 733 | PresenceState.OFFLINE, |
734 | PresenceState.BUSY, | |
732 | 735 | ) |
733 | if presence not in valid_presence: | |
736 | ||
737 | if presence not in valid_presence or ( | |
738 | presence == PresenceState.BUSY and not self._busy_presence_enabled | |
739 | ): | |
734 | 740 | raise SynapseError(400, "Invalid presence state") |
735 | 741 | |
736 | 742 | user_id = target_user.to_string() |
743 | 749 | msg = status_msg if presence != PresenceState.OFFLINE else None |
744 | 750 | new_fields["status_msg"] = msg |
745 | 751 | |
746 | if presence == PresenceState.ONLINE: | |
752 | if presence == PresenceState.ONLINE or ( | |
753 | presence == PresenceState.BUSY and self._busy_presence_enabled | |
754 | ): | |
747 | 755 | new_fields["last_active_ts"] = self.clock.time_msec() |
748 | 756 | |
749 | 757 | await self._update_states([prev_state.copy_and_replace(**new_fields)]) |
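The presence change above accepts the new BUSY state only behind the msc3026 experimental flag, and treats it like ONLINE for last-active bookkeeping. A condensed sketch of that gating, with the flag simplified to a boolean argument:

```python
ONLINE, UNAVAILABLE, OFFLINE, BUSY = "online", "unavailable", "offline", "busy"


def check_presence_state(presence: str, busy_enabled: bool) -> bool:
    """Validate a requested presence state.

    BUSY is only accepted when the (hypothetical) experimental flag is
    on, mirroring the msc3026 gating in the hunk above. Returns whether
    the state should bump the last-active timestamp.
    """
    valid = (ONLINE, UNAVAILABLE, OFFLINE, BUSY)
    if presence not in valid or (presence == BUSY and not busy_enabled):
        raise ValueError("Invalid presence state")
    # ONLINE always counts as activity; BUSY does too, but it can only
    # reach this point when the feature is enabled.
    return presence == ONLINE or presence == BUSY
```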
35 | 35 | from ._base import BaseHandler |
36 | 36 | |
37 | 37 | if TYPE_CHECKING: |
38 | from synapse.app.homeserver import HomeServer | |
38 | from synapse.server import HomeServer | |
39 | 39 | |
40 | 40 | logger = logging.getLogger(__name__) |
41 | 41 |
20 | 20 | from ._base import BaseHandler |
21 | 21 | |
22 | 22 | if TYPE_CHECKING: |
23 | from synapse.app.homeserver import HomeServer | |
23 | from synapse.server import HomeServer | |
24 | 24 | |
25 | 25 | logger = logging.getLogger(__name__) |
26 | 26 |
19 | 19 | from synapse.types import JsonDict, ReadReceipt, get_domain_from_id |
20 | 20 | |
21 | 21 | if TYPE_CHECKING: |
22 | from synapse.app.homeserver import HomeServer | |
22 | from synapse.server import HomeServer | |
23 | 23 | |
24 | 24 | logger = logging.getLogger(__name__) |
25 | 25 |
37 | 37 | from ._base import BaseHandler |
38 | 38 | |
39 | 39 | if TYPE_CHECKING: |
40 | from synapse.app.homeserver import HomeServer | |
40 | from synapse.server import HomeServer | |
41 | 41 | |
42 | 42 | logger = logging.getLogger(__name__) |
43 | 43 | |
436 | 436 | |
437 | 437 | if RoomAlias.is_valid(r): |
438 | 438 | ( |
439 | room_id, | |
439 | room, | |
440 | 440 | remote_room_hosts, |
441 | 441 | ) = await room_member_handler.lookup_room_alias(room_alias) |
442 | room_id = room_id.to_string() | |
442 | room_id = room.to_string() | |
443 | 443 | else: |
444 | 444 | raise SynapseError( |
445 | 445 | 400, "%s was not legal room ID or room alias" % (r,) |
28 | 28 | from ._base import BaseHandler |
29 | 29 | |
30 | 30 | if TYPE_CHECKING: |
31 | from synapse.app.homeserver import HomeServer | |
31 | from synapse.server import HomeServer | |
32 | 32 | |
33 | 33 | logger = logging.getLogger(__name__) |
34 | 34 |
152 | 152 | target |
153 | 153 | room_id |
154 | 154 | """ |
155 | raise NotImplementedError() | |
156 | ||
157 | @abc.abstractmethod | |
158 | async def forget(self, user: UserID, room_id: str) -> None: | |
155 | 159 | raise NotImplementedError() |
156 | 160 | |
157 | 161 | def ratelimit_invite(self, room_id: Optional[str], invitee_user_id: str): |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | 15 | import logging |
16 | from typing import List, Optional, Tuple | |
16 | from typing import TYPE_CHECKING, List, Optional, Tuple | |
17 | 17 | |
18 | 18 | from synapse.api.errors import SynapseError |
19 | 19 | from synapse.handlers.room_member import RoomMemberHandler |
24 | 24 | ) |
25 | 25 | from synapse.types import Requester, UserID |
26 | 26 | |
27 | if TYPE_CHECKING: | |
28 | from synapse.server import HomeServer | |
29 | ||
27 | 30 | logger = logging.getLogger(__name__) |
28 | 31 | |
29 | 32 | |
30 | 33 | class RoomMemberWorkerHandler(RoomMemberHandler): |
31 | def __init__(self, hs): | |
34 | def __init__(self, hs: "HomeServer"): | |
32 | 35 | super().__init__(hs) |
33 | 36 | |
34 | 37 | self._remote_join_client = ReplRemoteJoin.make_client(hs) |
82 | 85 | await self._notify_change_client( |
83 | 86 | user_id=target.to_string(), room_id=room_id, change="left" |
84 | 87 | ) |
88 | ||
89 | async def forget(self, target: UserID, room_id: str) -> None: | |
90 | raise RuntimeError("Cannot forget rooms on workers.") |
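The pair of changes above makes `forget()` abstract on the base handler while the worker variant rejects it outright. A minimal sketch of that pattern (hypothetical class names):

```python
import abc
import asyncio


class RoomMemberBase(abc.ABC):
    """Sketch of the abstract base: every variant must define forget()."""

    @abc.abstractmethod
    async def forget(self, user: str, room_id: str) -> None:
        raise NotImplementedError()


class WorkerHandler(RoomMemberBase):
    # Worker processes cannot perform the underlying database write,
    # so the operation is refused loudly rather than being silently
    # left unimplemented.
    async def forget(self, user: str, room_id: str) -> None:
        raise RuntimeError("Cannot forget rooms on workers.")
```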
29 | 29 | from ._base import BaseHandler |
30 | 30 | |
31 | 31 | if TYPE_CHECKING: |
32 | from synapse.app.homeserver import HomeServer | |
32 | from synapse.server import HomeServer | |
33 | 33 | |
34 | 34 | logger = logging.getLogger(__name__) |
35 | 35 |
20 | 20 | from ._base import BaseHandler |
21 | 21 | |
22 | 22 | if TYPE_CHECKING: |
23 | from synapse.app.homeserver import HomeServer | |
23 | from synapse.server import HomeServer | |
24 | 24 | |
25 | 25 | logger = logging.getLogger(__name__) |
26 | 26 | |
40 | 40 | logout_devices: bool, |
41 | 41 | requester: Optional[Requester] = None, |
42 | 42 | ) -> None: |
43 | if not self.hs.config.password_localdb_enabled: | |
43 | if not self._auth_handler.can_change_password(): | |
44 | 44 | raise SynapseError(403, "Password change disabled", errcode=Codes.FORBIDDEN) |
45 | 45 | |
46 | 46 | try: |
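The `set_password` hunk above replaces a direct read of `config.password_localdb_enabled` with a call to the new `can_change_password()` method, so the two-flag rule is defined in exactly one place. A minimal sketch of that refactoring (names abbreviated):

```python
class AuthHandler:
    """Sketch: the password-change rule lives in one capability method."""

    def __init__(self, password_enabled: bool, localdb_enabled: bool):
        self._password_enabled = password_enabled
        self._password_localdb_enabled = localdb_enabled

    def can_change_password(self) -> bool:
        # Both config flags must be true. Callers ask the handler
        # instead of reading the config directly, so the rule is
        # applied consistently everywhere.
        return self._password_enabled and self._password_localdb_enabled


def set_password(auth: AuthHandler, new_password: str) -> str:
    if not auth.can_change_password():
        raise PermissionError("Password change disabled")
    return "password set"
```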
0 | # -*- coding: utf-8 -*- | |
1 | # Copyright 2021 The Matrix.org Foundation C.I.C. | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | import itertools | |
16 | import logging | |
17 | from collections import deque | |
18 | from typing import TYPE_CHECKING, Iterable, List, Optional, Sequence, Set, Tuple, cast | |
19 | ||
20 | import attr | |
21 | ||
22 | from synapse.api.constants import EventContentFields, EventTypes, HistoryVisibility | |
23 | from synapse.api.errors import AuthError | |
24 | from synapse.events import EventBase | |
25 | from synapse.events.utils import format_event_for_client_v2 | |
26 | from synapse.types import JsonDict | |
27 | ||
28 | if TYPE_CHECKING: | |
29 | from synapse.server import HomeServer | |
30 | ||
31 | logger = logging.getLogger(__name__) | |
32 | ||
33 | # number of rooms to return. We'll stop once we hit this limit. | |
34 | # TODO: allow clients to reduce this with a request param. | |
35 | MAX_ROOMS = 50 | |
36 | ||
37 | # max number of events to return per room. | |
38 | MAX_ROOMS_PER_SPACE = 50 | |
39 | ||
40 | # max number of federation servers to hit per room | |
41 | MAX_SERVERS_PER_SPACE = 3 | |
42 | ||
43 | ||
44 | class SpaceSummaryHandler: | |
45 | def __init__(self, hs: "HomeServer"): | |
46 | self._clock = hs.get_clock() | |
47 | self._auth = hs.get_auth() | |
48 | self._room_list_handler = hs.get_room_list_handler() | |
49 | self._state_handler = hs.get_state_handler() | |
50 | self._store = hs.get_datastore() | |
51 | self._event_serializer = hs.get_event_client_serializer() | |
52 | self._server_name = hs.hostname | |
53 | self._federation_client = hs.get_federation_client() | |
54 | ||
55 | async def get_space_summary( | |
56 | self, | |
57 | requester: str, | |
58 | room_id: str, | |
59 | suggested_only: bool = False, | |
60 | max_rooms_per_space: Optional[int] = None, | |
61 | ) -> JsonDict: | |
62 | """ | |
63 | Implementation of the space summary C-S API | |
64 | ||
65 | Args: | |
66 | requester: user id of the user making this request | |
67 | ||
68 | room_id: room id to start the summary at | |
69 | ||
70 | suggested_only: whether we should only return children with the "suggested" | |
71 | flag set. | |
72 | ||
73 | max_rooms_per_space: an optional limit on the number of child rooms we will | |
74 | return. This does not apply to the root room (ie, room_id), and | |
75 | is overridden by MAX_ROOMS_PER_SPACE. | |
76 | ||
77 | Returns: | |
78 | summary dict to return | |
79 | """ | |
80 | # first of all, check that the user is in the room in question (or it's | |
81 | # world-readable) | |
82 | await self._auth.check_user_in_room_or_world_readable(room_id, requester) | |
83 | ||
84 | # the queue of rooms to process | |
85 | room_queue = deque((_RoomQueueEntry(room_id, ()),)) | |
86 | ||
87 | # rooms we have already processed | |
88 | processed_rooms = set() # type: Set[str] | |
89 | ||
90 | # events we have already processed. We don't necessarily have their event ids, | |
91 | # so instead we key on (room id, state key) | |
92 | processed_events = set() # type: Set[Tuple[str, str]] | |
93 | ||
94 | rooms_result = [] # type: List[JsonDict] | |
95 | events_result = [] # type: List[JsonDict] | |
96 | ||
97 | while room_queue and len(rooms_result) < MAX_ROOMS: | |
98 | queue_entry = room_queue.popleft() | |
99 | room_id = queue_entry.room_id | |
100 | if room_id in processed_rooms: | |
101 | # already done this room | |
102 | continue | |
103 | ||
104 | logger.debug("Processing room %s", room_id) | |
105 | ||
106 | is_in_room = await self._store.is_host_joined(room_id, self._server_name) | |
107 | ||
108 | # The client-specified max_rooms_per_space limit doesn't apply to the | |
109 | # room_id specified in the request, so we ignore it if this is the | |
110 | # first room we are processing. | |
111 | max_children = max_rooms_per_space if processed_rooms else None | |
112 | ||
113 | if is_in_room: | |
114 | rooms, events = await self._summarize_local_room( | |
115 | requester, room_id, suggested_only, max_children | |
116 | ) | |
117 | else: | |
118 | rooms, events = await self._summarize_remote_room( | |
119 | queue_entry, | |
120 | suggested_only, | |
121 | max_children, | |
122 | exclude_rooms=processed_rooms, | |
123 | ) | |
124 | ||
125 | logger.debug( | |
126 | "Query of %s returned rooms %s, events %s", | |
127 | queue_entry.room_id, | |
128 | [room.get("room_id") for room in rooms], | |
129 | ["%s->%s" % (ev["room_id"], ev["state_key"]) for ev in events], | |
130 | ) | |
131 | ||
132 | rooms_result.extend(rooms) | |
133 | ||
134 | # any rooms returned don't need visiting again | |
135 | processed_rooms.update(cast(str, room.get("room_id")) for room in rooms) | |
136 | ||
137 | # the room we queried may or may not have been returned, but don't process | |
138 | # it again, anyway. | |
139 | processed_rooms.add(room_id) | |
140 | ||
141 | # XXX: is it ok that we blindly iterate through any events returned by | |
142 | # a remote server, whether or not they actually link to any rooms in our | |
143 | # tree? | |
144 | for ev in events: | |
145 | # remote servers might return events we have already processed | |
146 | # (eg, Dendrite returns inward pointers as well as outward ones), so | |
147 | # we need to filter them out, to avoid returning duplicate links to the | |
148 | # client. | |
149 | ev_key = (ev["room_id"], ev["state_key"]) | |
150 | if ev_key in processed_events: | |
151 | continue | |
152 | events_result.append(ev) | |
153 | ||
154 | # add the child to the queue. we have already validated | |
155 | # that the vias are a list of server names. | |
156 | room_queue.append( | |
157 | _RoomQueueEntry(ev["state_key"], ev["content"]["via"]) | |
158 | ) | |
159 | processed_events.add(ev_key) | |
160 | ||
161 | return {"rooms": rooms_result, "events": events_result} | |
162 | ||
163 | async def federation_space_summary( | |
164 | self, | |
165 | room_id: str, | |
166 | suggested_only: bool, | |
167 | max_rooms_per_space: Optional[int], | |
168 | exclude_rooms: Iterable[str], | |
169 | ) -> JsonDict: | |
170 | """ | |
171 | Implementation of the space summary Federation API | |
172 | ||
173 | Args: | |
174 | room_id: room id to start the summary at | |
175 | ||
176 | suggested_only: whether we should only return children with the "suggested" | |
177 | flag set. | |
178 | ||
179 | max_rooms_per_space: an optional limit on the number of child rooms we will | |
180 | return. Unlike the C-S API, this applies to the root room (room_id). | |
181 | It is clipped to MAX_ROOMS_PER_SPACE. | |
182 | ||
183 | exclude_rooms: a list of rooms to skip over (presumably because the | |
184 | calling server has already seen them). | |
185 | ||
186 | Returns: | |
187 | summary dict to return | |
188 | """ | |
189 | # the queue of rooms to process | |
190 | room_queue = deque((room_id,)) | |
191 | ||
192 | # the set of rooms that we should not walk further. Initialise it with the | |
193 | # excluded-rooms list; we will add other rooms as we process them so that | |
194 | # we do not loop. | |
195 | processed_rooms = set(exclude_rooms) # type: Set[str] | |
196 | ||
197 | rooms_result = [] # type: List[JsonDict] | |
198 | events_result = [] # type: List[JsonDict] | |
199 | ||
200 | while room_queue and len(rooms_result) < MAX_ROOMS: | |
201 | room_id = room_queue.popleft() | |
202 | if room_id in processed_rooms: | |
203 | # already done this room | |
204 | continue | |
205 | ||
206 | logger.debug("Processing room %s", room_id) | |
207 | ||
208 | rooms, events = await self._summarize_local_room( | |
209 | None, room_id, suggested_only, max_rooms_per_space | |
210 | ) | |
211 | ||
212 | processed_rooms.add(room_id) | |
213 | ||
214 | rooms_result.extend(rooms) | |
215 | events_result.extend(events) | |
216 | ||
217 | # add any children to the queue | |
218 | room_queue.extend(edge_event["state_key"] for edge_event in events) | |
219 | ||
220 | return {"rooms": rooms_result, "events": events_result} | |
221 | ||
222 | async def _summarize_local_room( | |
223 | self, | |
224 | requester: Optional[str], | |
225 | room_id: str, | |
226 | suggested_only: bool, | |
227 | max_children: Optional[int], | |
228 | ) -> Tuple[Sequence[JsonDict], Sequence[JsonDict]]: | |
229 | if not await self._is_room_accessible(room_id, requester): | |
230 | return (), () | |
231 | ||
232 | room_entry = await self._build_room_entry(room_id) | |
233 | ||
234 | # look for child rooms/spaces. | |
235 | child_events = await self._get_child_events(room_id) | |
236 | ||
237 | if suggested_only: | |
238 | # we only care about suggested children | |
239 | child_events = filter(_is_suggested_child_event, child_events) | |
240 | ||
241 | if max_children is None or max_children > MAX_ROOMS_PER_SPACE: | |
242 | max_children = MAX_ROOMS_PER_SPACE | |
243 | ||
244 | now = self._clock.time_msec() | |
245 | events_result = [] # type: List[JsonDict] | |
246 | for edge_event in itertools.islice(child_events, max_children): | |
247 | events_result.append( | |
248 | await self._event_serializer.serialize_event( | |
249 | edge_event, | |
250 | time_now=now, | |
251 | event_format=format_event_for_client_v2, | |
252 | ) | |
253 | ) | |
254 | return (room_entry,), events_result | |
255 | ||
256 | async def _summarize_remote_room( | |
257 | self, | |
258 | room: "_RoomQueueEntry", | |
259 | suggested_only: bool, | |
260 | max_children: Optional[int], | |
261 | exclude_rooms: Iterable[str], | |
262 | ) -> Tuple[Sequence[JsonDict], Sequence[JsonDict]]: | |
263 | room_id = room.room_id | |
264 | logger.info("Requesting summary for %s via %s", room_id, room.via) | |
265 | ||
266 | # we need to make the exclusion list json-serialisable | |
267 | exclude_rooms = list(exclude_rooms) | |
268 | ||
269 | via = itertools.islice(room.via, MAX_SERVERS_PER_SPACE) | |
270 | try: | |
271 | res = await self._federation_client.get_space_summary( | |
272 | via, | |
273 | room_id, | |
274 | suggested_only=suggested_only, | |
275 | max_rooms_per_space=max_children, | |
276 | exclude_rooms=exclude_rooms, | |
277 | ) | |
278 | except Exception as e: | |
279 | logger.warning( | |
280 | "Unable to get summary of %s via federation: %s", | |
281 | room_id, | |
282 | e, | |
283 | exc_info=logger.isEnabledFor(logging.DEBUG), | |
284 | ) | |
285 | return (), () | |
286 | ||
287 | return res.rooms, tuple( | |
288 | ev.data | |
289 | for ev in res.events | |
290 | if ev.event_type == EventTypes.MSC1772_SPACE_CHILD | |
291 | ) | |
292 | ||
293 | async def _is_room_accessible(self, room_id: str, requester: Optional[str]) -> bool: | |
294 | # if we have an authenticated requesting user, first check if they are in the | |
295 | # room | |
296 | if requester: | |
297 | try: | |
298 | await self._auth.check_user_in_room(room_id, requester) | |
299 | return True | |
300 | except AuthError: | |
301 | pass | |
302 | ||
303 | # otherwise, check if the room is peekable | |
304 | hist_vis_ev = await self._state_handler.get_current_state( | |
305 | room_id, EventTypes.RoomHistoryVisibility, "" | |
306 | ) | |
307 | if hist_vis_ev: | |
308 | hist_vis = hist_vis_ev.content.get("history_visibility") | |
309 | if hist_vis == HistoryVisibility.WORLD_READABLE: | |
310 | return True | |
311 | ||
312 | logger.info( | |
313 | "room %s is unpeekable and user %s is not a member, omitting from summary", | |
314 | room_id, | |
315 | requester, | |
316 | ) | |
317 | return False | |
318 | ||
319 | async def _build_room_entry(self, room_id: str) -> JsonDict: | |
320 | """Generate an entry suitable for the 'rooms' list in the summary response""" | |
321 | stats = await self._store.get_room_with_stats(room_id) | |
322 | ||
323 | # currently this should be impossible because we call | |
324 | # check_user_in_room_or_world_readable on the room before we get here, so | |
325 | # there should always be an entry | |
326 | assert stats is not None, "unable to retrieve stats for %s" % (room_id,) | |
327 | ||
328 | current_state_ids = await self._store.get_current_state_ids(room_id) | |
329 | create_event = await self._store.get_event( | |
330 | current_state_ids[(EventTypes.Create, "")] | |
331 | ) | |
332 | ||
333 | # TODO: update once MSC1772 lands | |
334 | room_type = create_event.content.get(EventContentFields.MSC1772_ROOM_TYPE) | |
335 | ||
336 | entry = { | |
337 | "room_id": stats["room_id"], | |
338 | "name": stats["name"], | |
339 | "topic": stats["topic"], | |
340 | "canonical_alias": stats["canonical_alias"], | |
341 | "num_joined_members": stats["joined_members"], | |
342 | "avatar_url": stats["avatar"], | |
343 | "world_readable": ( | |
344 | stats["history_visibility"] == HistoryVisibility.WORLD_READABLE | |
345 | ), | |
346 | "guest_can_join": stats["guest_access"] == "can_join", | |
347 | "room_type": room_type, | |
348 | } | |
349 | ||
350 | # Filter out Nones: omit those fields from the entry altogether | |
351 | room_entry = {k: v for k, v in entry.items() if v is not None} | |
352 | ||
353 | return room_entry | |
354 | ||
355 | async def _get_child_events(self, room_id: str) -> Iterable[EventBase]: | |
356 | # look for child rooms/spaces. | |
357 | current_state_ids = await self._store.get_current_state_ids(room_id) | |
358 | ||
359 | events = await self._store.get_events_as_list( | |
360 | [ | |
361 | event_id | |
362 | for key, event_id in current_state_ids.items() | |
363 | # TODO: update once MSC1772 lands | |
364 | if key[0] == EventTypes.MSC1772_SPACE_CHILD | |
365 | ] | |
366 | ) | |
367 | ||
368 | # filter out any events without a "via" (which implies it has been redacted) | |
369 | return (e for e in events if _has_valid_via(e)) | |
370 | ||
371 | ||
372 | @attr.s(frozen=True, slots=True) | |
373 | class _RoomQueueEntry: | |
374 | room_id = attr.ib(type=str) | |
375 | via = attr.ib(type=Sequence[str]) | |
376 | ||
377 | ||
378 | def _has_valid_via(e: EventBase) -> bool: | |
379 | via = e.content.get("via") | |
380 | if not via or not isinstance(via, Sequence): | |
381 | return False | |
382 | for v in via: | |
383 | if not isinstance(v, str): | |
384 | logger.debug("Ignoring edge event %s with invalid via entry", e.event_id) | |
385 | return False | |
386 | return True | |
387 | ||
388 | ||
389 | def _is_suggested_child_event(edge_event: EventBase) -> bool: | |
390 | suggested = edge_event.content.get("suggested") | |
391 | if isinstance(suggested, bool) and suggested: | |
392 | return True | |
393 | logger.debug("Ignoring not-suggested child %s", edge_event.state_key) | |
394 | return False |
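At its core, the `get_space_summary` method in the new file above is a breadth-first walk over the space tree: a deque of rooms to visit, a processed set to break cycles and duplicate edges, and a cap on the total number of rooms returned. A stripped-down sketch over a plain adjacency dict (no federation, no auth checks):

```python
from collections import deque


def summarize(graph, root, max_rooms=50):
    """Breadth-first walk of a space tree.

    `graph` maps room_id -> list of child room_ids. Returns the rooms
    visited in BFS order, capped at `max_rooms`, with cycles and
    duplicate edges ignored -- mirroring the processed_rooms /
    processed_events sets in the handler above.
    """
    queue = deque([root])
    processed = set()
    rooms = []
    while queue and len(rooms) < max_rooms:
        room_id = queue.popleft()
        if room_id in processed:
            continue  # already summarised (cycle or duplicate edge)
        processed.add(room_id)
        rooms.append(room_id)
        # In the real handler, children come from m.space.child state
        # events with a valid "via" list.
        queue.extend(graph.get(room_id, []))
    return rooms
```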
16 | 16 | from typing import TYPE_CHECKING, Optional |
17 | 17 | |
18 | 18 | if TYPE_CHECKING: |
19 | from synapse.app.homeserver import HomeServer | |
19 | from synapse.server import HomeServer | |
20 | 20 | |
21 | 21 | logger = logging.getLogger(__name__) |
22 | 22 |
23 | 23 | from synapse.types import JsonDict |
24 | 24 | |
25 | 25 | if TYPE_CHECKING: |
26 | from synapse.app.homeserver import HomeServer | |
26 | from synapse.server import HomeServer | |
27 | 27 | |
28 | 28 | logger = logging.getLogger(__name__) |
29 | 29 |
79 | 79 | filter_collection = attr.ib(type=FilterCollection) |
80 | 80 | is_guest = attr.ib(type=bool) |
81 | 81 | request_key = attr.ib(type=Tuple[Any, ...]) |
82 | device_id = attr.ib(type=str) | |
82 | device_id = attr.ib(type=Optional[str]) | |
83 | 83 | |
84 | 84 | |
85 | 85 | @attr.s(slots=True, frozen=True) |
722 | 722 | |
723 | 723 | return summary |
724 | 724 | |
725 | def get_lazy_loaded_members_cache(self, cache_key: Tuple[str, str]) -> LruCache: | |
725 | def get_lazy_loaded_members_cache( | |
726 | self, cache_key: Tuple[str, Optional[str]] | |
727 | ) -> LruCache: | |
726 | 728 | cache = self.lazy_loaded_members_cache.get(cache_key) |
727 | 729 | if cache is None: |
728 | 730 | logger.debug("creating LruCache for %r", cache_key) |
1978 | 1980 | |
1979 | 1981 | logger.info("User joined room after current token: %s", room_id) |
1980 | 1982 | |
1981 | extrems = await self.store.get_forward_extremeties_for_room( | |
1982 | room_id, event_pos.stream | |
1983 | extrems = ( | |
1984 | await self.store.get_forward_extremities_for_room_at_stream_ordering( | |
1985 | room_id, event_pos.stream | |
1986 | ) | |
1983 | 1987 | ) |
1984 | 1988 | users_in_room = await self.state.get_current_users_in_room(room_id, extrems) |
1985 | 1989 | if user_id in users_in_room: |
24 | 24 | from synapse.util.metrics import Measure |
25 | 25 | |
26 | 26 | if TYPE_CHECKING: |
27 | from synapse.app.homeserver import HomeServer | |
27 | from synapse.server import HomeServer | |
28 | 28 | |
29 | 29 | logger = logging.getLogger(__name__) |
30 | 30 |
76 | 76 | from synapse.util.async_helpers import timeout_deferred |
77 | 77 | |
78 | 78 | if TYPE_CHECKING: |
79 | from synapse.app.homeserver import HomeServer | |
79 | from synapse.server import HomeServer | |
80 | 80 | |
81 | 81 | logger = logging.getLogger(__name__) |
82 | 82 |
18 | 18 | |
19 | 19 | from twisted.internet import defer, protocol |
20 | 20 | from twisted.internet.error import ConnectError |
21 | from twisted.internet.interfaces import IStreamClientEndpoint | |
22 | from twisted.internet.protocol import connectionDone | |
21 | from twisted.internet.interfaces import IReactorCore, IStreamClientEndpoint | |
22 | from twisted.internet.protocol import ClientFactory, Protocol, connectionDone | |
23 | 23 | from twisted.web import http |
24 | from twisted.web.http_headers import Headers | |
24 | 25 | |
25 | 26 | logger = logging.getLogger(__name__) |
26 | 27 | |
42 | 43 | |
43 | 44 | Args: |
44 | 45 | reactor: the Twisted reactor to use for the connection |
45 | proxy_endpoint (IStreamClientEndpoint): the endpoint to use to connect to the | |
46 | proxy | |
47 | host (bytes): hostname that we want to CONNECT to | |
48 | port (int): port that we want to connect to | |
49 | """ | |
50 | ||
51 | def __init__(self, reactor, proxy_endpoint, host, port): | |
46 | proxy_endpoint: the endpoint to use to connect to the proxy | |
47 | host: hostname that we want to CONNECT to | |
48 | port: port that we want to connect to | |
49 | headers: Extra HTTP headers to include in the CONNECT request | |
50 | """ | |
51 | ||
52 | def __init__( | |
53 | self, | |
54 | reactor: IReactorCore, | |
55 | proxy_endpoint: IStreamClientEndpoint, | |
56 | host: bytes, | |
57 | port: int, | |
58 | headers: Headers, | |
59 | ): | |
52 | 60 | self._reactor = reactor |
53 | 61 | self._proxy_endpoint = proxy_endpoint |
54 | 62 | self._host = host |
55 | 63 | self._port = port |
64 | self._headers = headers | |
56 | 65 | |
57 | 66 | def __repr__(self): |
58 | 67 | return "<HTTPConnectProxyEndpoint %s>" % (self._proxy_endpoint,) |
59 | 68 | |
60 | def connect(self, protocolFactory): | |
61 | f = HTTPProxiedClientFactory(self._host, self._port, protocolFactory) | |
69 | def connect(self, protocolFactory: ClientFactory): | |
70 | f = HTTPProxiedClientFactory( | |
71 | self._host, self._port, protocolFactory, self._headers | |
72 | ) | |
62 | 73 | d = self._proxy_endpoint.connect(f) |
63 | 74 | # once the tcp socket connects successfully, we need to wait for the |
64 | 75 | # CONNECT to complete. |
73 | 84 | HTTP Protocol object and run the rest of the connection. |
74 | 85 | |
75 | 86 | Args: |
76 | dst_host (bytes): hostname that we want to CONNECT to | |
77 | dst_port (int): port that we want to connect to | |
78 | wrapped_factory (protocol.ClientFactory): The original Factory | |
79 | """ | |
80 | ||
81 | def __init__(self, dst_host, dst_port, wrapped_factory): | |
87 | dst_host: hostname that we want to CONNECT to | |
88 | dst_port: port that we want to connect to | |
89 | wrapped_factory: The original Factory | |
90 | headers: Extra HTTP headers to include in the CONNECT request | |
91 | """ | |
92 | ||
93 | def __init__( | |
94 | self, | |
95 | dst_host: bytes, | |
96 | dst_port: int, | |
97 | wrapped_factory: ClientFactory, | |
98 | headers: Headers, | |
99 | ): | |
82 | 100 | self.dst_host = dst_host |
83 | 101 | self.dst_port = dst_port |
84 | 102 | self.wrapped_factory = wrapped_factory |
103 | self.headers = headers | |
85 | 104 | self.on_connection = defer.Deferred() |
86 | 105 | |
87 | 106 | def startedConnecting(self, connector): |
91 | 110 | wrapped_protocol = self.wrapped_factory.buildProtocol(addr) |
92 | 111 | |
93 | 112 | return HTTPConnectProtocol( |
94 | self.dst_host, self.dst_port, wrapped_protocol, self.on_connection | |
113 | self.dst_host, | |
114 | self.dst_port, | |
115 | wrapped_protocol, | |
116 | self.on_connection, | |
117 | self.headers, | |
95 | 118 | ) |
96 | 119 | |
97 | 120 | def clientConnectionFailed(self, connector, reason): |
111 | 134 | """Protocol that wraps an existing Protocol to do a CONNECT handshake at connect |
112 | 135 | |
113 | 136 | Args: |
114 | host (bytes): The original HTTP(s) hostname or IPv4 or IPv6 address literal | |
137 | host: The original HTTP(s) hostname or IPv4 or IPv6 address literal | |
115 | 138 | to put in the CONNECT request |
116 | 139 | |
117 | port (int): The original HTTP(s) port to put in the CONNECT request | |
118 | ||
119 | wrapped_protocol (interfaces.IProtocol): the original protocol (probably | |
120 | HTTPChannel or TLSMemoryBIOProtocol, but could be anything really) | |
121 | ||
122 | connected_deferred (Deferred): a Deferred which will be callbacked with | |
140 | port: The original HTTP(s) port to put in the CONNECT request | |
141 | ||
142 | wrapped_protocol: the original protocol (probably HTTPChannel or | |
143 | TLSMemoryBIOProtocol, but could be anything really) | |
144 | ||
145 | connected_deferred: a Deferred which will be callbacked with | |
123 | 146 | wrapped_protocol when the CONNECT completes |
124 | """ | |
125 | ||
126 | def __init__(self, host, port, wrapped_protocol, connected_deferred): | |
147 | ||
148 | headers: Extra HTTP headers to include in the CONNECT request | |
149 | """ | |
150 | ||
151 | def __init__( | |
152 | self, | |
153 | host: bytes, | |
154 | port: int, | |
155 | wrapped_protocol: Protocol, | |
156 | connected_deferred: defer.Deferred, | |
157 | headers: Headers, | |
158 | ): | |
127 | 159 | self.host = host |
128 | 160 | self.port = port |
129 | 161 | self.wrapped_protocol = wrapped_protocol |
130 | 162 | self.connected_deferred = connected_deferred |
131 | self.http_setup_client = HTTPConnectSetupClient(self.host, self.port) | |
163 | self.headers = headers | |
164 | ||
165 | self.http_setup_client = HTTPConnectSetupClient( | |
166 | self.host, self.port, self.headers | |
167 | ) | |
132 | 168 | self.http_setup_client.on_connected.addCallback(self.proxyConnected) |
133 | 169 | |
134 | 170 | def connectionMade(self): |
153 | 189 | if buf: |
154 | 190 | self.wrapped_protocol.dataReceived(buf) |
155 | 191 | |
156 | def dataReceived(self, data): | |
192 | def dataReceived(self, data: bytes): | |
157 | 193 | # if we've set up the HTTP protocol, we can send the data there |
158 | 194 | if self.wrapped_protocol.connected: |
159 | 195 | return self.wrapped_protocol.dataReceived(data) |
167 | 203 | """HTTPClient protocol to send a CONNECT message for proxies and read the response. |
168 | 204 | |
169 | 205 | Args: |
170 | host (bytes): The hostname to send in the CONNECT message | |
171 | port (int): The port to send in the CONNECT message | |
172 | """ | |
173 | ||
174 | def __init__(self, host, port): | |
206 | host: The hostname to send in the CONNECT message | |
207 | port: The port to send in the CONNECT message | |
208 | headers: Extra headers to send with the CONNECT message | |
209 | """ | |
210 | ||
211 | def __init__(self, host: bytes, port: int, headers: Headers): | |
175 | 212 | self.host = host |
176 | 213 | self.port = port |
214 | self.headers = headers | |
177 | 215 | self.on_connected = defer.Deferred() |
178 | 216 | |
179 | 217 | def connectionMade(self): |
180 | 218 | logger.debug("Connected to proxy, sending CONNECT") |
181 | 219 | self.sendCommand(b"CONNECT", b"%s:%d" % (self.host, self.port)) |
220 | ||
221 | # Send any additional specified headers | |
222 | for name, values in self.headers.getAllRawHeaders(): | |
223 | for value in values: | |
224 | self.sendHeader(name, value) | |
225 | ||
182 | 226 | self.endHeaders() |
183 | 227 | |
184 | def handleStatus(self, version, status, message): | |
228 | def handleStatus(self, version: bytes, status: bytes, message: bytes): | |
185 | 229 | logger.debug("Got Status: %s %s %s", status, message, version) |
186 | 230 | if status != b"200": |
187 | 231 | raise ProxyConnectError("Unexpected status on CONNECT: %s" % status) |
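The hunks above thread an extra `headers` argument down to `HTTPConnectSetupClient`, which now emits each header after the `CONNECT` command line. As a rough standalone sketch of the resulting wire preamble (plain tuples stand in for the twisted `Headers` object, and the helper name is illustrative, not Synapse code):

```python
from typing import Iterable, Tuple

def build_connect_preamble(
    host: bytes, port: int, extra_headers: Iterable[Tuple[bytes, bytes]]
) -> bytes:
    # Command line first, mirroring sendCommand() (twisted's HTTPClient
    # speaks HTTP/1.0 on this path), then any extra headers, then a
    # blank line to terminate the header block.
    lines = [b"CONNECT %s:%d HTTP/1.0" % (host, port)]
    for name, value in extra_headers:
        lines.append(name + b": " + value)
    lines.append(b"")
    return b"\r\n".join(lines) + b"\r\n"

preamble = build_connect_preamble(
    b"matrix.org", 443, [(b"Proxy-Authorization", b"Basic dTpw")]
)
```

The proxy replies with a status line; only a `200` lets the wrapped protocol proceed, as `handleStatus` above enforces.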
70 | 70 | logger = logging.getLogger(__name__) |
71 | 71 | |
72 | 72 | |
73 | _well_known_cache = TTLCache("well-known") | |
74 | _had_valid_well_known_cache = TTLCache("had-valid-well-known") | |
73 | _well_known_cache = TTLCache("well-known") # type: TTLCache[bytes, Optional[bytes]] | |
74 | _had_valid_well_known_cache = TTLCache( | |
75 | "had-valid-well-known" | |
76 | ) # type: TTLCache[bytes, bool] | |
75 | 77 | |
76 | 78 | |
77 | 79 | @attr.s(slots=True, frozen=True) |
87 | 89 | reactor: IReactorTime, |
88 | 90 | agent: IAgent, |
89 | 91 | user_agent: bytes, |
90 | well_known_cache: Optional[TTLCache] = None, | |
91 | had_well_known_cache: Optional[TTLCache] = None, | |
92 | well_known_cache: Optional[TTLCache[bytes, Optional[bytes]]] = None, | |
93 | had_well_known_cache: Optional[TTLCache[bytes, bool]] = None, | |
92 | 94 | ): |
93 | 95 | self._reactor = reactor |
94 | 96 | self._clock = Clock(reactor) |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | import base64 | |
14 | 15 | import logging |
15 | 16 | import re |
17 | from typing import Optional, Tuple | |
16 | 18 | from urllib.request import getproxies_environment, proxy_bypass_environment |
17 | 19 | |
20 | import attr | |
18 | 21 | from zope.interface import implementer |
19 | 22 | |
20 | 23 | from twisted.internet import defer |
22 | 25 | from twisted.python.failure import Failure |
23 | 26 | from twisted.web.client import URI, BrowserLikePolicyForHTTPS, _AgentBase |
24 | 27 | from twisted.web.error import SchemeNotSupported |
28 | from twisted.web.http_headers import Headers | |
25 | 29 | from twisted.web.iweb import IAgent |
26 | 30 | |
27 | 31 | from synapse.http.connectproxyclient import HTTPConnectProxyEndpoint |
29 | 33 | logger = logging.getLogger(__name__) |
30 | 34 | |
31 | 35 | _VALID_URI = re.compile(br"\A[\x21-\x7e]+\Z") |
36 | ||
37 | ||
38 | @attr.s | |
39 | class ProxyCredentials: | |
40 | username_password = attr.ib(type=bytes) | |
41 | ||
42 | def as_proxy_authorization_value(self) -> bytes: | |
43 | """ | |
44 | Return the value for a Proxy-Authorization header (e.g. 'Basic abdef=='). | 
45 | ||
46 | Returns: | |
47 | The base64-encoded authentication string, prefixed with the authorization | 
48 | type, for use as a Proxy-Authorization header value. | 
49 | """ | |
50 | # Encode as base64 (b64encode, which adds no trailing newline) and prepend | 
51 | # the authorization type | 
52 | return b"Basic " + base64.b64encode(self.username_password) | 
32 | 52 | |
33 | 53 | |
34 | 54 | @implementer(IAgent) |
94 | 114 | http_proxy = proxies["http"].encode() if "http" in proxies else None |
95 | 115 | https_proxy = proxies["https"].encode() if "https" in proxies else None |
96 | 116 | no_proxy = proxies["no"] if "no" in proxies else None |
117 | ||
118 | # Parse credentials from https proxy connection string if present | |
119 | self.https_proxy_creds, https_proxy = parse_username_password(https_proxy) | |
97 | 120 | |
98 | 121 | self.http_proxy_endpoint = _http_proxy_endpoint( |
99 | 122 | http_proxy, self.proxy_reactor, **self._endpoint_kwargs |
174 | 197 | and self.https_proxy_endpoint |
175 | 198 | and not should_skip_proxy |
176 | 199 | ): |
200 | connect_headers = Headers() | |
201 | ||
202 | # Determine whether we need to set Proxy-Authorization headers | |
203 | if self.https_proxy_creds: | |
204 | # Set a Proxy-Authorization header | |
205 | connect_headers.addRawHeader( | |
206 | b"Proxy-Authorization", | |
207 | self.https_proxy_creds.as_proxy_authorization_value(), | |
208 | ) | |
209 | ||
177 | 210 | endpoint = HTTPConnectProxyEndpoint( |
178 | 211 | self.proxy_reactor, |
179 | 212 | self.https_proxy_endpoint, |
180 | 213 | parsed_uri.host, |
181 | 214 | parsed_uri.port, |
215 | headers=connect_headers, | |
182 | 216 | ) |
183 | 217 | else: |
184 | 218 | # not using a proxy |
207 | 241 | ) |
208 | 242 | |
209 | 243 | |
210 | def _http_proxy_endpoint(proxy, reactor, **kwargs): | |
244 | def _http_proxy_endpoint(proxy: Optional[bytes], reactor, **kwargs): | |
211 | 245 | """Parses an http proxy setting and returns an endpoint for the proxy |
212 | 246 | |
213 | 247 | Args: |
214 | proxy (bytes|None): the proxy setting | |
248 | proxy: the proxy setting in the form: [<username>:<password>@]<host>[:<port>] | |
249 | Note that compared to other apps, this function currently lacks support | |
250 | for specifying a protocol schema (i.e. protocol://...). | |
251 | ||
215 | 252 | reactor: reactor to be used to connect to the proxy |
253 | ||
216 | 254 | kwargs: other args to be passed to HostnameEndpoint |
217 | 255 | |
218 | 256 | Returns: |
222 | 260 | if proxy is None: |
223 | 261 | return None |
224 | 262 | |
225 | # currently we only support hostname:port. Some apps also support | |
226 | # protocol://<host>[:port], which allows a way of requiring a TLS connection to the | |
227 | # proxy. | |
228 | ||
263 | # Parse the connection string | |
229 | 264 | host, port = parse_host_port(proxy, default_port=1080) |
230 | 265 | return HostnameEndpoint(reactor, host, port, **kwargs) |
231 | 266 | |
232 | 267 | |
233 | def parse_host_port(hostport, default_port=None): | |
234 | # could have sworn we had one of these somewhere else... | |
268 | def parse_username_password(proxy: bytes) -> Tuple[Optional[ProxyCredentials], bytes]: | |
269 | """ | |
270 | Parses the username and password from a proxy declaration e.g | |
271 | username:password@hostname:port. | |
272 | ||
273 | Args: | |
274 | proxy: The proxy connection string. | |
275 | ||
276 | Returns: | 
277 | A ProxyCredentials instance (or None if no credentials were found) and | 
278 | the proxy connection string with any credentials stripped, | 
279 | e.g. u:p@host:port -> host:port. | 
280 | """ | |
281 | if proxy and b"@" in proxy: | |
282 | # We use rsplit here as the password could contain an @ character | |
283 | credentials, proxy_without_credentials = proxy.rsplit(b"@", 1) | |
284 | return ProxyCredentials(credentials), proxy_without_credentials | |
285 | ||
286 | return None, proxy | |
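The credential parsing above can be exercised standalone. This sketch (illustrative names, not the actual Synapse helpers) shows why `rsplit` is used: a password may itself contain `@`, and the resulting credential bytes feed directly into the `Proxy-Authorization` value:

```python
import base64
from typing import Optional, Tuple

def split_proxy_credentials(proxy: bytes) -> Tuple[Optional[bytes], bytes]:
    if proxy and b"@" in proxy:
        # rsplit: only the final "@" separates credentials from host:port,
        # so a password containing "@" still parses correctly
        credentials, rest = proxy.rsplit(b"@", 1)
        return credentials, rest
    return None, proxy

creds, hostport = split_proxy_credentials(b"user:p@ss@proxy.example:8888")
# creds == b"user:p@ss", hostport == b"proxy.example:8888"
auth_value = b"Basic " + base64.b64encode(creds or b"")
```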
287 | ||
288 | ||
289 | def parse_host_port(hostport: bytes, default_port: Optional[int] = None) -> Tuple[bytes, int]: | 
290 | """ | |
291 | Parse the hostname and port from a proxy connection byte string. | |
292 | ||
293 | Args: | |
294 | hostport: The proxy connection string. Must be in the form 'host[:port]'. | |
295 | default_port: The default port to return if one is not found in `hostport`. | |
296 | ||
297 | Returns: | |
298 | A tuple containing the hostname and port. Uses `default_port` if one was not found. | |
299 | """ | |
235 | 300 | if b":" in hostport: |
236 | 301 | host, port = hostport.rsplit(b":", 1) |
237 | 302 | try: |
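The host/port split that follows can be sketched on its own. The helper name is hypothetical, and since the hunk above is cut off mid-`try`, the non-numeric-port fallback below is an assumption rather than the real function's behaviour:

```python
from typing import Optional, Tuple

def split_host_port(
    hostport: bytes, default_port: Optional[int] = None
) -> Tuple[bytes, Optional[int]]:
    # rsplit so only the last ":" is treated as the port separator
    if b":" in hostport:
        host, port = hostport.rsplit(b":", 1)
        if port.isdigit():
            return host, int(port)
    return hostport, default_port

pair = split_host_port(b"proxy.example:8888", default_port=1080)
fallback = split_host_port(b"proxy.example", default_port=1080)
```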
688 | 688 | current = current_context() |
689 | 689 | try: |
690 | 690 | res = f(*args, **kwargs) |
691 | except: # noqa: E722 | |
691 | except Exception: | |
692 | 692 | # the assumption here is that the caller doesn't want to be disturbed |
693 | 693 | # by synchronous exceptions, so let's turn them into Failures. |
694 | 694 | return defer.fail() |
168 | 168 | import logging |
169 | 169 | import re |
170 | 170 | from functools import wraps |
171 | from typing import TYPE_CHECKING, Dict, Optional, Type | |
171 | from typing import TYPE_CHECKING, Dict, Optional, Pattern, Type | |
172 | 172 | |
173 | 173 | import attr |
174 | 174 | |
261 | 261 | # Block everything by default |
262 | 262 | # A regex which matches the server_names to expose traces for. |
263 | 263 | # None means 'block everything'. |
264 | _homeserver_whitelist = None | |
264 | _homeserver_whitelist = None # type: Optional[Pattern[str]] | |
265 | 265 | |
266 | 266 | # Util methods |
267 | 267 |
20 | 20 | from synapse.types import JsonDict, RoomStreamToken |
21 | 21 | |
22 | 22 | if TYPE_CHECKING: |
23 | from synapse.app.homeserver import HomeServer | |
23 | from synapse.server import HomeServer | |
24 | 24 | |
25 | 25 | |
26 | 26 | @attr.s(slots=True) |
21 | 21 | from synapse.util.metrics import Measure |
22 | 22 | |
23 | 23 | if TYPE_CHECKING: |
24 | from synapse.app.homeserver import HomeServer | |
24 | from synapse.server import HomeServer | |
25 | 25 | |
26 | 26 | logger = logging.getLogger(__name__) |
27 | 27 |
32 | 32 | from .push_rule_evaluator import PushRuleEvaluatorForEvent |
33 | 33 | |
34 | 34 | if TYPE_CHECKING: |
35 | from synapse.app.homeserver import HomeServer | |
35 | from synapse.server import HomeServer | |
36 | 36 | |
37 | 37 | logger = logging.getLogger(__name__) |
38 | 38 |
23 | 23 | from synapse.push.mailer import Mailer |
24 | 24 | |
25 | 25 | if TYPE_CHECKING: |
26 | from synapse.app.homeserver import HomeServer | |
26 | from synapse.server import HomeServer | |
27 | 27 | |
28 | 28 | logger = logging.getLogger(__name__) |
29 | 29 |
30 | 30 | from . import push_rule_evaluator, push_tools |
31 | 31 | |
32 | 32 | if TYPE_CHECKING: |
33 | from synapse.app.homeserver import HomeServer | |
33 | from synapse.server import HomeServer | |
34 | 34 | |
35 | 35 | logger = logging.getLogger(__name__) |
36 | 36 | |
289 | 289 | if rejected is False: |
290 | 290 | return False |
291 | 291 | |
292 | if isinstance(rejected, list) or isinstance(rejected, tuple): | |
292 | if isinstance(rejected, (list, tuple)): | |
293 | 293 | for pk in rejected: |
294 | 294 | if pk != self.pushkey: |
295 | 295 | # for sanity, we only remove the pushkey if it |
39 | 39 | from synapse.visibility import filter_events_for_client |
40 | 40 | |
41 | 41 | if TYPE_CHECKING: |
42 | from synapse.app.homeserver import HomeServer | |
42 | from synapse.server import HomeServer | |
43 | 43 | |
44 | 44 | logger = logging.getLogger(__name__) |
45 | 45 |
21 | 21 | from synapse.push.mailer import Mailer |
22 | 22 | |
23 | 23 | if TYPE_CHECKING: |
24 | from synapse.app.homeserver import HomeServer | |
24 | from synapse.server import HomeServer | |
25 | 25 | |
26 | 26 | logger = logging.getLogger(__name__) |
27 | 27 |
14 | 14 | # See the License for the specific language governing permissions and |
15 | 15 | # limitations under the License. |
16 | 16 | |
17 | import itertools | |
17 | 18 | import logging |
18 | 19 | from typing import List, Set |
19 | 20 | |
81 | 82 | "Jinja2>=2.9", |
82 | 83 | "bleach>=1.4.3", |
83 | 84 | "typing-extensions>=3.7.4", |
85 | # We enforce that we have a `cryptography` version that bundles an `openssl` | |
86 | # with the latest security patches. | |
87 | "cryptography>=3.4.7;python_version>='3.6'", | |
84 | 88 | ] |
85 | 89 | |
86 | 90 | CONDITIONAL_REQUIREMENTS = { |
97 | 101 | "txacme>=0.9.2", |
98 | 102 | # txacme depends on eliot. Eliot 1.8.0 is incompatible with |
99 | 103 | # python 3.5.2, as per https://github.com/itamarst/eliot/issues/418 |
100 | 'eliot<1.8.0;python_version<"3.5.3"', | |
104 | "eliot<1.8.0;python_version<'3.5.3'", | |
101 | 105 | ], |
102 | 106 | "saml2": [ |
103 | 107 | # pysaml2 6.4.0 is incompatible with Python 3.5 (see https://github.com/IdentityPython/pysaml2/issues/749) |
127 | 131 | ALL_OPTIONAL_REQUIREMENTS = set(optional_deps) | ALL_OPTIONAL_REQUIREMENTS |
128 | 132 | |
129 | 133 | |
134 | # ensure there are no double-quote characters in any of the deps (otherwise the | |
135 | # 'pip install' incantation in DependencyException will break) | |
136 | for dep in itertools.chain( | |
137 | REQUIREMENTS, | |
138 | *CONDITIONAL_REQUIREMENTS.values(), | |
139 | ): | |
140 | if '"' in dep: | |
141 | raise Exception( | |
142 | "Dependency `%s` contains double-quote; use single-quotes instead" % (dep,) | |
143 | ) | |
144 | ||
145 | ||
130 | 146 | def list_requirements(): |
131 | 147 | return list(set(REQUIREMENTS) | ALL_OPTIONAL_REQUIREMENTS) |
132 | 148 | |
146 | 162 | @property |
147 | 163 | def dependencies(self): |
148 | 164 | for i in self.args[0]: |
149 | yield "'" + i + "'" | |
165 | yield '"' + i + '"' | |
150 | 166 | |
151 | 167 | |
152 | 168 | def check_requirements(for_feature=None): |
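The quote check added above and the switch to double-quoted deps in `DependencyException.dependencies` work together: the exception's `pip install` hint wraps each dependency in double quotes, so environment markers may safely use single quotes (as the `eliot` pin now does) but never double quotes. A rough sketch of the resulting hint (illustrative helper, not the actual Synapse code):

```python
def install_hint(deps):
    # Mirror the double-quoting now used by DependencyException.dependencies:
    # double quotes around each dep let single-quoted environment markers
    # pass through a shell unharmed.
    return "pip install " + " ".join('"%s"' % (dep,) for dep in deps)

hint = install_hint(["eliot<1.8.0;python_version<'3.5.3'"])
```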
39 | 39 | // containing the event |
40 | 40 | "event_format_version": .., // 1,2,3 etc: the event format version |
41 | 41 | "internal_metadata": { .. serialized internal_metadata .. }, |
42 | "outlier": true|false, | |
42 | 43 | "rejected_reason": .., // The event.rejected_reason field |
43 | 44 | "context": { .. serialized event context .. }, |
44 | 45 | }], |
83 | 84 | "room_version": event.room_version.identifier, |
84 | 85 | "event_format_version": event.format_version, |
85 | 86 | "internal_metadata": event.internal_metadata.get_dict(), |
87 | "outlier": event.internal_metadata.is_outlier(), | |
86 | 88 | "rejected_reason": event.rejected_reason, |
87 | 89 | "context": serialized_context, |
88 | 90 | } |
115 | 117 | event = make_event_from_dict( |
116 | 118 | event_dict, room_ver, internal_metadata, rejected_reason |
117 | 119 | ) |
120 | event.internal_metadata.outlier = event_payload["outlier"] | |
118 | 121 | |
119 | 122 | context = EventContext.deserialize( |
120 | 123 | self.storage, event_payload["context"] |
39 | 39 | // containing the event |
40 | 40 | "event_format_version": .., // 1,2,3 etc: the event format version |
41 | 41 | "internal_metadata": { .. serialized internal_metadata .. }, |
42 | "outlier": true|false, | |
42 | 43 | "rejected_reason": .., // The event.rejected_reason field |
43 | 44 | "context": { .. serialized event context .. }, |
44 | 45 | "requester": { .. serialized requester .. }, |
78 | 79 | ratelimit (bool) |
79 | 80 | extra_users (list(UserID)): Any extra users to notify about event |
80 | 81 | """ |
81 | ||
82 | 82 | serialized_context = await context.serialize(event, store) |
83 | 83 | |
84 | 84 | payload = { |
86 | 86 | "room_version": event.room_version.identifier, |
87 | 87 | "event_format_version": event.format_version, |
88 | 88 | "internal_metadata": event.internal_metadata.get_dict(), |
89 | "outlier": event.internal_metadata.is_outlier(), | |
89 | 90 | "rejected_reason": event.rejected_reason, |
90 | 91 | "context": serialized_context, |
91 | 92 | "requester": requester.serialize(), |
107 | 108 | event = make_event_from_dict( |
108 | 109 | event_dict, room_ver, internal_metadata, rejected_reason |
109 | 110 | ) |
111 | event.internal_metadata.outlier = content["outlier"] | |
110 | 112 | |
111 | 113 | requester = Requester.deserialize(self.store, content["requester"]) |
112 | 114 | context = EventContext.deserialize(self.storage, content["context"]) |
23 | 23 | from ._slaved_id_tracker import SlavedIdTracker |
24 | 24 | |
25 | 25 | if TYPE_CHECKING: |
26 | from synapse.app.homeserver import HomeServer | |
26 | from synapse.server import HomeServer | |
27 | 27 | |
28 | 28 | |
29 | 29 | class SlavedPusherStore(PusherWorkerStore, BaseSlavedStore): |
311 | 311 | |
312 | 312 | NAME = "FEDERATION_ACK" |
313 | 313 | |
314 | def __init__(self, instance_name, token): | |
314 | def __init__(self, instance_name: str, token: int): | |
315 | 315 | self.instance_name = instance_name |
316 | 316 | self.token = token |
317 | 317 | |
318 | 318 | @classmethod |
319 | def from_line(cls, line): | |
319 | def from_line(cls, line: str) -> "FederationAckCommand": | |
320 | 320 | instance_name, token = line.split(" ") |
321 | 321 | return cls(instance_name, int(token)) |
322 | 322 | |
323 | def to_line(self): | |
323 | def to_line(self) -> str: | |
324 | 324 | return "%s %s" % (self.instance_name, self.token) |
325 | 325 | |
326 | 326 |
103 | 103 | |
104 | 104 | # A list of all connected protocols. This allows us to send metrics about the |
105 | 105 | # connections. |
106 | connected_connections = [] | |
106 | connected_connections = [] # type: List[BaseReplicationStreamProtocol] | |
107 | 107 | |
108 | 108 | |
109 | 109 | logger = logging.getLogger(__name__) |
32 | 32 | from synapse.replication.http.streams import ReplicationGetStreamUpdates |
33 | 33 | |
34 | 34 | if TYPE_CHECKING: |
35 | import synapse.server | |
35 | from synapse.server import HomeServer | |
36 | 36 | |
37 | 37 | logger = logging.getLogger(__name__) |
38 | 38 | |
298 | 298 | NAME = "typing" |
299 | 299 | ROW_TYPE = TypingStreamRow |
300 | 300 | |
301 | def __init__(self, hs): | |
302 | typing_handler = hs.get_typing_handler() | |
303 | ||
301 | def __init__(self, hs: "HomeServer"): | |
304 | 302 | writer_instance = hs.config.worker.writers.typing |
305 | 303 | if writer_instance == hs.get_instance_name(): |
306 | 304 | # On the writer, query the typing handler |
307 | update_function = typing_handler.get_all_typing_updates | |
305 | typing_writer_handler = hs.get_typing_writer_handler() | |
306 | update_function = ( | |
307 | typing_writer_handler.get_all_typing_updates | |
308 | ) # type: Callable[[str, int, int, int], Awaitable[Tuple[List[Tuple[int, Any]], int, bool]]] | |
309 | current_token_function = typing_writer_handler.get_current_token | |
308 | 310 | else: |
309 | 311 | # Query the typing writer process |
310 | 312 | update_function = make_http_update_function(hs, self.NAME) |
311 | ||
312 | super().__init__( | |
313 | hs.get_instance_name(), | |
314 | current_token_without_instance(typing_handler.get_current_token), | |
313 | current_token_function = hs.get_typing_handler().get_current_token | |
314 | ||
315 | super().__init__( | |
316 | hs.get_instance_name(), | |
317 | current_token_without_instance(current_token_function), | |
315 | 318 | update_function, |
316 | 319 | ) |
317 | 320 | |
508 | 511 | NAME = "account_data" |
509 | 512 | ROW_TYPE = AccountDataStreamRow |
510 | 513 | |
511 | def __init__(self, hs: "synapse.server.HomeServer"): | |
514 | def __init__(self, hs: "HomeServer"): | |
512 | 515 | self.store = hs.get_datastore() |
513 | 516 | super().__init__( |
514 | 517 | hs.get_instance_name(), |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | 15 | from collections import namedtuple |
16 | from typing import TYPE_CHECKING, Any, Awaitable, Callable, List, Tuple | |
16 | 17 | |
17 | 18 | from synapse.replication.tcp.streams._base import ( |
18 | 19 | Stream, |
19 | 20 | current_token_without_instance, |
20 | 21 | make_http_update_function, |
21 | 22 | ) |
23 | ||
24 | if TYPE_CHECKING: | |
25 | from synapse.server import HomeServer | |
22 | 26 | |
23 | 27 | |
24 | 28 | class FederationStream(Stream): |
37 | 41 | NAME = "federation" |
38 | 42 | ROW_TYPE = FederationStreamRow |
39 | 43 | |
40 | def __init__(self, hs): | |
44 | def __init__(self, hs: "HomeServer"): | |
41 | 45 | if hs.config.worker_app is None: |
42 | 46 | # master process: get updates from the FederationRemoteSendQueue. |
43 | 47 | # (if the master is configured to send federation itself, federation_sender |
47 | 51 | current_token = current_token_without_instance( |
48 | 52 | federation_sender.get_current_token |
49 | 53 | ) |
50 | update_function = federation_sender.get_replication_rows | |
54 | update_function = ( | |
55 | federation_sender.get_replication_rows | |
56 | ) # type: Callable[[str, int, int, int], Awaitable[Tuple[List[Tuple[int, Any]], int, bool]]] | |
51 | 57 | |
52 | 58 | elif hs.should_send_federation(): |
53 | 59 | # federation sender: Query master process |
68 | 74 | return 0 |
69 | 75 | |
70 | 76 | @staticmethod |
71 | async def _stub_update_function(instance_name, from_token, upto_token, limit): | |
77 | async def _stub_update_function( | |
78 | instance_name: str, from_token: int, upto_token: int, limit: int | |
79 | ) -> Tuple[list, int, bool]: | |
72 | 80 | return [], upto_token, False |
27 | 27 | from synapse.types import JsonDict |
28 | 28 | |
29 | 29 | if TYPE_CHECKING: |
30 | from synapse.app.homeserver import HomeServer | |
30 | from synapse.server import HomeServer | |
31 | 31 | |
32 | 32 | logger = logging.getLogger(__name__) |
33 | 33 |
389 | 389 | async def on_POST( |
390 | 390 | self, request: SynapseRequest, room_identifier: str |
391 | 391 | ) -> Tuple[int, JsonDict]: |
392 | # This will always be set by the time Twisted calls us. | |
393 | assert request.args is not None | |
394 | ||
392 | 395 | requester = await self.auth.get_user_by_req(request) |
393 | 396 | await assert_user_is_admin(self.auth, requester.user) |
394 | 397 |
270 | 270 | elif not deactivate and user["deactivated"]: |
271 | 271 | if ( |
272 | 272 | "password" not in body |
273 | and self.hs.config.password_localdb_enabled | |
273 | and self.auth_handler.can_change_password() | |
274 | 274 | ): |
275 | 275 | raise SynapseError( |
276 | 276 | 400, "Must provide a password to re-activate an account." |
832 | 832 | async def on_GET( |
833 | 833 | self, request: SynapseRequest, user_id: str |
834 | 834 | ) -> Tuple[int, JsonDict]: |
835 | # This will always be set by the time Twisted calls us. | |
836 | assert request.args is not None | |
837 | ||
835 | 838 | await assert_requester_is_admin(self.auth, request) |
836 | 839 | |
837 | 840 | if not self.is_mine(UserID.from_string(user_id)): |
17 | 17 | |
18 | 18 | import logging |
19 | 19 | import re |
20 | from typing import TYPE_CHECKING, List, Optional | |
20 | from typing import TYPE_CHECKING, List, Optional, Tuple | |
21 | 21 | from urllib import parse as urlparse |
22 | 22 | |
23 | 23 | from synapse.api.constants import EventTypes, Membership |
34 | 34 | from synapse.http.servlet import ( |
35 | 35 | RestServlet, |
36 | 36 | assert_params_in_dict, |
37 | parse_boolean, | |
37 | 38 | parse_integer, |
38 | 39 | parse_json_object_from_request, |
39 | 40 | parse_string, |
40 | 41 | ) |
42 | from synapse.http.site import SynapseRequest | |
41 | 43 | from synapse.logging.opentracing import set_tag |
42 | 44 | from synapse.rest.client.transactions import HttpTransactionCache |
43 | 45 | from synapse.rest.client.v2_alpha._base import client_patterns |
44 | 46 | from synapse.storage.state import StateFilter |
45 | 47 | from synapse.streams.config import PaginationConfig |
46 | from synapse.types import RoomAlias, RoomID, StreamToken, ThirdPartyInstanceID, UserID | |
48 | from synapse.types import ( | |
49 | JsonDict, | |
50 | RoomAlias, | |
51 | RoomID, | |
52 | StreamToken, | |
53 | ThirdPartyInstanceID, | |
54 | UserID, | |
55 | ) | |
47 | 56 | from synapse.util import json_decoder |
48 | 57 | from synapse.util.stringutils import parse_and_validate_server_name, random_string |
49 | 58 | |
50 | 59 | if TYPE_CHECKING: |
51 | import synapse.server | |
60 | from synapse.server import HomeServer | |
52 | 61 | |
53 | 62 | logger = logging.getLogger(__name__) |
54 | 63 | |
845 | 854 | "/rooms/(?P<room_id>[^/]*)/typing/(?P<user_id>[^/]*)$", v1=True |
846 | 855 | ) |
847 | 856 | |
848 | def __init__(self, hs): | |
849 | super().__init__() | |
857 | def __init__(self, hs: "HomeServer"): | |
858 | super().__init__() | |
859 | self.hs = hs | |
850 | 860 | self.presence_handler = hs.get_presence_handler() |
851 | self.typing_handler = hs.get_typing_handler() | |
852 | 861 | self.auth = hs.get_auth() |
853 | 862 | |
854 | 863 | # If we're not on the typing writer instance we should scream if we get |
873 | 882 | # Limit timeout to stop people from setting silly typing timeouts. |
874 | 883 | timeout = min(content.get("timeout", 30000), 120000) |
875 | 884 | |
885 | # Defer getting the typing handler since it will raise on workers. | |
886 | typing_handler = self.hs.get_typing_writer_handler() | |
887 | ||
876 | 888 | try: |
877 | 889 | if content["typing"]: |
878 | await self.typing_handler.started_typing( | |
890 | await typing_handler.started_typing( | |
879 | 891 | target_user=target_user, |
880 | 892 | requester=requester, |
881 | 893 | room_id=room_id, |
882 | 894 | timeout=timeout, |
883 | 895 | ) |
884 | 896 | else: |
885 | await self.typing_handler.stopped_typing( | |
897 | await typing_handler.stopped_typing( | |
886 | 898 | target_user=target_user, requester=requester, room_id=room_id |
887 | 899 | ) |
888 | 900 | except ShadowBanError: |
900 | 912 | ), |
901 | 913 | ] |
902 | 914 | |
903 | def __init__(self, hs: "synapse.server.HomeServer"): | |
915 | def __init__(self, hs: "HomeServer"): | |
904 | 916 | super().__init__() |
905 | 917 | self.auth = hs.get_auth() |
906 | 918 | self.directory_handler = hs.get_directory_handler() |
983 | 995 | ) |
984 | 996 | |
985 | 997 | |
986 | def register_servlets(hs, http_server, is_worker=False): | |
998 | class RoomSpaceSummaryRestServlet(RestServlet): | |
999 | PATTERNS = ( | |
1000 | re.compile( | |
1001 | "^/_matrix/client/unstable/org.matrix.msc2946" | |
1002 | "/rooms/(?P<room_id>[^/]*)/spaces$" | |
1003 | ), | |
1004 | ) | |
1005 | ||
1006 | def __init__(self, hs: "HomeServer"): | |
1007 | super().__init__() | |
1008 | self._auth = hs.get_auth() | |
1009 | self._space_summary_handler = hs.get_space_summary_handler() | |
1010 | ||
1011 | async def on_GET( | |
1012 | self, request: SynapseRequest, room_id: str | |
1013 | ) -> Tuple[int, JsonDict]: | |
1014 | requester = await self._auth.get_user_by_req(request, allow_guest=True) | |
1015 | ||
1016 | return 200, await self._space_summary_handler.get_space_summary( | |
1017 | requester.user.to_string(), | |
1018 | room_id, | |
1019 | suggested_only=parse_boolean(request, "suggested_only", default=False), | |
1020 | max_rooms_per_space=parse_integer(request, "max_rooms_per_space"), | |
1021 | ) | |
1022 | ||
1023 | async def on_POST( | |
1024 | self, request: SynapseRequest, room_id: str | |
1025 | ) -> Tuple[int, JsonDict]: | |
1026 | requester = await self._auth.get_user_by_req(request, allow_guest=True) | |
1027 | content = parse_json_object_from_request(request) | |
1028 | ||
1029 | suggested_only = content.get("suggested_only", False) | |
1030 | if not isinstance(suggested_only, bool): | |
1031 | raise SynapseError( | |
1032 | 400, "'suggested_only' must be a boolean", Codes.BAD_JSON | |
1033 | ) | |
1034 | ||
1035 | max_rooms_per_space = content.get("max_rooms_per_space") | |
1036 | if max_rooms_per_space is not None and not isinstance(max_rooms_per_space, int): | |
1037 | raise SynapseError( | |
1038 | 400, "'max_rooms_per_space' must be an integer", Codes.BAD_JSON | |
1039 | ) | |
1040 | ||
1041 | return 200, await self._space_summary_handler.get_space_summary( | |
1042 | requester.user.to_string(), | |
1043 | room_id, | |
1044 | suggested_only=suggested_only, | |
1045 | max_rooms_per_space=max_rooms_per_space, | |
1046 | ) | |
1047 | ||
1048 | ||
1049 | def register_servlets(hs: "HomeServer", http_server, is_worker=False): | |
987 | 1050 | RoomStateEventRestServlet(hs).register(http_server) |
988 | 1051 | RoomMemberListRestServlet(hs).register(http_server) |
989 | 1052 | JoinedRoomMemberListRestServlet(hs).register(http_server) |
997 | 1060 | RoomTypingRestServlet(hs).register(http_server) |
998 | 1061 | RoomEventContextServlet(hs).register(http_server) |
999 | 1062 | |
1063 | if hs.config.experimental.spaces_enabled: | |
1064 | RoomSpaceSummaryRestServlet(hs).register(http_server) | |
1065 | ||
1000 | 1066 | # Some servlets only get registered for the main process. |
1001 | 1067 | if not is_worker: |
1002 | 1068 | RoomCreateRestServlet(hs).register(http_server) |
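The `on_POST` body of the new `RoomSpaceSummaryRestServlet` above type-checks its optional JSON fields by hand, because JSON happily carries the wrong type. The same checks, lifted into a standalone helper purely for illustration (the helper name is hypothetical):

```python
def parse_optional_fields(content: dict) -> tuple:
    # Mirrors the body validation in on_POST: booleans and integers are
    # checked explicitly rather than coerced.
    suggested_only = content.get("suggested_only", False)
    if not isinstance(suggested_only, bool):
        raise ValueError("'suggested_only' must be a boolean")

    max_rooms_per_space = content.get("max_rooms_per_space")
    if max_rooms_per_space is not None and not isinstance(
        max_rooms_per_space, int
    ):
        raise ValueError("'max_rooms_per_space' must be an integer")

    return suggested_only, max_rooms_per_space
```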
44 | 44 | from ._base import client_patterns, interactive_auth_handler |
45 | 45 | |
46 | 46 | if TYPE_CHECKING: |
47 | from synapse.app.homeserver import HomeServer | |
47 | from synapse.server import HomeServer | |
48 | 48 | |
49 | 49 | |
50 | 50 | logger = logging.getLogger(__name__) |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | import logging |
15 | from typing import TYPE_CHECKING, Tuple | |
15 | 16 | |
16 | 17 | from synapse.api.room_versions import KNOWN_ROOM_VERSIONS |
17 | 18 | from synapse.http.servlet import RestServlet |
19 | from synapse.http.site import SynapseRequest | |
20 | from synapse.types import JsonDict | |
18 | 21 | |
19 | 22 | from ._base import client_patterns |
23 | ||
24 | if TYPE_CHECKING: | |
25 | from synapse.server import HomeServer | |
20 | 26 | |
21 | 27 | logger = logging.getLogger(__name__) |
22 | 28 | |
26 | 32 | |
27 | 33 | PATTERNS = client_patterns("/capabilities$") |
28 | 34 | |
29 | def __init__(self, hs): | |
30 | """ | |
31 | Args: | |
32 | hs (synapse.server.HomeServer): server | |
33 | """ | |
35 | def __init__(self, hs: "HomeServer"): | |
34 | 36 | super().__init__() |
35 | 37 | self.hs = hs |
36 | 38 | self.config = hs.config |
37 | 39 | self.auth = hs.get_auth() |
38 | self.store = hs.get_datastore() | |
40 | self.auth_handler = hs.get_auth_handler() | |
39 | 41 | |
40 | async def on_GET(self, request): | |
41 | requester = await self.auth.get_user_by_req(request, allow_guest=True) | |
42 | user = await self.store.get_user_by_id(requester.user.to_string()) | |
43 | change_password = bool(user["password_hash"]) | |
42 | async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]: | |
43 | await self.auth.get_user_by_req(request, allow_guest=True) | |
44 | change_password = self.auth_handler.can_change_password() | |
44 | 45 | |
45 | 46 | response = { |
46 | 47 | "capabilities": { |
57 | 58 | return 200, response |
58 | 59 | |
59 | 60 | |
60 | def register_servlets(hs, http_server): | |
61 | def register_servlets(hs: "HomeServer", http_server): | |
61 | 62 | CapabilitiesRestServlet(hs).register(http_server) |
37 | 37 | from ._base import client_patterns |
38 | 38 | |
39 | 39 | if TYPE_CHECKING: |
40 | from synapse.app.homeserver import HomeServer | |
40 | from synapse.server import HomeServer | |
41 | 41 | |
42 | 42 | logger = logging.getLogger(__name__) |
43 | 43 |
14 | 14 | |
15 | 15 | import itertools |
16 | 16 | import logging |
17 | from typing import TYPE_CHECKING, Tuple | |
17 | 18 | |
18 | 19 | from synapse.api.constants import PresenceState |
19 | 20 | from synapse.api.errors import Codes, StoreError, SynapseError |
25 | 26 | from synapse.handlers.presence import format_user_presence_state |
26 | 27 | from synapse.handlers.sync import SyncConfig |
27 | 28 | from synapse.http.servlet import RestServlet, parse_boolean, parse_integer, parse_string |
28 | from synapse.types import StreamToken | |
29 | from synapse.http.site import SynapseRequest | |
30 | from synapse.types import JsonDict, StreamToken | |
29 | 31 | from synapse.util import json_decoder |
30 | 32 | |
31 | 33 | from ._base import client_patterns, set_timeline_upper_limit |
34 | ||
35 | if TYPE_CHECKING: | |
36 | from synapse.server import HomeServer | |
32 | 37 | |
33 | 38 | logger = logging.getLogger(__name__) |
34 | 39 | |
72 | 77 | PATTERNS = client_patterns("/sync$") |
73 | 78 | ALLOWED_PRESENCE = {"online", "offline", "unavailable"} |
74 | 79 | |
75 | def __init__(self, hs): | |
80 | def __init__(self, hs: "HomeServer"): | |
76 | 81 | super().__init__() |
77 | 82 | self.hs = hs |
78 | 83 | self.auth = hs.get_auth() |
84 | 89 | self._server_notices_sender = hs.get_server_notices_sender() |
85 | 90 | self._event_serializer = hs.get_event_client_serializer() |
86 | 91 | |
87 | async def on_GET(self, request): | |
92 | async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]: | |
93 | # This will always be set by the time Twisted calls us. | |
94 | assert request.args is not None | |
95 | ||
88 | 96 | if b"from" in request.args: |
89 | 97 | # /events used to use 'from', but /sync uses 'since'. |
90 | 98 | # Lets be helpful and whine if we see a 'from'. |
80 | 80 | "io.element.e2ee_forced.public": self.e2ee_forced_public, |
81 | 81 | "io.element.e2ee_forced.private": self.e2ee_forced_private, |
82 | 82 | "io.element.e2ee_forced.trusted_private": self.e2ee_forced_trusted_private, |
83 | # Supports the busy presence state described in MSC3026. | |
84 | "org.matrix.msc3026.busy_presence": self.config.experimental.msc3026_enabled, | |
83 | 85 | }, |
84 | 86 | }, |
85 | 87 | ) |
22 | 22 | from synapse.http.site import SynapseRequest |
23 | 23 | |
24 | 24 | if TYPE_CHECKING: |
25 | from synapse.app.homeserver import HomeServer | |
25 | from synapse.server import HomeServer | |
26 | 26 | |
27 | 27 | |
28 | 28 | class MediaConfigResource(DirectServeJsonResource): |
23 | 23 | from ._base import parse_media_id, respond_404 |
24 | 24 | |
25 | 25 | if TYPE_CHECKING: |
26 | from synapse.app.homeserver import HomeServer | |
27 | 26 | from synapse.rest.media.v1.media_repository import MediaRepository |
27 | from synapse.server import HomeServer | |
28 | 28 | |
29 | 29 | logger = logging.getLogger(__name__) |
30 | 30 |
57 | 57 | from .upload_resource import UploadResource |
58 | 58 | |
59 | 59 | if TYPE_CHECKING: |
60 | from synapse.app.homeserver import HomeServer | |
60 | from synapse.server import HomeServer | |
61 | 61 | |
62 | 62 | logger = logging.getLogger(__name__) |
63 | 63 |
53 | 53 | if TYPE_CHECKING: |
54 | 54 | from lxml import etree |
55 | 55 | |
56 | from synapse.app.homeserver import HomeServer | |
57 | 56 | from synapse.rest.media.v1.media_repository import MediaRepository |
57 | from synapse.server import HomeServer | |
58 | 58 | |
59 | 59 | logger = logging.getLogger(__name__) |
60 | 60 | |
186 | 186 | respond_with_json(request, 200, {}, send_cors=True) |
187 | 187 | |
188 | 188 | async def _async_render_GET(self, request: SynapseRequest) -> None: |
189 | # This will always be set by the time Twisted calls us. | |
190 | assert request.args is not None | |
189 | 191 | |
190 | 192 | # XXX: if get_user_by_req fails, what should we do in an async render? |
191 | 193 | requester = await self.auth.get_user_by_req(request) |
28 | 28 | logger = logging.getLogger(__name__) |
29 | 29 | |
30 | 30 | if TYPE_CHECKING: |
31 | from synapse.app.homeserver import HomeServer | |
31 | from synapse.server import HomeServer | |
32 | 32 | |
33 | 33 | |
34 | 34 | class StorageProvider(metaclass=abc.ABCMeta): |
33 | 33 | ) |
34 | 34 | |
35 | 35 | if TYPE_CHECKING: |
36 | from synapse.app.homeserver import HomeServer | |
37 | 36 | from synapse.rest.media.v1.media_repository import MediaRepository |
37 | from synapse.server import HomeServer | |
38 | 38 | |
39 | 39 | logger = logging.getLogger(__name__) |
40 | 40 |
25 | 25 | from synapse.rest.media.v1.media_storage import SpamMediaException |
26 | 26 | |
27 | 27 | if TYPE_CHECKING: |
28 | from synapse.app.homeserver import HomeServer | |
29 | 28 | from synapse.rest.media.v1.media_repository import MediaRepository |
29 | from synapse.server import HomeServer | |
30 | 30 | |
31 | 31 | logger = logging.getLogger(__name__) |
32 | 32 |
103 | 103 | respond_with_html(request, 200, html) |
104 | 104 | |
105 | 105 | async def _async_render_POST(self, request: SynapseRequest): |
106 | # This will always be set by the time Twisted calls us. | |
107 | assert request.args is not None | |
108 | ||
106 | 109 | try: |
107 | 110 | session_id = get_username_mapping_session_cookie_from_request(request) |
108 | 111 | except SynapseError as e: |
25 | 25 | import secrets |
26 | 26 | |
27 | 27 | class Secrets: |
28 | def token_bytes(self, nbytes=32): | |
28 | def token_bytes(self, nbytes: int = 32) -> bytes: | |
29 | 29 | return secrets.token_bytes(nbytes) |
30 | 30 | |
31 | def token_hex(self, nbytes=32): | |
31 | def token_hex(self, nbytes: int = 32) -> str: | |
32 | 32 | return secrets.token_hex(nbytes) |
33 | 33 | |
34 | 34 | |
37 | 37 | import os |
38 | 38 | |
39 | 39 | class Secrets: |
40 | def token_bytes(self, nbytes=32): | |
40 | def token_bytes(self, nbytes: int = 32) -> bytes: | |
41 | 41 | return os.urandom(nbytes) |
42 | 42 | |
43 | def token_hex(self, nbytes=32): | |
43 | def token_hex(self, nbytes: int = 32) -> str: | |
44 | 44 | return binascii.hexlify(self.token_bytes(nbytes)).decode("ascii") |
59 | 59 | FederationServer, |
60 | 60 | ) |
61 | 61 | from synapse.federation.send_queue import FederationRemoteSendQueue |
62 | from synapse.federation.sender import FederationSender | |
62 | from synapse.federation.sender import AbstractFederationSender, FederationSender | |
63 | 63 | from synapse.federation.transport.client import TransportLayerClient |
64 | 64 | from synapse.groups.attestations import GroupAttestationSigning, GroupAttestionRenewer |
65 | 65 | from synapse.groups.groups_server import GroupsServerHandler, GroupsServerWorkerHandler |
95 | 95 | RoomShutdownHandler, |
96 | 96 | ) |
97 | 97 | from synapse.handlers.room_list import RoomListHandler |
98 | from synapse.handlers.room_member import RoomMemberMasterHandler | |
98 | from synapse.handlers.room_member import RoomMemberHandler, RoomMemberMasterHandler | |
99 | 99 | from synapse.handlers.room_member_worker import RoomMemberWorkerHandler |
100 | 100 | from synapse.handlers.search import SearchHandler |
101 | 101 | from synapse.handlers.set_password import SetPasswordHandler |
102 | from synapse.handlers.space_summary import SpaceSummaryHandler | |
102 | 103 | from synapse.handlers.sso import SsoHandler |
103 | 104 | from synapse.handlers.stats import StatsHandler |
104 | 105 | from synapse.handlers.sync import SyncHandler |
416 | 417 | return PresenceHandler(self) |
417 | 418 | |
418 | 419 | @cache_in_self |
419 | def get_typing_handler(self): | |
420 | def get_typing_writer_handler(self) -> TypingWriterHandler: | |
420 | 421 | if self.config.worker.writers.typing == self.get_instance_name(): |
421 | 422 | return TypingWriterHandler(self) |
423 | else: | |
424 | raise Exception("Workers cannot write typing") | |
425 | ||
426 | @cache_in_self | |
427 | def get_typing_handler(self) -> FollowerTypingHandler: | |
428 | if self.config.worker.writers.typing == self.get_instance_name(): | |
429 | # Use get_typing_writer_handler to ensure that we use the same | |
430 | # cached version. | |
431 | return self.get_typing_writer_handler() | |
422 | 432 | else: |
423 | 433 | return FollowerTypingHandler(self) |
424 | 434 | |
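The hunk above splits `get_typing_handler` into a writer getter and a follower getter, with the follower delegating to `get_typing_writer_handler` so both hand out the same cached instance. The delegation pattern can be sketched with a simplified `cache_in_self` (the `Server` class is an illustrative stand-in, not the real HomeServer):

```python
import functools


def cache_in_self(builder):
    # Memoise the built object on the instance, keyed by the getter's
    # name, so repeated calls return the same handler.
    name = "_cached_" + builder.__name__

    @functools.wraps(builder)
    def getter(self):
        if not hasattr(self, name):
            setattr(self, name, builder(self))
        return getattr(self, name)

    return getter


class Server:
    # "is_writer" stands in for the config check against the configured
    # typing writer instance name.
    def __init__(self, is_writer: bool):
        self.is_writer = is_writer

    @cache_in_self
    def get_typing_writer_handler(self):
        if not self.is_writer:
            raise Exception("Workers cannot write typing")
        return object()  # stands in for TypingWriterHandler

    @cache_in_self
    def get_typing_handler(self):
        if self.is_writer:
            # Delegate so both getters hand out the *same* cached object.
            return self.get_typing_writer_handler()
        return object()  # stands in for FollowerTypingHandler
```

Without the delegation, each decorated getter would build and cache its own `TypingWriterHandler`, which is exactly what the comment in the diff guards against.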
560 | 570 | return TransportLayerClient(self) |
561 | 571 | |
562 | 572 | @cache_in_self |
563 | def get_federation_sender(self): | |
573 | def get_federation_sender(self) -> AbstractFederationSender: | |
564 | 574 | if self.should_send_federation(): |
565 | 575 | return FederationSender(self) |
566 | 576 | elif not self.config.worker_app: |
629 | 639 | return ThirdPartyEventRules(self) |
630 | 640 | |
631 | 641 | @cache_in_self |
632 | def get_room_member_handler(self): | |
642 | def get_room_member_handler(self) -> RoomMemberHandler: | |
633 | 643 | if self.config.worker_app: |
634 | 644 | return RoomMemberWorkerHandler(self) |
635 | 645 | return RoomMemberMasterHandler(self) |
639 | 649 | return FederationHandlerRegistry(self) |
640 | 650 | |
641 | 651 | @cache_in_self |
642 | def get_server_notices_manager(self): | |
652 | def get_server_notices_manager(self) -> ServerNoticesManager: | |
643 | 653 | if self.config.worker_app: |
644 | 654 | raise Exception("Workers cannot send server notices") |
645 | 655 | return ServerNoticesManager(self) |
646 | 656 | |
647 | 657 | @cache_in_self |
648 | def get_server_notices_sender(self): | |
658 | def get_server_notices_sender(self) -> WorkerServerNoticesSender: | |
649 | 659 | if self.config.worker_app: |
650 | 660 | return WorkerServerNoticesSender(self) |
651 | 661 | return ServerNoticesSender(self) |
721 | 731 | @cache_in_self |
722 | 732 | def get_account_data_handler(self) -> AccountDataHandler: |
723 | 733 | return AccountDataHandler(self) |
734 | ||
735 | @cache_in_self | |
736 | def get_space_summary_handler(self) -> SpaceSummaryHandler: | |
737 | return SpaceSummaryHandler(self) | |
724 | 738 | |
725 | 739 | @cache_in_self |
726 | 740 | def get_external_cache(self) -> ExternalCache: |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | import logging |
15 | from typing import Any | |
15 | from typing import TYPE_CHECKING, Any, Set | |
16 | 16 | |
17 | 17 | from synapse.api.errors import SynapseError |
18 | 18 | from synapse.api.urls import ConsentURIBuilder |
19 | 19 | from synapse.config import ConfigError |
20 | 20 | from synapse.types import get_localpart_from_id |
21 | ||
22 | if TYPE_CHECKING: | |
23 | from synapse.server import HomeServer | |
21 | 24 | |
22 | 25 | logger = logging.getLogger(__name__) |
23 | 26 | |
27 | 30 | privacy policy consent, and sends one if we do. |
28 | 31 | """ |
29 | 32 | |
30 | def __init__(self, hs): | |
31 | """ | |
32 | ||
33 | Args: | |
34 | hs (synapse.server.HomeServer): | |
35 | """ | |
33 | def __init__(self, hs: "HomeServer"): | |
36 | 34 | self._server_notices_manager = hs.get_server_notices_manager() |
37 | 35 | self._store = hs.get_datastore() |
38 | 36 | |
39 | self._users_in_progress = set() | |
37 | self._users_in_progress = set() # type: Set[str] | |
40 | 38 | |
41 | 39 | self._current_consent_version = hs.config.user_consent_version |
42 | 40 | self._server_notice_content = hs.config.user_consent_server_notice_content |
71 | 69 | self._users_in_progress.add(user_id) |
72 | 70 | try: |
73 | 71 | u = await self._store.get_user_by_id(user_id) |
72 | ||
73 | # The user doesn't exist. | |
74 | if u is None: | |
75 | return | |
74 | 76 | |
75 | 77 | if u["is_guest"] and not self._send_to_guests: |
76 | 78 | # don't send to guests |
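The consent hunk above adds a guard for `get_user_by_id` returning `None` before the existing guest check. That early-return ordering, as a small synchronous sketch with hypothetical names:

```python
def maybe_send_consent_notice(get_user, user_id: str) -> str:
    # Mirrors the None-check added above: get_user_by_id returns None for
    # an unknown user, so the sender must bail out before indexing into
    # the row.
    u = get_user(user_id)
    if u is None:
        return "skipped:missing"
    if u["is_guest"]:
        return "skipped:guest"
    return "sent"
```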
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | import logging |
15 | from typing import List, Tuple | |
15 | from typing import TYPE_CHECKING, List, Tuple | |
16 | 16 | |
17 | 17 | from synapse.api.constants import ( |
18 | 18 | EventTypes, |
23 | 23 | from synapse.api.errors import AuthError, ResourceLimitError, SynapseError |
24 | 24 | from synapse.server_notices.server_notices_manager import SERVER_NOTICE_ROOM_TAG |
25 | 25 | |
26 | if TYPE_CHECKING: | |
27 | from synapse.server import HomeServer | |
28 | ||
26 | 29 | logger = logging.getLogger(__name__) |
27 | 30 | |
28 | 31 | |
31 | 34 | ensures that the client is kept up to date. |
32 | 35 | """ |
33 | 36 | |
34 | def __init__(self, hs): | |
35 | """ | |
36 | Args: | |
37 | hs (synapse.server.HomeServer): | |
38 | """ | |
37 | def __init__(self, hs: "HomeServer"): | |
39 | 38 | self._server_notices_manager = hs.get_server_notices_manager() |
40 | 39 | self._store = hs.get_datastore() |
41 | 40 | self._auth = hs.get_auth() |
57 | 57 | user_id: str, |
58 | 58 | event_content: dict, |
59 | 59 | type: str = EventTypes.Message, |
60 | state_key: Optional[bool] = None, | |
60 | state_key: Optional[str] = None, | |
61 | 61 | ) -> EventBase: |
62 | 62 | """Send a notice to the given user |
63 | 63 |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | from typing import Iterable, Union | |
14 | from typing import TYPE_CHECKING, Iterable, Union | |
15 | 15 | |
16 | 16 | from synapse.server_notices.consent_server_notices import ConsentServerNotices |
17 | 17 | from synapse.server_notices.resource_limits_server_notices import ( |
18 | 18 | ResourceLimitsServerNotices, |
19 | 19 | ) |
20 | from synapse.server_notices.worker_server_notices_sender import ( | |
21 | WorkerServerNoticesSender, | |
22 | ) | |
23 | ||
24 | if TYPE_CHECKING: | |
25 | from synapse.server import HomeServer | |
20 | 26 | |
21 | 27 | |
22 | class ServerNoticesSender: | |
28 | class ServerNoticesSender(WorkerServerNoticesSender): | |
23 | 29 | """A centralised place which sends server notices automatically when |
24 | 30 | Certain Events take place |
25 | 31 | """ |
26 | 32 | |
27 | def __init__(self, hs): | |
28 | """ | |
29 | ||
30 | Args: | |
31 | hs (synapse.server.HomeServer): | |
32 | """ | |
33 | def __init__(self, hs: "HomeServer"): | |
34 | super().__init__(hs) | |
33 | 35 | self._server_notices = ( |
34 | 36 | ConsentServerNotices(hs), |
35 | 37 | ResourceLimitsServerNotices(hs), |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | from typing import TYPE_CHECKING | |
15 | ||
16 | if TYPE_CHECKING: | |
17 | from synapse.server import HomeServer | |
14 | 18 | |
15 | 19 | |
16 | 20 | class WorkerServerNoticesSender: |
17 | 21 | """Stub impl of ServerNoticesSender which does nothing""" |
18 | 22 | |
19 | def __init__(self, hs): | |
20 | """ | |
21 | Args: | |
22 | hs (synapse.server.HomeServer): | |
23 | """ | |
23 | def __init__(self, hs: "HomeServer"): | |
24 | pass | |
24 | 25 | |
25 | 26 | async def on_user_syncing(self, user_id: str) -> None: |
26 | 27 | """Called when the user performs a sync operation. |
35 | 35 | from synapse.storage.state import StateGroupStorage |
36 | 36 | |
37 | 37 | if TYPE_CHECKING: |
38 | from synapse.app.homeserver import HomeServer | |
38 | from synapse.server import HomeServer | |
39 | 39 | |
40 | 40 | |
41 | 41 | __all__ = ["Databases", "DataStore"] |
26 | 26 | from synapse.util import json_decoder |
27 | 27 | |
28 | 28 | if TYPE_CHECKING: |
29 | from synapse.app.homeserver import HomeServer | |
29 | from synapse.server import HomeServer | |
30 | 30 | |
31 | 31 | logger = logging.getLogger(__name__) |
32 | 32 |
22 | 22 | from . import engines |
23 | 23 | |
24 | 24 | if TYPE_CHECKING: |
25 | from synapse.app.homeserver import HomeServer | |
25 | from synapse.server import HomeServer | |
26 | 26 | from synapse.storage.database import DatabasePool, LoggingTransaction |
27 | 27 | |
28 | 28 | logger = logging.getLogger(__name__) |
669 | 669 | |
670 | 670 | for after_callback, after_args, after_kwargs in after_callbacks: |
671 | 671 | after_callback(*after_args, **after_kwargs) |
672 | except: # noqa: E722, as we reraise the exception this is fine. | |
672 | except Exception: | |
673 | 673 | for after_callback, after_args, after_kwargs in exception_callbacks: |
674 | 674 | after_callback(*after_args, **after_kwargs) |
675 | 675 | raise |
1905 | 1905 | retcols: Iterable[str], |
1906 | 1906 | filters: Optional[Dict[str, Any]] = None, |
1907 | 1907 | keyvalues: Optional[Dict[str, Any]] = None, |
1908 | exclude_keyvalues: Optional[Dict[str, Any]] = None, | |
1908 | 1909 | order_direction: str = "ASC", |
1909 | 1910 | ) -> List[Dict[str, Any]]: |
1910 | 1911 | """ |
1928 | 1929 | apply a WHERE ? LIKE ? clause. |
1929 | 1930 | keyvalues: |
1930 | 1931 | column names and values to select the rows with, or None to not |
1931 | apply a WHERE clause. | |
1932 | apply a WHERE key = value clause. | |
1933 | exclude_keyvalues: | |
1934 | column names and values to exclude rows with, or None to not | |
1935 | apply a WHERE key != value clause. | |
1932 | 1936 | order_direction: Whether the results should be ordered "ASC" or "DESC". |
1933 | 1937 | |
1934 | 1938 | Returns: |
1937 | 1941 | if order_direction not in ["ASC", "DESC"]: |
1938 | 1942 | raise ValueError("order_direction must be one of 'ASC' or 'DESC'.") |
1939 | 1943 | |
1940 | where_clause = "WHERE " if filters or keyvalues else "" | |
1944 | where_clause = "WHERE " if filters or keyvalues or exclude_keyvalues else "" | |
1941 | 1945 | arg_list = [] # type: List[Any] |
1942 | 1946 | if filters: |
1943 | 1947 | where_clause += " AND ".join("%s LIKE ?" % (k,) for k in filters) |
1946 | 1950 | if keyvalues: |
1947 | 1951 | where_clause += " AND ".join("%s = ?" % (k,) for k in keyvalues) |
1948 | 1952 | arg_list += list(keyvalues.values()) |
1953 | if exclude_keyvalues: | |
1954 | where_clause += " AND ".join("%s != ?" % (k,) for k in exclude_keyvalues) | |
1955 | arg_list += list(exclude_keyvalues.values()) | |
1949 | 1956 | |
1950 | 1957 | sql = "SELECT %s FROM %s %s ORDER BY %s %s LIMIT ? OFFSET ?" % ( |
1951 | 1958 | ", ".join(retcols), |
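The new `exclude_keyvalues` argument adds a third group of parameterised fragments to the WHERE clause. The clause-building logic can be sketched as a standalone function (a hypothetical helper, not Synapse's API) that collects fragments into a list before joining, which keeps the `AND` separators correct however many groups are present:

```python
def build_where(filters=None, keyvalues=None, exclude_keyvalues=None):
    # Each group contributes parameterised fragments; values travel in a
    # separate argument list so nothing is interpolated into the SQL.
    clauses = []
    args = []
    for col in (filters or {}):
        clauses.append("%s LIKE ?" % (col,))
    args.extend((filters or {}).values())
    for col in (keyvalues or {}):
        clauses.append("%s = ?" % (col,))
    args.extend((keyvalues or {}).values())
    for col in (exclude_keyvalues or {}):
        clauses.append("%s != ?" % (col,))
    args.extend((exclude_keyvalues or {}).values())
    where = ("WHERE " + " AND ".join(clauses)) if clauses else ""
    return where, args
```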
31 | 31 | from synapse.util import json_encoder |
32 | 32 | |
33 | 33 | if TYPE_CHECKING: |
34 | from synapse.app.homeserver import HomeServer | |
34 | from synapse.server import HomeServer | |
35 | 35 | |
36 | 36 | logger = logging.getLogger(__name__) |
37 | 37 |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | 15 | import logging |
16 | from typing import List, Tuple | |
16 | from typing import List, Optional, Tuple | |
17 | 17 | |
18 | 18 | from synapse.logging.opentracing import log_kv, set_tag, trace |
19 | 19 | from synapse.replication.tcp.streams import ToDeviceStream |
114 | 114 | async def get_new_messages_for_device( |
115 | 115 | self, |
116 | 116 | user_id: str, |
117 | device_id: str, | |
117 | device_id: Optional[str], | |
118 | 118 | last_stream_id: int, |
119 | 119 | current_stream_id: int, |
120 | 120 | limit: int = 100, |
162 | 162 | |
163 | 163 | @trace |
164 | 164 | async def delete_messages_for_device( |
165 | self, user_id: str, device_id: str, up_to_stream_id: int | |
165 | self, user_id: str, device_id: Optional[str], up_to_stream_id: int | |
166 | 166 | ) -> int: |
167 | 167 | """ |
168 | 168 | Args: |
792 | 792 | |
793 | 793 | return int(min_depth) if min_depth is not None else None |
794 | 794 | |
795 | async def get_forward_extremeties_for_room( | |
795 | async def get_forward_extremities_for_room_at_stream_ordering( | |
796 | 796 | self, room_id: str, stream_ordering: int |
797 | 797 | ) -> List[str]: |
798 | 798 | """For a given room_id and stream_ordering, return the forward |
1269 | 1269 | logger.exception("") |
1270 | 1270 | raise |
1271 | 1271 | |
1272 | # update the stored internal_metadata to update the "outlier" flag. | |
1273 | # TODO: This is unused as of Synapse 1.31. Remove it once we are happy | |
1274 | # to drop backwards-compatibility with 1.30. | |
1272 | 1275 | metadata_json = json_encoder.encode(event.internal_metadata.get_dict()) |
1273 | ||
1274 | 1276 | sql = "UPDATE event_json SET internal_metadata = ? WHERE event_id = ?" |
1275 | 1277 | txn.execute(sql, (metadata_json, event.event_id)) |
1276 | 1278 | |
1318 | 1320 | d.pop("redacted_because", None) |
1319 | 1321 | return d |
1320 | 1322 | |
1323 | def get_internal_metadata(event): | |
1324 | im = event.internal_metadata.get_dict() | |
1325 | ||
1326 | # temporary hack for database compatibility with Synapse 1.30 and earlier: | |
1327 | # store the `outlier` flag inside the internal_metadata json as well as in | |
1328 | # the `events` table, so that if anyone rolls back to an older Synapse, | |
1329 | # things keep working. This can be removed once we are happy to drop support | |
1330 | # for that | |
1331 | if event.internal_metadata.is_outlier(): | |
1332 | im["outlier"] = True | |
1333 | ||
1334 | return im | |
1335 | ||
1321 | 1336 | self.db_pool.simple_insert_many_txn( |
1322 | 1337 | txn, |
1323 | 1338 | table="event_json", |
1326 | 1341 | "event_id": event.event_id, |
1327 | 1342 | "room_id": event.room_id, |
1328 | 1343 | "internal_metadata": json_encoder.encode( |
1329 | event.internal_metadata.get_dict() | |
1344 | get_internal_metadata(event) | |
1330 | 1345 | ), |
1331 | 1346 | "json": json_encoder.encode(event_dict(event)), |
1332 | 1347 | "format_version": event.format_version, |
798 | 798 | rejected_reason=rejected_reason, |
799 | 799 | ) |
800 | 800 | original_ev.internal_metadata.stream_ordering = row["stream_ordering"] |
801 | original_ev.internal_metadata.outlier = row["outlier"] | |
801 | 802 | |
802 | 803 | event_map[event_id] = original_ev |
803 | 804 | |
904 | 905 | ej.json, |
905 | 906 | ej.format_version, |
906 | 907 | r.room_version, |
907 | rej.reason | |
908 | rej.reason, | |
909 | e.outlier | |
908 | 910 | FROM events AS e |
909 | 911 | JOIN event_json AS ej USING (event_id) |
910 | 912 | LEFT JOIN rooms r ON r.room_id = e.room_id |
928 | 930 | "room_version_id": row[5], |
929 | 931 | "rejected_reason": row[6], |
930 | 932 | "redactions": [], |
933 | "outlier": row[7], | |
931 | 934 | } |
932 | 935 | |
933 | 936 | # check for redactions |
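The two hunks above dual-write the `outlier` flag during the schema transition: the read path now takes it from the new `e.outlier` column, while `get_internal_metadata` keeps a copy inside the JSON blob so a rollback to Synapse 1.30 still sees it. The write-side shim, sketched standalone (helper names are hypothetical):

```python
import json


def encode_internal_metadata(metadata: dict, is_outlier: bool) -> str:
    # Dual-write compatibility: the flag now lives in its own column, but
    # is also kept inside the JSON blob so an older release reading only
    # the blob keeps working after a rollback.
    im = dict(metadata)
    if is_outlier:
        im["outlier"] = True
    return json.dumps(im)


def decode_outlier(row: dict) -> bool:
    # New read path prefers the dedicated column; the JSON copy is only
    # there for backwards compatibility.
    return bool(row["outlier"])
```

Once rollback support is dropped, the JSON copy (and the now-unused `UPDATE event_json` in the mark-outlier path) can be removed, as the diff's TODO notes.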
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | import logging |
15 | from typing import Dict, List | |
15 | from typing import Dict, List, Optional | |
16 | 16 | |
17 | 17 | from synapse.metrics.background_process_metrics import wrap_as_background_process |
18 | 18 | from synapse.storage._base import SQLBaseStore |
108 | 108 | return users |
109 | 109 | |
110 | 110 | @cached(num_args=1) |
111 | async def user_last_seen_monthly_active(self, user_id: str) -> int: | |
111 | async def user_last_seen_monthly_active(self, user_id: str) -> Optional[int]: | |
112 | 112 | """ |
113 | 113 | Checks if a given user is part of the monthly active user group |
114 | 114 |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | from typing import List, Tuple | |
15 | from typing import Dict, List, Tuple | |
16 | 16 | |
17 | 17 | from synapse.api.presence import UserPresenceState |
18 | 18 | from synapse.storage._base import SQLBaseStore, make_in_list_sql_clause |
156 | 156 | |
157 | 157 | return {row["user_id"]: UserPresenceState(**row) for row in rows} |
158 | 158 | |
159 | async def get_presence_for_all_users( | |
160 | self, | |
161 | include_offline: bool = True, | |
162 | ) -> Dict[str, UserPresenceState]: | |
163 | """Retrieve the current presence state for all users. | |
164 | ||
165 | Note that the presence_stream table is culled frequently, so it should only | |
166 | contain the latest presence state for each user. | |
167 | ||
168 | Args: | |
169 | include_offline: Whether to include offline presence states | |
170 | ||
171 | Returns: | |
172 | A dict of user IDs to their current UserPresenceState. | |
173 | """ | |
174 | users_to_state = {} | |
175 | ||
176 | exclude_keyvalues = None | |
177 | if not include_offline: | |
178 | # Exclude offline presence state | |
179 | exclude_keyvalues = {"state": "offline"} | |
180 | ||
181 | # This may be a very heavy database query. | |
182 | # We paginate in order to not block a database connection. | |
183 | limit = 100 | |
184 | offset = 0 | |
185 | while True: | |
186 | rows = await self.db_pool.runInteraction( | |
187 | "get_presence_for_all_users", | |
188 | self.db_pool.simple_select_list_paginate_txn, | |
189 | "presence_stream", | |
190 | orderby="stream_id", | |
191 | start=offset, | |
192 | limit=limit, | |
193 | exclude_keyvalues=exclude_keyvalues, | |
194 | retcols=( | |
195 | "user_id", | |
196 | "state", | |
197 | "last_active_ts", | |
198 | "last_federation_update_ts", | |
199 | "last_user_sync_ts", | |
200 | "status_msg", | |
201 | "currently_active", | |
202 | ), | |
203 | order_direction="ASC", | |
204 | ) | |
205 | ||
206 | for row in rows: | |
207 | users_to_state[row["user_id"]] = UserPresenceState(**row) | |
208 | ||
209 | # We've run out of updates to query | |
210 | if len(rows) < limit: | |
211 | break | |
212 | ||
213 | offset += limit | |
214 | ||
215 | return users_to_state | |
216 | ||
159 | 217 | def get_current_presence_token(self): |
160 | 218 | return self._presence_id_gen.get_current_token() |
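`get_presence_for_all_users` above pages through `presence_stream` in batches of 100 and stops as soon as a short page comes back, so one heavy query never monopolises a database connection. The loop's generic shape, as a self-contained sketch:

```python
def fetch_all_paginated(fetch_page, limit: int = 100) -> list:
    # Generic form of the loop in get_presence_for_all_users: page through
    # a potentially huge result set in fixed-size batches. A page shorter
    # than `limit` means the source is exhausted.
    results = []
    offset = 0
    while True:
        rows = fetch_page(offset, limit)
        results.extend(rows)
        if len(rows) < limit:
            break
        offset += limit
    return results
```

Note this relies on a stable ordering between pages (the real query orders by `stream_id`), otherwise rows could be skipped or duplicated across batches.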
26 | 26 | from synapse.util.caches.descriptors import cached, cachedList |
27 | 27 | |
28 | 28 | if TYPE_CHECKING: |
29 | from synapse.app.homeserver import HomeServer | |
29 | from synapse.server import HomeServer | |
30 | 30 | |
31 | 31 | logger = logging.getLogger(__name__) |
32 | 32 |
1209 | 1209 | self._invalidate_cache_and_stream( |
1210 | 1210 | txn, self.get_user_deactivated_status, (user_id,) |
1211 | 1211 | ) |
1212 | self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,)) | |
1212 | 1213 | txn.call_after(self.is_guest.invalidate, (user_id,)) |
1213 | 1214 | |
1214 | 1215 | @cached() |
21 | 21 | from synapse.metrics.background_process_metrics import wrap_as_background_process |
22 | 22 | from synapse.storage._base import SQLBaseStore, db_to_json |
23 | 23 | from synapse.storage.database import DatabasePool, LoggingTransaction |
24 | from synapse.storage.engines import PostgresEngine, Sqlite3Engine | |
25 | 24 | from synapse.types import JsonDict |
26 | 25 | from synapse.util.caches.expiringcache import ExpiringCache |
27 | 26 | |
311 | 310 | stream_ordering: the stream_ordering of the event |
312 | 311 | """ |
313 | 312 | |
314 | return await self.db_pool.runInteraction( | |
315 | "store_destination_rooms_entries", | |
316 | self._store_destination_rooms_entries_txn, | |
317 | destinations, | |
318 | room_id, | |
319 | stream_ordering, | |
320 | ) | |
321 | ||
322 | def _store_destination_rooms_entries_txn( | |
323 | self, | |
324 | txn: LoggingTransaction, | |
325 | destinations: Iterable[str], | |
326 | room_id: str, | |
327 | stream_ordering: int, | |
328 | ) -> None: | |
329 | ||
330 | # ensure we have a `destinations` row for this destination, as there is | |
331 | # a foreign key constraint. | |
332 | if isinstance(self.database_engine, PostgresEngine): | |
333 | q = """ | |
334 | INSERT INTO destinations (destination) | |
335 | VALUES (?) | |
336 | ON CONFLICT DO NOTHING; | |
337 | """ | |
338 | elif isinstance(self.database_engine, Sqlite3Engine): | |
339 | q = """ | |
340 | INSERT OR IGNORE INTO destinations (destination) | |
341 | VALUES (?); | |
342 | """ | |
343 | else: | |
344 | raise RuntimeError("Unknown database engine") | |
345 | ||
346 | txn.execute_batch(q, ((destination,) for destination in destinations)) | |
313 | await self.db_pool.simple_upsert_many( | |
314 | table="destinations", | |
315 | key_names=("destination",), | |
316 | key_values=[(d,) for d in destinations], | |
317 | value_names=[], | |
318 | value_values=[], | |
319 | desc="store_destination_rooms_entries_dests", | |
320 | ) | |
347 | 321 | |
348 | 322 | rows = [(destination, room_id) for destination in destinations] |
349 | ||
350 | self.db_pool.simple_upsert_many_txn( | |
351 | txn, | |
323 | await self.db_pool.simple_upsert_many( | |
352 | 324 | table="destination_rooms", |
353 | 325 | key_names=("destination", "room_id"), |
354 | 326 | key_values=rows, |
355 | 327 | value_names=["stream_ordering"], |
356 | 328 | value_values=[(stream_ordering,)] * len(rows), |
329 | desc="store_destination_rooms_entries_rooms", | |
357 | 330 | ) |
358 | 331 | |
359 | 332 | async def get_destination_last_successful_stream_ordering( |
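The removed branch dispatched engine-specific "insert unless present" SQL, which the replacement folds into `simple_upsert_many`. The SQLite spelling of the removed query can be exercised directly with the stdlib `sqlite3` module (the table and hostnames here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE destinations (destination TEXT PRIMARY KEY)")

# SQLite's spelling of "insert unless the row already exists"; on PostgreSQL
# the equivalent is INSERT ... ON CONFLICT DO NOTHING, as in the removed hunk.
q = "INSERT OR IGNORE INTO destinations (destination) VALUES (?)"

destinations = ["hs1.example.org", "hs2.example.org", "hs1.example.org"]
conn.executemany(q, ((d,) for d in destinations))

# Conflicting inserts are silently dropped, satisfying the foreign-key
# precondition without raising on duplicates.
rows = sorted(r[0] for r in conn.execute("SELECT destination FROM destinations"))
```

Routing both engines through one upsert helper, as the new code does, removes the per-engine `isinstance` dispatch entirely.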
182 | 182 | requests state from the cache, if False we need to query the DB for the |
183 | 183 | missing state. |
184 | 184 | """ |
185 | is_all, known_absent, state_dict_ids = cache.get(group) | |
186 | ||
187 | if is_all or state_filter.is_full(): | |
185 | cache_entry = cache.get(group) | |
186 | state_dict_ids = cache_entry.value | |
187 | ||
188 | if cache_entry.full or state_filter.is_full(): | |
188 | 189 | # Either we have everything or want everything, either way |
189 | 190 | # `cache_entry.full` tells us whether we've gotten everything.
190 | return state_filter.filter_state(state_dict_ids), is_all | |
191 | return state_filter.filter_state(state_dict_ids), cache_entry.full | |
191 | 192 | |
192 | 193 | # tracks whether any of our requested types are missing from the cache |
193 | 194 | missing_types = False |
201 | 202 | # There aren't any wild cards, so `concrete_types()` returns the |
202 | 203 | # complete list of event types we're wanting. |
203 | 204 | for key in state_filter.concrete_types(): |
204 | if key not in state_dict_ids and key not in known_absent: | |
205 | if key not in state_dict_ids and key not in cache_entry.known_absent: | |
205 | 206 | missing_types = True |
206 | 207 | break |
207 | 208 |
19 | 19 | from synapse.storage.databases import Databases |
20 | 20 | |
21 | 21 | if TYPE_CHECKING: |
22 | from synapse.app.homeserver import HomeServer | |
22 | from synapse.server import HomeServer | |
23 | 23 | |
24 | 24 | logger = logging.getLogger(__name__) |
25 | 25 |
31 | 31 | from synapse.types import MutableStateMap, StateMap |
32 | 32 | |
33 | 33 | if TYPE_CHECKING: |
34 | from synapse.app.homeserver import HomeServer | |
34 | from synapse.server import HomeServer | |
35 | 35 | from synapse.storage.databases import Databases |
36 | 36 | |
37 | 37 | logger = logging.getLogger(__name__) |
448 | 448 | return self.stores.state._get_state_groups_from_groups(groups, state_filter) |
449 | 449 | |
450 | 450 | async def get_state_for_events( |
451 | self, event_ids: List[str], state_filter: StateFilter = StateFilter.all() | |
451 | self, event_ids: Iterable[str], state_filter: StateFilter = StateFilter.all() | |
452 | 452 | ) -> Dict[str, StateMap[EventBase]]: |
453 | 453 | """Given a list of event_ids and type tuples, return a list of state |
454 | 454 | dicts for each event. |
484 | 484 | return {event: event_to_state[event] for event in event_ids} |
485 | 485 | |
486 | 486 | async def get_state_ids_for_events( |
487 | self, event_ids: List[str], state_filter: StateFilter = StateFilter.all() | |
487 | self, event_ids: Iterable[str], state_filter: StateFilter = StateFilter.all() | |
488 | 488 | ) -> Dict[str, StateMap[str]]: |
489 | 489 | """ |
490 | 490 | Get the state dicts corresponding to a list of events, containing the event_ids |
495 | 495 | |
496 | 496 | try: |
497 | 497 | deferred.cancel() |
498 | except: # noqa: E722, if we throw any exception it'll break time outs | |
498 | except Exception: # if we throw any exception it'll break time outs | |
499 | 499 | logger.exception("Canceller failed during timeout") |
500 | 500 | |
501 | 501 | # the cancel() call should have set off a chain of errbacks which |
24 | 24 | |
25 | 25 | logger = logging.getLogger(__name__) |
26 | 26 | |
27 | caches_by_name = {} | |
28 | collectors_by_name = {} # type: Dict | |
27 | caches_by_name = {} # type: Dict[str, Sized] | |
28 | collectors_by_name = {} # type: Dict[str, CacheMetric] | |
29 | 29 | |
30 | 30 | cache_size = Gauge("synapse_util_caches_cache:size", "", ["name"]) |
31 | 31 | cache_hits = Gauge("synapse_util_caches_cache:hits", "", ["name"]) |
115 | 115 | """ |
116 | 116 | if resizable: |
117 | 117 | if not resize_callback: |
118 | resize_callback = getattr(cache, "set_cache_factor") | |
118 | resize_callback = cache.set_cache_factor # type: ignore | |
119 | 119 | add_resizable_cache(cache_name, resize_callback) |
120 | 120 | |
121 | 121 | metric = CacheMetric(cache, cache_type, cache_name, collect_callback) |
14 | 14 | import enum |
15 | 15 | import logging |
16 | 16 | import threading |
17 | from collections import namedtuple | |
18 | from typing import Any | |
17 | from typing import Any, Dict, Generic, Iterable, Optional, Set, TypeVar | |
18 | ||
19 | import attr | |
19 | 20 | |
20 | 21 | from synapse.util.caches.lrucache import LruCache |
21 | 22 | |
22 | 23 | logger = logging.getLogger(__name__) |
23 | 24 | |
24 | 25 | |
25 | class DictionaryEntry(namedtuple("DictionaryEntry", ("full", "known_absent", "value"))): | |
26 | # The type of the cache keys. | |
27 | KT = TypeVar("KT") | |
28 | # The type of the dictionary keys. | |
29 | DKT = TypeVar("DKT") | |
30 | ||
31 | ||
32 | @attr.s(slots=True) | |
33 | class DictionaryEntry: | |
26 | 34 | """Returned when getting an entry from the cache |
27 | 35 | |
28 | 36 | Attributes: |
29 | full (bool): Whether the cache has the full or dict or just some keys. | |
37 | full: Whether the cache has the full dict or just some keys.
30 | 38 | If not full then not all requested keys will necessarily be present |
31 | 39 | in `value` |
32 | known_absent (set): Keys that were looked up in the dict and were not | |
40 | known_absent: Keys that were looked up in the dict and were not | |
33 | 41 | there. |
34 | value (dict): The full or partial dict value | |
42 | value: The full or partial dict value | |
35 | 43 | """ |
44 | ||
45 | full = attr.ib(type=bool) | |
46 | known_absent = attr.ib() | |
47 | value = attr.ib() | |
36 | 48 | |
37 | 49 | def __len__(self): |
38 | 50 | return len(self.value) |
44 | 56 | sentinel = object() |
45 | 57 | |
46 | 58 | |
47 | class DictionaryCache: | |
59 | class DictionaryCache(Generic[KT, DKT]): | |
48 | 60 | """Caches key -> dictionary lookups, supporting caching partial dicts, i.e. |
49 | 61 | fetching a subset of dictionary keys for a particular key. |
50 | 62 | """ |
51 | 63 | |
52 | def __init__(self, name, max_entries=1000): | |
64 | def __init__(self, name: str, max_entries: int = 1000): | |
53 | 65 | self.cache = LruCache( |
54 | 66 | max_size=max_entries, cache_name=name, size_callback=len |
55 | ) # type: LruCache[Any, DictionaryEntry] | |
67 | ) # type: LruCache[KT, DictionaryEntry] | |
56 | 68 | |
57 | 69 | self.name = name |
58 | 70 | self.sequence = 0 |
59 | self.thread = None | |
71 | self.thread = None # type: Optional[threading.Thread] | |
60 | 72 | |
61 | def check_thread(self): | |
73 | def check_thread(self) -> None: | |
62 | 74 | expected_thread = self.thread |
63 | 75 | if expected_thread is None: |
64 | 76 | self.thread = threading.current_thread() |
68 | 80 | "Cache objects can only be accessed from the main thread" |
69 | 81 | ) |
70 | 82 | |
71 | def get(self, key, dict_keys=None): | |
83 | def get( | |
84 | self, key: KT, dict_keys: Optional[Iterable[DKT]] = None | |
85 | ) -> DictionaryEntry: | |
72 | 86 | """Fetch an entry out of the cache |
73 | 87 | |
74 | 88 | Args: |
75 | 89 | key |
76 | dict_key(list): If given a set of keys then return only those keys | |
90 | dict_keys: If given a set of keys then return only those keys
77 | 91 | that exist in the cache. |
78 | 92 | |
79 | 93 | Returns: |
94 | 108 | |
95 | 109 | return DictionaryEntry(False, set(), {}) |
96 | 110 | |
97 | def invalidate(self, key): | |
111 | def invalidate(self, key: KT) -> None: | |
98 | 112 | self.check_thread() |
99 | 113 | |
100 | 114 | # Increment the sequence number so that any SELECT statements that |
102 | 116 | self.sequence += 1 |
103 | 117 | self.cache.pop(key, None) |
104 | 118 | |
105 | def invalidate_all(self): | |
119 | def invalidate_all(self) -> None: | |
106 | 120 | self.check_thread() |
107 | 121 | self.sequence += 1 |
108 | 122 | self.cache.clear() |
109 | 123 | |
110 | def update(self, sequence, key, value, fetched_keys=None): | |
124 | def update( | |
125 | self, | |
126 | sequence: int, | |
127 | key: KT, | |
128 | value: Dict[DKT, Any], | |
129 | fetched_keys: Optional[Set[DKT]] = None, | |
130 | ) -> None: | |
111 | 131 | """Updates the entry in the cache |
112 | 132 | |
113 | 133 | Args: |
114 | 134 | sequence |
115 | key (K) | |
116 | value (dict[X,Y]): The value to update the cache with. | |
117 | fetched_keys (None|set[X]): All of the dictionary keys which were | |
135 | key | |
136 | value: The value to update the cache with. | |
137 | fetched_keys: All of the dictionary keys which were | |
118 | 138 | fetched from the database. |
119 | 139 | |
120 | 140 | If None, this is the complete value for key K. Otherwise, it |
130 | 150 | else: |
131 | 151 | self._update_or_insert(key, value, fetched_keys) |
132 | 152 | |
133 | def _update_or_insert(self, key, value, known_absent): | |
153 | def _update_or_insert( | |
154 | self, key: KT, value: Dict[DKT, Any], known_absent: Set[DKT] | |
155 | ) -> None: | |
134 | 156 | # We pop and reinsert as we need to tell the cache the size may have |
135 | 157 | # changed |
136 | 158 | |
139 | 161 | entry.known_absent.update(known_absent) |
140 | 162 | self.cache[key] = entry |
141 | 163 | |
142 | def _insert(self, key, value, known_absent): | |
164 | def _insert(self, key: KT, value: Dict[DKT, Any], known_absent: Set[DKT]) -> None: | |
143 | 165 | self.cache[key] = DictionaryEntry(True, known_absent, value) |
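The attrs rewrite above keeps `DictionaryEntry`'s three fields: `full`, `known_absent`, and `value`. A minimal sketch of those partial-dict semantics, using a stdlib dataclass in place of attrs and hypothetical class names (`Entry`, `PartialDictCache`) rather than Synapse's own:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Set

@dataclass
class Entry:
    # Mirrors DictionaryEntry: `full` says whether `value` holds the whole
    # dict; `known_absent` records keys that were looked up and not found.
    full: bool
    known_absent: Set[Any] = field(default_factory=set)
    value: Dict[Any, Any] = field(default_factory=dict)

class PartialDictCache:
    """Sketch of DictionaryCache's key -> partial-dict caching."""

    def __init__(self):
        self._cache: Dict[Any, Entry] = {}

    def update(self, key, value, fetched_keys=None):
        if fetched_keys is None:
            # Complete value for this key.
            self._cache[key] = Entry(full=True, value=dict(value))
        else:
            # Partial update: keys fetched but missing become known-absent.
            entry = self._cache.setdefault(key, Entry(full=False))
            entry.value.update(value)
            entry.known_absent.update(set(fetched_keys) - set(value))

    def get(self, key, dict_keys=None):
        entry = self._cache.get(key)
        if entry is None:
            return Entry(full=False)
        if dict_keys is None or entry.full:
            return entry
        # Partial hit: only return the requested keys we actually have.
        hit = {k: entry.value[k] for k in dict_keys if k in entry.value}
        return Entry(full=False, known_absent=entry.known_absent, value=hit)

cache = PartialDictCache()
cache.update("group", {"a": 1}, fetched_keys={"a", "b"})
entry = cache.get("group", dict_keys={"a", "b"})
```

Tracking `known_absent` is what lets a later lookup for `"b"` be answered from cache as "definitely not there" instead of forcing a database query.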
14 | 14 | |
15 | 15 | import logging |
16 | 16 | import time |
17 | from typing import Any, Callable, Dict, Generic, Tuple, TypeVar, Union | |
17 | 18 | |
18 | 19 | import attr |
19 | 20 | from sortedcontainers import SortedList |
22 | 23 | |
23 | 24 | logger = logging.getLogger(__name__) |
24 | 25 | |
25 | SENTINEL = object() | |
26 | SENTINEL = object() # type: Any | |
27 | ||
28 | T = TypeVar("T") | |
29 | KT = TypeVar("KT") | |
30 | VT = TypeVar("VT") | |
26 | 31 | |
27 | 32 | |
28 | class TTLCache: | |
33 | class TTLCache(Generic[KT, VT]): | |
29 | 34 | """A key/value cache implementation where each entry has its own TTL""" |
30 | 35 | |
31 | def __init__(self, cache_name, timer=time.time): | |
36 | def __init__(self, cache_name: str, timer: Callable[[], float] = time.time): | |
32 | 37 | # map from key to _CacheEntry |
33 | self._data = {} | |
38 | self._data = {} # type: Dict[KT, _CacheEntry] | |
34 | 39 | |
35 | 40 | # the _CacheEntries, sorted by expiry time |
36 | 41 | self._expiry_list = SortedList() # type: SortedList[_CacheEntry] |
39 | 44 | |
40 | 45 | self._metrics = register_cache("ttl", cache_name, self, resizable=False) |
41 | 46 | |
42 | def set(self, key, value, ttl): | |
47 | def set(self, key: KT, value: VT, ttl: float) -> None: | |
43 | 48 | """Add/update an entry in the cache |
44 | 49 | |
45 | 50 | Args: |
46 | 51 | key: key for this entry |
47 | 52 | value: value for this entry |
48 | ttl (float): TTL for this entry, in seconds | |
53 | ttl: TTL for this entry, in seconds | |
49 | 54 | """ |
50 | 55 | expiry = self._timer() + ttl |
51 | 56 | |
52 | 57 | self.expire() |
53 | 58 | e = self._data.pop(key, SENTINEL) |
54 | if e != SENTINEL: | |
59 | if e is not SENTINEL: | |
60 | assert isinstance(e, _CacheEntry) | |
55 | 61 | self._expiry_list.remove(e) |
56 | 62 | |
57 | 63 | entry = _CacheEntry(expiry_time=expiry, ttl=ttl, key=key, value=value) |
58 | 64 | self._data[key] = entry |
59 | 65 | self._expiry_list.add(entry) |
60 | 66 | |
61 | def get(self, key, default=SENTINEL): | |
67 | def get(self, key: KT, default: T = SENTINEL) -> Union[VT, T]: | |
62 | 68 | """Get a value from the cache |
63 | 69 | |
64 | 70 | Args: |
71 | 77 | """ |
72 | 78 | self.expire() |
73 | 79 | e = self._data.get(key, SENTINEL) |
74 | if e == SENTINEL: | |
80 | if e is SENTINEL: | |
75 | 81 | self._metrics.inc_misses() |
76 | if default == SENTINEL: | |
82 | if default is SENTINEL: | |
77 | 83 | raise KeyError(key) |
78 | 84 | return default |
85 | assert isinstance(e, _CacheEntry) | |
79 | 86 | self._metrics.inc_hits() |
80 | 87 | return e.value |
81 | 88 | |
82 | def get_with_expiry(self, key): | |
89 | def get_with_expiry(self, key: KT) -> Tuple[VT, float, float]: | |
83 | 90 | """Get a value, and its expiry time, from the cache |
84 | 91 | |
85 | 92 | Args: |
86 | 93 | key: key to look up |
87 | 94 | |
88 | 95 | Returns: |
89 | Tuple[Any, float, float]: the value from the cache, the expiry time | |
90 | and the TTL | |
96 | A tuple of the value from the cache, the expiry time and the TTL | |
91 | 97 | |
92 | 98 | Raises: |
93 | 99 | KeyError if the entry is not found |
101 | 107 | self._metrics.inc_hits() |
102 | 108 | return e.value, e.expiry_time, e.ttl |
103 | 109 | |
104 | def pop(self, key, default=SENTINEL): | |
110 | def pop(self, key: KT, default: T = SENTINEL) -> Union[VT, T]: # type: ignore | |
105 | 111 | """Remove a value from the cache |
106 | 112 | |
107 | 113 | If key is in the cache, remove it and return its value, else return default. |
117 | 123 | """ |
118 | 124 | self.expire() |
119 | 125 | e = self._data.pop(key, SENTINEL) |
120 | if e == SENTINEL: | |
126 | if e is SENTINEL: | |
121 | 127 | self._metrics.inc_misses() |
122 | if default == SENTINEL: | |
128 | if default is SENTINEL: | |
123 | 129 | raise KeyError(key) |
124 | 130 | return default |
131 | assert isinstance(e, _CacheEntry) | |
125 | 132 | self._expiry_list.remove(e) |
126 | 133 | self._metrics.inc_hits() |
127 | 134 | return e.value |
128 | 135 | |
129 | def __getitem__(self, key): | |
136 | def __getitem__(self, key: KT) -> VT: | |
130 | 137 | return self.get(key) |
131 | 138 | |
132 | def __delitem__(self, key): | |
139 | def __delitem__(self, key: KT) -> None: | |
133 | 140 | self.pop(key) |
134 | 141 | |
135 | def __contains__(self, key): | |
142 | def __contains__(self, key: KT) -> bool: | |
136 | 143 | return key in self._data |
137 | 144 | |
138 | def __len__(self): | |
145 | def __len__(self) -> int: | |
139 | 146 | self.expire() |
140 | 147 | return len(self._data) |
141 | 148 | |
142 | def expire(self): | |
149 | def expire(self) -> None: | |
143 | 150 | """Run the expiry on the cache. Any entries whose expiry times are due will |
144 | 151 | be removed |
145 | 152 | """ |
157 | 164 | """TTLCache entry""" |
158 | 165 | |
159 | 166 | # expiry_time is the first attribute, so that entries are sorted by expiry. |
160 | expiry_time = attr.ib() | |
161 | ttl = attr.ib() | |
167 | expiry_time = attr.ib(type=float) | |
168 | ttl = attr.ib(type=float) | |
162 | 169 | key = attr.ib() |
163 | 170 | value = attr.ib() |
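The hunk above swaps `==`/`!=` comparisons against `SENTINEL` for identity checks. This matters once cache entries carry a value-based `__eq__` (as attrs classes do): two distinct entries with equal fields compare equal, so only `is` reliably distinguishes the sentinel. A small sketch, with a stdlib dataclass standing in for the attrs `_CacheEntry`:

```python
from dataclasses import dataclass

SENTINEL = object()

@dataclass
class CacheEntry:
    # dataclasses, like attrs, generate a field-by-field __eq__.
    expiry_time: float
    value: object

def lookup(data, key, default=SENTINEL):
    e = data.get(key, SENTINEL)
    # Identity, not equality: equality would invoke the generated __eq__,
    # and `is` is the only test that cannot be fooled by a value-equal object.
    if e is SENTINEL:
        if default is SENTINEL:
            raise KeyError(key)
        return default
    return e.value

data = {"k": CacheEntry(expiry_time=10.0, value="hello")}
# Equal fields, different objects: `==` is True, `is` would be False.
same_fields = CacheEntry(expiry_time=10.0, value="hello") == data["k"]
found = lookup(data, "k")
missing = lookup(data, "absent", default=None)
```

Typing `SENTINEL` as `Any`, as the diff does, then keeps mypy from complaining when the sentinel flows through `KT`/`VT`-typed signatures.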
35 | 35 | |
36 | 36 | def unfreeze(o): |
37 | 37 | if isinstance(o, (dict, frozendict)): |
38 | return dict({k: unfreeze(v) for k, v in o.items()}) | |
38 | return {k: unfreeze(v) for k, v in o.items()} | |
39 | 39 | |
40 | 40 | if isinstance(o, (bytes, str)): |
41 | 41 | return o |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | import logging |
15 | import operator | |
15 | from typing import Dict, FrozenSet, List, Optional | |
16 | 16 | |
17 | 17 | from synapse.api.constants import ( |
18 | 18 | AccountDataTypes, |
20 | 20 | HistoryVisibility, |
21 | 21 | Membership, |
22 | 22 | ) |
23 | from synapse.events import EventBase | |
23 | 24 | from synapse.events.utils import prune_event |
24 | 25 | from synapse.storage import Storage |
25 | 26 | from synapse.storage.state import StateFilter |
26 | from synapse.types import get_domain_from_id | |
27 | from synapse.types import StateMap, get_domain_from_id | |
27 | 28 | |
28 | 29 | logger = logging.getLogger(__name__) |
29 | 30 | |
47 | 48 | |
48 | 49 | async def filter_events_for_client( |
49 | 50 | storage: Storage, |
50 | user_id, | |
51 | events, | |
52 | is_peeking=False, | |
53 | always_include_ids=frozenset(), | |
54 | filter_send_to_client=True, | |
55 | ): | |
51 | user_id: str, | |
52 | events: List[EventBase], | |
53 | is_peeking: bool = False, | |
54 | always_include_ids: FrozenSet[str] = frozenset(), | |
55 | filter_send_to_client: bool = True, | |
56 | ) -> List[EventBase]: | |
56 | 57 | """ |
57 | 58 | Check which events a user is allowed to see. If the user can see the event but its |
58 | 59 | sender asked for their data to be erased, prune the content of the event. |
59 | 60 | |
60 | 61 | Args: |
61 | 62 | storage |
62 | user_id(str): user id to be checked | |
63 | events(list[synapse.events.EventBase]): sequence of events to be checked | |
64 | is_peeking(bool): should be True if: | |
63 | user_id: user id to be checked | |
64 | events: sequence of events to be checked | |
65 | is_peeking: should be True if: | |
65 | 66 | * the user is not currently a member of the room, and: |
66 | 67 | * the user has not been a member of the room since the given |
67 | 68 | events |
68 | always_include_ids (set(event_id)): set of event ids to specifically | |
69 | always_include_ids: set of event ids to specifically | |
69 | 70 | include (unless sender is ignored) |
70 | filter_send_to_client (bool): Whether we're checking an event that's going to be | |
71 | filter_send_to_client: Whether we're checking an event that's going to be | |
71 | 72 | sent to a client. This might not always be the case since this function can |
72 | 73 | also be called to check whether a user can see the state at a given point. |
73 | 74 | |
74 | 75 | Returns: |
75 | list[synapse.events.EventBase] | |
76 | The filtered events. | |
76 | 77 | """ |
77 | 78 | # Filter out events that have been soft failed so that we don't relay them |
78 | 79 | # to clients. |
89 | 90 | AccountDataTypes.IGNORED_USER_LIST, user_id |
90 | 91 | ) |
91 | 92 | |
92 | ignore_list = frozenset() | |
93 | ignore_list = frozenset() # type: FrozenSet[str] | |
93 | 94 | if ignore_dict_content: |
94 | 95 | ignored_users_dict = ignore_dict_content.get("ignored_users", {}) |
95 | 96 | if isinstance(ignored_users_dict, dict): |
106 | 107 | room_id |
107 | 108 | ] = await storage.main.get_retention_policy_for_room(room_id) |
108 | 109 | |
109 | def allowed(event): | |
110 | def allowed(event: EventBase) -> Optional[EventBase]: | |
110 | 111 | """ |
111 | 112 | Args: |
112 | event (synapse.events.EventBase): event to check | |
113 | event: event to check | |
113 | 114 | |
114 | 115 | Returns: |
115 | None|EventBase: | |
116 | None if the user cannot see this event at all | |
117 | ||
118 | a redacted copy of the event if they can only see a redacted | |
119 | version | |
120 | ||
121 | the original event if they can see it as normal. | |
116 | None if the user cannot see this event at all | |
117 | ||
118 | a redacted copy of the event if they can only see a redacted | |
119 | version | |
120 | ||
121 | the original event if they can see it as normal. | |
122 | 122 | """ |
123 | 123 | # Only run some checks if these events aren't about to be sent to clients. This is |
124 | 124 | # because, if this is not the case, we're probably only checking if the users can |
251 | 251 | |
252 | 252 | return event |
253 | 253 | |
254 | # check each event: gives an iterable[None|EventBase] | |
254 | # Check each event: gives an iterable of None or (a potentially modified) | |
255 | # EventBase. | |
255 | 256 | filtered_events = map(allowed, events) |
256 | 257 | |
257 | # remove the None entries | |
258 | filtered_events = filter(operator.truth, filtered_events) | |
259 | ||
260 | # we turn it into a list before returning it. | |
261 | return list(filtered_events) | |
258 | # Turn it into a list and remove None entries before returning. | |
259 | return [ev for ev in filtered_events if ev] | |
262 | 260 | |
263 | 261 | |
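The replacement collapses the `map` plus `filter(operator.truth, ...)` pipeline into one comprehension. The shape of the pattern, a per-item check returning `None`, a modified item, or the item unchanged, followed by a single comprehension that drops the `None`s, can be sketched as follows (the `allowed` rules here are made up and not Synapse's visibility logic):

```python
from typing import Iterable, List, Optional, Set

def visible_events(events: Iterable[str], blocked: Set[str]) -> List[str]:
    """Filter a stream of events through a tri-state visibility check."""

    def allowed(event: str) -> Optional[str]:
        # None: not visible at all; modified item: a pruned copy;
        # otherwise: the event as-is.
        if event in blocked:
            return None
        if event.startswith("secret:"):
            return "[redacted]"  # stand-in for returning a pruned copy
        return event

    filtered = map(allowed, events)
    # One comprehension both strips the None entries and materialises the
    # list, replacing the old filter(operator.truth, ...) + list() steps.
    return [ev for ev in filtered if ev]

result = visible_events(["a", "secret:b", "c"], blocked={"c"})
```

Dropping the `operator` import is a side benefit; the comprehension also reads as one step instead of three.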
264 | 262 | async def filter_events_for_server( |
265 | 263 | storage: Storage, |
266 | server_name, | |
267 | events, | |
268 | redact=True, | |
269 | check_history_visibility_only=False, | |
270 | ): | |
264 | server_name: str, | |
265 | events: List[EventBase], | |
266 | redact: bool = True, | |
267 | check_history_visibility_only: bool = False, | |
268 | ) -> List[EventBase]: | |
271 | 269 | """Filter a list of events based on whether given server is allowed to |
272 | 270 | see them. |
273 | 271 | |
274 | 272 | Args: |
275 | 273 | storage |
276 | server_name (str) | |
277 | events (iterable[FrozenEvent]) | |
278 | redact (bool): Whether to return a redacted version of the event, or | |
274 | server_name | |
275 | events | |
276 | redact: Whether to return a redacted version of the event, or | |
279 | 277 | to filter them out entirely. |
280 | check_history_visibility_only (bool): Whether to only check the | |
278 | check_history_visibility_only: Whether to only check the | |
281 | 279 | history visibility, rather than things like if the sender has been |
282 | 280 | erased. This is used e.g. during pagination to decide whether to |
283 | 281 | backfill or not. |
284 | 282 | |
285 | 283 | Returns |
286 | list[FrozenEvent] | |
284 | The filtered events. | |
287 | 285 | """ |
288 | 286 | |
289 | def is_sender_erased(event, erased_senders): | |
287 | def is_sender_erased(event: EventBase, erased_senders: Dict[str, bool]) -> bool: | |
290 | 288 | if erased_senders and erased_senders[event.sender]: |
291 | 289 | logger.info("Sender of %s has been erased, redacting", event.event_id) |
292 | 290 | return True |
293 | 291 | return False |
294 | 292 | |
295 | def check_event_is_visible(event, state): | |
293 | def check_event_is_visible(event: EventBase, state: StateMap[EventBase]) -> bool: | |
296 | 294 | history = state.get((EventTypes.RoomHistoryVisibility, ""), None) |
297 | 295 | if history: |
298 | 296 | visibility = history.content.get( |
0 | #!/bin/bash | |
0 | #!/usr/bin/env bash | |
1 | 1 | |
2 | 2 | # This script builds the Docker image to run the PostgreSQL tests, and then runs |
3 | 3 | # the tests. |
1 | 1 | |
2 | 2 | from mock import Mock |
3 | 3 | |
4 | from synapse.api.constants import EventTypes | |
4 | 5 | from synapse.events import EventBase |
5 | 6 | from synapse.federation.sender import PerDestinationQueue, TransactionManager |
6 | 7 | from synapse.federation.units import Edu |
420 | 421 | self.assertNotIn("zzzerver", woken) |
421 | 422 | # - all destinations are woken exactly once; they appear once in woken. |
422 | 423 | self.assertCountEqual(woken, server_names[:-1]) |
424 | ||
425 | @override_config({"send_federation": True}) | |
426 | def test_not_latest_event(self): | |
427 | """Test that we send the latest event in the room even if it's not ours."""
428 | ||
429 | per_dest_queue, sent_pdus = self.make_fake_destination_queue() | |
430 | ||
431 | # Make a room with a local user, and two servers. One will go offline | |
432 | # and one will send some events. | |
433 | self.register_user("u1", "you the one") | |
434 | u1_token = self.login("u1", "you the one") | |
435 | room_1 = self.helper.create_room_as("u1", tok=u1_token) | |
436 | ||
437 | self.get_success( | |
438 | event_injection.inject_member_event(self.hs, room_1, "@user:host2", "join") | |
439 | ) | |
440 | event_1 = self.get_success( | |
441 | event_injection.inject_member_event(self.hs, room_1, "@user:host3", "join") | |
442 | ) | |
443 | ||
444 | # First we send something from the local server, so that we notice the | |
445 | # remote is down and go into catchup mode. | |
446 | self.helper.send(room_1, "you hear me!!", tok=u1_token) | |
447 | ||
448 | # Now simulate us receiving an event from the still online remote. | |
449 | event_2 = self.get_success( | |
450 | event_injection.inject_event( | |
451 | self.hs, | |
452 | type=EventTypes.Message, | |
453 | sender="@user:host3", | |
454 | room_id=room_1, | |
455 | content={"msgtype": "m.text", "body": "Hello"}, | |
456 | ) | |
457 | ) | |
458 | ||
459 | self.get_success( | |
460 | self.hs.get_datastore().set_destination_last_successful_stream_ordering( | |
461 | "host2", event_1.internal_metadata.stream_ordering | |
462 | ) | |
463 | ) | |
464 | ||
465 | self.get_success(per_dest_queue._catch_up_transmission_loop()) | |
466 | ||
467 | # We expect only the last message from the remote, event_2, to have been | |
468 | # sent, rather than the last *local* event that was sent. | |
469 | self.assertEqual(len(sent_pdus), 1) | |
470 | self.assertEqual(sent_pdus[0].event_id, event_2.event_id) | |
471 | self.assertFalse(per_dest_queue._catching_up) |
987 | 987 | } |
988 | 988 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) |
989 | 989 | self.assertRenderedError("mapping_error", "localpart is invalid: ") |
990 | ||
991 | @override_config( | |
992 | { | |
993 | "oidc_config": { | |
994 | **DEFAULT_CONFIG, | |
995 | "attribute_requirements": [{"attribute": "test", "value": "foobar"}], | |
996 | } | |
997 | } | |
998 | ) | |
999 | def test_attribute_requirements(self): | |
1000 | """The required attributes must be met from the OIDC userinfo response.""" | |
1001 | auth_handler = self.hs.get_auth_handler() | |
1002 | auth_handler.complete_sso_login = simple_async_mock() | |
1003 | ||
1004 | # userinfo lacking "test": "foobar" attribute should fail. | |
1005 | userinfo = { | |
1006 | "sub": "tester", | |
1007 | "username": "tester", | |
1008 | } | |
1009 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) | |
1010 | auth_handler.complete_sso_login.assert_not_called() | |
1011 | ||
1012 | # userinfo with "test": "foobar" attribute should succeed. | |
1013 | userinfo = { | |
1014 | "sub": "tester", | |
1015 | "username": "tester", | |
1016 | "test": "foobar", | |
1017 | } | |
1018 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) | |
1019 | ||
1020 | # check that the auth handler got called as expected | |
1021 | auth_handler.complete_sso_login.assert_called_once_with( | |
1022 | "@tester:test", "oidc", ANY, ANY, None, new_user=True | |
1023 | ) | |
1024 | ||
1025 | @override_config( | |
1026 | { | |
1027 | "oidc_config": { | |
1028 | **DEFAULT_CONFIG, | |
1029 | "attribute_requirements": [{"attribute": "test", "value": "foobar"}], | |
1030 | } | |
1031 | } | |
1032 | ) | |
1033 | def test_attribute_requirements_contains(self): | |
1034 | """Test that auth succeeds if userinfo attribute CONTAINS required value""" | |
1035 | auth_handler = self.hs.get_auth_handler() | |
1036 | auth_handler.complete_sso_login = simple_async_mock() | |
1037 | # userinfo with "test": ["foobar", "foo", "bar"] attribute should succeed. | |
1038 | userinfo = { | |
1039 | "sub": "tester", | |
1040 | "username": "tester", | |
1041 | "test": ["foobar", "foo", "bar"], | |
1042 | } | |
1043 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) | |
1044 | ||
1045 | # check that the auth handler got called as expected | |
1046 | auth_handler.complete_sso_login.assert_called_once_with( | |
1047 | "@tester:test", "oidc", ANY, ANY, None, new_user=True | |
1048 | ) | |
1049 | ||
1050 | @override_config( | |
1051 | { | |
1052 | "oidc_config": { | |
1053 | **DEFAULT_CONFIG, | |
1054 | "attribute_requirements": [{"attribute": "test", "value": "foobar"}], | |
1055 | } | |
1056 | } | |
1057 | ) | |
1058 | def test_attribute_requirements_mismatch(self): | |
1059 | """ | |
1060 | Test that auth fails if attributes exist but don't match, | |
1061 | or are non-string values. | |
1062 | """ | |
1063 | auth_handler = self.hs.get_auth_handler() | |
1064 | auth_handler.complete_sso_login = simple_async_mock() | |
1065 | # userinfo with "test": "not_foobar" attribute should fail | |
1066 | userinfo = { | |
1067 | "sub": "tester", | |
1068 | "username": "tester", | |
1069 | "test": "not_foobar", | |
1070 | } | |
1071 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) | |
1072 | auth_handler.complete_sso_login.assert_not_called() | |
1073 | ||
1074 | # userinfo with "test": ["foo", "bar"] attribute should fail | |
1075 | userinfo = { | |
1076 | "sub": "tester", | |
1077 | "username": "tester", | |
1078 | "test": ["foo", "bar"], | |
1079 | } | |
1080 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) | |
1081 | auth_handler.complete_sso_login.assert_not_called() | |
1082 | ||
1083 | # userinfo with "test": False attribute should fail | |
1084 | # this is largely just to ensure we don't crash here | |
1085 | userinfo = { | |
1086 | "sub": "tester", | |
1087 | "username": "tester", | |
1088 | "test": False, | |
1089 | } | |
1090 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) | |
1091 | auth_handler.complete_sso_login.assert_not_called() | |
1092 | ||
1093 | # userinfo with "test": None attribute should fail | |
1094 | # a value of None breaks the OIDC spec, but it's important to not crash here | |
1095 | userinfo = { | |
1096 | "sub": "tester", | |
1097 | "username": "tester", | |
1098 | "test": None, | |
1099 | } | |
1100 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) | |
1101 | auth_handler.complete_sso_login.assert_not_called() | |
1102 | ||
1103 | # userinfo with "test": 1 attribute should fail | |
1104 | # this is largely just to ensure we don't crash here | |
1105 | userinfo = { | |
1106 | "sub": "tester", | |
1107 | "username": "tester", | |
1108 | "test": 1, | |
1109 | } | |
1110 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) | |
1111 | auth_handler.complete_sso_login.assert_not_called() | |
1112 | ||
1113 | # userinfo with "test": 3.14 attribute should fail | |
1114 | # this is largely just to ensure we don't crash here | |
1115 | userinfo = { | |
1116 | "sub": "tester", | |
1117 | "username": "tester", | |
1118 | "test": 3.14, | |
1119 | } | |
1120 | self.get_success(_make_callback_with_userinfo(self.hs, userinfo)) | |
1121 | auth_handler.complete_sso_login.assert_not_called() | |
990 | 1122 | |
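The tests above pin down the matching rules for `attribute_requirements`: an exact string match passes, membership in a list-valued attribute passes, and any other value type (bool, None, int, float) fails without crashing. An illustrative helper capturing just that observable behaviour (this is not Synapse's actual implementation):

```python
from typing import Any, Dict, List

def check_attribute_requirements(
    userinfo: Dict[str, Any], requirements: List[Dict[str, str]]
) -> bool:
    """Return True iff every requirement is met by the userinfo response."""
    for req in requirements:
        value = userinfo.get(req["attribute"])
        if isinstance(value, str):
            # Exact match required for plain string attributes.
            if value != req["value"]:
                return False
        elif isinstance(value, list):
            # List attributes pass if they CONTAIN the required value.
            if req["value"] not in value:
                return False
        else:
            # Missing attributes and non-string types (bool, None, numbers)
            # fail closed rather than raising.
            return False
    return True

reqs = [{"attribute": "test", "value": "foobar"}]
ok_exact = check_attribute_requirements({"test": "foobar"}, reqs)
ok_contains = check_attribute_requirements({"test": ["foobar", "foo"]}, reqs)
bad_mismatch = check_attribute_requirements({"test": "not_foobar"}, reqs)
bad_none = check_attribute_requirements({"test": None}, reqs)
```

Failing closed on unexpected types is exactly what the `test_attribute_requirements_mismatch` cases above assert: the login is rejected, but nothing throws.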
991 | 1123 | def _generate_oidc_session_token( |
992 | 1124 | self, |
309 | 309 | self.assertIsNotNone(new_state) |
310 | 310 | self.assertEquals(new_state.state, PresenceState.UNAVAILABLE) |
311 | 311 | |
312 | def test_busy_no_idle(self): | |
313 | """ | |
314 | Tests that a user setting their presence to busy but idling doesn't turn their | |
315 | presence state into unavailable. | |
316 | """ | |
317 | user_id = "@foo:bar" | |
318 | now = 5000000 | |
319 | ||
320 | state = UserPresenceState.default(user_id) | |
321 | state = state.copy_and_replace( | |
322 | state=PresenceState.BUSY, | |
323 | last_active_ts=now - IDLE_TIMER - 1, | |
324 | last_user_sync_ts=now, | |
325 | ) | |
326 | ||
327 | new_state = handle_timeout(state, is_mine=True, syncing_user_ids=set(), now=now) | |
328 | ||
329 | self.assertIsNotNone(new_state) | |
330 | self.assertEquals(new_state.state, PresenceState.BUSY) | |
331 | ||
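The new test pins down the rule that only an online user whose last activity predates the idle timer is demoted to unavailable; a busy user is left untouched. A hedged sketch of that rule (constant value and state names are illustrative):

```python
IDLE_TIMER = 5 * 60 * 1000  # assumed idle timeout, in milliseconds


def timed_out_state(state: str, last_active_ts: int, now: int) -> str:
    """Apply the idle-timeout demotion to a presence state."""
    if state == "online" and now - last_active_ts > IDLE_TIMER:
        # idle online users become unavailable...
        return "unavailable"
    # ...but busy (and every other state) is preserved
    return state


assert timed_out_state("busy", 0, IDLE_TIMER + 1) == "busy"
```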
312 | 332 | def test_sync_timeout(self): |
313 | 333 | user_id = "@foo:bar" |
314 | 334 | now = 5000000 |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | import base64 | |
14 | 15 | import logging |
15 | 16 | import os |
17 | from typing import Optional | |
16 | 18 | from unittest.mock import patch |
17 | 19 | |
18 | 20 | import treq |
241 | 243 | |
242 | 244 | @patch.dict(os.environ, {"https_proxy": "proxy.com", "no_proxy": "unused.com"}) |
243 | 245 | def test_https_request_via_proxy(self): |
246 | """Tests that TLS-encrypted requests can be made through a proxy""" | |
247 | self._do_https_request_via_proxy(auth_credentials=None) | |
248 | ||
249 | @patch.dict( | |
250 | os.environ, | |
251 | {"https_proxy": "bob:pinkponies@proxy.com", "no_proxy": "unused.com"}, | |
252 | ) | |
253 | def test_https_request_via_proxy_with_auth(self): | |
254 | """Tests that authenticated, TLS-encrypted requests can be made through a proxy""" | |
255 | self._do_https_request_via_proxy(auth_credentials="bob:pinkponies") | |
256 | ||
257 | def _do_https_request_via_proxy( | |
258 | self, | |
259 | auth_credentials: Optional[str] = None, | |
260 | ): | |
244 | 261 | agent = ProxyAgent( |
245 | 262 | self.reactor, |
246 | 263 | contextFactory=get_test_https_policy(), |
277 | 294 | self.assertEqual(request.method, b"CONNECT") |
278 | 295 | self.assertEqual(request.path, b"test.com:443") |
279 | 296 | |
297 | # Check whether auth credentials have been supplied to the proxy | |
298 | proxy_auth_header_values = request.requestHeaders.getRawHeaders( | |
299 | b"Proxy-Authorization" | |
300 | ) | |
301 | ||
302 | if auth_credentials is not None: | |
303 | # Compute the correct header value for Proxy-Authorization | |
304 | encoded_credentials = base64.b64encode(b"bob:pinkponies") | |
305 | expected_header_value = b"Basic " + encoded_credentials | |
306 | ||
307 | # Validate the header's value | |
308 | self.assertIn(expected_header_value, proxy_auth_header_values) | |
309 | else: | |
310 | # Check that the Proxy-Authorization header has not been supplied to the proxy | |
311 | self.assertIsNone(proxy_auth_header_values) | |
312 | ||
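The expected header value above is the base64 encoding of the `user:password` pair prefixed with `Basic ` (per RFC 7617). A small helper showing that computation (the function name is hypothetical):

```python
import base64


def make_proxy_auth_value(credentials: str) -> bytes:
    """Build the value of a Basic Proxy-Authorization header (RFC 7617)."""
    return b"Basic " + base64.b64encode(credentials.encode("ascii"))


assert make_proxy_auth_value("bob:pinkponies") == b"Basic Ym9iOnBpbmtwb25pZXM="
```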
280 | 313 | # tell the proxy server not to close the connection |
281 | 314 | proxy_server.persistent = True |
282 | 315 | |
311 | 344 | self.assertEqual(request.method, b"GET") |
312 | 345 | self.assertEqual(request.path, b"/abc") |
313 | 346 | self.assertEqual(request.requestHeaders.getRawHeaders(b"host"), [b"test.com"]) |
347 | ||
348 | # Check that the destination server DID NOT receive proxy credentials | |
349 | proxy_auth_header_values = request.requestHeaders.getRawHeaders( | |
350 | b"Proxy-Authorization" | |
351 | ) | |
352 | self.assertIsNone(proxy_auth_header_values) | |
353 | ||
314 | 354 | request.write(b"result") |
315 | 355 | request.finish() |
316 | 356 |
43 | 43 | try: |
44 | 44 | import hiredis |
45 | 45 | except ImportError: |
46 | hiredis = None | |
46 | hiredis = None # type: ignore | |
47 | 47 | |
48 | 48 | logger = logging.getLogger(__name__) |
49 | 49 |
68 | 68 | self.assert_request_is_get_repl_stream_updates(request, "typing") |
69 | 69 | |
70 | 70 | # The from token should be the token from the last RDATA we got. |
71 | assert request.args is not None | |
71 | 72 | self.assertEqual(int(request.args[b"from_token"][0]), token) |
72 | 73 | |
73 | 74 | self.test_handler.on_rdata.assert_called_once() |
14 | 14 | import logging |
15 | 15 | import os |
16 | 16 | from binascii import unhexlify |
17 | from typing import Tuple | |
17 | from typing import Optional, Tuple | |
18 | 18 | |
19 | 19 | from twisted.internet.protocol import Factory |
20 | 20 | from twisted.protocols.tls import TLSMemoryBIOFactory |
31 | 31 | |
32 | 32 | logger = logging.getLogger(__name__) |
33 | 33 | |
34 | test_server_connection_factory = None | |
34 | test_server_connection_factory = None # type: Optional[TestServerTLSConnectionFactory] | |
35 | 35 | |
36 | 36 | |
37 | 37 | class MediaRepoShardTestCase(BaseMultiWorkerStreamTestCase): |
1002 | 1002 | |
1003 | 1003 | def prepare(self, reactor, clock, hs): |
1004 | 1004 | self.store = hs.get_datastore() |
1005 | ||
1005 | self.auth_handler = hs.get_auth_handler() | |
1006 | ||
1007 | # create users and get access tokens | |
1008 | # regardless of whether password login or SSO is allowed | |
1006 | 1009 | self.admin_user = self.register_user("admin", "pass", admin=True) |
1007 | self.admin_user_tok = self.login("admin", "pass") | |
1010 | self.admin_user_tok = self.get_success( | |
1011 | self.auth_handler.get_access_token_for_user_id( | |
1012 | self.admin_user, device_id=None, valid_until_ms=None | |
1013 | ) | |
1014 | ) | |
1008 | 1015 | |
1009 | 1016 | self.other_user = self.register_user("user", "pass", displayname="User") |
1010 | self.other_user_token = self.login("user", "pass") | |
1017 | self.other_user_token = self.get_success( | |
1018 | self.auth_handler.get_access_token_for_user_id( | |
1019 | self.other_user, device_id=None, valid_until_ms=None | |
1020 | ) | |
1021 | ) | |
1011 | 1022 | self.url_other_user = "/_synapse/admin/v2/users/%s" % urllib.parse.quote( |
1012 | 1023 | self.other_user |
1013 | 1024 | ) |
1080 | 1091 | self.assertEqual("Bob's name", channel.json_body["displayname"]) |
1081 | 1092 | self.assertEqual("email", channel.json_body["threepids"][0]["medium"]) |
1082 | 1093 | self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"]) |
1083 | self.assertEqual(True, channel.json_body["admin"]) | |
1094 | self.assertTrue(channel.json_body["admin"]) | |
1084 | 1095 | self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"]) |
1085 | 1096 | |
1086 | 1097 | # Get user |
1095 | 1106 | self.assertEqual("Bob's name", channel.json_body["displayname"]) |
1096 | 1107 | self.assertEqual("email", channel.json_body["threepids"][0]["medium"]) |
1097 | 1108 | self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"]) |
1098 | self.assertEqual(True, channel.json_body["admin"]) | |
1099 | self.assertEqual(False, channel.json_body["is_guest"]) | |
1100 | self.assertEqual(False, channel.json_body["deactivated"]) | |
1109 | self.assertTrue(channel.json_body["admin"]) | |
1110 | self.assertFalse(channel.json_body["is_guest"]) | |
1111 | self.assertFalse(channel.json_body["deactivated"]) | |
1101 | 1112 | self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"]) |
1102 | 1113 | |
1103 | 1114 | def test_create_user(self): |
1129 | 1140 | self.assertEqual("Bob's name", channel.json_body["displayname"]) |
1130 | 1141 | self.assertEqual("email", channel.json_body["threepids"][0]["medium"]) |
1131 | 1142 | self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"]) |
1132 | self.assertEqual(False, channel.json_body["admin"]) | |
1143 | self.assertFalse(channel.json_body["admin"]) | |
1133 | 1144 | self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"]) |
1134 | 1145 | |
1135 | 1146 | # Get user |
1144 | 1155 | self.assertEqual("Bob's name", channel.json_body["displayname"]) |
1145 | 1156 | self.assertEqual("email", channel.json_body["threepids"][0]["medium"]) |
1146 | 1157 | self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"]) |
1147 | self.assertEqual(False, channel.json_body["admin"]) | |
1148 | self.assertEqual(False, channel.json_body["is_guest"]) | |
1149 | self.assertEqual(False, channel.json_body["deactivated"]) | |
1150 | self.assertEqual(False, channel.json_body["shadow_banned"]) | |
1158 | self.assertFalse(channel.json_body["admin"]) | |
1159 | self.assertFalse(channel.json_body["is_guest"]) | |
1160 | self.assertFalse(channel.json_body["deactivated"]) | |
1161 | self.assertFalse(channel.json_body["shadow_banned"]) | |
1151 | 1162 | self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"]) |
1152 | 1163 | |
1153 | 1164 | @override_config( |
1196 | 1207 | |
1197 | 1208 | self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"]) |
1198 | 1209 | self.assertEqual("@bob:test", channel.json_body["name"]) |
1199 | self.assertEqual(False, channel.json_body["admin"]) | |
1210 | self.assertFalse(channel.json_body["admin"]) | |
1200 | 1211 | |
1201 | 1212 | @override_config( |
1202 | 1213 | {"limit_usage_by_mau": True, "max_mau_value": 2, "mau_trial_days": 0} |
1236 | 1247 | # Admin user is not blocked by mau anymore |
1237 | 1248 | self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"]) |
1238 | 1249 | self.assertEqual("@bob:test", channel.json_body["name"]) |
1239 | self.assertEqual(False, channel.json_body["admin"]) | |
1250 | self.assertFalse(channel.json_body["admin"]) | |
1240 | 1251 | |
1241 | 1252 | @override_config( |
1242 | 1253 | { |
1428 | 1439 | |
1429 | 1440 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) |
1430 | 1441 | self.assertEqual("@user:test", channel.json_body["name"]) |
1431 | self.assertEqual(False, channel.json_body["deactivated"]) | |
1442 | self.assertFalse(channel.json_body["deactivated"]) | |
1432 | 1443 | self.assertEqual("foo@bar.com", channel.json_body["threepids"][0]["address"]) |
1433 | 1444 | self.assertEqual("mxc://servername/mediaid", channel.json_body["avatar_url"]) |
1434 | 1445 | self.assertEqual("User", channel.json_body["displayname"]) |
1435 | 1446 | |
1436 | 1447 | # Deactivate user |
1437 | body = json.dumps({"deactivated": True}) | |
1438 | ||
1439 | 1448 | channel = self.make_request( |
1440 | 1449 | "PUT", |
1441 | 1450 | self.url_other_user, |
1442 | 1451 | access_token=self.admin_user_tok, |
1443 | content=body.encode(encoding="utf_8"), | |
1452 | content={"deactivated": True}, | |
1444 | 1453 | ) |
1445 | 1454 | |
1446 | 1455 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) |
1447 | 1456 | self.assertEqual("@user:test", channel.json_body["name"]) |
1448 | self.assertEqual(True, channel.json_body["deactivated"]) | |
1457 | self.assertTrue(channel.json_body["deactivated"]) | |
1458 | self.assertIsNone(channel.json_body["password_hash"]) | |
1449 | 1459 | self.assertEqual(0, len(channel.json_body["threepids"])) |
1450 | 1460 | self.assertEqual("mxc://servername/mediaid", channel.json_body["avatar_url"]) |
1451 | 1461 | self.assertEqual("User", channel.json_body["displayname"]) |
1460 | 1470 | |
1461 | 1471 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) |
1462 | 1472 | self.assertEqual("@user:test", channel.json_body["name"]) |
1463 | self.assertEqual(True, channel.json_body["deactivated"]) | |
1473 | self.assertTrue(channel.json_body["deactivated"]) | |
1474 | self.assertIsNone(channel.json_body["password_hash"]) | |
1464 | 1475 | self.assertEqual(0, len(channel.json_body["threepids"])) |
1465 | 1476 | self.assertEqual("mxc://servername/mediaid", channel.json_body["avatar_url"]) |
1466 | 1477 | self.assertEqual("User", channel.json_body["displayname"]) |
1477 | 1488 | self.assertTrue(profile["display_name"] == "User") |
1478 | 1489 | |
1479 | 1490 | # Deactivate user |
1480 | body = json.dumps({"deactivated": True}) | |
1481 | ||
1482 | 1491 | channel = self.make_request( |
1483 | 1492 | "PUT", |
1484 | 1493 | self.url_other_user, |
1485 | 1494 | access_token=self.admin_user_tok, |
1486 | content=body.encode(encoding="utf_8"), | |
1495 | content={"deactivated": True}, | |
1487 | 1496 | ) |
1488 | 1497 | |
1489 | 1498 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) |
1490 | 1499 | self.assertEqual("@user:test", channel.json_body["name"]) |
1491 | self.assertEqual(True, channel.json_body["deactivated"]) | |
1500 | self.assertTrue(channel.json_body["deactivated"]) | |
1492 | 1501 | |
1493 | 1502 | # is not in user directory |
1494 | 1503 | profile = self.get_success(self.store.get_user_in_directory(self.other_user)) |
1495 | self.assertTrue(profile is None) | |
1504 | self.assertIsNone(profile) | |
1496 | 1505 | |
1497 | 1506 | # Set new displayname user |
1498 | body = json.dumps({"displayname": "Foobar"}) | |
1499 | ||
1500 | 1507 | channel = self.make_request( |
1501 | 1508 | "PUT", |
1502 | 1509 | self.url_other_user, |
1503 | 1510 | access_token=self.admin_user_tok, |
1504 | content=body.encode(encoding="utf_8"), | |
1511 | content={"displayname": "Foobar"}, | |
1505 | 1512 | ) |
1506 | 1513 | |
1507 | 1514 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) |
1508 | 1515 | self.assertEqual("@user:test", channel.json_body["name"]) |
1509 | self.assertEqual(True, channel.json_body["deactivated"]) | |
1516 | self.assertTrue(channel.json_body["deactivated"]) | |
1510 | 1517 | self.assertEqual("Foobar", channel.json_body["displayname"]) |
1511 | 1518 | |
1512 | 1519 | # is not in user directory |
1513 | 1520 | profile = self.get_success(self.store.get_user_in_directory(self.other_user)) |
1514 | self.assertTrue(profile is None) | |
1521 | self.assertIsNone(profile) | |
1515 | 1522 | |
1516 | 1523 | def test_reactivate_user(self): |
1517 | 1524 | """ |
1519 | 1526 | """ |
1520 | 1527 | |
1521 | 1528 | # Deactivate the user. |
1529 | self._deactivate_user("@user:test") | |
1530 | ||
1531 | # Attempt to reactivate the user (without a password). | |
1522 | 1532 | channel = self.make_request( |
1523 | 1533 | "PUT", |
1524 | 1534 | self.url_other_user, |
1525 | 1535 | access_token=self.admin_user_tok, |
1526 | content=json.dumps({"deactivated": True}).encode(encoding="utf_8"), | |
1527 | ) | |
1528 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
1529 | self._is_erased("@user:test", False) | |
1530 | d = self.store.mark_user_erased("@user:test") | |
1531 | self.assertIsNone(self.get_success(d)) | |
1532 | self._is_erased("@user:test", True) | |
1533 | ||
1534 | # Attempt to reactivate the user (without a password). | |
1536 | content={"deactivated": False}, | |
1537 | ) | |
1538 | self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"]) | |
1539 | ||
1540 | # Reactivate the user. | |
1535 | 1541 | channel = self.make_request( |
1536 | 1542 | "PUT", |
1537 | 1543 | self.url_other_user, |
1538 | 1544 | access_token=self.admin_user_tok, |
1539 | content=json.dumps({"deactivated": False}).encode(encoding="utf_8"), | |
1540 | ) | |
1541 | self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"]) | |
1542 | ||
1543 | # Reactivate the user. | |
1545 | content={"deactivated": False, "password": "foo"}, | |
1546 | ) | |
1547 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
1548 | self.assertEqual("@user:test", channel.json_body["name"]) | |
1549 | self.assertFalse(channel.json_body["deactivated"]) | |
1550 | self.assertIsNotNone(channel.json_body["password_hash"]) | |
1551 | self._is_erased("@user:test", False) | |
1552 | ||
1553 | @override_config({"password_config": {"localdb_enabled": False}}) | |
1554 | def test_reactivate_user_localdb_disabled(self): | |
1555 | """ | |
1556 | Test reactivating another user when the local password database is disabled. |
1557 | """ | |
1558 | ||
1559 | # Deactivate the user. | |
1560 | self._deactivate_user("@user:test") | |
1561 | ||
1562 | # Attempt to reactivate the user with a password (should fail) |
1544 | 1563 | channel = self.make_request( |
1545 | 1564 | "PUT", |
1546 | 1565 | self.url_other_user, |
1547 | 1566 | access_token=self.admin_user_tok, |
1548 | content=json.dumps({"deactivated": False, "password": "foo"}).encode( | |
1549 | encoding="utf_8" | |
1550 | ), | |
1551 | ) | |
1552 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
1553 | ||
1554 | # Get user | |
1555 | channel = self.make_request( | |
1556 | "GET", | |
1557 | self.url_other_user, | |
1558 | access_token=self.admin_user_tok, | |
1559 | ) | |
1560 | ||
1561 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
1562 | self.assertEqual("@user:test", channel.json_body["name"]) | |
1563 | self.assertEqual(False, channel.json_body["deactivated"]) | |
1564 | self._is_erased("@user:test", False) | |
1565 | ||
1566 | def test_set_user_as_admin(self): | |
1567 | """ | |
1568 | Test setting the admin flag on a user. | |
1569 | """ | |
1570 | ||
1571 | # Set a user as an admin | |
1572 | body = json.dumps({"admin": True}) | |
1573 | ||
1567 | content={"deactivated": False, "password": "foo"}, | |
1568 | ) | |
1569 | self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"]) | |
1570 | self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"]) | |
1571 | ||
1572 | # Reactivate the user without a password. | |
1574 | 1573 | channel = self.make_request( |
1575 | 1574 | "PUT", |
1576 | 1575 | self.url_other_user, |
1577 | 1576 | access_token=self.admin_user_tok, |
1578 | content=body.encode(encoding="utf_8"), | |
1579 | ) | |
1580 | ||
1577 | content={"deactivated": False}, | |
1578 | ) | |
1581 | 1579 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) |
1582 | 1580 | self.assertEqual("@user:test", channel.json_body["name"]) |
1583 | self.assertEqual(True, channel.json_body["admin"]) | |
1581 | self.assertFalse(channel.json_body["deactivated"]) | |
1582 | self.assertIsNone(channel.json_body["password_hash"]) | |
1583 | self._is_erased("@user:test", False) | |
1584 | ||
1585 | @override_config({"password_config": {"enabled": False}}) | |
1586 | def test_reactivate_user_password_disabled(self): | |
1587 | """ | |
1588 | Test reactivating another user when password login is disabled entirely. |
1589 | """ | |
1590 | ||
1591 | # Deactivate the user. | |
1592 | self._deactivate_user("@user:test") | |
1593 | ||
1594 | # Attempt to reactivate the user with a password (should fail) |
1595 | channel = self.make_request( | |
1596 | "PUT", | |
1597 | self.url_other_user, | |
1598 | access_token=self.admin_user_tok, | |
1599 | content={"deactivated": False, "password": "foo"}, | |
1600 | ) | |
1601 | self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"]) | |
1602 | self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"]) | |
1603 | ||
1604 | # Reactivate the user without a password. | |
1605 | channel = self.make_request( | |
1606 | "PUT", | |
1607 | self.url_other_user, | |
1608 | access_token=self.admin_user_tok, | |
1609 | content={"deactivated": False}, | |
1610 | ) | |
1611 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
1612 | self.assertEqual("@user:test", channel.json_body["name"]) | |
1613 | self.assertFalse(channel.json_body["deactivated"]) | |
1614 | self.assertIsNone(channel.json_body["password_hash"]) | |
1615 | self._is_erased("@user:test", False) | |
1616 | ||
1617 | def test_set_user_as_admin(self): | |
1618 | """ | |
1619 | Test setting the admin flag on a user. | |
1620 | """ | |
1621 | ||
1622 | # Set a user as an admin | |
1623 | channel = self.make_request( | |
1624 | "PUT", | |
1625 | self.url_other_user, | |
1626 | access_token=self.admin_user_tok, | |
1627 | content={"admin": True}, | |
1628 | ) | |
1629 | ||
1630 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
1631 | self.assertEqual("@user:test", channel.json_body["name"]) | |
1632 | self.assertTrue(channel.json_body["admin"]) | |
1584 | 1633 | |
1585 | 1634 | # Get user |
1586 | 1635 | channel = self.make_request( |
1591 | 1640 | |
1592 | 1641 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) |
1593 | 1642 | self.assertEqual("@user:test", channel.json_body["name"]) |
1594 | self.assertEqual(True, channel.json_body["admin"]) | |
1643 | self.assertTrue(channel.json_body["admin"]) | |
1595 | 1644 | |
1596 | 1645 | def test_accidental_deactivation_prevention(self): |
1597 | 1646 | """ |
1601 | 1650 | url = "/_synapse/admin/v2/users/@bob:test" |
1602 | 1651 | |
1603 | 1652 | # Create user |
1604 | body = json.dumps({"password": "abc123"}) | |
1605 | ||
1606 | 1653 | channel = self.make_request( |
1607 | 1654 | "PUT", |
1608 | 1655 | url, |
1609 | 1656 | access_token=self.admin_user_tok, |
1610 | content=body.encode(encoding="utf_8"), | |
1657 | content={"password": "abc123"}, | |
1611 | 1658 | ) |
1612 | 1659 | |
1613 | 1660 | self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"]) |
1627 | 1674 | self.assertEqual(0, channel.json_body["deactivated"]) |
1628 | 1675 | |
1629 | 1676 | # Change password (and use a str for "deactivated" instead of a bool)
1630 | body = json.dumps({"password": "abc123", "deactivated": "false"}) # oops! | |
1631 | ||
1632 | 1677 | channel = self.make_request( |
1633 | 1678 | "PUT", |
1634 | 1679 | url, |
1635 | 1680 | access_token=self.admin_user_tok, |
1636 | content=body.encode(encoding="utf_8"), | |
1681 | content={"password": "abc123", "deactivated": "false"}, | |
1637 | 1682 | ) |
1638 | 1683 | |
1639 | 1684 | self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"]) |
1652 | 1697 | # Ensure they're still alive |
1653 | 1698 | self.assertEqual(0, channel.json_body["deactivated"]) |
1654 | 1699 | |
1655 | def _is_erased(self, user_id, expect): | |
1700 | def _is_erased(self, user_id: str, expect: bool) -> None: | |
1656 | 1701 | """Assert that the user is erased or not""" |
1657 | 1702 | d = self.store.is_user_erased(user_id) |
1658 | 1703 | if expect: |
1659 | 1704 | self.assertTrue(self.get_success(d)) |
1660 | 1705 | else: |
1661 | 1706 | self.assertFalse(self.get_success(d)) |
1707 | ||
1708 | def _deactivate_user(self, user_id: str) -> None: | |
1709 | """Deactivate user and set as erased""" | |
1710 | ||
1711 | # Deactivate the user. | |
1712 | channel = self.make_request( | |
1713 | "PUT", | |
1714 | "/_synapse/admin/v2/users/%s" % urllib.parse.quote(user_id), | |
1715 | access_token=self.admin_user_tok, | |
1716 | content={"deactivated": True}, | |
1717 | ) | |
1718 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) | |
1719 | self.assertTrue(channel.json_body["deactivated"]) | |
1720 | self.assertIsNone(channel.json_body["password_hash"]) | |
1721 | self._is_erased(user_id, False) | |
1722 | d = self.store.mark_user_erased(user_id) | |
1723 | self.assertIsNone(self.get_success(d)) | |
1724 | self._is_erased(user_id, True) | |
1662 | 1725 | |
1663 | 1726 | |
1664 | 1727 | class UserMembershipRestTestCase(unittest.HomeserverTestCase): |
159 | 159 | self.assertEqual(channel.result["code"], b"200", channel.result) |
160 | 160 | ev = channel.json_body |
161 | 161 | self.assertEqual(ev["content"]["x"], "y") |
162 | ||
163 | def test_message_edit(self): | |
164 | """Ensure that the module doesn't cause issues with edited messages.""" | |
165 | # first patch the event checker so that it will modify the event | |
166 | async def check(ev: EventBase, state): | |
167 | d = ev.get_dict() | |
168 | d["content"] = { | |
169 | "msgtype": "m.text", | |
170 | "body": d["content"]["body"].upper(), | |
171 | } | |
172 | return d | |
173 | ||
174 | current_rules_module().check_event_allowed = check | |
175 | ||
176 | # Send an event, then edit it. | |
177 | channel = self.make_request( | |
178 | "PUT", | |
179 | "/_matrix/client/r0/rooms/%s/send/modifyme/1" % self.room_id, | |
180 | { | |
181 | "msgtype": "m.text", | |
182 | "body": "Original body", | |
183 | }, | |
184 | access_token=self.tok, | |
185 | ) | |
186 | self.assertEqual(channel.result["code"], b"200", channel.result) | |
187 | orig_event_id = channel.json_body["event_id"] | |
188 | ||
189 | channel = self.make_request( | |
190 | "PUT", | |
191 | "/_matrix/client/r0/rooms/%s/send/m.room.message/2" % self.room_id, | |
192 | { | |
193 | "m.new_content": {"msgtype": "m.text", "body": "Edited body"}, | |
194 | "m.relates_to": { | |
195 | "rel_type": "m.replace", | |
196 | "event_id": orig_event_id, | |
197 | }, | |
198 | "msgtype": "m.text", | |
199 | "body": "Edited body", | |
200 | }, | |
201 | access_token=self.tok, | |
202 | ) | |
203 | self.assertEqual(channel.result["code"], b"200", channel.result) | |
204 | edited_event_id = channel.json_body["event_id"] | |
205 | ||
206 | # ... and check that they both got modified | |
207 | channel = self.make_request( | |
208 | "GET", | |
209 | "/_matrix/client/r0/rooms/%s/event/%s" % (self.room_id, orig_event_id), | |
210 | access_token=self.tok, | |
211 | ) | |
212 | self.assertEqual(channel.result["code"], b"200", channel.result) | |
213 | ev = channel.json_body | |
214 | self.assertEqual(ev["content"]["body"], "ORIGINAL BODY") | |
215 | ||
216 | channel = self.make_request( | |
217 | "GET", | |
218 | "/_matrix/client/r0/rooms/%s/event/%s" % (self.room_id, edited_event_id), | |
219 | access_token=self.tok, | |
220 | ) | |
221 | self.assertEqual(channel.result["code"], b"200", channel.result) | |
222 | ev = channel.json_body | |
223 | self.assertEqual(ev["content"]["body"], "EDITED BODY") | |
162 | 224 | |
163 | 225 | def test_send_event(self): |
164 | 226 | """Tests that the module can send an event into a room via the module api""" |
17 | 17 | from synapse.rest.client.v2_alpha import capabilities |
18 | 18 | |
19 | 19 | from tests import unittest |
20 | from tests.unittest import override_config | |
20 | 21 | |
21 | 22 | |
22 | 23 | class CapabilitiesTestCase(unittest.HomeserverTestCase): |
32 | 33 | hs = self.setup_test_homeserver() |
33 | 34 | self.store = hs.get_datastore() |
34 | 35 | self.config = hs.config |
36 | self.auth_handler = hs.get_auth_handler() | |
35 | 37 | return hs |
36 | 38 | |
37 | 39 | def test_check_auth_required(self): |
55 | 57 | capabilities["m.room_versions"]["default"], |
56 | 58 | ) |
57 | 59 | |
58 | def test_get_change_password_capabilities(self): | |
60 | def test_get_change_password_capabilities_password_login(self): | |
59 | 61 | localpart = "user" |
60 | 62 | password = "pass" |
61 | 63 | user = self.register_user(localpart, password) |
65 | 67 | capabilities = channel.json_body["capabilities"] |
66 | 68 | |
67 | 69 | self.assertEqual(channel.code, 200) |
70 | self.assertTrue(capabilities["m.change_password"]["enabled"]) | |
68 | 71 | |
69 | # Test case where password is handled outside of Synapse | |
70 | self.assertTrue(capabilities["m.change_password"]["enabled"]) | |
71 | self.get_success(self.store.user_set_password_hash(user, None)) | |
72 | @override_config({"password_config": {"localdb_enabled": False}}) | |
73 | def test_get_change_password_capabilities_localdb_disabled(self): | |
74 | localpart = "user" | |
75 | password = "pass" | |
76 | user = self.register_user(localpart, password) | |
77 | access_token = self.get_success( | |
78 | self.auth_handler.get_access_token_for_user_id( | |
79 | user, device_id=None, valid_until_ms=None | |
80 | ) | |
81 | ) | |
82 | ||
72 | 83 | channel = self.make_request("GET", self.url, access_token=access_token) |
73 | 84 | capabilities = channel.json_body["capabilities"] |
74 | 85 | |
75 | 86 | self.assertEqual(channel.code, 200) |
76 | 87 | self.assertFalse(capabilities["m.change_password"]["enabled"]) |
88 | ||
89 | @override_config({"password_config": {"enabled": False}}) | |
90 | def test_get_change_password_capabilities_password_disabled(self): | |
91 | localpart = "user" | |
92 | password = "pass" | |
93 | user = self.register_user(localpart, password) | |
94 | access_token = self.get_success( | |
95 | self.auth_handler.get_access_token_for_user_id( | |
96 | user, device_id=None, valid_until_ms=None | |
97 | ) | |
98 | ) | |
99 | ||
100 | channel = self.make_request("GET", self.url, access_token=access_token) | |
101 | capabilities = channel.json_body["capabilities"] | |
102 | ||
103 | self.assertEqual(channel.code, 200) | |
104 | self.assertFalse(capabilities["m.change_password"]["enabled"]) |
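Taken together, these three tests pin down the assumed rule for the `m.change_password` capability: it is advertised only when password login and the local password database are both enabled. A sketch under that assumption (config keys mirror the `override_config` dicts above):

```python
from typing import Any, Dict


def change_password_enabled(config: Dict[str, Any]) -> bool:
    """Return whether the m.change_password capability should be advertised."""
    pw = config.get("password_config", {})
    # both switches default to True when absent, matching the first test case
    return pw.get("enabled", True) and pw.get("localdb_enabled", True)


assert change_password_enabled({}) is True
assert change_password_enabled({"password_config": {"localdb_enabled": False}}) is False
```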
38 | 38 | # We need to enable msc1849 support for aggregations |
39 | 39 | config = self.default_config() |
40 | 40 | config["experimental_msc1849_support_enabled"] = True |
41 | ||
42 | # We enable frozen dicts as relations/edits change event contents, so we | |
43 | # want to test that we don't modify the events in the caches. | |
44 | config["use_frozen_dicts"] = True | |
45 | ||
41 | 46 | return self.setup_test_homeserver(config=config) |
42 | 47 | |
43 | 48 | def prepare(self, reactor, clock, hs): |
517 | 522 | {"event_id": edit_event_id, "sender": self.user_id}, m_replace_dict |
518 | 523 | ) |
519 | 524 | |
525 | def test_edit_reply(self): | |
526 | """Test that editing a reply works.""" | |
527 | ||
528 | # Create a reply to edit. | |
529 | channel = self._send_relation( | |
530 | RelationTypes.REFERENCE, | |
531 | "m.room.message", | |
532 | content={"msgtype": "m.text", "body": "A reply!"}, | |
533 | ) | |
534 | self.assertEquals(200, channel.code, channel.json_body) | |
535 | reply = channel.json_body["event_id"] | |
536 | ||
537 | new_body = {"msgtype": "m.text", "body": "I've been edited!"} | |
538 | channel = self._send_relation( | |
539 | RelationTypes.REPLACE, | |
540 | "m.room.message", | |
541 | content={"msgtype": "m.text", "body": "foo", "m.new_content": new_body}, | |
542 | parent_id=reply, | |
543 | ) | |
544 | self.assertEquals(200, channel.code, channel.json_body) | |
545 | ||
546 | edit_event_id = channel.json_body["event_id"] | |
547 | ||
548 | channel = self.make_request( | |
549 | "GET", | |
550 | "/rooms/%s/event/%s" % (self.room, reply), | |
551 | access_token=self.user_token, | |
552 | ) | |
553 | self.assertEquals(200, channel.code, channel.json_body) | |
554 | ||
555 | # We expect to see the new body in the dict, as well as the reference | |
556 | # metadata sill intact. | |
557 | self.assertDictContainsSubset(new_body, channel.json_body["content"]) | |
558 | self.assertDictContainsSubset( | |
559 | { | |
560 | "m.relates_to": { | |
561 | "event_id": self.parent_id, | |
562 | "key": None, | |
563 | "rel_type": "m.reference", | |
564 | } | |
565 | }, | |
566 | channel.json_body["content"], | |
567 | ) | |
568 | ||
569 | # We expect that the edit relation appears in the unsigned relations | |
570 | # section. | |
571 | relations_dict = channel.json_body["unsigned"].get("m.relations") | |
572 | self.assertIn(RelationTypes.REPLACE, relations_dict) | |
573 | ||
574 | m_replace_dict = relations_dict[RelationTypes.REPLACE] | |
575 | for key in ["event_id", "sender", "origin_server_ts"]: | |
576 | self.assertIn(key, m_replace_dict) | |
577 | ||
578 | self.assert_dict( | |
579 | {"event_id": edit_event_id, "sender": self.user_id}, m_replace_dict | |
580 | ) | |
581 | ||
520 | 582 | def test_relations_redaction_redacts_edits(self): |
521 | 583 | """Test that edits of an event are redacted when the original event |
522 | 584 | is redacted. |
1 | 1 | import logging |
2 | 2 | from collections import deque |
3 | 3 | from io import SEEK_END, BytesIO |
4 | from typing import Callable, Iterable, MutableMapping, Optional, Tuple, Union | |
4 | from typing import Callable, Dict, Iterable, MutableMapping, Optional, Tuple, Union | |
5 | 5 | |
6 | 6 | import attr |
7 | 7 | from typing_extensions import Deque |
12 | 12 | from twisted.internet.defer import Deferred, fail, succeed |
13 | 13 | from twisted.internet.error import DNSLookupError |
14 | 14 | from twisted.internet.interfaces import ( |
15 | IHostnameResolver, | |
16 | IProtocol, | |
17 | IPullProducer, | |
18 | IPushProducer, | |
15 | 19 | IReactorPluggableNameResolver, |
16 | IReactorTCP, | |
17 | 20 | IResolverSimple, |
18 | 21 | ITransport, |
19 | 22 | ) |
44 | 47 | wire). |
45 | 48 | """ |
46 | 49 | |
47 | site = attr.ib(type=Site) | |
50 | site = attr.ib(type=Union[Site, "FakeSite"]) | |
48 | 51 | _reactor = attr.ib() |
49 | 52 | result = attr.ib(type=dict, default=attr.Factory(dict)) |
50 | 53 | _ip = attr.ib(type=str, default="127.0.0.1") |
51 | _producer = None | |
54 | _producer = None # type: Optional[Union[IPullProducer, IPushProducer]] | |
52 | 55 | |
53 | 56 | @property |
54 | 57 | def json_body(self): |
158 | 161 | |
159 | 162 | Any cookies found are added to the given dict
160 | 163 | """ |
161 | for h in self.headers.getRawHeaders("Set-Cookie"): | |
164 | headers = self.headers.getRawHeaders("Set-Cookie") | |
165 | if not headers: | |
166 | return | |
167 | ||
168 | for h in headers: | |
162 | 169 | parts = h.split(";") |
163 | 170 | k, v = parts[0].split("=", maxsplit=1) |
164 | 171 | cookies[k] = v |
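The hunk above guards against `getRawHeaders` returning `None` when no `Set-Cookie` header is present. A minimal standalone sketch of the same parsing logic (the function name and sample header are illustrative, not part of Synapse):

```python
from typing import Dict, List, Optional


def extract_cookies(raw_headers: Optional[List[str]], cookies: Dict[str, str]) -> None:
    # getRawHeaders returns None when the header is absent; bail out early
    # instead of iterating over None (the bug the guard above fixes).
    if not raw_headers:
        return
    for h in raw_headers:
        parts = h.split(";")
        # Only the first "k=v" pair is the cookie itself; the remaining
        # parts (Path, HttpOnly, ...) are attributes and are ignored here.
        k, v = parts[0].split("=", maxsplit=1)
        cookies[k] = v


cookies: Dict[str, str] = {}
extract_cookies(None, cookies)  # no-op when the header is missing
extract_cookies(["session=abc123; Path=/; HttpOnly"], cookies)
# cookies == {"session": "abc123"}
```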
310 | 317 | |
311 | 318 | self._tcp_callbacks = {} |
312 | 319 | self._udp = [] |
313 | lookups = self.lookups = {} | |
314 | self._thread_callbacks = deque() # type: Deque[Callable[[], None]]() | |
320 | lookups = self.lookups = {} # type: Dict[str, str] | |
321 | self._thread_callbacks = deque() # type: Deque[Callable[[], None]] | |
315 | 322 | |
316 | 323 | @implementer(IResolverSimple) |
317 | 324 | class FakeResolver: |
322 | 329 | |
323 | 330 | self.nameResolver = SimpleResolverComplexifier(FakeResolver()) |
324 | 331 | super().__init__() |
332 | ||
333 | def installNameResolver(self, resolver: IHostnameResolver) -> IHostnameResolver: | |
334 | raise NotImplementedError() | |
325 | 335 | |
326 | 336 | def listenUDP(self, port, protocol, interface="", maxPacketSize=8196): |
327 | 337 | p = udp.Port(port, protocol, interface, maxPacketSize, self) |
592 | 602 | if self.disconnected: |
593 | 603 | return |
594 | 604 | |
595 | if getattr(self.other, "transport") is None: | |
605 | if not hasattr(self.other, "transport"): | |
596 | 606 | # the other has no transport yet; reschedule |
597 | 607 | if self.autoflush: |
598 | 608 | self._reactor.callLater(0.0, self.flush) |
620 | 630 | self.disconnected = True |
621 | 631 | |
622 | 632 | |
623 | def connect_client(reactor: IReactorTCP, client_id: int) -> AccumulatingProtocol: | |
633 | def connect_client( | |
634 | reactor: ThreadedMemoryReactorClock, client_id: int | |
635 | ) -> Tuple[IProtocol, AccumulatingProtocol]: | |
624 | 636 | """ |
625 | 637 | Connect a client to a fake TCP transport. |
626 | 638 |
376 | 376 | ####################################################### |
377 | 377 | # deliberately remove e2 (room name) from the _state_group_cache |
378 | 378 | |
379 | ( | |
380 | is_all, | |
381 | known_absent, | |
382 | state_dict_ids, | |
383 | ) = self.state_datastore._state_group_cache.get(group) | |
384 | ||
385 | self.assertEqual(is_all, True) | |
386 | self.assertEqual(known_absent, set()) | |
379 | cache_entry = self.state_datastore._state_group_cache.get(group) | |
380 | state_dict_ids = cache_entry.value | |
381 | ||
382 | self.assertEqual(cache_entry.full, True) | |
383 | self.assertEqual(cache_entry.known_absent, set()) | |
387 | 384 | self.assertDictEqual( |
388 | 385 | state_dict_ids, |
389 | 386 | { |
402 | 399 | fetched_keys=((e1.type, e1.state_key),), |
403 | 400 | ) |
404 | 401 | |
405 | ( | |
406 | is_all, | |
407 | known_absent, | |
408 | state_dict_ids, | |
409 | ) = self.state_datastore._state_group_cache.get(group) | |
410 | ||
411 | self.assertEqual(is_all, False) | |
412 | self.assertEqual(known_absent, {(e1.type, e1.state_key)}) | |
402 | cache_entry = self.state_datastore._state_group_cache.get(group) | |
403 | state_dict_ids = cache_entry.value | |
404 | ||
405 | self.assertEqual(cache_entry.full, False) | |
406 | self.assertEqual(cache_entry.known_absent, {(e1.type, e1.state_key)}) | |
413 | 407 | self.assertDictEqual(state_dict_ids, {(e1.type, e1.state_key): e1.event_id}) |
414 | 408 | |
415 | 409 | ############################################ |
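The state-cache hunks above replace positional tuple unpacking (`is_all, known_absent, state_dict_ids = ...`) with attribute access on a returned entry object. A minimal sketch of such an entry type, using a stdlib dataclass with the field names the tests access (the class name is illustrative; Synapse's actual entry type may differ):

```python
from dataclasses import dataclass
from typing import Any, Dict, Set


@dataclass(frozen=True)
class CacheEntry:
    """Illustrative cache entry mirroring the fields the tests read."""

    full: bool              # does the entry hold the complete dict? (was: is_all)
    known_absent: Set[Any]  # keys known to be missing upstream
    value: Dict[Any, Any]   # the cached key/value pairs (was: state_dict_ids)


# Attribute access instead of tuple unpacking means new fields can be
# added to the entry later without breaking every call site.
entry = CacheEntry(full=True, known_absent=set(), value={("m.room.name", ""): "$ev1"})
```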
31 | 31 | from twisted.trial import unittest |
32 | 32 | from twisted.web.resource import Resource |
33 | 33 | |
34 | from synapse import events | |
34 | 35 | from synapse.api.constants import EventTypes, Membership |
35 | 36 | from synapse.config.homeserver import HomeServerConfig |
36 | 37 | from synapse.config.ratelimiting import FederationRateLimitConfig |
139 | 140 | try: |
140 | 141 | self.assertEquals(attrs[key], getattr(obj, key)) |
141 | 142 | except AssertionError as e: |
142 | raise (type(e))(e.message + " for '.%s'" % key) | |
143 | raise (type(e))("Assert error for '.{}':".format(key)) from e | |
143 | 144 | |
144 | 145 | def assert_dict(self, required, actual): |
145 | 146 | """Does a partial assert of a dict. |
227 | 228 | self.reactor, self.clock = get_clock() |
228 | 229 | self._hs_args = {"clock": self.clock, "reactor": self.reactor} |
229 | 230 | self.hs = self.make_homeserver(self.reactor, self.clock) |
231 | ||
232 | # Honour the `use_frozen_dicts` config option. We have to do this | |
233 | # manually because this is taken care of in the app `start` code, which | |
234 | # we don't run. Plus we want to reset it on tearDown. | |
235 | events.USE_FROZEN_DICTS = self.hs.config.use_frozen_dicts | |
230 | 236 | |
231 | 237 | if self.hs is None: |
232 | 238 | raise Exception("No homeserver returned from make_homeserver.") |
290 | 296 | |
291 | 297 | if hasattr(self, "prepare"): |
292 | 298 | self.prepare(self.reactor, self.clock, self.hs) |
299 | ||
300 | def tearDown(self): | |
301 | # Reset to not use frozen dicts. | |
302 | events.USE_FROZEN_DICTS = False | |
293 | 303 | |
294 | 304 | def wait_on_thread(self, deferred, timeout=10): |
295 | 305 | """ |
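The new setUp/tearDown lines toggle `events.USE_FROZEN_DICTS` around each test, so event contents become read-only when the config option asks for it. The effect can be sketched with the stdlib's read-only mapping proxy (Synapse uses a dedicated frozen-dict type; the proxy here is only an analogy):

```python
from types import MappingProxyType

# A writable dict wrapped in a read-only view: lookups behave as before,
# but any in-place mutation raises TypeError.
content = {"body": "hello"}
frozen = MappingProxyType(content)

try:
    frozen["body"] = "tampered"  # type: ignore[index]
except TypeError:
    mutated = False  # expected: the view rejects assignment
else:
    mutated = True
```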
26 | 26 | key = "test_simple_cache_hit_full" |
27 | 27 | |
28 | 28 | v = self.cache.get(key) |
29 | self.assertEqual((False, set(), {}), v) | |
29 | self.assertIs(v.full, False) | |
30 | self.assertEqual(v.known_absent, set()) | |
31 | self.assertEqual({}, v.value) | |
30 | 32 | |
31 | 33 | seq = self.cache.sequence |
32 | 34 | test_value = {"test": "test_simple_cache_hit_full"} |