matrix-synapse / ee6bae0
Update upstream source from tag 'upstream/1.31.0'
Update to upstream version '1.31.0' with Debian dir 8dd90b94dcb6d9718ae30723546f2bcc1b556a77
Andrej Shadura, 3 years ago
174 changed file(s) with 2706 addition(s) and 809 deletion(s).
0 #!/bin/bash
0 #!/usr/bin/env bash
11
22 # this script is run by buildkite in a plain `xenial` container; it installs the
33 # minimal requirements for tox and hands over to the py35-old tox environment.
0 #!/bin/bash
0 #!/usr/bin/env bash
11 #
22 # Test script for 'synapse_port_db', which creates a virtualenv, installs Synapse along
33 # with additional dependencies needed for the test (such as coverage or the PostgreSQL
0 Synapse 1.31.0 (2021-04-06)
1 ===========================
2
3 **Note:** As announced in v1.25.0, and in line with the deprecation policy for platform dependencies, this is the last release to support Python 3.5 and PostgreSQL 9.5. Future versions of Synapse will require Python 3.6+ and PostgreSQL 9.6+, as per our [deprecation policy](docs/deprecation_policy.md).
4
5 This is also the last release that the Synapse team will be publishing packages for Debian Stretch and Ubuntu Xenial.
6
7
8 Improved Documentation
9 ----------------------
10
11 - Add a document describing the deprecation policy for platform dependencies. ([\#9723](https://github.com/matrix-org/synapse/issues/9723))
12
13
14 Internal Changes
15 ----------------
16
17 - Revert using `dmypy run` in lint script. ([\#9720](https://github.com/matrix-org/synapse/issues/9720))
18 - Pin flake8-bugbear's version. ([\#9734](https://github.com/matrix-org/synapse/issues/9734))
19
20
21 Synapse 1.31.0rc1 (2021-03-30)
22 ==============================
23
24 Features
25 --------
26
27 - Add support to OpenID Connect login for requiring attributes on the `userinfo` response. Contributed by Hubbe King. ([\#9609](https://github.com/matrix-org/synapse/issues/9609))
28 - Add initial experimental support for a "space summary" API. ([\#9643](https://github.com/matrix-org/synapse/issues/9643), [\#9652](https://github.com/matrix-org/synapse/issues/9652), [\#9653](https://github.com/matrix-org/synapse/issues/9653))
29 - Add support for the busy presence state as described in [MSC3026](https://github.com/matrix-org/matrix-doc/pull/3026). ([\#9644](https://github.com/matrix-org/synapse/issues/9644))
30 - Add support for credentials for proxy authentication in the `HTTPS_PROXY` environment variable. ([\#9657](https://github.com/matrix-org/synapse/issues/9657))
31
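
For illustration, the `HTTPS_PROXY` credential support above means a value such as `http://user:secret@proxy.example.com:3128` is now honoured. Below is a minimal sketch of how such a URL decomposes into a Basic `Proxy-Authorization` header (placeholder host and credentials; this is not Synapse's exact code path):

```python
import base64
from urllib.parse import urlparse

# Placeholder proxy URL carrying credentials, as now accepted in HTTPS_PROXY.
proxy_url = "http://user:secret@proxy.example.com:3128"
parsed = urlparse(proxy_url)

# HTTP proxies authenticate with a Proxy-Authorization header containing
# base64("username:password"), i.e. RFC 7617 Basic auth.
credentials = "{}:{}".format(parsed.username, parsed.password).encode("ascii")
proxy_auth = b"Basic " + base64.b64encode(credentials)

print(parsed.hostname, parsed.port, proxy_auth.decode("ascii"))
```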
32
33 Bugfixes
34 --------
35
36 - Fix a longstanding bug that could cause issues when editing a reply to a message. ([\#9585](https://github.com/matrix-org/synapse/issues/9585))
37 - Fix the `/capabilities` endpoint to return `m.change_password` as disabled if the local password database is not used for authentication. Contributed by @dklimpel. ([\#9588](https://github.com/matrix-org/synapse/issues/9588))
38 - Check if local passwords are enabled before setting them for the user. ([\#9636](https://github.com/matrix-org/synapse/issues/9636))
39 - Fix a bug where federation sending can stall due to `concurrent access` database exceptions when it falls behind. ([\#9639](https://github.com/matrix-org/synapse/issues/9639))
40 - Fix a bug introduced in Synapse 1.30.1 which meant the suggested `pip` incantation to install an updated `cryptography` was incorrect. ([\#9699](https://github.com/matrix-org/synapse/issues/9699))
41
42
43 Updates to the Docker image
44 ---------------------------
45
46 - Speed up Docker builds and make it nicer to test against Complement while developing (install all dependencies before copying the project). ([\#9610](https://github.com/matrix-org/synapse/issues/9610))
47 - Include [opencontainers labels](https://github.com/opencontainers/image-spec/blob/master/annotations.md#pre-defined-annotation-keys) in the Docker image. ([\#9612](https://github.com/matrix-org/synapse/issues/9612))
48
49
50 Improved Documentation
51 ----------------------
52
53 - Clarify that `register_new_matrix_user` is also present when installed via a non-pip package. ([\#9074](https://github.com/matrix-org/synapse/issues/9074))
54 - Update source install documentation to mention platform prerequisites before the source install steps. ([\#9667](https://github.com/matrix-org/synapse/issues/9667))
55 - Improve worker documentation for fallback/web auth endpoints. ([\#9679](https://github.com/matrix-org/synapse/issues/9679))
56 - Update the sample configuration for OIDC authentication. ([\#9695](https://github.com/matrix-org/synapse/issues/9695))
57
58
59 Internal Changes
60 ----------------
61
62 - Preparatory steps for removing redundant `outlier` data from `event_json.internal_metadata` column. ([\#9411](https://github.com/matrix-org/synapse/issues/9411))
63 - Add type hints to the caching module. ([\#9442](https://github.com/matrix-org/synapse/issues/9442))
64 - Introduce flake8-bugbear to the test suite and fix some of its lint violations. ([\#9499](https://github.com/matrix-org/synapse/issues/9499), [\#9659](https://github.com/matrix-org/synapse/issues/9659))
65 - Add additional type hints to the Homeserver object. ([\#9631](https://github.com/matrix-org/synapse/issues/9631), [\#9638](https://github.com/matrix-org/synapse/issues/9638), [\#9675](https://github.com/matrix-org/synapse/issues/9675), [\#9681](https://github.com/matrix-org/synapse/issues/9681))
66 - Only save remote cross-signing and device keys if they're different from the current ones. ([\#9634](https://github.com/matrix-org/synapse/issues/9634))
67 - Rename storage function to fix spelling and not conflict with another function's name. ([\#9637](https://github.com/matrix-org/synapse/issues/9637))
68 - Improve performance of federation catch up by sending the latest events in the room to the remote, rather than just the last event sent by the local server. ([\#9640](https://github.com/matrix-org/synapse/issues/9640), [\#9664](https://github.com/matrix-org/synapse/issues/9664))
69 - In the `federation_client` commandline client, stop automatically adding the URL prefix, so that servlets on other prefixes can be tested. ([\#9645](https://github.com/matrix-org/synapse/issues/9645))
70 - In the `federation_client` commandline client, handle inline `signing_key`s in `homeserver.yaml`. ([\#9647](https://github.com/matrix-org/synapse/issues/9647))
71 - Fixed some antipattern issues to improve code quality. ([\#9649](https://github.com/matrix-org/synapse/issues/9649))
72 - Add a storage method for pulling all current user presence state from the database. ([\#9650](https://github.com/matrix-org/synapse/issues/9650))
73 - Import `HomeServer` from the proper module. ([\#9665](https://github.com/matrix-org/synapse/issues/9665))
74 - Increase default join ratelimiting burst rate. ([\#9674](https://github.com/matrix-org/synapse/issues/9674))
75 - Add type hints to third party event rules and visibility modules. ([\#9676](https://github.com/matrix-org/synapse/issues/9676))
76 - Bump mypy-zope to 0.2.13 to fix "Cannot determine consistent method resolution order (MRO)" errors when running mypy a second time. ([\#9678](https://github.com/matrix-org/synapse/issues/9678))
77 - Use interpreter from `$PATH` via `/usr/bin/env` instead of absolute paths in various scripts. ([\#9689](https://github.com/matrix-org/synapse/issues/9689))
78 - Make it possible to use `dmypy`. ([\#9692](https://github.com/matrix-org/synapse/issues/9692))
79 - Suppress "CryptographyDeprecationWarning: int_from_bytes is deprecated". ([\#9698](https://github.com/matrix-org/synapse/issues/9698))
80 - Use `dmypy run` in lint script for improved performance in type-checking while developing. ([\#9701](https://github.com/matrix-org/synapse/issues/9701))
81 - Fix undetected mypy error when using Python 3.6. ([\#9703](https://github.com/matrix-org/synapse/issues/9703))
82 - Fix type-checking CI on develop. ([\#9709](https://github.com/matrix-org/synapse/issues/9709))
83
84
85 Synapse 1.30.1 (2021-03-26)
86 ===========================
87
88 This release is identical to Synapse 1.30.0, with the exception of explicitly
89 setting a minimum version of Python's Cryptography library to ensure that users
90 of Synapse are protected from the recent [OpenSSL security advisories](https://mta.openssl.org/pipermail/openssl-announce/2021-March/000198.html),
91 especially CVE-2021-3449.
92
93 Note that Cryptography defaults to bundling its own statically linked copy of
94 OpenSSL, which means that you may not be protected by your operating system's
95 security updates.
96
97 It's also worth noting that Cryptography no longer supports Python 3.5, so
98 admins deploying to older environments may not be protected against this or
99 future vulnerabilities. Synapse will be dropping support for Python 3.5 at the
100 end of March.
101
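For admins who want to confirm their deployment picked up a patched build, a quick check can be run from inside Synapse's virtualenv. This sketch assumes cryptography 3.x, where 3.4.7 was the first release to bundle the patched OpenSSL 1.1.1k:

```python
# Print the installed cryptography version and the OpenSSL it was built
# against. On a patched install this should report cryptography >= 3.4.7
# and OpenSSL 1.1.1k or later.
import cryptography
from cryptography.hazmat.backends.openssl.backend import backend

print("cryptography:", cryptography.__version__)
print("OpenSSL:", backend.openssl_version_text())
```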
102
103 Updates to the Docker image
104 ---------------------------
105
106 - Ensure that the docker container has up to date versions of openssl. ([\#9697](https://github.com/matrix-org/synapse/issues/9697))
107
108
109 Internal Changes
110 ----------------
111
112 - Enforce that the `cryptography` dependency is up to date to ensure it has the most recent openssl patches. ([\#9697](https://github.com/matrix-org/synapse/issues/9697))
113
114
0115 Synapse 1.30.0 (2021-03-22)
1116 ===========================
2117
55 - [Choosing your server name](#choosing-your-server-name)
66 - [Installing Synapse](#installing-synapse)
77 - [Installing from source](#installing-from-source)
8 - [Platform-Specific Instructions](#platform-specific-instructions)
8 - [Platform-specific prerequisites](#platform-specific-prerequisites)
99 - [Debian/Ubuntu/Raspbian](#debianubunturaspbian)
1010 - [ArchLinux](#archlinux)
1111 - [CentOS/Fedora](#centosfedora)
3737 - [URL previews](#url-previews)
3838 - [Troubleshooting Installation](#troubleshooting-installation)
3939
40
4041 ## Choosing your server name
4142
4243 It is important to choose the name for your server before you install Synapse,
5960
6061 (Prebuilt packages are available for some platforms - see [Prebuilt packages](#prebuilt-packages).)
6162
63 When installing from source, please make sure that the [Platform-specific prerequisites](#platform-specific-prerequisites) are already installed.
64
6265 System requirements:
6366
6467 - POSIX-compliant system (tested on Linux & OS X)
6568 - Python 3.5.2 or later, up to Python 3.9.
6669 - At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
6770
68 Synapse is written in Python but some of the libraries it uses are written in
69 C. So before we can install Synapse itself we need a working C compiler and the
70 header files for Python C extensions. See [Platform-Specific
71 Instructions](#platform-specific-instructions) for information on installing
72 these on various platforms.
7371
7472 To install the Synapse homeserver run:
7573
127125 synctl start
128126 ```
129127
130 #### Platform-Specific Instructions
128 #### Platform-specific prerequisites
129
130 Synapse is written in Python but some of the libraries it uses are written in
131 C. So before we can install Synapse itself we need a working C compiler and the
132 header files for Python C extensions.
131133
132134 ##### Debian/Ubuntu/Raspbian
133135
525527
526528 The easiest way to create a new user is to do so from a client like [Element](https://element.io/).
527529
528 Alternatively you can do so from the command line if you have installed via pip.
529
530 This can be done as follows:
531
532 ```sh
533 $ source ~/synapse/env/bin/activate
534 $ synctl start # if not already running
535 $ register_new_matrix_user -c homeserver.yaml http://localhost:8008
530 Alternatively, you can create one from the command line, as follows:
531
532 1. If Synapse was installed via pip, activate the virtualenv as follows (if Synapse was
533 installed via a prebuilt package, `register_new_matrix_user` should already be
534 on the search path):
535 ```sh
536 cd ~/synapse
537 source env/bin/activate
538 synctl start # if not already running
539 ```
540 2. Run the following command:
541 ```sh
542 register_new_matrix_user -c homeserver.yaml http://localhost:8008
543 ```
544
545 This will prompt you to add details for the new user, and will then connect to
546 the running Synapse to create the new user. For example:
547 ```
536548 New user localpart: erikj
537549 Password:
538550 Confirm password:
313313 Client-Server API are functioning correctly. See the `installation instructions
314314 <https://github.com/matrix-org/sytest#installing>`_ for details.
315315
316
317 Platform dependencies
318 =====================
319
320 Synapse uses a number of platform dependencies such as Python and PostgreSQL,
321 and aims to follow supported upstream versions. See the
322 `<docs/deprecation_policy.md>`_ document for more details.
323
324
316325 Troubleshooting
317326 ===============
318327
388397 People can't accept room invitations from me
389398 --------------------------------------------
390399
391 The typical failure mode here is that you send an invitation to someone
400 The typical failure mode here is that you send an invitation to someone
392401 to join a room or direct chat, but when they go to accept it, they get an
393402 error (typically along the lines of "Invalid signature"). They might see
394403 something like the following in their logs::
9797
9898 To avoid the warning, administrators using a reverse proxy should ensure that
9999 the reverse proxy sets `X-Forwarded-Proto` header to `https` or `http` to
100 indicate the protocol used by the client. See the `reverse proxy documentation
101 <docs/reverse_proxy.md>`_, where the example configurations have been updated to
102 show how to set this header.
100 indicate the protocol used by the client.
101
102 Synapse also requires the `Host` header to be preserved.
103
104 See the `reverse proxy documentation <docs/reverse_proxy.md>`_, where the
105 example configurations have been updated to show how to set these headers.
103106
104107 (Users of `Caddy <https://caddyserver.com/>`_ are unaffected, since we believe it
105108 sets `X-Forwarded-Proto` by default.)
0 #!/bin/bash
0 #!/usr/bin/env bash
11
22 # this script will use the api:
33 # https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst
0 #!/bin/bash
0 #!/usr/bin/env bash
11
22 DOMAIN=yourserver.tld
33 # add this user as admin in your home server:
0 #!/bin/bash
0 #!/usr/bin/env bash
11
22 set -e
33
0 #!/bin/bash
0 #!/usr/bin/env bash
11
22 DIR="$( cd "$( dirname "$0" )" && pwd )"
33
0 #!/bin/bash
0 #!/usr/bin/env bash
11
22 DIR="$( cd "$( dirname "$0" )" && pwd )"
33
1717 ###
1818 FROM docker.io/python:${PYTHON_VERSION}-slim as builder
1919
20 LABEL org.opencontainers.image.url='https://matrix.org/docs/projects/server/synapse'
21 LABEL org.opencontainers.image.documentation='https://github.com/matrix-org/synapse/blob/master/docker/README.md'
22 LABEL org.opencontainers.image.source='https://github.com/matrix-org/synapse.git'
23 LABEL org.opencontainers.image.licenses='Apache-2.0'
24
2025 # install the OS build deps
2126 RUN apt-get update && apt-get install -y \
2227 build-essential \
2732 libwebp-dev \
2833 libxml++2.6-dev \
2934 libxslt1-dev \
35 openssl \
3036 rustc \
3137 zlib1g-dev \
32 && rm -rf /var/lib/apt/lists/*
38 && rm -rf /var/lib/apt/lists/*
3339
34 # Build dependencies that are not available as wheels, to speed up rebuilds
35 RUN pip install --prefix="/install" --no-warn-script-location \
36 cryptography \
37 frozendict \
38 jaeger-client \
39 opentracing \
40 # Match the version constraints of Synapse
41 "prometheus_client>=0.4.0" \
42 psycopg2 \
43 pycparser \
44 pyrsistent \
45 pyyaml \
46 simplejson \
47 threadloop \
48 thrift
49
50 # now install synapse and all of the python deps to /install.
51 COPY synapse /synapse/synapse/
40 # Copy just what we need to pip install
5241 COPY scripts /synapse/scripts/
5342 COPY MANIFEST.in README.rst setup.py synctl /synapse/
43 COPY synapse/__init__.py /synapse/synapse/__init__.py
44 COPY synapse/python_dependencies.py /synapse/synapse/python_dependencies.py
5445
46 # To speed up rebuilds, install all of the dependencies before we copy over
47 # the whole synapse project, so that this layer in the Docker cache can be
48 # used while you develop on the source
49 #
50 # This aims to install the `install_requires` and `extras_require` from `setup.py`
5551 RUN pip install --prefix="/install" --no-warn-script-location \
56 /synapse[all]
52 /synapse[all]
53
54 # Copy over the rest of the project
55 COPY synapse /synapse/synapse/
56
57 # Install the synapse package itself and all of its child packages.
58 #
59 # This aims to install only the `packages=find_packages(...)` from `setup.py`.
60 RUN pip install --prefix="/install" --no-deps --no-warn-script-location /synapse
5761
5862 ###
5963 ### Stage 1: runtime
6973 libwebp6 \
7074 xmlsec1 \
7175 libjemalloc2 \
72 && rm -rf /var/lib/apt/lists/*
76 libssl-dev \
77 openssl \
78 && rm -rf /var/lib/apt/lists/*
7379
7480 COPY --from=builder /install /usr/local
7581 COPY ./docker/start.py /start.py
8288 ENTRYPOINT ["/start.py"]
8389
8490 HEALTHCHECK --interval=1m --timeout=5s \
85 CMD curl -fSs http://localhost:8008/health || exit 1
91 CMD curl -fSs http://localhost:8008/health || exit 1
0 #!/bin/bash
0 #!/usr/bin/env bash
11
22 # The script to build the Debian package, as run inside the Docker image.
33
0 #!/bin/bash
0 #!/usr/bin/env bash
11
22 # This script runs the PostgreSQL tests inside a Docker container. It expects
33 # the relevant source files to be mounted into /src (done automatically by the
0 Deprecation Policy for Platform Dependencies
1 ============================================
2
3 Synapse has a number of platform dependencies, including Python and PostgreSQL.
4 This document outlines our policy on which versions we support, and when we
5 drop support for versions in the future.
6
7
8 Policy
9 ------
10
11 Synapse follows the upstream support life cycles for Python and PostgreSQL,
12 i.e. when a version reaches End of Life, Synapse will withdraw support for that
13 version in future releases.
14
15 Details on the upstream support life cycles for Python and PostgreSQL are
16 documented at https://endoflife.date/python and
17 https://endoflife.date/postgresql.
18
19
20 Context
21 -------
22
23 It is important for system admins to have a clear understanding of the platform
24 requirements of Synapse and its deprecation policies so that they can
25 effectively plan upgrading their infrastructure ahead of time. This is
26 especially important in contexts where upgrading the infrastructure requires
27 auditing and approval from a security team, or where upgrading is otherwise a
28 long process.
29
30 By following the upstream support life cycles Synapse can ensure that its
31 dependencies continue to get security patches, while not requiring system admins
32 to constantly update their platform dependencies to the latest versions.
103103 ```
104104 <VirtualHost *:443>
105105 SSLEngine on
106 ServerName matrix.example.com;
106 ServerName matrix.example.com
107107
108108 RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
109109 AllowEncodedSlashes NoDecode
110 ProxyPreserveHost on
110111 ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
111112 ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
112113 ProxyPass /_synapse/client http://127.0.0.1:8008/_synapse/client nocanon
115116
116117 <VirtualHost *:8448>
117118 SSLEngine on
118 ServerName example.com;
119 ServerName example.com
119120
120121 RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
121122 AllowEncodedSlashes NoDecode
133134 SecRuleEngine off
134135 </IfModule>
135136 ```
137
138 **NOTE 3**: A missing `ProxyPreserveHost on` directive can lead to a redirect loop.
136139
137140 ### HAProxy
138141
868868 #rc_joins:
869869 # local:
870870 # per_second: 0.1
871 # burst_count: 3
871 # burst_count: 10
872872 # remote:
873873 # per_second: 0.01
874 # burst_count: 3
874 # burst_count: 10
875875 #
876876 #rc_3pid_validation:
877877 # per_second: 0.003
17571757 # Note that, if this is changed, users authenticating via that provider
17581758 # will no longer be recognised as the same user!
17591759 #
1760 # (Use "oidc" here if you are migrating from an old "oidc_config"
1761 # configuration.)
1762 #
17601763 # idp_name: A user-facing name for this identity provider, which is used to
17611764 # offer the user a choice of login mechanisms.
17621765 #
18711874 # When rendering, the Jinja2 templates are given a 'user' variable,
18721875 # which is set to the claims returned by the UserInfo Endpoint and/or
18731876 # in the ID Token.
1877 #
1878 # It is possible to configure Synapse to only allow logins if certain attributes
1879 # match particular values in the OIDC userinfo. The requirements can be listed under
1880 # `attribute_requirements` as shown below. All of the listed attributes must
1881 # match for the login to be permitted. Additional attributes can be added to
1882 # userinfo by expanding the `scopes` section of the OIDC config to retrieve
1883 # additional information from the OIDC provider.
1884 #
1885 # If the OIDC claim is a list, then the attribute must match any value in the list.
1886 # Otherwise, it must exactly match the value of the claim. Using the example
1887 # below, the `family_name` claim MUST be "Stephensson", but the `groups`
1888 # claim MUST contain "admin".
1889 #
1890 # attribute_requirements:
1891 # - attribute: family_name
1892 # value: "Stephensson"
1893 # - attribute: groups
1894 # value: "admin"
18741895 #
18751896 # See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
18761897 # for information on how to configure these options.
19041925 # localpart_template: "{{ user.login }}"
19051926 # display_name_template: "{{ user.name }}"
19061927 # email_template: "{{ user.email }}"
1907
1908 # For use with Keycloak
1909 #
1910 #- idp_id: keycloak
1911 # idp_name: Keycloak
1912 # issuer: "https://127.0.0.1:8443/auth/realms/my_realm_name"
1913 # client_id: "synapse"
1914 # client_secret: "copy secret generated in Keycloak UI"
1915 # scopes: ["openid", "profile"]
1916
1917 # For use with Github
1918 #
1919 #- idp_id: github
1920 # idp_name: Github
1921 # idp_brand: github
1922 # discover: false
1923 # issuer: "https://github.com/"
1924 # client_id: "your-client-id" # TO BE FILLED
1925 # client_secret: "your-client-secret" # TO BE FILLED
1926 # authorization_endpoint: "https://github.com/login/oauth/authorize"
1927 # token_endpoint: "https://github.com/login/oauth/access_token"
1928 # userinfo_endpoint: "https://api.github.com/user"
1929 # scopes: ["read:user"]
1930 # user_mapping_provider:
1931 # config:
1932 # subject_claim: "id"
1933 # localpart_template: "{{ user.login }}"
1934 # display_name_template: "{{ user.name }}"
1928 # attribute_requirements:
1929 # - attribute: userGroup
1930 # value: "synapseUsers"
19351931
19361932
19371933 # Enable Central Authentication Service (CAS) for registration and login.
231231 # Registration/login requests
232232 ^/_matrix/client/(api/v1|r0|unstable)/login$
233233 ^/_matrix/client/(r0|unstable)/register$
234 ^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$
235234
236235 # Event sending requests
237236 ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact
275274
276275 Ensure that all SSO logins go to a single process.
277276 For multiple workers not handling the SSO endpoints properly, see
278 [#7530](https://github.com/matrix-org/synapse/issues/7530) and
277 [#7530](https://github.com/matrix-org/synapse/issues/7530) and
279278 [#9427](https://github.com/matrix-org/synapse/issues/9427).
280279
281280 Note that a HTTP listener with `client` and `federation` resources must be
00 [mypy]
11 namespace_packages = True
22 plugins = mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
3 follow_imports = silent
3 follow_imports = normal
44 check_untyped_defs = True
55 show_error_codes = True
66 show_traceback = True
77 mypy_path = stubs
88 warn_unreachable = True
9 local_partial_types = True
910
1011 # To find all folders that pass mypy you run:
1112 #
1920 synapse/crypto,
2021 synapse/event_auth.py,
2122 synapse/events/builder.py,
23 synapse/events/spamcheck.py,
24 synapse/events/third_party_rules.py,
2225 synapse/events/validator.py,
23 synapse/events/spamcheck.py,
2426 synapse/federation,
2527 synapse/groups,
2628 synapse/handlers,
3739 synapse/push,
3840 synapse/replication,
3941 synapse/rest,
42 synapse/secrets.py,
4043 synapse/server.py,
4144 synapse/server_notices,
4245 synapse/spam_checker_api,
7073 synapse/util/metrics.py,
7174 synapse/util/macaroons.py,
7275 synapse/util/stringutils.py,
76 synapse/visibility.py,
7377 tests/replication,
7478 tests/test_utils,
7579 tests/handlers/test_password_providers.py,
5050 parts = line.split("|")
5151 if len(parts) != 2:
5252 print("Unable to parse input line %s" % line, file=sys.stderr)
53 exit(1)
53 sys.exit(1)
5454
5555 move_media(parts[0], parts[1], src_paths, dest_paths)
5656
0 #!/bin/bash
0 #!/usr/bin/env bash
11 #
22 # A script which checks that an appropriate news file has been added on this
33 # branch.
0 #!/bin/bash
0 #!/usr/bin/env bash
11 # Find linting errors in Synapse's default config file.
22 # Exits with 0 if there are no problems, or another code otherwise.
33
2121 from typing import Any, Optional
2222 from urllib import parse as urlparse
2323
24 import nacl.signing
2524 import requests
25 import signedjson.key
2626 import signedjson.types
2727 import srvlookup
2828 import yaml
4141 output_bytes = base64.b64encode(input_bytes)
4242 output_string = output_bytes[:output_len].decode("ascii")
4343 return output_string
44
45
46 def decode_base64(input_string):
47 """Decode a base64 string to bytes inferring padding from the length of the
48 string."""
49
50 input_bytes = input_string.encode("ascii")
51 input_len = len(input_bytes)
52 padding = b"=" * (3 - ((input_len + 3) % 4))
53 output_len = 3 * ((input_len + 2) // 4) + (input_len + 2) % 4 - 2
54 output_bytes = base64.b64decode(input_bytes + padding)
55 return output_bytes[:output_len]
5644
5745
5846 def encode_canonical_json(value):
8573 json_object["unsigned"] = unsigned
8674
8775 return json_object
88
89
90 NACL_ED25519 = "ed25519"
91
92
93 def decode_signing_key_base64(algorithm, version, key_base64):
94 """Decode a base64 encoded signing key
95 Args:
96 algorithm (str): The algorithm the key is for (currently "ed25519").
97 version (str): Identifies this key out of the keys for this entity.
98 key_base64 (str): Base64 encoded bytes of the key.
99 Returns:
100 A SigningKey object.
101 """
102 if algorithm == NACL_ED25519:
103 key_bytes = decode_base64(key_base64)
104 key = nacl.signing.SigningKey(key_bytes)
105 key.version = version
106 key.alg = NACL_ED25519
107 return key
108 else:
109 raise ValueError("Unsupported algorithm %s" % (algorithm,))
110
111
112 def read_signing_keys(stream):
113 """Reads a list of keys from a stream
114 Args:
115 stream : A stream to iterate for keys.
116 Returns:
117 list of SigningKey objects.
118 """
119 keys = []
120 for line in stream:
121 algorithm, version, key_base64 = line.split()
122 keys.append(decode_signing_key_base64(algorithm, version, key_base64))
123 return keys
12476
12577
12678 def request(
222174 parser.add_argument("--body", help="Data to send as the body of the HTTP request")
223175
224176 parser.add_argument(
225 "path", help="request path. We will add '/_matrix/federation/v1/' to this."
177 "path", help="request path, including the '/_matrix/federation/...' prefix."
226178 )
227179
228180 args = parser.parse_args()
229181
230 if not args.server_name or not args.signing_key_path:
182 args.signing_key = None
183 if args.signing_key_path:
184 with open(args.signing_key_path) as f:
185 args.signing_key = f.readline()
186
187 if not args.server_name or not args.signing_key:
231188 read_args_from_config(args)
232189
233 with open(args.signing_key_path) as f:
234 key = read_signing_keys(f)[0]
190 algorithm, version, key_base64 = args.signing_key.split()
191 key = signedjson.key.decode_signing_key_base64(algorithm, version, key_base64)
235192
236193 result = request(
237194 args.method,
238195 args.server_name,
239196 key,
240197 args.destination,
241 "/_matrix/federation/v1/" + args.path,
198 args.path,
242199 content=args.body,
243200 )
244201
254211 def read_args_from_config(args):
255212 with open(args.config, "r") as fh:
256213 config = yaml.safe_load(fh)
214
257215 if not args.server_name:
258216 args.server_name = config["server_name"]
259 if not args.signing_key_path:
260 args.signing_key_path = config["signing_key_path"]
217
218 if not args.signing_key:
219 if "signing_key" in config:
220 args.signing_key = config["signing_key"]
221 else:
222 with open(config["signing_key_path"]) as f:
223 args.signing_key = f.readline()
261224
262225
263226 class MatrixConnectionAdapter(HTTPAdapter):
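
The net effect of the changes above is that `federation_client` accepts either an inline `signing_key` or a `signing_key_path` in `homeserver.yaml`, with both forms ending up in the same one-line `<algorithm> <version> <base64>` format parsed by `signedjson`. A self-contained sketch of that round trip using a throwaway key:

```python
import signedjson.key

# Generate a throwaway ed25519 key and serialise it in the one-line
# "<algorithm> <version> <base64>" format used for inline signing_key values.
key = signedjson.key.generate_signing_key("a_version")
line = "%s %s %s" % (
    key.alg,
    key.version,
    signedjson.key.encode_signing_key_base64(key),
)

# Mirrors what the script now does with args.signing_key:
algorithm, version, key_base64 = line.split()
parsed = signedjson.key.decode_signing_key_base64(algorithm, version, key_base64)
assert parsed.version == key.version
```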
0 #!/bin/bash
0 #!/usr/bin/env bash
11 #
22 # Update/check the docs/sample_config.yaml
33
0 #!/bin/bash
0 #!/usr/bin/env bash
11 #
22 # Runs linting scripts over the local Synapse checkout
33 # isort - sorts import statements
0 #!/bin/bash
0 #!/usr/bin/env bash
11 #
22 # This script generates SQL files for creating a brand new Synapse DB with the latest
33 # schema, on both SQLite3 and Postgres.
0 #!/bin/bash
0 #!/usr/bin/env bash
11
22 set -e
33
55 # next PR number.
66 CURRENT_NUMBER=`curl -s "https://api.github.com/repos/matrix-org/synapse/issues?state=all&per_page=1" | jq -r ".[0].number"`
77 CURRENT_NUMBER=$((CURRENT_NUMBER+1))
8 echo $CURRENT_NUMBER
8 echo $CURRENT_NUMBER
1717 # E203: whitespace before ':' (which is contrary to pep8?)
1818 # E731: do not assign a lambda expression, use a def
1919 # E501: Line too long (black enforces this for us)
20 ignore=W503,W504,E203,E731,E501
20 # B00*: Subsection of the bugbear suite (TODO: add in remaining fixes)
21 ignore=W503,W504,E203,E731,E501,B006,B007,B008
2122
2223 [isort]
2324 line_length = 88
9898 "isort==5.7.0",
9999 "black==20.8b1",
100100 "flake8-comprehensions",
101 "flake8-bugbear==21.3.2",
101102 "flake8",
102103 ]
103104
104 CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.812", "mypy-zope==0.2.11"]
105 CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.812", "mypy-zope==0.2.13"]
105106
106107 # Dependencies which are exclusively required by unit test code. This is
107108 # NOT a list of all modules that are necessary to run the unit tests.
4747 except ImportError:
4848 pass
4949
50 __version__ = "1.30.0"
50 __version__ = "1.31.0"
5151
5252 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
5353 # We import here so that we don't have to install a bunch of deps when
557557 Returns:
558558 bool: False if no access_token was given, True otherwise.
559559 """
560 # This will always be set by the time Twisted calls us.
561 assert request.args is not None
562
560563 query_params = request.args.get(b"access_token")
561564 auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
562565 return bool(query_params) or bool(auth_headers)
573576 MissingClientTokenError: If there isn't a single access_token in the
574577 request
575578 """
579 # This will always be set by the time Twisted calls us.
580 assert request.args is not None
576581
577582 auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
578583 query_params = request.args.get(b"access_token")
5050 OFFLINE = "offline"
5151 UNAVAILABLE = "unavailable"
5252 ONLINE = "online"
53 BUSY = "org.matrix.msc3026.busy"
5354
5455
5556 class JoinRules:
9899 Retention = "m.room.retention"
99100
100101 Dummy = "org.matrix.dummy_event"
102
103 MSC1772_SPACE_CHILD = "org.matrix.msc1772.space.child"
104 MSC1772_SPACE_PARENT = "org.matrix.msc1772.space.parent"
101105
102106
103107 class EduTypes:
159163 # cf https://github.com/matrix-org/matrix-doc/pull/2228
160164 SELF_DESTRUCT_AFTER = "org.matrix.self_destruct_after"
161165
166 # cf https://github.com/matrix-org/matrix-doc/pull/1772
167 MSC1772_ROOM_TYPE = "org.matrix.msc1772.type"
168
162169
163170 class RoomEncryptionAlgorithms:
164171 MEGOLM_V1_AES_SHA2 = "m.megolm.v1.aes-sha2"
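
Once a server sets `msc3026_enabled`, a client can select the new busy state through the standard Client-Server presence endpoint. A hedged sketch using `requests`, with placeholder homeserver, user ID and access token:

```python
import requests

HOMESERVER = "https://matrix.example.com"  # placeholder homeserver
USER_ID = "@alice:example.com"             # placeholder user
ACCESS_TOKEN = "syt_placeholder_token"     # placeholder access token

# PUT /_matrix/client/r0/presence/{userId}/status with the MSC3026 busy state.
# Synapse rejects this as an invalid presence state unless msc3026_enabled is on.
resp = requests.put(
    "{}/_matrix/client/r0/presence/{}/status".format(HOMESERVER, USER_ID),
    headers={"Authorization": "Bearer {}".format(ACCESS_TOKEN)},
    json={"presence": "org.matrix.msc3026.busy"},
)
resp.raise_for_status()
```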
2121 try:
2222 python_dependencies.check_requirements()
2323 except python_dependencies.DependencyException as e:
24 sys.stderr.writelines(e.message)
24 sys.stderr.writelines(
25 e.message # noqa: B306, DependencyException.message is a property
26 )
2527 sys.exit(1)
2628
2729
2020 import socket
2121 import sys
2222 import traceback
23 import warnings
2324 from typing import Awaitable, Callable, Iterable
2425
26 from cryptography.utils import CryptographyDeprecationWarning
2527 from typing_extensions import NoReturn
2628
2729 from twisted.internet import defer, error, reactor
192194 for host in bind_addresses:
193195 logger.info("Starting metrics listener on %s:%d", host, port)
194196 start_http_server(port, addr=host, registry=RegistryProxy)
197
198
199 def listen_manhole(bind_addresses: Iterable[str], port: int, manhole_globals: dict):
200 # twisted.conch.manhole 21.1.0 uses "int_from_bytes", which produces a confusing
201 # warning. It's fixed by https://github.com/twisted/twisted/pull/1522, so
202 # suppress the warning for now.
203 warnings.filterwarnings(
204 action="ignore",
205 category=CryptographyDeprecationWarning,
206 message="int_from_bytes is deprecated",
207 )
208
209 from synapse.util.manhole import manhole
210
211 listen_tcp(
212 bind_addresses,
213 port,
214 manhole(username="matrix", password="rabbithole", globals=manhole_globals),
215 )
195216
196217
197218 def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
146146 from synapse.types import ReadReceipt
147147 from synapse.util.async_helpers import Linearizer
148148 from synapse.util.httpresourcetree import create_resource_tree
149 from synapse.util.manhole import manhole
150149 from synapse.util.versionstring import get_version_string
151150
152151 logger = logging.getLogger("synapse.app.generic_worker")
301300 self.send_stop_syncing, UPDATE_SYNCING_USERS_MS
302301 )
303302
303 self._busy_presence_enabled = hs.config.experimental.msc3026_enabled
304
304305 hs.get_reactor().addSystemEventTrigger(
305306 "before",
306307 "shutdown",
438439 PresenceState.ONLINE,
439440 PresenceState.UNAVAILABLE,
440441 PresenceState.OFFLINE,
442 PresenceState.BUSY,
441443 )
442 if presence not in valid_presence:
444
445 if presence not in valid_presence or (
446 presence == PresenceState.BUSY and not self._busy_presence_enabled
447 ):
443448 raise SynapseError(400, "Invalid presence state")
444449
445450 user_id = target_user.to_string()
633638 if listener.type == "http":
634639 self._listen_http(listener)
635640 elif listener.type == "manhole":
636 _base.listen_tcp(
637 listener.bind_addresses,
638 listener.port,
639 manhole(
640 username="matrix", password="rabbithole", globals={"hs": self}
641 ),
641 _base.listen_manhole(
642 listener.bind_addresses, listener.port, manhole_globals={"hs": self}
642643 )
643644 elif listener.type == "metrics":
644645 if not self.get_config().enable_metrics:
785786
786787 self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer")
787788
788 def on_start(self):
789 # There may be some events that are persisted but haven't been sent,
790 # so send them now.
791 self.federation_sender.notify_new_events(
792 self.store.get_room_max_stream_ordering()
793 )
794
795789 def wake_destination(self, server: str):
796790 self.federation_sender.wake_destination(server)
797791
6666 from synapse.storage.engines import IncorrectDatabaseSetup
6767 from synapse.storage.prepare_database import UpgradeDatabaseException
6868 from synapse.util.httpresourcetree import create_resource_tree
69 from synapse.util.manhole import manhole
7069 from synapse.util.module_loader import load_module
7170 from synapse.util.versionstring import get_version_string
7271
287286 if listener.type == "http":
288287 self._listening_services.extend(self._listener_http(config, listener))
289288 elif listener.type == "manhole":
290 listen_tcp(
291 listener.bind_addresses,
292 listener.port,
293 manhole(
294 username="matrix", password="rabbithole", globals={"hs": self}
295 ),
289 _base.listen_manhole(
290 listener.bind_addresses, listener.port, manhole_globals={"hs": self}
296291 )
297292 elif listener.type == "replication":
298293 services = listen_tcp(
2323 _CACHE_PREFIX = "SYNAPSE_CACHE_FACTOR"
2424
2525 # Map from canonicalised cache name to cache.
26 _CACHES = {}
26 _CACHES = {} # type: Dict[str, Callable[[float], None]]
2727
2828 # a lock on the contents of _CACHES
2929 _CACHES_LOCK = threading.Lock()
5858 return cache_name.lower()
5959
6060
61 def add_resizable_cache(cache_name: str, cache_resize_callback: Callable):
61 def add_resizable_cache(
62 cache_name: str, cache_resize_callback: Callable[[float], None]
63 ):
6264 """Register a cache that's size can dynamically change
6365
6466 Args:
2626
2727 # MSC2858 (multiple SSO identity providers)
2828 self.msc2858_enabled = experimental.get("msc2858_enabled", False) # type: bool
29 # Spaces (MSC1772, MSC2946, etc)
30 self.spaces_enabled = experimental.get("spaces_enabled", False) # type: bool
31 # MSC3026 (busy presence state)
32 self.msc3026_enabled = experimental.get("msc3026_enabled", False) # type: bool
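
These flags are read from the homeserver config; the section name below (`experimental_features`) is inferred from how `ExperimentalConfig` consumes the parsed YAML, so treat it as an assumption. A small sketch of the YAML shape and the defaulting behaviour above:

```python
import yaml

# Hypothetical homeserver.yaml fragment enabling the experimental features.
config_fragment = """
experimental_features:
  spaces_enabled: true
  msc3026_enabled: true
"""

config = yaml.safe_load(config_fragment)
experimental = config.get("experimental_features") or {}

# Mirrors the reads above: absent keys fall back to False.
assert experimental.get("msc3026_enabled", False) is True
assert experimental.get("msc2858_enabled", False) is False
```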
403403 try:
404404 jsonschema.validate(key_servers, TRUSTED_KEY_SERVERS_SCHEMA)
405405 except jsonschema.ValidationError as e:
406 raise ConfigError("Unable to parse 'trusted_key_servers': " + e.message)
406 raise ConfigError(
407 "Unable to parse 'trusted_key_servers': {}".format(
408 e.message # noqa: B306, jsonschema.ValidationError.message is a valid attribute
409 )
410 )
407411
408412 for server in key_servers:
409413 server_name = server["server_name"]
5555 try:
5656 check_requirements("sentry")
5757 except DependencyException as e:
58 raise ConfigError(e.message)
58 raise ConfigError(
59 e.message # noqa: B306, DependencyException.message is a property
60 )
5961
6062 self.sentry_dsn = config["sentry"].get("dsn")
6163 if not self.sentry_dsn:
1414 # limitations under the License.
1515
1616 from collections import Counter
17 from typing import Iterable, Mapping, Optional, Tuple, Type
17 from typing import Iterable, List, Mapping, Optional, Tuple, Type
1818
1919 import attr
2020
2121 from synapse.config._util import validate_config
22 from synapse.config.sso import SsoAttributeRequirement
2223 from synapse.python_dependencies import DependencyException, check_requirements
2324 from synapse.types import Collection, JsonDict
2425 from synapse.util.module_loader import load_module
4041 try:
4142 check_requirements("oidc")
4243 except DependencyException as e:
43 raise ConfigError(e.message) from e
44 raise ConfigError(
45 e.message # noqa: B306, DependencyException.message is a property
46 ) from e
4447
4548 # check we don't have any duplicate idp_ids now. (The SSO handler will also
4649 # check for duplicates when the REST listeners get registered, but that happens
7578 # Note that, if this is changed, users authenticating via that provider
7679 # will no longer be recognised as the same user!
7780 #
81 # (Use "oidc" here if you are migrating from an old "oidc_config"
82 # configuration.)
83 #
7884 # idp_name: A user-facing name for this identity provider, which is used to
7985 # offer the user a choice of login mechanisms.
8086 #
189195 # When rendering, the Jinja2 templates are given a 'user' variable,
190196 # which is set to the claims returned by the UserInfo Endpoint and/or
191197 # in the ID Token.
198 #
199 # It is possible to configure Synapse to only allow logins if certain attributes
200 # match particular values in the OIDC userinfo. The requirements can be listed under
201 # `attribute_requirements` as shown below. All of the listed attributes must
202 # match for the login to be permitted. Additional attributes can be added to
203 # userinfo by expanding the `scopes` section of the OIDC config to retrieve
204 # additional information from the OIDC provider.
205 #
206 # If the OIDC claim is a list, then the attribute must match any value in the list.
207 # Otherwise, it must exactly match the value of the claim. Using the example
208 # below, the `family_name` claim MUST be "Stephensson", but the `groups`
209 # claim MUST contain "admin".
210 #
211 # attribute_requirements:
212 # - attribute: family_name
213 # value: "Stephensson"
214 # - attribute: groups
215 # value: "admin"
192216 #
193217 # See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
194218 # for information on how to configure these options.
222246 # localpart_template: "{{{{ user.login }}}}"
223247 # display_name_template: "{{{{ user.name }}}}"
224248 # email_template: "{{{{ user.email }}}}"
225
226 # For use with Keycloak
227 #
228 #- idp_id: keycloak
229 # idp_name: Keycloak
230 # issuer: "https://127.0.0.1:8443/auth/realms/my_realm_name"
231 # client_id: "synapse"
232 # client_secret: "copy secret generated in Keycloak UI"
233 # scopes: ["openid", "profile"]
234
235 # For use with Github
236 #
237 #- idp_id: github
238 # idp_name: Github
239 # idp_brand: github
240 # discover: false
241 # issuer: "https://github.com/"
242 # client_id: "your-client-id" # TO BE FILLED
243 # client_secret: "your-client-secret" # TO BE FILLED
244 # authorization_endpoint: "https://github.com/login/oauth/authorize"
245 # token_endpoint: "https://github.com/login/oauth/access_token"
246 # userinfo_endpoint: "https://api.github.com/user"
247 # scopes: ["read:user"]
248 # user_mapping_provider:
249 # config:
250 # subject_claim: "id"
251 # localpart_template: "{{{{ user.login }}}}"
252 # display_name_template: "{{{{ user.name }}}}"
249 # attribute_requirements:
250 # - attribute: userGroup
251 # value: "synapseUsers"
253252 """.format(
254253 mapping_provider=DEFAULT_USER_MAPPING_PROVIDER
255254 )
328327 },
329328 "allow_existing_users": {"type": "boolean"},
330329 "user_mapping_provider": {"type": ["object", "null"]},
330 "attribute_requirements": {
331 "type": "array",
332 "items": SsoAttributeRequirement.JSON_SCHEMA,
333 },
331334 },
332335 }
333336
464467 jwt_header=client_secret_jwt_key_config["jwt_header"],
465468 jwt_payload=client_secret_jwt_key_config.get("jwt_payload", {}),
466469 )
470 # parse attribute_requirements from config (list of dicts) into a list of SsoAttributeRequirement
471 attribute_requirements = [
472 SsoAttributeRequirement(**x)
473 for x in oidc_config.get("attribute_requirements", [])
474 ]
467475
468476 return OidcProviderConfig(
469477 idp_id=idp_id,
487495 allow_existing_users=oidc_config.get("allow_existing_users", False),
488496 user_mapping_provider_class=user_mapping_provider_class,
489497 user_mapping_provider_config=user_mapping_provider_config,
498 attribute_requirements=attribute_requirements,
490499 )
491500
492501
576585
577586 # the config of the user mapping provider
578587 user_mapping_provider_config = attr.ib()
588
589 # attributes that must be present in userinfo to allow login/registration
590 attribute_requirements = attr.ib(type=List[SsoAttributeRequirement])
9494
9595 self.rc_joins_local = RateLimitConfig(
9696 config.get("rc_joins", {}).get("local", {}),
97 defaults={"per_second": 0.1, "burst_count": 3},
97 defaults={"per_second": 0.1, "burst_count": 10},
9898 )
9999 self.rc_joins_remote = RateLimitConfig(
100100 config.get("rc_joins", {}).get("remote", {}),
101 defaults={"per_second": 0.01, "burst_count": 3},
101 defaults={"per_second": 0.01, "burst_count": 10},
102102 )
103103
104104 # Ratelimit cross-user key requests:
186186 #rc_joins:
187187 # local:
188188 # per_second: 0.1
189 # burst_count: 3
189 # burst_count: 10
190190 # remote:
191191 # per_second: 0.01
192 # burst_count: 3
192 # burst_count: 10
193193 #
194194 #rc_3pid_validation:
195195 # per_second: 0.003
175175 check_requirements("url_preview")
176176
177177 except DependencyException as e:
178 raise ConfigError(e.message)
178 raise ConfigError(
179 e.message # noqa: B306, DependencyException.message is a property
180 )
179181
180182 if "url_preview_ip_range_blacklist" not in config:
181183 raise ConfigError(
7575 try:
7676 check_requirements("saml2")
7777 except DependencyException as e:
78 raise ConfigError(e.message)
78 raise ConfigError(
79 e.message # noqa: B306, DependencyException.message is a property
80 )
7981
8082 self.saml2_enabled = True
8183
3838 try:
3939 check_requirements("opentracing")
4040 except DependencyException as e:
41 raise ConfigError(e.message)
41 raise ConfigError(
42 e.message # noqa: B306, DependencyException.message is a property
43 )
4244
4345 # The tracer is enabled so sanitize the config
4446
190190 # ... we further assume that SSLClientConnectionCreator has set the
191191 # '_synapse_tls_verifier' attribute to a ConnectionVerifier object.
192192 tls_protocol._synapse_tls_verifier.verify_context_info_cb(ssl_connection, where)
193 except: # noqa: E722, taken from the twisted implementation
193 except BaseException: # taken from the twisted implementation
194194 logger.exception("Error during info_callback")
195195 f = Failure()
196196 tls_protocol.failVerification(f)
218218 # ... and we also gut-wrench a '_synapse_tls_verifier' attribute into the
219219 # tls_protocol so that the SSL context's info callback has something to
220220 # call to do the cert verification.
221 setattr(tls_protocol, "_synapse_tls_verifier", self._verifier)
221 tls_protocol._synapse_tls_verifier = self._verifier
222222 return connection
223223
224224
5656 from synapse.util.retryutils import NotRetryingDestination
5757
5858 if TYPE_CHECKING:
59 from synapse.app.homeserver import HomeServer
59 from synapse.server import HomeServer
6060
6161 logger = logging.getLogger(__name__)
6262
9797
9898
9999 class _EventInternalMetadata:
100 __slots__ = ["_dict", "stream_ordering"]
100 __slots__ = ["_dict", "stream_ordering", "outlier"]
101101
102102 def __init__(self, internal_metadata_dict: JsonDict):
103103 # we have to copy the dict, because it turns out that the same dict is
107107 # the stream ordering of this event. None, until it has been persisted.
108108 self.stream_ordering = None # type: Optional[int]
109109
110 outlier = DictProperty("outlier") # type: bool
110 # whether this event is an outlier (ie, whether we have the state at that point
111 # in the DAG)
112 self.outlier = False
113
111114 out_of_band_membership = DictProperty("out_of_band_membership") # type: bool
112115 send_on_behalf_of = DictProperty("send_on_behalf_of") # type: str
113116 recheck_redaction = DictProperty("recheck_redaction") # type: bool
128131 return dict(self._dict)
129132
130133 def is_outlier(self) -> bool:
131 return self._dict.get("outlier", False)
134 return self.outlier
132135
133136 def is_out_of_band_membership(self) -> bool:
134137 """Whether this is an out of band membership, like an invite or an invite
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414
15 from typing import Callable, Union
15 from typing import TYPE_CHECKING, Union
1616
1717 from synapse.events import EventBase
1818 from synapse.events.snapshot import EventContext
1919 from synapse.types import Requester, StateMap
20
21 if TYPE_CHECKING:
22 from synapse.server import HomeServer
2023
2124
2225 class ThirdPartyEventRules:
2730 behaviours.
2831 """
2932
30 def __init__(self, hs):
33 def __init__(self, hs: "HomeServer"):
3134 self.third_party_rules = None
3235
3336 self.store = hs.get_datastore()
9497 if self.third_party_rules is None:
9598 return True
9699
97 ret = await self.third_party_rules.on_create_room(
100 return await self.third_party_rules.on_create_room(
98101 requester, config, is_requester_admin
99102 )
100 return ret
101103
102104 async def check_threepid_can_be_invited(
103105 self, medium: str, address: str, room_id: str
118120
119121 state_events = await self._get_state_map_for_room(room_id)
120122
121 ret = await self.third_party_rules.check_threepid_can_be_invited(
123 return await self.third_party_rules.check_threepid_can_be_invited(
122124 medium, address, state_events
123125 )
124 return ret
125126
126127 async def check_visibility_can_be_modified(
127128 self, room_id: str, new_visibility: str
142143 check_func = getattr(
143144 self.third_party_rules, "check_visibility_can_be_modified", None
144145 )
145 if not check_func or not isinstance(check_func, Callable):
146 if not check_func or not callable(check_func):
146147 return True
147148
148149 state_events = await self._get_state_map_for_room(room_id)
2121 from synapse.api.errors import Codes, SynapseError
2222 from synapse.api.room_versions import RoomVersion
2323 from synapse.util.async_helpers import yieldable_gather_results
24 from synapse.util.frozenutils import unfreeze
2425
2526 from . import EventBase
2627
5253 pruned_event.internal_metadata.stream_ordering = (
5354 event.internal_metadata.stream_ordering
5455 )
56
57 pruned_event.internal_metadata.outlier = event.internal_metadata.outlier
5558
5659 # Mark the event as redacted
5760 pruned_event.internal_metadata.redacted = True
399402 # If there is an edit replace the content, preserving existing
400403 # relations.
401404
405 # Ensure we take copies of the edit content, otherwise we risk modifying
406 # the original event.
407 edit_content = edit.content.copy()
408
409 # Unfreeze the event content if necessary, so that we may modify it below
410 edit_content = unfreeze(edit_content)
411 serialized_event["content"] = edit_content.get("m.new_content", {})
412
413 # Check for existing relations
402414 relations = event.content.get("m.relates_to")
403 serialized_event["content"] = edit.content.get("m.new_content", {})
404415 if relations:
405 serialized_event["content"]["m.relates_to"] = relations
416 # Keep the relations, ensuring we use a dict copy of the original
417 serialized_event["content"]["m.relates_to"] = relations.copy()
406418 else:
407419 serialized_event["content"].pop("m.relates_to", None)
408420
2626 List,
2727 Mapping,
2828 Optional,
29 Sequence,
2930 Tuple,
3031 TypeVar,
3132 Union,
3233 )
3334
35 import attr
3436 from prometheus_client import Counter
3537
3638 from twisted.internet import defer
6163 from synapse.util.retryutils import NotRetryingDestination
6264
6365 if TYPE_CHECKING:
64 from synapse.app.homeserver import HomeServer
66 from synapse.server import HomeServer
6567
6668 logger = logging.getLogger(__name__)
6769
454456 description: str,
455457 destinations: Iterable[str],
456458 callback: Callable[[str], Awaitable[T]],
459 failover_on_unknown_endpoint: bool = False,
457460 ) -> T:
458461 """Try an operation on a series of servers, until it succeeds
459462
472475 Otherwise, if the callback raises an Exception the error is logged and the
473476 next server tried. Normally the stacktrace is logged but this is
474477 suppressed if the exception is an InvalidResponseError.
478
479 failover_on_unknown_endpoint: if True, we will try other servers if it looks
480 like a server doesn't support the endpoint. This is typically useful
481 if the endpoint in question is new or experimental.
475482
476483 Returns:
477484 The result of callback, if it succeeds
492499 except UnsupportedRoomVersionError:
493500 raise
494501 except HttpResponseException as e:
495 if not 500 <= e.code < 600:
496 raise e.to_synapse_error()
497 else:
498 logger.warning(
499 "Failed to %s via %s: %i %s",
500 description,
501 destination,
502 e.code,
503 e.args[0],
504 )
502 synapse_error = e.to_synapse_error()
503 failover = False
504
505 if 500 <= e.code < 600:
506 failover = True
507
508 elif failover_on_unknown_endpoint:
509 # there is no good way to detect an "unknown" endpoint. Dendrite
510 # returns a 404 (with no body); synapse returns a 400
511 # with M_UNRECOGNISED.
512 if e.code == 404 or (
513 e.code == 400 and synapse_error.errcode == Codes.UNRECOGNIZED
514 ):
515 failover = True
516
517 if not failover:
518 raise synapse_error from e
519
520 logger.warning(
521 "Failed to %s via %s: %i %s",
522 description,
523 destination,
524 e.code,
525 e.args[0],
526 )
505527 except Exception:
506528 logger.warning(
507529 "Failed to %s via %s", description, destination, exc_info=True
10411063 # If we don't manage to find it, return None. It's not an error if a
10421064 # server doesn't give it to us.
10431065 return None
1066
1067 async def get_space_summary(
1068 self,
1069 destinations: Iterable[str],
1070 room_id: str,
1071 suggested_only: bool,
1072 max_rooms_per_space: Optional[int],
1073 exclude_rooms: List[str],
1074 ) -> "FederationSpaceSummaryResult":
1075 """
1076 Call other servers to get a summary of the given space
1077
1078
1079 Args:
1080 destinations: The remote servers. We will try them in turn, omitting any
1081 that have been blacklisted.
1082
1083 room_id: ID of the space to be queried
1084
1085 suggested_only: If true, ask the remote server to only return children
1086 with the "suggested" flag set
1087
1088 max_rooms_per_space: A limit on the number of children to return for each
1089 space
1090
1091 exclude_rooms: A list of room IDs to tell the remote server to skip
1092
1093 Returns:
1094 a parsed FederationSpaceSummaryResult
1095
1096 Raises:
1097 SynapseError if we were unable to get a valid summary from any of the
1098 remote servers
1099 """
1100
1101 async def send_request(destination: str) -> FederationSpaceSummaryResult:
1102 res = await self.transport_layer.get_space_summary(
1103 destination=destination,
1104 room_id=room_id,
1105 suggested_only=suggested_only,
1106 max_rooms_per_space=max_rooms_per_space,
1107 exclude_rooms=exclude_rooms,
1108 )
1109
1110 try:
1111 return FederationSpaceSummaryResult.from_json_dict(res)
1112 except ValueError as e:
1113 raise InvalidResponseError(str(e))
1114
1115 return await self._try_destination_list(
1116 "fetch space summary",
1117 destinations,
1118 send_request,
1119 failover_on_unknown_endpoint=True,
1120 )
1121
1122
1123 @attr.s(frozen=True, slots=True)
1124 class FederationSpaceSummaryEventResult:
1125 """Represents a single event in the result of a successful get_space_summary call.
1126
1127 It's essentially just a serialised event object, but we do a bit of parsing and
1128 validation in `from_json_dict` and store some of the validated properties in
1129 object attributes.
1130 """
1131
1132 event_type = attr.ib(type=str)
1133 state_key = attr.ib(type=str)
1134 via = attr.ib(type=Sequence[str])
1135
1136 # the raw data, including the above keys
1137 data = attr.ib(type=JsonDict)
1138
1139 @classmethod
1140 def from_json_dict(cls, d: JsonDict) -> "FederationSpaceSummaryEventResult":
1141 """Parse an event within the result of a /spaces/ request
1142
1143 Args:
1144 d: json object to be parsed
1145
1146 Raises:
1147 ValueError if d is not a valid event
1148 """
1149
1150 event_type = d.get("type")
1151 if not isinstance(event_type, str):
1152 raise ValueError("Invalid event: 'event_type' must be a str")
1153
1154 state_key = d.get("state_key")
1155 if not isinstance(state_key, str):
1156 raise ValueError("Invalid event: 'state_key' must be a str")
1157
1158 content = d.get("content")
1159 if not isinstance(content, dict):
1160 raise ValueError("Invalid event: 'content' must be a dict")
1161
1162 via = content.get("via")
1163 if not isinstance(via, Sequence):
1164 raise ValueError("Invalid event: 'via' must be a list")
1165 if any(not isinstance(v, str) for v in via):
1166 raise ValueError("Invalid event: 'via' must be a list of strings")
1167
1168 return cls(event_type, state_key, via, d)
1169
1170
1171 @attr.s(frozen=True, slots=True)
1172 class FederationSpaceSummaryResult:
1173 """Represents the data returned by a successful get_space_summary call."""
1174
1175 rooms = attr.ib(type=Sequence[JsonDict])
1176 events = attr.ib(type=Sequence[FederationSpaceSummaryEventResult])
1177
1178 @classmethod
1179 def from_json_dict(cls, d: JsonDict) -> "FederationSpaceSummaryResult":
1180 """Parse the result of a /spaces/ request
1181
1182 Args:
1183 d: json object to be parsed
1184
1185 Raises:
1186 ValueError if d is not a valid /spaces/ response
1187 """
1188 rooms = d.get("rooms")
1189 if not isinstance(rooms, Sequence):
1190 raise ValueError("'rooms' must be a list")
1191 if any(not isinstance(r, dict) for r in rooms):
1192 raise ValueError("Invalid room in 'rooms' list")
1193
1194 events = d.get("events")
1195 if not isinstance(events, Sequence):
1196 raise ValueError("'events' must be a list")
1197 if any(not isinstance(e, dict) for e in events):
1198 raise ValueError("Invalid event in 'events' list")
1199 parsed_events = [
1200 FederationSpaceSummaryEventResult.from_json_dict(e) for e in events
1201 ]
1202
1203 return cls(rooms, parsed_events)
3434 from twisted.internet.abstract import isIPAddress
3535 from twisted.python import failure
3636
37 from synapse.api.constants import EduTypes, EventTypes, Membership
37 from synapse.api.constants import EduTypes, EventTypes
3838 from synapse.api.errors import (
3939 AuthError,
4040 Codes,
6262 ReplicationFederationSendEduRestServlet,
6363 ReplicationGetQueryRestServlet,
6464 )
65 from synapse.types import JsonDict, get_domain_from_id
65 from synapse.types import JsonDict
6666 from synapse.util import glob_to_regex, json_decoder, unwrapFirstError
6767 from synapse.util.async_helpers import Linearizer, concurrently_execute
6868 from synapse.util.caches.response_cache import ResponseCache
726726 if the event was unacceptable for any other reason (eg, too large,
727727 too many prev_events, couldn't find the prev_events)
728728 """
729 # check that it's actually being sent from a valid destination to
730 # workaround bug #1753 in 0.18.5 and 0.18.6
731 if origin != get_domain_from_id(pdu.sender):
732 # We continue to accept join events from any server; this is
733 # necessary for the federation join dance to work correctly.
734 # (When we join over federation, the "helper" server is
735 # responsible for sending out the join event, rather than the
736 # origin. See bug #1893. This is also true for some third party
737 # invites).
738 if not (
739 pdu.type == "m.room.member"
740 and pdu.content
741 and pdu.content.get("membership", None)
742 in (Membership.JOIN, Membership.INVITE)
743 ):
744 logger.info(
745 "Discarding PDU %s from invalid origin %s", pdu.event_id, origin
746 )
747 return
748 else:
749 logger.info("Accepting join PDU %s from %s", pdu.event_id, origin)
750729
751730 # We've already checked that we know the room version by this point
752731 room_version = await self.store.get_room_version(pdu.room_id)
3030
3131 import logging
3232 from collections import namedtuple
33 from typing import Dict, List, Tuple, Type
33 from typing import (
34 TYPE_CHECKING,
35 Dict,
36 Hashable,
37 Iterable,
38 List,
39 Optional,
40 Sized,
41 Tuple,
42 Type,
43 )
3444
3545 from sortedcontainers import SortedDict
3646
37 from twisted.internet import defer
38
3947 from synapse.api.presence import UserPresenceState
48 from synapse.federation.sender import AbstractFederationSender, FederationSender
4049 from synapse.metrics import LaterGauge
50 from synapse.replication.tcp.streams.federation import FederationStream
51 from synapse.types import JsonDict, ReadReceipt, RoomStreamToken
4152 from synapse.util.metrics import Measure
4253
4354 from .units import Edu
4455
56 if TYPE_CHECKING:
57 from synapse.server import HomeServer
58
4559 logger = logging.getLogger(__name__)
4660
4761
48 class FederationRemoteSendQueue:
62 class FederationRemoteSendQueue(AbstractFederationSender):
4963 """A drop in replacement for FederationSender"""
5064
51 def __init__(self, hs):
65 def __init__(self, hs: "HomeServer"):
5266 self.server_name = hs.hostname
5367 self.clock = hs.get_clock()
5468 self.notifier = hs.get_notifier()
5771 # We may have multiple federation sender instances, so we need to track
5872 # their positions separately.
5973 self._sender_instances = hs.config.worker.federation_shard_config.instances
60 self._sender_positions = {}
74 self._sender_positions = {} # type: Dict[str, int]
6175
6276 # Pending presence map user_id -> UserPresenceState
6377 self.presence_map = {} # type: Dict[str, UserPresenceState]
7084 # Stream position -> (user_id, destinations)
7185 self.presence_destinations = (
7286 SortedDict()
73 ) # type: SortedDict[int, Tuple[str, List[str]]]
87 ) # type: SortedDict[int, Tuple[str, Iterable[str]]]
7488
7589 # (destination, key) -> EDU
7690 self.keyed_edu = {} # type: Dict[Tuple[str, tuple], Edu]
93107 # we use a factory function here so that the inner lambda binds to the
94108 # queue itself rather than to the name of the queue, which changes as the
95109 # loop iterates. ARGH.
96 def register(name, queue):
110 def register(name: str, queue: Sized) -> None:
97111 LaterGauge(
98112 "synapse_federation_send_queue_%s_size" % (queue_name,),
99113 "",
114128
115129 self.clock.looping_call(self._clear_queue, 30 * 1000)
116130
117 def _next_pos(self):
131 def _next_pos(self) -> int:
118132 pos = self.pos
119133 self.pos += 1
120134 self.pos_time[self.clock.time_msec()] = pos
121135 return pos
122136
123 def _clear_queue(self):
137 def _clear_queue(self) -> None:
124138 """Clear the queues for anything older than N minutes"""
125139
126140 FIVE_MINUTES_AGO = 5 * 60 * 1000
137151
138152 self._clear_queue_before_pos(position_to_delete)
139153
140 def _clear_queue_before_pos(self, position_to_delete):
154 def _clear_queue_before_pos(self, position_to_delete: int) -> None:
141155 """Clear all the queues from before a given position"""
142156 with Measure(self.clock, "send_queue._clear"):
143157 # Delete things out of presence maps
187201 for key in keys[:i]:
188202 del self.edus[key]
189203
190 def notify_new_events(self, max_token):
204 def notify_new_events(self, max_token: RoomStreamToken) -> None:
191205 """As per FederationSender"""
192 # We don't need to replicate this as it gets sent down a different
193 # stream.
194 pass
195
196 def build_and_send_edu(self, destination, edu_type, content, key=None):
206 # This should never get called.
207 raise NotImplementedError()
208
209 def build_and_send_edu(
210 self,
211 destination: str,
212 edu_type: str,
213 content: JsonDict,
214 key: Optional[Hashable] = None,
215 ) -> None:
197216 """As per FederationSender"""
198217 if destination == self.server_name:
199218 logger.info("Not sending EDU to ourselves")
217236
218237 self.notifier.on_new_replication_data()
219238
220 def send_read_receipt(self, receipt):
239 async def send_read_receipt(self, receipt: ReadReceipt) -> None:
221240 """As per FederationSender
222241
223242 Args:
224 receipt (synapse.types.ReadReceipt):
243 receipt:
225244 """
226245 # nothing to do here: the replication listener will handle it.
227 return defer.succeed(None)
228
229 def send_presence(self, states):
246
247 def send_presence(self, states: List[UserPresenceState]) -> None:
230248 """As per FederationSender
231249
232250 Args:
233 states (list(UserPresenceState))
251 states
234252 """
235253 pos = self._next_pos()
236254
237255 # We only want to send presence for our own users, so lets always just
238256 # filter here just in case.
239 local_states = list(filter(lambda s: self.is_mine_id(s.user_id), states))
257 local_states = [s for s in states if self.is_mine_id(s.user_id)]
240258
241259 self.presence_map.update({state.user_id: state for state in local_states})
242260 self.presence_changed[pos] = [state.user_id for state in local_states]
243261
244262 self.notifier.on_new_replication_data()
245263
246 def send_presence_to_destinations(self, states, destinations):
264 def send_presence_to_destinations(
265 self, states: Iterable[UserPresenceState], destinations: Iterable[str]
266 ) -> None:
247267 """As per FederationSender
248268
249269 Args:
250 states (list[UserPresenceState])
251 destinations (list[str])
270 states
271 destinations
252272 """
253273 for state in states:
254274 pos = self._next_pos()
257277
258278 self.notifier.on_new_replication_data()
259279
260 def send_device_messages(self, destination):
280 def send_device_messages(self, destination: str) -> None:
261281 """As per FederationSender"""
262282 # We don't need to replicate this as it gets sent down a different
263283 # stream.
264284
265 def get_current_token(self):
285 def wake_destination(self, server: str) -> None:
286 pass
287
288 def get_current_token(self) -> int:
266289 return self.pos - 1
267290
268 def federation_ack(self, instance_name, token):
291 def federation_ack(self, instance_name: str, token: int) -> None:
269292 if self._sender_instances:
270293 # If we have configured multiple federation sender instances we need
271294 # to track their positions separately, and only clear the queue up
503526 )
504527
505528
506 def process_rows_for_federation(transaction_queue, rows):
529 def process_rows_for_federation(
530 transaction_queue: FederationSender,
531 rows: List[FederationStream.FederationStreamRow],
532 ) -> None:
507533 """Parse a list of rows from the federation stream and put them in the
508534 transaction queue ready for sending to the relevant homeservers.
509535
510536 Args:
511 transaction_queue (FederationSender)
512 rows (list(synapse.replication.tcp.streams.federation.FederationStream.FederationStreamRow))
537 transaction_queue
538 rows
513539 """
514540
515541 # The federation stream contains a bunch of different types of
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414
15 import abc
1516 import logging
16 from typing import Dict, Hashable, Iterable, List, Optional, Set, Tuple
17 from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Set, Tuple
1718
1819 from prometheus_client import Counter
1920
2021 from twisted.internet import defer
2122
22 import synapse
2323 import synapse.metrics
2424 from synapse.api.presence import UserPresenceState
2525 from synapse.events import EventBase
3939 events_processed_counter,
4040 )
4141 from synapse.metrics.background_process_metrics import run_as_background_process
42 from synapse.types import ReadReceipt, RoomStreamToken
42 from synapse.types import JsonDict, ReadReceipt, RoomStreamToken
4343 from synapse.util.metrics import Measure, measure_func
44
45 if TYPE_CHECKING:
46 from synapse.server import HomeServer
4447
4548 logger = logging.getLogger(__name__)
4649
6467 CATCH_UP_STARTUP_INTERVAL_SEC = 5
6568
6669
67 class FederationSender:
68 def __init__(self, hs: "synapse.server.HomeServer"):
70 class AbstractFederationSender(metaclass=abc.ABCMeta):
71 @abc.abstractmethod
72 def notify_new_events(self, max_token: RoomStreamToken) -> None:
73 """This gets called when we have some new events we might want to
74 send out to other servers.
75 """
76 raise NotImplementedError()
77
78 @abc.abstractmethod
79 async def send_read_receipt(self, receipt: ReadReceipt) -> None:
80 """Send a RR to any other servers in the room
81
82 Args:
83 receipt: receipt to be sent
84 """
85 raise NotImplementedError()
86
87 @abc.abstractmethod
88 def send_presence(self, states: List[UserPresenceState]) -> None:
89 """Send the new presence states to the appropriate destinations.
90
91 This actually queues up the presence states ready for sending and
92 triggers a background task to process them and send out the transactions.
93 """
94 raise NotImplementedError()
95
96 @abc.abstractmethod
97 def send_presence_to_destinations(
98 self, states: Iterable[UserPresenceState], destinations: Iterable[str]
99 ) -> None:
100 """Send the given presence states to the given destinations.
101
102 Args:
103 destinations:
104 """
105 raise NotImplementedError()
106
107 @abc.abstractmethod
108 def build_and_send_edu(
109 self,
110 destination: str,
111 edu_type: str,
112 content: JsonDict,
113 key: Optional[Hashable] = None,
114 ) -> None:
115 """Construct an Edu object, and queue it for sending
116
117 Args:
118 destination: name of server to send to
119 edu_type: type of EDU to send
120 content: content of EDU
121 key: clobbering key for this edu
122 """
123 raise NotImplementedError()
124
125 @abc.abstractmethod
126 def send_device_messages(self, destination: str) -> None:
127 raise NotImplementedError()
128
129 @abc.abstractmethod
130 def wake_destination(self, destination: str) -> None:
131 """Called when we want to retry sending transactions to a remote.
132
133 This is mainly useful if the remote server has been down and we think it
134 might have come back.
135 """
136 raise NotImplementedError()
137
138 @abc.abstractmethod
139 def get_current_token(self) -> int:
140 raise NotImplementedError()
141
142 @abc.abstractmethod
143 def federation_ack(self, instance_name: str, token: int) -> None:
144 raise NotImplementedError()
145
146 @abc.abstractmethod
147 async def get_replication_rows(
148 self, instance_name: str, from_token: int, to_token: int, target_row_count: int
149 ) -> Tuple[List[Tuple[int, Tuple]], int, bool]:
150 raise NotImplementedError()
151
152
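# A minimal sketch of why this interface exists: the homeserver can hand
# callers either implementation interchangeably. (The actual wiring lives in
# synapse/server.py and may differ; `should_send_federation` is assumed here,
# and FederationRemoteSendQueue comes from synapse.federation.send_queue.)
def build_federation_sender(hs: "HomeServer") -> "AbstractFederationSender":
    if hs.should_send_federation():
        # This process sends federation traffic directly.
        return FederationSender(hs)
    # Otherwise queue the traffic for a dedicated federation sender worker.
    return FederationRemoteSendQueue(hs)
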
153 class FederationSender(AbstractFederationSender):
154 def __init__(self, hs: "HomeServer"):
69155 self.hs = hs
70156 self.server_name = hs.hostname
71157
431517 queue.flush_read_receipts_for_room(room_id)
432518
433519 @preserve_fn # the caller should not yield on this
434 async def send_presence(self, states: List[UserPresenceState]):
520 async def send_presence(self, states: List[UserPresenceState]) -> None:
435521 """Send the new presence states to the appropriate destinations.
436522
437523 This actually queues up the presence states ready for sending and
493579 self._get_per_destination_queue(destination).send_presence(states)
494580
495581 @measure_func("txnqueue._process_presence")
496 async def _process_presence_inner(self, states: List[UserPresenceState]):
582 async def _process_presence_inner(self, states: List[UserPresenceState]) -> None:
497583 """Given a list of states populate self.pending_presence_by_dest and
498584 poke to send a new transaction to each destination
499585 """
515601 self,
516602 destination: str,
517603 edu_type: str,
518 content: dict,
604 content: JsonDict,
519605 key: Optional[Hashable] = None,
520 ):
606 ) -> None:
521607 """Construct an Edu object, and queue it for sending
522608
523609 Args:
544630
545631 self.send_edu(edu, key)
546632
547 def send_edu(self, edu: Edu, key: Optional[Hashable]):
633 def send_edu(self, edu: Edu, key: Optional[Hashable]) -> None:
548634 """Queue an EDU for sending
549635
550636 Args:
562648 else:
563649 queue.send_edu(edu)
564650
565 def send_device_messages(self, destination: str):
651 def send_device_messages(self, destination: str) -> None:
566652 if destination == self.server_name:
567653 logger.warning("Not sending device update to ourselves")
568654 return
574660
575661 self._get_per_destination_queue(destination).attempt_new_transaction()
576662
577 def wake_destination(self, destination: str):
663 def wake_destination(self, destination: str) -> None:
578664 """Called when we want to retry sending transactions to a remote.
579665
580666 This is mainly useful if the remote server has been down and we think it
597683 # Dummy implementation for case where federation sender isn't offloaded
598684 # to a worker.
599685 return 0
686
687 def federation_ack(self, instance_name: str, token: int) -> None:
688 # It is not expected that this gets called on FederationSender.
689 raise NotImplementedError()
600690
601691 @staticmethod
602692 async def get_replication_rows(
606696 # to a worker.
607697 return [], 0, False
608698
609 async def _wake_destinations_needing_catchup(self):
699 async def _wake_destinations_needing_catchup(self) -> None:
610700 """
611701 Wakes up destinations that need catch-up and are not currently being
612702 backed off from.
1414 # limitations under the License.
1515 import datetime
1616 import logging
17 from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Tuple, cast
17 from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Tuple
1818
1919 import attr
2020 from prometheus_client import Counter
7676 self._transaction_manager = transaction_manager
7777 self._instance_name = hs.get_instance_name()
7878 self._federation_shard_config = hs.config.worker.federation_shard_config
79 self._state = hs.get_state_handler()
7980
8081 self._should_send_on_this_instance = True
8182 if not self._federation_shard_config.should_handle(
414415 "This should not happen." % event_ids
415416 )
416417
417 if logger.isEnabledFor(logging.INFO):
418 rooms = [p.room_id for p in catchup_pdus]
419 logger.info("Catching up rooms to %s: %r", self._destination, rooms)
420
421 await self._transaction_manager.send_new_transaction(
422 self._destination, catchup_pdus, []
423 )
424
425 sent_transactions_counter.inc()
426 final_pdu = catchup_pdus[-1]
427 self._last_successful_stream_ordering = cast(
428 int, final_pdu.internal_metadata.stream_ordering
429 )
430 await self._store.set_destination_last_successful_stream_ordering(
431 self._destination, self._last_successful_stream_ordering
432 )
418 # We send transactions with events from one room only, as it's likely
419 # that the remote will have to do additional processing, which may
420 # take some time. It's better to give it small amounts of work
421 # rather than risk the request timing out and repeatedly being
422 # retried, and not making any progress.
423 #
424 # Note: `catchup_pdus` will have exactly one PDU per room.
425 for pdu in catchup_pdus:
426 # The PDU from the DB will be the last PDU in the room from
427 # *this server* that wasn't sent to the remote. However, other
428 # servers may have sent lots of events since then, and we want
429 # to try and tell the remote only about the *latest* events in
430 # the room. This is so that it doesn't get inundated by events
431 # from various parts of the DAG, which all need to be processed.
432 #
433 # Note: this does mean that in large rooms a server coming back
434 # online will get sent the same events from all the different
435 # servers, but the remote will correctly deduplicate them and
436 # handle it only once.
437
438 # Step 1, fetch the current extremities
439 extrems = await self._store.get_prev_events_for_room(pdu.room_id)
440
441 if pdu.event_id in extrems:
442 # If the event is in the extremities, then great! We can just
443 # use that without having to do further checks.
444 room_catchup_pdus = [pdu]
445 else:
446 # If not, fetch the extremities and figure out which we can
447 # send.
448 extrem_events = await self._store.get_events_as_list(extrems)
449
450 new_pdus = []
451 for p in extrem_events:
452 # We pulled this from the DB, so it'll be non-null
453 assert p.internal_metadata.stream_ordering
454
455 # Filter out events that happened before the remote went
456 # offline
457 if (
458 p.internal_metadata.stream_ordering
459 < self._last_successful_stream_ordering
460 ):
461 continue
462
463 # Filter out events where the server is not in the room,
464 # e.g. it may have left/been kicked. *Ideally* we'd pull
465 # out the kick and send that, but it's a rare edge case
466 # so we don't bother for now (the server that sent the
467 # kick should send it out if it's online).
468 hosts = await self._state.get_hosts_in_room_at_events(
469 p.room_id, [p.event_id]
470 )
471 if self._destination not in hosts:
472 continue
473
474 new_pdus.append(p)
475
476 # If we've filtered out all the extremities, fall back to
477 # sending the original event. This should ensure that the
478 # server gets at least some of the missed events (especially if
479 # the other sending servers are up).
480 if new_pdus:
481 room_catchup_pdus = new_pdus
482 else:
483 room_catchup_pdus = [pdu]
484
485 logger.info(
486 "Catching up rooms to %s: %r", self._destination, pdu.room_id
487 )
488
489 await self._transaction_manager.send_new_transaction(
490 self._destination, room_catchup_pdus, []
491 )
492
493 sent_transactions_counter.inc()
494
495 # We pulled this from the DB, so it'll be non-null
496 assert pdu.internal_metadata.stream_ordering
497
498 # Note that we mark the last successful stream ordering as that
499 # from the *original* PDU, rather than the PDU(s) we actually
500 # send. This is because we use it to mark our position in the
501 # queue of missed PDUs to process.
502 self._last_successful_stream_ordering = (
503 pdu.internal_metadata.stream_ordering
504 )
505
506 await self._store.set_destination_last_successful_stream_ordering(
507 self._destination, self._last_successful_stream_ordering
508 )
433509
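# A schematic restatement (illustration only) of the extremity-filtering rule
# in the loop above, with the storage/state lookups replaced by plain
# arguments; `hosts_at_event` is a hypothetical precomputed mapping standing
# in for get_hosts_in_room_at_events().
def pick_room_catchup_pdus(pdu, extrem_events, destination, last_successful_so, hosts_at_event):
    if pdu.event_id in {e.event_id for e in extrem_events}:
        return [pdu]  # the missed event is still a forward extremity
    newer = [
        e
        for e in extrem_events
        # drop extremities from before the remote went offline
        if e.internal_metadata.stream_ordering >= last_successful_so
        # drop extremities where the remote is no longer in the room
        and destination in hosts_at_event[e.event_id]
    ]
    return newer or [pdu]  # fall back to the original event if all filtered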
434510 def _get_rr_edus(self, force_flush: bool) -> Iterable[Edu]:
435511 if not self._pending_rrs:
1515
1616 import logging
1717 import urllib
18 from typing import Any, Dict, Optional
18 from typing import Any, Dict, List, Optional
1919
2020 from synapse.api.constants import Membership
2121 from synapse.api.errors import Codes, HttpResponseException, SynapseError
2525 FEDERATION_V2_PREFIX,
2626 )
2727 from synapse.logging.utils import log_function
28 from synapse.types import JsonDict
2829
2930 logger = logging.getLogger(__name__)
3031
977978
978979 return self.client.get_json(destination=destination, path=path)
979980
981 async def get_space_summary(
982 self,
983 destination: str,
984 room_id: str,
985 suggested_only: bool,
986 max_rooms_per_space: Optional[int],
987 exclude_rooms: List[str],
988 ) -> JsonDict:
989 """
990 Args:
991 destination: The remote server
992 room_id: The room ID to ask about.
993 suggested_only: if True, only suggested rooms will be returned
994 max_rooms_per_space: an optional limit to the number of children to be
995 returned per space
996 exclude_rooms: a list of any rooms we can skip
997 """
998 path = _create_path(
999 FEDERATION_UNSTABLE_PREFIX, "/org.matrix.msc2946/spaces/%s", room_id
1000 )
1001
1002 params = {
1003 "suggested_only": suggested_only,
1004 "exclude_rooms": exclude_rooms,
1005 }
1006 if max_rooms_per_space is not None:
1007 params["max_rooms_per_space"] = max_rooms_per_space
1008
1009 return await self.client.post_json(
1010 destination=destination, path=path, data=params
1011 )
1012
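# Illustration (assumed wire format) of the request the helper above issues
# for room "!a:example.org" (the room ID is percent-encoded by _create_path
# below):
#
#   POST /_matrix/federation/unstable/org.matrix.msc2946/spaces/%21a%3Aexample.org
#   {"suggested_only": false, "exclude_rooms": [], "max_rooms_per_space": 10}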
9801013
9811014 def _create_path(federation_prefix, path, *args):
9821015 """
1717 import functools
1818 import logging
1919 import re
20 from typing import Optional, Tuple, Type
20 from typing import Container, Mapping, Optional, Sequence, Tuple, Type
2121
2222 import synapse
2323 from synapse.api.constants import MAX_GROUP_CATEGORYID_LENGTH, MAX_GROUP_ROLEID_LENGTH
2828 FEDERATION_V1_PREFIX,
2929 FEDERATION_V2_PREFIX,
3030 )
31 from synapse.http.server import JsonResource
31 from synapse.http.server import HttpServer, JsonResource
3232 from synapse.http.servlet import (
3333 parse_boolean_from_args,
3434 parse_integer_from_args,
4343 whitelisted_homeserver,
4444 )
4545 from synapse.server import HomeServer
46 from synapse.types import ThirdPartyInstanceID, get_domain_from_id
46 from synapse.types import JsonDict, ThirdPartyInstanceID, get_domain_from_id
47 from synapse.util.ratelimitutils import FederationRateLimiter
4748 from synapse.util.stringutils import parse_and_validate_server_name
4849 from synapse.util.versionstring import get_version_string
4950
13731374 )
13741375
13751376 return 200, new_content
1377
1378
1379 class FederationSpaceSummaryServlet(BaseFederationServlet):
1380 PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc2946"
1381 PATH = "/spaces/(?P<room_id>[^/]*)"
1382
1383 async def on_POST(
1384 self,
1385 origin: str,
1386 content: JsonDict,
1387 query: Mapping[bytes, Sequence[bytes]],
1388 room_id: str,
1389 ) -> Tuple[int, JsonDict]:
1390 suggested_only = content.get("suggested_only", False)
1391 if not isinstance(suggested_only, bool):
1392 raise SynapseError(
1393 400, "'suggested_only' must be a boolean", Codes.BAD_JSON
1394 )
1395
1396 exclude_rooms = content.get("exclude_rooms", [])
1397 if not isinstance(exclude_rooms, list) or any(
1398 not isinstance(x, str) for x in exclude_rooms
1399 ):
1400 raise SynapseError(400, "bad value for 'exclude_rooms'", Codes.BAD_JSON)
1401
1402 max_rooms_per_space = content.get("max_rooms_per_space")
1403 if max_rooms_per_space is not None and not isinstance(max_rooms_per_space, int):
1404 raise SynapseError(
1405 400, "bad value for 'max_rooms_per_space'", Codes.BAD_JSON
1406 )
1407
1408 return 200, await self.handler.federation_space_summary(
1409 room_id, suggested_only, max_rooms_per_space, exclude_rooms
1410 )
13761411
13771412
13781413 class RoomComplexityServlet(BaseFederationServlet):
14731508 )
14741509
14751510
1476 def register_servlets(hs, resource, authenticator, ratelimiter, servlet_groups=None):
1511 def register_servlets(
1512 hs: HomeServer,
1513 resource: HttpServer,
1514 authenticator: Authenticator,
1515 ratelimiter: FederationRateLimiter,
1516 servlet_groups: Optional[Container[str]] = None,
1517 ):
14771518 """Initialize and register servlet classes.
14781519
14791520 Will by default register all servlets. For custom behaviour, pass in
14801521 a list of servlet_groups to register.
14811522
14821523 Args:
1483 hs (synapse.server.HomeServer): homeserver
1484 resource (JsonResource): resource class to register to
1485 authenticator (Authenticator): authenticator to use
1486 ratelimiter (util.ratelimitutils.FederationRateLimiter): ratelimiter to use
1487 servlet_groups (list[str], optional): List of servlet groups to register.
1524 hs: homeserver
1525 resource: resource class to register to
1526 authenticator: authenticator to use
1527 ratelimiter: ratelimiter to use
1528 servlet_groups: List of servlet groups to register.
14881529 Defaults to ``DEFAULT_SERVLET_GROUPS``.
14891530 """
14901531 if not servlet_groups:
14941535 for servletclass in FEDERATION_SERVLET_CLASSES:
14951536 servletclass(
14961537 handler=hs.get_federation_server(),
1538 authenticator=authenticator,
1539 ratelimiter=ratelimiter,
1540 server_name=hs.hostname,
1541 ).register(resource)
1542
1543 if hs.config.experimental.spaces_enabled:
1544 FederationSpaceSummaryServlet(
1545 handler=hs.get_space_summary_handler(),
14971546 authenticator=authenticator,
14981547 ratelimiter=ratelimiter,
14991548 server_name=hs.hostname,
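# The space-summary servlet above is only registered when the experimental
# flag backing hs.config.experimental.spaces_enabled is on; assumed
# homeserver.yaml shape:
#
#   experimental_features:
#     spaces_enabled: true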
4545 from synapse.types import JsonDict, get_domain_from_id
4646
4747 if TYPE_CHECKING:
48 from synapse.app.homeserver import HomeServer
48 from synapse.server import HomeServer
4949
5050 logger = logging.getLogger(__name__)
5151
2424 from synapse.util.async_helpers import concurrently_execute
2525
2626 if TYPE_CHECKING:
27 from synapse.app.homeserver import HomeServer
27 from synapse.server import HomeServer
2828
2929 logger = logging.getLogger(__name__)
3030
2323 from synapse.types import UserID
2424
2525 if TYPE_CHECKING:
26 from synapse.app.homeserver import HomeServer
26 from synapse.server import HomeServer
2727
2828 logger = logging.getLogger(__name__)
2929
2424 from synapse.types import JsonDict, UserID
2525
2626 if TYPE_CHECKING:
27 from synapse.app.homeserver import HomeServer
27 from synapse.server import HomeServer
2828
2929
3030 class AccountDataHandler:
2626 from synapse.util import stringutils
2727
2828 if TYPE_CHECKING:
29 from synapse.app.homeserver import HomeServer
29 from synapse.server import HomeServer
3030
3131 logger = logging.getLogger(__name__)
3232
2323 from synapse.app import check_bind_error
2424
2525 if TYPE_CHECKING:
26 from synapse.app.homeserver import HomeServer
26 from synapse.server import HomeServer
2727
2828 logger = logging.getLogger(__name__)
2929
2424 from ._base import BaseHandler
2525
2626 if TYPE_CHECKING:
27 from synapse.app.homeserver import HomeServer
27 from synapse.server import HomeServer
2828
2929 logger = logging.getLogger(__name__)
3030
3737 from synapse.util.metrics import Measure
3838
3939 if TYPE_CHECKING:
40 from synapse.app.homeserver import HomeServer
40 from synapse.server import HomeServer
4141
4242 logger = logging.getLogger(__name__)
4343
6969 from synapse.util.threepids import canonicalise_email
7070
7171 if TYPE_CHECKING:
72 from synapse.app.homeserver import HomeServer
72 from synapse.server import HomeServer
7373
7474 logger = logging.getLogger(__name__)
7575
885885 )
886886 return result
887887
888 def can_change_password(self) -> bool:
889 """Get whether users on this server are allowed to change or set a password.
890
891 Both `config.password_enabled` and `config.password_localdb_enabled` must be true.
892
893 Note that any account (even an SSO account) is allowed to add a password if the
894 above is true.
895
896 Returns:
897 Whether users on this server are allowed to change or set a password
898 """
899 return self._password_enabled and self._password_localdb_enabled
900
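# Illustrative truth table for can_change_password() above (the two flags are
# assumed to map to `password_config.enabled` and
# `password_config.localdb_enabled` in homeserver.yaml):
#
#   password_enabled  password_localdb_enabled  ->  can_change_password()
#   True              True                          True
#   True              False                         False  (e.g. SSO-only)
#   False             (any)                         False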
888901 def get_supported_login_types(self) -> Iterable[str]:
889902 """Get a the login types supported for the /login API
890903
2626 from synapse.types import UserID, map_username_to_mxid_localpart
2727
2828 if TYPE_CHECKING:
29 from synapse.app.homeserver import HomeServer
29 from synapse.server import HomeServer
3030
3131 logger = logging.getLogger(__name__)
3232
2222 from ._base import BaseHandler
2323
2424 if TYPE_CHECKING:
25 from synapse.app.homeserver import HomeServer
25 from synapse.server import HomeServer
2626
2727 logger = logging.getLogger(__name__)
2828
4444 from ._base import BaseHandler
4545
4646 if TYPE_CHECKING:
47 from synapse.app.homeserver import HomeServer
47 from synapse.server import HomeServer
4848
4949 logger = logging.getLogger(__name__)
5050
165165
166166 # Fetch the current state at the time.
167167 try:
168 event_ids = await self.store.get_forward_extremeties_for_room(
168 event_ids = await self.store.get_forward_extremities_for_room_at_stream_ordering(
169169 room_id, stream_ordering=stream_ordering
170170 )
171171 except errors.StoreError:
906906 master_key = result.get("master_key")
907907 self_signing_key = result.get("self_signing_key")
908908
909 ignore_devices = False
909910 # If the remote server has more than ~1000 devices for this user
910911 # we assume that something is going horribly wrong (e.g. a bot
911912 # that logs in and creates a new device every time it tries to
924925 len(devices),
925926 )
926927 devices = []
928 ignore_devices = True
929 else:
930 cached_devices = await self.store.get_cached_devices_for_user(user_id)
931 if cached_devices == {d["device_id"]: d for d in devices}:
932 devices = []
933 ignore_devices = True
927934
928935 for device in devices:
929936 logger.debug(
933940 stream_id,
934941 )
935942
936 await self.store.update_remote_device_list_cache(user_id, devices, stream_id)
943 if not ignore_devices:
944 await self.store.update_remote_device_list_cache(
945 user_id, devices, stream_id
946 )
937947 device_ids = [device["device_id"] for device in devices]
938948
939949 # Handle cross-signing keys.
944954 )
945955 device_ids = device_ids + cross_signing_device_ids
946956
947 await self.device_handler.notify_device_update(user_id, device_ids)
957 if device_ids:
958 await self.device_handler.notify_device_update(user_id, device_ids)
948959
949960 # We clobber the seen updates since we've re-synced from a given
950961 # point.
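# Illustrative effect of the change above (hypothetical data): if the remote
# returns exactly the device list we already have cached, `devices` is
# emptied and `ignore_devices` is set, so the cache is not rewritten and,
# absent cross-signing updates, no device-update notification is sent.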
972983 """
973984 device_ids = []
974985
975 if master_key:
986 current_keys_map = await self.store.get_e2e_cross_signing_keys_bulk([user_id])
987 current_keys = current_keys_map.get(user_id) or {}
988
989 if master_key and master_key != current_keys.get("master"):
976990 await self.store.set_e2e_cross_signing_key(user_id, "master", master_key)
977991 _, verify_key = get_verify_key_from_cross_signing_key(master_key)
978992 # verify_key is a VerifyKey from signedjson, which uses
979993 # .version to denote the portion of the key ID after the
980994 # algorithm and colon, which is the device ID
981995 device_ids.append(verify_key.version)
982 if self_signing_key:
996 if self_signing_key and self_signing_key != current_keys.get("self_signing"):
983997 await self.store.set_e2e_cross_signing_key(
984998 user_id, "self_signing", self_signing_key
985999 )
3131 from synapse.util.stringutils import random_string
3232
3333 if TYPE_CHECKING:
34 from synapse.app.homeserver import HomeServer
34 from synapse.server import HomeServer
3535
3636
3737 logger = logging.getLogger(__name__)
4141 from synapse.util.retryutils import NotRetryingDestination
4242
4343 if TYPE_CHECKING:
44 from synapse.app.homeserver import HomeServer
44 from synapse.server import HomeServer
4545
4646 logger = logging.getLogger(__name__)
4747
2828 from synapse.util.async_helpers import Linearizer
2929
3030 if TYPE_CHECKING:
31 from synapse.app.homeserver import HomeServer
31 from synapse.server import HomeServer
3232
3333 logger = logging.getLogger(__name__)
3434
2020 from synapse.types import GroupID, JsonDict, get_domain_from_id
2121
2222 if TYPE_CHECKING:
23 from synapse.app.homeserver import HomeServer
23 from synapse.server import HomeServer
2424
2525 logger = logging.getLogger(__name__)
2626
148148 Args:
149149 request: the incoming request from the browser.
150150 """
151 # This will always be set by the time Twisted calls us.
152 assert request.args is not None
153
151154 # The provider might redirect with an error.
152155 # In that case, just display it as-is.
153156 if b"error" in request.args:
279282 self._config = provider
280283 self._callback_url = hs.config.oidc_callback_url # type: str
281284
285 self._oidc_attribute_requirements = provider.attribute_requirements
282286 self._scopes = provider.scopes
283287 self._user_profile_method = provider.user_profile_method
284288
858862 )
859863
860864 # otherwise, it's a login
865 logger.debug("Userinfo for OIDC login: %s", userinfo)
866
867 # Ensure that the attributes of the logged in user meet the required
868 # attributes by checking the userinfo against attribute_requirements
869 # In order to deal with the fact that OIDC userinfo can contain many
870 # types of data, we wrap non-list values in lists.
871 if not self._sso_handler.check_required_attributes(
872 request,
873 {k: v if isinstance(v, list) else [v] for k, v in userinfo.items()},
874 self._oidc_attribute_requirements,
875 ):
876 return
861877
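# Illustration of the wrapping above (shapes assumed): a userinfo of
# {"groups": "admin", "emails": ["a@example.org"]} is checked as
# {"groups": ["admin"], "emails": ["a@example.org"]}, so a requirement such
# as {"attribute": "groups", "value": "admin"} can match scalar claims too.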
862878 # Call the mapper to register/login the user
863879 try:
2020 from synapse.api.errors import Codes, PasswordRefusedError
2121
2222 if TYPE_CHECKING:
23 from synapse.app.homeserver import HomeServer
23 from synapse.server import HomeServer
2424
2525 logger = logging.getLogger(__name__)
2626
102102 def __init__(self, hs: "HomeServer"):
103103 self.clock = hs.get_clock()
104104 self.store = hs.get_datastore()
105
106 self._busy_presence_enabled = hs.config.experimental.msc3026_enabled
105107
106108 active_presence = self.store.take_presence_startup_info()
107109 self.user_to_current_state = {state.user_id: state for state in active_presence}
729731 PresenceState.ONLINE,
730732 PresenceState.UNAVAILABLE,
731733 PresenceState.OFFLINE,
734 PresenceState.BUSY,
732735 )
733 if presence not in valid_presence:
736
737 if presence not in valid_presence or (
738 presence == PresenceState.BUSY and not self._busy_presence_enabled
739 ):
734740 raise SynapseError(400, "Invalid presence state")
735741
736742 user_id = target_user.to_string()
743749 msg = status_msg if presence != PresenceState.OFFLINE else None
744750 new_fields["status_msg"] = msg
745751
746 if presence == PresenceState.ONLINE:
752 if presence == PresenceState.ONLINE or (
753 presence == PresenceState.BUSY and self._busy_presence_enabled
754 ):
747755 new_fields["last_active_ts"] = self.clock.time_msec()
748756
749757 await self._update_states([prev_state.copy_and_replace(**new_fields)])
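# Busy presence (MSC3026) is gated on the experimental flag read in __init__
# above; assumed homeserver.yaml shape:
#
#   experimental_features:
#     msc3026_enabled: true
#
# With the flag off (the default), a request to set "busy" is rejected with
# the same 400 "Invalid presence state" as any unknown state.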
3535 from ._base import BaseHandler
3636
3737 if TYPE_CHECKING:
38 from synapse.app.homeserver import HomeServer
38 from synapse.server import HomeServer
3939
4040 logger = logging.getLogger(__name__)
4141
2020 from ._base import BaseHandler
2121
2222 if TYPE_CHECKING:
23 from synapse.app.homeserver import HomeServer
23 from synapse.server import HomeServer
2424
2525 logger = logging.getLogger(__name__)
2626
1919 from synapse.types import JsonDict, ReadReceipt, get_domain_from_id
2020
2121 if TYPE_CHECKING:
22 from synapse.app.homeserver import HomeServer
22 from synapse.server import HomeServer
2323
2424 logger = logging.getLogger(__name__)
2525
3737 from ._base import BaseHandler
3838
3939 if TYPE_CHECKING:
40 from synapse.app.homeserver import HomeServer
40 from synapse.server import HomeServer
4141
4242 logger = logging.getLogger(__name__)
4343
436436
437437 if RoomAlias.is_valid(r):
438438 (
439 room_id,
439 room,
440440 remote_room_hosts,
441441 ) = await room_member_handler.lookup_room_alias(room_alias)
442 room_id = room_id.to_string()
442 room_id = room.to_string()
443443 else:
444444 raise SynapseError(
445445 400, "%s was not legal room ID or room alias" % (r,)
2828 from ._base import BaseHandler
2929
3030 if TYPE_CHECKING:
31 from synapse.app.homeserver import HomeServer
31 from synapse.server import HomeServer
3232
3333 logger = logging.getLogger(__name__)
3434
152152 target
153153 room_id
154154 """
155 raise NotImplementedError()
156
157 @abc.abstractmethod
158 async def forget(self, user: UserID, room_id: str) -> None:
155159 raise NotImplementedError()
156160
157161 def ratelimit_invite(self, room_id: Optional[str], invitee_user_id: str):
1313 # limitations under the License.
1414
1515 import logging
16 from typing import List, Optional, Tuple
16 from typing import TYPE_CHECKING, List, Optional, Tuple
1717
1818 from synapse.api.errors import SynapseError
1919 from synapse.handlers.room_member import RoomMemberHandler
2424 )
2525 from synapse.types import Requester, UserID
2626
27 if TYPE_CHECKING:
28 from synapse.server import HomeServer
29
2730 logger = logging.getLogger(__name__)
2831
2932
3033 class RoomMemberWorkerHandler(RoomMemberHandler):
31 def __init__(self, hs):
34 def __init__(self, hs: "HomeServer"):
3235 super().__init__(hs)
3336
3437 self._remote_join_client = ReplRemoteJoin.make_client(hs)
8285 await self._notify_change_client(
8386 user_id=target.to_string(), room_id=room_id, change="left"
8487 )
88
89 async def forget(self, target: UserID, room_id: str) -> None:
90 raise RuntimeError("Cannot forget rooms on workers.")
2929 from ._base import BaseHandler
3030
3131 if TYPE_CHECKING:
32 from synapse.app.homeserver import HomeServer
32 from synapse.server import HomeServer
3333
3434 logger = logging.getLogger(__name__)
3535
2020 from ._base import BaseHandler
2121
2222 if TYPE_CHECKING:
23 from synapse.app.homeserver import HomeServer
23 from synapse.server import HomeServer
2424
2525 logger = logging.getLogger(__name__)
2626
4040 logout_devices: bool,
4141 requester: Optional[Requester] = None,
4242 ) -> None:
43 if not self.hs.config.password_localdb_enabled:
43 if not self._auth_handler.can_change_password():
4444 raise SynapseError(403, "Password change disabled", errcode=Codes.FORBIDDEN)
4545
4646 try:
0 # -*- coding: utf-8 -*-
1 # Copyright 2021 The Matrix.org Foundation C.I.C.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import itertools
16 import logging
17 from collections import deque
18 from typing import TYPE_CHECKING, Iterable, List, Optional, Sequence, Set, Tuple, cast
19
20 import attr
21
22 from synapse.api.constants import EventContentFields, EventTypes, HistoryVisibility
23 from synapse.api.errors import AuthError
24 from synapse.events import EventBase
25 from synapse.events.utils import format_event_for_client_v2
26 from synapse.types import JsonDict
27
28 if TYPE_CHECKING:
29 from synapse.server import HomeServer
30
31 logger = logging.getLogger(__name__)
32
33 # max number of rooms to return. We'll stop once we hit this limit.
34 # TODO: allow clients to reduce this with a request param.
35 MAX_ROOMS = 50
36
37 # max number of events to return per room.
38 MAX_ROOMS_PER_SPACE = 50
39
40 # max number of federation servers to hit per room
41 MAX_SERVERS_PER_SPACE = 3
42
43
44 class SpaceSummaryHandler:
45 def __init__(self, hs: "HomeServer"):
46 self._clock = hs.get_clock()
47 self._auth = hs.get_auth()
48 self._room_list_handler = hs.get_room_list_handler()
49 self._state_handler = hs.get_state_handler()
50 self._store = hs.get_datastore()
51 self._event_serializer = hs.get_event_client_serializer()
52 self._server_name = hs.hostname
53 self._federation_client = hs.get_federation_client()
54
55 async def get_space_summary(
56 self,
57 requester: str,
58 room_id: str,
59 suggested_only: bool = False,
60 max_rooms_per_space: Optional[int] = None,
61 ) -> JsonDict:
62 """
63 Implementation of the space summary C-S API
64
65 Args:
66 requester: user id of the user making this request
67
68 room_id: room id to start the summary at
69
70 suggested_only: whether we should only return children with the "suggested"
71 flag set.
72
73 max_rooms_per_space: an optional limit on the number of child rooms we will
74 return. This does not apply to the root room (ie, room_id), and
75 is overridden by MAX_ROOMS_PER_SPACE.
76
77 Returns:
78 summary dict to return
79 """
80 # first of all, check that the user is in the room in question (or it's
81 # world-readable)
82 await self._auth.check_user_in_room_or_world_readable(room_id, requester)
83
84 # the queue of rooms to process
85 room_queue = deque((_RoomQueueEntry(room_id, ()),))
86
87 # rooms we have already processed
88 processed_rooms = set() # type: Set[str]
89
90 # events we have already processed. We don't necessarily have their event ids,
91 # so instead we key on (room id, state key)
92 processed_events = set() # type: Set[Tuple[str, str]]
93
94 rooms_result = [] # type: List[JsonDict]
95 events_result = [] # type: List[JsonDict]
96
97 while room_queue and len(rooms_result) < MAX_ROOMS:
98 queue_entry = room_queue.popleft()
99 room_id = queue_entry.room_id
100 if room_id in processed_rooms:
101 # already done this room
102 continue
103
104 logger.debug("Processing room %s", room_id)
105
106 is_in_room = await self._store.is_host_joined(room_id, self._server_name)
107
108 # The client-specified max_rooms_per_space limit doesn't apply to the
109 # room_id specified in the request, so we ignore it if this is the
110 # first room we are processing.
111 max_children = max_rooms_per_space if processed_rooms else None
112
113 if is_in_room:
114 rooms, events = await self._summarize_local_room(
115 requester, room_id, suggested_only, max_children
116 )
117 else:
118 rooms, events = await self._summarize_remote_room(
119 queue_entry,
120 suggested_only,
121 max_children,
122 exclude_rooms=processed_rooms,
123 )
124
125 logger.debug(
126 "Query of %s returned rooms %s, events %s",
127 queue_entry.room_id,
128 [room.get("room_id") for room in rooms],
129 ["%s->%s" % (ev["room_id"], ev["state_key"]) for ev in events],
130 )
131
132 rooms_result.extend(rooms)
133
134 # any rooms returned don't need visiting again
135 processed_rooms.update(cast(str, room.get("room_id")) for room in rooms)
136
137 # the room we queried may or may not have been returned, but don't process
138 # it again, anyway.
139 processed_rooms.add(room_id)
140
141 # XXX: is it ok that we blindly iterate through any events returned by
142 # a remote server, whether or not they actually link to any rooms in our
143 # tree?
144 for ev in events:
145 # remote servers might return events we have already processed
146 # (eg, Dendrite returns inward pointers as well as outward ones), so
147 # we need to filter them out, to avoid returning duplicate links to the
148 # client.
149 ev_key = (ev["room_id"], ev["state_key"])
150 if ev_key in processed_events:
151 continue
152 events_result.append(ev)
153
154 # add the child to the queue. we have already validated
155 # that the vias are a list of server names.
156 room_queue.append(
157 _RoomQueueEntry(ev["state_key"], ev["content"]["via"])
158 )
159 processed_events.add(ev_key)
160
161 return {"rooms": rooms_result, "events": events_result}
162
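# Worked illustration (hypothetical rooms) of the walk above: starting at
# space S with child events pointing at A and B, and B itself a space with
# child C, the queue evolves S -> [A, B] -> [B] -> [C] -> []. Every room id
# lands in `processed_rooms` exactly once, so a cycle (C pointing back at S)
# terminates rather than looping.
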
163 async def federation_space_summary(
164 self,
165 room_id: str,
166 suggested_only: bool,
167 max_rooms_per_space: Optional[int],
168 exclude_rooms: Iterable[str],
169 ) -> JsonDict:
170 """
171 Implementation of the space summary Federation API
172
173 Args:
174 room_id: room id to start the summary at
175
176 suggested_only: whether we should only return children with the "suggested"
177 flag set.
178
179 max_rooms_per_space: an optional limit on the number of child rooms we will
180 return. Unlike the C-S API, this applies to the root room (room_id).
181 It is clipped to MAX_ROOMS_PER_SPACE.
182
183 exclude_rooms: a list of rooms to skip over (presumably because the
184 calling server has already seen them).
185
186 Returns:
187 summary dict to return
188 """
189 # the queue of rooms to process
190 room_queue = deque((room_id,))
191
192 # the set of rooms that we should not walk further. Initialise it with the
193 # excluded-rooms list; we will add other rooms as we process them so that
194 # we do not loop.
195 processed_rooms = set(exclude_rooms) # type: Set[str]
196
197 rooms_result = [] # type: List[JsonDict]
198 events_result = [] # type: List[JsonDict]
199
200 while room_queue and len(rooms_result) < MAX_ROOMS:
201 room_id = room_queue.popleft()
202 if room_id in processed_rooms:
203 # already done this room
204 continue
205
206 logger.debug("Processing room %s", room_id)
207
208 rooms, events = await self._summarize_local_room(
209 None, room_id, suggested_only, max_rooms_per_space
210 )
211
212 processed_rooms.add(room_id)
213
214 rooms_result.extend(rooms)
215 events_result.extend(events)
216
217 # add any children to the queue
218 room_queue.extend(edge_event["state_key"] for edge_event in events)
219
220 return {"rooms": rooms_result, "events": events_result}
221
222 async def _summarize_local_room(
223 self,
224 requester: Optional[str],
225 room_id: str,
226 suggested_only: bool,
227 max_children: Optional[int],
228 ) -> Tuple[Sequence[JsonDict], Sequence[JsonDict]]:
229 if not await self._is_room_accessible(room_id, requester):
230 return (), ()
231
232 room_entry = await self._build_room_entry(room_id)
233
234 # look for child rooms/spaces.
235 child_events = await self._get_child_events(room_id)
236
237 if suggested_only:
238 # we only care about suggested children
239 child_events = filter(_is_suggested_child_event, child_events)
240
241 if max_children is None or max_children > MAX_ROOMS_PER_SPACE:
242 max_children = MAX_ROOMS_PER_SPACE
243
244 now = self._clock.time_msec()
245 events_result = [] # type: List[JsonDict]
246 for edge_event in itertools.islice(child_events, max_children):
247 events_result.append(
248 await self._event_serializer.serialize_event(
249 edge_event,
250 time_now=now,
251 event_format=format_event_for_client_v2,
252 )
253 )
254 return (room_entry,), events_result
255
256 async def _summarize_remote_room(
257 self,
258 room: "_RoomQueueEntry",
259 suggested_only: bool,
260 max_children: Optional[int],
261 exclude_rooms: Iterable[str],
262 ) -> Tuple[Sequence[JsonDict], Sequence[JsonDict]]:
263 room_id = room.room_id
264 logger.info("Requesting summary for %s via %s", room_id, room.via)
265
266 # we need to make the exclusion list json-serialisable
267 exclude_rooms = list(exclude_rooms)
268
269 via = itertools.islice(room.via, MAX_SERVERS_PER_SPACE)
270 try:
271 res = await self._federation_client.get_space_summary(
272 via,
273 room_id,
274 suggested_only=suggested_only,
275 max_rooms_per_space=max_children,
276 exclude_rooms=exclude_rooms,
277 )
278 except Exception as e:
279 logger.warning(
280 "Unable to get summary of %s via federation: %s",
281 room_id,
282 e,
283 exc_info=logger.isEnabledFor(logging.DEBUG),
284 )
285 return (), ()
286
287 return res.rooms, tuple(
288 ev.data
289 for ev in res.events
290 if ev.event_type == EventTypes.MSC1772_SPACE_CHILD
291 )
292
293 async def _is_room_accessible(self, room_id: str, requester: Optional[str]) -> bool:
294 # if we have an authenticated requesting user, first check if they are in the
295 # room
296 if requester:
297 try:
298 await self._auth.check_user_in_room(room_id, requester)
299 return True
300 except AuthError:
301 pass
302
303 # otherwise, check if the room is peekable
304 hist_vis_ev = await self._state_handler.get_current_state(
305 room_id, EventTypes.RoomHistoryVisibility, ""
306 )
307 if hist_vis_ev:
308 hist_vis = hist_vis_ev.content.get("history_visibility")
309 if hist_vis == HistoryVisibility.WORLD_READABLE:
310 return True
311
312 logger.info(
313 "room %s is unpeekable and user %s is not a member, omitting from summary",
314 room_id,
315 requester,
316 )
317 return False
318
319 async def _build_room_entry(self, room_id: str) -> JsonDict:
320 """Generate en entry suitable for the 'rooms' list in the summary response"""
321 stats = await self._store.get_room_with_stats(room_id)
322
323 # currently this should be impossible because we call
324 # check_user_in_room_or_world_readable on the room before we get here, so
325 # there should always be an entry
326 assert stats is not None, "unable to retrieve stats for %s" % (room_id,)
327
328 current_state_ids = await self._store.get_current_state_ids(room_id)
329 create_event = await self._store.get_event(
330 current_state_ids[(EventTypes.Create, "")]
331 )
332
333 # TODO: update once MSC1772 lands
334 room_type = create_event.content.get(EventContentFields.MSC1772_ROOM_TYPE)
335
336 entry = {
337 "room_id": stats["room_id"],
338 "name": stats["name"],
339 "topic": stats["topic"],
340 "canonical_alias": stats["canonical_alias"],
341 "num_joined_members": stats["joined_members"],
342 "avatar_url": stats["avatar"],
343 "world_readable": (
344 stats["history_visibility"] == HistoryVisibility.WORLD_READABLE
345 ),
346 "guest_can_join": stats["guest_access"] == "can_join",
347 "room_type": room_type,
348 }
349
350 # Filter out Nones: omit the field altogether rather than include a null
351 room_entry = {k: v for k, v in entry.items() if v is not None}
352
353 return room_entry
354
355 async def _get_child_events(self, room_id: str) -> Iterable[EventBase]:
356 # look for child rooms/spaces.
357 current_state_ids = await self._store.get_current_state_ids(room_id)
358
359 events = await self._store.get_events_as_list(
360 [
361 event_id
362 for key, event_id in current_state_ids.items()
363 # TODO: update once MSC1772 lands
364 if key[0] == EventTypes.MSC1772_SPACE_CHILD
365 ]
366 )
367
368 # filter out any events without a "via" (which implies it has been redacted)
369 return (e for e in events if _has_valid_via(e))
370
371
372 @attr.s(frozen=True, slots=True)
373 class _RoomQueueEntry:
374 room_id = attr.ib(type=str)
375 via = attr.ib(type=Sequence[str])
376
377
378 def _has_valid_via(e: EventBase) -> bool:
379 via = e.content.get("via")
380 if not via or not isinstance(via, list):
381 return False
382 for v in via:
383 if not isinstance(v, str):
384 logger.debug("Ignoring edge event %s with invalid via entry", e.event_id)
385 return False
386 return True
387
388
389 def _is_suggested_child_event(edge_event: EventBase) -> bool:
390 suggested = edge_event.content.get("suggested")
391 if isinstance(suggested, bool) and suggested:
392 return True
393 logger.debug("Ignorning not-suggested child %s", edge_event.state_key)
394 return False
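
# Illustrative behaviour of the two helpers above (event content shown only):
#   {"via": ["example.org"], "suggested": True}   -> valid via, suggested
#   {"via": [], "suggested": True}                -> dropped (empty via)
#   {"via": ["example.org"], "suggested": "yes"}  -> valid via, not suggested
#                                                    (must be boolean True)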
1616 from typing import TYPE_CHECKING, Optional
1717
1818 if TYPE_CHECKING:
19 from synapse.app.homeserver import HomeServer
19 from synapse.server import HomeServer
2020
2121 logger = logging.getLogger(__name__)
2222
2323 from synapse.types import JsonDict
2424
2525 if TYPE_CHECKING:
26 from synapse.app.homeserver import HomeServer
26 from synapse.server import HomeServer
2727
2828 logger = logging.getLogger(__name__)
2929
7979 filter_collection = attr.ib(type=FilterCollection)
8080 is_guest = attr.ib(type=bool)
8181 request_key = attr.ib(type=Tuple[Any, ...])
82 device_id = attr.ib(type=str)
82 device_id = attr.ib(type=Optional[str])
8383
8484
8585 @attr.s(slots=True, frozen=True)
722722
723723 return summary
724724
725 def get_lazy_loaded_members_cache(self, cache_key: Tuple[str, str]) -> LruCache:
725 def get_lazy_loaded_members_cache(
726 self, cache_key: Tuple[str, Optional[str]]
727 ) -> LruCache:
726728 cache = self.lazy_loaded_members_cache.get(cache_key)
727729 if cache is None:
728730 logger.debug("creating LruCache for %r", cache_key)
19781980
19791981 logger.info("User joined room after current token: %s", room_id)
19801982
1981 extrems = await self.store.get_forward_extremeties_for_room(
1982 room_id, event_pos.stream
1983 extrems = (
1984 await self.store.get_forward_extremities_for_room_at_stream_ordering(
1985 room_id, event_pos.stream
1986 )
19831987 )
19841988 users_in_room = await self.state.get_current_users_in_room(room_id, extrems)
19851989 if user_id in users_in_room:
2424 from synapse.util.metrics import Measure
2525
2626 if TYPE_CHECKING:
27 from synapse.app.homeserver import HomeServer
27 from synapse.server import HomeServer
2828
2929 logger = logging.getLogger(__name__)
3030
7676 from synapse.util.async_helpers import timeout_deferred
7777
7878 if TYPE_CHECKING:
79 from synapse.app.homeserver import HomeServer
79 from synapse.server import HomeServer
8080
8181 logger = logging.getLogger(__name__)
8282
1818
1919 from twisted.internet import defer, protocol
2020 from twisted.internet.error import ConnectError
21 from twisted.internet.interfaces import IStreamClientEndpoint
22 from twisted.internet.protocol import connectionDone
21 from twisted.internet.interfaces import IReactorCore, IStreamClientEndpoint
22 from twisted.internet.protocol import ClientFactory, Protocol, connectionDone
2323 from twisted.web import http
24 from twisted.web.http_headers import Headers
2425
2526 logger = logging.getLogger(__name__)
2627
4243
4344 Args:
4445 reactor: the Twisted reactor to use for the connection
45 proxy_endpoint (IStreamClientEndpoint): the endpoint to use to connect to the
46 proxy
47 host (bytes): hostname that we want to CONNECT to
48 port (int): port that we want to connect to
49 """
50
51 def __init__(self, reactor, proxy_endpoint, host, port):
46 proxy_endpoint: the endpoint to use to connect to the proxy
47 host: hostname that we want to CONNECT to
48 port: port that we want to connect to
49 headers: Extra HTTP headers to include in the CONNECT request
50 """
51
52 def __init__(
53 self,
54 reactor: IReactorCore,
55 proxy_endpoint: IStreamClientEndpoint,
56 host: bytes,
57 port: int,
58 headers: Headers,
59 ):
5260 self._reactor = reactor
5361 self._proxy_endpoint = proxy_endpoint
5462 self._host = host
5563 self._port = port
64 self._headers = headers
5665
5766 def __repr__(self):
5867 return "<HTTPConnectProxyEndpoint %s>" % (self._proxy_endpoint,)
5968
60 def connect(self, protocolFactory):
61 f = HTTPProxiedClientFactory(self._host, self._port, protocolFactory)
69 def connect(self, protocolFactory: ClientFactory):
70 f = HTTPProxiedClientFactory(
71 self._host, self._port, protocolFactory, self._headers
72 )
6273 d = self._proxy_endpoint.connect(f)
6374 # once the tcp socket connects successfully, we need to wait for the
6475 # CONNECT to complete.
7384 HTTP Protocol object and run the rest of the connection.
7485
7586 Args:
76 dst_host (bytes): hostname that we want to CONNECT to
77 dst_port (int): port that we want to connect to
78 wrapped_factory (protocol.ClientFactory): The original Factory
79 """
80
81 def __init__(self, dst_host, dst_port, wrapped_factory):
87 dst_host: hostname that we want to CONNECT to
88 dst_port: port that we want to connect to
89 wrapped_factory: The original Factory
90 headers: Extra HTTP headers to include in the CONNECT request
91 """
92
93 def __init__(
94 self,
95 dst_host: bytes,
96 dst_port: int,
97 wrapped_factory: ClientFactory,
98 headers: Headers,
99 ):
82100 self.dst_host = dst_host
83101 self.dst_port = dst_port
84102 self.wrapped_factory = wrapped_factory
103 self.headers = headers
85104 self.on_connection = defer.Deferred()
86105
87106 def startedConnecting(self, connector):
91110 wrapped_protocol = self.wrapped_factory.buildProtocol(addr)
92111
93112 return HTTPConnectProtocol(
94 self.dst_host, self.dst_port, wrapped_protocol, self.on_connection
113 self.dst_host,
114 self.dst_port,
115 wrapped_protocol,
116 self.on_connection,
117 self.headers,
95118 )
96119
97120 def clientConnectionFailed(self, connector, reason):
111134 """Protocol that wraps an existing Protocol to do a CONNECT handshake at connect
112135
113136 Args:
114 host (bytes): The original HTTP(s) hostname or IPv4 or IPv6 address literal
137 host: The original HTTP(s) hostname or IPv4 or IPv6 address literal
115138 to put in the CONNECT request
116139
117 port (int): The original HTTP(s) port to put in the CONNECT request
118
119 wrapped_protocol (interfaces.IProtocol): the original protocol (probably
120 HTTPChannel or TLSMemoryBIOProtocol, but could be anything really)
121
122 connected_deferred (Deferred): a Deferred which will be callbacked with
140 port: The original HTTP(s) port to put in the CONNECT request
141
142 wrapped_protocol: the original protocol (probably HTTPChannel or
143 TLSMemoryBIOProtocol, but could be anything really)
144
145 connected_deferred: a Deferred which will be callbacked with
123146 wrapped_protocol when the CONNECT completes
124 """
125
126 def __init__(self, host, port, wrapped_protocol, connected_deferred):
147
148 headers: Extra HTTP headers to include in the CONNECT request
149 """
150
151 def __init__(
152 self,
153 host: bytes,
154 port: int,
155 wrapped_protocol: Protocol,
156 connected_deferred: defer.Deferred,
157 headers: Headers,
158 ):
127159 self.host = host
128160 self.port = port
129161 self.wrapped_protocol = wrapped_protocol
130162 self.connected_deferred = connected_deferred
131 self.http_setup_client = HTTPConnectSetupClient(self.host, self.port)
163 self.headers = headers
164
165 self.http_setup_client = HTTPConnectSetupClient(
166 self.host, self.port, self.headers
167 )
132168 self.http_setup_client.on_connected.addCallback(self.proxyConnected)
133169
134170 def connectionMade(self):
153189 if buf:
154190 self.wrapped_protocol.dataReceived(buf)
155191
156 def dataReceived(self, data):
192 def dataReceived(self, data: bytes):
157193 # if we've set up the HTTP protocol, we can send the data there
158194 if self.wrapped_protocol.connected:
159195 return self.wrapped_protocol.dataReceived(data)
167203 """HTTPClient protocol to send a CONNECT message for proxies and read the response.
168204
169205 Args:
170 host (bytes): The hostname to send in the CONNECT message
171 port (int): The port to send in the CONNECT message
172 """
173
174 def __init__(self, host, port):
206 host: The hostname to send in the CONNECT message
207 port: The port to send in the CONNECT message
208 headers: Extra headers to send with the CONNECT message
209 """
210
211 def __init__(self, host: bytes, port: int, headers: Headers):
175212 self.host = host
176213 self.port = port
214 self.headers = headers
177215 self.on_connected = defer.Deferred()
178216
179217 def connectionMade(self):
180218 logger.debug("Connected to proxy, sending CONNECT")
181219 self.sendCommand(b"CONNECT", b"%s:%d" % (self.host, self.port))
220
221 # Send any additional specified headers
222 for name, values in self.headers.getAllRawHeaders():
223 for value in values:
224 self.sendHeader(name, value)
225
182226 self.endHeaders()
183227
184 def handleStatus(self, version, status, message):
228 def handleStatus(self, version: bytes, status: bytes, message: bytes):
185229 logger.debug("Got Status: %s %s %s", status, message, version)
186230 if status != b"200":
187231 raise ProxyConnectError("Unexpected status on CONNECT: %s" % status)
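
Putting the pieces together, the handshake this client now performs looks roughly like the exchange below (hypothetical proxy and target names, credentials user:pass; twisted's HTTPClient emits HTTP/1.0 request lines, and any extra headers are written between the command and endHeaders()):

    CONNECT matrix.example.org:443 HTTP/1.0
    Proxy-Authorization: Basic dXNlcjpwYXNz

    HTTP/1.0 200 Connection established

Once a 200 status is read back, the wrapped protocol (typically TLS) takes over the tunnelled socket.
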
7070 logger = logging.getLogger(__name__)
7171
7272
73 _well_known_cache = TTLCache("well-known")
74 _had_valid_well_known_cache = TTLCache("had-valid-well-known")
73 _well_known_cache = TTLCache("well-known") # type: TTLCache[bytes, Optional[bytes]]
74 _had_valid_well_known_cache = TTLCache(
75 "had-valid-well-known"
76 ) # type: TTLCache[bytes, bool]
7577
7678
7779 @attr.s(slots=True, frozen=True)
8789 reactor: IReactorTime,
8890 agent: IAgent,
8991 user_agent: bytes,
90 well_known_cache: Optional[TTLCache] = None,
91 had_well_known_cache: Optional[TTLCache] = None,
92 well_known_cache: Optional[TTLCache[bytes, Optional[bytes]]] = None,
93 had_well_known_cache: Optional[TTLCache[bytes, bool]] = None,
9294 ):
9395 self._reactor = reactor
9496 self._clock = Clock(reactor)
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14 import base64
1415 import logging
1516 import re
17 from typing import Optional, Tuple
1618 from urllib.request import getproxies_environment, proxy_bypass_environment
1719
20 import attr
1821 from zope.interface import implementer
1922
2023 from twisted.internet import defer
2225 from twisted.python.failure import Failure
2326 from twisted.web.client import URI, BrowserLikePolicyForHTTPS, _AgentBase
2427 from twisted.web.error import SchemeNotSupported
28 from twisted.web.http_headers import Headers
2529 from twisted.web.iweb import IAgent
2630
2731 from synapse.http.connectproxyclient import HTTPConnectProxyEndpoint
2933 logger = logging.getLogger(__name__)
3034
3135 _VALID_URI = re.compile(br"\A[\x21-\x7e]+\Z")
36
37
38 @attr.s
39 class ProxyCredentials:
40 username_password = attr.ib(type=bytes)
41
42 def as_proxy_authorization_value(self) -> bytes:
43 """
44 Return the value for a Proxy-Authorization header (i.e. 'Basic abdef==').
45
46 Returns:
47 A transformation of the authentication string the encoded value for
48 a Proxy-Authorization header.
49 """
50 # Encode as base64 and prepend the authorization type
51 return b"Basic " + base64.encodebytes(self.username_password)
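
A minimal doctest-style sketch of the resulting header value (illustrative credentials only; note that base64.encodebytes, unlike base64.b64encode, appends a trailing newline to its output):

    >>> ProxyCredentials(b"user:pass").as_proxy_authorization_value()
    b'Basic dXNlcjpwYXNz\n'
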
3252
3353
3454 @implementer(IAgent)
94114 http_proxy = proxies["http"].encode() if "http" in proxies else None
95115 https_proxy = proxies["https"].encode() if "https" in proxies else None
96116 no_proxy = proxies["no"] if "no" in proxies else None
117
118 # Parse credentials from https proxy connection string if present
119 self.https_proxy_creds, https_proxy = parse_username_password(https_proxy)
97120
98121 self.http_proxy_endpoint = _http_proxy_endpoint(
99122 http_proxy, self.proxy_reactor, **self._endpoint_kwargs
174197 and self.https_proxy_endpoint
175198 and not should_skip_proxy
176199 ):
200 connect_headers = Headers()
201
202 # Determine whether we need to set Proxy-Authorization headers
203 if self.https_proxy_creds:
204 # Set a Proxy-Authorization header
205 connect_headers.addRawHeader(
206 b"Proxy-Authorization",
207 self.https_proxy_creds.as_proxy_authorization_value(),
208 )
209
177210 endpoint = HTTPConnectProxyEndpoint(
178211 self.proxy_reactor,
179212 self.https_proxy_endpoint,
180213 parsed_uri.host,
181214 parsed_uri.port,
215 headers=connect_headers,
182216 )
183217 else:
184218 # not using a proxy
207241 )
208242
209243
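
In deployment terms this wires up the credentials-in-HTTPS_PROXY feature from the changelog. A hypothetical setting (per the docstring of _http_proxy_endpoint below, a protocol scheme such as http:// is not yet understood, and the port defaults to 1080 when omitted):

    HTTPS_PROXY=user:secret@proxy.example.com:8888
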
210 def _http_proxy_endpoint(proxy, reactor, **kwargs):
244 def _http_proxy_endpoint(proxy: Optional[bytes], reactor, **kwargs):
211245 """Parses an http proxy setting and returns an endpoint for the proxy
212246
213247 Args:
214 proxy (bytes|None): the proxy setting
248 proxy: the proxy setting in the form: [<username>:<password>@]<host>[:<port>]
249 Note that compared to other apps, this function currently lacks support
250 for specifying a protocol scheme (i.e. protocol://...).
251
215252 reactor: reactor to be used to connect to the proxy
253
216254 kwargs: other args to be passed to HostnameEndpoint
217255
218256 Returns:
222260 if proxy is None:
223261 return None
224262
225 # currently we only support hostname:port. Some apps also support
226 # protocol://<host>[:port], which allows a way of requiring a TLS connection to the
227 # proxy.
228
263 # Parse the connection string
229264 host, port = parse_host_port(proxy, default_port=1080)
230265 return HostnameEndpoint(reactor, host, port, **kwargs)
231266
232267
233 def parse_host_port(hostport, default_port=None):
234 # could have sworn we had one of these somewhere else...
268 def parse_username_password(proxy: bytes) -> Tuple[Optional[ProxyCredentials], bytes]:
269 """
270 Parses the username and password from a proxy declaration, e.g.
271 username:password@hostname:port.
272
273 Args:
274 proxy: The proxy connection string.
275
276 Returns:
277 An instance of ProxyCredentials and the proxy connection string with any
278 credentials stripped, i.e. u:p@host:port -> host:port. If no credentials
279 were found, None is returned in place of the ProxyCredentials instance.
280 """
281 if proxy and b"@" in proxy:
282 # We use rsplit here as the password could contain an @ character
283 credentials, proxy_without_credentials = proxy.rsplit(b"@", 1)
284 return ProxyCredentials(credentials), proxy_without_credentials
285
286 return None, proxy
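
Illustratively (hypothetical values), the rsplit means an '@' inside the password survives intact:

    >>> creds, hostport = parse_username_password(b"user:p@ss@proxy.example:8888")
    >>> creds.username_password
    b'user:p@ss'
    >>> parse_host_port(hostport, default_port=1080)
    (b'proxy.example', 8888)
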
287
288
289 def parse_host_port(hostport: bytes, default_port: Optional[int] = None) -> Tuple[bytes, int]:
290 """
291 Parse the hostname and port from a proxy connection byte string.
292
293 Args:
294 hostport: The proxy connection string. Must be in the form 'host[:port]'.
295 default_port: The default port to return if one is not found in `hostport`.
296
297 Returns:
298 A tuple containing the hostname and port. Uses `default_port` if one was not found.
299 """
235300 if b":" in hostport:
236301 host, port = hostport.rsplit(b":", 1)
237302 try:
688688 current = current_context()
689689 try:
690690 res = f(*args, **kwargs)
691 except: # noqa: E722
691 except Exception:
692692 # the assumption here is that the caller doesn't want to be disturbed
693693 # by synchronous exceptions, so let's turn them into Failures.
694694 return defer.fail()
168168 import logging
169169 import re
170170 from functools import wraps
171 from typing import TYPE_CHECKING, Dict, Optional, Type
171 from typing import TYPE_CHECKING, Dict, Optional, Pattern, Type
172172
173173 import attr
174174
261261 # Block everything by default
262262 # A regex which matches the server_names to expose traces for.
263263 # None means 'block everything'.
264 _homeserver_whitelist = None
264 _homeserver_whitelist = None # type: Optional[Pattern[str]]
265265
266266 # Util methods
267267
2020 from synapse.types import JsonDict, RoomStreamToken
2121
2222 if TYPE_CHECKING:
23 from synapse.app.homeserver import HomeServer
23 from synapse.server import HomeServer
2424
2525
2626 @attr.s(slots=True)
2121 from synapse.util.metrics import Measure
2222
2323 if TYPE_CHECKING:
24 from synapse.app.homeserver import HomeServer
24 from synapse.server import HomeServer
2525
2626 logger = logging.getLogger(__name__)
2727
3232 from .push_rule_evaluator import PushRuleEvaluatorForEvent
3333
3434 if TYPE_CHECKING:
35 from synapse.app.homeserver import HomeServer
35 from synapse.server import HomeServer
3636
3737 logger = logging.getLogger(__name__)
3838
2323 from synapse.push.mailer import Mailer
2424
2525 if TYPE_CHECKING:
26 from synapse.app.homeserver import HomeServer
26 from synapse.server import HomeServer
2727
2828 logger = logging.getLogger(__name__)
2929
3030 from . import push_rule_evaluator, push_tools
3131
3232 if TYPE_CHECKING:
33 from synapse.app.homeserver import HomeServer
33 from synapse.server import HomeServer
3434
3535 logger = logging.getLogger(__name__)
3636
289289 if rejected is False:
290290 return False
291291
292 if isinstance(rejected, list) or isinstance(rejected, tuple):
292 if isinstance(rejected, (list, tuple)):
293293 for pk in rejected:
294294 if pk != self.pushkey:
295295 # for sanity, we only remove the pushkey if it
3939 from synapse.visibility import filter_events_for_client
4040
4141 if TYPE_CHECKING:
42 from synapse.app.homeserver import HomeServer
42 from synapse.server import HomeServer
4343
4444 logger = logging.getLogger(__name__)
4545
2121 from synapse.push.mailer import Mailer
2222
2323 if TYPE_CHECKING:
24 from synapse.app.homeserver import HomeServer
24 from synapse.server import HomeServer
2525
2626 logger = logging.getLogger(__name__)
2727
1414 # See the License for the specific language governing permissions and
1515 # limitations under the License.
1616
17 import itertools
1718 import logging
1819 from typing import List, Set
1920
8182 "Jinja2>=2.9",
8283 "bleach>=1.4.3",
8384 "typing-extensions>=3.7.4",
85 # We enforce that we have a `cryptography` version that bundles an `openssl`
86 # with the latest security patches.
87 "cryptography>=3.4.7;python_version>='3.6'",
8488 ]
8589
8690 CONDITIONAL_REQUIREMENTS = {
97101 "txacme>=0.9.2",
98102 # txacme depends on eliot. Eliot 1.8.0 is incompatible with
99103 # python 3.5.2, as per https://github.com/itamarst/eliot/issues/418
100 'eliot<1.8.0;python_version<"3.5.3"',
104 "eliot<1.8.0;python_version<'3.5.3'",
101105 ],
102106 "saml2": [
103107 # pysaml2 6.4.0 is incompatible with Python 3.5 (see https://github.com/IdentityPython/pysaml2/issues/749)
127131 ALL_OPTIONAL_REQUIREMENTS = set(optional_deps) | ALL_OPTIONAL_REQUIREMENTS
128132
129133
134 # ensure there are no double-quote characters in any of the deps (otherwise the
135 # 'pip install' incantation in DependencyException will break)
136 for dep in itertools.chain(
137 REQUIREMENTS,
138 *CONDITIONAL_REQUIREMENTS.values(),
139 ):
140 if '"' in dep:
141 raise Exception(
142 "Dependency `%s` contains double-quote; use single-quotes instead" % (dep,)
143 )
144
145
130146 def list_requirements():
131147 return list(set(REQUIREMENTS) | ALL_OPTIONAL_REQUIREMENTS)
132148
146162 @property
147163 def dependencies(self):
148164 for i in self.args[0]:
149 yield "'" + i + "'"
165 yield '"' + i + '"'
150166
151167
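
The motivation, sketched with one of the dependencies above: DependencyException now wraps each unmet dependency in double quotes, so env markers must stick to single quotes or the suggested install command would break in the shell:

    >>> print('"' + "eliot<1.8.0;python_version<'3.5.3'" + '"')
    "eliot<1.8.0;python_version<'3.5.3'"
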
152168 def check_requirements(for_feature=None):
3939 // containing the event
4040 "event_format_version": .., // 1,2,3 etc: the event format version
4141 "internal_metadata": { .. serialized internal_metadata .. },
42 "outlier": true|false,
4243 "rejected_reason": .., // The event.rejected_reason field
4344 "context": { .. serialized event context .. },
4445 }],
8384 "room_version": event.room_version.identifier,
8485 "event_format_version": event.format_version,
8586 "internal_metadata": event.internal_metadata.get_dict(),
87 "outlier": event.internal_metadata.is_outlier(),
8688 "rejected_reason": event.rejected_reason,
8789 "context": serialized_context,
8890 }
115117 event = make_event_from_dict(
116118 event_dict, room_ver, internal_metadata, rejected_reason
117119 )
120 event.internal_metadata.outlier = event_payload["outlier"]
118121
119122 context = EventContext.deserialize(
120123 self.storage, event_payload["context"]
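
An abridged sketch of the replicated payload after this change (placeholder values; room_version and event_format_version are elided here):

    payload = {
        "event": ...,               # event_dict(event)
        "internal_metadata": ...,   # event.internal_metadata.get_dict()
        "outlier": False,           # event.internal_metadata.is_outlier()
        "rejected_reason": None,
        "context": ...,             # serialized EventContext
    }
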
3939 // containing the event
4040 "event_format_version": .., // 1,2,3 etc: the event format version
4141 "internal_metadata": { .. serialized internal_metadata .. },
42 "outlier": true|false,
4243 "rejected_reason": .., // The event.rejected_reason field
4344 "context": { .. serialized event context .. },
4445 "requester": { .. serialized requester .. },
7879 ratelimit (bool)
7980 extra_users (list(UserID)): Any extra users to notify about event
8081 """
81
8282 serialized_context = await context.serialize(event, store)
8383
8484 payload = {
8686 "room_version": event.room_version.identifier,
8787 "event_format_version": event.format_version,
8888 "internal_metadata": event.internal_metadata.get_dict(),
89 "outlier": event.internal_metadata.is_outlier(),
8990 "rejected_reason": event.rejected_reason,
9091 "context": serialized_context,
9192 "requester": requester.serialize(),
107108 event = make_event_from_dict(
108109 event_dict, room_ver, internal_metadata, rejected_reason
109110 )
111 event.internal_metadata.outlier = content["outlier"]
110112
111113 requester = Requester.deserialize(self.store, content["requester"])
112114 context = EventContext.deserialize(self.storage, content["context"])
2323 from ._slaved_id_tracker import SlavedIdTracker
2424
2525 if TYPE_CHECKING:
26 from synapse.app.homeserver import HomeServer
26 from synapse.server import HomeServer
2727
2828
2929 class SlavedPusherStore(PusherWorkerStore, BaseSlavedStore):
311311
312312 NAME = "FEDERATION_ACK"
313313
314 def __init__(self, instance_name, token):
314 def __init__(self, instance_name: str, token: int):
315315 self.instance_name = instance_name
316316 self.token = token
317317
318318 @classmethod
319 def from_line(cls, line):
319 def from_line(cls, line: str) -> "FederationAckCommand":
320320 instance_name, token = line.split(" ")
321321 return cls(instance_name, int(token))
322322
323 def to_line(self):
323 def to_line(self) -> str:
324324 return "%s %s" % (self.instance_name, self.token)
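
For illustration (hypothetical instance name and token; the replication protocol strips the leading FEDERATION_ACK command name before calling from_line):

    >>> cmd = FederationAckCommand.from_line("worker1 5342")
    >>> (cmd.instance_name, cmd.token)
    ('worker1', 5342)
    >>> cmd.to_line()
    'worker1 5342'
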
325325
326326
103103
104104 # A list of all connected protocols. This allows us to send metrics about the
105105 # connections.
106 connected_connections = []
106 connected_connections = [] # type: List[BaseReplicationStreamProtocol]
107107
108108
109109 logger = logging.getLogger(__name__)
3232 from synapse.replication.http.streams import ReplicationGetStreamUpdates
3333
3434 if TYPE_CHECKING:
35 import synapse.server
35 from synapse.server import HomeServer
3636
3737 logger = logging.getLogger(__name__)
3838
298298 NAME = "typing"
299299 ROW_TYPE = TypingStreamRow
300300
301 def __init__(self, hs):
302 typing_handler = hs.get_typing_handler()
303
301 def __init__(self, hs: "HomeServer"):
304302 writer_instance = hs.config.worker.writers.typing
305303 if writer_instance == hs.get_instance_name():
306304 # On the writer, query the typing handler
307 update_function = typing_handler.get_all_typing_updates
305 typing_writer_handler = hs.get_typing_writer_handler()
306 update_function = (
307 typing_writer_handler.get_all_typing_updates
308 ) # type: Callable[[str, int, int, int], Awaitable[Tuple[List[Tuple[int, Any]], int, bool]]]
309 current_token_function = typing_writer_handler.get_current_token
308310 else:
309311 # Query the typing writer process
310312 update_function = make_http_update_function(hs, self.NAME)
311
312 super().__init__(
313 hs.get_instance_name(),
314 current_token_without_instance(typing_handler.get_current_token),
313 current_token_function = hs.get_typing_handler().get_current_token
314
315 super().__init__(
316 hs.get_instance_name(),
317 current_token_without_instance(current_token_function),
315318 update_function,
316319 )
317320
508511 NAME = "account_data"
509512 ROW_TYPE = AccountDataStreamRow
510513
511 def __init__(self, hs: "synapse.server.HomeServer"):
514 def __init__(self, hs: "HomeServer"):
512515 self.store = hs.get_datastore()
513516 super().__init__(
514517 hs.get_instance_name(),
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
1515 from collections import namedtuple
16 from typing import TYPE_CHECKING, Any, Awaitable, Callable, List, Tuple
1617
1718 from synapse.replication.tcp.streams._base import (
1819 Stream,
1920 current_token_without_instance,
2021 make_http_update_function,
2122 )
23
24 if TYPE_CHECKING:
25 from synapse.server import HomeServer
2226
2327
2428 class FederationStream(Stream):
3741 NAME = "federation"
3842 ROW_TYPE = FederationStreamRow
3943
40 def __init__(self, hs):
44 def __init__(self, hs: "HomeServer"):
4145 if hs.config.worker_app is None:
4246 # master process: get updates from the FederationRemoteSendQueue.
4347 # (if the master is configured to send federation itself, federation_sender
4751 current_token = current_token_without_instance(
4852 federation_sender.get_current_token
4953 )
50 update_function = federation_sender.get_replication_rows
54 update_function = (
55 federation_sender.get_replication_rows
56 ) # type: Callable[[str, int, int, int], Awaitable[Tuple[List[Tuple[int, Any]], int, bool]]]
5157
5258 elif hs.should_send_federation():
5359 # federation sender: Query master process
6874 return 0
6975
7076 @staticmethod
71 async def _stub_update_function(instance_name, from_token, upto_token, limit):
77 async def _stub_update_function(
78 instance_name: str, from_token: int, upto_token: int, limit: int
79 ) -> Tuple[list, int, bool]:
7280 return [], upto_token, False
2727 from synapse.types import JsonDict
2828
2929 if TYPE_CHECKING:
30 from synapse.app.homeserver import HomeServer
30 from synapse.server import HomeServer
3131
3232 logger = logging.getLogger(__name__)
3333
389389 async def on_POST(
390390 self, request: SynapseRequest, room_identifier: str
391391 ) -> Tuple[int, JsonDict]:
392 # This will always be set by the time Twisted calls us.
393 assert request.args is not None
394
392395 requester = await self.auth.get_user_by_req(request)
393396 await assert_user_is_admin(self.auth, requester.user)
394397
270270 elif not deactivate and user["deactivated"]:
271271 if (
272272 "password" not in body
273 and self.hs.config.password_localdb_enabled
273 and self.auth_handler.can_change_password()
274274 ):
275275 raise SynapseError(
276276 400, "Must provide a password to re-activate an account."
832832 async def on_GET(
833833 self, request: SynapseRequest, user_id: str
834834 ) -> Tuple[int, JsonDict]:
835 # This will always be set by the time Twisted calls us.
836 assert request.args is not None
837
835838 await assert_requester_is_admin(self.auth, request)
836839
837840 if not self.is_mine(UserID.from_string(user_id)):
1717
1818 import logging
1919 import re
20 from typing import TYPE_CHECKING, List, Optional
20 from typing import TYPE_CHECKING, List, Optional, Tuple
2121 from urllib import parse as urlparse
2222
2323 from synapse.api.constants import EventTypes, Membership
3434 from synapse.http.servlet import (
3535 RestServlet,
3636 assert_params_in_dict,
37 parse_boolean,
3738 parse_integer,
3839 parse_json_object_from_request,
3940 parse_string,
4041 )
42 from synapse.http.site import SynapseRequest
4143 from synapse.logging.opentracing import set_tag
4244 from synapse.rest.client.transactions import HttpTransactionCache
4345 from synapse.rest.client.v2_alpha._base import client_patterns
4446 from synapse.storage.state import StateFilter
4547 from synapse.streams.config import PaginationConfig
46 from synapse.types import RoomAlias, RoomID, StreamToken, ThirdPartyInstanceID, UserID
48 from synapse.types import (
49 JsonDict,
50 RoomAlias,
51 RoomID,
52 StreamToken,
53 ThirdPartyInstanceID,
54 UserID,
55 )
4756 from synapse.util import json_decoder
4857 from synapse.util.stringutils import parse_and_validate_server_name, random_string
4958
5059 if TYPE_CHECKING:
51 import synapse.server
60 from synapse.server import HomeServer
5261
5362 logger = logging.getLogger(__name__)
5463
845854 "/rooms/(?P<room_id>[^/]*)/typing/(?P<user_id>[^/]*)$", v1=True
846855 )
847856
848 def __init__(self, hs):
849 super().__init__()
857 def __init__(self, hs: "HomeServer"):
858 super().__init__()
859 self.hs = hs
850860 self.presence_handler = hs.get_presence_handler()
851 self.typing_handler = hs.get_typing_handler()
852861 self.auth = hs.get_auth()
853862
854863 # If we're not on the typing writer instance we should scream if we get
873882 # Limit timeout to stop people from setting silly typing timeouts.
874883 timeout = min(content.get("timeout", 30000), 120000)
875884
885 # Defer getting the typing handler since it will raise on workers.
886 typing_handler = self.hs.get_typing_writer_handler()
887
876888 try:
877889 if content["typing"]:
878 await self.typing_handler.started_typing(
890 await typing_handler.started_typing(
879891 target_user=target_user,
880892 requester=requester,
881893 room_id=room_id,
882894 timeout=timeout,
883895 )
884896 else:
885 await self.typing_handler.stopped_typing(
897 await typing_handler.stopped_typing(
886898 target_user=target_user, requester=requester, room_id=room_id
887899 )
888900 except ShadowBanError:
900912 ),
901913 ]
902914
903 def __init__(self, hs: "synapse.server.HomeServer"):
915 def __init__(self, hs: "HomeServer"):
904916 super().__init__()
905917 self.auth = hs.get_auth()
906918 self.directory_handler = hs.get_directory_handler()
983995 )
984996
985997
986 def register_servlets(hs, http_server, is_worker=False):
998 class RoomSpaceSummaryRestServlet(RestServlet):
999 PATTERNS = (
1000 re.compile(
1001 "^/_matrix/client/unstable/org.matrix.msc2946"
1002 "/rooms/(?P<room_id>[^/]*)/spaces$"
1003 ),
1004 )
1005
1006 def __init__(self, hs: "HomeServer"):
1007 super().__init__()
1008 self._auth = hs.get_auth()
1009 self._space_summary_handler = hs.get_space_summary_handler()
1010
1011 async def on_GET(
1012 self, request: SynapseRequest, room_id: str
1013 ) -> Tuple[int, JsonDict]:
1014 requester = await self._auth.get_user_by_req(request, allow_guest=True)
1015
1016 return 200, await self._space_summary_handler.get_space_summary(
1017 requester.user.to_string(),
1018 room_id,
1019 suggested_only=parse_boolean(request, "suggested_only", default=False),
1020 max_rooms_per_space=parse_integer(request, "max_rooms_per_space"),
1021 )
1022
1023 async def on_POST(
1024 self, request: SynapseRequest, room_id: str
1025 ) -> Tuple[int, JsonDict]:
1026 requester = await self._auth.get_user_by_req(request, allow_guest=True)
1027 content = parse_json_object_from_request(request)
1028
1029 suggested_only = content.get("suggested_only", False)
1030 if not isinstance(suggested_only, bool):
1031 raise SynapseError(
1032 400, "'suggested_only' must be a boolean", Codes.BAD_JSON
1033 )
1034
1035 max_rooms_per_space = content.get("max_rooms_per_space")
1036 if max_rooms_per_space is not None and not isinstance(max_rooms_per_space, int):
1037 raise SynapseError(
1038 400, "'max_rooms_per_space' must be an integer", Codes.BAD_JSON
1039 )
1040
1041 return 200, await self._space_summary_handler.get_space_summary(
1042 requester.user.to_string(),
1043 room_id,
1044 suggested_only=suggested_only,
1045 max_rooms_per_space=max_rooms_per_space,
1046 )
1047
1048
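
A hypothetical request against the new unstable endpoint (illustrative room ID; suggested_only and max_rooms_per_space arrive as query parameters on GET, while POST accepts the same options in its JSON body):

    GET /_matrix/client/unstable/org.matrix.msc2946/rooms/!space:example.org/spaces?suggested_only=true&max_rooms_per_space=10
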
1049 def register_servlets(hs: "HomeServer", http_server, is_worker=False):
9871050 RoomStateEventRestServlet(hs).register(http_server)
9881051 RoomMemberListRestServlet(hs).register(http_server)
9891052 JoinedRoomMemberListRestServlet(hs).register(http_server)
9971060 RoomTypingRestServlet(hs).register(http_server)
9981061 RoomEventContextServlet(hs).register(http_server)
9991062
1063 if hs.config.experimental.spaces_enabled:
1064 RoomSpaceSummaryRestServlet(hs).register(http_server)
1065
10001066 # Some servlets only get registered for the main process.
10011067 if not is_worker:
10021068 RoomCreateRestServlet(hs).register(http_server)
4444 from ._base import client_patterns, interactive_auth_handler
4545
4646 if TYPE_CHECKING:
47 from synapse.app.homeserver import HomeServer
47 from synapse.server import HomeServer
4848
4949
5050 logger = logging.getLogger(__name__)
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414 import logging
15 from typing import TYPE_CHECKING, Tuple
1516
1617 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
1718 from synapse.http.servlet import RestServlet
19 from synapse.http.site import SynapseRequest
20 from synapse.types import JsonDict
1821
1922 from ._base import client_patterns
23
24 if TYPE_CHECKING:
25 from synapse.server import HomeServer
2026
2127 logger = logging.getLogger(__name__)
2228
2632
2733 PATTERNS = client_patterns("/capabilities$")
2834
29 def __init__(self, hs):
30 """
31 Args:
32 hs (synapse.server.HomeServer): server
33 """
35 def __init__(self, hs: "HomeServer"):
3436 super().__init__()
3537 self.hs = hs
3638 self.config = hs.config
3739 self.auth = hs.get_auth()
38 self.store = hs.get_datastore()
40 self.auth_handler = hs.get_auth_handler()
3941
40 async def on_GET(self, request):
41 requester = await self.auth.get_user_by_req(request, allow_guest=True)
42 user = await self.store.get_user_by_id(requester.user.to_string())
43 change_password = bool(user["password_hash"])
42 async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
43 await self.auth.get_user_by_req(request, allow_guest=True)
44 change_password = self.auth_handler.can_change_password()
4445
4546 response = {
4647 "capabilities": {
5758 return 200, response
5859
5960
60 def register_servlets(hs, http_server):
61 def register_servlets(hs: "HomeServer", http_server):
6162 CapabilitiesRestServlet(hs).register(http_server)
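
With this change the endpoint reflects the configured authentication stack rather than the stored password hash. An abridged, illustrative response when the local password database is not used:

    {
      "capabilities": {
        "m.change_password": {"enabled": false},
        "m.room_versions": { ... }
      }
    }
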
3737 from ._base import client_patterns
3838
3939 if TYPE_CHECKING:
40 from synapse.app.homeserver import HomeServer
40 from synapse.server import HomeServer
4141
4242 logger = logging.getLogger(__name__)
4343
1414
1515 import itertools
1616 import logging
17 from typing import TYPE_CHECKING, Tuple
1718
1819 from synapse.api.constants import PresenceState
1920 from synapse.api.errors import Codes, StoreError, SynapseError
2526 from synapse.handlers.presence import format_user_presence_state
2627 from synapse.handlers.sync import SyncConfig
2728 from synapse.http.servlet import RestServlet, parse_boolean, parse_integer, parse_string
28 from synapse.types import StreamToken
29 from synapse.http.site import SynapseRequest
30 from synapse.types import JsonDict, StreamToken
2931 from synapse.util import json_decoder
3032
3133 from ._base import client_patterns, set_timeline_upper_limit
34
35 if TYPE_CHECKING:
36 from synapse.server import HomeServer
3237
3338 logger = logging.getLogger(__name__)
3439
7277 PATTERNS = client_patterns("/sync$")
7378 ALLOWED_PRESENCE = {"online", "offline", "unavailable"}
7479
75 def __init__(self, hs):
80 def __init__(self, hs: "HomeServer"):
7681 super().__init__()
7782 self.hs = hs
7883 self.auth = hs.get_auth()
8489 self._server_notices_sender = hs.get_server_notices_sender()
8590 self._event_serializer = hs.get_event_client_serializer()
8691
87 async def on_GET(self, request):
92 async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
93 # This will always be set by the time Twisted calls us.
94 assert request.args is not None
95
8896 if b"from" in request.args:
8997 # /events used to use 'from', but /sync uses 'since'.
9098 # Lets be helpful and whine if we see a 'from'.
8080 "io.element.e2ee_forced.public": self.e2ee_forced_public,
8181 "io.element.e2ee_forced.private": self.e2ee_forced_private,
8282 "io.element.e2ee_forced.trusted_private": self.e2ee_forced_trusted_private,
83 # Supports the busy presence state described in MSC3026.
84 "org.matrix.msc3026.busy_presence": self.config.experimental.msc3026_enabled,
8385 },
8486 },
8587 )
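
Clients can feature-detect busy presence before attempting to use it. An abridged, illustrative fragment of the /_matrix/client/versions response when the experimental flag is enabled:

    {
      "unstable_features": {
        "org.matrix.msc3026.busy_presence": true
      }
    }
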
2222 from synapse.http.site import SynapseRequest
2323
2424 if TYPE_CHECKING:
25 from synapse.app.homeserver import HomeServer
25 from synapse.server import HomeServer
2626
2727
2828 class MediaConfigResource(DirectServeJsonResource):
2323 from ._base import parse_media_id, respond_404
2424
2525 if TYPE_CHECKING:
26 from synapse.app.homeserver import HomeServer
2726 from synapse.rest.media.v1.media_repository import MediaRepository
27 from synapse.server import HomeServer
2828
2929 logger = logging.getLogger(__name__)
3030
5757 from .upload_resource import UploadResource
5858
5959 if TYPE_CHECKING:
60 from synapse.app.homeserver import HomeServer
60 from synapse.server import HomeServer
6161
6262 logger = logging.getLogger(__name__)
6363
5353 if TYPE_CHECKING:
5454 from lxml import etree
5555
56 from synapse.app.homeserver import HomeServer
5756 from synapse.rest.media.v1.media_repository import MediaRepository
57 from synapse.server import HomeServer
5858
5959 logger = logging.getLogger(__name__)
6060
186186 respond_with_json(request, 200, {}, send_cors=True)
187187
188188 async def _async_render_GET(self, request: SynapseRequest) -> None:
189 # This will always be set by the time Twisted calls us.
190 assert request.args is not None
189191
190192 # XXX: if get_user_by_req fails, what should we do in an async render?
191193 requester = await self.auth.get_user_by_req(request)
2828 logger = logging.getLogger(__name__)
2929
3030 if TYPE_CHECKING:
31 from synapse.app.homeserver import HomeServer
31 from synapse.server import HomeServer
3232
3333
3434 class StorageProvider(metaclass=abc.ABCMeta):
3333 )
3434
3535 if TYPE_CHECKING:
36 from synapse.app.homeserver import HomeServer
3736 from synapse.rest.media.v1.media_repository import MediaRepository
37 from synapse.server import HomeServer
3838
3939 logger = logging.getLogger(__name__)
4040
2525 from synapse.rest.media.v1.media_storage import SpamMediaException
2626
2727 if TYPE_CHECKING:
28 from synapse.app.homeserver import HomeServer
2928 from synapse.rest.media.v1.media_repository import MediaRepository
29 from synapse.server import HomeServer
3030
3131 logger = logging.getLogger(__name__)
3232
103103 respond_with_html(request, 200, html)
104104
105105 async def _async_render_POST(self, request: SynapseRequest):
106 # This will always be set by the time Twisted calls us.
107 assert request.args is not None
108
106109 try:
107110 session_id = get_username_mapping_session_cookie_from_request(request)
108111 except SynapseError as e:
2525 import secrets
2626
2727 class Secrets:
28 def token_bytes(self, nbytes=32):
28 def token_bytes(self, nbytes: int = 32) -> bytes:
2929 return secrets.token_bytes(nbytes)
3030
31 def token_hex(self, nbytes=32):
31 def token_hex(self, nbytes: int = 32) -> str:
3232 return secrets.token_hex(nbytes)
3333
3434
3737 import os
3838
3939 class Secrets:
40 def token_bytes(self, nbytes=32):
40 def token_bytes(self, nbytes: int = 32) -> bytes:
4141 return os.urandom(nbytes)
4242
43 def token_hex(self, nbytes=32):
43 def token_hex(self, nbytes: int = 32) -> str:
4444 return binascii.hexlify(self.token_bytes(nbytes)).decode("ascii")
5959 FederationServer,
6060 )
6161 from synapse.federation.send_queue import FederationRemoteSendQueue
62 from synapse.federation.sender import FederationSender
62 from synapse.federation.sender import AbstractFederationSender, FederationSender
6363 from synapse.federation.transport.client import TransportLayerClient
6464 from synapse.groups.attestations import GroupAttestationSigning, GroupAttestionRenewer
6565 from synapse.groups.groups_server import GroupsServerHandler, GroupsServerWorkerHandler
9595 RoomShutdownHandler,
9696 )
9797 from synapse.handlers.room_list import RoomListHandler
98 from synapse.handlers.room_member import RoomMemberMasterHandler
98 from synapse.handlers.room_member import RoomMemberHandler, RoomMemberMasterHandler
9999 from synapse.handlers.room_member_worker import RoomMemberWorkerHandler
100100 from synapse.handlers.search import SearchHandler
101101 from synapse.handlers.set_password import SetPasswordHandler
102 from synapse.handlers.space_summary import SpaceSummaryHandler
102103 from synapse.handlers.sso import SsoHandler
103104 from synapse.handlers.stats import StatsHandler
104105 from synapse.handlers.sync import SyncHandler
416417 return PresenceHandler(self)
417418
418419 @cache_in_self
419 def get_typing_handler(self):
420 def get_typing_writer_handler(self) -> TypingWriterHandler:
420421 if self.config.worker.writers.typing == self.get_instance_name():
421422 return TypingWriterHandler(self)
423 else:
424 raise Exception("Workers cannot write typing")
425
426 @cache_in_self
427 def get_typing_handler(self) -> FollowerTypingHandler:
428 if self.config.worker.writers.typing == self.get_instance_name():
429 # Use get_typing_writer_handler to ensure that we use the same
430 # cached version.
431 return self.get_typing_writer_handler()
422432 else:
423433 return FollowerTypingHandler(self)
424434
560570 return TransportLayerClient(self)
561571
562572 @cache_in_self
563 def get_federation_sender(self):
573 def get_federation_sender(self) -> AbstractFederationSender:
564574 if self.should_send_federation():
565575 return FederationSender(self)
566576 elif not self.config.worker_app:
629639 return ThirdPartyEventRules(self)
630640
631641 @cache_in_self
632 def get_room_member_handler(self):
642 def get_room_member_handler(self) -> RoomMemberHandler:
633643 if self.config.worker_app:
634644 return RoomMemberWorkerHandler(self)
635645 return RoomMemberMasterHandler(self)
639649 return FederationHandlerRegistry(self)
640650
641651 @cache_in_self
642 def get_server_notices_manager(self):
652 def get_server_notices_manager(self) -> ServerNoticesManager:
643653 if self.config.worker_app:
644654 raise Exception("Workers cannot send server notices")
645655 return ServerNoticesManager(self)
646656
647657 @cache_in_self
648 def get_server_notices_sender(self):
658 def get_server_notices_sender(self) -> WorkerServerNoticesSender:
649659 if self.config.worker_app:
650660 return WorkerServerNoticesSender(self)
651661 return ServerNoticesSender(self)
721731 @cache_in_self
722732 def get_account_data_handler(self) -> AccountDataHandler:
723733 return AccountDataHandler(self)
734
735 @cache_in_self
736 def get_space_summary_handler(self) -> SpaceSummaryHandler:
737 return SpaceSummaryHandler(self)
724738
725739 @cache_in_self
726740 def get_external_cache(self) -> ExternalCache:
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414 import logging
15 from typing import Any
15 from typing import TYPE_CHECKING, Any, Set
1616
1717 from synapse.api.errors import SynapseError
1818 from synapse.api.urls import ConsentURIBuilder
1919 from synapse.config import ConfigError
2020 from synapse.types import get_localpart_from_id
21
22 if TYPE_CHECKING:
23 from synapse.server import HomeServer
2124
2225 logger = logging.getLogger(__name__)
2326
2730 privacy policy consent, and sends one if we do.
2831 """
2932
30 def __init__(self, hs):
31 """
32
33 Args:
34 hs (synapse.server.HomeServer):
35 """
33 def __init__(self, hs: "HomeServer"):
3634 self._server_notices_manager = hs.get_server_notices_manager()
3735 self._store = hs.get_datastore()
3836
39 self._users_in_progress = set()
37 self._users_in_progress = set() # type: Set[str]
4038
4139 self._current_consent_version = hs.config.user_consent_version
4240 self._server_notice_content = hs.config.user_consent_server_notice_content
7169 self._users_in_progress.add(user_id)
7270 try:
7371 u = await self._store.get_user_by_id(user_id)
72
73 # The user doesn't exist.
74 if u is None:
75 return
7476
7577 if u["is_guest"] and not self._send_to_guests:
7678 # don't send to guests
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414 import logging
15 from typing import List, Tuple
15 from typing import TYPE_CHECKING, List, Tuple
1616
1717 from synapse.api.constants import (
1818 EventTypes,
2323 from synapse.api.errors import AuthError, ResourceLimitError, SynapseError
2424 from synapse.server_notices.server_notices_manager import SERVER_NOTICE_ROOM_TAG
2525
26 if TYPE_CHECKING:
27 from synapse.server import HomeServer
28
2629 logger = logging.getLogger(__name__)
2730
2831
3134 ensures that the client is kept up to date.
3235 """
3336
34 def __init__(self, hs):
35 """
36 Args:
37 hs (synapse.server.HomeServer):
38 """
37 def __init__(self, hs: "HomeServer"):
3938 self._server_notices_manager = hs.get_server_notices_manager()
4039 self._store = hs.get_datastore()
4140 self._auth = hs.get_auth()
5757 user_id: str,
5858 event_content: dict,
5959 type: str = EventTypes.Message,
60 state_key: Optional[bool] = None,
60 state_key: Optional[str] = None,
6161 ) -> EventBase:
6262 """Send a notice to the given user
6363
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14 from typing import Iterable, Union
14 from typing import TYPE_CHECKING, Iterable, Union
1515
1616 from synapse.server_notices.consent_server_notices import ConsentServerNotices
1717 from synapse.server_notices.resource_limits_server_notices import (
1818 ResourceLimitsServerNotices,
1919 )
20 from synapse.server_notices.worker_server_notices_sender import (
21 WorkerServerNoticesSender,
22 )
23
24 if TYPE_CHECKING:
25 from synapse.server import HomeServer
2026
2127
22 class ServerNoticesSender:
28 class ServerNoticesSender(WorkerServerNoticesSender):
2329 """A centralised place which sends server notices automatically when
2430 certain events take place
2531 """
2632
27 def __init__(self, hs):
28 """
29
30 Args:
31 hs (synapse.server.HomeServer):
32 """
33 def __init__(self, hs: "HomeServer"):
34 super().__init__(hs)
3335 self._server_notices = (
3436 ConsentServerNotices(hs),
3537 ResourceLimitsServerNotices(hs),
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14 from typing import TYPE_CHECKING
15
16 if TYPE_CHECKING:
17 from synapse.server import HomeServer
1418
1519
1620 class WorkerServerNoticesSender:
1721 """Stub impl of ServerNoticesSender which does nothing"""
1822
19 def __init__(self, hs):
20 """
21 Args:
22 hs (synapse.server.HomeServer):
23 """
23 def __init__(self, hs: "HomeServer"):
24 pass
2425
2526 async def on_user_syncing(self, user_id: str) -> None:
2627 """Called when the user performs a sync operation.
3535 from synapse.storage.state import StateGroupStorage
3636
3737 if TYPE_CHECKING:
38 from synapse.app.homeserver import HomeServer
38 from synapse.server import HomeServer
3939
4040
4141 __all__ = ["Databases", "DataStore"]
2626 from synapse.util import json_decoder
2727
2828 if TYPE_CHECKING:
29 from synapse.app.homeserver import HomeServer
29 from synapse.server import HomeServer
3030
3131 logger = logging.getLogger(__name__)
3232
2222 from . import engines
2323
2424 if TYPE_CHECKING:
25 from synapse.app.homeserver import HomeServer
25 from synapse.server import HomeServer
2626 from synapse.storage.database import DatabasePool, LoggingTransaction
2727
2828 logger = logging.getLogger(__name__)
669669
670670 for after_callback, after_args, after_kwargs in after_callbacks:
671671 after_callback(*after_args, **after_kwargs)
672 except: # noqa: E722, as we reraise the exception this is fine.
672 except Exception:
673673 for after_callback, after_args, after_kwargs in exception_callbacks:
674674 after_callback(*after_args, **after_kwargs)
675675 raise
19051905 retcols: Iterable[str],
19061906 filters: Optional[Dict[str, Any]] = None,
19071907 keyvalues: Optional[Dict[str, Any]] = None,
1908 exclude_keyvalues: Optional[Dict[str, Any]] = None,
19081909 order_direction: str = "ASC",
19091910 ) -> List[Dict[str, Any]]:
19101911 """
19281929 apply a WHERE ? LIKE ? clause.
19291930 keyvalues:
19301931 column names and values to select the rows with, or None to not
1931 apply a WHERE clause.
1932 apply a WHERE key = value clause.
1933 exclude_keyvalues:
1934 column names and values to exclude rows with, or None to not
1935 apply a WHERE key != value clause.
19321936 order_direction: Whether the results should be ordered "ASC" or "DESC".
19331937
19341938 Returns:
19371941 if order_direction not in ["ASC", "DESC"]:
19381942 raise ValueError("order_direction must be one of 'ASC' or 'DESC'.")
19391943
1940 where_clause = "WHERE " if filters or keyvalues else ""
1944 where_clause = "WHERE " if filters or keyvalues or exclude_keyvalues else ""
19411945 arg_list = [] # type: List[Any]
19421946 if filters:
19431947 where_clause += " AND ".join("%s LIKE ?" % (k,) for k in filters)
19461950 if keyvalues:
19471951 where_clause += " AND ".join("%s = ?" % (k,) for k in keyvalues)
19481952 arg_list += list(keyvalues.values())
1953 if exclude_keyvalues:
1954 where_clause += " AND ".join("%s != ?" % (k,) for k in exclude_keyvalues)
1955 arg_list += list(exclude_keyvalues.values())
19491956
19501957 sql = "SELECT %s FROM %s %s ORDER BY %s %s LIMIT ? OFFSET ?" % (
19511958 ", ".join(retcols),
3131 from synapse.util import json_encoder
3232
3333 if TYPE_CHECKING:
34 from synapse.app.homeserver import HomeServer
34 from synapse.server import HomeServer
3535
3636 logger = logging.getLogger(__name__)
3737
1313 # limitations under the License.
1414
1515 import logging
16 from typing import List, Tuple
16 from typing import List, Optional, Tuple
1717
1818 from synapse.logging.opentracing import log_kv, set_tag, trace
1919 from synapse.replication.tcp.streams import ToDeviceStream
114114 async def get_new_messages_for_device(
115115 self,
116116 user_id: str,
117 device_id: str,
117 device_id: Optional[str],
118118 last_stream_id: int,
119119 current_stream_id: int,
120120 limit: int = 100,
162162
163163 @trace
164164 async def delete_messages_for_device(
165 self, user_id: str, device_id: str, up_to_stream_id: int
165 self, user_id: str, device_id: Optional[str], up_to_stream_id: int
166166 ) -> int:
167167 """
168168 Args:
792792
793793 return int(min_depth) if min_depth is not None else None
794794
795 async def get_forward_extremeties_for_room(
795 async def get_forward_extremities_for_room_at_stream_ordering(
796796 self, room_id: str, stream_ordering: int
797797 ) -> List[str]:
798798 """For a given room_id and stream_ordering, return the forward
12691269 logger.exception("")
12701270 raise
12711271
1272 # update the stored internal_metadata to reflect the new "outlier" flag.
1273 # TODO: This is unused as of Synapse 1.31. Remove it once we are happy
1274 # to drop backwards-compatibility with 1.30.
12721275 metadata_json = json_encoder.encode(event.internal_metadata.get_dict())
1273
12741276 sql = "UPDATE event_json SET internal_metadata = ? WHERE event_id = ?"
12751277 txn.execute(sql, (metadata_json, event.event_id))
12761278
13181320 d.pop("redacted_because", None)
13191321 return d
13201322
1323 def get_internal_metadata(event):
1324 im = event.internal_metadata.get_dict()
1325
1326 # temporary hack for database compatibility with Synapse 1.30 and earlier:
1327 # store the `outlier` flag inside the internal_metadata json as well as in
1328 # the `events` table, so that if anyone rolls back to an older Synapse,
1329 # things keep working. This can be removed once we are happy to drop support
1330 # for that
1331 if event.internal_metadata.is_outlier():
1332 im["outlier"] = True
1333
1334 return im
1335
13211336 self.db_pool.simple_insert_many_txn(
13221337 txn,
13231338 table="event_json",
13261341 "event_id": event.event_id,
13271342 "room_id": event.room_id,
13281343 "internal_metadata": json_encoder.encode(
1329 event.internal_metadata.get_dict()
1344 get_internal_metadata(event)
13301345 ),
13311346 "json": json_encoder.encode(event_dict(event)),
13321347 "format_version": event.format_version,
798798 rejected_reason=rejected_reason,
799799 )
800800 original_ev.internal_metadata.stream_ordering = row["stream_ordering"]
801 original_ev.internal_metadata.outlier = row["outlier"]
801802
802803 event_map[event_id] = original_ev
803804
904905 ej.json,
905906 ej.format_version,
906907 r.room_version,
907 rej.reason
908 rej.reason,
909 e.outlier
908910 FROM events AS e
909911 JOIN event_json AS ej USING (event_id)
910912 LEFT JOIN rooms r ON r.room_id = e.room_id
928930 "room_version_id": row[5],
929931 "rejected_reason": row[6],
930932 "redactions": [],
933 "outlier": row[7],
931934 }
932935
933936 # check for redactions
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414 import logging
15 from typing import Dict, List
15 from typing import Dict, List, Optional
1616
1717 from synapse.metrics.background_process_metrics import wrap_as_background_process
1818 from synapse.storage._base import SQLBaseStore
108108 return users
109109
110110 @cached(num_args=1)
111 async def user_last_seen_monthly_active(self, user_id: str) -> int:
111 async def user_last_seen_monthly_active(self, user_id: str) -> Optional[int]:
112112 """
113113 Checks if a given user is part of the monthly active user group
114114
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414
15 from typing import List, Tuple
15 from typing import Dict, List, Tuple
1616
1717 from synapse.api.presence import UserPresenceState
1818 from synapse.storage._base import SQLBaseStore, make_in_list_sql_clause
156156
157157 return {row["user_id"]: UserPresenceState(**row) for row in rows}
158158
159 async def get_presence_for_all_users(
160 self,
161 include_offline: bool = True,
162 ) -> Dict[str, UserPresenceState]:
163 """Retrieve the current presence state for all users.
164
165 Note that the presence_stream table is culled frequently, so it should only
166 contain the latest presence state for each user.
167
168 Args:
169 include_offline: Whether to include offline presence states
170
171 Returns:
172 A dict of user IDs to their current UserPresenceState.
173 """
174 users_to_state = {}
175
176 exclude_keyvalues = None
177 if not include_offline:
178 # Exclude offline presence state
179 exclude_keyvalues = {"state": "offline"}
180
181 # This may be a very heavy database query.
182 # We paginate in order to not block a database connection.
183 limit = 100
184 offset = 0
185 while True:
186 rows = await self.db_pool.runInteraction(
187 "get_presence_for_all_users",
188 self.db_pool.simple_select_list_paginate_txn,
189 "presence_stream",
190 orderby="stream_id",
191 start=offset,
192 limit=limit,
193 exclude_keyvalues=exclude_keyvalues,
194 retcols=(
195 "user_id",
196 "state",
197 "last_active_ts",
198 "last_federation_update_ts",
199 "last_user_sync_ts",
200 "status_msg",
201 "currently_active",
202 ),
203 order_direction="ASC",
204 )
205
206 for row in rows:
207 users_to_state[row["user_id"]] = UserPresenceState(**row)
208
209 # We've run out of updates to query
210 if len(rows) < limit:
211 break
212
213 offset += limit
214
215 return users_to_state
216
159217 def get_current_presence_token(self):
160218 return self._presence_id_gen.get_current_token()
2626 from synapse.util.caches.descriptors import cached, cachedList
2727
2828 if TYPE_CHECKING:
29 from synapse.app.homeserver import HomeServer
29 from synapse.server import HomeServer
3030
3131 logger = logging.getLogger(__name__)
3232
12091209 self._invalidate_cache_and_stream(
12101210 txn, self.get_user_deactivated_status, (user_id,)
12111211 )
1212 self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
12121213 txn.call_after(self.is_guest.invalidate, (user_id,))
12131214
12141215 @cached()
2121 from synapse.metrics.background_process_metrics import wrap_as_background_process
2222 from synapse.storage._base import SQLBaseStore, db_to_json
2323 from synapse.storage.database import DatabasePool, LoggingTransaction
24 from synapse.storage.engines import PostgresEngine, Sqlite3Engine
2524 from synapse.types import JsonDict
2625 from synapse.util.caches.expiringcache import ExpiringCache
2726
311310 stream_ordering: the stream_ordering of the event
312311 """
313312
314 return await self.db_pool.runInteraction(
315 "store_destination_rooms_entries",
316 self._store_destination_rooms_entries_txn,
317 destinations,
318 room_id,
319 stream_ordering,
320 )
321
322 def _store_destination_rooms_entries_txn(
323 self,
324 txn: LoggingTransaction,
325 destinations: Iterable[str],
326 room_id: str,
327 stream_ordering: int,
328 ) -> None:
329
330 # ensure we have a `destinations` row for this destination, as there is
331 # a foreign key constraint.
332 if isinstance(self.database_engine, PostgresEngine):
333 q = """
334 INSERT INTO destinations (destination)
335 VALUES (?)
336 ON CONFLICT DO NOTHING;
337 """
338 elif isinstance(self.database_engine, Sqlite3Engine):
339 q = """
340 INSERT OR IGNORE INTO destinations (destination)
341 VALUES (?);
342 """
343 else:
344 raise RuntimeError("Unknown database engine")
345
346 txn.execute_batch(q, ((destination,) for destination in destinations))
313 await self.db_pool.simple_upsert_many(
314 table="destinations",
315 key_names=("destination",),
316 key_values=[(d,) for d in destinations],
317 value_names=[],
318 value_values=[],
319 desc="store_destination_rooms_entries_dests",
320 )
347321
348322 rows = [(destination, room_id) for destination in destinations]
349
350 self.db_pool.simple_upsert_many_txn(
351 txn,
323 await self.db_pool.simple_upsert_many(
352324 table="destination_rooms",
353325 key_names=("destination", "room_id"),
354326 key_values=rows,
355327 value_names=["stream_ordering"],
356328 value_values=[(stream_ordering,)] * len(rows),
329 desc="store_destination_rooms_entries_rooms",
357330 )
358331
359332 async def get_destination_last_successful_stream_ordering(
182182 requests state from the cache, if False we need to query the DB for the
183183 missing state.
184184 """
185 is_all, known_absent, state_dict_ids = cache.get(group)
186
187 if is_all or state_filter.is_full():
185 cache_entry = cache.get(group)
186 state_dict_ids = cache_entry.value
187
188 if cache_entry.full or state_filter.is_full():
188189 # Either we have everything or want everything, either way
189190 # `is_all` tells us whether we've gotten everything.
190 return state_filter.filter_state(state_dict_ids), is_all
191 return state_filter.filter_state(state_dict_ids), cache_entry.full
191192
192193 # tracks whether any of our requested types are missing from the cache
193194 missing_types = False
201202 # There aren't any wild cards, so `concrete_types()` returns the
202203 # complete list of event types we're wanting.
203204 for key in state_filter.concrete_types():
204 if key not in state_dict_ids and key not in known_absent:
205 if key not in state_dict_ids and key not in cache_entry.known_absent:
205206 missing_types = True
206207 break
207208
1919 from synapse.storage.databases import Databases
2020
2121 if TYPE_CHECKING:
22 from synapse.app.homeserver import HomeServer
22 from synapse.server import HomeServer
2323
2424 logger = logging.getLogger(__name__)
2525
3131 from synapse.types import MutableStateMap, StateMap
3232
3333 if TYPE_CHECKING:
34 from synapse.app.homeserver import HomeServer
34 from synapse.server import HomeServer
3535 from synapse.storage.databases import Databases
3636
3737 logger = logging.getLogger(__name__)
448448 return self.stores.state._get_state_groups_from_groups(groups, state_filter)
449449
450450 async def get_state_for_events(
451 self, event_ids: List[str], state_filter: StateFilter = StateFilter.all()
451 self, event_ids: Iterable[str], state_filter: StateFilter = StateFilter.all()
452452 ) -> Dict[str, StateMap[EventBase]]:
453453 """Given a list of event_ids and type tuples, return a list of state
454454 dicts for each event.
484484 return {event: event_to_state[event] for event in event_ids}
485485
486486 async def get_state_ids_for_events(
487 self, event_ids: List[str], state_filter: StateFilter = StateFilter.all()
487 self, event_ids: Iterable[str], state_filter: StateFilter = StateFilter.all()
488488 ) -> Dict[str, StateMap[str]]:
489489 """
490490 Get the state dicts corresponding to a list of events, containing the event_ids
495495
496496 try:
497497 deferred.cancel()
498 except: # noqa: E722, if we throw any exception it'll break time outs
498 except Exception: # if we throw any exception it'll break time outs
499499 logger.exception("Canceller failed during timeout")
500500
501501 # the cancel() call should have set off a chain of errbacks which
2424
2525 logger = logging.getLogger(__name__)
2626
27 caches_by_name = {}
28 collectors_by_name = {} # type: Dict
27 caches_by_name = {} # type: Dict[str, Sized]
28 collectors_by_name = {} # type: Dict[str, CacheMetric]
2929
3030 cache_size = Gauge("synapse_util_caches_cache:size", "", ["name"])
3131 cache_hits = Gauge("synapse_util_caches_cache:hits", "", ["name"])
115115 """
116116 if resizable:
117117 if not resize_callback:
118 resize_callback = getattr(cache, "set_cache_factor")
118 resize_callback = cache.set_cache_factor # type: ignore
119119 add_resizable_cache(cache_name, resize_callback)
120120
121121 metric = CacheMetric(cache, cache_type, cache_name, collect_callback)
1414 import enum
1515 import logging
1616 import threading
17 from collections import namedtuple
18 from typing import Any
17 from typing import Any, Dict, Generic, Iterable, Optional, Set, TypeVar
18
19 import attr
1920
2021 from synapse.util.caches.lrucache import LruCache
2122
2223 logger = logging.getLogger(__name__)
2324
2425
25 class DictionaryEntry(namedtuple("DictionaryEntry", ("full", "known_absent", "value"))):
26 # The type of the cache keys.
27 KT = TypeVar("KT")
28 # The type of the dictionary keys.
29 DKT = TypeVar("DKT")
30
31
32 @attr.s(slots=True)
33 class DictionaryEntry:
2634 """Returned when getting an entry from the cache
2735
2836 Attributes:
29 full (bool): Whether the cache has the full or dict or just some keys.
37 full: Whether the cache has the full dict or just some keys.
3038 If not full then not all requested keys will necessarily be present
3139 in `value`
32 known_absent (set): Keys that were looked up in the dict and were not
40 known_absent: Keys that were looked up in the dict and were not
3341 there.
34 value (dict): The full or partial dict value
42 value: The full or partial dict value
3543 """
44
45 full = attr.ib(type=bool)
46 known_absent = attr.ib()
47 value = attr.ib()
3648
3749 def __len__(self):
3850 return len(self.value)
4456 sentinel = object()
4557
4658
47 class DictionaryCache:
59 class DictionaryCache(Generic[KT, DKT]):
4860 """Caches key -> dictionary lookups, supporting caching partial dicts, i.e.
4961 fetching a subset of dictionary keys for a particular key.
5062 """
5163
52 def __init__(self, name, max_entries=1000):
64 def __init__(self, name: str, max_entries: int = 1000):
5365 self.cache = LruCache(
5466 max_size=max_entries, cache_name=name, size_callback=len
55 ) # type: LruCache[Any, DictionaryEntry]
67 ) # type: LruCache[KT, DictionaryEntry]
5668
5769 self.name = name
5870 self.sequence = 0
59 self.thread = None
71 self.thread = None # type: Optional[threading.Thread]
6072
61 def check_thread(self):
73 def check_thread(self) -> None:
6274 expected_thread = self.thread
6375 if expected_thread is None:
6476 self.thread = threading.current_thread()
6880 "Cache objects can only be accessed from the main thread"
6981 )
7082
71 def get(self, key, dict_keys=None):
83 def get(
84 self, key: KT, dict_keys: Optional[Iterable[DKT]] = None
85 ) -> DictionaryEntry:
7286 """Fetch an entry out of the cache
7387
7488 Args:
7589 key
76 dict_key(list): If given a set of keys then return only those keys
90 dict_keys: If given a set of keys, return only those keys
7791 that exist in the cache.
7892
7993 Returns:
94108
95109 return DictionaryEntry(False, set(), {})
96110
97 def invalidate(self, key):
111 def invalidate(self, key: KT) -> None:
98112 self.check_thread()
99113
100114 # Increment the sequence number so that any SELECT statements that
102116 self.sequence += 1
103117 self.cache.pop(key, None)
104118
105 def invalidate_all(self):
119 def invalidate_all(self) -> None:
106120 self.check_thread()
107121 self.sequence += 1
108122 self.cache.clear()
109123
110 def update(self, sequence, key, value, fetched_keys=None):
124 def update(
125 self,
126 sequence: int,
127 key: KT,
128 value: Dict[DKT, Any],
129 fetched_keys: Optional[Set[DKT]] = None,
130 ) -> None:
111131 """Updates the entry in the cache
112132
113133 Args:
114134 sequence
115 key (K)
116 value (dict[X,Y]): The value to update the cache with.
117 fetched_keys (None|set[X]): All of the dictionary keys which were
135 key
136 value: The value to update the cache with.
137 fetched_keys: All of the dictionary keys which were
118138 fetched from the database.
119139
120140 If None, this is the complete value for key K. Otherwise, it
130150 else:
131151 self._update_or_insert(key, value, fetched_keys)
132152
133 def _update_or_insert(self, key, value, known_absent):
153 def _update_or_insert(
154 self, key: KT, value: Dict[DKT, Any], known_absent: Set[DKT]
155 ) -> None:
134156 # We pop and reinsert as we need to tell the cache the size may have
135157 # changed
136158
139161 entry.known_absent.update(known_absent)
140162 self.cache[key] = entry
141163
142 def _insert(self, key, value, known_absent):
164 def _insert(self, key: KT, value: Dict[DKT, Any], known_absent: Set[DKT]) -> None:
143165 self.cache[key] = DictionaryEntry(True, known_absent, value)
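A practical consequence of converting `DictionaryEntry` from a namedtuple to an attrs class, as above: the entry can no longer be tuple-unpacked, so callers read named attributes instead (the state-store test further down in this diff makes exactly that switch). A minimal sketch of the pattern, with illustrative names:

```python
# Sketch of the conversion above: a slotted attrs class replacing a
# namedtuple. Unlike a namedtuple it is not iterable, so callers use
# named attributes rather than unpacking a 3-tuple.
from typing import Any, Dict, Set

import attr

@attr.s(slots=True)
class Entry:
    full = attr.ib(type=bool)
    known_absent = attr.ib(type=Set[Any])
    value = attr.ib(type=Dict[Any, Any])

entry = Entry(full=True, known_absent=set(), value={"k": "v"})
print(entry.full, entry.value)  # attribute access works
# full, absent, value = entry   # would raise TypeError: not iterable
```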
1414
1515 import logging
1616 import time
17 from typing import Any, Callable, Dict, Generic, Tuple, TypeVar, Union
1718
1819 import attr
1920 from sortedcontainers import SortedList
2223
2324 logger = logging.getLogger(__name__)
2425
25 SENTINEL = object()
26 SENTINEL = object() # type: Any
27
28 T = TypeVar("T")
29 KT = TypeVar("KT")
30 VT = TypeVar("VT")
2631
2732
28 class TTLCache:
33 class TTLCache(Generic[KT, VT]):
2934 """A key/value cache implementation where each entry has its own TTL"""
3035
31 def __init__(self, cache_name, timer=time.time):
36 def __init__(self, cache_name: str, timer: Callable[[], float] = time.time):
3237 # map from key to _CacheEntry
33 self._data = {}
38 self._data = {} # type: Dict[KT, _CacheEntry]
3439
3540 # the _CacheEntries, sorted by expiry time
3641 self._expiry_list = SortedList() # type: SortedList[_CacheEntry]
3944
4045 self._metrics = register_cache("ttl", cache_name, self, resizable=False)
4146
42 def set(self, key, value, ttl):
47 def set(self, key: KT, value: VT, ttl: float) -> None:
4348 """Add/update an entry in the cache
4449
4550 Args:
4651 key: key for this entry
4752 value: value for this entry
48 ttl (float): TTL for this entry, in seconds
53 ttl: TTL for this entry, in seconds
4954 """
5055 expiry = self._timer() + ttl
5156
5257 self.expire()
5358 e = self._data.pop(key, SENTINEL)
54 if e != SENTINEL:
59 if e is not SENTINEL:
60 assert isinstance(e, _CacheEntry)
5561 self._expiry_list.remove(e)
5662
5763 entry = _CacheEntry(expiry_time=expiry, ttl=ttl, key=key, value=value)
5864 self._data[key] = entry
5965 self._expiry_list.add(entry)
6066
61 def get(self, key, default=SENTINEL):
67 def get(self, key: KT, default: T = SENTINEL) -> Union[VT, T]:
6268 """Get a value from the cache
6369
6470 Args:
7177 """
7278 self.expire()
7379 e = self._data.get(key, SENTINEL)
74 if e == SENTINEL:
80 if e is SENTINEL:
7581 self._metrics.inc_misses()
76 if default == SENTINEL:
82 if default is SENTINEL:
7783 raise KeyError(key)
7884 return default
85 assert isinstance(e, _CacheEntry)
7986 self._metrics.inc_hits()
8087 return e.value
8188
82 def get_with_expiry(self, key):
89 def get_with_expiry(self, key: KT) -> Tuple[VT, float, float]:
8390 """Get a value, and its expiry time, from the cache
8491
8592 Args:
8693 key: key to look up
8794
8895 Returns:
89 Tuple[Any, float, float]: the value from the cache, the expiry time
90 and the TTL
96 A tuple of the value from the cache, the expiry time and the TTL
9197
9298 Raises:
9399 KeyError if the entry is not found
101107 self._metrics.inc_hits()
102108 return e.value, e.expiry_time, e.ttl
103109
104 def pop(self, key, default=SENTINEL):
110 def pop(self, key: KT, default: T = SENTINEL) -> Union[VT, T]: # type: ignore
105111 """Remove a value from the cache
106112
107113 If key is in the cache, remove it and return its value, else return default.
117123 """
118124 self.expire()
119125 e = self._data.pop(key, SENTINEL)
120 if e == SENTINEL:
126 if e is SENTINEL:
121127 self._metrics.inc_misses()
122 if default == SENTINEL:
128 if default is SENTINEL:
123129 raise KeyError(key)
124130 return default
131 assert isinstance(e, _CacheEntry)
125132 self._expiry_list.remove(e)
126133 self._metrics.inc_hits()
127134 return e.value
128135
129 def __getitem__(self, key):
136 def __getitem__(self, key: KT) -> VT:
130137 return self.get(key)
131138
132 def __delitem__(self, key):
139 def __delitem__(self, key: KT) -> None:
133140 self.pop(key)
134141
135 def __contains__(self, key):
142 def __contains__(self, key: KT) -> bool:
136143 return key in self._data
137144
138 def __len__(self):
145 def __len__(self) -> int:
139146 self.expire()
140147 return len(self._data)
141148
142 def expire(self):
149 def expire(self) -> None:
143150 """Run the expiry on the cache. Any entries whose expiry times are due will
144151 be removed
145152 """
157164 """TTLCache entry"""
158165
159166 # expiry_time is the first attribute, so that entries are sorted by expiry.
160 expiry_time = attr.ib()
161 ttl = attr.ib()
167 expiry_time = attr.ib(type=float)
168 ttl = attr.ib(type=float)
162169 key = attr.ib()
163170 value = attr.ib()
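The switch above from `==`/`!=` to `is`/`is not` when testing against `SENTINEL` is not cosmetic: now that cache entries are attrs classes they carry a generated `__eq__`, and an equality test against the sentinel would invoke that (or any stored object's arbitrary `__eq__`). Identity comparison is the conventional, safe test for a sentinel:

```python
# Why the diff prefers identity checks for the sentinel: equality can be
# redefined by the stored values, identity cannot.
SENTINEL = object()

class Eager:
    def __eq__(self, other):
        return True  # pathological but legal __eq__

e = Eager()
print(e == SENTINEL)  # True  -- equality is fooled
print(e is SENTINEL)  # False -- identity is not
```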
3535
3636 def unfreeze(o):
3737 if isinstance(o, (dict, frozendict)):
38 return dict({k: unfreeze(v) for k, v in o.items()})
38 return {k: unfreeze(v) for k, v in o.items()}
3939
4040 if isinstance(o, (bytes, str)):
4141 return o
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414 import logging
15 import operator
15 from typing import Dict, FrozenSet, List, Optional
1616
1717 from synapse.api.constants import (
1818 AccountDataTypes,
2020 HistoryVisibility,
2121 Membership,
2222 )
23 from synapse.events import EventBase
2324 from synapse.events.utils import prune_event
2425 from synapse.storage import Storage
2526 from synapse.storage.state import StateFilter
26 from synapse.types import get_domain_from_id
27 from synapse.types import StateMap, get_domain_from_id
2728
2829 logger = logging.getLogger(__name__)
2930
4748
4849 async def filter_events_for_client(
4950 storage: Storage,
50 user_id,
51 events,
52 is_peeking=False,
53 always_include_ids=frozenset(),
54 filter_send_to_client=True,
55 ):
51 user_id: str,
52 events: List[EventBase],
53 is_peeking: bool = False,
54 always_include_ids: FrozenSet[str] = frozenset(),
55 filter_send_to_client: bool = True,
56 ) -> List[EventBase]:
5657 """
5758 Check which events a user is allowed to see. If the user can see the event but its
5859 sender asked for their data to be erased, prune the content of the event.
5960
6061 Args:
6162 storage
62 user_id(str): user id to be checked
63 events(list[synapse.events.EventBase]): sequence of events to be checked
64 is_peeking(bool): should be True if:
63 user_id: user id to be checked
64 events: sequence of events to be checked
65 is_peeking: should be True if:
6566 * the user is not currently a member of the room, and:
6667 * the user has not been a member of the room since the given
6768 events
68 always_include_ids (set(event_id)): set of event ids to specifically
69 always_include_ids: set of event ids to specifically
6970 include (unless sender is ignored)
70 filter_send_to_client (bool): Whether we're checking an event that's going to be
71 filter_send_to_client: Whether we're checking an event that's going to be
7172 sent to a client. This might not always be the case since this function can
7273 also be called to check whether a user can see the state at a given point.
7374
7475 Returns:
75 list[synapse.events.EventBase]
76 The filtered events.
7677 """
7778 # Filter out events that have been soft failed so that we don't relay them
7879 # to clients.
8990 AccountDataTypes.IGNORED_USER_LIST, user_id
9091 )
9192
92 ignore_list = frozenset()
93 ignore_list = frozenset() # type: FrozenSet[str]
9394 if ignore_dict_content:
9495 ignored_users_dict = ignore_dict_content.get("ignored_users", {})
9596 if isinstance(ignored_users_dict, dict):
106107 room_id
107108 ] = await storage.main.get_retention_policy_for_room(room_id)
108109
109 def allowed(event):
110 def allowed(event: EventBase) -> Optional[EventBase]:
110111 """
111112 Args:
112 event (synapse.events.EventBase): event to check
113 event: event to check
113114
114115 Returns:
115 None|EventBase:
116 None if the user cannot see this event at all
117
118 a redacted copy of the event if they can only see a redacted
119 version
120
121 the original event if they can see it as normal.
116 None if the user cannot see this event at all
117
118 a redacted copy of the event if they can only see a redacted
119 version
120
121 the original event if they can see it as normal.
122122 """
123123 # Only run some checks if these events aren't about to be sent to clients. This is
124124 # because, if this is not the case, we're probably only checking if the users can
251251
252252 return event
253253
254 # check each event: gives an iterable[None|EventBase]
254 # Check each event: gives an iterable of None or (a potentially modified)
255 # EventBase.
255256 filtered_events = map(allowed, events)
256257
257 # remove the None entries
258 filtered_events = filter(operator.truth, filtered_events)
259
260 # we turn it into a list before returning it.
261 return list(filtered_events)
258 # Turn it into a list and remove None entries before returning.
259 return [ev for ev in filtered_events if ev]
262260
263261
264262 async def filter_events_for_server(
265263 storage: Storage,
266 server_name,
267 events,
268 redact=True,
269 check_history_visibility_only=False,
270 ):
264 server_name: str,
265 events: List[EventBase],
266 redact: bool = True,
267 check_history_visibility_only: bool = False,
268 ) -> List[EventBase]:
271269 """Filter a list of events based on whether given server is allowed to
272270 see them.
273271
274272 Args:
275273 storage
276 server_name (str)
277 events (iterable[FrozenEvent])
278 redact (bool): Whether to return a redacted version of the event, or
274 server_name
275 events
276 redact: Whether to return a redacted version of the event, or
279277 to filter them out entirely.
280 check_history_visibility_only (bool): Whether to only check the
278 check_history_visibility_only: Whether to only check the
281279 history visibility, rather than things like if the sender has been
282280 erased. This is used e.g. during pagination to decide whether to
283281 backfill or not.
284282
285283 Returns
286 list[FrozenEvent]
284 The filtered events.
287285 """
288286
289 def is_sender_erased(event, erased_senders):
287 def is_sender_erased(event: EventBase, erased_senders: Dict[str, bool]) -> bool:
290288 if erased_senders and erased_senders[event.sender]:
291289 logger.info("Sender of %s has been erased, redacting", event.event_id)
292290 return True
293291 return False
294292
295 def check_event_is_visible(event, state):
293 def check_event_is_visible(event: EventBase, state: StateMap[EventBase]) -> bool:
296294 history = state.get((EventTypes.RoomHistoryVisibility, ""), None)
297295 if history:
298296 visibility = history.content.get(
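The return-path rewrite in `filter_events_for_client` above (a list comprehension instead of `filter(operator.truth, ...)`) keeps the same behaviour while reading more directly now that `allowed` is annotated as returning `Optional[EventBase]`. A toy version of the pattern, with made-up names:

```python
# Sketch of the map-then-drop-None pattern used above: the checker
# returns the item (possibly modified), or None to filter it out.
from typing import Optional

def allowed(event: str) -> Optional[str]:
    # return a modified copy, or None if the event must be hidden
    return event.upper() if not event.startswith("secret") else None

events = ["hello", "secret plan", "world"]
visible = [ev for ev in map(allowed, events) if ev]
print(visible)  # ['HELLO', 'WORLD']
```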
0 #!/bin/bash
0 #!/usr/bin/env bash
11
22 # This script builds the Docker image to run the PostgreSQL tests, and then runs
33 # the tests.
11
22 from mock import Mock
33
4 from synapse.api.constants import EventTypes
45 from synapse.events import EventBase
56 from synapse.federation.sender import PerDestinationQueue, TransactionManager
67 from synapse.federation.units import Edu
420421 self.assertNotIn("zzzerver", woken)
421422 # - all destinations are woken exactly once; they appear once in woken.
422423 self.assertCountEqual(woken, server_names[:-1])
424
425 @override_config({"send_federation": True})
426 def test_not_latest_event(self):
427 """Test that we send the latest event in the room even if its not ours."""
428
429 per_dest_queue, sent_pdus = self.make_fake_destination_queue()
430
431 # Make a room with a local user, and two servers. One will go offline
432 # and one will send some events.
433 self.register_user("u1", "you the one")
434 u1_token = self.login("u1", "you the one")
435 room_1 = self.helper.create_room_as("u1", tok=u1_token)
436
437 self.get_success(
438 event_injection.inject_member_event(self.hs, room_1, "@user:host2", "join")
439 )
440 event_1 = self.get_success(
441 event_injection.inject_member_event(self.hs, room_1, "@user:host3", "join")
442 )
443
444 # First we send something from the local server, so that we notice the
445 # remote is down and go into catchup mode.
446 self.helper.send(room_1, "you hear me!!", tok=u1_token)
447
448 # Now simulate us receiving an event from the still online remote.
449 event_2 = self.get_success(
450 event_injection.inject_event(
451 self.hs,
452 type=EventTypes.Message,
453 sender="@user:host3",
454 room_id=room_1,
455 content={"msgtype": "m.text", "body": "Hello"},
456 )
457 )
458
459 self.get_success(
460 self.hs.get_datastore().set_destination_last_successful_stream_ordering(
461 "host2", event_1.internal_metadata.stream_ordering
462 )
463 )
464
465 self.get_success(per_dest_queue._catch_up_transmission_loop())
466
467 # We expect only the last message from the remote, event_2, to have been
468 # sent, rather than the last *local* event that was sent.
469 self.assertEqual(len(sent_pdus), 1)
470 self.assertEqual(sent_pdus[0].event_id, event_2.event_id)
471 self.assertFalse(per_dest_queue._catching_up)
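The new `test_not_latest_event` pins down a subtle point of catch-up after an outage: the destination should receive the room's most recent event, even when that event originated on another server. A heavily hedged sketch of the selection rule the test asserts (illustrative only, not Synapse's actual `_catch_up_transmission_loop`):

```python
# Hedged sketch: after an outage, send each room's *latest* event that is
# newer than the destination's last acknowledged stream ordering, even if
# another server sent it. Names and shapes here are invented.
from typing import Dict, List, Tuple

# per room: (stream_ordering, event_id) pairs, oldest first
RoomTimeline = List[Tuple[int, str]]

def events_to_catch_up(rooms: Dict[str, RoomTimeline], last_acked: int) -> List[str]:
    to_send = []
    for timeline in rooms.values():
        newer = [e for e in timeline if e[0] > last_acked]
        if newer:
            to_send.append(newer[-1][1])  # only the most recent event
    return to_send

rooms = {"!room": [(1, "$event_1"), (2, "$local_msg"), (3, "$event_2")]}
print(events_to_catch_up(rooms, 1))  # ['$event_2'], matching the test
```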
987987 }
988988 self.get_success(_make_callback_with_userinfo(self.hs, userinfo))
989989 self.assertRenderedError("mapping_error", "localpart is invalid: ")
990
991 @override_config(
992 {
993 "oidc_config": {
994 **DEFAULT_CONFIG,
995 "attribute_requirements": [{"attribute": "test", "value": "foobar"}],
996 }
997 }
998 )
999 def test_attribute_requirements(self):
1000 """The required attributes must be met from the OIDC userinfo response."""
1001 auth_handler = self.hs.get_auth_handler()
1002 auth_handler.complete_sso_login = simple_async_mock()
1003
1004 # userinfo lacking "test": "foobar" attribute should fail.
1005 userinfo = {
1006 "sub": "tester",
1007 "username": "tester",
1008 }
1009 self.get_success(_make_callback_with_userinfo(self.hs, userinfo))
1010 auth_handler.complete_sso_login.assert_not_called()
1011
1012 # userinfo with "test": "foobar" attribute should succeed.
1013 userinfo = {
1014 "sub": "tester",
1015 "username": "tester",
1016 "test": "foobar",
1017 }
1018 self.get_success(_make_callback_with_userinfo(self.hs, userinfo))
1019
1020 # check that the auth handler got called as expected
1021 auth_handler.complete_sso_login.assert_called_once_with(
1022 "@tester:test", "oidc", ANY, ANY, None, new_user=True
1023 )
1024
1025 @override_config(
1026 {
1027 "oidc_config": {
1028 **DEFAULT_CONFIG,
1029 "attribute_requirements": [{"attribute": "test", "value": "foobar"}],
1030 }
1031 }
1032 )
1033 def test_attribute_requirements_contains(self):
1034 """Test that auth succeeds if userinfo attribute CONTAINS required value"""
1035 auth_handler = self.hs.get_auth_handler()
1036 auth_handler.complete_sso_login = simple_async_mock()
1037 # userinfo with "test": ["foobar", "foo", "bar"] attribute should succeed.
1038 userinfo = {
1039 "sub": "tester",
1040 "username": "tester",
1041 "test": ["foobar", "foo", "bar"],
1042 }
1043 self.get_success(_make_callback_with_userinfo(self.hs, userinfo))
1044
1045 # check that the auth handler got called as expected
1046 auth_handler.complete_sso_login.assert_called_once_with(
1047 "@tester:test", "oidc", ANY, ANY, None, new_user=True
1048 )
1049
1050 @override_config(
1051 {
1052 "oidc_config": {
1053 **DEFAULT_CONFIG,
1054 "attribute_requirements": [{"attribute": "test", "value": "foobar"}],
1055 }
1056 }
1057 )
1058 def test_attribute_requirements_mismatch(self):
1059 """
1060 Test that auth fails if attributes exist but don't match,
1061 or are non-string values.
1062 """
1063 auth_handler = self.hs.get_auth_handler()
1064 auth_handler.complete_sso_login = simple_async_mock()
1065 # userinfo with "test": "not_foobar" attribute should fail
1066 userinfo = {
1067 "sub": "tester",
1068 "username": "tester",
1069 "test": "not_foobar",
1070 }
1071 self.get_success(_make_callback_with_userinfo(self.hs, userinfo))
1072 auth_handler.complete_sso_login.assert_not_called()
1073
1074 # userinfo with "test": ["foo", "bar"] attribute should fail
1075 userinfo = {
1076 "sub": "tester",
1077 "username": "tester",
1078 "test": ["foo", "bar"],
1079 }
1080 self.get_success(_make_callback_with_userinfo(self.hs, userinfo))
1081 auth_handler.complete_sso_login.assert_not_called()
1082
1083 # userinfo with "test": False attribute should fail
1084 # this is largely just to ensure we don't crash here
1085 userinfo = {
1086 "sub": "tester",
1087 "username": "tester",
1088 "test": False,
1089 }
1090 self.get_success(_make_callback_with_userinfo(self.hs, userinfo))
1091 auth_handler.complete_sso_login.assert_not_called()
1092
1093 # userinfo with "test": None attribute should fail
1094 # a value of None breaks the OIDC spec, but it's important to not crash here
1095 userinfo = {
1096 "sub": "tester",
1097 "username": "tester",
1098 "test": None,
1099 }
1100 self.get_success(_make_callback_with_userinfo(self.hs, userinfo))
1101 auth_handler.complete_sso_login.assert_not_called()
1102
1103 # userinfo with "test": 1 attribute should fail
1104 # this is largely just to ensure we don't crash here
1105 userinfo = {
1106 "sub": "tester",
1107 "username": "tester",
1108 "test": 1,
1109 }
1110 self.get_success(_make_callback_with_userinfo(self.hs, userinfo))
1111 auth_handler.complete_sso_login.assert_not_called()
1112
1113 # userinfo with "test": 3.14 attribute should fail
1114 # this is largely just to ensure we don't crash here
1115 userinfo = {
1116 "sub": "tester",
1117 "username": "tester",
1118 "test": 3.14,
1119 }
1120 self.get_success(_make_callback_with_userinfo(self.hs, userinfo))
1121 auth_handler.complete_sso_login.assert_not_called()
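Taken together, the three tests above fix the semantics of `attribute_requirements` matching: a requirement is met when the userinfo attribute equals the required value, or is a list containing it; a missing attribute, a mismatched string, or any non-string scalar fails. A hedged sketch of that rule (it mirrors the tests, not necessarily Synapse's exact implementation):

```python
# Sketch of the matching rule the tests above exercise. Hypothetical
# helper, written only to restate the tested behaviour.
from typing import Any, Mapping

def attribute_requirement_met(
    userinfo: Mapping[str, Any], attribute: str, value: str
) -> bool:
    actual = userinfo.get(attribute)
    if actual is None:
        return False
    if isinstance(actual, list):
        return value in actual  # "contains" case
    return actual == value      # exact-match case

assert attribute_requirement_met({"test": "foobar"}, "test", "foobar")
assert attribute_requirement_met({"test": ["foobar", "foo"]}, "test", "foobar")
assert not attribute_requirement_met({"test": ["foo", "bar"]}, "test", "foobar")
assert not attribute_requirement_met({"test": 3.14}, "test", "foobar")
assert not attribute_requirement_met({}, "test", "foobar")
```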
9901122
9911123 def _generate_oidc_session_token(
9921124 self,
309309 self.assertIsNotNone(new_state)
310310 self.assertEquals(new_state.state, PresenceState.UNAVAILABLE)
311311
312 def test_busy_no_idle(self):
313 """
314 Tests that a user setting their presence to busy but idling doesn't turn their
315 presence state into unavailable.
316 """
317 user_id = "@foo:bar"
318 now = 5000000
319
320 state = UserPresenceState.default(user_id)
321 state = state.copy_and_replace(
322 state=PresenceState.BUSY,
323 last_active_ts=now - IDLE_TIMER - 1,
324 last_user_sync_ts=now,
325 )
326
327 new_state = handle_timeout(state, is_mine=True, syncing_user_ids=set(), now=now)
328
329 self.assertIsNotNone(new_state)
330 self.assertEquals(new_state.state, PresenceState.BUSY)
331
312332 def test_sync_timeout(self):
313333 user_id = "@foo:bar"
314334 now = 5000000
1111 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
14 import base64
1415 import logging
1516 import os
17 from typing import Optional
1618 from unittest.mock import patch
1719
1820 import treq
241243
242244 @patch.dict(os.environ, {"https_proxy": "proxy.com", "no_proxy": "unused.com"})
243245 def test_https_request_via_proxy(self):
246 """Tests that TLS-encrypted requests can be made through a proxy"""
247 self._do_https_request_via_proxy(auth_credentials=None)
248
249 @patch.dict(
250 os.environ,
251 {"https_proxy": "bob:pinkponies@proxy.com", "no_proxy": "unused.com"},
252 )
253 def test_https_request_via_proxy_with_auth(self):
254 """Tests that authenticated, TLS-encrypted requests can be made through a proxy"""
255 self._do_https_request_via_proxy(auth_credentials="bob:pinkponies")
256
257 def _do_https_request_via_proxy(
258 self,
259 auth_credentials: Optional[str] = None,
260 ):
244261 agent = ProxyAgent(
245262 self.reactor,
246263 contextFactory=get_test_https_policy(),
277294 self.assertEqual(request.method, b"CONNECT")
278295 self.assertEqual(request.path, b"test.com:443")
279296
297 # Check whether auth credentials have been supplied to the proxy
298 proxy_auth_header_values = request.requestHeaders.getRawHeaders(
299 b"Proxy-Authorization"
300 )
301
302 if auth_credentials is not None:
303 # Compute the correct header value for Proxy-Authorization
304 encoded_credentials = base64.b64encode(b"bob:pinkponies")
305 expected_header_value = b"Basic " + encoded_credentials
306
307 # Validate the header's value
308 self.assertIn(expected_header_value, proxy_auth_header_values)
309 else:
310 # Check that the Proxy-Authorization header has not been supplied to the proxy
311 self.assertIsNone(proxy_auth_header_values)
312
280313 # tell the proxy server not to close the connection
281314 proxy_server.persistent = True
282315
311344 self.assertEqual(request.method, b"GET")
312345 self.assertEqual(request.path, b"/abc")
313346 self.assertEqual(request.requestHeaders.getRawHeaders(b"host"), [b"test.com"])
347
348 # Check that the destination server DID NOT receive proxy credentials
349 proxy_auth_header_values = request.requestHeaders.getRawHeaders(
350 b"Proxy-Authorization"
351 )
352 self.assertIsNone(proxy_auth_header_values)
353
314354 request.write(b"result")
315355 request.finish()
316356
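The `Proxy-Authorization` value asserted above is standard HTTP Basic auth: the literal `Basic `, then the base64 of `user:password`. Using the test's own credentials:

```python
# How the expected header value in the test above is formed.
import base64

credentials = b"bob:pinkponies"
header_value = b"Basic " + base64.b64encode(credentials)
print(header_value)  # b'Basic Ym9iOnBpbmtwb25pZXM='
```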
4343 try:
4444 import hiredis
4545 except ImportError:
46 hiredis = None
46 hiredis = None # type: ignore
4747
4848 logger = logging.getLogger(__name__)
4949
6868 self.assert_request_is_get_repl_stream_updates(request, "typing")
6969
7070 # The from token should be the token from the last RDATA we got.
71 assert request.args is not None
7172 self.assertEqual(int(request.args[b"from_token"][0]), token)
7273
7374 self.test_handler.on_rdata.assert_called_once()
1414 import logging
1515 import os
1616 from binascii import unhexlify
17 from typing import Tuple
17 from typing import Optional, Tuple
1818
1919 from twisted.internet.protocol import Factory
2020 from twisted.protocols.tls import TLSMemoryBIOFactory
3131
3232 logger = logging.getLogger(__name__)
3333
34 test_server_connection_factory = None
34 test_server_connection_factory = None # type: Optional[TestServerTLSConnectionFactory]
3535
3636
3737 class MediaRepoShardTestCase(BaseMultiWorkerStreamTestCase):
10021002
10031003 def prepare(self, reactor, clock, hs):
10041004 self.store = hs.get_datastore()
1005
1005 self.auth_handler = hs.get_auth_handler()
1006
1007 # create users and get access tokens
1008 # regardless of whether password login or SSO is allowed
10061009 self.admin_user = self.register_user("admin", "pass", admin=True)
1007 self.admin_user_tok = self.login("admin", "pass")
1010 self.admin_user_tok = self.get_success(
1011 self.auth_handler.get_access_token_for_user_id(
1012 self.admin_user, device_id=None, valid_until_ms=None
1013 )
1014 )
10081015
10091016 self.other_user = self.register_user("user", "pass", displayname="User")
1010 self.other_user_token = self.login("user", "pass")
1017 self.other_user_token = self.get_success(
1018 self.auth_handler.get_access_token_for_user_id(
1019 self.other_user, device_id=None, valid_until_ms=None
1020 )
1021 )
10111022 self.url_other_user = "/_synapse/admin/v2/users/%s" % urllib.parse.quote(
10121023 self.other_user
10131024 )
10801091 self.assertEqual("Bob's name", channel.json_body["displayname"])
10811092 self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
10821093 self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"])
1083 self.assertEqual(True, channel.json_body["admin"])
1094 self.assertTrue(channel.json_body["admin"])
10841095 self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"])
10851096
10861097 # Get user
10951106 self.assertEqual("Bob's name", channel.json_body["displayname"])
10961107 self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
10971108 self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"])
1098 self.assertEqual(True, channel.json_body["admin"])
1099 self.assertEqual(False, channel.json_body["is_guest"])
1100 self.assertEqual(False, channel.json_body["deactivated"])
1109 self.assertTrue(channel.json_body["admin"])
1110 self.assertFalse(channel.json_body["is_guest"])
1111 self.assertFalse(channel.json_body["deactivated"])
11011112 self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"])
11021113
11031114 def test_create_user(self):
11291140 self.assertEqual("Bob's name", channel.json_body["displayname"])
11301141 self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
11311142 self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"])
1132 self.assertEqual(False, channel.json_body["admin"])
1143 self.assertFalse(channel.json_body["admin"])
11331144 self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"])
11341145
11351146 # Get user
11441155 self.assertEqual("Bob's name", channel.json_body["displayname"])
11451156 self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
11461157 self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"])
1147 self.assertEqual(False, channel.json_body["admin"])
1148 self.assertEqual(False, channel.json_body["is_guest"])
1149 self.assertEqual(False, channel.json_body["deactivated"])
1150 self.assertEqual(False, channel.json_body["shadow_banned"])
1158 self.assertFalse(channel.json_body["admin"])
1159 self.assertFalse(channel.json_body["is_guest"])
1160 self.assertFalse(channel.json_body["deactivated"])
1161 self.assertFalse(channel.json_body["shadow_banned"])
11511162 self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"])
11521163
11531164 @override_config(
11961207
11971208 self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
11981209 self.assertEqual("@bob:test", channel.json_body["name"])
1199 self.assertEqual(False, channel.json_body["admin"])
1210 self.assertFalse(channel.json_body["admin"])
12001211
12011212 @override_config(
12021213 {"limit_usage_by_mau": True, "max_mau_value": 2, "mau_trial_days": 0}
12361247 # Admin user is not blocked by mau anymore
12371248 self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
12381249 self.assertEqual("@bob:test", channel.json_body["name"])
1239 self.assertEqual(False, channel.json_body["admin"])
1250 self.assertFalse(channel.json_body["admin"])
12401251
12411252 @override_config(
12421253 {
14281439
14291440 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
14301441 self.assertEqual("@user:test", channel.json_body["name"])
1431 self.assertEqual(False, channel.json_body["deactivated"])
1442 self.assertFalse(channel.json_body["deactivated"])
14321443 self.assertEqual("foo@bar.com", channel.json_body["threepids"][0]["address"])
14331444 self.assertEqual("mxc://servername/mediaid", channel.json_body["avatar_url"])
14341445 self.assertEqual("User", channel.json_body["displayname"])
14351446
14361447 # Deactivate user
1437 body = json.dumps({"deactivated": True})
1438
14391448 channel = self.make_request(
14401449 "PUT",
14411450 self.url_other_user,
14421451 access_token=self.admin_user_tok,
1443 content=body.encode(encoding="utf_8"),
1452 content={"deactivated": True},
14441453 )
14451454
14461455 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
14471456 self.assertEqual("@user:test", channel.json_body["name"])
1448 self.assertEqual(True, channel.json_body["deactivated"])
1457 self.assertTrue(channel.json_body["deactivated"])
1458 self.assertIsNone(channel.json_body["password_hash"])
14491459 self.assertEqual(0, len(channel.json_body["threepids"]))
14501460 self.assertEqual("mxc://servername/mediaid", channel.json_body["avatar_url"])
14511461 self.assertEqual("User", channel.json_body["displayname"])
14601470
14611471 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
14621472 self.assertEqual("@user:test", channel.json_body["name"])
1463 self.assertEqual(True, channel.json_body["deactivated"])
1473 self.assertTrue(channel.json_body["deactivated"])
1474 self.assertIsNone(channel.json_body["password_hash"])
14641475 self.assertEqual(0, len(channel.json_body["threepids"]))
14651476 self.assertEqual("mxc://servername/mediaid", channel.json_body["avatar_url"])
14661477 self.assertEqual("User", channel.json_body["displayname"])
14771488 self.assertTrue(profile["display_name"] == "User")
14781489
14791490 # Deactivate user
1480 body = json.dumps({"deactivated": True})
1481
14821491 channel = self.make_request(
14831492 "PUT",
14841493 self.url_other_user,
14851494 access_token=self.admin_user_tok,
1486 content=body.encode(encoding="utf_8"),
1495 content={"deactivated": True},
14871496 )
14881497
14891498 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
14901499 self.assertEqual("@user:test", channel.json_body["name"])
1491 self.assertEqual(True, channel.json_body["deactivated"])
1500 self.assertTrue(channel.json_body["deactivated"])
14921501
14931502 # is not in user directory
14941503 profile = self.get_success(self.store.get_user_in_directory(self.other_user))
1495 self.assertTrue(profile is None)
1504 self.assertIsNone(profile)
14961505
14971506 # Set new displayname user
1498 body = json.dumps({"displayname": "Foobar"})
1499
15001507 channel = self.make_request(
15011508 "PUT",
15021509 self.url_other_user,
15031510 access_token=self.admin_user_tok,
1504 content=body.encode(encoding="utf_8"),
1511 content={"displayname": "Foobar"},
15051512 )
15061513
15071514 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
15081515 self.assertEqual("@user:test", channel.json_body["name"])
1509 self.assertEqual(True, channel.json_body["deactivated"])
1516 self.assertTrue(channel.json_body["deactivated"])
15101517 self.assertEqual("Foobar", channel.json_body["displayname"])
15111518
15121519 # is not in user directory
15131520 profile = self.get_success(self.store.get_user_in_directory(self.other_user))
1514 self.assertTrue(profile is None)
1521 self.assertIsNone(profile)
15151522
15161523 def test_reactivate_user(self):
15171524 """
15191526 """
15201527
15211528 # Deactivate the user.
1529 self._deactivate_user("@user:test")
1530
1531 # Attempt to reactivate the user (without a password).
15221532 channel = self.make_request(
15231533 "PUT",
15241534 self.url_other_user,
15251535 access_token=self.admin_user_tok,
1526 content=json.dumps({"deactivated": True}).encode(encoding="utf_8"),
1527 )
1528 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
1529 self._is_erased("@user:test", False)
1530 d = self.store.mark_user_erased("@user:test")
1531 self.assertIsNone(self.get_success(d))
1532 self._is_erased("@user:test", True)
1533
1534 # Attempt to reactivate the user (without a password).
1536 content={"deactivated": False},
1537 )
1538 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
1539
1540 # Reactivate the user.
15351541 channel = self.make_request(
15361542 "PUT",
15371543 self.url_other_user,
15381544 access_token=self.admin_user_tok,
1539 content=json.dumps({"deactivated": False}).encode(encoding="utf_8"),
1540 )
1541 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
1542
1543 # Reactivate the user.
1545 content={"deactivated": False, "password": "foo"},
1546 )
1547 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
1548 self.assertEqual("@user:test", channel.json_body["name"])
1549 self.assertFalse(channel.json_body["deactivated"])
1550 self.assertIsNotNone(channel.json_body["password_hash"])
1551 self._is_erased("@user:test", False)
1552
1553 @override_config({"password_config": {"localdb_enabled": False}})
1554 def test_reactivate_user_localdb_disabled(self):
1555 """
1556 Test reactivating another user when the local password database is disabled.
1557 """
1558
1559 # Deactivate the user.
1560 self._deactivate_user("@user:test")
1561
1562 # Reactivate the user with a password
15441563 channel = self.make_request(
15451564 "PUT",
15461565 self.url_other_user,
15471566 access_token=self.admin_user_tok,
1548 content=json.dumps({"deactivated": False, "password": "foo"}).encode(
1549 encoding="utf_8"
1550 ),
1551 )
1552 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
1553
1554 # Get user
1555 channel = self.make_request(
1556 "GET",
1557 self.url_other_user,
1558 access_token=self.admin_user_tok,
1559 )
1560
1561 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
1562 self.assertEqual("@user:test", channel.json_body["name"])
1563 self.assertEqual(False, channel.json_body["deactivated"])
1564 self._is_erased("@user:test", False)
1565
1566 def test_set_user_as_admin(self):
1567 """
1568 Test setting the admin flag on a user.
1569 """
1570
1571 # Set a user as an admin
1572 body = json.dumps({"admin": True})
1573
1567 content={"deactivated": False, "password": "foo"},
1568 )
1569 self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
1570 self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
1571
1572 # Reactivate the user without a password.
15741573 channel = self.make_request(
15751574 "PUT",
15761575 self.url_other_user,
15771576 access_token=self.admin_user_tok,
1578 content=body.encode(encoding="utf_8"),
1579 )
1580
1577 content={"deactivated": False},
1578 )
15811579 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
15821580 self.assertEqual("@user:test", channel.json_body["name"])
1583 self.assertEqual(True, channel.json_body["admin"])
1581 self.assertFalse(channel.json_body["deactivated"])
1582 self.assertIsNone(channel.json_body["password_hash"])
1583 self._is_erased("@user:test", False)
1584
1585 @override_config({"password_config": {"enabled": False}})
1586 def test_reactivate_user_password_disabled(self):
1587 """
1588 Test reactivating another user when password login is disabled.
1589 """
1590
1591 # Deactivate the user.
1592 self._deactivate_user("@user:test")
1593
1594 # Reactivate the user with a password
1595 channel = self.make_request(
1596 "PUT",
1597 self.url_other_user,
1598 access_token=self.admin_user_tok,
1599 content={"deactivated": False, "password": "foo"},
1600 )
1601 self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
1602 self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
1603
1604 # Reactivate the user without a password.
1605 channel = self.make_request(
1606 "PUT",
1607 self.url_other_user,
1608 access_token=self.admin_user_tok,
1609 content={"deactivated": False},
1610 )
1611 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
1612 self.assertEqual("@user:test", channel.json_body["name"])
1613 self.assertFalse(channel.json_body["deactivated"])
1614 self.assertIsNone(channel.json_body["password_hash"])
1615 self._is_erased("@user:test", False)
1616
1617 def test_set_user_as_admin(self):
1618 """
1619 Test setting the admin flag on a user.
1620 """
1621
1622 # Set a user as an admin
1623 channel = self.make_request(
1624 "PUT",
1625 self.url_other_user,
1626 access_token=self.admin_user_tok,
1627 content={"admin": True},
1628 )
1629
1630 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
1631 self.assertEqual("@user:test", channel.json_body["name"])
1632 self.assertTrue(channel.json_body["admin"])
15841633
15851634 # Get user
15861635 channel = self.make_request(
15911640
15921641 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
15931642 self.assertEqual("@user:test", channel.json_body["name"])
1594 self.assertEqual(True, channel.json_body["admin"])
1643 self.assertTrue(channel.json_body["admin"])
15951644
15961645 def test_accidental_deactivation_prevention(self):
15971646 """
16011650 url = "/_synapse/admin/v2/users/@bob:test"
16021651
16031652 # Create user
1604 body = json.dumps({"password": "abc123"})
1605
16061653 channel = self.make_request(
16071654 "PUT",
16081655 url,
16091656 access_token=self.admin_user_tok,
1610 content=body.encode(encoding="utf_8"),
1657 content={"password": "abc123"},
16111658 )
16121659
16131660 self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
16271674 self.assertEqual(0, channel.json_body["deactivated"])
16281675
16291676 # Change password (and use a str for deactivate instead of a bool)
1630 body = json.dumps({"password": "abc123", "deactivated": "false"}) # oops!
1631
16321677 channel = self.make_request(
16331678 "PUT",
16341679 url,
16351680 access_token=self.admin_user_tok,
1636 content=body.encode(encoding="utf_8"),
1681 content={"password": "abc123", "deactivated": "false"},
16371682 )
16381683
16391684 self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
16521697 # Ensure they're still alive
16531698 self.assertEqual(0, channel.json_body["deactivated"])
16541699
1655 def _is_erased(self, user_id, expect):
1700 def _is_erased(self, user_id: str, expect: bool) -> None:
16561701 """Assert that the user is erased or not"""
16571702 d = self.store.is_user_erased(user_id)
16581703 if expect:
16591704 self.assertTrue(self.get_success(d))
16601705 else:
16611706 self.assertFalse(self.get_success(d))
1707
1708 def _deactivate_user(self, user_id: str) -> None:
1709 """Deactivate user and set as erased"""
1710
1711 # Deactivate the user.
1712 channel = self.make_request(
1713 "PUT",
1714 "/_synapse/admin/v2/users/%s" % urllib.parse.quote(user_id),
1715 access_token=self.admin_user_tok,
1716 content={"deactivated": True},
1717 )
1718 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
1719 self.assertTrue(channel.json_body["deactivated"])
1720 self.assertIsNone(channel.json_body["password_hash"])
1721 self._is_erased(user_id, False)
1722 d = self.store.mark_user_erased(user_id)
1723 self.assertIsNone(self.get_success(d))
1724 self._is_erased(user_id, True)
16621725
16631726
16641727 class UserMembershipRestTestCase(unittest.HomeserverTestCase):
159159 self.assertEqual(channel.result["code"], b"200", channel.result)
160160 ev = channel.json_body
161161 self.assertEqual(ev["content"]["x"], "y")
162
163 def test_message_edit(self):
164 """Ensure that the module doesn't cause issues with edited messages."""
165 # first patch the event checker so that it will modify the event
166 async def check(ev: EventBase, state):
167 d = ev.get_dict()
168 d["content"] = {
169 "msgtype": "m.text",
170 "body": d["content"]["body"].upper(),
171 }
172 return d
173
174 current_rules_module().check_event_allowed = check
175
176 # Send an event, then edit it.
177 channel = self.make_request(
178 "PUT",
179 "/_matrix/client/r0/rooms/%s/send/modifyme/1" % self.room_id,
180 {
181 "msgtype": "m.text",
182 "body": "Original body",
183 },
184 access_token=self.tok,
185 )
186 self.assertEqual(channel.result["code"], b"200", channel.result)
187 orig_event_id = channel.json_body["event_id"]
188
189 channel = self.make_request(
190 "PUT",
191 "/_matrix/client/r0/rooms/%s/send/m.room.message/2" % self.room_id,
192 {
193 "m.new_content": {"msgtype": "m.text", "body": "Edited body"},
194 "m.relates_to": {
195 "rel_type": "m.replace",
196 "event_id": orig_event_id,
197 },
198 "msgtype": "m.text",
199 "body": "Edited body",
200 },
201 access_token=self.tok,
202 )
203 self.assertEqual(channel.result["code"], b"200", channel.result)
204 edited_event_id = channel.json_body["event_id"]
205
206 # ... and check that they both got modified
207 channel = self.make_request(
208 "GET",
209 "/_matrix/client/r0/rooms/%s/event/%s" % (self.room_id, orig_event_id),
210 access_token=self.tok,
211 )
212 self.assertEqual(channel.result["code"], b"200", channel.result)
213 ev = channel.json_body
214 self.assertEqual(ev["content"]["body"], "ORIGINAL BODY")
215
216 channel = self.make_request(
217 "GET",
218 "/_matrix/client/r0/rooms/%s/event/%s" % (self.room_id, edited_event_id),
219 access_token=self.tok,
220 )
221 self.assertEqual(channel.result["code"], b"200", channel.result)
222 ev = channel.json_body
223 self.assertEqual(ev["content"]["body"], "EDITED BODY")
162224
163225 def test_send_event(self):
164226 """Tests that the module can send an event into a room via the module api"""
1717 from synapse.rest.client.v2_alpha import capabilities
1818
1919 from tests import unittest
20 from tests.unittest import override_config
2021
2122
2223 class CapabilitiesTestCase(unittest.HomeserverTestCase):
3233 hs = self.setup_test_homeserver()
3334 self.store = hs.get_datastore()
3435 self.config = hs.config
36 self.auth_handler = hs.get_auth_handler()
3537 return hs
3638
3739 def test_check_auth_required(self):
5557 capabilities["m.room_versions"]["default"],
5658 )
5759
58 def test_get_change_password_capabilities(self):
60 def test_get_change_password_capabilities_password_login(self):
5961 localpart = "user"
6062 password = "pass"
6163 user = self.register_user(localpart, password)
6567 capabilities = channel.json_body["capabilities"]
6668
6769 self.assertEqual(channel.code, 200)
70 self.assertTrue(capabilities["m.change_password"]["enabled"])
6871
69 # Test case where password is handled outside of Synapse
70 self.assertTrue(capabilities["m.change_password"]["enabled"])
71 self.get_success(self.store.user_set_password_hash(user, None))
72 @override_config({"password_config": {"localdb_enabled": False}})
73 def test_get_change_password_capabilities_localdb_disabled(self):
74 localpart = "user"
75 password = "pass"
76 user = self.register_user(localpart, password)
77 access_token = self.get_success(
78 self.auth_handler.get_access_token_for_user_id(
79 user, device_id=None, valid_until_ms=None
80 )
81 )
82
7283 channel = self.make_request("GET", self.url, access_token=access_token)
7384 capabilities = channel.json_body["capabilities"]
7485
7586 self.assertEqual(channel.code, 200)
7687 self.assertFalse(capabilities["m.change_password"]["enabled"])
88
89 @override_config({"password_config": {"enabled": False}})
90 def test_get_change_password_capabilities_password_disabled(self):
91 localpart = "user"
92 password = "pass"
93 user = self.register_user(localpart, password)
94 access_token = self.get_success(
95 self.auth_handler.get_access_token_for_user_id(
96 user, device_id=None, valid_until_ms=None
97 )
98 )
99
100 channel = self.make_request("GET", self.url, access_token=access_token)
101 capabilities = channel.json_body["capabilities"]
102
103 self.assertEqual(channel.code, 200)
104 self.assertFalse(capabilities["m.change_password"]["enabled"])
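The three capability tests above jointly pin down when `m.change_password` is advertised: only when password login is enabled and the local password database is in use. A hedged restatement of that rule (not Synapse's code, just the behaviour the tests assert):

```python
# Sketch: m.change_password is enabled iff both config switches are on.
def change_password_capability(password_enabled: bool, localdb_enabled: bool) -> dict:
    return {
        "m.change_password": {"enabled": password_enabled and localdb_enabled}
    }

assert change_password_capability(True, True)["m.change_password"]["enabled"]
assert not change_password_capability(True, False)["m.change_password"]["enabled"]
assert not change_password_capability(False, True)["m.change_password"]["enabled"]
```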
3838 # We need to enable msc1849 support for aggregations
3939 config = self.default_config()
4040 config["experimental_msc1849_support_enabled"] = True
41
42 # We enable frozen dicts as relations/edits change event contents, so we
43 # want to test that we don't modify the events in the caches.
44 config["use_frozen_dicts"] = True
45
4146 return self.setup_test_homeserver(config=config)
4247
4348 def prepare(self, reactor, clock, hs):
517522 {"event_id": edit_event_id, "sender": self.user_id}, m_replace_dict
518523 )
519524
525 def test_edit_reply(self):
526 """Test that editing a reply works."""
527
528 # Create a reply to edit.
529 channel = self._send_relation(
530 RelationTypes.REFERENCE,
531 "m.room.message",
532 content={"msgtype": "m.text", "body": "A reply!"},
533 )
534 self.assertEquals(200, channel.code, channel.json_body)
535 reply = channel.json_body["event_id"]
536
537 new_body = {"msgtype": "m.text", "body": "I've been edited!"}
538 channel = self._send_relation(
539 RelationTypes.REPLACE,
540 "m.room.message",
541 content={"msgtype": "m.text", "body": "foo", "m.new_content": new_body},
542 parent_id=reply,
543 )
544 self.assertEquals(200, channel.code, channel.json_body)
545
546 edit_event_id = channel.json_body["event_id"]
547
548 channel = self.make_request(
549 "GET",
550 "/rooms/%s/event/%s" % (self.room, reply),
551 access_token=self.user_token,
552 )
553 self.assertEquals(200, channel.code, channel.json_body)
554
555 # We expect to see the new body in the dict, as well as the reference
556 # metadata still intact.
557 self.assertDictContainsSubset(new_body, channel.json_body["content"])
558 self.assertDictContainsSubset(
559 {
560 "m.relates_to": {
561 "event_id": self.parent_id,
562 "key": None,
563 "rel_type": "m.reference",
564 }
565 },
566 channel.json_body["content"],
567 )
568
569 # We expect that the edit relation appears in the unsigned relations
570 # section.
571 relations_dict = channel.json_body["unsigned"].get("m.relations")
572 self.assertIn(RelationTypes.REPLACE, relations_dict)
573
574 m_replace_dict = relations_dict[RelationTypes.REPLACE]
575 for key in ["event_id", "sender", "origin_server_ts"]:
576 self.assertIn(key, m_replace_dict)
577
578 self.assert_dict(
579 {"event_id": edit_event_id, "sender": self.user_id}, m_replace_dict
580 )
581
520582 def test_relations_redaction_redacts_edits(self):
521583 """Test that edits of an event are redacted when the original event
522584 is redacted.
11 import logging
22 from collections import deque
33 from io import SEEK_END, BytesIO
4 from typing import Callable, Iterable, MutableMapping, Optional, Tuple, Union
4 from typing import Callable, Dict, Iterable, MutableMapping, Optional, Tuple, Union
55
66 import attr
77 from typing_extensions import Deque
1212 from twisted.internet.defer import Deferred, fail, succeed
1313 from twisted.internet.error import DNSLookupError
1414 from twisted.internet.interfaces import (
15 IHostnameResolver,
16 IProtocol,
17 IPullProducer,
18 IPushProducer,
1519 IReactorPluggableNameResolver,
16 IReactorTCP,
1720 IResolverSimple,
1821 ITransport,
1922 )
4447 wire).
4548 """
4649
47 site = attr.ib(type=Site)
50 site = attr.ib(type=Union[Site, "FakeSite"])
4851 _reactor = attr.ib()
4952 result = attr.ib(type=dict, default=attr.Factory(dict))
5053 _ip = attr.ib(type=str, default="127.0.0.1")
51 _producer = None
54 _producer = None # type: Optional[Union[IPullProducer, IPushProducer]]
5255
5356 @property
5457 def json_body(self):
158161
159162 Any cookies found are added to the given dict
160163 """
161 for h in self.headers.getRawHeaders("Set-Cookie"):
164 headers = self.headers.getRawHeaders("Set-Cookie")
165 if not headers:
166 return
167
168 for h in headers:
162169 parts = h.split(";")
163170 k, v = parts[0].split("=", maxsplit=1)
164171 cookies[k] = v
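The guard added above matters because Twisted's `Headers.getRawHeaders()` returns `None`, not an empty list, when the header is absent; iterating it unguarded raises `TypeError`. A minimal reproduction:

```python
# getRawHeaders() defaults to None for a missing header, so guard first.
from twisted.web.http_headers import Headers

headers = Headers()
values = headers.getRawHeaders(b"Set-Cookie")  # None: header not present
for v in values or []:  # safe iteration
    print(v)
```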
310317
311318 self._tcp_callbacks = {}
312319 self._udp = []
313 lookups = self.lookups = {}
314 self._thread_callbacks = deque() # type: Deque[Callable[[], None]]()
320 lookups = self.lookups = {} # type: Dict[str, str]
321 self._thread_callbacks = deque() # type: Deque[Callable[[], None]]
315322
316323 @implementer(IResolverSimple)
317324 class FakeResolver:
322329
323330 self.nameResolver = SimpleResolverComplexifier(FakeResolver())
324331 super().__init__()
332
333 def installNameResolver(self, resolver: IHostnameResolver) -> IHostnameResolver:
334 raise NotImplementedError()
325335
326336 def listenUDP(self, port, protocol, interface="", maxPacketSize=8196):
327337 p = udp.Port(port, protocol, interface, maxPacketSize, self)
592602 if self.disconnected:
593603 return
594604
595 if getattr(self.other, "transport") is None:
605 if not hasattr(self.other, "transport"):
596606 # the other has no transport yet; reschedule
597607 if self.autoflush:
598608 self._reactor.callLater(0.0, self.flush)
620630 self.disconnected = True
621631
622632
623 def connect_client(reactor: IReactorTCP, client_id: int) -> AccumulatingProtocol:
633 def connect_client(
634 reactor: ThreadedMemoryReactorClock, client_id: int
635 ) -> Tuple[IProtocol, AccumulatingProtocol]:
624636 """
625637 Connect a client to a fake TCP transport.
626638
376376 #######################################################
377377 # deliberately remove e2 (room name) from the _state_group_cache
378378
379 (
380 is_all,
381 known_absent,
382 state_dict_ids,
383 ) = self.state_datastore._state_group_cache.get(group)
384
385 self.assertEqual(is_all, True)
386 self.assertEqual(known_absent, set())
379 cache_entry = self.state_datastore._state_group_cache.get(group)
380 state_dict_ids = cache_entry.value
381
382 self.assertEqual(cache_entry.full, True)
383 self.assertEqual(cache_entry.known_absent, set())
387384 self.assertDictEqual(
388385 state_dict_ids,
389386 {
402399 fetched_keys=((e1.type, e1.state_key),),
403400 )
404401
405 (
406 is_all,
407 known_absent,
408 state_dict_ids,
409 ) = self.state_datastore._state_group_cache.get(group)
410
411 self.assertEqual(is_all, False)
412 self.assertEqual(known_absent, {(e1.type, e1.state_key)})
402 cache_entry = self.state_datastore._state_group_cache.get(group)
403 state_dict_ids = cache_entry.value
404
405 self.assertEqual(cache_entry.full, False)
406 self.assertEqual(cache_entry.known_absent, {(e1.type, e1.state_key)})
413407 self.assertDictEqual(state_dict_ids, {(e1.type, e1.state_key): e1.event_id})
414408
415409 ############################################
3131 from twisted.trial import unittest
3232 from twisted.web.resource import Resource
3333
34 from synapse import events
3435 from synapse.api.constants import EventTypes, Membership
3536 from synapse.config.homeserver import HomeServerConfig
3637 from synapse.config.ratelimiting import FederationRateLimitConfig
139140 try:
140141 self.assertEquals(attrs[key], getattr(obj, key))
141142 except AssertionError as e:
142 raise (type(e))(e.message + " for '.%s'" % key)
143 raise (type(e))("Assert error for '.{}':".format(key)) from e
143144
144145 def assert_dict(self, required, actual):
145146 """Does a partial assert of a dict.
227228 self.reactor, self.clock = get_clock()
228229 self._hs_args = {"clock": self.clock, "reactor": self.reactor}
229230 self.hs = self.make_homeserver(self.reactor, self.clock)
231
232 # Honour the `use_frozen_dicts` config option. We have to do this
233 # manually because this is taken care of in the app `start` code, which
234 # we don't run. Plus we want to reset it on tearDown.
235 events.USE_FROZEN_DICTS = self.hs.config.use_frozen_dicts
230236
231237 if self.hs is None:
232238 raise Exception("No homeserver returned from make_homeserver.")
290296
291297 if hasattr(self, "prepare"):
292298 self.prepare(self.reactor, self.clock, self.hs)
299
300 def tearDown(self):
301 # Reset to not use frozen dicts.
302 events.USE_FROZEN_DICTS = False
293303
294304 def wait_on_thread(self, deferred, timeout=10):
295305 """
2626 key = "test_simple_cache_hit_full"
2727
2828 v = self.cache.get(key)
29 self.assertEqual((False, set(), {}), v)
29 self.assertIs(v.full, False)
30 self.assertEqual(v.known_absent, set())
31 self.assertEqual({}, v.value)
3032
3133 seq = self.cache.sequence
3234 test_value = {"test": "test_simple_cache_hit_full"}