New upstream version 1.51.0
Andrej Shadura
7 | 7 | - Use markdown where necessary, mostly for `code blocks`. |
8 | 8 | - End with either a period (.) or an exclamation mark (!). |
9 | 9 | - Start with a capital letter. |
10 | - Feel free to credit yourself, by adding a sentence "Contributed by @github_username." or "Contributed by [Your Name]." to the end of the entry. | |
10 | 11 | * [ ] Pull request includes a [sign off](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#sign-off) |
11 | 12 | * [ ] [Code style](https://matrix-org.github.io/synapse/latest/code_style.html) is correct |
12 | 13 | (run the [linters](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#run-the-linters)) |
365 | 365 | # Build initial Synapse image |
366 | 366 | - run: docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile . |
367 | 367 | working-directory: synapse |
368 | env: | |
369 | DOCKER_BUILDKIT: 1 | |
368 | 370 | |
369 | 371 | # Build a ready-to-run Synapse image based on the initial image above. |
370 | 372 | # This new image includes a config file, keys for signing and TLS, and |
373 | 375 | working-directory: complement/dockerfiles |
374 | 376 | |
375 | 377 | # Run Complement |
376 | - run: go test -v -tags synapse_blacklist,msc2403 ./tests/... | |
378 | - run: set -o pipefail && go test -v -json -tags synapse_blacklist,msc2403 ./tests/... 2>&1 | gotestfmt | |
379 | shell: bash | |
377 | 380 | env: |
378 | 381 | COMPLEMENT_BASE_IMAGE: complement-synapse:latest |
379 | 382 | working-directory: complement |
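The `set -o pipefail` added above is what keeps this job honest: without it, a `go test` failure would be masked by `gotestfmt` exiting 0, and the pipeline would report success. A minimal bash demonstration of the difference:

```shell
#!/bin/bash
# Without pipefail (the default), a pipeline's exit status is that of
# its LAST command, so a failing producer is masked by the consumer.
false | cat
echo "without pipefail: exit=$?"    # prints: without pipefail: exit=0

# With pipefail, any failing stage fails the whole pipeline, so a
# `go test ... | gotestfmt` failure is not swallowed by the formatter.
set -o pipefail
false | cat
echo "with pipefail: exit=$?"       # prints: with pipefail: exit=1
```

This is also why the workflow sets `shell: bash`: `pipefail` is a bash option and is not guaranteed in plain `sh`.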
0 | Synapse 1.50.2 (2022-01-24) | |
0 | Synapse 1.51.0 (2022-01-25) | |
1 | 1 | =========================== |
2 | 2 | |
3 | No significant changes since 1.51.0rc2. | |
4 | ||
5 | Synapse 1.51.0 deprecates `webclient` listeners and non-HTTP(S) `web_client_location`s. Support for these will be removed in Synapse 1.53.0, at which point Synapse will not be capable of directly serving a web client for Matrix. | |
6 | ||
7 | Synapse 1.51.0rc2 (2022-01-24) | |
8 | ============================== | |
9 | ||
3 | 10 | Bugfixes |
4 | 11 | -------- |
5 | 12 | |
6 | 13 | - Fix a bug introduced in Synapse 1.40.0 that caused Synapse to fail to process incoming federation traffic after handling a large amount of events in a v1 room. ([\#11806](https://github.com/matrix-org/synapse/issues/11806)) |
14 | ||
15 | ||
16 | Synapse 1.51.0rc1 (2022-01-21) | |
17 | ============================== | |
18 | ||
19 | Features | |
20 | -------- | |
21 | ||
22 | - Add `track_puppeted_user_ips` config flag to record client IP addresses against puppeted users, and include the puppeted users in monthly active user counts. ([\#11561](https://github.com/matrix-org/synapse/issues/11561), [\#11749](https://github.com/matrix-org/synapse/issues/11749), [\#11757](https://github.com/matrix-org/synapse/issues/11757)) | |
23 | - Include whether the requesting user has participated in a thread when generating a summary for [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440). ([\#11577](https://github.com/matrix-org/synapse/issues/11577)) | |
24 | - Return an `M_FORBIDDEN` error code instead of `M_UNKNOWN` when a spam checker module prevents a user from creating a room. ([\#11672](https://github.com/matrix-org/synapse/issues/11672)) | |
25 | - Add a flag to the `synapse_review_recent_signups` script to ignore and filter appservice users. ([\#11675](https://github.com/matrix-org/synapse/issues/11675), [\#11770](https://github.com/matrix-org/synapse/issues/11770)) | |
26 | ||
27 | ||
28 | Bugfixes | |
29 | -------- | |
30 | ||
31 | - Fix a long-standing issue which could cause Synapse to incorrectly accept data in the unsigned field of events | |
32 | received over federation. ([\#11530](https://github.com/matrix-org/synapse/issues/11530)) | |
33 | - Fix a long-standing bug where Synapse wouldn't cache a response indicating that a remote user has no devices. ([\#11587](https://github.com/matrix-org/synapse/issues/11587)) | |
34 | - Fix an error that occurs whilst trying to get the federation status of a destination server that was working normally. This admin API was newly introduced in Synapse v1.49.0. ([\#11593](https://github.com/matrix-org/synapse/issues/11593)) | |
35 | - Fix bundled aggregations not being included in the `/sync` response, per [MSC2675](https://github.com/matrix-org/matrix-doc/pull/2675). ([\#11612](https://github.com/matrix-org/synapse/issues/11612), [\#11659](https://github.com/matrix-org/synapse/issues/11659), [\#11791](https://github.com/matrix-org/synapse/issues/11791)) | |
36 | - Fix the `/_matrix/client/v1/room/{roomId}/hierarchy` endpoint returning incorrect fields which have been present since Synapse 1.49.0. ([\#11667](https://github.com/matrix-org/synapse/issues/11667)) | |
37 | - Fix preview of some GIF URLs (like tenor.com). Contributed by Philippe Daouadi. ([\#11669](https://github.com/matrix-org/synapse/issues/11669)) | |
38 | - Fix a bug where only the first 50 rooms from a space were returned from the `/hierarchy` API. This has existed since the introduction of the API in Synapse v1.41.0. ([\#11695](https://github.com/matrix-org/synapse/issues/11695)) | |
39 | - Fix a bug introduced in Synapse v1.18.0 where password reset and address validation emails would not be sent if their subject was configured to use the 'app' template variable. Contributed by @br4nnigan. ([\#11710](https://github.com/matrix-org/synapse/issues/11710), [\#11745](https://github.com/matrix-org/synapse/issues/11745)) | |
40 | - Make the 'List Rooms' Admin API sort stable. Contributed by Daniël Sonck. ([\#11737](https://github.com/matrix-org/synapse/issues/11737)) | |
41 | - Fix a long-standing bug where space hierarchy over federation would only work correctly some of the time. ([\#11775](https://github.com/matrix-org/synapse/issues/11775)) | |
42 | - Fix a bug introduced in Synapse v1.46.0 that prevented `on_logged_out` module callbacks from being correctly awaited by Synapse. ([\#11786](https://github.com/matrix-org/synapse/issues/11786)) | |
43 | ||
44 | ||
45 | Improved Documentation | |
46 | ---------------------- | |
47 | ||
48 | - Warn against using a Let's Encrypt certificate for TLS/DTLS TURN server client connections, and suggest using a ZeroSSL certificate instead. This works around client-side connectivity errors caused by WebRTC libraries that reject Let's Encrypt certificates. Contributed by @AndrewFerr. ([\#11686](https://github.com/matrix-org/synapse/issues/11686)) | |
49 | - Document the new `SYNAPSE_TEST_PERSIST_SQLITE_DB` environment variable in the contributing guide. ([\#11715](https://github.com/matrix-org/synapse/issues/11715)) | |
50 | - Document that the minimum supported PostgreSQL version is now 10. ([\#11725](https://github.com/matrix-org/synapse/issues/11725)) | |
51 | - Fix typo in demo docs: differnt. ([\#11735](https://github.com/matrix-org/synapse/issues/11735)) | |
52 | - Update room spec URL in config files. ([\#11739](https://github.com/matrix-org/synapse/issues/11739)) | |
53 | - Mention `python3-venv` and `libpq-dev` dependencies in the contribution guide. ([\#11740](https://github.com/matrix-org/synapse/issues/11740)) | |
54 | - Update documentation for configuring login with Facebook. ([\#11755](https://github.com/matrix-org/synapse/issues/11755)) | |
55 | - Update installation instructions to note that Python 3.6 is no longer supported. ([\#11781](https://github.com/matrix-org/synapse/issues/11781)) | |
56 | ||
57 | ||
58 | Deprecations and Removals | |
59 | ------------------------- | |
60 | ||
61 | - Remove the unstable `/send_relation` endpoint. ([\#11682](https://github.com/matrix-org/synapse/issues/11682)) | |
62 | - Remove `python_twisted_reactor_pending_calls` Prometheus metric. ([\#11724](https://github.com/matrix-org/synapse/issues/11724)) | |
63 | - Remove the `password_hash` field from the response dictionaries of the [Users Admin API](https://matrix-org.github.io/synapse/latest/admin_api/user_admin_api.html). ([\#11576](https://github.com/matrix-org/synapse/issues/11576)) | |
64 | - **Deprecate support for `webclient` listeners and non-HTTP(S) `web_client_location` configuration. ([\#11774](https://github.com/matrix-org/synapse/issues/11774), [\#11783](https://github.com/matrix-org/synapse/issues/11783))** | |
65 | ||
66 | ||
67 | Internal Changes | |
68 | ---------------- | |
69 | ||
70 | - Run `pyupgrade --py37-plus --keep-percent-format` on Synapse. ([\#11685](https://github.com/matrix-org/synapse/issues/11685)) | |
71 | - Use buildkit's cache feature to speed up docker builds. ([\#11691](https://github.com/matrix-org/synapse/issues/11691)) | |
72 | - Use `auto_attribs` and native type hints for attrs classes. ([\#11692](https://github.com/matrix-org/synapse/issues/11692), [\#11768](https://github.com/matrix-org/synapse/issues/11768)) | |
73 | - Remove debug logging for #4422, which has been closed since Synapse 0.99. ([\#11693](https://github.com/matrix-org/synapse/issues/11693)) | |
74 | - Remove fallback code for Python 2. ([\#11699](https://github.com/matrix-org/synapse/issues/11699)) | |
75 | - Add a test for [an edge case](https://github.com/matrix-org/synapse/pull/11532#discussion_r769104461) in the `/sync` logic. ([\#11701](https://github.com/matrix-org/synapse/issues/11701)) | |
76 | - Add the option to write SQLite test dbs to disk when running tests. ([\#11702](https://github.com/matrix-org/synapse/issues/11702)) | |
77 | - Improve Complement test output for GitHub Actions. ([\#11707](https://github.com/matrix-org/synapse/issues/11707)) | |
78 | - Fix docstring on `add_account_data_for_user`. ([\#11716](https://github.com/matrix-org/synapse/issues/11716)) | |
79 | - Rename a Complement environment variable and update `.gitignore`. ([\#11718](https://github.com/matrix-org/synapse/issues/11718)) | |
80 | - Simplify calculation of Prometheus metrics for garbage collection. ([\#11723](https://github.com/matrix-org/synapse/issues/11723)) | |
81 | - Improve accuracy of `python_twisted_reactor_tick_time` Prometheus metric. ([\#11724](https://github.com/matrix-org/synapse/issues/11724), [\#11771](https://github.com/matrix-org/synapse/issues/11771)) | |
82 | - Minor efficiency improvements when inserting many values into the database. ([\#11742](https://github.com/matrix-org/synapse/issues/11742)) | |
83 | - Invite PR authors to give themselves credit in the changelog. ([\#11744](https://github.com/matrix-org/synapse/issues/11744)) | |
84 | - Add optional debugging to investigate [issue 8631](https://github.com/matrix-org/synapse/issues/8631). ([\#11760](https://github.com/matrix-org/synapse/issues/11760)) | |
85 | - Remove `log_function` utility function and its uses. ([\#11761](https://github.com/matrix-org/synapse/issues/11761)) | |
86 | - Add a unit test that checks both `client` and `webclient` resources will function when simultaneously enabled. ([\#11765](https://github.com/matrix-org/synapse/issues/11765)) | |
87 | - Allow overriding complement commit using `COMPLEMENT_REF`. ([\#11766](https://github.com/matrix-org/synapse/issues/11766)) | |
88 | - Add some comments and type annotations for `_update_outliers_txn`. ([\#11776](https://github.com/matrix-org/synapse/issues/11776)) | |
7 | 89 | |
8 | 90 | |
9 | 91 | Synapse 1.50.1 (2022-01-18) |
88 | 88 | yHoverFormatter: PromConsole.NumberFormatter.humanize, |
89 | 89 | yUnits: "s", |
90 | 90 | yTitle: "Time" |
91 | }) | |
92 | </script> | |
93 | ||
94 | <h3>Pending calls per tick</h3> | |
95 | <div id="reactor_pending_calls"></div> | |
96 | <script> | |
97 | new PromConsole.Graph({ | |
98 | node: document.querySelector("#reactor_pending_calls"), | |
99 | expr: "rate(python_twisted_reactor_pending_calls_sum[30s]) / rate(python_twisted_reactor_pending_calls_count[30s])", | |
100 | name: "[[job]]-[[index]]", | |
101 | min: 0, | |
102 | renderer: "line", | |
103 | height: 150, | |
104 | yAxisFormatter: PromConsole.NumberFormatter.humanize, | |
105 | yHoverFormatter: PromConsole.NumberFormatter.humanize, | |
106 | yTitle: "Pending Calls" | |
107 | 91 | }) |
108 | 92 | </script> |
109 | 93 |
0 | matrix-synapse-py3 (1.50.2) stable; urgency=medium | |
1 | ||
2 | * New synapse release 1.50.2. | |
3 | ||
4 | -- Synapse Packaging team <packages@matrix.org> Mon, 24 Jan 2022 13:37:11 +0000 | |
0 | matrix-synapse-py3 (1.51.0) stable; urgency=medium | |
1 | ||
2 | * New synapse release 1.51.0. | |
3 | ||
4 | -- Synapse Packaging team <packages@matrix.org> Tue, 25 Jan 2022 11:28:51 +0000 | |
5 | ||
6 | matrix-synapse-py3 (1.51.0~rc2) stable; urgency=medium | |
7 | ||
8 | * New synapse release 1.51.0~rc2. | |
9 | ||
10 | -- Synapse Packaging team <packages@matrix.org> Mon, 24 Jan 2022 12:25:00 +0000 | |
11 | ||
12 | matrix-synapse-py3 (1.51.0~rc1) stable; urgency=medium | |
13 | ||
14 | * New synapse release 1.51.0~rc1. | |
15 | ||
16 | -- Synapse Packaging team <packages@matrix.org> Fri, 21 Jan 2022 10:46:02 +0000 | |
5 | 17 | |
6 | 18 | matrix-synapse-py3 (1.50.1) stable; urgency=medium |
7 | 19 |
21 | 21 | |
22 | 22 | |
23 | 23 | |
24 | Also note that when joining a public room on a differnt HS via "#foo:bar.net", then you are (in the current impl) joining a room with room_id "foo". This means that it won't work if your HS already has a room with that name. | |
24 | Also note that when joining a public room on a different HS via "#foo:bar.net", then you are (in the current impl) joining a room with room_id "foo". This means that it won't work if your HS already has a room with that name. | |
25 | 25 |
0 | 0 | # Dockerfile to build the matrixdotorg/synapse docker images. |
1 | # | |
2 | # Note that it uses features which are only available in BuildKit - see | |
3 | # https://docs.docker.com/go/buildkit/ for more information. | |
1 | 4 | # |
2 | 5 | # To build the image, run `docker build` command from the root of the |
3 | 6 | # synapse repository: |
4 | 7 | # |
5 | # docker build -f docker/Dockerfile . | |
8 | # DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile . | |
6 | 9 | # |
7 | 10 | # There is an optional PYTHON_VERSION build argument which sets the |
8 | 11 | # version of python to build against: for example: |
9 | 12 | # |
10 | # docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.6 . | |
13 | # DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.9 . | |
11 | 14 | # |
12 | 15 | |
13 | 16 | ARG PYTHON_VERSION=3.8 |
18 | 21 | FROM docker.io/python:${PYTHON_VERSION}-slim as builder |
19 | 22 | |
20 | 23 | # install the OS build deps |
21 | RUN apt-get update && apt-get install -y \ | |
24 | # | |
25 | # RUN --mount is specific to buildkit and is documented at | |
26 | # https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount. | |
27 | # Here we use it to set up a cache for apt, to improve rebuild speeds on | |
28 | # slow connections. | |
29 | # | |
30 | RUN \ | |
31 | --mount=type=cache,target=/var/cache/apt,sharing=locked \ | |
32 | --mount=type=cache,target=/var/lib/apt,sharing=locked \ | |
33 | apt-get update && apt-get install -y \ | |
22 | 34 | build-essential \ |
23 | 35 | libffi-dev \ |
24 | 36 | libjpeg-dev \ |
43 | 55 | # used while you develop on the source |
44 | 56 | # |
45 | 57 | # This is aiming at installing the `install_requires` and `extras_require` from `setup.py` |
46 | RUN pip install --prefix="/install" --no-warn-script-location \ | |
58 | RUN --mount=type=cache,target=/root/.cache/pip \ | |
59 | pip install --prefix="/install" --no-warn-script-location \ | |
47 | 60 | /synapse[all] |
48 | 61 | |
49 | 62 | # Copy over the rest of the project |
65 | 78 | LABEL org.opencontainers.image.source='https://github.com/matrix-org/synapse.git' |
66 | 79 | LABEL org.opencontainers.image.licenses='Apache-2.0' |
67 | 80 | |
68 | RUN apt-get update && apt-get install -y \ | |
81 | RUN \ | |
82 | --mount=type=cache,target=/var/cache/apt,sharing=locked \ | |
83 | --mount=type=cache,target=/var/lib/apt,sharing=locked \ | |
84 | apt-get update && apt-get install -y \ | |
69 | 85 | curl \ |
70 | 86 | gosu \ |
71 | 87 | libjpeg62-turbo \ |
14 | 14 | |
15 | 15 | It returns a JSON body like the following: |
16 | 16 | |
17 | ```json | |
18 | { | |
19 | "displayname": "User", | |
17 | ```jsonc | |
18 | { | |
19 | "name": "@user:example.com", | |
20 | "displayname": "User", // can be null if not set | |
20 | 21 | "threepids": [ |
21 | 22 | { |
22 | 23 | "medium": "email", |
31 | 32 | "validated_at": 1586458409743 |
32 | 33 | } |
33 | 34 | ], |
34 | "avatar_url": "<avatar_url>", | |
35 | "avatar_url": "<avatar_url>", // can be null if not set | |
36 | "is_guest": 0, | |
35 | 37 | "admin": 0, |
36 | 38 | "deactivated": 0, |
37 | 39 | "shadow_banned": 0, |
38 | "password_hash": "$2b$12$p9B4GkqYdRTPGD", | |
39 | 40 | "creation_ts": 1560432506, |
40 | 41 | "appservice_id": null, |
41 | 42 | "consent_server_notice_sent": null, |
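For reference, the JSON shown above is the body returned by the Users Admin API's single-user query; against a local homeserver it could be fetched roughly like this (the homeserver address, access token, and user ID below are placeholders, and the request must be authenticated as a server admin):

```shell
# Fetch account details for one user via the Users Admin API (v2).
# The response no longer includes the password_hash field, per the
# change above.
curl --silent \
  --header "Authorization: Bearer syt_admin_token_placeholder" \
  "http://localhost:8008/_synapse/admin/v2/users/@user:example.com"
```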
19 | 19 | <https://docs.microsoft.com/en-us/windows/wsl/install>. Running Synapse natively |
20 | 20 | on Windows is not officially supported. |
21 | 21 | |
22 | The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://wiki.python.org/moin/BeginnersGuide/Download). | |
22 | The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://www.python.org/downloads/). Your Python also needs support for [virtual environments](https://docs.python.org/3/library/venv.html). This is usually built-in, but some Linux distributions like Debian and Ubuntu split it out into its own package. Running `sudo apt install python3-venv` should be enough. | |
23 | ||
24 | Synapse can connect to PostgreSQL via the [psycopg2](https://pypi.org/project/psycopg2/) Python library. Building this library from source requires access to PostgreSQL's C header files. On Debian or Ubuntu Linux, these can be installed with `sudo apt install libpq-dev`. | |
23 | 25 | |
24 | 26 | The source code of Synapse is hosted on GitHub. You will also need [a recent version of git](https://github.com/git-guides/install-git). |
25 | 27 | |
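Putting the two new install notes above together, a fresh Debian/Ubuntu setup might look like the following sketch (package names as given above; the virtual environment path is arbitrary):

```shell
# Install the venv module and the PostgreSQL client headers, then
# create and activate a virtual environment for hacking on Synapse.
sudo apt install -y python3-venv libpq-dev
python3 -m venv ./venv
. ./venv/bin/activate
python -m pip --version   # pip now comes from the virtual environment
```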
168 | 170 | SYNAPSE_TEST_LOG_LEVEL=DEBUG trial tests |
169 | 171 | ``` |
170 | 172 | |
173 | By default, tests will use an in-memory SQLite database for test data. For additional | |
174 | help with debugging, one can use an on-disk SQLite database file instead, in order to | |
175 | review database state during and after running tests. This can be done by setting | |
176 | the `SYNAPSE_TEST_PERSIST_SQLITE_DB` environment variable. Doing so will cause the | |
177 | database state to be stored in a file named `test.db` under the trial process' | |
178 | working directory. Typically, this ends up being `_trial_temp/test.db`. For example: | |
179 | ||
180 | ```sh | |
181 | SYNAPSE_TEST_PERSIST_SQLITE_DB=1 trial tests | |
182 | ``` | |
183 | ||
184 | The database file can then be inspected with: | |
185 | ||
186 | ```sh | |
187 | sqlite3 _trial_temp/test.db | |
188 | ``` | |
189 | ||
190 | Note that the database file is cleared at the beginning of each test run. Thus it | |
191 | will only ever contain the data generated by the *most recently run* test. Generally, | |
192 | when debugging, one is only running a single test anyway. | |
193 | ||
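To make the inspection step concrete, the following self-contained sketch creates a throwaway SQLite file standing in for `_trial_temp/test.db` and queries it with the same `sqlite3` invocations (the `users` table and its row are invented for the example):

```shell
#!/bin/bash
# Create a throwaway database standing in for _trial_temp/test.db.
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE users (name TEXT);
               INSERT INTO users VALUES ('@alice:test');"

# Inspect it the same way one would inspect the trial database:
sqlite3 "$db" ".tables"                     # lists: users
sqlite3 "$db" "SELECT name FROM users;"     # prints: @alice:test

rm -f "$db"
```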
171 | 194 | ### Running tests under PostgreSQL |
172 | 195 | |
173 | 196 | Invoking `trial` as above will use an in-memory SQLite database. This is great for |
34 | 34 | 5. If the media is HTML: |
35 | 35 | 1. Decodes the HTML via the stored file. |
36 | 36 | 2. Generates an Open Graph response from the HTML. |
37 | 3. If an image exists in the Open Graph response: | |
37 | 3. If a JSON oEmbed URL was found in the HTML via autodiscovery: | |
38 | 1. Downloads the URL and stores it into a file via the media storage provider | |
39 | and saves the local media metadata. | |
40 | 2. Convert the oEmbed response to an Open Graph response. | |
41 | 3. Override any Open Graph data from the HTML with data from oEmbed. | |
42 | 4. If an image exists in the Open Graph response: | |
38 | 43 | 1. Downloads the URL and stores it into a file via the media storage |
39 | 44 | provider and saves the local media metadata. |
40 | 45 | 2. Generates thumbnails. |
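The oEmbed autodiscovery added in step 5.3 works by finding a `<link type="application/json+oembed">` tag in the fetched HTML. A rough shell sketch of just that discovery step (independent of Synapse's actual Python implementation; the HTML and URL are invented):

```shell
#!/bin/bash
# Extract a JSON oEmbed autodiscovery URL from an HTML document,
# roughly as step 5.3 above describes.
html='<html><head>
<link rel="alternate" type="application/json+oembed"
      href="https://example.com/oembed?url=page" title="demo">
</head><body></body></html>'

# Naive extraction: join lines, find the oembed <link>, pull its href.
echo "$html" | tr '\n' ' ' \
  | grep -o '<link[^>]*application/json+oembed[^>]*>' \
  | grep -o 'href="[^"]*"' \
  | head -n1
# prints: href="https://example.com/oembed?url=page"
```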
389 | 389 | |
390 | 390 | |
391 | 391 | |
392 | Like Github, Facebook provide a custom OAuth2 API rather than an OIDC-compliant | |
393 | one so requires a little more configuration. | |
394 | ||
395 | 392 | 0. You will need a Facebook developer account. You can register for one |
396 | 393 | [here](https://developers.facebook.com/async/registration/). |
397 | 394 | 1. On the [apps](https://developers.facebook.com/apps/) page of the developer |
411 | 408 | idp_name: Facebook |
412 | 409 | idp_brand: "facebook" # optional: styling hint for clients |
413 | 410 | discover: false |
414 | issuer: "https://facebook.com" | |
411 | issuer: "https://www.facebook.com" | |
415 | 412 | client_id: "your-client-id" # TO BE FILLED |
416 | 413 | client_secret: "your-client-secret" # TO BE FILLED |
417 | 414 | scopes: ["openid", "email"] |
418 | authorization_endpoint: https://facebook.com/dialog/oauth | |
419 | token_endpoint: https://graph.facebook.com/v9.0/oauth/access_token | |
420 | user_profile_method: "userinfo_endpoint" | |
421 | userinfo_endpoint: "https://graph.facebook.com/v9.0/me?fields=id,name,email,picture" | |
422 | user_mapping_provider: | |
423 | config: | |
424 | subject_claim: "id" | |
425 | display_name_template: "{{ user.name }}" | |
415 | authorization_endpoint: "https://facebook.com/dialog/oauth" | |
416 | token_endpoint: "https://graph.facebook.com/v9.0/oauth/access_token" | |
417 | jwks_uri: "https://www.facebook.com/.well-known/oauth/openid/jwks/" | |
418 | user_mapping_provider: | |
419 | config: | |
420 | display_name_template: "{{ user.name }}" | |
421 | email_template: "{{ '{{ user.email }}' }}" | |
426 | 422 | ``` |
427 | 423 | |
428 | 424 | Relevant documents: |
429 | * https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow | |
430 | * Using Facebook's Graph API: https://developers.facebook.com/docs/graph-api/using-graph-api/ | |
431 | * Reference to the User endpoint: https://developers.facebook.com/docs/graph-api/reference/user | |
425 | * [Manually Build a Login Flow](https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow) | |
426 | * [Using Facebook's Graph API](https://developers.facebook.com/docs/graph-api/using-graph-api/) | |
427 | * [Reference to the User endpoint](https://developers.facebook.com/docs/graph-api/reference/user) | |
428 | ||
429 | Facebook do have an [OIDC discovery endpoint](https://www.facebook.com/.well-known/openid-configuration), | |
430 | but it has a `response_types_supported` which excludes "code" (which we rely on, and | |
431 | is even mentioned in their [documentation](https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow#login)), | |
432 | so we have to disable discovery and configure the URIs manually. | |
432 | 433 | |
433 | 434 | ### Gitea |
434 | 435 |
73 | 73 | # |
74 | 74 | pid_file: DATADIR/homeserver.pid |
75 | 75 | |
76 | # The absolute URL to the web client which /_matrix/client will redirect | |
77 | # to if 'webclient' is configured under the 'listeners' configuration. | |
78 | # | |
79 | # This option can be also set to the filesystem path to the web client | |
80 | # which will be served at /_matrix/client/ if 'webclient' is configured | |
81 | # under the 'listeners' configuration, however this is a security risk: | |
82 | # https://github.com/matrix-org/synapse#security-note | |
76 | # The absolute URL to the web client which / will redirect to. | |
83 | 77 | # |
84 | 78 | #web_client_location: https://riot.example.com/ |
85 | 79 | |
163 | 157 | # The default room version for newly created rooms. |
164 | 158 | # |
165 | 159 | # Known room versions are listed here: |
166 | # https://matrix.org/docs/spec/#complete-list-of-room-versions | |
160 | # https://spec.matrix.org/latest/rooms/#complete-list-of-room-versions | |
167 | 161 | # |
168 | 162 | # For example, for room version 1, default_room_version should be set |
169 | 163 | # to "1". |
308 | 302 | # |
309 | 303 | # static: static resources under synapse/static (/_matrix/static). (Mostly |
310 | 304 | # useful for 'fallback authentication'.) |
311 | # | |
312 | # webclient: A web client. Requires web_client_location to be set. | |
313 | 305 | # |
314 | 306 | listeners: |
315 | 307 | # TLS-enabled listener: for when matrix traffic is sent directly to synapse. |
1502 | 1494 | #additional_event_types: |
1503 | 1495 | # - org.example.custom.event.type |
1504 | 1496 | |
1497 | # We record the IP address of clients used to access the API for various | |
1498 | # reasons, including displaying it to the user in the "Where you're signed in" | |
1499 | # dialog. | |
1500 | # | |
1501 | # By default, when puppeting another user via the admin API, the client IP | |
1502 | # address is recorded against the user who created the access token (ie, the | |
1503 | # admin user), and *not* the puppeted user. | |
1504 | # | |
1505 | # Uncomment the following to also record the IP address against the puppeted | |
1506 | # user. (This also means that the puppeted user will count as an "active" user | |
1507 | # for the purpose of monthly active user tracking - see 'limit_usage_by_mau' etc | |
1508 | # above.) | |
1509 | # | |
1510 | #track_puppeted_user_ips: true | |
1511 | ||
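The "puppeting" that the new comment block refers to is an admin minting an access token on behalf of another user via the admin API. A hedged sketch of such a request (homeserver address and token are placeholders):

```shell
# Mint an access token on behalf of @alice:example.com via the admin
# API. With track_puppeted_user_ips enabled, requests made with the
# returned token record the client IP against @alice rather than
# against the admin who created the token.
curl --silent --request POST \
  --header "Authorization: Bearer syt_admin_token_placeholder" \
  --data '{}' \
  "http://localhost:8008/_synapse/admin/v1/users/@alice:example.com/login"
```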
1505 | 1512 | |
1506 | 1513 | # A list of application service config files to use |
1507 | 1514 | # |
1869 | 1876 | # Defaults to false. Avoid this in production. |
1870 | 1877 | # |
1871 | 1878 | # user_profile_method: Whether to fetch the user profile from the userinfo |
1872 | # endpoint. Valid values are: 'auto' or 'userinfo_endpoint'. | |
1873 | # | |
1874 | # Defaults to 'auto', which fetches the userinfo endpoint if 'openid' is | |
1875 | # included in 'scopes'. Set to 'userinfo_endpoint' to always fetch the | |
1879 | # endpoint, or to rely on the data returned in the id_token from the | |
1880 | # token_endpoint. | |
1881 | # | |
1882 | # Valid values are: 'auto' or 'userinfo_endpoint'. | |
1883 | # | |
1884 | # Defaults to 'auto', which uses the userinfo endpoint if 'openid' is | |
1885 | # not included in 'scopes'. Set to 'userinfo_endpoint' to always use the | |
1876 | 1886 | # userinfo endpoint. |
1877 | 1887 | # |
1878 | 1888 | # allow_existing_users: set to 'true' to allow a user logging in via OIDC to |
193 | 193 | System requirements: |
194 | 194 | |
195 | 195 | - POSIX-compliant system (tested on Linux & OS X) |
196 | - Python 3.6 or later, up to Python 3.9. | |
196 | - Python 3.7 or later, up to Python 3.9. | |
197 | 197 | - At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org |
198 | 198 | |
199 | 199 | To install the Synapse homeserver run: |
136 | 136 | |
137 | 137 | # TLS private key file |
138 | 138 | pkey=/path/to/privkey.pem |
139 | ||
140 | # Ensure the configuration lines that disable TLS/DTLS are commented-out or removed | |
141 | #no-tls | |
142 | #no-dtls | |
139 | 143 | ``` |
140 | 144 | |
141 | 145 | In this case, replace the `turn:` schemes in the `turn_uris` settings below |
143 | 147 | |
144 | 148 | We recommend that you only try to set up TLS/DTLS once you have set up a |
145 | 149 | basic installation and got it working. |
150 | ||
151 | NB: If your TLS certificate was provided by Let's Encrypt, TLS/DTLS will | |
152 | not work with any Matrix client that uses Chromium's WebRTC library. This | |
153 | currently includes Element Android & iOS; for more details, see their | |
154 | [respective](https://github.com/vector-im/element-android/issues/1533) | |
155 | [issues](https://github.com/vector-im/element-ios/issues/2712) as well as the underlying | |
156 | [WebRTC issue](https://bugs.chromium.org/p/webrtc/issues/detail?id=11710). | |
157 | Consider using a ZeroSSL certificate for your TURN server as a working alternative. | |
146 | 158 | |
147 | 159 | 1. Ensure your firewall allows traffic into the TURN server on the ports |
148 | 160 | you've configured it to listen on (By default: 3478 and 5349 for TURN |
249 | 261 | * Check that you have opened your firewall to allow UDP traffic to the UDP |
250 | 262 | relay ports (49152-65535 by default). |
251 | 263 | |
264 | * Try disabling `coturn`'s TLS/DTLS listeners and enable only its (unencrypted) | |
265 | TCP/UDP listeners. (This will only leave signaling traffic unencrypted; | |
266 | voice & video WebRTC traffic is always encrypted.) | |
267 | ||
252 | 268 | * Some WebRTC implementations (notably, that of Google Chrome) appear to get |
253 | 269 | confused by TURN servers which are reachable over IPv6 (this appears to be |
254 | 270 | an unexpected side-effect of its handling of multiple IP addresses as |
83 | 83 | wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb |
84 | 84 | dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb |
85 | 85 | ``` |
86 | ||
87 | # Upgrading to v1.51.0 | |
88 | ||
89 | ## Deprecation of `webclient` listeners and non-HTTP(S) `web_client_location` | |
90 | ||
91 | Listeners of type `webclient` are deprecated and scheduled to be removed in | |
92 | Synapse v1.53.0. | |
93 | ||
94 | Similarly, a non-HTTP(S) `web_client_location` configuration is deprecated and | |
95 | will become a configuration error in Synapse v1.53.0. | |
96 | ||
86 | 97 | |
87 | 98 | # Upgrading to v1.50.0 |
88 | 99 |
7 | 7 | # By default the script will fetch the latest Complement master branch and |
8 | 8 | # run tests with that. This can be overridden to use a custom Complement |
9 | 9 | # checkout by setting the COMPLEMENT_DIR environment variable to the |
10 | # filepath of a local Complement checkout. | |
10 | # filepath of a local Complement checkout or by setting the COMPLEMENT_REF | |
11 | # environment variable to pull a different branch or commit. | |
11 | 12 | # |
12 | 13 | # By default Synapse is run in monolith mode. This can be overridden by |
13 | 14 | # setting the WORKERS environment variable. |
22 | 23 | # Exit if a line returns a non-zero exit code |
23 | 24 | set -e |
24 | 25 | |
26 | # enable buildkit for the docker builds | |
27 | export DOCKER_BUILDKIT=1 | |
28 | ||
25 | 29 | # Change to the repository root |
26 | 30 | cd "$(dirname $0)/.." |
27 | 31 | |
28 | 32 | # Check for a user-specified Complement checkout |
29 | 33 | if [[ -z "$COMPLEMENT_DIR" ]]; then |
30 | echo "COMPLEMENT_DIR not set. Fetching the latest Complement checkout..." | |
31 | wget -Nq https://github.com/matrix-org/complement/archive/master.tar.gz | |
32 | tar -xzf master.tar.gz | |
33 | COMPLEMENT_DIR=complement-master | |
34 | echo "Checkout available at 'complement-master'" | |
34 | COMPLEMENT_REF=${COMPLEMENT_REF:-master} | |
35 | echo "COMPLEMENT_DIR not set. Fetching Complement checkout from ${COMPLEMENT_REF}..." | |
36 | wget -Nq https://github.com/matrix-org/complement/archive/${COMPLEMENT_REF}.tar.gz | |
37 | tar -xzf ${COMPLEMENT_REF}.tar.gz | |
38 | COMPLEMENT_DIR=complement-${COMPLEMENT_REF} | |
39 | echo "Checkout available at 'complement-${COMPLEMENT_REF}'" | |
35 | 40 | fi |
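With the change above, the Complement ref can be chosen per-run through the environment; assuming the script is invoked from a Synapse checkout as `scripts-dev/complement.sh` (the branch name below is a placeholder):

```shell
# Pull and run tests against a specific Complement branch or commit
# instead of master:
COMPLEMENT_REF=some-branch-or-sha ./scripts-dev/complement.sh

# A local checkout still takes precedence over COMPLEMENT_REF:
COMPLEMENT_DIR=~/src/complement ./scripts-dev/complement.sh
```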
36 | 41 | |
37 | 42 | # Build the base Synapse image from the local checkout |
46 | 51 | COMPLEMENT_DOCKERFILE=SynapseWorkers.Dockerfile |
47 | 52 | # And provide some more configuration to complement. |
48 | 53 | export COMPLEMENT_CA=true |
49 | export COMPLEMENT_VERSION_CHECK_ITERATIONS=500 | |
54 | export COMPLEMENT_SPAWN_HS_TIMEOUT_SECS=25 | |
50 | 55 | else |
51 | 56 | export COMPLEMENT_BASE_IMAGE=complement-synapse |
52 | 57 | COMPLEMENT_DOCKERFILE=Synapse.Dockerfile |
64 | 69 | fi |
65 | 70 | |
66 | 71 | # Run the tests! |
72 | echo "Images built; running complement" | |
67 | 73 | go test -v -tags synapse_blacklist,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/... |
46 | 46 | except ImportError: |
47 | 47 | pass |
48 | 48 | |
49 | __version__ = "1.50.2" | |
49 | __version__ = "1.51.0" | |
50 | 50 | |
51 | 51 | if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)): |
52 | 52 | # We import here so that we don't have to install a bunch of deps when |
45 | 45 | ips: List[str] = attr.Factory(list) |
46 | 46 | |
47 | 47 | |
48 | def get_recent_users(txn: LoggingTransaction, since_ms: int) -> List[UserInfo]: | |
48 | def get_recent_users( | |
49 | txn: LoggingTransaction, since_ms: int, exclude_app_service: bool | |
50 | ) -> List[UserInfo]: | |
49 | 51 | """Fetches recently registered users and some info on them.""" |
50 | 52 | |
51 | 53 | sql = """ |
54 | 56 | ? <= creation_ts |
55 | 57 | AND deactivated = 0 |
56 | 58 | """ |
59 | ||
60 | if exclude_app_service: | |
61 | sql += " AND appservice_id IS NULL" | |
57 | 62 | |
58 | 63 | txn.execute(sql, (since_ms / 1000,)) |
59 | 64 | |
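The hunk above appends an extra predicate to the base query only when app-service users are excluded. A minimal standalone sketch of the pattern (hypothetical helper name; the real query selects more columns):

```python
# Sketch of the optional filter added above: the base SQL gains an extra
# predicate only when app-service users should be excluded. Table and column
# names follow the diff; this is not the full production query.
def build_recent_users_sql(exclude_app_service: bool) -> str:
    sql = """
        SELECT name, creation_ts FROM users
        WHERE ? <= creation_ts AND deactivated = 0
    """
    if exclude_app_service:
        sql += " AND appservice_id IS NULL"
    return sql
```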
112 | 117 | "-e", |
113 | 118 | "--exclude-emails", |
114 | 119 | action="store_true", |
115 | help="Exclude users that have validated email addresses", | |
120 | help="Exclude users that have validated email addresses.", | |
116 | 121 | ) |
117 | 122 | parser.add_argument( |
118 | 123 | "-u", |
119 | 124 | "--only-users", |
120 | 125 | action="store_true", |
121 | 126 | help="Only print user IDs that match.", |
127 | ) | |
128 | parser.add_argument( | |
129 | "-a", | |
130 | "--exclude-app-service", | |
131 | help="Exclude appservice users.", | |
132 | action="store_true", | |
122 | 133 | ) |
123 | 134 | |
124 | 135 | config = ReviewConfig() |
132 | 143 | |
133 | 144 | since_ms = time.time() * 1000 - Config.parse_duration(config_args.since) |
134 | 145 | exclude_users_with_email = config_args.exclude_emails |
146 | exclude_users_with_appservice = config_args.exclude_app_service | |
135 | 147 | include_context = not config_args.only_users |
136 | 148 | |
137 | 149 | for database_config in config.database.databases: |
142 | 154 | |
143 | 155 | with make_conn(database_config, engine, "review_recent_signups") as db_conn: |
144 | 156 | # This generates a type of Cursor, not LoggingTransaction. |
145 | user_infos = get_recent_users(db_conn.cursor(), since_ms) # type: ignore[arg-type] | |
157 | user_infos = get_recent_users(db_conn.cursor(), since_ms, exclude_users_with_appservice) # type: ignore[arg-type] | |
146 | 158 | |
147 | 159 | for user_info in user_infos: |
148 | 160 | if exclude_users_with_email and user_info.emails: |
70 | 70 | self._auth_blocking = AuthBlocking(self.hs) |
71 | 71 | |
72 | 72 | self._track_appservice_user_ips = hs.config.appservice.track_appservice_user_ips |
73 | self._track_puppeted_user_ips = hs.config.api.track_puppeted_user_ips | |
73 | 74 | self._macaroon_secret_key = hs.config.key.macaroon_secret_key |
74 | 75 | self._force_tracing_for_users = hs.config.tracing.force_tracing_for_users |
75 | 76 | |
245 | 246 | user_agent=user_agent, |
246 | 247 | device_id=device_id, |
247 | 248 | ) |
249 | # Also track the puppeted user's client IP if enabled and the token is puppeting another user
250 | if ( | |
251 | user_info.user_id != user_info.token_owner | |
252 | and self._track_puppeted_user_ips | |
253 | ): | |
254 | await self.store.insert_client_ip( | |
255 | user_id=user_info.user_id, | |
256 | access_token=access_token, | |
257 | ip=ip_addr, | |
258 | user_agent=user_agent, | |
259 | device_id=device_id, | |
260 | ) | |
248 | 261 | |
249 | 262 | if is_guest and not allow_guest: |
250 | 263 | raise AuthError( |
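The condition guarding the second `insert_client_ip` call above can be summarised as a small predicate (hypothetical helper, not part of the diff):

```python
# Sketch of the new tracking condition: a second client-IP row is written
# only when the option is enabled and the access token is acting as a
# different user than its owner (i.e. an admin puppeting another user).
def should_track_puppet_ip(
    user_id: str, token_owner: str, track_puppeted_user_ips: bool
) -> bool:
    return track_puppeted_user_ips and user_id != token_owner
```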
45 | 45 | UNSTABLE = "unstable" |
46 | 46 | |
47 | 47 | |
48 | @attr.s(slots=True, frozen=True) | |
48 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
49 | 49 | class RoomVersion: |
50 | 50 | """An object which describes the unique attributes of a room version.""" |
51 | 51 | |
52 | identifier = attr.ib(type=str) # the identifier for this version | |
53 | disposition = attr.ib(type=str) # one of the RoomDispositions | |
54 | event_format = attr.ib(type=int) # one of the EventFormatVersions | |
55 | state_res = attr.ib(type=int) # one of the StateResolutionVersions | |
56 | enforce_key_validity = attr.ib(type=bool) | |
52 | identifier: str # the identifier for this version | |
53 | disposition: str # one of the RoomDispositions | |
54 | event_format: int # one of the EventFormatVersions | |
55 | state_res: int # one of the StateResolutionVersions | |
56 | enforce_key_validity: bool | |
57 | 57 | |
58 | 58 | # Before MSC2432, m.room.aliases had special auth rules and redaction rules |
59 | special_case_aliases_auth = attr.ib(type=bool) | |
59 | special_case_aliases_auth: bool | |
60 | 60 | # Strictly enforce canonicaljson, do not allow: |
61 | 61 | # * Integers outside the range of [-2 ^ 53 + 1, 2 ^ 53 - 1] |
62 | 62 | # * Floats |
63 | 63 | # * NaN, Infinity, -Infinity |
64 | strict_canonicaljson = attr.ib(type=bool) | |
64 | strict_canonicaljson: bool | |
65 | 65 | # MSC2209: Check 'notifications' key while verifying |
66 | 66 | # m.room.power_levels auth rules. |
67 | limit_notifications_power_levels = attr.ib(type=bool) | |
67 | limit_notifications_power_levels: bool | |
68 | 68 | # MSC2174/MSC2176: Apply updated redaction rules algorithm. |
69 | msc2176_redaction_rules = attr.ib(type=bool) | |
69 | msc2176_redaction_rules: bool | |
70 | 70 | # MSC3083: Support the 'restricted' join_rule. |
71 | msc3083_join_rules = attr.ib(type=bool) | |
71 | msc3083_join_rules: bool | |
72 | 72 | # MSC3375: Support for the proper redaction rules for MSC3083. This mustn't |
73 | 73 | # be enabled if MSC3083 is not. |
74 | msc3375_redaction_rules = attr.ib(type=bool) | |
74 | msc3375_redaction_rules: bool | |
75 | 75 | # MSC2403: Allows join_rules to be set to 'knock', changes auth rules to allow sending |
76 | 76 | # m.room.membership event with membership 'knock'. |
77 | msc2403_knocking = attr.ib(type=bool) | |
77 | msc2403_knocking: bool | |
78 | 78 | # MSC2716: Adds m.room.power_levels -> content.historical field to control |
79 | 79 | # whether "insertion", "chunk", "marker" events can be sent |
80 | msc2716_historical = attr.ib(type=bool) | |
80 | msc2716_historical: bool | |
81 | 81 | # MSC2716: Adds support for redacting "insertion", "chunk", and "marker" events |
82 | msc2716_redactions = attr.ib(type=bool) | |
82 | msc2716_redactions: bool | |
83 | 83 | |
84 | 84 | |
85 | 85 | class RoomVersions: |
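The `attr.ib(type=...)` to annotation migration above is the attrs analogue of how stdlib dataclasses declare fields. A minimal illustration using two field names from the diff (`RoomVersionSketch` is a hypothetical stand-in, not the real class):

```python
# With auto_attribs (or a dataclass), plain annotations become fields; the
# declarations are equivalent to the old attr.ib(type=...) spelling.
from dataclasses import dataclass

@dataclass(frozen=True)
class RoomVersionSketch:
    identifier: str  # the identifier for this version
    enforce_key_validity: bool

v = RoomVersionSketch(identifier="9", enforce_key_validity=True)
```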
59 | 59 | from synapse.events.third_party_rules import load_legacy_third_party_event_rules |
60 | 60 | from synapse.handlers.auth import load_legacy_password_auth_providers |
61 | 61 | from synapse.logging.context import PreserveLoggingContext |
62 | from synapse.metrics import register_threadpool | |
62 | from synapse.metrics import install_gc_manager, register_threadpool | |
63 | 63 | from synapse.metrics.background_process_metrics import wrap_as_background_process |
64 | 64 | from synapse.metrics.jemalloc import setup_jemalloc_stats |
65 | 65 | from synapse.types import ISynapseReactor |
158 | 158 | change_resource_limit(soft_file_limit) |
159 | 159 | if gc_thresholds: |
160 | 160 | gc.set_threshold(*gc_thresholds) |
161 | install_gc_manager() | |
161 | 162 | run_command() |
162 | 163 | |
163 | 164 | # make sure that we run the reactor with the sentinel log context, |
130 | 130 | resources.update(self._module_web_resources) |
131 | 131 | self._module_web_resources_consumed = True |
132 | 132 | |
133 | # try to find something useful to redirect '/' to | |
134 | if WEB_CLIENT_PREFIX in resources: | |
135 | root_resource: Resource = RootOptionsRedirectResource(WEB_CLIENT_PREFIX) | |
133 | # Try to find something useful to serve at '/': | |
134 | # | |
135 | # 1. Redirect to the web client if it is an HTTP(S) URL. | |
136 | # 2. Redirect to the web client served via Synapse. | |
137 | # 3. Redirect to the static "Synapse is running" page. | |
138 | # 4. Do not redirect and use a blank resource. | |
139 | if self.config.server.web_client_location_is_redirect: | |
140 | root_resource: Resource = RootOptionsRedirectResource( | |
141 | self.config.server.web_client_location | |
142 | ) | |
143 | elif WEB_CLIENT_PREFIX in resources: | |
144 | root_resource = RootOptionsRedirectResource(WEB_CLIENT_PREFIX) | |
136 | 145 | elif STATIC_PREFIX in resources: |
137 | 146 | root_resource = RootOptionsRedirectResource(STATIC_PREFIX) |
138 | 147 | else: |
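The four-way choice in the hunk above can be condensed into a pure function (simplified string prefixes stand in for the real `Resource` objects):

```python
# Condensed sketch of the new '/' selection order: an external HTTP(S) web
# client wins, then a web client served by Synapse, then the static page,
# then nothing.
from typing import Iterable, Optional

def pick_root_target(
    is_redirect: bool, web_client_location: Optional[str], resources: Iterable[str]
) -> Optional[str]:
    if is_redirect:
        return web_client_location
    if "/_matrix/client" in resources:
        return "/_matrix/client"
    if "/_matrix/static" in resources:
        return "/_matrix/static"
    return None
```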
261 | 270 | resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self) |
262 | 271 | |
263 | 272 | if name == "webclient": |
273 | # webclient listeners are deprecated as of Synapse v1.51.0; remove this
274 | # handling in Synapse v1.53.0.
264 | 275 | webclient_loc = self.config.server.web_client_location |
265 | 276 | |
266 | 277 | if webclient_loc is None: |
267 | 278 | logger.warning( |
268 | 279 | "Not enabling webclient resource, as web_client_location is unset." |
269 | 280 | ) |
270 | elif webclient_loc.startswith("http://") or webclient_loc.startswith( | |
271 | "https://" | |
272 | ): | |
281 | elif self.config.server.web_client_location_is_redirect: | |
273 | 282 | resources[WEB_CLIENT_PREFIX] = RootRedirect(webclient_loc) |
274 | 283 | else: |
275 | 284 | logger.warning( |
28 | 28 | def read_config(self, config: JsonDict, **kwargs): |
29 | 29 | validate_config(_MAIN_SCHEMA, config, ()) |
30 | 30 | self.room_prejoin_state = list(self._get_prejoin_state_types(config)) |
31 | self.track_puppeted_user_ips = config.get("track_puppeted_user_ips", False) | |
31 | 32 | |
32 | 33 | def generate_config_section(cls, **kwargs) -> str: |
33 | 34 | formatted_default_state_types = "\n".join( |
58 | 59 | # |
59 | 60 | #additional_event_types: |
60 | 61 | # - org.example.custom.event.type |
62 | ||
63 | # We record the IP address of clients used to access the API for various | |
64 | # reasons, including displaying it to the user in the "Where you're signed in" | |
65 | # dialog. | |
66 | # | |
67 | # By default, when puppeting another user via the admin API, the client IP | |
68 | # address is recorded against the user who created the access token (ie, the | |
69 | # admin user), and *not* the puppeted user. | |
70 | # | |
71 | # Uncomment the following to also record the IP address against the puppeted | |
72 | # user. (This also means that the puppeted user will count as an "active" user | |
73 | # for the purpose of monthly active user tracking - see 'limit_usage_by_mau' etc | |
74 | # above.) | |
75 | # | |
76 | #track_puppeted_user_ips: true | |
61 | 77 | """ % { |
62 | 78 | "formatted_default_state_types": formatted_default_state_types |
63 | 79 | } |
137 | 153 | "properties": { |
138 | 154 | "room_prejoin_state": _ROOM_PREJOIN_STATE_CONFIG_SCHEMA, |
139 | 155 | "room_invite_state_types": _ROOM_INVITE_STATE_TYPES_SCHEMA, |
156 | "track_puppeted_user_ips": { | |
157 | "type": "boolean", | |
158 | }, | |
140 | 159 | }, |
141 | 160 | } |
54 | 54 | ---------------------------------------------------------------------------------------""" |
55 | 55 | |
56 | 56 | |
57 | @attr.s(slots=True, frozen=True) | |
57 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
58 | 58 | class EmailSubjectConfig: |
59 | message_from_person_in_room = attr.ib(type=str) | |
60 | message_from_person = attr.ib(type=str) | |
61 | messages_from_person = attr.ib(type=str) | |
62 | messages_in_room = attr.ib(type=str) | |
63 | messages_in_room_and_others = attr.ib(type=str) | |
64 | messages_from_person_and_others = attr.ib(type=str) | |
65 | invite_from_person = attr.ib(type=str) | |
66 | invite_from_person_to_room = attr.ib(type=str) | |
67 | invite_from_person_to_space = attr.ib(type=str) | |
68 | password_reset = attr.ib(type=str) | |
69 | email_validation = attr.ib(type=str) | |
59 | message_from_person_in_room: str | |
60 | message_from_person: str | |
61 | messages_from_person: str | |
62 | messages_in_room: str | |
63 | messages_in_room_and_others: str | |
64 | messages_from_person_and_others: str | |
65 | invite_from_person: str | |
66 | invite_from_person_to_room: str | |
67 | invite_from_person_to_space: str | |
68 | password_reset: str | |
69 | email_validation: str | |
70 | 70 | |
71 | 71 | |
72 | 72 | class EmailConfig(Config): |
147 | 147 | # Defaults to false. Avoid this in production. |
148 | 148 | # |
149 | 149 | # user_profile_method: Whether to fetch the user profile from the userinfo |
150 | # endpoint. Valid values are: 'auto' or 'userinfo_endpoint'. | |
151 | # | |
152 | # Defaults to 'auto', which fetches the userinfo endpoint if 'openid' is | |
153 | # included in 'scopes'. Set to 'userinfo_endpoint' to always fetch the | |
150 | # endpoint, or to rely on the data returned in the id_token from the | |
151 | # token_endpoint. | |
152 | # | |
153 | # Valid values are: 'auto' or 'userinfo_endpoint'. | |
154 | # | |
155 | # Defaults to 'auto', which uses the userinfo endpoint if 'openid' is | |
156 | # not included in 'scopes'. Set to 'userinfo_endpoint' to always use the | |
154 | 157 | # userinfo endpoint. |
155 | 158 | # |
156 | 159 | # allow_existing_users: set to 'true' to allow a user logging in via OIDC to |
199 | 199 | """Object describing the http-specific parts of the config of a listener""" |
200 | 200 | |
201 | 201 | x_forwarded: bool = False |
202 | resources: List[HttpResourceConfig] = attr.ib(factory=list) | |
203 | additional_resources: Dict[str, dict] = attr.ib(factory=dict) | |
202 | resources: List[HttpResourceConfig] = attr.Factory(list) | |
203 | additional_resources: Dict[str, dict] = attr.Factory(dict) | |
204 | 204 | tag: Optional[str] = None |
205 | 205 | |
206 | 206 | |
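`attr.Factory(list)` above gives every instance its own fresh list, the same role `default_factory` plays for stdlib dataclasses. A sketch of why that matters (`ListenerSketch` is hypothetical):

```python
# A shared literal default ([]) would be one list aliased across instances;
# default_factory (like attr.Factory) builds a new list per instance.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ListenerSketch:
    resources: List[str] = field(default_factory=list)

a = ListenerSketch()
b = ListenerSketch()
a.resources.append("client")
```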
258 | 258 | raise ConfigError(str(e)) |
259 | 259 | |
260 | 260 | self.pid_file = self.abspath(config.get("pid_file")) |
261 | self.web_client_location = config.get("web_client_location", None) | |
262 | 261 | self.soft_file_limit = config.get("soft_file_limit", 0) |
263 | 262 | self.daemonize = config.get("daemonize") |
264 | 263 | self.print_pidfile = config.get("print_pidfile") |
505 | 504 | l2.append(listener) |
506 | 505 | self.listeners = l2 |
507 | 506 | |
508 | if not self.web_client_location: | |
509 | _warn_if_webclient_configured(self.listeners) | |
507 | self.web_client_location = config.get("web_client_location", None) | |
508 | self.web_client_location_is_redirect = self.web_client_location and ( | |
509 | self.web_client_location.startswith("http://") | |
510 | or self.web_client_location.startswith("https://") | |
511 | ) | |
512 | # A non-HTTP(S) web client location is deprecated. | |
513 | if self.web_client_location and not self.web_client_location_is_redirect: | |
514 | logger.warning(NO_MORE_NONE_HTTP_WEB_CLIENT_LOCATION_WARNING) | |
515 | ||
516 | # Warn if webclient is configured for a worker. | |
517 | _warn_if_webclient_configured(self.listeners) | |
510 | 518 | |
511 | 519 | self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None)) |
512 | 520 | self.gc_seconds = self.read_gc_intervals(config.get("gc_min_interval", None)) |
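The new `web_client_location_is_redirect` flag above boils down to a URL-scheme check, which could be sketched as:

```python
# Sketch of the redirect test: web_client_location counts as a redirect
# target only when it is an HTTP(S) URL; a filesystem path (now deprecated)
# or an unset value does not.
from typing import Optional

def is_redirect(web_client_location: Optional[str]) -> bool:
    return bool(web_client_location) and (
        web_client_location.startswith("http://")
        or web_client_location.startswith("https://")
    )
```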
792 | 800 | # |
793 | 801 | pid_file: %(pid_file)s |
794 | 802 | |
795 | # The absolute URL to the web client which /_matrix/client will redirect | |
796 | # to if 'webclient' is configured under the 'listeners' configuration. | |
797 | # | |
798 | # This option can be also set to the filesystem path to the web client | |
799 | # which will be served at /_matrix/client/ if 'webclient' is configured | |
800 | # under the 'listeners' configuration, however this is a security risk: | |
801 | # https://github.com/matrix-org/synapse#security-note | |
803 | # The absolute URL to the web client which / will redirect to. | |
802 | 804 | # |
803 | 805 | #web_client_location: https://riot.example.com/ |
804 | 806 | |
882 | 884 | # The default room version for newly created rooms. |
883 | 885 | # |
884 | 886 | # Known room versions are listed here: |
885 | # https://matrix.org/docs/spec/#complete-list-of-room-versions | |
887 | # https://spec.matrix.org/latest/rooms/#complete-list-of-room-versions | |
886 | 888 | # |
887 | 889 | # For example, for room version 1, default_room_version should be set |
888 | 890 | # to "1". |
1009 | 1011 | # |
1010 | 1012 | # static: static resources under synapse/static (/_matrix/static). (Mostly |
1011 | 1013 | # useful for 'fallback authentication'.) |
1012 | # | |
1013 | # webclient: A web client. Requires web_client_location to be set. | |
1014 | 1014 | # |
1015 | 1015 | listeners: |
1016 | 1016 | # TLS-enabled listener: for when matrix traffic is sent directly to synapse. |
1348 | 1348 | return ListenerConfig(port, bind_addresses, listener_type, tls, http_config) |
1349 | 1349 | |
1350 | 1350 | |
1351 | NO_MORE_NONE_HTTP_WEB_CLIENT_LOCATION_WARNING = """ | |
1352 | Synapse no longer supports serving a web client. To remove this warning, | |
1353 | configure 'web_client_location' with an HTTP(S) URL. | |
1354 | """ | |
1355 | ||
1356 | ||
1351 | 1357 | NO_MORE_WEB_CLIENT_WARNING = """ |
1352 | Synapse no longer includes a web client. To enable a web client, configure | |
1353 | web_client_location. To remove this warning, remove 'webclient' from the 'listeners' | |
1358 | Synapse no longer includes a web client. To redirect the root resource to a web client, configure | |
1359 | 'web_client_location'. To remove this warning, remove 'webclient' from the 'listeners' | |
1354 | 1360 | configuration. |
1355 | 1361 | """ |
1356 | 1362 |
50 | 50 | return obj |
51 | 51 | |
52 | 52 | |
53 | @attr.s | |
53 | @attr.s(auto_attribs=True) | |
54 | 54 | class InstanceLocationConfig: |
55 | 55 | """The host and port to talk to an instance via HTTP replication.""" |
56 | 56 | |
57 | host = attr.ib(type=str) | |
58 | port = attr.ib(type=int) | |
57 | host: str | |
58 | port: int | |
59 | 59 | |
60 | 60 | |
61 | 61 | @attr.s |
76 | 76 | can only be a single instance. |
77 | 77 | """ |
78 | 78 | |
79 | events = attr.ib( | |
80 | default=["master"], | |
81 | type=List[str], | |
82 | converter=_instance_to_list_converter, | |
83 | ) | |
84 | typing = attr.ib( | |
85 | default=["master"], | |
86 | type=List[str], | |
87 | converter=_instance_to_list_converter, | |
88 | ) | |
89 | to_device = attr.ib( | |
90 | default=["master"], | |
91 | type=List[str], | |
92 | converter=_instance_to_list_converter, | |
93 | ) | |
94 | account_data = attr.ib( | |
95 | default=["master"], | |
96 | type=List[str], | |
97 | converter=_instance_to_list_converter, | |
98 | ) | |
99 | receipts = attr.ib( | |
100 | default=["master"], | |
101 | type=List[str], | |
102 | converter=_instance_to_list_converter, | |
103 | ) | |
104 | presence = attr.ib( | |
105 | default=["master"], | |
106 | type=List[str], | |
79 | events: List[str] = attr.ib( | |
80 | default=["master"], | |
81 | converter=_instance_to_list_converter, | |
82 | ) | |
83 | typing: List[str] = attr.ib( | |
84 | default=["master"], | |
85 | converter=_instance_to_list_converter, | |
86 | ) | |
87 | to_device: List[str] = attr.ib( | |
88 | default=["master"], | |
89 | converter=_instance_to_list_converter, | |
90 | ) | |
91 | account_data: List[str] = attr.ib( | |
92 | default=["master"], | |
93 | converter=_instance_to_list_converter, | |
94 | ) | |
95 | receipts: List[str] = attr.ib( | |
96 | default=["master"], | |
97 | converter=_instance_to_list_converter, | |
98 | ) | |
99 | presence: List[str] = attr.ib( | |
100 | default=["master"], | |
107 | 101 | converter=_instance_to_list_converter, |
108 | 102 | ) |
109 | 103 |
57 | 57 | logger = logging.getLogger(__name__) |
58 | 58 | |
59 | 59 | |
60 | @attr.s(slots=True, cmp=False) | |
60 | @attr.s(slots=True, frozen=True, cmp=False, auto_attribs=True) | |
61 | 61 | class VerifyJsonRequest: |
62 | 62 | """ |
63 | 63 | A request to verify a JSON object. |
77 | 77 | key_ids: The set of key_ids to that could be used to verify the JSON object |
78 | 78 | """ |
79 | 79 | |
80 | server_name = attr.ib(type=str) | |
81 | get_json_object = attr.ib(type=Callable[[], JsonDict]) | |
82 | minimum_valid_until_ts = attr.ib(type=int) | |
83 | key_ids = attr.ib(type=List[str]) | |
80 | server_name: str | |
81 | get_json_object: Callable[[], JsonDict] | |
82 | minimum_valid_until_ts: int | |
83 | key_ids: List[str] | |
84 | 84 | |
85 | 85 | @staticmethod |
86 | 86 | def from_json_object( |
123 | 123 | pass |
124 | 124 | |
125 | 125 | |
126 | @attr.s(slots=True) | |
126 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
127 | 127 | class _FetchKeyRequest: |
128 | 128 | """A request for keys for a given server. |
129 | 129 | |
137 | 137 | key_ids: The IDs of the keys to attempt to fetch |
138 | 138 | """ |
139 | 139 | |
140 | server_name = attr.ib(type=str) | |
141 | minimum_valid_until_ts = attr.ib(type=int) | |
142 | key_ids = attr.ib(type=List[str]) | |
140 | server_name: str | |
141 | minimum_valid_until_ts: int | |
142 | key_ids: List[str] | |
143 | 143 | |
144 | 144 | |
145 | 145 | class Keyring: |
27 | 27 | from synapse.storage.databases.main import DataStore |
28 | 28 | |
29 | 29 | |
30 | @attr.s(slots=True) | |
30 | @attr.s(slots=True, auto_attribs=True) | |
31 | 31 | class EventContext: |
32 | 32 | """ |
33 | 33 | Holds information relevant to persisting an event |
102 | 102 | accessed via get_prev_state_ids. |
103 | 103 | """ |
104 | 104 | |
105 | rejected = attr.ib(default=False, type=Union[bool, str]) | |
106 | _state_group = attr.ib(default=None, type=Optional[int]) | |
107 | state_group_before_event = attr.ib(default=None, type=Optional[int]) | |
108 | prev_group = attr.ib(default=None, type=Optional[int]) | |
109 | delta_ids = attr.ib(default=None, type=Optional[StateMap[str]]) | |
110 | app_service = attr.ib(default=None, type=Optional[ApplicationService]) | |
111 | ||
112 | _current_state_ids = attr.ib(default=None, type=Optional[StateMap[str]]) | |
113 | _prev_state_ids = attr.ib(default=None, type=Optional[StateMap[str]]) | |
105 | rejected: Union[bool, str] = False | |
106 | _state_group: Optional[int] = None | |
107 | state_group_before_event: Optional[int] = None | |
108 | prev_group: Optional[int] = None | |
109 | delta_ids: Optional[StateMap[str]] = None | |
110 | app_service: Optional[ApplicationService] = None | |
111 | ||
112 | _current_state_ids: Optional[StateMap[str]] = None | |
113 | _prev_state_ids: Optional[StateMap[str]] = None | |
114 | 114 | |
115 | 115 | @staticmethod |
116 | 116 | def with_state( |
13 | 13 | # limitations under the License. |
14 | 14 | import collections.abc |
15 | 15 | import re |
16 | from typing import ( | |
17 | TYPE_CHECKING, | |
18 | Any, | |
19 | Callable, | |
20 | Dict, | |
21 | Iterable, | |
22 | List, | |
23 | Mapping, | |
24 | Optional, | |
25 | Union, | |
26 | ) | |
16 | from typing import Any, Callable, Dict, Iterable, List, Mapping, Optional, Union | |
27 | 17 | |
28 | 18 | from frozendict import frozendict |
29 | 19 | |
31 | 21 | from synapse.api.errors import Codes, SynapseError |
32 | 22 | from synapse.api.room_versions import RoomVersion |
33 | 23 | from synapse.types import JsonDict |
34 | from synapse.util.async_helpers import yieldable_gather_results | |
35 | 24 | from synapse.util.frozenutils import unfreeze |
36 | 25 | |
37 | 26 | from . import EventBase |
38 | ||
39 | if TYPE_CHECKING: | |
40 | from synapse.server import HomeServer | |
41 | 27 | |
42 | 28 | # Split strings on "." but not "\." This uses a negative lookbehind assertion for '\' |
43 | 29 | # (?<!stuff) matches if the current position in the string is not preceded |
384 | 370 | clients. |
385 | 371 | """ |
386 | 372 | |
387 | def __init__(self, hs: "HomeServer"): | |
388 | self.store = hs.get_datastore() | |
389 | self._msc1849_enabled = hs.config.experimental.msc1849_enabled | |
390 | self._msc3440_enabled = hs.config.experimental.msc3440_enabled | |
391 | ||
392 | async def serialize_event( | |
373 | def serialize_event( | |
393 | 374 | self, |
394 | 375 | event: Union[JsonDict, EventBase], |
395 | 376 | time_now: int, |
396 | 377 | *, |
397 | bundle_aggregations: bool = False, | |
378 | bundle_aggregations: Optional[Dict[str, JsonDict]] = None, | |
398 | 379 | **kwargs: Any, |
399 | 380 | ) -> JsonDict: |
400 | 381 | """Serializes a single event. |
417 | 398 | serialized_event = serialize_event(event, time_now, **kwargs) |
418 | 399 | |
419 | 400 | # Check if there are any bundled aggregations to include with the event. |
420 | # | |
421 | # Do not bundle aggregations if any of the following at true: | |
422 | # | |
423 | # * Support is disabled via the configuration or the caller. | |
424 | # * The event is a state event. | |
425 | # * The event has been redacted. | |
426 | if ( | |
427 | self._msc1849_enabled | |
428 | and bundle_aggregations | |
429 | and not event.is_state() | |
430 | and not event.internal_metadata.is_redacted() | |
431 | ): | |
432 | await self._injected_bundled_aggregations(event, time_now, serialized_event) | |
401 | if bundle_aggregations: | |
402 | event_aggregations = bundle_aggregations.get(event.event_id) | |
403 | if event_aggregations: | |
404 | self._inject_bundled_aggregations( | |
405 | event, | |
406 | time_now, | |
407 | bundle_aggregations[event.event_id], | |
408 | serialized_event, | |
409 | ) | |
433 | 410 | |
434 | 411 | return serialized_event |
435 | 412 | |
436 | async def _injected_bundled_aggregations( | |
437 | self, event: EventBase, time_now: int, serialized_event: JsonDict | |
413 | def _inject_bundled_aggregations( | |
414 | self, | |
415 | event: EventBase, | |
416 | time_now: int, | |
417 | aggregations: JsonDict, | |
418 | serialized_event: JsonDict, | |
438 | 419 | ) -> None: |
439 | 420 | """Potentially injects bundled aggregations into the unsigned portion of the serialized event. |
440 | 421 | |
441 | 422 | Args: |
442 | 423 | event: The event being serialized. |
443 | 424 | time_now: The current time in milliseconds |
425 | aggregations: The bundled aggregation to serialize. | |
444 | 426 | serialized_event: The serialized event which may be modified. |
445 | 427 | |
446 | 428 | """ |
447 | # Do not bundle aggregations for an event which represents an edit or an | |
448 | # annotation. It does not make sense for them to have related events. | |
449 | relates_to = event.content.get("m.relates_to") | |
450 | if isinstance(relates_to, (dict, frozendict)): | |
451 | relation_type = relates_to.get("rel_type") | |
452 | if relation_type in (RelationTypes.ANNOTATION, RelationTypes.REPLACE): | |
453 | return | |
454 | ||
455 | event_id = event.event_id | |
456 | room_id = event.room_id | |
457 | ||
458 | # The bundled aggregations to include. | |
459 | aggregations = {} | |
460 | ||
461 | annotations = await self.store.get_aggregation_groups_for_event( | |
462 | event_id, room_id | |
463 | ) | |
464 | if annotations.chunk: | |
465 | aggregations[RelationTypes.ANNOTATION] = annotations.to_dict() | |
466 | ||
467 | references = await self.store.get_relations_for_event( | |
468 | event_id, room_id, RelationTypes.REFERENCE, direction="f" | |
469 | ) | |
470 | if references.chunk: | |
471 | aggregations[RelationTypes.REFERENCE] = references.to_dict() | |
472 | ||
473 | edit = None | |
474 | if event.type == EventTypes.Message: | |
475 | edit = await self.store.get_applicable_edit(event_id, room_id) | |
476 | ||
477 | if edit: | |
429 | # Make a copy in case the object is cached.
430 | aggregations = aggregations.copy() | |
431 | ||
432 | if RelationTypes.REPLACE in aggregations: | |
478 | 433 | # If there is an edit replace the content, preserving existing |
479 | 434 | # relations. |
435 | edit = aggregations[RelationTypes.REPLACE] | |
480 | 436 | |
481 | 437 | # Ensure we take copies of the edit content, otherwise we risk modifying |
482 | 438 | # the original event. |
501 | 457 | } |
502 | 458 | |
503 | 459 | # If this event is the start of a thread, include a summary of the replies. |
504 | if self._msc3440_enabled: | |
505 | ( | |
506 | thread_count, | |
507 | latest_thread_event, | |
508 | ) = await self.store.get_thread_summary(event_id, room_id) | |
509 | if latest_thread_event: | |
510 | aggregations[RelationTypes.THREAD] = { | |
511 | # Don't bundle aggregations as this could recurse forever. | |
512 | "latest_event": await self.serialize_event( | |
513 | latest_thread_event, time_now, bundle_aggregations=False | |
514 | ), | |
515 | "count": thread_count, | |
516 | } | |
517 | ||
518 | # If any bundled aggregations were found, include them. | |
519 | if aggregations: | |
520 | serialized_event["unsigned"].setdefault("m.relations", {}).update( | |
521 | aggregations | |
460 | if RelationTypes.THREAD in aggregations: | |
461 | # Serialize the latest thread event. | |
462 | latest_thread_event = aggregations[RelationTypes.THREAD]["latest_event"] | |
463 | ||
464 | # Don't bundle aggregations as this could recurse forever. | |
465 | aggregations[RelationTypes.THREAD]["latest_event"] = self.serialize_event( | |
466 | latest_thread_event, time_now, bundle_aggregations=None | |
522 | 467 | ) |
523 | 468 | |
524 | async def serialize_events( | |
469 | # Include the bundled aggregations in the event. | |
470 | serialized_event["unsigned"].setdefault("m.relations", {}).update(aggregations) | |
471 | ||
472 | def serialize_events( | |
525 | 473 | self, events: Iterable[Union[JsonDict, EventBase]], time_now: int, **kwargs: Any |
526 | 474 | ) -> List[JsonDict]: |
527 | 475 | """Serializes multiple events. |
534 | 482 | Returns: |
535 | 483 | The list of serialized events |
536 | 484 | """ |
537 | return await yieldable_gather_results( | |
538 | self.serialize_event, events, time_now=time_now, **kwargs | |
539 | ) | |
485 | return [ | |
486 | self.serialize_event(event, time_now=time_now, **kwargs) for event in events | |
487 | ] | |
540 | 488 | |
541 | 489 | |
542 | 490 | def copy_power_levels_contents( |
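The serializer changes above move from computing aggregations per event to receiving them precomputed as a dict keyed by event ID. The injection step could be sketched as (hypothetical helper, simplified from the diff):

```python
# Sketch of the new injection path: look up this event's precomputed bundled
# aggregations and merge them into the serialized event's "unsigned" section.
from typing import Dict, Optional

def inject_bundled(
    event_id: str,
    bundle_aggregations: Optional[Dict[str, dict]],
    serialized_event: dict,
) -> dict:
    if bundle_aggregations:
        aggregations = bundle_aggregations.get(event_id)
        if aggregations:
            unsigned = serialized_event.setdefault("unsigned", {})
            unsigned.setdefault("m.relations", {}).update(aggregations)
    return serialized_event
```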
229 | 229 | # origin, etc etc) |
230 | 230 | assert_params_in_dict(pdu_json, ("type", "depth")) |
231 | 231 | |
232 | # Strip any unauthorized values from "unsigned" if they exist | |
233 | if "unsigned" in pdu_json: | |
234 | _strip_unsigned_values(pdu_json) | |
235 | ||
232 | 236 | depth = pdu_json["depth"] |
233 | 237 | if not isinstance(depth, int): |
234 | 238 | raise SynapseError(400, "Depth %r not an integer" % (depth,), Codes.BAD_JSON)
244 | 248 | |
245 | 249 | event = make_event_from_dict(pdu_json, room_version) |
246 | 250 | return event |
251 | ||
252 | ||
253 | def _strip_unsigned_values(pdu_dict: JsonDict) -> None: | |
254 | """ | |
255 | Strip any unsigned values unless specifically allowed, as defined by the whitelist. | |
256 | ||
257 | pdu_dict: the json dict to strip values from. Note that the dict is
258 | mutated by this function.
259 | """
260 | unsigned = pdu_dict["unsigned"]
261 |
262 | if not isinstance(unsigned, dict):
263 | pdu_dict["unsigned"] = {}
264 | return
265 |
266 | if pdu_dict["type"] == "m.room.member":
267 | whitelist = ["knock_room_state", "invite_room_state", "age"]
268 | else:
269 | whitelist = ["age"]
270 |
271 | filtered_unsigned = {k: v for k, v in unsigned.items() if k in whitelist}
272 | pdu_dict["unsigned"] = filtered_unsigned
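A standalone sketch of the whitelist logic introduced above: membership events keep a few extra `unsigned` keys, while every other event type keeps only `age` (hypothetical pure-function form of the in-place helper):

```python
# Pure-function form of the stripping rule: filter "unsigned" down to the
# whitelist for the given event type.
def filter_unsigned(event_type: str, unsigned: dict) -> dict:
    if event_type == "m.room.member":
        whitelist = ["knock_room_state", "invite_room_state", "age"]
    else:
        whitelist = ["age"]
    return {k: v for k, v in unsigned.items() if k in whitelist}
```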
55 | 55 | from synapse.events import EventBase, builder |
56 | 56 | from synapse.federation.federation_base import FederationBase, event_from_pdu_json |
57 | 57 | from synapse.federation.transport.client import SendJoinResponse |
58 | from synapse.logging.utils import log_function | |
59 | 58 | from synapse.types import JsonDict, get_domain_from_id |
60 | 59 | from synapse.util.async_helpers import concurrently_execute |
61 | 60 | from synapse.util.caches.expiringcache import ExpiringCache |
118 | 117 | # It is a map of (room ID, suggested-only) -> the response of |
119 | 118 | # get_room_hierarchy. |
120 | 119 | self._get_room_hierarchy_cache: ExpiringCache[ |
121 | Tuple[str, bool], Tuple[JsonDict, Sequence[JsonDict], Sequence[str]] | |
120 | Tuple[str, bool], | |
121 | Tuple[JsonDict, Sequence[JsonDict], Sequence[JsonDict], Sequence[str]], | |
122 | 122 | ] = ExpiringCache( |
123 | 123 | cache_name="get_room_hierarchy_cache", |
124 | 124 | clock=self._clock, |
143 | 143 | if destination_dict: |
144 | 144 | self.pdu_destination_tried[event_id] = destination_dict |
145 | 145 | |
146 | @log_function | |
147 | 146 | async def make_query( |
148 | 147 | self, |
149 | 148 | destination: str, |
177 | 176 | ignore_backoff=ignore_backoff, |
178 | 177 | ) |
179 | 178 | |
180 | @log_function | |
181 | 179 | async def query_client_keys( |
182 | 180 | self, destination: str, content: JsonDict, timeout: int |
183 | 181 | ) -> JsonDict: |
195 | 193 | destination, content, timeout |
196 | 194 | ) |
197 | 195 | |
198 | @log_function | |
199 | 196 | async def query_user_devices( |
200 | 197 | self, destination: str, user_id: str, timeout: int = 30000 |
201 | 198 | ) -> JsonDict: |
207 | 204 | destination, user_id, timeout |
208 | 205 | ) |
209 | 206 | |
210 | @log_function | |
211 | 207 | async def claim_client_keys( |
212 | 208 | self, destination: str, content: JsonDict, timeout: int |
213 | 209 | ) -> JsonDict: |
1337 | 1333 | destinations: Iterable[str], |
1338 | 1334 | room_id: str, |
1339 | 1335 | suggested_only: bool, |
1340 | ) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[str]]: | |
1336 | ) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[JsonDict], Sequence[str]]: | |
1341 | 1337 | """ |
1342 | 1338 | Call other servers to get a hierarchy of the given room. |
1343 | 1339 | |
1352 | 1348 | |
1353 | 1349 | Returns: |
1354 | 1350 | A tuple of: |
1355 | The room as a JSON dictionary. | |
1351 | The room as a JSON dictionary, without a "children_state" key. | |
1352 | A list of `m.space.child` state events. | |
1356 | 1353 | A list of children rooms, as JSON dictionaries. |
1357 | 1354 | A list of inaccessible children room IDs. |
1358 | 1355 | |
1367 | 1364 | |
1368 | 1365 | async def send_request( |
1369 | 1366 | destination: str, |
1370 | ) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[str]]: | |
1367 | ) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[JsonDict], Sequence[str]]: | |
1371 | 1368 | try: |
1372 | 1369 | res = await self.transport_layer.get_room_hierarchy( |
1373 | 1370 | destination=destination, |
1396 | 1393 | raise InvalidResponseError("'room' must be a dict") |
1397 | 1394 | |
1398 | 1395 | # Validate children_state of the room. |
1399 | children_state = room.get("children_state", []) | |
1396 | children_state = room.pop("children_state", []) | |
1400 | 1397 | if not isinstance(children_state, Sequence): |
1401 | 1398 | raise InvalidResponseError("'room.children_state' must be a list") |
1402 | 1399 | if any(not isinstance(e, dict) for e in children_state): |
1425 | 1422 | "Invalid room ID in 'inaccessible_children' list" |
1426 | 1423 | ) |
1427 | 1424 | |
1428 | return room, children, inaccessible_children | |
1425 | return room, children_state, children, inaccessible_children | |
1429 | 1426 | |
1430 | 1427 | try: |
1431 | 1428 | result = await self._try_destination_list( |
1473 | 1470 | if event.room_id == room_id: |
1474 | 1471 | children_events.append(event.data) |
1475 | 1472 | children_room_ids.add(event.state_key) |
1476 | # And add them under the requested room. | |
1477 | requested_room["children_state"] = children_events | |
1478 | 1473 | |
1479 | 1474 | # Find the children rooms. |
1480 | 1475 | children = [] |
1484 | 1479 | |
1485 | 1480 | # It isn't clear from the response whether some of the rooms are |
1486 | 1481 | # not accessible. |
1487 | result = (requested_room, children, ()) | |
1482 | result = (requested_room, children_events, children, ()) | |
1488 | 1483 | |
1489 | 1484 | # Cache the result to avoid fetching data over federation every time. |
1490 | 1485 | self._get_room_hierarchy_cache[(room_id, suggested_only)] = result |
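The hunks above widen the hierarchy result from a three-element to a four-element tuple: `children_state` is popped off the room summary and returned (and cached) separately. The reshaping step in isolation, as a sketch with an invented response dict:

```python
from collections.abc import Sequence
from typing import Any, Dict, List, Tuple


def split_hierarchy_response(room: Dict[str, Any]) -> Tuple[Dict[str, Any], List[Any]]:
    """Pop "children_state" out of a room summary and return it separately,
    mirroring the four-element tuple change in the hunks above."""
    children_state = room.pop("children_state", [])
    if not isinstance(children_state, Sequence) or isinstance(children_state, str):
        raise ValueError("'room.children_state' must be a list")
    return room, list(children_state)


room, children_state = split_hierarchy_response(
    {"room_id": "!space:example.org", "children_state": [{"type": "m.space.child"}]}
)
# The returned room no longer carries a "children_state" key; the
# m.space.child events travel as their own tuple element.
```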
57 | 57 | run_in_background, |
58 | 58 | ) |
59 | 59 | from synapse.logging.opentracing import log_kv, start_active_span_from_edu, trace |
60 | from synapse.logging.utils import log_function | |
61 | 60 | from synapse.metrics.background_process_metrics import wrap_as_background_process |
62 | 61 | from synapse.replication.http.federation import ( |
63 | 62 | ReplicationFederationSendEduRestServlet, |
858 | 857 | res = {"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus]} |
859 | 858 | return 200, res |
860 | 859 | |
861 | @log_function | |
862 | 860 | async def on_query_client_keys( |
863 | 861 | self, origin: str, content: Dict[str, str] |
864 | 862 | ) -> Tuple[int, Dict[str, Any]]: |
939 | 937 | |
940 | 938 | return {"events": [ev.get_pdu_json(time_now) for ev in missing_events]} |
941 | 939 | |
942 | @log_function | |
943 | 940 | async def on_openid_userinfo(self, token: str) -> Optional[str]: |
944 | 941 | ts_now_ms = self._clock.time_msec() |
945 | 942 | return await self.store.get_user_id_for_open_id_token(token, ts_now_ms) |
22 | 22 | from typing import Optional, Tuple |
23 | 23 | |
24 | 24 | from synapse.federation.units import Transaction |
25 | from synapse.logging.utils import log_function | |
26 | 25 | from synapse.storage.databases.main import DataStore |
27 | 26 | from synapse.types import JsonDict |
28 | 27 | |
35 | 34 | def __init__(self, datastore: DataStore): |
36 | 35 | self.store = datastore |
37 | 36 | |
38 | @log_function | |
39 | 37 | async def have_responded( |
40 | 38 | self, origin: str, transaction: Transaction |
41 | 39 | ) -> Optional[Tuple[int, JsonDict]]: |
52 | 50 | |
53 | 51 | return await self.store.get_received_txn_response(transaction_id, origin) |
54 | 52 | |
55 | @log_function | |
56 | 53 | async def set_response( |
57 | 54 | self, origin: str, transaction: Transaction, code: int, response: JsonDict |
58 | 55 | ) -> None: |
606 | 606 | self._pending_pdus = [] |
607 | 607 | |
608 | 608 | |
609 | @attr.s(slots=True) | |
609 | @attr.s(slots=True, auto_attribs=True) | |
610 | 610 | class _TransactionQueueManager: |
611 | 611 | """A helper async context manager for pulling stuff off the queues and |
612 | 612 | tracking what was last successfully sent, etc. |
613 | 613 | """ |
614 | 614 | |
615 | queue = attr.ib(type=PerDestinationQueue) | |
616 | ||
617 | _device_stream_id = attr.ib(type=Optional[int], default=None) | |
618 | _device_list_id = attr.ib(type=Optional[int], default=None) | |
619 | _last_stream_ordering = attr.ib(type=Optional[int], default=None) | |
620 | _pdus = attr.ib(type=List[EventBase], factory=list) | |
615 | queue: PerDestinationQueue | |
616 | ||
617 | _device_stream_id: Optional[int] = None | |
618 | _device_list_id: Optional[int] = None | |
619 | _last_stream_ordering: Optional[int] = None | |
620 | _pdus: List[EventBase] = attr.Factory(list) | |
621 | 621 | |
622 | 622 | async def __aenter__(self) -> Tuple[List[EventBase], List[Edu]]: |
623 | 623 | # First we calculate the EDUs we want to send, if any. |
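This hunk is part of a wider migration from explicit `attr.ib(type=...)` declarations to `auto_attribs=True` with plain annotations; both spellings generate the same class. A before/after sketch with a toy class (not taken from the diff):

```python
from typing import List, Optional

import attr


# Old style: each field is an attr.ib() call carrying its type.
@attr.s(slots=True)
class QueueStateOld:
    device_stream_id = attr.ib(type=Optional[int], default=None)
    pdus = attr.ib(type=list, factory=list)


# New style: ordinary annotations, picked up via auto_attribs=True.
@attr.s(slots=True, auto_attribs=True)
class QueueStateNew:
    device_stream_id: Optional[int] = None
    pdus: List[str] = attr.Factory(list)
```

Both classes get equivalent generated `__init__` and `__repr__` behaviour; the new form is simply less repetitive and plays better with type checkers.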
34 | 34 | import synapse.server |
35 | 35 | |
36 | 36 | logger = logging.getLogger(__name__) |
37 | issue_8631_logger = logging.getLogger("synapse.8631_debug") | |
37 | 38 | |
38 | 39 | last_pdu_ts_metric = Gauge( |
39 | 40 | "synapse_federation_last_sent_pdu_time", |
123 | 124 | len(pdus), |
124 | 125 | len(edus), |
125 | 126 | ) |
127 | if issue_8631_logger.isEnabledFor(logging.DEBUG): | |
128 | DEVICE_UPDATE_EDUS = {"m.device_list_update", "m.signing_key_update"} | |
129 | device_list_updates = [ | |
130 | edu.content for edu in edus if edu.edu_type in DEVICE_UPDATE_EDUS | |
131 | ] | |
132 | if device_list_updates: | |
133 | issue_8631_logger.debug( | |
134 | "about to send txn [%s] including device list updates: %s", | |
135 | transaction.transaction_id, | |
136 | device_list_updates, | |
137 | ) | |
126 | 138 | |
127 | 139 | # Actually send the transaction |
128 | 140 |
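The debug logging added above is gated on `isEnabledFor`, so the EDU filtering is only paid for when the dedicated `synapse.8631_debug` logger is actually enabled. A small sketch of the same pattern (the EDU dicts are stand-ins for the real objects):

```python
import logging

issue_8631_logger = logging.getLogger("synapse.8631_debug")

DEVICE_UPDATE_EDUS = {"m.device_list_update", "m.signing_key_update"}


def extract_device_list_updates(edus: list) -> list:
    """Pull out the content of device-list EDUs, as the new debug path does."""
    return [edu["content"] for edu in edus if edu["edu_type"] in DEVICE_UPDATE_EDUS]


def log_device_list_updates(transaction_id: str, edus: list) -> None:
    # Gate on isEnabledFor so the list comprehension is skipped entirely
    # unless the debug logger is switched on.
    if not issue_8631_logger.isEnabledFor(logging.DEBUG):
        return
    updates = extract_device_list_updates(edus)
    if updates:
        issue_8631_logger.debug(
            "about to send txn [%s] including device list updates: %s",
            transaction_id,
            updates,
        )
```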
43 | 43 | from synapse.events import EventBase, make_event_from_dict |
44 | 44 | from synapse.federation.units import Transaction |
45 | 45 | from synapse.http.matrixfederationclient import ByteParser |
46 | from synapse.logging.utils import log_function | |
47 | 46 | from synapse.types import JsonDict |
48 | 47 | |
49 | 48 | logger = logging.getLogger(__name__) |
61 | 60 | self.server_name = hs.hostname |
62 | 61 | self.client = hs.get_federation_http_client() |
63 | 62 | |
64 | @log_function | |
65 | 63 | async def get_room_state_ids( |
66 | 64 | self, destination: str, room_id: str, event_id: str |
67 | 65 | ) -> JsonDict: |
87 | 85 | try_trailing_slash_on_400=True, |
88 | 86 | ) |
89 | 87 | |
90 | @log_function | |
91 | 88 | async def get_event( |
92 | 89 | self, destination: str, event_id: str, timeout: Optional[int] = None |
93 | 90 | ) -> JsonDict: |
110 | 107 | destination, path=path, timeout=timeout, try_trailing_slash_on_400=True |
111 | 108 | ) |
112 | 109 | |
113 | @log_function | |
114 | 110 | async def backfill( |
115 | 111 | self, destination: str, room_id: str, event_tuples: Collection[str], limit: int |
116 | 112 | ) -> Optional[JsonDict]: |
148 | 144 | destination, path=path, args=args, try_trailing_slash_on_400=True |
149 | 145 | ) |
150 | 146 | |
151 | @log_function | |
152 | 147 | async def timestamp_to_event( |
153 | 148 | self, destination: str, room_id: str, timestamp: int, direction: str |
154 | 149 | ) -> Union[JsonDict, List]: |
184 | 179 | |
185 | 180 | return remote_response |
186 | 181 | |
187 | @log_function | |
188 | 182 | async def send_transaction( |
189 | 183 | self, |
190 | 184 | transaction: Transaction, |
233 | 227 | try_trailing_slash_on_400=True, |
234 | 228 | ) |
235 | 229 | |
236 | @log_function | |
237 | 230 | async def make_query( |
238 | 231 | self, |
239 | 232 | destination: str, |
253 | 246 | ignore_backoff=ignore_backoff, |
254 | 247 | ) |
255 | 248 | |
256 | @log_function | |
257 | 249 | async def make_membership_event( |
258 | 250 | self, |
259 | 251 | destination: str, |
316 | 308 | ignore_backoff=ignore_backoff, |
317 | 309 | ) |
318 | 310 | |
319 | @log_function | |
320 | 311 | async def send_join_v1( |
321 | 312 | self, |
322 | 313 | room_version: RoomVersion, |
335 | 326 | max_response_size=MAX_RESPONSE_SIZE_SEND_JOIN, |
336 | 327 | ) |
337 | 328 | |
338 | @log_function | |
339 | 329 | async def send_join_v2( |
340 | 330 | self, |
341 | 331 | room_version: RoomVersion, |
354 | 344 | max_response_size=MAX_RESPONSE_SIZE_SEND_JOIN, |
355 | 345 | ) |
356 | 346 | |
357 | @log_function | |
358 | 347 | async def send_leave_v1( |
359 | 348 | self, destination: str, room_id: str, event_id: str, content: JsonDict |
360 | 349 | ) -> Tuple[int, JsonDict]: |
371 | 360 | ignore_backoff=True, |
372 | 361 | ) |
373 | 362 | |
374 | @log_function | |
375 | 363 | async def send_leave_v2( |
376 | 364 | self, destination: str, room_id: str, event_id: str, content: JsonDict |
377 | 365 | ) -> JsonDict: |
388 | 376 | ignore_backoff=True, |
389 | 377 | ) |
390 | 378 | |
391 | @log_function | |
392 | 379 | async def send_knock_v1( |
393 | 380 | self, |
394 | 381 | destination: str, |
422 | 409 | destination=destination, path=path, data=content |
423 | 410 | ) |
424 | 411 | |
425 | @log_function | |
426 | 412 | async def send_invite_v1( |
427 | 413 | self, destination: str, room_id: str, event_id: str, content: JsonDict |
428 | 414 | ) -> Tuple[int, JsonDict]: |
432 | 418 | destination=destination, path=path, data=content, ignore_backoff=True |
433 | 419 | ) |
434 | 420 | |
435 | @log_function | |
436 | 421 | async def send_invite_v2( |
437 | 422 | self, destination: str, room_id: str, event_id: str, content: JsonDict |
438 | 423 | ) -> JsonDict: |
442 | 427 | destination=destination, path=path, data=content, ignore_backoff=True |
443 | 428 | ) |
444 | 429 | |
445 | @log_function | |
446 | 430 | async def get_public_rooms( |
447 | 431 | self, |
448 | 432 | remote_server: str, |
515 | 499 | |
516 | 500 | return response |
517 | 501 | |
518 | @log_function | |
519 | 502 | async def exchange_third_party_invite( |
520 | 503 | self, destination: str, room_id: str, event_dict: JsonDict |
521 | 504 | ) -> JsonDict: |
525 | 508 | destination=destination, path=path, data=event_dict |
526 | 509 | ) |
527 | 510 | |
528 | @log_function | |
529 | 511 | async def get_event_auth( |
530 | 512 | self, destination: str, room_id: str, event_id: str |
531 | 513 | ) -> JsonDict: |
533 | 515 | |
534 | 516 | return await self.client.get_json(destination=destination, path=path) |
535 | 517 | |
536 | @log_function | |
537 | 518 | async def query_client_keys( |
538 | 519 | self, destination: str, query_content: JsonDict, timeout: int |
539 | 520 | ) -> JsonDict: |
575 | 556 | destination=destination, path=path, data=query_content, timeout=timeout |
576 | 557 | ) |
577 | 558 | |
578 | @log_function | |
579 | 559 | async def query_user_devices( |
580 | 560 | self, destination: str, user_id: str, timeout: int |
581 | 561 | ) -> JsonDict: |
615 | 595 | destination=destination, path=path, timeout=timeout |
616 | 596 | ) |
617 | 597 | |
618 | @log_function | |
619 | 598 | async def claim_client_keys( |
620 | 599 | self, destination: str, query_content: JsonDict, timeout: int |
621 | 600 | ) -> JsonDict: |
654 | 633 | destination=destination, path=path, data=query_content, timeout=timeout |
655 | 634 | ) |
656 | 635 | |
657 | @log_function | |
658 | 636 | async def get_missing_events( |
659 | 637 | self, |
660 | 638 | destination: str, |
679 | 657 | timeout=timeout, |
680 | 658 | ) |
681 | 659 | |
682 | @log_function | |
683 | 660 | async def get_group_profile( |
684 | 661 | self, destination: str, group_id: str, requester_user_id: str |
685 | 662 | ) -> JsonDict: |
693 | 670 | ignore_backoff=True, |
694 | 671 | ) |
695 | 672 | |
696 | @log_function | |
697 | 673 | async def update_group_profile( |
698 | 674 | self, destination: str, group_id: str, requester_user_id: str, content: JsonDict |
699 | 675 | ) -> JsonDict: |
715 | 691 | ignore_backoff=True, |
716 | 692 | ) |
717 | 693 | |
718 | @log_function | |
719 | 694 | async def get_group_summary( |
720 | 695 | self, destination: str, group_id: str, requester_user_id: str |
721 | 696 | ) -> JsonDict: |
729 | 704 | ignore_backoff=True, |
730 | 705 | ) |
731 | 706 | |
732 | @log_function | |
733 | 707 | async def get_rooms_in_group( |
734 | 708 | self, destination: str, group_id: str, requester_user_id: str |
735 | 709 | ) -> JsonDict: |
797 | 771 | ignore_backoff=True, |
798 | 772 | ) |
799 | 773 | |
800 | @log_function | |
801 | 774 | async def get_users_in_group( |
802 | 775 | self, destination: str, group_id: str, requester_user_id: str |
803 | 776 | ) -> JsonDict: |
811 | 784 | ignore_backoff=True, |
812 | 785 | ) |
813 | 786 | |
814 | @log_function | |
815 | 787 | async def get_invited_users_in_group( |
816 | 788 | self, destination: str, group_id: str, requester_user_id: str |
817 | 789 | ) -> JsonDict: |
825 | 797 | ignore_backoff=True, |
826 | 798 | ) |
827 | 799 | |
828 | @log_function | |
829 | 800 | async def accept_group_invite( |
830 | 801 | self, destination: str, group_id: str, user_id: str, content: JsonDict |
831 | 802 | ) -> JsonDict: |
836 | 807 | destination=destination, path=path, data=content, ignore_backoff=True |
837 | 808 | ) |
838 | 809 | |
839 | @log_function | |
840 | 810 | def join_group( |
841 | 811 | self, destination: str, group_id: str, user_id: str, content: JsonDict |
842 | 812 | ) -> Awaitable[JsonDict]: |
847 | 817 | destination=destination, path=path, data=content, ignore_backoff=True |
848 | 818 | ) |
849 | 819 | |
850 | @log_function | |
851 | 820 | async def invite_to_group( |
852 | 821 | self, |
853 | 822 | destination: str, |
867 | 836 | ignore_backoff=True, |
868 | 837 | ) |
869 | 838 | |
870 | @log_function | |
871 | 839 | async def invite_to_group_notification( |
872 | 840 | self, destination: str, group_id: str, user_id: str, content: JsonDict |
873 | 841 | ) -> JsonDict: |
881 | 849 | destination=destination, path=path, data=content, ignore_backoff=True |
882 | 850 | ) |
883 | 851 | |
884 | @log_function | |
885 | 852 | async def remove_user_from_group( |
886 | 853 | self, |
887 | 854 | destination: str, |
901 | 868 | ignore_backoff=True, |
902 | 869 | ) |
903 | 870 | |
904 | @log_function | |
905 | 871 | async def remove_user_from_group_notification( |
906 | 872 | self, destination: str, group_id: str, user_id: str, content: JsonDict |
907 | 873 | ) -> JsonDict: |
915 | 881 | destination=destination, path=path, data=content, ignore_backoff=True |
916 | 882 | ) |
917 | 883 | |
918 | @log_function | |
919 | 884 | async def renew_group_attestation( |
920 | 885 | self, destination: str, group_id: str, user_id: str, content: JsonDict |
921 | 886 | ) -> JsonDict: |
929 | 894 | destination=destination, path=path, data=content, ignore_backoff=True |
930 | 895 | ) |
931 | 896 | |
932 | @log_function | |
933 | 897 | async def update_group_summary_room( |
934 | 898 | self, |
935 | 899 | destination: str, |
958 | 922 | ignore_backoff=True, |
959 | 923 | ) |
960 | 924 | |
961 | @log_function | |
962 | 925 | async def delete_group_summary_room( |
963 | 926 | self, |
964 | 927 | destination: str, |
985 | 948 | ignore_backoff=True, |
986 | 949 | ) |
987 | 950 | |
988 | @log_function | |
989 | 951 | async def get_group_categories( |
990 | 952 | self, destination: str, group_id: str, requester_user_id: str |
991 | 953 | ) -> JsonDict: |
999 | 961 | ignore_backoff=True, |
1000 | 962 | ) |
1001 | 963 | |
1002 | @log_function | |
1003 | 964 | async def get_group_category( |
1004 | 965 | self, destination: str, group_id: str, requester_user_id: str, category_id: str |
1005 | 966 | ) -> JsonDict: |
1013 | 974 | ignore_backoff=True, |
1014 | 975 | ) |
1015 | 976 | |
1016 | @log_function | |
1017 | 977 | async def update_group_category( |
1018 | 978 | self, |
1019 | 979 | destination: str, |
1033 | 993 | ignore_backoff=True, |
1034 | 994 | ) |
1035 | 995 | |
1036 | @log_function | |
1037 | 996 | async def delete_group_category( |
1038 | 997 | self, destination: str, group_id: str, requester_user_id: str, category_id: str |
1039 | 998 | ) -> JsonDict: |
1047 | 1006 | ignore_backoff=True, |
1048 | 1007 | ) |
1049 | 1008 | |
1050 | @log_function | |
1051 | 1009 | async def get_group_roles( |
1052 | 1010 | self, destination: str, group_id: str, requester_user_id: str |
1053 | 1011 | ) -> JsonDict: |
1061 | 1019 | ignore_backoff=True, |
1062 | 1020 | ) |
1063 | 1021 | |
1064 | @log_function | |
1065 | 1022 | async def get_group_role( |
1066 | 1023 | self, destination: str, group_id: str, requester_user_id: str, role_id: str |
1067 | 1024 | ) -> JsonDict: |
1075 | 1032 | ignore_backoff=True, |
1076 | 1033 | ) |
1077 | 1034 | |
1078 | @log_function | |
1079 | 1035 | async def update_group_role( |
1080 | 1036 | self, |
1081 | 1037 | destination: str, |
1095 | 1051 | ignore_backoff=True, |
1096 | 1052 | ) |
1097 | 1053 | |
1098 | @log_function | |
1099 | 1054 | async def delete_group_role( |
1100 | 1055 | self, destination: str, group_id: str, requester_user_id: str, role_id: str |
1101 | 1056 | ) -> JsonDict: |
1109 | 1064 | ignore_backoff=True, |
1110 | 1065 | ) |
1111 | 1066 | |
1112 | @log_function | |
1113 | 1067 | async def update_group_summary_user( |
1114 | 1068 | self, |
1115 | 1069 | destination: str, |
1135 | 1089 | ignore_backoff=True, |
1136 | 1090 | ) |
1137 | 1091 | |
1138 | @log_function | |
1139 | 1092 | async def set_group_join_policy( |
1140 | 1093 | self, destination: str, group_id: str, requester_user_id: str, content: JsonDict |
1141 | 1094 | ) -> JsonDict: |
1150 | 1103 | ignore_backoff=True, |
1151 | 1104 | ) |
1152 | 1105 | |
1153 | @log_function | |
1154 | 1106 | async def delete_group_summary_user( |
1155 | 1107 | self, |
1156 | 1108 | destination: str, |
35 | 35 | from synapse.util.versionstring import get_version_string |
36 | 36 | |
37 | 37 | logger = logging.getLogger(__name__) |
38 | issue_8631_logger = logging.getLogger("synapse.8631_debug") | |
38 | 39 | |
39 | 40 | |
40 | 41 | class BaseFederationServerServlet(BaseFederationServlet): |
93 | 94 | len(transaction_data.get("pdus", [])), |
94 | 95 | len(transaction_data.get("edus", [])), |
95 | 96 | ) |
97 | ||
98 | if issue_8631_logger.isEnabledFor(logging.DEBUG): | |
99 | DEVICE_UPDATE_EDUS = {"m.device_list_update", "m.signing_key_update"} | |
100 | device_list_updates = [ | |
101 | edu.content | |
102 | for edu in transaction_data.get("edus", []) | |
103 | if edu.edu_type in DEVICE_UPDATE_EDUS | |
104 | ] | |
105 | if device_list_updates: | |
106 | issue_8631_logger.debug( | |
107 | "received transaction [%s] including device list updates: %s", | |
108 | transaction_id, | |
109 | device_list_updates, | |
110 | ) | |
96 | 111 | |
97 | 112 | except Exception as e: |
98 | 113 | logger.exception(e) |
76 | 76 | async def add_account_data_for_user( |
77 | 77 | self, user_id: str, account_data_type: str, content: JsonDict |
78 | 78 | ) -> int: |
79 | """Add some account_data to a room for a user. | |
79 | """Add some global account_data for a user. | |
80 | 80 | |
81 | 81 | Args: |
82 | 82 | user_id: The user to add account data for.
54 | 54 | |
55 | 55 | async def get_user(self, user: UserID) -> Optional[JsonDict]: |
56 | 56 | """Function to get user details""" |
57 | ret = await self.store.get_user_by_id(user.to_string()) | |
58 | if ret: | |
59 | profile = await self.store.get_profileinfo(user.localpart) | |
60 | threepids = await self.store.user_get_threepids(user.to_string()) | |
61 | external_ids = [ | |
62 | ({"auth_provider": auth_provider, "external_id": external_id}) | |
63 | for auth_provider, external_id in await self.store.get_external_ids_by_user( | |
64 | user.to_string() | |
65 | ) | |
66 | ] | |
67 | ret["displayname"] = profile.display_name | |
68 | ret["avatar_url"] = profile.avatar_url | |
69 | ret["threepids"] = threepids | |
70 | ret["external_ids"] = external_ids | |
71 | return ret | |
57 | user_info_dict = await self.store.get_user_by_id(user.to_string()) | |
58 | if user_info_dict is None: | |
59 | return None | |
60 | ||
61 | # Restrict returned information to a known set of fields. This prevents additional | |
62 | # fields added to get_user_by_id from modifying Synapse's external API surface. | |
63 | user_info_to_return = { | |
64 | "name", | |
65 | "admin", | |
66 | "deactivated", | |
67 | "shadow_banned", | |
68 | "creation_ts", | |
69 | "appservice_id", | |
70 | "consent_server_notice_sent", | |
71 | "consent_version", | |
72 | "user_type", | |
73 | "is_guest", | |
74 | } | |
75 | ||
76 | # Restrict returned keys to a known set. | |
77 | user_info_dict = { | |
78 | key: value | |
79 | for key, value in user_info_dict.items() | |
80 | if key in user_info_to_return | |
81 | } | |
82 | ||
83 | # Add additional user metadata | |
84 | profile = await self.store.get_profileinfo(user.localpart) | |
85 | threepids = await self.store.user_get_threepids(user.to_string()) | |
86 | external_ids = [ | |
87 | ({"auth_provider": auth_provider, "external_id": external_id}) | |
88 | for auth_provider, external_id in await self.store.get_external_ids_by_user( | |
89 | user.to_string() | |
90 | ) | |
91 | ] | |
92 | user_info_dict["displayname"] = profile.display_name | |
93 | user_info_dict["avatar_url"] = profile.avatar_url | |
94 | user_info_dict["threepids"] = threepids | |
95 | user_info_dict["external_ids"] = external_ids | |
96 | ||
97 | return user_info_dict | |
72 | 98 | |
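The rewritten `get_user` above restricts the returned dict to a fixed key set so that new columns added to `get_user_by_id` cannot silently leak into the admin API. The key-filtering step in isolation (field names are taken from the hunk; the sample row is invented):

```python
# Stable set of fields the admin API is allowed to return, per the hunk above.
USER_INFO_FIELDS = {
    "name",
    "admin",
    "deactivated",
    "shadow_banned",
    "creation_ts",
    "appservice_id",
    "consent_server_notice_sent",
    "consent_version",
    "user_type",
    "is_guest",
}


def restrict_user_info(row: dict) -> dict:
    """Drop any key not in the known whitelist."""
    return {k: v for k, v in row.items() if k in USER_INFO_FIELDS}


# A hypothetical storage row with an extra internal column:
row = {"name": "@alice:example.org", "admin": 0, "internal_flag": True}
assert restrict_user_info(row) == {"name": "@alice:example.org", "admin": 0}
```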
73 | 99 | async def export_user_data(self, user_id: str, writer: "ExfiltrationWriter") -> Any: |
74 | 100 | """Write all data we have on the user to the given writer. |
167 | 167 | } |
168 | 168 | |
169 | 169 | |
170 | @attr.s(slots=True) | |
170 | @attr.s(slots=True, auto_attribs=True) | |
171 | 171 | class SsoLoginExtraAttributes: |
172 | 172 | """Data we track about SAML2 sessions""" |
173 | 173 | |
174 | 174 | # time the session was created, in milliseconds |
175 | creation_time = attr.ib(type=int) | |
176 | extra_attributes = attr.ib(type=JsonDict) | |
177 | ||
178 | ||
179 | @attr.s(slots=True, frozen=True) | |
175 | creation_time: int | |
176 | extra_attributes: JsonDict | |
177 | ||
178 | ||
179 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
180 | 180 | class LoginTokenAttributes: |
181 | 181 | """Data we store in a short-term login token""" |
182 | 182 | |
183 | user_id = attr.ib(type=str) | |
184 | ||
185 | auth_provider_id = attr.ib(type=str) | |
183 | user_id: str | |
184 | ||
185 | auth_provider_id: str | |
186 | 186 | """The SSO Identity Provider that the user authenticated with, to get this token.""" |
187 | 187 | |
188 | auth_provider_session_id = attr.ib(type=Optional[str]) | |
188 | auth_provider_session_id: Optional[str] | |
189 | 189 | """The session ID advertised by the SSO Identity Provider.""" |
190 | 190 | |
191 | 191 | |
2280 | 2280 | # call all of the on_logged_out callbacks |
2281 | 2281 | for callback in self.on_logged_out_callbacks: |
2282 | 2282 | try: |
2283 | callback(user_id, device_id, access_token) | |
2283 | await callback(user_id, device_id, access_token) | |
2284 | 2284 | except Exception as e: |
2285 | 2285 | logger.warning("Failed to run module API callback %s: %s", callback, e) |
2286 | 2286 | continue |
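The one-word fix above matters because calling an `async def` callback without `await` only creates a coroutine object; its body never runs (and CPython emits a "coroutine was never awaited" warning). A minimal reproduction with illustrative names:

```python
import asyncio

ran = []


async def on_logged_out(user_id: str) -> None:
    ran.append(user_id)


async def buggy_logout() -> None:
    # The bug the hunk above fixes: without "await" this only creates a
    # coroutine object, so the callback body never executes.
    on_logged_out("@alice:example.org")


async def fixed_logout() -> None:
    await on_logged_out("@bob:example.org")


asyncio.run(buggy_logout())
asyncio.run(fixed_logout())
# Only the awaited callback actually ran.
assert ran == ["@bob:example.org"]
```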
947 | 947 | devices = [] |
948 | 948 | ignore_devices = True |
949 | 949 | else: |
950 | prev_stream_id = await self.store.get_device_list_last_stream_id_for_remote( | |
951 | user_id | |
952 | ) | |
950 | 953 | cached_devices = await self.store.get_cached_devices_for_user(user_id) |
951 | if cached_devices == {d["device_id"]: d for d in devices}: | |
954 | ||
955 | # To ensure that a user with no devices is cached, we skip the resync only | |
956 | # if we have a stream_id from previously writing a cache entry. | |
957 | if prev_stream_id is not None and cached_devices == { | |
958 | d["device_id"]: d for d in devices | |
959 | }: | |
952 | 960 | logger.info(
953 | 961 | "Skipping device list resync for %s, as our cache matches already", |
954 | 962 | user_id, |
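The extra `prev_stream_id is not None` condition above ensures a user with zero devices still gets a cache entry written once: the resync is only skipped when a stream id proves we cached before, not merely because both sides are empty. The decision in isolation, as a sketch:

```python
from typing import List, Optional


def should_skip_resync(
    prev_stream_id: Optional[int], cached_devices: dict, devices: List[dict]
) -> bool:
    # Skip only if a cache entry was written before (stream id present)
    # AND the cached devices match what the remote just sent.
    return prev_stream_id is not None and cached_devices == {
        d["device_id"]: d for d in devices
    }


# Never-cached user with no devices: do NOT skip, so the empty list gets cached.
assert not should_skip_resync(None, {}, [])
# Previously cached and unchanged: safe to skip.
assert should_skip_resync(5, {"D1": {"device_id": "D1"}}, [{"device_id": "D1"}])
```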
1320 | 1320 | return old_key == new_key_copy |
1321 | 1321 | |
1322 | 1322 | |
1323 | @attr.s(slots=True) | |
1323 | @attr.s(slots=True, auto_attribs=True) | |
1324 | 1324 | class SignatureListItem: |
1325 | 1325 | """An item in the signature list as used by upload_signatures_for_device_keys.""" |
1326 | 1326 | |
1327 | signing_key_id = attr.ib(type=str) | |
1328 | target_user_id = attr.ib(type=str) | |
1329 | target_device_id = attr.ib(type=str) | |
1330 | signature = attr.ib(type=JsonDict) | |
1327 | signing_key_id: str | |
1328 | target_user_id: str | |
1329 | target_device_id: str | |
1330 | signature: JsonDict | |
1331 | 1331 | |
1332 | 1332 | |
1333 | 1333 | class SigningKeyEduUpdater: |
19 | 19 | from synapse.api.errors import AuthError, SynapseError |
20 | 20 | from synapse.events import EventBase |
21 | 21 | from synapse.handlers.presence import format_user_presence_state |
22 | from synapse.logging.utils import log_function | |
23 | 22 | from synapse.streams.config import PaginationConfig |
24 | 23 | from synapse.types import JsonDict, UserID |
25 | 24 | from synapse.visibility import filter_events_for_client |
42 | 41 | self._server_notices_sender = hs.get_server_notices_sender() |
43 | 42 | self._event_serializer = hs.get_event_client_serializer() |
44 | 43 | |
45 | @log_function | |
46 | 44 | async def get_stream( |
47 | 45 | self, |
48 | 46 | auth_user_id: str, |
118 | 116 | |
119 | 117 | events.extend(to_add) |
120 | 118 | |
121 | chunks = await self._event_serializer.serialize_events( | |
119 | chunks = self._event_serializer.serialize_events( | |
122 | 120 | events, |
123 | 121 | time_now, |
124 | 122 | as_client_event=as_client_event, |
50 | 50 | preserve_fn, |
51 | 51 | run_in_background, |
52 | 52 | ) |
53 | from synapse.logging.utils import log_function | |
54 | 53 | from synapse.replication.http.federation import ( |
55 | 54 | ReplicationCleanRoomRestServlet, |
56 | 55 | ReplicationStoreRoomOnOutlierMembershipRestServlet, |
555 | 554 | |
556 | 555 | run_in_background(self._handle_queued_pdus, room_queue) |
557 | 556 | |
558 | @log_function | |
559 | 557 | async def do_knock( |
560 | 558 | self, |
561 | 559 | target_hosts: List[str], |
927 | 925 | |
928 | 926 | return event |
929 | 927 | |
930 | @log_function | |
931 | 928 | async def on_make_knock_request( |
932 | 929 | self, origin: str, room_id: str, user_id: str |
933 | 930 | ) -> EventBase: |
1038 | 1035 | else: |
1039 | 1036 | return [] |
1040 | 1037 | |
1041 | @log_function | |
1042 | 1038 | async def on_backfill_request( |
1043 | 1039 | self, origin: str, room_id: str, pdu_list: List[str], limit: int |
1044 | 1040 | ) -> List[EventBase]: |
1055 | 1051 | |
1056 | 1052 | return events |
1057 | 1053 | |
1058 | @log_function | |
1059 | 1054 | async def get_persisted_pdu( |
1060 | 1055 | self, origin: str, event_id: str |
1061 | 1056 | ) -> Optional[EventBase]: |
1117 | 1112 | |
1118 | 1113 | return missing_events |
1119 | 1114 | |
1120 | @log_function | |
1121 | 1115 | async def exchange_third_party_invite( |
1122 | 1116 | self, sender_user_id: str, target_user_id: str, room_id: str, signed: JsonDict |
1123 | 1117 | ) -> None: |
55 | 55 | from synapse.events.snapshot import EventContext |
56 | 56 | from synapse.federation.federation_client import InvalidResponseError |
57 | 57 | from synapse.logging.context import nested_logging_context, run_in_background |
58 | from synapse.logging.utils import log_function | |
59 | 58 | from synapse.metrics.background_process_metrics import run_as_background_process |
60 | 59 | from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet |
61 | 60 | from synapse.replication.http.federation import ( |
274 | 273 | |
275 | 274 | await self._process_received_pdu(origin, pdu, state=None) |
276 | 275 | |
277 | @log_function | |
278 | 276 | async def on_send_membership_event( |
279 | 277 | self, origin: str, event: EventBase |
280 | 278 | ) -> Tuple[EventBase, EventContext]: |
471 | 469 | |
472 | 470 | return await self.persist_events_and_notify(room_id, [(event, context)]) |
473 | 471 | |
474 | @log_function | |
475 | 472 | async def backfill( |
476 | 473 | self, dest: str, room_id: str, limit: int, extremities: Collection[str] |
477 | 474 | ) -> None: |
169 | 169 | d["inviter"] = event.sender |
170 | 170 | |
171 | 171 | invite_event = await self.store.get_event(event.event_id) |
172 | d["invite"] = await self._event_serializer.serialize_event( | |
172 | d["invite"] = self._event_serializer.serialize_event( | |
173 | 173 | invite_event, |
174 | 174 | time_now, |
175 | 175 | as_client_event=as_client_event, |
221 | 221 | |
222 | 222 | d["messages"] = { |
223 | 223 | "chunk": ( |
224 | await self._event_serializer.serialize_events( | |
224 | self._event_serializer.serialize_events( | |
225 | 225 | messages, |
226 | 226 | time_now=time_now, |
227 | 227 | as_client_event=as_client_event, |
231 | 231 | "end": await end_token.to_string(self.store), |
232 | 232 | } |
233 | 233 | |
234 | d["state"] = await self._event_serializer.serialize_events( | |
234 | d["state"] = self._event_serializer.serialize_events( | |
235 | 235 | current_state.values(), |
236 | 236 | time_now=time_now, |
237 | 237 | as_client_event=as_client_event, |
375 | 375 | "messages": { |
376 | 376 | "chunk": ( |
377 | 377 | # Don't bundle aggregations as this is a deprecated API. |
378 | await self._event_serializer.serialize_events(messages, time_now) | |
378 | self._event_serializer.serialize_events(messages, time_now) | |
379 | 379 | ), |
380 | 380 | "start": await start_token.to_string(self.store), |
381 | 381 | "end": await end_token.to_string(self.store), |
382 | 382 | }, |
383 | 383 | "state": ( |
384 | 384 | # Don't bundle aggregations as this is a deprecated API. |
385 | await self._event_serializer.serialize_events( | |
386 | room_state.values(), time_now | |
387 | ) | |
385 | self._event_serializer.serialize_events(room_state.values(), time_now) | |
388 | 386 | ), |
389 | 387 | "presence": [], |
390 | 388 | "receipts": [], |
403 | 401 | # TODO: These concurrently |
404 | 402 | time_now = self.clock.time_msec() |
405 | 403 | # Don't bundle aggregations as this is a deprecated API. |
406 | state = await self._event_serializer.serialize_events( | |
404 | state = self._event_serializer.serialize_events( | |
407 | 405 | current_state.values(), time_now |
408 | 406 | ) |
409 | 407 | |
479 | 477 | "messages": { |
480 | 478 | "chunk": ( |
481 | 479 | # Don't bundle aggregations as this is a deprecated API. |
482 | await self._event_serializer.serialize_events(messages, time_now) | |
480 | self._event_serializer.serialize_events(messages, time_now) | |
483 | 481 | ), |
484 | 482 | "start": await start_token.to_string(self.store), |
485 | 483 | "end": await end_token.to_string(self.store), |
245 | 245 | room_state = room_state_events[membership_event_id] |
246 | 246 | |
247 | 247 | now = self.clock.time_msec() |
248 | events = await self._event_serializer.serialize_events(room_state.values(), now) | |
248 | events = self._event_serializer.serialize_events(room_state.values(), now) | |
249 | 249 | return events |
250 | 250 | |
251 | 251 | async def get_joined_members(self, requester: Requester, room_id: str) -> dict: |
536 | 536 | state_dict = await self.store.get_events(list(state_ids.values())) |
537 | 537 | state = state_dict.values() |
538 | 538 | |
539 | aggregations = await self.store.get_bundled_aggregations(events, user_id) | |
540 | ||
539 | 541 | time_now = self.clock.time_msec() |
540 | 542 | |
541 | 543 | chunk = { |
542 | 544 | "chunk": ( |
543 | await self._event_serializer.serialize_events( | |
545 | self._event_serializer.serialize_events( | |
544 | 546 | events, |
545 | 547 | time_now, |
546 | bundle_aggregations=True, | |
548 | bundle_aggregations=aggregations, | |
547 | 549 | as_client_event=as_client_event, |
548 | 550 | ) |
549 | 551 | ), |
552 | 554 | } |
553 | 555 | |
554 | 556 | if state: |
555 | chunk["state"] = await self._event_serializer.serialize_events( | |
557 | chunk["state"] = self._event_serializer.serialize_events( | |
556 | 558 | state, time_now, as_client_event=as_client_event |
557 | 559 | ) |
558 | 560 |
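The hunks above all follow one refactoring shape: the serializer stops doing its own awaits, the bundled aggregations are fetched once up front with a single async store call, and the result is passed into a now-synchronous `serialize_events`. A toy sketch of that shape (all names and payloads here are hypothetical stand-ins, not Synapse's actual API):

```python
import asyncio
from typing import Dict, List

async def get_bundled_aggregations(event_ids: List[str]) -> Dict[str, dict]:
    # Stand-in for the single async store call: returns event ID ->
    # aggregation payload, for only the events that have aggregations.
    await asyncio.sleep(0)  # pretend this is a database round-trip
    return {eid: {"m.annotation": {"count": 1}} for eid in event_ids if "edited" in eid}

def serialize_event(event_id: str, aggregations: Dict[str, dict]) -> dict:
    # Fully synchronous: all I/O happened up front, so serialization can
    # run in a plain list comprehension instead of awaiting per event.
    out = {"event_id": event_id}
    bundle = aggregations.get(event_id)
    if bundle is not None:
        out["unsigned"] = {"m.relations": bundle}
    return out

async def serialize_chunk(event_ids: List[str]) -> List[dict]:
    aggregations = await get_bundled_aggregations(event_ids)  # one await total
    return [serialize_event(eid, aggregations) for eid in event_ids]
```

The payoff is one batched lookup instead of an await inside every serialization call site.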
54 | 54 | from synapse.appservice import ApplicationService |
55 | 55 | from synapse.events.presence_router import PresenceRouter |
56 | 56 | from synapse.logging.context import run_in_background |
57 | from synapse.logging.utils import log_function | |
58 | 57 | from synapse.metrics import LaterGauge |
59 | 58 | from synapse.metrics.background_process_metrics import run_as_background_process |
60 | 59 | from synapse.replication.http.presence import ( |
1541 | 1540 | self.clock = hs.get_clock() |
1542 | 1541 | self.store = hs.get_datastore() |
1543 | 1542 | |
1544 | @log_function | |
1545 | 1543 | async def get_new_events( |
1546 | 1544 | self, |
1547 | 1545 | user: UserID, |
392 | 392 | user_id = requester.user.to_string() |
393 | 393 | |
394 | 394 | if not await self.spam_checker.user_may_create_room(user_id): |
395 | raise SynapseError(403, "You are not permitted to create rooms") | |
395 | raise SynapseError( | |
396 | 403, "You are not permitted to create rooms", Codes.FORBIDDEN | |
397 | ) | |
396 | 398 | |
397 | 399 | creation_content: JsonDict = { |
398 | 400 | "room_version": new_room_version.identifier, |
684 | 686 | invite_3pid_list, |
685 | 687 | ) |
686 | 688 | ): |
687 | raise SynapseError(403, "You are not permitted to create rooms") | |
689 | raise SynapseError( | |
690 | 403, "You are not permitted to create rooms", Codes.FORBIDDEN | |
691 | ) | |
688 | 692 | |
689 | 693 | if ratelimit: |
690 | 694 | await self.request_ratelimiter.ratelimit(requester) |
1175 | 1179 | # there's something there but not see the content, so use the event that's in |
1176 | 1180 | # `filtered` rather than the event we retrieved from the datastore. |
1177 | 1181 | results["event"] = filtered[0] |
1182 | ||
1183 | # Fetch the aggregations. | |
1184 | aggregations = await self.store.get_bundled_aggregations( | |
1185 | [results["event"]], user.to_string() | |
1186 | ) | |
1187 | aggregations.update( | |
1188 | await self.store.get_bundled_aggregations( | |
1189 | results["events_before"], user.to_string() | |
1190 | ) | |
1191 | ) | |
1192 | aggregations.update( | |
1193 | await self.store.get_bundled_aggregations( | |
1194 | results["events_after"], user.to_string() | |
1195 | ) | |
1196 | ) | |
1197 | results["aggregations"] = aggregations | |
1178 | 1198 | |
1179 | 1199 | if results["events_after"]: |
1180 | 1200 | last_event_id = results["events_after"][-1].event_id |
152 | 152 | rooms_result: List[JsonDict] = [] |
153 | 153 | events_result: List[JsonDict] = [] |
154 | 154 | |
155 | if max_rooms_per_space is None or max_rooms_per_space > MAX_ROOMS_PER_SPACE: | |
156 | max_rooms_per_space = MAX_ROOMS_PER_SPACE | |
157 | ||
155 | 158 | while room_queue and len(rooms_result) < MAX_ROOMS: |
156 | 159 | queue_entry = room_queue.popleft() |
157 | 160 | room_id = queue_entry.room_id |
166 | 169 | # The client-specified max_rooms_per_space limit doesn't apply to the |
167 | 170 | # room_id specified in the request, so we ignore it if this is the |
168 | 171 | # first room we are processing. |
169 | max_children = max_rooms_per_space if processed_rooms else None | |
172 | max_children = max_rooms_per_space if processed_rooms else MAX_ROOMS | |
170 | 173 | |
171 | 174 | if is_in_room: |
172 | 175 | room_entry = await self._summarize_local_room( |
208 | 211 | # Before returning to the client, remove the allowed_room_ids |
209 | 212 | # and allowed_spaces keys. |
210 | 213 | room.pop("allowed_room_ids", None) |
211 | room.pop("allowed_spaces", None) | |
214 | room.pop("allowed_spaces", None) # historical | |
212 | 215 | |
213 | 216 | rooms_result.append(room) |
214 | 217 | events.extend(room_entry.children_state_events) |
394 | 397 | None, |
395 | 398 | room_id, |
396 | 399 | suggested_only, |
397 | # TODO Handle max children. | |
400 | # Do not limit the maximum children. | |
398 | 401 | max_children=None, |
399 | 402 | ) |
400 | 403 | |
524 | 527 | rooms_result: List[JsonDict] = [] |
525 | 528 | events_result: List[JsonDict] = [] |
526 | 529 | |
530 | # Set a limit on the number of rooms to return. | |
531 | if max_rooms_per_space is None or max_rooms_per_space > MAX_ROOMS_PER_SPACE: | |
532 | max_rooms_per_space = MAX_ROOMS_PER_SPACE | |
533 | ||
527 | 534 | while room_queue and len(rooms_result) < MAX_ROOMS: |
528 | 535 | room_id = room_queue.popleft() |
529 | 536 | if room_id in processed_rooms: |
582 | 589 | |
583 | 590 | # Iterate through each child and potentially add it, but not its children, |
584 | 591 | # to the response. |
585 | for child_room in root_room_entry.children_state_events: | |
592 | for child_room in itertools.islice( | |
593 | root_room_entry.children_state_events, MAX_ROOMS_PER_SPACE | |
594 | ): | |
586 | 595 | room_id = child_room.get("state_key") |
587 | 596 | assert isinstance(room_id, str) |
588 | 597 | # If the room is unknown, skip it. |
632 | 641 | suggested_only: True if only suggested children should be returned. |
633 | 642 | Otherwise, all children are returned. |
634 | 643 | max_children: |
635 | The maximum number of children rooms to include. This is capped | |
636 | to a server-set limit. | |
644 | The maximum number of children rooms to include. A value of None | |
645 | means no limit. | |
637 | 646 | |
638 | 647 | Returns: |
639 | 648 | A room entry if the room should be returned. None, otherwise. |
655 | 664 | # we only care about suggested children |
656 | 665 | child_events = filter(_is_suggested_child_event, child_events) |
657 | 666 | |
658 | if max_children is None or max_children > MAX_ROOMS_PER_SPACE: | |
659 | max_children = MAX_ROOMS_PER_SPACE | |
667 | # TODO max_children is legacy code for the /spaces endpoint. | |
668 | if max_children is not None: | |
669 | child_iter: Iterable[EventBase] = itertools.islice( | |
670 | child_events, max_children | |
671 | ) | |
672 | else: | |
673 | child_iter = child_events | |
660 | 674 | |
661 | 675 | stripped_events: List[JsonDict] = [ |
662 | 676 | { |
667 | 681 | "sender": e.sender, |
668 | 682 | "origin_server_ts": e.origin_server_ts, |
669 | 683 | } |
670 | for e in itertools.islice(child_events, max_children) | |
684 | for e in child_iter | |
671 | 685 | ] |
672 | 686 | return _RoomEntry(room_id, room_entry, stripped_events) |
673 | 687 | |
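The `child_iter` change above switches between a capped and an uncapped view of the same iterable without materialising it. A minimal standalone version of that pattern (function name is illustrative):

```python
import itertools
from typing import Iterable, List, Optional

def limit_children(children: Iterable[int], max_children: Optional[int]) -> List[int]:
    # islice lazily stops after max_children items, so an over-long (or
    # even unbounded) iterable of child events is never fully consumed.
    if max_children is not None:
        child_iter: Iterable[int] = itertools.islice(children, max_children)
    else:
        child_iter = children
    return [c for c in child_iter]
```

Because `islice` is lazy, this is safe even against `itertools.count()`-style endless inputs.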
765 | 779 | try: |
766 | 780 | ( |
767 | 781 | room_response, |
782 | children_state_events, | |
768 | 783 | children, |
769 | 784 | inaccessible_children, |
770 | 785 | ) = await self._federation_client.get_room_hierarchy( |
789 | 804 | } |
790 | 805 | |
791 | 806 | return ( |
792 | _RoomEntry(room_id, room_response, room_response.pop("children_state", ())), | |
807 | _RoomEntry(room_id, room_response, children_state_events), | |
793 | 808 | children_by_room_id, |
794 | 809 | set(inaccessible_children), |
795 | 810 | ) |
987 | 1002 | "canonical_alias": stats["canonical_alias"], |
988 | 1003 | "num_joined_members": stats["joined_members"], |
989 | 1004 | "avatar_url": stats["avatar"], |
1005 | # plural join_rules is a documentation error but kept for historical | |
1006 | # purposes. Should match /publicRooms. | |
990 | 1007 | "join_rules": stats["join_rules"], |
1008 | "join_rule": stats["join_rules"], | |
991 | 1009 | "world_readable": ( |
992 | 1010 | stats["history_visibility"] == HistoryVisibility.WORLD_READABLE |
993 | 1011 | ), |
994 | 1012 | "guest_can_join": stats["guest_access"] == "can_join", |
995 | "creation_ts": create_event.origin_server_ts, | |
996 | 1013 | "room_type": create_event.content.get(EventContentFields.ROOM_TYPE), |
997 | 1014 | } |
998 | 1015 |
419 | 419 | time_now = self.clock.time_msec() |
420 | 420 | |
421 | 421 | for context in contexts.values(): |
422 | context["events_before"] = await self._event_serializer.serialize_events( | |
422 | context["events_before"] = self._event_serializer.serialize_events( | |
423 | 423 | context["events_before"], time_now |
424 | 424 | ) |
425 | context["events_after"] = await self._event_serializer.serialize_events( | |
425 | context["events_after"] = self._event_serializer.serialize_events( | |
426 | 426 | context["events_after"], time_now |
427 | 427 | ) |
428 | 428 | |
440 | 440 | results.append( |
441 | 441 | { |
442 | 442 | "rank": rank_map[e.event_id], |
443 | "result": ( | |
444 | await self._event_serializer.serialize_event(e, time_now) | |
445 | ), | |
443 | "result": self._event_serializer.serialize_event(e, time_now), | |
446 | 444 | "context": contexts.get(e.event_id, {}), |
447 | 445 | } |
448 | 446 | ) |
456 | 454 | if state_results: |
457 | 455 | s = {} |
458 | 456 | for room_id, state_events in state_results.items(): |
459 | s[room_id] = await self._event_serializer.serialize_events( | |
457 | s[room_id] = self._event_serializer.serialize_events( | |
460 | 458 | state_events, time_now |
461 | 459 | ) |
462 | 460 |
125 | 125 | raise NotImplementedError() |
126 | 126 | |
127 | 127 | |
128 | @attr.s | |
128 | @attr.s(auto_attribs=True) | |
129 | 129 | class UserAttributes: |
130 | 130 | # the localpart of the mxid that the mapper has assigned to the user. |
131 | 131 | # if `None`, the mapper has not picked a userid, and the user should be prompted to |
132 | 132 | # enter one. |
133 | localpart = attr.ib(type=Optional[str]) | |
134 | display_name = attr.ib(type=Optional[str], default=None) | |
135 | emails = attr.ib(type=Collection[str], default=attr.Factory(list)) | |
136 | ||
137 | ||
138 | @attr.s(slots=True) | |
133 | localpart: Optional[str] | |
134 | display_name: Optional[str] = None | |
135 | emails: Collection[str] = attr.Factory(list) | |
136 | ||
137 | ||
138 | @attr.s(slots=True, auto_attribs=True) | |
139 | 139 | class UsernameMappingSession: |
140 | 140 | """Data we track about SSO sessions""" |
141 | 141 | |
142 | 142 | # A unique identifier for this SSO provider, e.g. "oidc" or "saml". |
143 | auth_provider_id = attr.ib(type=str) | |
143 | auth_provider_id: str | |
144 | 144 | |
145 | 145 | # user ID on the IdP server |
146 | remote_user_id = attr.ib(type=str) | |
146 | remote_user_id: str | |
147 | 147 | |
148 | 148 | # attributes returned by the ID mapper |
149 | display_name = attr.ib(type=Optional[str]) | |
150 | emails = attr.ib(type=Collection[str]) | |
149 | display_name: Optional[str] | |
150 | emails: Collection[str] | |
151 | 151 | |
152 | 152 | # An optional dictionary of extra attributes to be provided to the client in the |
153 | 153 | # login response. |
154 | extra_login_attributes = attr.ib(type=Optional[JsonDict]) | |
154 | extra_login_attributes: Optional[JsonDict] | |
155 | 155 | |
156 | 156 | # where to redirect the client back to |
157 | client_redirect_url = attr.ib(type=str) | |
157 | client_redirect_url: str | |
158 | 158 | |
159 | 159 | # expiry time for the session, in milliseconds |
160 | expiry_time_ms = attr.ib(type=int) | |
160 | expiry_time_ms: int | |
161 | 161 | |
162 | 162 | # choices made by the user |
163 | chosen_localpart = attr.ib(type=Optional[str], default=None) | |
164 | use_display_name = attr.ib(type=bool, default=True) | |
165 | emails_to_use = attr.ib(type=Collection[str], default=()) | |
166 | terms_accepted_version = attr.ib(type=Optional[str], default=None) | |
163 | chosen_localpart: Optional[str] = None | |
164 | use_display_name: bool = True | |
165 | emails_to_use: Collection[str] = () | |
166 | terms_accepted_version: Optional[str] = None | |
167 | 167 | |
168 | 168 | |
169 | 169 | # the HTTP cookie used to track the mapping session id |
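The `auto_attribs=True` conversion above replaces `attr.ib(type=...)` declarations with plain type annotations. The stdlib `dataclasses` module supports the same annotated style; shown here with `dataclasses` (rather than `attrs`) to stay dependency-free, with a shortened field set purely for illustration:

```python
from dataclasses import dataclass, field
from typing import Collection, Optional

@dataclass
class MappingSession:
    # Required fields are plain annotations with no default...
    auth_provider_id: str
    remote_user_id: str
    # ...optional fields carry defaults, and defaults that would be
    # shared mutable state use a factory instead of a literal.
    chosen_localpart: Optional[str] = None
    use_display_name: bool = True
    emails_to_use: Collection[str] = field(default_factory=tuple)
```

Field order matters in both libraries: defaulted fields must follow non-defaulted ones, which the diff above preserves.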
59 | 59 | |
60 | 60 | logger = logging.getLogger(__name__) |
61 | 61 | |
62 | # Debug logger for https://github.com/matrix-org/synapse/issues/4422 | |
63 | issue4422_logger = logging.getLogger("synapse.handler.sync.4422_debug") | |
64 | ||
65 | ||
66 | 62 | # Counts the number of times we returned a non-empty sync. `type` is one of |
67 | 63 | # "initial_sync", "full_state_sync" or "incremental_sync", `lazy_loaded` is |
68 | 64 | # "true" or "false" depending on if the request asked for lazy loaded members or |
101 | 97 | prev_batch: StreamToken |
102 | 98 | events: List[EventBase] |
103 | 99 | limited: bool |
100 | # A mapping of event ID to the bundled aggregations for the above events. | |
101 | # This is only calculated if limited is true. | |
102 | bundled_aggregations: Optional[Dict[str, Dict[str, Any]]] = None | |
104 | 103 | |
105 | 104 | def __bool__(self) -> bool: |
106 | 105 | """Make the result appear empty if there are no updates. This is used |
633 | 632 | |
634 | 633 | prev_batch_token = now_token.copy_and_replace("room_key", room_key) |
635 | 634 | |
635 | # Don't bother to bundle aggregations if the timeline is unlimited, | |
636 | # as clients will have all the necessary information. | |
637 | bundled_aggregations = None | |
638 | if limited or newly_joined_room: | |
639 | bundled_aggregations = await self.store.get_bundled_aggregations( | |
640 | recents, sync_config.user.to_string() | |
641 | ) | |
642 | ||
636 | 643 | return TimelineBatch( |
637 | 644 | events=recents, |
638 | 645 | prev_batch=prev_batch_token, |
639 | 646 | limited=limited or newly_joined_room, |
647 | bundled_aggregations=bundled_aggregations, | |
640 | 648 | ) |
641 | 649 | |
642 | 650 | async def get_state_after_event( |
1160 | 1168 | |
1161 | 1169 | num_events = 0 |
1162 | 1170 | |
1163 | # debug for https://github.com/matrix-org/synapse/issues/4422 | |
1171 | # debug for https://github.com/matrix-org/synapse/issues/9424 | |
1164 | 1172 | for joined_room in sync_result_builder.joined: |
1165 | room_id = joined_room.room_id | |
1166 | if room_id in newly_joined_rooms: | |
1167 | issue4422_logger.debug( | |
1168 | "Sync result for newly joined room %s: %r", room_id, joined_room | |
1169 | ) | |
1170 | 1173 | num_events += len(joined_room.timeline.events) |
1171 | 1174 | |
1172 | 1175 | log_kv( |
1737 | 1740 | if old_mem_ev_id: |
1738 | 1741 | old_mem_ev = await self.store.get_event( |
1739 | 1742 | old_mem_ev_id, allow_none=True |
1740 | ) | |
1741 | ||
1742 | # debug for #4422 | |
1743 | if has_join: | |
1744 | prev_membership = None | |
1745 | if old_mem_ev: | |
1746 | prev_membership = old_mem_ev.membership | |
1747 | issue4422_logger.debug( | |
1748 | "Previous membership for room %s with join: %s (event %s)", | |
1749 | room_id, | |
1750 | prev_membership, | |
1751 | old_mem_ev_id, | |
1752 | 1743 | ) |
1753 | 1744 | |
1754 | 1745 | if not old_mem_ev or old_mem_ev.membership != Membership.JOIN: |
1892 | 1883 | upto_token=since_token, |
1893 | 1884 | ) |
1894 | 1885 | |
1895 | if newly_joined: | |
1896 | # debugging for https://github.com/matrix-org/synapse/issues/4422 | |
1897 | issue4422_logger.debug( | |
1898 | "RoomSyncResultBuilder events for newly joined room %s: %r", | |
1899 | room_id, | |
1900 | entry.events, | |
1901 | ) | |
1902 | 1886 | room_entries.append(entry) |
1903 | 1887 | |
1904 | 1888 | return _RoomChanges( |
2075 | 2059 | # Note: `batch` can be both empty and limited here in the case where |
2076 | 2060 | # `_load_filtered_recents` can't find any events the user should see |
2077 | 2061 | # (e.g. due to having ignored the sender of the last 50 events). |
2078 | ||
2079 | if newly_joined: | |
2080 | # debug for https://github.com/matrix-org/synapse/issues/4422 | |
2081 | issue4422_logger.debug( | |
2082 | "Timeline events after filtering in newly-joined room %s: %r", | |
2083 | room_id, | |
2084 | batch, | |
2085 | ) | |
2086 | 2062 | |
2087 | 2063 | # When we join the room (or the client requests full_state), we should |
2088 | 2064 | # send down any existing tags. Usually the user won't have tags in a |
31 | 31 | pass |
32 | 32 | |
33 | 33 | |
34 | @attr.s | |
34 | @attr.s(auto_attribs=True) | |
35 | 35 | class ProxyCredentials: |
36 | username_password = attr.ib(type=bytes) | |
36 | username_password: bytes | |
37 | 37 | |
38 | 38 | def as_proxy_authorization_value(self) -> bytes: |
39 | 39 | """ |
122 | 122 | pass |
123 | 123 | |
124 | 124 | |
125 | @attr.s(slots=True, frozen=True) | |
125 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
126 | 126 | class MatrixFederationRequest: |
127 | method = attr.ib(type=str) | |
127 | method: str | |
128 | 128 | """HTTP method |
129 | 129 | """ |
130 | 130 | |
131 | path = attr.ib(type=str) | |
131 | path: str | |
132 | 132 | """HTTP path |
133 | 133 | """ |
134 | 134 | |
135 | destination = attr.ib(type=str) | |
135 | destination: str | |
136 | 136 | """The remote server to send the HTTP request to. |
137 | 137 | """ |
138 | 138 | |
139 | json = attr.ib(default=None, type=Optional[JsonDict]) | |
139 | json: Optional[JsonDict] = None | |
140 | 140 | """JSON to send in the body. |
141 | 141 | """ |
142 | 142 | |
143 | json_callback = attr.ib(default=None, type=Optional[Callable[[], JsonDict]]) | |
143 | json_callback: Optional[Callable[[], JsonDict]] = None | |
144 | 144 | """A callback to generate the JSON. |
145 | 145 | """ |
146 | 146 | |
147 | query = attr.ib(default=None, type=Optional[dict]) | |
147 | query: Optional[dict] = None | |
148 | 148 | """Query arguments. |
149 | 149 | """ |
150 | 150 | |
151 | txn_id = attr.ib(default=None, type=Optional[str]) | |
151 | txn_id: Optional[str] = None | |
152 | 152 | """Unique ID for this request (for logging) |
153 | 153 | """ |
154 | 154 | |
155 | uri = attr.ib(init=False, type=bytes) | |
155 | uri: bytes = attr.ib(init=False) | |
156 | 156 | """The URI of this request |
157 | 157 | """ |
158 | 158 |
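One field above keeps an explicit `attr.ib`: `uri: bytes = attr.ib(init=False)` marks a derived value that is excluded from the generated `__init__`. The stdlib analogue computes such a field in `__post_init__` (a sketch only; the scheme and formatting below are made up, not Synapse's real URI construction):

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    method: str
    path: str
    destination: str
    # Derived from the other fields, so callers never pass it in.
    uri: bytes = field(init=False)

    def __post_init__(self) -> None:
        # Illustrative only: real code percent-encodes the path and
        # appends query arguments before encoding.
        self.uri = f"matrix://{self.destination}{self.path}".encode("ascii")
```

This keeps the constructor honest: the derived field can never drift out of sync with the inputs it was built from.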
533 | 533 | |
534 | 534 | |
535 | 535 | @implementer(IAddress) |
536 | @attr.s(frozen=True, slots=True) | |
536 | @attr.s(frozen=True, slots=True, auto_attribs=True) | |
537 | 537 | class _XForwardedForAddress: |
538 | host = attr.ib(type=str) | |
538 | host: str | |
539 | 539 | |
540 | 540 | |
541 | 541 | class SynapseSite(Site): |
38 | 38 | logger = logging.getLogger(__name__) |
39 | 39 | |
40 | 40 | |
41 | @attr.s | |
41 | @attr.s(slots=True, auto_attribs=True) | |
42 | 42 | @implementer(IPushProducer) |
43 | 43 | class LogProducer: |
44 | 44 | """ |
53 | 53 | |
54 | 54 | # This is essentially ITCPTransport, but that is missing certain fields |
55 | 55 | # (connected and registerProducer) which are part of the implementation. |
56 | transport = attr.ib(type=Connection) | |
57 | _format = attr.ib(type=Callable[[logging.LogRecord], str]) | |
58 | _buffer = attr.ib(type=deque) | |
59 | _paused = attr.ib(default=False, type=bool, init=False) | |
56 | transport: Connection | |
57 | _format: Callable[[logging.LogRecord], str] | |
58 | _buffer: Deque[logging.LogRecord] | |
59 | _paused: bool = attr.ib(default=False, init=False) | |
60 | 60 | |
61 | 61 | def pauseProducing(self): |
62 | 62 | self._paused = True |
192 | 192 | return res |
193 | 193 | |
194 | 194 | |
195 | @attr.s(slots=True) | |
195 | @attr.s(slots=True, auto_attribs=True) | |
196 | 196 | class ContextRequest: |
197 | 197 | """ |
198 | 198 | A bundle of attributes from the SynapseRequest object. |
204 | 204 | their children. |
205 | 205 | """ |
206 | 206 | |
207 | request_id = attr.ib(type=str) | |
208 | ip_address = attr.ib(type=str) | |
209 | site_tag = attr.ib(type=str) | |
210 | requester = attr.ib(type=Optional[str]) | |
211 | authenticated_entity = attr.ib(type=Optional[str]) | |
212 | method = attr.ib(type=str) | |
213 | url = attr.ib(type=str) | |
214 | protocol = attr.ib(type=str) | |
215 | user_agent = attr.ib(type=str) | |
207 | request_id: str | |
208 | ip_address: str | |
209 | site_tag: str | |
210 | requester: Optional[str] | |
211 | authenticated_entity: Optional[str] | |
212 | method: str | |
213 | url: str | |
214 | protocol: str | |
215 | user_agent: str | |
216 | 216 | |
217 | 217 | |
218 | 218 | LoggingContextOrSentinel = Union["LoggingContext", "_Sentinel"] |
246 | 246 | class BaseReporter: # type: ignore[no-redef] |
247 | 247 | pass |
248 | 248 | |
249 | @attr.s(slots=True, frozen=True) | |
249 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
250 | 250 | class _WrappedRustReporter(BaseReporter): |
251 | 251 | """Wrap the reporter to ensure `report_span` never throws.""" |
252 | 252 | |
253 | _reporter = attr.ib(type=Reporter, default=attr.Factory(Reporter)) | |
253 | _reporter: Reporter = attr.Factory(Reporter) | |
254 | 254 | |
255 | 255 | def set_process(self, *args, **kwargs): |
256 | 256 | return self._reporter.set_process(*args, **kwargs) |
0 | # Copyright 2014-2016 OpenMarket Ltd | |
1 | # | |
2 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | # you may not use this file except in compliance with the License. | |
4 | # You may obtain a copy of the License at | |
5 | # | |
6 | # http://www.apache.org/licenses/LICENSE-2.0 | |
7 | # | |
8 | # Unless required by applicable law or agreed to in writing, software | |
9 | # distributed under the License is distributed on an "AS IS" BASIS, | |
10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | # See the License for the specific language governing permissions and | |
12 | # limitations under the License. | |
13 | ||
14 | ||
15 | import logging | |
16 | from functools import wraps | |
17 | from inspect import getcallargs | |
18 | from typing import Callable, TypeVar, cast | |
19 | ||
20 | _TIME_FUNC_ID = 0 | |
21 | ||
22 | ||
23 | def _log_debug_as_f(f, msg, msg_args): | |
24 | name = f.__module__ | |
25 | logger = logging.getLogger(name) | |
26 | ||
27 | if logger.isEnabledFor(logging.DEBUG): | |
28 | lineno = f.__code__.co_firstlineno | |
29 | pathname = f.__code__.co_filename | |
30 | ||
31 | record = logger.makeRecord( | |
32 | name=name, | |
33 | level=logging.DEBUG, | |
34 | fn=pathname, | |
35 | lno=lineno, | |
36 | msg=msg, | |
37 | args=msg_args, | |
38 | exc_info=None, | |
39 | ) | |
40 | ||
41 | logger.handle(record) | |
42 | ||
43 | ||
44 | F = TypeVar("F", bound=Callable) | |
45 | ||
46 | ||
47 | def log_function(f: F) -> F: | |
48 | """Function decorator that logs every call to that function.""" | |
49 | func_name = f.__name__ | |
50 | ||
51 | @wraps(f) | |
52 | def wrapped(*args, **kwargs): | |
53 | name = f.__module__ | |
54 | logger = logging.getLogger(name) | |
55 | level = logging.DEBUG | |
56 | ||
57 | if logger.isEnabledFor(level): | |
58 | bound_args = getcallargs(f, *args, **kwargs) | |
59 | ||
60 | def format(value): | |
61 | r = str(value) | |
62 | if len(r) > 50: | |
63 | r = r[:50] + "..." | |
64 | return r | |
65 | ||
66 | func_args = ["%s=%s" % (k, format(v)) for k, v in bound_args.items()] | |
67 | ||
68 | msg_args = {"func_name": func_name, "args": ", ".join(func_args)} | |
69 | ||
70 | _log_debug_as_f(f, "Invoked '%(func_name)s' with args: %(args)s", msg_args) | |
71 | ||
72 | return f(*args, **kwargs) | |
73 | ||
74 | wrapped.__name__ = func_name | |
75 | return cast(F, wrapped) |
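For reference, the decorator from the deleted `synapse/logging/utils.py` above can be reproduced in a few lines using `inspect.signature` in place of the older `getcallargs` (a condensed sketch of the same idea, not the removed file verbatim):

```python
import functools
import inspect
import logging

def log_function(f):
    """Log every call to ``f`` at DEBUG, truncating each argument to 50 chars."""
    logger = logging.getLogger(f.__module__)

    @functools.wraps(f)
    def wrapped(*args, **kwargs):
        if logger.isEnabledFor(logging.DEBUG):
            # Bind the actual call arguments to parameter names so the
            # log line reads "a=2, b=3" rather than a bare tuple.
            bound = inspect.signature(f).bind(*args, **kwargs)
            bound.apply_defaults()
            rendered = ", ".join(
                "%s=%.50s" % (k, v) for k, v in bound.arguments.items()
            )
            logger.debug("Invoked '%s' with args: %s", f.__name__, rendered)
        return f(*args, **kwargs)

    return wrapped
```

Since the formatting work is guarded by `isEnabledFor`, the decorator costs almost nothing when DEBUG logging is off, which is why it could linger unused before being deleted here.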
11 | 11 | # See the License for the specific language governing permissions and |
12 | 12 | # limitations under the License. |
13 | 13 | |
14 | import functools | |
15 | import gc | |
16 | 14 | import itertools |
17 | 15 | import logging |
18 | 16 | import os |
19 | 17 | import platform |
20 | 18 | import threading |
21 | import time | |
22 | 19 | from typing import ( |
23 | Any, | |
24 | 20 | Callable, |
25 | 21 | Dict, |
26 | 22 | Generic, |
33 | 29 | Type, |
34 | 30 | TypeVar, |
35 | 31 | Union, |
36 | cast, | |
37 | 32 | ) |
38 | 33 | |
39 | 34 | import attr |
40 | 35 | from prometheus_client import CollectorRegistry, Counter, Gauge, Histogram, Metric |
41 | 36 | from prometheus_client.core import ( |
42 | 37 | REGISTRY, |
43 | CounterMetricFamily, | |
44 | 38 | GaugeHistogramMetricFamily, |
45 | 39 | GaugeMetricFamily, |
46 | 40 | ) |
47 | 41 | |
48 | from twisted.internet import reactor | |
49 | from twisted.internet.base import ReactorBase | |
50 | 42 | from twisted.python.threadpool import ThreadPool |
51 | 43 | |
52 | import synapse | |
44 | import synapse.metrics._reactor_metrics | |
53 | 45 | from synapse.metrics._exposition import ( |
54 | 46 | MetricsResource, |
55 | 47 | generate_latest, |
56 | 48 | start_http_server, |
57 | 49 | ) |
50 | from synapse.metrics._gc import MIN_TIME_BETWEEN_GCS, install_gc_manager | |
58 | 51 | from synapse.util.versionstring import get_version_string |
59 | 52 | |
60 | 53 | logger = logging.getLogger(__name__) |
61 | 54 | |
62 | 55 | METRICS_PREFIX = "/_synapse/metrics" |
63 | 56 | |
64 | running_on_pypy = platform.python_implementation() == "PyPy" | |
65 | 57 | all_gauges: "Dict[str, Union[LaterGauge, InFlightGauge]]" = {} |
66 | 58 | |
67 | 59 | HAVE_PROC_SELF_STAT = os.path.exists("/proc/self/stat") |
75 | 67 | yield metric |
76 | 68 | |
77 | 69 | |
78 | @attr.s(slots=True, hash=True) | |
70 | @attr.s(slots=True, hash=True, auto_attribs=True) | |
79 | 71 | class LaterGauge: |
80 | 72 | |
81 | name = attr.ib(type=str) | |
82 | desc = attr.ib(type=str) | |
83 | labels = attr.ib(hash=False, type=Optional[Iterable[str]]) | |
73 | name: str | |
74 | desc: str | |
75 | labels: Optional[Iterable[str]] = attr.ib(hash=False) | |
84 | 76 | # callback: should either return a value (if there are no labels for this metric), |
85 | 77 | # or dict mapping from a label tuple to a value |
86 | caller = attr.ib( | |
87 | type=Callable[ | |
88 | [], Union[Mapping[Tuple[str, ...], Union[int, float]], Union[int, float]] | |
89 | ] | |
90 | ) | |
78 | caller: Callable[ | |
79 | [], Union[Mapping[Tuple[str, ...], Union[int, float]], Union[int, float]] | |
80 | ] | |
91 | 81 | |
92 | 82 | def collect(self) -> Iterable[Metric]: |
93 | 83 | |
156 | 146 | # Create a class which have the sub_metrics values as attributes, which |
157 | 147 | # default to 0 on initialization. Used to pass to registered callbacks. |
158 | 148 | self._metrics_class: Type[MetricsEntry] = attr.make_class( |
159 | "_MetricsEntry", attrs={x: attr.ib(0) for x in sub_metrics}, slots=True | |
149 | "_MetricsEntry", | |
150 | attrs={x: attr.ib(default=0) for x in sub_metrics}, | |
151 | slots=True, | |
160 | 152 | ) |
161 | 153 | |
162 | 154 | # Counts number of in flight blocks for a given set of label values |
368 | 360 | |
369 | 361 | REGISTRY.register(CPUMetrics()) |
370 | 362 | |
371 | # | |
372 | # Python GC metrics | |
373 | # | |
374 | ||
375 | gc_unreachable = Gauge("python_gc_unreachable_total", "Unreachable GC objects", ["gen"]) | |
376 | gc_time = Histogram( | |
377 | "python_gc_time", | |
378 | "Time taken to GC (sec)", | |
379 | ["gen"], | |
380 | buckets=[ | |
381 | 0.0025, | |
382 | 0.005, | |
383 | 0.01, | |
384 | 0.025, | |
385 | 0.05, | |
386 | 0.10, | |
387 | 0.25, | |
388 | 0.50, | |
389 | 1.00, | |
390 | 2.50, | |
391 | 5.00, | |
392 | 7.50, | |
393 | 15.00, | |
394 | 30.00, | |
395 | 45.00, | |
396 | 60.00, | |
397 | ], | |
398 | ) | |
399 | ||
400 | ||
401 | class GCCounts: | |
402 | def collect(self) -> Iterable[Metric]: | |
403 | cm = GaugeMetricFamily("python_gc_counts", "GC object counts", labels=["gen"]) | |
404 | for n, m in enumerate(gc.get_count()): | |
405 | cm.add_metric([str(n)], m) | |
406 | ||
407 | yield cm | |
408 | ||
409 | ||
410 | if not running_on_pypy: | |
411 | REGISTRY.register(GCCounts()) | |
412 | ||
413 | ||
414 | # | |
415 | # PyPy GC / memory metrics | |
416 | # | |
417 | ||
418 | ||
419 | class PyPyGCStats: | |
420 | def collect(self) -> Iterable[Metric]: | |
421 | ||
422 | # @stats is a pretty-printer object with __str__() returning a nice table, | |
423 | # plus some fields that contain data from that table. | |
424 | # unfortunately, fields are pretty-printed themselves (i. e. '4.5MB'). | |
425 | stats = gc.get_stats(memory_pressure=False) # type: ignore | |
426 | # @s contains same fields as @stats, but as actual integers. | |
427 | s = stats._s # type: ignore | |
428 | ||
429 | # also note that field naming is completely braindead | |
430 | # and only vaguely correlates with the pretty-printed table. | |
431 | # >>>> gc.get_stats(False) | |
432 | # Total memory consumed: | |
433 | # GC used: 8.7MB (peak: 39.0MB) # s.total_gc_memory, s.peak_memory | |
434 | # in arenas: 3.0MB # s.total_arena_memory | |
435 | # rawmalloced: 1.7MB # s.total_rawmalloced_memory | |
436 | # nursery: 4.0MB # s.nursery_size | |
437 | # raw assembler used: 31.0kB # s.jit_backend_used | |
438 | # ----------------------------- | |
439 | # Total: 8.8MB # stats.memory_used_sum | |
440 | # | |
441 | # Total memory allocated: | |
442 | # GC allocated: 38.7MB (peak: 41.1MB) # s.total_allocated_memory, s.peak_allocated_memory | |
443 | # in arenas: 30.9MB # s.peak_arena_memory | |
444 | # rawmalloced: 4.1MB # s.peak_rawmalloced_memory | |
445 | # nursery: 4.0MB # s.nursery_size | |
446 | # raw assembler allocated: 1.0MB # s.jit_backend_allocated | |
447 | # ----------------------------- | |
448 | # Total: 39.7MB # stats.memory_allocated_sum | |
449 | # | |
450 | # Total time spent in GC: 0.073 # s.total_gc_time | |
451 | ||
452 | pypy_gc_time = CounterMetricFamily( | |
453 | "pypy_gc_time_seconds_total", | |
454 | "Total time spent in PyPy GC", | |
455 | labels=[], | |
456 | ) | |
457 | pypy_gc_time.add_metric([], s.total_gc_time / 1000) | |
458 | yield pypy_gc_time | |
459 | ||
460 | pypy_mem = GaugeMetricFamily( | |
461 | "pypy_memory_bytes", | |
462 | "Memory tracked by PyPy allocator", | |
463 | labels=["state", "class", "kind"], | |
464 | ) | |
465 | # memory used by JIT assembler | |
466 | pypy_mem.add_metric(["used", "", "jit"], s.jit_backend_used) | |
467 | pypy_mem.add_metric(["allocated", "", "jit"], s.jit_backend_allocated) | |
468 | # memory used by GCed objects | |
469 | pypy_mem.add_metric(["used", "", "arenas"], s.total_arena_memory) | |
470 | pypy_mem.add_metric(["allocated", "", "arenas"], s.peak_arena_memory) | |
471 | pypy_mem.add_metric(["used", "", "rawmalloced"], s.total_rawmalloced_memory) | |
472 | pypy_mem.add_metric(["allocated", "", "rawmalloced"], s.peak_rawmalloced_memory) | |
473 | pypy_mem.add_metric(["used", "", "nursery"], s.nursery_size) | |
474 | pypy_mem.add_metric(["allocated", "", "nursery"], s.nursery_size) | |
475 | # totals | |
476 | pypy_mem.add_metric(["used", "totals", "gc"], s.total_gc_memory) | |
477 | pypy_mem.add_metric(["allocated", "totals", "gc"], s.total_allocated_memory) | |
478 | pypy_mem.add_metric(["used", "totals", "gc_peak"], s.peak_memory) | |
479 | pypy_mem.add_metric(["allocated", "totals", "gc_peak"], s.peak_allocated_memory) | |
480 | yield pypy_mem | |
481 | ||
482 | ||
483 | if running_on_pypy: | |
484 | REGISTRY.register(PyPyGCStats()) | |
485 | ||
486 | ||
487 | # | |
488 | # Twisted reactor metrics | |
489 | # | |
490 | ||
491 | tick_time = Histogram( | |
492 | "python_twisted_reactor_tick_time", | |
493 | "Tick time of the Twisted reactor (sec)", | |
494 | buckets=[0.001, 0.002, 0.005, 0.01, 0.025, 0.05, 0.1, 0.2, 0.5, 1, 2, 5], | |
495 | ) | |
496 | pending_calls_metric = Histogram( | |
497 | "python_twisted_reactor_pending_calls", | |
498 | "Pending calls", | |
499 | buckets=[1, 2, 5, 10, 25, 50, 100, 250, 500, 1000], | |
500 | ) | |
501 | 363 | |
502 | 364 | # |
503 | 365 | # Federation Metrics |
549 | 411 | get_version_string(synapse), |
550 | 412 | " ".join([platform.system(), platform.release()]), |
551 | 413 | ).set(1) |
552 | ||
553 | last_ticked = time.time() | |
554 | 414 | |
555 | 415 | # 3PID send info |
556 | 416 | threepid_send_requests = Histogram( |
599 | 459 | ) |
600 | 460 | |
601 | 461 | |
602 | class ReactorLastSeenMetric: | |
603 | def collect(self) -> Iterable[Metric]: | |
604 | cm = GaugeMetricFamily( | |
605 | "python_twisted_reactor_last_seen", | |
606 | "Seconds since the Twisted reactor was last seen", | |
607 | ) | |
608 | cm.add_metric([], time.time() - last_ticked) | |
609 | yield cm | |
610 | ||
611 | ||
612 | REGISTRY.register(ReactorLastSeenMetric()) | |
613 | ||
614 | # The minimum time in seconds between GCs for each generation, regardless of the current GC | |
615 | # thresholds and counts. | |
616 | MIN_TIME_BETWEEN_GCS = (1.0, 10.0, 30.0) | |
617 | ||
618 | # The time (in seconds since the epoch) of the last time we did a GC for each generation. | |
619 | _last_gc = [0.0, 0.0, 0.0] | |
620 | ||
621 | ||
622 | F = TypeVar("F", bound=Callable[..., Any]) | |
623 | ||
624 | ||
625 | def runUntilCurrentTimer(reactor: ReactorBase, func: F) -> F: | |
626 | @functools.wraps(func) | |
627 | def f(*args: Any, **kwargs: Any) -> Any: | |
628 | now = reactor.seconds() | |
629 | num_pending = 0 | |
630 | ||
631 | # _newTimedCalls is one long list of *all* pending calls. The loop | 
632 | # below is based on the implementation of reactor.runUntilCurrent. | 
633 | for delayed_call in reactor._newTimedCalls: | |
634 | if delayed_call.time > now: | |
635 | break | |
636 | ||
637 | if delayed_call.delayed_time > 0: | |
638 | continue | |
639 | ||
640 | num_pending += 1 | |
641 | ||
642 | num_pending += len(reactor.threadCallQueue) | |
643 | start = time.time() | |
644 | ret = func(*args, **kwargs) | |
645 | end = time.time() | |
646 | ||
647 | # record the amount of wallclock time spent running pending calls. | |
648 | # This is a proxy for the actual amount of time between reactor polls, | |
649 | # since about 25% of time is actually spent running things triggered by | |
650 | # I/O events, but that is harder to capture without rewriting half the | |
651 | # reactor. | |
652 | tick_time.observe(end - start) | |
653 | pending_calls_metric.observe(num_pending) | |
654 | ||
655 | # Update the time we last ticked, for the metric to test whether | |
656 | # Synapse's reactor has frozen | |
657 | global last_ticked | |
658 | last_ticked = end | |
659 | ||
660 | if running_on_pypy: | |
661 | return ret | |
662 | ||
663 | # Check if we need to do a manual GC (since it's been disabled), and do | 
664 | # one if necessary. Note we go in reverse order as e.g. a gen 1 GC may | |
665 | # promote an object into gen 2, and we don't want to handle the same | |
666 | # object multiple times. | |
667 | threshold = gc.get_threshold() | |
668 | counts = gc.get_count() | |
669 | for i in (2, 1, 0): | |
670 | # We check if we need to do one based on a straightforward | |
671 | # comparison between the threshold and count. We also do an extra | |
672 | # check to make sure that we don't do a GC too often. | 
673 | if threshold[i] < counts[i] and MIN_TIME_BETWEEN_GCS[i] < end - _last_gc[i]: | |
674 | if i == 0: | |
675 | logger.debug("Collecting gc %d", i) | |
676 | else: | |
677 | logger.info("Collecting gc %d", i) | |
678 | ||
679 | start = time.time() | |
680 | unreachable = gc.collect(i) | |
681 | end = time.time() | |
682 | ||
683 | _last_gc[i] = end | |
684 | ||
685 | gc_time.labels(i).observe(end - start) | |
686 | gc_unreachable.labels(i).set(unreachable) | |
687 | ||
688 | return ret | |
689 | ||
690 | return cast(F, f) | |
691 | ||
692 | ||
693 | try: | |
694 | # Ensure the reactor has all the attributes we expect | |
695 | reactor.seconds # type: ignore | |
696 | reactor.runUntilCurrent # type: ignore | |
697 | reactor._newTimedCalls # type: ignore | |
698 | reactor.threadCallQueue # type: ignore | |
699 | ||
700 | # runUntilCurrent is called when we have pending calls. It is called once | |
701 | # per iteration after fd polling. | 
702 | reactor.runUntilCurrent = runUntilCurrentTimer(reactor, reactor.runUntilCurrent) # type: ignore | |
703 | ||
704 | # We manually run the GC each reactor tick so that we can get some metrics | |
705 | # about time spent doing GC. | 
706 | if not running_on_pypy: | |
707 | gc.disable() | |
708 | except AttributeError: | |
709 | pass | |
710 | ||
711 | ||
712 | 462 | __all__ = [ |
713 | 463 | "MetricsResource", |
714 | 464 | "generate_latest", |
716 | 466 | "LaterGauge", |
717 | 467 | "InFlightGauge", |
718 | 468 | "GaugeBucketCollector", |
469 | "MIN_TIME_BETWEEN_GCS", | |
470 | "install_gc_manager", | |
719 | 471 | ] |
0 | # Copyright 2015-2022 The Matrix.org Foundation C.I.C. | |
1 | # | |
2 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | # you may not use this file except in compliance with the License. | |
4 | # You may obtain a copy of the License at | |
5 | # | |
6 | # http://www.apache.org/licenses/LICENSE-2.0 | |
7 | # | |
8 | # Unless required by applicable law or agreed to in writing, software | |
9 | # distributed under the License is distributed on an "AS IS" BASIS, | |
10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | # See the License for the specific language governing permissions and | |
12 | # limitations under the License. | |
13 | ||
14 | ||
15 | import gc | |
16 | import logging | |
17 | import platform | |
18 | import time | |
19 | from typing import Iterable | |
20 | ||
21 | from prometheus_client.core import ( | |
22 | REGISTRY, | |
23 | CounterMetricFamily, | |
24 | Gauge, | |
25 | GaugeMetricFamily, | |
26 | Histogram, | |
27 | Metric, | |
28 | ) | |
29 | ||
30 | from twisted.internet import task | |
31 | ||
32 | """Prometheus metrics for garbage collection""" | |
33 | ||
34 | ||
35 | logger = logging.getLogger(__name__) | |
36 | ||
37 | # The minimum time in seconds between GCs for each generation, regardless of the current GC | |
38 | # thresholds and counts. | |
39 | MIN_TIME_BETWEEN_GCS = (1.0, 10.0, 30.0) | |
40 | ||
41 | running_on_pypy = platform.python_implementation() == "PyPy" | |
42 | ||
43 | # | |
44 | # Python GC metrics | |
45 | # | |
46 | ||
47 | gc_unreachable = Gauge("python_gc_unreachable_total", "Unreachable GC objects", ["gen"]) | |
48 | gc_time = Histogram( | |
49 | "python_gc_time", | |
50 | "Time taken to GC (sec)", | |
51 | ["gen"], | |
52 | buckets=[ | |
53 | 0.0025, | |
54 | 0.005, | |
55 | 0.01, | |
56 | 0.025, | |
57 | 0.05, | |
58 | 0.10, | |
59 | 0.25, | |
60 | 0.50, | |
61 | 1.00, | |
62 | 2.50, | |
63 | 5.00, | |
64 | 7.50, | |
65 | 15.00, | |
66 | 30.00, | |
67 | 45.00, | |
68 | 60.00, | |
69 | ], | |
70 | ) | |
71 | ||
72 | ||
73 | class GCCounts: | |
74 | def collect(self) -> Iterable[Metric]: | |
75 | cm = GaugeMetricFamily("python_gc_counts", "GC object counts", labels=["gen"]) | |
76 | for n, m in enumerate(gc.get_count()): | |
77 | cm.add_metric([str(n)], m) | |
78 | ||
79 | yield cm | |
80 | ||
81 | ||
82 | def install_gc_manager() -> None: | |
83 | """Disable automatic GC, and replace it with a task that runs every 100ms | |
84 | ||
85 | This means that (a) we can limit how often GC runs; (b) we can get some metrics | |
86 | about GC activity. | |
87 | ||
88 | It does nothing on PyPy. | |
89 | """ | |
90 | ||
91 | if running_on_pypy: | |
92 | return | |
93 | ||
94 | REGISTRY.register(GCCounts()) | |
95 | ||
96 | gc.disable() | |
97 | ||
98 | # The time (in seconds since the epoch) of the last time we did a GC for each generation. | |
99 | _last_gc = [0.0, 0.0, 0.0] | |
100 | ||
101 | def _maybe_gc() -> None: | |
102 | # Check if we need to do a manual GC (since it's been disabled), and do | 
103 | # one if necessary. Note we go in reverse order as e.g. a gen 1 GC may | |
104 | # promote an object into gen 2, and we don't want to handle the same | |
105 | # object multiple times. | |
106 | threshold = gc.get_threshold() | |
107 | counts = gc.get_count() | |
108 | end = time.time() | |
109 | for i in (2, 1, 0): | |
110 | # We check if we need to do one based on a straightforward | |
111 | # comparison between the threshold and count. We also do an extra | |
112 | # check to make sure that we don't do a GC too often. | 
113 | if threshold[i] < counts[i] and MIN_TIME_BETWEEN_GCS[i] < end - _last_gc[i]: | |
114 | if i == 0: | |
115 | logger.debug("Collecting gc %d", i) | |
116 | else: | |
117 | logger.info("Collecting gc %d", i) | |
118 | ||
119 | start = time.time() | |
120 | unreachable = gc.collect(i) | |
121 | end = time.time() | |
122 | ||
123 | _last_gc[i] = end | |
124 | ||
125 | gc_time.labels(i).observe(end - start) | |
126 | gc_unreachable.labels(i).set(unreachable) | |
127 | ||
128 | gc_task = task.LoopingCall(_maybe_gc) | |
129 | gc_task.start(0.1) | |
130 | ||
131 | ||
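The `_maybe_gc` loop above only collects a generation when its allocation count is over the GC threshold *and* a minimum interval has elapsed since that generation was last collected. A minimal standalone sketch of that gating predicate (the `should_collect` helper and its arguments are illustrative, not part of Synapse's API):

```python
# Sketch of install_gc_manager's per-generation gating: collect generation
# `gen` only if it is over its threshold AND it has not been collected too
# recently. Mirrors the (1.0, 10.0, 30.0) minimum intervals used above.
from typing import List, Tuple

MIN_TIME_BETWEEN_GCS = (1.0, 10.0, 30.0)


def should_collect(
    gen: int,
    counts: Tuple[int, int, int],
    thresholds: Tuple[int, int, int],
    last_gc: List[float],
    now: float,
) -> bool:
    """Return True if generation `gen` is due for a manual collection."""
    over_threshold = thresholds[gen] < counts[gen]
    cooled_down = MIN_TIME_BETWEEN_GCS[gen] < now - last_gc[gen]
    return over_threshold and cooled_down
```

Because gen 2 requires at least 30 seconds between collections, a gen 2 sweep that ran 5 seconds ago is suppressed even when its count is over threshold.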
132 | # | |
133 | # PyPy GC / memory metrics | |
134 | # | |
135 | ||
136 | ||
137 | class PyPyGCStats: | |
138 | def collect(self) -> Iterable[Metric]: | |
139 | ||
140 | # @stats is a pretty-printer object with __str__() returning a nice table, | |
141 | # plus some fields that contain data from that table. | |
142 | # Unfortunately, the fields are themselves pretty-printed (i.e. '4.5MB'). | 
143 | stats = gc.get_stats(memory_pressure=False) # type: ignore | |
144 | # @s contains the same fields as @stats, but as actual integers. | 
145 | s = stats._s # type: ignore | |
146 | ||
147 | # Also note that the field naming is inconsistent and only vaguely | 
148 | # correlates with the pretty-printed table. | 
149 | # >>>> gc.get_stats(False) | |
150 | # Total memory consumed: | |
151 | # GC used: 8.7MB (peak: 39.0MB) # s.total_gc_memory, s.peak_memory | |
152 | # in arenas: 3.0MB # s.total_arena_memory | |
153 | # rawmalloced: 1.7MB # s.total_rawmalloced_memory | |
154 | # nursery: 4.0MB # s.nursery_size | |
155 | # raw assembler used: 31.0kB # s.jit_backend_used | |
156 | # ----------------------------- | |
157 | # Total: 8.8MB # stats.memory_used_sum | |
158 | # | |
159 | # Total memory allocated: | |
160 | # GC allocated: 38.7MB (peak: 41.1MB) # s.total_allocated_memory, s.peak_allocated_memory | |
161 | # in arenas: 30.9MB # s.peak_arena_memory | |
162 | # rawmalloced: 4.1MB # s.peak_rawmalloced_memory | |
163 | # nursery: 4.0MB # s.nursery_size | |
164 | # raw assembler allocated: 1.0MB # s.jit_backend_allocated | |
165 | # ----------------------------- | |
166 | # Total: 39.7MB # stats.memory_allocated_sum | |
167 | # | |
168 | # Total time spent in GC: 0.073 # s.total_gc_time | |
169 | ||
170 | pypy_gc_time = CounterMetricFamily( | |
171 | "pypy_gc_time_seconds_total", | |
172 | "Total time spent in PyPy GC", | |
173 | labels=[], | |
174 | ) | |
175 | pypy_gc_time.add_metric([], s.total_gc_time / 1000) | |
176 | yield pypy_gc_time | |
177 | ||
178 | pypy_mem = GaugeMetricFamily( | |
179 | "pypy_memory_bytes", | |
180 | "Memory tracked by PyPy allocator", | |
181 | labels=["state", "class", "kind"], | |
182 | ) | |
183 | # memory used by JIT assembler | |
184 | pypy_mem.add_metric(["used", "", "jit"], s.jit_backend_used) | |
185 | pypy_mem.add_metric(["allocated", "", "jit"], s.jit_backend_allocated) | |
186 | # memory used by GCed objects | |
187 | pypy_mem.add_metric(["used", "", "arenas"], s.total_arena_memory) | |
188 | pypy_mem.add_metric(["allocated", "", "arenas"], s.peak_arena_memory) | |
189 | pypy_mem.add_metric(["used", "", "rawmalloced"], s.total_rawmalloced_memory) | |
190 | pypy_mem.add_metric(["allocated", "", "rawmalloced"], s.peak_rawmalloced_memory) | |
191 | pypy_mem.add_metric(["used", "", "nursery"], s.nursery_size) | |
192 | pypy_mem.add_metric(["allocated", "", "nursery"], s.nursery_size) | |
193 | # totals | |
194 | pypy_mem.add_metric(["used", "totals", "gc"], s.total_gc_memory) | |
195 | pypy_mem.add_metric(["allocated", "totals", "gc"], s.total_allocated_memory) | |
196 | pypy_mem.add_metric(["used", "totals", "gc_peak"], s.peak_memory) | |
197 | pypy_mem.add_metric(["allocated", "totals", "gc_peak"], s.peak_allocated_memory) | |
198 | yield pypy_mem | |
199 | ||
200 | ||
201 | if running_on_pypy: | |
202 | REGISTRY.register(PyPyGCStats()) |
0 | # Copyright 2022 The Matrix.org Foundation C.I.C. | |
1 | # | |
2 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | # you may not use this file except in compliance with the License. | |
4 | # You may obtain a copy of the License at | |
5 | # | |
6 | # http://www.apache.org/licenses/LICENSE-2.0 | |
7 | # | |
8 | # Unless required by applicable law or agreed to in writing, software | |
9 | # distributed under the License is distributed on an "AS IS" BASIS, | |
10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | # See the License for the specific language governing permissions and | |
12 | # limitations under the License. | |
13 | ||
14 | import select | |
15 | import time | |
16 | from typing import Any, Iterable, List, Tuple | |
17 | ||
18 | from prometheus_client import Histogram, Metric | |
19 | from prometheus_client.core import REGISTRY, GaugeMetricFamily | |
20 | ||
21 | from twisted.internet import reactor | |
22 | ||
23 | # | |
24 | # Twisted reactor metrics | |
25 | # | |
26 | ||
27 | tick_time = Histogram( | |
28 | "python_twisted_reactor_tick_time", | |
29 | "Tick time of the Twisted reactor (sec)", | |
30 | buckets=[0.001, 0.002, 0.005, 0.01, 0.025, 0.05, 0.1, 0.2, 0.5, 1, 2, 5], | |
31 | ) | |
32 | ||
33 | ||
34 | class EpollWrapper: | |
35 | """A wrapper for an epoll object which records the time between polls.""" | 
36 | ||
37 | def __init__(self, poller: "select.epoll"): # type: ignore[name-defined] | |
38 | self.last_polled = time.time() | |
39 | self._poller = poller | |
40 | ||
41 | def poll(self, *args, **kwargs) -> List[Tuple[int, int]]: # type: ignore[no-untyped-def] | |
42 | # record the time since poll() was last called. This gives a good proxy for | |
43 | # how long it takes to run everything in the reactor, i.e. how long anything | 
44 | # waiting for the next tick will have to wait. | |
45 | tick_time.observe(time.time() - self.last_polled) | |
46 | ||
47 | ret = self._poller.poll(*args, **kwargs) | |
48 | ||
49 | self.last_polled = time.time() | |
50 | return ret | |
51 | ||
52 | def __getattr__(self, item: str) -> Any: | |
53 | return getattr(self._poller, item) | |
54 | ||
55 | ||
56 | class ReactorLastSeenMetric: | |
57 | def __init__(self, epoll_wrapper: EpollWrapper): | |
58 | self._epoll_wrapper = epoll_wrapper | |
59 | ||
60 | def collect(self) -> Iterable[Metric]: | |
61 | cm = GaugeMetricFamily( | |
62 | "python_twisted_reactor_last_seen", | |
63 | "Seconds since the Twisted reactor was last seen", | |
64 | ) | |
65 | cm.add_metric([], time.time() - self._epoll_wrapper.last_polled) | |
66 | yield cm | |
67 | ||
68 | ||
69 | try: | |
70 | # if the reactor has a `_poller` attribute, which is an `epoll` object | |
71 | # (ie, it's an EPollReactor), we wrap the `epoll` with a thing that will | |
72 | # measure the time between ticks | |
73 | from select import epoll # type: ignore[attr-defined] | |
74 | ||
75 | poller = reactor._poller # type: ignore[attr-defined] | |
76 | except (AttributeError, ImportError): | |
77 | pass | |
78 | else: | |
79 | if isinstance(poller, epoll): | |
80 | poller = EpollWrapper(poller) | |
81 | reactor._poller = poller # type: ignore[attr-defined] | |
82 | REGISTRY.register(ReactorLastSeenMetric(poller)) |
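The new `EpollWrapper` relies on a small delegation pattern: intercept `poll()` to record timing, and forward every other attribute to the wrapped poller via `__getattr__`. A self-contained sketch of that pattern, using a `FakePoller` stand-in for `select.epoll` (both class names here are illustrative):

```python
# Delegation pattern used by EpollWrapper above: only poll() is
# intercepted; close(), fileno(), etc. pass straight through.
import time
from typing import Any, List, Tuple


class FakePoller:
    """Stand-in for select.epoll, for illustration only."""

    def poll(self, timeout: float = -1.0) -> List[Tuple[int, int]]:
        return [(4, 1)]  # (fd, event-mask) pairs, as epoll would return

    def close(self) -> None:
        pass


class TimedPollerWrapper:
    def __init__(self, poller: Any) -> None:
        self.last_polled = time.time()
        self._poller = poller
        self.intervals: List[float] = []

    def poll(self, *args: Any, **kwargs: Any) -> List[Tuple[int, int]]:
        # Time since the previous poll() approximates one reactor tick.
        self.intervals.append(time.time() - self.last_polled)
        ret = self._poller.poll(*args, **kwargs)
        self.last_polled = time.time()
        return ret

    def __getattr__(self, item: str) -> Any:
        # Everything except poll() is transparently delegated.
        return getattr(self._poller, item)
```

This is why the diff can swap the wrapper into `reactor._poller` without the reactor noticing: the wrapped object still answers every epoll attribute lookup.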
39 | 39 | from synapse.logging import issue9533_logger |
40 | 40 | from synapse.logging.context import PreserveLoggingContext |
41 | 41 | from synapse.logging.opentracing import log_kv, start_active_span |
42 | from synapse.logging.utils import log_function | |
43 | 42 | from synapse.metrics import LaterGauge |
44 | 43 | from synapse.streams.config import PaginationConfig |
45 | 44 | from synapse.types import ( |
192 | 191 | return bool(self.events) |
193 | 192 | |
194 | 193 | |
195 | @attr.s(slots=True, frozen=True) | |
194 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
196 | 195 | class _PendingRoomEventEntry: |
197 | event_pos = attr.ib(type=PersistedEventPosition) | |
198 | extra_users = attr.ib(type=Collection[UserID]) | |
199 | ||
200 | room_id = attr.ib(type=str) | |
201 | type = attr.ib(type=str) | |
202 | state_key = attr.ib(type=Optional[str]) | |
203 | membership = attr.ib(type=Optional[str]) | |
196 | event_pos: PersistedEventPosition | |
197 | extra_users: Collection[UserID] | |
198 | ||
199 | room_id: str | |
200 | type: str | |
201 | state_key: Optional[str] | |
202 | membership: Optional[str] | |
204 | 203 | |
205 | 204 | |
206 | 205 | class Notifier: |
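The hunks in this release repeatedly migrate `attr.ib(type=...)` field declarations to `auto_attribs=True` annotation style. A minimal sketch of the equivalence, under the assumption that both spellings should produce identical classes (the class names here are illustrative, not Synapse's):

```python
# The two attrs spellings this diff migrates between: explicit attr.ib()
# fields versus plain annotations with auto_attribs=True.
from typing import Optional

import attr


@attr.s(slots=True, frozen=True)
class OldStyle:
    event_id = attr.ib(type=str)
    state_key = attr.ib(type=Optional[str])


@attr.s(slots=True, frozen=True, auto_attribs=True)
class NewStyle:
    event_id: str
    state_key: Optional[str]
```

Field order, generated `__init__`, `__eq__`, and slots behaviour are the same either way; the annotation form is simply shorter and plays better with type checkers.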
685 | 684 | else: |
686 | 685 | return False |
687 | 686 | |
688 | @log_function | |
689 | 687 | def remove_expired_streams(self) -> None: |
690 | 688 | time_now_ms = self.clock.time_msec() |
691 | 689 | expired_streams = [] |
699 | 697 | for expired_stream in expired_streams: |
700 | 698 | expired_stream.remove(self) |
701 | 699 | |
702 | @log_function | |
703 | 700 | def _register_with_keys(self, user_stream: _NotifierUserStream): |
704 | 701 | self.user_to_user_stream[user_stream.user_id] = user_stream |
705 | 702 |
22 | 22 | from synapse.server import HomeServer |
23 | 23 | |
24 | 24 | |
25 | @attr.s(slots=True) | |
25 | @attr.s(slots=True, auto_attribs=True) | |
26 | 26 | class PusherConfig: |
27 | 27 | """Parameters necessary to configure a pusher.""" |
28 | 28 | |
29 | id = attr.ib(type=Optional[str]) | |
30 | user_name = attr.ib(type=str) | |
31 | access_token = attr.ib(type=Optional[int]) | |
32 | profile_tag = attr.ib(type=str) | |
33 | kind = attr.ib(type=str) | |
34 | app_id = attr.ib(type=str) | |
35 | app_display_name = attr.ib(type=str) | |
36 | device_display_name = attr.ib(type=str) | |
37 | pushkey = attr.ib(type=str) | |
38 | ts = attr.ib(type=int) | |
39 | lang = attr.ib(type=Optional[str]) | |
40 | data = attr.ib(type=Optional[JsonDict]) | |
41 | last_stream_ordering = attr.ib(type=int) | |
42 | last_success = attr.ib(type=Optional[int]) | |
43 | failing_since = attr.ib(type=Optional[int]) | |
29 | id: Optional[str] | |
30 | user_name: str | |
31 | access_token: Optional[int] | |
32 | profile_tag: str | |
33 | kind: str | |
34 | app_id: str | |
35 | app_display_name: str | |
36 | device_display_name: str | |
37 | pushkey: str | |
38 | ts: int | |
39 | lang: Optional[str] | |
40 | data: Optional[JsonDict] | |
41 | last_stream_ordering: int | |
42 | last_success: Optional[int] | |
43 | failing_since: Optional[int] | |
44 | 44 | |
45 | 45 | def as_dict(self) -> Dict[str, Any]: |
46 | 46 | """Information that can be retrieved about a pusher after creation.""" |
56 | 56 | } |
57 | 57 | |
58 | 58 | |
59 | @attr.s(slots=True) | |
59 | @attr.s(slots=True, auto_attribs=True) | |
60 | 60 | class ThrottleParams: |
61 | 61 | """Parameters for controlling the rate of sending pushes via email.""" |
62 | 62 | |
63 | last_sent_ts = attr.ib(type=int) | |
64 | throttle_ms = attr.ib(type=int) | |
63 | last_sent_ts: int | |
64 | throttle_ms: int | |
65 | 65 | |
66 | 66 | |
67 | 67 | class Pusher(metaclass=abc.ABCMeta): |
297 | 297 | StateGroup = Union[object, int] |
298 | 298 | |
299 | 299 | |
300 | @attr.s(slots=True) | |
300 | @attr.s(slots=True, auto_attribs=True) | |
301 | 301 | class RulesForRoomData: |
302 | 302 | """The data stored in the cache by `RulesForRoom`. |
303 | 303 | |
306 | 306 | """ |
307 | 307 | |
308 | 308 | # event_id -> (user_id, state) |
309 | member_map = attr.ib(type=MemberMap, factory=dict) | |
309 | member_map: MemberMap = attr.Factory(dict) | |
310 | 310 | # user_id -> rules |
311 | rules_by_user = attr.ib(type=RulesByUser, factory=dict) | |
311 | rules_by_user: RulesByUser = attr.Factory(dict) | |
312 | 312 | |
313 | 313 | # The last state group we updated the caches for. If the state_group of |
314 | 314 | # a new event comes along, we know that we can just return the cached |
315 | 315 | # result. |
316 | 316 | # On invalidation of the rules themselves (if the user changes them), |
317 | 317 | # we invalidate everything and set state_group to `object()` |
318 | state_group = attr.ib(type=StateGroup, factory=object) | |
318 | state_group: StateGroup = attr.Factory(object) | |
319 | 319 | |
320 | 320 | # A sequence number to keep track of when we're allowed to update the |
321 | 321 | # cache. We bump the sequence number when we invalidate the cache. If |
322 | 322 | # the sequence number changes while we're calculating stuff we should |
323 | 323 | # not update the cache with it. |
324 | sequence = attr.ib(type=int, default=0) | |
324 | sequence: int = 0 | |
325 | 325 | |
326 | 326 | # A cache of user_ids that we *know* aren't interesting, e.g. user_ids |
327 | 327 | # owned by AS's, or remote users, etc. (I.e. users we will never need to |
328 | 328 | # calculate push for) |
329 | 329 | # These never need to be invalidated as we will never set up push for |
330 | 330 | # them. |
331 | uninteresting_user_set = attr.ib(type=Set[str], factory=set) | |
331 | uninteresting_user_set: Set[str] = attr.Factory(set) | |
332 | 332 | |
333 | 333 | |
334 | 334 | class RulesForRoom: |
552 | 552 | self.data.state_group = state_group |
553 | 553 | |
554 | 554 | |
555 | @attr.attrs(slots=True, frozen=True) | |
555 | @attr.attrs(slots=True, frozen=True, auto_attribs=True) | |
556 | 556 | class _Invalidation: |
557 | 557 | # _Invalidation is passed as an `on_invalidate` callback to bulk_get_push_rules, |
558 | 558 | # which means that it is stored on the bulk_get_push_rules cache entry. In order
563 | 563 | # attrs provides suitable __hash__ and __eq__ methods, provided we remember to |
564 | 564 | # set `frozen=True`. |
565 | 565 | |
566 | cache = attr.ib(type=LruCache) | |
567 | room_id = attr.ib(type=str) | |
566 | cache: LruCache | |
567 | room_id: str | |
568 | 568 | |
569 | 569 | def __call__(self) -> None: |
570 | 570 | rules_data = self.cache.get(self.room_id, None, update_metrics=False) |
177 | 177 | await self.send_email( |
178 | 178 | email_address, |
179 | 179 | self.email_subjects.email_validation |
180 | % {"server_name": self.hs.config.server.server_name}, | |
180 | % {"server_name": self.hs.config.server.server_name, "app": self.app_name}, | |
181 | 181 | template_vars, |
182 | 182 | ) |
183 | 183 | |
208 | 208 | await self.send_email( |
209 | 209 | email_address, |
210 | 210 | self.email_subjects.email_validation |
211 | % {"server_name": self.hs.config.server.server_name}, | |
211 | % {"server_name": self.hs.config.server.server_name, "app": self.app_name}, | |
212 | 212 | template_vars, |
213 | 213 | ) |
214 | 214 |
49 | 49 | """ |
50 | 50 | |
51 | 51 | |
52 | @attr.s(slots=True, frozen=True) | |
52 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
53 | 53 | class EventsStreamRow: |
54 | 54 | """A parsed row from the events replication stream""" |
55 | 55 | |
56 | type = attr.ib() # str: the TypeId of one of the *EventsStreamRows | |
57 | data = attr.ib() # BaseEventsStreamRow | |
56 | type: str # the TypeId of one of the *EventsStreamRows | |
57 | data: "BaseEventsStreamRow" | |
58 | 58 | |
59 | 59 | |
60 | 60 | class BaseEventsStreamRow: |
78 | 78 | return cls(*data) |
79 | 79 | |
80 | 80 | |
81 | @attr.s(slots=True, frozen=True) | |
81 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
82 | 82 | class EventsStreamEventRow(BaseEventsStreamRow): |
83 | 83 | TypeId = "ev" |
84 | 84 | |
85 | event_id = attr.ib(type=str) | |
86 | room_id = attr.ib(type=str) | |
87 | type = attr.ib(type=str) | |
88 | state_key = attr.ib(type=Optional[str]) | |
89 | redacts = attr.ib(type=Optional[str]) | |
90 | relates_to = attr.ib(type=Optional[str]) | |
91 | membership = attr.ib(type=Optional[str]) | |
92 | rejected = attr.ib(type=bool) | |
93 | ||
94 | ||
95 | @attr.s(slots=True, frozen=True) | |
85 | event_id: str | |
86 | room_id: str | |
87 | type: str | |
88 | state_key: Optional[str] | |
89 | redacts: Optional[str] | |
90 | relates_to: Optional[str] | |
91 | membership: Optional[str] | |
92 | rejected: bool | |
93 | ||
94 | ||
95 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
96 | 96 | class EventsStreamCurrentStateRow(BaseEventsStreamRow): |
97 | 97 | TypeId = "state" |
98 | 98 | |
99 | room_id = attr.ib() # str | |
100 | type = attr.ib() # str | |
101 | state_key = attr.ib() # str | |
102 | event_id = attr.ib() # str, optional | |
99 | room_id: str | |
100 | type: str | |
101 | state_key: str | |
102 | event_id: Optional[str] | |
103 | 103 | |
104 | 104 | |
105 | 105 | _EventRows: Tuple[Type[BaseEventsStreamRow], ...] = ( |
122 | 122 | job_name = body["job_name"] |
123 | 123 | |
124 | 124 | if job_name == "populate_stats_process_rooms": |
125 | jobs = [ | |
126 | { | |
127 | "update_name": "populate_stats_process_rooms", | |
128 | "progress_json": "{}", | |
129 | }, | |
130 | ] | |
125 | jobs = [("populate_stats_process_rooms", "{}", "")] | |
131 | 126 | elif job_name == "regenerate_directory": |
132 | 127 | jobs = [ |
133 | { | |
134 | "update_name": "populate_user_directory_createtables", | |
135 | "progress_json": "{}", | |
136 | "depends_on": "", | |
137 | }, | |
138 | { | |
139 | "update_name": "populate_user_directory_process_rooms", | |
140 | "progress_json": "{}", | |
141 | "depends_on": "populate_user_directory_createtables", | |
142 | }, | |
143 | { | |
144 | "update_name": "populate_user_directory_process_users", | |
145 | "progress_json": "{}", | |
146 | "depends_on": "populate_user_directory_process_rooms", | |
147 | }, | |
148 | { | |
149 | "update_name": "populate_user_directory_cleanup", | |
150 | "progress_json": "{}", | |
151 | "depends_on": "populate_user_directory_process_users", | |
152 | }, | |
128 | ("populate_user_directory_createtables", "{}", ""), | |
129 | ( | |
130 | "populate_user_directory_process_rooms", | |
131 | "{}", | |
132 | "populate_user_directory_createtables", | |
133 | ), | |
134 | ( | |
135 | "populate_user_directory_process_users", | |
136 | "{}", | |
137 | "populate_user_directory_process_rooms", | |
138 | ), | |
139 | ( | |
140 | "populate_user_directory_cleanup", | |
141 | "{}", | |
142 | "populate_user_directory_process_users", | |
143 | ), | |
153 | 144 | ] |
154 | 145 | else: |
155 | 146 | raise SynapseError(HTTPStatus.BAD_REQUEST, "Invalid job_name") |
157 | 148 | try: |
158 | 149 | await self._store.db_pool.simple_insert_many( |
159 | 150 | table="background_updates", |
151 | keys=("update_name", "progress_json", "depends_on"), | |
160 | 152 | values=jobs, |
161 | 153 | desc=f"admin_api_run_{job_name}", |
162 | 154 | ) |
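The `simple_insert_many` change above replaces one dict per row with a shared `keys` tuple plus per-row value tuples, so every row is guaranteed to use the same column list. A rough sketch of why that shape helps (`build_insert` is an illustrative helper, not Synapse's actual implementation):

```python
# With a shared key list, a multi-row parameterised INSERT can be built
# once, and mismatched rows are caught up front.
from typing import Any, List, Sequence, Tuple


def build_insert(
    table: str, keys: Sequence[str], values: Sequence[Tuple[Any, ...]]
) -> Tuple[str, List[Any]]:
    """Build one multi-row parameterised INSERT from shared column keys."""
    if any(len(row) != len(keys) for row in values):
        raise ValueError("every row must match the shared key list")
    row_placeholder = "(" + ", ".join("?" for _ in keys) + ")"
    sql = "INSERT INTO %s (%s) VALUES %s" % (
        table,
        ", ".join(keys),
        ", ".join(row_placeholder for _ in values),
    )
    args = [v for row in values for v in row]
    return sql, args
```

With the old dict-per-row shape, each row could name different (or misspelled) columns and the mismatch would only surface at execution time.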
110 | 110 | ) -> Tuple[int, JsonDict]: |
111 | 111 | await assert_requester_is_admin(self._auth, request) |
112 | 112 | |
113 | if not await self._store.is_destination_known(destination): | |
114 | raise NotFoundError("Unknown destination") | |
115 | ||
113 | 116 | destination_retry_timings = await self._store.get_destination_retry_timings( |
114 | 117 | destination |
115 | 118 | ) |
116 | ||
117 | if not destination_retry_timings: | |
118 | raise NotFoundError("Unknown destination") | |
119 | 119 | |
120 | 120 | last_successful_stream_ordering = ( |
121 | 121 | await self._store.get_destination_last_successful_stream_ordering( |
123 | 123 | ) |
124 | 124 | ) |
125 | 125 | |
126 | response = { | |
126 | response: JsonDict = { | |
127 | 127 | "destination": destination, |
128 | "failure_ts": destination_retry_timings.failure_ts, | |
129 | "retry_last_ts": destination_retry_timings.retry_last_ts, | |
130 | "retry_interval": destination_retry_timings.retry_interval, | |
131 | 128 | "last_successful_stream_ordering": last_successful_stream_ordering, |
132 | 129 | } |
133 | 130 | |
131 | if destination_retry_timings: | |
132 | response = { | |
133 | **response, | |
134 | "failure_ts": destination_retry_timings.failure_ts, | |
135 | "retry_last_ts": destination_retry_timings.retry_last_ts, | |
136 | "retry_interval": destination_retry_timings.retry_interval, | |
137 | } | |
138 | else: | |
139 | response = { | |
140 | **response, | |
141 | "failure_ts": None, | |
142 | "retry_last_ts": 0, | |
143 | "retry_interval": 0, | |
144 | } | |
145 | ||
134 | 146 | return HTTPStatus.OK, response |
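The hunk above changes the destinations admin endpoint to 404 only for genuinely unknown destinations, and to fill in default retry values when no retry timings exist. A minimal sketch of that response-building logic (`build_destination_response` is an illustrative helper, not the servlet's actual code):

```python
# Known destination with no recorded failures -> defaults meaning
# "never failed, never retried", rather than a 404.
from typing import Any, Dict, Optional


def build_destination_response(
    destination: str, timings: Optional[Dict[str, Any]]
) -> Dict[str, Any]:
    response: Dict[str, Any] = {"destination": destination}
    if timings:
        response.update(timings)
    else:
        response.update(
            {"failure_ts": None, "retry_last_ts": 0, "retry_interval": 0}
        )
    return response
```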
465 | 465 | ) |
466 | 466 | |
467 | 467 | deleted_media, total = await self.media_repository.delete_local_media_ids( |
468 | ([row["media_id"] for row in media]) | |
468 | [row["media_id"] for row in media] | |
469 | 469 | ) |
470 | 470 | |
471 | 471 | return HTTPStatus.OK, {"deleted_media": deleted_media, "total": total} |
423 | 423 | event_ids = await self.store.get_current_state_ids(room_id) |
424 | 424 | events = await self.store.get_events(event_ids.values()) |
425 | 425 | now = self.clock.time_msec() |
426 | room_state = await self._event_serializer.serialize_events(events.values(), now) | |
426 | room_state = self._event_serializer.serialize_events(events.values(), now) | |
427 | 427 | ret = {"state": room_state} |
428 | 428 | |
429 | 429 | return HTTPStatus.OK, ret |
743 | 743 | ) |
744 | 744 | |
745 | 745 | time_now = self.clock.time_msec() |
746 | results["events_before"] = await self._event_serializer.serialize_events( | |
747 | results["events_before"], | |
748 | time_now, | |
749 | bundle_aggregations=True, | |
750 | ) | |
751 | results["event"] = await self._event_serializer.serialize_event( | |
752 | results["event"], | |
753 | time_now, | |
754 | bundle_aggregations=True, | |
755 | ) | |
756 | results["events_after"] = await self._event_serializer.serialize_events( | |
757 | results["events_after"], | |
758 | time_now, | |
759 | bundle_aggregations=True, | |
760 | ) | |
761 | results["state"] = await self._event_serializer.serialize_events( | |
746 | aggregations = results.pop("aggregations", None) | |
747 | results["events_before"] = self._event_serializer.serialize_events( | |
748 | results["events_before"], time_now, bundle_aggregations=aggregations | |
749 | ) | |
750 | results["event"] = self._event_serializer.serialize_event( | |
751 | results["event"], time_now, bundle_aggregations=aggregations | |
752 | ) | |
753 | results["events_after"] = self._event_serializer.serialize_events( | |
754 | results["events_after"], time_now, bundle_aggregations=aggregations | |
755 | ) | |
756 | results["state"] = self._event_serializer.serialize_events( | |
762 | 757 | results["state"], time_now |
763 | 758 | ) |
764 | 759 |
172 | 172 | if not self.hs.is_mine(target_user): |
173 | 173 | raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only look up local users") |
174 | 174 | |
175 | ret = await self.admin_handler.get_user(target_user) | |
176 | ||
177 | if not ret: | |
175 | user_info_dict = await self.admin_handler.get_user(target_user) | |
176 | if not user_info_dict: | |
178 | 177 | raise NotFoundError("User not found") |
179 | 178 | |
180 | return HTTPStatus.OK, ret | |
179 | return HTTPStatus.OK, user_info_dict | |
181 | 180 | |
182 | 181 | async def on_PUT( |
183 | 182 | self, request: SynapseRequest, user_id: str |
398 | 397 | target_user, requester, body["avatar_url"], True |
399 | 398 | ) |
400 | 399 | |
401 | user = await self.admin_handler.get_user(target_user) | |
402 | assert user is not None | |
403 | ||
404 | return 201, user | |
400 | user_info_dict = await self.admin_handler.get_user(target_user) | |
401 | assert user_info_dict is not None | |
402 | ||
403 | return HTTPStatus.CREATED, user_info_dict | |
405 | 404 | |
406 | 405 | |
407 | 406 | class UserRegisterServlet(RestServlet): |
90 | 90 | |
91 | 91 | time_now = self.clock.time_msec() |
92 | 92 | if event: |
93 | result = await self._event_serializer.serialize_event(event, time_now) | |
93 | result = self._event_serializer.serialize_event(event, time_now) | |
94 | 94 | return 200, result |
95 | 95 | else: |
96 | 96 | return 404, "Event not found." |
71 | 71 | "actions": pa.actions, |
72 | 72 | "ts": pa.received_ts, |
73 | 73 | "event": ( |
74 | await self._event_serializer.serialize_event( | |
74 | self._event_serializer.serialize_event( | |
75 | 75 | notif_events[pa.event_id], |
76 | 76 | self.clock.time_msec(), |
77 | 77 | event_format=format_event_for_client_v2_without_room_id, |
18 | 18 | """ |
19 | 19 | |
20 | 20 | import logging |
21 | from typing import TYPE_CHECKING, Awaitable, Optional, Tuple | |
22 | ||
23 | from synapse.api.constants import EventTypes, RelationTypes | |
24 | from synapse.api.errors import ShadowBanError, SynapseError | |
21 | from typing import TYPE_CHECKING, Optional, Tuple | |
22 | ||
23 | from synapse.api.constants import RelationTypes | |
24 | from synapse.api.errors import SynapseError | |
25 | 25 | from synapse.http.server import HttpServer |
26 | from synapse.http.servlet import ( | |
27 | RestServlet, | |
28 | parse_integer, | |
29 | parse_json_object_from_request, | |
30 | parse_string, | |
31 | ) | |
26 | from synapse.http.servlet import RestServlet, parse_integer, parse_string | |
32 | 27 | from synapse.http.site import SynapseRequest |
33 | from synapse.rest.client.transactions import HttpTransactionCache | |
28 | from synapse.rest.client._base import client_patterns | |
34 | 29 | from synapse.storage.relations import ( |
35 | 30 | AggregationPaginationToken, |
36 | 31 | PaginationChunk, |
37 | 32 | RelationPaginationToken, |
38 | 33 | ) |
39 | 34 | from synapse.types import JsonDict |
40 | from synapse.util.stringutils import random_string | |
41 | ||
42 | from ._base import client_patterns | |
43 | 35 | |
44 | 36 | if TYPE_CHECKING: |
45 | 37 | from synapse.server import HomeServer |
46 | 38 | |
47 | 39 | logger = logging.getLogger(__name__) |
48 | ||
49 | ||
50 | class RelationSendServlet(RestServlet): | |
51 | """Helper API for sending events that have relation data. | |
52 | ||
53 | Example API shape to send a 👍 reaction to a room: | |
54 | ||
55 | POST /rooms/!foo/send_relation/$bar/m.annotation/m.reaction?key=%F0%9F%91%8D | |
56 | {} | |
57 | ||
58 | { | |
59 | "event_id": "$foobar" | |
60 | } | |
61 | """ | |
62 | ||
63 | PATTERN = ( | |
64 | "/rooms/(?P<room_id>[^/]*)/send_relation" | |
65 | "/(?P<parent_id>[^/]*)/(?P<relation_type>[^/]*)/(?P<event_type>[^/]*)" | |
66 | ) | |
67 | ||
68 | def __init__(self, hs: "HomeServer"): | |
69 | super().__init__() | |
70 | self.auth = hs.get_auth() | |
71 | self.event_creation_handler = hs.get_event_creation_handler() | |
72 | self.txns = HttpTransactionCache(hs) | |
73 | ||
74 | def register(self, http_server: HttpServer) -> None: | |
75 | http_server.register_paths( | |
76 | "POST", | |
77 | client_patterns(self.PATTERN + "$", releases=()), | |
78 | self.on_PUT_or_POST, | |
79 | self.__class__.__name__, | |
80 | ) | |
81 | http_server.register_paths( | |
82 | "PUT", | |
83 | client_patterns(self.PATTERN + "/(?P<txn_id>[^/]*)$", releases=()), | |
84 | self.on_PUT, | |
85 | self.__class__.__name__, | |
86 | ) | |
87 | ||
88 | def on_PUT( | |
89 | self, | |
90 | request: SynapseRequest, | |
91 | room_id: str, | |
92 | parent_id: str, | |
93 | relation_type: str, | |
94 | event_type: str, | |
95 | txn_id: Optional[str] = None, | |
96 | ) -> Awaitable[Tuple[int, JsonDict]]: | |
97 | return self.txns.fetch_or_execute_request( | |
98 | request, | |
99 | self.on_PUT_or_POST, | |
100 | request, | |
101 | room_id, | |
102 | parent_id, | |
103 | relation_type, | |
104 | event_type, | |
105 | txn_id, | |
106 | ) | |
107 | ||
108 | async def on_PUT_or_POST( | |
109 | self, | |
110 | request: SynapseRequest, | |
111 | room_id: str, | |
112 | parent_id: str, | |
113 | relation_type: str, | |
114 | event_type: str, | |
115 | txn_id: Optional[str] = None, | |
116 | ) -> Tuple[int, JsonDict]: | |
117 | requester = await self.auth.get_user_by_req(request, allow_guest=True) | |
118 | ||
119 | if event_type == EventTypes.Member: | |
120 | # Adding relations to a membership is meaningless, so we just deny it | |
121 | # at the CS API rather than trying to handle it correctly. | |
122 | raise SynapseError(400, "Cannot send member events with relations") | |
123 | ||
124 | content = parse_json_object_from_request(request) | |
125 | ||
126 | aggregation_key = parse_string(request, "key", encoding="utf-8") | |
127 | ||
128 | content["m.relates_to"] = { | |
129 | "event_id": parent_id, | |
130 | "rel_type": relation_type, | |
131 | } | |
132 | if aggregation_key is not None: | |
133 | content["m.relates_to"]["key"] = aggregation_key | |
134 | ||
135 | event_dict = { | |
136 | "type": event_type, | |
137 | "content": content, | |
138 | "room_id": room_id, | |
139 | "sender": requester.user.to_string(), | |
140 | } | |
141 | ||
142 | try: | |
143 | ( | |
144 | event, | |
145 | _, | |
146 | ) = await self.event_creation_handler.create_and_send_nonmember_event( | |
147 | requester, event_dict=event_dict, txn_id=txn_id | |
148 | ) | |
149 | event_id = event.event_id | |
150 | except ShadowBanError: | |
151 | event_id = "$" + random_string(43) | |
152 | ||
153 | return 200, {"event_id": event_id} | |
154 | 40 | |
155 | 41 | |
156 | 42 | class RelationPaginationServlet(RestServlet): |
226 | 112 | now = self.clock.time_msec() |
227 | 113 | # Do not bundle aggregations when retrieving the original event because |
228 | 114 | # we want the content before relations are applied to it. |
229 | original_event = await self._event_serializer.serialize_event( | |
230 | event, now, bundle_aggregations=False | |
115 | original_event = self._event_serializer.serialize_event( | |
116 | event, now, bundle_aggregations=None | |
231 | 117 | ) |
232 | 118 | # The relations returned for the requested event do include their |
233 | 119 | # bundled aggregations. |
234 | serialized_events = await self._event_serializer.serialize_events( | |
235 | events, now, bundle_aggregations=True | |
120 | aggregations = await self.store.get_bundled_aggregations( | |
121 | events, requester.user.to_string() | |
122 | ) | |
123 | serialized_events = self._event_serializer.serialize_events( | |
124 | events, now, bundle_aggregations=aggregations | |
236 | 125 | ) |
237 | 126 | |
238 | 127 | return_value = pagination_chunk.to_dict() |
421 | 310 | ) |
422 | 311 | |
423 | 312 | now = self.clock.time_msec() |
424 | serialized_events = await self._event_serializer.serialize_events(events, now) | |
313 | serialized_events = self._event_serializer.serialize_events(events, now) | |
425 | 314 | |
426 | 315 | return_value = result.to_dict() |
427 | 316 | return_value["chunk"] = serialized_events |
430 | 319 | |
431 | 320 | |
432 | 321 | def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None: |
433 | RelationSendServlet(hs).register(http_server) | |
434 | 322 | RelationPaginationServlet(hs).register(http_server) |
435 | 323 | RelationAggregationPaginationServlet(hs).register(http_server) |
436 | 324 | RelationAggregationGroupPaginationServlet(hs).register(http_server) |
641 | 641 | def __init__(self, hs: "HomeServer"): |
642 | 642 | super().__init__() |
643 | 643 | self.clock = hs.get_clock() |
644 | self._store = hs.get_datastore() | |
644 | 645 | self.event_handler = hs.get_event_handler() |
645 | 646 | self._event_serializer = hs.get_event_client_serializer() |
646 | 647 | self.auth = hs.get_auth() |
659 | 660 | # https://matrix.org/docs/spec/client_server/r0.5.0#get-matrix-client-r0-rooms-roomid-event-eventid |
660 | 661 | raise SynapseError(404, "Event not found.", errcode=Codes.NOT_FOUND) |
661 | 662 | |
662 | time_now = self.clock.time_msec() | |
663 | 663 | if event: |
664 | event_dict = await self._event_serializer.serialize_event( | |
665 | event, time_now, bundle_aggregations=True | |
664 | # Ensure there are bundled aggregations available. | |
665 | aggregations = await self._store.get_bundled_aggregations( | |
666 | [event], requester.user.to_string() | |
667 | ) | |
668 | ||
669 | time_now = self.clock.time_msec() | |
670 | event_dict = self._event_serializer.serialize_event( | |
671 | event, time_now, bundle_aggregations=aggregations | |
666 | 672 | ) |
667 | 673 | return 200, event_dict |
668 | 674 | |
707 | 713 | raise SynapseError(404, "Event not found.", errcode=Codes.NOT_FOUND) |
708 | 714 | |
709 | 715 | time_now = self.clock.time_msec() |
710 | results["events_before"] = await self._event_serializer.serialize_events( | |
711 | results["events_before"], time_now, bundle_aggregations=True | |
712 | ) | |
713 | results["event"] = await self._event_serializer.serialize_event( | |
714 | results["event"], time_now, bundle_aggregations=True | |
715 | ) | |
716 | results["events_after"] = await self._event_serializer.serialize_events( | |
717 | results["events_after"], time_now, bundle_aggregations=True | |
718 | ) | |
719 | results["state"] = await self._event_serializer.serialize_events( | |
716 | aggregations = results.pop("aggregations", None) | |
717 | results["events_before"] = self._event_serializer.serialize_events( | |
718 | results["events_before"], time_now, bundle_aggregations=aggregations | |
719 | ) | |
720 | results["event"] = self._event_serializer.serialize_event( | |
721 | results["event"], time_now, bundle_aggregations=aggregations | |
722 | ) | |
723 | results["events_after"] = self._event_serializer.serialize_events( | |
724 | results["events_after"], time_now, bundle_aggregations=aggregations | |
725 | ) | |
726 | results["state"] = self._event_serializer.serialize_events( | |
720 | 727 | results["state"], time_now |
721 | 728 | ) |
722 | 729 |
16 | 16 | from typing import ( |
17 | 17 | TYPE_CHECKING, |
18 | 18 | Any, |
19 | Awaitable, | |
20 | 19 | Callable, |
21 | 20 | Dict, |
22 | 21 | Iterable, |
394 | 393 | """ |
395 | 394 | invited = {} |
396 | 395 | for room in rooms: |
397 | invite = await self._event_serializer.serialize_event( | |
396 | invite = self._event_serializer.serialize_event( | |
398 | 397 | room.invite, |
399 | 398 | time_now, |
400 | 399 | token_id=token_id, |
431 | 430 | """ |
432 | 431 | knocked = {} |
433 | 432 | for room in rooms: |
434 | knock = await self._event_serializer.serialize_event( | |
433 | knock = self._event_serializer.serialize_event( | |
435 | 434 | room.knock, |
436 | 435 | time_now, |
437 | 436 | token_id=token_id, |
524 | 523 | The room, encoded in our response format |
525 | 524 | """ |
526 | 525 | |
527 | def serialize(events: Iterable[EventBase]) -> Awaitable[List[JsonDict]]: | |
526 | def serialize( | |
527 | events: Iterable[EventBase], | |
528 | aggregations: Optional[Dict[str, Dict[str, Any]]] = None, | |
529 | ) -> List[JsonDict]: | |
528 | 530 | return self._event_serializer.serialize_events( |
529 | 531 | events, |
530 | 532 | time_now=time_now, |
531 | # Don't bother to bundle aggregations if the timeline is unlimited, | |
532 | # as clients will have all the necessary information. | |
533 | # bundle_aggregations=room.timeline.limited, | |
534 | # | |
535 | # richvdh 2021-12-15: disable this temporarily as it has too high an | |
536 | # overhead for initialsyncs. We need to figure out a way that the | |
537 | # bundling can be done *before* the events are stored in the | |
538 | # SyncResponseCache so that this part can be synchronous. | |
539 | # | |
540 | # Ensure to re-enable the test at tests/rest/client/test_relations.py::RelationsTestCase.test_bundled_aggregations. | |
541 | bundle_aggregations=False, | |
533 | bundle_aggregations=aggregations, | |
542 | 534 | token_id=token_id, |
543 | 535 | event_format=event_formatter, |
544 | 536 | only_event_fields=only_fields, |
560 | 552 | event.room_id, |
561 | 553 | ) |
562 | 554 | |
563 | serialized_state = await serialize(state_events) | |
564 | serialized_timeline = await serialize(timeline_events) | |
555 | serialized_state = serialize(state_events) | |
556 | serialized_timeline = serialize( | |
557 | timeline_events, room.timeline.bundled_aggregations | |
558 | ) | |
565 | 559 | |
566 | 560 | account_data = room.account_data |
567 | 561 |
342 | 342 | """ |
343 | 343 | |
344 | 344 | |
345 | @attr.s(slots=True) | |
345 | @attr.s(slots=True, auto_attribs=True) | |
346 | 346 | class ReadableFileWrapper: |
347 | 347 | """Wrapper that allows reading a file in chunks, yielding to the reactor, |
348 | 348 | and writing to a callback. |
353 | 353 | |
354 | 354 | CHUNK_SIZE = 2 ** 14 |
355 | 355 | |
356 | clock = attr.ib(type=Clock) | |
357 | path = attr.ib(type=str) | |
356 | clock: Clock | |
357 | path: str | |
358 | 358 | |
359 | 359 | async def write_chunks_to(self, callback: Callable[[bytes], None]) -> None: |
360 | 360 | """Reads the file in chunks and calls the callback with each chunk.""" |
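The hunk above is part of a wider sweep converting `attr.ib(type=...)` declarations to annotation-based fields. A minimal sketch of the conversion, assuming the `attrs` package is installed (class and field names are illustrative):

```python
import attr


# With auto_attribs=True, attrs reads field names, types, and defaults
# from plain class annotations, so `path = attr.ib(type=str)` becomes
# `path: str`.
@attr.s(slots=True, auto_attribs=True)
class FileWrapper:
    clock: float
    path: str
    chunk_size: int = 2 ** 14


wrapper = FileWrapper(clock=0.0, path="/tmp/example")
```

The generated `__init__`, `slots` behaviour, and field order are unchanged by the conversion; only the declaration syntax differs.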
32 | 32 | class OEmbedResult: |
33 | 33 | # The Open Graph result (converted from the oEmbed result). |
34 | 34 | open_graph_result: JsonDict |
35 | # The author_name of the oEmbed result | |
36 | author_name: Optional[str] | |
35 | 37 | # Number of milliseconds to cache the content, according to the oEmbed response. |
36 | 38 | # |
37 | 39 | # This will be None if no cache-age is provided in the oEmbed response (or |
153 | 155 | "og:url": url, |
154 | 156 | } |
155 | 157 | |
156 | # Use either title or author's name as the title. | |
157 | title = oembed.get("title") or oembed.get("author_name") | |
158 | title = oembed.get("title") | |
158 | 159 | if title: |
159 | 160 | open_graph_response["og:title"] = title |
161 | ||
162 | author_name = oembed.get("author_name") | |
160 | 163 | |
161 | 164 | # Use the provider name as the site.
162 | 165 | provider_name = oembed.get("provider_name") |
192 | 195 | # Trap any exception and let the code follow as usual. |
193 | 196 | logger.warning("Error parsing oEmbed metadata from %s: %r", url, e) |
194 | 197 | open_graph_response = {} |
198 | author_name = None | |
195 | 199 | cache_age = None |
196 | 200 | |
197 | return OEmbedResult(open_graph_response, cache_age) | |
201 | return OEmbedResult(open_graph_response, author_name, cache_age) | |
198 | 202 | |
199 | 203 | |
200 | 204 | def _fetch_urls(tree: "etree.Element", tag_name: str) -> List[str]: |
261 | 261 | |
262 | 262 | # The number of milliseconds that the response should be considered valid. |
263 | 263 | expiration_ms = media_info.expires |
264 | author_name: Optional[str] = None | |
264 | 265 | |
265 | 266 | if _is_media(media_info.media_type): |
266 | 267 | file_id = media_info.filesystem_id |
293 | 294 | # Check if this HTML document points to oEmbed information and |
294 | 295 | # defer to that. |
295 | 296 | oembed_url = self._oembed.autodiscover_from_html(tree) |
296 | og = {} | |
297 | og_from_oembed: JsonDict = {} | |
297 | 298 | if oembed_url: |
298 | 299 | oembed_info = await self._download_url(oembed_url, user) |
299 | og, expiration_ms = await self._handle_oembed_response( | |
300 | ( | |
301 | og_from_oembed, | |
302 | author_name, | |
303 | expiration_ms, | |
304 | ) = await self._handle_oembed_response( | |
300 | 305 | url, oembed_info, expiration_ms |
301 | 306 | ) |
302 | 307 | |
303 | # If there was no oEmbed URL (or oEmbed parsing failed), attempt | |
304 | # to generate the Open Graph information from the HTML. | |
305 | if not oembed_url or not og: | |
306 | og = parse_html_to_open_graph(tree, media_info.uri) | |
308 | # Parse Open Graph information from the HTML in case the oEmbed | |
309 | # response failed or is incomplete. | |
310 | og_from_html = parse_html_to_open_graph(tree, media_info.uri) | |
311 | ||
312 | # Compile the Open Graph response by using the scraped | |
313 | # information from the HTML and overlaying any information | |
314 | # from the oEmbed response. | |
315 | og = {**og_from_html, **og_from_oembed} | |
307 | 316 | |
308 | 317 | await self._precache_image_url(user, media_info, og) |
309 | 318 | else: |
311 | 320 | |
312 | 321 | elif oembed_url: |
313 | 322 | # Handle the oEmbed information. |
314 | og, expiration_ms = await self._handle_oembed_response( | |
323 | og, author_name, expiration_ms = await self._handle_oembed_response( | |
315 | 324 | url, media_info, expiration_ms |
316 | 325 | ) |
317 | 326 | await self._precache_image_url(user, media_info, og) |
319 | 328 | else: |
320 | 329 | logger.warning("Failed to find any OG data in %s", url) |
321 | 330 | og = {} |
331 | ||
332 | # If we don't have a title but do have an author_name, use it as | |
333 | # the title. | |
334 | if not og.get("og:title") and author_name: | |
335 | og["og:title"] = author_name | |
322 | 336 | |
323 | 337 | # filter out any stupidly long values |
324 | 338 | keys_to_remove = [] |
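The hunks above stop using the oEmbed `author_name` as a title substitute during parsing; instead it is returned alongside the Open Graph data and only applied as a fallback at the end, after HTML and oEmbed results are merged. A simplified sketch of that flow (function names are hypothetical):

```python
from typing import Dict, Optional, Tuple


def parse_oembed(oembed: Dict[str, str]) -> Tuple[Dict[str, str], Optional[str]]:
    # Only a real oEmbed title becomes og:title; author_name is carried
    # separately instead of being folded into the title here.
    og: Dict[str, str] = {}
    title = oembed.get("title")
    if title:
        og["og:title"] = title
    return og, oembed.get("author_name")


def finalise(og: Dict[str, str], author_name: Optional[str]) -> Dict[str, str]:
    # Fallback applied last, once all Open Graph sources are combined.
    if not og.get("og:title") and author_name:
        og["og:title"] = author_name
    return og


og, author = parse_oembed({"author_name": "Alice"})
result = finalise(og, author)
```

Deferring the fallback means an `og:title` scraped from the page HTML is no longer shadowed by the oEmbed author's name.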
483 | 497 | |
484 | 498 | async def _handle_oembed_response( |
485 | 499 | self, url: str, media_info: MediaInfo, expiration_ms: int |
486 | ) -> Tuple[JsonDict, int]: | |
500 | ) -> Tuple[JsonDict, Optional[str], int]: | |
487 | 501 | """ |
488 | 502 | Parse the downloaded oEmbed info. |
489 | 503 | |
496 | 510 | Returns: |
497 | 511 | A tuple of: |
498 | 512 | The Open Graph dictionary, if the oEmbed info can be parsed. |
513 | The author name if it could be retrieved from oEmbed. | |
499 | 514 | The (possibly updated) length of time, in milliseconds, the media is valid for. |
500 | 515 | """ |
501 | 516 | # If JSON was not returned, there's nothing to do. |
502 | 517 | if not _is_json(media_info.media_type): |
503 | return {}, expiration_ms | |
518 | return {}, None, expiration_ms | |
504 | 519 | |
505 | 520 | with open(media_info.filename, "rb") as file: |
506 | 521 | body = file.read() |
512 | 527 | if open_graph_result and oembed_response.cache_age is not None: |
513 | 528 | expiration_ms = oembed_response.cache_age |
514 | 529 | |
515 | return open_graph_result, expiration_ms | |
530 | return open_graph_result, oembed_response.author_name, expiration_ms | |
516 | 531 | |
517 | 532 | def _start_expire_url_cache_data(self) -> Deferred: |
518 | 533 | return run_as_background_process( |
758 | 758 | |
759 | 759 | @cache_in_self |
760 | 760 | def get_event_client_serializer(self) -> EventClientSerializer: |
761 | return EventClientSerializer(self) | |
761 | return EventClientSerializer() | |
762 | 762 | |
763 | 763 | @cache_in_self |
764 | 764 | def get_password_policy_handler(self) -> PasswordPolicyHandler: |
44 | 44 | from synapse.events import EventBase |
45 | 45 | from synapse.events.snapshot import EventContext |
46 | 46 | from synapse.logging.context import ContextResourceUsage |
47 | from synapse.logging.utils import log_function | |
48 | 47 | from synapse.state import v1, v2 |
49 | 48 | from synapse.storage.databases.main.events_worker import EventRedactBehaviour |
50 | 49 | from synapse.storage.roommember import ProfileInfo |
449 | 448 | return {key: state_map[ev_id] for key, ev_id in new_state.items()} |
450 | 449 | |
451 | 450 | |
452 | @attr.s(slots=True) | |
451 | @attr.s(slots=True, auto_attribs=True) | |
453 | 452 | class _StateResMetrics: |
454 | 453 | """Keeps track of some usage metrics about state res.""" |
455 | 454 | |
456 | 455 | # System and User CPU time, in seconds |
457 | cpu_time = attr.ib(type=float, default=0.0) | |
456 | cpu_time: float = 0.0 | |
458 | 457 | |
459 | 458 | # time spent on database transactions (excluding scheduling time). This roughly |
460 | 459 | # corresponds to the amount of work done on the db server, excluding event fetches. |
461 | db_time = attr.ib(type=float, default=0.0) | |
460 | db_time: float = 0.0 | |
462 | 461 | |
463 | 462 | # number of events fetched from the db. |
464 | db_events = attr.ib(type=int, default=0) | |
463 | db_events: int = 0 | |
465 | 464 | |
466 | 465 | |
467 | 466 | _biggest_room_by_cpu_counter = Counter( |
511 | 510 | |
512 | 511 | self.clock.looping_call(self._report_metrics, 120 * 1000) |
513 | 512 | |
514 | @log_function | |
515 | 513 | async def resolve_state_groups( |
516 | 514 | self, |
517 | 515 | room_id: str, |
142 | 142 | return db_conn |
143 | 143 | |
144 | 144 | |
145 | @attr.s(slots=True) | |
145 | @attr.s(slots=True, auto_attribs=True) | |
146 | 146 | class LoggingDatabaseConnection: |
147 | 147 | """A wrapper around a database connection that returns `LoggingTransaction` |
148 | 148 | as its cursor class. |
150 | 150 | This is mainly used on startup to ensure that queries get logged correctly |
151 | 151 | """ |
152 | 152 | |
153 | conn = attr.ib(type=Connection) | |
154 | engine = attr.ib(type=BaseDatabaseEngine) | |
155 | default_txn_name = attr.ib(type=str) | |
153 | conn: Connection | |
154 | engine: BaseDatabaseEngine | |
155 | default_txn_name: str | |
156 | 156 | |
157 | 157 | def cursor( |
158 | 158 | self, *, txn_name=None, after_callbacks=None, exception_callbacks=None |
933 | 933 | txn.execute(sql, vals) |
934 | 934 | |
935 | 935 | async def simple_insert_many( |
936 | self, table: str, values: List[Dict[str, Any]], desc: str | |
937 | ) -> None: | |
938 | """Executes an INSERT query on the named table. | |
939 | ||
940 | The input is given as a list of dicts, with one dict per row. | |
941 | Generally simple_insert_many_values should be preferred for new code. | |
942 | ||
943 | Args: | |
944 | table: string giving the table name | |
945 | values: dict of new column names and values for them | |
946 | desc: description of the transaction, for logging and metrics | |
947 | """ | |
948 | await self.runInteraction(desc, self.simple_insert_many_txn, table, values) | |
949 | ||
950 | @staticmethod | |
951 | def simple_insert_many_txn( | |
952 | txn: LoggingTransaction, table: str, values: List[Dict[str, Any]] | |
953 | ) -> None: | |
954 | """Executes an INSERT query on the named table. | |
955 | ||
956 | The input is given as a list of dicts, with one dict per row. | |
957 | Generally simple_insert_many_values_txn should be preferred for new code. | |
958 | ||
959 | Args: | |
960 | txn: The transaction to use. | |
961 | table: string giving the table name | |
962 | values: dict of new column names and values for them | |
963 | """ | |
964 | if not values: | |
965 | return | |
966 | ||
967 | # This is a *slight* abomination to get a list of tuples of key names | |
968 | # and a list of tuples of value names. | |
969 | # | |
970 | # i.e. [{"a": 1, "b": 2}, {"c": 3, "d": 4}] | |
971 | # => [("a", "b",), ("c", "d",)] and [(1, 2,), (3, 4,)] | |
972 | # | |
973 | # The sort is to ensure that we don't rely on dictionary iteration | |
974 | # order. | |
975 | keys, vals = zip( | |
976 | *(zip(*(sorted(i.items(), key=lambda kv: kv[0]))) for i in values if i) | |
977 | ) | |
978 | ||
979 | for k in keys: | |
980 | if k != keys[0]: | |
981 | raise RuntimeError("All items must have the same keys") | |
982 | ||
983 | return DatabasePool.simple_insert_many_values_txn(txn, table, keys[0], vals) | |
984 | ||
985 | async def simple_insert_many_values( | |
986 | 936 | self, |
987 | 937 | table: str, |
988 | 938 | keys: Collection[str], |
1001 | 951 | desc: description of the transaction, for logging and metrics |
1002 | 952 | """ |
1003 | 953 | await self.runInteraction( |
1004 | desc, self.simple_insert_many_values_txn, table, keys, values | |
954 | desc, self.simple_insert_many_txn, table, keys, values | |
1005 | 955 | ) |
1006 | 956 | |
1007 | 957 | @staticmethod |
1008 | def simple_insert_many_values_txn( | |
958 | def simple_insert_many_txn( | |
1009 | 959 | txn: LoggingTransaction, |
1010 | 960 | table: str, |
1011 | 961 | keys: Collection[str], |
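The hunk above changes `simple_insert_many` to take one shared tuple of column names plus a value tuple per row, rather than a dict per row. A rough sketch of why that shape maps directly onto a parameterised batch INSERT (helper name is hypothetical, placeholder style assumes a `?`-style DB-API driver):

```python
from typing import Any, Collection, Iterable, List, Tuple


def build_insert_many(
    table: str, keys: Collection[str], values: Iterable[Tuple[Any, ...]]
) -> Tuple[str, List[Tuple[Any, ...]]]:
    # One statement whose column list is shared by every row; the rows
    # are plain tuples in the same column order, ready for executemany.
    columns = ", ".join(keys)
    placeholders = ", ".join("?" for _ in keys)
    sql = f"INSERT INTO {table} ({columns}) VALUES ({placeholders})"
    return sql, list(values)


sql, rows = build_insert_many(
    "room_alias_servers",
    ("room_alias", "server"),
    [("#room:a", "one.example"), ("#room:a", "two.example")],
)
```

With explicit keys there is no need for the old per-row dict sorting and key-consistency check, since every tuple is positionally bound to the same column list.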
449 | 449 | async def add_account_data_for_user( |
450 | 450 | self, user_id: str, account_data_type: str, content: JsonDict |
451 | 451 | ) -> int: |
452 | """Add some account_data to a room for a user. | |
452 | """Add some global account_data for a user. | |
453 | 453 | |
454 | 454 | Args: |
455 | 455 | user_id: The user to add a tag for. |
535 | 535 | self.db_pool.simple_insert_many_txn( |
536 | 536 | txn, |
537 | 537 | table="ignored_users", |
538 | keys=("ignorer_user_id", "ignored_user_id"), | |
538 | 539 | values=[ |
539 | {"ignorer_user_id": user_id, "ignored_user_id": u} | |
540 | for u in currently_ignored_users - previously_ignored_users | |
540 | (user_id, u) for u in currently_ignored_users - previously_ignored_users | |
541 | 541 | ], |
542 | 542 | ) |
543 | 543 |
431 | 431 | self.db_pool.simple_insert_many_txn( |
432 | 432 | txn, |
433 | 433 | table="device_federation_outbox", |
434 | keys=( | |
435 | "destination", | |
436 | "stream_id", | |
437 | "queued_ts", | |
438 | "messages_json", | |
439 | "instance_name", | |
440 | ), | |
434 | 441 | values=[ |
435 | { | |
436 | "destination": destination, | |
437 | "stream_id": stream_id, | |
438 | "queued_ts": now_ms, | |
439 | "messages_json": json_encoder.encode(edu), | |
440 | "instance_name": self._instance_name, | |
441 | } | |
442 | ( | |
443 | destination, | |
444 | stream_id, | |
445 | now_ms, | |
446 | json_encoder.encode(edu), | |
447 | self._instance_name, | |
448 | ) | |
442 | 449 | for destination, edu in remote_messages_by_destination.items() |
443 | 450 | ], |
444 | 451 | ) |
570 | 577 | self.db_pool.simple_insert_many_txn( |
571 | 578 | txn, |
572 | 579 | table="device_inbox", |
580 | keys=("user_id", "device_id", "stream_id", "message_json", "instance_name"), | |
573 | 581 | values=[ |
574 | { | |
575 | "user_id": user_id, | |
576 | "device_id": device_id, | |
577 | "stream_id": stream_id, | |
578 | "message_json": message_json, | |
579 | "instance_name": self._instance_name, | |
580 | } | |
582 | (user_id, device_id, stream_id, message_json, self._instance_name) | |
581 | 583 | for user_id, messages_by_device in local_by_user_then_device.items() |
582 | 584 | for device_id, message_json in messages_by_device.items() |
583 | 585 | ], |
52 | 52 | from synapse.server import HomeServer |
53 | 53 | |
54 | 54 | logger = logging.getLogger(__name__) |
55 | issue_8631_logger = logging.getLogger("synapse.8631_debug") | |
55 | 56 | |
56 | 57 | DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES = ( |
57 | 58 | "drop_device_list_streams_non_unique_indexes" |
227 | 228 | # Return an empty list if there are no updates |
228 | 229 | if not updates: |
229 | 230 | return now_stream_id, [] |
231 | ||
232 | if issue_8631_logger.isEnabledFor(logging.DEBUG): | |
233 | data = {(user, device): stream_id for user, device, stream_id, _ in updates} | |
234 | issue_8631_logger.debug( | |
235 | "device updates need to be sent to %s: %s", destination, data | |
236 | ) | |
230 | 237 | |
231 | 238 | # get the cross-signing keys of the users in the list, so that we can |
232 | 239 | # determine which of the device changes were cross-signing keys |
364 | 371 | # and remove the length budgeting above. |
365 | 372 | results.append(("org.matrix.signing_key_update", result)) |
366 | 373 | |
374 | if issue_8631_logger.isEnabledFor(logging.DEBUG): | |
375 | for (user_id, edu) in results: | |
376 | issue_8631_logger.debug( | |
377 | "device update to %s for %s from %s to %s: %s", | |
378 | destination, | |
379 | user_id, | |
380 | from_stream_id, | |
381 | last_processed_stream_id, | |
382 | edu, | |
383 | ) | |
384 | ||
367 | 385 | return last_processed_stream_id, results |
368 | 386 | |
369 | 387 | def _get_device_updates_by_remote_txn( |
780 | 798 | @cached(max_entries=10000) |
781 | 799 | async def get_device_list_last_stream_id_for_remote( |
782 | 800 | self, user_id: str |
783 | ) -> Optional[Any]: | |
801 | ) -> Optional[str]: | |
784 | 802 | """Get the last stream_id we got for a user. May be None if we haven't |
785 | 803 | got any information for them. |
786 | 804 | """ |
796 | 814 | cached_method_name="get_device_list_last_stream_id_for_remote", |
797 | 815 | list_name="user_ids", |
798 | 816 | ) |
799 | async def get_device_list_last_stream_id_for_remotes(self, user_ids: Iterable[str]): | |
817 | async def get_device_list_last_stream_id_for_remotes( | |
818 | self, user_ids: Iterable[str] | |
819 | ) -> Dict[str, Optional[str]]: | |
800 | 820 | rows = await self.db_pool.simple_select_many_batch( |
801 | 821 | table="device_lists_remote_extremeties", |
802 | 822 | column="user_id", |
1383 | 1403 | content: JsonDict, |
1384 | 1404 | stream_id: str, |
1385 | 1405 | ) -> None: |
1406 | """Delete, update or insert a cache entry for this (user, device) pair.""" | |
1386 | 1407 | if content.get("deleted"): |
1387 | 1408 | self.db_pool.simple_delete_txn( |
1388 | 1409 | txn, |
1442 | 1463 | def _update_remote_device_list_cache_txn( |
1443 | 1464 | self, txn: LoggingTransaction, user_id: str, devices: List[dict], stream_id: int |
1444 | 1465 | ) -> None: |
1466 | """Replace the list of cached devices for this user with the given list.""" | |
1445 | 1467 | self.db_pool.simple_delete_txn( |
1446 | 1468 | txn, table="device_lists_remote_cache", keyvalues={"user_id": user_id} |
1447 | 1469 | ) |
1449 | 1471 | self.db_pool.simple_insert_many_txn( |
1450 | 1472 | txn, |
1451 | 1473 | table="device_lists_remote_cache", |
1474 | keys=("user_id", "device_id", "content"), | |
1452 | 1475 | values=[ |
1453 | { | |
1454 | "user_id": user_id, | |
1455 | "device_id": content["device_id"], | |
1456 | "content": json_encoder.encode(content), | |
1457 | } | |
1476 | (user_id, content["device_id"], json_encoder.encode(content)) | |
1458 | 1477 | for content in devices |
1459 | 1478 | ], |
1460 | 1479 | ) |
1542 | 1561 | self.db_pool.simple_insert_many_txn( |
1543 | 1562 | txn, |
1544 | 1563 | table="device_lists_stream", |
1564 | keys=("stream_id", "user_id", "device_id"), | |
1545 | 1565 | values=[ |
1546 | {"stream_id": stream_id, "user_id": user_id, "device_id": device_id} | |
1566 | (stream_id, user_id, device_id) | |
1547 | 1567 | for stream_id, device_id in zip(stream_ids, device_ids) |
1548 | 1568 | ], |
1549 | 1569 | ) |
1570 | 1590 | self.db_pool.simple_insert_many_txn( |
1571 | 1591 | txn, |
1572 | 1592 | table="device_lists_outbound_pokes", |
1593 | keys=( | |
1594 | "destination", | |
1595 | "stream_id", | |
1596 | "user_id", | |
1597 | "device_id", | |
1598 | "sent", | |
1599 | "ts", | |
1600 | "opentracing_context", | |
1601 | ), | |
1573 | 1602 | values=[ |
1574 | { | |
1575 | "destination": destination, | |
1576 | "stream_id": next(next_stream_id), | |
1577 | "user_id": user_id, | |
1578 | "device_id": device_id, | |
1579 | "sent": False, | |
1580 | "ts": now, | |
1581 | "opentracing_context": json_encoder.encode(context) | |
1603 | ( | |
1604 | destination, | |
1605 | next(next_stream_id), | |
1606 | user_id, | |
1607 | device_id, | |
1608 | False, | |
1609 | now, | |
1610 | json_encoder.encode(context) | |
1582 | 1611 | if whitelisted_homeserver(destination) |
1583 | 1612 | else "{}", |
1584 | } | |
1613 | ) | |
1585 | 1614 | for destination in hosts |
1586 | 1615 | for device_id in device_ids |
1587 | 1616 | ], |
111 | 111 | self.db_pool.simple_insert_many_txn( |
112 | 112 | txn, |
113 | 113 | table="room_alias_servers", |
114 | values=[ | |
115 | {"room_alias": room_alias.to_string(), "server": server} | |
116 | for server in servers | |
117 | ], | |
114 | keys=("room_alias", "server"), | |
115 | values=[(room_alias.to_string(), server) for server in servers], | |
118 | 116 | ) |
119 | 117 | |
120 | 118 | self._invalidate_cache_and_stream( |
109 | 109 | values = [] |
110 | 110 | for (room_id, session_id, room_key) in room_keys: |
111 | 111 | values.append( |
112 | { | |
113 | "user_id": user_id, | |
114 | "version": version_int, | |
115 | "room_id": room_id, | |
116 | "session_id": session_id, | |
117 | "first_message_index": room_key["first_message_index"], | |
118 | "forwarded_count": room_key["forwarded_count"], | |
119 | "is_verified": room_key["is_verified"], | |
120 | "session_data": json_encoder.encode(room_key["session_data"]), | |
121 | } | |
112 | ( | |
113 | user_id, | |
114 | version_int, | |
115 | room_id, | |
116 | session_id, | |
117 | room_key["first_message_index"], | |
118 | room_key["forwarded_count"], | |
119 | room_key["is_verified"], | |
120 | json_encoder.encode(room_key["session_data"]), | |
121 | ) | |
122 | 122 | ) |
123 | 123 | log_kv( |
124 | 124 | { |
130 | 130 | ) |
131 | 131 | |
132 | 132 | await self.db_pool.simple_insert_many( |
133 | table="e2e_room_keys", values=values, desc="add_e2e_room_keys" | |
133 | table="e2e_room_keys", | |
134 | keys=( | |
135 | "user_id", | |
136 | "version", | |
137 | "room_id", | |
138 | "session_id", | |
139 | "first_message_index", | |
140 | "forwarded_count", | |
141 | "is_verified", | |
142 | "session_data", | |
143 | ), | |
144 | values=values, | |
145 | desc="add_e2e_room_keys", | |
134 | 146 | ) |
135 | 147 | |
136 | 148 | @trace |
49 | 49 | from synapse.server import HomeServer |
50 | 50 | |
51 | 51 | |
52 | @attr.s(slots=True) | |
52 | @attr.s(slots=True, auto_attribs=True) | |
53 | 53 | class DeviceKeyLookupResult: |
54 | 54 | """The type returned by get_e2e_device_keys_and_signatures""" |
55 | 55 | |
56 | display_name = attr.ib(type=Optional[str]) | |
56 | display_name: Optional[str] | |
57 | 57 | |
58 | 58 | # the key data from e2e_device_keys_json. Typically includes fields like |
59 | 59 | # "algorithm", "keys" (including the curve25519 identity key and the ed25519 signing |
60 | 60 | # key) and "signatures" (a map from (user id) to (key id/device_id) to signature.) |
61 | keys = attr.ib(type=Optional[JsonDict]) | |
61 | keys: Optional[JsonDict] | |
62 | 62 | |
63 | 63 | |
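The `@attr.s(slots=True)` → `@attr.s(slots=True, auto_attribs=True)` hunks swap explicit `attr.ib(type=...)` declarations for plain annotations. A sketch showing the two spellings produce equivalent classes (class names are illustrative):

```python
from typing import Optional

import attr

@attr.s(slots=True)
class OldStyle:
    # Pre-refactor spelling: the type is carried inside attr.ib().
    display_name = attr.ib(type=Optional[str])
    keys = attr.ib(type=Optional[dict])

@attr.s(slots=True, auto_attribs=True)
class NewStyle:
    # Post-refactor spelling: auto_attribs collects plain annotations.
    display_name: Optional[str]
    keys: Optional[dict]

old = OldStyle(display_name="ed25519:abc", keys=None)
new = NewStyle(display_name="ed25519:abc", keys=None)
```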
64 | 64 | class EndToEndKeyBackgroundStore(SQLBaseStore): |
386 | 386 | self.db_pool.simple_insert_many_txn( |
387 | 387 | txn, |
388 | 388 | table="e2e_one_time_keys_json", |
389 | keys=( | |
390 | "user_id", | |
391 | "device_id", | |
392 | "algorithm", | |
393 | "key_id", | |
394 | "ts_added_ms", | |
395 | "key_json", | |
396 | ), | |
389 | 397 | values=[ |
390 | { | |
391 | "user_id": user_id, | |
392 | "device_id": device_id, | |
393 | "algorithm": algorithm, | |
394 | "key_id": key_id, | |
395 | "ts_added_ms": time_now, | |
396 | "key_json": json_bytes, | |
397 | } | |
398 | (user_id, device_id, algorithm, key_id, time_now, json_bytes) | |
398 | 399 | for algorithm, key_id, json_bytes in new_keys |
399 | 400 | ], |
400 | 401 | ) |
1185 | 1186 | """ |
1186 | 1187 | await self.db_pool.simple_insert_many( |
1187 | 1188 | "e2e_cross_signing_signatures", |
1188 | [ | |
1189 | { | |
1190 | "user_id": user_id, | |
1191 | "key_id": item.signing_key_id, | |
1192 | "target_user_id": item.target_user_id, | |
1193 | "target_device_id": item.target_device_id, | |
1194 | "signature": item.signature, | |
1195 | } | |
1189 | keys=( | |
1190 | "user_id", | |
1191 | "key_id", | |
1192 | "target_user_id", | |
1193 | "target_device_id", | |
1194 | "signature", | |
1195 | ), | |
1196 | values=[ | |
1197 | ( | |
1198 | user_id, | |
1199 | item.signing_key_id, | |
1200 | item.target_user_id, | |
1201 | item.target_device_id, | |
1202 | item.signature, | |
1203 | ) | |
1196 | 1204 | for item in signatures |
1197 | 1205 | ], |
1198 | "add_e2e_signing_key", | |
1199 | ) | |
1206 | desc="add_e2e_signing_key", | |
1207 | ) |
874 | 874 | self.db_pool.simple_insert_many_txn( |
875 | 875 | txn, |
876 | 876 | table="event_push_summary", |
877 | keys=( | |
878 | "user_id", | |
879 | "room_id", | |
880 | "notif_count", | |
881 | "unread_count", | |
882 | "stream_ordering", | |
883 | ), | |
877 | 884 | values=[ |
878 | { | |
879 | "user_id": user_id, | |
880 | "room_id": room_id, | |
881 | "notif_count": summary.notif_count, | |
882 | "unread_count": summary.unread_count, | |
883 | "stream_ordering": summary.stream_ordering, | |
884 | } | |
885 | ( | |
886 | user_id, | |
887 | room_id, | |
888 | summary.notif_count, | |
889 | summary.unread_count, | |
890 | summary.stream_ordering, | |
891 | ) | |
885 | 892 | for ((user_id, room_id), summary) in summaries.items() |
886 | 893 | if summary.old_user_id is None |
887 | 894 | ], |
38 | 38 | from synapse.crypto.event_signing import compute_event_reference_hash |
39 | 39 | from synapse.events import EventBase # noqa: F401 |
40 | 40 | from synapse.events.snapshot import EventContext # noqa: F401 |
41 | from synapse.logging.utils import log_function | |
42 | 41 | from synapse.storage._base import db_to_json, make_in_list_sql_clause |
43 | 42 | from synapse.storage.database import ( |
44 | 43 | DatabasePool, |
68 | 67 | ) |
69 | 68 | |
70 | 69 | |
71 | @attr.s(slots=True) | |
70 | @attr.s(slots=True, auto_attribs=True) | |
72 | 71 | class DeltaState: |
73 | 72 | """Deltas to use to update the `current_state_events` table. |
74 | 73 | |
79 | 78 | should e.g. be removed from `current_state_events` table. |
80 | 79 | """ |
81 | 80 | |
82 | to_delete = attr.ib(type=List[Tuple[str, str]]) | |
83 | to_insert = attr.ib(type=StateMap[str]) | |
84 | no_longer_in_room = attr.ib(type=bool, default=False) | |
81 | to_delete: List[Tuple[str, str]] | |
82 | to_insert: StateMap[str] | |
83 | no_longer_in_room: bool = False | |
85 | 84 | |
86 | 85 | |
87 | 86 | class PersistEventsStore: |
327 | 326 | |
328 | 327 | return existing_prevs |
329 | 328 | |
330 | @log_function | |
331 | 329 | def _persist_events_txn( |
332 | 330 | self, |
333 | 331 | txn: LoggingTransaction, |
441 | 439 | self.db_pool.simple_insert_many_txn( |
442 | 440 | txn, |
443 | 441 | table="event_auth", |
442 | keys=("event_id", "room_id", "auth_id"), | |
444 | 443 | values=[ |
445 | { | |
446 | "event_id": event.event_id, | |
447 | "room_id": event.room_id, | |
448 | "auth_id": auth_id, | |
449 | } | |
444 | (event.event_id, event.room_id, auth_id) | |
450 | 445 | for event in events |
451 | 446 | for auth_id in event.auth_event_ids() |
452 | 447 | if event.is_state() |
674 | 669 | db_pool.simple_insert_many_txn( |
675 | 670 | txn, |
676 | 671 | table="event_auth_chains", |
672 | keys=("event_id", "chain_id", "sequence_number"), | |
677 | 673 | values=[ |
678 | {"event_id": event_id, "chain_id": c_id, "sequence_number": seq} | |
674 | (event_id, c_id, seq) | |
679 | 675 | for event_id, (c_id, seq) in new_chain_tuples.items() |
680 | 676 | ], |
681 | 677 | ) |
781 | 777 | db_pool.simple_insert_many_txn( |
782 | 778 | txn, |
783 | 779 | table="event_auth_chain_links", |
780 | keys=( | |
781 | "origin_chain_id", | |
782 | "origin_sequence_number", | |
783 | "target_chain_id", | |
784 | "target_sequence_number", | |
785 | ), | |
784 | 786 | values=[ |
785 | { | |
786 | "origin_chain_id": source_id, | |
787 | "origin_sequence_number": source_seq, | |
788 | "target_chain_id": target_id, | |
789 | "target_sequence_number": target_seq, | |
790 | } | |
787 | (source_id, source_seq, target_id, target_seq) | |
791 | 788 | for ( |
792 | 789 | source_id, |
793 | 790 | source_seq, |
942 | 939 | txn_id = getattr(event.internal_metadata, "txn_id", None) |
943 | 940 | if token_id and txn_id: |
944 | 941 | to_insert.append( |
945 | { | |
946 | "event_id": event.event_id, | |
947 | "room_id": event.room_id, | |
948 | "user_id": event.sender, | |
949 | "token_id": token_id, | |
950 | "txn_id": txn_id, | |
951 | "inserted_ts": self._clock.time_msec(), | |
952 | } | |
942 | ( | |
943 | event.event_id, | |
944 | event.room_id, | |
945 | event.sender, | |
946 | token_id, | |
947 | txn_id, | |
948 | self._clock.time_msec(), | |
949 | ) | |
953 | 950 | ) |
954 | 951 | |
955 | 952 | if to_insert: |
956 | 953 | self.db_pool.simple_insert_many_txn( |
957 | 954 | txn, |
958 | 955 | table="event_txn_id", |
956 | keys=( | |
957 | "event_id", | |
958 | "room_id", | |
959 | "user_id", | |
960 | "token_id", | |
961 | "txn_id", | |
962 | "inserted_ts", | |
963 | ), | |
959 | 964 | values=to_insert, |
960 | 965 | ) |
961 | 966 | |
1160 | 1165 | self.db_pool.simple_insert_many_txn( |
1161 | 1166 | txn, |
1162 | 1167 | table="event_forward_extremities", |
1168 | keys=("event_id", "room_id"), | |
1163 | 1169 | values=[ |
1164 | {"event_id": ev_id, "room_id": room_id} | |
1170 | (ev_id, room_id) | |
1165 | 1171 | for room_id, new_extrem in new_forward_extremities.items() |
1166 | 1172 | for ev_id in new_extrem |
1167 | 1173 | ], |
1173 | 1179 | self.db_pool.simple_insert_many_txn( |
1174 | 1180 | txn, |
1175 | 1181 | table="stream_ordering_to_exterm", |
1182 | keys=("room_id", "event_id", "stream_ordering"), | |
1176 | 1183 | values=[ |
1177 | { | |
1178 | "room_id": room_id, | |
1179 | "event_id": event_id, | |
1180 | "stream_ordering": max_stream_order, | |
1181 | } | |
1184 | (room_id, event_id, max_stream_order) | |
1182 | 1185 | for room_id, new_extrem in new_forward_extremities.items() |
1183 | 1186 | for event_id in new_extrem |
1184 | 1187 | ], |
1250 | 1253 | for room_id, depth in depth_updates.items(): |
1251 | 1254 | self._update_min_depth_for_room_txn(txn, room_id, depth) |
1252 | 1255 | |
1253 | def _update_outliers_txn(self, txn, events_and_contexts): | |
1256 | def _update_outliers_txn( | |
1257 | self, | |
1258 | txn: LoggingTransaction, | |
1259 | events_and_contexts: List[Tuple[EventBase, EventContext]], | |
1260 | ) -> List[Tuple[EventBase, EventContext]]: | |
1254 | 1261 | """Update any outliers with new event info. |
1255 | 1262 | |
1256 | This turns outliers into ex-outliers (unless the new event was | |
1257 | rejected). | |
1263 | This turns outliers into ex-outliers (unless the new event was rejected), and | |
1264 | also removes any other events we have already seen from the list. | |
1258 | 1265 | |
1259 | 1266 | Args: |
1260 | txn (twisted.enterprise.adbapi.Connection): db connection | |
1261 | events_and_contexts (list[(EventBase, EventContext)]): events | |
1262 | we are persisting | |
1267 | txn: db connection | |
1268 | events_and_contexts: events we are persisting | |
1263 | 1269 | |
1264 | 1270 | Returns: |
1265 | list[(EventBase, EventContext)] new list, without events which | |
1266 | are already in the events table. | |
1271 | new list, without events which are already in the events table. | |
1267 | 1272 | """ |
1268 | 1273 | txn.execute( |
1269 | 1274 | "SELECT event_id, outlier FROM events WHERE event_id in (%s)" |
1271 | 1276 | [event.event_id for event, _ in events_and_contexts], |
1272 | 1277 | ) |
1273 | 1278 | |
1274 | have_persisted = {event_id: outlier for event_id, outlier in txn} | |
1279 | have_persisted: Dict[str, bool] = { | |
1280 | event_id: outlier for event_id, outlier in txn | |
1281 | } | |
1275 | 1282 | |
1276 | 1283 | to_remove = set() |
1277 | 1284 | for event, context in events_and_contexts: |
1281 | 1288 | to_remove.add(event) |
1282 | 1289 | |
1283 | 1290 | if context.rejected: |
1284 | # If the event is rejected then we don't care if the event | |
1285 | # was an outlier or not. | |
1291 | # If the incoming event is rejected then we don't care if the event | |
1292 | # was an outlier or not - what we have is at least as good. | |
1286 | 1293 | continue |
1287 | 1294 | |
1288 | 1295 | outlier_persisted = have_persisted[event.event_id] |
1289 | 1296 | if not event.internal_metadata.is_outlier() and outlier_persisted: |
1290 | 1297 | # We received a copy of an event that we had already stored as |
1291 | # an outlier in the database. We now have some state at that | |
1298 | # an outlier in the database. We now have some state at that event | |
1292 | 1299 | # so we need to update the state_groups table with that state. |
1300 | # | |
1301 | # Note that we do not update the stream_ordering of the event in this | |
1302 | # scenario. XXX: does this cause bugs? It will mean we won't send such | |
1303 | # events down /sync. In general they will be historical events, so that | |
1304 | # doesn't matter too much, but that is not always the case. | |
1305 | ||
1306 | logger.info("Updating state for ex-outlier event %s", event.event_id) | |
1293 | 1307 | |
1294 | 1308 | # insert into event_to_state_groups. |
1295 | 1309 | try: |
1341 | 1355 | d.pop("redacted_because", None) |
1342 | 1356 | return d |
1343 | 1357 | |
1344 | self.db_pool.simple_insert_many_values_txn( | |
1358 | self.db_pool.simple_insert_many_txn( | |
1345 | 1359 | txn, |
1346 | 1360 | table="event_json", |
1347 | 1361 | keys=("event_id", "room_id", "internal_metadata", "json", "format_version"), |
1357 | 1371 | ), |
1358 | 1372 | ) |
1359 | 1373 | |
1360 | self.db_pool.simple_insert_many_values_txn( | |
1374 | self.db_pool.simple_insert_many_txn( | |
1361 | 1375 | txn, |
1362 | 1376 | table="events", |
1363 | 1377 | keys=( |
1411 | 1425 | ) |
1412 | 1426 | txn.execute(sql + clause, [False] + args) |
1413 | 1427 | |
1414 | self.db_pool.simple_insert_many_values_txn( | |
1428 | self.db_pool.simple_insert_many_txn( | |
1415 | 1429 | txn, |
1416 | 1430 | table="state_events", |
1417 | 1431 | keys=("event_id", "room_id", "type", "state_key"), |
1621 | 1635 | return self.db_pool.simple_insert_many_txn( |
1622 | 1636 | txn=txn, |
1623 | 1637 | table="event_labels", |
1638 | keys=("event_id", "label", "room_id", "topological_ordering"), | |
1624 | 1639 | values=[ |
1625 | { | |
1626 | "event_id": event_id, | |
1627 | "label": label, | |
1628 | "room_id": room_id, | |
1629 | "topological_ordering": topological_ordering, | |
1630 | } | |
1631 | for label in labels | |
1640 | (event_id, label, room_id, topological_ordering) for label in labels | |
1632 | 1641 | ], |
1633 | 1642 | ) |
1634 | 1643 | |
1656 | 1665 | vals = [] |
1657 | 1666 | for event in events: |
1658 | 1667 | ref_alg, ref_hash_bytes = compute_event_reference_hash(event) |
1659 | vals.append( | |
1660 | { | |
1661 | "event_id": event.event_id, | |
1662 | "algorithm": ref_alg, | |
1663 | "hash": memoryview(ref_hash_bytes), | |
1664 | } | |
1665 | ) | |
1668 | vals.append((event.event_id, ref_alg, memoryview(ref_hash_bytes))) | |
1666 | 1669 | |
1667 | 1670 | self.db_pool.simple_insert_many_txn( |
1668 | txn, table="event_reference_hashes", values=vals | |
1671 | txn, | |
1672 | table="event_reference_hashes", | |
1673 | keys=("event_id", "algorithm", "hash"), | |
1674 | values=vals, | |
1669 | 1675 | ) |
1670 | 1676 | |
1671 | 1677 | def _store_room_members_txn( |
1688 | 1694 | self.db_pool.simple_insert_many_txn( |
1689 | 1695 | txn, |
1690 | 1696 | table="room_memberships", |
1697 | keys=( | |
1698 | "event_id", | |
1699 | "user_id", | |
1700 | "sender", | |
1701 | "room_id", | |
1702 | "membership", | |
1703 | "display_name", | |
1704 | "avatar_url", | |
1705 | ), | |
1691 | 1706 | values=[ |
1692 | { | |
1693 | "event_id": event.event_id, | |
1694 | "user_id": event.state_key, | |
1695 | "sender": event.user_id, | |
1696 | "room_id": event.room_id, | |
1697 | "membership": event.membership, | |
1698 | "display_name": non_null_str_or_none( | |
1699 | event.content.get("displayname") | |
1700 | ), | |
1701 | "avatar_url": non_null_str_or_none(event.content.get("avatar_url")), | |
1702 | } | |
1707 | ( | |
1708 | event.event_id, | |
1709 | event.state_key, | |
1710 | event.user_id, | |
1711 | event.room_id, | |
1712 | event.membership, | |
1713 | non_null_str_or_none(event.content.get("displayname")), | |
1714 | non_null_str_or_none(event.content.get("avatar_url")), | |
1715 | ) | |
1703 | 1716 | for event in events |
1704 | 1717 | ], |
1705 | 1718 | ) |
1790 | 1803 | txn.call_after( |
1791 | 1804 | self.store.get_thread_summary.invalidate, (parent_id, event.room_id) |
1792 | 1805 | ) |
1806 | # It should be safe to only invalidate the cache if the user has not | |
1807 | # previously participated in the thread, but that's difficult (and | |
1808 | # potentially error-prone) so it is always invalidated. | |
1809 | txn.call_after( | |
1810 | self.store.get_thread_participated.invalidate, | |
1811 | (parent_id, event.room_id, event.sender), | |
1812 | ) | |
1793 | 1813 | |
1794 | 1814 | def _handle_insertion_event(self, txn: LoggingTransaction, event: EventBase): |
1795 | 1815 | """Handles keeping track of insertion events and edges/connections. |
2162 | 2182 | self.db_pool.simple_insert_many_txn( |
2163 | 2183 | txn, |
2164 | 2184 | table="event_edges", |
2185 | keys=("event_id", "prev_event_id", "room_id", "is_state"), | |
2165 | 2186 | values=[ |
2166 | { | |
2167 | "event_id": ev.event_id, | |
2168 | "prev_event_id": e_id, | |
2169 | "room_id": ev.room_id, | |
2170 | "is_state": False, | |
2171 | } | |
2187 | (ev.event_id, e_id, ev.room_id, False) | |
2172 | 2188 | for ev in events |
2173 | 2189 | for e_id in ev.prev_event_ids() |
2174 | 2190 | ], |
2225 | 2241 | ) |
2226 | 2242 | |
2227 | 2243 | |
2228 | @attr.s(slots=True) | |
2244 | @attr.s(slots=True, auto_attribs=True) | |
2229 | 2245 | class _LinkMap: |
2230 | 2246 | """A helper type for tracking links between chains.""" |
2231 | 2247 | |
2232 | 2248 | # Stores the set of links as nested maps: source chain ID -> target chain ID |
2233 | 2249 | # -> source sequence number -> target sequence number. |
2234 | maps = attr.ib(type=Dict[int, Dict[int, Dict[int, int]]], factory=dict) | |
2250 | maps: Dict[int, Dict[int, Dict[int, int]]] = attr.Factory(dict) | |
2235 | 2251 | |
2236 | 2252 | # Stores the links that have been added (with new set to true), as tuples of |
2237 | 2253 | # `(source chain ID, source sequence no, target chain ID, target sequence no.)` |
2238 | additions = attr.ib(type=Set[Tuple[int, int, int, int]], factory=set) | |
2254 | additions: Set[Tuple[int, int, int, int]] = attr.Factory(set) | |
2239 | 2255 | |
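For fields with mutable defaults, the `auto_attribs` refactor replaces `attr.ib(factory=dict)` with an annotated `attr.Factory(dict)` default, so each instance still gets its own container. A sketch (names illustrative):

```python
from typing import Dict, Set

import attr

@attr.s(slots=True, auto_attribs=True)
class LinkMapSketch:
    # attr.Factory builds a fresh dict/set per instance; a bare `= {}`
    # default would be shared between all instances.
    maps: Dict[int, Dict[int, int]] = attr.Factory(dict)
    additions: Set[int] = attr.Factory(set)

a = LinkMapSketch()
b = LinkMapSketch()
a.maps[1] = {2: 3}
a.additions.add(7)
```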
2240 | 2256 | def add_link( |
2241 | 2257 | self, |
64 | 64 | REPLACE_STREAM_ORDERING_COLUMN = "replace_stream_ordering_column" |
65 | 65 | |
66 | 66 | |
67 | @attr.s(slots=True, frozen=True) | |
67 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
68 | 68 | class _CalculateChainCover: |
69 | 69 | """Return value for _calculate_chain_cover_txn.""" |
70 | 70 | |
71 | 71 | # The last room_id/depth/stream processed. |
72 | room_id = attr.ib(type=str) | |
73 | depth = attr.ib(type=int) | |
74 | stream = attr.ib(type=int) | |
72 | room_id: str | |
73 | depth: int | |
74 | stream: int | |
75 | 75 | |
76 | 76 | # Number of rows processed |
77 | processed_count = attr.ib(type=int) | |
77 | processed_count: int | |
78 | 78 | |
79 | 79 | # Map from room_id to last depth/stream processed for each room that we have |
80 | 80 | # processed all events for (i.e. the rooms we can flip the |
81 | 81 | # `has_auth_chain_index` for) |
82 | finished_room_map = attr.ib(type=Dict[str, Tuple[int, int]]) | |
82 | finished_room_map: Dict[str, Tuple[int, int]] | |
83 | 83 | |
84 | 84 | |
85 | 85 | class EventsBackgroundUpdatesStore(SQLBaseStore): |
683 | 683 | self.db_pool.simple_insert_many_txn( |
684 | 684 | txn=txn, |
685 | 685 | table="event_labels", |
686 | keys=("event_id", "label", "room_id", "topological_ordering"), | |
686 | 687 | values=[ |
687 | { | |
688 | "event_id": event_id, | |
689 | "label": label, | |
690 | "room_id": event_json["room_id"], | |
691 | "topological_ordering": event_json["depth"], | |
692 | } | |
688 | ( | |
689 | event_id, | |
690 | label, | |
691 | event_json["room_id"], | |
692 | event_json["depth"], | |
693 | ) | |
693 | 694 | for label in event_json["content"].get( |
694 | 695 | EventContentFields.LABELS, [] |
695 | 696 | ) |
802 | 803 | |
803 | 804 | if not has_state: |
804 | 805 | state_events.append( |
805 | { | |
806 | "event_id": event.event_id, | |
807 | "room_id": event.room_id, | |
808 | "type": event.type, | |
809 | "state_key": event.state_key, | |
810 | } | |
806 | (event.event_id, event.room_id, event.type, event.state_key) | |
811 | 807 | ) |
812 | 808 | |
813 | 809 | if not has_event_auth: |
814 | 810 | # Old, dodgy events may have duplicate auth events, which we |
815 | 811 | # need to deduplicate as we have a unique constraint. |
816 | 812 | for auth_id in set(event.auth_event_ids()): |
817 | auth_events.append( | |
818 | { | |
819 | "room_id": event.room_id, | |
820 | "event_id": event.event_id, | |
821 | "auth_id": auth_id, | |
822 | } | |
823 | ) | |
813 | auth_events.append((event.event_id, event.room_id, auth_id)) | |
824 | 814 | |
825 | 815 | if state_events: |
826 | 816 | await self.db_pool.simple_insert_many( |
827 | 817 | table="state_events", |
818 | keys=("event_id", "room_id", "type", "state_key"), | |
828 | 819 | values=state_events, |
829 | 820 | desc="_rejected_events_metadata_state_events", |
830 | 821 | ) |
832 | 823 | if auth_events: |
833 | 824 | await self.db_pool.simple_insert_many( |
834 | 825 | table="event_auth", |
826 | keys=("event_id", "room_id", "auth_id"), | |
835 | 827 | values=auth_events, |
836 | 828 | desc="_rejected_events_metadata_event_auth", |
837 | 829 | ) |
128 | 128 | self.db_pool.simple_insert_many_txn( |
129 | 129 | txn, |
130 | 130 | table="presence_stream", |
131 | keys=( | |
132 | "stream_id", | |
133 | "user_id", | |
134 | "state", | |
135 | "last_active_ts", | |
136 | "last_federation_update_ts", | |
137 | "last_user_sync_ts", | |
138 | "status_msg", | |
139 | "currently_active", | |
140 | "instance_name", | |
141 | ), | |
131 | 142 | values=[ |
132 | { | |
133 | "stream_id": stream_id, | |
134 | "user_id": state.user_id, | |
135 | "state": state.state, | |
136 | "last_active_ts": state.last_active_ts, | |
137 | "last_federation_update_ts": state.last_federation_update_ts, | |
138 | "last_user_sync_ts": state.last_user_sync_ts, | |
139 | "status_msg": state.status_msg, | |
140 | "currently_active": state.currently_active, | |
141 | "instance_name": self._instance_name, | |
142 | } | |
143 | ( | |
144 | stream_id, | |
145 | state.user_id, | |
146 | state.state, | |
147 | state.last_active_ts, | |
148 | state.last_federation_update_ts, | |
149 | state.last_user_sync_ts, | |
150 | state.status_msg, | |
151 | state.currently_active, | |
152 | self._instance_name, | |
153 | ) | |
143 | 154 | for stream_id, state in zip(stream_orderings, presence_states) |
144 | 155 | ], |
145 | 156 | ) |
560 | 560 | self.db_pool.simple_insert_many_txn( |
561 | 561 | txn, |
562 | 562 | table="deleted_pushers", |
563 | keys=("stream_id", "app_id", "pushkey", "user_id"), | |
563 | 564 | values=[ |
564 | { | |
565 | "stream_id": stream_id, | |
566 | "app_id": pusher.app_id, | |
567 | "pushkey": pusher.pushkey, | |
568 | "user_id": user_id, | |
569 | } | |
565 | (stream_id, pusher.app_id, pusher.pushkey, user_id) | |
570 | 566 | for stream_id, pusher in zip(stream_ids, pushers) |
571 | 567 | ], |
572 | 568 | ) |
50 | 50 | pass |
51 | 51 | |
52 | 52 | |
53 | @attr.s(frozen=True, slots=True) | |
53 | @attr.s(frozen=True, slots=True, auto_attribs=True) | |
54 | 54 | class TokenLookupResult: |
55 | 55 | """Result of looking up an access token. |
56 | 56 | |
68 | 68 | cached. |
69 | 69 | """ |
70 | 70 | |
71 | user_id = attr.ib(type=str) | |
72 | is_guest = attr.ib(type=bool, default=False) | |
73 | shadow_banned = attr.ib(type=bool, default=False) | |
74 | token_id = attr.ib(type=Optional[int], default=None) | |
75 | device_id = attr.ib(type=Optional[str], default=None) | |
76 | valid_until_ms = attr.ib(type=Optional[int], default=None) | |
77 | token_owner = attr.ib(type=str) | |
78 | token_used = attr.ib(type=bool, default=False) | |
71 | user_id: str | |
72 | is_guest: bool = False | |
73 | shadow_banned: bool = False | |
74 | token_id: Optional[int] = None | |
75 | device_id: Optional[str] = None | |
76 | valid_until_ms: Optional[int] = None | |
77 | token_owner: str = attr.ib() | |
78 | token_used: bool = False | |
79 | 79 | |
80 | 80 | # Make the token owner default to the user ID, which is the common case. |
81 | 81 | @token_owner.default |
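`TokenLookupResult.token_owner` keeps an explicit `= attr.ib()` even under `auto_attribs`, because the `@token_owner.default` decorator below it needs a named attribute to hang the default factory on. A simplified, hypothetical sketch of the pattern:

```python
import attr

@attr.s(frozen=True, slots=True, auto_attribs=True)
class TokenSketch:
    user_id: str
    # Must stay `= attr.ib()` so the decorator below can attach a default
    # derived from another field at construction time.
    token_owner: str = attr.ib()

    @token_owner.default
    def _token_owner_default(self) -> str:
        # Mirror TokenLookupResult: the owner defaults to the user ID.
        return self.user_id

t = TokenSketch(user_id="@alice:example.org")
```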
12 | 12 | # limitations under the License. |
13 | 13 | |
14 | 14 | import logging |
15 | from typing import List, Optional, Tuple, Union, cast | |
15 | from typing import ( | |
16 | TYPE_CHECKING, | |
17 | Any, | |
18 | Dict, | |
19 | Iterable, | |
20 | List, | |
21 | Optional, | |
22 | Tuple, | |
23 | Union, | |
24 | cast, | |
25 | ) | |
16 | 26 | |
17 | 27 | import attr |
18 | ||
19 | from synapse.api.constants import RelationTypes | |
28 | from frozendict import frozendict | |
29 | ||
30 | from synapse.api.constants import EventTypes, RelationTypes | |
20 | 31 | from synapse.events import EventBase |
21 | 32 | from synapse.storage._base import SQLBaseStore |
22 | from synapse.storage.database import LoggingTransaction, make_in_list_sql_clause | |
33 | from synapse.storage.database import ( | |
34 | DatabasePool, | |
35 | LoggingDatabaseConnection, | |
36 | LoggingTransaction, | |
37 | make_in_list_sql_clause, | |
38 | ) | |
23 | 39 | from synapse.storage.databases.main.stream import generate_pagination_where_clause |
24 | 40 | from synapse.storage.relations import ( |
25 | 41 | AggregationPaginationToken, |
28 | 44 | ) |
29 | 45 | from synapse.util.caches.descriptors import cached |
30 | 46 | |
47 | if TYPE_CHECKING: | |
48 | from synapse.server import HomeServer | |
49 | ||
31 | 50 | logger = logging.getLogger(__name__) |
32 | 51 | |
33 | 52 | |
34 | 53 | class RelationsWorkerStore(SQLBaseStore): |
54 | def __init__( | |
55 | self, | |
56 | database: DatabasePool, | |
57 | db_conn: LoggingDatabaseConnection, | |
58 | hs: "HomeServer", | |
59 | ): | |
60 | super().__init__(database, db_conn, hs) | |
61 | ||
62 | self._msc1849_enabled = hs.config.experimental.msc1849_enabled | |
63 | self._msc3440_enabled = hs.config.experimental.msc3440_enabled | |
64 | ||
35 | 65 | @cached(tree=True) |
36 | 66 | async def get_relations_for_event( |
37 | 67 | self, |
353 | 383 | async def get_thread_summary( |
354 | 384 | self, event_id: str, room_id: str |
355 | 385 | ) -> Tuple[int, Optional[EventBase]]: |
356 | """Get the number of threaded replies, the senders of those replies, and | |
357 | the latest reply (if any) for the given event. | |
386 | """Get the number of threaded replies and the latest reply (if any) for the given event. | |
358 | 387 | |
359 | 388 | Args: |
360 | 389 | event_id: Summarize the thread related to this event ID. |
367 | 396 | def _get_thread_summary_txn( |
368 | 397 | txn: LoggingTransaction, |
369 | 398 | ) -> Tuple[int, Optional[str]]: |
370 | # Fetch the count of threaded events and the latest event ID. | |
399 | # Fetch the latest event ID in the thread. | |
371 | 400 | # TODO Should this only allow m.room.message events. |
372 | 401 | sql = """ |
373 | 402 | SELECT event_id |
388 | 417 | |
389 | 418 | latest_event_id = row[0] |
390 | 419 | |
420 | # Fetch the number of threaded replies. | |
391 | 421 | sql = """ |
392 | 422 | SELECT COUNT(event_id) |
393 | 423 | FROM event_relations |
411 | 441 | latest_event = await self.get_event(latest_event_id, allow_none=True) # type: ignore[attr-defined] |
412 | 442 | |
413 | 443 | return count, latest_event |
444 | ||
445 | @cached() | |
446 | async def get_thread_participated( | |
447 | self, event_id: str, room_id: str, user_id: str | |
448 | ) -> bool: | |
449 | """Get whether the requesting user participated in a thread. | |
450 | ||
451 | This is separate from get_thread_summary since that can be cached across | |
452 | all users while this value is specific to the requester. |
453 | ||
454 | Args: | |
455 | event_id: The thread related to this event ID. | |
456 | room_id: The room the event belongs to. | |
457 | user_id: The user requesting the summary. | |
458 | ||
459 | Returns: | |
460 | True if the requesting user participated in the thread, otherwise false. | |
461 | """ | |
462 | ||
463 | def _get_thread_participated_txn(txn: LoggingTransaction) -> bool: |
464 | # Fetch whether the requester has participated or not. |
465 | sql = """ |
466 | SELECT 1 |
467 | FROM event_relations |
468 | INNER JOIN events USING (event_id) |
469 | WHERE |
470 | relates_to_id = ? |
471 | AND room_id = ? |
472 | AND relation_type = ? |
473 | AND sender = ? |
474 | """ |
475 | |
476 | txn.execute(sql, (event_id, room_id, RelationTypes.THREAD, user_id)) |
477 | return bool(txn.fetchone()) |
478 | |
479 | return await self.db_pool.runInteraction( |
480 | "get_thread_participated", _get_thread_participated_txn |
481 | ) |
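The participation check added above is effectively an EXISTS-style query over `event_relations` joined to `events`. A standalone sketch with sqlite3 and toy data (schema trimmed to just the columns the query touches):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE events (event_id TEXT, room_id TEXT, sender TEXT);
    CREATE TABLE event_relations (event_id TEXT, relates_to_id TEXT, relation_type TEXT);
    INSERT INTO events VALUES ('$reply', '!room:example.org', '@alice:example.org');
    INSERT INTO event_relations VALUES ('$reply', '$root', 'm.thread');
    """
)

def thread_participated(conn, thread_root, room_id, user_id):
    # Any event by user_id in room_id that relates to the thread root
    # with rel_type m.thread counts as participation.
    row = conn.execute(
        """
        SELECT 1 FROM event_relations
        INNER JOIN events USING (event_id)
        WHERE relates_to_id = ? AND room_id = ?
          AND relation_type = ? AND sender = ?
        """,
        (thread_root, room_id, "m.thread", user_id),
    ).fetchone()
    return bool(row)
```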
414 | 482 | |
415 | 483 | async def events_have_relations( |
416 | 484 | self, |
514 | 582 | "get_if_user_has_annotated_event", _get_if_user_has_annotated_event |
515 | 583 | ) |
516 | 584 | |
585 | async def _get_bundled_aggregation_for_event( | |
586 | self, event: EventBase, user_id: str | |
587 | ) -> Optional[Dict[str, Any]]: | |
588 | """Generate bundled aggregations for an event. | |
589 | ||
590 | Note that this does not use a cache, but depends on cached methods. | |
591 | ||
592 | Args: | |
593 | event: The event to calculate bundled aggregations for. | |
594 | user_id: The user requesting the bundled aggregations. | |
595 | ||
596 | Returns: | |
597 | The bundled aggregations for an event, if bundled aggregations are | |
598 | enabled and the event can have bundled aggregations. | |
599 | """ | |
600 | # State events and redacted events do not get bundled aggregations. | |
601 | if event.is_state() or event.internal_metadata.is_redacted(): | |
602 | return None | |
603 | ||
604 | # Do not bundle aggregations for an event which represents an edit or an | |
605 | # annotation. It does not make sense for them to have related events. | |
606 | relates_to = event.content.get("m.relates_to") | |
607 | if isinstance(relates_to, (dict, frozendict)): | |
608 | relation_type = relates_to.get("rel_type") | |
609 | if relation_type in (RelationTypes.ANNOTATION, RelationTypes.REPLACE): | |
610 | return None | |
611 | ||
612 | event_id = event.event_id | |
613 | room_id = event.room_id | |
614 | ||
615 | # The bundled aggregations to include, a mapping of relation type to a | |
616 | # type-specific value. Some types include the direct return type here | |
617 | # while others need more processing during serialization. | |
618 | aggregations: Dict[str, Any] = {} | |
619 | ||
620 | annotations = await self.get_aggregation_groups_for_event(event_id, room_id) | |
621 | if annotations.chunk: | |
622 | aggregations[RelationTypes.ANNOTATION] = annotations.to_dict() | |
623 | ||
624 | references = await self.get_relations_for_event( | |
625 | event_id, room_id, RelationTypes.REFERENCE, direction="f" | |
626 | ) | |
627 | if references.chunk: | |
628 | aggregations[RelationTypes.REFERENCE] = references.to_dict() | |
629 | ||
630 | edit = None | |
631 | if event.type == EventTypes.Message: | |
632 | edit = await self.get_applicable_edit(event_id, room_id) | |
633 | ||
634 | if edit: | |
635 | aggregations[RelationTypes.REPLACE] = edit | |
636 | ||
637 | # If this event is the start of a thread, include a summary of the replies. | |
638 | if self._msc3440_enabled: | |
639 | thread_count, latest_thread_event = await self.get_thread_summary( | |
640 | event_id, room_id | |
641 | ) | |
642 | participated = await self.get_thread_participated( | |
643 | event_id, room_id, user_id | |
644 | ) | |
645 | if latest_thread_event: | |
646 | aggregations[RelationTypes.THREAD] = { | |
647 | "latest_event": latest_thread_event, | |
648 | "count": thread_count, | |
649 | "current_user_participated": participated, | |
650 | } | |
651 | ||
652 | # Store the bundled aggregations in the event metadata for later use. | |
653 | return aggregations | |
654 | ||
655 | async def get_bundled_aggregations( | |
656 | self, | |
657 | events: Iterable[EventBase], | |
658 | user_id: str, | |
659 | ) -> Dict[str, Dict[str, Any]]: | |
660 | """Generate bundled aggregations for events. | |
661 | ||
662 | Args: | |
663 | events: The iterable of events to calculate bundled aggregations for. | |
664 | user_id: The user requesting the bundled aggregations. | |
665 | ||
666 | Returns: | |
667 | A map of event ID to the bundled aggregation for the event. Not all | |
668 | events may have bundled aggregations in the results. | |
669 | """ | |
670 | # If bundled aggregations are disabled, nothing to do. | |
671 | if not self._msc1849_enabled: | |
672 | return {} | |
673 | ||
674 | # TODO Parallelize. | |
675 | results = {} | |
676 | for event in events: | |
677 | event_result = await self._get_bundled_aggregation_for_event(event, user_id) | |
678 | if event_result is not None: | |
679 | results[event.event_id] = event_result | |
680 | ||
681 | return results | |
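The new `get_bundled_aggregations` loop above awaits a per-event helper and only records non-empty results. A minimal standalone sketch of that fan-out pattern (the helper and event IDs here are hypothetical stand-ins, not Synapse's actual storage layer):

```python
import asyncio
from typing import Any, Dict, Iterable, Optional

async def get_aggregation_for_event(event_id: str) -> Optional[Dict[str, Any]]:
    # Hypothetical stand-in for _get_bundled_aggregation_for_event:
    # returns None when an event has nothing to bundle.
    if event_id.startswith("$plain"):
        return None
    return {"m.reference": {"chunk": []}}

async def get_bundled_aggregations(event_ids: Iterable[str]) -> Dict[str, Dict[str, Any]]:
    # Same shape as the loop in the hunk above: skip events with no
    # aggregations, so the map only contains events that have bundles.
    results: Dict[str, Dict[str, Any]] = {}
    for event_id in event_ids:
        event_result = await get_aggregation_for_event(event_id)
        if event_result is not None:
            results[event_id] = event_result
    return results

results = asyncio.run(get_bundled_aggregations(["$plain:1", "$edited:2"]))
```

As the `# TODO Parallelize.` comment notes, the per-event awaits are sequential; the result shape would be unchanged if they were gathered concurrently.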
682 | ||
517 | 683 | |
518 | 684 | class RelationsStore(RelationsWorkerStore): |
519 | 685 | pass |
550 | 550 | FROM room_stats_state state |
551 | 551 | INNER JOIN room_stats_current curr USING (room_id) |
552 | 552 | INNER JOIN rooms USING (room_id) |
553 | %s | |
554 | ORDER BY %s %s | |
553 | {where} | |
554 | ORDER BY {order_by} {direction}, state.room_id {direction} | |
555 | 555 | LIMIT ? |
556 | 556 | OFFSET ? |
557 | """ % ( | |
558 | where_statement, | |
559 | order_by_column, | |
560 | "ASC" if order_by_asc else "DESC", | |
557 | """.format( | |
558 | where=where_statement, | |
559 | order_by=order_by_column, | |
560 | direction="ASC" if order_by_asc else "DESC", | |
561 | 561 | ) |
562 | 562 | |
563 | 563 | # Use a nested SELECT statement as SQL can't count(*) with an OFFSET |
564 | 564 | count_sql = """ |
565 | 565 | SELECT count(*) FROM ( |
566 | 566 | SELECT room_id FROM room_stats_state state |
567 | %s | |
567 | {where} | |
568 | 568 | ) AS get_room_ids |
569 | """ % ( | |
570 | where_statement, | |
569 | """.format( | |
570 | where=where_statement, | |
571 | 571 | ) |
572 | 572 | |
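The hunk above swaps `%`-interpolation for `str.format` with named fields when splicing trusted, server-side SQL fragments (a fixed column name and sort direction) into the query, while user-supplied values still go through `?` placeholders. A simplified sketch of the same pattern against an in-memory SQLite table (table and columns are illustrative, not Synapse's real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE room_stats_state (room_id TEXT, name TEXT, joined_members INTEGER)"
)
conn.executemany(
    "INSERT INTO room_stats_state VALUES (?, ?, ?)",
    [("!a:x", "Alpha", 10), ("!b:x", "Beta", 5), ("!c:x", "Gamma", 7)],
)

# Only fixed, server-chosen fragments are interpolated with .format();
# LIMIT/OFFSET remain bound parameters.
order_by_column = "joined_members"
direction = "DESC"
sql = """
    SELECT room_id FROM room_stats_state
    {where}
    ORDER BY {order_by} {direction}, room_id {direction}
    LIMIT ? OFFSET ?
""".format(where="", order_by=order_by_column, direction=direction)

rows = [r[0] for r in conn.execute(sql, (2, 0))]
```

Note the new `ORDER BY` also appends `state.room_id` as a tie-breaker, which makes the pagination ordering deterministic when two rooms share the same sort value.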
573 | 573 | def _get_rooms_paginate_txn( |
1176 | 1176 | await self.db_pool.runInteraction("forget_membership", f) |
1177 | 1177 | |
1178 | 1178 | |
1179 | @attr.s(slots=True) | |
1179 | @attr.s(slots=True, auto_attribs=True) | |
1180 | 1180 | class _JoinedHostsCache: |
1181 | 1181 | """The cached data used by the `_get_joined_hosts_cache`.""" |
1182 | 1182 | |
1183 | 1183 | # Dict of host to the set of their users in the room at the state group. |
1184 | hosts_to_joined_users = attr.ib(type=Dict[str, Set[str]], factory=dict) | |
1184 | hosts_to_joined_users: Dict[str, Set[str]] = attr.Factory(dict) | |
1185 | 1185 | |
1186 | 1186 | # The state group `hosts_to_joined_users` is derived from. Will be an object |
1187 | 1187 | # if the instance is newly created or if the state is not based on a state |
1188 | 1188 | # group. (An object is used as a sentinel value to ensure that it never is |
1189 | 1189 | # equal to anything else). |
1190 | state_group = attr.ib(type=Union[object, int], factory=object) | |
1190 | state_group: Union[object, int] = attr.Factory(object) | |
1191 | 1191 | |
1192 | 1192 | def __len__(self): |
1193 | 1193 | return sum(len(v) for v in self.hosts_to_joined_users.values()) |
0 | # -*- coding: utf-8 -*- | |
1 | 0 | # Copyright 2021 The Matrix.org Foundation C.I.C. |
2 | 1 | # |
3 | 2 | # Licensed under the Apache License, Version 2.0 (the "License"); |
559 | 559 | return await self.db_pool.runInteraction( |
560 | 560 | "get_destinations_paginate_txn", get_destinations_paginate_txn |
561 | 561 | ) |
562 | ||
563 | async def is_destination_known(self, destination: str) -> bool: | |
564 | """Check if a destination is known to the server.""" | |
565 | result = await self.db_pool.simple_select_one_onecol( | |
566 | table="destinations", | |
567 | keyvalues={"destination": destination}, | |
568 | retcol="1", | |
569 | allow_none=True, | |
570 | desc="is_destination_known", | |
571 | ) | |
572 | return bool(result) |
22 | 22 | from synapse.util import json_encoder, stringutils |
23 | 23 | |
24 | 24 | |
25 | @attr.s(slots=True) | |
25 | @attr.s(slots=True, auto_attribs=True) | |
26 | 26 | class UIAuthSessionData: |
27 | session_id = attr.ib(type=str) | |
27 | session_id: str | |
28 | 28 | # The dictionary from the client root level, not the 'auth' key. |
29 | clientdict = attr.ib(type=JsonDict) | |
29 | clientdict: JsonDict | |
30 | 30 | # The URI and method the session was initiated with. These are checked at |
31 | 31 | # each stage of the authentication to ensure that the asked for operation |
32 | 32 | # has not changed. |
33 | uri = attr.ib(type=str) | |
34 | method = attr.ib(type=str) | |
33 | uri: str | |
34 | method: str | |
35 | 35 | # A string description of the operation that the current authentication is |
36 | 36 | # authorising. |
37 | description = attr.ib(type=str) | |
37 | description: str | |
38 | 38 | |
39 | 39 | |
40 | 40 | class UIAuthWorkerStore(SQLBaseStore): |
104 | 104 | GROUP BY room_id |
105 | 105 | """ |
106 | 106 | txn.execute(sql) |
107 | rooms = [{"room_id": x[0], "events": x[1]} for x in txn.fetchall()] | |
108 | self.db_pool.simple_insert_many_txn(txn, TEMP_TABLE + "_rooms", rooms) | |
107 | rooms = list(txn.fetchall()) | |
108 | self.db_pool.simple_insert_many_txn( | |
109 | txn, TEMP_TABLE + "_rooms", keys=("room_id", "events"), values=rooms | |
110 | ) | |
109 | 111 | del rooms |
110 | 112 | |
111 | 113 | sql = ( |
116 | 118 | txn.execute(sql) |
117 | 119 | |
118 | 120 | txn.execute("SELECT name FROM users") |
119 | users = [{"user_id": x[0]} for x in txn.fetchall()] | |
120 | ||
121 | self.db_pool.simple_insert_many_txn(txn, TEMP_TABLE + "_users", users) | |
121 | users = list(txn.fetchall()) | |
122 | ||
123 | self.db_pool.simple_insert_many_txn( | |
124 | txn, TEMP_TABLE + "_users", keys=("user_id",), values=users | |
125 | ) | |
122 | 126 | |
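The hunks above move `simple_insert_many_txn` from per-row dicts to a shared `keys` tuple plus rows of plain value tuples, which maps directly onto DB-API `executemany` and lets `txn.fetchall()` results be inserted without reshaping. A sketch of the underlying pattern (table name is an illustrative stand-in for `TEMP_TABLE + "_rooms"`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_directory_rooms (room_id TEXT, events INTEGER)")

# Rows come straight from a cursor, e.g. list(txn.fetchall()); no dict
# construction per row as in the old code.
rows = [("!a:x", 3), ("!b:x", 7)]
keys = ("room_id", "events")
sql = "INSERT INTO user_directory_rooms (%s) VALUES (%s)" % (
    ", ".join(keys),
    ", ".join("?" for _ in keys),
)
conn.executemany(sql, rows)
count = conn.execute("SELECT count(*) FROM user_directory_rooms").fetchone()[0]
```

The later `state_groups_state` hunks follow the same shape, replacing each `{...}` row dict with a `(state_group, room_id, key[0], key[1], state_id)` tuple against a single `keys=` declaration.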
123 | 127 | new_pos = await self.get_max_stream_id_in_current_state_deltas() |
124 | 128 | await self.db_pool.runInteraction( |
326 | 326 | self.db_pool.simple_insert_many_txn( |
327 | 327 | txn, |
328 | 328 | table="state_groups_state", |
329 | keys=( | |
330 | "state_group", | |
331 | "room_id", | |
332 | "type", | |
333 | "state_key", | |
334 | "event_id", | |
335 | ), | |
329 | 336 | values=[ |
330 | { | |
331 | "state_group": state_group, | |
332 | "room_id": room_id, | |
333 | "type": key[0], | |
334 | "state_key": key[1], | |
335 | "event_id": state_id, | |
336 | } | |
337 | (state_group, room_id, key[0], key[1], state_id) | |
337 | 338 | for key, state_id in delta_state.items() |
338 | 339 | ], |
339 | 340 | ) |
459 | 459 | self.db_pool.simple_insert_many_txn( |
460 | 460 | txn, |
461 | 461 | table="state_groups_state", |
462 | keys=("state_group", "room_id", "type", "state_key", "event_id"), | |
462 | 463 | values=[ |
463 | { | |
464 | "state_group": state_group, | |
465 | "room_id": room_id, | |
466 | "type": key[0], | |
467 | "state_key": key[1], | |
468 | "event_id": state_id, | |
469 | } | |
464 | (state_group, room_id, key[0], key[1], state_id) | |
470 | 465 | for key, state_id in delta_ids.items() |
471 | 466 | ], |
472 | 467 | ) |
474 | 469 | self.db_pool.simple_insert_many_txn( |
475 | 470 | txn, |
476 | 471 | table="state_groups_state", |
472 | keys=("state_group", "room_id", "type", "state_key", "event_id"), | |
477 | 473 | values=[ |
478 | { | |
479 | "state_group": state_group, | |
480 | "room_id": room_id, | |
481 | "type": key[0], | |
482 | "state_key": key[1], | |
483 | "event_id": state_id, | |
484 | } | |
474 | (state_group, room_id, key[0], key[1], state_id) | |
485 | 475 | for key, state_id in current_state_ids.items() |
486 | 476 | ], |
487 | 477 | ) |
588 | 578 | self.db_pool.simple_insert_many_txn( |
589 | 579 | txn, |
590 | 580 | table="state_groups_state", |
581 | keys=("state_group", "room_id", "type", "state_key", "event_id"), | |
591 | 582 | values=[ |
592 | { | |
593 | "state_group": sg, | |
594 | "room_id": room_id, | |
595 | "type": key[0], | |
596 | "state_key": key[1], | |
597 | "event_id": state_id, | |
598 | } | |
583 | (sg, room_id, key[0], key[1], state_id) | |
599 | 584 | for key, state_id in curr_state.items() |
600 | 585 | ], |
601 | 586 | ) |
20 | 20 | logger = logging.getLogger(__name__) |
21 | 21 | |
22 | 22 | |
23 | @attr.s(slots=True, frozen=True) | |
23 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
24 | 24 | class FetchKeyResult: |
25 | verify_key = attr.ib(type=VerifyKey) # the key itself | |
26 | valid_until_ts = attr.ib(type=int) # how long we can use this key for | |
25 | verify_key: VerifyKey # the key itself | |
26 | valid_until_ts: int # how long we can use this key for |
695 | 695 | ) |
696 | 696 | |
697 | 697 | |
698 | @attr.s(slots=True) | |
698 | @attr.s(slots=True, auto_attribs=True) | |
699 | 699 | class _DirectoryListing: |
700 | 700 | """Helper class to store schema file name and the |
701 | 701 | absolute path to it. |
704 | 704 | `file_name` attr is kept first. |
705 | 705 | """ |
706 | 706 | |
707 | file_name = attr.ib(type=str) | |
708 | absolute_path = attr.ib(type=str) | |
707 | file_name: str | |
708 | absolute_path: str |
22 | 22 | logger = logging.getLogger(__name__) |
23 | 23 | |
24 | 24 | |
25 | @attr.s(slots=True) | |
25 | @attr.s(slots=True, auto_attribs=True) | |
26 | 26 | class PaginationChunk: |
27 | 27 | """Returned by relation pagination APIs. |
28 | 28 | |
34 | 34 | None then there are no previous results. |
35 | 35 | """ |
36 | 36 | |
37 | chunk = attr.ib(type=List[JsonDict]) | |
38 | next_batch = attr.ib(type=Optional[Any], default=None) | |
39 | prev_batch = attr.ib(type=Optional[Any], default=None) | |
37 | chunk: List[JsonDict] | |
38 | next_batch: Optional[Any] = None | |
39 | prev_batch: Optional[Any] = None | |
40 | 40 | |
41 | 41 | def to_dict(self) -> Dict[str, Any]: |
42 | 42 | d = {"chunk": self.chunk} |
50 | 50 | return d |
51 | 51 | |
52 | 52 | |
53 | @attr.s(frozen=True, slots=True) | |
53 | @attr.s(frozen=True, slots=True, auto_attribs=True) | |
54 | 54 | class RelationPaginationToken: |
55 | 55 | """Pagination token for relation pagination API. |
56 | 56 | |
63 | 63 | stream: The stream ordering of the boundary event. |
64 | 64 | """ |
65 | 65 | |
66 | topological = attr.ib(type=int) | |
67 | stream = attr.ib(type=int) | |
66 | topological: int | |
67 | stream: int | |
68 | 68 | |
69 | 69 | @staticmethod |
70 | 70 | def from_string(string: str) -> "RelationPaginationToken": |
81 | 81 | return attr.astuple(self) |
82 | 82 | |
83 | 83 | |
84 | @attr.s(frozen=True, slots=True) | |
84 | @attr.s(frozen=True, slots=True, auto_attribs=True) | |
85 | 85 | class AggregationPaginationToken: |
86 | 86 | """Pagination token for relation aggregation pagination API. |
87 | 87 | |
93 | 93 | stream: The MAX stream ordering in the boundary group. |
94 | 94 | """ |
95 | 95 | |
96 | count = attr.ib(type=int) | |
97 | stream = attr.ib(type=int) | |
96 | count: int | |
97 | stream: int | |
98 | 98 | |
99 | 99 | @staticmethod |
100 | 100 | def from_string(string: str) -> "AggregationPaginationToken": |
44 | 44 | T = TypeVar("T") |
45 | 45 | |
46 | 46 | |
47 | @attr.s(slots=True, frozen=True) | |
47 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
48 | 48 | class StateFilter: |
49 | 49 | """A filter used when querying for state. |
50 | 50 | |
57 | 57 | appear in `types`. |
58 | 58 | """ |
59 | 59 | |
60 | types = attr.ib(type="frozendict[str, Optional[FrozenSet[str]]]") | |
61 | include_others = attr.ib(default=False, type=bool) | |
60 | types: "frozendict[str, Optional[FrozenSet[str]]]" | |
61 | include_others: bool = False | |
62 | 62 | |
63 | 63 | def __attrs_post_init__(self): |
64 | 64 | # If `include_others` is set we canonicalise the filter by removing |
761 | 761 | return self.inner.__exit__(exc_type, exc, tb) |
762 | 762 | |
763 | 763 | |
764 | @attr.s(slots=True) | |
764 | @attr.s(slots=True, auto_attribs=True) | |
765 | 765 | class _MultiWriterCtxManager: |
766 | 766 | """Async context manager returned by MultiWriterIdGenerator""" |
767 | 767 | |
768 | id_gen = attr.ib(type=MultiWriterIdGenerator) | |
769 | multiple_ids = attr.ib(type=Optional[int], default=None) | |
770 | stream_ids = attr.ib(type=List[int], factory=list) | |
768 | id_gen: MultiWriterIdGenerator | |
769 | multiple_ids: Optional[int] = None | |
770 | stream_ids: List[int] = attr.Factory(list) | |
771 | 771 | |
772 | 772 | async def __aenter__(self) -> Union[int, List[int]]: |
773 | 773 | # It's safe to run this in autocommit mode as fetching values from a |
27 | 27 | MAX_LIMIT = 1000 |
28 | 28 | |
29 | 29 | |
30 | @attr.s(slots=True) | |
30 | @attr.s(slots=True, auto_attribs=True) | |
31 | 31 | class PaginationConfig: |
32 | 32 | """A configuration object which stores pagination parameters.""" |
33 | 33 | |
34 | from_token = attr.ib(type=Optional[StreamToken]) | |
35 | to_token = attr.ib(type=Optional[StreamToken]) | |
36 | direction = attr.ib(type=str) | |
37 | limit = attr.ib(type=Optional[int]) | |
34 | from_token: Optional[StreamToken] | |
35 | to_token: Optional[StreamToken] | |
36 | direction: str | |
37 | limit: Optional[int] | |
38 | 38 | |
39 | 39 | @classmethod |
40 | 40 | async def from_request( |
19 | 19 | Any, |
20 | 20 | ClassVar, |
21 | 21 | Dict, |
22 | List, | |
22 | 23 | Mapping, |
24 | Match, | |
23 | 25 | MutableMapping, |
24 | 26 | Optional, |
25 | 27 | Tuple, |
78 | 80 | """The interfaces necessary for Synapse to function.""" |
79 | 81 | |
80 | 82 | |
81 | @attr.s(frozen=True, slots=True) | |
83 | @attr.s(frozen=True, slots=True, auto_attribs=True) | |
82 | 84 | class Requester: |
83 | 85 | """ |
84 | 86 | Represents the user making a request |
96 | 98 | "puppeting" the user. |
97 | 99 | """ |
98 | 100 | |
99 | user = attr.ib(type="UserID") | |
100 | access_token_id = attr.ib(type=Optional[int]) | |
101 | is_guest = attr.ib(type=bool) | |
102 | shadow_banned = attr.ib(type=bool) | |
103 | device_id = attr.ib(type=Optional[str]) | |
104 | app_service = attr.ib(type=Optional["ApplicationService"]) | |
105 | authenticated_entity = attr.ib(type=str) | |
101 | user: "UserID" | |
102 | access_token_id: Optional[int] | |
103 | is_guest: bool | |
104 | shadow_banned: bool | |
105 | device_id: Optional[str] | |
106 | app_service: Optional["ApplicationService"] | |
107 | authenticated_entity: str | |
106 | 108 | |
107 | 109 | def serialize(self): |
108 | 110 | """Converts self to a type that can be serialized as JSON, and then |
209 | 211 | DS = TypeVar("DS", bound="DomainSpecificString") |
210 | 212 | |
211 | 213 | |
212 | @attr.s(slots=True, frozen=True, repr=False) | |
214 | @attr.s(slots=True, frozen=True, repr=False, auto_attribs=True) | |
213 | 215 | class DomainSpecificString(metaclass=abc.ABCMeta): |
214 | 216 | """Common base class among ID/name strings that have a local part and a |
215 | 217 | domain name, prefixed with a sigil. |
222 | 224 | |
223 | 225 | SIGIL: ClassVar[str] = abc.abstractproperty() # type: ignore |
224 | 226 | |
225 | localpart = attr.ib(type=str) | |
226 | domain = attr.ib(type=str) | |
227 | localpart: str | |
228 | domain: str | |
227 | 229 | |
228 | 230 | # Because this is a frozen class, it is deeply immutable. |
229 | 231 | def __copy__(self): |
379 | 381 | onto different mxids |
380 | 382 | |
381 | 383 | Returns: |
382 | unicode: string suitable for a mxid localpart | |
384 | string suitable for a mxid localpart | |
383 | 385 | """ |
384 | 386 | if not isinstance(username, bytes): |
385 | 387 | username = username.encode("utf-8") |
387 | 389 | # first we sort out upper-case characters |
388 | 390 | if case_sensitive: |
389 | 391 | |
390 | def f1(m): | |
392 | def f1(m: Match[bytes]) -> bytes: | |
391 | 393 | return b"_" + m.group().lower() |
392 | 394 | |
393 | 395 | username = UPPER_CASE_PATTERN.sub(f1, username) |
394 | 396 | else: |
395 | 397 | username = username.lower() |
396 | 398 | |
397 | # then we sort out non-ascii characters | |
398 | def f2(m): | |
399 | g = m.group()[0] | |
400 | if isinstance(g, str): | |
401 | # on python 2, we need to do a ord(). On python 3, the | |
402 | # byte itself will do. | |
403 | g = ord(g) | |
404 | return b"=%02x" % (g,) | |
399 | # then we sort out non-ascii characters by converting to the hex equivalent. | |
400 | def f2(m: Match[bytes]) -> bytes: | |
401 | return b"=%02x" % (m.group()[0],) | |
405 | 402 | |
406 | 403 | username = NON_MXID_CHARACTER_PATTERN.sub(f2, username) |
407 | 404 | |
408 | 405 | # we also do the =-escaping to mxids starting with an underscore. |
409 | 406 | username = re.sub(b"^_", b"=5f", username) |
410 | 407 | |
411 | # we should now only have ascii bytes left, so can decode back to a | |
412 | # unicode. | |
408 | # we should now only have ascii bytes left, so can decode back to a string. | |
413 | 409 | return username.decode("ascii") |
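The simplified `f2` above works because on Python 3 indexing a `bytes` object yields an `int`, so `m.group()[0]` can feed `b"=%02x"` directly; the removed `ord()` branch was only ever needed on Python 2. A small sketch of that bytes-regex escaping (the pattern here is an approximation, not Synapse's exact `NON_MXID_CHARACTER_PATTERN`):

```python
import re

# Approximate character class: anything outside the mxid-safe set gets
# replaced with its =xx hex escape.
NON_MXID_CHARACTER = re.compile(b"[^a-z0-9._=/-]")

def escape(m: "re.Match[bytes]") -> bytes:
    # m.group()[0] is an int on Python 3, so it formats directly.
    return b"=%02x" % (m.group()[0],)

escaped = NON_MXID_CHARACTER.sub(escape, b"bob!")
```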
414 | 410 | |
415 | 411 | |
465 | 461 | attributes, must be hashable. |
466 | 462 | """ |
467 | 463 | |
468 | topological = attr.ib( | |
469 | type=Optional[int], | |
464 | topological: Optional[int] = attr.ib( | |
470 | 465 | validator=attr.validators.optional(attr.validators.instance_of(int)), |
471 | 466 | ) |
472 | stream = attr.ib(type=int, validator=attr.validators.instance_of(int)) | |
473 | ||
474 | instance_map = attr.ib( | |
475 | type="frozendict[str, int]", | |
467 | stream: int = attr.ib(validator=attr.validators.instance_of(int)) | |
468 | ||
469 | instance_map: "frozendict[str, int]" = attr.ib( | |
476 | 470 | factory=frozendict, |
477 | 471 | validator=attr.validators.deep_mapping( |
478 | 472 | key_validator=attr.validators.instance_of(str), |
481 | 475 | ), |
482 | 476 | ) |
483 | 477 | |
484 | def __attrs_post_init__(self): | |
478 | def __attrs_post_init__(self) -> None: | |
485 | 479 | """Validates that both `topological` and `instance_map` aren't set.""" |
486 | 480 | |
487 | 481 | if self.instance_map and self.topological: |
597 | 591 | return "s%d" % (self.stream,) |
598 | 592 | |
599 | 593 | |
600 | @attr.s(slots=True, frozen=True) | |
594 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
601 | 595 | class StreamToken: |
602 | 596 | """A collection of positions within multiple streams. |
603 | 597 | |
605 | 599 | must be hashable. |
606 | 600 | """ |
607 | 601 | |
608 | room_key = attr.ib( | |
609 | type=RoomStreamToken, validator=attr.validators.instance_of(RoomStreamToken) | |
602 | room_key: RoomStreamToken = attr.ib( | |
603 | validator=attr.validators.instance_of(RoomStreamToken) | |
610 | 604 | ) |
611 | presence_key = attr.ib(type=int) | |
612 | typing_key = attr.ib(type=int) | |
613 | receipt_key = attr.ib(type=int) | |
614 | account_data_key = attr.ib(type=int) | |
615 | push_rules_key = attr.ib(type=int) | |
616 | to_device_key = attr.ib(type=int) | |
617 | device_list_key = attr.ib(type=int) | |
618 | groups_key = attr.ib(type=int) | |
605 | presence_key: int | |
606 | typing_key: int | |
607 | receipt_key: int | |
608 | account_data_key: int | |
609 | push_rules_key: int | |
610 | to_device_key: int | |
611 | device_list_key: int | |
612 | groups_key: int | |
619 | 613 | |
620 | 614 | _SEPARATOR = "_" |
621 | START: "StreamToken" | |
615 | START: ClassVar["StreamToken"] | |
622 | 616 | |
623 | 617 | @classmethod |
624 | 618 | async def from_string(cls, store: "DataStore", string: str) -> "StreamToken": |
678 | 672 | StreamToken.START = StreamToken(RoomStreamToken(None, 0), 0, 0, 0, 0, 0, 0, 0, 0) |
679 | 673 | |
680 | 674 | |
681 | @attr.s(slots=True, frozen=True) | |
675 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
682 | 676 | class PersistedEventPosition: |
683 | 677 | """Position of a newly persisted event with instance that persisted it. |
684 | 678 | |
686 | 680 | RoomStreamToken. |
687 | 681 | """ |
688 | 682 | |
689 | instance_name = attr.ib(type=str) | |
690 | stream = attr.ib(type=int) | |
683 | instance_name: str | |
684 | stream: int | |
691 | 685 | |
692 | 686 | def persisted_after(self, token: RoomStreamToken) -> bool: |
693 | 687 | return token.get_stream_pos_for_instance(self.instance_name) < self.stream |
737 | 731 | __str__ = to_string |
738 | 732 | |
739 | 733 | |
740 | @attr.s(slots=True) | |
734 | @attr.s(slots=True, frozen=True, auto_attribs=True) | |
741 | 735 | class ReadReceipt: |
742 | 736 | """Information about a read-receipt""" |
743 | 737 | |
744 | room_id = attr.ib() | |
745 | receipt_type = attr.ib() | |
746 | user_id = attr.ib() | |
747 | event_ids = attr.ib() | |
748 | data = attr.ib() | |
738 | room_id: str | |
739 | receipt_type: str | |
740 | user_id: str | |
741 | event_ids: List[str] | |
742 | data: JsonDict | |
749 | 743 | |
750 | 744 | |
751 | 745 | def get_verify_key_from_cross_signing_key(key_info): |
308 | 308 | return deferred.addCallback(tuple) |
309 | 309 | |
310 | 310 | |
311 | @attr.s(slots=True) | |
311 | @attr.s(slots=True, auto_attribs=True) | |
312 | 312 | class _LinearizerEntry: |
313 | 313 | # The number of things executing. |
314 | count = attr.ib(type=int) | |
314 | count: int | |
315 | 315 | # Deferreds for the things blocked from executing. |
316 | deferreds = attr.ib(type=collections.OrderedDict) | |
316 | deferreds: collections.OrderedDict | |
317 | 317 | |
318 | 318 | |
319 | 319 | class Linearizer: |
32 | 32 | |
33 | 33 | # This class can't be generic because it uses slots with attrs. |
34 | 34 | # See: https://github.com/python-attrs/attrs/issues/313 |
35 | @attr.s(slots=True) | |
35 | @attr.s(slots=True, auto_attribs=True) | |
36 | 36 | class DictionaryEntry: # should be: Generic[DKT, DV]. |
37 | 37 | """Returned when getting an entry from the cache |
38 | 38 | |
40 | 40 | full: Whether the cache has the full dict or just some keys. |
41 | 41 | If not full then not all requested keys will necessarily be present |
42 | 42 | in `value` |
43 | known_absent: Keys that were looked up in the dict and were not | |
44 | there. | |
43 | known_absent: Keys that were looked up in the dict and were not there. | |
45 | 44 | value: The full or partial dict value |
46 | 45 | """ |
47 | 46 | |
48 | full = attr.ib(type=bool) | |
49 | known_absent = attr.ib(type=Set[Any]) # should be: Set[DKT] | |
50 | value = attr.ib(type=Dict[Any, Any]) # should be: Dict[DKT, DV] | |
47 | full: bool | |
48 | known_absent: Set[Any] # should be: Set[DKT] | |
49 | value: Dict[Any, Any] # should be: Dict[DKT, DV] | |
51 | 50 | |
52 | 51 | def __len__(self) -> int: |
53 | 52 | return len(self.value) |
273 | 273 | self.assertEquals(failure.value.code, 400) |
274 | 274 | self.assertEquals(failure.value.errcode, Codes.EXCLUSIVE) |
275 | 275 | |
276 | def test_get_user_by_req__puppeted_token__not_tracking_puppeted_mau(self): | |
277 | self.store.get_user_by_access_token = simple_async_mock( | |
278 | TokenLookupResult( | |
279 | user_id="@baldrick:matrix.org", | |
280 | device_id="device", | |
281 | token_owner="@admin:matrix.org", | |
282 | ) | |
283 | ) | |
284 | self.store.insert_client_ip = simple_async_mock(None) | |
285 | request = Mock(args={}) | |
286 | request.getClientIP.return_value = "127.0.0.1" | |
287 | request.args[b"access_token"] = [self.test_token] | |
288 | request.requestHeaders.getRawHeaders = mock_getRawHeaders() | |
289 | self.get_success(self.auth.get_user_by_req(request)) | |
290 | self.store.insert_client_ip.assert_called_once() | |
291 | ||
292 | def test_get_user_by_req__puppeted_token__tracking_puppeted_mau(self): | |
293 | self.auth._track_puppeted_user_ips = True | |
294 | self.store.get_user_by_access_token = simple_async_mock( | |
295 | TokenLookupResult( | |
296 | user_id="@baldrick:matrix.org", | |
297 | device_id="device", | |
298 | token_owner="@admin:matrix.org", | |
299 | ) | |
300 | ) | |
301 | self.store.insert_client_ip = simple_async_mock(None) | |
302 | request = Mock(args={}) | |
303 | request.getClientIP.return_value = "127.0.0.1" | |
304 | request.args[b"access_token"] = [self.test_token] | |
305 | request.requestHeaders.getRawHeaders = mock_getRawHeaders() | |
306 | self.get_success(self.auth.get_user_by_req(request)) | |
307 | self.assertEquals(self.store.insert_client_ip.call_count, 2) | |
308 | ||
276 | 309 | def test_get_user_from_macaroon(self): |
277 | 310 | self.store.get_user_by_access_token = simple_async_mock( |
278 | 311 | TokenLookupResult(user_id="@baldrick:matrix.org", device_id="device") |
12 | 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | from typing import Iterable | |
15 | 16 | from unittest import mock |
16 | 17 | |
18 | from parameterized import parameterized | |
17 | 19 | from signedjson import key as key, sign as sign |
18 | 20 | |
19 | 21 | from twisted.internet import defer |
22 | 24 | from synapse.api.errors import Codes, SynapseError |
23 | 25 | |
24 | 26 | from tests import unittest |
27 | from tests.test_utils import make_awaitable | |
25 | 28 | |
26 | 29 | |
27 | 30 | class E2eKeysHandlerTestCase(unittest.HomeserverTestCase): |
764 | 767 | remote_user_id = "@test:other" |
765 | 768 | local_user_id = "@test:test" |
766 | 769 | |
770 | # Pretend we're sharing a room with the user we're querying. If not, | |
771 | # `_query_devices_for_destination` will return early. | |
767 | 772 | self.store.get_rooms_for_user = mock.Mock( |
768 | 773 | return_value=defer.succeed({"some_room_id"}) |
769 | 774 | ) |
830 | 835 | } |
831 | 836 | }, |
832 | 837 | ) |
838 | ||
839 | @parameterized.expand( | |
840 | [ | |
841 | # The remote homeserver's response indicates that this user has 0/1/2 devices. | |
842 | ([],), | |
843 | (["device_1"],), | |
844 | (["device_1", "device_2"],), | |
845 | ] | |
846 | ) | |
847 | def test_query_all_devices_caches_result(self, device_ids: Iterable[str]): | |
848 | """Test that requests for all of a remote user's devices are cached. | |
849 | ||
850 | We do this by asserting that only one call over federation was made, and that | |
851 | the two queries to the local homeserver produce the same response. | |
852 | """ | |
853 | local_user_id = "@test:test" | |
854 | remote_user_id = "@test:other" | |
855 | request_body = {"device_keys": {remote_user_id: []}} | |
856 | ||
857 | response_devices = [ | |
858 | { | |
859 | "device_id": device_id, | |
860 | "keys": { | |
861 | "algorithms": ["dummy"], | |
862 | "device_id": device_id, | |
863 | "keys": {f"dummy:{device_id}": "dummy"}, | |
864 | "signatures": {device_id: {f"dummy:{device_id}": "dummy"}}, | |
865 | "unsigned": {}, | |
866 | "user_id": "@test:other", | |
867 | }, | |
868 | } | |
869 | for device_id in device_ids | |
870 | ] | |
871 | ||
872 | response_body = { | |
873 | "devices": response_devices, | |
874 | "user_id": remote_user_id, | |
875 | "stream_id": 12345, # an integer, according to the spec | |
876 | } | |
877 | ||
878 | e2e_handler = self.hs.get_e2e_keys_handler() | |
879 | ||
880 | # Pretend we're sharing a room with the user we're querying. If not, | |
881 | # `_query_devices_for_destination` will return early. | |
882 | mock_get_rooms = mock.patch.object( | |
883 | self.store, | |
884 | "get_rooms_for_user", | |
885 | new_callable=mock.MagicMock, | |
886 | return_value=make_awaitable(["some_room_id"]), | |
887 | ) | |
888 | mock_request = mock.patch.object( | |
889 | self.hs.get_federation_client(), | |
890 | "query_user_devices", | |
891 | new_callable=mock.MagicMock, | |
892 | return_value=make_awaitable(response_body), | |
893 | ) | |
894 | ||
895 | with mock_get_rooms, mock_request as mocked_federation_request: | |
896 | # Make the first query and sanity check it succeeds. | |
897 | response_1 = self.get_success( | |
898 | e2e_handler.query_devices( | |
899 | request_body, | |
900 | timeout=10, | |
901 | from_user_id=local_user_id, | |
902 | from_device_id="some_device_id", | |
903 | ) | |
904 | ) | |
905 | self.assertEqual(response_1["failures"], {}) | |
906 | ||
907 | # We should have made a federation request to do so. | |
908 | mocked_federation_request.assert_called_once() | |
909 | ||
910 | # Reset the mock so we can prove we don't make a second federation request. | |
911 | mocked_federation_request.reset_mock() | |
912 | ||
913 | # Repeat the query. | |
914 | response_2 = self.get_success( | |
915 | e2e_handler.query_devices( | |
916 | request_body, | |
917 | timeout=10, | |
918 | from_user_id=local_user_id, | |
919 | from_device_id="some_device_id", | |
920 | ) | |
921 | ) | |
922 | self.assertEqual(response_2["failures"], {}) | |
923 | ||
924 | # We should not have made a second federation request. | |
925 | mocked_federation_request.assert_not_called() | |
926 | ||
927 | # The two requests to the local homeserver should be identical. | |
928 | self.assertEqual(response_1, response_2) |
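The test above proves caching by asserting one federation call, resetting the mock, repeating the query, and asserting no further calls. A self-contained sketch of that `assert_called_once` / `reset_mock` / `assert_not_called` pattern against a toy cache-aside lookup (all names here are illustrative):

```python
from unittest import mock

class Client:
    def fetch(self) -> dict:
        return {"devices": []}

client = Client()
cache: dict = {}

def query() -> dict:
    # Cache-aside: only hit the (mocked) remote on a miss.
    if "result" not in cache:
        cache["result"] = client.fetch()
    return cache["result"]

# Same assertion shape as test_query_all_devices_caches_result: one call
# on the first query, none on the second.
with mock.patch.object(Client, "fetch", return_value={"devices": ["d1"]}) as fetched:
    first = query()
    fetched.assert_called_once()
    fetched.reset_mock()
    second = query()
    fetched.assert_not_called()
```

Using `mock.patch.object` as a context manager, as the hunk does, also guarantees the real method is restored even if an assertion fails mid-test.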
21 | 21 | import synapse |
22 | 22 | from synapse.handlers.auth import load_legacy_password_auth_providers |
23 | 23 | from synapse.module_api import ModuleApi |
24 | from synapse.rest.client import devices, login | |
24 | from synapse.rest.client import devices, login, logout | |
25 | 25 | from synapse.types import JsonDict |
26 | 26 | |
27 | 27 | from tests import unittest |
154 | 154 | synapse.rest.admin.register_servlets, |
155 | 155 | login.register_servlets, |
156 | 156 | devices.register_servlets, |
157 | logout.register_servlets, | |
157 | 158 | ] |
158 | 159 | |
159 | 160 | def setUp(self): |
717 | 718 | # ("unknown login type") |
718 | 719 | channel = self._send_password_login("localuser", "localpass") |
719 | 720 | self.assertEqual(channel.code, 400, channel.result) |
721 | ||
722 | def test_on_logged_out(self): | |
723 | """Tests that the on_logged_out callback is called when the user logs out.""" | |
724 | self.register_user("rin", "password") | |
725 | tok = self.login("rin", "password") | |
726 | ||
727 | self.called = False | |
728 | ||
729 | async def on_logged_out(user_id, device_id, access_token): | |
730 | self.called = True | |
731 | ||
732 | on_logged_out = Mock(side_effect=on_logged_out) | |
733 | self.hs.get_password_auth_provider().on_logged_out_callbacks.append( | |
734 | on_logged_out | |
735 | ) | |
736 | ||
737 | channel = self.make_request( | |
738 | "POST", | |
739 | "/_matrix/client/v3/logout", | |
740 | {}, | |
741 | access_token=tok, | |
742 | ) | |
743 | self.assertEqual(channel.code, 200) | |
744 | on_logged_out.assert_called_once() | |
745 | self.assertTrue(self.called) | |
720 | 746 | |
721 | 747 | def _get_login_flows(self) -> JsonDict: |
722 | 748 | channel = self.make_request("GET", "/_matrix/client/r0/login") |
27 | 27 | from synapse.api.errors import AuthError, NotFoundError, SynapseError |
28 | 28 | from synapse.api.room_versions import RoomVersions |
29 | 29 | from synapse.events import make_event_from_dict |
30 | from synapse.federation.transport.client import TransportLayerClient | |
30 | 31 | from synapse.handlers.room_summary import _child_events_comparison_key, _RoomEntry |
31 | 32 | from synapse.rest import admin |
32 | 33 | from synapse.rest.client import login, room |
133 | 134 | self._add_child(self.space, self.room, self.token) |
134 | 135 | |
135 | 136 | def _add_child( |
136 | self, space_id: str, room_id: str, token: str, order: Optional[str] = None | |
137 | self, | |
138 | space_id: str, | |
139 | room_id: str, | |
140 | token: str, | |
141 | order: Optional[str] = None, | |
142 | via: Optional[List[str]] = None, | |
137 | 143 | ) -> None: |
138 | 144 | """Add a child room to a space.""" |
139 | content: JsonDict = {"via": [self.hs.hostname]} | |
145 | if via is None: | |
146 | via = [self.hs.hostname] | |
147 | ||
148 | content: JsonDict = {"via": via} | |
140 | 149 | if order is not None: |
141 | 150 | content["order"] = order |
142 | 151 | self.helper.send_state( |
250 | 259 | result = self.get_success( |
251 | 260 | self.handler.get_room_hierarchy(create_requester(self.user), self.space) |
252 | 261 | ) |
262 | self._assert_hierarchy(result, expected) | |
263 | ||
264 | def test_large_space(self): | |
265 | """Test a space with a large number of rooms.""" | |
266 | rooms = [self.room] | |
267 | # Make at least 51 rooms that are part of the space. | |
268 | for _ in range(55): | |
269 | room = self.helper.create_room_as(self.user, tok=self.token) | |
270 | self._add_child(self.space, room, self.token) | |
271 | rooms.append(room) | |
272 | ||
273 | result = self.get_success(self.handler.get_space_summary(self.user, self.space)) | |
274 | # The spaces result should have the space and the first 50 rooms in it, | |
275 | # along with the links from space -> room for those 50 rooms. | |
276 | expected = [(self.space, rooms[:50])] + [(room, []) for room in rooms[:49]] | |
277 | self._assert_rooms(result, expected) | |
278 | ||
279 | # The result should have the space and the rooms in it, along with the links | |
280 | # from space -> room. | |
281 | expected = [(self.space, rooms)] + [(room, []) for room in rooms] | |
282 | ||
283 | # Make two requests to fully paginate the results. | |
284 | result = self.get_success( | |
285 | self.handler.get_room_hierarchy(create_requester(self.user), self.space) | |
286 | ) | |
287 | result2 = self.get_success( | |
288 | self.handler.get_room_hierarchy( | |
289 | create_requester(self.user), self.space, from_token=result["next_batch"] | |
290 | ) | |
291 | ) | |
292 | # Combine the results. | |
293 | result["rooms"] += result2["rooms"] | |
253 | 294 | self._assert_hierarchy(result, expected) |
254 | 295 | |
255 | 296 | def test_visibility(self): |
1003 | 1044 | ) |
1004 | 1045 | self._assert_hierarchy(result, expected) |
1005 | 1046 | |
1047 | def test_fed_caching(self): | |
1048 | """ | |
1049 | Federation `/hierarchy` responses should be cached. | |
1050 | """ | |
1051 | fed_hostname = self.hs.hostname + "2" | |
1052 | fed_subspace = "#space:" + fed_hostname | |
1053 | fed_room = "#room:" + fed_hostname | |
1054 | ||
1055 | # Add a room to the space which is on another server. | |
1056 | self._add_child(self.space, fed_subspace, self.token, via=[fed_hostname]) | |
1057 | ||
1058 | federation_requests = 0 | |
1059 | ||
1060 | async def get_room_hierarchy( | |
1061 | _self: TransportLayerClient, | |
1062 | destination: str, | |
1063 | room_id: str, | |
1064 | suggested_only: bool, | |
1065 | ) -> JsonDict: | |
1066 | nonlocal federation_requests | |
1067 | federation_requests += 1 | |
1068 | ||
1069 | return { | |
1070 | "room": { | |
1071 | "room_id": fed_subspace, | |
1072 | "world_readable": True, | |
1073 | "room_type": RoomTypes.SPACE, | |
1074 | "children_state": [ | |
1075 | { | |
1076 | "type": EventTypes.SpaceChild, | |
1077 | "room_id": fed_subspace, | |
1078 | "state_key": fed_room, | |
1079 | "content": {"via": [fed_hostname]}, | |
1080 | }, | |
1081 | ], | |
1082 | }, | |
1083 | "children": [ | |
1084 | { | |
1085 | "room_id": fed_room, | |
1086 | "world_readable": True, | |
1087 | }, | |
1088 | ], | |
1089 | "inaccessible_children": [], | |
1090 | } | |
1091 | ||
1092 | expected = [ | |
1093 | (self.space, [self.room, fed_subspace]), | |
1094 | (self.room, ()), | |
1095 | (fed_subspace, [fed_room]), | |
1096 | (fed_room, ()), | |
1097 | ] | |
1098 | ||
1099 | with mock.patch( | |
1100 | "synapse.federation.transport.client.TransportLayerClient.get_room_hierarchy", | |
1101 | new=get_room_hierarchy, | |
1102 | ): | |
1103 | result = self.get_success( | |
1104 | self.handler.get_room_hierarchy(create_requester(self.user), self.space) | |
1105 | ) | |
1106 | self.assertEqual(federation_requests, 1) | |
1107 | self._assert_hierarchy(result, expected) | |
1108 | ||
1109 | # The previous federation response should be reused. | |
1110 | result = self.get_success( | |
1111 | self.handler.get_room_hierarchy(create_requester(self.user), self.space) | |
1112 | ) | |
1113 | self.assertEqual(federation_requests, 1) | |
1114 | self._assert_hierarchy(result, expected) | |
1115 | ||
1116 | # Expire the response cache | |
1117 | self.reactor.advance(5 * 60 + 1) | |
1118 | ||
1119 | # A new federation request should be made. | |
1120 | result = self.get_success( | |
1121 | self.handler.get_room_hierarchy(create_requester(self.user), self.space) | |
1122 | ) | |
1123 | self.assertEqual(federation_requests, 2) | |
1124 | self._assert_hierarchy(result, expected) | |
1125 | ||
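The `test_fed_caching` test above replaces the federation transport method with a counting stub, then advances the reactor past the cache lifetime to force a re-fetch. A minimal stdlib sketch of that call-counting pattern, with a hypothetical `HierarchyFetcher` (not Synapse's `ResponseCache`); the five-minute TTL mirrors the `reactor.advance(5 * 60 + 1)` in the test:

```python
from unittest import mock


class HierarchyFetcher:
    """Hypothetical client with a naive timestamp-based response cache."""

    CACHE_TTL = 5 * 60  # seconds

    def __init__(self):
        self._cache = {}

    def _fetch_remote(self, room_id):
        # Stand-in for the real federation request.
        return {"room": {"room_id": room_id}}

    def get_hierarchy(self, room_id, now):
        cached = self._cache.get(room_id)
        if cached is not None and now - cached[0] < self.CACHE_TTL:
            return cached[1]  # served from cache: no remote call
        result = self._fetch_remote(room_id)
        self._cache[room_id] = (now, result)
        return result


calls = 0


def counting_fetch(self, room_id):
    # Counting stub, analogous to the patched get_room_hierarchy above.
    global calls
    calls += 1
    return {"room": {"room_id": room_id}}


fetcher = HierarchyFetcher()
with mock.patch.object(HierarchyFetcher, "_fetch_remote", counting_fetch):
    fetcher.get_hierarchy("#space:remote", now=0)
    fetcher.get_hierarchy("#space:remote", now=60)   # within TTL: cached
    assert calls == 1
    fetcher.get_hierarchy("#space:remote", now=301)  # past TTL: refetched
    assert calls == 2
```

The same three assertions appear in the real test: one request, still one request on the cached path, then two once the cache has expired.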
1006 | 1126 | |
1007 | 1127 | class RoomSummaryTestCase(unittest.HomeserverTestCase): |
1008 | 1128 | servlets = [ |
10 | 10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
11 | 11 | # See the License for the specific language governing permissions and |
12 | 12 | # limitations under the License. |
13 | ||
14 | 13 | from typing import Optional |
15 | from unittest.mock import Mock | |
14 | from unittest.mock import MagicMock, Mock, patch | |
16 | 15 | |
17 | 16 | from synapse.api.constants import EventTypes, JoinRules |
18 | 17 | from synapse.api.errors import Codes, ResourceLimitError |
19 | 18 | from synapse.api.filtering import Filtering |
20 | 19 | from synapse.api.room_versions import RoomVersions |
21 | from synapse.handlers.sync import SyncConfig | |
20 | from synapse.handlers.sync import SyncConfig, SyncResult | |
22 | 21 | from synapse.rest import admin |
23 | 22 | from synapse.rest.client import knock, login, room |
24 | 23 | from synapse.server import HomeServer |
26 | 25 | |
27 | 26 | import tests.unittest |
28 | 27 | import tests.utils |
28 | from tests.test_utils import make_awaitable | |
29 | 29 | |
30 | 30 | |
31 | 31 | class SyncTestCase(tests.unittest.HomeserverTestCase): |
184 | 184 | self.assertNotIn(joined_room, [r.room_id for r in result.joined]) |
185 | 185 | self.assertNotIn(invite_room, [r.room_id for r in result.invited]) |
186 | 186 | self.assertNotIn(knock_room, [r.room_id for r in result.knocked]) |
187 | ||
188 | def test_ban_wins_race_with_join(self): | |
189 | """Rooms shouldn't appear under "joined" if a join loses a race to a ban. | |
190 | ||
191 | A complicated edge case. Imagine the following scenario: | |
192 | ||
193 | * you attempt to join a room | |
194 | * racing with that is a ban which comes in over federation, which ends up with | |
195 | an earlier stream_ordering than the join. | |
196 | * you get a sync response with a sync token which is _after_ the ban, but before | |
197 | the join | |
198 | * now your join lands; it is a valid event because its `prev_event`s predate the | |
199 | ban, but will not make it into current_state_events (because bans win over | |
200 | joins in state res, essentially). | |
201 | * when you then do an incremental sync from that token, the only event in the | |
202 | timeline is your join ... and yet you aren't joined. | |
203 | ||
204 | The ban coming in over federation isn't crucial for this behaviour; the key | |
205 | requirements are: | |
206 | 1. the homeserver generates a join event with prev_events that precede the ban | |
207 | (so that it passes the "are you banned" test) | |
208 | 2. the join event has a stream_ordering after that of the ban. | |
209 | ||
210 | We use monkeypatching to artificially trigger condition (1). | |
211 | """ | |
212 | # A local user Alice creates a room. | |
213 | owner = self.register_user("alice", "password") | |
214 | owner_tok = self.login(owner, "password") | |
215 | room_id = self.helper.create_room_as(owner, is_public=True, tok=owner_tok) | |
216 | ||
217 | # Do a sync as Alice to get the latest event in the room. | |
218 | alice_sync_result: SyncResult = self.get_success( | |
219 | self.sync_handler.wait_for_sync_for_user( | |
220 | create_requester(owner), generate_sync_config(owner) | |
221 | ) | |
222 | ) | |
223 | self.assertEqual(len(alice_sync_result.joined), 1) | |
224 | self.assertEqual(alice_sync_result.joined[0].room_id, room_id) | |
225 | last_room_creation_event_id = ( | |
226 | alice_sync_result.joined[0].timeline.events[-1].event_id | |
227 | ) | |
228 | ||
229 | # Eve, a ne'er-do-well, registers. | |
230 | eve = self.register_user("eve", "password") | |
231 | eve_token = self.login(eve, "password") | |
232 | ||
233 | # Alice preemptively bans Eve. | |
234 | self.helper.ban(room_id, owner, eve, tok=owner_tok) | |
235 | ||
236 | # Eve syncs. | |
237 | eve_requester = create_requester(eve) | |
238 | eve_sync_config = generate_sync_config(eve) | |
239 | eve_sync_after_ban: SyncResult = self.get_success( | |
240 | self.sync_handler.wait_for_sync_for_user(eve_requester, eve_sync_config) | |
241 | ) | |
242 | ||
243 | # Sanity check this sync result. We shouldn't be joined to the room. | |
244 | self.assertEqual(eve_sync_after_ban.joined, []) | |
245 | ||
246 | # Eve tries to join the room. We monkey patch the internal logic which selects | |
247 | # the prev_events used when creating the join event, such that the ban does not | |
248 | # precede the join. | |
249 | mocked_get_prev_events = patch.object( | |
250 | self.hs.get_datastore(), | |
251 | "get_prev_events_for_room", | |
252 | new_callable=MagicMock, | |
253 | return_value=make_awaitable([last_room_creation_event_id]), | |
254 | ) | |
255 | with mocked_get_prev_events: | |
256 | self.helper.join(room_id, eve, tok=eve_token) | |
257 | ||
258 | # Eve makes a second, incremental sync. | |
259 | eve_incremental_sync_after_join: SyncResult = self.get_success( | |
260 | self.sync_handler.wait_for_sync_for_user( | |
261 | eve_requester, | |
262 | eve_sync_config, | |
263 | since_token=eve_sync_after_ban.next_batch, | |
264 | ) | |
265 | ) | |
266 | # Eve should not see herself as joined to the room. | |
267 | self.assertEqual(eve_incremental_sync_after_join.joined, []) | |
268 | ||
269 | # If we did a third, initial sync, we should _still_ see Eve is not joined to the room. | |
270 | eve_initial_sync_after_join: SyncResult = self.get_success( | |
271 | self.sync_handler.wait_for_sync_for_user( | |
272 | eve_requester, | |
273 | eve_sync_config, | |
274 | since_token=None, | |
275 | ) | |
276 | ) | |
277 | self.assertEqual(eve_initial_sync_after_join.joined, []) | |
187 | 278 | |
188 | 279 | |
189 | 280 | _request_key = 0 |
0 | # Copyright 2022 The Matrix.org Foundation C.I.C. | |
1 | # | |
2 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | # you may not use this file except in compliance with the License. | |
4 | # You may obtain a copy of the License at | |
5 | # | |
6 | # http://www.apache.org/licenses/LICENSE-2.0 | |
7 | # | |
8 | # Unless required by applicable law or agreed to in writing, software | |
9 | # distributed under the License is distributed on an "AS IS" BASIS, | |
10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | # See the License for the specific language governing permissions and | |
12 | # limitations under the License. | |
13 | from http import HTTPStatus | |
14 | from typing import Dict | |
15 | ||
16 | from twisted.web.resource import Resource | |
17 | ||
18 | from synapse.app.homeserver import SynapseHomeServer | |
19 | from synapse.config.server import HttpListenerConfig, HttpResourceConfig, ListenerConfig | |
20 | from synapse.http.site import SynapseSite | |
21 | ||
22 | from tests.server import make_request | |
23 | from tests.unittest import HomeserverTestCase, create_resource_tree, override_config | |
24 | ||
25 | ||
26 | class WebClientTests(HomeserverTestCase): | |
27 | @override_config( | |
28 | { | |
29 | "web_client_location": "https://example.org", | |
30 | } | |
31 | ) | |
32 | def test_webclient_resolves_with_client_resource(self): | |
33 | """ | |
34 | Tests that both client and webclient resources can be accessed simultaneously. | |
35 | ||
36 | This is a regression test created in response to https://github.com/matrix-org/synapse/issues/11763. | |
37 | """ | |
38 | for resource_name_order_list in [ | |
39 | ["webclient", "client"], | |
40 | ["client", "webclient"], | |
41 | ]: | |
42 | # Create a dictionary from path regex -> resource | |
43 | resource_dict: Dict[str, Resource] = {} | |
44 | ||
45 | for resource_name in resource_name_order_list: | |
46 | resource_dict.update( | |
47 | SynapseHomeServer._configure_named_resource(self.hs, resource_name) | |
48 | ) | |
49 | ||
50 | # Create a root resource which ties the above resources together into one | |
51 | root_resource = Resource() | |
52 | create_resource_tree(resource_dict, root_resource) | |
53 | ||
54 | # Create a site configured with this resource to make HTTP requests against | |
55 | listener_config = ListenerConfig( | |
56 | port=8008, | |
57 | bind_addresses=["127.0.0.1"], | |
58 | type="http", | |
59 | http_options=HttpListenerConfig( | |
60 | resources=[HttpResourceConfig(names=resource_name_order_list)] | |
61 | ), | |
62 | ) | |
63 | test_site = SynapseSite( | |
64 | logger_name="synapse.access.http.fake", | |
65 | site_tag=self.hs.config.server.server_name, | |
66 | config=listener_config, | |
67 | resource=root_resource, | |
68 | server_version_string="1", | |
69 | max_request_body_size=1234, | |
70 | reactor=self.reactor, | |
71 | ) | |
72 | ||
73 | # Attempt to make requests to endpoints on both the webclient and client resources | |
74 | # on test_site. | |
75 | self._request_client_and_webclient_resources(test_site) | |
76 | ||
77 | def _request_client_and_webclient_resources(self, test_site: SynapseSite) -> None: | |
78 | """Make a request to an endpoint on both the webclient and client-server resources | |
79 | of the given SynapseSite. | |
80 | ||
81 | Args: | |
82 | test_site: The SynapseSite object to make requests against. | |
83 | """ | |
84 | ||
85 | # Ensure that the *webclient* resource is behaving as expected (we get redirected to | |
86 | # the configured web_client_location) | |
87 | channel = make_request( | |
88 | self.reactor, | |
89 | site=test_site, | |
90 | method="GET", | |
91 | path="/_matrix/client", | |
92 | ) | |
93 | # Check that we are being redirected to the webclient location URI. | |
94 | self.assertEqual(channel.code, HTTPStatus.FOUND) | |
95 | self.assertEqual( | |
96 | channel.headers.getRawHeaders("Location"), ["https://example.org"] | |
97 | ) | |
98 | ||
99 | # Ensure that a request to the *client* resource works. | |
100 | channel = make_request( | |
101 | self.reactor, | |
102 | site=test_site, | |
103 | method="GET", | |
104 | path="/_matrix/client/v3/login", | |
105 | ) | |
106 | self.assertEqual(channel.code, HTTPStatus.OK) | |
107 | self.assertIn("flows", channel.json_body) |
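The new webclient test asserts that a GET against the webclient path answers `302 Found` with a `Location` header set to the configured `web_client_location`. A self-contained stdlib sketch of that contract, using a toy `http.server` handler rather than Synapse's resource tree:

```python
import threading
import urllib.error
import urllib.request
from http import HTTPStatus
from http.server import BaseHTTPRequestHandler, HTTPServer

WEB_CLIENT_LOCATION = "https://example.org"  # stand-in for the config value


class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/_matrix/client":
            # Mirror the webclient resource: redirect to web_client_location.
            self.send_response(HTTPStatus.FOUND)
            self.send_header("Location", WEB_CLIENT_LOCATION)
        else:
            self.send_response(HTTPStatus.NOT_FOUND)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging


server = HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()


class NoRedirect(urllib.request.HTTPRedirectHandler):
    # Returning None makes the 302 surface as an HTTPError we can inspect.
    def redirect_request(self, *args, **kwargs):
        return None


opener = urllib.request.build_opener(NoRedirect)
try:
    opener.open("http://127.0.0.1:%d/_matrix/client" % server.server_address[1])
    status, location = None, None
except urllib.error.HTTPError as e:
    status, location = e.code, e.headers["Location"]
server.shutdown()

assert status == HTTPStatus.FOUND
assert location == WEB_CLIENT_LOCATION
```

Disabling redirect following is the key step; otherwise the client would chase the `Location` header instead of letting us assert on the 302 itself, which is also why the test inspects `channel.code` and the raw header rather than the final body.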
313 | 313 | retry_interval, |
314 | 314 | last_successful_stream_ordering, |
315 | 315 | ) in dest: |
316 | self.get_success( | |
317 | self.store.set_destination_retry_timings( | |
318 | destination, failure_ts, retry_last_ts, retry_interval | |
319 | ) | |
320 | ) | |
321 | self.get_success( | |
322 | self.store.set_destination_last_successful_stream_ordering( | |
323 | destination, last_successful_stream_ordering | |
324 | ) | |
316 | self._create_destination( | |
317 | destination, | |
318 | failure_ts, | |
319 | retry_last_ts, | |
320 | retry_interval, | |
321 | last_successful_stream_ordering, | |
325 | 322 | ) |
326 | 323 | |
327 | 324 | # order by default (destination) |
412 | 409 | _search_test(None, "foo") |
413 | 410 | _search_test(None, "bar") |
414 | 411 | |
415 | def test_get_single_destination(self) -> None: | |
416 | """ | |
417 | Get one specific destinations. | |
418 | """ | |
419 | self._create_destinations(5) | |
412 | def test_get_single_destination_with_retry_timings(self) -> None: | |
413 | """Get one specific destination which has retry timings.""" | |
414 | self._create_destinations(1) | |
420 | 415 | |
421 | 416 | channel = self.make_request( |
422 | 417 | "GET", |
431 | 426 | # convert channel.json_body into a List |
432 | 427 | self._check_fields([channel.json_body]) |
433 | 428 | |
429 | def test_get_single_destination_no_retry_timings(self) -> None: | |
430 | """Get one specific destination which has no retry timings.""" | |
431 | self._create_destination("sub0.example.com") | |
432 | ||
433 | channel = self.make_request( | |
434 | "GET", | |
435 | self.url + "/sub0.example.com", | |
436 | access_token=self.admin_user_tok, | |
437 | ) | |
438 | ||
439 | self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body) | |
440 | self.assertEqual("sub0.example.com", channel.json_body["destination"]) | |
441 | self.assertEqual(0, channel.json_body["retry_last_ts"]) | |
442 | self.assertEqual(0, channel.json_body["retry_interval"]) | |
443 | self.assertIsNone(channel.json_body["failure_ts"]) | |
444 | self.assertIsNone(channel.json_body["last_successful_stream_ordering"]) | |
445 | ||
446 | def _create_destination( | |
447 | self, | |
448 | destination: str, | |
449 | failure_ts: Optional[int] = None, | |
450 | retry_last_ts: int = 0, | |
451 | retry_interval: int = 0, | |
452 | last_successful_stream_ordering: Optional[int] = None, | |
453 | ) -> None: | |
454 | """Create one specific destination | |
455 | ||
456 | Args: | |
457 | destination: the destination we have successfully sent to | |
458 | failure_ts: when the server started failing (ms since epoch) | |
459 | retry_last_ts: time of last retry attempt in unix epoch ms | |
460 | retry_interval: how long until next retry in ms | |
461 | last_successful_stream_ordering: the stream_ordering of the most | |
462 | recent successfully-sent PDU | |
463 | """ | |
464 | self.get_success( | |
465 | self.store.set_destination_retry_timings( | |
466 | destination, failure_ts, retry_last_ts, retry_interval | |
467 | ) | |
468 | ) | |
469 | if last_successful_stream_ordering is not None: | |
470 | self.get_success( | |
471 | self.store.set_destination_last_successful_stream_ordering( | |
472 | destination, last_successful_stream_ordering | |
473 | ) | |
474 | ) | |
475 | ||
434 | 476 | def _create_destinations(self, number_destinations: int) -> None: |
435 | 477 | """Create a number of destinations |
436 | 478 | |
439 | 481 | """ |
440 | 482 | for i in range(0, number_destinations): |
441 | 483 | dest = f"sub{i}.example.com" |
442 | self.get_success(self.store.set_destination_retry_timings(dest, 50, 50, 50)) | |
443 | self.get_success( | |
444 | self.store.set_destination_last_successful_stream_ordering(dest, 100) | |
445 | ) | |
484 | self._create_destination(dest, 50, 50, 50, 100) | |
446 | 485 | |
447 | 486 | def _check_fields(self, content: List[JsonDict]) -> None: |
448 | 487 | """Checks that the expected destination attributes are present in content |
222 | 222 | # Create all possible single character tokens |
223 | 223 | tokens = [] |
224 | 224 | for c in string.ascii_letters + string.digits + "._~-": |
225 | tokens.append( | |
226 | { | |
227 | "token": c, | |
228 | "uses_allowed": None, | |
229 | "pending": 0, | |
230 | "completed": 0, | |
231 | "expiry_time": None, | |
232 | } | |
233 | ) | |
225 | tokens.append((c, None, 0, 0, None)) | |
234 | 226 | self.get_success( |
235 | 227 | self.store.db_pool.simple_insert_many( |
236 | 228 | "registration_tokens", |
237 | tokens, | |
238 | "create_all_registration_tokens", | |
229 | keys=("token", "uses_allowed", "pending", "completed", "expiry_time"), | |
230 | values=tokens, | |
231 | desc="create_all_registration_tokens", | |
239 | 232 | ) |
240 | 233 | ) |
241 | 234 |
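The hunk above migrates `simple_insert_many` from per-row dicts to a `keys=`/`values=` pair: column names stated once, rows as positional tuples. That shape maps directly onto a parameterised bulk insert; a stdlib `sqlite3` sketch with hypothetical table DDL (not Synapse's schema helpers):

```python
import sqlite3
import string

keys = ("token", "uses_allowed", "pending", "completed", "expiry_time")
# One row per single-character token, mirroring the tuples built in the test.
values = [(c, None, 0, 0, None) for c in string.ascii_letters + string.digits + "._~-"]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE registration_tokens "
    "(token TEXT, uses_allowed INT, pending INT, completed INT, expiry_time INT)"
)
# executemany substitutes one positional tuple per row, matching keys order.
conn.executemany(
    "INSERT INTO registration_tokens (%s) VALUES (?, ?, ?, ?, ?)" % ", ".join(keys),
    values,
)
count = conn.execute("SELECT COUNT(*) FROM registration_tokens").fetchone()[0]
assert count == len(values)
```

Keeping the column list in one place is what the new `keys=`/`values=` signature buys: each tuple's position is checked against a single ordering, instead of every row dict repeating (and possibly misspelling) the column names.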
1088 | 1088 | ) |
1089 | 1089 | room_ids.append(room_id) |
1090 | 1090 | |
1091 | room_ids.sort() | |
1092 | ||
1091 | 1093 | # Request the list of rooms |
1092 | 1094 | url = "/_synapse/admin/v1/rooms" |
1093 | 1095 | channel = self.make_request( |
1359 | 1361 | room_id_2 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok) |
1360 | 1362 | room_id_3 = self.helper.create_room_as(self.admin_user, tok=self.admin_user_tok) |
1361 | 1363 | |
1364 | # Also create a list sorted by IDs for properties that are equal (and thus sorted by room_id) | |
1365 | sorted_by_room_id_asc = [room_id_1, room_id_2, room_id_3] | |
1366 | sorted_by_room_id_asc.sort() | |
1367 | sorted_by_room_id_desc = sorted_by_room_id_asc.copy() | |
1368 | sorted_by_room_id_desc.reverse() | |
1369 | ||
1362 | 1370 | # Set room names in alphabetical order. room 1 -> A, 2 -> B, 3 -> C |
1363 | 1371 | self.helper.send_state( |
1364 | 1372 | room_id_1, |
1404 | 1412 | _order_test("canonical_alias", [room_id_1, room_id_2, room_id_3]) |
1405 | 1413 | _order_test("canonical_alias", [room_id_3, room_id_2, room_id_1], reverse=True) |
1406 | 1414 | |
1415 | # Note: joined_member counts are sorted in descending order when dir=f | |
1407 | 1416 | _order_test("joined_members", [room_id_3, room_id_2, room_id_1]) |
1408 | 1417 | _order_test("joined_members", [room_id_1, room_id_2, room_id_3], reverse=True) |
1409 | 1418 | |
1419 | # Note: joined_local_member counts are sorted in descending order when dir=f | |
1410 | 1420 | _order_test("joined_local_members", [room_id_3, room_id_2, room_id_1]) |
1411 | 1421 | _order_test( |
1412 | 1422 | "joined_local_members", [room_id_1, room_id_2, room_id_3], reverse=True |
1413 | 1423 | ) |
1414 | 1424 | |
1415 | _order_test("version", [room_id_1, room_id_2, room_id_3]) | |
1416 | _order_test("version", [room_id_1, room_id_2, room_id_3], reverse=True) | |
1417 | ||
1418 | _order_test("creator", [room_id_1, room_id_2, room_id_3]) | |
1419 | _order_test("creator", [room_id_1, room_id_2, room_id_3], reverse=True) | |
1420 | ||
1421 | _order_test("encryption", [room_id_1, room_id_2, room_id_3]) | |
1422 | _order_test("encryption", [room_id_1, room_id_2, room_id_3], reverse=True) | |
1423 | ||
1424 | _order_test("federatable", [room_id_1, room_id_2, room_id_3]) | |
1425 | _order_test("federatable", [room_id_1, room_id_2, room_id_3], reverse=True) | |
1426 | ||
1427 | _order_test("public", [room_id_1, room_id_2, room_id_3]) | |
1428 | # Different sort order of SQlite and PostreSQL | |
1429 | # _order_test("public", [room_id_3, room_id_2, room_id_1], reverse=True) | |
1430 | ||
1431 | _order_test("join_rules", [room_id_1, room_id_2, room_id_3]) | |
1432 | _order_test("join_rules", [room_id_1, room_id_2, room_id_3], reverse=True) | |
1433 | ||
1434 | _order_test("guest_access", [room_id_1, room_id_2, room_id_3]) | |
1435 | _order_test("guest_access", [room_id_1, room_id_2, room_id_3], reverse=True) | |
1436 | ||
1437 | _order_test("history_visibility", [room_id_1, room_id_2, room_id_3]) | |
1438 | _order_test( | |
1439 | "history_visibility", [room_id_1, room_id_2, room_id_3], reverse=True | |
1440 | ) | |
1441 | ||
1425 | # Note: versions are sorted in descending order when dir=f | |
1426 | _order_test("version", sorted_by_room_id_asc, reverse=True) | |
1427 | _order_test("version", sorted_by_room_id_desc) | |
1428 | ||
1429 | _order_test("creator", sorted_by_room_id_asc) | |
1430 | _order_test("creator", sorted_by_room_id_desc, reverse=True) | |
1431 | ||
1432 | _order_test("encryption", sorted_by_room_id_asc) | |
1433 | _order_test("encryption", sorted_by_room_id_desc, reverse=True) | |
1434 | ||
1435 | _order_test("federatable", sorted_by_room_id_asc) | |
1436 | _order_test("federatable", sorted_by_room_id_desc, reverse=True) | |
1437 | ||
1438 | _order_test("public", sorted_by_room_id_asc) | |
1439 | _order_test("public", sorted_by_room_id_desc, reverse=True) | |
1440 | ||
1441 | _order_test("join_rules", sorted_by_room_id_asc) | |
1442 | _order_test("join_rules", sorted_by_room_id_desc, reverse=True) | |
1443 | ||
1444 | _order_test("guest_access", sorted_by_room_id_asc) | |
1445 | _order_test("guest_access", sorted_by_room_id_desc, reverse=True) | |
1446 | ||
1447 | _order_test("history_visibility", sorted_by_room_id_asc) | |
1448 | _order_test("history_visibility", sorted_by_room_id_desc, reverse=True) | |
1449 | ||
1450 | # Note: state_event counts are sorted in descending order when dir=f | |
1442 | 1451 | _order_test("state_events", [room_id_3, room_id_2, room_id_1]) |
1443 | 1452 | _order_test("state_events", [room_id_1, room_id_2, room_id_3], reverse=True) |
1444 | 1453 |
1180 | 1180 | self.other_user, device_id=None, valid_until_ms=None |
1181 | 1181 | ) |
1182 | 1182 | ) |
1183 | ||
1183 | 1184 | self.url_prefix = "/_synapse/admin/v2/users/%s" |
1184 | 1185 | self.url_other_user = self.url_prefix % self.other_user |
1185 | 1186 | |
1187 | 1188 | """ |
1188 | 1189 | If the user is not a server admin, an error is returned. |
1189 | 1190 | """ |
1190 | url = "/_synapse/admin/v2/users/@bob:test" | |
1191 | url = self.url_prefix % "@bob:test" | |
1191 | 1192 | |
1192 | 1193 | channel = self.make_request( |
1193 | 1194 | "GET", |
1215 | 1216 | |
1216 | 1217 | channel = self.make_request( |
1217 | 1218 | "GET", |
1218 | "/_synapse/admin/v2/users/@unknown_person:test", | |
1219 | self.url_prefix % "@unknown_person:test", | |
1219 | 1220 | access_token=self.admin_user_tok, |
1220 | 1221 | ) |
1221 | 1222 | |
1336 | 1337 | """ |
1337 | 1338 | Check that a new admin user is created successfully. |
1338 | 1339 | """ |
1339 | url = "/_synapse/admin/v2/users/@bob:test" | |
1340 | url = self.url_prefix % "@bob:test" | |
1340 | 1341 | |
1341 | 1342 | # Create user (server admin) |
1342 | 1343 | body = { |
1385 | 1386 | """ |
1386 | 1387 | Check that a new regular user is created successfully. |
1387 | 1388 | """ |
1388 | url = "/_synapse/admin/v2/users/@bob:test" | |
1389 | url = self.url_prefix % "@bob:test" | |
1389 | 1390 | |
1390 | 1391 | # Create user |
1391 | 1392 | body = { |
1477 | 1478 | ) |
1478 | 1479 | |
1479 | 1480 | # Register new user with admin API |
1480 | url = "/_synapse/admin/v2/users/@bob:test" | |
1481 | url = self.url_prefix % "@bob:test" | |
1481 | 1482 | |
1482 | 1483 | # Create user |
1483 | 1484 | channel = self.make_request( |
1514 | 1515 | ) |
1515 | 1516 | |
1516 | 1517 | # Register new user with admin API |
1517 | url = "/_synapse/admin/v2/users/@bob:test" | |
1518 | url = self.url_prefix % "@bob:test" | |
1518 | 1519 | |
1519 | 1520 | # Create user |
1520 | 1521 | channel = self.make_request( |
1544 | 1545 | Check that a new regular user is created successfully and |
1545 | 1546 | got an email pusher. |
1546 | 1547 | """ |
1547 | url = "/_synapse/admin/v2/users/@bob:test" | |
1548 | url = self.url_prefix % "@bob:test" | |
1548 | 1549 | |
1549 | 1550 | # Create user |
1550 | 1551 | body = { |
1587 | 1588 | Check that a new regular user is created successfully and |
1588 | 1589 | got not an email pusher. |
1589 | 1590 | """ |
1590 | url = "/_synapse/admin/v2/users/@bob:test" | |
1591 | url = self.url_prefix % "@bob:test" | |
1591 | 1592 | |
1592 | 1593 | # Create user |
1593 | 1594 | body = { |
2084 | 2085 | self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body) |
2085 | 2086 | self.assertEqual("@user:test", channel.json_body["name"]) |
2086 | 2087 | self.assertTrue(channel.json_body["deactivated"]) |
2087 | self.assertIsNone(channel.json_body["password_hash"]) | |
2088 | 2088 | self.assertEqual(0, len(channel.json_body["threepids"])) |
2089 | 2089 | self.assertEqual("mxc://servername/mediaid", channel.json_body["avatar_url"]) |
2090 | 2090 | self.assertEqual("User", channel.json_body["displayname"]) |
2091 | ||
2092 | # This key was removed intentionally. Ensure it is not accidentally re-included. | |
2093 | self.assertNotIn("password_hash", channel.json_body) | |
2094 | ||
2091 | 2095 | # the user is deactivated, the threepid will be deleted |
2092 | 2096 | |
2093 | 2097 | # Get user |
2100 | 2104 | self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body) |
2101 | 2105 | self.assertEqual("@user:test", channel.json_body["name"]) |
2102 | 2106 | self.assertTrue(channel.json_body["deactivated"]) |
2103 | self.assertIsNone(channel.json_body["password_hash"]) | |
2104 | 2107 | self.assertEqual(0, len(channel.json_body["threepids"])) |
2105 | 2108 | self.assertEqual("mxc://servername/mediaid", channel.json_body["avatar_url"]) |
2106 | 2109 | self.assertEqual("User", channel.json_body["displayname"]) |
2107 | 2110 | |
2111 | # This key was removed intentionally. Ensure it is not accidentally re-included. | |
2112 | self.assertNotIn("password_hash", channel.json_body) | |
2113 | ||
2108 | 2114 | @override_config({"user_directory": {"enabled": True, "search_all_users": True}}) |
2109 | 2115 | def test_change_name_deactivate_user_user_directory(self): |
2110 | 2116 | """ |
2176 | 2182 | self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body) |
2177 | 2183 | self.assertEqual("@user:test", channel.json_body["name"]) |
2178 | 2184 | self.assertFalse(channel.json_body["deactivated"]) |
2179 | self.assertIsNotNone(channel.json_body["password_hash"]) | |
2180 | 2185 | self._is_erased("@user:test", False) |
2186 | ||
2187 | # This key was removed intentionally. Ensure it is not accidentally re-included. | |
2188 | self.assertNotIn("password_hash", channel.json_body) | |
2181 | 2189 | |
2182 | 2190 | @override_config({"password_config": {"localdb_enabled": False}}) |
2183 | 2191 | def test_reactivate_user_localdb_disabled(self): |
2208 | 2216 | self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body) |
2209 | 2217 | self.assertEqual("@user:test", channel.json_body["name"]) |
2210 | 2218 | self.assertFalse(channel.json_body["deactivated"]) |
2211 | self.assertIsNone(channel.json_body["password_hash"]) | |
2212 | 2219 | self._is_erased("@user:test", False) |
2220 | ||
2221 | # This key was removed intentionally. Ensure it is not accidentally re-included. | |
2222 | self.assertNotIn("password_hash", channel.json_body) | |
2213 | 2223 | |
2214 | 2224 | @override_config({"password_config": {"enabled": False}}) |
2215 | 2225 | def test_reactivate_user_password_disabled(self): |
2240 | 2250 | self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body) |
2241 | 2251 | self.assertEqual("@user:test", channel.json_body["name"]) |
2242 | 2252 | self.assertFalse(channel.json_body["deactivated"]) |
2243 | self.assertIsNone(channel.json_body["password_hash"]) | |
2244 | 2253 | self._is_erased("@user:test", False) |
2254 | ||
2255 | # This key was removed intentionally. Ensure it is not accidentally re-included. | |
2256 | self.assertNotIn("password_hash", channel.json_body) | |
2245 | 2257 | |
2246 | 2258 | def test_set_user_as_admin(self): |
2247 | 2259 | """ |
2327 | 2339 | Ensure an account can't accidentally be deactivated by using a str value |
2328 | 2340 | for the deactivated body parameter |
2329 | 2341 | """ |
2330 | url = "/_synapse/admin/v2/users/@bob:test" | |
2342 | url = self.url_prefix % "@bob:test" | |
2331 | 2343 | |
2332 | 2344 | # Create user |
2333 | 2345 | channel = self.make_request( |
2391 | 2403 | # Deactivate the user. |
2392 | 2404 | channel = self.make_request( |
2393 | 2405 | "PUT", |
2394 | "/_synapse/admin/v2/users/%s" % urllib.parse.quote(user_id), | |
2406 | self.url_prefix % urllib.parse.quote(user_id), | |
2395 | 2407 | access_token=self.admin_user_tok, |
2396 | 2408 | content={"deactivated": True}, |
2397 | 2409 | ) |
2398 | 2410 | self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body) |
2399 | 2411 | self.assertTrue(channel.json_body["deactivated"]) |
2400 | self.assertIsNone(channel.json_body["password_hash"]) | |
2401 | 2412 | self._is_erased(user_id, False) |
2402 | 2413 | d = self.store.mark_user_erased(user_id) |
2403 | 2414 | self.assertIsNone(self.get_success(d)) |
2404 | 2415 | self._is_erased(user_id, True) |
2416 | ||
2417 | # This key was removed intentionally. Ensure it is not accidentally re-included. | |
2418 | self.assertNotIn("password_hash", channel.json_body) | |
2405 | 2419 | |
2406 | 2420 | def _check_fields(self, content: JsonDict): |
2407 | 2421 | """Checks that the expected user attributes are present in content |
2415 | 2429 | self.assertIn("admin", content) |
2416 | 2430 | self.assertIn("deactivated", content) |
2417 | 2431 | self.assertIn("shadow_banned", content) |
2418 | self.assertIn("password_hash", content) | |
2419 | 2432 | self.assertIn("creation_ts", content) |
2420 | 2433 | self.assertIn("appservice_id", content) |
2421 | 2434 | self.assertIn("consent_server_notice_sent", content) |
2422 | 2435 | self.assertIn("consent_version", content) |
2423 | 2436 | self.assertIn("external_ids", content) |
2437 | ||
2438 | # This key was removed intentionally. Ensure it is not accidentally re-included. | |
2439 | self.assertNotIn("password_hash", content) | |
2424 | 2440 | |
2425 | 2441 | |
2426 | 2442 | class UserMembershipRestTestCase(unittest.HomeserverTestCase): |
20 | 20 | from synapse.api.constants import EventTypes, RelationTypes |
21 | 21 | from synapse.rest import admin |
22 | 22 | from synapse.rest.client import login, register, relations, room, sync |
23 | from synapse.types import JsonDict | |
23 | 24 | |
24 | 25 | from tests import unittest |
25 | 26 | from tests.server import FakeChannel |
92 | 93 | channel.json_body, |
93 | 94 | ) |
94 | 95 | |
95 | def test_deny_membership(self): | |
96 | """Test that we deny relations on membership events""" | |
97 | channel = self._send_relation(RelationTypes.ANNOTATION, EventTypes.Member) | |
98 | self.assertEquals(400, channel.code, channel.json_body) | |
99 | ||
100 | 96 | def test_deny_invalid_event(self): |
101 | 97 | """Test that we deny relations on non-existant events""" |
102 | 98 | channel = self._send_relation( |
458 | 454 | |
459 | 455 | @unittest.override_config({"experimental_features": {"msc3440_enabled": True}}) |
460 | 456 | def test_bundled_aggregations(self): |
461 | """Test that annotations, references, and threads get correctly bundled.""" | |
457 | """ | |
458 | Test that annotations, references, and threads get correctly bundled. | |
459 | ||
460 | Note that this doesn't test against /relations since only thread relations | |
461 | get bundled via that API. See test_aggregation_get_event_for_thread. | |
462 | ||
463 | See test_edit for a similar test for edits. | |
464 | """ | |
462 | 465 | # Setup by sending a variety of relations. |
463 | 466 | channel = self._send_relation(RelationTypes.ANNOTATION, "m.reaction", "a") |
464 | 467 | self.assertEquals(200, channel.code, channel.json_body) |
486 | 489 | self.assertEquals(200, channel.code, channel.json_body) |
487 | 490 | thread_2 = channel.json_body["event_id"] |
488 | 491 | |
489 | def assert_bundle(actual): | |
492 | def assert_bundle(event_json: JsonDict) -> None: | |
490 | 493 | """Assert the expected values of the bundled aggregations.""" |
494 | relations_dict = event_json["unsigned"].get("m.relations") | |
491 | 495 | |
492 | 496 | # Ensure the fields are as expected. |
493 | 497 | self.assertCountEqual( |
494 | actual.keys(), | |
498 | relations_dict.keys(), | |
495 | 499 | ( |
496 | 500 | RelationTypes.ANNOTATION, |
497 | 501 | RelationTypes.REFERENCE, |
507 | 511 | {"type": "m.reaction", "key": "b", "count": 1}, |
508 | 512 | ] |
509 | 513 | }, |
510 | actual[RelationTypes.ANNOTATION], | |
514 | relations_dict[RelationTypes.ANNOTATION], | |
511 | 515 | ) |
512 | 516 | |
513 | 517 | self.assertEquals( |
514 | 518 | {"chunk": [{"event_id": reply_1}, {"event_id": reply_2}]}, |
515 | actual[RelationTypes.REFERENCE], | |
519 | relations_dict[RelationTypes.REFERENCE], | |
516 | 520 | ) |
517 | 521 | |
518 | 522 | self.assertEquals( |
519 | 523 | 2, |
520 | actual[RelationTypes.THREAD].get("count"), | |
524 | relations_dict[RelationTypes.THREAD].get("count"), | |
525 | ) | |
526 | self.assertTrue( | |
527 | relations_dict[RelationTypes.THREAD].get("current_user_participated") | |
521 | 528 | ) |
522 | 529 | # The latest thread event has some fields that don't matter. |
523 | 530 | self.assert_dict( |
534 | 541 | "type": "m.room.test", |
535 | 542 | "user_id": self.user_id, |
536 | 543 | }, |
537 | actual[RelationTypes.THREAD].get("latest_event"), | |
538 | ) | |
539 | ||
540 | def _find_and_assert_event(events): | |
541 | """ | |
542 | Find the parent event in a chunk of events and assert that it has the proper bundled aggregations. | |
543 | """ | |
544 | for event in events: | |
545 | if event["event_id"] == self.parent_id: | |
546 | break | |
547 | else: | |
548 | raise AssertionError(f"Event {self.parent_id} not found in chunk") | |
549 | assert_bundle(event["unsigned"].get("m.relations")) | |
544 | relations_dict[RelationTypes.THREAD].get("latest_event"), | |
545 | ) | |
550 | 546 | |
551 | 547 | # Request the event directly. |
552 | 548 | channel = self.make_request( |
555 | 551 | access_token=self.user_token, |
556 | 552 | ) |
557 | 553 | self.assertEquals(200, channel.code, channel.json_body) |
558 | assert_bundle(channel.json_body["unsigned"].get("m.relations")) | |
554 | assert_bundle(channel.json_body) | |
559 | 555 | |
560 | 556 | # Request the room messages. |
561 | 557 | channel = self.make_request( |
564 | 560 | access_token=self.user_token, |
565 | 561 | ) |
566 | 562 | self.assertEquals(200, channel.code, channel.json_body) |
567 | _find_and_assert_event(channel.json_body["chunk"]) | |
563 | assert_bundle(self._find_event_in_chunk(channel.json_body["chunk"])) | |
568 | 564 | |
569 | 565 | # Request the room context. |
570 | 566 | channel = self.make_request( |
573 | 569 | access_token=self.user_token, |
574 | 570 | ) |
575 | 571 | self.assertEquals(200, channel.code, channel.json_body) |
576 | assert_bundle(channel.json_body["event"]["unsigned"].get("m.relations")) | |
572 | assert_bundle(channel.json_body["event"]) | |
577 | 573 | |
578 | 574 | # Request sync. |
579 | # channel = self.make_request("GET", "/sync", access_token=self.user_token) | |
580 | # self.assertEquals(200, channel.code, channel.json_body) | |
581 | # room_timeline = channel.json_body["rooms"]["join"][self.room]["timeline"] | |
582 | # self.assertTrue(room_timeline["limited"]) | |
583 | # _find_and_assert_event(room_timeline["events"]) | |
584 | ||
585 | # Note that /relations is tested separately in test_aggregation_get_event_for_thread | |
586 | # since it needs different data configured. | |
575 | channel = self.make_request("GET", "/sync", access_token=self.user_token) | |
576 | self.assertEquals(200, channel.code, channel.json_body) | |
577 | room_timeline = channel.json_body["rooms"]["join"][self.room]["timeline"] | |
578 | self.assertTrue(room_timeline["limited"]) | |
579 | self._find_event_in_chunk(room_timeline["events"]) | |
587 | 580 | |
588 | 581 | def test_aggregation_get_event_for_annotation(self): |
589 | 582 | """Test that annotations do not get bundled aggregations included |
778 | 771 | |
779 | 772 | edit_event_id = channel.json_body["event_id"] |
780 | 773 | |
781 | channel = self.make_request( | |
782 | "GET", | |
783 | "/rooms/%s/event/%s" % (self.room, self.parent_id), | |
784 | access_token=self.user_token, | |
785 | ) | |
786 | self.assertEquals(200, channel.code, channel.json_body) | |
787 | ||
774 | def assert_bundle(event_json: JsonDict) -> None: | |
775 | """Assert the expected values of the bundled aggregations.""" | |
776 | relations_dict = event_json["unsigned"].get("m.relations") | |
777 | self.assertIn(RelationTypes.REPLACE, relations_dict) | |
778 | ||
779 | m_replace_dict = relations_dict[RelationTypes.REPLACE] | |
780 | for key in ["event_id", "sender", "origin_server_ts"]: | |
781 | self.assertIn(key, m_replace_dict) | |
782 | ||
783 | self.assert_dict( | |
784 | {"event_id": edit_event_id, "sender": self.user_id}, m_replace_dict | |
785 | ) | |
786 | ||
787 | channel = self.make_request( | |
788 | "GET", | |
789 | f"/rooms/{self.room}/event/{self.parent_id}", | |
790 | access_token=self.user_token, | |
791 | ) | |
792 | self.assertEquals(200, channel.code, channel.json_body) | |
788 | 793 | self.assertEquals(channel.json_body["content"], new_body) |
789 | ||
790 | relations_dict = channel.json_body["unsigned"].get("m.relations") | |
791 | self.assertIn(RelationTypes.REPLACE, relations_dict) | |
792 | ||
793 | m_replace_dict = relations_dict[RelationTypes.REPLACE] | |
794 | for key in ["event_id", "sender", "origin_server_ts"]: | |
795 | self.assertIn(key, m_replace_dict) | |
796 | ||
797 | self.assert_dict( | |
798 | {"event_id": edit_event_id, "sender": self.user_id}, m_replace_dict | |
799 | ) | |
794 | assert_bundle(channel.json_body) | |
795 | ||
796 | # Request the room messages. | |
797 | channel = self.make_request( | |
798 | "GET", | |
799 | f"/rooms/{self.room}/messages?dir=b", | |
800 | access_token=self.user_token, | |
801 | ) | |
802 | self.assertEquals(200, channel.code, channel.json_body) | |
803 | assert_bundle(self._find_event_in_chunk(channel.json_body["chunk"])) | |
804 | ||
805 | # Request the room context. | |
806 | channel = self.make_request( | |
807 | "GET", | |
808 | f"/rooms/{self.room}/context/{self.parent_id}", | |
809 | access_token=self.user_token, | |
810 | ) | |
811 | self.assertEquals(200, channel.code, channel.json_body) | |
812 | assert_bundle(channel.json_body["event"]) | |
813 | ||
814 | # Request sync, but limit the timeline so it becomes limited (and includes | |
815 | # bundled aggregations). | |
816 | filter = urllib.parse.quote_plus( | |
817 | '{"room": {"timeline": {"limit": 2}}}'.encode() | |
818 | ) | |
819 | channel = self.make_request( | |
820 | "GET", f"/sync?filter={filter}", access_token=self.user_token | |
821 | ) | |
822 | self.assertEquals(200, channel.code, channel.json_body) | |
823 | room_timeline = channel.json_body["rooms"]["join"][self.room]["timeline"] | |
824 | self.assertTrue(room_timeline["limited"]) | |
825 | assert_bundle(self._find_event_in_chunk(room_timeline["events"])) | |
800 | 826 | |
801 | 827 | def test_multi_edit(self): |
802 | 828 | """Test that multiple edits, including attempts by people who |
1103 | 1129 | self.assertEquals(200, channel.code, channel.json_body) |
1104 | 1130 | self.assertEquals(channel.json_body["chunk"], []) |
1105 | 1131 | |
1132 | def _find_event_in_chunk(self, events: List[JsonDict]) -> JsonDict: | |
1133 | """ | |
1134 | Find the parent event in a chunk of events and return it, raising if it is absent. | 
1135 | """ | |
1136 | for event in events: | |
1137 | if event["event_id"] == self.parent_id: | |
1138 | return event | |
1139 | ||
1140 | raise AssertionError(f"Event {self.parent_id} not found in chunk") | |
1141 | ||
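The extracted `_find_event_in_chunk` helper above can be sketched as a standalone function (with `parent_id` passed explicitly instead of read from `self`); the for-loop with a trailing `raise` replaces the earlier `for`/`else` pattern:

```python
from typing import Any, Dict, List

JsonDict = Dict[str, Any]


def find_event_in_chunk(events: List[JsonDict], parent_id: str) -> JsonDict:
    """Return the event whose event_id matches parent_id, or fail loudly."""
    for event in events:
        if event["event_id"] == parent_id:
            return event

    raise AssertionError(f"Event {parent_id} not found in chunk")
```

Returning the whole event (rather than just its `unsigned` contents) lets each caller decide which part to assert on, which is what enables reusing it for `/messages`, `/context`, and `/sync` responses.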
1106 | 1142 | def _send_relation( |
1107 | 1143 | self, |
1108 | 1144 | relation_type: str, |
1118 | 1154 | relation_type: One of `RelationTypes` |
1119 | 1155 | event_type: The type of the event to create |
1120 | 1156 | key: The aggregation key used for m.annotation relation type. |
1121 | content: The content of the created event. | |
1157 | content: The content of the created event. Will be modified to configure | |
1158 | the m.relates_to key based on the other provided parameters. | |
1122 | 1159 | access_token: The access token used to send the relation, defaults |
1123 | 1160 | to `self.user_token` |
1124 | 1161 | parent_id: The event_id this relation relates to. If None, then self.parent_id |
1129 | 1166 | if not access_token: |
1130 | 1167 | access_token = self.user_token |
1131 | 1168 | |
1132 | query = "" | |
1133 | if key: | |
1134 | query = "?key=" + urllib.parse.quote_plus(key.encode("utf-8")) | |
1135 | ||
1136 | 1169 | original_id = parent_id if parent_id else self.parent_id |
1137 | 1170 | |
1171 | if content is None: | |
1172 | content = {} | |
1173 | content["m.relates_to"] = { | |
1174 | "event_id": original_id, | |
1175 | "rel_type": relation_type, | |
1176 | } | |
1177 | if key is not None: | |
1178 | content["m.relates_to"]["key"] = key | |
1179 | ||
1138 | 1180 | channel = self.make_request( |
1139 | 1181 | "POST", |
1140 | "/_matrix/client/unstable/rooms/%s/send_relation/%s/%s/%s%s" | |
1141 | % (self.room, original_id, relation_type, event_type, query), | |
1142 | content or {}, | |
1182 | f"/_matrix/client/v3/rooms/{self.room}/send/{event_type}", | |
1183 | content, | |
1143 | 1184 | access_token=access_token, |
1144 | 1185 | ) |
1145 | 1186 | return channel |
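The `_send_relation` change above switches from the unstable `/send_relation` endpoint (relation encoded in the URL and query string) to the stable `/send` endpoint, with the relation embedded in the event content under `m.relates_to`. A minimal sketch of just the content-building step (the function name is illustrative, not Synapse's):

```python
from typing import Any, Dict, Optional


def build_relation_content(
    rel_type: str,
    parent_id: str,
    key: Optional[str] = None,
    content: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
    """Embed an m.relates_to block into the event content in place."""
    if content is None:
        content = {}
    content["m.relates_to"] = {
        "event_id": parent_id,
        "rel_type": rel_type,
    }
    # The aggregation key is only meaningful for m.annotation relations.
    if key is not None:
        content["m.relates_to"]["key"] = key
    return content
```

The resulting dict is what gets POSTed to `/_matrix/client/v3/rooms/{room}/send/{event_type}` in the updated test helper.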
227 | 227 | self.assertIsNotNone(event) |
228 | 228 | |
229 | 229 | time_now = self.clock.time_msec() |
230 | serialized = self.get_success(self.serializer.serialize_event(event, time_now)) | |
230 | serialized = self.serializer.serialize_event(event, time_now) | |
231 | 231 | |
232 | 232 | return serialized |
233 | 233 |
195 | 195 | expect_code=expect_code, |
196 | 196 | ) |
197 | 197 | |
198 | def ban(self, room: str, src: str, targ: str, **kwargs: object): | |
199 | """A convenience helper: `change_membership` with `membership` preset to "ban".""" | |
200 | self.change_membership( | |
201 | room=room, | |
202 | src=src, | |
203 | targ=targ, | |
204 | membership=Membership.BAN, | |
205 | **kwargs, | |
206 | ) | |
207 | ||
198 | 208 | def change_membership( |
199 | 209 | self, |
200 | 210 | room: str, |
13 | 13 | import hashlib |
14 | 14 | import json |
15 | 15 | import logging |
16 | import os | |
17 | import os.path | |
16 | 18 | import time |
17 | 19 | import uuid |
18 | 20 | import warnings |
70 | 72 | POSTGRES_HOST, |
71 | 73 | POSTGRES_PASSWORD, |
72 | 74 | POSTGRES_USER, |
75 | SQLITE_PERSIST_DB, | |
73 | 76 | USE_POSTGRES_FOR_TESTS, |
74 | 77 | MockClock, |
75 | 78 | default_config, |
738 | 741 | }, |
739 | 742 | } |
740 | 743 | else: |
744 | if SQLITE_PERSIST_DB: | |
745 | # The current working directory is in _trial_temp, so this gets created within that directory. | |
746 | test_db_location = os.path.abspath("test.db") | |
747 | logger.debug("Will persist db to %s", test_db_location) | |
748 | # Ensure each test gets a clean database. | |
749 | try: | |
750 | os.remove(test_db_location) | |
751 | except FileNotFoundError: | |
752 | pass | |
753 | else: | |
754 | logger.debug("Removed existing DB at %s", test_db_location) | |
755 | else: | |
756 | test_db_location = ":memory:" | |
757 | ||
741 | 758 | database_config = { |
742 | 759 | "name": "sqlite3", |
743 | "args": {"database": ":memory:", "cp_min": 1, "cp_max": 1}, | |
760 | "args": {"database": test_db_location, "cp_min": 1, "cp_max": 1}, | |
744 | 761 | } |
745 | 762 | |
746 | 763 | if "db_txn_limit" in kwargs: |
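The database-selection logic added above can be reduced to a small helper for illustration. `SYNAPSE_TEST_PERSIST_SQLITE_DB` is the real flag defined in `tests/utils.py`; the helper itself is a sketch, not Synapse code:

```python
import os
from typing import Mapping


def pick_sqlite_location(env: Mapping[str, str]) -> str:
    """Return the sqlite database path for a test run.

    Defaults to an in-memory database; if the persist flag is set, use an
    on-disk file (created under the current working directory, which for
    trial-based tests is _trial_temp).
    """
    if env.get("SYNAPSE_TEST_PERSIST_SQLITE_DB") is None:
        return ":memory:"

    path = os.path.abspath("test.db")
    # Ensure each test starts with a clean database file.
    try:
        os.remove(path)
    except FileNotFoundError:
        pass
    return path
```

With the flag set, the database survives the test run and can be inspected with the sqlite CLI, which is the debugging workflow the change is aimed at.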
530 | 530 | self.get_success( |
531 | 531 | self.store.db_pool.simple_insert_many( |
532 | 532 | table="federation_inbound_events_staging", |
533 | keys=( | |
534 | "origin", | |
535 | "room_id", | |
536 | "received_ts", | |
537 | "event_id", | |
538 | "event_json", | |
539 | "internal_metadata", | |
540 | ), | |
533 | 541 | values=[ |
534 | { | |
535 | "origin": "some_origin", | |
536 | "room_id": room_id, | |
537 | "received_ts": 0, | |
538 | "event_id": f"$fake_event_id_{i + 1}", | |
539 | "event_json": json_encoder.encode( | |
542 | ( | |
543 | "some_origin", | |
544 | room_id, | |
545 | 0, | |
546 | f"$fake_event_id_{i + 1}", | |
547 | json_encoder.encode( | |
540 | 548 | {"prev_events": [prev_event_format(f"$fake_event_id_{i}")]} |
541 | 549 | ), |
542 | "internal_metadata": "{}", | |
543 | } | |
550 | "{}", | |
551 | ) | |
544 | 552 | for i in range(500) |
545 | 553 | ], |
546 | 554 | desc="test_prune_inbound_federation_queue", |
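The `simple_insert_many` call above migrates from one-dict-per-row to a shared `keys` tuple plus one value tuple per row. A hypothetical converter between the two shapes shows the equivalence (assuming all rows share the same keys, as the new signature requires):

```python
from typing import Any, Dict, List, Sequence, Tuple


def rows_to_keys_and_values(
    rows: List[Dict[str, Any]],
) -> Tuple[Sequence[str], List[Tuple[Any, ...]]]:
    """Convert dict-per-row data to the shared-keys form.

    Relies on dict insertion order (Python 3.7+) so every row's values
    come out in the same column order as `keys`.
    """
    if not rows:
        return (), []
    keys = tuple(rows[0].keys())
    values = [tuple(row[k] for k in keys) for row in rows]
    return keys, values
```

Factoring the keys out of each row avoids repeating the column names five hundred times in the staging-table fixture above.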
16 | 16 | from twisted.internet.defer import succeed |
17 | 17 | |
18 | 18 | from synapse.api.errors import FederationError |
19 | from synapse.api.room_versions import RoomVersions | |
19 | 20 | from synapse.events import make_event_from_dict |
21 | from synapse.federation.federation_base import event_from_pdu_json | |
20 | 22 | from synapse.logging.context import LoggingContext |
21 | 23 | from synapse.types import UserID, create_requester |
22 | 24 | from synapse.util import Clock |
275 | 277 | "ed25519:" + remote_self_signing_key in self_signing_key["keys"].keys(), |
276 | 278 | ) |
277 | 279 | self.assertTrue(remote_self_signing_key in self_signing_key["keys"].values()) |
280 | ||
281 | ||
282 | class StripUnsignedFromEventsTestCase(unittest.TestCase): | |
283 | def test_strip_unauthorized_unsigned_values(self): | |
284 | event1 = { | |
285 | "sender": "@baduser:test.serv", | |
286 | "state_key": "@baduser:test.serv", | |
287 | "event_id": "$event1:test.serv", | |
288 | "depth": 1000, | |
289 | "origin_server_ts": 1, | |
290 | "type": "m.room.member", | |
291 | "origin": "test.servx", | |
292 | "content": {"membership": "join"}, | |
293 | "auth_events": [], | |
294 | "unsigned": {"malicious garbage": "hackz", "more warez": "more hackz"}, | |
295 | } | |
296 | filtered_event = event_from_pdu_json(event1, RoomVersions.V1) | |
297 | # Make sure unauthorized fields are stripped from unsigned | |
298 | self.assertNotIn("more warez", filtered_event.unsigned) | |
299 | ||
300 | def test_strip_event_maintains_allowed_fields(self): | |
301 | event2 = { | |
302 | "sender": "@baduser:test.serv", | |
303 | "state_key": "@baduser:test.serv", | |
304 | "event_id": "$event2:test.serv", | |
305 | "depth": 1000, | |
306 | "origin_server_ts": 1, | |
307 | "type": "m.room.member", | |
308 | "origin": "test.servx", | |
309 | "auth_events": [], | |
310 | "content": {"membership": "join"}, | |
311 | "unsigned": { | |
312 | "malicious garbage": "hackz", | |
313 | "more warez": "more hackz", | |
314 | "age": 14, | |
315 | "invite_room_state": [], | |
316 | }, | |
317 | } | |
318 | ||
319 | filtered_event2 = event_from_pdu_json(event2, RoomVersions.V1) | |
320 | self.assertIn("age", filtered_event2.unsigned) | |
321 | self.assertEqual(14, filtered_event2.unsigned["age"]) | |
322 | self.assertNotIn("more warez", filtered_event2.unsigned) | |
323 | # invite_room_state is allowed in events of type m.room.member | 
324 | self.assertIn("invite_room_state", filtered_event2.unsigned) | |
325 | self.assertEqual([], filtered_event2.unsigned["invite_room_state"]) | |
326 | ||
327 | def test_strip_event_removes_fields_based_on_event_type(self): | |
328 | event3 = { | |
329 | "sender": "@baduser:test.serv", | |
330 | "state_key": "@baduser:test.serv", | |
331 | "event_id": "$event3:test.serv", | |
332 | "depth": 1000, | |
333 | "origin_server_ts": 1, | |
334 | "type": "m.room.power_levels", | |
335 | "origin": "test.servx", | |
336 | "content": {}, | |
337 | "auth_events": [], | |
338 | "unsigned": { | |
339 | "malicious garbage": "hackz", | |
340 | "more warez": "more hackz", | |
341 | "age": 14, | |
342 | "invite_room_state": [], | |
343 | }, | |
344 | } | |
345 | filtered_event3 = event_from_pdu_json(event3, RoomVersions.V1) | |
346 | self.assertIn("age", filtered_event3.unsigned) | |
347 | # The invite_room_state field is only permitted in events of type m.room.member | 
348 | self.assertNotIn("invite_room_state", filtered_event3.unsigned) | |
349 | self.assertNotIn("more warez", filtered_event3.unsigned) |
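The behaviour these new tests pin down — an allow-list filter over `unsigned`, with one key gated on event type — can be sketched as follows. The allow-list contents here are inferred from the assertions above ("age" always kept, "invite_room_state" only on `m.room.member`), not taken from Synapse's implementation:

```python
from typing import Any, Dict

# Assumed allow-lists, reconstructed from the test expectations above.
GLOBAL_ALLOWED = {"age"}
PER_TYPE_ALLOWED = {"m.room.member": {"invite_room_state"}}


def strip_unsigned(event: Dict[str, Any]) -> Dict[str, Any]:
    """Drop unauthorized keys from an event's unsigned dict in place."""
    allowed = GLOBAL_ALLOWED | PER_TYPE_ALLOWED.get(event.get("type", ""), set())
    unsigned = event.get("unsigned", {})
    event["unsigned"] = {k: v for k, v in unsigned.items() if k in allowed}
    return event
```

Filtering with an allow-list (rather than deleting known-bad keys) is what makes the "malicious garbage" entries in the tests disappear without having to enumerate them.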
18 | 18 | import warnings |
19 | 19 | from asyncio import Future |
20 | 20 | from binascii import unhexlify |
21 | from typing import Any, Awaitable, Callable, TypeVar | |
21 | from typing import Awaitable, Callable, TypeVar | |
22 | 22 | from unittest.mock import Mock |
23 | 23 | |
24 | 24 | import attr |
45 | 45 | raise Exception("awaitable has not yet completed") |
46 | 46 | |
47 | 47 | |
48 | def make_awaitable(result: Any) -> Awaitable[Any]: | |
48 | def make_awaitable(result: TV) -> Awaitable[TV]: | |
49 | 49 | """ |
50 | 50 | Makes an awaitable, suitable for mocking an `async` function. |
51 | 51 | This uses Futures as they can be awaited multiple times so can be returned |
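The tightened `make_awaitable` signature above (`TV` instead of `Any`) preserves the result type through the awaitable. A standalone sketch of the idea, assuming it is called while an event loop is running:

```python
import asyncio
from typing import Awaitable, TypeVar

TV = TypeVar("TV")


def make_awaitable(result: TV) -> Awaitable[TV]:
    """Return an already-completed Future holding `result`.

    A Future, unlike a coroutine object, can be awaited more than once,
    which is what makes it suitable as a Mock return value for an async
    function that may be called repeatedly.
    """
    future: "asyncio.Future[TV]" = asyncio.get_running_loop().create_future()
    future.set_result(result)
    return future
```

With the generic signature, `await make_awaitable(42)` type-checks as `int` rather than `Any`, so mocks stop silently erasing types in the tests that use them.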
40 | 40 | POSTGRES_HOST = os.environ.get("SYNAPSE_POSTGRES_HOST", None) |
41 | 41 | POSTGRES_PASSWORD = os.environ.get("SYNAPSE_POSTGRES_PASSWORD", None) |
42 | 42 | POSTGRES_BASE_DB = "_synapse_unit_tests_base_%s" % (os.getpid(),) |
43 | ||
44 | # When debugging a specific test, it's occasionally useful to write the | |
45 | # DB to disk and query it with the sqlite CLI. | |
46 | SQLITE_PERSIST_DB = os.environ.get("SYNAPSE_TEST_PERSIST_SQLITE_DB") is not None | |
43 | 47 | |
44 | 48 | # the dbname we will connect to in order to create the base database. |
45 | 49 | POSTGRES_DBNAME_FOR_INITIAL_CREATE = "postgres" |