New upstream version 1.11.0
Andrej Shadura
38 | 38 | |
39 | 39 | # this fails reliably with a torture level of 100 due to https://github.com/matrix-org/synapse/issues/6536 |
40 | 40 | Outbound federation requests missing prev_events and then asks for /state_ids and resolves the state |
41 | ||
42 | Can get rooms/{roomId}/members at a given point |
0 | Synapse 1.11.0 (2020-02-21) | |
1 | =========================== | |
2 | ||
3 | Improved Documentation | |
4 | ---------------------- | |
5 | ||
6 | - Small grammatical fixes to the ACME v1 deprecation notice. ([\#6944](https://github.com/matrix-org/synapse/issues/6944)) | |
7 | ||
8 | ||
9 | Synapse 1.11.0rc1 (2020-02-19) | |
10 | ============================== | |
11 | ||
12 | Features | |
13 | -------- | |
14 | ||
15 | - Admin API to add or modify threepids of user accounts. ([\#6769](https://github.com/matrix-org/synapse/issues/6769)) | |
16 | - Limit the number of events that can be requested by the backfill federation API to 100. ([\#6864](https://github.com/matrix-org/synapse/issues/6864)) | |
17 | - Add ability to run some group APIs on workers. ([\#6866](https://github.com/matrix-org/synapse/issues/6866)) | |
18 | - Reject device display names over 100 characters in length to prevent abuse. ([\#6882](https://github.com/matrix-org/synapse/issues/6882)) | |
19 | - Add ability to route federation user device queries to workers. ([\#6873](https://github.com/matrix-org/synapse/issues/6873)) | |
20 | - The result of a user directory search can now be filtered via the spam checker. ([\#6888](https://github.com/matrix-org/synapse/issues/6888)) | |
21 | - Implement new `GET /_matrix/client/unstable/org.matrix.msc2432/rooms/{roomId}/aliases` endpoint as per [MSC2432](https://github.com/matrix-org/matrix-doc/pull/2432). ([\#6939](https://github.com/matrix-org/synapse/issues/6939), [\#6948](https://github.com/matrix-org/synapse/issues/6948), [\#6949](https://github.com/matrix-org/synapse/issues/6949)) | |
22 | - Stop sending `m.room.alias` events when adding / removing aliases. Check `alt_aliases` in the latest `m.room.canonical_alias` event when deleting an alias. ([\#6904](https://github.com/matrix-org/synapse/issues/6904)) | |
23 | - Change the default power levels of invites, tombstones and server ACLs for new rooms. ([\#6834](https://github.com/matrix-org/synapse/issues/6834)) | |
24 | ||
25 | Bugfixes | |
26 | -------- | |
27 | ||
28 | - Fixed third party event rules function `on_create_room`'s return value being ignored. ([\#6781](https://github.com/matrix-org/synapse/issues/6781)) | |
29 | - Allow URL-encoded User IDs on `/_synapse/admin/v2/users/<user_id>[/admin]` endpoints. Thanks to @NHAS for reporting. ([\#6825](https://github.com/matrix-org/synapse/issues/6825)) | |
30 | - Fix Synapse refusing to start if `federation_certificate_verification_whitelist` option is blank. ([\#6849](https://github.com/matrix-org/synapse/issues/6849)) | |
31 | - Fix errors from logging in the purge jobs related to the message retention policies support. ([\#6945](https://github.com/matrix-org/synapse/issues/6945)) | |
32 | - Return a 404 instead of 200 for querying information of a non-existent user through the admin API. ([\#6901](https://github.com/matrix-org/synapse/issues/6901)) | |
33 | ||
34 | ||
35 | Updates to the Docker image | |
36 | --------------------------- | |
37 | ||
38 | - The deprecated "generate-config-on-the-fly" mode is no longer supported. ([\#6918](https://github.com/matrix-org/synapse/issues/6918)) | |
39 | ||
40 | ||
41 | Improved Documentation | |
42 | ---------------------- | |
43 | ||
44 | - Add details of PR merge strategy to contributing docs. ([\#6846](https://github.com/matrix-org/synapse/issues/6846)) | |
45 | - Spell out that the last event sent to a room won't be deleted by a purge. ([\#6891](https://github.com/matrix-org/synapse/issues/6891)) | |
46 | - Update Synapse's documentation to warn about the deprecation of ACME v1. ([\#6905](https://github.com/matrix-org/synapse/issues/6905), [\#6907](https://github.com/matrix-org/synapse/issues/6907), [\#6909](https://github.com/matrix-org/synapse/issues/6909)) | |
47 | - Add documentation for the spam checker. ([\#6906](https://github.com/matrix-org/synapse/issues/6906)) | |
48 | - Fix worker docs to point to the `/publicised_groups` API correctly. ([\#6938](https://github.com/matrix-org/synapse/issues/6938)) | |
49 | - Clean up and update docs on setting up federation. ([\#6940](https://github.com/matrix-org/synapse/issues/6940)) | |
50 | - Add a warning about indentation to generated configuration files. ([\#6920](https://github.com/matrix-org/synapse/issues/6920)) | |
51 | - Databases created using the compose file in contrib/docker will now always have correct encoding and locale settings. Contributed by Fridtjof Mund. ([\#6921](https://github.com/matrix-org/synapse/issues/6921)) | |
52 | - Update pip install directions in readme to avoid error when using zsh. ([\#6855](https://github.com/matrix-org/synapse/issues/6855)) | |
53 | ||
54 | ||
55 | Deprecations and Removals | |
56 | ------------------------- | |
57 | ||
58 | - Remove `m.lazy_load_members` from `unstable_features` since lazy loading is in the stable Client-Server API version r0.5.0. ([\#6877](https://github.com/matrix-org/synapse/issues/6877)) | |
59 | ||
60 | ||
61 | Internal Changes | |
62 | ---------------- | |
63 | ||
64 | - Add type hints to `SyncHandler`. ([\#6821](https://github.com/matrix-org/synapse/issues/6821)) | |
65 | - Refactoring work in preparation for changing the event redaction algorithm. ([\#6823](https://github.com/matrix-org/synapse/issues/6823), [\#6827](https://github.com/matrix-org/synapse/issues/6827), [\#6854](https://github.com/matrix-org/synapse/issues/6854), [\#6856](https://github.com/matrix-org/synapse/issues/6856), [\#6857](https://github.com/matrix-org/synapse/issues/6857), [\#6858](https://github.com/matrix-org/synapse/issues/6858)) | |
66 | - Fix stacktraces when using `ObservableDeferred` and async/await. ([\#6836](https://github.com/matrix-org/synapse/issues/6836)) | |
67 | - Port much of `synapse.handlers.federation` to async/await. ([\#6837](https://github.com/matrix-org/synapse/issues/6837), [\#6840](https://github.com/matrix-org/synapse/issues/6840)) | |
68 | - Populate `rooms.room_version` database column at startup, rather than in a background update. ([\#6847](https://github.com/matrix-org/synapse/issues/6847)) | |
69 | - Reduce amount we log at `INFO` level. ([\#6833](https://github.com/matrix-org/synapse/issues/6833), [\#6862](https://github.com/matrix-org/synapse/issues/6862)) | |
70 | - Remove unused `get_room_stats_state` method. ([\#6869](https://github.com/matrix-org/synapse/issues/6869)) | |
71 | - Add typing to `synapse.federation.sender` and port to async/await. ([\#6871](https://github.com/matrix-org/synapse/issues/6871)) | |
72 | - Refactor `_EventInternalMetadata` object to improve type safety. ([\#6872](https://github.com/matrix-org/synapse/issues/6872)) | |
73 | - Add an additional entry to the SyTest blacklist for worker mode. ([\#6883](https://github.com/matrix-org/synapse/issues/6883)) | |
74 | - Fix the use of sed in the linting scripts when using BSD sed. ([\#6887](https://github.com/matrix-org/synapse/issues/6887)) | |
75 | - Add type hints to the spam checker module. ([\#6915](https://github.com/matrix-org/synapse/issues/6915)) | |
76 | - Convert the directory handler tests to use HomeserverTestCase. ([\#6919](https://github.com/matrix-org/synapse/issues/6919)) | |
77 | - Increase DB/CPU perf of `_is_server_still_joined` check. ([\#6936](https://github.com/matrix-org/synapse/issues/6936)) | |
78 | - Tiny optimisation for incoming HTTP request dispatch. ([\#6950](https://github.com/matrix-org/synapse/issues/6950)) | |
79 | ||
80 | ||
81 | Synapse 1.10.1 (2020-02-17) | |
82 | =========================== | |
83 | ||
84 | Bugfixes | |
85 | -------- | |
86 | ||
87 | - Fix a bug introduced in Synapse 1.10.0 which would cause room state to be cleared in the database if Synapse was upgraded direct from 1.2.1 or earlier to 1.10.0. ([\#6924](https://github.com/matrix-org/synapse/issues/6924)) | |
88 | ||
89 | ||
0 | 90 | Synapse 1.10.0 (2020-02-12) |
1 | 91 | =========================== |
2 | 92 |
199 | 199 | flag to `git commit`, which uses the name and email set in your |
200 | 200 | `user.name` and `user.email` git configs. |
201 | 201 | |
202 | ## Merge Strategy | |
203 | ||
204 | We use the commit history of develop/master extensively to identify | |
205 | when regressions were introduced and what changes have been made. | |
206 | ||
207 | We aim to have a clean merge history, which means we normally squash-merge | |
208 | changes into develop. For small changes this means there is no need to rebase | |
209 | to clean up your PR before merging. Larger changes with an organised set of | |
210 | commits may be merged as-is, if the history is judged to be useful. | |
211 | ||
212 | This use of squash-merging will mean PRs built on each other will be hard to | |
213 | merge. We suggest avoiding these where possible, and if required, ensuring | |
214 | each PR has a tidy set of commits to ease merging. | |
215 | ||
202 | 216 | ## Conclusion |
203 | 217 | |
204 | 218 | That's it! Matrix is a very open and collaborative project as you might expect |
387 | 387 | |
388 | 388 | ## TLS certificates |
389 | 389 | |
390 | The default configuration exposes a single HTTP port: http://localhost:8008. It | |
391 | is suitable for local testing, but for any practical use, you will either need | |
392 | to enable a reverse proxy, or configure Synapse to expose an HTTPS port. | |
393 | ||
394 | For information on using a reverse proxy, see | |
390 | The default configuration exposes a single HTTP port on the local | |
391 | interface: `http://localhost:8008`. It is suitable for local testing, | |
392 | but for any practical use, you will need Synapse's APIs to be served | |
393 | over HTTPS. | |
394 | ||
395 | The recommended way to do so is to set up a reverse proxy on port | |
396 | `8448`. You can find documentation on doing so in | |
395 | 397 | [docs/reverse_proxy.md](docs/reverse_proxy.md). |
396 | 398 | |
397 | To configure Synapse to expose an HTTPS port, you will need to edit | |
398 | `homeserver.yaml`, as follows: | |
399 | Alternatively, you can configure Synapse to expose an HTTPS port. To do | |
400 | so, you will need to edit `homeserver.yaml`, as follows: | |
399 | 401 | |
400 | 402 | * First, under the `listeners` section, uncomment the configuration for the |
401 | 403 | TLS-enabled listener. (Remove the hash sign (`#`) at the start of |
413 | 415 | point these settings at an existing certificate and key, or you can |
414 | 416 | enable Synapse's built-in ACME (Let's Encrypt) support. Instructions |
415 | 417 | for having Synapse automatically provision and renew federation |
416 | certificates through ACME can be found at [ACME.md](docs/ACME.md). If you | |
417 | are using your own certificate, be sure to use a `.pem` file that includes | |
418 | the full certificate chain including any intermediate certificates (for | |
419 | instance, if using certbot, use `fullchain.pem` as your certificate, not | |
418 | certificates through ACME can be found at [ACME.md](docs/ACME.md). | |
419 | Note that, as pointed out in that document, this feature will not | |
420 | work with installs set up after November 2019. | |
421 | ||
422 | If you are using your own certificate, be sure to use a `.pem` file that | |
423 | includes the full certificate chain including any intermediate certificates | |
424 | (for instance, if using certbot, use `fullchain.pem` as your certificate, not | |
420 | 425 | `cert.pem`). |
421 | 426 | |
422 | 427 | For a more detailed guide to configuring your server for federation, see |
271 | 271 | |
272 | 272 | virtualenv -p python3 env |
273 | 273 | source env/bin/activate |
274 | python -m pip install --no-use-pep517 -e .[all] | |
274 | python -m pip install --no-use-pep517 -e ".[all]" | |
275 | 275 | |
276 | 276 | This will run a process of downloading and installing all the needed |
277 | 277 | dependencies into a virtual env. |
55 | 55 | environment: |
56 | 56 | - POSTGRES_USER=synapse |
57 | 57 | - POSTGRES_PASSWORD=changeme |
58 | # ensure the database gets created correctly | |
59 | # https://github.com/matrix-org/synapse/blob/master/docs/postgres.md#set-up-database | |
60 | - POSTGRES_INITDB_ARGS="--encoding=UTF-8 --lc-collate=C --lc-ctype=C" | |
58 | 61 | volumes: |
59 | 62 | # You may store the database tables in a local folder.. |
60 | 63 | - ./schemas:/var/lib/postgresql/data |
0 | matrix-synapse-py3 (1.11.0) stable; urgency=medium | |
1 | ||
2 | * New synapse release 1.11.0. | |
3 | ||
4 | -- Synapse Packaging team <packages@matrix.org> Fri, 21 Feb 2020 08:54:34 +0000 | |
5 | ||
6 | matrix-synapse-py3 (1.10.1) stable; urgency=medium | |
7 | ||
8 | * New synapse release 1.10.1. | |
9 | ||
10 | -- Synapse Packaging team <packages@matrix.org> Mon, 17 Feb 2020 16:27:28 +0000 | |
11 | ||
0 | 12 | matrix-synapse-py3 (1.10.0) stable; urgency=medium |
1 | 13 | |
2 | 14 | * New synapse release 1.10.0. |
109 | 109 | |
110 | 110 | ## Legacy dynamic configuration file support |
111 | 111 | |
112 | For backwards-compatibility only, the docker image supports creating a dynamic | |
113 | configuration file based on environment variables. This is now deprecated, but | |
114 | is enabled when the `SYNAPSE_SERVER_NAME` variable is set (and `generate` is | |
115 | not given). | |
112 | The docker image used to support creating a dynamic configuration file based | |
113 | on environment variables. This is no longer supported, and an error will be | |
114 | raised if you try to run synapse without a config file. | |
116 | 115 | |
117 | To migrate from a dynamic configuration file to a static one, run the docker | |
116 | It is, however, possible to generate a static configuration file based on | |
117 | the environment variables that were previously used. To do this, run the docker | |
118 | 118 | container once with the environment variables set and the `migrate_config` |
119 | 119 | command line option. For example: |
120 | 120 | |
126 | 126 | matrixdotorg/synapse:latest migrate_config |
127 | 127 | ``` |
128 | 128 | |
129 | This will generate the same configuration file as the legacy mode used, but | |
130 | will store it in `/data/homeserver.yaml` instead of a temporary location. You | |
131 | can then use it as shown above at [Running synapse](#running-synapse). | |
129 | This will generate the same configuration file as the legacy mode used, and | |
130 | will store it in `/data/homeserver.yaml`. You can then use it as shown above at | |
131 | [Running synapse](#running-synapse). | |
132 | ||
133 | Note that the defaults used in this configuration file may be different to | |
134 | those used when generating a new config file with `generate`: for example, TLS is | |
135 | enabled by default in this mode. You are encouraged to inspect the generated | |
136 | configuration file and edit it to ensure it meets your needs. | |
132 | 137 | |
133 | 138 | ## Building the image |
134 | 139 | |
135 | 140 | If you need to build the image from a Synapse checkout, use the following `docker |
136 | 141 | build` command from the repo's root: |
137 | ||
142 | ||
138 | 143 | ``` |
139 | 144 | docker build -t matrixdotorg/synapse -f docker/Dockerfile . |
140 | 145 | ``` |
187 | 187 | else: |
188 | 188 | ownership = "{}:{}".format(desired_uid, desired_gid) |
189 | 189 | |
190 | log( | |
191 | "Container running as UserID %s:%s, ENV (or defaults) requests %s:%s" | |
192 | % (os.getuid(), os.getgid(), desired_uid, desired_gid) | |
193 | ) | |
194 | ||
195 | 190 | if ownership is None: |
196 | 191 | log("Will not perform chmod/su-exec as UserID already matches request") |
197 | 192 | |
212 | 207 | if mode is not None: |
213 | 208 | error("Unknown execution mode '%s'" % (mode,)) |
214 | 209 | |
215 | if "SYNAPSE_SERVER_NAME" in environ: | |
216 | # backwards-compatibility generate-a-config-on-the-fly mode | |
217 | if "SYNAPSE_CONFIG_PATH" in environ: | |
210 | config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data") | |
211 | config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml") | |
212 | ||
213 | if not os.path.exists(config_path): | |
214 | if "SYNAPSE_SERVER_NAME" in environ: | |
218 | 215 | error( |
219 | "SYNAPSE_SERVER_NAME can only be combined with SYNAPSE_CONFIG_PATH " | |
220 | "in `generate` or `migrate_config` mode. To start synapse using a " | |
221 | "config file, unset the SYNAPSE_SERVER_NAME environment variable." | |
216 | """\ | |
217 | Config file '%s' does not exist. | |
218 | ||
219 | The synapse docker image no longer supports generating a config file on-the-fly | |
220 | based on environment variables. You can migrate to a static config file by | |
221 | running with 'migrate_config'. See the README for more details. | |
222 | """ | |
223 | % (config_path,) | |
222 | 224 | ) |
223 | 225 | |
224 | config_path = "/compiled/homeserver.yaml" | |
225 | log( | |
226 | "Generating config file '%s' on-the-fly from environment variables.\n" | |
227 | "Note that this mode is deprecated. You can migrate to a static config\n" | |
228 | "file by running with 'migrate_config'. See the README for more details." | |
226 | error( | |
227 | "Config file '%s' does not exist. You should either create a new " | |
228 | "config file by running with the `generate` argument (and then edit " | |
229 | "the resulting file before restarting) or specify the path to an " | |
230 | "existing config file with the SYNAPSE_CONFIG_PATH variable." | |
229 | 231 | % (config_path,) |
230 | 232 | ) |
231 | ||
232 | generate_config_from_template("/compiled", config_path, environ, ownership) | |
233 | else: | |
234 | config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data") | |
235 | config_path = environ.get( | |
236 | "SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml" | |
237 | ) | |
238 | if not os.path.exists(config_path): | |
239 | error( | |
240 | "Config file '%s' does not exist. You should either create a new " | |
241 | "config file by running with the `generate` argument (and then edit " | |
242 | "the resulting file before restarting) or specify the path to an " | |
243 | "existing config file with the SYNAPSE_CONFIG_PATH variable." | |
244 | % (config_path,) | |
245 | ) | |
246 | 233 | |
247 | 234 | log("Starting synapse with config file " + config_path) |
248 | 235 |
0 | # The config is maintained as an up-to-date snapshot of the default | |
0 | # This file is maintained as an up-to-date snapshot of the default | |
1 | 1 | # homeserver.yaml configuration generated by Synapse. |
2 | 2 | # |
3 | 3 | # It is intended to act as a reference for the default configuration, |
9 | 9 | # homeserver.yaml. Instead, if you are starting from scratch, please generate |
10 | 10 | # a fresh config using Synapse by following the instructions in INSTALL.md. |
11 | 11 | |
12 | ################################################################################ | |
13 |
0 | 0 | # ACME |
1 | 1 | |
2 | Synapse v1.0 will require valid TLS certificates for communication between | |
3 | servers (port `8448` by default) in addition to those that are client-facing | |
4 | (port `443`). If you do not already have a valid certificate for your domain, | |
5 | the easiest way to get one is with Synapse's new ACME support, which will use | |
6 | the ACME protocol to provision a certificate automatically. Synapse v0.99.0+ | |
7 | will provision server-to-server certificates automatically for you for free | |
8 | through [Let's Encrypt](https://letsencrypt.org/) if you tell it to. | |
2 | From version 1.0 (June 2019) onwards, Synapse requires valid TLS | |
3 | certificates for communication between servers (by default on port | |
4 | `8448`) in addition to those that are client-facing (port `443`). To | |
5 | help homeserver admins fulfil this new requirement, Synapse v0.99.0 | |
6 | introduced support for automatically provisioning certificates through | |
7 | [Let's Encrypt](https://letsencrypt.org/) using the ACME protocol. | |
8 | ||
9 | ## Deprecation of ACME v1 | |
10 | ||
11 | In [March 2019](https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430), | |
12 | Let's Encrypt announced that they were deprecating version 1 of the ACME | |
13 | protocol, planning to disable it for new accounts in November 2019, and | |
14 | for existing accounts in June 2020. | |
15 | ||
16 | Synapse doesn't currently support version 2 of the ACME protocol, which | |
17 | means that: | |
18 | ||
19 | * for existing installs, Synapse's built-in ACME support will continue | |
20 | to work until June 2020. | |
21 | * for new installs, this feature will not work at all. | |
22 | ||
23 | Either way, it is recommended to move from Synapse's ACME support | |
24 | feature to an external automated tool such as [certbot](https://github.com/certbot/certbot) | |
25 | (or browse [this list](https://letsencrypt.org/docs/client-options/) | |
26 | for an alternative ACME client). | |
27 | ||
28 | It's also recommended to use a reverse proxy for the server-facing | |
29 | communications (more documentation about this can be found | |
30 | [here](/docs/reverse_proxy.md)) as well as the client-facing ones and | |
31 | have it serve the certificates. | |
32 | ||
33 | In case you can't do that and need Synapse to serve them itself, make | |
34 | sure to set the `tls_certificate_path` configuration setting to the path | |
35 | of the certificate (make sure to use the certificate containing the full | |
36 | certification chain, e.g. `fullchain.pem` if using certbot) and | |
37 | `tls_private_key_path` to the path of the matching private key. Note | |
38 | that in this case you will need to restart Synapse after each | |
39 | certificate renewal so that Synapse stops using the old certificate. | |
40 | ||
41 | If you still want to use Synapse's built-in ACME support, the rest of | |
42 | this document explains how to set it up. | |
43 | ||
44 | ## Initial setup | |
9 | 45 | |
10 | 46 | In the case that your `server_name` config variable is the same as |
11 | 47 | the hostname that the client connects to, then the same certificate can be |
30 | 66 | If you already have certificates, you will need to back up or delete them |
31 | 67 | (files `example.com.tls.crt` and `example.com.tls.key` in Synapse's root |
32 | 68 | directory), as Synapse's ACME implementation will not overwrite them. |
33 | ||
34 | You may wish to use alternate methods such as Certbot to obtain a certificate | |
35 | from Let's Encrypt, depending on your server configuration. Of course, if you | |
36 | already have a valid certificate for your homeserver's domain, that can be | |
37 | placed in Synapse's config directory without the need for any ACME setup. | |
38 | 69 | |
39 | 70 | ## ACME setup |
40 | 71 |
6 | 6 | Depending on the amount of history being purged a call to the API may take |
7 | 7 | several minutes or longer. During this period users will not be able to |
8 | 8 | paginate further back in the room from the point being purged from. |
9 | ||
10 | Note that Synapse requires at least one message in each room, so it will never | |
11 | delete the last message in a room. | |
9 | 12 | |
10 | 13 | The API is: |
11 | 14 |
1 | 1 | ======================== |
2 | 2 | |
3 | 3 | This API allows an administrator to create or modify a user account with a |
4 | specific ``user_id``. | |
4 | specific ``user_id``. Be aware that ``user_id`` is fully qualified: for example, | |
5 | ``@user:server.com``. | |
5 | 6 | |
6 | 7 | This api is:: |
7 | 8 | |
14 | 15 | { |
15 | 16 | "password": "user_password", |
16 | 17 | "displayname": "User", |
18 | "threepids": [ | |
19 | { | |
20 | "medium": "email", | |
21 | "address": "<user_mail_1>" | |
22 | }, | |
23 | { | |
24 | "medium": "email", | |
25 | "address": "<user_mail_2>" | |
26 | } | |
27 | ], | |
17 | 28 | "avatar_url": "<avatar_url>", |
18 | 29 | "admin": false, |
19 | 30 | "deactivated": false |
22 | 33 | including an ``access_token`` of a server admin. |
23 | 34 | |
24 | 35 | The parameter ``displayname`` is optional and defaults to ``user_id``. |
36 | The parameter ``threepids`` is optional. | |
25 | 37 | The parameter ``avatar_url`` is optional. |
26 | 38 | The parameter ``admin`` is optional and defaults to 'false'. |
27 | 39 | The parameter ``deactivated`` is optional and defaults to 'false'. |
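As a sketch of how a client might call this admin API (the helper name, base URL and token placeholder are illustrative; only the endpoint path and body fields come from the documentation above), note that the fully-qualified user ID must be URL-encoded, since it contains `@` and `:` — the same concern fixed in [\#6825]:

```python
import json
from urllib.parse import quote

def build_user_upsert_request(base_url, user_id, access_token, **body):
    """Build the URL, headers and JSON body for a PUT to the v2 user admin API.

    user_id must be fully qualified (e.g. "@user:server.com") and is
    URL-encoded because it contains reserved characters ("@", ":").
    """
    url = "%s/_synapse/admin/v2/users/%s" % (base_url, quote(user_id, safe=""))
    headers = {
        "Authorization": "Bearer %s" % access_token,
        "Content-Type": "application/json",
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_user_upsert_request(
    "https://localhost:8008",
    "@user:server.com",
    "<admin_access_token>",
    password="user_password",
    displayname="User",
    threepids=[{"medium": "email", "address": "user@example.com"}],
    admin=False,
    deactivated=False,
)
```

Sending the actual PUT (with an `access_token` of a server admin) is left to whichever HTTP client you prefer; the sketch only shows how the request is assembled.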
0 | # Delegation | |
1 | ||
2 | By default, other homeservers will expect to be able to reach yours via | |
3 | your `server_name`, on port 8448. For example, if you set your `server_name` | |
4 | to `example.com` (so that your user names look like `@user:example.com`), | |
5 | other servers will try to connect to yours at `https://example.com:8448/`. | |
6 | ||
7 | Delegation is a Matrix feature allowing a homeserver admin to retain a | |
8 | `server_name` of `example.com` so that user IDs, room aliases, etc continue | |
9 | to look like `*:example.com`, whilst having federation traffic routed | |
10 | to a different server and/or port (e.g. `synapse.example.com:443`). | |
11 | ||
12 | ## .well-known delegation | |
13 | ||
14 | To use this method, you need to be able to configure the HTTPS server at | |
15 | `server_name` to serve the `/.well-known/matrix/server` | |
16 | URL. Having an active server (with a valid TLS certificate) serving your | |
17 | `server_name` domain is out of the scope of this documentation. | |
18 | ||
19 | The URL `https://<server_name>/.well-known/matrix/server` should | |
20 | return a JSON structure containing the key `m.server` like so: | |
21 | ||
22 | ```json | |
23 | { | |
24 | "m.server": "<synapse.server.name>[:<yourport>]" | |
25 | } | |
26 | ``` | |
27 | ||
28 | In our example, this would mean that the URL `https://example.com/.well-known/matrix/server` | |
29 | should return: | |
30 | ||
31 | ```json | |
32 | { | |
33 | "m.server": "synapse.example.com:443" | |
34 | } | |
35 | ``` | |
36 | ||
37 | Note that specifying a port is optional. If no port is specified, then it defaults | |
38 | to 8448. | |
39 | ||
40 | With .well-known delegation, federating servers will check for a valid TLS | |
41 | certificate for the delegated hostname (in our example: `synapse.example.com`). | |
42 | ||
43 | ## SRV DNS record delegation | |
44 | ||
45 | It is also possible to do delegation using a SRV DNS record. However, that is | |
46 | considered an advanced topic since it's a bit complex to set up, and `.well-known` | |
47 | delegation is already enough in most cases. | |
48 | ||
49 | However, if you really need it, you can find some documentation on what such a | |
50 | record should look like and how Synapse will use it in [the Matrix | |
51 | specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names). | |
52 | ||
53 | ## Delegation FAQ | |
54 | ||
55 | ### When do I need delegation? | |
56 | ||
57 | If your homeserver's APIs are accessible on the default federation port (8448) | |
58 | and the domain your `server_name` points to, you do not need any delegation. | |
59 | ||
60 | For instance, if you registered `example.com` and pointed its DNS A record at a | |
61 | fresh server, you could install Synapse on that host, giving it a `server_name` | |
62 | of `example.com`. Once a reverse proxy has been set up to proxy requests | |
63 | sent to port `8448` and to serve TLS certificates for `example.com`, you | |
64 | would not need any delegation. | |
65 | ||
66 | **However**, if your homeserver's APIs aren't accessible on port 8448 and on the | |
67 | domain `server_name` points to, you will need to let other servers know how to | |
68 | find it using delegation. | |
69 | ||
70 | ### Do you still recommend against using a reverse proxy on the federation port? | |
71 | ||
72 | We no longer actively recommend against using a reverse proxy. Many admins will | |
73 | find it easier to direct federation traffic to a reverse proxy and manage their | |
74 | own TLS certificates, and this is a supported configuration. | |
75 | ||
76 | See [reverse_proxy.md](reverse_proxy.md) for information on setting up a | |
77 | reverse proxy. | |
78 | ||
79 | ### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy? | |
80 | ||
81 | This is no longer necessary. If you are using a reverse proxy for all of your | |
82 | TLS traffic, then you can set `no_tls: True` in the Synapse config. | |
83 | ||
84 | In that case, the only reason Synapse needs the certificate is to populate a legacy | |
85 | `tls_fingerprints` field in the federation API. This is ignored by Synapse 0.99.0 | |
86 | and later, and the only time pre-0.99 Synapses will check it is when attempting to | |
87 | fetch the server keys - and generally this is delegated via `matrix.org`, which | |
88 | is running a modern version of Synapse. | |
89 | ||
90 | ### Do I need the same certificate for the client and federation port? | |
91 | ||
92 | No. There is nothing stopping you from using different certificates, | |
93 | particularly if you are using a reverse proxy. |
0 | Setting up Federation | |
0 | Setting up federation | |
1 | 1 | ===================== |
2 | 2 | |
3 | 3 | Federation is the process by which users on different servers can participate |
4 | 4 | in the same room. For this to work, those other servers must be able to contact |
5 | 5 | yours to send messages. |
6 | 6 | |
7 | The ``server_name`` configured in the Synapse configuration file (often | |
8 | ``homeserver.yaml``) defines how resources (users, rooms, etc.) will be | |
9 | identified (eg: ``@user:example.com``, ``#room:example.com``). By | |
10 | default, it is also the domain that other servers will use to | |
11 | try to reach your server (via port 8448). This is easy to set | |
12 | up and will work provided you set the ``server_name`` to match your | |
13 | machine's public DNS hostname, and provide Synapse with a TLS certificate | |
14 | which is valid for your ``server_name``. | |
7 | The `server_name` configured in the Synapse configuration file (often | |
8 | `homeserver.yaml`) defines how resources (users, rooms, etc.) will be | |
9 | identified (eg: `@user:example.com`, `#room:example.com`). By default, | |
10 | it is also the domain that other servers will use to try to reach your | |
11 | server (via port 8448). This is easy to set up and will work provided | |
12 | you set the `server_name` to match your machine's public DNS hostname. | |
13 | ||
14 | For this default configuration to work, you will need to listen for TLS | |
15 | connections on port 8448. The preferred way to do that is by using a | |
16 | reverse proxy: see [reverse_proxy.md](<reverse_proxy.md>) for instructions | |
17 | on how to correctly set one up. | |
18 | ||
19 | In some cases you might not want to run Synapse on the machine that has | |
20 | the `server_name` as its public DNS hostname, or you might want federation | |
21 | traffic to use a different port than 8448. For example, you might want to | |
22 | have your user names look like `@user:example.com`, but you want to run | |
23 | Synapse on `synapse.example.com` on port 443. This can be done using | |
24 | delegation, which allows an admin to control where federation traffic should | |
25 | be sent. See [delegate.md](delegate.md) for instructions on how to set this up. | |
15 | 26 | |
16 | 27 | Once federation has been configured, you should be able to join a room over |
17 | federation. A good place to start is ``#synapse:matrix.org`` - a room for | |
28 | federation. A good place to start is `#synapse:matrix.org` - a room for | |
18 | 29 | Synapse admins. |
19 | ||
20 | ||
21 | ## Delegation | |
22 | ||
23 | For a more flexible configuration, you can have ``server_name`` | |
24 | resources (eg: ``@user:example.com``) served by a different host and | |
25 | port (eg: ``synapse.example.com:443``). There are two ways to do this: | |
26 | ||
27 | - adding a ``/.well-known/matrix/server`` URL served on ``https://example.com``. | |
28 | - adding a DNS ``SRV`` record in the DNS zone of domain | |
29 | ``example.com``. | |
30 | ||
31 | Without configuring delegation, other Matrix servers will | |
32 | expect to find your server via ``example.com:8448``. The following methods | |
33 | allow you retain a `server_name` of `example.com` so that your user IDs, room | |
34 | aliases, etc continue to look like `*:example.com`, whilst having your | |
35 | federation traffic routed to a different server. | |
36 | ||
37 | ### .well-known delegation | |
38 | ||
39 | To use this method, you need to be able to configure the HTTPS server | |
40 | for your ``server_name`` to serve the ``/.well-known/matrix/server`` | |
41 | URL. Having an active server (with a valid TLS certificate) serving your | |
42 | ``server_name`` domain is out of the scope of this documentation. | |
43 | ||
44 | The URL ``https://<server_name>/.well-known/matrix/server`` should | |
45 | return a JSON structure containing the key ``m.server`` like so: | |
46 | ||
47 | { | |
48 | "m.server": "<synapse.server.name>[:<yourport>]" | |
49 | } | |
50 | ||
51 | In our example, this would mean that URL ``https://example.com/.well-known/matrix/server`` | |
52 | should return: | |
53 | ||
54 | { | |
55 | "m.server": "synapse.example.com:443" | |
56 | } | |
57 | ||
58 | Note that specifying a port is optional. If a port is not specified, an SRV lookup | |
59 | is performed, as described below. If the target of the | |
60 | delegation does not have an SRV record, then the port defaults to 8448. | |
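
As a rough sketch of these port semantics (illustrative only; `parse_well_known` is a hypothetical helper, not part of Synapse), a resolver might interpret the `m.server` value like this:

```python
import json

def parse_well_known(body: str):
    """Parse the JSON body of /.well-known/matrix/server.

    Returns (host, port). A port of None means "perform an SRV lookup
    for the host, falling back to 8448", matching the note above.
    """
    target = json.loads(body)["m.server"]
    host, sep, port = target.rpartition(":")
    if sep:  # an explicit port was given
        return host, int(port)
    return target, None
```

For the example above, `parse_well_known('{"m.server": "synapse.example.com:443"}')` yields `("synapse.example.com", 443)`. (This sketch ignores IPv6 literals.)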
61 | ||
62 | Most installations will not need to configure .well-known. However, it can be | |
63 | useful in cases where the admin is hosting on behalf of someone else and | |
64 | therefore cannot gain access to the necessary certificate. With .well-known, | |
65 | federation servers will check for a valid TLS certificate for the delegated | |
66 | hostname (in our example: ``synapse.example.com``). | |
67 | ||
68 | ### DNS SRV delegation | |
69 | ||
70 | To use this delegation method, you need to have write access to your | |
71 | ``server_name``'s DNS zone (in our example, the | |
72 | ``example.com`` zone). | |
73 | ||
74 | This method requires the target server to provide a | |
75 | valid TLS certificate for the original ``server_name``. | |
76 | ||
77 | You need to add an SRV record in your ``server_name``'s DNS zone with | |
78 | this format: | |
79 | ||
80 | _matrix._tcp.<yourdomain.com> <ttl> IN SRV <priority> <weight> <port> <synapse.server.name> | |
81 | ||
82 | In our example, we would need to add this SRV record in the | |
83 | ``example.com`` DNS zone: | |
84 | ||
85 | _matrix._tcp.example.com. 3600 IN SRV 10 5 443 synapse.example.com. | |
86 | ||
87 | Once set up, you can check the DNS record with ``dig -t srv | |
88 | _matrix._tcp.<server_name>``. In our example, we would expect this: | |
89 | ||
90 | $ dig -t srv _matrix._tcp.example.com | |
91 | _matrix._tcp.example.com. 3600 IN SRV 10 0 443 synapse.example.com. | |
92 | ||
93 | Note that the target of an SRV record cannot be an alias (CNAME record): it has to point | |
94 | directly to the server hosting the Synapse instance. | |
95 | ||
96 | ### Delegation FAQ | |
97 | #### When do I need an SRV record or .well-known URI? | |
98 | ||
99 | If your homeserver listens on the default federation port (8448), and your | |
100 | `server_name` points to the host that your homeserver runs on, you do not need an SRV | |
101 | record or `.well-known/matrix/server` URI. | |
102 | ||
103 | For instance, if you registered `example.com` and pointed its DNS A record at a | |
104 | fresh server, you could install Synapse on that host, | |
105 | giving it a `server_name` of `example.com`, and once [ACME](acme.md) support is enabled, | |
106 | it would automatically generate a valid TLS certificate for you via Let's Encrypt | |
107 | and no SRV record or .well-known URI would be needed. | |
108 | ||
109 | **However**, if your server does not listen on port 8448, or if your `server_name` | |
110 | does not point to the host that your homeserver runs on, you will need to let | |
111 | other servers know how to find it. The way to do this is via .well-known or an | |
112 | SRV record. | |
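
The answer above boils down to a simple precedence, sketched here in Python (a simplified illustration; the authoritative algorithm is in the Matrix server-server specification, and `resolve_federation_target` is a hypothetical name):

```python
def resolve_federation_target(server_name, well_known=None, srv=None):
    """Pick the (host, port) other servers should connect to.

    `well_known` and `srv` are the results of the respective lookups as
    (host, port) tuples, or None if absent. Simplified: the real
    algorithm also handles a well-known target that omits its port.
    """
    if well_known is not None:   # .well-known delegation wins
        return well_known
    if srv is not None:          # then an SRV record
        return srv
    return (server_name, 8448)   # default: the server_name itself on 8448
```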
113 | ||
114 | #### I have created a .well-known URI. Do I also need an SRV record? | |
115 | ||
116 | No. You can use either `.well-known` delegation or an SRV record. You | |
117 | do not need to use both to delegate to the same location. | |
118 | ||
119 | #### Can I manage my own certificates rather than having Synapse renew certificates itself? | |
120 | ||
121 | Yes, you are welcome to manage your certificates yourself. Synapse will only | |
122 | attempt to obtain certificates from Let's Encrypt if you configure it to do | |
123 | so. The only requirement is that a valid TLS certificate is present for | |
124 | federation endpoints. | |
125 | ||
126 | #### Do you still recommend against using a reverse proxy on the federation port? | |
127 | ||
128 | We no longer actively recommend against using a reverse proxy. Many admins will | |
129 | find it easier to direct federation traffic to a reverse proxy and manage their | |
130 | own TLS certificates, and this is a supported configuration. | |
131 | ||
132 | See [reverse_proxy.md](reverse_proxy.md) for information on setting up a | |
133 | reverse proxy. | |
134 | ||
135 | #### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy? | |
136 | ||
137 | Practically speaking, this is no longer necessary. | |
138 | ||
139 | If you are using a reverse proxy for all of your TLS traffic, then you can set | |
140 | `no_tls: True` in the Synapse config. In that case, the only reason Synapse | |
141 | needs the certificate is to populate a legacy `tls_fingerprints` field in the | |
142 | federation API. This is ignored by Synapse 0.99.0 and later, and the only time | |
143 | pre-0.99 Synapses will check it is when attempting to fetch the server keys - | |
144 | and generally this is delegated via `matrix.org`, which will be running a modern | |
145 | version of Synapse. | |
146 | ||
147 | #### Do I need the same certificate for the client and federation port? | |
148 | ||
149 | No. There is nothing stopping you from using different certificates, | |
150 | particularly if you are using a reverse proxy. However, Synapse will use the | |
151 | same certificate on any ports where TLS is configured. | |
152 | 30 | |
153 | 31 | ## Troubleshooting |
154 | 32 | |
155 | You can use the [federation tester]( | |
156 | <https://matrix.org/federationtester>) to check if your homeserver is | |
157 | configured correctly. Alternatively try the [JSON API used by the federation tester](https://matrix.org/federationtester/api/report?server_name=DOMAIN). | |
158 | Note that you'll have to modify this URL to replace ``DOMAIN`` with your | |
159 | ``server_name``. Hitting the API directly provides extra detail. | |
33 | You can use the [federation tester](https://matrix.org/federationtester) | |
34 | to check if your homeserver is configured correctly. Alternatively try the | |
35 | [JSON API used by the federation tester](https://matrix.org/federationtester/api/report?server_name=DOMAIN). | |
36 | Note that you'll have to modify this URL to replace `DOMAIN` with your | |
37 | `server_name`. Hitting the API directly provides extra detail. | |
160 | 38 | |
161 | 39 | The typical failure mode for federation is that when the server tries to join |
162 | 40 | a room, it is rejected with "401: Unauthorized". Generally this means that other |
168 | 46 | proxy: see [reverse_proxy.md](<reverse_proxy.md>) for instructions on how to correctly |
169 | 47 | configure a reverse proxy. |
170 | 48 | |
171 | ## Running a Demo Federation of Synapses | |
49 | ## Running a demo federation of Synapses | |
172 | 50 | |
173 | 51 | If you want to get up and running quickly with a trio of homeservers in a |
174 | private federation, there is a script in the ``demo`` directory. This is mainly | |
52 | private federation, there is a script in the `demo` directory. This is mainly | |
175 | 53 | useful just for development purposes. See [demo/README](<../demo/README>). |
40 | 40 | purged according to its room's policy, then the receiving server will |
41 | 41 | process and store that event until it's picked up by the next purge job, |
42 | 42 | though it will always hide it from clients. |
43 | ||
44 | Synapse requires at least one message in each room, so it will never | |
45 | delete the last message in a room. It will, however, hide it from | |
46 | clients. | |
43 | 47 | |
44 | 48 | |
45 | 49 | ## Server configuration |
17 | 17 | Matrix servers do not necessarily need to connect to your server via the |
18 | 18 | same server name or port. Indeed, clients will use port 443 by default, |
19 | 19 | whereas servers default to port 8448. Where these are different, we |
20 | refer to the 'client port' and the \'federation port\'. See [Setting | |
21 | up federation](federate.md) for more details of the algorithm used for | |
22 | federation connections. | |
20 | refer to the 'client port' and the \'federation port\'. See [the Matrix | |
21 | specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names) | |
22 | for more details of the algorithm used for federation connections, and | |
23 | [delegate.md](<delegate.md>) for instructions on setting up delegation. | |
23 | 24 | |
24 | 25 | Let's assume that we expect clients to connect to our server at |
25 | 26 | `https://matrix.example.com`, and other servers to connect at |
0 | # The config is maintained as an up-to-date snapshot of the default | |
0 | # This file is maintained as an up-to-date snapshot of the default | |
1 | 1 | # homeserver.yaml configuration generated by Synapse. |
2 | 2 | # |
3 | 3 | # It is intended to act as a reference for the default configuration, |
8 | 8 | # It is *not* intended to be copied and used as the basis for a real |
9 | 9 | # homeserver.yaml. Instead, if you are starting from scratch, please generate |
10 | 10 | # a fresh config using Synapse by following the instructions in INSTALL.md. |
11 | ||
12 | ################################################################################ | |
13 | ||
14 | # Configuration file for Synapse. | |
15 | # | |
16 | # This is a YAML file: see [1] for a quick introduction. Note in particular | |
17 | # that *indentation is important*: all the elements of a list or dictionary | |
18 | # should have the same indentation. | |
19 | # | |
20 | # [1] https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html | |
11 | 21 | |
12 | 22 | ## Server ## |
13 | 23 | |
464 | 474 | |
465 | 475 | # ACME support: This will configure Synapse to request a valid TLS certificate |
466 | 476 | # for your configured `server_name` via Let's Encrypt. |
477 | # | |
478 | # Note that ACME v1 is now deprecated, and Synapse currently doesn't support | |
479 | # ACME v2. This means that this feature currently won't work with installs set | |
480 | # up after November 2019. For more info, and alternative solutions, see | |
481 | # https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1 | |
467 | 482 | # |
468 | 483 | # Note that provisioning a certificate in this way requires port 80 to be |
469 | 484 | # routed to Synapse so that it can complete the http-01 ACME challenge. |
0 | # Handling spam in Synapse | |
1 | ||
2 | Synapse has support to customize spam checking behavior. It can plug into a | |
3 | variety of events and affect how they are presented to users on your homeserver. | |
4 | ||
5 | The spam checking behavior is implemented as a Python class, which must be | |
6 | importable by the running Synapse. | |
7 | ||
8 | ## Python spam checker class | |
9 | ||
10 | The Python class is instantiated with two objects: | |
11 | ||
12 | * Any configuration (see below). | |
13 | * An instance of `synapse.spam_checker_api.SpamCheckerApi`. | |
14 | ||
15 | It then implements methods which return a boolean to alter behavior in Synapse. | |
16 | ||
17 | There's a generic method for checking every event (`check_event_for_spam`), as | |
18 | well as some specific methods: | |
19 | ||
20 | * `user_may_invite` | |
21 | * `user_may_create_room` | |
22 | * `user_may_create_room_alias` | |
23 | * `user_may_publish_room` | |
24 | ||
25 | The details of each of these methods (as well as their inputs and outputs) | |
26 | are documented in the `synapse.events.spamcheck.SpamChecker` class. | |
27 | ||
28 | The `SpamCheckerApi` class provides a way for the custom spam checker class to | |
29 | call back into the homeserver internals. It currently implements the following | |
30 | methods: | |
31 | ||
32 | * `get_state_events_in_room` | |
33 | ||
34 | ### Example | |
35 | ||
36 | ```python | |
37 | class ExampleSpamChecker: | |
38 | def __init__(self, config, api): | |
39 | self.config = config | |
40 | self.api = api | |
41 | ||
42 | def check_event_for_spam(self, foo): | |
43 | return False # allow all events | |
44 | ||
45 | def user_may_invite(self, inviter_userid, invitee_userid, room_id): | |
46 | return True # allow all invites | |
47 | ||
48 | def user_may_create_room(self, userid): | |
49 | return True # allow all room creations | |
50 | ||
51 | def user_may_create_room_alias(self, userid, room_alias): | |
52 | return True # allow all room aliases | |
53 | ||
54 | def user_may_publish_room(self, userid, room_id): | |
55 | return True # allow publishing of all rooms | |
56 | ||
57 | def check_username_for_spam(self, user_profile): | |
58 | return False # allow all usernames | |
59 | ``` | |
60 | ||
61 | ## Configuration | |
62 | ||
63 | Modify the `spam_checker` section of your `homeserver.yaml` in the following | |
64 | manner: | |
65 | ||
66 | `module` should point to the fully qualified Python class that implements your | |
67 | custom logic, e.g. `my_module.ExampleSpamChecker`. | |
68 | ||
69 | `config` is a dictionary that gets passed to the spam checker class. | |
70 | ||
71 | ### Example | |
72 | ||
73 | This section might look like: | |
74 | ||
75 | ```yaml | |
76 | spam_checker: | |
77 | module: my_module.ExampleSpamChecker | |
78 | config: | |
79 | # Enable or disable a specific option in ExampleSpamChecker. | |
80 | my_custom_option: true | |
81 | ``` | |
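
For illustration only, wiring such a config to the two-argument constructor described above might look like the following sketch (`load_spam_checker` is a hypothetical helper; Synapse's actual module loading is internal):

```python
import importlib

def load_spam_checker(module_path: str, config: dict, api=None):
    """Instantiate the class named by its dotted path, passing it the
    `config` dict and (normally) a SpamCheckerApi instance."""
    module_name, _, class_name = module_path.rpartition(".")
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls(config, api)
```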
82 | ||
83 | ## Examples | |
84 | ||
85 | The [Mjolnir](https://github.com/matrix-org/mjolnir) project is a full-fledged | |
86 | example using the Synapse spam checking API, including a bot for dynamic | |
87 | configuration. |
175 | 175 | ^/_matrix/federation/v1/query_auth/ |
176 | 176 | ^/_matrix/federation/v1/event_auth/ |
177 | 177 | ^/_matrix/federation/v1/exchange_third_party_invite/ |
178 | ^/_matrix/federation/v1/user/devices/ | |
178 | 179 | ^/_matrix/federation/v1/send/ |
180 | ^/_matrix/federation/v1/get_groups_publicised$ | |
179 | 181 | ^/_matrix/key/v2/query |
182 | ||
183 | Additionally, the following REST endpoints can be handled for GET requests: | |
184 | ||
185 | ^/_matrix/federation/v1/groups/ | |
180 | 186 | |
181 | 187 | The above endpoints should all be routed to the federation_reader worker by the |
182 | 188 | reverse-proxy configuration. |
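
These patterns are ordinary regular expressions anchored at the start of the request path. As a sanity check, a path can be tested against them like this (illustrative sketch; the pattern list below is an abbreviated subset):

```python
import re

# Abbreviated subset of the federation_reader patterns listed above.
FEDERATION_READER_PATTERNS = [
    r"^/_matrix/federation/v1/event_auth/",
    r"^/_matrix/federation/v1/user/devices/",
    r"^/_matrix/federation/v1/send/",
    r"^/_matrix/federation/v1/get_groups_publicised$",
    r"^/_matrix/key/v2/query",
]

def routes_to_federation_reader(path: str) -> bool:
    """True if the path should be routed to the federation_reader worker."""
    return any(re.match(p, path) for p in FEDERATION_READER_PATTERNS)
```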
253 | 259 | ^/_matrix/client/(api/v1|r0|unstable)/keys/changes$ |
254 | 260 | ^/_matrix/client/versions$ |
255 | 261 | ^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$ |
262 | ^/_matrix/client/(api/v1|r0|unstable)/joined_groups$ | |
263 | ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$ | |
264 | ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/ | |
256 | 265 | |
257 | 266 | Additionally, the following REST endpoints can be handled for GET requests: |
258 | 267 | |
259 | 268 | ^/_matrix/client/(api/v1|r0|unstable)/pushrules/.*$ |
269 | ^/_matrix/client/(api/v1|r0|unstable)/groups/.*$ | |
260 | 270 | |
261 | 271 | Additionally, the following REST endpoints can be handled, but all requests must |
262 | 272 | be routed to the same instance: |
277 | 287 | |
278 | 288 | ^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$ |
279 | 289 | |
280 | When using this worker you must also set `update_user_directory: False` in the | |
281 | shared configuration file to stop the main synapse running background | |
290 | When using this worker you must also set `update_user_directory: False` in the | |
291 | shared configuration file to stop the main synapse running background | |
282 | 292 | jobs related to updating the user directory. |
283 | 293 | |
284 | 294 | ### `synapse.app.frontend_proxy` |
2 | 2 | # Exits with 0 if there are no problems, or another code otherwise. |
3 | 3 | |
4 | 4 | # Fix non-lowercase true/false values |
5 | sed -i -E "s/: +True/: true/g; s/: +False/: false/g;" docs/sample_config.yaml | |
5 | sed -i.bak -E "s/: +True/: true/g; s/: +False/: false/g;" docs/sample_config.yaml | |
6 | rm docs/sample_config.yaml.bak | |
6 | 7 | |
7 | 8 | # Check if anything changed |
8 | 9 | git diff --exit-code docs/sample_config.yaml |
35 | 35 | except ImportError: |
36 | 36 | pass |
37 | 37 | |
38 | __version__ = "1.10.0" | |
38 | __version__ = "1.11.0" | |
39 | 39 | |
40 | 40 | if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)): |
41 | 41 | # We import here so that we don't have to install a bunch of deps when |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | 15 | import logging |
16 | from typing import Optional | |
16 | 17 | |
17 | 18 | from six import itervalues |
18 | 19 | |
34 | 35 | ) |
35 | 36 | from synapse.api.room_versions import KNOWN_ROOM_VERSIONS |
36 | 37 | from synapse.config.server import is_threepid_reserved |
38 | from synapse.events import EventBase | |
37 | 39 | from synapse.types import StateMap, UserID |
38 | 40 | from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache |
39 | 41 | from synapse.util.caches.lrucache import LruCache |
91 | 93 | ) |
92 | 94 | |
93 | 95 | @defer.inlineCallbacks |
94 | def check_joined_room(self, room_id, user_id, current_state=None): | |
95 | """Check if the user is currently joined in the room | |
96 | Args: | |
97 | room_id(str): The room to check. | |
98 | user_id(str): The user to check. | |
99 | current_state(dict): Optional map of the current state of the room. | |
96 | def check_user_in_room( | |
97 | self, | |
98 | room_id: str, | |
99 | user_id: str, | |
100 | current_state: Optional[StateMap[EventBase]] = None, | |
101 | allow_departed_users: bool = False, | |
102 | ): | |
103 | """Check if the user is in the room, or was at some point. | |
104 | Args: | |
105 | room_id: The room to check. | |
106 | ||
107 | user_id: The user to check. | |
108 | ||
109 | current_state: Optional map of the current state of the room. | |
100 | 110 | If provided then that map is used to check whether they are a |
101 | 111 | member of the room. Otherwise the current membership is |
102 | 112 | loaded from the database. |
113 | ||
114 | allow_departed_users: if True, accept users that were previously | |
115 | members but have now departed. | |
116 | ||
103 | 117 | Raises: |
104 | AuthError if the user is not in the room. | |
105 | Returns: | |
106 | A deferred membership event for the user if the user is in | |
107 | the room. | |
118 | AuthError if the user is/was not in the room. | |
119 | Returns: | |
120 | Deferred[Optional[EventBase]]: | |
121 | Membership event for the user if the user was in the | |
122 | room. This will be the join event if they are currently joined to | |
123 | the room. This will be the leave event if they have left the room. | |
108 | 124 | """ |
109 | 125 | if current_state: |
110 | 126 | member = current_state.get((EventTypes.Member, user_id), None) |
112 | 128 | member = yield self.state.get_current_state( |
113 | 129 | room_id=room_id, event_type=EventTypes.Member, state_key=user_id |
114 | 130 | ) |
115 | ||
116 | self._check_joined_room(member, user_id, room_id) | |
117 | return member | |
118 | ||
119 | @defer.inlineCallbacks | |
120 | def check_user_was_in_room(self, room_id, user_id): | |
121 | """Check if the user was in the room at some point. | |
122 | Args: | |
123 | room_id(str): The room to check. | |
124 | user_id(str): The user to check. | |
125 | Raises: | |
126 | AuthError if the user was never in the room. | |
127 | Returns: | |
128 | A deferred membership event for the user if the user was in the | |
129 | room. This will be the join event if they are currently joined to | |
130 | the room. This will be the leave event if they have left the room. | |
131 | """ | |
132 | member = yield self.state.get_current_state( | |
133 | room_id=room_id, event_type=EventTypes.Member, state_key=user_id | |
134 | ) | |
135 | 131 | membership = member.membership if member else None |
136 | 132 | |
137 | if membership not in (Membership.JOIN, Membership.LEAVE): | |
138 | raise AuthError(403, "User %s not in room %s" % (user_id, room_id)) | |
139 | ||
140 | if membership == Membership.LEAVE: | |
133 | if membership == Membership.JOIN: | |
134 | return member | |
135 | ||
136 | # XXX this looks totally bogus. Why do we not allow users who have been banned, | |
137 | # or those who were members previously and have been re-invited? | |
138 | if allow_departed_users and membership == Membership.LEAVE: | |
141 | 139 | forgot = yield self.store.did_forget(user_id, room_id) |
142 | if forgot: | |
143 | raise AuthError(403, "User %s not in room %s" % (user_id, room_id)) | |
144 | ||
145 | return member | |
140 | if not forgot: | |
141 | return member | |
142 | ||
143 | raise AuthError(403, "User %s not in room %s" % (user_id, room_id)) | |
146 | 144 | |
147 | 145 | @defer.inlineCallbacks |
148 | 146 | def check_host_in_room(self, room_id, host): |
149 | 147 | with Measure(self.clock, "check_host_in_room"): |
150 | 148 | latest_event_ids = yield self.store.is_host_joined(room_id, host) |
151 | 149 | return latest_event_ids |
152 | ||
153 | def _check_joined_room(self, member, user_id, room_id): | |
154 | if not member or member.membership != Membership.JOIN: | |
155 | raise AuthError( | |
156 | 403, "User %s not in room %s (%s)" % (user_id, room_id, repr(member)) | |
157 | ) | |
158 | 150 | |
159 | 151 | def can_federate(self, event, auth_events): |
160 | 152 | creation_event = auth_events.get((EventTypes.Create, "")) |
559 | 551 | return True |
560 | 552 | |
561 | 553 | user_id = user.to_string() |
562 | yield self.check_joined_room(room_id, user_id) | |
554 | yield self.check_user_in_room(room_id, user_id) | |
563 | 555 | |
564 | 556 | # We currently require the user is a "moderator" in the room. We do this |
565 | 557 | # by checking if they would (theoretically) be able to change the |
632 | 624 | return query_params[0].decode("ascii") |
633 | 625 | |
634 | 626 | @defer.inlineCallbacks |
635 | def check_in_room_or_world_readable(self, room_id, user_id): | |
627 | def check_user_in_room_or_world_readable( | |
628 | self, room_id: str, user_id: str, allow_departed_users: bool = False | |
629 | ): | |
636 | 630 | """Checks that the user is or was in the room or the room is world |
637 | 631 | readable. If it isn't then an exception is raised. |
632 | ||
633 | Args: | |
634 | room_id: room to check | |
635 | user_id: user to check | |
636 | allow_departed_users: if True, accept users that were previously | |
637 | members but have now departed | |
638 | 638 | |
639 | 639 | Returns: |
640 | 640 | Deferred[tuple[str, str|None]]: Resolves to the current membership of |
644 | 644 | """ |
645 | 645 | |
646 | 646 | try: |
647 | # check_user_was_in_room will return the most recent membership | |
647 | # check_user_in_room will return the most recent membership | |
648 | 648 | # event for the user if: |
649 | 649 | # * The user is a non-guest user, and was ever in the room |
650 | 650 | # * The user is a guest user, and has joined the room |
651 | 651 | # else it will throw. |
652 | member_event = yield self.check_user_was_in_room(room_id, user_id) | |
652 | member_event = yield self.check_user_in_room( | |
653 | room_id, user_id, allow_departed_users=allow_departed_users | |
654 | ) | |
653 | 655 | return member_event.membership, member_event.event_id |
654 | 656 | except AuthError: |
655 | 657 | visibility = yield self.state.get_current_state( |
661 | 663 | ): |
662 | 664 | return Membership.JOIN, None |
663 | 665 | raise AuthError( |
664 | 403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN | |
666 | 403, | |
667 | "User %s not in room %s, and room previews are disabled" | |
668 | % (user_id, room_id), | |
665 | 669 | ) |
666 | 670 | |
667 | 671 | @defer.inlineCallbacks |
56 | 56 | RoomStateRestServlet, |
57 | 57 | ) |
58 | 58 | from synapse.rest.client.v1.voip import VoipRestServlet |
59 | from synapse.rest.client.v2_alpha import groups | |
59 | 60 | from synapse.rest.client.v2_alpha.account import ThreepidRestServlet |
60 | 61 | from synapse.rest.client.v2_alpha.keys import KeyChangesServlet, KeyQueryServlet |
61 | 62 | from synapse.rest.client.v2_alpha.register import RegisterRestServlet |
122 | 123 | VoipRestServlet(self).register(resource) |
123 | 124 | PushRuleRestServlet(self).register(resource) |
124 | 125 | VersionsRestServlet(self).register(resource) |
126 | ||
127 | groups.register_servlets(self, resource) | |
125 | 128 | |
126 | 129 | resources.update({"/_matrix/client": resource}) |
127 | 130 |
32 | 32 | from synapse.replication.slave.storage._base import BaseSlavedStore |
33 | 33 | from synapse.replication.slave.storage.account_data import SlavedAccountDataStore |
34 | 34 | from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore |
35 | from synapse.replication.slave.storage.devices import SlavedDeviceStore | |
35 | 36 | from synapse.replication.slave.storage.directory import DirectoryStore |
36 | 37 | from synapse.replication.slave.storage.events import SlavedEventStore |
38 | from synapse.replication.slave.storage.groups import SlavedGroupServerStore | |
37 | 39 | from synapse.replication.slave.storage.keys import SlavedKeyStore |
38 | 40 | from synapse.replication.slave.storage.profile import SlavedProfileStore |
39 | 41 | from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore |
65 | 67 | SlavedEventStore, |
66 | 68 | SlavedKeyStore, |
67 | 69 | SlavedRegistrationStore, |
70 | SlavedGroupServerStore, | |
71 | SlavedDeviceStore, | |
68 | 72 | RoomStore, |
69 | 73 | DirectoryStore, |
70 | 74 | SlavedTransactionStore, |
49 | 49 | |
50 | 50 | MISSING_SERVER_NAME = """\ |
51 | 51 | Missing mandatory `server_name` config option. |
52 | """ | |
53 | ||
54 | ||
55 | CONFIG_FILE_HEADER = """\ | |
56 | # Configuration file for Synapse. | |
57 | # | |
58 | # This is a YAML file: see [1] for a quick introduction. Note in particular | |
59 | # that *indentation is important*: all the elements of a list or dictionary | |
60 | # should have the same indentation. | |
61 | # | |
62 | # [1] https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html | |
63 | ||
52 | 64 | """ |
53 | 65 | |
54 | 66 | |
343 | 355 | str: the yaml config file |
344 | 356 | """ |
345 | 357 | |
346 | return "\n\n".join( | |
358 | return CONFIG_FILE_HEADER + "\n\n".join( | |
347 | 359 | dedent(conf) |
348 | 360 | for conf in self.invoke_all( |
349 | 361 | "generate_config_section", |
573 | 585 | if not path_exists(config_dir_path): |
574 | 586 | os.makedirs(config_dir_path) |
575 | 587 | with open(config_path, "w") as config_file: |
576 | config_file.write("# vim:ft=yaml\n\n") | |
577 | 588 | config_file.write(config_str) |
589 | config_file.write("\n\n# vim:ft=yaml") | |
578 | 590 | |
579 | 591 | config_dict = yaml.safe_load(config_str) |
580 | 592 | obj.generate_missing_files(config_dict, config_dir_path) |
31 | 31 | |
32 | 32 | logger = logging.getLogger(__name__) |
33 | 33 | |
34 | ACME_SUPPORT_ENABLED_WARN = """\ | |
35 | This server uses Synapse's built-in ACME support. Note that ACME v1 has been | |
36 | deprecated by Let's Encrypt, and that Synapse doesn't currently support ACME v2, | |
37 | which means that this feature will not work with Synapse installs set up after | |
38 | November 2019, and that it may stop working in June 2020 for installs set up | |
39 | before that date. | |
40 | ||
41 | For more info and alternative solutions, see | |
42 | https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1 | |
43 | --------------------------------------------------------------------------------""" | |
44 | ||
34 | 45 | |
35 | 46 | class TlsConfig(Config): |
36 | 47 | section = "tls" |
42 | 53 | acme_config = {} |
43 | 54 | |
44 | 55 | self.acme_enabled = acme_config.get("enabled", False) |
56 | ||
57 | if self.acme_enabled: | |
58 | logger.warning(ACME_SUPPORT_ENABLED_WARN) | |
45 | 59 | |
46 | 60 | # hyperlink complains on py2 if this is not a Unicode |
47 | 61 | self.acme_url = six.text_type( |
108 | 122 | fed_whitelist_entries = config.get( |
109 | 123 | "federation_certificate_verification_whitelist", [] |
110 | 124 | ) |
125 | if fed_whitelist_entries is None: | |
126 | fed_whitelist_entries = [] | |
111 | 127 | |
112 | 128 | # Support globs (*) in whitelist values |
113 | 129 | self.federation_certificate_verification_whitelist = [] # type: List[str] |
359 | 375 | # ACME support: This will configure Synapse to request a valid TLS certificate |
360 | 376 | # for your configured `server_name` via Let's Encrypt. |
361 | 377 | # |
378 | # Note that ACME v1 is now deprecated, and Synapse currently doesn't support | |
379 | # ACME v2. This means that this feature currently won't work with installs set | |
380 | # up after November 2019. For more info, and alternative solutions, see | |
381 | # https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1 | |
382 | # | |
362 | 383 | # Note that provisioning a certificate in this way requires port 80 to be |
363 | 384 | # routed to Synapse so that it can complete the http-01 ACME challenge. |
364 | 385 | # By default, if you enable ACME support, Synapse will attempt to listen on |
0 | 0 | # -*- coding: utf-8 -*- |
1 | 1 | # Copyright 2014-2016 OpenMarket Ltd |
2 | 2 | # Copyright 2019 New Vector Ltd |
3 | # Copyright 2020 The Matrix.org Foundation C.I.C. | |
3 | 4 | # |
4 | 5 | # Licensed under the Apache License, Version 2.0 (the "License"); |
5 | 6 | # you may not use this file except in compliance with the License. |
15 | 16 | |
16 | 17 | import os |
17 | 18 | from distutils.util import strtobool |
19 | from typing import Optional, Type | |
18 | 20 | |
19 | 21 | import six |
20 | 22 | |
21 | 23 | from unpaddedbase64 import encode_base64 |
22 | 24 | |
23 | from synapse.api.errors import UnsupportedRoomVersionError | |
24 | from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, EventFormatVersions | |
25 | from synapse.api.room_versions import EventFormatVersions, RoomVersion, RoomVersions | |
25 | 26 | from synapse.types import JsonDict |
26 | 27 | from synapse.util.caches import intern_dict |
27 | 28 | from synapse.util.frozenutils import freeze |
36 | 37 | USE_FROZEN_DICTS = strtobool(os.environ.get("SYNAPSE_USE_FROZEN_DICTS", "0")) |
37 | 38 | |
38 | 39 | |
40 | class DictProperty: | |
41 | """An object property which delegates to the `_dict` within its parent object.""" | |
42 | ||
43 | __slots__ = ["key"] | |
44 | ||
45 | def __init__(self, key: str): | |
46 | self.key = key | |
47 | ||
48 | def __get__(self, instance, owner=None): | |
49 | # if the property is accessed as a class property rather than an instance | |
50 | # property, return the property itself rather than the value | |
51 | if instance is None: | |
52 | return self | |
53 | try: | |
54 | return instance._dict[self.key] | |
55 | except KeyError as e1: | |
56 | # We want this to look like a regular attribute error (mostly so that | |
57 | # hasattr() works correctly), so we convert the KeyError into an | |
58 | # AttributeError. | |
59 | # | |
60 | # To exclude the KeyError from the traceback, we explicitly | |
61 | # 'raise from e1.__context__' (which is better than 'raise from None', | |
 62 |  # because that would omit any *earlier* exceptions), | 
63 | # | |
64 | raise AttributeError( | |
65 | "'%s' has no '%s' property" % (type(instance), self.key) | |
66 | ) from e1.__context__ | |
67 | ||
68 | def __set__(self, instance, v): | |
69 | instance._dict[self.key] = v | |
70 | ||
71 | def __delete__(self, instance): | |
72 | try: | |
73 | del instance._dict[self.key] | |
74 | except KeyError as e1: | |
75 | raise AttributeError( | |
76 | "'%s' has no '%s' property" % (type(instance), self.key) | |
77 | ) from e1.__context__ | |
78 | ||
79 | ||
80 | class DefaultDictProperty(DictProperty): | |
81 | """An extension of DictProperty which provides a default if the property is | |
82 | not present in the parent's _dict. | |
83 | ||
84 | Note that this means that hasattr() on the property always returns True. | |
85 | """ | |
86 | ||
87 | __slots__ = ["default"] | |
88 | ||
89 | def __init__(self, key, default): | |
90 | super().__init__(key) | |
91 | self.default = default | |
92 | ||
93 | def __get__(self, instance, owner=None): | |
94 | if instance is None: | |
95 | return self | |
96 | return instance._dict.get(self.key, self.default) | |
97 | ||
98 | ||
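The `DictProperty` descriptor introduced above can be exercised on its own. The following is a trimmed, self-contained re-statement of the pattern (the `Metadata` class is an illustrative stand-in, not Synapse's real `_EventInternalMetadata`), showing how attribute access delegates to the owner's `_dict` and why the `KeyError` is converted so that `hasattr()` behaves correctly:

```python
class DictProperty:
    """Descriptor delegating an attribute to the owner's `_dict`."""

    __slots__ = ["key"]

    def __init__(self, key):
        self.key = key

    def __get__(self, instance, owner=None):
        if instance is None:
            # class-level access returns the descriptor itself
            return self
        try:
            return instance._dict[self.key]
        except KeyError as e1:
            # hasattr() expects AttributeError, not KeyError
            raise AttributeError(self.key) from e1.__context__

    def __set__(self, instance, v):
        instance._dict[self.key] = v


class Metadata:
    outlier = DictProperty("outlier")

    def __init__(self, d):
        self._dict = dict(d)


m = Metadata({"outlier": True})
assert m.outlier is True
m.outlier = False
assert m._dict["outlier"] is False
# a missing key surfaces as AttributeError, so hasattr() returns False
assert not hasattr(Metadata({}), "outlier")
```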
39 | 99 | class _EventInternalMetadata(object): |
40 | def __init__(self, internal_metadata_dict): | |
41 | self.__dict__ = dict(internal_metadata_dict) | |
42 | ||
43 | def get_dict(self): | |
44 | return dict(self.__dict__) | |
45 | ||
46 | def is_outlier(self): | |
47 | return getattr(self, "outlier", False) | |
48 | ||
49 | def is_out_of_band_membership(self): | |
100 | __slots__ = ["_dict"] | |
101 | ||
102 | def __init__(self, internal_metadata_dict: JsonDict): | |
103 | # we have to copy the dict, because it turns out that the same dict is | |
104 | # reused. TODO: fix that | |
105 | self._dict = dict(internal_metadata_dict) | |
106 | ||
107 | outlier = DictProperty("outlier") # type: bool | |
108 | out_of_band_membership = DictProperty("out_of_band_membership") # type: bool | |
109 | send_on_behalf_of = DictProperty("send_on_behalf_of") # type: str | |
110 | recheck_redaction = DictProperty("recheck_redaction") # type: bool | |
111 | soft_failed = DictProperty("soft_failed") # type: bool | |
112 | proactively_send = DictProperty("proactively_send") # type: bool | |
113 | redacted = DictProperty("redacted") # type: bool | |
114 | txn_id = DictProperty("txn_id") # type: str | |
115 | token_id = DictProperty("token_id") # type: str | |
116 | stream_ordering = DictProperty("stream_ordering") # type: int | |
117 | ||
118 | # XXX: These are set by StreamWorkerStore._set_before_and_after. | |
119 | # I'm pretty sure that these are never persisted to the database, so shouldn't | |
120 | # be here | |
121 | before = DictProperty("before") # type: str | |
122 | after = DictProperty("after") # type: str | |
123 | order = DictProperty("order") # type: int | |
124 | ||
125 | def get_dict(self) -> JsonDict: | |
126 | return dict(self._dict) | |
127 | ||
128 | def is_outlier(self) -> bool: | |
129 | return self._dict.get("outlier", False) | |
130 | ||
131 | def is_out_of_band_membership(self) -> bool: | |
50 | 132 | """Whether this is an out of band membership, like an invite or an invite |
51 | 133 | rejection. This is needed as those events are marked as outliers, but |
52 | 134 | they still need to be processed as if they're new events (e.g. updating |
53 | 135 | invite state in the database, relaying to clients, etc). |
54 | 136 | """ |
55 | return getattr(self, "out_of_band_membership", False) | |
56 | ||
57 | def get_send_on_behalf_of(self): | |
137 | return self._dict.get("out_of_band_membership", False) | |
138 | ||
139 | def get_send_on_behalf_of(self) -> Optional[str]: | |
58 | 140 | """Whether this server should send the event on behalf of another server. |
59 | 141 | This is used by the federation "send_join" API to forward the initial join |
60 | 142 | event for a server in the room. |
61 | 143 | |
62 | 144 | returns a str with the name of the server this event is sent on behalf of. |
63 | 145 | """ |
64 | return getattr(self, "send_on_behalf_of", None) | |
65 | ||
66 | def need_to_check_redaction(self): | |
146 | return self._dict.get("send_on_behalf_of") | |
147 | ||
148 | def need_to_check_redaction(self) -> bool: | |
67 | 149 | """Whether the redaction event needs to be rechecked when fetching |
68 | 150 | from the database. |
69 | 151 | |
76 | 158 | Returns: |
77 | 159 | bool |
78 | 160 | """ |
79 | return getattr(self, "recheck_redaction", False) | |
80 | ||
81 | def is_soft_failed(self): | |
161 | return self._dict.get("recheck_redaction", False) | |
162 | ||
163 | def is_soft_failed(self) -> bool: | |
82 | 164 | """Whether the event has been soft failed. |
83 | 165 | |
84 | 166 | Soft failed events should be handled as usual, except: |
90 | 172 | Returns: |
91 | 173 | bool |
92 | 174 | """ |
93 | return getattr(self, "soft_failed", False) | |
175 | return self._dict.get("soft_failed", False) | |
94 | 176 | |
95 | 177 | def should_proactively_send(self): |
96 | 178 | """Whether the event, if ours, should be sent to other clients and |
102 | 184 | Returns: |
103 | 185 | bool |
104 | 186 | """ |
105 | return getattr(self, "proactively_send", True) | |
187 | return self._dict.get("proactively_send", True) | |
106 | 188 | |
107 | 189 | def is_redacted(self): |
108 | 190 | """Whether the event has been redacted. |
113 | 195 | Returns: |
114 | 196 | bool |
115 | 197 | """ |
116 | return getattr(self, "redacted", False) | |
117 | ||
118 | ||
119 | _SENTINEL = object() | |
120 | ||
121 | ||
122 | def _event_dict_property(key, default=_SENTINEL): | |
123 | """Creates a new property for the given key that delegates access to | |
124 | `self._event_dict`. | |
125 | ||
126 | The default is used if the key is missing from the `_event_dict`, if given, | |
127 | otherwise an AttributeError will be raised. | |
128 | ||
129 | Note: If a default is given then `hasattr` will always return true. | |
130 | """ | |
131 | ||
132 | # We want to be able to use hasattr with the event dict properties. | |
133 | # However, (on python3) hasattr expects AttributeError to be raised. Hence, | |
134 | # we need to transform the KeyError into an AttributeError | |
135 | ||
136 | def getter_raises(self): | |
137 | try: | |
138 | return self._event_dict[key] | |
139 | except KeyError: | |
140 | raise AttributeError(key) | |
141 | ||
142 | def getter_default(self): | |
143 | return self._event_dict.get(key, default) | |
144 | ||
145 | def setter(self, v): | |
146 | try: | |
147 | self._event_dict[key] = v | |
148 | except KeyError: | |
149 | raise AttributeError(key) | |
150 | ||
151 | def delete(self): | |
152 | try: | |
153 | del self._event_dict[key] | |
154 | except KeyError: | |
155 | raise AttributeError(key) | |
156 | ||
157 | if default is _SENTINEL: | |
158 | # No default given, so use the getter that raises | |
159 | return property(getter_raises, setter, delete) | |
160 | else: | |
161 | return property(getter_default, setter, delete) | |
198 | return self._dict.get("redacted", False) | |
162 | 199 | |
163 | 200 | |
164 | 201 | class EventBase(object): |
174 | 211 | self.unsigned = unsigned |
175 | 212 | self.rejected_reason = rejected_reason |
176 | 213 | |
177 | self._event_dict = event_dict | |
214 | self._dict = event_dict | |
178 | 215 | |
179 | 216 | self.internal_metadata = _EventInternalMetadata(internal_metadata_dict) |
180 | 217 | |
181 | auth_events = _event_dict_property("auth_events") | |
182 | depth = _event_dict_property("depth") | |
183 | content = _event_dict_property("content") | |
184 | hashes = _event_dict_property("hashes") | |
185 | origin = _event_dict_property("origin") | |
186 | origin_server_ts = _event_dict_property("origin_server_ts") | |
187 | prev_events = _event_dict_property("prev_events") | |
188 | redacts = _event_dict_property("redacts", None) | |
189 | room_id = _event_dict_property("room_id") | |
190 | sender = _event_dict_property("sender") | |
191 | user_id = _event_dict_property("sender") | |
218 | auth_events = DictProperty("auth_events") | |
219 | depth = DictProperty("depth") | |
220 | content = DictProperty("content") | |
221 | hashes = DictProperty("hashes") | |
222 | origin = DictProperty("origin") | |
223 | origin_server_ts = DictProperty("origin_server_ts") | |
224 | prev_events = DictProperty("prev_events") | |
225 | redacts = DefaultDictProperty("redacts", None) | |
226 | room_id = DictProperty("room_id") | |
227 | sender = DictProperty("sender") | |
228 | state_key = DictProperty("state_key") | |
229 | type = DictProperty("type") | |
230 | user_id = DictProperty("sender") | |
231 | ||
232 | @property | |
233 | def event_id(self) -> str: | |
234 | raise NotImplementedError() | |
192 | 235 | |
193 | 236 | @property |
194 | 237 | def membership(self): |
198 | 241 | return hasattr(self, "state_key") and self.state_key is not None |
199 | 242 | |
200 | 243 | def get_dict(self) -> JsonDict: |
201 | d = dict(self._event_dict) | |
244 | d = dict(self._dict) | |
202 | 245 | d.update({"signatures": self.signatures, "unsigned": dict(self.unsigned)}) |
203 | 246 | |
204 | 247 | return d |
205 | 248 | |
206 | 249 | def get(self, key, default=None): |
207 | return self._event_dict.get(key, default) | |
250 | return self._dict.get(key, default) | |
208 | 251 | |
209 | 252 | def get_internal_metadata_dict(self): |
210 | 253 | return self.internal_metadata.get_dict() |
226 | 269 | raise AttributeError("Unrecognized attribute %s" % (instance,)) |
227 | 270 | |
228 | 271 | def __getitem__(self, field): |
229 | return self._event_dict[field] | |
272 | return self._dict[field] | |
230 | 273 | |
231 | 274 | def __contains__(self, field): |
232 | return field in self._event_dict | |
275 | return field in self._dict | |
233 | 276 | |
234 | 277 | def items(self): |
235 | return list(self._event_dict.items()) | |
278 | return list(self._dict.items()) | |
236 | 279 | |
237 | 280 | def keys(self): |
238 | return six.iterkeys(self._event_dict) | |
281 | return six.iterkeys(self._dict) | |
239 | 282 | |
240 | 283 | def prev_event_ids(self): |
241 | 284 | """Returns the list of prev event IDs. The order matches the order |
280 | 323 | else: |
281 | 324 | frozen_dict = event_dict |
282 | 325 | |
283 | self.event_id = event_dict["event_id"] | |
284 | self.type = event_dict["type"] | |
285 | if "state_key" in event_dict: | |
286 | self.state_key = event_dict["state_key"] | |
326 | self._event_id = event_dict["event_id"] | |
287 | 327 | |
288 | 328 | super(FrozenEvent, self).__init__( |
289 | 329 | frozen_dict, |
293 | 333 | rejected_reason=rejected_reason, |
294 | 334 | ) |
295 | 335 | |
336 | @property | |
337 | def event_id(self) -> str: | |
338 | return self._event_id | |
339 | ||
296 | 340 | def __str__(self): |
297 | 341 | return self.__repr__() |
298 | 342 | |
331 | 375 | frozen_dict = event_dict |
332 | 376 | |
333 | 377 | self._event_id = None |
334 | self.type = event_dict["type"] | |
335 | if "state_key" in event_dict: | |
336 | self.state_key = event_dict["state_key"] | |
337 | 378 | |
338 | 379 | super(FrozenEventV2, self).__init__( |
339 | 380 | frozen_dict, |
403 | 444 | return self._event_id |
404 | 445 | |
405 | 446 | |
406 | def room_version_to_event_format(room_version): | |
407 | """Converts a room version string to the event format | |
408 | ||
409 | Args: | |
410 | room_version (str) | |
411 | ||
412 | Returns: | |
413 | int | |
414 | ||
415 | Raises: | |
416 | UnsupportedRoomVersionError if the room version is unknown | |
417 | """ | |
418 | v = KNOWN_ROOM_VERSIONS.get(room_version) | |
419 | ||
420 | if not v: | |
421 | # this can happen if support is withdrawn for a room version | |
422 | raise UnsupportedRoomVersionError() | |
423 | ||
424 | return v.event_format | |
425 | ||
426 | ||
427 | def event_type_from_format_version(format_version): | |
447 | def event_type_from_format_version(format_version: int) -> Type[EventBase]: | |
428 | 448 | """Returns the python type to use to construct an Event object for the |
429 | 449 | given event format version. |
430 | 450 | |
444 | 464 | return FrozenEventV3 |
445 | 465 | else: |
446 | 466 | raise Exception("No event format %r" % (format_version,)) |
467 | ||
468 | ||
469 | def make_event_from_dict( | |
470 | event_dict: JsonDict, | |
471 | room_version: RoomVersion = RoomVersions.V1, | |
472 | internal_metadata_dict: JsonDict = {}, | |
473 | rejected_reason: Optional[str] = None, | |
474 | ) -> EventBase: | |
475 | """Construct an EventBase from the given event dict""" | |
476 | event_type = event_type_from_format_version(room_version.event_format) | |
477 | return event_type(event_dict, internal_metadata_dict, rejected_reason) |
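The new `make_event_from_dict` helper centralises the format-version dispatch that callers previously performed by hand via `event_type_from_format_version`. A minimal stand-alone sketch of that dispatch (the event classes and the `types` mapping here are illustrative stand-ins, not Synapse's real classes):

```python
from typing import Optional, Type


class EventBase:
    def __init__(self, event_dict, internal_metadata_dict=None,
                 rejected_reason: Optional[str] = None):
        self._dict = dict(event_dict)
        self.rejected_reason = rejected_reason


class EventV1(EventBase):
    format_version = 1


class EventV2(EventBase):
    format_version = 2


def event_type_from_format_version(format_version: int) -> Type[EventBase]:
    """Map an event format version to the class used to construct it."""
    types = {1: EventV1, 2: EventV2}
    if format_version not in types:
        raise Exception("No event format %r" % (format_version,))
    return types[format_version]


def make_event_from_dict(event_dict, event_format: int = 1,
                         internal_metadata_dict=None,
                         rejected_reason: Optional[str] = None) -> EventBase:
    """Construct an event, hiding the format-version dispatch from callers."""
    event_type = event_type_from_format_version(event_format)
    return event_type(event_dict, internal_metadata_dict, rejected_reason)


ev = make_event_from_dict({"type": "m.room.message"}, event_format=2)
assert isinstance(ev, EventV2)
```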
27 | 27 | RoomVersion, |
28 | 28 | ) |
29 | 29 | from synapse.crypto.event_signing import add_hashes_and_signatures |
30 | from synapse.events import ( | |
31 | EventBase, | |
32 | _EventInternalMetadata, | |
33 | event_type_from_format_version, | |
34 | ) | |
30 | from synapse.events import EventBase, _EventInternalMetadata, make_event_from_dict | |
35 | 31 | from synapse.types import EventID, JsonDict |
36 | 32 | from synapse.util import Clock |
37 | 33 | from synapse.util.stringutils import random_string |
255 | 251 | event_dict.setdefault("signatures", {}) |
256 | 252 | |
257 | 253 | add_hashes_and_signatures(room_version, event_dict, hostname, signing_key) |
258 | return event_type_from_format_version(format_version)( | |
259 | event_dict, internal_metadata_dict=internal_metadata_dict | |
254 | return make_event_from_dict( | |
255 | event_dict, room_version, internal_metadata_dict=internal_metadata_dict | |
260 | 256 | ) |
261 | 257 | |
262 | 258 |
14 | 14 | # limitations under the License. |
15 | 15 | |
16 | 16 | import inspect |
17 | from typing import Dict | |
17 | 18 | |
18 | 19 | from synapse.spam_checker_api import SpamCheckerApi |
19 | 20 | |
21 | MYPY = False | |
22 | if MYPY: | |
23 | import synapse.server | |
24 | ||
20 | 25 | |
21 | 26 | class SpamChecker(object): |
22 | def __init__(self, hs): | |
27 | def __init__(self, hs: "synapse.server.HomeServer"): | |
23 | 28 | self.spam_checker = None |
24 | 29 | |
25 | 30 | module = None |
39 | 44 | else: |
40 | 45 | self.spam_checker = module(config=config) |
41 | 46 | |
42 | def check_event_for_spam(self, event): | |
47 | def check_event_for_spam(self, event: "synapse.events.EventBase") -> bool: | |
43 | 48 | """Checks if a given event is considered "spammy" by this server. |
44 | 49 | |
45 | 50 | If the server considers an event spammy, then it will be rejected if |
47 | 52 | users receive a blank event. |
48 | 53 | |
49 | 54 | Args: |
50 | event (synapse.events.EventBase): the event to be checked | |
55 | event: the event to be checked | |
51 | 56 | |
52 | 57 | Returns: |
53 | bool: True if the event is spammy. | |
58 | True if the event is spammy. | |
54 | 59 | """ |
55 | 60 | if self.spam_checker is None: |
56 | 61 | return False |
57 | 62 | |
58 | 63 | return self.spam_checker.check_event_for_spam(event) |
59 | 64 | |
60 | def user_may_invite(self, inviter_userid, invitee_userid, room_id): | |
65 | def user_may_invite( | |
66 | self, inviter_userid: str, invitee_userid: str, room_id: str | |
67 | ) -> bool: | |
61 | 68 | """Checks if a given user may send an invite |
62 | 69 | |
63 | 70 | If this method returns false, the invite will be rejected. |
64 | 71 | |
65 | 72 | Args: |
66 | userid (string): The sender's user ID | |
73 | inviter_userid: The user ID of the sender of the invitation | |
74 | invitee_userid: The user ID targeted in the invitation | |
75 | room_id: The room ID | |
67 | 76 | |
68 | 77 | Returns: |
69 | bool: True if the user may send an invite, otherwise False | |
78 | True if the user may send an invite, otherwise False | |
70 | 79 | """ |
71 | 80 | if self.spam_checker is None: |
72 | 81 | return True |
75 | 84 | inviter_userid, invitee_userid, room_id |
76 | 85 | ) |
77 | 86 | |
78 | def user_may_create_room(self, userid): | |
87 | def user_may_create_room(self, userid: str) -> bool: | |
79 | 88 | """Checks if a given user may create a room |
80 | 89 | |
81 | 90 | If this method returns false, the creation request will be rejected. |
82 | 91 | |
83 | 92 | Args: |
84 | userid (string): The sender's user ID | |
93 | userid: The ID of the user attempting to create a room | |
85 | 94 | |
86 | 95 | Returns: |
87 | bool: True if the user may create a room, otherwise False | |
96 | True if the user may create a room, otherwise False | |
88 | 97 | """ |
89 | 98 | if self.spam_checker is None: |
90 | 99 | return True |
91 | 100 | |
92 | 101 | return self.spam_checker.user_may_create_room(userid) |
93 | 102 | |
94 | def user_may_create_room_alias(self, userid, room_alias): | |
103 | def user_may_create_room_alias(self, userid: str, room_alias: str) -> bool: | |
95 | 104 | """Checks if a given user may create a room alias |
96 | 105 | |
97 | 106 | If this method returns false, the association request will be rejected. |
98 | 107 | |
99 | 108 | Args: |
100 | userid (string): The sender's user ID | |
101 | room_alias (string): The alias to be created | |
109 | userid: The ID of the user attempting to create a room alias | |
110 | room_alias: The alias to be created | |
102 | 111 | |
103 | 112 | Returns: |
104 | bool: True if the user may create a room alias, otherwise False | |
113 | True if the user may create a room alias, otherwise False | |
105 | 114 | """ |
106 | 115 | if self.spam_checker is None: |
107 | 116 | return True |
108 | 117 | |
109 | 118 | return self.spam_checker.user_may_create_room_alias(userid, room_alias) |
110 | 119 | |
111 | def user_may_publish_room(self, userid, room_id): | |
120 | def user_may_publish_room(self, userid: str, room_id: str) -> bool: | |
112 | 121 | """Checks if a given user may publish a room to the directory |
113 | 122 | |
114 | 123 | If this method returns false, the publish request will be rejected. |
115 | 124 | |
116 | 125 | Args: |
117 | userid (string): The sender's user ID | |
118 | room_id (string): The ID of the room that would be published | |
126 | userid: The user ID attempting to publish the room | |
127 | room_id: The ID of the room that would be published | |
119 | 128 | |
120 | 129 | Returns: |
121 | bool: True if the user may publish the room, otherwise False | |
130 | True if the user may publish the room, otherwise False | |
122 | 131 | """ |
123 | 132 | if self.spam_checker is None: |
124 | 133 | return True |
125 | 134 | |
126 | 135 | return self.spam_checker.user_may_publish_room(userid, room_id) |
136 | ||
137 | def check_username_for_spam(self, user_profile: Dict[str, str]) -> bool: | |
138 | """Checks if a user ID or display name are considered "spammy" by this server. | |
139 | ||
140 | If the server considers a username spammy, then it will not be included in | |
141 | user directory results. | |
142 | ||
143 | Args: | |
144 | user_profile: The user information to check, it contains the keys: | |
145 | * user_id | |
146 | * display_name | |
147 | * avatar_url | |
148 | ||
149 | Returns: | |
150 | True if the user is spammy. | |
151 | """ | |
152 | if self.spam_checker is None: | |
153 | return False | |
154 | ||
 155 |  # For backwards compatibility, if the method does not exist on the spam checker, fall back to not interfering. | 
156 | checker = getattr(self.spam_checker, "check_username_for_spam", None) | |
157 | if not checker: | |
158 | return False | |
159 | # Make a copy of the user profile object to ensure the spam checker | |
160 | # cannot modify it. | |
161 | return checker(user_profile.copy()) |
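`check_username_for_spam` only invokes the loaded module's method if it exists, so spam-checker modules written before this hook keep working. The `getattr`-based fallback can be sketched in isolation (the `LegacyChecker` class is an illustrative stand-in):

```python
class LegacyChecker:
    """A spam-checker module written before check_username_for_spam existed."""

    def check_event_for_spam(self, event):
        return False


def check_username_for_spam(spam_checker, user_profile):
    if spam_checker is None:
        return False
    # Fall back to "not spammy" if the loaded module predates this hook.
    checker = getattr(spam_checker, "check_username_for_spam", None)
    if not checker:
        return False
    # Pass a copy so the module cannot mutate the caller's profile dict.
    return checker(dict(user_profile))


assert check_username_for_spam(LegacyChecker(), {"user_id": "@a:b"}) is False
assert check_username_for_spam(None, {"user_id": "@a:b"}) is False
```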
73 | 73 | is_requester_admin (bool): If the requester is an admin |
74 | 74 | |
75 | 75 | Returns: |
76 | defer.Deferred | |
76 | defer.Deferred[bool]: Whether room creation is allowed or denied. | |
77 | 77 | """ |
78 | 78 | |
79 | 79 | if self.third_party_rules is None: |
80 | return | |
80 | return True | |
81 | 81 | |
82 | yield self.third_party_rules.on_create_room( | |
82 | ret = yield self.third_party_rules.on_create_room( | |
83 | 83 | requester, config, is_requester_admin |
84 | 84 | ) |
85 | return ret | |
85 | 86 | |
86 | 87 | @defer.inlineCallbacks |
87 | 88 | def check_threepid_can_be_invited(self, medium, address, room_id): |
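The hunk above fixes `on_create_room` silently discarding the module's verdict: under `@defer.inlineCallbacks`, a bare `yield` throws the result away, so it must be captured and returned (and the no-module path must return `True`, not `None`). An asyncio analogue of the fixed shape (names are illustrative, not Synapse's Twisted code):

```python
import asyncio


class DenyAllRules:
    """Stand-in third-party-rules module that rejects every room creation."""

    async def on_create_room(self, requester, config, is_requester_admin):
        return False


async def on_create_room(third_party_rules, requester, config,
                         is_requester_admin):
    if third_party_rules is None:
        # Previously this returned None; callers expect a boolean.
        return True
    # Previously the result of this call was yielded and thrown away.
    ret = await third_party_rules.on_create_room(
        requester, config, is_requester_admin
    )
    return ret


assert asyncio.run(on_create_room(None, "@u:a", {}, False)) is True
assert asyncio.run(on_create_room(DenyAllRules(), "@u:a", {}, False)) is False
```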
0 | 0 | # -*- coding: utf-8 -*- |
1 | 1 | # Copyright 2015, 2016 OpenMarket Ltd |
2 | # Copyright 2020 The Matrix.org Foundation C.I.C. | |
2 | 3 | # |
3 | 4 | # Licensed under the Apache License, Version 2.0 (the "License"); |
4 | 5 | # you may not use this file except in compliance with the License. |
21 | 22 | |
22 | 23 | from synapse.api.constants import MAX_DEPTH, EventTypes, Membership |
23 | 24 | from synapse.api.errors import Codes, SynapseError |
24 | from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, EventFormatVersions | |
25 | from synapse.api.room_versions import ( | |
26 | KNOWN_ROOM_VERSIONS, | |
27 | EventFormatVersions, | |
28 | RoomVersion, | |
29 | ) | |
25 | 30 | from synapse.crypto.event_signing import check_event_content_hash |
26 | from synapse.events import event_type_from_format_version | |
31 | from synapse.events import EventBase, make_event_from_dict | |
27 | 32 | from synapse.events.utils import prune_event |
28 | 33 | from synapse.http.servlet import assert_params_in_dict |
29 | 34 | from synapse.logging.context import ( |
32 | 37 | make_deferred_yieldable, |
33 | 38 | preserve_fn, |
34 | 39 | ) |
35 | from synapse.types import get_domain_from_id | |
40 | from synapse.types import JsonDict, get_domain_from_id | |
36 | 41 | from synapse.util import unwrapFirstError |
37 | 42 | |
38 | 43 | logger = logging.getLogger(__name__) |
341 | 346 | ) |
342 | 347 | |
343 | 348 | |
344 | def event_from_pdu_json(pdu_json, event_format_version, outlier=False): | |
345 | """Construct a FrozenEvent from an event json received over federation | |
349 | def event_from_pdu_json( | |
350 | pdu_json: JsonDict, room_version: RoomVersion, outlier: bool = False | |
351 | ) -> EventBase: | |
352 | """Construct an EventBase from an event json received over federation | |
346 | 353 | |
347 | 354 | Args: |
348 | pdu_json (object): pdu as received over federation | |
349 | event_format_version (int): The event format version | |
350 | outlier (bool): True to mark this event as an outlier | |
351 | ||
352 | Returns: | |
353 | FrozenEvent | |
355 | pdu_json: pdu as received over federation | |
356 | room_version: The version of the room this event belongs to | |
357 | outlier: True to mark this event as an outlier | |
354 | 358 | |
355 | 359 | Raises: |
356 | 360 | SynapseError: if the pdu is missing required fields or is otherwise |
369 | 373 | elif depth > MAX_DEPTH: |
370 | 374 | raise SynapseError(400, "Depth too large", Codes.BAD_JSON) |
371 | 375 | |
372 | event = event_type_from_format_version(event_format_version)(pdu_json) | |
373 | ||
376 | event = make_event_from_dict(pdu_json, room_version) | |
374 | 377 | event.internal_metadata.outlier = outlier |
375 | 378 | |
376 | 379 | return event |
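`event_from_pdu_json` still validates the depth field before constructing the event; a self-contained sketch of that bounds check (the `MAX_DEPTH` value and the minimal `SynapseError` here are stand-ins for Synapse's own constant and error class):

```python
MAX_DEPTH = 2 ** 63 - 1  # illustrative; Synapse defines its own constant


class SynapseError(Exception):
    def __init__(self, code, msg):
        super().__init__(msg)
        self.code = code


def validate_depth(pdu_json):
    """Reject PDUs whose depth is missing, negative, or absurdly large."""
    depth = pdu_json.get("depth")
    if not isinstance(depth, int):
        raise SynapseError(400, "Depth not an integer")
    if depth < 0:
        raise SynapseError(400, "Depth too small")
    if depth > MAX_DEPTH:
        raise SynapseError(400, "Depth too large")


validate_depth({"depth": 5})
```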
16 | 16 | import copy |
17 | 17 | import itertools |
18 | 18 | import logging |
19 | from typing import Dict, Iterable | |
19 | from typing import ( | |
20 | Any, | |
21 | Awaitable, | |
22 | Callable, | |
23 | Dict, | |
24 | Iterable, | |
25 | List, | |
26 | Optional, | |
27 | Sequence, | |
28 | Tuple, | |
29 | TypeVar, | |
30 | ) | |
20 | 31 | |
21 | 32 | from prometheus_client import Counter |
22 | 33 | |
34 | 45 | from synapse.api.room_versions import ( |
35 | 46 | KNOWN_ROOM_VERSIONS, |
36 | 47 | EventFormatVersions, |
48 | RoomVersion, | |
37 | 49 | RoomVersions, |
38 | 50 | ) |
39 | from synapse.events import builder, room_version_to_event_format | |
51 | from synapse.events import EventBase, builder | |
40 | 52 | from synapse.federation.federation_base import FederationBase, event_from_pdu_json |
41 | 53 | from synapse.logging.context import make_deferred_yieldable |
42 | 54 | from synapse.logging.utils import log_function |
55 | from synapse.types import JsonDict | |
43 | 56 | from synapse.util import unwrapFirstError |
44 | 57 | from synapse.util.caches.expiringcache import ExpiringCache |
45 | 58 | from synapse.util.retryutils import NotRetryingDestination |
50 | 63 | |
51 | 64 | |
52 | 65 | PDU_RETRY_TIME_MS = 1 * 60 * 1000 |
66 | ||
67 | T = TypeVar("T") | |
53 | 68 | |
54 | 69 | |
55 | 70 | class InvalidResponseError(RuntimeError): |
169 | 184 | sent_queries_counter.labels("client_one_time_keys").inc() |
170 | 185 | return self.transport_layer.claim_client_keys(destination, content, timeout) |
171 | 186 | |
172 | @defer.inlineCallbacks | |
173 | @log_function | |
174 | def backfill(self, dest, room_id, limit, extremities): | |
175 | """Requests some more historic PDUs for the given context from the | |
187 | async def backfill( | |
188 | self, dest: str, room_id: str, limit: int, extremities: Iterable[str] | |
189 | ) -> List[EventBase]: | |
190 | """Requests some more historic PDUs for the given room from the | |
176 | 191 | given destination server. |
177 | 192 | |
178 | 193 | Args: |
179 | 194 | dest (str): The remote homeserver to ask. |
180 | 195 | room_id (str): The room_id to backfill. |
181 | limit (int): The maximum number of PDUs to return. | |
182 | extremities (list): List of PDU id and origins of the first pdus | |
183 | we have seen from the context | |
184 | ||
185 | Returns: | |
186 | Deferred: Results in the received PDUs. | |
196 | limit (int): The maximum number of events to return. | |
197 | extremities (list): our current backwards extremities, to backfill from | |
187 | 198 | """ |
188 | 199 | logger.debug("backfill extrem=%s", extremities) |
189 | 200 | |
191 | 202 | if not extremities: |
192 | 203 | return |
193 | 204 | |
194 | transaction_data = yield self.transport_layer.backfill( | |
205 | transaction_data = await self.transport_layer.backfill( | |
195 | 206 | dest, room_id, extremities, limit |
196 | 207 | ) |
197 | 208 | |
198 | 209 | logger.debug("backfill transaction_data=%r", transaction_data) |
199 | 210 | |
200 | room_version = yield self.store.get_room_version_id(room_id) | |
201 | format_ver = room_version_to_event_format(room_version) | |
211 | room_version = await self.store.get_room_version(room_id) | |
202 | 212 | |
203 | 213 | pdus = [ |
204 | event_from_pdu_json(p, format_ver, outlier=False) | |
214 | event_from_pdu_json(p, room_version, outlier=False) | |
205 | 215 | for p in transaction_data["pdus"] |
206 | 216 | ] |
207 | 217 | |
208 | 218 | # FIXME: We should handle signature failures more gracefully. |
209 | pdus[:] = yield make_deferred_yieldable( | |
219 | pdus[:] = await make_deferred_yieldable( | |
210 | 220 | defer.gatherResults( |
211 | self._check_sigs_and_hashes(room_version, pdus), consumeErrors=True | |
221 | self._check_sigs_and_hashes(room_version.identifier, pdus), | |
222 | consumeErrors=True, | |
212 | 223 | ).addErrback(unwrapFirstError) |
213 | 224 | ) |
214 | 225 | |
215 | 226 | return pdus |
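`backfill` above moves from `@defer.inlineCallbacks` with `yield` to a plain coroutine with `await`. The mechanical shape of that conversion, sketched with a stub transport and asyncio standing in for Twisted (names are illustrative):

```python
import asyncio
from typing import Iterable, List


class StubTransport:
    """Stand-in for the real federation transport layer."""

    async def backfill(self, dest, room_id, extremities, limit):
        return {"pdus": [{"event_id": "$a"}, {"event_id": "$b"}]}


async def backfill(transport, dest: str, room_id: str, limit: int,
                   extremities: Iterable[str]) -> List[dict]:
    if not extremities:
        return []
    # 'transaction_data = yield self.transport_layer.backfill(...)'
    # under @defer.inlineCallbacks becomes a plain await:
    transaction_data = await transport.backfill(
        dest, room_id, extremities, limit
    )
    return transaction_data["pdus"]


pdus = asyncio.run(backfill(StubTransport(), "remote", "!room", 10, ["$x"]))
assert len(pdus) == 2
```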
216 | 227 | |
217 | @defer.inlineCallbacks | |
218 | @log_function | |
219 | def get_pdu( | |
220 | self, destinations, event_id, room_version, outlier=False, timeout=None | |
221 | ): | |
228 | async def get_pdu( | |
229 | self, | |
230 | destinations: Iterable[str], | |
231 | event_id: str, | |
232 | room_version: RoomVersion, | |
233 | outlier: bool = False, | |
234 | timeout: Optional[int] = None, | |
235 | ) -> Optional[EventBase]: | |
222 | 236 | """Requests the PDU with given origin and ID from the remote home |
223 | 237 | servers. |
224 | 238 | |
226 | 240 | one succeeds. |
227 | 241 | |
228 | 242 | Args: |
229 | destinations (list): Which homeservers to query | |
230 | event_id (str): event to fetch | |
231 | room_version (str): version of the room | |
232 | outlier (bool): Indicates whether the PDU is an `outlier`, i.e. if | |
243 | destinations: Which homeservers to query | |
244 | event_id: event to fetch | |
245 | room_version: version of the room | |
246 | outlier: Indicates whether the PDU is an `outlier`, i.e. if | |
233 | 247 | it's from an arbitrary point in the context as opposed to part
234 | 248 | of the current block of PDUs. Defaults to `False` |
235 | timeout (int): How long to try (in ms) each destination for before | |
249 | timeout: How long to try (in ms) each destination for before | |
236 | 250 | moving to the next destination. None indicates no timeout. |
237 | 251 | |
238 | 252 | Returns: |
239 | Deferred: Results in the requested PDU, or None if we were unable to find | |
240 | it. | |
253 | The requested PDU, or None if we were unable to find it. | |
241 | 254 | """ |
242 | 255 | |
243 | 256 | # TODO: Rate limit the number of times we try and get the same event. |
247 | 260 | return ev |
248 | 261 | |
249 | 262 | pdu_attempts = self.pdu_destination_tried.setdefault(event_id, {}) |
250 | ||
251 | format_ver = room_version_to_event_format(room_version) | |
252 | 263 | |
253 | 264 | signed_pdu = None |
254 | 265 | for destination in destinations: |
258 | 269 | continue |
259 | 270 | |
260 | 271 | try: |
261 | transaction_data = yield self.transport_layer.get_event( | |
272 | transaction_data = await self.transport_layer.get_event( | |
262 | 273 | destination, event_id, timeout=timeout |
263 | 274 | ) |
264 | 275 | |
270 | 281 | ) |
271 | 282 | |
272 | 283 | pdu_list = [ |
273 | event_from_pdu_json(p, format_ver, outlier=outlier) | |
284 | event_from_pdu_json(p, room_version, outlier=outlier) | |
274 | 285 | for p in transaction_data["pdus"] |
275 | 286 | ] |
276 | 287 | |
278 | 289 | pdu = pdu_list[0] |
279 | 290 | |
280 | 291 | # Check signatures are correct. |
281 | signed_pdu = yield self._check_sigs_and_hash(room_version, pdu) | |
292 | signed_pdu = await self._check_sigs_and_hash( | |
293 | room_version.identifier, pdu | |
294 | ) | |
282 | 295 | |
283 | 296 | break |
284 | 297 | |
308 | 321 | |
309 | 322 | return signed_pdu |
310 | 323 | |
311 | @defer.inlineCallbacks | |
312 | def get_room_state_ids(self, destination: str, room_id: str, event_id: str): | |
324 | async def get_room_state_ids( | |
325 | self, destination: str, room_id: str, event_id: str | |
326 | ) -> Tuple[List[str], List[str]]: | |
313 | 327 | """Calls the /state_ids endpoint to fetch the state at a particular point |
314 | 328 | in the room, and the auth events for the given event |
315 | 329 | |
316 | 330 | Returns: |
317 | Tuple[List[str], List[str]]: a tuple of (state event_ids, auth event_ids) | |
318 | """ | |
319 | result = yield self.transport_layer.get_room_state_ids( | |
331 | a tuple of (state event_ids, auth event_ids) | |
332 | """ | |
333 | result = await self.transport_layer.get_room_state_ids( | |
320 | 334 | destination, room_id, event_id=event_id |
321 | 335 | ) |
322 | 336 | |
330 | 344 | |
331 | 345 | return state_event_ids, auth_event_ids |
332 | 346 | |
333 | @defer.inlineCallbacks | |
334 | @log_function | |
335 | def get_event_auth(self, destination, room_id, event_id): | |
336 | res = yield self.transport_layer.get_event_auth(destination, room_id, event_id) | |
337 | ||
338 | room_version = yield self.store.get_room_version_id(room_id) | |
339 | format_ver = room_version_to_event_format(room_version) | |
347 | async def get_event_auth(self, destination, room_id, event_id): | |
348 | res = await self.transport_layer.get_event_auth(destination, room_id, event_id) | |
349 | ||
350 | room_version = await self.store.get_room_version(room_id) | |
340 | 351 | |
341 | 352 | auth_chain = [ |
342 | event_from_pdu_json(p, format_ver, outlier=True) for p in res["auth_chain"] | |
353 | event_from_pdu_json(p, room_version, outlier=True) | |
354 | for p in res["auth_chain"] | |
343 | 355 | ] |
344 | 356 | |
345 | signed_auth = yield self._check_sigs_and_hash_and_fetch( | |
346 | destination, auth_chain, outlier=True, room_version=room_version | |
357 | signed_auth = await self._check_sigs_and_hash_and_fetch( | |
358 | destination, auth_chain, outlier=True, room_version=room_version.identifier | |
347 | 359 | ) |
348 | 360 | |
349 | 361 | signed_auth.sort(key=lambda e: e.depth) |
350 | 362 | |
351 | 363 | return signed_auth |
352 | 364 | |
353 | @defer.inlineCallbacks | |
354 | def _try_destination_list(self, description, destinations, callback): | |
365 | async def _try_destination_list( | |
366 | self, | |
367 | description: str, | |
368 | destinations: Iterable[str], | |
369 | callback: Callable[[str], Awaitable[T]], | |
370 | ) -> T: | |
355 | 371 | """Try an operation on a series of servers, until it succeeds |
356 | 372 | |
357 | 373 | Args: |
358 | description (unicode): description of the operation we're doing, for logging | |
359 | ||
360 | destinations (Iterable[unicode]): list of server_names to try | |
361 | ||
362 | callback (callable): Function to run for each server. Passed a single | |
363 | argument: the server_name to try. May return a deferred. | |
374 | description: description of the operation we're doing, for logging | |
375 | ||
376 | destinations: list of server_names to try | |
377 | ||
378 | callback: Function to run for each server. Passed a single | |
379 | argument: the server_name to try. | |
364 | 380 | |
365 | 381 | If the callback raises a CodeMessageException with a 300/400 code, |
366 | 382 | attempts to perform the operation stop immediately and the exception is |
371 | 387 | suppressed if the exception is an InvalidResponseError. |
372 | 388 | |
373 | 389 | Returns: |
374 | The [Deferred] result of callback, if it succeeds | |
390 | The result of callback, if it succeeds | |
375 | 391 | |
376 | 392 | Raises: |
377 | 393 | SynapseError if the chosen remote server returns a 300/400 code, or |
382 | 398 | continue |
383 | 399 | |
384 | 400 | try: |
385 | res = yield callback(destination) | |
401 | res = await callback(destination) | |
386 | 402 | return res |
387 | 403 | except InvalidResponseError as e: |
388 | 404 | logger.warning("Failed to %s via %s: %s", description, destination, e) |
401 | 417 | ) |
402 | 418 | except Exception: |
403 | 419 | logger.warning( |
404 | "Failed to %s via %s", description, destination, exc_info=1 | |
420 | "Failed to %s via %s", description, destination, exc_info=True | |
405 | 421 | ) |
406 | 422 | |
407 | 423 | raise SynapseError(502, "Failed to %s via any server" % (description,)) |
408 | 424 | |
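The fallback loop inside `_try_destination_list` can be sketched in isolation. A simplified `asyncio` version under stated assumptions (the error class and callback are stand-ins; the real method also handles `CodeMessageException` 300/400 codes and raises `SynapseError(502, ...)`):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

class InvalidResponseError(RuntimeError):
    """Stand-in for synapse's InvalidResponseError."""

async def try_destination_list(description, destinations, callback):
    # Try each server in turn; an InvalidResponseError is logged and
    # suppressed, and we move on to the next destination.
    for destination in destinations:
        try:
            return await callback(destination)
        except InvalidResponseError as e:
            logger.warning("Failed to %s via %s: %s", description, destination, e)
    raise RuntimeError("Failed to %s via any server" % (description,))

async def send_request(destination):
    if destination == "bad.example":
        raise InvalidResponseError("malformed response")
    return destination

winner = asyncio.run(
    try_destination_list("make_join", ["bad.example", "good.example"], send_request)
)
```

The first destination fails softly, so the helper returns the result from `good.example`.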
409 | def make_membership_event( | |
425 | async def make_membership_event( | |
410 | 426 | self, |
411 | 427 | destinations: Iterable[str], |
412 | 428 | room_id: str, |
414 | 430 | membership: str, |
415 | 431 | content: dict, |
416 | 432 | params: Dict[str, str], |
417 | ): | |
433 | ) -> Tuple[str, EventBase, RoomVersion]: | |
418 | 434 | """ |
419 | 435 | Creates an m.room.member event, with context, without participating in the room. |
420 | 436 | |
435 | 451 | content: Any additional data to put into the content field of the |
436 | 452 | event. |
437 | 453 | params: Query parameters to include in the request. |
438 | Return: | |
439 | Deferred[Tuple[str, FrozenEvent, RoomVersion]]: resolves to a tuple of | |
454 | ||
455 | Returns: | |
440 | 456 | `(origin, event, room_version)` where origin is the remote |
441 | 457 | homeserver which generated the event, and room_version is the |
442 | 458 | version of the room. |
443 | 459 | |
444 | Fails with a `UnsupportedRoomVersionError` if remote responds with | |
445 | a room version we don't understand. | |
446 | ||
447 | Fails with a ``SynapseError`` if the chosen remote server | |
448 | returns a 300/400 code. | |
449 | ||
450 | Fails with a ``RuntimeError`` if no servers were reachable. | |
460 | Raises: | |
461 | UnsupportedRoomVersionError: if remote responds with | |
462 | a room version we don't understand. | |
463 | ||
464 | SynapseError: if the chosen remote server returns a 300/400 code. | |
465 | ||
466 | RuntimeError: if no servers were reachable. | |
451 | 467 | """ |
452 | 468 | valid_memberships = {Membership.JOIN, Membership.LEAVE} |
453 | 469 | if membership not in valid_memberships: |
456 | 472 | % (membership, ",".join(valid_memberships)) |
457 | 473 | ) |
458 | 474 | |
459 | @defer.inlineCallbacks | |
460 | def send_request(destination): | |
461 | ret = yield self.transport_layer.make_membership_event( | |
475 | async def send_request(destination: str) -> Tuple[str, EventBase, RoomVersion]: | |
476 | ret = await self.transport_layer.make_membership_event( | |
462 | 477 | destination, room_id, user_id, membership, params |
463 | 478 | ) |
464 | 479 | |
491 | 506 | event_dict=pdu_dict, |
492 | 507 | ) |
493 | 508 | |
494 | return (destination, ev, room_version) | |
495 | ||
496 | return self._try_destination_list( | |
509 | return destination, ev, room_version | |
510 | ||
511 | return await self._try_destination_list( | |
497 | 512 | "make_" + membership, destinations, send_request |
498 | 513 | ) |
499 | 514 | |
500 | def send_join(self, destinations, pdu, event_format_version): | |
515 | async def send_join( | |
516 | self, destinations: Iterable[str], pdu: EventBase, room_version: RoomVersion | |
517 | ) -> Dict[str, Any]: | |
501 | 518 | """Sends a join event to one of a list of homeservers. |
502 | 519 | |
503 | 520 | Doing so will cause the remote server to add the event to the graph, |
504 | 521 | and send the event out to the rest of the federation. |
505 | 522 | |
506 | 523 | Args: |
507 | destinations (str): Candidate homeservers which are probably | |
524 | destinations: Candidate homeservers which are probably | |
508 | 525 | participating in the room. |
509 | pdu (BaseEvent): event to be sent | |
510 | event_format_version (int): The event format version | |
511 | ||
512 | Return: | |
513 | Deferred: resolves to a dict with members ``origin`` (a string | |
514 | giving the serer the event was sent to, ``state`` (?) and | |
526 | pdu: event to be sent | |
527 | room_version: the version of the room (according to the server that | |
528 | did the make_join) | |
529 | ||
530 | Returns: | |
531 | a dict with members ``origin`` (a string | |
532 | giving the server the event was sent to), ``state`` (?) and | |
515 | 533 | ``auth_chain``. |
516 | 534 | |
517 | Fails with a ``SynapseError`` if the chosen remote server | |
518 | returns a 300/400 code. | |
519 | ||
520 | Fails with a ``RuntimeError`` if no servers were reachable. | |
521 | """ | |
522 | ||
523 | def check_authchain_validity(signed_auth_chain): | |
524 | for e in signed_auth_chain: | |
525 | if e.type == EventTypes.Create: | |
535 | Raises: | |
536 | SynapseError: if the chosen remote server returns a 300/400 code. | |
537 | ||
538 | RuntimeError: if no servers were reachable. | |
539 | """ | |
540 | ||
541 | async def send_request(destination) -> Dict[str, Any]: | |
542 | content = await self._do_send_join(destination, pdu) | |
543 | ||
544 | logger.debug("Got content: %s", content) | |
545 | ||
546 | state = [ | |
547 | event_from_pdu_json(p, room_version, outlier=True) | |
548 | for p in content.get("state", []) | |
549 | ] | |
550 | ||
551 | auth_chain = [ | |
552 | event_from_pdu_json(p, room_version, outlier=True) | |
553 | for p in content.get("auth_chain", []) | |
554 | ] | |
555 | ||
556 | pdus = {p.event_id: p for p in itertools.chain(state, auth_chain)} | |
557 | ||
558 | create_event = None | |
559 | for e in state: | |
560 | if (e.type, e.state_key) == (EventTypes.Create, ""): | |
526 | 561 | create_event = e |
527 | 562 | break |
528 | else: | |
529 | raise InvalidResponseError("no %s in auth chain" % (EventTypes.Create,)) | |
530 | ||
531 | # the room version should be sane. | |
532 | room_version = create_event.content.get("room_version", "1") | |
533 | if room_version not in KNOWN_ROOM_VERSIONS: | |
534 | # This shouldn't be possible, because the remote server should have | |
535 | # rejected the join attempt during make_join. | |
536 | raise InvalidResponseError( | |
537 | "room appears to have unsupported version %s" % (room_version,) | |
538 | ) | |
539 | ||
540 | @defer.inlineCallbacks | |
541 | def send_request(destination): | |
542 | content = yield self._do_send_join(destination, pdu) | |
543 | ||
544 | logger.debug("Got content: %s", content) | |
545 | ||
546 | state = [ | |
547 | event_from_pdu_json(p, event_format_version, outlier=True) | |
548 | for p in content.get("state", []) | |
549 | ] | |
550 | ||
551 | auth_chain = [ | |
552 | event_from_pdu_json(p, event_format_version, outlier=True) | |
553 | for p in content.get("auth_chain", []) | |
554 | ] | |
555 | ||
556 | pdus = {p.event_id: p for p in itertools.chain(state, auth_chain)} | |
557 | ||
558 | room_version = None | |
559 | for e in state: | |
560 | if (e.type, e.state_key) == (EventTypes.Create, ""): | |
561 | room_version = e.content.get( | |
562 | "room_version", RoomVersions.V1.identifier | |
563 | ) | |
564 | break | |
565 | ||
566 | if room_version is None: | |
563 | ||
564 | if create_event is None: | |
567 | 565 | # If the state doesn't have a create event then the room is |
568 | 566 | # invalid, and it would fail auth checks anyway. |
569 | 567 | raise SynapseError(400, "No create event in state") |
570 | 568 | |
571 | valid_pdus = yield self._check_sigs_and_hash_and_fetch( | |
569 | # the room version should be sane. | |
570 | create_room_version = create_event.content.get( | |
571 | "room_version", RoomVersions.V1.identifier | |
572 | ) | |
573 | if create_room_version != room_version.identifier: | |
574 | # either the server that fulfilled the make_join, or the server that is | |
575 | # handling the send_join, is lying. | |
576 | raise InvalidResponseError( | |
577 | "Unexpected room version %s in create event" | |
578 | % (create_room_version,) | |
579 | ) | |
580 | ||
581 | valid_pdus = await self._check_sigs_and_hash_and_fetch( | |
572 | 582 | destination, |
573 | 583 | list(pdus.values()), |
574 | 584 | outlier=True, |
575 | room_version=room_version, | |
585 | room_version=room_version.identifier, | |
576 | 586 | ) |
577 | 587 | |
578 | 588 | valid_pdus_map = {p.event_id: p for p in valid_pdus} |
596 | 606 | for s in signed_state: |
597 | 607 | s.internal_metadata = copy.deepcopy(s.internal_metadata) |
598 | 608 | |
599 | check_authchain_validity(signed_auth) | |
609 | # double-check that the same create event has ended up in the auth chain | |
610 | auth_chain_create_events = [ | |
611 | e.event_id | |
612 | for e in signed_auth | |
613 | if (e.type, e.state_key) == (EventTypes.Create, "") | |
614 | ] | |
615 | if auth_chain_create_events != [create_event.event_id]: | |
616 | raise InvalidResponseError( | |
617 | "Unexpected create event(s) in auth chain" | |
618 | % (auth_chain_create_events,) | |
619 | ) | |
600 | 620 | |
601 | 621 | return { |
602 | 622 | "state": signed_state, |
604 | 624 | "origin": destination, |
605 | 625 | } |
606 | 626 | |
607 | return self._try_destination_list("send_join", destinations, send_request) | |
608 | ||
609 | @defer.inlineCallbacks | |
610 | def _do_send_join(self, destination, pdu): | |
627 | return await self._try_destination_list("send_join", destinations, send_request) | |
628 | ||
629 | async def _do_send_join(self, destination: str, pdu: EventBase): | |
611 | 630 | time_now = self._clock.time_msec() |
612 | 631 | |
613 | 632 | try: |
614 | content = yield self.transport_layer.send_join_v2( | |
633 | content = await self.transport_layer.send_join_v2( | |
615 | 634 | destination=destination, |
616 | 635 | room_id=pdu.room_id, |
617 | 636 | event_id=pdu.event_id, |
633 | 652 | |
634 | 653 | logger.debug("Couldn't send_join with the v2 API, falling back to the v1 API") |
635 | 654 | |
636 | resp = yield self.transport_layer.send_join_v1( | |
655 | resp = await self.transport_layer.send_join_v1( | |
637 | 656 | destination=destination, |
638 | 657 | room_id=pdu.room_id, |
639 | 658 | event_id=pdu.event_id, |
644 | 663 | # content. |
645 | 664 | return resp[1] |
646 | 665 | |
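`_do_send_join` (like `_do_send_leave` further down) tries the v2 federation API first and falls back to v1 when the remote does not recognise it. A sketch of that fallback shape, with a stub transport whose v2 endpoint 404s (stub names are illustrative, and the real code's "unknown endpoint" detection is more involved than a bare status-code check):

```python
import asyncio

class HttpResponseException(Exception):
    def __init__(self, code):
        super().__init__("HTTP %d" % code)
        self.code = code

class StubTransport:
    async def send_join_v2(self, destination, room_id, event_id):
        # This remote doesn't know the v2 endpoint.
        raise HttpResponseException(404)

    async def send_join_v1(self, destination, room_id, event_id):
        # Note the v1 API returns a tuple of (200, content).
        return 200, {"state": [], "auth_chain": [], "origin": destination}

async def do_send_join(transport, destination, room_id, event_id):
    try:
        return await transport.send_join_v2(destination, room_id, event_id)
    except HttpResponseException as e:
        if e.code not in (404, 405):
            raise  # a real error, not just "unknown endpoint"
    # Didn't work, try the v1 API and unpack its (code, content) tuple.
    _, content = await transport.send_join_v1(destination, room_id, event_id)
    return content

content = asyncio.run(do_send_join(StubTransport(), "remote.example", "!r:x", "$e"))
```

Only the response body is returned to the caller, so callers see the same shape whichever API version actually served the request.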
647 | @defer.inlineCallbacks | |
648 | def send_invite(self, destination, room_id, event_id, pdu): | |
649 | room_version = yield self.store.get_room_version_id(room_id) | |
650 | ||
651 | content = yield self._do_send_invite(destination, pdu, room_version) | |
666 | async def send_invite( | |
667 | self, destination: str, room_id: str, event_id: str, pdu: EventBase, | |
668 | ) -> EventBase: | |
669 | room_version = await self.store.get_room_version(room_id) | |
670 | ||
671 | content = await self._do_send_invite(destination, pdu, room_version) | |
652 | 672 | |
653 | 673 | pdu_dict = content["event"] |
654 | 674 | |
655 | 675 | logger.debug("Got response to send_invite: %s", pdu_dict) |
656 | 676 | |
657 | room_version = yield self.store.get_room_version_id(room_id) | |
658 | format_ver = room_version_to_event_format(room_version) | |
659 | ||
660 | pdu = event_from_pdu_json(pdu_dict, format_ver) | |
677 | pdu = event_from_pdu_json(pdu_dict, room_version) | |
661 | 678 | |
662 | 679 | # Check signatures are correct. |
663 | pdu = yield self._check_sigs_and_hash(room_version, pdu) | |
680 | pdu = await self._check_sigs_and_hash(room_version.identifier, pdu) | |
664 | 681 | |
665 | 682 | # FIXME: We should handle signature failures more gracefully. |
666 | 683 | |
667 | 684 | return pdu |
668 | 685 | |
669 | @defer.inlineCallbacks | |
670 | def _do_send_invite(self, destination, pdu, room_version): | |
686 | async def _do_send_invite( | |
687 | self, destination: str, pdu: EventBase, room_version: RoomVersion | |
688 | ) -> JsonDict: | |
671 | 689 | """Actually sends the invite, first trying v2 API and falling back to |
672 | 690 | v1 API if necessary. |
673 | 691 | |
674 | Args: | |
675 | destination (str): Target server | |
676 | pdu (FrozenEvent) | |
677 | room_version (str) | |
678 | ||
679 | 692 | Returns: |
680 | dict: The event as a dict as returned by the remote server | |
693 | The event as a dict as returned by the remote server | |
681 | 694 | """ |
682 | 695 | time_now = self._clock.time_msec() |
683 | 696 | |
684 | 697 | try: |
685 | content = yield self.transport_layer.send_invite_v2( | |
698 | content = await self.transport_layer.send_invite_v2( | |
686 | 699 | destination=destination, |
687 | 700 | room_id=pdu.room_id, |
688 | 701 | event_id=pdu.event_id, |
689 | 702 | content={ |
690 | 703 | "event": pdu.get_pdu_json(time_now), |
691 | "room_version": room_version, | |
704 | "room_version": room_version.identifier, | |
692 | 705 | "invite_room_state": pdu.unsigned.get("invite_room_state", []), |
693 | 706 | }, |
694 | 707 | ) |
706 | 719 | # Otherwise, we assume that the remote server doesn't understand |
707 | 720 | # the v2 invite API. That's ok provided the room uses old-style event |
708 | 721 | # IDs. |
709 | v = KNOWN_ROOM_VERSIONS.get(room_version) | |
710 | if v.event_format != EventFormatVersions.V1: | |
722 | if room_version.event_format != EventFormatVersions.V1: | |
711 | 723 | raise SynapseError( |
712 | 724 | 400, |
713 | 725 | "User's homeserver does not support this room version", |
721 | 733 | # Didn't work, try v1 API. |
722 | 734 | # Note the v1 API returns a tuple of `(200, content)` |
723 | 735 | |
724 | _, content = yield self.transport_layer.send_invite_v1( | |
736 | _, content = await self.transport_layer.send_invite_v1( | |
725 | 737 | destination=destination, |
726 | 738 | room_id=pdu.room_id, |
727 | 739 | event_id=pdu.event_id, |
729 | 741 | ) |
730 | 742 | return content |
731 | 743 | |
732 | def send_leave(self, destinations, pdu): | |
744 | async def send_leave(self, destinations: Iterable[str], pdu: EventBase) -> None: | |
733 | 745 | """Sends a leave event to one of a list of homeservers. |
734 | 746 | |
735 | 747 | Doing so will cause the remote server to add the event to the graph, |
738 | 750 | This is mostly useful to reject received invites. |
739 | 751 | |
740 | 752 | Args: |
741 | destinations (str): Candidate homeservers which are probably | |
753 | destinations: Candidate homeservers which are probably | |
742 | 754 | participating in the room. |
743 | pdu (BaseEvent): event to be sent | |
744 | ||
745 | Return: | |
746 | Deferred: resolves to None. | |
747 | ||
748 | Fails with a ``SynapseError`` if the chosen remote server | |
749 | returns a 300/400 code. | |
750 | ||
751 | Fails with a ``RuntimeError`` if no servers were reachable. | |
752 | """ | |
753 | ||
754 | @defer.inlineCallbacks | |
755 | def send_request(destination): | |
756 | content = yield self._do_send_leave(destination, pdu) | |
757 | ||
755 | pdu: event to be sent | |
756 | ||
757 | Raises: | |
758 | SynapseError if the chosen remote server returns a 300/400 code. | |
759 | ||
760 | RuntimeError if no servers were reachable. | |
761 | """ | |
762 | ||
763 | async def send_request(destination: str) -> None: | |
764 | content = await self._do_send_leave(destination, pdu) | |
758 | 765 | logger.debug("Got content: %s", content) |
759 | return None | |
760 | ||
761 | return self._try_destination_list("send_leave", destinations, send_request) | |
762 | ||
763 | @defer.inlineCallbacks | |
764 | def _do_send_leave(self, destination, pdu): | |
766 | ||
767 | return await self._try_destination_list( | |
768 | "send_leave", destinations, send_request | |
769 | ) | |
770 | ||
771 | async def _do_send_leave(self, destination, pdu): | |
765 | 772 | time_now = self._clock.time_msec() |
766 | 773 | |
767 | 774 | try: |
768 | content = yield self.transport_layer.send_leave_v2( | |
775 | content = await self.transport_layer.send_leave_v2( | |
769 | 776 | destination=destination, |
770 | 777 | room_id=pdu.room_id, |
771 | 778 | event_id=pdu.event_id, |
787 | 794 | |
788 | 795 | logger.debug("Couldn't send_leave with the v2 API, falling back to the v1 API") |
789 | 796 | |
790 | resp = yield self.transport_layer.send_leave_v1( | |
797 | resp = await self.transport_layer.send_leave_v1( | |
791 | 798 | destination=destination, |
792 | 799 | room_id=pdu.room_id, |
793 | 800 | event_id=pdu.event_id, |
819 | 826 | third_party_instance_id=third_party_instance_id, |
820 | 827 | ) |
821 | 828 | |
822 | @defer.inlineCallbacks | |
823 | def get_missing_events( | |
829 | async def get_missing_events( | |
824 | 830 | self, |
825 | destination, | |
826 | room_id, | |
827 | earliest_events_ids, | |
828 | latest_events, | |
829 | limit, | |
830 | min_depth, | |
831 | timeout, | |
832 | ): | |
831 | destination: str, | |
832 | room_id: str, | |
833 | earliest_events_ids: Sequence[str], | |
834 | latest_events: Iterable[EventBase], | |
835 | limit: int, | |
836 | min_depth: int, | |
837 | timeout: int, | |
838 | ) -> List[EventBase]: | |
833 | 839 | """Tries to fetch events we are missing. This is called when we receive |
834 | 840 | an event without having received all of its ancestors. |
835 | 841 | |
836 | 842 | Args: |
837 | destination (str) | |
838 | room_id (str) | |
839 | earliest_events_ids (list): List of event ids. Effectively the | |
843 | destination | |
844 | room_id | |
845 | earliest_events_ids: List of event ids. Effectively the | |
840 | 846 | events we expected to receive, but haven't. `get_missing_events` |
841 | 847 | should only return events that didn't happen before these. |
842 | latest_events (list): List of events we have received that we don't | |
848 | latest_events: List of events we have received that we don't | |
843 | 849 | have all previous events for. |
844 | limit (int): Maximum number of events to return. | |
845 | min_depth (int): Minimum depth of events tor return. | |
846 | timeout (int): Max time to wait in ms | |
850 | limit: Maximum number of events to return. | |
851 | min_depth: Minimum depth of events to return. | |
852 | timeout: Max time to wait in ms | |
847 | 853 | """ |
848 | 854 | try: |
849 | content = yield self.transport_layer.get_missing_events( | |
855 | content = await self.transport_layer.get_missing_events( | |
850 | 856 | destination=destination, |
851 | 857 | room_id=room_id, |
852 | 858 | earliest_events=earliest_events_ids, |
856 | 862 | timeout=timeout, |
857 | 863 | ) |
858 | 864 | |
859 | room_version = yield self.store.get_room_version_id(room_id) | |
860 | format_ver = room_version_to_event_format(room_version) | |
865 | room_version = await self.store.get_room_version(room_id) | |
861 | 866 | |
862 | 867 | events = [ |
863 | event_from_pdu_json(e, format_ver) for e in content.get("events", []) | |
868 | event_from_pdu_json(e, room_version) for e in content.get("events", []) | |
864 | 869 | ] |
865 | 870 | |
866 | signed_events = yield self._check_sigs_and_hash_and_fetch( | |
867 | destination, events, outlier=False, room_version=room_version | |
871 | signed_events = await self._check_sigs_and_hash_and_fetch( | |
872 | destination, events, outlier=False, room_version=room_version.identifier | |
868 | 873 | ) |
869 | 874 | except HttpResponseException as e: |
870 | 875 | if not e.code == 400: |
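A recurring change across the hunks above is `get_room_version_id` (which returned a bare version string) being replaced by `get_room_version`, which returns a `RoomVersion` object; call sites that still need the string use `.identifier`, and the separate `room_version_to_event_format` lookup disappears because the object already carries the event format. A minimal stand-in for that structure (the real `synapse.api.room_versions.RoomVersion` carries more fields):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RoomVersion:
    # Simplified sketch: identifier is the wire string, event_format the
    # event format version used by event_from_pdu_json.
    identifier: str
    event_format: int

KNOWN_ROOM_VERSIONS = {
    "1": RoomVersion("1", 1),  # v1 event format (old-style event IDs)
    "5": RoomVersion("5", 3),  # v3 event format
}

room_version = KNOWN_ROOM_VERSIONS["5"]
```

With the object in hand, `event_from_pdu_json(p, room_version)` needs no separate format lookup, while signature-check helpers that still take a string are passed `room_version.identifier`.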
37 | 37 | UnsupportedRoomVersionError, |
38 | 38 | ) |
39 | 39 | from synapse.api.room_versions import KNOWN_ROOM_VERSIONS |
40 | from synapse.events import room_version_to_event_format | |
41 | 40 | from synapse.federation.federation_base import FederationBase, event_from_pdu_json |
42 | 41 | from synapse.federation.persistence import TransactionActions |
43 | 42 | from synapse.federation.units import Edu, Transaction |
53 | 52 | ReplicationFederationSendEduRestServlet, |
54 | 53 | ReplicationGetQueryRestServlet, |
55 | 54 | ) |
56 | from synapse.types import get_domain_from_id | |
55 | from synapse.types import JsonDict, get_domain_from_id | |
57 | 56 | from synapse.util import glob_to_regex, unwrapFirstError |
58 | 57 | from synapse.util.async_helpers import Linearizer, concurrently_execute |
59 | 58 | from synapse.util.caches.response_cache import ResponseCache |
80 | 79 | self.auth = hs.get_auth() |
81 | 80 | self.handler = hs.get_handlers().federation_handler |
82 | 81 | self.state = hs.get_state_handler() |
82 | ||
83 | self.device_handler = hs.get_device_handler() | |
83 | 84 | |
84 | 85 | self._server_linearizer = Linearizer("fed_server") |
85 | 86 | self._transaction_linearizer = Linearizer("fed_txn_handler") |
233 | 234 | continue |
234 | 235 | |
235 | 236 | try: |
236 | room_version = await self.store.get_room_version_id(room_id) | |
237 | room_version = await self.store.get_room_version(room_id) | |
237 | 238 | except NotFoundError: |
238 | 239 | logger.info("Ignoring PDU for unknown room_id: %s", room_id) |
239 | 240 | continue |
240 | ||
241 | try: | |
242 | format_ver = room_version_to_event_format(room_version) | |
243 | except UnsupportedRoomVersionError: | |
241 | except UnsupportedRoomVersionError as e: | |
244 | 242 | # this can happen if support for a given room version is withdrawn, |
245 | 243 | # so that we still get events for said room. |
246 | logger.info( | |
247 | "Ignoring PDU for room %s with unknown version %s", | |
248 | room_id, | |
249 | room_version, | |
250 | ) | |
244 | logger.info("Ignoring PDU: %s", e) | |
251 | 245 | continue |
252 | 246 | |
253 | event = event_from_pdu_json(p, format_ver) | |
247 | event = event_from_pdu_json(p, room_version) | |
254 | 248 | pdus_by_room.setdefault(room_id, []).append(event) |
255 | 249 | |
256 | 250 | pdu_results = {} |
301 | 295 | async def _process_edu(edu_dict): |
302 | 296 | received_edus_counter.inc() |
303 | 297 | |
304 | edu = Edu(**edu_dict) | |
298 | edu = Edu( | |
299 | origin=origin, | |
300 | destination=self.server_name, | |
301 | edu_type=edu_dict["edu_type"], | |
302 | content=edu_dict["content"], | |
303 | ) | |
305 | 304 | await self.registry.on_edu(edu.edu_type, origin, edu.content) |
306 | 305 | |
307 | 306 | await concurrently_execute( |
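The `Edu(**edu_dict)` change above stops trusting remote-supplied fields: only `edu_type` and `content` are taken from the wire, while `origin` comes from the authenticated request and `destination` is the local server. Sketched with a plain class (the real `Edu` is an attrs class in `synapse.federation.units`):

```python
class Edu:
    # Plain-class stand-in for synapse.federation.units.Edu.
    def __init__(self, origin, destination, edu_type, content):
        self.origin = origin
        self.destination = destination
        self.edu_type = edu_type
        self.content = content

def edu_from_wire(edu_dict, origin, server_name):
    # Take only edu_type/content from the remote payload; any "origin" or
    # "destination" keys the sender included are simply ignored.
    return Edu(
        origin=origin,
        destination=server_name,
        edu_type=edu_dict["edu_type"],
        content=edu_dict["content"],
    )

wire = {
    "edu_type": "m.typing",
    "content": {"room_id": "!r:x", "typing": True},
    "origin": "spoofed.example",  # attacker-controlled, dropped
}
edu = edu_from_wire(wire, origin="remote.example", server_name="my.server")
```

The spoofed `origin` in the payload never reaches the constructed object.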
395 | 394 | time_now = self._clock.time_msec() |
396 | 395 | return {"event": pdu.get_pdu_json(time_now), "room_version": room_version} |
397 | 396 | |
398 | async def on_invite_request(self, origin, content, room_version): | |
399 | if room_version not in KNOWN_ROOM_VERSIONS: | |
397 | async def on_invite_request( | |
398 | self, origin: str, content: JsonDict, room_version_id: str | |
399 | ): | |
400 | room_version = KNOWN_ROOM_VERSIONS.get(room_version_id) | |
401 | if not room_version: | |
400 | 402 | raise SynapseError( |
401 | 403 | 400, |
402 | 404 | "Homeserver does not support this room version", |
403 | 405 | Codes.UNSUPPORTED_ROOM_VERSION, |
404 | 406 | ) |
405 | 407 | |
406 | format_ver = room_version_to_event_format(room_version) | |
407 | ||
408 | pdu = event_from_pdu_json(content, format_ver) | |
408 | pdu = event_from_pdu_json(content, room_version) | |
409 | 409 | origin_host, _ = parse_server_name(origin) |
410 | 410 | await self.check_server_matches_acl(origin_host, pdu.room_id) |
411 | pdu = await self._check_sigs_and_hash(room_version, pdu) | |
411 | pdu = await self._check_sigs_and_hash(room_version.identifier, pdu) | |
412 | 412 | ret_pdu = await self.handler.on_invite_request(origin, pdu, room_version) |
413 | 413 | time_now = self._clock.time_msec() |
414 | 414 | return {"event": ret_pdu.get_pdu_json(time_now)} |
416 | 416 | async def on_send_join_request(self, origin, content, room_id): |
417 | 417 | logger.debug("on_send_join_request: content: %s", content) |
418 | 418 | |
419 | room_version = await self.store.get_room_version_id(room_id) | |
420 | format_ver = room_version_to_event_format(room_version) | |
421 | pdu = event_from_pdu_json(content, format_ver) | |
419 | room_version = await self.store.get_room_version(room_id) | |
420 | pdu = event_from_pdu_json(content, room_version) | |
422 | 421 | |
423 | 422 | origin_host, _ = parse_server_name(origin) |
424 | 423 | await self.check_server_matches_acl(origin_host, pdu.room_id) |
425 | 424 | |
426 | 425 | logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures) |
427 | 426 | |
428 | pdu = await self._check_sigs_and_hash(room_version, pdu) | |
427 | pdu = await self._check_sigs_and_hash(room_version.identifier, pdu) | |
429 | 428 | |
430 | 429 | res_pdus = await self.handler.on_send_join_request(origin, pdu) |
431 | 430 | time_now = self._clock.time_msec() |
447 | 446 | async def on_send_leave_request(self, origin, content, room_id): |
448 | 447 | logger.debug("on_send_leave_request: content: %s", content) |
449 | 448 | |
450 | room_version = await self.store.get_room_version_id(room_id) | |
451 | format_ver = room_version_to_event_format(room_version) | |
452 | pdu = event_from_pdu_json(content, format_ver) | |
449 | room_version = await self.store.get_room_version(room_id) | |
450 | pdu = event_from_pdu_json(content, room_version) | |
453 | 451 | |
454 | 452 | origin_host, _ = parse_server_name(origin) |
455 | 453 | await self.check_server_matches_acl(origin_host, pdu.room_id) |
456 | 454 | |
457 | 455 | logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures) |
458 | 456 | |
459 | pdu = await self._check_sigs_and_hash(room_version, pdu) | |
457 | pdu = await self._check_sigs_and_hash(room_version.identifier, pdu) | |
460 | 458 | |
461 | 459 | await self.handler.on_send_leave_request(origin, pdu) |
462 | 460 | return {} |
494 | 492 | origin_host, _ = parse_server_name(origin) |
495 | 493 | await self.check_server_matches_acl(origin_host, room_id) |
496 | 494 | |
497 | room_version = await self.store.get_room_version_id(room_id) | |
498 | format_ver = room_version_to_event_format(room_version) | |
495 | room_version = await self.store.get_room_version(room_id) | |
499 | 496 | |
500 | 497 | auth_chain = [ |
501 | event_from_pdu_json(e, format_ver) for e in content["auth_chain"] | |
498 | event_from_pdu_json(e, room_version) for e in content["auth_chain"] | |
502 | 499 | ] |
503 | 500 | |
504 | 501 | signed_auth = await self._check_sigs_and_hash_and_fetch( |
505 | origin, auth_chain, outlier=True, room_version=room_version | |
502 | origin, auth_chain, outlier=True, room_version=room_version.identifier | |
506 | 503 | ) |
507 | 504 | |
508 | 505 | ret = await self.handler.on_query_auth( |
527 | 524 | def on_query_client_keys(self, origin, content): |
528 | 525 | return self.on_query_request("client_keys", content) |
529 | 526 | |
530 | def on_query_user_devices(self, origin, user_id): | |
531 | return self.on_query_request("user_devices", user_id) | |
527 | async def on_query_user_devices(self, origin: str, user_id: str): | |
528 | keys = await self.device_handler.on_federation_query_user_devices(user_id) | |
529 | return 200, keys | |
532 | 530 | |
533 | 531 | @trace |
534 | 532 | async def on_claim_client_keys(self, origin, content): |
569 | 567 | origin_host, _ = parse_server_name(origin) |
570 | 568 | await self.check_server_matches_acl(origin_host, room_id) |
571 | 569 | |
572 | logger.info( | |
570 | logger.debug( | |
573 | 571 | "on_get_missing_events: earliest_events: %r, latest_events: %r," |
574 | 572 | " limit: %d", |
575 | 573 | earliest_events, |
582 | 580 | ) |
583 | 581 | |
584 | 582 | if len(missing_events) < 5: |
585 | logger.info( | |
583 | logger.debug( | |
586 | 584 | "Returning %d events: %r", len(missing_events), missing_events |
587 | 585 | ) |
588 | 586 | else: |
589 | logger.info("Returning %d events", len(missing_events)) | |
587 | logger.debug("Returning %d events", len(missing_events)) | |
590 | 588 | |
591 | 589 | time_now = self._clock.time_msec() |
592 | 590 |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | 15 | import logging |
16 | from typing import Dict, Hashable, Iterable, List, Optional, Set | |
16 | 17 | |
17 | 18 | from six import itervalues |
18 | 19 | |
22 | 23 | |
23 | 24 | import synapse |
24 | 25 | import synapse.metrics |
26 | from synapse.events import EventBase | |
25 | 27 | from synapse.federation.sender.per_destination_queue import PerDestinationQueue |
26 | 28 | from synapse.federation.sender.transaction_manager import TransactionManager |
27 | 29 | from synapse.federation.units import Edu |
38 | 40 | events_processed_counter, |
39 | 41 | ) |
40 | 42 | from synapse.metrics.background_process_metrics import run_as_background_process |
43 | from synapse.storage.presence import UserPresenceState | |
44 | from synapse.types import ReadReceipt | |
41 | 45 | from synapse.util.metrics import Measure, measure_func |
42 | 46 | |
43 | 47 | logger = logging.getLogger(__name__) |
67 | 71 | self._transaction_manager = TransactionManager(hs) |
68 | 72 | |
69 | 73 | # map from destination to PerDestinationQueue |
70 | self._per_destination_queues = {} # type: dict[str, PerDestinationQueue] | |
74 | self._per_destination_queues = {} # type: Dict[str, PerDestinationQueue] | |
71 | 75 | |
72 | 76 | LaterGauge( |
73 | 77 | "synapse_federation_transaction_queue_pending_destinations", |
83 | 87 | # Map of user_id -> UserPresenceState for all the pending presence |
84 | 88 | # to be sent out by user_id. Entries here get processed and put in |
85 | 89 | # pending_presence_by_dest |
86 | self.pending_presence = {} | |
90 | self.pending_presence = {} # type: Dict[str, UserPresenceState] | |
87 | 91 | |
88 | 92 | LaterGauge( |
89 | 93 | "synapse_federation_transaction_queue_pending_pdus", |
115 | 119 | # and that there is a pending call to _flush_rrs_for_room in the system. |
116 | 120 | self._queues_awaiting_rr_flush_by_room = ( |
117 | 121 | {} |
118 | ) # type: dict[str, set[PerDestinationQueue]] | |
122 | ) # type: Dict[str, Set[PerDestinationQueue]] | |
119 | 123 | |
120 | 124 | self._rr_txn_interval_per_room_ms = ( |
121 | 1000.0 / hs.get_config().federation_rr_transactions_per_room_per_second | |
122 | ) | |
123 | ||
124 | def _get_per_destination_queue(self, destination): | |
125 | 1000.0 / hs.config.federation_rr_transactions_per_room_per_second | |
126 | ) | |
127 | ||
128 | def _get_per_destination_queue(self, destination: str) -> PerDestinationQueue: | |
125 | 129 | """Get or create a PerDestinationQueue for the given destination |
126 | 130 | |
127 | 131 | Args: |
128 | destination (str): server_name of remote server | |
129 | ||
130 | Returns: | |
131 | PerDestinationQueue | |
132 | destination: server_name of remote server | |
132 | 133 | """ |
133 | 134 | queue = self._per_destination_queues.get(destination) |
134 | 135 | if not queue: |
136 | 137 | self._per_destination_queues[destination] = queue |
137 | 138 | return queue |
138 | 139 | |
139 | def notify_new_events(self, current_id): | |
140 | def notify_new_events(self, current_id: int) -> None: | |
140 | 141 | """This gets called when we have some new events we might want to |
141 | 142 | send out to other servers. |
142 | 143 | """ |
150 | 151 | "process_event_queue_for_federation", self._process_event_queue_loop |
151 | 152 | ) |
152 | 153 | |
153 | @defer.inlineCallbacks | |
154 | def _process_event_queue_loop(self): | |
154 | async def _process_event_queue_loop(self) -> None: | |
155 | 155 | try: |
156 | 156 | self._is_processing = True |
157 | 157 | while True: |
158 | last_token = yield self.store.get_federation_out_pos("events") | |
159 | next_token, events = yield self.store.get_all_new_events_stream( | |
158 | last_token = await self.store.get_federation_out_pos("events") | |
159 | next_token, events = await self.store.get_all_new_events_stream( | |
160 | 160 | last_token, self._last_poked_id, limit=100 |
161 | 161 | ) |
162 | 162 | |
165 | 165 | if not events and next_token >= self._last_poked_id: |
166 | 166 | break |
167 | 167 | |
168 | @defer.inlineCallbacks | |
169 | def handle_event(event): | |
168 | async def handle_event(event: EventBase) -> None: | |
170 | 169 | # Only send events for this server. |
171 | 170 | send_on_behalf_of = event.internal_metadata.get_send_on_behalf_of() |
172 | 171 | is_mine = self.is_mine_id(event.sender) |
183 | 182 | # Otherwise if the last member on a server in a room is |
184 | 183 | # banned then it won't receive the event because it won't |
185 | 184 | # be in the room after the ban. |
186 | destinations = yield self.state.get_hosts_in_room_at_events( | |
185 | destinations = await self.state.get_hosts_in_room_at_events( | |
187 | 186 | event.room_id, event_ids=event.prev_event_ids() |
188 | 187 | ) |
189 | 188 | except Exception: |
205 | 204 | |
206 | 205 | self._send_pdu(event, destinations) |
207 | 206 | |
208 | @defer.inlineCallbacks | |
209 | def handle_room_events(events): | |
207 | async def handle_room_events(events: Iterable[EventBase]) -> None: | |
210 | 208 | with Measure(self.clock, "handle_room_events"): |
211 | 209 | for event in events: |
212 | yield handle_event(event) | |
213 | ||
214 | events_by_room = {} | |
210 | await handle_event(event) | |
211 | ||
212 | events_by_room = {} # type: Dict[str, List[EventBase]] | |
215 | 213 | for event in events: |
216 | 214 | events_by_room.setdefault(event.room_id, []).append(event) |
217 | 215 | |
218 | yield make_deferred_yieldable( | |
216 | await make_deferred_yieldable( | |
219 | 217 | defer.gatherResults( |
220 | 218 | [ |
221 | 219 | run_in_background(handle_room_events, evs) |
225 | 223 | ) |
226 | 224 | ) |
227 | 225 | |
228 | yield self.store.update_federation_out_pos("events", next_token) | |
226 | await self.store.update_federation_out_pos("events", next_token) | |
229 | 227 | |
230 | 228 | if events: |
231 | 229 | now = self.clock.time_msec() |
232 | ts = yield self.store.get_received_ts(events[-1].event_id) | |
230 | ts = await self.store.get_received_ts(events[-1].event_id) | |
233 | 231 | |
234 | 232 | synapse.metrics.event_processing_lag.labels( |
235 | 233 | "federation_sender" |
253 | 251 | finally: |
254 | 252 | self._is_processing = False |
255 | 253 | |
256 | def _send_pdu(self, pdu, destinations): | |
254 | def _send_pdu(self, pdu: EventBase, destinations: Iterable[str]) -> None: | |
257 | 255 | # We loop through all destinations to see whether we already have |
258 | 256 | # a transaction in progress. If we do, stick it in the pending_pdus |
259 | 257 | # table and we'll get back to it later. |
275 | 273 | self._get_per_destination_queue(destination).send_pdu(pdu, order) |
276 | 274 | |
277 | 275 | @defer.inlineCallbacks |
278 | def send_read_receipt(self, receipt): | |
276 | def send_read_receipt(self, receipt: ReadReceipt): | |
279 | 277 | """Send a RR to any other servers in the room |
280 | 278 | |
281 | 279 | Args: |
282 | receipt (synapse.types.ReadReceipt): receipt to be sent | |
280 | receipt: receipt to be sent | |
283 | 281 | """ |
284 | 282 | |
285 | 283 | # Some background on the rate-limiting going on here. |
342 | 340 | else: |
343 | 341 | queue.flush_read_receipts_for_room(room_id) |
344 | 342 | |
345 | def _schedule_rr_flush_for_room(self, room_id, n_domains): | |
343 | def _schedule_rr_flush_for_room(self, room_id: str, n_domains: int) -> None: | |
346 | 344 | # that is going to cause approximately len(domains) transactions, so now back |
347 | 345 | # off for that multiplied by RR_TXN_INTERVAL_PER_ROOM |
348 | 346 | backoff_ms = self._rr_txn_interval_per_room_ms * n_domains |
351 | 349 | self.clock.call_later(backoff_ms, self._flush_rrs_for_room, room_id) |
352 | 350 | self._queues_awaiting_rr_flush_by_room[room_id] = set() |
353 | 351 | |
354 | def _flush_rrs_for_room(self, room_id): | |
352 | def _flush_rrs_for_room(self, room_id: str) -> None: | |
355 | 353 | queues = self._queues_awaiting_rr_flush_by_room.pop(room_id) |
356 | 354 | logger.debug("Flushing RRs in %s to %s", room_id, queues) |
357 | 355 | |
367 | 365 | |
368 | 366 | @preserve_fn # the caller should not yield on this |
369 | 367 | @defer.inlineCallbacks |
370 | def send_presence(self, states): | |
368 | def send_presence(self, states: List[UserPresenceState]): | |
371 | 369 | """Send the new presence states to the appropriate destinations. |
372 | 370 | |
373 | 371 | This actually queues up the presence states ready for sending and |
374 | 372 | triggers a background task to process them and send out the transactions. |
375 | ||
376 | Args: | |
377 | states (list(UserPresenceState)) | |
378 | 373 | """ |
379 | 374 | if not self.hs.config.use_presence: |
380 | 375 | # No-op if presence is disabled. |
411 | 406 | finally: |
412 | 407 | self._processing_pending_presence = False |
413 | 408 | |
414 | def send_presence_to_destinations(self, states, destinations): | |
409 | def send_presence_to_destinations( | |
410 | self, states: List[UserPresenceState], destinations: List[str] | |
411 | ) -> None: | |
415 | 412 | """Send the given presence states to the given destinations. |
416 | ||
417 | Args: | |
418 | states (list[UserPresenceState]) | |
419 | 413 | destinations (list[str]) |
420 | 414 | """ |
421 | 415 | |
430 | 424 | |
431 | 425 | @measure_func("txnqueue._process_presence") |
432 | 426 | @defer.inlineCallbacks |
433 | def _process_presence_inner(self, states): | |
427 | def _process_presence_inner(self, states: List[UserPresenceState]): | |
434 | 428 | """Given a list of states populate self.pending_presence_by_dest and |
435 | 429 | poke to send a new transaction to each destination |
436 | ||
437 | Args: | |
438 | states (list(UserPresenceState)) | |
439 | 430 | """ |
440 | 431 | hosts_and_states = yield get_interested_remotes(self.store, states, self.state) |
441 | 432 | |
445 | 436 | continue |
446 | 437 | self._get_per_destination_queue(destination).send_presence(states) |
447 | 438 | |
448 | def build_and_send_edu(self, destination, edu_type, content, key=None): | |
439 | def build_and_send_edu( | |
440 | self, | |
441 | destination: str, | |
442 | edu_type: str, | |
443 | content: dict, | |
444 | key: Optional[Hashable] = None, | |
445 | ): | |
449 | 446 | """Construct an Edu object, and queue it for sending |
450 | 447 | |
451 | 448 | Args: |
452 | destination (str): name of server to send to | |
453 | edu_type (str): type of EDU to send | |
454 | content (dict): content of EDU | |
455 | key (Any|None): clobbering key for this edu | |
449 | destination: name of server to send to | |
450 | edu_type: type of EDU to send | |
451 | content: content of EDU | |
452 | key: clobbering key for this edu | |
456 | 453 | """ |
457 | 454 | if destination == self.server_name: |
458 | 455 | logger.info("Not sending EDU to ourselves") |
467 | 464 | |
468 | 465 | self.send_edu(edu, key) |
469 | 466 | |
470 | def send_edu(self, edu, key): | |
467 | def send_edu(self, edu: Edu, key: Optional[Hashable]): | |
471 | 468 | """Queue an EDU for sending |
472 | 469 | |
473 | 470 | Args: |
474 | edu (Edu): edu to send | |
475 | key (Any|None): clobbering key for this edu | |
471 | edu: edu to send | |
472 | key: clobbering key for this edu | |
476 | 473 | """ |
477 | 474 | queue = self._get_per_destination_queue(edu.destination) |
478 | 475 | if key: |
480 | 477 | else: |
481 | 478 | queue.send_edu(edu) |
482 | 479 | |
483 | def send_device_messages(self, destination): | |
480 | def send_device_messages(self, destination: str): | |
484 | 481 | if destination == self.server_name: |
485 | 482 | logger.warning("Not sending device update to ourselves") |
486 | 483 | return |
500 | 497 | |
501 | 498 | self._get_per_destination_queue(destination).attempt_new_transaction() |
502 | 499 | |
503 | def get_current_token(self): | |
500 | def get_current_token(self) -> int: | |
504 | 501 | return 0 |
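The dominant change in this hunk is converting Twisted `@defer.inlineCallbacks` generators to native coroutines: each `yield` on a Deferred becomes an `await`. A stand-alone sketch of the same transformation, using `asyncio` here rather than Twisted's reactor, with a hypothetical stand-in for the store call:

```python
import asyncio

# Before (Twisted style):
#     @defer.inlineCallbacks
#     def _process_event_queue_loop(self):
#         last_token = yield self.store.get_federation_out_pos("events")
#
# After (native coroutine, as in this diff):
async def get_federation_out_pos(stream: str) -> int:
    # hypothetical stand-in for the asynchronous database call
    await asyncio.sleep(0)
    return 42

async def process_event_queue_loop() -> int:
    last_token = await get_federation_out_pos("events")
    return last_token

print(asyncio.run(process_event_queue_loop()))  # 42
```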
14 | 14 | # limitations under the License. |
15 | 15 | import datetime |
16 | 16 | import logging |
17 | from typing import Dict, Hashable, Iterable, List, Tuple | |
17 | 18 | |
18 | 19 | from prometheus_client import Counter |
19 | 20 | |
20 | from twisted.internet import defer | |
21 | ||
21 | import synapse.server | |
22 | 22 | from synapse.api.errors import ( |
23 | 23 | FederationDeniedError, |
24 | 24 | HttpResponseException, |
30 | 30 | from synapse.metrics import sent_transactions_counter |
31 | 31 | from synapse.metrics.background_process_metrics import run_as_background_process |
32 | 32 | from synapse.storage.presence import UserPresenceState |
33 | from synapse.types import StateMap | |
33 | from synapse.types import ReadReceipt | |
34 | 34 | from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter |
35 | 35 | |
36 | 36 | # This is defined in the Matrix spec and enforced by the receiver. |
55 | 55 | Manages the per-destination transmission queues. |
56 | 56 | |
57 | 57 | Args: |
58 | hs (synapse.HomeServer): | |
59 | transaction_sender (TransactionManager): | |
60 | destination (str): the server_name of the destination that we are managing | |
58 | hs | |
59 | transaction_sender | |
60 | destination: the server_name of the destination that we are managing | |
61 | 61 | transmission for. |
62 | 62 | """ |
63 | 63 | |
64 | def __init__(self, hs, transaction_manager, destination): | |
64 | def __init__( | |
65 | self, | |
66 | hs: "synapse.server.HomeServer", | |
67 | transaction_manager: "synapse.federation.sender.TransactionManager", | |
68 | destination: str, | |
69 | ): | |
65 | 70 | self._server_name = hs.hostname |
66 | 71 | self._clock = hs.get_clock() |
67 | 72 | self._store = hs.get_datastore() |
71 | 76 | self.transmission_loop_running = False |
72 | 77 | |
73 | 78 | # a list of tuples of (pending pdu, order) |
74 | self._pending_pdus = [] # type: list[tuple[EventBase, int]] | |
75 | self._pending_edus = [] # type: list[Edu] | |
79 | self._pending_pdus = [] # type: List[Tuple[EventBase, int]] | |
80 | self._pending_edus = [] # type: List[Edu] | |
76 | 81 | |
77 | 82 | # Pending EDUs by their "key". Keyed EDUs are EDUs that get clobbered |
78 | 83 | # based on their key (e.g. typing events by room_id) |
79 | 84 | # Map of (edu_type, key) -> Edu |
80 | self._pending_edus_keyed = {} # type: StateMap[Edu] | |
85 | self._pending_edus_keyed = {} # type: Dict[Tuple[str, Hashable], Edu] | |
81 | 86 | |
82 | 87 | # Map of user_id -> UserPresenceState of pending presence to be sent to this |
83 | 88 | # destination |
84 | self._pending_presence = {} # type: dict[str, UserPresenceState] | |
89 | self._pending_presence = {} # type: Dict[str, UserPresenceState] | |
85 | 90 | |
86 | 91 | # room_id -> receipt_type -> user_id -> receipt_dict |
87 | self._pending_rrs = {} | |
92 | self._pending_rrs = {} # type: Dict[str, Dict[str, Dict[str, dict]]] | |
88 | 93 | self._rrs_pending_flush = False |
89 | 94 | |
90 | 95 | # stream_id of last successfully sent to-device message. |
94 | 99 | # stream_id of last successfully sent device list update. |
95 | 100 | self._last_device_list_stream_id = 0 |
96 | 101 | |
97 | def __str__(self): | |
102 | def __str__(self) -> str: | |
98 | 103 | return "PerDestinationQueue[%s]" % self._destination |
99 | 104 | |
100 | def pending_pdu_count(self): | |
105 | def pending_pdu_count(self) -> int: | |
101 | 106 | return len(self._pending_pdus) |
102 | 107 | |
103 | def pending_edu_count(self): | |
108 | def pending_edu_count(self) -> int: | |
104 | 109 | return ( |
105 | 110 | len(self._pending_edus) |
106 | 111 | + len(self._pending_presence) |
107 | 112 | + len(self._pending_edus_keyed) |
108 | 113 | ) |
109 | 114 | |
110 | def send_pdu(self, pdu, order): | |
115 | def send_pdu(self, pdu: EventBase, order: int) -> None: | |
111 | 116 | """Add a PDU to the queue, and start the transmission loop if necessary |
112 | 117 | |
113 | 118 | Args: |
114 | pdu (EventBase): pdu to send | |
115 | order (int): | |
119 | pdu: pdu to send | |
120 | order | |
116 | 121 | """ |
117 | 122 | self._pending_pdus.append((pdu, order)) |
118 | 123 | self.attempt_new_transaction() |
119 | 124 | |
120 | def send_presence(self, states): | |
125 | def send_presence(self, states: Iterable[UserPresenceState]) -> None: | |
121 | 126 | """Add presence updates to the queue. Start the transmission loop if necessary. |
122 | 127 | |
123 | 128 | Args: |
124 | states (iterable[UserPresenceState]): presence to send | |
129 | states: presence to send | |
125 | 130 | """ |
126 | 131 | self._pending_presence.update({state.user_id: state for state in states}) |
127 | 132 | self.attempt_new_transaction() |
128 | 133 | |
129 | def queue_read_receipt(self, receipt): | |
134 | def queue_read_receipt(self, receipt: ReadReceipt) -> None: | |
130 | 135 | """Add a RR to the list to be sent. Doesn't start the transmission loop yet |
131 | 136 | (see flush_read_receipts_for_room) |
132 | 137 | |
133 | 138 | Args: |
134 | receipt (synapse.api.receipt_info.ReceiptInfo): receipt to be queued | |
139 | receipt: receipt to be queued | |
135 | 140 | """ |
136 | 141 | self._pending_rrs.setdefault(receipt.room_id, {}).setdefault( |
137 | 142 | receipt.receipt_type, {} |
138 | 143 | )[receipt.user_id] = {"event_ids": receipt.event_ids, "data": receipt.data} |
139 | 144 | |
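`queue_read_receipt` above builds a three-level map with chained `setdefault` calls. A self-contained sketch of that idiom, following the structure named in the comment `room_id -> receipt_type -> user_id -> receipt_dict` (the sample IDs are illustrative):

```python
pending_rrs = {}  # room_id -> receipt_type -> user_id -> receipt_dict

def queue_read_receipt(room_id, receipt_type, user_id, event_ids, data):
    # create intermediate dicts on first use, then store/overwrite per user
    pending_rrs.setdefault(room_id, {}).setdefault(receipt_type, {})[user_id] = {
        "event_ids": event_ids,
        "data": data,
    }

queue_read_receipt("!room:hs", "m.read", "@alice:hs", ["$ev1"], {"ts": 1})
queue_read_receipt("!room:hs", "m.read", "@alice:hs", ["$ev2"], {"ts": 2})
# only the latest receipt per user survives
```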
140 | def flush_read_receipts_for_room(self, room_id): | |
145 | def flush_read_receipts_for_room(self, room_id: str) -> None: | |
141 | 146 | # if we don't have any read-receipts for this room, it may be that we've already |
142 | 147 | # sent them out, so we don't need to flush. |
143 | 148 | if room_id not in self._pending_rrs: |
145 | 150 | self._rrs_pending_flush = True |
146 | 151 | self.attempt_new_transaction() |
147 | 152 | |
148 | def send_keyed_edu(self, edu, key): | |
153 | def send_keyed_edu(self, edu: Edu, key: Hashable) -> None: | |
149 | 154 | self._pending_edus_keyed[(edu.edu_type, key)] = edu |
150 | 155 | self.attempt_new_transaction() |
151 | 156 | |
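`send_keyed_edu` relies on plain dict assignment for clobbering: two EDUs with the same `(edu_type, key)` collapse to the most recent one, which is what the tightened `Dict[Tuple[str, Hashable], Edu]` annotation earlier in this diff describes. A sketch with plain dicts standing in for `Edu` objects:

```python
from typing import Dict, Hashable, Tuple

pending_edus_keyed = {}  # type: Dict[Tuple[str, Hashable], dict]

def send_keyed_edu(edu_type: str, key: Hashable, content: dict) -> None:
    # a later EDU with the same (type, key) replaces the earlier one,
    # e.g. only the newest typing notification per room is kept
    pending_edus_keyed[(edu_type, key)] = content

send_keyed_edu("m.typing", "!room:hs", {"typing": True})
send_keyed_edu("m.typing", "!room:hs", {"typing": False})
```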
152 | def send_edu(self, edu): | |
157 | def send_edu(self, edu) -> None: | |
153 | 158 | self._pending_edus.append(edu) |
154 | 159 | self.attempt_new_transaction() |
155 | 160 | |
156 | def attempt_new_transaction(self): | |
161 | def attempt_new_transaction(self) -> None: | |
157 | 162 | """Try to start a new transaction to this destination |
158 | 163 | |
159 | 164 | If there is already a transaction in progress to this destination, |
176 | 181 | self._transaction_transmission_loop, |
177 | 182 | ) |
178 | 183 | |
179 | @defer.inlineCallbacks | |
180 | def _transaction_transmission_loop(self): | |
181 | pending_pdus = [] | |
184 | async def _transaction_transmission_loop(self) -> None: | |
185 | pending_pdus = [] # type: List[Tuple[EventBase, int]] | |
182 | 186 | try: |
183 | 187 | self.transmission_loop_running = True |
184 | 188 | |
185 | 189 | # This will throw if we wouldn't retry. We do this here so we fail |
186 | 190 | # quickly, but we will later check this again in the http client, |
187 | 191 | # hence why we throw the result away. |
188 | yield get_retry_limiter(self._destination, self._clock, self._store) | |
192 | await get_retry_limiter(self._destination, self._clock, self._store) | |
189 | 193 | |
190 | 194 | pending_pdus = [] |
191 | 195 | while True: |
192 | 196 | # We have to keep 2 free slots for presence and rr_edus |
193 | 197 | limit = MAX_EDUS_PER_TRANSACTION - 2 |
194 | 198 | |
195 | device_update_edus, dev_list_id = yield self._get_device_update_edus( | |
199 | device_update_edus, dev_list_id = await self._get_device_update_edus( | |
196 | 200 | limit |
197 | 201 | ) |
198 | 202 | |
201 | 205 | ( |
202 | 206 | to_device_edus, |
203 | 207 | device_stream_id, |
204 | ) = yield self._get_to_device_message_edus(limit) | |
208 | ) = await self._get_to_device_message_edus(limit) | |
205 | 209 | |
206 | 210 | pending_edus = device_update_edus + to_device_edus |
207 | 211 | |
268 | 272 | |
269 | 273 | # END CRITICAL SECTION |
270 | 274 | |
271 | success = yield self._transaction_manager.send_new_transaction( | |
275 | success = await self._transaction_manager.send_new_transaction( | |
272 | 276 | self._destination, pending_pdus, pending_edus |
273 | 277 | ) |
274 | 278 | if success: |
279 | 283 | # Remove the acknowledged device messages from the database |
280 | 284 | # Only bother if we actually sent some device messages |
281 | 285 | if to_device_edus: |
282 | yield self._store.delete_device_msgs_for_remote( | |
286 | await self._store.delete_device_msgs_for_remote( | |
283 | 287 | self._destination, device_stream_id |
284 | 288 | ) |
285 | 289 | |
288 | 292 | logger.info( |
289 | 293 | "Marking as sent %r %r", self._destination, dev_list_id |
290 | 294 | ) |
291 | yield self._store.mark_as_sent_devices_by_remote( | |
295 | await self._store.mark_as_sent_devices_by_remote( | |
292 | 296 | self._destination, dev_list_id |
293 | 297 | ) |
294 | 298 | |
333 | 337 | # We want to be *very* sure we clear this after we stop processing |
334 | 338 | self.transmission_loop_running = False |
335 | 339 | |
336 | def _get_rr_edus(self, force_flush): | |
340 | def _get_rr_edus(self, force_flush: bool) -> Iterable[Edu]: | |
337 | 341 | if not self._pending_rrs: |
338 | 342 | return |
339 | 343 | if not force_flush and not self._rrs_pending_flush: |
350 | 354 | self._rrs_pending_flush = False |
351 | 355 | yield edu |
352 | 356 | |
353 | def _pop_pending_edus(self, limit): | |
357 | def _pop_pending_edus(self, limit: int) -> List[Edu]: | |
354 | 358 | pending_edus = self._pending_edus |
355 | 359 | pending_edus, self._pending_edus = pending_edus[:limit], pending_edus[limit:] |
356 | 360 | return pending_edus |
357 | 361 | |
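`_pop_pending_edus` drains at most `limit` items with a single pair of slices rather than repeated `pop(0)` calls. The same idiom in isolation:

```python
from typing import List, Tuple

def pop_pending_edus(pending: List[str], limit: int) -> Tuple[List[str], List[str]]:
    # split in one step: first `limit` items go out, the rest stay queued
    return pending[:limit], pending[limit:]

to_send, remaining = pop_pending_edus(["e1", "e2", "e3", "e4"], 3)
```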
358 | @defer.inlineCallbacks | |
359 | def _get_device_update_edus(self, limit): | |
362 | async def _get_device_update_edus(self, limit: int) -> Tuple[List[Edu], int]: | |
360 | 363 | last_device_list = self._last_device_list_stream_id |
361 | 364 | |
362 | 365 | # Retrieve list of new device updates to send to the destination |
363 | now_stream_id, results = yield self._store.get_device_updates_by_remote( | |
366 | now_stream_id, results = await self._store.get_device_updates_by_remote( | |
364 | 367 | self._destination, last_device_list, limit=limit |
365 | 368 | ) |
366 | 369 | edus = [ |
377 | 380 | |
378 | 381 | return (edus, now_stream_id) |
379 | 382 | |
380 | @defer.inlineCallbacks | |
381 | def _get_to_device_message_edus(self, limit): | |
383 | async def _get_to_device_message_edus(self, limit: int) -> Tuple[List[Edu], int]: | |
382 | 384 | last_device_stream_id = self._last_device_stream_id |
383 | 385 | to_device_stream_id = self._store.get_to_device_stream_token() |
384 | contents, stream_id = yield self._store.get_new_device_msgs_for_remote( | |
386 | contents, stream_id = await self._store.get_new_device_msgs_for_remote( | |
385 | 387 | self._destination, last_device_stream_id, to_device_stream_id, limit |
386 | 388 | ) |
387 | 389 | edus = [ |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | import logging |
15 | from typing import List | |
15 | 16 | |
16 | 17 | from canonicaljson import json |
17 | 18 | |
18 | from twisted.internet import defer | |
19 | ||
19 | import synapse.server | |
20 | 20 | from synapse.api.errors import HttpResponseException |
21 | from synapse.events import EventBase | |
21 | 22 | from synapse.federation.persistence import TransactionActions |
22 | from synapse.federation.units import Transaction | |
23 | from synapse.federation.units import Edu, Transaction | |
23 | 24 | from synapse.logging.opentracing import ( |
24 | 25 | extract_text_map, |
25 | 26 | set_tag, |
38 | 39 | shared between PerDestinationQueue objects |
39 | 40 | """ |
40 | 41 | |
41 | def __init__(self, hs): | |
42 | def __init__(self, hs: "synapse.server.HomeServer"): | |
42 | 43 | self._server_name = hs.hostname |
43 | 44 | self.clock = hs.get_clock() # nb must be called this for @measure_func |
44 | 45 | self._store = hs.get_datastore() |
49 | 50 | self._next_txn_id = int(self.clock.time_msec()) |
50 | 51 | |
51 | 52 | @measure_func("_send_new_transaction") |
52 | @defer.inlineCallbacks | |
53 | def send_new_transaction(self, destination, pending_pdus, pending_edus): | |
53 | async def send_new_transaction( | |
54 | self, destination: str, pending_pdus: List[EventBase], pending_edus: List[Edu] | |
55 | ): | |
54 | 56 | |
55 | 57 | # Make a transaction-sending opentracing span. This span follows on from |
56 | 58 | # all the edus in that transaction. This needs to be done since there is |
126 | 128 | return data |
127 | 129 | |
128 | 130 | try: |
129 | response = yield self._transport_layer.send_transaction( | |
131 | response = await self._transport_layer.send_transaction( | |
130 | 132 | transaction, json_data_cb |
131 | 133 | ) |
132 | 134 | code = 200 |
157 | 157 | origin, json_request, now, "Incoming request" |
158 | 158 | ) |
159 | 159 | |
160 | logger.info("Request from %s", origin) | |
160 | logger.debug("Request from %s", origin) | |
161 | 161 | request.authenticated_entity = origin |
162 | 162 | |
163 | 163 | # If we get a valid signed request from the other side, it's probably |
578 | 578 | # state resolution algorithm, and we don't use that for processing |
579 | 579 | # invites |
580 | 580 | content = await self.handler.on_invite_request( |
581 | origin, content, room_version=RoomVersions.V1.identifier | |
581 | origin, content, room_version_id=RoomVersions.V1.identifier | |
582 | 582 | ) |
583 | 583 | |
584 | 584 | # V1 federation API is defined to return a content of `[200, {...}]` |
605 | 605 | event.setdefault("unsigned", {})["invite_room_state"] = invite_room_state |
606 | 606 | |
607 | 607 | content = await self.handler.on_invite_request( |
608 | origin, event, room_version=room_version | |
608 | origin, event, room_version_id=room_version | |
609 | 609 | ) |
610 | 610 | return 200, content |
611 | 611 |
18 | 18 | |
19 | 19 | import logging |
20 | 20 | |
21 | import attr | |
22 | ||
23 | from synapse.types import JsonDict | |
21 | 24 | from synapse.util.jsonobject import JsonEncodedObject |
22 | 25 | |
23 | 26 | logger = logging.getLogger(__name__) |
24 | 27 | |
25 | 28 | |
29 | @attr.s(slots=True) | |
26 | 30 | class Edu(JsonEncodedObject): |
27 | 31 | """ An Edu represents a piece of data sent from one homeserver to another. |
28 | 32 | |
31 | 35 | internal ID or previous references graph. |
32 | 36 | """ |
33 | 37 | |
34 | valid_keys = ["origin", "destination", "edu_type", "content"] | |
38 | edu_type = attr.ib(type=str) | |
39 | content = attr.ib(type=dict) | |
40 | origin = attr.ib(type=str) | |
41 | destination = attr.ib(type=str) | |
35 | 42 | |
36 | required_keys = ["edu_type"] | |
43 | def get_dict(self) -> JsonDict: | |
44 | return { | |
45 | "edu_type": self.edu_type, | |
46 | "content": self.content, | |
47 | } | |
37 | 48 | |
38 | internal_keys = ["origin", "destination"] | |
49 | def get_internal_dict(self) -> JsonDict: | |
50 | return { | |
51 | "edu_type": self.edu_type, | |
52 | "content": self.content, | |
53 | "origin": self.origin, | |
54 | "destination": self.destination, | |
55 | } | |
39 | 56 | |
40 | 57 | def get_context(self): |
41 | 58 | return getattr(self, "content", {}).get("org.matrix.opentracing_context", "{}") |
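The hunk above converts `Edu` from a `JsonEncodedObject` driven by `valid_keys`/`internal_keys` lists into an `attr.s` class with explicit `get_dict`/`get_internal_dict` methods. A rough stdlib-only stand-in using `dataclasses` (not the actual `attr` library used upstream) shows the intended split between spec-visible and internal keys:

```python
from dataclasses import dataclass

@dataclass
class Edu:
    edu_type: str
    content: dict
    origin: str
    destination: str

    def get_dict(self) -> dict:
        # keys sent over federation, per the spec
        return {"edu_type": self.edu_type, "content": self.content}

    def get_internal_dict(self) -> dict:
        # adds the routing keys tracked internally
        return {**self.get_dict(), "origin": self.origin, "destination": self.destination}

edu = Edu("m.presence", {"push": []}, "hs1.example", "hs2.example")
```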
35 | 35 | # TODO: Flairs |
36 | 36 | |
37 | 37 | |
38 | class GroupsServerHandler(object): | |
38 | class GroupsServerWorkerHandler(object): | |
39 | 39 | def __init__(self, hs): |
40 | 40 | self.hs = hs |
41 | 41 | self.store = hs.get_datastore() |
49 | 49 | self.attestations = hs.get_groups_attestation_signing() |
50 | 50 | self.transport_client = hs.get_federation_transport_client() |
51 | 51 | self.profile_handler = hs.get_profile_handler() |
52 | ||
53 | # Ensure attestations get renewed | |
54 | hs.get_groups_attestation_renewer() | |
55 | 52 | |
56 | 53 | @defer.inlineCallbacks |
57 | 54 | def check_group_is_ours( |
167 | 164 | } |
168 | 165 | |
169 | 166 | @defer.inlineCallbacks |
170 | def update_group_summary_room( | |
171 | self, group_id, requester_user_id, room_id, category_id, content | |
172 | ): | |
173 | """Add/update a room to the group summary | |
174 | """ | |
175 | yield self.check_group_is_ours( | |
176 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
177 | ) | |
178 | ||
179 | RoomID.from_string(room_id) # Ensure valid room id | |
180 | ||
181 | order = content.get("order", None) | |
182 | ||
183 | is_public = _parse_visibility_from_contents(content) | |
184 | ||
185 | yield self.store.add_room_to_summary( | |
186 | group_id=group_id, | |
187 | room_id=room_id, | |
188 | category_id=category_id, | |
189 | order=order, | |
190 | is_public=is_public, | |
191 | ) | |
192 | ||
193 | return {} | |
194 | ||
195 | @defer.inlineCallbacks | |
196 | def delete_group_summary_room( | |
197 | self, group_id, requester_user_id, room_id, category_id | |
198 | ): | |
199 | """Remove a room from the summary | |
200 | """ | |
201 | yield self.check_group_is_ours( | |
202 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
203 | ) | |
204 | ||
205 | yield self.store.remove_room_from_summary( | |
206 | group_id=group_id, room_id=room_id, category_id=category_id | |
207 | ) | |
208 | ||
209 | return {} | |
210 | ||
211 | @defer.inlineCallbacks | |
212 | def set_group_join_policy(self, group_id, requester_user_id, content): | |
213 | """Sets the group join policy. | |
214 | ||
215 | Currently supported policies are: | |
216 | - "invite": an invite must be received and accepted in order to join. | |
217 | - "open": anyone can join. | |
218 | """ | |
219 | yield self.check_group_is_ours( | |
220 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
221 | ) | |
222 | ||
223 | join_policy = _parse_join_policy_from_contents(content) | |
224 | if join_policy is None: | |
225 | raise SynapseError(400, "No value specified for 'm.join_policy'") | |
226 | ||
227 | yield self.store.set_group_join_policy(group_id, join_policy=join_policy) | |
228 | ||
229 | return {} | |
230 | ||
231 | @defer.inlineCallbacks | |
232 | 167 | def get_group_categories(self, group_id, requester_user_id): |
233 | 168 | """Get all categories in a group (as seen by user) |
234 | 169 | """ |
247 | 182 | group_id=group_id, category_id=category_id |
248 | 183 | ) |
249 | 184 | |
185 | logger.info("group %s", res) | |
186 | ||
250 | 187 | return res |
251 | ||
252 | @defer.inlineCallbacks | |
253 | def update_group_category(self, group_id, requester_user_id, category_id, content): | |
254 | """Add/Update a group category | |
255 | """ | |
256 | yield self.check_group_is_ours( | |
257 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
258 | ) | |
259 | ||
260 | is_public = _parse_visibility_from_contents(content) | |
261 | profile = content.get("profile") | |
262 | ||
263 | yield self.store.upsert_group_category( | |
264 | group_id=group_id, | |
265 | category_id=category_id, | |
266 | is_public=is_public, | |
267 | profile=profile, | |
268 | ) | |
269 | ||
270 | return {} | |
271 | ||
272 | @defer.inlineCallbacks | |
273 | def delete_group_category(self, group_id, requester_user_id, category_id): | |
274 | """Delete a group category | |
275 | """ | |
276 | yield self.check_group_is_ours( | |
277 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
278 | ) | |
279 | ||
280 | yield self.store.remove_group_category( | |
281 | group_id=group_id, category_id=category_id | |
282 | ) | |
283 | ||
284 | return {} | |
285 | 188 | |
286 | 189 | @defer.inlineCallbacks |
287 | 190 | def get_group_roles(self, group_id, requester_user_id): |
300 | 203 | |
301 | 204 | res = yield self.store.get_group_role(group_id=group_id, role_id=role_id) |
302 | 205 | return res |
303 | ||
304 | @defer.inlineCallbacks | |
305 | def update_group_role(self, group_id, requester_user_id, role_id, content): | |
306 | """Add/update a role in a group | |
307 | """ | |
308 | yield self.check_group_is_ours( | |
309 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
310 | ) | |
311 | ||
312 | is_public = _parse_visibility_from_contents(content) | |
313 | ||
314 | profile = content.get("profile") | |
315 | ||
316 | yield self.store.upsert_group_role( | |
317 | group_id=group_id, role_id=role_id, is_public=is_public, profile=profile | |
318 | ) | |
319 | ||
320 | return {} | |
321 | ||
322 | @defer.inlineCallbacks | |
323 | def delete_group_role(self, group_id, requester_user_id, role_id): | |
324 | """Remove role from group | |
325 | """ | |
326 | yield self.check_group_is_ours( | |
327 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
328 | ) | |
329 | ||
330 | yield self.store.remove_group_role(group_id=group_id, role_id=role_id) | |
331 | ||
332 | return {} | |
333 | ||
334 | @defer.inlineCallbacks | |
335 | def update_group_summary_user( | |
336 | self, group_id, requester_user_id, user_id, role_id, content | |
337 | ): | |
338 | """Add/update a users entry in the group summary | |
339 | """ | |
340 | yield self.check_group_is_ours( | |
341 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
342 | ) | |
343 | ||
344 | order = content.get("order", None) | |
345 | ||
346 | is_public = _parse_visibility_from_contents(content) | |
347 | ||
348 | yield self.store.add_user_to_summary( | |
349 | group_id=group_id, | |
350 | user_id=user_id, | |
351 | role_id=role_id, | |
352 | order=order, | |
353 | is_public=is_public, | |
354 | ) | |
355 | ||
356 | return {} | |
357 | ||
358 | @defer.inlineCallbacks | |
359 | def delete_group_summary_user(self, group_id, requester_user_id, user_id, role_id): | |
360 | """Remove a user from the group summary | |
361 | """ | |
362 | yield self.check_group_is_ours( | |
363 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
364 | ) | |
365 | ||
366 | yield self.store.remove_user_from_summary( | |
367 | group_id=group_id, user_id=user_id, role_id=role_id | |
368 | ) | |
369 | ||
370 | return {} | |
371 | 206 | |
372 | 207 | @defer.inlineCallbacks |
373 | 208 | def get_group_profile(self, group_id, requester_user_id): |
394 | 229 | raise SynapseError(404, "Unknown group") |
395 | 230 | |
396 | 231 | @defer.inlineCallbacks |
397 | def update_group_profile(self, group_id, requester_user_id, content): | |
398 | """Update the group profile | |
399 | """ | |
400 | yield self.check_group_is_ours( | |
401 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
402 | ) | |
403 | ||
404 | profile = {} | |
405 | for keyname in ("name", "avatar_url", "short_description", "long_description"): | |
406 | if keyname in content: | |
407 | value = content[keyname] | |
408 | if not isinstance(value, string_types): | |
409 | raise SynapseError(400, "%r value is not a string" % (keyname,)) | |
410 | profile[keyname] = value | |
411 | ||
412 | yield self.store.update_group_profile(group_id, profile) | |
413 | ||
414 | @defer.inlineCallbacks | |
415 | 232 | def get_users_in_group(self, group_id, requester_user_id): |
416 | 233 | """Get the users in group as seen by requester_user_id. |
417 | 234 | |
528 | 345 | chunk.sort(key=lambda e: -e["num_joined_members"]) |
529 | 346 | |
530 | 347 | return {"chunk": chunk, "total_room_count_estimate": len(room_results)} |
348 | ||
349 | ||
350 | class GroupsServerHandler(GroupsServerWorkerHandler): | |
351 | def __init__(self, hs): | |
352 | super(GroupsServerHandler, self).__init__(hs) | |
353 | ||
354 | # Ensure attestations get renewed | |
355 | hs.get_groups_attestation_renewer() | |
356 | ||
357 | @defer.inlineCallbacks | |
358 | def update_group_summary_room( | |
359 | self, group_id, requester_user_id, room_id, category_id, content | |
360 | ): | |
361 | """Add/update a room in the group summary | |
362 | """ | |
363 | yield self.check_group_is_ours( | |
364 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
365 | ) | |
366 | ||
367 | RoomID.from_string(room_id) # Ensure valid room id | |
368 | ||
369 | order = content.get("order", None) | |
370 | ||
371 | is_public = _parse_visibility_from_contents(content) | |
372 | ||
373 | yield self.store.add_room_to_summary( | |
374 | group_id=group_id, | |
375 | room_id=room_id, | |
376 | category_id=category_id, | |
377 | order=order, | |
378 | is_public=is_public, | |
379 | ) | |
380 | ||
381 | return {} | |
382 | ||
383 | @defer.inlineCallbacks | |
384 | def delete_group_summary_room( | |
385 | self, group_id, requester_user_id, room_id, category_id | |
386 | ): | |
387 | """Remove a room from the summary | |
388 | """ | |
389 | yield self.check_group_is_ours( | |
390 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
391 | ) | |
392 | ||
393 | yield self.store.remove_room_from_summary( | |
394 | group_id=group_id, room_id=room_id, category_id=category_id | |
395 | ) | |
396 | ||
397 | return {} | |
398 | ||
399 | @defer.inlineCallbacks | |
400 | def set_group_join_policy(self, group_id, requester_user_id, content): | |
401 | """Sets the group join policy. | |
402 | ||
403 | Currently supported policies are: | |
404 | - "invite": an invite must be received and accepted in order to join. | |
405 | - "open": anyone can join. | |
406 | """ | |
407 | yield self.check_group_is_ours( | |
408 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
409 | ) | |
410 | ||
411 | join_policy = _parse_join_policy_from_contents(content) | |
412 | if join_policy is None: | |
413 | raise SynapseError(400, "No value specified for 'm.join_policy'") | |
414 | ||
415 | yield self.store.set_group_join_policy(group_id, join_policy=join_policy) | |
416 | ||
417 | return {} | |
418 | ||
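The join-policy setter above delegates validation to `_parse_join_policy_from_contents`, which is defined elsewhere in this module and not shown in this hunk. As a rough, hypothetical sketch of what such a parser does (the function name, error type, and content shape `{"m.join_policy": {"type": ...}}` are assumptions here, not Synapse's actual helper):

```python
# Hypothetical sketch, not Synapse's real _parse_join_policy_from_contents.
# Assumed request body shape: {"m.join_policy": {"type": "invite" | "open"}}

VALID_JOIN_POLICIES = ("invite", "open")


def parse_join_policy(content):
    """Return the requested policy string, or None if the key is absent.

    Raises ValueError for a malformed or unknown policy type (the real
    code would raise SynapseError(400, ...) instead).
    """
    policy = content.get("m.join_policy")
    if policy is None:
        return None
    policy_type = policy.get("type") if isinstance(policy, dict) else None
    if policy_type not in VALID_JOIN_POLICIES:
        raise ValueError("%r is not a valid join policy" % (policy_type,))
    return policy_type
```

With this sketch, the handler's `join_policy is None` branch corresponds to an absent `m.join_policy` key, while unknown types fail fast with a 400-style error.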
419 | @defer.inlineCallbacks | |
420 | def update_group_category(self, group_id, requester_user_id, category_id, content): | |
421 | """Add/update a group category | |
422 | """ | |
423 | yield self.check_group_is_ours( | |
424 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
425 | ) | |
426 | ||
427 | is_public = _parse_visibility_from_contents(content) | |
428 | profile = content.get("profile") | |
429 | ||
430 | yield self.store.upsert_group_category( | |
431 | group_id=group_id, | |
432 | category_id=category_id, | |
433 | is_public=is_public, | |
434 | profile=profile, | |
435 | ) | |
436 | ||
437 | return {} | |
438 | ||
439 | @defer.inlineCallbacks | |
440 | def delete_group_category(self, group_id, requester_user_id, category_id): | |
441 | """Delete a group category | |
442 | """ | |
443 | yield self.check_group_is_ours( | |
444 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
445 | ) | |
446 | ||
447 | yield self.store.remove_group_category( | |
448 | group_id=group_id, category_id=category_id | |
449 | ) | |
450 | ||
451 | return {} | |
452 | ||
453 | @defer.inlineCallbacks | |
454 | def update_group_role(self, group_id, requester_user_id, role_id, content): | |
455 | """Add/update a role in a group | |
456 | """ | |
457 | yield self.check_group_is_ours( | |
458 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
459 | ) | |
460 | ||
461 | is_public = _parse_visibility_from_contents(content) | |
462 | ||
463 | profile = content.get("profile") | |
464 | ||
465 | yield self.store.upsert_group_role( | |
466 | group_id=group_id, role_id=role_id, is_public=is_public, profile=profile | |
467 | ) | |
468 | ||
469 | return {} | |
470 | ||
471 | @defer.inlineCallbacks | |
472 | def delete_group_role(self, group_id, requester_user_id, role_id): | |
473 | """Remove a role from the group | |
474 | """ | |
475 | yield self.check_group_is_ours( | |
476 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
477 | ) | |
478 | ||
479 | yield self.store.remove_group_role(group_id=group_id, role_id=role_id) | |
480 | ||
481 | return {} | |
482 | ||
483 | @defer.inlineCallbacks | |
484 | def update_group_summary_user( | |
485 | self, group_id, requester_user_id, user_id, role_id, content | |
486 | ): | |
487 | """Add/update a user's entry in the group summary | |
488 | """ | |
489 | yield self.check_group_is_ours( | |
490 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
491 | ) | |
492 | ||
493 | order = content.get("order", None) | |
494 | ||
495 | is_public = _parse_visibility_from_contents(content) | |
496 | ||
497 | yield self.store.add_user_to_summary( | |
498 | group_id=group_id, | |
499 | user_id=user_id, | |
500 | role_id=role_id, | |
501 | order=order, | |
502 | is_public=is_public, | |
503 | ) | |
504 | ||
505 | return {} | |
506 | ||
507 | @defer.inlineCallbacks | |
508 | def delete_group_summary_user(self, group_id, requester_user_id, user_id, role_id): | |
509 | """Remove a user from the group summary | |
510 | """ | |
511 | yield self.check_group_is_ours( | |
512 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
513 | ) | |
514 | ||
515 | yield self.store.remove_user_from_summary( | |
516 | group_id=group_id, user_id=user_id, role_id=role_id | |
517 | ) | |
518 | ||
519 | return {} | |
520 | ||
521 | @defer.inlineCallbacks | |
522 | def update_group_profile(self, group_id, requester_user_id, content): | |
523 | """Update the group profile | |
524 | """ | |
525 | yield self.check_group_is_ours( | |
526 | group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id | |
527 | ) | |
528 | ||
529 | profile = {} | |
530 | for keyname in ("name", "avatar_url", "short_description", "long_description"): | |
531 | if keyname in content: | |
532 | value = content[keyname] | |
533 | if not isinstance(value, string_types): | |
534 | raise SynapseError(400, "%r value is not a string" % (keyname,)) | |
535 | profile[keyname] = value | |
536 | ||
537 | yield self.store.update_group_profile(group_id, profile) | |
531 | 538 | |
532 | 539 | @defer.inlineCallbacks |
533 | 540 | def add_room_to_group(self, group_id, requester_user_id, room_id, content): |
23 | 23 | from synapse.app import check_bind_error |
24 | 24 | |
25 | 25 | logger = logging.getLogger(__name__) |
26 | ||
27 | ACME_REGISTER_FAIL_ERROR = """ | |
28 | -------------------------------------------------------------------------------- | |
29 | Failed to register with the ACME provider. This is likely happening because the installation | |
30 | is new, and ACME v1 has been deprecated by Let's Encrypt and disabled for | |
31 | new installations since November 2019. | |
32 | At the moment, Synapse doesn't support ACME v2. For more information and alternative | |
33 | solutions, please read https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1 | |
34 | --------------------------------------------------------------------------------""" | |
26 | 35 | |
27 | 36 | |
28 | 37 | class AcmeHandler(object): |
70 | 79 | # want it to control where we save the certificates, we have to reach in |
71 | 80 | # and trigger the registration machinery ourselves. |
72 | 81 | self._issuer._registered = False |
73 | yield self._issuer._ensure_registered() | |
82 | ||
83 | try: | |
84 | yield self._issuer._ensure_registered() | |
85 | except Exception: | |
86 | logger.error(ACME_REGISTER_FAIL_ERROR) | |
87 | raise | |
74 | 88 | |
75 | 89 | @defer.inlineCallbacks |
76 | 90 | def provision_certificate(self): |
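The ACME hunk above wraps `_ensure_registered()` so that a registration failure (likely the ACME v1 deprecation for new installs) prints an actionable banner before the exception propagates and aborts startup. A minimal standalone sketch of that pattern, with an illustrative `issuer` object standing in for txacme's real interface and an abbreviated banner:

```python
import logging

logger = logging.getLogger(__name__)

# Abbreviated stand-in for the full ACME_REGISTER_FAIL_ERROR banner above.
ACME_REGISTER_FAIL_ERROR = (
    "Failed to register with the ACME provider; ACME v1 has been deprecated "
    "for new installations. See docs/ACME.md for alternatives."
)


def ensure_registered(issuer):
    """Attempt ACME registration, logging a readable banner on failure.

    `issuer` is any object with an `ensure_registered()` method; this is an
    illustrative stand-in, not the real txacme/Synapse API.
    """
    try:
        issuer.ensure_registered()
    except Exception:
        # Log the human-readable explanation, then re-raise so startup
        # still aborts as before.
        logger.error(ACME_REGISTER_FAIL_ERROR)
        raise
```

The design choice is to add context without swallowing the error: the original exception still reaches the caller unchanged.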
57 | 57 | ret = await self.store.get_user_by_id(user.to_string()) |
58 | 58 | if ret: |
59 | 59 | profile = await self.store.get_profileinfo(user.localpart) |
60 | threepids = await self.store.user_get_threepids(user.to_string()) | |
60 | 61 | ret["displayname"] = profile.display_name |
61 | 62 | ret["avatar_url"] = profile.avatar_url |
63 | ret["threepids"] = threepids | |
62 | 64 | return ret |
63 | 65 | |
64 | 66 | async def export_user_data(self, user_id, writer): |
815 | 815 | |
816 | 816 | @defer.inlineCallbacks |
817 | 817 | def add_threepid(self, user_id, medium, address, validated_at): |
818 | # check if medium has a valid value | |
819 | if medium not in ["email", "msisdn"]: | |
820 | raise SynapseError( | |
821 | code=400, | |
822 | msg=("'%s' is not a valid value for 'medium'" % (medium,)), | |
823 | errcode=Codes.INVALID_PARAM, | |
824 | ) | |
825 | ||
818 | 826 | # 'Canonicalise' email addresses down to lower case. |
819 | 827 | # We're now moving towards the homeserver being the entity that
820 | 828 | # is responsible for validating threepids used for resetting passwords |
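The `add_threepid` change above rejects any medium other than `email` or `msisdn` before the address is canonicalised. A self-contained sketch of that check, with `ValueError` standing in for the `SynapseError`/`Codes.INVALID_PARAM` used by the real handler:

```python
# Mirrors the medium validation added to add_threepid above; ValueError
# substitutes for SynapseError(400, ..., Codes.INVALID_PARAM).

VALID_THREEPID_MEDIUMS = ("email", "msisdn")


def check_threepid_medium(medium):
    """Raise if `medium` is not a recognised threepid medium."""
    if medium not in VALID_THREEPID_MEDIUMS:
        raise ValueError("'%s' is not a valid value for 'medium'" % (medium,))
```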
25 | 25 | FederationDeniedError, |
26 | 26 | HttpResponseException, |
27 | 27 | RequestSendFailed, |
28 | SynapseError, | |
28 | 29 | ) |
29 | 30 | from synapse.logging.opentracing import log_kv, set_tag, trace |
30 | 31 | from synapse.types import RoomStreamToken, get_domain_from_id |
37 | 38 | from ._base import BaseHandler |
38 | 39 | |
39 | 40 | logger = logging.getLogger(__name__) |
41 | ||
42 | MAX_DEVICE_DISPLAY_NAME_LEN = 100 | |
40 | 43 | |
41 | 44 | |
42 | 45 | class DeviceWorkerHandler(BaseHandler): |
224 | 227 | |
225 | 228 | return result |
226 | 229 | |
230 | @defer.inlineCallbacks | |
231 | def on_federation_query_user_devices(self, user_id): | |
232 | stream_id, devices = yield self.store.get_devices_with_keys_by_user(user_id) | |
233 | master_key = yield self.store.get_e2e_cross_signing_key(user_id, "master") | |
234 | self_signing_key = yield self.store.get_e2e_cross_signing_key( | |
235 | user_id, "self_signing" | |
236 | ) | |
237 | ||
238 | return { | |
239 | "user_id": user_id, | |
240 | "stream_id": stream_id, | |
241 | "devices": devices, | |
242 | "master_key": master_key, | |
243 | "self_signing_key": self_signing_key, | |
244 | } | |
245 | ||
227 | 246 | |
228 | 247 | class DeviceHandler(DeviceWorkerHandler): |
229 | 248 | def __init__(self, hs): |
237 | 256 | |
238 | 257 | federation_registry.register_edu_handler( |
239 | 258 | "m.device_list_update", self.device_list_updater.incoming_device_list_update |
240 | ) | |
241 | federation_registry.register_query_handler( | |
242 | "user_devices", self.on_federation_query_user_devices | |
243 | 259 | ) |
244 | 260 | |
245 | 261 | hs.get_distributor().observe("user_left_room", self.user_left_room) |
390 | 406 | defer.Deferred: |
391 | 407 | """ |
392 | 408 | |
409 | # Reject a new displayname which is too long. | |
410 | new_display_name = content.get("display_name") | |
411 | if new_display_name and len(new_display_name) > MAX_DEVICE_DISPLAY_NAME_LEN: | |
412 | raise SynapseError( | |
413 | 400, | |
414 | "Device display name is too long (max %i)" | |
415 | % (MAX_DEVICE_DISPLAY_NAME_LEN,), | |
416 | ) | |
417 | ||
393 | 418 | try: |
394 | 419 | yield self.store.update_device( |
395 | user_id, device_id, new_display_name=content.get("display_name") | |
420 | user_id, device_id, new_display_name=new_display_name | |
396 | 421 | ) |
397 | 422 | yield self.notify_device_update(user_id, [device_id]) |
398 | 423 | except errors.StoreError as e: |
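The `update_device` hunk above caps device display names at `MAX_DEVICE_DISPLAY_NAME_LEN` (100) before hitting the store. A standalone sketch of that validation, with `ValueError` standing in for the handler's `SynapseError(400, ...)`:

```python
# Mirrors the display-name length check added above; None/empty names pass
# through unchanged, over-long names are rejected.

MAX_DEVICE_DISPLAY_NAME_LEN = 100


def validate_device_display_name(new_display_name):
    """Return the name if acceptable, else raise (ValueError stands in
    for SynapseError(400, ...) in the real handler)."""
    if new_display_name and len(new_display_name) > MAX_DEVICE_DISPLAY_NAME_LEN:
        raise ValueError(
            "Device display name is too long (max %i)"
            % (MAX_DEVICE_DISPLAY_NAME_LEN,)
        )
    return new_display_name
```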
454 | 479 | ) |
455 | 480 | |
456 | 481 | self.notifier.on_new_event("device_list_key", position, users=[from_user_id]) |
457 | ||
458 | @defer.inlineCallbacks | |
459 | def on_federation_query_user_devices(self, user_id): | |
460 | stream_id, devices = yield self.store.get_devices_with_keys_by_user(user_id) | |
461 | master_key = yield self.store.get_e2e_cross_signing_key(user_id, "master") | |
462 | self_signing_key = yield self.store.get_e2e_cross_signing_key( | |
463 | user_id, "self_signing" | |
464 | ) | |
465 | ||
466 | return { | |
467 | "user_id": user_id, | |
468 | "stream_id": stream_id, | |
469 | "devices": devices, | |
470 | "master_key": master_key, | |
471 | "self_signing_key": self_signing_key, | |
472 | } | |
473 | 482 | |
474 | 483 | @defer.inlineCallbacks |
475 | 484 | def user_left_room(self, user, room_id): |
15 | 15 | |
16 | 16 | import logging |
17 | 17 | import string |
18 | from typing import List | |
18 | 19 | |
19 | 20 | from twisted.internet import defer |
20 | 21 | |
27 | 28 | StoreError, |
28 | 29 | SynapseError, |
29 | 30 | ) |
30 | from synapse.types import RoomAlias, UserID, get_domain_from_id | |
31 | from synapse.types import Requester, RoomAlias, UserID, get_domain_from_id | |
31 | 32 | |
32 | 33 | from ._base import BaseHandler |
33 | 34 | |
80 | 81 | |
81 | 82 | @defer.inlineCallbacks |
82 | 83 | def create_association( |
83 | self, | |
84 | requester, | |
85 | room_alias, | |
86 | room_id, | |
87 | servers=None, | |
88 | send_event=True, | |
89 | check_membership=True, | |
84 | self, requester, room_alias, room_id, servers=None, check_membership=True, | |
90 | 85 | ): |
91 | 86 | """Attempt to create a new alias |
92 | 87 | |
96 | 91 | room_id (str) |
97 | 92 | servers (list[str]|None): List of servers that other servers
98 | 93 | should try and join via |
99 | send_event (bool): Whether to send an updated m.room.aliases event | |
100 | 94 | check_membership (bool): Whether to check if the user is in the room |
101 | 95 | before the alias can be set (if the server's config requires it). |
102 | 96 | |
149 | 143 | ) |
150 | 144 | |
151 | 145 | yield self._create_association(room_alias, room_id, servers, creator=user_id) |
152 | if send_event: | |
153 | try: | |
154 | yield self.send_room_alias_update_event(requester, room_id) | |
155 | except AuthError as e: | |
156 | # sending the aliases event may fail due to the user not having | |
157 | # permission in the room; this is permitted. | |
158 | logger.info("Skipping updating aliases event due to auth error %s", e) | |
159 | ||
160 | @defer.inlineCallbacks | |
161 | def delete_association(self, requester, room_alias, send_event=True): | |
146 | ||
147 | @defer.inlineCallbacks | |
148 | def delete_association(self, requester, room_alias): | |
162 | 149 | """Remove an alias from the directory |
163 | 150 | |
164 | 151 | (this is only meant for human users; AS users should call |
167 | 154 | Args: |
168 | 155 | requester (Requester): |
169 | 156 | room_alias (RoomAlias): |
170 | send_event (bool): Whether to send an updated m.room.aliases event. | |
171 | Note that, if we delete the canonical alias, we will always attempt | |
172 | to send an m.room.canonical_alias event | |
173 | 157 | |
174 | 158 | Returns: |
175 | 159 | Deferred[unicode]: room id that the alias used to point to |
205 | 189 | room_id = yield self._delete_association(room_alias) |
206 | 190 | |
207 | 191 | try: |
208 | if send_event: | |
209 | yield self.send_room_alias_update_event(requester, room_id) | |
210 | ||
211 | 192 | yield self._update_canonical_alias( |
212 | 193 | requester, requester.user.to_string(), room_id, room_alias |
213 | 194 | ) |
318 | 299 | |
319 | 300 | @defer.inlineCallbacks |
320 | 301 | def _update_canonical_alias(self, requester, user_id, room_id, room_alias): |
302 | """ | |
303 | Send an updated canonical alias event if the removed alias was set as | |
304 | the canonical alias or listed in the alt_aliases field. | |
305 | """ | |
321 | 306 | alias_event = yield self.state.get_current_state( |
322 | 307 | room_id, EventTypes.CanonicalAlias, "" |
323 | 308 | ) |
324 | 309 | |
310 | # There is no canonical alias, nothing to do. | |
311 | if not alias_event: | |
312 | return | |
313 | ||
314 | # Obtain a mutable version of the event content. | |
315 | content = dict(alias_event.content) | |
316 | send_update = False | |
317 | ||
318 | # Remove the alias property if it matches the removed alias. | |
325 | 319 | alias_str = room_alias.to_string() |
326 | if not alias_event or alias_event.content.get("alias", "") != alias_str: | |
327 | return | |
328 | ||
329 | yield self.event_creation_handler.create_and_send_nonmember_event( | |
330 | requester, | |
331 | { | |
332 | "type": EventTypes.CanonicalAlias, | |
333 | "state_key": "", | |
334 | "room_id": room_id, | |
335 | "sender": user_id, | |
336 | "content": {}, | |
337 | }, | |
338 | ratelimit=False, | |
339 | ) | |
320 | if alias_event.content.get("alias", "") == alias_str: | |
321 | send_update = True | |
322 | content.pop("alias", "") | |
323 | ||
324 | # Filter alt_aliases for the removed alias. | |
325 | alt_aliases = content.pop("alt_aliases", None) | |
326 | # If the aliases are not a list (or not found) do not attempt to modify | |
327 | # the list. | |
328 | if isinstance(alt_aliases, list): | |
329 | send_update = True | |
330 | alt_aliases = [alias for alias in alt_aliases if alias != alias_str] | |
331 | if alt_aliases: | |
332 | content["alt_aliases"] = alt_aliases | |
333 | ||
334 | if send_update: | |
335 | yield self.event_creation_handler.create_and_send_nonmember_event( | |
336 | requester, | |
337 | { | |
338 | "type": EventTypes.CanonicalAlias, | |
339 | "state_key": "", | |
340 | "room_id": room_id, | |
341 | "sender": user_id, | |
342 | "content": content, | |
343 | }, | |
344 | ratelimit=False, | |
345 | ) | |
340 | 346 | |
341 | 347 | @defer.inlineCallbacks |
342 | 348 | def get_association_from_room_alias(self, room_alias): |
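The rewritten `_update_canonical_alias` above now strips a deleted alias from both the `alias` property and the `alt_aliases` list of the `m.room.canonical_alias` content, sending an update only when something actually changed. The content-manipulation part can be sketched as a pure function (the event-sending plumbing is omitted):

```python
# Pure-function sketch of the alias-stripping logic in _update_canonical_alias.

def strip_deleted_alias(content, alias_str):
    """Return (new_content, changed) after removing `alias_str` from an
    m.room.canonical_alias event's content.

    Drops the 'alias' field if it matches, and filters the alias out of
    'alt_aliases' when that field is a list; non-list alt_aliases values
    are left untouched, as in the handler above.
    """
    content = dict(content)  # work on a mutable copy
    changed = False

    if content.get("alias", "") == alias_str:
        changed = True
        content.pop("alias", "")

    alt_aliases = content.pop("alt_aliases", None)
    if isinstance(alt_aliases, list):
        changed = True
        alt_aliases = [alias for alias in alt_aliases if alias != alias_str]
        if alt_aliases:
            content["alt_aliases"] = alt_aliases

    return content, changed
```

Note that, matching the handler, an update is flagged whenever `alt_aliases` is a list, even if the deleted alias was not in it, since the list has been normalised either way.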
446 | 452 | yield self.store.set_room_is_public_appservice( |
447 | 453 | room_id, appservice_id, network_id, visibility == "public" |
448 | 454 | ) |
455 | ||
456 | async def get_aliases_for_room( | |
457 | self, requester: Requester, room_id: str | |
458 | ) -> List[str]: | |
459 | """ | |
460 | Get a list of the aliases that currently point to this room on this server | |
461 | """ | |
462 | # allow access to server admins and current members of the room | |
463 | is_admin = await self.auth.is_server_admin(requester.user) | |
464 | if not is_admin: | |
465 | await self.auth.check_user_in_room_or_world_readable( | |
466 | room_id, requester.user.to_string() | |
467 | ) | |
468 | ||
469 | aliases = await self.store.get_aliases_for_room(room_id) | |
470 | return aliases |
64 | 64 | from synapse.replication.http.membership import ReplicationUserJoinedLeftRoomRestServlet |
65 | 65 | from synapse.state import StateResolutionStore, resolve_events_with_store |
66 | 66 | from synapse.storage.data_stores.main.events_worker import EventRedactBehaviour |
67 | from synapse.types import StateMap, UserID, get_domain_from_id | |
67 | from synapse.types import JsonDict, StateMap, UserID, get_domain_from_id | |
68 | 68 | from synapse.util.async_helpers import Linearizer, concurrently_execute |
69 | 69 | from synapse.util.distributor import user_joined_room |
70 | 70 | from synapse.util.retryutils import NotRetryingDestination |
1155 | 1155 | Logs a warning if we can't find the given event. |
1156 | 1156 | """ |
1157 | 1157 | |
1158 | room_version = await self.store.get_room_version_id(room_id) | |
1158 | room_version = await self.store.get_room_version(room_id) | |
1159 | 1159 | |
1160 | 1160 | event_infos = [] |
1161 | 1161 | |
1229 | 1229 | ) |
1230 | 1230 | raise SynapseError(http_client.BAD_REQUEST, "Too many auth_events") |
1231 | 1231 | |
1232 | @defer.inlineCallbacks | |
1233 | def send_invite(self, target_host, event): | |
1232 | async def send_invite(self, target_host, event): | |
1234 | 1233 | """ Sends the invite to the remote server for signing. |
1235 | 1234 | |
1236 | 1235 | Invites must be signed by the invitee's server before distribution. |
1237 | 1236 | """ |
1238 | pdu = yield self.federation_client.send_invite( | |
1237 | pdu = await self.federation_client.send_invite( | |
1239 | 1238 | destination=target_host, |
1240 | 1239 | room_id=event.room_id, |
1241 | 1240 | event_id=event.event_id, |
1244 | 1243 | |
1245 | 1244 | return pdu |
1246 | 1245 | |
1247 | @defer.inlineCallbacks | |
1248 | def on_event_auth(self, event_id): | |
1249 | event = yield self.store.get_event(event_id) | |
1250 | auth = yield self.store.get_auth_chain( | |
1246 | async def on_event_auth(self, event_id: str) -> List[EventBase]: | |
1247 | event = await self.store.get_event(event_id) | |
1248 | auth = await self.store.get_auth_chain( | |
1251 | 1249 | [auth_id for auth_id in event.auth_event_ids()], include_given=True |
1252 | 1250 | ) |
1253 | return [e for e in auth] | |
1254 | ||
1255 | @log_function | |
1256 | @defer.inlineCallbacks | |
1257 | def do_invite_join(self, target_hosts, room_id, joinee, content): | |
1251 | return list(auth) | |
1252 | ||
1253 | async def do_invite_join( | |
1254 | self, target_hosts: Iterable[str], room_id: str, joinee: str, content: JsonDict | |
1255 | ) -> None: | |
1258 | 1256 | """ Attempts to join the `joinee` to the room `room_id` via the |
1259 | 1257 | servers contained in `target_hosts`. |
1260 | 1258 | |
1267 | 1265 | have finished processing the join. |
1268 | 1266 | |
1269 | 1267 | Args: |
1270 | target_hosts (Iterable[str]): List of servers to attempt to join the room with. | |
1271 | ||
1272 | room_id (str): The ID of the room to join. | |
1273 | ||
1274 | joinee (str): The User ID of the joining user. | |
1275 | ||
1276 | content (dict): The event content to use for the join event. | |
1268 | target_hosts: List of servers to attempt to join the room with. | |
1269 | ||
1270 | room_id: The ID of the room to join. | |
1271 | ||
1272 | joinee: The User ID of the joining user. | |
1273 | ||
1274 | content: The event content to use for the join event. | |
1277 | 1275 | """ |
1278 | 1276 | logger.debug("Joining %s to %s", joinee, room_id) |
1279 | 1277 | |
1280 | origin, event, room_version_obj = yield self._make_and_verify_event( | |
1278 | origin, event, room_version_obj = await self._make_and_verify_event( | |
1281 | 1279 | target_hosts, |
1282 | 1280 | room_id, |
1283 | 1281 | joinee, |
1293 | 1291 | |
1294 | 1292 | self.room_queues[room_id] = [] |
1295 | 1293 | |
1296 | yield self._clean_room_for_join(room_id) | |
1294 | await self._clean_room_for_join(room_id) | |
1297 | 1295 | |
1298 | 1296 | handled_events = set() |
1299 | 1297 | |
1306 | 1304 | except ValueError: |
1307 | 1305 | pass |
1308 | 1306 | |
1309 | event_format_version = room_version_obj.event_format | |
1310 | ret = yield self.federation_client.send_join( | |
1311 | target_hosts, event, event_format_version | |
1307 | ret = await self.federation_client.send_join( | |
1308 | target_hosts, event, room_version_obj | |
1312 | 1309 | ) |
1313 | 1310 | |
1314 | 1311 | origin = ret["origin"] |
1326 | 1323 | logger.debug("do_invite_join event: %s", event) |
1327 | 1324 | |
1328 | 1325 | try: |
1329 | yield self.store.store_room( | |
1326 | await self.store.store_room( | |
1330 | 1327 | room_id=room_id, |
1331 | 1328 | room_creator_user_id="", |
1332 | 1329 | is_public=False, |
1336 | 1333 | # FIXME |
1337 | 1334 | pass |
1338 | 1335 | |
1339 | yield self._persist_auth_tree( | |
1336 | await self._persist_auth_tree( | |
1340 | 1337 | origin, auth_chain, state, event, room_version_obj |
1341 | 1338 | ) |
1342 | 1339 | |
1343 | 1340 | # Check whether this room is the result of an upgrade of a room we already know |
1344 | 1341 | # about. If so, migrate over user information |
1345 | predecessor = yield self.store.get_room_predecessor(room_id) | |
1342 | predecessor = await self.store.get_room_predecessor(room_id) | |
1346 | 1343 | if not predecessor or not isinstance(predecessor.get("room_id"), str): |
1347 | 1344 | return |
1348 | 1345 | old_room_id = predecessor["room_id"] |
1352 | 1349 | |
1353 | 1350 | # We retrieve the room member handler here as to not cause a cyclic dependency |
1354 | 1351 | member_handler = self.hs.get_room_member_handler() |
1355 | yield member_handler.transfer_room_state_on_room_upgrade( | |
1352 | await member_handler.transfer_room_state_on_room_upgrade( | |
1356 | 1353 | old_room_id, room_id |
1357 | 1354 | ) |
1358 | 1355 | |
1368 | 1365 | # have. Hence we fire off the deferred, but don't wait for it. |
1369 | 1366 | |
1370 | 1367 | run_in_background(self._handle_queued_pdus, room_queue) |
1371 | ||
1372 | return True | |
1373 | 1368 | |
1374 | 1369 | async def _handle_queued_pdus(self, room_queue): |
1375 | 1370 | """Process PDUs which got queued up while we were busy send_joining. |
1393 | 1388 | "Error handling queued PDU %s from %s: %s", p.event_id, origin, e |
1394 | 1389 | ) |
1395 | 1390 | |
1396 | @defer.inlineCallbacks | |
1397 | @log_function | |
1398 | def on_make_join_request(self, origin, room_id, user_id): | |
1391 | async def on_make_join_request( | |
1392 | self, origin: str, room_id: str, user_id: str | |
1393 | ) -> EventBase: | |
1399 | 1394 | """ We've received a /make_join/ request, so we create a partial |
1400 | 1395 | join event for the room and return that. We do *not* persist or |
1401 | 1396 | process it until the other server has signed it and sent it back. |
1402 | 1397 | |
1403 | 1398 | Args: |
1404 | origin (str): The (verified) server name of the requesting server. | |
1405 | room_id (str): Room to create join event in | |
1406 | user_id (str): The user to create the join for | |
1407 | ||
1408 | Returns: | |
1409 | Deferred[FrozenEvent] | |
1399 | origin: The (verified) server name of the requesting server. | |
1400 | room_id: Room to create join event in | |
1401 | user_id: The user to create the join for | |
1410 | 1402 | """ |
1411 | 1403 | if get_domain_from_id(user_id) != origin: |
1412 | 1404 | logger.info( |
1418 | 1410 | |
1419 | 1411 | event_content = {"membership": Membership.JOIN} |
1420 | 1412 | |
1421 | room_version = yield self.store.get_room_version_id(room_id) | |
1413 | room_version = await self.store.get_room_version_id(room_id) | |
1422 | 1414 | |
1423 | 1415 | builder = self.event_builder_factory.new( |
1424 | 1416 | room_version, |
1432 | 1424 | ) |
1433 | 1425 | |
1434 | 1426 | try: |
1435 | event, context = yield self.event_creation_handler.create_new_client_event( | |
1427 | event, context = await self.event_creation_handler.create_new_client_event( | |
1436 | 1428 | builder=builder |
1437 | 1429 | ) |
1438 | 1430 | except AuthError as e: |
1439 | 1431 | logger.warning("Failed to create join to %s because %s", room_id, e) |
1440 | 1432 | raise e |
1441 | 1433 | |
1442 | event_allowed = yield self.third_party_event_rules.check_event_allowed( | |
1434 | event_allowed = await self.third_party_event_rules.check_event_allowed( | |
1443 | 1435 | event, context |
1444 | 1436 | ) |
1445 | 1437 | if not event_allowed: |
1450 | 1442 | |
1451 | 1443 | # The remote hasn't signed it yet, obviously. We'll do the full checks |
1452 | 1444 | # when we get the event back in `on_send_join_request` |
1453 | yield self.auth.check_from_context( | |
1445 | await self.auth.check_from_context( | |
1454 | 1446 | room_version, event, context, do_sig_check=False |
1455 | 1447 | ) |
1456 | 1448 | |
1457 | 1449 | return event |
1458 | 1450 | |
1459 | @defer.inlineCallbacks | |
1460 | @log_function | |
1461 | def on_send_join_request(self, origin, pdu): | |
1451 | async def on_send_join_request(self, origin, pdu): | |
1462 | 1452 | """ We have received a join event for a room. Fully process it and |
1463 | 1453 | respond with the current state and auth chains. |
1464 | 1454 | """ |
1495 | 1485 | # would introduce the danger of backwards-compatibility problems. |
1496 | 1486 | event.internal_metadata.send_on_behalf_of = origin |
1497 | 1487 | |
1498 | context = yield self._handle_new_event(origin, event) | |
1499 | ||
1500 | event_allowed = yield self.third_party_event_rules.check_event_allowed( | |
1488 | context = await self._handle_new_event(origin, event) | |
1489 | ||
1490 | event_allowed = await self.third_party_event_rules.check_event_allowed( | |
1501 | 1491 | event, context |
1502 | 1492 | ) |
1503 | 1493 | if not event_allowed: |
1515 | 1505 | if event.type == EventTypes.Member: |
1516 | 1506 | if event.content["membership"] == Membership.JOIN: |
1517 | 1507 | user = UserID.from_string(event.state_key) |
1518 | yield self.user_joined_room(user, event.room_id) | |
1519 | ||
1520 | prev_state_ids = yield context.get_prev_state_ids() | |
1508 | await self.user_joined_room(user, event.room_id) | |
1509 | ||
1510 | prev_state_ids = await context.get_prev_state_ids() | |
1521 | 1511 | |
1522 | 1512 | state_ids = list(prev_state_ids.values()) |
1523 | auth_chain = yield self.store.get_auth_chain(state_ids) | |
1524 | ||
1525 | state = yield self.store.get_events(list(prev_state_ids.values())) | |
1513 | auth_chain = await self.store.get_auth_chain(state_ids) | |
1514 | ||
1515 | state = await self.store.get_events(list(prev_state_ids.values())) | |
1526 | 1516 | |
1527 | 1517 | return {"state": list(state.values()), "auth_chain": auth_chain} |
1528 | 1518 | |
1529 | @defer.inlineCallbacks | |
1530 | def on_invite_request( | |
1519 | async def on_invite_request( | |
1531 | 1520 | self, origin: str, event: EventBase, room_version: RoomVersion |
1532 | 1521 | ): |
1533 | 1522 | """ We've got an invite event. Process and persist it. Sign it. |
1537 | 1526 | if event.state_key is None: |
1538 | 1527 | raise SynapseError(400, "The invite event did not have a state key") |
1539 | 1528 | |
1540 | is_blocked = yield self.store.is_room_blocked(event.room_id) | |
1529 | is_blocked = await self.store.is_room_blocked(event.room_id) | |
1541 | 1530 | if is_blocked: |
1542 | 1531 | raise SynapseError(403, "This room has been blocked on this server") |
1543 | 1532 | |
1580 | 1569 | ) |
1581 | 1570 | ) |
1582 | 1571 | |
1583 | context = yield self.state_handler.compute_event_context(event) | |
1584 | yield self.persist_events_and_notify([(event, context)]) | |
1572 | context = await self.state_handler.compute_event_context(event) | |
1573 | await self.persist_events_and_notify([(event, context)]) | |
1585 | 1574 | |
1586 | 1575 | return event |
1587 | 1576 | |
1588 | @defer.inlineCallbacks | |
1589 | def do_remotely_reject_invite(self, target_hosts, room_id, user_id, content): | |
1590 | origin, event, room_version = yield self._make_and_verify_event( | |
1577 | async def do_remotely_reject_invite( | |
1578 | self, target_hosts: Iterable[str], room_id: str, user_id: str, content: JsonDict | |
1579 | ) -> EventBase: | |
1580 | origin, event, room_version = await self._make_and_verify_event( | |
1591 | 1581 | target_hosts, room_id, user_id, "leave", content=content |
1592 | 1582 | ) |
1593 | 1583 | # Mark as outlier as we don't have any state for this event; we're not |
1603 | 1593 | except ValueError: |
1604 | 1594 | pass |
1605 | 1595 | |
1606 | yield self.federation_client.send_leave(target_hosts, event) | |
1607 | ||
1608 | context = yield self.state_handler.compute_event_context(event) | |
1609 | yield self.persist_events_and_notify([(event, context)]) | |
1596 | await self.federation_client.send_leave(target_hosts, event) | |
1597 | ||
1598 | context = await self.state_handler.compute_event_context(event) | |
1599 | await self.persist_events_and_notify([(event, context)]) | |
1610 | 1600 | |
1611 | 1601 | return event |
1612 | 1602 | |
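The mechanical pattern applied throughout this hunk — drop `@defer.inlineCallbacks`, turn `def` into `async def`, and replace each `yield` on a Deferred with `await` — can be sketched with asyncio stand-ins (the `FakeStore`/`fetch_user` names are illustrative, not Synapse's):

```python
import asyncio

# Before (Twisted style, as on the removed lines above):
#
#   @defer.inlineCallbacks
#   def do_remotely_reject_invite(self, target_hosts, room_id, user_id, content):
#       origin, event, room_version = yield self._make_and_verify_event(...)
#
# After: a native coroutine; each yielded Deferred becomes an await.
async def fetch_user(store, user_id: str) -> dict:
    # `store.get_user` stands in for an awaitable storage call.
    record = await store.get_user(user_id)
    return {"user_id": user_id, "record": record}

class FakeStore:
    async def get_user(self, user_id: str) -> str:
        return "row-for-" + user_id

print(asyncio.run(fetch_user(FakeStore(), "@alice:example.org")))
```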
1613 | @defer.inlineCallbacks | |
1614 | def _make_and_verify_event( | |
1615 | self, target_hosts, room_id, user_id, membership, content={}, params=None | |
1616 | ): | |
1603 | async def _make_and_verify_event( | |
1604 | self, | |
1605 | target_hosts: Iterable[str], | |
1606 | room_id: str, | |
1607 | user_id: str, | |
1608 | membership: str, | |
1609 | content: JsonDict = {}, | |
1610 | params: Optional[Dict[str, str]] = None, | |
1611 | ) -> Tuple[str, EventBase, RoomVersion]: | |
1617 | 1612 | ( |
1618 | 1613 | origin, |
1619 | 1614 | event, |
1620 | 1615 | room_version, |
1621 | ) = yield self.federation_client.make_membership_event( | |
1616 | ) = await self.federation_client.make_membership_event( | |
1622 | 1617 | target_hosts, room_id, user_id, membership, content, params=params |
1623 | 1618 | ) |
1624 | 1619 | |
1632 | 1627 | assert event.room_id == room_id |
1633 | 1628 | return origin, event, room_version |
1634 | 1629 | |
1635 | @defer.inlineCallbacks | |
1636 | @log_function | |
1637 | def on_make_leave_request(self, origin, room_id, user_id): | |
1630 | async def on_make_leave_request( | |
1631 | self, origin: str, room_id: str, user_id: str | |
1632 | ) -> EventBase: | |
1638 | 1633 | """ We've received a /make_leave/ request, so we create a partial |
1639 | 1634 | leave event for the room and return that. We do *not* persist or |
1640 | 1635 | process it until the other server has signed it and sent it back. |
1641 | 1636 | |
1642 | 1637 | Args: |
1643 | origin (str): The (verified) server name of the requesting server. | |
1644 | room_id (str): Room to create leave event in | |
1645 | user_id (str): The user to create the leave for | |
1646 | ||
1647 | Returns: | |
1648 | Deferred[FrozenEvent] | |
1638 | origin: The (verified) server name of the requesting server. | |
1639 | room_id: Room to create leave event in | |
1640 | user_id: The user to create the leave for | |
1649 | 1641 | """ |
1650 | 1642 | if get_domain_from_id(user_id) != origin: |
1651 | 1643 | logger.info( |
1655 | 1647 | ) |
1656 | 1648 | raise SynapseError(403, "User not from origin", Codes.FORBIDDEN) |
1657 | 1649 | |
1658 | room_version = yield self.store.get_room_version_id(room_id) | |
1650 | room_version = await self.store.get_room_version_id(room_id) | |
1659 | 1651 | builder = self.event_builder_factory.new( |
1660 | 1652 | room_version, |
1661 | 1653 | { |
1667 | 1659 | }, |
1668 | 1660 | ) |
1669 | 1661 | |
1670 | event, context = yield self.event_creation_handler.create_new_client_event( | |
1662 | event, context = await self.event_creation_handler.create_new_client_event( | |
1671 | 1663 | builder=builder |
1672 | 1664 | ) |
1673 | 1665 | |
1674 | event_allowed = yield self.third_party_event_rules.check_event_allowed( | |
1666 | event_allowed = await self.third_party_event_rules.check_event_allowed( | |
1675 | 1667 | event, context |
1676 | 1668 | ) |
1677 | 1669 | if not event_allowed: |
1683 | 1675 | try: |
1684 | 1676 | # The remote hasn't signed it yet, obviously. We'll do the full checks |
1685 | 1677 | # when we get the event back in `on_send_leave_request` |
1686 | yield self.auth.check_from_context( | |
1678 | await self.auth.check_from_context( | |
1687 | 1679 | room_version, event, context, do_sig_check=False |
1688 | 1680 | ) |
1689 | 1681 | except AuthError as e: |
1692 | 1684 | |
1693 | 1685 | return event |
1694 | 1686 | |
1695 | @defer.inlineCallbacks | |
1696 | @log_function | |
1697 | def on_send_leave_request(self, origin, pdu): | |
1687 | async def on_send_leave_request(self, origin, pdu): | |
1698 | 1688 | """ We have received a leave event for a room. Fully process it.""" |
1699 | 1689 | event = pdu |
1700 | 1690 | |
1714 | 1704 | |
1715 | 1705 | event.internal_metadata.outlier = False |
1716 | 1706 | |
1717 | context = yield self._handle_new_event(origin, event) | |
1718 | ||
1719 | event_allowed = yield self.third_party_event_rules.check_event_allowed( | |
1707 | context = await self._handle_new_event(origin, event) | |
1708 | ||
1709 | event_allowed = await self.third_party_event_rules.check_event_allowed( | |
1720 | 1710 | event, context |
1721 | 1711 | ) |
1722 | 1712 | if not event_allowed: |
1797 | 1787 | if not in_room: |
1798 | 1788 | raise AuthError(403, "Host not in room.") |
1799 | 1789 | |
1790 | # Synapse asks for 100 events per backfill request. Do not allow more. | |
1791 | limit = min(limit, 100) | |
1792 | ||
1800 | 1793 | events = yield self.store.get_backfill_events(room_id, pdu_list, limit) |
1801 | 1794 | |
1802 | 1795 | events = yield filter_events_for_server(self.storage, origin, events) |
1838 | 1831 | def get_min_depth_for_context(self, context): |
1839 | 1832 | return self.store.get_min_depth(context) |
1840 | 1833 | |
1841 | @defer.inlineCallbacks | |
1842 | def _handle_new_event( | |
1834 | async def _handle_new_event( | |
1843 | 1835 | self, origin, event, state=None, auth_events=None, backfilled=False |
1844 | 1836 | ): |
1845 | context = yield self._prep_event( | |
1837 | context = await self._prep_event( | |
1846 | 1838 | origin, event, state=state, auth_events=auth_events, backfilled=backfilled |
1847 | 1839 | ) |
1848 | 1840 | |
1855 | 1847 | and not backfilled |
1856 | 1848 | and not context.rejected |
1857 | 1849 | ): |
1858 | yield self.action_generator.handle_push_actions_for_event( | |
1850 | await self.action_generator.handle_push_actions_for_event( | |
1859 | 1851 | event, context |
1860 | 1852 | ) |
1861 | 1853 | |
1862 | yield self.persist_events_and_notify( | |
1854 | await self.persist_events_and_notify( | |
1863 | 1855 | [(event, context)], backfilled=backfilled |
1864 | 1856 | ) |
1865 | 1857 | success = True |
1871 | 1863 | |
1872 | 1864 | return context |
1873 | 1865 | |
1874 | @defer.inlineCallbacks | |
1875 | def _handle_new_events( | |
1866 | async def _handle_new_events( | |
1876 | 1867 | self, |
1877 | 1868 | origin: str, |
1878 | 1869 | event_infos: Iterable[_NewEventInfo], |
1879 | 1870 | backfilled: bool = False, |
1880 | ): | |
1871 | ) -> None: | |
1881 | 1872 | """Creates the appropriate contexts and persists events. The events |
1882 | 1873 | should not depend on one another, e.g. this should be used to persist |
1883 | 1874 | a bunch of outliers, but not a chunk of individual events that depend |
1886 | 1877 | Notifies about the events where appropriate. |
1887 | 1878 | """ |
1888 | 1879 | |
1889 | @defer.inlineCallbacks | |
1890 | def prep(ev_info: _NewEventInfo): | |
1880 | async def prep(ev_info: _NewEventInfo): | |
1891 | 1881 | event = ev_info.event |
1892 | 1882 | with nested_logging_context(suffix=event.event_id): |
1893 | res = yield self._prep_event( | |
1883 | res = await self._prep_event( | |
1894 | 1884 | origin, |
1895 | 1885 | event, |
1896 | 1886 | state=ev_info.state, |
1899 | 1889 | ) |
1900 | 1890 | return res |
1901 | 1891 | |
1902 | contexts = yield make_deferred_yieldable( | |
1892 | contexts = await make_deferred_yieldable( | |
1903 | 1893 | defer.gatherResults( |
1904 | 1894 | [run_in_background(prep, ev_info) for ev_info in event_infos], |
1905 | 1895 | consumeErrors=True, |
1906 | 1896 | ) |
1907 | 1897 | ) |
1908 | 1898 | |
1909 | yield self.persist_events_and_notify( | |
1899 | await self.persist_events_and_notify( | |
1910 | 1900 | [ |
1911 | 1901 | (ev_info.event, context) |
1912 | 1902 | for ev_info, context in zip(event_infos, contexts) |
1914 | 1904 | backfilled=backfilled, |
1915 | 1905 | ) |
1916 | 1906 | |
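The `make_deferred_yieldable(defer.gatherResults([run_in_background(prep, ...)]))` shape kept above runs one `prep` per event concurrently and collects the contexts in input order. The asyncio analogue of that shape (with a toy `prep`, not Synapse's `_prep_event`) is:

```python
import asyncio

async def prep(ev_id: str) -> str:
    # Stand-in for _prep_event: compute a context for one event.
    await asyncio.sleep(0)
    return "context-for-" + ev_id

async def handle_new_events(event_ids: list) -> list:
    # Like defer.gatherResults over run_in_background calls: run the
    # per-event preps concurrently, results come back in input order.
    return await asyncio.gather(*(prep(e) for e in event_ids))

print(asyncio.run(handle_new_events(["$a", "$b"])))  # ordered results
```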
1917 | @defer.inlineCallbacks | |
1918 | def _persist_auth_tree( | |
1907 | async def _persist_auth_tree( | |
1919 | 1908 | self, |
1920 | 1909 | origin: str, |
1921 | 1910 | auth_events: List[EventBase], |
1922 | 1911 | state: List[EventBase], |
1923 | 1912 | event: EventBase, |
1924 | 1913 | room_version: RoomVersion, |
1925 | ): | |
1914 | ) -> None: | |
1926 | 1915 | """Checks the auth chain is valid (and passes auth checks) for the |
1927 | 1916 | state and event. Then persists the auth chain and state atomically. |
1928 | 1917 | Persists the event separately. Notifies about the persisted events |
1937 | 1926 | event |
1938 | 1927 | room_version: The room version we expect this room to have, and |
1939 | 1928 | will raise if it doesn't match the version in the create event. |
1940 | ||
1941 | Returns: | |
1942 | Deferred | |
1943 | 1929 | """ |
1944 | 1930 | events_to_context = {} |
1945 | 1931 | for e in itertools.chain(auth_events, state): |
1946 | 1932 | e.internal_metadata.outlier = True |
1947 | ctx = yield self.state_handler.compute_event_context(e) | |
1933 | ctx = await self.state_handler.compute_event_context(e) | |
1948 | 1934 | events_to_context[e.event_id] = ctx |
1949 | 1935 | |
1950 | 1936 | event_map = { |
1976 | 1962 | missing_auth_events.add(e_id) |
1977 | 1963 | |
1978 | 1964 | for e_id in missing_auth_events: |
1979 | m_ev = yield self.federation_client.get_pdu( | |
1980 | [origin], | |
1981 | e_id, | |
1982 | room_version=room_version.identifier, | |
1983 | outlier=True, | |
1984 | timeout=10000, | |
1965 | m_ev = await self.federation_client.get_pdu( | |
1966 | [origin], e_id, room_version=room_version, outlier=True, timeout=10000, | |
1985 | 1967 | ) |
1986 | 1968 | if m_ev and m_ev.event_id == e_id: |
1987 | 1969 | event_map[e_id] = m_ev |
2012 | 1994 | raise |
2013 | 1995 | events_to_context[e.event_id].rejected = RejectedReason.AUTH_ERROR |
2014 | 1996 | |
2015 | yield self.persist_events_and_notify( | |
1997 | await self.persist_events_and_notify( | |
2016 | 1998 | [ |
2017 | 1999 | (e, events_to_context[e.event_id]) |
2018 | 2000 | for e in itertools.chain(auth_events, state) |
2019 | 2001 | ] |
2020 | 2002 | ) |
2021 | 2003 | |
2022 | new_event_context = yield self.state_handler.compute_event_context( | |
2004 | new_event_context = await self.state_handler.compute_event_context( | |
2023 | 2005 | event, old_state=state |
2024 | 2006 | ) |
2025 | 2007 | |
2026 | yield self.persist_events_and_notify([(event, new_event_context)]) | |
2027 | ||
2028 | @defer.inlineCallbacks | |
2029 | def _prep_event( | |
2008 | await self.persist_events_and_notify([(event, new_event_context)]) | |
2009 | ||
2010 | async def _prep_event( | |
2030 | 2011 | self, |
2031 | 2012 | origin: str, |
2032 | 2013 | event: EventBase, |
2033 | 2014 | state: Optional[Iterable[EventBase]], |
2034 | 2015 | auth_events: Optional[StateMap[EventBase]], |
2035 | 2016 | backfilled: bool, |
2036 | ): | |
2037 | """ | |
2038 | ||
2039 | Args: | |
2040 | origin: | |
2041 | event: | |
2042 | state: | |
2043 | auth_events: | |
2044 | backfilled: | |
2045 | ||
2046 | Returns: | |
2047 | Deferred, which resolves to synapse.events.snapshot.EventContext | |
2048 | """ | |
2049 | context = yield self.state_handler.compute_event_context(event, old_state=state) | |
2017 | ) -> EventContext: | |
2018 | context = await self.state_handler.compute_event_context(event, old_state=state) | |
2050 | 2019 | |
2051 | 2020 | if not auth_events: |
2052 | prev_state_ids = yield context.get_prev_state_ids() | |
2053 | auth_events_ids = yield self.auth.compute_auth_events( | |
2021 | prev_state_ids = await context.get_prev_state_ids() | |
2022 | auth_events_ids = await self.auth.compute_auth_events( | |
2054 | 2023 | event, prev_state_ids, for_verification=True |
2055 | 2024 | ) |
2056 | auth_events = yield self.store.get_events(auth_events_ids) | |
2025 | auth_events = await self.store.get_events(auth_events_ids) | |
2057 | 2026 | auth_events = {(e.type, e.state_key): e for e in auth_events.values()} |
2058 | 2027 | |
2059 | 2028 | # This is a hack to fix some old rooms where the initial join event |
2060 | 2029 | # didn't reference the create event in its auth events. |
2061 | 2030 | if event.type == EventTypes.Member and not event.auth_event_ids(): |
2062 | 2031 | if len(event.prev_event_ids()) == 1 and event.depth < 5: |
2063 | c = yield self.store.get_event( | |
2032 | c = await self.store.get_event( | |
2064 | 2033 | event.prev_event_ids()[0], allow_none=True |
2065 | 2034 | ) |
2066 | 2035 | if c and c.type == EventTypes.Create: |
2067 | 2036 | auth_events[(c.type, c.state_key)] = c |
2068 | 2037 | |
2069 | context = yield self.do_auth(origin, event, context, auth_events=auth_events) | |
2038 | context = await self.do_auth(origin, event, context, auth_events=auth_events) | |
2070 | 2039 | |
2071 | 2040 | if not context.rejected: |
2072 | yield self._check_for_soft_fail(event, state, backfilled) | |
2041 | await self._check_for_soft_fail(event, state, backfilled) | |
2073 | 2042 | |
2074 | 2043 | if event.type == EventTypes.GuestAccess and not context.rejected: |
2075 | yield self.maybe_kick_guest_users(event) | |
2044 | await self.maybe_kick_guest_users(event) | |
2076 | 2045 | |
2077 | 2046 | return context |
2078 | 2047 | |
2079 | @defer.inlineCallbacks | |
2080 | def _check_for_soft_fail( | |
2048 | async def _check_for_soft_fail( | |
2081 | 2049 | self, event: EventBase, state: Optional[Iterable[EventBase]], backfilled: bool |
2082 | ): | |
2083 | """Checks if we should soft fail the event, if so marks the event as | |
2050 | ) -> None: | |
2051 | """Checks if we should soft fail the event; if so, marks the event as | |
2084 | 2052 | such. |
2085 | 2053 | |
2086 | 2054 | Args: |
2087 | 2055 | event |
2088 | 2056 | state: The state at the event if we don't have all the event's prev events |
2089 | 2057 | backfilled: Whether the event is from backfill |
2090 | ||
2091 | Returns: | |
2092 | Deferred | |
2093 | 2058 | """ |
2094 | 2059 | # For new (non-backfilled and non-outlier) events we check if the event |
2095 | 2060 | # passes auth based on the current state. If it doesn't then we |
2096 | 2061 | # "soft-fail" the event. |
2097 | 2062 | do_soft_fail_check = not backfilled and not event.internal_metadata.is_outlier() |
2098 | 2063 | if do_soft_fail_check: |
2099 | extrem_ids = yield self.store.get_latest_event_ids_in_room(event.room_id) | |
2064 | extrem_ids = await self.store.get_latest_event_ids_in_room(event.room_id) | |
2100 | 2065 | |
2101 | 2066 | extrem_ids = set(extrem_ids) |
2102 | 2067 | prev_event_ids = set(event.prev_event_ids()) |
2107 | 2072 | do_soft_fail_check = False |
2108 | 2073 | |
2109 | 2074 | if do_soft_fail_check: |
2110 | room_version = yield self.store.get_room_version_id(event.room_id) | |
2075 | room_version = await self.store.get_room_version_id(event.room_id) | |
2111 | 2076 | room_version_obj = KNOWN_ROOM_VERSIONS[room_version] |
2112 | 2077 | |
2113 | 2078 | # Calculate the "current state". |
2124 | 2089 | # given state at the event. This should correctly handle cases |
2125 | 2090 | # like bans, especially with state res v2. |
2126 | 2091 | |
2127 | state_sets = yield self.state_store.get_state_groups( | |
2092 | state_sets = await self.state_store.get_state_groups( | |
2128 | 2093 | event.room_id, extrem_ids |
2129 | 2094 | ) |
2130 | 2095 | state_sets = list(state_sets.values()) |
2131 | 2096 | state_sets.append(state) |
2132 | current_state_ids = yield self.state_handler.resolve_events( | |
2097 | current_state_ids = await self.state_handler.resolve_events( | |
2133 | 2098 | room_version, state_sets, event |
2134 | 2099 | ) |
2135 | 2100 | current_state_ids = { |
2136 | 2101 | k: e.event_id for k, e in iteritems(current_state_ids) |
2137 | 2102 | } |
2138 | 2103 | else: |
2139 | current_state_ids = yield self.state_handler.get_current_state_ids( | |
2104 | current_state_ids = await self.state_handler.get_current_state_ids( | |
2140 | 2105 | event.room_id, latest_event_ids=extrem_ids |
2141 | 2106 | ) |
2142 | 2107 | |
2152 | 2117 | e for k, e in iteritems(current_state_ids) if k in auth_types |
2153 | 2118 | ] |
2154 | 2119 | |
2155 | current_auth_events = yield self.store.get_events(current_state_ids) | |
2120 | current_auth_events = await self.store.get_events(current_state_ids) | |
2156 | 2121 | current_auth_events = { |
2157 | 2122 | (e.type, e.state_key): e for e in current_auth_events.values() |
2158 | 2123 | } |
2165 | 2130 | logger.warning("Soft-failing %r because %s", event, e) |
2166 | 2131 | event.internal_metadata.soft_failed = True |
2167 | 2132 | |
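The gate in `_check_for_soft_fail` above only re-checks auth against current state when the event's `prev_events` differ from the room's forward extremities (and never for backfilled events or outliers). That decision, isolated from the surrounding handler (function and argument names here are illustrative):

```python
def needs_soft_fail_check(
    backfilled: bool, is_outlier: bool, extrem_ids: set, prev_event_ids: set
) -> bool:
    # Backfilled events and outliers are never soft-fail checked.
    if backfilled or is_outlier:
        return False
    # If prev_events are exactly the current forward extremities, the
    # state the event was authed against *is* the current state, so
    # re-checking against current state would be redundant.
    return set(extrem_ids) != set(prev_event_ids)

assert needs_soft_fail_check(False, False, {"$x"}, {"$y"})
assert not needs_soft_fail_check(False, False, {"$x"}, {"$x"})
assert not needs_soft_fail_check(True, False, {"$x"}, {"$y"})
```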
2168 | @defer.inlineCallbacks | |
2169 | def on_query_auth( | |
2133 | async def on_query_auth( | |
2170 | 2134 | self, origin, event_id, room_id, remote_auth_chain, rejects, missing |
2171 | 2135 | ): |
2172 | in_room = yield self.auth.check_host_in_room(room_id, origin) | |
2136 | in_room = await self.auth.check_host_in_room(room_id, origin) | |
2173 | 2137 | if not in_room: |
2174 | 2138 | raise AuthError(403, "Host not in room.") |
2175 | 2139 | |
2176 | event = yield self.store.get_event( | |
2140 | event = await self.store.get_event( | |
2177 | 2141 | event_id, allow_none=False, check_room_id=room_id |
2178 | 2142 | ) |
2179 | 2143 | |
2181 | 2145 | # don't want to fall into the trap of `missing` being wrong. |
2182 | 2146 | for e in remote_auth_chain: |
2183 | 2147 | try: |
2184 | yield self._handle_new_event(origin, e) | |
2148 | await self._handle_new_event(origin, e) | |
2185 | 2149 | except AuthError: |
2186 | 2150 | pass |
2187 | 2151 | |
2188 | 2152 | # Now get the current auth_chain for the event. |
2189 | local_auth_chain = yield self.store.get_auth_chain( | |
2153 | local_auth_chain = await self.store.get_auth_chain( | |
2190 | 2154 | [auth_id for auth_id in event.auth_event_ids()], include_given=True |
2191 | 2155 | ) |
2192 | 2156 | |
2193 | 2157 | # TODO: Check if we would now reject event_id. If so we need to tell |
2194 | 2158 | # everyone. |
2195 | 2159 | |
2196 | ret = yield self.construct_auth_difference(local_auth_chain, remote_auth_chain) | |
2160 | ret = await self.construct_auth_difference(local_auth_chain, remote_auth_chain) | |
2197 | 2161 | |
2198 | 2162 | logger.debug("on_query_auth returning: %s", ret) |
2199 | 2163 | |
2200 | 2164 | return ret |
2201 | 2165 | |
2202 | @defer.inlineCallbacks | |
2203 | def on_get_missing_events( | |
2166 | async def on_get_missing_events( | |
2204 | 2167 | self, origin, room_id, earliest_events, latest_events, limit |
2205 | 2168 | ): |
2206 | in_room = yield self.auth.check_host_in_room(room_id, origin) | |
2169 | in_room = await self.auth.check_host_in_room(room_id, origin) | |
2207 | 2170 | if not in_room: |
2208 | 2171 | raise AuthError(403, "Host not in room.") |
2209 | 2172 | |
2173 | # Only allow up to 20 events to be retrieved per request. | |
2210 | 2174 | limit = min(limit, 20) |
2211 | 2175 | |
2212 | missing_events = yield self.store.get_missing_events( | |
2176 | missing_events = await self.store.get_missing_events( | |
2213 | 2177 | room_id=room_id, |
2214 | 2178 | earliest_events=earliest_events, |
2215 | 2179 | latest_events=latest_events, |
2216 | 2180 | limit=limit, |
2217 | 2181 | ) |
2218 | 2182 | |
2219 | missing_events = yield filter_events_for_server( | |
2183 | missing_events = await filter_events_for_server( | |
2220 | 2184 | self.storage, origin, missing_events |
2221 | 2185 | ) |
2222 | 2186 | |
2223 | 2187 | return missing_events |
2224 | 2188 | |
2225 | @defer.inlineCallbacks | |
2226 | @log_function | |
2227 | def do_auth(self, origin, event, context, auth_events): | |
2189 | async def do_auth( | |
2190 | self, | |
2191 | origin: str, | |
2192 | event: EventBase, | |
2193 | context: EventContext, | |
2194 | auth_events: StateMap[EventBase], | |
2195 | ) -> EventContext: | |
2228 | 2196 | """ |
2229 | 2197 | |
2230 | 2198 | Args: |
2231 | origin (str): | |
2232 | event (synapse.events.EventBase): | |
2233 | context (synapse.events.snapshot.EventContext): | |
2234 | auth_events (dict[(str, str)->synapse.events.EventBase]): | |
2199 | origin: | |
2200 | event: | |
2201 | context: | |
2202 | auth_events: | |
2235 | 2203 | Map from (event_type, state_key) to event |
2236 | 2204 | |
2237 | 2205 | Normally, our calculated auth_events based on the state of the room |
2241 | 2209 | |
2242 | 2210 | Also NB that this function adds entries to it. |
2243 | 2211 | Returns: |
2244 | defer.Deferred[EventContext]: updated context object | |
2245 | """ | |
2246 | room_version = yield self.store.get_room_version_id(event.room_id) | |
2212 | updated context object | |
2213 | """ | |
2214 | room_version = await self.store.get_room_version_id(event.room_id) | |
2247 | 2215 | room_version_obj = KNOWN_ROOM_VERSIONS[room_version] |
2248 | 2216 | |
2249 | 2217 | try: |
2250 | context = yield self._update_auth_events_and_context_for_auth( | |
2218 | context = await self._update_auth_events_and_context_for_auth( | |
2251 | 2219 | origin, event, context, auth_events |
2252 | 2220 | ) |
2253 | 2221 | except Exception: |
2269 | 2237 | |
2270 | 2238 | return context |
2271 | 2239 | |
2272 | @defer.inlineCallbacks | |
2273 | def _update_auth_events_and_context_for_auth( | |
2274 | self, origin, event, context, auth_events | |
2275 | ): | |
2240 | async def _update_auth_events_and_context_for_auth( | |
2241 | self, | |
2242 | origin: str, | |
2243 | event: EventBase, | |
2244 | context: EventContext, | |
2245 | auth_events: StateMap[EventBase], | |
2246 | ) -> EventContext: | |
2276 | 2247 | """Helper for do_auth. See there for docs. |
2277 | 2248 | |
2278 | 2249 | Checks whether a given event has the expected auth events. If it |
2280 | 2251 | we can come to a consensus (e.g. if one server missed some valid |
2281 | 2252 | state). |
2282 | 2253 | |
2283 | This attempts to resovle any potential divergence of state between | |
2254 | This attempts to resolve any potential divergence of state between | |
2284 | 2255 | servers, but is not essential and so failures should not block further |
2285 | 2256 | processing of the event. |
2286 | 2257 | |
2287 | 2258 | Args: |
2288 | origin (str): | |
2289 | event (synapse.events.EventBase): | |
2290 | context (synapse.events.snapshot.EventContext): | |
2291 | ||
2292 | auth_events (dict[(str, str)->synapse.events.EventBase]): | |
2259 | origin: | |
2260 | event: | |
2261 | context: | |
2262 | ||
2263 | auth_events: | |
2293 | 2264 | Map from (event_type, state_key) to event |
2294 | 2265 | |
2295 | 2266 | Normally, our calculated auth_events based on the state of the room |
2300 | 2271 | Also NB that this function adds entries to it. |
2301 | 2272 | |
2302 | 2273 | Returns: |
2303 | defer.Deferred[EventContext]: updated context | |
2274 | updated context | |
2304 | 2275 | """ |
2305 | 2276 | event_auth_events = set(event.auth_event_ids()) |
2306 | 2277 | |
2314 | 2285 | # |
2315 | 2286 | # we start by checking if they are in the store, and then try calling /event_auth/. |
2316 | 2287 | if missing_auth: |
2317 | have_events = yield self.store.have_seen_events(missing_auth) | |
2288 | have_events = await self.store.have_seen_events(missing_auth) | |
2318 | 2289 | logger.debug("Events %s are in the store", have_events) |
2319 | 2290 | missing_auth.difference_update(have_events) |
2320 | 2291 | |
2323 | 2294 | logger.info("auth_events contains unknown events: %s", missing_auth) |
2324 | 2295 | try: |
2325 | 2296 | try: |
2326 | remote_auth_chain = yield self.federation_client.get_event_auth( | |
2297 | remote_auth_chain = await self.federation_client.get_event_auth( | |
2327 | 2298 | origin, event.room_id, event.event_id |
2328 | 2299 | ) |
2329 | 2300 | except RequestSendFailed as e: |
2332 | 2303 | logger.info("Failed to get event auth from remote: %s", e) |
2333 | 2304 | return context |
2334 | 2305 | |
2335 | seen_remotes = yield self.store.have_seen_events( | |
2306 | seen_remotes = await self.store.have_seen_events( | |
2336 | 2307 | [e.event_id for e in remote_auth_chain] |
2337 | 2308 | ) |
2338 | 2309 | |
2355 | 2326 | logger.debug( |
2356 | 2327 | "do_auth %s missing_auth: %s", event.event_id, e.event_id |
2357 | 2328 | ) |
2358 | yield self._handle_new_event(origin, e, auth_events=auth) | |
2329 | await self._handle_new_event(origin, e, auth_events=auth) | |
2359 | 2330 | |
2360 | 2331 | if e.event_id in event_auth_events: |
2361 | 2332 | auth_events[(e.type, e.state_key)] = e |
2389 | 2360 | |
2390 | 2361 | # XXX: currently this checks for redactions but I'm not convinced that is |
2391 | 2362 | # necessary? |
2392 | different_events = yield self.store.get_events_as_list(different_auth) | |
2363 | different_events = await self.store.get_events_as_list(different_auth) | |
2393 | 2364 | |
2394 | 2365 | for d in different_events: |
2395 | 2366 | if d.room_id != event.room_id: |
2415 | 2386 | remote_auth_events.update({(d.type, d.state_key): d for d in different_events}) |
2416 | 2387 | remote_state = remote_auth_events.values() |
2417 | 2388 | |
2418 | room_version = yield self.store.get_room_version_id(event.room_id) | |
2419 | new_state = yield self.state_handler.resolve_events( | |
2389 | room_version = await self.store.get_room_version_id(event.room_id) | |
2390 | new_state = await self.state_handler.resolve_events( | |
2420 | 2391 | room_version, (local_state, remote_state), event |
2421 | 2392 | ) |
2422 | 2393 | |
2431 | 2402 | |
2432 | 2403 | auth_events.update(new_state) |
2433 | 2404 | |
2434 | context = yield self._update_context_for_auth_events( | |
2405 | context = await self._update_context_for_auth_events( | |
2435 | 2406 | event, context, auth_events |
2436 | 2407 | ) |
2437 | 2408 | |
2438 | 2409 | return context |
2439 | 2410 | |
2440 | @defer.inlineCallbacks | |
2441 | def _update_context_for_auth_events(self, event, context, auth_events): | |
2411 | async def _update_context_for_auth_events( | |
2412 | self, event: EventBase, context: EventContext, auth_events: StateMap[EventBase] | |
2413 | ) -> EventContext: | |
2442 | 2414 | """Update the state_ids in an event context after auth event resolution, |
2443 | 2415 | storing the changes as a new state group. |
2444 | 2416 | |
2445 | 2417 | Args: |
2446 | event (Event): The event we're handling the context for | |
2447 | ||
2448 | context (synapse.events.snapshot.EventContext): initial event context | |
2449 | ||
2450 | auth_events (dict[(str, str)->EventBase]): Events to update in the event | |
2451 | context. | |
2418 | event: The event we're handling the context for | |
2419 | ||
2420 | context: initial event context | |
2421 | ||
2422 | auth_events: Events to update in the event context. | |
2452 | 2423 | |
2453 | 2424 | Returns: |
2454 | Deferred[EventContext]: new event context | |
2425 | new event context | |
2455 | 2426 | """ |
2456 | 2427 | # exclude the state key of the new event from the current_state in the context. |
2457 | 2428 | if event.is_state(): |
2462 | 2433 | k: a.event_id for k, a in iteritems(auth_events) if k != event_key |
2463 | 2434 | } |
2464 | 2435 | |
2465 | current_state_ids = yield context.get_current_state_ids() | |
2436 | current_state_ids = await context.get_current_state_ids() | |
2466 | 2437 | current_state_ids = dict(current_state_ids) |
2467 | 2438 | |
2468 | 2439 | current_state_ids.update(state_updates) |
2469 | 2440 | |
2470 | prev_state_ids = yield context.get_prev_state_ids() | |
2441 | prev_state_ids = await context.get_prev_state_ids() | |
2471 | 2442 | prev_state_ids = dict(prev_state_ids) |
2472 | 2443 | |
2473 | 2444 | prev_state_ids.update({k: a.event_id for k, a in iteritems(auth_events)}) |
2474 | 2445 | |
2475 | 2446 | # create a new state group as a delta from the existing one. |
2476 | 2447 | prev_group = context.state_group |
2477 | state_group = yield self.state_store.store_state_group( | |
2448 | state_group = await self.state_store.store_state_group( | |
2478 | 2449 | event.event_id, |
2479 | 2450 | event.room_id, |
2480 | 2451 | prev_group=prev_group, |
2491 | 2462 | delta_ids=state_updates, |
2492 | 2463 | ) |
2493 | 2464 | |
2494 | @defer.inlineCallbacks | |
2495 | def construct_auth_difference(self, local_auth, remote_auth): | |
2465 | async def construct_auth_difference( | |
2466 | self, local_auth: Iterable[EventBase], remote_auth: Iterable[EventBase] | |
2467 | ) -> Dict: | |
2496 | 2468 | """ Given a local and remote auth chain, find the differences. This |
2497 | 2469 | assumes that we have already processed all events in remote_auth |
2498 | 2470 | |
2601 | 2573 | reason_map = {} |
2602 | 2574 | |
2603 | 2575 | for e in base_remote_rejected: |
2604 | reason = yield self.store.get_rejection_reason(e.event_id) | |
2576 | reason = await self.store.get_rejection_reason(e.event_id) | |
2605 | 2577 | if reason is None: |
2606 | 2578 | # TODO: e is not in the current state, so we should |
2607 | 2579 | # construct some proof of that. |
2686 | 2658 | destinations, room_id, event_dict |
2687 | 2659 | ) |
2688 | 2660 | |
2689 | @defer.inlineCallbacks | |
2690 | @log_function | |
2691 | def on_exchange_third_party_invite_request(self, room_id, event_dict): | |
2661 | async def on_exchange_third_party_invite_request( | |
2662 | self, room_id: str, event_dict: JsonDict | |
2663 | ) -> None: | |
2692 | 2664 | """Handle an exchange_third_party_invite request from a remote server |
2693 | 2665 | |
2694 | 2666 | The remote server will call this when it wants to turn a 3pid invite |
2695 | 2667 | into a normal m.room.member invite. |
2696 | 2668 | |
2697 | 2669 | Args: |
2698 | room_id (str): The ID of the room. | |
2670 | room_id: The ID of the room. | |
2699 | 2671 | |
2700 | 2672 | event_dict (dict[str, Any]): Dictionary containing the event body. |
2701 | 2673 | |
2702 | Returns: | |
2703 | Deferred: resolves (to None) | |
2704 | """ | |
2705 | room_version = yield self.store.get_room_version_id(room_id) | |
2674 | """ | |
2675 | room_version = await self.store.get_room_version_id(room_id) | |
2706 | 2676 | |
2707 | 2677 | # NB: event_dict has a particular specced format we might need to fudge |
2708 | 2678 | # if we change event formats too much. |
2709 | 2679 | builder = self.event_builder_factory.new(room_version, event_dict) |
2710 | 2680 | |
2711 | event, context = yield self.event_creation_handler.create_new_client_event( | |
2681 | event, context = await self.event_creation_handler.create_new_client_event( | |
2712 | 2682 | builder=builder |
2713 | 2683 | ) |
2714 | 2684 | |
2715 | event_allowed = yield self.third_party_event_rules.check_event_allowed( | |
2685 | event_allowed = await self.third_party_event_rules.check_event_allowed( | |
2716 | 2686 | event, context |
2717 | 2687 | ) |
2718 | 2688 | if not event_allowed: |
2723 | 2693 | 403, "This event is not allowed in this context", Codes.FORBIDDEN |
2724 | 2694 | ) |
2725 | 2695 | |
2726 | event, context = yield self.add_display_name_to_third_party_invite( | |
2696 | event, context = await self.add_display_name_to_third_party_invite( | |
2727 | 2697 | room_version, event_dict, event, context |
2728 | 2698 | ) |
2729 | 2699 | |
2730 | 2700 | try: |
2731 | yield self.auth.check_from_context(room_version, event, context) | |
2701 | await self.auth.check_from_context(room_version, event, context) | |
2732 | 2702 | except AuthError as e: |
2733 | 2703 | logger.warning("Denying third party invite %r because %s", event, e) |
2734 | 2704 | raise e |
2735 | yield self._check_signature(event, context) | |
2705 | await self._check_signature(event, context) | |
2736 | 2706 | |
2737 | 2707 | # We need to tell the transaction queue to send this out, even |
2738 | 2708 | # though the sender isn't a local user. |
2740 | 2710 | |
2741 | 2711 | # We retrieve the room member handler here as to not cause a cyclic dependency |
2742 | 2712 | member_handler = self.hs.get_room_member_handler() |
2743 | yield member_handler.send_membership_event(None, event, context) | |
2713 | await member_handler.send_membership_event(None, event, context) | |
2744 | 2714 | |
2745 | 2715 | @defer.inlineCallbacks |
2746 | 2716 | def add_display_name_to_third_party_invite( |
2888 | 2858 | if "valid" not in response or not response["valid"]: |
2889 | 2859 | raise AuthError(403, "Third party certificate was invalid") |
2890 | 2860 | |
2891 | @defer.inlineCallbacks | |
2892 | def persist_events_and_notify(self, event_and_contexts, backfilled=False): | |
2861 | async def persist_events_and_notify( | |
2862 | self, | |
2863 | event_and_contexts: Sequence[Tuple[EventBase, EventContext]], | |
2864 | backfilled: bool = False, | |
2865 | ) -> None: | |
2893 | 2866 | """Persists events and tells the notifier/pushers about them, if |
2894 | 2867 | necessary. |
2895 | 2868 | |
2896 | 2869 | Args: |
2897 | event_and_contexts(list[tuple[FrozenEvent, EventContext]]) | |
2898 | backfilled (bool): Whether these events are a result of | |
2870 | event_and_contexts: | |
2871 | backfilled: Whether these events are a result of | |
2899 | 2872 | backfilling or not |
2900 | ||
2901 | Returns: | |
2902 | Deferred | |
2903 | 2873 | """ |
2904 | 2874 | if self.config.worker_app: |
2905 | yield self._send_events_to_master( | |
2875 | await self._send_events_to_master( | |
2906 | 2876 | store=self.store, |
2907 | 2877 | event_and_contexts=event_and_contexts, |
2908 | 2878 | backfilled=backfilled, |
2909 | 2879 | ) |
2910 | 2880 | else: |
2911 | max_stream_id = yield self.storage.persistence.persist_events( | |
2881 | max_stream_id = await self.storage.persistence.persist_events( | |
2912 | 2882 | event_and_contexts, backfilled=backfilled |
2913 | 2883 | ) |
2914 | 2884 | |
2919 | 2889 | |
2920 | 2890 | if not backfilled: # Never notify for backfilled events |
2921 | 2891 | for event, _ in event_and_contexts: |
2922 | yield self._notify_persisted_event(event, max_stream_id) | |
2923 | ||
2924 | def _notify_persisted_event(self, event, max_stream_id): | |
2892 | await self._notify_persisted_event(event, max_stream_id) | |
2893 | ||
2894 | async def _notify_persisted_event( | |
2895 | self, event: EventBase, max_stream_id: int | |
2896 | ) -> None: | |
2925 | 2897 | """Checks to see if notifier/pushers should be notified about the |
2926 | 2898 | event or not. |
2927 | 2899 | |
2928 | 2900 | Args: |
2929 | event (FrozenEvent) | |
2930 | max_stream_id (int): The max_stream_id returned by persist_events | |
2901 | event: | |
2902 | max_stream_id: The max_stream_id returned by persist_events | |
2931 | 2903 | """ |
2932 | 2904 | |
2933 | 2905 | extra_users = [] |
2951 | 2923 | event, event_stream_id, max_stream_id, extra_users=extra_users |
2952 | 2924 | ) |
2953 | 2925 | |
2954 | return self.pusher_pool.on_new_notifications(event_stream_id, max_stream_id) | |
2955 | ||
2956 | def _clean_room_for_join(self, room_id): | |
2926 | await self.pusher_pool.on_new_notifications(event_stream_id, max_stream_id) | |
2927 | ||
2928 | async def _clean_room_for_join(self, room_id: str) -> None: | |
2957 | 2929 | """Called to clean up any data in DB for a given room, ready for the |
2958 | 2930 | server to join the room. |
2959 | 2931 | |
2960 | 2932 | Args: |
2961 | room_id (str) | |
2933 | room_id | |
2962 | 2934 | """ |
2963 | 2935 | if self.config.worker_app: |
2964 | return self._clean_room_for_join_client(room_id) | |
2936 | await self._clean_room_for_join_client(room_id) | |
2965 | 2937 | else: |
2966 | return self.store.clean_room_for_join(room_id) | |
2967 | ||
2968 | def user_joined_room(self, user, room_id): | |
2938 | await self.store.clean_room_for_join(room_id) | |
2939 | ||
2940 | async def user_joined_room(self, user: UserID, room_id: str) -> None: | |
2969 | 2941 | """Called when a new user has joined the room |
2970 | 2942 | """ |
2971 | 2943 | if self.config.worker_app: |
2972 | return self._notify_user_membership_change( | |
2944 | await self._notify_user_membership_change( | |
2973 | 2945 | room_id=room_id, user_id=user.to_string(), change="joined" |
2974 | 2946 | ) |
2975 | 2947 | else: |
2976 | return defer.succeed(user_joined_room(self.distributor, user, room_id)) | |
2948 | user_joined_room(self.distributor, user, room_id) | |
2977 | 2949 | |
2978 | 2950 | @defer.inlineCallbacks |
2979 | 2951 | def get_room_complexity(self, remote_room_hosts, room_id): |
62 | 62 | return f |
63 | 63 | |
64 | 64 | |
65 | class GroupsLocalHandler(object): | |
65 | class GroupsLocalWorkerHandler(object): | |
66 | 66 | def __init__(self, hs): |
67 | 67 | self.hs = hs |
68 | 68 | self.store = hs.get_datastore() |
80 | 80 | |
81 | 81 | self.profile_handler = hs.get_profile_handler() |
82 | 82 | |
83 | # Ensure attestations get renewed | |
84 | hs.get_groups_attestation_renewer() | |
85 | ||
86 | 83 | # The following functions merely route the query to the local groups server |
87 | 84 | # or federation, depending on whether the group is local or remote
88 | 85 | |
89 | 86 | get_group_profile = _create_rerouter("get_group_profile") |
90 | update_group_profile = _create_rerouter("update_group_profile") | |
91 | 87 | get_rooms_in_group = _create_rerouter("get_rooms_in_group") |
92 | ||
93 | 88 | get_invited_users_in_group = _create_rerouter("get_invited_users_in_group") |
94 | ||
95 | add_room_to_group = _create_rerouter("add_room_to_group") | |
96 | update_room_in_group = _create_rerouter("update_room_in_group") | |
97 | remove_room_from_group = _create_rerouter("remove_room_from_group") | |
98 | ||
99 | update_group_summary_room = _create_rerouter("update_group_summary_room") | |
100 | delete_group_summary_room = _create_rerouter("delete_group_summary_room") | |
101 | ||
102 | update_group_category = _create_rerouter("update_group_category") | |
103 | delete_group_category = _create_rerouter("delete_group_category") | |
104 | 89 | get_group_category = _create_rerouter("get_group_category") |
105 | 90 | get_group_categories = _create_rerouter("get_group_categories") |
106 | ||
107 | update_group_summary_user = _create_rerouter("update_group_summary_user") | |
108 | delete_group_summary_user = _create_rerouter("delete_group_summary_user") | |
109 | ||
110 | update_group_role = _create_rerouter("update_group_role") | |
111 | delete_group_role = _create_rerouter("delete_group_role") | |
112 | 91 | get_group_role = _create_rerouter("get_group_role") |
113 | 92 | get_group_roles = _create_rerouter("get_group_roles") |
114 | ||
115 | set_group_join_policy = _create_rerouter("set_group_join_policy") | |
116 | 93 | |
117 | 94 | @defer.inlineCallbacks |
118 | 95 | def get_group_summary(self, group_id, requester_user_id): |
169 | 146 | return res |
170 | 147 | |
171 | 148 | @defer.inlineCallbacks |
149 | def get_users_in_group(self, group_id, requester_user_id): | |
150 | """Get users in a group | |
151 | """ | |
152 | if self.is_mine_id(group_id): | |
153 | res = yield self.groups_server_handler.get_users_in_group( | |
154 | group_id, requester_user_id | |
155 | ) | |
156 | return res | |
157 | ||
158 | group_server_name = get_domain_from_id(group_id) | |
159 | ||
160 | try: | |
161 | res = yield self.transport_client.get_users_in_group( | |
162 | get_domain_from_id(group_id), group_id, requester_user_id | |
163 | ) | |
164 | except HttpResponseException as e: | |
165 | raise e.to_synapse_error() | |
166 | except RequestSendFailed: | |
167 | raise SynapseError(502, "Failed to contact group server") | |
168 | ||
169 | chunk = res["chunk"] | |
170 | valid_entries = [] | |
171 | for entry in chunk: | |
172 | g_user_id = entry["user_id"] | |
173 | attestation = entry.pop("attestation", {}) | |
174 | try: | |
175 | if get_domain_from_id(g_user_id) != group_server_name: | |
176 | yield self.attestations.verify_attestation( | |
177 | attestation, | |
178 | group_id=group_id, | |
179 | user_id=g_user_id, | |
180 | server_name=get_domain_from_id(g_user_id), | |
181 | ) | |
182 | valid_entries.append(entry) | |
183 | except Exception as e: | |
184 | logger.info("Failed to verify user is in group: %s", e) | |
185 | ||
186 | res["chunk"] = valid_entries | |
187 | ||
188 | return res | |
189 | ||
190 | @defer.inlineCallbacks | |
191 | def get_joined_groups(self, user_id): | |
192 | group_ids = yield self.store.get_joined_groups(user_id) | |
193 | return {"groups": group_ids} | |
194 | ||
195 | @defer.inlineCallbacks | |
196 | def get_publicised_groups_for_user(self, user_id): | |
197 | if self.hs.is_mine_id(user_id): | |
198 | result = yield self.store.get_publicised_groups_for_user(user_id) | |
199 | ||
200 | # Check AS associated groups for this user - this depends on the | |
201 | # RegExps in the AS registration file (under `users`) | |
202 | for app_service in self.store.get_app_services(): | |
203 | result.extend(app_service.get_groups_for_user(user_id)) | |
204 | ||
205 | return {"groups": result} | |
206 | else: | |
207 | try: | |
208 | bulk_result = yield self.transport_client.bulk_get_publicised_groups( | |
209 | get_domain_from_id(user_id), [user_id] | |
210 | ) | |
211 | except HttpResponseException as e: | |
212 | raise e.to_synapse_error() | |
213 | except RequestSendFailed: | |
214 | raise SynapseError(502, "Failed to contact group server") | |
215 | ||
216 | result = bulk_result.get("users", {}).get(user_id) | |
217 | # TODO: Verify attestations | |
218 | return {"groups": result} | |
219 | ||
220 | @defer.inlineCallbacks | |
221 | def bulk_get_publicised_groups(self, user_ids, proxy=True): | |
222 | destinations = {} | |
223 | local_users = set() | |
224 | ||
225 | for user_id in user_ids: | |
226 | if self.hs.is_mine_id(user_id): | |
227 | local_users.add(user_id) | |
228 | else: | |
229 | destinations.setdefault(get_domain_from_id(user_id), set()).add(user_id) | |
230 | ||
231 | if not proxy and destinations: | |
232 | raise SynapseError(400, "Some user_ids are not local") | |
233 | ||
234 | results = {} | |
235 | failed_results = [] | |
236 | for destination, dest_user_ids in iteritems(destinations): | |
237 | try: | |
238 | r = yield self.transport_client.bulk_get_publicised_groups( | |
239 | destination, list(dest_user_ids) | |
240 | ) | |
241 | results.update(r["users"]) | |
242 | except Exception: | |
243 | failed_results.extend(dest_user_ids) | |
244 | ||
245 | for uid in local_users: | |
246 | results[uid] = yield self.store.get_publicised_groups_for_user(uid) | |
247 | ||
248 | # Check AS associated groups for this user - this depends on the | |
249 | # RegExps in the AS registration file (under `users`) | |
250 | for app_service in self.store.get_app_services(): | |
251 | results[uid].extend(app_service.get_groups_for_user(uid)) | |
252 | ||
253 | return {"users": results} | |
254 | ||
255 | ||
256 | class GroupsLocalHandler(GroupsLocalWorkerHandler): | |
257 | def __init__(self, hs): | |
258 | super(GroupsLocalHandler, self).__init__(hs) | |
259 | ||
260 | # Ensure attestations get renewed | |
261 | hs.get_groups_attestation_renewer() | |
262 | ||
263 | # The following functions merely route the query to the local groups server | |
264 | # or federation, depending on whether the group is local or remote | 
265 | ||
266 | update_group_profile = _create_rerouter("update_group_profile") | |
267 | ||
268 | add_room_to_group = _create_rerouter("add_room_to_group") | |
269 | update_room_in_group = _create_rerouter("update_room_in_group") | |
270 | remove_room_from_group = _create_rerouter("remove_room_from_group") | |
271 | ||
272 | update_group_summary_room = _create_rerouter("update_group_summary_room") | |
273 | delete_group_summary_room = _create_rerouter("delete_group_summary_room") | |
274 | ||
275 | update_group_category = _create_rerouter("update_group_category") | |
276 | delete_group_category = _create_rerouter("delete_group_category") | |
277 | ||
278 | update_group_summary_user = _create_rerouter("update_group_summary_user") | |
279 | delete_group_summary_user = _create_rerouter("delete_group_summary_user") | |
280 | ||
281 | update_group_role = _create_rerouter("update_group_role") | |
282 | delete_group_role = _create_rerouter("delete_group_role") | |
283 | ||
284 | set_group_join_policy = _create_rerouter("set_group_join_policy") | |
285 | ||
286 | @defer.inlineCallbacks | |
172 | 287 | def create_group(self, group_id, user_id, content): |
173 | 288 | """Create a group |
174 | 289 | """ |
215 | 330 | is_publicised=is_publicised, |
216 | 331 | ) |
217 | 332 | self.notifier.on_new_event("groups_key", token, users=[user_id]) |
218 | ||
219 | return res | |
220 | ||
221 | @defer.inlineCallbacks | |
222 | def get_users_in_group(self, group_id, requester_user_id): | |
223 | """Get users in a group | |
224 | """ | |
225 | if self.is_mine_id(group_id): | |
226 | res = yield self.groups_server_handler.get_users_in_group( | |
227 | group_id, requester_user_id | |
228 | ) | |
229 | return res | |
230 | ||
231 | group_server_name = get_domain_from_id(group_id) | |
232 | ||
233 | try: | |
234 | res = yield self.transport_client.get_users_in_group( | |
235 | get_domain_from_id(group_id), group_id, requester_user_id | |
236 | ) | |
237 | except HttpResponseException as e: | |
238 | raise e.to_synapse_error() | |
239 | except RequestSendFailed: | |
240 | raise SynapseError(502, "Failed to contact group server") | |
241 | ||
242 | chunk = res["chunk"] | |
243 | valid_entries = [] | |
244 | for entry in chunk: | |
245 | g_user_id = entry["user_id"] | |
246 | attestation = entry.pop("attestation", {}) | |
247 | try: | |
248 | if get_domain_from_id(g_user_id) != group_server_name: | |
249 | yield self.attestations.verify_attestation( | |
250 | attestation, | |
251 | group_id=group_id, | |
252 | user_id=g_user_id, | |
253 | server_name=get_domain_from_id(g_user_id), | |
254 | ) | |
255 | valid_entries.append(entry) | |
256 | except Exception as e: | |
257 | logger.info("Failed to verify user is in group: %s", e) | |
258 | ||
259 | res["chunk"] = valid_entries | |
260 | 333 | |
261 | 334 | return res |
262 | 335 | |
451 | 524 | group_id, user_id, membership="leave" |
452 | 525 | ) |
453 | 526 | self.notifier.on_new_event("groups_key", token, users=[user_id]) |
454 | ||
455 | @defer.inlineCallbacks | |
456 | def get_joined_groups(self, user_id): | |
457 | group_ids = yield self.store.get_joined_groups(user_id) | |
458 | return {"groups": group_ids} | |
459 | ||
460 | @defer.inlineCallbacks | |
461 | def get_publicised_groups_for_user(self, user_id): | |
462 | if self.hs.is_mine_id(user_id): | |
463 | result = yield self.store.get_publicised_groups_for_user(user_id) | |
464 | ||
465 | # Check AS associated groups for this user - this depends on the | |
466 | # RegExps in the AS registration file (under `users`) | |
467 | for app_service in self.store.get_app_services(): | |
468 | result.extend(app_service.get_groups_for_user(user_id)) | |
469 | ||
470 | return {"groups": result} | |
471 | else: | |
472 | try: | |
473 | bulk_result = yield self.transport_client.bulk_get_publicised_groups( | |
474 | get_domain_from_id(user_id), [user_id] | |
475 | ) | |
476 | except HttpResponseException as e: | |
477 | raise e.to_synapse_error() | |
478 | except RequestSendFailed: | |
479 | raise SynapseError(502, "Failed to contact group server") | |
480 | ||
481 | result = bulk_result.get("users", {}).get(user_id) | |
482 | # TODO: Verify attestations | |
483 | return {"groups": result} | |
484 | ||
485 | @defer.inlineCallbacks | |
486 | def bulk_get_publicised_groups(self, user_ids, proxy=True): | |
487 | destinations = {} | |
488 | local_users = set() | |
489 | ||
490 | for user_id in user_ids: | |
491 | if self.hs.is_mine_id(user_id): | |
492 | local_users.add(user_id) | |
493 | else: | |
494 | destinations.setdefault(get_domain_from_id(user_id), set()).add(user_id) | |
495 | ||
496 | if not proxy and destinations: | |
497 | raise SynapseError(400, "Some user_ids are not local") | |
498 | ||
499 | results = {} | |
500 | failed_results = [] | |
501 | for destination, dest_user_ids in iteritems(destinations): | |
502 | try: | |
503 | r = yield self.transport_client.bulk_get_publicised_groups( | |
504 | destination, list(dest_user_ids) | |
505 | ) | |
506 | results.update(r["users"]) | |
507 | except Exception: | |
508 | failed_results.extend(dest_user_ids) | |
509 | ||
510 | for uid in local_users: | |
511 | results[uid] = yield self.store.get_publicised_groups_for_user(uid) | |
512 | ||
513 | # Check AS associated groups for this user - this depends on the | |
514 | # RegExps in the AS registration file (under `users`) | |
515 | for app_service in self.store.get_app_services(): | |
516 | results[uid].extend(app_service.get_groups_for_user(uid)) | |
517 | ||
518 | return {"users": results} |
17 | 17 | from twisted.internet import defer |
18 | 18 | |
19 | 19 | from synapse.api.constants import EventTypes, Membership |
20 | from synapse.api.errors import AuthError, Codes, SynapseError | |
20 | from synapse.api.errors import SynapseError | |
21 | 21 | from synapse.events.validator import EventValidator |
22 | 22 | from synapse.handlers.presence import format_user_presence_state |
23 | 23 | from synapse.logging.context import make_deferred_yieldable, run_in_background |
273 | 273 | |
274 | 274 | user_id = requester.user.to_string() |
275 | 275 | |
276 | membership, member_event_id = await self._check_in_room_or_world_readable( | |
277 | room_id, user_id | |
276 | ( | |
277 | membership, | |
278 | member_event_id, | |
279 | ) = await self.auth.check_user_in_room_or_world_readable( | |
280 | room_id, user_id, allow_departed_users=True, | |
278 | 281 | ) |
279 | 282 | is_peeking = member_event_id is None |
280 | 283 | |
432 | 435 | ret["membership"] = membership |
433 | 436 | |
434 | 437 | return ret |
435 | ||
436 | async def _check_in_room_or_world_readable(self, room_id, user_id): | |
437 | try: | |
438 | # check_user_was_in_room will return the most recent membership | |
439 | # event for the user if: | |
440 | # * The user is a non-guest user, and was ever in the room | |
441 | # * The user is a guest user, and has joined the room | |
442 | # else it will throw. | |
443 | member_event = await self.auth.check_user_was_in_room(room_id, user_id) | |
444 | return member_event.membership, member_event.event_id | |
445 | except AuthError: | |
446 | visibility = await self.state_handler.get_current_state( | |
447 | room_id, EventTypes.RoomHistoryVisibility, "" | |
448 | ) | |
449 | if ( | |
450 | visibility | |
451 | and visibility.content["history_visibility"] == "world_readable" | |
452 | ): | |
453 | return Membership.JOIN, None | |
454 | raise AuthError( | |
455 | 403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN | |
456 | ) |
98 | 98 | ( |
99 | 99 | membership, |
100 | 100 | membership_event_id, |
101 | ) = yield self.auth.check_in_room_or_world_readable(room_id, user_id) | |
101 | ) = yield self.auth.check_user_in_room_or_world_readable( | |
102 | room_id, user_id, allow_departed_users=True | |
103 | ) | |
102 | 104 | |
103 | 105 | if membership == Membership.JOIN: |
104 | 106 | data = yield self.state.get_current_state(room_id, event_type, state_key) |
176 | 178 | ( |
177 | 179 | membership, |
178 | 180 | membership_event_id, |
179 | ) = yield self.auth.check_in_room_or_world_readable(room_id, user_id) | |
181 | ) = yield self.auth.check_user_in_room_or_world_readable( | |
182 | room_id, user_id, allow_departed_users=True | |
183 | ) | |
180 | 184 | |
181 | 185 | if membership == Membership.JOIN: |
182 | 186 | state_ids = yield self.store.get_filtered_current_state_ids( |
215 | 219 | if not requester.app_service: |
216 | 220 | # We check AS auth after fetching the room membership, as it |
217 | 221 | # requires us to pull out all joined members anyway. |
218 | membership, _ = yield self.auth.check_in_room_or_world_readable( | |
219 | room_id, user_id | |
222 | membership, _ = yield self.auth.check_user_in_room_or_world_readable( | |
223 | room_id, user_id, allow_departed_users=True | |
220 | 224 | ) |
221 | 225 | if membership != Membership.JOIN: |
222 | 226 | raise NotImplementedError( |
931 | 935 | # way? If we have been invited by a remote server, we need |
932 | 936 | # to get them to sign the event. |
933 | 937 | |
934 | returned_invite = yield federation_handler.send_invite( | |
935 | invitee.domain, event | |
938 | returned_invite = yield defer.ensureDeferred( | |
939 | federation_handler.send_invite(invitee.domain, event) | |
936 | 940 | ) |
937 | ||
938 | 941 | event.unsigned.pop("room_state", None) |
939 | 942 | |
940 | 943 | # TODO: Make sure the signatures actually are correct. |
132 | 132 | include_null = False |
133 | 133 | |
134 | 134 | logger.info( |
135 | "[purge] Running purge job for %d < max_lifetime <= %d (include NULLs = %s)", | |
135 | "[purge] Running purge job for %s < max_lifetime <= %s (include NULLs = %s)", | |
136 | 136 | min_ms, |
137 | 137 | max_ms, |
138 | 138 | include_null, |
334 | 334 | ( |
335 | 335 | membership, |
336 | 336 | member_event_id, |
337 | ) = await self.auth.check_in_room_or_world_readable(room_id, user_id) | |
337 | ) = await self.auth.check_user_in_room_or_world_readable( | |
338 | room_id, user_id, allow_departed_users=True | |
339 | ) | |
338 | 340 | |
339 | 341 | if source_config.direction == "b": |
340 | 342 | # if we're going backwards, we might need to backfill. This |
63 | 63 | "history_visibility": "shared", |
64 | 64 | "original_invitees_have_ops": False, |
65 | 65 | "guest_can_join": True, |
66 | "power_level_content_override": {"invite": 0}, | |
66 | 67 | }, |
67 | 68 | RoomCreationPreset.TRUSTED_PRIVATE_CHAT: { |
68 | 69 | "join_rules": JoinRules.INVITE, |
69 | 70 | "history_visibility": "shared", |
70 | 71 | "original_invitees_have_ops": True, |
71 | 72 | "guest_can_join": True, |
73 | "power_level_content_override": {"invite": 0}, | |
72 | 74 | }, |
73 | 75 | RoomCreationPreset.PUBLIC_CHAT: { |
74 | 76 | "join_rules": JoinRules.PUBLIC, |
75 | 77 | "history_visibility": "shared", |
76 | 78 | "original_invitees_have_ops": False, |
77 | 79 | "guest_can_join": False, |
80 | "power_level_content_override": {}, | |
78 | 81 | }, |
79 | 82 | } |
80 | 83 | |
258 | 261 | for v in ("invite", "events_default"): |
259 | 262 | current = int(pl_content.get(v, 0)) |
260 | 263 | if current < restricted_level: |
261 | logger.info( | |
264 | logger.debug( | |
262 | 265 | "Setting level for %s in %s to %i (was %i)", |
263 | 266 | v, |
264 | 267 | old_room_id, |
268 | 271 | pl_content[v] = restricted_level |
269 | 272 | updated = True |
270 | 273 | else: |
271 | logger.info("Not setting level for %s (already %i)", v, current) | |
274 | logger.debug("Not setting level for %s (already %i)", v, current) | |
272 | 275 | |
273 | 276 | if updated: |
274 | 277 | try: |
295 | 298 | EventTypes.Aliases, events_default |
296 | 299 | ) |
297 | 300 | |
298 | logger.info("Setting correct PLs in new room to %s", new_pl_content) | |
301 | logger.debug("Setting correct PLs in new room to %s", new_pl_content) | |
299 | 302 | yield self.event_creation_handler.create_and_send_nonmember_event( |
300 | 303 | requester, |
301 | 304 | { |
474 | 477 | for alias_str in aliases: |
475 | 478 | alias = RoomAlias.from_string(alias_str) |
476 | 479 | try: |
477 | yield directory_handler.delete_association( | |
478 | requester, alias, send_event=False | |
479 | ) | |
480 | yield directory_handler.delete_association(requester, alias) | |
480 | 481 | removed_aliases.append(alias_str) |
481 | 482 | except SynapseError as e: |
482 | 483 | logger.warning("Unable to remove alias %s from old room: %s", alias, e) |
507 | 508 | RoomAlias.from_string(alias), |
508 | 509 | new_room_id, |
509 | 510 | servers=(self.hs.hostname,), |
510 | send_event=False, | |
511 | 511 | check_membership=False, |
512 | 512 | ) |
513 | 513 | logger.info("Moved alias %s to new room", alias) |
578 | 578 | |
579 | 579 | # Check whether the third party rules allows/changes the room create |
580 | 580 | # request. |
581 | yield self.third_party_event_rules.on_create_room( | |
581 | event_allowed = yield self.third_party_event_rules.on_create_room( | |
582 | 582 | requester, config, is_requester_admin=is_requester_admin |
583 | 583 | ) |
584 | if not event_allowed: | |
585 | raise SynapseError( | |
586 | 403, "You are not permitted to create rooms", Codes.FORBIDDEN | |
587 | ) | |
584 | 588 | |
585 | 589 | if not is_requester_admin and not self.spam_checker.user_may_create_room( |
586 | 590 | user_id |
656 | 660 | room_id=room_id, |
657 | 661 | room_alias=room_alias, |
658 | 662 | servers=[self.hs.hostname], |
659 | send_event=False, | |
660 | 663 | check_membership=False, |
661 | 664 | ) |
662 | 665 | |
781 | 784 | @defer.inlineCallbacks |
782 | 785 | def send(etype, content, **kwargs): |
783 | 786 | event = create(etype, content, **kwargs) |
784 | logger.info("Sending %s in new room", etype) | |
787 | logger.debug("Sending %s in new room", etype) | |
785 | 788 | yield self.event_creation_handler.create_and_send_nonmember_event( |
786 | 789 | creator, event, ratelimit=False |
787 | 790 | ) |
795 | 798 | creation_content.update({"creator": creator_id}) |
796 | 799 | yield send(etype=EventTypes.Create, content=creation_content) |
797 | 800 | |
798 | logger.info("Sending %s in new room", EventTypes.Member) | |
801 | logger.debug("Sending %s in new room", EventTypes.Member) | |
799 | 802 | yield self.room_member_handler.update_membership( |
800 | 803 | creator, |
801 | 804 | creator.user, |
824 | 827 | # This will be redundant on pre-MSC2260 rooms, since the
825 | 828 | # aliases event is special-cased. |
826 | 829 | EventTypes.Aliases: 0, |
830 | EventTypes.Tombstone: 100, | |
831 | EventTypes.ServerACL: 100, | |
827 | 832 | }, |
828 | 833 | "events_default": 0, |
829 | 834 | "state_default": 50, |
830 | 835 | "ban": 50, |
831 | 836 | "kick": 50, |
832 | 837 | "redact": 50, |
833 | "invite": 0, | |
838 | "invite": 50, | |
834 | 839 | } |
835 | 840 | |
836 | 841 | if config["original_invitees_have_ops"]: |
837 | 842 | for invitee in invite_list: |
838 | 843 | power_level_content["users"][invitee] = 100 |
844 | ||
845 | # Power levels overrides are defined per chat preset | |
846 | power_level_content.update(config["power_level_content_override"]) | |
839 | 847 | |
840 | 848 | if power_level_content_override: |
841 | 849 | power_level_content.update(power_level_content_override) |
943 | 943 | # join dance for now, since we're kinda implicitly checking |
944 | 944 | # that we are allowed to join when we decide whether or not we |
945 | 945 | # need to do the invite/join dance. |
946 | yield self.federation_handler.do_invite_join( | |
947 | remote_room_hosts, room_id, user.to_string(), content | |
946 | yield defer.ensureDeferred( | |
947 | self.federation_handler.do_invite_join( | |
948 | remote_room_hosts, room_id, user.to_string(), content | |
949 | ) | |
948 | 950 | ) |
949 | 951 | yield self._user_joined_room(user, room_id) |
950 | 952 | |
981 | 983 | """ |
982 | 984 | fed_handler = self.federation_handler |
983 | 985 | try: |
984 | ret = yield fed_handler.do_remotely_reject_invite( | |
985 | remote_room_hosts, room_id, target.to_string(), content=content, | |
986 | ret = yield defer.ensureDeferred( | |
987 | fed_handler.do_remotely_reject_invite( | |
988 | remote_room_hosts, room_id, target.to_string(), content=content, | |
989 | ) | |
986 | 990 | ) |
987 | 991 | return ret |
988 | 992 | except Exception as e: |
299 | 299 | room_state["guest_access"] = event_content.get("guest_access") |
300 | 300 | |
301 | 301 | for room_id, state in room_to_state_updates.items(): |
302 | logger.info("Updating room_stats_state for %s: %s", room_id, state) | |
302 | logger.debug("Updating room_stats_state for %s: %s", room_id, state) | |
303 | 303 | yield self.store.update_room_state(room_id, state) |
304 | 304 | |
305 | 305 | return room_to_stats_deltas, user_to_stats_deltas |
13 | 13 | # See the License for the specific language governing permissions and |
14 | 14 | # limitations under the License. |
15 | 15 | |
16 | import collections | |
17 | 16 | import itertools |
18 | 17 | import logging |
18 | from typing import Any, Dict, FrozenSet, List, Optional, Set, Tuple | |
19 | 19 | |
20 | 20 | from six import iteritems, itervalues |
21 | 21 | |
22 | import attr | |
22 | 23 | from prometheus_client import Counter |
23 | 24 | |
24 | 25 | from synapse.api.constants import EventTypes, Membership |
26 | from synapse.api.filtering import FilterCollection | |
27 | from synapse.events import EventBase | |
25 | 28 | from synapse.logging.context import LoggingContext |
26 | 29 | from synapse.push.clientformat import format_push_rules_for_user |
27 | 30 | from synapse.storage.roommember import MemberSummary |
28 | 31 | from synapse.storage.state import StateFilter |
29 | from synapse.types import RoomStreamToken | |
32 | from synapse.types import ( | |
33 | Collection, | |
34 | JsonDict, | |
35 | RoomStreamToken, | |
36 | StateMap, | |
37 | StreamToken, | |
38 | UserID, | |
39 | ) | |
30 | 40 | from synapse.util.async_helpers import concurrently_execute |
31 | 41 | from synapse.util.caches.expiringcache import ExpiringCache |
32 | 42 | from synapse.util.caches.lrucache import LruCache |
61 | 71 | LAZY_LOADED_MEMBERS_CACHE_MAX_SIZE = 100 |
62 | 72 | |
63 | 73 | |
64 | SyncConfig = collections.namedtuple( | |
65 | "SyncConfig", ["user", "filter_collection", "is_guest", "request_key", "device_id"] | |
66 | ) | |
67 | ||
68 | ||
69 | class TimelineBatch( | |
70 | collections.namedtuple("TimelineBatch", ["prev_batch", "events", "limited"]) | |
71 | ): | |
72 | __slots__ = [] | |
73 | ||
74 | def __nonzero__(self): | |
74 | @attr.s(slots=True, frozen=True) | |
75 | class SyncConfig: | |
76 | user = attr.ib(type=UserID) | |
77 | filter_collection = attr.ib(type=FilterCollection) | |
78 | is_guest = attr.ib(type=bool) | |
79 | request_key = attr.ib(type=Tuple[Any, ...]) | |
80 | device_id = attr.ib(type=str) | |
81 | ||
82 | ||
83 | @attr.s(slots=True, frozen=True) | |
84 | class TimelineBatch: | |
85 | prev_batch = attr.ib(type=StreamToken) | |
86 | events = attr.ib(type=List[EventBase]) | |
87 | limited = attr.ib(type=bool) | 
88 | ||
89 | def __nonzero__(self) -> bool: | |
75 | 90 | """Make the result appear empty if there are no updates. This is used |
76 | 91 | to tell if room needs to be part of the sync result. |
77 | 92 | """ |
80 | 95 | __bool__ = __nonzero__ # python3 |
81 | 96 | |
82 | 97 | |
83 | class JoinedSyncResult( | |
84 | collections.namedtuple( | |
85 | "JoinedSyncResult", | |
86 | [ | |
87 | "room_id", # str | |
88 | "timeline", # TimelineBatch | |
89 | "state", # dict[(str, str), FrozenEvent] | |
90 | "ephemeral", | |
91 | "account_data", | |
92 | "unread_notifications", | |
93 | "summary", | |
94 | ], | |
95 | ) | |
96 | ): | |
97 | __slots__ = [] | |
98 | ||
99 | def __nonzero__(self): | |
98 | @attr.s(slots=True, frozen=True) | |
99 | class JoinedSyncResult: | |
100 | room_id = attr.ib(type=str) | |
101 | timeline = attr.ib(type=TimelineBatch) | |
102 | state = attr.ib(type=StateMap[EventBase]) | |
103 | ephemeral = attr.ib(type=List[JsonDict]) | |
104 | account_data = attr.ib(type=List[JsonDict]) | |
105 | unread_notifications = attr.ib(type=JsonDict) | |
106 | summary = attr.ib(type=Optional[JsonDict]) | |
107 | ||
108 | def __nonzero__(self) -> bool: | |
100 | 109 | """Make the result appear empty if there are no updates. This is used |
101 | 110 | to tell if room needs to be part of the sync result. |
102 | 111 | """ |
112 | 121 | __bool__ = __nonzero__ # python3 |
113 | 122 | |
114 | 123 | |
115 | class ArchivedSyncResult( | |
116 | collections.namedtuple( | |
117 | "ArchivedSyncResult", | |
118 | [ | |
119 | "room_id", # str | |
120 | "timeline", # TimelineBatch | |
121 | "state", # dict[(str, str), FrozenEvent] | |
122 | "account_data", | |
123 | ], | |
124 | ) | |
125 | ): | |
126 | __slots__ = [] | |
127 | ||
128 | def __nonzero__(self): | |
124 | @attr.s(slots=True, frozen=True) | |
125 | class ArchivedSyncResult: | |
126 | room_id = attr.ib(type=str) | |
127 | timeline = attr.ib(type=TimelineBatch) | |
128 | state = attr.ib(type=StateMap[EventBase]) | |
129 | account_data = attr.ib(type=List[JsonDict]) | |
130 | ||
131 | def __nonzero__(self) -> bool: | |
129 | 132 | """Make the result appear empty if there are no updates. This is used |
130 | 133 | to tell if room needs to be part of the sync result. |
131 | 134 | """ |
134 | 137 | __bool__ = __nonzero__ # python3 |
135 | 138 | |
136 | 139 | |
137 | class InvitedSyncResult( | |
138 | collections.namedtuple( | |
139 | "InvitedSyncResult", | |
140 | ["room_id", "invite"], # str # FrozenEvent: the invite event | |
141 | ) | |
142 | ): | |
143 | __slots__ = [] | |
144 | ||
145 | def __nonzero__(self): | |
140 | @attr.s(slots=True, frozen=True) | |
141 | class InvitedSyncResult: | |
142 | room_id = attr.ib(type=str) | |
143 | invite = attr.ib(type=EventBase) | |
144 | ||
145 | def __nonzero__(self) -> bool: | |
146 | 146 | """Invited rooms should always be reported to the client""" |
147 | 147 | return True |
148 | 148 | |
149 | 149 | __bool__ = __nonzero__ # python3 |
150 | 150 | |
151 | 151 | |
152 | class GroupsSyncResult( | |
153 | collections.namedtuple("GroupsSyncResult", ["join", "invite", "leave"]) | |
154 | ): | |
155 | __slots__ = [] | |
156 | ||
157 | def __nonzero__(self): | |
152 | @attr.s(slots=True, frozen=True) | |
153 | class GroupsSyncResult: | |
154 | join = attr.ib(type=JsonDict) | |
155 | invite = attr.ib(type=JsonDict) | |
156 | leave = attr.ib(type=JsonDict) | |
157 | ||
158 | def __nonzero__(self) -> bool: | |
158 | 159 | return bool(self.join or self.invite or self.leave) |
159 | 160 | |
160 | 161 | __bool__ = __nonzero__ # python3 |
161 | 162 | |
162 | 163 | |
163 | class DeviceLists( | |
164 | collections.namedtuple( | |
165 | "DeviceLists", | |
166 | [ | |
167 | "changed", # list of user_ids whose devices may have changed | |
168 | "left", # list of user_ids whose devices we no longer track | |
169 | ], | |
170 | ) | |
171 | ): | |
172 | __slots__ = [] | |
173 | ||
174 | def __nonzero__(self): | |
164 | @attr.s(slots=True, frozen=True) | |
165 | class DeviceLists: | |
166 | """ | |
167 | Attributes: | |
168 | changed: List of user_ids whose devices may have changed | |
169 | left: List of user_ids whose devices we no longer track | |
170 | """ | |
171 | ||
172 | changed = attr.ib(type=Collection[str]) | |
173 | left = attr.ib(type=Collection[str]) | |
174 | ||
175 | def __nonzero__(self) -> bool: | |
175 | 176 | return bool(self.changed or self.left) |
176 | 177 | |
177 | 178 | __bool__ = __nonzero__ # python3 |
178 | 179 | |
179 | 180 | |
180 | class SyncResult( | |
181 | collections.namedtuple( | |
182 | "SyncResult", | |
183 | [ | |
184 | "next_batch", # Token for the next sync | |
185 | "presence", # List of presence events for the user. | |
186 | "account_data", # List of account_data events for the user. | |
187 | "joined", # JoinedSyncResult for each joined room. | |
188 | "invited", # InvitedSyncResult for each invited room. | |
189 | "archived", # ArchivedSyncResult for each archived room. | |
190 | "to_device", # List of direct messages for the device. | |
191 | "device_lists", # List of user_ids whose devices have changed | |
192 | "device_one_time_keys_count", # Dict of algorithm to count for one time keys | |
193 | # for this device | |
194 | "groups", | |
195 | ], | |
196 | ) | |
197 | ): | |
198 | __slots__ = [] | |
199 | ||
200 | def __nonzero__(self): | |
181 | @attr.s | |
182 | class _RoomChanges: | |
183 | """The set of room entries to include in the sync, plus the set of joined | |
184 | and left room IDs since last sync. | |
185 | """ | |
186 | ||
187 | room_entries = attr.ib(type=List["RoomSyncResultBuilder"]) | |
188 | invited = attr.ib(type=List[InvitedSyncResult]) | |
189 | newly_joined_rooms = attr.ib(type=List[str]) | |
190 | newly_left_rooms = attr.ib(type=List[str]) | |
191 | ||
192 | ||
193 | @attr.s(slots=True, frozen=True) | |
194 | class SyncResult: | |
195 | """ | |
196 | Attributes: | |
197 | next_batch: Token for the next sync | |
198 | presence: List of presence events for the user. | |
199 | account_data: List of account_data events for the user. | |
200 | joined: JoinedSyncResult for each joined room. | |
201 | invited: InvitedSyncResult for each invited room. | |
202 | archived: ArchivedSyncResult for each archived room. | |
203 | to_device: List of direct messages for the device. | |
204 | device_lists: List of user_ids whose devices have changed | |
205 | device_one_time_keys_count: Dict of algorithm to count for one time keys | |
206 | for this device | |
207 | groups: Group updates, if any | |
208 | """ | |
209 | ||
210 | next_batch = attr.ib(type=StreamToken) | |
211 | presence = attr.ib(type=List[JsonDict]) | |
212 | account_data = attr.ib(type=List[JsonDict]) | |
213 | joined = attr.ib(type=List[JoinedSyncResult]) | |
214 | invited = attr.ib(type=List[InvitedSyncResult]) | |
215 | archived = attr.ib(type=List[ArchivedSyncResult]) | |
216 | to_device = attr.ib(type=List[JsonDict]) | |
217 | device_lists = attr.ib(type=DeviceLists) | |
218 | device_one_time_keys_count = attr.ib(type=JsonDict) | |
219 | groups = attr.ib(type=Optional[GroupsSyncResult]) | |
220 | ||
221 | def __nonzero__(self) -> bool: | |
201 | 222 | """Make the result appear empty if there are no updates. This is used |
202 | 223 | to tell if the notifier needs to wait for more events when polling for |
203 | 224 | events. |
239 | 260 | ) |
240 | 261 | |
241 | 262 | async def wait_for_sync_for_user( |
242 | self, sync_config, since_token=None, timeout=0, full_state=False | |
243 | ): | |
263 | self, | |
264 | sync_config: SyncConfig, | |
265 | since_token: Optional[StreamToken] = None, | |
266 | timeout: int = 0, | |
267 | full_state: bool = False, | |
268 | ) -> SyncResult: | |
244 | 269 | """Get the sync for a client if we have new data for it now. Otherwise |
245 | 270 | wait for new data to arrive on the server. If the timeout expires, then |
246 | 271 | return an empty sync result. |
247 | Returns: | |
248 | Deferred[SyncResult] | |
249 | 272 | """ |
250 | 273 | # If the user is not part of the mau group, then check that limits have |
251 | 274 | # not been exceeded (if not part of the group by this point, almost certain |
264 | 287 | return res |
265 | 288 | |
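The signature change above also illustrates the patch's docstring cleanup: `Returns: Deferred[SyncResult]` notes are dropped in favour of a return annotation on the `async def` itself. A minimal sketch of the idea, using a hypothetical function rather than anything from Synapse:

```python
import asyncio
from typing import Optional


async def count_new_items(since: Optional[int] = None) -> int:
    """Awaiting this coroutine yields an int directly, so the return type
    belongs in the annotation rather than in a "Returns: Deferred[int]" note."""
    if since is None:
        return 0
    return since + 1
```

Callers simply `await count_new_items(...)` and receive the annotated type, with no Deferred wrapper to document separately.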
266 | 289 | async def _wait_for_sync_for_user( |
267 | self, sync_config, since_token, timeout, full_state | |
268 | ): | |
290 | self, | |
291 | sync_config: SyncConfig, | |
292 | since_token: Optional[StreamToken] = None, | |
293 | timeout: int = 0, | |
294 | full_state: bool = False, | |
295 | ) -> SyncResult: | |
269 | 296 | if since_token is None: |
270 | 297 | sync_type = "initial_sync" |
271 | 298 | elif full_state: |
304 | 331 | |
305 | 332 | return result |
306 | 333 | |
307 | def current_sync_for_user(self, sync_config, since_token=None, full_state=False): | |
334 | async def current_sync_for_user( | |
335 | self, | |
336 | sync_config: SyncConfig, | |
337 | since_token: Optional[StreamToken] = None, | |
338 | full_state: bool = False, | |
339 | ) -> SyncResult: | |
308 | 340 | """Get the sync for a client needed to match what the server has now. |
309 | Returns: | |
310 | A Deferred SyncResult. | |
311 | """ | |
312 | return self.generate_sync_result(sync_config, since_token, full_state) | |
313 | ||
314 | async def push_rules_for_user(self, user): | |
341 | """ | |
342 | return await self.generate_sync_result(sync_config, since_token, full_state) | |
343 | ||
344 | async def push_rules_for_user(self, user: UserID) -> JsonDict: | |
315 | 345 | user_id = user.to_string() |
316 | 346 | rules = await self.store.get_push_rules_for_user(user_id) |
317 | 347 | rules = format_push_rules_for_user(user, rules) |
318 | 348 | return rules |
319 | 349 | |
320 | async def ephemeral_by_room(self, sync_result_builder, now_token, since_token=None): | |
350 | async def ephemeral_by_room( | |
351 | self, | |
352 | sync_result_builder: "SyncResultBuilder", | |
353 | now_token: StreamToken, | |
354 | since_token: Optional[StreamToken] = None, | |
355 | ) -> Tuple[StreamToken, Dict[str, List[JsonDict]]]: | |
321 | 356 | """Get the ephemeral events for each room the user is in |
322 | 357 | Args: |
323 | sync_result_builder(SyncResultBuilder) | |
324 | now_token (StreamToken): Where the server is currently up to. | |
325 | since_token (StreamToken): Where the server was when the client | |
358 | sync_result_builder | |
359 | now_token: Where the server is currently up to. | |
360 | since_token: Where the server was when the client | |
326 | 361 | last synced. |
327 | 362 | Returns: |
328 | 363 | A tuple of the now StreamToken, updated to reflect which typing |
347 | 382 | ) |
348 | 383 | now_token = now_token.copy_and_replace("typing_key", typing_key) |
349 | 384 | |
350 | ephemeral_by_room = {} | |
385 | ephemeral_by_room = {} # type: JsonDict | |
351 | 386 | |
352 | 387 | for event in typing: |
353 | 388 | # we want to exclude the room_id from the event, but modifying the |
379 | 414 | |
380 | 415 | async def _load_filtered_recents( |
381 | 416 | self, |
382 | room_id, | |
383 | sync_config, | |
384 | now_token, | |
385 | since_token=None, | |
386 | recents=None, | |
387 | newly_joined_room=False, | |
388 | ): | |
417 | room_id: str, | |
418 | sync_config: SyncConfig, | |
419 | now_token: StreamToken, | |
420 | since_token: Optional[StreamToken] = None, | |
421 | potential_recents: Optional[List[EventBase]] = None, | |
422 | newly_joined_room: bool = False, | |
423 | ) -> TimelineBatch: | |
389 | 424 | """ |
390 | 425 | Returns: |
391 | 426 | a Deferred TimelineBatch |
396 | 431 | sync_config.filter_collection.blocks_all_room_timeline() |
397 | 432 | ) |
398 | 433 | |
399 | if recents is None or newly_joined_room or timeline_limit < len(recents): | |
434 | if ( | |
435 | potential_recents is None | |
436 | or newly_joined_room | |
437 | or timeline_limit < len(potential_recents) | |
438 | ): | |
400 | 439 | limited = True |
401 | 440 | else: |
402 | 441 | limited = False |
403 | 442 | |
404 | if recents: | |
405 | recents = sync_config.filter_collection.filter_room_timeline(recents) | |
443 | if potential_recents: | |
444 | recents = sync_config.filter_collection.filter_room_timeline( | |
445 | potential_recents | |
446 | ) | |
406 | 447 | |
407 | 448 | # We check if there are any state events, if there are then we pass |
408 | 449 | # all current state events to the filter_events function. This is to |
409 | 450 | # ensure that we always include current state in the timeline |
410 | current_state_ids = frozenset() | |
451 | current_state_ids = frozenset() # type: FrozenSet[str] | |
411 | 452 | if any(e.is_state() for e in recents): |
412 | current_state_ids = await self.state.get_current_state_ids(room_id) | |
413 | current_state_ids = frozenset(itervalues(current_state_ids)) | |
453 | current_state_ids_map = await self.state.get_current_state_ids( | |
454 | room_id | |
455 | ) | |
456 | current_state_ids = frozenset(itervalues(current_state_ids_map)) | |
414 | 457 | |
415 | 458 | recents = await filter_events_for_client( |
416 | 459 | self.storage, |
462 | 505 | # ensure that we always include current state in the timeline |
463 | 506 | current_state_ids = frozenset() |
464 | 507 | if any(e.is_state() for e in loaded_recents): |
465 | current_state_ids = await self.state.get_current_state_ids(room_id) | |
466 | current_state_ids = frozenset(itervalues(current_state_ids)) | |
508 | current_state_ids_map = await self.state.get_current_state_ids( | |
509 | room_id | |
510 | ) | |
511 | current_state_ids = frozenset(itervalues(current_state_ids_map)) | |
467 | 512 | |
468 | 513 | loaded_recents = await filter_events_for_client( |
469 | 514 | self.storage, |
492 | 537 | limited=limited or newly_joined_room, |
493 | 538 | ) |
494 | 539 | |
495 | async def get_state_after_event(self, event, state_filter=StateFilter.all()): | |
540 | async def get_state_after_event( | |
541 | self, event: EventBase, state_filter: StateFilter = StateFilter.all() | |
542 | ) -> StateMap[str]: | |
496 | 543 | """ |
497 | 544 | Get the room state after the given event |
498 | 545 | |
499 | 546 | Args: |
500 | event(synapse.events.EventBase): event of interest | |
501 | state_filter (StateFilter): The state filter used to fetch state | |
502 | from the database. | |
503 | ||
504 | Returns: | |
505 | A Deferred map from ((type, state_key)->Event) | |
547 | event: event of interest | |
548 | state_filter: The state filter used to fetch state from the database. | |
506 | 549 | """ |
507 | 550 | state_ids = await self.state_store.get_state_ids_for_event( |
508 | 551 | event.event_id, state_filter=state_filter |
513 | 556 | return state_ids |
514 | 557 | |
515 | 558 | async def get_state_at( |
516 | self, room_id, stream_position, state_filter=StateFilter.all() | |
517 | ): | |
559 | self, | |
560 | room_id: str, | |
561 | stream_position: StreamToken, | |
562 | state_filter: StateFilter = StateFilter.all(), | |
563 | ) -> StateMap[str]: | |
518 | 564 | """ Get the room state at a particular stream position |
519 | 565 | |
520 | 566 | Args: |
521 | room_id(str): room for which to get state | |
522 | stream_position(StreamToken): point at which to get state | |
523 | state_filter (StateFilter): The state filter used to fetch state | |
524 | from the database. | |
525 | ||
526 | Returns: | |
527 | A Deferred map from ((type, state_key)->Event) | |
567 | room_id: room for which to get state | |
568 | stream_position: point at which to get state | |
569 | state_filter: The state filter used to fetch state from the database. | |
528 | 570 | """ |
529 | 571 | # FIXME this claims to get the state at a stream position, but |
530 | 572 | # get_recent_events_for_room operates by topo ordering. This therefore |
545 | 587 | state = {} |
546 | 588 | return state |
547 | 589 | |
548 | async def compute_summary(self, room_id, sync_config, batch, state, now_token): | |
590 | async def compute_summary( | |
591 | self, | |
592 | room_id: str, | |
593 | sync_config: SyncConfig, | |
594 | batch: TimelineBatch, | |
595 | state: StateMap[EventBase], | |
596 | now_token: StreamToken, | |
597 | ) -> Optional[JsonDict]: | |
549 | 598 | """ Works out a room summary block for this room, summarising the number |
550 | 599 | of joined members in the room, and providing the 'hero' members if the |
551 | 600 | room has no name so clients can consistently name rooms. Also adds |
552 | 601 | state events to 'state' if needed to describe the heroes. |
553 | 602 | |
554 | Args: | |
555 | room_id(str): | |
556 | sync_config(synapse.handlers.sync.SyncConfig): | |
557 | batch(synapse.handlers.sync.TimelineBatch): The timeline batch for | |
558 | the room that will be sent to the user. | |
559 | state(dict): dict of (type, state_key) -> Event as returned by | |
560 | compute_state_delta | |
561 | now_token(str): Token of the end of the current batch. | |
562 | ||
563 | Returns: | |
564 | A deferred dict describing the room summary | |
603 | Args: | |
604 | room_id | |
605 | sync_config | |
606 | batch: The timeline batch for the room that will be sent to the user. | |
607 | state: State as returned by compute_state_delta | |
608 | now_token: Token of the end of the current batch. | |
565 | 609 | """ |
566 | 610 | |
567 | 611 | # FIXME: we could/should get this from room_stats when matthew/stats lands |
680 | 724 | |
681 | 725 | return summary |
682 | 726 | |
683 | def get_lazy_loaded_members_cache(self, cache_key): | |
727 | def get_lazy_loaded_members_cache(self, cache_key: Tuple[str, str]) -> LruCache: | |
684 | 728 | cache = self.lazy_loaded_members_cache.get(cache_key) |
685 | 729 | if cache is None: |
686 | 730 | logger.debug("creating LruCache for %r", cache_key) |
691 | 735 | return cache |
692 | 736 | |
693 | 737 | async def compute_state_delta( |
694 | self, room_id, batch, sync_config, since_token, now_token, full_state | |
695 | ): | |
738 | self, | |
739 | room_id: str, | |
740 | batch: TimelineBatch, | |
741 | sync_config: SyncConfig, | |
742 | since_token: Optional[StreamToken], | |
743 | now_token: StreamToken, | |
744 | full_state: bool, | |
745 | ) -> StateMap[EventBase]: | |
696 | 746 | """ Works out the difference in state between the start of the timeline |
697 | 747 | and the previous sync. |
698 | 748 | |
699 | 749 | Args: |
700 | room_id(str): | |
701 | batch(synapse.handlers.sync.TimelineBatch): The timeline batch for | |
702 | the room that will be sent to the user. | |
703 | sync_config(synapse.handlers.sync.SyncConfig): | |
704 | since_token(str|None): Token of the end of the previous batch. May | |
705 | be None. | |
706 | now_token(str): Token of the end of the current batch. | |
707 | full_state(bool): Whether to force returning the full state. | |
708 | ||
709 | Returns: | |
710 | A deferred dict of (type, state_key) -> Event | |
750 | room_id: | |
751 | batch: The timeline batch for the room that will be sent to the user. | |
752 | sync_config: | |
753 | since_token: Token of the end of the previous batch. May be None. | |
754 | now_token: Token of the end of the current batch. | |
755 | full_state: Whether to force returning the full state. | |
711 | 756 | """ |
712 | 757 | # TODO(mjark) Check if the state events were received by the server |
713 | 758 | # after the previous sync, since we need to include those state |
799 | 844 | # about them). |
800 | 845 | state_filter = StateFilter.all() |
801 | 846 | |
847 | # If this is an initial sync then full_state should be set, and | |
848 | # that case is handled above. We assert here to ensure that this | |
849 | # is indeed the case. | |
850 | assert since_token is not None | |
802 | 851 | state_at_previous_sync = await self.get_state_at( |
803 | 852 | room_id, stream_position=since_token, state_filter=state_filter |
804 | 853 | ) |
873 | 922 | if t[0] == EventTypes.Member: |
874 | 923 | cache.set(t[1], event_id) |
875 | 924 | |
876 | state = {} | |
925 | state = {} # type: Dict[str, EventBase] | |
877 | 926 | if state_ids: |
878 | 927 | state = await self.store.get_events(list(state_ids.values())) |
879 | 928 | |
885 | 934 | if e.type != EventTypes.Aliases # until MSC2261 or alternative solution |
886 | 935 | } |
887 | 936 | |
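The `# type:` comments introduced in these hunks (e.g. on `state = {}`) exist because mypy cannot infer a useful element type from an empty literal and would otherwise report that the variable needs a type annotation. A small sketch of the same pattern, with hypothetical room and event values:

```python
from typing import Dict, List

# The comment pins the key/value types of the empty dict; the comment form
# (rather than a variable annotation) keeps the file parseable on the older
# Python 3 versions Synapse still supported at the time.
ephemeral_by_room = {}  # type: Dict[str, List[dict]]
ephemeral_by_room.setdefault("!room:example.org", []).append({"type": "m.typing"})
```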
888 | async def unread_notifs_for_room_id(self, room_id, sync_config): | |
937 | async def unread_notifs_for_room_id( | |
938 | self, room_id: str, sync_config: SyncConfig | |
939 | ) -> Optional[Dict[str, str]]: | |
889 | 940 | with Measure(self.clock, "unread_notifs_for_room_id"): |
890 | 941 | last_unread_event_id = await self.store.get_last_receipt_event_id_for_user( |
891 | 942 | user_id=sync_config.user.to_string(), |
893 | 944 | receipt_type="m.read", |
894 | 945 | ) |
895 | 946 | |
896 | notifs = [] | |
897 | 947 | if last_unread_event_id: |
898 | 948 | notifs = await self.store.get_unread_event_push_actions_by_room_for_user( |
899 | 949 | room_id, sync_config.user.to_string(), last_unread_event_id |
905 | 955 | return None |
906 | 956 | |
907 | 957 | async def generate_sync_result( |
908 | self, sync_config, since_token=None, full_state=False | |
909 | ): | |
958 | self, | |
959 | sync_config: SyncConfig, | |
960 | since_token: Optional[StreamToken] = None, | |
961 | full_state: bool = False, | |
962 | ) -> SyncResult: | |
910 | 963 | """Generates a sync result. |
911 | ||
912 | Args: | |
913 | sync_config (SyncConfig) | |
914 | since_token (StreamToken) | |
915 | full_state (bool) | |
916 | ||
917 | Returns: | |
918 | Deferred(SyncResult) | |
919 | 964 | """ |
920 | 965 | # NB: The now_token gets changed by some of the generate_sync_* methods, |
921 | 966 | # this is due to some of the underlying streams not supporting the ability |
923 | 968 | # Always use the `now_token` in `SyncResultBuilder` |
924 | 969 | now_token = await self.event_sources.get_current_token() |
925 | 970 | |
926 | logger.info( | |
971 | logger.debug( | |
927 | 972 | "Calculating sync response for %r between %s and %s", |
928 | 973 | sync_config.user, |
929 | 974 | since_token, |
977 | 1022 | ) |
978 | 1023 | |
979 | 1024 | device_id = sync_config.device_id |
980 | one_time_key_counts = {} | |
1025 | one_time_key_counts = {} # type: JsonDict | |
981 | 1026 | if device_id: |
982 | 1027 | one_time_key_counts = await self.store.count_e2e_one_time_keys( |
983 | 1028 | user_id, device_id |
1007 | 1052 | ) |
1008 | 1053 | |
1009 | 1054 | @measure_func("_generate_sync_entry_for_groups") |
1010 | async def _generate_sync_entry_for_groups(self, sync_result_builder): | |
1055 | async def _generate_sync_entry_for_groups( | |
1056 | self, sync_result_builder: "SyncResultBuilder" | |
1057 | ) -> None: | |
1011 | 1058 | user_id = sync_result_builder.sync_config.user.to_string() |
1012 | 1059 | since_token = sync_result_builder.since_token |
1013 | 1060 | now_token = sync_result_builder.now_token |
1052 | 1099 | @measure_func("_generate_sync_entry_for_device_list") |
1053 | 1100 | async def _generate_sync_entry_for_device_list( |
1054 | 1101 | self, |
1055 | sync_result_builder, | |
1056 | newly_joined_rooms, | |
1057 | newly_joined_or_invited_users, | |
1058 | newly_left_rooms, | |
1059 | newly_left_users, | |
1060 | ): | |
1102 | sync_result_builder: "SyncResultBuilder", | |
1103 | newly_joined_rooms: Set[str], | |
1104 | newly_joined_or_invited_users: Set[str], | |
1105 | newly_left_rooms: Set[str], | |
1106 | newly_left_users: Set[str], | |
1107 | ) -> DeviceLists: | |
1061 | 1108 | """Generate the DeviceLists section of sync |
1062 | 1109 | |
1063 | 1110 | Args: |
1064 | sync_result_builder (SyncResultBuilder) | |
1065 | newly_joined_rooms (set[str]): Set of rooms user has joined since | |
1111 | sync_result_builder | |
1112 | newly_joined_rooms: Set of rooms user has joined since previous sync | |
1113 | newly_joined_or_invited_users: Set of users that have joined or | |
1114 | been invited to a room since previous sync. | |
1115 | newly_left_rooms: Set of rooms user has left since previous sync | |
1116 | newly_left_users: Set of users that have left a room we're in since | |
1066 | 1117 | previous sync |
1067 | newly_joined_or_invited_users (set[str]): Set of users that have | |
1068 | joined or been invited to a room since previous sync. | |
1069 | newly_left_rooms (set[str]): Set of rooms user has left since | |
1070 | previous sync | |
1071 | newly_left_users (set[str]): Set of users that have left a room | |
1072 | we're in since previous sync | |
1073 | ||
1074 | Returns: | |
1075 | Deferred[DeviceLists] | |
1076 | 1118 | """ |
1077 | 1119 | |
1078 | 1120 | user_id = sync_result_builder.sync_config.user.to_string() |
1133 | 1175 | else: |
1134 | 1176 | return DeviceLists(changed=[], left=[]) |
1135 | 1177 | |
1136 | async def _generate_sync_entry_for_to_device(self, sync_result_builder): | |
1178 | async def _generate_sync_entry_for_to_device( | |
1179 | self, sync_result_builder: "SyncResultBuilder" | |
1180 | ) -> None: | |
1137 | 1181 | """Generates the to-device portion of the sync response. Populates |
1138 | 1182 | `sync_result_builder` with the result. |
1139 | ||
1140 | Args: | |
1141 | sync_result_builder(SyncResultBuilder) | |
1142 | ||
1143 | Returns: | |
1144 | Deferred(dict): A dictionary containing the per room account data. | |
1145 | 1183 | """ |
1146 | 1184 | user_id = sync_result_builder.sync_config.user.to_string() |
1147 | 1185 | device_id = sync_result_builder.sync_config.device_id |
1179 | 1217 | else: |
1180 | 1218 | sync_result_builder.to_device = [] |
1181 | 1219 | |
1182 | async def _generate_sync_entry_for_account_data(self, sync_result_builder): | |
1220 | async def _generate_sync_entry_for_account_data( | |
1221 | self, sync_result_builder: "SyncResultBuilder" | |
1222 | ) -> Dict[str, Dict[str, JsonDict]]: | |
1183 | 1223 | """Generates the account data portion of the sync response. Populates |
1184 | 1224 | `sync_result_builder` with the result. |
1185 | 1225 | |
1186 | 1226 | Args: |
1187 | sync_result_builder(SyncResultBuilder) | |
1227 | sync_result_builder | |
1188 | 1228 | |
1189 | 1229 | Returns: |
1190 | Deferred(dict): A dictionary containing the per room account data. | |
1230 | A dictionary containing the per room account data. | |
1191 | 1231 | """ |
1192 | 1232 | sync_config = sync_result_builder.sync_config |
1193 | 1233 | user_id = sync_result_builder.sync_config.user.to_string() |
1231 | 1271 | return account_data_by_room |
1232 | 1272 | |
1233 | 1273 | async def _generate_sync_entry_for_presence( |
1234 | self, sync_result_builder, newly_joined_rooms, newly_joined_or_invited_users | |
1235 | ): | |
1274 | self, | |
1275 | sync_result_builder: "SyncResultBuilder", | |
1276 | newly_joined_rooms: Set[str], | |
1277 | newly_joined_or_invited_users: Set[str], | |
1278 | ) -> None: | |
1236 | 1279 | """Generates the presence portion of the sync response. Populates the |
1237 | 1280 | `sync_result_builder` with the result. |
1238 | 1281 | |
1239 | 1282 | Args: |
1240 | sync_result_builder(SyncResultBuilder) | |
1241 | newly_joined_rooms(list): List of rooms that the user has joined | |
1242 | since the last sync (or empty if an initial sync) | |
1243 | newly_joined_or_invited_users(list): List of users that have joined | |
1244 | or been invited to rooms since the last sync (or empty if an initial | |
1245 | sync) | |
1283 | sync_result_builder | |
1284 | newly_joined_rooms: Set of rooms that the user has joined since | |
1285 | the last sync (or empty if an initial sync) | |
1286 | newly_joined_or_invited_users: Set of users that have joined or | |
1287 | been invited to rooms since the last sync (or empty if an | |
1288 | initial sync) | |
1246 | 1289 | """ |
1247 | 1290 | now_token = sync_result_builder.now_token |
1248 | 1291 | sync_config = sync_result_builder.sync_config |
1286 | 1329 | sync_result_builder.presence = presence |
1287 | 1330 | |
1288 | 1331 | async def _generate_sync_entry_for_rooms( |
1289 | self, sync_result_builder, account_data_by_room | |
1290 | ): | |
1332 | self, | |
1333 | sync_result_builder: "SyncResultBuilder", | |
1334 | account_data_by_room: Dict[str, Dict[str, JsonDict]], | |
1335 | ) -> Tuple[Set[str], Set[str], Set[str], Set[str]]: | |
1291 | 1336 | """Generates the rooms portion of the sync response. Populates the |
1292 | 1337 | `sync_result_builder` with the result. |
1293 | 1338 | |
1294 | 1339 | Args: |
1295 | sync_result_builder(SyncResultBuilder) | |
1296 | account_data_by_room(dict): Dictionary of per room account data | |
1340 | sync_result_builder | |
1341 | account_data_by_room: Dictionary of per room account data | |
1297 | 1342 | |
1298 | 1343 | Returns: |
1299 | Deferred(tuple): Returns a 4-tuple of | |
1344 | Returns a 4-tuple of | |
1300 | 1345 | `(newly_joined_rooms, newly_joined_or_invited_users, |
1301 | 1346 | newly_left_rooms, newly_left_users)` |
1302 | 1347 | """ |
1307 | 1352 | ) |
1308 | 1353 | |
1309 | 1354 | if block_all_room_ephemeral: |
1310 | ephemeral_by_room = {} | |
1355 | ephemeral_by_room = {} # type: Dict[str, List[JsonDict]] | |
1311 | 1356 | else: |
1312 | 1357 | now_token, ephemeral_by_room = await self.ephemeral_by_room( |
1313 | 1358 | sync_result_builder, |
1328 | 1373 | ) |
1329 | 1374 | if not tags_by_room: |
1330 | 1375 | logger.debug("no-oping sync") |
1331 | return [], [], [], [] | |
1376 | return set(), set(), set(), set() | |
1332 | 1377 | |
1333 | 1378 | ignored_account_data = await self.store.get_global_account_data_by_type_for_user( |
1334 | 1379 | "m.ignored_user_list", user_id=user_id |
1340 | 1385 | ignored_users = frozenset() |
1341 | 1386 | |
1342 | 1387 | if since_token: |
1343 | res = await self._get_rooms_changed(sync_result_builder, ignored_users) | |
1344 | room_entries, invited, newly_joined_rooms, newly_left_rooms = res | |
1345 | ||
1388 | room_changes = await self._get_rooms_changed( | |
1389 | sync_result_builder, ignored_users | |
1390 | ) | |
1346 | 1391 | tags_by_room = await self.store.get_updated_tags( |
1347 | 1392 | user_id, since_token.account_data_key |
1348 | 1393 | ) |
1349 | 1394 | else: |
1350 | res = await self._get_all_rooms(sync_result_builder, ignored_users) | |
1351 | room_entries, invited, newly_joined_rooms = res | |
1352 | newly_left_rooms = [] | |
1395 | room_changes = await self._get_all_rooms(sync_result_builder, ignored_users) | |
1353 | 1396 | |
1354 | 1397 | tags_by_room = await self.store.get_tags_for_user(user_id) |
1398 | ||
1399 | room_entries = room_changes.room_entries | |
1400 | invited = room_changes.invited | |
1401 | newly_joined_rooms = room_changes.newly_joined_rooms | |
1402 | newly_left_rooms = room_changes.newly_left_rooms | |
1355 | 1403 | |
1356 | 1404 | def handle_room_entries(room_entry): |
1357 | 1405 | return self._generate_room_entry( |
1392 | 1440 | newly_left_users -= newly_joined_or_invited_users |
1393 | 1441 | |
1394 | 1442 | return ( |
1395 | newly_joined_rooms, | |
1443 | set(newly_joined_rooms), | |
1396 | 1444 | newly_joined_or_invited_users, |
1397 | newly_left_rooms, | |
1445 | set(newly_left_rooms), | |
1398 | 1446 | newly_left_users, |
1399 | 1447 | ) |
1400 | 1448 | |
1401 | async def _have_rooms_changed(self, sync_result_builder): | |
1449 | async def _have_rooms_changed( | |
1450 | self, sync_result_builder: "SyncResultBuilder" | |
1451 | ) -> bool: | |
1402 | 1452 | """Returns whether there may be any new events that should be sent down |
1403 | 1453 | the sync. Returns True if there are. |
1404 | 1454 | """ |
1422 | 1472 | return True |
1423 | 1473 | return False |
1424 | 1474 | |
1425 | async def _get_rooms_changed(self, sync_result_builder, ignored_users): | |
1475 | async def _get_rooms_changed( | |
1476 | self, sync_result_builder: "SyncResultBuilder", ignored_users: Set[str] | |
1477 | ) -> _RoomChanges: | |
1426 | 1478 | """Gets the changes that have happened since the last sync. |
1427 | ||
1428 | Args: | |
1429 | sync_result_builder(SyncResultBuilder) | |
1430 | ignored_users(set(str)): Set of users ignored by user. | |
1431 | ||
1432 | Returns: | |
1433 | Deferred(tuple): Returns a tuple of the form: | |
1434 | `(room_entries, invited_rooms, newly_joined_rooms, newly_left_rooms)` | |
1435 | ||
1436 | where: | |
1437 | room_entries is a list [RoomSyncResultBuilder] | |
1438 | invited_rooms is a list [InvitedSyncResult] | |
1439 | newly_joined_rooms is a list[str] of room ids | |
1440 | newly_left_rooms is a list[str] of room ids | |
1441 | 1479 | """ |
1442 | 1480 | user_id = sync_result_builder.sync_config.user.to_string() |
1443 | 1481 | since_token = sync_result_builder.since_token |
1451 | 1489 | user_id, since_token.room_key, now_token.room_key |
1452 | 1490 | ) |
1453 | 1491 | |
1454 | mem_change_events_by_room_id = {} | |
1492 | mem_change_events_by_room_id = {} # type: Dict[str, List[EventBase]] | |
1455 | 1493 | for event in rooms_changed: |
1456 | 1494 | mem_change_events_by_room_id.setdefault(event.room_id, []).append(event) |
1457 | 1495 | |
1460 | 1498 | room_entries = [] |
1461 | 1499 | invited = [] |
1462 | 1500 | for room_id, events in iteritems(mem_change_events_by_room_id): |
1463 | logger.info( | |
1501 | logger.debug( | |
1464 | 1502 | "Membership changes in %s: [%s]", |
1465 | 1503 | room_id, |
1466 | 1504 | ", ".join(("%s (%s)" % (e.event_id, e.membership) for e in events)), |
1570 | 1608 | # This is all screaming out for a refactor, as the logic here is |
1571 | 1609 | # subtle and the moving parts numerous. |
1572 | 1610 | if leave_event.internal_metadata.is_out_of_band_membership(): |
1573 | batch_events = [leave_event] | |
1611 | batch_events = [leave_event] # type: Optional[List[EventBase]] | |
1574 | 1612 | else: |
1575 | 1613 | batch_events = None |
1576 | 1614 | |
1636 | 1674 | ) |
1637 | 1675 | room_entries.append(entry) |
1638 | 1676 | |
1639 | return room_entries, invited, newly_joined_rooms, newly_left_rooms | |
1640 | ||
1641 | async def _get_all_rooms(self, sync_result_builder, ignored_users): | |
1677 | return _RoomChanges(room_entries, invited, newly_joined_rooms, newly_left_rooms) | |
1678 | ||
1679 | async def _get_all_rooms( | |
1680 | self, sync_result_builder: "SyncResultBuilder", ignored_users: Set[str] | |
1681 | ) -> _RoomChanges: | |
1642 | 1682 | """Returns entries for all rooms for the user. |
1643 | 1683 | |
1644 | 1684 | Args: |
1645 | sync_result_builder(SyncResultBuilder) | |
1646 | ignored_users(set(str)): Set of users ignored by user. | |
1647 | ||
1648 | Returns: | |
1649 | Deferred(tuple): Returns a tuple of the form: | |
1650 | `([RoomSyncResultBuilder], [InvitedSyncResult], [])` | |
1685 | sync_result_builder | |
1686 | ignored_users: Set of users ignored by user. | |
1687 | ||
1651 | 1688 | """ |
1652 | 1689 | |
1653 | 1690 | user_id = sync_result_builder.sync_config.user.to_string() |
1709 | 1746 | ) |
1710 | 1747 | ) |
1711 | 1748 | |
1712 | return room_entries, invited, [] | |
1749 | return _RoomChanges(room_entries, invited, [], []) | |
1713 | 1750 | |
1714 | 1751 | async def _generate_room_entry( |
1715 | 1752 | self, |
1716 | sync_result_builder, | |
1717 | ignored_users, | |
1718 | room_builder, | |
1719 | ephemeral, | |
1720 | tags, | |
1721 | account_data, | |
1722 | always_include=False, | |
1753 | sync_result_builder: "SyncResultBuilder", | |
1754 | ignored_users: Set[str], | |
1755 | room_builder: "RoomSyncResultBuilder", | |
1756 | ephemeral: List[JsonDict], | |
1757 | tags: Optional[List[JsonDict]], | |
1758 | account_data: Dict[str, JsonDict], | |
1759 | always_include: bool = False, | |
1723 | 1760 | ): |
1724 | 1761 | """Populates the `joined` and `archived` section of `sync_result_builder` |
1725 | 1762 | based on the `room_builder`. |
1726 | 1763 | |
1727 | 1764 | Args: |
1728 | sync_result_builder(SyncResultBuilder) | |
1729 | ignored_users(set(str)): Set of users ignored by user. | |
1730 | room_builder(RoomSyncResultBuilder) | |
1731 | ephemeral(list): List of new ephemeral events for room | |
1732 | tags(list): List of *all* tags for room, or None if there has been | |
1765 | sync_result_builder | |
1766 | ignored_users: Set of users ignored by user. | |
1767 | room_builder | |
1768 | ephemeral: List of new ephemeral events for room | |
1769 | tags: List of *all* tags for room, or None if there has been | |
1733 | 1770 | no change. |
1734 | account_data(list): List of new account data for room | |
1735 | always_include(bool): Always include this room in the sync response, | |
1771 | account_data: List of new account data for room | |
1772 | always_include: Always include this room in the sync response, | |
1736 | 1773 | even if empty. |
1737 | 1774 | """ |
1738 | 1775 | newly_joined = room_builder.newly_joined |
1758 | 1795 | sync_config, |
1759 | 1796 | now_token=upto_token, |
1760 | 1797 | since_token=since_token, |
1761 | recents=events, | |
1798 | potential_recents=events, | |
1762 | 1799 | newly_joined_room=newly_joined, |
1763 | 1800 | ) |
1764 | 1801 | |
1809 | 1846 | room_id, batch, sync_config, since_token, now_token, full_state=full_state |
1810 | 1847 | ) |
1811 | 1848 | |
1812 | summary = {} | |
1849 | summary = {} # type: Optional[JsonDict] | |
1813 | 1850 | |
1814 | 1851 | # we include a summary in room responses when we're lazy loading |
1815 | 1852 | # members (as the client otherwise doesn't have enough info to form |
1833 | 1870 | ) |
1834 | 1871 | |
1835 | 1872 | if room_builder.rtype == "joined": |
1836 | unread_notifications = {} | |
1873 | unread_notifications = {} # type: Dict[str, str] | |
1837 | 1874 | room_sync = JoinedSyncResult( |
1838 | 1875 | room_id=room_id, |
1839 | 1876 | timeline=batch, |
1855 | 1892 | |
1856 | 1893 | if batch.limited and since_token: |
1857 | 1894 | user_id = sync_result_builder.sync_config.user.to_string() |
1858 | logger.info( | |
1895 | logger.debug( | |
1859 | 1896 | "Incremental gappy sync of %s for user %s with %d state events" |
1860 | 1897 | % (room_id, user_id, len(state)) |
1861 | 1898 | ) |
1862 | 1899 | elif room_builder.rtype == "archived": |
1863 | room_sync = ArchivedSyncResult( | |
1900 | archived_room_sync = ArchivedSyncResult( | |
1864 | 1901 | room_id=room_id, |
1865 | 1902 | timeline=batch, |
1866 | 1903 | state=state, |
1867 | 1904 | account_data=account_data_events, |
1868 | 1905 | ) |
1869 | if room_sync or always_include: | |
1870 | sync_result_builder.archived.append(room_sync) | |
1906 | if archived_room_sync or always_include: | |
1907 | sync_result_builder.archived.append(archived_room_sync) | |
1871 | 1908 | else: |
1872 | 1909 | raise Exception("Unrecognized rtype: %r", room_builder.rtype) |
1873 | 1910 | |
1874 | async def get_rooms_for_user_at(self, user_id, stream_ordering): | |
1911 | async def get_rooms_for_user_at( | |
1912 | self, user_id: str, stream_ordering: int | |
1913 | ) -> FrozenSet[str]: | |
1875 | 1914 | """Get set of joined rooms for a user at the given stream ordering. |
1876 | 1915 | |
1877 | 1916 | The stream ordering *must* be recent, otherwise this may throw an |
1879 | 1918 | current token, which should be perfectly fine). |
1880 | 1919 | |
1881 | 1920 | Args: |
1882 | user_id (str) | |
1883 | stream_ordering (int) | |
1921 | user_id | |
1922 | stream_ordering | |
1884 | 1923 | |
1885 | 1924 | ReturnValue: |
1886 | Deferred[frozenset[str]]: Set of room_ids the user is in at given | |
1887 | stream_ordering. | |
1925 | Set of room_ids the user is in at given stream_ordering. | |
1888 | 1926 | """ |
1889 | 1927 | joined_rooms = await self.store.get_rooms_for_user_with_stream_ordering(user_id) |
1890 | 1928 | |
1911 | 1949 | if user_id in users_in_room: |
1912 | 1950 | joined_room_ids.add(room_id) |
1913 | 1951 | |
1914 | joined_room_ids = frozenset(joined_room_ids) | |
1915 | return joined_room_ids | |
1916 | ||
1917 | ||
1918 | def _action_has_highlight(actions): | |
1952 | return frozenset(joined_room_ids) | |
1953 | ||
1954 | ||
1955 | def _action_has_highlight(actions: List[JsonDict]) -> bool: | |
1919 | 1956 | for action in actions: |
1920 | 1957 | try: |
1921 | 1958 | if action.get("set_tweak", None) == "highlight": |
1927 | 1964 | |
1928 | 1965 | |
1929 | 1966 | def _calculate_state( |
1930 | timeline_contains, timeline_start, previous, current, lazy_load_members | |
1931 | ): | |
1967 | timeline_contains: StateMap[str], | |
1968 | timeline_start: StateMap[str], | |
1969 | previous: StateMap[str], | |
1970 | current: StateMap[str], | |
1971 | lazy_load_members: bool, | |
1972 | ) -> StateMap[str]: | |
1932 | 1973 | """Works out what state to include in a sync response. |
1933 | 1974 | |
1934 | 1975 | Args: |
1935 | timeline_contains (dict): state in the timeline | |
1936 | timeline_start (dict): state at the start of the timeline | |
1937 | previous (dict): state at the end of the previous sync (or empty dict | |
1976 | timeline_contains: state in the timeline | |
1977 | timeline_start: state at the start of the timeline | |
1978 | previous: state at the end of the previous sync (or empty dict | |
1938 | 1979 | if this is an initial sync) |
1939 | current (dict): state at the end of the timeline | |
1940 | lazy_load_members (bool): whether to return members from timeline_start | |
1980 | current: state at the end of the timeline | |
1981 | lazy_load_members: whether to return members from timeline_start | |
1941 | 1982 | or not. assumes that timeline_start has already been filtered to |
1942 | 1983 | include only the members the client needs to know about. |
1943 | ||
1944 | Returns: | |
1945 | dict | |
1946 | 1984 | """ |
1947 | 1985 | event_id_to_key = { |
1948 | 1986 | e: key |
1979 | 2017 | return {event_id_to_key[e]: e for e in state_ids} |
1980 | 2018 | |
1981 | 2019 | |
1982 | class SyncResultBuilder(object): | |
2020 | @attr.s | |
2021 | class SyncResultBuilder: | |
1983 | 2022 | """Used to help build up a new SyncResult for a user |
1984 | 2023 | |
1985 | 2024 | Attributes: |
1986 | sync_config (SyncConfig) | |
1987 | full_state (bool) | |
1988 | since_token (StreamToken) | |
1989 | now_token (StreamToken) | |
1990 | joined_room_ids (list[str]) | |
2025 | sync_config | |
2026 | full_state: The full_state flag as specified by user | |
2027 | since_token: The token supplied by user, or None. | |
2028 | now_token: The token to sync up to. | |
2029 | joined_room_ids: List of rooms the user is joined to | |
1991 | 2030 | |
1992 | 2031 | # The following mirror the fields in a sync response |
1993 | 2032 | presence (list) |
1995 | 2034 | joined (list[JoinedSyncResult]) |
1996 | 2035 | invited (list[InvitedSyncResult]) |
1997 | 2036 | archived (list[ArchivedSyncResult]) |
1998 | device (list) | |
1999 | 2037 | groups (GroupsSyncResult|None) |
2000 | 2038 | to_device (list) |
2001 | 2039 | """ |
2002 | 2040 | |
2003 | def __init__( | |
2004 | self, sync_config, full_state, since_token, now_token, joined_room_ids | |
2005 | ): | |
2006 | """ | |
2007 | Args: | |
2008 | sync_config (SyncConfig) | |
2009 | full_state (bool): The full_state flag as specified by user | |
2010 | since_token (StreamToken): The token supplied by user, or None. | |
2011 | now_token (StreamToken): The token to sync up to. | |
2012 | joined_room_ids (list[str]): List of rooms the user is joined to | |
2013 | """ | |
2014 | self.sync_config = sync_config | |
2015 | self.full_state = full_state | |
2016 | self.since_token = since_token | |
2017 | self.now_token = now_token | |
2018 | self.joined_room_ids = joined_room_ids | |
2019 | ||
2020 | self.presence = [] | |
2021 | self.account_data = [] | |
2022 | self.joined = [] | |
2023 | self.invited = [] | |
2024 | self.archived = [] | |
2025 | self.device = [] | |
2026 | self.groups = None | |
2027 | self.to_device = [] | |
2028 | ||
2029 | ||
2041 | sync_config = attr.ib(type=SyncConfig) | |
2042 | full_state = attr.ib(type=bool) | |
2043 | since_token = attr.ib(type=Optional[StreamToken]) | |
2044 | now_token = attr.ib(type=StreamToken) | |
2045 | joined_room_ids = attr.ib(type=FrozenSet[str]) | |
2046 | ||
2047 | presence = attr.ib(type=List[JsonDict], default=attr.Factory(list)) | |
2048 | account_data = attr.ib(type=List[JsonDict], default=attr.Factory(list)) | |
2049 | joined = attr.ib(type=List[JoinedSyncResult], default=attr.Factory(list)) | |
2050 | invited = attr.ib(type=List[InvitedSyncResult], default=attr.Factory(list)) | |
2051 | archived = attr.ib(type=List[ArchivedSyncResult], default=attr.Factory(list)) | |
2052 | groups = attr.ib(type=Optional[GroupsSyncResult], default=None) | |
2053 | to_device = attr.ib(type=List[JsonDict], default=attr.Factory(list)) | |
2054 | ||
2055 | ||
2056 | @attr.s | |
2030 | 2057 | class RoomSyncResultBuilder(object): |
2031 | 2058 | """Stores information needed to create either a `JoinedSyncResult` or |
2032 | 2059 | `ArchivedSyncResult`. |
2060 | ||
2061 | Attributes: | |
2062 | room_id | |
2063 | rtype: One of `"joined"` or `"archived"` | |
2064 | events: List of events to include in the room (more events may be added | |
2065 | when generating result). | |
2066 | newly_joined: If the user has newly joined the room | |
2067 | full_state: Whether the full state should be sent in result | |
2068 | since_token: Earliest point to return events from, or None | |
2069 | upto_token: Latest point to return events from. | |
2033 | 2070 | """ |
2034 | 2071 | |
2035 | def __init__( | |
2036 | self, room_id, rtype, events, newly_joined, full_state, since_token, upto_token | |
2037 | ): | |
2038 | """ | |
2039 | Args: | |
2040 | room_id(str) | |
2041 | rtype(str): One of `"joined"` or `"archived"` | |
2042 | events(list[FrozenEvent]): List of events to include in the room | |
2043 | (more events may be added when generating result). | |
2044 | newly_joined(bool): If the user has newly joined the room | |
2045 | full_state(bool): Whether the full state should be sent in result | |
2046 | since_token(StreamToken): Earliest point to return events from, or None | |
2047 | upto_token(StreamToken): Latest point to return events from. | |
2048 | """ | |
2049 | self.room_id = room_id | |
2050 | self.rtype = rtype | |
2051 | self.events = events | |
2052 | self.newly_joined = newly_joined | |
2053 | self.full_state = full_state | |
2054 | self.since_token = since_token | |
2055 | self.upto_token = upto_token | |
2072 | room_id = attr.ib(type=str) | |
2073 | rtype = attr.ib(type=str) | |
2074 | events = attr.ib(type=Optional[List[EventBase]]) | |
2075 | newly_joined = attr.ib(type=bool) | |
2076 | full_state = attr.ib(type=bool) | |
2077 | since_token = attr.ib(type=Optional[StreamToken]) | |
2078 | upto_token = attr.ib(type=StreamToken) |
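The hunks above convert `SyncResultBuilder` and `RoomSyncResultBuilder` from hand-written `__init__` classes to `attr.s` declarations with typed `attr.ib` fields and `attr.Factory(list)` defaults. A minimal sketch of the same pattern, using stdlib dataclasses instead of attrs (the class and field names below are illustrative, not Synapse's):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class RoomChanges:
    # Typed field declarations replace a boilerplate __init__, mirroring
    # the attr.s conversion in the diff above.
    room_entries: List[str] = field(default_factory=list)
    invited: List[str] = field(default_factory=list)
    newly_joined_rooms: List[str] = field(default_factory=list)
    newly_left_rooms: List[str] = field(default_factory=list)


first = RoomChanges()
second = RoomChanges()
first.room_entries.append("!room:example.org")
# default_factory (like attr.Factory) gives each instance its own list;
# a plain `= []` default would be shared between instances.
print(second.room_entries)
```

The `attr.Factory(list)` defaults in the patch serve the same purpose as `default_factory=list` here: avoiding the shared-mutable-default pitfall.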
124 | 124 | if target_user_id != auth_user_id: |
125 | 125 | raise AuthError(400, "Cannot set another user's typing state") |
126 | 126 | |
127 | yield self.auth.check_joined_room(room_id, target_user_id) | |
127 | yield self.auth.check_user_in_room(room_id, target_user_id) | |
128 | 128 | |
129 | 129 | logger.debug("%s has started typing in %s", target_user_id, room_id) |
130 | 130 | |
154 | 154 | if target_user_id != auth_user_id: |
155 | 155 | raise AuthError(400, "Cannot set another user's typing state") |
156 | 156 | |
157 | yield self.auth.check_joined_room(room_id, target_user_id) | |
157 | yield self.auth.check_user_in_room(room_id, target_user_id) | |
158 | 158 | |
159 | 159 | logger.debug("%s has stopped typing in %s", target_user_id, room_id) |
160 | 160 |
51 | 51 | self.is_mine_id = hs.is_mine_id |
52 | 52 | self.update_user_directory = hs.config.update_user_directory |
53 | 53 | self.search_all_users = hs.config.user_directory_search_all_users |
54 | self.spam_checker = hs.get_spam_checker() | |
54 | 55 | # The current position in the current_state_delta stream |
55 | 56 | self.pos = None |
56 | 57 | |
64 | 65 | # we start populating the user directory |
65 | 66 | self.clock.call_later(0, self.notify_new_event) |
66 | 67 | |
67 | def search_users(self, user_id, search_term, limit): | |
68 | async def search_users(self, user_id, search_term, limit): | |
68 | 69 | """Searches for users in directory |
69 | 70 | |
70 | 71 | Returns: |
81 | 82 | ] |
82 | 83 | } |
83 | 84 | """ |
84 | return self.store.search_user_dir(user_id, search_term, limit) | |
85 | results = await self.store.search_user_dir(user_id, search_term, limit) | |
86 | ||
87 | # Remove any spammy users from the results. | |
88 | results["results"] = [ | |
89 | user | |
90 | for user in results["results"] | |
91 | if not self.spam_checker.check_username_for_spam(user) | |
92 | ] | |
93 | ||
94 | return results | |
85 | 95 | |
86 | 96 | def notify_new_event(self): |
87 | 97 | """Called when there may be more deltas to process |
148 | 158 | self.pos, room_max_stream_ordering |
149 | 159 | ) |
150 | 160 | |
151 | logger.info("Handling %d state deltas", len(deltas)) | |
161 | logger.debug("Handling %d state deltas", len(deltas)) | |
152 | 162 | yield self._handle_deltas(deltas) |
153 | 163 | |
154 | 164 | self.pos = max_pos |
194 | 204 | room_id, self.server_name |
195 | 205 | ) |
196 | 206 | if not is_in_room: |
197 | logger.info("Server left room: %r", room_id) | |
207 | logger.debug("Server left room: %r", room_id) | |
198 | 208 | # Fetch all the users that we marked as being in user |
199 | 209 | # directory due to being in the room and then check if |
200 | 210 | # need to remove those users or not |
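The user-directory hunks above make `search_users` async and filter its results through the spam checker. A standalone sketch of that filtering step, with a stand-in checker in place of Synapse's `spam_checker.check_username_for_spam` hook:

```python
# Stand-in for the spam checker hook: flag any user whose ID looks spammy.
def check_username_for_spam(user):
    return "spam" in user["user_id"]


def filter_search_results(results):
    # Drop flagged users from the directory search result dict, as the
    # patched search_users now does before returning to the client.
    results["results"] = [
        user
        for user in results["results"]
        if not check_username_for_spam(user)
    ]
    return results


out = filter_search_results(
    {
        "limited": False,
        "results": [
            {"user_id": "@alice:example.org"},
            {"user_id": "@spam123:example.org"},
        ],
    }
)
print([u["user_id"] for u in out["results"]])
```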
352 | 352 | if request.method == b"OPTIONS": |
353 | 353 | return _options_handler, "options_request_handler", {} |
354 | 354 | |
355 | request_path = request.path.decode("ascii") | |
356 | ||
355 | 357 | # Loop through all the registered callbacks to check if the method |
356 | 358 | # and path regex match |
357 | 359 | for path_entry in self.path_regexs.get(request.method, []): |
358 | m = path_entry.pattern.match(request.path.decode("ascii")) | |
360 | m = path_entry.pattern.match(request_path) | |
359 | 361 | if m: |
360 | 362 | # We found a match! |
361 | 363 | return path_entry.callback, path_entry.servlet_classname, m.groupdict() |
224 | 224 | self.start_time, name=servlet_name, method=self.get_method() |
225 | 225 | ) |
226 | 226 | |
227 | self.site.access_logger.info( | |
227 | self.site.access_logger.debug( | |
228 | 228 | "%s - %s - Received request: %s %s", |
229 | 229 | self.getClientIP(), |
230 | 230 | self.site.site_tag, |
397 | 397 | Args: |
398 | 398 | badge (int): number of unread messages |
399 | 399 | """ |
400 | logger.info("Sending updated badge count %d to %s", badge, self.name) | |
400 | logger.debug("Sending updated badge count %d to %s", badge, self.name) | |
401 | 401 | d = { |
402 | 402 | "notification": { |
403 | 403 | "id": "", |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | from synapse.storage import DataStore | |
15 | from synapse.replication.slave.storage._base import BaseSlavedStore | |
16 | from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker | |
17 | from synapse.storage.data_stores.main.group_server import GroupServerWorkerStore | |
16 | 18 | from synapse.storage.database import Database |
17 | 19 | from synapse.util.caches.stream_change_cache import StreamChangeCache |
18 | 20 | |
19 | from ._base import BaseSlavedStore, __func__ | |
20 | from ._slaved_id_tracker import SlavedIdTracker | |
21 | 21 | |
22 | ||
23 | class SlavedGroupServerStore(BaseSlavedStore): | |
22 | class SlavedGroupServerStore(GroupServerWorkerStore, BaseSlavedStore): | |
24 | 23 | def __init__(self, database: Database, db_conn, hs): |
25 | 24 | super(SlavedGroupServerStore, self).__init__(database, db_conn, hs) |
26 | 25 | |
34 | 33 | self._group_updates_id_gen.get_current_token(), |
35 | 34 | ) |
36 | 35 | |
37 | get_groups_changes_for_user = __func__(DataStore.get_groups_changes_for_user) | |
38 | get_group_stream_token = __func__(DataStore.get_group_stream_token) | |
39 | get_all_groups_for_user = __func__(DataStore.get_all_groups_for_user) | |
36 | def get_group_stream_token(self): | |
37 | return self._group_updates_id_gen.get_current_token() | |
40 | 38 | |
41 | 39 | def stream_positions(self): |
42 | 40 | result = super(SlavedGroupServerStore, self).stream_positions() |
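The slaved-store hunk above replaces the old `__func__`-rebinding of `DataStore` methods with plain inheritance from a shared `GroupServerWorkerStore` mixin. A toy sketch of the refactor (class and method bodies here are illustrative):

```python
class GroupServerWorkerStore:
    # Read-only queries shared between the master store and workers.
    def get_group(self, group_id):
        return {"group_id": group_id}


class BaseSlavedStore:
    pass


class SlavedGroupServerStore(GroupServerWorkerStore, BaseSlavedStore):
    # Inherits get_group directly; no need to copy unbound functions
    # off DataStore with __func__ as the old code did.
    pass


print(SlavedGroupServerStore().get_group("+group:example.org"))
```

Inheritance keeps the reader (and mypy) able to see where each method comes from, which the `__func__(DataStore.…)` trick obscured.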
20 | 20 | from six.moves import http_client |
21 | 21 | |
22 | 22 | from synapse.api.constants import UserTypes |
23 | from synapse.api.errors import Codes, SynapseError | |
23 | from synapse.api.errors import Codes, NotFoundError, SynapseError | |
24 | 24 | from synapse.http.servlet import ( |
25 | 25 | RestServlet, |
26 | 26 | assert_params_in_dict, |
104 | 104 | |
105 | 105 | |
106 | 106 | class UserRestServletV2(RestServlet): |
107 | PATTERNS = (re.compile("^/_synapse/admin/v2/users/(?P<user_id>@[^/]+)$"),) | |
107 | PATTERNS = (re.compile("^/_synapse/admin/v2/users/(?P<user_id>[^/]+)$"),) | |
108 | 108 | |
109 | 109 | """Get request to list user details. |
110 | 110 | This needs user to have administrator access in Synapse. |
135 | 135 | self.hs = hs |
136 | 136 | self.auth = hs.get_auth() |
137 | 137 | self.admin_handler = hs.get_handlers().admin_handler |
138 | self.store = hs.get_datastore() | |
139 | self.auth_handler = hs.get_auth_handler() | |
138 | 140 | self.profile_handler = hs.get_profile_handler() |
139 | 141 | self.set_password_handler = hs.get_set_password_handler() |
140 | 142 | self.deactivate_account_handler = hs.get_deactivate_account_handler() |
149 | 151 | |
150 | 152 | ret = await self.admin_handler.get_user(target_user) |
151 | 153 | |
154 | if not ret: | |
155 | raise NotFoundError("User not found") | |
156 | ||
152 | 157 | return 200, ret |
153 | 158 | |
154 | 159 | async def on_PUT(self, request, user_id): |
162 | 167 | raise SynapseError(400, "This endpoint can only be used with local users") |
163 | 168 | |
164 | 169 | user = await self.admin_handler.get_user(target_user) |
170 | user_id = target_user.to_string() | |
165 | 171 | |
166 | 172 | if user: # modify user |
167 | 173 | if "displayname" in body: |
168 | 174 | await self.profile_handler.set_displayname( |
169 | 175 | target_user, requester, body["displayname"], True |
170 | 176 | ) |
177 | ||
178 | if "threepids" in body: | |
179 | # check for required parameters for each threepid | |
180 | for threepid in body["threepids"]: | |
181 | assert_params_in_dict(threepid, ["medium", "address"]) | |
182 | ||
183 | # remove old threepids from user | |
184 | threepids = await self.store.user_get_threepids(user_id) | |
185 | for threepid in threepids: | |
186 | try: | |
187 | await self.auth_handler.delete_threepid( | |
188 | user_id, threepid["medium"], threepid["address"], None | |
189 | ) | |
190 | except Exception: | |
191 | logger.exception("Failed to remove threepids") | |
192 | raise SynapseError(500, "Failed to remove threepids") | |
193 | ||
194 | # add new threepids to user | |
195 | current_time = self.hs.get_clock().time_msec() | |
196 | for threepid in body["threepids"]: | |
197 | await self.auth_handler.add_threepid( | |
198 | user_id, threepid["medium"], threepid["address"], current_time | |
199 | ) | |
171 | 200 | |
172 | 201 | if "avatar_url" in body: |
173 | 202 | await self.profile_handler.set_avatar_url( |
220 | 249 | admin = body.get("admin", None) |
221 | 250 | user_type = body.get("user_type", None) |
222 | 251 | displayname = body.get("displayname", None) |
252 | threepids = body.get("threepids", None) | |
223 | 253 | |
224 | 254 | if user_type is not None and user_type not in UserTypes.ALL_USER_TYPES: |
225 | 255 | raise SynapseError(400, "Invalid user type") |
231 | 261 | default_display_name=displayname, |
232 | 262 | user_type=user_type, |
233 | 263 | ) |
264 | ||
265 | if "threepids" in body: | |
266 | # check for required parameters for each threepid | |
267 | for threepid in body["threepids"]: | |
268 | assert_params_in_dict(threepid, ["medium", "address"]) | |
269 | ||
270 | current_time = self.hs.get_clock().time_msec() | |
271 | for threepid in body["threepids"]: | |
272 | await self.auth_handler.add_threepid( | |
273 | user_id, threepid["medium"], threepid["address"], current_time | |
274 | ) | |
275 | ||
234 | 276 | if "avatar_url" in body: |
235 | 277 | await self.profile_handler.set_avatar_url( |
236 | 278 | user_id, requester, body["avatar_url"], True |
567 | 609 | {} |
568 | 610 | """ |
569 | 611 | |
570 | PATTERNS = (re.compile("^/_synapse/admin/v1/users/(?P<user_id>@[^/]*)/admin$"),) | |
612 | PATTERNS = (re.compile("^/_synapse/admin/v1/users/(?P<user_id>[^/]*)/admin$"),) | |
571 | 613 | |
572 | 614 | def __init__(self, hs): |
573 | 615 | self.hs = hs |
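The admin-API hunks above add threepid handling to the user PUT endpoint: each entry must carry `medium` and `address`, and the stored threepids are deleted before the request body's entries are added, so the body fully replaces what was stored. A hedged sketch of just that replacement semantics (the function name is illustrative, not Synapse's):

```python
def replace_threepids(stored, requested):
    # Validate required parameters for each threepid, as the handler
    # does via assert_params_in_dict.
    for threepid in requested:
        if not ("medium" in threepid and "address" in threepid):
            raise ValueError("missing params: medium, address")
    # Old threepids are removed before the new ones are added, so the
    # resulting set equals the request body regardless of `stored`.
    return [dict(t) for t in requested]


stored = [{"medium": "email", "address": "old@example.org"}]
new = replace_threepids(stored, [{"medium": "email", "address": "new@example.org"}])
print(new)
```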
15 | 15 | |
16 | 16 | """ This module contains REST servlets to do with rooms: /rooms/<paths> """ |
17 | 17 | import logging |
18 | import re | |
18 | 19 | from typing import List, Optional |
19 | 20 | |
20 | 21 | from six.moves.urllib import parse as urlparse |
44 | 45 | from synapse.streams.config import PaginationConfig |
45 | 46 | from synapse.types import RoomAlias, RoomID, StreamToken, ThirdPartyInstanceID, UserID |
46 | 47 | |
48 | MYPY = False | |
49 | if MYPY: | |
50 | import synapse.server | |
51 | ||
47 | 52 | logger = logging.getLogger(__name__) |
48 | 53 | |
49 | 54 | |
840 | 845 | ) |
841 | 846 | |
842 | 847 | return 200, {} |
848 | ||
849 | ||
850 | class RoomAliasListServlet(RestServlet): | |
851 | PATTERNS = [ | |
852 | re.compile( | |
853 | r"^/_matrix/client/unstable/org\.matrix\.msc2432" | |
854 | r"/rooms/(?P<room_id>[^/]*)/aliases" | |
855 | ), | |
856 | ] | |
857 | ||
858 | def __init__(self, hs: "synapse.server.HomeServer"): | |
859 | super().__init__() | |
860 | self.auth = hs.get_auth() | |
861 | self.directory_handler = hs.get_handlers().directory_handler | |
862 | ||
863 | async def on_GET(self, request, room_id): | |
864 | requester = await self.auth.get_user_by_req(request) | |
865 | ||
866 | alias_list = await self.directory_handler.get_aliases_for_room( | |
867 | requester, room_id | |
868 | ) | |
869 | ||
870 | return 200, {"aliases": alias_list} | |
843 | 871 | |
844 | 872 | |
845 | 873 | class SearchRestServlet(RestServlet): |
930 | 958 | JoinedRoomsRestServlet(hs).register(http_server) |
931 | 959 | RoomEventServlet(hs).register(http_server) |
932 | 960 | RoomEventContextServlet(hs).register(http_server) |
961 | RoomAliasListServlet(hs).register(http_server) | |
933 | 962 | |
934 | 963 | |
935 | 964 | def register_deprecated_servlets(hs, http_server): |
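The new `RoomAliasListServlet` above lives under the unstable `org.matrix.msc2432` prefix until MSC2432 lands in the spec. As a small sketch, a client would build the request path like this (percent-encoding the room ID, since `!` and `:` are not path-safe):

```python
from urllib.parse import quote


def aliases_path(room_id):
    # Matches the servlet's unstable MSC2432 route; the room_id segment
    # is percent-encoded so it can't escape the path component.
    return (
        "/_matrix/client/unstable/org.matrix.msc2432"
        "/rooms/%s/aliases" % quote(room_id, safe="")
    )


print(aliases_path("!abc:example.org"))
```

A successful authenticated GET to this path returns `200` with a body of the form `{"aliases": [...]}`, per the `on_GET` handler above.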
141 | 141 | ): |
142 | 142 | requester = await self.auth.get_user_by_req(request, allow_guest=True) |
143 | 143 | |
144 | await self.auth.check_in_room_or_world_readable( | |
145 | room_id, requester.user.to_string() | |
144 | await self.auth.check_user_in_room_or_world_readable( | |
145 | room_id, requester.user.to_string(), allow_departed_users=True | |
146 | 146 | ) |
147 | 147 | |
148 | 148 | # This gets the original event and checks that a) the event exists and |
234 | 234 | ): |
235 | 235 | requester = await self.auth.get_user_by_req(request, allow_guest=True) |
236 | 236 | |
237 | await self.auth.check_in_room_or_world_readable( | |
238 | room_id, requester.user.to_string() | |
237 | await self.auth.check_user_in_room_or_world_readable( | |
238 | room_id, requester.user.to_string(), allow_departed_users=True, | |
239 | 239 | ) |
240 | 240 | |
241 | 241 | # This checks that a) the event exists and b) the user is allowed to |
312 | 312 | async def on_GET(self, request, room_id, parent_id, relation_type, event_type, key): |
313 | 313 | requester = await self.auth.get_user_by_req(request, allow_guest=True) |
314 | 314 | |
315 | await self.auth.check_in_room_or_world_readable( | |
316 | room_id, requester.user.to_string() | |
315 | await self.auth.check_user_in_room_or_world_readable( | |
316 | room_id, requester.user.to_string(), allow_departed_users=True, | |
317 | 317 | ) |
318 | 318 | |
319 | 319 | # This checks that a) the event exists and b) the user is allowed to |
51 | 51 | ], |
52 | 52 | # as per MSC1497: |
53 | 53 | "unstable_features": { |
54 | "m.lazy_load_members": True, | |
55 | 54 | # as per MSC2190, as amended by MSC2264 |
56 | 55 | # to be removed in r0.6.0 |
57 | 56 | "m.id_access_token": True, |
72 | 71 | "org.matrix.label_based_filtering": True, |
73 | 72 | # Implements support for cross signing as described in MSC1756 |
74 | 73 | "org.matrix.e2e_cross_signing": True, |
74 | # Implements additional endpoints as described in MSC2432 | |
75 | "org.matrix.msc2432": True, | |
75 | 76 | }, |
76 | 77 | }, |
77 | 78 | ) |
49 | 49 | from synapse.federation.sender import FederationSender |
50 | 50 | from synapse.federation.transport.client import TransportLayerClient |
51 | 51 | from synapse.groups.attestations import GroupAttestationSigning, GroupAttestionRenewer |
52 | from synapse.groups.groups_server import GroupsServerHandler | |
52 | from synapse.groups.groups_server import GroupsServerHandler, GroupsServerWorkerHandler | |
53 | 53 | from synapse.handlers import Handlers |
54 | 54 | from synapse.handlers.account_validity import AccountValidityHandler |
55 | 55 | from synapse.handlers.acme import AcmeHandler |
61 | 61 | from synapse.handlers.e2e_keys import E2eKeysHandler |
62 | 62 | from synapse.handlers.e2e_room_keys import E2eRoomKeysHandler |
63 | 63 | from synapse.handlers.events import EventHandler, EventStreamHandler |
64 | from synapse.handlers.groups_local import GroupsLocalHandler | |
64 | from synapse.handlers.groups_local import GroupsLocalHandler, GroupsLocalWorkerHandler | |
65 | 65 | from synapse.handlers.initial_sync import InitialSyncHandler |
66 | 66 | from synapse.handlers.message import EventCreationHandler, MessageHandler |
67 | 67 | from synapse.handlers.pagination import PaginationHandler |
459 | 459 | return UserDirectoryHandler(self) |
460 | 460 | |
461 | 461 | def build_groups_local_handler(self): |
462 | return GroupsLocalHandler(self) | |
462 | if self.config.worker_app: | |
463 | return GroupsLocalWorkerHandler(self) | |
464 | else: | |
465 | return GroupsLocalHandler(self) | |
463 | 466 | |
464 | 467 | def build_groups_server_handler(self): |
465 | return GroupsServerHandler(self) | |
468 | if self.config.worker_app: | |
469 | return GroupsServerWorkerHandler(self) | |
470 | else: | |
471 | return GroupsServerHandler(self) | |
466 | 472 | |
467 | 473 | def build_groups_attestation_signing(self): |
468 | 474 | return GroupAttestationSigning(self) |
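The `server.py` hunks above make the groups handler builders worker-aware: when `config.worker_app` is set, a read-only worker variant is constructed instead of the full handler. A minimal sketch of that selection pattern (class names below are stand-ins, not Synapse's):

```python
class GroupsHandler:
    # Full handler with write paths, used on the master process.
    writable = True


class GroupsWorkerHandler:
    # Read-only subset safe to run on a worker.
    writable = False


def build_groups_handler(worker_app):
    # Mirrors the builder's branch: worker processes get the worker
    # variant, the master gets the full handler.
    if worker_app:
        return GroupsWorkerHandler()
    return GroupsHandler()


print(type(build_groups_handler("synapse.app.generic_worker")).__name__)
```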
106 | 106 | self, |
107 | 107 | ) -> synapse.replication.tcp.client.ReplicationClientHandler: |
108 | 108 | pass |
109 | def is_mine_id(self, domain_id: str) -> bool: | |
110 | pass |
17 | 17 | |
18 | 18 | from synapse.storage.state import StateFilter |
19 | 19 | |
20 | MYPY = False | |
21 | if MYPY: | |
22 | import synapse.server | |
23 | ||
20 | 24 | logger = logging.getLogger(__name__) |
21 | 25 | |
22 | 26 | |
25 | 29 | access to rooms and other relevant information. |
26 | 30 | """ |
27 | 31 | |
28 | def __init__(self, hs): | |
32 | def __init__(self, hs: "synapse.server.HomeServer"): | |
29 | 33 | self.hs = hs |
30 | 34 | |
31 | 35 | self._store = hs.get_datastore() |
32 | 36 | |
33 | 37 | @defer.inlineCallbacks |
34 | def get_state_events_in_room(self, room_id, types): | |
38 | def get_state_events_in_room(self, room_id: str, types: tuple) -> defer.Deferred: | |
35 | 39 | """Gets state events for the given room. |
36 | 40 | |
37 | 41 | Args: |
38 | room_id (string): The room ID to get state events in. | |
39 | types (tuple): The event type and state key (using None | |
42 | room_id: The room ID to get state events in. | |
43 | types: The event type and state key (using None | |
40 | 44 | to represent 'any') of the room state to acquire. |
41 | 45 | |
42 | 46 | Returns: |
26 | 26 | _DEFAULT_ROLE_ID = "" |
27 | 27 | |
28 | 28 | |
29 | class GroupServerStore(SQLBaseStore): | |
30 | def set_group_join_policy(self, group_id, join_policy): | |
31 | """Set the join policy of a group. | |
32 | ||
33 | join_policy can be one of: | |
34 | * "invite" | |
35 | * "open" | |
36 | """ | |
37 | return self.db.simple_update_one( | |
38 | table="groups", | |
39 | keyvalues={"group_id": group_id}, | |
40 | updatevalues={"join_policy": join_policy}, | |
41 | desc="set_group_join_policy", | |
42 | ) | |
43 | ||
29 | class GroupServerWorkerStore(SQLBaseStore): | |
44 | 30 | def get_group(self, group_id): |
45 | 31 | return self.db.simple_select_one( |
46 | 32 | table="groups", |
156 | 142 | "get_rooms_for_summary", _get_rooms_for_summary_txn |
157 | 143 | ) |
158 | 144 | |
145 | @defer.inlineCallbacks | |
146 | def get_group_categories(self, group_id): | |
147 | rows = yield self.db.simple_select_list( | |
148 | table="group_room_categories", | |
149 | keyvalues={"group_id": group_id}, | |
150 | retcols=("category_id", "is_public", "profile"), | |
151 | desc="get_group_categories", | |
152 | ) | |
153 | ||
154 | return { | |
155 | row["category_id"]: { | |
156 | "is_public": row["is_public"], | |
157 | "profile": json.loads(row["profile"]), | |
158 | } | |
159 | for row in rows | |
160 | } | |
161 | ||
162 | @defer.inlineCallbacks | |
163 | def get_group_category(self, group_id, category_id): | |
164 | category = yield self.db.simple_select_one( | |
165 | table="group_room_categories", | |
166 | keyvalues={"group_id": group_id, "category_id": category_id}, | |
167 | retcols=("is_public", "profile"), | |
168 | desc="get_group_category", | |
169 | ) | |
170 | ||
171 | category["profile"] = json.loads(category["profile"]) | |
172 | ||
173 | return category | |
174 | ||
175 | @defer.inlineCallbacks | |
176 | def get_group_roles(self, group_id): | |
177 | rows = yield self.db.simple_select_list( | |
178 | table="group_roles", | |
179 | keyvalues={"group_id": group_id}, | |
180 | retcols=("role_id", "is_public", "profile"), | |
181 | desc="get_group_roles", | |
182 | ) | |
183 | ||
184 | return { | |
185 | row["role_id"]: { | |
186 | "is_public": row["is_public"], | |
187 | "profile": json.loads(row["profile"]), | |
188 | } | |
189 | for row in rows | |
190 | } | |
191 | ||
192 | @defer.inlineCallbacks | |
193 | def get_group_role(self, group_id, role_id): | |
194 | role = yield self.db.simple_select_one( | |
195 | table="group_roles", | |
196 | keyvalues={"group_id": group_id, "role_id": role_id}, | |
197 | retcols=("is_public", "profile"), | |
198 | desc="get_group_role", | |
199 | ) | |
200 | ||
201 | role["profile"] = json.loads(role["profile"]) | |
202 | ||
203 | return role | |
204 | ||
205 | def get_local_groups_for_room(self, room_id): | |
206 | """Get all of the local group that contain a given room | |
207 | Args: | |
208 | room_id (str): The ID of a room | |
209 | Returns: | |
210 | Deferred[list[str]]: A twisted.Deferred containing a list of group ids | |
211 | containing this room | |
212 | """ | |
213 | return self.db.simple_select_onecol( | |
214 | table="group_rooms", | |
215 | keyvalues={"room_id": room_id}, | |
216 | retcol="group_id", | |
217 | desc="get_local_groups_for_room", | |
218 | ) | |
219 | ||
220 | def get_users_for_summary_by_role(self, group_id, include_private=False): | |
221 | """Get the users and roles that should be included in a summary request | |
222 | ||
223 | Returns ([users], [roles]) | |
224 | """ | |
225 | ||
226 | def _get_users_for_summary_txn(txn): | |
227 | keyvalues = {"group_id": group_id} | |
228 | if not include_private: | |
229 | keyvalues["is_public"] = True | |
230 | ||
231 | sql = """ | |
232 | SELECT user_id, is_public, role_id, user_order | |
233 | FROM group_summary_users | |
234 | WHERE group_id = ? | |
235 | """ | |
236 | ||
237 | if not include_private: | |
238 | sql += " AND is_public = ?" | |
239 | txn.execute(sql, (group_id, True)) | |
240 | else: | |
241 | txn.execute(sql, (group_id,)) | |
242 | ||
243 | users = [ | |
244 | { | |
245 | "user_id": row[0], | |
246 | "is_public": row[1], | |
247 | "role_id": row[2] if row[2] != _DEFAULT_ROLE_ID else None, | |
248 | "order": row[3], | |
249 | } | |
250 | for row in txn | |
251 | ] | |
252 | ||
253 | sql = """ | |
254 | SELECT role_id, is_public, profile, role_order | |
255 | FROM group_summary_roles | |
256 | INNER JOIN group_roles USING (group_id, role_id) | |
257 | WHERE group_id = ? | |
258 | """ | |
259 | ||
260 | if not include_private: | |
261 | sql += " AND is_public = ?" | |
262 | txn.execute(sql, (group_id, True)) | |
263 | else: | |
264 | txn.execute(sql, (group_id,)) | |
265 | ||
266 | roles = { | |
267 | row[0]: { | |
268 | "is_public": row[1], | |
269 | "profile": json.loads(row[2]), | |
270 | "order": row[3], | |
271 | } | |
272 | for row in txn | |
273 | } | |
274 | ||
275 | return users, roles | |
276 | ||
277 | return self.db.runInteraction( | |
278 | "get_users_for_summary_by_role", _get_users_for_summary_txn | |
279 | ) | |
280 | ||
281 | def is_user_in_group(self, user_id, group_id): | |
282 | return self.db.simple_select_one_onecol( | |
283 | table="group_users", | |
284 | keyvalues={"group_id": group_id, "user_id": user_id}, | |
285 | retcol="user_id", | |
286 | allow_none=True, | |
287 | desc="is_user_in_group", | |
288 | ).addCallback(lambda r: bool(r)) | |
289 | ||
290 | def is_user_admin_in_group(self, group_id, user_id): | |
291 | return self.db.simple_select_one_onecol( | |
292 | table="group_users", | |
293 | keyvalues={"group_id": group_id, "user_id": user_id}, | |
294 | retcol="is_admin", | |
295 | allow_none=True, | |
296 | desc="is_user_admin_in_group", | |
297 | ) | |
298 | ||
299 | def is_user_invited_to_local_group(self, group_id, user_id): | |
300 | """Has the group server invited a user? | |
301 | """ | |
302 | return self.db.simple_select_one_onecol( | |
303 | table="group_invites", | |
304 | keyvalues={"group_id": group_id, "user_id": user_id}, | |
305 | retcol="user_id", | |
306 | desc="is_user_invited_to_local_group", | |
307 | allow_none=True, | |
308 | ) | |
309 | ||
310 | def get_users_membership_info_in_group(self, group_id, user_id): | |
311 | """Get a dict describing the membership of a user in a group. | |
312 | ||
313 | Example if joined: | |
314 | ||
315 | { | |
316 | "membership": "join", | |
317 | "is_public": True, | |
318 | "is_privileged": False, | |
319 | } | |
320 | ||
321 | Returns an empty dict if the user is not joined or invited. | |
322 | """ | |
323 | ||
324 | def _get_users_membership_in_group_txn(txn): | |
325 | row = self.db.simple_select_one_txn( | |
326 | txn, | |
327 | table="group_users", | |
328 | keyvalues={"group_id": group_id, "user_id": user_id}, | |
329 | retcols=("is_admin", "is_public"), | |
330 | allow_none=True, | |
331 | ) | |
332 | ||
333 | if row: | |
334 | return { | |
335 | "membership": "join", | |
336 | "is_public": row["is_public"], | |
337 | "is_privileged": row["is_admin"], | |
338 | } | |
339 | ||
340 | row = self.db.simple_select_one_onecol_txn( | |
341 | txn, | |
342 | table="group_invites", | |
343 | keyvalues={"group_id": group_id, "user_id": user_id}, | |
344 | retcol="user_id", | |
345 | allow_none=True, | |
346 | ) | |
347 | ||
348 | if row: | |
349 | return {"membership": "invite"} | |
350 | ||
351 | return {} | |
352 | ||
353 | return self.db.runInteraction( | |
354 | "get_users_membership_info_in_group", _get_users_membership_in_group_txn | |
355 | ) | |
356 | ||
357 | def get_publicised_groups_for_user(self, user_id): | |
358 | """Get all groups a user is publicising | |
359 | """ | |
360 | return self.db.simple_select_onecol( | |
361 | table="local_group_membership", | |
362 | keyvalues={"user_id": user_id, "membership": "join", "is_publicised": True}, | |
363 | retcol="group_id", | |
364 | desc="get_publicised_groups_for_user", | |
365 | ) | |
366 | ||
367 | def get_attestations_need_renewals(self, valid_until_ms): | |
368 | """Get all attestations that need to be renewed until givent time | |
369 | """ | |
370 | ||
371 | def _get_attestations_need_renewals_txn(txn): | |
372 | sql = """ | |
373 | SELECT group_id, user_id FROM group_attestations_renewals | |
374 | WHERE valid_until_ms <= ? | |
375 | """ | |
376 | txn.execute(sql, (valid_until_ms,)) | |
377 | return self.db.cursor_to_dict(txn) | |
378 | ||
379 | return self.db.runInteraction( | |
380 | "get_attestations_need_renewals", _get_attestations_need_renewals_txn | |
381 | ) | |
382 | ||
383 | @defer.inlineCallbacks | |
384 | def get_remote_attestation(self, group_id, user_id): | |
385 | """Get the attestation that proves the remote agrees that the user is | |
386 | in the group. | |
387 | """ | |
388 | row = yield self.db.simple_select_one( | |
389 | table="group_attestations_remote", | |
390 | keyvalues={"group_id": group_id, "user_id": user_id}, | |
391 | retcols=("valid_until_ms", "attestation_json"), | |
392 | desc="get_remote_attestation", | |
393 | allow_none=True, | |
394 | ) | |
395 | ||
396 | now = int(self._clock.time_msec()) | |
397 | if row and now < row["valid_until_ms"]: | |
398 | return json.loads(row["attestation_json"]) | |
399 | ||
400 | return None | |
401 | ||
402 | def get_joined_groups(self, user_id): | |
403 | return self.db.simple_select_onecol( | |
404 | table="local_group_membership", | |
405 | keyvalues={"user_id": user_id, "membership": "join"}, | |
406 | retcol="group_id", | |
407 | desc="get_joined_groups", | |
408 | ) | |
409 | ||
410 | def get_all_groups_for_user(self, user_id, now_token): | |
411 | def _get_all_groups_for_user_txn(txn): | |
412 | sql = """ | |
413 | SELECT group_id, type, membership, u.content | |
414 | FROM local_group_updates AS u | |
415 | INNER JOIN local_group_membership USING (group_id, user_id) | |
416 | WHERE user_id = ? AND membership != 'leave' | |
417 | AND stream_id <= ? | |
418 | """ | |
419 | txn.execute(sql, (user_id, now_token)) | |
420 | return [ | |
421 | { | |
422 | "group_id": row[0], | |
423 | "type": row[1], | |
424 | "membership": row[2], | |
425 | "content": json.loads(row[3]), | |
426 | } | |
427 | for row in txn | |
428 | ] | |
429 | ||
430 | return self.db.runInteraction( | |
431 | "get_all_groups_for_user", _get_all_groups_for_user_txn | |
432 | ) | |
433 | ||
434 | def get_groups_changes_for_user(self, user_id, from_token, to_token): | |
435 | from_token = int(from_token) | |
436 | has_changed = self._group_updates_stream_cache.has_entity_changed( | |
437 | user_id, from_token | |
438 | ) | |
439 | if not has_changed: | |
440 | return defer.succeed([]) | |
441 | ||
442 | def _get_groups_changes_for_user_txn(txn): | |
443 | sql = """ | |
444 | SELECT group_id, membership, type, u.content | |
445 | FROM local_group_updates AS u | |
446 | INNER JOIN local_group_membership USING (group_id, user_id) | |
447 | WHERE user_id = ? AND ? < stream_id AND stream_id <= ? | |
448 | """ | |
449 | txn.execute(sql, (user_id, from_token, to_token)) | |
450 | return [ | |
451 | { | |
452 | "group_id": group_id, | |
453 | "membership": membership, | |
454 | "type": gtype, | |
455 | "content": json.loads(content_json), | |
456 | } | |
457 | for group_id, membership, gtype, content_json in txn | |
458 | ] | |
459 | ||
460 | return self.db.runInteraction( | |
461 | "get_groups_changes_for_user", _get_groups_changes_for_user_txn | |
462 | ) | |
463 | ||
464 | def get_all_groups_changes(self, from_token, to_token, limit): | |
465 | from_token = int(from_token) | |
466 | has_changed = self._group_updates_stream_cache.has_any_entity_changed( | |
467 | from_token | |
468 | ) | |
469 | if not has_changed: | |
470 | return defer.succeed([]) | |
471 | ||
472 | def _get_all_groups_changes_txn(txn): | |
473 | sql = """ | |
474 | SELECT stream_id, group_id, user_id, type, content | |
475 | FROM local_group_updates | |
476 | WHERE ? < stream_id AND stream_id <= ? | |
477 | LIMIT ? | |
478 | """ | |
479 | txn.execute(sql, (from_token, to_token, limit)) | |
480 | return [ | |
481 | (stream_id, group_id, user_id, gtype, json.loads(content_json)) | |
482 | for stream_id, group_id, user_id, gtype, content_json in txn | |
483 | ] | |
484 | ||
485 | return self.db.runInteraction( | |
486 | "get_all_groups_changes", _get_all_groups_changes_txn | |
487 | ) | |
488 | ||
489 | ||
490 | class GroupServerStore(GroupServerWorkerStore): | |
491 | def set_group_join_policy(self, group_id, join_policy): | |
492 | """Set the join policy of a group. | |
493 | ||
494 | join_policy can be one of: | |
495 | * "invite" | |
496 | * "open" | |
497 | """ | |
498 | return self.db.simple_update_one( | |
499 | table="groups", | |
500 | keyvalues={"group_id": group_id}, | |
501 | updatevalues={"join_policy": join_policy}, | |
502 | desc="set_group_join_policy", | |
503 | ) | |
504 | ||
159 | 505 | def add_room_to_summary(self, group_id, room_id, category_id, order, is_public): |
160 | 506 | return self.db.runInteraction( |
161 | 507 | "add_room_to_summary", |
298 | 644 | desc="remove_room_from_summary", |
299 | 645 | ) |
300 | 646 | |
301 | @defer.inlineCallbacks | |
302 | def get_group_categories(self, group_id): | |
303 | rows = yield self.db.simple_select_list( | |
304 | table="group_room_categories", | |
305 | keyvalues={"group_id": group_id}, | |
306 | retcols=("category_id", "is_public", "profile"), | |
307 | desc="get_group_categories", | |
308 | ) | |
309 | ||
310 | return { | |
311 | row["category_id"]: { | |
312 | "is_public": row["is_public"], | |
313 | "profile": json.loads(row["profile"]), | |
314 | } | |
315 | for row in rows | |
316 | } | |
317 | ||
318 | @defer.inlineCallbacks | |
319 | def get_group_category(self, group_id, category_id): | |
320 | category = yield self.db.simple_select_one( | |
321 | table="group_room_categories", | |
322 | keyvalues={"group_id": group_id, "category_id": category_id}, | |
323 | retcols=("is_public", "profile"), | |
324 | desc="get_group_category", | |
325 | ) | |
326 | ||
327 | category["profile"] = json.loads(category["profile"]) | |
328 | ||
329 | return category | |
330 | ||
331 | 647 | def upsert_group_category(self, group_id, category_id, profile, is_public): |
332 | 648 | """Add/update room category for group |
333 | 649 | """ |
358 | 674 | keyvalues={"group_id": group_id, "category_id": category_id}, |
359 | 675 | desc="remove_group_category", |
360 | 676 | ) |
361 | ||
362 | @defer.inlineCallbacks | |
363 | def get_group_roles(self, group_id): | |
364 | rows = yield self.db.simple_select_list( | |
365 | table="group_roles", | |
366 | keyvalues={"group_id": group_id}, | |
367 | retcols=("role_id", "is_public", "profile"), | |
368 | desc="get_group_roles", | |
369 | ) | |
370 | ||
371 | return { | |
372 | row["role_id"]: { | |
373 | "is_public": row["is_public"], | |
374 | "profile": json.loads(row["profile"]), | |
375 | } | |
376 | for row in rows | |
377 | } | |
378 | ||
379 | @defer.inlineCallbacks | |
380 | def get_group_role(self, group_id, role_id): | |
381 | role = yield self.db.simple_select_one( | |
382 | table="group_roles", | |
383 | keyvalues={"group_id": group_id, "role_id": role_id}, | |
384 | retcols=("is_public", "profile"), | |
385 | desc="get_group_role", | |
386 | ) | |
387 | ||
388 | role["profile"] = json.loads(role["profile"]) | |
389 | ||
390 | return role | |
391 | 677 | |
392 | 678 | def upsert_group_role(self, group_id, role_id, profile, is_public): |
393 | 679 | """Add/remove user role |
554 | 840 | desc="remove_user_from_summary", |
555 | 841 | ) |
556 | 842 | |
557 | def get_local_groups_for_room(self, room_id): | |
558 | """Get all of the local group that contain a given room | |
559 | Args: | |
560 | room_id (str): The ID of a room | |
561 | Returns: | |
562 | Deferred[list[str]]: A twisted.Deferred containing a list of group ids | |
563 | containing this room | |
564 | """ | |
565 | return self.db.simple_select_onecol( | |
566 | table="group_rooms", | |
567 | keyvalues={"room_id": room_id}, | |
568 | retcol="group_id", | |
569 | desc="get_local_groups_for_room", | |
570 | ) | |
571 | ||
572 | def get_users_for_summary_by_role(self, group_id, include_private=False): | |
573 | """Get the users and roles that should be included in a summary request | |
574 | ||
575 | Returns ([users], [roles]) | |
576 | """ | |
577 | ||
578 | def _get_users_for_summary_txn(txn): | |
579 | keyvalues = {"group_id": group_id} | |
580 | if not include_private: | |
581 | keyvalues["is_public"] = True | |
582 | ||
583 | sql = """ | |
584 | SELECT user_id, is_public, role_id, user_order | |
585 | FROM group_summary_users | |
586 | WHERE group_id = ? | |
587 | """ | |
588 | ||
589 | if not include_private: | |
590 | sql += " AND is_public = ?" | |
591 | txn.execute(sql, (group_id, True)) | |
592 | else: | |
593 | txn.execute(sql, (group_id,)) | |
594 | ||
595 | users = [ | |
596 | { | |
597 | "user_id": row[0], | |
598 | "is_public": row[1], | |
599 | "role_id": row[2] if row[2] != _DEFAULT_ROLE_ID else None, | |
600 | "order": row[3], | |
601 | } | |
602 | for row in txn | |
603 | ] | |
604 | ||
605 | sql = """ | |
606 | SELECT role_id, is_public, profile, role_order | |
607 | FROM group_summary_roles | |
608 | INNER JOIN group_roles USING (group_id, role_id) | |
609 | WHERE group_id = ? | |
610 | """ | |
611 | ||
612 | if not include_private: | |
613 | sql += " AND is_public = ?" | |
614 | txn.execute(sql, (group_id, True)) | |
615 | else: | |
616 | txn.execute(sql, (group_id,)) | |
617 | ||
618 | roles = { | |
619 | row[0]: { | |
620 | "is_public": row[1], | |
621 | "profile": json.loads(row[2]), | |
622 | "order": row[3], | |
623 | } | |
624 | for row in txn | |
625 | } | |
626 | ||
627 | return users, roles | |
628 | ||
629 | return self.db.runInteraction( | |
630 | "get_users_for_summary_by_role", _get_users_for_summary_txn | |
631 | ) | |
632 | ||
633 | def is_user_in_group(self, user_id, group_id): | |
634 | return self.db.simple_select_one_onecol( | |
635 | table="group_users", | |
636 | keyvalues={"group_id": group_id, "user_id": user_id}, | |
637 | retcol="user_id", | |
638 | allow_none=True, | |
639 | desc="is_user_in_group", | |
640 | ).addCallback(lambda r: bool(r)) | |
641 | ||
642 | def is_user_admin_in_group(self, group_id, user_id): | |
643 | return self.db.simple_select_one_onecol( | |
644 | table="group_users", | |
645 | keyvalues={"group_id": group_id, "user_id": user_id}, | |
646 | retcol="is_admin", | |
647 | allow_none=True, | |
648 | desc="is_user_admin_in_group", | |
649 | ) | |
650 | ||
651 | 843 | def add_group_invite(self, group_id, user_id): |
652 | 844 | """Record that the group server has invited a user |
653 | 845 | """ |
655 | 847 | table="group_invites", |
656 | 848 | values={"group_id": group_id, "user_id": user_id}, |
657 | 849 | desc="add_group_invite", |
658 | ) | |
659 | ||
660 | def is_user_invited_to_local_group(self, group_id, user_id): | |
661 | """Has the group server invited a user? | |
662 | """ | |
663 | return self.db.simple_select_one_onecol( | |
664 | table="group_invites", | |
665 | keyvalues={"group_id": group_id, "user_id": user_id}, | |
666 | retcol="user_id", | |
667 | desc="is_user_invited_to_local_group", | |
668 | allow_none=True, | |
669 | ) | |
670 | ||
671 | def get_users_membership_info_in_group(self, group_id, user_id): | |
672 | """Get a dict describing the membership of a user in a group. | |
673 | ||
674 | Example if joined: | |
675 | ||
676 | { | |
677 | "membership": "join", | |
678 | "is_public": True, | |
679 | "is_privileged": False, | |
680 | } | |
681 | ||
682 | Returns an empty dict if the user is not joined or invited. | |
683 | """ | |
684 | ||
685 | def _get_users_membership_in_group_txn(txn): | |
686 | row = self.db.simple_select_one_txn( | |
687 | txn, | |
688 | table="group_users", | |
689 | keyvalues={"group_id": group_id, "user_id": user_id}, | |
690 | retcols=("is_admin", "is_public"), | |
691 | allow_none=True, | |
692 | ) | |
693 | ||
694 | if row: | |
695 | return { | |
696 | "membership": "join", | |
697 | "is_public": row["is_public"], | |
698 | "is_privileged": row["is_admin"], | |
699 | } | |
700 | ||
701 | row = self.db.simple_select_one_onecol_txn( | |
702 | txn, | |
703 | table="group_invites", | |
704 | keyvalues={"group_id": group_id, "user_id": user_id}, | |
705 | retcol="user_id", | |
706 | allow_none=True, | |
707 | ) | |
708 | ||
709 | if row: | |
710 | return {"membership": "invite"} | |
711 | ||
712 | return {} | |
713 | ||
714 | return self.db.runInteraction( | |
715 | "get_users_membership_info_in_group", _get_users_membership_in_group_txn | |
716 | 850 | ) |
717 | 851 | |
718 | 852 | def add_user_to_group( |
843 | 977 | |
844 | 978 | return self.db.runInteraction( |
845 | 979 | "remove_room_from_group", _remove_room_from_group_txn |
846 | ) | |
847 | ||
848 | def get_publicised_groups_for_user(self, user_id): | |
849 | """Get all groups a user is publicising | |
850 | """ | |
851 | return self.db.simple_select_onecol( | |
852 | table="local_group_membership", | |
853 | keyvalues={"user_id": user_id, "membership": "join", "is_publicised": True}, | |
854 | retcol="group_id", | |
855 | desc="get_publicised_groups_for_user", | |
856 | 980 | ) |
857 | 981 | |
858 | 982 | def update_group_publicity(self, group_id, user_id, publicise): |
999 | 1123 | desc="update_group_profile", |
1000 | 1124 | ) |
1001 | 1125 | |
1002 | def get_attestations_need_renewals(self, valid_until_ms): | |
1003 | """Get all attestations that need to be renewed until givent time | |
1004 | """ | |
1005 | ||
1006 | def _get_attestations_need_renewals_txn(txn): | |
1007 | sql = """ | |
1008 | SELECT group_id, user_id FROM group_attestations_renewals | |
1009 | WHERE valid_until_ms <= ? | |
1010 | """ | |
1011 | txn.execute(sql, (valid_until_ms,)) | |
1012 | return self.db.cursor_to_dict(txn) | |
1013 | ||
1014 | return self.db.runInteraction( | |
1015 | "get_attestations_need_renewals", _get_attestations_need_renewals_txn | |
1016 | ) | |
1017 | ||
1018 | 1126 | def update_attestation_renewal(self, group_id, user_id, attestation): |
1019 | 1127 | """Update an attestation that we have renewed |
1020 | 1128 | """ |
1051 | 1159 | table="group_attestations_renewals", |
1052 | 1160 | keyvalues={"group_id": group_id, "user_id": user_id}, |
1053 | 1161 | desc="remove_attestation_renewal", |
1054 | ) | |
1055 | ||
1056 | @defer.inlineCallbacks | |
1057 | def get_remote_attestation(self, group_id, user_id): | |
1058 | """Get the attestation that proves the remote agrees that the user is | |
1059 | in the group. | |
1060 | """ | |
1061 | row = yield self.db.simple_select_one( | |
1062 | table="group_attestations_remote", | |
1063 | keyvalues={"group_id": group_id, "user_id": user_id}, | |
1064 | retcols=("valid_until_ms", "attestation_json"), | |
1065 | desc="get_remote_attestation", | |
1066 | allow_none=True, | |
1067 | ) | |
1068 | ||
1069 | now = int(self._clock.time_msec()) | |
1070 | if row and now < row["valid_until_ms"]: | |
1071 | return json.loads(row["attestation_json"]) | |
1072 | ||
1073 | return None | |
1074 | ||
1075 | def get_joined_groups(self, user_id): | |
1076 | return self.db.simple_select_onecol( | |
1077 | table="local_group_membership", | |
1078 | keyvalues={"user_id": user_id, "membership": "join"}, | |
1079 | retcol="group_id", | |
1080 | desc="get_joined_groups", | |
1081 | ) | |
1082 | ||
1083 | def get_all_groups_for_user(self, user_id, now_token): | |
1084 | def _get_all_groups_for_user_txn(txn): | |
1085 | sql = """ | |
1086 | SELECT group_id, type, membership, u.content | |
1087 | FROM local_group_updates AS u | |
1088 | INNER JOIN local_group_membership USING (group_id, user_id) | |
1089 | WHERE user_id = ? AND membership != 'leave' | |
1090 | AND stream_id <= ? | |
1091 | """ | |
1092 | txn.execute(sql, (user_id, now_token)) | |
1093 | return [ | |
1094 | { | |
1095 | "group_id": row[0], | |
1096 | "type": row[1], | |
1097 | "membership": row[2], | |
1098 | "content": json.loads(row[3]), | |
1099 | } | |
1100 | for row in txn | |
1101 | ] | |
1102 | ||
1103 | return self.db.runInteraction( | |
1104 | "get_all_groups_for_user", _get_all_groups_for_user_txn | |
1105 | ) | |
1106 | ||
1107 | def get_groups_changes_for_user(self, user_id, from_token, to_token): | |
1108 | from_token = int(from_token) | |
1109 | has_changed = self._group_updates_stream_cache.has_entity_changed( | |
1110 | user_id, from_token | |
1111 | ) | |
1112 | if not has_changed: | |
1113 | return defer.succeed([]) | |
1114 | ||
1115 | def _get_groups_changes_for_user_txn(txn): | |
1116 | sql = """ | |
1117 | SELECT group_id, membership, type, u.content | |
1118 | FROM local_group_updates AS u | |
1119 | INNER JOIN local_group_membership USING (group_id, user_id) | |
1120 | WHERE user_id = ? AND ? < stream_id AND stream_id <= ? | |
1121 | """ | |
1122 | txn.execute(sql, (user_id, from_token, to_token)) | |
1123 | return [ | |
1124 | { | |
1125 | "group_id": group_id, | |
1126 | "membership": membership, | |
1127 | "type": gtype, | |
1128 | "content": json.loads(content_json), | |
1129 | } | |
1130 | for group_id, membership, gtype, content_json in txn | |
1131 | ] | |
1132 | ||
1133 | return self.db.runInteraction( | |
1134 | "get_groups_changes_for_user", _get_groups_changes_for_user_txn | |
1135 | ) | |
1136 | ||
1137 | def get_all_groups_changes(self, from_token, to_token, limit): | |
1138 | from_token = int(from_token) | |
1139 | has_changed = self._group_updates_stream_cache.has_any_entity_changed( | |
1140 | from_token | |
1141 | ) | |
1142 | if not has_changed: | |
1143 | return defer.succeed([]) | |
1144 | ||
1145 | def _get_all_groups_changes_txn(txn): | |
1146 | sql = """ | |
1147 | SELECT stream_id, group_id, user_id, type, content | |
1148 | FROM local_group_updates | |
1149 | WHERE ? < stream_id AND stream_id <= ? | |
1150 | LIMIT ? | |
1151 | """ | |
1152 | txn.execute(sql, (from_token, to_token, limit)) | |
1153 | return [ | |
1154 | (stream_id, group_id, user_id, gtype, json.loads(content_json)) | |
1155 | for stream_id, group_id, user_id, gtype, content_json in txn | |
1156 | ] | |
1157 | ||
1158 | return self.db.runInteraction( | |
1159 | "get_all_groups_changes", _get_all_groups_changes_txn | |
1160 | 1162 | ) |
1161 | 1163 | |
1162 | 1164 | def get_group_stream_token(self): |
867 | 867 | desc="get_membership_from_event_ids", |
868 | 868 | ) |
869 | 869 | |
870 | async def is_local_host_in_room_ignoring_users( | |
871 | self, room_id: str, ignore_users: Collection[str] | |
872 | ) -> bool: | |
873 | """Check if there are any local users, excluding those in the given | |
874 | list, in the room. | |
875 | """ | |
876 | ||
877 | clause, args = make_in_list_sql_clause( | |
878 | self.database_engine, "user_id", ignore_users | |
879 | ) | |
880 | ||
881 | sql = """ | |
882 | SELECT 1 FROM local_current_membership | |
883 | WHERE | |
884 | room_id = ? AND membership = ? | |
885 | AND NOT (%s) | |
886 | LIMIT 1 | |
887 | """ % ( | |
888 | clause, | |
889 | ) | |
890 | ||
891 | def _is_local_host_in_room_ignoring_users_txn(txn): | |
892 | txn.execute(sql, (room_id, Membership.JOIN, *args)) | |
893 | ||
894 | return bool(txn.fetchone()) | |
895 | ||
896 | return await self.db.runInteraction( | |
897 | "is_local_host_in_room_ignoring_users", | |
898 | _is_local_host_in_room_ignoring_users_txn, | |
899 | ) | |
900 | ||
870 | 901 | |
871 | 902 | class RoomMemberBackgroundUpdateStore(SQLBaseStore): |
872 | 903 | def __init__(self, database: Database, db_conn, hs): |
+5
-2
14 | 14 | |
15 | 15 | -- Add background update to go and delete current state events for rooms the |
16 | 16 | -- server is no longer in. |
17 | INSERT into background_updates (update_name, progress_json) | |
18 | VALUES ('delete_old_current_state_events', '{}'); | |
17 | -- | |
18 | -- this relies on the 'membership' column of current_state_events, so make sure | |
19 | -- that's populated first! | |
20 | INSERT into background_updates (update_name, progress_json, depends_on) | |
21 | VALUES ('delete_old_current_state_events', '{}', 'current_state_events_membership'); |
+35
-0
0 | /* Copyright 2020 The Matrix.org Foundation C.I.C. | |
1 | * | |
2 | * Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | * you may not use this file except in compliance with the License. | |
4 | * You may obtain a copy of the License at | |
5 | * | |
6 | * http://www.apache.org/licenses/LICENSE-2.0 | |
7 | * | |
8 | * Unless required by applicable law or agreed to in writing, software | |
9 | * distributed under the License is distributed on an "AS IS" BASIS, | |
10 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | * See the License for the specific language governing permissions and | |
12 | * limitations under the License. | |
13 | */ | |
14 | ||
15 | -- when we first added the room_version column, it was populated via a background | |
16 | -- update. We now need it to be populated before synapse starts, so we populate | |
17 | -- any remaining rows with a NULL room version now. For servers which have completed | |
18 | -- the background update, this will be pretty quick. | |
19 | ||
20 | -- the following query will set room_version to NULL if no create event is found for | |
21 | -- the room in current_state_events, and will set it to '1' if a create event with no | |
22 | -- room_version is found. | |
23 | ||
24 | UPDATE rooms SET room_version=( | |
25 | SELECT COALESCE(json::json->'content'->>'room_version','1') | |
26 | FROM current_state_events cse INNER JOIN event_json ej USING (event_id) | |
27 | WHERE cse.room_id=rooms.room_id AND cse.type='m.room.create' AND cse.state_key='' | |
28 | ) WHERE rooms.room_version IS NULL; | |
29 | ||
30 | -- we still allow the background update to complete: it has the useful side-effect of | |
31 | -- populating `rooms` with any missing rooms (based on the current_state_events table). | |
32 | ||
33 | -- see also rooms_version_column_2.sql.sqlite which has a copy of the above query, using | |
34 | -- sqlite syntax for the json extraction. |
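The COALESCE logic in the migration above can be paraphrased in Python. This is an illustrative sketch only: the function name and inputs are hypothetical, and the real migration runs entirely in SQL against `current_state_events` and `event_json`.

```python
import json

def room_version_from_create_event(event_json):
    """Mirror of the migration's COALESCE(..., '1'):
    return the create event's room_version, defaulting to '1'
    when the field is absent (rooms created before v2 omitted it)."""
    content = json.loads(event_json).get("content", {})
    return content.get("room_version", "1")

# A v5 create event keeps its explicit version:
room_version_from_create_event('{"content": {"room_version": "5"}}')
# An old create event with no room_version falls back to "1":
room_version_from_create_event('{"content": {}}')
```

As in the SQL, a room whose create event cannot be found at all would stay NULL rather than defaulting to '1'.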
0 | /* Copyright 2020 The Matrix.org Foundation C.I.C. | |
1 | * | |
2 | * Licensed under the Apache License, Version 2.0 (the "License"); | |
3 | * you may not use this file except in compliance with the License. | |
4 | * You may obtain a copy of the License at | |
5 | * | |
6 | * http://www.apache.org/licenses/LICENSE-2.0 | |
7 | * | |
8 | * Unless required by applicable law or agreed to in writing, software | |
9 | * distributed under the License is distributed on an "AS IS" BASIS, | |
10 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
11 | * See the License for the specific language governing permissions and | |
12 | * limitations under the License. | |
13 | */ | |
14 | ||
15 | -- see rooms_version_column_2.sql.postgres for details of what's going on here. | |
16 | ||
17 | UPDATE rooms SET room_version=( | |
18 | SELECT COALESCE(json_extract(ej.json, '$.content.room_version'), '1') | |
19 | FROM current_state_events cse INNER JOIN event_json ej USING (event_id) | |
20 | WHERE cse.room_id=rooms.room_id AND cse.type='m.room.create' AND cse.state_key='' | |
21 | ) WHERE rooms.room_version IS NULL; |
269 | 269 | ) |
270 | 270 | |
271 | 271 | return slice_list |
272 | ||
273 | def get_room_stats_state(self, room_id): | |
274 | """ | |
275 | Returns the current room_stats_state for a room. | |
276 | ||
277 | Args: | |
278 | room_id (str): The ID of the room to return state for. | |
279 | ||
280 | Returns (dict): | |
281 | Dictionary containing these keys: | |
282 | "name", "topic", "canonical_alias", "avatar", "join_rules", | |
283 | "history_visibility" | |
284 | """ | |
285 | return self.db.simple_select_one( | |
286 | "room_stats_state", | |
287 | {"room_id": room_id}, | |
288 | retcols=( | |
289 | "name", | |
290 | "topic", | |
291 | "canonical_alias", | |
292 | "avatar", | |
293 | "join_rules", | |
294 | "history_visibility", | |
295 | ), | |
296 | ) | |
297 | 272 | |
298 | 273 | @cached() |
299 | 274 | def get_earliest_token_for_stats(self, stats_type, id): |
182 | 182 | ) |
183 | 183 | return 1 |
184 | 184 | |
185 | logger.info( | |
185 | logger.debug( | |
186 | 186 | "Processing the next %d rooms of %d remaining" |
187 | 187 | % (len(rooms_to_work_on), progress["remaining"]) |
188 | 188 | ) |
307 | 307 | ) |
308 | 308 | return 1 |
309 | 309 | |
310 | logger.info( | |
310 | logger.debug( | |
311 | 311 | "Processing the next %d users of %d remaining" |
312 | 312 | % (len(users_to_work_on), progress["remaining"]) |
313 | 313 | ) |
342 | 342 | |
343 | 343 | top_three_counters = self._txn_perf_counters.interval(duration, limit=3) |
344 | 344 | |
345 | perf_logger.info( | |
345 | perf_logger.debug( | |
346 | 346 | "Total database time: %.3f%% {%s}", ratio * 100, top_three_counters |
347 | 347 | ) |
348 | 348 |
389 | 389 | state_delta_reuse_delta_counter.inc() |
390 | 390 | break |
391 | 391 | |
392 | logger.info("Calculating state delta for room %s", room_id) | |
392 | logger.debug("Calculating state delta for room %s", room_id) | |
393 | 393 | with Measure( |
394 | 394 | self._clock, "persist_events.get_new_state_after_events" |
395 | 395 | ): |
726 | 726 | |
727 | 727 | # Check if any of the given events are a local join that appear in the |
728 | 728 | # current state |
729 | events_to_check = [] # Event IDs that aren't an event we're persisting | |
729 | 730 | for (typ, state_key), event_id in delta.to_insert.items(): |
730 | 731 | if typ != EventTypes.Member or not self.is_mine_id(state_key): |
731 | 732 | continue |
735 | 736 | if event.membership == Membership.JOIN: |
736 | 737 | return True |
737 | 738 | |
738 | # There's been a change of membership but we don't have a local join | |
739 | # event in the new events, so we need to check the full state. | |
739 | # The event is not in `ev_ctx_rm`, so we need to pull it out of | |
740 | # the DB. | |
741 | events_to_check.append(event_id) | |
742 | ||
743 | # Check if any of the changes that we don't have events for are joins. | |
744 | if events_to_check: | |
745 | rows = await self.main_store.get_membership_from_event_ids(events_to_check) | |
746 | is_still_joined = any(row["membership"] == Membership.JOIN for row in rows) | |
747 | if is_still_joined: | |
748 | return True | |
749 | ||
750 | # None of the new state events are local joins, so we check the database | |
751 | # to see if there are any other local users in the room. We ignore users | |
752 | # whose state has changed, as we've already checked their new state above. | |
753 | users_to_ignore = [ | |
754 | state_key | |
755 | for _, state_key in itertools.chain(delta.to_insert, delta.to_delete) | |
756 | if self.is_mine_id(state_key) | |
757 | ] | |
758 | ||
759 | if await self.main_store.is_local_host_in_room_ignoring_users( | |
760 | room_id, users_to_ignore | |
761 | ): | |
762 | return True | |
763 | ||
764 | # The server will leave the room, so we go and find out which remote | |
765 | # users will still be joined when we leave. | |
740 | 766 | if current_state is None: |
741 | 767 | current_state = await self.main_store.get_current_state_ids(room_id) |
742 | 768 | current_state = dict(current_state) |
745 | 771 | |
746 | 772 | current_state.update(delta.to_insert) |
747 | 773 | |
748 | event_ids = [ | |
749 | event_id | |
750 | for (typ, state_key,), event_id in current_state.items() | |
751 | if typ == EventTypes.Member and self.is_mine_id(state_key) | |
752 | ] | |
753 | ||
754 | rows = await self.main_store.get_membership_from_event_ids(event_ids) | |
755 | is_still_joined = any(row["membership"] == Membership.JOIN for row in rows) | |
756 | if is_still_joined: | |
757 | return True | |
758 | ||
759 | # The server will leave the room, so we go and find out which remote | |
760 | # users will still be joined when we leave. | |
761 | 774 | remote_event_ids = [ |
762 | 775 | event_id |
763 | 776 | for (typ, state_key,), event_id in current_state.items() |
72 | 72 | def errback(f): |
73 | 73 | object.__setattr__(self, "_result", (False, f)) |
74 | 74 | while self._observers: |
75 | # This is a little bit of magic to correctly propagate stack | |
76 | # traces when we `await` on one of the observer deferreds. | |
77 | f.value.__failure__ = f | |
78 | ||
75 | 79 | try: |
76 | 80 | # TODO: Handle errors here. |
77 | 81 | self._observers.pop().errback(f) |
143 | 143 | """ |
144 | 144 | result = self.get(key) |
145 | 145 | if not result: |
146 | logger.info( | |
146 | logger.debug( | |
147 | 147 | "[%s]: no cached result for [%s], calculating new one", self._name, key |
148 | 148 | ) |
149 | 149 | d = run_in_background(callback, *args, **kwargs) |
24 | 24 | from synapse.api.constants import EventContentFields |
25 | 25 | from synapse.api.errors import SynapseError |
26 | 26 | from synapse.api.filtering import Filter |
27 | from synapse.events import FrozenEvent | |
27 | from synapse.events import make_event_from_dict | |
28 | 28 | |
29 | 29 | from tests import unittest |
30 | 30 | from tests.utils import DeferredMockCallable, MockHttpResource, setup_test_homeserver |
37 | 37 | kwargs["event_id"] = "fake_event_id" |
38 | 38 | if "type" not in kwargs: |
39 | 39 | kwargs["type"] = "fake_type" |
40 | return FrozenEvent(kwargs) | |
40 | return make_event_from_dict(kwargs) | |
41 | 41 | |
42 | 42 | |
43 | 43 | class FilteringTestCase(unittest.TestCase): |
18 | 18 | |
19 | 19 | from synapse.api.room_versions import RoomVersions |
20 | 20 | from synapse.crypto.event_signing import add_hashes_and_signatures |
21 | from synapse.events import FrozenEvent | |
21 | from synapse.events import make_event_from_dict | |
22 | 22 | |
23 | 23 | from tests import unittest |
24 | 24 | |
53 | 53 | RoomVersions.V1, event_dict, HOSTNAME, self.signing_key |
54 | 54 | ) |
55 | 55 | |
56 | event = FrozenEvent(event_dict) | |
56 | event = make_event_from_dict(event_dict) | |
57 | 57 | |
58 | 58 | self.assertTrue(hasattr(event, "hashes")) |
59 | 59 | self.assertIn("sha256", event.hashes) |
87 | 87 | RoomVersions.V1, event_dict, HOSTNAME, self.signing_key |
88 | 88 | ) |
89 | 89 | |
90 | event = FrozenEvent(event_dict) | |
90 | event = make_event_from_dict(event_dict) | |
91 | 91 | |
92 | 92 | self.assertTrue(hasattr(event, "hashes")) |
93 | 93 | self.assertIn("sha256", event.hashes) |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | ||
16 | from synapse.events import FrozenEvent | |
15 | from synapse.events import make_event_from_dict | |
17 | 16 | from synapse.events.utils import ( |
18 | 17 | copy_power_levels_contents, |
19 | 18 | prune_event, |
29 | 28 | kwargs["event_id"] = "fake_event_id" |
30 | 29 | if "type" not in kwargs: |
31 | 30 | kwargs["type"] = "fake_type" |
32 | return FrozenEvent(kwargs) | |
31 | return make_event_from_dict(kwargs) | |
33 | 32 | |
34 | 33 | |
35 | 34 | class PruneEventTestCase(unittest.TestCase): |
37 | 36 | `matchdict` when it is redacted. """ |
38 | 37 | |
39 | 38 | def run_test(self, evdict, matchdict): |
40 | self.assertEquals(prune_event(FrozenEvent(evdict)).get_dict(), matchdict) | |
39 | self.assertEquals( | |
40 | prune_event(make_event_from_dict(evdict)).get_dict(), matchdict | |
41 | ) | |
41 | 42 | |
42 | 43 | def test_minimal(self): |
43 | 44 | self.run_test( |
14 | 14 | # limitations under the License. |
15 | 15 | import logging |
16 | 16 | |
17 | from synapse.events import FrozenEvent | |
17 | from synapse.events import make_event_from_dict | |
18 | 18 | from synapse.federation.federation_server import server_matches_acl_event |
19 | 19 | from synapse.rest import admin |
20 | 20 | from synapse.rest.client.v1 import login, room |
104 | 104 | |
105 | 105 | |
106 | 106 | def _create_acl_event(content): |
107 | return FrozenEvent( | |
107 | return make_event_from_dict( | |
108 | 108 | { |
109 | 109 | "room_id": "!a:b", |
110 | 110 | "event_id": "$a:b", |
159 | 159 | res = self.get_success(self.handler.get_device(user1, "abc")) |
160 | 160 | self.assertEqual(res["display_name"], "new display") |
161 | 161 | |
162 | def test_update_device_too_long_display_name(self): | |
163 | """Update a device with a display name that is invalid (too long).""" | |
164 | self._record_users() | |
165 | ||
166 | # Request to update a device display name with a new value that is longer than allowed. | |
167 | update = { | |
168 | "display_name": "a" | |
169 | * (synapse.handlers.device.MAX_DEVICE_DISPLAY_NAME_LEN + 1) | |
170 | } | |
171 | self.get_failure( | |
172 | self.handler.update_device(user1, "abc", update), | |
173 | synapse.api.errors.SynapseError, | |
174 | ) | |
175 | ||
176 | # Ensure the display name was not updated. | |
177 | res = self.get_success(self.handler.get_device(user1, "abc")) | |
178 | self.assertEqual(res["display_name"], "display 2") | |
179 | ||
162 | 180 | def test_update_unknown_device(self): |
163 | 181 | update = {"display_name": "new_display"} |
164 | 182 | res = self.handler.update_device("user_id", "unknown_device_id", update) |
17 | 17 | |
18 | 18 | from twisted.internet import defer |
19 | 19 | |
20 | import synapse.api.errors | |
21 | from synapse.api.constants import EventTypes | |
20 | 22 | from synapse.config.room_directory import RoomDirectoryConfig |
21 | from synapse.handlers.directory import DirectoryHandler | |
22 | from synapse.rest.client.v1 import directory, room | |
23 | from synapse.types import RoomAlias | |
23 | from synapse.rest.client.v1 import directory, login, room | |
24 | from synapse.types import RoomAlias, create_requester | |
24 | 25 | |
25 | 26 | from tests import unittest |
26 | from tests.utils import setup_test_homeserver | |
27 | ||
28 | ||
29 | class DirectoryHandlers(object): | |
30 | def __init__(self, hs): | |
31 | self.directory_handler = DirectoryHandler(hs) | |
32 | ||
33 | ||
34 | class DirectoryTestCase(unittest.TestCase): | |
27 | ||
28 | ||
29 | class DirectoryTestCase(unittest.HomeserverTestCase): | |
35 | 30 | """ Tests the directory service. """ |
36 | 31 | |
37 | @defer.inlineCallbacks | |
38 | def setUp(self): | |
32 | def make_homeserver(self, reactor, clock): | |
39 | 33 | self.mock_federation = Mock() |
40 | 34 | self.mock_registry = Mock() |
41 | 35 | |
46 | 40 | |
47 | 41 | self.mock_registry.register_query_handler = register_query_handler |
48 | 42 | |
49 | hs = yield setup_test_homeserver( | |
50 | self.addCleanup, | |
43 | hs = self.setup_test_homeserver( | |
51 | 44 | http_client=None, |
52 | 45 | resource_for_federation=Mock(), |
53 | 46 | federation_client=self.mock_federation, |
54 | 47 | federation_registry=self.mock_registry, |
55 | 48 | ) |
56 | hs.handlers = DirectoryHandlers(hs) | |
57 | 49 | |
58 | 50 | self.handler = hs.get_handlers().directory_handler |
59 | 51 | |
63 | 55 | self.your_room = RoomAlias.from_string("#your-room:test") |
64 | 56 | self.remote_room = RoomAlias.from_string("#another:remote") |
65 | 57 | |
66 | @defer.inlineCallbacks | |
58 | return hs | |
59 | ||
67 | 60 | def test_get_local_association(self): |
68 | yield self.store.create_room_alias_association( | |
69 | self.my_room, "!8765qwer:test", ["test"] | |
70 | ) | |
71 | ||
72 | result = yield self.handler.get_association(self.my_room) | |
61 | self.get_success( | |
62 | self.store.create_room_alias_association( | |
63 | self.my_room, "!8765qwer:test", ["test"] | |
64 | ) | |
65 | ) | |
66 | ||
67 | result = self.get_success(self.handler.get_association(self.my_room)) | |
73 | 68 | |
74 | 69 | self.assertEquals({"room_id": "!8765qwer:test", "servers": ["test"]}, result) |
75 | 70 | |
76 | @defer.inlineCallbacks | |
77 | 71 | def test_get_remote_association(self): |
78 | 72 | self.mock_federation.make_query.return_value = defer.succeed( |
79 | 73 | {"room_id": "!8765qwer:test", "servers": ["test", "remote"]} |
80 | 74 | ) |
81 | 75 | |
82 | result = yield self.handler.get_association(self.remote_room) | |
76 | result = self.get_success(self.handler.get_association(self.remote_room)) | |
83 | 77 | |
84 | 78 | self.assertEquals( |
85 | 79 | {"room_id": "!8765qwer:test", "servers": ["test", "remote"]}, result |
92 | 86 | ignore_backoff=True, |
93 | 87 | ) |
94 | 88 | |
95 | @defer.inlineCallbacks | |
89 | def test_delete_alias_not_allowed(self): | |
90 | room_id = "!8765qwer:test" | |
91 | self.get_success( | |
92 | self.store.create_room_alias_association(self.my_room, room_id, ["test"]) | |
93 | ) | |
94 | ||
95 | self.get_failure( | |
96 | self.handler.delete_association( | |
97 | create_requester("@user:test"), self.my_room | |
98 | ), | |
99 | synapse.api.errors.AuthError, | |
100 | ) | |
101 | ||
102 | def test_delete_alias(self): | |
103 | room_id = "!8765qwer:test" | |
104 | user_id = "@user:test" | |
105 | self.get_success( | |
106 | self.store.create_room_alias_association( | |
107 | self.my_room, room_id, ["test"], user_id | |
108 | ) | |
109 | ) | |
110 | ||
111 | result = self.get_success( | |
112 | self.handler.delete_association(create_requester(user_id), self.my_room) | |
113 | ) | |
114 | self.assertEquals(room_id, result) | |
115 | ||
116 | # The alias should not be found. | |
117 | self.get_failure( | |
118 | self.handler.get_association(self.my_room), synapse.api.errors.SynapseError | |
119 | ) | |
120 | ||
96 | 121 | def test_incoming_fed_query(self): |
97 | yield self.store.create_room_alias_association( | |
98 | self.your_room, "!8765asdf:test", ["test"] | |
99 | ) | |
100 | ||
101 | response = yield self.query_handlers["directory"]( | |
102 | {"room_alias": "#your-room:test"} | |
122 | self.get_success( | |
123 | self.store.create_room_alias_association( | |
124 | self.your_room, "!8765asdf:test", ["test"] | |
125 | ) | |
126 | ) | |
127 | ||
128 | response = self.get_success( | |
129 | self.handler.on_directory_query({"room_alias": "#your-room:test"}) | |
103 | 130 | ) |
104 | 131 | |
105 | 132 | self.assertEquals({"room_id": "!8765asdf:test", "servers": ["test"]}, response) |
133 | ||
134 | ||
135 | class CanonicalAliasTestCase(unittest.HomeserverTestCase): | |
136 | """Test modifications of the canonical alias when deleting aliases. | |
137 | """ | |
138 | ||
139 | servlets = [ | |
140 | synapse.rest.admin.register_servlets, | |
141 | login.register_servlets, | |
142 | room.register_servlets, | |
143 | directory.register_servlets, | |
144 | ] | |
145 | ||
146 | def prepare(self, reactor, clock, hs): | |
147 | self.store = hs.get_datastore() | |
148 | self.handler = hs.get_handlers().directory_handler | |
149 | self.state_handler = hs.get_state_handler() | |
150 | ||
151 | # Create user | |
152 | self.admin_user = self.register_user("admin", "pass", admin=True) | |
153 | self.admin_user_tok = self.login("admin", "pass") | |
154 | ||
155 | # Create a test room | |
156 | self.room_id = self.helper.create_room_as( | |
157 | self.admin_user, tok=self.admin_user_tok | |
158 | ) | |
159 | ||
160 | self.test_alias = "#test:test" | |
161 | self.room_alias = RoomAlias.from_string(self.test_alias) | |
162 | ||
163 | # Create a new alias to this room. | |
164 | self.get_success( | |
165 | self.store.create_room_alias_association( | |
166 | self.room_alias, self.room_id, ["test"], self.admin_user | |
167 | ) | |
168 | ) | |
169 | ||
170 | def test_remove_alias(self): | |
171 | """Removing an alias that is the canonical alias should remove it there too.""" | |
172 | # Set this new alias as the canonical alias for this room | |
173 | self.helper.send_state( | |
174 | self.room_id, | |
175 | "m.room.canonical_alias", | |
176 | {"alias": self.test_alias, "alt_aliases": [self.test_alias]}, | |
177 | tok=self.admin_user_tok, | |
178 | ) | |
179 | ||
180 | data = self.get_success( | |
181 | self.state_handler.get_current_state( | |
182 | self.room_id, EventTypes.CanonicalAlias, "" | |
183 | ) | |
184 | ) | |
185 | self.assertEqual(data["content"]["alias"], self.test_alias) | |
186 | self.assertEqual(data["content"]["alt_aliases"], [self.test_alias]) | |
187 | ||
188 | # Finally, delete the alias. | |
189 | self.get_success( | |
190 | self.handler.delete_association( | |
191 | create_requester(self.admin_user), self.room_alias | |
192 | ) | |
193 | ) | |
194 | ||
195 | data = self.get_success( | |
196 | self.state_handler.get_current_state( | |
197 | self.room_id, EventTypes.CanonicalAlias, "" | |
198 | ) | |
199 | ) | |
200 | self.assertNotIn("alias", data["content"]) | |
201 | self.assertNotIn("alt_aliases", data["content"]) | |
202 | ||
203 | def test_remove_other_alias(self): | |
204 | """Removing an alias listed in alt_aliases should remove it there too.""" | |
205 | # Create a second alias. | |
206 | other_test_alias = "#test2:test" | |
207 | other_room_alias = RoomAlias.from_string(other_test_alias) | |
208 | self.get_success( | |
209 | self.store.create_room_alias_association( | |
210 | other_room_alias, self.room_id, ["test"], self.admin_user | |
211 | ) | |
212 | ) | |
213 | ||
214 | # Set the alias as the canonical alias for this room. | |
215 | self.helper.send_state( | |
216 | self.room_id, | |
217 | "m.room.canonical_alias", | |
218 | { | |
219 | "alias": self.test_alias, | |
220 | "alt_aliases": [self.test_alias, other_test_alias], | |
221 | }, | |
222 | tok=self.admin_user_tok, | |
223 | ) | |
224 | ||
225 | data = self.get_success( | |
226 | self.state_handler.get_current_state( | |
227 | self.room_id, EventTypes.CanonicalAlias, "" | |
228 | ) | |
229 | ) | |
230 | self.assertEqual(data["content"]["alias"], self.test_alias) | |
231 | self.assertEqual( | |
232 | data["content"]["alt_aliases"], [self.test_alias, other_test_alias] | |
233 | ) | |
234 | ||
235 | # Delete the second alias. | |
236 | self.get_success( | |
237 | self.handler.delete_association( | |
238 | create_requester(self.admin_user), other_room_alias | |
239 | ) | |
240 | ) | |
241 | ||
242 | data = self.get_success( | |
243 | self.state_handler.get_current_state( | |
244 | self.room_id, EventTypes.CanonicalAlias, "" | |
245 | ) | |
246 | ) | |
247 | self.assertEqual(data["content"]["alias"], self.test_alias) | |
248 | self.assertEqual(data["content"]["alt_aliases"], [self.test_alias]) | |
106 | 249 | |
107 | 250 | |
108 | 251 | class TestCreateAliasACL(unittest.HomeserverTestCase): |
98 | 98 | user_id = self.register_user("kermit", "test") |
99 | 99 | tok = self.login("kermit", "test") |
100 | 100 | room_id = self.helper.create_room_as(room_creator=user_id, tok=tok) |
101 | room_version = self.get_success(self.store.get_room_version(room_id)) | |
101 | 102 | |
102 | 103 | # pretend that another server has joined |
103 | 104 | join_event = self._build_and_send_join_event(OTHER_SERVER, OTHER_USER, room_id) |
119 | 120 | "auth_events": [], |
120 | 121 | "origin_server_ts": self.clock.time_msec(), |
121 | 122 | }, |
122 | join_event.format_version, | |
123 | room_version, | |
123 | 124 | ) |
124 | 125 | |
125 | 126 | with LoggingContext(request="send_rejected"): |
148 | 149 | user_id = self.register_user("kermit", "test") |
149 | 150 | tok = self.login("kermit", "test") |
150 | 151 | room_id = self.helper.create_room_as(room_creator=user_id, tok=tok) |
152 | room_version = self.get_success(self.store.get_room_version(room_id)) | |
151 | 153 | |
152 | 154 | # pretend that another server has joined |
153 | 155 | join_event = self._build_and_send_join_event(OTHER_SERVER, OTHER_USER, room_id) |
170 | 172 | "auth_events": [], |
171 | 173 | "origin_server_ts": self.clock.time_msec(), |
172 | 174 | }, |
173 | join_event.format_version, | |
175 | room_version, | |
174 | 176 | ) |
175 | 177 | |
176 | 178 | with LoggingContext(request="send_rejected"): |
110 | 110 | retry_timings_res |
111 | 111 | ) |
112 | 112 | |
113 | self.datastore.get_device_updates_by_remote.return_value = (0, []) | |
113 | self.datastore.get_device_updates_by_remote.return_value = defer.succeed( | |
114 | (0, []) | |
115 | ) | |
114 | 116 | |
115 | 117 | def get_received_txn_response(*args): |
116 | 118 | return defer.succeed(None) |
119 | 121 | |
120 | 122 | self.room_members = [] |
121 | 123 | |
122 | def check_joined_room(room_id, user_id): | |
124 | def check_user_in_room(room_id, user_id): | |
123 | 125 | if user_id not in [u.to_string() for u in self.room_members]: |
124 | 126 | raise AuthError(401, "User is not in the room") |
125 | 127 | |
126 | hs.get_auth().check_joined_room = check_joined_room | |
128 | hs.get_auth().check_user_in_room = check_user_in_room | |
127 | 129 | |
128 | 130 | def get_joined_hosts_for_room(room_id): |
129 | 131 | return set(member.domain for member in self.room_members) |
143 | 145 | self.datastore.get_current_state_deltas.return_value = (0, None) |
144 | 146 | |
145 | 147 | self.datastore.get_to_device_stream_token = lambda: 0 |
146 | self.datastore.get_new_device_msgs_for_remote = lambda *args, **kargs: ([], 0) | |
148 | self.datastore.get_new_device_msgs_for_remote = lambda *args, **kargs: defer.succeed( | |
149 | ([], 0) | |
150 | ) | |
147 | 151 | self.datastore.delete_device_msgs_for_remote = lambda *args, **kargs: None |
148 | 152 | self.datastore.set_received_txn_response = lambda *args, **kwargs: defer.succeed( |
149 | 153 | None |
146 | 146 | s = self.get_success(self.handler.search_users(u1, "user3", 10)) |
147 | 147 | self.assertEqual(len(s["results"]), 0) |
148 | 148 | |
149 | def test_spam_checker(self): | |
150 | """ | |
151 | A user who fails the spam checks will not appear in search results. | |
152 | """ | |
153 | u1 = self.register_user("user1", "pass") | |
154 | u1_token = self.login(u1, "pass") | |
155 | u2 = self.register_user("user2", "pass") | |
156 | u2_token = self.login(u2, "pass") | |
157 | ||
158 | # We do not add users to the directory until they join a room. | |
159 | s = self.get_success(self.handler.search_users(u1, "user2", 10)) | |
160 | self.assertEqual(len(s["results"]), 0) | |
161 | ||
162 | room = self.helper.create_room_as(u1, is_public=False, tok=u1_token) | |
163 | self.helper.invite(room, src=u1, targ=u2, tok=u1_token) | |
164 | self.helper.join(room, user=u2, tok=u2_token) | |
165 | ||
166 | # Check we have populated the database correctly. | |
167 | shares_private = self.get_users_who_share_private_rooms() | |
168 | public_users = self.get_users_in_public_rooms() | |
169 | ||
170 | self.assertEqual( | |
171 | self._compress_shared(shares_private), set([(u1, u2, room), (u2, u1, room)]) | |
172 | ) | |
173 | self.assertEqual(public_users, []) | |
174 | ||
175 | # We get one search result when searching for user2 by user1. | |
176 | s = self.get_success(self.handler.search_users(u1, "user2", 10)) | |
177 | self.assertEqual(len(s["results"]), 1) | |
178 | ||
179 | # Configure a spam checker that does not filter any users. | |
180 | spam_checker = self.hs.get_spam_checker() | |
181 | ||
182 | class AllowAll(object): | |
183 | def check_username_for_spam(self, user_profile): | |
184 | # Allow all users. | |
185 | return False | |
186 | ||
187 | spam_checker.spam_checker = AllowAll() | |
188 | ||
189 | # The results do not change: | |
190 | # We get one search result when searching for user2 by user1. | |
191 | s = self.get_success(self.handler.search_users(u1, "user2", 10)) | |
192 | self.assertEqual(len(s["results"]), 1) | |
193 | ||
194 | # Configure a spam checker that filters all users. | |
195 | class BlockAll(object): | |
196 | def check_username_for_spam(self, user_profile): | |
197 | # All users are spammy. | |
198 | return True | |
199 | ||
200 | spam_checker.spam_checker = BlockAll() | |
201 | ||
202 | # User1 now gets no search results for any of the other users. | |
203 | s = self.get_success(self.handler.search_users(u1, "user2", 10)) | |
204 | self.assertEqual(len(s["results"]), 0) | |
205 | ||
206 | def test_legacy_spam_checker(self): | |
207 | """ | |
208 | A spam checker without the expected method should be ignored. | |
209 | """ | |
210 | u1 = self.register_user("user1", "pass") | |
211 | u1_token = self.login(u1, "pass") | |
212 | u2 = self.register_user("user2", "pass") | |
213 | u2_token = self.login(u2, "pass") | |
214 | ||
215 | # We do not add users to the directory until they join a room. | |
216 | s = self.get_success(self.handler.search_users(u1, "user2", 10)) | |
217 | self.assertEqual(len(s["results"]), 0) | |
218 | ||
219 | room = self.helper.create_room_as(u1, is_public=False, tok=u1_token) | |
220 | self.helper.invite(room, src=u1, targ=u2, tok=u1_token) | |
221 | self.helper.join(room, user=u2, tok=u2_token) | |
222 | ||
223 | # Check we have populated the database correctly. | |
224 | shares_private = self.get_users_who_share_private_rooms() | |
225 | public_users = self.get_users_in_public_rooms() | |
226 | ||
227 | self.assertEqual( | |
228 | self._compress_shared(shares_private), set([(u1, u2, room), (u2, u1, room)]) | |
229 | ) | |
230 | self.assertEqual(public_users, []) | |
231 | ||
232 | # Configure a spam checker. | |
233 | spam_checker = self.hs.get_spam_checker() | |
234 | # The spam checker doesn't need any methods, so create a bare object. | |
235 | spam_checker.spam_checker = object() | |
236 | ||
237 | # We get one search result when searching for user2 by user1. | |
238 | s = self.get_success(self.handler.search_users(u1, "user2", 10)) | |
239 | self.assertEqual(len(s["results"]), 1) | |
240 | ||
149 | 241 | def _compress_shared(self, shared): |
150 | 242 | """ |
151 | 243 | Compress a list of users who share rooms dicts to a list of tuples. |
14 | 14 | |
15 | 15 | from canonicaljson import encode_canonical_json |
16 | 16 | |
17 | from synapse.events import FrozenEvent, _EventInternalMetadata | |
17 | from synapse.events import FrozenEvent, _EventInternalMetadata, make_event_from_dict | |
18 | 18 | from synapse.events.snapshot import EventContext |
19 | 19 | from synapse.handlers.room import RoomEventSource |
20 | 20 | from synapse.replication.slave.storage.events import SlavedEventStore |
89 | 89 | msg_dict["content"] = {} |
90 | 90 | msg_dict["unsigned"]["redacted_by"] = redaction.event_id |
91 | 91 | msg_dict["unsigned"]["redacted_because"] = redaction |
92 | redacted = FrozenEvent(msg_dict, msg.internal_metadata.get_dict()) | |
92 | redacted = make_event_from_dict( | |
93 | msg_dict, internal_metadata_dict=msg.internal_metadata.get_dict() | |
94 | ) | |
93 | 95 | self.check("get_event", [msg.event_id], redacted) |
94 | 96 | |
95 | 97 | def test_backfilled_redactions(self): |
109 | 111 | msg_dict["content"] = {} |
110 | 112 | msg_dict["unsigned"]["redacted_by"] = redaction.event_id |
111 | 113 | msg_dict["unsigned"]["redacted_because"] = redaction |
112 | redacted = FrozenEvent(msg_dict, msg.internal_metadata.get_dict()) | |
114 | redacted = make_event_from_dict( | |
115 | msg_dict, internal_metadata_dict=msg.internal_metadata.get_dict() | |
116 | ) | |
113 | 117 | self.check("get_event", [msg.event_id], redacted) |
114 | 118 | |
115 | 119 | def test_invites(self): |
344 | 348 | if redacts is not None: |
345 | 349 | event_dict["redacts"] = redacts |
346 | 350 | |
347 | event = FrozenEvent(event_dict, internal_metadata_dict=internal) | |
351 | event = make_event_from_dict(event_dict, internal_metadata_dict=internal) | |
348 | 352 | |
349 | 353 | self.event_id += 1 |
350 | 354 |
400 | 400 | self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"]) |
401 | 401 | self.assertEqual("You are not a server admin", channel.json_body["error"]) |
402 | 402 | |
403 | def test_user_does_not_exist(self): | |
404 | """ | |
405 | Tests that a lookup for a user that does not exist returns a 404 | |
406 | """ | |
407 | self.hs.config.registration_shared_secret = None | |
408 | ||
409 | request, channel = self.make_request( | |
410 | "GET", | |
411 | "/_synapse/admin/v2/users/@unknown_person:test", | |
412 | access_token=self.admin_user_tok, | |
413 | ) | |
414 | self.render(request) | |
415 | ||
416 | self.assertEqual(404, channel.code, msg=channel.json_body) | |
417 | self.assertEqual("M_NOT_FOUND", channel.json_body["errcode"]) | |
418 | ||
403 | 419 | def test_requester_is_admin(self): |
404 | 420 | """ |
405 | 421 | If the user is a server admin, a new user is created. |
406 | 422 | """ |
407 | 423 | self.hs.config.registration_shared_secret = None |
408 | 424 | |
409 | body = json.dumps({"password": "abc123", "admin": True}) | |
425 | body = json.dumps( | |
426 | { | |
427 | "password": "abc123", | |
428 | "admin": True, | |
429 | "threepids": [{"medium": "email", "address": "bob@bob.bob"}], | |
430 | } | |
431 | ) | |
410 | 432 | |
411 | 433 | # Create user |
412 | 434 | request, channel = self.make_request( |
420 | 442 | self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"]) |
421 | 443 | self.assertEqual("@bob:test", channel.json_body["name"]) |
422 | 444 | self.assertEqual("bob", channel.json_body["displayname"]) |
445 | self.assertEqual("email", channel.json_body["threepids"][0]["medium"]) | |
446 | self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"]) | |
423 | 447 | |
424 | 448 | # Get user |
425 | 449 | request, channel = self.make_request( |
448 | 472 | self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) |
449 | 473 | |
450 | 474 | # Modify user |
451 | body = json.dumps({"displayname": "foobar", "deactivated": True}) | |
475 | body = json.dumps( | |
476 | { | |
477 | "displayname": "foobar", | |
478 | "deactivated": True, | |
479 | "threepids": [{"medium": "email", "address": "bob2@bob.bob"}], | |
480 | } | |
481 | ) | |
452 | 482 | |
453 | 483 | request, channel = self.make_request( |
454 | 484 | "PUT", |
462 | 492 | self.assertEqual("@bob:test", channel.json_body["name"]) |
463 | 493 | self.assertEqual("foobar", channel.json_body["displayname"]) |
464 | 494 | self.assertEqual(True, channel.json_body["deactivated"]) |
495 | # The user is deactivated; their threepids will be deleted. | |
465 | 496 | |
466 | 497 | # Get user |
467 | 498 | request, channel = self.make_request( |
27 | 27 | import synapse.rest.admin |
28 | 28 | from synapse.api.constants import EventContentFields, EventTypes, Membership |
29 | 29 | from synapse.handlers.pagination import PurgeStatus |
30 | from synapse.rest.client.v1 import login, profile, room | |
30 | from synapse.rest.client.v1 import directory, login, profile, room | |
31 | 31 | from synapse.rest.client.v2_alpha import account |
32 | from synapse.types import JsonDict, RoomAlias | |
32 | 33 | from synapse.util.stringutils import random_string |
33 | 34 | |
34 | 35 | from tests import unittest |
1611 | 1612 | def prepare(self, reactor, clock, homeserver): |
1612 | 1613 | self.user_id = self.register_user("user", "password") |
1613 | 1614 | self.tok = self.login("user", "password") |
1614 | self.room_id = self.helper.create_room_as(self.user_id, tok=self.tok) | |
1615 | self.room_id = self.helper.create_room_as( | |
1616 | self.user_id, tok=self.tok, is_public=False | |
1617 | ) | |
1615 | 1618 | |
1616 | 1619 | self.other_user_id = self.register_user("user2", "password") |
1617 | 1620 | self.other_tok = self.login("user2", "password") |
1723 | 1726 | self.assertEqual(len(events_after), 2, events_after) |
1724 | 1727 | self.assertDictEqual(events_after[0].get("content"), {}, events_after[0]) |
1725 | 1728 | self.assertEqual(events_after[1].get("content"), {}, events_after[1]) |
1729 | ||
1730 | ||
1731 | class RoomAliasListTestCase(unittest.HomeserverTestCase): | |
1732 | servlets = [ | |
1733 | synapse.rest.admin.register_servlets_for_client_rest_resource, | |
1734 | directory.register_servlets, | |
1735 | login.register_servlets, | |
1736 | room.register_servlets, | |
1737 | ] | |
1738 | ||
1739 | def prepare(self, reactor, clock, homeserver): | |
1740 | self.room_owner = self.register_user("room_owner", "test") | |
1741 | self.room_owner_tok = self.login("room_owner", "test") | |
1742 | ||
1743 | self.room_id = self.helper.create_room_as( | |
1744 | self.room_owner, tok=self.room_owner_tok | |
1745 | ) | |
1746 | ||
1747 | def test_no_aliases(self): | |
1748 | res = self._get_aliases(self.room_owner_tok) | |
1749 | self.assertEqual(res["aliases"], []) | |
1750 | ||
1751 | def test_not_in_room(self): | |
1752 | self.register_user("user", "test") | |
1753 | user_tok = self.login("user", "test") | |
1754 | res = self._get_aliases(user_tok, expected_code=403) | |
1755 | self.assertEqual(res["errcode"], "M_FORBIDDEN") | |
1756 | ||
1757 | def test_admin_user(self): | |
1758 | alias1 = self._random_alias() | |
1759 | self._set_alias_via_directory(alias1) | |
1760 | ||
1761 | self.register_user("user", "test", admin=True) | |
1762 | user_tok = self.login("user", "test") | |
1763 | ||
1764 | res = self._get_aliases(user_tok) | |
1765 | self.assertEqual(res["aliases"], [alias1]) | |
1766 | ||
1767 | def test_with_aliases(self): | |
1768 | alias1 = self._random_alias() | |
1769 | alias2 = self._random_alias() | |
1770 | ||
1771 | self._set_alias_via_directory(alias1) | |
1772 | self._set_alias_via_directory(alias2) | |
1773 | ||
1774 | res = self._get_aliases(self.room_owner_tok) | |
1775 | self.assertEqual(set(res["aliases"]), {alias1, alias2}) | |
1776 | ||
1777 | def test_peekable_room(self): | |
1778 | alias1 = self._random_alias() | |
1779 | self._set_alias_via_directory(alias1) | |
1780 | ||
1781 | self.helper.send_state( | |
1782 | self.room_id, | |
1783 | EventTypes.RoomHistoryVisibility, | |
1784 | body={"history_visibility": "world_readable"}, | |
1785 | tok=self.room_owner_tok, | |
1786 | ) | |
1787 | ||
1788 | self.register_user("user", "test") | |
1789 | user_tok = self.login("user", "test") | |
1790 | ||
1791 | res = self._get_aliases(user_tok) | |
1792 | self.assertEqual(res["aliases"], [alias1]) | |
1793 | ||
1794 | def _get_aliases(self, access_token: str, expected_code: int = 200) -> JsonDict: | |
1795 | """Calls the endpoint under test. Returns the JSON response object.""" | |
1796 | request, channel = self.make_request( | |
1797 | "GET", | |
1798 | "/_matrix/client/unstable/org.matrix.msc2432/rooms/%s/aliases" | |
1799 | % (self.room_id,), | |
1800 | access_token=access_token, | |
1801 | ) | |
1802 | self.render(request) | |
1803 | self.assertEqual(channel.code, expected_code, channel.result) | |
1804 | res = channel.json_body | |
1805 | self.assertIsInstance(res, dict) | |
1806 | if expected_code == 200: | |
1807 | self.assertIsInstance(res["aliases"], list) | |
1808 | return res | |
1809 | ||
1810 | def _random_alias(self) -> str: | |
1811 | return RoomAlias(random_string(5), self.hs.hostname).to_string() | |
1812 | ||
1813 | def _set_alias_via_directory(self, alias: str, expected_code: int = 200): | |
1814 | url = "/_matrix/client/r0/directory/room/" + alias | |
1815 | data = {"room_id": self.room_id} | |
1816 | request_data = json.dumps(data) | |
1817 | ||
1818 | request, channel = self.make_request( | |
1819 | "PUT", url, request_data, access_token=self.room_owner_tok | |
1820 | ) | |
1821 | self.render(request) | |
1822 | self.assertEqual(channel.code, expected_code, channel.result) |
21 | 21 | from synapse.api.constants import EventTypes, JoinRules, Membership |
22 | 22 | from synapse.api.room_versions import RoomVersions |
23 | 23 | from synapse.event_auth import auth_types_for_event |
24 | from synapse.events import FrozenEvent | |
24 | from synapse.events import make_event_from_dict | |
25 | 25 | from synapse.state.v2 import lexicographical_topological_sort, resolve_events_with_store |
26 | 26 | from synapse.types import EventID |
27 | 27 | |
88 | 88 | if self.state_key is not None: |
89 | 89 | event_dict["state_key"] = self.state_key |
90 | 90 | |
91 | return FrozenEvent(event_dict) | |
91 | return make_event_from_dict(event_dict) | |
92 | 92 | |
93 | 93 | |
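The hunks above (and the matching ones in the other test files below) replace direct `FrozenEvent(...)` construction with the `make_event_from_dict(...)` factory. A plausible motivation, sketched here with hypothetical class and version names rather than Synapse's internals, is that a factory can select the event class appropriate to the room version instead of hard-coding one class:

```python
# Illustrative factory sketch: dispatch event construction on room version.
# EventFormatV1/EventFormatV3 and the version strings are stand-ins.
from typing import Any, Dict

class EventFormatV1:
    def __init__(self, event_dict: Dict[str, Any]):
        self.event_dict = event_dict

class EventFormatV3:
    def __init__(self, event_dict: Dict[str, Any]):
        self.event_dict = event_dict

# Map each room version to the event class that understands its format.
_CLASS_FOR_VERSION = {"1": EventFormatV1, "5": EventFormatV3}

def make_event(event_dict: Dict[str, Any], room_version: str = "1"):
    cls = _CLASS_FOR_VERSION[room_version]
    return cls(event_dict)

ev = make_event({"type": "m.room.create"}, room_version="5")
```

Callers keep passing a plain dict; only the construction site changes, which is why the diff touches nothing but the import and the call.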
94 | 94 | # All graphs start with this set of events |
237 | 237 | @defer.inlineCallbacks |
238 | 238 | def build(self, prev_event_ids): |
239 | 239 | built_event = yield self._base_builder.build(prev_event_ids) |
240 | built_event.event_id = self._event_id | |
241 | built_event._event_dict["event_id"] = self._event_id | |
240 | ||
241 | built_event._event_id = self._event_id | |
242 | built_event._dict["event_id"] = self._event_id | |
243 | assert built_event.event_id == self._event_id | |
244 | ||
242 | 245 | return built_event |
243 | 246 | |
244 | 247 | @property |
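The builder change above swaps the private attributes being overridden (`_event_id` and `_dict` instead of `event_id` and `_event_dict`) and adds an `assert` on the public accessor. A self-contained sketch of that hardened pattern, with a stand-in `Event` class rather than Synapse's:

```python
# After forcing an event ID through private attributes, assert that the
# public accessor agrees, so the test fails loudly if the event's
# internal layout changes again. Event is a hypothetical stand-in.
class Event:
    def __init__(self, event_dict):
        self._dict = dict(event_dict)
        self._event_id = self._dict.get("event_id")

    @property
    def event_id(self):
        return self._event_id

built_event = Event({"type": "m.room.message"})
built_event._event_id = "$forced_id:test"
built_event._dict["event_id"] = "$forced_id:test"
assert built_event.event_id == "$forced_id:test"
```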
17 | 17 | from synapse import event_auth |
18 | 18 | from synapse.api.errors import AuthError |
19 | 19 | from synapse.api.room_versions import RoomVersions |
20 | from synapse.events import FrozenEvent | |
20 | from synapse.events import make_event_from_dict | |
21 | 21 | |
22 | 22 | |
23 | 23 | class EventAuthTestCase(unittest.TestCase): |
93 | 93 | |
94 | 94 | |
95 | 95 | def _create_event(user_id): |
96 | return FrozenEvent( | |
96 | return make_event_from_dict( | |
97 | 97 | { |
98 | 98 | "room_id": TEST_ROOM_ID, |
99 | 99 | "event_id": _get_event_id(), |
105 | 105 | |
106 | 106 | |
107 | 107 | def _join_event(user_id): |
108 | return FrozenEvent( | |
108 | return make_event_from_dict( | |
109 | 109 | { |
110 | 110 | "room_id": TEST_ROOM_ID, |
111 | 111 | "event_id": _get_event_id(), |
118 | 118 | |
119 | 119 | |
120 | 120 | def _power_levels_event(sender, content): |
121 | return FrozenEvent( | |
121 | return make_event_from_dict( | |
122 | 122 | { |
123 | 123 | "room_id": TEST_ROOM_ID, |
124 | 124 | "event_id": _get_event_id(), |
131 | 131 | |
132 | 132 | |
133 | 133 | def _random_state_event(sender): |
134 | return FrozenEvent( | |
134 | return make_event_from_dict( | |
135 | 135 | { |
136 | 136 | "room_id": TEST_ROOM_ID, |
137 | 137 | "event_id": _get_event_id(), |
1 | 1 | |
2 | 2 | from twisted.internet.defer import ensureDeferred, maybeDeferred, succeed |
3 | 3 | |
4 | from synapse.events import FrozenEvent | |
4 | from synapse.events import make_event_from_dict | |
5 | 5 | from synapse.logging.context import LoggingContext |
6 | 6 | from synapse.types import Requester, UserID |
7 | 7 | from synapse.util import Clock |
42 | 42 | ) |
43 | 43 | )[0] |
44 | 44 | |
45 | join_event = FrozenEvent( | |
45 | join_event = make_event_from_dict( | |
46 | 46 | { |
47 | 47 | "room_id": self.room_id, |
48 | 48 | "sender": "@baduser:test.serv", |
104 | 104 | )[0] |
105 | 105 | |
106 | 106 | # Now lie about an event |
107 | lying_event = FrozenEvent( | |
107 | lying_event = make_event_from_dict( | |
108 | 108 | { |
109 | 109 | "room_id": self.room_id, |
110 | 110 | "sender": "@baduser:test.serv", |
19 | 19 | from synapse.api.auth import Auth |
20 | 20 | from synapse.api.constants import EventTypes, Membership |
21 | 21 | from synapse.api.room_versions import RoomVersions |
22 | from synapse.events import FrozenEvent | |
22 | from synapse.events import make_event_from_dict | |
23 | 23 | from synapse.events.snapshot import EventContext |
24 | 24 | from synapse.state import StateHandler, StateResolutionHandler |
25 | 25 | |
65 | 65 | |
66 | 66 | d.update(kwargs) |
67 | 67 | |
68 | event = FrozenEvent(d) | |
68 | event = make_event_from_dict(d) | |
69 | 69 | |
70 | 70 | return event |
71 | 71 |
20 | 20 | import inspect |
21 | 21 | import logging |
22 | 22 | import time |
23 | from typing import Optional, Tuple, Type, TypeVar, Union | |
23 | 24 | |
24 | 25 | from mock import Mock |
25 | 26 | |
41 | 42 | from synapse.types import Requester, UserID, create_requester |
42 | 43 | from synapse.util.ratelimitutils import FederationRateLimiter |
43 | 44 | |
44 | from tests.server import get_clock, make_request, render, setup_test_homeserver | |
45 | from tests.server import ( | |
46 | FakeChannel, | |
47 | get_clock, | |
48 | make_request, | |
49 | render, | |
50 | setup_test_homeserver, | |
51 | ) | |
45 | 52 | from tests.test_utils.logging_setup import setup_logging |
46 | 53 | from tests.utils import default_config, setupdb |
47 | 54 | |
68 | 75 | setattr(target, name, new) |
69 | 76 | |
70 | 77 | return _around |
78 | ||
79 | ||
80 | T = TypeVar("T") | |
71 | 81 | |
72 | 82 | |
73 | 83 | class TestCase(unittest.TestCase): |
333 | 343 | |
334 | 344 | def make_request( |
335 | 345 | self, |
336 | method, | |
337 | path, | |
338 | content=b"", | |
339 | access_token=None, | |
340 | request=SynapseRequest, | |
341 | shorthand=True, | |
342 | federation_auth_origin=None, | |
343 | ): | |
346 | method: Union[bytes, str], | |
347 | path: Union[bytes, str], | |
348 | content: Union[bytes, dict] = b"", | |
349 | access_token: Optional[str] = None, | |
350 | request: Type[T] = SynapseRequest, | |
351 | shorthand: bool = True, | |
352 | federation_auth_origin: str = None, | |
353 | ) -> Tuple[T, FakeChannel]: | |
344 | 354 | """ |
345 | 355 | Create a SynapseRequest at the path using the method and containing the |
346 | 356 | given content. |
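The `make_request` hunk above adds type annotations, notably `request: Type[T]` with a return type of `Tuple[T, FakeChannel]`. A minimal sketch of that generic pattern, using stand-in classes in place of `SynapseRequest`/`FakeChannel`:

```python
# Accepting a class as Type[T] and returning Tuple[T, ...] lets type
# checkers infer the concrete request class each caller passes in.
# DefaultRequest/FakeChannel are hypothetical stand-ins.
from typing import Tuple, Type, TypeVar

T = TypeVar("T")

class FakeChannel:
    pass

class DefaultRequest:
    pass

def make_request(request: Type[T] = DefaultRequest) -> Tuple[T, FakeChannel]:
    # Instantiate whichever request class the caller supplied.
    return request(), FakeChannel()

class CustomRequest(DefaultRequest):
    pass

req, channel = make_request(CustomRequest)
```

With this signature, `make_request(CustomRequest)` is statically known to return a `CustomRequest` in the first slot, which is what lets the tests above unpack `request, channel` without casts.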