Codebase list matrix-synapse / 4b3a967
New upstream version 1.11.0 Andrej Shadura 4 years ago
97 changed file(s) with 3392 addition(s) and 2354 deletion(s).
3838
3939 # this fails reliably with a torture level of 100 due to https://github.com/matrix-org/synapse/issues/6536
4040 Outbound federation requests missing prev_events and then asks for /state_ids and resolves the state
41
42 Can get rooms/{roomId}/members at a given point
0 Synapse 1.11.0 (2020-02-21)
1 ===========================
2
3 Improved Documentation
4 ----------------------
5
6 - Small grammatical fixes to the ACME v1 deprecation notice. ([\#6944](https://github.com/matrix-org/synapse/issues/6944))
7
8
9 Synapse 1.11.0rc1 (2020-02-19)
10 ==============================
11
12 Features
13 --------
14
15 - Admin API to add or modify threepids of user accounts. ([\#6769](https://github.com/matrix-org/synapse/issues/6769))
16 - Limit the number of events that can be requested by the backfill federation API to 100. ([\#6864](https://github.com/matrix-org/synapse/issues/6864))
17 - Add ability to run some group APIs on workers. ([\#6866](https://github.com/matrix-org/synapse/issues/6866))
18 - Reject device display names over 100 characters in length to prevent abuse. ([\#6882](https://github.com/matrix-org/synapse/issues/6882))
19 - Add ability to route federation user device queries to workers. ([\#6873](https://github.com/matrix-org/synapse/issues/6873))
20 - The result of a user directory search can now be filtered via the spam checker. ([\#6888](https://github.com/matrix-org/synapse/issues/6888))
21 - Implement new `GET /_matrix/client/unstable/org.matrix.msc2432/rooms/{roomId}/aliases` endpoint as per [MSC2432](https://github.com/matrix-org/matrix-doc/pull/2432). ([\#6939](https://github.com/matrix-org/synapse/issues/6939), [\#6948](https://github.com/matrix-org/synapse/issues/6948), [\#6949](https://github.com/matrix-org/synapse/issues/6949))
22 - Stop sending `m.room.aliases` events when adding / removing aliases. Check `alt_aliases` in the latest `m.room.canonical_alias` event when deleting an alias. ([\#6904](https://github.com/matrix-org/synapse/issues/6904))
23 - Change the default power levels of invites, tombstones and server ACLs for new rooms. ([\#6834](https://github.com/matrix-org/synapse/issues/6834))
24
25 Bugfixes
26 --------
27
28 - Fixed third party event rules function `on_create_room`'s return value being ignored. ([\#6781](https://github.com/matrix-org/synapse/issues/6781))
29 - Allow URL-encoded User IDs on `/_synapse/admin/v2/users/<user_id>[/admin]` endpoints. Thanks to @NHAS for reporting. ([\#6825](https://github.com/matrix-org/synapse/issues/6825))
30 - Fix Synapse refusing to start if `federation_certificate_verification_whitelist` option is blank. ([\#6849](https://github.com/matrix-org/synapse/issues/6849))
31 - Fix errors from logging in the purge jobs related to the message retention policies support. ([\#6945](https://github.com/matrix-org/synapse/issues/6945))
32 - Return a 404 instead of 200 for querying information of a non-existent user through the admin API. ([\#6901](https://github.com/matrix-org/synapse/issues/6901))
33
34
35 Updates to the Docker image
36 ---------------------------
37
38 - The deprecated "generate-config-on-the-fly" mode is no longer supported. ([\#6918](https://github.com/matrix-org/synapse/issues/6918))
39
40
41 Improved Documentation
42 ----------------------
43
44 - Add details of PR merge strategy to contributing docs. ([\#6846](https://github.com/matrix-org/synapse/issues/6846))
45 - Spell out that the last event sent to a room won't be deleted by a purge. ([\#6891](https://github.com/matrix-org/synapse/issues/6891))
46 - Update Synapse's documentation to warn about the deprecation of ACME v1. ([\#6905](https://github.com/matrix-org/synapse/issues/6905), [\#6907](https://github.com/matrix-org/synapse/issues/6907), [\#6909](https://github.com/matrix-org/synapse/issues/6909))
47 - Add documentation for the spam checker. ([\#6906](https://github.com/matrix-org/synapse/issues/6906))
48 - Fix worker docs to point `/publicised_groups` API correctly. ([\#6938](https://github.com/matrix-org/synapse/issues/6938))
49 - Clean up and update docs on setting up federation. ([\#6940](https://github.com/matrix-org/synapse/issues/6940))
50 - Add a warning about indentation to generated configuration files. ([\#6920](https://github.com/matrix-org/synapse/issues/6920))
51 - Databases created using the compose file in contrib/docker will now always have correct encoding and locale settings. Contributed by Fridtjof Mund. ([\#6921](https://github.com/matrix-org/synapse/issues/6921))
52 - Update pip install directions in readme to avoid error when using zsh. ([\#6855](https://github.com/matrix-org/synapse/issues/6855))
53
54
55 Deprecations and Removals
56 -------------------------
57
58 - Remove `m.lazy_load_members` from `unstable_features` since lazy loading is in the stable Client-Server API version r0.5.0. ([\#6877](https://github.com/matrix-org/synapse/issues/6877))
59
60
61 Internal Changes
62 ----------------
63
64 - Add type hints to `SyncHandler`. ([\#6821](https://github.com/matrix-org/synapse/issues/6821))
65 - Refactoring work in preparation for changing the event redaction algorithm. ([\#6823](https://github.com/matrix-org/synapse/issues/6823), [\#6827](https://github.com/matrix-org/synapse/issues/6827), [\#6854](https://github.com/matrix-org/synapse/issues/6854), [\#6856](https://github.com/matrix-org/synapse/issues/6856), [\#6857](https://github.com/matrix-org/synapse/issues/6857), [\#6858](https://github.com/matrix-org/synapse/issues/6858))
66 - Fix stacktraces when using `ObservableDeferred` and async/await. ([\#6836](https://github.com/matrix-org/synapse/issues/6836))
67 - Port much of `synapse.handlers.federation` to async/await. ([\#6837](https://github.com/matrix-org/synapse/issues/6837), [\#6840](https://github.com/matrix-org/synapse/issues/6840))
68 - Populate `rooms.room_version` database column at startup, rather than in a background update. ([\#6847](https://github.com/matrix-org/synapse/issues/6847))
69 - Reduce amount we log at `INFO` level. ([\#6833](https://github.com/matrix-org/synapse/issues/6833), [\#6862](https://github.com/matrix-org/synapse/issues/6862))
70 - Remove unused `get_room_stats_state` method. ([\#6869](https://github.com/matrix-org/synapse/issues/6869))
71 - Add typing to `synapse.federation.sender` and port to async/await. ([\#6871](https://github.com/matrix-org/synapse/issues/6871))
72 - Refactor `_EventInternalMetadata` object to improve type safety. ([\#6872](https://github.com/matrix-org/synapse/issues/6872))
73 - Add an additional entry to the SyTest blacklist for worker mode. ([\#6883](https://github.com/matrix-org/synapse/issues/6883))
74 - Fix the use of sed in the linting scripts when using BSD sed. ([\#6887](https://github.com/matrix-org/synapse/issues/6887))
75 - Add type hints to the spam checker module. ([\#6915](https://github.com/matrix-org/synapse/issues/6915))
76 - Convert the directory handler tests to use HomeserverTestCase. ([\#6919](https://github.com/matrix-org/synapse/issues/6919))
77 - Increase DB/CPU perf of `_is_server_still_joined` check. ([\#6936](https://github.com/matrix-org/synapse/issues/6936))
78 - Tiny optimisation for incoming HTTP request dispatch. ([\#6950](https://github.com/matrix-org/synapse/issues/6950))
79
80
81 Synapse 1.10.1 (2020-02-17)
82 ===========================
83
84 Bugfixes
85 --------
86
87 - Fix a bug introduced in Synapse 1.10.0 which would cause room state to be cleared in the database if Synapse was upgraded direct from 1.2.1 or earlier to 1.10.0. ([\#6924](https://github.com/matrix-org/synapse/issues/6924))
88
89
090 Synapse 1.10.0 (2020-02-12)
191 ===========================
292
199199 flag to `git commit`, which uses the name and email set in your
200200 `user.name` and `user.email` git configs.
201201
202 ## Merge Strategy
203
204 We use the commit history of develop/master extensively to identify
205 when regressions were introduced and what changes have been made.
206
207 We aim to have a clean merge history, which means we normally squash-merge
208 changes into develop. For small changes this means there is no need to rebase
209 to clean up your PR before merging. Larger changes with an organised set of
210 commits may be merged as-is, if the history is judged to be useful.
211
212 This use of squash-merging will mean PRs built on each other will be hard to
213 merge. We suggest avoiding these where possible, and if required, ensuring
214 each PR has a tidy set of commits to ease merging.
215
202216 ## Conclusion
203217
204218 That's it! Matrix is a very open and collaborative project as you might expect
387387
388388 ## TLS certificates
389389
390 The default configuration exposes a single HTTP port: http://localhost:8008. It
391 is suitable for local testing, but for any practical use, you will either need
392 to enable a reverse proxy, or configure Synapse to expose an HTTPS port.
393
394 For information on using a reverse proxy, see
390 The default configuration exposes a single HTTP port on the local
391 interface: `http://localhost:8008`. It is suitable for local testing,
392 but for any practical use, you will need Synapse's APIs to be served
393 over HTTPS.
394
395 The recommended way to do so is to set up a reverse proxy on port
396 `8448`. You can find documentation on doing so in
395397 [docs/reverse_proxy.md](docs/reverse_proxy.md).
396398
397 To configure Synapse to expose an HTTPS port, you will need to edit
398 `homeserver.yaml`, as follows:
399 Alternatively, you can configure Synapse to expose an HTTPS port. To do
400 so, you will need to edit `homeserver.yaml`, as follows:
399401
400402 * First, under the `listeners` section, uncomment the configuration for the
401403 TLS-enabled listener. (Remove the hash sign (`#`) at the start of
413415 point these settings at an existing certificate and key, or you can
414416 enable Synapse's built-in ACME (Let's Encrypt) support. Instructions
415417 for having Synapse automatically provision and renew federation
416 certificates through ACME can be found at [ACME.md](docs/ACME.md). If you
417 are using your own certificate, be sure to use a `.pem` file that includes
418 the full certificate chain including any intermediate certificates (for
419 instance, if using certbot, use `fullchain.pem` as your certificate, not
418 certificates through ACME can be found at [ACME.md](docs/ACME.md).
419 Note that, as pointed out in that document, this feature will not
420 work with installs set up after November 2019.
421
422 If you are using your own certificate, be sure to use a `.pem` file that
423 includes the full certificate chain including any intermediate certificates
424 (for instance, if using certbot, use `fullchain.pem` as your certificate, not
420425 `cert.pem`).
421426
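Once federation is being served over HTTPS (whether via a reverse proxy or Synapse's TLS listener), a quick way to sanity-check it is to request the unauthenticated federation version endpoint on port 8448. A minimal sketch, assuming Python's third-party `requests` package; `example.com` is a placeholder hostname:

```python
# Sketch only: confirm the federation port answers over HTTPS.
# "example.com" is a placeholder for your server_name / federation host.
import requests

resp = requests.get("https://example.com:8448/_matrix/federation/v1/version", timeout=10)
resp.raise_for_status()
print(resp.json())  # e.g. {"server": {"name": "Synapse", "version": "1.11.0"}}
```
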
422427 For a more detailed guide to configuring your server for federation, see
271271
272272 virtualenv -p python3 env
273273 source env/bin/activate
274 python -m pip install --no-use-pep517 -e .[all]
274 python -m pip install --no-use-pep517 -e ".[all]"
275275
276276 This will run a process of downloading and installing all the needed
277277 dependencies into a virtual env.
5555 environment:
5656 - POSTGRES_USER=synapse
5757 - POSTGRES_PASSWORD=changeme
58 # ensure the database gets created correctly
59 # https://github.com/matrix-org/synapse/blob/master/docs/postgres.md#set-up-database
60 - POSTGRES_INITDB_ARGS="--encoding=UTF-8 --lc-collate=C --lc-ctype=C"
5861 volumes:
5962 # You may store the database tables in a local folder..
6063 - ./schemas:/var/lib/postgresql/data
0 matrix-synapse-py3 (1.11.0) stable; urgency=medium
1
2 * New synapse release 1.11.0.
3
4 -- Synapse Packaging team <packages@matrix.org> Fri, 21 Feb 2020 08:54:34 +0000
5
6 matrix-synapse-py3 (1.10.1) stable; urgency=medium
7
8 * New synapse release 1.10.1.
9
10 -- Synapse Packaging team <packages@matrix.org> Mon, 17 Feb 2020 16:27:28 +0000
11
012 matrix-synapse-py3 (1.10.0) stable; urgency=medium
113
214 * New synapse release 1.10.0.
109109
110110 ## Legacy dynamic configuration file support
111111
112 For backwards-compatibility only, the docker image supports creating a dynamic
113 configuration file based on environment variables. This is now deprecated, but
114 is enabled when the `SYNAPSE_SERVER_NAME` variable is set (and `generate` is
115 not given).
112 The docker image used to support creating a dynamic configuration file based
113 on environment variables. This is no longer supported, and an error will be
114 raised if you try to run synapse without a config file.
116115
117 To migrate from a dynamic configuration file to a static one, run the docker
116 It is, however, possible to generate a static configuration file based on
117 the environment variables that were previously used. To do this, run the docker
118118 container once with the environment variables set, and `migrate_config`
119119 command line option. For example:
120120
126126 matrixdotorg/synapse:latest migrate_config
127127 ```
128128
129 This will generate the same configuration file as the legacy mode used, but
130 will store it in `/data/homeserver.yaml` instead of a temporary location. You
131 can then use it as shown above at [Running synapse](#running-synapse).
129 This will generate the same configuration file as the legacy mode used, and
130 will store it in `/data/homeserver.yaml`. You can then use it as shown above at
131 [Running synapse](#running-synapse).
132
133 Note that the defaults used in this configuration file may be different to
134 those when generating a new config file with `generate`: for example, TLS is
135 enabled by default in this mode. You are encouraged to inspect the generated
136 configuration file and edit it to ensure it meets your needs.
132137
133138 ## Building the image
134139
135140 If you need to build the image from a Synapse checkout, use the following `docker
136141 build` command from the repo's root:
137
142
138143 ```
139144 docker build -t matrixdotorg/synapse -f docker/Dockerfile .
140145 ```
187187 else:
188188 ownership = "{}:{}".format(desired_uid, desired_gid)
189189
190 log(
191 "Container running as UserID %s:%s, ENV (or defaults) requests %s:%s"
192 % (os.getuid(), os.getgid(), desired_uid, desired_gid)
193 )
194
195190 if ownership is None:
196191 log("Will not perform chmod/su-exec as UserID already matches request")
197192
212207 if mode is not None:
213208 error("Unknown execution mode '%s'" % (mode,))
214209
215 if "SYNAPSE_SERVER_NAME" in environ:
216 # backwards-compatibility generate-a-config-on-the-fly mode
217 if "SYNAPSE_CONFIG_PATH" in environ:
210 config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
211 config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")
212
213 if not os.path.exists(config_path):
214 if "SYNAPSE_SERVER_NAME" in environ:
218215 error(
219 "SYNAPSE_SERVER_NAME can only be combined with SYNAPSE_CONFIG_PATH "
220 "in `generate` or `migrate_config` mode. To start synapse using a "
221 "config file, unset the SYNAPSE_SERVER_NAME environment variable."
216 """\
217 Config file '%s' does not exist.
218
219 The synapse docker image no longer supports generating a config file on-the-fly
220 based on environment variables. You can migrate to a static config file by
221 running with 'migrate_config'. See the README for more details.
222 """
223 % (config_path,)
222224 )
223225
224 config_path = "/compiled/homeserver.yaml"
225 log(
226 "Generating config file '%s' on-the-fly from environment variables.\n"
227 "Note that this mode is deprecated. You can migrate to a static config\n"
228 "file by running with 'migrate_config'. See the README for more details."
226 error(
227 "Config file '%s' does not exist. You should either create a new "
228 "config file by running with the `generate` argument (and then edit "
229 "the resulting file before restarting) or specify the path to an "
230 "existing config file with the SYNAPSE_CONFIG_PATH variable."
229231 % (config_path,)
230232 )
231
232 generate_config_from_template("/compiled", config_path, environ, ownership)
233 else:
234 config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
235 config_path = environ.get(
236 "SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml"
237 )
238 if not os.path.exists(config_path):
239 error(
240 "Config file '%s' does not exist. You should either create a new "
241 "config file by running with the `generate` argument (and then edit "
242 "the resulting file before restarting) or specify the path to an "
243 "existing config file with the SYNAPSE_CONFIG_PATH variable."
244 % (config_path,)
245 )
246233
247234 log("Starting synapse with config file " + config_path)
248235
0 # The config is maintained as an up-to-date snapshot of the default
0 # This file is maintained as an up-to-date snapshot of the default
11 # homeserver.yaml configuration generated by Synapse.
22 #
33 # It is intended to act as a reference for the default configuration,
99 # homeserver.yaml. Instead, if you are starting from scratch, please generate
1010 # a fresh config using Synapse by following the instructions in INSTALL.md.
1111
12 ################################################################################
13
00 # ACME
11
2 Synapse v1.0 will require valid TLS certificates for communication between
3 servers (port `8448` by default) in addition to those that are client-facing
4 (port `443`). If you do not already have a valid certificate for your domain,
5 the easiest way to get one is with Synapse's new ACME support, which will use
6 the ACME protocol to provision a certificate automatically. Synapse v0.99.0+
7 will provision server-to-server certificates automatically for you for free
8 through [Let's Encrypt](https://letsencrypt.org/) if you tell it to.
2 From version 1.0 (June 2019) onwards, Synapse requires valid TLS
3 certificates for communication between servers (by default on port
4 `8448`) in addition to those that are client-facing (port `443`). To
5 help homeserver admins fulfil this new requirement, Synapse v0.99.0
6 introduced support for automatically provisioning certificates through
7 [Let's Encrypt](https://letsencrypt.org/) using the ACME protocol.
8
9 ## Deprecation of ACME v1
10
11 In [March 2019](https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430),
12 Let's Encrypt announced that they were deprecating version 1 of the ACME
13 protocol, with the plan to disable the use of it for new accounts in
14 November 2019, and for existing accounts in June 2020.
15
16 Synapse doesn't currently support version 2 of the ACME protocol, which
17 means that:
18
19 * for existing installs, Synapse's built-in ACME support will continue
20 to work until June 2020.
21 * for new installs, this feature will not work at all.
22
23 Either way, it is recommended to move from Synapse's ACME support
24 feature to an external automated tool such as [certbot](https://github.com/certbot/certbot)
25 (or browse [this list](https://letsencrypt.org/fr/docs/client-options/)
26 for an alternative ACME client).
27
28 It's also recommended to use a reverse proxy for the server-facing
29 communications (more documentation about this can be found
30 [here](/docs/reverse_proxy.md)) as well as the client-facing ones and
31 have it serve the certificates.
32
33 In case you can't do that and need Synapse to serve them itself, make
34 sure to set the `tls_certificate_path` configuration setting to the path
35 of the certificate (make sure to use the certificate containing the full
36 certification chain, e.g. `fullchain.pem` if using certbot) and
37 `tls_private_key_path` to the path of the matching private key. Note
38 that in this case you will need to restart Synapse after each
39 certificate renewal so that Synapse stops using the old certificate.
40
41 If you still want to use Synapse's built-in ACME support, the rest of
42 this document explains how to set it up.
43
44 ## Initial setup
945
1046 In the case that your `server_name` config variable is the same as
1147 the hostname that the client connects to, then the same certificate can be
3066 If you already have certificates, you will need to back up or delete them
3167 (files `example.com.tls.crt` and `example.com.tls.key` in Synapse's root
3268 directory), as Synapse's ACME implementation will not overwrite them.
33
34 You may wish to use alternate methods such as Certbot to obtain a certificate
35 from Let's Encrypt, depending on your server configuration. Of course, if you
36 already have a valid certificate for your homeserver's domain, that can be
37 placed in Synapse's config directory without the need for any ACME setup.
3869
3970 ## ACME setup
4071
66 Depending on the amount of history being purged a call to the API may take
77 several minutes or longer. During this period users will not be able to
88 paginate further back in the room from the point being purged from.
9
10 Note that Synapse requires at least one message in each room, so it will never
11 delete the last message in a room.
912
1013 The API is:
1114
11 ========================
22
33 This API allows an administrator to create or modify a user account with a
4 specific ``user_id``.
4 specific ``user_id``. Be aware that ``user_id`` is fully qualified: for example,
5 ``@user:server.com``.
56
67 This api is::
78
1415 {
1516 "password": "user_password",
1617 "displayname": "User",
18 "threepids": [
19 {
20 "medium": "email",
21 "address": "<user_mail_1>"
22 },
23 {
24 "medium": "email",
25 "address": "<user_mail_2>"
26 }
27 ],
1728 "avatar_url": "<avatar_url>",
1829 "admin": false,
1930 "deactivated": false
2233 including an ``access_token`` of a server admin.
2334
2435 The parameter ``displayname`` is optional and defaults to ``user_id``.
36 The parameter ``threepids`` is optional.
2537 The parameter ``avatar_url`` is optional.
2638 The parameter ``admin`` is optional and defaults to 'false'.
2739 The parameter ``deactivated`` is optional and defaults to 'false'.
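
As an illustration only (not part of the upstream docs), here is a minimal Python sketch that calls this endpoint with the third-party ``requests`` package. The homeserver URL, admin access token and email addresses are placeholders:

```python
# Sketch: create or modify a user, including two email threepids, via the
# v2 admin API. All values below are placeholders.
import requests

HOMESERVER = "https://matrix.example.com"  # base URL of your homeserver
ADMIN_TOKEN = "<admin_access_token>"       # access token of a server admin
USER_ID = "@user:example.com"              # fully qualified user ID

body = {
    "password": "user_password",
    "displayname": "User",
    "threepids": [
        {"medium": "email", "address": "user1@example.com"},
        {"medium": "email", "address": "user2@example.com"},
    ],
    "admin": False,
    "deactivated": False,
}

resp = requests.put(
    "%s/_synapse/admin/v2/users/%s" % (HOMESERVER, USER_ID),
    headers={"Authorization": "Bearer %s" % ADMIN_TOKEN},
    json=body,
)
resp.raise_for_status()
print(resp.json())
```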
0 # Delegation
1
2 By default, other homeservers will expect to be able to reach yours via
3 your `server_name`, on port 8448. For example, if you set your `server_name`
4 to `example.com` (so that your user names look like `@user:example.com`),
5 other servers will try to connect to yours at `https://example.com:8448/`.
6
7 Delegation is a Matrix feature allowing a homeserver admin to retain a
8 `server_name` of `example.com` so that user IDs, room aliases, etc continue
9 to look like `*:example.com`, whilst having federation traffic routed
10 to a different server and/or port (e.g. `synapse.example.com:443`).
11
12 ## .well-known delegation
13
14 To use this method, you need to be able to alter the
15 `server_name` 's https server to serve the `/.well-known/matrix/server`
16 URL. Having an active server (with a valid TLS certificate) serving your
17 `server_name` domain is out of the scope of this documentation.
18
19 The URL `https://<server_name>/.well-known/matrix/server` should
20 return a JSON structure containing the key `m.server` like so:
21
22 ```json
23 {
24 "m.server": "<synapse.server.name>[:<yourport>]"
25 }
26 ```
27
28 In our example, this would mean that URL `https://example.com/.well-known/matrix/server`
29 should return:
30
31 ```json
32 {
33 "m.server": "synapse.example.com:443"
34 }
35 ```
36
37 Note, specifying a port is optional. If no port is specified, then it defaults
38 to 8448.
39
40 With .well-known delegation, federating servers will check for a valid TLS
41 certificate for the delegated hostname (in our example: `synapse.example.com`).
42
43 ## SRV DNS record delegation
44
45 It is also possible to do delegation using a SRV DNS record. However, that is
46 considered an advanced topic since it's a bit complex to set up, and `.well-known`
47 delegation is already enough in most cases.
48
49 However, if you really need it, you can find some documentation on what such a
50 record should look like and how Synapse will use it in [the Matrix
51 specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names).
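
If you do set one up, you can verify the record from Python as well. A minimal sketch, assuming the third-party `dnspython` package is installed (any DNS lookup tool such as `dig` works just as well):

```python
# Sketch: look up the Matrix SRV record for a server_name.
import dns.resolver

server_name = "example.com"  # placeholder
answers = dns.resolver.query("_matrix._tcp.%s" % server_name, "SRV")
for rr in answers:
    print(rr.priority, rr.weight, rr.port, rr.target)
```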
52
53 ## Delegation FAQ
54
55 ### When do I need delegation?
56
57 If your homeserver's APIs are accessible on the default federation port (8448)
58 and the domain your `server_name` points to, you do not need any delegation.
59
60 For instance, if you registered `example.com` and pointed its DNS A record at a
61 fresh server, you could install Synapse on that host, giving it a `server_name`
62 of `example.com`, and once a reverse proxy has been set up to proxy all requests
63 sent to the port `8448` and serve TLS certificates for `example.com`, you
64 wouldn't need any delegation set up.
65
66 **However**, if your homeserver's APIs aren't accessible on port 8448 and on the
67 domain `server_name` points to, you will need to let other servers know how to
68 find it using delegation.
69
70 ### Do you still recommend against using a reverse proxy on the federation port?
71
72 We no longer actively recommend against using a reverse proxy. Many admins will
73 find it easier to direct federation traffic to a reverse proxy and manage their
74 own TLS certificates, and this is a supported configuration.
75
76 See [reverse_proxy.md](reverse_proxy.md) for information on setting up a
77 reverse proxy.
78
79 ### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?
80
81 This is no longer necessary. If you are using a reverse proxy for all of your
82 TLS traffic, then you can set `no_tls: True` in the Synapse config.
83
84 In that case, the only reason Synapse needs the certificate is to populate a legacy
85 `tls_fingerprints` field in the federation API. This is ignored by Synapse 0.99.0
86 and later, and the only time pre-0.99 Synapses will check it is when attempting to
87 fetch the server keys - and generally this is delegated via `matrix.org`, which
88 is running a modern version of Synapse.
89
90 ### Do I need the same certificate for the client and federation port?
91
92 No. There is nothing stopping you from using different certificates,
93 particularly if you are using a reverse proxy.
0 Setting up Federation
0 Setting up federation
11 =====================
22
33 Federation is the process by which users on different servers can participate
44 in the same room. For this to work, those other servers must be able to contact
55 yours to send messages.
66
7 The ``server_name`` configured in the Synapse configuration file (often
8 ``homeserver.yaml``) defines how resources (users, rooms, etc.) will be
9 identified (eg: ``@user:example.com``, ``#room:example.com``). By
10 default, it is also the domain that other servers will use to
11 try to reach your server (via port 8448). This is easy to set
12 up and will work provided you set the ``server_name`` to match your
13 machine's public DNS hostname, and provide Synapse with a TLS certificate
14 which is valid for your ``server_name``.
7 The `server_name` configured in the Synapse configuration file (often
8 `homeserver.yaml`) defines how resources (users, rooms, etc.) will be
9 identified (eg: `@user:example.com`, `#room:example.com`). By default,
10 it is also the domain that other servers will use to try to reach your
11 server (via port 8448). This is easy to set up and will work provided
12 you set the `server_name` to match your machine's public DNS hostname.
13
14 For this default configuration to work, you will need to listen for TLS
15 connections on port 8448. The preferred way to do that is by using a
16 reverse proxy: see [reverse_proxy.md](<reverse_proxy.md>) for instructions
17 on how to correctly set one up.
18
19 In some cases you might not want to run Synapse on the machine that has
20 the `server_name` as its public DNS hostname, or you might want federation
21 traffic to use a different port than 8448. For example, you might want to
22 have your user names look like `@user:example.com`, but you want to run
23 Synapse on `synapse.example.com` on port 443. This can be done using
24 delegation, which allows an admin to control where federation traffic should
25 be sent. See [delegate.md](delegate.md) for instructions on how to set this up.
1526
1627 Once federation has been configured, you should be able to join a room over
17 federation. A good place to start is ``#synapse:matrix.org`` - a room for
28 federation. A good place to start is `#synapse:matrix.org` - a room for
1829 Synapse admins.
19
20
21 ## Delegation
22
23 For a more flexible configuration, you can have ``server_name``
24 resources (eg: ``@user:example.com``) served by a different host and
25 port (eg: ``synapse.example.com:443``). There are two ways to do this:
26
27 - adding a ``/.well-known/matrix/server`` URL served on ``https://example.com``.
28 - adding a DNS ``SRV`` record in the DNS zone of domain
29 ``example.com``.
30
31 Without configuring delegation, the matrix federation will
32 expect to find your server via ``example.com:8448``. The following methods
33 allow you retain a `server_name` of `example.com` so that your user IDs, room
34 aliases, etc continue to look like `*:example.com`, whilst having your
35 federation traffic routed to a different server.
36
37 ### .well-known delegation
38
39 To use this method, you need to be able to alter the
40 ``server_name`` 's https server to serve the ``/.well-known/matrix/server``
41 URL. Having an active server (with a valid TLS certificate) serving your
42 ``server_name`` domain is out of the scope of this documentation.
43
44 The URL ``https://<server_name>/.well-known/matrix/server`` should
45 return a JSON structure containing the key ``m.server`` like so:
46
47 {
48 "m.server": "<synapse.server.name>[:<yourport>]"
49 }
50
51 In our example, this would mean that URL ``https://example.com/.well-known/matrix/server``
52 should return:
53
54 {
55 "m.server": "synapse.example.com:443"
56 }
57
58 Note, specifying a port is optional. If a port is not specified an SRV lookup
59 is performed, as described below. If the target of the
60 delegation does not have an SRV record, then the port defaults to 8448.
61
62 Most installations will not need to configure .well-known. However, it can be
63 useful in cases where the admin is hosting on behalf of someone else and
64 therefore cannot gain access to the necessary certificate. With .well-known,
65 federation servers will check for a valid TLS certificate for the delegated
66 hostname (in our example: ``synapse.example.com``).
67
68 ### DNS SRV delegation
69
70 To use this delegation method, you need to have write access to your
71 ``server_name`` 's domain zone DNS records (in our example it would be
72 ``example.com`` DNS zone).
73
74 This method requires the target server to provide a
75 valid TLS certificate for the original ``server_name``.
76
77 You need to add a SRV record in your ``server_name`` 's DNS zone with
78 this format:
79
80 _matrix._tcp.<yourdomain.com> <ttl> IN SRV <priority> <weight> <port> <synapse.server.name>
81
82 In our example, we would need to add this SRV record in the
83 ``example.com`` DNS zone:
84
85 _matrix._tcp.example.com. 3600 IN SRV 10 5 443 synapse.example.com.
86
87 Once done and set up, you can check the DNS record with ``dig -t srv
88 _matrix._tcp.<server_name>``. In our example, we would expect this:
89
90 $ dig -t srv _matrix._tcp.example.com
91 _matrix._tcp.example.com. 3600 IN SRV 10 0 443 synapse.example.com.
92
93 Note that the target of a SRV record cannot be an alias (CNAME record): it has to point
94 directly to the server hosting the synapse instance.
95
96 ### Delegation FAQ
97 #### When do I need a SRV record or .well-known URI?
98
99 If your homeserver listens on the default federation port (8448), and your
100 `server_name` points to the host that your homeserver runs on, you do not need an SRV
101 record or `.well-known/matrix/server` URI.
102
103 For instance, if you registered `example.com` and pointed its DNS A record at a
104 fresh server, you could install Synapse on that host,
105 giving it a `server_name` of `example.com`, and once [ACME](acme.md) support is enabled,
106 it would automatically generate a valid TLS certificate for you via Let's Encrypt
107 and no SRV record or .well-known URI would be needed.
108
109 **However**, if your server does not listen on port 8448, or if your `server_name`
110 does not point to the host that your homeserver runs on, you will need to let
111 other servers know how to find it. The way to do this is via .well-known or an
112 SRV record.
113
114 #### I have created a .well-known URI. Do I also need an SRV record?
115
116 No. You can use either `.well-known` delegation or use an SRV record for delegation. You
117 do not need to use both to delegate to the same location.
118
119 #### Can I manage my own certificates rather than having Synapse renew certificates itself?
120
121 Yes, you are welcome to manage your certificates yourself. Synapse will only
122 attempt to obtain certificates from Let's Encrypt if you configure it to do
123 so. The only requirement is that there is a valid TLS cert present for
124 federation end points.
125
126 #### Do you still recommend against using a reverse proxy on the federation port?
127
128 We no longer actively recommend against using a reverse proxy. Many admins will
129 find it easier to direct federation traffic to a reverse proxy and manage their
130 own TLS certificates, and this is a supported configuration.
131
132 See [reverse_proxy.md](reverse_proxy.md) for information on setting up a
133 reverse proxy.
134
135 #### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?
136
137 Practically speaking, this is no longer necessary.
138
139 If you are using a reverse proxy for all of your TLS traffic, then you can set
140 `no_tls: True` in the Synapse config. In that case, the only reason Synapse
141 needs the certificate is to populate a legacy `tls_fingerprints` field in the
142 federation API. This is ignored by Synapse 0.99.0 and later, and the only time
143 pre-0.99 Synapses will check it is when attempting to fetch the server keys -
144 and generally this is delegated via `matrix.org`, which will be running a modern
145 version of Synapse.
146
147 #### Do I need the same certificate for the client and federation port?
148
149 No. There is nothing stopping you from using different certificates,
150 particularly if you are using a reverse proxy. However, Synapse will use the
151 same certificate on any ports where TLS is configured.
15230
15331 ## Troubleshooting
15432
155 You can use the [federation tester](
156 <https://matrix.org/federationtester>) to check if your homeserver is
157 configured correctly. Alternatively try the [JSON API used by the federation tester](https://matrix.org/federationtester/api/report?server_name=DOMAIN).
158 Note that you'll have to modify this URL to replace ``DOMAIN`` with your
159 ``server_name``. Hitting the API directly provides extra detail.
33 You can use the [federation tester](https://matrix.org/federationtester)
34 to check if your homeserver is configured correctly. Alternatively try the
35 [JSON API used by the federation tester](https://matrix.org/federationtester/api/report?server_name=DOMAIN).
36 Note that you'll have to modify this URL to replace `DOMAIN` with your
37 `server_name`. Hitting the API directly provides extra detail.
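
For scripted checks, the same JSON API can be queried directly. A minimal sketch with the third-party `requests` package; the `FederationOK` field is an assumption about the current shape of the report:

```python
# Sketch: ask the federation tester for a report on a homeserver.
import requests

server_name = "example.com"  # placeholder: your server_name
resp = requests.get(
    "https://matrix.org/federationtester/api/report",
    params={"server_name": server_name},
    timeout=30,
)
resp.raise_for_status()
report = resp.json()
# "FederationOK" is assumed to be the report's overall pass/fail flag.
print("Federation OK" if report.get("FederationOK") else "Federation problems found")
```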
16038
16139 The typical failure mode for federation is that when the server tries to join
16240 a room, it is rejected with "401: Unauthorized". Generally this means that other
16846 proxy: see [reverse_proxy.md](<reverse_proxy.md>) for instructions on how to correctly
16947 configure a reverse proxy.
17048
171 ## Running a Demo Federation of Synapses
49 ## Running a demo federation of Synapses
17250
17351 If you want to get up and running quickly with a trio of homeservers in a
174 private federation, there is a script in the ``demo`` directory. This is mainly
52 private federation, there is a script in the `demo` directory. This is mainly
17553 useful just for development purposes. See [demo/README](<../demo/README>).
4040 purged according to its room's policy, then the receiving server will
4141 process and store that event until it's picked up by the next purge job,
4242 though it will always hide it from clients.
43
44 Synapse requires at least one message in each room, so it will never
45 delete the last message in a room. It will, however, hide it from
46 clients.
4347
4448
4549 ## Server configuration
1717 Matrix servers do not necessarily need to connect to your server via the
1818 same server name or port. Indeed, clients will use port 443 by default,
1919 whereas servers default to port 8448. Where these are different, we
20 refer to the 'client port' and the \'federation port\'. See [Setting
21 up federation](federate.md) for more details of the algorithm used for
22 federation connections.
20 refer to the 'client port' and the \'federation port\'. See [the Matrix
21 specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names)
22 for more details of the algorithm used for federation connections, and
23 [delegate.md](<delegate.md>) for instructions on setting up delegation.
2324
2425 Let's assume that we expect clients to connect to our server at
2526 `https://matrix.example.com`, and other servers to connect at
0 # The config is maintained as an up-to-date snapshot of the default
0 # This file is maintained as an up-to-date snapshot of the default
11 # homeserver.yaml configuration generated by Synapse.
22 #
33 # It is intended to act as a reference for the default configuration,
88 # It is *not* intended to be copied and used as the basis for a real
99 # homeserver.yaml. Instead, if you are starting from scratch, please generate
1010 # a fresh config using Synapse by following the instructions in INSTALL.md.
11
12 ################################################################################
13
14 # Configuration file for Synapse.
15 #
16 # This is a YAML file: see [1] for a quick introduction. Note in particular
17 # that *indentation is important*: all the elements of a list or dictionary
18 # should have the same indentation.
19 #
20 # [1] https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
1121
1222 ## Server ##
1323
464474
465475 # ACME support: This will configure Synapse to request a valid TLS certificate
466476 # for your configured `server_name` via Let's Encrypt.
477 #
478 # Note that ACME v1 is now deprecated, and Synapse currently doesn't support
479 # ACME v2. This means that this feature currently won't work with installs set
480 # up after November 2019. For more info, and alternative solutions, see
481 # https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
467482 #
468483 # Note that provisioning a certificate in this way requires port 80 to be
469484 # routed to Synapse so that it can complete the http-01 ACME challenge.
0 # Handling spam in Synapse
1
2 Synapse has support to customize spam checking behavior. It can plug into a
3 variety of events and affect how they are presented to users on your homeserver.
4
5 The spam checking behavior is implemented as a Python class, which must be
6 able to be imported by the running Synapse.
7
8 ## Python spam checker class
9
10 The Python class is instantiated with two objects:
11
12 * Any configuration (see below).
13 * An instance of `synapse.spam_checker_api.SpamCheckerApi`.
14
15 It then implements methods which return a boolean to alter behavior in Synapse.
16
17 There's a generic method for checking every event (`check_event_for_spam`), as
18 well as some specific methods:
19
20 * `user_may_invite`
21 * `user_may_create_room`
22 * `user_may_create_room_alias`
23 * `user_may_publish_room`
24
25 The details of each of these methods (as well as their inputs and outputs)
26 are documented in the `synapse.events.spamcheck.SpamChecker` class.
27
28 The `SpamCheckerApi` class provides a way for the custom spam checker class to
29 call back into the homeserver internals. It currently implements the following
30 methods:
31
32 * `get_state_events_in_room`
33
34 ### Example
35
36 ```python
37 class ExampleSpamChecker:
38 def __init__(self, config, api):
39 self.config = config
40 self.api = api
41
42 def check_event_for_spam(self, foo):
43 return False # allow all events
44
45 def user_may_invite(self, inviter_userid, invitee_userid, room_id):
46 return True # allow all invites
47
48 def user_may_create_room(self, userid):
49 return True # allow all room creations
50
51 def user_may_create_room_alias(self, userid, room_alias):
52 return True # allow all room aliases
53
54 def user_may_publish_room(self, userid, room_id):
55 return True # allow publishing of all rooms
56
57 def check_username_for_spam(self, user_profile):
58 return False # allow all usernames
59 ```
60
61 ## Configuration
62
63 Modify the `spam_checker` section of your `homeserver.yaml` in the following
64 manner:
65
66 `module` should point to the fully qualified Python class that implements your
67 custom logic, e.g. `my_module.ExampleSpamChecker`.
68
69 `config` is a dictionary that gets passed to the spam checker class.
70
71 ### Example
72
73 This section might look like:
74
75 ```yaml
76 spam_checker:
77 module: my_module.ExampleSpamChecker
78 config:
79 # Enable or disable a specific option in ExampleSpamChecker.
80 my_custom_option: true
81 ```
82
83 ## Examples
84
85 The [Mjolnir](https://github.com/matrix-org/mjolnir) project is a full fledged
86 example using the Synapse spam checking API, including a bot for dynamic
87 configuration.
175175 ^/_matrix/federation/v1/query_auth/
176176 ^/_matrix/federation/v1/event_auth/
177177 ^/_matrix/federation/v1/exchange_third_party_invite/
178 ^/_matrix/federation/v1/user/devices/
178179 ^/_matrix/federation/v1/send/
180 ^/_matrix/federation/v1/get_groups_publicised$
179181 ^/_matrix/key/v2/query
182
183 Additionally, the following REST endpoints can be handled for GET requests:
184
185 ^/_matrix/federation/v1/groups/
180186
181187 The above endpoints should all be routed to the federation_reader worker by the
182188 reverse-proxy configuration.
253259 ^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
254260 ^/_matrix/client/versions$
255261 ^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
262 ^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
263 ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
264 ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
256265
257266 Additionally, the following REST endpoints can be handled for GET requests:
258267
259268 ^/_matrix/client/(api/v1|r0|unstable)/pushrules/.*$
269 ^/_matrix/client/(api/v1|r0|unstable)/groups/.*$
260270
261271 Additionally, the following REST endpoints can be handled, but all requests must
262272 be routed to the same instance:
277287
278288 ^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
279289
280 When using this worker you must also set `update_user_directory: False` in the
281 shared configuration file to stop the main synapse running background
290 When using this worker you must also set `update_user_directory: False` in the
291 shared configuration file to stop the main synapse running background
282292 jobs related to updating the user directory.
283293
284294 ### `synapse.app.frontend_proxy`
22 # Exits with 0 if there are no problems, or another code otherwise.
33
44 # Fix non-lowercase true/false values
5 sed -i -E "s/: +True/: true/g; s/: +False/: false/g;" docs/sample_config.yaml
5 sed -i.bak -E "s/: +True/: true/g; s/: +False/: false/g;" docs/sample_config.yaml
6 rm docs/sample_config.yaml.bak
67
78 # Check if anything changed
89 git diff --exit-code docs/sample_config.yaml
3535 except ImportError:
3636 pass
3737
38 __version__ = "1.10.0"
38 __version__ = "1.11.0"
3939
4040 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
4141 # We import here so that we don't have to install a bunch of deps when
1313 # limitations under the License.
1414
1515 import logging
16 from typing import Optional
1617
1718 from six import itervalues
1819
3435 )
3536 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
3637 from synapse.config.server import is_threepid_reserved
38 from synapse.events import EventBase
3739 from synapse.types import StateMap, UserID
3840 from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
3941 from synapse.util.caches.lrucache import LruCache
9193 )
9294
9395 @defer.inlineCallbacks
94 def check_joined_room(self, room_id, user_id, current_state=None):
95 """Check if the user is currently joined in the room
96 Args:
97 room_id(str): The room to check.
98 user_id(str): The user to check.
99 current_state(dict): Optional map of the current state of the room.
96 def check_user_in_room(
97 self,
98 room_id: str,
99 user_id: str,
100 current_state: Optional[StateMap[EventBase]] = None,
101 allow_departed_users: bool = False,
102 ):
103 """Check if the user is in the room, or was at some point.
104 Args:
105 room_id: The room to check.
106
107 user_id: The user to check.
108
109 current_state: Optional map of the current state of the room.
100110 If provided then that map is used to check whether they are a
101111 member of the room. Otherwise the current membership is
102112 loaded from the database.
113
114 allow_departed_users: if True, accept users that were previously
115 members but have now departed.
116
103117 Raises:
104 AuthError if the user is not in the room.
105 Returns:
106 A deferred membership event for the user if the user is in
107 the room.
118 AuthError if the user is/was not in the room.
119 Returns:
120 Deferred[Optional[EventBase]]:
121 Membership event for the user if the user was in the
122 room. This will be the join event if they are currently joined to
123 the room. This will be the leave event if they have left the room.
108124 """
109125 if current_state:
110126 member = current_state.get((EventTypes.Member, user_id), None)
112128 member = yield self.state.get_current_state(
113129 room_id=room_id, event_type=EventTypes.Member, state_key=user_id
114130 )
115
116 self._check_joined_room(member, user_id, room_id)
117 return member
118
119 @defer.inlineCallbacks
120 def check_user_was_in_room(self, room_id, user_id):
121 """Check if the user was in the room at some point.
122 Args:
123 room_id(str): The room to check.
124 user_id(str): The user to check.
125 Raises:
126 AuthError if the user was never in the room.
127 Returns:
128 A deferred membership event for the user if the user was in the
129 room. This will be the join event if they are currently joined to
130 the room. This will be the leave event if they have left the room.
131 """
132 member = yield self.state.get_current_state(
133 room_id=room_id, event_type=EventTypes.Member, state_key=user_id
134 )
135131 membership = member.membership if member else None
136132
137 if membership not in (Membership.JOIN, Membership.LEAVE):
138 raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
139
140 if membership == Membership.LEAVE:
133 if membership == Membership.JOIN:
134 return member
135
136 # XXX this looks totally bogus. Why do we not allow users who have been banned,
137 # or those who were members previously and have been re-invited?
138 if allow_departed_users and membership == Membership.LEAVE:
141139 forgot = yield self.store.did_forget(user_id, room_id)
142 if forgot:
143 raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
144
145 return member
140 if not forgot:
141 return member
142
143 raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
146144
147145 @defer.inlineCallbacks
148146 def check_host_in_room(self, room_id, host):
149147 with Measure(self.clock, "check_host_in_room"):
150148 latest_event_ids = yield self.store.is_host_joined(room_id, host)
151149 return latest_event_ids
152
153 def _check_joined_room(self, member, user_id, room_id):
154 if not member or member.membership != Membership.JOIN:
155 raise AuthError(
156 403, "User %s not in room %s (%s)" % (user_id, room_id, repr(member))
157 )
158150
159151 def can_federate(self, event, auth_events):
160152 creation_event = auth_events.get((EventTypes.Create, ""))
559551 return True
560552
561553 user_id = user.to_string()
562 yield self.check_joined_room(room_id, user_id)
554 yield self.check_user_in_room(room_id, user_id)
563555
564556 # We currently require the user is a "moderator" in the room. We do this
565557 # by checking if they would (theoretically) be able to change the
632624 return query_params[0].decode("ascii")
633625
634626 @defer.inlineCallbacks
635 def check_in_room_or_world_readable(self, room_id, user_id):
627 def check_user_in_room_or_world_readable(
628 self, room_id: str, user_id: str, allow_departed_users: bool = False
629 ):
636630 """Checks that the user is or was in the room or the room is world
637631 readable. If it isn't then an exception is raised.
632
633 Args:
634 room_id: room to check
635 user_id: user to check
636 allow_departed_users: if True, accept users that were previously
637 members but have now departed
638638
639639 Returns:
640640 Deferred[tuple[str, str|None]]: Resolves to the current membership of
644644 """
645645
646646 try:
647 # check_user_was_in_room will return the most recent membership
647 # check_user_in_room will return the most recent membership
648648 # event for the user if:
649649 # * The user is a non-guest user, and was ever in the room
650650 # * The user is a guest user, and has joined the room
651651 # else it will throw.
652 member_event = yield self.check_user_was_in_room(room_id, user_id)
652 member_event = yield self.check_user_in_room(
653 room_id, user_id, allow_departed_users=allow_departed_users
654 )
653655 return member_event.membership, member_event.event_id
654656 except AuthError:
655657 visibility = yield self.state.get_current_state(
661663 ):
662664 return Membership.JOIN, None
663665 raise AuthError(
664 403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN
666 403,
667 "User %s not in room %s, and room previews are disabled"
668 % (user_id, room_id),
665669 )
666670
667671 @defer.inlineCallbacks
5656 RoomStateRestServlet,
5757 )
5858 from synapse.rest.client.v1.voip import VoipRestServlet
59 from synapse.rest.client.v2_alpha import groups
5960 from synapse.rest.client.v2_alpha.account import ThreepidRestServlet
6061 from synapse.rest.client.v2_alpha.keys import KeyChangesServlet, KeyQueryServlet
6162 from synapse.rest.client.v2_alpha.register import RegisterRestServlet
122123 VoipRestServlet(self).register(resource)
123124 PushRuleRestServlet(self).register(resource)
124125 VersionsRestServlet(self).register(resource)
126
127 groups.register_servlets(self, resource)
125128
126129 resources.update({"/_matrix/client": resource})
127130
3232 from synapse.replication.slave.storage._base import BaseSlavedStore
3333 from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
3434 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
35 from synapse.replication.slave.storage.devices import SlavedDeviceStore
3536 from synapse.replication.slave.storage.directory import DirectoryStore
3637 from synapse.replication.slave.storage.events import SlavedEventStore
38 from synapse.replication.slave.storage.groups import SlavedGroupServerStore
3739 from synapse.replication.slave.storage.keys import SlavedKeyStore
3840 from synapse.replication.slave.storage.profile import SlavedProfileStore
3941 from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
6567 SlavedEventStore,
6668 SlavedKeyStore,
6769 SlavedRegistrationStore,
70 SlavedGroupServerStore,
71 SlavedDeviceStore,
6872 RoomStore,
6973 DirectoryStore,
7074 SlavedTransactionStore,
4949
5050 MISSING_SERVER_NAME = """\
5151 Missing mandatory `server_name` config option.
52 """
53
54
55 CONFIG_FILE_HEADER = """\
56 # Configuration file for Synapse.
57 #
58 # This is a YAML file: see [1] for a quick introduction. Note in particular
59 # that *indentation is important*: all the elements of a list or dictionary
60 # should have the same indentation.
61 #
62 # [1] https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
63
5264 """
5365
5466
343355 str: the yaml config file
344356 """
345357
346 return "\n\n".join(
358 return CONFIG_FILE_HEADER + "\n\n".join(
347359 dedent(conf)
348360 for conf in self.invoke_all(
349361 "generate_config_section",
573585 if not path_exists(config_dir_path):
574586 os.makedirs(config_dir_path)
575587 with open(config_path, "w") as config_file:
576 config_file.write("# vim:ft=yaml\n\n")
577588 config_file.write(config_str)
589 config_file.write("\n\n# vim:ft=yaml")
578590
579591 config_dict = yaml.safe_load(config_str)
580592 obj.generate_missing_files(config_dict, config_dir_path)
3131
3232 logger = logging.getLogger(__name__)
3333
34 ACME_SUPPORT_ENABLED_WARN = """\
35 This server uses Synapse's built-in ACME support. Note that ACME v1 has been
36 deprecated by Let's Encrypt, and that Synapse doesn't currently support ACME v2,
37 which means that this feature will not work with Synapse installs set up after
38 November 2019, and that it may stop working in June 2020 for installs set up
39 before that date.
40
41 For more info and alternative solutions, see
42 https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
43 --------------------------------------------------------------------------------"""
44
3445
3546 class TlsConfig(Config):
3647 section = "tls"
4253 acme_config = {}
4354
4455 self.acme_enabled = acme_config.get("enabled", False)
56
57 if self.acme_enabled:
58 logger.warning(ACME_SUPPORT_ENABLED_WARN)
4559
4660 # hyperlink complains on py2 if this is not a Unicode
4761 self.acme_url = six.text_type(
108122 fed_whitelist_entries = config.get(
109123 "federation_certificate_verification_whitelist", []
110124 )
125 if fed_whitelist_entries is None:
126 fed_whitelist_entries = []
111127
112128 # Support globs (*) in whitelist values
113129 self.federation_certificate_verification_whitelist = [] # type: List[str]
359375 # ACME support: This will configure Synapse to request a valid TLS certificate
360376 # for your configured `server_name` via Let's Encrypt.
361377 #
378 # Note that ACME v1 is now deprecated, and Synapse currently doesn't support
379 # ACME v2. This means that this feature currently won't work with installs set
380 # up after November 2019. For more info, and alternative solutions, see
381 # https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
382 #
362383 # Note that provisioning a certificate in this way requires port 80 to be
363384 # routed to Synapse so that it can complete the http-01 ACME challenge.
364385 # By default, if you enable ACME support, Synapse will attempt to listen on
00 # -*- coding: utf-8 -*-
11 # Copyright 2014-2016 OpenMarket Ltd
22 # Copyright 2019 New Vector Ltd
3 # Copyright 2020 The Matrix.org Foundation C.I.C.
34 #
45 # Licensed under the Apache License, Version 2.0 (the "License");
56 # you may not use this file except in compliance with the License.
1516
1617 import os
1718 from distutils.util import strtobool
19 from typing import Optional, Type
1820
1921 import six
2022
2123 from unpaddedbase64 import encode_base64
2224
23 from synapse.api.errors import UnsupportedRoomVersionError
24 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, EventFormatVersions
25 from synapse.api.room_versions import EventFormatVersions, RoomVersion, RoomVersions
2526 from synapse.types import JsonDict
2627 from synapse.util.caches import intern_dict
2728 from synapse.util.frozenutils import freeze
3637 USE_FROZEN_DICTS = strtobool(os.environ.get("SYNAPSE_USE_FROZEN_DICTS", "0"))
3738
3839
40 class DictProperty:
41 """An object property which delegates to the `_dict` within its parent object."""
42
43 __slots__ = ["key"]
44
45 def __init__(self, key: str):
46 self.key = key
47
48 def __get__(self, instance, owner=None):
49 # if the property is accessed as a class property rather than an instance
50 # property, return the property itself rather than the value
51 if instance is None:
52 return self
53 try:
54 return instance._dict[self.key]
55 except KeyError as e1:
56 # We want this to look like a regular attribute error (mostly so that
57 # hasattr() works correctly), so we convert the KeyError into an
58 # AttributeError.
59 #
60 # To exclude the KeyError from the traceback, we explicitly
61 # 'raise from e1.__context__' (which is better than 'raise from None',
62 # because that would omit any *earlier* exceptions).
63 #
64 raise AttributeError(
65 "'%s' has no '%s' property" % (type(instance), self.key)
66 ) from e1.__context__
67
68 def __set__(self, instance, v):
69 instance._dict[self.key] = v
70
71 def __delete__(self, instance):
72 try:
73 del instance._dict[self.key]
74 except KeyError as e1:
75 raise AttributeError(
76 "'%s' has no '%s' property" % (type(instance), self.key)
77 ) from e1.__context__
78
79
80 class DefaultDictProperty(DictProperty):
81 """An extension of DictProperty which provides a default if the property is
82 not present in the parent's _dict.
83
84 Note that this means that hasattr() on the property always returns True.
85 """
86
87 __slots__ = ["default"]
88
89 def __init__(self, key, default):
90 super().__init__(key)
91 self.default = default
92
93 def __get__(self, instance, owner=None):
94 if instance is None:
95 return self
96 return instance._dict.get(self.key, self.default)
97
98
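
A minimal sketch of how the DictProperty and DefaultDictProperty descriptors above behave on a toy class (assumes a synapse 1.11 install so they can be imported; the Wrapper class is invented):

    from synapse.events import DefaultDictProperty, DictProperty

    class Wrapper:
        def __init__(self, d):
            self._dict = d  # the descriptors delegate to this dict

        outlier = DictProperty("outlier")
        depth = DefaultDictProperty("depth", 0)

    w = Wrapper({"outlier": True})
    assert w.outlier is True            # reads come from the backing dict
    w.outlier = False                   # writes go back into the backing dict
    assert w._dict["outlier"] is False
    assert not hasattr(Wrapper({}), "outlier")  # KeyError becomes AttributeError
    assert Wrapper({}).depth == 0       # DefaultDictProperty falls back to its default
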
3999 class _EventInternalMetadata(object):
40 def __init__(self, internal_metadata_dict):
41 self.__dict__ = dict(internal_metadata_dict)
42
43 def get_dict(self):
44 return dict(self.__dict__)
45
46 def is_outlier(self):
47 return getattr(self, "outlier", False)
48
49 def is_out_of_band_membership(self):
100 __slots__ = ["_dict"]
101
102 def __init__(self, internal_metadata_dict: JsonDict):
103 # we have to copy the dict, because it turns out that the same dict is
104 # reused. TODO: fix that
105 self._dict = dict(internal_metadata_dict)
106
107 outlier = DictProperty("outlier") # type: bool
108 out_of_band_membership = DictProperty("out_of_band_membership") # type: bool
109 send_on_behalf_of = DictProperty("send_on_behalf_of") # type: str
110 recheck_redaction = DictProperty("recheck_redaction") # type: bool
111 soft_failed = DictProperty("soft_failed") # type: bool
112 proactively_send = DictProperty("proactively_send") # type: bool
113 redacted = DictProperty("redacted") # type: bool
114 txn_id = DictProperty("txn_id") # type: str
115 token_id = DictProperty("token_id") # type: str
116 stream_ordering = DictProperty("stream_ordering") # type: int
117
118 # XXX: These are set by StreamWorkerStore._set_before_and_after.
119 # I'm pretty sure that these are never persisted to the database, so shouldn't
120 # be here
121 before = DictProperty("before") # type: str
122 after = DictProperty("after") # type: str
123 order = DictProperty("order") # type: int
124
125 def get_dict(self) -> JsonDict:
126 return dict(self._dict)
127
128 def is_outlier(self) -> bool:
129 return self._dict.get("outlier", False)
130
131 def is_out_of_band_membership(self) -> bool:
50132 """Whether this is an out of band membership, like an invite or an invite
51133 rejection. This is needed as those events are marked as outliers, but
52134 they still need to be processed as if they're new events (e.g. updating
53135 invite state in the database, relaying to clients, etc).
54136 """
55 return getattr(self, "out_of_band_membership", False)
56
57 def get_send_on_behalf_of(self):
137 return self._dict.get("out_of_band_membership", False)
138
139 def get_send_on_behalf_of(self) -> Optional[str]:
58140 """Whether this server should send the event on behalf of another server.
59141 This is used by the federation "send_join" API to forward the initial join
60142 event for a server in the room.
61143
62144 returns a str with the name of the server this event is sent on behalf of.
63145 """
64 return getattr(self, "send_on_behalf_of", None)
65
66 def need_to_check_redaction(self):
146 return self._dict.get("send_on_behalf_of")
147
148 def need_to_check_redaction(self) -> bool:
67149 """Whether the redaction event needs to be rechecked when fetching
68150 from the database.
69151
76158 Returns:
77159 bool
78160 """
79 return getattr(self, "recheck_redaction", False)
80
81 def is_soft_failed(self):
161 return self._dict.get("recheck_redaction", False)
162
163 def is_soft_failed(self) -> bool:
82164 """Whether the event has been soft failed.
83165
84166 Soft failed events should be handled as usual, except:
90172 Returns:
91173 bool
92174 """
93 return getattr(self, "soft_failed", False)
175 return self._dict.get("soft_failed", False)
94176
95177 def should_proactively_send(self):
96178 """Whether the event, if ours, should be sent to other clients and
102184 Returns:
103185 bool
104186 """
105 return getattr(self, "proactively_send", True)
187 return self._dict.get("proactively_send", True)
106188
107189 def is_redacted(self):
108190 """Whether the event has been redacted.
113195 Returns:
114196 bool
115197 """
116 return getattr(self, "redacted", False)
117
118
119 _SENTINEL = object()
120
121
122 def _event_dict_property(key, default=_SENTINEL):
123 """Creates a new property for the given key that delegates access to
124 `self._event_dict`.
125
126 The default is used if the key is missing from the `_event_dict`, if given,
127 otherwise an AttributeError will be raised.
128
129 Note: If a default is given then `hasattr` will always return true.
130 """
131
132 # We want to be able to use hasattr with the event dict properties.
133 # However, (on python3) hasattr expects AttributeError to be raised. Hence,
134 # we need to transform the KeyError into an AttributeError
135
136 def getter_raises(self):
137 try:
138 return self._event_dict[key]
139 except KeyError:
140 raise AttributeError(key)
141
142 def getter_default(self):
143 return self._event_dict.get(key, default)
144
145 def setter(self, v):
146 try:
147 self._event_dict[key] = v
148 except KeyError:
149 raise AttributeError(key)
150
151 def delete(self):
152 try:
153 del self._event_dict[key]
154 except KeyError:
155 raise AttributeError(key)
156
157 if default is _SENTINEL:
158 # No default given, so use the getter that raises
159 return property(getter_raises, setter, delete)
160 else:
161 return property(getter_default, setter, delete)
198 return self._dict.get("redacted", False)
162199
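
For illustration, the rewritten _EventInternalMetadata in isolation, with the old attribute-style access preserved by the descriptors (assumes a synapse 1.11 install; the field values are invented):

    from synapse.events import _EventInternalMetadata

    md = _EventInternalMetadata({"outlier": True, "txn_id": "abc"})
    print(md.is_outlier())   # True, read out of the backing dict
    print(md.txn_id)         # "abc", via the DictProperty descriptor
    md.soft_failed = True    # attribute writes land in the backing dict
    print(md.get_dict())     # includes outlier, txn_id and soft_failed
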
163200
164201 class EventBase(object):
174211 self.unsigned = unsigned
175212 self.rejected_reason = rejected_reason
176213
177 self._event_dict = event_dict
214 self._dict = event_dict
178215
179216 self.internal_metadata = _EventInternalMetadata(internal_metadata_dict)
180217
181 auth_events = _event_dict_property("auth_events")
182 depth = _event_dict_property("depth")
183 content = _event_dict_property("content")
184 hashes = _event_dict_property("hashes")
185 origin = _event_dict_property("origin")
186 origin_server_ts = _event_dict_property("origin_server_ts")
187 prev_events = _event_dict_property("prev_events")
188 redacts = _event_dict_property("redacts", None)
189 room_id = _event_dict_property("room_id")
190 sender = _event_dict_property("sender")
191 user_id = _event_dict_property("sender")
218 auth_events = DictProperty("auth_events")
219 depth = DictProperty("depth")
220 content = DictProperty("content")
221 hashes = DictProperty("hashes")
222 origin = DictProperty("origin")
223 origin_server_ts = DictProperty("origin_server_ts")
224 prev_events = DictProperty("prev_events")
225 redacts = DefaultDictProperty("redacts", None)
226 room_id = DictProperty("room_id")
227 sender = DictProperty("sender")
228 state_key = DictProperty("state_key")
229 type = DictProperty("type")
230 user_id = DictProperty("sender")
231
232 @property
233 def event_id(self) -> str:
234 raise NotImplementedError()
192235
193236 @property
194237 def membership(self):
198241 return hasattr(self, "state_key") and self.state_key is not None
199242
200243 def get_dict(self) -> JsonDict:
201 d = dict(self._event_dict)
244 d = dict(self._dict)
202245 d.update({"signatures": self.signatures, "unsigned": dict(self.unsigned)})
203246
204247 return d
205248
206249 def get(self, key, default=None):
207 return self._event_dict.get(key, default)
250 return self._dict.get(key, default)
208251
209252 def get_internal_metadata_dict(self):
210253 return self.internal_metadata.get_dict()
226269 raise AttributeError("Unrecognized attribute %s" % (instance,))
227270
228271 def __getitem__(self, field):
229 return self._event_dict[field]
272 return self._dict[field]
230273
231274 def __contains__(self, field):
232 return field in self._event_dict
275 return field in self._dict
233276
234277 def items(self):
235 return list(self._event_dict.items())
278 return list(self._dict.items())
236279
237280 def keys(self):
238 return six.iterkeys(self._event_dict)
281 return six.iterkeys(self._dict)
239282
240283 def prev_event_ids(self):
241284 """Returns the list of prev event IDs. The order matches the order
280323 else:
281324 frozen_dict = event_dict
282325
283 self.event_id = event_dict["event_id"]
284 self.type = event_dict["type"]
285 if "state_key" in event_dict:
286 self.state_key = event_dict["state_key"]
326 self._event_id = event_dict["event_id"]
287327
288328 super(FrozenEvent, self).__init__(
289329 frozen_dict,
293333 rejected_reason=rejected_reason,
294334 )
295335
336 @property
337 def event_id(self) -> str:
338 return self._event_id
339
296340 def __str__(self):
297341 return self.__repr__()
298342
331375 frozen_dict = event_dict
332376
333377 self._event_id = None
334 self.type = event_dict["type"]
335 if "state_key" in event_dict:
336 self.state_key = event_dict["state_key"]
337378
338379 super(FrozenEventV2, self).__init__(
339380 frozen_dict,
403444 return self._event_id
404445
405446
406 def room_version_to_event_format(room_version):
407 """Converts a room version string to the event format
408
409 Args:
410 room_version (str)
411
412 Returns:
413 int
414
415 Raises:
416 UnsupportedRoomVersionError if the room version is unknown
417 """
418 v = KNOWN_ROOM_VERSIONS.get(room_version)
419
420 if not v:
421 # this can happen if support is withdrawn for a room version
422 raise UnsupportedRoomVersionError()
423
424 return v.event_format
425
426
427 def event_type_from_format_version(format_version):
447 def event_type_from_format_version(format_version: int) -> Type[EventBase]:
428448 """Returns the python type to use to construct an Event object for the
429449 given event format version.
430450
444464 return FrozenEventV3
445465 else:
446466 raise Exception("No event format %r" % (format_version,))
467
468
469 def make_event_from_dict(
470 event_dict: JsonDict,
471 room_version: RoomVersion = RoomVersions.V1,
472 internal_metadata_dict: JsonDict = {},
473 rejected_reason: Optional[str] = None,
474 ) -> EventBase:
475 """Construct an EventBase from the given event dict"""
476 event_type = event_type_from_format_version(room_version.event_format)
477 return event_type(event_dict, internal_metadata_dict, rejected_reason)
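
A hedged sketch of building an event with the new make_event_from_dict helper (the event contents and identifiers are invented; assumes a synapse 1.11 install):

    from synapse.api.room_versions import RoomVersions
    from synapse.events import make_event_from_dict

    event_dict = {
        "event_id": "$abc123:example.com",  # v1-format events carry an event_id
        "type": "m.room.message",
        "room_id": "!room:example.com",
        "sender": "@alice:example.com",
        "content": {"msgtype": "m.text", "body": "hello"},
    }
    event = make_event_from_dict(event_dict, RoomVersions.V1)
    print(event.type, event.room_id, event.sender)
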
2727 RoomVersion,
2828 )
2929 from synapse.crypto.event_signing import add_hashes_and_signatures
30 from synapse.events import (
31 EventBase,
32 _EventInternalMetadata,
33 event_type_from_format_version,
34 )
30 from synapse.events import EventBase, _EventInternalMetadata, make_event_from_dict
3531 from synapse.types import EventID, JsonDict
3632 from synapse.util import Clock
3733 from synapse.util.stringutils import random_string
255251 event_dict.setdefault("signatures", {})
256252
257253 add_hashes_and_signatures(room_version, event_dict, hostname, signing_key)
258 return event_type_from_format_version(format_version)(
259 event_dict, internal_metadata_dict=internal_metadata_dict
254 return make_event_from_dict(
255 event_dict, room_version, internal_metadata_dict=internal_metadata_dict
260256 )
261257
262258
1414 # limitations under the License.
1515
1616 import inspect
17 from typing import Dict
1718
1819 from synapse.spam_checker_api import SpamCheckerApi
1920
21 MYPY = False
22 if MYPY:
23 import synapse.server
24
2025
2126 class SpamChecker(object):
22 def __init__(self, hs):
27 def __init__(self, hs: "synapse.server.HomeServer"):
2328 self.spam_checker = None
2429
2530 module = None
3944 else:
4045 self.spam_checker = module(config=config)
4146
42 def check_event_for_spam(self, event):
47 def check_event_for_spam(self, event: "synapse.events.EventBase") -> bool:
4348 """Checks if a given event is considered "spammy" by this server.
4449
4550 If the server considers an event spammy, then it will be rejected if
4752 users receive a blank event.
4853
4954 Args:
50 event (synapse.events.EventBase): the event to be checked
55 event: the event to be checked
5156
5257 Returns:
53 bool: True if the event is spammy.
58 True if the event is spammy.
5459 """
5560 if self.spam_checker is None:
5661 return False
5762
5863 return self.spam_checker.check_event_for_spam(event)
5964
60 def user_may_invite(self, inviter_userid, invitee_userid, room_id):
65 def user_may_invite(
66 self, inviter_userid: str, invitee_userid: str, room_id: str
67 ) -> bool:
6168 """Checks if a given user may send an invite
6269
6370 If this method returns false, the invite will be rejected.
6471
6572 Args:
66 userid (string): The sender's user ID
73 inviter_userid: The user ID of the sender of the invitation
74 invitee_userid: The user ID targeted in the invitation
75 room_id: The room ID
6776
6877 Returns:
69 bool: True if the user may send an invite, otherwise False
78 True if the user may send an invite, otherwise False
7079 """
7180 if self.spam_checker is None:
7281 return True
7584 inviter_userid, invitee_userid, room_id
7685 )
7786
78 def user_may_create_room(self, userid):
87 def user_may_create_room(self, userid: str) -> bool:
7988 """Checks if a given user may create a room
8089
8190 If this method returns false, the creation request will be rejected.
8291
8392 Args:
84 userid (string): The sender's user ID
93 userid: The ID of the user attempting to create a room
8594
8695 Returns:
87 bool: True if the user may create a room, otherwise False
96 True if the user may create a room, otherwise False
8897 """
8998 if self.spam_checker is None:
9099 return True
91100
92101 return self.spam_checker.user_may_create_room(userid)
93102
94 def user_may_create_room_alias(self, userid, room_alias):
103 def user_may_create_room_alias(self, userid: str, room_alias: str) -> bool:
95104 """Checks if a given user may create a room alias
96105
97106 If this method returns false, the association request will be rejected.
98107
99108 Args:
100 userid (string): The sender's user ID
101 room_alias (string): The alias to be created
109 userid: The ID of the user attempting to create a room alias
110 room_alias: The alias to be created
102111
103112 Returns:
104 bool: True if the user may create a room alias, otherwise False
113 True if the user may create a room alias, otherwise False
105114 """
106115 if self.spam_checker is None:
107116 return True
108117
109118 return self.spam_checker.user_may_create_room_alias(userid, room_alias)
110119
111 def user_may_publish_room(self, userid, room_id):
120 def user_may_publish_room(self, userid: str, room_id: str) -> bool:
112121 """Checks if a given user may publish a room to the directory
113122
114123 If this method returns false, the publish request will be rejected.
115124
116125 Args:
117 userid (string): The sender's user ID
118 room_id (string): The ID of the room that would be published
126 userid: The user ID attempting to publish the room
127 room_id: The ID of the room that would be published
119128
120129 Returns:
121 bool: True if the user may publish the room, otherwise False
130 True if the user may publish the room, otherwise False
122131 """
123132 if self.spam_checker is None:
124133 return True
125134
126135 return self.spam_checker.user_may_publish_room(userid, room_id)
136
137 def check_username_for_spam(self, user_profile: Dict[str, str]) -> bool:
138 """Checks if a user ID or display name are considered "spammy" by this server.
139
140 If the server considers a username spammy, then it will not be included in
141 user directory results.
142
143 Args:
144 user_profile: The user information to check, it contains the keys:
145 * user_id
146 * display_name
147 * avatar_url
148
149 Returns:
150 True if the user is spammy.
151 """
152 if self.spam_checker is None:
153 return False
154
155 # For backwards compatibility, if the method does not exist on the spam checker, fall back to not interfering.
156 checker = getattr(self.spam_checker, "check_username_for_spam", None)
157 if not checker:
158 return False
159 # Make a copy of the user profile object to ensure the spam checker
160 # cannot modify it.
161 return checker(user_profile.copy())
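
For context, a sketch of a spam-checker module that implements the new check_username_for_spam hook alongside the existing hooks; the class name, config key and blocking rule are invented, and such a module would be registered via the spam_checker option in the homeserver config:

    class ExampleSpamChecker:
        def __init__(self, config):
            self._blocked_words = [w.lower() for w in config.get("blocked_words", [])]

        @staticmethod
        def parse_config(config):
            return config

        def check_event_for_spam(self, event):
            return False

        def user_may_invite(self, inviter_userid, invitee_userid, room_id):
            return True

        def user_may_create_room(self, userid):
            return True

        def user_may_create_room_alias(self, userid, room_alias):
            return True

        def user_may_publish_room(self, userid, room_id):
            return True

        def check_username_for_spam(self, user_profile):
            # Hide users whose display name contains a blocked word from
            # user directory search results.
            display_name = (user_profile.get("display_name") or "").lower()
            return any(word in display_name for word in self._blocked_words)
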
7373 is_requester_admin (bool): If the requester is an admin
7474
7575 Returns:
76 defer.Deferred
76 defer.Deferred[bool]: Whether room creation is allowed or denied.
7777 """
7878
7979 if self.third_party_rules is None:
80 return
80 return True
8181
82 yield self.third_party_rules.on_create_room(
82 ret = yield self.third_party_rules.on_create_room(
8383 requester, config, is_requester_admin
8484 )
85 return ret
8586
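
With the return value now honoured, a third-party rules module can veto room creation from on_create_room; a sketch (the class name and policy are invented, and the constructor and parse_config follow the usual module-loading convention):

    class ExampleThirdPartyRules:
        def __init__(self, config, http_client=None):
            self._allow_non_admin_rooms = config.get("allow_non_admin_rooms", True)

        @staticmethod
        def parse_config(config):
            return config

        def on_create_room(self, requester, room_config, is_requester_admin):
            # Returning False now causes the room creation request to be rejected.
            if is_requester_admin:
                return True
            return self._allow_non_admin_rooms
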
8687 @defer.inlineCallbacks
8788 def check_threepid_can_be_invited(self, medium, address, room_id):
00 # -*- coding: utf-8 -*-
11 # Copyright 2015, 2016 OpenMarket Ltd
2 # Copyright 2020 The Matrix.org Foundation C.I.C.
23 #
34 # Licensed under the Apache License, Version 2.0 (the "License");
45 # you may not use this file except in compliance with the License.
2122
2223 from synapse.api.constants import MAX_DEPTH, EventTypes, Membership
2324 from synapse.api.errors import Codes, SynapseError
24 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, EventFormatVersions
25 from synapse.api.room_versions import (
26 KNOWN_ROOM_VERSIONS,
27 EventFormatVersions,
28 RoomVersion,
29 )
2530 from synapse.crypto.event_signing import check_event_content_hash
26 from synapse.events import event_type_from_format_version
31 from synapse.events import EventBase, make_event_from_dict
2732 from synapse.events.utils import prune_event
2833 from synapse.http.servlet import assert_params_in_dict
2934 from synapse.logging.context import (
3237 make_deferred_yieldable,
3338 preserve_fn,
3439 )
35 from synapse.types import get_domain_from_id
40 from synapse.types import JsonDict, get_domain_from_id
3641 from synapse.util import unwrapFirstError
3742
3843 logger = logging.getLogger(__name__)
341346 )
342347
343348
344 def event_from_pdu_json(pdu_json, event_format_version, outlier=False):
345 """Construct a FrozenEvent from an event json received over federation
349 def event_from_pdu_json(
350 pdu_json: JsonDict, room_version: RoomVersion, outlier: bool = False
351 ) -> EventBase:
352 """Construct an EventBase from an event json received over federation
346353
347354 Args:
348 pdu_json (object): pdu as received over federation
349 event_format_version (int): The event format version
350 outlier (bool): True to mark this event as an outlier
351
352 Returns:
353 FrozenEvent
355 pdu_json: pdu as received over federation
356 room_version: The version of the room this event belongs to
357 outlier: True to mark this event as an outlier
354358
355359 Raises:
356360 SynapseError: if the pdu is missing required fields or is otherwise
369373 elif depth > MAX_DEPTH:
370374 raise SynapseError(400, "Depth too large", Codes.BAD_JSON)
371375
372 event = event_type_from_format_version(event_format_version)(pdu_json)
373
376 event = make_event_from_dict(pdu_json, room_version)
374377 event.internal_metadata.outlier = outlier
375378
376379 return event
1616 import copy
1717 import itertools
1818 import logging
19 from typing import Dict, Iterable
19 from typing import (
20 Any,
21 Awaitable,
22 Callable,
23 Dict,
24 Iterable,
25 List,
26 Optional,
27 Sequence,
28 Tuple,
29 TypeVar,
30 )
2031
2132 from prometheus_client import Counter
2233
3445 from synapse.api.room_versions import (
3546 KNOWN_ROOM_VERSIONS,
3647 EventFormatVersions,
48 RoomVersion,
3749 RoomVersions,
3850 )
39 from synapse.events import builder, room_version_to_event_format
51 from synapse.events import EventBase, builder
4052 from synapse.federation.federation_base import FederationBase, event_from_pdu_json
4153 from synapse.logging.context import make_deferred_yieldable
4254 from synapse.logging.utils import log_function
55 from synapse.types import JsonDict
4356 from synapse.util import unwrapFirstError
4457 from synapse.util.caches.expiringcache import ExpiringCache
4558 from synapse.util.retryutils import NotRetryingDestination
5063
5164
5265 PDU_RETRY_TIME_MS = 1 * 60 * 1000
66
67 T = TypeVar("T")
5368
5469
5570 class InvalidResponseError(RuntimeError):
169184 sent_queries_counter.labels("client_one_time_keys").inc()
170185 return self.transport_layer.claim_client_keys(destination, content, timeout)
171186
172 @defer.inlineCallbacks
173 @log_function
174 def backfill(self, dest, room_id, limit, extremities):
175 """Requests some more historic PDUs for the given context from the
187 async def backfill(
188 self, dest: str, room_id: str, limit: int, extremities: Iterable[str]
189 ) -> List[EventBase]:
190 """Requests some more historic PDUs for the given room from the
176191 given destination server.
177192
178193 Args:
179194 dest (str): The remote homeserver to ask.
180195 room_id (str): The room_id to backfill.
181 limit (int): The maximum number of PDUs to return.
182 extremities (list): List of PDU id and origins of the first pdus
183 we have seen from the context
184
185 Returns:
186 Deferred: Results in the received PDUs.
196 limit (int): The maximum number of events to return.
197 extremities (list): our current backwards extremities, to backfill from
187198 """
188199 logger.debug("backfill extrem=%s", extremities)
189200
191202 if not extremities:
192203 return
193204
194 transaction_data = yield self.transport_layer.backfill(
205 transaction_data = await self.transport_layer.backfill(
195206 dest, room_id, extremities, limit
196207 )
197208
198209 logger.debug("backfill transaction_data=%r", transaction_data)
199210
200 room_version = yield self.store.get_room_version_id(room_id)
201 format_ver = room_version_to_event_format(room_version)
211 room_version = await self.store.get_room_version(room_id)
202212
203213 pdus = [
204 event_from_pdu_json(p, format_ver, outlier=False)
214 event_from_pdu_json(p, room_version, outlier=False)
205215 for p in transaction_data["pdus"]
206216 ]
207217
208218 # FIXME: We should handle signature failures more gracefully.
209 pdus[:] = yield make_deferred_yieldable(
219 pdus[:] = await make_deferred_yieldable(
210220 defer.gatherResults(
211 self._check_sigs_and_hashes(room_version, pdus), consumeErrors=True
221 self._check_sigs_and_hashes(room_version.identifier, pdus),
222 consumeErrors=True,
212223 ).addErrback(unwrapFirstError)
213224 )
214225
215226 return pdus
216227
217 @defer.inlineCallbacks
218 @log_function
219 def get_pdu(
220 self, destinations, event_id, room_version, outlier=False, timeout=None
221 ):
228 async def get_pdu(
229 self,
230 destinations: Iterable[str],
231 event_id: str,
232 room_version: RoomVersion,
233 outlier: bool = False,
234 timeout: Optional[int] = None,
235 ) -> Optional[EventBase]:
222236 """Requests the PDU with given origin and ID from the remote home
223237 servers.
224238
226240 one succeeds.
227241
228242 Args:
229 destinations (list): Which homeservers to query
230 event_id (str): event to fetch
231 room_version (str): version of the room
232 outlier (bool): Indicates whether the PDU is an `outlier`, i.e. if
243 destinations: Which homeservers to query
244 event_id: event to fetch
245 room_version: version of the room
246 outlier: Indicates whether the PDU is an `outlier`, i.e. if
233247 it's from an arbitrary point in the context as opposed to part
234248 of the current block of PDUs. Defaults to `False`
235 timeout (int): How long to try (in ms) each destination for before
249 timeout: How long to try (in ms) each destination for before
236250 moving to the next destination. None indicates no timeout.
237251
238252 Returns:
239 Deferred: Results in the requested PDU, or None if we were unable to find
240 it.
253 The requested PDU, or None if we were unable to find it.
241254 """
242255
243256 # TODO: Rate limit the number of times we try and get the same event.
247260 return ev
248261
249262 pdu_attempts = self.pdu_destination_tried.setdefault(event_id, {})
250
251 format_ver = room_version_to_event_format(room_version)
252263
253264 signed_pdu = None
254265 for destination in destinations:
258269 continue
259270
260271 try:
261 transaction_data = yield self.transport_layer.get_event(
272 transaction_data = await self.transport_layer.get_event(
262273 destination, event_id, timeout=timeout
263274 )
264275
270281 )
271282
272283 pdu_list = [
273 event_from_pdu_json(p, format_ver, outlier=outlier)
284 event_from_pdu_json(p, room_version, outlier=outlier)
274285 for p in transaction_data["pdus"]
275286 ]
276287
278289 pdu = pdu_list[0]
279290
280291 # Check signatures are correct.
281 signed_pdu = yield self._check_sigs_and_hash(room_version, pdu)
292 signed_pdu = await self._check_sigs_and_hash(
293 room_version.identifier, pdu
294 )
282295
283296 break
284297
308321
309322 return signed_pdu
310323
311 @defer.inlineCallbacks
312 def get_room_state_ids(self, destination: str, room_id: str, event_id: str):
324 async def get_room_state_ids(
325 self, destination: str, room_id: str, event_id: str
326 ) -> Tuple[List[str], List[str]]:
313327 """Calls the /state_ids endpoint to fetch the state at a particular point
314328 in the room, and the auth events for the given event
315329
316330 Returns:
317 Tuple[List[str], List[str]]: a tuple of (state event_ids, auth event_ids)
318 """
319 result = yield self.transport_layer.get_room_state_ids(
331 a tuple of (state event_ids, auth event_ids)
332 """
333 result = await self.transport_layer.get_room_state_ids(
320334 destination, room_id, event_id=event_id
321335 )
322336
330344
331345 return state_event_ids, auth_event_ids
332346
333 @defer.inlineCallbacks
334 @log_function
335 def get_event_auth(self, destination, room_id, event_id):
336 res = yield self.transport_layer.get_event_auth(destination, room_id, event_id)
337
338 room_version = yield self.store.get_room_version_id(room_id)
339 format_ver = room_version_to_event_format(room_version)
347 async def get_event_auth(self, destination, room_id, event_id):
348 res = await self.transport_layer.get_event_auth(destination, room_id, event_id)
349
350 room_version = await self.store.get_room_version(room_id)
340351
341352 auth_chain = [
342 event_from_pdu_json(p, format_ver, outlier=True) for p in res["auth_chain"]
353 event_from_pdu_json(p, room_version, outlier=True)
354 for p in res["auth_chain"]
343355 ]
344356
345 signed_auth = yield self._check_sigs_and_hash_and_fetch(
346 destination, auth_chain, outlier=True, room_version=room_version
357 signed_auth = await self._check_sigs_and_hash_and_fetch(
358 destination, auth_chain, outlier=True, room_version=room_version.identifier
347359 )
348360
349361 signed_auth.sort(key=lambda e: e.depth)
350362
351363 return signed_auth
352364
353 @defer.inlineCallbacks
354 def _try_destination_list(self, description, destinations, callback):
365 async def _try_destination_list(
366 self,
367 description: str,
368 destinations: Iterable[str],
369 callback: Callable[[str], Awaitable[T]],
370 ) -> T:
355371 """Try an operation on a series of servers, until it succeeds
356372
357373 Args:
358 description (unicode): description of the operation we're doing, for logging
359
360 destinations (Iterable[unicode]): list of server_names to try
361
362 callback (callable): Function to run for each server. Passed a single
363 argument: the server_name to try. May return a deferred.
374 description: description of the operation we're doing, for logging
375
376 destinations: list of server_names to try
377
378 callback: Function to run for each server. Passed a single
379 argument: the server_name to try.
364380
365381 If the callback raises a CodeMessageException with a 300/400 code,
366382 attempts to perform the operation stop immediately and the exception is
371387 suppressed if the exception is an InvalidResponseError.
372388
373389 Returns:
374 The [Deferred] result of callback, if it succeeds
390 The result of callback, if it succeeds
375391
376392 Raises:
377393 SynapseError if the chosen remote server returns a 300/400 code, or
382398 continue
383399
384400 try:
385 res = yield callback(destination)
401 res = await callback(destination)
386402 return res
387403 except InvalidResponseError as e:
388404 logger.warning("Failed to %s via %s: %s", description, destination, e)
401417 )
402418 except Exception:
403419 logger.warning(
404 "Failed to %s via %s", description, destination, exc_info=1
420 "Failed to %s via %s", description, destination, exc_info=True
405421 )
406422
407423 raise SynapseError(502, "Failed to %s via any server" % (description,))
408424
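
The retry-over-destinations pattern that _try_destination_list implements, as a standalone asyncio sketch (simplified: it drops the 300/400 short-circuit, and the server names are invented):

    import asyncio
    from typing import Awaitable, Callable, Iterable, TypeVar

    T = TypeVar("T")

    async def try_destination_list(
        description: str,
        destinations: Iterable[str],
        callback: Callable[[str], Awaitable[T]],
    ) -> T:
        # Try each destination in turn; the first success wins.
        for destination in destinations:
            try:
                return await callback(destination)
            except Exception as e:
                print("Failed to %s via %s: %s" % (description, destination, e))
        raise RuntimeError("Failed to %s via any server" % (description,))

    async def main():
        async def fetch(server: str) -> str:
            if server != "good.example.com":
                raise RuntimeError("unreachable")
            return "ok from " + server

        print(await try_destination_list(
            "fetch the thing", ["bad.example.com", "good.example.com"], fetch
        ))

    asyncio.run(main())
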
409 def make_membership_event(
425 async def make_membership_event(
410426 self,
411427 destinations: Iterable[str],
412428 room_id: str,
414430 membership: str,
415431 content: dict,
416432 params: Dict[str, str],
417 ):
433 ) -> Tuple[str, EventBase, RoomVersion]:
418434 """
419435 Creates an m.room.member event, with context, without participating in the room.
420436
435451 content: Any additional data to put into the content field of the
436452 event.
437453 params: Query parameters to include in the request.
438 Return:
439 Deferred[Tuple[str, FrozenEvent, RoomVersion]]: resolves to a tuple of
454
455 Returns:
440456 `(origin, event, room_version)` where origin is the remote
441457 homeserver which generated the event, and room_version is the
442458 version of the room.
443459
444 Fails with a `UnsupportedRoomVersionError` if remote responds with
445 a room version we don't understand.
446
447 Fails with a ``SynapseError`` if the chosen remote server
448 returns a 300/400 code.
449
450 Fails with a ``RuntimeError`` if no servers were reachable.
460 Raises:
461 UnsupportedRoomVersionError: if remote responds with
462 a room version we don't understand.
463
464 SynapseError: if the chosen remote server returns a 300/400 code.
465
466 RuntimeError: if no servers were reachable.
451467 """
452468 valid_memberships = {Membership.JOIN, Membership.LEAVE}
453469 if membership not in valid_memberships:
456472 % (membership, ",".join(valid_memberships))
457473 )
458474
459 @defer.inlineCallbacks
460 def send_request(destination):
461 ret = yield self.transport_layer.make_membership_event(
475 async def send_request(destination: str) -> Tuple[str, EventBase, RoomVersion]:
476 ret = await self.transport_layer.make_membership_event(
462477 destination, room_id, user_id, membership, params
463478 )
464479
491506 event_dict=pdu_dict,
492507 )
493508
494 return (destination, ev, room_version)
495
496 return self._try_destination_list(
509 return destination, ev, room_version
510
511 return await self._try_destination_list(
497512 "make_" + membership, destinations, send_request
498513 )
499514
500 def send_join(self, destinations, pdu, event_format_version):
515 async def send_join(
516 self, destinations: Iterable[str], pdu: EventBase, room_version: RoomVersion
517 ) -> Dict[str, Any]:
501518 """Sends a join event to one of a list of homeservers.
502519
503520 Doing so will cause the remote server to add the event to the graph,
504521 and send the event out to the rest of the federation.
505522
506523 Args:
507 destinations (str): Candidate homeservers which are probably
524 destinations: Candidate homeservers which are probably
508525 participating in the room.
509 pdu (BaseEvent): event to be sent
510 event_format_version (int): The event format version
511
512 Return:
513 Deferred: resolves to a dict with members ``origin`` (a string
514 giving the serer the event was sent to, ``state`` (?) and
526 pdu: event to be sent
527 room_version: the version of the room (according to the server that
528 did the make_join)
529
530 Returns:
531 a dict with members ``origin`` (a string
532 giving the server the event was sent to), ``state`` (?) and
515533 ``auth_chain``.
516534
517 Fails with a ``SynapseError`` if the chosen remote server
518 returns a 300/400 code.
519
520 Fails with a ``RuntimeError`` if no servers were reachable.
521 """
522
523 def check_authchain_validity(signed_auth_chain):
524 for e in signed_auth_chain:
525 if e.type == EventTypes.Create:
535 Raises:
536 SynapseError: if the chosen remote server returns a 300/400 code.
537
538 RuntimeError: if no servers were reachable.
539 """
540
541 async def send_request(destination) -> Dict[str, Any]:
542 content = await self._do_send_join(destination, pdu)
543
544 logger.debug("Got content: %s", content)
545
546 state = [
547 event_from_pdu_json(p, room_version, outlier=True)
548 for p in content.get("state", [])
549 ]
550
551 auth_chain = [
552 event_from_pdu_json(p, room_version, outlier=True)
553 for p in content.get("auth_chain", [])
554 ]
555
556 pdus = {p.event_id: p for p in itertools.chain(state, auth_chain)}
557
558 create_event = None
559 for e in state:
560 if (e.type, e.state_key) == (EventTypes.Create, ""):
526561 create_event = e
527562 break
528 else:
529 raise InvalidResponseError("no %s in auth chain" % (EventTypes.Create,))
530
531 # the room version should be sane.
532 room_version = create_event.content.get("room_version", "1")
533 if room_version not in KNOWN_ROOM_VERSIONS:
534 # This shouldn't be possible, because the remote server should have
535 # rejected the join attempt during make_join.
536 raise InvalidResponseError(
537 "room appears to have unsupported version %s" % (room_version,)
538 )
539
540 @defer.inlineCallbacks
541 def send_request(destination):
542 content = yield self._do_send_join(destination, pdu)
543
544 logger.debug("Got content: %s", content)
545
546 state = [
547 event_from_pdu_json(p, event_format_version, outlier=True)
548 for p in content.get("state", [])
549 ]
550
551 auth_chain = [
552 event_from_pdu_json(p, event_format_version, outlier=True)
553 for p in content.get("auth_chain", [])
554 ]
555
556 pdus = {p.event_id: p for p in itertools.chain(state, auth_chain)}
557
558 room_version = None
559 for e in state:
560 if (e.type, e.state_key) == (EventTypes.Create, ""):
561 room_version = e.content.get(
562 "room_version", RoomVersions.V1.identifier
563 )
564 break
565
566 if room_version is None:
563
564 if create_event is None:
567565 # If the state doesn't have a create event then the room is
568566 # invalid, and it would fail auth checks anyway.
569567 raise SynapseError(400, "No create event in state")
570568
571 valid_pdus = yield self._check_sigs_and_hash_and_fetch(
569 # the room version should be sane.
570 create_room_version = create_event.content.get(
571 "room_version", RoomVersions.V1.identifier
572 )
573 if create_room_version != room_version.identifier:
574 # either the server that fulfilled the make_join, or the server that is
575 # handling the send_join, is lying.
576 raise InvalidResponseError(
577 "Unexpected room version %s in create event"
578 % (create_room_version,)
579 )
580
581 valid_pdus = await self._check_sigs_and_hash_and_fetch(
572582 destination,
573583 list(pdus.values()),
574584 outlier=True,
575 room_version=room_version,
585 room_version=room_version.identifier,
576586 )
577587
578588 valid_pdus_map = {p.event_id: p for p in valid_pdus}
596606 for s in signed_state:
597607 s.internal_metadata = copy.deepcopy(s.internal_metadata)
598608
599 check_authchain_validity(signed_auth)
609 # double-check that the same create event has ended up in the auth chain
610 auth_chain_create_events = [
611 e.event_id
612 for e in signed_auth
613 if (e.type, e.state_key) == (EventTypes.Create, "")
614 ]
615 if auth_chain_create_events != [create_event.event_id]:
616 raise InvalidResponseError(
617 "Unexpected create event(s) in auth chain"
618 % (auth_chain_create_events,)
619 )
600620
601621 return {
602622 "state": signed_state,
604624 "origin": destination,
605625 }
606626
607 return self._try_destination_list("send_join", destinations, send_request)
608
609 @defer.inlineCallbacks
610 def _do_send_join(self, destination, pdu):
627 return await self._try_destination_list("send_join", destinations, send_request)
628
629 async def _do_send_join(self, destination: str, pdu: EventBase):
611630 time_now = self._clock.time_msec()
612631
613632 try:
614 content = yield self.transport_layer.send_join_v2(
633 content = await self.transport_layer.send_join_v2(
615634 destination=destination,
616635 room_id=pdu.room_id,
617636 event_id=pdu.event_id,
633652
634653 logger.debug("Couldn't send_join with the v2 API, falling back to the v1 API")
635654
636 resp = yield self.transport_layer.send_join_v1(
655 resp = await self.transport_layer.send_join_v1(
637656 destination=destination,
638657 room_id=pdu.room_id,
639658 event_id=pdu.event_id,
644663 # content.
645664 return resp[1]
646665
647 @defer.inlineCallbacks
648 def send_invite(self, destination, room_id, event_id, pdu):
649 room_version = yield self.store.get_room_version_id(room_id)
650
651 content = yield self._do_send_invite(destination, pdu, room_version)
666 async def send_invite(
667 self, destination: str, room_id: str, event_id: str, pdu: EventBase,
668 ) -> EventBase:
669 room_version = await self.store.get_room_version(room_id)
670
671 content = await self._do_send_invite(destination, pdu, room_version)
652672
653673 pdu_dict = content["event"]
654674
655675 logger.debug("Got response to send_invite: %s", pdu_dict)
656676
657 room_version = yield self.store.get_room_version_id(room_id)
658 format_ver = room_version_to_event_format(room_version)
659
660 pdu = event_from_pdu_json(pdu_dict, format_ver)
677 pdu = event_from_pdu_json(pdu_dict, room_version)
661678
662679 # Check signatures are correct.
663 pdu = yield self._check_sigs_and_hash(room_version, pdu)
680 pdu = await self._check_sigs_and_hash(room_version.identifier, pdu)
664681
665682 # FIXME: We should handle signature failures more gracefully.
666683
667684 return pdu
668685
669 @defer.inlineCallbacks
670 def _do_send_invite(self, destination, pdu, room_version):
686 async def _do_send_invite(
687 self, destination: str, pdu: EventBase, room_version: RoomVersion
688 ) -> JsonDict:
671689 """Actually sends the invite, first trying v2 API and falling back to
672690 v1 API if necessary.
673691
674 Args:
675 destination (str): Target server
676 pdu (FrozenEvent)
677 room_version (str)
678
679692 Returns:
680 dict: The event as a dict as returned by the remote server
693 The event as a dict as returned by the remote server
681694 """
682695 time_now = self._clock.time_msec()
683696
684697 try:
685 content = yield self.transport_layer.send_invite_v2(
698 content = await self.transport_layer.send_invite_v2(
686699 destination=destination,
687700 room_id=pdu.room_id,
688701 event_id=pdu.event_id,
689702 content={
690703 "event": pdu.get_pdu_json(time_now),
691 "room_version": room_version,
704 "room_version": room_version.identifier,
692705 "invite_room_state": pdu.unsigned.get("invite_room_state", []),
693706 },
694707 )
706719 # Otherwise, we assume that the remote server doesn't understand
707720 # the v2 invite API. That's ok provided the room uses old-style event
708721 # IDs.
709 v = KNOWN_ROOM_VERSIONS.get(room_version)
710 if v.event_format != EventFormatVersions.V1:
722 if room_version.event_format != EventFormatVersions.V1:
711723 raise SynapseError(
712724 400,
713725 "User's homeserver does not support this room version",
721733 # Didn't work, try v1 API.
722734 # Note the v1 API returns a tuple of `(200, content)`
723735
724 _, content = yield self.transport_layer.send_invite_v1(
736 _, content = await self.transport_layer.send_invite_v1(
725737 destination=destination,
726738 room_id=pdu.room_id,
727739 event_id=pdu.event_id,
729741 )
730742 return content
731743
732 def send_leave(self, destinations, pdu):
744 async def send_leave(self, destinations: Iterable[str], pdu: EventBase) -> None:
733745 """Sends a leave event to one of a list of homeservers.
734746
735747 Doing so will cause the remote server to add the event to the graph,
738750 This is mostly useful to reject received invites.
739751
740752 Args:
741 destinations (str): Candidate homeservers which are probably
753 destinations: Candidate homeservers which are probably
742754 participating in the room.
743 pdu (BaseEvent): event to be sent
744
745 Return:
746 Deferred: resolves to None.
747
748 Fails with a ``SynapseError`` if the chosen remote server
749 returns a 300/400 code.
750
751 Fails with a ``RuntimeError`` if no servers were reachable.
752 """
753
754 @defer.inlineCallbacks
755 def send_request(destination):
756 content = yield self._do_send_leave(destination, pdu)
757
755 pdu: event to be sent
756
757 Raises:
758 SynapseError if the chosen remote server returns a 300/400 code.
759
760 RuntimeError if no servers were reachable.
761 """
762
763 async def send_request(destination: str) -> None:
764 content = await self._do_send_leave(destination, pdu)
758765 logger.debug("Got content: %s", content)
759 return None
760
761 return self._try_destination_list("send_leave", destinations, send_request)
762
763 @defer.inlineCallbacks
764 def _do_send_leave(self, destination, pdu):
766
767 return await self._try_destination_list(
768 "send_leave", destinations, send_request
769 )
770
771 async def _do_send_leave(self, destination, pdu):
765772 time_now = self._clock.time_msec()
766773
767774 try:
768 content = yield self.transport_layer.send_leave_v2(
775 content = await self.transport_layer.send_leave_v2(
769776 destination=destination,
770777 room_id=pdu.room_id,
771778 event_id=pdu.event_id,
787794
788795 logger.debug("Couldn't send_leave with the v2 API, falling back to the v1 API")
789796
790 resp = yield self.transport_layer.send_leave_v1(
797 resp = await self.transport_layer.send_leave_v1(
791798 destination=destination,
792799 room_id=pdu.room_id,
793800 event_id=pdu.event_id,
819826 third_party_instance_id=third_party_instance_id,
820827 )
821828
822 @defer.inlineCallbacks
823 def get_missing_events(
829 async def get_missing_events(
824830 self,
825 destination,
826 room_id,
827 earliest_events_ids,
828 latest_events,
829 limit,
830 min_depth,
831 timeout,
832 ):
831 destination: str,
832 room_id: str,
833 earliest_events_ids: Sequence[str],
834 latest_events: Iterable[EventBase],
835 limit: int,
836 min_depth: int,
837 timeout: int,
838 ) -> List[EventBase]:
833839 """Tries to fetch events we are missing. This is called when we receive
834840 an event without having received all of its ancestors.
835841
836842 Args:
837 destination (str)
838 room_id (str)
839 earliest_events_ids (list): List of event ids. Effectively the
843 destination
844 room_id
845 earliest_events_ids: List of event ids. Effectively the
840846 events we expected to receive, but haven't. `get_missing_events`
841847 should only return events that didn't happen before these.
842 latest_events (list): List of events we have received that we don't
848 latest_events: List of events we have received that we don't
843849 have all previous events for.
844 limit (int): Maximum number of events to return.
845 min_depth (int): Minimum depth of events tor return.
846 timeout (int): Max time to wait in ms
850 limit: Maximum number of events to return.
851 min_depth: Minimum depth of events to return.
852 timeout: Max time to wait in ms
847853 """
848854 try:
849 content = yield self.transport_layer.get_missing_events(
855 content = await self.transport_layer.get_missing_events(
850856 destination=destination,
851857 room_id=room_id,
852858 earliest_events=earliest_events_ids,
856862 timeout=timeout,
857863 )
858864
859 room_version = yield self.store.get_room_version_id(room_id)
860 format_ver = room_version_to_event_format(room_version)
865 room_version = await self.store.get_room_version(room_id)
861866
862867 events = [
863 event_from_pdu_json(e, format_ver) for e in content.get("events", [])
868 event_from_pdu_json(e, room_version) for e in content.get("events", [])
864869 ]
865870
866 signed_events = yield self._check_sigs_and_hash_and_fetch(
867 destination, events, outlier=False, room_version=room_version
871 signed_events = await self._check_sigs_and_hash_and_fetch(
872 destination, events, outlier=False, room_version=room_version.identifier
868873 )
869874 except HttpResponseException as e:
870875 if not e.code == 400:
3737 UnsupportedRoomVersionError,
3838 )
3939 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
40 from synapse.events import room_version_to_event_format
4140 from synapse.federation.federation_base import FederationBase, event_from_pdu_json
4241 from synapse.federation.persistence import TransactionActions
4342 from synapse.federation.units import Edu, Transaction
5352 ReplicationFederationSendEduRestServlet,
5453 ReplicationGetQueryRestServlet,
5554 )
56 from synapse.types import get_domain_from_id
55 from synapse.types import JsonDict, get_domain_from_id
5756 from synapse.util import glob_to_regex, unwrapFirstError
5857 from synapse.util.async_helpers import Linearizer, concurrently_execute
5958 from synapse.util.caches.response_cache import ResponseCache
8079 self.auth = hs.get_auth()
8180 self.handler = hs.get_handlers().federation_handler
8281 self.state = hs.get_state_handler()
82
83 self.device_handler = hs.get_device_handler()
8384
8485 self._server_linearizer = Linearizer("fed_server")
8586 self._transaction_linearizer = Linearizer("fed_txn_handler")
233234 continue
234235
235236 try:
236 room_version = await self.store.get_room_version_id(room_id)
237 room_version = await self.store.get_room_version(room_id)
237238 except NotFoundError:
238239 logger.info("Ignoring PDU for unknown room_id: %s", room_id)
239240 continue
240
241 try:
242 format_ver = room_version_to_event_format(room_version)
243 except UnsupportedRoomVersionError:
241 except UnsupportedRoomVersionError as e:
244242 # this can happen if support for a given room version is withdrawn,
245243 # so that we still get events for said room.
246 logger.info(
247 "Ignoring PDU for room %s with unknown version %s",
248 room_id,
249 room_version,
250 )
244 logger.info("Ignoring PDU: %s", e)
251245 continue
252246
253 event = event_from_pdu_json(p, format_ver)
247 event = event_from_pdu_json(p, room_version)
254248 pdus_by_room.setdefault(room_id, []).append(event)
255249
256250 pdu_results = {}
301295 async def _process_edu(edu_dict):
302296 received_edus_counter.inc()
303297
304 edu = Edu(**edu_dict)
298 edu = Edu(
299 origin=origin,
300 destination=self.server_name,
301 edu_type=edu_dict["edu_type"],
302 content=edu_dict["content"],
303 )
305304 await self.registry.on_edu(edu.edu_type, origin, edu.content)
306305
307306 await concurrently_execute(
395394 time_now = self._clock.time_msec()
396395 return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
397396
398 async def on_invite_request(self, origin, content, room_version):
399 if room_version not in KNOWN_ROOM_VERSIONS:
397 async def on_invite_request(
398 self, origin: str, content: JsonDict, room_version_id: str
399 ):
400 room_version = KNOWN_ROOM_VERSIONS.get(room_version_id)
401 if not room_version:
400402 raise SynapseError(
401403 400,
402404 "Homeserver does not support this room version",
403405 Codes.UNSUPPORTED_ROOM_VERSION,
404406 )
405407
406 format_ver = room_version_to_event_format(room_version)
407
408 pdu = event_from_pdu_json(content, format_ver)
408 pdu = event_from_pdu_json(content, room_version)
409409 origin_host, _ = parse_server_name(origin)
410410 await self.check_server_matches_acl(origin_host, pdu.room_id)
411 pdu = await self._check_sigs_and_hash(room_version, pdu)
411 pdu = await self._check_sigs_and_hash(room_version.identifier, pdu)
412412 ret_pdu = await self.handler.on_invite_request(origin, pdu, room_version)
413413 time_now = self._clock.time_msec()
414414 return {"event": ret_pdu.get_pdu_json(time_now)}
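
The lookup the invite handler now performs, resolving the wire-format version identifier into a full RoomVersion object, shown in isolation (assumes a synapse 1.11 install; "5" is just an example identifier):

    from synapse.api.room_versions import KNOWN_ROOM_VERSIONS

    room_version = KNOWN_ROOM_VERSIONS.get("5")
    if room_version is None:
        raise ValueError("Homeserver does not support this room version")
    # The RoomVersion object carries the event format alongside the identifier.
    print(room_version.identifier, room_version.event_format)
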
416416 async def on_send_join_request(self, origin, content, room_id):
417417 logger.debug("on_send_join_request: content: %s", content)
418418
419 room_version = await self.store.get_room_version_id(room_id)
420 format_ver = room_version_to_event_format(room_version)
421 pdu = event_from_pdu_json(content, format_ver)
419 room_version = await self.store.get_room_version(room_id)
420 pdu = event_from_pdu_json(content, room_version)
422421
423422 origin_host, _ = parse_server_name(origin)
424423 await self.check_server_matches_acl(origin_host, pdu.room_id)
425424
426425 logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
427426
428 pdu = await self._check_sigs_and_hash(room_version, pdu)
427 pdu = await self._check_sigs_and_hash(room_version.identifier, pdu)
429428
430429 res_pdus = await self.handler.on_send_join_request(origin, pdu)
431430 time_now = self._clock.time_msec()
447446 async def on_send_leave_request(self, origin, content, room_id):
448447 logger.debug("on_send_leave_request: content: %s", content)
449448
450 room_version = await self.store.get_room_version_id(room_id)
451 format_ver = room_version_to_event_format(room_version)
452 pdu = event_from_pdu_json(content, format_ver)
449 room_version = await self.store.get_room_version(room_id)
450 pdu = event_from_pdu_json(content, room_version)
453451
454452 origin_host, _ = parse_server_name(origin)
455453 await self.check_server_matches_acl(origin_host, pdu.room_id)
456454
457455 logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
458456
459 pdu = await self._check_sigs_and_hash(room_version, pdu)
457 pdu = await self._check_sigs_and_hash(room_version.identifier, pdu)
460458
461459 await self.handler.on_send_leave_request(origin, pdu)
462460 return {}
494492 origin_host, _ = parse_server_name(origin)
495493 await self.check_server_matches_acl(origin_host, room_id)
496494
497 room_version = await self.store.get_room_version_id(room_id)
498 format_ver = room_version_to_event_format(room_version)
495 room_version = await self.store.get_room_version(room_id)
499496
500497 auth_chain = [
501 event_from_pdu_json(e, format_ver) for e in content["auth_chain"]
498 event_from_pdu_json(e, room_version) for e in content["auth_chain"]
502499 ]
503500
504501 signed_auth = await self._check_sigs_and_hash_and_fetch(
505 origin, auth_chain, outlier=True, room_version=room_version
502 origin, auth_chain, outlier=True, room_version=room_version.identifier
506503 )
507504
508505 ret = await self.handler.on_query_auth(
527524 def on_query_client_keys(self, origin, content):
528525 return self.on_query_request("client_keys", content)
529526
530 def on_query_user_devices(self, origin, user_id):
531 return self.on_query_request("user_devices", user_id)
527 async def on_query_user_devices(self, origin: str, user_id: str):
528 keys = await self.device_handler.on_federation_query_user_devices(user_id)
529 return 200, keys
532530
533531 @trace
534532 async def on_claim_client_keys(self, origin, content):
569567 origin_host, _ = parse_server_name(origin)
570568 await self.check_server_matches_acl(origin_host, room_id)
571569
572 logger.info(
570 logger.debug(
573571 "on_get_missing_events: earliest_events: %r, latest_events: %r,"
574572 " limit: %d",
575573 earliest_events,
582580 )
583581
584582 if len(missing_events) < 5:
585 logger.info(
583 logger.debug(
586584 "Returning %d events: %r", len(missing_events), missing_events
587585 )
588586 else:
589 logger.info("Returning %d events", len(missing_events))
587 logger.debug("Returning %d events", len(missing_events))
590588
591589 time_now = self._clock.time_msec()
592590
1313 # limitations under the License.
1414
1515 import logging
16 from typing import Dict, Hashable, Iterable, List, Optional, Set
1617
1718 from six import itervalues
1819
2223
2324 import synapse
2425 import synapse.metrics
26 from synapse.events import EventBase
2527 from synapse.federation.sender.per_destination_queue import PerDestinationQueue
2628 from synapse.federation.sender.transaction_manager import TransactionManager
2729 from synapse.federation.units import Edu
3840 events_processed_counter,
3941 )
4042 from synapse.metrics.background_process_metrics import run_as_background_process
43 from synapse.storage.presence import UserPresenceState
44 from synapse.types import ReadReceipt
4145 from synapse.util.metrics import Measure, measure_func
4246
4347 logger = logging.getLogger(__name__)
6771 self._transaction_manager = TransactionManager(hs)
6872
6973 # map from destination to PerDestinationQueue
70 self._per_destination_queues = {} # type: dict[str, PerDestinationQueue]
74 self._per_destination_queues = {} # type: Dict[str, PerDestinationQueue]
7175
7276 LaterGauge(
7377 "synapse_federation_transaction_queue_pending_destinations",
8387 # Map of user_id -> UserPresenceState for all the pending presence
8488 # to be sent out by user_id. Entries here get processed and put in
8589 # pending_presence_by_dest
86 self.pending_presence = {}
90 self.pending_presence = {} # type: Dict[str, UserPresenceState]
8791
8892 LaterGauge(
8993 "synapse_federation_transaction_queue_pending_pdus",
115119 # and that there is a pending call to _flush_rrs_for_room in the system.
116120 self._queues_awaiting_rr_flush_by_room = (
117121 {}
118 ) # type: dict[str, set[PerDestinationQueue]]
122 ) # type: Dict[str, Set[PerDestinationQueue]]
119123
120124 self._rr_txn_interval_per_room_ms = (
121 1000.0 / hs.get_config().federation_rr_transactions_per_room_per_second
122 )
123
124 def _get_per_destination_queue(self, destination):
125 1000.0 / hs.config.federation_rr_transactions_per_room_per_second
126 )
127
128 def _get_per_destination_queue(self, destination: str) -> PerDestinationQueue:
125129 """Get or create a PerDestinationQueue for the given destination
126130
127131 Args:
128 destination (str): server_name of remote server
129
130 Returns:
131 PerDestinationQueue
132 destination: server_name of remote server
132133 """
133134 queue = self._per_destination_queues.get(destination)
134135 if not queue:
136137 self._per_destination_queues[destination] = queue
137138 return queue
138139
139 def notify_new_events(self, current_id):
140 def notify_new_events(self, current_id: int) -> None:
140141 """This gets called when we have some new events we might want to
141142 send out to other servers.
142143 """
150151 "process_event_queue_for_federation", self._process_event_queue_loop
151152 )
152153
153 @defer.inlineCallbacks
154 def _process_event_queue_loop(self):
154 async def _process_event_queue_loop(self) -> None:
155155 try:
156156 self._is_processing = True
157157 while True:
158 last_token = yield self.store.get_federation_out_pos("events")
159 next_token, events = yield self.store.get_all_new_events_stream(
158 last_token = await self.store.get_federation_out_pos("events")
159 next_token, events = await self.store.get_all_new_events_stream(
160160 last_token, self._last_poked_id, limit=100
161161 )
162162
165165 if not events and next_token >= self._last_poked_id:
166166 break
167167
168 @defer.inlineCallbacks
169 def handle_event(event):
168 async def handle_event(event: EventBase) -> None:
170169 # Only send events for this server.
171170 send_on_behalf_of = event.internal_metadata.get_send_on_behalf_of()
172171 is_mine = self.is_mine_id(event.sender)
183182 # Otherwise if the last member on a server in a room is
184183 # banned then it won't receive the event because it won't
185184 # be in the room after the ban.
186 destinations = yield self.state.get_hosts_in_room_at_events(
185 destinations = await self.state.get_hosts_in_room_at_events(
187186 event.room_id, event_ids=event.prev_event_ids()
188187 )
189188 except Exception:
205204
206205 self._send_pdu(event, destinations)
207206
208 @defer.inlineCallbacks
209 def handle_room_events(events):
207 async def handle_room_events(events: Iterable[EventBase]) -> None:
210208 with Measure(self.clock, "handle_room_events"):
211209 for event in events:
212 yield handle_event(event)
213
214 events_by_room = {}
210 await handle_event(event)
211
212 events_by_room = {} # type: Dict[str, List[EventBase]]
215213 for event in events:
216214 events_by_room.setdefault(event.room_id, []).append(event)
217215
218 yield make_deferred_yieldable(
216 await make_deferred_yieldable(
219217 defer.gatherResults(
220218 [
221219 run_in_background(handle_room_events, evs)
225223 )
226224 )
227225
228 yield self.store.update_federation_out_pos("events", next_token)
226 await self.store.update_federation_out_pos("events", next_token)
229227
230228 if events:
231229 now = self.clock.time_msec()
232 ts = yield self.store.get_received_ts(events[-1].event_id)
230 ts = await self.store.get_received_ts(events[-1].event_id)
233231
234232 synapse.metrics.event_processing_lag.labels(
235233 "federation_sender"
253251 finally:
254252 self._is_processing = False
255253
256 def _send_pdu(self, pdu, destinations):
254 def _send_pdu(self, pdu: EventBase, destinations: Iterable[str]) -> None:
257255 # We loop through all destinations to see whether we already have
258256 # a transaction in progress. If we do, stick it in the pending_pdus
259257 # table and we'll get back to it later.
275273 self._get_per_destination_queue(destination).send_pdu(pdu, order)
276274
277275 @defer.inlineCallbacks
278 def send_read_receipt(self, receipt):
276 def send_read_receipt(self, receipt: ReadReceipt):
279277 """Send a RR to any other servers in the room
280278
281279 Args:
282 receipt (synapse.types.ReadReceipt): receipt to be sent
280 receipt: receipt to be sent
283281 """
284282
285283 # Some background on the rate-limiting going on here.
342340 else:
343341 queue.flush_read_receipts_for_room(room_id)
344342
345 def _schedule_rr_flush_for_room(self, room_id, n_domains):
343 def _schedule_rr_flush_for_room(self, room_id: str, n_domains: int) -> None:
346344 # that is going to cause approximately len(domains) transactions, so now back
347345 # off for that multiplied by RR_TXN_INTERVAL_PER_ROOM
348346 backoff_ms = self._rr_txn_interval_per_room_ms * n_domains
351349 self.clock.call_later(backoff_ms, self._flush_rrs_for_room, room_id)
352350 self._queues_awaiting_rr_flush_by_room[room_id] = set()
353351
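For a rough sense of the back-off arithmetic above: `_rr_txn_interval_per_room_ms` is derived from the `federation_rr_transactions_per_room_per_second` setting (the rate of 50 used below is only an illustrative value, not necessarily the shipped default), and the flush for a room is delayed by that interval multiplied by the number of destination domains.

```python
# Illustrative numbers only.
federation_rr_transactions_per_room_per_second = 50  # example config value
rr_txn_interval_per_room_ms = 1000.0 / federation_rr_transactions_per_room_per_second  # 20.0 ms
n_domains = 10                                        # remote servers in the room
backoff_ms = rr_txn_interval_per_room_ms * n_domains  # 200.0 ms before flushing again
print(backoff_ms)
```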
354 def _flush_rrs_for_room(self, room_id):
352 def _flush_rrs_for_room(self, room_id: str) -> None:
355353 queues = self._queues_awaiting_rr_flush_by_room.pop(room_id)
356354 logger.debug("Flushing RRs in %s to %s", room_id, queues)
357355
367365
368366 @preserve_fn # the caller should not yield on this
369367 @defer.inlineCallbacks
370 def send_presence(self, states):
368 def send_presence(self, states: List[UserPresenceState]):
371369 """Send the new presence states to the appropriate destinations.
372370
373371 This actually queues up the presence states ready for sending and
374372 triggers a background task to process them and send out the transactions.
375
376 Args:
377 states (list(UserPresenceState))
378373 """
379374 if not self.hs.config.use_presence:
380375 # No-op if presence is disabled.
411406 finally:
412407 self._processing_pending_presence = False
413408
414 def send_presence_to_destinations(self, states, destinations):
409 def send_presence_to_destinations(
410 self, states: List[UserPresenceState], destinations: List[str]
411 ) -> None:
415412 """Send the given presence states to the given destinations.
416
417 Args:
418 states (list[UserPresenceState])
419413 destinations (list[str])
420414 """
421415
430424
431425 @measure_func("txnqueue._process_presence")
432426 @defer.inlineCallbacks
433 def _process_presence_inner(self, states):
427 def _process_presence_inner(self, states: List[UserPresenceState]):
434428 """Given a list of states populate self.pending_presence_by_dest and
435429 poke to send a new transaction to each destination
436
437 Args:
438 states (list(UserPresenceState))
439430 """
440431 hosts_and_states = yield get_interested_remotes(self.store, states, self.state)
441432
445436 continue
446437 self._get_per_destination_queue(destination).send_presence(states)
447438
448 def build_and_send_edu(self, destination, edu_type, content, key=None):
439 def build_and_send_edu(
440 self,
441 destination: str,
442 edu_type: str,
443 content: dict,
444 key: Optional[Hashable] = None,
445 ):
449446 """Construct an Edu object, and queue it for sending
450447
451448 Args:
452 destination (str): name of server to send to
453 edu_type (str): type of EDU to send
454 content (dict): content of EDU
455 key (Any|None): clobbering key for this edu
449 destination: name of server to send to
450 edu_type: type of EDU to send
451 content: content of EDU
452 key: clobbering key for this edu
456453 """
457454 if destination == self.server_name:
458455 logger.info("Not sending EDU to ourselves")
467464
468465 self.send_edu(edu, key)
469466
470 def send_edu(self, edu, key):
467 def send_edu(self, edu: Edu, key: Optional[Hashable]):
471468 """Queue an EDU for sending
472469
473470 Args:
474 edu (Edu): edu to send
475 key (Any|None): clobbering key for this edu
471 edu: edu to send
472 key: clobbering key for this edu
476473 """
477474 queue = self._get_per_destination_queue(edu.destination)
478475 if key:
480477 else:
481478 queue.send_edu(edu)
482479
483 def send_device_messages(self, destination):
480 def send_device_messages(self, destination: str):
484481 if destination == self.server_name:
485482 logger.warning("Not sending device update to ourselves")
486483 return
500497
501498 self._get_per_destination_queue(destination).attempt_new_transaction()
502499
503 def get_current_token(self):
500 def get_current_token(self) -> int:
504501 return 0
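Much of the diff in this file is a mechanical conversion from Twisted's `@defer.inlineCallbacks`/`yield` style to native `async`/`await` coroutines, with type hints added along the way. A minimal before/after sketch of the pattern (hypothetical `store` object, not Synapse code):

```python
from twisted.internet import defer

# Old style: a generator decorated with inlineCallbacks, yielding Deferreds.
@defer.inlineCallbacks
def get_out_pos_old(store):
    pos = yield store.get_federation_out_pos("events")
    return pos

# New style: a native coroutine awaiting the same call.  Deferreds are
# awaitable, and callers can wrap the coroutine with defer.ensureDeferred
# where a Deferred is still required.
async def get_out_pos_new(store):
    pos = await store.get_federation_out_pos("events")
    return pos
```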
1414 # limitations under the License.
1515 import datetime
1616 import logging
17 from typing import Dict, Hashable, Iterable, List, Tuple
1718
1819 from prometheus_client import Counter
1920
20 from twisted.internet import defer
21
21 import synapse.server
2222 from synapse.api.errors import (
2323 FederationDeniedError,
2424 HttpResponseException,
3030 from synapse.metrics import sent_transactions_counter
3131 from synapse.metrics.background_process_metrics import run_as_background_process
3232 from synapse.storage.presence import UserPresenceState
33 from synapse.types import StateMap
33 from synapse.types import ReadReceipt
3434 from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
3535
3636 # This is defined in the Matrix spec and enforced by the receiver.
5555 Manages the per-destination transmission queues.
5656
5757 Args:
58 hs (synapse.HomeServer):
59 transaction_sender (TransactionManager):
60 destination (str): the server_name of the destination that we are managing
58 hs
59 transaction_sender
60 destination: the server_name of the destination that we are managing
6161 transmission for.
6262 """
6363
64 def __init__(self, hs, transaction_manager, destination):
64 def __init__(
65 self,
66 hs: "synapse.server.HomeServer",
67 transaction_manager: "synapse.federation.sender.TransactionManager",
68 destination: str,
69 ):
6570 self._server_name = hs.hostname
6671 self._clock = hs.get_clock()
6772 self._store = hs.get_datastore()
7176 self.transmission_loop_running = False
7277
7378 # a list of tuples of (pending pdu, order)
74 self._pending_pdus = [] # type: list[tuple[EventBase, int]]
75 self._pending_edus = [] # type: list[Edu]
79 self._pending_pdus = [] # type: List[Tuple[EventBase, int]]
80 self._pending_edus = [] # type: List[Edu]
7681
7782 # Pending EDUs by their "key". Keyed EDUs are EDUs that get clobbered
7883 # based on their key (e.g. typing events by room_id)
7984 # Map of (edu_type, key) -> Edu
80 self._pending_edus_keyed = {} # type: StateMap[Edu]
85 self._pending_edus_keyed = {} # type: Dict[Tuple[str, Hashable], Edu]
8186
8287 # Map of user_id -> UserPresenceState of pending presence to be sent to this
8388 # destination
84 self._pending_presence = {} # type: dict[str, UserPresenceState]
89 self._pending_presence = {} # type: Dict[str, UserPresenceState]
8590
8691 # room_id -> receipt_type -> user_id -> receipt_dict
87 self._pending_rrs = {}
92 self._pending_rrs = {} # type: Dict[str, Dict[str, Dict[str, dict]]]
8893 self._rrs_pending_flush = False
8994
9095 # stream_id of last successfully sent to-device message.
9499 # stream_id of last successfully sent device list update.
95100 self._last_device_list_stream_id = 0
96101
97 def __str__(self):
102 def __str__(self) -> str:
98103 return "PerDestinationQueue[%s]" % self._destination
99104
100 def pending_pdu_count(self):
105 def pending_pdu_count(self) -> int:
101106 return len(self._pending_pdus)
102107
103 def pending_edu_count(self):
108 def pending_edu_count(self) -> int:
104109 return (
105110 len(self._pending_edus)
106111 + len(self._pending_presence)
107112 + len(self._pending_edus_keyed)
108113 )
109114
110 def send_pdu(self, pdu, order):
115 def send_pdu(self, pdu: EventBase, order: int) -> None:
111116 """Add a PDU to the queue, and start the transmission loop if necessary
112117
113118 Args:
114 pdu (EventBase): pdu to send
115 order (int):
119 pdu: pdu to send
120 order
116121 """
117122 self._pending_pdus.append((pdu, order))
118123 self.attempt_new_transaction()
119124
120 def send_presence(self, states):
125 def send_presence(self, states: Iterable[UserPresenceState]) -> None:
121126 """Add presence updates to the queue. Start the transmission loop if necessary.
122127
123128 Args:
124 states (iterable[UserPresenceState]): presence to send
129 states: presence to send
125130 """
126131 self._pending_presence.update({state.user_id: state for state in states})
127132 self.attempt_new_transaction()
128133
129 def queue_read_receipt(self, receipt):
134 def queue_read_receipt(self, receipt: ReadReceipt) -> None:
130135 """Add a RR to the list to be sent. Doesn't start the transmission loop yet
131136 (see flush_read_receipts_for_room)
132137
133138 Args:
134 receipt (synapse.api.receipt_info.ReceiptInfo): receipt to be queued
139 receipt: receipt to be queued
135140 """
136141 self._pending_rrs.setdefault(receipt.room_id, {}).setdefault(
137142 receipt.receipt_type, {}
138143 )[receipt.user_id] = {"event_ids": receipt.event_ids, "data": receipt.data}
139144
140 def flush_read_receipts_for_room(self, room_id):
145 def flush_read_receipts_for_room(self, room_id: str) -> None:
141146 # if we don't have any read-receipts for this room, it may be that we've already
142147 # sent them out, so we don't need to flush.
143148 if room_id not in self._pending_rrs:
145150 self._rrs_pending_flush = True
146151 self.attempt_new_transaction()
147152
148 def send_keyed_edu(self, edu, key):
153 def send_keyed_edu(self, edu: Edu, key: Hashable) -> None:
149154 self._pending_edus_keyed[(edu.edu_type, key)] = edu
150155 self.attempt_new_transaction()
151156
152 def send_edu(self, edu):
157 def send_edu(self, edu) -> None:
153158 self._pending_edus.append(edu)
154159 self.attempt_new_transaction()
155160
156 def attempt_new_transaction(self):
161 def attempt_new_transaction(self) -> None:
157162 """Try to start a new transaction to this destination
158163
159164 If there is already a transaction in progress to this destination,
176181 self._transaction_transmission_loop,
177182 )
178183
179 @defer.inlineCallbacks
180 def _transaction_transmission_loop(self):
181 pending_pdus = []
184 async def _transaction_transmission_loop(self) -> None:
185 pending_pdus = [] # type: List[Tuple[EventBase, int]]
182186 try:
183187 self.transmission_loop_running = True
184188
185189 # This will throw if we wouldn't retry. We do this here so we fail
186190 # quickly, but we will later check this again in the http client,
187191 # hence why we throw the result away.
188 yield get_retry_limiter(self._destination, self._clock, self._store)
192 await get_retry_limiter(self._destination, self._clock, self._store)
189193
190194 pending_pdus = []
191195 while True:
192196 # We have to keep 2 free slots for presence and rr_edus
193197 limit = MAX_EDUS_PER_TRANSACTION - 2
194198
195 device_update_edus, dev_list_id = yield self._get_device_update_edus(
199 device_update_edus, dev_list_id = await self._get_device_update_edus(
196200 limit
197201 )
198202
201205 (
202206 to_device_edus,
203207 device_stream_id,
204 ) = yield self._get_to_device_message_edus(limit)
208 ) = await self._get_to_device_message_edus(limit)
205209
206210 pending_edus = device_update_edus + to_device_edus
207211
268272
269273 # END CRITICAL SECTION
270274
271 success = yield self._transaction_manager.send_new_transaction(
275 success = await self._transaction_manager.send_new_transaction(
272276 self._destination, pending_pdus, pending_edus
273277 )
274278 if success:
279283 # Remove the acknowledged device messages from the database
280284 # Only bother if we actually sent some device messages
281285 if to_device_edus:
282 yield self._store.delete_device_msgs_for_remote(
286 await self._store.delete_device_msgs_for_remote(
283287 self._destination, device_stream_id
284288 )
285289
288292 logger.info(
289293 "Marking as sent %r %r", self._destination, dev_list_id
290294 )
291 yield self._store.mark_as_sent_devices_by_remote(
295 await self._store.mark_as_sent_devices_by_remote(
292296 self._destination, dev_list_id
293297 )
294298
333337 # We want to be *very* sure we clear this after we stop processing
334338 self.transmission_loop_running = False
335339
336 def _get_rr_edus(self, force_flush):
340 def _get_rr_edus(self, force_flush: bool) -> Iterable[Edu]:
337341 if not self._pending_rrs:
338342 return
339343 if not force_flush and not self._rrs_pending_flush:
350354 self._rrs_pending_flush = False
351355 yield edu
352356
353 def _pop_pending_edus(self, limit):
357 def _pop_pending_edus(self, limit: int) -> List[Edu]:
354358 pending_edus = self._pending_edus
355359 pending_edus, self._pending_edus = pending_edus[:limit], pending_edus[limit:]
356360 return pending_edus
357361
358 @defer.inlineCallbacks
359 def _get_device_update_edus(self, limit):
362 async def _get_device_update_edus(self, limit: int) -> Tuple[List[Edu], int]:
360363 last_device_list = self._last_device_list_stream_id
361364
362365 # Retrieve list of new device updates to send to the destination
363 now_stream_id, results = yield self._store.get_device_updates_by_remote(
366 now_stream_id, results = await self._store.get_device_updates_by_remote(
364367 self._destination, last_device_list, limit=limit
365368 )
366369 edus = [
377380
378381 return (edus, now_stream_id)
379382
380 @defer.inlineCallbacks
381 def _get_to_device_message_edus(self, limit):
383 async def _get_to_device_message_edus(self, limit: int) -> Tuple[List[Edu], int]:
382384 last_device_stream_id = self._last_device_stream_id
383385 to_device_stream_id = self._store.get_to_device_stream_token()
384 contents, stream_id = yield self._store.get_new_device_msgs_for_remote(
386 contents, stream_id = await self._store.get_new_device_msgs_for_remote(
385387 self._destination, last_device_stream_id, to_device_stream_id, limit
386388 )
387389 edus = [
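The `_pending_edus_keyed` map above gives "keyed" EDUs their clobbering behaviour: a newer EDU with the same `(edu_type, key)` replaces the one already queued, so e.g. only the latest typing notification per room is sent. A tiny standalone sketch (illustrative payloads, not real Synapse objects):

```python
from typing import Dict, Hashable, Tuple

pending_edus_keyed = {}  # type: Dict[Tuple[str, Hashable], dict]

def send_keyed_edu(edu_type: str, key: Hashable, content: dict) -> None:
    # A later EDU with the same (type, key) clobbers the earlier one.
    pending_edus_keyed[(edu_type, key)] = content

send_keyed_edu("m.typing", "!room:example.org", {"typing": True})
send_keyed_edu("m.typing", "!room:example.org", {"typing": False})
assert list(pending_edus_keyed.values()) == [{"typing": False}]
```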
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414 import logging
15 from typing import List
1516
1617 from canonicaljson import json
1718
18 from twisted.internet import defer
19
19 import synapse.server
2020 from synapse.api.errors import HttpResponseException
21 from synapse.events import EventBase
2122 from synapse.federation.persistence import TransactionActions
22 from synapse.federation.units import Transaction
23 from synapse.federation.units import Edu, Transaction
2324 from synapse.logging.opentracing import (
2425 extract_text_map,
2526 set_tag,
3839 shared between PerDestinationQueue objects
3940 """
4041
41 def __init__(self, hs):
42 def __init__(self, hs: "synapse.server.HomeServer"):
4243 self._server_name = hs.hostname
4344 self.clock = hs.get_clock() # nb must be called this for @measure_func
4445 self._store = hs.get_datastore()
4950 self._next_txn_id = int(self.clock.time_msec())
5051
5152 @measure_func("_send_new_transaction")
52 @defer.inlineCallbacks
53 def send_new_transaction(self, destination, pending_pdus, pending_edus):
53 async def send_new_transaction(
54 self, destination: str, pending_pdus: List[EventBase], pending_edus: List[Edu]
55 ):
5456
5557 # Make a transaction-sending opentracing span. This span follows on from
5658 # all the edus in that transaction. This needs to be done since there is
126128 return data
127129
128130 try:
129 response = yield self._transport_layer.send_transaction(
131 response = await self._transport_layer.send_transaction(
130132 transaction, json_data_cb
131133 )
132134 code = 200
157157 origin, json_request, now, "Incoming request"
158158 )
159159
160 logger.info("Request from %s", origin)
160 logger.debug("Request from %s", origin)
161161 request.authenticated_entity = origin
162162
163163 # If we get a valid signed request from the other side, it's probably
578578 # state resolution algorithm, and we don't use that for processing
579579 # invites
580580 content = await self.handler.on_invite_request(
581 origin, content, room_version=RoomVersions.V1.identifier
581 origin, content, room_version_id=RoomVersions.V1.identifier
582582 )
583583
584584 # V1 federation API is defined to return a content of `[200, {...}]`
605605 event.setdefault("unsigned", {})["invite_room_state"] = invite_room_state
606606
607607 content = await self.handler.on_invite_request(
608 origin, event, room_version=room_version
608 origin, event, room_version_id=room_version
609609 )
610610 return 200, content
611611
1818
1919 import logging
2020
21 import attr
22
23 from synapse.types import JsonDict
2124 from synapse.util.jsonobject import JsonEncodedObject
2225
2326 logger = logging.getLogger(__name__)
2427
2528
29 @attr.s(slots=True)
2630 class Edu(JsonEncodedObject):
2731 """ An Edu represents a piece of data sent from one homeserver to another.
2832
3135 internal ID or previous references graph.
3236 """
3337
34 valid_keys = ["origin", "destination", "edu_type", "content"]
38 edu_type = attr.ib(type=str)
39 content = attr.ib(type=dict)
40 origin = attr.ib(type=str)
41 destination = attr.ib(type=str)
3542
36 required_keys = ["edu_type"]
43 def get_dict(self) -> JsonDict:
44 return {
45 "edu_type": self.edu_type,
46 "content": self.content,
47 }
3748
38 internal_keys = ["origin", "destination"]
49 def get_internal_dict(self) -> JsonDict:
50 return {
51 "edu_type": self.edu_type,
52 "content": self.content,
53 "origin": self.origin,
54 "destination": self.destination,
55 }
3956
4057 def get_context(self):
4158 return getattr(self, "content", {}).get("org.matrix.opentracing_context", "{}")
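The hunk above replaces the key-list based `Edu` with an attrs class plus explicit dict helpers. A simplified, standalone sketch of how the new shape behaves (placeholder values; the real class still derives from `JsonEncodedObject`):

```python
import attr

@attr.s(slots=True)
class EduSketch:
    # Simplified stand-in for the attrs-based Edu added above.
    edu_type = attr.ib(type=str)
    content = attr.ib(type=dict)
    origin = attr.ib(type=str)
    destination = attr.ib(type=str)

    def get_dict(self) -> dict:
        # The wire form only carries the type and content.
        return {"edu_type": self.edu_type, "content": self.content}

    def get_internal_dict(self) -> dict:
        # The internal form also keeps the routing information.
        return {**self.get_dict(), "origin": self.origin, "destination": self.destination}

edu = EduSketch(
    edu_type="m.presence",
    content={"push": []},
    origin="hs1.example.org",
    destination="hs2.example.org",
)
print(edu.get_dict())
print(edu.get_internal_dict())
```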
3535 # TODO: Flairs
3636
3737
38 class GroupsServerHandler(object):
38 class GroupsServerWorkerHandler(object):
3939 def __init__(self, hs):
4040 self.hs = hs
4141 self.store = hs.get_datastore()
4949 self.attestations = hs.get_groups_attestation_signing()
5050 self.transport_client = hs.get_federation_transport_client()
5151 self.profile_handler = hs.get_profile_handler()
52
53 # Ensure attestations get renewed
54 hs.get_groups_attestation_renewer()
5552
5653 @defer.inlineCallbacks
5754 def check_group_is_ours(
167164 }
168165
169166 @defer.inlineCallbacks
170 def update_group_summary_room(
171 self, group_id, requester_user_id, room_id, category_id, content
172 ):
173 """Add/update a room to the group summary
174 """
175 yield self.check_group_is_ours(
176 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
177 )
178
179 RoomID.from_string(room_id) # Ensure valid room id
180
181 order = content.get("order", None)
182
183 is_public = _parse_visibility_from_contents(content)
184
185 yield self.store.add_room_to_summary(
186 group_id=group_id,
187 room_id=room_id,
188 category_id=category_id,
189 order=order,
190 is_public=is_public,
191 )
192
193 return {}
194
195 @defer.inlineCallbacks
196 def delete_group_summary_room(
197 self, group_id, requester_user_id, room_id, category_id
198 ):
199 """Remove a room from the summary
200 """
201 yield self.check_group_is_ours(
202 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
203 )
204
205 yield self.store.remove_room_from_summary(
206 group_id=group_id, room_id=room_id, category_id=category_id
207 )
208
209 return {}
210
211 @defer.inlineCallbacks
212 def set_group_join_policy(self, group_id, requester_user_id, content):
213 """Sets the group join policy.
214
215 Currently supported policies are:
216 - "invite": an invite must be received and accepted in order to join.
217 - "open": anyone can join.
218 """
219 yield self.check_group_is_ours(
220 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
221 )
222
223 join_policy = _parse_join_policy_from_contents(content)
224 if join_policy is None:
225 raise SynapseError(400, "No value specified for 'm.join_policy'")
226
227 yield self.store.set_group_join_policy(group_id, join_policy=join_policy)
228
229 return {}
230
231 @defer.inlineCallbacks
232167 def get_group_categories(self, group_id, requester_user_id):
233168 """Get all categories in a group (as seen by user)
234169 """
247182 group_id=group_id, category_id=category_id
248183 )
249184
185 logger.info("group %s", res)
186
250187 return res
251
252 @defer.inlineCallbacks
253 def update_group_category(self, group_id, requester_user_id, category_id, content):
254 """Add/Update a group category
255 """
256 yield self.check_group_is_ours(
257 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
258 )
259
260 is_public = _parse_visibility_from_contents(content)
261 profile = content.get("profile")
262
263 yield self.store.upsert_group_category(
264 group_id=group_id,
265 category_id=category_id,
266 is_public=is_public,
267 profile=profile,
268 )
269
270 return {}
271
272 @defer.inlineCallbacks
273 def delete_group_category(self, group_id, requester_user_id, category_id):
274 """Delete a group category
275 """
276 yield self.check_group_is_ours(
277 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
278 )
279
280 yield self.store.remove_group_category(
281 group_id=group_id, category_id=category_id
282 )
283
284 return {}
285188
286189 @defer.inlineCallbacks
287190 def get_group_roles(self, group_id, requester_user_id):
300203
301204 res = yield self.store.get_group_role(group_id=group_id, role_id=role_id)
302205 return res
303
304 @defer.inlineCallbacks
305 def update_group_role(self, group_id, requester_user_id, role_id, content):
306 """Add/update a role in a group
307 """
308 yield self.check_group_is_ours(
309 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
310 )
311
312 is_public = _parse_visibility_from_contents(content)
313
314 profile = content.get("profile")
315
316 yield self.store.upsert_group_role(
317 group_id=group_id, role_id=role_id, is_public=is_public, profile=profile
318 )
319
320 return {}
321
322 @defer.inlineCallbacks
323 def delete_group_role(self, group_id, requester_user_id, role_id):
324 """Remove role from group
325 """
326 yield self.check_group_is_ours(
327 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
328 )
329
330 yield self.store.remove_group_role(group_id=group_id, role_id=role_id)
331
332 return {}
333
334 @defer.inlineCallbacks
335 def update_group_summary_user(
336 self, group_id, requester_user_id, user_id, role_id, content
337 ):
338 """Add/update a user's entry in the group summary
339 """
340 yield self.check_group_is_ours(
341 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
342 )
343
344 order = content.get("order", None)
345
346 is_public = _parse_visibility_from_contents(content)
347
348 yield self.store.add_user_to_summary(
349 group_id=group_id,
350 user_id=user_id,
351 role_id=role_id,
352 order=order,
353 is_public=is_public,
354 )
355
356 return {}
357
358 @defer.inlineCallbacks
359 def delete_group_summary_user(self, group_id, requester_user_id, user_id, role_id):
360 """Remove a user from the group summary
361 """
362 yield self.check_group_is_ours(
363 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
364 )
365
366 yield self.store.remove_user_from_summary(
367 group_id=group_id, user_id=user_id, role_id=role_id
368 )
369
370 return {}
371206
372207 @defer.inlineCallbacks
373208 def get_group_profile(self, group_id, requester_user_id):
394229 raise SynapseError(404, "Unknown group")
395230
396231 @defer.inlineCallbacks
397 def update_group_profile(self, group_id, requester_user_id, content):
398 """Update the group profile
399 """
400 yield self.check_group_is_ours(
401 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
402 )
403
404 profile = {}
405 for keyname in ("name", "avatar_url", "short_description", "long_description"):
406 if keyname in content:
407 value = content[keyname]
408 if not isinstance(value, string_types):
409 raise SynapseError(400, "%r value is not a string" % (keyname,))
410 profile[keyname] = value
411
412 yield self.store.update_group_profile(group_id, profile)
413
414 @defer.inlineCallbacks
415232 def get_users_in_group(self, group_id, requester_user_id):
416233 """Get the users in group as seen by requester_user_id.
417234
528345 chunk.sort(key=lambda e: -e["num_joined_members"])
529346
530347 return {"chunk": chunk, "total_room_count_estimate": len(room_results)}
348
349
350 class GroupsServerHandler(GroupsServerWorkerHandler):
351 def __init__(self, hs):
352 super(GroupsServerHandler, self).__init__(hs)
353
354 # Ensure attestations get renewed
355 hs.get_groups_attestation_renewer()
356
357 @defer.inlineCallbacks
358 def update_group_summary_room(
359 self, group_id, requester_user_id, room_id, category_id, content
360 ):
361 """Add/update a room to the group summary
362 """
363 yield self.check_group_is_ours(
364 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
365 )
366
367 RoomID.from_string(room_id) # Ensure valid room id
368
369 order = content.get("order", None)
370
371 is_public = _parse_visibility_from_contents(content)
372
373 yield self.store.add_room_to_summary(
374 group_id=group_id,
375 room_id=room_id,
376 category_id=category_id,
377 order=order,
378 is_public=is_public,
379 )
380
381 return {}
382
383 @defer.inlineCallbacks
384 def delete_group_summary_room(
385 self, group_id, requester_user_id, room_id, category_id
386 ):
387 """Remove a room from the summary
388 """
389 yield self.check_group_is_ours(
390 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
391 )
392
393 yield self.store.remove_room_from_summary(
394 group_id=group_id, room_id=room_id, category_id=category_id
395 )
396
397 return {}
398
399 @defer.inlineCallbacks
400 def set_group_join_policy(self, group_id, requester_user_id, content):
401 """Sets the group join policy.
402
403 Currently supported policies are:
404 - "invite": an invite must be received and accepted in order to join.
405 - "open": anyone can join.
406 """
407 yield self.check_group_is_ours(
408 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
409 )
410
411 join_policy = _parse_join_policy_from_contents(content)
412 if join_policy is None:
413 raise SynapseError(400, "No value specified for 'm.join_policy'")
414
415 yield self.store.set_group_join_policy(group_id, join_policy=join_policy)
416
417 return {}
418
419 @defer.inlineCallbacks
420 def update_group_category(self, group_id, requester_user_id, category_id, content):
421 """Add/Update a group category
422 """
423 yield self.check_group_is_ours(
424 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
425 )
426
427 is_public = _parse_visibility_from_contents(content)
428 profile = content.get("profile")
429
430 yield self.store.upsert_group_category(
431 group_id=group_id,
432 category_id=category_id,
433 is_public=is_public,
434 profile=profile,
435 )
436
437 return {}
438
439 @defer.inlineCallbacks
440 def delete_group_category(self, group_id, requester_user_id, category_id):
441 """Delete a group category
442 """
443 yield self.check_group_is_ours(
444 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
445 )
446
447 yield self.store.remove_group_category(
448 group_id=group_id, category_id=category_id
449 )
450
451 return {}
452
453 @defer.inlineCallbacks
454 def update_group_role(self, group_id, requester_user_id, role_id, content):
455 """Add/update a role in a group
456 """
457 yield self.check_group_is_ours(
458 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
459 )
460
461 is_public = _parse_visibility_from_contents(content)
462
463 profile = content.get("profile")
464
465 yield self.store.upsert_group_role(
466 group_id=group_id, role_id=role_id, is_public=is_public, profile=profile
467 )
468
469 return {}
470
471 @defer.inlineCallbacks
472 def delete_group_role(self, group_id, requester_user_id, role_id):
473 """Remove role from group
474 """
475 yield self.check_group_is_ours(
476 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
477 )
478
479 yield self.store.remove_group_role(group_id=group_id, role_id=role_id)
480
481 return {}
482
483 @defer.inlineCallbacks
484 def update_group_summary_user(
485 self, group_id, requester_user_id, user_id, role_id, content
486 ):
487 """Add/update a user's entry in the group summary
488 """
489 yield self.check_group_is_ours(
490 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
491 )
492
493 order = content.get("order", None)
494
495 is_public = _parse_visibility_from_contents(content)
496
497 yield self.store.add_user_to_summary(
498 group_id=group_id,
499 user_id=user_id,
500 role_id=role_id,
501 order=order,
502 is_public=is_public,
503 )
504
505 return {}
506
507 @defer.inlineCallbacks
508 def delete_group_summary_user(self, group_id, requester_user_id, user_id, role_id):
509 """Remove a user from the group summary
510 """
511 yield self.check_group_is_ours(
512 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
513 )
514
515 yield self.store.remove_user_from_summary(
516 group_id=group_id, user_id=user_id, role_id=role_id
517 )
518
519 return {}
520
521 @defer.inlineCallbacks
522 def update_group_profile(self, group_id, requester_user_id, content):
523 """Update the group profile
524 """
525 yield self.check_group_is_ours(
526 group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
527 )
528
529 profile = {}
530 for keyname in ("name", "avatar_url", "short_description", "long_description"):
531 if keyname in content:
532 value = content[keyname]
533 if not isinstance(value, string_types):
534 raise SynapseError(400, "%r value is not a string" % (keyname,))
535 profile[keyname] = value
536
537 yield self.store.update_group_profile(group_id, profile)
531538
532539 @defer.inlineCallbacks
533540 def add_room_to_group(self, group_id, requester_user_id, room_id, content):
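The restructuring above splits the handler in two: the read-only group methods move into `GroupsServerWorkerHandler`, while the mutating methods stay on `GroupsServerHandler`, which subclasses it (and is the only one that renews attestations). A minimal sketch of that worker/master split pattern (toy in-memory store, hypothetical method names):

```python
class GroupsWorkerSketch:
    """Read-only operations: safe to serve from a worker process."""

    def __init__(self, store):
        self.store = store

    def get_group_profile(self, group_id):
        return self.store.get(group_id, {})


class GroupsMasterSketch(GroupsWorkerSketch):
    """Adds the mutating operations, which remain on the main process."""

    def update_group_profile(self, group_id, profile):
        self.store[group_id] = profile


store = {}
master = GroupsMasterSketch(store)
master.update_group_profile("+example:example.org", {"name": "Example group"})
worker = GroupsWorkerSketch(store)  # sees the same data, exposes only reads
print(worker.get_group_profile("+example:example.org"))
```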
2323 from synapse.app import check_bind_error
2424
2525 logger = logging.getLogger(__name__)
26
27 ACME_REGISTER_FAIL_ERROR = """
28 --------------------------------------------------------------------------------
29 Failed to register with the ACME provider. This is likely happening because the installation
30 is new, and ACME v1 has been deprecated by Let's Encrypt and disabled for
31 new installations since November 2019.
32 At the moment, Synapse doesn't support ACME v2. For more information and alternative
33 solutions, please read https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
34 --------------------------------------------------------------------------------"""
2635
2736
2837 class AcmeHandler(object):
7079 # want it to control where we save the certificates, we have to reach in
7180 # and trigger the registration machinery ourselves.
7281 self._issuer._registered = False
73 yield self._issuer._ensure_registered()
82
83 try:
84 yield self._issuer._ensure_registered()
85 except Exception:
86 logger.error(ACME_REGISTER_FAIL_ERROR)
87 raise
7488
7589 @defer.inlineCallbacks
7690 def provision_certificate(self):
5757 ret = await self.store.get_user_by_id(user.to_string())
5858 if ret:
5959 profile = await self.store.get_profileinfo(user.localpart)
60 threepids = await self.store.user_get_threepids(user.to_string())
6061 ret["displayname"] = profile.display_name
6162 ret["avatar_url"] = profile.avatar_url
63 ret["threepids"] = threepids
6264 return ret
6365
6466 async def export_user_data(self, user_id, writer):
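With the change above, the admin handler's per-user lookup now includes the user's third-party identifiers alongside the profile fields. An illustrative (not authoritative) shape of the resulting dict; the exact keys inside each threepid entry depend on what `user_get_threepids` returns:

```python
# Illustrative only -- the field names inside "threepids" are assumptions.
example_admin_user_info = {
    "name": "@alice:example.org",
    "displayname": "Alice",
    "avatar_url": "mxc://example.org/abc123",
    "threepids": [
        {"medium": "email", "address": "alice@example.org"},
    ],
}
print(example_admin_user_info["threepids"])
```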
815815
816816 @defer.inlineCallbacks
817817 def add_threepid(self, user_id, medium, address, validated_at):
818 # check if medium has a valid value
819 if medium not in ["email", "msisdn"]:
820 raise SynapseError(
821 code=400,
822 msg=("'%s' is not a valid value for 'medium'" % (medium,)),
823 errcode=Codes.INVALID_PARAM,
824 )
825
818826 # 'Canonicalise' email addresses down to lower case.
819827 # We're now moving towards the homeserver being the entity that
820828 # is responsible for validating threepids used for resetting passwords
2525 FederationDeniedError,
2626 HttpResponseException,
2727 RequestSendFailed,
28 SynapseError,
2829 )
2930 from synapse.logging.opentracing import log_kv, set_tag, trace
3031 from synapse.types import RoomStreamToken, get_domain_from_id
3738 from ._base import BaseHandler
3839
3940 logger = logging.getLogger(__name__)
41
42 MAX_DEVICE_DISPLAY_NAME_LEN = 100
4043
4144
4245 class DeviceWorkerHandler(BaseHandler):
224227
225228 return result
226229
230 @defer.inlineCallbacks
231 def on_federation_query_user_devices(self, user_id):
232 stream_id, devices = yield self.store.get_devices_with_keys_by_user(user_id)
233 master_key = yield self.store.get_e2e_cross_signing_key(user_id, "master")
234 self_signing_key = yield self.store.get_e2e_cross_signing_key(
235 user_id, "self_signing"
236 )
237
238 return {
239 "user_id": user_id,
240 "stream_id": stream_id,
241 "devices": devices,
242 "master_key": master_key,
243 "self_signing_key": self_signing_key,
244 }
245
227246
228247 class DeviceHandler(DeviceWorkerHandler):
229248 def __init__(self, hs):
237256
238257 federation_registry.register_edu_handler(
239258 "m.device_list_update", self.device_list_updater.incoming_device_list_update
240 )
241 federation_registry.register_query_handler(
242 "user_devices", self.on_federation_query_user_devices
243259 )
244260
245261 hs.get_distributor().observe("user_left_room", self.user_left_room)
390406 defer.Deferred:
391407 """
392408
409 # Reject a new displayname which is too long.
410 new_display_name = content.get("display_name")
411 if new_display_name and len(new_display_name) > MAX_DEVICE_DISPLAY_NAME_LEN:
412 raise SynapseError(
413 400,
414 "Device display name is too long (max %i)"
415 % (MAX_DEVICE_DISPLAY_NAME_LEN,),
416 )
417
393418 try:
394419 yield self.store.update_device(
395 user_id, device_id, new_display_name=content.get("display_name")
420 user_id, device_id, new_display_name=new_display_name
396421 )
397422 yield self.notify_device_update(user_id, [device_id])
398423 except errors.StoreError as e:
454479 )
455480
456481 self.notifier.on_new_event("device_list_key", position, users=[from_user_id])
457
458 @defer.inlineCallbacks
459 def on_federation_query_user_devices(self, user_id):
460 stream_id, devices = yield self.store.get_devices_with_keys_by_user(user_id)
461 master_key = yield self.store.get_e2e_cross_signing_key(user_id, "master")
462 self_signing_key = yield self.store.get_e2e_cross_signing_key(
463 user_id, "self_signing"
464 )
465
466 return {
467 "user_id": user_id,
468 "stream_id": stream_id,
469 "devices": devices,
470 "master_key": master_key,
471 "self_signing_key": self_signing_key,
472 }
473482
474483 @defer.inlineCallbacks
475484 def user_left_room(self, user, room_id):
1515
1616 import logging
1717 import string
18 from typing import List
1819
1920 from twisted.internet import defer
2021
2728 StoreError,
2829 SynapseError,
2930 )
30 from synapse.types import RoomAlias, UserID, get_domain_from_id
31 from synapse.types import Requester, RoomAlias, UserID, get_domain_from_id
3132
3233 from ._base import BaseHandler
3334
8081
8182 @defer.inlineCallbacks
8283 def create_association(
83 self,
84 requester,
85 room_alias,
86 room_id,
87 servers=None,
88 send_event=True,
89 check_membership=True,
84 self, requester, room_alias, room_id, servers=None, check_membership=True,
9085 ):
9186 """Attempt to create a new alias
9287
9691 room_id (str)
9792 servers (list[str]|None): List of servers that other servers
9893 should try and join via
99 send_event (bool): Whether to send an updated m.room.aliases event
10094 check_membership (bool): Whether to check if the user is in the room
10195 before the alias can be set (if the server's config requires it).
10296
149143 )
150144
151145 yield self._create_association(room_alias, room_id, servers, creator=user_id)
152 if send_event:
153 try:
154 yield self.send_room_alias_update_event(requester, room_id)
155 except AuthError as e:
156 # sending the aliases event may fail due to the user not having
157 # permission in the room; this is permitted.
158 logger.info("Skipping updating aliases event due to auth error %s", e)
159
160 @defer.inlineCallbacks
161 def delete_association(self, requester, room_alias, send_event=True):
146
147 @defer.inlineCallbacks
148 def delete_association(self, requester, room_alias):
162149 """Remove an alias from the directory
163150
164151 (this is only meant for human users; AS users should call
167154 Args:
168155 requester (Requester):
169156 room_alias (RoomAlias):
170 send_event (bool): Whether to send an updated m.room.aliases event.
171 Note that, if we delete the canonical alias, we will always attempt
172 to send an m.room.canonical_alias event
173157
174158 Returns:
175159 Deferred[unicode]: room id that the alias used to point to
205189 room_id = yield self._delete_association(room_alias)
206190
207191 try:
208 if send_event:
209 yield self.send_room_alias_update_event(requester, room_id)
210
211192 yield self._update_canonical_alias(
212193 requester, requester.user.to_string(), room_id, room_alias
213194 )
318299
319300 @defer.inlineCallbacks
320301 def _update_canonical_alias(self, requester, user_id, room_id, room_alias):
302 """
303 Send an updated canonical alias event if the removed alias was set as
304 the canonical alias or listed in the alt_aliases field.
305 """
321306 alias_event = yield self.state.get_current_state(
322307 room_id, EventTypes.CanonicalAlias, ""
323308 )
324309
310 # There is no canonical alias, nothing to do.
311 if not alias_event:
312 return
313
314 # Obtain a mutable version of the event content.
315 content = dict(alias_event.content)
316 send_update = False
317
318 # Remove the alias property if it matches the removed alias.
325319 alias_str = room_alias.to_string()
326 if not alias_event or alias_event.content.get("alias", "") != alias_str:
327 return
328
329 yield self.event_creation_handler.create_and_send_nonmember_event(
330 requester,
331 {
332 "type": EventTypes.CanonicalAlias,
333 "state_key": "",
334 "room_id": room_id,
335 "sender": user_id,
336 "content": {},
337 },
338 ratelimit=False,
339 )
320 if alias_event.content.get("alias", "") == alias_str:
321 send_update = True
322 content.pop("alias", "")
323
324 # Filter alt_aliases for the removed alias.
325 alt_aliases = content.pop("alt_aliases", None)
326 # If the aliases are not a list (or not found) do not attempt to modify
327 # the list.
328 if isinstance(alt_aliases, list):
329 send_update = True
330 alt_aliases = [alias for alias in alt_aliases if alias != alias_str]
331 if alt_aliases:
332 content["alt_aliases"] = alt_aliases
333
334 if send_update:
335 yield self.event_creation_handler.create_and_send_nonmember_event(
336 requester,
337 {
338 "type": EventTypes.CanonicalAlias,
339 "state_key": "",
340 "room_id": room_id,
341 "sender": user_id,
342 "content": content,
343 },
344 ratelimit=False,
345 )
340346
341347 @defer.inlineCallbacks
342348 def get_association_from_room_alias(self, room_alias):
446452 yield self.store.set_room_is_public_appservice(
447453 room_id, appservice_id, network_id, visibility == "public"
448454 )
455
456 async def get_aliases_for_room(
457 self, requester: Requester, room_id: str
458 ) -> List[str]:
459 """
460 Get a list of the aliases that currently point to this room on this server
461 """
462 # allow access to server admins and current members of the room
463 is_admin = await self.auth.is_server_admin(requester.user)
464 if not is_admin:
465 await self.auth.check_user_in_room_or_world_readable(
466 room_id, requester.user.to_string()
467 )
468
469 aliases = await self.store.get_aliases_for_room(room_id)
470 return aliases
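The rewritten `_update_canonical_alias` above no longer blanks the whole `m.room.canonical_alias` content; it removes the deleted alias from `alias`, filters it out of `alt_aliases`, and sends an update only if something changed. A standalone sketch of the content rewrite (example aliases only):

```python
def strip_removed_alias(content: dict, removed: str) -> dict:
    # Mirrors the logic above: drop `alias` if it names the removed alias,
    # and filter the removed alias out of `alt_aliases` when it is a list.
    content = dict(content)
    if content.get("alias", "") == removed:
        content.pop("alias", "")
    alt_aliases = content.pop("alt_aliases", None)
    if isinstance(alt_aliases, list):
        alt_aliases = [a for a in alt_aliases if a != removed]
        if alt_aliases:
            content["alt_aliases"] = alt_aliases
    return content

before = {
    "alias": "#main:example.org",
    "alt_aliases": ["#main:example.org", "#general:example.org"],
}
print(strip_removed_alias(before, "#main:example.org"))
# -> {'alt_aliases': ['#general:example.org']}
```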
6464 from synapse.replication.http.membership import ReplicationUserJoinedLeftRoomRestServlet
6565 from synapse.state import StateResolutionStore, resolve_events_with_store
6666 from synapse.storage.data_stores.main.events_worker import EventRedactBehaviour
67 from synapse.types import StateMap, UserID, get_domain_from_id
67 from synapse.types import JsonDict, StateMap, UserID, get_domain_from_id
6868 from synapse.util.async_helpers import Linearizer, concurrently_execute
6969 from synapse.util.distributor import user_joined_room
7070 from synapse.util.retryutils import NotRetryingDestination
11551155 Logs a warning if we can't find the given event.
11561156 """
11571157
1158 room_version = await self.store.get_room_version_id(room_id)
1158 room_version = await self.store.get_room_version(room_id)
11591159
11601160 event_infos = []
11611161
12291229 )
12301230 raise SynapseError(http_client.BAD_REQUEST, "Too many auth_events")
12311231
1232 @defer.inlineCallbacks
1233 def send_invite(self, target_host, event):
1232 async def send_invite(self, target_host, event):
12341233 """ Sends the invite to the remote server for signing.
12351234
12361235 Invites must be signed by the invitee's server before distribution.
12371236 """
1238 pdu = yield self.federation_client.send_invite(
1237 pdu = await self.federation_client.send_invite(
12391238 destination=target_host,
12401239 room_id=event.room_id,
12411240 event_id=event.event_id,
12441243
12451244 return pdu
12461245
1247 @defer.inlineCallbacks
1248 def on_event_auth(self, event_id):
1249 event = yield self.store.get_event(event_id)
1250 auth = yield self.store.get_auth_chain(
1246 async def on_event_auth(self, event_id: str) -> List[EventBase]:
1247 event = await self.store.get_event(event_id)
1248 auth = await self.store.get_auth_chain(
12511249 [auth_id for auth_id in event.auth_event_ids()], include_given=True
12521250 )
1253 return [e for e in auth]
1254
1255 @log_function
1256 @defer.inlineCallbacks
1257 def do_invite_join(self, target_hosts, room_id, joinee, content):
1251 return list(auth)
1252
1253 async def do_invite_join(
1254 self, target_hosts: Iterable[str], room_id: str, joinee: str, content: JsonDict
1255 ) -> None:
12581256 """ Attempts to join the `joinee` to the room `room_id` via the
12591257 servers contained in `target_hosts`.
12601258
12671265 have finished processing the join.
12681266
12691267 Args:
1270 target_hosts (Iterable[str]): List of servers to attempt to join the room with.
1271
1272 room_id (str): The ID of the room to join.
1273
1274 joinee (str): The User ID of the joining user.
1275
1276 content (dict): The event content to use for the join event.
1268 target_hosts: List of servers to attempt to join the room with.
1269
1270 room_id: The ID of the room to join.
1271
1272 joinee: The User ID of the joining user.
1273
1274 content: The event content to use for the join event.
12771275 """
12781276 logger.debug("Joining %s to %s", joinee, room_id)
12791277
1280 origin, event, room_version_obj = yield self._make_and_verify_event(
1278 origin, event, room_version_obj = await self._make_and_verify_event(
12811279 target_hosts,
12821280 room_id,
12831281 joinee,
12931291
12941292 self.room_queues[room_id] = []
12951293
1296 yield self._clean_room_for_join(room_id)
1294 await self._clean_room_for_join(room_id)
12971295
12981296 handled_events = set()
12991297
13061304 except ValueError:
13071305 pass
13081306
1309 event_format_version = room_version_obj.event_format
1310 ret = yield self.federation_client.send_join(
1311 target_hosts, event, event_format_version
1307 ret = await self.federation_client.send_join(
1308 target_hosts, event, room_version_obj
13121309 )
13131310
13141311 origin = ret["origin"]
13261323 logger.debug("do_invite_join event: %s", event)
13271324
13281325 try:
1329 yield self.store.store_room(
1326 await self.store.store_room(
13301327 room_id=room_id,
13311328 room_creator_user_id="",
13321329 is_public=False,
13361333 # FIXME
13371334 pass
13381335
1339 yield self._persist_auth_tree(
1336 await self._persist_auth_tree(
13401337 origin, auth_chain, state, event, room_version_obj
13411338 )
13421339
13431340 # Check whether this room is the result of an upgrade of a room we already know
13441341 # about. If so, migrate over user information
1345 predecessor = yield self.store.get_room_predecessor(room_id)
1342 predecessor = await self.store.get_room_predecessor(room_id)
13461343 if not predecessor or not isinstance(predecessor.get("room_id"), str):
13471344 return
13481345 old_room_id = predecessor["room_id"]
13521349
13531350 # We retrieve the room member handler here as to not cause a cyclic dependency
13541351 member_handler = self.hs.get_room_member_handler()
1355 yield member_handler.transfer_room_state_on_room_upgrade(
1352 await member_handler.transfer_room_state_on_room_upgrade(
13561353 old_room_id, room_id
13571354 )
13581355
13681365 # have. Hence we fire off the deferred, but don't wait for it.
13691366
13701367 run_in_background(self._handle_queued_pdus, room_queue)
1371
1372 return True
13731368
13741369 async def _handle_queued_pdus(self, room_queue):
13751370 """Process PDUs which got queued up while we were busy send_joining.
13931388 "Error handling queued PDU %s from %s: %s", p.event_id, origin, e
13941389 )
13951390
1396 @defer.inlineCallbacks
1397 @log_function
1398 def on_make_join_request(self, origin, room_id, user_id):
1391 async def on_make_join_request(
1392 self, origin: str, room_id: str, user_id: str
1393 ) -> EventBase:
13991394 """ We've received a /make_join/ request, so we create a partial
14001395 join event for the room and return that. We do *not* persist or
14011396 process it until the other server has signed it and sent it back.
14021397
14031398 Args:
1404 origin (str): The (verified) server name of the requesting server.
1405 room_id (str): Room to create join event in
1406 user_id (str): The user to create the join for
1407
1408 Returns:
1409 Deferred[FrozenEvent]
1399 origin: The (verified) server name of the requesting server.
1400 room_id: Room to create join event in
1401 user_id: The user to create the join for
14101402 """
14111403 if get_domain_from_id(user_id) != origin:
14121404 logger.info(
14181410
14191411 event_content = {"membership": Membership.JOIN}
14201412
1421 room_version = yield self.store.get_room_version_id(room_id)
1413 room_version = await self.store.get_room_version_id(room_id)
14221414
14231415 builder = self.event_builder_factory.new(
14241416 room_version,
14321424 )
14331425
14341426 try:
1435 event, context = yield self.event_creation_handler.create_new_client_event(
1427 event, context = await self.event_creation_handler.create_new_client_event(
14361428 builder=builder
14371429 )
14381430 except AuthError as e:
14391431 logger.warning("Failed to create join to %s because %s", room_id, e)
14401432 raise e
14411433
1442 event_allowed = yield self.third_party_event_rules.check_event_allowed(
1434 event_allowed = await self.third_party_event_rules.check_event_allowed(
14431435 event, context
14441436 )
14451437 if not event_allowed:
14501442
14511443 # The remote hasn't signed it yet, obviously. We'll do the full checks
14521444 # when we get the event back in `on_send_join_request`
1453 yield self.auth.check_from_context(
1445 await self.auth.check_from_context(
14541446 room_version, event, context, do_sig_check=False
14551447 )
14561448
14571449 return event
14581450
1459 @defer.inlineCallbacks
1460 @log_function
1461 def on_send_join_request(self, origin, pdu):
1451 async def on_send_join_request(self, origin, pdu):
14621452 """ We have received a join event for a room. Fully process it and
14631453 respond with the current state and auth chains.
14641454 """
14951485 # would introduce the danger of backwards-compatibility problems.
14961486 event.internal_metadata.send_on_behalf_of = origin
14971487
1498 context = yield self._handle_new_event(origin, event)
1499
1500 event_allowed = yield self.third_party_event_rules.check_event_allowed(
1488 context = await self._handle_new_event(origin, event)
1489
1490 event_allowed = await self.third_party_event_rules.check_event_allowed(
15011491 event, context
15021492 )
15031493 if not event_allowed:
15151505 if event.type == EventTypes.Member:
15161506 if event.content["membership"] == Membership.JOIN:
15171507 user = UserID.from_string(event.state_key)
1518 yield self.user_joined_room(user, event.room_id)
1519
1520 prev_state_ids = yield context.get_prev_state_ids()
1508 await self.user_joined_room(user, event.room_id)
1509
1510 prev_state_ids = await context.get_prev_state_ids()
15211511
15221512 state_ids = list(prev_state_ids.values())
1523 auth_chain = yield self.store.get_auth_chain(state_ids)
1524
1525 state = yield self.store.get_events(list(prev_state_ids.values()))
1513 auth_chain = await self.store.get_auth_chain(state_ids)
1514
1515 state = await self.store.get_events(list(prev_state_ids.values()))
15261516
15271517 return {"state": list(state.values()), "auth_chain": auth_chain}
15281518
1529 @defer.inlineCallbacks
1530 def on_invite_request(
1519 async def on_invite_request(
15311520 self, origin: str, event: EventBase, room_version: RoomVersion
15321521 ):
15331522 """ We've got an invite event. Process and persist it. Sign it.
15371526 if event.state_key is None:
15381527 raise SynapseError(400, "The invite event did not have a state key")
15391528
1540 is_blocked = yield self.store.is_room_blocked(event.room_id)
1529 is_blocked = await self.store.is_room_blocked(event.room_id)
15411530 if is_blocked:
15421531 raise SynapseError(403, "This room has been blocked on this server")
15431532
15801569 )
15811570 )
15821571
1583 context = yield self.state_handler.compute_event_context(event)
1584 yield self.persist_events_and_notify([(event, context)])
1572 context = await self.state_handler.compute_event_context(event)
1573 await self.persist_events_and_notify([(event, context)])
15851574
15861575 return event
15871576
1588 @defer.inlineCallbacks
1589 def do_remotely_reject_invite(self, target_hosts, room_id, user_id, content):
1590 origin, event, room_version = yield self._make_and_verify_event(
1577 async def do_remotely_reject_invite(
1578 self, target_hosts: Iterable[str], room_id: str, user_id: str, content: JsonDict
1579 ) -> EventBase:
1580 origin, event, room_version = await self._make_and_verify_event(
15911581 target_hosts, room_id, user_id, "leave", content=content
15921582 )
15931583 # Mark as outlier as we don't have any state for this event; we're not
16031593 except ValueError:
16041594 pass
16051595
1606 yield self.federation_client.send_leave(target_hosts, event)
1607
1608 context = yield self.state_handler.compute_event_context(event)
1609 yield self.persist_events_and_notify([(event, context)])
1596 await self.federation_client.send_leave(target_hosts, event)
1597
1598 context = await self.state_handler.compute_event_context(event)
1599 await self.persist_events_and_notify([(event, context)])
16101600
16111601 return event
16121602
1613 @defer.inlineCallbacks
1614 def _make_and_verify_event(
1615 self, target_hosts, room_id, user_id, membership, content={}, params=None
1616 ):
1603 async def _make_and_verify_event(
1604 self,
1605 target_hosts: Iterable[str],
1606 room_id: str,
1607 user_id: str,
1608 membership: str,
1609 content: JsonDict = {},
1610 params: Optional[Dict[str, str]] = None,
1611 ) -> Tuple[str, EventBase, RoomVersion]:
16171612 (
16181613 origin,
16191614 event,
16201615 room_version,
1621 ) = yield self.federation_client.make_membership_event(
1616 ) = await self.federation_client.make_membership_event(
16221617 target_hosts, room_id, user_id, membership, content, params=params
16231618 )
16241619
16321627 assert event.room_id == room_id
16331628 return origin, event, room_version
16341629
1635 @defer.inlineCallbacks
1636 @log_function
1637 def on_make_leave_request(self, origin, room_id, user_id):
1630 async def on_make_leave_request(
1631 self, origin: str, room_id: str, user_id: str
1632 ) -> EventBase:
16381633 """ We've received a /make_leave/ request, so we create a partial
16391634 leave event for the room and return that. We do *not* persist or
16401635 process it until the other server has signed it and sent it back.
16411636
16421637 Args:
1643 origin (str): The (verified) server name of the requesting server.
1644 room_id (str): Room to create leave event in
1645 user_id (str): The user to create the leave for
1646
1647 Returns:
1648 Deferred[FrozenEvent]
1638 origin: The (verified) server name of the requesting server.
1639 room_id: Room to create leave event in
1640 user_id: The user to create the leave for
16491641 """
16501642 if get_domain_from_id(user_id) != origin:
16511643 logger.info(
16551647 )
16561648 raise SynapseError(403, "User not from origin", Codes.FORBIDDEN)
16571649
1658 room_version = yield self.store.get_room_version_id(room_id)
1650 room_version = await self.store.get_room_version_id(room_id)
16591651 builder = self.event_builder_factory.new(
16601652 room_version,
16611653 {
16671659 },
16681660 )
16691661
1670 event, context = yield self.event_creation_handler.create_new_client_event(
1662 event, context = await self.event_creation_handler.create_new_client_event(
16711663 builder=builder
16721664 )
16731665
1674 event_allowed = yield self.third_party_event_rules.check_event_allowed(
1666 event_allowed = await self.third_party_event_rules.check_event_allowed(
16751667 event, context
16761668 )
16771669 if not event_allowed:
16831675 try:
16841676 # The remote hasn't signed it yet, obviously. We'll do the full checks
16851677 # when we get the event back in `on_send_leave_request`
1686 yield self.auth.check_from_context(
1678 await self.auth.check_from_context(
16871679 room_version, event, context, do_sig_check=False
16881680 )
16891681 except AuthError as e:
16921684
16931685 return event
16941686
1695 @defer.inlineCallbacks
1696 @log_function
1697 def on_send_leave_request(self, origin, pdu):
1687 async def on_send_leave_request(self, origin, pdu):
16981688 """ We have received a leave event for a room. Fully process it."""
16991689 event = pdu
17001690
17141704
17151705 event.internal_metadata.outlier = False
17161706
1717 context = yield self._handle_new_event(origin, event)
1718
1719 event_allowed = yield self.third_party_event_rules.check_event_allowed(
1707 context = await self._handle_new_event(origin, event)
1708
1709 event_allowed = await self.third_party_event_rules.check_event_allowed(
17201710 event, context
17211711 )
17221712 if not event_allowed:
17971787 if not in_room:
17981788 raise AuthError(403, "Host not in room.")
17991789
1790 # Synapse asks for 100 events per backfill request. Do not allow more.
1791 limit = min(limit, 100)
1792
18001793 events = yield self.store.get_backfill_events(room_id, pdu_list, limit)
18011794
18021795 events = yield filter_events_for_server(self.storage, origin, events)
18381831 def get_min_depth_for_context(self, context):
18391832 return self.store.get_min_depth(context)
18401833
1841 @defer.inlineCallbacks
1842 def _handle_new_event(
1834 async def _handle_new_event(
18431835 self, origin, event, state=None, auth_events=None, backfilled=False
18441836 ):
1845 context = yield self._prep_event(
1837 context = await self._prep_event(
18461838 origin, event, state=state, auth_events=auth_events, backfilled=backfilled
18471839 )
18481840
18551847 and not backfilled
18561848 and not context.rejected
18571849 ):
1858 yield self.action_generator.handle_push_actions_for_event(
1850 await self.action_generator.handle_push_actions_for_event(
18591851 event, context
18601852 )
18611853
1862 yield self.persist_events_and_notify(
1854 await self.persist_events_and_notify(
18631855 [(event, context)], backfilled=backfilled
18641856 )
18651857 success = True
18711863
18721864 return context
18731865
1874 @defer.inlineCallbacks
1875 def _handle_new_events(
1866 async def _handle_new_events(
18761867 self,
18771868 origin: str,
18781869 event_infos: Iterable[_NewEventInfo],
18791870 backfilled: bool = False,
1880 ):
1871 ) -> None:
18811872 """Creates the appropriate contexts and persists events. The events
18821873 should not depend on one another, e.g. this should be used to persist
18831874 a bunch of outliers, but not a chunk of individual events that depend
18861877 Notifies about the events where appropriate.
18871878 """
18881879
1889 @defer.inlineCallbacks
1890 def prep(ev_info: _NewEventInfo):
1880 async def prep(ev_info: _NewEventInfo):
18911881 event = ev_info.event
18921882 with nested_logging_context(suffix=event.event_id):
1893 res = yield self._prep_event(
1883 res = await self._prep_event(
18941884 origin,
18951885 event,
18961886 state=ev_info.state,
18991889 )
19001890 return res
19011891
1902 contexts = yield make_deferred_yieldable(
1892 contexts = await make_deferred_yieldable(
19031893 defer.gatherResults(
19041894 [run_in_background(prep, ev_info) for ev_info in event_infos],
19051895 consumeErrors=True,
19061896 )
19071897 )
19081898
1909 yield self.persist_events_and_notify(
1899 await self.persist_events_and_notify(
19101900 [
19111901 (ev_info.event, context)
19121902 for ev_info, context in zip(event_infos, contexts)
19141904 backfilled=backfilled,
19151905 )
19161906
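The block above prepares an event context for every event concurrently (using Twisted's run_in_background/gatherResults under nested logging contexts) and then persists the whole batch in one call. A rough asyncio analogue of the same pattern, purely for illustration and not the Synapse implementation:

    import asyncio

    async def handle_new_events(prep_event, persist_batch, events):
        # Compute a context for each event concurrently, then persist the
        # (event, context) pairs as a single batch, mirroring the flow above.
        contexts = await asyncio.gather(*(prep_event(ev) for ev in events))
        await persist_batch(list(zip(events, contexts)))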
1917 @defer.inlineCallbacks
1918 def _persist_auth_tree(
1907 async def _persist_auth_tree(
19191908 self,
19201909 origin: str,
19211910 auth_events: List[EventBase],
19221911 state: List[EventBase],
19231912 event: EventBase,
19241913 room_version: RoomVersion,
1925 ):
1914 ) -> None:
19261915 """Checks the auth chain is valid (and passes auth checks) for the
19271916 state and event. Then persists the auth chain and state atomically.
19281917 Persists the event separately. Notifies about the persisted events
19371926 event
19381927 room_version: The room version we expect this room to have, and
19391928 will raise if it doesn't match the version in the create event.
1940
1941 Returns:
1942 Deferred
19431929 """
19441930 events_to_context = {}
19451931 for e in itertools.chain(auth_events, state):
19461932 e.internal_metadata.outlier = True
1947 ctx = yield self.state_handler.compute_event_context(e)
1933 ctx = await self.state_handler.compute_event_context(e)
19481934 events_to_context[e.event_id] = ctx
19491935
19501936 event_map = {
19761962 missing_auth_events.add(e_id)
19771963
19781964 for e_id in missing_auth_events:
1979 m_ev = yield self.federation_client.get_pdu(
1980 [origin],
1981 e_id,
1982 room_version=room_version.identifier,
1983 outlier=True,
1984 timeout=10000,
1965 m_ev = await self.federation_client.get_pdu(
1966 [origin], e_id, room_version=room_version, outlier=True, timeout=10000,
19851967 )
19861968 if m_ev and m_ev.event_id == e_id:
19871969 event_map[e_id] = m_ev
20121994 raise
20131995 events_to_context[e.event_id].rejected = RejectedReason.AUTH_ERROR
20141996
2015 yield self.persist_events_and_notify(
1997 await self.persist_events_and_notify(
20161998 [
20171999 (e, events_to_context[e.event_id])
20182000 for e in itertools.chain(auth_events, state)
20192001 ]
20202002 )
20212003
2022 new_event_context = yield self.state_handler.compute_event_context(
2004 new_event_context = await self.state_handler.compute_event_context(
20232005 event, old_state=state
20242006 )
20252007
2026 yield self.persist_events_and_notify([(event, new_event_context)])
2027
2028 @defer.inlineCallbacks
2029 def _prep_event(
2008 await self.persist_events_and_notify([(event, new_event_context)])
2009
2010 async def _prep_event(
20302011 self,
20312012 origin: str,
20322013 event: EventBase,
20332014 state: Optional[Iterable[EventBase]],
20342015 auth_events: Optional[StateMap[EventBase]],
20352016 backfilled: bool,
2036 ):
2037 """
2038
2039 Args:
2040 origin:
2041 event:
2042 state:
2043 auth_events:
2044 backfilled:
2045
2046 Returns:
2047 Deferred, which resolves to synapse.events.snapshot.EventContext
2048 """
2049 context = yield self.state_handler.compute_event_context(event, old_state=state)
2017 ) -> EventContext:
2018 context = await self.state_handler.compute_event_context(event, old_state=state)
20502019
20512020 if not auth_events:
2052 prev_state_ids = yield context.get_prev_state_ids()
2053 auth_events_ids = yield self.auth.compute_auth_events(
2021 prev_state_ids = await context.get_prev_state_ids()
2022 auth_events_ids = await self.auth.compute_auth_events(
20542023 event, prev_state_ids, for_verification=True
20552024 )
2056 auth_events = yield self.store.get_events(auth_events_ids)
2025 auth_events = await self.store.get_events(auth_events_ids)
20572026 auth_events = {(e.type, e.state_key): e for e in auth_events.values()}
20582027
20592028 # This is a hack to fix some old rooms where the initial join event
20602029 # didn't reference the create event in its auth events.
20612030 if event.type == EventTypes.Member and not event.auth_event_ids():
20622031 if len(event.prev_event_ids()) == 1 and event.depth < 5:
2063 c = yield self.store.get_event(
2032 c = await self.store.get_event(
20642033 event.prev_event_ids()[0], allow_none=True
20652034 )
20662035 if c and c.type == EventTypes.Create:
20672036 auth_events[(c.type, c.state_key)] = c
20682037
2069 context = yield self.do_auth(origin, event, context, auth_events=auth_events)
2038 context = await self.do_auth(origin, event, context, auth_events=auth_events)
20702039
20712040 if not context.rejected:
2072 yield self._check_for_soft_fail(event, state, backfilled)
2041 await self._check_for_soft_fail(event, state, backfilled)
20732042
20742043 if event.type == EventTypes.GuestAccess and not context.rejected:
2075 yield self.maybe_kick_guest_users(event)
2044 await self.maybe_kick_guest_users(event)
20762045
20772046 return context
20782047
2079 @defer.inlineCallbacks
2080 def _check_for_soft_fail(
2048 async def _check_for_soft_fail(
20812049 self, event: EventBase, state: Optional[Iterable[EventBase]], backfilled: bool
2082 ):
2083 """Checks if we should soft fail the event, if so marks the event as
2050 ) -> None:
2051 """Checks if we should soft fail the event; if so, marks the event as
20842052 such.
20852053
20862054 Args:
20872055 event
20882056 state: The state at the event if we don't have all the event's prev events
20892057 backfilled: Whether the event is from backfill
2090
2091 Returns:
2092 Deferred
20932058 """
20942059 # For new (non-backfilled and non-outlier) events we check if the event
20952060 # passes auth based on the current state. If it doesn't then we
20962061 # "soft-fail" the event.
20972062 do_soft_fail_check = not backfilled and not event.internal_metadata.is_outlier()
20982063 if do_soft_fail_check:
2099 extrem_ids = yield self.store.get_latest_event_ids_in_room(event.room_id)
2064 extrem_ids = await self.store.get_latest_event_ids_in_room(event.room_id)
21002065
21012066 extrem_ids = set(extrem_ids)
21022067 prev_event_ids = set(event.prev_event_ids())
21072072 do_soft_fail_check = False
21082073
21092074 if do_soft_fail_check:
2110 room_version = yield self.store.get_room_version_id(event.room_id)
2075 room_version = await self.store.get_room_version_id(event.room_id)
21112076 room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
21122077
21132078 # Calculate the "current state".
21242089 # given state at the event. This should correctly handle cases
21252090 # like bans, especially with state res v2.
21262091
2127 state_sets = yield self.state_store.get_state_groups(
2092 state_sets = await self.state_store.get_state_groups(
21282093 event.room_id, extrem_ids
21292094 )
21302095 state_sets = list(state_sets.values())
21312096 state_sets.append(state)
2132 current_state_ids = yield self.state_handler.resolve_events(
2097 current_state_ids = await self.state_handler.resolve_events(
21332098 room_version, state_sets, event
21342099 )
21352100 current_state_ids = {
21362101 k: e.event_id for k, e in iteritems(current_state_ids)
21372102 }
21382103 else:
2139 current_state_ids = yield self.state_handler.get_current_state_ids(
2104 current_state_ids = await self.state_handler.get_current_state_ids(
21402105 event.room_id, latest_event_ids=extrem_ids
21412106 )
21422107
21522117 e for k, e in iteritems(current_state_ids) if k in auth_types
21532118 ]
21542119
2155 current_auth_events = yield self.store.get_events(current_state_ids)
2120 current_auth_events = await self.store.get_events(current_state_ids)
21562121 current_auth_events = {
21572122 (e.type, e.state_key): e for e in current_auth_events.values()
21582123 }
21652130 logger.warning("Soft-failing %r because %s", event, e)
21662131 event.internal_metadata.soft_failed = True
21672132
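A hedged, self-contained restatement of the soft-fail decision made above (hypothetical helper; the real check then resolves the "current state" and re-runs auth against it):

    def needs_soft_fail_check(backfilled, is_outlier, prev_event_ids, extremity_ids):
        # Backfilled events and outliers are never soft-failed.
        if backfilled or is_outlier:
            return False
        # If the event's prev_events are exactly the current forward
        # extremities, it has effectively been checked against current state.
        return set(prev_event_ids) != set(extremity_ids)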
2168 @defer.inlineCallbacks
2169 def on_query_auth(
2133 async def on_query_auth(
21702134 self, origin, event_id, room_id, remote_auth_chain, rejects, missing
21712135 ):
2172 in_room = yield self.auth.check_host_in_room(room_id, origin)
2136 in_room = await self.auth.check_host_in_room(room_id, origin)
21732137 if not in_room:
21742138 raise AuthError(403, "Host not in room.")
21752139
2176 event = yield self.store.get_event(
2140 event = await self.store.get_event(
21772141 event_id, allow_none=False, check_room_id=room_id
21782142 )
21792143
21812145 # don't want to fall into the trap of `missing` being wrong.
21822146 for e in remote_auth_chain:
21832147 try:
2184 yield self._handle_new_event(origin, e)
2148 await self._handle_new_event(origin, e)
21852149 except AuthError:
21862150 pass
21872151
21882152 # Now get the current auth_chain for the event.
2189 local_auth_chain = yield self.store.get_auth_chain(
2153 local_auth_chain = await self.store.get_auth_chain(
21902154 [auth_id for auth_id in event.auth_event_ids()], include_given=True
21912155 )
21922156
21932157 # TODO: Check if we would now reject event_id. If so we need to tell
21942158 # everyone.
21952159
2196 ret = yield self.construct_auth_difference(local_auth_chain, remote_auth_chain)
2160 ret = await self.construct_auth_difference(local_auth_chain, remote_auth_chain)
21972161
21982162 logger.debug("on_query_auth returning: %s", ret)
21992163
22002164 return ret
22012165
2202 @defer.inlineCallbacks
2203 def on_get_missing_events(
2166 async def on_get_missing_events(
22042167 self, origin, room_id, earliest_events, latest_events, limit
22052168 ):
2206 in_room = yield self.auth.check_host_in_room(room_id, origin)
2169 in_room = await self.auth.check_host_in_room(room_id, origin)
22072170 if not in_room:
22082171 raise AuthError(403, "Host not in room.")
22092172
2173 # Only allow up to 20 events to be retrieved per request.
22102174 limit = min(limit, 20)
22112175
2212 missing_events = yield self.store.get_missing_events(
2176 missing_events = await self.store.get_missing_events(
22132177 room_id=room_id,
22142178 earliest_events=earliest_events,
22152179 latest_events=latest_events,
22162180 limit=limit,
22172181 )
22182182
2219 missing_events = yield filter_events_for_server(
2183 missing_events = await filter_events_for_server(
22202184 self.storage, origin, missing_events
22212185 )
22222186
22232187 return missing_events
22242188
2225 @defer.inlineCallbacks
2226 @log_function
2227 def do_auth(self, origin, event, context, auth_events):
2189 async def do_auth(
2190 self,
2191 origin: str,
2192 event: EventBase,
2193 context: EventContext,
2194 auth_events: StateMap[EventBase],
2195 ) -> EventContext:
22282196 """
22292197
22302198 Args:
2231 origin (str):
2232 event (synapse.events.EventBase):
2233 context (synapse.events.snapshot.EventContext):
2234 auth_events (dict[(str, str)->synapse.events.EventBase]):
2199 origin:
2200 event:
2201 context:
2202 auth_events:
22352203 Map from (event_type, state_key) to event
22362204
22372205 Normally, our calculated auth_events based on the state of the room
22412209
22422210 Also NB that this function adds entries to it.
22432211 Returns:
2244 defer.Deferred[EventContext]: updated context object
2245 """
2246 room_version = yield self.store.get_room_version_id(event.room_id)
2212 updated context object
2213 """
2214 room_version = await self.store.get_room_version_id(event.room_id)
22472215 room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
22482216
22492217 try:
2250 context = yield self._update_auth_events_and_context_for_auth(
2218 context = await self._update_auth_events_and_context_for_auth(
22512219 origin, event, context, auth_events
22522220 )
22532221 except Exception:
22692237
22702238 return context
22712239
2272 @defer.inlineCallbacks
2273 def _update_auth_events_and_context_for_auth(
2274 self, origin, event, context, auth_events
2275 ):
2240 async def _update_auth_events_and_context_for_auth(
2241 self,
2242 origin: str,
2243 event: EventBase,
2244 context: EventContext,
2245 auth_events: StateMap[EventBase],
2246 ) -> EventContext:
22762247 """Helper for do_auth. See there for docs.
22772248
22782249 Checks whether a given event has the expected auth events. If it
22802251 we can come to a consensus (e.g. if one server missed some valid
22812252 state).
22822253
2283 This attempts to resovle any potential divergence of state between
2254 This attempts to resolve any potential divergence of state between
22842255 servers, but is not essential and so failures should not block further
22852256 processing of the event.
22862257
22872258 Args:
2288 origin (str):
2289 event (synapse.events.EventBase):
2290 context (synapse.events.snapshot.EventContext):
2291
2292 auth_events (dict[(str, str)->synapse.events.EventBase]):
2259 origin:
2260 event:
2261 context:
2262
2263 auth_events:
22932264 Map from (event_type, state_key) to event
22942265
22952266 Normally, our calculated auth_events based on the state of the room
23002271 Also NB that this function adds entries to it.
23012272
23022273 Returns:
2303 defer.Deferred[EventContext]: updated context
2274 updated context
23042275 """
23052276 event_auth_events = set(event.auth_event_ids())
23062277
23142285 #
23152286 # we start by checking if they are in the store, and then try calling /event_auth/.
23162287 if missing_auth:
2317 have_events = yield self.store.have_seen_events(missing_auth)
2288 have_events = await self.store.have_seen_events(missing_auth)
23182289 logger.debug("Events %s are in the store", have_events)
23192290 missing_auth.difference_update(have_events)
23202291
23232294 logger.info("auth_events contains unknown events: %s", missing_auth)
23242295 try:
23252296 try:
2326 remote_auth_chain = yield self.federation_client.get_event_auth(
2297 remote_auth_chain = await self.federation_client.get_event_auth(
23272298 origin, event.room_id, event.event_id
23282299 )
23292300 except RequestSendFailed as e:
23322303 logger.info("Failed to get event auth from remote: %s", e)
23332304 return context
23342305
2335 seen_remotes = yield self.store.have_seen_events(
2306 seen_remotes = await self.store.have_seen_events(
23362307 [e.event_id for e in remote_auth_chain]
23372308 )
23382309
23552326 logger.debug(
23562327 "do_auth %s missing_auth: %s", event.event_id, e.event_id
23572328 )
2358 yield self._handle_new_event(origin, e, auth_events=auth)
2329 await self._handle_new_event(origin, e, auth_events=auth)
23592330
23602331 if e.event_id in event_auth_events:
23612332 auth_events[(e.type, e.state_key)] = e
23892360
23902361 # XXX: currently this checks for redactions but I'm not convinced that is
23912362 # necessary?
2392 different_events = yield self.store.get_events_as_list(different_auth)
2363 different_events = await self.store.get_events_as_list(different_auth)
23932364
23942365 for d in different_events:
23952366 if d.room_id != event.room_id:
24152386 remote_auth_events.update({(d.type, d.state_key): d for d in different_events})
24162387 remote_state = remote_auth_events.values()
24172388
2418 room_version = yield self.store.get_room_version_id(event.room_id)
2419 new_state = yield self.state_handler.resolve_events(
2389 room_version = await self.store.get_room_version_id(event.room_id)
2390 new_state = await self.state_handler.resolve_events(
24202391 room_version, (local_state, remote_state), event
24212392 )
24222393
24312402
24322403 auth_events.update(new_state)
24332404
2434 context = yield self._update_context_for_auth_events(
2405 context = await self._update_context_for_auth_events(
24352406 event, context, auth_events
24362407 )
24372408
24382409 return context
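The helper above starts by working out which of the event's claimed auth events have never been seen locally, so that they can be fetched from the origin via the event-auth federation API. A self-contained illustration of that first step:

    def unseen_auth_events(event_auth_ids, seen_event_ids):
        """Auth event IDs referenced by the event that we do not yet have."""
        return set(event_auth_ids) - set(seen_event_ids)

    assert unseen_auth_events({"$a", "$b"}, {"$a"}) == {"$b"}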
24392410
2440 @defer.inlineCallbacks
2441 def _update_context_for_auth_events(self, event, context, auth_events):
2411 async def _update_context_for_auth_events(
2412 self, event: EventBase, context: EventContext, auth_events: StateMap[EventBase]
2413 ) -> EventContext:
24422414 """Update the state_ids in an event context after auth event resolution,
24432415 storing the changes as a new state group.
24442416
24452417 Args:
2446 event (Event): The event we're handling the context for
2447
2448 context (synapse.events.snapshot.EventContext): initial event context
2449
2450 auth_events (dict[(str, str)->EventBase]): Events to update in the event
2451 context.
2418 event: The event we're handling the context for
2419
2420 context: initial event context
2421
2422 auth_events: Events to update in the event context.
24522423
24532424 Returns:
2454 Deferred[EventContext]: new event context
2425 new event context
24552426 """
24562427 # exclude the state key of the new event from the current_state in the context.
24572428 if event.is_state():
24622433 k: a.event_id for k, a in iteritems(auth_events) if k != event_key
24632434 }
24642435
2465 current_state_ids = yield context.get_current_state_ids()
2436 current_state_ids = await context.get_current_state_ids()
24662437 current_state_ids = dict(current_state_ids)
24672438
24682439 current_state_ids.update(state_updates)
24692440
2470 prev_state_ids = yield context.get_prev_state_ids()
2441 prev_state_ids = await context.get_prev_state_ids()
24712442 prev_state_ids = dict(prev_state_ids)
24722443
24732444 prev_state_ids.update({k: a.event_id for k, a in iteritems(auth_events)})
24742445
24752446 # create a new state group as a delta from the existing one.
24762447 prev_group = context.state_group
2477 state_group = yield self.state_store.store_state_group(
2448 state_group = await self.state_store.store_state_group(
24782449 event.event_id,
24792450 event.room_id,
24802451 prev_group=prev_group,
24912462 delta_ids=state_updates,
24922463 )
24932464
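A small self-contained version of the delta built above: the resolved auth events keyed by (type, state_key), with the event's own key excluded, which is then stored as a new state group relative to the previous one. (Plain event IDs stand in for full events here.)

    def auth_state_delta(event_key, auth_event_ids):
        return {k: ev_id for k, ev_id in auth_event_ids.items() if k != event_key}

    assert auth_state_delta(
        ("m.room.member", "@u:example.org"),
        {
            ("m.room.member", "@u:example.org"): "$member",
            ("m.room.create", ""): "$create",
        },
    ) == {("m.room.create", ""): "$create"}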
2494 @defer.inlineCallbacks
2495 def construct_auth_difference(self, local_auth, remote_auth):
2465 async def construct_auth_difference(
2466 self, local_auth: Iterable[EventBase], remote_auth: Iterable[EventBase]
2467 ) -> Dict:
24962468 """ Given a local and remote auth chain, find the differences. This
24972469 assumes that we have already processed all events in remote_auth
24982470
26012573 reason_map = {}
26022574
26032575 for e in base_remote_rejected:
2604 reason = yield self.store.get_rejection_reason(e.event_id)
2576 reason = await self.store.get_rejection_reason(e.event_id)
26052577 if reason is None:
26062578 # TODO: e is not in the current state, so we should
26072579 # construct some proof of that.
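construct_auth_difference walks both chains and records which events each side accepted or rejected; a much-simplified, purely set-based picture of "the difference" between two auth chains (illustrative only, not the actual algorithm) looks like this:

    def auth_chain_difference(local_auth, remote_auth):
        local_ids = {e.event_id for e in local_auth}
        remote_ids = {e.event_id for e in remote_auth}
        # Events present in one chain but not the other.
        return local_ids.symmetric_difference(remote_ids)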
26862658 destinations, room_id, event_dict
26872659 )
26882660
2689 @defer.inlineCallbacks
2690 @log_function
2691 def on_exchange_third_party_invite_request(self, room_id, event_dict):
2661 async def on_exchange_third_party_invite_request(
2662 self, room_id: str, event_dict: JsonDict
2663 ) -> None:
26922664 """Handle an exchange_third_party_invite request from a remote server
26932665
26942666 The remote server will call this when it wants to turn a 3pid invite
26952667 into a normal m.room.member invite.
26962668
26972669 Args:
2698 room_id (str): The ID of the room.
2670 room_id: The ID of the room.
26992671
27002672 event_dict (dict[str, Any]): Dictionary containing the event body.
27012673
2702 Returns:
2703 Deferred: resolves (to None)
2704 """
2705 room_version = yield self.store.get_room_version_id(room_id)
2674 """
2675 room_version = await self.store.get_room_version_id(room_id)
27062676
27072677 # NB: event_dict has a particular specced format we might need to fudge
27082678 # if we change event formats too much.
27092679 builder = self.event_builder_factory.new(room_version, event_dict)
27102680
2711 event, context = yield self.event_creation_handler.create_new_client_event(
2681 event, context = await self.event_creation_handler.create_new_client_event(
27122682 builder=builder
27132683 )
27142684
2715 event_allowed = yield self.third_party_event_rules.check_event_allowed(
2685 event_allowed = await self.third_party_event_rules.check_event_allowed(
27162686 event, context
27172687 )
27182688 if not event_allowed:
27232693 403, "This event is not allowed in this context", Codes.FORBIDDEN
27242694 )
27252695
2726 event, context = yield self.add_display_name_to_third_party_invite(
2696 event, context = await self.add_display_name_to_third_party_invite(
27272697 room_version, event_dict, event, context
27282698 )
27292699
27302700 try:
2731 yield self.auth.check_from_context(room_version, event, context)
2701 await self.auth.check_from_context(room_version, event, context)
27322702 except AuthError as e:
27332703 logger.warning("Denying third party invite %r because %s", event, e)
27342704 raise e
2735 yield self._check_signature(event, context)
2705 await self._check_signature(event, context)
27362706
27372707 # We need to tell the transaction queue to send this out, even
27382708 # though the sender isn't a local user.
27402710
27412711 # We retrieve the room member handler here as to not cause a cyclic dependency
27422712 member_handler = self.hs.get_room_member_handler()
2743 yield member_handler.send_membership_event(None, event, context)
2713 await member_handler.send_membership_event(None, event, context)
27442714
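For context, the event_dict exchanged here is roughly an m.room.member invite carrying a third_party_invite block; the values below are illustrative only and are not taken from this changeset:

    example_event_dict = {
        "type": "m.room.member",
        "sender": "@inviter:remote.example",
        "state_key": "@invitee:local.example",
        "content": {
            "membership": "invite",
            "third_party_invite": {
                "display_name": "a...e@example.com",
                "signed": {
                    "mxid": "@invitee:local.example",
                    "token": "sometoken",
                    "signatures": {},
                },
            },
        },
    }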
27452715 @defer.inlineCallbacks
27462716 def add_display_name_to_third_party_invite(
28882858 if "valid" not in response or not response["valid"]:
28892859 raise AuthError(403, "Third party certificate was invalid")
28902860
2891 @defer.inlineCallbacks
2892 def persist_events_and_notify(self, event_and_contexts, backfilled=False):
2861 async def persist_events_and_notify(
2862 self,
2863 event_and_contexts: Sequence[Tuple[EventBase, EventContext]],
2864 backfilled: bool = False,
2865 ) -> None:
28932866 """Persists events and tells the notifier/pushers about them, if
28942867 necessary.
28952868
28962869 Args:
2897 event_and_contexts(list[tuple[FrozenEvent, EventContext]])
2898 backfilled (bool): Whether these events are a result of
2870 event_and_contexts:
2871 backfilled: Whether these events are a result of
28992872 backfilling or not
2900
2901 Returns:
2902 Deferred
29032873 """
29042874 if self.config.worker_app:
2905 yield self._send_events_to_master(
2875 await self._send_events_to_master(
29062876 store=self.store,
29072877 event_and_contexts=event_and_contexts,
29082878 backfilled=backfilled,
29092879 )
29102880 else:
2911 max_stream_id = yield self.storage.persistence.persist_events(
2881 max_stream_id = await self.storage.persistence.persist_events(
29122882 event_and_contexts, backfilled=backfilled
29132883 )
29142884
29192889
29202890 if not backfilled: # Never notify for backfilled events
29212891 for event, _ in event_and_contexts:
2922 yield self._notify_persisted_event(event, max_stream_id)
2923
2924 def _notify_persisted_event(self, event, max_stream_id):
2892 await self._notify_persisted_event(event, max_stream_id)
2893
2894 async def _notify_persisted_event(
2895 self, event: EventBase, max_stream_id: int
2896 ) -> None:
29252897 """Checks to see if notifier/pushers should be notified about the
29262898 event or not.
29272899
29282900 Args:
2929 event (FrozenEvent)
2930 max_stream_id (int): The max_stream_id returned by persist_events
2901 event:
2902 max_stream_id: The max_stream_id returned by persist_events
29312903 """
29322904
29332905 extra_users = []
29512923 event, event_stream_id, max_stream_id, extra_users=extra_users
29522924 )
29532925
2954 return self.pusher_pool.on_new_notifications(event_stream_id, max_stream_id)
2955
2956 def _clean_room_for_join(self, room_id):
2926 await self.pusher_pool.on_new_notifications(event_stream_id, max_stream_id)
2927
2928 async def _clean_room_for_join(self, room_id: str) -> None:
29572929 """Called to clean up any data in DB for a given room, ready for the
29582930 server to join the room.
29592931
29602932 Args:
2961 room_id (str)
2933 room_id
29622934 """
29632935 if self.config.worker_app:
2964 return self._clean_room_for_join_client(room_id)
2936 await self._clean_room_for_join_client(room_id)
29652937 else:
2966 return self.store.clean_room_for_join(room_id)
2967
2968 def user_joined_room(self, user, room_id):
2938 await self.store.clean_room_for_join(room_id)
2939
2940 async def user_joined_room(self, user: UserID, room_id: str) -> None:
29692941 """Called when a new user has joined the room
29702942 """
29712943 if self.config.worker_app:
2972 return self._notify_user_membership_change(
2944 await self._notify_user_membership_change(
29732945 room_id=room_id, user_id=user.to_string(), change="joined"
29742946 )
29752947 else:
2976 return defer.succeed(user_joined_room(self.distributor, user, room_id))
2948 user_joined_room(self.distributor, user, room_id)
29772949
29782950 @defer.inlineCallbacks
29792951 def get_room_complexity(self, remote_room_hosts, room_id):
6262 return f
6363
6464
65 class GroupsLocalHandler(object):
65 class GroupsLocalWorkerHandler(object):
6666 def __init__(self, hs):
6767 self.hs = hs
6868 self.store = hs.get_datastore()
8080
8181 self.profile_handler = hs.get_profile_handler()
8282
83 # Ensure attestations get renewed
84 hs.get_groups_attestation_renewer()
85
8683 # The following functions merely route the query to the local groups server
8784 # or federation depending on if the group is local or remote
8885
8986 get_group_profile = _create_rerouter("get_group_profile")
90 update_group_profile = _create_rerouter("update_group_profile")
9187 get_rooms_in_group = _create_rerouter("get_rooms_in_group")
92
9388 get_invited_users_in_group = _create_rerouter("get_invited_users_in_group")
94
95 add_room_to_group = _create_rerouter("add_room_to_group")
96 update_room_in_group = _create_rerouter("update_room_in_group")
97 remove_room_from_group = _create_rerouter("remove_room_from_group")
98
99 update_group_summary_room = _create_rerouter("update_group_summary_room")
100 delete_group_summary_room = _create_rerouter("delete_group_summary_room")
101
102 update_group_category = _create_rerouter("update_group_category")
103 delete_group_category = _create_rerouter("delete_group_category")
10489 get_group_category = _create_rerouter("get_group_category")
10590 get_group_categories = _create_rerouter("get_group_categories")
106
107 update_group_summary_user = _create_rerouter("update_group_summary_user")
108 delete_group_summary_user = _create_rerouter("delete_group_summary_user")
109
110 update_group_role = _create_rerouter("update_group_role")
111 delete_group_role = _create_rerouter("delete_group_role")
11291 get_group_role = _create_rerouter("get_group_role")
11392 get_group_roles = _create_rerouter("get_group_roles")
114
115 set_group_join_policy = _create_rerouter("set_group_join_policy")
11693
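The _create_rerouter helpers above (defined earlier in this file) dispatch each call either to the local groups server or, over federation, to the server that owns the group. A self-contained toy version of that dispatch, using hypothetical callables:

    def route_group_query(group_id, local_domain, local_fn, remote_fn):
        # A Matrix group ID looks like "+group:server.name"; everything after
        # the first ":" is the owning server.
        owning_domain = group_id.split(":", 1)[1]
        if owning_domain == local_domain:
            return local_fn(group_id)
        return remote_fn(owning_domain, group_id)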
11794 @defer.inlineCallbacks
11895 def get_group_summary(self, group_id, requester_user_id):
169146 return res
170147
171148 @defer.inlineCallbacks
149 def get_users_in_group(self, group_id, requester_user_id):
150 """Get users in a group
151 """
152 if self.is_mine_id(group_id):
153 res = yield self.groups_server_handler.get_users_in_group(
154 group_id, requester_user_id
155 )
156 return res
157
158 group_server_name = get_domain_from_id(group_id)
159
160 try:
161 res = yield self.transport_client.get_users_in_group(
162 get_domain_from_id(group_id), group_id, requester_user_id
163 )
164 except HttpResponseException as e:
165 raise e.to_synapse_error()
166 except RequestSendFailed:
167 raise SynapseError(502, "Failed to contact group server")
168
169 chunk = res["chunk"]
170 valid_entries = []
171 for entry in chunk:
172 g_user_id = entry["user_id"]
173 attestation = entry.pop("attestation", {})
174 try:
175 if get_domain_from_id(g_user_id) != group_server_name:
176 yield self.attestations.verify_attestation(
177 attestation,
178 group_id=group_id,
179 user_id=g_user_id,
180 server_name=get_domain_from_id(g_user_id),
181 )
182 valid_entries.append(entry)
183 except Exception as e:
184 logger.info("Failed to verify user is in group: %s", e)
185
186 res["chunk"] = valid_entries
187
188 return res
189
190 @defer.inlineCallbacks
191 def get_joined_groups(self, user_id):
192 group_ids = yield self.store.get_joined_groups(user_id)
193 return {"groups": group_ids}
194
195 @defer.inlineCallbacks
196 def get_publicised_groups_for_user(self, user_id):
197 if self.hs.is_mine_id(user_id):
198 result = yield self.store.get_publicised_groups_for_user(user_id)
199
200 # Check AS associated groups for this user - this depends on the
201 # RegExps in the AS registration file (under `users`)
202 for app_service in self.store.get_app_services():
203 result.extend(app_service.get_groups_for_user(user_id))
204
205 return {"groups": result}
206 else:
207 try:
208 bulk_result = yield self.transport_client.bulk_get_publicised_groups(
209 get_domain_from_id(user_id), [user_id]
210 )
211 except HttpResponseException as e:
212 raise e.to_synapse_error()
213 except RequestSendFailed:
214 raise SynapseError(502, "Failed to contact group server")
215
216 result = bulk_result.get("users", {}).get(user_id)
217 # TODO: Verify attestations
218 return {"groups": result}
219
220 @defer.inlineCallbacks
221 def bulk_get_publicised_groups(self, user_ids, proxy=True):
222 destinations = {}
223 local_users = set()
224
225 for user_id in user_ids:
226 if self.hs.is_mine_id(user_id):
227 local_users.add(user_id)
228 else:
229 destinations.setdefault(get_domain_from_id(user_id), set()).add(user_id)
230
231 if not proxy and destinations:
232 raise SynapseError(400, "Some user_ids are not local")
233
234 results = {}
235 failed_results = []
236 for destination, dest_user_ids in iteritems(destinations):
237 try:
238 r = yield self.transport_client.bulk_get_publicised_groups(
239 destination, list(dest_user_ids)
240 )
241 results.update(r["users"])
242 except Exception:
243 failed_results.extend(dest_user_ids)
244
245 for uid in local_users:
246 results[uid] = yield self.store.get_publicised_groups_for_user(uid)
247
248 # Check AS associated groups for this user - this depends on the
249 # RegExps in the AS registration file (under `users`)
250 for app_service in self.store.get_app_services():
251 results[uid].extend(app_service.get_groups_for_user(uid))
252
253 return {"users": results}
254
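bulk_get_publicised_groups above first splits the requested user IDs into local users and a per-destination map of remote users. A self-contained sketch of that split:

    def split_users_by_domain(user_ids, local_domain):
        local, remote = set(), {}
        for user_id in user_ids:
            domain = user_id.split(":", 1)[1]
            if domain == local_domain:
                local.add(user_id)
            else:
                remote.setdefault(domain, set()).add(user_id)
        return local, remote

    assert split_users_by_domain(
        ["@a:here.example", "@b:there.example"], "here.example"
    ) == ({"@a:here.example"}, {"there.example": {"@b:there.example"}})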
255
256 class GroupsLocalHandler(GroupsLocalWorkerHandler):
257 def __init__(self, hs):
258 super(GroupsLocalHandler, self).__init__(hs)
259
260 # Ensure attestations get renewed
261 hs.get_groups_attestation_renewer()
262
263 # The following functions merely route the query to the local groups server
264 # or federation depending on if the group is local or remote
265
266 update_group_profile = _create_rerouter("update_group_profile")
267
268 add_room_to_group = _create_rerouter("add_room_to_group")
269 update_room_in_group = _create_rerouter("update_room_in_group")
270 remove_room_from_group = _create_rerouter("remove_room_from_group")
271
272 update_group_summary_room = _create_rerouter("update_group_summary_room")
273 delete_group_summary_room = _create_rerouter("delete_group_summary_room")
274
275 update_group_category = _create_rerouter("update_group_category")
276 delete_group_category = _create_rerouter("delete_group_category")
277
278 update_group_summary_user = _create_rerouter("update_group_summary_user")
279 delete_group_summary_user = _create_rerouter("delete_group_summary_user")
280
281 update_group_role = _create_rerouter("update_group_role")
282 delete_group_role = _create_rerouter("delete_group_role")
283
284 set_group_join_policy = _create_rerouter("set_group_join_policy")
285
286 @defer.inlineCallbacks
172287 def create_group(self, group_id, user_id, content):
173288 """Create a group
174289 """
215330 is_publicised=is_publicised,
216331 )
217332 self.notifier.on_new_event("groups_key", token, users=[user_id])
218
219 return res
220
221 @defer.inlineCallbacks
222 def get_users_in_group(self, group_id, requester_user_id):
223 """Get users in a group
224 """
225 if self.is_mine_id(group_id):
226 res = yield self.groups_server_handler.get_users_in_group(
227 group_id, requester_user_id
228 )
229 return res
230
231 group_server_name = get_domain_from_id(group_id)
232
233 try:
234 res = yield self.transport_client.get_users_in_group(
235 get_domain_from_id(group_id), group_id, requester_user_id
236 )
237 except HttpResponseException as e:
238 raise e.to_synapse_error()
239 except RequestSendFailed:
240 raise SynapseError(502, "Failed to contact group server")
241
242 chunk = res["chunk"]
243 valid_entries = []
244 for entry in chunk:
245 g_user_id = entry["user_id"]
246 attestation = entry.pop("attestation", {})
247 try:
248 if get_domain_from_id(g_user_id) != group_server_name:
249 yield self.attestations.verify_attestation(
250 attestation,
251 group_id=group_id,
252 user_id=g_user_id,
253 server_name=get_domain_from_id(g_user_id),
254 )
255 valid_entries.append(entry)
256 except Exception as e:
257 logger.info("Failed to verify user is in group: %s", e)
258
259 res["chunk"] = valid_entries
260333
261334 return res
262335
451524 group_id, user_id, membership="leave"
452525 )
453526 self.notifier.on_new_event("groups_key", token, users=[user_id])
454
455 @defer.inlineCallbacks
456 def get_joined_groups(self, user_id):
457 group_ids = yield self.store.get_joined_groups(user_id)
458 return {"groups": group_ids}
459
460 @defer.inlineCallbacks
461 def get_publicised_groups_for_user(self, user_id):
462 if self.hs.is_mine_id(user_id):
463 result = yield self.store.get_publicised_groups_for_user(user_id)
464
465 # Check AS associated groups for this user - this depends on the
466 # RegExps in the AS registration file (under `users`)
467 for app_service in self.store.get_app_services():
468 result.extend(app_service.get_groups_for_user(user_id))
469
470 return {"groups": result}
471 else:
472 try:
473 bulk_result = yield self.transport_client.bulk_get_publicised_groups(
474 get_domain_from_id(user_id), [user_id]
475 )
476 except HttpResponseException as e:
477 raise e.to_synapse_error()
478 except RequestSendFailed:
479 raise SynapseError(502, "Failed to contact group server")
480
481 result = bulk_result.get("users", {}).get(user_id)
482 # TODO: Verify attestations
483 return {"groups": result}
484
485 @defer.inlineCallbacks
486 def bulk_get_publicised_groups(self, user_ids, proxy=True):
487 destinations = {}
488 local_users = set()
489
490 for user_id in user_ids:
491 if self.hs.is_mine_id(user_id):
492 local_users.add(user_id)
493 else:
494 destinations.setdefault(get_domain_from_id(user_id), set()).add(user_id)
495
496 if not proxy and destinations:
497 raise SynapseError(400, "Some user_ids are not local")
498
499 results = {}
500 failed_results = []
501 for destination, dest_user_ids in iteritems(destinations):
502 try:
503 r = yield self.transport_client.bulk_get_publicised_groups(
504 destination, list(dest_user_ids)
505 )
506 results.update(r["users"])
507 except Exception:
508 failed_results.extend(dest_user_ids)
509
510 for uid in local_users:
511 results[uid] = yield self.store.get_publicised_groups_for_user(uid)
512
513 # Check AS associated groups for this user - this depends on the
514 # RegExps in the AS registration file (under `users`)
515 for app_service in self.store.get_app_services():
516 results[uid].extend(app_service.get_groups_for_user(uid))
517
518 return {"users": results}
1717 from twisted.internet import defer
1818
1919 from synapse.api.constants import EventTypes, Membership
20 from synapse.api.errors import AuthError, Codes, SynapseError
20 from synapse.api.errors import SynapseError
2121 from synapse.events.validator import EventValidator
2222 from synapse.handlers.presence import format_user_presence_state
2323 from synapse.logging.context import make_deferred_yieldable, run_in_background
273273
274274 user_id = requester.user.to_string()
275275
276 membership, member_event_id = await self._check_in_room_or_world_readable(
277 room_id, user_id
276 (
277 membership,
278 member_event_id,
279 ) = await self.auth.check_user_in_room_or_world_readable(
280 room_id, user_id, allow_departed_users=True,
278281 )
279282 is_peeking = member_event_id is None
280283
432435 ret["membership"] = membership
433436
434437 return ret
435
436 async def _check_in_room_or_world_readable(self, room_id, user_id):
437 try:
438 # check_user_was_in_room will return the most recent membership
439 # event for the user if:
440 # * The user is a non-guest user, and was ever in the room
441 # * The user is a guest user, and has joined the room
442 # else it will throw.
443 member_event = await self.auth.check_user_was_in_room(room_id, user_id)
444 return member_event.membership, member_event.event_id
445 except AuthError:
446 visibility = await self.state_handler.get_current_state(
447 room_id, EventTypes.RoomHistoryVisibility, ""
448 )
449 if (
450 visibility
451 and visibility.content["history_visibility"] == "world_readable"
452 ):
453 return Membership.JOIN, None
454 raise AuthError(
455 403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN
456 )
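The removed helper above is superseded by auth.check_user_in_room_or_world_readable with allow_departed_users=True. A heavily simplified sketch of the access rule involved (hypothetical helper; the real check also restricts departed users to history from before they left, and raises AuthError rather than returning False):

    def may_read_room(membership, history_visibility, allow_departed_users=True):
        if membership == "join":
            return True
        if membership == "leave" and allow_departed_users:
            return True
        return history_visibility == "world_readable"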
9898 (
9999 membership,
100100 membership_event_id,
101 ) = yield self.auth.check_in_room_or_world_readable(room_id, user_id)
101 ) = yield self.auth.check_user_in_room_or_world_readable(
102 room_id, user_id, allow_departed_users=True
103 )
102104
103105 if membership == Membership.JOIN:
104106 data = yield self.state.get_current_state(room_id, event_type, state_key)
176178 (
177179 membership,
178180 membership_event_id,
179 ) = yield self.auth.check_in_room_or_world_readable(room_id, user_id)
181 ) = yield self.auth.check_user_in_room_or_world_readable(
182 room_id, user_id, allow_departed_users=True
183 )
180184
181185 if membership == Membership.JOIN:
182186 state_ids = yield self.store.get_filtered_current_state_ids(
215219 if not requester.app_service:
216220 # We check AS auth after fetching the room membership, as it
217221 # requires us to pull out all joined members anyway.
218 membership, _ = yield self.auth.check_in_room_or_world_readable(
219 room_id, user_id
222 membership, _ = yield self.auth.check_user_in_room_or_world_readable(
223 room_id, user_id, allow_departed_users=True
220224 )
221225 if membership != Membership.JOIN:
222226 raise NotImplementedError(
931935 # way? If we have been invited by a remote server, we need
932936 # to get them to sign the event.
933937
934 returned_invite = yield federation_handler.send_invite(
935 invitee.domain, event
938 returned_invite = yield defer.ensureDeferred(
939 federation_handler.send_invite(invitee.domain, event)
936940 )
937
938941 event.unsigned.pop("room_state", None)
939942
940943 # TODO: Make sure the signatures actually are correct.
132132 include_null = False
133133
134134 logger.info(
135 "[purge] Running purge job for %d < max_lifetime <= %d (include NULLs = %s)",
135 "[purge] Running purge job for %s < max_lifetime <= %s (include NULLs = %s)",
136136 min_ms,
137137 max_ms,
138138 include_null,
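The switch from %d to %s above matters because min_ms/max_ms may be None when a purge job has no lower or upper bound, and %d cannot format None:

    "%s < max_lifetime" % (None,)     # -> "None < max_lifetime"
    # "%d < max_lifetime" % (None,)   # raises TypeError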
334334 (
335335 membership,
336336 member_event_id,
337 ) = await self.auth.check_in_room_or_world_readable(room_id, user_id)
337 ) = await self.auth.check_user_in_room_or_world_readable(
338 room_id, user_id, allow_departed_users=True
339 )
338340
339341 if source_config.direction == "b":
340342 # if we're going backwards, we might need to backfill. This
6363 "history_visibility": "shared",
6464 "original_invitees_have_ops": False,
6565 "guest_can_join": True,
66 "power_level_content_override": {"invite": 0},
6667 },
6768 RoomCreationPreset.TRUSTED_PRIVATE_CHAT: {
6869 "join_rules": JoinRules.INVITE,
6970 "history_visibility": "shared",
7071 "original_invitees_have_ops": True,
7172 "guest_can_join": True,
73 "power_level_content_override": {"invite": 0},
7274 },
7375 RoomCreationPreset.PUBLIC_CHAT: {
7476 "join_rules": JoinRules.PUBLIC,
7577 "history_visibility": "shared",
7678 "original_invitees_have_ops": False,
7779 "guest_can_join": False,
80 "power_level_content_override": {},
7881 },
7982 }
8083
258261 for v in ("invite", "events_default"):
259262 current = int(pl_content.get(v, 0))
260263 if current < restricted_level:
261 logger.info(
264 logger.debug(
262265 "Setting level for %s in %s to %i (was %i)",
263266 v,
264267 old_room_id,
268271 pl_content[v] = restricted_level
269272 updated = True
270273 else:
271 logger.info("Not setting level for %s (already %i)", v, current)
274 logger.debug("Not setting level for %s (already %i)", v, current)
272275
273276 if updated:
274277 try:
295298 EventTypes.Aliases, events_default
296299 )
297300
298 logger.info("Setting correct PLs in new room to %s", new_pl_content)
301 logger.debug("Setting correct PLs in new room to %s", new_pl_content)
299302 yield self.event_creation_handler.create_and_send_nonmember_event(
300303 requester,
301304 {
474477 for alias_str in aliases:
475478 alias = RoomAlias.from_string(alias_str)
476479 try:
477 yield directory_handler.delete_association(
478 requester, alias, send_event=False
479 )
480 yield directory_handler.delete_association(requester, alias)
480481 removed_aliases.append(alias_str)
481482 except SynapseError as e:
482483 logger.warning("Unable to remove alias %s from old room: %s", alias, e)
507508 RoomAlias.from_string(alias),
508509 new_room_id,
509510 servers=(self.hs.hostname,),
510 send_event=False,
511511 check_membership=False,
512512 )
513513 logger.info("Moved alias %s to new room", alias)
578578
579579 # Check whether the third party rules allows/changes the room create
580580 # request.
581 yield self.third_party_event_rules.on_create_room(
581 event_allowed = yield self.third_party_event_rules.on_create_room(
582582 requester, config, is_requester_admin=is_requester_admin
583583 )
584 if not event_allowed:
585 raise SynapseError(
586 403, "You are not permitted to create rooms", Codes.FORBIDDEN
587 )
584588
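With the fix above, a third-party event rules module can actually veto room creation by returning a falsy value from on_create_room. A hypothetical module illustrating the idea (the exact module interface is not shown in this diff):

    class ExampleRules:
        """Hypothetical third-party rules module (scaffolding omitted)."""

        async def on_create_room(self, requester, room_config, is_requester_admin):
            # Hypothetical policy: only server admins may create encrypted rooms.
            wants_encryption = any(
                ev.get("type") == "m.room.encryption"
                for ev in room_config.get("initial_state", [])
            )
            return is_requester_admin or not wants_encryption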
585589 if not is_requester_admin and not self.spam_checker.user_may_create_room(
586590 user_id
656660 room_id=room_id,
657661 room_alias=room_alias,
658662 servers=[self.hs.hostname],
659 send_event=False,
660663 check_membership=False,
661664 )
662665
781784 @defer.inlineCallbacks
782785 def send(etype, content, **kwargs):
783786 event = create(etype, content, **kwargs)
784 logger.info("Sending %s in new room", etype)
787 logger.debug("Sending %s in new room", etype)
785788 yield self.event_creation_handler.create_and_send_nonmember_event(
786789 creator, event, ratelimit=False
787790 )
795798 creation_content.update({"creator": creator_id})
796799 yield send(etype=EventTypes.Create, content=creation_content)
797800
798 logger.info("Sending %s in new room", EventTypes.Member)
801 logger.debug("Sending %s in new room", EventTypes.Member)
799802 yield self.room_member_handler.update_membership(
800803 creator,
801804 creator.user,
824827 # This will be reudundant on pre-MSC2260 rooms, since the
825828 # aliases event is special-cased.
826829 EventTypes.Aliases: 0,
830 EventTypes.Tombstone: 100,
831 EventTypes.ServerACL: 100,
827832 },
828833 "events_default": 0,
829834 "state_default": 50,
830835 "ban": 50,
831836 "kick": 50,
832837 "redact": 50,
833 "invite": 0,
838 "invite": 50,
834839 }
835840
836841 if config["original_invitees_have_ops"]:
837842 for invitee in invite_list:
838843 power_level_content["users"][invitee] = 100
844
845 # Power levels overrides are defined per chat preset
846 power_level_content.update(config["power_level_content_override"])
839847
840848 if power_level_content_override:
841849 power_level_content.update(power_level_content_override)
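Worked example of the new merge order above, using the values from this diff (abbreviated): the base defaults now set invite, tombstone and server-ACL levels, the per-preset override then puts invite back to 0 for private chats, and any explicit power_level_content_override from the request is applied last.

    power_levels = {
        "invite": 50,
        "events": {"m.room.tombstone": 100, "m.room.server_acl": 100},
    }
    power_levels.update({"invite": 0})  # PRIVATE_CHAT / TRUSTED_PRIVATE_CHAT preset
    power_levels.update({})             # request-supplied override (none here)
    assert power_levels["invite"] == 0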
943943 # join dance for now, since we're kinda implicitly checking
944944 # that we are allowed to join when we decide whether or not we
945945 # need to do the invite/join dance.
946 yield self.federation_handler.do_invite_join(
947 remote_room_hosts, room_id, user.to_string(), content
946 yield defer.ensureDeferred(
947 self.federation_handler.do_invite_join(
948 remote_room_hosts, room_id, user.to_string(), content
949 )
948950 )
949951 yield self._user_joined_room(user, room_id)
950952
981983 """
982984 fed_handler = self.federation_handler
983985 try:
984 ret = yield fed_handler.do_remotely_reject_invite(
985 remote_room_hosts, room_id, target.to_string(), content=content,
986 ret = yield defer.ensureDeferred(
987 fed_handler.do_remotely_reject_invite(
988 remote_room_hosts, room_id, target.to_string(), content=content,
989 )
986990 )
987991 return ret
988992 except Exception as e:
299299 room_state["guest_access"] = event_content.get("guest_access")
300300
301301 for room_id, state in room_to_state_updates.items():
302 logger.info("Updating room_stats_state for %s: %s", room_id, state)
302 logger.debug("Updating room_stats_state for %s: %s", room_id, state)
303303 yield self.store.update_room_state(room_id, state)
304304
305305 return room_to_stats_deltas, user_to_stats_deltas
1313 # See the License for the specific language governing permissions and
1414 # limitations under the License.
1515
16 import collections
1716 import itertools
1817 import logging
18 from typing import Any, Dict, FrozenSet, List, Optional, Set, Tuple
1919
2020 from six import iteritems, itervalues
2121
22 import attr
2223 from prometheus_client import Counter
2324
2425 from synapse.api.constants import EventTypes, Membership
26 from synapse.api.filtering import FilterCollection
27 from synapse.events import EventBase
2528 from synapse.logging.context import LoggingContext
2629 from synapse.push.clientformat import format_push_rules_for_user
2730 from synapse.storage.roommember import MemberSummary
2831 from synapse.storage.state import StateFilter
29 from synapse.types import RoomStreamToken
32 from synapse.types import (
33 Collection,
34 JsonDict,
35 RoomStreamToken,
36 StateMap,
37 StreamToken,
38 UserID,
39 )
3040 from synapse.util.async_helpers import concurrently_execute
3141 from synapse.util.caches.expiringcache import ExpiringCache
3242 from synapse.util.caches.lrucache import LruCache
6171 LAZY_LOADED_MEMBERS_CACHE_MAX_SIZE = 100
6272
6373
64 SyncConfig = collections.namedtuple(
65 "SyncConfig", ["user", "filter_collection", "is_guest", "request_key", "device_id"]
66 )
67
68
69 class TimelineBatch(
70 collections.namedtuple("TimelineBatch", ["prev_batch", "events", "limited"])
71 ):
72 __slots__ = []
73
74 def __nonzero__(self):
74 @attr.s(slots=True, frozen=True)
75 class SyncConfig:
76 user = attr.ib(type=UserID)
77 filter_collection = attr.ib(type=FilterCollection)
78 is_guest = attr.ib(type=bool)
79 request_key = attr.ib(type=Tuple[Any, ...])
80 device_id = attr.ib(type=str)
81
82
83 @attr.s(slots=True, frozen=True)
84 class TimelineBatch:
85 prev_batch = attr.ib(type=StreamToken)
86 events = attr.ib(type=List[EventBase])
87 limited = attr.ib(type=bool)
88
89 def __nonzero__(self) -> bool:
7590 """Make the result appear empty if there are no updates. This is used
7691 to tell if room needs to be part of the sync result.
7792 """
8095 __bool__ = __nonzero__ # python3
8196
8297
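The namedtuple-to-attrs conversion above is repeated for each result type in this file; the general pattern (simplified, Python 3 only, so __bool__ is defined directly) is:

    import collections

    import attr

    # Before: a namedtuple whose truthiness says "is there anything to sync?".
    OldBatch = collections.namedtuple("OldBatch", ["events", "limited"])

    # After: a frozen attrs class with explicit field types and the same
    # truthiness behaviour.
    @attr.s(slots=True, frozen=True)
    class NewBatch:
        events = attr.ib(type=list)
        limited = attr.ib(type=bool)

        def __bool__(self) -> bool:
            return bool(self.events)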
83 class JoinedSyncResult(
84 collections.namedtuple(
85 "JoinedSyncResult",
86 [
87 "room_id", # str
88 "timeline", # TimelineBatch
89 "state", # dict[(str, str), FrozenEvent]
90 "ephemeral",
91 "account_data",
92 "unread_notifications",
93 "summary",
94 ],
95 )
96 ):
97 __slots__ = []
98
99 def __nonzero__(self):
98 @attr.s(slots=True, frozen=True)
99 class JoinedSyncResult:
100 room_id = attr.ib(type=str)
101 timeline = attr.ib(type=TimelineBatch)
102 state = attr.ib(type=StateMap[EventBase])
103 ephemeral = attr.ib(type=List[JsonDict])
104 account_data = attr.ib(type=List[JsonDict])
105 unread_notifications = attr.ib(type=JsonDict)
106 summary = attr.ib(type=Optional[JsonDict])
107
108 def __nonzero__(self) -> bool:
100109 """Make the result appear empty if there are no updates. This is used
101110 to tell if room needs to be part of the sync result.
102111 """
112121 __bool__ = __nonzero__ # python3
113122
114123
115 class ArchivedSyncResult(
116 collections.namedtuple(
117 "ArchivedSyncResult",
118 [
119 "room_id", # str
120 "timeline", # TimelineBatch
121 "state", # dict[(str, str), FrozenEvent]
122 "account_data",
123 ],
124 )
125 ):
126 __slots__ = []
127
128 def __nonzero__(self):
124 @attr.s(slots=True, frozen=True)
125 class ArchivedSyncResult:
126 room_id = attr.ib(type=str)
127 timeline = attr.ib(type=TimelineBatch)
128 state = attr.ib(type=StateMap[EventBase])
129 account_data = attr.ib(type=List[JsonDict])
130
131 def __nonzero__(self) -> bool:
129132 """Make the result appear empty if there are no updates. This is used
130133 to tell if room needs to be part of the sync result.
131134 """
134137 __bool__ = __nonzero__ # python3
135138
136139
137 class InvitedSyncResult(
138 collections.namedtuple(
139 "InvitedSyncResult",
140 ["room_id", "invite"], # str # FrozenEvent: the invite event
141 )
142 ):
143 __slots__ = []
144
145 def __nonzero__(self):
140 @attr.s(slots=True, frozen=True)
141 class InvitedSyncResult:
142 room_id = attr.ib(type=str)
143 invite = attr.ib(type=EventBase)
144
145 def __nonzero__(self) -> bool:
146146 """Invited rooms should always be reported to the client"""
147147 return True
148148
149149 __bool__ = __nonzero__ # python3
150150
151151
152 class GroupsSyncResult(
153 collections.namedtuple("GroupsSyncResult", ["join", "invite", "leave"])
154 ):
155 __slots__ = []
156
157 def __nonzero__(self):
152 @attr.s(slots=True, frozen=True)
153 class GroupsSyncResult:
154 join = attr.ib(type=JsonDict)
155 invite = attr.ib(type=JsonDict)
156 leave = attr.ib(type=JsonDict)
157
158 def __nonzero__(self) -> bool:
158159 return bool(self.join or self.invite or self.leave)
159160
160161 __bool__ = __nonzero__ # python3
161162
162163
163 class DeviceLists(
164 collections.namedtuple(
165 "DeviceLists",
166 [
167 "changed", # list of user_ids whose devices may have changed
168 "left", # list of user_ids whose devices we no longer track
169 ],
170 )
171 ):
172 __slots__ = []
173
174 def __nonzero__(self):
164 @attr.s(slots=True, frozen=True)
165 class DeviceLists:
166 """
167 Attributes:
168 changed: List of user_ids whose devices may have changed
169 left: List of user_ids whose devices we no longer track
170 """
171
172 changed = attr.ib(type=Collection[str])
173 left = attr.ib(type=Collection[str])
174
175 def __nonzero__(self) -> bool:
175176 return bool(self.changed or self.left)
176177
177178 __bool__ = __nonzero__ # python3
178179
179180
180 class SyncResult(
181 collections.namedtuple(
182 "SyncResult",
183 [
184 "next_batch", # Token for the next sync
185 "presence", # List of presence events for the user.
186 "account_data", # List of account_data events for the user.
187 "joined", # JoinedSyncResult for each joined room.
188 "invited", # InvitedSyncResult for each invited room.
189 "archived", # ArchivedSyncResult for each archived room.
190 "to_device", # List of direct messages for the device.
191 "device_lists", # List of user_ids whose devices have changed
192 "device_one_time_keys_count", # Dict of algorithm to count for one time keys
193 # for this device
194 "groups",
195 ],
196 )
197 ):
198 __slots__ = []
199
200 def __nonzero__(self):
181 @attr.s
182 class _RoomChanges:
183 """The set of room entries to include in the sync, plus the set of joined
184 and left room IDs since last sync.
185 """
186
187 room_entries = attr.ib(type=List["RoomSyncResultBuilder"])
188 invited = attr.ib(type=List[InvitedSyncResult])
189 newly_joined_rooms = attr.ib(type=List[str])
190 newly_left_rooms = attr.ib(type=List[str])
191
192
193 @attr.s(slots=True, frozen=True)
194 class SyncResult:
195 """
196 Attributes:
197 next_batch: Token for the next sync
198 presence: List of presence events for the user.
199 account_data: List of account_data events for the user.
200 joined: JoinedSyncResult for each joined room.
201 invited: InvitedSyncResult for each invited room.
202 archived: ArchivedSyncResult for each archived room.
203 to_device: List of direct messages for the device.
204 device_lists: List of user_ids whose devices have changed
205 device_one_time_keys_count: Dict of algorithm to count for one time keys
206 for this device
207 groups: Group updates, if any
208 """
209
210 next_batch = attr.ib(type=StreamToken)
211 presence = attr.ib(type=List[JsonDict])
212 account_data = attr.ib(type=List[JsonDict])
213 joined = attr.ib(type=List[JoinedSyncResult])
214 invited = attr.ib(type=List[InvitedSyncResult])
215 archived = attr.ib(type=List[ArchivedSyncResult])
216 to_device = attr.ib(type=List[JsonDict])
217 device_lists = attr.ib(type=DeviceLists)
218 device_one_time_keys_count = attr.ib(type=JsonDict)
219 groups = attr.ib(type=Optional[GroupsSyncResult])
220
221 def __nonzero__(self) -> bool:
201222 """Make the result appear empty if there are no updates. This is used
202223 to tell if the notifier needs to wait for more events when polling for
203224 events.
239260 )
240261
241262 async def wait_for_sync_for_user(
242 self, sync_config, since_token=None, timeout=0, full_state=False
243 ):
263 self,
264 sync_config: SyncConfig,
265 since_token: Optional[StreamToken] = None,
266 timeout: int = 0,
267 full_state: bool = False,
268 ) -> SyncResult:
244269 """Get the sync for a client if we have new data for it now. Otherwise
245270 wait for new data to arrive on the server. If the timeout expires, then
246271 return an empty sync result.
247 Returns:
248 Deferred[SyncResult]
249272 """
250273 # If the user is not part of the mau group, then check that limits have
251274 # not been exceeded (if not part of the group by this point, almost certain
264287 return res
265288
266289 async def _wait_for_sync_for_user(
267 self, sync_config, since_token, timeout, full_state
268 ):
290 self,
291 sync_config: SyncConfig,
292 since_token: Optional[StreamToken] = None,
293 timeout: int = 0,
294 full_state: bool = False,
295 ) -> SyncResult:
269296 if since_token is None:
270297 sync_type = "initial_sync"
271298 elif full_state:
304331
305332 return result
306333
307 def current_sync_for_user(self, sync_config, since_token=None, full_state=False):
334 async def current_sync_for_user(
335 self,
336 sync_config: SyncConfig,
337 since_token: Optional[StreamToken] = None,
338 full_state: bool = False,
339 ) -> SyncResult:
308340 """Get the sync for client needed to match what the server has now.
309 Returns:
310 A Deferred SyncResult.
311 """
312 return self.generate_sync_result(sync_config, since_token, full_state)
313
314 async def push_rules_for_user(self, user):
341 """
342 return await self.generate_sync_result(sync_config, since_token, full_state)
343
344 async def push_rules_for_user(self, user: UserID) -> JsonDict:
315345 user_id = user.to_string()
316346 rules = await self.store.get_push_rules_for_user(user_id)
317347 rules = format_push_rules_for_user(user, rules)
318348 return rules
319349
320 async def ephemeral_by_room(self, sync_result_builder, now_token, since_token=None):
350 async def ephemeral_by_room(
351 self,
352 sync_result_builder: "SyncResultBuilder",
353 now_token: StreamToken,
354 since_token: Optional[StreamToken] = None,
355 ) -> Tuple[StreamToken, Dict[str, List[JsonDict]]]:
321356 """Get the ephemeral events for each room the user is in
322357 Args:
323 sync_result_builder(SyncResultBuilder)
324 now_token (StreamToken): Where the server is currently up to.
325 since_token (StreamToken): Where the server was when the client
358 sync_result_builder
359 now_token: Where the server is currently up to.
360 since_token: Where the server was when the client
326361 last synced.
327362 Returns:
328363 A tuple of the now StreamToken, updated to reflect the which typing
347382 )
348383 now_token = now_token.copy_and_replace("typing_key", typing_key)
349384
350 ephemeral_by_room = {}
385 ephemeral_by_room = {} # type: JsonDict
351386
352387 for event in typing:
353388 # we want to exclude the room_id from the event, but modifying the
379414
380415 async def _load_filtered_recents(
381416 self,
382 room_id,
383 sync_config,
384 now_token,
385 since_token=None,
386 recents=None,
387 newly_joined_room=False,
388 ):
417 room_id: str,
418 sync_config: SyncConfig,
419 now_token: StreamToken,
420 since_token: Optional[StreamToken] = None,
421 potential_recents: Optional[List[EventBase]] = None,
422 newly_joined_room: bool = False,
423 ) -> TimelineBatch:
389424 """
390425 Returns:
391426 a Deferred TimelineBatch
396431 sync_config.filter_collection.blocks_all_room_timeline()
397432 )
398433
399 if recents is None or newly_joined_room or timeline_limit < len(recents):
434 if (
435 potential_recents is None
436 or newly_joined_room
437 or timeline_limit < len(potential_recents)
438 ):
400439 limited = True
401440 else:
402441 limited = False
403442
404 if recents:
405 recents = sync_config.filter_collection.filter_room_timeline(recents)
443 if potential_recents:
444 recents = sync_config.filter_collection.filter_room_timeline(
445 potential_recents
446 )
406447
407448 # We check if there are any state events, if there are then we pass
408449 # all current state events to the filter_events function. This is to
409450 # ensure that we always include current state in the timeline
410 current_state_ids = frozenset()
451 current_state_ids = frozenset() # type: FrozenSet[str]
411452 if any(e.is_state() for e in recents):
412 current_state_ids = await self.state.get_current_state_ids(room_id)
413 current_state_ids = frozenset(itervalues(current_state_ids))
453 current_state_ids_map = await self.state.get_current_state_ids(
454 room_id
455 )
456 current_state_ids = frozenset(itervalues(current_state_ids_map))
414457
415458 recents = await filter_events_for_client(
416459 self.storage,
462505 # ensure that we always include current state in the timeline
463506 current_state_ids = frozenset()
464507 if any(e.is_state() for e in loaded_recents):
465 current_state_ids = await self.state.get_current_state_ids(room_id)
466 current_state_ids = frozenset(itervalues(current_state_ids))
508 current_state_ids_map = await self.state.get_current_state_ids(
509 room_id
510 )
511 current_state_ids = frozenset(itervalues(current_state_ids_map))
467512
468513 loaded_recents = await filter_events_for_client(
469514 self.storage,
492537 limited=limited or newly_joined_room,
493538 )
494539
495 async def get_state_after_event(self, event, state_filter=StateFilter.all()):
540 async def get_state_after_event(
541 self, event: EventBase, state_filter: StateFilter = StateFilter.all()
542 ) -> StateMap[str]:
496543 """
497544 Get the room state after the given event
498545
499546 Args:
500 event(synapse.events.EventBase): event of interest
501 state_filter (StateFilter): The state filter used to fetch state
502 from the database.
503
504 Returns:
505 A Deferred map from ((type, state_key)->Event)
547 event: event of interest
548 state_filter: The state filter used to fetch state from the database.
506549 """
507550 state_ids = await self.state_store.get_state_ids_for_event(
508551 event.event_id, state_filter=state_filter
513556 return state_ids
514557
515558 async def get_state_at(
516 self, room_id, stream_position, state_filter=StateFilter.all()
517 ):
559 self,
560 room_id: str,
561 stream_position: StreamToken,
562 state_filter: StateFilter = StateFilter.all(),
563 ) -> StateMap[str]:
518564 """ Get the room state at a particular stream position
519565
520566 Args:
521 room_id(str): room for which to get state
522 stream_position(StreamToken): point at which to get state
523 state_filter (StateFilter): The state filter used to fetch state
524 from the database.
525
526 Returns:
527 A Deferred map from ((type, state_key)->Event)
567 room_id: room for which to get state
568 stream_position: point at which to get state
569 state_filter: The state filter used to fetch state from the database.
528570 """
529571 # FIXME this claims to get the state at a stream position, but
530572 # get_recent_events_for_room operates by topo ordering. This therefore
545587 state = {}
546588 return state
547589
548 async def compute_summary(self, room_id, sync_config, batch, state, now_token):
590 async def compute_summary(
591 self,
592 room_id: str,
593 sync_config: SyncConfig,
594 batch: TimelineBatch,
595 state: StateMap[EventBase],
596 now_token: StreamToken,
597 ) -> Optional[JsonDict]:
549598 """ Works out a room summary block for this room, summarising the number
550599 of joined members in the room, and providing the 'hero' members if the
551600 room has no name so clients can consistently name rooms. Also adds
552601 state events to 'state' if needed to describe the heroes.
553602
554 Args:
555 room_id(str):
556 sync_config(synapse.handlers.sync.SyncConfig):
557 batch(synapse.handlers.sync.TimelineBatch): The timeline batch for
558 the room that will be sent to the user.
559 state(dict): dict of (type, state_key) -> Event as returned by
560 compute_state_delta
561 now_token(str): Token of the end of the current batch.
562
563 Returns:
564 A deferred dict describing the room summary
603 Args:
604 room_id
605 sync_config
606 batch: The timeline batch for the room that will be sent to the user.
607 state: State as returned by compute_state_delta
608 now_token: Token of the end of the current batch.
565609 """
566610
567611 # FIXME: we could/should get this from room_stats when matthew/stats lands
680724
681725 return summary
682726
683 def get_lazy_loaded_members_cache(self, cache_key):
727 def get_lazy_loaded_members_cache(self, cache_key: Tuple[str, str]) -> LruCache:
684728 cache = self.lazy_loaded_members_cache.get(cache_key)
685729 if cache is None:
686730 logger.debug("creating LruCache for %r", cache_key)
691735 return cache
692736
693737 async def compute_state_delta(
694 self, room_id, batch, sync_config, since_token, now_token, full_state
695 ):
738 self,
739 room_id: str,
740 batch: TimelineBatch,
741 sync_config: SyncConfig,
742 since_token: Optional[StreamToken],
743 now_token: StreamToken,
744 full_state: bool,
745 ) -> StateMap[EventBase]:
696746 """ Works out the difference in state between the start of the timeline
697747 and the previous sync.
698748
699749 Args:
700 room_id(str):
701 batch(synapse.handlers.sync.TimelineBatch): The timeline batch for
702 the room that will be sent to the user.
703 sync_config(synapse.handlers.sync.SyncConfig):
704 since_token(str|None): Token of the end of the previous batch. May
705 be None.
706 now_token(str): Token of the end of the current batch.
707 full_state(bool): Whether to force returning the full state.
708
709 Returns:
710 A deferred dict of (type, state_key) -> Event
750 room_id:
751 batch: The timeline batch for the room that will be sent to the user.
752 sync_config:
753 since_token: Token of the end of the previous batch. May be None.
754 now_token: Token of the end of the current batch.
755 full_state: Whether to force returning the full state.
711756 """
712757 # TODO(mjark) Check if the state events were received by the server
713758 # after the previous sync, since we need to include those state
799844 # about them).
800845 state_filter = StateFilter.all()
801846
847 # If this is an initial sync then full_state should be set, and
848 # that case is handled above. We assert here to ensure that this
849 # is indeed the case.
850 assert since_token is not None
802851 state_at_previous_sync = await self.get_state_at(
803852 room_id, stream_position=since_token, state_filter=state_filter
804853 )
873922 if t[0] == EventTypes.Member:
874923 cache.set(t[1], event_id)
875924
876 state = {}
925 state = {} # type: Dict[str, EventBase]
877926 if state_ids:
878927 state = await self.store.get_events(list(state_ids.values()))
879928
885934 if e.type != EventTypes.Aliases # until MSC2261 or alternative solution
886935 }
887936
888 async def unread_notifs_for_room_id(self, room_id, sync_config):
937 async def unread_notifs_for_room_id(
938 self, room_id: str, sync_config: SyncConfig
939 ) -> Optional[Dict[str, str]]:
889940 with Measure(self.clock, "unread_notifs_for_room_id"):
890941 last_unread_event_id = await self.store.get_last_receipt_event_id_for_user(
891942 user_id=sync_config.user.to_string(),
893944 receipt_type="m.read",
894945 )
895946
896 notifs = []
897947 if last_unread_event_id:
898948 notifs = await self.store.get_unread_event_push_actions_by_room_for_user(
899949 room_id, sync_config.user.to_string(), last_unread_event_id
905955 return None
906956
907957 async def generate_sync_result(
908 self, sync_config, since_token=None, full_state=False
909 ):
958 self,
959 sync_config: SyncConfig,
960 since_token: Optional[StreamToken] = None,
961 full_state: bool = False,
962 ) -> SyncResult:
910963 """Generates a sync result.
911
912 Args:
913 sync_config (SyncConfig)
914 since_token (StreamToken)
915 full_state (bool)
916
917 Returns:
918 Deferred(SyncResult)
919964 """
920965 # NB: The now_token gets changed by some of the generate_sync_* methods,
921966 # this is due to some of the underlying streams not supporting the ability
923968 # Always use the `now_token` in `SyncResultBuilder`
924969 now_token = await self.event_sources.get_current_token()
925970
926 logger.info(
971 logger.debug(
927972 "Calculating sync response for %r between %s and %s",
928973 sync_config.user,
929974 since_token,
9771022 )
9781023
9791024 device_id = sync_config.device_id
980 one_time_key_counts = {}
1025 one_time_key_counts = {} # type: JsonDict
9811026 if device_id:
9821027 one_time_key_counts = await self.store.count_e2e_one_time_keys(
9831028 user_id, device_id
10071052 )
10081053
10091054 @measure_func("_generate_sync_entry_for_groups")
1010 async def _generate_sync_entry_for_groups(self, sync_result_builder):
1055 async def _generate_sync_entry_for_groups(
1056 self, sync_result_builder: "SyncResultBuilder"
1057 ) -> None:
10111058 user_id = sync_result_builder.sync_config.user.to_string()
10121059 since_token = sync_result_builder.since_token
10131060 now_token = sync_result_builder.now_token
10521099 @measure_func("_generate_sync_entry_for_device_list")
10531100 async def _generate_sync_entry_for_device_list(
10541101 self,
1055 sync_result_builder,
1056 newly_joined_rooms,
1057 newly_joined_or_invited_users,
1058 newly_left_rooms,
1059 newly_left_users,
1060 ):
1102 sync_result_builder: "SyncResultBuilder",
1103 newly_joined_rooms: Set[str],
1104 newly_joined_or_invited_users: Set[str],
1105 newly_left_rooms: Set[str],
1106 newly_left_users: Set[str],
1107 ) -> DeviceLists:
10611108 """Generate the DeviceLists section of sync
10621109
10631110 Args:
1064 sync_result_builder (SyncResultBuilder)
1065 newly_joined_rooms (set[str]): Set of rooms user has joined since
1111 sync_result_builder
1112 newly_joined_rooms: Set of rooms user has joined since previous sync
1113 newly_joined_or_invited_users: Set of users that have joined or
1114 been invited to a room since previous sync.
1115 newly_left_rooms: Set of rooms user has left since previous sync
1116 newly_left_users: Set of users that have left a room we're in since
10661117 previous sync
1067 newly_joined_or_invited_users (set[str]): Set of users that have
1068 joined or been invited to a room since previous sync.
1069 newly_left_rooms (set[str]): Set of rooms user has left since
1070 previous sync
1071 newly_left_users (set[str]): Set of users that have left a room
1072 we're in since previous sync
1073
1074 Returns:
1075 Deferred[DeviceLists]
10761118 """
10771119
10781120 user_id = sync_result_builder.sync_config.user.to_string()
11331175 else:
11341176 return DeviceLists(changed=[], left=[])
11351177
1136 async def _generate_sync_entry_for_to_device(self, sync_result_builder):
1178 async def _generate_sync_entry_for_to_device(
1179 self, sync_result_builder: "SyncResultBuilder"
1180 ) -> None:
11371181 """Generates the portion of the sync response. Populates
11381182 `sync_result_builder` with the result.
1139
1140 Args:
1141 sync_result_builder(SyncResultBuilder)
1142
1143 Returns:
1144 Deferred(dict): A dictionary containing the per room account data.
11451183 """
11461184 user_id = sync_result_builder.sync_config.user.to_string()
11471185 device_id = sync_result_builder.sync_config.device_id
11791217 else:
11801218 sync_result_builder.to_device = []
11811219
1182 async def _generate_sync_entry_for_account_data(self, sync_result_builder):
1220 async def _generate_sync_entry_for_account_data(
1221 self, sync_result_builder: "SyncResultBuilder"
1222 ) -> Dict[str, Dict[str, JsonDict]]:
11831223 """Generates the account data portion of the sync response. Populates
11841224 `sync_result_builder` with the result.
11851225
11861226 Args:
1187 sync_result_builder(SyncResultBuilder)
1227 sync_result_builder
11881228
11891229 Returns:
1190 Deferred(dict): A dictionary containing the per room account data.
1230 A dictionary containing the per room account data.
11911231 """
11921232 sync_config = sync_result_builder.sync_config
11931233 user_id = sync_result_builder.sync_config.user.to_string()
12311271 return account_data_by_room
12321272
12331273 async def _generate_sync_entry_for_presence(
1234 self, sync_result_builder, newly_joined_rooms, newly_joined_or_invited_users
1235 ):
1274 self,
1275 sync_result_builder: "SyncResultBuilder",
1276 newly_joined_rooms: Set[str],
1277 newly_joined_or_invited_users: Set[str],
1278 ) -> None:
12361279 """Generates the presence portion of the sync response. Populates the
12371280 `sync_result_builder` with the result.
12381281
12391282 Args:
1240 sync_result_builder(SyncResultBuilder)
1241 newly_joined_rooms(list): List of rooms that the user has joined
1242 since the last sync (or empty if an initial sync)
1243 newly_joined_or_invited_users(list): List of users that have joined
1244 or been invited to rooms since the last sync (or empty if an initial
1245 sync)
1283 sync_result_builder
1284 newly_joined_rooms: Set of rooms that the user has joined since
1285 the last sync (or empty if an initial sync)
1286 newly_joined_or_invited_users: Set of users that have joined or
1287 been invited to rooms since the last sync (or empty if an
1288 initial sync)
12461289 """
12471290 now_token = sync_result_builder.now_token
12481291 sync_config = sync_result_builder.sync_config
12861329 sync_result_builder.presence = presence
12871330
12881331 async def _generate_sync_entry_for_rooms(
1289 self, sync_result_builder, account_data_by_room
1290 ):
1332 self,
1333 sync_result_builder: "SyncResultBuilder",
1334 account_data_by_room: Dict[str, Dict[str, JsonDict]],
1335 ) -> Tuple[Set[str], Set[str], Set[str], Set[str]]:
12911336 """Generates the rooms portion of the sync response. Populates the
12921337 `sync_result_builder` with the result.
12931338
12941339 Args:
1295 sync_result_builder(SyncResultBuilder)
1296 account_data_by_room(dict): Dictionary of per room account data
1340 sync_result_builder
1341 account_data_by_room: Dictionary of per room account data
12971342
12981343 Returns:
1299 Deferred(tuple): Returns a 4-tuple of
1344 Returns a 4-tuple of
13001345 `(newly_joined_rooms, newly_joined_or_invited_users,
13011346 newly_left_rooms, newly_left_users)`
13021347 """
13071352 )
13081353
13091354 if block_all_room_ephemeral:
1310 ephemeral_by_room = {}
1355 ephemeral_by_room = {} # type: Dict[str, List[JsonDict]]
13111356 else:
13121357 now_token, ephemeral_by_room = await self.ephemeral_by_room(
13131358 sync_result_builder,
13281373 )
13291374 if not tags_by_room:
13301375 logger.debug("no-oping sync")
1331 return [], [], [], []
1376 return set(), set(), set(), set()
13321377
13331378 ignored_account_data = await self.store.get_global_account_data_by_type_for_user(
13341379 "m.ignored_user_list", user_id=user_id
13401385 ignored_users = frozenset()
13411386
13421387 if since_token:
1343 res = await self._get_rooms_changed(sync_result_builder, ignored_users)
1344 room_entries, invited, newly_joined_rooms, newly_left_rooms = res
1345
1388 room_changes = await self._get_rooms_changed(
1389 sync_result_builder, ignored_users
1390 )
13461391 tags_by_room = await self.store.get_updated_tags(
13471392 user_id, since_token.account_data_key
13481393 )
13491394 else:
1350 res = await self._get_all_rooms(sync_result_builder, ignored_users)
1351 room_entries, invited, newly_joined_rooms = res
1352 newly_left_rooms = []
1395 room_changes = await self._get_all_rooms(sync_result_builder, ignored_users)
13531396
13541397 tags_by_room = await self.store.get_tags_for_user(user_id)
1398
1399 room_entries = room_changes.room_entries
1400 invited = room_changes.invited
1401 newly_joined_rooms = room_changes.newly_joined_rooms
1402 newly_left_rooms = room_changes.newly_left_rooms
13551403
13561404 def handle_room_entries(room_entry):
13571405 return self._generate_room_entry(
13921440 newly_left_users -= newly_joined_or_invited_users
13931441
13941442 return (
1395 newly_joined_rooms,
1443 set(newly_joined_rooms),
13961444 newly_joined_or_invited_users,
1397 newly_left_rooms,
1445 set(newly_left_rooms),
13981446 newly_left_users,
13991447 )
14001448
1401 async def _have_rooms_changed(self, sync_result_builder):
1449 async def _have_rooms_changed(
1450 self, sync_result_builder: "SyncResultBuilder"
1451 ) -> bool:
14021452 """Returns whether there may be any new events that should be sent down
14031453 the sync. Returns True if there are.
14041454 """
14221472 return True
14231473 return False
14241474
1425 async def _get_rooms_changed(self, sync_result_builder, ignored_users):
1475 async def _get_rooms_changed(
1476 self, sync_result_builder: "SyncResultBuilder", ignored_users: Set[str]
1477 ) -> _RoomChanges:
14261478 """Gets the the changes that have happened since the last sync.
1427
1428 Args:
1429 sync_result_builder(SyncResultBuilder)
1430 ignored_users(set(str)): Set of users ignored by user.
1431
1432 Returns:
1433 Deferred(tuple): Returns a tuple of the form:
1434 `(room_entries, invited_rooms, newly_joined_rooms, newly_left_rooms)`
1435
1436 where:
1437 room_entries is a list [RoomSyncResultBuilder]
1438 invited_rooms is a list [InvitedSyncResult]
1439 newly_joined_rooms is a list[str] of room ids
1440 newly_left_rooms is a list[str] of room ids
14411479 """
14421480 user_id = sync_result_builder.sync_config.user.to_string()
14431481 since_token = sync_result_builder.since_token
14511489 user_id, since_token.room_key, now_token.room_key
14521490 )
14531491
1454 mem_change_events_by_room_id = {}
1492 mem_change_events_by_room_id = {} # type: Dict[str, List[EventBase]]
14551493 for event in rooms_changed:
14561494 mem_change_events_by_room_id.setdefault(event.room_id, []).append(event)
14571495
14601498 room_entries = []
14611499 invited = []
14621500 for room_id, events in iteritems(mem_change_events_by_room_id):
1463 logger.info(
1501 logger.debug(
14641502 "Membership changes in %s: [%s]",
14651503 room_id,
14661504 ", ".join(("%s (%s)" % (e.event_id, e.membership) for e in events)),
15701608 # This is all screaming out for a refactor, as the logic here is
15711609 # subtle and the moving parts numerous.
15721610 if leave_event.internal_metadata.is_out_of_band_membership():
1573 batch_events = [leave_event]
1611 batch_events = [leave_event] # type: Optional[List[EventBase]]
15741612 else:
15751613 batch_events = None
15761614
16361674 )
16371675 room_entries.append(entry)
16381676
1639 return room_entries, invited, newly_joined_rooms, newly_left_rooms
1640
1641 async def _get_all_rooms(self, sync_result_builder, ignored_users):
1677 return _RoomChanges(room_entries, invited, newly_joined_rooms, newly_left_rooms)
1678
1679 async def _get_all_rooms(
1680 self, sync_result_builder: "SyncResultBuilder", ignored_users: Set[str]
1681 ) -> _RoomChanges:
16421682 """Returns entries for all rooms for the user.
16431683
16441684 Args:
1645 sync_result_builder(SyncResultBuilder)
1646 ignored_users(set(str)): Set of users ignored by user.
1647
1648 Returns:
1649 Deferred(tuple): Returns a tuple of the form:
1650 `([RoomSyncResultBuilder], [InvitedSyncResult], [])`
1685 sync_result_builder
1686 ignored_users: Set of users ignored by user.
1687
16511688 """
16521689
16531690 user_id = sync_result_builder.sync_config.user.to_string()
17091746 )
17101747 )
17111748
1712 return room_entries, invited, []
1749 return _RoomChanges(room_entries, invited, [], [])
17131750
17141751 async def _generate_room_entry(
17151752 self,
1716 sync_result_builder,
1717 ignored_users,
1718 room_builder,
1719 ephemeral,
1720 tags,
1721 account_data,
1722 always_include=False,
1753 sync_result_builder: "SyncResultBuilder",
1754 ignored_users: Set[str],
1755 room_builder: "RoomSyncResultBuilder",
1756 ephemeral: List[JsonDict],
1757 tags: Optional[List[JsonDict]],
1758 account_data: Dict[str, JsonDict],
1759 always_include: bool = False,
17231760 ):
17241761 """Populates the `joined` and `archived` section of `sync_result_builder`
17251762 based on the `room_builder`.
17261763
17271764 Args:
1728 sync_result_builder(SyncResultBuilder)
1729 ignored_users(set(str)): Set of users ignored by user.
1730 room_builder(RoomSyncResultBuilder)
1731 ephemeral(list): List of new ephemeral events for room
1732 tags(list): List of *all* tags for room, or None if there has been
1765 sync_result_builder
1766 ignored_users: Set of users ignored by user.
1767 room_builder
1768 ephemeral: List of new ephemeral events for room
1769 tags: List of *all* tags for room, or None if there has been
17331770 no change.
1734 account_data(list): List of new account data for room
1735 always_include(bool): Always include this room in the sync response,
1771 account_data: List of new account data for room
1772 always_include: Always include this room in the sync response,
17361773 even if empty.
17371774 """
17381775 newly_joined = room_builder.newly_joined
17581795 sync_config,
17591796 now_token=upto_token,
17601797 since_token=since_token,
1761 recents=events,
1798 potential_recents=events,
17621799 newly_joined_room=newly_joined,
17631800 )
17641801
18091846 room_id, batch, sync_config, since_token, now_token, full_state=full_state
18101847 )
18111848
1812 summary = {}
1849 summary = {} # type: Optional[JsonDict]
18131850
18141851 # we include a summary in room responses when we're lazy loading
18151852 # members (as the client otherwise doesn't have enough info to form
18331870 )
18341871
18351872 if room_builder.rtype == "joined":
1836 unread_notifications = {}
1873 unread_notifications = {} # type: Dict[str, str]
18371874 room_sync = JoinedSyncResult(
18381875 room_id=room_id,
18391876 timeline=batch,
18551892
18561893 if batch.limited and since_token:
18571894 user_id = sync_result_builder.sync_config.user.to_string()
1858 logger.info(
1895 logger.debug(
18591896 "Incremental gappy sync of %s for user %s with %d state events"
18601897 % (room_id, user_id, len(state))
18611898 )
18621899 elif room_builder.rtype == "archived":
1863 room_sync = ArchivedSyncResult(
1900 archived_room_sync = ArchivedSyncResult(
18641901 room_id=room_id,
18651902 timeline=batch,
18661903 state=state,
18671904 account_data=account_data_events,
18681905 )
1869 if room_sync or always_include:
1870 sync_result_builder.archived.append(room_sync)
1906 if archived_room_sync or always_include:
1907 sync_result_builder.archived.append(archived_room_sync)
18711908 else:
18721909 raise Exception("Unrecognized rtype: %r", room_builder.rtype)
18731910
1874 async def get_rooms_for_user_at(self, user_id, stream_ordering):
1911 async def get_rooms_for_user_at(
1912 self, user_id: str, stream_ordering: int
1913 ) -> FrozenSet[str]:
18751914 """Get set of joined rooms for a user at the given stream ordering.
18761915
18771916 The stream ordering *must* be recent, otherwise this may throw an
18791918 current token, which should be perfectly fine).
18801919
18811920 Args:
1882 user_id (str)
1883 stream_ordering (int)
1921 user_id
1922 stream_ordering
18841923
18851924 Returns:
1886 Deferred[frozenset[str]]: Set of room_ids the user is in at given
1887 stream_ordering.
1925 Set of room_ids the user is in at given stream_ordering.
18881926 """
18891927 joined_rooms = await self.store.get_rooms_for_user_with_stream_ordering(user_id)
18901928
19111949 if user_id in users_in_room:
19121950 joined_room_ids.add(room_id)
19131951
1914 joined_room_ids = frozenset(joined_room_ids)
1915 return joined_room_ids
1916
1917
1918 def _action_has_highlight(actions):
1952 return frozenset(joined_room_ids)
1953
1954
1955 def _action_has_highlight(actions: List[JsonDict]) -> bool:
19191956 for action in actions:
19201957 try:
19211958 if action.get("set_tweak", None) == "highlight":
19271964
19281965
19291966 def _calculate_state(
1930 timeline_contains, timeline_start, previous, current, lazy_load_members
1931 ):
1967 timeline_contains: StateMap[str],
1968 timeline_start: StateMap[str],
1969 previous: StateMap[str],
1970 current: StateMap[str],
1971 lazy_load_members: bool,
1972 ) -> StateMap[str]:
19321973 """Works out what state to include in a sync response.
19331974
19341975 Args:
1935 timeline_contains (dict): state in the timeline
1936 timeline_start (dict): state at the start of the timeline
1937 previous (dict): state at the end of the previous sync (or empty dict
1976 timeline_contains: state in the timeline
1977 timeline_start: state at the start of the timeline
1978 previous: state at the end of the previous sync (or empty dict
19381979 if this is an initial sync)
1939 current (dict): state at the end of the timeline
1940 lazy_load_members (bool): whether to return members from timeline_start
1980 current: state at the end of the timeline
1981 lazy_load_members: whether to return members from timeline_start
19411982 or not. assumes that timeline_start has already been filtered to
19421983 include only the members the client needs to know about.
1943
1944 Returns:
1945 dict
19461984 """
19471985 event_id_to_key = {
19481986 e: key
19792017 return {event_id_to_key[e]: e for e in state_ids}
19802018
19812019
1982 class SyncResultBuilder(object):
2020 @attr.s
2021 class SyncResultBuilder:
19832022 """Used to help build up a new SyncResult for a user
19842023
19852024 Attributes:
1986 sync_config (SyncConfig)
1987 full_state (bool)
1988 since_token (StreamToken)
1989 now_token (StreamToken)
1990 joined_room_ids (list[str])
2025 sync_config
2026 full_state: The full_state flag as specified by user
2027 since_token: The token supplied by user, or None.
2028 now_token: The token to sync up to.
2029 joined_room_ids: List of rooms the user is joined to
19912030
19922031 # The following mirror the fields in a sync response
19932032 presence (list)
19952034 joined (list[JoinedSyncResult])
19962035 invited (list[InvitedSyncResult])
19972036 archived (list[ArchivedSyncResult])
1998 device (list)
19992037 groups (GroupsSyncResult|None)
20002038 to_device (list)
20012039 """
20022040
2003 def __init__(
2004 self, sync_config, full_state, since_token, now_token, joined_room_ids
2005 ):
2006 """
2007 Args:
2008 sync_config (SyncConfig)
2009 full_state (bool): The full_state flag as specified by user
2010 since_token (StreamToken): The token supplied by user, or None.
2011 now_token (StreamToken): The token to sync up to.
2012 joined_room_ids (list[str]): List of rooms the user is joined to
2013 """
2014 self.sync_config = sync_config
2015 self.full_state = full_state
2016 self.since_token = since_token
2017 self.now_token = now_token
2018 self.joined_room_ids = joined_room_ids
2019
2020 self.presence = []
2021 self.account_data = []
2022 self.joined = []
2023 self.invited = []
2024 self.archived = []
2025 self.device = []
2026 self.groups = None
2027 self.to_device = []
2028
2029
2041 sync_config = attr.ib(type=SyncConfig)
2042 full_state = attr.ib(type=bool)
2043 since_token = attr.ib(type=Optional[StreamToken])
2044 now_token = attr.ib(type=StreamToken)
2045 joined_room_ids = attr.ib(type=FrozenSet[str])
2046
2047 presence = attr.ib(type=List[JsonDict], default=attr.Factory(list))
2048 account_data = attr.ib(type=List[JsonDict], default=attr.Factory(list))
2049 joined = attr.ib(type=List[JoinedSyncResult], default=attr.Factory(list))
2050 invited = attr.ib(type=List[InvitedSyncResult], default=attr.Factory(list))
2051 archived = attr.ib(type=List[ArchivedSyncResult], default=attr.Factory(list))
2052 groups = attr.ib(type=Optional[GroupsSyncResult], default=None)
2053 to_device = attr.ib(type=List[JsonDict], default=attr.Factory(list))
2054
2055
2056 @attr.s
20302057 class RoomSyncResultBuilder(object):
20312058 """Stores information needed to create either a `JoinedSyncResult` or
20322059 `ArchivedSyncResult`.
2060
2061 Attributes:
2062 room_id
2063 rtype: One of `"joined"` or `"archived"`
2064 events: List of events to include in the room (more events may be added
2065 when generating result).
2066 newly_joined: If the user has newly joined the room
2067 full_state: Whether the full state should be sent in result
2068 since_token: Earliest point to return events from, or None
2069 upto_token: Latest point to return events from.
20332070 """
20342071
2035 def __init__(
2036 self, room_id, rtype, events, newly_joined, full_state, since_token, upto_token
2037 ):
2038 """
2039 Args:
2040 room_id(str)
2041 rtype(str): One of `"joined"` or `"archived"`
2042 events(list[FrozenEvent]): List of events to include in the room
2043 (more events may be added when generating result).
2044 newly_joined(bool): If the user has newly joined the room
2045 full_state(bool): Whether the full state should be sent in result
2046 since_token(StreamToken): Earliest point to return events from, or None
2047 upto_token(StreamToken): Latest point to return events from.
2048 """
2049 self.room_id = room_id
2050 self.rtype = rtype
2051 self.events = events
2052 self.newly_joined = newly_joined
2053 self.full_state = full_state
2054 self.since_token = since_token
2055 self.upto_token = upto_token
2072 room_id = attr.ib(type=str)
2073 rtype = attr.ib(type=str)
2074 events = attr.ib(type=Optional[List[EventBase]])
2075 newly_joined = attr.ib(type=bool)
2076 full_state = attr.ib(type=bool)
2077 since_token = attr.ib(type=Optional[StreamToken])
2078 upto_token = attr.ib(type=StreamToken)
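The `SyncResultBuilder` and `RoomSyncResultBuilder` conversions above replace hand-written `__init__` methods with `attr.s` classes. Below is a minimal, self-contained sketch of that pattern with illustrative names only (not the real Synapse classes), assuming the `attrs` package that Synapse already depends on:

```python
from typing import List, Optional

import attr


@attr.s
class ExampleResultBuilder:
    # Required, typed fields replace explicit assignments in __init__.
    user_id = attr.ib(type=str)
    since_token = attr.ib(type=Optional[str])
    # Per-instance mutable default, equivalent to "self.joined = []" in __init__.
    joined = attr.ib(type=List[str], default=attr.Factory(list))


builder = ExampleResultBuilder(user_id="@alice:example.com", since_token=None)
builder.joined.append("!room:example.com")
print(builder)  # attrs also generates a useful repr
```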
124124 if target_user_id != auth_user_id:
125125 raise AuthError(400, "Cannot set another user's typing state")
126126
127 yield self.auth.check_joined_room(room_id, target_user_id)
127 yield self.auth.check_user_in_room(room_id, target_user_id)
128128
129129 logger.debug("%s has started typing in %s", target_user_id, room_id)
130130
154154 if target_user_id != auth_user_id:
155155 raise AuthError(400, "Cannot set another user's typing state")
156156
157 yield self.auth.check_joined_room(room_id, target_user_id)
157 yield self.auth.check_user_in_room(room_id, target_user_id)
158158
159159 logger.debug("%s has stopped typing in %s", target_user_id, room_id)
160160
5151 self.is_mine_id = hs.is_mine_id
5252 self.update_user_directory = hs.config.update_user_directory
5353 self.search_all_users = hs.config.user_directory_search_all_users
54 self.spam_checker = hs.get_spam_checker()
5455 # The current position in the current_state_delta stream
5556 self.pos = None
5657
6465 # we start populating the user directory
6566 self.clock.call_later(0, self.notify_new_event)
6667
67 def search_users(self, user_id, search_term, limit):
68 async def search_users(self, user_id, search_term, limit):
6869 """Searches for users in directory
6970
7071 Returns:
8182 ]
8283 }
8384 """
84 return self.store.search_user_dir(user_id, search_term, limit)
85 results = await self.store.search_user_dir(user_id, search_term, limit)
86
87 # Remove any spammy users from the results.
88 results["results"] = [
89 user
90 for user in results["results"]
91 if not self.spam_checker.check_username_for_spam(user)
92 ]
93
94 return results
8595
8696 def notify_new_event(self):
8797 """Called when there may be more deltas to process
148158 self.pos, room_max_stream_ordering
149159 )
150160
151 logger.info("Handling %d state deltas", len(deltas))
161 logger.debug("Handling %d state deltas", len(deltas))
152162 yield self._handle_deltas(deltas)
153163
154164 self.pos = max_pos
194204 room_id, self.server_name
195205 )
196206 if not is_in_room:
197 logger.info("Server left room: %r", room_id)
207 logger.debug("Server left room: %r", room_id)
198208 # Fetch all the users that we marked as being in user
199209 # directory due to being in the room and then check if
200210 # need to remove those users or not
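The spam-checker hook wired into `search_users` above calls `check_username_for_spam` on each user-directory entry and drops entries for which it returns True. The sketch below is a hypothetical `spam_checker` module illustrating that hook; the class name, config handling, and `parse_config` helper are assumptions, not part of this diff.

```python
class ExampleSpamChecker:
    """Hypothetical spam_checker module; only check_username_for_spam is relevant here."""

    def __init__(self, config):
        self._blocked_words = set(config.get("blocked_words", []))

    @staticmethod
    def parse_config(config):
        # Assumed passthrough; a real module might validate its config here.
        return config

    def check_username_for_spam(self, user_profile):
        # user_profile is one user-directory result, e.g.
        # {"user_id": ..., "display_name": ..., "avatar_url": ...}.
        # Returning True hides the entry from search results.
        display_name = (user_profile.get("display_name") or "").lower()
        return any(word in display_name for word in self._blocked_words)
```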
352352 if request.method == b"OPTIONS":
353353 return _options_handler, "options_request_handler", {}
354354
355 request_path = request.path.decode("ascii")
356
355357 # Loop through all the registered callbacks to check if the method
356358 # and path regex match
357359 for path_entry in self.path_regexs.get(request.method, []):
358 m = path_entry.pattern.match(request.path.decode("ascii"))
360 m = path_entry.pattern.match(request_path)
359361 if m:
360362 # We found a match!
361363 return path_entry.callback, path_entry.servlet_classname, m.groupdict()
224224 self.start_time, name=servlet_name, method=self.get_method()
225225 )
226226
227 self.site.access_logger.info(
227 self.site.access_logger.debug(
228228 "%s - %s - Received request: %s %s",
229229 self.getClientIP(),
230230 self.site.site_tag,
397397 Args:
398398 badge (int): number of unread messages
399399 """
400 logger.info("Sending updated badge count %d to %s", badge, self.name)
400 logger.debug("Sending updated badge count %d to %s", badge, self.name)
401401 d = {
402402 "notification": {
403403 "id": "",
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414
15 from synapse.storage import DataStore
15 from synapse.replication.slave.storage._base import BaseSlavedStore
16 from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
17 from synapse.storage.data_stores.main.group_server import GroupServerWorkerStore
1618 from synapse.storage.database import Database
1719 from synapse.util.caches.stream_change_cache import StreamChangeCache
1820
19 from ._base import BaseSlavedStore, __func__
20 from ._slaved_id_tracker import SlavedIdTracker
2121
22
23 class SlavedGroupServerStore(BaseSlavedStore):
22 class SlavedGroupServerStore(GroupServerWorkerStore, BaseSlavedStore):
2423 def __init__(self, database: Database, db_conn, hs):
2524 super(SlavedGroupServerStore, self).__init__(database, db_conn, hs)
2625
3433 self._group_updates_id_gen.get_current_token(),
3534 )
3635
37 get_groups_changes_for_user = __func__(DataStore.get_groups_changes_for_user)
38 get_group_stream_token = __func__(DataStore.get_group_stream_token)
39 get_all_groups_for_user = __func__(DataStore.get_all_groups_for_user)
36 def get_group_stream_token(self):
37 return self._group_updates_id_gen.get_current_token()
4038
4139 def stream_positions(self):
4240 result = super(SlavedGroupServerStore, self).stream_positions()
2020 from six.moves import http_client
2121
2222 from synapse.api.constants import UserTypes
23 from synapse.api.errors import Codes, SynapseError
23 from synapse.api.errors import Codes, NotFoundError, SynapseError
2424 from synapse.http.servlet import (
2525 RestServlet,
2626 assert_params_in_dict,
104104
105105
106106 class UserRestServletV2(RestServlet):
107 PATTERNS = (re.compile("^/_synapse/admin/v2/users/(?P<user_id>@[^/]+)$"),)
107 PATTERNS = (re.compile("^/_synapse/admin/v2/users/(?P<user_id>[^/]+)$"),)
108108
109109 """Get request to list user details.
110110 This needs the user to have administrator access in Synapse.
135135 self.hs = hs
136136 self.auth = hs.get_auth()
137137 self.admin_handler = hs.get_handlers().admin_handler
138 self.store = hs.get_datastore()
139 self.auth_handler = hs.get_auth_handler()
138140 self.profile_handler = hs.get_profile_handler()
139141 self.set_password_handler = hs.get_set_password_handler()
140142 self.deactivate_account_handler = hs.get_deactivate_account_handler()
149151
150152 ret = await self.admin_handler.get_user(target_user)
151153
154 if not ret:
155 raise NotFoundError("User not found")
156
152157 return 200, ret
153158
154159 async def on_PUT(self, request, user_id):
162167 raise SynapseError(400, "This endpoint can only be used with local users")
163168
164169 user = await self.admin_handler.get_user(target_user)
170 user_id = target_user.to_string()
165171
166172 if user: # modify user
167173 if "displayname" in body:
168174 await self.profile_handler.set_displayname(
169175 target_user, requester, body["displayname"], True
170176 )
177
178 if "threepids" in body:
179 # check for required parameters for each threepid
180 for threepid in body["threepids"]:
181 assert_params_in_dict(threepid, ["medium", "address"])
182
183 # remove old threepids from user
184 threepids = await self.store.user_get_threepids(user_id)
185 for threepid in threepids:
186 try:
187 await self.auth_handler.delete_threepid(
188 user_id, threepid["medium"], threepid["address"], None
189 )
190 except Exception:
191 logger.exception("Failed to remove threepids")
192 raise SynapseError(500, "Failed to remove threepids")
193
194 # add new threepids to user
195 current_time = self.hs.get_clock().time_msec()
196 for threepid in body["threepids"]:
197 await self.auth_handler.add_threepid(
198 user_id, threepid["medium"], threepid["address"], current_time
199 )
171200
172201 if "avatar_url" in body:
173202 await self.profile_handler.set_avatar_url(
220249 admin = body.get("admin", None)
221250 user_type = body.get("user_type", None)
222251 displayname = body.get("displayname", None)
252 threepids = body.get("threepids", None)
223253
224254 if user_type is not None and user_type not in UserTypes.ALL_USER_TYPES:
225255 raise SynapseError(400, "Invalid user type")
231261 default_display_name=displayname,
232262 user_type=user_type,
233263 )
264
265 if "threepids" in body:
266 # check for required parameters for each threepid
267 for threepid in body["threepids"]:
268 assert_params_in_dict(threepid, ["medium", "address"])
269
270 current_time = self.hs.get_clock().time_msec()
271 for threepid in body["threepids"]:
272 await self.auth_handler.add_threepid(
273 user_id, threepid["medium"], threepid["address"], current_time
274 )
275
234276 if "avatar_url" in body:
235277 await self.profile_handler.set_avatar_url(
236278 user_id, requester, body["avatar_url"], True
567609 {}
568610 """
569611
570 PATTERNS = (re.compile("^/_synapse/admin/v1/users/(?P<user_id>@[^/]*)/admin$"),)
612 PATTERNS = (re.compile("^/_synapse/admin/v1/users/(?P<user_id>[^/]*)/admin$"),)
571613
572614 def __init__(self, hs):
573615 self.hs = hs
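A hedged sketch of exercising the threepid support added to the v2 admin user endpoint above. The homeserver URL, admin token, and addresses are placeholders, and the `requests` package is an assumed dependency; the body shape follows the diff, where each entry in `threepids` must carry `medium` and `address`.

```python
import requests

HOMESERVER = "https://homeserver.example.com"  # placeholder
ADMIN_TOKEN = "MDAx..."  # placeholder admin access token

resp = requests.put(
    f"{HOMESERVER}/_synapse/admin/v2/users/@alice:example.com",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    json={
        "threepids": [
            {"medium": "email", "address": "alice@example.com"},
        ],
    },
)
resp.raise_for_status()
print(resp.json())
```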
1515
1616 """ This module contains REST servlets to do with rooms: /rooms/<paths> """
1717 import logging
18 import re
1819 from typing import List, Optional
1920
2021 from six.moves.urllib import parse as urlparse
4445 from synapse.streams.config import PaginationConfig
4546 from synapse.types import RoomAlias, RoomID, StreamToken, ThirdPartyInstanceID, UserID
4647
48 MYPY = False
49 if MYPY:
50 import synapse.server
51
4752 logger = logging.getLogger(__name__)
4853
4954
840845 )
841846
842847 return 200, {}
848
849
850 class RoomAliasListServlet(RestServlet):
851 PATTERNS = [
852 re.compile(
853 r"^/_matrix/client/unstable/org\.matrix\.msc2432"
854 r"/rooms/(?P<room_id>[^/]*)/aliases"
855 ),
856 ]
857
858 def __init__(self, hs: "synapse.server.HomeServer"):
859 super().__init__()
860 self.auth = hs.get_auth()
861 self.directory_handler = hs.get_handlers().directory_handler
862
863 async def on_GET(self, request, room_id):
864 requester = await self.auth.get_user_by_req(request)
865
866 alias_list = await self.directory_handler.get_aliases_for_room(
867 requester, room_id
868 )
869
870 return 200, {"aliases": alias_list}
843871
844872
845873 class SearchRestServlet(RestServlet):
930958 JoinedRoomsRestServlet(hs).register(http_server)
931959 RoomEventServlet(hs).register(http_server)
932960 RoomEventContextServlet(hs).register(http_server)
961 RoomAliasListServlet(hs).register(http_server)
933962
934963
935964 def register_deprecated_servlets(hs, http_server):
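A hedged example request against the new unstable MSC2432 aliases endpoint registered above (placeholder homeserver URL, access token, and room ID; assumes the `requests` package):

```python
from urllib.parse import quote

import requests

HOMESERVER = "https://homeserver.example.com"  # placeholder
ACCESS_TOKEN = "MDAx..."  # placeholder access token
ROOM_ID = "!someroom:example.com"  # placeholder room ID

resp = requests.get(
    f"{HOMESERVER}/_matrix/client/unstable/org.matrix.msc2432"
    f"/rooms/{quote(ROOM_ID)}/aliases",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
print(resp.json()["aliases"])  # list of alias strings for the room
```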
141141 ):
142142 requester = await self.auth.get_user_by_req(request, allow_guest=True)
143143
144 await self.auth.check_in_room_or_world_readable(
145 room_id, requester.user.to_string()
144 await self.auth.check_user_in_room_or_world_readable(
145 room_id, requester.user.to_string(), allow_departed_users=True
146146 )
147147
148148 # This gets the original event and checks that a) the event exists and
234234 ):
235235 requester = await self.auth.get_user_by_req(request, allow_guest=True)
236236
237 await self.auth.check_in_room_or_world_readable(
238 room_id, requester.user.to_string()
237 await self.auth.check_user_in_room_or_world_readable(
238 room_id, requester.user.to_string(), allow_departed_users=True,
239239 )
240240
241241 # This checks that a) the event exists and b) the user is allowed to
312312 async def on_GET(self, request, room_id, parent_id, relation_type, event_type, key):
313313 requester = await self.auth.get_user_by_req(request, allow_guest=True)
314314
315 await self.auth.check_in_room_or_world_readable(
316 room_id, requester.user.to_string()
315 await self.auth.check_user_in_room_or_world_readable(
316 room_id, requester.user.to_string(), allow_departed_users=True,
317317 )
318318
319319 # This checks that a) the event exists and b) the user is allowed to
5151 ],
5252 # as per MSC1497:
5353 "unstable_features": {
54 "m.lazy_load_members": True,
5554 # as per MSC2190, as amended by MSC2264
5655 # to be removed in r0.6.0
5756 "m.id_access_token": True,
7271 "org.matrix.label_based_filtering": True,
7372 # Implements support for cross signing as described in MSC1756
7473 "org.matrix.e2e_cross_signing": True,
74 # Implements additional endpoints as described in MSC2432
75 "org.matrix.msc2432": True,
7576 },
7677 },
7778 )
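Clients can discover that endpoint via the `org.matrix.msc2432` flag now advertised in `unstable_features` above; a minimal sketch (placeholder homeserver URL, assumes `requests`):

```python
import requests

HOMESERVER = "https://homeserver.example.com"  # placeholder

versions = requests.get(f"{HOMESERVER}/_matrix/client/versions").json()
# Only use the unstable aliases endpoint if the server advertises MSC2432 support.
if versions.get("unstable_features", {}).get("org.matrix.msc2432"):
    print("server supports the unstable /rooms/{roomId}/aliases endpoint")
```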
4949 from synapse.federation.sender import FederationSender
5050 from synapse.federation.transport.client import TransportLayerClient
5151 from synapse.groups.attestations import GroupAttestationSigning, GroupAttestionRenewer
52 from synapse.groups.groups_server import GroupsServerHandler
52 from synapse.groups.groups_server import GroupsServerHandler, GroupsServerWorkerHandler
5353 from synapse.handlers import Handlers
5454 from synapse.handlers.account_validity import AccountValidityHandler
5555 from synapse.handlers.acme import AcmeHandler
6161 from synapse.handlers.e2e_keys import E2eKeysHandler
6262 from synapse.handlers.e2e_room_keys import E2eRoomKeysHandler
6363 from synapse.handlers.events import EventHandler, EventStreamHandler
64 from synapse.handlers.groups_local import GroupsLocalHandler
64 from synapse.handlers.groups_local import GroupsLocalHandler, GroupsLocalWorkerHandler
6565 from synapse.handlers.initial_sync import InitialSyncHandler
6666 from synapse.handlers.message import EventCreationHandler, MessageHandler
6767 from synapse.handlers.pagination import PaginationHandler
459459 return UserDirectoryHandler(self)
460460
461461 def build_groups_local_handler(self):
462 return GroupsLocalHandler(self)
462 if self.config.worker_app:
463 return GroupsLocalWorkerHandler(self)
464 else:
465 return GroupsLocalHandler(self)
463466
464467 def build_groups_server_handler(self):
465 return GroupsServerHandler(self)
468 if self.config.worker_app:
469 return GroupsServerWorkerHandler(self)
470 else:
471 return GroupsServerHandler(self)
466472
467473 def build_groups_attestation_signing(self):
468474 return GroupAttestationSigning(self)
106106 self,
107107 ) -> synapse.replication.tcp.client.ReplicationClientHandler:
108108 pass
109 def is_mine_id(self, domain_id: str) -> bool:
110 pass
1717
1818 from synapse.storage.state import StateFilter
1919
20 MYPY = False
21 if MYPY:
22 import synapse.server
23
2024 logger = logging.getLogger(__name__)
2125
2226
2529 access to rooms and other relevant information.
2630 """
2731
28 def __init__(self, hs):
32 def __init__(self, hs: "synapse.server.HomeServer"):
2933 self.hs = hs
3034
3135 self._store = hs.get_datastore()
3236
3337 @defer.inlineCallbacks
34 def get_state_events_in_room(self, room_id, types):
38 def get_state_events_in_room(self, room_id: str, types: tuple) -> defer.Deferred:
3539 """Gets state events for the given room.
3640
3741 Args:
38 room_id (string): The room ID to get state events in.
39 types (tuple): The event type and state key (using None
42 room_id: The room ID to get state events in.
43 types: The event type and state key (using None
4044 to represent 'any') of the room state to acquire.
4145
4246 Returns:
2626 _DEFAULT_ROLE_ID = ""
2727
2828
29 class GroupServerStore(SQLBaseStore):
30 def set_group_join_policy(self, group_id, join_policy):
31 """Set the join policy of a group.
32
33 join_policy can be one of:
34 * "invite"
35 * "open"
36 """
37 return self.db.simple_update_one(
38 table="groups",
39 keyvalues={"group_id": group_id},
40 updatevalues={"join_policy": join_policy},
41 desc="set_group_join_policy",
42 )
43
29 class GroupServerWorkerStore(SQLBaseStore):
4430 def get_group(self, group_id):
4531 return self.db.simple_select_one(
4632 table="groups",
156142 "get_rooms_for_summary", _get_rooms_for_summary_txn
157143 )
158144
145 @defer.inlineCallbacks
146 def get_group_categories(self, group_id):
147 rows = yield self.db.simple_select_list(
148 table="group_room_categories",
149 keyvalues={"group_id": group_id},
150 retcols=("category_id", "is_public", "profile"),
151 desc="get_group_categories",
152 )
153
154 return {
155 row["category_id"]: {
156 "is_public": row["is_public"],
157 "profile": json.loads(row["profile"]),
158 }
159 for row in rows
160 }
161
162 @defer.inlineCallbacks
163 def get_group_category(self, group_id, category_id):
164 category = yield self.db.simple_select_one(
165 table="group_room_categories",
166 keyvalues={"group_id": group_id, "category_id": category_id},
167 retcols=("is_public", "profile"),
168 desc="get_group_category",
169 )
170
171 category["profile"] = json.loads(category["profile"])
172
173 return category
174
175 @defer.inlineCallbacks
176 def get_group_roles(self, group_id):
177 rows = yield self.db.simple_select_list(
178 table="group_roles",
179 keyvalues={"group_id": group_id},
180 retcols=("role_id", "is_public", "profile"),
181 desc="get_group_roles",
182 )
183
184 return {
185 row["role_id"]: {
186 "is_public": row["is_public"],
187 "profile": json.loads(row["profile"]),
188 }
189 for row in rows
190 }
191
192 @defer.inlineCallbacks
193 def get_group_role(self, group_id, role_id):
194 role = yield self.db.simple_select_one(
195 table="group_roles",
196 keyvalues={"group_id": group_id, "role_id": role_id},
197 retcols=("is_public", "profile"),
198 desc="get_group_role",
199 )
200
201 role["profile"] = json.loads(role["profile"])
202
203 return role
204
205 def get_local_groups_for_room(self, room_id):
206 """Get all of the local group that contain a given room
207 Args:
208 room_id (str): The ID of a room
209 Returns:
210 Deferred[list[str]]: A twisted.Deferred containing a list of group ids
211 containing this room
212 """
213 return self.db.simple_select_onecol(
214 table="group_rooms",
215 keyvalues={"room_id": room_id},
216 retcol="group_id",
217 desc="get_local_groups_for_room",
218 )
219
220 def get_users_for_summary_by_role(self, group_id, include_private=False):
221 """Get the users and roles that should be included in a summary request
222
223 Returns ([users], [roles])
224 """
225
226 def _get_users_for_summary_txn(txn):
227 keyvalues = {"group_id": group_id}
228 if not include_private:
229 keyvalues["is_public"] = True
230
231 sql = """
232 SELECT user_id, is_public, role_id, user_order
233 FROM group_summary_users
234 WHERE group_id = ?
235 """
236
237 if not include_private:
238 sql += " AND is_public = ?"
239 txn.execute(sql, (group_id, True))
240 else:
241 txn.execute(sql, (group_id,))
242
243 users = [
244 {
245 "user_id": row[0],
246 "is_public": row[1],
247 "role_id": row[2] if row[2] != _DEFAULT_ROLE_ID else None,
248 "order": row[3],
249 }
250 for row in txn
251 ]
252
253 sql = """
254 SELECT role_id, is_public, profile, role_order
255 FROM group_summary_roles
256 INNER JOIN group_roles USING (group_id, role_id)
257 WHERE group_id = ?
258 """
259
260 if not include_private:
261 sql += " AND is_public = ?"
262 txn.execute(sql, (group_id, True))
263 else:
264 txn.execute(sql, (group_id,))
265
266 roles = {
267 row[0]: {
268 "is_public": row[1],
269 "profile": json.loads(row[2]),
270 "order": row[3],
271 }
272 for row in txn
273 }
274
275 return users, roles
276
277 return self.db.runInteraction(
278 "get_users_for_summary_by_role", _get_users_for_summary_txn
279 )
280
281 def is_user_in_group(self, user_id, group_id):
282 return self.db.simple_select_one_onecol(
283 table="group_users",
284 keyvalues={"group_id": group_id, "user_id": user_id},
285 retcol="user_id",
286 allow_none=True,
287 desc="is_user_in_group",
288 ).addCallback(lambda r: bool(r))
289
290 def is_user_admin_in_group(self, group_id, user_id):
291 return self.db.simple_select_one_onecol(
292 table="group_users",
293 keyvalues={"group_id": group_id, "user_id": user_id},
294 retcol="is_admin",
295 allow_none=True,
296 desc="is_user_admin_in_group",
297 )
298
299 def is_user_invited_to_local_group(self, group_id, user_id):
300 """Has the group server invited a user?
301 """
302 return self.db.simple_select_one_onecol(
303 table="group_invites",
304 keyvalues={"group_id": group_id, "user_id": user_id},
305 retcol="user_id",
306 desc="is_user_invited_to_local_group",
307 allow_none=True,
308 )
309
310 def get_users_membership_info_in_group(self, group_id, user_id):
311 """Get a dict describing the membership of a user in a group.
312
313 Example if joined:
314
315 {
316 "membership": "join",
317 "is_public": True,
318 "is_privileged": False,
319 }
320
321 Returns an empty dict if the user is not joined/invited/etc
322 """
323
324 def _get_users_membership_in_group_txn(txn):
325 row = self.db.simple_select_one_txn(
326 txn,
327 table="group_users",
328 keyvalues={"group_id": group_id, "user_id": user_id},
329 retcols=("is_admin", "is_public"),
330 allow_none=True,
331 )
332
333 if row:
334 return {
335 "membership": "join",
336 "is_public": row["is_public"],
337 "is_privileged": row["is_admin"],
338 }
339
340 row = self.db.simple_select_one_onecol_txn(
341 txn,
342 table="group_invites",
343 keyvalues={"group_id": group_id, "user_id": user_id},
344 retcol="user_id",
345 allow_none=True,
346 )
347
348 if row:
349 return {"membership": "invite"}
350
351 return {}
352
353 return self.db.runInteraction(
354 "get_users_membership_info_in_group", _get_users_membership_in_group_txn
355 )
356
357 def get_publicised_groups_for_user(self, user_id):
358 """Get all groups a user is publicising
359 """
360 return self.db.simple_select_onecol(
361 table="local_group_membership",
362 keyvalues={"user_id": user_id, "membership": "join", "is_publicised": True},
363 retcol="group_id",
364 desc="get_publicised_groups_for_user",
365 )
366
367 def get_attestations_need_renewals(self, valid_until_ms):
368 """Get all attestations that need to be renewed until givent time
369 """
370
371 def _get_attestations_need_renewals_txn(txn):
372 sql = """
373 SELECT group_id, user_id FROM group_attestations_renewals
374 WHERE valid_until_ms <= ?
375 """
376 txn.execute(sql, (valid_until_ms,))
377 return self.db.cursor_to_dict(txn)
378
379 return self.db.runInteraction(
380 "get_attestations_need_renewals", _get_attestations_need_renewals_txn
381 )
382
383 @defer.inlineCallbacks
384 def get_remote_attestation(self, group_id, user_id):
385 """Get the attestation that proves the remote agrees that the user is
386 in the group.
387 """
388 row = yield self.db.simple_select_one(
389 table="group_attestations_remote",
390 keyvalues={"group_id": group_id, "user_id": user_id},
391 retcols=("valid_until_ms", "attestation_json"),
392 desc="get_remote_attestation",
393 allow_none=True,
394 )
395
396 now = int(self._clock.time_msec())
397 if row and now < row["valid_until_ms"]:
398 return json.loads(row["attestation_json"])
399
400 return None
401
402 def get_joined_groups(self, user_id):
403 return self.db.simple_select_onecol(
404 table="local_group_membership",
405 keyvalues={"user_id": user_id, "membership": "join"},
406 retcol="group_id",
407 desc="get_joined_groups",
408 )
409
410 def get_all_groups_for_user(self, user_id, now_token):
411 def _get_all_groups_for_user_txn(txn):
412 sql = """
413 SELECT group_id, type, membership, u.content
414 FROM local_group_updates AS u
415 INNER JOIN local_group_membership USING (group_id, user_id)
416 WHERE user_id = ? AND membership != 'leave'
417 AND stream_id <= ?
418 """
419 txn.execute(sql, (user_id, now_token))
420 return [
421 {
422 "group_id": row[0],
423 "type": row[1],
424 "membership": row[2],
425 "content": json.loads(row[3]),
426 }
427 for row in txn
428 ]
429
430 return self.db.runInteraction(
431 "get_all_groups_for_user", _get_all_groups_for_user_txn
432 )
433
434 def get_groups_changes_for_user(self, user_id, from_token, to_token):
435 from_token = int(from_token)
436 has_changed = self._group_updates_stream_cache.has_entity_changed(
437 user_id, from_token
438 )
439 if not has_changed:
440 return defer.succeed([])
441
442 def _get_groups_changes_for_user_txn(txn):
443 sql = """
444 SELECT group_id, membership, type, u.content
445 FROM local_group_updates AS u
446 INNER JOIN local_group_membership USING (group_id, user_id)
447 WHERE user_id = ? AND ? < stream_id AND stream_id <= ?
448 """
449 txn.execute(sql, (user_id, from_token, to_token))
450 return [
451 {
452 "group_id": group_id,
453 "membership": membership,
454 "type": gtype,
455 "content": json.loads(content_json),
456 }
457 for group_id, membership, gtype, content_json in txn
458 ]
459
460 return self.db.runInteraction(
461 "get_groups_changes_for_user", _get_groups_changes_for_user_txn
462 )
463
464 def get_all_groups_changes(self, from_token, to_token, limit):
465 from_token = int(from_token)
466 has_changed = self._group_updates_stream_cache.has_any_entity_changed(
467 from_token
468 )
469 if not has_changed:
470 return defer.succeed([])
471
472 def _get_all_groups_changes_txn(txn):
473 sql = """
474 SELECT stream_id, group_id, user_id, type, content
475 FROM local_group_updates
476 WHERE ? < stream_id AND stream_id <= ?
477 LIMIT ?
478 """
479 txn.execute(sql, (from_token, to_token, limit))
480 return [
481 (stream_id, group_id, user_id, gtype, json.loads(content_json))
482 for stream_id, group_id, user_id, gtype, content_json in txn
483 ]
484
485 return self.db.runInteraction(
486 "get_all_groups_changes", _get_all_groups_changes_txn
487 )
488
489
490 class GroupServerStore(GroupServerWorkerStore):
491 def set_group_join_policy(self, group_id, join_policy):
492 """Set the join policy of a group.
493
494 join_policy can be one of:
495 * "invite"
496 * "open"
497 """
498 return self.db.simple_update_one(
499 table="groups",
500 keyvalues={"group_id": group_id},
501 updatevalues={"join_policy": join_policy},
502 desc="set_group_join_policy",
503 )
504
159505 def add_room_to_summary(self, group_id, room_id, category_id, order, is_public):
160506 return self.db.runInteraction(
161507 "add_room_to_summary",
298644 desc="remove_room_from_summary",
299645 )
300646
301 @defer.inlineCallbacks
302 def get_group_categories(self, group_id):
303 rows = yield self.db.simple_select_list(
304 table="group_room_categories",
305 keyvalues={"group_id": group_id},
306 retcols=("category_id", "is_public", "profile"),
307 desc="get_group_categories",
308 )
309
310 return {
311 row["category_id"]: {
312 "is_public": row["is_public"],
313 "profile": json.loads(row["profile"]),
314 }
315 for row in rows
316 }
317
318 @defer.inlineCallbacks
319 def get_group_category(self, group_id, category_id):
320 category = yield self.db.simple_select_one(
321 table="group_room_categories",
322 keyvalues={"group_id": group_id, "category_id": category_id},
323 retcols=("is_public", "profile"),
324 desc="get_group_category",
325 )
326
327 category["profile"] = json.loads(category["profile"])
328
329 return category
330
331647 def upsert_group_category(self, group_id, category_id, profile, is_public):
332648 """Add/update room category for group
333649 """
358674 keyvalues={"group_id": group_id, "category_id": category_id},
359675 desc="remove_group_category",
360676 )
361
362 @defer.inlineCallbacks
363 def get_group_roles(self, group_id):
364 rows = yield self.db.simple_select_list(
365 table="group_roles",
366 keyvalues={"group_id": group_id},
367 retcols=("role_id", "is_public", "profile"),
368 desc="get_group_roles",
369 )
370
371 return {
372 row["role_id"]: {
373 "is_public": row["is_public"],
374 "profile": json.loads(row["profile"]),
375 }
376 for row in rows
377 }
378
379 @defer.inlineCallbacks
380 def get_group_role(self, group_id, role_id):
381 role = yield self.db.simple_select_one(
382 table="group_roles",
383 keyvalues={"group_id": group_id, "role_id": role_id},
384 retcols=("is_public", "profile"),
385 desc="get_group_role",
386 )
387
388 role["profile"] = json.loads(role["profile"])
389
390 return role
391677
392678 def upsert_group_role(self, group_id, role_id, profile, is_public):
393679 """Add/remove user role
554840 desc="remove_user_from_summary",
555841 )
556842
557 def get_local_groups_for_room(self, room_id):
558 """Get all of the local group that contain a given room
559 Args:
560 room_id (str): The ID of a room
561 Returns:
562 Deferred[list[str]]: A twisted.Deferred containing a list of group ids
563 containing this room
564 """
565 return self.db.simple_select_onecol(
566 table="group_rooms",
567 keyvalues={"room_id": room_id},
568 retcol="group_id",
569 desc="get_local_groups_for_room",
570 )
571
572 def get_users_for_summary_by_role(self, group_id, include_private=False):
573 """Get the users and roles that should be included in a summary request
574
575 Returns ([users], [roles])
576 """
577
578 def _get_users_for_summary_txn(txn):
579 keyvalues = {"group_id": group_id}
580 if not include_private:
581 keyvalues["is_public"] = True
582
583 sql = """
584 SELECT user_id, is_public, role_id, user_order
585 FROM group_summary_users
586 WHERE group_id = ?
587 """
588
589 if not include_private:
590 sql += " AND is_public = ?"
591 txn.execute(sql, (group_id, True))
592 else:
593 txn.execute(sql, (group_id,))
594
595 users = [
596 {
597 "user_id": row[0],
598 "is_public": row[1],
599 "role_id": row[2] if row[2] != _DEFAULT_ROLE_ID else None,
600 "order": row[3],
601 }
602 for row in txn
603 ]
604
605 sql = """
606 SELECT role_id, is_public, profile, role_order
607 FROM group_summary_roles
608 INNER JOIN group_roles USING (group_id, role_id)
609 WHERE group_id = ?
610 """
611
612 if not include_private:
613 sql += " AND is_public = ?"
614 txn.execute(sql, (group_id, True))
615 else:
616 txn.execute(sql, (group_id,))
617
618 roles = {
619 row[0]: {
620 "is_public": row[1],
621 "profile": json.loads(row[2]),
622 "order": row[3],
623 }
624 for row in txn
625 }
626
627 return users, roles
628
629 return self.db.runInteraction(
630 "get_users_for_summary_by_role", _get_users_for_summary_txn
631 )
632
633 def is_user_in_group(self, user_id, group_id):
634 return self.db.simple_select_one_onecol(
635 table="group_users",
636 keyvalues={"group_id": group_id, "user_id": user_id},
637 retcol="user_id",
638 allow_none=True,
639 desc="is_user_in_group",
640 ).addCallback(lambda r: bool(r))
641
642 def is_user_admin_in_group(self, group_id, user_id):
643 return self.db.simple_select_one_onecol(
644 table="group_users",
645 keyvalues={"group_id": group_id, "user_id": user_id},
646 retcol="is_admin",
647 allow_none=True,
648 desc="is_user_admin_in_group",
649 )
650
651843 def add_group_invite(self, group_id, user_id):
652844 """Record that the group server has invited a user
653845 """
655847 table="group_invites",
656848 values={"group_id": group_id, "user_id": user_id},
657849 desc="add_group_invite",
658 )
659
660 def is_user_invited_to_local_group(self, group_id, user_id):
661 """Has the group server invited a user?
662 """
663 return self.db.simple_select_one_onecol(
664 table="group_invites",
665 keyvalues={"group_id": group_id, "user_id": user_id},
666 retcol="user_id",
667 desc="is_user_invited_to_local_group",
668 allow_none=True,
669 )
670
671 def get_users_membership_info_in_group(self, group_id, user_id):
672 """Get a dict describing the membership of a user in a group.
673
674 Example if joined:
675
676 {
677 "membership": "join",
678 "is_public": True,
679 "is_privileged": False,
680 }
681
682         Returns an empty dict if the user is not joined, invited, etc.
683 """
684
685 def _get_users_membership_in_group_txn(txn):
686 row = self.db.simple_select_one_txn(
687 txn,
688 table="group_users",
689 keyvalues={"group_id": group_id, "user_id": user_id},
690 retcols=("is_admin", "is_public"),
691 allow_none=True,
692 )
693
694 if row:
695 return {
696 "membership": "join",
697 "is_public": row["is_public"],
698 "is_privileged": row["is_admin"],
699 }
700
701 row = self.db.simple_select_one_onecol_txn(
702 txn,
703 table="group_invites",
704 keyvalues={"group_id": group_id, "user_id": user_id},
705 retcol="user_id",
706 allow_none=True,
707 )
708
709 if row:
710 return {"membership": "invite"}
711
712 return {}
713
714 return self.db.runInteraction(
715 "get_users_membership_info_in_group", _get_users_membership_in_group_txn
716850 )
717851
718852 def add_user_to_group(
843977
844978 return self.db.runInteraction(
845979 "remove_room_from_group", _remove_room_from_group_txn
846 )
847
848 def get_publicised_groups_for_user(self, user_id):
849 """Get all groups a user is publicising
850 """
851 return self.db.simple_select_onecol(
852 table="local_group_membership",
853 keyvalues={"user_id": user_id, "membership": "join", "is_publicised": True},
854 retcol="group_id",
855 desc="get_publicised_groups_for_user",
856980 )
857981
858982 def update_group_publicity(self, group_id, user_id, publicise):
9991123 desc="update_group_profile",
10001124 )
10011125
1002 def get_attestations_need_renewals(self, valid_until_ms):
1003         """Get all attestations that need to be renewed by the given time
1004 """
1005
1006 def _get_attestations_need_renewals_txn(txn):
1007 sql = """
1008 SELECT group_id, user_id FROM group_attestations_renewals
1009 WHERE valid_until_ms <= ?
1010 """
1011 txn.execute(sql, (valid_until_ms,))
1012 return self.db.cursor_to_dict(txn)
1013
1014 return self.db.runInteraction(
1015 "get_attestations_need_renewals", _get_attestations_need_renewals_txn
1016 )
1017
10181126 def update_attestation_renewal(self, group_id, user_id, attestation):
10191127 """Update an attestation that we have renewed
10201128 """
10511159 table="group_attestations_renewals",
10521160 keyvalues={"group_id": group_id, "user_id": user_id},
10531161 desc="remove_attestation_renewal",
1054 )
1055
1056 @defer.inlineCallbacks
1057 def get_remote_attestation(self, group_id, user_id):
1058 """Get the attestation that proves the remote agrees that the user is
1059 in the group.
1060 """
1061 row = yield self.db.simple_select_one(
1062 table="group_attestations_remote",
1063 keyvalues={"group_id": group_id, "user_id": user_id},
1064 retcols=("valid_until_ms", "attestation_json"),
1065 desc="get_remote_attestation",
1066 allow_none=True,
1067 )
1068
1069 now = int(self._clock.time_msec())
1070 if row and now < row["valid_until_ms"]:
1071 return json.loads(row["attestation_json"])
1072
1073 return None
1074
1075 def get_joined_groups(self, user_id):
1076 return self.db.simple_select_onecol(
1077 table="local_group_membership",
1078 keyvalues={"user_id": user_id, "membership": "join"},
1079 retcol="group_id",
1080 desc="get_joined_groups",
1081 )
1082
1083 def get_all_groups_for_user(self, user_id, now_token):
1084 def _get_all_groups_for_user_txn(txn):
1085 sql = """
1086 SELECT group_id, type, membership, u.content
1087 FROM local_group_updates AS u
1088 INNER JOIN local_group_membership USING (group_id, user_id)
1089 WHERE user_id = ? AND membership != 'leave'
1090 AND stream_id <= ?
1091 """
1092 txn.execute(sql, (user_id, now_token))
1093 return [
1094 {
1095 "group_id": row[0],
1096 "type": row[1],
1097 "membership": row[2],
1098 "content": json.loads(row[3]),
1099 }
1100 for row in txn
1101 ]
1102
1103 return self.db.runInteraction(
1104 "get_all_groups_for_user", _get_all_groups_for_user_txn
1105 )
1106
1107 def get_groups_changes_for_user(self, user_id, from_token, to_token):
1108 from_token = int(from_token)
1109 has_changed = self._group_updates_stream_cache.has_entity_changed(
1110 user_id, from_token
1111 )
1112 if not has_changed:
1113 return defer.succeed([])
1114
1115 def _get_groups_changes_for_user_txn(txn):
1116 sql = """
1117 SELECT group_id, membership, type, u.content
1118 FROM local_group_updates AS u
1119 INNER JOIN local_group_membership USING (group_id, user_id)
1120 WHERE user_id = ? AND ? < stream_id AND stream_id <= ?
1121 """
1122 txn.execute(sql, (user_id, from_token, to_token))
1123 return [
1124 {
1125 "group_id": group_id,
1126 "membership": membership,
1127 "type": gtype,
1128 "content": json.loads(content_json),
1129 }
1130 for group_id, membership, gtype, content_json in txn
1131 ]
1132
1133 return self.db.runInteraction(
1134 "get_groups_changes_for_user", _get_groups_changes_for_user_txn
1135 )
1136
1137 def get_all_groups_changes(self, from_token, to_token, limit):
1138 from_token = int(from_token)
1139 has_changed = self._group_updates_stream_cache.has_any_entity_changed(
1140 from_token
1141 )
1142 if not has_changed:
1143 return defer.succeed([])
1144
1145 def _get_all_groups_changes_txn(txn):
1146 sql = """
1147 SELECT stream_id, group_id, user_id, type, content
1148 FROM local_group_updates
1149 WHERE ? < stream_id AND stream_id <= ?
1150 LIMIT ?
1151 """
1152 txn.execute(sql, (from_token, to_token, limit))
1153 return [
1154 (stream_id, group_id, user_id, gtype, json.loads(content_json))
1155 for stream_id, group_id, user_id, gtype, content_json in txn
1156 ]
1157
1158 return self.db.runInteraction(
1159 "get_all_groups_changes", _get_all_groups_changes_txn
11601162 )
11611163
11621164 def get_group_stream_token(self):
867867 desc="get_membership_from_event_ids",
868868 )
869869
870 async def is_local_host_in_room_ignoring_users(
871 self, room_id: str, ignore_users: Collection[str]
872 ) -> bool:
873 """Check if there are any local users, excluding those in the given
874 list, in the room.
875 """
876
877 clause, args = make_in_list_sql_clause(
878 self.database_engine, "user_id", ignore_users
879 )
880
881 sql = """
882 SELECT 1 FROM local_current_membership
883 WHERE
884 room_id = ? AND membership = ?
885 AND NOT (%s)
886 LIMIT 1
887 """ % (
888 clause,
889 )
890
891 def _is_local_host_in_room_ignoring_users_txn(txn):
892 txn.execute(sql, (room_id, Membership.JOIN, *args))
893
894 return bool(txn.fetchone())
895
896 return await self.db.runInteraction(
897 "is_local_host_in_room_ignoring_users",
898 _is_local_host_in_room_ignoring_users_txn,
899 )
900
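
A minimal sketch of how the `clause` / `args` pair used by `is_local_host_in_room_ignoring_users` is built and then bound, assuming a simplified stand-in for `make_in_list_sql_clause` (the real helper also handles engine-specific array parameters); the helper name and values below are illustrative only.

from typing import Iterable, List, Tuple

def naive_in_list_sql_clause(column: str, values: Iterable[str]) -> Tuple[str, List[str]]:
    # Build "column IN (?, ?, ...)" plus the matching argument list (assumes a
    # non-empty iterable; the real helper copes with more cases than this).
    args = list(values)
    placeholders = ", ".join("?" for _ in args)
    return "%s IN (%s)" % (column, placeholders), args

clause, args = naive_in_list_sql_clause(
    "user_id", ["@alice:example.org", "@bob:example.org"]
)
sql = """
    SELECT 1 FROM local_current_membership
    WHERE room_id = ? AND membership = ? AND NOT (%s)
    LIMIT 1
""" % (clause,)
# The query is then executed as txn.execute(sql, (room_id, "join", *args)).
print(sql.strip())
print(args)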
870901
871902 class RoomMemberBackgroundUpdateStore(SQLBaseStore):
872903 def __init__(self, database: Database, db_conn, hs):
1414
1515 -- Add background update to go and delete current state events for rooms the
1616 -- server is no longer in.
17 INSERT into background_updates (update_name, progress_json)
18 VALUES ('delete_old_current_state_events', '{}');
17 --
18 -- this relies on the 'membership' column of current_state_events, so make sure
19 -- that's populated first!
20 INSERT into background_updates (update_name, progress_json, depends_on)
21 VALUES ('delete_old_current_state_events', '{}', 'current_state_events_membership');
0 /* Copyright 2020 The Matrix.org Foundation C.I.C.
1 *
2 * Licensed under the Apache License, Version 2.0 (the "License");
3 * you may not use this file except in compliance with the License.
4 * You may obtain a copy of the License at
5 *
6 * http://www.apache.org/licenses/LICENSE-2.0
7 *
8 * Unless required by applicable law or agreed to in writing, software
9 * distributed under the License is distributed on an "AS IS" BASIS,
10 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 * See the License for the specific language governing permissions and
12 * limitations under the License.
13 */
14
15 -- when we first added the room_version column, it was populated via a background
16 -- update. We now need it to be populated before synapse starts, so we populate
17 -- any remaining rows with a NULL room version now. For servers which have completed
18 -- the background update, this will be pretty quick.
19
20 -- the following query will set room_version to NULL if no create event is found for
21 -- the room in current_state_events, and will set it to '1' if a create event with no
22 -- room_version is found.
23
24 UPDATE rooms SET room_version=(
25 SELECT COALESCE(json::json->'content'->>'room_version','1')
26 FROM current_state_events cse INNER JOIN event_json ej USING (event_id)
27 WHERE cse.room_id=rooms.room_id AND cse.type='m.room.create' AND cse.state_key=''
28 ) WHERE rooms.room_version IS NULL;
29
30 -- we still allow the background update to complete: it has the useful side-effect of
31 -- populating `rooms` with any missing rooms (based on the current_state_events table).
32
33 -- see also rooms_version_column_2.sql.sqlite which has a copy of the above query, using
34 -- sqlite syntax for the json extraction.
0 /* Copyright 2020 The Matrix.org Foundation C.I.C.
1 *
2 * Licensed under the Apache License, Version 2.0 (the "License");
3 * you may not use this file except in compliance with the License.
4 * You may obtain a copy of the License at
5 *
6 * http://www.apache.org/licenses/LICENSE-2.0
7 *
8 * Unless required by applicable law or agreed to in writing, software
9 * distributed under the License is distributed on an "AS IS" BASIS,
10 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 * See the License for the specific language governing permissions and
12 * limitations under the License.
13 */
14
15 -- see rooms_version_column_2.sql.postgres for details of what's going on here.
16
17 UPDATE rooms SET room_version=(
18 SELECT COALESCE(json_extract(ej.json, '$.content.room_version'), '1')
19 FROM current_state_events cse INNER JOIN event_json ej USING (event_id)
20 WHERE cse.room_id=rooms.room_id AND cse.type='m.room.create' AND cse.state_key=''
21 ) WHERE rooms.room_version IS NULL;
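
The two migration files above perform the same backfill in PostgreSQL and SQLite dialects. As a rough sanity check, the JSON extraction they rely on is equivalent to the following Python sketch (not part of the migration; the helper name is illustrative):

import json

def room_version_from_create_event(event_json: str) -> str:
    # Mirrors COALESCE(content.room_version, '1'): rooms created before room
    # versions existed carry no room_version key and default to "1".
    content = json.loads(event_json).get("content", {})
    return str(content.get("room_version", "1"))

print(room_version_from_create_event('{"type": "m.room.create", "content": {}}'))  # -> 1
print(room_version_from_create_event(
    '{"type": "m.room.create", "content": {"room_version": "5"}}'
))  # -> 5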
269269 )
270270
271271 return slice_list
272
273 def get_room_stats_state(self, room_id):
274 """
275 Returns the current room_stats_state for a room.
276
277 Args:
278 room_id (str): The ID of the room to return state for.
279
280 Returns (dict):
281 Dictionary containing these keys:
282 "name", "topic", "canonical_alias", "avatar", "join_rules",
283 "history_visibility"
284 """
285 return self.db.simple_select_one(
286 "room_stats_state",
287 {"room_id": room_id},
288 retcols=(
289 "name",
290 "topic",
291 "canonical_alias",
292 "avatar",
293 "join_rules",
294 "history_visibility",
295 ),
296 )
297272
298273 @cached()
299274 def get_earliest_token_for_stats(self, stats_type, id):
182182 )
183183 return 1
184184
185 logger.info(
185 logger.debug(
186186 "Processing the next %d rooms of %d remaining"
187187 % (len(rooms_to_work_on), progress["remaining"])
188188 )
307307 )
308308 return 1
309309
310 logger.info(
310 logger.debug(
311311 "Processing the next %d users of %d remaining"
312312 % (len(users_to_work_on), progress["remaining"])
313313 )
342342
343343 top_three_counters = self._txn_perf_counters.interval(duration, limit=3)
344344
345 perf_logger.info(
345 perf_logger.debug(
346346 "Total database time: %.3f%% {%s}", ratio * 100, top_three_counters
347347 )
348348
389389 state_delta_reuse_delta_counter.inc()
390390 break
391391
392 logger.info("Calculating state delta for room %s", room_id)
392 logger.debug("Calculating state delta for room %s", room_id)
393393 with Measure(
394394 self._clock, "persist_events.get_new_state_after_events"
395395 ):
726726
727727 # Check if any of the given events are a local join that appear in the
728728 # current state
729 events_to_check = [] # Event IDs that aren't an event we're persisting
729730 for (typ, state_key), event_id in delta.to_insert.items():
730731 if typ != EventTypes.Member or not self.is_mine_id(state_key):
731732 continue
735736 if event.membership == Membership.JOIN:
736737 return True
737738
738 # There's been a change of membership but we don't have a local join
739 # event in the new events, so we need to check the full state.
739 # The event is not in `ev_ctx_rm`, so we need to pull it out of
740 # the DB.
741 events_to_check.append(event_id)
742
743 # Check if any of the changes that we don't have events for are joins.
744 if events_to_check:
745 rows = await self.main_store.get_membership_from_event_ids(events_to_check)
746 is_still_joined = any(row["membership"] == Membership.JOIN for row in rows)
747 if is_still_joined:
748 return True
749
750 # None of the new state events are local joins, so we check the database
751 # to see if there are any other local users in the room. We ignore users
752         # whose state has changed as we've already checked their new state above.
753 users_to_ignore = [
754 state_key
755 for _, state_key in itertools.chain(delta.to_insert, delta.to_delete)
756 if self.is_mine_id(state_key)
757 ]
758
759 if await self.main_store.is_local_host_in_room_ignoring_users(
760 room_id, users_to_ignore
761 ):
762 return True
763
764 # The server will leave the room, so we go and find out which remote
765 # users will still be joined when we leave.
740766 if current_state is None:
741767 current_state = await self.main_store.get_current_state_ids(room_id)
742768 current_state = dict(current_state)
745771
746772 current_state.update(delta.to_insert)
747773
748 event_ids = [
749 event_id
750 for (typ, state_key,), event_id in current_state.items()
751 if typ == EventTypes.Member and self.is_mine_id(state_key)
752 ]
753
754 rows = await self.main_store.get_membership_from_event_ids(event_ids)
755 is_still_joined = any(row["membership"] == Membership.JOIN for row in rows)
756 if is_still_joined:
757 return True
758
759 # The server will leave the room, so we go and find out which remote
760 # users will still be joined when we leave.
761774 remote_event_ids = [
762775 event_id
763776 for (typ, state_key,), event_id in current_state.items()
7272 def errback(f):
7373 object.__setattr__(self, "_result", (False, f))
7474 while self._observers:
75 # This is a little bit of magic to correctly propagate stack
76 # traces when we `await` on one of the observer deferreds.
77 f.value.__failure__ = f
78
7579 try:
7680 # TODO: Handle errors here.
7781 self._observers.pop().errback(f)
143143 """
144144 result = self.get(key)
145145 if not result:
146 logger.info(
146 logger.debug(
147147 "[%s]: no cached result for [%s], calculating new one", self._name, key
148148 )
149149 d = run_in_background(callback, *args, **kwargs)
2424 from synapse.api.constants import EventContentFields
2525 from synapse.api.errors import SynapseError
2626 from synapse.api.filtering import Filter
27 from synapse.events import FrozenEvent
27 from synapse.events import make_event_from_dict
2828
2929 from tests import unittest
3030 from tests.utils import DeferredMockCallable, MockHttpResource, setup_test_homeserver
3737 kwargs["event_id"] = "fake_event_id"
3838 if "type" not in kwargs:
3939 kwargs["type"] = "fake_type"
40 return FrozenEvent(kwargs)
40 return make_event_from_dict(kwargs)
4141
4242
4343 class FilteringTestCase(unittest.TestCase):
1818
1919 from synapse.api.room_versions import RoomVersions
2020 from synapse.crypto.event_signing import add_hashes_and_signatures
21 from synapse.events import FrozenEvent
21 from synapse.events import make_event_from_dict
2222
2323 from tests import unittest
2424
5353 RoomVersions.V1, event_dict, HOSTNAME, self.signing_key
5454 )
5555
56 event = FrozenEvent(event_dict)
56 event = make_event_from_dict(event_dict)
5757
5858 self.assertTrue(hasattr(event, "hashes"))
5959 self.assertIn("sha256", event.hashes)
8787 RoomVersions.V1, event_dict, HOSTNAME, self.signing_key
8888 )
8989
90 event = FrozenEvent(event_dict)
90 event = make_event_from_dict(event_dict)
9191
9292 self.assertTrue(hasattr(event, "hashes"))
9393 self.assertIn("sha256", event.hashes)
1212 # See the License for the specific language governing permissions and
1313 # limitations under the License.
1414
15
16 from synapse.events import FrozenEvent
15 from synapse.events import make_event_from_dict
1716 from synapse.events.utils import (
1817 copy_power_levels_contents,
1918 prune_event,
2928 kwargs["event_id"] = "fake_event_id"
3029 if "type" not in kwargs:
3130 kwargs["type"] = "fake_type"
32 return FrozenEvent(kwargs)
31 return make_event_from_dict(kwargs)
3332
3433
3534 class PruneEventTestCase(unittest.TestCase):
3736 `matchdict` when it is redacted. """
3837
3938 def run_test(self, evdict, matchdict):
40 self.assertEquals(prune_event(FrozenEvent(evdict)).get_dict(), matchdict)
39 self.assertEquals(
40 prune_event(make_event_from_dict(evdict)).get_dict(), matchdict
41 )
4142
4243 def test_minimal(self):
4344 self.run_test(
1414 # limitations under the License.
1515 import logging
1616
17 from synapse.events import FrozenEvent
17 from synapse.events import make_event_from_dict
1818 from synapse.federation.federation_server import server_matches_acl_event
1919 from synapse.rest import admin
2020 from synapse.rest.client.v1 import login, room
104104
105105
106106 def _create_acl_event(content):
107 return FrozenEvent(
107 return make_event_from_dict(
108108 {
109109 "room_id": "!a:b",
110110 "event_id": "$a:b",
159159 res = self.get_success(self.handler.get_device(user1, "abc"))
160160 self.assertEqual(res["display_name"], "new display")
161161
162 def test_update_device_too_long_display_name(self):
163 """Update a device with a display name that is invalid (too long)."""
164 self._record_users()
165
166 # Request to update a device display name with a new value that is longer than allowed.
167 update = {
168 "display_name": "a"
169 * (synapse.handlers.device.MAX_DEVICE_DISPLAY_NAME_LEN + 1)
170 }
171 self.get_failure(
172 self.handler.update_device(user1, "abc", update),
173 synapse.api.errors.SynapseError,
174 )
175
176 # Ensure the display name was not updated.
177 res = self.get_success(self.handler.get_device(user1, "abc"))
178 self.assertEqual(res["display_name"], "display 2")
179
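
The new test drives the device handler directly; from a client's perspective the same 100-character limit applies when renaming a device through the standard client-server devices endpoint. A rough sketch, with placeholder server, token and device ID, and the exact error response left unspecified:

import requests

BASE_URL = "https://homeserver.example.com"   # placeholder
ACCESS_TOKEN = "syt_..."                      # placeholder access token
DEVICE_ID = "ABCDEFGH"                        # placeholder device ID

resp = requests.put(
    BASE_URL + "/_matrix/client/r0/devices/" + DEVICE_ID,
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    # One character over the new limit, mirroring the test above.
    json={"display_name": "a" * 101},
)
# Expected to be rejected with a 4xx error rather than accepted.
print(resp.status_code)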
162180 def test_update_unknown_device(self):
163181 update = {"display_name": "new_display"}
164182 res = self.handler.update_device("user_id", "unknown_device_id", update)
1717
1818 from twisted.internet import defer
1919
20 import synapse.api.errors
21 from synapse.api.constants import EventTypes
2022 from synapse.config.room_directory import RoomDirectoryConfig
21 from synapse.handlers.directory import DirectoryHandler
22 from synapse.rest.client.v1 import directory, room
23 from synapse.types import RoomAlias
23 from synapse.rest.client.v1 import directory, login, room
24 from synapse.types import RoomAlias, create_requester
2425
2526 from tests import unittest
26 from tests.utils import setup_test_homeserver
27
28
29 class DirectoryHandlers(object):
30 def __init__(self, hs):
31 self.directory_handler = DirectoryHandler(hs)
32
33
34 class DirectoryTestCase(unittest.TestCase):
27
28
29 class DirectoryTestCase(unittest.HomeserverTestCase):
3530 """ Tests the directory service. """
3631
37 @defer.inlineCallbacks
38 def setUp(self):
32 def make_homeserver(self, reactor, clock):
3933 self.mock_federation = Mock()
4034 self.mock_registry = Mock()
4135
4640
4741 self.mock_registry.register_query_handler = register_query_handler
4842
49 hs = yield setup_test_homeserver(
50 self.addCleanup,
43 hs = self.setup_test_homeserver(
5144 http_client=None,
5245 resource_for_federation=Mock(),
5346 federation_client=self.mock_federation,
5447 federation_registry=self.mock_registry,
5548 )
56 hs.handlers = DirectoryHandlers(hs)
5749
5850 self.handler = hs.get_handlers().directory_handler
5951
6355 self.your_room = RoomAlias.from_string("#your-room:test")
6456 self.remote_room = RoomAlias.from_string("#another:remote")
6557
66 @defer.inlineCallbacks
58 return hs
59
6760 def test_get_local_association(self):
68 yield self.store.create_room_alias_association(
69 self.my_room, "!8765qwer:test", ["test"]
70 )
71
72 result = yield self.handler.get_association(self.my_room)
61 self.get_success(
62 self.store.create_room_alias_association(
63 self.my_room, "!8765qwer:test", ["test"]
64 )
65 )
66
67 result = self.get_success(self.handler.get_association(self.my_room))
7368
7469 self.assertEquals({"room_id": "!8765qwer:test", "servers": ["test"]}, result)
7570
76 @defer.inlineCallbacks
7771 def test_get_remote_association(self):
7872 self.mock_federation.make_query.return_value = defer.succeed(
7973 {"room_id": "!8765qwer:test", "servers": ["test", "remote"]}
8074 )
8175
82 result = yield self.handler.get_association(self.remote_room)
76 result = self.get_success(self.handler.get_association(self.remote_room))
8377
8478 self.assertEquals(
8579 {"room_id": "!8765qwer:test", "servers": ["test", "remote"]}, result
9286 ignore_backoff=True,
9387 )
9488
95 @defer.inlineCallbacks
89 def test_delete_alias_not_allowed(self):
90 room_id = "!8765qwer:test"
91 self.get_success(
92 self.store.create_room_alias_association(self.my_room, room_id, ["test"])
93 )
94
95 self.get_failure(
96 self.handler.delete_association(
97 create_requester("@user:test"), self.my_room
98 ),
99 synapse.api.errors.AuthError,
100 )
101
102 def test_delete_alias(self):
103 room_id = "!8765qwer:test"
104 user_id = "@user:test"
105 self.get_success(
106 self.store.create_room_alias_association(
107 self.my_room, room_id, ["test"], user_id
108 )
109 )
110
111 result = self.get_success(
112 self.handler.delete_association(create_requester(user_id), self.my_room)
113 )
114 self.assertEquals(room_id, result)
115
116 # The alias should not be found.
117 self.get_failure(
118 self.handler.get_association(self.my_room), synapse.api.errors.SynapseError
119 )
120
96121 def test_incoming_fed_query(self):
97 yield self.store.create_room_alias_association(
98 self.your_room, "!8765asdf:test", ["test"]
99 )
100
101 response = yield self.query_handlers["directory"](
102 {"room_alias": "#your-room:test"}
122 self.get_success(
123 self.store.create_room_alias_association(
124 self.your_room, "!8765asdf:test", ["test"]
125 )
126 )
127
128 response = self.get_success(
129 self.handler.on_directory_query({"room_alias": "#your-room:test"})
103130 )
104131
105132 self.assertEquals({"room_id": "!8765asdf:test", "servers": ["test"]}, response)
133
134
135 class CanonicalAliasTestCase(unittest.HomeserverTestCase):
136     """Test modifications of the canonical alias when deleting aliases.
137 """
138
139 servlets = [
140 synapse.rest.admin.register_servlets,
141 login.register_servlets,
142 room.register_servlets,
143 directory.register_servlets,
144 ]
145
146 def prepare(self, reactor, clock, hs):
147 self.store = hs.get_datastore()
148 self.handler = hs.get_handlers().directory_handler
149 self.state_handler = hs.get_state_handler()
150
151 # Create user
152 self.admin_user = self.register_user("admin", "pass", admin=True)
153 self.admin_user_tok = self.login("admin", "pass")
154
155 # Create a test room
156 self.room_id = self.helper.create_room_as(
157 self.admin_user, tok=self.admin_user_tok
158 )
159
160 self.test_alias = "#test:test"
161 self.room_alias = RoomAlias.from_string(self.test_alias)
162
163 # Create a new alias to this room.
164 self.get_success(
165 self.store.create_room_alias_association(
166 self.room_alias, self.room_id, ["test"], self.admin_user
167 )
168 )
169
170 def test_remove_alias(self):
171 """Removing an alias that is the canonical alias should remove it there too."""
172 # Set this new alias as the canonical alias for this room
173 self.helper.send_state(
174 self.room_id,
175 "m.room.canonical_alias",
176 {"alias": self.test_alias, "alt_aliases": [self.test_alias]},
177 tok=self.admin_user_tok,
178 )
179
180 data = self.get_success(
181 self.state_handler.get_current_state(
182 self.room_id, EventTypes.CanonicalAlias, ""
183 )
184 )
185 self.assertEqual(data["content"]["alias"], self.test_alias)
186 self.assertEqual(data["content"]["alt_aliases"], [self.test_alias])
187
188 # Finally, delete the alias.
189 self.get_success(
190 self.handler.delete_association(
191 create_requester(self.admin_user), self.room_alias
192 )
193 )
194
195 data = self.get_success(
196 self.state_handler.get_current_state(
197 self.room_id, EventTypes.CanonicalAlias, ""
198 )
199 )
200 self.assertNotIn("alias", data["content"])
201 self.assertNotIn("alt_aliases", data["content"])
202
203 def test_remove_other_alias(self):
204         """Removing an alias listed in alt_aliases should remove it there too."""
205 # Create a second alias.
206 other_test_alias = "#test2:test"
207 other_room_alias = RoomAlias.from_string(other_test_alias)
208 self.get_success(
209 self.store.create_room_alias_association(
210 other_room_alias, self.room_id, ["test"], self.admin_user
211 )
212 )
213
214 # Set the alias as the canonical alias for this room.
215 self.helper.send_state(
216 self.room_id,
217 "m.room.canonical_alias",
218 {
219 "alias": self.test_alias,
220 "alt_aliases": [self.test_alias, other_test_alias],
221 },
222 tok=self.admin_user_tok,
223 )
224
225 data = self.get_success(
226 self.state_handler.get_current_state(
227 self.room_id, EventTypes.CanonicalAlias, ""
228 )
229 )
230 self.assertEqual(data["content"]["alias"], self.test_alias)
231 self.assertEqual(
232 data["content"]["alt_aliases"], [self.test_alias, other_test_alias]
233 )
234
235 # Delete the second alias.
236 self.get_success(
237 self.handler.delete_association(
238 create_requester(self.admin_user), other_room_alias
239 )
240 )
241
242 data = self.get_success(
243 self.state_handler.get_current_state(
244 self.room_id, EventTypes.CanonicalAlias, ""
245 )
246 )
247 self.assertEqual(data["content"]["alias"], self.test_alias)
248 self.assertEqual(data["content"]["alt_aliases"], [self.test_alias])
106249
107250
108251 class TestCreateAliasACL(unittest.HomeserverTestCase):
9898 user_id = self.register_user("kermit", "test")
9999 tok = self.login("kermit", "test")
100100 room_id = self.helper.create_room_as(room_creator=user_id, tok=tok)
101 room_version = self.get_success(self.store.get_room_version(room_id))
101102
102103 # pretend that another server has joined
103104 join_event = self._build_and_send_join_event(OTHER_SERVER, OTHER_USER, room_id)
119120 "auth_events": [],
120121 "origin_server_ts": self.clock.time_msec(),
121122 },
122 join_event.format_version,
123 room_version,
123124 )
124125
125126 with LoggingContext(request="send_rejected"):
148149 user_id = self.register_user("kermit", "test")
149150 tok = self.login("kermit", "test")
150151 room_id = self.helper.create_room_as(room_creator=user_id, tok=tok)
152 room_version = self.get_success(self.store.get_room_version(room_id))
151153
152154 # pretend that another server has joined
153155 join_event = self._build_and_send_join_event(OTHER_SERVER, OTHER_USER, room_id)
170172 "auth_events": [],
171173 "origin_server_ts": self.clock.time_msec(),
172174 },
173 join_event.format_version,
175 room_version,
174176 )
175177
176178 with LoggingContext(request="send_rejected"):
110110 retry_timings_res
111111 )
112112
113 self.datastore.get_device_updates_by_remote.return_value = (0, [])
113 self.datastore.get_device_updates_by_remote.return_value = defer.succeed(
114 (0, [])
115 )
114116
115117 def get_received_txn_response(*args):
116118 return defer.succeed(None)
119121
120122 self.room_members = []
121123
122 def check_joined_room(room_id, user_id):
124 def check_user_in_room(room_id, user_id):
123125 if user_id not in [u.to_string() for u in self.room_members]:
124126 raise AuthError(401, "User is not in the room")
125127
126 hs.get_auth().check_joined_room = check_joined_room
128 hs.get_auth().check_user_in_room = check_user_in_room
127129
128130 def get_joined_hosts_for_room(room_id):
129131 return set(member.domain for member in self.room_members)
143145 self.datastore.get_current_state_deltas.return_value = (0, None)
144146
145147 self.datastore.get_to_device_stream_token = lambda: 0
146 self.datastore.get_new_device_msgs_for_remote = lambda *args, **kargs: ([], 0)
148 self.datastore.get_new_device_msgs_for_remote = lambda *args, **kargs: defer.succeed(
149 ([], 0)
150 )
147151 self.datastore.delete_device_msgs_for_remote = lambda *args, **kargs: None
148152 self.datastore.set_received_txn_response = lambda *args, **kwargs: defer.succeed(
149153 None
146146 s = self.get_success(self.handler.search_users(u1, "user3", 10))
147147 self.assertEqual(len(s["results"]), 0)
148148
149 def test_spam_checker(self):
150 """
151         A user who fails the spam checks will not appear in search results.
152 """
153 u1 = self.register_user("user1", "pass")
154 u1_token = self.login(u1, "pass")
155 u2 = self.register_user("user2", "pass")
156 u2_token = self.login(u2, "pass")
157
158 # We do not add users to the directory until they join a room.
159 s = self.get_success(self.handler.search_users(u1, "user2", 10))
160 self.assertEqual(len(s["results"]), 0)
161
162 room = self.helper.create_room_as(u1, is_public=False, tok=u1_token)
163 self.helper.invite(room, src=u1, targ=u2, tok=u1_token)
164 self.helper.join(room, user=u2, tok=u2_token)
165
166 # Check we have populated the database correctly.
167 shares_private = self.get_users_who_share_private_rooms()
168 public_users = self.get_users_in_public_rooms()
169
170 self.assertEqual(
171 self._compress_shared(shares_private), set([(u1, u2, room), (u2, u1, room)])
172 )
173 self.assertEqual(public_users, [])
174
175 # We get one search result when searching for user2 by user1.
176 s = self.get_success(self.handler.search_users(u1, "user2", 10))
177 self.assertEqual(len(s["results"]), 1)
178
179 # Configure a spam checker that does not filter any users.
180 spam_checker = self.hs.get_spam_checker()
181
182 class AllowAll(object):
183 def check_username_for_spam(self, user_profile):
184 # Allow all users.
185 return False
186
187 spam_checker.spam_checker = AllowAll()
188
189 # The results do not change:
190 # We get one search result when searching for user2 by user1.
191 s = self.get_success(self.handler.search_users(u1, "user2", 10))
192 self.assertEqual(len(s["results"]), 1)
193
194 # Configure a spam checker that filters all users.
195 class BlockAll(object):
196 def check_username_for_spam(self, user_profile):
197 # All users are spammy.
198 return True
199
200 spam_checker.spam_checker = BlockAll()
201
202 # User1 now gets no search results for any of the other users.
203 s = self.get_success(self.handler.search_users(u1, "user2", 10))
204 self.assertEqual(len(s["results"]), 0)
205
206 def test_legacy_spam_checker(self):
207 """
208 A spam checker without the expected method should be ignored.
209 """
210 u1 = self.register_user("user1", "pass")
211 u1_token = self.login(u1, "pass")
212 u2 = self.register_user("user2", "pass")
213 u2_token = self.login(u2, "pass")
214
215 # We do not add users to the directory until they join a room.
216 s = self.get_success(self.handler.search_users(u1, "user2", 10))
217 self.assertEqual(len(s["results"]), 0)
218
219 room = self.helper.create_room_as(u1, is_public=False, tok=u1_token)
220 self.helper.invite(room, src=u1, targ=u2, tok=u1_token)
221 self.helper.join(room, user=u2, tok=u2_token)
222
223 # Check we have populated the database correctly.
224 shares_private = self.get_users_who_share_private_rooms()
225 public_users = self.get_users_in_public_rooms()
226
227 self.assertEqual(
228 self._compress_shared(shares_private), set([(u1, u2, room), (u2, u1, room)])
229 )
230 self.assertEqual(public_users, [])
231
232 # Configure a spam checker.
233 spam_checker = self.hs.get_spam_checker()
234 # The spam checker doesn't need any methods, so create a bare object.
235 spam_checker.spam_checker = object()
236
237 # We get one search result when searching for user2 by user1.
238 s = self.get_success(self.handler.search_users(u1, "user2", 10))
239 self.assertEqual(len(s["results"]), 1)
240
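
The AllowAll / BlockAll objects above stand in for a real spam checker module. A standalone module implementing the new user-directory hook might look roughly like the sketch below; only the check_username_for_spam(user_profile) method name comes from the tests, while the class name, constructor signature and config keys are illustrative assumptions about the spam_checker module interface:

class DirectorySpamChecker(object):
    # Hypothetical module, e.g. loaded via the `spam_checker` section of homeserver.yaml.
    def __init__(self, config):
        self._blocked_prefixes = config.get("blocked_prefixes", [])

    def check_username_for_spam(self, user_profile):
        # user_profile is the directory entry being considered (assumed here to
        # include a "user_id" field); returning True hides it from search results.
        user_id = user_profile.get("user_id", "")
        return any(
            user_id.startswith("@" + prefix) for prefix in self._blocked_prefixes
        )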
149241 def _compress_shared(self, shared):
150242 """
151243 Compress a list of users who share rooms dicts to a list of tuples.
1414
1515 from canonicaljson import encode_canonical_json
1616
17 from synapse.events import FrozenEvent, _EventInternalMetadata
17 from synapse.events import FrozenEvent, _EventInternalMetadata, make_event_from_dict
1818 from synapse.events.snapshot import EventContext
1919 from synapse.handlers.room import RoomEventSource
2020 from synapse.replication.slave.storage.events import SlavedEventStore
8989 msg_dict["content"] = {}
9090 msg_dict["unsigned"]["redacted_by"] = redaction.event_id
9191 msg_dict["unsigned"]["redacted_because"] = redaction
92 redacted = FrozenEvent(msg_dict, msg.internal_metadata.get_dict())
92 redacted = make_event_from_dict(
93 msg_dict, internal_metadata_dict=msg.internal_metadata.get_dict()
94 )
9395 self.check("get_event", [msg.event_id], redacted)
9496
9597 def test_backfilled_redactions(self):
109111 msg_dict["content"] = {}
110112 msg_dict["unsigned"]["redacted_by"] = redaction.event_id
111113 msg_dict["unsigned"]["redacted_because"] = redaction
112 redacted = FrozenEvent(msg_dict, msg.internal_metadata.get_dict())
114 redacted = make_event_from_dict(
115 msg_dict, internal_metadata_dict=msg.internal_metadata.get_dict()
116 )
113117 self.check("get_event", [msg.event_id], redacted)
114118
115119 def test_invites(self):
344348 if redacts is not None:
345349 event_dict["redacts"] = redacts
346350
347 event = FrozenEvent(event_dict, internal_metadata_dict=internal)
351 event = make_event_from_dict(event_dict, internal_metadata_dict=internal)
348352
349353 self.event_id += 1
350354
400400 self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
401401 self.assertEqual("You are not a server admin", channel.json_body["error"])
402402
403 def test_user_does_not_exist(self):
404 """
405 Tests that a lookup for a user that does not exist returns a 404
406 """
407 self.hs.config.registration_shared_secret = None
408
409 request, channel = self.make_request(
410 "GET",
411 "/_synapse/admin/v2/users/@unknown_person:test",
412 access_token=self.admin_user_tok,
413 )
414 self.render(request)
415
416 self.assertEqual(404, channel.code, msg=channel.json_body)
417 self.assertEqual("M_NOT_FOUND", channel.json_body["errcode"])
418
403419 def test_requester_is_admin(self):
404420 """
405421 If the user is a server admin, a new user is created.
406422 """
407423 self.hs.config.registration_shared_secret = None
408424
409 body = json.dumps({"password": "abc123", "admin": True})
425 body = json.dumps(
426 {
427 "password": "abc123",
428 "admin": True,
429 "threepids": [{"medium": "email", "address": "bob@bob.bob"}],
430 }
431 )
410432
411433 # Create user
412434 request, channel = self.make_request(
420442 self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
421443 self.assertEqual("@bob:test", channel.json_body["name"])
422444 self.assertEqual("bob", channel.json_body["displayname"])
445 self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
446 self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"])
423447
424448 # Get user
425449 request, channel = self.make_request(
448472 self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
449473
450474 # Modify user
451 body = json.dumps({"displayname": "foobar", "deactivated": True})
475 body = json.dumps(
476 {
477 "displayname": "foobar",
478 "deactivated": True,
479 "threepids": [{"medium": "email", "address": "bob2@bob.bob"}],
480 }
481 )
452482
453483 request, channel = self.make_request(
454484 "PUT",
462492 self.assertEqual("@bob:test", channel.json_body["name"])
463493 self.assertEqual("foobar", channel.json_body["displayname"])
464494 self.assertEqual(True, channel.json_body["deactivated"])
495         # the user is deactivated, so the threepid will be deleted
465496
466497 # Get user
467498 request, channel = self.make_request(
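
The tests above exercise the v2 admin users endpoint (including the new threepids support) through Synapse's in-memory test channel. Against a running homeserver the same create-or-modify call can be made over HTTP, roughly as below; the server URL, token and user details are placeholders:

import requests

BASE_URL = "https://homeserver.example.com"   # placeholder
ADMIN_TOKEN = "syt_..."                       # access token of a server admin

resp = requests.put(
    BASE_URL + "/_synapse/admin/v2/users/@bob:example.com",
    headers={"Authorization": "Bearer " + ADMIN_TOKEN},
    json={
        "password": "abc123",
        "admin": False,
        "threepids": [{"medium": "email", "address": "bob@bob.bob"}],
    },
)
resp.raise_for_status()
print(resp.json())  # includes "name", "displayname", "threepids", ...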
2727 import synapse.rest.admin
2828 from synapse.api.constants import EventContentFields, EventTypes, Membership
2929 from synapse.handlers.pagination import PurgeStatus
30 from synapse.rest.client.v1 import login, profile, room
30 from synapse.rest.client.v1 import directory, login, profile, room
3131 from synapse.rest.client.v2_alpha import account
32 from synapse.types import JsonDict, RoomAlias
3233 from synapse.util.stringutils import random_string
3334
3435 from tests import unittest
16111612 def prepare(self, reactor, clock, homeserver):
16121613 self.user_id = self.register_user("user", "password")
16131614 self.tok = self.login("user", "password")
1614 self.room_id = self.helper.create_room_as(self.user_id, tok=self.tok)
1615 self.room_id = self.helper.create_room_as(
1616 self.user_id, tok=self.tok, is_public=False
1617 )
16151618
16161619 self.other_user_id = self.register_user("user2", "password")
16171620 self.other_tok = self.login("user2", "password")
17231726 self.assertEqual(len(events_after), 2, events_after)
17241727 self.assertDictEqual(events_after[0].get("content"), {}, events_after[0])
17251728 self.assertEqual(events_after[1].get("content"), {}, events_after[1])
1729
1730
1731 class RoomAliasListTestCase(unittest.HomeserverTestCase):
1732 servlets = [
1733 synapse.rest.admin.register_servlets_for_client_rest_resource,
1734 directory.register_servlets,
1735 login.register_servlets,
1736 room.register_servlets,
1737 ]
1738
1739 def prepare(self, reactor, clock, homeserver):
1740 self.room_owner = self.register_user("room_owner", "test")
1741 self.room_owner_tok = self.login("room_owner", "test")
1742
1743 self.room_id = self.helper.create_room_as(
1744 self.room_owner, tok=self.room_owner_tok
1745 )
1746
1747 def test_no_aliases(self):
1748 res = self._get_aliases(self.room_owner_tok)
1749 self.assertEqual(res["aliases"], [])
1750
1751 def test_not_in_room(self):
1752 self.register_user("user", "test")
1753 user_tok = self.login("user", "test")
1754 res = self._get_aliases(user_tok, expected_code=403)
1755 self.assertEqual(res["errcode"], "M_FORBIDDEN")
1756
1757 def test_admin_user(self):
1758 alias1 = self._random_alias()
1759 self._set_alias_via_directory(alias1)
1760
1761 self.register_user("user", "test", admin=True)
1762 user_tok = self.login("user", "test")
1763
1764 res = self._get_aliases(user_tok)
1765 self.assertEqual(res["aliases"], [alias1])
1766
1767 def test_with_aliases(self):
1768 alias1 = self._random_alias()
1769 alias2 = self._random_alias()
1770
1771 self._set_alias_via_directory(alias1)
1772 self._set_alias_via_directory(alias2)
1773
1774 res = self._get_aliases(self.room_owner_tok)
1775 self.assertEqual(set(res["aliases"]), {alias1, alias2})
1776
1777 def test_peekable_room(self):
1778 alias1 = self._random_alias()
1779 self._set_alias_via_directory(alias1)
1780
1781 self.helper.send_state(
1782 self.room_id,
1783 EventTypes.RoomHistoryVisibility,
1784 body={"history_visibility": "world_readable"},
1785 tok=self.room_owner_tok,
1786 )
1787
1788 self.register_user("user", "test")
1789 user_tok = self.login("user", "test")
1790
1791 res = self._get_aliases(user_tok)
1792 self.assertEqual(res["aliases"], [alias1])
1793
1794 def _get_aliases(self, access_token: str, expected_code: int = 200) -> JsonDict:
1795         """Calls the endpoint under test. Returns the JSON response object."""
1796 request, channel = self.make_request(
1797 "GET",
1798 "/_matrix/client/unstable/org.matrix.msc2432/rooms/%s/aliases"
1799 % (self.room_id,),
1800 access_token=access_token,
1801 )
1802 self.render(request)
1803 self.assertEqual(channel.code, expected_code, channel.result)
1804 res = channel.json_body
1805 self.assertIsInstance(res, dict)
1806 if expected_code == 200:
1807 self.assertIsInstance(res["aliases"], list)
1808 return res
1809
1810 def _random_alias(self) -> str:
1811 return RoomAlias(random_string(5), self.hs.hostname).to_string()
1812
1813 def _set_alias_via_directory(self, alias: str, expected_code: int = 200):
1814 url = "/_matrix/client/r0/directory/room/" + alias
1815 data = {"room_id": self.room_id}
1816 request_data = json.dumps(data)
1817
1818 request, channel = self.make_request(
1819 "PUT", url, request_data, access_token=self.room_owner_tok
1820 )
1821 self.render(request)
1822 self.assertEqual(channel.code, expected_code, channel.result)
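
RoomAliasListTestCase above covers the unstable MSC2432 aliases endpoint introduced in this release. Fetching a room's aliases from a live server looks roughly like this sketch (the URL comes from the test; server, token and room ID are placeholders):

from urllib.parse import quote

import requests

BASE_URL = "https://homeserver.example.com"   # placeholder
ACCESS_TOKEN = "syt_..."                      # placeholder access token
ROOM_ID = "!abcdefg:example.com"              # placeholder room ID

resp = requests.get(
    BASE_URL
    + "/_matrix/client/unstable/org.matrix.msc2432/rooms/%s/aliases" % quote(ROOM_ID),
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
resp.raise_for_status()
print(resp.json()["aliases"])  # e.g. ["#test:example.com"]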
2121 from synapse.api.constants import EventTypes, JoinRules, Membership
2222 from synapse.api.room_versions import RoomVersions
2323 from synapse.event_auth import auth_types_for_event
24 from synapse.events import FrozenEvent
24 from synapse.events import make_event_from_dict
2525 from synapse.state.v2 import lexicographical_topological_sort, resolve_events_with_store
2626 from synapse.types import EventID
2727
8888 if self.state_key is not None:
8989 event_dict["state_key"] = self.state_key
9090
91 return FrozenEvent(event_dict)
91 return make_event_from_dict(event_dict)
9292
9393
9494 # All graphs start with this set of events
237237 @defer.inlineCallbacks
238238 def build(self, prev_event_ids):
239239 built_event = yield self._base_builder.build(prev_event_ids)
240 built_event.event_id = self._event_id
241 built_event._event_dict["event_id"] = self._event_id
240
241 built_event._event_id = self._event_id
242 built_event._dict["event_id"] = self._event_id
243 assert built_event.event_id == self._event_id
244
242245 return built_event
243246
244247 @property
1717 from synapse import event_auth
1818 from synapse.api.errors import AuthError
1919 from synapse.api.room_versions import RoomVersions
20 from synapse.events import FrozenEvent
20 from synapse.events import make_event_from_dict
2121
2222
2323 class EventAuthTestCase(unittest.TestCase):
9393
9494
9595 def _create_event(user_id):
96 return FrozenEvent(
96 return make_event_from_dict(
9797 {
9898 "room_id": TEST_ROOM_ID,
9999 "event_id": _get_event_id(),
105105
106106
107107 def _join_event(user_id):
108 return FrozenEvent(
108 return make_event_from_dict(
109109 {
110110 "room_id": TEST_ROOM_ID,
111111 "event_id": _get_event_id(),
118118
119119
120120 def _power_levels_event(sender, content):
121 return FrozenEvent(
121 return make_event_from_dict(
122122 {
123123 "room_id": TEST_ROOM_ID,
124124 "event_id": _get_event_id(),
131131
132132
133133 def _random_state_event(sender):
134 return FrozenEvent(
134 return make_event_from_dict(
135135 {
136136 "room_id": TEST_ROOM_ID,
137137 "event_id": _get_event_id(),
11
22 from twisted.internet.defer import ensureDeferred, maybeDeferred, succeed
33
4 from synapse.events import FrozenEvent
4 from synapse.events import make_event_from_dict
55 from synapse.logging.context import LoggingContext
66 from synapse.types import Requester, UserID
77 from synapse.util import Clock
4242 )
4343 )[0]
4444
45 join_event = FrozenEvent(
45 join_event = make_event_from_dict(
4646 {
4747 "room_id": self.room_id,
4848 "sender": "@baduser:test.serv",
104104 )[0]
105105
106106 # Now lie about an event
107 lying_event = FrozenEvent(
107 lying_event = make_event_from_dict(
108108 {
109109 "room_id": self.room_id,
110110 "sender": "@baduser:test.serv",
1919 from synapse.api.auth import Auth
2020 from synapse.api.constants import EventTypes, Membership
2121 from synapse.api.room_versions import RoomVersions
22 from synapse.events import FrozenEvent
22 from synapse.events import make_event_from_dict
2323 from synapse.events.snapshot import EventContext
2424 from synapse.state import StateHandler, StateResolutionHandler
2525
6565
6666 d.update(kwargs)
6767
68 event = FrozenEvent(d)
68 event = make_event_from_dict(d)
6969
7070 return event
7171
2020 import inspect
2121 import logging
2222 import time
23 from typing import Optional, Tuple, Type, TypeVar, Union
2324
2425 from mock import Mock
2526
4142 from synapse.types import Requester, UserID, create_requester
4243 from synapse.util.ratelimitutils import FederationRateLimiter
4344
44 from tests.server import get_clock, make_request, render, setup_test_homeserver
45 from tests.server import (
46 FakeChannel,
47 get_clock,
48 make_request,
49 render,
50 setup_test_homeserver,
51 )
4552 from tests.test_utils.logging_setup import setup_logging
4653 from tests.utils import default_config, setupdb
4754
6875 setattr(target, name, new)
6976
7077 return _around
78
79
80 T = TypeVar("T")
7181
7282
7383 class TestCase(unittest.TestCase):
333343
334344 def make_request(
335345 self,
336 method,
337 path,
338 content=b"",
339 access_token=None,
340 request=SynapseRequest,
341 shorthand=True,
342 federation_auth_origin=None,
343 ):
346 method: Union[bytes, str],
347 path: Union[bytes, str],
348 content: Union[bytes, dict] = b"",
349 access_token: Optional[str] = None,
350 request: Type[T] = SynapseRequest,
351 shorthand: bool = True,
352 federation_auth_origin: str = None,
353 ) -> Tuple[T, FakeChannel]:
344354 """
345355 Create a SynapseRequest at the path using the method and containing the
346356 given content.
178178 commands = mypy \
179179 synapse/api \
180180 synapse/config/ \
181 synapse/events/spamcheck.py \
182 synapse/federation/sender \
181183 synapse/federation/transport \
184 synapse/handlers/sync.py \
182185 synapse/handlers/ui_auth \
183186 synapse/logging/ \
184187 synapse/module_api \